\begin{document}
\begin{center} \begin{Large}{\bf Reconstruction for multi-wave imaging in attenuating media with large damping coefficient} \end{Large}\\
\begin{large}{\bf Benjam\'in Palacios}\\ {\small Department of Mathematics, University of Washington, Seattle, WA, USA} \end{large}
\end{center} \renewcommand{\abstractname}{} \begin{abstract} \noindent{\sc Abstract.} In this article we study the reconstruction problem in TAT/PAT in attenuating media. Namely, we establish a reconstruction procedure for the initial condition of the damped wave equation via a Neumann series that works for arbitrarily large smooth attenuation coefficients, extending the result of Homan in \cite{Homan}. We also illustrate the theoretical result with some numerical experiments at the end of the paper. \end{abstract}
\noindent{\it Keywords:} Multiwave imaging; Neumann series; damped wave equation; geometric optics\\
\centerline{\sc 1. Introduction} {\it Thermoacoustic} and {\it photoacoustic tomography} are coupled-physics medical imaging techniques that consist in applying a harmless electromagnetic pulse to some target tissue. The pulse causes a slight increase in the local temperature, making the tissue expand and produce pressure waves, which are then recorded and used to reconstruct optical parameters of the region of interest. The goal of combining two different types of waves is to take advantage of the good contrast of the electromagnetic absorption parameters and the good resolution provided by the ultrasound data, finally obtaining a tomogram endowed with both desired features. It is then natural to split the problem into two steps: the first is the inversion of the ultrasound measurements to obtain some internal information related to the absorption of the electromagnetic radiation; the second, also called {\it quantitative TAT/PAT}, consists in the inversion of such internal data to reconstruct the absorption coefficient, which constitutes the image of the inside of the body. This paper concerns only the first step.
The classical setting of this problem assumes that waves propagate through free space, and many results follow this assumption \cite{2007Finch},\cite{2009Hristova},\cite{2009Kunyansky},\cite{TAT},\cite{TATbrain},\cite{Homan}. The validity of such a hypothesis depends on the devices and setting used to acquire the ultrasound data; it requires detectors whose perturbation of the waves can be neglected. This is not always the case, though \cite{2007BdryCondPAT}. This fact has motivated the current study of the case where the acoustic waves propagate inside an enclosure, that is, when one allows the waves to interact with some boundary or interface, as can be seen in \cite{2015StefanovYang}, \cite{2015AcostaMontalto}, \cite{Nguyen2015}, where the interactions are modeled by restricting the domain where waves propagate and by considering boundary conditions (e.g. Neumann, Robin). We believe that the analysis carried out in this article might be applied to the enclosure case to obtain reconstruction formulas when an interior damping coefficient is taken into account.
It is a fact that the attenuation of the ultrasound waves affects the quality of the reconstruction, see for example \cite{2006Patch}, and recently there has been a growing interest in incorporating and compensating for the effect of the damping in TAT/PAT \cite{2016AcostaMontalto},\cite{Homan},\cite{2012PATatt},\cite{2011Roitner}. It is also known that the attenuation in biological tissues is strongly frequency-dependent, and modeling such a relationship is not a trivial task. Many models have been proposed to represent the damping effect, most of them giving way to fractional derivatives and consequently to integro-differential operators (see \cite{2011fractionalwave} and \cite{2011Ammari} for a review of such models). We instead consider here a simpler, non-frequency-dependent model, namely the damped wave equation, motivated by the work done in \cite{Homan}.
We introduce a new Neumann series reconstruction formula that allows us to recover the source term in the attenuated TAT/PAT problem for the damped wave equation with more general attenuation coefficients. This problem was first addressed in \cite{Homan}, where the author used a sharp time reversal introduced originally by Stefanov and Uhlmann in \cite{TAT} and proved that, under the assumption that the attenuation coefficient is smooth and sufficiently small, the error of the back projection is a contraction.
Let $\Omega\subset\mathbb{R}^n$ be a bounded domain with smooth boundary and let $\Lambda_a:f\mapsto u|_{[0,T]\times\partial\Omega}$ denote the measurement operator where $u$ satisfies \begin{equation}\label{TAT_eq} \left\{\begin{array}{rcl} (\partial^2_t +a\partial_t- c^2\Delta)u &=& 0\; \text{ in }(0,T)\times\mathbb{R}^n,\\
u|_{t=0}&=&f,\\
\partial_tu|_{t=0}&=&-af. \end{array}\right. \end{equation} The particular form of the initial condition comes from the equivalence of system \eqref{TAT_eq} with the problem of finding $f$ in $$(\partial^2_t +a\partial_t- c^2\Delta)u = f(x)\delta'(t),\; \text{ in }\mathbb{R}\times\mathbb{R}^n,\quad u = 0\;\text{ for }t<0,$$ as explained in \cite{Homan}. In that article, the time reversal operator associated with the damped wave equation is given by $\hat{A}_a:h\mapsto v(0,\cdot)$, where $v$ is the solution of \begin{equation}\label{time_reversal} \left\{\begin{array}{rcl} (\partial^2_t +a\partial_t- c^2\Delta)v &=& 0\; \text{ in }(0,T)\times\Omega,\\
v|_{t=T}&=&\phi,\\
\partial_tv|_{t=T}&=&0,\\
v|_{(0,T)\times\partial\Omega}&=&h,\end{array}\right. \end{equation} with $\phi$ the harmonic extension of
$h|_{t=T}$ to $\Omega$.
Following this approach it is possible to obtain a reconstruction formula via Neumann series for small enough attenuations (see \cite[Theorem 2.3]{Homan}). The proof is based on the continuity of the error operator $\hat{K}_a = \text{Id}-\hat{A}_a\Lambda_a$ in terms of the attenuation coefficient: provided that $(\Omega,c^{-2}dx^2)$ is non-trapping, $T$ is larger than the supremum of the lengths of interior geodesics for the metric $c^{-2}dx^2$, and $\|a\|_\infty$ is sufficiently small, one has $\|\hat{K}_a\|<1$.
The novelty of this article is that we obtain a Neumann series reconstruction that allows arbitrarily large bounded smooth attenuations, as long as their support is contained in $\overline{\Omega}$. We use a different time reversal technique than in \cite{Homan}: we solve the damped wave equation backward in time with a damping coefficient given by $-a(x)$ (see \eqref{damped_time_reversal}). When such an IBVP is solved from $t=T$ to $t=0$, the sign of the damping term flips, producing an attenuation of the reversed wave. This allows us to better control the energy in the iterative process of the Neumann series. To prove that the error operator is a contraction we employ the microlocal approach developed in \cite{TATbrain}, that is, we obtain microlocal energy estimates of the high frequency part of the transmitted waves, which are used to show that after some time most of the initial energy lies outside the domain. The difficulty here is to treat the new term that the attenuation coefficient introduces in the energy. Another consequence of the damping is that, as in Homan's article, we also need the measurement time $T$ to be greater than the supremum of the lengths of the interior geodesics, which matches the time needed to get stability for the damped TAT/PAT problem, and which doubles the time needed in the undamped case.
\centerline{\sc 2. Preliminaries}
Given a domain $U$ and a scalar function $u(t,x)$, we define the local energy of $\text{\bf u} = [u,u_t]$ at time $t$ as
$$E_U(\text{\bf u}(t)) = \int_{U}(|\nabla_x u|^2 + c^{-2}|u_t|^2)dx.$$ We also define the extended energy functional to be
$$\mathcal{E}_U(\text{\bf u},\tau) =E_U(\text{\bf u}(\tau)) + 2\int_{[0,\tau]\times U}ac^{-2}|u_t(t)|^2dtdx.$$ If $\text{\bf u}$ is a solution of the damped wave equation in the whole space, it is well known that for any $U\subset \mathbb{R}^n $ such that $u$ vanishes for all time on its boundary, the former energy functional is non-increasing due to the attenuation coefficient $a\geq0$, while the latter is conserved. Indeed, the first statement follows from the usual computation of the energy, where we multiply equation \eqref{TAT_eq} by $2c^{-2}u_t$ and integrate over $U$; then, integrating by parts, we arrive at the energy inequality
$$\frac{d}{dt}E_U(\text{\bf u}(t)) = -2\int_{U}ac^{-2}|u_t(t)|^2dx\leq 0.$$ If in addition we integrate over the interval $(0,\tau)$, for any $\tau>0$, we get the extended energy conservation. \\
The energy space $\mathcal{H}(U)$ of initial conditions is defined to be the completion of $C^\infty_0(U)\times C^\infty_0(U)$ under the energy norm
$$\|\text{\bf f}\|^2_{\mathcal{H}(U)} = \int_{U}(|\nabla_x f_1|^2 + c^{-2}|f_2|^2)dx,$$ with $\text{\bf f} = [f_1,f_2]$.
Notice that $\mathcal{H}(U) = H_D(U)\oplus L^2(U)$, where we write $L^2(U)=L^2(U;c^{-2}dx)$ the $L^2$ space for the measure $c^{-2}dx$.
Recall that denoting $\text{\bf u} = [u,u_t]$, it is possible to write \eqref{TAT_eq} as the first-order system $$\text{\bf u}_t = \text{\bf P}_a\text{\bf u},\quad \text{\bf P}_a = \left(\begin{matrix} 0&I\\c^2\Delta& -a\end{matrix}\right),$$ where $\text{\bf P}_a$ generates a strongly continuous semigroup, and given initial conditions $\text{\bf f} = [f_1,f_2]\in \mathcal{H}(\Omega)$ the solution of the previous system takes the form $\text{\bf u} = e^{t\text{\bf P}_a}\text{\bf f}$.
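The two energy identities of this section are easy to observe numerically. The following is a minimal one-dimensional sketch (the grid, time step, damping profile, and initial pulse are all illustrative assumptions, not taken from the paper): it solves $u_{tt} + a(x)u_t = c^2 u_{xx}$ with zero Dirichlet data and checks that the local energy $E_U$ decays while the extended energy $E_U(\text{\bf u}(t)) + 2\int_0^t\int_U ac^{-2}|u_t|^2$ stays nearly constant.

```python
import numpy as np

# Toy 1D damped wave equation with zero Dirichlet data; all
# discretization choices below are assumptions for this demo only.
n, steps, c = 800, 4000, 1.0
x = np.linspace(0.0, 1.0, n)
dx, dt = x[1] - x[0], 2.5e-4
a = 20.0 * np.exp(-200.0 * (x - 0.5) ** 2)      # smooth damping bump, a >= 0

u_prev = np.exp(-400.0 * (x - 0.3) ** 2)        # f_1; we take f_2 = u_t(0) = 0
u_prev[0] = u_prev[-1] = 0.0

def laplacian(v):
    out = np.zeros_like(v)
    out[1:-1] = (v[2:] - 2.0 * v[1:-1] + v[:-2]) / dx**2
    return out

u = u_prev + 0.5 * dt**2 * c**2 * laplacian(u_prev)   # first (Taylor) step

def energy(up, un):                              # E_U at the half step
    ut = (un - up) / dt
    ux = np.diff(0.5 * (un + up)) / dx
    return dx * (np.sum(ux**2) + np.sum(ut**2) / c**2)

E0, absorbed, energies = energy(u_prev, u), 0.0, [energy(u_prev, u)]
for _ in range(steps):
    # central-difference treatment of the damping term a*u_t
    u_next = (2.0 * u - (1.0 - 0.5 * a * dt) * u_prev
              + dt**2 * c**2 * laplacian(u)) / (1.0 + 0.5 * a * dt)
    u_next[0] = u_next[-1] = 0.0
    ut_mid = (u_next - u_prev) / (2.0 * dt)
    absorbed += 2.0 * dt * dx * np.sum(a * ut_mid**2) / c**2
    u_prev, u = u, u_next
    energies.append(energy(u_prev, u))

print(energies[-1] / E0)                # local energy: strongly damped
print((energies[-1] + absorbed) / E0)   # extended energy: close to 1
```

The first ratio is small because both wave branches cross the damping bump before the final time, while the second stays near $1$ up to discretization error, mirroring the conservation of $\mathcal{E}_U$.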
\centerline{\sc 3. Main result}
Let $\Omega\subset \mathbb{R}^n$ be a strictly convex bounded domain with smooth boundary and let us consider a solution $u$ of the initial value problem \begin{equation}\label{general IBVP} \left\{\begin{array}{rcl} (\partial^2_t +a\partial_t- c^2\Delta)u &=& 0\; \text{ in }(0,T)\times\mathbb{R}^n,\\
u|_{t=0}&=&f_1,\\
u_t|_{t=0}&=&f_2, \end{array}\right. \end{equation} where we assume that $c,a\in C^\infty(\mathbb{R}^n)$ are such that $c>0, a\geq 0$ and $c -1 = a = 0 \text{ in }\mathbb{R}^n\setminus\Omega.$ In the case of TAT/PAT we take $(f_1,f_2) = (f,-af)$ for some $f\in H_{D}(\Omega)$.
We denote the measurement operator
$$\Lambda_a:\mathcal{H}(\Omega) \to H^1((0,T)\times\partial\Omega),\quad \Lambda_a\text{\bf f} := u|_{(0,T)\times\partial\Omega}.$$
By letting $h= \Lambda_a\text{\bf f}$, we consider the time reversed wave $v(t,x)$ to be the solution of the backward problem \begin{equation}\label{damped_time_reversal} \left\{\begin{array}{rcl} (\partial^2_t - a\partial_t- c^2\Delta)v &=& 0\; \text{ in }(0,T)\times\Omega,\\
v|_{t=T}&=&P_{\partial\Omega}h(T),\\
v_t|_{t=T}&=&0,\\
v|_{(0,T)\times\partial\Omega}&=&h,\end{array}\right. \end{equation} where $P_{\partial\Omega}$ is the harmonic extension operator in $\Omega$. Notice the sign in the attenuation term. We will see that by solving \eqref{damped_time_reversal} back in time the solution is also attenuated, and consequently we can control the energy during the time reversal and the subsequent iterations of the Neumann series. We set $$\text{\bf A}_a:H^{1}_{(0)}([0,T]\times\partial\Omega)\to \mathcal{H}(\Omega) \cong H^1_0(\Omega)\oplus L^2(\Omega)$$ $$\text{\bf A}_ah=[v(0,\cdot),v_t(0,\cdot)] =:[A_a^1h,A_a^2h]$$ the new time reversal operator, which is a continuous map by \cite{Lasiecka} and finite speed of propagation (the subscript in $H^1_{(0)}$ stands for functions vanishing at $t=0$).
Notice that due to uniqueness of solutions, the error function $w$ satisfying \begin{equation}\label{error_function} \left\{\begin{array}{rcl} (\partial^2_t - c^2\Delta)w &=& -a(u_t+v_t)\; \text{ in }(0,T)\times\Omega,\\
w|_{t=T}&=&u^T - \phi,\\
\partial_tw|_{t=T}&=&u_t^T,\\
w|_{(0,T)\times\partial\Omega}&=&0,\end{array}\right. \end{equation} with $(u^T,u_t^T) = (u(T,\cdot),u_t(T,\cdot))$, is such that $u = v+w$ in $(0,T)\times\Omega$. We then define the error operator for the TAT/PAT problem to be $${\bf K}_a:\mathcal{H}(\Omega)\to \mathcal{H}(\Omega),\quad {\bf K}_a\text{\bf f} = [w(0,\cdot),w_t(0,\cdot)],$$ that is, ${\bf A}_a\Lambda_a = \text{Id} - {\bf K}_a$.
In contrast with the non-attenuated case, we can no longer say the error operator is a composition of a compact and a unitary operator as in \cite{TAT}, since now $w$ satisfies a wave equation with a nontrivial right-hand side depending on $u$. To overcome this we use the microlocal approach of \cite{TATbrain}, which is based on energy estimates of the high frequency part of the solution. The proof of the reconstruction formula then reduces to showing that at time $T$ a significant part of the initial energy has left the domain.
It is well known that we can only detect singularities of the initial conditions that propagate along geodesics hitting the boundary at times less than the measurement time $T$. In our case, though, we consider a more restrictive condition for the visibility of singularities, due to the effect of the attenuation coefficient, which requires both ends of those geodesics to hit $\partial\Omega$ non-tangentially. When such a coefficient is non-zero it is not possible to orthogonally project the initial data onto the initial conditions that generate $\gamma_{(x,\xi)}$ and $\gamma_{(x,-\xi)}$ respectively (see \cite[section 4.2]{TATbrain}), therefore we cannot split the analysis and work with each of those branches independently. This condition on the visibility of singularities can be compared with the fact that the damped wave equation cannot be extended to negative times, making it necessary to double the measurement time needed in the undamped case to get uniqueness (see \cite[Theorem 3.1]{Homan}).
We set
$$T_0(\Omega) = \sup\{|\gamma|_{g}: \gamma \subset \bar{\Omega} \text{ geodesic for the metric } g = c^{-2}dx^2\}.$$ Requiring that $T_0(\Omega)$ is finite (that is, $(\Omega,c^{-2}dx^2)$ is non-trapping),
the visibility of all singularities is guaranteed provided we take enough time to record the boundary data. Consequently, assuming the measurement time $T>T_0(\Omega)$ means we are able to detect both parts of every singularity produced by the initial condition, or in other words, both signals originated by any singularity are ``visible'' in time less than $T$.
The main result is the following. \begin{theorem}\label{NS_free_space} Assume \text{\rm ($\Omega$, $c^{-2}dx^2)$} is strictly convex, and $T_0(\Omega)<T<\infty$. Then ${\bf K}_a$ is a contraction in $\mathcal{H}(\Omega)$ and we get the following reconstruction formula for the thermoacoustic problem: $$\text{\bf f} = \sum^\infty_{m=0}{\bf K}_a^m{\bf A}_ah,\quad h:=\Lambda_a\text{\bf f}.$$ \end{theorem}
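The reconstruction in the theorem is a standard fixed-point iteration once the contraction property of ${\bf K}_a$ is established. The following finite-dimensional sketch illustrates this mechanism; the matrices `Lam` ("measurement") and `A` ("time reversal") are random stand-ins chosen so that $\|\text{Id} - A\Lambda\|<1$, an illustrative assumption, and are not the PDE operators $\Lambda_a$, ${\bf A}_a$ of the paper.

```python
import numpy as np

# Abstract Neumann series demo: if K = Id - A @ Lam is a contraction,
# then f = sum_m K^m (A h) with h = Lam f, exactly as in the theorem.
rng = np.random.default_rng(0)
n = 50
Lam = np.eye(n) + 0.3 * rng.standard_normal((n, n)) / np.sqrt(n)
A = np.linalg.inv(Lam) + 0.02 * rng.standard_normal((n, n)) / np.sqrt(n)

K = np.eye(n) - A @ Lam
assert np.linalg.norm(K, 2) < 1          # contraction hypothesis holds here

f = rng.standard_normal(n)               # unknown "initial condition"
h = Lam @ f                              # "boundary measurement"

f_rec, term = np.zeros(n), A @ h
for _ in range(200):                     # partial sums of sum_m K^m A h
    f_rec += term
    term = K @ term

print(np.linalg.norm(f_rec - f) / np.linalg.norm(f))
```

In the paper's setting, applying `Lam` corresponds to solving \eqref{general IBVP} and restricting to the boundary, and `A` to solving the time-reversed problem \eqref{damped_time_reversal}; the geometric convergence rate is governed by $\|{\bf K}_a\|$.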
\centerline{\sc 4. Geometric Optics}
\noindent{\bf 4.1 Parametrix for the Cauchy problem.} The proof of Theorem \ref{NS_free_space} is based on energy estimates for the high frequency part of the solution, where the analysis is done by constructing approximate solutions localized near null bicharacteristics. Homan in \cite{Homan} obtained a parametrix solution for \eqref{general IBVP} in the case $\text{\bf f} = (f,-af)$, following a standard argument. However, we need a parametrix for the general case $\text{\bf f} = (f_1,f_2)$. The construction below is based on \cite[\S 4.1]{TATbrain}, where the only difference resides in the transport equations \eqref{transport}.
We are looking for a solution of the form
$$u(t,x) = (2\pi)^{-n}\sum_{\sigma=\pm}\int e^{i\phi^\sigma(t,x,\eta)}\big(A^\sigma_1(t,x,\eta)\hat{f}_1(\eta) + |\eta|^{-1}A^\sigma_2(t,x,\eta)\hat{f}_2(\eta)\big)d\eta,$$ where $\hat{f}_i$ stands for the Fourier transform of $f_i$. To find the equations that $\phi^{\pm}$ and $A^\pm_{j}$, $j=1,2$, must satisfy we first compute:
$$\square_a u = (2\pi)^{-n}\sum_{\sigma=\pm}\int e^{i\phi^\sigma}\big([I^\sigma_{1,0} + I^\sigma_{1,1} + I^\sigma_{1,2}]\hat{f}_1 + |\eta|^{-1}[I^\sigma_{2,0} + I^\sigma_{2,1} + I^\sigma_{2,2}]\hat{f}_2\big)d\eta$$ where
\begin{align*}
I^\sigma_{j,2} &= - A_j^\sigma((\partial_t\phi^\sigma)^2 - c^2|\nabla_y\phi^\sigma|^2),\\ I^\sigma_{j,1} &= 2i[(\partial_t\phi^\sigma)(\partial_tA^\sigma_j) - c^2\nabla_y\phi^\sigma\cdot\nabla_yA^\sigma_j] + iA^\sigma_j\square_a\phi^\sigma,\\ I^\sigma_{j,0} &= \square_aA^\sigma_j. \end{align*} Considering classical amplitude functions given by the asymptotic expansions $$A^\sigma_j(t,x,\eta)\sim \sum_{k\geq 0}A^\sigma_{j,k}(t,x,\eta),\quad \sigma=\pm,$$ with $A^\sigma_{j,k}$ homogeneous of degree $-k$ in $\eta$, we would like to find those $A^\sigma_{j,k}$ so that $u$ solves \eqref{general IBVP} up to smooth terms. We then choose $\phi^\sigma$ and $A^\sigma_{j,k}$ so that the terms of the same order of homogeneity in $\eta$ cancel each other. The phase functions must satisfy the eikonal equation, which takes into account the second order terms $I^\sigma_{j,2}$, and we endow it with initial conditions \begin{align}\label{eikonal}
\left\{\begin{matrix} \mp\partial_t\phi^\pm &=& c|\nabla_x\phi^\pm|\\ \phi^\pm|_{t=0} &=& x\cdot\eta.\end{matrix}\right. \end{align} Consequently, we obtain $I^\sigma_{j,2} = 0$.
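In the special case of a constant sound speed, the eikonal equation \eqref{eikonal} is solved explicitly by plane-wave phases, and the claim can be checked symbolically. The following sketch verifies, for constant $c$ (an illustrative assumption; the paper allows variable $c$), that $\phi^\pm = x\cdot\eta \mp ct|\eta|$ satisfies $\mp\partial_t\phi^\pm = c|\nabla_x\phi^\pm|$ with $\phi^\pm|_{t=0}=x\cdot\eta$.

```python
import sympy as sp

# Symbolic check of the eikonal equation for constant c in two
# space dimensions (a sanity check, not part of the paper's argument).
t, c = sp.symbols('t c', positive=True)
x1, x2, e1, e2 = sp.symbols('x1 x2 eta1 eta2', real=True)
eta = sp.sqrt(e1**2 + e2**2)                  # |eta|

residuals = []
for s in (1, -1):                             # s = +1 corresponds to phi^+
    phi = x1 * e1 + x2 * e2 - s * c * t * eta
    grad = sp.sqrt(sp.diff(phi, x1)**2 + sp.diff(phi, x2)**2)
    residuals.append(sp.simplify(-s * sp.diff(phi, t) - c * grad))
    residuals.append(sp.simplify(phi.subs(t, 0) - (x1 * e1 + x2 * e2)))

print(residuals)   # -> [0, 0, 0, 0]
```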
To get rid of the terms of next lower order of homogeneity we have to solve transport equations. We define the vector field \begin{equation}\label{transport_op} X^\sigma = 2(\partial_t\phi^\sigma)\partial_t - 2c^2\nabla_x\phi^\sigma\cdot\nabla_x. \end{equation} Then, the coefficients of both amplitude functions must satisfy
\begin{equation}\label{transport} X^\sigma A^\sigma_{j,0} + A^\sigma_{j,0}\square_a\phi^\sigma = 0,\quad\text{and}\quad X^\sigma A^\sigma_{j,k} + A^\sigma_{j,k}\square_a\phi^\sigma = - \square_aA^\sigma_{j,k-1} \text{ for }k\geq 1. \end{equation}
Since we must have $u|_{t=0} = f_1$, that is
$$f_1(x) = (2\pi)^{-n}\int e^{ix\cdot\eta}\big((A^+_1 + A^-_1)|_{t=0}\hat{f}_1(\eta) + |\eta|^{-1}(A^+_2 + A^-_2)|_{t=0}\hat{f}_2(\eta) \big)d\eta,$$ we need the condition
$$A^+_1 + A^-_1 = 1,\quad A^+_2 + A^-_2=0,\quad \text{at }t=0.$$Analogously, from $u_t|_{t=0} = f_2$ and since $\phi^\sigma$ satisfies \eqref{eikonal}, we have
\begin{align*}
f_2(x) &= (2\pi)^{-n}\int e^{ix\cdot\eta}\big([ic|\eta|(-A^+_1 + A^-_1) + \partial_t(A^+_1 + A^-_1)]|_{t=0}\hat{f}_1(\eta)\\
&\hspace{7em}+[ic(-A^+_2 + A^-_2) +|\eta|^{-1}\partial_t(A^+_2 + A^-_2)]|_{t=0}\hat{f}_2(\eta) \big)d\eta, \end{align*} thus, at $t=0$ we require
$$ic|\eta|(-A^+_1 + A^-_1) + \partial_t(A^+_1 + A^-_1) = 0,\quad ic(-A^+_2 + A^-_2) +|\eta|^{-1}\partial_t(A^+_2 + A^-_2) = 1.$$ We then consider initial conditions given by the following system at $t=0$ which can be solved iteratively:
\begin{equation}\label{IC_amplitud1}
\left\{\begin{array}{rcl} A^+_{1,0} + A^-_{1,0} &=& 1\\ A^+_{1,k} + A^-_{1,k} &=&0,\; k\geq 1 \end{array}\right.\hspace{.5em}\left\{\begin{array}{rcl} A^+_{1,0} - A^-_{1,0} &=& 0\\ A^+_{1,k} - A^-_{1,k} &=& ic^{-1}|\eta|^{-1}\partial_t(A^+_{1,k-1} + A^-_{1,k-1}),\;k\geq1,\end{array}\right. \end{equation} \begin{equation}\label{IC_amplitud2}
\left\{\begin{array}{rcl} A^+_{2,0} + A^-_{2,0} &=& 0\\ A^+_{2,k} + A^-_{2,k} &=&0,\; k\geq 1 \end{array}\right.\hspace{.5em} \left\{\begin{array}{rcl} A^+_{2,0} - A^-_{2,0} &=& i/c\\ A^+_{2,k} - A^-_{2,k} &=& ic^{-1}|\eta|^{-1}\partial_t(A^+_{2,k-1} + A^-_{2,k-1}),\;k\geq1.\end{array}\right. \end{equation} We solve the transport equations in \eqref{transport} on integral curves of $X^\sigma$, as long as the eikonal equation \eqref{eikonal} is solvable, imposing the initial conditions from \eqref{IC_amplitud1} and \eqref{IC_amplitud2}. In particular, at $t=0$, the leading terms are given by $A^+_{1,0} = A^-_{1,0} = \frac{1}{2}$, and $A^+_{2,0} = -A^-_{2,0} = \frac{i}{2c}$.
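The $k=0$ part of the initial-condition systems \eqref{IC_amplitud1}--\eqref{IC_amplitud2} is a small linear system, and the stated leading amplitudes can be recovered by a quick symbolic solve (a verification sketch, not part of the paper's argument):

```python
import sympy as sp

# Solve the t = 0 systems at leading order (k = 0): the expected answer is
#   A^+_{1,0} = A^-_{1,0} = 1/2   and   A^pm_{2,0} = pm i/(2c).
c = sp.symbols('c', positive=True)
A1p, A1m, A2p, A2m = sp.symbols('A1p A1m A2p A2m')

sol = sp.solve(
    [A1p + A1m - 1, A1p - A1m,             # system for A_1 at k = 0
     A2p + A2m, A2p - A2m - sp.I / c],     # system for A_2 at k = 0
    [A1p, A1m, A2p, A2m], dict=True)[0]

print(sol)
```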
We can iteratively use the previous construction by solving the eikonal equation in small increments in time to get parametrices $u_+, u_-$ defined on $[0,T]$. In addition, by assuming the wave front set of $\text{\bf f}$ lies inside a small conical neighborhood of some $(x_0,\xi_0)\in S^*\bar{\Omega}$, their supports can be assumed to be contained in small neighborhoods of the respective branches of the geodesics issued from $(x_0,\xi_0)$, and their wave front sets contained in small neighborhoods of the bicharacteristics issued from $(0,x_0,1,\xi_0)$ and $(0,x_0,1,-\xi_0)$ respectively.
If we restrict the previous solution to the boundary we obtain an approximate representation (up to smooth terms) of the measurement operator $\Lambda_a$ as a sum of Fourier Integral Operators, that is, $\Lambda_a\text{\bf f} \cong \Lambda_a^+\text{\bf f} + \Lambda_a^-\text{\bf f}$, where
\begin{align*} [\Lambda_a^{\pm}\text{\bf f}](t,x) &= [\Lambda^\pm_{a,1}f_1 + \Lambda^\pm_{a,2}f_2](t,x)\\
&=(2\pi)^{-n}\int e^{i\phi^\pm(t,x,\eta)}\big(A^\pm_1(t,x,\eta)\hat{f}_1(\eta) + |\eta|^{-1}A^\pm_2(t,x,\eta)\hat{f}_2(\eta)\big)d\eta\Big|_{\partial\Omega}. \end{align*} It follows from the same arguments as in \cite{Homan} and \cite{TAT} that their canonical relations are of graph type. In consequence, since $\Lambda^\pm_{a,1}$ are FIOs of order $0$ and $\Lambda^\pm_{a,2}$ are of order $-1$, writing $h = \Lambda_a\text{\bf f}$, we get the estimate \begin{equation}\label{FIO_reg}
\|h\|_{H^{0}([0,T]\times\partial\Omega)}\leq C\|\text{\bf f}\|_{H^{0}(\Omega)\times H^{-1}(\Omega)}. \end{equation}
Moreover, following the same arguments as in \cite[Theorem 3.2]{Homan}, we have the stability estimate: \begin{proposition}[Stability]\label{stability} Assume $T_0(\Omega)<T<\infty$. Then \begin{equation}\label{Homan_stability}
\|\text{\bf f}\|_{\mathcal{H}(\Omega)}\leq C\|\Lambda_a\text{\bf f}\|_{H^1([0,T]\times\partial\Omega)},\quad\forall \text{\bf f}\in \mathcal{H}(\Omega). \end{equation} \end{proposition}
\begin{remark}\label{remark0} From the assumption on the wave front set of $\text{\bf f}$, we can assume that $\Lambda^+_a\text{\bf f}$ and $\Lambda^-_a\text{\bf f}$ have disjoint wave front sets, a consequence of the fact that bicharacteristics do not self-intersect.
\end{remark}
\noindent{\bf 4.2 Parametrix at the boundary. } We now want to have a pseudodifferential representation of the Dirichlet-to-Neumann map. Let us pick any $(x_0,\xi_0)\in S^*\bar{\Omega}$ and denote by $(t_1,x_1,1,\xi_1)\in T^*(\mathbb{R}\times\Omega)$ the point in phase space where the bicharacteristic issued from $(0,x_0,1,\xi_0)$ hits the boundary. We also denote by $(t_1,x_1,1,(\xi_1)')$ its projection onto $T^*(\mathbb{R}\times\partial\Omega)$. Close to the boundary and in a neighborhood of $x_1$ we choose local coordinates $x=(x',x^n)$ such that $\partial\Omega$ is given by $x^n=0$ and $x^n>0$ in $\mathbb{R}^n\setminus\Omega$. The following analysis can also be done for the other bicharacteristic, issued from $(0,x_0,1,-\xi_0)$.
Consider a compactly supported distribution $h$ on $\mathbb{R}\times\partial\Omega$ with $WF(h)$ contained in a small conic neighborhood of $(t_1,x_1,1,(\xi_1)')$. As in \cite{TATbrain} we can get an approximate solution of \eqref{TAT_eq} outside $\Omega$ for the transmitted wave with positive wave speed $c(x)|\xi|$, as an FIO applied to $h$. Since $\partial \Omega$ is an invisible boundary for the forward wave, this transmitted wave coincides with the solution $u_+$ defined above. The transmitted wave related to the positive wave speed has the form \begin{equation}\label{bdary_parametrix} u^+_T = (2\pi)^{-n}\int e^{i\varphi_+(t,x,\tau,\xi')}b_+(t,x,\tau,\xi')\hat{h}_+(\tau,\xi')d\tau d\xi', \end{equation} where $\hat{h}_+ = \int_{\mathbb{R}\times\mathbb{R}^{n-1}}e^{-i(- t\tau + x'\cdot\xi')}h(t,x')dtdx'$. In particular, the phase function $\varphi_+$ must satisfy the eikonal equation plus boundary conditions on $x^n=0$: \begin{equation}\label{eikonal+bc for +}
\partial_t\varphi_+ + c(x)|\nabla_{x}\varphi_+| = 0,\quad \varphi_+|_{x^n=0} = - t\tau + x'\cdot\xi', \end{equation} where the sign in the eikonal equation is chosen so as to agree with $\phi^+$ in \eqref{eikonal}. In the case of the negative sound speed, the transmitted wave $u^-_T$ has the same form \eqref{bdary_parametrix} but interchanging $\hat{h}_+$ with $\hat{h}_-= \int_{\mathbb{R}\times\mathbb{R}^{n-1}}e^{-i(+ t\tau + x'\cdot\xi')}h(t,x')dtdx'$ and $\varphi_+$ with $\varphi_-$, which satisfies \begin{equation}\label{eikonal+bc for -}
-\partial_t\varphi_- + c(x)|\nabla_{x}\varphi_-| = 0,\quad \varphi_-|_{x^n=0} = t\tau + x'\cdot\xi'. \end{equation} As in the previous construction we consider the amplitude functions to be classical, that is, $b_{\pm} \sim \sum_{k\geq 0}b^{\pm}_k$ with $b^{\pm}_k$ homogeneous of degree $-k$ in $\tau$ and $\xi'$. It then follows that $b_{\pm}$ satisfy analogous equations to \eqref{transport} with boundary conditions $$b^{\pm}_0 = 1,\quad b^{\pm}_k=0,\; k\geq 1,\quad \text{at } x^n = 0.$$
We use the above to locally define the Dirichlet-to-Neumann (DN) maps for the positive and negative wave speeds as the following $\Psi$DOs of order 1 (this follows from \eqref{eikonal+bc for +}):
$$N_{\pm}:h\mapsto \frac{\partial u^{\pm}_T}{\partial x^n}\Big|_{\mathbb{R}\times\partial\Omega}.$$ These definitions are local since they depend on the choice of local boundary coordinates. From \eqref{bdary_parametrix} we see that their principal symbols coincide and are given by
$$\sigma(N_\pm) = \text{i}\frac{\partial \varphi_\pm}{\partial x^n}\Big|_{\mathbb{R}\times\partial\Omega} =\text{i}\sqrt{c^{-2}\tau^2 - |\xi'|^2},$$
thus they are elliptic in the hyperbolic conic set $c^{-1}|\tau|>|\xi'|$.
Notice that in contrast with \cite{TATbrain}, since the boundary doesn't perturb the propagation of the wave there is no distinction between the incoming and outgoing DN maps.
\begin{remark}\label{remark1} The previous construction agrees up to smooth terms with the approximate solutions for the Cauchy problem $u_+$ and $u_-$, in a neighborhood of the boundary, when we take $h = \Lambda^{\pm}_a\text{\bf f}$ respectively. \end{remark}
\centerline{\sc 5. Proof of Theorem \ref{NS_free_space}}
By multiplying \eqref{error_function} by $2c^{-2}w_t = 2c^{-2}(u_t - v_t)$ and integrating on $(0,T)\times\Omega$ we get that the local energy of $w$ satisfies \begin{align*}E_\Omega(\text{\bf w}(0)) &= E_\Omega(\text{\bf w}(T)) + 2\int_{[0,T]\times\Omega} ac^{-2}(u_t+v_t)(u_t - v_t)dtdx\\
&= E_\Omega(\text{\bf w}(T)) + 2\int_{[0,T]\times\Omega} ac^{-2}|u_t|^2dtdx - 2\int_{[0,T]\times\Omega} ac^{-2}|v_t|^2dtdx\\
&\leq \|[u^T-\phi,u_t^T]\|^2_{\mathcal{H}(\Omega)}+ 2\int_0^T\int_\Omega ac^{-2}|u_t|^2dtdx. \end{align*}
Moreover, since $\phi$ is harmonic and $u^T|_{\partial\Omega} = \phi|_{\partial\Omega}$ we have $$(u^T - \phi,\phi)_{H_D(\Omega)} = 0,$$ which implies
$$\|[u^T - \phi,u_t^T]\|^2_{\mathcal{H}(\Omega)} = E_\Omega(\text{\bf u}(T))-\|\phi\|^2_{H_D(\Omega)}.$$ Then \begin{align}\label{K_ineq} E_\Omega(\text{\bf w}(0))\leq \mathcal{E}_\Omega(\text{\bf u},T). \end{align}
We now state the main step in the proof of the theorem, which says that after time $T>T_0(\Omega)$ a considerable part of the energy is outside the domain. In the enclosure case, that is, when boundary conditions are imposed, a similar proposition would have to be proven by considering an estimate relating the initial energy and the energy absorbed on the boundary. \begin{proposition} Let $\text{\bf u}$ be a solution of \eqref{general IBVP} with initial condition $\text{\bf f}\in\mathcal{H}(\Omega)$. There exists $C>1$ so that
$$\|\text{\bf f}\|^2_{\mathcal{H}(\Omega)} \leq CE_{\Omega^c}(\text{\bf u}(T)).$$ \end{proposition}
\begin{proof} Recall that for any bounded domain $U$ with smooth boundary and for $t'\leq t\leq t''$ with $t'<t''$, if $\text{\bf u}$ is a solution of the damped wave equation then \begin{equation}\label{energy1} \mathcal{E}_U(\text{\bf u},t'') = \mathcal{E}_U(\text{\bf u},t') + 2\mathfrak{R}\int_{[t',t'']\times\partial U}u_t\frac{\partial \overline{u}}{\partial\nu}dtdS, \end{equation} where $\nu$ is the outward normal unit-vector to $\partial U$.
Let us assume for a moment that $WF(\text{\bf f})$ is contained in a conical neighborhood of some $(x_0,\xi_0)\in S^*\bar{\Omega}$ and as before we denote by $(t^\pm_1,x^\pm_1)$ the times and points where the respective branches of the geodesic issued from $(x_0,\xi_0)$ make contact with the boundary. We now want to estimate, at time $t=T$, the energy transmitted outside $\Omega$, up to compact operators applied to $\text{\bf f}$. For a large ball $B$, the energy of the solution $\text{\bf u} = e^{t{\bf P}_a}\text{\bf f}$ of \eqref{TAT_eq} in $U = B\setminus\Omega$ is given by \begin{equation}\label{energy2} E_{\Omega^c}(\text{\bf u}(t_2)) = 2\mathfrak{R}\int_{[0,t_2]\times\partial \Omega}\frac{\partial u}{\partial t}\frac{\partial \overline{u}}{\partial\nu}dtdS, \end{equation} where there is no term at time $t=0$ since $\text{\bf u}(0)$ vanishes outside $\Omega$. Here $\nu$ stands for the interior unit normal vector to $\partial\Omega$. Notice we used the hypothesis on the support of $a$ in order to have the equality $E_{\Omega^c}(\text{\bf u}(t_{2})) = \mathcal{E}_{\Omega^c}(\text{\bf u},t_{2})$.
Let us denote by $h$ the Dirichlet data on $\partial\Omega$ given by $h = h_+ + h_-$, with $h_\pm = \Lambda^\pm_a\text{\bf f}$, and recall Remark \ref{remark0}. We can use the construction near the boundary from the previous section and get a representation of the transmitted wave, $\text{\bf u}_T = \text{\bf u}^+_T + \text{\bf u}^-_T$, which satisfies $\text{\bf u}_T\cong \text{\bf u}$, with $\cong$ meaning equality up to a smoothing operator applied to $h$. Notice these parametrix solutions are constructed only in neighborhoods of $(t^\pm_1,x^\pm_1)$. For times outside these neighborhoods, we can approximate $\text{\bf u}$ using the FIOs of Section 4.1. Nevertheless, for such intervals of time we know the solution is smooth, therefore the most energetic part of $\text{\bf u}$ is contained precisely in $\text{\bf u}^+_T$ and $\text{\bf u}^-_T$. Then we can estimate the RHS of \eqref{energy2} and get, modulo compact operators applied to $h_\pm$, \begin{equation}\label{e1} \begin{aligned} E_{\Omega^c}(\text{\bf u},T) &\cong 2\mathfrak{R}\int_{[0,T]\times\partial \Omega}\frac{\partial u_T}{\partial t}\frac{\partial \overline{u}_T}{\partial\nu}dtdS\\ &\cong 2\mathfrak{R}(P_th,-(N_+ h^+ + N_- h^-))\\ & \cong \sum_{\sigma=\pm}\mathfrak{R}(-2N^*_\sigma P_th_\sigma,h_\sigma), \end{aligned} \end{equation} where $(\cdot,\cdot)$ stands for the inner product in $L^2(\mathbb{R}\times\mathbb{R}^{n-1})$. Notice there are no cross terms between the functions $h_\pm$ on the right hand side. This is because the wave front set of $N^*_\pm$ is contained in a small neighborhood of $(t_1^\pm,x^\pm_1,1,(\xi_1^\pm)')$, while the wave front set of $h_\mp$ lies close to $(t_1^\mp,x^\mp_1,1,(\xi_1^\mp)')$. Since the two bicharacteristics do not intersect each other, the above wave front sets are disjoint and consequently $N^*_\pm P_th_\mp$ are smooth functions. Therefore, those terms involve compact operators applied to the functions $h_\pm$.
From the definition of the DN map and the above energy relation we deduce that $$E_{\Omega^c}(\text{\bf u}(T)) = \sum_{\sigma=\pm}\mathfrak{R}(M_\sigma h_\sigma,h_\sigma)$$ with $M_\pm$ two $\Psi$DOs of order 2 with the same principal symbol \begin{align*}
\sigma_p(M_\pm) &= \sigma_p(-2N^*_\pm P_t) = -2\cdot\overline{\text{i}\sqrt{c^{-2}\tau^2 - |\xi'|^2}}\cdot (-\text{i}\tau) \\
&= 2\tau\sqrt{c^{-2}\tau^2 - |\xi'|^2}. \end{align*}
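The arithmetic in the last display can be checked symbolically; the following one-liner (a sanity check only, with $X$ standing for $c^{-2}\tau^2 - |\xi'|^2$, assumed positive as on the hyperbolic region) confirms that $-2\,\overline{\text{i}\sqrt{X}}\,(-\text{i}\tau) = 2\tau\sqrt{X}$.

```python
import sympy as sp

tau = sp.symbols('tau', real=True)
X = sp.symbols('X', positive=True)     # stands for c^{-2} tau^2 - |xi'|^2

# -2 * conj(i sqrt(X)) * (-i tau) should equal 2 tau sqrt(X)
symbol = -2 * sp.conjugate(sp.I * sp.sqrt(X)) * (-sp.I * tau)
print(sp.simplify(symbol - 2 * tau * sp.sqrt(X)))   # -> 0
```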
Notice that $h_{\pm}$ are compactly supported and, as we mentioned before, their wave front sets are respectively contained in neighborhoods of the points $(t_1^\pm,x^\pm_1,1,(\xi_1^\pm)')$ where the respective bicharacteristic through $(0,x_0,1,\pm\xi_0)$ hits the boundary. Furthermore, their essential supports lie in the hyperbolic region $c|\xi'|<\tau$ due to the strict convexity of $\Omega$, which makes the bicharacteristics cross $\partial\Omega$ non-tangentially. Then, $\sigma(M_\pm)\geq C|(\tau,\xi')|^2$ and we can apply G\aa rding's inequality to obtain \begin{equation}\label{energy_ineq1+} \begin{aligned}
E_{\Omega^c}(\text{\bf u}(T)) &\geq C_1\sum_{\sigma=\pm}\|h_\sigma\|^2_{H^{1}(\mathbb{R}\times\partial\Omega)} - C_2\sum_{\sigma=\pm}\|h_\sigma\|^2_{H^{0}(\mathbb{R}\times\partial\Omega)}\\
&\geq C_1\|h\|^2_{H^{1}(\mathbb{R}\times\partial\Omega)} - C_2\sum_{\sigma=\pm}\|h_\sigma\|^2_{H^{0}(\mathbb{R}\times\partial\Omega)}. \end{aligned} \end{equation}
Let ${\bf X} =\text{diag}(X,X)$, with $X$ a zero order $\Psi$DO with essential support in a conic neighborhood of $(x_0,\xi_0)\in S^*\bar{\Omega}\setminus 0$, and such that ${\bf X}\text{\bf f}$ vanishes outside $\Omega$. From the last inequality, choosing $h = \Lambda_{a}{\bf X}\text{\bf f} = \Lambda_{a}^+{\bf X}\text{\bf f}+\Lambda^-_{a}{\bf X}\text{\bf f}$, the continuity and stability of the measurement operator in \eqref{FIO_reg} and \eqref{Homan_stability} give \begin{equation}\label{energy_ineq2}
\|{\bf X}\text{\bf f}\|^2_{\mathcal{H}(\Omega)} \leq CE_{\Omega^c}(\text{\bf u}(T)) + C\| {\bf X}\text{\bf f}\|^2_{H^{0}(\Omega)\times H^{-1}(\Omega)}. \end{equation}
By compactness of $WF(\text{\bf f})\cap S^*\bar{\Omega}$, in a conic neighborhood of this set we can consider a finite pseudo-differential partition of unity $1=\sum\chi_j$, with $\chi_j$ the symbols of $\Psi$DOs $X_j$ localizing in conic neighborhoods of finitely many points $(x_j,\xi^j)\in WF(\text{\bf f})\cap S^*\bar{\Omega}$. Then $\text{\bf f} = (I-\sum {\bf X}_j)\text{\bf f} + \sum {\bf X}_j\text{\bf f}$, where $WF(\text{\bf f})\cap WF(I-\sum {\bf X}_j)=\emptyset$, thus from the inequality above we get
$$\|\text{\bf f}\|_{\mathcal{H}(\Omega)}^2\leq C\sum_j E_{\Omega^c}(e^{t{\bf P}_a}{\bf X}_j\text{\bf f}(T)) + C\|\text{\bf f}\|^2_{H^{0}(\Omega)\times H^{-1}(\Omega)}.$$ We can substitute $e^{t{\bf P}_a}{\bf X}_j\text{\bf f}$ by $Q_j{\bf X}_j\text{\bf f}$ on the right-hand side, with $Q_j$ denoting the parametrix operator for the wave equation localized near $(x_j,\xi^j)$, since both are equal up to a compact operator applied to $\text{\bf f}$ (see \cite[\S 4.10]{TATbrain}). By means of Egorov's Theorem (see for instance \cite[Theorem 10.1]{IntroMLA1994}) there exist zero order $\Psi$DOs $\tilde{{\bf X}}_j$ such that $Q_j{\bf X}_j = \tilde{{\bf X}}_jQ_j$ modulo a smoothing operator; therefore the exact solution $\text{\bf u} = e^{t\text{\bf P}_a}\text{\bf f}$ of \eqref{general IBVP} satisfies \begin{equation}\label{energy_ineq3}
\|\text{\bf f}\|_{\mathcal{H}(\Omega)}\leq C\|\text{\bf u}(T)\|_{H^1(\Omega^c)\oplus L^2(\Omega^c)}+ C\|\text{\bf f}\|_{H^{0}(\Omega)\times H^{-1}(\Omega)}. \end{equation} To get rid of the second term on the right-hand side we use a classical argument that requires showing that $$\mathcal{H}(\Omega)\ni \text{\bf f}\mapsto \text{\bf u}(T)\in H^1(\mathbb{R}^n\setminus\Omega)\oplus L^2(\mathbb{R}^n\setminus\Omega)$$ is a bounded injective map. The continuity follows from \cite[Proposition 1]{Homan} when the domain is a large ball containing $\text{supp}(\text{\bf u}(T))$, which exists by finite speed of propagation. Assume there is $\text{\bf f}\in \mathcal{H}(\Omega)$ such that $$u(T,x) = 0,\;\forall x\in\mathbb{R}^n\setminus\Omega.$$
By finite domain of dependence of the damped wave equation, $u$ must vanish in $\{(t,x)\in(0,\infty)\times\mathbb{R}^n: \text{dist}(x,\partial\Omega)>|T-t|\}$, and since $\text{\bf u}(0,\cdot) = \text{\bf f} $ with $\text{supp}\,\text{\bf f}\subset \bar{\Omega}$, for the same reason $u=0$ in $\{(t,x)\in(0,\infty)\times\mathbb{R}^n: \text{dist}(x,\partial\Omega)>t\}$. Intersecting both light cones, we in fact have that $u$ vanishes in $[0,3T/2]\times\{x\in\mathbb{R}^n:\text{dist}(x,\partial\Omega)>T/2\}$. By Tataru's unique continuation \cite[Theorem 4]{Tataru1}, and since we assume $T>T_0(\Omega)>2T_1(\Omega)$, with $$T_1(\Omega) = \sup_{x\in \Omega} d(x,\partial\Omega),$$ and $d(x,\partial\Omega)$ the infimum of the lengths of curves with respect to $c^{-2}dx^2$ starting at $x$ and ending at $\partial\Omega$, we obtain $u = 0$ in a neighborhood of $\{3T/4\}\times \mathbb{R}^n$. From Proposition 1 in \cite{Homan}, solving the initial value problem backward from $t=3T/4$ to $t=0$ then yields $u=0$ in $[0,3T/4]\times\Omega$, so in particular $\text{\bf f}\equiv0$.
On the other hand, the inclusion $\mathcal{H}(\Omega)\hookrightarrow H^{0}(\Omega)\times H^{-1}(\Omega)$ is compact, so by \cite[Proposition V.3.1]{Taylor} there is some $C>1$ which depends on $a$, such that
$$\|\text{\bf f}\|_{\mathcal{H}(\Omega)} \leq C\|\text{\bf u}(T)\|_{H^1(\mathbb{R}^n\setminus\Omega)\oplus L^2(\mathbb{R}^n\setminus\Omega)}.$$ To obtain the energy $E_{\Omega^c}(\text{\bf u}(T))$ on the right-hand side of the last estimate and conclude the proof, we apply Poincar\'e's inequality in $B\setminus\Omega$, with $B$ a large ball containing $\Omega$ such that $\text{\bf u}(T)$ vanishes in its complement (which exists by finite speed of propagation). \end{proof}
\begin{figure}
\caption{{\small (left) Initial condition to be reconstructed. (center) Attenuation coefficient for the first example. (right) Smaller attenuation coefficient for the second example.}}
\label{fig:settings}
\end{figure}
From the proposition just proven we get \begin{align*} \mathcal{E}_{\Omega}(\text{\bf u},T) = \mathcal{E}_{\mathbb{R}^n}(\text{\bf u},T) - \mathcal{E}_{\Omega^c}(\text{\bf u},T)
&\leq \|\text{\bf f}\|^2_{\mathcal{H}(\Omega)} - \|\text{\bf f}\|^2_{\mathcal{H}(\Omega)}/C\\
&\leq (1-1/C)\|\text{\bf f}\|^2_{\mathcal{H}(\Omega)}, \end{align*} where we used that the extended energy is preserved in $\mathbb{R}^n$, i.e., $\mathcal{E}_{\mathbb{R}^n}(\text{\bf u},T) = \mathcal{E}_{\mathbb{R}^n}(\text{\bf u},0)$. Finally, recalling \eqref{K_ineq} we conclude that \begin{align*}
\|{\bf K}_a\text{\bf f}\|^2_{\mathcal{H}(\Omega)}= E_{\Omega}({\bf w}(0))\leq \mu\|\text{\bf f}\|^2_{\mathcal{H}(\Omega)}, \end{align*}
with $0<\mu<1$; therefore ${\bf K}_a$ is a contraction in the norm $\|\cdot\|_{\mathcal{H}(\Omega)}$ and the Neumann series converges.
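The contraction property is exactly what makes the Neumann series usable for reconstruction. The following self-contained sketch (ours, not part of the scheme analyzed in this paper) illustrates the mechanism on a generic finite-dimensional contraction: when the operator norm $\mu$ of a linear map $K$ is below $1$, the partial sums of $\sum_j K^j$ converge geometrically to $(I-K)^{-1}$.

```python
import numpy as np

rng = np.random.default_rng(0)

# A generic contraction: a random matrix rescaled so its operator norm is mu < 1
# (mu = 0.9 mimics the strong-attenuation regime, where convergence is slow).
K = rng.standard_normal((50, 50))
K *= 0.9 / np.linalg.norm(K, 2)

I = np.eye(50)
exact = np.linalg.inv(I - K)          # (I - K)^{-1}

# Partial sums S_N = sum_{j=0}^{N-1} K^j of the Neumann series.
S, term = np.zeros_like(K), I.copy()
for _ in range(200):
    S += term
    term = term @ K

err = np.linalg.norm(exact - S, 2)    # remainder is O(mu^N / (1 - mu))
print(err < 1e-6)
```

With $\mu=0.9$, two hundred terms already recover the inverse to high accuracy, but the count needed grows rapidly as $\mu\to1$.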
\begin{figure}
\caption{\small The first row is Homan's reconstruction with 1 and 4 terms. The respective errors are 126\% and 322\%. The second row is the reconstruction following our new method for 1 and 100 terms where we got errors of 76\% and 12\% respectively.}
\label{ex_high_att}
\end{figure}
\begin{remark}
Notice that $\mu$ depends on the attenuation coefficient $a$ and approaches 1 as $\|a\|_\infty$ goes to infinity. Hence the convergence of the Neumann series deteriorates when large attenuations are involved. \end{remark}
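To quantify the remark: the truncation error after $N$ terms of a Neumann series with contraction factor $\mu$ is of order $\mu^{N}$, so reaching a fixed tolerance $\varepsilon$ takes roughly $\log\varepsilon/\log\mu$ terms. A quick sketch (our own illustration; the sample values of $\mu$ are hypothetical, not computed from any attenuation coefficient):

```python
import math

# Smallest N with mu**N <= eps, i.e. N >= log(eps)/log(mu).
def terms_needed(mu, eps=1e-2):
    return math.ceil(math.log(eps) / math.log(mu))

# The term count blows up as the contraction factor mu approaches 1.
print(terms_needed(0.5), terms_needed(0.95))
```

For $\mu=0.5$ about 7 terms suffice, while $\mu=0.95$ already demands around 90, consistent with the 100 terms used in the numerical experiments below.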
\centerline{\sc 6. Numerics}
We carried out some numerical simulations to illustrate the theoretical result obtained in this article. We used a $501\times 501$ grid to discretize the domain $\Omega = [-1,1]^2$, a variable sound speed given by the function $c(x) = 1 + 0.2\sin(2\pi x_1) + 0.1\cos(2\pi x_2)$, and a regularized version of the Shepp--Logan phantom as initial condition. We also chose a measurement time of $T=3$, and since the convergence rate of this reconstruction method is slower due to the presence of the damping parameter, we used 100 terms in the Neumann series reconstruction. The forward propagation of the wave was discretized following \cite{TATnumeric}, which implements PML conditions to simulate the propagation of waves in free space, while for the back projection of the boundary data we used a standard second order accurate scheme.
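The grid and sound speed just described are straightforward to set up. The sketch below (variable names are ours) reproduces the discretization of $\Omega$ and of $c(x)$, and checks that the speed stays positive so the metric $c^{-2}dx^2$ is non-degenerate.

```python
import numpy as np

n = 501                                   # grid points per side of [-1, 1]^2
x1, x2 = np.meshgrid(np.linspace(-1, 1, n),
                     np.linspace(-1, 1, n), indexing="ij")

# Variable sound speed from the text: c(x) = 1 + 0.2 sin(2 pi x1) + 0.1 cos(2 pi x2)
c = 1 + 0.2 * np.sin(2 * np.pi * x1) + 0.1 * np.cos(2 * np.pi * x2)

print(c.shape, c.min() > 0)
```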
\begin{figure}
\caption{\small For Homan's reconstruction in the first row, the errors are 46\% and 5\% for 1 and 8 terms, while with our reconstruction method we get 75\% and 11\% for 1 and 100 terms respectively.}
\label{ex_low_att}
\end{figure}
We consider a damping coefficient formed by a small background attenuation growing linearly in $x_1$ and two regularized disks of higher attenuation, as shown in Figure \ref{fig:settings}. Namely, we let $a(x)$ be of the form $$a(x) = F\big( d_1\chi_{D_1}(x) + d_2\chi_{D_2}(x) + d_3(1+x_1)\chi_{\Omega\setminus(D_1\cup D_2)}(x)\big)$$
where $\chi$ is the characteristic function, $D_1 = \{x\in\mathbb{R}^2 : |(x_1,x_2) -(\frac{1}{3},-\frac{1}{2}) |^2 < 0.05\}$, $D_2 = \{x\in\mathbb{R}^2 : |(x_1,x_2) - (-\frac{1}{3},\frac{1}{3}) |^2 < 0.07\}$, and $F$ stands for a regularization function that also brings the attenuation coefficient smoothly to zero near $\partial\Omega$. The idea is to compare three reconstructions: Time Reversal (i.e., keeping only one term of the Neumann series); Homan's reconstruction method with more summands, which solves \eqref{time_reversal} in the back propagation process; and the method developed in this article.
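For concreteness, the unregularized attenuation map can be assembled as follows. This is a sketch with our own variable names, using the first-example values $d_1=9$, $d_2=4$, $d_3=1.5$; the regularization $F$ and the smooth cutoff near $\partial\Omega$ are omitted.

```python
import numpy as np

n = 501
x1, x2 = np.meshgrid(np.linspace(-1, 1, n),
                     np.linspace(-1, 1, n), indexing="ij")

# Characteristic functions of the disks D1, D2 (squared distances from the text).
D1 = (x1 - 1/3) ** 2 + (x2 + 1/2) ** 2 < 0.05
D2 = (x1 + 1/3) ** 2 + (x2 - 1/3) ** 2 < 0.07

d1, d2, d3 = 9.0, 4.0, 1.5
# a = d1 on D1, d2 on D2, and the linear background d3*(1 + x1) elsewhere.
a = np.where(D1, d1, np.where(D2, d2, d3 * (1 + x1)))

print(a.max(), a.min())
```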
In the first example we take $d_1 = 9$, $d_2 = 4$ and $d_3 = 1.5$. Figure \ref{ex_high_att} shows the results of Time Reversal (left column) and of adding more terms in the Neumann series (right column). The improvement obtained with our new approach is evident, since the first method diverges while ours converges, albeit at a slow rate that requires significantly more iterations to achieve a reasonable error.
For the second example (Figure \ref{ex_low_att}) we consider a lower attenuation coefficient with $d_1 = 5$, $d_2 = 4$ and $d_3 = 1.5$. In this case Homan's Neumann series does converge and, in accordance with the theory, it exhibits faster convergence than the Neumann series presented here.\\
\centerline{\sc 7. Acknowledgments}
The author would like to thank Gunther Uhlmann for all the support and advising throughout the writing of this paper, and Carlos Montalto for many fruitful conversations about the subject. He also thanks Fran\c{c}ois Monard and Sebasti\'an Acosta for their help and guidance with the numerical part.
\renewcommand{\refname}{\centerline{\large \sc References}}
\end{document} |
\begin{document}
\title{Thurston's algorithm and rational maps from quadratic polynomial matings} \author{Mary Wilkerson}
\begin{abstract}
Topological mating is an operation that takes two same-degree polynomials and produces a new map with dynamics inherited from this initial pair. This process frequently yields a map that is Thurston-equivalent to a rational map $F$ on the Riemann sphere. Given a pair of postcritically finite polynomials of the form $z^2+c$, there is a fast test on the constant parameters to determine whether this map $F$ exists---but this test is not constructive. We present an iterative method that utilizes finite subdivision rules and Thurston's algorithm to approximate this rational map, $F$. This manuscript expands upon results given by the Medusa algorithm in \cite{MEDUSA}. We provide a proof of the algorithm's efficacy, details on its implementation, the settings in which it is most successful, and examples generated with the algorithm. \end{abstract}
\maketitle \tableofcontents \addtocontents{toc}{\vskip-40pt}
\let\thefootnote\relax\footnote{2010 \emph{Mathematics Subject Classification}. Primary 37F20; Secondary 37F10.} \let\thefootnote\relax\footnote{\emph{Key words and phrases.} mating, finite subdivision rule, rational maps, Thurston's algorithm, Medusa.}
\section{Introduction}
\emph{Mating} refers to a collection of operations that combine a pair of same-degree polynomials in order to form a new map. Depending on the type of mating and polynomial pair, the resulting mating may be Thurston-equivalent to a rational map. When topologically mating two postcritically finite polynomials of the form $z\mapsto z^2+c$, the $c$ parameters determine whether the resulting mating behaves like a rational map on the 2-sphere---but this test is not constructive of the rational map itself. \cite{LEI} \cite{REES} \cite{SHISHIKURA}
Thurston's topological characterization of rational maps, which is the driving force behind this parameter test, gives a more general criterion for when a topological map $g:\mathbb{S}^2\rightarrow\mathbb{S}^2$ is Thurston-equivalent to a rational map $F:\hat{\mathbb{C}}\rightarrow\hat{\mathbb{C}}$. The proof of Thurston's characterization given in \cite{TOPCHARACTERIZATION} suggests an algorithm for obtaining $F$: if we take Thurston pullbacks of a complex structure on $\mathbb{S}^2$ by $g$, this process incidentally outputs a sequence of rational maps converging to $F$. To take these pullbacks, however, we must have some topological understanding of how the branched cover $g$ acts on various subsets of $\mathbb{S}^2$. Since this is sometimes difficult information to encode, few direct attempts (such as those in \cite{MEDUSA, SPIDER}) have been made to use Thurston's algorithm to find $F$.
If we wish to find the rational map associated with a mating using Thurston's algorithm, it is clear that we need to understand the topological structure of the mated map first. In \cite{FSRCONSTRUCTION}, finite subdivision rules are constructed to develop this understanding. In the situations where a finite subdivision rule exists, we then can apply Thurston's algorithm to obtain an approximation to our desired map. The content of this article elaborates on this argument.
Put succinctly, we will develop a combinatorial map to model the behavior of the essential mating, and use this knowledge to assist in finding rational map approximations to the geometric mating. We start with prerequisite topics in Section \ref{prereqs}. In this section, we will discuss quadratic polynomials, their matings, and the Thurston and Medusa algorithms. In Section \ref{yourfsr}, we develop a finite subdivision rule construction that is tailored to the goal of describing mapping behavior of certain matings.
In Section \ref{algorithms}, we introduce our main results: a method for obtaining rational maps from postcritically finite quadratic matings using finite subdivision rules and Thurston's algorithm. We prove that the output of our iterative algorithm determines an approximation to the desired rational map, and highlight situations in which our algorithm extends the reach of a similar technique called the \emph{Medusa algorithm} \cite{MEDUSA}. We comment on examples and further avenues of exploration in Section \ref{connections}.
\section{Prerequisites in dynamics}\label{prereqs}
\subsection{Thurston equivalence}\label{thureq}
We have thus far discussed a notion of maps behaving in a dynamically similar fashion. We will formalize this with the definition below:
\begin{definition} Let $f,g: \mathbb{S}^2\rightarrow \mathbb{S}^2$ be two branched mappings with postcritical sets $P_f$ and $P_g$. The maps $f$ and $g$ are said to be \emph{Thurston equivalent} if and only if there exist homeomorphisms $h, h': (\mathbb{S}^2,P_f)\rightarrow(\mathbb{S}^2,P_g)$ such that
\begin{center}
$\begin{CD} (\mathbb{S}^2,P_f) @>h'>> (\mathbb{S}^2,P_g)\\ @VVfV @VVgV\\ (\mathbb{S}^2,P_f) @>h>> (\mathbb{S}^2,P_g)\\ \end{CD} $ \end{center}
commutes, and such that $h$ is isotopic to $h'$ relative to $P_f$.\cite{TOPCHARACTERIZATION}
\end{definition}
When $f$ and $g$ are Thurston equivalent, this implies that the action of $f$ on a sphere containing its postcritical set is similar to the action of $g$ on a sphere containing its postcritical set.
\subsection{Parameter space}\label{parameter}
Let $M$ denote the Mandelbrot set. The work in this paper will emphasize those parameters $c\in M$ for which the polynomials $f_c(z)=z^2+c$ are critically preperiodic. For such polynomials the associated Julia set $J_c$ is a connected and locally connected dendrite. When $J_c$ is a dendrite, the Julia set has no interior and $J_c$ coincides with the filled Julia set, $K_c$.
When $K_c$ is connected, its complement on the Riemann sphere $\hat{\mathbb{C}}\backslash K_c$ is conformally isomorphic to $\hat{\mathbb{C}}\backslash\overline{\mathbb{D}}$ via some map $\phi:\hat{\mathbb{C}}\backslash \overline{\mathbb{D}}\rightarrow\hat{\mathbb{C}}\backslash K_c$. We have that $\phi$ is uniquely determined under the additional constraint that it conjugate $f_0:\hat{\mathbb{C}}\backslash\overline{\mathbb{D}}\rightarrow\hat{\mathbb{C}}\backslash\overline{\mathbb{D}}$ to $f_c:\hat{\mathbb{C}}\backslash K_c\rightarrow \hat{\mathbb{C}}\backslash K_c$ so that $\phi \circ f_0 = f_c\circ \phi$.
\begin{figure}
\caption{The conformal isomorphism $\phi$ which determines external rays for $z\mapsto z^2+i$. Shown on the right are external rays landing at points on the critical orbit of this polynomial.}
\label{varphi}
\end{figure}
We may use $\phi$ to define \emph{external rays of angle $t$}, $R_c(t)$, by fixing $t\in \mathbb{R}/\mathbb{Z}$ and taking images of rays around the unit disk under $\phi$: $R_c(t)=\{\phi(re^{2\pi i t})\,|\,r\in(1,\infty)\}$, as demonstrated in Figure \ref{varphi}. Since the $K_c$ we discuss here are locally connected, $\phi$ extends continuously to $\partial\mathbb{D}$ and external rays of angle $t$ have \emph{landing point} given by $\gamma(t)=\displaystyle\lim_{r\rightarrow 1^+}\phi(re^{2\pi i t})$. The map $\gamma$ is called the \emph{Carath\'{e}odory semiconjugacy}, which in the degree two case highlights the angle-doubling behavior of the map $f_c$ when applied to landing points of external rays: $\gamma(2t)=f_c(\gamma(t))$. This is emphasized on the right of Figure \ref{varphi} for the polynomial $z\mapsto z^2+i$: we may note the critical orbit portrait $$0\mapsto i\mapsto -1+i \mapsto -i$$ for this map, or we may double the angles of external rays and record the locations of landing points in order to observe the same behavior.
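The angle-doubling bookkeeping can be reproduced exactly with rational arithmetic. The sketch below (ours) iterates $t\mapsto 2t \bmod 1$ starting from the external angle $1/6$ associated with the critical value of $z\mapsto z^2+i$, recovering the strictly preperiodic pattern of the critical orbit (one step into a period-2 cycle).

```python
from fractions import Fraction

# Angle doubling t -> 2t (mod 1) on external-ray angles; for z^2 + i the
# critical value has external angle 1/6.
t = Fraction(1, 6)
orbit = []
for _ in range(5):
    orbit.append(t)
    t = (2 * t) % 1

print(orbit)  # 1/6 -> 1/3 -> 2/3 -> 1/3 -> 2/3, mirroring i -> -1+i -> -i -> -1+i
```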
Given $\phi$, a typical notational convention is to parameterize critically preperiodic polynomials $z\mapsto z^2+c$ by an angle $\theta$ of an external ray landing at the critical value rather than by $c$. In the event that more than one ray lands at the critical value, there may be multiple parameters referring to the same polynomial. As a simpler example, the polynomial given in Figure \ref{varphi} could be named $f_{1/6}$ rather than $f_i$. We will adopt this $f_\theta$ convention in lieu of the use of $f_c$ for the remainder of the paper.
\subsection{Matings}\label{mating}
Let $\widetilde{\mathbb{C}}$ be the compactification of $\mathbb{C}$ formed by union with the circle at infinity, $\widetilde{\mathbb{C}}=\mathbb{C}\cup\{\infty\cdot e^{2\pi i \theta}|\theta \in \mathbb{R}/\mathbb{Z}\}$. Then, take two monic same-degree polynomials with locally connected and connected filled Julia sets acting on two disjoint copies of $\widetilde{\mathbb{C}}$. If we form an equivalence relation on these copies of $\widetilde{\mathbb{C}}$ appropriately, the polynomial pair will determine a map that descends to the quotient space. This map is the \emph{mating} of the two polynomials. There are many kinds of polynomial matings, each dependent upon the equivalence relation we select. We will discuss three fundamental constructions: \emph{formal} matings, \emph{topological} matings, and \emph{essential} matings.
\begin{definition}Let $f_\alpha:\widetilde{\mathbb{C}}_\alpha\rightarrow\widetilde{\mathbb{C}}_\alpha$ and $f_\beta:\widetilde{\mathbb{C}}_\beta\rightarrow\widetilde{\mathbb{C}}_\beta$ be postcritically finite monic quadratic polynomials taken on two disjoint copies of $\widetilde{\mathbb{C}}$, and let $\sim_f$ be the equivalence relation which identifies $\infty \cdot e^{2\pi i t}$ on $\widetilde{\mathbb{C}}_\alpha$ with $\infty \cdot e^{-2\pi i t}$ on $\widetilde{\mathbb{C}}_\beta$ for all $t\in \mathbb{R}/\mathbb{Z}$. Then, the quotient space $\widetilde{\mathbb{C}}_\alpha\bigsqcup \widetilde{\mathbb{C}}_\beta/\sim_f$ may be identified with $\mathbb{S}^2$, as this quotient glues two $\widetilde{\mathbb{C}}$ disks together along their boundaries with opposing angle identifications to form a topological 2-sphere. (See Figure \ref{green}.) This quotient space serves as the domain of the \emph{formal mating} $f_\alpha\upmodels_ff_\beta$, that is, the map that applies $f_\alpha$ and $f_\beta$ on their respective hemispheres of $\mathbb{S}^2$. \end{definition}
It should be noted that the Carath\'{e}odory semiconjugacy guarantees that $f_\alpha\upmodels_ff_\beta$ is well-defined on the equator, and provides a continuous branched covering of $\mathbb{S}^2$ to itself.
\begin{figure}
\caption{Steps in the formation of the formal mating.}
\label{green}
\end{figure}
\begin{definition}The domain of the \emph{topological mating} $f_\alpha\upmodels f_\beta$ is given by the quotient space $K_\alpha\bigsqcup K_\beta/\sim$, where $\sim$ identifies the landing point of $R_\alpha(t)$ on $J_\alpha$ with the landing point of $R_\beta(-t)$ on $J_\beta$. This quotient glues the filled Julia sets of $f_\alpha$ and $f_\beta$ together by their boundaries using opposing external angle identifications. Much like the formal mating, we obtain the map $f_\alpha\upmodels f_\beta$ by applying $f_\alpha$ and $f_\beta$ on their respective filled Julia sets. \end{definition}
The Carath\'{e}odory semiconjugacy again guarantees that the topological mating is well-defined and continuous, but even though there is an induced map, the quotient space it acts on may no longer be a topological 2-sphere. The question of when the domain is a 2-sphere is solved for the postcritically finite quadratic case, as noted in the following theorem:
\begin{theorem}[Lei, Rees, Shishikura]\label{LRS} The topological mating of the postcritically finite maps $z\mapsto z^2+c$ and $z\mapsto z^2+c'$ is Thurston-equivalent to a rational map on $\hat{\mathbb{C}}$ if and only if $c$ and $c'$ do not lie in complex conjugate limbs of the Mandelbrot set \cite{LEI}, \cite{REES}, \cite{SHISHIKURA}.
\end{theorem}
In the event that $c$ and $c'$ are not in complex conjugate limbs of $M$, then the rational map referenced in the theorem above is called a \emph{geometric mating}. We will use $F$ to denote the geometric mating of a polynomial pair whenever it is unambiguous to do so.
Theorem \ref{LRS} is a powerful result as it allows us to make statements regarding the dynamics of the topological mating based on parameters alone. However, while this theorem tells us when a topological mating behaves like a rational map $F$ on the Riemann sphere, it is not constructive of this map. We will detail an approximation for $F$ later, but the algorithm will depend on having an understanding of the action of the topological mating on the sphere. Ideally we would use the formal mating instead, since it is much simpler in construction than the topological mating---but these two maps are not always Thurston equivalent. Instead, we shall make use of an intermediate mating operation called the \emph{essential mating}, $f_\alpha\upmodels_ef_\beta$. Starting with the quotient 2-sphere $\mathbb{S}^2$ developed in the formal mating, the essential mating is constructed as detailed below and in \cite{LEI}.
\begin{definition}\label{essential} Suppose $f_\alpha$ and $f_\beta$ are two polynomials with the properties described in Theorem \ref{LRS} whose topological mating is Thurston-equivalent to a rational map. Allow $\mathbb{S}^2$ to denote the domain of the formal mating $h=f_\alpha\upmodels_f f_\beta$. We define the \emph{essential mating} using the following steps. \begin{enumerate} \item Let $\{l_1,...,l_n\}$ be the set of maximal connected graphs of external rays on $\mathbb{S}^2$ containing at least two points of the postcritical set $P_h$, and let $\{q_1,...,q_m\}$ be the set of connected graphs of external rays in $\displaystyle\bigcup_{k=1}^\infty\bigcup_{i=1}^nh^{-k}(l_i)$ containing at least one point on the critical orbit of $h$. Take each of the $\{q_1,...,q_m\}$ to be an equivalence class of $\sim_e$, and note that $\mathbb{S}'^2 =\mathbb{S}^2/\sim_e$ is homeomorphic to a sphere since none of the equivalence classes of $\sim_e$ contain closed curves.
\item Note that $h$ maps equivalence classes to equivalence classes, so letting $\pi:\mathbb{S}^2\rightarrow\mathbb{S}'^2$ denote the natural projection yields that $\pi\circ h\circ\pi^{-1}$ is well-defined and preserves the mapping order of equivalence classes.
\item Set $V_j$ to be an open neighborhood of $q_j$ such that $V_j\cap(P_h\cup\Omega_h)=q_j\cap(P_h\cup\Omega_h)$ for each $j$, and such that distinct $V_j$ are nonintersecting. For each $j$, denote by $\{U_{ij}\}$ the set of connected components of $h^{-1}(V_j)$ for which $U_{ij}\cap\displaystyle\bigcup_{p=1}^mq_p=\emptyset$.
\item Define $f_\alpha\upmodels_ef_\beta:\mathbb{S}'^2\rightarrow\mathbb{S}'^2$ as follows. On the complement of $\displaystyle\bigcup_{i,j}\pi(U_{ij})$, we set $f_\alpha\upmodels_ef_\beta:=\pi\circ h\circ \pi^{-1}$. For each $i,j$ we define $f_\alpha\upmodels_ef_\beta:\pi(U_{ij})\rightarrow\pi(V_j)$ as some homeomorphism that extends continuously to the boundary of $\pi(U_{ij})$. \end{enumerate}
The map $f_\alpha\upmodels_ef_\beta$ is the \emph{essential mating} of $f_\alpha$ and $f_\beta$.
\end{definition}
To unpack the definition, the action of the essential mating on the 2-sphere is similar to that of the formal mating, save for two changes. First, postcritical points that fall into shared equivalence classes under $\sim_t$ are collapsed. Second, after collapsing along these `essential' equivalence classes, we modify the map slightly so that it does not map arcs to points and thus remains a branched covering. In making these changes, the essential mating retains much of the simplicity of the structure of the formal mating, but also serves as a map that is guaranteed to be Thurston equivalent to the topological mating--regardless of how arbitrary the selected homeomorphisms appear on the last step of the definition. \cite{LEI}
A notable implementation of this construction occurs when the postcritical points of a polynomial pairing fall into distinct equivalence classes of $\sim_t$. In this case, none of the postcritical points for the formal mating can be connected by a graph of external rays on $\mathbb{S}^2$, and so $\sim_e$ is trivial. Then $\pi$ acts much like the identity and there are no $U_{ij}$, so we have that $f_\alpha\upmodels_ef_\beta=\pi\circ h\circ\pi^{-1}$ on all of $\mathbb{S}'^2$. More simply, the essential and formal matings are the same map whenever polynomial postcritical points are not identified under $\sim_t$. If the essential and formal matings are the same for a pair of polynomials, we say that those polynomials are \emph{strongly mateable}.
To simplify notation from this point on, we will use $g$ to refer to an essential mating if it is clear to do so. As $\mathbb{S}'^2$ is homeomorphic to $\mathbb{S}^2$, we will further simplify notation by treating $g$ as a self-map on $\mathbb{S}^2$.
\subsection{The Thurston and Medusa algorithms}\label{thurstonmedusa}
Let $\mathcal{C}$ denote the space of orientation preserving complex structures on $(\mathbb{S}^2,P_f)$. We then define the \emph{Teichmuller space}, $\mathcal{T}_f$, to be the quotient of $\mathcal{C}$ by the group of orientation preserving diffeomorphisms of $(\mathbb{S}^2,P_f)$ that are isotopic to the identity. More specifically, we take two complex structures $\sigma_1, \sigma_2\in\mathcal{C}$ to be representatives of the same element $\tau\in\mathcal{T}_f$ if $\sigma_1=\sigma_2\circ g$ where $g$ is some orientation preserving homeomorphism on $\mathbb{S}^2$ that is isotopic to the identity relative to $P_f$.
Let $\sigma\in\mathcal{C}$. The pullback of $\sigma$ under $f$ gives a complex structure on $\mathbb{S}^2$, and so the mapping $\sigma\mapsto \sigma(f)$ induces a holomorphic mapping $\Sigma_f:\mathcal{T}_f\rightarrow\mathcal{T}_f$ on Teichmuller space. We call $\Sigma_f$ the \emph{Thurston pullback map}.
We then have the following:
\begin{proposition}[Thurston, Douady, Hubbard] The mapping $f$ is Thurston-equivalent to a rational function if and only if $\Sigma_f$ has a fixed point. \cite{TOPCHARACTERIZATION} \end{proposition}
This proposition is a necessary step en route to proving part of Thurston's topological characterization of rational maps. Thurston's theorem specifies that a critically finite branched map with hyperbolic orbifold is Thurston-equivalent to a rational function if and only if for any $f$-stable multi-curve $\Gamma$, the largest eigenvalue of the associated Thurston linear transformation is $<1$. Given this property, $\Sigma_f$ is holomorphic and thus distance non-increasing, while $\Sigma_f\ ^{\circ 2}$ is strictly contracting. If $\tau\in\mathcal{T}_f$, the sequence $\tau_n=\Sigma_f\ ^{\circ n}(\tau)$ then converges to the fixed point of $\Sigma_f$ in Teichmuller space, and so $f$ is Thurston-equivalent to a rational map. The interested reader may refer to \cite{TOPCHARACTERIZATION} for a full statement of the topological characterization of rational maps, and details on the proof referenced here.
The process of constructing the sequence $\tau_n$ to find the rational map $F$ which is Thurston-equivalent to $f$ is referred to as \emph{Thurston's algorithm}. Since the $\tau_n$ are equivalence classes of complex structures, it is typical to work with representatives $\sigma_n\in\mathcal{C}$ that have been normalized in some way. We then obtain a sequence of rational maps $F_n=\sigma_n\circ f\circ\sigma_{n+1}^{-1}$ as in Figure \ref{commutative1} that converge to the desired $F$.
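The convergence mechanism here is fixed-point iteration of an (eventually) strictly contracting map. The toy sketch below (ours; a one-dimensional contraction standing in for $\Sigma_f$, not an actual Teichmuller pullback) shows the geometric convergence one expects from the sequence $\tau_n=\Sigma_f\ ^{\circ n}(\tau)$.

```python
# Banach fixed-point iteration: iterating a strictly contracting self-map
# converges geometrically to its unique fixed point.
def sigma(t):
    return 0.5 * t + 1.0   # contraction factor 1/2; fixed point t* = 2

t = 0.0
for _ in range(60):        # error shrinks by the contraction factor each step
    t = sigma(t)

print(abs(t - 2.0) < 1e-12)
```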
In theory, Thurston's algorithm is very straightforward. In practice, normalizing our complex structures and labeling critically finite branched maps in a manner stringent enough to obtain the pullback sequence $\{\sigma_n\}$ is difficult. A crux of the problem is that when $f$ is a degree $m$ branched map, all points except the critical values have $m$ preimages under $f$---and constructing a pullback relies on understanding the action of $f$ on $\mathbb{S}^2$ well enough to distinguish between these preimage points. Thus, unless we understand the mapping behavior of $f$ we cannot build a meaningful pullback map. This problem is not insurmountable however, as Thurston's algorithm has been successfully tailored to specific kinds of branched maps: the Spider algorithm is one such adaptation for polynomials \cite{SPIDER}.
Since matings are of particular interest to us, we'll give a brief overview of another adaptation of Thurston's algorithm for matings: the Medusa algorithm \cite{MEDUSA}. This algorithm iteratively approximates the rational map that is Thurston-equivalent to the topological mating of two quadratic polynomials. The commutative diagram in Figure \ref{commutative1} emphasizes the relationship of the maps involved. The Medusa algorithm starts by encoding the mapping structure of the associated formal mating $f:\mathbb{S}^2\rightarrow\mathbb{S}^2$ using the \emph{Medusa}: a 1-skeleton structure embeddable in $\mathbb{S}^2$ that contains the equator and all external rays which meet $P_f$. The Carath\'{e}odory angle-doubling semiconjugacy specifies the expected action of $f$ on key regions of $\mathbb{S}^2$: notably, the equator and `legs' that form the Medusa.
\begin{figure}
\caption{The Medusa and pseudo-equator algorithms are based upon Thurston's algorithm, highlighted in the commutative diagram above.}
\label{commutative1}
\end{figure}
Next, we specify an embedding $\sigma_0$ of the Medusa into $\hat{\mathbb{C}}$ that sends the equator of $\mathbb{S}^2$ to the unit circle, and prescribes desirable images for the legs. The embedding of the two critical values of $f$ can then be used as parameters determining a rational map $F_0:\hat{\mathbb{C}}\rightarrow\hat{\mathbb{C}}$ in a particular normalized form. Pulling back the embedded Medusa by $F_0$ provides a new Medusa structure embedded in $\hat{\mathbb{C}}$, that we take to induce a new embedding of the Medusa, $\sigma_1$. We iterate this process to develop the sequences $\sigma_n$ and $F_n$, where $F_n$ serves as an approximation to the rational map that is the mating of the two polynomials.
Here, we emphasize the motivation behind this work. The Medusa algorithm guarantees convergence of $F_n$ to the geometric mating of polynomials $f_\alpha$ and $f_\beta$ in the event that $f_\alpha$ and $f_\beta$ are strongly mateable. In \cite{MEDUSA} however, it is noted that the algorithm fails in cases where postcritical points of the formal mating are identified in the topological mating. Indeed, Thurston's algorithm is intended for the setting where $f$ and the desired rational map are Thurston-equivalent---which is not the case when the two polynomials are \emph{not} strongly mateable. Investigating convergence of points at the boundary of Teichmuller space is one possible avenue of approaching this problem, but we suggest something more direct---start with a map $g$ that collapses the necessary points, and is Thurston-equivalent to the desired rational map. This is the essential mating constructed by Tan Lei in \cite{LEI}, and referenced earlier in Section \ref{mating}. Using this map instead of the formal mating would force Thurston's algorithm to generate a convergent sequence.
Changing the map that we use in conjunction with Thurston's algorithm, however, means that we may no longer use the Medusa structure to model our mating. To overcome this obstacle, we will substitute a different method that provides a combinatorial map representative of the mating. We will thus conclude our discussion of dynamics prerequisites to begin a description of this model.
\section{Finite subdivision rules and tilings}\label{yourfsr}
As a reminder, our ultimate goal in this paper will be to iteratively approximate rational maps that are Thurston-equivalent to topological matings. To do so later will require an understanding of the mapping behavior of the essential mating. This section will detail the use of \emph{finite subdivision rules} to construct a model for this behavior.
\subsection{Finite subdivision rules}\label{fsr} The mapping behavior of complex functions is often demonstrated via a distortion of gridlines: by embedding a grid or tiling in $\mathbb{C}$ and observing how the image or preimage of the complex function distorts the tiling, we visualize the action of our function on the complex plane. We will utilize a similar but more specialized tool to study our mapping behavior, described below.
\begin{definition} A \emph{finite subdivision rule} $\mathcal{R}$ is composed of the following three elements:
\begin{enumerate}
\item A finite 2-dimensional CW complex $S_\mathcal{R}$, called the \emph{subdivision complex}, with fixed cell structure so that $S_\mathcal{R}$ is the union of its closed 2-cells. Each closed 2-cell $\tilde{s}$ of $S_\mathcal{R}$ must have a CW structure $s$ on a closed 2-disk so that $s$ has $\geq 3$ vertices, the vertices and edges of $s$ are contained in $\partial s$, and the characteristic map $\psi_s:s\rightarrow S_\mathcal{R}$ which maps onto $\tilde{s}$ restricts to a homeomorphism on open cells. More colloquially, we will refer to the subdivision complex as a \emph{tiling}.
\item A finite 2-dimensional CW complex $\mathcal{R}(S_\mathcal{R})$ which is a subdivision of $S_\mathcal{R}$. We will refer to this as a \emph{subdivided tiling}.
\item A continuous cellular map $g_\mathcal{R}: \mathcal{R}(S_\mathcal{R})\rightarrow S_\mathcal{R}$, which maps open cells of $\mathcal{R}(S_\mathcal{R})$ homomorphically to $S_\mathcal{R}$. Such a map $g_\mathcal{R}$ is called a \emph{subdivision map}.
\end{enumerate}
Such finite subdivision rules may be applied recursively to yield iterated subdivisions of $S_\mathcal{R}$. \textup{\cite{FSRS} } \end{definition}
In essence, finite subdivision rules heavily parallel the grid distortion technique described above. The primary difference is that when we pull back the initial tiling for a finite subdivision rule, we do not obtain an arbitrarily distorted tiling. Instead, the result is a new tiling which is a subdivision of the original.
For us, a finite subdivision rule will be a finite combinatorial rule for subdividing tilings on a 2-sphere. We will assume that our tilings can be formed by `filling in' the faces of connected finite planar graphs on this 2-sphere with open tiles that are topological polygons. Each edge of the tiling must be a boundary edge to some tile, and tiles are not allowed to be monogons or digons, but they may be non-convex. We will allow for extreme cases where single edges serve as two sides of the boundary of a single tile. As a rudimentary example of such a finite subdivision rule, consider the following example:
\begin{example} In Figure \ref{fig:fsr}, $\hat{\mathbb{C}}$ is oriented so that the marked points $0, 1,$ and $\infty$ all lie on the equator. The positive real axis with these marked points determines a graph which yields a tiling of $\hat{\mathbb{C}}$ by a single topological quadrilateral. If we take a preimage of this structure under the map $z\mapsto z^2$, we obtain a tiling that has two quadrilaterals---each of which maps homeomorphically onto the quadrilateral in the original tiling. Here, the structure on the left is our subdivision complex $S_\mathcal{R}$, the structure on the right is the subdivided tiling $\mathcal{R}(S_\mathcal{R})$, and the map $z\mapsto z^2$ is the subdivision map.
\begin{figure}\label{fig:fsr}
\end{figure}
\end{example}
While a finite subdivision rule may be defined using analytic maps and embedded tilings as in the previous example, this is not necessary. We can use the mapping behavior of $n$-cells in a tiling to inductively determine the mapping behavior of ($n+1$)-cells, thus obtaining a subdivision map based on combinatorial data. The reader may reference \cite{FSRS} for a more detailed treatment of this topic.
\subsection{Hubbard trees}\label{Hubbard}
To build a finite subdivision rule that models the behavior of an essential mating, it will be helpful to have a finite invariant structure in mind to determine a tiling 1-skeleton. We start by considering invariant structures associated with the polynomial pair composing the mating. Julia sets are invariant under iteration of their associated polynomials, but the structure of a typical Julia set is not finite and hence cannot be used as a starting point for a finite subdivision rule. Thus, we would like to work with a discrete approximation to the Julia set: the \emph{Hubbard tree}.
The construction of a Hubbard tree for a polynomial $f_\theta$ can be simplified considerably in the case where $f_\theta$ is critically preperiodic and has a dendritic Julia set--which is the primary setting for this paper. For the reader's convenience, we thus present a definition restricted to this situation below:
\begin{definition} Let $f_\theta: \mathbb{C} \rightarrow \mathbb{C}$ be given by $f_\theta(z) = z^2 + c$ for some Misiurewicz point $c$, and let $f_\theta$ have Julia set $J_\theta$ and postcritical set $P_{f_\theta}$.
We say that a subset $X$ of $J_\theta$ is \emph{allowably connected} if $x,y\in X$ implies that there is a topological arc in $X$ that connects $x$ and $y$. The \emph{allowable hull} of a subset $A$ in $J_\theta$ is then the intersection of all allowably connected subsets of $J_\theta$ that contain $A$. Finally, the \emph{Hubbard tree} of $f_\theta$ is the allowable hull of $P_{f_\theta}$ in $J_\theta$. \cite{ORSAY} \end{definition}
\begin{figure}
\caption{The Julia set and Hubbard trees for $f_{1/4}$.}
\label{hubbardtree}
\end{figure}
The Hubbard tree as defined above is embedded in $\mathbb{C}$ and topologically equivalent to the notion of an \emph{admissible Hubbard tree} with preperiodic critical point as discussed by Bruin and Schleicher in \cite{HUBBARDTREES}. These notes, however, emphasize the combinatorial structure of the Hubbard tree as a graph with vertices marked by elements of $P_{f_\theta}$, rather than as an embedded object in the complex plane. This distinction is emphasized on the right side of Figure \ref{hubbardtree}. Bruin and Schleicher present several explicit algorithms that can be used to construct a topological copy of $T_\theta$ from the parameter $\theta$, building heavily on the notion that quadratic maps are degree 2 at their critical points and behave locally homeomorphically elsewhere. We can further expand upon these observations regarding the behavior of quadratic polynomials to note the action of $f_\theta$ on $T_\theta$: $f_\theta$ acts locally homeomorphically on $T_\theta$ everywhere except at the critical point, which maps with degree two, and iterated preimages of $T_\theta$ under $f_\theta$ give discrete approximations to $J_\theta$. We thus have that the $n$th preimage of a tree $T_\theta$ under its associated polynomial $f_\theta$ contains $2^n$ miniature copies of the tree that each map homeomorphically onto the tree via $f_\theta^{\circ n}$, as in Figure \ref{hubbardpreim}. Given a Hubbard tree and critical orbit portrait as in this figure, we may make note of the local homeomorphic behavior off of the critical point to `fill in' missing limbs of subsequent preimages of $T_\theta$. In this manner, we may then view $f_\theta$ as inducing a combinatorial map on trees.
\begin{figure}
\caption{{Preimages of a Hubbard tree under its associated polynomial.}}
\label{hubbardpreim}
\end{figure}
\subsection{Construction of a finite subdivision rule}\label{yourfsrrules} As previously referenced, one of the impediments in using Thurston's algorithm to find geometric matings is the difficulty in recording the topological behavior of a map in a useful way. The goal of this section is to remove this obstacle and provide a combinatorial rule that records enough of the action of the essential mating on $\mathbb{S}^2$ for us to successfully apply Thurston's algorithm.
There are several possible ways to construct finite subdivision rules to model the essential mating of two critically preperiodic quadratic polynomials. Examples for strongly mateable polynomial pairs are given in \cite{DISSERTATION}. In the event that the formal and essential matings are different maps, $\sim_e$ will be a nontrivial equivalence relation and we may have the opportunity to use the construction below which is detailed in both \cite{DISSERTATION} and \cite{FSRCONSTRUCTION}.
\begin{definition}\label{fsrdefn} Suppose $f_\alpha$ and $f_\beta$ are critically preperiodic monic quadratic polynomials such that $x\sim_e y$ for some points $x\in T_\alpha,y\in T_\beta$. \begin{enumerate}
\item Give $T_\alpha\bigsqcup T_\beta / \sim_e$ a graph structure on the quotient space of the essential mating by marking all postcritical points and branched points as vertices. If need be, mark additional periodic or preperiodic points on $T_\alpha$ or $T_\beta$ and the points on their forward orbits to avoid tiles forming digons. The associated 2-dimensional CW complex for this structure will yield the subdivision complex, $S_\mathcal{R}$.
\item Let $g$ denote the essential mating of $f_\alpha$ and $f_\beta$ and set $\mathcal{R}(S_\mathcal{R})$ to be the preimage of $S_\mathcal{R}$ under $g$, taking preimages of marked points of $S_\mathcal{R}$ to be marked points of $\mathcal{R}(S_\mathcal{R})$ .
\item If $\mathcal{R}(S_\mathcal{R})$ is a subdivision of $S_\mathcal{R}$ and if the essential mating $g:\mathcal{R}(S_\mathcal{R}) \rightarrow S_\mathcal{R}$ is a subdivision map, then $\mathcal{R}$ is a \emph{finite subdivision rule construction of essential type}. \end{enumerate} \end{definition}
In a simplified sense, this finite subdivision rule construction examines how the Hubbard trees of two polynomials are glued together when forming the domain space of the essential mating. The glued pair of trees is a 1-skeleton for a tiling on the 2-sphere; the pullback of this tiling by the essential mating sometimes generates a subdivided tiling. This finite subdivision rule construction then may allow us to reduce the essential mating to a combinatorial map.
\begin{example}\label{f14matingfsrex} Consider the essential mating of $f_{1/4}$ with itself. The Hubbard tree is presented on the left of Figure \ref{f14selfi}. The postcritical points of $f_{1/4}$ are the landing points of the $1/4$, $1/2$, and $0$ external rays. In the essential mating, the equivalence relation $\sim_e$ dictates that these landing points on opposing trees are identified using a $\theta$ and $1-\theta$ angle pairing. This means that in the self-mating, the pair of $1/2$ landing points are collapsed, as are the pair of $0$ landing points since we take the angles of external rays modulo 1. Gluing two copies of $T_{1/4}$ in this manner produces the graph on the right of Figure \ref{f14selfi}, which we take to be the 1-skeleton of the 2-sphere tiling $S_\mathcal{R}$.
\begin{figure}
\caption{On the left, $T_{1/4}$. On the right, the subdivision complex $S_\mathcal{R}$ for the essential self-mating of $f_{1/4}$.}
\label{f14selfi}
\end{figure}
Now, we may pull back the 1-skeleton of $S_\mathcal{R}$ by the essential mating $g=f_{1/4}\upmodels_e f_{1/4}$, as in Figure \ref{f14selfii}. This process mostly resembles pulling back two Hubbard trees by their respective polynomials (a solved problem as noted in Figure \ref{hubbardpreim}) and keeping track of identifications between these trees by $\sim_e$ at only a finite number of points. A good way to view reconciling these identifications is that the essential mating is a branched covering of the 2-sphere, and should thus behave locally homeomorphically except on the critical set. This means that we must `fill in' new edges in a manner that preserves this homeomorphic behavior, which is up to our discretion by step four in Definition \ref{essential}. We thus obtain the subdivided tiling $\mathcal{R}(S_\mathcal{R})$.
We can then use the boundary behavior of the four 2-tile hexagons in $\mathcal{R}(S_\mathcal{R})$ to note that these tiles map homeomorphically via $g$ onto the two open 2-tiles in $S_\mathcal{R}$. Thus, the construction generates a finite subdivision rule with subdivision map $g$.
\begin{figure}
\caption{On the left, the expected pullback of $S_\mathcal{R}$ by the essential mating as based on local behavior of Hubbard trees. The essential mating is locally homeomorphic everywhere except on the critical set, so we complete the pullback as shown on the right.}
\label{f14selfii}
\end{figure} \end{example}
It should be noted that there are some instances in which this construction scheme does not directly generate a finite subdivision rule:
\begin{theorem} Let $h$ be the formal mating of $f_\alpha$ and $f_\beta$. The essential type construction fails to yield a finite subdivision rule generated by this polynomial pairing if and only if there exists some $x,y$ in $T_\alpha\bigsqcup T_\beta$ with $x\sim_t y$, $x\not\sim_e y$, and $h(x)\sim_e h(y)$. \cite{FSRCONSTRUCTION} \end{theorem}
In short, this theorem notes that when all open tiles of the pullback $\mathcal{R}(S_\mathcal{R})$ can be mapped onto some open tile of $S_\mathcal{R}$, we will have a subdivision rule. Otherwise, if some tile of $\mathcal{R}(S_\mathcal{R})$ maps to $S_\mathcal{R}$ with nonhomeomorphic behavior, the construction does not allow for the essential mating to serve as a subdivision map between the two tilings. In such a case, we may require slight modification to the essential type construction in order to obtain a formal finite subdivision rule. Options for altering the construction to obtain a valid finite subdivision rule are detailed in \cite{DISSERTATION}.
\subsection{Pseudo-equators}
Theorem \ref{LRS} expresses when a mating can be viewed as equivalent to a rational map---but what about when a rational map can be viewed as a mating? In \cite{UNMATING}, it is shown that a postcritically finite rational map with Julia set the 2-sphere can be expressed as a mating if the map possesses a \emph{pseudo-equator}:
\begin{definition}\label{pseudoequator}
A homotopy $H: X\times [0,1]\rightarrow X$ is a \emph{pseudo-isotopy} if $H: X\times [0,1)\rightarrow X$ is an isotopy. We will assume $H_0=H(x,0)=x$ for all $x\in X$.
Let $f$ be a postcritically finite rational map, $C_0\subseteq \hat{\mathbb{C}}$ be a Jordan curve with $P_f\subseteq C_0$, and $C_1=f^{-1}(C_0)$. Then we say that $f$ has a \emph{pseudo-equator} if it has a pseudo-isotopy $H: \mathbb{S}^2\times[0,1]\rightarrow \mathbb{S}^2$ rel. $P_f$ with the following properties:
\begin{enumerate} \item $H_1(C_0)=C_1$. \item The set of points $w\in C_0$ such that $H_1(w)\in f^{-1}(P_f)$ is finite. (We will let $W$ denote the set of all such $w$.) \item $H_1:C_0\backslash W\rightarrow C_1\backslash f^{-1} (P_f)$ is a homeomorphism. \item $H$ deforms $C_0$ orientation-preserving to $C_1$. \end{enumerate}
\end{definition}
Possession of a pseudo-equator is not a necessary condition for a rational map to be equivalent to a mating, but considering how such a property arises from a mating will be useful in developing insight on the mapping properties of the essential mating.
\begin{theorem}\label{pseudothm} Set $g=f_\alpha\upmodels_ef_\beta$, and let $\mathbb{S}'^2$ denote the quotient space which is the domain of $g$. If there exists some Jordan curve $C$ on $\mathbb{S}'^2$ which contains $P_g$ and separates $(T_\alpha/\sim_e)\setminus P_g$ from $(T_\beta/\sim_e)\setminus P_g$, then $g$ has a pseudo-equator. \cite{FSRCONSTRUCTION} \end{theorem}
We provide a sketch of the proof in \cite{FSRCONSTRUCTION}. Given a finite subdivision rule formed using the construction from the previous section, the curve $C_0$ and associated pseudo isotopy can sometimes be constructed in a natural way from the equator $\mathcal{E}$ of the formal mating. If $C_0:=\mathcal{E}/\sim_e$ is a Jordan curve, $C_0$ separates the 2-sphere into two components; the closure of each containing the Hubbard tree of a polynomial in the mating. We will assume one to be colored black and one to be colored red. More simply, $C_0$ will appear as a simple closed curve drawn through $P_g$ that `separates' the red and black trees in the 1-skeleton of $S_\mathcal{R}$. We can then find the pullback curve $C_1$ using the finite subdivision rule construction from the previous section: the subdivision map assists us in pulling back $C_0$ as in Figure \ref{meyerex}, since open 2-tiles map homeomorphically.
We may then construct a pseudo-isotopy as described in Definition \ref{pseudoequator} between $C_0$ and its pullback $C_1$. Since both $C_0$ and $C_1$ can be taken to separate the red and black Hubbard trees off of the postcritical set, we may assume an orientation on both of these curves given by traversing each in the direction of increasing external angles for the black polynomial. This suggests natural parameterizations $C_0, C_1:[0,1)\rightarrow \mathbb{S}^2$ where points on each curve are given as a function of the associated external angle. We may then view $H$ as any homotopy which continuously distorts $C_0$ into $C_1$ by `pushing' each point $C_0(t)$ along the $t$ external ray to the point $C_1(t)$. (We may note that since both curves pass through the postcritical set, these points will be fixed by the homotopy.) Such a homotopy preserves orientation. Further, since $g$ is the subdivision map of a finite subdivision rule, we have that the remaining conditions on finiteness and homeomorphic mapping behavior for a pseudo-equator are satisfied.
A crucial idea to observe here is that if we can build a finite subdivision rule using an essential mating which has one of these pseudo-equators, pullbacks of $C_0$ behave in a predictable manner due to the pseudo-isotopy: they can be formed by deforming our original curve in an orientation-preserving manner and `pinching' at the critical points, as in Example \ref{f14f14pseudoex}:
\begin{example}\label{f14f14pseudoex}
We deepen our examination of the essential self-mating $g=f_{1/4}\upmodels_e f_{1/4}$. In Figure \ref{meyerex}, we first note the previously obtained finite subdivision rule for this mating: the 1-skeletons of $S_\mathcal{R}$ and $\mathcal{R}(S_\mathcal{R})$ are shown in red and black. Next, we form $C_0$ on the left by drawing a Jordan curve through $P_g$ that separates the red and black Hubbard trees on $S_\mathcal{R}$. Noting the homeomorphic mapping behavior of $g$ on open tiles of $\mathcal{R}(S_\mathcal{R})$, we may determine the location of the pullback curve $C_1$ on the right. We establish an orientation on both $C_0$ and $C_1$ to be given by traversing each curve so that the black tree is always on the left.
\begin{figure}\label{meyerex}
\end{figure}
In Figure \ref{meyerex}, the action of the pseudo-isotopy $H$ is to deform $C_0$ into $C_1$ by pinching the arc from $p_1$ to $p_4$ and the arc from $p_2$ to $p_3$ together at the critical point of the black tree. Simultaneously, $H$ pinches the remaining arcs together at the critical point of the red tree.\end{example}
\section{Main Results}\label{algorithms}
\subsection{Theory for the pseudo-equator algorithm}\label{convergence}
We utilize two normalization conventions given in \cite{MEDUSA}; one regarding embeddings of $\mathbb{S}$ to $\hat{\mathbb{C}}$ and one regarding rational maps.
First, suppose that $g:\mathbb{S}^2\rightarrow\mathbb{S}^2$ is the essential mating of two critically preperiodic quadratic polynomials $f_\alpha$ and $f_\beta$, and that $w_\alpha,w_\beta$ are the two critical points of $g$. Let $\mathcal{H}$ be the set of orientation preserving maps $\sigma:\mathbb{S}^2\rightarrow\hat{\mathbb{C}}$, normalized so that $\sigma(w_\alpha)=0, \sigma(w_\beta)=\infty,$ and $\sigma(1)=1$.
Then, any $\sigma\in\mathcal{H}$ can be taken as a global chart yielding a complex structure on $\mathbb{S}^2$. In this manner, we can take $\sigma$ to be a representative of some element of $\mathcal{T}_f$. Conversely, $\mathbb{S}^2$ equipped with a complex structure is conformally isomorphic to $\hat{\mathbb{C}}$, and so we may assume the existence of an associated conformal isomorphism $\sigma: \mathbb{S}^2\rightarrow\hat{\mathbb{C}}$ normalized on $0,1$, and $\infty$ as above.
Next, we note that rational maps of degree 2 can be normalized via conjugation with Mobius transformations so that 0 and $\infty$ are critical points and 1 is a fixed point. We'll refer to the collection of such normalized maps as $\mathcal{F}$, and note the following lemma:
\begin{lemma}[Henriksen, Lynch Boyd]\label{rationalmaplemma} Given two distinct points $u,v\in \hat{\mathbb{C}}\backslash\{1\}$, there exists a unique $F\in \mathcal{F}$ so that $F(0)=u$ and $F(\infty)=v$. If $u,v\neq\infty$, this map $F$ takes the form $F_{u,v}(z)= \frac{(u-1)vz^2-u(v-1)}{(u-1)z^2-(v-1)}$. The desired map $F$ is intuitively similar in structure in the event that $u$ or $v$ is equal to $\infty$.
Any degree 2 rational map normalized in this manner is uniquely determined by its two critical values $u$ and $v$. \cite{MEDUSA} \end{lemma}
We may then present the following theorems:
\begin{theorem}\label{convergencetheorem} Let $\sigma_n\in\mathcal{H}$ be given. Set $u_n=\sigma_n\circ g(w_\alpha), v_n=\sigma_n\circ g(w_\beta)$, and let $F_{u_n,v_n}$ be the map described in Lemma \ref{rationalmaplemma}. Then, there exists a unique mapping $\sigma_{n+1}\in\mathcal{H}$ such that the following diagram commutes:
$$\begin{CD} \mathbb{S}^2 @>\sigma_{n+1}>> \hat{\mathbb{C}}\\ @VVgV @VVF_{u_n,v_n}V\\ \mathbb{S}^2 @>\sigma_n>> \hat{\mathbb{C}} \end{CD} $$
Further, if $\sigma_n$ and $\sigma_n'$ represent the same element in $\mathcal{T}_g$, then $F_{u_n,v_n}$ and $F_{u_n,v_n}'$ are the same rational map and the lifts $\sigma_{n+1}$ and $\sigma_{n+1}'$ similarly represent the same element of $\mathcal{T}_g$ as well. \end{theorem}
\begin{theorem}\label{end} Fixing a starting $\sigma_0\in\mathcal{H}$ and repeatedly applying Theorem \ref{convergencetheorem} generates a sequence of rational maps $F_{u_n,v_n}$ that is equivalent to those produced by Thurston's algorithm. This sequence converges to a rational map $F$ that is Thurston-equivalent to the topological mating of $f_\alpha$ and $f_\beta$. \end{theorem}
This collection of assertions is similar in nature to Lemma 3.7, Theorem 3.8, and Theorem 3.9 of \cite{MEDUSA}, but generalized as we are not working with elements of Medusa space. The proofs follow in a similar manner.
\begin{proof}[Proof of Theorem \ref{convergencetheorem}]
Although $g$, $\sigma_n$, and $F_{u_n,v_n}$ are maps on $\mathbb{S}^2$ and $\hat{\mathbb{C}}$, we consider the following diagram on doubly punctured spheres:
$$\begin{CD} \mathbb{S}^2\setminus\Omega_f @. \hat{\mathbb{C}}\setminus \{0,\infty\}\\ @VVgV @VVF_{u_n,v_n}V\\ \mathbb{S}^2\setminus f(\Omega_f) @>\sigma_n>> \hat{\mathbb{C}}\setminus \{u_n,v_n\} \end{CD} $$\\
The fundamental group of any doubly punctured sphere is $\mathbb{Z}$. More specifically, we may fix 1 as a base point and identify the fundamental group on the doubly punctured sphere $\mathbb{S}^2$ (respectively, $\hat{\mathbb{C}}$) with $\mathbb{Z}$ so that the winding number about $w_\alpha$ or $g(w_\alpha)$ (respectively, about 0 or $u_n$) corresponds to the element $+1\in\mathbb{Z}$. The maps $g$ and $F_{u_n,v_n}$ are degree 2 branched covers of the sphere, and so are two-to-one covering maps when we omit branch points and preimages as above. The induced maps on fundamental groups $g_*$ and $F_{u_n,v_n*}$ are then both equivalent to multiplication by 2.
$\sigma_{n}$ on the other hand is a homeomorphism, so the induced map $\sigma_{n*}$ is equivalent to the identity. We then have that $F_{u_n,v_n*}$ and $(\sigma_n\circ g)_*=\sigma_{n*}\circ g_*$ have the same image. By the fundamental lifting theorem, there is a lift $\sigma_{n+1}: \mathbb{S}^2\setminus\Omega_g\rightarrow \hat{\mathbb{C}}\setminus\{0,\infty\}$ that commutes with the diagram above. This map $\sigma_{n+1}$ is unique if we specify that $\sigma_n(1)=1$. We may then continuously extend $\sigma_{n+1}$ to a map on spheres by assigning $\sigma_{n+1}(w_\alpha)=0$ and $\sigma_{n+1}(w_\beta)=\infty$ so that $\sigma_{n+1}\in\mathcal{H}$. This shows that the diagram given in the statement of Theorem \ref{convergencetheorem} commutes.
For the uniqueness portion of Theorem \ref{convergencetheorem}, we consider the following. If $\sigma_n$ and $\sigma_n'$ represent the same element in $\mathcal{H}$, there exists an isotopy relative to $P_g$ between these two maps. Since the isotopy does not disturb elements of the postcritical set, $u_n$ and $v_n$ are unchanged, thus $F_{u_n,v_n}$ and $F'_{u_n,v_n}$ are the same map. Further, our isotopy lifts to a new isotopy between $\sigma_{n+1}$ and $\sigma_{n+1}'$, and so these two lifts represent the same element in $\mathcal{T}_f$. \end{proof}
\begin{proof}[Proof of Theorem \ref{end}]
The reader may note that while Thurston's algorithm should use a pullback to define the rational map $F_n=\sigma_n\circ g\circ \sigma_{n+1}\ ^{-1}$, the above proof defines $\sigma_{n+1}$ as a lift, assuming that the analogous rational map $F_{u_n,v_n}$ is known. A useful consequence of working in normalized maps from $\mathcal{H}$ and $\mathcal{F}$ is that we can note both $F_n$ and $F_{u_n,v_n}$ always map $0\mapsto u_n$, $\infty\mapsto v_n$, and $1\mapsto 1$; which uniquely determines $F_n=F_{u_n,v_n}$ as a single member of $\mathcal{F}$. Thus, once we know $\sigma_{n-1}$ (and so the values of $u_n$ and $v_n$), we know $F_n$. Theorem \ref{convergencetheorem} guarantees that the lift map $\sigma_{n+1}$ is unique, and so the $\sigma_n$ in our algorithm and Thurston's algorithm coincide. We can then view the repeated application of Theorem \ref{convergencetheorem} as an algorithm generating the same sequence of embeddings $\sigma_n$ and rational maps $F_{u_n,v_n}$ as Thurston's algorithm.
Since the essential mating $g$ is Thurston-equivalent to the topological mating of $f_\alpha$ and $f_\beta$, we conclude by Thurston's algorithm that the sequence of rational maps $F_{u_n,v_n}$ converges to a rational map Thurston-equivalent to $g$. \end{proof}
\subsection{Implementation of the pseudo-equator algorithm}
While Theorem \ref{end} makes obtaining the rational map $F$ appear easy, implementation of the theorem as a computer algorithm involves some attention to detail. Our key result, an algorithm that obtains an approximation for the geometric mating of two polynomials, will be organized into a process that involves five major steps. We will call this the \emph{pseudo-equator algorithm}:
\begin{algorithm}\label{algorithmthing} Suppose that essential mating of two critically preperiodic polynomials has a hyperbolic orbifold, and that this mating permits construction of a finite subdivision rule and pseudo-equator curve. The \emph{pseudo-equator algorithm} refers to the following process for approximating the geometric mating for these two polynomials, as described below:
\begin{enumerate} \item Build the finite subdivision rule for the essential mating, per Definition \ref{fsrdefn}. \item Construct a pseudo-equator and embedding, per Theorem \ref{pseudothm}. \item Assign an approximation for the rational map, per Lemma \ref{rationalmaplemma}. \item Pull back the curve while noting locations of preimages of marked points, per Theorem \ref{convergencetheorem}. \item Repeat the approximation and pullback steps to obtain a sequence of rational maps, per Theorem \ref{end}. \end{enumerate}
\end{algorithm}
It should be recalled that Theorem \ref{end} guarantees not just the existence of some sequence of rational maps, but that this sequence converges to a desired rational map $F$ that can be taken as the geometric mating of our two polynomials. We expand upon each of these steps and their roles within the algorithm below:\\
\noindent \textbf{(1) Build the finite subdivision rule for the essential mating.}
There are several finite subdivision rule constructions available to describe the action of matings on $\mathbb{S}^2$ so we can apply Thurston's algorithm. In the event that the essential and formal matings are equivalent, we may use formal mating constructions given in \cite{DISSERTATION}; otherwise, we use constructions detailed in Section \ref{yourfsrrules} and in \cite{FSRCONSTRUCTION}. Since the Medusa algorithm applies in the former case , we focus our efforts on understanding the latter situation.
When $f_\alpha$ and $f_\beta$ are not strongly mateable, the essential type finite subdivision rule construction involves identifying Hubbard trees at marked points specified by $\sim_e$. This yields a 1-skeleton that can be completed to a tiling $S_\mathcal{R}$ on $\mathbb{S}^2$. Off of the marked points, the action of $g=f_\alpha\upmodels_e f_\beta$ on the 1-skeleton is similar to the action of $f_\alpha$ or $f_\beta$ on its associated Hubbard tree--that is, we have homeomorphic behavior off of the critical set. If we note expected behavior of marked points under the essential mating, we may develop a new 1-skeleton that can be completed to a subdivided tiling $\mathcal{R}(S_\mathcal{R})$, with the essential mating acting as a subdivision map.
\noindent\textbf{(2) Construct a pseudo-equator and embedding. }
If we have a finite subdivision rule that was generated in the above manner, the easiest way to attempt to find a pseudo-equator is to construct a closed curve $C_0$ through $P_g$ that separates the two Hubbard trees in the tiling 1-skeleton off of $P_g$. If this is a Jordan curve, Theorem \ref{pseudothm} guarantees that $g$ has a pseudo-equator. We may then use the finite subdivision rule to determine the pseudo-equator curve's pullback $C_1$ under the essential mating, since $n$-tiles map homeomorphically to other $n$-tiles. This pullback will then have a pseudo-isotopy $H_1:\mathbb{S}^2\times[0,1]\rightarrow \mathbb{S}^2$ so that $H_1(\cdot,1)$ maps $C_0$ orientation preserving to $C_1$.
To embed $C_0$ in $\hat{\mathbb{C}}$, we may without loss of generality select the black polynomial $f_\alpha$ to fix an orientation of the curve: we will assume $C_0$ to be the positively oriented boundary curve around the component of $\mathbb{S}^2\setminus C_0$ containing the black critical point. Recall that the polynomials $f_\alpha$ and $f_\beta$ have postcritical points given by landing points of external rays $\gamma_\alpha(\alpha\cdot 2^{n-1})$ and $\gamma_\beta(\beta\cdot 2^{n-1}), n\in\mathbb{N}$. Further, considering the external angles associated with $f_\alpha$, we view the curve $C_0$ as a path $C_0:[0,1]\rightarrow\mathbb{S}^2$ possessing a natural parameterization $C_0(t)$. (We may do this, in fact, for all pullbacks of $C_0$ as well.) Motivated by this, we let $\sigma_0:\mathcal{S}^2\rightarrow\hat{\mathbb{C}}$ be the map such that $C_0(t)\mapsto e^{2\pi i t}$, and on this unit circle we will mark the points 1, $\{e^{2\pi i \alpha \cdot 2^{n-1}}\}$ and $\{e^{2\pi i (1-\beta) \cdot 2^{n-1}}\}$, $n\in \mathbb{N}$ to correspond to the fixed point and postcritical points of the essential mating.
We complete the definition of $\sigma_0$ and extend it to an orientation preserving complex structure from $\mathbb{S}^2\rightarrow\hat{\mathbb{C}}$ by defining $\sigma_0$ to be a homeomorphic extension sending $w_\alpha$ to 0, $w_\beta$ to $\infty$, and 1 to 1. Our intent is to select $\sigma_0$ as a normalized representative of some element of Teichmuller space. While a different homeomorphic extension would in general yield a different representative of the same element in $\mathcal{T}_g$---and while we could make a similar comment regarding the exact path and parameterization for $C_0$---this is a moot point by Theorem \ref{convergencetheorem}. The remainder of the algorithm only deals in computations regarding pullbacks and embeddings of $C_0$---and only as far as determining the embedding of $P_g$ in $\hat{\mathbb{C}}$, the order in which the marked points of $P_g$ and their embeddings are connected, and the general homotopy type of the connecting curves.
The curve $C_0$ has a clear relationship to the Medusa described in Section \ref{thurstonmedusa}, and thus the above choice of $\sigma_0$ is intuitive. The Medusa resembles an equator on $\mathbb{S}^2$, with external ray limbs reaching toward the postcritical set. In both the Medusa setting and here, these equator-like curves are initially embedded in $\hat{\mathbb{C}}$ as the unit circle. A key difference is the following: if two points identify under $\sim_e$, these points are distinct on the Medusa, and there is an external ray pair joined at the equator that connects them both. In the essential mating, this pairing of external rays has been collapsed into the single marked point it intersects on the equator. One could say in this manner that our combinatorial model resembles a ``headband'' of sorts for the Medusa model: there is a clear deformation retract from the embedding of the Medusa to the circle $\sigma_0(C_0)$.
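As a small illustration of this marking scheme, the following Python sketch (function names are ours, and the angle sets are truncated at a finite depth) computes the marked angles $\{0\}\cup\{\alpha\cdot 2^{n-1}\bmod 1\}\cup\{(1-\beta)\cdot 2^{n-1}\bmod 1\}$ and their embeddings $e^{2\pi i t}$ on the unit circle; for the matings treated in the examples below, the resulting lists can be checked against the marked points given there.

```python
import cmath

def marked_angles(alpha, beta, depth=8):
    """Angles of the marked points on the unit circle: the fixed point
    (angle 0) together with alpha*2^(n-1) mod 1 and (1-beta)*2^(n-1)
    mod 1 for n = 1, ..., depth (the orbits are eventually periodic,
    so a modest depth captures every marked angle)."""
    angles = {0.0}
    for n in range(1, depth + 1):
        angles.add((alpha * 2 ** (n - 1)) % 1.0)
        angles.add(((1 - beta) * 2 ** (n - 1)) % 1.0)
    return sorted(angles)

def embed(t):
    """The embedding C_0(t) -> e^{2 pi i t} on the unit circle."""
    return cmath.exp(2j * cmath.pi * t)
```

For the self-mating of $f_{1/4}$ this returns the angles $\{0,\frac14,\frac12,\frac34\}$, embedded at $\{1,i,-1,-i\}$; for $f_{1/4}$ with $f_{1/8}$ it returns $\{0,\frac14,\frac12,\frac34,\frac78\}$.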
\noindent\textbf{(3) Assign an approximation for the rational map.}
Since postcritical points of the map $g:\mathbb{S}^2\rightarrow\mathbb{S}^2$ are marked points on $C_0$, the critical values of $g$ are explicitly embedded by $\sigma_{n}$ as some $u_n,v_n\in\hat{\mathbb{C}}$. We may then use the embedding of these critical values to determine the map $F_n=F_{u_n,v_n}$ as in Lemma \ref{rationalmaplemma}.
\noindent\textbf{(4) Pull back the curve, noting locations of preimages of marked points.}
Since a finite subdivision rule has been determined, $C_{n}$ and $C_{n+1}$ are ascertained by noting homeomorphic behavior on tiles, much as in step 2. Both of these curves contain the postcritical set, since $C_{n}\supseteq P_g$ implies $C_{n+1}=g^{-1}(C_{n})\supseteq g^{-1} (P_g)\supseteq P_g$---but $C_{n+1}$ will not be a Jordan curve, since it contains $g^{-1}(P_g)$ and thus the critical points of $g$. Since there exists a pseudo-isotopy $H_n:\mathbb{S}^2\times[0,1]\rightarrow\mathbb{S}^2$ such that $H_n(\cdot,1)$ maps $C_{n}$ onto $C_{n+1}$ preserving orientation, $H_n$ gives a canonical manner in which to traverse $C_{n+1}$ so that we visit the points of $P_g$ in the same order as on $C_{n}$.
Both $\sigma_{n}$ and $\sigma_{n+1}$ are orientation-preserving homeomorphisms, so $H_n$ may be lifted to a pseudo-isotopy on the Riemann sphere. Thus, we may expect $\sigma_{n}(C_{n})$ and $\sigma_{n+1}(C_{n+1})$ to visit the embedded points corresponding to elements of $P_g$ in an intuitively similar order as well. At this point, we establish which marked points are `necessary': we primarily care about the embedding of $P_g$, not $g^{-1}(P_g)$. We can determine the cyclic order of `important' versus `unimportant' (i.e.\ in $P_g$ versus not in $P_g$) marked points on $C_{n+1}$, and note that $\sigma_{n+1}$ will preserve this ordering---telling us where to embed elements of $P_g$. (We touch on further subtle nuances of this process in Section \ref{implementation}.)
\noindent\textbf{(5) Repeat the approximation and pullback steps.}
The $F_n$'s give a sequence of approximations to a rational map that is Thurston-equivalent to the mating $g$.
\begin{example}\label{example1} We examine the example detailed by Milnor in \cite{MILNORMATINGS}, the self-mating of $f_{1/4}$. The astute reader will note that this map actually has a parabolic $\{2,2,2,2\}$ orbifold rather than a hyperbolic orbifold, and thus Thurston's algorithm does not actually apply---but this mating provides a simplified introduction to the algorithm, and an interesting outcome nonetheless.
\noindent\textbf{Build finite subdivision rule}: The Hubbard tree for $f_{1/4}$ appears on the left of Figure \ref{f14selfi} with the postcritical set marked. Postcritical points identify under $\sim_e$ by a $\theta$ and $1-\theta$ external angle relation, so for the self-mating we obtain the subdivision complex $S_\mathcal{R}$ shown on the right of Figure \ref{f14selfi}. The preimage of this structure under the essential mating appears as shown on the right of Figure \ref{f14selfii}. The tiling $S_\mathcal{R}$, subdivided tiling $\mathcal{R}(S_\mathcal{R})$, and essential mating $g=f_{1/4}\upmodels_ef_{1/4}$ form an essential type finite subdivision rule.
\noindent\textbf{Construct pseudo-equator}: The desired pseudo-equator curve $C_0$ and associated pullback curve $C_1$ are shown in Figure \ref{meyerex}. If we positively orient $C_0$ with respect to the black polynomial, we may note that it passes through the marked postcritical points
\begin{center}$\{ p_1=C_0(\frac{1}{4}), p_2=C_0(\frac{1}{2}), p_3=C_0(\frac{3}{4}),p_4=C_0(0)\}$\end{center}
\noindent in the listed order. We select an embedding of $C_0$ into $\hat{\mathbb{C}}$ that sends $C_0$ to the unit circle via the mapping $C_0(t)\mapsto e^{2\pi i t}$. The above list of marked postcritical points then maps respectively to $\{ i, -1, -i,1\}$.
\noindent\textbf{Assign rational map}: Recall that the critical values of $g$ are $p_1$ and $p_3$. We may then set $u_0=\sigma_0(p_1)$ and $v_0=\sigma_0(p_3)$. Here, since $u_0=i$ and $v_0=-i$, we may utilize Lemma \ref{rationalmaplemma} to obtain $F_0=F_{u_0,v_0}$, given by $F_0(z)= \displaystyle\frac{(i+1)z^2+(i-1)}{(i-1)z^2+(i+1)}$.
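For readers who wish to experiment, here is a short Python sketch of this step. It implements the normalization $F_{u,v}(z)=\frac{(u-1)vz^2-u(v-1)}{(u-1)z^2-(v-1)}$ quoted in Section \ref{implementation}, which we are assuming is the map produced by Lemma \ref{rationalmaplemma}: it has critical points $0$ and $\infty$, critical values $u=F(0)$ and $v=F(\infty)$, and fixes $1$.

```python
def F(u, v, z):
    """The normalization quoted in the fine-tuning section:
    critical points 0 and infinity, critical values F(0) = u and
    F(infinity) = v, normalized so that F(1) = 1."""
    return ((u - 1) * v * z**2 - u * (v - 1)) / ((u - 1) * z**2 - (v - 1))

def F0(z):
    # the map displayed above for the self-mating of f_{1/4}
    return ((1j + 1) * z**2 + (1j - 1)) / ((1j - 1) * z**2 + (1j + 1))
```

One checks numerically that $F_{i,-i}$ and the displayed $F_0$ agree, e.g.\ $F(i,-i,0)=i$ and $F(i,-i,1)=1$.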
\noindent\textbf{Pullback}: In this mating, the ramification portrait is $p_1, p_3 \mapsto p_2 \mapsto p_4 \mapsto p_4$. The pullback of $C_0$ by $g$ traverses marked points in the following ordering:
\begin{center}$\{C_1(\frac{1}{8}),p_1=C_1(\frac{1}{4}),C_1(\frac{3}{8}),p_2=C_1(\frac{1}{2}),C_1(\frac{5}{8}),p_3=C_1(\frac{3}{4}),C_1(\frac{7}{8}),p_4=C_1(0)\},$\end{center}
\noindent where $C_1(\frac{1}{8})=C_1(\frac{5}{8})$ and $C_1(\frac{3}{8})=C_1(\frac{7}{8})$ are the critical points of the mating $g$. (We should note that the curve $C_1$ forks right whenever it approaches the black critical point $C_1(\frac{1}{8})=C_1(\frac{5}{8})$, and left whenever it approaches the red critical point $C_1(\frac{3}{8})=C_1(\frac{7}{8})$.) Pulling back the curve $\sigma_0(C_0)$ by $F_0$ yields a curve which traverses marked points in the ordering $\{0,i,\infty,-1,0,-i,\infty,1\}$. These lists of marked points on $C_1$ and the pullback of $\sigma_0(C_0)$ induce a `new' embedding of $P_g$, which we will denote $\sigma_1$.
\noindent\textbf{Repeat}: For this step we will assign a new rational map and repeat the pullback step. We assign the new rational map by examining the parameters $u_1=\sigma_1(p_1)$ and $v_1=\sigma_1(p_3)$, but here it so happens that $\sigma_0$ and $\sigma_1$ agree on $P_g$. (This is atypical---generally, we would expect $\sigma_1$ to assign new image elements to $P_g$.) Since the critical values of $g$ are assigned to the same respective elements of $\hat{\mathbb{C}}$, we have $F_1=F_0$.
Since $\sigma_0$ and $\sigma_1$ are the same map, we can infer that the pullback process does not change the embedding of $P_g$, so $\sigma_n$ and thus $F_n$ are both constant sequences. It follows that we started with a representative of the fixed point of $\Sigma_g$ in Teichm\"uller space, and so $F(z)= \frac{(i+1)z^2+(i-1)}{(i-1)z^2+(i+1)}$ is the rational map that is Thurston-equivalent to $g$.
\begin{figure}\label{iterates}
\end{figure}
The spheres in Figure \ref{iterates} demonstrate the iterated pullbacks of $\sigma_0(C_0)$ by $F$ under this process. The critical points $0$ and $\infty$ are located at the north and south poles of each of these spheres, which are depicted as slightly translucent so that we may view the paths of the curves on the far side. To provide further orientation on each of the spheres, $1$ is to the right, $-1$ is to the left, $i$ is to the rear right, and $-i$ is to the front left. It should be noted that the square grid formation occurring at the two critical points maps two-to-one onto the pair of adjacent `triangular' tiles centered at $i$ and $-i$ from the previous sphere. (We say `triangular', since there is technically a marked point at $\pm i$ in the middle segment which makes these formations composed of two adjacent topological quadrilaterals.) These `triangular' tile pairs typically make the embeddings of the critical values of $g$ visually appear as a source of dark spots, or curve bunching, in later iterations. Thus, we have a loose way to visually reaffirm that the rational maps $F_n$ form a constant sequence: the dark spots on the spheres do not change location from one iteration to the next, so the parameters $u_n$ and $v_n$ do not change, and we thus have a constant sequence for $F_n$.
\end{example}
It should be noted that generating an immediate fixed point of $\Sigma_g$ as above is actually a great stroke of luck. In general, the algorithm does not converge for other parabolic examples (such as the self-mating of $f_{1/6}$), and it is atypical to obtain constant sequences for $\sigma_n$ when examining matings with hyperbolic orbifolds. The algorithm is intended to be applied to maps with five or more postcritical points (guaranteeing a hyperbolic orbifold), in which case $F_n$ will converge non-trivially to the desired rational map. We will now demonstrate a more typical example.
\begin{example}\label{labelforexample} We examine the mating of $f_{1/4}$ and $f_{1/8}$. The essential mating has five postcritical points, and thus a hyperbolic orbifold. This means that Thurston's algorithm applies, and we may attempt to use Algorithm \ref{algorithmthing} to find a rational map approximation to the geometric mating of this pair.
\noindent\textbf{Build finite subdivision rule}: In Figure \ref{endexample}, we have constructed a finite subdivision rule for this mating using the construction detailed in Section \ref{yourfsrrules}. The 1-skeleton of the subdivision complex $S_\mathcal{R}$ is in black and red on the left, and the 1-skeleton of the subdivided complex $\mathcal{R}(S_\mathcal{R})$ is similarly shown in black and red on the right. With the tiling $S_\mathcal{R}$, the subdivision $\mathcal{R}(S_\mathcal{R})$, and the subdivision map $g=f_{1/4}\upmodels_ef_{1/8}$, we have a finite subdivision rule. A critical portrait for this subdivision map is shown on the left of Figure \ref{endexample}.
\begin{figure}
\caption{The critical orbit portrait and finite subdivision rule associated with $f_{1/4}\upmodels_ef_{1/8}$, along with marked pseudo-equator curves. $C_0$ is marked in blue above and its pullback $C_1$ is marked in blue below. Here, $f_{1/4}$ is taken to be the black polynomial. }
\label{endexample}
\end{figure}
\noindent\textbf{Construct pseudo-equator}: The desired pseudo-equator curve $C_0$ is constructed by separating the black $T_{1/4}$ tree from the red $T_{1/8}$ tree with a Jordan curve through the postcritical set of $g$. This curve $C_0$ is depicted in dashed blue lines on the left of Figure \ref{endexample}. Since the subdivision map $g$ is locally homeomorphic on open tiles, we infer from the finite subdivision rule that its pullback of $C_0$, the curve $C_1$, appears as the dashed blue curve on the right of Figure \ref{endexample}.
We will orient both $C_0$ and $C_1$ positively with respect to the black polynomial. We will further assume the existence of canonical parameterizations $C_0, C_1:[0,1]\rightarrow\mathbb{S}^2$ respecting this orientation, where location on the curve is taken to be a function of the angle of external ray. This sets $C_0$ as passing through the following marked points:
\begin{center}$\{p_1=C_0(\frac{1}{4}), p_2=C_0(\frac{1}{2}), p_3=C_0(\frac{3}{4}), p_4=C_0(\frac{7}{8}), p_5=C_0(0) \}.$\end{center}
We embed $C_0$ in $\hat{\mathbb{C}}$ as the unit circle via $\sigma_0$, which is given by the mapping $C_0(t)\mapsto e^{2\pi i t}$ for $t\in [0,1]$. Thus, the above listed marked points of $P_g$ now map respectively to the points
\begin{center}$\{ i, -1, -i, \frac{\sqrt{2}-\sqrt{2}i}{2}, 1 \}.$\end{center}
\noindent\textbf{Assign rational map}: Since $p_1$ and $p_4$ are the critical values of $g$, we will set $u_0=\sigma_0(p_1)$, and $v_0=\sigma_0(p_4)$ so that $u_0=i$ and $v_0= \frac{\sqrt{2}-\sqrt{2}i}{2}$. We may then use Lemma \ref{rationalmaplemma} to obtain the first rational map approximation $F_0=F_{u_0,v_0}$.
\noindent\textbf{Pullback}: The pullback of $C_0$ by $g$ traverses marked points in the following ordering:
\begin{center}$\{C_1(\frac{1}{8}),p_1=C_1(\frac{1}{4}),C_1(\frac{3}{8}),C_1(\frac{7}{16}), p_2=C_1(\frac{1}{2}),C_1(\frac{5}{8}),p_3=C_1(\frac{3}{4}),p_4=C_1(\frac{7}{8}),C_1(\frac{15}{16}), p_5=C_1(0)\},$\end{center}
\noindent where $C_1(\frac{1}{8})=C_1(\frac{5}{8})$ and $C_1(\frac{7}{16})=C_1(\frac{15}{16})$ are respectively the black and red critical points of the mating $g$. (We should note that, similar to the last example, the orientation of $C_1$ is such that the curve forks right whenever it approaches the black critical point and left whenever it approaches the red critical point.) Decimal approximations for the marked points traversed by the pullback curve $F_0^{-1}(\sigma_0(C_0))$ are as follows:
\begin{center} $\{0,.643594i,1.18921i, \infty,-1,0,-.643594i,-1.18921i,\infty,1\}$.\end{center}
These lists of marked points on $C_1$ and the pullback of $\sigma_0(C_0)$ induce a new embedding of $P_g$, which we will denote $\sigma_1$.
\noindent\textbf{Repeat}: For this step we iteratively assign a new rational map and repeat the pullback step. The map $\sigma_1$ embeds the elements of $P_g$ as follows: $$p_1\mapsto .643594i, \quad p_2\mapsto -1, \quad p_3 \mapsto -.643594i, \quad p_4\mapsto -1.18921i, \quad p_5\mapsto 1.$$ We assign the new rational map by noting the new parameters $u_1=\sigma_1(p_1)=.643594i$ and $v_1=\sigma_1(p_4)=-1.18921i$, and applying Lemma \ref{rationalmaplemma} to obtain the approximation $F_1=F_{u_1, v_1}$.
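As a sanity check on these decimal values, the following Python sketch (variable names are ours) verifies that $F_0=F_{u_0,v_0}$ carries each $\sigma_1(p_k)$ to $\sigma_0(g(p_k))$, where the action of $g$ on $P_g$ ($p_1\mapsto p_2\mapsto p_5\mapsto p_5$ and $p_4\mapsto p_3\mapsto p_2$) is read off by doubling the external angles in the parameterization above, and the normalization of $F_{u,v}$ is the one quoted in Section \ref{implementation}.

```python
def F(u, v, z):
    # normalization from the fine-tuning section: F(0) = u, F(inf) = v, F(1) = 1
    return ((u - 1) * v * z**2 - u * (v - 1)) / ((u - 1) * z**2 - (v - 1))

u0, v0 = 1j, (2**0.5 - 2**0.5 * 1j) / 2          # sigma_0(p_1), sigma_0(p_4)
sigma0 = {1: 1j, 2: -1, 3: -1j, 4: v0, 5: 1}
sigma1 = {1: .643594j, 2: -1, 3: -.643594j, 4: -1.18921j, 5: 1}
g_on_P = {1: 2, 2: 5, 3: 2, 4: 3, 5: 5}          # p_k maps to p_{g_on_P[k]}

# consistency: F_0 carries the new embedding of p to the old embedding of g(p)
for p, q in g_on_P.items():
    assert abs(F(u0, v0, sigma1[p]) - sigma0[q]) < 1e-3
```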
Unlike the previous example, we will have a nontrivial sequence of rational map approximations, since $\sigma_0$ is not a representative of the fixed point of $\Sigma_g$. Continuing to iterate the pullback process from $C_1$ onward generates the collection of curves $C_n$ depicted in Figure \ref{aiterates}.
\begin{figure}
\caption{Pullbacks of the equator by a sequence of rational maps which approximate the geometric mating of $f_{1/4}$ and $f_{1/8}$.}
\label{aiterates}
\end{figure}
We may note in Figure \ref{aiterates}, similar to the comments at the end of Example \ref{example1}, that `triangular' tile pairs tend to mark the locations of critical values of the map, where in later iterations they visually appear to be a source of prominent dark spots and curve bunching. The settling of darkened spots into relatively stable locations upon later iterations is reflective of the convergence of parameters $u_n$ and $v_n$; thus the convergence of the rational map approximations $F_n$ to our desired geometric mating.
\end{example}
\subsection{Fine tuning and practical concerns}\label{implementation}
In general, we wish to start with a curve containing the postcritical set that separates $0$ and $\infty$. Since we are working with an iterative algorithm, this curve is typically approximated by an ordered list of points in the complex plane: a discretized parameterization of the curve, in a way. The ordering of the points suggests the direction in which we connect the dots to form a Jordan curve, and the direction in which we traverse the curve specifies an orientation.
Taking a pullback of the curve for our purposes is then somewhat difficult: since the map is two-to-one and the original curve contains the two critical values of the map, the pullback will not be simple. Yet, in order to select a new set of marked points for the next iteration, we are required to have an understanding of the orientation of the pullback. Finding the points in the pullback of the curve is trivial, but how do we ascertain this orientation and maintain a useful discretized parameterization after each pullback?
We must consider the following:
\textbf{Handedness at the critical point:} The pullback is not Jordan, and intersects itself in at least two locations (the critical points). This means that near these points of intersection, we have a choice as to which fork to take to continue along the curve.
The initial curve that we are using here is actually an approximation to the pseudo-equator for the associated rational map. If we think of the pseudo-equator as separating our sphere into a black tile and a red tile, the pullback preserves the orientation of the pullback curve with respect to the colors of the pullback tiles. That is, if the curve is positively oriented with respect to the black tiles, the pullback curve will be positively oriented with respect to the pullback of the black tiles. Further, the curve initially separates the two Julia sets of the polynomials in the mating. We should have a path on the curve to traverse that preserves not only the appropriate orientation, but also ``separates'' the Julia sets (as best as possible, since the pullback intersects the critical points).
This suggests that whenever a critical point is approached as we traverse the pullback, we must fork in a particular direction in order to maintain orientation and return the expected finite subdivision rule. If we stereographically project our complex points, we can check this in a rudimentary way using cross products and the right-hand rule.
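In stereographic coordinates this rudimentary check might look as follows (a sketch with hypothetical helper names): project three consecutive curve points to the sphere and take the sign of the scalar triple product, which distinguishes a left fork from a right fork by the right-hand rule.

```python
def stereo(z):
    """Stereographic projection of z in C onto the unit sphere in R^3."""
    d = abs(z) ** 2 + 1
    return (2 * z.real / d, 2 * z.imag / d, (abs(z) ** 2 - 1) / d)

def turn(z1, z2, z3):
    """Sign of the scalar triple product (B - A) x (C - B) . B for the
    projected path A -> B -> C: its sign distinguishes a left fork
    from a right fork at z2 (right-hand rule)."""
    A, B, C = stereo(z1), stereo(z2), stereo(z3)
    u = [B[i] - A[i] for i in range(3)]
    v = [C[i] - B[i] for i in range(3)]
    w = (u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0])
    return sum(w[i] * B[i] for i in range(3))
```

Traversing a small circle counterclockwise and clockwise yields opposite signs, which is all the fork-selection heuristic needs.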
\textbf{Branches of the square root:} Using the normalization $\displaystyle F_n(z)= \frac{(u_n-1)v_nz^2-u_n(v_n-1)}{(u_n-1)z^2-(v_n-1)}$, we may think of the pullback of $F_n$ as the composition of a M\"obius transformation and the square root function. Since M\"obius transformations are orientation preserving on the Riemann sphere, we could apply this map to our discrete curve parameterization and have a direct correspondence with another discrete parameterization of the curve. The square root, however, causes potential problems. We typically think of the pullback by square root as resulting in two pieces: a `positive' version given by the principal branch of the square root function, and a `negative' version given by multiplying the former by negative one. The problem is that, depending on our curve, this canonical branch of the square root function may cut our pullback into more than two pieces, jumbling the order in which we wish to traverse the points. A sample problematic curve is depicted in Figure \ref{sqrt}. We must thus pay extra attention to pairs of consecutive points on the curve that lie on opposite sides of the negative real axis. In general, two consecutive points listed on the pullback curve should be near each other, so the appropriate branch of the square root to continue on should be selected accordingly. (In essence, the effect is similar to picking a `smart' branch cut of the square root function which only intersects our curve in one spot, and \emph{then} we can worry about adjoining the positive and negative pieces of our curve for the pullback.)
\begin{figure}
\caption{The problem with using the canonical branch of the square root for pullbacks of $C_n$: orientation is important, but harder to keep record of when our pullback curve is cut into several pieces.}
\label{sqrt}
\end{figure}
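A minimal Python sketch of this branch tracking (assuming the M\"obius part of the pullback has already been applied, so the input is the list of values whose square roots we want): at each step we take whichever square root lies closer to the previously chosen lift, rather than always the principal branch.

```python
import cmath

def continuous_sqrt(ws, seed=1):
    """Lift a discretized curve w_0, w_1, ... through z -> z^2 by
    choosing, at every step, whichever square root lies closer to the
    previously chosen point (instead of always the principal branch).
    `seed` (+1 or -1) selects which of the two lifts we start on."""
    zs = [seed * cmath.sqrt(ws[0])]
    for w in ws[1:]:
        r = cmath.sqrt(w)
        zs.append(r if abs(r - zs[-1]) <= abs(-r - zs[-1]) else -r)
    return zs
```

For instance, lifting the unit circle once around this way produces a continuous half-circle from $1$ to $-1$, with no jump where the curve crosses the negative real axis.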
\textbf{Pruning:} Naively keeping the pullback points obtained in each iteration as a record of $C_n$ will double the amount of data recorded on each iteration. Memory and processing constraints will thus necessitate simplifying $C_n$ on each iteration before passing through to the next. Since the ordering of the postcritical points on $C_n$ is really the most crucial piece of information to record, we can work with pseudo-isotopies of $C_n$ instead. One way to do this is to remove points on the curve that do not alter the homotopy type of $C_n$ relative to $P_g$. In \cite{MEDUSA}, a process called ``circlifying'' is suggested. In general, the specific method of simplifying $C_n$ while preserving approximate homotopy type is not so important as just implementing \emph{some} way to reduce the data requirements of $C_n$ before moving on to the next step.
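As a crude illustration (a naive decimation, not the ``circlifying'' of \cite{MEDUSA}, and adequate only when consecutive retained points stay close enough that the homotopy type rel $P_g$ is visibly unchanged), one might thin the discretized curve while always retaining the marked points:

```python
def prune(curve, marked, budget=1000):
    """Thin a discretized curve (a list of complex points) while always
    retaining the marked points, so that the cyclic order of P_g on C_n
    survives.  Naive decimation; a careful implementation would verify
    the homotopy type rel P_g is preserved."""
    step = max(1, len(curve) // budget)
    return [z for k, z in enumerate(curve) if k % step == 0 or z in marked]

# demo: a 10-point curve thinned to roughly half, keeping two marked points
curve = [complex(k, 0) for k in range(10)]
marked = {complex(3, 0), complex(7, 0)}
thinned = prune(curve, marked, budget=5)
```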
\section{Open questions and remarks} \label{connections}
There are a few clear avenues along which to pursue further investigation, and on which we should make further remarks. A few are highlighted below.
\subsection{Hybrid models and extensions} The focus of this paper has been on quadratic matings in which our initial polynomial pair is not strongly mateable. This is fairly restrictive. To develop a clearer picture of matings, there are two ways that we could attempt a generalization of the algorithm: working in general quadratic matings, and/or working with matings in higher degrees.
To consider the general quadratic case, it may be helpful to note some key similarities and differences between the Medusa and pseudo-equator models. The pseudo-equator curve is, in essence, a deformation retract of the Medusa. Both models then contain Jordan curve structures that are expected to deform into the Julia set of the mating upon iteration. The curve that does this in the pseudo-equator model contains postcritical points while the analogous curve in the Medusa model does not. In a way, this dichotomy in structure makes sense: we might expect that postcritical points lie on one of these Jordan curve structures if they appear in the Julia set of the mating, and off of them when they appear in the Fatou set. As the Medusa algorithm has difficulty with some cases where the Julia set of the mating is $\mathbb{S}^2$, and the pseudo-equator algorithm is not designed for cases where postcritical points lie in the Fatou set of the mating, this suggests the need for a hybridized Medusa-pseudo-equator model.
To consider the higher degree cases, we could generalize the technique given in this paper: obtain a pseudo-equator or Medusa-like structure, find an appropriate finite subdivision rule, apply the subdivision rule to provide mapping instructions for pullbacks, and use an appropriate rational function normalization to apply Thurston's algorithm. This will require some effort in crafting appropriate normalizations and obtaining mapping instructions for pullbacks, since higher degree matings will typically be more complicated maps.
\begin{figure}
\caption{{The ``pseudo-equator" is pinched by $\sim_e$ into a non-Jordan curve.}}
\label{meyerex3}
\end{figure}
\subsection{Unmating of rational maps} In \cite{UNMATING}, Meyer notes that the existence of a pseudo-equator is sufficient, but not necessary, for a rational map to arise as a mating. The methods of this paper work in the quadratic cases where a pseudo-equator can in some manner be found. Not all non-hyperbolic matings have pseudo-equators, however. A potential reason is that the path $C$ is not always a Jordan curve: any time $\sim_e$ contains equivalence classes that include multiple postcritical or critical points from one of the polynomials in the mating, the equator $\Gamma$ is pinched to form $C$. This falls outside of the scope of the definition of a pseudo-equator, which concerns the deformation of a Jordan curve. For instance, the example given in \cite{UNMATING} for $f_{1/6}\upmodels f_{13/14}$ presents with subdivision complex $S_\mathcal{R}$ and $C$ as shown in Figure \ref{meyerex3}. Notice the pinching of the blue equator curve due to the postcritical identifications on $f_{13/14}$.
We can still use finite subdivision rules to determine the pullbacks of these `pinched' pseudo-equators, much as we have treated pseudo-equators in the rest of this paper. The Carath\'eodory semiconjugacy is still applicable after this pinching, and will reflect the action of the essential mating of the desired polynomials on $\mathbb{S}^2$. There are still pseudo-isotopies between such pinched curves and their pullbacks, so the general idea of the pseudo-equator algorithm should work even in the case that we examine a non-hyperbolic mating with no pseudo-equator. The computer implementation for such a case would be more difficult, but not impossible.
While pseudo-equators are thus a sufficient but not necessary condition for a rational map to be a mating, the extension from pseudo-equators to pinched pseudo-equators may enlarge the class of rational maps that can be decomposed as matings. This suggests the following question: would a pinched pseudo-equator serve as a sufficient \emph{and} necessary condition for the unmating of a map? If so, and if we tackled the computational details involved in implementation, we could possibly obtain approximations of rational functions for all matings.
\end{document} |
\begin{document}
\title[Littlewood-Offord, circular law, universality]{ From the Littlewood-Offord problem to the Circular Law: universality of the spectral distribution of random matrices }
\author{Terence Tao} \address{Department of Mathematics, UCLA, Los Angeles CA 90095-1555} \email{tao@math.ucla.edu} \thanks{T. Tao is supported by NSF grant CCF-0649473 and a grant from the MacArthur Foundation.}
\author{Van Vu} \address{Department of Mathematics, Rutgers, Piscataway, NJ 08854} \email{vanvu@math.rutgers.edu}
\thanks{V. Vu is supported by NSF Career Grant 0635606.}
\subjclass{15A52, 60G50}
\begin{abstract} The famous \emph{circular law}
asserts that if $M_n$ is an $n \times n$ matrix with iid complex entries of mean zero and unit variance, then the empirical spectral distribution (ESD) of the normalized matrix $\frac{1}{\sqrt{n}} M_n$ converges both in probability and almost surely to the uniform distribution on the unit disk $\{ z \in \C: |z| \leq 1 \}$. After a long sequence of partial results that verified this law under additional assumptions on the distribution of the entries, the circular law is now known to be true for arbitrary distributions with mean zero and unit variance. In this survey we describe some of the key ingredients used in the establishment of the circular law at this level of generality, in particular recent advances in understanding the Littlewood-Offord problem and its inverse. \end{abstract}
\maketitle
\section {ESD of random matrices}
For an $n \times n$ matrix $A_n$ with complex entries, let $$\mu_{A_n} := \frac{1}{n} \sum_{i=1}^n \delta_{\lambda_i}$$ be the \emph{empirical spectral distribution} (ESD) of its eigenvalues $\lambda_i \in \BBC$, $i=1, \dots, n$ (counting multiplicity); thus for instance
$$ \mu_{A_n}( \{ z \in \BBC | \Re z \leq s; \Im z \leq t \} ) = \frac{1}{n} | \{ 1 \leq i \leq n: \Re \lambda_i \leq s; \Im \lambda_i \leq t \}|$$
for any $s,t \in \R$ (we use $|A|$ to denote the cardinality of a finite set $A$), and $$ \int f\ d\mu_{A_n} = \frac{1}{n} \sum_{i=1}^n f(\lambda_i)$$ for any continuous compactly supported $f$. Clearly, $\mu_{A_n}$ is a discrete probability measure on $\BBC$.
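To make the definition concrete, here is a short numpy sketch (our own illustration; the function name is hypothetical) evaluating $\int f\,d\mu_{A_n}$ for a matrix with known eigenvalues.

```python
import numpy as np

def esd_integral(A, f):
    """(1/n) * sum_i f(lambda_i): the integral of f against mu_A,
    with eigenvalues counted with multiplicity."""
    lam = np.linalg.eigvals(A)
    return float(np.mean(f(lam)))

# toy check on a matrix with known eigenvalues 1, 2, 3
A = np.diag([1.0, 2.0, 3.0])
val = esd_integral(A, lambda z: z.real**2)                      # (1 + 4 + 9)/3
rect = esd_integral(A, lambda z: (z.real <= 2).astype(float))   # mu({Re z <= 2}) = 2/3
```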
A fundamental problem in the theory of random matrices is to compute the limiting distribution of the ESD $\mu_{A_n}$ of a sequence of random matrices $A_n$ with sizes tending to infinity \cite{Mehta, BS}. In what follows, we consider normalized random matrices of the form $A_n = \frac{1}{\sqrt{n}} M_n$, where $M_n = (\a_{ij})_{1 \leq i,j \leq n}$ has entries that are iid random variables $\a_{ij} \equiv \a$. Such matrices have been studied at least as far back as Wishart \cite{wish} (see \cite{Mehta, BS} for more discussion).
One of the first limiting distribution results is the famous semi-circle law of Wigner \cite{wig}. Motivated by research in nuclear physics, Wigner studied Hermitian random matrices with (upper triangular)
entries being iid random variables with mean zero and variance one. In the Hermitian case, of course, the ESD is supported on the real line $\R$. He proved that the expected ESD of a normalized $n \times n$ Hermitian matrix $\frac{1}{\sqrt{n}} M_n$, where $M_n = (\a_{ij})_{1 \leq i,j \leq n}$ has iid gaussian entries $\a_{ij} \equiv N(0,1)$, converges in the sense of probability measures\footnote{We say that a collection $\mu_n$ of probability measures converges to a limit $\mu$ if one has $\int f\ d\mu_n \to \int f\ d\mu$ for every continuous compactly supported function $f$, or equivalently if $\mu_n( \{ z \in \BBC | \Re z \leq s; \Im z \leq t \} )$ converges to $\mu( \{ z \in \BBC | \Re z \leq s; \Im z \leq t \} )$ for all $s, t$.} to the semi-circle distribution \begin{equation}\label{semicircle}
\frac{1}{2\pi} 1_{[-2,2]}(x) \sqrt {4-x^2}\ dx \end{equation} on the real line, where $1_E$ denotes the indicator function of a set $E$.
\begin{theorem}[Semi-circular law for the Gaussian ensemble]\label{theorem:Wigner}\cite{wig} Let $M_n$ be an $n \times n$ random Hermitian matrix whose entries are iid gaussian variables with mean 0 and variance 1. Then, with probability one, the ESD of $\frac{1}{\sqrt n} M_n$ converges in the sense of probability measures to the semi-circle law \eqref{semicircle}. \end{theorem}
Henceforth we shall say that a sequence $\mu_n$ of random probability measures converges \emph{strongly} to a deterministic probability measure $\mu$ if, with probability one, $\mu_n$ converges in the sense of probability measures to $\mu$. We also say that $\mu_n$ converges \emph{weakly} to $\mu$ if for every continuous compactly supported $f$, $\int f\ d\mu_n$ converges in probability to $\int f\ d\mu$, thus $\P( |\int f\ d\mu_n - \int f\ d\mu| > \eps ) \to 0$ as $n \to \infty$ for each $\eps > 0$. Of course, strong convergence implies weak convergence; thus for instance in Theorem \ref{theorem:Wigner}, $\mu_{\frac{1}{\sqrt{n}} M_n}$ converges both weakly and strongly to the semicircle law.
Wigner also proved similar results for various other distributions, such as the Bernoulli distribution (in which each $\a_{ij}$ equals $+1$ with probability $1/2$ and $-1$ with probability $1/2$). His work has been extended and strengthened in several aspects \cite{Arnold1, Arnold2, Pastur}. The most general form was proved by Pastur \cite{Pastur}:
\begin{theorem}[Semi-circular law]\label{theorem:Pastur} \cite{Pastur} Let $M_n$ be an $n \times n$ random Hermitian matrix whose entries are iid complex random variables with mean 0 and variance 1. Then ESD of $\frac{1}{\sqrt n} M_n$ converges (in both the strong and weak senses) to the semi-circle law. \end{theorem}
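Theorem \ref{theorem:Pastur} is straightforward to probe numerically. The following numpy sketch (seed, size, and tolerance are our own choices) builds a Wigner matrix with Bernoulli $\pm 1$ entries, a genuinely non-gaussian distribution, and compares the empirical mass of $[-1,1]$ with the semicircle value $\frac{1}{2\pi}\int_{-1}^{1}\sqrt{4-x^2}\,dx=\frac{\sqrt{3}+2\pi/3}{2\pi}\approx 0.609$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Wigner matrix: iid +/-1 entries above the diagonal, then symmetrize
U = np.triu(rng.choice([-1.0, 1.0], size=(n, n)), 1)
M = U + U.T + np.diag(rng.choice([-1.0, 1.0], size=n))

# the ESD of the normalized matrix is (essentially) supported on [-2, 2]
lam = np.linalg.eigvalsh(M / np.sqrt(n))

# empirical mass of [-1, 1] vs. the semicircle value ~ 0.609
frac = np.mean(np.abs(lam) <= 1.0)
```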
The situation with non-Hermitian matrices is much more complicated, due to the presence of \emph{pseudospectrum}\footnote{Informally, we say that a complex number $z$ lies in the pseudospectrum of a square matrix $A$ if $(A-zI)^{-1}$ is large (or undefined). If $z$ lies in the pseudospectrum, then small perturbations of $A$ can potentially cause $z$ to fall into the spectrum of $A$, even if it is initially far away from this spectrum. Thus, whenever one has pseudospectrum far away from the actual spectrum, the actual distribution of eigenvalues can depend very sensitively (in the worst case) on the coefficients of $A$. Of course, our matrices are random rather than worst-case, and so we expect the most dangerous effects of pseudospectrum to be avoided; but this of course requires some analytical effort to establish, and deterministic techniques (e.g. truncation) should be used with extreme caution, since they are likely to break down in the worst case.} that can potentially make the ESD quite unstable with respect to perturbations. The non-Hermitian variant of this theorem, the Circular Law Conjecture, dates back to the 1950s (see Chapter 10 of \cite{BS} or the introduction of \cite{bai}).
\begin{conjecture}[Circular law]\label{conj:CL} Let $M_n$ be an $n \times n$ random matrix whose entries are iid complex random variables with mean 0 and variance 1. Then the ESD of
$\frac{1}{\sqrt n} M_n$ converges (in both the strong and weak senses) to the uniform distribution $\mu := \frac{1}{\pi} 1_{|z| \leq 1} dz$ on the unit disk $\{ z \in \C: |z| \leq 1 \}$. \end{conjecture}
The numerical evidence for this conjecture is extremely strong (see e.g. Figure \ref{figure:CircLaw}). However, there are significant difficulties in establishing this conjecture rigorously, not least of which is the fact that the main techniques used to handle Hermitian matrices (such as moment methods and truncation) cannot be applied to the non-Hermitian model (see \cite[Chapter 10]{BS} for a detailed discussion). Nevertheless, the conjecture has been intensively worked on for many decades. The circular law was verified for the complex gaussian distribution in \cite{Mehta} and the real gaussian distribution in \cite{edel}. An approach to attack the general case was introduced in \cite{Girko1}, leading to a resolution of the strong circular law for continuous distributions with bounded sixth moment in \cite{bai}. The sixth moment hypothesis in \cite{bai} was lowered to $(2+\eta)^{\operatorname{th}}$ moment for any $\eta > 0$ in \cite{BS}. The removal of the hypothesis of continuous distribution required some new ideas. In \cite{GT1} the weak circular law for (possibly discrete) distributions with subgaussian moment was established, with the subgaussian condition relaxed to a fourth moment condition in \cite{PZ} (see also \cite{Girko2} for an earlier result of similar nature), and then to $(2+\eta)^{\operatorname{th}}$ moment in \cite{GT2}. Shortly before this last result, the strong circular law assuming $(2+\eta)^{\operatorname{th}}$ moment was established in \cite{TVcir1}. Finally, in a recent paper \cite{TVcir2}, the authors proved this conjecture (in both strong and weak forms) in full generality. In fact, we obtained this result as a consequence of a more general theorem, presented in the next section.
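The numerical evidence is easy to reproduce. A brief numpy sketch (Bernoulli entries, modest $n$, our own seed) checks that almost all eigenvalues of $\frac{1}{\sqrt n}M_n$ fall in the unit disk, and that the disk of radius $\frac12$ receives mass roughly $\frac14$, as the uniform measure on the disk predicts:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# iid Bernoulli entries: mean 0, variance 1, a discrete distribution
M = rng.choice([-1.0, 1.0], size=(n, n))
lam = np.linalg.eigvals(M / np.sqrt(n))

# under the circular law, the disk of radius r receives mass r^2
frac_disk = np.mean(np.abs(lam) <= 1.0)   # should be close to 1
frac_half = np.mean(np.abs(lam) <= 0.5)   # should be close to 0.25
```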
\section{Universality}
An easy case of Conjecture \ref{conj:CL} is when the entries $\a_{ij}$ of $M_n$ are iid complex gaussian. In this case there is the following precise formula
for the joint density function of the eigenvalues, due to Ginibre \cite{gin} (see also \cite{Mehta}, \cite{hwang} for more discussion of this formula):
\begin{equation} \label{eqn:Ginibre} p(\lambda_{1}, \cdots, \lambda_{n}) = c_{n} \prod_{1 \le i <
j \le n}|\lambda_{i}- \lambda_{j}|^{2 } \prod_{i=1} ^{n} e^{-n
|\lambda_{i}|^{2}}. \end{equation}
From here one can verify the conjecture in this case by a direct calculation. This was first done by Mehta and also Silverstein in the 1960s:
\begin{theorem}[Circular law for Gaussian matrices]\label{theorem:mehta} \cite{Mehta} Let $M_n$ be an $n \times n$ random matrix whose entries are iid complex gaussian variables with mean 0 and variance 1. Then, with probability one, the ESD of $\frac{1}{\sqrt n} M_n$ tends to the circular law. \end{theorem}
A similar result for the real gaussian ensemble was established in \cite{edel}. These methods rely heavily on the strong symmetry properties of such ensembles (in particular, the invariance of such ensembles with respect to large matrix groups such as $O(n)$ or $U(n)$) in order to perform explicit algebraic computations, and do not extend directly to more combinatorial ensembles, such as the Bernoulli ensemble.
The above mentioned results and conjectures can be viewed as examples of a general phenomenon in probability and mathematical physics, namely, that global information about a large random system (such as limiting distributions) does not depend on the particular distribution of the particles. This is often referred to as the \emph{universality} phenomenon (see e.g. \cite{deift}). The most famous example of this phenomenon is perhaps the central limit theorem.
In view of the universality phenomenon, one can see that Conjecture \ref{conj:CL} generalizes Theorem \ref{theorem:mehta} in the same way that Theorem \ref{theorem:Pastur} generalizes Theorem \ref{theorem:Wigner}.
A demonstration of the circular law for the Bernoulli and the Gaussian case appears\footnote{We thank Phillip Wood for creating the figures in this paper.} in Figure~\ref{figure:CircLaw}.
\begin{figure}\label{figure:CircLaw}
\end{figure}
The universality phenomenon seems to hold even for more general models of random matrices, as demonstrated by Figure~\ref{figure:ChangeMean} and Figure~\ref{figure:Extension}.
\begin{figure}\label{figure:ChangeMean}
\end{figure}
\begin{figure}\label{figure:Extension}
\end{figure}
This evidence suggests that the asymptotic shape of the ESD depends only on the mean and the variance of each entry in the matrix. As mentioned earlier, the main result of \cite{TVcir2} (building on a large number of previous results) gives a rigorous proof of this phenomenon in full generality.
For any matrix $A$, we define the \emph{Frobenius norm} (or \emph{Hilbert-Schmidt norm})
$\|A\|_F$ by the formula $\|A\|_F := \trace(AA^\ast)^{1/2} = \trace(A^\ast A)^{1/2}$.
\begin{theorem}[Universality principle]\label{theorem:main1} Let $\a$ and $\b$ be complex random variables with zero mean and unit variance. Let $X_n = (\a_{ij})_{1 \leq i,j \leq n}$ and $Y_n := (\b_{ij})_{1 \leq i,j \leq n}$ be $n \times n$ random matrices whose entries $\a_{ij}$, $\b_{ij}$ are iid copies of $\a$ and $\b$, respectively. For each $n$, let $M_n$ be a deterministic $n \times n$ matrix satisfying
\begin{equation} \label{eqn:conditionM} \sup_n \frac{1}{n^2} \|M_n\|_F^2 < \infty. \end{equation} Let $A_n:= M_n + X_n$ and $B_n:= M_n +Y_n$. Then $\mu_{\frac{1}{\sqrt{n}} A_n} - \mu_{\frac{1}{\sqrt{n}} B_n}$ converges weakly to zero. If furthermore we make the additional hypothesis that the ESDs \begin{equation}\label{must} \mu_{(\frac{1}{\sqrt{n}} M_n-zI) (\frac{1}{\sqrt{n}} M_n-zI)^\ast} \end{equation} converge in the sense of probability measures to a limit for almost every $z$, then $\mu_{\frac{1}{\sqrt{n}} A_n} - \mu_{\frac{1}{\sqrt{n}} B_n}$ converges strongly to zero. \end{theorem}
This theorem reduces the computing of the limiting distribution to the case where one can assume\footnote{Some related ideas also appear in \cite{Girko2}. In the context of the central limit theorem, the idea of replacing arbitrary iid ensembles by Gaussian ones goes back to Lindeberg \cite{Lind}, and is sometimes known as the \emph{Lindeberg invariance principle}; see \cite{chat} for further discussion, and a formulation of this principle for Hermitian random matrices.}
that the entries $\a$ have Gaussian (or any special) distribution. Combining this theorem (in the case $M_n = 0$) with Theorem \ref{theorem:mehta}, we conclude
\begin{cor} The circular law (Conjecture \ref{conj:CL}) holds in both the weak and strong sense. \end{cor}
It is useful to notice that Theorem \ref{theorem:main1} still holds even when the limiting distributions do not exist.
The proof of Theorem \ref{theorem:main1} relies on several surprising connections between seemingly remote areas of mathematics that have been discovered in the last few years. The goal of this article is to give the reader an overview of these connections and through them a sketch of the proof of Theorem \ref{theorem:main1}. The first area we shall visit is combinatorics.
\section {Combinatorics}
As we shall discuss later, one of the primary difficulties in controlling the ESD of a non-Hermitian matrix $A_n = \frac{1}{\sqrt{n}} M_n$ is the presence of \emph{pseudospectrum}: complex numbers $z$ for which the resolvent $(A_n - zI)^{-1} = (\frac{1}{\sqrt{n}} M_n - zI)^{-1}$ exists but is extremely large. It is therefore of importance to obtain bounds on this resolvent, which leads one to understand for which vectors $v \in \C^n$ the image $(A_n - zI) v$ is likely to be small. Expanding out the vector $(A_n - zI)v$, one encounters expressions such as $\xi_1 v_1 + \ldots + \xi_n v_n$, where $v_1,\ldots,v_n \in \C$ are fixed and $\xi_1,\ldots,\xi_n$ are iid random variables. The problem of understanding the distribution of such random sums is known as the \emph{Littlewood-Offord problem}, and we now pause to discuss this problem further.
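The following small NumPy example (ours; the matrix and the evaluation point are chosen purely for illustration) shows how severe the pseudospectrum can be for non-normal matrices: the nilpotent shift matrix has spectrum $\{0\}$, yet its resolvent at distance $1/2$ from the spectrum is astronomically large.

```python
import numpy as np

n = 30
N = np.diag(np.ones(n - 1), k=1)   # nilpotent shift matrix: all eigenvalues are 0
z = 0.5                            # distance 1/2 from the spectrum {0}

# If N were normal, the resolvent norm would be exactly 1/dist(z, spec) = 2.
resolvent_norm = np.linalg.norm(np.linalg.inv(N - z * np.eye(n)), 2)
print(f"resolvent norm at z = {z}: {resolvent_norm:.3e}")  # of order z**(-n)
```

The resolvent norm here is of order $z^{-n} \approx 10^{9}$, even though $z$ is far from every eigenvalue.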
\subsection{The Littlewood-Offord problem}
Let $\bv =\{v_1, \dots, v_n\}$ be a set of $n$ integers and let $\xi_1, \dots, \xi_n$ be i.i.d random Bernoulli variables. Define $S:= \sum_{i=1}^n \xi_i v_i$ and $p_{\bv} (a) := \P (S=a)$ and $p_{\bv} := \sup_{a \in \BZ} p_{\bv } (a)$.
In their study of random polynomials, Littlewood and Offord \cite{LO} raised the question of bounding $p_{\bv}$. They showed that if the $v_i$ are non-zero, then $p_{\bv} = O(\frac{\log n}{\sqrt n})$. Very soon after, Erd\H {o}s \cite{Erd1}, using Sperner's lemma, gave a beautiful combinatorial proof for the following refinement.
\begin{theorem} \label{theorem:LO} Let $v_1, \dots, v_n$ be non-zero numbers and $\xi_i$ be i.i.d Bernoulli random variables. Then\footnote{We use the usual asymptotic notation in this paper, thus $X = O(Y)$, $Y = \Omega(X)$, $X \ll Y$, or $Y \gg X$ denotes an estimate of the form $|X| \leq CY$ where $C$ does not depend on $n$ (but may depend on other parameters). We also let $X = o(Y)$ denote the bound $|X| \leq c(n) Y$, where $c(n) \to 0$ as $n \to \infty$.} $$ p_{\bv} \le \frac{\binom{n}{{\lfloor n/2 \rfloor}}}{2^n} = O(\frac{1}{\sqrt n}). $$\end{theorem}
Notice that the bound is sharp, as can be seen from the example $\bv :=\{1, \dots, 1\}$, in which case $S$ has a binomial distribution. Many mathematicians realized that while the classical bound in Theorem \ref{theorem:LO} is sharp as stated, it can be improved significantly under additional assumptions on $\bv$. For instance, Erd\H {o}s and Moser \cite{EM} showed that if the $v_i$ are distinct, then
$$p_{\bv} =O(n^{-3/2} \ln n). $$
They conjectured that the logarithmic term is not necessary and this was confirmed by S\'ark\"ozy and Szemer\'edi \cite{SS}. Again, the bound is sharp (up to a constant factor), as can be seen by taking $v_1,\ldots,v_n$ to be a proper arithmetic progression such as $1,\ldots,n$. Stanley \cite{stanley} gave a different proof that also classified the extremal cases.
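For small $n$ these bounds can be probed by brute force. The sketch below (our illustration; the value $n=10$ is an arbitrary small choice) computes $p_{\bv}$ exactly by enumerating all $2^n$ sign patterns, confirming that $\bv=(1,\dots,1)$ attains the Erd\H{o}s bound while distinct $v_i$ give a much smaller value.

```python
import itertools
from collections import Counter
from math import comb

def p_v(v):
    """Exact p_v = sup_a P(xi_1 v_1 + ... + xi_n v_n = a) for iid +-1 signs,
    computed by enumerating all 2^n sign patterns (feasible for small n)."""
    counts = Counter(sum(e * x for e, x in zip(signs, v))
                     for signs in itertools.product((-1, 1), repeat=len(v)))
    return max(counts.values()) / 2 ** len(v)

n = 10
p_equal = p_v([1] * n)                     # extremal case: all v_i equal
p_distinct = p_v(list(range(1, n + 1)))    # distinct v_i: Erdos-Moser regime
print(p_equal, comb(n, n // 2) / 2 ** n)   # these two agree
print(p_distinct)                          # noticeably smaller
```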
A general picture was given by Hal\'asz, who showed, among other things, that if one forbids more and more additive structure\footnote{Intuitively, this is because the less additive structure one has in the $v_i$, the more likely the sums $S$ are to be distinct from each other. In the most extreme case, if the $v_i$ are linearly independent over the rationals $\Q$, then the $2^n$ sums $S$ are all distinct, and so $p_\bv = 1/2^n$ in this case.} in the $v_i$, then one gets better and better bounds on $p_{\bv}$. One corollary of his results (see \cite{Hal} or \cite[Chapter 9]{TVbook}) is the following.
\begin{theorem} \label{theorem:halasz} Consider $\bv= \{v_1, \dots, v_n\}$. Let $R_k$ be the number of solutions to the equation
$$\eps_1 v_{i_1} + \dots +\eps_{2k} v_{i_{2k}} =0 $$
\noindent where $\eps_i \in \{-1,1\}$ and $i_1, \dots, i_{2k} \in \{1,2, \dots, n \}$. Then
$$p_ {\bv}= O_{k} ( n^{-2k-1/2} R_k ). $$ \end{theorem}
\begin{remark} Several variants of Theorem \ref{theorem:LO} can be found in \cite{Kat, GLOS, FF, Kle} and the references therein. The connection between the Littlewood-Offord problem and random matrices was first made in \cite{KKS}, in connection with the question of determining how likely a random Bernoulli matrix was to be singular. The paper \cite{KKS} in fact inspired much of the work of the authors described in this survey. \end{remark}
\subsection{The inverse Littlewood-Offord problem}
Motivated by inverse theorems from additive combinatorics, in particular Freiman's theorem (see \cite{freiman}, \cite[Chapter 5]{TVbook}) and a variant for random sums in \cite[Theorem 5.2]{TVsing} (inspired by earlier work in \cite{KKS}), the authors \cite{TVinverse} brought a different view to the problem. Instead of trying to improve the bound further by imposing new assumptions, we aim to provide the full picture by finding the underlying reason for the probability $p_{\bv}$ to be large (e.g. larger than $n^{-A}$ for some fixed $A$).
Notice that the (multi)-set $\bv$ has $2^n$ subsums, and $p_{\bv}
\ge n^{-C}$ means that at least $2^n/ n^C$ among these take the same value. This suggests that there should be very strong additive structure in the set. In order to determine this structure, one can study examples of $\bv$ where $p_{\bv}$ is large. For a set $A$, we denote by $lA$ the set $lA := \{a_1+ \dots + a_l| a_i \in A \}$. A natural example is the following.
\begin{example} Let $I=[-N,N]$ and $v_1, \dots, v_n$ be elements of $I$. Since $S \in nI$, by the pigeonhole principle, $p_{\bv} \ge \frac{1}{|nI|} = \Omega (\frac{1}{nN})$. In fact, a short consideration yields a better bound. Notice that with probability at least $.99$, we have $S \in 10 \sqrt n I$, thus again by the pigeonhole principle, we have $p_{\bv} = \Omega (\frac{1}{\sqrt n N})$. If we set $N=n^C$ for some constant $C$, then \begin{equation} \label{bound1} p_{\bv} = \Omega (\frac{1}{n^{C+1/2}}). \end{equation} \end{example}
The next, and more general, construction comes from additive combinatorics. A very important concept in this area is that of a \emph{generalized arithmetic progression} (GAP). A set $Q$ of numbers is a \emph{GAP of rank $d$} if it can be expressed as in the form
$$Q= \{a_0+ x_1a_1 + \dots +x_d a_d| M_i \le x_i \le M_i' \hbox{ for all } 1 \leq i \leq d\}$$ for some $a_0,\ldots,a_d,M_1,\ldots,M_d,M'_1,\ldots,M'_d$.
It is convenient to think of $Q$ as the image of an integer box $B:= \{(x_1, \dots, x_d) \in \Z^d| M_i \le x_i \le M_i' \} $ under the linear map $$\Phi: (x_1,\dots, x_d) \mapsto a_0+ x_1a_1 + \dots + x_d a_d. $$
The numbers $a_i$ are the \emph{generators} of $Q$, and $\Vol(Q) := |B|$ is the \emph{volume} of $Q$. We say that $Q$ is
\emph{proper} if this map is one to one, or equivalently if $|Q| = \Vol(Q)$. For non-proper GAPs, we of course have $|Q| < \Vol(Q)$.
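A toy computation (ours; the generators and box are arbitrary small examples) can make these definitions concrete: we enumerate a GAP as the image $\Phi(B)$ of its integer box and compare $|Q|$ with $\Vol(Q)$ to detect properness.

```python
import itertools

def gap_elements(a0, gens, box):
    """Image of the integer box under Phi(x) = a0 + x_1 a_1 + ... + x_d a_d."""
    ranges = [range(lo, hi + 1) for lo, hi in box]
    return {a0 + sum(x * a for x, a in zip(xs, gens))
            for xs in itertools.product(*ranges)}

box = [(0, 9), (0, 9)]                 # Vol(Q) = 10 * 10 = 100
vol = 100

Q1 = gap_elements(0, [1, 100], box)    # generator 100 >> box width: no collisions
print(len(Q1), vol)                    # proper: |Q1| == Vol(Q1)

Q2 = gap_elements(0, [1, 5], box)      # 5 = 5*1, so Phi is not one-to-one
print(len(Q2), vol)                    # non-proper: |Q2| < Vol(Q2)
```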
\begin{example} Let $Q$ be a proper GAP of rank $d$ and volume $V$. Let $v_1, \dots, v_n$ be (not necessarily distinct) elements of
$Q$. The random variable $S =\sum_{i=1}^n \xi_i v_i$ takes values in the GAP $nQ$. Since $|nQ| \le \Vol (nQ) \le n^d V$, the pigeonhole principle implies that $p_{\bv} = \Omega (\frac{1}{n^d V})$. In fact, using the same idea as in the previous example, one can improve the bound to $\Omega (\frac{1}{n^{d/2} V})$. If we set $V=n^C$ for some constant $C$, then \begin{equation} \label{bound2} p_{\bv} = \Omega (\frac{1}{n^{C+d/2}}). \end{equation} \end{example}
\noindent The above examples show that if the elements of $\bv$ belong to a proper GAP with small rank and small cardinality, then $p_{\bv}$ is large. A few years ago, the authors \cite{TVinverse} showed that this is essentially the only reason:
\begin{theorem}[Weak inverse theorem]\label{theorem:weak} \cite{TVinverse} Let $C, \epsilon > 0$ be arbitrary constants. There are constants $d$ and $C'$ depending on $C$ and $\epsilon$ such that the following holds.
Assume that $\bv = \{v_1, \ldots, v_n\}$ is a multiset of integers satisfying $p_{\bv} \geq n^{-C}$. Then there is a GAP $Q$ of rank at most $d$ and volume at most $n^{C'}$ which contains all but at most $n^{1-\epsilon}$ elements of $\bv$ (counting multiplicity).
\end{theorem}
\begin{remark} The presence of the small set of exceptional elements is not completely avoidable. For instance, one can add $o(\log n)$ completely arbitrary elements to $\bv$ and only decrease $p_{\bv}$ by a factor of $n^{-o(1)}$ at worst. Nonetheless we expect the number of such elements to be less than what is given by the results here. \end{remark}
The reason we call Theorem \ref{theorem:weak} {\it weak} is the fact that the dependence between the parameters is not optimal and does not yet reflect the relations in \eqref{bound1} and \eqref{bound2}. Recently, we were able to modify the approach to obtain an almost optimal result.
\begin{theorem}[Strong inverse theorem] \label{theorem:strong} \cite{TVinversestrong} Let $C>0$ and $0 < \eps < 1$ be constants. Assume that $$p_{\bv} \ge n^{-C}. $$
Then there exists a GAP $Q$ of rank $d= O_{C, \eps} (1)$ which contains all but $O_d(n^{1 -\eps} )$
elements of $\bv$ (counting multiplicity), where
$$|Q| = O_{C, \eps} (n^{C - \frac{d}{2} + \eps}). $$ \end{theorem}
The bound on $|Q|$ matches \eqref{bound2}, up to the $\eps$ term. The proofs of Theorems \ref{theorem:weak} and \ref{theorem:strong} use harmonic analysis, combined with results from the theory of random walks and several facts about GAPs. Both theorems hold in a more general setting, where the elements of $\bv$ are from a torsion-free group. The lower bound $n^{-C}$ on $p_{\bv}$ can also be relaxed, but the statement is more technical.
As an application of Theorem \ref{theorem:strong}, one can deduce, in a straightforward manner, a slightly weaker version of the {\it forward} results mentioned above. For instance, let us show that if the $v_i$ are distinct, then $p_{\bv } \le n^{-3/2+\delta}$ (for any constant $\delta >0$). Assume otherwise and set $\eps := \delta /2$. Theorem \ref{theorem:strong} implies that most of $\bv$ is contained in a GAP $Q$ of rank $d$ and cardinality at most $O( n^{3/2-\delta -d/2 + \eps }) =O(n^{1-\delta/2})=o(n)$. But since $\bv$ has $(1-o(1))n$ distinct elements in $Q$, we obtain a contradiction.
Next we consider another application of Theorem \ref{theorem:strong}, which will be more important in later sections. This theorem enables us to execute very precise counting arguments. Assume that we would like to count the number of
(multi)-sets $\bv$ of integers with $\max |v_{i}| \le N$ such that
$p_{\bv} \ge p:= n^{-C}$.
Fix $d \ge 1$, fix\footnote{A more detailed version of Theorems
\ref{theorem:weak} and \ref{theorem:strong} tells us that there are
not too many ways to choose the
generators of $Q$. In particular, if $N =n^{O(1)}$, the number of ways to fix these is negligible.} a GAP $Q$ with rank $d$ and
volume $V = n^{C -d/2+\epsilon}$. The dominating term will be the number of multi-subsets of size $n$ of $Q$, which is
\begin{equation}\label{discretcounting} |Q|^{n } \le n^{(C-d/2 +\epsilon)n} \le n^{Cn} n^{-n/2+\epsilon n }= p^{-n} n^{-n(1/2-\epsilon) }. \end{equation}
For later purposes, we need a continuous version of this result. Let the $v_i$ be complex numbers. Instead of $p_{\bv}$, consider the maximum {\it small ball} probability
$$p_{\bv}(\beta) =\max_{z \in \C} \P (|S-z| \le \beta) . $$
Given a small $\beta >0$ and $ p= n^{-O(1)}$, the collection of unit vectors $\bv$ such that $p_{\bv}(\beta) \ge p$ is infinite, but we are able to show that it can be approximated by a small set.
\begin{theorem} [The $\beta$-net Theorem] \cite{TVcir1} Suppose that $p = n^{-O(1)}$. Then the set of unit vectors $\bv= (v_{1}, \dots, v_{n})$ such that $p_{\bv}(\beta) \ge p$ admits a $\beta$-net (in the infinity norm)\footnote{In other words, for any $\bv$ with $p_{\bv}(\beta) \geq p$, there exists $\bv' \in \Omega$ such that all coefficients of $\bv - \bv'$ do not exceed $\beta$ in magnitude.} $\Omega$ of size at most
\begin{equation} \label{contcounting} |\Omega| \leq p^{-n} n^{-n/2 +o(n)} . \end{equation} \end{theorem}
\begin{remark}\label{Rvrem} A related result (with different parameters) appears in \cite{RV}; in our notation, the probability $p$ is allowed to be much smaller, but the net is coarser (essentially, a $\beta \sqrt{n}$-net rather than a $\beta$-net). In terms of random matrices, the results in \cite{RV} are better suited to control the extreme tail of such quantities as the least singular value of $A_n - zI$, but require more boundedness conditions on the matrix $A_n$ (and in particular, bounded operator norm) due to the coarser nature of the net. \end{remark}
\section{Computer Science}
Our next stop is computer science and numerical linear algebra, and in particular the problem of dealing with \emph{ill-conditioned} matrices, which is closely related to the issue of pseudospectrum which is of central importance in the circular law problem.
\subsection{Theory vs Practice}
Running times of algorithms are frequently estimated by worst-case analysis. But in practice, it has been observed that many algorithms, especially those involving a large matrix, perform significantly better than the worst-case scenario. The most famous example is perhaps the simplex algorithm in linear programming. Here, the basic problem (in its simplest form) is to optimize a goal function $c \cdot x$, under the constraint $Ax \le b$, where $c, b$ are given vectors of length $n$ and $A$ is an $n \times n $ matrix. In the worst-case scenario, this algorithm takes exponential time. But in practice, the algorithm runs extremely well. It is still very popular today, despite the fact that there are many other algorithms proven to have polynomial complexity.
There have been various attempts to explain this phenomenon. In this section we will discuss an influential recent explanation given by Spielman and Teng \cite{ST, ST1}.
\subsection {The effect of noise}
An important issue in the theory of computing is noise, as almost all computational processes are
affected by it. By the word \emph{noise}, we would like to refer to all kinds of
errors occurring in a process, due to both humans and machines, including
errors in measuring, errors caused by truncations, errors committed in transmitting and inputting the data, etc.
Spielman and Teng \cite{ST} pointed out that when we are interested in solving a certain system of
equations, because of noise,
our computer actually ends
up solving a slightly perturbed version of the system. This is the
core of their so-called {\it smoothed analysis}, which they used to
explain the effectiveness of specific algorithms (such as the simplex
method). Interestingly, noise, usually a burden, plays a ``positive''
role here, as it smoothes the inputs randomly, and so prevents a very
bad input from ever occurring.
The puzzling question here is, of course: {\it why is the perturbed input typically better than the original (worst-case) input?}
In order to give a mathematical explanation, we need to introduce some notation. For an $n \times n$ matrix $M$, the \emph{condition number} $\kappa(M)$ is defined as
$$\kappa(M):= \|M\| \| M^{-1} \|$$
where $\| \cdot \|$ denotes the operator norm. (If $M$ is not invertible, we set $\kappa (M) =\infty$.)
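In NumPy the condition number can be computed directly from the singular values, since the operator norms of $M$ and $M^{-1}$ are, respectively, the largest singular value and the reciprocal of the smallest. This is a sketch of ours (the built-in \texttt{np.linalg.cond} implements the same definition); the test matrices are arbitrary illustrative choices.

```python
import numpy as np

def condition_number(M):
    """kappa(M) = ||M|| * ||M^{-1}|| in the operator norm, i.e. the ratio
    of the largest to the smallest singular value (infinity if singular)."""
    s = np.linalg.svd(M, compute_uv=False)
    return np.inf if s[-1] == 0 else s[0] / s[-1]

print(condition_number(np.eye(4)))              # 1.0: best possible
print(condition_number(np.diag([1.0, 1e-8])))   # 1e8: badly ill-conditioned
H = np.array([[1.0 / (i + j + 1) for j in range(8)] for i in range(8)])
print(f"{condition_number(H):.2e}")             # 8x8 Hilbert matrix: huge
```

The $8 \times 8$ Hilbert matrix is a classical example of an ill-conditioned matrix, with condition number around $10^{10}$.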
The condition number plays a crucial role in numerical linear algebra; in particular, the condition number $\kappa(M)$ of a matrix $M$ serves as a simplified proxy for the accuracy and stability of most algorithms used to solve the
equation $Mx=b$ (see \cite{BT, GvL}, for example). The
exact solution $x= M^{-1} b$, in theory, can be computed quickly (by
Gaussian elimination, say). However, in practice computers can only represent a
finite subset of real numbers and this leads to two
difficulties: the represented numbers cannot be arbitrarily large
or small, and there are gaps between two adjacent represented numbers. A quantity which is frequently used in numerical
analysis is $\eps_{\mathrm{machine}}$ which is half of the distance
from $1$ to the nearest represented number. A fundamental result
in numerical analysis \cite{BT}
asserts that if one denotes by $\tilde x$ the result computed by
computers, then the relative error $\frac{ \| \tilde x - x \|
}{\|x\|}$ satisfies
$$ \frac{ \| \tilde x - x \| }{\|x\|} = O\big( \eps_{\mathrm{machine}}
\kappa(M) \big). $$
Following the literature, we call $M$ {\it well-conditioned} if $\kappa (M)$ is small. For quantitative purposes, we say that an $n$ by $n$ matrix $M$ is {\it well-conditioned} if its condition number is polynomially bounded in $n$ (that is, $\kappa(M) \le n^C$ for some constant $C$ independent of $n$).
\subsection{Randomly perturbed matrices are well-conditioned}
The analysis in \cite{ST} is guided by the following fundamental intuition\footnote{This conjecture, of course, does not fully explain the phenomenon of smoothed analysis, since it may be that a well-conditioned matrix still causes a difficulty in one's linear algorithms for some other reason, or perhaps the original ill-conditioned matrix did not cause a difficulty in the first place; we thank Alan Edelman for pointing out this subtlety. Nevertheless, Conjecture \ref{con} does provide an informal intuitive justification of smoothed analysis, and various rigorous versions of this conjecture were used in the formal arguments in \cite{ST}: see Section 1.4 of that paper for further discussion.}:
\begin{conjecture}\label{con} For every input instance, it is unlikely that a slight random perturbation of that instance has large condition number. \end{conjecture}
More quantitatively,
\begin{conjecture} Let $A$ be an arbitrary $n$ by $n$ matrix and let $M_n$ be a random
$n$ by $n$ matrix.
Then with high probability $A+M_n$ is well-conditioned.
\end{conjecture}
\vskip2mm Notice that here one allows $A$ to have a large condition number.
Let us take a look at $\kappa (A+M_n) = \| A+M_n \| \| (A+M_n)^{-1}
\|$. In order to have $\kappa (A+M_n) = n^{O(1)}$, we want to upper-bound both $\| A+M_n \| $ and $\| (A+M_n)^{-1} \|$. Bounding $\| A+M_n \|$ is easy, since by the triangle inequality
$$ \|A+M_n \| \le \|A\| + \|M_n \|. $$
In most models of random matrices, $\|M_n \| \leq n^{O(1)}$ with very high probability, so it suffices to assume that $\|A \| \leq n^{O(1)}$; thus we assume that the matrix $A$ is of polynomial size compared to the noise level. This is a fairly reasonable assumption for high-dimensional matrices for which the effect of noise is non-negligible\footnote{In particular, it is naturally associated to the concept of \emph{polynomially smoothed analysis} from \cite{ST}.}, and we are going to assume it in the rest of this section.
The remaining problem is to bound the norm of the inverse $\|
(A+M_n)^{-1} \|$. An important detail here is how to choose the random matrix $M_n$. In their works \cite{ST, ST1, SST}, Spielman and Teng (and coauthors) set $M_n$ to have iid Gaussian entries (with variance 1) and obtained the following bound, which played a critical role in their smoothed analysis \cite{ST, ST1}.
\begin{theorem} \label{theorem:STcondition} Let $A$ be an arbitrary $n$ by $n$ matrix and $M_n$ be a random matrix with iid Gaussian entries.
Then for any $ x >0$,
$$\P( \|(A+M_n)^{-1} \| \ge x) = O(\frac{\sqrt n}{x} ) . $$ \end{theorem}
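A quick experiment (ours, with an arbitrary fixed seed and $n=100$) illustrates the theorem: even when $A$ is as ill-conditioned as possible, e.g. a singular rank-one matrix, the Gaussian perturbation makes the inverse norm modest.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
A = np.ones((n, n))                # rank one, so kappa(A) = infinity
G = rng.standard_normal((n, n))    # iid Gaussian noise, variance 1

inv_norm = np.linalg.norm(np.linalg.inv(A + G), 2)
print(f"||(A + G)^(-1)|| = {inv_norm:.2f}")   # modest, as the theorem predicts
```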
While Spielman-Teng smoothed analysis does seem to have the right philosophy, the choice of $M_n$ is a bit artificial. Of course, the analysis still goes through if one replaces the Gaussian by a fine enough approximation. A large fraction of problems in linear programming deal with integral matrices, so the noise is a perturbation by integers. In other cases, even when the noise has continuous support, the data is strongly truncated. For example, in many engineering problems, one does not keep more than, say, three to five decimal places. Thus, in many situations, the entries of $M_n$ end up having discrete support of relatively small size, which may not even grow with $n$, while the approximation mentioned above would require this support to have size exponential in $n$. Therefore, in order to obtain an analysis that better captures real-life data, one needs a variant of Theorem \ref{theorem:STcondition} in which the entries of $M_n$ have discrete support.
This problem was suggested to the authors by Spielman a few years ago. Using the Weak Inverse Theorem, we were able to prove the following variant of Theorem \ref{theorem:STcondition} \cite{TVstoc}.
\begin{theorem} \label{theorem:conditionTV} For any constants $a,c >0$, there is a constant $b=b(a,c)>0$ such that the following holds.
Let $A$ be an $n$ by $n$
matrix such that $\|A\|\le n^{a}$ and let $M_n$ be a random matrix with iid Bernoulli entries.
Then
$$\P( \|(A+M_n)^{-1} \| \ge n^b)\le n^{-c }. $$ \end{theorem}
Using the stronger $\beta$-net Theorem, one can obtain a nearly optimal relation between the constants $a$, $b$ and $c$ \cite{TVgeneral}. These results extend, with the same proof, to a large variety of distributions. For example, one does not need to require the entries of $M_n$ to be iid\footnote{In practice, one would expect the noise at a large entry to have larger variance than one at a small entry, due to multiplicative effects.}, although independence is crucially exploited in the proofs. Also, one can allow many of the entries to be 0 \cite{TVstoc}.
\begin{remark} Results of this type first appear in \cite{Rud} (see also \cite{Lit} for some earlier related work on the least singular value of \emph{rectangular} matrices). In the special case where $A=0$ and where the entries of $M_n$ are iid and have finite fourth moment, Rudelson and Vershynin \cite{RV} (see also \cite{RV2}, \cite{RV3}) obtained sharp
bounds for $\|(A+M_n)^{-1}\|$, using a somewhat different method, which relies on an inverse
theorem of a slightly different nature; see Remark \ref{Rvrem}. \end{remark}
The main idea behind the proof of Theorem \ref{theorem:conditionTV}, which first appears in \cite{Rud}, is the following. Let $d_{i}$ be the distance from the $i^{\operatorname{th}}$ row vector of $A+M_n$ to the subspace spanned by the rest of the rows. Elementary linear algebra (see also \eqref{neg} below) then gives the bound
$$\| (A+M_n)^{-1} \| =n^{O(1)} (\min_{1 \leq i \leq n} d_{i} )^{-1}. $$ Ignoring various factors of $n^{O(1)}$, the main task is then to understand the distribution of $d_i$ for any given $i$.
If $\bv= (v_{1}, \dots, v_{n})$ is the unit normal vector of a hyperplane $V$, then the distance from a random vector $(a_1+ \xi_1, \dots, a_n+ \xi_n)$ to the hyperplane $V$ is given by the formula
$$ | v_{1 } (\xi_{1 }+a_1) + \dots + v_{n} (\xi _{n }+a_n) | =|\sum_i a_i v_i + S | $$ where $S := \sum_{i=1}^n v_i \xi_i$ is as in the previous section.
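This identity is elementary, but a quick numerical check (ours; the dimensions and seed are arbitrary) may be reassuring: the distance computed by orthogonal projection onto $V$ agrees with $|\sum_i a_i v_i + S|$ when $\bv$ is a unit normal.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
rows = rng.standard_normal((n - 1, n))   # the hyperplane V is the span of these rows

# A unit normal v of V: the right singular direction orthogonal to all the rows.
v = np.linalg.svd(rows)[2][-1]

a = rng.standard_normal(n)               # deterministic shift (a_1, ..., a_n)
xi = rng.choice([-1.0, 1.0], size=n)     # Bernoulli signs (xi_1, ..., xi_n)
p = a + xi

# Distance via orthogonal projection onto V ...
coeffs, *_ = np.linalg.lstsq(rows.T, p, rcond=None)
dist_proj = np.linalg.norm(p - rows.T @ coeffs)

# ... and via the formula |v_1 (xi_1 + a_1) + ... + v_n (xi_n + a_n)|.
dist_formula = abs(v @ p)
print(dist_proj, dist_formula)           # the two agree
```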
To estimate the chance that $|\sum_{i=1}^n a_i v_i + S| \le \beta$,
the notion of the small ball probability $p_{\bv}(\beta)$ comes naturally. Of course, this quantity depends on the normal vector $\bv$, and so we now divide into cases depending on the nature of this vector.
If $p_{\bv}(\beta)$ is small, we are done using a conditioning argument\footnote{Intuitively, the idea of this conditioning argument is to first fix (or ``condition'') on $n-1$ of the rows of $A+M_n$, which should then fix the normal vector $\bv$. The remaining row is independent of the other $n-1$ rows, and so should have a probability at most $p_\bv(\beta)$ of lying within $\beta$ of the span of those rows. There are some minor technical issues in making this argument (which essentially dates back to \cite{Komlos}) rigorous, arising from the fact that the $n-1$ rows may be too degenerate to accurately control $\bv$, but these difficulties can be dealt with, especially if one is willing to lose factors of $n^{O(1)}$ in various places.}. On the other hand, the $\beta$-net Theorem says that there are ``few'' $\bv$ such that $p_{\bv}(\beta)$ is large, and in this case a direct counting argument finishes the job\footnote{For instance, one important class of $\bv$ for which $p_\bv(\beta)$ tends to be large are the \emph{compressible} vectors $\bv$, in which most of the entries are close to zero. Each compressible $\bv$ (e.g. $\bv = (1,-1,0,\ldots,0)$) has a moderately large probability of being close to a normal vector for $A+M_n$ (e.g. in the random Bernoulli case, $\bv = (1,-1,0,\ldots,0)$ has a probability about $2^{-n}$ of being a normal vector); but the number (or more precisely, the metric entropy) of the set of compressible vectors is small (of size $2^{o(n)}$) and so the net contribution of these vectors is then manageable. Similar arguments (relying heavily on the $\beta$-net theorem) handle other cases when $p_\bv(\beta)$ is large (e.g. if most entries of $\bv$ live near a GAP of controlled size).}. Details can be found in \cite{TVstoc}, \cite{TVcir1}, or \cite{TVgeneral}.
\section{Back to probability}
\subsection{The replacement principle}
Let us now take another look at the Circular Law Conjecture. Recall that $\lambda_{1}, \dots, \lambda_{n}$ are the eigenvalues of $A_n = \frac{1}{\sqrt n} M_{n}$, which generates a normalized counting measure $\mu_{A_n}$. We want to show that $\mu_{A_n}$ tends (in probability) to the uniform measure $\mu$ on the unit disk.
The traditional way to attack this conjecture is via a Stieltjes transform technique\footnote{The more classical \emph{moment method}, which is highly successful in the Hermitian setting (for instance in proving Theorem \ref{theorem:Pastur}), is not particularly effective in the non-Hermitian setting, because moments such as $\tr A_n^m$ for $m=0,1,2,\ldots$ do not determine the ESD $\mu_{A_n}$ (even approximately) unless one takes $m$ to be as large as $n$; see \cite{bai}, \cite{BS} for further discussion.}, following \cite{Girko1, bai}. Given a (complex) measure $\nu$, define, for any $z$ with Im $z >0$, $$s_{\nu}(z) := \int \frac{1}{x-z} d \nu (x). $$ For the ESD $\mu_{A_n}$, we have $$s_{\mu_{A_n}} (z) = \frac{1}{n} \sum \frac{1}{\lambda_{i} - z } . $$
Thanks to standard results from probability\footnote{One can also use the theory of logarithmic potentials for this, as is done for instance in \cite{GT1}, \cite{PZ}.}, in order to establish the Circular Law Conjecture in the strong (resp. weak) sense, it suffices to show that $s_{\mu_{A_n}}(z)$ converges almost surely (resp. in probability) to $s_{\mu}(z)$ for almost all $z$ (see \cite{TVcir2} for a precise statement).
Set $z = s+ it$ and write $s_{n} (z) := s_{\mu_{A_n}}(z) = S+ iT$. Since $s_n$ is analytic except at the poles, and vanishes at infinity, the Stieltjes transform $s_n(z)$ is determined by its real part $S$. Let us take a closer look at this variable:
\begin{eqnarray*} S &=& \frac{1}{n} \sum \frac{ \Re(\lambda_{i}) -s} {| \lambda_{i} -z |^{2} } \\
&=&- \frac{1}{2n} \sum \frac{\partial}{ \partial s} \log | \lambda_{i} -z |^{2} \\ &=& - \frac{1}{2} \frac{\partial}{\partial s} \int_{0}^{\infty} \log x \, d \eta_{n}(x) \end{eqnarray*}
where $$\eta_{n} := \mu_{(\frac{1}{\sqrt n} M_n - zI)(\frac{1}{\sqrt n} M_n - zI)^*}$$ is the normalised counting measure of the (squares of the)
\emph{singular values} of $\frac{1}{\sqrt n} M_{n} -zI$. Notice that in the third equality, we use the fact that $\prod |\lambda_{i}-z| =
|\det (\frac{1}{\sqrt n} M_{n} - zI) |$. This step is critical: it reduces the study of a complex measure to that of a real one, or in other words, to studying the ESD of a Hermitian matrix rather than a non-Hermitian one.
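The identity behind the third equality is easy to confirm numerically: $\prod_i |\lambda_i - z|$, $|\det(\frac{1}{\sqrt n} M_n - zI)|$, and the product of the singular values of $\frac{1}{\sqrt n} M_n - zI$ all coincide. A small sketch (size and $z$ arbitrary), comparing on the log scale for stability:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
A = rng.standard_normal((n, n)) / np.sqrt(n)
z = 0.3 + 0.4j

B = A - z * np.eye(n)
eigs = np.linalg.eigvals(A)
svals = np.linalg.svd(B, compute_uv=False)

# prod |lambda_i - z| = |det(A - zI)| = prod sigma_i(A - zI)
log_prod_eigs = np.sum(np.log(np.abs(eigs - z)))
log_prod_svals = np.sum(np.log(svals))
_, log_det = np.linalg.slogdet(B)  # log |det(B)|
print(log_prod_eigs, log_prod_svals, log_det)
```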
Putting this observation in the more general setting of Theorem \ref{theorem:main1}, we arrive at the following useful result.
\begin{theorem}[Replacement principle]\label{theorem:replacement}\cite{TVcir2} Suppose for each $n$ that $A_n, B_n \in M_n(\BBC)$ are ensembles of random matrices.
Assume that \begin{itemize} \item[(i)] The expression \begin{equation}\label{pan}
\frac{1}{n^2} \|A_n\|_F^2 + \frac{1}{n^2} \|B_n\|_F^2 \end{equation} is weakly (resp. strongly) bounded.\footnote{A sequence $x_n$ of non-negative random variables is said to be \emph{weakly bounded} if $\lim_{C \to \infty} \liminf_{n \to \infty} \P( x_n \leq C ) = 1$, and \emph{strongly bounded} if $\limsup_{n \to \infty} x_n < \infty$ with probability $1$.} \item[(ii)] For almost all complex numbers $z$, $$\frac{1}{n}
\log |\det(\frac{1}{\sqrt{n}} A_n - zI)| - \frac{1}{n} \log |\det(\frac{1}{\sqrt{n}} B_n -
zI)|$$ converges weakly (resp. strongly) to zero. In particular, for each fixed $z$, these determinants are non-zero with probability $1-o(1)$ for all $n$ (resp. almost surely non-zero for all but finitely many $n$). \end{itemize} Then $\mu_{\frac{1}{\sqrt{n}} A_n} - \mu_{\frac{1}{\sqrt{n}} B_n}$ converges weakly (resp. strongly) to zero. \end{theorem}
At a technical level, this theorem reduces Theorem \ref{theorem:main1} to the comparison of $\log |\det(\frac{1}{\sqrt{n}} A_n - zI)| $ and $\log
|\det(\frac{1}{\sqrt{n}} B_n -
zI)|$.
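To get a concrete feel for the quantity being compared (a hypothetical illustration, with $n$ and $z$ chosen arbitrarily, and $z$ taken outside the unit disk so that pseudospectral effects play no role), one can evaluate the normalized log-determinant for a Gaussian and a Bernoulli ensemble at the same $z$; both land near the limiting value $\log|z|$:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 400
z = 1.5 + 0.5j  # |z| > 1, away from the limiting spectrum

def norm_logdet(X, z):
    """(1/n) log |det(X / sqrt(n) - z I)| for an n x n matrix X."""
    n = X.shape[0]
    _, logabs = np.linalg.slogdet(X / np.sqrt(n) - z * np.eye(n))
    return logabs / n

gauss = norm_logdet(rng.standard_normal((n, n)), z)
bern = norm_logdet(rng.choice([-1.0, 1.0], size=(n, n)), z)
diff = abs(gauss - bern)
# limiting value: integral of log|w - z| over the unit disk = log|z| for |z| > 1
print(gauss, bern, np.log(abs(z)), diff)
```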
\begin{remark} Note that this expression is large and unstable when $z$ lies in the \emph{pseudospectra} of either $\frac{1}{\sqrt{n}} A_n$ or $\frac{1}{\sqrt{n}} B_n$, which means that the resolvent $(\frac{1}{\sqrt{n}} A_n - zI)^{-1}$ or $(\frac{1}{\sqrt{n}} B_n - zI)^{-1}$ is large. Controlling the probability of the event that $z$ lies in the pseudospectrum is therefore an important portion of the analysis. This technical problem is not an artefact of the method, but is in fact essential to any attempt to control non-Hermitian ESDs for general random matrix models, as such ESDs are extremely sensitive to perturbations in the matrix in regions of pseudospectrum. See \cite{bai}, \cite{BS} for further discussion. \end{remark}
\subsection{Treatment of the pole}
Using techniques from probability, such as the moment method, one can show that the distributions of the singular values of $\frac{1}{\sqrt{n}} A_n - zI$ and $\frac{1}{\sqrt{n}} B_n - zI$ are asymptotically the same\footnote{In the setting where the matrices $X_n$ and $Y_n$ have iid entries, one can use the results of \cite{doz} to establish this. In the non-iid case, an invariance principle from \cite{chat} gives a slightly weaker version of this equivalence; this was observed by Manjunath Krishnapur and appears as an appendix to \cite{TVcir2}.} \cite{bai, TVcir1, doz, TVcir2, chat}. This, however, is not sufficient to conclude that $\frac{1}{n} \log
|\det(\frac{1}{\sqrt{n}} A_n - zI)| $ and $\frac{1}{n} \log
|\det(\frac{1}{\sqrt{n}} B_n -
zI)|$ are close. As remarked earlier, the main difficulty here is that some of the singular values can be very small and thus significantly influence the value of the logarithm.
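A toy computation (the numbers are hypothetical) makes the difficulty concrete: two spectra sharing the same bulk but differing only in the single smallest singular value can have very different normalized log-sums when that value is tiny.

```python
import numpy as np

n = 1000
bulk = np.linspace(0.1, 2.0, n - 1)  # shared bulk of singular values

# same bulk, different smallest singular value
spec_a = np.append(bulk, 1e-2)
spec_b = np.append(bulk, 1e-200)

avg_log_a = np.mean(np.log(spec_a))
avg_log_b = np.mean(np.log(spec_b))
gap = avg_log_a - avg_log_b  # = (log 1e-2 - log 1e-200) / n, about 0.46
print(gap)
```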
This is where Theorem \ref{theorem:conditionTV} enters the picture. This theorem tells us that, with overwhelming probability, there are no singular values between $0$ and (say) $n^{-C}$, for some sufficiently large constant $C$. Using this critical information, with some more work\footnote{In particular, the presence of certain factors of $\log n$ arising from inserting Theorem \ref{theorem:conditionTV} into the normalized log-determinant $\frac{1}{n} \log |\det(\frac{1}{\sqrt{n}} A_n - zI)|$ forces one to establish a \emph{convergence rate} for the ESD of $\frac{1}{\sqrt{n}} A_n - zI$ which is faster than logarithmic in $n$ in a certain sense. This is what ultimately forces one to assume the bounded $(2+\eta)^{\operatorname{th}}$ moment hypothesis. Actually the method allows one to relax this hypothesis to that of assuming $\E |\a|^2 \log^C (2+|\a|) < \infty$ for some absolute constant $C$ (e.g. $C=16$ will do).}, we obtain:
\begin{theorem} \label{theorem:weakCL} \cite{TVcir1} The Circular Law holds (with both strong and weak convergence) under the extra condition that the entries have bounded $(2+\eta)^{\operatorname{th}}$ moment, for some constant $\eta >0$. \end{theorem}
\begin{remark} Shortly after the appearance of \cite{TVcir1}, G\"otze and Tikhomirov \cite{GT2} gave an alternate proof of the weak circular law under these hypotheses, using a variant of Theorem \ref{theorem:conditionTV}, which they obtained via a method from \cite{Rud}, \cite{RV}. This method is based on a different version of the Weak Inverse Theorem. \end{remark}
\subsection{Negative second moment and sharp concentration}
At the time it was written, the analysis in \cite{TVcir1} looked close to the limit of the method. It took some time to realize where the extra moment condition came from, and even more time to figure out a way to avoid it. Consider the sums
$$\frac{1}{n} \log |\det(\frac{1}{\sqrt{n}} A_n - zI)| = \frac{1}{n} \sum_{i=1}^n \log \sigma_i, $$ where $\sigma_1 \ge \dots \ge \sigma_n$ are the singular values of $\frac{1}{\sqrt n} A_n -zI$, and
$$\frac{1}{n} \log |\det(\frac{1}{\sqrt{n}} B_n - zI)| = \frac{1}{n} \sum_{i=1}^n \log \sigma'_i, $$ where $\sigma'_1 \ge \dots \ge \sigma'_n$ are the singular values of $\frac{1}{\sqrt n} B_n -zI$.
As already mentioned, we know that the bulk of the $\sigma_i$ and $\sigma_i'$ are distributed similarly. For the smallest few, we used the lower bound on $\sigma_n$ as a uniform bound to show that their contribution is negligible. This turned out to be wasteful, and we needed the extra moment assumption to compensate for the loss in this step.
In order to remove this assumption, we need to find a way to give a better bound on other singular values. An important first step is the discovery of the following simple, but useful, identity.
{\bf The Negative Second Moment Identity.} \cite{TVcir2} Let $A$ be an $m \times n$ matrix of full rank, $m \le n$. Then \begin{equation}\label{neg} \sum_{i=1} ^{m} d_{i}^{-2} = \sum_{i=1} ^{m} \sigma_{i} ^{-2} \end{equation} where $\sigma_{i}$ are the singular values of $A$ and $d_{i}$ is the distance from the $i$-th row of $A$ to the subspace spanned by the other $m-1$ rows.
One can prove this identity using undergraduate linear algebra. With this in hand, the rest of the proof falls into place\footnote{A possible alternate approach would be to bound the intermediate singular values directly, by adapting the results from \cite{RV2}. This would however require some additional effort; for instance, the results in \cite{RV2} assume zero mean and bounded operator norm, which is not true in general when considering $\frac{1}{\sqrt{n}} A_n - zI$ for non-zero $z$ assuming only a mean and variance condition on the entries of $A_n$. In any case, the analysis in \cite{RV2} ultimately goes through a computation of the distances $d_i$, similarly to the approach we present here based on the negative second moment identity.}. Consider the singular values $\sigma_{1} \ge \dots \ge \sigma_{n} $ involved in our analysis, and use $A$ as shorthand for $\frac{1}{\sqrt n} A_n -zI$.
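The identity is also easy to check numerically, computing each $d_i$ as the length of the residual after projecting row $i$ onto the span of the remaining rows (the matrix size below is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 5, 8
A = rng.standard_normal((m, n))

# d_i: distance from the i-th row to the span of the other m-1 rows
d = np.empty(m)
for i in range(m):
    others = np.delete(A, i, axis=0)  # (m-1) x n
    # least-squares projection of row i onto the row space of the others
    coef, *_ = np.linalg.lstsq(others.T, A[i], rcond=None)
    d[i] = np.linalg.norm(A[i] - others.T @ coef)

svals = np.linalg.svd(A, compute_uv=False)
lhs = np.sum(d ** -2.0)
rhs = np.sum(svals ** -2.0)
print(lhs, rhs)
```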
To bound $\sigma_{n-k}$ from below, notice that by the interlacing law $$\sigma_{n-k} (A) \ge \sigma_{m-k} (A' )$$
where $m:=n-k$ and $A'$ is an $m \times n $ truncation of $A$, obtained by omitting the last $k$ rows. The Negative Second Moment Identity implies
$$k \sigma_{m-k} (A') ^{{-2} } \le \sum _{i=1}^{m} \sigma_{i} (A')^{-2} = \sum_{i=1}^{m} d_{i} ^{-2} . $$
On the other hand, the right-hand side can be bounded efficiently, thanks to the fact that all $d_i$ are large with overwhelming probability, which, in turn, is a consequence of Talagrand's inequality \cite{Tal}:
\begin{lemma}[Distance Lemma]\cite{TVdet, TVcir2} With probability
$1- n^{-\omega(1)}$, the distance from a random row vector to a subspace of co-dimension $k$ is at least $\frac{1}{100} \sqrt {k/n}$, as long as $k \gg {\log n }$. \end{lemma}
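The concentration behind the Distance Lemma can be simulated directly. For a vector with unit-variance entries the distance to a random subspace of co-dimension $k$ concentrates around $\sqrt{k}$; the $\sqrt{k/n}$ in the lemma simply reflects the $1/\sqrt n$ normalization of the rows of $A$. A sketch with arbitrary $n$ and $k$:

```python
import numpy as np

rng = np.random.default_rng(6)
n, k = 400, 100

# random subspace of co-dimension k: span of n - k Gaussian vectors
V = rng.standard_normal((n, n - k))
Qmat, _ = np.linalg.qr(V)  # orthonormal basis of the subspace

x = rng.choice([-1.0, 1.0], size=n)  # fresh random sign vector
dist = np.linalg.norm(x - Qmat @ (Qmat.T @ x))
print(dist, np.sqrt(k))  # dist is close to sqrt(k)
```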
Thus, with overwhelming probability, $ \sum_{i=1}^{m}d_{i}^{-2}$ is $ O (mn/k)= O((n-k)n/k)$, which implies
$$\sigma_{n-k}(A) \ge \sigma_{m-k} (A') \gg \frac{k}{\sqrt{(n-k)n}}. $$
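The interlacing law invoked above can be checked numerically. For a truncation $A'$ obtained by omitting $k$ rows, deleting rows can only decrease each singular value, and shift it by at most $k$ positions: $\sigma_{i+k}(A) \le \sigma_i(A') \le \sigma_i(A)$. A sketch with arbitrary sizes:

```python
import numpy as np

rng = np.random.default_rng(4)
n, k = 40, 5
m = n - k
A = rng.standard_normal((n, n))
A_trunc = A[:m, :]  # omit the last k rows

s_full = np.linalg.svd(A, compute_uv=False)      # decreasing order
s_trunc = np.linalg.svd(A_trunc, compute_uv=False)

# interlacing for row deletion: sigma_{i+k}(A) <= sigma_i(A') <= sigma_i(A)
upper_ok = np.all(s_trunc <= s_full[:m] + 1e-10)
lower_ok = np.all(s_trunc >= s_full[k:] - 1e-10)
print(upper_ok, lower_ok)
```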
This lower bound is now sufficient to establish Theorem \ref{theorem:main1}, and with it the Circular Law in full generality.
\section{Open problems}
Our investigation leads to open problems in several areas:
\vskip2mm
{\it Combinatorics.} Our studies of the Littlewood-Offord problem focus on the linear form $S:=\sum_{i=1}^{n} v_{i}\xi_{i}$. What can one say about higher degree polynomials?
In \cite{CTV}, it was shown that for a quadratic form $Q:=\sum_{1\le i,j \le n} c_{ij}\xi_{i}\xi_{j}$ with non-zero coefficients, $\P(Q=z)$ is $O(n^{-1/8})$. It is simple to improve this bound to $O(n^{-1/4} )$ \cite{CV1}. On the other hand, we conjecture that the truth is $O(n^{-1/2} )$, which would be sharp by taking $Q= (\xi_{1} + \dots + \xi_{n} )^{2} $. Costello (personal communication) recently improved the bound to $O(n^{-3/8})$, and it looks likely that his approach will lead to the optimal bound, or something close.
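The sharpness example can be made concrete: for $Q = (\xi_{1} + \dots + \xi_{n})^{2}$ one has $\P(Q=0) = \binom{n}{n/2}2^{-n} = \Theta(n^{-1/2})$, which a brute-force enumeration confirms for small even $n$ (here $n=10$, an arbitrary choice):

```python
from itertools import product
from math import comb, sqrt, pi

n = 10
# enumerate all 2^n sign vectors and count those with Q = (sum xi_i)^2 = 0
count = sum(1 for xs in product((-1, 1), repeat=n) if sum(xs) == 0)
p_exact = count / 2 ** n

# closed form: P(sum = 0) = C(n, n/2) / 2^n, of order n^{-1/2}
p_formula = comb(n, n // 2) / 2 ** n
p_asymp = sqrt(2 / (pi * n))  # Stirling approximation
print(p_exact, p_formula, p_asymp)
```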
The situation with higher degrees is much less clear. In \cite{CTV}, a bound of the form $O(n^{-c_{k} } )$ was shown, where $c_{k} $ is a positive constant depending on $k$, the degree of the polynomial involved. In this bound $c_{k}$ decreases very fast with $k$.
\vskip2mm
{\it Smooth analysis.} The Spielman-Teng smooth analysis of the simplex algorithm \cite{ST} was done with gaussian noise. It is a very interesting problem to see whether one can achieve the same conclusion with discrete noise of fixed support, such as Bernoulli. This would give an even more convincing explanation of the efficiency of the simplex method. As discussed earlier, noise that occurs in practice typically has discrete, small support. (This question was mentioned to us by several researchers, including Spielman, a few years ago.)
As discussed earlier, we now have the discrete version of Theorem \ref{theorem:STcondition}. While Theorem \ref{theorem:STcondition} plays a very important part in the Spielman-Teng analysis \cite{ST1}, there are several other parts of the proof that make use of the continuity of the support in subtle ways. It is possible to modify these parts to work for fine enough discrete approximations of the continuous (noise) variables in question. However, to do so it seems one needs to make the size of the support very large (typically exponential in $n$, the size of the matrix).
Another exciting direction is to consider even more realistic models of noise. For instance,
\begin{itemize}
\item In several problems, the matrix may have many {\it frozen} entries, namely those which are not affected by noise. In particular, an entry which is zero (by the nature of the problem) is likely to stay zero throughout the whole computation. It is clear that the {\it pattern} of the frozen entries will be of importance. For example, if the first column consists of (frozen) zeros, then no matter how the noise affects the rest of the matrix, it will always be singular (and of course ill-conditioned). We hope to classify all patterns for which theorems such as Theorem \ref{theorem:main1} remain valid.
\item In non-frozen places, the noise could have different distributions. It is natural to think that the error at a large entry should have larger variance than the one occurring at a smaller entry.
\end{itemize}
Some preliminary results in these directions are obtained in \cite{TVstoc}. However, we are still at the very beginning of the road and much needs to be done.
\vskip2mm
{\it Circular Law.} A natural question here is to investigate the rate of convergence. In \cite{TVcir1}, we observed that under the extra assumption that the $(2+\eps)$-moments of the entries are bounded, one can obtain a rate of convergence of order $n^{-\delta}$, for some positive constant $\delta$ depending on $\eps$. The exact dependence between $\eps$ and $\delta$ is not clear.
Another question concerns the determinant of random matrices. It is known, and not hard to prove, that
$\log |\det M_{n} |$ satisfies a central limit theorem, when the entries of $M_{n} $ are iid gaussian, see \cite{Girkodet1, CV2}. Girko \cite{Girkodet1} claimed that the same result holds for much more general models of matrices. We, however, are unable to verify his arguments. It would be nice to have an alternative proof.
\end{document} |
\begin{document}
\title{Supplementary Materials: Hypothesis Testing For The Covariance Matrix In High-Dimensional Transposable Data With Kronecker Product Dependence Structure} \author{ Anestis Touloumis\\ Cancer Research UK Cambridge Institute\\
University of Cambridge\\
Cambridge CB2 0RE, U.K.\\ \texttt{Anestis.Touloumis@cruk.cam.ac.uk}
\and John C. Marioni\\ The EMBL-European Bioinformatics Institute\\
Hinxton CB10 1SD, U.K.\\ \texttt{marioni@ebi.ac.uk} \and Simon Tavar\'e\\ Cancer Research UK Cambridge Institute\\
University of Cambridge\\
Cambridge CB2 0RE, U.K.\\ \texttt{Simon.Tavare@cruk.cam.ac.uk} } \date{} \maketitle
\section{Alternative formulas} Algebraic manipulation shows that \begin{align*} T_{2N} &=Y_{2N}-2Y_{4N}+Y_{5N}\nonumber\\
&=\frac{1}{c^2 P^N_2}\sum_{i,j}^{\ast} \nolimits \mathrm{tr}(\mathbf X_{i}\mathbf X^{\prime}_{i}\mathbf X_{j}\mathbf X^{\prime}_{j})-2\frac{1}{c^2 P^N_3}\sum_{i,j,k}^{\ast}\nolimits \mathrm{tr}(\mathbf X_{i}\mathbf X^{\prime}_{i}\mathbf X_{j}\mathbf X^{\prime}_{k})\\
&+\frac{1}{c^2 P^N_4} \sum_{i,j,k,l}^{\ast}\nolimits \mathrm{tr}(\mathbf X_{i}\mathbf X^{\prime}_{j}\mathbf X_{k}\mathbf X^{\prime}_{l})\\
&=\frac{1}{c^2 P^N_2}Y^{\star}_{2N}-2\frac{1}{c^2 P^N_3}Y^{\star}_{4N}+\frac{1}{c^2 P^N_4}Y^{\star}_{5N}\nonumber\\ \end{align*} where \begin{align*} Y^{\star}_{2N} &=\sum_{i,j}^{\ast}\mathrm{tr}(\mathbf X_{i}\mathbf X^{\prime}_{i}\mathbf X_{j}\mathbf X^{\prime}_{j})\\ \end{align*} and \begin{align*} Y^{\star}_{4N} &= N^2 Y^{\star}_{41N}-(N-1)^2 Y^{\star}_{42N}-Y^{\star}_{2N}+2(N-1)Y^{\star}_{43N}\\ Y^{\star}_{41N} &=\sum_{i}\mathrm{tr}\left[(\mathbf X_{i}-\bar{\mathbf X})(\mathbf X_{i}-\bar{\mathbf X})^{\prime}\mathbf X_{i} \mathbf X_{i}^{\prime}\right]\\ \bar{\mathbf X} &=\sum_{i}\mathbf X_{i}/N\\ Y^{\star}_{42N} &=\sum_{i}\mathrm{tr}(\mathbf X_{i} \mathbf X_{i}^{\prime}\mathbf X_{i} \mathbf X_{i}^{\prime})\nonumber\\ Y^{\star}_{43N} &=\sum_{i,j}^{\ast}\mathrm{tr}(\mathbf X_{i} \mathbf X_{i}^{\prime}\mathbf X_{i} \mathbf X_{j}^{\prime})\\ \end{align*} and \begin{align*} Y^{\star}_{5N} &= \frac{1}{3}\left[(N-1)(N^2-3N+3)Y^{\star}_{42N}+(2N-3)(Y^{\star}_{2N}+Y^{\star}_{52N}+Y^{\star}_{53N}) \right. \\
&\left. {} + 2(N-3)(Y^{\star}_{4N}+Y^{\star}_{54N}+Y^{\star}_{55N})-4(N^2-3N+3)Y^{\star}_{43N}-N^2Y^{\star}_{51N} \right]\\ Y^{\star}_{51N} &=\sum_{i}\mathrm{tr}\left[(\mathbf X_{i}-\bar{\mathbf X})(\mathbf X_{i}-\bar{\mathbf X})^{\prime}(\mathbf X_{i}-\bar{\mathbf X})(\mathbf X_{i}-\bar{\mathbf X})^{\prime}\right]\\ Y^{\star}_{52N} &=\sum_{i,j}^{\ast}\mathrm{tr}(\mathbf X_{i} \mathbf X_{j}^{\prime}\mathbf X_{j} \mathbf X_{i}^{\prime})\\ Y^{\star}_{53N} &=\sum_{i,j}^{\ast}\mathrm{tr}(\mathbf X_{i} \mathbf X_{j}^{\prime}\mathbf X_{i} \mathbf X_{j}^{\prime})\\ Y^{\star}_{54N} &=\sum_{i,j,k}^{\ast}\nolimits \mathrm{tr}(\mathbf X_{i}\mathbf X^{\prime}_{j}\mathbf X_{k}\mathbf X^{\prime}_{i})\\
&=N^2Y^{\star}_{541N}+2(N-1)Y^{\star}_{43N}-(N-1)^2Y^{\star}_{42N}-Y^{\star}_{52N}\\ Y^{\star}_{541N} &=\sum_{i}\mathrm{tr}\left[(\mathbf X_{i}-\bar{\mathbf X})\mathbf X^{\prime}_{i} \mathbf X_{i}(\mathbf X_{i}-\bar{\mathbf X})^{\prime}\right]\\ Y^{\star}_{55N} &=\sum_{i,j,k}^{\ast}\nolimits \mathrm{tr}(\mathbf X_{i}\mathbf X^{\prime}_{j}\mathbf X_{i}\mathbf X^{\prime}_{k})\\
&=N^2Y^{\star}_{551N}+2(N-1)Y^{\star}_{43N}-(N-1)^2Y^{\star}_{42N}-Y^{\star}_{53N}\\ Y^{\star}_{551N} &=\sum_{i}\mathrm{tr}\left[\mathbf X_{i}(\mathbf X_{i}-\bar{\mathbf X})^{\prime} \mathbf X_{i}(\mathbf X_{i}-\bar{\mathbf X})^{\prime}\right]\\ \end{align*} Note that the cyclic property should be used if $r>c$. Using the results from \cite{Himenoa2012}, it follows that \begin{align*} T^{\ast}_{2N} &=\frac{1}{P^N_2}\sum_{i,j}^{\ast} (\mathbf R^{T}_{i}\mathbf R_{j})^2-2\frac{1}{P^N_3}\sum_{i,j,k}^{\ast} \mathbf R^{T}_{i}\mathbf R_{j}\mathbf R^{T}_{i}\mathbf R_{k}+\frac{1}{P^N_4} \sum_{i,j,k,l}^{\ast} \mathbf R^{T}_{i}\mathbf R_{j}\mathbf R^{T}_{k}\mathbf R_{l}\\
&= \frac{N-1}{N(N-2)(N-3)}\left[(N-1)(N-2)\mathrm{tr}(\mathbf S^2)+\mathrm{tr}^2(\mathbf S)-NQ\right] \end{align*} where $$Q=\frac{1}{N-1}\sum_{i=1}^N \left[(\mathbf{R}_i-\bar{\mathbf{R}})^T(\mathbf{R}_i-\bar{\mathbf{R}})\right]^2$$ and $\bar{\mathbf R}=\sum_{i}\mathbf R_{i}/N$. The equivalent forms of $T_{2N}$ and $T^{\ast}_{2N}$ imply that the computational cost of the proposed statistics reduces from $O(N^4)$ to $O(N^2)$.
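The claimed equivalence can be verified by brute force on a tiny example. In the sketch below (sizes arbitrary) the last sum is read as the scalar $(\mathbf R^{T}_{i}\mathbf R_{j})(\mathbf R^{T}_{k}\mathbf R_{l})$, and $\mathbf S$ is taken to be the sample covariance matrix with divisor $N-1$:

```python
import numpy as np
from itertools import permutations

rng = np.random.default_rng(5)
N, p = 8, 4
R = rng.standard_normal((N, p))
G = R @ R.T  # Gram matrix, G[i, j] = R_i^T R_j

def u_mean(f, r):
    """Average of f over ordered r-tuples of pairwise distinct indices."""
    vals = [f(t) for t in permutations(range(N), r)]
    return sum(vals) / len(vals)

# O(N^4) U-statistic form of T*_2N
t_brute = (u_mean(lambda t: G[t[0], t[1]] ** 2, 2)
           - 2 * u_mean(lambda t: G[t[0], t[1]] * G[t[0], t[2]], 3)
           + u_mean(lambda t: G[t[0], t[1]] * G[t[2], t[3]], 4))

# O(N^2) closed form via S (sample covariance, divisor N-1) and Q
Rc = R - R.mean(axis=0)
S = Rc.T @ Rc / (N - 1)
Q = np.sum(np.sum(Rc ** 2, axis=1) ** 2) / (N - 1)
t_closed = (N - 1) / (N * (N - 2) * (N - 3)) * (
    (N - 1) * (N - 2) * np.trace(S @ S) + np.trace(S) ** 2 - N * Q)
print(t_brute, t_closed)
```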
\section{Useful Identities} We list four properties of the Kronecker and Hadamard product (P1-P4) and five results (P5-P9) under the nonparametric model (2.1) with $\mathbf M=\mathbf 0$ because the test statistics $V^{\ast}_{N}$ and $U^{\ast}_{N}$ are invariant to location transformations. \begin{itemize} \item [P1:]$\mathrm{\mathrm{\mathrm{tr}}}(\mathbf {A^{\prime}BCD^{\prime}})=\mathrm{vec}(\mathbf A)^{\prime}(\mathbf D \otimes \mathbf B) \mathrm{vec}(\mathbf C)$. \item [P2:]$\mathrm{tr}(\mathbf A^p \otimes \mathbf B^q)= \mathrm{tr}(\mathbf A^p) \mathrm{tr}(\mathbf B^q)$ for $p,q=1,2,3\ldots$ \item [P3:]$\mathrm{tr}\left[(\mathbf A \otimes \mathbf B) \circ (\mathbf A \otimes \mathbf B)\right]= \mathrm{tr}(\mathbf A \circ \mathbf A) \mathrm{tr}(\mathbf B \circ \mathbf B) $. \item [P4:]$\mathrm{vec}(\mathbf A \mathbf B\mathbf C^{\prime})=(\mathbf{C} \otimes \mathbf{A}) \mathrm{vec}(\mathbf{B})$. \item [P5:]$\mathrm{E}[\mathbf Z_{i} \mathbf B_2 \mathbf Z^{\prime}_{i}]= \mathrm{tr}(\mathbf B_2) \mathbf I_{r}$. \item [P6:]$\mathrm{E}[\mathbf Z^{\prime}_{i} \mathbf B_1 \mathbf Z_{i}]= \mathrm{tr}(\mathbf B_1) \mathbf I_{c}$. \item [P7:]$\mathrm{E}[\mathrm{tr}^2(\mathbf B_1 \mathbf Z_{i} \mathbf B_2 \mathbf Z^{\prime}_{j})]= \mathrm{tr}(\mathbf B^2_1) \mathrm{tr}(\mathbf B^2_2)$. \item [P8:]$\mathrm{E}[\mathrm{tr}(\mathbf Z^{\prime}_{i} \mathbf B_1 \mathbf Z_{i} \mathbf B_2 \mathbf Z^{\prime}_{i} \mathbf B_1 \mathbf Z_{i} \mathbf B_3)]= \mathrm{tr}^2(\mathbf B_1)\mathrm{tr}(\mathbf B_2\mathbf B_3)+\mathrm{tr}(\mathbf B^2_1)\mathrm{tr}(\mathbf B_2\mathbf B_3)+\mathrm{tr}(\mathbf B^2_1)\mathrm{tr}(\mathbf B_2)\mathrm{tr}(\mathbf B_3)+B \mathrm{tr}(\mathbf B_1 \circ \mathbf B_1)\mathrm{tr}(\mathbf B_2 \circ \mathbf B_3)$. 
\item [P9:]$\mathrm{E}\left[\mathrm{tr}\left[(\mathbf B_1 \mathbf Z_{i} \mathbf B_2 \mathbf Z^\prime_{i} \mathbf B_1) \circ (\mathbf B_1 \mathbf Z_{i} \mathbf B_2 \mathbf Z^\prime_{i} \mathbf B_1) \right]\right]= B \mathrm{tr}(\mathbf B_2 \circ \mathbf B_2) \sum_{a,b} B^4_{1,ab}+ [2\mathrm{tr}(\mathbf B_2^2)+\mathrm{tr}^2(\mathbf B_2)] \mathrm{tr}(\mathbf B_1^2 \circ \mathbf B_1^2)$, where $B^4_{1,ab}$ is the $(a,b)$-th element of $\mathbf B^4_1$. \end{itemize} In the above, it is assumed that the dimensions of the involved matrices are meaningful for each of the operations considered, the matrices $\mathbf B_1$, $\mathbf B_2$ and $\mathbf B_3$ are symmetric and that the elements of $\mathbf Z_i$ satisfy the moment restrictions defined below model (2.1).
\section{Moment derivations}\label{Moment Derivations} We derive the first two moments for the $U$-statistics in $T_{1N}$ and $T_{2N}$. First note that $\mathrm{E}[Y_{1N}]=\mathrm{tr}(\boldsymbol \Sigma_R)$, $\mathrm{E}[Y_{2N}]=\mathrm{tr}(\boldsymbol \Sigma^2_R)$ and $\mathrm{E}[Y_{3N}]=\mathrm{E}[Y_{4N}]=\mathrm{E}[Y_{5N}]=0$. Now \begin{align*} \mathrm{E}[Y^{2}_{1N}]&=\frac{1}{c^2 N} \left\{\mathrm{E}[\mathrm{tr}^2(\mathbf X_i\mathbf X^{\prime}_i)]+(N-1)\mathrm{E}[\mathrm{tr}(\mathbf X_i\mathbf X^{\prime}_i)\mathrm{tr}(\mathbf X_j\mathbf X^{\prime}_j)] \right\}\\
&=\mathrm{tr}^2(\boldsymbol \Sigma_R)+\frac{2}{N}\frac{\mathrm{tr}(\boldsymbol \Sigma^2_C)}{c^2}\mathrm{tr}(\boldsymbol \Sigma^2_R)+\frac{B}{N}\frac{\mathrm{tr}(\boldsymbol \Sigma_C \circ \boldsymbol \Sigma_C)}{c^2} \mathrm{tr}(\boldsymbol \Sigma_R \circ \boldsymbol \Sigma_R),\\ \mathrm{E}[Y^{2}_{3N}]&=\frac{2}{c^2 N(N-1)}\mathrm{E}[\mathrm{tr}^2(\mathbf X_i\mathbf X^{\prime}_j)]=\frac{2}{N(N-1)}\frac{\mathrm{tr}(\boldsymbol \Sigma^2_C)}{c^2}\mathrm{tr}(\boldsymbol \Sigma^2_R), \end{align*} \begin{align*} \mathrm{E}[Y^2_{2N}] =&\frac{2}{c^4 P^N_2} \mathrm{E}[\mathrm{tr}^2(\mathbf X_i\mathbf X^{\prime}_i\mathbf X_j\mathbf X^{\prime}_j)]\\
&+\frac{(N-2)(N-3)}{c^4 P^N_2}\mathrm{E}[\mathrm{tr}(\mathbf X_i\mathbf X^{\prime}_i\mathbf X_j\mathbf X^{\prime}_j)\mathrm{tr}(\mathbf X_k\mathbf X^{\prime}_k\mathbf X_l\mathbf X^{\prime}_l)]\\
&+\frac{4(N-2)}{c^4 P^N_2}\mathrm{E}[\mathrm{tr}(\mathbf X_i\mathbf X^{\prime}_i\mathbf X_j\mathbf X^{\prime}_j)\mathrm{tr}(\mathbf X_i\mathbf X^{\prime}_i\mathbf X_k\mathbf X^{\prime}_k)] \\
=&\mathrm{tr}^2(\boldsymbol \Sigma_R^2)+\frac{8}{N}\frac{\mathrm{tr}(\boldsymbol \Sigma_C^2)}{c^2}\mathrm{tr}(\boldsymbol \Sigma_R^4)+\frac{4}{P^N_2}\frac{\mathrm{tr}^2(\boldsymbol \Sigma_C^2)}{c^4} \left[\mathrm{tr}^2(\boldsymbol \Sigma_R^2)+\mathrm{tr}(\boldsymbol \Sigma_R^4)\right]\\
&+\frac{4B}{N}\frac{\mathrm{tr}(\boldsymbol \Sigma_C \circ \boldsymbol \Sigma_C)}{c^2}\mathrm{tr}(\boldsymbol \Sigma^2_R \circ \boldsymbol \Sigma^2_R)\\
&+\frac{4B}{P^N_2}\frac{\mathrm{tr}(\boldsymbol \Sigma_C \circ \boldsymbol \Sigma_C)}{c^2}\frac{\mathrm{tr}(\boldsymbol \Sigma^2_C)}{c^2}\mathrm{tr}(\boldsymbol \Sigma^2_R \circ \boldsymbol \Sigma^2_R)\\
&+\frac{2B^2}{N(N-1)}\frac{\mathrm{tr}^2(\boldsymbol \Sigma_C \circ \boldsymbol \Sigma_C)}{c^4}\sum_{a,b} \Sigma^4_{R,ab}, \end{align*} \begin{align*} \mathrm{E}[Y^2_{4N}] =&\frac{2}{c^4 P^N_3}\mathrm{E}[\mathrm{tr}^2(\mathbf X_i\mathbf X^{\prime}_i\mathbf X_j\mathbf X^{\prime}_k)]\\ &+\frac{2(N-3)}{c^4 P^N_3}\mathrm{E}[\mathrm{tr}(\mathbf X_i\mathbf X^{\prime}_i\mathbf X_j\mathbf X^{\prime}_k)\mathrm{tr}(\mathbf X_l\mathbf X^{\prime}_l\mathbf X_j\mathbf X^{\prime}_k)]\\
=&\frac{2}{N(N-1)}\frac{\mathrm{tr}(\boldsymbol \Sigma^2_C)}{c^2}\left\{\mathrm{tr}(\boldsymbol \Sigma^4_R)+\frac{\mathrm{tr}(\boldsymbol \Sigma^2_C)}{c^2}\frac{\mathrm{tr}^2(\boldsymbol \Sigma^2_R)+\mathrm{tr}(\boldsymbol \Sigma^4_R)}{(N-2)(N-3)}\right\}\\
&+\frac{2B}{P^N_3}\frac{\mathrm{tr}(\boldsymbol \Sigma^2_C)}{c^2}\frac{\mathrm{tr}(\boldsymbol \Sigma_C \circ \boldsymbol \Sigma_C)}{c^2}\mathrm{tr}(\boldsymbol \Sigma^2_R \circ \boldsymbol \Sigma^2_R), \end{align*} and \begin{align*} \mathrm{E}[Y^2_{5N}] =&\frac{4}{c^4 P^N_4}\mathrm{E}[\mathrm{tr}(\mathbf X_i\mathbf X^{\prime}_j\mathbf X_k\mathbf X^{\prime}_l)\mathrm{tr}(\mathbf X_i\mathbf X^{\prime}_j\mathbf X_k\mathbf X^{\prime}_l)]\\ &+\frac{4}{c^4 P^N_4}\mathrm{E}[\mathrm{tr}(\mathbf X_i\mathbf X^{\prime}_j\mathbf X_k\mathbf X^{\prime}_l)\mathrm{tr}(\mathbf X_i\mathbf X^{\prime}_j\mathbf X_l\mathbf X^{\prime}_k)]\\
&+\frac{4}{c^4 P^N_4}\mathrm{E}[\mathrm{tr}(\mathbf X_i\mathbf X^{\prime}_j\mathbf X_k\mathbf X^{\prime}_l)\mathrm{tr}(\mathbf X_i\mathbf X^{\prime}_k\mathbf X_j\mathbf X^{\prime}_l)]\\
&+\frac{4}{c^4 P^N_4}\mathrm{E}[\mathrm{tr}(\mathbf X_i\mathbf X^{\prime}_j\mathbf X_k\mathbf X^{\prime}_l)\mathrm{tr}(\mathbf X_i\mathbf X^{\prime}_k\mathbf X_l\mathbf X^{\prime}_j)]\\
&+\frac{4}{c^4 P^N_4}\mathrm{E}[\mathrm{tr}(\mathbf X_i\mathbf X^{\prime}_j\mathbf X_k\mathbf X^{\prime}_l)\mathrm{tr}(\mathbf X_i\mathbf X^{\prime}_l\mathbf X_j\mathbf X^{\prime}_k)]\\
&+\frac{4}{c^4 P^N_4}\mathrm{E}[\mathrm{tr}(\mathbf X_i\mathbf X^{\prime}_j\mathbf X_k\mathbf X^{\prime}_l)\mathrm{tr}(\mathbf X_i\mathbf X^{\prime}_l\mathbf X_k\mathbf X^{\prime}_j)]\\
&=\frac{4}{P^N_4} \left\{\frac{\mathrm{tr}^2(\boldsymbol \Sigma^2_C)}{c^4}\left[\mathrm{tr}^2(\boldsymbol \Sigma^2_R)+\mathrm{tr}(\boldsymbol \Sigma^4_R)\right]+\frac{\mathrm{tr}(\boldsymbol \Sigma^4_C)}{c^4}\left[\mathrm{tr}^2(\boldsymbol \Sigma^2_R)+3\mathrm{tr}(\boldsymbol \Sigma^4_R)\right]\right\}. \end{align*} Finally, $\mathrm{E}[Y_{1N}Y_{3N}]=\mathrm{E}[Y_{1N}Y_{4N}]=\mathrm{E}[Y_{1N}Y_{5N}]=\mathrm{E}[Y_{2N}Y_{3N}]=\mathrm{E}[Y_{2N}Y_{4N}]=\mathrm{E}[Y_{2N}Y_{5N}]=\mathrm{E}[Y_{3N}Y_{4N}]=\mathrm{E}[Y_{3N}Y_{5N}]=\mathrm{E}[Y_{4N}Y_{5N}]=0$ and \begin{align*} \mathrm{E}[Y_{1N}Y_{2N}]=&\frac{2}{c^{3}N}\mathrm{E}[\mathrm{tr}(\mathbf X_i \mathbf X^{\prime}_i)\mathrm{tr}(\mathbf X_i \mathbf X^{\prime}_i\mathbf X_j \mathbf X^{\prime}_j)]\\
&+\frac{N-2}{c^3N}\mathrm{E}[\mathrm{tr}(\mathbf X_i \mathbf X^{\prime}_i)\mathrm{tr}(\mathbf X_j \mathbf X^{\prime}_j\mathbf X_k \mathbf X^{\prime}_k)]\\
=&\mathrm{tr}(\boldsymbol \Sigma_R^2) \mathrm{tr}(\boldsymbol \Sigma_R)+\frac{4}{N}\frac{\mathrm{tr}(\boldsymbol \Sigma_C^2)}{c^2}\mathrm{tr}(\boldsymbol \Sigma_R^3)\\
&+\frac{2B}{N}\frac{\mathrm{tr}(\boldsymbol \Sigma_C \circ \boldsymbol \Sigma_C)}{c^2} \mathrm{tr}(\boldsymbol \Sigma^2_R \circ \boldsymbol \Sigma_R). \end{align*}
\section{Proofs} \begin{proof} The essential step is to show that under model~(2.1) and assumption~(3.3) $$\frac{G_N-\mathrm{E}[G_N]}{\sqrt{\mathrm{Var}[G_N]}} \stackrel{d}{\rightarrow}\mathrm{N}(0,1),$$ where $G_N=\kappa_{1N}Y_{1N}+\kappa_{2N} Y_{2N}$ and $\kappa_{1N}$, $\kappa_{2N}$ are arbitrary constants. To accomplish this, the martingale central limit theorem will be used. Let $\mathcal{F}_0=\{\emptyset,\Omega\}$, $\mathcal{F}_k=\sigma\{\mathbf X_1,\ldots,\mathbf X_k\}$ for $k=1,\ldots,N$, $E_k$ be the conditional expectation given $\mathcal{F}_k$, $D_{Nk}=(E_k-E_{k-1})G_N$ and $S_{Nm}=\sum_{k=1}^m D_{Nk}=E_m[G_N]-\mathrm{E}[G_N]$. Write \begin{align*} D_{Nk}&=\kappa_{1N}(E_k-E_{k-1})Y_{1N}+\kappa_{2N}(E_k-E_{k-1})Y_{2N}\\
&=\frac{1}{cN}\left[\mathbf W^{\prime}_{k} \left(\boldsymbol \Sigma_C \otimes \boldsymbol \Lambda_N \right) \mathbf W_{k}-\mathrm{tr}\left(\boldsymbol \Sigma_C \otimes \boldsymbol \Lambda_N \right)\right]\\
&+\frac{2\kappa_{2N}}{c^2 P^{N}_2}\left[\mathbf W^{\prime}_{k} \left(\boldsymbol \Sigma_C \otimes \mathbf M_{k-1} \right) \mathbf W_{k}-\mathrm{tr}\left(\boldsymbol \Sigma_C \otimes \mathbf M_{k-1} \right)\right] \end{align*} where $\mathbf W_{i}=\mathrm{vec}(\mathbf Z_{i})$, $\boldsymbol \Lambda_N=\kappa_{1N} \boldsymbol \Sigma_R+2\kappa_{2N} \boldsymbol \Sigma^2_R$, $\mathbf Q_{k}=\sum_{i=1}^{k} (\mathbf X_{i}\mathbf X^{\prime}_{i}-c \boldsymbol \Sigma_R)$ and $\mathbf M_{k}=\boldsymbol \Sigma^{1/2}_R \mathbf Q_{k} \boldsymbol \Sigma^{1/2}_R$. We need the following three lemmata: \begin{lemma} For any $N$, $\{D_{Nk},1\leq k \leq N\}$ is a martingale difference sequence with respect to the $\sigma$-fields $\{\mathcal{F}_k,1\leq k \leq N\}$. \end{lemma} \begin{proof}
Note that $\mathrm{E}[D_{Nk}]=0$ and write $S_{Nq}=S_{Nm}+E_q[G_N]-E_m[G_N]$ for $q>m$. Then it can be shown that $\mathrm{E}[S_{Nq}|\mathcal{F}_m]=S_{Nm}$ as desired. \end{proof} \begin{lemma} Let $\sigma^2_{Nk}=E_{k-1}[D^2_{Nk}]$. Under assumption~(3.3) $$\frac{\sum_{k=1}^N \sigma^2_{Nk}}{\mathrm{Var}[G_N]} \stackrel{P}{\rightarrow} 1.$$ \end{lemma} \begin{proof} First note that \begin{align*} \mathrm{Var}[G_N]=&\kappa^2_{1N} \mathrm{Var}[Y_{1N}]+\kappa^2_{2N} \mathrm{Var}[Y_{2N}]+2\kappa_{1N}\kappa_{2N}\mathrm{cov}[Y_{1N},Y_{2N}]\\
=&\frac{2}{N}\frac{\mathrm{tr}(\boldsymbol \Sigma^2_C)}{c^2}\mathrm{tr}(\boldsymbol \Lambda^2_N)+\frac{B}{N}\frac{\mathrm{tr}(\boldsymbol \Sigma_C \circ \boldsymbol \Sigma_C)}{c^2}\mathrm{tr}(\boldsymbol \Lambda_N \circ \boldsymbol \Lambda_N)\\
&+\frac{4\kappa^2_{2N}}{N^2}\frac{\mathrm{tr}^2(\boldsymbol \Sigma_C^2)}{c^4} \mathrm{tr}^2(\boldsymbol \Sigma_R^2)\left\{1+O(N^{-1})\right\}. \end{align*} Next note that for large $N$, there exists a constant $\lambda_1$ such that $$\left(\mathrm{Var}[G_N]\right)^2\geq \lambda_1\max\left\{\frac{\kappa^2_{2N}}{N^3}\frac{\mathrm{tr}^3(\boldsymbol \Sigma_C^2)}{c^6}\mathrm{tr}(\boldsymbol \Lambda^2_N)\mathrm{tr}^2(\boldsymbol \Sigma_R^2),\frac{\kappa^4_{2N}}{N^4}\frac{\mathrm{tr}^4(\boldsymbol \Sigma_C^2)}{c^8}\mathrm{tr}^4(\boldsymbol \Sigma_R)\right\}.$$ Next note that \begin{align*} \sum_{k=1}^N \sigma^2_{Nk}=&\frac{8\kappa_{2N}}{NP^N_2}\frac{\mathrm{tr}(\boldsymbol \Sigma^2_C)}{c^2}\frac{1}{c} \sum_{k=1}^N(\kappa_{1N}\tau_{2(k-1)}+2\kappa_{2N}\tau_{3(k-1)})\\
&+\frac{4\kappa_{2N}B}{NP^N_2}\frac{\mathrm{tr}(\boldsymbol \Sigma_C \circ \boldsymbol \Sigma_C)}{c^2}\frac{1}{c} \sum_{k=1}^N \mathrm{tr}(\mathbf M_{k-1} \circ \boldsymbol \Lambda_N)\\
&+\frac{8\kappa^2_{2N}}{(P^N_2)^2}\frac{\mathrm{tr}(\boldsymbol \Sigma^2_C)}{c^2}\frac{1}{c^2} \sum_{k=1}^N \mathrm{tr}(\mathbf M^2_{k-1})\\
&+\frac{4B\kappa^2_{2N}}{(P^N_2)^2}\frac{\mathrm{tr}(\boldsymbol \Sigma_C \circ \boldsymbol \Sigma_C)}{c^2}\frac{1}{c^2} \sum_{k=1}^N \mathrm{tr}(\mathbf M_{k-1} \circ \mathbf M_{k-1})+H\\
&=H_{1N}+H_{2N}+H_{3N}+H_{4N}+H, \end{align*} where $H$ is a finite constant, $\tau_{2k}=\sum_{i=1}^{k} \mathrm{tr}(\mathbf Q_{k} \boldsymbol \Sigma^2_R)$ and $\tau_{3k}=\sum_{i=1}^{k} \mathrm{tr}(\mathbf Q_{k} \boldsymbol \Sigma^3_R)$. To complete the proof, we need to show that $\mathrm{Var}[H_{Nm}]=o\left\{(\mathrm{Var}[G_N])^2\right\}$ for $m=1,2,3,4$. Note that when $k \leq j$ \begin{equation*} \mathrm{cov}[\kappa_{1N} \tau_{2k}+2 \kappa_{2N} \tau_{3k}, \kappa_{1N} \tau_{2j}+2 \kappa_{2N} \tau_{3j}] = \mathrm{Var}[\kappa_{1N} \tau_{2k}+2 \kappa_{2N} \tau_{3k}] \end{equation*} and \begin{align*} \mathrm{Var}[\kappa_{1N} \tau_{2k}+2 \kappa_{2N} \tau_{3k}]=&k \mathrm{Var}[\kappa_{1N} \mathrm{tr}(\mathbf X_i \mathbf X^{\prime}_i \boldsymbol\Sigma^2_R)+ 2 \kappa_{2N} \mathrm{tr}(\mathbf X_i \mathbf X^{\prime}_i \boldsymbol\Sigma^3_R)] \\
=&k \mathrm{Var}[\mathbf W^{\prime}_i (\boldsymbol \Sigma_C \otimes \boldsymbol \Lambda_N \boldsymbol\Sigma^2_R) \mathbf W_i] \\
=&2k \mathrm{tr}(\boldsymbol \Sigma^2_C) \mathrm{tr}\left[(\boldsymbol \Lambda_N \boldsymbol \Sigma^2_R)^2\right]\\
&+kB \mathrm{tr}(\boldsymbol \Sigma_C \circ \boldsymbol \Sigma_C) \mathrm{tr}\left[(\boldsymbol \Lambda_N \boldsymbol \Sigma_R^2) \circ (\boldsymbol \Lambda_N \boldsymbol \Sigma_R^2)\right] \\
\leq &k (2+\max\{0,B\}) \mathrm{tr}(\boldsymbol \Sigma^2_C) \mathrm{tr}(\boldsymbol \Lambda_N^2) \mathrm{tr}(\boldsymbol \Sigma^4_R) \end{align*} Therefore there exists a constant $\lambda_2$ such that \begin{equation*} \frac{\mathrm{Var}[H_{N1}]}{(\mathrm{Var}[G_N])^2} \leq \frac{\lambda_2 \frac{\kappa^2_{2N}}{N^3}\left(\frac{\mathrm{tr}(\boldsymbol \Sigma^2_C)}{c^2}\right)^3 \mathrm{tr}(\boldsymbol \Lambda_N^2)\mathrm{tr}(\boldsymbol \Sigma^4_R)}{\lambda_1 \frac{\kappa^2_{2N}}{N^3}\left(\frac{\mathrm{tr}(\boldsymbol \Sigma^2_C)}{c^2}\right)^3 \mathrm{tr}(\boldsymbol \Lambda_N^2)\mathrm{tr}^2(\boldsymbol{\Sigma}^2_R)}=\frac{\lambda_2}{\lambda_1}\frac{\mathrm{tr}(\boldsymbol \Sigma^4_R)}{\mathrm{tr}^2(\boldsymbol{\Sigma}^2_R)}\rightarrow 0 \end{equation*} as desired. Similar operations can show that $\mathrm{Var}[H_{Nm}]=o(\mathrm{Var}^2[G_N])$ for $m=2,3,4$. \end{proof} \begin{lemma} Under assumption~(3.3) $$\frac{\sum_{k=1}^N \mathrm{E}[D^4_{Nk}]}{(\mathrm{Var}[G_N])^2}\rightarrow 0.$$ \end{lemma} \begin{proof} By the Cauchy-Schwarz inequality there exist constants $\lambda_3$ and $\lambda_4$ such that \begin{align*} \mathrm{E}[D^4_{Nk}]& \leq \lambda_3\frac{1}{c^4N^3}\mathrm{E}[\mathbf W^{\prime}_{k} \left(\boldsymbol \Sigma_C \otimes \boldsymbol \Lambda_N \right) \mathbf W_{k}-\mathrm{tr}\left(\boldsymbol \Sigma_C \otimes \boldsymbol \Lambda_N \right)]^4\\
&+\lambda_4\frac{\kappa^4_{2N}}{c^8 (P^{N}_2)^4}\sum_{k=1}^N \mathrm{E}[\mathbf W^{\prime}_{k} \left(\boldsymbol \Sigma_C \otimes \mathbf M_{k-1} \right) \mathbf W_{k}-\mathrm{tr}\left(\boldsymbol \Sigma_C \otimes \mathbf M_{k-1} \right)]^4\\
& \leq \frac{\lambda^{\prime}_3}{N^3} \frac{\mathrm{tr}^2(\boldsymbol \Sigma_C^2)}{c^4} \mathrm{tr}^2(\boldsymbol \Lambda_N^2) +\frac{\lambda^{\prime}_4}{N^6} \frac{\mathrm{tr}^2(\boldsymbol \Sigma_C^2)}{c^4} \mathrm{tr}^2(\boldsymbol \Sigma_R^4). \end{align*} Hence $$\frac{\sum_{k=1}^N \mathrm{E}[D^4_{Nk}]}{(\mathrm{Var}[G_N])^2} \leq \frac{\lambda^{\prime}_1}{cN}+\frac{\lambda^{\prime}_2}{cN^2} \frac{\mathrm{tr}^2(\boldsymbol \Sigma_R^4)}{\mathrm{tr}^4(\boldsymbol \Sigma_R^2)} \rightarrow 0,$$ for some constants $\lambda^{\prime}_1$, $\lambda^{\prime}_2$, $\lambda^{\prime}_3$ and $\lambda^{\prime}_4$. \end{proof} Combining the three lemmata, it follows that $(G_N-\mathrm{E}[G_N])/\sqrt{\mathrm{Var}[G_N]}\stackrel{d}{\rightarrow}\mathrm{N}(0,1)$. Next write $$\frac{\mathrm{tr}^2(\boldsymbol \Sigma_R)}{\mathrm{tr}(\boldsymbol \Sigma^2_R)} \frac{U_N+1}{r}-1=\frac{\tilde{U}_N-\tilde{T}_{1N}^2}{\left(1+\tilde{T}_{1N}\right)^2},$$ where $$\tilde{U}_N=\frac{T_{2N}}{\mathrm{tr}(\boldsymbol \Sigma^2_R)}-2\frac{T_{1N}}{\mathrm{tr}(\boldsymbol \Sigma_R)}+1 \text{ and } \tilde{T}_{1N}=\frac{T_{1N}-\mathrm{tr}(\boldsymbol \Sigma_R)}{\mathrm{tr}(\boldsymbol \Sigma_R)}.$$ To complete the proof of this theorem, we need to show that $\tilde{T}_{1N} \stackrel{P}{\rightarrow} 0$, $\sigma^{-1}_U \tilde{T}^2_{1N} \stackrel{P}{\rightarrow} 0$ and $\sigma^{-1}_U \tilde{U}_N \stackrel{d}{\rightarrow}\mathrm{N}(0,1)$. These results are established in a similar fashion to the proof of Theorem 1 in \cite{Chen2010a}. Since \begin{align*}
\mathrm{Var}[\tilde{T}_{1N}]&=\frac{2}{N-1}\frac{\mathrm{tr}(\boldsymbol \Sigma^2_C)}{c^2}\frac{\mathrm{tr}(\boldsymbol \Sigma^2_R)}{\mathrm{tr}^2(\boldsymbol \Sigma_R)}+\frac{B}{N}\frac{\mathrm{tr}(\boldsymbol \Sigma_C \circ \boldsymbol \Sigma_C)}{c^2} \frac{\mathrm{tr}(\boldsymbol \Sigma_R \circ \boldsymbol \Sigma_R)}{{\mathrm{tr}^2(\boldsymbol \Sigma_R)}}\\
&\leq \left[\frac{2}{N-1}+\frac{\max\{0,B\}}{N}\right]\frac{\mathrm{tr}(\boldsymbol \Sigma^2_C)}{c^2} \frac{\mathrm{tr}(\boldsymbol \Sigma^2_R)}{\mathrm{tr}^2(\boldsymbol \Sigma_R)},
\end{align*} and $\mathrm{E}[\tilde{T}_{1N}]=0$, it follows that $\tilde{T}_{1N} \stackrel{P}{\rightarrow} 0$ and $\sigma^{-1}_U \tilde{T}_{1N} \stackrel{P}{\rightarrow} 0$. Finally, note that $\mathrm{Var}\left[\tilde{U}_N\right]=\sigma^2_U\left\{1+o(1)\right\}$. It therefore follows that $\sigma^{-1}_U \tilde{U}_N \stackrel{d}{\rightarrow}\mathrm{N}(0,1)$ as desired. \end{proof}
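The negligibility arguments above lean on the assumption that $\mathrm{tr}(\boldsymbol \Sigma^4_R)/\mathrm{tr}^2(\boldsymbol{\Sigma}^2_R) \rightarrow 0$ as $r$ grows. A minimal numerical sketch of this decay, using a hypothetical AR(1)-type row covariance $(\boldsymbol\Sigma_R)_{ab} = 0.5^{|a-b|}$ (an illustrative choice, not a matrix from the paper):

```python
# Illustrate tr(Sigma^4) / tr^2(Sigma^2) -> 0 as the dimension r grows, for a
# hypothetical AR(1)-type covariance Sigma[a][b] = 0.5**|a-b| (an assumption
# made only for this demo).
def matmul(A, B):
    r = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(r)) for j in range(r)]
            for i in range(r)]

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

ratios = []
for r in [8, 16, 32, 64]:
    S = [[0.5 ** abs(a - b) for b in range(r)] for a in range(r)]
    S2 = matmul(S, S)
    S4 = matmul(S2, S2)
    ratios.append(trace(S4) / trace(S2) ** 2)

# Both traces grow linearly in r (the eigenvalues stay bounded), so the
# ratio decays roughly like 1/r.
assert all(ratios[i + 1] < ratios[i] for i in range(len(ratios) - 1))
```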
\begin{proof}[Proof of Theorem 2] Derivations in \cite{Chen2010a} imply that $\mathrm{E}[T^{\ast}_{2N}]=\mathrm{tr}(\boldsymbol \Omega^2)$ and $\mathrm{Var}[T^{\ast}_{2N}]/\mathrm{tr}^2(\boldsymbol \Omega^2) \rightarrow 0$. Therefore, $T^{\ast}_{2N}$ is a ratio-consistent estimator of $\mathrm{tr}(\boldsymbol \Omega^2)$. Similarly, the moment derivations in Section \ref{Moment Derivations} imply that $T_{2N}$ is a ratio-consistent estimator of $\mathrm{tr}(\boldsymbol \Sigma_R^2)$. The last claim of the theorem follows from the continuous mapping theorem. \end{proof}
\begin{proof}[Proof of Theorem 3] The proof is similar to that of Theorem 2 in \cite{Chen2010a}. Write $rV_N=(Y_{2N}-2Y_{1N}+r)+2Y_{3N}-2Y_{4N}+Y_{5N}$ and note that $\mathrm{E}[rV_N]=\mathrm{tr}\left[(\boldsymbol \Sigma_R-\mathbf I_{r})^2\right]$ and $\mathrm{Var}[rV_N]=\sigma^2_V\left\{1+o(1)\right\}$. Therefore $$\frac{r V_N-\mathrm{tr}\left[(\boldsymbol \Sigma_R -\mathbf I_{r})^2\right]}{\sigma_V} \stackrel{d}{\rightarrow}\mathrm{N}(0,1),$$ as desired. \end{proof}
\section{Simulation Results} \indent Table~\ref{tab:size} contains the empirical levels of the proposed sphericity test for the two distributional scenarios under a weak and a strong column-wise correlation pattern. The sphericity test was slightly liberal for small values of $N$, $r$ or $c$, but the difference between the empirical and the nominal level diminished as $N$, $r$ and $c$ increased, reflecting the asymptotic nature of the proposed test. Conditional on $(N,r,c)$ and $\boldsymbol \Sigma_C$, the empirical levels were comparable under both distributional scenarios, as expected from the non-parametric nature of the test statistic. In sampling schemes with small $N$ and/or $r$, the empirical level was closer to the nominal level when $\rho=0.85$ than when $\rho=0.15$. Hence, the proposed test does not confound a weak row-wise correlation pattern with a strong column-wise pattern, but some attention is required when both correlation patterns are weak and the sample size is small.
\indent Table~\ref{Power1} displays the empirical powers of the proposed sphericity test under the diagonal form for $\boldsymbol \Sigma_R$, and Table~\ref{Power2} contains the empirical powers under the tridiagonal form for $\boldsymbol \Sigma_R$. We do not report results for the compound symmetry structure since the corresponding empirical powers were almost all equal to $1.0$. We observed the following trends. First, the empirical powers were affected by the strength of the column-wise dependence structure, with weak correlation patterns boosting the empirical powers. According to the power analysis, this should be attributed to the value of $\mathrm{tr}(\boldsymbol \Sigma^4_R)/\mathrm{tr}^2(\boldsymbol \Sigma^2_R)$, which converges to $0$ faster for the smaller value of $\rho$ when the other parameters are kept fixed. Second, the empirical powers approached $1.0$ as one or more of the elements in the triplet $(N,r,c)$ increased, indicating the consistency of the proposed test under the working assumptions that allow us to handle the `small $N$, large $p$' situation. Finally, no significant difference was noticed between the empirical powers in the two distributional scenarios.
\indent Due to the lack of alternative testing procedures, and since $\mathrm{tr}(\boldsymbol \Sigma^2_C)/c^2 \in [1/c,1]$, we considered three alternative statistics by setting the `nuisance' ratio $\mathrm{tr}(\boldsymbol \Sigma^2_C)/c^2$ equal to the two boundary values and to the true value of this ratio. If $\mathrm{tr}(\boldsymbol \Sigma^2_C)/c^2=1/c$, the difference between the empirical and the nominal level of the resulting test statistic was small when $\rho=0.15$ but larger when $\rho=0.85$. Since $\mathrm{tr}(\boldsymbol \Sigma^2_C)=c$ is satisfied only when $\boldsymbol \Sigma_C=\mathbf I_{c}$, this explains the poor performance of the test statistic in the presence of a strong column-wise dependence structure. By contrast, if $\mathrm{tr}(\boldsymbol \Sigma^2_C)/c^2$ is set equal to $1$, the resulting test becomes very conservative, failing to reject the sphericity hypothesis in all cases. For these reasons, we did not evaluate the empirical power of these two tests. Finally, when we replaced $\mathrm{tr}(\boldsymbol \Sigma^2_C)/c^2$ with its true value, we did not observe any substantial difference from the results based on $U^{\ast}_N$. Hence, accurate estimation of the `nuisance' parameter $\mathrm{tr}(\boldsymbol \Sigma^2_C)$ is crucial for preserving the nominal level and, most importantly, $\widehat{\mathrm{tr}(\boldsymbol \Sigma_C^2)}$ serves this purpose.
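The bracketing $\mathrm{tr}(\boldsymbol \Sigma^2_C)/c^2 \in [1/c,1]$ that motivates these alternatives can be verified directly. A small sketch using hypothetical compound-symmetry correlation matrices $(1-\rho)\mathbf I_c + \rho \mathbf J_c$ (the identity attains the lower bound and the all-ones matrix the upper bound; these example matrices are assumptions for the demo):

```python
# Verify tr(Sigma_C^2)/c^2 in [1/c, 1] for hypothetical compound-symmetry
# correlation matrices Sigma_C = (1 - rho) I + rho J, J the all-ones matrix.
c = 10

def ratio(rho):
    S = [[1.0 if a == b else rho for b in range(c)] for a in range(c)]
    # tr(S^2) = sum_{a,b} S[a][b] * S[b][a], since S is symmetric
    tr_S2 = sum(S[a][b] * S[b][a] for a in range(c) for b in range(c))
    return tr_S2 / c**2

assert abs(ratio(0.0) - 1 / c) < 1e-12   # identity: ratio = 1/c
assert abs(ratio(1.0) - 1.0) < 1e-12     # all-ones matrix: ratio = 1
assert 1 / c <= ratio(0.5) <= 1.0        # intermediate rho lies between
```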
\begin{sidewaystable} \caption{\label{tab:size} Empirical levels of the proposed sphericity test for $H_0:\boldsymbol \Sigma_R=\sigma^2 \mathbf I_{r}$ versus $H_1:\boldsymbol \Sigma_R\neq \sigma^2 \mathbf I_{r}$ at $5\%$ nominal significance level.} \begin{tabular*}{\textwidth}{c @{\extracolsep{\fill}}rrrrrrrrrrrrrrr}
\toprule
& & \multicolumn{6}{c}{$\rho=0.15$} && \multicolumn{6}{c}{$\rho=0.85$}\\
& & \multicolumn{6}{c}{$r$} && \multicolumn{6}{c}{$r$}\\
\cline{3-8} \cline{10-15} $N$ & $c$ & 8 & 16 & 32 & 64 & 128 & 256 && 8 & 16 & 32 & 64 & 128 & 256 \\
\midrule
& & \multicolumn{13}{c}{Scenario 1} \\
20 & 10 & 0.086 & 0.077 & 0.079 & 0.062 & 0.078 & 0.075 && 0.047 & 0.056 & 0.059 & 0.065 & 0.062 & 0.062 \\
& 50 & 0.069 & 0.084 & 0.063 & 0.059 & 0.063 & 0.069 && 0.056 & 0.075 & 0.062 & 0.054 & 0.058 & 0.055 \\
& 100 & 0.081 & 0.066 & 0.066 & 0.060 & 0.057 & 0.063 && 0.073 & 0.063 & 0.049 & 0.056 & 0.058 & 0.062 \\
40 & 10 & 0.069 & 0.060 & 0.059 & 0.069 & 0.057 & 0.059 && 0.063 & 0.061 & 0.055 & 0.058 & 0.060 & 0.058 \\
& 50 & 0.059 & 0.056 & 0.070 & 0.050 & 0.057 & 0.072 && 0.045 & 0.055 & 0.059 & 0.066 & 0.055 & 0.053 \\
& 100 & 0.067 & 0.046 & 0.061 & 0.051 & 0.045 & 0.054 && 0.060 & 0.056 & 0.061 & 0.053 & 0.058 & 0.065 \\
60 & 10 & 0.067 & 0.060 & 0.068 & 0.074 & 0.051 & 0.054 && 0.068 & 0.056 & 0.062 & 0.067 & 0.076 & 0.055 \\
& 50 & 0.058 & 0.075 & 0.056 & 0.060 & 0.049 & 0.058 && 0.059 & 0.064 & 0.055 & 0.058 & 0.049 & 0.052 \\
& 100 & 0.066 & 0.050 & 0.050 & 0.047 & 0.043 & 0.045 && 0.061 & 0.055 & 0.058 & 0.062 & 0.058 & 0.064 \\
80 & 10 & 0.081 & 0.057 & 0.058 & 0.070 & 0.050 & 0.047 && 0.072 & 0.050 & 0.055 & 0.057 & 0.049 & 0.058 \\
& 50 & 0.060 & 0.059 & 0.051 & 0.046 & 0.047 & 0.058 && 0.057 & 0.053 & 0.045 & 0.058 & 0.056 & 0.000 \\
& 100 & 0.065 & 0.064 & 0.046 & 0.050 & 0.050 & 0.055 && 0.048 & 0.052 & 0.071 & 0.045 & 0.053 & 0.045 \\
& & \multicolumn{13}{c}{Scenario 2} \\
20 & 10 & 0.097 & 0.087 & 0.079 & 0.069 & 0.082 & 0.066 && 0.064 & 0.068 & 0.051 & 0.059 & 0.061 & 0.055 \\
& 50 & 0.088 & 0.079 & 0.057 & 0.059 & 0.058 & 0.067 && 0.081 & 0.068 & 0.052 & 0.067 & 0.054 & 0.063 \\
& 100 & 0.085 & 0.072 & 0.062 & 0.066 & 0.070 & 0.066 && 0.072 & 0.067 & 0.048 & 0.070 & 0.067 & 0.079 \\
40 & 10 & 0.086 & 0.079 & 0.063 & 0.065 & 0.056 & 0.052 && 0.063 & 0.063 & 0.052 & 0.050 & 0.050 & 0.052 \\
& 50 & 0.073 & 0.073 & 0.062 & 0.060 & 0.048 & 0.050 && 0.071 & 0.057 & 0.052 & 0.047 & 0.062 & 0.053 \\
& 100 & 0.067 & 0.067 & 0.063 & 0.056 & 0.046 & 0.058 && 0.068 & 0.067 & 0.054 & 0.056 & 0.055 & 0.077 \\
60 & 10 & 0.096 & 0.076 & 0.055 & 0.055 & 0.062 & 0.057 && 0.055 & 0.051 & 0.045 & 0.047 & 0.049 & 0.053 \\
& 50 & 0.082 & 0.070 & 0.066 & 0.056 & 0.049 & 0.055 && 0.067 & 0.059 & 0.047 & 0.060 & 0.054 & 0.070 \\
& 100 & 0.070 & 0.078 & 0.075 & 0.077 & 0.048 & 0.062 && 0.058 & 0.064 & 0.061 & 0.058 & 0.063 & 0.054 \\
80 & 10 & 0.074 & 0.074 & 0.049 & 0.055 & 0.044 & 0.056 && 0.052 & 0.072 & 0.055 & 0.046 & 0.043 & 0.058 \\
& 50 & 0.083 & 0.064 & 0.051 & 0.053 & 0.057 & 0.067 && 0.070 & 0.058 & 0.045 & 0.061 & 0.064 & 0.057 \\
& 100 & 0.070 & 0.054 & 0.049 & 0.047 & 0.046 & 0.057 && 0.056 & 0.045 & 0.056 & 0.051 & 0.052 & 0.052 \\
\bottomrule \end{tabular*} \end{sidewaystable}
\begin{sidewaystable} \caption{Empirical powers of the proposed sphericity test for $H_0:\boldsymbol \Sigma_R=\sigma^2 \mathbf I_{r}$ versus $H_1:\boldsymbol \Sigma_R=\mathrm{diag}(\mathbf 2_{[r/8]},\mathbf 1_{[7r/8]})$ at $5\%$ nominal significance level.} \begin{tabular*}{\textwidth}{c @{\extracolsep{\fill}} rrrrrrrrrrrrrrr}
\toprule
& & \multicolumn{6}{c}{$\rho=0.15$} && \multicolumn{6}{c}{$\rho=0.85$}\\
& & \multicolumn{6}{c}{$r$} && \multicolumn{6}{c}{$r$}\\
\cline{3-8} \cline{10-15} $N$ & $c$ & 8 & 16 & 32 & 64 & 128 & 256 && 8 & 16 & 32 & 64 & 128 & 256 \\
\midrule
& & \multicolumn{13}{c}{Scenario 1} \\ 20 & 10 & 0.987 & 0.998 & 1.000 & 1.000 & 1.000 & 1.000 && 0.458 & 0.512 & 0.559 & 0.582 & 0.544 & 0.586 \\
& 50 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 && 0.988 & 0.999 & 0.999 & 1.000 & 1.000 & 1.000 \\
& 100 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 && 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 \\ 40 & 10 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 && 0.814 & 0.863 & 0.935 & 0.945 & 0.959 & 0.982 \\
& 50 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 && 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 \\
& 100 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 && 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 \\ 60 & 10 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 && 0.951 & 0.981 & 0.996 & 0.998 & 1.000 & 1.000 \\
& 50 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 && 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 \\
& 100 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 && 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 \\ 80 & 10 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 && 0.988 & 0.999 & 1.000 & 1.000 & 1.000 & 1.000 \\
& 50 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 && 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 \\
& 100 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 && 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000\\
& & \multicolumn{13}{c}{Scenario 2} \\ 20 & 10 & 0.958 & 0.990 & 1.000 & 1.000 & 1.000 & 1.000 && 0.435 & 0.496 & 0.546 & 0.530 & 0.586 & 0.584 \\
& 50 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 && 0.978 & 0.995 & 1.000 & 1.000 & 1.000 & 1.000 \\
& 100 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 && 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 \\ 40 & 10 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 && 0.784 & 0.870 & 0.920 & 0.945 & 0.962 & 0.980 \\
& 50 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 && 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 \\
& 100 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 && 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 \\ 60 & 10 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 && 0.928 & 0.979 & 0.993 & 0.997 & 0.999 & 0.999 \\
& 50 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 && 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 \\
& 100 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 && 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 \\ 80 & 10 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 && 0.986 & 0.991 & 1.000 & 1.000 & 1.000 & 1.000 \\
& 50 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 && 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 \\
& 100 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 && 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 \\
\bottomrule \end{tabular*} \label{Power1} \end{sidewaystable}
\begin{sidewaystable}
\caption{Empirical powers of the proposed sphericity test for $H_0:\boldsymbol \Sigma_R=\sigma^2 \mathbf I_{r}$ versus $H_1:\boldsymbol \Sigma_R=\{0.1^{|a-b|}I(|a-b|\leq 1)\}_{1\leq a,b\leq r}$ at $5\%$ nominal significance level.} \begin{tabular*}{\textwidth}{c @{\extracolsep{\fill}} rrrrrrrrrrrrrrr}
\toprule
& & \multicolumn{6}{c}{$\rho=0.15$} && \multicolumn{6}{c}{$\rho=0.85$}\\
& & \multicolumn{6}{c}{$r$} && \multicolumn{6}{c}{$r$}\\
\cline{3-8} \cline{10-15} $N$ & $c$ & 8 & 16 & 32 & 64 & 128 & 256 && 8 & 16 & 32 & 64 & 128 & 256 \\
\midrule
& & \multicolumn{13}{c}{Scenario 1} \\ 20 & 10 & 0.448 & 0.499 & 0.565 & 0.580 & 0.595 & 0.612 && 0.112 & 0.123 & 0.115 & 0.130 & 0.136 & 0.130 \\
& 50 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 && 0.383 & 0.471 & 0.493 & 0.481 & 0.492 & 0.500 \\
& 100 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 && 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 \\ 40 & 10 & 0.804 & 0.909 & 0.948 & 0.974 & 0.967 & 0.981 && 0.200 & 0.230 & 0.223 & 0.225 & 0.221 & 0.247 \\
& 50 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 && 0.771 & 0.859 & 0.887 & 0.932 & 0.940 & 0.943 \\
& 100 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 && 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 \\ 60 & 10 & 0.971 & 0.995 & 0.997 & 1.000 & 1.000 & 1.000 && 0.308 & 0.341 & 0.366 & 0.390 & 0.388 & 0.359 \\
& 50 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 && 0.951 & 0.986 & 0.994 & 0.997 & 1.000 & 1.000 \\
& 100 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 && 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 \\ 80 & 10 & 0.996 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 && 0.409 & 0.480 & 0.517 & 0.538 & 0.570 & 0.545 \\
& 50 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 && 0.988 & 0.999 & 1.000 & 1.000 & 1.000 & 1.000 \\
& 100 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 && 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 \\
& & \multicolumn{13}{c}{Scenario 2} \\ 20 & 10 & 0.449 & 0.522 & 0.527 & 0.567 & 0.593 & 0.579 && 0.114 & 0.141 & 0.127 & 0.120 & 0.146 & 0.117 \\
& 50 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 && 0.387 & 0.449 & 0.447 & 0.499 & 0.502 & 0.523 \\
& 100 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 && 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 \\ 40 & 10 & 0.805 & 0.886 & 0.942 & 0.965 & 0.968 & 0.981 && 0.213 & 0.221 & 0.225 & 0.208 & 0.218 & 0.215 \\
& 50 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 && 0.767 & 0.843 & 0.905 & 0.940 & 0.950 & 0.939 \\
& 100 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 0.000 && 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 \\ 60 & 10 & 0.950 & 0.990 & 0.998 & 1.000 & 0.999 & 1.000 && 0.311 & 0.345 & 0.365 & 0.379 & 0.360 & 0.384 \\
& 50 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 && 0.938 & 0.989 & 0.998 & 1.000 & 0.999 & 1.000 \\
& 100 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 0.000 && 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 \\ 80 & 10 & 0.992 & 0.999 & 1.000 & 1.000 & 1.000 & 1.000 && 0.421 & 0.474 & 0.503 & 0.518 & 0.529 & 0.562 \\
& 50 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 && 0.983 & 0.999 & 1.000 & 1.000 & 1.000 & 1.000 \\
& 100 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 0.000 && 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 \\
\bottomrule \end{tabular*} \label{Power2} \end{sidewaystable}
\end{document}
\begin{document}
\title{On the Existence of a Closed, Embedded, Rotational $\lambda$-Hypersurface} \author{John Ross} \address{Department of Mathematics and Computer Science, Southwestern University, 1001 E University Ave, Georgetown, TX 78626} \email{rossjo@southwestern.edu}
\maketitle
\begin{abstract} In this paper we show the existence of a closed, embedded $\lambda$-hypersurface $\Sigma \subset \mathbb{R}^{2n}$. The hypersurface $\Sigma$ is diffeomorphic to $\mathbb{S}^{n-1} \times \mathbb{S}^{n-1} \times \mathbb{S}^1$ and exhibits $SO(n) \times SO(n)$ symmetry. Our approach uses a ``shooting method'' similar to the one used by McGrath in constructing a generalized self-shrinking ``torus'' solution to mean curvature flow. The result generalizes the $\lambda$-torus found by Cheng and Wei. \end{abstract}
\section{Introduction}
In the study of mean curvature flow, an important class of solutions consists of those in which the hypersurface evolves by self-similar shrinking. Indeed, under general mean curvature flow, singularities often develop which can be modeled using self-shrinking solutions \cite{Huisken90}. Such solutions can be identified with a single time-slice of the flow, which gives us a hypersurface called a self-shrinker. A self-shrinker satisfies the equation \begin{align} H = \frac{1}{2} \langle x,\nu \rangle \end{align} in which $H$ is the mean curvature of the hypersurface, $x$ is the position vector of the hypersurface, and $\nu$ is the unit vector normal to the hypersurface, with orientation chosen so that $\vec{H} = -H \nu$. Self-shrinkers are also notable because they are critical points of the weighted area functional \begin{align} \label{1.2}
F(\Sigma) = \int_\Sigma e^{-|x|^2 / 4}\; d\mu \end{align} and are minimal surfaces in the space $\mathbb{R}^{n+1}$ when imbued with the metric \begin{align} \label{1.3}
e^{-\frac{|x|^2}{2(n+1)}}\sum_{i=1}^{n+1}(dx^i)^2. \end{align}
A generalization of self-shrinkers leads to a class of hypersurfaces called $\lambda$-hypersurfaces. Such surfaces satisfy the equation \begin{align} H = \frac{1}{2} \langle x,\nu \rangle + \lambda \label{1.4} \end{align}
where $\lambda$ is a constant. These surfaces may be viewed as critical points of the weighted area functional \eqref{1.2} with respect to \textbf{weighted volume-preserving} variations - that is, variations for which the function describing the normal direction of the variation, $u(x)$, satisfies $\int_\Sigma u e^{-|x|^2/4}\;d\mu = 0$. They may also be viewed as stationary solutions to the isoperimetric problem on the Gaussian space with metric given by \eqref{1.3}. More information on $\lambda$-hypersurfaces, including a derivation of this viewpoint, can be found in \cite{ChengWei14} and \cite{McGonagleRoss13}.
There are very few explicit examples of complete embedded self-shrinkers, despite their importance in the field. The simplest (and most important) examples are generalized cylinders $\mathbb{R}^k \times \mathbb{S}^{n-k} \subset \mathbb{R}^{n+1}$ that are centered around the origin with radius $\sqrt{2(n-k)}$. In \cite{CM12}, Colding and Minicozzi showed that such self-shrinkers are \emph{generic}, in the sense that, under the mean curvature flow of a generic hypersurface, the singularities that develop look like these generalized cylinders. However, other special examples of self-shrinkers exist. In 1992, Angenent showed the existence of an embedded self-shrinker of genus 1, diffeomorphic to $\mathbb{S}^1 \times \mathbb{S}^{n-1}$ \cite{Angenent92}. In \cite{KKM12} and also \cite{Moller11}, self-shrinkers of arbitrarily large genus are constructed that contain a discrete rotational symmetry. Finally, in 2015, Peter McGrath \cite{McGrath15} constructed a closed self-shrinker that contains two rotational symmetries and is diffeomorphic to $\mathbb{S}^k \times \mathbb{S}^k \times \mathbb{S}^1$.
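The radii of these generalized cylinders follow directly from the self-shrinker equation: the spherical factor $\mathbb{S}^{n-k}(r)$ has $H = (n-k)/r$ and $\langle x, \nu \rangle = r$, so $H = \frac{1}{2}\langle x,\nu\rangle$ forces $r = \sqrt{2(n-k)}$. A minimal numerical sketch of this computation:

```python
import math

# Sanity check (a sketch, not from the paper): for the generalized cylinder
# R^k x S^{n-k}(r), the spherical factor has mean curvature H = (n-k)/r and
# <x, nu> = r, so H = <x, nu>/2 forces r = sqrt(2(n-k)).
for d in [1, 2, 5, 9]:          # d = n - k, the dimension of the spherical factor
    r = math.sqrt(2 * d)        # claimed radius
    H = d / r                   # mean curvature of S^d(r)
    assert abs(H - r / 2) < 1e-12
```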
Broadly speaking, there are two established techniques for constructing new self-shrinkers. The first, practiced by Angenent and by McGrath, is to construct rotationally symmetric solutions by finding a ``generating curve'' in a lower-dimensional space. The second technique, employed by \cite{KKM12}, uses a gluing technique to adjoin preexisting self-shrinkers and minimal surfaces.
Even less has been done to construct examples of $\lambda$-hypersurfaces. In \cite{McGonagleRoss13}, it was shown that generalized cylinders are $\lambda$-hypersurfaces. In \cite{ChengWei15}, the authors construct the first nontrivial example of a $\lambda$-hypersurface, diffeomorphic to $\mathbb{S}^{n} \times \mathbb{S}^1$, using techniques similar to Angenent's. Non-trivial examples of one-dimensional $\lambda$-hypersurfaces have also been discovered in \cite{Chang14}. The aim of this paper is to describe a new closed, embedded $\lambda$-hypersurface with an $\mathbb{S}^n \times \mathbb{S}^n$ symmetry. Our main result is:
\begin{theorem} Let $n > 1$, and let $\lambda < 0$. Then there exists a closed, embedded $\lambda$-hypersurface $\Sigma^{2n + 1} \subset \mathbb{R}^{2n + 2}$ that is diffeomorphic to $\mathbb{S}^n \times \mathbb{S}^n \times \mathbb{S}^1$ and exhibits an $O(n+1) \times O(n+1)$ rotational symmetry. \end{theorem}
Although there are no a priori restrictions on $\lambda$ in a $\lambda$-hypersurface, we will assume throughout this paper that $\lambda < 0$. Note that requiring $\lambda < 0$ is a necessary technical requirement for our proof, see (for example) Lemma \ref{lemma32}. The condition is interesting, and a similar requirement has been necessary in \cite{Chang14} and \cite{ChengWei15}.
To prove this theorem, we employ a method similar to McGrath \cite{McGrath15}. We first determine a relationship between the $\lambda$-hypersurfaces we are interested in, and a ``generating curve'' in the first quadrant that satisfies a system of ODEs. We then construct a closed, embedded generating curve that satisfies this system of ODEs, by using a ``shooting method'' in the spirit of \cite{Angenent92}, \cite{McGrath15}. Of note is that, under the system of ODEs developed here, very few useful solutions are known to exist. In particular, the argument present in \cite{McGrath15} made use of the linear solution $y = x$, which we will not have access to.
The paper will be organized as follows: First, in Section 2, we will construct our system of ODEs to reduce the problem to finding a generating curve. This is similar to the treatment given in \cite{McGrath15}, but is included for completeness. In Section 3, we will analyze the system of ODEs to determine possible behavior of solutions. And finally, in Section 4, we employ our shooting method.
\section{Constructing the system of ODEs}
We are interested in studying $\lambda$-hypersurfaces that satisfy a rotational invariance under $O(m) \times O(n)$ for $m,n > 1$. To this end, let $O(m) \times O(n)$ act on $\mathbb{R}^{m+n} = \{ (\vec{x},\vec{y}): \vec{x} \in \mathbb{R}^m, \vec{y} \in \mathbb{R}^n \}$ in the usual way. We can then identify the space of orbits $\mathbb{R}^{m+n} / (O(m) \times O(n))$ with the first quadrant $\{(x,y) \in \mathbb{R}^2 : x,y \geq 0 \}$ under the projection
$$\Pi(\vec{x}, \vec{y}) = (|\vec{x}|, |\vec{y}|) = (x,y) $$
Under this identification, each point $(x,y)$ in the first quadrant corresponds to the immersed submanifold $\mathbb{S}^{m-1}(x) \times \mathbb{S}^{n-1}(y) \subset \mathbb{R}^{m+n}$ (where $\mathbb{S}^{k}(x)$ is the $k$-dimensional sphere of radius $x$, embedded in $\mathbb{R}^{k+1}$ and centered at the origin).
Let $\Sigma \subset \mathbb{R}^{m+n}$ be an embedded $\lambda$-hypersurface. We say that $\Sigma$ is invariant under $O(m) \times O(n)$ if the action preserves $\Sigma$. If $\Sigma$ is invariant under $O(m) \times O(n)$, then the projection $\Pi (\Sigma)$ will give us a profile curve in the first quadrant, which we can parametrize by Euclidean arc length and write as $\gamma(t) = (x(t), y(t))$.
Recall that our $\lambda$-hypersurface satisfies the curvature equation \eqref{1.4}. Because $\Sigma$ is rotationally invariant, we can calculate that it has $m-1$ principle curvatures equal to
$$ \frac{y'(t)}{x(t)(x'(t)^2 + y'(t)^2)}, $$ $n-1$ principle curvatures equal to
$$ -\frac{x'(t)}{y(t)(x'(t)^2 + y'(t)^2)}, $$ and one principle curvature equal to
$$ \frac{x'(t) y''(t) - x''(t) y'(t)}{(x'(t)^2 + y'(t)^2)^{3/2}} $$ Also, the unit normal vector (under projection $\Pi$) gives us the vector $\nu(t)$ perpendicular to $\gamma(t)$ as
$$ \nu(t) = \frac{(y'(t), -x'(t) )}{(x'(t)^2 + y'(t)^2)^{1/2}} $$ (so calculated because the unit vector tangent to $\gamma(t)$ is proportional to $(x'(t), y'(t))$). Taken together, the $\lambda$-hypersurface equation reduces to
\begin{align*} \frac{1}{(x'(t)^2 + y'(t)^2)^{1/2}}\left( (m-1)\frac{y'(t)}{x(t)} - (n-1)\frac{x'(t)}{y(t)} + \frac{x'(t)y''(t) - x''(t) y'(t)}{x'(t)^2 + y'(t)^2} \right)& \\ = \frac{1}{2}\frac{x(t)y'(t) - x'(t) y(t)}{(x'(t)^2 + y'(t)^2)^{1/2}} &+ \lambda \end{align*} which can be rewritten as
\begin{align*} \frac{x'(t)y''(t) - x''(t)y'(t)}{x'(t)^2 + y'(t)^2} =& \frac{1}{2}(x(t)y'(t) - x'(t) y(t)) + \frac{(n-1)x'(t)}{y(t)} \\ &- \frac{(m-1)y'(t)}{x(t)} + \lambda (x'(t)^2 + y'(t)^2)^{1/2}. \end{align*} If we introduce the angle $\theta(t) = \arctan\left( \frac{y'(t)}{x'(t)}\right)$, we get that
$$ \theta'(t) = \frac{1}{1+(y'(t)^2 / x'(t)^2)} \left( \frac{x'(t) y''(t) - y'(t) x''(t)}{x'(t)^2}\right) = \left( \frac{x'(t)y''(t) - y'(t) x''(t)}{x'(t)^2 + y'(t)^2}\right). $$
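The identity for $\theta'(t)$ can be spot-checked numerically. The sketch below uses an arbitrary smooth test curve (an assumption for the demo, chosen so that $x'(t) > 0$ and $\arctan$ stays on its principal branch; it is not a solution of our system) and compares a central-difference derivative of $\theta(t)$ against the closed form:

```python
import math

# Numerical spot check of theta'(t) = (x'y'' - x''y')/(x'^2 + y'^2)
# for the arbitrary test curve x(t) = t + 0.3 sin t, y(t) = 1 + 0.5 cos t.
def xp(t):  return 1 + 0.3 * math.cos(t)     # x'(t)
def xpp(t): return -0.3 * math.sin(t)        # x''(t)
def yp(t):  return -0.5 * math.sin(t)        # y'(t)
def ypp(t): return -0.5 * math.cos(t)        # y''(t)

def theta(t):
    return math.atan(yp(t) / xp(t))          # x'(t) > 0, principal branch is fine

t, h = 0.7, 1e-5
numeric = (theta(t + h) - theta(t - h)) / (2 * h)          # central difference
closed  = (xp(t) * ypp(t) - xpp(t) * yp(t)) / (xp(t)**2 + yp(t)**2)
assert abs(numeric - closed) < 1e-6
```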
If we also assume that our profile curve is parametrized by arc length, we can use the previous two formulas to show that the profile curve satisfies the following system of differential equations:
\begin{align} \dot{x} &= \cos \theta \label{2.1}\\ \dot{y} &= \sin \theta \label{2.2}\\ \dot{\theta} &= \left(\frac{x}{2} - \frac{m-1}{x} \right)\sin \theta + \left(\frac{n-1}{y} - \frac{y}{2} \right)\cos \theta + \lambda \label{2.3} \end{align}
Similarly, it is clear that any curve in the first quadrant satisfying equations \eqref{2.1} - \eqref{2.3} will generate a hypersurface $\Sigma \subset \mathbb{R}^{n+m}$ that locally satisfies equation \eqref{1.4}.
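The system \eqref{2.1}-\eqref{2.3} can be integrated numerically; in particular, the circle of radius $\lambda + \sqrt{\lambda^2 + 2(m+n-1)}$ centered at the origin, recorded as an explicit solution in the next section, should be preserved by the flow. A minimal RK4 sketch, with assumed sample values $m = n = 2$, $\lambda = -1$, and the circle traversed clockwise (so $\theta = \varphi - \pi/2$ in terms of the polar angle $\varphi$):

```python
import math

# Integrate the profile-curve system (2.1)-(2.3) with a basic RK4 stepper and
# check that the circle x^2 + y^2 = r^2, r = lambda + sqrt(lambda^2 + 2(m+n-1)),
# is preserved.  Sample values m = n = 2, lam = -1 are assumptions for the demo.
m, n, lam = 2, 2, -1.0
r = lam + math.sqrt(lam**2 + 2 * (m + n - 1))

def rhs(s):
    x, y, th = s
    dth = ((x / 2 - (m - 1) / x) * math.sin(th)
           + ((n - 1) / y - y / 2) * math.cos(th) + lam)
    return (math.cos(th), math.sin(th), dth)

phi0 = 1.2                                    # start well inside the open quadrant
s = [r * math.cos(phi0), r * math.sin(phi0), phi0 - math.pi / 2]
dt = 1e-3
for _ in range(800):                          # integrate to t = 0.8
    k1 = rhs(s)
    k2 = rhs([s[i] + dt / 2 * k1[i] for i in range(3)])
    k3 = rhs([s[i] + dt / 2 * k2[i] for i in range(3)])
    k4 = rhs([s[i] + dt * k3[i] for i in range(3)])
    s = [s[i] + dt / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) for i in range(3)]

assert abs(math.hypot(s[0], s[1]) - r) < 1e-6   # still on the circle
```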
\section{Analyzing the system of ODEs}
In this section, we record several explicit solutions to the system of equations \eqref{2.1}-\eqref{2.3} given above. We also perform some analysis on how solution curves can behave. We begin by identifying some explicit examples.
\begin{lemma}\label{lemma1} We have the following explicit solutions to the system of ODEs:
\begin{enumerate}
\item The horizontal line $y = \lambda + \sqrt{\lambda^2 + 2(n-1)}$.
\item The vertical line $x = \lambda + \sqrt{\lambda^2 + 2(m-1)}$.
\item The circle centered at the origin of radius $\lambda + \sqrt{\lambda^2 + 2(m+n-1)}$.
\end{enumerate} \end{lemma}
\begin{proof} These solutions can be verified by direct computation. \end{proof}
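The direct computation for the two lines amounts to checking that $\dot\theta$ vanishes identically along each of them. A small sketch, with assumed sample values $m = n = 3$, $\lambda = -1$, where the horizontal line is traversed with $\theta = 0$ and the vertical line with $\theta = -\pi/2$ (orientation choices made for the demo):

```python
import math

# Check that the two explicit lines above are equilibria of the angle
# equation (2.3).  Sample values m = n = 3, lam = -1 are assumptions.
m, n, lam = 3, 3, -1.0

# Horizontal line y = lam + sqrt(lam^2 + 2(n-1)), traversed with theta = 0:
# theta-dot = ((n-1)/y - y/2) * cos(0) + lam must vanish for every x.
y0 = lam + math.sqrt(lam**2 + 2 * (n - 1))
assert abs((n - 1) / y0 - y0 / 2 + lam) < 1e-12

# Vertical line x = lam + sqrt(lam^2 + 2(m-1)), traversed with theta = -pi/2:
# theta-dot = (x/2 - (m-1)/x) * sin(-pi/2) + lam must vanish for every y.
x0 = lam + math.sqrt(lam**2 + 2 * (m - 1))
assert abs(-(x0 / 2 - (m - 1) / x0) + lam) < 1e-12
```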
\begin{lemma} \label{lemma32} Let $\gamma(t)$ be a solution to the system of ODEs, and consider a subset of the solution $\gamma(t) = u(x)$ that is viewed as a graph over the $x$-axis. Then $u$ can only have maximums above a height of $y = \lambda + \sqrt{\lambda^2 + 2(n-1)}$, and can only have minimums below $y = \lambda + \sqrt{\lambda^2 + 2(n-1)}$. Similarly, if $\gamma(t) = v(y)$ is a graph over the $y$-axis, it can only have maximums (resp. minimums) at points below (resp. above) the line $x = \lambda + \sqrt{\lambda^2 + 2(m-1)}$. \end{lemma}
\begin{proof} This is seen by examining equation \eqref{2.3} at such a critical point. Note that we make use of $\lambda < 0$ in this argument. \end{proof}
\begin{lemma} \label{lemma3} Let $\gamma(t)$ be a solution to the system of ODEs \eqref{2.1} - \eqref{2.3}, defined on a time interval $t \in (a,b)$. If $x_{\gamma}(t) \rightarrow 0$ and $y_{\gamma}(t) \rightarrow y_b$, $y_b > 0$, as $t \rightarrow b$, then $\gamma$ can be extended to be defined on the interval $(a,b]$, such that $x_{\gamma}(b) = 0$, $y_{\gamma}(b) = y_b$, and $\theta_\gamma (b) = -\pi$. \end{lemma}
\begin{proof} Let $\gamma(t)$ be a curve as described above. We first remark that, locally near $x=0$, $\gamma$ may be viewed as a function $u = u(x)$ over the $x$-axis. As seen above, such a function can only have maximums above the height $y = \lambda + \sqrt{\lambda^2 + 2(n-1)}$, while minimums can only occur below this height. Therefore, unless $y_b$ is exactly equal to this height, the function $u(x)$ will not exhibit oscillatory behavior as $x\rightarrow 0^+$. We will consider the case where $y_b < \lambda + \sqrt{\lambda^2 + 2(n-1)}$, as the other case follows similarly. There are several possible behaviors as $t \rightarrow b$: either $\lim_{t \rightarrow b}\theta < -\pi$, $\lim_{t \rightarrow b}\theta = -\pi$, or $\lim_{t \rightarrow b}\theta > -\pi$. Of course, in the second case our lemma is proven. Our goal is to show that the first and third cases cannot occur.
To show that the third case cannot occur: by examining equation \eqref{2.3}, we see that
\begin{align*} \dot{\theta} &= \left(\frac{x}{2} - \frac{m-1}{x} \right)\sin \theta + \left(\frac{n-1}{y} - \frac{y}{2} \right)\cos \theta + \lambda\\ &\geq - \frac{m-1}{x} \dot{x} \tan \theta - \left(\frac{n-1}{y_b} - \frac{y_b}{2} \right) + \lambda\\ &\geq - \delta \frac{\dot{x}}{x} - \left(\frac{n-1}{y_b} - \frac{y_b}{2} \right) + \lambda, \end{align*}
where $\delta = (m-1) \liminf_{t \rightarrow b} \tan \theta$. Integrating this inequality from $t_1$ to $t_2$ gives us
\begin{align*} \theta (t_2) - \theta (t_1) \geq \delta \ln \left( \frac{x(t_1)}{x(t_2)}\right) - \left(\frac{n-1}{y_b} - \frac{y_b}{2} \right)(t_2 - t_1) + \lambda (t_2 - t_1). \end{align*}
Note that the expression $\theta(t_2) - \theta(t_1)$ is bounded, while the right-hand side blows up as $t_2 \rightarrow b$ - a contradiction.
As for the first case: if $\theta(t) < -\pi$ as $t \rightarrow b$, then for $t$ close to $b$ we can compute
\begin{align*} \dot{\theta} &= \left(\frac{x}{2} - \frac{m-1}{x} \right)\sin \theta + \left(\frac{n-1}{y} - \frac{y}{2} \right)\cos \theta + \lambda\\ &\leq \frac{x}{2} - (m-1)\left( \frac{\dot{x}}{x}\tan \theta \right)\\ &\leq \frac{1}{2} + (m-1)\left( \delta \frac{\dot{x}}{x}\right), \end{align*} where $\delta = - \liminf_{t\rightarrow b} \tan \theta$ is a positive quantity. Integrating this inequality from $t_1$ to $t_2$ gives us
\begin{align*} \theta (t_2) - \theta (t_1) < (m-1)\, \delta \, \ln \left( \frac{x(t_2)}{x(t_1)}\right) + \frac{1}{2}(x(t_2) - x(t_1)). \end{align*} Again, note that the expression $\theta(t_2) - \theta(t_1)$ is bounded, while the right-hand side goes to negative infinity as $t_2 \rightarrow b$ - a contradiction. Thus, the only situation that can occur is if $\gamma(t)$ meets the $y$-axis at exactly a perpendicular angle.
\end{proof}
A similar argument gives an identical result for curves that intersect the $x$-axis, which yields the following statement:
\begin{corollary} If $\gamma(t)$ is a solution to our system \eqref{2.1} - \eqref{2.3} and intersects the $x$- or $y$-axis, it does so at a perpendicular angle. \end{corollary}
Next, we see that straight lines are rare solutions to this system:
\begin{lemma} The only straight lines that satisfy the system \eqref{2.1} - \eqref{2.3} are the two mentioned in Lemma \ref{lemma1}. In particular, there is no straight line through the origin that satisfies this system. \end{lemma}
\begin{proof} We can quickly see that the only vertical or horizontal lines that satisfy the system are the two already mentioned. A straight (non-vertical) line can be written in the form $y = kx + b$ for some constants $k,b$. Using the fact that $$\dot{\theta} = \left(\frac{x}{2} - \frac{m-1}{x} \right)\sin \theta + \left(\frac{n-1}{y} - \frac{y}{2} \right)\cos \theta + \lambda,$$ as well as the fact that a linear solution would satisfy
$$ \cos \theta = \frac{1}{\sqrt{1+k^2}}\,, \quad \sin \theta = \frac{k}{\sqrt{1 + k^2}}\,, \quad \text{and} \quad \dot{\theta} = 0\,, $$ we get
$$ -\lambda \sqrt{1 + k^2} = \left( \frac{n-1}{kx + b} - \frac{kx + b}{2}\right) + k\left( \frac{x}{2} - \frac{m-1}{x} \right). $$
Note that this can be rewritten as
$$ \frac{b}{2} - \lambda \sqrt{1 + k^2} = \left( \frac{n-1}{kx + b} \right) - k\left( \frac{m-1}{x} \right) $$ Clearly, the left-hand side of this equation is constant, but no choice of $k$ and $b$ will fix the right-hand side of this equation, save $k=0$ - a case we have already examined. Therefore, no new linear solutions exist. \end{proof}
\begin{remark} Since we already know all curves, including lines, must intersect the $x$- or $y$-axis away from the origin at a perpendicular angle, the main result in the previous lemma is that no line that passes through the origin is a solution. In \cite{McGrath15}, the diagonal line $L$ given by $y = \sqrt{\frac{m-1}{n-1}}x$ was a special solution. We do not have this solution in this situation. However, in the case where $m=n$, we still have symmetry of solutions over the line - a fact we will use in what follows. \end{remark}
Finally, we conclude with an important lemma concerning the direction in which $\gamma(t)$ can ``curl,'' depending on where $\gamma(t)$ is located in regards to the line $L$, given by $y = \sqrt{\frac{m-1}{n-1}}x$.
\begin{lemma} \label{lemma5} Suppose $\gamma(t) = (x(t),y(t))$ satisfies $(n-1)y^2 < (m-1)x^2$, so that $\gamma(t)$ lies below $L$. Furthermore, suppose that there is a time $t_0$ for which $\gamma$ satisfies $\dot{\theta}(t_0) < 0$, $\dot{x}(t_0) < 0$, and $\dot{y}(t_0) < 0$. Then for any $t$ in the maximal interval containing $t_0$ for which $\dot{x}(t) < 0$, $\dot{y}(t) < 0$, and $(n-1)y^2 < (m-1)x^2$, we will have $\dot{\theta}(t) < 0$. \end{lemma}
\begin{proof} We calculate \begin{align} \ddot{\theta} &=\dot{x}\dot{y} \left(\frac{m-1}{x^2} - \frac{n-1}{y^2} \right) + \dot{\theta} \left( \frac{x^2 - 2(m-1)}{2x} \cos \theta + \frac{y^2 - 2(n-1)}{2y} \sin \theta\right). \end{align} Note that when $\dot{\theta} = 0$, $\ddot{\theta} = \dot{x}\dot{y} \left(\frac{m-1}{x^2} - \frac{n-1}{y^2} \right)$. Since $\dot{x}\dot{y} > 0$ and we are below the line, with $(n-1)y^2 < (m-1)x^2$, we get that when $\dot{\theta} = 0$, then $\ddot{\theta} <0$. This implies that $\theta$ will remain decreasing, with $\dot{\theta} < 0$. \end{proof}
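For the reader's convenience, the displayed expression for $\ddot{\theta}$ follows by differentiating the equation for $\dot{\theta}$ along the curve (a sketch, using $\dot{x} = \cos \theta$ and $\dot{y} = \sin \theta$):
\begin{align*} \ddot{\theta} &= \left(\frac{\dot{x}}{2} + \frac{(m-1)\dot{x}}{x^2}\right)\sin \theta - \left(\frac{(n-1)\dot{y}}{y^2} + \frac{\dot{y}}{2}\right)\cos \theta + \dot{\theta}\left[\left(\frac{x}{2} - \frac{m-1}{x}\right)\cos \theta - \left(\frac{n-1}{y} - \frac{y}{2}\right)\sin \theta \right]\\ &= \dot{x}\dot{y} \left(\frac{m-1}{x^2} - \frac{n-1}{y^2}\right) + \dot{\theta}\left(\frac{x^2 - 2(m-1)}{2x}\cos \theta + \frac{y^2 - 2(n-1)}{2y}\sin \theta\right), \end{align*}
where the second line uses $\sin \theta = \dot{y}$ and $\cos \theta = \dot{x}$.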
\section{Existence of a closed solution}
Our main argument will be to construct a closed curve that is a solution to the system \eqref{2.1} - \eqref{2.3}, specifically in the case where $n=m$. To this end, we construct a curve that satisfies the system of ODEs, and that lies entirely below the line $L$ (which, in this circumstance, is simply the line $y=x$), with both starting and ending point meeting $L$ perpendicularly. Because of the symmetry of our system of ODEs, we can then reflect our curve across the line $L$, ending with a closed loop that satisfies the ODE.
Under the change of variables \begin{align} r &= \frac{1}{\sqrt{2}}(x + y)\\ s &= \frac{1}{\sqrt{2}}(x-y)\\ \phi &= \arctan (s/r) = \pi/4 + \theta, \end{align} the system of ODEs becomes
\begin{align} \dot{r} &= \sin \phi\label{4.4}\\ \dot{s} &= \cos \phi\label{4.5}\\ \dot{\phi} &= \left(\frac{-r}{2} + \frac{(n-1)2r}{r^2-s^2} \right)\cos \phi + \left(\frac{s}{2} + \frac{(n-1)2s}{r^2-s^2} \right)\sin \phi + \lambda.\label{4.6} \end{align}
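As a quick check of \eqref{4.4} and \eqref{4.5}: since the curve is parametrized by arclength, with $\dot{x} = \cos \theta$ and $\dot{y} = \sin \theta$,
\begin{align*} \dot{r} = \tfrac{1}{\sqrt{2}}(\dot{x} + \dot{y}) = \sin\!\left(\theta + \tfrac{\pi}{4}\right) = \sin \phi, \qquad \dot{s} = \tfrac{1}{\sqrt{2}}(\dot{x} - \dot{y}) = \cos\!\left(\theta + \tfrac{\pi}{4}\right) = \cos \phi. \end{align*}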
Let $\gamma_{R}(t)$ denote a solution to \eqref{4.4} - \eqref{4.6} with initial conditions $r(0) = R$, $s(0) = \phi(0) = 0$, defined on some maximal time interval $[0, T_R)$. Note that, for large starting $R$, $\dot{\phi}(0) \leq 0$, so $\gamma_R$ initially curls clockwise. Therefore, at least for a small amount of time, one can realize $\gamma_R$ as a positive, differentiable function over the $r$-axis. $T_R$ is taken to be the maximal time for which this remains a positive function, i.e., $T_R$ is the first time at which either $s = 0$, $\phi = 0$, or $\phi = -\pi$. Under this identification, the function can be defined as $s = f_R (r)$, for $r \in (r(\gamma_R(T_R)), R]$.
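This shooting construction can be explored numerically. The following is a minimal sketch (not from the paper): it integrates \eqref{4.4} - \eqref{4.6} with a classical Runge--Kutta step, using the illustrative choices $n = m = 2$ and $\lambda = -1$; all names are ours.

```python
import numpy as np

N_DIM = 2      # the paper's n (with m = n); illustrative choice
LAM = -1.0     # lambda, assumed negative as in the text

def rhs(state):
    """Right-hand side of the system (4.4)-(4.6)."""
    r, s, phi = state
    coef_cos = -r / 2 + 2 * (N_DIM - 1) * r / (r**2 - s**2)
    coef_sin = s / 2 + 2 * (N_DIM - 1) * s / (r**2 - s**2)
    return np.array([np.sin(phi),
                     np.cos(phi),
                     coef_cos * np.cos(phi) + coef_sin * np.sin(phi) + LAM])

def shoot(R, dt=1e-3, t_max=1.0):
    """Integrate gamma_R from (r, s, phi) = (R, 0, 0) with classical RK4,
    stopping once s returns to 0 or phi reaches -pi (the time T_R)."""
    state = np.array([R, 0.0, 0.0])
    t = 0.0
    while t < t_max:
        k1 = rhs(state)
        k2 = rhs(state + 0.5 * dt * k1)
        k3 = rhs(state + 0.5 * dt * k2)
        k4 = rhs(state + dt * k3)
        state = state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
        if state[1] <= 0.0 or state[2] <= -np.pi:
            break
    return state
```

For large $R$ the integrated curve initially moves off the line $L$ (so $s$ increases) while $\phi$ decreases, consistent with the clockwise curling described above.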
A major goal in this section will be to show that, for large enough $R$, $T_R$ will occur at a moment where $s = 0$.
\begin{lemma} If $f_R$ has a critical point, then it is a maximum. \end{lemma}
\begin{proof} At such a critical point, we would necessarily have $\phi = -\pi/2$. Then, at such a point, $\dot{\phi} = -\left(\frac{s}{2} + \frac{(n-1)2s}{r^2-s^2} \right) + \lambda$. Since $\lambda$ is assumed to be negative and our curve is below $L$ (making $r^2 - s^2 > 0$), we have $\dot{\phi} < 0$. Therefore, our critical point must have been a maximum. \end{proof}
An immediate corollary to this is that $f_R$ has at most one critical point.
\begin{lemma}\label{lemma42} For large values of $R$, $f_R$ will achieve a critical point (i.e., a point where $\phi_R = -\pi/2$). \end{lemma}
\begin{proof} We adopt an argument from \cite{McGrath15} and rescale time by letting $\tau = R t$. Note that this means $\frac{d\tau}{dt} = R$, and therefore \begin{align} \frac{d\phi}{d\tau} &= \frac{r}{R}\cos \phi \left( -\frac{1}{2} + \frac{2(n-1)}{r^2-s^2}\right) + \frac{\lambda}{R} + \frac{1}{R} \sin \phi \left( \frac{s}{2} + \frac{2s(n-1)}{r^2-s^2}\right).\label{4.7} \end{align}
First, we show that for all $\epsilon$ and all $C$, there exists an $R$ such that for all $\tau \in (0,C)$, we have $\frac{r(\tau)}{R} > 1-\epsilon$. Indeed, this will be true since our (rescaled) path moves at a speed of $1/R$. Thus, by choosing $R$ large enough, we can guarantee that $r(\tau)$ stays close to $R$ in the time interval $(0,C)$. We make sure that $R \geq C$ and that $R$ is large enough to satisfy $\frac{R-1}{R} > 1-\epsilon$, proving the claim.
This shows us that, for this large $R$, and for $\tau \in (0,C)$, we have
\begin{align} \frac{d\phi}{d\tau} &\leq (1-\epsilon)\cos \phi \left( -\frac{1}{2} + \frac{2(n-1)}{r^2-s^2}\right) + \frac{\lambda}{R} + \frac{1}{R} \sin \phi \left( \frac{s}{2} + \frac{2s(n-1)}{r^2-s^2}\right) \label{48}\\ &\leq -\frac{1}{2}(1-\epsilon) \cos \phi. \end{align} To see the last inequality, note that the last term in \eqref{48} is negative (since $\phi < 0$), and the positive term $\frac{2(n-1)}{r^2 - s^2}$ is much smaller in size than the negative term $\frac{\lambda}{R}$ (since, if $R$ is large enough, $r$ will be close to $R$ and the squared term in the denominator will dominate). The equation $d\phi / d\tau = -\frac{1}{2}(1-\epsilon) \cos \phi$ has the explicit solution \begin{align} \phi (\tau) &= -2 \arctan \left( \tanh \left( \frac{(1-\epsilon)\;\tau}{4} \right) \right), \end{align} which governs the behavior of $\phi$ for large initial $R$. Note that \begin{align}\label{411} -2\arctan \left( \tanh \left( \frac{(1-\epsilon)\tau}{4}\right)\right) + \frac{\pi}{2} &= \mathcal{O}(e^{-\frac{(1-\epsilon)\tau}{2}}), \end{align} which implies that (again, for large initial $R$) our curve will initially curl clockwise and have $\phi \approx -\pi/2$ exponentially quickly in the $\tau$-parameter. Furthermore, \eqref{411} implies that, for a small fixed number $\tau_0$, there exists a constant $c$ such that, as long as $\frac{d\phi}{d\tau} < 0$ and $\tau > \tau_0$, we have $s(\tau) > \frac{c}{R}$. This, combined with \eqref{4.7}, implies that there is a $\tau$ for which $\phi (\tau) = -\pi / 2$. Finally, \eqref{411} implies that, for large $R$, the point $(r_0, s_0)$ at which $\phi$ first reaches $-\pi/2$ satisfies $r_0 = R - \mathcal{O}(1/R)$ and $s_0 = \mathcal{O}(1/R)$.
\end{proof}
Combining this with Lemma \ref{lemma5} gives us the following corollary: \begin{corollary}\label{corollary43} For large $R$, $\phi_R$ is decreasing until either $\gamma$ crosses the line $L$, or $\phi = -3\pi / 4$. \end{corollary}
\begin{proof} The previous lemma tells us that $\phi$ will decrease to $-\pi/2$. At this point, $\dot{x}<0$ and $\dot{y}<0$, so we can apply Lemma \ref{lemma5} to complete the argument. \end{proof}
\begin{lemma}\label{lemma8} For large $R$, $f_R (T_R) = 0$.
\end{lemma}
\begin{proof}
Corollary \ref{corollary43} tells us that, for large $R$, we have $\dot{\phi}_R < 0$ at least until $\phi < -\frac{3\pi}{4}$ or $f_R (T_R) = 0$. At the same time, Lemma \ref{lemma42} tells us that, for large $R$, the maximum of $f_R$ is $\mathcal{O}(\frac{1}{R})$. Furthermore, we can compute that
\begin{align*} \frac{d\phi}{dr} &= \dot{\phi} / \dot{r} \\ &= \left(\frac{-r}{2} + \frac{(n-1)2r}{r^2-s^2} \right)\cot \phi + \left(\frac{s}{2} + \frac{(n-1)2s}{r^2-s^2} \right) + \lambda \csc \phi \\ &= I \cot \phi + II + \lambda \csc \phi \end{align*}
If $\phi$ were to equal $-3\pi / 4$, this equation simplifies to \begin{align}\label{4.12} \frac{d\phi}{dr} = I + II - \lambda \sqrt{2}. \end{align}
Note that when $r$ is large and $s = \mathcal{O} (1/R)$ is small, we know that $II$ is a small, positive quantity, while $I$ is large and negative. Therefore, there exist constants $C_1$ and $C_2$ (depending only on $\lambda$ and $n$) such that, if $r > C_1$ and $R > C_2$, $\frac{d\phi}{dr}$ will be negative at any point where $\phi = -3\pi / 4$. Since $\dot{r} = \sin \phi < 0$ there, this would make $\dot{\phi}$ positive. Therefore, for $R > C_2$, $\phi$ cannot be less than $-3\pi/4$ for $r \in [C_1, R]$.
Let us assume that we can find increasingly large $R$ for which $f_R$ remains positive on the interval $[C_1, R]$. The work above shows that, on this interval, the function $f_R$ satisfies $\dot{\phi} < 0$. Therefore, on a slightly smaller sub-interval (on which $\phi < -\pi / 2$), we have $\frac{d\phi}{dr} > 0$. Notice that the region (over $r$) where $\frac{d\phi}{dr} > 0$ corresponds to a region where $f_R$ is concave down. This gives us a way to estimate how negative $\phi$ can be. Indeed, we see that at the point $r = 2C_1$, the angle $\phi$ must satisfy \begin{align*} \phi > -\frac{\pi}{2} - \arctan \left( \frac{1}{C_1 R} \right), \end{align*} or else $f_R$ would cross the $r$-axis somewhere on the interval $[C_1, 2\,C_1]$. Therefore, for the point $r_0 = 2C_1$, we have that $f_R(r_0) = \mathcal{O}(R^{-1})$ and $\phi_R(r_0) + \pi / 2 = \mathcal{O}(R^{-1})$.
From this, and the smooth dependence of solutions of ODEs on their initial conditions, it is evident that the solutions $f_R$ (again, for increasingly large $R$) converge to a solution of the original system of ODEs that passes through the point $r = 2C_1$, $s=0$, $\phi = -\pi/2$. However, such a solution has $\dot{\phi} = \lambda < 0$, so the solution instantly moves above the line $L$. This implies that, for $R$ large enough, our solution will have a point $r \in [C_1 , 2C_1]$ that satisfies $f_R (r) =0$ and $\phi > -\pi$, giving us a contradiction.
\end{proof}
By Lemma \ref{lemma8}, $\gamma_R(T_R)$ will occur when $\gamma_R$ intersects the line $L$ for all $R$ that are large enough. However, we know that there exists an explicit solution for which $\gamma_R (T_R)$ ends on the $x$-axis (this is the circle solution, described in Lemma \ref{lemma1}). This implies that the value $R_* := \inf\{ R>0 : f_{\bar{R}}(r_{\bar{R}} (T_{\bar{R}})) = 0\; \text{for all}\; \bar{R} > R \}$ exists, is well-defined, and is greater than 0. Our next goal is to show that, as $R \searrow R_*$, the solutions $\gamma_R$ stay away from the $x$-axis and the origin.
\begin{lemma} \label{lemma9} $R_*$ satisfies $\liminf\limits_{R \searrow R_*} \left( \min\limits_{t < T_R} y_R(t)\right) > 0$. \end{lemma}
\begin{proof} Suppose this were not true, i.e., suppose there were a sequence of points $R_m \searrow R_*$ and a sequence of times $t_m$ such that $y_{R_{m}}(t_m) \rightarrow 0$. Passing to a subsequence if necessary, we can assume that $x_{R_m}(t_m) \rightarrow x_*$, so that the curves $\gamma_{R_m}$ converge to a curve $\gamma_*$ with $\gamma_{R_m} (t_m) \rightarrow (x_*, 0)$. First, assume that $x_* > 0$. Then by Lemma \ref{lemma3}, $\gamma_*$ will intersect the $x$-axis orthogonally. Because of the continuity of solutions, and the fact that all $\gamma_R$ for $R > R_*$ end on the line $L$, we know that for $R$ just above $R_*$, $\gamma_R$ travels towards the point $(x_*, 0)$ in a manner perpendicular to the $x$-axis, almost touches the $x$-axis, and then rapidly curls around and moves away from the point in a nearly vertical way. In particular, for $R_m$ very close to $R_*$, the curve $\gamma_{R_m}$ will no longer be a graph over the line $L$. This implies that at this particular $R_m$, $T_{R_m}$ occurs before $\gamma_{R_m}$ returns to $L$, which is a contradiction.
Next, assume that $x_* = 0$, so our $\gamma_{R_m}(t_m)$ are converging to the origin. This implies that the solution $\gamma_{R_*}$ terminates at $\gamma_{R_*}(T_{R_*}) = (0,0)$, and that $\gamma_{R_*}$ stays below the line $L$ close to the origin. Because of Lemma \ref{lemma5}, we know that $\theta$ will continue to decrease, and therefore that as $\gamma_{R_*}(t)$ approaches $(0,0)$, the angle of approach is $\lim_{t\rightarrow T_{R_*}} \theta(t) = \alpha$ for some angle $\alpha < -3\pi / 4$.
For this curve $\gamma_{R_*}$, and for $t$ very close to $T_{R_*}$, we have \begin{align*} \dot{x}, \dot{y} &< 0 \\ y &< x \tan \alpha \\ \lambda &< \frac{y}{2} \cos \theta \end{align*}
Using these ingredients, we compute that
\begin{align} \dot{\theta} &= \left(\frac{x}{2} - \frac{n-1}{x} \right)\sin \theta + \left(\frac{n-1}{y} - \frac{y}{2} \right)\cos \theta + \lambda \\ &\leq \left(n-1\right)\left(1 - \tan \alpha \right) \frac{\dot{y}}{y} \end{align} which is a negative quantity since $0 < \tan \alpha < 1$ and $\dot{y}/y < 0$. Integrating from $t_1$ to $t_2$ gives us
\begin{align} \theta (t_2) - \theta (t_1) \leq \left(n-1 \right)\left(1 - \tan \alpha \right) \log \left( \frac{y(t_2)}{y(t_1)}\right). \end{align}
We know that $\theta(t_2) - \theta(t_1)$ is negative and bounded, which means that $\log\left(\frac{y(t_2)}{y(t_1)}\right)$ cannot go to $-\infty$. However, as $t_2 \rightarrow T_{R_*}$, we have $y(t_2) \searrow 0$, which is a contradiction.
\end{proof}
\begin{proof}[Proof of Theorem 1] Because of Lemma \ref{lemma9}, we know that the solutions as $R \searrow R_*$ stay in a compact set away from the $x$-axis and the origin. Because of the continuity of the system of ODEs, this guarantees that the solution originating from $R_*$ begins and ends on the line $L$. We will now show that $\gamma_{R_*}$ must end by intersecting $L$ perpendicularly, at an angle of $\phi = -\pi$.
If $\phi_{R_*} (t_M) > -\pi$, then (by the continuity of the system of ODEs) there would exist an $R_{\circ} < R_*$ for which, for all $R$ satisfying $R_{\circ} < R \leq R_{*}$, $\gamma_{R}$ is a graph over the line, begins and ends on the line, and ends also at an angle $\phi_R (t_M) > -\pi$. This contradicts the definition of $R_*$ as the infimum of all such values.
At the same time, if $\gamma_{R_*}$ meets the line at an angle $\phi_{R_*} < -\pi$, then (again by the continuity of the system of ODEs) there would exist a value $R_{\circ} > R_*$ for which $\gamma_{R_{\circ}}$ also satisfies $\phi_{R_{\circ}} < -\pi$. This would contradict the definition of $R_*$ as the infimum, since the infimum would then have to be greater than or equal to $R_{\circ}$.
Therefore, $\gamma_{R_*}$ must begin and end on the line, and must meet the line perpendicularly. This completes the proof of the theorem.
\end{proof}
\end{document}
\begin{document}
\title{Time-series and network analysis in quantum dynamics: \\ Comparison with classical dynamics}
\author{Pradip Laha}
\altaffiliation[]{pradip@physics.iitm.ac.in} \author{S. Lakshmibala}
\affiliation{
Department of Physics, IIT Madras, Chennai 600036, India\\} \author{V. Balakrishnan}
\affiliation{Department of Physics, IIT Madras, Chennai 600036, India\\}
\date{\today}
\begin{abstract} Time-series analysis and network analysis are now
used extensively in diverse areas of science. In this paper, we apply these techniques to quantum dynamics in an optomechanical system: specifically, the long-time dynamics of the mean photon number in an archetypal tripartite quantum system comprising a single-mode radiation field interacting with a two-level atom and an oscillating membrane. We also investigate a classical system of interacting Duffing oscillators which effectively mimics several of the features of tripartite quantum-optical systems. In both cases, we examine the manner in which the maximal Lyapunov exponent obtained from a detailed time-series analysis varies with changes in an appropriate tunable parameter of the system. Network analysis is employed in both the quantum and classical models to identify suitable network quantifiers which will reflect these variations with the system parameter. This is a novel approach towards (i) examining how a considerably smaller data set (the network) obtained from a long time series of dynamical variables captures important aspects of the underlying dynamics, and (ii) identifying the differences between classical and quantum dynamics.
\begin{description}
\item[PACS numbers] 05.45.Tp; 42.50.-p; 05.45.-a
\end{description} \end{abstract}
\keywords{ Cavity optomechanics, Duffing oscillator, time-series analysis, network analysis, recurrence plot, maximal Lyapunov exponent}
\maketitle
\section{\label{sec:level1} Introduction} \noindent The availability of time-series data in diverse areas such as weather forecasting, climate research and medicine~\cite{zou,gao1,donges,marwan2,ramirez} has facilitated detailed investigations leading to the extraction of important results on the dynamics of a variety of systems. Several tools have been proposed in time-series analysis to assess the long-time behaviour of complex dynamical systems. The methods used involve the identification and estimation of indicators of the nature of the underlying dynamics such as the maximal Lyapunov exponent (MLE), return maps, return-time distributions, recurrence plots, and so on.
In recent years, the
analysis of networks constructed from a long time series has proved to be another important tool that has contributed significantly to the understanding of classical dynamics~\cite{newman_book,cohen,newman,boccaletti,kurths_phys_rep}. The problem of handling a large
data set is circumvented by reducing it to a considerably smaller optimal set (the network), particularly in the context of machine learning protocols~\cite{seth_lloyd,rebentrost}.
Different methods
have been employed
to convert the time series of a
classical dynamical variable into an equivalent network, each method capturing specific features of the dynamics encoded in the time series~\cite{zhang,lacasa,nicolis,
marwan1,yang,xu,donner_epj}. In this paper, we have constructed $\epsilon$-recurrence networks to obtain smaller data sets from the time series of relevant observables of certain tripartite systems. We have carried out this investigation in the context of both quantum and classical dynamics. The network indicators that we
consider are the average path length (APL), link density (LD), clustering coefficient (CC), transitivity, assortativity and degree distribution. The purpose of this study is three-fold: (a)~to examine the manner in which these network indicators vary with changes in specific system parameters; (b)~to assess the extent to which these variations reflect those of indicators obtained from the full data set such as the MLE; (c)~to understand the differences in the behavior of network indicators
computed from data sets pertaining, respectively, to quantum and classical systems.
In the quantum mechanical context, we have examined the time series data for the mean photon number of the
radiation field in a cavity optomechanical system
as well as the equivalent network.
The results obtained have been compared with corresponding results reported in an earlier work~\cite{laha4} for another tripartite quantum system, namely, a three-level $\Lambda$-atom interacting with two radiation fields. (In what follows, we shall refer to this system as the tripartite $\Lambda$ system). The optomechanical model involves the interaction of the optical field contained in a cavity with a two-level atom placed inside the cavity and with a mechanical oscillator attached to one of the cavity walls, which is capable of small oscillations. The oscillations, as well as the atomic transitions, are governed by the radiation pressure. The dynamics of the quantum oscillator has been {\em controlled} by this method in several contexts, such as the detection of gravitational waves~\cite{abramovici,braginsky}, high precision measurements of masses and the weak force~\cite{vitali2,geraci,lamoreaux}, quantum information processing~\cite{stannigel}, cooling mechanical resonators very close to their quantum ground states~\cite{barzanjeh,wilson_rae,genes1,li}, and examining classical-quantum transitions in mechanical systems~\cite{schwab,marshall}. Optomechanical systems have thus attracted considerable attention both theoretically as well as experimentally (see also ~\cite{aspelmeyer,bowen} and references therein).
Further, if the field-atom coupling is dependent on the field intensity, new phenomena occur. A special form of the intensity-dependent coupling (IDC) which is important from a group-theoretic point of view is given by $f(N) = (1 + \kappa \,N)^{1/2}$ where $\kappa$ is the `intensity parameter' and $N$ is the photon number operator~\cite{siva}. It has been shown in earlier work~\cite{lahap} that, for this form of the IDC, the dynamics of the mean photon number $\aver{N}$ as well as the entanglement properties depend sensitively on $\kappa$. These interesting features in the dynamics make this model a good candidate for time-series and network analysis.
The classical system we consider here is a set of two coupled Duffing oscillators. The dynamical variable in this case is essentially the velocity of one of the oscillators. As is well known,
the Duffing oscillator exhibits rich dynamical behaviour (see, for instance, ~\cite{alzar}), which makes it an ideal candidate for examining generic features of time series and networks, so that inferences can be drawn
in a general setting. The Duffing equation
has been extensively used
to model the behaviour of a wide spectrum of mechanical oscillators, electrical circuits, nonlinear pendulums,
aspects of hydrodynamics, and so on. Small
variations in the system parameters can produce significant changes in the dynamics,
ranging from quasiperiodicity to chaos~\cite{argyris}.
The reason for focusing on the system of
Duffing oscillators for our purposes
is as follows. The phenomenon of
electromagnetically induced transparency (EIT) occurs under suitable conditions in quantum systems involving
an atomic medium interacting with two laser fields
(see, for instance,~\cite{harris}). EIT basically refers to the appearance of a transparency window within the absorption spectrum of the atomic system. This effect has been observed in many experiments, and several investigations have been carried out using theoretical models that explain the occurrence of EIT. A simple quantum system exhibiting EIT is the tripartite $\Lambda$ system mentioned earlier. Of immediate interest to us is the fact that a classical analog of EIT-like behavior has been demonstrated in
as simple a system as two coupled harmonic oscillators subject to a harmonic driving force~\cite{alzar}. Inclusion of a cubic nonlinearity and dissipation leads to more interesting and physically more realistic
behavior, which can be effectively modelled by two coupled Duffing oscillators.
Motivated by the diversity of its dynamics and its
capability to mimic certain types of quantum
phenomena such as EIT, we have carried out both
time series analysis and network analysis on this classical system.
The rest of this paper is organized as follows: In Section~\ref{sec:time_series} we outline very briefly the salient features of time-series and network analysis, in order to make the discussion self-contained. In Section~\ref{sec:opto_model}, after introducing the quantum optomechanical model, we present our results on the time-series analysis and network characteristics in this model. The results are compared, where possible, with corresponding ones for the tripartite $\Lambda$ system. Section~\ref{sec:duffing_model} is devoted to a similar study of classical coupled Duffing oscillators. In Section~\ref{sec:conclusion}, we conclude with brief comments and indicate possible avenues for further research.
\section{\label{sec:time_series} Time-series analysis and network indicators}
\noindent We outline first the salient aspects of time-series analysis and the manner in which an $\epsilon$-recurrence network is obtained from a time series. The network indicators of relevance to us are also defined. Suppose we have a long time series $\{s(i)\} \, (i = 1,\,2,\cdots, M)$, either measured or otherwise generated, of some relevant quantity (the expectation value of an observable in the quantum mechanical case, or the value of a dynamical variable in the classical case). The first task is to identify an effective phase space of dimension significantly smaller than $M$ in which the dynamics can be captured. For this purpose we need to obtain a suitable time delay $t_{d}$. Following a commonly used prescription~\cite{fraser_swinney}, $t_{d}$ is taken to be the first minimum (as a function of $T$) of the average mutual information \begin{equation} I(T) = \hspace{-2ex} \sum_{s(i), s(i+T)} \hspace{-2.5ex} p\big(s(i), s(i+T)\big) \log_{2} \Big\{ \frac{p\big(s(i),s(i+T)\big)}{p(s(i)) p(s(i+T))} \Big\}. \label{eqn:mutual_info_timeseries} \end{equation} Here, $p(s(i))$ and $p\big(s(i+T)\big)$ are the individual probability densities for obtaining the values $s(i)$ and $s(i+T)$ at times $i$ and $(i+T)$, respectively, and $p\big(s(i), s(i+T)\big)$ is the corresponding joint probability density. Now, employing the standard machinery of time-series analysis (see, e.g.,~\cite{abarbanel}) we reconstruct, from $\{s(i)\}$ and $t_{d}$, an effective phase space of dimension $d_{\textrm{emb}}$. In this space there are $M' = M - (d_{\textrm{emb}} - 1)t_{d}$ delay vectors ${\bf x}_{j} \; (j = 1,\,2,\dotsc, M')$ given by \begin{equation}
{\mathbf x}_{j} = \big[s(j),\, s(j+t_{d}), \dotsc,\, s\big(j + (d_{\textrm{emb}}-1)t_{d}\big)\big].
\label{eqn:delay_vec} \end{equation} The underlying dynamics takes one delay (or state) vector to another, and phase trajectories arise, with
$d_{\rm emb}$ Lyapunov exponents. Of direct interest to us is the maximal Lyapunov exponent (MLE), which we have computed using the standard TISEAN package~\cite{tisean} in both the quantum and classical systems for various values of system parameters.
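The two steps just described, estimating $t_d$ from the first minimum of $I(T)$ and forming the delay vectors, can be sketched in a few lines of NumPy; the histogram-based (plug-in) estimate of $I(T)$ and all function names below are our own, not those of the TISEAN package.

```python
import numpy as np

def average_mutual_information(s, T, bins=16):
    """Histogram (plug-in) estimate of the average mutual information I(T)
    between s(i) and s(i+T); t_d is the first minimum of this over T."""
    x, y = s[:-T], s[T:]
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()                           # joint probabilities
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)  # marginals
    mask = pxy > 0
    return np.sum(pxy[mask] * np.log2(pxy[mask] / np.outer(px, py)[mask]))

def delay_vectors(s, t_d, d_emb):
    """The M' = M - (d_emb - 1) t_d delay vectors x_j built from {s(i)}."""
    M = len(s)
    Mp = M - (d_emb - 1) * t_d
    return np.array([s[j:j + (d_emb - 1) * t_d + 1:t_d] for j in range(Mp)])
```

The rows of the array returned by `delay_vectors` are the state vectors $\mathbf{x}_j$ on which the recurrence network below is built.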
Network analysis involves coarse-graining the phase space into cells of a suitable size. An important aspect
here is the construction of the adjacency matrix $A$ which depends on the cell size. $A = R - I$, where $I$ is the $M'\times M'$ unit matrix, and for a given cell size $\epsilon$, $R$ is the $(M'\times M')$ recurrence matrix with elements \begin{equation}
R_{ij} = \Theta\big(
\epsilon - \parallel \mathbf{x}_{i}-\mathbf{x}_{j}
\parallel\big).
\label{eqn:rij} \end{equation} Here $\Theta$ denotes the unit step function and $\parallel \cdot\parallel$ is the standard Euclidean norm. Any two state vectors (equivalently, two nodes of a network) ${\bf x}_{i}$ and ${\bf x}_{j} \,\,(i \neq j)$ are said to be connected iff $A_{ij} = 1$. The network is constructed with links between such connected nodes.
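In code, the recurrence and adjacency matrices of the $\epsilon$-recurrence network follow directly from the delay vectors; a minimal sketch with our own naming (we take $\Theta(0) = 1$, so pairs at distance exactly $\epsilon$ are linked):

```python
import numpy as np

def adjacency_matrix(X, eps):
    """R_ij = Theta(eps - ||x_i - x_j||) for delay vectors X (one per row),
    and A = R - I: nodes i != j are linked iff ||x_i - x_j|| <= eps."""
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    R = (dist <= eps).astype(int)
    return R - np.eye(len(X), dtype=int)
```

By construction $A$ is symmetric with a vanishing diagonal, as required for an undirected network without self-loops.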
The choice of the cell size $\epsilon$ is important. Its threshold or optimal value~$\epsilon_{c}$ must be chosen judiciously. Too small a value of $\epsilon$ makes the network sparsely connected, with an adjacency matrix that has too many vanishing off-diagonal elements. Too large a value of $\epsilon$ makes too many off-diagonal elements of $A$ equal to unity, and hence the small-scale properties of the system cannot be captured. Our choice of $\epsilon_{c}$ is based on the recent proposal~\cite{eroglu} in the context of $\epsilon$-recurrence networks. Consider the $(M'\times M')$ Laplacian matrix $L$ with elements \begin{equation} L_{ij} = D_{ij} - A_{ij}.
\label{eqn:lij} \end{equation} Here $D = {\rm diag} \,(k_{1}, \, \dotsc, \,k_{M'})$ is the degree diagonal matrix, where $k_{i} = \sum_{j} A_{ij}$
is the degree of node $i$. $L$ is a real symmetric matrix, and each of its row sums vanishes. Hence the eigenvalues of $L$ are real and non-negative, and at least one of them is zero. Increasing $\epsilon$ upward from zero, we determine the smallest value of $\epsilon$ (denoted by $\epsilon_{c}$) for which the second-smallest eigenvalue of $L$ becomes nonzero, i.e., for which the network becomes connected.
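This spectral criterion for $\epsilon_{c}$ can be implemented by scanning a grid of candidate values and monitoring the second-smallest Laplacian eigenvalue (the algebraic connectivity); a sketch under the same naming assumptions as above:

```python
import numpy as np

def critical_epsilon(X, eps_grid, tol=1e-8):
    """Smallest eps in eps_grid for which the second-smallest eigenvalue of
    L = D - A is nonzero, i.e. the eps-recurrence network is connected."""
    n = len(X)
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    for eps in sorted(eps_grid):
        A = (dist <= eps).astype(float) - np.eye(n)
        L = np.diag(A.sum(axis=1)) - A          # graph Laplacian
        eigvals = np.sort(np.linalg.eigvalsh(L))
        if eigvals[1] > tol:
            return eps
    return None
```

The number of vanishing eigenvalues of $L$ equals the number of connected components, so the test `eigvals[1] > tol` detects exactly the transition to a connected network.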
The network indicators that we have computed for the systems of interest to us are the average path length (APL), the link density (LD), the clustering coefficient (CC), the transitivity ($\mathcal{T}$), the assortativity ($\mathcal{R}$) and the degree distribution~\cite{kurths_phys_rep,boccaletti, strogatz}. For ready reference, their definitions are as follows.
For a network of $M'$ nodes, the average path length APL is given by \begin{equation}
\text{APL} = [M'(M'-1)]^{-1} \sum_{i,j}^{M'} d_{ij},
\label{eqn:apl} \end{equation} where $d_{ij}$ is the shortest path length connecting nodes $i$ and $j$. The link density LD is given by \begin{equation}
\text{LD} = [M'(M'-1)]^{-1} \sum_{i}^{M'} k_{i},
\label{eqn:ld} \end{equation} where $k_{i}$ is the degree of node~$i$ (as already defined). The local clustering coefficient, which measures the probability that two randomly chosen neighbors of a given node $i$ are directly connected, is defined as \begin{equation}
C_{i} = [k_{i}(k_{i}-1)]^{-1} \sum_{j,k}^{M'} A_{jk} \, A_{ij}\, A_{ik}.
\label{eqn:lcc} \end{equation} The global clustering coefficient CC is the arithmetic mean of the local clustering coefficients taken over all the nodes of the network. The transitivity $\mathcal{T}$ of the network is defined as \begin{equation}
\mathcal{T} = \frac{\sum_{i,j,k}^{M'} A_{ij}\,A_{jk}\, A_{ki}}{\sum_{i,j,k}^{M'} A_{ij} \, A_{ki}}.
\label{eqn:transitivity} \end{equation} The other indicator that we have considered is the assortativity coefficient $ \mathcal{R}$, which is a measure of the correlation between two nodes of a network. Consider a randomly chosen node $j$ connected by an edge to a randomly chosen node $i$. Then the assortativity coefficient, also known as the Pearson correlation coefficient of degree between all such pairs of linked nodes, is given by \begin{equation}
\mathcal{R} = (1/\sigma_{q}^{2})\,\sum_{i,j}^{M'} i\,j (e_{ij} - q_{i}q_{j}),
\label{eqn:assortativity} \end{equation} where the quantities on the right-hand side are defined as follows. $q_{i}$ is the distribution of the `remaining' degrees, i.e., the number of edges leaving the node $j$ other than the one that connects the chosen $(i,\, j)$ pair. $e_{{ij}}$ is the joint probability distribution of these remaining degrees, normalized according to $\sum_{i,j}{e_{ij}} = 1$. Also, $\sum _{j} e_{jk} = q_{k} = (k+1)\,p_{k+1}/\sum_{j} (j \,p_{j})$, where $p_{k}$ is the degree distribution of the network, i.e., the probability that a randomly chosen node in the network will have degree $k$. Finally,
$\sigma_{q}^{2} = \sum_{k} k^{2} q_{k} - [\sum_{k}k q_{k}]^{2}$ is the variance corresponding to
the distribution $q_{k}$.
It is readily seen that $-1\leqslant \mathcal{R} \leqslant 1$. $ \mathcal{R} = 1$ indicates perfect assortative mixing, $ \mathcal{R} = 0$ corresponds to non-assortative mixing, and $ \mathcal{R} = -1$ implies complete disassortative mixing.
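For concreteness, the simplest of these indicators can be computed directly from the adjacency matrix $A$. The sketch below is our own implementation of the standard formulas quoted above (with the paths of length two, $\sum_i k_i(k_i-1)$, in the denominator of $\mathcal{T}$); assortativity and the degree distribution follow analogously.

```python
import numpy as np
from collections import deque

def network_indicators(A):
    """Link density, global clustering coefficient, transitivity and
    average path length of the undirected network with adjacency matrix A."""
    n = len(A)
    k = A.sum(axis=1)                           # node degrees
    LD = k.sum() / (n * (n - 1))
    C = np.zeros(n)                             # local clustering coefficients
    for i in range(n):
        if k[i] > 1:
            nbrs = np.flatnonzero(A[i])
            C[i] = A[np.ix_(nbrs, nbrs)].sum() / (k[i] * (k[i] - 1))
    CC = C.mean()
    closed = np.trace(A @ A @ A)                # closed triples (6 x triangles)
    paths2 = (A @ A).sum() - np.trace(A @ A)    # paths of length two
    T = closed / paths2 if paths2 else 0.0
    total = 0                                   # APL via BFS from every node
    for src in range(n):
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in np.flatnonzero(A[u]):
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
    APL = total / (n * (n - 1))
    return LD, CC, T, APL
```

On a complete graph all four quantities equal one, which provides a quick sanity check of the implementation.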
\section{\label{sec:opto_model} The optomechanical model} \noindent As stated in Section \ref{sec:level1}, the tripartite quantum system we examine comprises a two-level atom placed inside a Fabry-P\'{e}rot cavity, one of whose mirrors is free to undergo small oscillations. The mirror is modeled as a quantum harmonic oscillator. The model Hamiltonian (setting $\hbar$ = 1) is given by~\cite{lahap} \begin{align}
H = \omega\, a^{\dagger} a & + \omega_{m}\, b^{\dagger} b + \tfrac{1}{2}\omega_{0} \sigma_{z} - G \, a^{\dagger} a (b + b^{\dagger}) \nonumber \\
&+ \Omega\, [a\, f(N) \, \sigma_{+} + f(N) \,a^{\dagger}\,\sigma_{-}].
\label{eqn:parent_hamiltonian} \end{align} $a^{\dagger}, a$ are the photon creation and annihilation operators of the cavity mode of frequency~$\omega$; $b^{\dagger}, b$ are the phonon creation and annihilation operators of the mirror oscillator, with natural frequency~$\omega_{m}$. The optomechanical coupling coefficient is $G = (2\,m\,\omega_{m})^{-1/2}\, (\omega/L)$, where $L$ and $m$ are, respectively, the length of the cavity and the mass of the mirror. The atomic operators are $\sigma_{z} = \ket{e}\bra{e} - \ket{g}\bra{g}$, \,$\sigma_{+} = \ket{e}\bra{g}$ and $\sigma_{-} = \ket{g}\bra{e}$, where $\ket{g}$ and $\ket{e}$ denote the ground and excited states of the atom. $\omega_{0}$ is the atomic transition frequency and $\Omega$ is the field-atom coupling constant. We have used the resonance condition $\omega = \omega_{0} + \omega_{m}$ in our analysis. The real-valued function $f(N) = (1+ \kappa\, N)^{1/2}$, where $N = a^{\dagger}a$ is the photon number operator and $\kappa$ ($0\leqslant \kappa \leqslant 1$) is the tunable intensity parameter, incorporates the intensity-dependent field-atom coupling present in the system. $N\ket{n} = n\ket{n}$, where $\ket{n}$ is the $n$-photon state.
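For concreteness, the Hamiltonian of Eq.~\eqref{eqn:parent_hamiltonian} can be assembled numerically on a truncated Fock space and checked for Hermiticity. The numpy sketch below is our illustration; the truncation sizes and parameter values are assumptions, not the values used later in the paper.

```python
import numpy as np

def destroy(d):
    """Annihilation operator truncated to a d-dimensional Fock space."""
    return np.diag(np.sqrt(np.arange(1.0, d)), 1)

def kron3(A, B, C):
    """Operator on the field (x) mirror (x) atom product space."""
    return np.kron(np.kron(A, B), C)

Nf, Nm = 10, 10                     # assumed Fock-space truncations
kappa, beta = 0.5, 1.0
omega_m, Omega = 1.0, 1e-3          # illustrative, with omega_m >> Omega
G = Omega / beta                    # Omega = beta * G, as in the text
omega0 = 5.0                        # illustrative atomic frequency
omega = omega0 + omega_m            # resonance condition of the text

a, b = destroy(Nf), destroy(Nm)
If_, Im, I2 = np.eye(Nf), np.eye(Nm), np.eye(2)
N = a.T @ a                                          # photon number
fN = np.diag(np.sqrt(1.0 + kappa * np.arange(Nf)))   # f(N)
sz = np.diag([1.0, -1.0])                            # |e><e| - |g><g|
sp = np.array([[0.0, 1.0], [0.0, 0.0]])              # sigma_+ = |e><g|
sm = sp.T                                            # sigma_-

H = (omega * kron3(N, Im, I2)
     + omega_m * kron3(If_, b.T @ b, I2)
     + 0.5 * omega0 * kron3(If_, Im, sz)
     - G * kron3(N, b + b.T, I2)
     + Omega * (kron3(a @ fN, Im, sp) + kron3(fN @ a.T, Im, sm)))
```

Since $(a\,f(N))^{\dagger} = f(N)\,a^{\dagger}$, the matrix built this way is Hermitian, which one can verify directly.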
An effective Hamiltonian $H_{\textrm{eff}}$ for this system can be obtained~\cite{lahap} from $H$ in the regime
$\omega_{m} \gg G, \,\Omega$. This is given by \begin{align} H_{\textrm{eff}} = (G^{2}/\omega_{m}) &\Big\{ \beta\big[f(N) a^{\dagger} b \sigma_{-} + a f(N)b^{\dagger} \sigma_{+}\big] \nonumber \\
&- \beta^{2} \big[a^{\dagger} a\, \sigma_{z} - \sigma_{+}\sigma_{-}\big] - (a^{\dagger} a)^{2}\Big\}. \label{eqn:eff_hamiltonian} \end{align} In real experiments the numerical values of $G$ and $\Omega$ are comparable. Therefore, in deriving Eq. \eqref{eqn:eff_hamiltonian}, we have set $\Omega = \beta\, G$, where $\beta$ is a constant of proportionality of the order of unity. We investigate the dynamics of the system in terms of the dimensionless time $\tau = (G^{2}/\omega_{m})t$.
The initial state $\ket{\psi(0)}$ of the full system is taken to be a direct product of the following states: (i) the field in the standard normalized oscillator coherent state (CS) $\ket{\alpha}$, $\alpha\in \mathbb{C}$; (ii) the mirror in the oscillator ground state $\ket{0}$; and (iii) the atom in an arbitrary superposition $(\cos\phi \ket{e} + \sin\phi \ket{g})$. Thus
\begin{equation}
\ket{\psi(0)}
= \sum_{n=0}^{\infty} l_{n}(\alpha) \big(\cos\phi \ket{n; 0; e} + \sin\phi \ket{n; 0; g}\big) \label{eqn:init_state} \end{equation} where $l_{n}(\alpha) = e^{-\vert \alpha \vert^{2}/2} \alpha^{n}/\sqrt{n!}$ and the notation in the kets representing product states is self-evident. The state of the system at any time $t > 0$
is obtained by solving the Schr\"{o}dinger equation, and is found to be given by \begin{align} \ket{\psi(t)} = \sum_{n=0}^{\infty} &l_{n}(\alpha) \big[A_{n}(t) \ket{n; 0; e} + B_{n}(t) \ket{n; 0; g}\big] \nonumber \\
&+ \sum_{n=1}^{\infty} l_{n}(\alpha) C_{n}(t) \ket{n-1; 1; e}. \label{eqn:final_state} \end{align} The time-dependent coefficients are given by \begin{subequations} \begin{align}
\label{eqn:soln_a}
A_{n}(t) &= e^{i \gamma_{1} t}\cos\phi,\\
\label{eqn:soln_b}
B_{n}(t) &= e^{i \gamma_{2} t} \sin\phi \, \big[\cos\,(R t) + \Delta_{b} \sin\,(R t)\big],\\
\label{eqn:soln_c}
C_{n}(t) &= e^{i \gamma_{2} t} \Delta_{c}
\sin\phi\,\sin(R t),
\end{align} \end{subequations} where (in units of $G^{2}/\omega_{m}$) \begin{subequations} \begin{align}
\gamma_{1} &= n^{2} + \beta^{2}(n+1), \\
\gamma_{2} &= n^{2} - n + \tfrac{1}{2},\\
\Delta_{b} &= -i (n -\tfrac{1}{2} - \beta^{2} n)/R, \\
\Delta_{c} &= -i \beta\,\sqrt{n}\, f(n)/R,\\
R &= \big\{(n^{2} - n+\tfrac{1}{2})^{2} + \beta^{2}\, n\, f^{2}(n) \nonumber \\
&\qquad- n[(n-1)^{2} + \beta^{2}\, n] (n - \beta^{2}) \big\}^{1/2}.
\label{eqn:parameters} \end{align} \end{subequations}
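Since the coefficients are available in closed form, observables such as the mean photon number follow by direct summation over the Fock basis. The numpy sketch below does this (our illustration; the parameter values and truncation are assumptions, not those used in the figures). Note that the expression for $R$ implies $\vert A_{n}\vert^{2} + \vert B_{n}\vert^{2} + \vert C_{n}\vert^{2} = 1$ for each $n$, which provides a built-in normalization check.

```python
import numpy as np

def mean_photon_number(t, alpha2=5.0, phi=np.pi / 4,
                       beta=1.0, kappa=0.2, nmax=80):
    """<N>(t) in the state of Eq. (final_state), from the closed-form
    coefficients A_n, B_n, C_n; t is in units of G^2/omega_m.
    Returns (mean photon number, norm of the truncated state)."""
    n = np.arange(nmax, dtype=float)
    f2 = 1.0 + kappa * n                           # f(n)^2
    g1 = n**2 + beta**2 * (n + 1.0)                # gamma_1
    g2 = n**2 - n + 0.5                            # gamma_2
    R = np.sqrt(g2**2 + beta**2 * n * f2
                - n * ((n - 1.0)**2 + beta**2 * n) * (n - beta**2))
    sincR = np.where(R > 0, np.sin(R * t) / np.where(R > 0, R, 1.0), t)
    A = np.exp(1j * g1 * t) * np.cos(phi)
    B = np.exp(1j * g2 * t) * np.sin(phi) * (
        np.cos(R * t) - 1j * (n - 0.5 - beta**2 * n) * sincR)
    C = np.exp(1j * g2 * t) * np.sin(phi) * (
        -1j * beta * np.sqrt(n * f2) * sincR)
    w = np.empty(nmax)                             # Poisson weights |l_n|^2
    w[0] = np.exp(-alpha2)
    for m in range(1, nmax):
        w[m] = w[m - 1] * alpha2 / m
    # |n;0;e> and |n;0;g> carry n photons, |n-1;1;e> carries n-1
    meanN = np.sum(w * (n * (abs(A)**2 + abs(B)**2)
                        + (n - 1.0) * abs(C)**2))
    norm = np.sum(w * (abs(A)**2 + abs(B)**2 + abs(C)**2))
    return meanN, norm
```

At $t = 0$ the state reduces to Eq.~\eqref{eqn:init_state}, so $\aver{N}(0) = \vert\alpha\vert^{2}$ up to the (negligible) truncation error.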
We now present our results. We have varied $\kappa$ from 0 to 1, and for each value of $\kappa$, numerically generated a long time series of the mean photon number $\aver{N}$ with time step $\delta\,\tau = 2.5\times10^{-5}$.
After discarding the initial transients (the first $10^{4}$ points) from each of the data sets, \begin{figure}
\caption{Tripartite optomechanical model: MLE versus $\kappa$ using a long time series of $ 3\times 10^{5}$ data points.}
\label{fig:mle_300k_vs_k_opto}
\end{figure} \begin{figure}
\caption{Tripartite optomechanical model: $\epsilon_{c}$ versus $\kappa$.}
\label{fig:eps_vs_k_opto}
\end{figure}
we have examined the manner in which the MLE, return-time distributions, recurrence plots, etc. change when the value of $\kappa$ is changed. Consistent with experiments~\cite{cleland,hood}, we set $\vert\alpha\vert^{2} = 25, \,\theta = \tfrac{1}{2} \pi,\, \Omega = 10^{6}$\,Hz, $\beta = 1$ and $\omega_{m} = 10^{9}$\,Hz. Figure \ref{fig:mle_300k_vs_k_opto} shows how the MLE varies with $\kappa$, based on a long time series of $3\times 10^{5}$ data points for each value of $\kappa$. We note that the reconstructed dynamics is chaotic, but only weakly so, as indicated by the small positive values of the MLE.
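The MLE quoted here can be estimated from a scalar time series along the lines of the Rosenstein algorithm. The sketch below is a simplified stand-in (not the code used for the figures); the embedding dimension, Theiler window, number of evolution steps and fit range are illustrative defaults, not the values used in our analysis.

```python
import numpy as np

def mle_rosenstein(x, dim=2, tau=1, theiler=10, k_max=12, fit=(0, 5)):
    """Rosenstein-style estimate of the maximal Lyapunov exponent:
    delay-embed the series, pair each point with its nearest neighbour
    outside a Theiler window, average the log of their separation as
    both evolve, and fit a line to the initial growth.  The slope is
    the MLE in units of inverse time steps."""
    n = len(x) - (dim - 1) * tau
    Y = np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])
    m = n - k_max                        # points whose future we can follow
    d = np.linalg.norm(Y[:m, None] - Y[None, :m], axis=-1)
    for i in range(m):                   # exclude temporally close pairs
        d[i, max(0, i - theiler): i + theiler + 1] = np.inf
    nb = d.argmin(axis=1)                # nearest neighbour of each point
    div = np.empty(k_max)
    for k in range(k_max):               # mean log divergence after k steps
        sep = np.linalg.norm(Y[np.arange(m) + k] - Y[nb + k], axis=-1)
        div[k] = np.log(sep[sep > 0]).mean()
    a, b = fit
    return np.polyfit(np.arange(a, b), div[a:b], 1)[0]
```

On the fully chaotic logistic map $x_{n+1} = 4x_{n}(1-x_{n})$ this yields a value close to the exact MLE $\ln 2 \approx 0.69$ per step, while for a regular signal it returns a value near zero.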
We now examine how network indicators behave as a function of $\kappa$. In the spirit of network analysis, for each value of $\kappa$, we have considered only 25000 data points in the corresponding time series. (This is only $\sim 8\%$ of the data set of the longer time series.) The optimal value $\epsilon_{c}$ has been estimated for each value of $\kappa$ (Fig. \ref{fig:eps_vs_k_opto}). We note that the qualitative behaviour of $\epsilon_{c}$ as a function of $\kappa$ is broadly similar to that of the MLE in Fig. \ref{fig:mle_300k_vs_k_opto}.
\begin{figure*}
\caption{Tripartite optomechanical model: (a) LD, (b) assortativity, and (c) CC (black curve) and transitivity (red curve) versus $\kappa$. }
\label{fig:network_opto_mech_300k_lyap}
\end{figure*}
\begin{figure}
\caption{Tripartite optomechanical model: APL (red curve) and MLE (black dotted curve) with $25000$ data points versus $\kappa$.}
\label{fig:network_opto_mech_25k_lyap}
\end{figure} \begin{figure}
\caption{Tripartite $\Lambda$ system: (a) MLE for $3\times 10^{5}$ (black curve) and $25000$ (red curve) data points. (b) CC (black curve) and transitivity (red curve). These plots are reproduced from~\cite{laha4}.}
\label{fig:lyap_cc_lambda}
\end{figure}
\begin{figure*}
\caption{Tripartite optomechanical model: Degree distributions (top panel), first-return-time distributions to 50 cells (centre panel) and recurrence plots (bottom panel) for $\kappa = $ 0, 0.03, 0.06 and 0.1 (left to right).}
\label{fig:deg_frtd_rec_plots_opto}
\end{figure*}
The manner in which LD, assortativity, CC and transitivity vary with changes in $\kappa$ is shown in Figs. \ref{fig:network_opto_mech_300k_lyap}(a)-(c). As expected, CC and transitivity display similar behavior (Fig. \ref{fig:network_opto_mech_300k_lyap}(c)). In all these plots, the network indicators are obtained using the shorter time series (data sets of 25000 points). It is seen that these network indicators display roughly the same trend as $\kappa$ increases, similar to the behavior of the MLE in Fig. \ref{fig:mle_300k_vs_k_opto}. The APL, however, does not follow this trend at all (the red curve in Fig. \ref{fig:network_opto_mech_25k_lyap}). In this sense, LD, CC, assortativity and transitivity are better network indicators than APL. On the other hand, if the MLE is computed using the short time series, its variation with $\kappa$ is qualitatively similar to that of the APL (the black dotted curve in Fig. \ref{fig:network_opto_mech_25k_lyap}), while differing significantly from the `true' variation of the MLE as depicted in Fig. \ref{fig:mle_300k_vs_k_opto}.
These inferences are in sharp contrast to those drawn from a similar investigation on the tripartite $\Lambda$ system mentioned earlier~\cite{laha4}. A noteworthy difference between this quantum system and the optomechanical system is that, for the tripartite $\Lambda$ system, the plots of the MLE versus $\kappa$ obtained with $3\times 10^{5}$ and $25000$ data points, respectively,
do not differ significantly (Fig. \ref{fig:lyap_cc_lambda}(a)). It has also been
shown in that case that CC and transitivity are very good network indicators (Fig. \ref{fig:lyap_cc_lambda}(b)), and the minimum in the MLE at $\kappa = 0.0033$ is reflected as a maximum in the CC and transitivity.
For completeness, we have examined the manner in which the degree distributions, return-time distributions to cells, recurrence plots, etc. vary with changes in $\kappa$ in the optomechanical model under study. Each time series comprises 25000 data points. The degree distribution plots are distinctly different for different values of $\kappa$ (the top panel of Fig. \ref{fig:deg_frtd_rec_plots_opto}). For instance, the single-peaked distributions for smaller values of $\kappa$ gradually change to double-peaked distributions as $\kappa$ is increased. Further, the spread in the distributions changes dramatically with increasing $\kappa$.
The first-return-time distributions to a specific cell for various values of $\kappa$ are shown in the centre panel of Fig. \ref{fig:deg_frtd_rec_plots_opto}. We find that, for almost all values of $\kappa$, there exist several significant peaks in addition to a prominent one. For higher values of $\kappa$ ($\kappa > 0.06$) the spread in the distribution is relatively smaller. We have verified that the second-return-time distributions exhibit similar behavior.
The manner in which recurrence plots change with $\kappa$ is displayed in the bottom panel of Fig. \ref{fig:deg_frtd_rec_plots_opto}. The return maps and the power spectra are not very sensitive to changes in $\kappa$.
In the tripartite $\Lambda$ system, qualitative changes in the recurrence plots, return maps and recurrence-time distributions with changes in the value of $\kappa$ mirrored the fact that
the MLE was at a minimum at $\kappa = 0.0033$. Such clear signatures are absent in the case of the optomechanical model.
We turn in the next Section to a classical system which is a near analog of the tripartite $\Lambda$ system, namely, two coupled Duffing oscillators. In this case, it is shown that the variation of the MLE with changes in a parameter analogous to $\kappa$ is similar for data sets with $10^{5}$ points and $25000$ points respectively, in the sense that both show an overall increase as the parameter is increased (Fig. \ref{fig:lyap_expo_duffing}). This feature is akin to that displayed in Fig. \ref{fig:lyap_cc_lambda}(a) for the tripartite $\Lambda$ system, although in that case the sensitivity to the number of data points is significantly lower. It is therefore worth investigating the differences in the behavior of network indicators in these two models.
\section{\label{sec:duffing_model} Coupled Duffing oscillators} \noindent As mentioned in the Introduction, a
classical system comprising two coupled oscillators driven by a harmonic force mimics~\cite{alzar} the phenomenon of EIT manifested in the tripartite quantum system comprising a $\Lambda$-atom interacting with two radiation fields. The dynamical equations for the displacements $x_{1}$ and $x_{2}$ of the two oscillators are given by \begin{align}
\ddot{x}_{1} + \delta_{1} \dot{x}_{1} + \omega_{\text{cl}}^{2} \,x_{1} - \Omega_{\text{cl}}^{2} \,x_{2} &= f \sin\,(\Omega_{d} t), \\
\ddot{x}_{2} + \delta_{2} \dot{x}_{2} + \omega_{\text{cl}}^{2} x_{2} - \Omega_{\text{cl}}^{2} x_{1} &= 0. \end{align} Here, $\delta_{1}$ and $\delta_{2}$ are damping parameters, $\omega_{\text{cl}}$ is the stiffness parameter, $\Omega_{\text{cl}}$ is the coupling parameter, and $f$ and $\Omega_{d}$ are, respectively, the amplitude and angular frequency of the periodic driving force. \begin{figure}
\caption{Classical model: MLE versus $\zeta$ for $25000$ (red curve) and $10^5$ (black curve) data points.}
\label{fig:lyap_expo_duffing}
\end{figure} \begin{figure*}
\caption{Classical model: (a) APL, (b) LD, (c) assortativity, and (d) CC (black curve) and transitivity (red curve) versus $\zeta$.}
\label{fig:network_duffing}
\end{figure*} In the quantum mechanical counterpart of this system, $\delta_1$ is the strength of spontaneous emission from the excited state of the $\Lambda$-atom, $\delta_2$ is the energy dissipation rate of the pumping transition, and $f$ is the amplitude of the driving field. In practice, of course, nonlinear effects arise. We have therefore considered the modified coupled Duffing equations \begin{align} \ddot{x}_{1} + \delta_{1} \dot{x}_{1} + \omega_{\text{cl}}^{2} \,x_{1} + \zeta \,x_{1}^{3} - \Omega_{\text{cl}}^{2}\, x_{2} &= f \sin\,(\Omega_{d} t), \\ \ddot{x}_{2} + \delta_{2} \dot{x}_{2} + \omega_{\text{cl}}^{2} \,x_{2} + \zeta \,x_{2}^{3} - \Omega_{\text{cl}}^{2} \,x_{1} &= 0, \end{align} where $\zeta$, the strength of the nonlinearity, is analogous to the IDC parameter $\kappa$ in the quantum model. As is well known, the forced Duffing oscillator exhibits a very diverse range of complex dynamical behavior, depending on the values of the parameters.
For numerical computations we have set the parameters at representative values $\omega_{0} = 2, \,\omega_{\text{cl}} = \sqrt{10},\, \Omega_{\text{cl}} = \sqrt{6},\, \delta_1 = 10^{-2}$ and $\delta_2 = 10^{-7}$. The dynamical variable considered is the velocity
$\dot{x}_{2}$. A long time series of
$\dot {x}_{2}$ was obtained for various values of $\zeta$ with initial conditions $x_1(0) = 1, \,\dot{x}_{1}(0) = 0, \,x_{2}(0) = 0,\,
\dot{x}_{2}(0) = 0$.
The manner in which the MLE varies with $\zeta$ for $10^{5}$ and $25000$ data points respectively (the black curve and the red curve in Fig. \ref{fig:lyap_expo_duffing}) reveals that the gross features in the two plots are in reasonable agreement with each other, in contrast to the case of the optomechanical model.
The changes in the behavior of the APL, LD, assortativity, CC and transitivity as $\zeta$ is varied are shown in Figs. \ref{fig:network_duffing}(a)-(d). It is interesting to note that none of these indicators seems to carry any signatures of the behavior of the MLE with $\zeta$. This is in marked contrast to the situation in both the quantum models considered above.
\section{\label{sec:conclusion} Concluding remarks}
In this work, we have carried out detailed time-series analysis and network analysis on a fully quantum optomechanical model and on a classical model of two interacting Duffing oscillators. Nonlinearities are inherent in both cases: in the former, the intensity-dependent coupling between subsystems; in the latter, a cubic nonlinearity in each oscillator. An archetypal indicator of complex dynamics, the maximal Lyapunov exponent (MLE), has been obtained from the analysis of a long time series in each case. Network analysis of the same systems has been carried out, and network indicators have been estimated from a considerably abbreviated time series. The variations of these quantities with changes in the nonlinearity parameters have been examined extensively and compared with each other. The conclusions drawn on the similarities between these two sets of indicators are also compared with those obtained from earlier investigations on a reference system (another tripartite quantum model comprising a $\Lambda$-atom interacting with two radiation fields).
A noteworthy feature that emerges is the following. Network indicators such as the clustering coefficient (CC) and the transitivity capture the behavior of the maximal Lyapunov exponent (MLE) very closely in the quantum system, provided the latter is not very sensitive to the number of data points used, as in the case of the reference system. In the optomechanical model, on the other hand, the
MLE is found to be very sensitive to the size of the data set. In this instance,
the CC and the transitivity merely capture the overall trend in the MLE reasonably well, without closely following its variation with the nonlinearity parameter. In the classical model considered, while the MLE is not very sensitive to the number of data points used in the analysis, the nonlinearity does not appear in the interaction between subsystems, but only in the individual subsystems. This
is the likely reason why none of the network indicators considered displays signatures of the manner in which the MLE changes with the nonlinearity. An important extension of this work would be the identification of
other `good' network indicators which reflect in
{\em all} cases the variations in the MLE when the nonlinearity is tuned, and their sensitivity to the precise form of the nonlinearity.
Network analysis would then provide a reliable technique, requiring considerably shorter time series than conventional time-series analysis, for determining the salient features of complex dynamical behavior in the expectation values of observables in multipartite quantum mechanical systems.
\acknowledgements We acknowledge Soumyabrata Paul for help with some numerical computations in the classical model.
\end{document}
\begin{document}
\title{A calculus for ideal triangulations of three-manifolds with
embedded arcs}
{\small\noindent{\sc Abstract}: Refining the notion of an ideal triangulation of a compact three-manifold, we provide in this paper a combinatorial presentation of the set of pairs $(M,\alpha)$, where $M$ is a three-manifold and $\alpha$ is a collection of properly embedded arcs. We also show that certain well-understood combinatorial moves are sufficient to relate to each other any two refined triangulations representing the same $(M,\alpha)$. Our proof does not assume the Matveev-Piergallini calculus for ideal triangulations, and actually easily implies this calculus.}
{\small\noindent{\sc Keywords}: $3$-manifold, triangulation, presentation, calculus.}
{\small\noindent{\sc MSC (2000)}: 57Q15.}
\section*{Introduction}
A {\em combinatorial presentation} of a class of topological objects (viewed up to the appropriate equivalence relation) is a set of finite combinatorial objects, such that each combinatorial object defines (say ``presents'') a unique topological object, and each topological object is presented by at least one combinatorial object. A {\em calculus} for a combinatorial presentation is a finite set of moves on the combinatorial objects, such that two combinatorial objects present the same topological object if and only if they are related to each other by a finite sequence of moves in the given set.
Combinatorial presentations are fundamental tools for studying 3-manifolds and links, and for constructing invariants. They translate a topological problem into a combinatorial one, which may be simpler. For instance, an invariant of the class of topological objects can be defined on the combinatorial objects, by checking that it is preserved by the moves.
For 3-manifolds, there are several different types of presentations, {\em e.g.}~(ideal) triangulations, Heegaard diagrams, surgery (on links), and spines. In the present work we concentrate on the pairs $(M,\alpha)$, where $M$ is a compact connected $3$-manifold with non-empty boundary and $\alpha = \{\alpha^{(1)}, \ldots, \alpha^{(n)}\}$ is a (possibly empty) collection of disjoint arcs properly embedded in $M$ (viewed up to simultaneous isotopy). We provide a presentation of such pairs and we describe the corresponding calculus. The objects of the presentation are the {\em marked ideal triangulations} of the pair $(M,\alpha)$, that is the ideal triangulations of $M$ that contain as edges all the arcs in $\alpha$, and the moves of the calculus are the moves on ideal triangulations ({\em i.e.}~Matveev-Piergallini moves) which do not kill edges belonging to $\alpha$ (such moves will be called {\em admissible}).
The calculus for marked ideal triangulations is not new: in fact it has been used by Baseilhac and Benedetti (see~\cite{BB1,BB2,BB3}) in the proof that the so-called {\it quantum hyperbolic invariants} (QHI) for links in 3-manifolds equipped with flat $PSL(2,\mathbb{C})$-bundles are well defined. They derived this calculus from the Matveev-Piergallini one~\cite{Matveev:calculus,Piergallini}, as refined by Turaev and Viro in~\cite{Turaev-Viro}. They also used a generalization, to the setting of marked ideal triangulations, of a result of Makovetskii~\cite{makov}. We will give a new proof of the calculus (for marked ideal triangulations) which is instead self-contained, see Section~\ref{calculus:sec}. Actually, our proof specializes to a new proof of the Matveev-Piergallini calculus. Although our proof is quite long, it is conceptually very simple: in fact it uses only easy results on triangulations and easy topological arguments. For the sake of completeness, we will also sketch the derivation of the calculus for marked triangulations from the Matveev-Piergallini one, see Subsection 2.5.
The generalized Makovetskii result states that any two marked ideal triangulations of a pair $(M,\alpha)$ are dominated, via {\em positive} admissible moves, by a common marked ideal triangulation. An admissible move is positive if it increases the number of tetrahedra. In Section~\ref{dominating:sec}, we provide the details of the proof of this refinement, and, in Subsection~\ref{hamiltonian:subsec}, we describe the relationship between marked ideal triangulations and links in 3-manifolds.
The initial motivation of the present paper was the remark, due to Frigerio and Petronio~\cite{Frig-Petr}, that marked ideal triangulations naturally arise in the study of {\em complete
finite-volume orientable hyperbolic $3$-manifolds with geodesic
boundary.} In Subsection~\ref{part_trunc_tria:subsec} we will describe how this relationship arises.
\section{Definitions}
From now on, unless explicitly stated, $M$ will be a compact connected $3$-manifold with non-empty boundary, and $\alpha = \{\alpha^{(1)}, \ldots, \alpha^{(n)}\}$ will be a (possibly empty) collection of disjoint arcs properly embedded in $M$, viewed up to simultaneous isotopy.
\subsection{Standard spines and moves}\label{spines_moves:subsection}
In this subsection we recall the definition of spine and we describe some moves.
\paragraph{Standard spines} A {\em quasi-standard} polyhedron $P$ is a finite, connected, and purely $2$-dimensional polyhedron with singularities of stable nature ({\em i.e.}~triple lines and points where 6 non-singular components meet). Such a polyhedron is called {\em standard} if its singular set induces a cellularization of it (depending on dimension, we call the cells {\em vertices}, {\em edges}, and {\em regions}). A quasi-standard sub-polyhedron $P$ of $M$ contained in $\inter{M}$ is called a {\em spine} of $M$ if the manifold $M$ collapses to it (or, equivalently, $M \setminus P \cong \partial M \times [0,1)$). Each spine of $M$ is always viewed up to isotopy. For the sake of completeness, let us recall that, if $M$ is closed, the boundary is created by puncturing $M$ ({\em i.e.}~by considering $M$ minus a ball).
It is by now well-known, after the work of Casler~\cite{Casler}, that a standard spine determines $M$ uniquely up to homeomorphism and that every $M$ has standard spines. In the sequel we will omit the word ``standard'', writing only ``spine''; nevertheless, when standardness is not obvious, we will use the word ``standard''. Moreover, in figures showing a piece of a spine, the singular set is drawn thick.
\paragraph{MP-move} Any two (standard) spines of $M$ can be transformed into each other by certain well-understood moves. Let us start from the move shown in Fig.~\ref{MP_move:fig}-left, which is called MP-{\em move}. \begin{figure}
\caption{The MP-move on a spine (left) and on the dual ideal
triangulation (right).}
\label{MP_move:fig}
\end{figure} Such a move will be called {\em positive} if it increases (by one) the number of vertices, and {\em negative} otherwise. Note that, if we apply an MP-move to a spine of $M$, the result will be another spine of $M$. It is already known (but it will also follow from our Corollary~\ref{gener_MP_senza_V:cor}), after the work of Matveev~\cite{Matveev:calculus} and Piergallini~\cite{Piergallini}, that any two standard spines of the same $M$ with at least two vertices can be transformed into each other by MP-moves (see Theorem~\ref{MP_calculus:teo}).
\paragraph{V-move} If one of the two spines of $M$ (we want to transform into each other) has just one vertex, another move is required. The move shown in Fig.~\ref{V_move:fig}-left is called V-{\em move}. \begin{figure}
\caption{The V-move on a spine (left) and on the dual ideal
triangulation (right).}
\label{V_move:fig}
\end{figure} Note that if we apply such a move to a spine of $M$, the result will be another spine of $M$. As above, we have {\em positive} and {\em negative} V-moves. Note that $3$ different positive V-moves can be applied at each vertex.
If a positive V-move is applied to a spine with at least two vertices, the V-move is a composition of MP-moves. In Fig.~\ref{V_comp_MP:fig} we show the three positive MP-moves and the one negative MP-move giving the V-move. \begin{figure}
\caption{If there is another vertex, each positive V-move is a
composition of MP-moves.}
\label{V_comp_MP:fig}
\end{figure}
\paragraph{L-move} A generalization of the V-move is the L-{\em move}, see Fig.~\ref{L_move:fig}-left. \begin{figure}
\caption{The L-move on a spine (left) and on the dual ideal
triangulation (right).}
\label{L_move:fig}
\end{figure} As above, we have {\em positive} and {\em negative} L-moves. As opposed to the V-move, this move is non-local, so it must be described with some care. A positive L-move, which increases by two the number of vertices, is determined by an arc $\gamma$ properly embedded in a region $R$ of $P$. The move acts on $P$ as in Fig.~\ref{L_move:fig}-left, but, to define its effect non-ambiguously, we must specify which pairs of regions, out of the four regions incident to $R$ at the endpoints of $\gamma$, will become adjacent to each other after the move. This is achieved by noting that $R$ is a disc, so its regular neighborhood in $M$ is a product, and we can choose for $R$ a transverse orientation. Using it, at each endpoint of $\gamma$ we can distinguish the two regions incident to $R$ as an upper and a lower one, and we can stipulate that the two upper regions will become incident after the move (and similarly for the lower ones). Obviously, a positive L-move leads to a (standard) spine $P'$ of $M$.
For the negative case the situation is more complicated, since a negative L-move can lead to a non-standard spine: if $R_1$ and $R_2$ are contained in the same region, then after the negative L-move the ``region'' $R$ would not be a disc. To avoid this loss of standardness, we will call negative L-moves only those preserving standardness; so a negative L-move can be applied only if the regions $R_1$ and $R_2$ are different. With this convention, if we apply an L-move to a spine of $M$, the result will be another spine of $M$.
Each positive L-move is a composition of V-~and MP-moves. In Fig.~\ref{L_comp_V_MP_1:fig} we show the one positive V-move and the pairs of (one positive and one negative) MP-moves giving the L-move. \begin{figure}
\caption{Each positive L-move is a composition of V-~and MP-moves
(case where $R_1$ has more than one vertex).}
\label{L_comp_V_MP_1:fig}
\end{figure} Obviously, to apply such moves, $R_1$ must have at least two vertices. If $R_1$ has only one vertex, then $R_2$ has at least two vertices (because $P'$ is standard); so we can take the symmetric picture. For future reference, we note that, if $R_1$ has only one vertex, we can obtain the L-move also as a composition of only one V-~and one pair of MP-moves, as shown in Fig.~\ref{L_comp_V_MP_2:fig}. \begin{figure}
\caption{Each positive L-move is a composition of V-~and MP-moves
(case where $R_1$ has only one vertex).}
\label{L_comp_V_MP_2:fig}
\end{figure}
\paragraph{B-move} Now we describe the B-{\em move} (shown in Fig.~\ref{B_move:fig}-left). \begin{figure}
\caption{The B-move on a spine (left) and on the dual ideal
triangulation (right).}
\label{B_move:fig}
\end{figure} As above, we have {\em positive} and {\em negative} B-moves. This move is quite different from the previous ones, because if we apply a positive B-move to a spine $P$ of $M$, the result will be a spine $P_B$ of $M \setminus B^3$ (where $B^3$ is a 3-ball with closure embedded in $M$). So it is obvious that a B-move cannot be a composition of V-~and MP-moves. By definition of spine, we have that $M \setminus P_B$ is the disjoint union of $\partial M \times [0,1)$ and $B^3 \cup (\partial B^3 \times [0,1))$. The ball ${\cal B} = B^3 \cup (\partial B^3 \times [0,1))$ will be called a {\em proper ball}.
\paragraph{C-move} Finally, we describe the C-{\em move}, see Fig.~\ref{C_move:fig}-left. \begin{figure}
\caption{The C-move on a spine (left) and the corresponding arch
(right).}
\label{C_move:fig}
\end{figure} As above, we have {\em positive} and {\em negative} C-moves. This move is very similar to the B-move, but, if we apply a positive C-move to a spine of $M$, we obtain another spine of the same $M$. In fact, each positive C-move is a composition of V-~and MP-moves: the V-move and the (four) MP-moves are shown in Fig.~\ref{C_comp_V_MP:fig}. \begin{figure}
\caption{Each positive C-move is a composition of V-~and MP-moves.}
\label{C_comp_V_MP:fig}
\end{figure} Note also that $12$ different positive C-moves can be applied at each vertex.
We will call {\em arch} the configuration shown in Fig.~\ref{C_move:fig}-right, created by a C-move. Let us compare the spine $P_C$, obtained from a spine $P$ via a C-move, with the spine $P_B$, obtained from $P$ via a B-move (applied at the same vertex). They differ only by the presence of the arch, which joins two different regions ($R_1$ and $R_2$) of the spine $P_B$. Note also that, after the C-move, the proper ball ${\cal B}$, created by the B-move, is connected to $\partial M \times [0,1)$ by the cavity of the arch.
\paragraph{} In the rest of the paper we will always regard $M$ as being fixed and we will only consider spines and moves embedded in $M$, without explicit mention.
\subsection{Ideal triangulations}\label{id_tria:subsection}
In this subsection we recall the definitions of loose triangulation and ideal triangulation, and finally define the marked ideal triangulations (and spines) of a pair $(M,\alpha)$ and the moves on them.
\paragraph{Loose and ideal triangulations}
A {\em loose triangulation} of a polyhedron $|{\cal P}|$ is a triangulation ${\cal P}$ of $|{\cal P}|$ in a weak sense, namely self-adjacencies and multiple adjacencies are allowed. For any manifold $M$ (as above), let us denote by $\widehat{M}$ the space obtained from $M$ by collapsing to a point each component of $\partial M$. An {\em ideal triangulation} of a manifold $M$ (as above) is a partition ${\cal T}$ of $\inter{M}$ into open cells of dimensions 1, 2, and 3, induced by a loose triangulation $\widehat{{\cal T}}$ of the space $\widehat{M}$ such that the vertices of $\widehat{{\cal T}}$ are precisely the points of $\widehat{M}$ corresponding to the components of $\partial M$. The quotient of $\partial M$ will be denoted by $\widehat{\partial M}$. Note that $\widehat{M} \setminus \widehat{\partial M}$ can be identified with $\inter{M}$. As for spines, each ideal triangulation of $M$ is always viewed up to isotopy.
\paragraph{Duality} We now show the well-known fact that ideal triangulations exist for each $M$. It turns out \cite{Matveev-Fomenko,Petronio:tesi,Matveev:new:book} that there exists a natural bijection between standard spines and ideal triangulations of a 3-manifold. Given an ideal triangulation ${\cal T}$, the corresponding standard spine $P$ is just the 2-skeleton of the dual cellularization, as illustrated in Fig.~\ref{duality:fig}. \begin{figure}
\caption{Portion of spine dual to a tetrahedron of an ideal
triangulation.}
\label{duality:fig}
\end{figure} The inverse passage is also explicit, but a little more involved, so we omit its description. The ideal triangulation ${\cal T}$ and the spine $P$ are said to be {\em dual}. As said above, every $M$ has standard spines, so dually it has ideal triangulations.
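The duality can be summarized by the following standard dictionary, which we record for the reader's convenience:
\begin{center}
\begin{tabular}{ccc}
ideal triangulation ${\cal T}$ & & dual standard spine $P$ \\
tetrahedron & $\longleftrightarrow$ & vertex \\
triangle & $\longleftrightarrow$ & edge of the singular set \\
edge & $\longleftrightarrow$ & region
\end{tabular}
\end{center}
In particular, ${\cal T}$ has $n$ tetrahedra if and only if $P$ has $n$ vertices.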
We show in Figs.~\ref{MP_move:fig}-right,~\ref{V_move:fig}-right,~\ref{L_move:fig}-right, and~\ref{B_move:fig}-right the MP-,~V-,~L-, and~B-moves, respectively, on a spine in terms of the dual ideal triangulations (we have omitted the dual version of the C-move because of the complexity of the picture). In the sequel we will freely combine the spine and the ideal triangulation viewpoints.
\paragraph{Marked ideal triangulations (and spines)} Recall that $\alpha$ is a collection of disjoint arcs properly embedded in a manifold $M$. A {\em marked ideal triangulation} of the pair $(M,\alpha)$ is a pair $({\cal T} ,\beta)$, where ${\cal T}$ is an ideal triangulation of $M$ and $\beta = \{\beta^{(1)},\ldots ,\beta^{(n)}\}$ is a collection of edges of ${\cal T}$ (simultaneously) isotopic to $\alpha = \{\alpha^{(1)},\ldots ,\alpha^{(n)}\}$. The quotient of $\beta = \{\beta^{(1)},\ldots ,\beta^{(n)}\}$ in $\widehat{{\cal T}}$ will be denoted by $\widehat{\beta} = \{\widehat{\beta}^{(1)},\ldots ,\widehat{\beta}^{(n)}\}$, and the pair $(\widehat{{\cal T}},\widehat{\beta})$ will be called the {\em marked loose triangulation corresponding to $({\cal T} ,\beta)$}. With a little abuse of terminology, in the sequel we will say that the edges in $\beta$ and $\widehat{\beta}$ {\em belong to $\alpha$}.
Using duality, we can give a natural definition of {\em marked spine} of a pair $(M,\alpha)$ as a pair $(P,\tilde\beta)$, where $P$ is the spine dual to a marked ideal triangulation $({\cal T} ,\beta)$ of the pair $(M,\alpha)$ and $\tilde\beta = \{\tilde\beta^{(1)},\ldots ,\tilde\beta^{(n)}\}$ is the collection of the regions of $P$ dual to the $\beta^{(i)}$'s. With a little abuse of notation, we will drop the tilde, writing only $\beta^{(i)}$ instead of $\tilde\beta^{(i)}$, and we will say that the regions $\beta^{(i)}$ also {\em belong to $\alpha$}.
\paragraph{Existence of marked ideal triangulations} By duality, to prove that every pair $(M,\alpha)$ has marked ideal triangulations, it is enough to prove that it has marked spines. So we prove that $M$ has a spine such that $\alpha$ is isotopic to the collection of the edges dual to $n$ different regions. Let $N(\alpha) = \sqcup_{i=1}^n N(\alpha^{(i)})$ be a regular neighborhood of $\alpha$, and let $Q$ be a spine of $M \setminus N(\alpha)$. Note that we have a retraction $\pi$ of $M \setminus N(\alpha)$ onto $Q$. For $i=1,\ldots ,n$, let $D^{(i)}$ be a 2-disc properly embedded in $N(\alpha^{(i)})$, contained in $\inter{M}$, and intersecting $\alpha^{(i)}$ transversely in one point. Now, we can suppose that, by projecting the $\partial D^{(i)}$'s to $Q$ along $\pi$, we obtain ``half-open'' annuli $\partial D^{(i)} \times [0,1)$. Up to isotopy, we can also suppose that each $\pi(\partial D^{(i)})$ intersects the singularity of $Q$, and that $\pi(\cup_{i=1}^n \partial D^{(i)})$ is transversal to the singularity and to itself. Let us define $P$ as the union of the polyhedron $Q$, the discs $D^{(i)}$, and the annuli $\partial D^{(i)} \times [0,1)$. Obviously, $P$ is the desired spine: in fact, $P$ is a (standard) spine of $M$ and each $\alpha^{(i)}$ coincides with the edge dual to the region $D^{(i)} \cup (\partial D^{(i)} \times [0,1))$ of $P$.
\paragraph{Admissible moves} We will now discuss an extension of the MP-,~V-,~L-,~and C-moves to the context of marked ideal triangulations. Given a marked ideal triangulation $({\cal T},\beta)$ of $(M,\alpha)$, the idea is to call a move from ${\cal T}$ to ${\cal T}'$ {\em admissible} if there is a $\beta'$ such that $({\cal T}',\beta')$ is a marked ideal triangulation of $(M,\alpha)$, and $\beta'$ coincides with $\beta$ except ``near'' the portion of ${\cal T}$ affected by the move. As it turns out, admissibility depends on $\beta$. Moreover, $\beta'$ is sometimes not unique.
By duality, we describe the moves on spines, whose pictures are simpler, but we invite the reader to work out the dual ideal triangulation pictures. Let $(P,\beta)$ be the marked spine dual to $({\cal T},\beta)$. We describe the moves one by one.
\subparagraph{MPa-move} A positive MP-move from $P$ to $P'$ is admissible whatever $\beta$, and $\beta'$ consists of the same regions as $\beta$ ({\em i.e.}~the newborn triangular region does not belong to $\beta'$); the move from $(P,\beta)$ to $(P',\beta')$ is called {\em positive \mbox{${\rm MPa}$}-move}. A negative MP-move from $P$ to $P'$ is admissible if it is the inverse of a positive \mbox{${\rm MPa}$}-move: namely, the triangular region disappearing during the move must not belong to $\beta$, and $\beta'$ consists of the same regions as $\beta$; the move from $(P,\beta)$ to $(P',\beta')$ is called {\em negative \mbox{${\rm MPa}$}-move}. See Fig.~\ref{MP_move:fig}.
\subparagraph{Va-move} A positive V-move from $P$ to $P'$ is admissible whatever $\beta$, and $\beta'$ consists of the same regions as $\beta$ ({\em i.e.}~the two newborn little regions do not belong to $\beta'$); the move from $(P,\beta)$ to $(P',\beta')$ is called {\em positive \mbox{${\rm Va}$}-move}. A negative V-move from $P$ to $P'$ is admissible if it is the inverse of a positive \mbox{${\rm Va}$}-move: namely, the two little regions disappearing during the move must not belong to $\beta$, and $\beta'$ consists of the same regions as $\beta$; the move from $(P,\beta)$ to $(P',\beta')$ is called {\em negative \mbox{${\rm Va}$}-move}. See Fig.~\ref{V_move:fig}.
Now, recall that, if there are at least two vertices, a positive V-move is a composition of MP-moves, see Fig.~\ref{V_comp_MP:fig}; the ``admissible'' version of this fact is not so obvious but it is true. Namely, if there are at least two vertices, a positive \mbox{${\rm Va}$}-move is a composition of \mbox{${\rm MPa}$}-moves. To prove this, it is enough to note that a positive \mbox{${\rm Va}$}-move is a composition of MP-moves (see again Fig.~\ref{V_comp_MP:fig}), that the negative MP-move of the sequence eliminates a region created by a previous positive \mbox{${\rm MPa}$}-move (so the region does not belong to $\beta$), and that the position of the $(\beta')^{(i)}$'s after the \mbox{${\rm MPa}$}-moves is the same as after the \mbox{${\rm Va}$}-move.
\subparagraph{La-move} For the L-moves, the situation is more complicated. A positive L-move from $P$ to $P'$ is admissible whatever $\beta$, but $\beta'$ is not uniquely determined. We follow the notation of Fig.~\ref{L_move:fig}. We have two cases depending on whether $R$ belongs to $\beta$ or not. If $R$ does not belong to $\beta$, then $\beta'$ consists of the same regions as $\beta$ ({\em i.e.}~$R_1$, $R_2$, and the newborn little region $D$ do not belong to $\beta'$). In such a case, the move from $(P,\beta)$ to $(P',\beta')$ is called {\em positive \mbox{${\rm La}$}-move}. If $R$ belongs to $\beta$, the situation is a little ambiguous: $R$ is divided into two regions, and both of them ``are isotopic to $R$'' ({\em i.e.}~the dual edges of $R_1$ and $R_2$ are both isotopic to the dual edge of $R$). If we define $\beta'_1$ as $(\beta \setminus \{R\}) \cup \{R_1\}$ and $\beta'_2$ as $(\beta \setminus \{R\}) \cup \{R_2\}$, we have two admissible L-moves underlying the original L-move: one from $(P,\beta)$ to $(P',\beta'_1)$ and one from $(P,\beta)$ to $(P',\beta'_2)$. Both these moves are also called {\em positive \mbox{${\rm La}$}-moves}. Note that the choice of the region, between $R_1$ and $R_2$, is included in the move.
A negative L-move from $P$ to $P'$ is admissible if it is the inverse of a positive \mbox{${\rm La}$}-move. Necessarily, the little region $D$ disappearing during the move must not belong to $\beta$, and at most one of $R_1$ and $R_2$ can belong to $\beta$. Now, we have two cases: if neither $R_1$ nor $R_2$ belongs to $\beta$, then $\beta'$ consists of the same regions as $\beta$; otherwise, if one region $R_i$ (between $R_1$ and $R_2$) belongs to $\beta$, then $\beta'$ is equal to $(\beta \setminus \{R_i\}) \cup \{R\}$. In both cases, the move from $(P,\beta)$ to $(P',\beta')$ is called {\em negative \mbox{${\rm La}$}-move}.
Now, recall that each L-move is a composition of V-~and MP-moves (see Figs.~\ref{L_comp_V_MP_1:fig} and~\ref{L_comp_V_MP_2:fig}). As above, we show that each positive \mbox{${\rm La}$}-move is a composition of \mbox{${\rm Va}$}-~and \mbox{${\rm MPa}$}-moves. If $R$ does not belong to $\beta$, the situation is analogous to that of the \mbox{${\rm Va}$}-move, so we omit its treatment. On the contrary, we suppose that $R$ belongs to $\beta$. Now, one region, between $R_1$ and $R_2$, belongs to $\beta'$: we suppose that $R_2$ belongs to $\beta'$ (the case for $R_1$ is symmetric). The V-~and MP-moves shown in Figs.~\ref{L_comp_V_MP_1:fig} and~\ref{L_comp_V_MP_2:fig} (we have two cases depending on whether $R_1$ has one vertex or more) are all admissible, and the (positive) \mbox{${\rm Va}$}-move leaves just $R_2$ in $\beta'$.
\subparagraph{Ca-move} A positive C-move from $P$ to $P'$ is admissible whatever $\beta$, and $\beta'$ consists of the same regions as $\beta$ ({\em i.e.}~the four newborn regions do not belong to $\beta'$); the move from $(P,\beta)$ to $(P',\beta')$ is called {\em positive \mbox{${\rm Ca}$}-move}. See Fig.~\ref{C_move:fig}. Note that $R_1$ is joined to $R_2$, so, if $R$ belongs to $\beta$, then the region containing $R_1$ and $R_2$ belongs to $\beta'$.
A negative C-move from $P$ to $P'$ is admissible if it is the inverse of a positive \mbox{${\rm Ca}$}-move: namely, the four regions (including the disc of the arch) disappearing during the move must not belong to $\beta$, and $\beta'$ consists of the same regions as $\beta$. The move from $(P,\beta)$ to $(P',\beta')$ is called {\em negative \mbox{${\rm Ca}$}-move}.
As above, it is easy to see that each \mbox{${\rm Ca}$}-move is a composition of \mbox{${\rm Va}$}-~and \mbox{${\rm MPa}$}-moves.
\subparagraph{Ba-move} For the B-moves, the situation is quite different because such moves change the homeomorphism class of the manifold. A B-move from $P$ to $P'$ will be considered {\em admissible} either if it is positive, or if it is negative and the four disappearing regions do not belong to $\beta$. In such a case, $\beta'$ consists of the same regions as $\beta$. The move from $(P,\beta)$ to $(P',\beta')$ is called {\em \mbox{${\rm Ba}$}-move} ({\em positive} or {\em negative}, respectively). See Fig.~\ref{B_move:fig}.
\paragraph{} From now on, for the sake of brevity, we will omit the word ``marked'' (also for spines and loose triangulations) when no confusion can arise, even though a marked ideal triangulation $({\cal T},\beta)$ is a pair while an ideal triangulation ${\cal T}$ is not.
\section{The calculus}\label{calculus:sec}
The main result of this paper is the following.
\begin{teo}\label{gener_MP:teo} Two marked ideal triangulations of a pair $(M,\alpha)$ can be obtained from each other via a sequence of \mbox{${\rm Va}$}-~and \mbox{${\rm MPa}$}-moves. \end{teo}
Recalling that, if there are at least two tetrahedra, each \mbox{${\rm Va}$}-move is a composition of \mbox{${\rm MPa}$}-moves, we obtain the following corollary of Theorem~\ref{gener_MP:teo}.
\begin{cor}\label{gener_MP_senza_V:cor} Two marked ideal triangulations of $(M,\alpha)$ with at least two tetrahedra can be obtained from each other via a sequence of \mbox{${\rm MPa}$}-moves only. \end{cor}
As a particular case we obtain the Matveev-Piergallini theorem.
\begin{teo}[Matveev-Piergallini]\label{MP_calculus:teo} Two spines of $M$ can be obtained from each other via a sequence of {\rm V}-~and {\rm MP}-moves. If moreover both spines have at least two vertices, then they can be obtained from each other via a sequence of {\rm MP}-moves only. \end{teo}
The idea of the proof of Theorem~\ref{gener_MP:teo} consists of the following steps: \begin{itemize}
\item a ``desingularization'' of the two marked ideal triangulations,
say $({\cal T}_1,\beta_1)$ and $({\cal T}_2,\beta_2)$, via \mbox{${\rm Ba}$}-,~\mbox{${\rm Va}$}-,
and~\mbox{${\rm MPa}$}-moves (leading to $({\cal T}'_1,\beta'_1)$ and
$({\cal T}'_2,\beta'_2)$, respectively);
\item an application of the relative version of the Alexander theorem
to relate $({\cal T}'_1,\beta'_1)$ and $({\cal T}'_2,\beta'_2)$ via
\mbox{${\rm Ba}$}-~and \mbox{${\rm MPa}$}-moves;
\item an elimination of each \mbox{${\rm Ba}$}-move by substituting it with a
\mbox{${\rm Ca}$}-move.
\end{itemize} We first recall the relative version of the Alexander theorem, and then we describe each of the three steps.
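Schematically, labelling the arrows with the moves involved, the strategy reads
\[
({\cal T}_1,\beta_1)
\ \stackrel{{\rm Ba},\,{\rm Va},\,{\rm MPa}}{\longrightarrow}\
({\cal T}'_1,\beta'_1)
\ \stackrel{{\rm Ba},\,{\rm MPa}}{\longleftrightarrow}\
({\cal T}'_2,\beta'_2)
\ \stackrel{{\rm Ba},\,{\rm Va},\,{\rm MPa}}{\longleftarrow}\
({\cal T}_2,\beta_2),
\]
where, in the last step of the proof, each \mbox{${\rm Ba}$}-move is traded for a \mbox{${\rm Ca}$}-move, which is in turn a composition of \mbox{${\rm Va}$}-~and \mbox{${\rm MPa}$}-moves.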
\subsection{Alexander's theorem}
As said above, our proof relies on the relative version of Alexander's theorem, so we recall it (the proof is quite easy and can be found in~\cite{Turaev-Viro}). Let us consider a simplex $\sigma$ of a polyhedron $|{\cal P}|$ with a (non-loose) triangulation ${\cal P}$. Let us define a move on the triangulation ${\cal P}$: the substitution of the closed star, $\clst{\sigma}$, of $\sigma$ with the cone on $\partial\clst{\sigma}$ with respect to a point in the interior of $\sigma$ will be called an A-{\em move}; the inverse of an A-move will also be called an A-move. Note that an A-move does not change the homeomorphism class of
$|{\cal P}|$, but only the triangulation ${\cal P}$. The following theorem states that A-moves are enough to obtain all the triangulations of $|{\cal P}|$ from any given one, leaving fixed a sub-polyhedron $|{\cal Q}|$. \begin{teo}\label{Alexander:teo}
Let $|{\cal P}|$ be a dimensionally homogeneous polyhedron and let
$|{\cal Q}|$ be a sub-polyhedron of $|{\cal P}|$. Then two triangulations of $|{\cal P}|$, whose restrictions to $|{\cal Q}|$ coincide, can be obtained from each other via a sequence of {\rm
A}-moves which do not change the triangulation of $|{\cal Q}|$. \end{teo}
\paragraph{Reduction to B-~and MP-moves} Now we prove a modification of a result due to Pachner (Theorem~4.14 of~\cite{Pachner}), which he stated only for manifolds. Let us call {\em singular manifold with boundary} a finite polyhedron $|{\cal P}|$ such that the link of every point (of $|{\cal P}|$) is a surface with (possibly empty) boundary. Such a space is the generalization with boundary of the so-called {\em singular manifolds}. In fact, we have an obvious definition of the {\em boundary} $\partial |{\cal P}|$ of $|{\cal P}|$ as the 2-dimensional sub-polyhedron of $|{\cal P}|$ made of the closure of the triangles lying in only one tetrahedron. Obviously, in $|{\cal P}|$ there are only finitely many points whose link is different from the 2-sphere or the 2-disc. We have the following corollary of Theorem~\ref{Alexander:teo}. \begin{prop}\label{gener_Pachner:prop}
Let $|{\cal P}|$ be a singular manifold with boundary and let $|{\cal Q}|$
be a sub-polyhedron of $|{\cal P}|$ containing $\partial |{\cal P}|$. Then two triangulations of $|{\cal P}|$, whose restrictions to $|{\cal Q}|$ coincide, can be obtained from each other via a sequence of {\rm
B}-~and {\rm MP}-moves, which do not change the common triangulation of $|{\cal Q}|$. \end{prop} \dimo{gener_Pachner:prop}
By Theorem~\ref{Alexander:teo}, the two triangulations can be obtained from each other via a sequence of A-moves which do not change the triangulation of $|{\cal Q}|$. To conclude the proof, we show that each A-move in this sequence is a composition of B-~and MP-moves which do not change the triangulation of $|{\cal Q}|$. There are four different types of A-move depending on the dimension of the simplex $\sigma$ the A-move is applied to. \begin{description}
\item{${\rm dim}(\sigma)=0$.} This case is obvious; in fact, $\sigma$ is a vertex, so $\clst{\sigma}$ is already the cone on $\partial\clst{\sigma}$ with respect to $\sigma$, and the A-move is the identity.
\item{${\rm dim}(\sigma)=1$.} Here $\sigma$ is an edge, so the A-move on $\sigma$ ``divides'' $\sigma$ adding a vertex as shown in Fig.~\ref{A_move_edge:fig}. \begin{figure}
\caption{The A-move on the edge $\sigma$ (with four tetrahedra in
$\st{\sigma}$).}
\label{A_move_edge:fig}
\end{figure} Consider the open star, $\st{\sigma}$, of $\sigma$ shown in Fig.~\ref{A_move_edge:fig}-left. Note that $\st{\sigma}$ contains at least three tetrahedra: we describe the case for four tetrahedra, other cases being similar. The A-move is the composition of the moves shown in Fig.~\ref{A_move_edge_moves:fig}: one positive B-move, two positive MP-moves, and one negative MP-move. \begin{figure}
\caption{The A-move on an edge is a composition of B-~and MP-moves.}
\label{A_move_edge_moves:fig}
\end{figure}
\item{${\rm dim}(\sigma)=2$.} In Fig.~\ref{A_move_tria_moves:fig} we show that the A-move on a triangle is a composition of one positive B-move and one positive MP-move. \begin{figure}
\caption{The A-move on a triangle is a composition of B-~and
MP-moves.}
\label{A_move_tria_moves:fig}
\end{figure}
\item{${\rm dim}(\sigma)=3$.} The A-move is already a B-move.
\end{description}
Finally, note that none of the B-~and MP-moves described above changes the common triangulation of $|{\cal Q}|$. \finedimo{gener_Pachner:prop}
\subsection{Desingularization}\label{desingularization:subsec}
Let $({\cal T},\beta)$ be an ideal triangulation of a pair $(M,\alpha)$. As said above, the idea is to eliminate the singularities of the loose triangulation $(\widehat{{\cal T}},\widehat{\beta})$, via \mbox{${\rm Ba}$}-,~\mbox{${\rm Va}$}-, and \mbox{${\rm MPa}$}-moves, to be able to apply Proposition~\ref{gener_Pachner:prop}. We will see that we cannot eliminate all the singularities, because we cannot eliminate the edges belonging to $\widehat{\beta}$. Since $\widehat{{\cal T}}$ is a loose triangulation (of $\widehat{M}$), there could be a singular edge of $\widehat{{\cal T}}$ with coinciding endpoints; such an edge will be called a {\em loop}. \begin{prop}\label{desingularization:prop} Let $({\cal T},\beta)$ be an ideal triangulation of a pair $(M,\alpha)$. Then there exists an ideal triangulation $({\cal T}',\beta')$ of $(M \setminus \cup B_k,\alpha)$, where the $B_k$'s are $3$-balls disjoint from each other and from $\alpha$, such that the following facts hold. \newcounter{listi} \begin{list}{{\rm \arabic{listi}.}}{\usecounter{listi} \setlength{\labelwidth}{1cm}}
\item $({\cal T}',\beta')$ is obtained from $({\cal T},\beta)$ via
\mbox{${\rm Ba}$}-,~\mbox{${\rm Va}$}-, and \mbox{${\rm MPa}$}-moves.
\item The loose triangulation $(\widehat{{\cal T}}',\widehat{\beta}')$
has only the following types of singularities:
\newcounter{listii}
\begin{list}{{\rm (\alph{listii})}}{\usecounter{listii} \setlength{\labelwidth}{1cm}}
\item an edge $(\widehat{\beta}')^{(i)}$ which is a loop,
\item a pair of edges $(\widehat{\beta}')^{(i)}$ sharing both the
endpoints,
\item a pair of edges (giving a multiple adjacency) in
$\clst{(\widehat{\beta}')^{(i)}}$ if $(\widehat{\beta}')^{(i)}$ is
a loop.
\end{list}
\item Each $(\widehat{\beta}')^{(i)}$ has a neighborhood
${\cal N}((\widehat{\beta}')^{(i)})$ such that:
\begin{list}{{\rm (\alph{listii})}}{\usecounter{listii} \setlength{\labelwidth}{1cm}}
\item if $(\widehat{\beta}')^{(i)}$ is not a loop,
${\cal N}((\widehat{\beta}')^{(i)})$ is made of exactly three
tetrahedra;
\item if $(\widehat{\beta}')^{(i)}$ is a loop,
${\cal N}((\widehat{\beta}')^{(i)})$ is the cone on a triangle
$\theta$, where $\theta$ is triangulated as shown in
Fig.~\ref{triangulated_triangle:fig}, the endpoints of the cone on
the barycentre $b$ of $\theta$ are identified together, and the
loop $(\widehat{\beta}')^{(i)}$ is just this edge with identified
endpoints.
\begin{figure}
\caption{The triangle $\theta$ and its barycentre $b$.}
\label{triangulated_triangle:fig}
\end{figure}
\end{list}
\item ${\cal N}((\widehat{\beta}')^{(i)}) \cap
{\cal N}((\widehat{\beta}')^{(j)}) = (\widehat{\beta}')^{(i)} \cap
(\widehat{\beta}')^{(j)}$ for each $i \neq j$, and
${\cal N}((\widehat{\beta}')^{(i)}) \cap \widehat{\partial M} =
(\widehat{\beta}')^{(i)} \cap \widehat{\partial M}$ for $i=1,\ldots
,n$.
\end{list} \end{prop}
\dimo{desingularization:prop} The loose triangulation $(\widehat{{\cal T}},\widehat{\beta})$ has different types of singularity: we eliminate the singularities type by type, being careful not to create any singularity of the types already eliminated. Note that we need to analyze only the singularities for tetrahedra, because both a singular triangle and a singular edge are contained in a singular tetrahedron. There are 6 different types of singularity for tetrahedra. For the sake of brevity, we will continue to denote by $({\cal T},\beta)$ the triangulations obtained during the proof, even if they are actually different from $({\cal T},\beta)$.
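Explicitly, the six types of singularity for tetrahedra referred to above are the following, and they are treated in this order below:
\begin{itemize}
\item self-adjacency of a tetrahedron along triangles, along edges, or along vertices;
\item multiple adjacency of distinct tetrahedra along triangles, along edges, or along vertices.
\end{itemize}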
\paragraph{Self-adjacency along triangles} The tetrahedron is shown in Fig.~\ref{self_adj_tria:fig}-left; the \mbox{${\rm Ba}$}-move eliminating the self-adjacency is shown in Fig.~\ref{self_adj_tria:fig}-right. \begin{figure}\label{self_adj_tria:fig}
\end{figure} Note that no new self-adjacency along triangles has been created.
\paragraph{Self-adjacency along edges} This case is more complicated than the previous one: for each tetrahedron the number of edges which are identified together can vary between 2 and 6. An easy induction on the maximal number of edges identified together in a tetrahedron and on the number of tetrahedra having such a maximal number of identifications reduces the number of cases to two. \begin{enumerate}
\item If two edges which are identified together do not share any
vertex (in the unfolded version of the tetrahedron), a positive
\mbox{${\rm Ba}$}-move is enough to eliminate the singularity, see
Fig.~\ref{self_adj_edges_1:fig}.
\begin{figure}
\caption{Elimination of self-adjacency along edges (first case).
The edges which are identified together are drawn thick.}
\label{self_adj_edges_1:fig}
\end{figure}
\item If two edges which are identified together share a vertex (in
the unfolded version of the tetrahedron) the situation is slightly
more difficult.
Let us start by calling $T$ the tetrahedron.
Note that the tetrahedron $T'$, attached to $T$ along the triangle
containing the two identified edges, is different from $T$, because we
have already eliminated the self-adjacencies of tetrahedra along
triangles.
So a positive \mbox{${\rm Ba}$}-move and a positive \mbox{${\rm MPa}$}-move can be applied to
eliminate the self-adjacency, see Fig.~\ref{self_adj_edges_2:fig}.
\begin{figure}
\caption{Elimination of self-adjacency along edges (second case).
The edges which are identified together are drawn thick.}
\label{self_adj_edges_2:fig}
\end{figure}
\end{enumerate} Note that no new self-adjacency along either triangles or edges has been created.
\paragraph{Multiple adjacency along triangles or edges} The situation is analogous to the case of self-adjacencies along triangles or edges, respectively; so it can be treated similarly.
\paragraph{} Before continuing the desingularization, we modify the loose triangulation obtained after the first part of the process to ``isolate'' each edge $\widehat{\beta}^{(i)}$ of $\widehat{\beta}$. Namely, we apply \mbox{${\rm Ba}$}-~and \mbox{${\rm MPa}$}-moves to obtain point~4 of the statement, {\em i.e.}~${\cal N}((\widehat{\beta}')^{(i)}) \cap {\cal N}((\widehat{\beta}')^{(j)}) = (\widehat{\beta}')^{(i)} \cap (\widehat{\beta}')^{(j)}$ for each $i \neq j$, and ${\cal N}((\widehat{\beta}')^{(i)}) \cap \widehat{\partial M} = (\widehat{\beta}')^{(i)} \cap \widehat{\partial M}$ for $i=1,\ldots ,n$. The situation is similar to desingularization: we eliminate the intersections between two $\clst{\widehat{\beta}^{(i)}}$'s and between each $\clst{\widehat{\beta}^{(i)}}$ and $\widehat{\partial M}$ step by step, being careful not to add any intersection of the types already eliminated. First we eliminate the intersections between each $\clst{\widehat{\beta}^{(i)}}$ and $\widehat{\partial M}$. If $\clst{\widehat{\beta}^{(i)}} \cap \widehat{\partial M}$ contains a vertex $v$ different from the endpoints of the edge $\widehat{\beta}^{(i)}$, then we perform the moves already described to eliminate the self-adjacency of tetrahedra along edges (second case); so $v$ no longer belongs to $\clst{\widehat{\beta}^{(i)}} \cap \widehat{\partial M}$. Let us consider now the intersection between two $\clst{\widehat{\beta}^{(i)}}$'s. They may share (apart from the intersection between the edges $\widehat{\beta}^{(i)}$) tetrahedra, triangles, edges and vertices (different from the endpoints of the edges $\widehat{\beta}^{(i)}$). \begin{description} \item{{\em Tetrahedra}:} if two $\clst{\widehat{\beta}^{(i)}}$'s share
a tetrahedron, we note that the two $\widehat{\beta}^{(i)}$'s belong
to one tetrahedron, so we perform the moves already described to
eliminate self-adjacency of tetrahedra along edges.
\item{{\em Triangles}:} if the common simplex is a triangle, we
perform the move already used to eliminate multiple adjacency of
tetrahedra along triangles.
\item{{\em Edges}:} if the common simplex is an edge, we perform the
moves already used to eliminate multiple adjacency of tetrahedra
along edges.
\item{{\em Vertices}:} if two $\clst{\widehat{\beta}^{(i)}}$'s share a
vertex (different from the endpoints of the edges
$\widehat{\beta}^{(i)}$), we perform the moves already described to
eliminate the self-adjacency of tetrahedra along edges (second
case). \end{description} Note that all the moves described above are admissible, and that no new singularity of the types already eliminated has been created. Let us continue now with desingularization.
\paragraph{Self-adjacency along vertices} If two vertices of a tetrahedron are identified together, let us call $e$ the edge which is a loop (if there is more than one edge like $e$, we repeat the procedure). There are two cases depending on whether the edge $e$ belongs to $\widehat{\beta}$ or not.
\subparagraph{First case:~$e \not \hspace{-1pt} \in \widehat{\beta}$} Consider the unfolded version of $\clst{e}$: the case for four tetrahedra is shown in Fig.~\ref{A_move_edge:fig}-left. We know that $\clst{e}$ contains at least three tetrahedra, because we have already eliminated self-adjacencies and multiple adjacencies of tetrahedra along triangles. The idea is now to ``divide'' the edge $e$ by adding a vertex, as shown in Fig.~\ref{A_move_edge:fig}. The situation is analogous to that of the proof of Proposition~\ref{gener_Pachner:prop} when the case of ${\rm
dim}(\sigma)=1$ is analyzed; the only difference is that now some boundary faces of $\clst{e}$ could be glued together, but this does not matter: we can repeat the same B-~and MP-moves, ``dividing'' the edge $e$, as shown in Fig.~\ref{A_move_edge_moves:fig}. We conclude by noting that each move is admissible: the first three are positive and the last one eliminates the edge $e$ which does not belong to $\widehat{\beta}$. Note also that no new singularity of the types already eliminated has been created.
\subparagraph{Second case:~$e \in \widehat{\beta}$} For the sake of clarity, let us call $\widehat{\beta}^{(i)}$ the edge $e$. Note that we cannot eliminate the singularity: in fact we cannot eliminate the edge $\widehat{\beta}^{(i)}$, so each tetrahedron in $\clst{\widehat{\beta}^{(i)}}$ always has $\widehat{\beta}^{(i)}$ as an edge and is always singular. But we will modify a neighborhood of $\widehat{\beta}^{(i)}$ to obtain point~3b of the statement. Recall that $\clst{\widehat{\beta}^{(i)}} \setminus (\widehat{\beta}^{(i)} \cap \widehat{\partial M})$ and $\clst{\widehat{\beta}^{(j)}} \setminus (\widehat{\beta}^{(j)} \cap \widehat{\partial M})$ are disjoint for each $j \neq i$. First we will modify $\clst{\widehat{\beta}^{(i)}}$ via \mbox{${\rm Ba}$}-,~\mbox{${\rm Va}$}-, and \mbox{${\rm MPa}$}-moves so that $\clst{\widehat{\beta}^{(i)}}$ is made of exactly three tetrahedra; then we will modify these tetrahedra to obtain point~3b of the statement.
Let us describe the first modification of $\clst{\widehat{\beta}^{(i)}}$. Note that $\clst{\widehat{\beta}^{(i)}}$ cannot be made of one or two tetrahedra because we have already eliminated self-adjacencies and multiple adjacencies of tetrahedra along triangles. So let us suppose that $\clst{\widehat{\beta}^{(i)}}$ is made of at least four tetrahedra and let us modify the loose triangulation so that $\clst{\widehat{\beta}^{(i)}}$ is made of three tetrahedra. For the sake of clarity, in Fig.~\ref{link_edge_alpha:fig} we have shown only the case of four tetrahedra in $\clst{\widehat{\beta}^{(i)}}$: the other cases are analogous. \begin{figure}\label{link_edge_alpha:fig}
\end{figure} We apply a positive \mbox{${\rm La}$}-move (which is a composition of \mbox{${\rm Va}$}-~and \mbox{${\rm MPa}$}-moves), choosing to leave in $\widehat{\beta}$ the edge whose star is made of three tetrahedra; we eliminate the multiple adjacency created by the \mbox{${\rm La}$}-move with a positive \mbox{${\rm Ba}$}-move; we eliminate the singularity of the edge $e'$ (``parallel'' to $\widehat{\beta}^{(i)}$) created by the \mbox{${\rm La}$}-move as we have done above ($e' \notin \widehat{\beta}$).
Let us pass to the second modification of $\clst{\widehat{\beta}^{(i)}}$, which is now made of three tetrahedra. Consider the unfolded version of $\clst{\widehat{\beta}^{(i)}}$: it can be seen as a triangulation, say ${\cal X}$, of the 3-ball, see Fig.~\ref{star_edge_self_alpha:fig}-left. \begin{figure}\label{star_edge_self_alpha:fig}
\end{figure} Let ${\cal X}'$ be another triangulation of the 3-ball such that: \begin{itemize}
\item ${\cal X}$ and ${\cal X}'$ coincide on the boundary of the 3-ball and
on the edge $\widehat{\beta}^{(i)}$;
\item ${\cal X}'$ appears, near $\widehat{\beta}^{(i)}$, as in
Fig.~\ref{star_edge_self_alpha:fig}-right;
\item no two boundary faces of ${\cal X}'$ belong to the same
  tetrahedron.
\end{itemize} It is very easy to find such an ${\cal X}'$. Now, ${\cal X}$ and ${\cal X}'$ have in common the boundary and the edge $\widehat{\beta}^{(i)}$, so we can apply Proposition~\ref{gener_Pachner:prop} to obtain ${\cal X}'$ from ${\cal X}$ via B-~and MP-moves involving neither the edge $\widehat{\beta}^{(i)}$ nor the boundary. Repeating these moves on the folded version of ${\cal X}$ contained in ${\cal T}$, we substitute it with a folded version of ${\cal X}'$ using B-~and MP-moves which are admissible because they have support in the folded version of ${\cal X}$ and do not involve the edge $\widehat{\beta}^{(i)}$. Now a neighborhood of $\widehat{\beta}^{(i)}$, say ${\cal N}(\widehat{\beta}^{(i)})$, appears as in Fig.~\ref{star_edge_self_alpha:fig}-bottom. Note that ${\cal N}(\widehat{\beta}^{(i)})$ is the cone on the triangle $\theta$ shown in Fig.~\ref{triangulated_triangle:fig}, where the endpoints of the cone on the barycentre $b$ are identified together, that $(\widehat{\beta})^{(i)}$ is just this edge with identified endpoints, and that no new singularity of the types already eliminated has been created.
\paragraph{Multiple adjacency along vertices} The situation is analogous to the case of self-adjacency along vertices, but there are some differences to point out. The idea is to ``divide'' one of the edges giving the singularity, so the moves to apply are those used to eliminate self-adjacency along vertices when $e \notin \widehat{\beta}$. There are, however, two exceptions. \begin{enumerate} \item We cannot ``divide'' the edges belonging to $\widehat{\beta}$, so
we cannot eliminate the singularity created by two edges of
$\widehat{\beta}$ sharing both endpoints.
\item If an edge $\widehat{\beta}^{(i)}$ (belonging to
$\widehat{\beta}$) is a loop, then we do not divide any of the edges
belonging to the closed star of $\widehat{\beta}^{(i)}$, because
such an edge has in its closed star a loop (the edge
$\widehat{\beta}^{(i)}$) and the moves described above would create
a new multiple adjacency. \end{enumerate} In the other cases we can eliminate the multiple adjacency as we have done for self-adjacencies along vertices with $e \notin \widehat{\beta}$, because the moves are admissible and we do not create any singularity of the types already eliminated. Finally, let us deal with the two exceptions. \begin{enumerate} \item For each edge $\widehat{\beta}^{(i)}$ which is not a loop, we
modify $\clst{\widehat{\beta}^{(i)}}$ so that it is made of
exactly three tetrahedra, as we have done above in the first
modification of $\clst{\widehat{\beta}^{(i)}}$ for the
$\widehat{\beta}^{(i)}$'s which are loops.
\item We do not operate on the edges belonging to the closed star of
the $\widehat{\beta}^{(i)}$'s which are loops. \end{enumerate}
\paragraph{Conclusion} Repeating the moves described above on the ideal triangulation $({\cal T},\beta)$ of the pair $(M,\alpha)$, we obtain, via \mbox{${\rm Ba}$}-,~\mbox{${\rm Va}$}-, and \mbox{${\rm MPa}$}-moves, an ideal triangulation $({\cal T}',\beta')$ of $(M \setminus \cup B_k, \alpha)$, where the $B_k$'s are $3$-balls disjoint from each other and from $\alpha$. We have eliminated almost all the singularities of $\widehat{{\cal T}}$, but there are three types of singularity we cannot eliminate (those due to $\alpha$). These three types of singularity are exactly those described in point~2 of the statement. The check that $({\cal T}',\beta')$ is the desired ideal triangulation is straightforward, so we leave it to the reader. \finedimo{desingularization:prop}
\subsection{Application of the Alexander theorem}
Let us now state and prove a first result, which is a weak version of Theorem~\ref{gener_MP:teo}. \begin{prop}\label{gener_MP_weak:prop} Two marked ideal triangulations of a pair $(M,\alpha)$ can be obtained from each other via a sequence of \mbox{${\rm Ba}$}-,~\mbox{${\rm Va}$}-, and \mbox{${\rm MPa}$}-moves, such that the negative \mbox{${\rm Ba}$}-moves do not eliminate the spherical boundary components of $\partial M$. \end{prop} \dimo{gener_MP_weak:prop} Let $({\cal T}_1,\beta_1)$ and $({\cal T}_2,\beta_2)$ be two ideal triangulations of $(M,\alpha)$. Let us apply Proposition~\ref{desingularization:prop} to each $({\cal T}_i,\beta_i)$, obtaining $({\cal T}'_i,\beta'_i)$. Recall that each $({\cal T}'_i,\beta'_i)$ is obtained from the corresponding $({\cal T}_i,\beta_i)$ via \mbox{${\rm Ba}$}-,~\mbox{${\rm Va}$}-, and \mbox{${\rm MPa}$}-moves, that each $(\beta'_i)^{(j)}$ has a particular neighborhood ${\cal N}((\beta'_i)^{(j)})$, and that the loose triangulations $(\widehat{{\cal T}}'_i,\widehat{\beta}'_i)$ are almost desingularized (the singularities are contained in the open neighborhoods $\inter{{\cal N}((\widehat{\beta}'_i)^{(j)})}$). Moreover, recall that the \mbox{${\rm Ba}$}-move does not involve the spherical boundary components of $\partial M$. Obviously, since we have ${\cal N}((\widehat{\beta}'_i)^{(j)}) \cap {\cal N}((\widehat{\beta}'_i)^{(k)}) = (\widehat{\beta}'_i)^{(j)} \cap (\widehat{\beta}'_i)^{(k)}$ for each $j \neq k$, and ${\cal N}((\widehat{\beta}'_i)^{(j)}) \cap \widehat{\partial M} = (\widehat{\beta}'_i)^{(j)} \cap \widehat{\partial M}$ for $j=1,\ldots ,n$, we can suppose, up to isotopy, that ${\cal N}(\widehat{\beta}'_1)$ and ${\cal N}(\widehat{\beta}'_2)$ coincide.
The strategy will now be to prove that $({\cal T}'_1,\beta'_1)$ and $({\cal T}'_2,\beta'_2)$ are obtained from each other via \mbox{${\rm Ba}$}-~and \mbox{${\rm MPa}$}-moves. To do this, we will apply Proposition~\ref{gener_Pachner:prop} to $\widehat{M} \setminus (\sqcup_j \inter{{\cal N}((\widehat{\beta}'_1)^{(j)})}) = \widehat{M} \setminus (\sqcup_j \inter{{\cal N}((\widehat{\beta}'_2)^{(j)})})$. Since the singularities of the loose triangulations $\widehat{{\cal T}}'_i$ are contained in the $\inter{{\cal N}((\widehat{\beta}'_i)^{(j)})}$'s (see Proposition~\ref{desingularization:prop}), the triangulations $\widehat{{\cal T}}'_i \setminus (\sqcup_j \inter{{\cal N}((\widehat{\beta}'_i)^{(j)})})$ are actually non-loose. Moreover, the two $\widehat{{\cal T}}'_i \setminus (\sqcup_j \inter{{\cal N}((\widehat{\beta}'_i)^{(j)})})$'s coincide on the boundary and on $\widehat{\partial M}$. Then, we can apply Proposition~\ref{gener_Pachner:prop} to transform $\widehat{{\cal T}}'_1 \setminus (\sqcup_j \inter{{\cal N}((\widehat{\beta}'_1)^{(j)})})$ into $\widehat{{\cal T}}'_2 \setminus (\sqcup_j \inter{{\cal N}((\widehat{\beta}'_2)^{(j)})})$ via B-~and MP-moves with support outside $\widehat{\partial M}$. Obviously, these moves can be applied to the loose triangulation $\widehat{{\cal T}}'_1$, transforming it into $\widehat{{\cal T}}'_2$; they are all admissible, and they transform the loose triangulation $(\widehat{{\cal T}}'_1,\widehat{\beta}'_1)$ into $(\widehat{{\cal T}}'_2,\widehat{\beta}'_2)$; moreover, the negative \mbox{${\rm Ba}$}-moves do not eliminate the points belonging to $\widehat{\partial M}$. The desired sequence is obtained by repeating the moves on the ideal triangulations $({\cal T}'_i,\beta'_i)$ of $(M,\alpha)$. \finedimo{gener_MP_weak:prop}
\subsection{Elimination of Ba-moves}
To deduce Theorem~\ref{gener_MP:teo} from Proposition~\ref{gener_MP_weak:prop}, we generalize an idea of Matveev~\cite{Matveev:calculus} to the setting of marked spines.
\dimo{gener_MP:teo} Let $({\cal T}_1,\beta_1)$ and $({\cal T}_2,\beta_2)$ be two ideal triangulations of $(M,\alpha)$. By Proposition~\ref{gener_MP_weak:prop}, we have that $({\cal T}_2,\beta_2)$ is obtained from $({\cal T}_1,\beta_1)$ via \mbox{${\rm Ba}$}-,~\mbox{${\rm Va}$}-, and \mbox{${\rm MPa}$}-moves, such that the negative \mbox{${\rm Ba}$}-moves do not eliminate the spherical boundary components of $\partial M$. The idea of the proof consists in replacing each \mbox{${\rm Ba}$}-move with a \mbox{${\rm Ca}$}-move, and each \mbox{${\rm Va}$}-~or \mbox{${\rm MPa}$}-move with suitable sequences of \mbox{${\rm La}$}-,~\mbox{${\rm Va}$}-, and \mbox{${\rm MPa}$}-moves. Let us pass to the dual spine viewpoint: for $i=1,2$, let $(P_i,\beta_i)$ be the spine dual to $({\cal T}_i,\beta_i)$.
First of all, note that in the passages along the sequence of \mbox{${\rm Ba}$}-,~\mbox{${\rm Va}$}-,~and \mbox{${\rm MPa}$}-moves we get (standard) spines $P_*$ of $M$ minus some balls; so each $M \setminus P_*$ is a disjoint union of $\partial M \times [0,1)$ and some balls. When a positive \mbox{${\rm Ba}$}-move is applied, a proper ball ${\cal B}$ appears. Let us continue calling it a {\em proper ball} (and continue indicating it by ${\cal B}$) through its transformations under the other \mbox{${\rm Ba}$}-,~\mbox{${\rm Va}$}-, and \mbox{${\rm MPa}$}-moves, until it disappears because of a negative \mbox{${\rm Ba}$}-move (each proper ball has to disappear). Note that, conversely, the negative \mbox{${\rm Ba}$}-moves eliminate only the proper balls. Note also that each ${\cal B}$ is an open ball with boundary contained in $P_*$ and it is not touched by the edges belonging to $\alpha$.
We will not replace all the \mbox{${\rm Ba}$}-moves (with \mbox{${\rm Ca}$}-moves) at the same time; instead, we will concentrate on one positive \mbox{${\rm Ba}$}-move and on the negative \mbox{${\rm Ba}$}-move eliminating the proper ball created by the positive \mbox{${\rm Ba}$}-move. The strategy will be to replace these two \mbox{${\rm Ba}$}-moves with two \mbox{${\rm Ca}$}-moves, any other \mbox{${\rm Ba}$}-move with a suitable sequence of only one \mbox{${\rm Ba}$}-move and \mbox{${\rm La}$}-,~\mbox{${\rm Va}$}-, and \mbox{${\rm MPa}$}-moves, and each \mbox{${\rm Va}$}-~or \mbox{${\rm MPa}$}-move with a suitable sequence of \mbox{${\rm La}$}-,~\mbox{${\rm Va}$}-, and \mbox{${\rm MPa}$}-moves only. In such a way we will decrease, by two, the number of \mbox{${\rm Ba}$}-moves in the sequence. By repeating this procedure we can eliminate all the \mbox{${\rm Ba}$}-moves and we can complete the proof.
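The bookkeeping behind this strategy can be made concrete with a small sketch, which is purely illustrative and not part of the paper's argument: a move sequence is encoded as a list of labels, each positive Ba-move is (as a simplifying assumption) paired with the first subsequent negative Ba-move, and the rewriting of the intermediate moves is a cartoon of the actual substitutions. Each outer iteration removes exactly one matched pair of Ba-moves, so the procedure terminates.

```python
def rewrite(m):
    """Cartoon of one intermediate substitution m_j -> ~m_j: a Ba-move
    is replaced by a sequence still containing exactly one Ba-move plus
    auxiliary moves; any other move is replaced by a Ba-free sequence.
    The particular auxiliary labels here are illustrative only."""
    if m.startswith('Ba'):
        return ['La', m, 'MPa']
    return ['La', m]

def eliminate_ba_moves(seq):
    """Repeatedly replace a matched pair (positive Ba-move, negative
    Ba-move) by a pair of Ca-moves, rewriting the moves in between.
    Assumes every 'Ba+' has a later matching 'Ba-' in `seq`."""
    while 'Ba+' in seq:
        i = seq.index('Ba+')            # a positive Ba-move
        j = seq.index('Ba-', i + 1)     # the negative Ba-move killing its ball
        middle = [x for m in seq[i + 1:j] for x in rewrite(m)]
        seq = seq[:i] + ['Ca+'] + middle + ['Ca-'] + seq[j + 1:]
    return seq
```

Since each pass strictly decreases the number of positive Ba-moves, the loop ends with a Ba-free sequence, mirroring the counting argument in the proof.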
Let us describe the procedure in detail. Let $s$ be the following sequence of \mbox{${\rm Ba}$}-,~\mbox{${\rm Va}$}-, and \mbox{${\rm MPa}$}-moves transforming $(P_1,\beta_1)$ into $(P_2,\beta_2)$: \begin{eqnarray*} (P_1,\beta_1) \stackrel{s_1}{\longrightarrow} (Q_0,\eta_0) \stackrel{\mbox{${\rm Ba}$}^+}{\longrightarrow} (Q_1,\eta_1) \stackrel{m_1}{\longrightarrow} (Q_2,\eta_2) \stackrel{m_2}{\longrightarrow} \quad\ldots \\ \ldots\quad \stackrel{m_{r-1}}{\longrightarrow} (Q_r,\eta_r) \stackrel{\mbox{${\rm Ba}$}^-}{\longrightarrow} (Q_{r+1},\eta_{r+1}) \stackrel{s_2}{\longrightarrow} (P_2,\beta_2), \end{eqnarray*} where $s_1$ and $s_2$ are sequences of moves we will not replace, $\mbox{${\rm Ba}$}^+$ (respectively, $\mbox{${\rm Ba}$}^-$) is the positive (respectively, negative) move we will replace with a positive (respectively, negative) \mbox{${\rm Ca}$}-move, and the $m_j$'s are the other moves we will replace. From now on, we will denote by ${\cal B}$ both the proper ball created by $\mbox{${\rm Ba}$}^+$ (and eliminated by $\mbox{${\rm Ba}$}^-$) and its transformations under the $m_j$'s.
To decrease by two the number of \mbox{${\rm Ba}$}-moves, we will find a sequence $s'$ transforming $(P_1,\beta_1)$ into $(P_2,\beta_2)$ and appearing as follows: \begin{eqnarray*} (P_1,\beta_1) \stackrel{s_1}{\longrightarrow} (Q_0,\eta_0) \stackrel{\mbox{${\rm Ca}$}^+}{\longrightarrow} (\widetilde{Q}_1,\widetilde{\eta}_1) \stackrel{\widetilde{m}_1}{\longrightarrow} (\widetilde{Q}_2,\widetilde{\eta}_2) \stackrel{\widetilde{m}_2}{\longrightarrow} \quad\ldots \\ \ldots\quad \stackrel{\widetilde{m}_{r-1}}{\longrightarrow} (\widetilde{Q}_r,\widetilde{\eta}_r) \stackrel{\mbox{${\rm Ca}$}^-}{\longrightarrow} (Q_{r+1},\eta_{r+1}) \stackrel{s_2}{\longrightarrow} (P_2,\beta_2), \end{eqnarray*} where $s_1$ and $s_2$ are the same sequences as above, $\mbox{${\rm Ca}$}^+$ (respectively, $\mbox{${\rm Ca}$}^-$) is the positive (respectively, negative) move replacing $\mbox{${\rm Ba}$}^+$ (respectively, $\mbox{${\rm Ba}$}^-$), and the $\widetilde{m}_j$'s are sequences of moves (composed either by only one \mbox{${\rm Ba}$}-move and some \mbox{${\rm La}$}-,~\mbox{${\rm Va}$}-, and \mbox{${\rm MPa}$}-moves if $m_j$ is a \mbox{${\rm Ba}$}-move, or by only \mbox{${\rm La}$}-,~\mbox{${\rm Va}$}-, and \mbox{${\rm MPa}$}-moves otherwise) replacing the $m_j$'s.
Let us start by replacing $\mbox{${\rm Ba}$}^+$ with a positive \mbox{${\rm Ca}$}-move $\mbox{${\rm Ca}$}^+$ (the position of the arch can be arbitrary). After applying $\mbox{${\rm Ca}$}^+$ to $(Q_0,\eta_0)$ we obtain a spine $(\widetilde{Q}_1,\widetilde{\eta}_1)$ which differs from $(Q_1,\eta_1)$ only by the presence of an arch connecting the proper ball ${\cal B}$ to $M \setminus (Q_1 \cup {\cal B})$, see Fig.~\ref{C_move:fig}-right. Note that the arch joins a region $R_1$ of $\partial {\cal B}$ with another one, $R_2$, of $Q_1$; if $R_2$ belongs to $\eta_1$, then $R_1$ is a part of $\partial {\cal B}$ belonging to $\widetilde{\eta}_1$. Note also that $R_1$ is the only part of $\partial {\cal B}$ which can belong to $\widetilde{\eta}_1$, and that $R_0$ does not belong to $\widetilde{\eta}_1$. Now the sequence $s'$ appears as follows: \begin{eqnarray*} (P_1,\beta_1) \stackrel{s_1}{\longrightarrow} (Q_0,\eta_0) \stackrel{\mbox{${\rm Ca}$}^+}{\longrightarrow} (\widetilde{Q}_1,\widetilde{\eta}_1). \end{eqnarray*}
The aim is now to replace the moves $m_j$. If we try to apply $m_1$ also on $(\widetilde{Q}_1,\widetilde{\eta}_1)$, we could fail because of the presence of the arch created by the move $\mbox{${\rm Ca}$}^+$. So the idea is to apply the move $m_j$ directly if the arch is not involved in it, and to move the arch first otherwise. To do this, we will use a recursive procedure. Let $(Q_j,\eta_j)$ be a spine (of the sequence $s$) of $(M,\alpha)$ minus some balls (let us call $k$ the number of such balls). Let ${\cal B}$ be the connected component of $M \setminus Q_j$ containing one of these balls. Note that ${\cal B}$ is an open ball embedded in $\inter{M}$, but its closure $\overline{{\cal B}}$ may not be a closed ball embedded in $\inter{M}$. In our recursive procedure, ${\cal B}$ is the proper ball created by the move $\mbox{${\rm Ba}$}^+$ and modified by the moves $m_i$, with $i<j$. Let $(\widetilde{Q}_j,\widetilde{\eta}_j)$ be a spine of $(M,\alpha)$ minus $k-1$ balls, which differs from $(Q_j,\eta_j)$ only by the presence of an arch connecting the proper ball ${\cal B}$ to another connected component of $M \setminus Q_j$. Let moreover $m_j$ be an admissible move from $(Q_j,\eta_j)$ to $(Q_{j+1},\eta_{j+1})$ which does not eliminate the proper ball ${\cal B}$. Note that $(Q_{j+1},\eta_{j+1})$ is a spine of $(M,\alpha)$ minus $h$ balls, where $h=k-1,k,k+1$ depending on $m_j$. Let us continue calling ${\cal B}$ the transformation of ${\cal B}$ under $m_j$.
The recursive pass consists in describing a sequence $\widetilde{m}_j$ of admissible moves (composed either of only one \mbox{${\rm Ba}$}-move and some \mbox{${\rm La}$}-,~\mbox{${\rm Va}$}-, and \mbox{${\rm MPa}$}-moves if $m_j$ is a \mbox{${\rm Ba}$}-move, or of \mbox{${\rm La}$}-,~\mbox{${\rm Va}$}-, and \mbox{${\rm MPa}$}-moves only otherwise) from $(\widetilde{Q}_j,\widetilde{\eta}_j)$ to $(\widetilde{Q}_{j+1},\widetilde{\eta}_{j+1})$, where $(\widetilde{Q}_{j+1},\widetilde{\eta}_{j+1})$ is a spine of $(M,\alpha)$ minus $h-1$ balls, which differs from $(Q_{j+1},\eta_{j+1})$ only by the presence of an arch connecting the ball ${\cal B}$ to another connected component of $M \setminus Q_{j+1}$. If $m_j$ can be applied ({\em i.e.}~the arch is far from the support of $m_j$), then we apply $m_j$ to $(\widetilde{Q}_j,\widetilde{\eta}_j)$, obtaining $(\widetilde{Q}_{j+1},\widetilde{\eta}_{j+1})$, which obviously has all the properties described above. Note that there are some types of moves which can always be applied because the arch is never involved, up to isotopy, in the move: such moves are the positive \mbox{${\rm Ba}$}-moves and the positive \mbox{${\rm Va}$}-moves. To replace the other \mbox{${\rm Ba}$}-,~\mbox{${\rm Va}$}-, and \mbox{${\rm MPa}$}-moves, we may need to move the arch in order to apply the move. If $m_j$ cannot be applied (because of the presence of the arch), then we move the arch before applying $m_j$. Let us describe how to move the arch; afterwards we will continue the substitution of $m_j$ with $\widetilde{m}_j$.
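The case analysis of the recursive pass can be summarized by a small dispatch function. This is an illustrative sketch with hypothetical labels, not the paper's construction: positive Ba- and Va-moves never meet the arch, and for any other move an arch-move (shown in the paper to be a composition of La- and MPa-moves) may have to be applied first.

```python
def recursive_pass(m_j, arch_obstructs):
    """Produce the sequence ~m_j replacing the move m_j in the presence
    of the arch.  `arch_obstructs` records whether the arch lies in the
    support of m_j; positive Ba- and Va-moves are never obstructed."""
    if m_j in ('Ba+', 'Va+') or not arch_obstructs:
        return [m_j]                 # the move applies as it stands
    return ['arch-move', m_j]        # relocate the arch, then apply m_j
```

The key invariant, as in the text, is that after each pass the resulting spine differs from the unmarked one only by the presence of the arch.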
\paragraph{Arch-move} Let $(\widetilde{Q}_j,\widetilde{\eta}_j)$ be the spine of $(M,\alpha)$ minus $k-1$ balls which has an arch we want to move. Recall that ${\cal B}$ is the proper ball connected to another connected component of $M \setminus Q_j$ by the arch. Moreover, recall that the proper ball ${\cal B}$ is an open ball embedded in $M$, but (because of the moves $m_i$ with $i<j$) its closure $\overline{{\cal B}}$ may fail to be a closed ball embedded in $M$.
We now define a spine $(\widetilde{Q}'_j,\widetilde{\eta}'_j)$ of $(M,\alpha)$ minus $k-1$ balls. Let $\widetilde{Q}'_j$ be the spine obtained from $\widetilde{Q}_j$ by removing the arch we want to move and placing it at another point, so that $\widetilde{Q}'_j$ is again a spine of $M$ minus $k-1$ balls and the ball ${\cal B}$ is connected by the new arch to another connected component of $M \setminus Q_j$, see Fig.~\ref{arch_move:fig}. \begin{figure}
\caption{The arch-move.
We show on the left the situation near the arch we want to remove
and on the right the situation near the point where we want to
place the arch.}
\label{arch_move:fig}
\end{figure} The two conditions on $\widetilde{Q}'_j$ imply that the arch, after the move, should be placed ``near'' $\partial \overline{{\cal B}}$. To define $\widetilde{\eta}'_j$, let us analyze the regions affected by the move. The region $R_1$ of $\widetilde{Q}_j$ (intersecting $\partial {\cal B}$) is divided (in $\widetilde{Q}'_j$) into two regions, $R'_1$ and $R''_1$. Note that these two regions belong also to the spine $(Q_j,\eta_j)$ and that only $R'_1$ can belong to $\eta_j$; if it belongs to $\eta_j$, we require that $R'_1$ itself remains in $\widetilde{\eta}'_j$. The little region $R_0$, which is eliminated by the arch-move, does not belong to $\widetilde{\eta}_j$. The other regions which are modified are the regions $R_2$ and $R'_2$, which are united by the move. Note that these two regions belong also to $(Q_j,\eta_j)$ and that only $R_2$ can belong to $\eta_j$ (because $R''_1$ is contained in $\partial {\cal B}$); if $R_2$ belongs to $\eta_j$, we require that the region $E$ lies in $\widetilde{\eta}'_j$. The other regions are not modified, so we leave in $\widetilde{\eta}'_j$ those belonging to $\widetilde{\eta}_j$. Finally, note that $(\widetilde{Q}'_j,\widetilde{\eta}'_j)$ differs from $(Q_j,\eta_j)$ only by the presence of the arch (connecting the proper ball ${\cal B}$ to another connected component of $M \setminus Q_j$). The transformation of $(\widetilde{Q}_j,\widetilde{\eta}_j)$ into $(\widetilde{Q}'_j,\widetilde{\eta}'_j)$ will be called an {\em arch-move}.
Now we prove that each arch-move is a composition of \mbox{${\rm La}$}-~and \mbox{${\rm MPa}$}-moves. In Fig.~\ref{arch_L_MP:fig} we have shown the \mbox{${\rm La}$}-~and \mbox{${\rm MPa}$}-moves transforming $(\widetilde{Q}_j,\widetilde{\eta}_j)$ into $(\widetilde{Q}'_j,\widetilde{\eta}'_j)$: let us describe these moves. \begin{figure}\label{arch_L_MP:fig}
\end{figure} Note that the only region which can both intersect $\partial {\cal B}$ and belong to $\widetilde{\eta}_j$ is $R_1$. For the first positive L-move, if $R_1$ belongs to $\alpha$, we choose to leave $R_3$ in $\alpha$. Note that now no region in $\alpha$ intersects $\partial {\cal B}$. For the second positive L-move, if $R_2$ belongs to $\alpha$, we choose to leave $R_4$ in $\alpha$. The region $R_5$ does not belong to $\alpha$, because it intersects $\partial {\cal B}$, so the third positive L-move is admissible. Let us now describe the move indicated by a dashed arrow. Note that the proper ball ${\cal B}$ can be seen as a tube $D^2 \times [0,1]$, where $D^2 \times \{0\} = D$ and $D^2 \times \{1\} = D'$. Obviously, we can move the disc $D$ through the tube from $D^2 \times \{0\}$ to $D^2 \times \{1\}$ via an isotopy. The move indicated by the dashed arrow consists exactly of this isotopy of the little disc $D$ through the proper ball ${\cal B}$: more precisely, if one of the two arches (or both) is inside ${\cal B}$ (namely, the little discs $R_0$ and $R'_0$ are contained in ${\cal B}$), the isotopy is through ${\cal B}$ minus both the arch and the tube inside it. At the end of the isotopy the little disc $D$ coincides with $D'$, so it lies near the arch we want to remove. A simple general position argument tells us that the isotopy can be substituted with L-~and MP-moves; see Lemma~1.2.16 of~\cite{Matveev:new:book} for a precise proof. All these moves are admissible because $F_1$, $F_2$, and the regions intersecting $\partial {\cal B}$ do not belong to $\alpha$. For the same reason (and since $D'$ does not belong to $\alpha$) the last three negative L-moves are admissible (obviously, the regions united in each move are different). To conclude, we note that the position of the regions in $\alpha$ after these moves is the same as after the arch-move.
\paragraph{Continuing the substitution} Recall that we want to replace the move $m_j$, which cannot be applied to $(\widetilde{Q}_j,\widetilde{\eta}_j)$ because of the presence of the arch. We apply first an arch-move to $(\widetilde{Q}_j,\widetilde{\eta}_j)$, obtaining $(\widetilde{Q}'_j,\widetilde{\eta}'_j)$, and then the move $m_j$. Let us call $(\widetilde{Q}_{j+1},\widetilde{\eta}_{j+1})$ the spine just obtained. Note that, to apply the arch-move, we need to find a place where the arch can be put; it is very easy to find such a place near $\partial \overline{{\cal B}}$ and far from the move $m_j$. Note also that, by construction, $(\widetilde{Q}_{j+1},\widetilde{\eta}_{j+1})$ differs from $(Q_{j+1},\eta_{j+1})$ only by the presence of the arch (connecting the proper ball ${\cal B}$ to another connected component of $M \setminus Q_{j+1}$).
With these substitutions, we have extended the sequence $s'$ obtaining: \begin{eqnarray*} (P_1,\beta_1) \stackrel{s_1}{\longrightarrow} (Q_0,\eta_0) \stackrel{\mbox{${\rm Ca}$}^+}{\longrightarrow} (\widetilde{Q}_1,\widetilde{\eta}_1) \stackrel{\widetilde{m}_1}{\longrightarrow} (\widetilde{Q}_2,\widetilde{\eta}_2) \stackrel{\widetilde{m}_2}{\longrightarrow} \quad\ldots \\ \ldots\quad \stackrel{\widetilde{m}_{r-1}}{\longrightarrow} (\widetilde{Q}_r,\widetilde{\eta}_r). \end{eqnarray*}
Let us consider now the move $\mbox{${\rm Ba}$}^-$. We have noted above that the spine $(\widetilde{Q}_r,\widetilde{\eta}_r)$ differs from $(Q_r,\eta_r)$ only by the presence of the arch (connecting the proper ball ${\cal B}$ to another connected component of $M \setminus Q_r$), so $\widetilde{Q}_r$ near ${\cal B}$ appears exactly as in Fig.~\ref{C_move:fig}-centre. Moreover, $R_1$ is the only part of $\partial{\cal B}$ which can belong to $\widetilde{\eta}_r$ and $R_0$ does not belong to $\widetilde{\eta}_r$. Obviously, a negative \mbox{${\rm Ca}$}-move (which we call $\mbox{${\rm Ca}$}^-$) can be applied and the result is just $(Q_{r+1},\eta_{r+1})$. Now the sequence $s'$ appears as follows: \begin{eqnarray*} (P_1,\beta_1) \stackrel{s_1}{\longrightarrow} (Q_0,\eta_0) \stackrel{\mbox{${\rm Ca}$}^+}{\longrightarrow} (\widetilde{Q}_1,\widetilde{\eta}_1) \stackrel{\widetilde{m}_1}{\longrightarrow} (\widetilde{Q}_2,\widetilde{\eta}_2) \stackrel{\widetilde{m}_2}{\longrightarrow} \quad\ldots \\ \ldots\quad \stackrel{\widetilde{m}_{r-1}}{\longrightarrow} (\widetilde{Q}_r,\widetilde{\eta}_r) \stackrel{\mbox{${\rm Ca}$}^-}{\longrightarrow} (Q_{r+1},\eta_{r+1}). \end{eqnarray*}
To obtain the desired sequence, it is enough to complete the sequence just obtained by composing it with the sequence $s_2$. This proves the theorem. \finedimo{gener_MP:teo}
\subsection{Another proof}\label{Bas_Ben_proof:subsec}
In this subsection we describe how Baseilhac and Benedetti have deduced Theorem~\ref{gener_MP:teo} from a result (due to Turaev and Viro) which relies on the Matveev-Piergallini theorem. For the sake of clarity, we describe the ideas of the proof, instead of only stating Theorem~3.4.B of~\cite{Turaev-Viro}. We restrict ourselves to a sketch of the proof of Theorem~\ref{gener_MP:teo}.
\noindent{\it Sketch of the proof of} {\hspace{2pt}}\ref{gener_MP:teo}. For $i=1,2$, let $(P_i,\beta_i)$ be the spine dual to an ideal triangulation $({\cal T}_i,\beta_i)$. Let $N(\alpha)$ be a small open regular neighborhood of $\alpha$ and $M_\alpha = M \setminus N(\alpha)$. Note that, up to choosing $N(\alpha)$ small with respect to $P_1$ and $P_2$, we can suppose that $N(\alpha) \cap P_i = \cup_{j=1}^n D_i^{(j)}$, where $D_i^{(j)}$ is an open disc with closure contained in the (open) region $\beta_i^{(j)}$, for $i=1,2$ and $j=1,\ldots,n$. Now, for $i=1,2$, we define two new polyhedra $Q_i$ and $R_i$ with $Q_i \subset R_i \subset P_i$. To get $Q_i$, we remove from $P_i$ all the (open) regions $\beta_i^{(j)}$, and, to get $R_i$, we remove from $P_i$ all the open discs $D_i^{(j)}$. Note that we have a retraction $\pi_i$ of $M_\alpha$ onto $Q_i$. Moreover, we have on $\partial M_\alpha$ a family $\lambda_i = \{\lambda_i^{(1)},\ldots ,\lambda_i^{(n)}\}$ of disjoint simple circles such that $\lambda_i^{(j)} = \partial D_i^{(j)} \subset \beta_i^{(j)}$ and, up to isotopy, $R_i \setminus Q_i$ consists precisely of the ``half-open'' annuli $\lambda_i^{(j)} \times [0,1)$ obtained by projecting $\lambda_i^{(j)}$ to $Q_i$ along $\pi_i$. We have already described the ``inverse'' construction in Subsection~\ref{id_tria:subsection}, when we proved the existence of marked ideal triangulations. Of course any move on $R_i$ not affecting the $\lambda_i^{(j)}$'s readily translates into an admissible move on $(P_i,\beta_i)$, and conversely. Obviously, up to isotopy, we can suppose that each $\lambda_1^{(j)}$ coincides with $\lambda_2^{(j)}$ and that each $D_1^{(j)}$ coincides with $D_2^{(j)}$: let us simply call $\lambda^{(j)}$ the curve $\lambda_1^{(j)}=\lambda_2^{(j)}$, $\lambda$ the collection $\{\lambda^{(1)},\ldots,\lambda^{(n)}\}$, and $D^{(j)}$ the disc $D_1^{(j)}=D_2^{(j)}$.
Now, $Q_i$ need not be standard, but one readily sees that standardness can be achieved using C-~and L-moves on $R_i$ not affecting the $\lambda^{(j)}$'s. Then, $Q_1$ and $Q_2$ are standard spines of $M \setminus N(\alpha)$, so, by the Matveev-Piergallini theorem (see Theorem~\ref{MP_calculus:teo}), we can transform $Q_1$ into $Q_2$ via a deformation $Q_t$ (with $t \in [1,2]$) with elementary accidents which are L-~and MP-moves. Obviously, we can suppose that the elementary accidents occur at different times. Note that the $Q_t$'s are all quasi-standard spines, except for a finite number of times when elementary accidents occur and quasi-standardness is lost.
In parallel, we have a deformation $\pi_t$ of $\pi_1$ into $\pi_2$, where each $\pi_t$ is a retraction of $M \setminus N(\alpha)$ onto $Q_t$. Obviously, the annuli $[\lambda^{(j)},\pi_1(\lambda^{(j)}))$ are transformed into $[\lambda^{(j)},\pi_2(\lambda^{(j)}))$ via annuli $[\lambda^{(j)},\pi_t(\lambda^{(j)}))$. By a general position argument, we can suppose that the accidents occurring to $[\lambda,\pi_t(\lambda)) \cup Q_t$ are L-,~MP-, and false L-moves not affecting the $\lambda^{(j)}$'s, where a {\em false} L-move is a negative L-move not preserving standardness (so it is not actually an L-move).
Now, we have obtained a sequence of L-,~MP-, and false L-moves not affecting the $\lambda^{(j)}$'s transforming $R_1$ into $R_2$. To eliminate the false L-moves, we can use the same technique used in Theorem~1.2.30 of~\cite{Matveev:new:book}, which states the following (we use our notation):\\ {\em Two standard spines of a $3$-manifold $W$ related by a sequence
of {\rm L}-,~{\rm MP}-,~and false {\rm L}-moves are related by a
sequence of {\rm L}-~and {\rm MP}-moves only.}\\ Generalizing this proposition to our setting in the obvious way, we obtain a sequence of L-~and MP-moves only, transforming $R_1$ into $R_2$. By adding the discs $D^{(j)}$, we obtain the desired sequence of \mbox{${\rm La}$}-~and \mbox{${\rm MPa}$}-moves transforming $(P_1,\beta_1)$ into $(P_2,\beta_2)$. {
\hbox{\enspace\fbox{\ref{gener_MP:teo}}}}
\section{Existence of dominating marked spines}\label{dominating:sec}
In this section we generalize, to the setting of marked spines, a result of Makovetskii~\cite{makov} on the existence of a spine which dominates, as far as the positive L-moves and positive MP-moves are concerned, any two given spines of $M$. Namely, we prove the following.
\begin{teo}\label{gener_makov:teo} Let $({\cal T}_1,\beta_1)$ and $({\cal T}_2,\beta_2)$ be two marked ideal triangulations of a pair $(M,\alpha)$. Then there exists a marked ideal triangulation $({\cal T},\beta)$ of $(M,\alpha)$ obtained from both $({\cal T}_1,\beta_1)$ and $({\cal T}_2,\beta_2)$ via a sequence of positive \mbox{${\rm La}$}-moves and positive \mbox{${\rm MPa}$}-moves. \end{teo}
For the proof, we follow the ideas of~\cite{makov}.
\subsection{Divided spines and moves}
Let us give some definitions useful for the proof.
\paragraph{Dividing strips and divided spines} Let $(P,\beta)$, with $\beta = \{\beta^{(1)},\ldots ,\beta^{(n)}\}$, be a spine of a pair $(M,\alpha)$. Let $\gamma: [0,1] \rightarrow P$ be a simple curve such that: \begin{itemize}
\item the endpoints belong to edges (maybe, to the same edge) of $P$;
\item $\gamma$ intersects the singularities of $P$ transversely;
\item $\gamma$ contains no vertex of $P$;
\item there exists a strip $S = \gamma \times [0,1] \subset M$
intersecting $P$ exactly in $\gamma = \gamma \times \{0\}$ and
$\{\gamma(0),\gamma(1)\} \times [0,1]$.
\end{itemize} Such a curve $\gamma$ divides some regions of $P$ (those it touches) into discs: for each region $R$ divided by $\gamma$, we will call the resulting discs {\em sub-regions of $R$}; if $R$ is untouched by $\gamma$, then $R$ itself is its only sub-region. Let $\overline{\beta} = \{\overline{\beta}^{(1)},\ldots ,\overline{\beta}^{(n)}\}$ be a collection of sub-regions such that each $\overline{\beta}^{(i)}$ is a sub-region of $\beta^{(i)}$. The pair $(S,\overline{\beta})$, where $S = \gamma \times [0,1]$, will be called a {\em dividing strip} of $(P,\beta)$, and the triplet $(P,S,\overline{\beta})$ will be called a {\em divided spine} of $(M,\alpha)$.
\paragraph{Moves on divided spines} Let $(P,S,\overline{\beta})$ be a divided spine of $(M,\alpha)$. We start by defining the obvious generalizations of the positive \mbox{${\rm La}$}-~and \mbox{${\rm MPa}$}-moves and then we define two new moves to take into account the strip $S$. As for admissible moves on marked spines, we will say that an admissible move from $(P,\beta)$ to $(P',\beta')$ gives rise to a {\em
divided-admissible move} if there is a dividing strip $(S',\overline{\beta'})$ of $(P',\beta')$ such that $(P',S',\overline{\beta'})$ is a divided spine of $(M,\alpha)$, and $(S',\overline{\beta'})$ coincides with $(S,\overline{\beta})$ except ``near'' the portion of $P$ affected by the move. As it turns out, divided-admissibility depends on $S$. Moreover, $\overline{\beta'}$ is sometimes not unique.
\subparagraph{MPd-move} Let us consider a positive \mbox{${\rm MPa}$}-move $m$ from $(P,\beta)$ to another spine $(P',\beta')$ of $(M,\alpha)$, such that the strip $S$ is not involved in the move (namely, $S$ does not intersect the part of $P$ affected by $m$). Then, we will say that $m$ gives rise to an {\em \mbox{${\rm MPd}$}-move} from $(P,S,\overline{\beta})$ to $(P',S',\overline{\beta'})$ for any $\overline{\beta}$, where $S'$ coincides with $S$ and $\overline{\beta'}$ consists of the same sub-regions as $\overline{\beta}$ (recall that the newborn triangular region does not belong to $\beta'$). Note that an \mbox{${\rm MPd}$}-move always increases (by one) the number of vertices of $P$.
\subparagraph{Ld-move} For the \mbox{${\rm La}$}-moves, the situation is more complicated. Let us consider a positive \mbox{${\rm La}$}-move $m$ from $(P,\beta)$ to another spine $(P',\beta')$ of $(M,\alpha)$, such that the strip $S$ is not involved in the move (namely, $S$ does not intersect the part of $P$ affected by $m$). As above, we will say that $m$ gives rise to an {\em \mbox{${\rm Ld}$}-move} from $(P,S,\overline{\beta})$ to $(P',S',\overline{\beta'})$ for any $\overline{\beta}$, where $S'$ coincides with $S$ and $\overline{\beta'}$ is uniquely determined by $\overline{\beta}$ and $\beta'$. Let us describe $\overline{\beta'}$. Recall that $m$ divides a region $R$ of $P$ into two regions $R_1$ and $R_2$, see Fig.~\ref{L_move:fig}-left. Since the strip $S$ is not involved in the move $m$, the \mbox{${\rm Ld}$}-move divides a sub-region $\overline{R}$ of $(P,S,\overline{\beta})$ into two sub-regions $\overline{R_1}$ and $\overline{R_2}$ (where $\overline{R_i}$ is a sub-region of $R_i$, for $i=1,2$). Now, we have two cases depending on whether $\overline{R}$ belongs to $\overline{\beta}$ or not. If $\overline{R}$ does not belong to $\overline{\beta}$, then $\overline{\beta'}$ consists of the same sub-regions as $\overline{\beta}$ ({\em i.e.}~$\overline{R_1}$, $\overline{R_2}$, and the newborn little region $D$ do not belong to $\beta'$). If $\overline{R}$ belongs to $\overline{\beta}$, then we define $\overline{\beta'}$ as $(\overline{\beta} \setminus \{\overline{R}\}) \cup \{\overline{R_1}\}$ or $(\overline{\beta} \setminus \{\overline{R}\}) \cup \{\overline{R_2}\}$ depending on which region, between $R_1$ and $R_2$, belongs to $\beta'$. Note that an \mbox{${\rm Ld}$}-move always increases (by two) the number of vertices of $P$.
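The update rule for the marked sub-regions under an Ld-move can be sketched as follows. This is an illustrative fragment, not from the paper; all names are hypothetical, and `chosen` stands for whichever of the two pieces lies in the new marking $\beta'$.

```python
def ld_update(marked, old_sub, chosen):
    """Update the set of marked sub-regions after an Ld-move splitting
    the sub-region `old_sub` in two: if `old_sub` was marked, replace it
    by `chosen` (the piece belonging to the new marking); otherwise the
    marking is unchanged."""
    if old_sub not in marked:
        return set(marked)
    return (set(marked) - {old_sub}) | {chosen}
```

This mirrors the two cases in the definition: an unmarked sub-region splits without affecting $\overline{\beta}$, while a marked one passes its marking to exactly one of its two pieces.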
\subparagraph{Md-move} We call {\em \mbox{${\rm Md}$}-move} any move from a divided spine $(P,S,\overline{\beta})$ of $(M,\alpha)$ to another divided spine $(P',S',\overline{\beta'})$ of $(M,\alpha)$, where: \begin{itemize}
\item $P'$ coincides with $P$;
\item $S'$ is obtained from $S$ as in Fig.~\ref{M_move:fig} (we have
two cases depending on whether the endpoints of $\gamma$ are
involved in the move or not);
\begin{figure}\label{M_move:fig}
\end{figure}
\item $\overline{\beta'}$ coincides with $\overline{\beta}$ except
that the sub-region $R$, if it lies in $\overline{\beta}$, gets
replaced by the sub-region $R_1$.
\end{itemize} Note that an \mbox{${\rm Md}$}-move increases (by one) the number of intersections between $\gamma$ and the singularity of $P$, so it can be considered as being ``positive''.
\subparagraph{Nd-move} We call {\em \mbox{${\rm Nd}$}-move} any move from a divided spine $(P,S,\overline{\beta})$ of $(M,\alpha)$ to another divided spine $(P',S',\overline{\beta'})$ of $(M,\alpha)$, where: \begin{itemize}
\item $P'$ coincides with $P$;
\item $S'$ is obtained from $S$ as in Fig.~\ref{N_move:fig};
\begin{figure}\label{N_move:fig}
\end{figure}
\item $\overline{\beta'}$ coincides with $\overline{\beta}$ except
that the sub-regions $R$ and $R'$, if they lie in
$\overline{\beta}$, get replaced respectively by either the
sub-region $R_1$ or $R_2$, and by $R'_1$.
\end{itemize} Note that the choice of which sub-region, between $R_1$ and $R_2$, belongs to $\overline{\beta'}$ is included in the move. Finally, note that an \mbox{${\rm Nd}$}-move increases (by two) the number of intersections between $\gamma$ and the singularity of $P$, so it can be considered as being ``positive''.
\subparagraph{} If a spine $(P_2,\beta_2)$ is obtained from a spine $(P_1,\beta_1)$ via positive \mbox{${\rm La}$}-moves and positive \mbox{${\rm MPa}$}-moves, we will write $(P_1,\beta_1) \nearrow (P_2,\beta_2)$. If a divided spine $(P_2,S_2,\overline{\beta_2})$ is obtained from a divided spine $(P_1,S_1,\overline{\beta_1})$ via \mbox{${\rm Md}$}-,~\mbox{${\rm Nd}$}-,~\mbox{${\rm Ld}$}-, and \mbox{${\rm MPd}$}-moves, we will write $(P_1,S_1,\overline{\beta_1}) \nearrow (P_2,S_2,\overline{\beta_2})$.
\paragraph{Swelling} Now we define another move which, taking into account the dividing strip, transforms a divided spine into a (marked) spine. Let $(P,S,\overline{\beta})$ be a divided spine of a pair $(M,\alpha)$, where $S = \gamma \times [0,1]$. If we apply $m$ positive L-moves to $P$ along the curve $\gamma$ (following the orientation of $\gamma$), we obtain a spine, say $P'$, of $M$, see Fig.~\ref{swelling:fig}. \begin{figure}
\caption{The swelling.}
\label{swelling:fig}
\end{figure} Note that $m$ is one less than the number of intersections between $\gamma$ and the singularities of $P$. Noting that the collection $\overline{\beta}$ allows us to choose what regions of $P'$ remain in $\alpha$ after the L-moves (with a little abuse of notation, we continue calling $\overline{\beta}$ the collection of such regions), it turns out that the L-moves are admissible and that the pair $(P',\overline{\beta})$ is a marked spine of $(M,\alpha)$. The spine $(P',\overline{\beta})$ will be called {\em swelling of
$(P,\beta)$ along $(S,\overline{\beta})$} and will be denoted by $\swell{P,S,\overline{\beta}}$.
\begin{rem}\label{swelling:rem} For future reference, we underline the fact that\\ $(P,\beta) \nearrow \swell{P,S,\overline{\beta}}$. \end{rem}
\subsection{Existence of dominating marked spines}
Let us start with two preliminary results.
\begin{lemma}\label{dividing_curve:lem} Let $(P_1,S_1,\overline{\beta_1})$ be a divided spine of a pair $(M,\alpha)$ and let $(P_2,\beta_2)$ be a spine of the pair $(M,\alpha)$ such that $(P_1,\beta_1) \nearrow (P_2,\beta_2)$. Then there exists a dividing strip $(S_2,\overline{\beta_2})$ of $(P_2,\beta_2)$ such that $(P_1,S_1,\overline{\beta_1}) \nearrow (P_2,S_2,\overline{\beta_2})$. \end{lemma} \dimo{dividing_curve:lem} An easy induction on the number of positive moves transforming $(P_1,\beta_1)$ into $(P_2,\beta_2)$ reduces the problem to the case of a single positive move between $(P_1,\beta_1)$ and $(P_2,\beta_2)$. There are two moves to analyze: the positive \mbox{${\rm La}$}-move and the positive \mbox{${\rm MPa}$}-move. We concentrate on the first one (the second one being simpler). If necessary, we first apply \mbox{${\rm Nd}$}-moves to take the strip $S_1$ away from the part of $P_1$ affected by the \mbox{${\rm La}$}-move, see Fig.~\ref{dividing_curve_lemma:fig}-left. \begin{figure}\label{dividing_curve_lemma:fig}
\end{figure} Let us call $S'$ the strip just obtained. We impose that the collection $\overline{\beta'}$ consists of the same sub-regions as $\overline{\beta_1}$, unless a sub-region divided by one of these \mbox{${\rm Nd}$}-moves belongs to $\overline{\beta_1}$, in which case we choose which of the two new sub-regions belongs to $\overline{\beta'}$ following the choice given by the positive \mbox{${\rm La}$}-move. So $(P_1,S',\overline{\beta'})$ is a divided spine of $(M,\alpha)$. Now we are able to apply an \mbox{${\rm Ld}$}-move to $(P_1,S',\overline{\beta'})$ to obtain a divided spine $(P_2,S_2,\overline{\beta_2})$, see Fig.~\ref{dividing_curve_lemma:fig}-right. The pair $(S_2,\overline{\beta_2})$ is the dividing strip we are searching for. \finedimo{dividing_curve:lem}
\begin{lemma}\label{swelling:lem} If $(P_1,S_1,\overline{\beta_1}) \nearrow (P_2,S_2,\overline{\beta_2})$, then $\swell{P_1,S_1,\overline{\beta_1}} \nearrow \swell{P_2,S_2,\overline{\beta_2}}$. \end{lemma} \dimo{swelling:lem} An easy induction on the number of moves transforming $(P_1,S_1,\overline{\beta_1})$ into $(P_2,S_2,\overline{\beta_2})$ reduces the problem to the case of a single move between $(P_1,S_1,\overline{\beta_1})$ and $(P_2,S_2,\overline{\beta_2})$. There are four possible moves. If the move is an \mbox{${\rm Ld}$}-~or an \mbox{${\rm MPd}$}-move, then obviously $\swell{P_1,S_1,\overline{\beta_1}} \nearrow \swell{P_2,S_2,\overline{\beta_2}}$ because $S_1$ is ``far'' from the move. If the move is an \mbox{${\rm Md}$}-move, we have three cases: \begin{enumerate}
\item If $\gamma(0)$ is involved in the move (see
Fig.~\ref{M_move:fig}-right), then
$\swell{P_2,S_2,\overline{\beta_2}}$ is obtained from
$\swell{P_1,S_1,\overline{\beta_1}}$ via a positive \mbox{${\rm La}$}-move, as
shown in Fig.~\ref{M_move_swelling_lemma:fig}.
\begin{figure}\label{M_move_swelling_lemma:fig}
\end{figure}
Note that, if the region $R$ belongs to $\overline{\beta_1}$, we
choose to leave in $\overline{\beta_2}$ the region $R_1$; so the
spine obtained is exactly the swelling of $(P_2,\beta_2)$ along
$(S_2,\overline{\beta_2})$.
\item If neither $\gamma(0)$ nor $\gamma(1)$ is involved in the move
(see Fig.~\ref{M_move:fig}-left), then
$\swell{P_2,S_2,\overline{\beta_2}}$ is obtained from
$\swell{P_1,S_1,\overline{\beta_1}}$ via two positive \mbox{${\rm MPa}$}-moves.
\item If $\gamma(1)$ is involved in the move (see again
Fig.~\ref{M_move:fig}-right), then
$\swell{P_2,S_2,\overline{\beta_2}}$ is obtained from
$\swell{P_1,S_1,\overline{\beta_1}}$ via two positive \mbox{${\rm MPa}$}-moves.
\end{enumerate} If the move is an \mbox{${\rm Nd}$}-move (see Fig.~\ref{N_move:fig}), then $\swell{P_2,S_2,\overline{\beta_2}}$ is obtained from $\swell{P_1,S_1,\overline{\beta_1}}$ via two positive \mbox{${\rm La}$}-moves, as shown in Fig.~\ref{N_move_swelling_lemma:fig}. \begin{figure}\label{N_move_swelling_lemma:fig}
\end{figure} For the first \mbox{${\rm La}$}-move, if the region $R$ belongs to $\overline{\beta_1}$, we have to choose a region, between $R_1$ and $R_2$, to leave in $\overline{\beta_2}$: we choose the region depending on which sub-region, between $R_1$ and $R_2$, belongs to $\overline{\beta_2}$ after the \mbox{${\rm Nd}$}-move. For the second \mbox{${\rm La}$}-move, if the region $R'$ belongs to $\overline{\beta_1}$, we choose to leave in $\overline{\beta_2}$ the ``nearest'' region (between $R'_1$ and $R'_2$) to $\gamma(0)$. The spine obtained is exactly the swelling of $(P_2,\beta_2)$ along $(S_2,\overline{\beta_2})$. This concludes the proof. \finedimo{swelling:lem}
Now we are able to prove Theorem~\ref{gener_makov:teo}.
\dimo{gener_makov:teo} Let $(P_i,\beta_i)$ be the dual spine of $({\cal T}_i,\beta_i)$, for $i=1,2$. By applying Theorem~\ref{gener_MP:teo} and by noting that each \mbox{${\rm Va}$}-move is actually an \mbox{${\rm La}$}-move, we obtain a sequence $s$ of \mbox{${\rm La}$}-~and \mbox{${\rm MPa}$}-moves transforming $({\cal T}_1,\beta_1)$ into $({\cal T}_2,\beta_2)$. The sequence $s$ can be divided into (sub-)sequences $s_i$, with
$i=1,\ldots ,2l$, where the sequences $s_{2k+1}$ are composed of positive moves while the sequences $s_{2k}$ are composed of negative moves, and only $s_1$ and $s_{2l}$ could be empty. Let us call $|s_i|$ the number of moves of the sequence $s_i$. An easy induction on $S=\sum_{k=1}^{l-1} |s_{2k+1}|$ reduces the proof to the following statement.
{\em If $(P_2,\beta_2)$ is obtained from $(P_1,\beta_1)$ via a
sequence $s$ such that $l=2$, $|s_1|=0$, $|s_2|>0$, $|s_3|=1$ and
$|s_4|=0$, then there exists another sequence $s'$, transforming
$(P_1,\beta_1)$ into $(P_2,\beta_2)$, such that $l=1$}.
The proof of this statement concludes the proof of the theorem. We have to prove that there exists a spine $(P,\beta)$ such that $(P_1,\beta_1) \nearrow (P,\beta) \nwarrow (P_2,\beta_2)$. If we call $(Q,\beta')$ the spine before the positive move $m$ of the sequence $s_3$, we have that $(P_1,\beta_1) \nwarrow (Q,\beta') \nearrow (P_2,\beta_2)$. Let us start by choosing a dividing strip $(S',\overline{\beta'})$ for $(Q,\beta')$ (we have two cases depending on $m$). \begin{itemize}
\item If $m$ is a positive \mbox{${\rm La}$}-move, we choose as $\gamma'$ the curve
determining $m$.
Note that there are two different strips $S' = \gamma' \times [0,1]$
(up to isotopy): we choose one of them (the choice is immaterial).
If the region of $Q$ divided by $\gamma'$ is one of the
$(\beta')^{(i)}$'s, we choose the $(\overline{\beta'})^{(i)}$
following the choice given by $m$.
\item If $m$ is a positive \mbox{${\rm MPa}$}-move, we choose as $\gamma'$ a curve
parallel to the edge $e$ of $Q$ disappearing during $m$.
As above there are two different strips: we choose one of them.
If the region of $Q$ divided by $\gamma'$ is one of the
$(\beta')^{(i)}$'s, we choose as $(\overline{\beta'})^{(i)}$ the
sub-region which is not adjacent (locally) to $e$.
\end{itemize}
By Lemma~\ref{dividing_curve:lem}, there exists a dividing strip $(S_1,\overline{\beta_1})$ for $(P_1,\beta_1)$ such that $(P_1,S_1,\overline{\beta_1}) \nwarrow (Q,S',\overline{\beta'})$; so, by Lemma~\ref{swelling:lem}, $\swell{P_1,S_1,\overline{\beta_1}} \nwarrow \swell{Q,S',\overline{\beta'}}$. By Remark~\ref{swelling:rem}, we have that $(P_1,\beta_1) \nearrow \swell{P_1,S_1,\overline{\beta_1}}$. Finally, we have two cases depending on $m$. \begin{itemize}
\item If $m$ is a positive \mbox{${\rm La}$}-move, then
$\swell{Q,S',\overline{\beta'}} = (P_2,\beta_2)$; so we have that
$$
(P_1,\beta_1) \nearrow \swell{P_1,S_1,\overline{\beta_1}} \nwarrow
\swell{Q,S',\overline{\beta'}} = (P_2,\beta_2).
$$
\item If $m$ is a positive \mbox{${\rm MPa}$}-move, then
$\swell{Q,S',\overline{\beta'}}$ can be obtained from
$(P_2,\beta_2)$ via a positive \mbox{${\rm MPa}$}-move, see
Fig.~\ref{MP_swelling:fig}; so we have that
$$
(P_1,\beta_1) \nearrow \swell{P_1,S_1,\overline{\beta_1}} \nwarrow
\swell{Q,S',\overline{\beta'}} \nwarrow (P_2,\beta_2).
$$
\begin{figure}\label{MP_swelling:fig}
\end{figure}
\end{itemize} \finedimo{gener_makov:teo}
\section{Applications}
In this section we describe two applications of the previous results. The first one is due to Baseilhac and Benedetti~\cite{BB1,BB2,BB3}. The second one is a natural question that arose in a work of Frigerio and Petronio~\cite{Frig-Petr}.
\subsection{Links in 3-manifolds}\label{hamiltonian:subsec}
Let $M$ be a closed 3-manifold and $L$ a link in $M$. A pair $({\cal T},{\cal L})$ is said to be a {\em distinguished
triangulation} of the pair $(M,L)$ if ${\cal T}$ is a loose triangulation of $M$, the link $L$ is triangulated by ${\cal L}$ and ${\cal L}$ is a Hamiltonian sub-complex of ${\cal T}$ ({\em i.e.}~each vertex of ${\cal T}$ is an endpoint of exactly two germs of edges of ${\cal L}$). As we have done for marked ideal triangulations, we can define ({\em
positive} and {\em negative}) {\em admissible} MP-~and L-moves between distinguished triangulations. We need another move allowing us to change the number of vertices of ${\cal T}$. We will say that the distinguished triangulation $({\cal T}',{\cal L}')$ is obtained from the distinguished triangulation $({\cal T},{\cal L})$ via a {\em positive admissible} B-move if \begin{itemize} \item
${\cal T}'$ is obtained from ${\cal T}$ via a positive B-move, \item
one edge $e$ of the tetrahedron $T$ involved in the move belongs to ${\cal L}$, \item
${\cal L}'$ coincides with ${\cal L}$ except for the edge $e$ which is
substituted with the other two edges of the only triangle of
${\cal T}'$ created by the B-move and containing $e$.
\begin{figure}
\caption{An admissible B-move on a distinguished triangulation.
(The link is drawn thick.)}
\label{distinguished_B_move:fig}
\end{figure} \end{itemize} See Fig.~\ref{distinguished_B_move:fig}. Obviously, a {\em negative admissible} B-move between distinguished triangulations is defined as the inverse of a positive admissible B-move.
Now we are able to prove the calculus for distinguished triangulations. \begin{cor}\label{distinguished_calculus:cor} Two distinguished triangulations of a pair $(M,L)$ can be obtained from each other via a sequence of admissible {\rm B}-~and {\rm MP}-moves. \end{cor} \dimo{distinguished_calculus:cor} Let $({\cal T}_1,{\cal L}_1)$ and $({\cal T}_2,{\cal L}_2)$ be two distinguished triangulations of $(M,L)$. Obviously, up to applying suitable admissible B-moves, we can suppose that $({\cal T}_1,{\cal L}_1)$ and $({\cal T}_2,{\cal L}_2)$ have the same number of vertices on each component of $L$. Moreover, up to isotopy, we can suppose that the links ${\cal L}_i$ coincide with $L$, and that the vertices of ${\cal T}_1$ and the vertices of ${\cal T}_2$ coincide with each other.
Now, we remove a little star of each vertex of ${\cal T}_i$: let us call $\overline{M}$ the manifold just obtained. Obviously, after removing the balls, the link $L$ becomes a collection of arcs, say $\overline{L}$, and, for $i=1,2$, the pair $({\cal T}_i,{\cal L}_i)$ is a marked loose triangulation corresponding to a marked ideal triangulation of $(\overline{M},\overline{L})$. So, by applying Corollary~\ref{gener_MP_senza_V:cor}, we obtain that $({\cal T}_2,{\cal L}_2)$ can be obtained from $({\cal T}_1,{\cal L}_1)$ via admissible MP-moves. This concludes the proof. \finedimo{distinguished_calculus:cor}
Using the same technique (and Theorem~\ref{gener_makov:teo}), the following result on dominating distinguished triangulations can be proved. \begin{cor} Let $({\cal T}_1,{\cal L}_1)$ and $({\cal T}_2,{\cal L}_2)$ be two distinguished triangulations of a pair $(M,L)$. Then there exists a distinguished triangulation $({\cal T},{\cal L})$ of $(M,L)$ obtained from both $({\cal T}_1,{\cal L}_1)$ and $({\cal T}_2,{\cal L}_2)$ via a sequence of admissible positive {\rm B}-,~{\rm L}-,~and {\rm
MP}-moves. \end{cor}
\subsection{Partially truncated triangulations}\label{part_trunc_tria:subsec}
In this subsection we briefly describe a generalization of ideal triangulations which is useful to study complete finite-volume orientable hyperbolic 3-manifolds with geodesic boundary~\cite{Frig-Petr}. (For the sake of shortness, in the rest of the subsection we will just say {\em hyperbolic}.) For a complete description see~\cite{Frig-Petr}.
Let $N$ be such a hyperbolic manifold. It is a fact that $N$ consists of a compact portion together with some cusps based either on tori or on annuli. This fact implies that $N$ has a natural compactification $\overline{N}$ obtained from $N$ by adding some tori and some annuli. Let us call $C$ and $A$ the collection of such tori and such annuli, respectively. It is a fact that $N$ can be obtained in a non-ambiguous way from the pair $(\overline{N},A)$ by removing from $\overline{N}$ both $A$ and {\em all} the toric components of $\partial\overline{N}$. Moreover, there is no sphere in $\partial\overline{N}$ and there is no annulus in $A$ which is contained in a torus of $C$.
Let us describe now a generalization of ideal triangulations, which takes into account the annuli. Let us start by defining the pieces substituting ideal tetrahedra. A {\em partially truncated tetrahedron} is a triple $(T,I,Z)$ where $T$ is a tetrahedron, $I$ is a set of vertices of $T$ (called {\em
ideal vertices}), and $Z$ is a set of edges of $T$ (called {\em
length-$0$ edges}) such that neither of the two endpoints of an edge in $Z$ belongs to $I$. Now we define the {\em topological realization} of a partially truncated tetrahedron $(T,I,Z)$ as the space $T^*$ obtained by removing from the tetrahedron $T$ the ideal vertices, the length-0 edges, and small open stars of the non-ideal vertices. We call {\em lateral hexagon} and {\em truncation triangle} the intersection of $T^*$ respectively with a face of $T$ and with the link in $T$ of a non-ideal vertex. Note that, if $(T,I,Z)$ has a length-0 edge, some vertices of a truncation triangle of $T^*$ may be missing and, if $(T,I,Z)$ has ideal vertices or length-0 edges, a lateral hexagon of $T^*$ may not be a hexagon, because some of its edges may be missing. See Fig.~\ref{part_trunc_tetra:fig}. \begin{figure}
\caption{A partially truncated tetrahedron with one ideal vertex and
one length-0 edge (on the left) and its topological realization
(on the right).}
\label{part_trunc_tetra:fig}
\end{figure}
Let us consider now a manifold $N$ which is a candidate to be hyperbolic. Namely, let $\overline{N}$ be a compact orientable manifold, having no sphere in the boundary, and let $A \subset \partial\overline{N}$ be a family of disjoint annuli not lying on the toric components of $\partial\overline{N}$; let $N$ be obtained from $\overline{N}$ by removing $A$ and the toric components. Finally, we define a {\em partially truncated triangulation} of $N$ as a realization of $N$ as the gluing of some $T^*$'s along a pairing of the lateral hexagons induced by a simplicial pairing of the faces of the $T$'s. Note that the truncation triangles of the $T^*$'s give a triangulation of $\partial N$ with some genuine and some ideal vertices, the links of the ideal vertices of the $T$'s give a triangulation of the toric components of $\partial\overline{N}$, and the links of the length-0 edges of the $T$'s give a decomposition into rectangles of the annuli in $A$.
Let us now translate the theory of partially truncated triangulations into the language of marked ideal triangulations. Let us consider $\overline{N}$ as above and let us collapse every annulus $[-1,1]\times S^1 \in A$ to an arc $[-1,1]\times\{*\}$. It turns out that the space just obtained, say $N'$, is a compact 3-manifold and each $[-1,1]\times\{*\}$ is an arc properly embedded in $N'$. Let us call $\alpha_N$ the family of the arcs $[-1,1]\times\{*\}$ in $N'$. It is a fact that partially truncated triangulations of $N$ bijectively correspond to marked ideal triangulations of the pair $(N',\alpha_N)$; under this correspondence, the length-0 edges and the ideal vertices correspond respectively to the edges in $\alpha_N$ and to the vertices on the tori of $\partial N'$ on which there are no ends of arcs in $\alpha_N$.
Obviously, the admissible MP-~and V-moves between marked ideal triangulations of $(N,\alpha_N)$ translate into moves between partially truncated triangulations of $N$. Let us call {\em admissible {\rm MP}-~{\rm and V}-moves} also such moves between partially truncated triangulations. Now, Theorem~\ref{gener_MP:teo} and Corollary~\ref{gener_MP_senza_V:cor} imply the following. \begin{cor} Two partially truncated triangulations of $N$ can be obtained from each other via a sequence of admissible {\rm V}-~and {\rm MP}-moves. If moreover the two partially truncated triangulations have at least two tetrahedra, then they can be obtained from each other via a sequence of admissible {\rm MP}-moves only. \end{cor}
\noindent amendola@mail.dm.unipi.it,\\ Dipartimento di Matematica,\\ Via F. Buonarroti 2,\\ I-56127 PISA
\end{document}
\begin{document}
\title{DATGAN: Integrating expert knowledge into deep learning for synthetic tabular data}
\begin{abstract}
Synthetic data can be used in various applications, such as correcting biased datasets or replacing scarce original data for simulation purposes. Generative Adversarial Networks (GANs) are considered state-of-the-art for developing generative models. However, these deep learning models are data-driven, and it is, thus, difficult to control the generation process. This can lead to the following issues: lack of representativity in the generated data, the introduction of bias, and the possibility of overfitting the sample's noise. This article presents the Directed Acyclic Tabular GAN (DATGAN) to address these limitations by integrating expert knowledge into deep learning models for synthetic tabular data generation. This approach allows the interactions between variables to be specified explicitly using a Directed Acyclic Graph (DAG). The DAG is then converted into a network of Long Short-Term Memory (LSTM) cells modified to accept multiple inputs. Multiple versions of the DATGAN are systematically tested against several assessment metrics. We show that the best versions of the DATGAN outperform state-of-the-art generative models on multiple case studies. Finally, we show how the DAG can be used to create hypothetical synthetic datasets.
\end{abstract}
\keywords{Tabular Data Synthesis \and Generative Adversarial Networks \and Expert Knowledge}
\tableofcontents
\section{Introduction} \label{sec:introduction}
A massive increase in data availability has created tremendous opportunities for targeted modeling and a greater understanding of systems, particularly those involving human behavior. However, this reliance on data creates a divide between data-rich and data-poor contexts. For example, leading international cities in developed nations produce rich data about population movements and interactions with infrastructure, while developing nations have much lower data availability. The collection of such data, particularly socio-economic data, can be prohibitively expensive, which can put modeling out of reach for data-poor areas. Furthermore, data can be controlled by certain groups (companies, governments, or public agencies), who may be unwilling or unable to make full data publicly available. In addition, the sharing of detailed disaggregated socio-economic data has become increasingly complex with the current focus on data privacy (GDPR). Thus, synthetic data generation, \emph{i.e.} the creation of synthetic data samples which are consistent with the true population, has the potential to address many of these limitations.
There are multiple use cases for synthetic tabular data: \begin{inlinelist} \item The most common use case is dataset augmentation. It can allow researchers and modelers to approximate a large population from a smaller sample, thus reducing the cost of data collection. \item Secondly, synthetic data can be used for privacy preservation. It can then enable the sharing of detailed disaggregate populations without contravening GDPR and other data privacy laws. \item Another use case is bias correction. Synthetic data can correct bias in existing samples, allowing for reliable modeling of marginal and minority groups and behavior. \item Finally, synthetic data generation models can be used as transfer learning methods. They can thus be used to transfer data from one city or context to a new context, allowing for detailed modeling where existing high-quality data is not available. \end{inlinelist} In this paper, we focus mainly on synthetic population generation. Such populations are generally used for simulation in agent-based models, particularly for activity-based transport models. However, the techniques proposed and reviewed in this article can be used in any context where there is a need for detailed tabular datasets.
Many methods have been developed to generate such synthetic populations in existing studies. The two main approaches are statistical techniques, such as Iterative Proportional Fitting (IPF)~\citep{deming_least_1940} or simulation using Gibbs sampling~\citep{geman_stochastic_1984}, and machine learning techniques. While the former have been well studied within the transportation community, the latter come from the Machine Learning community and generally focus on general synthetic data. These deep learning methods, such as Generative Adversarial Networks (GANs)~\citep{goodfellow_generative_2014}, have already been tested against standard statistical techniques and outperform them at reproducing correlations in synthetic datasets. However, these methods are data-driven and, therefore, lack control over the generation process. Without controlling the latter, it is impossible to know how well the deep learning models have understood the original sample, \emph{i.e.} which correlations between the variables in the original dataset the models have learned. This can lead to spurious correlations or the propagation of existing bias in the sample. In addition, there is generally no focus on the representativity of the output, which is crucial for the accurate understanding of socio-economic characteristics.
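To make the contrast with the data-driven deep learning methods concrete, the classical IPF update can be sketched in a few lines: it alternately rescales a seed contingency table until its row and column sums match target marginals. The sketch below is only illustrative (the function name, variable names, and the toy $2 \times 2$ example are ours, not taken from the cited works):

```python
import numpy as np

def ipf(seed, row_targets, col_targets, tol=1e-8, max_iter=1000):
    """Fit a seed contingency table to target marginals by IPF.

    Alternately rescales rows and columns until both sets of
    marginals are matched (targets must share the same total).
    """
    table = seed.astype(float).copy()
    for _ in range(max_iter):
        # Scale each row so that the row sums match the row targets.
        table *= (row_targets / table.sum(axis=1))[:, None]
        # Scale each column so that the column sums match the column targets.
        table *= col_targets / table.sum(axis=0)
        # Stop once the row sums (perturbed by the column step) are matched too.
        if np.allclose(table.sum(axis=1), row_targets, atol=tol):
            break
    return table

# Toy example: fit a 2x2 seed to new marginals (both summing to 100).
seed = np.array([[30.0, 20.0], [10.0, 40.0]])
fitted = ipf(seed, row_targets=np.array([60.0, 40.0]),
             col_targets=np.array([50.0, 50.0]))
```

As the table in the next pages notes, this basic form is efficient but captures no interactions beyond the fitted marginals.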
This article, thus, proposes a novel model that controls the generation process of such synthetic tabular data. We propose to let the researcher or modeler design a network to represent the interactions between the variables with a Directed Acyclic Graph (DAG). This DAG is then used to define the structure of the model that generates the population. Allowing researchers to control the process has three main advantages: they can tinker with the data generation process, create hypothetical datasets, and control the dependencies for forecasting. In this article, we thus present our new GAN model named Directed Acyclic Tabular GAN (DATGAN). We show that it outperforms state-of-the-art synthetic data generators on multiple metrics. These metrics have been created to allow for systematic testing both using formal statistical analysis and supervised learning-based approaches. We also provide a sensitivity analysis on the DAG to show its effect on the data generation process. Finally, we show how the DAG can be used to create hypothetical situations and generate a synthetic dataset based on the new rules.
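In the DATGAN itself, the DAG is converted into a network of modified LSTM cells, as detailed in Section~\ref{sec:methodology}. Purely as an illustration of the underlying ordering idea, the following sketch (hypothetical variable names, Python standard library only) shows how a DAG over the variables yields a generation order in which every variable is produced only after all of its parents:

```python
from graphlib import TopologicalSorter

# Hypothetical DAG over four socio-economic variables (names are
# illustrative, not taken from the paper). The mapping reads
# {variable: set of its parents}; an edge parent -> child means the
# child is generated conditionally on the parent.
dag = {
    "age": set(),
    "education": {"age"},
    "income": {"age", "education"},
    "car_ownership": {"income"},
}

# A topological order of the DAG is a valid generation order: each
# variable appears only after all of its parents.
order = list(TopologicalSorter(dag).static_order())
print(order)  # ['age', 'education', 'income', 'car_ownership']
```

In this toy DAG, the chain of dependencies forces a unique order; in general, any topological order would be admissible.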
The rest of this article is laid out as follows. In the next section, we present the Literature Review. We first introduce the existing approaches for population synthesis and then discuss the different research axes. Finally, we conclude the literature review with the opportunities and limitations of existing research. In Section~\ref{sec:methodology}, we present the whole methodology for the DATGAN. We discuss how to preprocess the data, what models are used for the generator and the discriminator, and how to use the DAG to create the generator's structure using LSTM cells. Section~\ref{sec:case_study} presents the case studies and Section~\ref{sec:results} shows the results. We conclude this article in Section~\ref{sec:conclusion} and give ideas for future work. In addition to this article, we provide supplementary materials containing a notation table for the methodology, a comparison of multiple DATGAN versions, and all the detailed results used in Section~\ref{sec:results}. \section{Literature Review} \label{sec:lit_rev}
There are five main research axes for synthetic tabular data generation: simulation/activity-based modeling, Machine Learning efficacy, bias correction, privacy preservation, and transfer learning. These research axes are discussed in detail in Section~\ref{sec:research_axes}. The literature review first focuses on population synthesis with older methods such as Iterative Proportional Fitting (IPF) and Gibbs sampling. Then, we look at more general Machine Learning techniques for synthetic tabular data generation. These techniques are discussed in detail in Section~\ref{sec:existing_approaches}. Then, in Section~\ref{sec:sota_models}, we discuss in more detail some state-of-the-art models that we selected to compare to the model presented in this article. Next, Section~\ref{sec:model_eval} is dedicated to model evaluation and shows how the transportation and Machine Learning communities are evaluating generated synthetic datasets. Finally, in Section~\ref{sec:opportunities}, we discuss the opportunities and limitations of these techniques linked to the five research axes.
\afterpage{ \begin{landscape} \mbox{}
\begin{table}[!h]
\centering
\caption{Main methods for synthetic tabular data generation found in the transportation literature and in the Machine Learning community.}
\label{tab:existing_techniques}
\begin{tabularx}{\hsize}{>{\hsize=.5\hsize}C|>{\hsize=.5\hsize}C||>{\hsize=1.45\hsize}X|>{\hsize=.85\hsize}C|>{\hsize=1.35\hsize}L|>{\hsize=1.35\hsize}L}
\multicolumn{1}{c|}{\textbf{Categories}} & \multicolumn{1}{c||}{\textbf{Methods}} & \multicolumn{1}{c|}{\textbf{Description}} & \multicolumn{1}{c|}{\textbf{References}} & \multicolumn{1}{c|}{\textbf{Advantages}} & \multicolumn{1}{c}{\textbf{Disadvantages}} \\ \midrule[1.5pt]
\multirow{6}{*}{\makecell{Statistical\\techniques}} &
IPF &
Iterative method using marginals to create a synthetic table. &
\cite{auld_population_2009};
\cite{barthelemy_synthetic_2013};
\cite{rich_large-scale_2018}
&
~~\llap{\textbullet}~~ Efficient in its basic form \vskip \arraystretch pt
\hspace*{0pt}~~\llap{\textbullet}~~ Simple to implement
&
~~\llap{\textbullet}~~ No interactions between variables \vskip \arraystretch pt
\hspace*{0pt}~~\llap{\textbullet}~~ Computationally expensive if more complexity added \vskip \arraystretch pt
\hspace*{0pt}~~\llap{\textbullet}~~ Prone to sampling zero issue \vskip \arraystretch pt
\hspace*{0pt}~~\llap{\textbullet}~~ No differences between data types
\\ \cline{2-6}
&
Gibbs sampling &
Gibbs sampler trained until reaching stationary state on prepared conditionals. &
\cite{farooq_simulation_2013};
\cite{casati_synthetic_2015};
\cite{kim_simulated_2016}
&
~~\llap{\textbullet}~~ Learns from marginals \vskip \arraystretch pt
\hspace*{0pt}~~\llap{\textbullet}~~ Outperforms IPF \vskip \arraystretch pt
\hspace*{0pt}~~\llap{\textbullet}~~ Can be linked to other methods
&
~~\llap{\textbullet}~~ No interactions between variables \vskip \arraystretch pt
\hspace*{0pt}~~\llap{\textbullet}~~ Computationally expensive \vskip \arraystretch pt
\hspace*{0pt}~~\llap{\textbullet}~~ Probability distributions must be assumed a priori \\ \midrule
&
Bayesian networks &
Probabilistic graphical model used to determine probabilistic inferences between the variables. &
\cite{sun_bayesian_2015};
\cite{zhang_connected_2019}
&
~~\llap{\textbullet}~~ Dependencies of variables defined prior to training \vskip \arraystretch pt
\hspace*{0pt}~~\llap{\textbullet}~~ Probabilistic model
&
~~\llap{\textbullet}~~ Requires prior information on the dataset \vskip \arraystretch pt
\hspace*{0pt}~~\llap{\textbullet}~~ Computationally expensive when dealing with large and sparse datasets
\\ \cline{2-6}
Machine Learning techniques &
VAE &
Pair of neural networks composed of an encoder and a decoder. Transforms data in a latent space to reduce its dimensionality. Encoding-decoding scheme has to be learned. &
\cite{garrido_prediction_2019};
\cite{xu_modeling_2019}
&
~~\llap{\textbullet}~~ Aims at learning a latent representation of the variables \vskip \arraystretch pt
\hspace*{0pt}~~\llap{\textbullet}~~ Latent space is suitable for inference and completion of data
&
~~\llap{\textbullet}~~ Might not be able to learn the true posterior distribution \vskip \arraystretch pt
\hspace*{0pt}~~\llap{\textbullet}~~ Usually outperformed by GANs
\\ \cline{2-6}
&
GAN &
Pair of neural networks composed of a generator and a discriminator. The generator is trained to fool the discriminator. Learning process is a two players minimax game. &
\cite{goodfellow_generative_2014};
\cite{xu_synthesizing_2018};
\cite{xu_modeling_2019};
\cite{zhao_ctab-gan_2021}
&
~~\llap{\textbullet}~~ Generator never sees true data (privacy ensured) \vskip \arraystretch pt
\hspace*{0pt}~~\llap{\textbullet}~~ Architectures of both neural networks are flexible \vskip \arraystretch pt
\hspace*{0pt}~~\llap{\textbullet}~~ Current state-of-the-art generative model
&
~~\llap{\textbullet}~~ Equilibrium between both neural networks difficult to achieve \vskip \arraystretch pt
\hspace*{0pt}~~\llap{\textbullet}~~ Dependencies of variables cannot be controlled
\end{tabularx} \end{table}
\end{landscape} }
\subsection{Existing approaches for synthetic tabular data generation} \label{sec:existing_approaches}
One of the primary uses of synthetic tabular data has been for the creation of synthetic populations, in particular for transportation research. As a result, many research contributions focus specifically on this topic, using different techniques. Therefore, we focus this review first on synthetic population generation and then introduce more general-purpose Machine Learning algorithms that have been proposed in the literature, which could also be applied for this purpose.
We can identify two main categories of techniques for synthetic tabular data generation: Statistical techniques such as resampling methods and simulation-based methods (see Section~\ref{sec:stat_techniques}), and Machine Learning methods (see Section~\ref{sec:deep_learning}). In the following sections, we discuss these different methods in detail. Table~\ref{tab:existing_techniques} gives a summary of the main methods in each category with a list of references, advantages, and disadvantages.
\subsubsection{Statistical techniques} \label{sec:stat_techniques}
There are two main methods to generate synthetic populations within the transportation community: resampling and simulation-based approaches. The first one is based on Iterative Proportional Fitting methods (IPF)~\citep{deming_least_1940}. It consists of proportionally adjusting a matrix to produce a new table such that the specified marginals are individually conserved. \cite{beckman_creating_1996} were the first to use this methodology to create a synthetic population, based on the SF3 (San Francisco area) census data. \cite{auld_population_2009} and \cite{barthelemy_synthetic_2013} both propose to improve the IPF methodology using multi-step procedures. While IPF methods are simple to implement, this technique has multiple significant limitations in generating highly detailed and realistic synthetic tabular data. Firstly, the basic algorithm does not capture interactions between the variables. It is possible to add these interactions by adding more dimensions to the table. However, for each level of interaction, one more dimension has to be added to the table, so the algorithm quickly becomes computationally expensive. In addition, IPF cannot differentiate between structural and sampling zeros. Multiple methods have been suggested in the literature to avoid sampling zero issues, such as \cite{auld_population_2009}. Finally, IPF cannot differentiate between the different data types (categorical and continuous). Thus, researchers have developed new techniques to generate synthetic populations, such as Markov Chain Monte Carlo (MCMC) simulation.
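As an illustration, the basic two-dimensional IPF update can be sketched in a few lines of Python (the function and variable names are ours; production implementations handle multi-way tables and add convergence diagnostics). Note how a cell that is zero in the seed table stays zero forever, which is the sampling-zero issue mentioned above:

```python
import numpy as np

def ipf(seed, row_targets, col_targets, max_iter=100, tol=1e-8):
    """Iterative Proportional Fitting: rescale a seed contingency table
    until its row and column sums match the target marginals."""
    table = seed.astype(float).copy()
    for _ in range(max_iter):
        # Rescale each row to match the row marginals.
        row_sums = table.sum(axis=1, keepdims=True)
        table *= row_targets.reshape(-1, 1) / np.where(row_sums == 0, 1, row_sums)
        # Rescale each column to match the column marginals.
        col_sums = table.sum(axis=0, keepdims=True)
        table *= col_targets.reshape(1, -1) / np.where(col_sums == 0, 1, col_sums)
        if (np.abs(table.sum(axis=1) - row_targets).max() < tol
                and np.abs(table.sum(axis=0) - col_targets).max() < tol):
            break
    return table

# Toy example: a 2x2 seed table adjusted to new marginals.
seed = np.array([[10.0, 20.0], [30.0, 40.0]])
fitted = ipf(seed,
             row_targets=np.array([40.0, 60.0]),
             col_targets=np.array([50.0, 50.0]))
```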
\cite{farooq_simulation_2013} propose to use Markov Chain Monte Carlo (MCMC) simulation with Gibbs sampling to generate synthetic populations. The idea is to draw from an (unknown) multi-dimensional random variable characterizing the distribution of individuals in the population using a Gibbs sampler. The conditional distributions used by the Gibbs sampler are generated from real data. Since the full conditionals are rarely available for all the attributes in the original data, the authors use a parametric model to construct the missing conditional distributions. The authors show that this simulation technique outperforms IPF methods on multiple statistical metrics such as $R^2$ and the Standardized Root Mean Squared Error (SRMSE)~\citep{muller_population_2010}. Multiple improvements have been made to the original method~\citep{casati_synthetic_2015, kim_simulated_2016, philips_fine_2017}. However, while simulation-based techniques outperform IPF techniques, these methods still have limitations in the context of synthetic population generation. The main issue is that the models work with conditionals only. This can be an advantage if only this information is available. However, since MCMC methods must assume the type of probability distributions that the variables follow, wrong assumptions can lead to fundamentally incorrect distributions.
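The core loop of such a sampler is compact. The sketch below draws a joint distribution over two binary attributes from their full conditionals only (the conditional probabilities here are invented for illustration; in the cited work they are estimated from real data or from a parametric model):

```python
import random

def gibbs_sample(p_x_given_y, p_y_given_x, n_draws=5000, burn_in=500, seed=0):
    """Draw joint samples of two binary attributes using only their full
    conditionals, as a Gibbs-sampling population synthesizer does."""
    rng = random.Random(seed)
    x, y = 0, 0
    draws = []
    for t in range(burn_in + n_draws):
        x = 1 if rng.random() < p_x_given_y[y] else 0  # resample x given y
        y = 1 if rng.random() < p_y_given_x[x] else 0  # resample y given x
        if t >= burn_in:  # discard the burn-in period
            draws.append((x, y))
    return draws

# Illustrative conditionals P(x=1 | y) and P(y=1 | x).
draws = gibbs_sample(p_x_given_y={0: 0.2, 1: 0.7},
                     p_y_given_x={0: 0.3, 1: 0.8})
```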
While these statistical methods have been widely used in the transportation community, they are outdated compared to Machine Learning techniques. For example, \cite{borysov_how_2019} show that their Machine Learning-based approaches outperform MCMC-based approaches on multiple criteria. Therefore, in this article, we concentrate on testing our methodology against state-of-the-art Machine Learning methods.
\subsubsection{Machine Learning techniques} \label{sec:deep_learning}
Recent advances in Machine Learning and data generation techniques have enabled new approaches for generating synthetic populations. We identify three primary Machine Learning-based approaches that have been used for this purpose: Bayesian networks, Variational AutoEncoder (VAE), and Generative Adversarial Networks (GANs).
\cite{sun_bayesian_2015} use Bayesian networks to generate such populations. Bayesian networks are graphical models used to encode probability distributions for a set of variables. They use a Directed Acyclic Graph (DAG) to represent the dependencies between the variables, together with a set of local conditional probability distributions, one for each variable in the original table. The authors show that their model outperforms both IPF and Gibbs sampling. \cite{zhang_connected_2019} extend this concept further by using a three-step procedure to generate a population and its social network. They use a Bayesian network to create a synthetic population of households, an integer program with Lagrangian relaxation for the assignment problem, and an Exponential Random Graph Model (ERGM) for the social network simulation.
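Once the DAG and the local conditional distributions are fixed, sampling a synthetic individual reduces to ancestral sampling: each variable is drawn given the already-sampled values of its parents. A toy sketch (the network structure and all probabilities are illustrative, not taken from the cited papers):

```python
import random

# Toy Bayesian network over three variables with the DAG
# age_group -> employed -> car_owner.
P_AGE = {"young": 0.4, "old": 0.6}   # marginal of the root node
P_EMP = {"young": 0.7, "old": 0.5}   # P(employed=1 | age_group)
P_CAR = {0: 0.2, 1: 0.6}             # P(car_owner=1 | employed)

def sample_individual(rng):
    """Ancestral sampling: draw each variable given its parents."""
    age = "young" if rng.random() < P_AGE["young"] else "old"
    emp = 1 if rng.random() < P_EMP[age] else 0
    car = 1 if rng.random() < P_CAR[emp] else 0
    return {"age_group": age, "employed": emp, "car_owner": car}

rng = random.Random(1)
population = [sample_individual(rng) for _ in range(1000)]
```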
Variational AutoEncoders (VAEs)~\citep{kingma_auto-encoding_2014} aim to reduce the dimensionality of the data into an encoded vector in the latent space. Data can then be generated more easily in this latent space since it is smaller in dimensionality. For example, \cite{borysov_how_2019} have used a VAE to generate a synthetic population. They demonstrated that their VAE model outperforms both IPF and Gibbs sampling for generating complex data. However, this type of method has quickly been outperformed by the current state-of-the-art method for generating synthetic data: Generative Adversarial Networks (GANs).
Generative Adversarial Networks (GANs)~\citep{goodfellow_generative_2014} make use of two neural networks, the \emph{generator} and the \emph{discriminator}, which compete against each other on independent, unsupervised tasks. The generator processes random noise to produce synthetic data. The discriminator (or critic) then evaluates the synthetic data against original data, providing for each data point a classification or a continuous score indicating whether it is original or synthetic. The generator is then trained through backpropagation. Figure~\ref{fig:GAN} shows the schematic representation of a GAN.
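In the notation of \cite{goodfellow_generative_2014}, this two-player game can be written as the minimax objective
\begin{equation*}
\min_G \max_D \; \mathbb{E}_{x \sim p_{\text{data}}}\left[\log D(x)\right] + \mathbb{E}_{z \sim p_z}\left[\log\left(1 - D(G(z))\right)\right],
\end{equation*}
where $G$ is the generator, $D$ the discriminator, $p_{\text{data}}$ the distribution of the original data, and $p_z$ the noise distribution fed to the generator.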
\begin{figure}
\caption{Schematic representation of the standard GAN structure.}
\label{fig:GAN}
\end{figure}
GANs have quickly evolved to become more specialized. For example, \cite{arjovsky_wasserstein_2017} demonstrate that the use of a discrete loss function results in issues such as vanishing gradients. They thus propose an alternative continuous loss function based on the Wasserstein distance. This GAN is therefore named Wasserstein GAN (WGAN). Further key developments in GAN research include the introduction of a penalty on the gradient during model training~\citep{gulrajani_improved_2017} and the addition of conditionality~\citep{mirza_conditional_2014}. While the primary application of GANs has been the generation of image data, with a particular focus on human faces~\citep{alqahtani_applications_2021}, researchers have also developed specific architectures for tabular data. This has allowed researchers to broaden their focus from synthetic population generation to more general synthetic tabular data generation.
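For reference, the critic loss with gradient penalty proposed by~\cite{gulrajani_improved_2017} takes the form
\begin{equation*}
L = \mathbb{E}_{\tilde{x} \sim p_g}\left[D(\tilde{x})\right] - \mathbb{E}_{x \sim p_{\text{data}}}\left[D(x)\right] + \lambda \, \mathbb{E}_{\hat{x} \sim p_{\hat{x}}}\left[\left(\lVert \nabla_{\hat{x}} D(\hat{x}) \rVert_2 - 1\right)^2\right],
\end{equation*}
where $p_g$ is the generator distribution, $p_{\hat{x}}$ samples uniformly along straight lines between pairs of real and generated points, and $\lambda$ is the penalty coefficient.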
TableGAN~\citep{park_data_2018} and TGAN~\citep{xu_synthesizing_2018} are two GAN models designed specifically for tabular data. TableGAN has been developed with privacy-preservation techniques in mind. This model is based on the Deep Convolutional GAN (DCGAN)~\citep{radford_unsupervised_2016}. On the other hand, TGAN has been developed to reproduce tabular data as realistically as possible, using Long Short-Term Memory (LSTM) cells for the generator~\citep{hochreiter_long_1997}. The authors demonstrate that TGAN outperforms tableGAN. Researchers in the transportation community have also developed their own GAN structures to generate synthetic populations. For example, \cite{garrido_prediction_2019} develop a GAN structure based on the WGAN to handle tabular data. They show that this new model is statistically better than IPF techniques, Gibbs sampling, and the VAE of \cite{borysov_how_2019}. Finally, \cite{badu-marfo_composite_2020} created a new GAN named Composite Travel GAN (CTGAN). Their GAN is based on the Coupled GAN (CoGAN)~\citep{liu_coupled_2016} and is used to generate both the table of attributes for the population and the sequence of Origin-Destination segments. They show that the CTGAN statistically outperforms the VAE. While these models show outstanding performance compared to previous methods, the switch to data-driven methods has reduced the control that researchers and modelers have over the generation process. This lack of control can hinder the final results, depending on the research axis. Thus, in the next section, we discuss multiple axes found in the literature, some requiring high control over the generated synthetic data.
\subsection{Research axes} \label{sec:research_axes}
The primary focus of existing population synthesis in the transportation domain has been direct use in simulation models. The deep learning community, on the other hand, motivates its research by stating that using more data improves the efficacy of Machine Learning models. For example, \cite{jha_impact_2019} show that a larger and more complete dataset leads to better validation and fewer uncertainties. Other examples discussing the dataset size can be found in the literature~\citep{barbedo_impact_2018,linjordet_impact_2019}. However, this does not represent the only research axis for synthetic tabular data generation. In the remainder of this section, we present and discuss five different research axes for synthetic data generation found in the literature.
\subsubsection{Simulation and agent-based modeling} \label{sec:simulation}
Agent-based models~\citep{bonabeau_agent-based_2002} are used to simulate the actions and interactions of autonomous agents in order to understand the behavior of a system. These models are extensively used in the transportation community~\citep{kagho_agent-based_2020} and require a large amount of data to be adequately trained. Synthetic data are, thus, often used to replace scarce and expensive original data.
In the transportation community, the predominant focus for the population synthesis papers introduced in Section~\ref{sec:existing_approaches} is for direct use in simulation. For example, \cite{beckman_creating_1996}, \cite{barthelemy_synthetic_2013}, \cite{farooq_simulation_2013}, \cite{borysov_how_2019}, and \cite{garrido_prediction_2019} all motivate their work by discussing the generation of synthetic population for (agent-based) simulation models.
\subsubsection{Privacy preservation} \label{sec:privacy}
Privacy preservation techniques ensure that no private information is disclosed through the data or through Machine Learning models trained on it. Synthetic data can thus replace highly sensitive original data when privacy is of concern. Methods using only aggregate information such as marginals or conditionals, e.g., IPF or Gibbs sampling, are especially effective here since they never use the individual original records to generate the synthetic data.
For example, \cite{barthelemy_synthetic_2013} motivate their model (a three-step procedure based on simulation techniques) as an improvement on the privacy preservation of standard IPF methods. They state that the standard method tends to repeat observations, making it possible to retrieve information from the original dataset. More recently, the tableGAN~\citep{park_data_2018} has been specifically designed to preserve the privacy of the original datasets. Multiple GAN models have been created in computer vision with privacy preservation as the core motivation. For example, \cite{liu_ppgan_2019} created the Privacy-Preserving GAN (PPGAN). This GAN uses differential privacy by adding specifically designed noise to the gradient during the learning procedure. \cite{yin_gans_2018}, on the other hand, directly generate protected data within the generator of their GAN by removing some sensitive information and encoding it in the generated data. They tested their synthetic data against attack models to show that their GAN generates data that are more difficult to decipher. More recently, \cite{zhao_ctab-gan_2021} motivate and test their model on privacy preservation.
\subsubsection{Machine Learning efficacy} \label{sec:ml_efficacy}
The generation of synthetic populations also enables the augmentation of existing real-world datasets with synthetic individuals, thereby increasing the size and variability of the dataset. Several studies have investigated the performance of Machine Learning techniques trained on augmented or fully synthetic data. This concept is already widely used on images~\citep{shorten_survey_2019}. However, while simple techniques such as rotating or scaling images can be used in Computer Vision, such simple transformations cannot be applied to tabular data. Thus, researchers have been developing models aimed at augmenting tabular data.
For example, \cite{xu_synthesizing_2018} motivate the development of the TGAN because organizations are using Machine Learning on relational tabular data to augment process workflows carried out by humans. Furthermore, they state that these synthetic datasets can either be used as an augmentation for the existing datasets or as a means to preserve privacy, which is discussed in Section~\ref{sec:privacy}. On the other hand, \cite{xu_modeling_2019} do not give a clear motivation on the usage of synthetic datasets. However, they test their models on Machine Learning efficacy by replacing the training data with the generated synthetic data. More recently, both \cite{wen_causal-tgan_2021} and \cite{zhao_ctab-gan_2021} motivate their models using Machine Learning efficacy as the core method to assess their generated synthetic datasets.
\subsubsection{Bias correction} \label{sec:bias_correction}
Synthetic data can also be applied to correct bias in existing datasets by controlling the data generation process. This builds on standard resampling methods~\citep{rubin_use_1973}, which rebalance the dataset (by removing oversampled data or resampling undersampled data) and thereby improve the signal-to-noise ratio of the existing data. Indeed, data generation techniques can also be used to augment an existing dataset and rebalance it.
For example, Conditional GANs~\citep{mirza_conditional_2014} have been created to tackle such imbalance issues. The idea of such GANs is to generate synthetic data using prior information, thereby increasing the probability of generating synthetic data with the given characteristics. \cite{xu_modeling_2019} have adapted this methodology to tabular data with the Conditional Table GAN (CTGAN). They show that conditionality is particularly effective for Machine Learning models when the data is highly imbalanced. They create synthetic datasets addressing the imbalance and train Machine Learning models on both the synthetic and the original datasets. The models trained on the synthetic datasets perform better than those trained on the original dataset. Earlier, \cite{farooq_simulation_2013} motivated their research on population synthesis with Gibbs sampling partly by the fact that it can complete datasets. However, the authors did not formally evaluate this use of synthetic data.
\subsubsection{Transfer learning} \label{sec:transfer_learning}
The use of Conditional GANs, and other conditional data generation approaches, enables the possibility for transfer learning, where knowledge from one context with large data availability can be transferred to another context with lower data availability.
For example, \cite{noguchi_image_2019} propose a new method using the BigGAN to transfer the knowledge learned on large datasets and apply it to a dataset with only 25 images. They show that they can add a new class to a pre-trained generator without disturbing the performance on the original domain. \cite{wang_minegan_2020} propose to use a miner network that identifies which distribution among multiple pre-trained GANs is the most beneficial for a specific target. This mining pushes the sampling towards more suitable regions in the latent space. Therefore, the MineGAN can transfer the knowledge of multiple GANs such as the BigGAN and the Progressive GAN to a domain with fewer images. Other relevant methods for knowledge transfer can be found in the articles of \cite{jeon_t-gd_2020} and \cite{fregier_mind2mind_2020}.
While this research axis has already been explored in the computer vision community, it has not yet been explored in population synthesis. Indeed, while the transfer of knowledge between two arbitrary tabular datasets might not make sense, it could be used for tabular data of populations. For example, census data are often collected regularly, \emph{e.g.} every two or five years. We could, thus, imagine using GANs trained on data from previous years to transfer their knowledge to the most recent years.
\subsection{State-of-the-art models} \label{sec:sota_models}
We present a detailed overview of four different approaches demonstrated to achieve the best performance when generating synthetic tabular data. As such, these approaches represent the state-of-the-art in this field. The four approaches, which all make use of deep learning algorithms, are introduced across three key articles: \cite{xu_synthesizing_2018}, \cite{xu_modeling_2019}, and \cite{zhao_ctab-gan_2021}. A summary of the models is given in Table~\ref{tab:summary_sota}.
\begin{table}[H]
\centering
\caption{Summary of the state-of-the-art models selected for comparison with the DATGAN.}
\label{tab:summary_sota}
\begin{tabularx}{0.9\textwidth}{c||>{\hsize=.35\hsize}C|>{\hsize=1.65\hsize}L}
\multicolumn{1}{c||}{\textbf{Model}} & \multicolumn{1}{c|}{\textbf{Article}} & \multicolumn{1}{c}{\textbf{Information}} \\ \midrule[1.5pt]
TGAN & \cite{xu_synthesizing_2018} & ~~\llap{\textbullet}~~ \emph{Generator:} LSTM cells in linear arrangement \vskip \arraystretch pt
\hspace*{0pt}~~\llap{\textbullet}~~ \emph{Discriminator:} Fully-connected neural network \vskip \arraystretch pt
\hspace*{0pt}~~\llap{\textbullet}~~ \emph{Data preprocessing:} Continuous vs categorical \vskip \arraystretch pt
\hspace*{0pt}~~\llap{\textbullet}~~ \emph{Loss function:} Cross-entropy loss \vskip \arraystretch pt
\hspace*{0pt}~~\llap{\textbullet}~~ \emph{Conditionality:} None \\ \midrule
CTGAN & \cite{xu_modeling_2019} & ~~\llap{\textbullet}~~ \emph{Generator:} Fully-connected neural network \vskip \arraystretch pt
\hspace*{0pt}~~\llap{\textbullet}~~ \emph{Discriminator:} Fully connected neural network \vskip \arraystretch pt
\hspace*{0pt}~~\llap{\textbullet}~~ \emph{Data preprocessing:} Continuous vs categorical \vskip \arraystretch pt
\hspace*{0pt}~~\llap{\textbullet}~~ \emph{Loss function:} Wasserstein loss with gradient-penalty \vskip \arraystretch pt
\hspace*{0pt}~~\llap{\textbullet}~~ \emph{Conditionality:} On categorical variables \\ \midrule
TVAE & \cite{xu_modeling_2019} & ~~\llap{\textbullet}~~ \emph{Encoder:} Updated structure for preprocessed data \vskip \arraystretch pt
\hspace*{0pt}~~\llap{\textbullet}~~ \emph{Decoder:} Similar to conventional VAE \vskip \arraystretch pt
\hspace*{0pt}~~\llap{\textbullet}~~ \emph{Data preprocessing:} Continuous vs categorical \vskip \arraystretch pt
\hspace*{0pt}~~\llap{\textbullet}~~ \emph{Loss function:} Evidence lower-bound (ELBO) loss \vskip \arraystretch pt
\hspace*{0pt}~~\llap{\textbullet}~~ \emph{Conditionality:} None \\ \midrule
CTAB-GAN & \cite{zhao_ctab-gan_2021} & ~~\llap{\textbullet}~~ \emph{Generator:} Convolutional neural network \vskip \arraystretch pt
\hspace*{0pt}~~\llap{\textbullet}~~ \emph{Discriminator:} Convolutional neural network \vskip \arraystretch pt
\hspace*{0pt}~~\llap{\textbullet}~~ \emph{Classifier:} Multi-layer perceptron \vskip \arraystretch pt
\hspace*{0pt}~~\llap{\textbullet}~~ \emph{Data preprocessing:} Continuous vs categorical vs mixed \vskip \arraystretch pt
\hspace*{0pt}~~\llap{\textbullet}~~ \emph{Loss function:} Cross-entropy, information and classification losses \vskip \arraystretch pt
\hspace*{0pt}~~\llap{\textbullet}~~ \emph{Conditionality:} On categorical variables
\end{tabularx} \end{table}
While some GANs have been developed for privacy preservation, TGAN~\citep{xu_synthesizing_2018} focuses on learning the marginal distributions using recurrent neural networks. Since our focus is on creating representative synthetic data, we selected TGAN as the first model to be compared. It uses Long Short-Term Memory (LSTM) cells~\citep{hochreiter_long_1997} to generate each variable in the table. The LSTM cells are arranged linearly following the order of the variables in the dataset. The authors distinguish between categorical and continuous variables. The two variable types are encoded differently: \begin{inlinelist}
\item continuous variables are encoded using Gaussian mixtures;
\item categorical variables are one-hot encoded. \end{inlinelist} Finally, the TGAN is trained using the standard minimax loss function~\citep{goodfellow_generative_2014}, and it is compared to other data synthesizers such as Gaussian Copula and Bayesian Networks. The authors show that TGAN outperforms all these methods.
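As an illustration of this preprocessing, both encodings can be sketched as follows. This is a simplified version: the mode means and standard deviations are fixed here, whereas TGAN fits a Gaussian mixture to each continuous column (and CTGAN a variational one); the $4\sigma$ rescaling into $[-1, 1]$ follows the TGAN paper.

```python
import numpy as np

def one_hot(values, categories):
    """One-hot encode a categorical column."""
    idx = {c: i for i, c in enumerate(categories)}
    out = np.zeros((len(values), len(categories)))
    for row, v in enumerate(values):
        out[row, idx[v]] = 1.0
    return out

def mode_normalize(values, means, stds):
    """Simplified mode-specific normalization of a continuous column:
    assign each value to the nearest mixture mode, then rescale it
    within that mode to roughly [-1, 1] using 4 standard deviations."""
    values = np.asarray(values, dtype=float)
    means, stds = np.asarray(means), np.asarray(stds)
    modes = np.argmin(np.abs(values[:, None] - means[None, :]), axis=1)
    scaled = (values - means[modes]) / (4.0 * stds[modes])
    return modes, np.clip(scaled, -1.0, 1.0)

# Illustrative columns: a travel mode and an age variable.
cat = one_hot(["bus", "car", "bus"], categories=["bus", "car", "walk"])
modes, scaled = mode_normalize([21.0, 64.0, 35.0],
                               means=[25.0, 60.0], stds=[5.0, 8.0])
```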
CTGAN~\citep{xu_modeling_2019} uses a fully-connected neural network for both the generator and the critic. Like TGAN, the variables are differentiated between categorical and continuous variables. CTGAN uses the same encoding procedure for both variable types with a slight difference for continuous variables: a Variational Gaussian Mixture (VGM) is used instead of standard Gaussian mixtures. In addition, this model uses conditionality by adding a conditional vector on categorical variables. Finally, CTGAN is trained using the Wasserstein loss with gradient-penalty~\citep{gulrajani_improved_2017}.
The Tabular VAE (TVAE)~\citep{xu_modeling_2019} is an adaptation of a standard Variational AutoEncoder (VAE), obtained by modifying the loss function and preprocessing the data. The variables are encoded using the same procedure as in CTGAN. TVAE uses the evidence lower-bound (ELBO) loss~\citep{kingma_auto-encoding_2014}. While the encoder is slightly updated compared to conventional VAEs, the decoder keeps a usual structure. The authors have compared TVAE, CTGAN, and other methods for synthesizing tabular data such as tableGAN. They show that both TVAE and CTGAN outperform the other methods. On multiple metrics, TVAE performs better than CTGAN. However, as stated by the authors, CTGAN achieves differential privacy~\citep{jordon_pate-gan_2018} more easily than TVAE since its generator never sees the original data.
Finally, the particularity of CTAB-GAN~\citep{zhao_ctab-gan_2021} compared to the previous models is that it aims at fixing issues with skewed continuous distributions. Indeed, continuous distributions can take many forms, such as long-tailed, exponential, or mixed distributions. Therefore, this model implements multiple data preprocessing methods for different distributions. CTAB-GAN comprises three neural networks: a generator, a discriminator, and an additional classifier. The latter is used to learn the semantic integrity of the original data and predict the synthetic data classes. This helps produce more accurate labeled synthetic data. The generator and discriminator are convolutional neural networks, while the classifier is a multi-layer perceptron. In addition, CTAB-GAN uses conditionality to counter the imbalance in the training dataset to improve the learning process. Finally, CTAB-GAN is trained using the standard cross-entropy loss function with the addition of an information loss and a classification loss. The authors have tested their model against other state-of-the-art models such as tableGAN and CTGAN. They have shown that their model outperforms all the other models using Machine Learning efficacy and statistical similarity metrics.
\subsection{Model evaluation} \label{sec:model_eval}
Model evaluation is intrinsically linked to the research axis for which a model was developed. Indeed, a generator developed to correct bias should not be tested on the same characteristics as a model developed for privacy preservation. Thus, researchers have come up with different methods for assessing generated synthetic datasets. In this section, we discuss two types of methods that are primarily used to assess the representativity of a synthetic dataset compared to its original counterpart: statistical methods (in Section~\ref{sec:stat_assess}) and Machine Learning methods (in Section~\ref{sec:ml_techniques}).
\subsubsection{Statistical assessments} \label{sec:stat_assess}
Multiple statistical tests can be used to compare two distributions, such as the $\chi^2$ test or the Student's t-test. While these tests can provide good information when comparing the distributions of each variable separately, they do not consider the correlation between the variables. Since this aspect is essential for creating representative synthetic populations, researchers in the transportation community have been developing new statistical tests to address this issue. The Standardized Root Mean Squared Error (SRMSE)~\citep{muller_population_2010} is used in most transportation articles working on population synthesis to assess the generated datasets. The test consists in selecting one or multiple variables in a dataset and creating a frequency list based on the appearance of each unique value. We can, then, apply the SRMSE (see Equation~\ref{eq:SRMSE}) formulation on this frequency list. While this technique has been shown to work well compared to other statistical methods, it has two main flaws: \begin{inlinelist} \item the frequency lists are computed by counting the unique values (or combinations of unique values); it is therefore preferably used on discrete values. \item The choice of variables (or combination of variables) is up to the researcher or modeler. Thus, articles using this methodology tend to test only a couple of combinations. \end{inlinelist} The first flaw can be addressed by transforming the continuous values into discrete values, assigning them to specific buckets. This is a relatively simple fix, and if the discretization is done correctly, the SRMSE should still provide valid results. The second flaw, however, is more problematic. Indeed, when generating a synthetic dataset, we want to make sure that all the correlations are correctly generated. The SRMSE methodology therefore has to be updated to test all the variables and their possible combinations systematically.
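For concreteness, the frequency-based computation can be sketched as follows. We use one common formulation of the SRMSE, the RMSE of the relative cell frequencies normalized by their mean; function and variable names are ours:

```python
import numpy as np
from collections import Counter

def srmse(real_rows, synth_rows):
    """SRMSE between the empirical frequency distributions of two
    datasets over the same (combination of) discrete variables."""
    cells = sorted(set(real_rows) | set(synth_rows))
    f_real, f_synth = Counter(real_rows), Counter(synth_rows)
    p_real = np.array([f_real[c] / len(real_rows) for c in cells])
    p_synth = np.array([f_synth[c] / len(synth_rows) for c in cells])
    rmse = np.sqrt(np.mean((p_real - p_synth) ** 2))
    return rmse / np.mean(p_real)

# Each row is a combination of values for the selected variables.
real = [("m", "car"), ("m", "car"), ("f", "bus"), ("f", "car")]
synth = [("m", "car"), ("f", "bus"), ("f", "bus"), ("f", "car")]
score = srmse(real, synth)  # 0 would mean identical frequency tables
```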
\subsubsection{Machine Learning techniques} \label{sec:ml_techniques}
In Machine Learning, many datasets are classification datasets: they are used with a Machine Learning model to predict future instances of a single variable. Therefore, researchers developing generators to improve the efficacy of such models usually test the synthetic datasets only on the predictive power of Machine Learning models for that single variable. While this technique works well in this specific case, it does not provide enough information if one is trying to assess the representativity of a synthetic dataset compared to an original one. Indeed, there might be missing correlations between the other variables in the dataset that the Machine Learning models will overlook. It is, thus, possible to update this technique such that a Machine Learning model is used to predict each variable in the dataset instead of a single one. If there are issues with the correlations between the variables, the efficacy of the Machine Learning models will drop when predicting these other variables, thus providing more information.
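The updated technique can be sketched as a train-on-synthetic, test-on-real loop over every column. For the sketch we use a dependency-free 1-nearest-neighbour classifier on numerically encoded data; in practice any supervised model could be substituted:

```python
import numpy as np

def ml_efficacy(real, synth):
    """For each column, fit a 1-nearest-neighbour classifier on the
    synthetic rows (remaining columns as features) and report its
    accuracy on the real rows."""
    n_cols = real.shape[1]
    scores = {}
    for target in range(n_cols):
        feats = [c for c in range(n_cols) if c != target]
        X_tr, y_tr = synth[:, feats], synth[:, target]
        X_te, y_te = real[:, feats], real[:, target]
        # 1-NN prediction: label of the closest synthetic row.
        d = np.linalg.norm(X_te[:, None, :] - X_tr[None, :, :], axis=2)
        y_pred = y_tr[np.argmin(d, axis=1)]
        scores[target] = float(np.mean(y_pred == y_te))
    return scores

# Illustrative data: random binary tables standing in for an encoded
# real dataset and its synthetic counterpart.
rng = np.random.default_rng(0)
real = rng.integers(0, 2, size=(50, 3)).astype(float)
synth = rng.integers(0, 2, size=(50, 3)).astype(float)
scores = ml_efficacy(real, synth)
```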
\subsection{Opportunities and limitations} \label{sec:opportunities}
The role of the generator in a GAN is to produce batches of synthetic data, taking only noise as an input. As such, the structure of the generator network should be closely matched to the underlying structure of the data being replicated. For instance, in images, each variable represents a pixel whose meaning is image-specific and only defined relative to other pixels in the image. In other words, the meaning of a pixel in an image is dependent on its relative position and value, not its absolute position and value, and the meaning of a single pixel in one image is (largely) independent from the meaning of the corresponding pixel in another image in the same dataset. It is therefore typical for generators used in image generation to make use of convolutional neural networks~\citep{radford_unsupervised_2016, isola_image--image_2017, zhu_unpaired_2018}, which model the relative definitions of the pixel values learned over thousands or millions of images in a dataset.
Unlike images, the variables in tabular data typically have a specific meaning and can be understood by their absolute positions and values (within a single dataset). For example, a column representing an individual's age in socio-economic data defines the age of every instance (row) in the table. Furthermore, the age value of each row can be understood without needing to know the values of the other variables. While the variables in a dataset have a fixed position-specific meaning, their values are dependent on the other variables in the dataset. As such, the generator must capture the interdependencies between these variables.
There are several different approaches to this in the literature. The first approach mimics what is done with images. Indeed, several models are built using fully connected networks~\citep{xu_modeling_2019} or convolutional neural networks~\citep{park_data_2018, zhao_ctab-gan_2021}. The generator has to learn the structure from the data during the learning process using backpropagation. This can be rather cumbersome for the generator since it never has access to the original data. Therefore, another approach is to fix the structure of the data. For example, the TGAN~\citep{xu_synthesizing_2018} uses a sequential order based on the columns' order. The structure is implicitly learned using attention vectors used with each variable in the dataset.
In both approaches, the generator learns the relationships between the variables from the available data via backpropagation of the discriminator loss. However, this approach has two primary limitations. Firstly, the generator, which needs to be highly flexible, can overfit the noise in the data and infer relationships between the columns which do not actually exist in unseen data. Secondly, the generator has to use the limited signal in the data to learn the core structure of the data, which is often already known to some degree by the modeler. Both can cause issues when the signal-to-noise ratio is low, as is often the case with socio-economic datasets of limited size.
Therefore, there is an opportunity to develop techniques that address these flaws and use the learning power of GANs. Indeed, by defining the relationships between the variables beforehand using expert knowledge, we can force the generator only to learn specific correlations. In addition, if the researcher or modeler provides these relationships, the model starts its learning process with more information than a fully connected network that has to learn all these connections. Therefore, we can overcome the issues with GANs while keeping their strengths.
\section{Methodology} \label{sec:methodology}
Following several previous works in the literature, our approach for synthetic tabular data generation makes use of a GAN~\citep{goodfellow_generative_2014} to generate synthetic data. Our primary contribution is to closely match the generator structure to the underlying structure of the data through a Directed Acyclic Graph (DAG) specified by the modeler. The DAG allows the modeler to define, according to their prior expert knowledge of the data structure, the structure of the correlations between variables in the dataset as a series of directed links between nodes in a graph. Each link in the DAG represents a causal link that the generator can capture, and if no links (either direct or indirect) exist between two variables, then the generator treats those variables independently. This has two primary advantages over the existing approaches within the context of the limitations identified in the literature review. Firstly, by restricting the set of permissible links between the variables in the datasets, the DAG represents an expert regularisation of the model and restricts the ability of the GAN to overfit noise in the training sample. Secondly, giving the generator a headstart in knowing the underlying structure of the data allows the GAN to make more efficient use of the training sample when learning to generate data. These benefits could enable the DATGAN to use limited available original data more efficiently when learning to produce realistic and representative synthetic data samples.
\begin{figure}
\caption{Global schematic representation of the DATGAN. The different elements in this figure are presented in the following sections: the Generator is presented in Section~\ref{sec:generator}, the DAG in Section~\ref{sec:DAG}, the Discriminator in Section~\ref{sec:discriminator}, the Encoding in Section~\ref{sec:encoding}, and the Sampling in Section~\ref{sec:sampling}.}
\label{fig:DATGAN}
\end{figure}
Figure~\ref{fig:DATGAN} provides a high-level overview of the DATGAN data generation process. As is typical with GANs, the generator in the DATGAN (described in Section~\ref{sec:generator}) never sees the original data. It generates data purely from a random noise input. The generator's structure is determined according to a DAG which specifies the structural relationships between the variables in the data (\emph{i.e.} the expert knowledge), as defined by the modeler. We present the DAG in Section~\ref{sec:DAG} and the process to automatically create the generator structure from the DAG in Section~\ref{sec:LSTM}. At the same time, the discriminator (described in Section~\ref{sec:discriminator}) is trained to classify/critique the original and synthetic data. Training can therefore be considered a competitive game between two adversaries (the generator and the discriminator). The loss functions used to optimize both models are presented in Section~\ref{sec:loss_function}. Since tabular data can contain attributes of different types (\emph{e.g.} continuous, nominal, and ordinal), original and synthetic data have to be processed before being used. We, thus, introduce several new data processing steps specific to the DATGAN model in Section~\ref{sec:data_processing}. At the end of this section, we present the result assessment methods used to compare the synthetic datasets in Section~\ref{sec:result_assessments}. We conclude the methodology by providing some implementation notes in Section~\ref{sec:implementation}. The supplementary materials include a notation table summarizing all the notations used in the methodology and a summary of the possible DATGAN versions using the different components presented here.
Formally, we consider a table $\bf T$ containing $N_{V}$ columns. Each column in the table $\bf T$ is represented by $\vec{v}_t$ for $t=1,\ldots,N_{V}$. We, thus, have ${\bf T} = \left\{\vec{v}_{1:N_{V}}\right\}$. These columns have been drawn from an unknown joint distribution of random variables $V_t$, \emph{i.e.} the values in $\bf T$ are drawn from $\mathbb{P}\left(V_{1:N_{V}}\right)$. We, thus, usually refer to the variables in $\bf T$ using $V_t$. We represent the rows of $\bf T$ by $\left\{v_{1:N_{V},i}\right\}$ for $i=1,\ldots,N_{\text{rows}}$ where $N_{\text{rows}}$ corresponds to the number of rows in $\bf T$. We assume that each row of $\bf T$ is sampled independently, \emph{i.e.} it is cross-sectional and does not contain panel or sequential data. Our goal is to learn a generative model $G(\vec{z})$, where $\vec{z}$ is a tensor of random noise, such that the samples generated from $G$ create a synthetic table $\bf T_{synth}$. For neural networks, we work in a standardized space consisting of values between -1 and 1. We, thus, denote processed datasets with the character $\widehat{\cdot}$, as shown in Figure~\ref{fig:DATGAN}. In the meantime, the discriminator $D$ is trained to differentiate between original data $\bf \widehat{T}$ and synthetic data $\bf \widehat{T}_{synth}$.
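As an illustration of this notation, the following sketch builds a mock table $\bf T$ with $N_V = 3$ columns and maps it into the standardized $[-1, 1]$ space. This is a hedged sketch on hypothetical mock data using a simple min-max mapping, not the actual encoding of the model, which is presented in Section~\ref{sec:encoding}.

```python
import numpy as np

# Mock table T with N_V = 3 columns, each row sampled independently
# (hypothetical data, not the authors' implementation).
rng = np.random.default_rng(0)
N_rows = 5
T = {
    "age": rng.integers(18, 90, size=N_rows).astype(float),
    "nbr cars household": rng.integers(0, 4, size=N_rows).astype(float),
    "trip purpose": rng.integers(0, 5, size=N_rows).astype(float),
}
N_V = len(T)

def standardize(v):
    """Map a column into the [-1, 1] space used by the neural networks;
    constant columns map to 0."""
    lo, hi = v.min(), v.max()
    if hi == lo:
        return np.zeros_like(v)
    return 2.0 * (v - lo) / (hi - lo) - 1.0

# Processed table \widehat{T}, as denoted in the text.
T_hat = {name: standardize(col) for name, col in T.items()}
```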
\subsection{Generator} \label{sec:generator}
The role of the generator is to produce batches of synthetic data, taking only noise as an input. Within DATGAN, the generator structure is defined using a Directed Acyclic Graph (DAG), which specifies the interdependencies between each variable $V_t$ in the original dataset $\bf T$. We first present the DAG, including how the modeler should construct it, in Section~\ref{sec:DAG}. We then demonstrate how the DAG is used to automatically create the generator network in Section~\ref{sec:LSTM}, through the use of Long Short Term Memory (LSTM) cells~\citep{gers_learning_2000}. This includes defining a new multi-input LSTM cell required to capture complex correlations specified in the DAG.
\subsubsection{Directed Acyclic Graph (DAG)} \label{sec:DAG}
The DAG $\mathcal{G}$ is specified by the modeler to define the correlations between the variables in the data. However, similarly to Bayesian networks, a DAG must represent causal links between variables. Indeed, correlations do not have a direction, while causal links do. The main reason to use a directed graph instead of an undirected one is due to the nature of the representation of the variables in the Generator. As explained in Section~\ref{sec:LSTM}, each variable $\vec{v}_t$ in $\bf T$ is represented by a single LSTM cell. These cells communicate with each other in a directed manner, \emph{i.e.} the previous cell sends information to the next one. Therefore, to better reflect this behavior, a directed graph is required.
The mathematical definition of a directed acyclic graph is given by: \begin{itemize}
\item The graph $\mathcal{G}$ must be directed, \emph{i.e.} each edge in the graph has only one direction.
\item The graph $\mathcal{G}$ must not contain any cycle, \emph{i.e.} the starting vertex of any given path cannot be the same as the ending vertex. \end{itemize} These two properties ensure that the DAG admits a topological ordering, \emph{i.e.} we can extract a linear ordering of the vertices such that for every directed edge $\vec{v}_{t_1}\rightarrow\vec{v}_{t_2}$ from vertex $\vec{v}_{t_1}$ to vertex $\vec{v}_{t_2}$, $\vec{v}_{t_1}$ comes before $\vec{v}_{t_2}$ in the ordering.
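These two properties can be checked programmatically. Below is a minimal pure-Python sketch (a hypothetical helper, not part of the DATGAN implementation) that verifies a directed edge list contains no cycle, which is exactly what guarantees that a topological ordering exists:

```python
# Check acyclicity of a directed edge list with a depth-first search:
# a directed graph admits a topological ordering iff no back-edge is found.
def is_dag(nodes, edges):
    succ = {n: [] for n in nodes}
    for tail, head in edges:          # each edge has a single direction
        succ[tail].append(head)
    WHITE, GREY, BLACK = 0, 1, 2      # unvisited / in progress / done
    color = {n: WHITE for n in nodes}

    def visit(n):
        color[n] = GREY
        for m in succ[n]:
            if color[m] == GREY:      # back-edge: a path returns to its start
                return False
            if color[m] == WHITE and not visit(m):
                return False
        color[n] = BLACK
        return True

    return all(color[n] != WHITE or visit(n) for n in nodes)

assert is_dag(["a", "b", "c"], [("a", "b"), ("b", "c"), ("a", "c")])
assert not is_dag(["a", "b"], [("a", "b"), ("b", "a")])
```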
With these rules in mind, the modeler can define the DAG for the DATGAN. It is possible to get inspiration from Bayesian networks and how their respective DAG is created manually~\citep{lucas_bayesian_2004}. However, there are some slight differences between the two DAGs. Therefore, we provide some insights about the components of our DAG: \begin{itemize}
\item Each variable $\vec{v}_t$ in the table $\bf T$ must be associated to a node in the graph $\mathcal{G}$.
\item A directed edge between two vertices, \emph{i.e.} $\vec{v}_{t_1}\rightarrow\vec{v}_{t_2}$, means that the generation of the first variable $\vec{v}_{t_1}$ will influence the generation of the second variable $\vec{v}_{t_2}$. Usually, the direction of the edge is a matter of judgement and should not influence the final result.
\item The absence of a link between two variables means that their correlation is not \emph{directly} learned by the Generator. However, it is possible to obtain some correlation in the final synthetic dataset if these two variables have a common ancestor in the graph $\mathcal{G}$. Therefore, two variables will not show any correlations in the synthetic dataset if they do not have any common ancestors or links between them in the DAG.
\item The graph $\mathcal{G}$ can be composed of multiple DAGs as long as the first rule is respected. By creating multiple DAGs, the modeler ensures that the different parts of the dataset are not correlated. While this approach is not common, it could be used in a dataset containing variables about multiple unrelated topics. \end{itemize}
As for Bayesian networks, there is no unique way to create a DAG for a given dataset. Using the example shown in Figure~\ref{fig:mock}, we present different possibilities. The first one consists of following the variables' order in the dataset. This creates a simple ordered list of the variables. For example, the TGAN~\citep{xu_synthesizing_2018} uses this specific list to link each of its LSTM cells. The advantage of such a DAG is its simplicity since no prior knowledge of the data is required. However, this DAG defines causal links based on an arbitrary order. It, thus, does not make use of expert knowledge and leads to poorer results. Another possibility is to create a DAG centered around predicting a given variable. For example, in the case of Table~\ref{fig:data}, one could want to predict the variable \texttt{mode choice}. Therefore, a possible structure for the DAG is to link all the other nodes in the table to a single sink node representing the variable that we want to predict. However, while this DAG would capture all possible correlations between each variable and the one to be predicted, it would not capture any other correlations. Therefore, a population generated using this DAG would fail basic correlation tests.
While the two possibilities presented above allow creating a DAG without prior knowledge of the data, they will fail to deliver a synthetic dataset that correctly models the correlations between the variables in a table $\bf T$. We recommend building a DAG containing as many links as possible. It is always possible, after its definition, to perform a transitive reduction of $\mathcal{G}$, \emph{i.e.} to remove every edge $\vec{v}_{t_1}\rightarrow\vec{v}_{t_2}$ that is already implied by a longer directed path from $\vec{v}_{t_1}$ to $\vec{v}_{t_2}$. There are no strict rules to decide whether one should add a causal link between two variables. It is a matter of judgment, and some trial and error will be needed. However, we provide a set of instructions that can help the modeler define such a DAG:
\begin{itemize}
\item If the dataset is used to predict one variable, define this variable as a sink node in $\mathcal{G}$. For example, in Figure~\ref{fig:mock}, the dataset can be used to predict the variable \texttt{mode choice}. It is, thus, the sink node of the graph. It is also possible to define multiple sink nodes.
\item Usually, datasets contain different categories of variables. For example, a travel survey dataset might contain trip-, individual-, and household-related variables. It is, thus, generally easier to define the causal links between variables belonging to similar semantic groups.
\item The next step consists in defining the source nodes. However, there are no specific rules for this. It is entirely up to the modeler.
\item Finally, the modeler has to choose the direction of the causal links. Again, there are no rules for this. In the example of Figure~\ref{fig:DAG}, one could decide that the variables \texttt{driving license} and \texttt{age} have an inverted causal link. This would slightly change the DAG but should not fundamentally change the results. \end{itemize}
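The transitive reduction of $\mathcal{G}$ mentioned above can be sketched in pure Python (a hypothetical helper, not the authors' implementation): an edge is redundant whenever its head remains reachable from its tail through a longer directed path.

```python
def transitive_reduction(edges):
    """Drop every edge (a, b) implied by a longer directed path from a to b."""
    succ = {}
    for a, b in edges:
        succ.setdefault(a, set()).add(b)
        succ.setdefault(b, set())

    def reachable(src, dst, skip_edge):
        # Depth-first search from src to dst, ignoring one direct edge.
        stack, seen = [src], set()
        while stack:
            n = stack.pop()
            for m in succ[n]:
                if (n, m) == skip_edge or m in seen:
                    continue
                if m == dst:
                    return True
                seen.add(m)
                stack.append(m)
        return False

    return [e for e in edges if not reachable(e[0], e[1], e)]

# Illustrative edges loosely based on the mock dataset of the text.
edges = [("age", "driving license"),
         ("driving license", "mode choice"),
         ("age", "mode choice")]            # implied by the two edges above
reduced = transitive_reduction(edges)
assert ("age", "mode choice") not in reduced
```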
As shown in Figure~\ref{fig:mock}, we present a mock dataset (see Figure~\ref{fig:data}) and one possible DAG (see Figure~\ref{fig:DAG}) representing the causal links between the variables. This dataset is a travel survey dataset. We thus define the mode choice as the sink node. We can define the following category of variables: \begin{inlinelist}
\item trip-related variables: \texttt{mode choice} and \texttt{trip purpose};
\item individual-related variables: \texttt{age} and \texttt{driving license};
\item household-related variables: \texttt{nbr cars household};
\item survey-related variables: \texttt{type of survey}. \end{inlinelist} For each of these categories, we want to make sure that the variables are linked together. Since \texttt{mode choice} is the sink node, we can create an edge from \texttt{trip purpose} to \texttt{mode choice}. For the individual-related variables, the causal link can go in either direction. For the source nodes, we decide to set the variables \texttt{age} and \texttt{nbr cars household} as the source nodes. Finally, we can add some more links to complete the DAG. We deliberately left the variable \texttt{type of survey} out of the DAG so that it does not influence the data generation. One could argue that it could be linked to the age since older individuals are less familiar with internet technologies. However, as stated earlier, the modeler has to make choices while constructing the DAG, which requires some trial and error.
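The worked example above can be summarized as a plain edge list. This is a hedged sketch: the exact edges of Figure~\ref{fig:DAG} may differ from the ones below, and the two marked links are assumptions for illustration.

```python
# Edge list following the textual description of the example DAG.
edges = [
    ("age", "driving license"),
    ("driving license", "mode choice"),
    ("trip purpose", "mode choice"),
    ("age", "trip purpose"),                     # assumed link
    ("nbr cars household", "driving license"),   # assumed link
]
nodes = {n for e in edges for n in e} | {"type of survey"}

# Source nodes have no in-edges; sink nodes have no out-edges.
sources = {n for n in nodes if all(head != n for _, head in edges)}
sinks = {n for n in nodes if all(tail != n for tail, _ in edges)}
# "type of survey" has no edges at all, so it appears in both sets and is
# generated independently of the rest of the DAG.
```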
\begin{figure}
\caption{Example of a mock dataset}
\label{fig:data}
\caption{Example of a DAG used to represent the structure of the variables}
\label{fig:DAG}
\caption{Example of the structure of the data. Figure~\ref{fig:data} shows the structure of a table with six variables. Figure~\ref{fig:DAG} shows one possible DAG used to represent the variables in Figure~\ref{fig:data}.}
\label{fig:mock}
\end{figure}
Once the DAG $\mathcal{G}$ has been created, we can define several valuable sets. These sets are used later when representing the structure of the DAG in the generator.
\begin{itemize}
\item $\mathcal{A}(V_t)$: the set of ancestors of the variable $V_t$.
\item $\mathcal{D}(V_t)$: the set of direct ancestors of the variable $V_t$.
\item $\mathcal{S}(V_t)$: the set of source nodes leading to the variable $V_t$.
\item $\mathcal{E}(V_t)$: the set of in-edges of the variable $V_t$. \end{itemize}
If we use the variable \texttt{mode choice} from Figure~\ref{fig:DAG} as an example, we can define these different sets: \begin{itemize}
\item $\mathcal{A}(\texttt{mode choice}) = \left\{ \texttt{nbr cars household};\ \texttt{age};\ \texttt{driving license};\ \texttt{trip purpose}\right\}$
\item $\mathcal{D}(\texttt{mode choice}) = \left\{ \texttt{driving license};\ \texttt{trip purpose} \right\}$
\item $\mathcal{S}(\texttt{mode choice}) = \left\{\texttt{nbr cars household};\ \texttt{age}\right\}$
\item $\mathcal{E}(\texttt{mode choice}) = \left\{ \texttt{driving license} \rightarrow \texttt{mode choice};\ \texttt{trip purpose} \rightarrow \texttt{mode choice} \right\}$ \end{itemize}
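These four sets can be derived mechanically from the edge list of $\mathcal{G}$. The following pure-Python sketch (with illustrative edges assumed to match the example DAG) reproduces the sets listed above:

```python
# Illustrative edge list for the example DAG (an assumption, for demonstration).
edges = [("nbr cars household", "driving license"),
         ("age", "driving license"),
         ("age", "trip purpose"),
         ("driving license", "mode choice"),
         ("trip purpose", "mode choice")]

def direct_ancestors(v):                       # D(v)
    return {a for a, b in edges if b == v}

def ancestors(v):                              # A(v): transitive closure of D
    out, frontier = set(), direct_ancestors(v)
    while frontier:
        out |= frontier
        frontier = {a for f in frontier for a in direct_ancestors(f)} - out
    return out

def source_nodes(v):                           # S(v): ancestors with no ancestor
    return {a for a in ancestors(v) if not direct_ancestors(a)}

def in_edges(v):                               # E(v)
    return {(a, b) for a, b in edges if b == v}

assert direct_ancestors("mode choice") == {"driving license", "trip purpose"}
assert source_nodes("mode choice") == {"nbr cars household", "age"}
```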
\subsubsection{Representation of the DAG} \label{sec:LSTM}
As explained in the introduction of this section, the DAG is used to represent the causal links between the variables. Thus, we want to develop an architecture for the generator that mirrors the specified DAG. Tabular data cannot be considered sequential data since the order of the variables in a dataset is usually arbitrary. However, the DAG allows us to have a sequence of variables with a specific order. We can, thus, use neural network models that have been shown to work well with this type of data. More specifically, we use Long Short Term Memory (LSTM) cells~\citep{hochreiter_long_1997}, a type of recurrent neural network, to generate synthetic values for each variable $V_t$. We denote the LSTM cell associated with the variable $V_t$ by $\mathbf{LSTM_t}$. The advantage of using recurrent neural networks is that the previous output affects the current state of the neural network. Using the sequence defined by the DAG $\mathcal{G}$, we can, thus, use previous outputs, \emph{i.e.} synthetic values of previous variables, to influence the generation process of a given variable $V_t$.
The key elements to an LSTM cell are the cell state and the multiple gates used to protect and control the cell state, as shown in Figure~\ref{fig:comp_LSTM}. For conciseness, we do not present a detailed overview of the mathematical operations of an LSTM cell. For a full description, we direct the reader to \cite{gers_learning_2000}. In Figure~\ref{fig:comp_LSTM}, the input cell state is characterized by $\vec{C}_{t-1}$ and the output cell state by $\vec{C}_t$. The cell will receive an input that is the concatenation between the output of the previous cell ($\vec{h}_{t-1}$) and an input vector ($\vec{x}_t$). It is thus given by: \begin{equation}
\vec{i}_t = \vec{h}_{t-1} \oplus \vec{x}_t \end{equation} This input vector passes through three different gates that transform the cell state as necessary: \begin{enumerate}
\item {\bf the forget gate} is used to decide which old information is forgotten in the cell state.
\item {\bf the input gate} is used to decide which new information is stored in the new cell state.
\item {\bf the output gate} is used to decide the output of the cell using information from both the input $\vec{i}_t$ and the new cell state $\vec{C}_t$. \end{enumerate} The modeler has to define the size of the hidden layers $N_h$ in the LSTM cell and the batch size $N_b$. The former defines the size of the output vector $\vec{h}_t$ as well as the cell state $\vec{C}_{t}$. The latter corresponds to the number of data points fed into the network. Therefore, we usually define the output $\vec{h}_t$ and cell state $\vec{C}_t$ as tensors of size $N_h \times N_b$. The input $\vec{x}_t$ usually takes a different size and is thus characterized by a tensor of size $N_x \times N_b$ (the batch size has to remain the same across all the tensors).
\begin{figure}
\caption{Main components of an LSTM cell following \cite{gers_learning_2000}. The blue hexagons represent variables, the red rectangles neural network layers, and the orange circles mathematical operations. The different gates used to transform the input and the cell state are shown in dark gray.}
\label{fig:comp_LSTM}
\end{figure}
In the case of the DATGAN, we have to modify the inputs and outputs of the LSTM cell according to the principles of GANs. Figure~\ref{fig:lstm_datgan} provides a schema of the LSTM structure in the DATGAN. The internals of the LSTM cell $\bf LSTM_t$ are the same as those shown in Figure~\ref{fig:comp_LSTM}. The first main modification concerns the inputs. Indeed, the generator in a GAN takes random noise as an input instead of an input vector such as $\vec{x}_t$. In addition, we add an attention vector to the input tensor $\vec{i}_t$. The idea behind the attention vector is to keep information from intermediate encoders and pass it on to a new encoder. This mimics cognitive attention and, thus, helps with the long-term memory of the LSTM cells. Therefore, the input tensor $\vec{i}_t$ corresponds to the concatenation of three tensors:
\begin{figure}\caption{Schema of the modified LSTM cell $\bf LSTM_t$ used in the DATGAN generator.}\label{fig:lstm_datgan}
\end{figure}
\begin{itemize}
\item $\vec{z}_{t}$ is a tensor of Gaussian noise with dimension $N_z \times N_b$. For each source node in the DAG $\mathcal{G}$, we randomly draw values from $\mathcal{N}(0,1)$. For all the other variables $V_t$, the noise vector is a concatenation of the noise from the source nodes passed through a fully connected layer without any activation function, \emph{i.e.}
\begin{equation}
\vec{z}_t = \texttt{FC}\left(\bigoplus_{k\in\mathcal{S}(V_t)} \vec{z}_k, N_z\right)
\end{equation}
If two different variables $V_{t_1}$ and $V_{t_2}$ have the same source nodes, \emph{i.e.} $\mathcal{S}(V_{t_1}) = \mathcal{S}(V_{t_2})$, the noise tensor is the same for both variables, \emph{i.e.} $\vec{z}_{t_1} = \vec{z}_{t_2}$. There are two reasons to apply such a rule:
\begin{inlinelist}
\item it avoids pointless computation by not creating redundant noise variables;
\item if two variables share a single common source node, they already receive the same noise as input; this rule extends the same behavior to the case of more than one common source node.
\end{inlinelist}
\item $\vec{f}_{t-1}$ can be compared to the previous output tensor $\vec{h}_{t-1}$ in Figure~\ref{fig:comp_LSTM}.
However, one of the differences between the DATGAN and a usual LSTM network is that we do not directly use the output tensor $\vec{h}_t$ of the LSTM. Indeed, as shown in Figure~\ref{fig:lstm_datgan}, we transform it into the encoded synthetic variable $\widehat{\vec{v}}_t^{\bf synth}$. Since $\widehat{\vec{v}}_t^{\bf synth}$ does not have a standard size, we need to transform it to obtain the usable tensor $\vec{f}_t$ of dimension $N_h \times N_b$. We can thus say that $\vec{f}_t$ corresponds to the transformed output of the LSTM cell $\bf LSTM_t$. If the variable $V_t$ is a source node, this tensor is randomly initialized and learned during the optimization process.
\item $\vec{a}_{t}$ is an attention tensor that allows the cell to learn which previous cell outputs are relevant to the current input. It is defined as a weighted average over the outputs of all ancestor cells. If the current variable $V_t$ has at least one ancestor, we learn an attention weight vector $\alpha_t\in\mathbb{R}^{|\mathcal{A}(V_t)|}$. The context tensor is thus computed as:
\begin{equation}
\vec{a}_t = \sum_{k\in\mathcal{A}(V_t)} \frac{\exp\alpha_{t}^{(k)}}{\sum_{j\in\mathcal{A}(V_t)}\exp\alpha_{t}^{(j)}}\vec{h}_k
\end{equation}
This context tensor has a dimension $N_h \times N_b$. If the variable $V_t$ is a source node, we define $\vec{a}_t$ as a zero tensor of dimension $N_h \times N_b$. \end{itemize}
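The attention computation above can be sketched in a few lines of numpy (illustrative shapes and untrained random weights; not the authors' implementation):

```python
import numpy as np

# Illustrative shapes: N_h hidden units, N_b batch size, |A(V_t)| = 3 ancestors.
N_h, N_b = 8, 4
rng = np.random.default_rng(1)
h_ancestors = [rng.standard_normal((N_h, N_b)) for _ in range(3)]  # outputs h_k
alpha_t = rng.standard_normal(3)            # learned attention weight vector

# Softmax over the ancestors, then weighted average of their outputs.
weights = np.exp(alpha_t) / np.exp(alpha_t).sum()
a_t = sum(w * h for w, h in zip(weights, h_ancestors))
```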
LSTM cells are originally designed to work in sequence, \emph{i.e.} each cell is linked to a unique following cell. However, as shown in the DAG in Figure~\ref{fig:DAG}, some variables can have multiple direct ancestors. For example, the variable \texttt{mode choice} has two direct ancestors: \texttt{driving license} and \texttt{trip purpose}. We, thus, need a way to connect multiple LSTM cells. If one cell has multiple out-edges, we simply send its output to each of the following cells, \emph{e.g.} the inputs that the cells for the variables \texttt{driving license} and \texttt{trip purpose} receive from the variable \texttt{age} are the same. The main issue lies in having multiple cell inputs. The attention tensor $\vec{a}_t$ and the noise tensor $\vec{z}_t$ are defined based on all the ancestors of the current variable. Therefore, we do not have to change these definitions. On the other hand, the previous cell state $\vec{C}_{t-1}$ and the transformed output $\vec{f}_{t-1}$ are defined to work in sequence. Therefore, we define the multi-input cell state and multi-input transformed output by concatenating the cell states and transformed outputs of the direct ancestors and passing them through a fully connected layer to resize them: \begin{align}
\vec{C}_{t-1} &= \texttt{FC}\left(\bigoplus_{k\in\mathcal{D}(V_t)} \vec{C}_k, N_h\right) \\
\vec{f}_{t-1} &= \texttt{FC}\left(\bigoplus_{k\in\mathcal{D}(V_t)} \vec{f}_k, N_h\right) \end{align} During the training process, the weights of the two layers have to be learned. We can, thus, feed the LSTM cell with homogeneous inputs.
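A minimal numpy sketch of the multi-input resizing above (untrained random weights and illustrative shapes; in the model, the layer weights are learned during training):

```python
import numpy as np

# Concatenate the cell states of two direct ancestors along the feature axis,
# then resize back to N_h with a fully connected layer.
N_h, N_b = 8, 4
rng = np.random.default_rng(2)
C_ancestors = [rng.standard_normal((N_h, N_b)) for _ in range(2)]  # D(V_t)

concat = np.concatenate(C_ancestors, axis=0)        # (2*N_h, N_b)
W = rng.standard_normal((N_h, concat.shape[0]))     # FC weights (learned)
b = np.zeros((N_h, 1))
C_prev = W @ concat + b                             # (N_h, N_b): homogeneous input
```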
\begin{algorithm*}[h!] \caption{Ordering of the variables using a DAG}\label{algo:ordering} \begin{algorithmic}[1] \Require{DAG: $\mathcal{G}$} \Ensure{ordered list of variables: \texttt{treated}} \State Compute a dictionary \verb+in_edges+ with $V_t$ as the key and $\mathcal{E}(V_t)$ as the value for all $t=1,\ldots,N_{V}$. \State Initialize \verb+untreated+ as a set with all the variables names and \verb+treated+ an empty list \State Initialize \verb+to_treat+ as a list containing all the variables with 0 in-edges
\While {$|\texttt{untreated}|>0$}
\ForAll {$n \in \texttt{to\_treat}$}
\State Remove $n$ from \verb+untreated+ and add it to \verb+treated+
\EndFor
\State Set \verb+to_treat+ as an empty list
\ForAll {$e\in \mathcal{G}.E$}\Comment{$e$ is an edge and it is a tuple with 2 values: the out-vertex and the in-vertex}
\State Initialize boolean \verb+all_ancestors_treated+ to True
\ForAll {$\ell \in \texttt{in\_edges}[e[1]]$}\Comment{$\ell$ is an in-edge; $\ell[0]$ is its out-vertex}
\If {$\ell[0] \notin \texttt{treated}$}
\State Set \verb+all_ancestors_treated+ to False
\EndIf
\EndFor
\If {$e[0]\in\texttt{treated}$ \textbf{and} $\texttt{all\_ancestors\_treated}$\text{ is True} \textbf{and} $e[1]\notin\texttt{treated}$ \textbf{and} $e[1]\notin\texttt{to\_treat}$}
\State Add $e[1]$ to the list \verb+to_treat+
\EndIf
\EndFor \EndWhile \State \textbf{return} \verb+treated+ \end{algorithmic} \end{algorithm*}
One final issue remains in constructing the structure of the generator. Indeed, we know how to generate each variable separately and connect them using the DAG and the multi-input LSTM cells. However, while building the generator's structure, we cannot start with any random variable in the DAG. Indeed, as per the definition of the inputs of the LSTM cell, the cells of the ancestors $\mathcal{A}(V_t)$ must have been built first. For example, the attention tensor uses the outputs of all the ancestors' cells. Therefore, we need an algorithm that creates an ordered list based on the provided DAG $\mathcal{G}$. This algorithm is given in Algorithm~\ref{algo:ordering}. Its goal is to take the DAG $\mathcal{G}$ and return an ordered list of the variables $V_t$ such that all the ancestors of a given variable have a smaller index in the list, \emph{i.e.} they appear earlier in the list. The algorithm is built iteratively. The idea is to define two collections, one with untreated nodes (\texttt{untreated}), containing all the nodes at the beginning of the algorithm, and one with treated nodes (\texttt{treated}), empty at the beginning. We start by selecting all source nodes and adding them to a list named \texttt{to\_treat}. Then, while there are still untreated nodes, \emph{i.e.} while there are still nodes to be added to the final list, we first move all nodes from the \texttt{to\_treat} list to the \texttt{treated} list. We then go through each edge in the DAG $\mathcal{G}$ and check whether all the direct ancestors of its in-vertex have been treated. If that is the case, we add the in-vertex to the \texttt{to\_treat} list so that it is moved to the final list in the next iteration. The algorithm stops when all the nodes have been added to the \texttt{treated} list.
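Algorithm~\ref{algo:ordering} can be sketched in pure Python as follows (same logic; it assumes the input graph is a valid DAG, otherwise the loop never terminates). The example node and edge names are illustrative.

```python
# Pure-Python sketch of the variable-ordering algorithm.
def order_variables(nodes, edges):
    in_edges = {n: [e for e in edges if e[1] == n] for n in nodes}
    untreated = set(nodes)
    treated = []
    to_treat = [n for n in nodes if not in_edges[n]]   # source nodes
    while untreated:
        for n in to_treat:
            untreated.discard(n)
            treated.append(n)
        to_treat = []
        for e in edges:  # e = (out-vertex, in-vertex)
            all_ancestors_treated = all(l[0] in treated for l in in_edges[e[1]])
            if (e[0] in treated and all_ancestors_treated
                    and e[1] not in treated and e[1] not in to_treat):
                to_treat.append(e[1])
    return treated

order = order_variables(
    ["age", "nbr cars household", "driving license", "trip purpose",
     "mode choice"],
    [("nbr cars household", "driving license"), ("age", "driving license"),
     ("age", "trip purpose"), ("driving license", "mode choice"),
     ("trip purpose", "mode choice")])
assert order[-1] == "mode choice"
```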
Now that every component has been defined for the generator, we can build it following the ordered list provided by Algorithm~\ref{algo:ordering}. Each time we create an LSTM cell for the variable $V_t$, as in Figure~\ref{fig:lstm_datgan}, we check the direct ancestors $\mathcal{D}(V_t)$. If there is more than one direct ancestor, we apply the multi-input technique to the LSTM cell. The generator is finished once one LSTM cell has been created for each variable in the ordered list.
\subsection{Discriminator} \label{sec:discriminator}
As seen in Figure~\ref{fig:DATGAN}, the generator is used to create the synthetic dataset $\bf \widehat{T}_{synth}$. The role of the discriminator is to compare this dataset with the encoded original dataset $\bf \widehat{T}$. We, thus, want to train the discriminator to distinguish between original and synthetic data. That way, the generator has to produce better synthetic data if it wants to fool the discriminator.
Following \cite{xu_synthesizing_2018}, we use a fully connected neural network with $N_{L}$ layers for the discriminator, where the internal layers, for $i=1,\ldots,N_L$, are given by: \begin{align}
\widehat{\vec{l}}_{i} &= \texttt{FC}\left(\vec{l}_{i-1}, N_l\right)\\
\vec{l}_i &= \texttt{LeakyReLU}\left(\texttt{BN}\left(\widehat{\vec{l}}_{i}\oplus\texttt{div}\left(\widehat{\vec{l}}_{i}\right)\right)\right) \end{align} where \begin{inlinelist}
\item $\texttt{div}(\cdot)$ represents the mini-batch discrimination vector presented by \cite{salimans_improved_2016};
\item $\texttt{BN}(\cdot)$ corresponds to the batch normalization;
\item $\texttt{LeakyReLU}(\cdot)$ is the leaky rectified linear activation function. \end{inlinelist} The output of the discriminator is computed using a fully connected layer with a size of 1 to return an unbounded scalar: \begin{equation}
l_{D} = \texttt{FC}\left(\vec{l}_{N_L}, 1\right) \end{equation} The input vector $\vec{l}_0$ of the discriminator is different depending on the data it is using: \begin{itemize}
\item For the original dataset, $\vec{l}_0$ corresponds to the concatenation of all the column vectors $\left\{ \widehat{\vec{v}}_{1:N_{V}}\right\}$ after an encoding step, as shown in Figure~\ref{fig:DATGAN}.
\item For the synthetic dataset, $\vec{l}_0$ corresponds to the concatenation of all the usable outputs $\left\{ \widehat{\vec{v}}^{\bf synth}_{1:N_{V}}\right\}$ given by the generator, as shown in Figure~\ref{fig:lstm_datgan}. \end{itemize}
However, as seen in Figure~\ref{fig:lstm_datgan}, a label smoothing step has to be performed before feeding these matrices to the discriminator. This step is discussed in Section~\ref{sec:discr_input}.
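One internal discriminator layer can be sketched in numpy. This is an illustrative simplification: the $\texttt{div}(\cdot)$ term is replaced here by a per-feature batch standard deviation, a crude stand-in for the full mini-batch discrimination of \cite{salimans_improved_2016}, and batch normalization is applied without learned scale and shift parameters.

```python
import numpy as np

# One internal layer: FC, diversity feature, concatenation, BN, LeakyReLU.
# Shapes: N_b rows (batch), N_l features per layer.
rng = np.random.default_rng(3)
N_b, N_l = 16, 32
l_prev = rng.standard_normal((N_b, 20))              # l_{i-1}

W = rng.standard_normal((20, N_l))
b = np.zeros(N_l)
l_hat = l_prev @ W + b                               # FC(l_{i-1}, N_l)

# Simplified div(.): broadcast the per-feature batch std to every row.
div = np.broadcast_to(l_hat.std(axis=0), l_hat.shape)
x = np.concatenate([l_hat, div], axis=1)             # l_hat (+) div(l_hat)

x = (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-8)    # BN, inference-style
l_i = np.where(x > 0, x, 0.2 * x)                    # LeakyReLU (slope 0.2)
```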
\subsection{Loss function} \label{sec:loss_function}
The loss function sets up the game between the discriminator and the generator. The discriminator $D$ aims to maximize the loss function when comparing the synthetic data produced by the generator against the original data. Meanwhile, the generator $G$ aims to minimize the same loss function by generating synthetic data that fools the discriminator. The generator thus learns from the discriminator in this process through backpropagation of the discriminator loss.
Since the loss function drives the optimization process to obtain the best possible model, we argue that our model does not have to be characterized by a single loss function. Therefore, we systematically test three different loss functions. The first one is the standard cross-entropy loss defined by~\cite{goodfellow_generative_2014}, the second one is the Wasserstein or Earth-Mover distance defined by~\cite{arjovsky_wasserstein_2017}, and the third one is the Wasserstein distance with Gradient-Penalty defined by~\cite{gulrajani_improved_2017}.
\paragraph{Standard loss} The first loss function is the standard loss function used in the original GAN by~\cite{goodfellow_generative_2014}. We name it $\mathcal{L}^{\texttt{SGAN}}$ with SGAN standing for Standard GAN. It is given by: \begin{equation}
\label{eq:std_loss}
\min_{G} \max_{D} \mathcal{L}^{\texttt{SGAN}}(D,G) = \mathop{\mathbb{E}}_{\left\{\widehat{\vec{v}}_{1:N_{V}}\right\}\sim\mathbb{P}({\bf\widehat{T}})} \log D\left(\widehat{\vec{v}}_{1:N_{V}}\right) + \mathop{\mathbb{E}}_{\vec{z}\sim\mathcal{N}(0,1)} \log\left(1-D(G(\vec{z}))\right) \end{equation}
This loss function requires the discriminator to produce, for each data point, the probability of it being original rather than synthetic. However, as defined in Section~\ref{sec:discriminator}, the discriminator outputs an unbounded scalar and not a probability. To use this discriminator with this loss function, we thus pass its output through an additional sigmoid layer to produce probabilities bounded in $[0,1]$.
As explained by~\cite{goodfellow_generative_2014}, $\log\left(1-D(G(\vec{z}))\right)$ tends to saturate during training. Therefore, rather than training $G$ to minimize the full loss function, we train it to maximize $\log D(G(\vec{z}))$. We can thus define a separate loss function for each network; the goal is to minimize both losses simultaneously during training. They are given by: \begin{align}
\label{eq:SGAN_gen}
\mathcal{L}^{\texttt{SGAN}}_{G} &= -\mathop{\mathbb{E}}_{\vec{z}\sim\mathcal{N}(0,1)} \log D(G(\vec{z})) \\
\mathcal{L}^{\texttt{SGAN}}_{D} &= -\mathop{\mathbb{E}}_{\left\{\widehat{\vec{v}}_{1:N_{V}}\right\}\sim\mathbb{P}({\bf \widehat{T}})} \log D\left(\widehat{\vec{v}}_{1:N_{V}}\right) + \mathop{\mathbb{E}}_{\vec{z}\sim\mathcal{N}(0,1)} \log D(G(\vec{z})) \end{align} As suggested by the authors, we train our models using the Adam optimizer~\citep{kingma_adam_2014}.
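As a minimal numerical sketch (not the actual training code), the two separate SGAN losses can be computed from raw discriminator scores as follows; the score vectors \texttt{d\_real} and \texttt{d\_fake} are made-up values standing in for $D(\widehat{\vec{v}}_{1:N_V})$ and $D(G(\vec{z}))$ on one batch, and the sigmoid reproduces the additional output layer described above:

```python
import numpy as np

def sigmoid(x):
    # The discriminator outputs an unbounded scalar; a sigmoid turns it
    # into a probability, as required by the standard GAN loss.
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative raw discriminator scores on a batch of original rows
# and a batch of synthetic rows.
d_real = np.array([2.0, 1.5, 3.0])    # D(v) on original data
d_fake = np.array([-1.0, 0.5, -2.0])  # D(G(z)) on synthetic data

p_real, p_fake = sigmoid(d_real), sigmoid(d_fake)

# Non-saturating generator loss: -E[log D(G(z))]
loss_G = -np.mean(np.log(p_fake))
# Discriminator loss, following the equations above:
# -E[log D(v)] + E[log D(G(z))]
loss_D = -np.mean(np.log(p_real)) + np.mean(np.log(p_fake))
```

A well-trained discriminator drives \texttt{p\_real} toward 1 and \texttt{p\_fake} toward 0, while the generator pushes \texttt{p\_fake} back up.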
\paragraph{Wasserstein loss} The second loss function was introduced in the Wasserstein GAN (WGAN) by~\cite{arjovsky_wasserstein_2017}. It is defined using the Earth-Mover distance: \begin{equation}
\label{eq:wgan_loss}
\min_{G} \max_{D} \mathcal{L}^{\texttt{WGAN}}(D,G) = \mathop{\mathbb{E}}_{\left\{\widehat{\vec{v}}_{1:N_{V}}\right\}\sim\mathbb{P}({\bf \widehat{T}})} D\left(\widehat{\vec{v}}_{1:N_{V}}\right) - \mathop{\mathbb{E}}_{\vec{z}\sim\mathcal{N}(0,1)} D(G(\vec{z})) \end{equation} There are multiple advantages to using this loss function instead of the standard loss: \begin{inlinelist}
\item the main advantage is that the discriminator becomes a critic, since it no longer needs to produce a 0--1 output. Indeed, we can use the discriminator output $l_D$ as it is defined, which results in fewer vanishing gradients and an easier learning process for the generator $G$;
\item contrary to the SGAN loss, the loss function correlates with the quality of the samples. It is, thus, possible to determine when the GAN has converged. \end{inlinelist}
We can, now, define the separate loss functions for both networks as: \begin{align}
\mathcal{L}^{\texttt{WGAN}}_{G} &= -\mathop{\mathbb{E}}_{\vec{z}\sim\mathcal{N}(0,1)} D(G(\vec{z})) \\
\mathcal{L}^{\texttt{WGAN}}_{D} &= -\mathop{\mathbb{E}}_{\left\{\widehat{\vec{v}}_{1:N_{V}}\right\}\sim\mathbb{P}({\bf \widehat{T}})} D\left(\widehat{\vec{v}}_{1:N_{V}}\right) + \mathop{\mathbb{E}}_{\vec{z}\sim\mathcal{N}(0,1)} D(G(\vec{z})) \end{align} As suggested by the authors, we train our models using the RMSProp optimizer~\citep{tieleman_lecture_2012}.
\paragraph{Wasserstein loss with gradient penalty} The loss for the WGAN-GP is the same as the Wasserstein loss with the addition of a gradient penalty~\citep{gulrajani_improved_2017}. It is given by: \begin{equation}
\label{eq:wggp_loss}
\min_{G} \max_{D} \mathcal{L}^{\texttt{WGGP}}(D,G) = \mathop{\mathbb{E}}_{\left\{\widehat{\vec{v}}_{1:N_{V}}\right\}\sim\mathbb{P}({\bf \widehat{T}})} D\left(\widehat{\vec{v}}_{1:N_{V}}\right) - \mathop{\mathbb{E}}_{\vec{z}\sim\mathcal{N}(0,1)} D(G(\vec{z})) + \lambda \mathop{\mathbb{E}}_{\widetilde{\vec{v}}\sim\mathbb{P}({\bf \widehat{T}}, G(\vec{z}))}\left(\left\lVert \nabla_{\widetilde{\vec{v}}} D(\widetilde{\vec{v}})\right\rVert_2 -1 \right)^2 \end{equation} where $\lambda$ is a parameter defined by the modeler. The main issue with the WGAN is that it needs to enforce a Lipschitz constraint on the critic, which it does by clipping the weights of the critic. \cite{gulrajani_improved_2017} show that this leads to undesired behaviour in the generator samples, and fix the issue by adding a gradient penalty on the critic instead. In Equation~\ref{eq:wggp_loss}, the mid-value $\widetilde{\vec{v}}$ is sampled uniformly along straight lines between pairs of points sampled from the encoded original dataset ${\bf \widehat{T}}$ ($\widehat{\vec{v}}$) and the generated data $G(\vec{z})$ ($\widehat{\vec{v}}^{\bf synth}$).
The separate loss functions for each network are, therefore, defined as: \begin{align}
\mathcal{L}^{\texttt{WGGP}}_{G} &= -\mathop{\mathbb{E}}_{\vec{z}\sim\mathcal{N}(0,1)} D(G(\vec{z})) \\
\mathcal{L}^{\texttt{WGGP}}_{D} &= -\mathop{\mathbb{E}}_{\left\{\widehat{\vec{v}}_{1:N_{V}}\right\}\sim\mathbb{P}({\bf \widehat{T}})} D\left(\widehat{\vec{v}}_{1:N_{V}}\right) + \mathop{\mathbb{E}}_{\vec{z}\sim\mathcal{N}(0,1)} D(G(\vec{z})) + \lambda \mathop{\mathbb{E}}_{\widetilde{\vec{v}}\sim\mathbb{P}({\bf \widehat{T}}, G(\vec{z}))}\left(\left\lVert \nabla_{\widetilde{\vec{v}}} D(\widetilde{\vec{v}})\right\rVert_2 -1 \right)^2 \end{align} Following \cite{gulrajani_improved_2017}, we replace the batch normalization in the discriminator with a layer normalization~\citep{ba_layer_2016}, we set $\lambda = 10$, and we train both models using the Adam optimizer~\citep{kingma_adam_2014}.
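To make the gradient-penalty term concrete, the following sketch samples the mid-values $\widetilde{\vec{v}}$ uniformly along straight lines and evaluates the penalty for a toy linear critic $D(\vec{v}) = \vec{a}\cdot\vec{v}$, whose gradient is the constant vector $\vec{a}$. In actual training the gradient comes from automatic differentiation; all numbers here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy batches: encoded original rows and generated rows, dimension 4.
v_real = rng.normal(size=(8, 4))
v_fake = rng.normal(size=(8, 4))

# Sample v_tilde uniformly along straight lines between paired points.
eps = rng.uniform(size=(8, 1))
v_tilde = eps * v_real + (1.0 - eps) * v_fake

# Toy linear critic D(v) = a . v, so grad_v D(v) = a for every v.
a = np.array([0.5, -0.5, 1.0, 0.25])
grad_norms = np.full(len(v_tilde), np.linalg.norm(a))

lam = 10.0  # lambda = 10, as set in the text
gradient_penalty = lam * np.mean((grad_norms - 1.0) ** 2)
```

The penalty is zero exactly when the critic has unit gradient norm along the sampled lines, which is how WGAN-GP softly enforces the 1-Lipschitz constraint.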
Following \cite{xu_synthesizing_2018}, we add a Kullback-Leibler (KL) divergence term to all the generator losses. For two discrete probability distributions $P$ and $Q$ defined on the same probability space $\mathcal{X}$, the KL divergence is given by:
\begin{equation}
\text{KL}\left(P, Q\right) = \sum_{x\in\mathcal{X}} P(x) \log\left(\frac{P(x)}{Q(x)}\right) \end{equation}
Therefore, we can use this divergence for any discrete probability distributions in the original and synthetic datasets. The use of this divergence has two main consequences: \begin{inlinelist}
\item it gives the generator a boost at the start of training, since the divergence explicitly pushes the discrete probability distributions to be as close as possible;
\item it makes the model more stable during training. \end{inlinelist} We discuss which variables are concerned by this divergence in Section~\ref{sec:encoding} for the original dataset $\bf \widehat{T}$ and in Section~\ref{sec:gen_output} for the synthetic dataset $\bf \widehat{T}_{synth}$.
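A minimal sketch of the KL term for two discrete distributions; the category probabilities are illustrative, and the small \texttt{eps} guards against zero probabilities in the generator output:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(P || Q) for two discrete distributions on the same support.

    eps guards against log(0) when a category receives a probability
    of exactly zero.
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

# Marginal over four categories in the original vs. synthetic data.
p_orig = np.array([0.4, 0.3, 0.2, 0.1])
p_synth = np.array([0.25, 0.25, 0.25, 0.25])
kl = kl_divergence(p_orig, p_synth)
```

The divergence is zero when the two distributions coincide and grows as the synthetic marginal drifts away from the original one, which is what makes it a useful auxiliary generator loss.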
\subsection{Data processing} \label{sec:data_processing}
Tabular data are generally composed of multiple data types, as seen in Figure~\ref{fig:data}. In the context of this article, we consider two different variable types: \begin{description}
\item[Continuous data] corresponds to a random variable following a continuous distribution (\emph{e.g.} the distance to travel to a destination). The variable can then be rounded to obtain discrete values, as is usually the case with an individual's age.
\item[Categorical data] corresponds to all other types of data: \begin{itemize}
\item binary random variables (\emph{e.g.} whether someone is retired or not);
\item nominal random variables, \emph{i.e.} discrete random variables with three or more possible values, where there is no order nor notion of distance between the different values (\emph{e.g.} a color);
\item ordinal random variables, \emph{i.e.} discrete random variables with three or more possible values with a defined order (and possibly also a distance) between each possible value (\emph{e.g.} education level). \end{itemize} \end{description}
Typically, in Machine Learning, neural networks work with data ranging from $-1$ to $1$ or $0$ to $1$. However, the data types above are not bounded in this way. We thus need to encode the original dataset $\bf T$ into a dataset $\bf \widehat{T}$, as shown in Figure~\ref{fig:DATGAN}, that transforms the different data types into more homogeneous ones.
The table $\bf T$ contains $N_C$ continuous random variables $\left\{C_1,\ldots,C_{N_C}\right\}$ and $N_D$ categorical random variables $\left\{D_1,\ldots,D_{N_D}\right\}$ such that $N_C + N_D = N_V$. We can thus define the table $\bf T$ using vectors of continuous and categorical variables, \emph{i.e.} ${\bf T} = \left\{\vec{c}_{1:N_C}, \vec{d}_{1:N_D}\right\}$. Similarly, the synthetic dataset is defined as ${\bf T_{synth}} = \left\{\vec{c}^{\bf synth}_{1:N_C}, \vec{d}^{\bf synth}_{1:N_D}\right\}$.
Since we are considering two different data types in the table $\bf T$, we cannot process them the same way. Thus, Section~\ref{sec:encoding} explains how the encoding is done for each type. Section~\ref{sec:gen_output} explains how these types of data are generated. Section~\ref{sec:discr_input} discusses how the synthetic data and the original data are passed to the discriminator. Finally, Section~\ref{sec:sampling} shows how the data is sampled from the generator's output to create the final synthetic dataset.
\subsubsection{Encoding} \label{sec:encoding}
The DATGAN model takes as input only $[-1,1]$ or $[0,1]$ bounded vectors. Therefore, we need to encode unbounded continuous variables and categorical variables to be processed by the GAN. Continuous data tend to follow multimodal distributions. We thus build on the previous methodology of \cite{xu_modeling_2019} who apply a Variational Gaussian Mixture (VGM) model~\citep{bishop_pattern_2006} to cluster continuous values into a discrete number of Gaussian mixtures. In this work, we develop this further by automatically determining the number of components from the data.
\begin{algorithm*}[h!] \caption{Continuous variables preprocessing}\label{algo:cont_var} \begin{algorithmic}[1] \Require{List of float values ($\vec{c}_t$)} \Ensure{Matrix of probabilities ($\vec{p}_t$) and values ($\vec{w}_t$)} \State Set the initial number of modes $N_{m,t} = 10$ \While{True} \State Sample $\vec{s}_t$ from $\vec{c}_t$ and fit the \texttt{BayesianGaussianMixture} to $\vec{s}_t$ with $N_{m,t}$ modes \State Predict the class on $\vec{s}_t$ and compute $N_{\text{pred}}$, the number of unique classes predicted by the VGM \State Define $N_{\text{weights}}$, the number of weights of the model above a threshold $\varepsilon_w=0.01$ \If{$N_{\text{pred}} < N_{m,t}$ {\bf or} $N_{\text{weights}} < N_{m,t}$} \State $N_{m,t} = \min\left(N_{\text{pred}}, N_{\text{weights}}\right)$ \Else \State break \EndIf \EndWhile \State Fit the \texttt{BayesianGaussianMixture} to $\vec{c}_t$ with $N_{m,t}$ modes. \State Means and standard deviations of the $N_{m,t}$ Gaussian mixtures are given by \[ \vec{\eta}_t = \left(\eta_t^{(1)},\ldots,\eta_t^{(N_{m,t})}\right) \ \text{and}\ \vec{\sigma}_t = \left(\sigma_t^{(1)},\ldots,\sigma_t^{(N_{m,t})}\right) \] \State Compute the posterior probability of $c_{t,j}$ coming from each of the $N_{m,t}$ mixtures as a vector $\vec{p}_{t,j} = \left(p_{t,j}^{(1)},\ldots,p_{t,j}^{(N_{m,t})}\right)$. It corresponds to a normalized probability distribution over the $N_{m,t}$ Gaussian components. \State Normalize $c_{t,j}$ for each Gaussian mixture using Equation~\ref{eq:normalize}. \State Clip each value $w_{t,j}^{(k)}$ between -0.99 and 0.99 and set $\vec{w}_{t,j} = \left(w_{t,j}^{(1)}, \ldots, w_{t,j}^{(N_{m,t})}\right)$. \State \textbf{return} $\vec{p}_t$ and $\vec{w}_t$ \end{algorithmic} \end{algorithm*}
For each continuous variable $C_t$ in the dataset, we first train a VGM on a random subset of the data with a high number of components ($N_{m,t}=10$). We then determine how many components are needed to capture the distribution by comparing the component weights against a threshold and the number of predicted components $N_{\text{pred}}$ with the original number of components $N_{m,t}$. We repeat this process until convergence. Finally, we retrain the model on the entire column vector $\vec{c}_t$ using only the number of significant components. From this trained model, we extract the means $\vec{\eta}_t$ and standard deviations $\vec{\sigma}_t$ from the VGM. We can then normalize the values $c_{t,j}$ using the following formula: \begin{equation}
\label{eq:normalize}
w_{t,j}^{(k)} = \frac{c_{t,j}-\eta_t^{(k)}}{\delta \sigma_t^{(k)}}\quad \text{for } k = 1, \ldots, N_{m,t}, \end{equation} where $\delta$ is a parameter specified by the modeler. Following \cite{xu_synthesizing_2018}, we use a value of $\delta=2$ and clip the values of $w_{t,j}^{(k)}$ between -0.99 and 0.99. At the same time, we compute the posterior probability vectors $\vec{p}_{t,j}$ that the value $c_{t,j}$ belongs to each of the $N_{m,t}$ mixtures. Thus, each value $c_{t,j}$ in $\vec{c}_t$ is represented by the vector of probabilities $\vec{p}_{t,j} \in [0,1]^{N_{m,t}}$ and the vector of values $\vec{w}_{t,j} \in [-0.99,0.99]^{N_{m,t}}$. Algorithm~\ref{algo:cont_var} summarizes the procedure used to preprocess continuous variables.
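The encoding of a single continuous value can be sketched as follows, assuming the VGM has already been fitted; the means, standard deviations, and mixture weights below are illustrative placeholders rather than fitted values:

```python
import numpy as np

# Assume a fitted VGM with N_m = 2 components (illustrative parameters).
eta = np.array([0.0, 10.0])      # component means
sigma = np.array([1.0, 2.0])     # component standard deviations
weights = np.array([0.6, 0.4])   # mixture weights
delta = 2.0                      # delta = 2, as in the text

def encode_continuous(c):
    """Return (p, w) for one value c of a continuous variable."""
    # Posterior probability of c under each Gaussian component.
    dens = weights * np.exp(-0.5 * ((c - eta) / sigma) ** 2) \
        / (sigma * np.sqrt(2.0 * np.pi))
    p = dens / dens.sum()
    # Normalization of Equation (eq:normalize), clipped to [-0.99, 0.99].
    w = np.clip((c - eta) / (delta * sigma), -0.99, 0.99)
    return p, w

p, w = encode_continuous(0.5)
```

For a value close to the first mode, nearly all posterior mass lands on that component, while the normalized value for the distant component saturates at the clipping bound.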
We transform categorical variables\footnote{The same treatment is applied to categorical and boolean variables due to the label smoothing explained in Section~\ref{sec:discr_input}.} using one-hot encoding. Let $\vec{d}_t$ denote the realizations of the random variable $D_t$; $\vec{d}_t$ is transformed into a $|D_t|$-dimensional one-hot vector $\vec{o}_t$, where $|D_t|$ corresponds to the number of unique categories in $D_t$.
We can thus convert the initial table ${\bf T}$ into an intermediate table ${\bf \widehat{T}} = \left\{\vec{w}_1, \vec{p}_1, \ldots, \vec{w}_{N_C}, \vec{p}_{N_C},\vec{o}_1, \ldots, \vec{o}_{N_D}\right\}$. As a simplification, we write ${\bf \widehat{T}} = \left\{\vec{w}_{1:N_C}, \vec{p}_{1:N_C}, \vec{o}_{1:N_D}\right\}$. The dimension of this new table is given by $\sum_{t=1}^{N_C}2N_{m,t} + \sum_{t=1}^{N_D}|D_t|$. While this new encoded table is larger than the original table, we can now define the KL divergence on multiple variables. Indeed, the vector $\vec{p}_{t,j}$ corresponds to a discrete probability distribution for a given row, so we can apply the KL divergence to this term. In addition, the one-hot encoded vector $\vec{o}_{t,j}$ also corresponds to a discrete probability distribution; the only difference is that this distribution is composed of only 1s and 0s. Nevertheless, the KL divergence is also applicable to this variable.
\subsubsection{Generator output} \label{sec:gen_output}
In Figure~\ref{fig:lstm_datgan}, we show how the LSTM cell $\bf LSTM_t$ produces an output $\vec{h}_t$ before it is transformed into the encoded synthetic variable $\widehat{\vec{v}}_t^{\bf synth}$. However, as shown in Section~\ref{sec:encoding}, we distinguish between two variable types. The output transformer in Figure~\ref{fig:lstm_datgan} thus differs depending on whether we are working with a continuous or a categorical variable. Nevertheless, the first step of the output transformer is the same in both cases: the goal is to apply some semblance of convolution to the LSTM output $\vec{h}_t$ to improve the results. We, thus, transform this output through a hidden layer: \begin{equation}
\vec{h}'_t = \texttt{tanh}\left(\vec{h}_{t}, N_{\text{conv}} \right) \end{equation} The final transformation into the synthetic encoded variable depends on the variable type. For continuous variables, we thus pass the reduced output $\vec{h}'_t$ through two different fully connected layers to extract both the vector of probabilities $\vec{p}^{\bf synth}_t$ and the vector of corresponding values $\vec{w}^{\bf synth}_t$: \begin{align}
\vec{w}^{\bf synth}_t &= \texttt{tanh}(\vec{h}'_{t}, N_{m,t}) \\
\vec{p}^{\bf synth}_t &= \texttt{softmax}(\vec{h}'_{t}, N_{m,t}) \end{align}
For the categorical variables, we pass the reduced output $\vec{h}'_t$ through a single fully connected layer to extract the output probabilities $\vec{o}^{\bf synth}_t$ belonging to each class: \begin{equation}
\vec{o}^{\bf synth}_t = \texttt{softmax}(\vec{h}'_{t}, |D_t|) \end{equation}
Both matrices of discrete probabilities $\vec{p}^{\bf synth}_t$ and $\vec{o}^{\bf synth}_t$ use a softmax activation function to ensure that each row sums to one, so that the rows of these matrices correspond to discrete probability vectors. The matrix $\vec{w}^{\bf synth}_t$ uses a $\tanh$ activation function since we allow this matrix to take values between -1 and 1.
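The output transformer can be sketched in a few lines; the fully connected layers here use random, untrained weights purely to illustrate the shapes and activation functions involved (the sizes $N_{\text{conv}}$, $N_{m,t}$, and $|D_t|$ are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    # Numerically stable row-wise softmax.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def fc(x, n_out, rng):
    # Stand-in for a fully connected layer with random, untrained weights.
    W = rng.normal(scale=0.1, size=(x.shape[-1], n_out))
    return x @ W

h_t = rng.normal(size=(4, 16))       # batch of 4 LSTM outputs
h_prime = np.tanh(fc(h_t, 8, rng))   # hidden "convolution-like" layer

# Continuous variable with N_m = 3 components:
w_synth = np.tanh(fc(h_prime, 3, rng))
p_synth = softmax(fc(h_prime, 3, rng))
# Categorical variable with |D_t| = 5 classes:
o_synth = softmax(fc(h_prime, 5, rng))
```

The softmax rows sum to one (discrete probability vectors), while the $\tanh$ output stays within $(-1, 1)$ to match the encoded value range.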
Since these encoded synthetic variables do not have homogeneous sizes, we cannot use them directly as the input of the next LSTM cell. This explains why we pass the encoded synthetic values $\widehat{\vec{v}}_t^{\bf synth}$ through an input transformer, whose goal is to transform $\widehat{\vec{v}}_t^{\bf synth}$ back into a tensor of the same size for all the different variables $V_t$. Therefore, we have to distinguish between continuous and categorical variables again. For the continuous variables, we concatenate the transformed synthetic variables together and pass them through a fully connected layer to obtain $\vec{f}_t$: \begin{equation}
\vec{f}_t = \texttt{FC}(\vec{w}^{\bf synth}_t\oplus\vec{p}^{\bf synth}_t, N_{h}) \end{equation} For the categorical variables, we just pass $\vec{o}^{\bf synth}_t$ through a fully connected layer: \begin{equation}
\vec{f}_t = \texttt{FC}(\vec{o}^{\bf synth}_t, N_{h}) \end{equation}
Finally, the tensors $\vec{w}^{\bf synth}_t$, $\vec{p}^{\bf synth}_t$, and $\vec{o}^{\bf synth}_t$ are combined to form the encoded synthetic table ${\bf \widehat{T}_{synth}} = \left\{\vec{w}^{\bf synth}_{1:N_C}, \vec{p}^{\bf synth}_{1:N_C}, \vec{o}^{\bf synth}_{1:N_D}\right\}$. This synthetic table is passed to the discriminator as an input for the optimization process. We thus directly compare it to the encoded table ${\bf \widehat{T}}$ defined in Section~\ref{sec:encoding}.
\subsubsection{Discriminator input} \label{sec:discr_input}
As explained in Section~\ref{sec:discriminator}, the input tensor $\vec{l}_0$ corresponds to the concatenation of all the variables in ${\bf \widehat{T}}$ for the original data or $\bf \widehat{T}_{synth}$ for the synthetic data.
For categorical variables, where the original data are one-hot encoded, it would be trivial for a discriminator to differentiate between the original and synthetic values (as the generator produces probabilities over each class, which will not be $\{0,1\}$ vectors). In addition, as explained by \cite{goodfellow_nips_2017}, deep networks tend to produce overconfident results when adversarially constructed. The author thus suggests using one-sided label smoothing for the standard loss function, as defined in Section~\ref{sec:loss_function}. It means that we perturb the $\{0,1\}$ vectors with additive uniform noise and rescale them to produce $[0,1]$ bound vectors. Formally, label smoothing is defined as follows: \begin{align}
\widetilde{o}^{(k)}_{t, j} &= o^{(k)}_{t, j} + \mathcal{U}_{[0,\gamma]}, \quad k=1,\ldots, |D_t| \nonumber \\
\widetilde{\vec{o}}_{t} &= \widetilde{\vec{o}}_{t}/||\widetilde{\vec{o}}_{t}|| \end{align} where $\gamma$ is a parameter defined by the modeler. $\widetilde{\vec{o}}_{t}$ now corresponds to a noisy version of the original one-hot encoded vector $\vec{o}_{t}$.
An issue with applying label smoothing is that the generator output tries to match the distorted representation of the data, and so the generator probability outputs will be biased towards low probability values. To address this, we propose to apply an equivalent smoothing to the generator output before passing it to the discriminator: \begin{align}
\widetilde{o}^{{\bf synth},(k)}_{t, j} &= o^{{\bf synth},(k)}_{t, j} + \mathcal{U}_{[0,\gamma]}, \quad k=1,\ldots, |D_t| \nonumber \\
\widetilde{\vec{o}}^{\bf synth}_{t} &= \widetilde{\vec{o}}^{\bf synth}_{t}/||\widetilde{\vec{o}}^{\bf synth}_{t}|| \end{align} where $\gamma$ should match the parameter used for the input smoothing. This removes the bias in the generator output: the generator is then effectively trying to learn the original one-hot representations and therefore produces unbiased probabilities. We refer to this as two-sided label smoothing.
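Label smoothing can be sketched as follows; we read the normalization $||\cdot||$ as dividing by the sum of the entries, so that the smoothed vectors remain probability vectors, and the value of $\gamma$ is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
gamma = 0.1  # smoothing parameter (illustrative value)

def smooth(o, gamma, rng):
    """Add U[0, gamma] noise to a one-hot (or probability) vector and
    renormalize so that its entries still sum to one."""
    noisy = o + rng.uniform(0.0, gamma, size=o.shape)
    return noisy / noisy.sum()

# One-sided: smooth the original one-hot vector only.
o_orig = np.array([0.0, 1.0, 0.0])
o_tilde = smooth(o_orig, gamma, rng)

# Two-sided: also smooth the generator's probability output.
o_synth = np.array([0.1, 0.8, 0.1])
o_synth_tilde = smooth(o_synth, gamma, rng)
```

After smoothing, the original vector is no longer exactly $\{0,1\}$, so the discriminator can no longer separate original from synthetic rows on that trivial basis alone.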
To investigate the benefits of label smoothing, we systematically investigate three possible strategies for categorical variables:
\begin{align}
\label{eq:NO}
\text{no label smoothing:}&& \vec{l}_0^{\texttt{NO}} &= \begin{cases}
\vec{w}_{1:N_C}\oplus\vec{p}_{1:N_C}\oplus\vec{o}_{1:N_D} & \text{for original data} \\
\vec{w}^{\bf synth}_{1:N_C}\oplus\vec{p}^{\bf synth}_{1:N_C}\oplus\vec{o}^{\bf synth}_{1:N_D} & \text{for synthetic data}
\end{cases} \\
\label{eq:OS}
\text{one-sided label smoothing:}&& \vec{l}_0^{\texttt{OS}} &= \begin{cases}
\vec{w}_{1:N_C}\oplus\vec{p}_{1:N_C}\oplus\widetilde{\vec{o}}_{1:N_D} & \text{for original data} \\
\vec{w}^{\bf synth}_{1:N_C}\oplus\vec{p}^{\bf synth}_{1:N_C}\oplus\vec{o}^{\bf synth}_{1:N_D} & \text{for synthetic data}
\end{cases} \\
\label{eq:TS}
\text{two-sided label smoothing:}&& \vec{l}_0^{\texttt{TS}} &= \begin{cases}
\vec{w}_{1:N_C}\oplus\vec{p}_{1:N_C}\oplus\widetilde{\vec{o}}_{1:N_D} & \text{for original data} \\
\vec{w}^{\bf synth}_{1:N_C}\oplus\vec{p}^{\bf synth}_{1:N_C}\oplus\widetilde{\vec{o}}^{\bf synth}_{1:N_D} & \text{for synthetic data}
\end{cases} \end{align}
\subsubsection{Sampling} \label{sec:sampling}
Once the generator has been trained against the discriminator, we need to be able to generate the final synthetic dataset $\bf T_{synth}$. However, the dataset created by the generator $\bf \widehat{T}_{synth}$ corresponds to the encoded dataset $\bf \widehat{T}$. Therefore, we need to decode $\bf \widehat{T}_{synth}$ to obtain the final synthetic dataset.
In previous works \citep{xu_synthesizing_2018,xu_modeling_2019}, the synthetic value is sampled from the probability distribution by simply assigning the value to the highest probability class (\emph{i.e.} $\argmax$ assignment). However, this approach does not result in representative mode shares. Instead, as is typical in choice modeling scenarios~\citep{ben-akiva_discrete_1985}, we propose to sample the synthetic value through simulation, \emph{i.e.} drawing according to the output probability values (without any label smoothing applied). Thus, for categorical variables, we have two different ways to obtain the final value: \begin{align}
d^{{\bf synth}, \argmax}_{t,j} &= \argmax \vec{o}^{\bf synth}_{t,j} \\
d^{{\bf synth}, \texttt{simul}}_{t,j} &= \texttt{simulation}\left[\vec{o}^{\bf synth}_{t,j}\right] \end{align} Similarly, by inverting Equation~\ref{eq:normalize}, we have for continuous variables: \begin{align}
c^{{\bf synth}, \argmax}_{t, j} &= \delta w_{t,j}^{{\bf synth}, (k)}\sigma_t^{(k)} + \eta_t^{(k)} && \text{where} \ k = \argmax \vec{p}^{\bf synth}_{t,j} \\
c^{{\bf synth}, \texttt{simul}}_{t, j} &= \delta w_{t,j}^{{\bf synth}, (k)}\sigma_t^{(k)} + \eta_t^{(k)} && \text{where} \ k = \texttt{simulation}\left[\vec{p}^{\bf synth}_{t,j}\right] \end{align} where $\delta$ corresponds to the same value used to encode the continuous variables $\vec{c}_t$ in Section~\ref{sec:encoding}.
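The two decoding routes can be sketched as follows; the generator outputs and VGM parameters below are illustrative placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)

# Generator outputs for one categorical variable (3 classes) and one
# continuous variable (2 mixture components); values are illustrative.
o_synth = np.array([0.1, 0.7, 0.2])
p_synth = np.array([0.3, 0.7])
w_synth = np.array([0.5, -0.2])
eta, sigma, delta = np.array([0.0, 10.0]), np.array([1.0, 2.0]), 2.0

# Categorical: argmax vs. simulation (draw according to probabilities).
d_argmax = int(np.argmax(o_synth))
d_simul = int(rng.choice(len(o_synth), p=o_synth))

# Continuous: pick component k, then invert Equation (eq:normalize).
k_argmax = int(np.argmax(p_synth))
c_argmax = delta * w_synth[k_argmax] * sigma[k_argmax] + eta[k_argmax]
k_simul = int(rng.choice(len(p_synth), p=p_synth))
c_simul = delta * w_synth[k_simul] * sigma[k_simul] + eta[k_simul]
```

Unlike $\argmax$, the simulated draw reproduces the full output distribution over repeated samples, which is why it yields more representative category shares.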
In order to test the impacts of simulation versus maximum probability assignment for the categorical and continuous variables, we systematically test four different sampling strategies:
\begin{align}
\label{eq:AA}
\text{Only $\argmax$:}&& {\bf T}_{\bf synth}^{\texttt{AA}} &= \left\{\vec{c}^{{\bf synth}, \argmax}_{1:N_C}, \vec{d}^{{\bf synth}, \argmax}_{1:N_D}\right\} \\
\label{eq:SA}
\text{\texttt{simulation} for continuous and $\argmax$ for categorical:}&& {\bf T}_{\bf synth}^{\texttt{SA}} &= \left\{\vec{c}^{{\bf synth}, \texttt{simul}}_{1:N_C}, \vec{d}^{{\bf synth}, \argmax}_{1:N_D}\right\}\\
\label{eq:AS}
\text{$\argmax$ for continuous and \texttt{simulation} for categorical:}&& {\bf T}_{\bf synth}^{\texttt{AS}} &= \left\{\vec{c}^{{\bf synth}, \argmax}_{1:N_C}, \vec{d}^{{\bf synth}, \texttt{simul}}_{1:N_D}\right\}\\
\label{eq:SS}
\text{Only \texttt{simulation}:}&& {\bf T}_{\bf synth}^{\texttt{SS}} &= \left\{\vec{c}^{{\bf synth}, \texttt{simul}}_{1:N_C}, \vec{d}^{{\bf synth}, \texttt{simul}}_{1:N_D}\right\} \end{align}
Note that the sampling process is entirely independent of the optimization of the generator and the discriminator. It is, therefore, possible to train a single model and test the different sampling methods afterward.
The supplementary materials summarize the possible DATGAN versions using the proposed loss functions, label smoothing strategies, and sampling strategies. In addition, we compare these versions against each other in order to select the best-performing one.
\subsection{Result assessments} \label{sec:result_assessments}
To assess the quality of the synthetic datasets relative to the original datasets, we use two main methods: \begin{inlinelist} \item a statistical method; \item a machine learning-based method. \end{inlinelist} The goal of the first method (see Section~\ref{sec:res_stats}) is to verify that the synthetic datasets display the same statistical properties as the original dataset. To do this, we compare the distributions of each column individually between the synthetic and original datasets. We then test combinations of multiple columns to study whether the models can grasp more complex correlations between the variables. The second method (see Section~\ref{sec:res_ml}) is closer to a real-world use case: the goal is to study whether the synthetic datasets can be used in a classification/regression context in Machine Learning.
\subsubsection{Statistical tests} \label{sec:res_stats}
For the statistical tests, we build on existing approaches in the transportation literature~\citep{garrido_prediction_2019, borysov_how_2019, badu-marfo_composite_2020}. The idea is to compute frequency lists (i.e. the frequency count of each unique value) for each column of both the original dataset (giving $\vec{\pi}$) and the synthetic dataset (giving $\vec{\pi}^{\bf synth}$). In the literature, authors typically only calculate the frequency lists for single columns (i.e. marginal distributions) for a few relevant variables and test them against each other. In this paper, we build on this in two ways: (i)~calculating joint frequency lists for $n$~columns simultaneously (therefore assessing joint distributions of order~$n$) and (ii)~systematically testing all possible combinations of columns at each aggregation level.
If we compute only the frequency lists for single variables, we only assess the marginal distribution of each variable independently. Whilst this verifies whether each column of the synthetic data matches the distribution of the corresponding column in the original data, it does not provide any information about the correlations between the columns in either the synthetic or original data (i.e. it assesses each column independently of all other columns). To assess whether the relationships between variables in the synthetic data match those in the original data, it is necessary to investigate the joint distributions of multiple columns simultaneously. We therefore calculate the joint frequency lists for multiple columns simultaneously and, at each aggregation level (i.e. number of columns), we calculate the frequency lists for all possible combinations of columns.
Since the number of possible combinations grows rapidly with the aggregation level, we limit the level of aggregation to one, two, or three columns as follows: \begin{description}
\item[First order:] Each column is considered separately, giving $N_{V}$ different frequency lists.
\item[Second order:] Columns are aggregated two-by-two, giving $\binom{N_{V}}{2}$ different frequency lists.
\item[Third order:] Columns are aggregated three-by-three, giving $\binom{N_{V}}{3}$ different frequency lists. \end{description} Note that continuous variables must be binned to calculate frequency lists. We arbitrarily set the number of bins for each continuous column to 10, such that the second order and third order frequency lists of continuous columns will have up to 100 and 1000 unique values, respectively.
Once these frequency lists have been computed, we can compare them using standard statistical metrics from the literature. We select five different metrics: \begin{itemize}
\item Mean Absolute Error:
\begin{equation}
\text{MAE}\left(\vec{\pi}^{\bf synth}, \vec{\pi}\right) = \dfrac{\sum_{i=1}^{N_{\vec{\pi}}} |\pi^{\bf synth}_i - \pi_i|}{N_{\vec{\pi}}}
\end{equation}
where $N_{\vec{\pi}}$ corresponds to the size of the frequency list $\vec{\pi}$.
\item Root Mean Square Error:
\begin{equation}
\text{RMSE}\left(\vec{\pi}^{\bf synth}, \vec{\pi}\right) = \left(\dfrac{\sum_{i=1}^{N_{\vec{\pi}}} \left(\pi^{\bf synth}_i - \pi_i\right)^2}{N_{\vec{\pi}}}\right)^{1/2}
\end{equation}
\item Standardized Root Mean Square Error~\citep{muller_hierarchical_2011}:
\begin{equation}
\label{eq:SRMSE}
\text{SRMSE}\left(\vec{\pi}^{\bf synth}, \vec{\pi}\right) = \dfrac{\text{RMSE}\left(\vec{\pi}^{\bf synth}, \vec{\pi}\right)}{\overline{\pi}}
\end{equation}
where $\overline{\pi}$ corresponds to the average value of $\vec{\pi}$.
\item Coefficient of determination:
\begin{equation}
R^2\left(\vec{\pi}^{\bf synth}, \vec{\pi}\right) = 1 - \dfrac{\sum_{i=1}^{N_{\vec{\pi}}} \left(\pi^{\bf synth}_i - \pi_i\right)^2}{\sum_{i=1}^{N_{\vec{\pi}}} \left(\overline{\pi} - \pi_i\right)^2}
\end{equation}
\item Pearson's correlation:
\begin{equation}
\rho_{\text{Pearson}}\left(\vec{\pi}^{\bf synth}, \vec{\pi}\right) = \dfrac{\text{cov}\left(\vec{\pi}^{\bf synth}, \vec{\pi}\right)}{\sigma_{\vec{\pi}}\sigma_{\vec{\pi}^{\bf synth}}}
\end{equation}
where $\text{cov}$ corresponds to the covariance and $\sigma_{\vec{X}}$ to the standard deviation of a given vector $\vec{X}$. \end{itemize}
Finally, the results can be averaged over all combinations to obtain a single number per synthetic dataset, statistic, and aggregation level.
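The five metrics can be sketched for a pair of aligned frequency lists as follows (the counts are illustrative; both lists must cover the same unique values, in the same order):

```python
import numpy as np

def compare_frequency_lists(pi_synth, pi):
    """The five statistics above for two aligned frequency lists."""
    pi_synth = np.asarray(pi_synth, dtype=float)
    pi = np.asarray(pi, dtype=float)
    n, pi_bar = len(pi), pi.mean()
    mae = np.abs(pi_synth - pi).sum() / n
    rmse = np.sqrt(((pi_synth - pi) ** 2).sum() / n)
    srmse = rmse / pi_bar
    r2 = 1.0 - ((pi_synth - pi) ** 2).sum() / ((pi_bar - pi) ** 2).sum()
    pearson = np.corrcoef(pi_synth, pi)[0, 1]
    return {"MAE": mae, "RMSE": rmse, "SRMSE": srmse,
            "R2": r2, "Pearson": pearson}

stats = compare_frequency_lists([10, 20, 30, 40], [12, 18, 33, 37])
```

A perfect synthetic dataset gives MAE, RMSE, and SRMSE of 0, and $R^2$ and Pearson's correlation of 1.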
\subsubsection{Supervised learning-based validation} \label{sec:res_ml}
We propose in this paper a new supervised learning-based validation method for synthetic data which makes use of supervised classification and regression models to approximate the full conditional distributions of each variable, given all other variables in the dataset. The general approach is to estimate two regression or classification models for each variable $\vec{v}_t$. The first model ($m_{t}$) is estimated on a training portion of the original data, and the second ($m^{\bf synth}_t$) is estimated on a corresponding training portion of the synthetic data. In each case, the model tries to predict the values in the corresponding column conditional on all other columns in the dataset.
Each model is then validated on the same test portion of the original data, providing two loss scores. The expected value of the loss of the model estimated on the synthetic data (which approximates the conditionals in the synthetic dataset) should be greater than or equal to the expected loss of the model estimated on the original data (which approximates the true conditionals in the original data). The closer the loss scores of the models estimated on the synthetic and original data, the more closely the synthetic data has captured the conditional distributions of the original data. The approach is detailed in Algorithm~\ref{algo:ml_efficiency}.
\begin{algorithm*}[h!] \caption{Supervised learning-based validation}\label{algo:ml_efficiency} \begin{algorithmic}[1] \Require{Original data $\bf T$, synthetic data~$\bf T_{synth}$} \Ensure{Similarity score for each variable $\vec{v}_t \in {\bf T}$} \ForAll {$\vec{v}_t \in \bf{T}$}
\State $\vec{y}_t = \vec{v}_t$
\State $\vec{X}_t = {\bf T} \setminus \vec{v}_t $
\State Divide $\vec{y}_t$ and $\vec{X}_t$ into training set $(\vec{y}_{t,\text{train}}, \vec{X}_{t,\text{train}})$ and test set $(\vec{y}_{t,\text{test}}, \vec{X}_{t,\text{test}})$
\State $\vec{y}^{\bf synth}_t = \vec{v}^{\bf synth}_t$
\State $\vec{X}^{\bf synth}_t = {\bf T_{synth}} \setminus \vec{v}^{\bf synth}_t $
\State Sample training set $(\vec{y}^{\bf synth}_{t,\text{train}}$, $\vec{X}^{\bf synth}_{t,\text{train}})$ from $\vec{y}^{\bf synth}_t$ and $\vec{X}^{\bf synth}_t$, with the same dimensions as $(\vec{y}_{t,\text{train}}, \vec{X}_{t,\text{train}})$.
\If {$\vec{v}_t \in \vec{c}_{1:N_C}$}
\State Estimate regression model $m_{t,\text{reg}}$ on $(\vec{y}_{t,\text{train}},\vec{X}_{t,\text{train}})$
\State Estimate regression model $m^{\bf synth}_{t,\text{reg}}$ on $(\vec{y}^{\bf synth}_{t,\text{train}},\vec{X}^{\bf synth}_{t,\text{train}})$
\State $g^\text{reg}_t = \mathcal{L}_\text{MSE}(\vec{y}_{t,\text{test}}, m^{\bf synth}_{t,\text{reg}}(\vec{X}_{t,\text{test}}))/\mathcal{L}_\text{MSE}(\vec{y}_{t,\text{test}}, m_{t,\text{reg}}(\vec{X}_{t,\text{test}}))$
\State Record similarity score $g^\text{reg}_t$ for $\vec{v}_t$
\Else
\State Estimate probabilistic classification model $m_{t,\text{class}}$ on $(\vec{y}_{t,\text{train}},\vec{X}_{t,\text{train}})$
\State Estimate probabilistic classification model $m^{\bf synth}_{t,\text{class}}$ on $(\vec{y}^{\bf synth}_{t,\text{train}},\vec{X}^{\bf synth}_{t,\text{train}})$
\State $g^\text{class}_t = \left|\mathcal{L}_\text{log-loss}(\vec{y}_{t,\text{test}}, m^{\bf synth}_{t,\text{class}}(\vec{X}_{t,\text{test}}))-\mathcal{L}_\text{log-loss}(\vec{y}_{t,\text{test}}, m_{t,\text{class}}(\vec{X}_{t,\text{test}}))\right|$
\State Record similarity score $g^\text{class}_t$ for $\vec{v}_t$
\EndIf \EndFor \end{algorithmic} \end{algorithm*}
We make use of gradient boosting ensembles of decision trees for both $m_{t,\text{reg}}$ and $m_{t,\text{class}}$ as \begin{inlinelist}
\item they can be easily applied to both regression and probabilistic classification problems;
\item they are computationally efficient to estimate;
\item they have been shown to have high predictive performance on a wide variety of supervised learning tasks;
\item they can determine appropriate regularisation automatically using early stopping\end{inlinelist}. We specifically make use of the LightGBM library~\citep{ke_lightgbm_2017}, which natively handles categorical input features, thus avoiding the need for one-hot encoding of categorical variables.
For continuous variables, the score $g^\text{reg}_t$ is the ratio of the mean-squared error of the model estimated on the synthetic dataset to the mean-squared error of the model estimated on the original dataset, with a score of 1 indicating a perfect match, and a higher score representing a worse fit. For categorical variables, the score $g^\text{class}_t$ is the absolute difference between the normalized log-loss of the model estimated on the synthetic dataset and the normalized log-loss of the model estimated on the original dataset, with a score of 0 indicating a perfect match, and a higher score representing a worse fit. The scores can be summed over all columns to give aggregate scores for all continuous and categorical variables, respectively.
While Algorithm~\ref{algo:ml_efficiency} describes a single train-test split, the same algorithm can be used with $k$-fold cross-validation to obtain more accurate estimates of the model losses. We use 5-fold cross-validation and stratified sampling to select training folds for the categorical data.
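As an illustration, the per-variable score of Algorithm~\ref{algo:ml_efficiency} for a single continuous column can be sketched as follows. The sketch uses toy data and scikit-learn's gradient boosting as a self-contained stand-in for LightGBM; the data, model settings, and noise levels are illustrative assumptions, not taken from our experiments:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy "original" table: the target column depends on the other columns.
X = rng.normal(size=(1000, 3))
y = X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=1000)

# Toy "synthetic" table: same structure but with noisier conditionals,
# mimicking a generator that only partially captured the dependency.
X_synth = rng.normal(size=(1000, 3))
y_synth = X_synth[:, 0] + 0.5 * X_synth[:, 1] + rng.normal(scale=0.6, size=1000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

m_orig = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
# Training set sampled from the synthetic data with the same dimensions.
m_synth = GradientBoostingRegressor(random_state=0).fit(X_synth[:700], y_synth[:700])

# Both models are scored on the same held-out slice of the ORIGINAL data.
mse_synth = mean_squared_error(y_te, m_synth.predict(X_te))
mse_orig = mean_squared_error(y_te, m_orig.predict(X_te))
g_reg = mse_synth / mse_orig  # 1 = perfect match, larger = worse fit
```

Here the synthetic data has noisier conditionals than the original, so the score exceeds 1.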
\subsection{Implementation notes} \label{sec:implementation}
The code for the DATGAN has been implemented using Python 3.7.9. We use the libraries \texttt{tensorflow} (v1.15.5) \citep{abadi_tensorflow_2016} and \texttt{tensorpack} (v0.9.4) \citep{yuxin_tensorpack_2016} for the main components of the neural networks. In addition, we use the library \texttt{networkx} (v2.5) \citep{hagberg_exploring_2008} for specifying the DAG $\mathcal{G}$ discussed in Section~\ref{sec:DAG}. This library already has built-in functions to verify that a user-specified graph is a directed acyclic graph.
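For instance, the acyclicity check provided by \texttt{networkx} can be used as follows; the node names below are illustrative, not taken from our DAGs:

```python
import networkx as nx

# Hypothetical fragment of a tabular DAG: age influences license,
# which in turn influences the mode choice.
G = nx.DiGraph()
G.add_edges_from([("age", "license"), ("license", "choice")])
print(nx.is_directed_acyclic_graph(G))  # True

# Introducing a back edge creates a cycle, which the check rejects.
G.add_edge("choice", "age")
print(nx.is_directed_acyclic_graph(G))  # False
```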
For the optimization process using the different loss functions presented in Section~\ref{sec:loss_function}, we follow the hyperparameter settings recommended by the authors of the respective articles. During initial tests, we investigated different values of the learning rate and settled on the following values, which appeared to work best for each loss: \begin{description}
\item[Standard loss] learning rate of $1\cdot 10^{-3}$
\item[Wasserstein loss] learning rate of $2\cdot 10^{-4}$
\item[Wasserstein loss with gradient-penalty] learning rate of $1\cdot 10^{-4}$ \end{description} We do not provide any specific results for this hyperparameter since it is not the main focus of our article. Finally, the sizes of the different components presented in the methodology are directly given in the notation table in the supplementary materials.
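As a minimal sketch, the retained settings can be captured in a simple configuration mapping; the loss-function keys below are illustrative labels, not identifiers from our codebase:

```python
# Learning rates retained after initial tests, one per loss function.
# Key names are hypothetical labels for the three losses above.
LEARNING_RATES = {
    "standard": 1e-3,  # standard GAN loss
    "WGAN": 2e-4,      # Wasserstein loss
    "WGGP": 1e-4,      # Wasserstein loss with gradient penalty
}

def learning_rate(loss_name: str) -> float:
    """Return the learning rate associated with a given loss function."""
    return LEARNING_RATES[loss_name]
```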
The complete code for this project, including the different versions of the DATGAN, the case studies, and the results, can be found on Github: \href{https://github.com/glederrey/SynthPop}{https://github.com/glederrey/SynthPop}. A Python library has been created with the original DATGAN model as well as an updated version written with \texttt{tensorflow} (v2.8.0). The code can be found here: \href{https://github.com/glederrey/DATGAN}{https://github.com/glederrey/DATGAN}.
\section{Case studies} \label{sec:case_study}
In this section, we present the case studies for this article. First, we introduce the datasets in Section~\ref{sec:datasets}. For each dataset, we provide a short description, a summary of the variables, and the DAG used in the DATGAN. Then, in Section~\ref{sec:result_assessments}, we present the new methods used to assess the quality of the synthetic datasets generated by all the models presented in this article. Finally, Section~\ref{sec:training} gives a detailed overview of the training procedure used with all the models, in order to make the comparison between them as fair as possible.
\subsection{Datasets} \label{sec:datasets}
The first dataset, which we refer to as CMAP, is a household travel survey of the Chicago metropolitan area conducted by the Chicago Metropolitan Agency for Planning between January 2007 and February 2008. The trips are recorded as one- and two-day travel diaries provided by all members of the surveyed households; the data is therefore hierarchical. The dataset was first cleaned to remove incomplete entries. Then, to avoid data leakage, we kept a single trip per individual per household in the final dataset. It thus contains a total of 8'929 trips with 15 columns. A complete description of this dataset is given in Table~\ref{tab:data_Chicago} in the appendix. The DAG used for the DATGAN with this dataset can also be found in the appendix; see Figure~\ref{fig:DAG_Chicago}.
The second dataset is the London Passenger Mode Choice (LPMC) dataset~\citep{hillel_recreating_2018}. It combines the London Travel Demand Survey (LTDS) records with matched trip trajectories and corresponding mode alternatives. The LTDS was conducted between April 2012 and March 2015 and records trips made by individuals residing within Greater London. The trip trajectories were obtained from the Google Maps API. The final dataset was processed to avoid data leakage: similar to the CMAP dataset, we selected only one trip per household. The final dataset contains a total of 17'616 trips with 27 columns. A complete description of this dataset is given in Table~\ref{tab:data_LPMC} in the appendix. The DAG used for the DATGAN with this dataset can also be found in the appendix; see Figure~\ref{fig:DAG_LPMC}. In addition, we created a smaller version of the LPMC dataset, conveniently named LPMC\_half, by randomly selecting 50\% of the rows for testing the DATGAN versions. We mainly use it to understand the effect of the number of rows on the performance of the models.
The third and final dataset is the ADULT dataset~\citep{kohavi_scaling_1996}, also known as the Census-Income dataset. It contains socio-economic variables on individuals, used to predict whether their income is below or above \$50k/yr. From the original dataset, we removed all rows with unknown values. A complete description of this dataset is given in Table~\ref{tab:data_adult} in the appendix. The DAG used for the DATGAN with this dataset can also be found in the appendix; see Figure~\ref{fig:DAG_adult}. Due to its larger size compared to the other datasets, the ADULT dataset is only used when comparing the DATGAN with state-of-the-art models.
\begin{table}[H]
\centering
\caption{Summary of the datasets used in the case studies. Full description of the datasets can be found in the Appendix.}
\label{tab:summary_dataset}
\begin{tabularx}{0.7\textwidth}{l||C|C|C||C}
\textbf{Name} & \boldmath$\#$\textbf{columns} & \boldmath$\#$\textbf{continuous} & \boldmath$\#$\textbf{categorical} & \boldmath$\#$\textbf{rows} \\ \midrule[1.5pt]
CMAP & 15 & 3 & 12 & 8'929 \\
LPMC & 27 & 13 & 14 & 17'616 \\
LPMC\_half & 27 & 13 & 14 & 8'808 \\
ADULT & 14 & 4 & 10 & 45'222
\end{tabularx} \end{table}
\subsection{Training process} \label{sec:training}
This article aims to propose a new way to generate synthetic data. However, for fairness towards state-of-the-art methods in the literature, we need to test all the generative models on a level playing field. We thus decided to train every model for 1'000 epochs with a batch size $N_b$ of 500, even if the optimization process could have been stopped earlier. In addition, we kept the original hyperparameters provided in the articles: while optimizing these parameters would most likely lead to better results, most users would use the models as the authors provide them. Each model is trained five times on each dataset, and each of the five trained models generates five synthetic datasets with the same number of rows as the original dataset. Each test is therefore performed on a total of 25 synthetic datasets, and the results provided in Section~\ref{sec:results} correspond to the average value over these tests.
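The bookkeeping above amounts to the following trivial count (the numbers are those stated in the text):

```python
# Each benchmark model is trained five times, and each trained
# instance generates five synthetic datasets of original size.
trainings = 5
datasets_per_training = 5

per_model = trainings * datasets_per_training
print(per_model)  # 25 synthetic datasets evaluated per model
```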
The training process is slightly different for the DATGAN versions (summary given in the supplementary materials). Indeed, the sampling can be done independently after the training process. Therefore, we only have to train a total of nine different models (combinations of loss functions and label smoothing). Each of these models is trained five times. Then, for each of these 45 models, we use the four different sampling methods to produce five synthetic datasets for each method. Therefore, we get a total of 900 synthetic datasets to be compared, \emph{i.e.} 25 synthetic datasets for each model presented in the summary table in the supplementary materials for each case study presented in Section~\ref{sec:datasets}. \section{Results} \label{sec:results}
In this section, we present the results obtained using the assessment methods presented in Section~\ref{sec:result_assessments} on the different case studies presented in Section~\ref{sec:case_study}. The first section, see Section~\ref{sec:comp_lit}, compares the DATGAN against state-of-the-art models found in the literature. The second section, see Section~\ref{sec:analysis_DAG}, performs a sensitivity analysis on the DAG used for the DATGAN in order to understand its effect on the performance of the model.
In the supplementary materials, we analyze the results of the possible DATGAN versions; we summarize the conclusions of this analysis here and refer the reader to that document for more information. Across all case studies, the DATGAN performs better when using two-sided label smoothing and the simulation sampling strategy for both continuous and categorical variables. However, we found that the \texttt{WGAN} loss function leads to better results on datasets with fewer variables, such as the CMAP dataset, while the \texttt{WGGP} loss performs better on the LPMC dataset. Since the comparison of DATGAN versions was not performed on the ADULT dataset (due to its larger size compared to the other case studies), we test both the \texttt{WGAN} and the \texttt{WGGP} loss functions for this case study. Results on the other case studies indicate that the \texttt{WGAN} loss should outperform the \texttt{WGGP} loss.
\subsection{Comparison with state-of-the-art models} \label{sec:comp_lit}
At this point in the article, we have presented our new model for generating synthetic datasets and have selected the best version depending on the type of case study. In addition, we study the effect of the DAG on the generated synthetic data in Section~\ref{sec:analysis_DAG}. We now want to compare our model against state-of-the-art models presented in the literature. We thus test the DATGAN against the four models presented in the literature review (see Section~\ref{sec:sota_models}) on the four case studies, using the same assessment methods as previously. All the results are compiled in Table~\ref{tab:summary_all_models}, which shows the average rankings on both assessment methods for the four case studies. The DATGAN model outperforms all the other models in the first three case studies; for example, it is consistently the best model under the Machine Learning efficacy method. For the ADULT case study, we tested the DATGAN with both the \texttt{WGAN} and the \texttt{WGGP} loss functions. Since the ADULT dataset contains more categorical variables than continuous ones, we expect the \texttt{WGAN} loss to perform better, as shown in Table~\ref{tab:summary_all_models}. It performs best on the statistical assessments but seems to struggle with the Machine Learning efficacy method. While the results are quite close on the continuous variables, the two DATGAN models are the only models that do not fail the test on the categorical variables. Indeed, the other models tend not to produce enough of the low-probability categories and thus oversimplify the generated synthetic data compared to the DATGAN models. One possible reason lies in the sampling process: using simulation to draw the final categories gives more representation to low-probability values than using the maximum-probability estimator.
\begin{table}[h!]
\centering
\caption{Average rankings of the state-of-the-art models against the DATGAN on the four case studies.}
\label{tab:summary_all_models}
\input{tables/summary_all} \end{table}
Since we compared models across multiple articles, it is interesting to look at their conclusions. For example, \cite{xu_modeling_2019} show that their CTGAN model consistently outperforms the TVAE model. We observe the same ranking between the two models except for the ADULT case study, and can therefore draw the same conclusion as the authors. However, \cite{zhao_ctab-gan_2021} claim that the CTAB-GAN outperforms the CTGAN across all their assessments, whereas we find that the CTAB-GAN outperforms the CTGAN only when the case studies contain a small number of data points. While we used both models as intended by their authors, we only changed the final number of epochs: in their article, \cite{zhao_ctab-gan_2021} trained both models for 150 epochs. The CTAB-GAN may therefore optimize faster than the CTGAN in the early epochs. However, further work would be required to analyze this result, and it is outside the scope of this article.
In addition, we tested both models presented by \cite{garrido_prediction_2019}: a WGAN and a VAE. Their results are not shown in this article because both models failed to produce adequate continuous variables. Indeed, their encoding bins continuous variables based on their original distributions, thus treating them as categorical variables, which led to especially poor results when comparing continuous distributions.
\subsection{Sensitivity analysis of the DAG} \label{sec:analysis_DAG}
In this section, we analyze how the DAG can affect the performance of the DATGAN and how it can be used to modify the generation of synthetic datasets. Section~\ref{sec:structure_DAG} analyzes different versions of the DAG, and Section~\ref{sec:effect_DAG} explains how we can alter the DAG to generate hypothetical synthetic datasets.
\subsubsection{Structure of the DAG} \label{sec:structure_DAG}
In the previous section, we have analyzed all the different versions of the DATGAN to select the best one. However, in this section, we want to investigate how the DAG will influence the results. Thus, we only use the best possible model for each case study. The idea is to start with the DAG presented in the appendix for each case study and make variations of it to study the generated datasets. Therefore, we created five different DAGs for each case study: \begin{itemize}
\item \texttt{full}: Complete DAG presented in the appendix for each case study.
\item \texttt{trans. red.}: Transitive reduction of the \texttt{full} DAG. The transitive reduction removes as many edges as possible while preserving reachability, \emph{i.e.}, it is the subgraph with the fewest edges whose transitive closure equals that of the \texttt{full} DAG.
\item \texttt{linear}: This DAG consists in taking the variables in the order provided by the dataset and linking them to each other linearly. Thus, there are no multi-inputs within the DAG. This is similar to the technique used in the TGAN~\citep{xu_synthesizing_2018}.
\item \texttt{prediction}: This DAG consists of only one sink node. All the other nodes are considered source nodes linked to the sink node. The source nodes are not linked to each other. The sink nodes are the \texttt{choice} for the CMAP case study and the \texttt{travel\_mode} for the LPMC case studies.
\item \texttt{no links}: This DAG consists of only nodes without any edges, thus cutting all the links between the variables. \end{itemize} At first glance, we can already predict that the last two DAGs should perform badly since the connections between the variables are either poorly specified or absent. We are therefore mostly interested in the first three DAGs: if the DAG helps the model generate more representative synthetic data, the \texttt{full} DAG or the \texttt{trans. red.} DAG should outperform the \texttt{linear} DAG. As in the previous section, the details of the results are given in the supplementary materials. Table~\ref{tab:summary_Chicago_DAG} shows the rankings of the DAGs on the CMAP case study. As expected, the best two DAGs are the two complete ones. It is interesting to note that the \texttt{full} DAG provides better results on the Machine Learning efficacy assessment, while the \texttt{trans. red.} DAG provides better results on the statistical assessments. Since the model with the \texttt{full} DAG contains more edges than the others, it is larger and more complex to train, which can hurt the performance of the LSTM cells when creating the synthetic variables. Nevertheless, the \texttt{full} DAG captures the correlations between the variables best, since it consistently performs best on that assessment compared to the other DAGs.
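The transitive reduction used for the \texttt{trans. red.} DAG can be computed directly with \texttt{networkx}; the following minimal sketch uses a hypothetical three-node DAG:

```python
import networkx as nx

# Hypothetical DAG with one redundant edge: a -> c is already implied
# by the path a -> b -> c, so the transitive reduction drops it.
G = nx.DiGraph([("a", "b"), ("b", "c"), ("a", "c")])
TR = nx.transitive_reduction(G)

print(sorted(TR.edges()))  # [('a', 'b'), ('b', 'c')]
# Reachability is preserved: every pair connected in G stays connected.
assert nx.has_path(TR, "a", "c")
```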
\begin{table}[H]
\centering
\caption{Average rankings of the different DAGs on the CMAP dataset}
\label{tab:summary_Chicago_DAG}
\input{tables/summary_Chicago_DAG} \end{table}
The CMAP dataset is relatively small, so learning the correlations between the variables can be quite difficult. The LPMC dataset, on the other hand, is twice as big. It is thus interesting to see whether the DAG can still provide the same kind of help on a larger dataset. Table~\ref{tab:summary_LPMC_DAG} shows the rankings of the DAGs on the LPMC case study. This time, the \texttt{linear} DAG is the best one, closely followed by the two complete DAGs. While the \texttt{full} DAG has some issues with the statistical assessments, it remains the best on the Machine Learning efficacy method. It therefore seems that a simpler DAG leads to better results if one wants to generate a synthetic dataset with column distributions as close as possible to the original dataset, whereas one should opt for a complete DAG to preserve as much correlation as possible.
\begin{table}[H]
\centering
\caption{Average rankings of the different DAGs on the LPMC dataset}
\label{tab:summary_LPMC_DAG}
\input{tables/summary_LPMC_DAG} \end{table}
Finally, we want to confirm our hypothesis that the completeness of the DAG is less important with more data points. We thus run the same tests on the smaller LPMC case study. Table~\ref{tab:summary_LPMC_half_DAG} shows the rankings of the DAGs on the LPMC\_half case study. Results show that the \texttt{linear} DAG performs better than the \texttt{full} DAG on both metrics. At first glance, this result can be quite surprising. However, when we compare the DAGs for the CMAP and LPMC case studies, we see that the LPMC DAG is much more complex: the CMAP DAG contains a total of 25 edges for 15 nodes, an average of $1.\bar{6}$ edges per node, while the LPMC DAG contains a total of 63 edges for 27 nodes, an average of $2.\bar{3}$ edges per node. It is thus possible that the model struggles to train correctly with a more complex DAG. However, the \texttt{trans. red.} version of the LPMC DAG leads to even worse results than the \texttt{full} DAG, so it is also possible that this DAG is simply not well constructed for this particular case study. Further investigation is thus required to fully understand how the DAG affects the results of this case study.
\begin{table}[H]
\centering
\caption{Average rankings of the different DAGs on the LPMC\_half dataset}
\label{tab:summary_LPMC_half_DAG}
\input{tables/summary_LPMC_half_DAG} \end{table}
\subsubsection{Effect of the DAG on the synthetic dataset} \label{sec:effect_DAG}
\begin{figure}
\caption{Age distributions for the original CMAP dataset, for a synthetic CMAP dataset with a complete DAG, and for a synthetic CMAP dataset with an altered DAG. }
\label{fig:age_distributions}
\end{figure}
The final section of the results shows what can be achieved with the DAG depending on the needs of the modeler. We have shown that using a complete DAG allows us to generate the best possible synthetic datasets compared to state-of-the-art models. However, the DAG can also be used to create hypothetical synthetic datasets. Since the DAG controls the causal links between the variables, removing the relationships between two or more variables is simple. For example, in the CMAP case study, we could imagine a hypothetical population with no minimum age requirement to get a driving license. To achieve this, we simply remove the link between the variables \texttt{age} and \texttt{license} in the DAG presented in Figure~\ref{fig:DAG_Chicago}. If the modeler wants to ensure that two variables do not interact, these variables should be defined as source nodes in the DAG. Figure~\ref{fig:age_distributions} shows the age distributions for the original CMAP dataset, for a synthetic dataset generated with the complete DAG, and for a synthetic dataset generated with the altered DAG. The top panel shows the distribution of the variable \texttt{age} in all three datasets; the synthetic distributions are quite similar to the original one. However, looking at the age distribution of individuals owning a driving license (middle panel), we see that both synthetic datasets still produce individuals younger than 18 who own a driving license, although the complete DAG produces marginally fewer such minors than the altered DAG. The effect of the altered DAG is even clearer in the age distribution of individuals not owning a driving license (bottom panel).
Indeed, both the original dataset and the synthetic dataset generated with the complete DAG show that individuals without a driving license are mostly young. The altered DAG, in contrast, produces essentially the same age distribution for this group as for the whole population, which shows that the altered DAG has eliminated the correlation between age and owning a driving license.
Since the DAG can remove correlations between variables altogether, the modeler must ensure that the source nodes used in the DAG are not correlated in the original dataset. This can be verified by analyzing the data before creating the DAG, a small price to pay for having more control over the dependencies between the variables.
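Altering the DAG as described above amounts to deleting an edge before training. A minimal sketch with \texttt{networkx}, using a hypothetical fragment of the CMAP DAG (the edges shown are assumptions for illustration):

```python
import networkx as nx

# Illustrative fragment of the CMAP DAG; the exact edges are assumed.
G = nx.DiGraph([("age", "license"),
                ("age", "work_status"),
                ("license", "choice")])

# Sever the age -> license dependency for the hypothetical population
# with no minimum driving age.
G.remove_edge("age", "license")

print(nx.has_path(G, "age", "license"))  # False: no directed path remains
print(nx.is_directed_acyclic_graph(G))   # True: still a valid DAG
```

With no directed path between \texttt{age} and \texttt{license}, the generator treats the two variables independently.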
\section{Conclusion} \label{sec:conclusion}
This article presents a novel GAN architecture, the DATGAN, that integrates expert knowledge to control the causal links between variables. In the methodology, we show how a Directed Acyclic Graph (DAG) can model the generator's structure and how synthetic variables are generated using Long Short-Term Memory (LSTM) cells. In addition, we provide an efficient way to encode categorical and continuous variables. While the core mechanics of the DATGAN remain the same, we explore different loss functions for training, the use of label smoothing on categorical variables, and multiple sampling methods. To compare the results as fairly as possible, we provide two new systematic assessment methods for comparing synthetic datasets: a statistical method and a Machine Learning efficacy method. Using these two methods, we show that two-sided label smoothing and simulation-based sampling of the final synthetic variables lead to the best performance. The optimal loss function, on the other hand, depends on the ratio of continuous to categorical variables: if a dataset contains primarily categorical variables, we recommend the \texttt{WGAN} loss function; if it contains more continuous than categorical variables, we recommend the \texttt{WGGP} loss function. We then show that a complete DAG leads to better correlations in the final synthetic dataset than a stripped-down one. The optimal DATGAN models are then compared against state-of-the-art models, and we show that they outperform all the other models on all the case studies using both assessment methods. Finally, we show how the DATGAN can create hypothetical synthetic populations.
The DATGAN architecture has been developed to improve the representativeness of the synthetic data it generates. Such datasets can then be used in simulations, whose results may improve given that the DATGAN has shown better results than state-of-the-art models in the literature. However, it might be possible to improve the DATGAN even further by upgrading the encoding of the tabular data. In the DATGAN, we have only considered two types of variables: continuous and categorical, whereas \cite{zhao_ctab-gan_2021} consider four types: continuous, categorical, mixed, and integer. A straightforward improvement to the DATGAN would therefore be to support more data types in the encoding process; if the encoding of the data is improved, the generated synthetic data will most likely improve as well. In addition, the DATGAN, like every other GAN, achieves differential privacy~\citep{jordon_pate-gan_2018} since the generator never sees the original data; privacy preservation is therefore generally not a concern for GANs. For the Machine Learning efficacy research axis, the DATGAN is already showing improvements, as reflected in the Machine Learning evaluation metric results. However, the DATGAN does not account for bias in the original data and therefore also generates biased data; improving the bias correction of the DATGAN would thus also improve the Machine Learning efficacy. A simple fix that already exists in the literature is the use of conditionality on GANs: we could update the DATGAN such that it conditionally generates synthetic data to reduce the bias in the data. Furthermore, one of the difficult tasks with the DATGAN is to create a good DAG, as a poorly specified DAG can hinder the model's performance. While the relationships between some variables might be easy to find, they are more subtle for others.
Therefore, it would be interesting to add a feature to combine multiple variables in a cluster such that they all influence each other without having to decide in which specific order. However, such an improvement might not work with the current design of the DATGAN using LSTM cells since each cell is assigned to a single variable. Therefore, a complete redesign of the DATGAN might be needed to implement such an improvement. Finally, we have discussed four of the five research axes presented in the literature review. The final axis is transfer learning. Researchers have already been working on this topic with synthetic image generators. Therefore, one of the future steps for synthetic tabular data generators is to follow this trend and start implementing models that can transfer knowledge between multiple datasets.
\section{Appendix} \label{sec:app}
\subsection{Case studies - dataset description}
\begin{xltabular}{\textwidth}{l||c|X|X}
\caption{CMAP dataset}
\label{tab:data_Chicago} \\
\textbf{Variables} & \textbf{Type} & \textbf{Details} & \textbf{Description} \\ \midrule[1.5pt]
\endfirsthead
\multicolumn{4}{c}{\tablename\ \thetable{} -- continued from previous page} \\
\textbf{Variables} & \textbf{Type} & \textbf{Details} & \textbf{Description} \\ \midrule[1.5pt]
\endhead
\multicolumn{4}{r}{{Continues on next page...}}
\endfoot
\endlastfoot
\texttt{choice} & Categorical & 5 string categories & Chosen mode \\ \hline
\texttt{travel\_dow} & Categorical & 1-7 (Monday-Sunday) & Day of the week travel \\ \hline
\texttt{trip\_purpose} & Categorical & 7 string categories & Primary purpose for making trip \\ \hline
\texttt{distance} & Continuous & 8'743 unique values between 0 and 69.71 & Straight-line trip distance in miles \\ \hline
\texttt{hh\_vehicles} & Categorical & 0-8 & Number of vehicles in household \\ \hline
\texttt{hh\_size} & Categorical & 1-8 & Number of people in household \\ \hline
\texttt{hh\_bikes} & Categorical & 0-7 & Number of bikes in household \\ \hline
\texttt{hh\_descr} & Categorical & 3 string categories & Household type \\ \hline
\texttt{hh\_income} & Categorical & 7 categories & Household income level \\ \hline
\texttt{gender} & Categorical & 0 (female), 1 (male) & Gender of individual \\ \hline
\texttt{age} & Continuous & 97 unique integers between 0 and 98 & Age of individual \\ \hline
\texttt{license} & Categorical & 0 (none), 1 (has driving license) & Driving license ownership \\ \hline
\texttt{education\_level} & Categorical & 6 categories & Highest level of education achieved \\ \hline
\texttt{work\_status} & Categorical & 8 string categories & Working status \\ \hline
\texttt{departure\_time} & Continuous & 949 unique values between 0 and 23.86 & Departure time of trip (in decimal hours) \end{xltabular}
\begin{xltabular}{\textwidth}{l||c|X|X}
\caption{LPMC dataset}
\label{tab:data_LPMC} \\
\textbf{Variables} & \textbf{Type} & \textbf{Details} & \textbf{Description} \\ \midrule[1.5pt]
\endfirsthead
\multicolumn{4}{c}
{\tablename\ \thetable{} -- continued from previous page} \\
\textbf{Variables} & \textbf{Type} & \textbf{Details} & \textbf{Description} \\ \midrule[1.5pt]
\endhead
\multicolumn{4}{r}{{Continues on next page...}}
\endfoot
\endlastfoot
\texttt{travel\_mode} & Categorical & 4 string categories & Mode of travel chosen by LTDS trip \\ \hline
\texttt{purpose} & Categorical & 5 string categories & Journey purpose for trip \\ \hline
\texttt{fueltype} & Categorical & 6 string categories & Fuel type of passenger’s vehicle\\ \hline
\texttt{faretype} & Categorical & 5 string categories & Public transport fare type of passenger \\ \hline
\texttt{bus\_scale} & Categorical & 3 values (0, 0.5, 1) & Percentage of the full bus fare paid by the passenger \\ \hline
\texttt{travel\_year} & Categorical & 4 values between 2012 and 2015& Year of travel \\ \hline
\texttt{travel\_month} & Categorical & 12 values between 1 and 12 & Month of year of travel \\ \hline
\texttt{travel\_date} & Categorical & 31 values between 1 and 31& Date of month of travel\\ \hline
\texttt{day\_of\_week} & Categorical & 7 values between 1 and 7 & Day of the week of travel\\ \hline
\texttt{start\_time\_linear} & Continuous & 609 unique values between 0 and 23.92 & Start time of trip (in decimal hours)\\ \hline
\texttt{age} & Continuous & 90 unique values between 5 and 94 & Age of passenger in years \\ \hline
\texttt{female} & Categorical & 0 (male), 1 (female) & Gender of passenger \\\hline
\texttt{driving\_license} & Categorical & 0 (none), 1 (has driving license) & Whether the traveller has a driving licence\\ \hline
\texttt{car\_ownership} & Categorical & 3 values between 0 and 2 & Car ownership of household \\ \hline
\texttt{distance} & Continuous & 8'972 unique values between 77 and 40'941 & Straight line trip distance \\ \hline
\texttt{dur\_walking} & Continuous & 8'416 unique values between 0.028 and 9.28 & Duration of walking route\\ \hline
\texttt{dur\_cycling} & Continuous & 4'350 unique values between 0.0075 and 3.05 & Duration of cycling route\\ \hline
\texttt{dur\_pt\_access} & Continuous & 1'656 unique values between 0 and 1.06 & Duration walking to/from first/last stop on public transport route\\ \hline
\texttt{dur\_pt\_rail} & Continuous & 76 unique values between 0 and 1.37 & Duration spent on rail services on public transport route\\ \hline
\texttt{dur\_pt\_bus} & Continuous & 2'469 unique values between 0 and 2.15 & Duration spent on bus services on public transport route\\ \hline
\texttt{dur\_pt\_int} & Continuous & 725 unique values between 0 and 0.57 & Total duration of public transport interchanges\\ \hline
\texttt{pt\_n\_interchanges} & Categorical & 5 values between 0 and 4 & Total number of public transport interchanges\\ \hline
\texttt{dur\_driving} & Continuous & 3'544 unique values between 0.0042 and 1.79 & Duration of driving route\\ \hline
\texttt{cost\_transit} & Continuous & 146 unique values between 0 and 11.7 & Cost of public transport route\\ \hline
\texttt{cost\_driving\_fuel} & Continuous & 507 unique values between 0.02 and 10.09 & Vehicle operation costs of driving route\\ \hline
\texttt{cost\_driving\_con\_charge} & Categorical & 2 values (0 or 10.5) & Congestion charge for driving route\\ \hline
\texttt{driving\_traffic\_percent} & Continuous & 15'787 unique values between 0 and 1.04 & Traffic variability \\ \hline \end{xltabular}
\begin{xltabular}{\textwidth}{l||c|X|X}
\caption{ADULT dataset}
\label{tab:data_adult} \\
\textbf{Variables} & \textbf{Type} & \textbf{Details} & \textbf{Description} \\ \midrule[1.5pt]
\endfirsthead
\multicolumn{4}{c} {\tablename\ \thetable{} -- continued from previous page} \\
\textbf{Variables} & \textbf{Type} & \textbf{Details} & \textbf{Description} \\ \midrule[1.5pt]
\endhead
\multicolumn{4}{r}{{Continues on next page...}}
\endfoot
\endlastfoot
\texttt{age} & Continuous & 74 unique integers between 17 and 90 & Age of individual \\ \hline
\texttt{workclass} & Categorical & 9 string categories & Work status of the individual \\ \hline
\texttt{education} & Categorical & 16 string categories & Education of the individual \\ \hline
\texttt{education-num} & Categorical & 16 integer categories & Number of years of education \\ \hline
\texttt{marital-status} & Categorical & 7 string categories & Marital status of the individual \\ \hline
\texttt{occupation} & Categorical & 14 string categories & Occupation of the individual \\ \hline
\texttt{relationship} & Categorical & 6 string categories & Relationship with a partner \\ \hline
\texttt{race} & Categorical & 5 string categories & Race of the individual \\ \hline
\texttt{gender} & Categorical & 2 string categories & Gender of the individual \\ \hline
\texttt{capital-gain} & Continuous & 121 unique integers between 0 and 99'999 & Capital gains \\ \hline
\texttt{capital-loss} & Continuous & 97 unique integers between 0 and 4'356 & Capital losses \\ \hline
\texttt{hours-per-week} & Categorical & 96 unique integers between 1 and 99 & Number of hours of work per week \\ \hline
\texttt{native-country} & Categorical & 41 string categories & Native country of the individual\\ \hline
\texttt{income} & Categorical & 2 string categories & Income greater or equal to 50k per year \\ \hline \end{xltabular}
\subsection{Case studies - Directed Acyclic Graphs}
\begin{figure}
\caption{DAG used for the CMAP case study. We identified three categories of variables: purple corresponds to individuals, blue to households, and red to trips.}
\label{fig:DAG_Chicago}
\end{figure}
\begin{landscape} \mbox{}
\begin{figure}
\caption{DAG used for the LPMC case study. We identified five categories of variables: purple corresponds to individuals, blue to households, red to trips, orange to survey, and yellow to alternatives. }
\label{fig:DAG_LPMC}
\end{figure}
\end{landscape}
\begin{figure}
\caption{DAG used for the ADULT case study. We identified two categories of variables: orange corresponds to individuals, blue to occupation.}
\label{fig:DAG_adult}
\end{figure}
\ifarXiv
\foreach \x in {1,...,\the\pdflastximagepages}
{
\includepdf[fitpaper=true,pages={\x}]{supp_mat/LedHilBie_DATGAN_Support_arxiv.pdf}
} \fi
\end{document} |
\begin{document}
\title{Measuring Coherence with Entanglement Concurrence}
\author{Xianfei Qi} \author{Ting Gao} \email{gaoting@hebtu.edu.cn} \affiliation {College of Mathematics and Information Science, Hebei Normal University, Shijiazhuang 050024, China} \author{Fengli Yan} \email{flyan@hebtu.edu.cn} \affiliation {College of Physics Science and Information Engineering, Hebei Normal University, Shijiazhuang 050024, China}
\begin{abstract} Quantum coherence is a fundamental manifestation of the quantum superposition principle. Recently, Baumgratz \emph{et al}. [\href{http://dx.doi.org/10.1103/PhysRevLett.113.140401}{ Phys. Rev. Lett. \textbf{113}, 140401 (2014)}] presented a rigorous framework for quantifying coherence from the viewpoint of the theory of physical resources. Here we propose a new valid quantum coherence measure, a convex roof measure, for a quantum system of arbitrary dimension, constructed essentially from the generalized Gell-Mann matrices. A rigorous proof shows that the proposed coherence measure, coherence concurrence, fulfills all the requirements dictated by the resource theory of quantum coherence measures. Moreover, strong links between the resource frameworks of coherence concurrence and entanglement concurrence are derived, showing that any degree of coherence with respect to some reference basis can be converted to entanglement via incoherent operations. Our work provides a clear quantitative and operational connection between coherence and entanglement based on the two kinds of concurrence. The new coherence measure, coherence concurrence, may also prove useful in further studies of quantum coherence. \end{abstract}
\pacs{ 03.67.Mn, 03.65.Ud, 03.67.-a}
\maketitle
\section{Introduction} As a striking feature of quantum mechanics, quantum coherence arises from the principle of quantum superposition and is central to quantum physics. It is one of the fundamental features that mark the departure of the quantum world from the classical realm, and it underlies extensive quantum phenomena such as interference, lasing, superconductivity, and superfluidity. It is also an essential ingredient in areas such as quantum optics and quantum thermodynamics \cite{PRL113.150402, SR6.22174}.
The catalytic role of quantum superposition states in thermal operations was uncovered in \cite{PRL113.150402}. In \cite{SR6.22174}, the authors showed that the physical realisation of optimal thermodynamic projection processes can yield non-trivial thermodynamic work only for quantum states with coherences.
Quantum coherence is also regarded as a fundamental ingredient for quantum information processing tasks \cite{QCI.2010}. However, a comprehensive formulation of the resource theory of coherence was only recently presented \cite{PRL113.140401}, where intuitive and easily computable measures of coherence were identified by adopting the viewpoint of coherence as a physical resource.
Following this seminal work, fruitful research has been done, much of it devoted to finding new appropriate measures of quantum coherence \cite{PRL113.170401,PRA91.042120,QIC15.1307,PRA92.022124,PRA93.012110}, studying maximally coherent states \cite{QIC15.1355,PRA93.032326}, the ordering of states under different coherence measures \cite{QIP2016}, the distribution of quantum coherence in multipartite systems \cite{PRL116.150504}, and the relation between coherence and other measures of quantumness \cite{PRL115.020403,PRA92.022112,SR5.10922,PRL116.160407,PRL117.020402}.
Coherence has also been studied in the context of incoherent quantum operations \cite{arXiv1604v2,JPA50.045301,PRX7.011024}.
Quantum entanglement is the main ingredient of the quantum speed-up in quantum computation and communication. The role of entanglement as a resource in quantum information has stimulated intensive research into both its qualitative and quantitative aspects \cite{RMP80.517}. In entanglement theory, concurrence is an important entanglement measure. Concurrence was first introduced in Ref.~\cite{PRA54.3824} as an auxiliary tool to compute the entanglement of formation for Bell-diagonal two-qubit states. Subsequently, Wootters and co-workers established concurrence as an entanglement measure for two-qubit states and derived computable formulas for concurrence and entanglement of formation in the two-qubit case \cite{PRL78.5022,PRL80.2245}. Later, generalizations to bipartite higher-dimensional systems \cite{PRA64.042315} as well as to multipartite systems \cite{PRL93.230501} were proposed. Though many lower bounds for concurrence based on various approaches were obtained \cite{PRA67.052308,PRL95.040504,PRA74.050303,PRA75.052330,PRL109.200503,QIP2017}, exact formulas were derived only for two-qubit states \cite{PRL80.2245} and some highly symmetric states \cite{PRL85.2625,PRA64.062307,PRA67.012307}. Several meaningful efforts have also been made to generalize the notion of concurrence to new forms for detecting multipartite entanglement \cite{PRA83.062325,PRA86.062323,PRL112.180501}. For entanglement quantified by the concurrence, monogamy relations in multipartite quantum systems have been studied in depth \cite{PRA61.052306,PRL96.220503,PRA78.012311}.
In quantitative coherence theory, considerable effort has been devoted to developing different coherence measures, while much less is known about the relations between these measures and, in particular, their connection to the resources they quantify. It is believed that, given a well-defined coherence measure, there should be a physical resource (defined through a protocol) that is quantified by this measure. Based on distance measures, the $l_1$-norm of coherence and the relative entropy of coherence were introduced \cite{PRL113.140401}. The intrinsic randomness measure (also called the coherence of formation) was proposed, built essentially on the intrinsic randomness of measurement \cite{PRA92.022124}. It equals the coherence cost, the minimal asymptotic rate at which maximally coherent pure states must be consumed to prepare $\rho$ by incoherent operations \cite{PRL116.120404}. From the viewpoint of physical resources, this coherence measure captures the operational aspect of quantum coherence.
Both coherence and entanglement display the quantumness of a physical system. It is therefore meaningful to study the interconversion between quantum coherence and entanglement. In this paper we put forward a new valid quantum coherence measure via the generalized Gell-Mann matrices, and derive how much of one resource emerges from the other.
This paper is organized as follows. In Sec.~\uppercase\expandafter{\romannumeral 2} we review the framework of coherence measures and introduce three valid coherence measures: the relative entropy of coherence, the $l_{1}$-norm of coherence, and the intrinsic randomness measure. In Sec.~\uppercase\expandafter{\romannumeral 3} we present a new coherence measure, called coherence concurrence, for quantum systems of any dimension, based on the generalized Gell-Mann matrices, and prove that it is a good coherence measure. In Sec.~\uppercase\expandafter{\romannumeral 4} we establish a relation for the interconversion between entanglement and coherence under incoherent operations, based on coherence concurrence and entanglement concurrence. Sec.~\uppercase\expandafter{\romannumeral 5} presents the outlook and conclusions.
\section{Review of coherence measures}
Before we state our main results, a review of the framework of coherence measures is necessary. Throughout the paper, we consider a general $d$-dimensional Hilbert space $\mathcal{H}$. Since coherence is basis dependent, we fix a particular basis, $\{|i\rangle\}_{i=1,\ldots,d}$, of the $d$-dimensional Hilbert space $\mathcal{H}$ in which we consider our quantum states. A state is called incoherent if it is diagonal in this fixed basis, and coherent otherwise. The set of all incoherent states is usually labelled $\mathcal{I}$. Hence, all density operators $\delta\in \mathcal{I}$ are of the form \begin{equation} \begin{aligned}
\delta=\sum_{i=1}^{d}\lambda_{i}|i\rangle\langle i|, \end{aligned} \end{equation} where $\lambda_i$ are probabilities.
In the resource theory of coherence, the free operations are the so-called incoherent operations, defined as incoherent completely positive trace preserving \text{\small(ICPTP)} maps. An incoherent operation $\Lambda_{\text{\tiny ICPTP}}$ is a completely positive trace preserving map \begin{equation} \begin{aligned} \Lambda_{\text{\tiny ICPTP}}(\rho)=\sum_{n}K_{n}\rho K_{n}^{\dag}, \end{aligned} \end{equation} whose Kraus operators $K_{n}$ satisfy $\sum_{n}K_{n}^{\dag}K_{n}=I_d$ and $K_{n}\mathcal{I}K_{n}^{\dag}\subset \mathcal{I}$. When measurement outcomes are retained, the state corresponding to outcome $n$ is $\rho_{n}=K_{n}\rho K_{n}^\dag/p_{n}$, which occurs with probability $p_{n}=\text{tr}[K_{n}\rho K_{n}^\dag]$.
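As an illustrative aside (not part of the original text), a unitary Kraus operator satisfies the condition $K\mathcal{I}K^{\dag}\subset \mathcal{I}$ exactly when each of its columns has at most one nonzero entry, since $U|j\rangle\langle j|U^{\dag}$ is diagonal only if $U|j\rangle$ is proportional to a basis vector. A minimal NumPy sketch of this check (the function name is our own):

```python
import numpy as np

def is_incoherent_unitary(U, tol=1e-12):
    """A unitary Kraus operator maps every diagonal (incoherent) state
    to a diagonal state iff each column has at most one nonzero entry."""
    return all((np.abs(U[:, j]) > tol).sum() <= 1 for j in range(U.shape[1]))

# CNOT permutes basis states, so it is incoherent; Hadamard is not
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
print(is_incoherent_unitary(CNOT), is_incoherent_unitary(H))  # True False
```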
A maximally coherent state (MCS) is one that can be used as a resource to prepare any other state of the same dimension with certainty by means of incoherent operations only. The following state \begin{equation} \begin{aligned}
|\Psi_{d}\rangle=\frac{1}{\sqrt{d}}\sum_{i=1}^{d}|i\rangle \end{aligned} \end{equation}
is an MCS. By applying unitary incoherent operations to $|\Psi_{d}\rangle$, one obtains a set of maximally coherent states \cite{PRA93.032326} \begin{equation} \begin{aligned}
S_{\text{MCS}}=\left\{\frac{1}{\sqrt{d}}\sum\limits_{j=1}^{d}\mathrm{e}^{\mathrm{i}\theta_{j}}|j\rangle\mid \theta_{1},\ldots,\theta_{d}\in [0,2\pi)\right\}. \end{aligned} \end{equation}
A rigorous framework for quantifying coherence was proposed in Ref.\cite{PRL113.140401}. A coherence measure is a map $C$ from quantum states $\rho$ to nonnegative real numbers satisfying the following properties:
(C1) ~$C(\rho)\geq 0$ for all states $\rho$, and $C(\delta)=0$ if and only if $\delta$ is an incoherent state.
(C2) Monotonicity under incoherent operations. (C2a) $C(\rho)$ is nonincreasing under incoherent operations, i.e., $C(\Lambda_{\text{\tiny ICPTP}}(\rho))\leqslant C(\rho)$ for arbitrary incoherent operations $\Lambda_{\text{\tiny ICPTP}}$ and states $\rho$. (C2b) $C(\rho)$ is nonincreasing on average under selective incoherent operations, i.e., $\sum_{n}p_{n}C(\rho_{n})\leqslant C(\rho)$ for all incoherent operations $\Lambda_{\text{\tiny ICPTP}}$ and states $\rho$, where the probabilities $p_{n}=\text{tr}[K_{n}\rho K_{n}^\dag]$, the states $\rho_{n}=K_{n}\rho K_{n}^\dag/p_{n}$, and the Kraus operators $K_{n}$ obey $\sum_{n}K_{n}^{\dag}K_{n}=I_d$ and $K_{n}\mathcal{I}K_{n}^{\dag}\subset \mathcal{I}$.
(C3) Nonincreasing under mixing of states (convexity): $C$ is a convex function of density matrices, i.e., $C(\sum_{n}p_{n}\rho_{n})\leqslant \sum_{n}p_{n}C(\rho_{n})$ for any set of states $\{\rho_n\}$ and any probability distribution $\{p_n\}$.
Conditions (C2b) and (C3) automatically imply condition (C2a) \cite{PRL113.140401}.
(C4) Only MCSs attain the maximal value.
The additional requirement (C4) for coherence measures was proposed in \cite{PRA93.032326}.
We now introduce some valid coherence measures satisfying all four requirements. In Ref.~\cite{PRL113.140401}, two widely known coherence measures were presented, each quantified by the minimum distance from $\rho$ to the set of incoherent states with respect to a particular distance measure. One is the \emph{relative entropy of coherence}, based on the relative entropy, \begin{equation} \begin{aligned} C_{\text{rel.ent}}(\rho)\equiv \mathop{\textrm{min}}\limits_{\delta\in \mathcal{I}}S(\rho\parallel \delta)=S(\rho_{\text{diag}})-S(\rho), \end{aligned} \end{equation}
where $S$ is the von Neumann entropy and $\rho_{\text{diag}}$ is the state dephased in the reference basis $\{|i\rangle\}$, i.e., the state obtained from $\rho$ by deleting all off-diagonal entries. The other is the \emph{$l_{1}$-norm of coherence}, based on the $l_{1}$ matrix norm, \begin{equation} \begin{aligned}
C_{l_{1}}(\rho)\equiv \mathop{\textrm{min}}\limits_{\delta\in \mathcal{I}}\|\rho-\delta\|_{l_{1}}=\sum\limits_{i\neq j}|\langle i|\rho|j\rangle|, \end{aligned} \end{equation} which is the sum of the absolute value of the off-diagonal entries of the quantum state.
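Both distance-based measures above are straightforward to evaluate numerically. The following NumPy sketch (an illustration, not from the paper) computes them for the maximally coherent state $|\Psi_d\rangle$, for which $C_{l_1}=d-1$ and $C_{\text{rel.ent}}=\log_2 d$:

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -tr(rho log2 rho), computed from the eigenvalues."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-(evals * np.log2(evals)).sum())

def C_rel_ent(rho):
    """Relative entropy of coherence: S(rho_diag) - S(rho)."""
    rho_diag = np.diag(np.diag(rho))
    return von_neumann_entropy(rho_diag) - von_neumann_entropy(rho)

def C_l1(rho):
    """l1-norm of coherence: sum of moduli of off-diagonal entries."""
    return float(np.abs(rho).sum() - np.abs(np.diag(rho)).sum())

# maximally coherent state |Psi_d> for d = 3
d = 3
psi = np.ones(d) / np.sqrt(d)
rho = np.outer(psi, psi.conj())
print(C_l1(rho), C_rel_ent(rho))  # ≈ 2.0, ≈ log2(3) ≈ 1.585
```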
A quantum coherence measure, the \emph{intrinsic randomness measure}, built essentially on intrinsic randomness, was proposed in \cite{PRA92.022124}. It is the first convex roof measure for coherence. For a pure state, \begin{equation} \begin{aligned}
R_{I}(|\psi\rangle\langle \psi|)=S(\rho_{\text{diag}}), \end{aligned} \end{equation}
which equals the relative entropy of coherence $C_{\text{rel.ent}}(|\psi\rangle\langle \psi|)$. This coherence measure is extended to mixed states by the so-called convex roof construction \begin{equation} \begin{aligned}
R_{I}(\rho)=\mathop{\textrm{min}}\limits_{\{p_{i},|\psi_{i}\rangle\}} \sum\limits_{i} p_{i}R_{I}(|\psi_{i}\rangle), \end{aligned} \end{equation}
where the minimization is over all pure-state decompositions $\rho=\sum_{i}p_{i}|\psi_{i}\rangle\langle\psi_{i}|$, with $p_i\geqslant 0$ and $\sum_{i}p_{i}=1$. The quantity in the equation above is also known as the coherence of formation, and was studied in \cite{PRL116.120404}.
\section{Coherence concurrence} In this section we present a new quantum coherence measure, named \textquotedblleft coherence concurrence\textquotedblright, a convex roof measure for quantum systems of arbitrary dimension, constructed via the generalized Gell-Mann matrices.
It fulfills not only the original four requirements (C1), (C2a), (C2b), and (C3) of coherence measures but also the additional requirement (C4), and thus it is a valid coherence measure.
The generalized Gell-Mann matrices (GGM) are the generators of $SU(d)$ defined as the following three different types of matrices \cite{PLA314.339,JPA41.235303}:
(i) $d(d-1)/2$ symmetric GGM \begin{equation} \begin{aligned}
\Lambda_{\text{s}}^{j,k}=|j\rangle\langle k|+|k\rangle\langle j|,~~~(1\leqslant j<k\leqslant d); \end{aligned} \end{equation}
(ii) $d(d-1)/2$ antisymmetric GGM \begin{equation} \begin{aligned}
\Lambda_{\text{a}}^{j,k}=-\text{i}|j\rangle\langle k|+\text{i}|k\rangle\langle j|,~~~(1\leqslant j<k\leqslant d); \end{aligned} \end{equation}
(iii) $(d-1)$ diagonal GGM \begin{equation} \begin{aligned}
\Lambda^{l}=\sqrt{\frac{2}{l(l+1)}}&\left(\sum\limits_{j=1}^{l}|j\rangle\langle j|-l|l+1\rangle\langle l+1|\right),\\
&(1\leqslant l\leqslant d-1). \end{aligned} \end{equation}
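For concreteness, the three families of GGM can be generated numerically. The following NumPy sketch (an illustration, not part of the paper) builds all $d^{2}-1$ generators and, for $d=3$, yields the eight standard Gell-Mann matrices; note the code uses 0-based indices:

```python
import numpy as np

def ggm(d):
    """Generalized Gell-Mann matrices (generators of SU(d))."""
    mats = []
    E = lambda j, k: np.eye(d)[:, [j]] @ np.eye(d)[[k], :]  # |j><k|
    for j in range(d):
        for k in range(j + 1, d):
            mats.append(E(j, k) + E(k, j))                # symmetric GGM
            mats.append(-1j * E(j, k) + 1j * E(k, j))     # antisymmetric GGM
    for l in range(1, d):
        diag = np.diag([1] * l + [-l] + [0] * (d - l - 1)).astype(float)
        mats.append(np.sqrt(2.0 / (l * (l + 1))) * diag)  # diagonal GGM
    return mats

mats = ggm(3)
print(len(mats))  # 8, i.e. d^2 - 1 for d = 3
```

All generators are Hermitian, traceless, and satisfy $\mathrm{tr}(\Lambda_a\Lambda_b)=2\delta_{ab}$.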
We give a new expression for $C_{l_{1}}$ based on the symmetric GGM. First, we introduce a lemma.
\emph{Lemma.} Let $A$ be Hermitian. If $A$ is positive semidefinite, then all of its principal submatrices are positive semidefinite \cite{MA.2013}.
\emph{Proposition.} For a density matrix $\rho$, we have \begin{equation}\label{Prop} \begin{aligned}
C_{l_{1}}(\rho)&=2\sum\limits_{1\leq j<k\leq d}|\rho_{jk}|\\
&=\sum\limits_{1\leq j<k\leq d}\left|\sqrt{\eta_{1}^{j,k}}-\sqrt{\eta_{2}^{j,k}}\right|, \end{aligned} \end{equation} where $\eta_{1}^{j,k}$ and $\eta_{2}^{j,k}$ are the non-zero eigenvalues of the matrix $\rho\Lambda_{\text{s}}^{j,k}\rho^{*}\Lambda_{\text{s}}^{j,k}$, and $\rho^*$ denotes complex conjugation in the standard basis.
\emph{Proof.} We just need to prove that \begin{equation}\label{Eq-1} \begin{aligned}
2|\rho_{jk}|=\left|\sqrt{\eta_{1}^{j,k}}-\sqrt{\eta_{2}^{j,k}}\right|. \end{aligned} \end{equation}
After a tedious but straightforward computation, the eigenvalues of the matrix $\rho\Lambda_{\text{s}}^{j,k}\rho^{*}\Lambda_{\text{s}}^{j,k}$ are found to be $(|\rho_{jk}|+\sqrt{\rho_{jj}\rho_{kk}})^{2}$, $(|\rho_{jk}|-\sqrt{\rho_{jj}\rho_{kk}})^{2}$, and zeros. By the Lemma, the $2\times 2$ principal submatrix of $\rho$ on rows and columns $j,k$ is positive semidefinite, so $\sqrt{\rho_{jj}\rho_{kk}}\geqslant|\rho_{jk}|$ and the square roots of the non-zero eigenvalues are $|\rho_{jk}|+\sqrt{\rho_{jj}\rho_{kk}}$ and $\sqrt{\rho_{jj}\rho_{kk}}-|\rho_{jk}|$, which implies Eq.~(\ref{Eq-1}), as required.
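The Proposition is easy to check numerically. The sketch below (illustrative, with a randomly generated full-rank density matrix) verifies the identity $2|\rho_{jk}|=|\sqrt{\eta_{1}^{j,k}}-\sqrt{\eta_{2}^{j,k}}|$ for one pair $(j,k)$:

```python
import numpy as np

rng = np.random.default_rng(0)

# random 3x3 density matrix: rho = A A^dag / tr(A A^dag)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
rho = A @ A.conj().T
rho /= np.trace(rho).real

j, k = 0, 1
Ls = np.zeros((3, 3)); Ls[j, k] = Ls[k, j] = 1  # symmetric GGM |j><k|+|k><j|

# eigenvalues of rho * Lambda_s * rho^* * Lambda_s (two nonzero, rest zero)
eta = np.linalg.eigvals(rho @ Ls @ rho.conj() @ Ls)
eta = np.sort(np.abs(eta))[::-1]

lhs = 2 * abs(rho[j, k])
rhs = abs(np.sqrt(eta[0]) - np.sqrt(eta[1]))
print(lhs, rhs)  # equal up to numerical error
```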
Next we present a new quantum coherence measure, coherence concurrence.
For a $d$-dimensional pure state $|\psi\rangle$, we define its coherence concurrence as \begin{equation}\label{CoherenceConcurrenceDef} \begin{aligned}
C(|\psi\rangle)=\sum\limits_{1\leq j<k\leq d}|\langle \psi|\Lambda_{\text{s}}^{j,k}|\psi^{*} \rangle|. \end{aligned} \end{equation}
It is not difficult to derive that
\begin{equation}\label{PureState C=C_li}
C(|\psi\rangle)=\sum\limits_{1\leq j<k\leq d}|\langle \psi|\Lambda_{\text{s}}^{j,k}|\psi^{*} \rangle|=C_{\text{$l_{1}$}}(|\psi\rangle\langle\psi|).
\end{equation}
That is, the coherence concurrence equals the $l_{1}$-norm of coherence for pure states. Coherence concurrence is then extended to mixed states by the convex roof construction \begin{equation} \begin{aligned}
C(\rho)=\mathop{\textrm{min}}\limits_{\{p_{i},|\psi_{i}\rangle\}} \sum\limits_{i} p_{i}C(|\psi_{i}\rangle), \end{aligned} \end{equation}
where the minimization is taken over all possible ensemble realizations $\rho=\sum_{i}p_{i}|\psi_{i}\rangle\langle\psi_{i}|$, with $p_i\geqslant 0$ and $\sum_{i}p_{i}=1$. A decomposition attaining the minimum is called an optimal decomposition.
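The pure-state identity (\ref{PureState C=C_li}) can be verified directly. This NumPy sketch (illustrative, with an arbitrary qutrit state) evaluates the GGM expression for the coherence concurrence and compares it with $C_{l_1}$:

```python
import numpy as np

def sym_ggm(d, j, k):
    """Symmetric GGM |j><k| + |k><j| (0-indexed)."""
    L = np.zeros((d, d)); L[j, k] = L[k, j] = 1
    return L

def coherence_concurrence_pure(psi):
    """C(|psi>) = sum_{j<k} |<psi| Lambda_s^{jk} |psi^*>|."""
    d = len(psi)
    return sum(abs(psi.conj() @ sym_ggm(d, j, k) @ psi.conj())
               for j in range(d) for k in range(j + 1, d))

def C_l1(rho):
    """l1-norm of coherence: sum of moduli of off-diagonal entries."""
    return float(np.abs(rho).sum() - np.abs(np.diag(rho)).sum())

psi = np.array([1, 1j, -1], dtype=complex) / np.sqrt(3)
rho = np.outer(psi, psi.conj())
print(coherence_concurrence_pure(psi), C_l1(rho))  # both ≈ 2.0
```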
\emph{Theorem 1.} Coherence concurrence is a valid coherence measure. That is, the coherence concurrence satisfies all the requirements (C1)-(C4) of coherence measures.
\emph{Proof.} For pure states, coherence concurrence satisfies (C1), (C2a), (C2b), and (C3), since it equals the $l_{1}$-norm of coherence, which fulfills (C1), (C2a), (C2b), and (C3) \cite{PRL113.140401}. For mixed states, it is easy to see from the definition that coherence concurrence satisfies requirements (C1) and (C3). Requirement (C2b) can be proven in a similar way as in Ref.~\cite{PRA92.022124}. Coherence concurrence fulfills (C2a) since it satisfies (C2b) and (C3) \cite{PRL113.140401}. Motivated by the proof in Ref.~\cite{PRA93.032326}, we prove that coherence concurrence also fulfills (C4). A detailed proof is given in Appendix A.
\emph{Corollary 1.}~(1) Coherence concurrence is not less than the $l_{1}$-norm of coherence, i.e., $C(\rho)\geq C_{l_{1}}(\rho)$ for any state $\rho$.
(2) For the family of mixed states consisting of a pure state mixed with white noise, $C(\rho)=C_{l_{1}}(\rho)$.
This follows directly from Theorem 1 and the definition of coherence concurrence.
The relations between coherence concurrence and other coherence measures are summarized in Table~\ref{tab:table}.
\begin{table*} \caption{\label{tab:table}The relations among four coherence measures\footnote{$C$, $C_{l_{1}}$, $R_{I}$, and $C_{\text{rel.ent}}$ denote the coherence concurrence, $l_{1}$-norm of coherence, intrinsic randomness measure, and relative entropy of coherence, respectively.}.} \begin{ruledtabular} \begin{tabular}{ccccc}
&$C$&$C_{l_{1}}$&$R_{I}$&$C_{\text{rel.ent}}$\\ \hline
Qubit pure state&$C$&$C_{l_{1}}=C$&$R_{I}=H(C)$\footnote{$H(C)$ denotes $H\left(\frac{1+\sqrt{1-C^{2}}}{2}\right)$.}&$C_{\text{rel.ent}}=R_{I}=H(C)$\\
Qubit mixed state&$C$&$C_{l_{1}}=C$&$R_{I}=H(C)$&\\
Qudit pure state&$C$&$C_{l_{1}}=C$&&$C_{\text{rel.ent}}=R_{I}$\\
Qudit mixed state&$C$&$C_{l_{1}}\leqslant C$&&\\
\end{tabular} \end{ruledtabular} \end{table*}
\section{The relation between coherence and entanglement}
In this section, we establish the connection between two quantum resources, coherence and entanglement, via coherence concurrence and entanglement concurrence. First, we review entanglement concurrence $C_{E}$. For a bipartite pure state $|\psi\rangle\in \mathcal{H}_{M}\otimes \mathcal{H}_{N}$, the entanglement concurrence is defined by \begin{equation}\label{ConcurrenceDef} \begin{aligned}
C_{E}(|\psi\rangle)=\sqrt{2(1-\textrm{tr}\rho_{M}^{2})}, \end{aligned} \end{equation}
where $\rho_{M}=\textrm{tr}_{N}(|\psi\rangle\langle\psi|)$. For a mixed state $\rho$, the concurrence is given by the minimum average concurrence over all decompositions of $\rho$, the so-called convex roof construction, \begin{equation} \begin{aligned}
C_{E}(\rho)=\mathop{\textrm{min}}\limits_{\{p_{i},|\psi_{i}\rangle\}} \sum\limits_{i} p_{i}C_{E}(|\psi_{i}\rangle). \end{aligned} \end{equation} The convex roof is notoriously hard to evaluate, but for two-qubit mixed states an exact formula was derived \cite{PRL80.2245}: \begin{equation}\label{CE} \begin{aligned} C_{E}(\rho) = \textrm{max}\{\lambda_{1}-\lambda_{2}-\lambda_{3}-\lambda_{4},0\}, \end{aligned} \end{equation} where the numbers $\lambda_{i}~(i=1, 2, 3, 4)$ are the square roots of the eigenvalues of the non-Hermitian matrix $\rho(\sigma_y\otimes\sigma_y)\rho^*(\sigma_y\otimes\sigma_y)$ in nonincreasing order, $*$ denotes complex conjugation in the standard basis, and $\sigma_y$ is the Pauli matrix.
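The two-qubit formula (\ref{CE}) translates directly into code. A minimal NumPy implementation (an illustrative sketch, not from the paper) reproduces the expected value $C_E=1$ for a Bell state:

```python
import numpy as np

def wootters_concurrence(rho):
    """Two-qubit concurrence: max{l1 - l2 - l3 - l4, 0}, where the l_i are
    the square roots of the eigenvalues of rho (sy x sy) rho^* (sy x sy)."""
    sy = np.array([[0, -1j], [1j, 0]])
    YY = np.kron(sy, sy)
    R = rho @ YY @ rho.conj() @ YY
    lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(R))))[::-1]
    return max(lam[0] - lam[1] - lam[2] - lam[3], 0.0)

# Bell state (|00> + |11>)/sqrt(2): maximally entangled, concurrence 1
bell = np.zeros(4, dtype=complex); bell[0] = bell[3] = 1 / np.sqrt(2)
print(wootters_concurrence(np.outer(bell, bell.conj())))  # ≈ 1.0
```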
Next, we will discuss the relation between coherence and entanglement. The following theorems provide a strong link between entanglement concurrence $C_{E}$ and coherence concurrence $C$.
\emph{Theorem 2.} The amount of entanglement $C_{E}$ generated from a state $\rho^{S}$ via an incoherent operation $\Lambda^{SA}$, upon attaching an ancilla system $A$ initialized in the reference incoherent state $|1\rangle\langle 1|^{A}$, is bounded above by the coherence concurrence $C$ of $\rho^{S}$: \begin{equation}\label{Th2} \begin{aligned}
C_{E}(\Lambda^{SA}[\rho^{S}\otimes |1\rangle\langle 1|^{A}])\leqslant C(\rho^{S}). \end{aligned} \end{equation}
\emph{Proof.} The combination of \begin{equation}\label{Th2-1}
C(\rho^{S})=C(\rho^{S}\otimes |1\rangle\langle 1|^{A})\geqslant C(\Lambda^{SA}[\rho^{S}\otimes |1\rangle\langle 1|^{A}]), \end{equation} and \begin{equation}\label{Th2-2}
C(\Lambda^{SA}[\rho^{S}\otimes |1\rangle\langle 1|^{A}])\geqslant C_{E}(\Lambda^{SA}[\rho^{S}\otimes |1\rangle\langle 1|^{A}]), \end{equation} implies Ineq.~(\ref{Th2}). A detailed proof is given in Appendix B.
This implies that the system-ancilla state $\Lambda^{SA}[\rho^{S}\otimes |1\rangle\langle 1|^{A}]$ is separable for any incoherent operation $\Lambda^{SA}$ whenever the initial state $\rho^S$ of the $d$-dimensional system $S$ is incoherent. In other words, entanglement can be generated by incoherent operations only if the initial state $\rho^S$ is coherent.
An even stronger link exists for qubit systems. We prove that inequality (\ref{Th2}) can be saturated when both the system and the ancilla are qubits.
\emph{Corollary 2.} For any qubit state $\rho^{S}$, there exists an incoherent operation $\Lambda^{SA}$ such that the entanglement concurrence of two-qubit state generated from $\rho^{S}$ via $\Lambda^{SA}$, by attaching an ancilla qubit system $A$ initialized in a reference incoherent state $|1\rangle\langle 1|^{A}$, equals the coherence concurrence of $\rho^{S}$.
\emph{Proof.} Assume that $\rho^{S}=\sum_{i,j=1}^{2}\rho_{ij}|i\rangle\langle j|$. We choose the two-qubit \text{\small CNOT} gate as the incoherent operation $\Lambda^{SA}$. Note that the coherence concurrence $C(\rho^{S})=C_{l_1}(\rho^S)=2|\rho_{12}|$ for a qubit state $\rho^{S}$ \cite{PRA92.022124}, and $C_{E}(\Lambda^{SA}[\rho^{S}\otimes |1\rangle\langle 1|^{A}])=2|\rho_{12}|$ by Eq.~(\ref{CE}). Thus, $C_{E}(\Lambda^{SA}[\rho^{S}\otimes |1\rangle\langle 1|^{A}])=C(\rho^{S})$, as required.
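The proof of Corollary 2 can be replayed numerically: applying the CNOT gate to $\rho^{S}\otimes|1\rangle\langle 1|^{A}$ and evaluating Eq.~(\ref{CE}) recovers $C_{E}=2|\rho_{12}|=C(\rho^{S})$. In the sketch below (illustrative; the particular state $\rho^S$ is an arbitrary choice, and the reference state $|1\rangle$ of the paper corresponds to the first basis vector, index 0):

```python
import numpy as np

def wootters_concurrence(rho):
    """Two-qubit concurrence via the exact formula of Wootters."""
    sy = np.array([[0, -1j], [1j, 0]])
    YY = np.kron(sy, sy)
    lam = np.sort(np.sqrt(np.abs(
        np.linalg.eigvals(rho @ YY @ rho.conj() @ YY))))[::-1]
    return max(lam[0] - lam[1] - lam[2] - lam[3], 0.0)

# a generic single-qubit state rho^S (arbitrary illustrative choice)
rhoS = np.array([[0.6, 0.2 - 0.1j],
                 [0.2 + 0.1j, 0.4]])

# ancilla in the reference incoherent state (first basis vector)
anc = np.diag([1.0, 0.0])
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

out = CNOT @ np.kron(rhoS, anc) @ CNOT.conj().T
print(wootters_concurrence(out), 2 * abs(rhoS[0, 1]))  # equal
```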
This shows that the coherence concurrence of the initial state of the qubit system $S$ can be exactly converted into an equal amount of entanglement concurrence between $S$ and the incoherent ancilla qubit $A$ by a suitable incoherent operation, the CNOT gate. That is, the amount of entanglement concurrence $C_{E}$ generated from a qubit state $\rho^{S}$ via an incoherent operation $\Lambda^{SA}$, with an ancilla qubit $A$ initialized in the reference incoherent state $|1\rangle\langle 1|^{A}$, reaches its maximum value, the coherence concurrence $C(\rho^{S})$, when $\Lambda^{SA}$ is the two-qubit \text{\small CNOT} gate.
Next we show that any degree of coherence with respect to some reference basis can be converted to entanglement via incoherent operations.
\emph{Theorem 3.} For an arbitrary state $\rho^{S}$, there exists an incoherent operation $\Lambda^{SA}$ such that the entanglement concurrence of the bipartite state generated from $\rho^{S}$ via $\Lambda^{SA}$, upon attaching an ancilla system $A$ initialized in the reference incoherent state $|1\rangle\langle 1|^{A}$, satisfies the following inequality with the coherence concurrence: \begin{equation}\label{Th3} \begin{aligned}
C_{E}(\Lambda^{SA}[\rho^{S}\otimes |1\rangle\langle 1|^{A}]) \geqslant\sqrt{\frac{2}{d(d-1)}}C(\rho^{S}). \end{aligned} \end{equation} Here the dimension $d_A$ of the ancilla is not smaller than that of the system, $d_A\geq d$.
\emph{Proof.} We first prove the inequality for pure states and then extend it to mixed states. A detailed proof can be found in Appendix C.
\emph{Corollary 3.} If $\rho^{S}$ is a maximally coherent state, there exists an incoherent operation $\Lambda^{SA}$ such that inequality (\ref{Th3}) is saturated.
The following result (Theorem 2 in \cite{PRL115.020403}) follows immediately from Theorems 2 and 3:
A state $\rho^{S}$ can be converted to an entangled state via incoherent operations if and only if $\rho^{S}$ is coherent.
The coherence of a quantum state is basis dependent, as is, in a different sense, the entanglement of a state \cite{EPJD64.181, PRL87.077901}. Quantum coherence is basis dependent by definition, while entanglement is locally basis independent, i.e., invariant under local unitary transformations. States that are entangled with respect to a given partition into subsystems can be separable with respect to another partition \cite{PRL87.077901}. However, entanglement generally changes if a global unitary is applied; via global unitary transformations one can switch from an entangled state to a separable state. For pure states, one can always switch unitarily between separability and maximal entanglement. For mixed states, however, a minimal degree of mixedness is required, because the maximally mixed state $\frac{1}{d_1d_2}\sum_{i=1}^{d_1}\sum_{j=1}^{d_2} |ij\rangle\langle ij|$ and a sufficiently small neighborhood of it are separable for any factorization \cite{EPJD64.181}; that is, no unitary transformation can change the separability of the maximally mixed state.
Except for the maximally mixed state $\frac{I_d}{d}$, which is incoherent in any basis, one can always switch between coherence and incoherence. Since coherence is a basis dependent concept, a unitary operation in general changes the coherence of a given state. Every state $\rho=\sum_{i=1}^d \lambda_i|\varphi_i\rangle\langle\varphi_i|$ can be unitarily transformed into the incoherent state $U\rho U^\dagger=\sum_{i=1}^d \lambda_i|i\rangle\langle i|$, where $U$ is a unitary operator such that $U|\varphi_i\rangle=|i\rangle$. Conversely, Theorem 2 in \cite{quant-ph1612.07570} shows that any state different from the maximally mixed state can be unitarily transformed into a coherent state.
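The unitary transformation to an incoherent state described above amounts to diagonalizing $\rho$: taking $U$ to be the adjoint of the eigenvector matrix maps the eigenbasis onto the reference basis. A short NumPy sketch (illustrative, with an arbitrary coherent state):

```python
import numpy as np

def C_l1(rho):
    """l1-norm of coherence: sum of moduli of off-diagonal entries."""
    return float(np.abs(rho).sum() - np.abs(np.diag(rho)).sum())

# an arbitrary coherent mixed state
rho = np.array([[0.7, 0.3], [0.3, 0.3]])

# columns of V are the eigenvectors |phi_i>; U maps them to |i>
evals, V = np.linalg.eigh(rho)
U = V.conj().T
rho_inc = U @ rho @ U.conj().T  # diagonal, hence incoherent
print(C_l1(rho), C_l1(rho_inc))  # positive, then ≈ 0
```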
\section{Outlook and conclusion}
The new coherence measure, coherence concurrence, raises many interesting problems. One can ask whether the $l_{1}$-norm of coherence and the coherence concurrence coincide. It would be of great interest to study coherence distribution in multipartite quantum systems based on the coherence concurrence. An elegant equation connects the coherence concurrence with the intrinsic randomness measure for qubit systems; more research is needed to study the potential link between them for qudit systems. The relation between the coherence concurrence and other coherence measures also needs further investigation.
In summary, a new coherence measure, \textquotedblleft coherence concurrence\textquotedblright, is presented for quantum systems of any dimension, based on the generalized Gell-Mann matrices. It satisfies all the requirements for a proper quantum coherence measure and is a convex roof measure. We show that any degree of coherence in the initial state of a quantum system $S$ can be converted to entanglement between $S$ and an incoherent ancilla $A$ by some incoherent operation. In addition, we establish the relation for the interconversion between coherence and entanglement based on the coherence concurrence and the entanglement concurrence. As a counterpart of the entanglement concurrence for coherence manipulation, we expect that the coherence concurrence can find various applications in the theory of quantum coherence, similar to the concurrence in entanglement theory.
\begin{acknowledgments} This work was supported by the National Natural Science Foundation of China under Grant Nos: 11371005, 11475054, Hebei Natural Science Foundation of China under Grant No: A2016205145. \end{acknowledgments}
\appendix
\section{Detailed proof of Theorem 1}
We will show that the coherence concurrence satisfies all the requirements (C1)-(C4) of a proper quantum coherence measure. Here we only show how to prove (C2b) and (C4); the proofs of the other requirements are given in the main text.
\subsection{Proof of (C2b)}
For pure states, the monotonicity requirement (C2b) reads \begin{equation}\label{A1} \begin{aligned}
C(|\psi\rangle)\geqslant \sum\limits_{n} p_{n}C(|\psi_{n}\rangle), \end{aligned} \end{equation}
where $|\psi_{n}\rangle=K_{n}|\psi\rangle/\sqrt{p_{n}}$ and $p_{n}=\text{tr}[K_{n}|\psi\rangle\langle\psi|K_{n}^{\dag}]$. This requirement is clearly satisfied, because the coherence concurrence equals the $l_{1}$-norm of coherence $C_{l_{1}}$ for pure states, and the monotonicity of the latter has been proved in \cite{PRL113.140401}.
For a mixed state $\rho$, suppose that $\rho=\sum_{i}p_{i}|\psi_{i}\rangle\langle\psi_{i}|$ is the optimal decomposition that achieves the minimum value. That is, \begin{equation} \begin{aligned}
C(\rho)=\sum\limits_{i} p_{i}C(|\psi_{i}\rangle). \end{aligned} \end{equation} It remains to prove that for any incoherent operation $\Lambda_{\texttt{ICPTP}}$ we have \begin{equation} C(\rho)\geqslant\sum\limits_{n} p_{n}C(\rho_{n}), \end{equation} where $\rho_{n}=K_{n}\rho K_{n}^{\dag}/p_{n}$ and $p_{n}=\text{tr}[K_{n}\rho K_{n}^{\dag}]$. Note that \begin{equation} \begin{aligned} \rho_{n}&=\frac{K_{n}\rho K_{n}^{\dag}}{p_{n}}\\
&=\sum\limits_{i}\frac{p_{i}}{p_{n}}K_{n}|\psi_{i}\rangle\langle\psi_{i}|K_{n}^{\dag}\\
&=\sum\limits_{i}\frac{p_{i}}{p_{n}}p_{in}\rho_{in}, \end{aligned} \end{equation}
where $p_{in}=\text{tr}[K_{n}|\psi_{i}\rangle\langle\psi_{i}|K_{n}^{\dag}]$ and $\rho_{in}=K_{n}|\psi_{i}\rangle\langle\psi_{i}|K_{n}^{\dag}/p_{in}$, and we have $p_{n}=\sum_{i}p_{i}p_{in}$. It follows that \begin{equation} \begin{aligned}
C(\rho) &=\sum\limits_{i}p_{i}C(|\psi_{i}\rangle)\\
&\geqslant\sum\limits_{i}p_{i}\sum\limits_{n}p_{in}C(\rho_{in})\\
&=\sum\limits_{n}p_{n}\sum\limits_{i}\frac{p_{i}p_{in}}{p_{n}}C(\rho_{in})\\
&\geqslant\sum\limits_{n}p_{n}C\left(\sum\limits_{i}\frac{p_{i}p_{in}}{p_{n}}\rho_{in}\right)\\
&=\sum\limits_{n}p_{n}C(\rho_{n}), \end{aligned} \end{equation} as required, where the first inequality is based on the conclusion for pure states in (\ref{A1}) and the last inequality is due to the convexity of coherence concurrence.
\subsection{Proof of (C4)}
For pure states, the coherence concurrence coincides with the $l_{1}$-norm of coherence, and $C_{l_{1}}$ satisfies requirement (C4) \cite{PRA93.032326}. Hence we need only consider mixed states. Evidently $C(\rho)$ could attain the maximal value only if $\rho$ could be decomposed solely into a statistical mixture of states from $S_{\text{MCS}}$. This is impossible, however, because a mixed state always has at least two distinct eigenvectors $|\varphi_{1}\rangle$ and $|\varphi_{2}\rangle$ with nonzero eigenvalues $\lambda_{1}$ and $\lambda_{2}$. Without loss of generality, we can assume $\lambda_{1}\leqslant \lambda_{2}$. Then $\lambda_{1}|\varphi_{1}\rangle\langle \varphi_{1}|+\lambda_{2}|\varphi_{2}\rangle\langle \varphi_{2}|$ can be rewritten as $\lambda_{1}|\varphi_{+}\rangle\langle \varphi_{+}|+\lambda_{1}|\varphi_{-}\rangle\langle \varphi_{-}|+(\lambda_{2}-\lambda_{1})|\varphi_{2}\rangle\langle \varphi_{2}|$. Here the states $|\varphi_{\pm}\rangle$ are mutually orthogonal superpositions of $|\varphi_{1}\rangle$ and $|\varphi_{2}\rangle$. By choosing the superposition parameters carefully, we can ensure that $|\varphi_{\pm}\rangle$ are not \text{\small MCS}s even if $|\varphi_{1}\rangle$ and $|\varphi_{2}\rangle$ belong to $S_{\text{MCS}}$. This means a mixed state never admits a decomposition consisting only of states from $S_{\text{MCS}}$. Thus, $\rho$ achieves the maximal value iff $\rho$ is an \text{MCS}.
\section{Detailed proof of Theorem 2}
First, we prove inequality (\ref{Th2-1}). It is easy to see that for a pure state $|\psi\rangle^{\tiny S}$, \begin{equation} \begin{aligned}
C(|\psi\rangle^{\tiny S})=C(|\psi\rangle^{\tiny S}\otimes |1\rangle^{\tiny A}). \end{aligned} \end{equation}
For a mixed state $\rho^{\tiny S}$, suppose that $\rho^{\tiny S}=\sum_{i}p_{i}|\psi_{i}\rangle\langle\psi_{i}|^{\tiny S}$ is the optimal decomposition, i.e., \begin{equation} \begin{aligned}
C(\rho^{\tiny S})=\sum\limits_{i} p_{i}C(|\psi_{i}\rangle^{\tiny S}). \end{aligned} \end{equation}
Then $\sum_{i}p_{i}|\psi_{i}\rangle\langle\psi_{i}|^{\tiny S}\otimes |1\rangle\langle 1|^{\tiny A}$ is the optimal decomposition of $\rho^{\tiny S}\otimes |1\rangle\langle 1|^{\tiny A}$. That is, \begin{equation} \begin{aligned}
C(\rho^{\tiny S}\otimes |1\rangle\langle 1|^{\tiny A})
&=\sum\limits_{i} p_{i}C(|\psi_{i}\rangle\langle\psi_{i}|^{\tiny S}\otimes |1\rangle\langle 1|^{\tiny A})\\
&=\sum\limits_{i} p_{i}C(|\psi_{i}\rangle^{\tiny S})\\ &=C(\rho^{\tiny S}). \end{aligned} \end{equation}
Next, we prove that $C(\rho)\geqslant C_{\tiny E}(\rho)$ for any bipartite state $\rho$. For a pure state $|\psi\rangle \in \mathcal{H}_{M}\otimes \mathcal{H}_{N}$ with the decomposition \begin{equation} \begin{aligned}
|\psi\rangle=\sum\limits_{i=1}^{M}\sum\limits_{j=1}^{N}\psi_{ij}|ij\rangle, \end{aligned} \end{equation}
$C_{E}(|\psi\rangle)$ can be expressed as \cite{JPA38.6777} \begin{equation} \begin{aligned}
C_{E}(|\psi\rangle)=2\sqrt{\sum\limits_{i<j}^{M}\sum\limits_{k<l}^{N}|\psi_{ik}\psi_{jl}-\psi_{il}\psi_{jk}|^{2}}. \end{aligned} \end{equation} Obviously,
$$C(|\psi\rangle)\geqslant C_{E}(|\psi\rangle).$$
For an arbitrary decomposition of a mixed state $\rho=\sum\limits_{i}p_{i}|\psi_{i}\rangle\langle\psi_{i}|$, we have
$$\sum\limits_{i}p_{i}C(|\psi_{i}\rangle)\geqslant \sum\limits_{i}p_{i}C_{E}(|\psi_{i}\rangle),$$ which implies that
$$C(\rho)\geqslant C_{E}(\rho).$$ In particular,
$$C(\Lambda^{SA}[\rho^{S}\otimes |1\rangle\langle 1|^{A}])\geq C_{E}(\Lambda^{SA}[\rho^{S}\otimes |1\rangle\langle 1|^{A}]),$$ which finishes the proof.
\section{Detailed proof of Theorem 3} To prove this statement, we consider the unitary incoherent operation \begin{equation} \begin{aligned}
U=&\sum\limits_{i=1}^{d}\sum\limits_{j=1}^{d}|i\rangle\langle i|^{\tiny S}\otimes |i\oplus (j-1)\rangle\langle j|^{\tiny A}\\
&+\sum\limits_{i=1}^{d}\sum\limits_{j=d+1}^{d_{A}}|i\rangle\langle i|^{\tiny S}\otimes |j\rangle\langle j|^{\tiny A}. \end{aligned} \end{equation}
Here ``$\oplus$'' stands for addition modulo $d$, and $d$ and $d_A$ are the dimensions of the system and the ancilla, respectively. Note that for two qubits this operation is equivalent to the CNOT gate with $S$ as the control qubit and $A$ as the target qubit. It maps the state $\rho^{S}\otimes |1\rangle\langle 1|^{A}$ to \begin{equation} \begin{aligned}
\Lambda^{SA}[\rho^{S}\otimes |1\rangle\langle 1|^{A}]&=U(\rho^{S}\otimes |1\rangle\langle 1|^{A})U^\dag\\
&=\sum\limits_{i,j}\rho_{ij}|i\rangle\langle j|^{S}\otimes |i\rangle\langle j|^{A}, \end{aligned} \end{equation}
where $\rho_{ij}$ are the matrix elements of $\rho^{S}=\sum_{i,j}\rho_{ij}|i\rangle\langle j|^{S}$.
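For $d=d_A=2$ the mapping above can be checked numerically. The following sketch uses a 0-indexed basis, so the ancilla reference state $|1\rangle^{A}$ of the text becomes $|0\rangle$, and the state $\rho^S$ is a hypothetical example:

```python
import numpy as np

# For d = d_A = 2 the incoherent unitary reduces to CNOT with S as control.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

rho_S = np.array([[0.6, 0.2],
                  [0.2, 0.4]], dtype=complex)
ket0 = np.array([[1.0], [0.0]])
rho_SA = np.kron(rho_S, ket0 @ ket0.T)            # rho^S (x) |0><0|^A

out = CNOT @ rho_SA @ CNOT.conj().T

# Expected image: sum_ij rho_ij |i><j|^S (x) |i><j|^A
expected = np.zeros((4, 4), dtype=complex)
for i in range(2):
    for j in range(2):
        E = np.zeros((2, 2)); E[i, j] = 1.0       # |i><j|
        expected += rho_S[i, j] * np.kron(E, E)

print(np.allclose(out, expected))
```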
First, we prove that inequality (\ref{Th3}) is satisfied for pure states. For a pure state \begin{equation} \begin{aligned}
|\psi\rangle^{S}=\sum\limits_{i=1}^{d}a_{i}|i\rangle, \end{aligned} \end{equation} there is \begin{equation} \begin{aligned}
C(|\psi\rangle^{S})=2\sum\limits_{i<j}|a_{i}a_{j}|. \end{aligned} \end{equation}
The unitary incoherent operation $U$ maps $|\psi\rangle^{S}\otimes|1\rangle^A$ to \begin{equation} \begin{aligned}
|\psi\rangle^{SA}=\sum\limits_{i=1}^{d}a_{i}|ii\rangle. \end{aligned} \end{equation}
It follows that \begin{equation} \begin{aligned}
C_{E}(|\psi\rangle^{SA})=2\sqrt{\sum\limits_{i<j}|a_{i}a_{j}|^{2}}. \end{aligned} \end{equation} By Lagrange's identity \cite{Wiki}, it is easy to see that (\ref{Th3}) holds for pure states.
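The pure-state inequality can also be spot-checked numerically: $2\sqrt{\sum_{i<j}|a_ia_j|^2}$ dominates $\sqrt{2/(d(d-1))}\cdot 2\sum_{i<j}|a_ia_j|$, since the sum runs over $d(d-1)/2$ products. A minimal sketch with a hypothetical dimension $d=4$:

```python
import numpy as np
from itertools import combinations

# Spot check of the pure-state step of Theorem 3 for random amplitudes.
rng = np.random.default_rng(0)
d = 4
for _ in range(100):
    a = rng.normal(size=d) + 1j * rng.normal(size=d)
    a /= np.linalg.norm(a)                        # amplitudes of |psi>^S
    prods = np.array([abs(a[i] * a[j]) for i, j in combinations(range(d), 2)])
    C = 2 * prods.sum()                           # coherence concurrence
    CE = 2 * np.sqrt((prods ** 2).sum())          # concurrence of sum_i a_i|ii>
    assert CE >= np.sqrt(2 / (d * (d - 1))) * C - 1e-12
print("inequality verified")
```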
For an arbitrary decomposition of a mixed state $\rho^{S}$,
$$\rho^{S}=\sum\limits_{i}p_{i}|\psi_{i}^{S}\rangle\langle\psi_{i}^{S}|,$$ it can be easily seen that \begin{equation} \begin{aligned}
\sum\limits_{i}p_{i}C_{E}(\Lambda^{SA}[|\psi_{i}^{S}\rangle\langle\psi_{i}^{S}|\otimes |1\rangle\langle 1|^{A}])\\
\geqslant \sqrt{\frac{2}{d(d-1)}}\sum\limits_{i}p_{i}C(|\psi_{i}^{S}\rangle). \end{aligned} \end{equation} Hence (\ref{Th3}) is also satisfied for mixed states.
\end{document}
\begin{document}
\mainmatter \title{Efficient smoothers for all-at-once multigrid methods for Poisson and Stokes control problems}
\titlerunning{Smoothers for Poisson and Stokes control problems}
\author{Stefan Takacs \thanks{The research was funded by the Austrian Science Fund (FWF): J3362-N25.} } \authorrunning{Stefan Takacs}
\institute{Mathematical Institute, University of Oxford, United Kingdom\\ \path{stefan.takacs@numa.uni-linz.ac.at}\\ \url{http://www.numa.uni-linz.ac.at/~stefant/J3362/}}
\maketitle
\begin{abstract} In the present paper we concentrate on an important issue in constructing a good multigrid solver: the choice of an efficient smoother. We introduce all-at-once multigrid solvers for optimal control problems which show robust convergence in the grid size and in the regularization parameter. We refer to recent publications that guarantee such a convergence behavior. These publications do not pay much attention to the construction of the smoother and suggest using a normal equation smoother. We will see that with a Gauss Seidel like variant of this smoother, the overall multigrid solver is sped up by a factor of about two at no additional cost. The author gives a proof which indicates that the Gauss Seidel like variant of the smoother is also covered by the convergence theory. Numerical experiments suggest that the proposed method is competitive with Vanka type methods. \keywords{PDE-constrained optimization, All-at-once multigrid, Gauss Seidel} \end{abstract}
\section{Introduction}\label{sec:1}
In the present paper we discuss the construction of the all-at-once multigrid solvers for two model problems. The first model problem is a standard \emph{Poisson control problem}: Find a state $y \in H^1(\Omega)$ and a control $u\in L^2(\Omega)$ such that they minimize the cost functional \begin{equation} \nonumber
J(y,u) := \tfrac{1}{2} \|y-y_D\|_{L^2(\Omega)}^2 + \tfrac{\alpha}{2} \|u\|_{L^2(\Omega)}^2, \end{equation} subject to the elliptic boundary value problem (BVP) \begin{equation}\nonumber
-\Delta y + y = u \mbox{ in } \Omega
\qquad \mbox{and} \qquad
\tfrac{\partial y}{\partial n} = 0 \mbox{ on } \partial \Omega. \end{equation} The desired state $y_D$ and the regularization parameter $\alpha>0$ are assumed to be given. Here and in what follows, $\Omega\subseteq\mathbb{R}^2$ is a polygonal domain. We want to solve the finite element discretization of this problem using a fast linear solver which shows robust convergence behavior in the grid size and the regularization parameter. For solving this problem, we use the method of Lagrange multipliers, cf.~\cite{Lions:1971,Schoeberl:Simon:Zulehner:2010}. We obtain a linear system in the state $y$, the control $u$ and the Lagrange multiplier $\lambda$. In this linear system we eliminate the control, as was done in~\cite{Schoeberl:Simon:Zulehner:2010,Takacs:Zulehner:2012}. We discretize the resulting system using the Courant element and obtain a linear system: \begin{equation}\label{eq:mp2}
\underbrace{ \left(
\begin{array}{cc}
M_k & K_k \\
K_k & -\alpha^{-1} M_k
\end{array}
\right)}_{\displaystyle\mathcal{A}_k:=}
\underbrace{\left(
\begin{array}{c}
\underline{y}_k \\
\underline{\lambda}_k
\end{array}
\right)}_{\displaystyle\underline{x}_k:=}
=
\underbrace{\left(
\begin{array}{c}
\underline{\mathpzc{f}}_k \\
0
\end{array}
\right)}_{\displaystyle\underline{ f }_k:=}. \end{equation} Here, $M_k$ and $K_k$ are the standard mass and stiffness matrices, respectively. The control can be recovered from the Lagrange multiplier using the simple relation $\underline{u}_k = \alpha^{-1} \underline{\lambda}_k$, cf.~\cite{Schoeberl:Simon:Zulehner:2010}. In \cite{Schoeberl:Simon:Zulehner:2010,Zulehner:2010} it was shown that there are constants $\underline{C}>0$ and $\overline{C}>0$ (independent of the grid size $h_k$ and the choice of $\alpha$) such that the stability estimate \begin{equation}\label{eq:a1}
\|\mathcal{Q}_k^{-1/2} \mathcal{A}_k \mathcal{Q}_k^{-1/2}\|\le \overline{C}\qquad \mbox{and}\qquad
\|\mathcal{Q}_k^{1/2} \mathcal{A}_k^{-1} \mathcal{Q}_k^{1/2}\|\le \underline{C}^{-1} \end{equation} holds for the symmetric and positive definite matrix \begin{equation*}
\mathcal{Q}_k := \left(
\begin{array}{cc}
M_k + \alpha^{1/2} K_k \\
& \alpha^{-1} M_k + \alpha^{-1/2} K_k \\
\end{array}
\right). \end{equation*}
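For illustration only, a one-dimensional analogue of the saddle point system \eqref{eq:mp2} can be assembled and solved directly. The grid size, the value of $\alpha$ and the desired state $y_D=\cos(\pi x)$ below are hypothetical choices, and the paper's actual setting is two-dimensional:

```python
import numpy as np

# 1D sketch of system (eq:mp2): P1 mass matrix M and the matrix Kh of the
# bilinear form of -Delta + I with natural boundary conditions.
n, alpha = 32, 1e-4
h = 1.0 / n
e = np.ones(n + 1)
K = (2 * np.diag(e) - np.diag(e[:-1], 1) - np.diag(e[:-1], -1)) / h
K[0, 0] = K[-1, -1] = 1.0 / h
M = h * (4 * np.diag(e) + np.diag(e[:-1], 1) + np.diag(e[:-1], -1)) / 6
M[0, 0] = M[-1, -1] = 2 * h / 6
Kh = K + M                                   # discretization of -Delta + I

A = np.block([[M, Kh], [Kh, -M / alpha]])    # the saddle point matrix A_k
x = np.linspace(0.0, 1.0, n + 1)
f = M @ np.cos(np.pi * x)                    # right-hand side from y_D
sol = np.linalg.solve(A, np.concatenate([f, np.zeros(n + 1)]))
y, lam = sol[:n + 1], sol[n + 1:]
u = lam / alpha                              # recover the control
print(abs(y - np.cos(np.pi * x)).max())      # small for small alpha and h
```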
The second model problem is a standard \emph{Stokes control problem} (velocity tracking problem): Find a velocity field $v\in [H^1(\Omega)]^d$, a pressure distribution $p\in L^2(\Omega)$ and a control $u\in [L^2(\Omega)]^d$ such that \begin{equation}\nonumber
J(v,p,u) = \tfrac12 \|v-v_D\|_{L^2(\Omega)}^2 + \tfrac{\alpha}{2} \|u\|_{L^2(\Omega)}^2 \end{equation} is minimized subject to the Stokes equations \begin{equation}\nonumber
-\Delta v + \nabla p = u \mbox{ in } \Omega, \qquad
\nabla \cdot v = 0 \mbox{ in } \Omega, \qquad
v = 0
\mbox{ on } \partial \Omega. \end{equation} The regularization parameter~$\alpha>0$ and the desired state (desired velocity field) $v_D\in [L^2(\Omega)]^d$ are assumed to be given. To enforce uniqueness of the solution, we additionally require $\int_{\Omega} p \mbox{ d}x=0$.
Similarly to the above, we can set up the optimality system and eliminate the control, cf.~\cite{Zulehner:2010,Takacs:2013a}. The discretization can be done using the Taylor-Hood element. After these steps, we end up with the following linear system: \begin{equation}\label{eq:mp3}
\underbrace{\left(
\begin{array}{cccc}
M_k & & K_k & D_k^T \\
&0&D_k\\
K_k &D_k^T& -\alpha^{-1} M_k\\
D_k&&&0
\end{array}
\right)}_{\displaystyle\mathcal{A}_k:=}
\underbrace{\left(
\begin{array}{c}
\underline{v}_k \\
\underline{p}_k \\
\underline{\lambda}_k \\
\underline{\mu}_k \\
\end{array}
\right)}_{\displaystyle\underline{x}_k:=}
=
\underbrace{\left(
\begin{array}{c}
\underline{\mathpzc{f}}_k \\
0\\0\\0
\end{array}
\right)}_{\displaystyle\underline{ f }_k:=}. \end{equation} Here, $M_k$ and $K_k$ are standard mass and stiffness matrices and $D_k^T$ is the discretization of the gradient operator, see, e.g.,~\cite{Zulehner:2010,Takacs:2013a}. Again, we are interested in a fast solver which is robust in the regularization parameter and the grid size. As in the previous example, the control $\underline{u}_k$ can be recovered from the Lagrange multiplier: $\underline{u}_k=\alpha^{-1}\underline{\lambda}_k$. In~\cite{Zulehner:2010} it was shown that the stability estimate~\eqref{eq:a1} is satisfied for \begin{equation*}
\mathcal{Q}_k = \mbox{block-diag}\left(
W_k,\;
\alpha D_kW_k^{-1}D_k^T,\;
\alpha^{-1} W_k,\;
D_kW_k^{-1}D_k^T
\right), \end{equation*} where $W_k:=M_k + \alpha^{1/2} K_k$.
\section{An all-at-once multigrid method}\label{sec:3}
The linear systems~\eqref{eq:mp2} and \eqref{eq:mp3} shall be solved by a multigrid method, which reads as follows. Starting from an initial approximation~$\underline{x}^{(0)}_k$, one iterate of the multigrid method is given by the following two steps: \begin{itemize}
\item \emph{Smoothing procedure:} Compute
\begin{equation} \nonumber
\underline{x}^{(0,m)}_k := \underline{x}^{(0,m-1)}_k + \hat{\mathcal{A}}_k^{-1}
\left(\underline{ f}_k -\mathcal{A}_k\;\underline{x}^{(0,m-1)}_k\right)
\qquad \mbox{for } m=1,\ldots,\nu
\end{equation}
with $\underline{x}^{(0,0)}_k=\underline{x}^{(0)}_k$. The choice of
the smoother (or, in other words, of the matrix $\hat{\mathcal{A}}_k^{-1}$) will be discussed below.
\item \emph{Coarse-grid correction:}
\begin{itemize}
\item Compute the defect
$\underline{ f}_k -\mathcal{A}_k\;\underline{x}^{(0,\nu)}_k$
and restrict it to grid level $k-1$ using
a restriction matrix $I_k^{k-1}$:\;
$
\underline{r}_{k-1}^{(1)} := I_k^{k-1} \left(\underline{ f}_k -\mathcal{A}_k
\;\underline{x}^{(0,\nu)}_k\right).
$
\item Solve the following coarse-grid problem approximately:
\begin{equation}\label{eq:coarse:grid:problem}
\mathcal{A}_{k-1} \,\underline{p}_{k-1}^{(1)} =\underline{r}_{k-1}^{(1)}
\end{equation}
\item Prolongate $\underline{p}_{k-1}^{(1)}$ to the
grid level $k$ using a prolongation
matrix $I^k_{k-1}$ and add
the result to the previous iterate:
$
\underline{x}_{k}^{(1)} := \underline{x}^{(0,\nu)}_k +
I_{k-1}^k \, \underline{p}_{k-1}^{(1)}.
$
\end{itemize} \end{itemize} As the spaces are nested, the intergrid-transfer matrices can be chosen in a canonical way: $I_{k-1}^k$ is the canonical embedding and the restriction $I_k^{k-1}$ is its (properly scaled) transpose. If the problem~\eqref{eq:coarse:grid:problem} is solved exactly, we obtain the two-grid method. In practice, the problem~\eqref{eq:coarse:grid:problem} is solved approximately by applying one step (V-cycle) or two steps (W-cycle) of the multigrid method, recursively. Only on the coarsest grid level is~\eqref{eq:coarse:grid:problem} solved exactly.
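The cycle described above can be sketched generically. The concrete smoother (damped Jacobi) and the 1D Laplace test matrix below are illustrative choices only, not the smoothers discussed in this paper:

```python
import numpy as np

# Generic two-grid cycle: pre-smoothing, restricted defect, exact coarse
# solve, coarse-grid correction.
def jacobi(A, f, x, nu, omega=0.5):
    D = np.diag(A)
    for _ in range(nu):
        x = x + omega * (f - A @ x) / D      # damped Jacobi smoothing steps
    return x

def two_grid(A_f, A_c, P, smooth, f, x, nu=2):
    x = smooth(A_f, f, x, nu)                # smoothing procedure
    r_c = P.T @ (f - A_f @ x)                # restrict the defect
    p_c = np.linalg.solve(A_c, r_c)          # exact coarse solve (two-grid)
    return x + P @ p_c                       # coarse-grid correction

n, nc = 15, 7                                # nested 1D grids
A_f = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
P = np.zeros((n, nc))                        # linear interpolation
for j in range(nc):
    P[2 * j, j], P[2 * j + 1, j], P[2 * j + 2, j] = 0.5, 1.0, 0.5
A_c = P.T @ A_f @ P                          # Galerkin coarse-grid matrix

f = np.ones(n)
x = np.zeros(n)
for _ in range(20):
    x = two_grid(A_f, A_c, P, jacobi, f, x)
print(np.linalg.norm(f - A_f @ x))
```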
The only part of the multigrid algorithm that has not been specified yet is the smoother. For the choice of the smoother, we make use of the convergence theory. We follow Hackbusch's approach of splitting the analysis into a smoothing property and an approximation property: \begin{itemize}
\item \emph{Smoothing property:}
\begin{equation} \label{eq:smp}
\sup_{\underline{\tilde{x}}_k\in X_k}
\frac{\left(\mathcal{A}_k(\underline{x}_k^{(0,\nu)}-\underline{x}_k^*),
\underline{\tilde{x}}_k\right)_{\ell^2}}{\| \underline{\tilde{x}}_k\|_{\mathcal{L}_k}}
\le \eta(\nu) \| \underline{x}_k^{(0)}-\underline{x}_k^*\|_{\mathcal{L}_k}
\end{equation}
should hold for some function $\eta(\nu)$
with $\lim_{\nu\rightarrow\infty}\eta(\nu)= 0$. Here and in what follows,
$\underline{x}_k^* := \mathcal{A}_k^{-1} \underline{ f }_k$ is the exact solution, $\|\cdot\|_{\mathcal{L}_k}:=
(\cdot,\cdot)_{\mathcal{L}_k}^{1/2} := (\mathcal{L}_k \cdot,\cdot)_{\ell^2}^{1/2}$ for some
symmetric positive definite matrix $\mathcal{L}_k$ and
$(\cdot,\cdot)_{\ell^2}$ is the standard Euclidean scalar product.
\item \emph{Approximation property:}
\begin{equation} \nonumber
\| \underline{x}_k^{(1)}-\underline{x}_k^*\|_{\mathcal{L}_k}\le
C_A \sup_{\underline{\tilde{x}}_k\in X_k}
\frac{\left(\mathcal{A}_k(\underline{x}_k^{(0,\nu)}-\underline{x}_k^*),
\underline{\tilde{x}}_k\right)_{\ell^2}}{\| \underline{\tilde{x}}_k\|_{\mathcal{L}_k}}
\end{equation}
should hold for some constant $C_A>0$. \end{itemize} If we combine both conditions, we see
that the two-grid method converges in the norm $\|\cdot\|_{\mathcal{L}_k}$ for $\nu$ large enough. The convergence of the W-cycle multigrid method can be shown under mild assumptions, see, e.g.,~\cite{Hackbusch:1985}.
For the smoothing analysis, it is convenient to rewrite the smoothing property in pure matrix notation: \eqref{eq:smp} is equivalent to \begin{equation} \label{eq:smp2}
\|\mathcal{L}_k^{-1/2}\mathcal{A}_k(I-\hat{\mathcal{A}}_k^{-1}\mathcal{A}_k)^{\nu}\mathcal{L}_k^{-1/2}\| \le \eta(\nu). \end{equation}
For the Poisson control problem, it was shown in \cite{Schoeberl:Simon:Zulehner:2010}, that the approximation property is satisfied for the following choice of the matrix $\mathcal{L}_k$ (note that this matrix represents the norm $\|\cdot\|_{X^-}$ used in the mentioned paper) \begin{equation*}
\mathcal{L}_k = \left(
\begin{array}{cc}
\mbox{\textnormal{diag}}(M_k + \alpha^{1/2} K_k) \\
& \mbox{\textnormal{diag}}(\alpha^{-1} M_k + \alpha^{-1/2} K_k) \\
\end{array}
\right), \end{equation*} i.e., $\mathcal{L}_k = \mbox{\textnormal{diag}}(\mathcal{Q}_k)$. Here and in what follows, $\mbox{\textnormal{diag}}(M)$ is the diagonal matrix containing the diagonal of a matrix $M$. For the Stokes control problem it was shown in \cite{Takacs:2013a}, that the approximation property is satisfied for the following choice of $\mathcal{L}_k$: \begin{equation*}
\mathcal{L} _k = \left(
\begin{array}{cccc}
\hat{W}_k \\ & \hat{P}_k \\ && \alpha^{-1} \hat{W}_k \\ &&&\alpha^{-1} \hat{P}_k
\end{array}
\right), \end{equation*} where $\hat{W}_k := \mbox{\textnormal{diag}}(M_k+\alpha^{1/2} K_k)$ and $\hat{P}_k := \alpha \;\mbox{\textnormal{diag}}( D_k \hat{W}_k^{-1} D_k^T ).$
Still, we have not specified the smoother; this can now be done using the convergence theory. We have seen for which choices of $\mathcal{L}_k$ the approximation property is satisfied. We are interested in a smoother such that the smoothing property is satisfied for the same choice of $\mathcal{L}_k$.
In~\cite{Takacs:Zulehner:2012,Takacs:2013a} a \emph{normal equation smoother} was proposed. This approach is applicable to a quite general class of problems, cf.~\cite{Brenner:1996} and others. In our notation, the normal equation smoother reads as follows: \begin{equation}\nonumber
\underline{x}^{(0,m)}_k := \underline{x}^{(0,m-1)}_k + \tau
\underbrace{\mathcal{L}_k^{-1} \mathcal{A}_k^T \mathcal{L}_k^{-1}}_{\displaystyle \hat{\mathcal{A}}_k^{-1}:=}
\left(\underline{ f}_k -\mathcal{A}_k \;\underline{x}^{(0,m-1)}_k\right)
\quad \mbox{for } m=1,\ldots,\nu. \end{equation} Here, a fixed~$\tau>0$ has to be chosen such that the spectral radius~$\rho(\tau \hat{\mathcal{A}}_k^{-1}\mathcal{A}_k)$ is bounded away from~$2$ on all grid levels~$k$ and for all choices of the parameters. It was shown that it is possible to find such a uniform $\tau$ for the Poisson control problem, e.g., in \cite{Takacs:Zulehner:2012} and for the Stokes control problem, e.g., in \cite{Takacs:2013a}. For the normal equation smoother, the smoothing property can be shown using a simple eigenvalue analysis, cf.~\cite{Brenner:1996}. Numerical experiments show that the normal equation smoother works rather well for the mentioned model problems. However, there are smoothers such that the overall multigrid method converges much faster. Note that the normal equation smoother is basically a Richardson iteration scheme, applied to the normal equation. It is well known for elliptic problems that Gauss Seidel iteration schemes are typically much better smoothers than Richardson iteration schemes. In the context of saddle point problems, the idea of Gauss Seidel smoothers has been applied, e.g., in the context of collective smoothers, see below. However, in the context of normal equation smoothers the idea of Gauss Seidel smoothers has not gained much attention. The setup of such an approach is straightforward: in compact notation the approach, which we call the \emph{least squares Gauss Seidel} (LSGS) approach, reads as follows: \begin{equation}\nonumber
\underline{x}^{(0,m)}_k := \underline{x}^{(0,m-1)}_k +
\underbrace{ \mbox{\textnormal{trig}}(\mathcal{N}_k)^{-1} \mathcal{A}_k^T \mathcal{L}_k^{-1}}_{\displaystyle\hat{\mathcal{A}}_k^{-1}:=}
\left(\underline{ f}_k -\mathcal{A}_k \;\underline{x}^{(0,m-1)}_k\right)
\;\; \mbox{for } m=1,\ldots,\nu, \end{equation} where $\mathcal{N} _k:=\mathcal{A}_k^T\mathcal{L}_k^{-1} \mathcal{A}_k$ and $\mbox{\textnormal{trig}}(M)$ is a matrix whose coefficients coincide with the coefficients of $M$ on the diagonal and the left-lower triangular part and vanish elsewhere. The author provides a possible realization of that approach as Algorithm~\ref{alg2} to convince the reader that the computational complexity of the LSGS approach is equal to the computational complexity of the normal equation smoother, where a possible realization is given as Algorithm~\ref{alg1}.
We will see below that the LSGS approach works very well in the numerical experiments. However, there is no proof of the smoothing property known to the author. This is due to the fact that the matrix $\hat{\mathcal{A}}_k$ is not symmetric. One possibility to overcome this difficulty is to consider the symmetric version (symmetric least squares Gauss Seidel approach, sLSGS approach). This is analogous to the case of elliptic problems: For elliptic problems the smoothing property for the symmetric Gauss Seidel iteration can be shown for general cases but for the standard Gauss Seidel iteration the analysis is restricted to special cases, cf. Section~6.2.4 in~\cite{Hackbusch:1985}.
\begin{algorithm}[t]
\textbf{Given:} Iterate $(\texttt{x}_i)_{i=1}^{N}=\underline{x}^{(0,m-1)}$ and corresp. residual $(\texttt{r}_i)_{i=1}^{N}=\underline{f} - \mathcal{A} \underline{x}^{(0,m-1)}$;\\
\textbf{Result:} Iterate $(\texttt{x}_i)_{i=1}^{N}=\underline{x}^{(0,m)}$ and corresp. residual $(\texttt{r}_i)_{i=1}^{N}=\underline{f} - \mathcal{A} \underline{x}^{(0,m)}$;\\
\For{$i=1,\ldots,N$}{
$\texttt{q} := 0$;\\
\mbox{\textnormal{\textbf{for all $j$ such that $\mathcal{A}_{i,j}\not=0$}}} \mbox{\textnormal{\textbf{do}}}
$\texttt{q} := \texttt{q} + \mathcal{A}_{i,j} / \mathcal{L}_{j,j} * \texttt{r}_j$;\\
$\texttt{p}_i := \tau * \texttt{q} / \mathcal{L}_{i,i}$;
}
\For{$i=1,\ldots,N$}{
$\texttt{x}_i := \texttt{x}_i + \texttt{p}_i$;\\
\mbox{\textnormal{\textbf{for all $j$ such that $\mathcal{A}_{j,i}\not=0$}}} \mbox{\textnormal{\textbf{do}}}
$\texttt{r}_j := \texttt{r}_j - \mathcal{A}_{j,i} * \texttt{p}_i $;
}
\caption{Normal equation iteration scheme}\label{alg1} \end{algorithm} \begin{algorithm}[t]
\textbf{Given:} Iterate $(\texttt{x}_i)_{i=1}^{N}=\underline{x}^{(0,m-1)}$ and corresp. residual $(\texttt{r}_i)_{i=1}^{N}=\underline{f} - \mathcal{A} \underline{x}^{(0,m-1)}$;\\
\textbf{Result:} Iterate $(\texttt{x}_i)_{i=1}^{N}=\underline{x}^{(0,m)}$ and corresp. residual $(\texttt{r}_i)_{i=1}^{N}=\underline{f} - \mathcal{A} \underline{x}^{(0,m)}$;\\
\textbf{Prepare once:} $\mathcal{N} _{i,i}:=\sum_{j=1}^{N} \mathcal{A}_{i,j}^2 / \mathcal{L}_{j,j}$ for all $i=1,\ldots,N$;\\
\For{$i=1,\ldots,N$}{
$\texttt{q} := 0$;\\
\mbox{\textnormal{\textbf{for all $j$ such that $\mathcal{A}_{i,j}\not=0$}}} \mbox{\textnormal{\textbf{do}}}
$\texttt{q} := \texttt{q} + \mathcal{A}_{i,j} / \mathcal{L}_{j,j} * \texttt{r}_j$;\\
$\texttt{p} := \texttt{q} / \mathcal{N} _{i,i}$;\\
$\texttt{x}_i := \texttt{x}_i + \texttt{p}$;\\
\mbox{\textnormal{\textbf{for all $j$ such that $\mathcal{A}_{j,i}\not=0$}}} \mbox{\textnormal{\textbf{do}}}
$\texttt{r}_j := \texttt{r}_j - \mathcal{A}_{j,i} * \texttt{p} $;
}
\caption{LSGS iteration scheme}\label{alg2} \end{algorithm}
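In matrix form, one step of each smoother can be sketched as follows; Algorithms~\ref{alg1} and~\ref{alg2} realize the same iterations entrywise without forming $\mathcal{N}_k$ explicitly. The test matrix, the diagonal scaling and $\tau=1$ below are illustrative choices for a small SPD example, not the paper's saddle point systems:

```python
import numpy as np

def normal_eq_step(A, L, f, x, tau):
    # x + tau * L^{-1} A^T L^{-1} (f - A x), with the diagonal L as a vector
    return x + tau * (A.T @ ((f - A @ x) / L)) / L

def lsgs_step(A, L, f, x):
    # x + trig(N)^{-1} A^T L^{-1} (f - A x), where N = A^T L^{-1} A
    N = A.T @ (A / L[:, None])
    return x + np.linalg.solve(np.tril(N), A.T @ ((f - A @ x) / L))

n = 4
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # SPD test matrix
L = np.abs(A).sum(axis=1)                              # a diagonal scaling
f = np.ones(n)
x1, x2 = np.zeros(n), np.zeros(n)
for _ in range(2000):
    x1 = normal_eq_step(A, L, f, x1, tau=1.0)
    x2 = lsgs_step(A, L, f, x2)
print(np.linalg.norm(f - A @ x1), np.linalg.norm(f - A @ x2))
```

On this simple example both iterations converge; the LSGS variant typically reduces the residual faster per step, in line with the speedup reported for the multigrid smoother.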
One step of the sLSGS iteration consists of one step of the LSGS iteration, followed by one step of the LSGS iteration with reversed order of the variables. (So the computational complexity of one step of the sLSGS iteration is equal to the computational complexity of two steps of the standard LSGS iteration.) One step of the sLSGS iteration reads as follows in compact notation: \begin{align}\nonumber
&\underline{x}^{(0,m)}_k := \underline{x}^{(0,m-1)}_k +
\hat{\mathcal{N}}_k^{-1} \mathcal{A}_k^T \mathcal{L}_k^{-1}
\left(\underline{ f}_k -\mathcal{A}_k \;\underline{x}^{(0,m-1)}_k\right)
\qquad \mbox{for } m=1,\ldots,\nu,\\
&\mbox{where $\hat{\mathcal{N}}_k:=\mbox{\textnormal{trig}}(\mathcal{N}_k)\; \mbox{\textnormal{diag}}(\mathcal{N}_k)^{-1}\; \mbox{\textnormal{trig}}(\mathcal{N}_k)^T$.}\label{eqmcNhat} \end{align} For our needs, the following convergence lemma is sufficient. \begin{lemma}\label{lem}
Assume that $\mathcal{A}_k$ is sparse, \eqref{eq:a1} is satisfied and let $\mathcal{L}_k$ be a positive definite
diagonal matrix such that
\begin{equation}\label{eq:lem}
\|\mathcal{Q}_k^{1/2}\underline{x}_k\|\le \|\mathcal{L}_k^{1/2}\underline{x}_k\|\quad\mbox{for all } \underline{x}_k.
\end{equation}
Then the sLSGS approach satisfies the smoothing property~\eqref{eq:smp2}, i.e.,
\begin{equation*}
\|\mathcal{L}_k^{-1/2} \mathcal{A}_k (I-\hat{\mathcal{N}}_k^{-1} \mathcal{N}_k)^{\nu} \mathcal{L}_k^{-1/2}\| \le \frac{2^{-1/2}\;\overline{C}\;\mbox{\textnormal{nnz}}(\mathcal{A}_k)^{5/2} }{\sqrt{\nu}},
\end{equation*}
where $\mbox{\textnormal{nnz}}(M)$ is the maximum number of non-zero entries per row of~$M$. \end{lemma} Note that \eqref{eq:lem} is a standard inverse inequality, which is satisfied for both model problems, cf.~\cite{Schoeberl:Simon:Zulehner:2010,Takacs:Zulehner:2012,Takacs:2013a}. Note moreover that this assumption also has to be satisfied to show the smoothing property for the normal equation smoother, cf.~\cite{Takacs:Zulehner:2012,Takacs:2013a}.
{\em Proof of Lemma~\ref{lem}.}
The combination of \eqref{eq:a1} and \eqref{eq:lem} yields $\|\mathcal{L}_k^{-1/2}\mathcal{A}_k \mathcal{L}_k^{-1/2}\|\le \overline{C}$.
Prop.~6.2.27 in~\cite{Hackbusch:1985} states that for any symmetric positive definite matrix $\mathcal{N}_k$
\begin{equation}\label{eq:hb}
\|\hat{\mathcal{N}}_k^{-1/2} \mathcal{N}_k (I-\hat{\mathcal{N}}_k^{-1} \mathcal{N}_k)^{\nu} \hat{\mathcal{N}}_k^{-1/2}\| \le \nu^{-1}
\end{equation}
holds, where $\hat{\mathcal{N}}_k$ is as in~\eqref{eqmcNhat}. Using $\mathcal{D}_k:= \mbox{diag}(\mathcal{N}_k)$, we obtain
\begin{align*}
&\|\mathcal{L}_k^{-1/2} \hat{\mathcal{N}}_k^{1/2}\|^2 = \rho( \mathcal{L}_k^{-1/2}\hat{\mathcal{N}}_k\mathcal{L}_k^{-1/2})
\le \|\mathcal{L}_k^{-1/2}\mbox{\textnormal{trig}}(\mathcal{N}_k) \mathcal{D}_k^{-1/2}\|^2\\
&\quad
\le\|\mathcal{L}_k^{-1/2}\mathcal{D}_k^{1/2}\|^2 \|\mathcal{D}_k^{-1/2}\mbox{\textnormal{trig}}(\mathcal{N}_k) \mathcal{D}_k^{-1/2}\|^2.
\end{align*}
Let $\mathcal{A}_k=(\mathcal{A}_{i,j})_{i,j=1}^N$, $\mathcal{N}_k=(\mathcal{N}_{i,j})_{i,j=1}^N$,
$\mathcal{L}_k=(\mathcal{L}_{i,j})_{i,j=1}^N$ and $\psi(i):=\{j\in\mathbb{N}:\mathcal{N}_{i,j}\not=0\}$. We obtain using
Gerschgorin's theorem, the fact that the infinity norm is monotone in the matrix entries,
and the symmetry of $\mathcal{N}_k$ and $\mathcal{A}_k$ together with the Cauchy-Schwarz inequality:
\begin{align}
&\|\mathcal{D}_k^{-1/2}\mbox{\textnormal{trig}}(\mathcal{N}_k) \mathcal{D}_k^{-1/2}\| \nonumber\\
& \le\|\mathcal{D}_k^{-1/2}\mbox{\textnormal{trig}}(\mathcal{N}_k) \mathcal{D}_k^{-1/2}\|_{\infty}^{1/2}
\|\mathcal{D}_k^{-1/2}\mbox{\textnormal{trig}}(\mathcal{N}_k)^T \mathcal{D}_k^{-1/2}\|_{\infty}^{1/2}
\le \|\mathcal{D}_k^{-1/2}\mathcal{N}_k\mathcal{D}_k^{-1/2}\|_{\infty} \nonumber\\
& = \max_{i=1,\ldots, N} \sum_{l\in\psi(i)}
\left(\sum_{n=1}^N \frac{\mathcal{A}_{i,n}^2}{\mathcal{L}_{n,n}} \right)^{-1/2}
\left|\sum_{j=1}^N \frac{\mathcal{A}_{i,j} \mathcal{A}_{j,l}}{\mathcal{L}_{j,j}}\right|
\left(\sum_{n=1}^N \frac{\mathcal{A}_{l,n}^2}{\mathcal{L}_{n,n}} \right)^{-1/2}\nonumber\\
& \le \max_{i=1,\ldots, N} \sum_{l\in\psi(i)} 1 = \mbox{\textnormal{nnz}}(\mathcal{N}_k) \le \mbox{\textnormal{nnz}}(\mathcal{A}_k)^2.\label{eq:xx1}
\end{align}
Further, we obtain
\begin{align}
&\|\mathcal{L}_k^{-1/2}\mathcal{D}_k^{1/2}\|^2 =\|\mathcal{L}_k^{-1/2}\mathcal{D}_k^{1/2}\|_{\infty}^2 = \|\mathcal{L}_k^{-1/2}\mathcal{D}_k\mathcal{L}_k^{-1/2}\|_{\infty}
= \max_{i=1,\ldots,N} \sum_{j=1}^N \frac{\mathcal{A}_{i,j}^2}{\mathcal{L}_{i,i}\mathcal{L}_{j,j}} \nonumber\\
&\quad\le \mbox{\textnormal{nnz}}(\mathcal{A}_k) \max_{i,j=1,\ldots,N}\frac{\mathcal{A}_{i,j}^2}{\mathcal{L}_{i,i}\mathcal{L}_{j,j}}
\le \mbox{\textnormal{nnz}}(\mathcal{A}_k) \|\mathcal{L}_k^{-1/2}\mathcal{A}_k\mathcal{L}_k^{-1/2}\|^2
\le \mbox{\textnormal{nnz}}(\mathcal{A}_k) \;\overline{C}^2.\label{eq:xx2}
\end{align}
By combining \eqref{eq:hb}, \eqref{eq:xx1} and \eqref{eq:xx2}, we obtain
\begin{align*}
&\|\mathcal{L}_k^{-1/2} \mathcal{A}_k (I-\hat{\mathcal{N}}_k^{-1} \mathcal{N}_k)^{\nu} \mathcal{L}_k^{-1/2}\|^2 \\
& \quad \le \|\mathcal{L}_k^{-1/2}(I- \mathcal{N}_k\hat{\mathcal{N}}_k^{-1})^{\nu}\mathcal{A}_k\mathcal{L}_k^{-1} \mathcal{A}_k (I-\hat{\mathcal{N}}_k^{-1} \mathcal{N}_k)^{\nu} \mathcal{L}_k^{-1/2}\|\\
& \quad = \|\mathcal{L}_k^{-1/2}\mathcal{N}_k(I- \hat{\mathcal{N}}_k^{-1}\mathcal{N}_k)^{2\nu} \mathcal{L}_k^{-1/2}\| \le \frac{\overline{C}^2 \mbox{\textnormal{nnz}}(\mathcal{A}_k)^5}{2\nu},
\end{align*}
which finishes the proof. \qed
We want to compare the numerical behavior of the LSGS approach with that of a standard smoother. One class of standard smoothers for saddle point problems is the class of \emph{Vanka type smoothers}, which was originally introduced for Stokes problems, cf.~\cite{Vanka:1986}. Such smoothers have also gained interest for optimal control problems, see, e.g.,~\cite{Trottenberg:2001,Borzi:Kunisch:Kwak:2003,Takacs:Zulehner:2011}.
The idea of Vanka type smoothers is to compute updates in subspaces directly for the whole saddle point problem and to combine these updates in an additive or a multiplicative way to obtain the next iterate. Here, the variables are not grouped based on the block structure of $\mathcal{A}_k$; instead, the grouping is based on the location of the corresponding degrees of freedom in the domain $\Omega$. The simplest such idea for the Poisson control problem is to do the grouping point-wise, which leads to the idea of \emph{point smoothing}. Here, we group for each node $\delta_i$ of the discretization (each degree of freedom of the Courant element) the value $y_i$ of the state and the value $\lambda_i$ of the Lagrange multiplier and compute an update in the corresponding subspace. The multiplicative variant of such a smoother is a \emph{collective Gauss Seidel} (CGS) smoother: \begin{align*}
\underline{x}^{(0,m,i)}_k &:= \underline{x}^{(0,m,i-1)}_k +
\mathcal{P}_k^{(i)} \left(\left.\mathcal{P}_k^{(i)}\right.^T \mathcal{A}_k\mathcal{P}_k^{(i)}\right)^{-1} \left.\mathcal{P}_k^{(i)}\right.^T
\left(\underline{f}_k -\mathcal{A}_k \;\underline{x}^{(0,m,i-1)}_k\right), \end{align*} where $\underline{x}^{(0,m,0)}_k:=\underline{x}^{(0,m-1)}_k$ and $\underline{x}^{(0,m)}_k :=\underline{x}^{(0,m,N_k)}_k$. For each $i=1,\ldots, N_k$, the matrix $\mathcal{P}_k^{(i)}\in \mathbb{R}^{2 N_k\times 2}$ takes the value $1$ on the positions $(i,1)$ and $(i+N_k,2)$ and the value $0$ elsewhere. For the Poisson control problem, we obtain \begin{equation*}
\left.\mathcal{P}_k^{(i)}\right.^T \mathcal{A}_k\mathcal{P}_k^{(i)} = \left(
\begin{array}{cc}
M_{i,i} & K_{i,i} \\ K_{i,i} & - \alpha^{-1} M_{i,i}
\end{array}
\right), \end{equation*} where $M_{i,i}$ and $K_{i,i}$ are the entries of the matrices $M_k$ and $K_k$.
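The CGS sweep described above can be sketched in a few lines. The following is a minimal dense-matrix illustration (not from the paper), assuming the Poisson control system has the global block form $\mathcal{A}_k = \left(\begin{smallmatrix} M & K\\ K & -\alpha^{-1}M\end{smallmatrix}\right)$ suggested by the local $2\times 2$ blocks displayed above:

```python
import numpy as np

def cgs_sweep(M, K, alpha, f, x):
    """One multiplicative collective Gauss-Seidel (CGS) sweep for the
    Poisson control saddle-point system A x = f with
    A = [[M, K], [K, -1/alpha * M]] acting on x = (y, lam).
    At each node i the state y_i and multiplier lam_i are updated
    jointly by solving the local 2x2 system P_i^T A P_i.
    A sketch with dense numpy matrices, not an efficient implementation."""
    N = M.shape[0]
    y, lam = x[:N].copy(), x[N:].copy()
    for i in range(N):
        # current residual restricted to the (y_i, lam_i) pair
        r1 = f[i]     - (M[i, :] @ y + K[i, :] @ lam)
        r2 = f[N + i] - (K[i, :] @ y - (1.0 / alpha) * (M[i, :] @ lam))
        # local 2x2 saddle-point block, as displayed in the text
        A_loc = np.array([[M[i, i],  K[i, i]],
                          [K[i, i], -M[i, i] / alpha]])
        dy, dlam = np.linalg.solve(A_loc, np.array([r1, r2]))
        y[i] += dy
        lam[i] += dlam
    return np.concatenate([y, lam])
```

Because each local solve zeroes the restricted residual at its node, the residual at the last-visited node vanishes exactly after a sweep, while earlier nodes are perturbed again by later updates, which is the usual Gauss-Seidel behavior.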
For the Stokes control problem, it is not reasonable to use exactly the same approach. This is basically due to the fact that the degrees of freedom for $v$ and $\lambda$ are not located at the same positions as the degrees of freedom for $p$ and $\mu$. However, we can introduce an approach based on patches: for each vertex of the triangulation, we consider the subspace that consists of the degrees of freedom located at the vertex itself and the degrees of freedom located on all edges which have one end at the chosen vertex, cf. Fig.~\ref{fig1}. \begin{figure}
\caption{Patches for the Vanka-type smoother applied to a Taylor Hood discretization.
The dots are the degrees of freedom of $v$ and $\lambda$,
the rectangles are the degrees of freedom of $p$ and $\mu$}
\label{fig1}
\end{figure} Note that here the subspaces are much larger than the subspaces chosen in the case of the CGS approach for the Poisson control problem (where the dimension was just $2$). This increases the computational cost of applying the method significantly. For Vanka type smoothers only a few convergence results are known, cf.~\cite{Borzi:Kunisch:Kwak:2003} for a Fourier analysis and an analysis based on a compactness argument, and \cite{Takacs:Zulehner:2011} for a proof based on Hackbusch's splitting of the analysis into smoothing property and approximation property, which shows convergence in the case of a collective Richardson smoother.
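As an illustration of the patch construction for the patch-based Vanka smoother described above, the index sets might be assembled as follows (a sketch under the assumption that degrees of freedom are numbered with vertices first and edge midpoints after, which is not fixed by the text):

```python
def vertex_patches(num_vertices, edges):
    """Build the index sets for a patch-based Vanka smoother: for each
    vertex v, the patch collects v itself and all edges with one end
    at v (for Taylor-Hood, velocity/multiplier dofs sit on vertices
    and edge midpoints).  Edge j is given as a pair (a, b) of vertex
    indices; its dof index is assumed to be num_vertices + j."""
    patches = [[v] for v in range(num_vertices)]
    for j, (a, b) in enumerate(edges):
        patches[a].append(num_vertices + j)
        patches[b].append(num_vertices + j)
    return patches
```

For a single triangle with three edges, each patch contains the vertex and its two incident edges, so already for this toy mesh the local problems are larger than the $2\times 2$ blocks of the CGS approach.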
\section{Numerical results}\label{sec:5}
In this section we give numerical results to illustrate quantitatively the convergence behavior of the proposed methods. The number of iterations was measured as follows: we start with a random initial guess and iterate until the relative error in the norm ${\|\cdot\|_{\mathcal{L}_k}}$ has been reduced by a factor of $10^{-6}$. Without loss of generality, the right-hand side was chosen to be $0$. For both model problems, the normal equation smoother, the LSGS smoother, the sLSGS smoother and a Vanka type smoother have been applied. For all smoothers, $2$ pre- and $2$ post-smoothing steps have been applied, except for the sLSGS smoother, where just $1$ pre- and $1$ post-smoothing step has been applied. This is due to the fact that one step of the symmetric version has basically the same computational cost as two steps of the standard version. The normal equation smoother was damped with $\tau=0.4$ for the Poisson control problem and $\tau=0.35$ for the Stokes control problem, cf.~\cite{Takacs:Zulehner:2012,Takacs:2013a}. For the Gauss Seidel-like approaches, damping was not used.
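The iteration-counting procedure described above can be sketched as follows; `step` stands for one multigrid cycle (an illustrative name), and the measurement exploits the fact that with zero right-hand side the exact solution is $0$, so the error equals the iterate itself:

```python
import numpy as np

def count_iterations(step, L, x0, tol=1e-6, max_iter=500):
    """Iterate x <- step(x) from an initial guess x0 until the error in
    the L-norm  ||x||_L = sqrt(x^T L x)  has been reduced by the factor
    tol, mirroring the measurement procedure in the text (right-hand
    side 0, so the error is the iterate itself)."""
    norm_L = lambda v: np.sqrt(v @ (L @ v))
    e0 = norm_L(x0)
    x = x0
    for it in range(1, max_iter + 1):
        x = step(x)
        if norm_L(x) <= tol * e0:
            return it
    return max_iter
```

For instance, a method that contracts the error by a factor $0.5$ per iteration needs $20$ iterations to reach the $10^{-6}$ reduction, since $0.5^{20} \approx 9.5\cdot 10^{-7}$.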
In Table~\ref{tab:1}, we give the results for the standard Poisson control problem. Here, we see that all smoothers lead to convergence rates that are well bounded for a wide range of $h_k$ and $\alpha$. Compared to the normal equation smoother, the LSGS smoother leads to a speedup by a factor of about two without any additional work. The symmetric version (sLSGS) is a bit slower than the LSGS method. For the first model problem, the (popular) CGS method is significantly faster. However, for this method no convergence theory is known. \begin{table} \begin{center}
\begin{tabular}{p{1.cm}p{.6cm}p{.6cm}p{1.3cm}p{.6cm}p{.6cm}p{1.3cm}p{.6cm}p{.6cm}p{1.3cm}p{.6cm}p{.6cm}p{.8cm}}
\hline\noalign{\smallskip}
& \multicolumn{3}{l}{Normal equation} &
\multicolumn{3}{l}{LSGS} &
\multicolumn{3}{l}{sLSGS} &
\multicolumn{3}{l}{CGS} \\
\noalign{\smallskip}\hline\noalign{\smallskip}
$\alpha=$&$10^0$&$10^{-6}$&$10^{-12}$&$10^0$&$10^{-6}$&$10^{-12}$&$10^0$&$10^{-6}$&$10^{-12}$&$10^0$&$10^{-6}$&$10^{-12}$\\
\noalign{\smallskip}\hline\noalign{\smallskip}
$k=5$&26&31&28&11& 9& 7&14&12&14& 5& 5& 3\\
$k=6$&27&28&29&11&11& 7&14&14&13& 5& 5& 3\\
$k=7$&27&28&31&11&11& 6&14&14&12& 5& 5& 3\\
$k=8$&27&27&25&11&11& 3&14&14& 7& 5& 5& 4\\
\noalign{\smallskip}\hline\noalign{\smallskip}
\end{tabular}
\caption{Number of iterations for the \emph{Poisson control model problem}}
\label{tab:1}
\end{center} \end{table}
In Table~\ref{tab:3}, we give the convergence results for the Stokes control problem. Also here we observe that the LSGS and the sLSGS approaches lead to a speedup by a factor of about two compared to the normal equation smoother. Here, the Vanka type smoother shows slightly smaller iteration numbers than the LSGS approach. In terms of computational costs, however, the LSGS smoother seems to be much better than the patch-based Vanka type smoother, because for the latter relatively large subproblems have to be solved to compute the updates. This is different from the case of the CGS smoother, where the subproblems are just $2$-by-$2$ linear systems. Numerical experiments have shown that the undamped version of the patch-based Vanka type method does not lead to a convergent multigrid method. So, this smoother was damped with $\tau=0.4$. Due to the lack of a convergence theory, the author cannot explain why this approach -- although it is a multiplicative approach -- needs damping. \begin{table} \begin{center}
\begin{tabular}{p{1.cm}p{.6cm}p{.6cm}p{1.3cm}p{.6cm}p{.6cm}p{1.3cm}p{.6cm}p{.6cm}p{1.3cm}p{.6cm}p{.6cm}p{.8cm}}
\hline\noalign{\smallskip}
& \multicolumn{3}{l}{Normal equation} &
\multicolumn{3}{l}{LSGS} &
\multicolumn{3}{l}{sLSGS} &
\multicolumn{3}{l}{Vanka type} \\
\noalign{\smallskip}\hline\noalign{\smallskip}
$\alpha=$&$10^0$&$10^{-6}$&$10^{-12}$&$10^0$&$10^{-6}$&$10^{-12}$&$10^0$&$10^{-6}$&$10^{-12}$&$10^0$&$10^{-6}$&$10^{-12}$\\
\noalign{\smallskip}\hline\noalign{\smallskip}
$k=4$&31&31&60&13&12&14&17&16&22&11&10& 7\\
$k=5$&32&30&55&14&13&12&18&16&19&11&10& 7\\
$k=6$&32&31&44&14&13& 9&18&17&12&11&11& 7\\
$k=7$&32&31&37&14&14& 6&18&17& 9&11&11& 9\\
\noalign{\smallskip}\hline\noalign{\smallskip}
\end{tabular}
\caption{Number of iterations for the \emph{Stokes control model problem}}
\label{tab:3}
\end{center} \end{table}
For completeness, the author wants to mention that in cases where a (closed form of a) matrix $\mathcal{Q}_k$ satisfying~\eqref{eq:a1} robustly is not known, the normal equation smoother does not show results as good as those of methods that do not need such information, like Vanka type methods. This was discussed in~\cite{Takacs:Zulehner:2011} for a boundary control problem, but it is also true for the linearization of optimal control problems with inequality constraints as discussed in~\cite{Herzog:Sachs:2010} and others. The same is true for the Gauss Seidel-like variants of the normal equation smoother.
Concluding, we have observed that accelerating the normal equation smoothing idea with a Gauss Seidel approach leads to a speedup by a factor of about two without any further work. The fact that a convergence theory is known for the sLSGS approach also helps in numerical practice (unlike the case of Vanka type smoothers).
\parbox{12cm}{
\begin{center} This paper has been published in\\ C. P\"otzsche, C. Heuberger, B. Kaltenbacher and F. Rendl: \\System Modeling and Optimization. Springer, 2014.\\[1em] The original publication is available at www.springerlink.com:\\ \url{http://link.springer.com/chapter/10.1007/978-3-662-45504-3_33}\end{center}
}
\end{document}
\begin{document}
\title{Using Submodularity within Column Generation to Solve the Flight-to-Gate Assignment Problem} \author{Yijiang Li\thanks{School of Industrial and Systems Engineering, Georgia Institute of Technology (yijiangli@gatech.edu)}, John-Paul Clarke\thanks{Department of Aerospace Engineering and Engineering Mechanics, The University of Texas at Austin (johnpaul@utexas.edu)}, and Santanu S. Dey \thanks{School of Industrial and Systems Engineering, Georgia Institute of Technology (santanu.dey@isye.gatech.edu)}} \date{} \maketitle \quad\\ \textbf{Abstract.} In this paper, we provide a column generation-based approach for solving the airport flight-to-gate assignment problem, where the goal is to minimize the on-ground portion of arrival delays by optimally assigning each scheduled flight to a compatible gate. Specifically, we use a set covering formulation for the master problem and decompose the pricing problem such that each gate is the basis for an independent pricing problem to be solved for assignment patterns with negative reduced costs. We use a combination of an approximation algorithm based on the submodularity of the underlying set and dynamic programming algorithms to solve the independent pricing problems. To the best of our knowledge, this is the first use of the submodularity property to efficiently solve pricing problems and improve the performance of a column generation algorithm. We show that the dynamic programming algorithm is pseudo-polynomial when the inputs are integer. We also design and employ a rolling horizon method and a block decomposition algorithm to solve large-sized instances. Finally, we perform extensive computational experiments to validate the performance of our approach. \quad \\ \textbf{Keywords:} submodularity, column generation, flight-to-gate assignment, approximation algorithms, dynamic programming
\section{Introduction} \subsection{Motivation and literature survey} Airports throughout the world have seen a rapid increase in the numbers of flights and passengers over the past decade. Notwithstanding the current decline due to COVID-19, the International Air Transport Association \cite{iata} expects $7.2$ billion passengers to travel in $2035$, a near doubling of the $3.8$ billion air travelers in $2016$. The vast majority of current airport facilities, in particular airport terminals, are not sized to handle such traffic. And, although expansions in capacity have been planned for the long term, the requisite capacity expansions are unlikely to materialize in the near future. Airports will therefore see increases in delays and associated increases in the cost of delays that are similar to or in excess of the $\$1.6$ billion or $6$ percent increase, from $\$26.6$ to $\$28.2$ billion, that was observed between 2017 and 2018 \cite{faa}. These expected increases in the magnitude and cost of delays can only be mitigated through improved airport operations management.
One critical area of airport operations is the optimal and efficient assignment of arriving flights to gates. The airport flight-to-gate assignment problem has been extensively studied by many researchers in both the operations research and aviation communities. Many models and algorithms have been proposed. For a detailed review of past work in this area, we refer the readers to \cite{reviewsa} and \cite{DasGzaraStuzle}, but give a brief overview of the popular objectives and the common solution methods.
There are three popular objectives to consider in the models. The first one is the maximization of passenger satisfaction levels. In particular, researchers consider walking distances and waiting or transit times as proxies for the passenger satisfaction level. A more detailed approach differentiates between transfer passengers and destination passengers. In these models, the costs associated with the assignments of two flights to any two gates reflect the distance between the two gates and the connection time between the two flights. Interested readers are referred to \cite{mokhtarimousavi}, \cite{ChengHoKwan}, \cite{YanTang}, and \cite{kimFeronClarke}. The second class of objectives focuses on airport and airline operations. Some papers, such as \cite{Kaliszewksi} and \cite{Cheng}, consider the minimization of the number of ungated flights, or equivalently the number of flights assigned to the remote gates, or introduce costs incurred only when flights are assigned to remote gates so as to differentiate the priorities of the flights. Moreover, \cite{Jaehn} and \cite{NeumanAtkin} point out that airlines likely have preferences over which set of gates their flights park at, leading to the consideration of maximizing the total gate preference score. The third popular objective is to improve the robustness of the solution to schedule variations. Due to the uncertain nature of the air transportation system, arrival times and departure times are likely to be stochastic, subject to various factors such as weather and maintenance. An early arrival combined with a late departure may cause the arriving flight to wait at the assigned gate for the departing flight to push back. This phenomenon is referred to as a gate conflict. Other studies consider robustness with respect to minimizing the expected gate conflict time.
In the event of major disruptions such as severe weather or maintenance delays, an airport has to be able to quickly recover from deviations from the original arrival schedules and adjust the flight-to-gate assignments to reduce any potential delays. Readers are referred to the work of \cite{DorndorfJaehnPesch} and \cite{KumarBielaire}. Since all three objectives mentioned above are important to take into account and none of them is superior to the other two, some combinations of them have also been explored. The weighted sum method is more common in this line of work, but a Pareto-based approach has been utilized as well. The papers \cite{KimFeronClarkeMarzuoliDelahaye} and \cite{DorndorfJaehnPeschReferenceSchedule} are examples of studies that use the weighted sum approach, while \cite{Das} approaches this problem with a Pareto local search method.
Due to the general $\mathcal{NP}$-hard nature of the problem, it can be extremely difficult to solve the problem directly and exactly. Many solution algorithms and heuristics have been proposed. In much of the work to date, an integer programming or mixed integer programming formulation is first presented and a solver is called to obtain a solution and validate the model. These formulations are large and complex because of the many complicated constraints required to reflect real-world phenomena. Consequently, it is very time-consuming, if not impossible, to obtain solutions within a reasonable amount of time, and it is challenging to improve these solution methodologies. Readers are referred to \cite{KumarBielaire} and \cite{TangWang} for the details of these models. In addition to the integer programming or mixed integer programming approach, \cite{SekerNoyan} uses a stochastic optimization model to capture the uncertain nature of the problem and study the occurrence of gate conflicts. The paper \cite{YanHuo} uses a column generation method together with a branch-and-bound algorithm to obtain integral solutions. They give a description of the column generation method and a general strategy to apply the branch-and-bound algorithm. However, they do not give any explicit mathematical formulations for the column generation scheme, nor do they discuss any strategies or methods to tackle the pricing problems. Due to the limitations and challenges of using exact formulations for this problem, heuristics are extensively used. Popular heuristics such as genetic algorithms and tabu search algorithms have been explored.
The tabu search algorithm performs a local search around the current solution and prohibits revisiting previous search points by keeping a list of those points for a few iterations, while the genetic algorithm is inspired by the evolutionary idea that a chromosome of higher fitness is more likely to survive, and involves selection, crossover, and mutation operations. For an implementation of these two metaheuristics, readers are referred to \cite{kimFeronClarke}. Moreover, \cite{YuZhangLau} implemented a MIP-based heuristic which is adapted from many popular heuristics, and they show that the adaptation gives good performance. Lastly, there are some other heuristics based on neighborhood search, breakout local search, Fuzzy Bee Colony optimization (FBCO), and particle swarm optimization (PSO). Readers are referred to \cite{YuZhangLauNeighborhood}, \cite{BenlicBurkeWoodward}, and \cite{DengZhaoYangXiongSunLi} for these heuristics. \subsection{Contributions of this paper} As we discussed previously, there are gaps in both the formulations and solution methodologies of the flight-to-gate assignment problem that need to be addressed. In this paper, we focus on airport operations and aim to minimize the arrival delays over all the flights. We propose an exact solution method that is capable of obtaining the flight-to-gate assignments at busy hub airports where a large number of flights are seen on a daily basis. Our main contributions are summarized below. \begin{enumerate}
\item \textbf{Column generation formulation.} We present an explicit column generation formulation with a set covering master problem and we decompose the pricing problem into independent problems to which we apply both heuristic and exact algorithms to obtain favorable assignment patterns. Such decomposition allows us to simplify the pricing problem formulations and reduce the size of the individual pricing problem significantly. Experiments show that this formulation is far more efficient than a conventional compact mixed integer programming formulation.
\item \textbf{Pricing problem algorithms.} We explore a few approximation methods and exact algorithms to efficiently solve individual pricing problems after the decomposition. In particular,
\begin{itemize}
\item We show and exploit the submodularity of the objective functions of the pricing problems to design an approximation algorithm. To the best of our knowledge, this is the first use of submodularity to efficiently solve the pricing problems in a column generation setting.
\item We derive an exact dynamic programming algorithm based on a recursive formula and show that the direct implementation of the algorithm works reasonably well in practice. We show the algorithm to be pseudo-polynomial in the special case of integer inputs and present a new heuristic based on this observation.
\item We propose a block decomposition heuristic and prove it is a $2$-approximation algorithm for large-sized instances.
\end{itemize} \end{enumerate}
The structure of the rest of the paper is as follows. In Section \ref{problemSetup}, we provide a description of the flight-to-gate assignment problem. In Section \ref{formulation}, we provide the formulation for the column generation algorithm, and the exact algorithms and heuristics used for solving the pricing problems are discussed in Section \ref{solvingpricing}. Specifically, a submodular maximization approximation algorithm is described in Subsection \ref{approximation} and a dynamic programming algorithm with complexity analyses is presented in Subsection \ref{DP}. In Subsection \ref{largesizedinstance}, we present a new block decomposition approximation for large-sized instances. We discuss the branching scheme and our method for retrieving integer feasible solutions in Section \ref{branching}. Results of the computational experiments are presented in Section \ref{numericalexp}. Finally, we make some concluding remarks and discuss possible extensions of this work in Section \ref{conclusions}.
\section{Problem setup} \label{problemSetup} The flight-to-gate assignment problem may be described as follows. Let the set of flights be $\mathcal{F}$ and the set of gates be $\mathcal{G}$. Each flight $i \in \mathcal{F}$ has an arrival time, $a_i$, an airline, and an aircraft type which determines its minimum turn time, $\tau_i$, the minimum duration for which the flight must remain at its assigned gate after parking before being ready for departure. Each gate $k \in \mathcal{G}$ has a buffer time, $b_k$, that has to be observed between consecutive flights, and a set of allowable aircraft types. The solution of the problem gives an assignment in which each flight is assigned to one gate. For each flight $i$, we define its arrival delay as the difference between the park time at its assigned gate, $t_i^g$, and its arrival time, $a_i$. The decisions are the gate assignment, $x_{ik}$, the park time, $t_i^g$, and the push back time, $t_i^p$, for $i \in \mathcal{F}$ and $k \in \mathcal{G}$. In particular, the decision variables $x_{ik}$ are equal to $1$ if flight $i$ is assigned to gate $k$ and otherwise equal to $0$. In addition, we also consider the notion of compatibility, $\alpha_{ik}$, where a flight can only be assigned to a compatible gate. In particular, $\alpha_{ik}$ is equal to $1$ if flight $i$ is compatible with gate $k$ and equal to $0$ otherwise. The compatibilities are determined by various factors including gate types (heavy or regular), flight types (heavy or regular), and airline preferences. Two additional conditions have to be met for an assignment to be valid: \begin{enumerate}
\item A flight occupies its assigned gate for a duration of at least its minimum turn time, $\tau_i$, for $i \in \mathcal{F}$.
\item A pair of flights that are assigned to the same gate cannot occupy the gate at the same time and the buffer time, $b_k$, must be observed before the gate becomes ready for the next flight. \end{enumerate}
We further assume independence of the gates, which excludes interference between the gates. This assumption can be justified from a practical perspective. Note first that we are only considering arrival delays in this paper. The current practice at many airports is that pushbacks of the departing flights are coordinated by the ramp towers. Whenever a pushback of a flight is in the way of an arriving flight heading to its assigned gate, the pushback is delayed. Note that such delays are rare at many of the large airports where there are multiple taxiways in the ramp area to minimize blockage of arriving flights by departing flights. Consequently, what affects the park time of an arriving flight most is its assigned gate and the departing flight at that gate. Based on the above description, we give a compact mixed integer programming formulation below, \begin{align} \min \; & \sum_{i \in \mathcal{F}} (t_i^{g}-a_i) \label{MIPO1}\\ \text{subject to } & \sum_{k \in \mathcal{G}} x_{ik} = 1, \; \forall i \in \mathcal{F} \label{MIPC1}\\ & t_i^{g} + \tau_i \leq t_i^{p}, \; \forall i \in \mathcal{F} \label{MIPC2}\\ & t_i^{g} \geq a_i, \; \forall i \in \mathcal{F} \label{MIPC3}\\ & t_i^{p} + b_k - t_j^{g} \leq M(2-x_{ik}-x_{jk}), \; i < j,\; \forall i, j \in \mathcal{F},\; \forall k \in \mathcal{G} \label{MIPC4}\\ & x_{ik} \leq \alpha_{ik}, \; \forall i \in \mathcal{F}, k \in \mathcal{G} \label{MIPC6}\\ & x_{ik} \in \{0,1\},\; \forall i \in \mathcal{F}, k \in \mathcal{G} \label{MIPbinary}\\ & t_i^{g}, t_i^{p}\ge 0, \; \forall i \in \mathcal{F}, \label{MIPcontin} \end{align} where $M$ is a sufficiently large constant. The expression in (\ref{MIPO1}) gives the objective function which minimizes the total arrival delay. Constraint (\ref{MIPC1}) ensures that each flight is assigned to exactly one gate. Constraint (\ref{MIPC2}) ensures that a flight parks at a gate for at least a duration of the minimum turn time (condition 1 above).
Constraint (\ref{MIPC3}) ensures that the park time, $t_i^g$, is no earlier than the arrival time, $a_i$. We want to emphasize that we do not impose an upper bound on the park times, $t_i^g$, since it is possible that an arriving flight has to wait at its assigned gate for a long period of time for the currently occupying departing flight to push back due to various reasons, such as weather, technical difficulties, or security issues, although the chance of such a long wait is slim. In addition, imposing such upper bounds on the park times can sometimes lead to infeasibility. Condition 2 above leads to constraint (\ref{MIPC4}), which observes the buffer time, $b_k$. It considers all pairs of flights $i < j$. When they are assigned to the same gate, the difference between $t_j^{g}$ and $t_i^{p}$ must be at least the buffer time of the gate, $b_k$. Constraint (\ref{MIPC6}) is the compatibility constraint that ensures flights are only assigned to compatible gates. Constraints (\ref{MIPbinary}) and (\ref{MIPcontin}) are the binary and non-negativity requirements on the decision variables, respectively. We will show in the computational experiments that this compact formulation is not ideal for obtaining the desired flight-to-gate assignment efficiently. We instead propose a column generation approach to solve the problem. Note also that this setup is in fact similar to what airlines do for their flight-to-gate assignments in the real world. Interested readers are referred to \cite{deltatabu}.
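To make the constraints concrete, a checker for a candidate solution of the compact model might look as follows. This is a sketch with illustrative data structures (not from the paper); following constraint (\ref{MIPC4}), flights are assumed to be indexed in arrival order, so flight $i$ precedes flight $j$ at a shared gate whenever $i < j$:

```python
def is_valid_assignment(flights, gates, assign, t_g, t_p):
    """Check the constraints of the compact model for a candidate
    solution.  flights[i] = (a_i, tau_i); gates[k] = (b_k, compat_set);
    assign[i] = k; t_g, t_p are park and push-back times.
    All names are illustrative."""
    n = len(flights)
    for i, (a, tau) in enumerate(flights):
        b, compat = gates[assign[i]]
        if i not in compat:            # compatibility (MIPC6)
            return False
        if t_g[i] < a:                 # park no earlier than arrival (MIPC3)
            return False
        if t_p[i] < t_g[i] + tau:      # minimum turn time (MIPC2)
            return False
    for i in range(n):
        for j in range(i + 1, n):      # buffer between shared-gate flights (MIPC4)
            if assign[i] == assign[j]:
                b = gates[assign[i]][0]
                if t_p[i] + b > t_g[j]:
                    return False
    return True
```

Such a checker is useful for validating solutions returned by any of the algorithms discussed later, independently of how they were produced.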
\section{Column generation formulation} \label{formulation} \subsection{Master problem consideration} Constraint (\ref{MIPC1}) in the compact mixed integer programming formulation suggests a set partitioning master problem should be used in the column generation formulation. However, as many papers, such as \cite{ColGen} and \cite{VRPTW}, have pointed out, a set covering master problem formulation is numerically more stable when solving its linear programming relaxation compared to a set partitioning master problem. Therefore, we present a set covering master problem formulation where the constraint in which each flight is assigned to exactly one gate is replaced by a constraint in which each flight is assigned to at least one gate. Since this gives a relaxation and we are minimizing the total arrival delay that is non-increasing after removing a flight from a gate, the set covering master problem either produces a set partitioning assignment or a set covering assignment in which we can fix each flight with multiple assignments to one of its assigned gates and recover a set partitioning assignment that has a total arrival delay at most as large as that of the original set covering assignment. \subsection{The set covering master problem} We define the following parameters: \begin{align*}
P_k & := \text{ the set of all feasible assignment patterns for gate }k\\
\delta_{ip}^k & := \begin{cases}1 \; \text{ if flight } i \text{ is assigned on pattern } p \in P_k \text{ of gate }k \\ 0 \; \text{ otherwise}\end{cases}\\
c_p^k & := \text{ the arrival delay of pattern } p \in P_k. \end{align*} Note that $\delta_{ip}^k$ and $c_p^k$ are constants that can be computed for a given assignment pattern. The decision variables $z_p^k \;(k \in \mathcal{G}, p \in P_k)$ are equal to $1$ if pattern $p$ of gate $k$ is used and $0$ otherwise. Then the set covering master problem is given by \begin{align} \min \; & \sum_{k \in \mathcal{G}} \sum_{p \in P_k} c_p^k z_p^k \label{colGenObj}\\ \text{subject to } & \sum_{k \in \mathcal{G}} \sum_{p \in P_k} \delta_{ip}^k z_p^k \ge 1, \; \forall i \in \mathcal{F} \quad (\pi_i)\label{coverConstraint}\\ & \sum_{p \in P_k} z_p^k = 1, \; \forall k \in \mathcal{G} \quad (\mu_k)\label{availConstraint}\\ & z_p^k \in \{0,1\}, \; \forall k \in \mathcal{G}, p \in P_k. \label{bound} \end{align}
The objective (\ref{colGenObj}) is an expression for total arrival delays by summing the arrival delays of all patterns over all gates. Constraint (\ref{coverConstraint}) is the cover constraint that ensures each flight is assigned to at least one gate. Constraint (\ref{availConstraint}) is the availability constraint that ensures only one pattern of each gate is selected. Last constraint (\ref{bound}) is the binary requirements on the decision variables $z_p^k \;(k \in \mathcal{G}, p \in P_k)$ and relaxing these constraints to $0 \le z_p^k \le 1$ gives the linear programming relaxation of the set covering master problem. The linear programming relaxation is then solved in each iteration. \subsection{The pricing problems} In the pricing problem, we construct feasible assignment patterns for each gate. If we associate dual variables $\pi_i$ with (\ref{coverConstraint}) and $\mu_k$ with (\ref{availConstraint}), the reduced cost $\bar{c}_p^k \;(k \in \mathcal{G}, p \in P_k)$ of a pattern $p$ of gate $k$ is given by \begin{equation} \bar{c}_p^k = c_p^k - \sum_{i \in \mathcal{F}} \delta_{ip}^k\pi_i - \mu_k. \end{equation}
Given a feasible solution $z$ to the linear programming relaxation of (\ref{coverConstraint}) - (\ref{bound}), we know that $z$ is optimal if and only if the optimal assignment patterns generated from the pricing problem have non-negative reduced costs. Then the pricing problem is given by \begin{equation} \min\{c_p^k - \sum_{i \in \mathcal{F}} \delta_{ip}^k\pi_i - \mu_k: k \in \mathcal{G}, p \in P_k\}. \end{equation}
Since we assume each gate is independent, we can further decompose the pricing problem into $|\mathcal{G}|$ independent pricing problems, one for each gate. The $k^{th}$ pricing problem is given by \begin{align} \label{pricingproblem} \min & \{c_p^k - \sum_{i \in \mathcal{F}} \delta_{ip}^k\pi_i - \mu_k: p \in P_k\}. \end{align}
For the decision variables, similar to the compact formulation (\ref{MIPO1}) - (\ref{MIPcontin}), we use binary decision variables $x_{ik}$ which are equal to $1$ if flight $i$ is assigned at gate $k$ and equal to $0$ otherwise, and continuous variables $t_i^g$ and $t_i^{p}$ for the park times and push back times of flight $i$ respectively. Recall the definitions of the constants $\delta_{ip}^k$ and $c_p^k$ which are given by \begin{align*}
\delta_{ip}^k & := \begin{cases}1 \; \text{ if flight } i \text{ is assigned on pattern } p \in P_k \text{ of gate }k \\ 0 \; \text{ otherwise}\end{cases}\\
c_p^k & := \text{ the arrival delay of pattern } p \in P_k. \end{align*}
When a pricing problem is solved to generate a new assignment pattern $p$, the values of the constants $\delta_{ip}^k$ are determined by the values of the decision variables $x_{ik}$, and $c_p^k$ is the sum of the individual arrival delays, $t_i^g-a_i$. Let $M$ denote a large constant; then the $k^{th}$ pricing problem is given by \begin{align} \min \; & \sum_{i \in \mathcal{F}} (t_i^{g}-a_i) - \sum_{i \in \mathcal{F}} x_{ik}\pi_i - \mu_k \label{pricingObj}\\ \text{ subject to } \; & t_i^g + \tau_i \leq t_i^{p}, \; \forall i \in \mathcal{F} \label{C1}\\ & t_i^{g} \geq a_i, \; \forall i \in \mathcal{F} \label{C2}\\ & t_i^{p} + b_k - t_j^{g} \le M(2-x_{ik}-x_{jk}), \; \forall i < j,\;\;\; i,j \in \mathcal{F}\label{C3}\\ & x_{ik} \le \alpha_{ik}, \; \forall i \in \mathcal{F}, k \in \mathcal{G} \label{compatibility}\\ & x_{ik} \in \{0,1\}, \; \forall i \in \mathcal{F},\; k \in \mathcal{G} \label{assBound}\\ & t_i^{g}, t_i^{p}\ge 0, \; \forall i \in \mathcal{F}. \label{timeBound} \end{align}
The objective function (\ref{pricingObj}) is obtained by expressing $c_p^k$ explicitly in terms of the decision variables $t_i^g$ and the parameters $a_i$, and by substituting the decision variables $x_{ik}$ for the parameters $\delta_{ip}^k$. Constraints (\ref{C1}) - (\ref{timeBound}) follow directly from the constraints (\ref{MIPC2}) - (\ref{MIPcontin}) in the compact mixed integer programming formulation; constraint (\ref{MIPC1}) is not needed. For flights that are not assigned to this gate, we take their arrival delays to be zero.
The solutions $x_{ik}$ from a pricing problem form a potential assignment pattern. If the corresponding reduced cost given by the objective value is negative, then this assignment pattern is favorable and added to the master problem.
We want to point out that $\mu_k$ is a constant in the $k^{th}$ pricing problem and thus dropping this term we can equivalently solve (\ref{pricingObj}) - (\ref{timeBound}) by solving the following maximization problem: \begin{align} \max \; & - \sum_{i \in \mathcal{F}} (t_i^g - a_i) + \sum_{i \in \mathcal{F}} x_{ik}\pi_i \label{modifiedpricingObj}\\ \text{ subject to } \;& (\ref{C1}) - (\ref{timeBound}). \notag \end{align}
This equivalent problem is intuitively easier to interpret as we may consider $\pi_i$ as the benefits of accepting flight $i$ at the cost of an arrival delay $t_i^g - a_i$. Consequently, we are attempting to maximize the total net benefits over all flights. We refer to this maximization problem as the pricing problem for any gate in all following sections and use the terms total net benefits and reduced cost interchangeably.
\section{Solving the pricing problem} \label{solvingpricing} \subsection{Pre-processing}
The decomposition of the pricing problem into smaller pricing problems allows additional pre-processing of the set of flights $\mathcal{F}$ for each gate. For a particular gate $k$, we can construct a subset of flights $\mathcal{F}' \subseteq \mathcal{F}$ containing only the compatible flights and, among those, only the flights with positive dual variables $\pi_i$. This restriction is valid because incompatible flights cannot be assigned to this gate, while accepting a flight with $\pi_i \le 0$ contributes no benefit and can only increase the arrival delays of other flights. For the rest of this section, we work with the subset $\mathcal{F}'$ and assume $|\mathcal{F}'|=n$; note that $\mathcal{F}'$ may differ from gate to gate. \subsection{Submodular approximation algorithm} \label{approximation} Submodularity is a well-studied property of set functions, and many existing algorithms can approximately maximize a submodular function efficiently and effectively. In the context of the flight-to-gate assignment problem, the submodularity of the objective function (\ref{modifiedpricingObj}) is intuitive: for any gate and any set of accepted flights, the impact of accepting an additional flight on the total arrival delay of this set is at least as large as the impact of accepting that flight alongside only a subset of those flights.
Formally, consider the pricing problem for gate $k$ and recall that, after pre-processing the set of flights, the objective is \begin{align*} \max \; & - \sum_{i \in \mathcal{F}'} (t_i^g - a_i) + \sum_{i \in \mathcal{F}'} x_{ik}\pi_i. \end{align*}
For simplicity, we denote $t_i^{g}-a_i$ by $\triangle t_i^{g}$ and clearly $\triangle t_i^g \ge 0$. We define a function $f: 2^{\mathcal{F}'} \mapsto \mathbb{R}$ as follows, \begin{align} \label{subfDef} f(A) = - \sum_{i \in A} \triangle t_i^{g, A} + \sum_{i \in A}\pi_i. \end{align}
Here the additional superscript $A$ indicates the value of $\triangle t_i^{g}$ under the set of assigned flights $A$, and we set $\triangle t_i^{g,A} = 0$ for $i \notin A$. By definition, $f(\emptyset) = 0$. For a given set of assigned flights $A$, the constraints (\ref{C1}) - (\ref{C3}) are incorporated when evaluating $f(A)$. Specifically, label the flights in $A$ by $1, 2, \ldots, |A|$ in order of arrival; the total benefit $\sum_{i \in A}\pi_i$ is immediate, so it remains to compute the arrival delays. The first flight parks at its arrival time $a_{1}$ and pushes back after its minimum turn time, at $a_{1} + \tau_{1}$; the gate then becomes available for the second flight after the buffer time, at $a_{1} + \tau_{1} + b_k$. If the second flight arrives earlier than $a_{1} + \tau_{1} + b_k$, it parks at $a_{1} + \tau_{1} + b_k$ and incurs an arrival delay of $a_{1} + \tau_{1} + b_k - a_{2}$; otherwise it parks at its arrival time $a_{2}$ with no delay. Applying the same procedure to the remaining flights in $A$ yields $f(A)$.
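The evaluation procedure just described can be sketched in a few lines; the function below (names are ours) takes a set of accepted flight indices together with arrival times, turn times, the gate's buffer time, and the duals, and returns $f(A)$.

```python
def evaluate_f(A, a, tau, b_k, pi):
    """Evaluate f(A): process the accepted flights in order of arrival,
    propagating gate availability per constraints (C1)-(C3)."""
    flights = sorted(A, key=lambda i: a[i])
    total_delay, gate_free = 0.0, float("-inf")
    for i in flights:
        park = max(a[i], gate_free)          # park at arrival or when the gate frees up
        total_delay += park - a[i]           # arrival delay of flight i
        gate_free = park + tau[i] + b_k      # turn time plus buffer occupies the gate
    return -total_delay + sum(pi[i] for i in flights)
```

For example, with arrivals $(0,1)$, turn times $(2,2)$, buffer $1$, and duals $(5,5)$, the second flight waits until time $3$, so $f(\{1,2\}) = -2 + 10 = 8$.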
Note that since all flights in $\mathcal{F}'$ are compatible with this gate and the above procedure incorporates constraints (\ref{C1}) - (\ref{C3}), we can pose the pricing problem as the following equivalent unconstrained maximization problem \begin{equation} \label{submaxpro} \max\{f(A): A \subseteq \mathcal{F}'\}. \tag{subMax} \end{equation}
In order to show the submodularity of the function $f$, we need the following observation about the variables $\triangle t_i^{g}$. \begin{lemma}\label{subsetDelayLemma} If $A \subseteq B \subseteq \mathcal{F}'$, then $$\sum_{i \in B} \triangle t_i^{g, B} \ge \sum_{i \in A} \triangle t_i^{g, A}.$$ \end{lemma} \begin{proof}[proof of Lemma \ref{subsetDelayLemma}.] We can rewrite the left hand side expressions into \begin{align*} \sum_{i \in B} \triangle t_i^{g, B} & = \sum_{i \in A} \triangle t_i^{g, B} + \sum_{i \in B \backslash A} \triangle t_i^{g, B}\\ & \ge \sum_{i \in A} \triangle t_i^{g, A} + \sum_{i \in B \backslash A} \triangle t_i^{g, B}. \end{align*}
The last inequality holds because $A \subseteq B$: for any $i \in A$, the arrival delay of $i$ under the larger set of assigned flights $B$ is at least as large as its arrival delay under $A$. Since in addition $\triangle t_i^{g,B} \ge 0$ for all $i \in B \backslash A$, the desired inequality follows. \end{proof}
We next show that $f$ is a submodular function. \begin{lemma} \label{submodFunc} $f$ is a submodular set function defined on $\mathcal{F}'$. \end{lemma} \begin{proof}[proof of Lemma \ref{submodFunc}.] First note that we use the expression ``the park time of a flight is pushed to the right" to mean that the park time of this flight is delayed, so that the flight experiences a larger arrival delay. Formally, given a set of flights $A$, the park time of flight $i$ is pushed to the right in another set of flights $B$ if $\Delta t_i^{g,A} \le \Delta t_i^{g,B}$.
Now suppose that $A \subseteq B \subseteq \mathcal{F}'$ and $u \in \mathcal{F}' \backslash B$; we compare $f(A \cup \{u\}) - f(A)$ with $f(B \cup \{u\}) - f(B)$, \begin{align} f(A \cup \{u\}) - f(A) & = - \sum_{i \in A \cup \{u\}} \triangle t_i^{g, A \cup \{u\}} + \sum_{i \in A \cup \{u\}}\pi_i - \left( - \sum_{i \in A} \triangle t_i^{g, A} + \sum_{i \in A}\pi_i \right) \notag\\ & = - \left(\sum_{i \in A} \triangle t_i^{g, A \cup \{u\}} - \sum_{i \in A} \triangle t_i^{g, A}\right) + (\pi_u - \triangle t_u^{g, A \cup \{u\}}). \label{subA} \end{align}
Similarly, we have that \begin{align} f(B \cup \{u\}) - f(B) = - \left(\sum_{i \in B} \triangle t_i^{g, B \cup \{u\}} - \sum_{i \in B} \triangle t_i^{g, B}\right) + (\pi_u- \triangle t_u^{g, B \cup \{u\}}). \label{subB} \end{align}
Since $A \subseteq B$, the arrival delay of $u$ in the set of assigned flights $B$ is at least as large as that of $u$ in the set of assigned flights $A$ and then $\triangle t_u^{g, A \cup \{u\}} \le \triangle t_u^{g, B \cup \{u\}}$. Thus $\pi_u - \triangle t_u^{g, A \cup \{u\}} \ge \pi_u- \triangle t_u^{g, B \cup \{u\}}$. For the first term in the expressions (\ref{subA}) and (\ref{subB}), we consider the following cases when accepting an additional flight $u$ to $A$ and $B$: \begin{enumerate}
\item The park times remain unchanged for all $i \in B$ and thus $i \in A$. Both terms are zero. We have $f(A \cup \{u\}) - f(A) \ge f(B \cup \{u\}) - f(B)$.
\item The park times are pushed to the right for a set of flights $I \subseteq B$. If $I \cap A \neq \emptyset$, following similar arguments as for $u$, the increase in the arrival delay of a flight $i \in I \cap A$ under the set of assigned flights $B$ is at least as large as that under $A$. In addition, there are potentially increases in the arrival delays of flights in $I \cap A^c$. Therefore, the first term in (\ref{subA}) is at least as large as (less negative than) that in (\ref{subB}). If $I \cap A = \emptyset$, the first term in (\ref{subA}) is zero while the first term in (\ref{subB}) is non-positive. We have again $f(A \cup \{u\}) - f(A) \ge f(B \cup \{u\}) - f(B)$. \end{enumerate}
The two cases above verify that when we accept an additional flight $u$, we have $f(A \cup \{u\}) - f(A) \ge f(B \cup \{u\}) - f(B)$ and this shows that $f$ is a submodular function. \end{proof}
We want to point out that $f$ is not monotonic in general. Suppose that $A \subseteq B \subseteq \mathcal{F}'$, then \begin{align*} f(B) - f(A) & = - \sum_{i \in B} \triangle t_i^{g, B} + \sum_{i \in B}\pi_i - \left(-\sum_{i \in A} \triangle t_i^{g, A} + \sum_{i \in A}\pi_i\right)\\ & = - \left(\sum_{i \in B} \triangle t_i^{g, B} - \sum_{i \in A} \triangle t_i^{g, A}\right) + \sum_{i \in B \backslash A} \pi_i. \end{align*}
The first term $- \left(\sum_{i \in B} \triangle t_i^{g, B} - \sum_{i \in A} \triangle t_i^{g, A}\right)$ is always non-positive by Lemma \ref{subsetDelayLemma}, while the second term $\sum_{i \in B \backslash A} \pi_i$ is non-negative since $\pi_i > 0$ for all $i \in \mathcal{F}'$. Hence $f(B) - f(A)$ is neither always non-negative nor always non-positive, and $f$ is not monotone in general.
We can now utilize an existing submodular maximization algorithm due to \cite{subMax}, shown below as Algorithm \ref{SubmodularApproximation}. The algorithm considers each flight in the set $\mathcal{F}'$ in turn and accepts it with a certain probability. Let $X_i$ and $Y_i$ be two random sets updated in each iteration, initialized as $X_0 = \emptyset$ and $Y_0 = \mathcal{F}'$, and denote the elements of $\mathcal{F}'$ by $1,2,\ldots,n$. \begin{algorithm}[H] \caption{Submodular Maximization($f,\mathcal{F}'$)}\label{SubmodularApproximation} \begin{algorithmic}[1] \State $X_0\gets \emptyset, Y_0\gets \mathcal{F}'$ \For{$i = 1,2,\ldots,n$} \State $a_i\gets f(X_{i-1} \cup \{i\}) - f(X_{i-1})$ \State $b_i\gets f(Y_{i-1} \backslash \{i\}) - f(Y_{i-1})$ \State $a_i^{\prime}\gets \max\{a_i,0\}, b_i^{\prime}\gets \max\{b_i,0\}$
\State \textbf{with probability} {$a_i^{\prime} / (a_i^{\prime} + b_i^{\prime})^*$} \textbf{do:} \Comment{$^*$ If $a_i^{\prime} = b_i^{\prime} = 0$, we assume $a_i^{\prime} / (a_i^{\prime} + b_i^{\prime}) = 1$}
\State $X_i\gets X_{i-1} \cup \{i\}, Y_i\gets Y_{i-1}$,
\State \textbf{else}\; (with the complement probability $b_i^{\prime} / (a_i^{\prime} + b_i^{\prime})$)
\State $X_i\gets X_{i-1}, Y_i\gets Y_{i-1} \backslash \{i\}$
\State \textbf{end} \EndFor \State \Return{$X_n$ (or equivalently $Y_n$)} \end{algorithmic} \end{algorithm}
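A minimal Python sketch of Algorithm \ref{SubmodularApproximation} (the randomized double greedy of \cite{subMax}) follows; here \texttt{f} is any set-function oracle, for instance an evaluation of (\ref{subfDef}), and the tie-breaking convention of the footnote is kept.

```python
import random

def double_greedy(f, ground_set, rng=None):
    """Randomized double greedy for unconstrained submodular maximization:
    expected value >= OPT/2 whenever f(ground_set) >= 0."""
    rng = rng or random.Random(0)
    X, Y = set(), set(ground_set)
    for i in ground_set:
        a = f(X | {i}) - f(X)                 # marginal gain of adding i to X
        b = f(Y - {i}) - f(Y)                 # marginal gain of removing i from Y
        a_p, b_p = max(a, 0.0), max(b, 0.0)
        # if a_p = b_p = 0, take i with probability 1 (footnote convention)
        if a_p + b_p == 0 or rng.random() < a_p / (a_p + b_p):
            X.add(i)
        else:
            Y.discard(i)
    return X                                   # X == Y at termination
```

On a modular (hence submodular) function the choices are deterministic: with weights $\{1{:}\,2,\;2{:}\,{-1},\;3{:}\,3\}$ the algorithm keeps exactly the positive-weight elements.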
The following theorem establishes a theoretical approximation guarantee of Algorithm \ref{SubmodularApproximation}. \begin{theorem} \label{subguarantee}
Let $f: 2^{\mathcal{F}'} \mapsto \mathbb{R}$ be defined by (\ref{subfDef}), let $OPT$ be an optimal solution of problem (\ref{submaxpro}), and let $X_n$ (or equivalently $Y_n$) be the set returned by the algorithm. If $f(\mathcal{F}') \ge 0$, then $\mathbb{E}(f(X_n)) \ge f(OPT)/2$. \end{theorem} \begin{proof}[proof of Theorem \ref{subguarantee}.] The proof is a minor modification of the one given in \cite{subMax} and is included in Appendix A for reference and completeness. The modification is needed because we only require $f(\mathcal{F}') \ge 0$ instead of $f(A) \ge 0$ for all $A \subseteq \mathcal{F}'$. \end{proof}
Although the condition on $f$ required for the theoretical guarantee may not always be satisfied, the assignment patterns generated can still have favorable total net benefits. We discuss the implementation of this algorithm and its performance in the numerical experiments section.
\subsection{Dynamic programming algorithm} \label{DP} Approximation algorithms can be efficient in generating favorable assignment patterns; however, exact optimal solutions to the pricing problems must be obtained to prove optimality at each node. This would typically be done using a general integer programming solver, which can be slow. To reduce the overall solution time, we have developed a much more effective dynamic programming algorithm for this task. We divide the analysis of the dynamic programming algorithm into three cases based on the input type: general input, rational input, and integer input. The analysis in the case of integer input is a simplification of the analysis used in the case of rational input. In addition, although in many cases the input is not integer, the algorithm developed for the integer case can serve as another approximation algorithm. We therefore defer the details of the analysis in the case of rational input to Appendix B. \subsubsection{The general case.} For any gate $k$, we define an auxiliary function $g_i(t)$ as the maximum total net benefit obtainable from optimally accepting flights in the set $\{i,i+1, \ldots, n\}$, where no flight in this set may park before time $t$. Formally, \begin{equation} \label{f(i,t)def}
g_i(t) = \max_{x_{ik}, x_{i+1,k}, \ldots, x_{nk}} \left\{\sum_{j \ge i} x_{jk} \pi_j - \sum_{j \ge i} (t_j^g - a_j)\; |\;t_j^g \ge t, \;\forall j \in \{i,i+1,\ldots,n\} \right\}. \end{equation}
As an illustration, we present an example of a set of $8$ arrivals and show in Figure \ref{func_g} the functions $g_i(t)$. In particular, the left plot shows the function $g_1(t)$ while the right plot shows the function $g_8(t)$. \\ \begin{figure}
\caption{Plot of functions $g_1(t)$ and $g_8(t)$.}
\label{func_g}
\end{figure}
We observe that the optimal value of the $k^{th}$ pricing problem is $g_1(0)$. We introduce the notion of the processing time of flight $i$ at gate $k$, which consists of the minimum turn time plus the buffer time, and denote it by $p_i := \tau_i + b_k$. In the rest of this section, we drop the index $k$ as we are solving the pricing problem for a fixed gate $k$. Based on the definition of $g_i(t)$, we can derive the following recursive formula. \begin{lemma}\label{recurlemma} \begin{align}\label{recureq} g_i(t) = \begin{cases} 0 & \; \text{ if } i \ge n+1\\ g_i(a_i) & \; \text{ if } t < a_i\\ g_{i+1}(t) & \; \text{ if } t > a_i + \pi_i\\ \max\{(a_i + \pi_i - t) + g_{i+1}(t + p_i), g_{i+1}(t)\} &\; \text{ if }a_i \le t \le a_i + \pi_i. \end{cases} \end{align} \end{lemma} \begin{proof}[proof of Lemma \ref{recurlemma}.] As in the proof of Lemma \ref{submodFunc}, we use the expression ``the park time of a flight is pushed to the right" to mean that the park time of this flight is delayed, so that the flight experiences a larger arrival delay.
We consider each case separately. \begin{enumerate} \item This is the terminating case: if $i \ge n+1$, there are no flights left and the net benefit is zero. \item Since a flight cannot park earlier than its arrival time, the total net benefit for any $t < a_i$ equals $g_i(a_i)$. \item If the current time $t > a_i + \pi_i$, accepting flight $i$ does not contribute to the total net benefit, as $\pi_i - (t - a_i) < 0$. In addition, the park times of the flights in $\{i+1,i+2,\ldots,n\}$ are potentially pushed to the right in the presence of flight $i$, so their arrival delays are at least as large as in its absence. Consequently, we do not accept flight $i$ at this gate. From this analysis, $a_i + \pi_i$ is the latest time beyond which we do not accept flight $i$; we call it the end point, and $a_i$ the start point, of flight $i$'s acceptance window. \item In this last case, we compare the total net benefits of accepting and not accepting flight $i$. If we accept flight $i$ at the current time $t$, the net benefit gained from flight $i$ is $a_i + \pi_i - t$ and the earliest possible park time for flight $i+1$ is $t+p_i$. If we do not accept flight $i$, the earliest possible park time for flight $i+1$ is $t$. \end{enumerate} \end{proof} A direct implementation of formula (\ref{recureq}) recursively considers whether to accept each flight, starting with the first flight. Algorithm \ref{DPpricing} below shows an implementation that obtains the value of $g_1(0)$. Since the function $g_i(t)$ is defined for all continuous values of $t$, it is not immediately clear that Algorithm \ref{DPpricing} runs in finite time; we verify this next. \begin{proposition}\label{finitealgorithm} For any value of $i$ and $t$, Algorithm \ref{DPpricing} runs in finite time. 
In particular, to evaluate $g_i(t)$ for any $t$, the procedure $\textup{EVALORACLE}(i,t)$ is called recursively at most $2^{n-i+1}$ times. \end{proposition} \begin{proof}[proof of Proposition \ref{finitealgorithm}.] The proof is by backward induction. If $i = n$, evaluating $g_n(t)$ for any $t$ amounts to calling $\textup{EVALORACLE}(n,t)$, which makes at most $2$ further calls, with input $n+1$ and $t$ or with input $n+1$ and $t+p_n$; this gives the base case. Suppose the statement is true for $i = n, n-1,\ldots,j + 1$, and consider $i = j$. To evaluate $g_j(t)$ for any $t$, the call $\textup{EVALORACLE}(j,t)$ makes at most $2$ further calls, with input $j+1$ and $t$ or with input $j+1$ and $t+p_j$; the former evaluates $g_{j+1}(t)$ and the latter $g_{j+1}(t+p_j)$. By the inductive hypothesis, each of these requires at most $2^{n-j}$ further recursive calls. Therefore, evaluating $g_j(t)$ for any $t$ requires at most $2 \cdot 2^{n-j} = 2^{n-j+1}$ recursive calls, which completes the proof. \end{proof} \begin{algorithm}[H] \caption{Evaluation of $g_i(t)$ and optimal assignment $S$} \label{DPpricing} \begin{algorithmic}[1] \State{\textbf{input:} $i$, $t$} \Procedure{EvalOracle}{$i$, $t$} \If {$i = n + 1$} \Return $0$, $S = \emptyset$ \Else
\If{$t \le a_i$}
\State \Return{\Call{EvalOracle}{$i$, $a_i$}}
\ElsIf{$t \ge a_i + \pi_i$}
\State \Return{\Call{EvalOracle}{$i+1$, $t$}}, $S \gets S \cup \{0\}$
\Else
\If{$(a_i + \pi_i - t) + $\Call{EvalOracle}{$i+1$, $t + p_i$} $\ge$ \Call{EvalOracle}{$i+1$, $t$}}
\State \Return{$(a_i + \pi_i - t) + $\Call{EvalOracle}{$i+1$, $t + p_i$}}, $S \gets S \cup \{1\}$
\Else
\State \Return{\Call{EvalOracle}{$i+1$, $t$}}, $S \gets S \cup \{0\}$
\EndIf
\EndIf \EndIf \EndProcedure \State \Return {\Call{EvalOracle}{$i,t$}, $S$} \end{algorithmic} \end{algorithm}
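For concreteness, a memoized sketch of the recursion (\ref{recureq}) follows (the value computation of Algorithm \ref{DPpricing}, without the assignment bookkeeping $S$). Flights are assumed sorted by arrival time and 0-indexed; these conventions are ours, not the paper's notation.

```python
from functools import lru_cache

def pricing_recursion(a, tau, pi, b_k):
    """Evaluate g_1(0) via the recursion (recureq), memoized on (i, t)."""
    n = len(a)
    p = [t + b_k for t in tau]                # processing time p_i = tau_i + b_k

    @lru_cache(maxsize=None)
    def g(i, t):
        if i >= n:                            # terminating case
            return 0.0
        t = max(t, a[i])                      # flight i cannot park before a_i
        if t > a[i] + pi[i]:                  # past the acceptance window: skip i
            return g(i + 1, t)
        # accept i now (benefit a_i + pi_i - t) or skip it, whichever is better
        return max((a[i] + pi[i] - t) + g(i + 1, t + p[i]), g(i + 1, t))

    return g(0, 0)
```

On the two-flight instance used earlier (arrivals $(0,1)$, turn times $(2,2)$, buffer $1$, duals $(5,5)$) this returns $8$, matching the direct evaluation of $f(\{1,2\})$.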
\subsubsection{The case of integral input data.} \label{ncapprox} We now propose an implementation of the dynamic programming algorithm with a running time of $O(nc)$ in the case of integral input data. In Appendix B, we analyze the case of rational data and construct the functions $g_i(t)$ on the interval $[0,c]$ by recursively evaluating them at the potential breakpoints from the set \begin{align*} \{0,e,2e,3e,\ldots,c\} \text{ where }e \in \{1/d,1/2d,1/3d,\ldots,1/(i+1)d\}, \end{align*} where $d$ is the common denominator. If the input data are integral, this set reduces further to $\{0,1,2,\ldots,c\}$. Formally, we have the following theorem. \begin{theorem} \label{alterdpthm} Assume all input data are integral. Then it suffices to evaluate a function $g_i(t)$ at $t \in \{0,1,2,\ldots,c\}$ to construct the function. \end{theorem} \begin{proof}[proof of Theorem \ref{alterdpthm}.]
Consider any set of accepted flights $A$, and label the flights in $A$ by $1, 2, \ldots, |A|$ in order of arrival. Following arguments similar to those in Subsection \ref{approximation}, the first flight parks at time $a_1$, and each subsequent flight $i \ge 2$ parks at time $t_i^g = \max\{a_i, t_{i-1}^g + p_{i-1}\}$, that is, at its arrival time or as soon as the gate becomes available after the preceding flight, whichever is later. Since all input data are integral, an induction on $i$ shows that the park times of all flights are integral.
Next, we optimally assign the flights $1,2,\ldots,n$ in the set $\mathcal{F}'$ based on the recursive formula (\ref{recureq}). For any flight $j$, if we accept $j$, it can only park at an integral time, determined either by the flights accepted before $j$ or by $a_j$. If we do not accept $j$, we move on to optimally assigning the flights $j+1, j+2, \ldots, n$, and the same argument applies. Therefore, all acceptance decisions occur at integral times, and it suffices to evaluate $g_i(t)$ at the integral times in $[0,c]$ to construct $g_i(t)$. \end{proof}
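Under this integrality observation, the table-based dynamic program can be sketched as follows; this is a simplified value-only version (assignment bookkeeping omitted) and it assumes flights sorted by arrival time and a horizon $c \ge \max_i(a_i + \pi_i)$, assumptions we add for the sketch.

```python
def dp_pricing_integral(a, tau, pi, b_k, c):
    """O(n*c) table DP: g[i][t] approximates g_{i+1}(t) of the paper
    (rows are 0-indexed; row n is the terminating case g_{n+1} = 0)."""
    n = len(a)
    p = [t + b_k for t in tau]                      # processing times
    g = [[0] * (c + 1) for _ in range(n + 1)]
    for i in range(n - 1, -1, -1):
        for t in range(c, -1, -1):
            s = max(t, a[i])                        # earliest feasible park time
            if s > a[i] + pi[i]:                    # acceptance window passed: skip i
                g[i][t] = g[i + 1][t]
            else:
                nxt = s + p[i]                      # gate frees up for flight i+1
                accept = a[i] + pi[i] - s + (g[i + 1][nxt] if nxt <= c else 0)
                g[i][t] = max(accept, g[i + 1][s])
    return g[0][0]
```

On the running two-flight example with $c = 10$ this again returns $8$, in agreement with the recursive evaluation.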
Based on the above observation, an implementation of the dynamic programming algorithm with a running time of $O(nc)$ is shown below as Algorithm \ref{alterapprox}. Note that this is an implementation of the backward version of the dynamic programming algorithm. \begin{algorithm}[H] \caption{Alternative implementation of the dynamic programming algorithm} \label{alterapprox} \begin{algorithmic}[1] \State Let $g_i(0 \ldots c)$ be a table where $i \in \{1,2,\ldots, n+1\}$. \For{$ t \gets c$ to $0$ } {$g_{n+1}(t) \gets 0$, $S_t \gets \emptyset$} \EndFor \For{$ i \gets n$ to $1$} \For{$ t \gets c$ to $0$} \If{$t \le a_i$} \State {$g_i(t) \gets g_i(a_i)$, $S_t \gets S_{a_i}$} \ElsIf {$t \ge a_i + \pi_i$} \State {$g_i(t) \gets g_{i+1}(t)$, $S_t \gets S_t \cup \{0\}$} \Else \If {$a_i + \pi_i - t + g_{i+1}(t+p_i) \ge g_{i+1}(t)$ } \State {$g_i(t) \gets a_i + \pi_i - t + g_{i+1}(t+p_i)$, $S_t \gets S_t \cup \{1\}$} \Else \State {$g_i(t) \gets g_{i+1}(t)$, $S_t \gets S_t \cup \{0\}$} \EndIf \EndIf \EndFor \EndFor \State \Return{$g_1(0)$, $S_0$} \end{algorithmic} \end{algorithm} Lastly, note that when not all input data are integers, this algorithm works as an approximation algorithm and provides an alternative to the submodular maximization approximation algorithm. We refer to this new approximation algorithm as the approximative dynamic programming algorithm (ADP) in the computational experiments. \subsection{Alternative reformulation of the pricing problems} Alternatively, the pricing problem can be formulated as a variant of the shortest path problem with time windows known as the shortest path problem with time windows and time costs (SPPTWTC), in which there is an additional linear cost associated with the service start time at each node. This problem is studied in detail in \cite{ESPPTWlinearnodecost}. 
To formulate the pricing problem as a SPPTWTC, a source and a sink are introduced, and each flight in the set $\mathcal{F}'$ is a node in the network. The cost of an arc from node $i$ to node $j$ is the negative of the dual variable of the destination flight, $-\pi_{j}$. To solve the SPPTWTC, a dynamic programming algorithm is proposed in \cite{ESPPTWlinearnodecost}, derived from the general labeling algorithm for shortest path problems with time windows. For a detailed discussion of the labeling algorithm and some relevant extensions, we refer the reader to \cite{ESPPTWlabeling} and \cite{ESPPTWconvexcosts}. One of the key reasons for the effectiveness of this dynamic programming algorithm is the use of upper bounds on the service start times to eliminate labels that are infeasible with respect to the time windows, which significantly reduces the number of labels created. However, if we reformulate our pricing problems as SPPTWTC, the lack of upper bounds on the park times leads to an exponential number of labels, rendering the dynamic programming algorithm of \cite{ESPPTWlinearnodecost} ineffective. Nonetheless, if one wanted to enforce upper bounds on the park times, one could leverage the work in \cite{ESPPTWlabeling}, \cite{ESPPTWconvexcosts}, and \cite{ESPPTWlinearnodecost}. We therefore did not pursue this reformulation further. \subsection{Large-sized instances} \label{largesizedinstance} The total number of arrivals at a busy international airport can be very large, and the proposed dynamic programming algorithm can be slow to obtain the optimal assignments. We use two ways to further decompose the pricing problem of each gate for large-sized instances: a $2$-approximation algorithm and a standard rolling horizon method. 
As the standard rolling horizon method is common in many applications, we defer the implementation details to Appendix C. \subsubsection{Block decomposition approximation.} \label{largeapprox} When dealing with a large number of arrival flights, the flights can usually be divided into blocks based on the interactions between them, reducing the problem size. We introduce an adjacency parameter, denoted by $\overline{\sigma}$, formally defined as follows. \begin{definition} \label{adjacencyparameter} For each flight $i$, let $j$ be the earliest flight after $i$ such that $a_j > a_i + \pi_i + p_i$, and denote $\sigma_i := \min\{j-i, n-i\}$; then $\overline{\sigma} := \max_{1 \le i \le n} \sigma_i$. \end{definition}
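Definition \ref{adjacencyparameter} translates directly into code; the sketch below (0-indexed, so $n-i$ becomes $n-1-i$, and flights are assumed sorted by arrival time) computes $\overline{\sigma}$ from the arrivals, duals, and processing times.

```python
def adjacency_parameter(a, pi, p):
    """Compute sigma_bar: for each flight i, find the earliest later flight j
    whose arrival lies beyond i's acceptance window plus processing time."""
    n = len(a)
    sigma_bar = 0
    for i in range(n):
        j = next((j for j in range(i + 1, n) if a[j] > a[i] + pi[i] + p[i]), n)
        sigma_bar = max(sigma_bar, min(j - i, n - 1 - i))  # 0-indexed min{j-i, n-i}
    return sigma_bar
```

For instance, with arrivals $(0, 1, 10)$, duals $(1,1,1)$, and processing times $(2,2,2)$, only the third flight is separated from the first two, so $\overline{\sigma} = 2$.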
Note that this adjacency parameter is likely to differ across the gates and change after a new iteration of the master problem is solved with the updated set $P_k$. The use of the adjacency parameter also appears in the work of \cite{delftbuitendijk}. Now we are ready to give the block decomposition approximation algorithm.
\begin{algorithm}[H] \caption{Block decomposition approximation} \label{rollingapproximation} \begin{algorithmic}[1] \State \textbf{Input:} adjacency parameter $\overline{\sigma}$ \State Divide the set $\mathcal{F}'$ into $\lfloor n/\overline{\sigma} \rfloor + 1$ blocks such that each block has $\overline{\sigma}$ flights and the last block has $n - \lfloor n/\overline{\sigma} \rfloor \cdot \overline{\sigma}$ flights. \State Solve for the optimal assignment in each of the blocks \State Compute the sums of the total net benefits of the odd-indexed blocks and of the even-indexed blocks, and let $B$ be the collection of blocks with the larger sum \State Construct an assignment, $S$, by cascading the optimal assignments from the blocks in $B$ \State Compute the total net benefits of the assignment $S$, $f(S)$ \State \Return{$S$, $f(S)$} \end{algorithmic} \end{algorithm} Note that we refer to the discussion in Subsection \ref{approximation} for the computation of the total net benefits $f(S)$. We now provide an approximation guarantee for Algorithm \ref{rollingapproximation}. \begin{theorem} \label{blockguarantee} Let $S^*$ and OPT be the true optimal assignment and the optimal objective value of the pricing problem respectively, let $S$ be the assignment returned by Algorithm \ref{rollingapproximation}, and let $OPT_S$ be the objective value associated with $S$; then $OPT_S \ge OPT/2$. \end{theorem} \begin{proof}[proof of Theorem \ref{blockguarantee}.] Label the blocks by $1,2,\ldots, \lfloor n/\overline{\sigma} \rfloor, \lfloor n/\overline{\sigma} \rfloor+1$ and let the optimal objective value of block $i$ be $OPT_i$ for $ i \in [\lfloor n/\overline{\sigma} \rfloor+1]$. By the construction of the blocks, we have that \begin{equation}
\sum_{i \in [\lfloor n/\overline{\sigma} \rfloor+1]} OPT_i \ge OPT. \end{equation}
Now without loss of generality, we assume that the sum of the total net benefits of the odd-indexed blocks is larger than that of the even-indexed blocks, then we have that \begin{equation}
2\sum_{i \text{ is odd}} OPT_i \ge \sum_{i \text{ is odd}} OPT_i + \sum_{i \text{ is even}} OPT_i \ge OPT. \end{equation}
Now we show that a valid assignment can be constructed by cascading the optimal assignments of the odd-indexed blocks. Let $I_1$ and $I_2$ be any two consecutive odd-indexed blocks, let $f_{1}$ be the last flight in block $I_1$, and let $f_2$ be the first flight in block $I_2$. From the construction of the blocks, $f_2 - f_1 > \overline{\sigma}$ since the two flights are separated by an even-indexed block. In addition, by the definition of the adjacency parameter, $a_{f_2} > a_{f_1} + \pi_{f_1} + p_{f_1}$ and, in fact, $a_{f_2} > a_{i} + \pi_{i} + p_{i}$ for any flight $i$ in block $I_1$. Recall that $a_i + \pi_i$ is the end point of flight $i$'s acceptance window, beyond which $i$ is not accepted. Therefore, the optimal assignment of the flights in block $I_1$ does not conflict with the decision of whether to accept $f_2$ in the optimal assignment of block $I_2$, and cascading the optimal assignments of $I_1$ and $I_2$ forms a valid assignment. Repeating this argument for all the odd-indexed blocks shows that cascading their optimal assignments forms a valid assignment pattern.
Therefore, we have that \begin{equation} OPT_S = \sum_{i \text{ is odd}} OPT_i \ge OPT/2, \end{equation} which completes the proof. \end{proof} Note that Theorem \ref{blockguarantee} implies that if there exists an assignment pattern with positive total net benefits, the block decomposition approximation algorithm must return an assignment pattern, $S$, with a positive $OPT_S$, so $S$ is a favorable assignment pattern. Furthermore, Algorithm \ref{rollingapproximation} can be improved as follows: suppose the odd-indexed blocks have the larger sum of total net benefits and we construct a valid assignment pattern from them; we can still potentially accept more flights from the even-indexed blocks to contribute additional net benefits, as long as accepting those flights does not violate constraints (\ref{C1}) - (\ref{C3}).\\ Lastly, as the adjacency parameter $\overline{\sigma}$ is computed from the interactions between flights, it is unlikely to be very large: most flights park at the gate to prepare for the next flight leg with a relatively short turn time, and thus have limited interaction with flights that arrive hours later.
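The steps of Algorithm \ref{rollingapproximation} can be sketched as follows; \texttt{solve\_block} and \texttt{f} are placeholders standing in for the exact per-block pricing solver and the pattern evaluation of Subsection \ref{approximation}, and the slicing omits an empty trailing block rather than keeping $\lfloor n/\overline{\sigma} \rfloor + 1$ blocks exactly.

```python
def block_decomposition(flights, sigma_bar, solve_block, f):
    """Block decomposition: solve each block of sigma_bar consecutive flights
    exactly, keep the better of the odd- and even-indexed block families,
    and cascade their assignments into one pattern."""
    blocks = [flights[i:i + sigma_bar] for i in range(0, len(flights), sigma_bar)]
    solutions = [solve_block(b) for b in blocks]              # (assignment, value) pairs
    odd = [s for i, s in enumerate(solutions) if i % 2 == 0]  # blocks 1, 3, ... (1-indexed)
    even = [s for i, s in enumerate(solutions) if i % 2 == 1]
    best = odd if sum(v for _, v in odd) >= sum(v for _, v in even) else even
    S = [x for assignment, _ in best for x in assignment]     # cascade the assignments
    return S, f(S)
```

With a toy solver that accepts every flight in its block and scores a block by the sum of its labels, four flights split into blocks of two illustrate the odd/even selection.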
\section{Feasible solutions and branching scheme} \label{branching} \subsection{Feasible solutions} After optimality is achieved in the linear programming relaxation of the master problem (\ref{colGenObj}) - (\ref{bound}), we solve the master problem with the binary requirements on the decision variables reinstated to obtain a feasible solution. We refer to this problem as the binary program. The objective value of the linear programming relaxation provides a lower bound on the optimal objective value, while the binary program provides an upper bound. Subsequently, we can measure the quality of the binary program solution by \begin{equation} \label{qualitysol} gap_{root} = \frac{UB_{root} - LB_{root}}{LB_{root}}, \end{equation} where $UB_{root}$ is the upper bound from the binary program and $LB_{root}$ is the lower bound from the linear programming relaxation. In the cases where $LB_{root}$ is zero, we use the absolute gap $UB_{root}-LB_{root}$, which we mark with the additional notation ``(a)" after the numerical value. In the computational experiments section, we use the abbreviations UB and LB for the upper bound from the binary program and the lower bound from the linear programming relaxation, respectively.
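The reporting convention above (relative gap when $LB_{root}$ is nonzero, absolute gap flagged ``(a)" otherwise) can be written as a small helper; this is a sketch of the convention, not code from the paper.

```python
# Quality measure (\ref{qualitysol}) with the absolute-gap fallback.

def root_gap(ub_root, lb_root, tol=1e-9):
    """Return (gap, is_absolute): the relative gap when lb_root is nonzero,
    otherwise the absolute gap flagged by is_absolute=True ("(a)")."""
    if abs(lb_root) > tol:
        return (ub_root - lb_root) / lb_root, False
    return ub_root - lb_root, True
```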
\subsection{Branching on the assignment decisions} In many cases, although we can obtain a feasible solution from the binary program, the quality of that solution is poor. Thus, we propose a branching rule on the assignment decisions to obtain better solutions. Formally, let $z^*$ be an optimal solution to the linear programming relaxation of the master problem (\ref{colGenObj}) - (\ref{bound}). The corresponding assignment decision for each flight $i$ is given by \begin{equation} \label{optimalX} y_{ik}^{*} : = \sum_{p \in P_k} \delta_{ip}^k z_p^{k*}, \quad \forall i \in \mathcal{F}, \; k \in \mathcal{G}. \end{equation}
Consider a fractional $z^*$. Since $\delta_{ip}^k$ is either $0$ or $1$, there may exist $0 < y_{ik}^* < 1$ for a particular $i$ and $k$. When such a fractional assignment decision occurs, we denote the corresponding $i$ and $k$ values by $\hat{i}$ and $\hat{k}$. We present a branching scheme on this assignment decision adapted from \cite{Ryan}: we force $y_{\hat{i}\hat{k}} = 1$ on the left branch and $y_{\hat{i}\hat{k}} = 0$ on the right branch by adding valid inequalities. In particular, flight $\hat{i}$ can be forced to use gate $\hat{k}$ by the following inequality, \begin{equation} \sum_{p \in P_{\hat{k}}} (1 - \delta_{\hat{i}p}^{\hat{k}}) z_p^{\hat{k}}+ \sum_{k \in \mathcal{G} \backslash \hat{k}} \sum_{p \in P_k} \delta_{\hat{i}p}^k z_p^k \le 0. \end{equation}
On the other branch, flight $\hat{i}$ is forced not to use gate $\hat{k}$ by the following inequality, \begin{equation} \sum_{p \in P_{\hat{k}}} \delta_{\hat{i}p}^{\hat{k}} z_p^{\hat{k}}\le 0. \end{equation}
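The branching candidate $(\hat{i},\hat{k})$ is found by evaluating (\ref{optimalX}) on the LP solution. A minimal sketch under an assumed data layout (\texttt{z\_star[k]} maps pattern index $p$ to $z_p^{k*}$, and \texttt{cover[k][p]} is the set of flights that pattern $p$ at gate $k$ accepts, i.e.\ the flights with $\delta_{ip}^k = 1$):

```python
# Locate a fractional assignment decision y*_{ik} from an LP solution z*.

def fractional_assignment(z_star, cover, flights, gates, eps=1e-6):
    """Return the first pair (i, k) with 0 < y*_{ik} < 1, or None if all
    assignment decisions are already integral."""
    for i in flights:
        for k in gates:
            y_ik = sum(z for p, z in z_star[k].items() if i in cover[k][p])
            if eps < y_ik < 1 - eps:
                return i, k
    return None
```

In practice one might prefer the most fractional pair ($y^*_{ik}$ closest to $1/2$) rather than the first one found; the sketch keeps the simplest rule.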
Note that after the branching constraints are added, the objective functions in the pricing problems have to be updated to incorporate the dual information of these constraints. By updating the dual variables of constraints (\ref{coverConstraint}) and (\ref{availConstraint}), we can keep the objective functions in the same format. Let the dual variable of the branching constraint be $\lambda_{\hat{i}\hat{k}}$. On the left branch, if $k = \hat{k}$, the new objective function is \begin{align} \min \; & \sum_{i \in \mathcal{F}} (t_i^{g}-a_i) - \sum_{i \in \mathcal{F}} x_{ik}\pi_i - \mu_k - (1 - x_{\hat{i} k}) \lambda_{\hat{i}\hat{k}} \notag \\ \Leftrightarrow \min \; & \sum_{i \in \mathcal{F}} (t_i^{g}-a_i) - \left(\sum_{i \in \mathcal{F} \backslash \hat{i}} x_{ik}\pi_i + x_{\hat{i} k} (\pi_{\hat{i}} + \lambda_{\hat{i}\hat{k}})\right) - (\mu_k + \lambda_{\hat{i}\hat{k}}). \end{align}
When $k \neq \hat{k}$, the new objective function becomes \begin{align} \label{newobjectbranching} \min \; & \sum_{i \in \mathcal{F}} (t_i^{g}-a_i) - \sum_{i \in \mathcal{F}} x_{ik}\pi_i - \mu_k - x_{\hat{i} k} \lambda_{\hat{i}\hat{k}} \notag \\ \Leftrightarrow \min \; & \sum_{i \in \mathcal{F}} (t_i^{g}-a_i) - \left(\sum_{i \in \mathcal{F} \backslash \hat{i}} x_{ik}\pi_i + x_{\hat{i} k}(\pi_{\hat{i}} + \lambda_{\hat{i}\hat{k}}) \right)- \mu_k. \end{align}
Now on the right branches, if $k = \hat{k}$, we obtain the same objective function as shown in (\ref{newobjectbranching}). When $k \neq \hat{k}$, the objective function remains unchanged.
We want to point out that the dynamic programming and approximation algorithms have to be adapted at a node where branching constraints have been added; otherwise the assignment patterns that are generated can violate the branching constraints. In particular, it is easy to modify the pricing problem methods for the right branches: if flight $\hat{i}$ is forced not to use gate $\hat{k}$, we can simply remove flight $\hat{i}$ from the subset of flights $\mathcal{F}'$ during the pre-processing. It is more difficult on the left branches, where we force flight $\hat{i}$ to use gate $\hat{k}$. Since flight $\hat{i}$ can be accepted at any time in the interval $[a_{\hat{i}}, a_{\hat{i}} + \pi_{\hat{i}}]$, we first need to determine the optimal time to accept flight $\hat{i}$, after which the rest of the flights in the set $\mathcal{F}'$ can be optimally assigned. This is difficult to implement because the time in $[a_{\hat{i}}, a_{\hat{i}} + \pi_{\hat{i}}]$ takes continuous values. To address this issue, we restrict the branching options in the dynamic program based on assignment decisions and then solve the affected pricing problems to optimality using a standard solver. While solving pricing problems directly can be time consuming for larger instances, we observed in the experiments that branching is often not needed for those instances, so this modification does not impact the performance. On the other hand, for the smaller instances where branching is needed, using a solver does not slow down the whole procedure significantly and we can still obtain favorable results.
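The easy right-branch adaptation described above amounts to a one-line filter in the pre-processing; a sketch with illustrative names (each branching decision $y_{ik} = 0$ is stored as a pair \texttt{(i, k)}):

```python
# On a right branch (y_{i,k} forced to 0) the pricing algorithms stay
# unchanged; we only drop forbidden flights from the candidate set F'.

def restrict_candidates(candidates, forbidden_pairs, gate):
    """Remove every flight i with a branching constraint y_{i,gate} = 0."""
    banned = {i for i, k in forbidden_pairs if k == gate}
    return [i for i in candidates if i not in banned]
```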
\section{Computational experiments} \label{numericalexp} \subsection{Instance generation and initialization} We test the proposed methods on randomly generated instances and on instances derived from real world data. The detailed information about each instance is reported in Table \ref{instanceSize}, and a summary of the real world operational data is presented in Table \ref{ATLinfo}. For the randomly generated instances, we generate the inter-arrival time of two consecutive flights from a uniform distribution such that each gate has an arriving flight every $75$ to $180$ minutes on average, depending on the size of the instance. We use minimum turn time data from a major U.S. airline. Furthermore, we generate for each gate a type, which determines whether it can accommodate heavy aircraft, a set of eligible airlines, and a buffer time. The buffer times are set to be identical in our experiments for simplicity, but they can be adjusted if needed. The compatibility data $\alpha_{ik}$, used in constraints (\ref{MIPC6}) and (\ref{compatibility}) as well as in the pre-processing step of the pricing problem algorithms, can be computed from the aircraft type, the gate type, and the set of eligible airlines.
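An illustrative generator in the spirit of the synthetic instances described above. The field names, the calibration of the uniform inter-arrival distribution, and the specific distributions for turn times and gate types are assumptions for the sketch, not the paper's exact data.

```python
import random

def generate_instance(n_flights, n_gates, per_gate_gap=120.0, seed=0):
    """Synthetic instance sketch: uniform system-wide inter-arrival times
    chosen so each gate sees one arrival every per_gate_gap minutes on
    average (mean system gap = per-gate mean / number of gates)."""
    rng = random.Random(seed)
    hi = 2.0 * per_gate_gap / n_gates   # Uniform(0, 2*mean) has that mean
    t, flights = 0.0, []
    for i in range(n_flights):
        t += rng.uniform(0.0, hi)
        flights.append({"id": i, "a": t,
                        "turn": rng.uniform(30.0, 90.0),  # min. turn time
                        "heavy": rng.random() < 0.2})     # heavy aircraft
    gates = [{"id": k, "heavy_ok": rng.random() < 0.5, "buffer": 10}
             for k in range(n_gates)]
    return flights, gates
```

From such data, the compatibility indicator $\alpha_{ik}$ would be derived by checking, e.g., the heavy-aircraft flag of flight $i$ against the gate type of $k$.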
For the real world instances, we use arrivals from multiple days in 2019, before the COVID-19 pandemic, at Hartsfield-Jackson Atlanta International Airport (ATL), the busiest airport in the world by passenger traffic. The data are obtained from the U.S. Bureau of Transportation Statistics website. Note that for instances that do not use arrivals from the whole $24$-hour period, we report the time interval from which the arrivals are drawn. As it is difficult to obtain precise information about the gates, we generate the gates in a similar way as described above, but with an additional consideration: since a very large percentage of arrivals into ATL are Delta Air Lines (DAL) flights and DAL operates many gates at ATL, a large number of the generated gates allow DAL flights.
Once we have the set of arrivals and the set of gates for an instance, we initialize the instance. For each test instance, the set of feasible patterns $P_k$ for each gate $k$ has to be initialized to contain at least one pattern in order to satisfy the availability constraint (\ref{availConstraint}) in the master problem. Moreover, the union of the $P_k$ has to satisfy the covering constraint (\ref{coverConstraint}). Since we start with empty $P_k$, we provide each gate with a feasible assignment pattern by randomly assigning each flight to a compatible gate, which yields an overall feasible assignment. Consequently, at the beginning of the proposed procedure there is an assignment pattern for each gate. We note that although this is a simple initialization procedure, we do not expect a more complex initialization procedure to yield significant improvements. In addition, our termination criterion is a relative gap of $2\%$ or an absolute gap of $0.5$ (in case the optimal objective of the root node LP relaxation is $0$).
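The simple initialization just described can be sketched as follows, assuming \texttt{alpha[i][k]} is the compatibility indicator $\alpha_{ik}$; the data layout is illustrative.

```python
import random

def initialize_patterns(flights, gates, alpha, seed=0):
    """Assign every flight to one randomly chosen compatible gate, so each
    gate starts with exactly one pattern and the union of the patterns
    covers all flights (satisfying the covering constraint)."""
    rng = random.Random(seed)
    patterns = {k: set() for k in gates}   # one initial pattern per gate
    for i in flights:
        compatible = [k for k in gates if alpha[i][k]]
        patterns[rng.choice(compatible)].add(i)
    return {k: [p] for k, p in patterns.items()}
```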
\begin{longtable}{cccccc} \caption{Instance information.\label{instanceSize}}\\ \hline no. (name) & size & source & no. of flights, gates & flight/gate & inter-arr(min.)\\ \hline\hline 1 (f30g5s) & small & synthetic & 30 flights, 5 gates & $6.00$ & $12.07$\\ 2 (f30g10s) & small & synthetic & 30 flights, 10 gates & $3.00$ & $9.66$\\ 3 (f50g10s) & small & synthetic & 50 flights, 10 gates & $5.00$ & $8.53$\\ 4 (f50f20s) & small & synthetic & 50 flights, 20 gates & $2.50$ & $4.37$\\ 5 (f100g35s) & small & synthetic & 100 flights, 35 gates & $2.86$ & $2.72$\\ 6 (f100g50s) & small & synthetic & 100 flights, 50 gates & $2.00$ & $1.75$\\\hline\hline 7 (f150g50s1) & moderate & synthetic & 150 flights, 50 gates & $3.00$ & $2.01$\\ 8 (f150g50s2) & moderate & synthetic & 150 flights, 50 gates & $3.00$ & $2.30$\\ 9 (f150g50s3) & moderate & synthetic & 150 flights, 50 gates & $3.00$ & $2.10$\\\hline 10 (f200g100a1) & moderate & 10:00-13:12, 08/23/19 & 200 flights, 100 gates & $2.00$ & $0.96$\\ 11 (f200g100a2) & moderate & 13:00-15:51, 08/23/19 & 200 flights, 100 gates & $2.00$ & $0.86$\\\hline 12 (f300g150s1) & moderate & synthetic & 300 flights, 150 gates & $2.00$ & $0.51$\\ 13 (f300g150s2) & moderate & synthetic & 300 flights, 150 gates & $2.00$ & $0.49$\\ 14 (f300g150s3) & moderate & synthetic & 300 flights, 150 gates & $2.00$ & $0.48$\\\hline 15 (f300g150a1) & moderate & 10:00-14:31, 08/23/19 & 300 flights, 150 gates & $2.00$ & $0.90$\\ 16 (f300g150a2) & moderate & 13:00-17:47, 08/23/19 & 300 flights, 150 gates & $2.00$ & $0.95$\\\hline\hline 17 (f800g200s1) & large & synthetic & 800 flights, 200 gates & $4.00$ & $0.77$\\ 18 (f800g200s2) & large & synthetic & 800 flights, 200 gates & $4.00$ & $0.73$\\ 19 (f800g200s3) & large & synthetic & 800 flights, 200 gates & $4.00$ & $0.75$\\\hline 20 (f800g200a1) & large & 10:00-23:13, 08/19/19 & 800 flights, 200 gates & $4.00$ & $0.99$\\ 21 (f800g200a2) & large & 10:00-23:13, 08/20/19 & 800 flights, 200 gates & $4.00$ & $0.99$\\ 22 
(f800g200a3) & large & 10:00-23:41, 08/21/19 & 800 flights, 200 gates & $4.00$ & $1.02$\\ 23 (f800g200a4) & large & 10:00-22:36, 08/22/19 & 800 flights, 200 gates & $4.00$ & $0.95$\\ 24 (f800g200a5) & large & 10:00-22:22, 08/23/19 & 800 flights, 200 gates & $4.00$ & $0.93$\\\hline 25 (f1000g200s1) & large & synthetic & 1000 flights, 200 gates & $5.00$ & $0.93$\\ 26 (f1000g200s2) & large & synthetic & 1000 flights, 200 gates & $5.00$ & $0.89$\\ 27 (f1000g200s3) & large & synthetic & 1000 flights, 200 gates & $5.00$ & $0.89$\\\hline 28 (f1000g200a1) & large & 6:00-21:50, 08/19/19 & 1000 flights, 200 gates & $5.00$ & $0.95$\\ 29 (f1000g200a2) & large & 6:00-21:41, 08/20/19 & 1000 flights, 200 gates & $5.00$ & $0.94$\\ 30 (f1000g200a3) & large & 6:00-21:37, 08/21/19 & 1000 flights, 200 gates & $5.00$ & $0.94$\\ 31 (f1000g200a4) & large & 6:00-21:21, 08/22/19 & 1000 flights, 200 gates & $5.00$ & $0.92$\\ 32 (f1000g200a5) & large & 6:00-21:21, 08/23/19 & 1000 flights, 200 gates & $5.00$ & $0.92$\\ \hline 33 (f1113g192a19) & large & 08/19/19 & 1113 flights, 192 gates & $5.80$ & $1.28$\\ 34 (f1095g192a20) & large & 08/20/19 & 1095 flights, 192 gates & $5.70$ & $1.30$\\ 35 (f1092g192a21) & large & 08/21/19 & 1092 flights, 192 gates & $5.69$ & $1.31$\\ 36 (f1125g192a22) & large & 08/22/19 & 1125 flights, 192 gates & $5.86$ & $1.28$\\ 37 (f1125g192a23) & large & 08/23/19 & 1125 flights, 192 gates & $5.86$ & $1.27$\\\hline \end{longtable}
\begin{longtable}{ccc} \caption{Summary of arrivals at ATL.\label{ATLinfo}}\\ \hline date & no. of gates & no. of arrivals \\ \hline\hline Aug 19, 2019 & $192$ & $1113$ ($687$ Delta, $39$ American, $8$ United, $379$ Other)\\ Aug 20, 2019 & $192$ & $1095$ ($681$ Delta, $32$ American, $9$ United, $373$ Other)\\ Aug 21, 2019 & $192$ & $1092$ ($675$ Delta, $35$ American, $10$ United, $372$ Other)\\ Aug 22, 2019 & $192$ & $1125$ ($684$ Delta, $40$ American, $11$ United, $390$ Other)\\ Aug 23, 2019 & $192$ & $1125$ ($684$ Delta, $39$ American, $11$ United, $391$ Other)\\\hline \end{longtable} \subsection{Software and hardware} The pricing problem algorithms are implemented in Python, and Gurobi (version 9.1) is used whenever a commercial solver is needed. The computations are performed on a Unix system with a $12$-core CPU and $16$GB of RAM. We also set a time limit of $8$ hours for each instance. \subsection{Computational results} For a complete comparison, the performance of the compact mixed integer programming (MIP) formulation (\ref{MIPO1}) - (\ref{MIPcontin}) is evaluated on the small-sized instances and shown in Table \ref{Mip}. The formulation is solved with the Gurobi solver. As noted before, the MIP formulation is not ideal for this problem, as evidenced in Table \ref{Mip}.\\
\begin{longtable}{ccccc} \caption{Performances of the MIP formulation.\label{Mip}}\\ \hline method & instance no. & time(sec.) & incumbent obj. & best bound\\ \hline\hline \multirow{6}{5em}{\centering{MIP}} & 1 (f30g5s) & 339.78 & 1203.26 & 1203.26\\ & 2 (f30g10s) & 6948.33 & 258.53 & 258.53\\ & 3 (f50g10s) & 5347.46 & 74.73 & 74.73\\ & 4 (f50f20s) & incomplete & 18.64 & 0.00\\ & 5 (f100g35s) & incomplete & 1.66 & 0.00\\ & 6 (f100g50s) & incomplete & 1.14 & 0.00\\\hline \end{longtable} For the following results, recall that we obtain a feasible solution at the root node by reinstating the binary requirements and measure the quality of the solution by (\ref{qualitysol}). In addition, we use the following notation to report results in tables: \begin{itemize}
\item ct: total computation time in seconds;
\item rt: computation time spent on the root node in seconds;
\item node(s): number of nodes in the branch-and-bound tree searched to obtain the reported solution, where ``1" represents the root node. \end{itemize} Additionally, we use the notations ``$LB_{root}$", ``$UB_{root}$", and ``$gap_{root}$" introduced previously; these values are obtained at the root node. \subsubsection{Small-sized instances.} It is important to note that we can combine the pricing problem algorithms presented in Section \ref{solvingpricing} in our implementation. For the small-sized instances, we test the following combinations: \begin{enumerate}
\item Gurobi solver (S)
\item Dynamic programming algorithm (DP)
\item $70$ iterations of submodular maximization followed by the dynamic programming algorithm (SM+DP)
\item $25$ iterations of approximative dynamic programming algorithm followed by the dynamic programming algorithm (ADP+DP). \end{enumerate}
We run a fixed number of iterations of both SM and ADP across different instances as discussed above, and no attempt has been made to tune these parameters. For a particular class of instances, tuning these parameters may provide even better results.
In the solver option (Option 1 above), we set the Gurobi PoolSearchMode parameter to $2$, request the solver to provide up to $75$ feasible solutions, and add as many of them as possible to the master problem. The updated solver parameters do not deteriorate the performance of the solver, as we observe that the solver very rarely performs extra computations to fill the solution pool after the optimal solution is found. These extra feasible solutions are likely assignment patterns with favorable reduced costs and can potentially reduce the number of times the pricing problems are solved.
The detailed computational results for the small-sized instances are shown in Table \ref{pricingCombinations}. Our proposed approaches out-perform the Gurobi solver (S) in all six instances, even with the dynamic programming algorithm alone (DP). While the approximation algorithms are designed to provide additional improvements relative to the dynamic programming algorithm, they do not offer any substantial boost on these small instances; on the contrary, the dynamic programming algorithm alone achieves better computation times. In addition, the binary program solutions obtained with different methods differ in some of the instances, likely because the three methods, namely DP, SM, and ADP, generate different patterns. In general, an instance becomes more difficult to solve as its size increases. We also observe that the flight-to-gate ratio has an impact on the computation time. In particular, although instance $1$ is smaller in size than instances $2$ and $4$, it takes longer to solve than either of them. The flight-to-gate ratio determines how congested the gates are: the higher the ratio, the larger the number of flights each gate has to accept on average, and instances with a higher ratio require more delicate assignments. Furthermore, in contrast to the worst-case theoretical analysis, the dynamic programming algorithm performs well. This is likely because the inter-arrival times are much shorter than the processing times, so many recursive calls to the evaluation oracle have input $t$ beyond the corresponding flights' acceptance windows. \\ \begin{longtable}{ccccccccc} \caption{Pricing problem methods on small-sized instances.\label{pricingCombinations}}\\ \hline instance & methods & ct(sec.) & rt(sec.) & $LB_{root}$ & $UB_{root}$ & $gap_{root}$($\%$) & final obj. 
& node(s) \\ \hline \hline \multirow{4}{6em}{\centering{1\\(f30g5s)}} & S & 130.71 & 130.71 & 1203.26 & 1203.26 & 0.00 & 1203.26 & 1\\ & DP & 68.83 & 68.83 & 1203.26 & 1203.26 & 0.00 & 1203.26 & 1\\ & SM+DP & 48.46 & 27.52 & 1203.26 & 1448.98 & 20.4 & 1203.26 & 3\\ & ADP+DP & 33.64 & 33.64 & 1203.26 & 1203.26 & 0.00 & 1203.26 & 1\\\hline \multirow{4}{6em}{\centering{2\\(f30g10s)}} & S & 26.52 & 26.52 & 258.53 & 258.53 & 0.00 & 258.53 & 1\\ & DP & 2.15 & 2.15 & 258.53 & 258.53 & 0.00 & 258.53 & 1\\ & SM+DP & 0.63 & 0.63 & 258.53 & 258.53 & 0.00 & 258.53 & 1\\ & ADP+DP & 3.52 & 3.52 & 258.53 & 258.53 & 0.00 & 258.53 & 1\\\hline \multirow{4}{6em}{\centering{3\\(f50g10s)}} & S & 91.85 & 91.85 & 74.73 & 74.73 & 0.00 & 74.73 & 1\\ & DP & 39.86 & 25.52 & 74.73 & 77.84 & 4.00 & 74.73 & 5\\ & SM+DP & 38.60 & 26.73 & 74.73 & 80.79 & 7.50 & 74.73 & 5\\ & ADP+DP & 30.79 & 30.79 & 74.73 & 74.73 & 0.00 & 74.73 & 1\\\hline \multirow{4}{6em}{\centering{4\\(f50f20s)}} & S & 47.63 & 47.63 & 18.64 & 18.64 & 0.00 & 18.64 & 1\\ & DP & 1.72 & 1.72 & 18.64 & 18.64 & 0.00 & 18.64 & 1\\ & SM+DP & 1.44 & 1.44 & 18.64 & 18.64 & 0.00 & 18.64 & 1\\ & ADP+DP & 10.07 & 10.07 & 18.64 & 18.64 & 0.00 & 18.64 & 1\\\hline \multirow{4}{6em}{\centering{5\\(f100g35s)}} & S & 1495.42 & 1495.42 & 1.66 & 1.66 & 0.00 & 1.66 & 1\\ & DP & 189.61 & 37.92 & 1.66 & 1.88 & 11.70 & 1.66 & 26\\ & SM+DP & 20.67 & 20.67 & 1.66 & 1.66 & 0.00 & 1.66 & 1\\ & ADP+DP & 46.89 & 46.89 & 1.66 & 1.66 & 0.00 & 1.66 & 1\\\hline \multirow{4}{6em}{\centering{6\\(f100g50s)}} & S & 1893.26 & 1893.26 & 1.14 & 1.14 & 0.00 & 1.14 & 1\\ & DP & 17.89 & 17.89 & 1.14 & 1.14 & 0.00 & 1.14 & 1\\ & SM+DP & 10.06 & 10.06 & 1.14 & 1.14 & 0.00 & 1.14 & 1\\ & ADP+DP & 29.67 & 29.67 & 1.14 & 1.14 & 0.00 & 1.14 & 1\\\hline \end{longtable}
\subsubsection{Moderate-sized instances.} Next we show the computational results for the moderate-sized instances. Since the solver option is not as effective as the other options on the small-sized instances, only the DP, SM+DP, and ADP+DP options are considered in this set of experiments. As pointed out previously, we run $70$ iterations of SM and $25$ iterations of ADP across all instances.
The results are shown in Table \ref{moderateInstance}. We see the same trend of longer times for instances of larger size. Since the flight-to-gate ratios are small for these moderate-sized instances, we do not see any obvious impact of the ratio. The boost in performance from the submodular maximization approximation algorithm is much more obvious in the moderate-sized instances, and in some cases the approximative dynamic programming algorithm also improves the computation time. The running time of either approximation algorithm increases linearly with the size of the set of flights: given a larger set of flights, each iteration of an approximation algorithm takes much less time than each iteration of the dynamic programming algorithm. Nonetheless, we observe that the submodular maximization out-performs the approximative dynamic programming in all instances, as the assignment patterns generated by the submodular maximization usually have better total net benefits than those generated by the approximative dynamic programming. Moreover, we observe small variations in the times taken across instances of the same size, which suggests that besides the sizes of the set of flights and the set of gates, other factors associated with the flights and the gates can complicate the problem. In addition, even though some of the synthetic and real world instances have the same numbers of flights and gates, their performances are very different; this is likely because the arrivals in the real world instances are much more complex than those in the synthetic instances.\\
\begin{longtable}{ccccccccc} \caption{Pricing problem methods on moderate-sized instances.\label{moderateInstance}}\\ \hline instance & methods & ct(sec.) & rt(sec.) & $LB_{root}$ & $UB_{root}$ & $gap_{root}$($\%$) & final obj. & node(s) \\ \hline \hline \multirow{3}{6em}{\centering{7\\(f150g50s1)}} & DP & 770.74 & 479.53 & 0.0 & 0.69 & 0.69(a) & 0.0 & 82\\ & SM+DP & 346.58 & 346.58 & 0.0 & 0.0 & 0.00(a) & 0.0 & 1\\ & ADP+DP & 985.58 & 985.58 & 0.0 & 0.22 & 0.22(a) & 0.22 & 1\\\hline \multirow{3}{6em}{\centering{8\\(f150g50s2)}} & DP & 2108.45 & 1866.29 & 0.90 & 1.11 & 23.3 & 0.9 & 23\\ & SM+DP & 486.85 & 156.04 & 0.90 & 5.32 & 491 & 0.9 & 50\\ & ADP+DP & 1034.31 & 631.49 & 0.9 & 1.90 & 52.6 & 0.9 & 50\\\hline \multirow{3}{6em}{\centering{9\\(f150g50s3)}} & DP & 1185.55 & 1185.55 & 10.74 & 10.74 & 0.00 & 10.74 & 1\\ & SM+DP & 439.70 & 439.70 & 10.74 & 10.74 & 0.00 & 10.74 & 1\\ & ADP+DP & 498.51 & 498.51 & 10.74 & 10.74 & 0.0 & 10.74 & 1\\\hline \multirow{3}{6em}{\centering{10\\(f200g100a1)}} & DP & 5002.04 & 5002.04 & 0.0 & 0.0 & 0.00(a) & 0.0 & 1\\ & SM+DP & 929.45 & 929.45 & 0.0 & 0.0 & 0.00(a) & 0.0 & 1\\ & ADP+DP & 4689.39 & 4689.39 & 0.0 & 0.0 & 0.00(a) & 0.0 & 1\\\hline \multirow{3}{6em}{\centering{11\\(f200g100a2)}} & DP & 2883.41 & 2883.41 & 0.0 & 0.0 & 0.00(a) & 0.0 & 1\\ & SM+DP & 570.75 & 570.75 & 0.0 & 0.0 & 0.00(a) & 0.0 & 1\\ & ADP+DP & 3168.33 & 3168.33 & 0.0 & 0.0 & 0.00(a) & 0.0 & 1\\\hline \multirow{3}{6em}{\centering{12\\(f300g150s1)}} & DP & 8918.69 & 8918.69 & 349.10 & 349.10 & 0.00 & 349.10 & 1\\ & SM+DP & 2833.93 & 2833.93 & 349.10 & 349.10 & 0.00 & 349.10 & 1\\ & ADP+DP & 6891.40 & 6891.40 & 349.10 & 349.10 & 0.00 & 349.10 & 1 \\\hline \multirow{3}{6em}{\centering{13\\(f300g150s2)}} & DP & 7927.31 & 7927.31 & 203.00 & 203.00 & 0.00 & 203.00 & 1\\ & SM+DP & 3111.97 & 3111.97 & 203.10 & 203.10 & 0.00 & 203.10 & 1\\ & ADP+DP & 6284.43 & 6284.43 & 203.10 & 203.10 & 0.00 & 203.10 & 1\\\hline \multirow{3}{6em}{\centering{14\\(f300g150s3)}} & DP & 
7830.18 & 7830.18 & 414.12 & 414.12 & 0.00 & 414.12 & 1\\ & SM+DP & 4570.69 & 4570.69 & 414.12 & 414.12 & 0.00 & 414.12 & 1\\ & ADP+DP & 8734.09 & 8734.09 & 414.12 & 414.12 & 0.00 & 414.12 & 1\\\hline \multirow{3}{6em}{\centering{15\\(f300g150a1)}} & DP & 25132.32 & 25132.32 & 0.0 & 0.0 & 0.00(a) & 0.0 & 1\\ & SM+DP & 14032.56 & 14032.56 & 0.0 & 0.0 & 0.00(a) & 0.0 & 1\\ & ADP+DP & 23449.23 & 23449.23 & 0.0 & 0.0 & 0.00(a) & 0.0 & 1\\\hline \multirow{3}{6em}{\centering{16\\(f300g150a2)}} & DP & 15249.75 & 15249.75 & 0.0 & 0.0 & 0.00(a) & 0.0 & 1\\ & SM+DP & 13052.81 & 13052.81 & 0.0 & 0.0 & 0.00(a) & 0.0 & 1\\ & ADP+DP & 19874.84 & 19874.84 & 0.0 & 0.0 & 0.00(a) & 0.0 & 1\\\hline \end{longtable}
\subsubsection{Large-sized instances.} Following the experiments on the moderate-sized instances, we move on to the large-sized instances. As discussed in Subsection \ref{largesizedinstance}, we can utilize the block decomposition approximation and the rolling horizon framework to tackle the large-sized instances. The horizon size and window size are important parameters in the rolling horizon framework, so we first perform parametric studies to understand how they affect the performance, varying both on random large-sized instances. We observe that for any window size, smaller horizon sizes result in better performance. However, a very small horizon size does not further reduce the time taken, as the quality of the assignment patterns generated under small horizon sizes is poor and more iterations are needed to reach optimality.
For the large-sized instances, we again have several ways to solve the pricing problems, because we can use either a fixed or a dynamically determined horizon size for the rolling horizon method and also utilize the approximation algorithms. Here is a detailed breakdown: \begin{enumerate}
\item Rolling horizon method (horizon size: $\overline{\sigma}$, window size: $1$) (RHD)
\item Rolling horizon method (horizon size: $20$, window size: $1$) (RHF)
\item Rolling horizon method (horizon size: $\min(20,\overline{\sigma})$, window size: $1$) (RHM)
\item Submodular maximization if $\overline{\sigma} > 60$ in the first 25 iterations and rolling horizon method otherwise (horizon size: $\min(20,\overline{\sigma})$, window size: $1$) (SM+RHM)
\item $30$ iterations of block decomposition approximation followed by the rolling horizon method (horizon size: $\overline{\sigma}$, window size: $1$) (BD+RHD) \end{enumerate} where $\overline{\sigma}$ is the adjacency parameter that depends on the values of the dual variables, as described in Subsection \ref{largeapprox}. Furthermore, submodular maximization generally provides assignment patterns of good quality and runs in time linear in the size of the set of flights, so it remains very effective for these large-sized instances; it can potentially improve the performance and serves as a benchmark for the block decomposition approximation. Note that whenever an algorithm is needed to evaluate an optimal assignment during the rolling horizon process or the block decomposition approximation, the direct implementation of the dynamic programming algorithm (Algorithm \ref{DPpricing}) is used. Again, for these large-sized instances we use the same parameters across different instances; there can be potential improvements if the parameters are tuned for individual cases. Nonetheless, we present results with a fixed set of parameters, which still confirm the strong performance of our proposed approaches.
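The rolling horizon loop underlying options RHD, RHF, and RHM can be sketched as follows. This is an illustration only: \texttt{solve\_horizon} stands in for the dynamic programming oracle applied to the restricted flight set, and the commit-then-slide structure is the assumed mechanics of the framework.

```python
# Rolling horizon sketch with horizon size h (flights optimized jointly)
# and window size w (decisions committed per step); w = 1 in the options.

def rolling_horizon(flights, h, w, solve_horizon):
    """flights sorted by arrival; optimize the next h flights, commit the
    decisions for the first w of them, then slide the window forward."""
    fixed, pos = [], 0
    while pos < len(flights):
        decisions = solve_horizon(flights[pos:pos + h], fixed)
        fixed.extend(decisions[:w])   # commit only the leading window
        pos += w
    return fixed
```

A fixed cap such as $h = \min(20, \overline{\sigma})$ (option RHM) simply bounds the subproblem each call of \texttt{solve\_horizon} sees.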
The results of the computations are shown in Table \ref{largeInstance}. After extensive experiments, we observe that the adjacency parameter $\overline{\sigma}$ can be very large at the beginning; consequently, the RHD and BD+RHD options that involve decompositions based on this parameter become very ineffective and are unable to obtain a good solution within the allocated time limit. Therefore, we do not report the results of these two options in Table \ref{largeInstance}. From the results in the table, we see that the rolling horizon method with a fixed horizon size performs much better than the same method with a horizon size of $\overline{\sigma}$, which struggles to solve these large-sized instances. If we further cap the horizon size at $20$ in the RHM option, we obtain comparable, if not slightly better, performance in all instances compared to the rolling horizon method with a fixed horizon size. However, the submodular maximization algorithm efficiently provides assignment patterns of good quality when $\overline{\sigma}$ is large and improves the performance compared to both the RHF and RHM options in all instances. The benefit of using the submodular maximization algorithm is especially significant in the cases derived from ATL arrivals, where the reduction in computation time can be as large as about $50\%$. For instances $33$-$37$, we show that our proposed approach can solve the flight-to-gate problem with arrivals from a single day at the busiest airport in the world within a very reasonable amount of time.\\
\begin{longtable}{cccccccccc} \caption{Pricing problem methods on large-sized instances.\label{largeInstance}}\\ \hline instance & methods & ct(sec.) & rt(sec.) & $LB_{root}$ & $UB_{root}$ & $gap_{root}$($\%$) & final obj. & node(s)\\ \hline \hline \multirow{3}{6em}{\centering{17\\(f800g200s1)}} & RHF & 1160.33 & 1160.33 & 0.0 & 0.0 & 0.00(a) & 0.0 & 1\\ & RHM & 1147.62 & 1147.62 & 0.0 & 0.0 & 0.00(a) & 0.0 & 1\\ & SM+RHM & 1067.63 & 1067.63 & 0.0 & 0.0 & 0.00(a) & 0.0 & 1\\\hline \multirow{3}{6em}{\centering{18\\(f800g200s2)}} & RHF & 1166.7 & 1166.7 & 0.0 & 0.0 & 0.00(a) & 0.0 & 1\\ & RHM & 1076.56 & 1076.56 & 0.0 & 0.0 & 0.00(a) & 0.0 & 1\\ & SM+RHM & 1010.11 & 1010.11 & 0.0 & 0.0 & 0.00(a) & 0.0 & 1\\\hline \multirow{3}{6em}{\centering{19\\(f800g200s3)}} & RHF & 1015.80 & 1015.80 & 0.0 & 0.0 & 0.00(a) & 0.0 & 1\\ & RHM & 899.28 & 899.28 & 0.0 & 0.0 & 0.00(a) & 0.0 & 1\\ & SM+RHM & 975.88 & 975.88 & 0.0 & 0.0 & 0.00(a) & 0.0 & 1\\\hline \multirow{3}{6em}{\centering{20\\(f800g200a1)}} & RHF & 2313.15 & 2313.15 & 0.0 & 0.0 & 0.00(a) & 0.0 & 1\\ & RHM & 2381.8 & 2381.8 & 0.0 & 0.0 & 0.00(a) & 0.0 & 1\\ & SM+RHM & 1570.65 & 1570.65 & 0.0 & 0.0 & 0.00(a) & 0.0 & 1\\\hline \multirow{3}{6em}{\centering{21\\(f800g200a2)}} & RHF & 2033.91 & 2033.91 & 0.0 & 0.0 & 0.00(a) & 0.0 & 1\\ & RHM & 2138.42 & 2138.42 & 0.0 & 0.0 & 0.00(a) & 0.0 & 1\\ & SM+RHM & 1708.62 & 1708.62 & 0.0 & 0.0 & 0.00(a) & 0.0 & 1\\\hline \multirow{3}{6em}{\centering{22\\(f800g200a3)}} & RHF & 2658.21 & 2658.21 & 0.0 & 0.0 & 0.00(a) & 0.0 & 1\\ & RHM & 2673.93 & 2673.93 & 0.0 & 0.0 & 0.00(a) & 0.0 & 1\\ & SM+RHM & 1714.31 & 1714.31 & 0.0 & 0.0 & 0.00(a) & 0.0 & 1\\\hline \multirow{3}{6em}{\centering{23\\(f800g200a4)}} & RHF & 2457.84 & 2457.84 & 0.0 & 0.0 & 0.00(a) & 0.0 & 1\\ & RHM & 2444.43 & 2444.43 & 0.0 & 0.0 & 0.00(a) & 0.0 & 1\\ & SM+RHM & 1769.04 & 1769.04 & 0.0 & 0.0 & 0.00(a) & 0.0 & 1\\\hline \multirow{3}{6em}{\centering{24\\(f800g200a5)}} & RHF & 30932.55 & 3093.55 & 0.0 & 0.0 & 0.00(a) & 
0.0 & 1\\ & RHM & 2989.79 & 2989.79 & 0.0 & 0.0 & 0.00(a) & 0.0 & 1\\ & SM+RHM & 1818.73 & 1818.73 & 0.0 & 0.25 & 0.25(a) & 0.25 & 1\\\hline \multirow{3}{6em}{\centering{25\\(f1000g200s1)}} & RHF & 1675.46 & 1675.46 & 0.0 & 0.0 & 0.00(a) & 0.0 & 1\\ & RHM & 1540.65 & 1540.65 & 0.0 & 0.0 & 0.00(a) & 0.0 & 1\\ & SM+RHM & 1440.66 & 1440.66 & 0.0 & 0.0 & 0.00(a) & 0.0 & 1\\\hline \multirow{3}{6em}{\centering{26\\(f1000g200s2)}} & RHF & 2286.41 & 2286.41 & 0.0 & 0.0 & 0.00(a) & 0.0 & 1\\ & RHM & 1959.05 & 1959.05 & 0.0 & 0.0 & 0.00(a) & 0.0 & 1\\ & SM+RHM & 1843.98 & 1843.98 & 0.0 & 0.0 & 0.00(a) & 0.0 & 1\\\hline \multirow{3}{6em}{\centering{27\\(f1000g200s3)}} & RHF & 1720.46 & 1720.46 & 0.0 & 0.0 & 0.00(a) & 0.0 & 1\\ & RHM & 1585.0 & 1585.0 & 0.0 & 0.0 & 0.00(a) & 0.0 & 1 \\ & SM+RHM & 1580.16 & 1580.16 & 0.0 & 0.0 & 0.00(a) & 0.0 & 1\\\hline \multirow{3}{6em}{\centering{28\\(f1000g200a1)}} & RHF & 5858.02 & 5858.02 & 0.0 & 0.0 & 0.00(a) & 0.0 & 1\\ & RHM & 6354.51 & 6354.51 & 0.0 & 0.0 & 0.00(a) & 0.0 & 1\\ & SM+RHM & 3407.24 & 3407.24 & 0.0 & 0.0 & 0.00(a) & 0.0 & 1\\\hline \multirow{3}{6em}{\centering{29\\(f1000g200a2)}} & RHF & 7270.22 & 7270.22 & 0.0 & 0.0 & 0.00(a) & 0.0 & 1\\ & RHM & 7342.51 & 7342.51 & 0.0 & 0.0 & 0.00(a) & 0.0 & 1\\ & SM+RHM & 3206.97 & 3206.97 & 0.0 & 0.0 & 0.00(a) & 0.0 & 1\\\hline \multirow{3}{6em}{\centering{30\\(f1000g200a3)}} & RHF & 4922.02 & 4922.02 & 0.0 & 0.0 & 0.00(a) & 0.0 & 1\\ & RHM & 4844.84 & 4844.84 & 0.0 & 0.0 & 0.00(a) & 0.0 & 1\\ & SM+RHM & 3701.92 & 3701.92 & 0.0 & 0.0 & 0.00(a) & 0.0 & 1\\\hline \multirow{3}{6em}{\centering{31\\(f1000g200a4)}} & RHF & 5897.09 & 5897.09 & 0.0 & 0.0 & 0.00(a) & 0.0 & 1\\ & RHM & 5715.54 & 5715.54 & 0.0 & 0.0 & 0.00(a) & 0.0 & 1\\ & SM+RHM & 3093.73 & 3093.73 & 0.0 & 0.0 & 0.00(a) & 0.0 & 1\\\hline \multirow{3}{6em}{\centering{32\\(f1000g200a5)}} & RHF & 4913.89 & 4913.89 & 0.0 & 0.0 & 0.00(a) & 0.0 & 1\\ & RHM & 4654.29 & 4654.29 & 0.0 & 0.0 & 0.00(a) & 0.0 & 1\\ & SM+RHM & 2966.66 & 
2966.66 & 0.0 & 0.0 & 0.00(a) & 0.0 & 1\\\hline \multirow{3}{6em}{\centering{33\\(f1113g192a19)}} & RHF & 6305.36 & 6305.36 & 0.0 & 0.0 & 0.00(a) & 0.0 & 1 \\ & RHM & 6177.28 & 6177.28 & 0.0 & 0.0 & 0.00(a) & 0.0 & 1\\ & SM+RHM & 3865.39 & 3865.39 & 0.0 & 0.0 & 0.00(a) & 0.0 & 1\\\hline \multirow{3}{6em}{\centering{34\\(f1095g192a20)}} & RHF & 5763.05 & 5763.05 & 0.0 & 0.0 & 0.00(a) & 0.0 & 1\\ & RHM & 6808.54 & 6808.54 & 0.0 & 0.0 & 0.00(a) & 0.0 & 1\\ & SM+RHM & 3420.51 & 3420.51 & 0.0 & 0.0 & 0.00(a) & 0.0 & 1\\\hline \multirow{3}{6em}{\centering{35\\(f1092g192a21)}} & RHF & 7156.45 & 7156.45 & 0.0 & 0.0 & 0.00(a) & 0.0 & 1\\ & RHM & 8568.96 & 8568.96 & 0.0 & 0.0 & 0.00(a) & 0.0 & 1\\ & SM+RHM & 3603.5 & 3603.5 & 0.0 & 0.0 & 0.00(a) & 0.0 & 1\\\hline \multirow{3}{6em}{\centering{36\\(f1125g192a22)}} & RHF & 8384.49 & 8384.49 & 0.0 & 0.0 & 0.00(a) & 0.0 & 1\\ & RHM & 9347.75 & 9347.75 & 0.0 & 0.0 & 0.00(a) & 0.0 & 1\\ & SM+RHM & 4279.68 & 4279.68 & 0.0 & 0.0 & 0.00(a) & 0.0 & 1\\\hline \multirow{3}{6em}{\centering{37\\(f1125g192a23)}} & RHF & 7784.41 & 7784.41 & 0.0 & 0.0 & 0.00(a) & 0.0 & 1\\ & RHM & 7727.33 & 7727.33 & 0.0 & 0.0 & 0.00(a) & 0.0 & 1\\ & SM+RHM & 4325.52 & 4325.52 & 0.0 & 0.0 & 0.00(a) & 0.0 & 1\\\hline \end{longtable}
\subsection{Summary of the computational results} In summary, the dynamic programming algorithm combined with the submodular maximization appears to offer the best performance on the small- and moderate-sized problems. For the large-sized instances, running the submodular maximization algorithm when $\overline{\sigma}$ is large, and otherwise using the rolling horizon method with horizon sizes of at most $20$, appears to lead to the best performance. Moreover, the binary program produces feasible solutions of good quality for almost all instances, as the gaps are usually very small. The amount of time taken to obtain these feasible solutions is also reasonable in practice.
Finally, note that all instance data can be accessed online at the authors' websites.
\section{Concluding remarks} \label{conclusions} We use a column generation approach to solve the flight-to-gate assignment problem, aiming to minimize the total arrival delays. Instead of using an integer programming solver for the pricing problem, we propose more efficient approximation and dynamic programming algorithms. Among the proposed algorithms, the submodular maximization algorithm shows very strong performance on the small- and medium-sized instances and, together with the rolling horizon framework, allows us to obtain solutions of very good quality very efficiently for the large-sized instances.
There are a few possible extensions to consider. One is to model the interference between different gates. Depending on the airport layout, this can be realized by linking constraints in the master problem or by decomposing the pricing problem by gate groups instead of by individual gates. Another extension is to take the uncertainty in the arrival times and turn times into consideration. A stochastic programming approach may be suitable for this version of the problem.
\appendix \section{Appendix A.} This proof is given in \cite{subMax}; we modify it to match our assumptions.\\ Let $OPT$ denote an optimal solution and $OPT_i := (OPT \cup X_i) \cap Y_i$; then $OPT_0 = OPT$ and $OPT_n = X_n = Y_n$. We first state two useful lemmas to be used later in the proof of the theorem. \begin{lemma}\label{submodularApproxLemma1} For every $1 \le i \le n$, $a_i + b_i \ge 0$. \end{lemma} \proof{Proof of Lemma \ref{submodularApproxLemma1}.} Note that if $f$ is a submodular function with ground set $\mathcal{F}'$, then for all $A,B \subseteq \mathcal{F}'$ we have $f(A) + f(B) \ge f(A \cup B) + f(A \cap B)$.\\ Now, observe that $(X_{i-1} \cup \{u_i\}) \cup (Y_{i-1} \backslash \{u_i\}) = Y_{i-1}$ and $(X_{i-1} \cup \{u_i\}) \cap (Y_{i-1} \backslash \{u_i\}) = X_{i-1}$. Then by submodularity, we have that \begin{align} a_i + b_i & = [f(X_{i-1} \cup \{u_i\}) - f(X_{i-1})] + [f(Y_{i-1} \backslash \{u_i\}) - f(Y_{i-1})]\\ & = [f(X_{i-1} \cup \{u_i\}) + f(Y_{i-1} \backslash \{u_i\})] - [f(X_{i-1}) + f(Y_{i-1})] \ge 0. \end{align} \endproof
\begin{lemma}\label{submodularApproxLemma2} For every $1 \le i \le n$, \begin{align} \mathbb{E} [f(OPT_{i-1}) - f(OPT_i)] \le \frac{1}{2} \mathbb{E} [f(X_i) - f(X_{i-1}) + f(Y_i) - f(Y_{i-1})] \label{submodularMainExp} \end{align} \end{lemma} \proof{Proof of Lemma \ref{submodularApproxLemma2}.} It is sufficient to prove (\ref{submodularMainExp}) conditioned on any event of the form $X_{i-1} = S_{i-1}$, where $S_{i-1} \subseteq \{u_1, \cdots, u_{i-1}\}$ and the event $X_{i-1} = S_{i-1}$ has non-zero probability. Hence, fix such an event for a given $S_{i-1}$; the rest of the proof implicitly conditions on this event. Due to the conditioning, the following quantities become constants: \begin{itemize}
\item $Y_{i-1} = S_{i-1} \cup \{u_i, \cdots, u_n\}$
\item $OPT_{i-1} := (OPT \cup X_{i-1}) \cap Y_{i-1} = S_{i-1} \cup (OPT \cap \{u_i, \cdots, u_n\})$
\item $a_i$ and $b_i$. \end{itemize} From Lemma \ref{submodularApproxLemma1}, we have $a_i + b_i \ge 0$, so we only need to consider three cases. \begin{enumerate}
\item $a_i \ge 0$ and $b_i \le 0$. In this case, $a_i^{\prime} / (a_i^{\prime} + b_i^{\prime}) = 1$. Then we have $Y_i = Y_{i-1}$ and $X_i = S_{i-1} \cup \{u_i\}$. Hence $f(Y_{i-1}) - f(Y_i) = 0$. Also, $OPT_i = (OPT \cup X_i) \cap Y_i = (OPT \cup X_{i-1} \cup \{u_i\}) \cap Y_{i-1} = OPT_{i-1} \cup \{u_i\}$. Then we are left to prove that
\begin{equation}
f(OPT_{i-1}) - f(OPT_{i-1} \cup \{u_i\}) \le \frac{1}{2}[f(X_i) - f(X_{i-1})] = \frac{a_i}{2}.
\end{equation}
If $u_i \in OPT_{i-1}$, then the left hand side is zero and this inequality is satisfied. If $u_i \notin OPT_{i-1}$, then we note that
\begin{equation}
OPT_{i-1} = (OPT \cup X_{i-1}) \cap Y_{i-1} \subseteq Y_{i-1} \backslash \{u_i\}.
\end{equation}
Next, by the definition of submodularity of $f$, we have now
\begin{equation}
f(OPT_{i-1}) - f(OPT_{i-1} \cup \{u_i\}) \le f(Y_{i-1} \backslash \{u_i\}) - f(Y_{i-1}) = b_i \le 0 \le \frac{a_i}{2}.
\end{equation}
\item $a_i < 0$ and $b_i \ge 0$. This case is analogous to the previous case.
\item $a_i \ge 0$ and $b_i > 0$. In this case, we have $a_i^{\prime} = a_i$ and $b_i^{\prime} = b_i$. Then we can compute the expectation on the right hand side of (\ref{submodularMainExp}) by
\begin{align}
\mathbb{E} [f(X_i) - f(X_{i-1}) + f(Y_i) - f(Y_{i-1})] & = \frac{a_i}{a_i + b_i} [f(X_{i-1} \cup \{u_i\}) - f(X_{i-1})] \notag \\
& + \frac{b_i}{a_i + b_i} [f(Y_{i-1} \backslash \{u_i\}) - f(Y_{i-1})]\\
& = \frac{a_i^2 + b_i^2}{a_i + b_i} \label{a_i>0b_i>0_1}.
\end{align}
Next we upper bound $\mathbb{E} [f(OPT_{i-1}) - f(OPT_i)]$,
\begin{align}
\mathbb{E} [f(OPT_{i-1}) - f(OPT_i)] & = \frac{a_i}{a_i + b_i} [f(OPT_{i-1}) - f(OPT_{i-1} \cup \{u_i\})] \notag \\
& + \frac{b_i}{a_i + b_i} [f(OPT_{i-1}) - f(OPT_{i-1} \backslash \{u_i\})] \\
& \le \frac{a_ib_i}{a_i + b_i} \label{a_i>0b_i>0_2}.
\end{align}
The last inequality follows from the following two cases; note that $u_i \in Y_{i-1}$ and $u_i \notin X_{i-1}$.
\begin{enumerate}
\item $u_i \notin OPT_{i-1}$: then the second term on the left hand side of the last inequality is zero, and
\begin{equation}
OPT_{i-1} = (OPT \cup X_{i-1}) \cap Y_{i-1} \subseteq Y_{i-1} \backslash \{u_i\}.
\end{equation}
Next, by the definition of submodularity of $f$, we have now
\begin{equation}
f(OPT_{i-1}) - f(OPT_{i-1} \cup \{u_i\}) \le f(Y_{i-1} \backslash \{u_i\}) - f(Y_{i-1}) = b_i.
\end{equation}
\item $u_i \in OPT_{i-1}$: then the first term on the left hand side of the last inequality is zero, and
\begin{equation}
X_{i-1} \subseteq (OPT \cup X_{i-1}) \cap Y_{i-1} \backslash \{u_i\} = OPT_{i-1} \backslash \{u_i\}.
\end{equation}
Next, by the definition of submodularity of $f$, we have now
\begin{equation}
f(OPT_{i-1}) - f(OPT_{i-1} \backslash \{u_i\}) \le f(X_{i-1} \cup \{u_i\}) - f(X_{i-1}) = a_i.
\end{equation}
\end{enumerate}
Then with (\ref{a_i>0b_i>0_1}) and (\ref{a_i>0b_i>0_2}), we are left to verify
\begin{equation}
\frac{a_ib_i}{a_i + b_i} \le \frac{1}{2} \left(\frac{a_i^2 + b_i^2}{a_i + b_i} \right),
\end{equation}
which follows from $2a_ib_i \le a_i^2 + b_i^2$, i.e., from $(a_i - b_i)^2 \ge 0$. \end{enumerate} \endproof
With these two lemmas, we are ready to show the approximation guarantee of the algorithm. \proof{Proof of Theorem \ref{subguarantee}.} If we sum the inequalities from Lemma \ref{submodularApproxLemma2} over $1 \le i \le n$, we have that \begin{align} & \sum_{i=1}^n \mathbb{E} [f(OPT_{i-1}) - f(OPT_i)] \le \frac{1}{2} \sum_{i=1}^n \mathbb{E} [f(X_i) - f(X_{i-1}) + f(Y_i) - f(Y_{i-1})]\\ \implies & \mathbb{E} [f(OPT_0) - f(OPT_n)] \le \frac{1}{2} \mathbb{E} [f(X_n) - f(X_0) + f(Y_n) - f(Y_0)] .\label{constantApproxExpr} \end{align} Note that $X_0 = \emptyset$, so we have $f(X_0) = 0$, and by our assumption we have $f(Y_0) = f(\mathcal{F}') \ge 0$. In addition, we have $OPT_0 = OPT$ and $OPT_n = X_n = Y_n$, so (\ref{constantApproxExpr}) gives that \begin{align*} f(OPT) \le \frac{1}{2} \mathbb{E} [2f(X_n) - f(Y_0)] + \mathbb{E} f(X_n) \le \frac{1}{2} \mathbb{E} [2f(X_n)] + \mathbb{E} f(X_n) \implies \frac{1}{2} f(OPT) \le \mathbb{E} f(X_n). \end{align*} \endproof
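The randomized double-greedy scheme analyzed above can be sketched in a few lines. The following is our own illustration (not code from the paper), using the cut function of a small graph as the non-negative submodular objective; the inequality of Lemma \ref{submodularApproxLemma1} is checked at every step.

```python
import random

def cut_value(adj, S):
    """Weight of the cut between S and its complement; for a symmetric
    adjacency matrix each crossing edge is counted exactly once."""
    n = len(adj)
    return sum(adj[u][v] for u in S for v in range(n) if v not in S)

def double_greedy(f, ground, rng):
    """Randomized double greedy: a 1/2-approximation in expectation
    for unconstrained non-negative submodular maximization."""
    X, Y = set(), set(ground)
    for u in ground:
        a = f(X | {u}) - f(X)      # a_i: marginal gain of adding u to X
        b = f(Y - {u}) - f(Y)      # b_i: marginal gain of removing u from Y
        assert a + b >= -1e-9      # Lemma 1: a_i + b_i >= 0
        ap, bp = max(a, 0.0), max(b, 0.0)
        p = 1.0 if ap + bp == 0 else ap / (ap + bp)
        if rng.random() < p:
            X.add(u)               # X_i = X_{i-1} ∪ {u_i}, Y_i = Y_{i-1}
        else:
            Y.discard(u)           # Y_i = Y_{i-1} \ {u_i}, X_i = X_{i-1}
    return X                       # X_n = Y_n
```

On a path graph, averaging the returned value over repeated runs stays above half the brute-force optimum, in line with the guarantee of Theorem \ref{subguarantee}.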
\section{Appendix B.} \subsection*{Rational input data for dynamic programming algorithm.} \label{intDynamicAlgo} Recall that a direct implementation of the dynamic programming algorithm terminates after at most $2^n$ recursive calls in the worst case, as proven in Proposition \ref{finitealgorithm}. We can improve the complexity significantly if we further assume all input data are rational and run a backward version of the dynamic programming algorithm. The input data include all input parameters of the pricing problems, namely the arrival times $a_i$, the dual information $\pi_i$, and the processing times $p_i$. Without loss of generality, we assume the input data are expressed as rationals with a common denominator. We introduce the notion of the last time to consider for the pricing problems, denoted by $c$: the time beyond which no further flights are accepted. It suffices to choose $c$ as $a_n + \max_{1 \le i \le n} \pi_i$. For any flight $j$, if we consider the function $g_{j}(t)$ with $t > c$, we get that $t > a_l + \pi_l$ for $j \le l \le n$. Consequently, the function values $g_j(t) = g_l(t) = g_{n+1}(t) = 0$ and $x_{lk} = 0$ for $j \le l \le n$.
For every $i \in [n]$, the function $g_i(t)$ is piecewise linear, and we call the points where its slope changes the breakpoints of the function. When we have rational input data, each $g_i(t)$ does not have too many breakpoints. The following theorem gives a formal statement of this observation. \begin{theorem}\label{breakpoints} Assume all input data are rational and $c$ is the last time to consider. Let the common denominator be $d$. Then each function $g_i(t)$ for $i \in [n]$ has at most $O(n^2dc)$ breakpoints. \end{theorem} To prove Theorem \ref{breakpoints}, we first consider the possible slopes of a particular function $g_{n-i}(t)$. \begin{lemma} \label{fslope} Given a function $g_{n-i}(t)$, its slope can only take values from the set $\{0,-1,\cdots, -i-1\}$. \end{lemma} \begin{proof}[Proof of Lemma \ref{fslope}.] We prove this lemma by backward induction. The function $g_n(t)$ is given by \begin{align} g_n(t) = \begin{cases}\pi_n \;\; & \text{ if }t \le a_n\\ a_n + \pi_n - t\;\; & \text{ if }a_n < t \le a_n + \pi_n \\ 0\;\; & \text{ if }t > a_n + \pi_n. \end{cases} \end{align}
The set of slopes of the function $g_n(t)$ is $\{0,-1\}$; this is the base case. Suppose the statement is true for $n$, $n-1$, $\cdots, n-i$, so that the set of possible slopes of $g_{n-i}(t)$ is $\{0,-1,\ldots,-i-1\}$. Now consider the function $g_{n-i-1}(t)$; based on the recursive formula (\ref{recureq}), \begin{align}\label{fslopeeq} g_{n-i-1}(t) = \begin{cases} g_{n-i-1}(a_{n-i-1}) & \text{if } t \le a_{n-i-1}\\ g_{n-i}(t) & \text{if } t > a_{n-i-1} + \pi_{n-i-1}\\ \max\{a_{n-i-1} + \pi_{n-i-1} - t + g_{n-i}(t + p_{n-i-1}), g_{n-i}(t)\} & \text{if }a_{n-i-1} < t \le a_{n-i-1} + \pi_{n-i-1}. \end{cases} \end{align}
In the second case, the set of possible slopes of $g_{n-i-1}(t)$ is the same as that of $g_{n-i}(t)$. In the third case, $g_{n-i-1}(t)$ can have one additional slope beyond those of $g_{n-i}$: if $g_{n-i}(t+p_{n-i-1})$ has a slope of $-i-1$, the slope of $g_{n-i-1}(t)$ can be $-i-2$. Therefore, the set of possible slopes of $g_{n-i-1}$ is $\{0,-1,\ldots, -i-2\}$, so the statement is also true for $n-i-1$, which completes the proof. \end{proof}
Intuitively, the slope of the function corresponds to the rate of loss in the total net benefits if we delay the park times of flights. For the flight $n-i$, it can only potentially affect the park times of itself and flights after it, i.e. from the set $\{n-i, n-i+1, \ldots, n\}$ and the corresponding function $g_{n-i}(t)$ can have slopes from the set $\{0,-1,\ldots,-i-1\}$.
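To make the recursion concrete, here is a small memoized sketch of the backward evaluation of the functions $g_j(t)$. It reflects our reading of the recursion displayed in (\ref{fslopeeq}), with flights sorted by arrival time; the name \texttt{make\_g} and the integer test data are illustrative, not taken from the paper.

```python
from functools import lru_cache

def make_g(a, pi, p):
    """Memoized sketch of the backward recursion for the net-benefit
    functions g_j(t), assuming flights are indexed in arrival order.
    At time t, either accept flight j (gaining a_j + pi_j minus its start
    time) or skip it and pass the gate to flight j+1."""
    n = len(a)

    @lru_cache(maxsize=None)
    def g(j, t):
        if j == n:
            return 0                       # g_{n+1} ≡ 0: no flights remain
        t = max(t, a[j])                   # g_j is constant for t <= a_j
        if t > a[j] + pi[j]:
            return g(j + 1, t)             # flight j can no longer yield a benefit
        accept = a[j] + pi[j] - t + g(j + 1, t + p[j])
        return max(accept, g(j + 1, t))    # accept flight j, or skip it

    return g
```

With rational data, the arguments can be scaled to integers (or handled via `fractions.Fraction`) so that the evaluation points match the breakpoint grid of Theorem \ref{breakpoints}.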
Now we are ready to give a proof of Theorem \ref{breakpoints}.
\begin{proof}[Proof of Theorem \ref{breakpoints}.] Consider a function $g_{n-i}(t)$. From the recursive formula (\ref{recureq}), we observe that all potential breakpoints can be computed by equating the two components in the max expression; the solution in $t$ may constitute a breakpoint of $g_{n-i}$. As a result, we consider the following linear equation \begin{equation} \label{breakpointseq} (a_{n-i} + \pi_{n-i} - t) + g_{n-i+1}(t + p_{n-i}) = g_{n-i+1}(t). \end{equation}
The functions $g_{n-i+1}(t + p_{n-i})$ and $g_{n-i+1}(t)$ are piecewise linear in the variable $t$. On each linear piece we can write \begin{equation} g_{n-i+1}(t + p_{n-i}) = c_1 - k_1t \text{ and } g_{n-i+1}(t) = c_2 - k_2t, \end{equation} where $c_1$ and $c_2$ are rational constants and $k_1$ and $k_2$ are integer constants. We only consider the cases where $k_1 + 1 \neq k_2$; otherwise, the functions on the left hand side and right hand side of (\ref{breakpointseq}) have the same slope and do not produce a breakpoint. Now equation (\ref{breakpointseq}) can be rewritten as \begin{equation}
a_{n-i} + \pi_{n-i} - t + c_1 - k_1t = c_2 - k_2t \implies t = \frac{|a_{n-i} + \pi_{n-i} +c_1 - c_2|}{|k_1 + 1 - k_2|}. \end{equation}
As shown in Lemma \ref{fslope}, $k_1$ and $k_2$ only take values from the set $\{0,-1,\ldots,-i\}$, and thus $|k_1 + 1 - k_2|$ can only take values from the set $\{1,\ldots,i+1\}$. In addition, since the input data are rationals with a common denominator $d$, the numerator $a_{n-i} + \pi_{n-i} + c_1 - c_2$ can be expressed as $C/d$, where $C$ is an integer. Then all the breakpoints of the function $g_{n-i}(t)$ are in the set \begin{align} \label{bpform} \{0,e,2e,3e,\ldots,c\} \text{ where }e \in \{1/d,1/(2d),1/(3d),\ldots,1/((i+1)d)\}. \end{align}
The total number of breakpoints is then at most $1dc + 2dc + 3dc + \cdots + (i+1)dc = O(n^2dc)$, which completes the proof. \end{proof}
Note that since each of the functions $g_i(t)$ has at most $O(n^2dc)$ breakpoints, we only need to evaluate the function at these potential breakpoints to construct it; consequently, we need $O(n^3dc)$ time to construct all functions $g_i(t)$ on the interval $[0,c]$. In particular, only the function values at the points in the list of potential breakpoints are needed, since the functions are all piecewise linear. We first construct the function $g_n(t)$ on the interval $[0,c]$ by evaluating it at the points in the list of potential breakpoints given in (\ref{bpform}) in reverse order, and then we construct each function $g_{j}(t)$ for $1 \le j \le n-1$ using the previously constructed functions $g_l(t)$ for $j+1 \le l \le n$ based on the recursive formula (\ref{recureq}).
Although the above analysis yields a simpler complexity bound in the case of rational input data, it is not very effective in practice, as the common denominator $d$ can be large after scaling the input data.
\section{Appendix C.} \subsection*{Rolling horizon method.} \label{rollinghorizon} The rolling horizon framework can also be utilized to decompose the pricing problems and reduce their sizes. As with any implementation of the rolling horizon method, we must choose two parameters: the horizon size and the window size. The horizon size can be either fixed or dynamically determined for each gate and each iteration. The adjacency parameter given in Definition \ref{adjacencyparameter} is an example of a dynamically determined horizon size. We give a brief description of the rolling horizon method that we have implemented in our experiments in Algorithm \ref{rollinghorizonalgo}. \begin{algorithm}[H] \caption{Rolling horizon} \label{rollinghorizonalgo} \begin{algorithmic}[1] \State \textbf{Input:} horizon size, $l$, and window size, $w$ \State Initialize $s = 0$, $t= 0$, and $S = \emptyset$ \While{$s + l \le n$} \State Solve for the optimal assignment for the flights $\{s+1,s+2,\ldots, s+l\}$ at time $t$ \State Fix the assignments of the first $w$ flights and add them to $S$ \State $t \gets \text{time at which the gate becomes available under the set of assigned flights } S$ \State $s \gets s + w$ \EndWhile \State Solve for the optimal assignment for the flights $\{s+1,s+2,\ldots, n\}$ at time $t$ \State Fix the assignments of the flights $\{s+1,s+2,\ldots, n\}$ and add them to $S$ \State Compute the total net benefits of the assignment $S$, $f(S)$ \State \Return{$S$, $f(S)$} \end{algorithmic} \end{algorithm} Similar to the block decomposition approximation algorithm, we refer to the discussions in Subsection \ref{approximation} for the computation of the total net benefits $f(S)$ and of the time at which the gate becomes available under a set of assigned flights.
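The loop of Algorithm \ref{rollinghorizonalgo} can be sketched as follows. The window solver below is a deliberately simple greedy stand-in for the exact single-gate pricing solver, and all function names (\texttt{rolling\_horizon}, \texttt{greedy\_window}, \texttt{gate\_free\_time}) are our own illustrative choices.

```python
def gate_free_time(flights, S):
    """Time at which the gate becomes available after serving the flights in S in order."""
    t = 0
    for j in S:
        a, pi, p = flights[j]       # arrival time, dual value, processing time
        t = max(t, a) + p
    return t

def total_benefit(flights, S):
    """Total net benefit f(S) of serving the accepted flights S in order."""
    t, val = 0, 0
    for j in S:
        a, pi, p = flights[j]
        s = max(t, a)
        val += a + pi - s
        t = s + p
    return val

def greedy_window(flights, idxs, t):
    """Toy stand-in for the exact window solver: accept every flight that
    still yields a non-negative net benefit at the current gate time."""
    chosen = []
    for j in idxs:
        a, pi, p = flights[j]
        s = max(t, a)
        if s <= a + pi:
            chosen.append(j)
            t = s + p
    return chosen

def rolling_horizon(flights, l, w, solve_window):
    """Sketch of the rolling horizon loop: solve an l-flight window,
    commit the decisions for its first w flights, then slide forward by w."""
    n = len(flights)
    s, t, S = 0, 0, []
    while s + l <= n:
        chosen = solve_window(flights, range(s, s + l), t)
        S.extend(j for j in chosen if j < s + w)   # fix the first w flights only
        t = gate_free_time(flights, S)
        s += w
    S.extend(solve_window(flights, range(s, n), t))
    return S, total_benefit(flights, S)
```

Swapping `greedy_window` for an exact solver recovers the behavior of the algorithm as used in the experiments.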
\end{document} |
\begin{document}
\begin{abstract} Recently, some innovative convex optimization concepts were proposed, namely, relative smoothness \cite{Bauschke} and relative strong convexity \cite{Lu_Nesterov,Lu}. These approaches have significantly expanded the class of problems to which gradient-type methods with optimal convergence-rate estimates are applicable. Later, Yu. Nesterov and H. Lu \rrev{\cite{Lu_Nesterov,Lu}} introduced some modifications of the Mirror Descent method for convex minimization problems with the corresponding analogue of the Lipschitz condition (the so-called relative Lipschitz continuity). In this paper, we cover both the concept of relative smoothness and that of relative Lipschitz continuity, and introduce some adaptive and universal methods which have optimal convergence-rate estimates for the corresponding class of problems. We consider the relative boundedness condition for the variational inequality problem and propose some adaptive optimal methods for this class of problems. Some results of the conducted numerical experiments are presented, which demonstrate the effectiveness of the proposed methods. \end{abstract}
\maketitle
\section{Introduction}
The recent dramatic growth of various branches of science has led to the necessity of developing numerical optimization methods in spaces of large and extra-large dimensions. A special place in modern optimization theory is given to gradient methods. Recently, a new research direction was introduced, associated with the development of gradient-type methods for optimization problems with relatively smooth \cite{Bauschke} and relatively strongly convex \cite{Lu_Nesterov} functions. Such methods are in high demand due to numerous theoretical and applied problems. For example, the D-optimal design problem turned out to be relatively smooth \cite{Lu_Nesterov}. It is also quite interesting that in recent years there have appeared applications of these approaches (conditions of relative smoothness and strong convexity) to auxiliary problems of tensor methods for convex minimization problems of the second and higher orders \cite{Nest_tens,Nest_core}. It is worth noting that tensor methods make it possible to obtain optimal estimates of the rate of convergence of high-order methods for convex optimization problems \cite{kamzolov2022exploiting}.
A few years ago there was introduced a generalization of the Lipschitz condition for nonsmooth problems, namely, relative Lipschitz continuity \cite{Lu,Nestconf}. The concept of relative Lipschitz continuity essentially generalizes the classical Lipschitz condition and covers quite important applied problems, including the problem of finding the common point of ellipsoids (IEP), as well as the support vector machine (SVM) for the binary classification problem.
The concepts of relative smoothness, relative Lipschitz continuity, and relative strong convexity made it possible to significantly expand the limits of applicability of gradient-type methods while preserving the optimal convergence rate: $O (\frac{1}{\varepsilon^2})$ for relatively Lipschitz problems and $O(\frac{1}{\varepsilon})$ for relatively smooth problems ($\varepsilon$, as usual, denotes the accuracy of the solution \rrev{with respect to the functional residual}). The authors \rev{of} \cite{Dragomir} have shown that, for the class of relatively smooth problems, such an estimate of the convergence rate cannot be improved in the general case.
In this paper we consider the class of $(\alpha, L, \delta)$-relatively smooth objective functions (see Definition \ref{defalphacont}), which covers both the concept of relative smoothness and relative Lipschitz continuity. Let $Q$ be a closed convex subset of some finite-dimensional vector space. For the classical optimization problem \begin{equation}\label{main} \min\limits_{x\in Q}f(x) \end{equation} we propose some analogues of the universal gradient method which automatically adjust to the ``degree of relative smoothness'' of the $(\alpha, L, \delta)$-relatively smooth problem (Sect. \ref{sect_univers}). {\color{black}We also mention that the proposed algorithms are applicable to the problem of minimizing relatively strongly convex functions; see \cite{savchuk2022adaptive} for more details.}
In addition to the classical optimization problem, we consider the problem of solving Minty variational inequality with ($M$-)relatively bounded operator. For a given relatively bounded and monotone operator $g:Q\longrightarrow \mathbb{R}^n$,
we need to find a vector \rrev{$x_* \in Q$}, such that \begin{equation}\label{VI_problem}
\langle g(x),x_* - x\rangle \leqslant 0 \quad \forall x\in Q. \end{equation}
Relative boundedness can be understood as an analogue of relative Lipschitz continuity for variational inequalities. It should be noted that the subgradient of a relatively Lipschitz continuous function satisfies the relative boundedness condition. This fact plays an important role in considering relatively Lipschitz continuous Lagrange saddle point problems and their reduction to corresponding variational inequalities with the relatively bounded operator. Recently, in \cite{Stonyakin_etal} the authors proposed an adaptive version of the Mirror Prox method (extragradient type method) for variational inequalities with a condition similar to relative smoothness. It should be noted that variational inequalities with relatively smooth operators are applicable to the resource sharing problem \cite{Antonakopoulos}. Also, in \cite{Titov_etal} there were introduced some non-adaptive switching subgradient algorithms for convex programming problems with relatively Lipschitz continuous functions. Recently, there was proposed a non-adaptive method for solving variational inequalities with the relatively bounded operator \cite{MOTOR2021}. In this paper, we propose an adaptive algorithm for the corresponding class of problems.
The paper consists of the introduction and {\color{black} 6} main sections. In Sect. \ref{basic} we give some basic notations and definitions. In Sect. \ref{adapt} we consider the Minty variational inequality with a relatively bounded operator and propose an adaptive algorithm for \rrev{solving it}. Sect. \ref{adapt_univers} is devoted to adaptive algorithms for relatively smooth optimization problems. In Sect. \ref{sect_univers} we propose some universal algorithms for minimizing relatively smooth and relatively Lipschitz continuous functions. Sect. \ref{experiments_Alg5} is devoted to the numerical experiments which demonstrate the effectiveness of the proposed methods.
To sum it up, the contributions of the paper can be formulated as follows. \begin{itemize}
\item We consider the variational inequality with the relatively bounded operator and propose some adaptive first-order methods to solve such a class of problems with optimal complexity estimates $O(\frac{1}{\varepsilon^2})$.
\item We introduce adaptive and universal algorithms for minimizing relatively smooth and relatively Lipschitz continuous functions and provide their theoretical justification.
The stopping criteria of the introduced adaptive algorithms are \rrev{simple} (which is especially important in terms of numerical experiments), while the universal algorithms are guaranteed to be applicable to a \rrev{wide} class of problems.
Our approach allows us to minimize the sum of a relatively smooth and a relatively Lipschitz continuous function, even though such a sum \rrev{satisfies neither the relative smoothness condition nor the relative Lipschitz one}. Theoretical estimates of the proposed methods are optimal both for convex relatively Lipschitz minimization problems, $O(\frac{1}{\varepsilon^2})$, and for convex relatively smooth minimization problems, $O(\frac{1}{\varepsilon})$.
\item We provide the numerical experiments for the Intersection of Ellipsoids Problem (IEP) and the Lagrange saddle point problem for the Support Vector Machine (SVM) with inequality-type function constraints. We also compare numerically, for the IEP, one of the proposed algorithms with the AdaMirr algorithm recently proposed in \cite{AdaMirr_2021}. The conducted experiments demonstrate that the proposed algorithms work better than AdaMirr and can work faster in practice than the obtained theoretical estimates suggest. \end{itemize}
\section{Basic definitions and notations}\label{basic}
Let us give some basic definitions and notations concerning Bregman divergence and the prox structure, which will be used throughout the paper.
Let $(E,\|\cdot\|)$ be some normed \rrev{finite-dimensional real vector space} and $E^*$ be its \rrev{dual space} with the norm $$
\|y\|_*=\max\limits_x\{\langle y,x\rangle:\|x\| \leqslant 1\}, $$ where $\langle y,x\rangle$ is the value of the \rrev{linear function} $y$ at $x \in E$. Assume that $Q\subset E$ is a closed convex set (for variational inequalities in Sect. \ref{adapt} we consider a convex compact set $Q\subset E$).
Let $d: Q \longrightarrow \mathbb{R}$ be a distance-generating function (d.g.f) which is continuously differentiable and convex.
For all $x, y\in Q \subset E$, we consider the corresponding Bregman divergence \begin{equation*}\label{Bregman}
V(y, x) = d(y) - d(x) - \langle \nabla d(x), y-x \rangle. \end{equation*}
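For concreteness, the Bregman divergence can be computed directly from this definition. The two distance-generating functions below (squared Euclidean norm and negative entropy) are standard illustrative choices, not ones fixed by the paper.

```python
import numpy as np

def bregman(d, grad_d, y, x):
    """Bregman divergence V(y, x) = d(y) - d(x) - <grad d(x), y - x>."""
    return d(y) - d(x) - np.dot(grad_d(x), y - x)

# Two standard distance-generating functions (illustrative choices):
d_euclid = lambda x: 0.5 * np.dot(x, x)        # gives V(y, x) = ||y - x||^2 / 2
grad_euclid = lambda x: x
d_entropy = lambda x: np.sum(x * np.log(x))    # on probability vectors,
grad_entropy = lambda x: np.log(x) + 1.0       # gives V(y, x) = KL(y || x)
```

For the Euclidean d.g.f. the divergence reduces to half the squared distance, and for the entropy d.g.f. on the simplex it reduces to the Kullback--Leibler divergence; both are non-negative by convexity of $d$.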
Now we introduce the following concept of $(\alpha, L, \delta)$-relative smoothness which covers both the concept of relative smoothness and relative Lipschitz continuity. Further, we denote by $\nabla f$ an arbitrary subgradient of $f$.
\begin{definition}\label{defalphacont} Let us call a convex function $f:Q\longrightarrow \mathbb{R}$ {\it $(\alpha, L, \delta)$-relatively smooth} for some $\alpha \in [0; 1]$, $L>0$ and $\delta>0$, if the following inequalities hold \begin{equation}\label{eqalpha1relsm} f(y) \leqslant f(x) + \langle \nabla f(x), y - x \rangle + LV(y, x) + L\alpha V(x,y) + \delta, \quad \rrev{\forall x, y \in Q,} \end{equation} \begin{equation}\label{eqalpha1relsm1} \alpha\left(\langle \nabla f(x), y - x \rangle + LV(y, x) + \delta\right) \geqslant 0 \;\;\; \forall x, y\in Q \end{equation} for each subgradient $\nabla f(x)$ of $f(x)$. \end{definition}
It is obvious that for $\alpha = 0$, $L>0$, and $\delta = 0$ one gets the well-known relative smoothness condition (often defined as $L$-relative smoothness, see \cite{Bauschke} for $\delta = 0$ and \cite{Stonyakin_etal} for the case of $\delta >0$). For $\alpha = 1$, $L = \frac{2M^2}{\varepsilon}$, and $\delta = \rev{\frac{\varepsilon}{4}} >0 $\rrev{, where $\varepsilon$ is arbitrary,} the inequalities \eqref{eqalpha1relsm} and \eqref{eqalpha1relsm1} follow from the condition of the relative Lipschitz continuity (also known as relative continuity or $M$-relative Lipschitz continuity), proposed recently in \cite{Lu,Nestconf}
\begin{definition}\label{defrelLIP} Convex function $f:Q\longrightarrow \mathbb{R}$ is called {\it $M$-relatively Lipschitz continuous} for some $M>0$, if the following inequality holds \begin{equation*} \langle \nabla f(x), y - x \rangle + M\sqrt{2V(y,x)} \geqslant 0 \quad \forall x, y \in Q. \end{equation*} \end{definition}
Indeed, for each $x, y \in Q$ we have $$\langle \nabla f(x), x - y \rangle \leqslant M\sqrt{2V(y, x)} \leqslant \frac{2M^2 }{\varepsilon} V(y, x) + \frac{\varepsilon}{4}.$$ Further, $$f(y) - f(x) \leqslant \langle \nabla f(y), y - x \rangle \leqslant M\sqrt{2V(x, y)} \leqslant \frac{2M^2 }{\varepsilon} V(x, y) + \frac{\varepsilon}{4}$$ and $$f(y) \leqslant f(x) + \langle \nabla f(x), y - x \rangle + \frac{2M^2 }{\varepsilon} V(y, x) + \frac{2M^2 }{\varepsilon} V(x, y) + \frac{\varepsilon}{2}.$$
So, each relatively Lipschitz continuous function $f$ satisfies \eqref{eqalpha1relsm} for large enough $L > 0$ and $\delta > 0$.
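The middle step of the derivation above is an instance of the AM--GM inequality $2\sqrt{AB} \le A + B$ with $A = \frac{2M^2}{\varepsilon}V$ and $B = \frac{\varepsilon}{4}$, since $2\sqrt{AB} = M\sqrt{2V}$. A quick numerical sanity check:

```python
import numpy as np

def rel_lipschitz_bound(M, eps, V):
    """Right-hand side (2M^2/eps) V + eps/4 dominating M*sqrt(2V):
    AM-GM with A = (2M^2/eps) V and B = eps/4 gives
    M*sqrt(2V) = 2*sqrt(A*B) <= A + B."""
    return (2 * M**2 / eps) * V + eps / 4
```

The bound holds for all $M, \varepsilon > 0$ and $V \geqslant 0$, with equality when $V = \varepsilon^2/(8M^2)$.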
It is worth mentioning that the sum of a relatively smooth function $f_1$ and a relatively Lipschitz continuous convex function $f_2$ satisfies the \mbox{$(\alpha, L, \delta)$-relative} smoothness condition, if $$ f_1(y) \geqslant f_1(x) - rV(y, x) - q \quad \forall x, y \in Q, $$ for some fixed $r, q > 0$ and the corresponding values $\alpha, L, \delta >0$ (this assumption can be understood as limiting the fast growth of $f_1$ \alex{and holds, for example, when a function defined on a bounded set is bounded from below}). In general, such a sum is neither a relatively smooth nor a relatively Lipschitz continuous function.
\rev{Let us note the following important fact, which follows directly from Lemma 3.2 in \cite{Stonyakin_etal} and plays a key role in the first inequality of each of the following proofs. For any operator $g: Q \rightarrow \mathbb{R}^n$, any $z \in Q$, any $\beta \geq 0$, and $$y = \arg\min\limits_{x\in Q}\Big\{\langle g(z), x\rangle + \beta V(x,z)\Big\}, $$ we have \begin{equation}\label{Lemma}
\langle g(z), x \rangle + \beta V(x,z) \geq \langle g(z), y \rangle + \beta V(y,z) + \beta V(x,y) \quad \forall x \in Q. \end{equation}}
\section{Adaptive Method for Variational Inequalities with Relatively Bounded Operators}\label{adapt}
In this section we consider the Minty variational inequality problem \eqref{VI_problem} with relatively bounded \eqref{Rel_Boud} and monotone \eqref{monotone_operator} operator $g$, i.e. \begin{equation}\label{Rel_Boud}
\langle g(x),x-y\rangle \leqslant M\sqrt{2V(y,x)} \quad \forall x,y \in Q, \end{equation} for some $M>0$ and \begin{equation}\label{monotone_operator}
\langle g(y)-g(x),y-x\rangle \geqslant 0 \quad \forall x, y \in Q, \end{equation} where $Q$ is a convex compact set. In order to solve such a class of problems, we propose an adaptive algorithm, listed as Algorithm \ref{adaptive_alg3}, below. \begin{algorithm}[!ht] \caption{Adaptive Algorithm for Variational Inequalities with Relatively Bounded Operators.}\label{adaptive_alg3} \begin{algorithmic}[1]
\REQUIRE $\varepsilon > 0, \rrev{x_{0} \in Q}, L_{0}>0$, $R>0$ s.t. $\max\limits_{x\in Q} V\left(x, x_{0}\right) \leqslant R^{2}, \rrev{k = 0.}$
\STATE Set $ k = k+1, L_{k+1}=\frac{L_k}{2}$.
\STATE Find
\begin{equation}\label{minVI}
x_{k+1} = \arg\min\limits_{x\in Q}\{\langle g(x_k),x \rangle + L_{k+1}V(x,x_{k})\}.
\end{equation}
\STATE \textbf{if}
\begin{equation}\label{condVI}
\frac{\varepsilon}{2} + \langle g(x_k),x_{k+1}-x_k\rangle + L_{k+1} V(x_{k+1},x_k)\geqslant 0,
\end{equation}
\textbf{then } go to the next iteration (item 1).
\STATE \textbf{else}
$$
\text{set }L_{k+1}=2L_{k+1}, \text{ and go to item 2}.
$$
\STATE \textbf{end if}
\STATE Stopping criterion
\begin{equation*}
S_N := \sum\limits_{k=0}^{N-1}\frac{1}{L_{k+1}} \geqslant \frac{2R^2}{\varepsilon}.
\end{equation*}
\ENSURE
$\widehat{x} = \frac{1}{S_N}\sum\limits_{k=0}^{N-1} \frac{x_{k+1}}{L_{k+1}}$. \end{algorithmic} \end{algorithm}
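For concreteness, here is a minimal Python sketch of Algorithm \ref{adaptive_alg3} in the Euclidean setup ($V(x,z)=\frac{1}{2}\|x-z\|_2^2$, $Q$ the unit ball), where the subproblem \eqref{minVI} reduces to a projected step; the test operator $g(x)=x-b$ used below is an illustrative choice, not part of the method.

```python
import numpy as np

def proj_ball(x, radius=1.0):
    """Euclidean projection onto the ball of the given radius centered at 0."""
    nrm = np.linalg.norm(x)
    return x if nrm <= radius else x * (radius / nrm)

def adaptive_vi(g, x0, eps, L0=1.0, R2=0.5, max_iter=100000):
    """Sketch of the adaptive VI method with V(x,z) = 0.5*||x-z||^2 on the
    unit ball, so the subproblem is x_{k+1} = proj_Q(x_k - g(x_k)/L_{k+1})."""
    x, L = x0.copy(), L0
    pts, wts, S = [], [], 0.0
    for _ in range(max_iter):
        L /= 2.0                          # optimistic halving (item 1)
        gx = g(x)
        while True:                       # adaptive search for L_{k+1}
            x_new = proj_ball(x - gx / L)
            V = 0.5 * np.dot(x_new - x, x_new - x)
            if eps / 2 + np.dot(gx, x_new - x) + L * V >= 0:
                break
            L *= 2.0                      # condition failed: double L, redo step
        pts.append(x_new)
        wts.append(1.0 / L)
        S += 1.0 / L
        x = x_new
        if S >= 2 * R2 / eps:             # stopping criterion S_N >= 2R^2/eps
            break
    return sum(w * p for w, p in zip(wts, pts)) / S
```

For $g(x)=x-b$ with $b$ in the interior of the ball, the weighted average $\widehat{x}$ approaches $b$, the solution of the corresponding variational inequality.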
\begin{theorem}\label{the_VI} Let $g: Q\longrightarrow \mathbb {R}^n$ be a relatively bounded and monotone operator, i.e. \eqref{Rel_Boud} and \eqref{monotone_operator} hold, \rrev{$L_0\leqslant \frac{2M^2}{\varepsilon}$}. Then after the stopping of Algorithm \ref{adaptive_alg3}, the following inequality holds $$
\max\limits_{x\in Q}\langle g(x),\widehat{x}-x \rangle \leqslant \frac{1}{S_N}\sum\limits_{k=0}^{N-1}\frac{1}{L_{k+1}}\langle g(x),x_{k}-x \rangle \leqslant \varepsilon. $$ Moreover, the total number of iterations will not exceed $N=\left\lceil\displaystyle\frac{4M^2R^2}{\varepsilon^2}\right\rceil.$ \end{theorem} \begin{proof} The proof is given in Appendix A. \end{proof}
\rrev{ \begin{remark}\label{rem_gap} Let us note that, defining \begin{equation*} \Delta_N:= \frac{1}{S_N} \max _{x \in Q} \sum_{k=0}^{N-1} \frac{1}{L_{k+1}}\left\langle g\left(x_k\right), x_k-x\right\rangle, \end{equation*} one can obtain convergence guarantees for the function residual \begin{equation*} \min _{0 \leqslant k \leqslant N-1} f\left(x_k\right)-f^* \end{equation*} in minimization problems with $g(x) = \nabla f(x)$; the quantity $\Delta_N$ also bounds the primal-dual gap for saddle-point problems. \end{remark} }
Let us consider the following modification of Algorithm \ref{adaptive_alg3} with adaptation to both parameters $L = \frac{M^2}{\varepsilon}$ and $\delta = \frac{\varepsilon}{2}$.
\begin{algorithm}[!ht] \caption{Adaptation to Inexactness for Variational Inequalities with Relatively Bounded Operators.}\label{Alg_50} \begin{algorithmic}[1]
\REQUIRE $\varepsilon > 0, \rrev{x_{0}\in Q}, L_{0}>0, \delta_0 >0$, $R$ s.t. $\max\limits_{x \in Q} V\left(x, x_{0}\right) \leqslant R^{2}, \rrev{k = 0}.$
\STATE Set $k=k+1, L_{k+1}=\frac{L_k}{2}, \delta_{k+1}=\frac{\delta_k}{2}$.
\STATE Find
\begin{equation}\label{subproblem_alg50}
x_{k+1} = \arg\min\limits_{x\in Q}\{\langle g(x_k),x \rangle + L_{k+1}V(x,x_{k})\}.
\end{equation}
\STATE \textbf{if}
\begin{equation}\label{condalg50}
0 \leqslant \langle g\left(x_{k}\right), x_{k+1}-x_{k}\rangle+L_{k+1} V(x_{k+1}, x_{k}) +\delta_{k+1},
\end{equation}
\textbf{then} go to the next iteration (item 1).
\STATE \textbf{else}
$$
\text{set }L_{k+1}=2\cdot L_{k+1}, \delta_{k+1}= 2\cdot\delta_{k+1}\text{ and go to item 2}.
$$
\STATE \textbf{end if} \ENSURE $\widehat{x} = \frac{1}{S_N}\sum\limits_{k=0}^{N-1}\frac{x_{k+1}}{L_{k+1}}$. \end{algorithmic} \end{algorithm}
\begin{theorem}\label{theorem_estimate_alg50} Let $g: Q\longrightarrow \mathbb {R}^n$ be a relatively bounded and monotone operator, i.e. \eqref{Rel_Boud} and \eqref{monotone_operator} hold, \rrev{$L_0\leqslant 2L = \frac{2M^2}{\varepsilon}$}. Then after $N$ steps of Algorithm \ref{Alg_50} the following inequality holds \begin{equation}\label{estimate_alg50}
\max\limits_{x\in Q}\langle g(x),\widehat{x} - x \rangle \leqslant \frac{R^2}{S_N} + \frac{1}{S_N} \sum_{k = 0}^{N-1} \frac{\delta_{k+1}}{L_{k+1}}. \end{equation} \rrev{Moreover, if $L_0\leqslant 2L$ and $\delta_0\leqslant \varepsilon$}, the auxiliary problem \eqref{subproblem_alg50} in Algorithm \ref{Alg_50} is solved no more than \rev{$2N + \log_2\frac{2L}{L_0}$ times.} \end{theorem} \begin{proof} The proof is given in Appendix B. \end{proof}
\rrev{ \begin{remark}
Note that Remark \ref{rem_gap} also applies to Theorem \ref{theorem_estimate_alg50}. \end{remark}}
\begin{remark}\label{remarkoptim} The condition of relative boundedness is essential only for justifying \eqref{condalg50}. For $L_{k+1} \geqslant L = \frac{M^2}{\varepsilon}$ and $\delta_{k+1} \geqslant \frac{\varepsilon}{2}$, \eqref{condalg50} certainly holds. So, \rrev{if $L_0\leqslant C_1 L$}, then for $C = \max\left\{ \rrev{C_1}; \, \frac{\varepsilon}{\delta_0}\right\}$ we have $L_{k+1} \leqslant C L$ and $\delta_{k+1} \leqslant \frac{C\varepsilon}{2}$ for all $k \geqslant0$. Thus, $\max\limits_{x\in Q}\langle g(x),\widehat{x} - x \rangle \leqslant \varepsilon$ after $N = O(\varepsilon^{-2})$ iterations of Algorithm \ref{Alg_50}. This fact, in essence, establishes the optimality of the proposed method for the class of variational inequality problems with monotone $M$-relatively bounded operators.
\end{remark}
\section{Adaptive Algorithms for Relatively Lipschitz Continuous Convex Optimization Problems}\label{adapt_univers}
Now we consider the classical optimization problem \eqref{main} under the assumption of $M$-relative Lipschitz continuity of the objective function $f$. To solve this type of problem, we propose two adaptive algorithms, listed as Algorithm \ref{adaptive_alg4} and Algorithm \ref{Alg_5} below. \begin{algorithm}[!ht] \caption{Adaptive Algorithm for Relatively Lipschitz Continuous Optimization Problems.}\label{adaptive_alg4} \begin{algorithmic}[1]
\REQUIRE $\varepsilon > 0, \rrev{x_{0}\in Q}, L_{0}>0$, $R$ s.t. $V\left(x_{*}, x_{0}\right) \leqslant R^{2}, \rrev{k = 0}.$
\STATE Set $ k = k+1, L_{k+1}=\frac{L_k}{2}$.
\STATE Find
\begin{equation*}\label{subproblem_alg4}
x_{k+1} = \arg\min\limits_{x\in Q}\{\langle\nabla f(x_k),x \rangle + L_{k+1}V(x,x_{k})\}.
\end{equation*}
\STATE \textbf{if}
\begin{equation}\label{condalg4}
0 \leqslant \langle \nabla f\left(x_{k}\right), x_{k+1}-x_{k}\rangle+L_{k+1} V(x_{k+1}, x_{k}) +\frac{\varepsilon}{2},
\end{equation}
\textbf{then} go to the next iteration (item 1).
\STATE \textbf{else}
$$
\text{set }L_{k+1}=2\cdot L_{k+1} \text{ and go to item 2}.
$$
\STATE \textbf{end if}
\STATE \rev{Stopping criterion
\begin{equation*}
S_N = \sum\limits_{k=0}^{N-1}\frac{1}{L_{k+1}} \geqslant \frac{2R^2}{\varepsilon}.
\end{equation*}}
\ENSURE $\widehat{x} = \frac{1}{S_N}\sum\limits_{k=0}^{N-1} \frac{x_{k+1}}{L_{k+1}}$. \end{algorithmic} \end{algorithm}
\begin{theorem}\label{theorem_adaptive_Alg_2} Let $f: Q\longrightarrow \mathbb {R}$ be a convex and $M$-relatively Lipschitz continuous function, i.e. \eqref{eqalpha1relsm} and \eqref{eqalpha1relsm1} take place with $\alpha = 1, \delta = \frac{\varepsilon}{2}, L \rev{\geq} \frac{M^2}{\varepsilon}$. Then after \alex{the stopping of} Algorithm \ref{adaptive_alg4}, the following inequality holds $ f(\widehat{x})-f(x_*)\leqslant \varepsilon$. Moreover, the total number of iterations will not exceed $N=\left\lceil\displaystyle\frac{4M^2R^2}{\varepsilon^2}\right\rceil.$ \end{theorem}
\begin{proof} The proof is given in Appendix C. \end{proof}
\begin{algorithm}[!ht] \caption{Adaptation to Inexactness for Relatively Lipschitz Continuous Optimization Problems.}\label{Alg_5} \begin{algorithmic}[1]
\REQUIRE $\varepsilon > 0, \rrev{x_{0}\in Q}, L_{0}>0, \delta_0 >0$, $R$ s.t. $V\left(x_{*}, x_{0}\right) \leqslant R^{2}, \rrev{k = 0}.$
\STATE Set $k=k+1, L_{k+1}=\frac{L_k}{2}, \delta_{k+1}=\frac{\delta_k}{2}$.
\STATE Find
\begin{equation}\label{subproblem_alg5}
x_{k+1} = \arg\min\limits_{x\in Q}\{\langle \nabla f(x_k),x \rangle + L_{k+1}V(x,x_{k})\}.
\end{equation}
\STATE \textbf{if}
\begin{equation*}\label{condalg5}
0 \leqslant \langle \nabla f\left(x_{k}\right), x_{k+1}-x_{k}\rangle+L_{k+1} V(x_{k+1}, x_{k}) +\delta_{k+1},
\end{equation*}
\textbf{then} go to the next iteration (item 1).
\STATE \textbf{else}
$$
\text{set }L_{k+1}=2\cdot L_{k+1}, \delta_{k+1}= 2\cdot\delta_{k+1}\text{ and go to item 2}.
$$
\STATE \textbf{end if}
\ENSURE $\widehat{x} = \frac{1}{S_N}\sum\limits_{k=0}^{N-1}\frac{x_{k+1}}{L_{k+1}}$. \end{algorithmic} \end{algorithm}
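A minimal Euclidean sketch of Algorithm \ref{Alg_5} follows ($V(x,z)=\frac12\|x-z\|_2^2$ on a ball, so the subproblem \eqref{subproblem_alg5} becomes a projected gradient step); the joint halving and doubling of $(L_{k+1},\delta_{k+1})$ mirrors items 1 and 4 of the listing, and the test function used below is an illustrative choice.

```python
import numpy as np

def alg5_sketch(grad_f, x0, L0=1.0, delta0=0.5, n_steps=200, radius=1.0):
    """Sketch of the adaptive method with joint (L, delta) adaptation under
    the Euclidean prox on a ball: one projected gradient step per iteration."""
    def proj(x):
        nrm = np.linalg.norm(x)
        return x if nrm <= radius else x * (radius / nrm)
    x, L, delta = x0.copy(), L0, delta0
    pts, wts = [], []
    for _ in range(n_steps):
        L, delta = L / 2.0, delta / 2.0       # item 1: halve both parameters
        g = grad_f(x)
        while True:
            x_new = proj(x - g / L)
            d = x_new - x
            if np.dot(g, d) + 0.5 * L * np.dot(d, d) + delta >= 0:
                break
            L, delta = 2.0 * L, 2.0 * delta   # item 4: double both, redo step
        pts.append(x_new)
        wts.append(1.0 / L)
        x = x_new
    S = sum(wts)
    return sum(w * p for w, p in zip(wts, pts)) / S
```

On the quadratic $f(x)=\frac12\|x-b\|_2^2$ the weighted average of the iterates settles near the minimizer $b$ even though individual iterates may oscillate.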
\begin{theorem}\label{theorem_estimate_alg5} Let $f: Q\longrightarrow \mathbb {R}$ be a convex and $M$-relatively Lipschitz continuous function, i.e. \eqref{eqalpha1relsm} and \eqref{eqalpha1relsm1} take place with $\alpha = 1, \delta = \frac{\varepsilon}{2}, L = \frac{M^2}{\varepsilon}$. Then after \alex{$N$ steps of} Algorithm \ref{Alg_5}, the following inequality holds \begin{equation}\label{estimate_alg5}
f(\widehat{x})-f(x_*) \leqslant \frac{R^2}{S_N} + \frac{1}{S_N} \sum_{k = 0}^{N-1} \frac{\delta_{k+1}}{L_{k+1}}. \end{equation} \end{theorem}
\begin{proof} The proof is similar to the proof of Theorem \ref{theorem_adaptive_Alg_2} with $\displaystyle\frac{\varepsilon}{2} \longrightarrow \delta_{k+1}.$ \end{proof}
The optimality of Algorithm \ref{Alg_5} for the class of convex and $M$-relatively Lipschitz continuous problems can be proved similarly to Remark \ref{remarkoptim}.
\section{Universal Algorithms for Relatively Smooth and Relatively Lipschitz Continuous Convex Optimization Problems}\label{sect_univers}
In this section, we introduce analogues of Algorithms \ref{adaptive_alg4} and \ref{Alg_5} which adjust to the ``degree of relative smoothness'' of the considered $(\alpha, L, \delta)$-relatively smooth problem. This approach allows the construction of adaptive gradient-type methods that are applicable to both relatively Lipschitz continuous and relatively smooth problems with optimal complexity estimates.
\begin{algorithm}[!ht] \caption{Universal Method for Relatively Smooth and Relatively Lipschitz Continuous Convex Optimization Problems with Adaptation to Inexactness.}\label{Algor2} \begin{algorithmic}[1]
\REQUIRE $\varepsilon > 0, \rrev{x_{0} \in Q}, L_{0}>0,\delta_0>0$, $R$ s.t. $V\left(x_{*}, x_{0}\right) \leqslant R^{2}, \rrev{k=0}.$
\STATE Set $k = k+1, L_{k+1}=\frac{L_k}{2}, \delta_{k+1}=\frac{\delta_k}{2}$.
\STATE Find
\begin{equation}\label{eqproblem}
x_{k+1} = \arg\min\limits_{x\in Q}\{\langle \nabla f(x_k),x \rangle + L_{k+1}V(x,x_{k})\}.
\end{equation}
\STATE \textbf{If}
\begin{equation*}\label{condalg2}
f\left(x_{k+1}\right) \leqslant f\left(x_{k}\right)+\langle \nabla f\left(x_{k}\right), x_{k+1}-x_{k}\rangle+L_{k+1} V(x_{k+1}, x_{k})+\delta_{k+1},
\end{equation*}
\textbf{then} go to the next iteration (item 1).
\STATE \textbf{else}
$$\text{set }L_{k+1}=2\cdot L_{k+1}, \delta_{k+1}=2\cdot\delta_{k+1} \text{ and go to item 2}.$$
\STATE \textbf{end if}
\ENSURE $\widehat{x} = \frac{1}{S_N}\sum\limits_{k=0}^{N-1}\frac{x_{k+1}}{L_{k+1}}.$ \end{algorithmic} \end{algorithm}
\begin{theorem}\label{theorem_Algor2} Let $f: Q\longrightarrow \mathbb {R}$ be a convex and $(\alpha, L, \delta)$-relatively smooth function, i.e. \eqref{eqalpha1relsm}, \eqref{eqalpha1relsm1} hold. Then after $N$ iterations of Algorithm \ref{Algor2}, the following inequality holds \begin{equation*}\label{equat2}
f(\widehat{x})-f(x_{*})\leqslant \frac{R^2}{S_N}+ \frac{1}{S_N}\sum\limits_{k=0}^{N-1} \frac{\delta_{k+1}}{L_{k+1}}, \end{equation*} where $S_N = \sum\limits_{k=0}^{N-1}\frac{1}{L_{k+1}}.$ Note that for \color{black}{$L_0 \leqslant 2L$ and $\delta_0 \leqslant 2\delta$} the auxiliary problem \eqref{eqproblem} in Algorithm \ref{Algor2} is solved no more than $2N + \log_2\frac{2L}{L_0}$ times. \end{theorem}
\begin{proof} The proof is given in Appendix D. \end{proof}
The optimality of Algorithm \ref{Algor2} for the class of convex and $M$-relatively Lipschitz continuous problems can be proved similarly to Remark \ref{remarkoptim}. The optimal rate of convergence $O(\varepsilon^{-1})$ for the class of $L$-relatively smooth problems also holds for Algorithm \ref{Algor2}. \alex{For more details, see the conclusion of the proof in Appendix E; the proofs of these facts for Algorithm \ref{Algor2} can be obtained analogously.}
Let us now formulate a variant of the universal method for relatively Lipschitz continuous and relatively smooth problems which makes it possible to guarantee the preservation of the optimal complexity estimates. This method is listed as Algorithm \ref{universal_alg2} below.
\begin{algorithm}[!ht] \caption{Universal Method for Relatively Smooth and Relatively Lipschitz Continuous Convex Optimization Problems.}\label{universal_alg2} \begin{algorithmic}[1]
\REQUIRE $\varepsilon > 0, \rrev{x_{0}\in Q}, L_{0}>0$, $R$ s.t. $V\left(x_{*}, x_{0}\right) \leqslant R^{2}, \rrev{k = 0}.$
\STATE Set $k =k+1, L_{k+1}=\frac{L_k}{2}$.
\STATE Find \begin{equation*}
x_{k+1} = \arg\min\limits_{x\in Q}\{\langle \nabla f(x_k),x \rangle + L_{k+1}V(x,x_{k})\}.
\end{equation*}
\STATE \textbf{If}
\begin{equation*}
f\left(x_{k+1}\right) \leqslant f\left(x_{k}\right)+\langle \nabla f\left(x_{k}\right), x_{k+1}-x_{k}\rangle+L_{k+1} V(x_{k+1}, x_{k})+\frac{3 \varepsilon}{4},
\end{equation*}
\textbf{then} go to the next iteration (item 1).
\STATE \textbf{else}
$$\text{set }L_{k+1}=2\cdot L_{k+1} \text{ and go to item 2.}$$
\STATE \textbf{end if}
\STATE Stopping criterion
\begin{equation*}
S_N := \sum\limits_{k=0}^{N-1}\frac{1}{L_{k+1}}\geqslant \frac{4R^2}{\varepsilon}.
\end{equation*}
\ENSURE $\widehat{x} = \frac{1}{S_N}\sum\limits_{k=0}^{N-1}\frac{x_{k+1}}{L_{k+1}}$. \end{algorithmic} \end{algorithm}
\begin{theorem}\label{ThmUnivMeth2} \rev{Let $f: Q\longrightarrow \mathbb {R}$ be a convex and $(\alpha, L, \delta)$-relatively smooth function, i.e. \eqref{eqalpha1relsm} and \eqref{eqalpha1relsm1} hold with $\delta \leqslant \frac{3\varepsilon}{4}$, \rrev{$L_0\leqslant 2L$}. Then after the stopping of Algorithm \ref{universal_alg2}, the following inequality holds $f(\widehat{x})-f(x_*) \leqslant\varepsilon.$ The number of iterations of Algorithm \ref{universal_alg2} does not exceed $\displaystyle \left\lceil\frac{8LR^2}{\varepsilon}\right\rceil$. If $f$ is a $\left(1, \frac{2M^2}{\varepsilon}, \frac{\varepsilon}{2}\right)$-relatively smooth function (for example, an $M$-relatively Lipschitz continuous function), then the number of iterations of Algorithm \ref{universal_alg2} does not exceed $\left\lceil\displaystyle\frac{16M^2R^2}{\varepsilon^2}\right\rceil$.} \end{theorem}
\begin{proof} The proof is given in Appendix E. \end{proof} \textcolor{black}{ \begin{remark}
It is worth noting that, generally speaking, for Algorithms \ref{adaptive_alg4}--\ref{universal_alg2} one may also use the following output point
\begin{equation*}
\widehat{x} = \arg\min\limits_{i\in\{0,\ldots,N+1\}} f(x_i).
\end{equation*} At the same time, for various applied problems such a modification can either improve or degrade the practical quality of the algorithms. \end{remark}}
\section{Numerical Experiments}\label{experiments_Alg5} In this section, in order to demonstrate the performance of the proposed algorithms, we first consider some numerical experiments concerning the Intersection of Ellipsoids Problem (IEP). Secondly, we compare the proposed Algorithm \ref{Alg_5} with the AdaMirr algorithm, which was recently proposed in \cite{AdaMirr_2021}. We also consider some numerical experiments concerning the Support Vector Machine (SVM) \cite{Lu,pegasos_2011}.
All experiments were implemented in Python 3.4, on a computer fitted with Intel(R) Core(TM) i7-8550U CPU @ 1.80GHz, 4 Core(s), 8 Logical Processor(s). The RAM of the computer is 8 GB.
\subsection{The Intersection of Ellipsoids Problem (IEP)}\label{IEP} For the Intersection of Ellipsoids Problem, \alex{assuming that the intersection is nonempty}, we compute a point $x \in \mathbb{R}^n$ in the intersection of $m$ ellipsoids, i.e.
\begin{equation*}
x \in \mathcal{E} = \mathcal{E}_1 \cap \mathcal{E}_2 \cap \ldots \cap \mathcal{E}_m, \end{equation*} where $\mathcal{E}_{i}=\left\{ \rrev{x \in \mathbb{R}^n}: \frac{1}{2} x^{T} A_{i} x+ \rrev{b_{i}^T x}+c_{i} \leqslant 0\right\}$, $A_i \in \mathbb{R}^{n\times n}$ is a given symmetric positive semi-definite matrix, $b_i \in \mathbb{R}^n, c_i \in \mathbb{R}$ are given, for every $i =1, \ldots, m$. We note that the Intersection of Ellipsoids Problem is equivalent to the following unconstrained optimization problem \begin{equation}\label{objective_IEP}
\rev{\min\limits_{x\in \mathbb{R}^n} \left\{ f(x) := \max\limits_{\rrev{1\leqslant i\leqslant m}} \left[
\frac{1}{2} x^{T} A_{i} x + \rrev{b_{i}^T x}+c_{i} \right] \right\}.} \end{equation} The objective function $f$ in \eqref{objective_IEP} is both non-differentiable and non-Lipschitz \moh{\cite{Lu}}, so traditional first-order methods are not applicable to such types of problems. We will demonstrate how the proposed Algorithm \ref{Alg_5} can be applied to solve such a problem (here we pay more attention to Algorithm \ref{Alg_5}, since it performs better than Algorithms \ref{adaptive_alg4} and \ref{Algor2}, see Fig. \ref{results_alg345_IEP}).
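For reference, evaluating $f$ in \eqref{objective_IEP} and extracting a subgradient is simple: a subgradient of a pointwise maximum is the gradient of any piece attaining the maximum. A short sketch (function names are ours):

```python
import numpy as np

def iep_value_and_subgrad(x, As, bs, cs):
    """Value and one subgradient of f(x) = max_i [0.5 x^T A_i x + b_i^T x + c_i];
    the gradient of an active piece, A_i x + b_i, is a valid subgradient."""
    vals = [0.5 * x @ A @ x + b @ x + c for A, b, c in zip(As, bs, cs)]
    i = int(np.argmax(vals))
    return vals[i], As[i] @ x + bs[i]
```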
Let $\sigma:=\max\limits_{1 \leqslant i \leqslant m}\left\|A_{i}\right\|_{2}^{2}$ where $\left\|A_{i}\right\|_{2}$ is the \rrev{spectral norm} of $A_i$,\\
$\rho :=\max\limits_{1 \leqslant i \leqslant m}\left\|A_i b_i \right\|_{2}$ and $\gamma:=\max\limits_{1 \leqslant i \leqslant m}\left\|b_i \right\|_{2}^2$. We run Algorithm \ref{Alg_5} with the following prox function \begin{equation}\label{prox_function_IEP}
d(x):=\frac{a_2}{4}\|x\|_{2}^{4}+\frac{a_1}{3}\|x\|_{2}^{3}+\frac{a_0}{2}\|x\|_{2}^{2}, \end{equation} where $a_0 = \gamma, a_1 = \rho, a_2 = \sigma$ (see \cite{Lu} for more details). The objective function $f$ \eqref{objective_IEP} is $1$-relatively Lipschitz continuous with respect to the prox function $d(\cdot)$, defined in \eqref{prox_function_IEP} \rrev{\cite{Lu}}. The Bregman divergence $V(\cdot, \cdot)$ for the corresponding prox function $d(\cdot)$ is defined as follows \begin{equation}\label{Bregmann_IEP}
V(y,x) = a_0 V_{d_0}(y,x) + a_1 V_{d_1}(y,x) + a_2 V_{d_2}(y,x), \end{equation}
\rrev{where $d_i(x) = \frac{1}{i+2}\|x\|_2^{i+2} \; (i=0, 1, 2)$,} and $$
V_{d_i}(y,x) = \frac{1}{i+2} \left(\|y\|_{2}^{i+2}+(i+1) \|x\|_{2}^{i+2}-(i+2)\|x\|_{2}^{i}\langle x, y\rangle\right)\; (i = 0, 1, 2). $$ Note that each iteration of Algorithm \ref{Alg_5} requires the capability to solve the subproblem \eqref{subproblem_alg5}, which is equivalent to the following linearized problem \begin{equation}\label{equivalent_subproblem_alg5}
x_{k+1} = \arg\min\limits_{x\in \mathbb{R}^n }\{\langle c_k,x \rangle + d(x)\}, \end{equation} where $c_k = \frac{1}{L_{k+1}} \nabla f(x_k) -\nabla d(x_k)$ and $d(x)$ is given in \eqref{prox_function_IEP}. The solution of the problem \eqref{equivalent_subproblem_alg5} can be found explicitly \begin{equation*}
x_{k+1} = -\theta_k c_k, \end{equation*} for some $\theta_k \geqslant 0$, where $\theta_k$ is a positive real root of the following cubic equation \begin{equation*}\label{cubic_eq_IEP}
\gamma \theta + \rho\|c_k\|_2 \theta^2 + \sigma \|c_k\|_2^2 \theta^3 -1 = 0. \end{equation*}
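Since all coefficients $\gamma$, $\rho\|c_k\|_2$, $\sigma\|c_k\|_2^2$ are positive, the left-hand side is increasing on $[0,\infty)$, so the equation has exactly one positive real root, which a generic polynomial solver recovers reliably. A sketch of the resulting mirror step (assuming $\gamma,\rho,\sigma>0$; the function name is ours):

```python
import numpy as np

def mirror_step(c, gamma, rho, sigma):
    """Minimize <c,x> + d(x) for the degree-4 prox function with coefficients
    a0=gamma, a1=rho, a2=sigma: the minimizer is x = -theta*c, where theta
    is the positive root of sigma*||c||^2*t^3 + rho*||c||*t^2 + gamma*t - 1."""
    nc = np.linalg.norm(c)
    if nc == 0.0:
        return np.zeros_like(c)
    # np.roots expects coefficients ordered from highest degree to lowest
    roots = np.roots([sigma * nc**2, rho * nc, gamma, -1.0])
    theta = max(r.real for r in roots if abs(r.imag) < 1e-10 and r.real >= 0)
    return -theta * c
```

The first-order optimality condition $\nabla d(x) + c = 0$, with $\nabla d(x) = (\sigma\|x\|_2^2 + \rho\|x\|_2 + \gamma)\,x$, can be used to verify the returned point.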
We run Algorithm \ref{Alg_5} with different values of $n$, the prox-function \eqref{prox_function_IEP}, and the starting point $x_0 = \left(0.2, \ldots, 0.2\right) \in \mathbb{R}^n$, which does not lie in $\mathcal{E}$. \rrev{The matrices $A_i$, for every $i = 1, \ldots, m$, are diagonal matrices with entries chosen randomly from the uniform distribution over $(0, 1)$}; the vectors $b_i$ and the constants $c_i$ are also chosen randomly from a normal (Gaussian) distribution with mean (center) $0$ and standard deviation (width) $0.1$. \rrev{We generated the random data 5 times, ensuring that $\mathbf{0} \in \mathbb{R}^n$ belongs to $\mathcal{E}$, and averaged the results obtained by the algorithms over these runs.} We considered $L_0 = \frac{\|\nabla f(1,0,\ldots,0) - \nabla f(0,1,0, \ldots, 0)\|_2}{\sqrt{2}}, \delta_0 = 0.5$, and $R^2 = \frac{3\sigma}{4}\|x_0\|_2^4 + \frac{2\rho}{3}\|x_0\|_2^3 + \frac{\gamma}{2}\|x_0\|_2^2$ (see the proof of Proposition 5.4 in \cite{Lu}).
The results of the work of Algorithm \ref{Alg_5} for IEP are presented in Fig. \ref{results_alg4_IEP} below. These results show the running time of the algorithm in seconds as a function of the number of iterations, together with the quality of the solution (``Estimate''), which is in fact the right-hand side of inequality \eqref{estimate_alg5}.
We note that the quality of the solution produced by Algorithm \ref{Alg_5} improves sharply at the beginning of the work of the algorithm, from $k=100$ to $k=10\,000$. \rrev{We improve the quality of the initial solution by two orders of magnitude on average.} Nevertheless, \rrev{the rate of convergence decreases significantly as the number of iterations goes from $k=15\,000$ to $k=100\,000$.}
\begin{figure}
\caption{The results of Algorithm \ref{Alg_5} for IEP with different values of $n$ and $m = 10$.}
\label{results_alg4_IEP}
\end{figure}
\begin{figure}
\caption{The results of Algorithms \ref{adaptive_alg4}, \ref{Algor2} and \ref{Alg_5} for IEP with $n=1000$ and $m = 10$.}
\label{results_alg345_IEP}
\end{figure}
\subsection{Comparison with AdaMirr} Recently, in \cite{AdaMirr_2021}, \rrev{an adaptive first-order method called AdaMirr was proposed} to solve relatively Lipschitz continuous and relatively smooth optimization problems. \rrev{Briefly, AdaMirr can be stated as \begin{equation*}
x_{k+1} = \arg\min\limits_{x\in Q}\{\langle - \gamma_k \nabla f(x_k),x_k - x \rangle + V(x,x_k)\}, \quad k = 1, 2, \ldots, \end{equation*} with $\gamma_k$ defined as $$ \gamma_k = \frac{1}{\sqrt{\sum_{s=0}^{k-1} \delta_s^2}} \quad \text {with} \quad \delta_s^2=\frac{V\left(x_s, x_{s+1}\right)+V\left(x_{s+1}, x_s\right)}{\gamma_s^2}, \quad k = 1, 2, \ldots $$ and $\delta_0 = \sqrt{V(x_0,x_1) + V(x_1,x_0)}$. In \cite{AdaMirr_2021}, it was proved that for an $M$-relatively Lipschitz continuous convex function, after $N$ steps of AdaMirr the following inequality holds \begin{equation}\label{estim_adamirr} f\left(\overline{x}_N\right)-f^* \leqslant \frac{\sqrt{2} M\left[D_1+ \frac{8 M^2}{\delta_0^2} +2 \ln \left(1 + \frac{2 M^2 N}{\delta_0^2} \right)\right]}{\sqrt{N}}+\frac{3 \sqrt{2} M + \frac{4 M^2}{\delta_0^2}}{N}, \end{equation} where $\overline{x}_N = \frac{1}{N} \sum_{k=1}^{N} x_k$ and $D_1 = V(x_*, x_1)$. }
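In the Euclidean setup, $\delta_s^2$ reduces to $\|x_{s+1}-x_s\|_2^2/\gamma_s^2$, so the recursion accumulates squared displacements of past iterates, in the spirit of AdaGrad-type step sizes. The rough unconstrained sketch below reflects our reading of the scheme; in particular, initializing $x_1$ via one fixed step of size $\gamma_0$ is an illustrative assumption, not a detail taken from \cite{AdaMirr_2021}.

```python
import numpy as np

def adamirr_euclidean(grad_f, x0, n_steps=500, gamma0=0.5):
    """Euclidean, unconstrained sketch of the AdaMirr recursion:
    x_{k+1} = x_k - gamma_k * grad_f(x_k), where gamma_k is driven by
    the accumulated squared displacements of the previous iterates."""
    x = x0 - gamma0 * grad_f(x0)                    # illustrative choice of x_1
    sum_d2 = np.dot(x - x0, x - x0) / gamma0**2     # delta_0^2
    iterates = [x]
    for _ in range(n_steps - 1):
        gamma = 1.0 / np.sqrt(sum_d2)
        x_new = x - gamma * grad_f(x)
        sum_d2 += np.dot(x_new - x, x_new - x) / gamma**2  # add delta_k^2
        x = x_new
        iterates.append(x)
    return sum(iterates) / len(iterates)            # averaged output point
```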
\rev{In this subsection,} we compare the proposed Algorithm \ref{Alg_5} with AdaMirr on the Intersection of Ellipsoids Problem (see Subsec. \ref{IEP}). We run the compared algorithms with the same parameters and settings \rrev{as in Subsec.} \ref{IEP}. The results of the comparison are presented in Fig. \ref{fir_n1000}, which illustrates the value of the objective function at the output point of each algorithm, the estimates of the quality of the solution for Algorithm \ref{Alg_5} (the right-hand side of inequality \eqref{estimate_alg5}) and for AdaMirr (the right-hand side of inequality \eqref{estim_adamirr}), and the running time of the algorithms in seconds.
From the results in Fig. \ref{fir_n1000}, we can see that the proposed Algorithm \ref{Alg_5} performs better than AdaMirr, except for the running time: AdaMirr runs faster because the proposed Algorithm \ref{Alg_5} contains an adaptive procedure for the Lipschitz continuity parameter, which requires additional time. Note that AdaMirr does not converge to the solution of the problem over the whole considered range of iterations, from $100$ to $10^4$. \begin{figure}
\caption{The results of comparison of Algorithm \ref{Alg_5} and AdaMirr for IEP with $n = 1000$ and $m=10$.}
\label{fir_n1000}
\end{figure}
\subsection{Support Vector Machine (SVM) and Inequality-Type Function Constraints} The Support Vector Machine (SVM) is an important supervised learning model for binary classification problems \cite{pegasos_2011}. The SVM optimization problem can be formulated as follows \begin{equation}\label{objective_SVM}
\rev{\min\limits_{x\in \widetilde{Q} } \left\{ f(x) := \left(\frac{1}{n}\sum_{i=1}^{n}\max\{0, 1- y_i x^\top w_i\}\right) +\frac{\moh{\tau}}{2}\|x\|_2^2\right\}, } \end{equation} where $w_i$ is the input feature vector of sample $i$ and $y_i \in \{-1, 1\}$ is the label of sample $i$, \rrev{$\tau>0$ } is the regularization parameter, and $\widetilde{Q}$ is a compact convex set. The objective function in \eqref{objective_SVM} is non-differentiable, \rev{and because of the $\ell_2$-norm regularization the Lipschitz constant of such a function can be extremely large.} Thus, we cannot always directly use typical subgradient or gradient schemes to solve problem \eqref{objective_SVM}. The problem of constrained (inequality-type constraints) minimization of convex functions attracts widespread interest in many areas of modern large-scale optimization and its applications \cite{Robust_Truss,Shpirko_2014}. Therefore, we demonstrate the performance of the proposed Algorithm \ref{adaptive_alg3}
for this class of problems. We consider an example of the Lagrange saddle-point problem induced by the function $f$ in problem \eqref{objective_SVM}, with some inequality-type function constraints. This problem has the following form \begin{equation}\label{problem_min_for_SPP}
\min_{x \in \widetilde{Q} } \left\{f(x) \left| \; \varphi_p(x):=\sum_{i=1}^n \alpha_{pi}x_i^2 - \beta_p \leqslant 0 \right., \; p=1,...,m \right\}, \end{equation} where $\rrev{\alpha_{pi}>0}, \beta_p \in \mathbb{R}, \, \forall i=1, \ldots, n$ and $ \forall p = 1, \ldots, m$. The corresponding Lagrange saddle point problem is defined as follows \begin{equation*}\label{lagrangeSVM} \min_{x \in \widetilde{Q}} \max_{\boldsymbol{\lambda} = (\lambda_1,\lambda_2,\ldots,\lambda_m)^\top \in \widehat{Q} \subset \mathbb{R}^m_+} \left\{ L(x,\boldsymbol{\lambda}):=f(x)+\sum\limits_{p=1}^m\lambda_p\varphi_p(x)\right\}, \end{equation*} where $\widehat{Q} $ is a compact convex set. This problem is equivalent to the variational inequality with the following monotone bounded operator $$ G(x,\boldsymbol{\lambda})= \begin{pmatrix} \nabla f(x)+\sum\limits_{p=1}^m\lambda_p\nabla\varphi_p(x), \\ (-\varphi_1(x),-\varphi_2(x),\ldots,-\varphi_m(x))^\top \end{pmatrix}, $$ where $ \nabla f$ and $\nabla\varphi_p$ are subgradients of $f$ and $\varphi_p$.
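The operator $G$ is easy to assemble in code. In the sketch below, $W$ stores the feature vectors $w_i$ as rows and $\alpha$ stores the coefficients $\alpha_{pi}$ as rows; this data layout and all names are our illustrative assumptions.

```python
import numpy as np

def make_G(W, y, tau, alpha, beta):
    """Build the monotone operator G(x, lam) for the Lagrange saddle-point
    reformulation: the x-block uses a subgradient of f plus the weighted
    constraint gradients; the lambda-block is minus the constraint values."""
    def G(x, lam):
        margins = 1.0 - y * (W @ x)                  # hinge-loss margins
        # subgradient of the hinge term: -y_i * w_i on active margins
        sub_f = -(W.T @ (y * (margins > 0))) / len(y) + tau * x
        grad_phi = 2.0 * alpha * x                   # row p: grad of phi_p
        phi = alpha @ (x * x) - beta                 # constraint values
        gx = sub_f + grad_phi.T @ lam                # x-block
        glam = -phi                                  # lambda-block
        return gx, glam
    return G
```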
We run Algorithm \ref{adaptive_alg3} with the following prox function \begin{equation*}\label{prox_function_svm}
d(x, \boldsymbol{\lambda}):=\frac{a_2}{4}\|x\|_{2}^{4}+\frac{a_1}{3}\|x\|_{2}^{3}+\frac{a_0}{2}\|x\|_{2}^{2} + \frac{1}{2}\|\boldsymbol{\lambda}\|_2^2, \quad \forall x \in \mathbb{R}^n, \boldsymbol{\lambda} \in \mathbb{R}_+^m \end{equation*} with \begin{equation}\label{ai_SVN}
a_0 = \frac{1}{n}\sum_{i=1}^{n}\|w_i\|_2^2, \quad a_1 =\frac{2{\moh{\tau}}}{n}\sum_{i=1}^{n}\|w_i\|_2, \quad a_2 = {\moh{\tau}}^2, \end{equation} and the following Bregman divergence \begin{equation}\label{bregman_div_svm}
V_{\text{new}}\left( \left(y, \boldsymbol{\lambda} \right) , \left(x, \boldsymbol{\lambda}^{\prime}\right) \right) = V(y,x) +\frac{1}{2}\|\boldsymbol{\lambda} - \boldsymbol{\lambda}^{\prime}\|_2^2, \end{equation}
for every $x, y \in \mathbb{R}^n, \boldsymbol{\lambda}, \boldsymbol{\lambda}^{\prime} \in \mathbb{R}_+^m$, where $V(y,x)$ is given in \eqref{Bregmann_IEP} with coefficients defined in \eqref{ai_SVN}. We consider the ball $\widetilde{Q} \subset \mathbb{R}^n$ centered at $\textbf{0} \in \mathbb{R}^n$ with radius $r = \min \left\{\frac{1}{n \tau} \sum\limits_{i=1}^n\left\|w_i\right\|_2, \sqrt{ \frac{2}{\tau} }\right\}$ (see \cite{Lu}). We take the initial point $(x_0, \boldsymbol{\lambda}_0) \in \mathbb{R}^{n+m}$ with all coordinates equaling $0.01$. \rrev{The coefficients $\alpha_{pi}$ in \eqref{problem_min_for_SPP} and the vectors $w_i$ for $i = 1, \ldots, n$ are chosen randomly from the uniform distribution over $[0, 1)$,} and $\beta_p = r\, (\forall p = 1, \ldots, m)$. We also consider $\widehat{Q} = \{\boldsymbol{\lambda} \in \mathbb{R}^m_+: \|\boldsymbol{\lambda}\|_2^2 \leqslant r^2 \}$, $L_0 = \frac{\|G((1,0,\ldots,0), \textbf{0} ) - G((0,1,0,\ldots,0), \textbf{0} )\|_2}{\sqrt{2}},$ where $\textbf{0} \in \mathbb{R}^m$, $\delta_0 = 0.5,$ and ${\moh{\tau}} = 0.5$ in \eqref{objective_SVM}. In order to estimate the parameter $R$ for the Bregman divergence \eqref{bregman_div_svm}, we have (see \cite{Lu}) \begin{equation*}
\begin{aligned}
V_{\text{new}}\left( \left(x, \boldsymbol{\lambda} \right) , \left(x_0, \boldsymbol{\lambda}_0\right) \right) & \leqslant \frac{a_2}{4} \|x-x_0\|_2^2 \left(\|x+x_0\|_2^2 + 2\|x_0\|_2^2\right)
\\&\;\;\;\; + \frac{a_1}{3}\|x-x_0\|_2^2 \left(\|x\|_2^2 + 2\|x_0\|_2^2\right) + \frac{a_0}{2}\|x-x_0\|_2^2
\\& \;\;\;\; + \frac{1}{2} \|\boldsymbol{\lambda} - \boldsymbol{\lambda}_0 \|_2^2.
\end{aligned} \end{equation*} Therefore in $Q := \widetilde{Q} \times \widehat{Q}$, we have $$ \begin{aligned}
&{V_\text{new}}\left( \left(x, \boldsymbol{\lambda} \right) , \left(x_0, \boldsymbol{\lambda}_0\right) \right) \\ &\;\;\;\; \leqslant \left(r + \|x_0\|_2\right)^2 \left[\frac{a_2}{4} \left(r^2 + 2r\|x_0\|_2 + 3 \|x_0\|_2^2\right) + \frac{a_1}{3} \left(r^2 + 2\|x_0\|_2^2\right) + \frac{a_0}{2} \right]
\\& \;\;\;\; \;\;\;\; + \frac{1}{2} (r + \|\boldsymbol{\lambda}_0\|_2)^2 := v. \end{aligned} $$ Thus we can take $R = \sqrt{v}$. In each iteration of Algorithm \ref{adaptive_alg3}, the subproblem \eqref{minVI} for problem \eqref{problem_min_for_SPP} is solved numerically rather than explicitly, in contrast to the IEP case in the previous subsections.
The results of the work of Algorithm \ref{adaptive_alg3} for $n = 25, m = 5$ are presented in Fig. \ref{results_alg12_SVM}. These results show the number of iterations of Algorithm \ref{adaptive_alg3} as a function of $\varepsilon \in \{ i^{-1}:\; i=2, 4, 8, 12, 16, 20\}$.
As is known, for variational inequalities with a non-smooth operator the theoretical complexity estimate $O\left(\varepsilon^{-2}\right)$ is optimal. However, we can see experimentally from Fig. \ref{results_alg12_SVM} that the proposed Algorithm \ref{adaptive_alg3} has iteration complexity close to $O\left(\varepsilon^{-1}\right)$, which is the optimal estimate for problems with smooth operators.
\begin{figure}
\caption{The results of Algorithm \ref{adaptive_alg3} for problem \eqref{problem_min_for_SPP}.}
\label{results_alg12_SVM}
\end{figure}
\begin{comment} \subsection{Comparison of Algorithm \ref{alg_online} and adaptive algorithm in \cite{titov2020online}.}
In order to see the performance of the proposed Algorithm \ref{alg_online}, we compare with an adaptive algorithm proposed in \cite{titov2020online} (algorithm 2). We consider the following example \begin{example}\label{ex1} The objective function in this example has the following form \begin{equation}\label{obj_sumabs}
f(x) = \frac{1}{T} \sum_{i=1}^{T} \left(|\langle a_i, x\rangle - b_i| + \frac{\mu}{2}\|x\|_2^2\right), \quad \forall a_i \in \mathbb{R}^n, b_i \in \mathbb{R}. \end{equation}
\end{example}
The constraint has the following form \begin{equation}\label{cons_g}
g(x) = \max_{1\leq i \leq m} \left\{g_i(x) = \langle \alpha_i, x\rangle - \beta_i +\frac{\mu}{2}\|x\|_2^2\right\}. \end{equation} The functions $f$ and $g$ are Lipschitz continuous and $\mu$-strongly convex.
For the coefficients $\alpha_i = (\alpha_{i1}, \ldots, \alpha_{in}) \in \mathbb{R}^n$ and constants $\beta_i \in \mathbb{R}$, for every $i \in \{1, \ldots, m\}$, are randomly generated from the uniform distribution over $[0,1)$.
We choose standard Euclidean proximal setup as a prox-function, starting point $x_0 = \left(\frac{1}{\sqrt{n}}, \ldots, \frac{1}{\sqrt{n}}\right)\in \mathbb{R}^n$, $\varepsilon = 10^{-3}$, and $Q$ is the unit ball in $\mathbb{R}^n$.
In the objective function \eqref{obj_sumabs}, the coefficients $a_i \in \mathbb{R}^n $ and constants $\beta_i \in \mathbb{R}$, are generated randomly from the normal (Gaussian) distribution.
We run the proposed Algorithm \ref{alg_online} and the adaptive algorithm 2 in \cite{titov2020online}, with $m =100, n=500$ and different values of $T$. The results are represented in Fig. \ref{fir_comparision}. These results demonstrate the value $\frac{1}{T} \sum\limits_{k \in I}f(x_k)$, where $\{x_k\}_{k \in I}$ is a sequence of productive steps that introduces by the compared Algorithms.
\begin{figure}
\caption{The results of the comparison. The dotted curve indicates the proposed Algorithm \ref{alg_online} and the dashed curve indicates the adaptive algorithm 2 in \cite{titov2020online}.}
\label{fir_comparision}
\end{figure}
From Fig. \ref{fir_comparision}, we can see the efficiency of the proposed Algorithm \ref{alg_online}: it works better than the adaptive algorithm recently proposed in \cite{titov2020online}. \end{comment}
\section*{Conclusions} In this paper we considered $(\alpha, L, \delta)$-relatively smooth optimization problems, which allow one to minimize both relatively smooth and relatively Lipschitz continuous functions. For this type of problem we introduced adaptive and universal methods with optimal convergence rate estimates. We also considered the problem of solving a variational inequality with a relatively bounded operator. Finally, we presented the results of numerical experiments for the considered algorithms.
The authors are very grateful to Dmitry Pasechnyuk for fruitful discussions. The authors also thank the anonymous reviewers for their extremely valuable comments.
\section*{Appendix A. The proof of Theorem \ref{the_VI}} \begin{proof} Due to \eqref{minVI} \rev{and \eqref{Lemma}}, for each $x \in Q$, we have $$ \begin{aligned} \langle g(x_k),x_{k+1}-x\rangle \leqslant L_{k+1}V(x,x_k)-L_{k+1}V(x,x_{k+1})- L_{k+1}V(x_{k+1},x_k). \end{aligned} $$
Thus, taking into account \eqref{condVI} and monotonicity of $g$, we get \begin{equation*}
\begin{aligned}
L_{k+1}V(x,x_k)-L_{k+1} V(x&,x_{k+1}) \geqslant \langle g(x_k),x_{k+1}-x\rangle+L_{k+1}V(x_{k+1},x_k)
\\& = \langle g(x_k),x_{k+1}-x\rangle+\langle g(x_k),x_{k+1}-x_k\rangle
\\& \;\;\;\; + L_{k+1}V(x_{k+1},x_k) - \langle g(x_k),x_{k+1}-x_k\rangle.
\\& \geqslant\langle g(x_k),x_{k+1}-x\rangle-\langle g(x_k),x_{k+1}-x_k\rangle-\frac{\varepsilon}{2}
\\& = \langle g(x_k),x_{k}-x \rangle - \frac{\varepsilon}{2} \geqslant \langle g(x),x_{k}-x \rangle - \frac{\varepsilon}{2},
\end{aligned} \end{equation*} whence we obtain that \begin{equation}\label{eee}
\langle g(x),x_{k}-x\rangle\leqslant L_{k+1}V(x,x_k)-L_{k+1}V(x,x_{k+1})+\frac{\varepsilon}{2}, \quad \forall x \in Q. \end{equation} Dividing both sides of \eqref{eee} by $L_{k+1}$ and summing over $k$, we have $$
\sum\limits_{k=0}^{N-1} \frac{1}{L_{k+1}}\langle g(x),x_{k}-x\rangle\leqslant V(x,x_0)+ \sum\limits_{k=0}^{N-1}\frac{\varepsilon}{2L_{k+1}}, \quad \forall x\in Q, $$ which leads to $$
\langle g(x),\widehat{x}-x\rangle\leqslant \frac{1}{S_N}\sum\limits_{k=0}^{N-1} \frac{1}{L_{k+1}}\langle g(x),x_{k}-x\rangle\leqslant \frac{R^2}{S_N}+\frac{\varepsilon}{2}, $$ and $$
\max\limits_{x\in Q}\langle g(x),\widehat{x}-x\rangle\leqslant \frac{R^2}{S_N}+\frac{\varepsilon}{2}. $$
Since the operator $g$ is relatively bounded, i.e. $$
\langle g(x), x - y\rangle \leqslant M\sqrt{2V(y,x)} \leqslant \frac{M^2V(y,x)}{\varepsilon} + \frac{\varepsilon}{2}, \quad \forall \varepsilon >0, $$ the stopping criterion of Algorithm \ref{adaptive_alg3} is guaranteed to be satisfied for $L_{k+1}\geqslant \frac{M^2}{\varepsilon}$. Since the exit from the iteration will certainly happen for $L_{k+1} \leqslant \frac{2M^2}{\varepsilon},$ we have $$
\frac{R^2}{S_N} \leqslant \frac{2M^2R^2}{\varepsilon N}. $$ Thus, the total number of iterations of Algorithm \ref{adaptive_alg3} will not exceed $$ N=\left\lceil\displaystyle\frac{4M^2R^2}{\varepsilon^2}\right\rceil. $$ \end{proof}
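The two numerical facts used in the proof above can be checked directly: the Young-type bound $M\sqrt{2V}\leqslant \frac{M^2V}{\varepsilon}+\frac{\varepsilon}{2}$ (from $ab\leqslant \frac{a^2}{2}+\frac{b^2}{2}$ with $a=M\sqrt{2V}/\sqrt{\varepsilon}$, $b=\sqrt{\varepsilon}$) and the final iteration count. A minimal sketch; the function names are ours.

```python
import math
import itertools

def young_gap(M, V, eps):
    """Nonnegative gap in the bound M*sqrt(2V) <= M^2*V/eps + eps/2
    used to show the stopping criterion holds for L_{k+1} >= M^2/eps."""
    return M**2 * V / eps + eps / 2 - M * math.sqrt(2 * V)

def iteration_bound(M, R, eps):
    """Upper bound ceil(4 M^2 R^2 / eps^2) on the iterations of Algorithm adaptive_alg3."""
    return math.ceil(4 * M**2 * R**2 / eps**2)
```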
\section*{Appendix B. The proof of Theorem \ref{theorem_estimate_alg50}}
\begin{proof} The proof of \eqref{estimate_alg50} is similar to the proof of Theorem \ref{the_VI}, with $\displaystyle\frac{\varepsilon}{2} = \delta_{k+1}.$ Let us assume that on the $(k+1)$-th iteration $(k=0,1,\ldots, N-1)$ of the Algorithm \ref{Alg_50}, the auxiliary problem \eqref{subproblem_alg50} is solved $i_{k+1}$ times. Then $$
2^{i_{k+1}-2}= \frac{L_{k+1}}{L_{k}}=\frac{\delta_{k+1}}{\delta_{k}}, $$ since at the beginning of each iteration the parameters $L_{k}, \delta_{k}$ are divided by 2. Therefore, $$
\sum\limits_{k=0}^{N-1} i_{k+1}=2N+\log_2 \frac{L_N}{L_0},\quad \log_2 \frac{L_N}{L_0}=\log_2 \frac{\delta_N}{\delta_0}. $$ It is clear that at least one of the inequalities $L_N \leqslant 2L, \delta_N \leqslant 2 \delta$ holds, which ends the proof of the theorem. \end{proof}
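The accounting $\sum_{k=0}^{N-1} i_{k+1}=2N+\log_2 \frac{L_N}{L_0}$ can be reproduced by simulating the halve-then-double rule. In the sketch below the exit criterion is replaced by the stand-in condition "trial constant $\geqslant L$", which is our simplifying assumption for illustration only.

```python
import math

def simulate_inner_solves(L0, L_true, N):
    """Simulate the halve-then-double rule: at iteration k+1 the trial constant
    starts from L_k/2 and doubles until it reaches L_true (a stand-in for the
    exit criterion).  Returns (total number of inner solves, L_N)."""
    L, total = L0, 0
    for _ in range(N):
        trial = L / 2          # parameters are divided by 2 at the start
        i = 1                  # first solve of the auxiliary problem
        while trial < L_true:  # each failure doubles the constant
            trial *= 2
            i += 1
        L, total = trial, total + i
    return total, L
```

For any run, the total matches $2N+\log_2(L_N/L_0)$ and the final constant stays below $2L$.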
\section*{Appendix C. The proof of Theorem \ref{theorem_adaptive_Alg_2}} \begin{proof} Let us use the reasoning in the proof of Theorem \ref{the_VI} for $ g(x)=\nabla f(x)$. \rev{Taking into account \eqref{Lemma},} for any $x \in Q$, we have, \begin{equation*}
\begin{aligned}
\langle \nabla f(x_k),x_{k+1}-x\rangle \leqslant L_{k+1}V(x,x_k)-L_{k+1}V(x,x_{k+1})-L_{k+1}V(x_{k+1},x_k).
\end{aligned} \end{equation*}
Thus, taking into account \eqref{condalg4}, we get \begin{equation*}
\begin{aligned}
L_{k+1}V(x,x_k)-L_{k+1}V( & x, x_{k+1}) \geqslant \langle\nabla f(x_k),x_{k+1}-x\rangle+L_{k+1}V(x_{k+1},x_k)
\\& = \langle\nabla f(x_k),x_{k+1}-x\rangle + \langle\nabla f(x_k),x_{k+1}-x_k\rangle
\\& \;\;\;\; + L_{k+1}V(x_{k+1},x_k)-\langle\nabla f(x_k),x_{k+1}-x_k\rangle
\\& \geqslant\langle\nabla f(x_k),x_{k+1}-x\rangle-\langle\nabla f(x_k),x_{k+1}-x_k\rangle-\frac{\varepsilon}{2}
\\& =\langle\nabla f(x_k),x_{k}-x \rangle - \frac{\varepsilon}{2}.
\end{aligned} \end{equation*} So, we have \begin{equation}\label{eq1}
\langle \nabla f(x_k),x_{k}-x\rangle\leqslant L_{k+1}V(x,x_k)-L_{k+1}V(x,x_{k+1})+\frac{\varepsilon}{2}, \quad \forall x \in Q. \end{equation} Dividing both sides of \eqref{eq1} by $L_{k+1}$ and summing over $k$, we obtain $$
\sum\limits_{k=0}^{N-1} \frac{1}{L_{k+1}}\langle \nabla f(x_k),x_{k}-x\rangle\leqslant V(x,x_0)+\sum\limits_{k=0}^{N-1}\frac{\varepsilon}{2L_{k+1}},\quad \forall x\in Q. $$ Further, in view of the inequality $$
\langle \nabla f(x_k),x_{k}-x\rangle\geqslant f(x_k)-f(x), $$ we have $$
\sum\limits_{k=0}^{N-1} \frac{1}{L_{k+1}}\left(f(x_k)-f(x)\right)\leqslant V(x,x_0)+\sum\limits_{k=0}^{N-1}\frac{\varepsilon}{2L_{k+1}}. $$ Moreover, since $f$ is convex, the following inequality holds $$
\rev{\sum\limits_{k=0}^{N-1} \frac{1}{L_{k+1}}} f(x_k)\geqslant S_Nf(\widehat{x}), $$ where $S_N=\sum\limits_{k=0}^{N-1} \frac{1}{L_{k+1}}.$ Then \begin{equation}\label{eq2}
\rev{\sum\limits_{k=0}^{N-1} \frac{1}{L_{k+1}}}\left(f(x_k)-f(x)\right)\leqslant V(x,x_0) + \frac{\varepsilon}{2}S_N. \end{equation} Since $V(x_*,x_0)\leqslant R^2,$ we obtain, for $x = x_*$ in \eqref{eq2}, that $$
f(\widehat{x})-f(x_*)\leqslant \frac{R^2}{S_N}+\frac{\varepsilon}{2}. $$ \end{proof} \section*{Appendix D. The proof of Theorem \ref{theorem_Algor2}} \begin{proof} 1) Taking into account the standard minimum condition for the subproblem \eqref{eqproblem} \rev{and \eqref{Lemma}}, we have $$
\langle \nabla f(x_k) + L_{k+1}\nabla_{x = x_{k+1}}V(x, x_k), x - x_{k+1} \rangle \geqslant 0, \quad \forall\, x \in Q. $$ After the completion of the $k$-th iteration $(k = 0, 1, \ldots)$ of the Algorithm \ref{Algor2}, the following inequalities hold \begin{equation*}
\begin{aligned}
\langle\nabla f(x_k),x_{k+1}-x \rangle \leqslant L_{k+1}V(x,x_k)-L_{k+1}V(x,x_{k+1})-L_{k+1}V(x_{k+1},x_k),
\end{aligned} \end{equation*} and $$
f(x_{k+1})\leqslant f(x_{k})+ \langle\nabla f(x_k),x_{k+1}-x_k\rangle+L_{k+1}V(x_{k+1},x_k)+\delta_{k+1}. $$ Therefore, $$
f(x_{k+1})\leqslant f(x_{k})+ \langle\nabla f(x_k),x-x_k\rangle + L_{k+1}V(x,x_k)-L_{k+1}V(x,x_{k+1})+\delta_{k+1}. $$ Further, taking into account the inequality $f(x_{k})+ \langle\nabla f(x_k),x-x_k\rangle \leqslant f(x)$, for $x=x_{*}$, we obtain $$
f(x_{k+1})-f(x_{*})\leqslant L_{k+1}V(x_{*},x_k)-L_{k+1}V(x_{*},x_{k+1})+\delta_{k+1}, $$ whence, after summation, in view of the convexity of $f$, we have \color{black}{ $$
f(\widehat{x})-f(x_{*})\leqslant\frac{1}{S_N}\sum\limits_{k=0}^{N-1} \frac{f(x_{k+1})}{L_{k+1}}-f(x_{*})\leqslant \frac{V(x_{*},x_0)}{S_N}+ \frac{1}{S_N}\sum\limits_{k=0}^{N-1} \frac{\delta_{k+1}}{L_{k+1}}. $$} 2) Since $f$ satisfies \eqref{eqalpha1relsm} and \eqref{eqalpha1relsm1}, for sufficiently large $L_{k+1}$ and $\delta_{k+1}$ the iteration exit criterion will certainly be satisfied. According to \eqref{eqalpha1relsm}, for some fixed $L>0$ and $\delta > 0$, the following inequality holds $$
f(y) \leqslant f(x) + \langle \nabla f(x), y - x \rangle + LV(y, x) + \alpha LV(x, y) + \delta, \quad \forall x, y \in Q. $$ Therefore, for $L_{k+1} \geqslant L$ and taking into account \eqref{eqproblem} we obtain \color{black}{\begin{equation*}
\begin{aligned}
f(x_{k+1})&\leqslant f(x_k) + \langle \nabla f(x_k), x_{k+1} - x_k \rangle + L_{k+1}(V(x_{k+1}, x_k)
\\& \;\;\;\; + \alpha V(x_k, x_{k+1}))+ \delta
\\& \leqslant f(x_k) - L_{k+1}(1 - \alpha)V(x_k, x_{k+1}) + \delta,
\end{aligned} \end{equation*} } whence \begin{equation}\label{ineq1}
\alpha f(x_{k+1})\leqslant \alpha f(x_k) - L_{k+1}\alpha(1 - \alpha)V(x_k, x_{k+1}) + \alpha\delta. \end{equation} Now, in view of \eqref{eqalpha1relsm} and taking into account $1 - \alpha \geqslant 0$, the following inequality holds \begin{equation}\label{ineq2} \begin{aligned} (1-\alpha)f(x_{k+1}) &\leqslant (1-\alpha) f(x_k) + (1-\alpha)\langle \nabla f(x_k), x_{k+1} - x_k \rangle + \\& \;\;\;\; + L_{k+1}(1- \alpha)(V(x_{k+1}, x_{k}) + \alpha V(x_{k}, x_{k+1})) + (1-\alpha)\delta \end{aligned} \end{equation} and for $\alpha = 0$ we have \begin{equation}\label{ineq211} \begin{aligned}
f(x_{k+1})\leqslant f(x_k) + \langle \nabla f(x_k), x_{k+1} - x_k \rangle + L_{k+1}V(x_{k+1}, x_{k}) + \delta. \end{aligned} \end{equation} Taking into account \eqref{eqalpha1relsm1} for $\alpha >0$, after summing the inequalities \eqref{ineq1} and \eqref{ineq2}, we have \begin{equation*}\label{ineq3} \begin{aligned} f(x_{k+1}) & \leqslant f(x_k) + (1 - \alpha)\langle \nabla f(x_k), x_{k+1} - x_k \rangle + L_{k+1}(1 - \alpha)V(x_{k+1}, x_{k}) + \\& \;\;\;\; \delta \\& \leqslant f(x_k) + \langle \nabla f(x_k), x_{k+1} - x_k \rangle + L_{k+1}V(x_{k+1}, x_{k}) + \alpha\delta \\&\leqslant f(x_k) + \langle \nabla f(x_k), x_{k+1} - x_k \rangle + L_{k+1}V(x_{k+1}, x_{k}) + \delta, \end{aligned} \end{equation*} i.e. \eqref{ineq211} holds for each $\alpha \in [0; 1]$. It means that the iteration exit criterion of the Algorithm \ref{Algor2} will certainly be satisfied for $L_{k+1} \geqslant L$ and $\delta_{k+1} \geqslant \delta$.
3) Let us assume that on the $(k+1)$-th iteration $(k=0,1,\ldots, N-1)$ of the Algorithm \ref{Algor2}, the auxiliary problem \eqref{eqproblem} is solved $i_{k+1}$ times. Then $$
2^{i_{k+1}-2}= \frac{L_{k+1}}{L_{k}}=\frac{\delta_{k+1}}{\delta_{k}}, $$ since at the beginning of each iteration the parameters $L_{k}, \delta_{k}$ are divided by 2. Therefore, $$
\sum\limits_{k=0}^{N-1} i_{k+1}=2N+\log_2 \frac{L_N}{L_0},\quad \log_2 \frac{L_N}{L_0}=\log_2 \frac{\delta_N}{\delta_0}. $$ It is clear that at least one of the inequalities $L_N\leqslant2L, \delta_N\leqslant2\delta$ holds, which ends the proof. \end{proof}
\section*{Appendix E. The proof of Theorem \ref{ThmUnivMeth2}} \begin{proof}
\color{black}{1) Analogously to part 1) of the proof of Theorem \ref{theorem_Algor2}, we have \begin{equation}\label{equat5.2}
f(\widehat{x})-f(x_{*})\leqslant\frac{1}{S_N}\sum\limits_{k=0}^{N-1} \frac{f(x_{k+1})}{L_{k+1}}-f(x_{*})\leqslant \frac{V(x_{*},x_0)}{S_N}+ \frac{3\varepsilon}{4}. \end{equation} }
2) Analogously to part 2) of the proof of Theorem \ref{theorem_Algor2}, we conclude that for each ($\alpha, L, \delta$)-relatively smooth function $f$ the criterion for the exit from the iteration is certainly fulfilled for $L_{k+1}\geqslant L$.
3) Due to \eqref{equat5.2} for each $k \geqslant 0$ we have
$$
f(\widehat{x})-f(x_*)\leqslant\frac{R^2}{S_N}+\frac{3\varepsilon}{4}. $$ So, for each ($\alpha, L, \delta$)-relatively smooth function $f$ the exit from the iteration will certainly happen for $L_{k+1}\leqslant 2L,$ whence $$
S_N\geqslant \frac{N}{2L}\quad \text{and}\quad \frac{R^2}{S_N}\leqslant\frac{2LR^2}{N}. $$ If we require the condition $\displaystyle\frac{2LR^2}{N}\leqslant\displaystyle\frac{\varepsilon}{4},$ we have that $f(\widehat{x})-f(x_*)\leqslant \varepsilon$ certainly holds for $$N\geqslant \frac{8LR^2}{\varepsilon}.$$
In particular, if $f$ is a $\left(1, \frac{2M^2}{\varepsilon}, \frac{\varepsilon}{2}\right)$-relatively smooth function, i.e. $L = \frac{2M^2}{\varepsilon}$, then $$N\geqslant \frac{8LR^2}{\varepsilon} = \frac{16M^2R^2}{\varepsilon^2}.$$
Note that if $f$ is a relatively Lipschitz continuous function, then $f$ is a $\left(1, \frac{2M^2}{\varepsilon}, \frac{\varepsilon}{2}\right)$-relatively smooth function. Indeed, we have \textcolor{black}{$$
f(y) - f(x)\leqslant \langle \nabla f(y),y - x \rangle \leqslant M\sqrt{2V(x,y)}, $$} i.e. \begin{equation}\label{111}
f(y) \leqslant f(x) + M\sqrt{2V(x,y)}, \end{equation} and \begin{equation}\label{222}
\langle\nabla f(x),x-y\rangle
\color{black}{\leqslant} M\sqrt{2V(y,x)}. \end{equation}
After summing inequalities \eqref{111} and \eqref{222}, we get that the following inequalities hold for any $x, y \in Q$: \begin{equation}\label{1star_ineq}
\begin{aligned}
f(y) &\leqslant f(x)+\langle\nabla f(x),y-x\rangle +M \left(\sqrt{2V(y,x)}+\sqrt{2V(x,y)}\right) \\
&\leqslant f(x)+\langle \nabla f(x),y-x\rangle +\frac{2M^2}{\varepsilon}\left(V(y,x)+V(x,y)\right)+ \frac{\varepsilon}{2}.
\end{aligned} \end{equation}
\end{proof}
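The implication "relatively Lipschitz $\Rightarrow$ $\left(1, \frac{2M^2}{\varepsilon}, \frac{\varepsilon}{2}\right)$-relatively smooth" proved above can be probed numerically on the concrete instance $f(x)=M\|x\|_2$ with the Euclidean prox-function $V(y,x)=\frac12\|y-x\|_2^2$; this particular choice of $f$ and $V$ is ours, for illustration only.

```python
import numpy as np

def rel_smooth_gap(x, y, M, eps):
    """Nonnegative gap in inequality (1star_ineq) for f(z) = M*||z||_2 and the
    Euclidean prox-function V(y,x) = ||y-x||^2 / 2 (an illustrative choice):
    f(y) <= f(x) + <grad f(x), y-x> + (2M^2/eps)(V(y,x)+V(x,y)) + eps/2."""
    f = lambda z: M * np.linalg.norm(z)
    grad = M * x / np.linalg.norm(x)       # gradient of f, valid for x != 0
    V = 0.5 * np.dot(y - x, y - x)         # here V(y,x) = V(x,y)
    rhs = f(x) + grad @ (y - x) + (2 * M**2 / eps) * 2 * V + eps / 2
    return rhs - f(y)
```

The gap is nonnegative because $f(y)-f(x)-\langle\nabla f(x),y-x\rangle\leqslant 2M\|y-x\|$ while, by the arithmetic-geometric mean inequality, the added terms are at least $2M\|y-x\|$.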
\end{document} |
\begin{document}
\title[Parabolic equations on conic domains]{Sobolev space theory and H\"older estimates for the stochastic partial differential equations on conic and polygonal domains}
\thanks{The first and third authors were supported by the National Research Foundation of Korea(NRF) grant funded by the Korea government(MSIT) (No. NRF-2020R1A2C1A01003354)} \thanks{The second author was supported by the National Research Foundation of Korea(NRF) grant funded by the Korea government(MSIT) (No. NRF-2019R1F1A1058988)}
\author{Kyeong-Hun Kim} \address{Kyeong-Hun Kim, Department of Mathematics, Korea University, Anam-ro 145, Sungbuk-gu, Seoul, 02841, Republic of Korea} \email{kyeonghun@korea.ac.kr}
\author{Kijung Lee} \address{Kijung Lee, Department of Mathematics, Ajou University, Worldcup-ro 206, Yeongtong-gu, Suwon, 16499, Republic of Korea} \email{kijung@ajou.ac.kr}
\author{Jinsol Seo} \address{Jinsol Seo, Department of Mathematics, Korea University, Anam-ro 145, Sungbuk-gu, Seoul, 02841, Republic of Korea} \email{seo9401@korea.ac.kr}
\subjclass[2010]{60H15; 35R60, 35R05}
\keywords{parabolic equation, conic domains, weighted Sobolev regularity, mixed weight}
\begin{abstract} We establish existence, uniqueness, and Sobolev and H\"older regularity results for the stochastic partial differential equation \begin{equation*} \begin{aligned} du=\Big(\sum_{i,j=1}^d a^{ij}u_{x^ix^j}&+f^0+\sum_{i=1}^d f^i_{x^i}\Big)dt\\ &+\sum_{k=1}^{\infty}g^kdw^k_t, \quad t>0, \,x\in \cD \end{aligned} \end{equation*} given with non-zero initial data. Here $\{w^k_t: k=1,2,\cdots\}$ is a family of independent Wiener processes defined on a probability space $(\Omega, \bP)$, $a^{ij}=a^{ij}(\omega,t)$ are merely measurable functions on $\Omega\times (0,\infty)$, and $\cD$ is either a polygonal domain in $\bR^2$ or an arbitrary dimensional conic domain of the type \begin{equation} \label{conic} \cD(\cM):=\left\{x\in \bR^d :\,\frac{x}{\lvert x\rvert}\in \cM\right\}, \quad \quad \cM\subsetneq S^{d-1}, \quad (d\geq 2) \end{equation} where $\cM$ is an open subset of $S^{d-1}$ with $C^2$ boundary. We measure the Sobolev and H\"older regularities of arbitrary order derivatives of the solution using a system of mixed weights consisting of appropriate powers of the distance to the vertices and of the distance to the boundary. The ranges of admissible powers of the distance to the vertices and to the boundary are sharp. \end{abstract}
\maketitle
\mysection{Introduction}\label{sec:Introduction} The goal of this article is to present a Sobolev space theory and H\"older regularity results for the stochastic partial differential equation (SPDE) \begin{align}\label{main equation in introduction} d u =\left(\sum_{i,j}^d a^{ij}u_{x^ix^j}+f^0+\sum_{i=1}^d f^i_{x^i}\right)dt +\sum^{\infty}_{k=1} g^kdw_t^k, \,\,\, t>0\,; \,\,\, u(0,\cdot)=u_0 \end{align} defined on either multi-dimensional conic domains $\cD(\cM)$ (see \eqref{conic}) or two dimensional polygonal domains. Here, $\cM$ is an open subset of $S^{d-1}$ with $\cC^2$ boundary, $\{w^k_t: k=1,2,\cdots\}$ is an infinite sequence of independent one dimensional Wiener processes, and the coefficients $a^{ij}$ are merely measurable functions of $(\omega,t)$ satisfying the uniform parabolicity condition; see Assumption \ref{ass coeff} below.
To give the reader a flavor of our results in this article we state a particular one, an estimate, below: Let $\cD=\cD(\cM)$ be a conic domain in $\bR^d$, $\rho(x):=dist(x,\partial \cD)$, and $\rho_{\circ}(x):=\lvert x\rvert$. Then for the solution $u$ of \eqref{main equation in introduction} with zero boundary and zero initial conditions, the following holds for any $p\geq 2$: \begin{align}\label{main estimate simple} &\bE \int^T_0 \int_{\cD} \left(\lvert\rho^{-1}u\rvert^p+ \lvert u_x\rvert^p\right) \rho_{\circ}^{\theta-\Theta}\rho^{\Theta-d}\, dx\,dt \nonumber\\ \leq\quad & C\,\bE \int^T_0 \int_{\cD} \Big( \lvert \rho f^0\rvert^p+\sum_{i=1}^d\lvert f^i\rvert^p +\lvert g\rvert_{l_2}^p\Big) \rho_{\circ}^{\theta-\Theta}\rho^{\Theta-d}\, dx\,dt \end{align} with $d-1<\Theta<d-1+p$ accompanied with the sharp admissible range of $\theta$; see \eqref{theta con intro} below. Also see \eqref{main estimate intro} for higher order derivative estimates. Unlike the range of $\Theta$, the range of $\theta$ is affected by the shape of domain $\cD$, which is determined by $\cM$.
Estimate \eqref{main estimate simple}, if $\rho_{\circ}$ is replaced by the distance to the set of vertices, also holds when $\cD$ is a (bounded) polygonal domain in $\bR^2$. Regarding H\"older regularity, we have for instance, if $1-\frac{d}{p}=\delta>0$, $$ \lvert \rho^{-1+\frac{\Theta}{p}} \rho^{(\theta-\Theta)/p}_{\circ}u(\omega,t,\cdot)\rvert_{ \cC(\cD)}+
[\rho^{-1+\delta+\frac{\Theta}{p}} \rho^{(\theta-\Theta)/p}_{\circ} u( \omega,t,\cdot)]_{\cC^{\delta}(\cD)}<\infty, $$ for a.e. $(\omega,t)$. In particular, \begin{align}
\lvert u(\omega,t,x)\rvert\leq C(\omega,t) \rho^{1-\frac{\Theta}{p}}(x) \rho^{(-\theta+\Theta)/p}_{\circ}(x)\quad \text{for all }x\in\cD \label{Holder simple}. \end{align} Estimate \eqref{Holder simple} shows how $\theta$ and $\Theta$ are involved in measuring the boundary behavior of the solution with respect to $\rho$ and $\rho_{\circ}$. See Theorem \ref{cor 8.10} and Theorem \ref{cor 8.23} for the full H\"older regularity results with respect to both space and time variables.
To position our results in the context of regularity theory of stochastic parabolic equations, let us provide a stream of historical remarks.
The $L_p$-theory ($p\geq 2$) of equation \eqref{main equation in introduction} defined on the entire space $\bR^d$ was first introduced by N.V. Krylov \cite{Krylov 1999-4, Krylov 1996}. In these articles the author used an analytic approach and proved the maximal regularity estimate \begin{align}
\label{krylov lp}
\|u_x\|_{\bL_p(T)}\leq C\Big(\|f^0\|_{\bL_p(T)}+\sum_{i=1}^d \|f^i\|_{\bL_p(T)}
+\||g|_{\ell_2}\|_{\bL_p(T)}\Big), \qquad p\geq 2, \end{align} provided that $u(0,\cdot)\equiv 0$, where $\bL_p(T):=L_p(\Omega\times (0,T); L_p(\bR^d))$.
As for other approaches on Sobolev regularity theory, the method based on $H^{\infty}$-calculus is also available in the literature. This approach was introduced in \cite{Veraar}, in which the maximal regularity of $\sqrt{-A}u$ is obtained for the stochastic convolution $$ u(t):=\int^t_0 e^{(t-s)A} g(s) dW_H (s). $$
Here, $W_H(t)$ is a cylindrical Brownian motion on a Hilbert space $H$, and the operator $-A$ is assumed to admit a bounded $H^{\infty}$-calculus of angle less than $\pi/2$ on $L^q(\cO)$, where $q\geq 2$ and $\cO$ is a domain in $\bR^d$. The result of \cite{Veraar} generalizes \eqref{krylov lp} with $f^i=0$, $i=1,\ldots,d$ as one can take $A=\Delta$ and $\cO=\bR^d$.
One advantage of the approach based on $H^{\infty}$-calculus is that it provides a unified way of handling a class of differential operators satisfying the above mentioned condition. However, this approach is not applicable to SPDEs with operators depending on $(\omega,t)$, and even in the simplest case $A=\Delta$, the boundary $\partial \cO$ needs to be regular enough, that is, $\partial \cO \in \cC^2$. Compared to the approach based on $H^{\infty}$-calculus, Krylov's analytic approach works well for SPDEs with operators depending also on $(\omega,t)$, and it provides arbitrary order regularity of solutions without much extra effort, even under weaker smoothness conditions on the domain.
Since the work of \cite{Krylov 1999-4, Krylov 1996} on $\bR^d$, the analytic approach has been further used for the regularity theory of SPDEs on the half space \cite{Krylov 1999-2, Krylov 1999-22, KK2004-2} and on $\cC^1$-domains \cite{KK2004, Kim2004, Kim2004-2}. The major obstacle in studying SPDEs on domains is that, unless certain compatibility conditions (cf. \cite{Flandoli}) are fulfilled, the second and higher order derivatives of solutions to SPDEs blow up near the boundary, and such blow-ups are inevitable even on $\cC^{\infty}$-domains. Hence, one needs an appropriate weight system to understand the behavior of solutions near the boundary.
It is shown in \cite{Krylov 1999-2, KK2004, Kim2004} that if domains satisfy $\cC^1$ boundary condition, then blow-ups of derivatives of solutions can be described very accurately by a weight system introduced in \cite{Krylov 1999-1, KK2004, Lo1}. This weight system is based solely on the distance to the boundary. Surprisingly enough, under this weight system it is irrelevant whether domains have $\cC^{\infty}$-boundary or $\cC^1$-boundary, that is, the regularity of solutions is not affected by the smoothness of the boundary provided that the boundary is at least of class $\cC^1$. To be more specific, let $\cO$ be a $\cC^1$-domain, $\rho(x)=dist (x,\partial \cO)$, then it holds that (see \cite{Kim2004, KK2004}) for any $d-1<\Theta<d-1+p$,
\begin{align}
\label{eqn 9.3.1} &\bE \int^T_0 \int_{\cO}(\vert \rho^{-1}u\vert + \vert u_x\vert )^p \rho^{\Theta-d}\,dx\,dt \nonumber\\ \leq\,& C\bE \int^T_0 \int_{\cO}\big( \vert \rho f^0\vert ^p+\sum_{i=1}^d \vert f^i\vert ^p+\vert g\vert ^p_{\ell_2} \big) \rho^{\Theta-d}\,dx\,dt.
\end{align}
The condition $\Theta \in (d-1, d-1+p)$ is sharp and is not affected by further smoothness of $\partial \cO$ as long as $\partial \cO\in \cC^1$. Note that estimate \eqref{eqn 9.3.1} with smaller $\Theta$ gives better decay of solutions near the boundary than that with larger $\Theta$. In particular, we have $u(\omega,t,\cdot)\in W^{1,p}_{0}(\cO)$ from \eqref{eqn 9.3.1} if $\Theta\leq d$.
As for results on non-smooth domains, that is $\partial \cO \not\in \cC^1$, very few fragmentary results are known. It turns out that \eqref{eqn 9.3.1} holds true on general Lipschitz domains if $\Theta \approx d-2+p$ (see \cite{Kim2014}), and hence the case $\Theta=d$ is not included in general if $p>2$. An example in \cite{Kim2014} also shows that if $\Theta<p/2$, then estimate \eqref{eqn 9.3.1} fails to hold even on simple wedge domains of the type \begin{equation}\label{angular domain1} \cD^{(\kappa)}=\big\{(r\cos \eta, r\sin \eta)\in \bR^2: r>0, \eta\in (-\kappa/2, \kappa/2)\big\}, \quad \kappa < 2\pi. \end{equation} The vertex $0$ makes the boundary non-smooth and changes the game.
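For orientation, the mixed weight $\rho_{\circ}^{\theta-\Theta}\rho^{\Theta-d}$ appearing in \eqref{main estimate simple} can be evaluated explicitly on the wedge $\cD^{(\kappa)}$. The sketch below assumes $\kappa\leqslant\pi$ (wider wedges need extra cases for the distance to the boundary), and the function name is ours.

```python
import math

def wedge_weight(x1, x2, kappa, theta, Theta, d=2):
    """Mixed weight rho_o^(theta-Theta) * rho^(Theta-d) on the wedge D^(kappa),
    assuming kappa <= pi.  rho_o = |x| is the distance to the vertex and
    rho = dist(x, boundary) the distance to the two boundary rays."""
    r, eta = math.hypot(x1, x2), math.atan2(x2, x1)
    assert abs(eta) < kappa / 2, "point must lie inside the wedge"
    def dist_to_ray(psi):  # psi = angular separation to a boundary ray
        return r * math.sin(psi) if psi <= math.pi / 2 else r
    rho = min(dist_to_ray(kappa / 2 - eta), dist_to_ray(kappa / 2 + eta))
    return r ** (theta - Theta) * rho ** (Theta - d)
```

For example, on the bisector of the right-angle wedge ($\kappa=\pi/2$) at radius $r$, the distance to the boundary is $r\sin(\pi/4)$.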
Our interest in conic and polygonal domains arises from such questions, which, in particular, ask whether estimates similar to \eqref{eqn 9.3.1} hold on these simple Lipschitz domains. We got a clue to the problem from
a PDE result on conic domains \cite{Kozlov Nazarov 2014} (also see \cite{Na, Sol2001}) which is similar to \eqref{eqn 9.3.1}, without the term $g=(g^1, g^2,\cdots)$ of course.
It uses a weight based only on the distance to the vertex.
A work on SPDEs using a weight system based only on the distance to the vertex was introduced in \cite{CKLL 2018} (also see \cite{CKL 2019+}), in which we studied the model case of $d=2$ and $a^{ij}=\delta_{ij}$ as a starting point of the program.
Even for the model case considered in \cite{ CKL 2019+,CKLL 2018} we struggled to obtain higher order derivative estimates and left the problem as future work. The main issue is to include the distance to the boundary in the weight system so as to obtain a satisfactory regularity relation between the solution and the inputs. In fact, the aforementioned difficulty was already hinted at by the Green's function estimate used in \cite{CKLL 2018} and \cite{CKL 2019+}: the bound dominating the Green's function does not vanish at the boundary, although it does at the vertex. A more refined Green's function estimate is needed as the starting point of a satisfactory regularity result.
We then set up a program of three steps: (i) preparing a refined $d$-dimensional Green's function estimate for operators with measurable coefficients, (ii) preparing the PDE result, and (iii) establishing the SPDE result addressing the higher order derivative estimates. The first two steps are carried out in \cite{Green} and \cite{ConicPDE}, and this article fulfills the last step. In \cite{Green} the refined Green's function estimate involves both the distance to the vertex and the distance to the boundary, and it now vanishes at all points of the boundary with an informative decay rate near the boundary. The work \cite{ConicPDE} makes full use of what we prepared in \cite{Green} and is designed to serve this article well.
Now let us explain our $L_p$-regularity result in more detail. Recall $ \rho_{\circ}(x):=\vert x\vert \quad \text{and} \quad \rho(x):=d(x,\partial \cD), $ which denote the distance from $x$ to vertex and to the boundary of the conic domain $\cD=\cD(\cM)$, respectively. We prove that for any $p\ge 2$ and $n=0,1,2,\cdots$, the estimate \begin{align} &\bE \int^T_0 \int_{\cD} \left(\vert \rho^{-1}u\vert ^p+\vert u_x\vert ^p+\cdots+ \vert \rho^{n}D^{n+1}u\vert ^p\right) \rho_{\circ}^{\theta-\Theta}\rho^{\Theta-d}\, dx\,dt \nonumber
\\ \leq& C \bE
\int^T_0 \int_{\cD} \Big( \vert \rho f^0\vert ^p+\cdots+\vert \rho^{n+1}D^nf^0\vert ^p \nonumber
\\
&\quad \quad \quad \quad \quad\,\,\,\, +\sum^d_{i=1}\vert f^i\vert ^p+\cdots+\sum_{i=1}^d\vert \rho^{n}D^nf^i\vert ^p \nonumber \\
&\quad \quad \quad \quad \quad\,\,\,\, +\vert g\vert _{\ell_2}^p+\cdots+\vert \rho^{n}D^{n}g\vert _{\ell_2}^p\Big)
\rho_{\circ}^{\theta-\Theta}\rho^{\Theta-d}\, dx\,dt \label{main estimate intro} \end{align} holds for the solution $u=u(\omega,t,x)$ to equation \eqref{main equation in introduction} with zero initial condition, provided that
\begin{equation}
\label{theta con intro} d-1<\Theta<d-1+p, \quad\,\, p(1-\lambda^+_c)<\theta<p(d-1+\lambda^-_c).
\end{equation}
Here, $\lambda^+_c$ and $\lambda^-_c$ are positive constants which depend on $\cM$ and are defined in Definition \ref{lambda} below (also see Proposition \ref{critical exponents} and Remark \ref{example proposition}). The same estimate holds for polygonal domains in $\bR^2$. Estimate \eqref{main estimate intro} with condition \eqref{theta con intro} is indeed a (seamless) extension of \cite{ConicPDE} to SPDEs, and what is satisfactory is that the ranges of $\Theta$ and $\theta$ in \eqref{theta con intro} are no smaller than the corresponding ranges for the deterministic parabolic equation. This, however, requires very delicate computations, and carrying them out successfully is one of the main purposes of this article.
Finally, we summarize the improvements in this article over the results in \cite{CKLL 2018} and \cite{CKL 2019+}. Our domains $\cD(\cM)$ in $\bR^d$, $d\ge 2$, generalize the two dimensional angular domains \eqref{angular domain1}; the choice of $\cM$ is much richer when $d>2$. Our operator $\sum_{i,j}a^{ij}(\omega,t)D_{ij}$ far generalizes the Laplacian $\Delta$ of \cite{CKLL 2018} and \cite{CKL 2019+}. These generalizations make the computations much more involved, especially for the stochastic part of the solution. Also, thanks to the mixed weight system, we can now study the higher order derivatives in an appropriate manner, and implementing this requires considerable work. Moreover, in this article we do not impose a zero initial condition, and hence we propose the right function spaces for the initial condition in terms of regularity relations between the inputs and the output, the initial condition being one of the inputs. This result is new even for deterministic PDEs on conic domains. The H\"older regularity results based on the aforementioned improvements are also new, even for PDEs on conic domains.
This article is organized as follows. In Section 2 we introduce some properties of weighted Sobolev spaces and present our main results on conic domains, including H\"older regularity results. In Section 3 we estimate the weighted $L_p$ norm of the zeroth order derivative of the solution on conic domains, based on the solution representation via the Green's function and elementary but highly involved computations. The estimates of the derivatives of the solution on conic domains are obtained in Section 4, where the proofs of the main results on conic domains are also given. In Section 5 we establish a regularity theory on polygonal domains in $\bR^2$.
\noindent\textbf{Notations.} \begin{itemize}
\item We use $:=$ to denote a definition.
\item For a measure space $(A, \cA, \mu)$, a Banach space $B$ and $p\in[1,\infty)$, we write $L_p(A,\cA, \mu;B)$ for the collection of all $B$-valued $\bar{\cA}$-measurable functions $f$ such that $$
\|f\|^p_{L_p(A,\cA,\mu;B)}:=\int_{A} \lVert f\rVert^p_{B} \,d\mu<\infty. $$ Here, $\bar{\cA}$ is the completion of $\cA$ with respect to $\mu$. We will drop $\cA$ or $\mu$ or even $B$ in $L_p(A,\cA, \mu;B)$ when they are obvious from the context.
\item $\bR^d$ stands for the $d$-dimensional Euclidean space of points $x=(x^1,\cdots,x^d)$, $B_r(x):=\{y\in \bR^d: \vert x-y\vert <r\}$,
$\bR^d_+:=\{x=(x^1,\ldots,x^d): x^1>0\}$, and $S^{d-1}:=\{x\in \bR^d: \vert x\vert =1\}$.
\item For a domain $\mathcal{O} \subset \bR^d$, $B^{\mathcal{O}}_R(x):=B_R(x)\cap \mathcal{O}$ and $Q^{\mathcal{O}}_R(t,x):=(t-R^2,t]\times B^{\mathcal{O}}_R(x)$.
\item $\bN$ denotes the natural number system, $\bN_0=\{0\}\cup \bN$, and $\bZ$ denotes the set of integers.
\item For $x$, $y$ in $\bR^d$, $x\cdot y :=\sum^d_{i=1}x^iy^i$ denotes the standard inner product.
\item For a domain $\mathcal{O}$ in $\bR^d$, $\partial \mathcal{O}$ denotes the boundary of $\mathcal{O}$.
\item For any multi-index $\alpha=(\alpha_1,\ldots,\alpha_d)$, $\alpha_i\in \{0\}\cup \bN$, $$ f_t=\frac{\partial f}{\partial t}, \quad f_{x^i}=D_if:=\frac{\partial f}{\partial x^i}, \quad D^{\alpha}f(x):=D^{\alpha_d}_d\cdots D^{\alpha_1}_1f(x). $$
We denote $\vert \alpha\vert :=\sum_{i=1}^d \alpha_i$. For the second order derivatives we denote $D_jD_if$ by $D_{ij}f$. We often use the notation $\vert gf_x\vert ^p$ for $\vert g\vert ^p\sum_i\vert D_if\vert ^p$ and $\vert gf_{xx}\vert ^p$ for $\vert g\vert ^p\sum_{i,j}\vert D_{ij}f\vert ^p$. We also use $D^m f$ to denote arbitrary partial derivatives of order $m$ with respect to the space variable.
\item $\Delta_x f:=\sum_i D_{ii}f$, the Laplacian for $f$.
\item For $n\in \{0\}\cup \bN$, $W^n_p(\mathcal{O}):=\{f: \sum_{\vert \alpha\vert \le n}\int_{\mathcal{O}}\vert D^{\alpha}f\vert ^p dx<\infty\}$, the Sobolev space.
\item For a domain $\mathcal{O}\subseteq\bR^d$ and a Banach space $X$ with the norm $\vert \cdot\vert _X$, $\cC(\mathcal{O};X)$ denotes the set of $X$-valued continuous functions $f$ in $\mathcal{O}$ such that $\vert f\vert _{\cC(\mathcal{O};X)}:=\sup_{x\in\mathcal{O}}\vert f(x)\vert _X<\infty$. Also, for $\alpha\in (0,1]$, we define the H\"older space
$\cC^{\alpha}(\mathcal{O};X)$ as the set of all $X$-valued functions $f$ such that $$ \vert f\vert _{\cC^{\alpha}(\mathcal{O};X)}:=\vert f\vert _{\cC(\mathcal{O};X)}+[f]_{\cC^{\alpha}(\mathcal{O};X)}<\infty $$ with the semi-norm $[f]_{\cC^{\alpha}(\mathcal{O};X)}$ defined by $$ [f]_{\cC^{\alpha}(\mathcal{O};X)}=\sup_{x\neq y\in \mathcal{O}} \frac{\vert f(x)-f(y)\vert _X}{\vert x-y\vert ^{\alpha}}. $$ In particular, $\mathcal{O}$ can be an interval in $\bR$.
\item For a domain $\mathcal{O}\subseteq\bR^d$, $\cC^{\infty}_c(\mathcal{O})$ is the space of infinitely differentiable functions with compact support in $\mathcal{O}$, and $\mathrm{supp}(f)$ denotes the support of the function $f$. Also, $\cC^{\infty}(\mathcal{O})$ denotes the space of infinitely differentiable functions in $\mathcal{O}$.
\item For a distribution $f$ on $\mathcal{O}$ and $\varphi\in \cC^{\infty}_c(\mathcal{O})$, the expression $(f,\varphi)$ denotes the action of $f$ on the test function $\varphi$.
\item For functions $f=f(\omega,t,x)$ depending on $\omega\in\Omega$, $t\geq 0$ and $x\in\bR^d$, we usually drop the argument $\omega$ and just write $f(t,x)$ when there is no confusion.
\item Throughout the article, the letter $C$ denotes a finite positive constant which may vary from line to line; its dependence is indicated by writing $C=C(a,b,\cdots)$, meaning that $C$ depends only on the parameters inside the parentheses.
\item $A\sim B$ means that there exist constants $C_1, C_2>0$ independent of $A$ and $B$ such that $C_1B\leq A \leq C_2B$.
\item $d(x,\mathcal{O})$ stands for the distance between a point $x$ and a set $\mathcal{O}\subseteq\bR^d$.
\item $a \vee b =\max\{a,b\}$, $a \wedge b =\min\{a,b\}$.
\item $1_U$ denotes the indicator function of the set $U$.
\item We will use the following sets of functions (see \cite{Kozlov Nazarov 2014}). \begin{itemize}
\item[-] $\mathcal{V}(Q^{\mathcal{O}}_R(t_0,x_0))$ : the set of functions $u$ defined at least on $Q^{\mathcal{O}}_R(t_0,x_0)$ and satisfying \begin{equation*}
\sup_{t\in(t_0-R^2,t_0]}\|u(t,\cdot)\|_{L_2(B^{\mathcal{O}}_{R}(x_0))} +\|\nabla u\|_{L_2(Q^{\mathcal{O}}_{R}(t_0,x_0))}<\infty.\nonumber \end{equation*} \item[-] $\mathcal{V}_{loc}(Q^{\mathcal{O}}_R(t_0,x_0))$ : the set of functions $u$ defined at least on $Q^{\mathcal{O}}_R(t_0,x_0)$ and satisfying \begin{equation*} u\in \mathcal{V}(Q^{\mathcal{O}}_r(t_0,x_0)), \quad \forall r\in (0,R).\nonumber \end{equation*} \end{itemize}
\end{itemize}
\mysection{SPDE on $d$-dimensional conic domains}\label{sec:Cone}
Throughout this article we assume $d\ge 2$. Let $\cM$ be a nonempty open set in $S^{d-1}:=\left\{x\in \bR^d\,:\,\vert x\vert =1\right\}$ and let $\overline{\cM}$ denote the closure of $\cM$. We assume $\overline{\cM}\neq S^{d-1}$, and define the $d$-dimensional conic domain $\mathcal{D}$ by $$ \mathcal{D}=\cD(\cM):=\Big\{x\in\mathds{R}^d\setminus\{0\} \ \Big\vert \ \ \frac{x}{\vert x\vert}\in \mathcal{M} \Big\}. $$
When $d=2$, the shapes of conic domains are quite simple. For instance, with a fixed angle $\kappa$ in the range of $\left(0,2\pi\right)$ we can consider \begin{equation}\label{wedge in 2d} \mathcal{D}=\mathcal{D}^{(\kappa)}:=\left\{(r\cos\eta,\ r\sin\eta)\in\mathds{R}^2 \mid r\in(0,\ \infty),\ -\frac{\kappa}{2}<\eta<\frac{\kappa}{2}\right\}. \end{equation}
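For orientation we record two elementary facts about the wedge \eqref{wedge in 2d}; this computation is added only as an illustration and is not used in the development below.

```latex
% Illustration: special cases of the wedge D^{(\kappa)}.
% For \kappa=\pi the wedge is a half-plane, and for 0<\kappa\le\pi the
% distance to the boundary is explicit in polar coordinates
% x=(r\cos\eta,\ r\sin\eta):
\mathcal{D}^{(\pi)}=\{x\in\mathds{R}^2 \,:\, x^1>0\},
\qquad
d\big(x,\partial\mathcal{D}^{(\kappa)}\big)
 = r\,\sin\Big(\frac{\kappa}{2}-\vert \eta\vert \Big),
\qquad 0<\kappa\le\pi.
```

For $\kappa>\pi$ the nearest boundary point may be the vertex itself, so the formula above no longer holds globally.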
\begin{figure}
\caption{Cases of $d=2$ and $d=3$}
\end{figure}
Let $\{w_{t}^{k}\}_{k\in\bN}$ be a family of independent one-dimensional Wiener processes defined on a complete probability space $(\Omega,\mathscr{F},\bP)$ equipped with an increasing filtration of $\sigma$-fields $\mathscr{F}_{t}\subset\mathscr{F}$, each of which contains all $(\mathscr{F},\bP)$-null sets. By $\cP$ we denote the predictable $\sigma$-field on $\Omega \times (0,\infty)$ generated by $\mathscr{F}_{t}$.
In this article we study the regularity theory of the stochastic partial differential equation \begin{equation}\label{stochastic parabolic equation} d u =\Big( \cL u+f^0+\sum_{i=1}^d f^i_{x^i}\Big)dt +\sum^{\infty}_{k=1} g^kdw_t^k,\quad t>0, \;x\in \cD(\cM) \end{equation}
under the zero Dirichlet boundary condition. Here \begin{equation*} \cL := \sum_{i,j=1}^d a^{ij}(\omega,t) D_{ij}. \end{equation*}
\begin{itemize}
\item[-] Each of the stochastic integrals in \eqref{stochastic parabolic equation} is understood as an It\^o stochastic integral against the given Wiener process.
\item[-] The infinite sum of stochastic integrals is understood as the limit in probability (uniformly in $t$) of the finite sums of stochastic integrals. See Remark \ref{sto series}.
\end{itemize}
Here are our assumptions on $\cM$ and the diffusion coefficients.
\begin{assumption} \label{ass M} The boundary $\partial \cM $ of $\cM$ in $S^{d-1}$ is of class $\cC^2$. \end{assumption}
\begin{assumption}
\label{ass coeff} The diffusion coefficients $a^{ij}$, $i,j=1,\cdots,d$, are real-valued $\cP$-measurable functions of $(\omega,t)$, symmetric ($a^{ij}=a^{ji}$), and satisfy the uniform parabolicity condition, i.e. there exist constants $\nu_1, \nu_2>0$ such that for any $t\in\mathds{R}$, $\omega\in \Omega$ and $\xi=(\xi^1,\ldots,\xi^d)\in\mathds{R}^d$, \begin{equation} \nu_1 \vert \xi\vert ^2\le \sum_{i,j}a^{ij}(\omega, t)\xi^i\xi^j\le \nu_2 \vert \xi\vert ^2. \label{uniform parabolicity} \end{equation} \end{assumption}
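To fix ideas, here is a simple example of admissible coefficients (ours, purely illustrative): a random, time-dependent multiple of the identity,

```latex
a^{ij}(\omega,t) := \big(2+\sin (w^1_t(\omega))\big)\,\delta^{ij},
\qquad
\vert \xi\vert ^2
 \;\le\; \sum_{i,j}a^{ij}(\omega,t)\,\xi^i\xi^j
 \;=\; \big(2+\sin (w^1_t)\big)\vert \xi\vert ^2
 \;\le\; 3\,\vert \xi\vert ^2,
```

which is $\cP$-measurable (being adapted and continuous in $t$) and satisfies \eqref{uniform parabolicity} with $\nu_1=1$ and $\nu_2=3$.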
To explain our main result in the frame of weighted Sobolev regularity, we introduce some function spaces (cf. \cite{CKL 2019+, ConicPDE}). These spaces collect the functions whose weak derivatives can be measured with the help of appropriate weights consisting of powers of the distance to the vertex and of the distance to the boundary. Let us define $$ \rho_{\circ}(x)=\rho_{\circ,\cD}(x):=\vert x\vert ,\quad \quad \rho(x)=\rho_{\cD}(x):=d(x,\partial\cD). $$ For $p\in(1,\infty)$, $\theta\in\bR$ and $\Theta\in \bR$, we define $$
L_{p,\theta,\Theta}(\cD):=L_p(\cD,\rho_{\circ}^{\theta-\Theta}\rho^{\Theta-d}dx), $$ and for $m\in \bN_0$ define $$ K^m_{p,\theta,\Theta}(\cD):=\{f\, : \rho^{\vert \alpha\vert } D^{\alpha}f\in L_{p,\theta,\Theta}(\cD), \, \,\vert \alpha\vert \leq m \}. $$ The norm in $K^m_{p,\theta,\Theta}(\cD)$ is defined by \begin{equation}
\|f\|_{K^m_{p,\theta,\Theta}(\cD)} =\sum_{\vert \alpha\vert \leq m} \left(\int_{\cD} \vert \rho^{\vert \alpha\vert }D^{\alpha}f\vert ^p \rho_{\circ}^{\theta-\Theta}\rho^{\Theta-d}\,dx\right)^{1/p}. \label{space K norm} \end{equation}
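To see how the two weights act separately, consider the following model computation (ours, for illustration only). Let $f(x)=\vert x\vert ^{\lambda}\eta(x)$, where $\eta$ is a cutoff with $\eta=1$ near the vertex and bounded support, restricted to a subcone $\cD'\subset\cD$ on which $\rho\sim\rho_{\circ}$. Since $\vert D^{\alpha}f\vert \leq C\vert x\vert ^{\lambda-\vert \alpha\vert }$, each term of the norm \eqref{space K norm} taken over $\cD'$ is comparable to

```latex
\int_{\cD'} \vert x\vert ^{\lambda p}\,
 \rho_{\circ}^{\theta-\Theta}\rho^{\Theta-d}\,dx
 \;\sim\; \int^1_0 r^{\lambda p}\, r^{\theta-d}\, r^{d-1}\,dr
 \;=\; \int^1_0 r^{\lambda p+\theta-1}\,dr,
```

which is finite if and only if $\lambda>-\theta/p$; near the vertex the admissible blow-up is thus governed by $\theta$ alone.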
The space $K^m_{p,\theta,\Theta}(\cD)$ is related to the weighted Sobolev space $H^m_{p,\Theta}(\cD)$ introduced in \cite{KK2004, Krylov 1999-1,Lo1} as follows: $$ H^m_{p,\Theta}(\cD)=K^m_{p,\Theta,\Theta}(\cD),
$$ whose norm is given by \begin{equation}
\label{eqn 8.9.5}
\|f\|_{H^m_{p,\Theta}(\cD)}:=\sum_{\vert \alpha\vert \leq m} \left(\int_{\cD} \vert \rho^{\vert \alpha\vert }D^{\alpha}f\vert ^p \rho^{\Theta-d}\,dx\right)^{1/p}, \quad m\in \bN_0.
\end{equation}
Note that the weight of $H^m_{p,\Theta}(\cD)$ is based only on the distance to the boundary. Using the fact that for any $\mu\in \bR$ and multi-index $\alpha$ \begin{equation} \label{eqn 8.28.4} \sup_{x\in \cD} \rho^{\vert \alpha\vert -\mu}_{\circ} \vert D^{\alpha} \rho^{\mu}_{\circ}(x)\vert \leq C(\mu,\alpha)<\infty, \end{equation} one can easily check
$$f\in K^m_{p,\theta,\Theta}(\cD) \quad \text{ if and only if} \quad \rho^{(\theta-\Theta)/p}_{\circ}f\in H^m_{p,\Theta}(\cD), $$ and the norms in the corresponding spaces are equivalent, that is, \begin{equation}
\label{eqn 8.9.7}
\|f\|_{K^m_{p,\theta,\Theta}(\cD)}\sim \|\rho^{(\theta-\Theta)/p}_{\circ}f\|_{H^m_{p,\Theta}(\cD)}, \quad m\in \bN_0. \end{equation}
Below we use relation \eqref{eqn 8.9.7} to define $K^{\gamma}_{p,\theta,\Theta}(\cD)$ for all $\gamma \in \bR$.
Let $\psi=\psi_{\cD}$ be a smooth function in $\cD$ (see e.g. \cite[Lemma 4.13]{Ku})
such that for any $m\in \bN_0$, \begin{equation} \label{eqn 8.9.1} \psi_{\cD}(x)\sim \rho_{\cD}(x),\quad \rho^{m}_{\cD}\vert D^{m+1}\psi_{\cD}\vert \leq N(m)<\infty. \end{equation} Actually, such $\psi$ exists on any domain. Indeed, let $\cO$ be an arbitrary domain, put $\rho_{\cO}(x)=d(x,\partial \cO)$, and define \begin{equation} \label{eqn 8.25.1} \cO_{n,k}:=\{x\in \cO: e^{-n-k}<\rho_{\cO} (x)<e^{-n+k}\}. \end{equation} Then, mollifying $1_{\cO_{n,2}}$, one can easily construct $\xi_n$ such that $$ \xi_n \in \cC^{\infty}_c(\cO_{n,3}), \quad \vert D^m \xi_n\vert \leq C(m)e^{mn}, \quad \sum_{n\in \bZ} \xi_n(x) \sim 1, $$ and then one can take \begin{equation}\label{eqn 8.25.2} \psi=\psi_{\cO}=\sum_{n\in \bZ} e^{-n}\xi_n(x). \end{equation}
It is easy to check that $\psi=\psi_{\cO}$ satisfies \eqref{eqn 8.9.1} with $\rho_{\cO}$ in place of
$\rho_{\cD}$.
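As a sanity check of this construction, the one-dimensional case $\cO=(0,\infty)$, where $\rho_{\cO}(x)=x$, can be simulated numerically. The following sketch is ours and only illustrates \eqref{eqn 8.25.2}: it takes $\xi_n(x)=\phi(\log x+n)$ for a smooth bump $\phi$ supported in $(-3,3)$, so that $\xi_n$ is supported in $\cO_{n,3}=(e^{-n-3},e^{-n+3})$, and checks $\psi(x)\sim x$ over twelve decades.

```python
import numpy as np

def bump(t):
    """Smooth bump supported in (-3, 3), playing the role of xi_n
    after the substitution t = log x + n."""
    out = np.zeros_like(t)
    m = np.abs(t) < 3
    out[m] = np.exp(-1.0 / (9.0 - t[m] ** 2))
    return out

def psi(x, n_lo=-40, n_hi=40):
    """psi(x) = sum_n e^{-n} xi_n(x) with xi_n(x) = bump(log x + n);
    each xi_n is supported in O_{n,3} = (e^{-n-3}, e^{-n+3})."""
    s = np.zeros_like(x)
    for n in range(n_lo, n_hi + 1):
        s += np.exp(-float(n)) * bump(np.log(x) + n)
    return s

if __name__ == "__main__":
    x = np.logspace(-6, 6, 500)   # scales covered by n in [-40, 40]
    ratio = psi(x) / x            # should stay between two constants
    print(ratio.min(), ratio.max())
```

On this grid the ratio $\psi(x)/x$ stays within fixed positive bounds, which is exactly the first relation in \eqref{eqn 8.9.1} for this domain.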
Next we choose a nonnegative function $\zeta\in \cC^{\infty}_{c}(\bR_{+})$ such that $\zeta>0$ on $[e^{-1},e]$. Then, since the sum below is $1$-periodic in $t$, \begin{equation}
\label{11.4.1} \sum_{n=-\infty}^{\infty}\zeta(e^{n+t})>c>0,\quad\forall\; t\in\bR. \end{equation}
For $p\in(1,\infty)$ and $\gamma\in \bR$, by $H^{\gamma}_p=H^{\gamma}_p(\bR^d)$ we denote the Bessel potential space with the norm $$
\|u\|_{H^{\gamma}_p}:=\|(1-\Delta)^{\gamma/2}u\|_{L_p(\bR^d)}:=\|\cF^{-1}[(1+\vert \xi\vert ^2)^{\gamma/2} \cF(u)(\xi)]\|_{L_p(\bR^d)}. $$ In case $\gamma\in \bN_0$, $H^{\gamma}_p(\bR^d)$ coincides with $W^{\gamma}_p(\bR^d)$. The spaces of Bessel potentials enjoy the property $$
\|u\|_{H^{\gamma_1}_p}\le \|u\|_{H^{\gamma_2}_p},\quad \gamma_1\le \gamma_2. $$
In particular, we have $\|u\|_{L_p}\le \|u\|_{H^{\gamma}_p}$ for any $\gamma\ge 0$. For $\ell_2$-valued functions $g$ we also define $$
\|g\|_{H^{\gamma}_p(\ell_2)}:=\|\vert (1-\Delta)^{\gamma/2}g\vert _{\ell_2}\|_{L_p(\bR^d)}. $$ Moreover, for $\bR^d$-valued functions $\tbf=(f^1,\ldots,f^d)$ we define $$
\|\tbf\|_{H^{\gamma}_p(d)}:=\|\,\vert (1-\Delta)^{\gamma/2}\tbf\vert \,\|_{L_p(\bR^d)}. $$
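For $p=2$ the monotonicity of the Bessel potential norms can be read off on the Fourier side, since $\|u\|_{H^{\gamma}_2}^2=\frac{1}{2\pi}\int(1+\vert\xi\vert^2)^{\gamma}\vert\hat u(\xi)\vert^2\,d\xi$ and the multiplier is pointwise nondecreasing in $\gamma$. The following numerical sketch (ours, a discretized illustration only) approximates these norms for a Gaussian on a periodic grid.

```python
import numpy as np

def h_norm(u, gamma, L=20.0):
    """Discrete approximation of the H^gamma_2(R) norm of samples u on a
    uniform grid of length L, via the Fourier multiplier (1+|xi|^2)^{gamma/2}."""
    n = u.size
    xi = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)  # angular frequency grid
    uhat = np.fft.fft(u) * (L / n)                 # approximate Fourier transform
    weight = (1.0 + xi ** 2) ** gamma              # squared multiplier
    dxi = 2.0 * np.pi / L
    # Plancherel: ||u||_{H^gamma_2}^2 = (1/2pi) int (1+xi^2)^gamma |uhat|^2 dxi
    return np.sqrt(np.sum(weight * np.abs(uhat) ** 2) * dxi / (2.0 * np.pi))

if __name__ == "__main__":
    x = np.linspace(-10.0, 10.0, 1024, endpoint=False)
    u = np.exp(-x ** 2)                            # Gaussian test function
    norms = [h_norm(u, g) for g in (0.0, 0.5, 1.0, 2.0)]
    print(norms)                                   # nondecreasing in gamma
```

For $\gamma=0$ the value approaches $\|u\|_{L_2}=(\pi/2)^{1/4}$, and the sequence of norms increases with $\gamma$, illustrating the stated monotonicity.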
From now on, if a function defined on a domain $\cO$ vanishes near the boundary of $\cO$, then by a trivial extension we consider it as a function defined on $\bR^d$. In particular, for any $k\in \bZ$ and a function $f$ on $\cO$, the function $\zeta(e^{-k}\psi_{\cO}(x))f(x)$ has a compact support in $\cO$ and can be considered as a function on $\bR^d$.
\begin{defn} \label{defn 8.28} Let $p\in(1,\infty), \Theta, \gamma\in \bR$, and $\cO$ be a domain in $\bR^d$. By $H^{\gamma}_{p,\Theta}(\cO)$ we denote the class of all distributions $f$ on $\cO$ such that \begin{equation}
\label{eqn 8.10.14}
\|f\|^p_{H^{\gamma}_{p,\Theta}(\cO)}:= \sum_{n\in \bZ} e^{n\Theta} \|\zeta(e^{-n}\psi(e^n\cdot))f(e^{n}\cdot)\|^p_{H^{\gamma}_p(\bR^d)}<\infty, \end{equation} where $\psi=\psi_{\cO}$ is taken from \eqref{eqn 8.25.2}. Similarly, $H^{\gamma}_{p,\Theta}(\cO;\ell_2)$ is the set of $\ell_2$-valued functions $g$ such that \begin{equation*}
\|g\|^p_{H^{\gamma}_{p,\Theta}(\cO;\ell_2)}:= \sum_{n\in \bZ} e^{n\Theta} \|\zeta(e^{-n}\psi(e^n\cdot))g(e^{n}\cdot)\|^p_{H^{\gamma}_p(\bR^d;\ell_2)}<\infty. \end{equation*} \end{defn}
It turns out (see \cite[Proposition 2.2]{Lo1} or \cite[Lemma 4.3]{ConicPDE}) that the new norm in \eqref{eqn 8.10.14} is equivalent to the norm in \eqref{eqn 8.9.5} if $\gamma\in \bN_0$. In other words,
for $\gamma \in \bN_0$, \begin{equation} \label{eqn 8.9.8}
\sum_{n\in \bZ} e^{n\Theta} \|\zeta(e^{-n}\psi_{\cO} (e^n\cdot))f(e^{n}\cdot)\|^p_{H^{\gamma}_p} \quad \sim \quad \sum_{\vert \alpha\vert \leq \gamma} \int_{\cO} \vert \rho^{\vert \alpha\vert }D^{\alpha}f\vert ^p \rho^{\Theta-d}\,dx, \end{equation} and the constants in the equivalence depend only on $p,\gamma, \Theta, d, \zeta, \psi$ and $\cO$.
Now we use equivalence relations \eqref{eqn 8.9.7} and \eqref{eqn 8.9.8}, and define $K^{\gamma}_{p,\theta,\Theta}(\cD)$ for any chosen $\gamma\in \bR$.
\begin{defn} \label{defn 8.19} Let $p\in (1,\infty), \theta, \Theta, \gamma \in \bR$, and $\cD$ be a conic domain in $\bR^d$.
We write $f\in K^{\gamma}_{p,\theta,\Theta}(\cD)$ if and only if $\rho^{(\theta-\Theta)/p}_{\circ} f\in H^{\gamma}_{p,\Theta}(\cD)$, and define \begin{equation}
\label{eqn 8.10.1}
\|f\|_{ K^{\gamma}_{p,\theta,\Theta}(\cD)} := \|\rho^{(\theta-\Theta)/p}_{\circ} f\|_{H^{\gamma}_{p,\Theta}(\cD)}. \end{equation} The space $K^{\gamma}_{p,\theta,\Theta}(\cD;\ell_2)$ and its norm are defined similarly. Also we write $\tbf=(f^1,f^2,\cdots,f^d)\in K^{\gamma}_{p,\theta,\Theta}(\cD; \bR^d)$ if $$
\|\tbf\|_{K^{\gamma}_{p,\theta,\Theta}(\cD; \bR^d)}:=\sum_{i=1}^d \|f^i\|_{K^{\gamma}_{p,\theta,\Theta}(\cD)}<\infty. $$ \end{defn}
Note that the new norm of the space $K^{\gamma}_{p,\theta,\Theta}(\cD)$ is equivalent to the previous one if $\gamma\in \bN_0$. Below we collect some basic properties of the space $K^{\gamma}_{p,\theta,\Theta}(\cD)$.
\begin{lemma}\label{property1} Let $p\in(1,\infty)$ and $\theta, \Theta, \gamma \in\bR$.
(i) For a domain $\cO$ and $\eta\in \cC^{\infty}_c(\bR_+)$, \begin{equation}
\label{eqn 4.24.5}
\sum_{n \in\bZ}
e^{n\Theta} \|\eta(e^{-n}\psi_{\cO} (e^n\cdot))f(e^{n}\cdot)\|^p_{H^{\gamma}_p} \leq C(p,\Theta,d,\gamma, \eta, \cO) \|f\|_{H^{\gamma}_{p,\Theta}(\cO)}^{p}. \end{equation} The reverse inequality also holds if $\eta$ satisfies \eqref{11.4.1}.
Moreover, the same statements hold for $\ell_2$-valued functions.
(ii) $\cC_c^{\infty}(\cD)$ is dense in $K^{\gamma}_{p,\theta,\Theta}(\cD)$.
(iii) For any $\mu \in \bR$,
\begin{equation}
\label{eqn 8.19.81}
\|\psi^{\mu}f\|_{K^{\gamma}_{p,\theta,\Theta}(\cD)}\sim \|f\|_{K^{\gamma}_{p,\theta+\mu p, \Theta+\mu p}(\cD)},
\end{equation} where $\psi$ satisfies \eqref{eqn 8.9.1}. The same statement holds for $\ell_2$-valued functions.
(iv) (Pointwise multiplier) Let $\gamma\in \bR$, $n\in \bN_0$ with $\vert \gamma\vert \leq n$. If $\vert a\vert ^{(0)}_n:=\sup_{\cD} \sum_{\vert \alpha\vert \leq \vert n\vert } \rho^{\vert \alpha\vert }\vert D^{\alpha}a\vert <\infty$, then \begin{equation}
\label{eqn 8.19.11}
\|af\|_{K^{\gamma}_{p,\theta,\Theta}(\cD)}\leq C(n,p,d)\vert a\vert ^{(0)}_n \|f\|_{K^{\gamma}_{p,\theta,\Theta}(\cD)}. \end{equation}
(v) The operator $D_i:K^{\gamma}_{p,\theta,\Theta}(\cD)\to K^{\gamma-1}_{p,\theta+p,\Theta+p}(\cD)$ is bounded for any $i=1,\ldots,d$. In general, for any multi-index $\alpha$ we have \begin{align}
\label{eqn 4.16.1}
\|D^{\alpha}f\|_{K^{\gamma-\vert \alpha\vert }_{p,\theta+\vert \alpha\vert p,\Theta+\vert \alpha\vert p}(\cD)}\leq C \|f\|_{K^{\gamma}_{p,\theta,\Theta}(\cD)}. \end{align}
The same statement holds for $\ell_2$-valued functions.
(vi) (Sobolev-H\"older embedding) Let $\gamma-\frac{d}{p}\geq n+\delta$, where $n\in \bN_0$ and $\delta\in (0,1)$.
Then for any $f\in K^{\gamma}_{p,\theta-p,\Theta-p}(\cD)$,
\begin{eqnarray} &&\sum_{k\leq n} \vert \rho^{k-1+\frac{\Theta}{p}} \rho^{(\theta-\Theta)/p}_{\circ} D^{k}f\vert _{\cC(\cD)} \nonumber \\
&&\quad + [\rho^{n-1+\delta+\frac{\Theta}{p}} \rho^{(\theta-\Theta)/p}_{\circ} D^{n} f]_{\cC^{\delta}(\cD)} \leq C \|f\|_{K^{\gamma}_{p,\theta-p,\Theta-p}(\cD)}, \label{eqn 8.21.1}
\end{eqnarray}
where $C=C(d,\gamma,p,\theta,\Theta,\cM)$.
\end{lemma} \begin{proof} All the results follow from Definition \ref{defn 8.19} and properties of the weighted Sobolev space $H^{\gamma}_{p,\Theta}(\cO)$ (cf. \cite{Lo1,Krylov 1999-1, Krylov 2001,KK2004}). See e.g. \cite[Proposition 2.2]{Lo1} for (i)-(iii) and see \cite[Theorem 3.1]{Lo1} for (iv).
To prove (v), we put $\xi=\rho^{(\theta-\Theta)/p}_{\circ}$. Then, using $\xi Df=D(\xi f)-\xi (\xi^{-1}D \xi) f$ and \eqref{eqn 8.10.1}, we get $$
\|Df\|_{K^{\gamma-1}_{p,\theta+p,\Theta+p}(\cD)}\leq \|D(\xi f)\|_{H^{\gamma-1}_{p,\Theta+p}(\cD)}+\|(\xi^{-1}D \xi) f\|_{K^{\gamma-1}_{p,\theta+p,\Theta+p}(\cD)}. $$ By \cite[Theorem 3.1]{Lo1},
$$\|D(\xi f)\|_{H^{\gamma-1}_{p,\Theta+p}(\cD)} \leq C \|\xi f\|_{H^{\gamma}_{p,\Theta}(\cD)}=C\|f\|_{K^{\gamma}_{p,\theta,\Theta}(\cD)}. $$ Using \eqref{eqn 8.28.4}, one can check $\vert \psi \xi^{-1}D\xi\vert ^{(0)}_m<\infty$ for any $m\in \bN$. Thus, by \eqref{eqn 8.19.81} and \eqref{eqn 8.19.11}, \begin{eqnarray*}
\|(\xi^{-1}D \xi) f\|_{K^{\gamma-1}_{p,\theta+p,\Theta+p}(\cD)}
&\leq&C\| (\psi \xi^{-1}D \xi) f\|_{K^{\gamma-1}_{p,\theta,\Theta}(\cD)}\leq C \|f\|_{K^{\gamma-1}_{p,\theta,\Theta}(\cD)}. \end{eqnarray*} Thus (v) is proved.
Finally we prove (vi). Put $g=\xi f$. Then by \cite[Theorem 4.3]{Lo1}, \begin{equation}
\label{eqn 8.28.8} \sum_{k\leq n} \vert \rho^{k-1+\frac{\Theta}{p}} D^{k}g\vert _{\cC(\cD)}
+ [\rho^{n-1+\delta+\frac{\Theta}{p}} D^{n} g]_{\cC^{\delta}(\cD)} \leq C \|g\|_{H^{\gamma}_{p,\Theta-p}(\cD)}. \end{equation} Hence, to prove (vi), it is enough to note that the left hand side of \eqref{eqn 8.21.1} is bounded by a constant times of the left hand side of \eqref{eqn 8.28.8}. The lemma is proved. \end{proof}
Using the aforementioned spaces, we now introduce the function spaces for the solutions $u$ to equation \eqref{stochastic parabolic equation} as well as the function spaces for the inputs $f^0,\tbf$, and $g$. To make equation \eqref{stochastic parabolic equation} well-defined, we restrict $p\in [2,\infty)$; see Remark \ref{sto series} ($i$) below. With such $p$ and a fixed time $T\in(0,\infty)$ we first define \begin{eqnarray*}\label{entire} &&\bH^{\gamma}_{p}(T):=L_p(\Omega\times (0,T], \cP ; H^{\gamma}_p),\\ &&\bH^{\gamma}_{p}(T,\ell_2):=L_p(\Omega\times (0,T], \cP ; H^{\gamma}_p(\ell_2)). \end{eqnarray*} Next, for $\theta, \Theta, \gamma \in\bR$ we define the function spaces \begin{eqnarray*} &&\bK^{\gamma}_{p,\theta,\Theta}(\cD,T)\,:=L_p(\Omega\times (0,T], \cP;K^{\gamma}_{p,\theta,\Theta}(\cD)),\\ &&\bK^{\gamma}_{p,\theta,\Theta}(\cD,T,d)\,:=L_p(\Omega\times (0,T], \cP;K^{\gamma}_{p,\theta,\Theta}(\cD; \bR^d)),\\ &&\bK^{\gamma}_{p,\theta,\Theta}(\cD,T, \ell_2)\,:=L_p(\Omega \times (0,T], \cP;K^{\gamma}_{p,\theta,\Theta}(\cD;\ell_2)), \end{eqnarray*} and denote $$ \bL_{p,\theta,\Theta}(\cD,T):=\bK^0_{p,\theta,\Theta}(\cD,T),\quad \bL_{p,\theta,\Theta}(\cD,T,d):=\bK^0_{p,\theta,\Theta}(\cD,T,d), $$ $$ \bL_{p,\theta,\Theta}(\cD,T, \ell_2):=\bK^0_{p,\theta,\Theta}(\cD,T, \ell_2). $$ Also, by $\bK^{\infty}_c(\cD,T)$ we denote the space of all functions $f$ of the form \begin{equation*}
f(\omega,t,x)=\sum^m_{i=1}{\bf{1}}_{(\tau_{i-1}(\omega),\tau_i(\omega)]}(t)f_i(x), \end{equation*} where $\tau_0\le \cdots\le \tau_m$ is a finite sequence of bounded stopping times with respect to the filtration $(\rF_t)_{t\geq 0}$, and $f_i\in \cC^{\infty}_c(\cD)$, $i=1,\ldots,m$. Similarly, we define $\bK^{\infty}_c(\cD,T, \ell_2)$ as the space of $\ell_2$-valued functions $g=(g^1,g^2, \ldots)$ such that the first finite number of $g^k$ are in $\bK^{\infty}_c(\cD,T)$ and the rest are all identically zero. We also define $\bK^{\infty}_c(\cD,T, d)$ for $\bR^d$-valued functions $\tbf=(f^1,\ldots,f^d)$ in the same manner. Moreover, by $\bK^{\infty}_c(\cD)$ we denote the space of all functions $f$ of the form \begin{equation*}
f(\omega,x)=\sum^m_{i=1}{\bf{1}}_{A_i}(\omega)f_i(x), \end{equation*} where $A_i\in\rF_0$ and $f_i\in \cC^{\infty}_c(\cD)$, $i=1,\ldots,m$.
\begin{remark}\label{dense space} For any $\theta, \Theta, \gamma \in \bR$, $\bK^{\infty}_c(\cD,T)$ is dense in $\bK^{\gamma}_{p,\theta,\Theta}(\cD,T)$ and so is $\bK^{\infty}_c(\cD,T,\ell_2)$ in $\bK^{\gamma}_{p,\theta,\Theta}(\cD,T,\ell_2)$. Indeed, by the definition of $\cP$, any function $f\in \bK^{\gamma}_{p,\theta,\Theta}(\cD,T)$ can be approximated by functions of the type $$ \sum_{i=1}^m 1_{(\tau_i(\omega), \tau_{i+1}(\omega)]}(t) h_i(x), $$ where the $\tau_i$ are bounded stopping times and $h_i\in K^{\gamma}_{p,\theta,\Theta}(\cD)$, $i=1,\ldots,m$. Thus the claim follows from Lemma \ref{property1} (ii). Similarly, $\bK^{\infty}_c(\cD)$ is dense in $L_p(\Omega;K^{\gamma}_{p,\theta,\Theta}(\cD)):=L_p(\Omega, \rF_0,\bP;K^{\gamma}_{p,\theta,\Theta}(\cD))$.
\end{remark}
From now on we will also use the notation $$U^{\gamma+2}_{p,\theta,\Theta}(\cD):=K^{\gamma+2-2/p}_{p,\theta+2-p,\Theta+2-p}(\cD). $$ The following definition frames the spaces for the solutions of our SPDE. \begin{defn}\label{first spaces}
Let $p\in[2,\infty)$ and $\theta, \Theta,\gamma \in\bR$. We write $u\in\cK^{\gamma+2}_{p,\theta,\Theta}(\cD,T)$ if
$u \in \bK^{\gamma+2}_{p,\theta-p,\Theta-p}(\mathcal{D},T)$, $u(0,\cdot)\in \bU^{\gamma+2}_{p,\theta,\Theta}(\cD):=L_p(\Omega,\rF_0,\bP;U^{\gamma+2}_{p,\theta,\Theta}(\cD))$, and there exists $(\tilde{f}, \tilde{g}) \in\bK^{\gamma}_{p,\theta+p,\Theta+p}(\mathcal{D},T)\times \bK^{\gamma+1}_{p,\theta,\Theta}(\cD,T, \ell_2)$ such that \begin{align} du=\tilde{f}\,dt+\sum_k \tilde{g}^kdw^k_t,\quad t\in(0,T] \nonumber \end{align}
in the sense of distributions on $\cD$, that is, for any $\varphi\in \cC_c^{\infty}(\cD)$ the equality \begin{align}
\label{eqn sol} (u(t,\cdot),\varphi)=(u(0,\cdot),\varphi)+\int^{t}_{0}(\tilde{f}(s,\cdot),\varphi)ds+\sum_{k=1}^{\infty}\int^t_0(\tilde{g}^k(s,\cdot),\varphi)dw^k_s \end{align} holds for all $t\in (0,T]$ (a.s.). In this case we write \begin{align*} \bD u:=\tilde{f}\quad\text{and}\quad \bS u:=\tilde{g}. \end{align*} The norm in $\cK^{\gamma+2}_{p,\theta,\Theta}(\cD,T)$ is given by \begin{align*}
\|u\|_{\cK^{\gamma+2}_{p,\theta,\Theta}(\cD,T)}&=\|u\|_{\bK^{\gamma+2}_{p,\theta-p,\Theta-p}(\cD,T)}+\|\bD u\|_{\bK^{\gamma}_{p,\theta+p,\Theta+p}(\cD,T)}+\|\bS u\|_{\bK^{\gamma+1}_{p,\theta,\Theta}(\cD,T,\ell_2)}\\
&\quad +\|u(0,\cdot)\|_{\bU^{\gamma+2}_{p,\theta,\Theta}(\cD)}. \end{align*} \end{defn}
\begin{remark}
\label{sol}
Let us go back to our main equation \eqref{stochastic parabolic equation}. Let $f^0\in \bK^{\gamma}_{p,\theta+p,\Theta+p}(\cD,T)$,
$\tbf=(f^1,\cdots,f^d)\in \bK^{\gamma+1}_{p,\theta,\Theta}(\cD,T, d)$, $g\in \bK^{\gamma+1}_{p,\theta,\Theta}(\cD,T, \ell_2)$, $u(0,\cdot)\in \bU^{\gamma+2}_{p,\theta,\Theta}(\cD)$, and $u$ belong to $ \bK^{\gamma+2}_{p,\theta-p,\Theta-p}(\cD,T)$ and be a solution to equation \eqref{stochastic parabolic equation}, that is, $u$ satisfies
$$ d u =\left( \cL u+f^0+\sum_{i=1}^d f^i_{x^i}\right)dt +\sum^{\infty}_{k=1} g^kdw_t^k,\quad t\in(0,T] $$
in the sense of distributions on $\cD$. Then by \eqref{eqn 4.16.1} in Lemma \ref{property1} ($v$), we have
$$
\cL u+f^0+\sum_{i=1}^d f^i_{x^i}\in \bK^{\gamma}_{p,\theta+p,\Theta+p}(\cD,T)
$$
and consequently $u$ belongs to $\cK^{\gamma+2}_{p,\theta,\Theta}(\cD,T)$ with the accompanying inequality
\begin{eqnarray}
&& \|u\|_{\cK^{\gamma+2}_{p,\theta,\Theta}(\cD,T)}\nonumber\\ &\leq& C \Big(\|u\|_{\bK^{\gamma+2}_{p,\theta-p,\Theta-p}(\cD,T)}+ \|f^0\|_{\bK^{\gamma}_{p,\theta+p,\Theta+p}(\cD,T)}+ \sum_{i=1}^d \|f^i\|_{\bK^{\gamma+1}_{p,\theta,\Theta}(\cD,T)} \nonumber \\
&&\quad \quad +\|g\|_{\bK^{\gamma+1}_{p,\theta,\Theta}(\cD,T,\ell_2)}+\|u(0,\cdot)\|_{\bU^{\gamma+2}_{p,\theta,\Theta}(\cD)}\Big). \label{eqiv norm}
\end{eqnarray}
\end{remark}
\begin{remark}
\label{sto series}
(i) Note that for any $m,n\in \bN$ with $m>n$, the quadratic variation of the continuous martingale $\sum_{k=n}^m \int^t_0(\tilde{g}^k, \varphi) dw^k_s$ is $\sum_{k=n}^m \int^t_0 (\tilde{g}^k(s), \varphi)^2ds$. Following the lines in \cite[Remark 3.2]{Krylov 1999-4} and using the condition $p\geq 2$, one can easily check $$
\bE\Big(\sum_{k=1}^{\infty} \int^T_0 (\tilde{g}^k(t),\varphi)^2 dt\Big)^{p/2}
\leq N(\varphi,p,T) \|\tilde{g}\|^p_{\bL_{p,\theta,\Theta}(\cD,T,\ell_2)}, $$ which implies that the infinite series $\sum_{k=1}^{\infty}\int^t_0 (\tilde{g}^k(s),\varphi)dw^k_s$ converges in $L_2\big(\Omega;\cC([0,T])\big)$ and in probability uniformly in $t\in [0,T]$. As a consequence, $(u(t,\cdot),\varphi)$ in \eqref{eqn sol} is a continuous semi-martingale on $[0,T]$.
(ii) In Definition~\ref{first spaces}, $\bD u$ and $\bS u$ are uniquely determined. This can be seen by using the same arguments in \cite[Remark 3.3]{Krylov 1999-4}. \end{remark}
\begin{thm} \label{banach} For any $p\in [2,\infty)$ and $\theta,\Theta, \gamma\in \bR$, $\cK^{\gamma+2}_{p,\theta,\Theta}(\cD,T)$ is a Banach space. \end{thm} \begin{proof} We only need to prove the completeness. This can be proved by repeating the argument in Remark 3.8 of \cite{Krylov 2001}, which treats the case $\theta=\Theta$ and $\cD=\bR^d_+$. The argument in that proof is quite universal and, without any changes, works on any conic domain $\cD$ with any $\theta,\Theta\in \bR$. \end{proof}
The following theorem addresses important temporal properties of the functions in $\cK^{\gamma+2}_{p,\theta,\Theta}(\cD,T)$. See Section \ref{sec:Introduction} for the notations $[\cdot]_{\cC^{\alpha}}$ and $\vert \cdot\vert _{\cC^{\alpha}}$.
\begin{thm} \label{embedding} Let $p\in[2,\infty)$ and $\theta,\Theta, \gamma\in \bR$.
(i) If $2/p<\alpha<\beta \leq 1$, then for any $u\in \cK^{\gamma+2}_{p,\theta,\Theta}(\cD,T)$, \begin{eqnarray}
&&\bE [\psi^{\beta-1}u]^p_{\cC^{\alpha/2-1/p}\left([0,T]; K^{\gamma+2-\beta}_{p,\theta,\Theta}(\cD)\right)}\leq C\,T^{(\beta-\alpha)p/2}\|u\|^p_{\cK^{\gamma+2}_{p,\theta,\Theta}(\cD,T)}, \label{eqn 8.10.10}\label{Holder1} \end{eqnarray} and in addition, if $\psi^{\beta-1}u(0,\cdot)\in L_p(\Omega;K^{\gamma+2-\beta}_{p,\theta,\Theta}(\cD))$, \begin{eqnarray}
\bE \vert \psi^{\beta-1}u\vert ^p_{\cC\left([0,T]; K^{\gamma+2-\beta}_{p,\theta,\Theta}(\cD)\right)}&\leq& C\bE\|\psi^{\beta-1}u(0,\cdot)\|^p_{K^{\gamma+2-\beta}_{p,\theta,\Theta}(\cD)}\nonumber\\
&& \quad+C T^{p\beta/2-1}\|u\|^p_{\cK^{\gamma+2}_{p,\theta,\Theta}(\cD,T)},\label{Holder2} \end{eqnarray} where $\psi$ satisfies \eqref{eqn 8.9.1} and constants $C$ are independent of $T$ and $u$.
(ii) For any $u\in \cK^{\gamma+2}_{p,\theta,\Theta}(\cD,T)$ with $u(0,\cdot)= 0$, $u$ belongs to $ L_p(\Omega; \cC([0,T]; K^{\gamma+1}_{p,\theta,\Theta}(\cD)))$ and $$
\bE \sup_{t\leq T} \|u(t)\|^p_{K^{\gamma+1}_{p,\theta,\Theta}(\cD)}\leq C \|u\|^p_{\cK^{\gamma+2}_{p,\theta,\Theta}(\cD,T)}, $$ where $C=C(d,p,n,\theta,\Theta,\cD, T)$. In particular, for any $t\leq T$, \begin{equation}
\label{eqn 8.25.31}
\|u\|^p_{\bK^{\gamma+1}_{p,\theta,\Theta}(\cD,t)}\leq \int^t_0 \bE\sup_{r\leq s} \|u(r)\|^p_{K^{\gamma+1}_{p,\theta,\Theta}(\cD)} ds \leq
C\int^t_0 \|u\|^p_{\cK^{\gamma+2}_{p,\theta,\Theta}(\cD,s)}ds. \end{equation} \end{thm}
\begin{proof} We follow the argument in \cite[Section 6]{Krylov 2001} (or the proof of \cite[Theorem 2.8]{Kim2014}), using \cite[Corollary 4.12]{Krylov 2001}.
(i). As usual, we suppress the argument $\omega$. Put $\xi(x)=|x|^{(\theta-\Theta)/p}$ and set $v=\xi u$, $\bar{f}=\xi \bD u$, $\bar{g}=\xi \bS u$. Then we have $$ dv=\bar{f}dt+\sum_{k=1}^{\infty} \bar{g}^k dw^k_t, \quad t\in(0,T] $$ in the sense of distributions on $\cD$ with the initial condition $v(0,\cdot)=\xi u(0,\cdot)$. By \eqref{eqn 8.19.81} and Definition \ref{defn 8.19}, we have \begin{eqnarray} I_1&:=&\bE\left[\psi^{\beta-1}u\right]^p_{\cC^{\alpha/2-1/p}([0,T],K^{\gamma+2-\beta}_{p,\theta, \Theta}(\cD))} \nonumber\\ &\sim& \bE\left[v\right]^p_{\cC^{\alpha/2-1/p}([0,T],H^{\gamma+2-\beta}_{p,\Theta+p(\beta-1)}(\cD))} \label{eqn 8.20.11}\\ &\leq& C\sum_n e^{n(\Theta+p(\beta-1))}\bE \left[v(\cdot,e^n\cdot)\zeta(e^{-n}\psi(e^n\cdot))\right]^p_{\cC^{\alpha/2-1/p}([0,T];H^{\gamma+2-\beta}_{p})}. \nonumber \end{eqnarray} Now, by assumption, the function $v_n(t,x):=v(t,e^nx)\zeta(e^{-n}\psi(e^nx))$ belongs to $\bH^{\gamma+2}_p(T)$ and satisfies \begin{equation}
\label{eqn 08.31.1} dv_n=\bar{f}(t,e^nx)\zeta(e^{-n}\psi(e^nx))dt+ \sum_{k=1}^{\infty} \bar{g}^k(t,e^nx) \zeta(e^{-n}\psi(e^nx)) dw^k_t, \quad t>0 \end{equation}
on the entire space $\bR^d$. Then, by \cite[Corollary 4.12]{Krylov 2001} and \eqref{eqn 08.31.1}, there exists a constant $N>0$, independent of $T$ and $u$, so that for any constant $a>0$, \begin{eqnarray*} &&\bE\left[v(\cdot,e^n\cdot)\zeta(e^{-n}\psi(e^n\cdot))\right]^p_{\cC^{\alpha/2-1/p}([0,T];H^{\gamma+2-\beta}_{p})}\\ &\leq&
C\,T^{(\beta-\alpha)p/2}a^{\beta-1} \Big(a\|v(\cdot,e^n\cdot)\zeta(e^{-n}\psi(e^n\cdot))\|^p_{\bH^{\gamma+2}_{p}(T)}\\
&&\hspace{1cm}+\,a^{-1}\|\bar{f}(\cdot,e^n\cdot)\zeta(e^{-n}\psi(e^n\cdot))\|^p_{\bH^{\gamma}_{p}(T)}+\|\bar{g}^k(\cdot,e^n\cdot) \zeta(e^{-n}\psi(e^n\cdot)) \|^p_{\bH^{\gamma+1}_{p}(T,\ell_2)} \Big) \end{eqnarray*} holds. Taking $a=e^{-np}$, we note that (\ref{eqn 8.20.11}) yields \begin{eqnarray} \nonumber I_1 &\leq& C\,T^{(\beta-\alpha)p/2}\Big(\sum_n
e^{n(\Theta-p)}\|v(\cdot,e^n\cdot)\zeta(e^{-n}\psi(e^n\cdot))\|^p_{\bH^{\gamma+2}_p(T)}\\ \nonumber &&\quad\quad+ \sum_n e^{n(\Theta+p)}
\|\bar{f}(\cdot,e^n\cdot)\zeta(e^{-n}\psi(e^n\cdot))\|^p_{\bH^{\gamma}_{p}(T)}\\
&&\quad \quad+\sum_n e^{n\Theta}\|\bar{g}^k(\cdot,e^n\cdot) \zeta(e^{-n}\psi(e^n\cdot)) \|^p_{\bH^{\gamma+1}_p(T,\ell_2)} \Big) \nonumber\\
&=& C\,T^{(\beta-\alpha)p/2} \Big(\|u\|^p_{\bK^{\gamma+2}_{p,\theta-p, \Theta-p}(\cD,T)}
+\|\bD u\|^p_{\bK^{\gamma}_{p,\theta+p,\Theta+p}(\cD,T)} +\|\bS u\|^p_{\bK^{\gamma+1}_{p,\theta, \Theta}(\cD,T,\ell_2)}\Big) \nonumber\\
&\le& C\, T^{(\beta-\alpha)p/2}\|u\|^p_{\cK^{\gamma+2}_{p,\theta, \Theta}(\cD,T)}. \nonumber \end{eqnarray} Thus \eqref{Holder1} is proved.
If $\psi^{\beta-1}u(0,\cdot)\in L_p(\Omega;K^{\gamma+2-\beta}_{p,\theta,\Theta}(\cD))$, then we note that $\psi^{\beta-1}u$ belongs to $\cC\left([0,T]; K^{\gamma+2-\beta}_{p,\theta,\Theta}(\cD)\right)$ now. For estimate \eqref{Holder2}, we have \begin{eqnarray} I_2&:=&\bE \vert \psi^{\beta-1}u\vert ^p_{\cC\big([0,T]; K^{\gamma+2-\beta}_{p,\theta,\Theta}(\cD)\big)}\nonumber\\ &\leq& C\,\sum_n e^{n(\Theta+p(\beta-1))}\bE \vert v(\cdot,e^n\cdot)\zeta(e^{-n}\psi(e^n\cdot))\vert ^p_{\cC([0,T];H^{\gamma+2-\beta}_{p})} \label{Holder2 proof} \end{eqnarray} and by \cite[Corollary 4.12]{Krylov 2001} again, for any constant $a>0$, \begin{eqnarray*} &&\bE\vert v(\cdot,e^n\cdot)\zeta(e^{-n}\psi(e^n\cdot))\vert ^p_{\cC\big([0,T];H^{\gamma+2-\beta}_{p}\big)}\\ &\leq&
C\,\bE\|v(0,e^n\cdot)\zeta(e^{-n}\psi(e^n\cdot))\|^p_{H^{\gamma+2-\beta}_{p}}\\
&&+C\,T^{p\beta/2-1}a^{\beta-1} \Big(a\|v(\cdot,e^n\cdot)\zeta(e^{-n}\psi(e^n\cdot))\|^p_{\bH^{\gamma+2}_{p}(T)}\\
&&\hspace{0.7cm}+a^{-1}\|\bar{f}(\cdot,e^n\cdot)\zeta(e^{-n}\psi(e^n\cdot))\|^p_{\bH^{\gamma}_{p}(T)}+\|\bar{g}^k(\cdot,e^n\cdot) \zeta(e^{-n}\psi(e^n\cdot)) \|^p_{\bH^{\gamma+1}_{p}(T,\ell_2)} \Big). \end{eqnarray*} This, \eqref{Holder2 proof}, and the same argument above, especially the adjustment $a=e^{-np}$ for each $n$, lead us to \eqref{Holder2}.
(ii). We use the notations used in (i). Obviously, $$
\bE\sup_{t\leq T}\|u(t)\|^p_{K^{\gamma+1}_{p,\theta,\Theta}(\cD)}\leq C\, \sum_ne^{n\Theta} \bE \sup_{t\leq T}
\|v(t,e^n\cdot)\zeta(e^{-n}\psi(e^n\cdot))\|^p_{H^{\gamma+1}_{p}}. $$
By Remark 4.14 in \cite{Krylov 2001} with $\beta=1$ there, $v_n \in L_p(\Omega; \cC([0,T]; H^{\gamma+1}_p))$ and for any $a>0$, \begin{align*} \bE \sup_{t\leq T}
\|v(t,e^n\cdot)\zeta(e^{-n}\psi(e^n\cdot))\|^p_{H^{\gamma+1}_{p}}\leq C\, \Big(a\|v(\cdot,e^n\cdot)\zeta(e^{-n}\psi(e^n\cdot))\|^p_{\bH^{\gamma+2}_{p}(T)}&\\
+a^{-1}\|\bar{f}(\cdot,e^n\cdot)\zeta(e^{-n}\psi(e^n\cdot))\|^p_{\bH^{\gamma}_{p}(T)}
+\|\bar{g}^k(\cdot,e^n\cdot) \zeta(e^{-n}\psi(e^n\cdot)) \|^p_{\bH^{\gamma+1}_{p}(T,\ell_2)}&\Big). \end{align*} Again, taking $a=e^{-np}$ and following the above arguments, we get \begin{eqnarray*}
&&\bE\sup_{t\leq T}\|u(t)\|^p_{K^{\gamma+1}_{p,\theta, \Theta}(\cD)} \\
&&\leq C\, \Big(\|u\|^p_{\bK^{\gamma+2}_{p,\theta-p, \Theta-p}(\cD,T)}
+\|\bD u\|^p_{\bK^{\gamma}_{p,\theta+p, \Theta+p}(\cD,T)}+\|\bS u\|^p_{\bK^{\gamma+1}_{p,\theta,\Theta}(\cD,T,\ell_2)}\Big)\\
&&= C\,\|u\|^p_{\cK^{\gamma+2}_{p,\theta,\Theta}(\cD,T)}. \end{eqnarray*} The theorem is proved. \end{proof}
\begin{remark}\label{additional condition initial} The additional condition $\psi^{\beta-1}u(0,\cdot)\in L_p(\Omega;K^{\gamma+2-\beta}_{p,\theta,\Theta}(\cD))$ for \eqref{Holder2} does not follow from the assumption $u\in\cK^{\gamma+2}_{p,\theta,\Theta}(\cD,T)$. This condition is unnecessary when we prove the corresponding result on polygonal domains. See Remark \ref{remark 8.29} for details.
\end{remark}
\begin{remark}
Theorems \ref{banach} and \ref{embedding} hold for any $\theta, \Theta\in \bR$, but certain restrictions will be given later for our main results, Theorems \ref{main result} and \ref{main result-random}. Actually the admissible range of $\theta$ for our Sobolev-regularity theory of equation \eqref{stochastic parabolic equation} is affected by \emph{the shape of} $\cD=\cD(\cM)$, the uniform parabolicity of the leading coefficients, the space dimension $d$, and the summability parameter $p$. On the other hand, the admissible range of $\Theta$ depends only on $d$ and $p$, that is, \begin{equation*}
d-1<\Theta<d-1+p. \end{equation*}
\end{remark}
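For concreteness (our remark, not needed later): when $d=2$ and $p=2$ the window for $\Theta$ reads

```latex
d-1<\Theta<d-1+p
\quad\Longrightarrow\quad
1<\Theta<3,
```

and the midpoint $\Theta=d=2$ corresponds to the unweighted case, since then $\rho^{\Theta-d}\equiv 1$ in \eqref{space K norm}.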
To explain the admissible range of $\theta$ for equation \eqref{stochastic parabolic equation} we need the following definitions. For some of the notations in them one can refer to Section \ref{sec:Introduction}.
\begin{defn}[cf. Section 2 of \cite{Kozlov Nazarov 2014}] \label{lambda}
Let $L=\sum_{i,j=1}^d \alpha^{ij}(t)D_{ij}$ be a uniformly parabolic ``deterministic'' operator with bounded coefficients $\alpha^{ij}$.
(i) By $\lambda^+_{c,L}=\lambda^+_{c,L,\cD}$ we denote the supremum of all $\lambda\geq 0$ such that for some constant $K_0=K_0(\lambda, L,\cM)$ it holds that \begin{equation} \label{eqn 8.17.10} \vert v(t,x)\vert\le K_0 \left(\frac{\vert x\vert}{R}\right)^{\lambda}\sup_{Q^{\mathcal{D}}_{\frac{3R}{4}}(t_0,0)}\ \vert v\vert, \quad \forall \;(t,x)\in Q^{\mathcal{D}}_{R/2}(t_0,0) \end{equation}
for any $R>0$, any $t_0$, and any deterministic function $v=v(t,x)$ belonging to $\mathcal{V}_{loc}(Q^{\mathcal{D}}_R(t_0,0))$ and satisfying \begin{equation} \label{eqn 8.17.14} v_t=L v \quad \text{in}\; Q^{\mathcal{D}}_R(t_0,0)\quad ; \;\quad v(t,x)=0\quad\text{for}\;\; x\in\partial\mathcal{D}. \end{equation}
(ii) By $\lambda^-_{c,L}$ we denote the supremum of all $\lambda \geq 0$ with the above property for the operator \begin{equation*} \hat{L}:=\sum_{i,j}\alpha^{ij}(-t)D_{ij}. \end{equation*}
\end{defn}
Note that $K_0$ in \eqref{eqn 8.17.10} may depend on the operator $L$. Such dependence on $L$ is one of the major obstacles in handling SPDEs with random coefficients, since they naturally involve infinitely many operators at the same time. To treat such a case, which is in fact our case in this article, we introduce the following definition.
\begin{defn}
\label{lambda2} (i) By $\cT_{\nu_1,\nu_2}$ we denote the collection of all ``deterministic'' operators of the form $L=\sum_{i,j=1}^d \alpha^{ij}(t)D_{ij}$, where $\alpha^{ij}(t)$ are measurable in $t$ and satisfy Assumption \ref{ass coeff} with the fixed constants $\nu_1,\nu_2$ in the uniform parabolicity condition \eqref{uniform parabolicity}.
(ii) For a fixed $\cD=\cD(\cM)$, by $\lambda_c(\nu_1,\nu_2)=\lambda_c(\nu_1,\nu_2,\cD)$ we denote the supremum of all $\lambda\geq 0$ such that for some constant $K_0=K_0(\lambda, \nu_1,\nu_2,\cM)$ it holds that for any operator $L \in \cT_{\nu_1,\nu_2}$, $R>0$ and $t_0$, \begin{equation}
\label{eqn 8.17.11} \vert v(t,x)\vert\le K_0 \left(\frac{\vert x\vert}{R}\right)^{\lambda}\sup_{Q^{\mathcal{D}}_{\frac{3R}{4}}(t_0,0)}\ \vert v\vert , \quad \forall \;(t,x)\in Q^{\mathcal{D}}_{R/2}(t_0,0), \end{equation} provided that $v$ is a deterministic function in $\mathcal{V}_{loc}(Q^{\mathcal{D}}_R(t_0,0))$ satisfying \begin{equation*} v_t=L v \quad \text{in}\; Q^{\mathcal{D}}_R(t_0,0)\quad ; \;\quad v(t,x)=0\quad\text{for}\;\; x\in\partial\mathcal{D}. \end{equation*}
\end{defn}
\begin{remark} (i) Note that the dependency of $K_0$ in Definition \ref{lambda2} is more explicit compared to that of Definition \ref{lambda}. By definitions, if $L$ is an operator in $\cT_{\nu_1,\nu_2}$, then $$\lambda^{\pm}_{c,L}\geq \lambda_c(\nu_1,\nu_2). $$
(ii) The values of $\lambda^{\pm}_{c,L}$ and $\lambda_c(\nu_1,\nu_2)$ do not change if one replaces $\frac{3}{4}$ in \eqref{eqn 8.17.10} and \eqref{eqn 8.17.11} by any number in $(1/2,1)$ (see \cite[Lemma 2.2]{Kozlov Nazarov 2014}). Following the proof of \cite[Lemma 2.2]{Kozlov Nazarov 2014}, one can also show that for any constant $\beta>0$ $$
\lambda^{\pm}_{c,\beta L}= \lambda^{\pm}_{c,L}, \qquad \lambda_c(\beta \nu_1,\beta \nu_2)= \lambda_c(\nu_1,\nu_2).
$$
\end{remark}
Below we collect some sharp estimates for $\lambda^{\pm}_{c,L}$ and $\lambda_c(\nu_1,\nu_2)$. See \cite{Kozlov Nazarov 2014} for more information.
\begin{prop}\label{critical exponents} (i) If $L=\Delta_x$, then \begin{equation*} \lambda^{\pm}_{c,L}=-\frac{d-2}{2}+\sqrt{\Lambda+\frac{(d-2)^2}{4}} \, >0, \end{equation*} where $\Lambda=\Lambda_{\cD}$ is the first eigenvalue of the Laplace-Beltrami operator with the Dirichlet condition on $\mathcal{M}$. In particular, if $d=2$ and $\cD=\cD^{(\kappa)}$ (see \eqref{wedge in 2d}), then \begin{equation*}
\lambda^{\pm}_{c,L}=\frac{\pi}{\kappa}. \end{equation*}
(ii) Let $0<\nu_1\leq \nu_2<\infty$. Then we have $\lambda_{c}(\nu_1,\nu_2)>0$ and \begin{equation}\label{CUB3} \lambda_{c}(\nu_1,\nu_2) \geq -\frac{d}{2}+\sqrt{\frac{\nu_1}{\nu_2}}\sqrt{\Lambda+\frac{(d-2)^2}{4}}. \end{equation} \end{prop}
\begin{proof} (i) follows from \cite[Theorem 2.4.3]{Kozlov Nazarov 2014}. (ii) follows from the proofs of \cite[Theorem 2.4.1, Theorem 2.4.7]{Kozlov Nazarov 2014}, which consider only the case $\nu_2=1/{\nu_1}$. Inspecting those proofs, one can easily check that
$$ \lambda^{\pm}_{c,L}\geq -\frac{d}{2}+\sqrt{\frac{\nu_1}{\nu_2}}\sqrt{\Lambda+\frac{(d-2)^2}{4}}\quad\text{and}\quad\lambda^{\pm}_{c,L}>c>0\quad\text{if}\quad L\in \cT_{\nu_1,\nu_2}, $$ where the constant $c$ is the H\"older exponent of solutions to equation \eqref{eqn 8.17.14}, and it can be chosen so that it depends only on $\nu_1,\nu_2$ and $\cM$. Moreover, for $\lambda>0$ satisfying $$ \lambda<c\vee \Big(-\frac{d}{2}+\sqrt{\frac{\nu_1}{\nu_2}}\sqrt{\Lambda+\frac{(d-2)^2}{4}}\Big) $$ the constant $K_0$ in \eqref{eqn 8.17.11} can be chosen so that it depends only on $\nu_1,\nu_2$ and $\cM$. This proves \eqref{CUB3}.
\end{proof}
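To illustrate the lower bound \eqref{CUB3}, consider $d=2$ and the wedge $\cD=\cD^{(\kappa)}$, for which $\Lambda_{\cD}=\pi^2/\kappa^2$. Then \eqref{CUB3} reads \begin{equation*} \lambda_c(\nu_1,\nu_2)\geq -1+\sqrt{\frac{\nu_1}{\nu_2}}\,\frac{\pi}{\kappa}, \end{equation*} and the right-hand side is positive if and only if $\sqrt{\nu_1/\nu_2}>\kappa/\pi$; for instance, if $\kappa=\pi/2$ this requires $\nu_1/\nu_2>1/4$. For smaller ratios $\nu_1/\nu_2$ the right-hand side is nonpositive, while Proposition \ref{critical exponents} (ii) still guarantees $\lambda_c(\nu_1,\nu_2)>0$; that is, \eqref{CUB3} is in general not sharp.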
\begin{example}[$d=2$]\label{example proposition}
For $\kappa\in (0,2\pi)$ and $\alpha\in [0,2\pi)$, we consider $$ \cD=\mathcal{D}_{\kappa,\alpha}:=\left\{x=(r\cos\theta,\ r\sin\theta)\in\bR^2 \,\vert\, r\in(0,\ \infty),\ -\frac{\kappa}{2}+\alpha<\theta<\frac{\kappa}{2}+\alpha\right\} $$ and the constant operator $$ L=aD_{x_1x_1}+b(D_{x_1x_2}+D_{x_2x_1})+cD_{x_2x_2}, $$ where $a,b,c$ are constants such that $a+c>0$ and $ac-b^2>0$. Then, by \cite[Proposition 4.1]{Green}, we have \begin{align*} \lambda^{\pm}_{c,L}=\lambda^{\pm}_{c,L,\cD_{\kappa,\alpha}}=\frac{\,\pi\,}{\widetilde{\kappa}}, \end{align*} where $$ \widetilde{\kappa}=\pi-\arctan\Big(\,\frac{\bar{c}\,\cot(\kappa/2)+\bar{b}}{\sqrt{\det(A)}}\,\Big)-\arctan\Big(\,\frac{\bar{c}\,\cot(\kappa/2)-\bar{b}}{\sqrt{\det(A)}}\,\Big) $$ with constants $\bar{a}, \bar{b}, \bar{c}$ from the relation $$ \begin{pmatrix} \bar{a} & \bar{b}\\ \bar{b}& \bar{c} \end{pmatrix} = \begin{pmatrix} \cos \alpha & \sin \alpha\\ -\sin \alpha & \cos \alpha \end{pmatrix} \begin{pmatrix} a & b\\ b& c \end{pmatrix} \begin{pmatrix} \cos \alpha & - \sin \alpha\\ \sin \alpha & \cos \alpha \end{pmatrix}. $$ In particular, we have $\tilde{\kappa}=\pi$ if $\kappa=\pi$.
Now let $\kappa\neq \pi$ and $\alpha=0$ in the definition of $\cD$, and let $b=0$ in $L$. In this case we can take $\nu_1=a\wedge c$ and $\nu_2=a\vee c$ in \eqref{uniform parabolicity}. We note that $\widetilde{\kappa}$ is determined by the simple relation \begin{equation*} \tan\Big(\frac{\widetilde{\kappa}}{\,2\,}\Big)=\sqrt{\frac{a}{c}}\tan\Big(\frac{\kappa}{\,2\,}\Big). \end{equation*}
\end{example}
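As a concrete numerical instance of Example \ref{example proposition}, take $\kappa=\pi/2$, $\alpha=0$, and $L=4D_{x_1x_1}+D_{x_2x_2}$, i.e. $a=4$, $b=0$, $c=1$. Then \begin{equation*} \tan\Big(\frac{\widetilde{\kappa}}{\,2\,}\Big)=\sqrt{\frac{a}{c}}\tan\Big(\frac{\kappa}{\,2\,}\Big)=2\tan\frac{\pi}{4}=2, \end{equation*} so $\widetilde{\kappa}=2\arctan 2\approx 2.214$ and \begin{equation*} \lambda^{\pm}_{c,L}=\frac{\pi}{\widetilde{\kappa}}=\frac{\pi}{2\arctan 2}\approx 1.419. \end{equation*} For the isotropic operator with $a=c$ one has $\widetilde{\kappa}=\kappa$ and $\lambda^{\pm}_{c,L}=\pi/\kappa=2$; thus the anisotropy ratio $a/c=4$ lowers the critical exponent from $2$ to approximately $1.419$.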
We are now ready to state our Sobolev regularity results on conic domains. We formulate them as two theorems, handling non-random and random coefficients separately. Their proofs are given in Section \ref{sec:main proofs}. Note that the admissible range of $\theta$ for non-random coefficients is wider than that for random coefficients.
\begin{thm}[SPDE on conic domains with non-random coefficients] \label{main result} Let $\cL=\sum_{ij}a^{ij}(t)D_{ij}$ be non-random, $p\in[2,\infty)$, and $\gamma \geq -1$. Also assume that Assumptions \ref{ass M} and \ref{ass coeff} hold, and $\theta, \Theta\in\bR$ satisfy \begin{equation}
\label{theta11} p(1-\lambda^+_{c,\cL})<\theta<p(d-1+\lambda^-_{c,\cL}), \qquad d-1<\Theta<d-1+p. \end{equation} Then for any $f^0\in\bK^{\gamma \vee 0}_{p,\theta+p,\Theta+p}(\cD,T)$, $\tbf=(f^1,\cdots,f^d)\in \bK^{\gamma+1}_{p,\theta,\Theta}(\cD,T,d)$, $g\in\bK^{\gamma+1}_{p,\theta,\Theta}(\cD,T,l_2)$, and $u_0\in\bU^{\gamma+2}_{p,\theta,\Theta}(\cD)$, equation \eqref{stochastic parabolic equation} has a unique solution $u$ in the class $\cK^{\gamma+2}_{p,\theta,\Theta}(\cD,T)$ and moreover we have \begin{eqnarray}\label{main estimate}
\|u\|_{\cK^{\gamma+2}_{p,\theta,\Theta}(\cD,T)}
&\leq& C\big(\|f^0\|_{\bK^{\gamma \vee 0}_{p,\theta+p,\Theta+p}(\cD,T)}
+ \|\tbf\|_{\bK^{\gamma+1}_{p,\theta,\Theta}(\cD,T,d)}+\|g\|_{\bK^{\gamma+1}_{p,\theta,\Theta}(\cD,T,l_2)}\nonumber\\
&&\;\;\quad+\|u_0\|_{\bU^{\gamma+2}_{p,\theta,\Theta}(\cD)}\big), \end{eqnarray} where the constant $C$ depends only on $\cM,d,p,\theta,\Theta,\cL, \gamma$. In particular, it is independent of $T$. \end{thm}
\begin{remark} (i) A particular case of the above theorem is given in \cite{CKL 2019+} (cf. \cite{CKLL 2018}). More precisely, the combination of Theorem 2.8 and Corollary 2.11 in \cite{CKL 2019+} covers the case $$
\cL=\Delta, \quad \Theta=d=2, \quad \cD=\cD^{(\kappa)}\text{ of}\;\; \eqref{wedge in 2d}. $$
(ii) If $\gamma \geq 0$, the separation of the two terms $f^0$ and $\tbf=(f^1,\cdots,f^d)$ in our equation is redundant, and we may simply take $f\in\bK^{\gamma}_{p,\theta+p,\Theta+p}(\cD,T)$ instead. This is because, by \eqref{eqn 4.16.1}, we have $ h^0+\sum_{i=1}^d h^i_{x^i}\in K^{\gamma}_{p,\theta+p,\Theta+p}(\cD) $ for $ h^0\in K^{\gamma}_{p,\theta+p,\Theta+p}(\cD),\;\; h^i \in K^{\gamma+1}_{p,\theta,\Theta}(\cD),\;i=1,\ldots,d. $ The corresponding change in the estimate \eqref{main estimate} is clear. \end{remark}
\begin{thm}[SPDE on conic domains with random coefficients]
\label{main result-random} Let $\cL=\sum_{ij}a^{ij}(\omega,t)D_{ij}$ be random, $p\in[2,\infty)$, and $\gamma \geq -1$. Also assume that Assumptions \ref{ass M} and \ref{ass coeff} hold, $d-1<\Theta<d-1+p$, and \begin{equation}
\label{theta} p\big(1-\lambda_{c}(\nu_1,\nu_2)\big)<\theta<p\big(d-1+\lambda_{c}(\nu_1,\nu_2)\big). \end{equation} Then all the claims of Theorem \ref{main result} hold with a constant $N=N(\cM,d,p,\gamma,\theta,\Theta,\nu_1,\nu_2)$. \end{thm}
\begin{remark} By Proposition \ref{critical exponents}, \eqref{theta} is fulfilled if \begin{equation} \label{eqn 8.28.10} p\left(\frac{d+2}{2}-\sqrt{\frac{\nu_1}{\nu_2}}\sqrt{\Lambda_{\cD}+\frac{(d-2)^2}{4}}\right)<\theta< p\left(\frac{d-2}{2}+\sqrt{\frac{\nu_1}{\nu_2}}\sqrt{\Lambda_{\cD}+\frac{(d-2)^2}{4}}\right). \end{equation} In the case of $L=\Delta$, by Proposition \ref{critical exponents}, \eqref{theta11} is fulfilled if $$ p\left(\frac{d}{2}-\sqrt{\Lambda_{\cD}+\frac{(d-2)^2}{4}}\right)<\theta< p\left(\frac{d}{2}+\sqrt{\Lambda_{\cD}+\frac{(d-2)^2}{4}}\right). $$ \end{remark}
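\begin{remark} As an illustration, let $L=\Delta$, $d=2$, and $\cD=\cD^{(\kappa)}$ of \eqref{wedge in 2d}, so that $\Lambda_{\cD}=\pi^2/\kappa^2$ and, by Proposition \ref{critical exponents} (i), $\lambda^{\pm}_{c,L}=\pi/\kappa$. In this case \eqref{theta11} reads \begin{equation*} p\Big(1-\frac{\pi}{\kappa}\Big)<\theta<p\Big(1+\frac{\pi}{\kappa}\Big). \end{equation*} In particular, the choice $\theta=\Theta=d=2$ is admissible if and only if $p(1-\pi/\kappa)<2<p(1+\pi/\kappa)$; for instance, for the non-convex cone with $\kappa=3\pi/2$ this holds exactly when $6/5<p<6$. \end{remark}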
\begin{remark}
By \eqref{space K norm}, inequality \eqref{main estimate} yields \eqref{main estimate intro}. In particular, if $\gamma=-1$ and $u(0,\cdot)\equiv 0$, then we have \begin{eqnarray*} &&\bE \int^T_0 \int_{\cD} \left(\vert\rho^{-1}u\vert ^p+\vert u_x\vert ^p\right) \rho_{\circ}^{\theta-\Theta}\rho^{\Theta-d}\, dx\,dt \nonumber
\\ &\leq& C\, \bE
\int^T_0 \int_{\cD} \Big( \vert \rho f^0\vert ^p+\sum_{i=1}^d \vert f^i\vert ^p+\vert g\vert ^p_{\ell_2}\Big)
\rho_{\circ}^{\theta-\Theta}\rho^{\Theta-d}\, dx\,dt. \end{eqnarray*} \end{remark}
\begin{remark} The solutions $u$ in Theorems \ref{main result} and \ref{main result-random} satisfy zero Dirichlet boundary condition. Indeed, under the assumption $d-1<\Theta<d-1+p$, \cite[Theorem 2.8]{doyoon} implies that the trace operator is well defined for functions in $\bK^1_{p,\theta-p, \Theta-p}(\cD,T)$, and hence by Lemma \ref{property1} (iv) we have $u\vert _{\partial \cD}=0$. \end{remark}
We now present the H\"older regularity properties of solutions on conic domains.
\begin{thm}[H\"older estimates on conic domains] \label{cor 8.10} Let $p\in [2,\infty)$, $\theta, \Theta\in \bR$, and $u\in \cK^{\gamma+2}_{p,\theta,\Theta}(\cD,T)$
be the solution taken from Theorem \ref{main result} (or from Theorem \ref{main result-random}).
(i) If $\gamma+2-\frac{d}{p}\geq n+\delta$, where $n\in \bN_0$ and $\delta\in (0,1]$, then for any $0\le k\leq n$, $$ \vert \rho^{k-1+\frac{\Theta}{p}} \rho^{(\theta-\Theta)/p}_{\circ} D^{k}u(\omega,t,\cdot)\vert _{\cC(\cD)}+
[\rho^{n-1+\delta+\frac{\Theta}{p}} \rho^{(\theta-\Theta)/p}_{\circ} D^{n}u(\omega,t,\cdot)]_{\cC^{\delta}(\cD)}<\infty $$ holds for a.e. $(\omega,t)$, in particular, \begin{equation} \label{eqn 8.10.21} \vert u(\omega,t,x)\vert \leq C(\omega,t) \rho^{1-\frac{\Theta}{p}}(x) \rho^{(-\theta+\Theta)/p}_{\circ}(x)\quad \text{for all }x\in\cD. \end{equation}
(ii) Let $$ 2/p<\alpha<\beta\leq 1, \quad \gamma+2-\beta-d/p \geq m+\varepsilon, $$ where $m\in \bN_0$ and $\varepsilon\in (0,1]$. Put $\eta=\beta-1+\Theta/p$. Then for any $0\le k\leq m$, \begin{align} \label{eqn 8.11.11} &\bE \sup_{t\ne s\leq T} \frac {\big\vert \rho^{\eta+k} \rho^{(\theta-\Theta)/p}_{\circ} \big(D^ku(t,\cdot)-D^ku(s,\cdot)\big)\big\vert^p_{\cC(\cD)}} {\vert t-s\vert ^{p\alpha/2-1}}<\infty, \\ & \bE \sup_{t\ne s\leq T} \frac {\left[\rho^{\eta+m+\varepsilon} \rho^{(\theta-\Theta)/p}_{\circ} \left(D^mu(t,\cdot)-D^mu(s,\cdot)\right)\right]^p_{\cC^{\varepsilon}(\cD)}} {\vert t-s\vert ^{p\alpha/2-1}} <\infty. \label{eqn 8.11.21} \end{align}
\end{thm}
\begin{proof} (i) By definition, for almost all $(\omega, t)$, we have $u(\omega,t,\cdot)\in K^{\gamma+2}_{p,\theta-p,\Theta-p}(\cD)$. Thus (i) is a consequence of \eqref{eqn 8.21.1}. Similarly, the claims of (ii) follow from \eqref{eqn 8.21.1}, \eqref{eqn 8.10.10}, and the observation \begin{eqnarray*}
&&\bE \sup_{t\ne s\le T} \frac{\|\psi^{\beta-1}(u(t)-u(s))\|^p_{K^{\gamma+2-\beta}_{p,\theta,\Theta}(\cD)}}{\vert t-s\vert ^{(\alpha/2-1/p)p}} \\ &\sim&
\bE \sup_{t\ne s\le T} \frac{\|u(t)-u(s)\|^p_{K^{\gamma+2-\beta}_{p,\theta+\beta p-p,\Theta+\beta p-p}(\cD)}}{\vert t-s\vert ^{(\alpha/2-1/p)p}}. \end{eqnarray*} \end{proof}
\begin{remark} (i) \eqref{eqn 8.10.21} tells how fast the solution from Theorem \ref{main result} (or Theorem \ref{main result-random}) vanishes near the boundary. Near boundary points away from the vertex, $u$ is controlled by $\rho^{1-\Theta/p}$ and, if $p>\Theta$, the decay near the vertex is not slower than $\rho^{1-\theta/p}_{\circ}$.
(ii) In \eqref{eqn 8.11.11} and \eqref{eqn 8.11.21}, $\alpha/2-1/p$ is the H\"older exponent in time and $\eta$ is related to the decay rate near the boundary. As $\alpha/2-1/p\to 1/2-1/p$, $\eta$ must increase accordingly.
(iii) Suppose $\theta=d$ satisfies \eqref{eqn 8.28.10}, and let $u\in \cK^{1}_{p,d,d}(\cD,T)$ be the solution from Theorem \ref{main result-random}. Assume $$ \kappa_0:=1-\frac{(d+2)}{p}>0. $$ Then for any $\kappa\in (0,\kappa_0)$, we have \begin{equation}
\label{eqn 8.11.12} \bE \sup_{t\leq T} \sup_{x,y\in\cD} \Big\vert \frac{\vert u(t,x)-u(t,y)\vert }{\vert x-y\vert ^{\kappa}}\Big\vert^p + \bE \sup_{t\ne s\leq T}\sup_{x\in \cD}\Big\vert \frac{\vert u(t,x)-u(s,x)\vert }{\vert t-s\vert ^{\kappa/2}}\Big\vert^p <\infty. \end{equation} Indeed, \eqref{eqn 8.11.12} can be obtained from \eqref{eqn 8.11.11} and \eqref{eqn 8.11.21} with appropriate choices of $\alpha,\beta$. For the first part, to apply \eqref{eqn 8.11.21} we take $\beta=\kappa_0-\kappa+2/p$ such that $2/p<\beta<1$, and take $\varepsilon=1-\beta-d/p=\kappa=-\eta$. For the second part, we use \eqref{eqn 8.11.11} with $\alpha=\kappa+2/p, \beta=1-d/p$ so that $1-\alpha p/2=-p\kappa/2$. \end{remark}
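Let us record the exponent arithmetic behind (iii). With $\theta=\Theta=d$ and $\gamma=-1$, we have $\eta=\beta-1+d/p$. For the first part, $\beta=\kappa_0-\kappa+2/p$ gives, since $\kappa_0=1-(d+2)/p$, \begin{equation*} \varepsilon=1-\beta-\frac{d}{p}=\kappa \quad\text{and}\quad \eta=\beta-1+\frac{d}{p}=-\kappa, \end{equation*} so the weight $\rho^{\eta+m+\varepsilon}\rho^{(\theta-\Theta)/p}_{\circ}=\rho^{-\kappa+\kappa}=1$ in \eqref{eqn 8.11.21} disappears (here $m=0$). For the second part, $\alpha=\kappa+2/p$ and $\beta=1-d/p$ give $\eta=0$ and $p\alpha/2-1=p\kappa/2$; moreover $2/p<\alpha<\beta$ since $\kappa<\kappa_0$. Hence \eqref{eqn 8.11.11} controls the unweighted H\"older quotient in time with exponent $\kappa/2$.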
\mysection{Key estimates on conic domains}
In this section we consider the solutions to SPDEs having a non-random operator. We fix a deterministic operator \begin{equation} \label{8.29.1} L_0:=\sum_{i,j}\alpha^{ij}(t)D_{ij}\,\, \in \, \cT_{\nu_1,\nu_2}. \end{equation} See Definition \ref{lambda2}. We will estimate the zeroth order derivative of the solution of the equation \begin{equation}\label{one event equation} d u =\left( L_0 u+f^0+\sum_{i=1}^d f^i_{x^i}\right)dt +\sum^{\infty}_{k=1} g^kdw_t^k,\quad t>0, \;x\in \cD(\cM). \end{equation}
Let $G(t,s,x,y)$ denote the Green's function for the operator $\partial_t-L_0$ on $\cD=\cD(\cM)$. By definition (cf. \cite[Lemma 3.7]{Kozlov Nazarov 2014}), $G$ is a nonnegative function such that for any fixed $s\in \bR$ and $y\in\mathcal{D}$, the function $v(t,x)=G(t,s,x,y)$ satisfies \begin{align*} &\big(\partial_t -L_0\big)v(t,x)=\delta(x-y)\delta(t-s) \quad \text{in}\quad \bR \times \cD, \nonumber\\ & v(t,x)=0\quad \textrm{on}\quad \bR \times\partial\mathcal{D} \; ; \quad v(t,x)=0\quad \textrm{for} \quad t<s. \end{align*}
Now, for any given $$
f^0\in \bL_{p,\theta+p,\Theta+p}(\cD,T), \quad \textbf{f}=(f^1,\cdots,f^d)\in \bL_{p,\theta,\Theta}(\cD,T,d), \quad
$$
$$
g\in\bL_{p,\theta,\Theta}(\cD,T, \ell_2), \quad u_0 \in L_p(\Omega; K^0_{\theta+2-p,\Theta+2-p}(\cD)),
$$ we define the function $\cR(u_0,f^0,\textbf{f},g)$ by \begin{eqnarray} &&\cR(u_0,f^0,\textbf{f},g)(t,x)\nonumber\\ &:=&\int_{\cD} G(t,0,x,y)u_0(y)dy\nonumber\\ &&+ \int^t_0\int_{\cD}G(t,s,x,y)f^0(s,y)dyds-\sum_{i=1}^d \int^t_0\int_{\cD}G_{y^i}(t,s,x,y)f^i(s,y)dyds\nonumber\\ &&+\sum_{k=1}^{\infty}\int^t_0\int_{\cD}G(t,s,x,y)g^k(s,y)dy\,dw^k_s. \label{eqn 8.21.11} \end{eqnarray} One immediately notices that the function $\cR(u_0,f^0,\textbf{f},g)$ is a representation of a solution of \eqref{one event equation} with zero boundary condition and initial condition $u(0,\cdot)=u_0(\cdot)$; see Lemma \ref{lemma rep} in the next section. The main result of this section concerns this representation and is given in the following lemma. \begin{lemma}\label{main est} Let $T<\infty$, $p\in [2,\infty)$ and let $\theta\in\bR$, $\Theta\in\bR$ satisfy $$ p(1-\lambda^+_{c,L_0})<\theta<p(d-1+\lambda^-_{c,L_0})\quad\text{and}\quad d-1<\Theta<d-1+p. $$ If $f^0\in \bL_{p,\theta+p,\Theta+p}(\cD,T)$, $\tbf\in \bL_{p,\theta,\Theta}(\cD,T,d)$, $g\in\bL_{p,\theta,\Theta}(\cD,T, \ell_2)$, and $u_0\in L_p(\Omega; K^0_{\theta+2-p,\Theta+2-p}(\cD)):=L_p(\Omega,\rF_0; K^0_{\theta+2-p,\Theta+2-p}(\cD))$, then $u:=\cR(u_0,f^0,\tbf,g)$ belongs to $\bL_{p,\theta-p,\Theta-p}(\cD,T)$ and the estimate
\begin{eqnarray*}
\|u\|_{\bL_{p,\theta-p,\Theta-p}(\cD,T)}&\leq& C \Big(\|f^0\|_{\bL_{p,\theta+p,\Theta+p}(\cD,T)} + \|\tbf\|_{\bL_{p,\theta,\Theta}(\cD,T,d)}\nonumber \\ &&\quad\quad+
\|g\|_{\bL_{p,\theta,\Theta}(\cD,T,\ell_2)} +\|u_0\|_{ L_p(\Omega; K^0_{\theta+2-p,\Theta+2-p}(\cD))}\Big)
\end{eqnarray*} holds, where $C=C(\cM,d,p,\theta,\Theta,L_0)$. Moreover, if \begin{equation*}
p\left(1-\lambda_c(\nu_1,\nu_2)\right)<\theta< p\left(d-1+ \lambda_c(\nu_1,\nu_2)\right), \end{equation*} then the constant $C$ depends only on $\cM,d,p,\theta,\Theta, \nu_1$ and $\nu_2$. \end{lemma}
To prove Lemma~\ref{main est}, we use the following two results. Lemma \ref{lemma3.1} gathers rather technical but important inequalities that we use repeatedly in this section.
\begin{lemma}\label{lemma3.1}
(i) Let $\alpha+\beta>0,\ \beta > 0$, and $\gamma>0$. Then for any $a\geq b>0$
\begin{align*} \int^{\infty}_0 \frac{1}{\left(a+\sqrt{t}\right)^{\alpha}\left(b+\sqrt{t}\right)^{\beta+\gamma}t^{1-\frac{\gamma}{2}}}dt \leq \frac{C}{a^\alpha b^\beta}, \end{align*} where $C= C(\alpha,\beta,\gamma)$.
(ii) Let $\sigma>0,\ \alpha+\gamma>-d$, $\gamma>-1$ and $\beta,\ \nu \in \bR$. Then for any $x\in\cD$, \begin{align*} \int_{\mathcal{D}} \frac{\vert y\vert ^{\alpha}}{\left(\vert y\vert +1\right)^{\beta}}\frac{\rho(y)^{\gamma}}{\left(\rho(y)+1\right)^{\nu}}\ e^{-\sigma \vert x-y\vert ^2} dy \leq C \left(\vert x\vert +1\right)^{\alpha-\beta}\left(\rho(x)+1\right)^{\gamma-\nu}, \end{align*} where $C=C(\cM,d, \alpha, \beta, \gamma,\nu,\sigma)$. \end{lemma} \begin{proof} See Lemma 3.2 and Lemma 3.7 in \cite{ConicPDE}. \end{proof} For the operator $L_0$, we take the constants $K_0, \lambda^+_{c,L_0}, \lambda^-_{c,L_0}$ and the operator $\hat{L}_0$ from Definition \ref{lambda}. \begin{lemma}\label{Green estimate}
Let $\lambda^+\in \big(0,\lambda^+_{c,L_0}\big)$ and $\lambda^-\in\big(0,\lambda^-_{c,L_0}\big)$. Denote
$$
K_0^+=K_0(L_0,\cM,\lambda^+), \quad K_0^-=K_0(\hat{L}_0,\cM,\lambda^-).$$
Then, there exist positive constants
$C=C(\mathcal{M},\nu_1,\nu_2,\,\lambda^{\pm}, K_0^{\pm})$ and $\sigma=\sigma(\nu_1,\nu_2)$ such that for any $t>s$ and $x,y\in \mathcal{D}(\cM)$, the estimates \begin{align*} &(i)\quad G(t,s,x,y)\leq \frac{C}{(t-s)^{d/2}}J_{t-s,x}\,J_{t-s,y}\,R^{\lambda^+-1}_{t-s,x}\,R^{\lambda^--1}_{t-s,y}\,e^{-\sigma\frac{\vert x-y\vert ^2}{t-s}}\\ &(ii)\quad \big\vert \nabla_y G(t,s,x,y)\big\vert \leq \frac{C}{(t-s)^{(d+1)/2}}J_{t-s,x}\,R^{\lambda^+-1}_{t-s,x}\,R^{\lambda^--1}_{t-s,y}\,e^{-\sigma\frac{\vert x-y\vert ^2}{t-s}} \end{align*} hold, where
$$
R_{t,x}:=\frac{\rho_{\circ}(x)}{\rho_{\circ}(x)+\sqrt{t}},\quad J_{t,x}:=\frac{\rho(x)}{\rho(x)+\sqrt{t}}.
$$
In particular, if $\lambda^{\pm}\in (0,\lambda_c(\nu_1,\nu_2))$, then $C$ depends only on $\cM, \nu_1,\nu_2, \lambda^{\pm}$. \end{lemma} \begin{proof} (i) See inequality (2.8) in \cite{Green}.
(ii) Denote $\hat{G}(t,s,x,y)=G(-s,-t,y,x)$. Then $\hat{G}$ is the Green's function of the operator $\partial_t-\hat{L}_0$, where $\hat{L}_0:=\sum_{i,j}\alpha^{ij}(-t)D_{ij}$. Hence, by inequality (2.14) of \cite{Green} applied to $\hat{G}$, for any $\lambda^+\in(0,\lambda_c^+)$ and $\lambda^-\in(0,\lambda_c^-)$, there exist constants $C, \sigma>0$, with the dependencies prescribed in the lemma, such that \begin{align*} \vert \nabla_x \hat{G}(t,s,x,y)\vert \leq \frac{C}{(t-s)^{(d+1)/2}}J_{t-s,y}R^{\lambda^--1}_{t-s,x}R^{\lambda^+-1}_{t-s,y}e^{-\sigma\frac{\vert x-y\vert ^2}{t-s}} \end{align*} for any $t>s$ and $x,\,y\in\cD$. This and the fact $\nabla_y G(t,s,x,y)=\nabla_x \hat{G}(-s,-t,y,x)$ prove (ii). \end{proof}
Since $\cR(u_0,f^0,\tbf,g)=\cR(u_0,0,0,0)+\cR(0,f^0,\tbf,0)+\cR(0,0,0,g)$, where each $0$ denotes the zero function in the corresponding function space, we treat these three parts separately in the following three lemmas and then combine them to obtain the claim of Lemma \ref{main est}. In particular, the stochastic part $\cR(0,0,0,g)$ is central to this article and is treated thoroughly in Lemma \ref{main est2}.
\begin{lemma}\label{main est3} Let $p\in(1,\infty)$, and let $\theta\in\bR$, $\Theta\in\bR$ satisfy $$ p(1-\lambda^+_{c,L_0})<\theta<p(d-1+\lambda^-_{c,L_0})\quad\text{and}\quad d-1<\Theta<d-1+p. $$ If $u_0\in L_p(\Omega; K^0_{\theta+2-p,\Theta+2-p}(\cD))$, then $u=\cR(u_0,0,0,0)$ belongs to $\bL_{p,\theta-p,\Theta-p}(\cD,T)$ and \begin{align*}
\|u\|_{\bL_{p,\theta-p,\Theta-p}(\cD,T)}\leq C \|u_0\|_{L_p(\Omega; K^0_{\theta+2-p,\Theta+2-p}(\cD))}
\end{align*} holds, where $C=C(\cM,d,p,\theta,\Theta,L_0)$. Moreover, if \begin{equation}\label{theta range restricted} p\big(1-\lambda_c(\nu_1,\nu_2)\big)<\theta<p\big(d-1+\lambda_{c}(\nu_1,\nu_2)\big), \end{equation} then the constant $C$ depends only on $\cM,d,p,\theta,\Theta,\nu_1$ and $\nu_2$. \end{lemma}
\begin{proof} Green's function itself is not random. Hence, recalling the definitions of $\cR(u_0,0,0,0)$ and $\bL=\bK^0$, for simplicity we may assume that $u_0$, and hence $u$, are non-random, and we just prove \begin{align} \int^T_0\int_{\cD}\vert \rho^{-1}u\vert ^p\rho_{\circ}^{\theta-\Theta}\rho^{\Theta-d}dxdt\leq C\int_{\cD}\vert \rho^{-1+\frac{2}{p}}\,u_0\vert ^p\rho_{\circ}^{\theta-\Theta}\rho^{\Theta-d}dx. \label{main inequality3} \end{align}
{\bf 1}. Let us denote $\mu:=-1+(\theta-d+2)/p$, $\alpha:=-1+(\Theta-d+2)/p$, and $$ h(x):=\rho_{\circ}(x)^{\mu-\alpha}\rho(x)^{\alpha}u_0(x). $$ Then the claimed estimate \eqref{main inequality3} turns into a simpler form of \begin{equation}\label{main inequality 3}
\Big\|\rho_{\circ}^{\mu-\alpha}\rho^{\alpha-\frac{2}{p}}u\Big\|_{L_p\left([0,T]\times \mathcal{D}\right)}\le C \|h\|_{L_p\left(\mathcal{D}\right)}. \end{equation}
On the other hand, by the range of $\theta$ given in the condition, we can always find $\lambda^+\in\big(0,\lambda^+_{c,L_0}\big)$ and $\lambda^-\in\big(0,\lambda^-_{c,L_0}\big)$ satisfying \begin{align} -\frac{d-2}{p}-\lambda^+<\mu<\frac{d-2}{p}+\lambda^-.\label{inequality mu1} \end{align} Also, by the given range of $\Theta$ we have \begin{align} -1+\frac{1}{p}<\alpha<\frac{1}{p}.\label{inequality alpha1} \end{align} Hence, we can choose and fix the constants $\gamma$, $\beta$ satisfying \begin{align*} 0<\gamma<\lambda^-+\frac{d-2}{p}-\mu\,,\,\quad 0<\beta<\frac{1}{p}-\alpha. \end{align*} Noting $\frac{d-2}{p}<d-\frac{d}{p}$, $\frac1p<2-\frac1p$ which is due to condition $p\in(1,\infty)$, we then have \begin{align} 0<\gamma<\lambda^-+d-\frac{d}{p}-\mu\,,\,\quad 0<\beta<2-\frac{1}{p}-\alpha.\label{gamma beta chosen 1} \end{align}
Moreover, as $\lambda^+\in (0,\lambda^+_{c,L_0})$ and $\lambda^-\in (0,\lambda^-_{c,L_0})$, by Lemma~\ref{Green estimate} there exist constants $C=C(\cM,L_0,\nu_1,\nu_2, \lambda^{\pm}),\,\sigma=\sigma(\nu_1,\nu_2)>0$ such that \begin{align}\label{20.09.15} \nonumber G(t,0,x,y)&\leq C\,t^{-\frac{d}{2}}\,R^{\lambda^+-1}_{t,x}R^{\lambda^- -1}_{t,y}\,J_{t,x} J_{t,y}\,e^{-\sigma\frac{\vert x-y\vert^2}{t}}\\ & = C\,t^{-\frac{d}{2}}\,R^{\lambda^+-1}_{t,x}J_{t,x}\,R^{\gamma}_{t,y}\left(\frac{J_{t,y}}{R_{t,y}}\right)^{\beta}\, R^{\lambda^--\gamma}_{t,y}\left(\frac{J_{t,y}}{R_{t,y}}\right)^{1-\beta}\,e^{-\sigma\frac{\vert x-y\vert^2}{t}} \end{align} holds for all $t>s$ and $x,y\in\cD$. Let us prove estimate \eqref{main inequality 3}.
{\bf 2}. Using H\"older inequality and \eqref{20.09.15}, we have \begin{align*} \vert u(t,x)\vert&=\Big\vert \int_{\mathcal{D}}G(t,0,x,y)u_0(y)dy\Big\vert \\ &\leq \int_{\mathcal{D}}G(t,0,x,y)\vert y\vert ^{-\mu+\alpha}\rho(y)^{-\alpha}\vert h(y)\vert dy\\ &\leq C\cdot I_1(t,x)\cdot I_2(t,x), \end{align*} where $q=p/(p-1)$; $\frac1p+\frac1q=1$, $$ I_1(t,x)= \left( \int_{\mathcal{D}}t^{-\frac{d}{2}}\,e^{-\sigma\frac{\vert x-y\vert ^2}{t}}\cdot R_{t,x}^{(\lambda^+-1)p}J_{t,x}^{\,p}\cdot K_{1}(t,y)\cdot \vert h(y)\vert ^pdy\right)^{1/p}, $$ and $$ I_2(t,x)=\left(\int_{\mathcal{D}}t^{-\frac{d}{2}}\,e^{-\sigma\frac{\vert x-y\vert ^2}{t}}\cdot K_{2}(t,y)\cdot \vert y\vert ^{(-\mu+\alpha)q}\rho^{-\alpha q}(y)dy\right)^{1/q} $$ with $$ K_{1}(t,y) =R^{\gamma p}_{t,y}\left(\frac{J_{t,y}}{R_{t,y}}\right)^{\beta p},\quad \, K_{2}(t,y)=R^{(\lambda^--\gamma)q}_{t,y}\left(\frac{J_{t,y}}{R_{t,y}}\right)^{(1-\beta)q}. $$
{\bf 3}. We show that there exists a constant $C$ depending only on $\cM, d,p,\theta,\Theta, \nu_1, \nu_2$ and $\lambda^{-}$ such that \begin{align*} I_2(t,x)\leq C \left(\vert x\vert +\sqrt{t}\right)^{-\mu+\alpha}\left(\rho(x)+\sqrt{t}\right)^{-\alpha}. \end{align*} This is done by Lemma~\ref{lemma3.1} (ii). Indeed, by change of variables $y/\sqrt{t}\to y$ and the fact $\rho(y)/\sqrt{t}=\rho(y/\sqrt{t})$, we have \begin{align*} I_2^q(t,x)&=t^{-\frac{d}{2}}\int_{\mathcal{D}}e^{-\sigma\frac{\vert x-y\vert ^2}{t}}K_{2}(t,y)\vert y\vert ^{(-\mu+\alpha)q}\vert \rho(y)\vert ^{-\alpha q}dy\\ &=t^{-\mu q/2}\int_{\mathcal{D}}e^{-\sigma\vert \frac{x}{\sqrt{t}}-y\vert ^2}\frac{\vert y\vert ^{(\lambda^--\mu-\gamma-1+\alpha+\beta)q}}{(\vert y\vert +1)^{(\lambda^--\gamma-1+\beta)q}}\cdot\frac{\rho(y)^{(1-\alpha-\beta)q}}{(\rho(y)+1)^{(1-\beta)q}}dy, \end{align*} for which we can apply Lemma~\ref{lemma3.1} since \eqref{gamma beta chosen 1} implies $(\lambda^--\mu-\gamma)q>-d$ and $(1-\alpha-\beta)q>-1$. Thus we get constant $C=C(\cM,d,p,\theta,\Theta, \lambda^-,\sigma)$ such that $$ I_2^q(t,x) \leq C \left(\vert x\vert +\sqrt{t}\right)^{(-\mu+\alpha)q}\left(\rho(x)+\sqrt{t}\right)^{-\alpha q} $$ holds for all $t,x$.
{\bf 4}. To prove estimate \eqref{main inequality 3}, by Step 3 we first note \begin{align*} \vert x\vert ^{\mu-\alpha}\rho(x)^{\alpha-\frac{2}{p}}\cdot \vert u(t,x)\vert &\leq C\, \vert x\vert ^{\mu-\alpha}\rho(x)^{\alpha-\frac{2}{p}} \cdot I_1(t,x)\cdot I_2(t,x)\\ & \leq C\,\rho(x)^{-2/p}R^{\,\mu-\alpha}_{t,x} J^{\,\alpha}_{t,x}\cdot I_1(t,x) \end{align*} for any $t,x$. Using this and Fubini's Theorem, we have \begin{align*}
\|\rho_{\circ}^{\mu-\alpha}\rho^{\alpha-\frac2p}u\|^p_{L_p([0,T]\times\mathcal{D})}&\leq C \int_0^T\int_{\mathcal{D}}\vert \rho(x)\vert ^{-2}\Big(R^{\,\mu-\alpha}_{t,x} J^{\,\alpha}_{t,x}\, I_1(t,x) \Big)^p\,dxdt\\ &=C\int_{\mathcal{D}}I_3(y)\cdot \vert h(y)\vert ^pdy, \end{align*} where \begin{align*} I_3(y)&=\int^T_0t^{-\frac{d}{2}}K_{1}(t,y)\left(\int_{\mathcal{D}}e^{-\sigma\frac{\vert x-y\vert ^2}{t}}\, R^{\,(\lambda^++\mu-\alpha-1)p}_{t,x} J^{\,(\alpha+1) p}_{t,x}\rho(x)^{-2}\,dx\right)dt. \end{align*}
Since \eqref{inequality mu1} and \eqref{inequality alpha1} imply $(\lambda^++\mu) p-2>-d$ and $(\alpha+1) p-2>-1$, by change of variables $x/\sqrt{t}\to x$, the fact $\rho(x)/\sqrt{t}=\rho(x/\sqrt{t})$, and Lemma~\ref{lemma3.1} (ii), we have \begin{align*} I_3(y)&=\int^T_0\frac{1}{t}K_{1}(t,y)\, \int_{\mathcal{D}}e^{-\sigma\vert x-\frac{y}{\sqrt{t}}\vert ^2}\frac{\vert x\vert ^{(\lambda^++\mu-\alpha-1)p}}{(\vert x\vert +1)^{(\lambda^++\mu-\alpha-1)p}}\frac{\rho(x)^{(\alpha+1) p-2}}{(\rho(x)+1)^{(\alpha+1) p}}\,dx\,dt\\ &\leq C\int_0^{\infty}K_{1}(t,y)\left(\rho(y)+\sqrt{t}\right)^{-2}dt\\ &= C\int_0^{\infty}\frac{\vert y\vert ^{(\gamma-\beta)p}}{\left(\vert y\vert +\sqrt{t}\right)^{(\gamma-\beta)p}}\cdot\frac{\rho(y)^{\beta p}}{\left(\rho(y)+\sqrt{t}\right)^{\beta p+2}}dt. \end{align*}
Lastly, owing to $\gamma p>0$, $\beta p> 0$, and the fact $|y|\ge \rho(y)$ in $\cD$, we can apply Lemma~\ref{lemma3.1} (i) and we obtain \begin{align*} I_3(y)\leq C(\cM,d,p,\theta,\Theta,\nu_1,\nu_2, \lambda^{\pm}). \label{Last constant} \end{align*} Hence, there exists a constant $C$ having the dependency described in the lemma such that \begin{align*}
\left\|\rho_{\circ}^{\mu-\alpha}\rho^{\alpha-\frac2p}u\right\|^p_{L_p([0,T]\times\mathcal{D})}\leq C \|h\|^p_{L_p(\mathcal{D})}. \end{align*} Estimate \eqref{main inequality 3} and the lemma are proved.
{\bf 5}. When $\theta$ obeys \eqref{theta range restricted}, we choose $\lambda^{\pm}$ in the interval $(0,\lambda_c(\nu_1,\nu_2))$. Then the constant $C$ of Green's function estimates in Lemma \ref{Green estimate} depends only on $\cM,\nu_1,\nu_2,\lambda^{\pm}$. Therefore, in particular, constant $C$ in \eqref {20.09.15} does not depend on $L_0$. Tracking the constants down through Steps 1, 2, 3, 4, we note that the constant in \eqref{main inequality 3} does not depend on the particular operator $L_0$. Rather, it depends on $\nu_1,\, \nu_2$ and hence $C=C(\cM,d,p,\theta,\Theta,\nu_1,\nu_2)$. \end{proof} \begin{remark}\label{initial.p ge 2.}
For $\gamma\ge 0$, $\|u\|_{L_p(\bR^d)}\le \|u\|_{H^{\gamma}_p(\bR^d)}$ is a basic property of Bessel potential spaces. Combining this with Lemma \ref{property1} and Definition \ref{defn 8.19}, in the context of Lemma \ref{main est3} we obtain \begin{equation*}
\|u_0\|_{L_p(\Omega; K^0_{\theta+2-p,\Theta+2-p}(\cD))}\le \|u_0\|_{L_p(\Omega; K^{1-2/p}_{\theta+2-p,\Theta+2-p}(\cD))}=\|u_0\|_{\bU^{1}_{p,\theta,\Theta}(\cD)} \end{equation*} if $p\ge 2$. \end{remark}
\begin{lemma}\label{main est1} Let $p\in (1,\infty)$ and let $\theta\in\bR$, $\Theta\in\bR$ satisfy $$ p\big(1-\lambda^+_{c,L_0}\big)<\theta<p\big(d-1+\lambda^-_{c,L_0}\big)\quad\text{and}\quad d-1<\Theta<d-1+p. $$ If $f^0\in \bL_{p,\theta+p,\Theta+p}(\cD,T)$ and $\tbf\in \bL_{p,\theta,\Theta}(\cD,T,d)$, then $u:=\cR(0,f^0,\tbf,0)$ belongs to $\bL_{p,\theta-p,\Theta-p}(\cD,T)$ and the estimate
\begin{eqnarray*}
\|u\|_{\bL_{p,\theta-p,\Theta-p}(\cD,T)}\leq C \big(\|f^0\|_{\bL_{p,\theta+p,\Theta+p}(\cD,T)}+ \|\tbf\|_{\bL_{p,\theta,\Theta}(\cD,T,d)}\big) \end{eqnarray*} holds, where $C=C(\cM,d,p,\theta,\Theta,L_0)$. Moreover, if \begin{equation*} p\left(1-\lambda_c(\nu_1,\nu_2)\right)<\theta< p\left(d-1+ \lambda_c(\nu_1,\nu_2)\right), \end{equation*} then the constant $C$ depends only on $\cM,d,p,\theta,\Theta, \nu_1$ and $\nu_2$. \end{lemma} \begin{proof} By the same reason explained in the beginning of the proof of Lemma \ref{main est3}, we can assume $f^0, \tbf$, and hence $u$ are non-random and we just prove \begin{align} \label{main est 2.1}
\int_0^T \int_{\cD} \vert \rho^{-1}u\vert ^p \rho_{\circ}^{\theta-\Theta} \rho^{\Theta-d} dxdt \leq C \int_0^T \int_{\cD} \left(\vert \rho\,f^0\vert ^p + \vert \tbf\vert ^p \right)\rho_{\circ}^{\theta-\Theta} \rho^{\Theta-d} dxdt . \end{align} Furthermore, when $\tbf=0$ estimate \eqref{main est 2.1} is already proved in \cite[Lemma 3.1]{ConicPDE}, the deterministic counterpart of this article. Hence, we may assume $f^0=0$. Finally, for simplicity we further assume $f^2=\cdots=f^d=0$.
{\bf 1}. We denote $\mu:=(\theta-d)/p$ and $\alpha:=(\Theta-d)/p$ and set $$ h(t,x)=\rho_{\circ}^{\mu-\alpha}(x)\rho^{\alpha}(x) f^1(t,x). $$ Then \eqref{main est 2.1} turns into \begin{equation}\label{main inequality 2-220830}
\Big\|\rho_0^{\mu-\alpha}\rho^{\alpha-1}u\Big\|_{L_p\left([0,T]\times \mathcal{D}\right)}\le C \|h\|_{L_p\left([0,T]\times \mathcal{D}\right)}. \end{equation}
We prepare a few things as we did in Step 1 of the proof of Lemma \ref{main est3}. By the range of $\theta$ given in the statement, we can find $\lambda^+\in(0,\lambda^+_{c,L_0})$ and $\lambda^-\in(0,\lambda^-_{c,L_0})$ satisfying \begin{align*} 1-\frac{d}{p}-\lambda^+<\mu<d-1-\frac{d}{p}+\lambda^-. \end{align*} Also, by the range of $\Theta$ given we have \begin{align*} -\frac{1}{p}<\alpha<1-\frac{1}{p}. \end{align*} Then we can choose and fix the constants $\gamma_1$, $\gamma_2$, $\beta_1$ and $\beta_2$ satisfying \begin{align} -\frac{d-1}{p} < \gamma_1 < \lambda^+ - 1 + \mu+\frac{1}{p},&\qquad 0<\gamma_2<\lambda^-+d-1-\frac{d}{p}-\mu\nonumber\\ 0<\beta_1<\alpha+\frac{1}{p},\qquad\qquad&\qquad 0<\beta_2<1-\frac{1}{p}-\alpha.\label{inequality gamma beta2} \end{align} Moreover, since $\lambda^+\in (0,\lambda^+_c)$ and $\lambda^-\in (0,\lambda^-_c)$, by Lemma~\ref{Green estimate} there exist constants $C=C(\cM,L_0,\nu_1, \nu_2, \lambda^{\pm}),\,\sigma=\sigma(\nu_1,\nu_2)>0$ such that \begin{align} \vert \nabla_y G(t,s,x,y)\vert &\leq \frac{C}{(t-s)^{(d+1)/2}}e^{-\sigma\frac{\vert x-y\vert ^2}{t-s}}\,J_{t-s,x}R^{\lambda^+-1}_{t-s,x}R^{\lambda^--1}_{t-s,y} \nonumber\\ &=\frac{C}{(t-s)^{(d+1)/2}}e^{-\sigma\frac{\vert x-y\vert ^2}{t-s}}R^{\gamma_1}_{t-s,x}\left(\frac{J_{t-s,x}}{R_{t-s,x}}\right)^{\beta_1}R^{\gamma_2}_{t-s,y}\left(\frac{J_{t-s,y}}{R_{t-s,y}}\right)^{\beta_2} \nonumber\\ &\qquad \times R^{\lambda^+-\gamma_1}_{t-s,x}\left(\frac{J_{t-s,x}}{R_{t-s,x}}\right)^{1-\beta_1}R^{\lambda^--1-\gamma_2}_{t-s,y} \left(\frac{J_{t-s,y}}{R_{t-s,y}}\right)^{-\beta_2} \label{deriv} \end{align} holds for all $t>s$ and $x,y\in\cD$. Now, we start proving \eqref{main inequality 2-220830}.
{\bf 2}. By H\"older inequality and \eqref{deriv}, we have \begin{align} \vert u(t,x)\vert &=\Big\vert \int^t_0\int_{\mathcal{D}}G_{y^1}(t,s,x,y)f^1(s,y)dyds\Big\vert \nonumber\\ &\leq \int^t_0\int_{\mathcal{D}}\vert \nabla_y G(t,s,x,y)\vert \cdot\vert y\vert ^{-\mu+\alpha}\rho(y)^{-\alpha}\vert h(s,y)\vert dyds \nonumber\\ &\leq C\, I_1(t,x)\cdot I_2(t,x), \label{I12} \end{align} where $q=p/(p-1)$, \begin{align*} &I_1(t,x)\\ &=\left(\int^t_0\int_{\mathcal{D}}\frac{1}{(t-s)^{(d+1)/2}}\,e^{-\sigma\frac{\vert x-y\vert ^2}{t-s}}K_{1,1}(t-s,x)K_{1,2}(t-s,y)\vert h(s,y)\vert ^pdyds\right)^{1/p} \end{align*} and \begin{align*} &I_2(t,x)\\ &=\left(\int^t_0\int_{\mathcal{D}}\frac{1}{(t-s)^{(d+1)/2}}\,e^{-\sigma\frac{\vert x-y\vert ^2}{t-s}}K_{2,1}(t-s,x)K_{2,2}(t-s,y)\vert y\vert ^{(-\mu+\alpha)q}\rho^{\alpha q}(y)dyds\right)^{1/q} \end{align*} with \begin{align*} K_{1,1}(t,x)=R^{\gamma_1p}_{t,x}\left(\frac{J_{t,x}}{R_{t,x}}\right)^{\beta_1p},\quad &
K_{1,2}(t,y)=R^{\gamma_2 p}_{t,y}\left(\frac{J_{t,y}}{R_{t,y}}\right)^{\beta_2 p},\\ K_{2,1}(t,x)=R^{(\lambda^+-\gamma_1)q}_{t,x}\left(\frac{J_{t,x}}{R_{t,x}}\right)^{(1-\beta_1)q},\quad &K_{2,2}(t,y)=R^{(\lambda^--1-\gamma_2)q}_{t,y}\left(\frac{J_{t,y}}{R_{t,y}}\right)^{-\beta_2q}. \end{align*}
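We record, as a quick check on the bookkeeping, that the four kernels recombine into the factors of \eqref{deriv}:
\begin{equation*}
K_{1,1}^{1/p}(t,x)\,K_{2,1}^{1/q}(t,x)=J_{t,x}\,R^{\lambda^+-1}_{t,x},\qquad
K_{1,2}^{1/p}(t,y)\,K_{2,2}^{1/q}(t,y)=R^{\lambda^--1}_{t,y}.
\end{equation*}
In other words, the parameters $\gamma_1,\gamma_2,\beta_1,\beta_2$ do not change the total powers appearing in \eqref{deriv}; they only redistribute them between the $L_p$-piece $I_1$ and the $L_q$-piece $I_2$.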
{\bf 3}. We show that there exists a constant $C=C(\cM,d,p,\theta,\Theta,\nu_1,\nu_2)>0$ such that \begin{equation}\label{I2} I_2(t,x)\leq C \vert x\vert ^{-\mu+\alpha}\rho(x)^{-\alpha+\frac{1}{q}} \end{equation} holds for all $t,x$; we note that the right hand side is independent of $t$.
First, by the change of variables $y/\sqrt{t-s}\to y$ and Lemma~\ref{lemma3.1} (ii), which we can apply since \eqref{inequality gamma beta2} gives $(\lambda^--1-\mu-\gamma_2)q>-d$ and $(-\alpha-\beta_2)q>-1$, we have \begin{align*} &\quad\frac{1}{(t-s)^{(d+1)/2}}\int_{\mathcal{D}}e^{-\sigma\frac{\vert x-y\vert ^2}{t-s}}K_{2,2}(t-s,y)\vert y\vert ^{(-\mu+\alpha)q}\vert \rho(y)\vert ^{-\alpha q}dy\\ &=(t-s)^{-(\mu q+1)/2}\int_{\mathcal{D}}e^{-\sigma\vert \frac{x}{\sqrt{t-s}}-y\vert ^2}\frac{\vert y\vert ^{(\lambda^--\mu-1-\gamma_2+\alpha+\beta_2)q}}{(\vert y\vert +1)^{(\lambda^--1-\gamma_2+ \beta_2)q}}\cdot\frac{\rho(y)^{(-\alpha-\beta_2)q}}{(\rho(y)+1)^{-\beta_2 q}}dy\\ &\leq C (t-s)^{-1/2}\left(\vert x\vert +\sqrt{t-s}\right)^{(-\mu+\alpha)q}\left(\rho(x)+\sqrt{t-s}\right)^{-\alpha q}, \end{align*} where $C=C(\cM,d,p,\theta,\Theta,\nu_1,\nu_2)$. Using this, we have \begin{align*} &I_{2}^q(t,x)\\ &\leq C\int^t_0K_{2,1}(t-s,x)\cdot(t-s)^{-1/2}\left(\vert x\vert +\sqrt{t-s}\right)^{(-\mu+\alpha)q}\left(\rho(x)+\sqrt{t-s}\right)^{-\alpha q}ds\\ &\le C\int^t_{-\infty}\frac{\vert x\vert ^{(\lambda^+-1-\gamma_1+\beta_1)q}}{(\vert x\vert +\sqrt{t-s})^{(\lambda^+-1+\mu-\alpha-\gamma_1+\beta_1)q}}\cdot\frac{\rho(x)^{(1-\beta_1)q}}{(\rho(x)+\sqrt{t-s})^{(1+\alpha-\beta_1)q}}\cdot \frac{1}{(t-s)^{1/2}}ds. \end{align*} Then, by the change of variable $t-s\to s$ and Lemma~\ref{lemma3.1} (i), which we can apply since $(\lambda^++\mu-\gamma_1)q >1$ and $(1+\alpha-\beta_1)q> 1$ hold by \eqref{inequality gamma beta2}, we further obtain \begin{align*} I_2^q(t,x)&\leq C \vert x\vert ^{(-\mu+\alpha)q}\rho(x)^{-\alpha q+1}, \end{align*} which is equivalent to \eqref{I2}.
{\bf 4}. Now, by \eqref{I2} and \eqref{I12} we have \begin{align*} \vert u(t,x)\vert \leq C\, I_1(t,x)\cdot I_2(t,x)\leq C\, \vert x\vert ^{-\mu+\alpha}\rho(x)^{-\alpha+\frac{1}{q}}\, I_1(t,x) \end{align*} and hence \begin{align*}
\|\rho_0^{\mu-\alpha}\rho^{\alpha-1}u\|^p_{L_p([0,T]\times\mathcal{D})}&\leq C \int_0^T\int_{\mathcal{D}}\vert \rho(x)\vert ^{-1}I_1^p(t,x)\,dxdt\\ &=C\int_0^T\int_{\mathcal{D}}I_3(s,y)\cdot \vert h(s,y)\vert ^pdyds, \end{align*} where \begin{align*} I_3(s,y)=\int^T_s\int_{\mathcal{D}}\frac{1}{(t-s)^{(d+1)/2}}e^{-\sigma\frac{\vert x-y\vert ^2}{t-s}}K_{1,1}(t-s,x)K_{1,2}(t-s,y)\rho(x)^{-1}\,dxdt. \end{align*}
By the change of variables $t-s\to t$ followed by $x/\sqrt{t}\to x$ and Lemma~\ref{lemma3.1} (ii) with $\gamma_1p-1>-d$ and $\beta_1p-1>-1$ from \eqref{inequality gamma beta2}, we have \begin{align*} I_3(s,y)&=\int^T_s\frac{1}{(t-s)^{(d+1)/2}}K_{1,2}(t-s,y)\left(\int_{\mathcal{D}}e^{-\sigma\frac{\vert x-y\vert ^2}{t-s}}K_{1,1}(t-s,x)\rho(x)^{-1}\,dx\right)dt\\ &\leq \int^{\infty}_0\frac{1}{t}K_{1,2}(t,y)\left(\int_{\mathcal{D}}\frac{\vert x\vert ^{(\gamma_1-\beta_1)p}}{(\vert x\vert +1)^{(\gamma_1-\beta_1)p}}\frac{\rho(x)^{\beta_1p-1}}{(\rho(x)+1)^{\beta_1p}}e^{-\sigma\vert x-\frac{y}{\sqrt{t}}\vert ^2}\,dx\right)dt\\ &\leq C\int_0^{\infty}K_{1,2}(t,y)\left(\rho(y)+\sqrt{t}\right)^{-1}t^{-1/2}dt\\ &= C\int_0^{\infty}\frac{\vert y\vert ^{(\gamma_2-\beta_2)p}}{\left(\vert y\vert +\sqrt{t}\right)^{(\gamma_2-\beta_2)p}}\cdot\frac{\rho(y)^{\beta_2p}}{\left(\rho(y)+\sqrt{t}\right)^{\beta_2p+1}}\cdot\frac{1}{t^{1/2}}dt. \end{align*} Lastly, due to $\gamma_2p>0$ and $\beta_2p> 0$, Lemma~\ref{lemma3.1} (i) yields \begin{align*} I_3(s,y)\leq C(\cM,d,p,\theta,\Theta, \nu_1,\nu_2). \end{align*} Hence, there exists a constant $C$ having the dependency described in the lemma such that \begin{align*}
\left\|\rho_{\circ}^{\mu-\alpha}\rho^{\alpha-1}u\right\|^p_{L_p([0,T]\times\mathcal{D})}\leq C \|h\|^p_{L_p([0,T]\times\mathcal{D})}. \end{align*} This proves \eqref{main inequality 2-220830}, and hence the lemma.
{\bf 5}. The last part of the claim related to the range of $\theta$ holds by the same reason explained in Step 5 of the proof of Lemma \ref{main est3}. \end{proof} Now, we move on to the stochastic part, the most important and involved one. \begin{lemma}\label{main est2} Let $p\in [2,\infty)$ and let $\theta\in\bR$, $\Theta\in\bR$ satisfy $$ p(1-\lambda^+_{c,L_0})<\theta<p(d-1+\lambda^-_{c,L_0})\quad\text{and}\quad d-1<\Theta<d-1+p. $$ If $g\in\bL_{p,\theta,\Theta}(\cD,T, \ell_2)$, then $u:=\cR(0,0,0,g)$ belongs to $\bL_{p,\theta-p,\Theta-p}(\cD,T)$ and the estimate
\begin{eqnarray}\label{main inequality2}
\|u\|_{\bL_{p,\theta-p,\Theta-p}(\cD,T)}\leq C \|g\|_{\bL_{p,\theta,\Theta}(\cD,T,\ell_2)} \end{eqnarray} holds, where $C=C(\cM,d,p,\theta,\Theta,L_0)$. Moreover, if \begin{equation*}
p\big(1-\lambda_c(\nu_1,\nu_2)\big)<\theta< p\big(d-1+ \lambda_c(\nu_1,\nu_2)\big), \end{equation*} then the constant $C$ depends only on $\cM,d,p,\theta,\Theta, \nu_1$, and $\nu_2$. \end{lemma}
\begin{proof}
{\bf 1}. Again, we denote $\mu:=(\theta-d)/p$ and $\alpha:=(\Theta-d)/p$. We put $h(\omega,t,x)=\rho_{\circ}^{\mu-\alpha}(x) \rho(x)^\alpha g(\omega,t,x)$ and recall $$\Omega_T=\Omega \times (0,T], \quad L_p(\Omega_T \times \cD):=L_p(\Omega_T\times \cD, d\bP dt dx). $$
Then \eqref{main inequality2} is the same as \begin{equation}\label{main inequality 2}
\big\|\rho_{\circ}^{\mu-\alpha}\rho^{\alpha-1}u\big\|^p_{L_p(\Omega_T\times\mathcal{D})}\le C\,\big\|\vert h\vert_{\ell_2}\big\|^p_{L_p\left(\Omega_T\times \mathcal{D}\right)}. \end{equation}
As we did in the proof of Lemma \ref{main est1}, we prepare a few things. By the range of $\theta$ given, we can find constants $\lambda^+\in(0,\lambda^+_{c,L_0})$ and $\lambda^-\in(0,\lambda^-_{c,L_0})$ satisfying \begin{align*} 1-\frac{d}{p}-\lambda^+<\mu<d-\frac{d}{p}+\lambda^-. \end{align*} Also, by the range of $\Theta$ we have \begin{align*} -\frac{1}{p}<\alpha<1-\frac{1}{p}. \end{align*} Then we can choose and fix the constants $\gamma_1$, $\gamma_2$, $\beta_1$, and $\beta_2$ satisfying \begin{align} -\frac{d-2}{p} < \gamma_1 < \lambda^+ - 1 + \mu + \frac{2}{p},&\qquad 0<\gamma_2<\lambda^-+d-\frac{d}{p}-\mu\nonumber\\ \frac{1}{p}<\beta_1<\alpha+\frac{2}{p},\qquad\qquad & \qquad 0<\beta_2<2-\frac{1}{p}-\alpha.\label{inequality gamma beta3} \end{align}
Further, by Lemma \ref{Green estimate} there exist constants $C=C(\cM,L_0,\nu_1, \nu_2, \lambda^{\pm}),\,\sigma=\sigma(\nu_1,\nu_2)>0$ such that for any $t>s$ and $x,y\in\cD$, \begin{align}
\nonumber G(t,s,x,y)\leq& \frac{C}{(t-s)^{d/2}}e^{-\sigma\frac{\vert x-y\vert^2}{t-s}} \,J_{t-s,x}\, J_{t-s,y}\,R^{\lambda^+-1}_{t-s,x}\,R^{\lambda^--1}_{t-s,y}\\ \nonumber
=&C\, (t-s)^{-d/2}e^{-\sigma\frac{\vert x-y\vert ^2}{t-s}} R^{\gamma_1}_{t-s,x}\left(\frac{J_{t-s,x}}{R_{t-s,x}}\right)^{\beta_1} R^{\gamma_2}_{t-s,y}\left(\frac{J_{t-s,y}}{R_{t-s,y}}\right)^{\beta_2}\\ & \times R^{\lambda^+-\gamma_1}_{t-s,x}\left(\frac{J_{t-s,x}}{R_{t-s,x}}\right)^{1-\beta_1} R^{\lambda^--\gamma_2}_{t-s,y}\left(\frac{J_{t-s,y}}{R_{t-s,y}}\right)^{1-\beta_2} \label{eqn 8.7.3} \end{align} holds.
{\bf 2}. We first estimate the $p$-th moment $\bE|u(t,x)|^p$ for any given $t$ and $x$. Using the Burkholder--Davis--Gundy inequality and Minkowski's integral inequality, we have \begin{align*} \bE\vert u(t,x)\vert ^p&=\bE\Big\vert \sum_{k\in\bN}\int^t_0\int_{\mathcal{D}}G(t,s,x,y)g^k(s,y)dydw^k_s\Big\vert ^p\\ &\leq C\bE\left(\int_0^t\sum_{k\in\bN}\left(\int_{\cD}G(t,s,x,y)g^k(s,y)dy\right)^2 ds \right)^{p/2}\\ &\leq C\bE\left(\int_0^t\left(\int_{\cD}G(t,s,x,y)\vert g(s,y)\vert _{\ell_2}dy\right)^2 ds \right)^{p/2}\\ &=C\bE\left(\int_0^t\left(\int_{\cD}G(t,s,x,y)\vert y\vert ^{-\mu+\alpha}\rho(y)^{-\alpha}\vert h(s,y)\vert _{\ell_2}dy\right)^2 ds \right)^{p/2}. \end{align*} We denote $$ I(\omega,t,x):=\left(\int_0^t\left(\int_{\cD}G(t,s,x,y)\vert y\vert ^{-\mu+\alpha}\rho(y)^{-\alpha}\vert h(\omega,s,y)\vert _{\ell_2}dy\right)^2 ds \right)^{1/2}. $$ Then, using \eqref{eqn 8.7.3} and applying H\"older's inequality twice, in the $y$-integral and then in the $s$-integral, we get
\begin{align} I(\omega,t,x)&\leq C\left(\int_0^t\Big(\int_{\cD}I_1\cdot I_2\;dy\Big)^2ds\right)^{1/2} \label{eqn 8.7.7}\\
&\leq C\|I_1(\omega,t,\cdot,x,\cdot)\|_{L_p((0,t)\times \cD, ds\,dy)}\Big\|\|I_2(t,\cdot,x,\cdot)\|_{L_{q}(\cD,dy)}\Big\|_{L_{r}((0,t),ds)} \nonumber \end{align} where $q=\frac{p}{p-1}$ and $r=\frac{2p}{p-2}\,(=\infty\text{ if }p=2)$ come from H\"older's inequality with $\frac1p+\frac1q=1$ in the $y$-integral and $\frac12=\frac1p+\frac1r$ in the $s$-integral, \begin{align} &I_1^p(\omega,t,s,x,y) \label{I_1^p}\\ &= (t-s)^{-d/2}e^{-\sigma\frac{\vert x-y\vert ^2}{t-s}} \left(R^{\gamma_1}_{t-s,x}\left(\frac{J_{t-s,x}}{R_{t-s,x}}\right)^{\beta_1}R^{\gamma_2}_{t-s,y}\left(\frac{J_{t-s,y}}{R_{t-s,y}}\right)^{\beta_2}\right)^p\vert h(\omega,s,y)\vert _{\ell_2}^p \nonumber \\ &= (t-s)^{-d/2}e^{-\sigma\frac{\vert x-y\vert ^2}{t-s}} \,\,K_{1,1}(t-s,x)\,K_{1,2}(t-s,y)\,\,\vert h(\omega,s,y)\vert _{\ell_2}^p, \nonumber \end{align} and \begin{align*} &I_2^q(t,s,x,y)\\ &= (t-s)^{-d/2}e^{-\sigma\frac{\vert x-y\vert ^2}{t-s}}\\ &\quad \times \left(R^{\lambda^+-\gamma_1}_{t-s,x}\left(\frac{J_{t-s,x}}{R_{t-s,x}}\right)^{1-\beta_1}R^{\lambda^--\gamma_2}_{t-s,y}\left(\frac{J_{t-s,y}}{R_{t-s,y}}\right)^{1-\beta_2}\right)^q\vert y\vert ^{(-\mu+\alpha)q}\rho(y)^{-\alpha q}\\ &= (t-s)^{-d/2}e^{-\sigma\frac{\vert x-y\vert ^2}{t-s}} \,\,K_{2,1}(t-s,x)\,K_{2,2}(t-s,y)\,\,\vert y\vert ^{(-\mu+\alpha)q}\rho(y)^{-\alpha q}, \end{align*} with \begin{align*} &K_{1,1}(t,x)=R^{\gamma_1p}_{t,x}\left(\frac{J_{t,x}}{R_{t,x}}\right)^{\beta_1p},\quad K_{1,2}(t,y)=R^{\gamma_2 p}_{t,y}\left(\frac{J_{t,y}}{R_{t,y}}\right)^{\beta_2 p},\\ &K_{2,1}(t,x)=R^{(\lambda^+-\gamma_1)q}_{t,x}\left(\frac{J_{t,x}}{R_{t,x}}\right)^{(1-\beta_1)q},\quad K_{2,2}(t,y)=R^{(\lambda^--\gamma_2)q}_{t,y}\left(\frac{J_{t,y}}{R_{t,y}}\right)^{(1-\beta_2)q}. \end{align*}
Note that, by \eqref{eqn 8.7.7}, we have \begin{eqnarray}
\label{eqn 8.7.8} \bE \vert u(t,x)\vert ^p &\leq& C \bE I^p(t,x) \\
&\leq& C \Big\|\|I_2(t,\cdot,x,\cdot)\|_{L_{q}(\cD,dy)}\Big\|^p_{L_{r}((0,t),ds)}
\bE\|I_1(\omega,t,\cdot,x,\cdot)\|^p_{L_p((0,t)\times \cD, ds\,dy)}. \nonumber \end{eqnarray}
{\bf 3}. In this step we will show that there exists a constant $C=C(\cM,d,p,\theta,\Theta,\nu_1,\nu_2)>0$ such that \begin{equation}\label{I_2}
\big\|\|I_2(t,\cdot,x,\cdot)\|_{L_{q}(\cD,dy)}\big\|_{L_{r}((0,t),ds)}\leq C \vert x\vert ^{-\mu+\alpha}\rho(x)^{-\alpha+1-2/p}. \end{equation} In particular, the right hand side is independent of $t$.
{\bf Case 1.} Assume $p=2$ (hence, $q=2$ and $r=\infty$). First, we consider \begin{align*} &\int_{\cD}I_2^2(t,s,x,y) \,dy\\ &=K_{2,1}(t-s,x)\cdot\frac{1}{(t-s)^{d/2}}\int_{\mathcal{D}}e^{-\sigma\frac{\vert x-y\vert ^2}{t-s}}K_{2,2}(t-s,y)\vert y\vert ^{2(-\mu+\alpha)}\vert \rho(y)\vert ^{-2\alpha}\,dy. \end{align*} Since $2(\lambda^--\mu-\gamma_2)>-d$ and $2(1-\alpha-\beta_2)>-1$ from \eqref{inequality gamma beta3}, by change of variables $y/\sqrt{t-s}\to y$ and Lemma~\ref{lemma3.1} (ii), we have \begin{align*} &\frac{1}{(t-s)^{d/2}}\int_{\mathcal{D}}e^{-\sigma\frac{\vert x-y\vert ^2}{t-s}}K_{2,2}(t-s,y)\vert y\vert ^{2(-\mu+\alpha)}\vert \rho(y)\vert ^{-2\alpha}\,dy\\ &=(t-s)^{-\mu}\int_{\mathcal{D}}e^{-\sigma\vert \frac{x}{\sqrt{t-s}}-y\vert ^2}\frac{\vert y\vert ^{2(\lambda^--\mu-\gamma_2-1+\alpha+\beta_2)}}{(\vert y\vert +1)^{2(\lambda^--\gamma_2-1+\beta_2)}}\cdot\frac{\rho(y)^{2(1-\alpha-\beta_2)}}{(\rho(y)+1)^{2(1-\beta_2)}}dy\\ &\leq C\left(\vert x\vert +\sqrt{t-s}\right)^{2(-\mu+\alpha)}\left(\rho(x)+\sqrt{t-s}\right)^{-2\alpha}. \end{align*} Hence, we have \begin{align*} &\sup_{s\in[0,t]}\left(\int_{\cD}I_{2}^2\,dy\right)^{1/2}\\ &\leq C\sup_{s\in[0,t]}\left(K_{2,1}(t-s,x)\cdot\left(\vert x\vert +\sqrt{t-s}\right)^{2(-\mu+\alpha)}\left(\rho(x)+\sqrt{t-s}\right)^{-2\alpha}\right)^{1/2}\\ &= C\sup_{s\in[0,t]}\left(\frac{\vert x\vert ^{\lambda^+-1-\gamma_1+\beta_1}}{(\vert x\vert +\sqrt{t-s})^{\lambda^+-1+\mu-\gamma_1-\alpha+\beta_1}}\cdot\frac{\rho(x)^{1-\beta_1}}{(\rho(x)+\sqrt{t-s})^{\alpha+1-\beta_1}}\right)\\ &=C\,\vert x\vert ^{-\mu+\alpha}\rho(x)^{-\alpha}\sup_{s\in[0,t]}\left(R_{t-s,x}^{\lambda^+-1+\mu-\gamma_1}\left(\frac{J_{t-s,x}}{R_{t-s,x}}\right)^{\alpha+1-\beta_1}\right)\\ &\leq C \,\vert x\vert ^{-\mu+\alpha}\rho(x)^{-\alpha} \end{align*} due to $\lambda^+-1+\mu-\gamma_1>0$, $\alpha+1-\beta_1>0$ and $0\leq J_{t-s,x}\leq R_{t-s,x}\leq 1$. Thus \eqref{I_2} holds.
{\bf Case 2.} Let $p>2$. Again, since $(\lambda^--\mu-\gamma_2)q>-d$ and $(1-\alpha-\beta_2)q>-1$, by change of variables and Lemma~\ref{lemma3.1} (ii), we observe \begin{align*} &\frac{1}{(t-s)^{d/2}}\int_{\mathcal{D}}e^{-\sigma\frac{\vert x-y\vert ^2}{t-s}}K_{2,2}(t-s,y)\vert y\vert ^{(-\mu+\alpha)q}\rho(y)^{-\alpha q}dy\\ =&\,(t-s)^{-\mu q/2}\int_{\mathcal{D}}e^{-\sigma\vert \frac{x}{\sqrt{t-s}}-y\vert ^2}\frac{\vert y\vert ^{(\lambda^--\mu-\gamma_2-1+\alpha+\beta_2)q}}{(\vert y\vert +1)^{(\lambda^--\gamma_2-1+\beta_2)q}}\cdot\frac{\rho(y)^{(1-\alpha-\beta_2)q}}{(\rho(y)+1)^{(1-\beta_2)q}}dy\\ \leq&\,C \left(\vert x\vert +\sqrt{t-s}\right)^{(-\mu+\alpha)q}\left(\rho(x)+\sqrt{t-s}\right)^{-\alpha q}. \end{align*} Hence, we have \begin{align*}
&\int_0^t\|I_2(t,s,x,\cdot)\|_{L_{q}(\cD,dy)}^r ds\\ \leq &\,C \int^t_0\left\{K_{2,1}(t-s,x)\cdot\left(\vert x\vert +\sqrt{t-s}\right)^{(-\mu+\alpha)q}\left(\rho(x)+\sqrt{t-s}\right)^{-\alpha q}\right\}^{r/q}ds\\ =&\,C \int^t_0\frac{\vert x\vert ^{(\lambda^+-1-\gamma_1+\beta_1)r}}{(\vert x\vert +\sqrt{t-s})^{(\lambda^+-1+\mu-\gamma_1-\alpha+\beta_1)r}}\cdot\frac{\rho(x)^{(1-\beta_1)r}}{(\rho(x)+\sqrt{t-s})^{(\alpha+1-\beta_1)r}}ds\,. \end{align*} Moreover, since \eqref{inequality gamma beta3} also gives $\left(\lambda^++\mu-\gamma_1\right)r>2$ and $\left(\alpha+1-\beta_1\right)r>2$, using Lemma~\ref{lemma3.1} we again obtain \begin{align*}
\big\|\|I_2(t,\cdot,x,\cdot)\|_{L_{q}(\cD,dy)}\big\|_{L_{r}((0,t),ds)}&=\left(\int_0^t\|I_2\|_{L_{q}(\cD,dy)}^r ds\right)^{1/r}\\ &\leq C\vert x\vert ^{-\mu+\alpha}\rho(x)^{-\alpha+1-2/p}. \end{align*}
{\bf 4}. Now, by \eqref{eqn 8.7.8} and \eqref{I_2} we have \begin{align*} &\quad\bE\,\big\vert \vert x\vert ^{\mu-\alpha}\rho(x)^{\alpha-1}u(t,x)\big\vert ^p\\ &\leq C\big(\vert x\vert ^{\mu-\alpha}\rho(x)^{\alpha-1}\big)^p \cdot \bE\,\int_0^t\int_{\cD}I_1^p(t,s,x,y) dy\,ds\cdot \big(\vert x\vert ^{-\mu+\alpha}\rho(x)^{-\alpha+1-2/p}\big)^p\\ &= C\,\rho(x)^{-2}\;\bE\,\int_0^t\int_{\cD}I_1^p(t,s,x,y) dy\,ds. \end{align*} Therefore, integrating with respect to $x$ and $t$, using the Fubini theorem, and recalling \eqref{I_1^p}, we have \begin{align} \nonumber
\bE\,\|\rho_{\circ}^{\mu-\alpha}\rho^{\alpha-1}u\|^p_{L_p(\Omega_T\times\mathcal{D})}&\leq C \,\bE\int_0^T\int_{\mathcal{D}}\int^t_0\int_{\cD}\vert \rho(x)\vert ^{-2}I_1^p\,dyds\;dxdt\\ &=C\,\bE\,\int_0^T\int_{\mathcal{D}}I_3(s,y)\cdot \vert h(s,y)\vert _{\ell_2}^pdyds, \label{eqn 8.11.31} \end{align} where \begin{align*} I_3(s,y):=\int^T_s\int_{\mathcal{D}}\frac{1}{(t-s)^{d/2}}e^{-\sigma\frac{\vert x-y\vert ^2}{t-s}}K_{1,1}(t-s,x)K_{1,2}(t-s,y)\rho(x)^{-2}\,dxdt. \end{align*} Since \eqref{inequality gamma beta3} also implies $\gamma_1p-2>-d$ and $\beta_1p-2>-1$, by the change of variables $t-s\to t$ followed by $x/\sqrt{t}\to x$ and Lemma~\ref{lemma3.1} (ii), we have \begin{align*} I_3(s,y)&=\int^T_s\frac{1}{(t-s)^{d/2}}K_{1,2}(t-s,y)\left(\int_{\mathcal{D}}e^{-\sigma\frac{\vert x-y\vert ^2}{t-s}}K_{1,1}(t-s,x)\rho(x)^{-2}\,dx\right)dt\\ &\leq \int^{\infty}_0\frac{1}{t}K_{1,2}(t,y)\left(\int_{\mathcal{D}}\frac{\vert x\vert ^{(\gamma_1-\beta_1)p}}{(\vert x\vert +1)^{(\gamma_1-\beta_1)p}}\frac{\rho(x)^{\beta_1p-2}}{(\rho(x)+1)^{\beta_1p}}e^{-\sigma\vert x-\frac{y}{\sqrt{t}}\vert ^2}\,dx\right)dt\\ &\leq C\int_0^{\infty}K_{1,2}(t,y)\left(\rho(y)+\sqrt{t}\right)^{-2}dt\\ &= C\int_0^{\infty}\frac{\vert y\vert ^{(\gamma_2-\beta_2)p}}{\left(\vert y\vert +\sqrt{t}\right)^{(\gamma_2-\beta_2)p}}\cdot\frac{\rho(y)^{\beta_2p}}{\left(\rho(y)+\sqrt{t}\right)^{\beta_2p+2}}\,dt. \end{align*} Hence, by Lemma~\ref{lemma3.1} (i) with the conditions $\gamma_2p>0$ and $\beta_2p> 0$, we finally get \begin{align*} I_3(s,y)\leq C(\cM,d,p,\theta,\Theta, \nu_1,\nu_2). \end{align*} This and \eqref{eqn 8.11.31} lead to \eqref{main inequality 2} and the lemma is proved.
{\bf 5}. Again, the last part of the claim related to the range of $\theta$ holds for the same reason as explained in Step 5 of the proof of Lemma \ref{main est3}. \end{proof}
\mysection{Proof of Theorems \ref{main result} and \ref{main result-random}}\label{sec:main proofs} In this section we prove Theorems \ref{main result} and \ref{main result-random}, following the strategy below:
{\bf 1}. \emph{A priori estimate and uniqueness}:
\begin{itemize}
\item[-]
In Lemma \ref{regularity.induction} below, we first prove that for any solution $u\in \cK^{\gamma+2}_{p,\theta,\Theta}(\cD,T)$ to equation \eqref{stochastic parabolic equation} equipped with the general operator $\cL=\sum_{i,j=1}^d a^{ij}(\omega,t)D_{ij}$, we have \begin{eqnarray}
& \|u\|_{\cK^{\gamma+2}_{p,\theta,\Theta}(\cD,T)}
\leq C \big( \|u\|_{\bL_{p,\theta-p,\Theta-p}(\cD,T)} +\text{norms of the free terms} \big). \label{eqn 8.18.21} \end{eqnarray}
\item[-] If $\cL$ is non-random, we estimate $\|u\|_{\bL_{p,\theta-p,\Theta-p}(\cD,T)}$ based on Lemma \ref{main est}.
\item[-] To treat the SPDE with random coefficients, we introduce an SPDE having non-random coefficients and the same free terms $f^0$, $\tbf$, $g$, $u_0$. Then we prove the a priori estimate for the original SPDE based on the fact that the difference between the new SPDE and the original SPDE becomes a PDE (with random coefficients).
\item[-] The uniqueness of the solution to the original SPDE follows from the uniqueness result for PDEs.
\end{itemize}
{\bf 2}. \emph{The existence}:
\begin{itemize}
\item[-] If the coefficients of $\cL$ are non-random, we use the representation formula.
\item[-] For the general case, we use the method of continuity with the help of the a priori estimate.
\end{itemize}
Now we start our proofs. The following lemma is what we meant in \eqref{eqn 8.18.21}. We emphasize that the lemma holds for any $\theta,\Theta\in \bR$ and the condition $\partial \cM\in C^2$ is not needed in the proof.
\begin{lemma}\label{regularity.induction} Let $p\in [2,\infty)$, $\gamma, \mu, \theta, \Theta \in\bR$, $\mu<\gamma$, and the diffusion coefficients $a^{ij}=a^{ij}(\omega,t)$ satisfy Assumption \ref{ass coeff}. Assume that $f^0\in\bK^{\gamma}_{p,\theta+p,\Theta+p}(\cD,T)$, $\tbf=(f^1,\cdots,f^d) \in\bK^{\gamma+1}_{p,\theta,\Theta}(\cD,T,d)$, $g\in\bK^{\gamma+1}_{p,\theta,\Theta}(\cD,T, \ell_2)$, $u(0,\cdot)\in \bU^{\gamma+2}_{p,\theta,\Theta}(\cD)$, and $u\in\bK^{\mu+2}_{p,\theta-p,\Theta-p}(\cD,T)$ satisfies \begin{align} du=(\cL u+f^0+\sum_{i=1}^d f^i_{x^i})\,dt+\sum_{k=1}^{\infty} g^kdw^k_t,\quad t\in(0,T] \label{eqn 8.19.1} \end{align}
in the sense of distributions on $\cD$. Then $u\in\bK^{\gamma+2}_{p,\theta-p,\Theta-p}(\cD,T)$, hence $ u\in\cK^{\gamma+2}_{p,\theta,\Theta}(\cD,T)$, and the estimate \begin{align*}
\|u\|_{\bK^{\gamma+2}_{p,\theta-p,\Theta-p}(\cD,T)}
&\leq C\Big(\|u\|_{\bK^{\mu+2}_{p,\theta-p,\Theta-p}(\cD,T)}+\|f^0\|_{\bK^{\gamma}_{p,\theta+p,\Theta+p}(\cD,T)}\nonumber\\
&\quad \quad \quad +\|\tbf\|_{\bK^{\gamma+1}_{p,\theta,\Theta}(\cD,T,d)}+\|g\|_{\bK^{\gamma+1}_{p,\theta,\Theta}(\cD,T,\ell_2)}+\|u(0,\cdot)\|_{\bU^{\gamma+2}_{p,\theta,\Theta}(\cD)} \Big) \end{align*} holds with $C=C(\cM,d,p,\theta,\Theta,\nu_1,\nu_2)$. \end{lemma}
The proof of Lemma \ref{regularity.induction} is based on the following result on $\bR^d$.
\begin{lemma}
\label{lemma entire}
Let $p\in[2,\infty)$, $\gamma \in \bR$, and Assumption \ref{ass coeff} hold. Assume $f\in \bH^{\gamma}_p(T)$, $g\in \bH^{\gamma+1}_p(T,\ell_2)$, $u(0,\cdot)\in L_p(\Omega;H^{\gamma+2-2/p}_p)$, and $u\in \bH^{\gamma+1}_p(T)$ satisfies \begin{align} du=(\cL u+f)\,dt+\sum^{\infty}_{k=1}g^kdw^k_t,\quad t\in(0,T]\nonumber \end{align}
in the sense of distributions on the whole space $\bR^d$. Then $u\in \bH^{\gamma+2}_p(T)$ and
\begin{eqnarray}
\|u\|_{\bH^{\gamma+2}_p(T)} &\leq& C\Big(\|u\|_{\bH^{\gamma+1}_p(T)}\nonumber\\
&&\quad+\|f\|_{\bH^{\gamma}_p(T)}
+\|g\|_{\bH^{\gamma+1}_p(T,\ell_2)}+\|u(0,\cdot)\|_{L_p(\Omega;H^{\gamma+2-2/p}_p)}\Big),\label{whole space estimate}
\end{eqnarray}
where $C=C(d,p,\nu_1,\nu_2)$ is independent of $T$.
\end{lemma}
\begin{proof} {\bf 1}. First, we consider the case $u(0,\cdot) \equiv 0$. Then, by e.g. \cite[Theorem 4.10]{Krylov 1999-4}, $u\in \bH^{\gamma+2}_p(T)$ and
$$
\|u_{xx}\|_{\bH^{\gamma}_p(T)}\leq C(d,p,\nu_1,\nu_2) (\|f\|_{\bH^{\gamma}_p(T)}+\|g\|_{\bH^{\gamma+1}_p(T,\ell_2)}).
$$
This and the inequality
$$
\|u\|_{\bH^{\gamma+2}_p(T)}\leq C\big(\|u_{xx}\|_{\bH^{\gamma}_p(T)}+\|u\|_{\bH^{\gamma}_p(T)}\big),
$$
together with the inequality $\|u\|_{\bH^{\gamma}_p(T)} \le \|u\|_{\bH^{\gamma+1}_p(T)}$, which holds by a basic property of the space of Bessel potentials,
yield the claim of the lemma in this case.
{\bf 2}. For the general case $u(0,\cdot)\not\equiv 0$, we use the solution $v=v(\omega,t,x)$ to the equation
$$
dv=\cL v \,dt, \quad t\in(0,T]
$$
with $v(\omega,0,\cdot)=u(\omega,0,\cdot)$ for all $\omega\in\Omega$ (see \cite[Theorem 5.2]{Krylov 1999-4}).
From the classical theory of PDEs, applied for each $\omega$, we have
$$
\|v\|_{\bH^{\gamma+2}_p(T)}\leq C \|u_0\|_{L_p(\Omega;H^{\gamma+2-2/p}_p)}.
$$
Then we can apply Step 1 to the function $u-v$, which has zero initial condition, and we obtain estimate \eqref{whole space estimate} for $u$ by the triangle inequality.
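In more detail, the function $u-v$ has zero initial condition and satisfies the equation with the same free terms $f$ and $g$, so Step 1 applies to it and, by the triangle inequality,
\begin{align*}
\|u\|_{\bH^{\gamma+2}_p(T)}&\le \|u-v\|_{\bH^{\gamma+2}_p(T)}+\|v\|_{\bH^{\gamma+2}_p(T)}\\
&\le C\big(\|u-v\|_{\bH^{\gamma+1}_p(T)}+\|f\|_{\bH^{\gamma}_p(T)}+\|g\|_{\bH^{\gamma+1}_p(T,\ell_2)}\big)+C\|u_0\|_{L_p(\Omega;H^{\gamma+2-2/p}_p)}.
\end{align*}
Since $\|u-v\|_{\bH^{\gamma+1}_p(T)}\le \|u\|_{\bH^{\gamma+1}_p(T)}+\|v\|_{\bH^{\gamma+2}_p(T)}\le \|u\|_{\bH^{\gamma+1}_p(T)}+C\|u_0\|_{L_p(\Omega;H^{\gamma+2-2/p}_p)}$, this gives \eqref{whole space estimate}.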
\end{proof}
\begin{proof}[\textbf{Proof of Lemma \ref{regularity.induction}}] We first note that we only need to consider the case $\mu=\gamma-1$. Indeed, suppose that the lemma holds true if $\mu=\gamma-1$. Now let $\mu=\gamma-n$, $n\in \bN$. Then applying the result for $\mu'=\gamma-k$ and $\gamma'=\mu'+1$ with $k=n,n-1,\cdots, 1$ in order, we get the claim when $\mu=\gamma-n$.
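Schematically, each step of this iteration uses only that the free terms have regularity of order at least $\gamma'=\mu'+1$, which holds since $\gamma'\le \gamma$, and it upgrades the regularity of $u$ by one:
\begin{equation*}
u\in\bK^{\gamma-n+2}_{p,\theta-p,\Theta-p}(\cD,T)\;\Longrightarrow\;u\in\bK^{\gamma-n+3}_{p,\theta-p,\Theta-p}(\cD,T)\;\Longrightarrow\;\cdots\;\Longrightarrow\;u\in\bK^{\gamma+2}_{p,\theta-p,\Theta-p}(\cD,T),
\end{equation*}
and the corresponding estimates concatenate into the estimate claimed in the lemma.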
Now suppose that the difference between $\gamma$ and $\mu$ is not an integer, i.e. $\gamma-\mu=n+\delta$, $n=0,1,2,\cdots$ and $\delta\in (0,1)$. Then, since $\mu>\gamma-(n+1)=:\mu'$ and $\|\cdot \|_{\bK^{\mu'+2}_{p,\theta-p,\Theta-p}(\cD,T)}\le \|\cdot \|_{\bK^{\mu+2}_{p,\theta-p,\Theta-p}(\cD,T)}$, we conclude that our assumption holds for $\mu'$, that is, $u\in \bK^{\mu'+2}_{p,\theta-p,\Theta-p}(\cD,T)$. Therefore, the case $\gamma-\mu \not\in \bN$ is also covered by what we just discussed.
Now we prove the lemma when $\mu=\gamma-1$, i.e. $u\in\bK^{\gamma+1}_{p,\theta-p,\Theta-p}(\cD,T)$.
As usual, we omit the argument $\omega$ for the simplicity of presentation.
{\bf 1}. For $u\in\bK^{\gamma+1}_{p,\theta-p,\Theta-p}(\cD,T)$, put $$\xi(x)=\vert x\vert^{(\theta-\Theta)/p},\quad v:=\xi u, \quad f:=f^0+\sum_{i=1}^{d}f^i_{x^i},\quad v_0:=\xi u_0. $$ Using Definition \ref{defn 8.19}, Definition \ref{defn 8.28}, and the change of variables $t\to e^{2n}t$, we have \begin{eqnarray}
&&\|u\|^p_{\bK^{\gamma+2}_{p,\theta-p,\Theta-p}(\cD,T)}=\|v\|^p_{\bH^{\gamma+2}_{p,\Theta-p}(\cD,T)} \nonumber\\
&&= \sum_{n\in \bZ} e^{n(\Theta-p)}\|\zeta(e^{-n}\psi(e^n\cdot))v(\cdot,e^n\cdot)\|^p_{\bH^{\gamma+2}_p(T)} \nonumber\\
&&= \sum_{n\in \bZ} e^{n(\Theta-p+2)}\|\zeta(e^{-n}\psi(e^n\cdot))v(e^{2n}\cdot,e^n\cdot)\|^p_{\bH^{\gamma+2}_p(e^{-2n}T)}. \label{eqn 4.23.1}
\end{eqnarray}
For each $n\in \bZ$, we denote $$ v_n(t,x):=\zeta(e^{-n}\psi(e^nx))v(e^{2n}t,e^nx), \quad
v_{0,n}(x)=\zeta(e^{-n}\psi(e^nx))v_0(e^nx).
$$ Then using equation \eqref{eqn 8.19.1} and the product rule of differentiation, one can easily check that $v_n$ satisfies
$$
d v_n=(\cL_n v_n +f_n)dt+ \sum_{k=1}^{\infty} g^k_n dw^{n,k}_t, \quad t\in(0,e^{-2n}T]
$$
in the sense of distributions on $\bR^d$ with the initial condition $v_{n}(0,\cdot)=v_{0,n}(\cdot)$,
where
$$
\cL_n:=\sum_{i,j} a^{ij}_n(t)D_{ij},\quad a^{ij}_n(t):=a^{ij}(e^{2n}t),
$$
$$
g^k_n(t,x):=e^{n} \zeta(e^{-n}\psi(e^nx))\xi(e^nx) g^k(e^{2n}t,e^nx), \qquad w^{n,k}_t:=e^{-n}w^k_{e^{2n}t},
$$
and, with Einstein's summation convention with respect to $i,j$,
\begin{eqnarray*}
f_n(t,x)&:=&\quad e^{2n}\zeta(e^{-n}\psi(e^nx))\xi(e^nx) f(e^{2n}t,e^nx) \\
&&+e^{n} a^{ij}_n(t)D_iu(e^{2n}t,e^nx) \zeta'(e^{-n}\psi(e^nx)) D_j\psi(e^nx) \xi(e^nx) \\
&&+e^{2n}a^{ij}_n(t)D_iu(e^{2n}t,e^nx) \zeta(e^{-n}\psi(e^nx)) D_j \xi(e^nx) \\
&&+e^na^{ij}_n(t)u(e^{2n}t,e^nx)\zeta'(e^{-n}\psi(e^nx)) D_i\psi(e^nx)D_j\xi(e^nx)\\
&&+e^{2n}a^{ij}_n(t)u(e^{2n}t,e^nx)\zeta(e^{-n}\psi(e^nx))D_{ij}\xi(e^nx) \\
&&+a^{ij}_n(t)u(e^{2n}t,e^nx)\zeta''(e^{-n}\psi(e^nx))D_i\psi(e^nx) D_j\psi(e^nx) \xi(e^nx)\\
&&+ e^na^{ij}_n(t)u(e^{2n}t,e^nx)\zeta'(e^{-n}\psi(e^nx)) D_{ij}\psi(e^nx)\\
&=:& \sum_{l=1}^7 f^{l}_n(t,x). \end{eqnarray*} Here, $\zeta'$ and $\zeta''$ denote the first and second derivative of $\zeta$, respectively. We note that for each $n\in \bZ$, the operator $\cL_n$ still satisfies the uniform parabolicity condition \eqref{uniform parabolicity} and $\{w^{n,k}_t: k\in \bN\}$ is a sequence of independent Brownian motions. Hence,
we can apply Lemma \ref{lemma entire} and from \eqref{eqn 4.23.1} we get
\begin{eqnarray}
\nonumber
\|u\|^p_{\bK^{\gamma+2}_{p,\theta-p,\Theta-p}(\cD,T)}
&\leq& C \sum_{n\in \bZ} e^{n(\Theta-p+2)}\|v_n\|^p_{\bH^{\gamma+1}_p(e^{-2n}T)}\\
&&+C \sum_{l=1}^7 \sum_{n\in \bZ} e^{n(\Theta-p+2)}\|f^l_n\|^p_{\bH^{\gamma}_p(e^{-2n}T)} \nonumber\\
&&+C\sum_{n\in \bZ} e^{n(\Theta-p+2)}\|g_n\|^p_{\bH^{\gamma+1}_p(e^{-2n}T, \ell_2)}\nonumber\\
&&+ C \sum_{n\in \bZ} e^{n(\Theta-p+2)}\|v_{0,n}\|^p_{L_p(\Omega;H^{\gamma+2-2/p}_p)}\label{eqn 8.19.21}
\end{eqnarray} provided that \begin{equation}\label{in fact true}
v_n\in \bH^{\gamma+1}_p(e^{-2n}T), \quad f^l_n \in \bH^{\gamma}_p(e^{-2n}T),\quad
g_n \in \bH^{\gamma+1}_p(e^{-2n}T,\ell_2), \quad (l=1,\ldots,7).
\end{equation} It turns out that the claims in \eqref{in fact true} hold true. Indeed, the change of variable $e^{2n}t \to t$ and Definition \ref{defn 8.19} yield
\begin{eqnarray}
&& \sum_{n\in \bZ} e^{n(\Theta-p+2)}\|v_n\|^p_{\bH^{\gamma+1}_p(e^{-2n}T)} \nonumber\\
&&= \sum_{n\in \bZ} e^{n(\Theta-p)}\| \zeta(e^{-n}\psi(e^n\cdot))v(\cdot,e^n\cdot) \|^p_{\bH^{\gamma+1}_p(T)}
= \|u\|^p_{\bK^{\gamma+1}_{p,\theta-p,\Theta-p}(\cD,T)} \label{eqn 8.19.31}
\end{eqnarray} and
\begin{eqnarray}
&& \sum_{n\in \bZ} e^{n(\Theta-p+2)}\|g_n\|^p_{\bH^{\gamma+1}_p(e^{-2n}T,\ell_2)} \nonumber \\
&&= \sum_{n\in \bZ} e^{n\Theta}\|\zeta(e^{-n}\psi(e^n\cdot))\xi(e^n\cdot)g(\cdot,e^n\cdot) \|^p_{\bH^{\gamma+1}_p(T,\ell_2)}= \|g\|^p_{\bK^{\gamma+1}_{p,\theta,\Theta}(\cD,T,\ell_2)}. \label{eqn 8.19.41}
\end{eqnarray} In particular, $$
v_n \in \bH^{\gamma+1}_p(e^{-2n}T), \quad
g_n \in \bH^{\gamma+1}_p(e^{-2n}T, \ell_2), \quad \forall\, n\in \bZ.
$$
Next, we show that the $f^l_n$ belong to $\bH^{\gamma}_p(e^{-2n}T)$ as follows. For $l=1$, by Definition \ref{defn 8.19} and the change of variables $e^{2n}t \to t$, we have
\begin{eqnarray*}
&& \quad\sum_{n\in \bZ} e^{n(\Theta-p+2)}\|f^1_n\|^p_{\bH^{\gamma}_p(e^{-2n}T)}\\
&& =
\sum_{n\in \bZ} e^{n(\Theta+p)}\|\zeta(e^{-n}\psi(e^n\cdot))\xi(e^n\cdot)f(\cdot,e^n\cdot) \|^p_{\bH^{\gamma}_p(T)}
= \|f\|^p_{\bK^{\gamma}_{p,\theta+p,\Theta+p}(\cD,T)}.
\end{eqnarray*} For $l=2$, by Definition \ref{defn 8.19} and \eqref{eqn 4.24.5}, we get \begin{eqnarray*}
&& \quad\sum_{n\in \bZ} e^{n(\Theta-p+2)}\|f^2_n\|^p_{\bH^{\gamma}_p(e^{-2n}T)}\\
&&\leq C
\sum_{n\in \bZ} \sum_{i,j}e^{n\Theta}\|D_iu(\cdot,e^n\cdot)\zeta'(e^{-n}\psi(e^n\cdot))\xi(e^n \cdot) D_j\psi(e^n\cdot)\|^p_{\bH^{\gamma}_p(T)} \\
&&\leq C \|\psi_x \xi u_x \|^p_{\bH^{\gamma}_{p,\Theta}(\cD,T)}=C\|\psi_x u_x \|^p_{\bK^{\gamma}_{p,\theta,\Theta}(\cD,T)} \\
&&\leq
C \|u_x\|^p_{\bK^{\gamma}_{p,\theta, \Theta}(\cD,T)}\leq
C \|u\|^p_{\bK^{\gamma+1}_{p,\theta-p, \Theta-p}(\cD,T)},
\end{eqnarray*} where the last two inequalities are due to \eqref{eqn 8.9.1}, \eqref{eqn 8.19.11}, and \eqref{eqn 4.16.1}. For $l=3$, by definitions of norms, we have
\begin{eqnarray}
&& \sum_{n\in \bZ} e^{n(\Theta-p+2)}\|f^3_n\|^p_{\bH^{\gamma}_p(e^{-2n}T)} \nonumber \\
& \leq& C \sum_{n\in \bZ} \sum_{i,j}e^{n(\Theta+p)}\|D_iu(\cdot,e^n\cdot) \zeta(e^{-n}\psi(e^n\cdot)) D_j\xi(e^n\cdot)\|^p_{\bH^{\gamma}_p(T)} \nonumber \\
&=&C
\|u_x \xi_x\|^p_{\bH^{\gamma}_{p,\Theta+p}(\cD,T)}
= C
\|\xi \xi^{-1}\xi_xu_x\|^p_{\bH^{\gamma}_{p,\Theta+p}(\cD,T)} \nonumber \\
&=&C
\|\xi^{-1}\xi_x u_x \|^p_{\bK^{\gamma}_{p,\theta+p,\Theta+p}(\cD,T)} \leq C\|\psi \xi^{-1}\xi_x u_x\|^p_{\bK^{\gamma}_{p,\theta,\Theta}(\cD,T)}, \label{eqn 8.20.1}
\end{eqnarray}
where the last inequality is due to \eqref{eqn 8.19.81}. Now we note that for any $n\in \bN$, $$ \vert\psi \xi^{-1}\xi_x\vert ^{(0)}_n+\vert \psi^2 \xi^{-1}\xi_{xx}\vert ^{(0)}_n\leq C(n,\xi)<\infty. $$
Thus, by \eqref{eqn 8.19.11} the last term in \eqref{eqn 8.20.1} is bounded by
$$
C\|u_x\|^p_{\bK^{\gamma}_{p,\theta,\Theta}(\cD,T)} \leq C \|u\|^p_{\bK^{\gamma+1}_{p,\theta-p,\Theta-p}(\cD,T)}.
$$
For the other values of $l$ one can argue similarly, and gathering the results we obtain:
\begin{eqnarray}
&& \sum_{l=1}^7 \sum_{n\in \bZ} e^{n(\Theta-p+2)}\|f^l_n\|^p_{\bH^{\gamma}_p(e^{-2n}T)} \nonumber \\
&\leq& C \|u\|^p_{\bK^{\gamma+1}_{p,\theta-p,\Theta-p}(\cD,T)}+
C\|f\|^p_{\bK^{\gamma}_{p,\theta+p,\Theta+p}(\cD,T)}. \label{eqn 8.19.51}
\end{eqnarray}
Consequently, coming back to \eqref{eqn 8.19.21} and using \eqref{eqn 8.19.31}, \eqref{eqn 8.19.41}, and \eqref{eqn 8.19.51}, we get \begin{align*}
&\quad\quad\quad\|u\|^p_{\bH^{\gamma+2}_{p,\theta-p,\Theta-p}(\cD,T)}\\
& \leq C\Big(\|u\|^p_{\bK^{\gamma+1}_{p,\theta-p,\Theta-p}(\cD,T)}
+\|f\|^p_{\bK^{\gamma}_{p,\theta+p,\Theta+p}(\cD,T)}
+\|g\|^p_{\bH^{\gamma+1}_{p,\theta,\Theta}(\cD,T,\ell_2)}
+\|u_0\|^p_{\bU^{\gamma+2}_{p,\theta,\Theta}(\cD)} \Big). \end{align*}
This yields the desired estimate since $\|f^i_{x^i}\|_{K^{\gamma}_{p,\theta+p,\Theta+p}(\cD)}\leq C \|f^i\|_{K^{\gamma+1}_{p,\theta,\Theta}(\cD)}$. The lemma is proved. \end{proof}
Now, we take the deterministic operator $L_0$ introduced in \eqref{8.29.1} and the Green's function $G$ associated with $L_0$. Also, recall the representation $\cR (u_0,f^0,\tbf,g)$ defined in \eqref{eqn 8.21.11} in connection with $L_0$.
\begin{lemma} \label{lemma rep}
If $f^0\in \bK^{\infty}_c(\cD,T)$, $\tbf\in \bK^{\infty}_c(\cD,T,d)$, $g\in \bK^{\infty}_c(\cD,T,\ell_2)$, and $u_0\in\bK^{\infty}_c(\cD)$, then $u=\cR (u_0,f^0,\tbf,g)$ belongs to $\cK^0_{p,\theta,\Theta}(\cD,T)$ and satisfies \begin{equation}
\label{eqn 8.21.13} du=\left(L_0u+f^0+\sum_{i=1}^d f^i_{x^i}\right)dt+\sum_{k=1}^{\infty}g^k dw^k_t, \quad t\in (0,T] \end{equation} in the sense of distributions on $\cD$ with $u(0,\cdot)=u_0$.
\end{lemma} \begin{proof} First, we note that \begin{eqnarray*}\cR (u_0,f^0,\tbf,g)&=&\cR (u_0,0,0,0)+\cR (0,f^0,\tbf,0)+\cR (0,0,0,g)\\ &=:&v_1+v_2+v_3. \end{eqnarray*}
By considering $v_1$ for each $\omega$ and by the definition of Green's function with the condition $u_0\in\bK^{\infty}_c(\cD)$, we note that $v_1$ satisfies $$ dv_1=L_0v_1dt,\quad t>0\,; \quad v_1(0,\cdot)=u_0(\cdot) $$
in the sense of distributions on $\cD$. Then Lemma \ref{main est3} and the facts that $\bK^{\infty}_c(\cD)$ is dense in $L_p(\Omega;K^0_{p,\theta+2-p,\Theta+2-p}(\cD))$ and $\|u_0\|_{\bU^0_{p,\theta,\Theta}(\cD)}\le \|u_0\|_{L_p(\Omega;K^0_{p,\theta+2-p,\Theta+2-p}(\cD))}$ confirm $v_1\in \cK^0_{p,\theta,\Theta}(\cD,T)$. Similarly, $v_2$ satisfies $$ dv_2=(L_0v_2+f^0+\sum_{i=1}^d f^i_{x^i})dt,\quad t>0 $$ in the sense of distributions on $\cD$ with zero initial condition and Lemma \ref{main est1} leads us to have $v_2\in \cK^0_{p,\theta,\Theta}(\cD,T)$.
The fact that $v_3$ satisfies
$$
dv_3=L_0v_3dt+\sum_{k=1}^{\infty}g^k dw^k_t, \quad t>0
$$
in the sense of distributions on $\cD$ with zero initial condition can be proved in the same way as in the proof of \cite[Lemma 3.11]{CKLL 2018}, which deals with the case $d=2$. Then Lemma \ref{main est2} gives $v_3\in \cK^0_{p,\theta,\Theta}(\cD,T)$.
Hence, $u=v_1+v_2+v_3$ satisfies the assertions and the lemma is proved. \end{proof}
\begin{proof}[\textbf{Proof of Theorem \ref{main result}}]\quad
Note that, since $\cL$ is non-random, we can take $L_0=\cL$ (see \eqref{8.29.1}).
{\bf 1}. \emph{ Existence and estimate \eqref{main estimate}} :
First, we assume that $f^0\in \bK^{\infty}_c(\cD,T)$, $\tbf\in \bK^{\infty}_c(\cD,T,d)$, $g\in \bK^{\infty}_c(\cD,T,\ell_2)$, and $u_0\in\bK^{\infty}_c(\cD)$. Then by Lemma \ref{lemma rep}, $u=\cR(u_0,f^0,\tbf,g) \in \cK^0_{p,\theta,\Theta}(\cD,T)$ satisfies equation \eqref{eqn 8.21.13} in the sense of distributions on $\cD$ with initial condition $u_0$. Then, we use Lemma \ref{regularity.induction} with $\mu=-2$. As $\gamma+2\ge 1$, Lemma \ref{main est} and Remark \ref{initial.p ge 2.} imply
$u\in \cK^{\gamma+2}_{p,\theta,\Theta}(\cD,T)$ and \eqref{main estimate}.
The general case can be easily handled by a standard approximation argument. Indeed, take $f^0_n\in \bK^{\infty}_c(\cD,T)$, $\tbf_n\in \bK^{\infty}_c(\cD,T,d)$, $g_n\in \bK^{\infty}_c(\cD,T,\ell_2)$, and $u_{0,n}\in\bK^{\infty}_c(\cD)$ such that $f^0_n \to f^0$, $\tbf_n \to \tbf$, $g_n \to g$, and $u_{0,n}\to u_{0}$, as $n\to \infty$, in the corresponding spaces. Now let $u_n:=\cR(u_{0,n},f^0_n, \tbf_n,g_n)$. Then, estimate \eqref{main estimate} applied to $u_n-u_m$ shows that $\{u_n\}$ is a Cauchy sequence in $\cK^{\gamma+2}_{p,\theta,\Theta}(\cD,T)$. Taking $u$ as the limit of $u_n$ in $\cK^{\gamma+2}_{p,\theta,\Theta}(\cD,T)$, we find that $u$ is a solution to equation \eqref{eqn 8.21.13}. Estimate \eqref{main estimate} for $u$ also follows from those of $u_n$.
{\bf 2}. \emph{Uniqueness} :
Let $u \in \cK^{\gamma+2}_{p,\theta,\Theta}(\cD,T)$ be a solution to equation \eqref{eqn 8.21.13} with $f^0\equiv 0$, $\tbf \equiv 0$, $g\equiv 0$, and $u_0\equiv 0$. Since $\gamma+2\geq 1$, $u$ at least belongs to $\bL_{p,\theta-p,\Theta-p}(\cD,T)$, and therefore, by Lemma \ref{regularity.induction}, we have $u\in \cK^2_{p,\theta,\Theta}(\cD,T)$ since all the data vanish. Hence, for almost all $\omega\in \Omega$, $u^{\omega}:=u(\omega,\cdot,\cdot)\in L_p((0,T]; K^{2}_{p,\theta-p,\Theta-p}(\cD))$, and satisfies $$ u^{\omega}_t=\cL u^{\omega}, \quad t\in (0,T]\quad ; \quad u^{\omega}(0,\cdot)=0. $$ Then, from the uniqueness result for the deterministic parabolic equation (see \cite[Theorem 2.12]{ConicPDE}), we conclude $u^{\omega}=0$ for almost all $\omega$. This proves uniqueness. \end{proof}
\begin{remark}\label{sol.representation} The approximation argument and uniqueness result in the above proof show that if $\cL$ is non-random, then the solution in Theorem \ref{main result} is given by the formula $$ u=\cR(u_0,f^0,\tbf,g), \quad \text{where}\quad \tbf=(f^1,\cdots,f^d). $$ \end{remark}
\begin{proof}[\textbf{Proof of Theorem \ref{main result-random}}]\quad
{\bf 1}. \emph{The a priori estimate} :
Having the method of continuity in mind, we consider the following operators. Denote $L_0=\nu_1 \Delta$, and for $\lambda \in [0,1]$ denote \begin{eqnarray*}
\cL_{\lambda}=(1-\lambda)L_0+\lambda \cL.
\end{eqnarray*}
Obviously, \begin{equation*}
\cL_{\lambda}(\omega,\cdot)\in \cT_{\nu_1,\nu_2}, \quad \forall \, \lambda\in [0,1], \, \omega\in \Omega. \end{equation*}
Now we prove that the a priori estimate \begin{eqnarray}
\|v\|_{\cK^{\gamma+2}_{p,\theta,\Theta}(\cD,T)}&\leq& C\Big(\|f^0\|_{\bK^{\gamma \vee 0}_{p,\theta+p,\Theta+p}(\cD,T)}+ \|\tbf\|_{\bK^{\gamma+1}_{p,\theta,\Theta}(\cD,T,d)} \nonumber \\
&&\quad\quad +\|g\|_{\bK^{\gamma+1}_{p,\theta,\Theta}(\cD,T,\ell_2)}+\|u_0\|_{\bU^{\gamma+2}_{p,\theta,\Theta}(\cD)}\Big) \label{the a priori} \end{eqnarray}
holds with $C=C(\cM,d,p,\gamma,\theta,\Theta,\nu_1,\nu_2)$, provided that
$v\in \cK^{\gamma+2}_{p,\theta,\Theta}(\cD,T)$ is a solution to the equation
\begin{equation}
\label{method}
dv=\left(\cL_{\lambda} v+f^0+\sum_{i=1}^d f^i_{x^i}\right)dt+\sum_{k=1}^{\infty}g^k dw^k_t, \quad t\in(0,T]\,\,\,; \quad v(0,\cdot)=u_0(\cdot).
\end{equation}
To prove \eqref{the a priori}, we take $u\in \cK^{\gamma+2}_{p,\theta,\Theta}(\cD,T)$ from Theorem \ref{main result}, which is the solution to equation \eqref{eqn 8.21.13} with the operator $L_0=\nu_1 \Delta$ and the initial condition $u(0,\cdot)=u_0$. Then $\bar{v}:=v-u\in \cK^{\gamma+2}_{p,\theta,\Theta}(\cD,T)$ satisfies
$$
\bar{v}_t=\cL_{\lambda} \bar{v}+\bar{f}=\cL_{\lambda}\bar{v}+\sum_{i=1}^d\bar{f}^i_{x^i}, \quad t\in(0,T]\quad;\quad \bar{v}(0,\cdot)=0
$$
where
$$\bar{f}:=(\cL_{\lambda}-L_0)u=\lambda\sum_{i=1}^d \left(\sum_{j=1}^d [a^{ij}(\omega,t)-\nu_1 \delta^{ij}]u_{x^j}\right)_{x^i}=:\sum_{i=1}^d \bar{f}^i_{x^i}.
$$
Note that for each fixed $\omega$, $\bar{v}(\omega,\cdot)$ satisfies a deterministic PDE with non-random operator $\cL_{\lambda}(\omega,\cdot)$ and non-random free terms $\bar{f}^i(\omega,\cdot)$. Hence, using the deterministic counterpart of Theorem \ref{main result} for each $\omega$, and then taking the expectation, we get
$$
\|v-u\|_{\cK^{\gamma+2}_{p,\theta,\Theta}(\cD,T)}= \|\bar{v}\|_{\cK^{\gamma+2}_{p,\theta,\Theta}(\cD,T)} \leq C \sum_{i=1}^d \|\bar{f}^i\|_{\bK^{\gamma+1}_{p,\theta,\Theta}(\cD,T)}\leq C\|u\|_{\bK^{\gamma+2}_{p,\theta-p,\Theta-p}(\cD,T)}.
$$ For the last inequality above we used \eqref{eqn 4.16.1}. This with estimate \eqref{main estimate} obtained for $u$ finally gives \eqref{the a priori}.
{\bf 2}. \emph{Existence, uniqueness and the estimate} :
Estimate \eqref{main estimate} and the uniqueness of the solution are direct consequences of the a priori estimate \eqref{the a priori}, for which the constant $C$ is independent of $\cL$ and $\lambda$. Thus we only need to prove the existence result.
Let $J$ denote the set of $\lambda\in [0,1]$ such that for any given $f^0,\tbf, g,u_0$ in their corresponding spaces, equation \eqref{method} with given $\lambda$ has a solution $v$ in $\cK^{\gamma+2}_{p,\theta,\Theta}(\cD,T)$. Then by Theorem \ref{main result}, $0\in J$. Hence, the method of continuity (see e.g. proof of \cite[Theorem 5.1]{Krylov 1999-4}) and a priori estimate \eqref{the a priori} together yield $J=[0,1]$, and in particular $1\in J$. This proves the existence result. The theorem is proved. \end{proof}
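\begin{remark}
For the reader's convenience, we sketch the mechanism behind the method of continuity used above; this is only an outline, and we refer to the proof of \cite[Theorem 5.1]{Krylov 1999-4} for details. Suppose $\lambda_0\in J$ and let $w\in \cK^{\gamma+2}_{p,\theta,\Theta}(\cD,T)$. Since the coefficients do not depend on $x$, we may write
$$
(\cL_{\lambda}-\cL_{\lambda_0})w=(\lambda-\lambda_0)\sum_{i=1}^d\Big(\sum_{j=1}^d[a^{ij}(\omega,t)-\nu_1\delta^{ij}]w_{x^j}\Big)_{x^i},
$$
and we define $v=\Phi(w)$ as the solution to equation \eqref{method} with $\lambda_0$ in place of $\lambda$ and with $(\cL_{\lambda}-\cL_{\lambda_0})w$ added to the free terms. As the constant in \eqref{the a priori} is independent of $\lambda$, the map $\Phi$ is a contraction on $\cK^{\gamma+2}_{p,\theta,\Theta}(\cD,T)$ whenever $\vert \lambda-\lambda_0\vert$ is sufficiently small, with smallness depending only on that constant, and its fixed point solves \eqref{method}. Hence $J$ contains an interval of fixed radius around each of its points, and $0\in J$ yields $J=[0,1]$.
\end{remark}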
In the next section, we use the result of Theorem \ref{main result-random} to study the regularity of SPDEs on polygonal domains in $\bR^2$. We also use the following result which helps us prove the existence of a solution on polygonal domains.
\begin{lemma}\label{global.uniqueness} For $j=1,\,2$, let $p_j\geq 2$ and $\theta_j,\Theta_j\in\bR$, and $d-1<\Theta_j<d-1+p_j$. Also let $\theta_j$ ($j=1,2$) satisfy \begin{equation*} p_j(1-\lambda^+_c)<\theta_j<p_j(d-1+\lambda^-_c) \quad \text{if $\cL$ is non-random}, \end{equation*} and $$ p_j(1-\lambda_{c}(\nu_1,\nu_2))<\theta_j<p_j(d-1+\lambda_{c}(\nu_1,\nu_2)) \quad \text{if $\cL$ is random}. $$
Then, if $u\in\cK^1_{p_1,\theta_1,\Theta_1}(\cD,T)$ is a solution to equation \eqref{stochastic parabolic equation} with the initial condition $u(0,\cdot)=u_0(\cdot)$ and $f^0, \tbf=(f^1,\cdots,f^d)$, $g$, $u_0$ satisfying \begin{align*} &f^0\in\bL_{p_j,\theta_j+p_j,\Theta_j+p_j}(\cD,T), \quad \tbf \in\bL_{p_j,\theta_j,\Theta_j}(\cD,T,d), \end{align*} $$ g\in \bL_{p_j,\theta_j,\Theta_j}(\cD,T,\ell_2), \quad u_0\in\bU^1_{p_j,\theta_j,\Theta_j}(\cD) $$ for both $j=1$ and $j=2$, then $u\in\cK^1_{p_2,\theta_2,\Theta_2}(\cD,T)$. \end{lemma}
\begin{proof} If $\cL$ is non-random, the lemma follows from Remark \ref{sol.representation}. In general, as before we fix a deterministic operator $L_0(t)=\sum_{i,j}\alpha^{ij}(t)D_{ij} \in \cT_{\nu_1,\nu_2}$ and set $v=\cR(u_0,f^0,\tbf,g)$. Then, since $L_0$ is non-random, by Remark \ref{sol.representation} \begin{equation}
\label{eqn 8.31.5} v\in \cK^1_{p_1,\theta_1,\Theta_1}(\cD,T) \cap \cK^1_{p_2,\theta_2,\Theta_2}(\cD,T). \end{equation} Put $\bar{u}_1:=u-v$. Then $\bar{u}=\bar{u}_1$ satisfies \begin{equation}
\label{eqn 8.23.1} d\bar{u}=\left[\cL \bar{u}+\sum_{i=1}^d \Big(\sum_{j=1}^d [\alpha^{ij}(t)-a^{ij}(\omega,t)]v_{x^j}\Big)_{x^i}\right]dt, \quad t\in(0,T]. \end{equation} Also, due to \eqref{eqn 8.31.5}, equation \eqref{eqn 8.23.1} has a solution $\bar{u}_2\in \cK^1_{p_2,\theta_2,\Theta_2}(\cD,T)$. Now note that for each fixed $\omega$, both $\bar{u}_1(\omega,\cdot,\cdot)$ and $\bar{u}_2(\omega,\cdot,\cdot)$ satisfy equation \eqref{eqn 8.23.1}, which we can consider as a deterministic equation with non-random operator. By the above result for non-random operator we conclude $$\bar{u}_1(\omega,\cdot,\cdot)=\bar{u}_2(\omega,\cdot,\cdot) $$ for almost all $\omega$. From this we conclude that both $v$ and $u-v$ are in $\cK^1_{p_2,\theta_2,\Theta_2}(\cD,T)$, and therefore the lemma is proved.
\end{proof}
\mysection{SPDE on polygonal domains}\label{sec:polygonal domains}
In this section, based on Theorem~\ref{main result-random}, we develop a regularity theory of the stochastic parabolic equations on polygonal domains in $\bR^2$. This development is an enhanced version of the corresponding result in \cite{CKL 2019+} in which $\cL=\Delta_x$ and $\Theta=d$. Our generalization is as follows: \begin{itemize}
\item{} $\Delta \quad \rightarrow \quad \cL=\sum_{i,j}a^{ij}(\omega,t)D_{ij}$; operator with (random) predictable coefficients
\item{} $\Theta=2 \quad \rightarrow \quad 1<\Theta<1+p$
\item{} The restriction on $\theta$ is weakened
\item{} Sobolev regularity with $\gamma\in \{-1,0,\cdots\}$\\ $ \quad \rightarrow \quad$ Sobolev and H\"older regularities with real number $ \gamma \geq -1$
\end{itemize}
Let $\cO\subset \bR^2$ be a bounded polygonal domain with a finite number of vertices $\{p_1,\ldots,p_M\}\subset \partial \cO$. For any $x\in\cO$, we denote \begin{align*}
\rho(x):=\rho_{\cO}(x):=d(x,\partial\cO). \end{align*} In the polygonal domain, the function of $x$ defined by $$
\min_{1\leq m\leq M}\vert x-p_m\vert
$$
will play the role of $\rho_{\circ, \cD}$, which is the distance to the vertex in an angular domain $\cD$. We first construct a smooth version of the function $\min_{1\leq m\leq M}\vert x-p_m\vert$ as follows. Consider the domain $V:=\bR^2\setminus \{p_1,\cdots,p_M\}$ and note that $$\rho_V(x):=d(x,\partial V)=\min_{1\leq m\leq M}\vert x-p_m\vert . $$
Then, applying \eqref{eqn 8.25.1} and \eqref{eqn 8.25.2} for $\rho_V$ and the domain $V$, we define $\psi_{V}$ and set $$ \rho_{\circ}=\rho_{\circ,\cO}:=\psi_{V}. $$ We can check that for any multi-index $\alpha$ and $\mu\in \bR$, $$ \rho_{\circ} \sim \min_{1\leq m\leq M}\vert x-p_m\vert , \quad \sup_{\cO}\big\vert \rho^{\vert \alpha\vert -\mu}_{\circ}D^{\alpha}\rho^{\mu}_{\circ}\big\vert <\infty. $$ On the other hand, we also choose a smooth function $\psi=\psi_{\cO}$ such that $\psi\sim \rho_{\cO}$ and satisfies \eqref{eqn 8.9.1} with $\rho_{\cO}$ in place of $\rho_{\cD}$.
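Let us indicate why the last display holds; this is a standard computation, which we include for completeness. By the chain rule (Fa\`a di Bruno's formula), $D^{\alpha}\rho^{\mu}_{\circ}$ is a finite sum of terms of the form
$$
c(\mu,k)\,\rho^{\mu-k}_{\circ}\,D^{\beta_1}\rho_{\circ}\cdots D^{\beta_k}\rho_{\circ}, \qquad \beta_1+\cdots+\beta_k=\alpha,\quad \vert \beta_i\vert \geq 1,
$$
and the analog of \eqref{eqn 8.9.1} for $\psi_V$ gives $\vert D^{\beta_i}\rho_{\circ}\vert \leq C\rho^{1-\vert \beta_i\vert }_{\circ}$. Hence each term is bounded by $C\rho^{\mu-\vert \alpha\vert }_{\circ}$, which is exactly the bound $\sup_{\cO}\vert \rho^{\vert \alpha\vert -\mu}_{\circ}D^{\alpha}\rho^{\mu}_{\circ}\vert <\infty$.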
Then, we recall the norms of the spaces $H^{\gamma}_{p,\Theta}(\cO)$ and $H^{\gamma}_{p,\Theta}(\cO; \ell_2)$ introduced in Definition \ref{defn 8.28}:
\begin{equation*}
\|f\|^p_{H^{\gamma}_{p,\Theta}(\cO)}:= \sum_{n\in \bZ} e^{n\Theta} \|\zeta(e^{-n}\psi(e^n\cdot))f(e^{n}\cdot)\|^p_{H^{\gamma}_p(\bR^d)},
\end{equation*}
\begin{equation*}
\|g\|^p_{H^{\gamma}_{p,\Theta}(\cO;\ell_2)}:= \sum_{n\in \bZ} e^{n\Theta} \|\zeta(e^{-n}\psi(e^n\cdot))g(e^{n}\cdot)\|^p_{H^{\gamma}_p(\bR^d;\ell_2)},
\end{equation*} where $\psi=\psi_{\cO}$. Using $\rho_{\circ,\cO}$ in place of
$\rho_{\circ,\cD}$, and following Definition \ref{defn 8.19}, we define the function spaces $$ K^{\gamma}_{p,\theta,\Theta}(\cO), \quad K^{\gamma}_{p,\theta,\Theta}(\cO;\bR^d), \quad K^{\gamma}_{p,\theta,\Theta}(\cO;\ell_2), $$ as well as the stochastic spaces $$\bK^{\gamma}_{p,\theta,\Theta}(\cO,T), \quad \bK^{\gamma}_{p,\theta,\Theta}(\cO,T,d ),\quad \bK^{\gamma}_{p,\theta,\Theta}(\cO,T,\ell_2), $$ $$ \cK^{\gamma+2}_{p,\theta,\Theta}(\cO,T), \quad \bK^{\infty}_c(\cO,T), \quad \bK^{\infty}_c(\cO,T,\ell_2), \quad \bK^{\infty}_c(\cO). $$ More specifically, we write $f\in K^{\gamma}_{p,\theta,\Theta}(\cO)$ if and only if $\rho^{(\theta-\Theta)/p}_{\circ} f\in H^{\gamma}_{p,\Theta}(\cO)$, and define
$$
\|f\|_{ K^{\gamma}_{p,\theta,\Theta}(\cO)} := \|\rho^{(\theta-\Theta)/p}_{\circ} f\|_{H^{\gamma}_{p,\Theta}(\cO)}.
$$ As in Section 2, if $\gamma\in \bN_0$, then we have \begin{equation} \label{eqn 8.25.8}
\|f\|^p_{ K^{\gamma}_{p,\theta,\Theta}(\cO)} \sim \sum_{\vert \alpha\vert \leq \gamma}\int_{\cO} \vert \rho^{\vert \alpha\vert }D^{\alpha}f\vert ^p \rho^{\theta-\Theta}_{\circ}\rho^{\Theta-d} dx. \end{equation} \begin{defn}\label{definition solution polygon} We write $u\in\cK^{\gamma+2}_{p,\theta,\Theta}(\cO,T)$ if
$u \in \bK^{\gamma+2}_{p,\theta-p,\Theta-p}(\cO,T)$ and there exist
$(\tilde{f}, \tilde{g}) \in\bK^{\gamma}_{p,\theta+p,\Theta+p}(\cO,T)\times \bK^{\gamma+1}_{p,\theta,\Theta}(\cO,T, \ell_2)$ and $u(0,\cdot)\in \bU^{\gamma+2}_{p,\theta,\Theta}(\cO)$ satisfying $$ du=\tilde{f}\,dt+\sum_k \tilde{g}^kdw^k_t,\quad t\in(0,T] $$ in the sense of distributions on $\cO$. The norm is defined by \begin{eqnarray*}
\|u\|_{\cK^{\gamma+2}_{p,\theta,\Theta}(\cO,T)}&:=&\|u\|_{\bK^{\gamma+2}_{p,\theta-p,\Theta-p}(\cO,T)}+\|\tilde{f}\|_{\bK^{\gamma}_{p,\theta+p,\Theta+p}(\cO,T)}
+\|\tilde{g}\|_{\bK^{\gamma+1}_{p,\theta,\Theta}(\cO,T,\ell_2)}\\
&&+\|u(0,\cdot)\|_{\bU^{\gamma+2}_{p,\theta,\Theta}(\cO)}. \end{eqnarray*} \end{defn}
\begin{thm}
\label{thm all}
With $\cD$ replaced by $\cO$, all the claims of Lemma \ref{property1}, Remark \ref{dense space}, Theorem \ref{banach}, Theorem \ref{embedding}, and Lemma \ref{regularity.induction} hold. \end{thm}
\begin{proof}
All of these claims in Section 2 are proved based on \eqref{eqn 8.10.14}, \eqref{eqn 8.10.1}, and some properties of the weighted Sobolev spaces $H^{\gamma}_{p,\Theta}(\cD)$ taken, e.g., from \cite{Lo1}. Since these properties in \cite{Lo1} hold true on arbitrary domains, exactly the same proofs of Section 2 work with $\cD$ replaced by $\cO$.
\end{proof}
\begin{remark} \label{remark 8.29} For the analog of Theorem \ref{embedding} in the case of polygonal domains we do not need the additional condition on the initial data. Indeed, since $\psi$ is bounded and $\beta>2/p$, Lemma \ref{property1} (iv) yields $$
\|\psi^{\beta-1}u(0,\cdot)\|_{L_p(\Omega;K^{\gamma+2-\beta}_{p,\theta,\Theta}(\cO))}\leq C \|\psi^{2/p-1}u(0,\cdot)\|_{L_p(\Omega;K^{\gamma+2-2/p}_{p,\theta,\Theta}(\cO))} \leq C \|u\|_{\cK_{p,\theta,\Theta}^{\gamma+2}}.
$$
\end{remark}
For $m=1,\ldots,M$, let $\kappa_m$ denote the interior angle at the vertex $p_m$, and denote \begin{equation*} \kappa_0:=\max_{1\leq m\leq M}\kappa_m. \end{equation*} Also, for each $m$, let $\cD_m$ denote the conic domain in $\bR^2$ such that $$ \cO \cap B_{\varepsilon}(p_m) \cap \{p_m+x: x\in \cD_m\} = \cO \cap B_{\varepsilon}(p_m) $$ for all sufficiently small $\varepsilon>0$. Denote $$\lambda^{\pm}_{c,\cL,\cO}:=\min_{m}\lambda^{\pm}_{c,\cL, \cD_m} \quad \text{if $\cL$ is non-random} $$ and $$\lambda_{c,\cO}(\nu_1,\nu_2):=\min_{m}\lambda_c(\nu_1,\nu_2,\cD_m)\quad \text{ if $\cL$ is random}. $$ In Theorem \ref{main result polygon} below, we pose the condition \begin{equation} \label{theta poly} p(1-\lambda^+_{c,\cL,\cO})<\theta< p(1+\lambda^-_{c,\cL,\cO}) \end{equation} if $\cL$ is non-random, and \begin{equation}
\label{theta poly2}
p(1-\lambda_{c,\cO}(\nu_1,\nu_2))<\theta <p(1+\lambda_{c,\cO}(\nu_1,\nu_2))
\end{equation}
if $\cL$ is random.
Here are our main results on polygonal domains.
\begin{thm}[SPDE on polygonal domains with random or non-random coefficients] \label{main result polygon} Let $p\in[2,\infty)$, $\gamma \geq -1$, and Assumption \ref{ass coeff} hold. Also assume that \begin{equation}
\label{theta application} 1<\Theta<p+1, \end{equation} and condition \eqref{theta poly} holds if $\cL$ is non-random, condition \eqref{theta poly2} holds if $\cL$ is random. Then for given $f^0\in\bK^{\gamma \vee 0}_{p,\theta+p,\Theta+p}(\mathcal{O},T)$, $\tbf=(f^1,\cdots,f^d) \in\bK^{\gamma +1}_{p,\theta,\Theta}(\mathcal{O},T,d)$, $g\in\bK^{\gamma+1}_{p,\theta,\Theta}(\mathcal{O},T,\ell_2)$, and $u_0\in\bU^{\gamma+2}_{p,\theta,\Theta}(\cO)$, the equation \begin{equation}\label{stochastic parabolic equation polygon} d u =\left(\cL u+f^0+\sum_{i=1}^d f^i_{x^i}\right)dt +\sum^{\infty}_{k=1} g^kdw_t^k,\quad t\in(0,T]\quad\,; \quad u(0,\cdot)=u_0 \end{equation} admits a unique solution $u$ in the class $\cK^{\gamma+2}_{p,\theta,\Theta}(\mathcal{O},T)$. Moreover, the estimate \begin{eqnarray*}
\|u\|_{\cK^{\gamma+2}_{p,\theta,\Theta}(\mathcal{O},T)}
&\leq& C\big(\|f^0\|_{\bK^{\gamma\vee 0}_{p,\theta+p,\Theta+p}(\mathcal{O},T)}
+\|\tbf\|_{\bK^{\gamma+1}_{p,\theta,\Theta}(\mathcal{O},T,d)}+\|g\|_{\bK^{\gamma+1}_{p,\theta,\Theta}(\mathcal{O},T,\ell_2)}\nonumber\\
&&\quad\quad+\|u_0\|_{\bU^{\gamma+2}_{p,\theta,\Theta}(\cO)}\big) \end{eqnarray*} holds with a constant $C=C(\mathcal{O},p,\gamma,\nu_1,\nu_2,\theta,\Theta,T)$. \end{thm}
\begin{remark} Since $d=2$ in this section, the range of $\Theta$ in \eqref{theta application} coincides with $(d-1,d-1+p)$ which we have kept throughout this article. \end{remark}
\begin{thm}[H\"older estimates on polygonal domains] \label{cor 8.23} Let $p\geq 2$, $\theta, \Theta\in \bR$ and $u\in \cK^{\gamma+2}_{p,\theta,\Theta}(\cO,T)$.
(i) If $\gamma+2-\frac{d}{p}\geq n+\delta$, where $n\in \bN_0$ and $\delta\in (0,1)$, then for any $k\leq n$, $$ \vert \rho^{k-1+\frac{\Theta}{p}} \rho^{(\theta-\Theta)/p}_{\circ} D^{k}u(\omega,t,\cdot)\vert _{\cC(\cO)}+
[\rho^{n-1+\delta+\frac{\Theta}{p}} \rho^{(\theta-\Theta)/p}_{\circ} D^{n} u(\omega,t,\cdot)]_{\cC^{\delta}(\cO)}<\infty $$ holds for almost all $(\omega,t)$. In particular, \begin{equation*}
\vert u(\omega,t,x)\vert \leq C(\omega,t) \rho^{1-\frac{\Theta}{p}}(x) \rho^{(-\theta+\Theta)/p}_{\circ}(x). \end{equation*}
(ii) Let $$ 2/p<\alpha<\beta\leq 1, \quad \gamma+2-\beta-d/p \geq m+\varepsilon, $$ where $m\in \bN_0$ and $\varepsilon\in (0,1]$. Put $\eta=\beta-1+\Theta/p$. Then for any $k\leq m$, \begin{eqnarray*}
&&\bE \sup_{t,s\leq T} \frac {\big\vert \rho^{\eta+k} \rho^{(\theta-\Theta)/p}_{\circ} \left(D^ku(t)-D^ku(s)\right)\big\vert ^p_{\cC(\cO)}} {\vert t-s\vert ^{p\alpha/2-1}}<\infty, \\ && \bE \sup_{t,s\leq T} \frac {\left[\rho^{\eta+m+\varepsilon} \rho^{(\theta-\Theta)/p}_{\circ} \left(D^mu(t)-D^mu(s)\right)\right]^p_{\cC^{\varepsilon}(\cO)}} {\vert t-s\vert ^{p\alpha/2-1}} <\infty. \end{eqnarray*}
\end{thm} \begin{proof} The claims follow from the corresponding results of \eqref{eqn 8.21.1} and \eqref{eqn 8.10.10} mentioned in Theorem \ref{thm all}. \end{proof}
For the proof of Theorem \ref{main result polygon}, we first prove the following estimate.
\begin{lemma}[A priori estimate] \label{a priori p} Let the assumptions of Theorem \ref{main result polygon} hold. Then there exists a constant $C=C(d,p,\theta,\Theta,\nu_1,\nu_2,\cO,T)$ such that the a priori estimate \begin{eqnarray}
\|u\|_{\cK^{\gamma+2}_{p,\theta,\Theta}(\mathcal{O},T)}&\leq& C\big(\|f^0\|_{\bK^{\gamma\vee 0}_{p,\theta+p,\Theta+p}(\mathcal{O},T)}
+\|\tbf\|_{\bK^{\gamma+1}_{p,\theta,\Theta}(\mathcal{O},T,d)}+\|g\|_{\bK^{\gamma+1}_{p,\theta,\Theta}(\mathcal{O},T,\ell_2)}\nonumber\\
&&\quad\quad+ \|u_0\|_{\bU^{\gamma+2}_{p,\theta,\Theta}(\cO)}\big)\label{polygon a priori} \end{eqnarray} holds provided that a solution $u\in \cK^{\gamma+2}_{p,\theta,\Theta}(\cO,T)$ to equation \eqref{stochastic parabolic equation polygon} exists. \end{lemma}
\begin{proof} First, choose a sufficiently small constant $r>0$ such that each $B_{3r}(p_m)$ contains only one vertex $p_m$ and intersects with only two edges for each $m=1,\,\ldots,\,M$. Then we choose a function $\xi\in\cC_c^{\infty}(\bR^2)$ satisfying \begin{align*} 1_{B_r(0)}(x)\leq \xi(x)\leq 1_{B_{2r}(0)}(x)\quad\text{for all }x\in\bR^2. \end{align*}
Let $\xi_m(x):=\xi(x-p_m)$ and $\xi_0:=1-\sum_{m=1}^M\xi_m$. By the choice of $r$ and $\xi$, the supports of $\xi_m$, $m=1,\ldots,M$, are mutually disjoint, and hence $0\leq \xi_0\leq 1$. Moreover, $\xi_0(x)=1$ if $\rho_{V}(x)>2r$.
For $m=1,\ldots,M$, let $\cD_m$ be the angular (conic) domain centered at $p_m$ with interior angle $\kappa_m$ such that $\cD_m\cap B_{3r}(p_m)=\cO\cap B_{3r}(p_m)$.
Now let $G$ be a $C^1$-domain in $\cO$ such that $$ \xi_0(x)=0\quad\text{for }x\in\cO\setminus G\quad\text{and}\quad \inf_{x\in G} \rho_{\circ}(x)\geq c>0\text{ with a constant}\; c. $$ Then, due to the choices of $\xi_m$ and $\cD_m$ $(m=1,\ldots,M)$, \eqref{space K norm} and \eqref{eqn 8.25.8} together easily yield \begin{align*}
\|\xi_m v\|^p_{K^n_{p,\theta,\Theta}(\cO)} \sim\|\xi_m v\|^p_{K^n_{p,\theta,\Theta}(\cD_m)},\quad m=1,\ldots,M, \end{align*} for any $\theta,\Theta\in\bR$, $n\in\{0,1,2,\ldots\}$, and $v\in K^n_{p,\theta,\Theta}(\cO)$. Similarly, $$
\|\xi_0 v\|^p_{K^n_{p,\theta,\Theta}(\cO)}\sim \int_{G} \vert \xi_0v\vert ^p \rho^{\Theta-d}dx\sim \|\xi_0 v\|^p_{H^n_{p,\Theta}(G)}, $$ and the same relations hold for $\ell_2$-valued functions. Denote $$ \bH^{\gamma}_{p,\Theta}(G,T):=L_p(\Omega\times (0,T], \cP; H^{\gamma}_{p,\Theta}(G)), $$ $$
\bH^{\gamma}_{p,\Theta}(G,T,\ell_2):=L_p(\Omega\times (0,T], \cP; H^{\gamma}_{p,\Theta}(G;\ell_2)). $$ Then, the above observations in particular imply \begin{align}\label{partition-of-unity.eq}
\|v\|_{\bK^n_{p,\theta,\Theta}(\cO,T)} \sim \Big(\|\xi_0 v\|_{\bH^n_{p,\Theta}(G,T)}+\sum_{m=1}^M\|\xi_m v\|_{\bK^n_{p,\theta,\Theta}(\cD_m,T)}\Big) \end{align} for any $v\in \bK^n_{p,\theta,\Theta}(\cO,T)$, where $n\in \{0,1,2,\cdots\}$.
Now, for each $m=1,\ldots,M$ we define $u_m:=\xi_mu$. Then, since $\gamma+2\geq 1$, $u_m$ belongs to $\bK^{1}_{p,\theta-p,\Theta-p}(\cD_m,T)$. Also, $\xi_0 u$ belongs to $\bH^{1}_{p,\Theta-p}(G,T)$. Note that each $u_m$ satisfies \begin{equation}\label{equation for m} d(u_m)=\Big(\cL u_m+f^0_m+\sum_{i=1}^d (f^i_m)_{x^i}\Big)dt+\sum_k g^{k}_m dw_t^k,\quad t\in (0,T] \end{equation} in the sense of distributions on $\cD_m$ with the initial condition $ u_m(0,\cdot)=\xi_m u_0$ and $\xi_0 u$ satisfies \begin{equation}\label{equation for m=0} d(\xi_0 u)=\Big(\cL (\xi_0 u)+f^0_0+\sum_{i=1}^d (f^i_0)_{x^i}\Big)dt+\sum_k g^{k}_0 dw_t^k,\quad t\in (0,T] \end{equation} in the sense of distributions on $G$ with the initial condition $(\xi_0 u)(0,\cdot)=\xi_0 u_0$, where \begin{equation}\label{f g for m} f^0_m=f^0\xi_m-\sum_{i=1}^d f^i (\xi_m)_{x^i}-u \cL(\xi_{m}), \quad f^i_m=2\sum_{j=1}^d a^{ij}u\,(\xi_{m})_{x^j},\quad g_m=\, g\xi_m \end{equation} for $m=0,1,2,\ldots,M$.
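The derivation of \eqref{equation for m} and \eqref{equation for m=0} is a direct computation with the product rule, which we record for the reader's convenience. Since $D_{ij}=D_{ji}$, we may assume $a^{ij}=a^{ji}$, and then for any smooth cut-off $\xi$,
$$
\cL(\xi u)=\xi\, \cL u+2\sum_{i,j=1}^d a^{ij}\xi_{x^i}u_{x^j}+u\,\cL \xi, \qquad \xi f^i_{x^i}=(\xi f^i)_{x^i}-f^i \xi_{x^i};
$$
moreover, the first-order term can be recast in divergence form because $a^{ij}=a^{ij}(\omega,t)$ does not depend on $x$. Multiplying the equation for $u$ by $\xi_m$ and collecting terms in this way leads to the data in \eqref{f g for m}.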
Since $\operatorname{supp}(\xi_m) \subset \overline{B_{2r}(p_m)}$ and $(\xi_m)_x=0$ on a neighborhood of $p_m$ for $m=1,\ldots,M$, we have \begin{align*}
\|u (\xi_m)_x\|_{\bL_{p,\theta,\Theta}(\cO,t)}+\|u(\xi_m)_{xx}\|_{\bL_{p,\theta+p,\Theta+p}(\cO,t)}\leq C\|u\|_{\bL_{p,\theta,\Theta}(\cO,t)} \end{align*} for $t\leq T$, where $C$ depends only on $\cO,\,p,\,\theta$ and $\Theta$.
Hence, for $m=1,\,\ldots,\,M$, by Theorems \ref{main result} and \ref{main result-random}, which our range of $\theta$ allows us to use, we have for any $t\leq T$, \begin{align*}
&\quad \quad \|\xi_m u\|_{\bK^1_{p,\theta-p,\Theta-p}(\cD_m,t)} \\
& \leq C\big(\|f^0_m\|_{\bL_{p,\theta+p,\Theta+p}(\cD_m,t)}+ \sum_{i=1}^d \|f^i_m\|_{\bL_{p,\theta,\Theta}(\cD_m,t)}+\|g_m\|_{\bL_{p,\theta,\Theta}(\cD_m,t,\ell_2)}\nonumber\\
&\hspace{1cm}+\|\xi_m u_0\|_{\bU^1_{p,\theta,\Theta}(\cD_m)}\big) \\
& \leq C\big(\|u\|_{\bL_{p,\theta,\Theta}(\cO,T)}+\|f^0\|_{\bL_{p,\theta+p,\Theta+p}(\cO,T)}
+\|\tbf\|_{\bL_{p,\theta,\Theta}(\cO,T,d)} +\|g\|_{\bL_{p,\theta,\Theta}(\cO,T,\ell_2)}\\
&\hspace{1cm}+\|u_0\|_{\bU^1_{p,\theta,\Theta}(\cO)}\big). \end{align*}
For $m=0$, by \cite[Theorem 2.7]{Kim2004-2} (or \cite[Theorem 2.9]{Kim2004}),
we have \begin{align*}
&\quad \quad\quad \|\xi_0 u\|_{\bH^1_{p,\Theta-p}(G,t)}\\
&\leq C\Big(\|f^0_0\|_{\bL_{p,\Theta+p}(G,t)}+ \sum_{i=1}^d \|f^i_0\|_{\bL_{p,\Theta}(G,t)}+\|g_0\|_{\bL_{p,\Theta}(G,t,\ell_2)}+\|\xi_0u_0\|_{L_p(\Omega;H^{1-2/p}_{p,\Theta+2-p}(G))}\Big) \\
&\leq C\Big(\|u\|_{\bL_{p,\theta,\Theta}(\cO,T)}+\|f^0\|_{\bL_{p,\theta+p,\Theta+p}(\cO,T)}
+\|\tbf\|_{\bL_{p,\theta,\Theta}(\cO,T,d)} +\|g\|_{\bL_{p,\theta,\Theta}(\cO,T,\ell_2)}\\
&\hspace{1cm}+ \|u_0\|_{\bU^1_{p,\theta,\Theta}(\cO)}\Big). \end{align*} Summing up over all $m=0,\,\ldots,\,M$ and using \eqref{partition-of-unity.eq}, for each $t\leq T$, we have \begin{align*}
&\quad\quad\|u\|_{\bK^1_{p,\theta-p,\Theta-p}(\cO,t)} \\
&\leq C \Big(\|u\|_{\bL_{p,\theta,\Theta}(\cO,t)}+\|f^0\|_{\bL_{p,\theta+p,\Theta+p}(\cO,T)}
+\|\tbf\|_{\bL_{p,\theta,\Theta}(\cO,T)} +\|g\|_{\bL_{p,\theta,\Theta}(\cO,T,\ell_2)}\\
&\hspace{1cm} +\|u_0\|_{\bU^1_{p,\theta,\Theta}(\cO)}\Big). \end{align*} Using this and the polygonal versions of \eqref{eqiv norm} and \eqref{eqn 8.25.31}, which are mentioned in Theorem \ref{thm all}, we get, for each $t\leq T$, \begin{align*}
&\quad\quad \|u\|^p_{\cK^1_{p,\theta,\Theta}(\cO,t)}\\
\leq &\,C\int^t_0\|u\|^p_{\cK^1_{p,\theta,\Theta}(\cO,s)}ds \\
&+C\left(\|f^0\|^p_{\bL_{p,\theta+p,\Theta+p}(\cO,T)}+\|\tbf\|^p_{\bL_{p,\theta,\Theta}(\cO,T)}+\|g\|^p_{\bL_{p,\theta,\Theta}(\cO,T,\ell_2)}+\|u_0\|^p_{\bU^1_{p,\theta,\Theta}(\cO)}\right). \end{align*} Applying Gronwall's inequality, we further obtain \begin{align*}
&\quad\quad \|u\|_{\cK^1_{p,\theta,\Theta}(\cO,T)}\\
&\leq C\left(\|f^0\|_{\bL_{p,\theta+p,\Theta+p}(\cO,T)}+\|\tbf\|_{\bL_{p,\theta,\Theta}(\cO,T)}+\|g\|_{\bL_{p,\theta,\Theta}(\cO,T,\ell_2)}+\|u_0\|_{\bU^1_{p,\theta,\Theta}(\cO)}\right). \end{align*} This and the polygonal version of Lemma \ref{regularity.induction}, which is mentioned in Theorem \ref{thm all}, yield a priori estimate \eqref{polygon a priori}. The lemma is proved. \end{proof}
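The version of Gronwall's inequality used in the proof above can be stated as follows: if $\phi:[0,T]\to[0,\infty)$ is bounded, measurable, and satisfies
$$
\phi(t)\leq A+C\int^t_0 \phi(s)\,ds, \qquad t\leq T,
$$
then $\phi(t)\leq Ae^{Ct}$ for all $t\leq T$. We applied it with $\phi(t)=\|u\|^p_{\cK^1_{p,\theta,\Theta}(\cO,t)}$, which is finite and nondecreasing in $t$, and with $A$ equal to the sum of the norms of the data.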
The following is a $\cC^1$-domain version of Lemma \ref{global.uniqueness}. We use it in the proof of Theorem \ref{main result polygon} below.
\begin{lemma}\label{lem for uniqueness2} Let $G$ be a bounded $\cC^1$ domain in $\bR^d$ and let $p_j\in[2,\infty)$, $\Theta_j\in (d-1,d-1+p_j)$ for $j=1,2$. Assume that $u\in \bH^1_{p_1,\Theta_1-p_1}(G,T)$ satisfies \begin{align*} du=\Big(\cL u+f^0+\sum_{i=1}^d f^i_{x^i}\Big)\,dt+\sum_kg^kdw^k_t,\quad t\in(0,T]\quad \end{align*}
in the sense of distributions on $G$ with the initial condition $u(0,\cdot)=u_0(\cdot)$ and $f^0$, $f^i$ ($i=1,\ldots,d$), $g$, $u_0$ satisfying $$ f^0\in \bL_{p_j,\Theta_j+p_j}(G,T)\cap \bL_{p_j,d+p_j}(G,T),\quad f^i \in \bL_{p_j,\Theta_j}(G,T)\cap \bL_{p_j,d}(G,T), \, i=1,\cdots,d, $$ $$ g\in \bL_{p_j,\Theta_j}(G,T,\ell_2)\cap \bL_{p_j,d}(G,T,\ell_2), $$ $$ \quad u_0\in L_{p_j}(\Omega,\rF_0;H^{1-2/p_j}_{p_j,\Theta_j+2-p_j}(G))\cap L_{p_j}(\Omega,\rF_0;H^{1-2/p_j}_{p_j,d+2-p_j}(G)) $$ for both $j=1$ and $j=2$. Then $u$ belongs to $\bH^1_{p_2,\Theta_2-p_2}(G,T)$.
\end{lemma}
\begin{proof} See \cite[Lemma 3.8]{CKL 2019+}. We remark that only $\Delta$ is considered in \cite{CKL 2019+}; however, the proof of \cite[Lemma 3.8]{CKL 2019+} works for the general case without any changes since it depends only on \cite[Theorem 2.7]{Kim2004-2} (or \cite[Theorem 2.9]{Kim2004}), which covers operators whose coefficients are measurable in $(\omega,t)$ and continuous in $x$. \end{proof}
We recall that $d=2$ in this section. \begin{proof}[\textbf{Proof of Theorem \ref{main result polygon}}]
Due to Lemma \ref{a priori p}, we only need to prove the existence result. Furthermore, relying on a standard approximation argument, we may assume $$ f^0\in\bK^{\infty}_c(\cO,T),\quad \tbf\in\bK^{\infty}_c(\cO,T,2), \quad g\in\bK^{\infty}_c(\cO,T,\ell_2), \quad u_0\in \bK^{\infty}_c(\cO). $$ Considering $u-u_0$ as usual, we may assume $u_0\equiv 0$. Also, note that $g^k=0$ for all large $k$ (say, for all $k> N$), and each $g^k$ is of the type $\sum_{j=1}^{n(k)} 1_{(\tau^k_{j-1},\tau^k_j]}(t) h^{kj}(x)$, where $\tau^k_j$ are bounded stopping times and $h^{kj}\in \cC^{\infty}_c(\cO)$. Thus the function $v$ defined by $$ v(t,x):=\sum_{k=1}^{\infty}\int^t_0 g^k dw^k_s=\sum_{k\leq N} \sum_{j\leq n(k)} \left(w^k_{\tau^k_j \wedge t}-w^k_{\tau^k_{j-1} \wedge t}\right) h^{kj}(x) $$ is infinitely differentiable in $x$ and vanishes near the boundary of $\cO$. Consequently $v$ belongs to $\cK^{\nu+2}_{p,\theta,\Theta}(\cO,T)$ for any $\nu, \theta, \Theta \in \bR$, in view of Definition \ref{definition solution polygon}. Now, $u$ satisfies equation \eqref{stochastic parabolic equation polygon} if and only if $\bar{u}:=u-v$ satisfies $$ d\bar{u}=\Big(\cL\bar{u}+\bar{f}^0+\sum_{i=1}^2 f^i_{x^i}\Big)dt, \quad t\in(0,T]\quad;\quad \bar{u}(0,\cdot)= 0, $$ where $\bar{f}^0=f^0+\cL v$. Hence, considering $\bar{f}^0$ in place of $f^0$, to prove the existence we may further assume $g=0$.
Then, by the classical results without weights for $p=2$, (see, e.g. \cite{Roz1990} or \cite[Theorem~2.12, Corollary~2.14]{Kim2014}), there exists a solution $u$ in $\cK^1_{2,2,2}(\cO,T)$ to equation \eqref{stochastic parabolic equation polygon}, which now is simplified as $$ u_t=\cL u+f^0+\sum_{i=1}^2 f^i_{x^i}, \quad t\in(0,T]\quad; \quad u(0,\cdot)=0. $$ By Theorem A in \cite{Aronson} (or see estimate (2.11) and proof of Theorem 2.4 in \cite{Kim2004-3} for more detail), for any $r>4$, we have \begin{equation}
\label{bound}
\bE \sup_{t,x} \vert u(t,x)\vert ^p \leq C \bE \|\,\vert f^0\vert +\vert \tbf\vert \,\|^p_{L_r((0,T]\times \cO)} <\infty. \end{equation}
Now we prove $u\in \cK^1_{p,\theta,\Theta}(\cO,T)$ using Lemma \ref{global.uniqueness} and Lemma \ref{lem for uniqueness2} along with $u\in \cK^1_{2,2,2}(\cO,T)$. Define $u_m:=\xi_m u$ in the same way we did in the proof of Lemma \ref{a priori p}. Then $\xi_m u$ satisfies \eqref{equation for m} in the sense of distributions on $\cD_m$ for $m=1,\ldots,M$ and $\xi_0u$ satisfies \eqref{equation for m=0}
on $G$ for $m=0$ with the same $f^0_m,\,f^i_m,\, \xi_mu_0$ as in \eqref{f g for m}. Note that since $f^0,\tbf$ are bounded and $f^0,\tbf, (\xi_m)_x, (\xi_m)_{xx}$ vanish near vertices, we have for any $\theta\in \bR$, $q\geq 2$ and $1<\Theta<1+q$, \begin{eqnarray*}
&&\|f^0_m\|^q_{\bL_{q,\theta+q,\Theta+q}(\cO,T)} +\sum_{i=1}^2\|f^i_m\|^q_{\bL_{q,\theta,\Theta}(\cO,T)}\\
&\leq& C \bE \int^T_0 \int_{\cO}(1+|u|^q) \rho^{\Theta-2}\,dxdt\leq C \left(\int_{\cO}\rho^{\Theta-2}dx\right) \bE \sup_{t,x}(1+|u|^q)<\infty. \end{eqnarray*} For the last inequality we used \eqref{bound} and the fact that $\Theta-2>-1$. Hence, $f^0_m, \,f^i_m$ along with $\xi_mu_0$ satisfy the assumptions of Lemma \ref{global.uniqueness} and Lemma \ref{lem for uniqueness2}. Consequently $\xi_m u \in \bK^1_{p,\theta-p,\Theta-p}(\cD_m,T)$ as $\xi_m u \in \cK^1_{p,\theta,\Theta}(\cD_m,T)$ for $m=1,2,\ldots, M$ and $\xi_0 u \in \bH^1_{p,\Theta-p}(G,T)$. These and \eqref{partition-of-unity.eq} with $n=1$ yield $u\in \bK^1_{p,\theta-p,\Theta-p}(\cO,T)$ and in turn $u\in \cK^1_{p,\theta,\Theta}(\cO,T)$.
Finally, the analogue of Lemma \ref{regularity.induction} for polygonal domains (see Theorem \ref{thm all}) proves that the solution $u$ found above actually belongs to the space $\cH^{\gamma+2}_{p,\theta,\Theta}(\cO,T)$. The theorem is proved. \end{proof}
\end{document}
\begin{document}
\newcommand*{\mean}[1]{\left\langle #1 \right\rangle}
\newcommand*{\ket}[1]{\left|#1\right\rangle}
\newcommand*{\bra}[1]{\left\langle #1\right|}
\newcommand*{\abs}[1]{\left|#1\right|} \newcommand*{\tr}{\mathrm{tr}} \newcommand*{\bo}[1]{\mathbf{#1}} \newcommand*{\norm}[1]{\left\Vert #1 \right\Vert }
\newcommand*{\DPZ}[3]{\left.\frac{\partial#1}{\partial#2}\right|_{#3}} \renewcommand*{\i}{\mathrm{i}}
\title{Ultimate sensitivity of precision measurements with Gaussian quantum light:\\ a multi-modal approach}
\author{Olivier Pinel} \affiliation{Laboratoire Kastler Brossel, Université Pierre et Marie Curie-Paris 6,\\ ENS, CNRS; 4 place Jussieu, 75252 Paris, France} \author{Julien Fade} \affiliation{Institut Fresnel, CNRS, Aix-Marseille Université, \\ \'Ecole Centrale Marseille, Campus de Saint-Jérôme, 13013 Marseille, France} \author{Daniel Braun} \affiliation{Laboratoire de Physique Théorique, Université Paul Sabatier, Toulouse III, \\ 118 route de Narbonne 31062 Toulouse, France} \author{Pu Jian} \affiliation{Laboratoire Kastler Brossel, Université Pierre et Marie Curie-Paris 6,\\ ENS, CNRS; 4 place Jussieu, 75252 Paris, France} \author{Nicolas Treps} \affiliation{Laboratoire Kastler Brossel, Université Pierre et Marie Curie-Paris 6,\\ ENS, CNRS; 4 place Jussieu, 75252 Paris, France} \author{Claude Fabre} \affiliation{Laboratoire Kastler Brossel, Université Pierre et Marie Curie-Paris 6,\\ ENS, CNRS; 4 place Jussieu, 75252 Paris, France} \date{\today}
\begin{abstract}
Multimode Gaussian quantum light, which includes multimode squeezed and multipartite quadrature entangled light, is a very general and powerful quantum resource with promising applications in quantum information processing and metrology. In this paper, we determine the ultimate sensitivity in the estimation of any parameter when the information about this parameter is encoded in such light, irrespective of the information extraction protocol used in the estimation and of the measured observable. In addition we show that an appropriate homodyne detection scheme allows us to reach this ultimate sensitivity. We show that, for a given set of available quantum resources, the most economical way to maximize the sensitivity is to put the most squeezed state available in a well-defined light mode. This implies that it is not possible to take advantage of the existence of squeezed fluctuations in other modes, nor of quantum correlations and entanglement between different modes. \end{abstract}
\pacs{03.65.Ta, 42.50.Ex, 42.50.Lc, 42.50.St}
\maketitle
Optical techniques are widely used in many areas of science and technology to make accurate measurements and diagnostics, from microscopy, spectrography, and chemical analysis to gravitational wave detection and ranging. There are many reasons for this: light allows us to extract information in a remote and non-destructive way, it carries information in a massively parallel way, and, perhaps most importantly, optical measurements can reach very high precision and sensitivity levels.
It is therefore important to know the ultimate limit of sensitivity that can possibly be achieved in the estimation of a parameter $\theta$ that is encoded in one way or another in a light beam, given some constraints such as a fixed mean photon number $N$. This limit is imposed by the unavoidable quantum fluctuations of light and depends on the quantum state of light which conveys the information about $\theta$. When the light is in a coherent state, this limit is called the `standard quantum limit' and scales as $1/N^{1/2}$.
Many studies have been devoted to finding ways to enhance the sensitivity of parameter estimation beyond the standard quantum limit using quantum resources. It has been shown that enhanced sensitivity can be achieved by using squeezed light \cite{Caves81} or entangled light \cite{Giovanetti06}. This was first demonstrated experimentally for measurements in which the information about the parameter $\theta$ is carried by the total intensity \cite{Marin97} or by the phase \cite{Xiao87} of a light beam. Later, situations were considered where the parameter $\theta$ does not change the total intensity of the light but modifies the details of the repartition of light in the transverse plane \cite{Delaubert08} (for example to estimate a very small lateral displacement of a beam \cite{Treps03}). As the energy of a squeezed state increases with the squeezing factor, the ultimate limit with squeezed states for a fixed total energy scales as $1/N^{3/4}$.
If one uses instead entangled states such as NOON states \cite{Kok04}, one reaches the so-called Heisenberg limit (HL), which scales as $1/N$. However, in the present state of technology, real measurement schemes using these states do not lead to very high sensitivities, because of the small values of $N$ experimentally reachable (so far, the highest achievable NOON state has $N\sim100$ \cite{Higgins07}), and decoherence tends to rapidly destroy these states, therefore limiting the performance of the measurement to a $1/N^{1/2}$ scaling for large $N$ \cite{Huelga97,Kolodynski10,Escher11}. In \cite{Braun11} a scheme was proposed that reaches the HL without the use of an entangled state.
\begin{figure}
\caption{General scheme for estimating light parameters.}
\label{setup}
\end{figure}
This paper tackles the problem of optimized parameter estimation in
a more practical way. Light is considered as a probe to measure a parameter of a physical system (see Fig.~\ref{setup}). As in that case all quantum limits scale as some inverse power of $N$, it is very important to consider states with very high $N$ values. It turns out that, so far, multimode Gaussian states are the only available nonclassical states of light with very high mean photon number. They include quantum resources like multimode squeezing and multipartite entanglement that are widely used in quantum optics and quantum information processing. These states are already generated experimentally with impressive amounts of squeezing \cite{Mehmet10} and entanglement \cite{Laurat05} shared by many modes \cite{Yukawa08}. When they include a coherent state in one of the modes, the mean photon number $N$ can easily be as large as $10^{16}$ \cite{Keller}.
The originality of the present approach is its multi-modal character. A multimode quantum state is defined not only by the coefficients of its decomposition on the multimode Fock state basis $|n_1, n_2,...,n_{\ell} \rangle$ but also by the spatio-temporal shape of the different modes on which these Fock states are defined. This leaves us with two kinds of degrees of freedom on which to act: as we will see below, the ultimate sensitivity is obtained not only by choosing the best possible Gaussian quantum state, but also by putting this state in an optimized mode basis.
\textit{Expression of the quantum Cram\'er-Rao bound for pure states} - Our aim is to measure the smallest possible variation of a parameter $\theta$ around a given value that we take to be $0$ by an appropriate change of origin. The quantum state which contains the information about this parameter is described by a density matrix $\hat{\rho}_{\theta}$. The error in the estimation of $\theta$ based on $Q$ repeated measurements of an observable $\hat{A}$ on this state is given by \cite{Braunstein94} \begin{equation} \delta\theta=\frac{\mean{\delta A_{\mathrm{est}}^{2}}_{\theta}^{1/2}}{\sqrt{Q}\abs{\frac{\partial}{\partial\theta}\mean{A_{\mathrm{est}}}_{\theta}}}, \end{equation} where $A_{\mathrm{est}}$ is an unbiased estimator of $\theta$ that depends on the results of the measurements of $\hat{A}$. By optimizing over all estimators $A_{\mathrm{est}}$ and all measurements, Braunstein and Caves \cite{Braunstein94} showed that the best achievable sensitivity for measuring a small variation of $\theta$ is bounded by the so-called quantum Cramér-Rao (QCR) bound \begin{equation} \delta\theta\geq\delta\theta_{\mathrm{min}}\equiv \left( 2\sqrt{Q}\frac{s(\hat{\rho}_{\theta},\hat{\rho}_{\theta+d\theta})}{d\theta}\right)^{-1}, \end{equation}
where $s(\hat{\rho}_{\theta},\hat{\rho}_{\theta+d\theta})$ is the Bures distance between $\hat{\rho}_{\theta}$ and $\hat{\rho}_{\theta+d\theta}$, which, in the case of pure states $\ket{\psi_{1}}$ and $\ket{\psi_{2}}$ is equal to $\sqrt{2(1-\left|\mean{\psi_{1}|\psi_{2}}\right|)}$.
Let us now consider a pure quantum state of light $\ket{\psi_{\theta}}$ spanning $M$ different spatial or temporal modes $\{v_{i}(\bo r,t)\}$ ($i=1,...,M$). For mixed states with parameter-independent mixing probabilities, the sensitivity can at most be as good as for the pure states from which it is mixed \cite{Braun10}. We call $\hat{a}_{i}$ the annihilation operator in the mode $v_{i}$, and introduce the quadrature operators $\hat{x}_{i}=\hat{a}_{i}+\hat{a}_{i}^{\dagger}$ and $\hat{p}_{i}=\i(\hat{a}_{i}^{\dagger}-\hat{a}_{i})$. We define the column vectors $\hat{\bo x}=(\hat{x}_{1},\ldots,\hat{x}_{M})^{\top}$, $\hat{\bo p}=(\hat{p}_{1},\ldots,\hat{p}_{M})^{\top}$, and $\hat{\bo X}=(\hat{\bo x},\hat{\bo p})^{\top}$.
The overlap between the states $\ket{\psi_{\theta}}$ and $\ket{\psi_{\theta+d\theta}}$ reads: \begin{equation}
\left|\mean{\psi_{\theta}|\psi_{\theta+d\theta}}\right|^{2}=\left(4\pi\right)^{M}\int W_{\theta}(\bo X)W_{\theta+d\theta}(\bo X)\ d^{2M}\!\bo X , \end{equation} $W_\theta$ being the Wigner function of $\ket{\psi_\theta}$: \begin{equation} W_\theta(\bo x,\bo p)=\frac{1}{(2\pi)^{M}}\int e^{i\boldsymbol{\xi}.\bo p} \langle \bo x-\boldsymbol{\xi}\ket{\psi_\theta} \bra{\psi_\theta}\bo x+\boldsymbol{\xi}\rangle \ d^{M}\!\boldsymbol{\xi} . \end{equation}
At second order in $d\theta$, the overlap is equal to \begin{equation}
\left|\mean{\psi_{\theta}|\psi_{\theta+d\theta}}\right|^{2}\simeq1-\frac{d\theta^{2}}{2}\left(\left(4\pi\right)^{M}\int\left(W_{\theta}^{\prime}(\bo X)\right)^{2}\ d^{2M}\!\bo X\right). \end{equation} The first order vanishes because the states are pure. Throughout this letter, for any function depending on the parameter $\theta$, we use the convention $f_{\theta}^{\prime}\equiv\DPZ f{\theta}{\theta=0}$, regardless of what other explicit variables $f$ might depend on.
This leads to the QCR bound for pure states \begin{equation} \delta\theta_{\mathrm{min}}=\left( 2 Q \left(4\pi\right)^{M}\int\left(W_{\theta}'(\bo X)\right)^{2}\ d^{2M}\!\bo X \right)^{-1/2}\label{eq:QCR_wigner} . \end{equation} This intermediate result is very interesting as it gives a simple expression of the QCR bound for any pure quantum state. In the remainder of this paper, we will apply this formula to Gaussian states.
\textit{QCR bound for pure Gaussian states} - For a Gaussian state $\ket{\psi_{\theta}}$, the Wigner function takes the form \begin{equation} W_{\theta}(\bo X)=\frac{1}{(2\pi)^{M}}\exp\left(-\frac{1}{2}(\bo X-\overline{\bo X}_{\theta})^{\top}\boldsymbol{\Gamma}_{\theta}^{-1}(\bo X-\overline{\bo X}_{\theta})\right) \end{equation} where $\overline{\bo X}_{\theta}$ is the column vector of the expectation values of the quadratures for the different modes, and $\boldsymbol{\Gamma}_{\theta}$ the symmetrized covariance matrix: $\boldsymbol\Gamma_{\theta,[i,j]}= \frac{1}{2} (\mean{\bo X_i \bo X_j}_\theta + \mean{\bo X_j \bo X_i}_\theta)$. As we treat the problem in all its generality, both possibly depend on $\theta$. One finds from (\ref{eq:QCR_wigner}) \begin{equation} \delta\theta_{\mathrm{min}}=Q^{-1/2} \left(\overline{\bo X}_{\theta}^{\prime\top}\boldsymbol{\Gamma}_{\theta}^{-1}\overline{\bo X}_{\theta}^{\prime}+\frac{\tr\left(\left(\boldsymbol{\Gamma}_{\theta}^{\prime}\boldsymbol{\Gamma}_{\theta}^{-1} \right)^{2}\right)}{4}\right)^{-1/2} \label{eq:QCRGauss}. \end{equation} The expression in the outermost bracket of Eq.~(\ref{eq:QCRGauss}) corresponds to the quantum Fisher information $I_\mathrm{Fisher}$ for a pure Gaussian state. It consists
of two terms which represent the information about $\theta$ that can be extracted respectively from the mean field and from the noise. In the limit of very large values of $N$, more precisely when the quantum field fluctuations are so small compared to $N$ that one can treat them to first order, the second term turns out to be negligible compared to the first, and we will neglect it from now on. This approximation is a consequence of the practical approach we consider in this paper and corresponds to realistic experimental implementations. Let us stress that such a linearization procedure has been widely used in the literature to determine the Gaussian quantum state which is produced by nonlinear effects such as parametric down-conversion or four-wave mixing.
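As a sanity check, Eqs.~(\ref{eq:QCR_wigner}) and (\ref{eq:QCRGauss}) can be compared numerically on a simple single-mode example. The following sketch (our illustration, not part of the original analysis; the displaced state and grid parameters are arbitrary choices) takes $M=1$, $\overline{\bo X}_\theta=(2\theta,0)^\top$ and $\boldsymbol\Gamma=\boldsymbol 1$, for which the quantum Fisher information should equal $\overline{\bo X}_{\theta}^{\prime\top}\boldsymbol{\Gamma}^{-1}\overline{\bo X}_{\theta}^{\prime}=4$:

```python
import numpy as np

# Phase-space grid for a single mode (M = 1)
dx = 0.02
grid = np.arange(-8.0, 8.0, dx)
x, p = np.meshgrid(grid, grid, indexing="ij")

def wigner(theta):
    # Gaussian Wigner function with mean (2*theta, 0) and unit covariance
    # (vacuum noise in the x = a + a^dagger convention)
    return np.exp(-0.5 * ((x - 2 * theta) ** 2 + p ** 2)) / (2 * np.pi)

# dW/dtheta at theta = 0 by central finite differences
eps = 1e-4
w_prime = (wigner(eps) - wigner(-eps)) / (2 * eps)

# Quantum Fisher information from the Wigner-function formula:
# I_Fisher = 2 * (4*pi)^M * integral (W')^2 dX, with M = 1 here
fisher_wigner = 2 * (4 * np.pi) * np.sum(w_prime ** 2) * dx * dx

assert abs(fisher_wigner - 4.0) < 1e-2  # matches Xbar'^T Gamma^{-1} Xbar' = 4
```

The grid sum approximates the phase-space integral; the agreement confirms that the two expressions of the Fisher information coincide for this Gaussian state.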
Let us now use our freedom of choice of the mode basis in which to describe the quantum state: we will see that $I_\mathrm{Fisher}$ can be expressed in more physical terms if one introduces a mode basis $\{ \widetilde{v}_{i}(\bo r,t) \}$ specific to our problem. We first define the \textit{normalized mean photon field mode} as: \begin{equation}\label{fieldmode} u_{\theta}(\bo r,t)=\frac{\overline{a}_{\theta}(\bo r,t)}{\norm{\overline{a}_{\theta}}} \end{equation} where $\hat{a}(\bo r,t)=\sum_{i}\hat{a}_{i}v_{i}(\bo r,t)$ is the local annihilation operator, $\overline{a}_{\theta}(\bo r,t)=\bra{\psi_{\theta}}\hat{a}(\bo r,t)\ket{\psi_{\theta}}$ the mean photon field, and $\norm{\overline{a}_{\theta}}$ its norm: \begin{equation}
\norm{\overline{a}_{\theta}}=\left( \int |\overline{a}_{\theta}(\bo r,t)|^2 \ d^2\bo r dt \right)^{1/2} , \end{equation} where the spatial integration is made over a surface perpendicular to the light beam propagation, and the time integration over the detection time. In the limit of a narrow-band field, the mean photon field mode $u_{\theta}$ is proportional to the mean value of the electric field in the $\theta$-dependent quantum state.
We can now define the \textit{detection mode} by \begin{equation}\label{detmode} \widetilde{v}_{1}(\bo r,t)=\frac{\overline{a}_{\theta}^{\prime}(\bo r,t)}{\norm{\overline{a}_{\theta}^{\prime}}} . \end{equation} One then completes the basis starting with mode $\widetilde{v}_1$ by other orthonormal modes $\widetilde{v}_{n>1}$. The modes $\widetilde{v}_n$ do not depend on $\theta$ since the derivative in (\ref{detmode}) has been taken at the value $\theta=0$.
The expression of the Fisher information in the $\{ \widetilde{v}_{i}(\bo r,t) \}$ mode basis is very simple as it involves only one matrix element of $\boldsymbol{\Gamma}_{\theta}^{-1}$: \begin{equation} I_\mathrm{Fisher}= 4 \boldsymbol{\Gamma}_{\theta=0,\left[1,1\right]}^{-1} \norm{\overline{a}_{\theta}^{\prime}}^2 \end{equation} where $\boldsymbol{\Gamma}_{\theta=0,\left[1,1\right]}^{-1}$ is the top-left element of the matrix $\boldsymbol{\Gamma}_{\theta}^{-1}$ in the basis $\{ \widetilde{v}_{i}(\bo r,t) \}$ taken at the value $\theta=0$ of the parameter.
In particular, the Fisher information for a single measurement involving a coherent state ($\boldsymbol{\Gamma}_{\theta}=\boldsymbol{1}$), that we will call $I_0$, is found to be \begin{equation} I_0 = 4\norm{\overline{a}_{\theta}^{\prime}}^{2}=N_{\theta}\left(4 \norm{u_{\theta}^{\prime}}^{2}+\left(\frac{N_{\theta}^{\prime}}{N_{\theta}}\right)^{2}\right) , \end{equation} where $N_{\theta}=\norm{\overline{a}_{\theta}(\bo r,t)}^2$ is a quantity that tends to the mean photon number $N$ in the high $N$ limit where fluctuations can be linearized. We finally obtain the following expression of the QCR bound for parameter estimation using quantum Gaussian states \begin{equation} \label{QCRfinal} \delta \theta_{\mathrm{min}}=\left[ QN_{\theta}\left(4\norm{u_{\theta}^{\prime}}^{2}+\left(\frac{N_{\theta}^{\prime}}{N_{\theta}}\right)^{2}\right)\boldsymbol{\Gamma}_{\theta,\left[1,1\right]}^{-1} \right]^{-1/2} . \end{equation} It depends on three factors: the first one is, as usual, the mean total number of photons measured $QN_{\theta}$. The second one is related to the variation as a function of $\theta$ of the displacement of the mean field mode and the mean photon number. The more the light properties are affected by the variation of $\theta$, the better the sensitivity one can expect for its estimation. While the general argument is obvious, the explicit formula (\ref{QCRfinal}) is not. The last factor is the influence on the measurement of the quantum fluctuations of the state, which is remarkably contained in a single element of the inverse covariance matrix in our specific mode basis.
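To illustrate how the three factors of Eq.~(\ref{QCRfinal}) combine, here is a small numerical sketch with illustrative values of our own choosing (one measurement $Q=1$, a pure displacement signal with $N_{\theta}^{\prime}=0$ and $\norm{u_{\theta}^{\prime}}=1$, and $N_{\theta}=10^{12}$ photons): a coherent probe gives the shot-noise limit $1/(2\sqrt{N})$, while $10$\,dB of squeezing in the detection mode ($\boldsymbol{\Gamma}_{[1,1]}=0.1$) improves it by $\sqrt{10}$.

```python
import math

# Illustrative numbers (ours, not the paper's): Q = 1 measurement,
# pure displacement signal (N' = 0, ||u'||^2 = 1), N = 1e12 photons.
Q, N, u_prime_sq, N_prime = 1, 1e12, 1.0, 0.0

def delta_theta_min(gamma_inv_11):
    # Eq. (QCRfinal): QCR bound from the Fisher information
    fisher = Q * N * (4 * u_prime_sq + (N_prime / N) ** 2) * gamma_inv_11
    return fisher ** -0.5

coherent = delta_theta_min(1.0)        # Gamma = identity: shot-noise limit
squeezed = delta_theta_min(1.0 / 0.1)  # 10 dB squeezing in the detection mode

assert abs(coherent - 1 / (2 * math.sqrt(N))) < 1e-12
assert abs(coherent / squeezed - math.sqrt(10)) < 1e-9
```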
\textit{Optimized multimode Gaussian state for parameter estimation} - Let us now discuss under which conditions nonclassical multimode Gaussian states can be put to best use in the estimation of $\theta$. We will take the point of view of an experimentalist who wants to use the minimum possible amount of quantum resources that allow him to reach the QCR bound. He will start from the simplest way known to date to generate multimode quantum Gaussian states \cite{Braunstein05}, which consists in linearly mixing several single-mode squeezed beams produced by independent ``squeezers'', such as degenerate parametric amplifiers. We will call $\sigma_{\min}^{2}$ the smallest quadrature noise among all these squeezed modes; $\sigma_{\min}^{-2}$ is the largest eigenvalue of the inverse covariance matrix in the initial basis of the independent squeezed modes. With the help of linear couplers, i.e.~of a $\theta$-independent unitary transformation of the mode basis, the multimode squeezing can be transformed into a multimode entangled/squeezed Gaussian state in a mode basis whose spatio-temporal shape can also be tailored at will \cite{Morizur:2011jb}. One can show that, under such unitary transformations, the diagonal matrix elements of the inverse of the covariance matrix are bounded by the spectral radius of $\boldsymbol{\Gamma}^{-1}_{\theta=0}$, which is equal to $1/\sigma_{\text{min}}^2$. Equality is reached only if the detection mode 1 is an eigenmode of the covariance matrix with the eigenvalue $\sigma_{\text{min}}^2$, and thus \emph{when the most squeezed state is put in the detection mode, with no quantum correlations with any other mode}. The QCR bound corresponding to the quantum resources that we have just described is thus \begin{equation} \label{QCRopt} \delta \theta_{\mathrm{min}}=\frac{\sigma_{\text{min}}}{\sqrt{QN_{\theta}}} \left(4\norm{u_{\theta}^{\prime}}^{2}+\left(\frac{N_{\theta}^{\prime}}{N_{\theta}}\right)^{2}\right)^{-1/2} . \end{equation}
We have shown here an important result: the only way to saturate the Cramér-Rao bound in the configuration that we have just described is to put the most squeezed state available into the detection mode and not to have correlations with the other modes. The presence of other squeezed modes, or of any kind of entanglement, will not help to improve the sensitivity: one cannot take advantage of squeezed fluctuations or quantum correlations coming from different modes to improve the estimation of a single parameter \cite{Tilma10}. We therefore advise experimentalists to produce a single vacuum squeezed state, to put it in the detection mode, and to mix it with a coherent state of high mean photon number $N$ in the mean photon field mode $u_{\theta}(\bo r,t)$. In doing so, they can be sure that no one can make a more sensitive estimation of the variation of $\theta$ around $0$ for a given shape $u_{\theta}(\bo r,t)$ of the mean field.
\textit{A possible experimental implementation that reaches the QCR bound} - The determination of the Quantum Cramér-Rao bound is very general and does not tell us which kind of detection, and which kind of measurement strategy is to be used in order to reach it. We show in this paragraph that a homodyne detection scheme in which the local oscillator is precisely taken in the detection mode allows us to reach the QCR bound.
If one uses an intense local oscillator in mode $\widetilde{v}_{1}$, the balanced homodyne detection operator, for a null relative phase between the local oscillator and the measured beam, is given by $\hat{D}={\hat {\tilde x}}_{1} \sqrt{N_{\mathrm{LO}}} $, where $N_{\mathrm{LO}}$ is the mean photon number of the local oscillator and ${\hat {\tilde x}}_{1}$ the real quadrature operator of the mode $\widetilde{v}_1$. A balanced detection set-up therefore allows us to measure the projection of a multimode field on the oscillator mode, even in the presence of many other modes.
For a small variation of the parameter $\theta$ around $0$, the mean value of the homodyne signal is given by: \begin{eqnarray} \mean{\hat D}_{\theta}&=& \sqrt{N_{\mathrm{LO}}} \mean{\hat {\tilde x}_{1}}_{\theta}\nonumber \\ &=& 2 \sqrt{N_{\mathrm{LO}}} \, {\mathcal Re}\left(\int \widetilde{v}_{1}^{*}\overline{a}_{\theta} \ d^2\bo r dt\right) \end{eqnarray} using the orthonormality properties of the mode basis $\{ \widetilde{v}_{i}(\bo r,t) \}$. As \begin{equation} \overline{a}_{\theta}\approx\overline{a}_{\theta=0}+\theta \, \overline{a}_{\theta}^{\prime} , \end{equation} one finally gets, using the fact that $\int u_{\theta}^{*}u_{\theta}^{\prime}\ d^2\bo r dt$ is a purely imaginary number, \begin{equation}\label{homo1} \mean{\hat D}_{\theta}=\sqrt{N_{\mathrm{LO}}} \left( \sqrt{I_{0}} \theta+2\frac{N_{\theta}^{\prime}}{\sqrt{I_{0}}}\right). \end{equation} The homodyne signal, suitably calibrated, is therefore an estimator of $\theta$. Because of the additional term in (\ref{homo1}), the estimation is biased. We then introduce the unbiased estimator $\widetilde{\theta}$ of $\theta$, \begin{equation} \widetilde{\theta}= \frac{\mean{\hat{D}}_{\theta}-D_{0}}{\sqrt{N_{\mathrm{LO}}I_{0}}}, \end{equation} where $D_{0}$ is the mean value of $\hat{D}$ for $\theta=0$. Considering the case when the light state is squeezed in the detection mode by a factor $\sigma_\mathrm{min}^{2}$ and assuming a unity signal-to-noise ratio, the sensitivity of the homodyne measurement can be shown to be: \begin{equation} \delta\theta_\mathrm{homodyne}=\frac{\sigma_\mathrm{min}}{\sqrt{N_{\theta}\left(4\norm{u_{\theta}^{\prime}}^{2}+\left(N_{\theta}^{\prime}/N_{\theta}\right)^{2}\right)}} \end{equation} which is indeed equal to the QCR bound (\ref{QCRopt}) for a single measurement.
In conclusion, we have derived the expression of the ultimate limit for parameter estimation using pure Gaussian multimode states. We have shown that this limit can be reached with the help of a balanced homodyne detection scheme. We have also shown that multimode squeezing and multipartite entanglement are of no help, and that, in order to reach the ultimate limit in the most economical way, it is very important to optimally shape the mode in which the non-classical Gaussian state is put. These results are good news for experimentalists, because single-mode highly squeezed Gaussian states can be readily generated experimentally and because a simple homodyne detection scheme, easily achievable in a laboratory, is sufficient for reaching the best possible sensitivity.
\begin{acknowledgments} We acknowledge the financial support of the Future and Emerging Technologies (FET) programme within the Seventh Framework Programme for Research of the European Commission, under the FET-Open grant agreement HIDEAS, number FP7-ICT-221906, and of the ANR project QUALITIME. \end{acknowledgments}
\end{document}
\begin{document}
\textwidth 150mm \textheight 225mm \title{Spectral radius of graphs of given size with forbidden subgraphs\thanks{Supported by the National Natural Science Foundation of China (No. 12271439).}} \author{{Yuxiang Liu$^{a,b}$, Ligong Wang$^{a,b,}$\footnote{Corresponding author.}}\\ {\small $^{a}$ School of Mathematics and Statistics, Northwestern Polytechnical University,}\\{\small Xi'an, Shanxi 710129, P.R. China.}\\ {\small $^{b}$ Xi'an-Budapest Joint Research Center for Combinatorics, Northwestern Polytechnical University,}\\{\small Xi'an, Shanxi 710129, P.R. China.}\\ {\small E-mail: yxliumath@163.com, lgwangmath@163.com}} \date{} \maketitle \begin{center} \begin{minipage}{135mm} \vskip 0.3cm \begin{center} {\small {\bf Abstract}} \end{center} {\small Let $\rho(G)$ be the spectral radius of a graph $G$ with $m$ edges. Let $S_{m-k+1}^{k}$ be the graph obtained from $K_{1,m-k}$ by adding $k$ disjoint edges within its independent set. Nosal's theorem states that if $\rho(G)>\sqrt{m}$, then $G$ contains a triangle. Zhai and Shu showed that any non-bipartite graph $G$ with $m\geq26$ and $\rho(G)\geq\rho(S_{m}^{1})>\sqrt{m-1}$ contains a quadrilateral unless $G\cong S_{m}^{1}$ [M.Q. Zhai, J.L. Shu, Discrete Math. 345 (2022) 112630]. Wang proved that if $\rho(G)\geq\sqrt{m-1}$ for a graph $G$ with size $m\geq27$, then $G$ contains a quadrilateral unless $G$ is one of four exceptional graphs [Z.W. Wang, Discrete Math. 345 (2022) 112973]. In this paper, we show that any non-bipartite graph $G$ with size $m\geq51$ and $\rho(G)\geq\rho(S_{m-1}^{2})>\sqrt{m-2}$ contains a quadrilateral unless $G$ is one of three exceptional graphs. 
Moreover, we show that if $\rho(G)\geq\rho(S_{\frac{m+4}{2},2}^{-})$ for a graph $G$ with even size $m\geq74$, then $G$ contains a $C_{5}^{+}$ unless $G\cong S_{\frac{m+4}{2},2}^{-}$, where $C_{t}^{+}$ denotes the graph obtained from $C_{t}$ and $C_{3}$ by identifying an edge, $S_{n,k}$ denotes the graph obtained by joining each vertex of $K_{k}$ to $n-k$ isolated vertices, and $S_{n,k}^{-}$ denotes the graph obtained from $S_{n,k}$ by deleting an edge incident to a vertex of degree two, respectively. \vskip 0.1in \noindent {\bf Key Words}: Tur\'{a}n-type extremal problem, Spectral radius, Forbidden subgraph \vskip
0.1in \noindent {\bf AMS Subject Classification (1991)}: \ 05C50, 05C35} \end{minipage} \end{center} \section{Introduction} Throughout this paper, all graphs considered are always undirected and simple. Let $G$ be a graph of order $n$ with vertex set $V(G)=\{v_{1}, v_{2},\ldots, v_{n}\}$ and size $m$ with edge set $E(G)=\{e_{1}, e_{2}, \ldots, e_{m}\}$. The neighborhood of a vertex $u\in V(G)$ is denoted by $N_{G}(u)$. Let $N_{G}[u]=N_{G}(u)\cup\{u\}$, which is called the closed neighborhood of $u$. Let $d_{G}(u)$ be the degree of a vertex $u$. For the sake of simplicity, we omit all the subscripts if $G$ is clear from the context. The adjacency matrix of $G$ is an $n\times n$ matrix $A(G)$ whose $(i,j)$-entry is $1$ if $v_{i}$ is adjacent to $v_{j}$ and $0$ otherwise. The spectral radius $\rho(G)$ of $G$ is the largest eigenvalue of its adjacency matrix $A(G)$.
Let $P_{n}, C_{n}, K_{1,n}$ and $K_{a,b}$ be the path of order $n$, the cycle of order $n$, the star graph of order $n+1$ and the complete bipartite graph with two parts of sizes $a, b$, respectively. Let $S_{n}^{k}$ be the graph obtained from $K_{1,n-1}$ by adding $k$ disjoint edges within its independent set. Let $S_{n,k}$ be the graph obtained by joining each vertex of $K_{k}$ to $n-k$ isolated vertices. Let $S_{n,k}^{-}$ be the graph obtained from $S_{n,k}$ by deleting an edge incident to a vertex of degree two. Let $C_{t}^{+}$ be the graph obtained from $C_{t}$ and $C_{3}$ by identifying an edge.
Given a graph $F$, a graph $G$ is $F$-free if it does not contain $F$ as a subgraph. Let $\mathcal{G}(m, F)$ denote the family of $F$-free graphs with $m$ edges and without isolated vertices. A classic problem in extremal graph theory, known as Tur\'{a}n's problem, is to determine the maximum number of edges in an $F$-free graph of order $n$. Nikiforov \cite{Ni2} posed a spectral version of Tur\'{a}n's problem as follows: what is the maximum spectral radius of an $F$-free graph of order $n$? This spectral Tur\'{a}n-type problem has received much attention in the past decades. For example, some new results were found in \cite{ChLZ,CiDT,CiFTZ,LiuBW, ZhL}. For more results on spectral extremal graph theory, we refer the reader to the surveys \cite{ChZ,FSi,LiLF,Ni3} and references therein. In contrast, the spectral Tur\'{a}n-type problem for graphs of given size asks for the maximum spectral radius of an $F$-free graph with $m$ edges. Equivalently, what lower bound on $\rho(G)$ forces a graph $G$ of size $m$ to contain a subgraph $F$? Nosal \cite{Nos} first showed that if $\rho(G)>\sqrt{m}$, then $G$ contains a triangle, which is well known as a spectral version of Mantel's theorem. Very recently, Lin, Ning and Wu \cite{LNW} showed that if $\rho(G)\geq\sqrt{m-1}$ for a non-bipartite graph $G$ of size $m$, then $G$ contains a triangle unless $G\cong C_{5}$. Zhai and Shu \cite{ZhS} showed that if $\rho(G)\geq\rho(SK_{2,\frac{m-1}{2}})$ for a non-bipartite graph $G$ of size $m$, then $G$ contains a triangle unless $G\cong SK_{2,\frac{m-1}{2}}$, where $SK_{2,\frac{m-1}{2}}$ is the graph obtained from $K_{2, \frac{m-1}{2}}$ by subdividing an edge. Wang \cite{Wan} showed that if $\rho(G)\geq \sqrt{m-2}$ for a non-bipartite graph $G$ of size $m\geq26$, then $G$ contains a triangle unless $G$ is one of some exceptional graphs. For more details, one may refer to \cite{GLZH,LiPe,LinG} and references therein.
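The strict inequality $\rho(S_{m}^{1})>\sqrt{m-1}$ appearing in these results reflects the fact that $S_{m}^{1}$ is a connected proper supergraph of the star $K_{1,m-1}$, whose spectral radius is exactly $\sqrt{m-1}$. A quick numerical check (our illustrative sketch, not part of the paper):

```python
import numpy as np

def rho_S1(m):
    # S_m^1: star K_{1,m-1} (center 0, leaves 1..m-1) plus one edge
    # between two leaves, so m edges in total on m vertices.
    A = np.zeros((m, m))
    for leaf in range(1, m):
        A[0, leaf] = A[leaf, 0] = 1
    A[1, 2] = A[2, 1] = 1
    return max(np.linalg.eigvalsh(A))

for m in [26, 30, 51]:
    assert rho_S1(m) > np.sqrt(m - 1)
```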
\noindent\begin{theorem}\label{th:ch-1.1.}{\rm(}$\cite{Wan}${\rm)} Let $G$ be a non-bipartite and connected graph of size $m\geq26$. If $\rho(G)\geq\rho(S_{m}^{1})>\sqrt{m-1}$, then $G$ contains a quadrilateral unless $G\cong S_{m}^{1}$. \end{theorem}
\noindent\begin{theorem}\label{th:ch-1.2.}{\rm(}$\cite{Wan}${\rm)} Let $G$ be a graph of size $m\geq27$. If $\rho(G)\geq\sqrt{m-1}$, then $G$ contains a quadrilateral unless $G$ is one of these graphs (with possibly isolated vertices): $K_{1,m}, S_{m}^{1}, S_{m}^{e}$, or $K_{1,m-1}\cup P_{2}$, where $S_{m}^{e}$ is the graph obtained by attaching a pendent vertex to a pendent vertex of $K_{1,m-1}$. \end{theorem}
\noindent\begin{theorem}\label{th:ch-1.3.} Let $G$ be a non-bipartite graph of size $m\geq51$. If $\rho(G)\geq\rho(S_{m-1}^{2})>\sqrt{m-2}$, then $G$ contains a quadrilateral unless $G$ is one of the following graphs: $S_{m}^{1}$, $C_{5}\bullet K_{1,m-5}$, or $S_{m-1}^{2}$, where $C_{5}\bullet K_{1,m-5}$ is the graph obtained by attaching a vertex of $C_{5}$ to the center vertex of $K_{1,m-5}$. \end{theorem}
Recently, Li, Shu and Wei \cite{LiSW} characterized the extremal graph of odd size $m$ having the largest spectral radius in $\mathcal{G}(m, C_{4}^{+})$ and $\mathcal{G}(m, C_{5}^{+})$, respectively. We list them as follows.
\noindent\begin{theorem}\label{th:ch-1.4.}{\rm(}$\cite{LiSW}${\rm)} {\rm(}$i${\rm)} If $G\in\mathcal{G}(m,C_{4}^{+})$ and $m$($\geq8$) is odd, then $\rho(G)\leq\frac{1+\sqrt{4m-3}}{2}$ and equality holds if and only if $G\cong S_{\frac{m+3}{2},2}$;
{\rm(}$ii${\rm)} If $G\in\mathcal{G}(m,C_{5}^{+})$ and $m$($\geq22$) is odd, then $\rho(G)\leq\frac{1+\sqrt{4m-3}}{2}$ and equality holds if and only if $G\cong S_{\frac{m+3}{2},2}$. \end{theorem}
More recently, Fang and You \cite{FaY} characterized the extremal graph of even size $m$ having the largest spectral radius in $\mathcal{G}(m, C_{4}^{+})$, stated as Theorem \ref{th:ch-1.5.}.
\noindent\begin{theorem}\label{th:ch-1.5.}{\rm(}$\cite{FaY}${\rm)} If $G\in\mathcal{G}(m,C_{4}^{+})$ and $m$($\geq 22$) is even, then $\rho(G)\leq\rho(S_{\frac{m+4}{2},2}^{-})$, and equality holds if and only if $G\cong S_{\frac{m+4}{2},2}^{-}$. \end{theorem} Motivated by Theorems \ref{th:ch-1.4.} and \ref{th:ch-1.5.}, we will characterize the extremal graph of even size $m$ having the maximum spectral radius in $\mathcal{G}(m,C_{5}^{+})$ as follows.
\noindent\begin{theorem}\label{th:ch-1.6.} If $G\in\mathcal{G}(m, C_{5}^{+})$ and $m$($\geq74$) is even, then $\rho(G)\leq \rho(S_{\frac{m+4}{2},2}^{-})$, and equality holds if and only if $G\cong S_{\frac{m+4}{2},2}^{-}$. \end{theorem}
\section{Preliminary}
In this section, we introduce some lemmas and notations. Let $X$ be the Perron vector of $G$ with coordinate $x_{v}$ corresponding to the vertex $v\in V(G)$, and let $u^{\ast}$ be a vertex with $x_{u^{\ast}}=\max\{x_{v}\mid v\in V (G)\}$. Let $N_{i}(u)=\{v\mid v\in N(u)$, $d_{N(u)}(v)=i\}$ and $N_{i}^{2}(u)=\{w\mid w\in N^{2}(u)$, $d_{N_{i}(u)}(w)\geq1\}$. Let $N[u]=N(u)\cup\{u\}$ and $W=V(G)\setminus N[u]$. For a subset $S\subseteq V(G)$ and a vertex $v\in V(G)$, let $N_{S}(v)=N(v)\cap S$ and $d_{S}(v)=|N_{S}(v)|$. Let $G[S]$ be the subgraph of $G$ induced by $S$. Write $\rho=\rho(G)$. For two vertex subsets $S$ and $T$ of $V(G)$ (where $S\cap T$ may not be empty), let $e(T,S)$ denote the number of edges with one endpoint in $S$ and the other in $T$; $e(S,S)$ is abbreviated as $e(S)$.
\noindent\begin{lemma}\label{le:ch-2.1.} {\rm(}$\cite{ZhWF1}${\rm)} Let $u, v$ be two distinct vertices of a connected graph $G$, $\{v_{i}| i=1,2, \ldots, s\}\subseteq N(v)\setminus N(u)$, and $X=(x_{1}, x_{2}, \ldots, x_{n})^{T}$ be the Perron vector of $G$. Let $G^{\prime}=G-\sum_{i=1}^{s}v_{i}v+\sum_{i=1}^{s}v_{i}u$. If $x_{u}\geq x_{v}$, then $\rho(G)< \rho(G^{\prime})$. \end{lemma}
\noindent\begin{lemma}\label{le:ch-2.2.}{\rm(}$\cite{Ni4}${\rm)} Let $\rho(S^{k}_{m-k+1})$ be the largest root of the polynomial $f(x)=x^{3}-x^{2}-(m-k)x+m-3k$. Then $\sqrt{m-k}<\rho(S^{k}_{m-k+1})\leq\sqrt{m-k+1}$ for $1\leq k\leq \frac{m}{3}$ and $m\geq4k^{2}+5k$. \end{lemma} \noindent {\bf Proof.} We have $f^{\prime}(x)>0$ for $x\geq\sqrt{m-k}$, $f(\sqrt{m-k})=-2k<0$, and $f(\sqrt{m-k+1})=\sqrt{m-k+1}-2k-1\geq0$ for $m\geq4k^{2}+5k$. Thus, we have $\sqrt{m-k}<\rho(S^{k}_{m-k+1})\leq\sqrt{m-k+1}$ for $1\leq k\leq \frac{m}{3}$ and $m\geq4k^{2}+5k$, as desired. $\qedsymbol$
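The bounds of Lemma \ref{le:ch-2.2.} can also be checked numerically. The following minimal Python sketch (an illustration only, not part of the proof) bisects the largest root of $f$ for a few admissible pairs $(m,k)$ and verifies the two endpoint evaluations used above.

```python
# Numerical sanity check of Lemma 2.2: the largest root of
# f(x) = x^3 - x^2 - (m-k)x + m - 3k lies in (sqrt(m-k), sqrt(m-k+1)]
# whenever 1 <= k <= m/3 and m >= 4k^2 + 5k.
import math

def f(x, m, k):
    return x**3 - x**2 - (m - k)*x + m - 3*k

def largest_root(m, k):
    # f is increasing for x >= sqrt(m-k) and f(sqrt(m-k)) = -2k < 0,
    # so bisection on [sqrt(m-k), sqrt(m-k) + m] locates the largest root.
    lo, hi = math.sqrt(m - k), math.sqrt(m - k) + m
    for _ in range(200):
        mid = (lo + hi) / 2
        if f(mid, m, k) < 0:
            lo = mid
        else:
            hi = mid
    return lo

for m, k in [(51, 2), (74, 1), (200, 3)]:
    assert m >= 4*k**2 + 5*k and k <= m / 3     # hypotheses of the lemma
    r = largest_root(m, k)
    assert math.sqrt(m - k) < r <= math.sqrt(m - k + 1)
    # endpoint value from the proof: f(sqrt(m-k)) = -2k
    assert abs(f(math.sqrt(m - k), m, k) + 2*k) < 1e-9
```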
\noindent\begin{definition}\label{de:ch-2.3.}{\rm(}$\cite{CvRS}${\rm)} Given a graph $G$, the vertex partition $\Pi$: $V(G)=V_{1}\cup V_{2} \cup \ldots \cup V_{k}$ is said to be an equitable partition if, for each $u\in V_{i}$, $|V_{j}\cap N(u)|=b_{ij}$ is a constant depending only on $i,j$ ($1\leq i,j\leq k$). The matrix $B_{\Pi}=(b_{ij})$ is called the quotient matrix of $G$ with respect to $\Pi$. \end{definition}
\noindent\begin{lemma}\label{le:ch-2.4.}{\rm(}$\cite{CvRS}${\rm)} Let $\Pi$: $V(G)=V_{1}\cup V_{2}\cup \ldots \cup V_{k}$ be an equitable partition of $G$ with quotient matrix $B_{\Pi}$. Then $\det(xI-B_{\Pi}) \mid \det(xI-A(G))$. Furthermore, the largest eigenvalue of $B_{\Pi}$ is equal to the spectral radius of $G$. \end{lemma}
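As a concrete illustration of Definition \ref{de:ch-2.3.} and Lemma \ref{le:ch-2.4.} (a numerical sketch, not part of the text's results): for $K_{2,3}$ with the equitable partition into its two parts, the quotient matrix has rows $(0,3)$ and $(2,0)$, and its largest eigenvalue $\sqrt{6}$ equals $\rho(K_{2,3})$. The power iteration below shifts by the identity because $K_{2,3}$ is bipartite, so its spectrum is symmetric about $0$.

```python
# Verify Lemma 2.4 on K_{2,3}: the largest eigenvalue of the quotient
# matrix of an equitable partition equals the spectral radius of the graph.
import math

def spectral_radius(mat, iters=3000):
    # Power iteration on mat + I; the shift avoids the oscillation caused
    # by the symmetric (+/-) spectrum of bipartite graphs.
    n = len(mat)
    M = [[mat[i][j] + (1 if i == j else 0) for j in range(n)] for i in range(n)]
    v = [1.0] * n
    lam = 1.0
    for _ in range(iters):
        w = [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w)
        v = [x / lam for x in w]
    return lam - 1.0

# adjacency matrix of K_{2,3}: part {0,1} versus part {2,3,4}
A = [[0] * 5 for _ in range(5)]
for i in (0, 1):
    for j in (2, 3, 4):
        A[i][j] = A[j][i] = 1

# quotient matrix: each vertex of part 1 has 3 neighbors in part 2, and
# each vertex of part 2 has 2 neighbors in part 1
B = [[0, 3], [2, 0]]
rho_A = spectral_radius(A)
rho_B = spectral_radius(B)
assert abs(rho_A - math.sqrt(6)) < 1e-6
assert abs(rho_B - math.sqrt(6)) < 1e-6
```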
Throughout this paper, the following equalities are used.\\
Since $A(G)X=\rho X$, we have
\begin{equation}\label{eq:ch-1} \rho x_{u}=\sum_{v\in N_{0}(u)}x_{v}+\sum_{v\in N(u)\backslash N_{0}(u)}x_v. \end{equation} Since $\rho^{2}$ is the spectral radius of $A^{2}(G)$, we have \begin{equation}\label{eq:ch-2} \rho^{2}x_{u}=d(u)x_{u}+\sum_{v\in N(u)\backslash N_{0}(u)}d_{N(u)}(v)x_{v}+\sum_{w\in N^{2}(u)}d_{N(u)}(w)x_w. \end{equation} Combining with \eqref{eq:ch-1} and \eqref{eq:ch-2}, we have \begin{equation}\label{eq:ch-3} (\rho^{2}-\rho)x_{u}=d(u)x_{u}+\sum_{v\in N(u)\backslash N_{0}(u)}(d_{N(u)}(v)-1)x_{v}+\sum_{w\in N^{2}(u)}d_{N(u)}(w)x_w-\sum_{v\in N_{0}(u)}x_{v}. \end{equation}
\section{Proof of Theorem 1.3.}
Let $G$ be a non-bipartite graph with $\rho(G)\geq\rho(S_{m-1}^{2})>\sqrt{m-2}\geq7$ for $m\geq51$. Recall that $W=V(G)\backslash N[u^{\ast}]$. Assume that $G$ contains no $C_{4}$. Then $N(u^{\ast})=N_{1}(u^{\ast})\cup N_{0}(u^{\ast})$, $N_{W}(u)\cap N_{W}(v)=\emptyset$ for any two distinct vertices $u,v\in N(u^{\ast})$, and $d_{N(u^{\ast})}(w)=1$ for any vertex $w\in N^{2}(u^{\ast})$. Write $E(G[N_{1}(u^{\ast})])=\{u_{2i-1}u_{2i}\mid i=1,2,\ldots, e(N_{1}(u^{\ast}))\}$. By \eqref{eq:ch-2}, we have
\begin{equation}\label{eq:ch-4} \begin{split} \rho^{2}x_{u^{\ast}}&=d(u^{\ast})x_{u^{\ast}}+\sum_{v\in N_{1}(u^{\ast})}x_{v}+\sum_{w\in N^{2}(u^{\ast})}d_{N(u^{\ast})}(w)x_w\\ &\leq d(u^{\ast})x_{u^{\ast}}+ \sum_{u_{2i-1}u_{2i}\in E(G[N_{1}(u^{\ast})])}(x_{u_{2i-1}}+x_{u_{2i}})+e(N(u^{\ast}),N^{2}(u^{\ast}))x_{u^{\ast}}. \end{split} \end{equation}
Since $G$ is $C_{4}$-free, any two vertices in $N(u^{\ast})$ have no common neighbor in $N^{2}(u^{\ast})$. Hence,
\begin{equation}\label{eq:ch-5} e(W)=\frac{1}{2}\sum_{w\in W}d_{W}(w) \geq\frac{1}{2}\sum_{w\in N^{2}(u^{\ast})}d_{W}(w) \geq\frac{1}{2}|N^{2}(u^{\ast})| \geq\frac{1}{2}\sum_{u\in N_{1}(u^{\ast})}d_{W}(u). \end{equation}
For each $u_{2i-1}u_{2i}\in E(G[N_{1}(u^{\ast})])$, we have $\rho x_{u_{2i-1}}=x_{u_{2i}}+x_{u^{\ast}}+\sum_{w\in N_{W}(u_{2i-1})}x_{w}$ and $\rho x_{u_{2i}}=x_{u_{2i-1}}+x_{u^{\ast}}+\sum_{w\in N_{W}(u_{2i})}x_{w}$. It follows that \begin{equation}\label{eq:ch-6} (\rho-1)(x_{u_{2i-1}}+x_{u_{2i}})\leq(2+d_{W}(u_{2i-1})+d_{W}(u_{2i}))x_{u^{\ast}}. \end{equation}
Recall that $\rho(G)\geq\rho(S_{m-1}^{2})>\sqrt{m-2}\geq7$ for $m\geq51$. Combining with \eqref{eq:ch-5} and \eqref{eq:ch-6}, we obtain that
\begin{equation}\label{eq:ch-7} \begin{aligned} \sum_{u_{2i-1}u_{2i}\in E(G[N_{1}(u^{\ast})])}(x_{u_{2i-1}}+x_{u_{2i}}) &\leq\frac{1}{\rho-1}\sum_{u_{2i-1}u_{2i}\in E(G[N_{1}(u^{\ast})])}(2+d_{W}(u_{2i-1})+d_{W}(u_{2i}))x_{u^{\ast}}\\ &\leq \frac{e(N_{1}(u^{\ast}))}{3}x_{u^{\ast}}+\frac{1}{6}\sum_{u_{2i-1}u_{2i}\in E(G[N_{1}(u^{\ast})])}(d_{W}(u_{2i-1})+d_{W}(u_{2i}))x_{u^{\ast}}\\ &=\frac{e(N_{1}(u^{\ast}))}{3}x_{u^{\ast}}+\frac{1}{6}\sum_{u\in N_{1}(u^{\ast})}d_{W}(u)x_{u^{\ast}}\\ &\leq\frac{e(N_{1}(u^{\ast}))+e(W)}{3}x_{u^{\ast}}. \end{aligned} \end{equation}
Combining with \eqref{eq:ch-4} and \eqref{eq:ch-7}, we get
\begin{equation}\label{eq:ch-8} \begin{split} \rho^{2}x_{u^{\ast}}&\leq(d(u^{\ast})+\frac{1}{3}(e(N_{1}(u^{\ast}))+e(W))+e(N(u^{\ast}),N^{2}(u^{\ast})))x_{u^{\ast}}\\ &=(m-\frac{2}{3}(e(N_{1}(u^{\ast}))+e(W)))x_{u^{\ast}}. \end{split} \end{equation}
Note that $\rho(G)\geq\rho(S_{m-1}^{2})>\sqrt{m-2}\geq7$. We get $e(N_{1}(u^{\ast}))+e(W)<3$, i.e., $e(N_{1}(u^{\ast}))+e(W)\leq2$. Since $G$ is a non-bipartite graph, we have $e(N_{1}(u^{\ast}))+e(W)\neq0$. Hence $1\leq e(N_{1}(u^{\ast}))+e(W)\leq2$. Now we consider the following two cases.
{\bf Case 1.} $e(W)+e(N_{1}(u^{\ast}))=2$.
In this case, we discuss the following three subcases.
{\bf Subcase 1.1.} $e(N_{1}(u^{\ast}))=2$.
In this case, we have $e(W)=0$. Suppose that $W\neq\emptyset$ and take a vertex $w\in W$. Since $G$ contains no $C_{4}$, we have $d(w)=1$. Let $u\in N_{N(u^{\ast})}(w)$ and $G^{\prime}=G-uw+u^{\ast}w$. By Lemma \ref{le:ch-2.1.}, we have $\rho(G^{\prime})>\rho(G)$, and iterating this operation over all vertices of $W$ produces $S_{m-1}^{2}$ with $\rho(S_{m-1}^{2})>\rho(G)$, a contradiction. Thus $W=\emptyset$ and $G\cong S_{m-1}^{2}$.
{\bf Subcase 1.2.} $e(N_{1}(u^{\ast}))=1$.
In this case, we have $e(W)=1$. Let $w_{1}w_{2}$ be the unique edge in $E(G[W])$. If there exists $u_{1}\in N_{N(u^{\ast})}(w_{1})\cap N_{N(u^{\ast})}(w_{2})$, then $G\cong G_{0}$ or $G\cong G_{1}$ (see Fig. 1). Note that $S_{m-1}^{2}=G_{i}-\{u_{1}w_{1}, u_{1}w_{2}\}+\{u^{\ast}w_{1}, u^{\ast}w_{2}\}$ for each $i\in \{0,1\}$. By Lemma \ref{le:ch-2.1.}, we have $\rho(S_{m-1}^{2})>\rho(G_{i})$ for each $i\in \{0,1\}$, a contradiction. Thus $N_{N(u^{\ast})}(w_{1})\cap N_{N(u^{\ast})}(w_{2})=\emptyset$. Without loss of generality, let $u_{1}\in N_{N(u^{\ast})}(w_{1})$ and $u_{2}\in N_{N(u^{\ast})}(w_{2})$. Then $G\cong G_{2}$ or $G\cong G_{3}$ (see Fig. 1). Note that $S_{m-1}^{2}=G_{i}-\{u_{1}w_{1}, u_{2}w_{2}\}+\{u^{\ast}w_{1}, u^{\ast}w_{2}\}$ for each $i\in \{2,3\}$. By Lemma \ref{le:ch-2.1.}, we have $\rho(S_{m-1}^{2})>\rho(G_{i})$ for each $i\in \{2,3\}$, a contradiction.
{\bf Subcase 1.3.} $e(W)=2$.
In this case, we obtain that $e(N_{1}(u^{\ast}))=0$, and $G$ possibly contains one of the subgraphs in Fig. 2. If $G$ contains $C_{6}$ as a subgraph, then $G$ is a bipartite graph, a contradiction. Suppose that $G$ contains $C_{5}^{+}$ as a subgraph. Note that $S_{m-1}^{2}=C_{5}^{+}-\{w_{2}w_{3}, u_{1}w_{1}, u_{1}w_{2}\}+\{w_{1}u^{\ast}, w_{2}u^{\ast},w_{3}u^{\ast}\}$. By Lemma \ref{le:ch-2.1.}, we have $\rho(S_{m-1}^{2})>\rho(C_{5}^{+})$, a contradiction. Suppose that $G$ contains $G_{4}$ as a subgraph. Note that $S_{m-1}^{2}=G_{4}-\{w_{1}w_{2},w_{2}w_{3}, u_{3}w_{2}\}+\{w_{1}u^{\ast}, w_{2}u^{\ast},w_{3}u^{\ast}\}$. By Lemma \ref{le:ch-2.1.}, we have $\rho(S_{m-1}^{2})>\rho(G_{4})$, a contradiction. For the remaining graphs $G_{i}$, $i\in\{5,6,7,8,9\}$, a similar operation yields the same contradiction.
{\bf Case 2.} $e(W)+e(N_{1}(u^{\ast}))=1$.
In this case, we discuss the following two subcases.
{\bf Subcase 2.1.} $e(N_{1}(u^{\ast}))=1$.
In this case, we have $e(W)=0$. Suppose that $W\neq\emptyset$ and take a vertex $w_{1}\in W$. Since $G$ contains no $C_{4}$, we have $d(w_{1})=1$. Let $u_{1}\in N_{N_{1}(u^{\ast})}(w_{1})$. Then $G\cong G_{10}$ (see Fig. 3). By Lemma \ref{le:ch-2.4.}, $\rho(G_{10})$ is the largest root of the equation $g(x)=0$, where $$g(x)=x^{4}-mx^{2}-2x+2m-7.$$ Since $g(\sqrt{m-2})=-2\sqrt{m-2}-3<0$ and $g^{\prime}(x)>0$ for $x\geq\sqrt{m-2}$, we have $\sqrt{m-2}<\rho(G_{10})$. By Lemma \ref{le:ch-2.2.}, $\rho(S_{m-1}^{2})$ is the largest root of the equation $f(x)=0$, where $$f(x)=x^{3}-x^{2}-(m-2)x+m-6.$$ Let $$h(x)=g(x)-xf(x)=x^{3}-2x^{2}-(m-4)x+2m-7.$$ By calculation, $h^{\prime}(x)>0$ for $x\geq\sqrt{m-2}$ and $h(\sqrt{m-2})=2\sqrt{m-2}-3>0$ for $m\geq51$. Thus $g(x)>xf(x)$ for $x\geq\sqrt{m-2}$, which gives $\rho(G_{10})<\rho(S_{m-1}^{2})$, a contradiction. Thus $W=\emptyset$ and $G\cong S_{m}^{1}$. By Lemma \ref{le:ch-2.1.}, $\rho(S_{m}^{1})>\rho(S_{m-1}^{2})$, as desired.
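As an informal numerical cross-check of Subcase 2.1 (not part of the proof), one can bisect the largest roots of the two displayed polynomials directly and confirm $\rho(G_{10})<\rho(S_{m-1}^{2})$; the sketch below assumes only $g$ and $f$ as printed above.

```python
# Compare the largest roots of g (for G_10) and f (for S^2_{m-1}):
# g(x) = x^4 - m x^2 - 2x + 2m - 7,  f(x) = x^3 - x^2 - (m-2)x + m - 6.
import math

def largest_root(p, lo, hi, iters=200):
    # bisection; p is increasing on [lo, hi] with p(lo) < 0 <= p(hi)
    for _ in range(iters):
        mid = (lo + hi) / 2
        if p(mid) < 0:
            lo = mid
        else:
            hi = mid
    return lo

for m in (51, 74, 120):
    g = lambda x, m=m: x**4 - m*x**2 - 2*x + 2*m - 7
    f = lambda x, m=m: x**3 - x**2 - (m - 2)*x + m - 6
    s = math.sqrt(m - 2)
    rho_g = largest_root(g, s, s + m)
    rho_f = largest_root(f, s, s + m)
    assert s < rho_g < rho_f          # sqrt(m-2) < rho(G_10) < rho(S^2_{m-1})
```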
{\bf Subcase 2.2.} $e(W)=1$.
In this case, we have $e(N_{1}(u^{\ast}))=0$ and $G\cong C_{5}\bullet K_{1,m-5}$, $G\cong G_{11}$, or $G\cong G_{12}$ (see Fig. 3). By Lemma \ref{le:ch-2.4.}, $\rho(C_{5}\bullet K_{1,m-5})$, $\rho(G_{11})$ and $\rho(G_{12})$ are the largest roots of the equations $h_{1}(x)=0$, $h_{2}(x)=0$ and $h_{3}(x)=0$, respectively, where \begin{equation}\label{eq:ch-9} \begin{split} h_{1}(x)&= x^{4}-x^{3}-(m-2)x^{2}-(m-3)x+m-5,\\ h_{2}(x)&=x^{5}+x^{4}-(m-1)x^{3}+x^{2}+(3m-15)x+3m-17,\\ h_{3}(x)&=x^{4}-x^{3}-(m-1)x^{2}-(m-4)x+2m-8. \end{split} \end{equation} By Lemma \ref{le:ch-2.2.}, $\rho(S^{2}_{m-1})$ is the largest root of the equation $f(x)=0$. Since $$h_{1}(x)-xf(x)=-(2m-9)x+m-5<0$$ for $x\geq\sqrt{m-2}$, we have $\rho(C_{5}\bullet K_{1,m-5})> \rho(S^{2}_{m-1})$, as desired. Since $h_{2}(\sqrt{m-2})>0$ and $h_{2}^{\prime}(x)>0$ for $x>\sqrt{m-2}$, we have $\rho(G_{11})< \rho(S^{2}_{m-1})$, a contradiction. Since $h_{3}(\sqrt{m-2})=m-6-2\sqrt{m-2}>0$ and $h_{3}^{\prime}(x)>0$ for $x>\sqrt{m-2}$, we have $\rho(G_{12})<\sqrt{m-2}$, a contradiction.
This completes the proof. $\blacksquare$
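As an informal numerical cross-check of Subcase 2.2 above (not part of the proof), the comparison $\rho(C_{5}\bullet K_{1,m-5})>\rho(S_{m-1}^{2})$ can be verified by bisecting the largest roots of $h_{1}$ and $f$ directly; the sketch below assumes only the two polynomials displayed in the proof.

```python
# Compare the largest roots of h1 (for C_5 . K_{1,m-5}) and f (for S^2_{m-1}).
import math

def largest_root(p, lo, hi, iters=200):
    # bisection; p is increasing on [lo, hi] with p(lo) < 0 <= p(hi)
    for _ in range(iters):
        mid = (lo + hi) / 2
        if p(mid) < 0:
            lo = mid
        else:
            hi = mid
    return lo

for m in (51, 74, 150):
    h1 = lambda x, m=m: x**4 - x**3 - (m - 2)*x**2 - (m - 3)*x + m - 5
    f = lambda x, m=m: x**3 - x**2 - (m - 2)*x + m - 6
    s = math.sqrt(m - 2)
    # rho(C_5 . K_{1,m-5}) > rho(S^2_{m-1}), as claimed in Subcase 2.2
    assert largest_root(h1, s, s + m) > largest_root(f, s, s + m)
```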
\begin{figure}
\caption{Graphs $G_{0}-G_{3}$ of Subcase 1.2.}
\label{Fig. 1.}
\end{figure}
\begin{figure}
\caption{Graphs $C_{6}, C_{5}^{+}$ and $G_{4}-G_{9}$ of Subcase 1.3.}
\label{2.}
\end{figure}
\begin{figure}
\caption{Graphs $G_{10}-G_{12}$ and $C_{5}\bullet K_{1,m-5}$ of Subcases 2.1 and 2.2.}
\label{3.}
\end{figure}
\section{Proof of Theorem 1.6.}
Let $G^{\ast}$ be the extremal graph with maximum spectral radius in $\mathcal{G}(m,F)$ for a fixed $F$. Let $\rho^{\ast}=\rho(G^{\ast})$ and let $X^{\ast}$ be the Perron vector of $G^{\ast}$ with coordinate $x_{v}$ corresponding to the vertex $v\in V(G^{\ast})$. A vertex $u^{\ast}$ of $G^{\ast}$ is said to be an extremal vertex if $x_{u^{\ast}}=\max\{x_{v}\mid v\in V(G^{\ast})\}$. Recall that $W=V(G^{\ast})\backslash N[u^{\ast}]$. \noindent\begin{lemma}\label{le:ch-4.1.}{\rm(}$\cite{ZhLS}${\rm)} If $F$ is a $2$-connected graph and $u^{\ast}$ is an extremal vertex of $G^{\ast}$, then the following statements hold.
{\rm(}$i${\rm)} $G^{\ast}$ is connected.
{\rm(}$ii${\rm)} There exists no cut vertex in $V(G^{\ast})\setminus\{u^{\ast}\}$ and hence $d(u)\geq2$ for any $u\in V(G^{\ast})\setminus N[u^{\ast}]$.
{\rm(}$iii${\rm)} If $F$ is $C_{4}$-free, then $N(u_{1})=N(u_{2})$ for any two non-adjacent vertices $u_{1}, u_{2}$ of degree two. \end{lemma} \noindent\begin{lemma}\label{le:ch-4.2.}{\rm(}$\cite{BrH}${\rm)} Let $G$ be a bipartite graph of size $m$. Then $\rho(G)\leq\sqrt{m}$, with equality if and only if $G$ is a disjoint union of a complete bipartite graph and isolated vertices. \end{lemma}
\noindent\begin{lemma}\label{le:ch-4.3.}{\rm(}$\cite{MiLH}${\rm)} $\rho(S_{\frac{m+4}{2},2}^{-})>\frac{1+\sqrt{4m-5}}{2}$ for $m\geq6$.
\end{lemma}
\noindent\begin{lemma}\label{le:ch-4.4.}{\rm(}$\cite{MiLH}${\rm)} Let $X=(x_{1}, x_{2}, \ldots, x_{n})^{T}$ be the Perron vector of a connected graph $G$ of size $m$ and let $x_{u^{\star}}=\max\{x_{v}\mid v\in V(G)\}$. If $\rho(G)> \frac{1+\sqrt{4m-5}}{2}$, then we have the following results.
{\rm(}$i${\rm)} \begin{equation}\label{eq:ch-10} \sum_{v\in N(u^{\star})\setminus N_{0}(u^{\star})}(d_{N(u^{\star})}(v)-1)x_{v}>(e(W)+e(N(u^{\star}))-\frac{3}{2})x_{u^{\star}}, \end{equation} and \begin{equation}\label{eq:ch-11}
e(W)<e(N(u^{\star}))-|N(u^{\star})\setminus N_{0}(u^{\star})|+\frac{3}{2} \end{equation}
{\rm(}$ii${\rm)} If there exists a vertex $v$ of $G$ such that $x_{v}< (1-\beta)x_{u^{\star}}$ where $0<\beta<1$, then \begin{equation}\label{eq:ch-12}
e(W)<e(N(u^{\star}))-|N(u^{\star})\setminus N_{0}(u^{\star})|+\frac{3}{2}-\beta d_{N(u^{\star})}(v), \mbox{for } v\in N^{2}(u^{\star})\subseteq W, \end{equation} \begin{equation}\label{eq:ch-13}
e(W)<e(N(u^{\star}))-|N(u^{\star})\setminus N_{0}(u^{\star})|+\frac{3}{2}-\beta (d_{N(u^{\star})}(v)-1), \mbox{for } v\in N(u^{\star})\setminus N_{0}(u^{\star}). \end{equation} {\rm(}$iii${\rm)} If there exists a subset $S\subseteq N(u^{\star})\backslash N_{0}(u^{\star})$ such that $x_{v}< (1-\beta)x_{u^{\star}}$ for any $v\in S$ and $0<\beta<1$, then \begin{equation}\label{eq:ch-14}
e(W)<e(N(u^{\star}))-|N(u^{\star})\setminus N_{0}(u^{\star})|+\frac{3}{2}-\beta\sum_{v\in S}(d_{N(u^{\star})}(v)-1). \end{equation} \end{lemma}
\noindent\begin{lemma}\label{le:ch-4.5.} Let $G^{\ast}$ be a $C_{5}^{+}$-free graph with $u^{\ast}\in V(G)$ and let $L$ be a component of $G^{\ast}[N(u^{\ast})]$. Then $L$ is one of the following graphs.
{\rm(}$i${\rm)} a star $K_{1,r}$ for $r\geq0$, where $K_{1,0}$ is a singleton component.
{\rm(}$ii${\rm)} a double star $D_{a,b}$ for $a,b\geq1$.
{\rm(}$iii${\rm)} a copy of $S_{r+1}^{1}$ for $r\geq2$, where $S_{3}^{1}$ is a triangle for $r=2$.
{\rm(}$iv${\rm)} a graph with $C_{4}$ as its spanning subgraph, that is, $C_{4}$, $C_{3}^{+}$ or $K_{4}$. \end{lemma}
\noindent {\bf Proof.} Since $G^{\ast}$ contains no $C_{5}^{+}$, the subgraph $G^{\ast}[N(u^{\ast})]$ contains no path of length more than $3$ and no cycle of length more than $4$. If $L$ contains $P_{1}$ as a subgraph, then $L\cong K_{1,0}$. If $L$ contains $P_{2}$ as a subgraph, then $L\cong K_{1,1}$ or $L\cong K_{i}$ for some $i\in\{3,4\}$. If $L$ contains $P_{3}$ as a subgraph, then $L\cong C_{3}^{+}, K_{1,r}$ or $S_{r+1}^{1}$ for some $r\geq2$. If $L$ contains $P_{4}$ as a subgraph, then $L\cong D_{a,b}$ for some $a,b\geq1$, as desired. $\qedsymbol$
For each component $L$ of $G^{\ast}[N(u^{\ast})]$, let $W_{L}=W\cap \bigcup_{u\in V(L)}N(u)$, the set of vertices of $W$ having a neighbor in $L$. Thus $W_{L_{i}}\cap W_{L_{j}}=\emptyset$ for any two distinct components $L_{i}$ and $L_{j}$ of $G^{\ast}[N(u^{\ast})]$, unless one of $L_{i}$ and $L_{j}$ is an isolated vertex and the other is a star $K_{1,r}$ for some $r\geq0$ (in that case, the vertices in $W_{L_{i}}\cap W_{L_{j}}$ must be adjacent to the center vertex of the star $K_{1,r}$).
Note that $\rho^{\ast}\geq \rho(S_{\frac{m+4}{2},2}^{-})> \frac{1+\sqrt{4m-5}}{2}>9$ for $m\geq74$. Squaring $2\rho^{\ast}-1>\sqrt{4m-5}$ gives ${\rho^{\ast}}^{2}-\rho^{\ast}>m-\frac{3}{2}$. Let $N_{+}(u^{\ast})=N(u^{\ast})\setminus N_{0}(u^{\ast})$. By \eqref{eq:ch-3}, we have
$$(m-\frac{3}{2})x_{u^{\ast}}<({\rho^{\ast}}^{2}-\rho^{\ast})x_{u^{\ast}}\leq |N(u^{\ast})|x_{u^{\ast}}+\sum_{v\in N_{+}(u^{\ast})}(d_{N(u^{\ast})}(v)-1)x_{v}+e(N(u^{\ast}), W)x_{u^{\ast}}-\sum_{v\in N_{0}(u^{\ast})}x_{v}.$$ It follows that
$$\left(m-\frac{3}{2}-|N(u^{\ast})|-e(N(u^{\ast}), W)+\sum_{v\in N_{0}(u^{\ast})}\frac{x_{v}}{x_{u^{\ast}}}\right)x_{u^{\ast}} <\sum_{v\in N_{+}(u^{\ast})}(d_{N(u^{\ast})}(v)-1)x_{v}.$$ Let $\zeta(L)=\sum_{v\in V(L)}(d_{L}(v)-1)x_{v}$. Since $m=|N(u^{\ast})|+e(N(u^{\ast}))+e(N(u^{\ast}),W)+e(W)$, summing over the non-trivial components $L$ of $G^{\ast}[N(u^{\ast})]$ gives \begin{equation}\label{eq:ch-15} \left(e(N(u^{\ast}))+e(W)+\sum_{v\in N_{0}(u^{\ast})}\frac{x_{v}}{x_{u^{\ast}}}-\frac{3}{2}\right)x_{u^{\ast}} <\sum_{L}\zeta(L). \end{equation} \noindent\begin{lemma}\label{le:ch-4.6.} Let $G^{\ast}$ be the extremal graph which attains the maximum spectral radius $\rho^{\ast}=\rho(G^{\ast})$ among all $C_{5}^{+}$-free graphs with even size $m\geq74$, let $X=(x_{1}, x_{2}, \ldots, x_{n})^{T}$ be the Perron vector of $G^{\ast}$, and let $u^{\ast}$ be an extremal vertex. Let $L^{\ast}$ be a component of $G^{\ast}[N_{+}(u^{\ast})]$. If $\rho^{\ast}>\frac{1+\sqrt{4m-5}}{2}$, then
{\rm(}$i${\rm)} No component of $G^{\ast}[N_{+}(u^{\ast})]$ contains $C_{4}$ as a spanning subgraph; that is, no component is isomorphic to $C_{4}$, $C_{3}^{+}$ or $K_{4}$.
{\rm(}$ii${\rm)} $e(W)=0$; furthermore, $L^{\ast}\ncong K_{3}$ for any component $L^{\ast}$ of $G^{\ast}[N_{+}(u^{\ast})]$.
{\rm(}$iii${\rm)} $G^{\ast}[N_{+}(u^{\ast})]$ has exactly one star component $K_{1,r}$ for some $r\geq3$ and $W=\emptyset$. \end{lemma}
\noindent {\bf Proof.} {\rm(}$i${\rm)} Let $\mathcal{L}$ be the family of components of $G^{\ast}[N(u^{\ast})]$ each of which contains $C_{4}$ as a spanning subgraph and $\mathcal{L^{\prime}}$ be the family of other non-trivial components of $G^{\ast}[N(u^{\ast})]$ each of which contains no $C_{4}$ as a spanning subgraph. By Lemma \ref{le:ch-4.5.} (i)-(iii), for each $L\in\mathcal{L^{\prime}}$, we have
$$\zeta(L)=\sum_{v\in V(L)}(d_{L}(v)-1)x_{v}\leq(2e(L)-|V(L)|)x_{u^{\ast}}\leq e(L)x_{u^{\ast}}.$$ For any two distinct components $L_{i}, L_{j}\in\mathcal{L}$, since $G^{\ast}$ contains no $C_{5}^{+}$, we have $W_{L_{i}}\cap W_{L_{j}}=\emptyset$ and $e(W_{L_{i}}, W_{L_{j}})=0$. Hence, $e(W)\geq \sum_{L\in\mathcal{L}}e(W_{L}, W)$. By \eqref{eq:ch-15}, we have \begin{equation}\label{eq:ch-16} \left(\sum_{L\in \mathcal{L}}(e(L)+e(W_{L}, W))-\frac{3}{2}\right)x_{u^{\ast}}<\sum_{L\in\mathcal{L}}\zeta(L). \end{equation} Suppose that $\mathcal{L}\neq\emptyset$. We will show that $\zeta(L)\leq(e(L)+e(W_{L}, W)-\frac{3}{2})x_{u^{\ast}}$ holds for each $L\in \mathcal{L}$, whence $\sum_{L\in\mathcal{L}}\zeta(L)\leq \left(\sum_{L\in \mathcal{L}}(e(L)+e(W_{L}, W)-\frac{3}{2})\right)x_{u^{\ast}}$, which contradicts \eqref{eq:ch-16}. Let $L^{\ast}\in \mathcal{L}$ with $V(L^{\ast})=\{u_{1},u_{2},u_{3},u_{4}\}$.
{\bf Case 1.} $W_{L^{\ast}}=\emptyset$.
Assume that $x_{u_{1}}=\max\{x_{u_{i}}\mid 1\leq i\leq4\}$. Hence, $\rho^{\ast}x_{u_{1}}=\sum_{u\in N(u_{1})}x_{u}\leq x_{u^{\ast}}+3x_{u_{1}}$, i.e., $x_{u_{1}}\leq\frac{x_{u^{\ast}}}{\rho^{\ast}-3}<\frac{x_{u^{\ast}}}{6}$ for $\rho^{\ast}>9$. Note that $4\leq e(L^{\ast})\leq6$. It follows that $$ \zeta(L^{\ast})=\sum_{v\in V(L^{\ast})}(d_{L^{\ast}}(v)-1)x_{v}\leq(2e(L^{\ast})-4)x_{u_{1}}\leq \frac{1}{3}(e(L^{\ast})-2)x_{u^{\ast}}<(e(L^{\ast})-\frac{3}{2})x_{u^{\ast}},$$ as desired.
{\bf Case 2.} $W_{L^{\ast}}\neq\emptyset$.
Note that $d_{N(u^{\ast})}(w)=d_{L^{\ast}}(w)=1$ for $w\in W_{L^{\ast}}$. By Lemma \ref{le:ch-4.1.} (ii), we have $e(W_{L^{\ast}}, W)\geq1$. We consider the following three subcases.
{\bf Subcase 2.1.} All vertices in $W_{L^{\ast}}$ have a unique common neighbor $u_{1}$, i.e., $N_{W}(u_{i})=\emptyset$ for each $i\in \{2,3,4\}$.
Assume that $x_{u_{2}}=\max\{x_{u_{i}}\mid 2\leq i\leq 4\}$. Therefore, $$\rho^{\ast}x_{u_{2}}\leq x_{u^{\ast}}+x_{u_{1}}+x_{u_{3}}+x_{u_{4}}\leq2(x_{u^{\ast}}+x_{u_{2}}),$$ it follows that $x_{u_{2}}\leq \frac{2x_{u^{\ast}}}{\rho^{\ast}-2}<\frac{2x_{u^{\ast}}}{7}$. Note that $4\leq e(L^{\ast})\leq6$. Hence, $$ \begin{aligned} \zeta(L^{\ast})&=\sum_{v\in V(L^{\ast})}(d_{L^{\ast}}(v)-1)x_{v}\\ &\leq(d_{L^{\ast}}(u_{1})-1)x_{u_{1}}+(2e(L^{\ast})-d_{L^{\ast}}(u_{1})-3)x_{u_{2}}\\ &<(d_{L^{\ast}}(u_{1})-1)x_{u^{\ast}}+(\frac{4}{7}e(L^{\ast})-\frac{2}{7}d_{L^{\ast}}(u_{1})-\frac{6}{7})x_{u^{\ast}}\\ &\leq(\frac{4}{7}e(L^{\ast})+\frac{2}{7})x_{u^{\ast}}\\ &<(e(L^{\ast})-\frac{1}{2})x_{u^{\ast}}\\ &\leq (e(L^{\ast})+e(W_{L^{\ast}},W)-\frac{3}{2})x_{u^{\ast}}, \end{aligned} $$ as desired.
{\bf Subcase 2.2.} There exist exactly two vertices $w, w^{\prime}\in W_{L^{\ast}}$ with distinct neighbors in $V(L^{\ast})$.
In this case, we have $d(w)+d(w^{\prime})\geq5$ from Lemma \ref{le:ch-4.1.} (iii). By Lemma \ref{le:ch-4.1.} (ii), we have $e(W_{L^{\ast}}, W)=d(w)+d(w^{\prime})-e(\{w,w^{\prime} \},V(L^{\ast}))\geq3$. Since $L^{\ast}\in \mathcal{L}$, we have $e(L^{\ast})\leq6$ and $e(L^{\ast})\leq e(W_{L^{\ast}}, W)+3$. Let $N_{L^{\ast}}(w)=\{u_{1}\}$ and $N_{L^{\ast}}(w^{\prime})=\{u_{2}\}$, and assume that $x_{u_{3}}=\max\{x_{u_{3}}, x_{u_{4}}\}$. Then $$\rho^{\ast}x_{u_{3}}\leq x_{u^{\ast}}+x_{u_{1}}+x_{u_{2}}+x_{u_{3}}\leq3x_{u^{\ast}}+x_{u_{3}},$$ and $$x_{u_{3}}\leq\frac{3x_{u^{\ast}}}{\rho^{\ast}-1}<\frac{3}{8}x_{u^{\ast}}.$$ Since $d_{L^{\ast}}(u_{3}), d_{L^{\ast}}(u_{4})\geq2$, we obtain $$ \begin{aligned} \zeta(L^{\ast})&=\sum_{v\in V(L^{\ast})}(d_{L^{\ast}}(v)-1)x_{v}\leq \sum_{v\in V(L^{\ast})}(d_{L^{\ast}}(v)-1)x_{u^{\ast}}-2(d_{L^{\ast}}(u_{3})-1)(x_{u^{\ast}}-x_{u_{3}})\\
&\leq (2e(L^{\ast})-|L^{\ast}|)x_{u^{\ast}}-\frac{5}{4}x_{u^{\ast}}\leq(e(L^{\ast})+e(W_{L^{\ast}}, W)-\frac{9}{4})x_{u^{\ast}}\\ &<(e(L^{\ast})+e(W_{L^{\ast}}, W)-\frac{3}{2})x_{u^{\ast}}, \end{aligned} $$ as desired.
{\bf Subcase 2.3.} There exist $k$ ($k\geq3$) vertices, say $w_{1}, w_{2}, \ldots, w_{k}$, of $W_{L^{\ast}}$ with pairwise distinct neighbors in $V(L^{\ast})$.
In this case, if $w_{i}w_{j}\in E(G^{\ast}[W_{L^{\ast}}])$, then $N_{L^{\ast}}(w_{i})=N_{L^{\ast}}(w_{j})$. Hence, $\{w_{1}, w_{2}, \ldots, w_{k}\}$ is an independent set of $G^{\ast}$ from Lemma \ref{le:ch-4.1.} (iii). By Lemma \ref{le:ch-4.1.} (ii), we obtain that $d(w_{i})\geq2$ for $1\leq i\leq k$ and $d(w_{i})=2$ holds for at most one vertex $w_{i}$. Therefore, $\sum_{1\leq i\leq k}d(w_{i})\geq3k-1$ and $e(W_{L^{\ast}}, W)\geq2k-1$. Thus $$ e(L^{\ast})\leq e(K_{4})=6\leq e(W_{L^{\ast}}, W)-2k+7\leq e(W_{L^{\ast}}, W)+1$$ and $\zeta(L^{\ast})=\sum_{v\in V(L^{\ast})}(d_{L^{\ast}}(v)-1)x_{v}\leq (2e(L^{\ast})-4)x_{u^{\ast}}\leq (e(L^{\ast})+e(W_{L^{\ast}}, W)-3)x_{u^{\ast}}<(e(L^{\ast})+e(W_{L^{\ast}}, W)-\frac{3}{2})x_{u^{\ast}},$ as desired. This completes the proof of {\rm(}$i${\rm)}.
{\rm(}$ii${\rm)} By Lemmas \ref{le:ch-4.5.} and \ref{le:ch-4.6.} (i), each component $L$ of $G^{\ast}[N_{+}(u^{\ast})]$ is either a tree or a unicyclic graph $S_{r+1}^{1}$ for some $r\geq2$. Let $\mathcal{L^{\prime}}$ be the family of non-trivial components of $G^{\ast}[N_{+}(u^{\ast})]$, and assume that exactly $c$ of them are trees. Then
$$\sum_{L\in\mathcal{L^{\prime}}}\zeta(L)=\sum_{L\in\mathcal{L^{\prime}}}\sum_{v\in V(L)}(d_{L}(v)-1)x_{v}\leq \sum_{L\in\mathcal{L^{\prime}}}(2e(L)-|V(L)|)x_{u^{\ast}}=\left(e(N(u^{\ast}))-c \right)x_{u^{\ast}},$$ where the sum runs over all non-trivial components $L\in\mathcal{L^{\prime}}$ of $G^{\ast}[N_{+}(u^{\ast})]$. Combining this with \eqref{eq:ch-15}, we have \begin{equation}\label{eq:ch-17} e(W)< \frac{3}{2}-c-\sum_{v\in N_{0}(u^{\ast})}\frac{x_{v}}{x_{u^{\ast}}}. \end{equation} Hence, $e(W)\leq1$ and $c\leq1$. In addition, $e(W)=1$ holds only if $c=0$ and $\sum_{v\in N_{0}(u^{\ast})}\frac{x_{v}}{x_{u^{\ast}}}<\frac{1}{2}$, in which case each non-trivial component $L$ of $G^{\ast}[N_{+}(u^{\ast})]$ contains $C_{3}$ as a subgraph. Without loss of generality, let $w_{1}w_{2}$ be the unique edge in $E(G^{\ast}[W])$. If both $w_{1}$ and $w_{2}$ have a neighbor in some non-trivial component $L$, then $G^{\ast}$ contains a cut vertex or a copy of $C_{5}^{+}$. If $w_{1}$ has a neighbor in some non-trivial component $L$ and $w_{2}$ has a neighbor in $N_{0}(u^{\ast})$, then $G^{\ast}$ contains a copy of $C_{5}^{+}$. Thus all neighbors of $w_{1}$ and $w_{2}$ other than each other lie in $N_{0}(u^{\ast})$. By Lemma \ref{le:ch-4.1.} (ii), $d_{N_{0}(u^{\ast})}(w_{i})\geq1$ for each $i\in\{1,2\}$.
We claim that $|N_{N_{0}(u^{\ast})}(w_{1})\cap N_{N_{0}(u^{\ast})}(w_{2})|\leq2$; otherwise, $G^{\ast}$ contains $C_{5}^{+}$ as a subgraph. Assume without loss of generality that $x_{w_{1}}=\max\{x_{w_{1}}, x_{w_{2}}\}$. Then $$\rho^{\ast}x_{w_{1}}=x_{w_{2}}+\sum_{v\in N_{N_{0}(u^{\ast})}(w_{1})}x_{v}\leq x_{w_{1}}+\sum_{v\in N_{0}(u^{\ast})}x_{v}<x_{w_{1}}+\frac{1}{2}x_{u^{\ast}},$$ i.e., $$x_{w_{1}}<\frac{1}{2(\rho^{\ast}-1)}x_{u^{\ast}}<\frac{1}{16}x_{u^{\ast}}.$$ By \eqref{eq:ch-12}, we have $$e(W)<\frac{3}{2}-\frac{15}{16}d_{N_{0}(u^{\ast})}(w_{1})\leq\frac{9}{16},$$ a contradiction. Thus $e(W)=0$. By \eqref{eq:ch-17}, we have $$\frac{3}{2}-c-\sum_{v\in N_{0}(u^{\ast})}\frac{x_{v}}{x_{u^{\ast}}}>0.$$ Hence, either $c=0$ and \begin{equation}\label{eq:ch-18} \sum_{v\in N_{0}(u^{\ast})}\frac{x_{v}}{x_{u^{\ast}}}<\frac{3}{2} \end{equation} or $c=1$ and \begin{equation}\label{eq:ch-19}
\sum_{v\in N_{0}(u^{\ast})}\frac{x_{v}}{x_{u^{\ast}}}<\frac{1}{2}. \end{equation} If $c=0$ and $\sum_{v\in N_{0}(u^{\ast})}\frac{x_{v}}{x_{u^{\ast}}}<\frac{3}{2}$, then $G^{\ast}[N_{+}(u^{\ast})]$ contains a component $L^{\ast}\cong S_{r+1}^{1}$ for some $r\geq2$.
Suppose that $L^{\ast}\cong K_{3}$ with $V(L^{\ast})=\{u_{1}, u_{2}, u_{3}\}$. If $W_{L^{\ast}}=\emptyset$, then $$x_{u_{1}}=x_{u_{2}}=x_{u_{3}}=\frac{x_{u^{\ast}}}{\rho^{\ast}-2}<\frac{x_{u^{\ast}}}{7}.$$ Hence, $$\zeta(L^{\ast})=\sum_{1\leq i\leq3}(d_{L^{\ast}}(u_{i})-1)x_{u_{i}}=3x_{u_{1}}<\frac{3}{7}x_{u^{\ast}}=\frac{3}{7}(e(L^{\ast})-2)x_{u^{\ast}}.$$ Note that $e(W)=0$ and $\zeta(L)\leq e(L)x_{u^{\ast}}$ for each non-trivial component $L\in \mathcal{L^{\prime}}\setminus\{L^{\ast}\}$ of $G^{\ast}[N_{+}(u^{\ast})]$. Combining these with \eqref{eq:ch-15}, we have
$$\left(e(N(u^{\ast}))+\sum_{v\in N_{0}(u^{\ast})}\frac{x_{v}}{x_{u^{\ast}}}-\frac{3}{2}\right)x_{u^{\ast}} <\sum_{L\in\mathcal{L^{\prime}}}\zeta(L)<\left(e(N(u^{\ast}))-\frac{4}{7}e(L^{\ast})-\frac{6}{7}\right)x_{u^{\ast}},$$ and it follows that $\sum_{v\in N_{0}(u^{\ast})}\frac{x_{v}}{x_{u^{\ast}}}<\frac{-15}{14}$, a contradiction. Thus $W_{L^{\ast}}\neq\emptyset$. Note that $2\leq d(w)\leq |V(L^{\ast})|=3$ for each vertex $w\in W_{L^{\ast}}$ and $N_{W}(L^{\ast})\cap N_{W}(N_{0}(u^{\ast}))=\emptyset$. If there is a vertex $w\in W_{L^{\ast}}$ such that $d(w)=3$, then $W_{L^{\ast}}=\{w\}$. If $d(w)=2$ for each vertex $w\in W_{L^{\ast}}$, then Lemma \ref{le:ch-4.1.} (iii) implies that all vertices in $W_{L^{\ast}}$ share the same neighborhood. Without loss of generality, assume that $N(w)=\{u_{1}, u_{2}\}$ for each vertex $w\in W_{L^{\ast}}$. Let $G_{13}=G^{\ast}-\{wu_{1}\mid w\in N_{W_{L^{\ast}}}(u_{1}) \}+\{wu^{\ast}\mid w\in N_{W_{L^{\ast}}}(u_{1})\}$. In both cases, we have $G_{13}\in \mathcal{G}(m, C_{5}^{+})$ and $\rho(G_{13})>\rho^{\ast}$ from Lemma \ref{le:ch-2.1.}, a contradiction. Thus $G^{\ast}[N_{+}(u^{\ast})]$ contains no component $L^{\ast}\cong K_{3}$. This completes the proof of {\rm(}$ii${\rm)}.
{\rm(}$iii${\rm)} Suppose that $L^{\ast}\cong S_{r+1}^{1}$ for some $r\geq3$. We first prove that $L^{\ast}$ is the unique non-trivial component of $G^{\ast}[N_{+}(u^{\ast})]$. Note that there are $r-2$ vertices in $V(L^{\ast})$ of degree two in $G^{\ast}$. By Lemma \ref{le:ch-4.1.} (iii), no vertex of degree two lies outside $V(L^{\ast})$, so $L^{\ast}$ is the unique component containing $K_{3}$ as a subgraph. Suppose that $G^{\ast}[N_{+}(u^{\ast})]$ contains another non-trivial tree component $L$. By Lemma \ref{le:ch-4.1.} (ii), we have $d(w)\geq2$ for each $w\in W$. Combining this with $e(W)=0$, we obtain that $W_{L^{\ast}}=\emptyset$. In addition, $W_{L}\neq\emptyset$ and $d(w)\geq3$ for each vertex $w\in W_{L}\cup V(L)$. Since $e(W)=0$ and $W_{L}\cap W_{L^{\ast}}=\emptyset$, we have $N(w)\subseteq V(L)$ for each vertex $w\in W_{L}$. Let $V(L^{\ast})=\{u_{0}, u_{1}, \ldots, u_{r}\}$ with $d_{L^{\ast}}(u_{0})=r$ and $d_{L^{\ast}}(u_{1})=d_{L^{\ast}}(u_{2})=2$. Thus $x_{u_{1}}=x_{u_{2}}$ and $x_{u_{3}}=x_{u_{4}}=\ldots= x_{u_{r}}$. Note that $$\rho^{\ast}x_{u_{1}}= x_{u_{0}}+x_{u_{2}}+x_{u^{\ast}}\leq x_{u_{1}}+2x_{u^{\ast}}.$$
It follows that $$x_{u_{1}}\leq \frac{2x_{u^{\ast}}}{\rho^{\ast}-1}<\frac{x_{u^{\ast}}}{4}$$ for $\rho^{\ast}>9$. By \eqref{eq:ch-14}, $$e(W)<\frac{3}{2}-1-\frac{3}{4}\sum_{v\in\{u_{1},u_{2}\}}(d_{N(u^{\ast})}(v)-1)=-1,$$ a contradiction. Thus $L^{\ast}$ is the unique non-trivial component of $G^{\ast}[N_{+}(u^{\ast})]$. Moreover, $$\zeta(L^{\ast})=(r-1)x_{u_{0}}+x_{u_{1}}+x_{u_{2}}\leq (r-1)x_{u^{\ast}}+2x_{u_{1}}<(r-\frac{1}{2})x_{u^{\ast}}=(e(L^{\ast})-\frac{3}{2})x_{u^{\ast}}.$$ Combining this with $e(W)=0$ and \eqref{eq:ch-15}, we have $$(e(N(u^{\ast}))+\sum_{v\in N_{0}(u^{\ast})}\frac{x_{v}}{x_{u^{\ast}}}-\frac{3}{2})x_{u^{\ast}} <\zeta(L^{\ast})<(e(N(u^{\ast}))-\frac{3}{2})x_{u^{\ast}},$$ and it follows that $\sum_{v\in N_{0}(u^{\ast})}\frac{x_{v}}{x_{u^{\ast}}}<0$, a contradiction. Hence, $G^{\ast}[N_{+}(u^{\ast})]$ contains no unicyclic component, and its $c$ non-trivial components are trees. If $c=0$, then $G^{\ast}$ is bipartite. By Lemma \ref{le:ch-4.2.}, we have $\rho^{\ast}\leq\sqrt{m}< \frac{1+\sqrt{4m-5}}{2}$ for $m\geq74$, a contradiction. Thus $c=1$ and \eqref{eq:ch-19} holds; that is, $G^{\ast}[N_{+}(u^{\ast})]$ has a unique non-trivial component $L$, which is a tree. By Lemma \ref{le:ch-4.5.}, $diam(L)\leq3$.
If $diam(L)=3$, then $L$ is a double star. Since $G^{\ast}$ is $C_{5}^{+}$-free, we have $d_{N(u^{\ast})}(w)=1$ for each vertex $w\in W_{L}$. Combining this with $e(W)=0$ and Lemma \ref{le:ch-4.1.} (ii), we have $W_{L}=\emptyset$, and then $G^{\ast}$ contains two non-adjacent vertices of degree two with distinct neighborhoods, which contradicts Lemma \ref{le:ch-4.1.} (iii). Thus $diam(L)\leq2$, and hence $L\cong K_{1,r}$ for some $r\geq1$.
Let $V(L)=\{u_{0}, u_{1}, \ldots, u_{r}\}$ with center vertex $u_{0}$ and $d_{L}(u_{0})=r\geq1$. By Lemma \ref{le:ch-4.1.} (ii), we have $d_{N(u^{\ast})}(w)\geq2$ for any vertex $w\in W$. For $r=1$, we have $9x_{u^{\ast}}<\rho^{\ast}x_{u^{\ast}}=x_{u_{0}}+x_{u_{1}}+\sum_{v\in N_{0}(u^{\ast})}x_{v}<\frac{5}{2}x_{u^{\ast}}$, a contradiction. For $r=2$, we have $9x_{u^{\ast}} <\rho^{\ast}x_{u^{\ast}} =x_{u_{0}} +x_{u_{1}}+x_{u_{2}}+\sum_{v\in N_{0}(u^{\ast})}x_{v}<\frac{7}{2}x_{u^{\ast}}$, a contradiction. For $r\geq3$, suppose that there exists a vertex $w\in W_{L}$; we discuss the following three cases.
{\bf Case 1.} $d_{L}(w)=1$.
In this case, $w$ is adjacent only to the center vertex $u_{0}$ in $L$. By Lemma \ref{le:ch-4.1.} (iii), we have $d_{N_{0}(u^{\ast})}(w)\geq2$. Then $\rho^{\ast}x_{w}=x_{u_{0}}+\sum_{v\in N_{N_{0}(u^{\ast})}(w)}x_{v}\leq x_{u^{\ast}}+\frac{1}{2}x_{u^{\ast}}=\frac{3}{2}x_{u^{\ast}}$, i.e., $x_{w}\leq \frac{3}{2\rho^{\ast}}x_{u^{\ast}}<\frac{1}{6}x_{u^{\ast}}$. By \eqref{eq:ch-12}, we have $$e(W)<e(N(u^{\ast}))-|N_{+}(u^{\ast})|+\frac{3}{2}-\frac{5}{6}d_{N(u^{\ast})}(w)=\frac{1}{2}-\frac{5}{2}<0,$$ a contradiction.
{\bf Case 2.} $d_{L}(w)=2$.
In this case, we have $N_{N_{0}(u^{\ast})}(w)=\emptyset$; otherwise, $G^{\ast}$ contains $C_{5}^{+}$ as a subgraph. By Lemma \ref{le:ch-4.2.} (iii), there then exist two non-adjacent vertices of degree two with distinct neighborhoods, a contradiction.
{\bf Case 3.} $d_{L}(w)\geq3$.
In this case, we have $N_{N_{0}(u^{\ast})}(w)=\emptyset$; otherwise, $G^{\ast}$ contains $C_{5}^{+}$ as a subgraph. If $u_{0}\in N_{L}(w)$, say $\{u_{0}, u_{1}, u_{2}\}\subseteq N_{L}(w)$, then $G^{\ast}[u^{\ast},u_{1},w,u_{2},u_{0},u_{3}]$ contains $C_{5}^{+}$ as a subgraph, a contradiction. Otherwise, $\{u_{1}, u_{2}, u_{3}\}\subseteq N_{L}(w)$, and $G^{\ast}[u^{\ast},u_{1},w,u_{2},u_{0},u_{3}]$ again contains $C_{5}^{+}$ as a subgraph, a contradiction.
By Cases 1--3, we have $W_{L}=\emptyset$. By Lemma \ref{le:ch-4.6.} (ii), we have $e(W)=0$. Suppose that $W\neq \emptyset$. By Lemma \ref{le:ch-2.1.} (i), $G^{\ast}$ is connected. Thus $d(w)=d_{N_{0}(u^{\ast})}(w)$ for any vertex $w\in W$, and furthermore $N_{0}(u^{\ast})\neq \emptyset$. Combining this with \eqref{eq:ch-19}, we have
$$\rho^{\ast}x_{w}=\sum_{v\in N(w)} x_{v}\leq \sum_{v\in N_{0}(u^{\ast})}x_{v}<\frac{1}{2}x_{u^{\ast}},$$
it follows that $x_{w}< \frac{x_{u^{\ast}}}{2\rho^{\ast}}<\frac{x_{u^{\ast}}}{18}$. By \eqref{eq:ch-12}, we have
$$e(W)<e(N(u^{\ast}))-|N_{+}(u^{\ast})|+\frac{3}{2}- \frac{17}{18}d_{N_{0}(u^{\ast})}(w)\leq\frac{1}{2}-\frac{17}{6}<0,$$ a contradiction. Thus $W=\emptyset$. $\qedsymbol$
\noindent\begin{lemma}\label{le:ch-4.7.} $G^{\ast}\cong S_{\frac{m+4}{2},2}^{-}$. \end{lemma}
\noindent {\bf Proof.} By Lemma \ref{le:ch-4.6.}, we have $e(W)=0$, $W=\emptyset$ and $G^{\ast}[N_{+}(u^{\ast})]\cong K_{1,r}$ for some $r\geq3$. Thus $G^{\ast}\cong G_{14}$ (see Figure 4). Let $|N_{0}(u^{\ast})|=t$. Since $m$ is even, we obtain that $t$ is odd and $t\geq1$. By Lemma \ref{le:ch-2.4.}, $\rho^{\ast}$ is the largest root of the equation $f(x,t)=0$, where $$f(x,t)=x^{4}-mx^{2}-(m-t-1)x+\frac{t(m-t-1)}{2}$$ for $m=t+1+2r\geq74$. Since $$f(x,t)-f(x,1)=(t-1)x+\frac{m(t-1)-t^{2}-t+2}{2}>0$$ for $x>0$ and $t\geq3$, the largest root of $f(x,t)=0$ is strictly smaller than that of $f(x,1)=0$, which implies that $t=1$ for the extremal graph $G^{\ast}$. By Lemma \ref{le:ch-4.3.}, $\rho(S_{\frac{m+4}{2},2}^{-})> \frac{1+\sqrt{4m-5}}{2}$ for $m\geq74$; hence $G^{\ast}\cong S_{\frac{m+4}{2},2}^{-}$, as desired.
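The two root comparisons in the proof can be checked numerically. The sketch below (the choice of test values of $m$ is ours, for illustration only) verifies that the largest root of $f(x,1)=0$ exceeds $\frac{1+\sqrt{4m-5}}{2}$, and that increasing $t$ from $1$ to $3$ strictly decreases the largest root, consistent with $f(x,t)-f(x,1)>0$:

```python
import numpy as np

def largest_root(m, t):
    # f(x, t) = x^4 - m x^2 - (m - t - 1) x + t (m - t - 1) / 2
    coeffs = [1, 0, -m, -(m - t - 1), t * (m - t - 1) / 2]
    roots = np.roots(coeffs)
    return max(r.real for r in roots if abs(r.imag) < 1e-9)

for m in (74, 100, 200):                  # even m >= 74, as in the lemma
    r = (1 + np.sqrt(4 * m - 5)) / 2
    # one can check algebraically that f(r, 1) = -1/4 for every m,
    # so the largest root of f(x, 1) = 0 must exceed r
    assert largest_root(m, 1) > r
    # f(x, 3) > f(x, 1) for x > 0, so the largest root drops when t = 3
    assert largest_root(m, 3) < largest_root(m, 1)
```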
This completes the proof of Theorem 1.6. $\blacksquare$ \begin{figure}
\caption{Graph $G_{14}$ of Lemma 4.7.}
\label{fi:ch-4}
\end{figure}
\end{document} |
\begin{document}
\title{Costly Multidimensional Screening\thanks{I thank Mohammad Akbarpour, Gabriel Carroll, Daniel Chen, Piotr Dworczak, Mira Frick, Nima Haghpanah, Jason Hartline, Andreas Haupt, Ravi Jagadeesan, Zi Yang Kang, Yingkai Li, Paul Milgrom, Mike Ostrovsky, Anne-Katrin Roesler, Ilya Segal, Andy Skrzypacz, Takuo Sugaya, Bob Wilson, Ali Yurukoglu, and Weijie Zhong; conference and seminar participants at Stanford, NSF/NBER/CEME Decentralization Conference, Canadian Economic Theory Conference, Midwest Economic Theory Conference, ACM EC'22, Stony Brook Game Theory Conference, and SAET Conference for helpful comments and suggestions.} } \author{Frank Yang\thanks{Graduate School of Business, Stanford University. Email: shuny@stanford.edu.}} \date{\displaydate{draftdate}} \maketitle \begin{abstract} A screening instrument is \textit{costly} if it is socially wasteful and \textit{productive} otherwise. A principal screens an agent with multidimensional private information and quasilinear preferences that are additively separable across two components: a one-dimensional productive component and a multidimensional costly component. Can the principal improve upon simple one-dimensional mechanisms by also using the costly instruments? We show that if the agent has preferences between the two components that are positively correlated in a suitably defined sense, then simply screening the productive component is optimal. The result holds for general type and allocation spaces, and allows for nonlinear and interdependent valuations. We discuss applications to multiproduct pricing (including bundling, quality discrimination, and upgrade pricing), intertemporal price discrimination, and labor market screening. \\
\noindent\textbf{Keywords:} Multidimensional screening, costly instruments, mechanism design, selection markets, price discrimination, bundling. \end{abstract}
\setcounter{page}{1}
\section{Introduction}
Actions convey information. The effort to obtain credentials conveys information about the ability of a job applicant. The time spent waiting in line conveys information about the willingness to pay of a customer. The endurance of physical activity conveys information about the health status of an individual.\footnote{The New York Times reports, ``The [Wal-Mart's] memo suggests that the company could require all jobs to include some component of physical activity, like making cashiers gather shopping carts.'' \textit{Wal-Mart's health care struggle is corporate America's, too}, The New York Times, October 29, 2005. See also \citet{zeckhauser2021strategic} who argues that socially wasteful ordeals play a prominent role in health care.} These actions are often costly in that they are socially wasteful. However, because the preferences over these actions are correlated in some way with the private information that affects the allocation of productive assets, the informational content from these costly actions could be useful for screening. As \citet{Spence1973Time} writes, ``Nonprice signaling and screening in economic and social contexts deserve more attention, in spite of the fact that they are frequently inefficient.''\footnote{\citet{Stiglitz2002} also writes, ``There is a much richer set of actions which convey information beyond those on which traditional adverse selection models have focused.''}
The addition of a nonprice screening instrument significantly complicates the screening problem, by making the allocation space multidimensional. Multidimensional mechanisms are often far more powerful than one-dimensional mechanisms because they can intricately link the incentive constraints (\citealt{Rochet2003}). For example, consider the following parable of \citet{Stiglitz2002}: An insurance company ``might realize that by locating itself on the fifth floor of a walk-up building, only those with a strong heart would apply. [...] More subtly, it might recognize that how far up it needs to locate itself depends on other elements of the strategy such as premium charged.''
In this paper, we study the effectiveness of costly nonprice screening. Under what conditions should we expect these costly instruments to be used in the design of optimal contracts? Does assuming away such nonprice screening always lead to a suboptimal mechanism in this richer space of mechanisms? To address these questions, we put forward a new multidimensional screening model. The model consists of two components: (i) a productive component which the principal intrinsically cares about (such as insurance coverage), and (ii) a costly component which the principal may utilize to help screening but destroys social surplus (such as walking up stairs).
In the model, the principal designs a mechanism to assign the productive allocations in a one-dimensional space $\mathcal{X}$ and the costly actions in an arbitrary space $\mathcal{Y}$. Monetary transfers are allowed. Both the principal and the agent have quasilinear preferences that are additively separable across the two components $\mathcal{X}$ and $\mathcal{Y}$.
Our main result states that if the agent has preferences between the two components that are positively correlated in a suitably defined sense, then simply screening the one-dimensional productive component is optimal (and essentially uniquely optimal). We also provide a partial converse showing that for a given negative correlation structure, there exist utility functions such that the optimal contract must involve costly screening.
We say that the agent's preferences are \textit{positively correlated} between the two components if the type who has higher utility for the productive allocations tends to also have lower disutility for the costly actions in the stochastic dominance sense. We allow the agent to have multidimensional private information; however, our result is new even when the agent has one-dimensional types because of the multidimensionality of the screening instruments (i.e. price and nonprice).
A basic intuition behind our result can be understood as follows. Consider the parable of \citet{Stiglitz2002} with two types: a high-risk type and a low-risk type. Suppose the insurance company can make the contract contingent on a physical activity such as climbing the stairs. If the high-risk type is less fit, then the company could increase the coverage targeted at the low-risk type. To purchase the contract, the individual has to climb the stairs. Since the high-risk type finds it harder to climb the stairs than the low-risk type does, such a contract can be incentive compatible and increase the profit. On the other hand, suppose the task was to wait in line in order to be eligible for the contract. If the high-risk type is more likely to be unemployed and hence finds waiting less costly, then the company would not benefit from this instrument, since it makes it easier for the high-risk type to mimic the low-risk type.
However, this intuition is incomplete because it assumes monotonicity of the allocation rule which is with loss of generality in multidimensional settings.\footnote{Implementability in multidimensional environments is characterized by \textit{cyclic monotonicity} (see \citealt{rochet1987necessary}) which allows a much richer set of allocation rules.} In particular, in the above example, the firm can be better off by not insuring the high-risk type. In the standard setting, this is impossible since this allocation is not monotone in the type. But suppose the firm decreases the coverage targeted at the high-risk type and its associated price but requires a long waiting time. Because the low-risk type finds it more costly to wait, such a non-monotone contract involving a nonprice instrument can be incentive compatible and even optimal (see \Cref{ex:ubind} in \Cref{subsec:example}). The main difficulty of our problem is the multidimensionality of the screening instruments --- the addition of one more screening instrument significantly expands the space of implementable outcomes beyond the standard family of monotone allocation functions. This expansion applies even when the agent has one-dimensional types.
Despite this fundamental difficulty, our result shows that quite generally the simple intuition turns out to lead to the right prediction. The proof deals with the richer space of multidimensional mechanisms. It relies on two key ingredients. First, we show that if the agent has positively correlated preferences between the two components, then costly instruments can only help with upward incentive constraints. Second, we show that only downward incentive constraints are needed in any one-dimensional screening model satisfying a condition on the surplus function. This second ingredient, which we call the downward sufficiency theorem (\Cref{thm:dbind}), uncovers a novel property of one-dimensional screening models (see \Cref{subsec:example}).
In the insurance example, our result implies that costly instruments can be useful if the higher-risk type has higher costs (e.g. climbing the floors) and cannot be useful if the higher-risk type has lower costs (e.g. waiting in line). The same result also applies to the classic setting of labor market screening. Positive correlation of preferences arises there because a higher ability applicant often tends to find both accomplishing the work easier and education less costly. Our result implies that a monopsonistic firm should not make its offers contingent on the costly signals from an applicant, despite the fact that the firm prefers a higher ability applicant (see \Cref{subsec:labor}). Thus, in light of \citet{Spence1973Job} who focuses on competitive markets, our result shows that whether costly instruments appear in a market depends on the distribution of market power. The reason is that when there are no outside options generated through competition, the incentive constraints bind in the opposite direction (see \Cref{app:comp}).
Beyond such direct implications, our result also delivers new insights into multidimensional mechanism design problems that may at first glance appear to be unrelated. First, consider a multiple-good monopolist selling different qualities of bundles. A common selling strategy in streaming services is to offer the bundle of all content at various qualities (e.g. with or without advertising). When is such a strategy optimal? Our key insight is that selling the bundle of all goods can be viewed as the productive component, and selling smaller bundles instead of the grand bundle can be viewed as the costly instruments for screening values of the grand bundle. As an application, in \Cref{subsec:multigood}, we generalize a recent result of \citet{haghpanah2021pure}, who derive sufficient conditions for the optimality of pure bundling, to a multiple-good monopoly setting that allows for both probabilistic bundling and quality discrimination. The optimal mechanism in our setting generally involves price discrimination but does so only along the vertical (quality) dimension.
Second, using this perspective on bundling, as another application, we derive conditions on the optimality of selling a menu of nested bundles for a multiproduct monopolist (see \Cref{subsec:nested}). This application allows the buyer to have non-additive values, complementing a recent paper of \citet{bergemann2021optimality} who study the additive case.
Third, using the perspective that delay is costly, we also derive implications on intertemporal price discrimination. \citet{Stokey1979} shows that if consumers only differ in their values for a good but not in their patience, then the optimal dynamic selling mechanism is static and does not involve intertemporal price discrimination. What happens if consumers differ in both their values and patience? As an application, we show that intertemporal price discrimination can be profitable when consumers' values and patience are negatively correlated, but it is always unprofitable when consumers' values and patience are positively correlated (see \Cref{subsec:discounting}).
The main contributions of this paper are thus three-fold. First, we provide a unified framework for studying mechanism design with both price and nonprice screening instruments. Such a framework necessarily involves multidimensional allocations and hence requires analysis significantly different from the standard one-dimensional case. Our result provides a parsimonious benchmark to systematically understand how costly instruments should be optimally used in mechanism design.
Second, we offer a new perspective on multidimensional screening. Our framework identifies conditions under which optimal multidimensional mechanism design can be reduced to the one-dimensional case. There are very few general results in the multidimensional screening literature despite much research in the past decades (\citealt{Rochet2003}; \citealt{Carroll2017}). Our framework provides one, through the simple economics of costly screening. Our result holds for general type spaces, general allocation spaces, and general utility functions. Motivated by applications --- including insurance markets, labor markets, and monopoly regulation --- our model allows for interdependent preferences.
Third, these methods can be used to analyze other mechanism design problems. Our applications demonstrate how different mechanism design problems can be formulated in a way that fits our framework by viewing a certain set of allocations as ``damages.'' In contrast to the results from the damaged good literature, our multidimensional framework distinguishes between different kinds of damage (e.g. reducing quality vs. requiring waiting) and characterizes when to use which kind of damage.
On the technical side, the contribution is a novel nonlocal approach to mechanism design. The standard approach to mechanism design uses the envelope theorem to characterize the transfers associated with each implementable allocation rule, and then optimizes over the set of implementable allocation rules. This is possible in one-dimensional settings because the set of implementable allocation rules is characterized by a simple monotonicity condition. In multidimensional environments, the set of implementable allocations is much richer. On the other hand, ignoring the implementability constraints and optimizing only with the local IC constraints generally leads to non-implementable outcomes.
Instead, our proof method works as follows. First, for a given incentive compatible multidimensional mechanism, we reconstruct an alternative one-dimensional mechanism that improves upon the original one but only satisfies a particular class of IC constraints, the set of downward incentive constraints. Next, our key technical result, the downward sufficiency theorem, shows quite broadly that upward incentive constraints can be ignored in one-dimensional models. This result is proved by a novel variational argument. These steps together provide an alternative to the current approach to multidimensional mechanism design that relies heavily on linear programming duality and requires problem-specific techniques (e.g. \citealt{haghpanah2021pure}, \citealt{bergemann2021optimality}). We postpone the detailed discussion of previous literature to \Cref{sec:lit}, after presenting our results.
The remainder of the paper proceeds as follows. \Cref{sec:model} presents our model. \Cref{sec:main} presents the main result and a partial converse. \Cref{sec:proof} presents the proof of the main result. \Cref{sec:csignal} presents a stronger result in a special case of the model and its application to multiple-good monopoly pricing. \Cref{sec:app} discusses additional applications. \Cref{sec:lit} discusses related literature. \Cref{sec:conclusion} concludes.
\section{Model}\label{sec:model} A principal wants to screen an agent. The agent has private information summarized by a multidimensional type $\theta = (\theta^A, \theta^B)$, where $\theta^A \in \Theta^A \subseteq \mathds{R}$ and $\theta^B \in \Theta^B \subseteq \mathds{R}^N$ for a finite $N$; for convenience, sometimes we also refer to $\theta^A$ as $\theta^0$ and $\theta^B$ as $(\theta^1, \dots, \theta^N)$. We use the superscripts $A$, $B$ to indicate the productive and costly components, respectively.
Both $\Theta^A$ and $\Theta^B$ are assumed to be compact. Let $\Theta := \Theta^A \times \Theta^B$ denote the type space; let $\Delta(\Theta)$ denote the space of Borel probability measures on $\Theta$, equipped with the weak-$^*$ topology. The agent's type is drawn from a commonly known distribution $\gamma \in \Delta(\Theta)$.
The space of productive allocations $\mathcal{X} \ni x$ is a compact subset of $\mathds{R}$; the space of costly instruments $\mathcal{Y} \ni y$ is an arbitrary measurable space.
Both the principal and the agent have quasilinear preferences that are additively separable across the two components: The principal's (ex post) payoff is given by \[v^A(x, \theta^A) + v^B(y, \theta^B) + t\,, \] and the agent's payoff is given by \[u^A(x, \theta^A) + u^B(y, \theta^B) - t\,, \] where $t$ stands for transfers. The utility functions for the productive component $u^A$, $v^A$ are assumed to be continuous on $\mathcal{X} \times \Theta^A$; those for the costly component $u^B$, $v^B$ are allowed to be any bounded measurable functions on $\mathcal{Y} \times \Theta^B$. The principal has \textit{interdependent preferences} if $v^A$ or $v^B$ depends on the agent's type.
The (ex post) surplus functions for the two components are denoted by \[s^A(x, \theta^A) := u^A(x, \theta^A) + v^A(x, \theta^A)\,,\quad s^B(y, \theta^B) := u^B(y, \theta^B) + v^B(y, \theta^B) \,. \] The defining feature of the costly component is that any allocation is socially wasteful under complete information: for all $y \in \mathcal{Y}$ and all $\theta^B \in \Theta^B$, \[\label{eq:star} s^B(y, \theta^B) \leq 0 \,. \tag{1}\] There is an element $y_0 \in \mathcal{Y}$ (with $\{y_0\}$ measurable) representing \textit{no costly screening}: \[v^B(y_0, \theta^B) = u^B(y_0, \theta^B) = 0 \,.\] We say the instruments are \textit{strictly costly} if \eqref{eq:star} holds strictly for all $y \neq y_0$ and all $\theta^B$.
A \textit{mechanism} is a measurable map \[(x, y, t): \Theta \rightarrow \mathcal{X} \times \mathcal{Y} \times \mathds{R}\] satisfying the usual incentive compatibility (IC) and individual rationality (IR) constraints: \begin{alignat*}{2} u^A(x(\theta), \theta^A) + u^B(y(\theta), \theta^B) - t(\theta) &\geq u^A(x(\hat{\theta}), \theta^A) + u^B(y(\hat{\theta}), \theta^B) - t(\hat{\theta}) \quad &&\text{for all } \theta, \hat{\theta} \in \Theta \,; \\ u^A(x(\theta), \theta^A) + u^B(y(\theta), \theta^B) - t(\theta) &\geq 0 &&\text{for all } \theta \in \Theta \,. \end{alignat*} Let $\mathcal{M}(\Theta)$ denote the space of mechanisms. The principal wants to solve \[\sup_{(x, y,t) \in \mathcal{M}(\Theta)} \E[v^A(x(\theta), \theta^A) + v^B(y(\theta), \theta^B) + t(\theta)] \,.\] A mechanism $(x, y, t)$ \textit{involves no costly screening} if $y(\theta) = y_0$ for all $\theta$ and $(x, t)$ does not depend on $\theta^B$, in which case the mechanism screens only the productive component.
We make the following assumptions on the productive component. \begin{asm}[Productive Component]\text{ }\label[manualasm]{asm:prod} \begin{itemize}
\item[(1.1)] $u^A(x, \theta^A)$ is nondecreasing in $\theta^A$.
\item[(1.2)] $u^A(x, \theta^A)$ has strict increasing differences: for any $x < \hat{x}$, $\theta^A < \hat{\theta}^A$,
\[u^A(\hat{x}, \theta^A) - u^A(x, \theta^A) < u^A(\hat{x}, \hat{\theta}^A) - u^A(x, \hat{\theta}^A) \,. \]
\item[(1.3)] $s^A(x, \theta^A)$ has weak single-crossing differences: for any $x < \hat{x}$, $\theta^A<\hat{\theta}^A$,
\[s^A(\hat{x}, \theta^A) - s^A(x, \theta^A) > 0 \implies s^A(\hat{x}, \hat{\theta}^A) - s^A(x, \hat{\theta}^A) > 0 \,.\] \end{itemize} \end{asm}
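Conditions (1.2) and (1.3) are easy to test numerically on a grid. The following sketch (illustrative only; it uses the linear insurance specification $u^A(x,\theta)=(\theta+2)x$, $v^A(x,\theta)=-4\theta x$ from the examples in \Cref{subsec:example}) confirms that $u^A$ has strict increasing differences while the surplus condition (1.3) fails for $s^A=(2-3\theta)x$, illustrating that (1.3) is a separate restriction:

```python
import itertools

# Linear insurance specification: u^A(x, t) = (t + 2) x, v^A(x, t) = -4 t x
uA = lambda x, t: (t + 2) * x
vA = lambda x, t: -4 * t * x
sA = lambda x, t: uA(x, t) + vA(x, t)   # surplus s^A = (2 - 3t) x

grid = [i / 4 for i in range(5)]        # grid on [0, 1] for both x and theta

def strict_id(f):
    """Strict increasing differences of f(x, theta) on the grid."""
    return all(f(x2, t1) - f(x1, t1) < f(x2, t2) - f(x1, t2)
               for x1, x2 in itertools.combinations(grid, 2)
               for t1, t2 in itertools.combinations(grid, 2))

def weak_scd(f):
    """Weak single-crossing differences of f(x, theta) on the grid."""
    return all(not (f(x2, t1) - f(x1, t1) > 0) or f(x2, t2) - f(x1, t2) > 0
               for x1, x2 in itertools.combinations(grid, 2)
               for t1, t2 in itertools.combinations(grid, 2))

assert strict_id(uA)        # Assumption (1.2) holds
assert not weak_scd(sA)     # the surplus condition (1.3) fails here
```

The failure of (1.3) for this specification is exactly what allows the costly instrument to be profitable in \Cref{subsec:example}.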
To state our notion of positive correlation of preferences between the two components, we introduce some notation. Let $\preceq_{st}$ denote the usual stochastic order for $\mathds{R}^N$-valued random variables, i.e. $X \preceq_{st} Y$ if $\E[f(X)] \leq \E[f(Y)]$ for all bounded nondecreasing (measurable) functions $f:\mathds{R}^N \rightarrow \mathds{R}$. Let $\theta^B \mid \theta^A$ denote the regular conditional distribution of $\theta^B$ given $\theta^A$.\footnote{See e.g. \citeauthor{klenke2013probability} (\citeyear{klenke2013probability}, pp. 180-185). }
\begin{asm}[Positive Correlation of Preferences]\text{ }\label[manualasm]{asm:pos} \begin{itemize}
\item[(2.1)] $u^B(y, \theta^B)$ is nondecreasing in $\theta^B$.
\item[(2.2)] $\theta^B \mid \theta^A \preceq_{st} \theta^B \mid \hat{\theta}^A $ for all $\theta^A < \hat{\theta}^A $ in the support. \end{itemize} \end{asm}
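For discrete distributions with $N=1$, Assumption (2.2) reduces to pointwise dominance of conditional CDFs: $\theta^B\mid\theta^A$ is stochastically nondecreasing in $\theta^A$ iff the conditional CDF falls pointwise as $\theta^A$ rises. A minimal numeric sketch (the $2\times 2$ joint distribution below is hypothetical):

```python
import numpy as np

# Hypothetical joint pmf of (theta^A, theta^B): rows index theta^A, cols theta^B
joint = np.array([[0.3, 0.1],    # theta^A low
                  [0.1, 0.5]])   # theta^A high

def stochastically_nondecreasing(joint):
    """Check Assumption (2.2) for N = 1: the conditional CDF of theta^B
    given theta^A must be pointwise nonincreasing in theta^A."""
    cond = joint / joint.sum(axis=1, keepdims=True)   # theta^B | theta^A
    cdfs = np.cumsum(cond, axis=1)
    # each row's CDF must lie weakly above the next row's CDF
    return all(np.all(cdfs[i] >= cdfs[i + 1] - 1e-12)
               for i in range(len(cdfs) - 1))

assert stochastically_nondecreasing(joint)             # positive correlation
assert not stochastically_nondecreasing(joint[::-1])   # reversing rows flips it
```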
\subsection{Discussion of Assumptions}
\paragraph{Mechanism Space.}\hspace{-2mm}As formally defined, our model restricts attention to deterministic mechanisms. However, when there are finitely many pure allocations, we may be able to redefine the allocation spaces to be the probabilities. We take this approach in \Cref{subsec:multigood} where we show how probabilistic bundling by a multiple-good monopolist is nested in this framework.
In the model, there is no feasibility constraint across the allocation spaces $\mathcal{X}$ and $\mathcal{Y}$. However, for any subset $\mathcal{S} \subseteq \mathcal{X} \times \mathcal{Y}$, we may constrain the feasible allocations by requiring $(x, y): \Theta \rightarrow \mathcal{S}$. Provided that $\mathcal{X} \times \{y_0\} \subseteq \mathcal{S}$, our main result is unaffected by such constraints (since the optimum in the original problem would still be feasible).
\paragraph{Additive Separability.}\hspace{-2mm}At this level of generality, additive separability is imposed to distinguish between the productive and the costly component. In applications, even when this assumption may at first glance appear to be violated, a suitable transformation of the problem may yield an additively separable structure. For example, our applications on bundling with non-additive values (\Cref{subsec:multigood}, \Cref{subsec:nested}) and intertemporal price discrimination (\Cref{subsec:discounting}) can be formulated in a way that fits this framework.
\paragraph{Monetary Transfers.}\hspace{-2mm}In the model, money is perfectly transferable. However, by rescaling, our model accommodates some environments in which monetary transfers can be imperfect. Suppose for any mechanism $(x, y, t)$ the principal's payoff is given by \[\E[v^A(x(\theta), \theta^A) + v^B(y(\theta), \theta^B) + \alpha t(\theta)] \,,\] where $\alpha$ is any positive constant (that may represent, for example, adjustment for tax). We can factor out $\alpha$ and see that the principal's problem is equivalent to that with the scaled objective $\E\big [\frac{1}{\alpha} v^A(x(\theta), \theta^A) + \frac{1}{\alpha} v^B(y(\theta), \theta^B) + t(\theta) \big ]$. If $v^A$ satisfies the weak increasing differences condition\footnote{That is, $v^A(\hat{x}, \theta^A) - v^A(x, \theta^A) \leq v^A(\hat{x}, \hat{\theta}^A) - v^A(x, \hat{\theta}^A)$ for all $x < \hat{x}$, $\theta^A < \hat{\theta}^A$. } and $v^B = 0$, then this problem automatically fits our model.
\paragraph{Productive Component.}\hspace{-2mm}Assumptions (1.1) and (1.2) are the classic assumptions of one-dimensional screening problems. For future reference, we say a one-dimensional screening problem is \textit{standard} if it satisfies Assumptions (1.1) and (1.2).\footnote{A no-trade outcome $x_0$, in the sense that $v^{A}(x_{0},\theta^{A})=u^{A}(x_{0},\theta^{A})=0$, is allowed but not required in the model. In particular, if $\min(\mathcal{X})$ is the no-trade outcome, then Assumption (1.1) is not needed because it would be implied by Assumption (1.2).}
Assumption (1.3) is a new condition that is imposed on the surplus function. For future reference, we refer to Assumption (1.3) as the \textit{surplus condition}. It is satisfied in common one-dimensional screening problems. For example, this condition automatically holds when the principal's preferences are not interdependent, given Assumption (1.2). In general, however, this condition differs from Assumption (1.2). Sufficient conditions for this surplus condition include, for example, (i) $s^A$ is strictly increasing in $x$, or (ii) the cross partial derivative $s^A_{12} \geq 0$. This assumption ensures that there is a monotone efficient allocation rule. It is not satisfied when the principal's preference to trade with low types is so strong that any socially efficient allocation rule is not monotone. This is an important assumption that is used in our key technical result, the downward sufficiency theorem (\Cref{thm:dbind}). In \Cref{subsec:example}, we further explain this condition and provide a counterexample when this condition fails.
\paragraph{Positive Correlation.}\hspace{-2mm}Assumption (2.1) says that $\theta^B$ encodes the strength of the agent's preferences on the costly component such that higher $\theta^B$ represents higher willingness to pay for any $y$. Assumption (2.2) then defines the positive correlation structure between the agent's preferences for the two components. The condition is known as stochastic monotonicity (\citealt{muller2002comparison}). We say $\theta^B$ is \textit{stochastically nondecreasing} in $\theta^A$ whenever Assumption (2.2) holds. This is an asymmetric condition. It says that observing a high $\theta^A$ conveys good news about $\theta^B$ in the sense of stochastic dominance. A sufficient condition for Assumption (2.2) is that $(\theta^0, \theta^1, \dots, \theta^N)$ are affiliated in the sense of \citet{Milgrom1982} (see \Cref{lem:aff} in \Cref{app:add}). Assumption (2.2) is weaker than affiliation. For example, when $N = 2$, $(\theta^1, \theta^2)$ may be negatively correlated with each other, while $\theta^B = (\theta^1, \theta^2)$ is positively correlated with $\theta^A$ according to this notion.
\section{Main Result} \label{sec:main} Our main result says that if the agent has positively correlated preferences between the productive and costly components, then simply screening the one-dimensional productive component is optimal and essentially uniquely optimal:
\begin{restatable}{theorem}{thm:main}\label{thm:main} Suppose \Cref{asm:prod,asm:pos} hold. Then: \begin{itemize}
\item[(i)] There exists an optimal mechanism that involves no costly screening.
\item[(ii)] If the instruments are strictly costly, then every optimal mechanism has $\mathds{P}(y(\theta) = y_0) = 1$. \end{itemize} \end{restatable}
In the case of negatively correlated preferences, we show a partial converse. We say the utility functions $u^A$, $u^B$, $v^A$, $v^B$ are \textit{admissible} if they satisfy all the assumptions in \Cref{sec:model} including the strict version of \eqref{eq:star}. For a real-valued continuous random variable $X$, let $\beta(X) = \mathds{1}_{X \geq \text{median}(X)}$ denote the ``binarization'' of $X$. \begin{prop}\label{prop:converse}
Suppose $\theta$ is absolutely continuous; $|\mathcal{X}|>1$, $|\mathcal{Y}|>1$; and there exists some $i\in\{1, \dots, N\}$ such that $\theta^i$ is stochastically \textnormal{nonincreasing} in $\theta^0$ and $\beta(\theta^i)$, $\beta(\theta^0)$ are not independent. Then, there exist admissible utility functions such that any mechanism screening only the productive component is strictly dominated by a mechanism involving costly screening. \end{prop}
We postpone the explanation and proof of \Cref{thm:main} to \Cref{sec:proof}. \Cref{prop:converse} can be shown by a simple construction that sets $v^A = v^B= 0$. The intuition is as follows. The principal can always create a menu of two nontrivial options for the agent: (i) getting the favorite allocation in $\mathcal{X}$ at a high price, and (ii) getting the same allocation at a low price but with some costly activity. The proof shows that if $\theta^i$ is negatively correlated with $\theta^0$ as defined in the statement, then there exist some admissible utility functions for the agent such that this way of price discrimination is always more profitable for the principal than selling the elements in $\mathcal{X}$ alone. The appendix provides details.
\section{Proof of the Main Result} \label{sec:proof}
In this section, we provide the proof of \Cref{thm:main} under the additional assumption that $\Theta^B = \Theta^A = \mathcal{Y} = \mathcal{X} = [0, 1]$. We further assume that $ \theta^B = \theta^A = \theta$ so that the agent has only one-dimensional types. The main difficulty of the proof already appears in this simplest case. The appendix provides the proof of the general case.
\subsection{Examples and the Approach}\label{subsec:example} Before presenting the proof, to illustrate the basic intuition outlined in the introduction and why it is incomplete, we begin with a sequence of examples.
Consider a health insurance setting with two types: $\theta = 0$ is the low-risk type and $\theta = 1$ is the high-risk type (with equal probabilities). The low-risk type has value $2$ for the insurance, whereas the high-risk type has value $3$. It costs the insurance firm $0$ to serve the low-risk type, and $4$ to serve the high-risk type. Equivalently, \[u^A(x, \theta) = (\theta + 2) x\,, \quad v^A(x, \theta) = - 4 \theta x\,.\]
Because the utilities are linear in the allocation $x$, the optimal one-dimensional mechanism has three possibilities: (i) trade with both types with full insurance (i.e. $x = 1$), (ii) trade with only the high type with full insurance, or (iii) trade with neither of the types. Option (i) yields profit $2 - \frac{1}{2} \times 4 = 0$, option (ii) yields profit $\frac{1}{2} \times (3 - 4) = -\frac{1}{2}$, and option (iii) yields profit $0$. Thus, without any costly instrument, the principal gets profit $0$.
\begin{ex}\label{ex:negative} Suppose we add a costly instrument $y$ such as a physical activity for which the low-risk type has cost $0$ and the high-risk type has cost $1$: \[u^B(y, \theta) = -\theta y\,, \quad v^B(y, \theta) = 0\,.\] The agent has negatively correlated preferences here because the high-risk type has a higher utility for the insurance but also higher disutility for the costly action. The principal can mitigate the adverse selection problem by offering $\big\{(1, 1, 2)\big\}$, which can be interpreted as requiring the physical activity in order to purchase the insurance plan (e.g. locating the office in a walk-up building). The high-risk type, finding the physical activity too costly, does not purchase this plan. Then, the principal gets profit $\frac{1}{2} \times (2 - 0) = 1 > 0$. \qed \end{ex}
\begin{ex}\label{ex:ubind} Suppose, instead, we add a costly instrument $y$ such as waiting in line for which the low-risk type has cost $1$ and the high-risk type has cost $0$: \[u^B(y, \theta) = (\theta-1) y\,, \quad v^B(y, \theta) = 0\,.\] The agent has perfectly correlated preferences here because the high-risk type has a higher utility for the insurance but also lower disutility for the costly action. Perhaps surprisingly, the costly instrument can still be useful. In particular, consider a menu of two options $\big\{(1, 0, 2),\,\, (\frac{1}{2}, \frac{1}{2}, \frac{1}{2})\big\}$, which can be interpreted as a full insurance plan with a price $2$, and a low-coverage plan that has a price $\frac{1}{2}$ but requires some amount of waiting. The low-risk type, finding waiting too costly, purchases the full insurance plan. The high-risk type, finding the low-coverage plan cheap, purchases the low-coverage plan. With this menu, the principal gets profit $\frac{1}{2} \times 2 + \frac{1}{2} \times (\frac{1}{2} - \frac{1}{2} \times 4) = \frac{1}{4} > 0$. \qed \end{ex}
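The self-selection in this menu can be checked directly. The sketch below is our own illustration; in particular, the rule that an indifferent agent breaks ties in the principal's favor is an assumption we impose to match how the example resolves indifference.

```python
# Sketch (ours) evaluating the menu of Example 2. Tie-breaking in the
# principal's favor is our modeling assumption for indifferent agents.

def uA(x, th): return (th + 2) * x
def vA(x, th): return -4 * th * x
def uB(y, th): return (th - 1) * y      # waiting in line

def agent_utility(opt, th):
    x, y, t = opt
    return uA(x, th) + uB(y, th) - t

def principal_profit(menu, types=(0, 1), probs=(0.5, 0.5)):
    total = 0.0
    for th, p in zip(types, probs):
        best = max(agent_utility(opt, th) for opt in menu)
        if best < 0:                    # outside option is strictly better
            continue
        ties = [opt for opt in menu if agent_utility(opt, th) == best]
        # among the agent's best options, the principal's favorite is chosen
        total += p * max(vA(o[0], th) + o[2] for o in ties)
    return total

menu = [(1, 0, 2), (0.5, 0.5, 0.5)]
assert principal_profit(menu) == 0.25   # profit 1/4, as computed above
```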
Note that in both \Cref{ex:negative} and \Cref{ex:ubind}, the insurance allocation is not monotone: the high-risk type gets lower coverage than the low-risk type does. This cannot arise under any one-dimensional mechanism. The possibility of using an additional instrument significantly expands the space of implementable allocations, which is the main difficulty in proving our result.
A key observation is that the costly instrument in \Cref{ex:ubind} is used in a different way from that in \Cref{ex:negative}. The costly action in \Cref{ex:ubind} is required for the eligibility to purchase the low-coverage option targeted at the high-risk type, whereas the costly action in \Cref{ex:negative} is required for the eligibility to purchase the full-coverage option targeted at the low-risk type. That is, the costly instrument is used to deter the upward deviation under positive correlation of preferences, whereas it is used to deter the downward deviation under negative correlation of preferences.
If we can rule out upward deviations to begin with, we can then show that the costly instrument is ineffective under positive correlation of preferences. This is precisely the route our proof takes. But it turns out that ruling out upward deviations is itself an involved task.
Suppose, in \Cref{ex:ubind}, we ignore the upward deviation. Then, the principal optimally sets a price of $2$ for the full insurance plan targeted at the low-risk type, and pays the high-risk type $1$ to stay out of the market. This yields a profit $\frac{1}{2} \times 2 + \frac{1}{2} \times (-1) = \frac{1}{2}$. But, of course, this is not incentive compatible: the low-risk type wants to take the payment $1$ as well. Thus, the downward incentive constraint is not sufficient.
\begin{ex}\label{ex:positive} Suppose, as in \Cref{ex:ubind}, the costly instrument is waiting in line, but it costs $\frac{5}{2}$ instead of $4$ to serve the high-risk type, i.e. $v^A(x, \theta) = -\frac{5}{2}\theta x$. Note that the set of incentive compatible mechanisms is the same as in \Cref{ex:ubind}. In particular, the menu $\big\{(1, 0, 2),\,\, (\frac{1}{2}, \frac{1}{2}, \frac{1}{2})\big\}$ induces the same self-selection as before, and hence gives the principal profit \[\frac{1}{2} \times 2 + \frac{1}{2} \times (\frac{1}{2} - \frac{1}{2} \times \frac{5}{2}) = \frac{5}{8}\,.\] But this menu is now strictly dominated by simply selling to both types without any costly screening, because that gives the principal profit \[2 - (\frac{1}{2} \times 0 + \frac{1}{2} \times \frac{5}{2}) = \frac{3}{4} > \frac{5}{8}\,. \eqno\qed \] \end{ex}
\Cref{ex:positive} differs from \Cref{ex:ubind} in that the cost of serving the high-risk type is now lower than the value of the high-risk type (i.e. $\frac{5}{2} \leq 3$). More generally, let $\kappa$ be the cost of serving the high type. Note that the surplus function for the productive component $s^A(x, \theta) = ((1 - \kappa) \theta + 2)x$ satisfies the surplus condition, Assumption (1.3), if and only if $\kappa \leq 3$. Note also that by the same calculation as in \Cref{ex:positive}, the menu $\big\{(1, 0, 2),\,\, (\frac{1}{2}, \frac{1}{2}, \frac{1}{2})\big\}$ is no more profitable than simply selling the full insurance plan to both types if and only if $\kappa \leq 3$. Even though the costly instrument expands the space of implementable outcomes just as before, it turns out that the surplus condition, equivalent to $\kappa \leq 3$ here, guarantees that the optimum does not make use of the much richer set of mechanisms.\footnote{Note that this is not the case under \Cref{ex:negative} where the agent has negatively correlated preferences: even if $\kappa = 2.5$ instead of $4$, the optimum still uses the costly instrument because that gives profit $1 > \frac{3}{4}$.}
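The threshold $\kappa \leq 3$ can be verified by a direct computation; the sketch below (ours) compares the menu's profit against simple pooling over a grid of $\kappa$ values.

```python
# Sketch (ours): menu vs. pooling profit as a function of the high type's
# cost kappa, confirming the cutoff kappa = 3.
from fractions import Fraction as F

def menu_profit(kappa):
    # low type buys (1, 0, 2); high type buys (1/2, 1/2, 1/2)
    return F(1, 2) * 2 + F(1, 2) * (F(1, 2) - F(1, 2) * kappa)

def pooling_profit(kappa):
    # full insurance for both types at price 2
    return 2 - F(1, 2) * kappa

assert menu_profit(4) == F(1, 4)                 # Example 2
assert menu_profit(F(5, 2)) == F(5, 8)           # Example 3
assert pooling_profit(F(5, 2)) == F(3, 4)
# pooling weakly dominates the menu exactly when kappa <= 3
for k in [F(n, 2) for n in range(13)]:
    assert (pooling_profit(k) >= menu_profit(k)) == (k <= 3)
```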
The reason, as alluded to earlier, is intimately linked to whether we can ignore upward incentive constraints. Our key technical result, the downward sufficiency theorem, asserts that as long as the surplus condition holds, only downward incentive constraints are needed in one-dimensional screening models. By a reconstruction argument, we also show that if the agent has positively correlated preferences between the two components, then costly instruments can only help with upward incentive constraints. Our main result follows by combining these two ingredients.
The downward sufficiency theorem is proved by a variational argument. For an arbitrary mechanism that satisfies all downward incentive constraints but violates some upward incentive constraints, we show that it can be improved by a carefully chosen sequence of modifications. This theorem is distinct from the classic results on the local incentive constraints. It is well known that the local downward incentive constraints are always binding for any one-dimensional screening model. However, as \Cref{ex:ubind} shows, this does not imply that we can ignore the upward incentive constraints. With more than two types, it is also well known that the local downward incentive constraints are generally not sufficient --- various procedures of ironing are needed (\citealt{Myerson1981}, \citealt{mussa1978monopoly}). There is no known tractable method of ironing in multidimensional environments since the space of implementable outcomes is much richer (\citealt{Rochet2003}).\footnote{For example, there is no tractable ironing method using cyclic monotonicity which characterizes implementability in multidimensional environments.} Our approach makes essential use of the global downward incentive constraints.
The rest of this section presents the proof, organized as follows. \Cref{subsec:reconstruction} shows how the reconstruction works. \Cref{subsec:dst} proves the downward sufficiency theorem. \Cref{subsec:reduction} briefly discusses how the imperfect correlation case can be reduced to the perfect correlation case.
\subsection{Reconstruction} \label{subsec:reconstruction}
Suppose we are given a mechanism $(x, y, t)$. Consider the following reconstruction: \[\tilde{x}(\theta) = x(\theta)\,, \qquad \tilde{y}(\theta) = y_0\,, \qquad \tilde{t}(\theta) = t(\theta) - u^B(y(\theta), \theta) \,.\] The reconstruction maintains the same allocations for the productive component, involves no costly screening, and uses transfers to keep all types at their previous utility levels, \textit{assuming} they report truthfully.
Assuming truthful reporting, this increases the total surplus while giving the same surplus to the agent, and therefore increases the principal's payoff. Indeed, the change in principal's payoff is \[\E\big [v^B(y_0, \theta) - v^B(y(\theta), \theta) - u^B(y(\theta), \theta) \big] = \E\big [-s^B(y(\theta), \theta)\big ] \geq 0\,.\] The last inequality is strict if $\mathds{P}(y(\theta) \neq y_0) > 0$ and the instruments are strictly costly.
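The displayed payoff change can be checked numerically. In the sketch below, both the utility functions and the starting mechanism are hypothetical choices of ours (we verify only the identity above and the preservation of truthful utilities, not incentive compatibility).

```python
# Numerical check (ours) of the reconstruction's payoff change, with
# hypothetical utilities and a hypothetical starting mechanism.

def uA(x, th): return (th + 2) * x
def vA(x, th): return -4 * th * x
def uB(y, th): return -(1 + th) * y     # hypothetical: both types find y costly
def vB(y, th): return 0.0
def sB(y, th): return uB(y, th) + vB(y, th)

y0 = 0
mech = {0: (1.0, 0.5, 1.0), 1: (1.0, 0.25, 2.0)}        # theta -> (x, y, t)
# keep x, drop costly screening, adjust transfers by the agent's B-utility
recon = {th: (x, y0, t - uB(y, th)) for th, (x, y, t) in mech.items()}

probs = {0: 0.5, 1: 0.5}

def util(m, th):
    x, y, t = m[th]
    return uA(x, th) + uB(y, th) - t

def profit(m):
    return sum(p * (vA(m[th][0], th) + vB(m[th][1], th) + m[th][2])
               for th, p in probs.items())

# truthful utilities are unchanged ...
assert all(util(recon, th) == util(mech, th) for th in probs)
# ... and the principal gains exactly E[-s^B(y(theta), theta)] > 0
gain = profit(recon) - profit(mech)
assert gain == sum(p * -sB(mech[th][1], th) for th, p in probs.items())
assert gain > 0
```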
Because the reconstruction maintains the utility for each type under truthful reporting, $(\tilde{x}, \tilde{y}, \tilde{t})$ satisfies all IR constraints. However, this mechanism is not necessarily IC. Indeed, suppose for illustration that $u^B(y, \,\cdot\,)$ is strictly increasing, and for some $\hat{\theta} > \theta$, $\text{IC}[\theta \rightarrow \hat{\theta}]$ binds under $(x, y, t)$. Consider the same deviation under $(\tilde{x}, \tilde{y},\tilde{t})$: \begin{align*} \label{eq:upIC}
u^A(\tilde{x}(\theta), \theta) - \tilde{t}(\theta) &= u^A(x(\theta), \theta) + u^B(y(\theta), \theta) - t(\theta) \\
&= u^A(x(\hat{\theta}), \theta) + u^B(y(\hat{\theta}), \theta) - t(\hat{\theta}) \\
&< u^A(x(\hat{\theta}), \theta) + u^B(y(\hat{\theta}),\hat{\theta}) - t(\hat{\theta}) \\
&= u^A(\tilde{x}(\hat{\theta}), \theta) - \tilde{t}(\hat{\theta})\,, \tag{2} \end{align*} where the first and the last line follow by construction, the second line uses the binding IC constraint, and the third line uses that $u^B(y, \,\cdot\,)$ is strictly increasing. Therefore, $\text{IC}[\theta \rightarrow \hat{\theta}]$ is not satisfied under $(\tilde{x}, \tilde{y}, \tilde{t})$.
This demonstrates that the reconstruction does not work directly. However, the same reasoning also shows that all downward IC constraints are still satisfied after this reconstruction. Indeed, consider a downward deviation $[\theta \rightarrow \hat{\theta}]$ for any $\hat{\theta} < \theta$: \begin{align*} \label{eq:downIC}
u^A(\tilde{x}(\theta), \theta) - \tilde{t}(\theta) &= u^A(x(\theta), \theta) + u^B(y(\theta), \theta) - t(\theta) \\
&\geq u^A(x(\hat{\theta}), \theta) + u^B(y(\hat{\theta}), \theta) - t(\hat{\theta}) \\
&\geq u^A(x(\hat{\theta}), \theta) + u^B(y(\hat{\theta}),\hat{\theta}) - t(\hat{\theta}) \\
&= u^A(\tilde{x}(\hat{\theta}), \theta) - \tilde{t}(\hat{\theta})\,, \tag{3} \end{align*} where the first and the last line follow by construction, the second line follows from $(x, y, t)$ being IC, and the third line follows from that $u^B(y, \,\cdot\,)$ is nondecreasing. Therefore, $(\tilde{x}, \tilde{y}, \tilde{t})$ satisfies all downward IC constraints.
Let $\tilde{\mathcal{M}}(\Theta)$ denote the space of measurable maps $\Theta \rightarrow \mathcal{X} \times \mathcal{Y} \times \mathds{R}$ that are (i) IR, (ii) involve no costly screening, and (iii) satisfy all downward IC constraints.
The reconstruction argument gives the following lemma: \begin{lemma}\label{lem:dom} Consider any $(x, y, t) \in \mathcal{M}(\Theta)$. There exists some $(\tilde{x}, \tilde{y}, \tilde{t}) \in \tilde{\mathcal{M}}(\Theta)$ such that \[\E\big[v^A(x(\theta), \theta) + v^B(y(\theta), \theta) + t(\theta)\big]\leq \E\big[v^A(\tilde{x}(\theta), \theta) + v^B(\tilde{y}(\theta), \theta) + \tilde{t}(\theta)\big]\,.\] If the instruments are strictly costly and $\mathds{P}(y(\theta) = y_0) < 1$, then the above inequality is strict. \end{lemma}
By \Cref{lem:dom}, optimizing over $\tilde{\mathcal{M}}(\Theta)$ therefore yields an upper bound on the principal's payoff. Because $v^B(y_0, \theta) = u^B(y_0, \theta) = 0$, the principal then solves the following: \begin{alignat*}{2}\label{eq:1d} \sup_{(x, t):\text{ }\Theta \rightarrow \mathcal{X} \times \mathds{R}, \text{ measurable}} \E[&v^A(x(\theta), \theta) + t(\theta)] \tag{4} \\ \text{subject to}\quad u^A(x(\theta), \theta) - t(\theta) &\geq u^A(x(\hat{\theta}), \theta) - t(\hat{\theta}) \quad && \text{for all } \theta > \hat{\theta}\,, \\ u^A(x(\theta), \theta) - t(\theta) &\geq 0 && \text{for all } \theta\,. \end{alignat*} This problem is a one-dimensional screening problem except that all upward IC constraints are ignored. For future reference, we use $\eqref{eq:1d}^\dagger$ to denote the version of problem \eqref{eq:1d} with both the downward and upward IC constraints.
If we can show that there exists $(x^*, t^*)$ solving problem \eqref{eq:1d} and satisfying also all upward IC constraints, then both parts of \Cref{thm:main} follow. From now on, we drop the superscript $A$ whenever clear, as we will focus only on the productive component.
\subsection{Downward Sufficiency Theorem} \label{subsec:dst}
\begin{theorem}[Downward sufficiency]\label{thm:dbind} Consider any standard one-dimensional screening problem. Suppose the surplus function $s(x, \theta)$ satisfies the weak single-crossing differences condition. Then, there exists an optimal solution to \eqref{eq:1d} that also satisfies all upward IC constraints. \end{theorem}
Even with a continuous type space, this theorem cannot be proved by using a local approach because, as discussed in \Cref{subsec:example}, the infinitesimal downward IC constraints are not sufficient. One approach to prove this theorem is to construct a double continuum of Lagrange multipliers that place weights only on the downward constraints (under additional convexity assumptions). But doing so requires a new construction for each instance of $u$ and $v$. It is also unclear what the dual variables should be.
Our approach is variational. We first prove \Cref{thm:dbind} for the case of finite $\Theta$ and then for the general case using approximation. Let us suppose $|\Theta| = n < \infty$ and order types by $\theta_1 < \theta_2 < \cdots < \theta_n$. Let $\mu \in \Delta(\Theta)$ denote the distribution of $\theta$. We assume that $\mu$ has full support. Without loss of generality, suppose $0 \leq \theta_1$ and $\theta_n \leq 1$. A mechanism is then specified by $(x_1, x_2, \dots, x_n)$ and $(t_1, t_2, \dots, t_n)$. The principal's problem is given by \begin{alignat*}{2} \label{eq:finite} \max_{(x, t)\in \mathcal{X}^n \times \mathds{R}^n} \sum_i \mu(\theta_i) (&v(x_i, \theta_i) + t_i) \tag{5} \\ \text{subject to} \quad u(x_i, \theta_i) - t_i &\geq u(x_j, \theta_i) - t_j \quad && \text{for all } i > j\,, \\ u(x_i, \theta_i) - t_i &\geq 0 && \text{for all } i\,. \end{alignat*}
We replace $\sup$ with $\max$, as the existence of a solution follows from a standard compactness argument. In this finite-type case, we prove a stronger claim than \Cref{thm:dbind} by showing that every optimal downward IC mechanism must satisfy all upward incentive constraints. For any feasible solution $(x, t)$ to \eqref{eq:finite} that fails some upward incentive constraints, we will perturb it to obtain a new feasible solution that strictly improves the objective.
We start with some carefully chosen definitions. Fix any allocation rule $x \in \mathcal{X}^n$. Let \[S := \{i: x_i \geq x_j \text{ for all } j < i\} \cup \{n\}\] be the \textit{running maximum index set} of $x$ (including the last index).
A $U$\textit{-shaped region} $r$ is a set of indices of the form \[\{i, i+1, \dots, i'\}\]
such that (i) $i$ and $i'$ are two consecutive elements of $S$, and (ii) $x_i > x_{i+1}$. By definition, there is a finite sequence $\mathcal{L}$ of $U$-shaped regions. Let $L:= |\mathcal{L}|$ be the number of $U$-shaped regions. We write $o_l$ and $d_l$ for the starting and ending indices of the $l$-th $U$-shaped region $r_l$.
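The running maximum index set and the $U$-shaped regions are easy to compute; the following sketch (our own, with 1-based indices to match the text) implements the two definitions.

```python
# Sketch (ours) of the two definitions above; 1-based indices as in the text.

def running_max_set(x):
    """The running maximum index set S of x (last index always included)."""
    n = len(x)
    S = [i for i in range(1, n + 1)
         if all(x[j - 1] <= x[i - 1] for j in range(1, i))]
    if n not in S:
        S.append(n)
    return S

def u_shaped_regions(x):
    """Spans between consecutive S-elements i < i' with x_i > x_{i+1}."""
    S = running_max_set(x)
    return [list(range(i, ip + 1))
            for i, ip in zip(S, S[1:])
            if x[i - 1] > x[i]]

# Example: x = (1, 3, 2, 1, 3, 4) has S = {1, 2, 5, 6} and one U-shaped
# region {2, 3, 4, 5}, starting at o = 2 and ending at d = 5.
assert u_shaped_regions([1, 3, 2, 1, 3, 4]) == [[2, 3, 4, 5]]
```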
An \textit{optimal downward transfer operator} (or simply \textit{transfer operator}) is a map \[\mathbf{T}: \mathcal{X}^n \rightarrow \mathds{R}^n\] such that for any $x \in \mathcal{X}^n$, $\mathbf{T}[x]$ is an optimal solution to the following problem: \begin{alignat*}{2} \label{eq:tproblem} \max_{ t \in \mathds{R}^n} \sum_i \mu(\theta_i) (&v(x_i, \theta_i) + t_i)\, \tag{6} \end{alignat*} subject to the same downward IC and IR constraints as in \eqref{eq:finite}. That is, $\mathbf{T}[x]$ maps an allocation rule $x$ to the optimal transfer rule that implements $x$ in a downward IC fashion. At this stage, it is unclear whether such an operator $\mathbf{T}$ is defined for all $x \in \mathcal{X}^n$.
\paragraph{Step 1:} Our first step is to show that such an operator exists and is unique. In fact, we explicitly characterize $\mathbf{T}$, which will be used repeatedly in the perturbation argument in Step 2. For notational convenience, we write $\text{IC}[i \rightarrow j]$ (or simply $[i \rightarrow j]$) and $\text{IR}[i]$ as a shorthand for $\text{IC}[\theta_i \rightarrow \theta_j]$ and $\text{IR}[\theta_i]$, respectively. \begin{claim}\label{claim:transfer} There exists a unique transfer operator $\mathbf{T}$. For any $x \in \mathcal{X}^n$, $\mathbf{T}[x]$ is the solution to the system of equations defined by the following constraints with equality: $\textnormal{IR}[1]$, and \[\textnormal{IC}\big[i \rightarrow \max\{j \in S: j < i\}\big]\,,\] for all $i$. In particular, for all $i$, \[(\mathbf{T}[x])_i = u(x_i, \theta_i) - \sum_{j \in S \cup \{i\}:\, j \leq i}\Big(u(x_{\max\{k \in S: k < j\}}, \theta_{j}) - u(x_{\max\{k \in S: k < j\}}, \theta_{\max\{k \in S: k < j\}})\Big)\,. \label{eq:t} \tag{7}\] \end{claim}
In words, for an arbitrary $x \in \mathcal{X}^n$, there exists a unique optimal $t$ subject to $(x, t)$ being IR and downward IC. Moreover, for a given $x$, the binding IC constraints for $t$ go from every index $i > 1$ to the largest element in $S$ that is strictly less than $i$. \Cref{fig:transfer} illustrates how the $U$-shaped regions and the binding constraints in \Cref{claim:transfer} are identified.
\begin{figure}
\caption{$U$-shaped regions and binding constraints for a fixed allocation rule}
\label{fig:transfer}
\end{figure}
As the figure shows, equivalently, the local IC constraints bind until one travels into a $U$-shaped region (beginning with, say, index $o$) where the binding constraints then point toward $\theta_{o}$. This provides some further intuition about \eqref{eq:t}, as \eqref{eq:t} can then be written as \[\label{eq:t2} (\mathbf{T}[x])_i = u(x_i, \theta_i) - \underbrace{\sum_{j=1,2,\dots, i-1: \ j \in Q} \Big( u(x_{j}, \theta_{j+1}) - u(x_{j}, \theta_{j}) \Big) }_\text{local} - \underbrace{\sum_{l=1,2,\dots, L: \ o_l < i} \Big( u(x_{o_l}, \theta_{\min(d_l, i)}) - u(x_{o_l}, \theta_{o_l}) \Big) }_\text{nonlocal} \,, \tag{8} \] where $Q$ is the region where $x$ is monotone in an appropriate sense.\footnote{Formally, $Q = \big \{1\leq j \leq n: j \not \in r_l \text{ for all $l$, or } j = d_l \neq o_{l+1} \text{ for some $l$} \big\}$.} The first sum arises from the binding local downward IC constraints, and the second sum arises from the binding nonlocal downward IC constraints. If $x$ is monotone, then \eqref{eq:t2} will reduce to the standard transfer formula in one-dimensional screening models, in which only the local downward IC constraints matter (for transfers). However, importantly, we cannot rule out any $x \in \mathcal{X}^n$ by implementability here since we do not have upward incentive constraints. The transfer rule $\mathbf{T}[x]$ depends on the \textit{shape} of the allocation rule $x$.
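Formula (7) can be implemented directly. The sketch below (ours) computes $\mathbf{T}[x]$ for an arbitrary allocation rule and, as a check, recovers the standard local transfer formula when $x$ is monotone.

```python
# Sketch (ours) of the transfer operator T from formula (7).
# x, theta are lists; indices in the math are 1-based, lists are 0-based.

def running_max_set(x):
    """Indices i with x_i >= x_j for all j < i, plus the last index n."""
    n = len(x)
    S = [i for i in range(1, n + 1)
         if all(x[j - 1] <= x[i - 1] for j in range(1, i))]
    if n not in S:
        S.append(n)
    return S

def transfer(x, theta, u):
    """(T[x])_i from (7): IR[1] binds and IC[i -> max{k in S : k < i}] binds."""
    n = len(x)
    S = running_max_set(x)

    def m(j):  # largest element of S strictly below j, if any
        below = [k for k in S if k < j]
        return max(below) if below else None

    t = []
    for i in range(1, n + 1):
        total = u(x[i - 1], theta[i - 1])
        for j in sorted(set(S) | {i}):
            if j > i or m(j) is None:   # j = 1 contributes nothing (IR[1] binds)
                continue
            k = m(j)
            total -= u(x[k - 1], theta[j - 1]) - u(x[k - 1], theta[k - 1])
        t.append(total)
    return t

# For a monotone x, T[x] reduces to the standard local transfer formula:
u_lin = lambda xx, th: th * xx
assert transfer([1, 2, 3], [0, 1, 2], u_lin) == [0, 1, 3]
```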
\begin{proof}[Proof of \Cref{claim:transfer}]
Relax all the constraints in \eqref{eq:tproblem} except the ones indicated in \Cref{claim:transfer}. We will show the following: First, these constraints must bind in the relaxed problem. Second, these constraints binding imply all downward IC constraints and all IR constraints. Third, there is a unique solution to the system of equations defined by these binding constraints. \Cref{claim:transfer} then follows.
Note that for every $i > 1$, there is precisely one corresponding constraint $[i \rightarrow j]$ for some $j$. If this constraint does not bind at some mechanism $(x, t)$, then simply set $\tilde{t}_i = t_i + \epsilon$ for some $\epsilon > 0$ small enough so that $[i \rightarrow j]$ still holds. This clearly increases the objective. It also does not distort other IC constraints. Indeed, the only other IC constraints this change affects are of the form $[k \rightarrow i]$ for some $k$, but \[u(x_k, \theta_k) - t_k \geq u(x_i, \theta_k) - t_i \geq u(x_i, \theta_k) - \tilde{t}_i \,.\] Therefore, all the IC constraints identified in \Cref{claim:transfer} must bind. Similarly, $\text{IR}[1]$ binds.
Given that these constraints bind, we now show that they imply all the downward IC constraints in \eqref{eq:tproblem}. We first collect two lemmas: \begin{lemma}[Local to global]\label{lem:log} Let $i > j > k$. If $[i \rightarrow j]$, $[j \rightarrow k]$ hold and $x_j \geq x_k$, then $[i \rightarrow k]$. \end{lemma}
\begin{lemma}[Global to local]\label{lem:gol} Let $i > j > k$. If $[i \rightarrow k]$, $[j \rightarrow k]$ bind and $x_j \leq x_k$, then $[i \rightarrow j]$. \end{lemma} \Cref{lem:log} is standard; it follows from a revealed-preference argument using the single-crossing property of $u$ (we include a proof in the appendix for completeness). \Cref{lem:gol} appears to be new; it requires two binding IC constraints and follows from a revealed-preference argument that subtracts the two constraints. The appendix provides details.
We show all downward IC constraints are satisfied by induction on the number of $U$-shaped regions $L$. When $L = 0$, all downward IC constraints hold by successively applying \Cref{lem:log} and building up from the adjacent local downward constraints. Suppose the claim holds for $L - 1$. Let us denote the last region as $r$ with starting index $o$ and end index $d$. By the inductive hypothesis, all downward IC constraints $[i \rightarrow j]$ are satisfied if $j < i \leq o$. We divide the remaining pairs $(j, i)$ with $j < i$ into two cases: \begin{itemize}
\item[] Case (1): $o \leq j < i$. We make the following observations:
\begin{itemize}
\item[(a)] if $o \leq j < i \leq d$, then $[i \rightarrow j]$ follows by the binding IC constraints $[i \rightarrow o]$, $[j \rightarrow o]$, $x_j \leq x_o$, and \Cref{lem:gol};
\item[(b)] if $d \leq j < i$, then $[i \rightarrow j]$ follows by successively applying \Cref{lem:log};
\item[(c)] if $j < d < i$, then $[i \rightarrow j]$ follows by $[i \rightarrow d]$ from (b), $[d \rightarrow j]$ from (a), $x_d \geq x_j$, and \Cref{lem:log}.
\end{itemize}
\item[] Case (2): $j < o < i$. Note that $x_j \leq x_o$ for all $j < o$. Then, $[i \rightarrow j]$ follows by $[i \rightarrow o]$ from Case (1), $[o \rightarrow j]$ from the inductive hypothesis, $x_o \geq x_j$, and \Cref{lem:log}. \end{itemize}
Together, these cover all the downward IC constraints and prove the inductive step. The IR constraints follow easily from $\text{IR}[1]$ and $\text{IC}[i \rightarrow 1]$ and that $u(x, \,\cdot\,)$ is nondecreasing.
The binding constraints define a system of $n$ equations for $t$. With some calculations, it is not hard to see that these equations can be solved successively starting from the lowest one. In particular, by induction, the solution is uniquely defined by \eqref{eq:t}. \end{proof}
\paragraph{Step 2:} Consider any feasible $(x, t)$ to \eqref{eq:finite} that does not satisfy some upward IC constraints. We are now ready to construct a perturbation that strictly improves on $(x, t)$.
There are two cases: (i) $x$ is monotone and (ii) $x$ is not monotone. The first case is simple. As noted before, when $x$ is monotone, $\mathbf{T}[x]$ reduces to the standard transfer formula and hence $(x, \mathbf{T}[x])$ must satisfy all incentive constraints including the upward ones. Then, $t$ cannot be identically equal to $\mathbf{T}[x]$. But that implies $(x, t)$ can be strictly improved by $(x, \mathbf{T}[x])$ by \Cref{claim:transfer}.
Consider the second case where $x$ is not monotone. Then, there must exist a $U$-shaped region. Let $r$ be the first $U$-shaped region, $o$ its starting index, and $d$ its end index.
By definition of $\mathbf{T}$, it suffices to construct a perturbation $\tilde{x}$ of $x$ such that $(\tilde{x}, \mathbf{T}[\tilde{x}])$ strictly improves upon $(x, \mathbf{T}[x])$. The perturbation we construct will act on a carefully chosen set of types. In particular, let \[g = \min \big\{j > o: x_j \geq x_o \big\}\] denote the first index after $o$ with associated allocation no less than $x_o$. Put $g = n+1$ if the above set is empty. Then, either $g = d$ or $g = n+1$.
Let \[\hat{x} = \max \big\{x_j: o < j < g \big\}\] denote the largest allocation for indices strictly between $o$ and $g$. We have $\hat{x}<x_o$. Let $j^* \in \{o+1, \dots, g-1\}$ be the first index achieving the above maximum and let $\hat{\theta} = \theta_{j^*}$.
Let \[k = \min \big \{j: x_j > \hat{x} \big \} \] denote the first index whose associated allocation is strictly higher than $\hat{x}$. Since $x_o > \hat{x}$, we have $k \leq o$. Because $r$ is the first $U$-shaped region, we have \[\hat{x} < x_{k} \leq x_{k+1} \leq \cdots \leq x_o \,.\]
Consider the following perturbation $\tilde{x}$: for all $j \in \{1, \dots, n\}$, \[\tilde{x}_j := \begin{cases} \hat{x} & \text{ if } j \in \{ k, k+1, \dots, o\}\,; \\ x_j & \text{ otherwise.} \\ \end{cases}\] \Cref{fig:allocation} illustrates. \begin{figure}
\caption{Perturbation of the allocation rule}
\label{fig:allocation}
\end{figure}
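The construction of $o$, $g$, $\hat{x}$, $k$, and the perturbed rule $\tilde{x}$ can be sketched as follows (our own illustrative implementation, 1-based indices; it assumes $x$ is not monotone, so that a $U$-shaped region exists).

```python
# Sketch (ours) of the Step-2 perturbation; 1-based indices as in the text.
# Assumes x is not monotone, so a U-shaped region exists.

def perturb(x):
    n = len(x)
    # running maximum index set S (last index always included)
    S = [i for i in range(1, n + 1)
         if all(x[j - 1] <= x[i - 1] for j in range(1, i))]
    if n not in S:
        S.append(n)
    # o: starting index of the first U-shaped region
    o = next(i for i, ip in zip(S, S[1:]) if x[i - 1] > x[i])
    # g: first index after o with allocation >= x_o (n + 1 if none)
    g = next((j for j in range(o + 1, n + 1) if x[j - 1] >= x[o - 1]), n + 1)
    # x-hat: largest allocation strictly between o and g
    xhat = max(x[j - 1] for j in range(o + 1, g))
    # k: first index whose allocation strictly exceeds x-hat (so k <= o)
    k = next(j for j in range(1, n + 1) if x[j - 1] > xhat)
    # flatten x_k, ..., x_o down to x-hat; leave everything else unchanged
    return [xhat if k <= j <= o else x[j - 1] for j in range(1, n + 1)]

# Example: the allocation (1, 3, 2, 1, 3, 4) has o = 2, g = 5, x-hat = 2, k = 2.
assert perturb([1, 3, 2, 1, 3, 4]) == [1, 2, 2, 1, 3, 4]
```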
\begin{claim} \label{claim:perturb} The objective of \eqref{eq:finite} under $(\tilde{x}, \mathbf{T}[\tilde{x}])$ is strictly higher than that under $(x, \mathbf{T}[x])$. \end{claim}
\begin{proof}[Proof of \Cref{claim:perturb}] The main difficulty here is that $\mathbf{T}$ is a complicated operator because of the binding nonlocal incentive constraints. The perturbed allocation $\tilde{x}$ is constructed in such a way that it maintains the \textit{form} of $\mathbf{T}$ in the following sense. Define the function $\varphi: \mathcal{P}(\{1,\dots, n\}) \times \mathcal{X}^n \rightarrow \mathds{R}^n$ by \[(\varphi(K, a))_i := u(a_i, \theta_i) - \sum_{j \in K \cup \{i\}:\, j \leq i}\Big(u(a_{\max\{k \in K: k < j\}}, \theta_j) - u(a_{\max\{k \in K: k < j\}}, \theta_{\max\{k \in K: k < j\}})\Big)\] for all $i = 1,\dots, n$, where $\mathcal{P}(\,\cdot\,)$ is the power set operator. For any allocation rule $a \in \mathcal{X}^n$, let $S_a$ be the running maximum index set as defined earlier. Then, by \eqref{eq:t}, $\mathbf{T}[a] = \varphi(S_a, a)$. \begin{lemma}\label{lem:form} Consider any two allocation rules $a, b \in \mathcal{X}^n$ with running maximum index sets $S_a, S_b$. Suppose $S_a \subseteq S_b$, and $b_i = b_{\max\{j \in S_a: j < \min(S_b \backslash S_a)\}}$ for all $i \in S_b \backslash S_a$. Then, \[\mathbf{T}[b] = \varphi(S_b, b) = \varphi(S_a, b)\,.\] \end{lemma}
The appendix provides the proof of this lemma. Note that by construction $x$, $\tilde{x}$ always satisfy the conditions in \Cref{lem:form}. Applying \Cref{lem:form} to $x$, $\tilde{x}$ gives $\mathbf{T}[\tilde{x}] = \varphi(S, \tilde{x})$, where $S$ is the running maximum index set of $x$. We show that the objective of \eqref{eq:finite}, after plugging in $\varphi(S, \,\cdot\,)$, weakly increases on the parts involving $x_k, x_{k+1}, \dots, x_{o-1}$ (which may be an empty set) and strictly increases on the parts involving $x_o$ (which always exist).
Fix any $j \in \{k,k+1,\dots,o-1\}$. Plugging $\varphi(S, \,\cdot\,)$ into the objective of \eqref{eq:finite} and collecting terms involving $x_j$ gives \[\label{eq:virtual} s(x_j, \theta_j)\mu(\theta_j) - \big(u(x_j, \theta_{j+1}) - u(x_j, \theta_{j})\big)\sum_{i > j} \mu(\theta_i) \,.\tag{9} \] Now consider the terms involving $x_{j^*}$. Because $o < j^* < g$, there is no IC constraint pointing toward $j^*$ by \Cref{claim:transfer}. Therefore, there is only one such term: \[s(x_{j^*}, \theta_{j^*}) \mu(\theta_{j^*})\,.\] Note that $x_j \in \mathcal{X}$ is feasible to assign to $\theta_{j^*}$. Moreover, since $x_j \leq x_o$, doing so maintains the form of $\mathbf{T}$ by \Cref{lem:form}, and thus generates a payoff also according to the above formula. The fact that $x$ is optimal then implies \[s(\hat{x}, \hat{\theta}) \geq s(x_j, \hat{\theta})\,;\] that is, \[ s(x_j, \hat{\theta}) - s(\hat{x}, \hat{\theta})\leq 0\,.\] Because $x_j > \hat{x}$ and $ \theta_j < \hat{\theta}$, by the weak single-crossing differences property of $s$, \[\label{eq:surplus} s(x_j, \theta_j) - s(\hat{x}, \theta_j)\leq 0 \,.\tag{10}\] Moreover, because $x_j > \hat{x}$, by the strict increasing differences property of $u$, \[\label{eq:util} u(x_j, \theta_{j+1}) - u(x_j, \theta_{j}) > u(\hat{x}, \theta_{j+1}) - u(\hat{x}, \theta_{j})\,. \tag{11}\] Combining \eqref{eq:surplus} and \eqref{eq:util} gives \[s(x_j, \theta_j)\mu(\theta_j) - \big(u(x_j, \theta_{j+1}) - u(x_j, \theta_{j})\big)\sum_{i > j} \mu(\theta_i) \leq s(\hat{x}, \theta_j)\mu(\theta_j) - \big(u(\hat{x}, \theta_{j+1}) - u(\hat{x}, \theta_{j})\big)\sum_{i > j} \mu(\theta_i)\,, \] proving that the part of the objective involving $x_j$ increases.
Because this holds for all $j \in\{ k, k+1, \dots, o-1 \}$, to conclude our proof, it remains to show that the part of the objective involving $x_o$ strictly increases. Plugging $\varphi(S, \,\cdot\,)$ into \eqref{eq:finite} and collecting terms involving $x_o$ gives \[s(x_o, \theta_o) \mu(\theta_o) - \sum_{i=o+1}^g \mu(\theta_i)\big (u(x_o, \theta_{i})- u(x_o, \theta_o) \big ) - \big (u(x_o, \theta_g) - u(x_o, \theta_o) \big)\sum_{i>g} \mu(\theta_i)\,. \] By the same argument as the previous case, we have \[s(x_o, \theta_o) \leq s(\hat{x}, \theta_o)\,.\] For any $i > o$, by the strict increasing differences property of $u$, \[u(x_o, \theta_{i})- u(x_o, \theta_o) > u(\hat{x}, \theta_{i})- u(\hat{x}, \theta_o)\,. \] Together they imply \begin{align*} s(x_o, \theta_o) \mu(\theta_o) - \sum_{i=o+1}^g \mu(\theta_i)\big (u(x_o, \theta_{i})- u(x_o, \theta_o) \big ) - \big (u(x_o, \theta_g) - u(x_o, \theta_o) \big)\sum_{i>g} \mu(\theta_i) \\ < s(\hat{x}, \theta_o) \mu(\theta_o) - \sum_{i=o+1}^g \mu(\theta_i)\big (u(\hat{x}, \theta_{i})- u(\hat{x}, \theta_o) \big ) - \big (u(\hat{x}, \theta_g) - u(\hat{x}, \theta_o) \big)\sum_{i>g} \mu(\theta_i)\,, \end{align*} where the strict inequality also uses that $\mu$ has full support. \end{proof}
\paragraph{Step 3:} We prove \Cref{thm:dbind} for a general type space $\Theta$ by approximation. We give a sketch of the argument here and leave the details to the appendix. Let $\mu \in \Delta(\Theta)$ denote the distribution on $\Theta$. Recall that $\eqref{eq:1d}^\dagger$ denotes the version of program $\eqref{eq:1d}$ with all IC constraints (both downward and upward). Let $V(\Theta, \mu)$ denote the optimal value of $\eqref{eq:1d}^\dagger$ given $(\Theta, \mu)$. We show that $V(\Theta, \mu)$ equals the optimal value of $\eqref{eq:1d}$. Suppose, for contradiction, there exists some $(\hat{x}, \hat{t})$ feasible for $\eqref{eq:1d}$ such that \[\label{eq:contrad} V(\Theta, \mu) < \E^\mu[v(\hat{x}(\theta), \theta) + \hat{t}(\theta)] \,.\tag{12}\] We first construct an appropriate sequence $\{(\Theta^{(n)}, \mu^{(n)})\}$ approximating $(\Theta, \mu)$. \begin{lemma}\label{lem:app} Suppose $v(x, \theta)$ is Lipschitz continuous on $\mathcal{X} \times \Theta$. Then, there exists a sequence $\{(\Theta^{(n)}, \mu^{(n)})\}$ with $\Theta^{(n)}\subseteq \Theta$ finite and $\mu^{(n)}\in \Delta(\Theta^{(n)})$ with full support such that \begin{itemize}
\item[(i)] $\mu^{(n)} \rightarrow_w \mu$\,;
\item[(ii)] $\displaystyle \limsup_{n \rightarrow \infty} V(\Theta^{(n)}, \mu^{(n)}) \leq V(\Theta, \mu) $\,. \end{itemize} \end{lemma}
Suppose for a moment that $\hat{x}, \hat{t}$ are continuous on $\Theta$ and $v$ is Lipschitz continuous. Note that $(\hat{x}, \hat{t})$ restricted to $\Theta^{(n)}$ is a feasible solution to the finite-type version of \eqref{eq:1d} with $(\Theta^{(n)}, \mu^{(n)})$. By Step 2, we have $V(\Theta^{(n)}, \mu^{(n)}) \geq \E^{\mu^{(n)}}[v(\hat{x}(\theta), \theta) + \hat{t}(\theta)]$. Because $v(\hat{x}(\theta), \theta) + \hat{t}(\theta)$ is a bounded continuous function on $\Theta$, using \Cref{lem:app} and taking limits on both sides of the above, we have \[V(\Theta, \mu) \geq \limsup_{n \rightarrow \infty} V(\Theta^{(n)}, \mu^{(n)}) \geq \limsup_{n \rightarrow \infty} \E^{\mu^{(n)}}[v(\hat{x}(\theta), \theta) + \hat{t}(\theta)] = \E^\mu[v(\hat{x}(\theta), \theta) + \hat{t}(\theta)]\,, \] contradicting \eqref{eq:contrad}. However, the situation is more delicate in general. The proof (in the appendix) relies on the Stone–Weierstrass theorem and Lusin’s theorem.
Finally, to conclude \Cref{thm:dbind}, it suffices to show the existence of an optimal solution to the full IC program $\eqref{eq:1d}^\dagger$. Even though this is a standard one-dimensional problem, the existence result appears to be new at this level of generality. \begin{lemma}\label{lem:exist} Any standard one-dimensional screening problem has a solution. \end{lemma} The proof (in the appendix) proceeds by showing that the space of IC and IR mechanisms is sequentially compact in the product topology. The argument uses a generalized version of Helly's selection theorem from \citet{fuchino1999theorem}.
\subsection{Reduction} \label{subsec:reduction}
For the more general case of positive correlation (instead of perfect correlation), we reduce the problem to a family of subproblems, each of which has $\theta^B = \theta^A$. Suppose $\Theta^B = \Theta^A = [0, 1]$ and $\theta^B$ is stochastically monotone in $\theta^A$. Let $\varepsilon$ be an independent uniform $[0, 1]$ draw. Note that the random vector $(\theta^A, \theta^B)$ can be simulated by \[(\theta^A, \theta^B) \eqid (\theta^A, F^{-1}(\varepsilon \mid \theta^A)) \,,\]
where $F^{-1}(\, \cdot \, | \, \theta^A)$ is the generalized inverse function of $F(\, \cdot \, | \,\theta^A)$. Our positive correlation condition states that $\theta^B \mid \theta^A$ shifts upward in the sense of stochastic dominance as $\theta^A$ increases. This implies that $F^{-1}(\varepsilon \,|\, \cdot \, )$ is a nondecreasing function. Therefore, if we reveal the realization of $\varepsilon$ to the principal and let the principal design a mechanism contingent on $\varepsilon$, the problem reduces to the case in which $\theta^B = h_\varepsilon(\theta^A)$ for some nondecreasing function $h_\varepsilon$. Now, simply define $\tilde{u}^B_{\varepsilon}(y, \theta^A):=u^B(y, h_{\varepsilon}(\theta^A))$ and $\tilde{v}^B_{\varepsilon}(y, \theta^A) := v^B(y, h_{\varepsilon}(\theta^A))$. For each realization of $\varepsilon$, we can apply our result from the previous two subsections to the screening problem with utility functions $(u^A, \tilde{u}^B_\varepsilon, v^A, \tilde{v}^B_\varepsilon)$ and perfect correlation $\theta^B = \theta^A$. Thus, for each realization of $\varepsilon$, simply screening the productive component is optimal, but that mechanism does not depend on $\varepsilon$ and hence must be optimal in the original problem.
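A minimal sketch of this simulation step, under an assumed conditional law (here $\theta^B \mid \theta^A = a$ uniform on $[0, a]$, which shifts up in first-order stochastic dominance as $a$ increases; the conditional CDF and the bisection routine are our own illustrative choices, not objects from the paper):

```python
# Inverse-transform coupling (theta^A, F^{-1}(eps | theta^A)).
def cond_cdf(t, a):
    # illustrative conditional: theta^B | theta^A = a is Uniform[0, a]
    return min(max(t / a, 0.0), 1.0)

def generalized_inverse(eps, a, tol=1e-9):
    # F^{-1}(eps | a) = inf { t : F(t | a) >= eps }, found by bisection on [0, a]
    lo, hi = 0.0, a
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if cond_cdf(mid, a) >= eps:
            hi = mid
        else:
            lo = mid
    return hi

# For each fixed eps, h_eps(a) = F^{-1}(eps | a) is nondecreasing in a, so
# conditioning on eps reduces the problem to theta^B = h_eps(theta^A).
eps = 0.3
h = [generalized_inverse(eps, a) for a in (0.5, 1.0, 1.5, 2.0)]
assert all(x <= y + 1e-6 for x, y in zip(h, h[1:]))
assert abs(h[1] - 0.3) < 1e-6  # for Uniform[0, a], F^{-1}(eps | a) = eps * a
```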
\begin{rmk} In the more general case where $\theta^B$ is multidimensional, to perform the above reduction, we use an insight due to \citet{haghpanah2021pure} who make use of a classic result of \citet{Strassen1965} on monotone coupling. For measurability issues, we prove a measurable monotone coupling lemma (see \Cref{lem:decomp} in \Cref{app:add}), building on a result by \citet{kamae1978stochastic}. \end{rmk}
\section{Monopoly Pricing with Costly Signals} \label{sec:csignal}
Before discussing other applications of \Cref{thm:main} in \Cref{sec:app}, we specialize the main model to a setting of monopoly pricing with costly signals. In this setting, a monopolist sells a spectrum of quality-differentiated goods and can make the menu of offers contingent on the costly actions that a buyer may take.
In \Cref{subsec:separable}, we show that if the buyer's utility functions are \textit{multiplicatively separable} within each component, then the positive correlation of preferences condition can be weakened to the positive correlation between the preferences for the productive component and the \textit{marginal rates of substitution} between the productive and costly components.
In \Cref{subsec:multigood}, we consider a multiple-good monopolist selling different qualities of bundles (with no costly signals). This environment generalizes the classic multiple-good monopoly problem by allowing for both probabilistic bundling and quality discrimination. We show that (a relaxation of) this problem can be mapped to a monopolist selling a spectrum of quality-differentiated goods without bundling but with costly signals as in \Cref{subsec:separable}. A key insight is that one can view selling the grand bundle as the productive component, and selling a smaller bundle \textit{instead of} the grand bundle as a costly instrument for screening a consumer's value for the grand bundle. Using this perspective, we generalize a result of \citet{haghpanah2021pure}. In particular, we show that under their stochastic ratio monotonicity condition, the general feature of the optimal mechanism is to post a menu of different qualities of the grand bundle --- the monopolist screens only the productive component and does not use any of the ``costly signals.''
\paragraph{Setup.}\hspace{-2mm}A monopolist sells a quality-differentiated spectrum of goods. A buyer of type $\theta^A \in \Theta^A$ receives utility $u^A(x, \theta^A)$ from consuming the good of quality $x \in \mathcal{X}$. The seller incurs a cost $C(x, \theta^A)$ to produce the good of quality $x$ for type $\theta^A$. Suppose $u^A(x, \theta^A)$ is nondecreasing in $\theta^A$ and has strict increasing differences, and that the surplus function $u^A(x, \theta^A) - C(x, \theta^A)$ has weak single-crossing differences. (The continuity and compactness assumptions in \Cref{sec:model} are also maintained.)
Besides offering a menu of products of different qualities and prices, the monopolist can make the offers contingent on various costly signals (e.g. waiting in line, collecting coupons, walking up stairs). A costly signal is represented by $y \in \mathcal{Y}$. To obtain a signal $y$, a buyer of type $\theta^B \in \Theta^B$ incurs a cost $c(y, \theta^B)$ that is nonincreasing in $\theta^B$ (so $\theta^B$ represents the willingness to endure various costly activities).
\Cref{thm:main} then says that if $\theta^B$ is positively correlated with $\theta^A$ according to our notion, then the monopolist never makes a higher profit by using these costly signals. Therefore, if the monopolist in fact uses these instruments, then we should expect that the consumers with higher willingness to pay tend to incur higher costs to obtain the signals (both measured with respect to the constant marginal value for money). In fact, sometimes we can say more when the buyer's utility functions are multiplicatively separable within each component, which we turn to next.
\subsection{Marginal Rates of Substitution Between Two Components} \label{subsec:separable} We follow the notation in the above setup, and let $\mathcal{X}$ be $[0, 1]$, $\mathcal{Y}$ any measurable space, $\Theta^A$ any compact subset of $\mathds{R}_{++}$, and $\Theta^B$ any compact subset of $\mathds{R}^N_{-}$. We say that the buyer has \textit{multiplicatively separable} utilities within each additive component if for any quality $x$, signal $y$, and price $t$, the buyer's payoff can be written as \[\theta^A u(x) + \theta^B \cdot c(y) - t\,,\] where $u:\mathcal{X} \rightarrow \mathds{R}$ is a continuous and strictly increasing function satisfying $u(0) = 0$, and $c: \mathcal{Y} \rightarrow \mathds{R}_+^{N}$ is a bounded measurable function satisfying $c(y_0) = 0$ for some $y_0 \in \mathcal{Y}$.\footnote{Note that a utility of the form $f^A(\theta^A) u(x) + f^B(\theta^B) \cdot c(y)$ provides (essentially) no additional generality. }
We say that the monopolist's cost function is \textit{not interdependent} if $C(x,\theta^A)$ does not depend on $\theta^A$, in which case without loss of generality we let $C(0) = 0$.
Recall the notation $\theta^B = (\theta^1, \dots, \theta^N)$. Let $r^i = \frac{\theta^i}{\theta^A}$ and $r^B = (r^1, \dots, r^N)$. Note that $r^B \leq 0$. We interpret $r^B$ as the (negative) \textit{marginal rates of substitution} between the productive and costly components.\footnote{The substitution here is between the \textit{utility} $u(x)$ from the productive component and the \textit{disutility} $c(y)$ from the costly component, and hence has negative marginal rates.} In this setting, we show that our assumption of positive correlation between $\theta^A$ and $\theta^B$ can be weakened to that between $\theta^A$ and $r^B$.\footnote{This is in general not necessarily a weaker condition; it is so in this case because $\theta^B \leq 0$.} \begin{prop}\label{prop:linear} Suppose the seller's cost function $C$ is continuous, nondecreasing in $x$, and not interdependent; the buyer's utilities are multiplicatively separable; and $r^B$ is stochastically nondecreasing in $\theta^A$. Then, there exists an optimal mechanism that involves no costly screening. \end{prop}
The intuition behind this result can be understood in the same way as in the proof of the main result (\Cref{sec:proof}). We show that when the marginal rates of substitution are increasing in the values, instead of using the costly signals, the principal can simply adjust the allocations of the productive component while maintaining the downward IC constraints. Because downward IC constraints are sufficient, the result follows. Unlike in \Cref{sec:proof}, in this case we substitute the costly signals with a decrease in the productive allocations holding the monetary transfers fixed, which is why the marginal rates of substitution between the two components play an important role here.
\begin{proof}[Proof of \Cref{prop:linear}] By \Cref{lem:decomp} in \Cref{app:add}, as in \Cref{subsec:reduction}, it suffices to show the case where $r^B = h(\theta^A)$ for some nondecreasing function $h: \Theta^A \rightarrow \mathds{R}^N$. Thus, we may assume for all $i$, $r^i$ is deterministic conditional on $\theta^A$ and nondecreasing in $\theta^A$. Fix any $(x, y, t)$ that is IC and IR. We may assume $t \geq 0$, because the monopolist can simply replace all options with negative profits in the menu with $(0, y_0, 0)$ and weakly increase the total profit (since the monopolist's cost function does not depend on the buyer's type). Now we apply a reconstruction argument as follows. Consider the modification: $\tilde{t} = t, \tilde{y} = y_0$, \[\tilde{x}(\theta) = u^{-1}\big(u(x(\theta)) + \frac{1}{\theta^A} [\theta^B \cdot c(y(\theta))] \big) \,.\] Because $u(\,\cdot\,)$ is continuous and strictly increasing with $u(0)=0$, $u^{-1}$ is defined on $[0, u(1)]$. Moreover, because $(x, y, t)$ is IR and $t \geq 0$, we have $0 \leq u(x(\theta)) + \frac{1}{\theta^A} [\theta^B \cdot c(y(\theta)) ]\leq u(x(\theta))$ for all $\theta$. So the modification is well-defined and $0 \leq \tilde{x} \leq x$ pointwise. In other words, the modified mechanism decreases the productive allocation to substitute the costly screening so that all types have the same utilities as before, \textit{assuming} truthful reporting.
Because $C(\,\cdot\,)$ is nondecreasing, this modification increases the objective, assuming truthful reporting. It is IR by construction. Moreover, it is downward IC: for any $\hat{\theta}^A < \theta^A$, \begin{align*} \theta^A u(\tilde{x}(\theta)) - \tilde{t}(\theta) &= \theta^A u(x(\theta)) + \theta^B \cdot c(y(\theta)) - t(\theta)\\ &\geq \theta^A u(x(\hat{\theta})) + \theta^B \cdot c(y(\hat{\theta})) - t(\hat{\theta})\\ &= \theta^A \big( u(x(\hat{\theta})) + r^B \cdot c(y(\hat{\theta})) \big) - t(\hat{\theta}) \\ &\geq \theta^A \big( u(x(\hat{\theta})) + \hat{r}^B \cdot c(y(\hat{\theta})) \big) - t(\hat{\theta}) = \theta^A u(\tilde{x}(\hat{\theta})) - \tilde{t}(\hat{\theta})\,. \end{align*} The first inequality holds because $(x, y, t)$ is IC. The second inequality holds because $\hat{r}^B \leq r^B$ and $c \geq 0$. Invoking \Cref{thm:dbind} concludes the proof. \end{proof}
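The algebraic facts driving this proof, namely that $\tilde{x}$ is well defined, satisfies $0 \leq \tilde{x} \leq x$, and leaves every truthful type's utility unchanged, can be sanity-checked numerically. The sketch below makes the illustrative choice $u(x) = x$ (so $u^{-1}$ is the identity) and draws random menu entries satisfying IR with $t \geq 0$; all numbers are our own test data:

```python
from fractions import Fraction as F
import random

random.seed(1)
for _ in range(1000):
    thA = F(random.randrange(1, 10))              # theta^A > 0
    x = F(random.randrange(0, 101), 100)          # allocation in [0, 1]
    bc = -F(random.randrange(0, 101), 100)        # theta^B . c(y) <= 0
    t_max = thA * x + bc                          # IR requires t <= thA*x + bc
    if t_max < 0:
        continue                                  # draw cannot satisfy IR; skip
    t = t_max * F(random.randrange(0, 100), 100)  # transfer with 0 <= t <= t_max
    x_tilde = x + bc / thA                        # u^{-1}(u(x) + bc/thA), u(x) = x
    assert 0 <= x_tilde <= x                      # modification is well defined
    assert thA * x_tilde - t == thA * x + bc - t  # truthful utility is unchanged
```

The second assertion is exactly the first line of the downward-IC chain in the proof, evaluated at truthful reports.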
\subsection{Bundling and Quality Discrimination} \label{subsec:multigood}
We now show an application to a multiple-good monopoly problem allowing for both probabilistic bundling and quality discrimination.
A monopolist sells $G$ goods to a unit mass of consumers. For each bundle $b$, a random consumer has value $v^b$ for getting the highest quality version of the bundle with probability one. We assume that $v^b \leq v^{b'}$ for all $b \subset b'$ and $v^{\varnothing} = 0$. The monopolist can use probabilistic bundling, captured by a bundling allocation rule $v \mapsto \alpha(v) \in \Delta(2^G)$. In addition, the monopolist can adjust the quality of each bundle, captured by a quality allocation rule $v \mapsto q(v) \in [0, 1]^{2^G}$. A type-$v$ consumer's payoff is given by \[\sum_{b} \alpha^b q^b v^b - t\,. \] The monopolist incurs a cost to improve the quality of a bundle, with a payoff given by \[-\sum_{b} \alpha^b C(q^b) + t\,,\] where $C(\,\cdot\,)$ is a continuous, nondecreasing, and convex function on $[0,1]$ with $C(0)=0$. This cost structure assumes that the cost of producing a bundle of some quality does not depend on the size of the bundle, which is perhaps more suitable for digital goods.
Let $v^*$ be the value of a random consumer for the grand bundle and $\tau = (\frac{v^b}{v^*})_{b=1,\dots,2^{G}}$ be the profile of values for each bundle relative to the grand bundle.
\begin{prop} \label{prop:bundle} If $\tau$ is stochastically nondecreasing in $v^*$, then an optimal mechanism exists and can be implemented by a menu of prices for different qualities of the grand bundle. \end{prop}
This result is a natural consequence of \Cref{prop:linear} once one views selling the grand bundle as the productive component, and selling a smaller bundle \textit{instead of} the grand bundle as a costly instrument for screening a consumer's value for the grand bundle.
\begin{proof}[Proof of \Cref{prop:bundle}] By convexity of $C(\,\cdot\,)$ and Jensen's inequality, we have \begin{align*}
\sum_{b} \alpha^b(v) C(q^b(v)) \geq C \Big ( \sum_{b} \alpha^b(v) q^b(v) \Big ) \,. \end{align*} Therefore, maximizing the objective \[\label{eq:aux} \E\Big[-C \Big ( \sum_{b} \alpha^b(v) q^b(v) \Big ) + t(v) \Big] \tag{13}\] yields an upper bound on the monopolist's revenue. For this auxiliary problem, let us also relax the constraint $\sum_b \alpha^b = 1$ to $\sum_b \alpha^b \leq 1$. Then, because $\alpha, q$ enter both the consumer's utility and the objective in the same way, it is without loss of generality to let $q^b = 1$ for all $b$.
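The first display above is Jensen's inequality applied to the convex cost $C$, viewing $\alpha(v)$ as a lottery over qualities. A minimal check with a hypothetical convex cost $C(q) = q^2$ (our own choice; any convex $C$ with $C(0) = 0$ works):

```python
# Jensen's inequality for a convex cost: the expected cost of a lottery
# over qualities dominates the cost of the expected quality.
C = lambda q: q ** 2       # hypothetical convex cost with C(0) = 0

alpha = [0.2, 0.5, 0.3]    # bundling probabilities (sum to 1)
q = [0.1, 0.6, 0.9]        # qualities of the corresponding bundles

expected_cost = sum(a * C(x) for a, x in zip(alpha, q))
cost_of_expected = C(sum(a * x for a, x in zip(alpha, q)))
assert expected_cost >= cost_of_expected
```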
We now reformulate this problem as a problem of monopoly pricing with costly signals. Let $\theta^A = v^*$ be the value of the grand bundle. For any proper bundle $b$, let \[\theta^b = v^b - v^*\] be the difference of values for bundle $b$ and the grand bundle $b^*$. In words, $\theta^b$ is the negative value for getting bundle $b$ instead of $b^*$. Let $N = 2^G - 1$, and let $\theta^B = (\theta^1, \dots, \theta^{N})$ be the profile of the differences.
We use $x: \Theta \rightarrow [0,1]$ to denote the \textit{initial} allocation of the grand bundle, and $y: \Theta \rightarrow [0,1]^{N}$ to denote the allocation of the ``costly signals'' as follows. An assignment $y^b \in [0, 1]$ represents assigning bundle $b$ with probability $y^b$ while \textit{decreasing} the probability of the grand bundle $b^*$ also by $y^b$. The consumer's payoff can be rewritten as \[\theta^A x + \theta^B \cdot y - t \,.\] For any substochastic allocation $\alpha$ (i.e. $\sum_b \alpha^b \leq 1$), we can replicate it by setting \[x = \sum_{b} \alpha^b\,, \qquad y^b = \alpha^b \text{ for all } b \neq b^* \,.\] Therefore, the auxiliary problem \eqref{eq:aux} can be further relaxed to \[\label{eq:costly} \sup_{(x, y, t) \in \mathcal{M}(\Theta) } \E[-C(x(\theta)) + t(\theta)]\,. \tag{14}\] For any $b = 1, \dots, N$, we have $\frac{v^b}{v^*} = \frac{\theta^A + \theta^b}{\theta^A} = 1 + \frac{\theta^b}{\theta^A}$. Since $\tau$ is stochastically nondecreasing in $v^*$, we have $r^B := \frac{1}{\theta^A} \theta^B$ is stochastically nondecreasing in $\theta^A$. So \Cref{prop:linear} applies to \eqref{eq:costly}. Let $(x^*, 0, t^*)$ be an optimal solution to \eqref{eq:costly} that involves no costly screening.
We construct an allocation rule in the original problem as follows: \[\text{$\alpha^{b^*} = 1$, $\alpha^b = 0$ for all $b \neq b^*$; \quad $q^{b^*} = x^*$, $q^{b} = 0$ for all $b \neq b^*$.}\] Because probabilities and qualities enter the consumer's utility in the same way and $(x^*, 0, t^*)$ is IC and IR, $(\alpha, q, t^*)$ is also IC and IR. The revenue of the monopolist under $(\alpha, q, t^*)$ is \[\E[-C(q^{b^*}(\theta)) + t^*(\theta)] = \E[-C(x^*(\theta)) + t^*(\theta)]\,,\] the optimal value of \eqref{eq:costly}. Hence, $(\alpha, q, t^*)$ is optimal for the monopolist in the original problem; moreover, $(\alpha, q, t^*)$ screens using only the qualities of the grand bundle. \end{proof} \begin{rmk} \Cref{prop:bundle} says that under the stochastic ratio monotonicity condition, the monopolist can restrict attention to selling only the grand bundle at various qualities. Because that is a one-dimensional problem à la \cite{mussa1978monopoly}, the solution can be explicitly characterized. When there is no cost for quality improvement ($C = 0$), the optimal mechanism is to sell the grand bundle at the highest quality with a posted price. This special case is due to \citet{haghpanah2021pure}. In general, however, the optimal mechanism involves price discrimination. But \Cref{prop:bundle} shows that such price discrimination is only done by creating different qualities of the grand bundle. \end{rmk}
\section{Additional Applications} \label{sec:app}
\subsection{Bundling with Nested Bundles} \label{subsec:nested}
Complementary to \Cref{subsec:multigood}, rather than quality discrimination, companies may also offer a menu of bundles that are nested (e.g. cable TV providers offer a basic package and a premium package including sports channels). \citet{bergemann2021optimality} refer to such selling strategies as upgrade pricing and provide sufficient conditions under which they are optimal when consumers have additive values. We provide a different set of sufficient conditions on the optimality of upgrade pricing with non-additive values, as an application of our main result.
For this application, we restrict attention to one-dimensional types $\theta \in \Theta \subset \mathds{R}$ and deterministic bundling mechanisms. For example, consider two items $\{1, 2\}$. Let $g(\theta)$, $h(\theta)$, and $v(\theta)$ be the value of item $1$, the value of item $2$, and the value of the bundle $\{1, 2\}$ for type $\theta$, respectively. We assume that $g, h, v$ are continuous, nondecreasing in $\theta$ (e.g. $\theta$ represents income), and that $v(\theta) \geq \max\{g(\theta), h(\theta)\}$ (e.g. there is free disposal).
\begin{prop} \label{prop:nested} If $v(\theta) - g(\theta)$ is strictly increasing, and $v(\theta) - h(\theta)$ is nonincreasing, then an optimal mechanism exists and can be implemented with a menu of nested bundles $\{\{1\}, \{1, 2\}\}$. \end{prop}
In terms of the cable TV service example, this result says that if a higher-income consumer has a higher incremental value for sports channels, and a lower incremental value for basic channels, then it suffices for the seller to consider a two-tier menu that features a basic package and a premium package including sports channels. This result holds for any distribution of $\theta$. Despite its apparent simplicity, it cannot be derived from any known result in the literature (see \Cref{app:generalnested} for a parametric example).
This result is an immediate consequence of \Cref{thm:main} provided that one takes the same point of view as in \Cref{sec:csignal} by considering selling item $2$ instead of the bundle $\{1, 2\}$ as a costly instrument. It is not hard to see that any menu of nested bundles that does not include the grand bundle cannot be optimal. For any number of items and any menu of nested bundles that includes the grand bundle, in \Cref{app:generalnested}, we provide sufficient conditions under which the menu constitutes an optimal selling mechanism.
\subsection{Intertemporal Price Discrimination with Private Discounting} \label{subsec:discounting}
Let $D: \mathcal{R} \times \mathcal{T} \rightarrow [0, 1]$ be a general discount function, where $\mathcal{R} \subset \mathds{R}_+$ is a compact set of discount rates $r$, and $\mathcal{T} \subset \mathds{R}_{+}$ is a finite set of delivery dates $T$ including $0$. The discount function $D(r, T)$ is nonincreasing in $r$, with $D(r, 0) = 1$. For example, $D(r, T) = e^{-rT}$ under exponential discounting and $D(r, T)= \frac{1}{1 + rT}$ under hyperbolic discounting.
The buyer privately observes his discount rate $r$ and value $v$, with a payoff given by \[D(r, T) v u(x) - p\,,\] where $x \in [0, 1]$ is the quality of the good and $p \in \mathds{R}$ is the price at time $0$. We assume that $u$ is continuous, strictly increasing, and satisfies $u(0) = 0$.
The seller has a continuous, nondecreasing cost $C(x)$ for producing the good of quality $x$. She wants to design an optimal selling mechanism $(x, T, p)$ that specifies the quality, delivery time, and payment for each reported type. A mechanism \textit{involves no intertemporal price discrimination} if $T(v, r) = 0$ for all $v, r$.
The next example shows that if a buyer who has a higher value tends to be less patient, then intertemporal price discrimination can be profitable: \begin{ex} Suppose $\mathcal{T} = \{0, 1\}$ and $\mathcal{R} = \{0, r_1\}$, where $D(0, 1) = 1$ and $D(r_1, 1) = \frac{1}{2}$. Suppose $u(x) = x$ and $C(x) = 0$. The buyer has equal probabilities of having either $r = 0$ or $r = r_1$. If $r = 0$, then the buyer's value $v$ is drawn from uniform $[0, 1]$. If $r = r_1$, then the buyer's value $v$ is drawn from uniform $[1, 2]$. Without intertemporal price discrimination, the seller's optimal strategy is a posted price $p = 1$ which yields payoff $\frac{1}{2}$. However, consider offering $p = 1$ for the no-delay option, and $p = \frac{1}{2}$ for the delayed option. Note that all types with $r = r_1$ choose the no-delay option, and all types with $r = 0$ and $v \geq \frac{1}{2}$ choose the delayed option. This yields the seller payoff $ \frac{1}{2} \times 1 + \frac{1}{2} \times \frac{1}{2} \times \frac{1}{2} = \frac{5}{8} > \frac{1}{2}$. \qed \end{ex}
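The revenue comparison in this example can be replicated exactly. The sketch below (our own verification code, not part of the paper) evaluates each buyer's choice from a menu on a midpoint grid of values using exact rational arithmetic, breaking ties in the seller's favor; for an even number of grid points the cutoff $v = \frac{1}{2}$ never lands on a grid point, so the computed revenues match the closed-form values $\frac{1}{2}$ and $\frac{5}{8}$ exactly:

```python
from fractions import Fraction as F

def revenue(options, n=1000):
    """Expected revenue for the example: half the buyers have r = 0 (no
    discounting of the delayed option) with v ~ U[0, 1]; half have r = r1
    (delayed option discounted by 1/2) with v ~ U[1, 2]. Each option is a
    pair (T, p) with delivery date T in {0, 1}; the outside option gives 0."""
    total = F(0)
    for D1, lo in ((F(1), F(0)), (F(1, 2), F(1))):   # (D(r, 1), interval start)
        for k in range(n):
            v = lo + F(2 * k + 1, 2 * n)             # midpoint grid over interval
            utils = [((F(1) if T == 0 else D1) * v - p, p) for T, p in options]
            utils.append((F(0), F(0)))               # outside option
            u, p = max(utils)                        # ties favor the higher price
            total += p
    return total / (2 * n)

posted = revenue([(0, F(1))])                        # posted price 1, no delay
with_delay = revenue([(0, F(1)), (1, F(1, 2))])      # add delayed option at 1/2
assert posted == F(1, 2) and with_delay == F(5, 8) and with_delay > posted
```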
The next result shows that if a buyer who has a higher value tends to be more patient, then intertemporal price discrimination is unprofitable:
\begin{prop} \label{prop:discount} If $r$ is stochastically nonincreasing in $v$, then an optimal mechanism exists and involves no intertemporal price discrimination. \end{prop}
This result is an immediate consequence of \Cref{prop:linear}. It does not restrict the marginal distributions of $r$ and $v$, requiring only a condition on their joint distribution. \citet{Stokey1979}'s classic result can be seen as a special case of this result with $r$ being a constant (all types have the same patience), $D(r, T) = e^{-rT}$, $u(x) = x$, and $C(x) = 0$. Our result shows that whether intertemporal price discrimination is profitable depends on the correlation of consumers' values and patience.
\subsection{Labor Market Screening} \label{subsec:labor}
A monopsonistic firm wants to hire a worker to perform a task. The firm gets a payoff $V(\theta)$ for hiring a worker of ability $\theta \in \Theta \subset \mathds{R}$, where $V$ is continuous, nondecreasing in $\theta$. The worker suffers a cost $C(\theta)$ for performing the task, where $C$ is continuous, strictly decreasing in $\theta$. Let $x \in [0, 1]$ be the probability of hiring the worker.
The firm can ask the applicant to obtain a credential. Suppose that obtaining a credential of level $y \in [0, 1]$ costs the worker $c y$. Both $\theta$ and $c$ are the worker's private information. For a given wage level $w$, hiring probability $x$, and credential level $y$, the firm's payoff is $(V(\theta) - w)x$, and the worker's payoff is $(w - C(\theta))x - c y$.
\begin{prop} \label{prop:labor} If $c$ is stochastically nonincreasing in $\theta$, then an optimal mechanism exists and does not require any credential. \end{prop}
This result is an immediate consequence of \Cref{thm:main}. It contrasts with the common perception of costly signals in competitive labor markets (\citealt{Spence1973Job}). To clarify, in \Cref{app:comp}, we consider a competitive screening model in which multiple firms compete and are allowed to screen with both work allocations and costly instruments. We show that costly screening can appear in equilibrium. The intuition is that with outside options generated through competition, the binding incentive constraints become the upward ones, which is exactly the opposite of the monopsonistic case.
\section{Related Literature} \label{sec:lit}
This paper proposes a unified mechanism design framework allowing for both price and nonprice screening instruments, characterizes the exact optimum in a general multidimensional screening model, and bridges costly screening and multidimensional screening to obtain new insights into applications such as bundling and price discrimination.
\paragraph{Costly Screening.}\hspace{-2mm}Several previous papers have analyzed mechanism design with a costly instrument when monetary transfers are limited or not feasible. In pioneering work, \citet{banerjee1997theory} studies how a bureaucracy can use red tape as an effective screening device when agents are budget-constrained. A recent line of work studies the design of surplus-maximizing mechanisms when monetary transfers are not feasible and agents engage in a one-dimensional costly activity (\citealt{hartline2008optimal}, \citealt{condorelli2012money}, \citealt{chakravarty2013optimal}).\footnote{See also \citet{acemoglu2011economics} for related moral hazard problems; \citet{ambrus2017delegation} for related delegation problems; and \citet{malladi2020delegated} for related non-Bayesian screening problems.} We study a profit-maximizing mechanism design problem allowing for both flexible transfers and heterogeneous preferences over costly actions. These features together, in contrast to past work, imply that our model necessarily has multiple screening instruments and requires analysis significantly different from the single-dimensional case.
\paragraph{Multidimensional Screening.}\hspace{-2mm}The structure of multidimensional screening differs significantly from its single-dimensional counterpart, and a full characterization remains elusive despite decades of research (\citealt{Rochet2003}). Much of the literature focuses on the multiple-good monopoly problem. When there is a single good, the optimal mechanism is simply a posted price (\citealt{Myerson1981}, \citealt{riley1983optimal}). However, as soon as there is more than one good, seemingly simple special cases remain poorly understood. Significant progress has been made in developing duality approaches to certify optimality of candidate mechanisms (\citealt{Rochet1998}, \citealt{daskalakis2017strong}; \citealt{cai2016duality}, \citealt{Carroll2017}). In response to the analytical difficulty, several recent papers study either approximately optimal mechanisms (\citealt{babaioff2014simple}, \citealt{cai2016duality}, \citealt{hart2017approximate}), or worst-case optimal mechanisms (\citealt{Carroll2017}, \citealt{che2021robustly}, \citealt{deb2021multi}).
In contrast to past work, we consider a multidimensional screening model in which all dimensions except one are surplus destructive. The multiple-good monopoly problem can be viewed as a special case of our framework by redefining the allocation space. Using this perspective, we obtain new insights into bundling, quality discrimination, upgrade pricing, and intertemporal price discrimination.
Our proof method uses a novel nonlocal approach, different from both the Myersonian approach commonly used in the one-dimensional settings and the duality approach commonly used in the multidimensional settings. For a given multidimensional mechanism, we reconstruct an alternative, one-dimensional mechanism that satisfies all downward incentive constraints. We then show via a variational argument that the set of downward incentive constraints is sufficient for one-dimensional screening problems satisfying the surplus condition. Because of the multidimensionality of the screening instruments (i.e. price and nonprice), the main difficulty of our problem appears already when the agent has one-dimensional types. To deduce the more general case, our proof builds on the insight of \citet{haghpanah2021pure} who make use of Strassen's theorem to decompose a multidimensional type space. When proving their result, \citet{haghpanah2021pure} explicitly construct a set of dual variables that only place weights on downward constraints --- the existence of such dual variables can be seen as a special case of our main technical result, the downward sufficiency theorem, which states that only downward incentive constraints are needed in one-dimensional models.
\paragraph{Damaged Goods.}\hspace{-2mm}There is a line of work studying damaged goods and the profitability of price discrimination (or pure bundling) (\citealt{deneckere1996damaged}, \citealt{anderson2009price}, \citealt{haghpanah2021pure}, \citealt{ghili2021characterization}). A key difference between our result and the results obtained in this literature is that our framework distinguishes between different kinds of damage (e.g. reducing quality vs. requiring waiting) and characterizes when to use which kind of damage. For example, in our bundling application, quality discrimination is used but mixed bundling is not used under the optimal mechanism; in our intertemporal price discrimination application, quality discrimination is used but delay is not used under the optimal mechanism. This is also the reason why our framework allows us to characterize when it is optimal to sell a given menu of nested bundles, a selling strategy necessarily involving damages.
\section{Conclusion} \label{sec:conclusion}
This paper studies the effectiveness of costly instruments in a general multidimensional screening model. The model consists of two components: a one-dimensional productive component and a multidimensional costly component. Our main result says that if the agent's preferences are positively correlated between the two components in a suitably defined sense, then the costly instruments are ineffective --- the optimal mechanism simply screens the one-dimensional productive component.
Our proof provides clear insights into why this result holds. First, we show that costly instruments can loosen upward but not downward IC constraints on the costly component. Next, we show that positive correlation of preferences then converts the IC constraints on the costly component to those on the productive component without changing the direction. Finally, we show that the set of downward IC constraints is sufficient for any one-dimensional screening problem satisfying the surplus condition. Therefore, costly instruments cannot help the principal when the agent's preferences are positively correlated between the two components.
Armed with this understanding, we have also shown how additional results follow naturally. With negatively correlated preferences, we show a partial converse. With multiplicatively separable preferences within each component, we show a stronger result in terms of the marginal rates of substitution between the two components. Using the perspective of screening with costly instruments, as applications, we also provide new insights into multiproduct pricing (bundling, quality discrimination, upgrade pricing), intertemporal price discrimination, and labor market screening.
\appendix \crefalias{section}{appendix} \section{Omitted Proofs}\label{app:proof} \begin{proof}[Proof of \Cref{prop:converse}] Without loss of generality, we may assume $i = 1$. Let $v^A = v^B = 0$. Because $\theta^0$ has a continuous distribution, there exists some constant $m^0$ such that \[\mathds{P}(\theta^0 > m^0) = \mathds{P}(\theta^0 \leq m^0) = \frac{1}{2}\,.\] Similarly define $m^1$ for $\theta^1$. Since $\theta^1$ is stochastically nonincreasing in $\theta^0$, we have $\theta^1$ and $-\theta^0$ are positively upper orthant dependent (\citealt{muller2002comparison}, pp. 121-125), and hence \[\mathds{P}(-\theta^0 > -m^0, \theta^1 > m^1) \geq \mathds{P}(-\theta^0 > -m^0) \mathds{P}(\theta^1 > m^1) = \frac{1}{4} \,.\] Because $\beta(\theta^0)$, $\beta(\theta^1)$ are not independent, we have \[\mathds{P}(\theta^0 < m^0, \theta^1 > m^1) > \frac{1}{4}\] and thus \[\mathds{P}(\theta^0 > m^0, \theta^1 < m^1) > \frac{1}{4}\,.\] Define \[ f(\theta^0) = \begin{cases} 1 & \text{if $\theta^0 \leq m^0$} \\ 2 & \text{if $\theta^0 > m^0$} \end{cases}\,, \qquad g(\theta^1) = \begin{cases} -1 & \text{if $\theta^1 \leq m^1$} \\ -\epsilon & \text{if $\theta^1 > m^1$} \end{cases}\,,\]
where $\epsilon > 0$ will be determined shortly. Let $\tilde{f}$ be a continuous approximation of $f$ such that $\tilde{f}(\theta^0) = f(\theta^0)$ for all $\theta^0 \not\in (m^0 -\epsilon, m^0 + \epsilon)$. It is clear that we may select $\tilde{f}$ to be nondecreasing. Let $x_0 = \min \mathcal{X}$ and $\hat{x} = \max \mathcal{X}$. Since $|\mathcal{X}| > 1$, $\hat{x} \neq x_0$. Since $|\mathcal{Y}| > 1$, there exists some $\hat{y} \in \mathcal{Y}$ with $\hat{y} \neq y_0$. Now let \[u^A(x, \theta^A) = \tilde{f}(\theta^0) \frac{x - x_0}{\hat{x} - x_0}, \qquad u^B(y, \theta^B) = g(\theta^1) \mathds{1}_{y \neq y_0} \,. \] This construction gives admissible utility functions. Consider offering the following menu of three options: \[\big \{(\hat{x}, y_0, 2-\epsilon), (\hat{x}, \hat{y}, 1 - \epsilon), (x_0, y_0, 0) \big \}\,.\] Let the agent choose among these, breaking ties in favor of the principal. This yields a payoff of at least \[r(\epsilon) := (1 - \epsilon) \mathds{P}(\theta^1 > m^1) + (2-\epsilon) \mathds{P}(\theta^0 \geq m^0 + \epsilon, \theta^1 \leq m^1) \] for the principal. Screening the productive component alone yields a payoff of at most \[q(\epsilon) := 2 \mathds{P}(m^0-\epsilon \leq \theta^0 \leq m^0+\epsilon ) + 1 \] for the principal. Note that $r(\epsilon), q(\epsilon)$ are both continuous on $(0, \frac{1}{2})$, and \[\lim_{\epsilon \downarrow 0} r(\epsilon)= \frac{1}{2} + 2\mathds{P}(\theta^0 > m^0, \theta^1 < m^1) > 1 = \lim_{\epsilon \downarrow 0} q(\epsilon)\,.\] Thus, there exists some $\epsilon^* > 0$ such that $r(\epsilon^*) > q(\epsilon^*)$. With this choice of $\epsilon^*$, the above construction then gives admissible utility functions such that the menu of three options strictly dominates any mechanism screening only the productive component. \end{proof}
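To illustrate the payoff comparison concretely, the sketch below evaluates $r(\epsilon)$ and $q(\epsilon)$ in closed form for a hypothetical specification of our own choosing (not from the proposition): $\theta^0$ uniform on $[0,1]$ and $\theta^1 = 1-\theta^0$, so that $m^0 = m^1 = \tfrac{1}{2}$ and $\mathds{P}(\theta^0 > m^0, \theta^1 < m^1) = \tfrac{1}{2} > \tfrac{1}{4}$.

```python
# Sanity check of r(eps) > q(eps) for a hypothetical negatively correlated
# specification: theta0 ~ U[0,1], theta1 = 1 - theta0, so m0 = m1 = 1/2.

def r(eps):
    # (1-eps) * P(theta1 > m1) + (2-eps) * P(theta0 >= m0+eps, theta1 <= m1)
    p_upper = 0.5                    # P(theta1 > 1/2)
    p_corner = max(0.5 - eps, 0.0)   # P(theta0 >= 1/2+eps); this implies theta1 <= 1/2
    return (1 - eps) * p_upper + (2 - eps) * p_corner

def q(eps):
    # 2 * P(m0-eps <= theta0 <= m0+eps) + 1, with uniform density 1 on [0,1]
    return 2 * (2 * eps) + 1

eps_star = 0.05
print(r(eps_star), q(eps_star))   # r exceeds q for small eps
```

In this instance $r(\epsilon) \rightarrow \tfrac{3}{2}$ while $q(\epsilon) \rightarrow 1$ as $\epsilon \downarrow 0$, so any sufficiently small $\epsilon^*$ works.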
\begin{proof}[Proof of \Cref{lem:log}] Write out $[i \rightarrow j]$ and $[j \rightarrow k]$: \[ u(x_i, \theta_i) - t_i \geq u(x_j, \theta_i) - t_j \,;\] \[ u(x_j, \theta_j) - t_j \geq u( x_k, \theta_j) - t_k \,.\] Adding these two yields \[ u(x_i, \theta_i) - t_i + u(x_j, \theta_j) - t_j\geq u( x_j, \theta_i) - t_j + u( x_k, \theta_j) - t_k\,.\] Hence, \[ u(x_i, \theta_i) - t_i \geq (u(x_j, \theta_i) + u(x_k, \theta_j) - u(x_j, \theta_j)) - t_k \,.\] Using $x_j \geq x_k$, $\theta_i > \theta_j$, and the strict increasing differences property of $u$, we have \[u(x_j, \theta_i) + u(x_k, \theta_j) - u(x_j, \theta_j) \geq u(x_k, \theta_i)\,.\] Thus $[i \rightarrow k]$ follows. \end{proof}
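The two steps of the proof, adding the IC constraints and then applying increasing differences, can be sanity-checked numerically. The sketch below uses the illustrative utility $u(x, \theta) = x\theta$ (our assumption for the example; any function with strict increasing differences would do) and samples menus satisfying the hypotheses of the lemma.

```python
import random

# Check the lemma's conclusion with u(x, theta) = x * theta, which has strict
# increasing differences: sample menus satisfying [i -> j], [j -> k] with
# x_j >= x_k and theta_i > theta_j, and verify that [i -> k] always holds.

def u(x, theta):
    return x * theta

random.seed(0)
violations = 0
for _ in range(10000):
    theta_i = random.uniform(1, 2)
    theta_j = random.uniform(0, 1)       # theta_i > theta_j
    x_k = random.uniform(0, 1)
    x_j = random.uniform(x_k, 2)         # x_j >= x_k
    x_i = random.uniform(0, 2)
    t_k = random.uniform(-1, 1)
    # choose t_j, t_i so that [j -> k] and [i -> j] hold (with nonnegative slack)
    t_j = u(x_j, theta_j) - u(x_k, theta_j) + t_k - random.uniform(0, 0.5)
    t_i = u(x_i, theta_i) - u(x_j, theta_i) + t_j - random.uniform(0, 0.5)
    # check [i -> k]
    if u(x_i, theta_i) - t_i < u(x_k, theta_i) - t_k - 1e-9:
        violations += 1
print(violations)  # expected: 0
```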
\begin{proof}[Proof of \Cref{lem:gol}] Write out the binding constraints $[i \rightarrow k]$ and $[j \rightarrow k]$: \[ u(x_i, \theta_i) - t_i = u(x_k, \theta_i) - t_k \,;\] \[ u(x_j, \theta_j) - t_j = u(x_k, \theta_j) - t_k \,.\] Subtracting these two yields \[ u(x_i, \theta_i) - u(x_j, \theta_j) - t_i = u(x_k, \theta_i) -u(x_k, \theta_j) - t_j \,.\] Hence, \[u(x_i, \theta_i) - t_i = (u(x_j, \theta_j) + u(x_k, \theta_i) -u(x_k, \theta_j)) - t_j \,.\] Using $x_k \geq x_j$, $\theta_i > \theta_j$, and the strict increasing differences property of $u$, we have \[u(x_j, \theta_j) + u(x_k, \theta_i) -u(x_k, \theta_j) \geq u(x_j, \theta_i) \,.\] Thus $[i \rightarrow j]$ follows. \end{proof}
\begin{proof}[Proof of \Cref{lem:form}] Fix any subset $K \subset \{1, \dots, n\}$, any index $k \not\in K$, and any allocation rule $a \in \mathds{R}^n$. We claim that if $a_k = a_{\max\{j \in K: j < k\}}$, then \[\varphi(K \cup \{k\}, a) = \varphi(K, a)\,.\] Let $i, i'$ be the two consecutive indices in $K$ such that $i < k < i'$. Note that for any $j \leq k$, \[(\varphi(K \cup \{k\}, a))_j = (\varphi(K, a))_j\,,\] since $(K \cup \{k\}) \cap \{j': j' < j\} = K \cap \{j': j' < j\} $. For any $k < j \leq i'$, we can write \[u(a_k, \theta_j) - u(a_k, \theta_{k}) + u(a_i, \theta_k) - u(a_i, \theta_{i}) = u(a_k, \theta_j) - u(a_i, \theta_{i}) = u(a_i, \theta_j) - u(a_i, \theta_{i})\,,\] where both equalities use $a_k = a_i$, and $i = \max\{j' \in K: j' < j\}$. Thus $(\varphi(K \cup \{k\}, a))_j = (\varphi(K, a))_j$ for any $k < j \leq i'$. Also, the fact that the above holds for $j = i'$ implies that $(\varphi(K \cup \{k\}, a))_j = (\varphi(K, a))_j$ for any $j > i'$.
Now, write $S_b = S_a \cup \{k_1, \dots, k_m\}$ with $k_1 < \dots < k_m$. By assumption, $b_{k_m} = \dots = b_{k_1} = b_{\max \{j\in S_a: j < k_1\}}$. Then, by the definition of $S_b$, for all $q = 2, \dots, m$, we have \[b_{k_q} = b_{\max\{j \in S_a \cup \{k_1, \dots, k_{q-1}\}: j < k_q\}}\,. \] Thus, we can repeatedly apply the result from the previous paragraph and obtain \[\mathbf{T}[b] = \varphi(S_a \cup \{k_1, \dots, k_m\}, b) = \varphi(S_a \cup \{k_1, \dots, k_{m-1}\}, b) = \dots = \varphi(S_a \cup \{k_1\}, b) = \varphi(S_a, b)\,,\] which proves the lemma. \end{proof}
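The map $\varphi$ is defined earlier in the paper; as an illustration of the invariance claim, the sketch below uses a plausible reconstruction (our assumption, not necessarily the paper's exact definition): $\varphi(K, a)$ makes the downward IC constraint to the predecessor in $K$ bind, so rents telescope as $U_j = U_i + u(a_i, \theta_j) - u(a_i, \theta_i)$ with $i = \max\{j' \in K : j' < j\}$, and transfers are $t_j = u(a_j, \theta_j) - U_j$. With the illustrative utility $u(x,\theta) = x\theta$ and hypothetical numerical values, adding an index whose allocation equals its predecessor's leaves the output unchanged.

```python
# Plausible reconstruction (our assumption) of phi(K, a): transfers induced by
# binding downward IC constraints along predecessors in K. We take 0 in K so
# the lowest-index type earns zero rent.

def u(x, theta):
    return x * theta  # strict increasing differences

def phi(K, a, theta):
    K = sorted(K)
    n = len(a)
    U = [0.0] * n     # rents, telescoped along predecessors in K
    for j in range(n):
        preds = [jp for jp in K if jp < j]
        if preds:
            i = max(preds)
            U[j] = U[i] + u(a[i], theta[j]) - u(a[i], theta[i])
    return [u(a[j], theta[j]) - U[j] for j in range(n)]

theta = [5.0, 4.0, 3.0, 2.0, 1.0]   # hypothetical decreasing types
a = [3.0, 2.0, 2.0, 1.5, 1.0]       # allocation with a[2] == a[1]
K = [0, 1, 3, 4]
k = 2                                # a[k] equals its predecessor's allocation in K
t1 = phi(K, a, theta)
t2 = phi(K + [k], a, theta)
print(max(abs(p - q) for p, q in zip(t1, t2)))  # prints 0.0: phi is unchanged
```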
\begin{proof}[Proof of \Cref{lem:app}] We maintain the notation of Step 3 in \Cref{subsec:dst}. Without loss, let $\Theta \subseteq [0,1)$ and $0 \in \Theta$. The construction works as follows. Fix any $n \in \mathds{N}$. Partition $[0,1)$ into intervals $\{[\frac{i-1}{n},\frac{i}{n})\}_{i=1, \dots, n}$. Let \[I = \Big \{i: \mu([\frac{i-1}{n},\frac{i}{n})) >0 \Big\}\,.\] For any $i \in I$, let \[\theta^{(n)}_i = \min \Big \{ [\frac{i-1}{n},\frac{i}{n} ) \cap \Theta \Big \}\,.\]
(The minimum is attained since $\Theta$ is compact.) For notational convenience, we reindex $i$ so that it runs from $1$ to $|I|$. Let \[\Theta^{(n)} = \{\theta^{(n)}_i\}_{i \in I} \,;\] \[\mu^{(n)}(\theta^{(n)}_i) = \mu([\theta^{(n)}_i, \theta^{(n)}_{i+1}))\,,\] with the convention $\theta^{(n)}_{|I|+1} := 1$.
We have that $\Theta^{(n)} \subseteq \Theta$ is finite and $\mu^{(n)} \in \Delta(\Theta^{(n)})$ has full support. Note that \[\label{eq:dist} \mu(\{\theta \in \Theta: \theta \in [\theta^{(n)}_i, \theta^{(n)}_{i+1}) \text{ and } |\theta - \theta^{(n)}_i| > \frac{1}{n} \}) = 0 \,.\tag{A.1}\]
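For intuition, the construction can be carried out explicitly in the simplest hypothetical case $\Theta = [0,1)$ with $\mu$ uniform (our choice of instance, not one treated specially in the paper): every cell has positive mass and its representative is its left endpoint.

```python
# Discretization (Theta^(n), mu^(n)) for the hypothetical case mu = Uniform[0,1),
# Theta = [0,1). Each cell [(i-1)/n, i/n) has positive mass, its representative
# is its left endpoint, and mu^(n) assigns each representative the mass of the
# half-open interval to its right.

def discretize_uniform(n):
    theta_n = [(i - 1) / n for i in range(1, n + 1)]        # representatives
    # mu^(n)(theta_i) = mu([theta_i, theta_{i+1})), with theta_{|I|+1} := 1
    masses = [theta_n[i + 1] - theta_n[i] for i in range(n - 1)]
    masses.append(1.0 - theta_n[-1])
    return theta_n, masses

theta10, mu10 = discretize_uniform(10)
print(abs(sum(mu10) - 1.0) < 1e-9, all(m > 0 for m in mu10))
# Every point of Theta lies within 1/n of its cell's representative,
# which is the content of condition (A.1) in this instance.
```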
We first show property $(ii)$ in the statement. Recall that for this lemma we assume $v$ is Lipschitz continuous on $\mathcal{X} \times \Theta$. Then, there exists some constant $K > 0$ such that for any $\theta, \theta' \in \Theta$,
\[\label{eq:Lip}\max_{x \in \mathcal{X}} |v(x, \theta') - v(x, \theta)|
\leq K |\theta' - \theta|\,. \tag{A.2}\] Let $(x^{(n)}, t^{(n)})$ be any optimal solution to the full IC program $\eqref{eq:1d}^\dagger$ with $(\Theta^{(n)}, \mu^{(n)})$. Let $\bar{x}^{(n)}$ be the extension of $x^{(n)}$ to the right: \[\bar{x}^{(n)}(\theta) = x^{(n)}(\theta^{(n)}_i) \quad \text{for all } \theta \in [\theta^{(n)}_i, \theta^{(n)}_{i+1})\,.\] Note that $\bar{x}^{(n)}$ is a monotonic function on $[0, 1)$. Define $\bar{t}^{(n)}$ in the same way. We claim $(\bar{x}^{(n)}, \bar{t}^{(n)})$, when restricted to $\Theta$, is a feasible solution to $\eqref{eq:1d}^\dagger$ with $(\Theta, \mu)$. To see this, offer the menu $\{(x^{(n)}_i, t^{(n)}_i)\}_{i \in I}$ to all types in $\Theta$. Type $\theta^{(n)}_{i+1}$ is indifferent between $(x^{(n)}_{i+1}, t^{(n)}_{i+1})$ and $(x^{(n)}_i, t^{(n)}_i)$. Type $\theta^{(n)}_i$ finds $(x^{(n)}_i, t^{(n)}_i)$ optimal. Therefore, any type $\theta$ between $\theta^{(n)}_i$ and $\theta^{(n)}_{i+1}$ finds $(x^{(n)}_i, t^{(n)}_i)$ optimal since $u$ has strict increasing differences. By construction, \[V(\Theta^{(n)}, \mu^{(n)}) = \E^{\mu^{(n)}}[v(\bar{x}^{(n)}(\theta), \theta) + \bar{t}^{(n)}(\theta)]\,. \] Since $(\bar{x}^{(n)}, \bar{t}^{(n)})$ is feasible for $\eqref{eq:1d}^\dagger$ with $(\Theta, \mu)$, we have \[V(\Theta, \mu) \geq \E^{\mu}[v(\bar{x}^{(n)}(\theta), \theta) + \bar{t}^{(n)}(\theta)] \,.\] Because $\bar{x}^{(n)}, \bar{t}^{(n)}$ are constant over each interval $[\theta^{(n)}_i, \theta^{(n)}_{i+1})$, by \eqref{eq:dist} and \eqref{eq:Lip}, we have \begin{align*} \label{eq:diff}
&\Big | \E^{\mu}[v(\bar{x}^{(n)}(\theta), \theta) + \bar{t}^{(n)}(\theta)] - \E^{\mu^{(n)}}[v(\bar{x}^{(n)}(\theta), \theta) + \bar{t}^{(n)}(\theta)] \Big | \\
&= \Big |\int v(\bar{x}^{(n)}(\theta), \theta) \d\mu - \int v(\bar{x}^{(n)}(\theta), \theta) \d\mu^{(n)} \Big| \\
&\leq \sum_{i\in I} \mu^{(n)}(\theta^{(n)}_i) \sup_{\theta \in [\theta^{(n)}_i, \theta^{(n)}_i+\frac{1}{n}] \cap \Theta } \Big \{ \max_{x\in \mathcal{X}} \big | v(x, \theta) - v(x,\theta^{(n)}_{i})\big | \Big \}\\
&\leq \sum_{i\in I} \mu^{(n)}(\theta^{(n)}_i) \frac{K}{n} = \frac{K}{n} \,. \tag{A.3} \end{align*} Then, it follows that \[V(\Theta, \mu) \geq V(\Theta^{(n)}, \mu^{(n)}) - \frac{K}{n}\,.\] Taking $\limsup$ on both sides gives property $(ii)$ in the statement.
We now show property $(i)$ in the statement. It suffices to prove the weak convergence in $\Delta([0,1])$. Let $F$, $F^{(n)}$ be the CDFs of $\mu$, $\mu^{(n)}$. We have $ F^{(n)}(1) = F(1) = 1$. Fix any $\theta \in [0, 1)$. Note that $\mu^{(n)} \preceq \mu$ in the stochastic dominance order, and hence \[\label{eq:fosd}
F^{(n)}(\theta) \geq F(\theta) \,.\tag{A.4} \] Let $i$ be such that $[\theta^{(n)}_{i}, \theta^{(n)}_{i+1}) \ni \theta$. Note that \[F^{(n)}(\theta) = \mu^{(n)}([0, \theta]) \leq \mu^{(n)}([0, \theta^{(n)}_{i+1}))= \mu([0, \theta^{(n)}_{i+1}))\,. \] If $\theta + \frac{1}{n} \geq \theta^{(n)}_{i+1}$, then we have \[\mu([0, \theta^{(n)}_{i+1}))\leq F(\theta + \frac{1}{n}) \,.\] Otherwise, since $\theta + \frac{1}{n} \geq \theta_{i}^{(n)} + \frac{1}{n}$, we have $\mu([\theta + \frac{1}{n}, \theta^{(n)}_{i+1})) = 0$. Thus, \[\mu([0, \theta^{(n)}_{i+1})) = \mu([0, \theta + \frac{1}{n}))\leq F(\theta + \frac{1}{n})\,. \] Hence, in either case, we have \[\label{eq:rev} F^{(n)}(\theta) \leq F(\theta + \frac{1}{n}) \,.\tag{A.5}\] Using \eqref{eq:fosd}, \eqref{eq:rev}, and that $F$ is right-continuous, we have \[F(\theta) \leq \lim_{n \rightarrow \infty }F^{(n)}(\theta) \leq \lim_{n \rightarrow \infty} F(\theta + \frac{1}{n}) = F(\theta)\,.\] Therefore, $F^{(n)}$ converges to $F$ pointwise, and hence $\mu^{(n)}\rightarrow_w \mu$. \end{proof}
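The sandwich \eqref{eq:fosd}--\eqref{eq:rev} driving the pointwise convergence can be checked directly for the same hypothetical uniform instance used above (our choice of example), where $\mu^{(n)}$ puts mass $\tfrac{1}{n}$ on each representative $\tfrac{i-1}{n}$.

```python
# Check F(theta) <= F^(n)(theta) <= F(theta + 1/n), i.e. (A.4)-(A.5), for the
# hypothetical instance mu = Uniform[0,1) on a grid of evaluation points.

def F(theta):                          # CDF of Uniform[0,1)
    return min(max(theta, 0.0), 1.0)

def F_n(theta, n):                     # CDF of mu^(n): mass 1/n at (i-1)/n
    reps = [(i - 1) / n for i in range(1, n + 1)]
    return sum(1.0 / n for r in reps if r <= theta)

n = 20
ok = all(F(th) <= F_n(th, n) <= F(th + 1.0 / n) + 1e-12
         for th in [j / 200 for j in range(200)])
print(ok)  # expected: True
```

As $n \rightarrow \infty$ the sandwich forces $F^{(n)}(\theta) \rightarrow F(\theta)$ at every continuity point, matching the weak-convergence conclusion of the lemma.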
\begin{proof}[Proof of \Cref{lem:exist}] Recall $\mathcal{M}(\Theta)$ is the set of IC and IR mechanisms for the one-dimensional type space $\Theta$. We want to show the following program has a solution: \[\sup_{(x, t) \in \mathcal{M}(\Theta)}\E [v(x(\theta), \theta) + t(\theta) ] \,.\]
We first show that it is without loss to restrict the range of $t$ to some interval $[-K, K]$ for $K$ large enough. By the IR constraints, we have $t(\theta) \leq \max_{x, \theta}|u(x, \theta)|$. By the IC constraints, for any $\theta, \theta'$, we have
\[|t(\theta) - t(\theta')| \leq 2\max_{x,\theta}|u(x,\theta)|\,.\] Hence, for all $\theta$,
\[\label{eq:tbound} t(\theta)\geq -3\max_{x,\theta}|u(x,\theta)| -2\max_{x,\theta}|v(x,\theta)| \,, \tag{A.6}\]
because if the above is violated at any type $\theta$, the principal gets strictly less than \[-\max_{x,\theta}|u(x,\theta)| - \max_{x,\theta}|v(x,\theta)|\]
but that can be easily obtained by offering a single option. Thus, the claim holds for $K = 3\max_{x,\theta}|u(x,\theta)|+2\max_{x,\theta}|v(x,\theta)|$.
Then, $\mathcal{M}(\Theta) \subseteq \mathcal{X}^\Theta \times[-K, K]^{\Theta}$ (with the product topology); we use the notation $\mathcal{X}^\Theta := \times_{\theta \in \Theta} \mathcal{X}$. By the dominated convergence theorem, the objective is sequentially continuous on $\mathcal{M}(\Theta)$. It is clear that $\mathcal{M}(\Theta)$ is nonempty. The existence result follows once we show $\mathcal{M}(\Theta)$ is sequentially compact. Fix any sequence $\{(x^{(n)},t^{(n)})\}_{n}$ in $\mathcal{M}(\Theta)$. Let \[U^{(n)}(\theta) = u(x^{(n)}(\theta), \theta) - t^{(n)}(\theta)\] be the equilibrium payoff of type $\theta$. For any $\hat{\theta} < \theta$, by IC$[\theta \rightarrow \hat{\theta}]$, we have \[U^{(n)}(\hat{\theta}) = u(x^{(n)}(\hat{\theta}), \hat{\theta}) - t^{(n)}(\hat{\theta}) \leq u(x^{(n)}(\hat{\theta}), \theta) - t^{(n)}(\hat{\theta}) \leq U^{(n)}(\theta)\,. \] Therefore, $U^{(n)} \in [-K, K]^{\Theta}$ is a monotone function (increase $K$ if necessary). Since $u$ has strict increasing differences, $x^{(n)} \in \mathcal{X}^\Theta$ is also a monotone function. Note that $\Theta, \mathcal{X} \subset \mathds{R}$ are linearly ordered and sequentially compact sets. By Helly's selection theorem for monotone functions on linearly ordered sets (\citealt{fuchino1999theorem}, Theorem 7), there exists a subsequence $\{x^{(n_k)}\}$ that converges pointwise. Applying the same theorem again to $\{U^{(n_k)}\}$, we obtain a subsubsequence $\{U^{(n_{k_l})}\}$ that converges pointwise. Therefore, \[t^{(n_{k_l})}(\theta) = u(x^{(n_{k_l})}(\theta), \theta) - U^{(n_{k_l})}(\theta)\] also converges pointwise by continuity of $u$. Thus, there exists some $(x^*, t^*) \in \mathcal{X}^\Theta \times [-K, K]^{\Theta}$ such that \[(x^{(n_{k_l})}, t^{(n_{k_l})}) \rightarrow (x^*, t^*)\] in the product topology. Being the pointwise limit of measurable real-valued functions, $x^*$ is measurable; so is $t^*$. 
Moreover, for any $\theta, \hat{\theta} \in \Theta$, \begin{align*} u(x^*(\theta), \theta) - t^*(\theta) &= \lim_{l \rightarrow \infty}\big(u(x^{(n_{k_l})}(\theta), \theta) - t^{(n_{k_l})}(\theta) \big)\\ &\geq \lim_{l \rightarrow \infty}\big(u(x^{(n_{k_l})}(\hat{\theta}), \theta) - t^{(n_{k_l})}(\hat{\theta}) \big) = u(x^*(\hat{\theta}), \theta) - t^*(\hat{\theta}) \end{align*} by continuity of $u$ and that $(x^{(n)}, t^{(n)})\in \mathcal{M}(\Theta)$ for all $n$. Therefore, $(x^*, t^*)$ satisfies all IC constraints. Similarly, $(x^*, t^*)$ satisfies all IR constraints. So $(x^*, t^*) \in \mathcal{M}(\Theta)$, and hence $\mathcal{M}(\Theta)$ is sequentially compact. \end{proof}
\begin{proof}[Completion of Proof of \Cref{thm:dbind}] We complete the proof of \Cref{thm:dbind} by filling in the details of Step 3 in \Cref{subsec:dst}.
Recall that we want to show the optimal value of \eqref{eq:1d} equals $V(\Theta, \mu)$. We first show it for Lipschitz continuous $v$, and then extend it to all continuous $v$. Without loss, we assume $0 \in \Theta \subseteq [0, 1]$. Suppose for contradiction that there exist some $(\hat{x}, \hat{t})$ feasible for \eqref{eq:1d} and some $\epsilon > 0$ such that \[\label{eq:upperbound} V(\Theta, \mu) + \epsilon \leq \E^\mu[v(\hat{x}(\theta), \theta) + \hat{t}(\theta)]\,. \tag{A.7}\]
Let $\bar{S} = 3\max_{x,\theta} |u(x,\theta)| + 3\max_{x,\theta}|v(x,\theta)|$. By Lusin's theorem (see e.g. \citealt{Aliprantis2006}, Theorem 12.8), there exists a compact set $\tilde{\Theta} \subseteq \Theta$ such that $\hat{x}, \hat{t}$ are continuous on $\tilde{\Theta}$ and $\alpha := \mu(\Theta \backslash \tilde{\Theta}) < \epsilon/(3\bar{S}) $. Since $\tilde{\Theta}$ is compact, $\underline{\tilde{\Theta}} := \min\{\tilde{\Theta}\}$ is attained. If $\underline{\tilde{\Theta}} > 0$, we augment $\tilde{\Theta}$ by adding $\theta = 0$. Since $\{0\}$ is a singleton disjoint from the compact set $\tilde{\Theta}$, we have $\hat{x}, \hat{t}$ continuous on the augmented set as well. Since $(\hat{x}, \hat{t})$ is IR, $\hat{t}(\theta) \leq \max_{x,\theta} |u(x,\theta)|$, and hence \[\label{eq:app} \E^\mu[v(\hat{x}(\theta), \theta) + \hat{t}(\theta)] \leq (1-\alpha) \E^{\tilde{\mu}}[v(\hat{x}(\theta), \theta) + \hat{t}(\theta)] + \alpha \bar{S}\,, \tag{A.8}\] where $\tilde{\mu}$ is the distribution of $\theta$ conditional on $\theta \in \tilde{\Theta}$. We pick an approximation sequence $\{(\Theta^{(n)}, \mu^{(n)})\}$ for $(\tilde{\Theta}, \tilde{\mu})$ according to \Cref{lem:app}. By \eqref{eq:diff}, for all $n$ large enough, we have \[\label{eq:app2} \E^{\mu^{(n)}}[v(\bar{x}^{(n)}(\theta), \theta) + \bar{t}^{(n)}(\theta)] - \frac{\epsilon}{3(1-\alpha)}\leq \E^{\tilde{\mu}}[v(\bar{x}^{(n)}(\theta), \theta) + \bar{t}^{(n)}(\theta)] \,, \tag{A.9}\] where $(x^{(n)}, t^{(n)})$ is an optimal solution to the full IC problem $\eqref{eq:1d}^\dagger$ with $(\Theta^{(n)}, \mu^{(n)})$, and $(\bar{x}^{(n)}, \bar{t}^{(n)})$ is the extension of $(x^{(n)}, t^{(n)})$ to the right, as defined in the proof of \Cref{lem:app}. As in the proof of \Cref{lem:app}, $(\bar{x}^{(n)}, \bar{t}^{(n)})$ satisfies all IC and IR constraints for type space $\Theta$. As in the proof of \Cref{lem:exist}, \eqref{eq:tbound} then holds for $\bar{t}^{(n)}$. 
By feasibility, \eqref{eq:tbound}, and \eqref{eq:app2}, we have \begin{align*}
V(\Theta, \mu) &\geq \E^{\mu}[v(\bar{x}^{(n)}(\theta), \theta) + \bar{t}^{(n)}(\theta) ] \\
&\geq (1-\alpha)\E^{\tilde{\mu}}[v(\bar{x}^{(n)}(\theta), \theta) + \bar{t}^{(n)}(\theta) ] -\alpha \bar{S}\\
&\geq (1-\alpha)\E^{\mu^{(n)}}[v(\bar{x}^{(n)}(\theta), \theta) + \bar{t}^{(n)}(\theta) ] -\frac{2}{3}\epsilon \\
&= (1-\alpha)V(\Theta^{(n)}, \mu^{(n)}) -\frac{2}{3}\epsilon \\
&\geq (1-\alpha)\E^{\mu^{(n)}}[v(\hat{x}( \theta), \theta) + \hat{t}(\theta) ] -\frac{2}{3}\epsilon \,. \end{align*} In the last inequality, we have used that $(\hat{x}, \hat{t})$ is a downward IC and IR mechanism for $(\Theta^{(n)}, \mu^{(n)})$ and that \Cref{thm:dbind} holds for finite type spaces (see Step 2 in \Cref{subsec:dst}). Because $\hat{x}, \hat{t}$ are bounded and continuous on $\tilde{\Theta}$, and $v$ is continuous on the compact space $\mathcal{X} \times \Theta$, we have $v(\hat{x}( \theta), \theta) + \hat{t}(\theta)$ is bounded and continuous on $\tilde{\Theta}$. Then, since $\mu^{(n)} \rightarrow_w \tilde{\mu}$ in $\Delta(\tilde{\Theta})$, taking limits on both sides of the above and using \eqref{eq:app}, we see that \begin{align*} V(\Theta, \mu) &\geq (1-\alpha)\E^{\tilde{\mu}}[v(\hat{x}( \theta), \theta) + \hat{t}(\theta) ] -\frac{2}{3}\epsilon \\ &\geq \E^{\mu}[v(\hat{x}( \theta), \theta) + \hat{t}(\theta) ] - \alpha \bar{S} -\frac{2}{3} \epsilon\\ & > \E^{\mu}[v(\hat{x}( \theta), \theta) + \hat{t}(\theta) ] - \epsilon\,, \end{align*} contradicting \eqref{eq:upperbound}.
Now we let $v$ be any continuous function on $\mathcal{X} \times \Theta$. Since $\mathcal{X} \times \Theta$ is compact, as a consequence of the Stone–Weierstrass theorem (see e.g. \citealt{Aliprantis2006}, Theorem 9.13), the set of Lipschitz continuous real-valued functions on $\mathcal{X} \times \Theta$ is dense in the space of continuous functions on $\mathcal{X} \times \Theta$ (with the sup norm). Therefore, there exists a sequence of Lipschitz continuous functions $\{v_{k}\}$ converging uniformly to $v$. Passing to a subsequence if necessary, we may assume that for all $k$,
\[\sup_{x \in \mathcal{X}, \theta \in \Theta} |v_{k}(x, \theta) - v(x, \theta)| < \frac{1}{k}\,.\] Using the above and the earlier result applied to $v_k$, we have for all $k$, \begin{align*} \sup_{(x,t) \in \tilde{\mathcal{M}}(\Theta)} \E\big [v(x(\theta), \theta) + t(\theta) \big ] - \frac{1}{k} &\leq \sup_{(x,t) \in \tilde{\mathcal{M}}(\Theta)}\E\big [v_{k}(x(\theta), \theta) + t(\theta) \big ] \\ &\leq \sup_{(x,t) \in \mathcal{M}(\Theta)}\E\big [v_{k}(x(\theta), \theta) + t(\theta) \big ] \leq \sup_{(x,t) \in \mathcal{M}(\Theta)}\E\big [v(x(\theta), \theta) + t(\theta) \big ] + \frac{1}{k}\,. \end{align*} Taking $k \rightarrow \infty$ then gives the desired inequality.
Invoking the existence result of \Cref{lem:exist}, we conclude the proof of \Cref{thm:dbind}. \end{proof}
\begin{proof}[Completion of Proof of \Cref{thm:main}] We first note that the argument in \Cref{sec:proof} holds for any compact $\mathcal{X} \subset \mathds{R}$, any measurable space $\mathcal{Y}$, and any compact $\Theta^B = \Theta^A \subset \mathds{R}$. Thus, we only need to generalize it to the case of imperfect correlation (stochastic monotonicity), where $\theta^B$ may be multidimensional.
We start by introducing the following lemma: \begin{lemma}[Measurable Monotone Coupling] \label{lem:decomp} If $\theta^B$ is stochastically nondecreasing in $\theta^A$, then there exist a measurable space $\mathcal{E}$; an $\mathcal{E}$-valued random variable $\varepsilon$, independent of $\theta^A$; and a measurable function $h: \Theta^A \times \mathcal{E} \rightarrow \Theta^B$ nondecreasing in the first argument such that \[\theta \eqid (\theta^A, h(\theta^A; \varepsilon))\,.\] \end{lemma} The proof of this lemma is in \Cref{app:add}, building on a result of \citet{kamae1978stochastic}.
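A standard way to realize the coupling in \Cref{lem:decomp} is the conditional quantile transform $h(\theta^A; \varepsilon) = F^{-1}_{\theta^B \mid \theta^A}(\varepsilon)$ with $\varepsilon \sim U[0,1]$; stochastic monotonicity then makes $h$ nondecreasing in $\theta^A$. The sketch below checks this on a hypothetical binary example with parameters of our own choosing.

```python
# Illustration of the monotone coupling: epsilon ~ U[0,1] and h is the
# conditional quantile function. Hypothetical example: theta_B | theta_A = a
# is Bernoulli(p_a) with p_0 = 0.3 <= p_1 = 0.7, so theta_B is stochastically
# nondecreasing in theta_A.

p = {0: 0.3, 1: 0.7}

def h(theta_a, eps):
    # conditional quantile: returns 1 iff eps exceeds the mass at 0
    return 1 if eps > 1.0 - p[theta_a] else 0

# h is nondecreasing in theta_a for every eps ...
monotone = all(h(1, e / 1000) >= h(0, e / 1000) for e in range(1001))
# ... and integrating out eps recovers the correct conditional laws
mass1_given_0 = sum(h(0, (e + 0.5) / 1000) for e in range(1000)) / 1000
mass1_given_1 = sum(h(1, (e + 0.5) / 1000) for e in range(1000)) / 1000
print(monotone, mass1_given_0, mass1_given_1)  # True 0.3 0.7
```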
Let $\Theta_\varepsilon=\{(\theta^A,\theta^B): \theta^B = h(\theta^A; \varepsilon), \theta^A \in \Theta^A\}$ be the decomposed monotonic path given a realization $\varepsilon$. For any type space $\Theta$, recall $\mathcal{M}(\Theta)$ is the set of IC and IR mechanisms. Let $v(x, y, \theta) = v^A(x, \theta^A) + v^B(y, \theta^B)$. It then follows that \begin{align*}\label{eq:split}
\sup_{(x,y,t) \in \mathcal{M}(\Theta)} \E[v(x(\theta), y(\theta), \theta) +t(\theta)] &\leq \E_\varepsilon\Big[ \sup_{(x,y,t) \in \mathcal{M}(\Theta)} \E\big [v(x(\theta), y(\theta), \theta) +t(\theta) \mid \varepsilon \big ] \Big ] \\
&\leq \E_\varepsilon\Big[ \sup_{(x,y,t) \in \mathcal{M}(\Theta_\varepsilon)} \E\big [v(x(\theta), y(\theta), \theta) +t(\theta) \mid \varepsilon \big ] \Big ] \,.\tag{A.10} \end{align*} Because $\varepsilon$ is independent of $\theta^A$, the inner expectation integrates with respect to the same marginal distribution of $\theta^A$ regardless of the realization of $\varepsilon$.
By the proof in \Cref{sec:proof}, we have that (i) for all realizations of $\varepsilon$, \[\label{eq:sobj} \sup_{(x,y,t) \in \mathcal{M}(\Theta_\varepsilon)} \E\big [v(x(\theta), y(\theta), \theta) +t(\theta) \mid \varepsilon \big ] \tag{A.11}\] can be attained by a single mechanism in $\mathcal{M}(\Theta)$ that involves no costly screening; and (ii) if the instruments are strictly costly, then all optimal solutions to \eqref{eq:sobj} satisfy $\mathds{P}(y(\theta) = y_0 \mid \varepsilon) = 1$. The first part of \Cref{thm:main} follows immediately. Now, if a mechanism $(x, y, t)$ in $\mathcal{M}(\Theta)$ has $y(\theta) \neq y_0$ for a positive measure of $\theta$, then we have $\mathds{P}(y(\theta) = y_0 \mid \varepsilon) < 1$ for a positive measure of $\varepsilon$. Hence, if the instruments are strictly costly, then $(x, y, t)$ is strictly dominated by any optimal mechanism involving no costly screening, and thus the second part of \Cref{thm:main} follows. \end{proof}
\begin{proof}[Proof of \Cref{prop:nested}] This is a special case of \Cref{prop:generalnested} in \Cref{app:generalnested} applied to items $\{1, 2\}$ and the menu $\{\{1\}, \{1, 2\}\}$. \end{proof}
\begin{proof}[Proof of \Cref{prop:discount}] We can write the agent's payoff as \[vu(x) - (1 - D(r, T)) v u(x) - p\,.\]
Let $N = |\mathcal{T}|$. We relax this problem by introducing $\alpha \in \Delta(\mathcal{T})$ and $z \in \mathcal{X}^{N}$, which are assumed to enter the agent's payoff as \[vu(x) - \sum_{i = 1}^N \alpha_i (1 - D(r, T_i)) v u(z_i) - p\,,\] where $T_i$ is the $i$-th element in $\mathcal{T}$. The principal's payoff remains the same as in the original problem: $-C(x)+p$. This is a relaxed problem because for any allocation $(x, T_i)$, we can replicate it by assigning the same $x$, $z_i = x$ for all $i$, $\alpha_i = 1$ (and $\alpha_j = 0$ for $j \neq i$).
Now, letting $y = (\alpha, z)$, we can write the agent's payoff as \[vu(x) - \sum_{i=1}^N c_i(y) (1 - D(r, T_i)) v - p\,,\] where $c_i(y) := \alpha_i u(z_i) \geq 0$. Let $c(y) := (c_i(y))_i$. Let $\delta_0 \in \Delta(\mathcal{T})$ be the Dirac measure centered on time $0$. Let $y_0 := (\delta_0, 0) \in \Delta(\mathcal{T}) \times \mathcal{X}^{N} =: \mathcal{Y}$. Note that $c(y_0) = 0$. Let $\theta^i := -(1 - D(r, T_i)) v \leq 0$ and $\theta^B := (\theta^1, \cdots, \theta^N)$. Then the agent's payoff can be written as \[vu(x) + \theta^B \cdot c(y) - p\,.\] Note that because $r$ is stochastically nonincreasing in $v$ and $D(\,\cdot\,, T)$ is nonincreasing for any $T$, we have $\frac{1}{v}\theta^B$ is stochastically nondecreasing in $v$. By \Cref{prop:linear}, there exists a solution to the relaxed problem that has the form $(x^*, y_0, p^*)$, which is also implementable in the original problem with no intertemporal price discrimination, proving the result. \end{proof}
\begin{proof}[Proof of \Cref{prop:labor}] Let $t = -wx$ denote the expected transfer from the agent to the principal. Then we can write the agent's payoff as \[-C(\theta) x - cy - t\,,\] and the principal's payoff as \[V(\theta) x + t\,.\]
In this problem, there is also the constraint that $t=-wx$ must be $0$ if $x = 0$. Relax that constraint. Then this is a special case of the main model. The surplus function $(V(\theta) - C(\theta))x$ satisfies Assumption (1.3) because $V(\theta) - C(\theta)$ is nondecreasing in $\theta$. Applying \Cref{thm:main} yields an optimal mechanism $(x^*, 0, t^*)$ that involves no costly screening. Note that for $(x^*, t^*)$ to be optimal for the productive component, the payment $t_0$ associated with the option $x = 0$ must be $0$ because (i) if $t_0 > 0$ then the mechanism cannot be IR and (ii) if $t_0 < 0$ then the principal can strictly improve upon the mechanism by increasing $t(\theta)$ uniformly by $|t_0|$ for all types $\theta$. Thus, $(x^*, 0, t^*)$ is implementable in the original problem and hence must be optimal, proving the result. \end{proof}
\setlength\bibsep{12pt}
\section{Online Appendix} \label{app:b}
\subsection{Bundling with Nested Bundles: The General Case} \label{app:generalnested}
There are $G$ distinct items. Let $\mathcal{B}$ be the power set of $\{1, \dots, G\}$. A menu $B \subseteq \mathcal{B}$ is a \textit{nested menu} if for every $b_1, b_2 \in B$, either $b_1 \subseteq b_2$ or $b_1 \supseteq b_2$. The consumer has private information about his type $\theta \in \Theta \subset \mathds{R}$. The value of bundle $b$ for type $\theta$ is denoted by $v(b, \theta)$, which is assumed to be nondecreasing in $b$ (set inclusion) and continuous and nondecreasing in $\theta$. Let $\bar{b}$ be the grand bundle.
\begin{prop} \label{prop:generalnested} Consider any nested menu $B$ that includes $\bar{b}$. Suppose that \begin{itemize}
\item[(i)] for every $b, b' \in B$ such that $b \subset b'$, $v(b', \theta) - v(b, \theta)$ is strictly increasing in $\theta$;
\item[(ii)] for every $b \not \in B$, there exists $b' \in B$ such that $b \subset b'$ and $v(b', \theta) - v(b, \theta)$ is nonincreasing in $\theta$. \end{itemize}
Then there exists some pricing rule $p_B \in \mathds{R}^{|B|}$ for bundles in $B$ such that the mechanism induced by $(B, p_B)$ is optimal among all deterministic mechanisms. \end{prop}
\begin{proof}[Proof of \Cref{prop:generalnested}] Order the bundles in $B$ by $b_1 \subset b_2 \subset \cdots \subset \bar{b}$. Without loss of generality, let $b_1 = \emptyset$. For $b \in B$, let $i(b)$ be its associated index.
Let $\mathcal{X} := \{1, 2, \dots, |B|\}$. Let the agent's utility on the productive component be \[u^A(x, \theta) := v(b_x, \theta)\,.\]
Let \[\mathcal{Y} := \Big \{y \in \{0, 1\}^{\mathcal{B}\backslash B}: \sum_{b \not \in B} y_b \leq 1 \Big\}\,.\] By assumption, there exists some function $\beta: \mathcal{B}\backslash B\rightarrow B$ such that (i) $b \subset \beta(b)$ and (ii) $v(\beta(b), \theta) - v(b, \theta)$ is nonincreasing in $\theta$. Let the agent's utility on the costly component be \[u^B(y, \theta) := \sum_{b \not \in B} \big (v(b, \theta) - v(\beta(b), \theta) \big ) y_b \leq 0\,.\] For any allocation $a$ that assigns $a_b = 1$ for $b \in B$, we can replicate it by letting $x = i(b)$ and $y = 0$. For any allocation $a$ that assigns $a_b = 1$ for $b \not\in B$, we can replicate it by letting $x = i(\beta(b))$ and $y_{b} = 1$ (and $y_{b'} = 0$ for $b' \neq b$).
Because $v(b, \theta) - v(\beta(b), \theta)$ is nondecreasing in $\theta$ for all $b \not \in B$, we have $u^B(y, \theta)$ is nondecreasing in $\theta$ for all $y \in \mathcal{Y}$. By assumption, $u^A(x, \theta)$ is nondecreasing in $\theta$ and has strict increasing differences. Therefore, by \Cref{thm:main}, there exists an optimal mechanism that sets $y^* = 0$ in this relaxed problem. But that is implementable in the original problem and corresponds to a pricing rule $p_B$ for bundles in $B$, proving the result. \end{proof}
\paragraph{A parametric example for the two-item case.}\hspace{-2mm}Suppose $\Theta = [1, 1.5]$ with a uniform distribution. Consider the following valuations: \[v(\{1\}, \theta) = \alpha \theta^{\kappa_1}, \qquad v(\{2\}, \theta) = \theta^{\kappa_2} -1, \qquad v(\{1, 2\}, \theta) = \theta^2 \,.\] Fix $\alpha = 0.75$ and $\kappa_2 = 2.5$. There are decreasing differences in the values of $\{1, 2\}$ versus those of $\{2\}$. Now comparing $\{1, 2\}$ and $\{1\}$, we note that there are decreasing differences in the values if $\kappa_1 \geq 2.7$ and increasing differences in the values if $\kappa_1 \leq 2.3$. Here, for example: \begin{itemize}
\item When $\kappa_1 = 2.7$, pure bundling is optimal (with price $\approx 1$ for the bundle);
\item When $\kappa_1 = 1.8$, selling the menu $\{\{1\}, \{1, 2\}\}$ is optimal (with price $\approx 0.75$ for item $1$ and price $\approx 1.1$ for the bundle). \end{itemize} This follows from \Cref{prop:nested} but there is no known bundling result implying this.
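The difference-monotonicity claims in this parametric example can be verified numerically on a grid over $\Theta = [1, 1.5]$; the sketch below checks the signs of the value differences for the stated parameters.

```python
# Numerical check of the difference-monotonicity claims in the parametric
# example with alpha = 0.75, kappa_2 = 2.5, Theta = [1, 1.5].

alpha, kappa2 = 0.75, 2.5
grid = [1.0 + 0.5 * j / 400 for j in range(401)]

def diff_bundle_vs_2(th):            # v({1,2}, th) - v({2}, th)
    return th ** 2 - (th ** kappa2 - 1)

def diff_bundle_vs_1(th, kappa1):    # v({1,2}, th) - v({1}, th)
    return th ** 2 - alpha * th ** kappa1

def is_decreasing(f):
    vals = [f(th) for th in grid]
    return all(b <= a + 1e-12 for a, b in zip(vals, vals[1:]))

def is_increasing(f):
    vals = [f(th) for th in grid]
    return all(b >= a - 1e-12 for a, b in zip(vals, vals[1:]))

print(is_decreasing(diff_bundle_vs_2))                      # True
print(is_decreasing(lambda th: diff_bundle_vs_1(th, 2.7)))  # True: bundle
print(is_increasing(lambda th: diff_bundle_vs_1(th, 1.8)))  # True: menu
```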
\subsection{Competitive Labor Market Screening} \label{app:comp}
Our main model assumes monopolistic screening. It delivers a prediction different from the usual perception of costly screening in competitive labor markets. In this appendix, we formulate a stylized competitive screening model consisting of two screening devices and show how competition can reverse the use of costly instruments.
There are two types of workers $\theta_H > \theta_L \geq 0$ in a perfectly competitive labor market.\footnote{This part of the setup is standard; see e.g. \citetapp{spence1978product} and \citetapp{stantcheva2014optimal}. } A type-$\theta_i$ worker incurs a cost $\psi_i(x)$ for producing $x \in [0,1]$ units of work where $\psi_i$ is a strictly increasing, continuously differentiable, and strictly convex function on $[0, 1]$ with $\psi_i(0) = 0$. A firm gets a payoff $\theta_i x$ from $x$ units of work by a type-$\theta_i$ worker.
Suppose the marginal cost is lower for the higher type: $\psi'_H(x) < \psi'_L(x)$ for all $x \in [0,1]$. The efficient amount of production for type $\theta_i$ is $x^{e}_i := (\psi_{i}')^{-1}(\theta_i)$, assumed to be in the interior of $[0, 1]$. Suppose that \[\label{eq:adverse} \theta_L x^e_L - \psi_L(x^e_L) < \theta_H x^e_H - \psi_L(x^e_H) \,\] so the low type wants to imitate the high type when given the menu of the efficient allocations with competitive prices. Without this assumption, there is no adverse selection problem. Suppose also there exists some $x \geq x^e_L$ such that $\theta_L x^e_L - \psi_L(x^e_L) \geq \theta_H x - \psi_L(x)$ so it is possible to separate the types using only the work allocations.
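A concrete parametrization satisfying all of the maintained assumptions (a hypothetical instance of our own: $\theta_L = 0.5$, $\theta_H = 0.9$, $\psi_L(x) = x^2 + 0.1x$, $\psi_H(x) = 0.75x^2$) can be checked as follows.

```python
# Verify the maintained assumptions for a hypothetical instance:
# theta_L = 0.5, theta_H = 0.9, psi_L(x) = x^2 + 0.1x, psi_H(x) = 0.75 x^2.

theta_L, theta_H = 0.5, 0.9
psi_L = lambda x: x ** 2 + 0.1 * x
psi_H = lambda x: 0.75 * x ** 2
dpsi_L = lambda x: 2 * x + 0.1
dpsi_H = lambda x: 1.5 * x

# marginal cost is lower for the high type on [0, 1]
assert all(dpsi_H(j / 100) < dpsi_L(j / 100) for j in range(101))

# efficient allocations, interior in [0, 1]
x_eL = 0.2   # solves dpsi_L(x) = theta_L
x_eH = 0.6   # solves dpsi_H(x) = theta_H
assert abs(dpsi_L(x_eL) - theta_L) < 1e-12 and abs(dpsi_H(x_eH) - theta_H) < 1e-12

# adverse selection: the low type prefers the high type's efficient offer
lhs = theta_L * x_eL - psi_L(x_eL)
rhs = theta_H * x_eH - psi_L(x_eH)
print(lhs < rhs)  # True: adverse selection is present

# separation via work alone is possible at, e.g., x = 0.8 >= x_eL
print(lhs >= theta_H * 0.8 - psi_L(0.8))  # True
```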
There is one costly instrument. For a level $y \in [0,1]$ of the costly activity, a type-$\theta_i$ worker incurs a cost $c_i(y)$ where $c_i$ is a strictly increasing, continuously differentiable function on $[0, 1]$ with $c_i(0) = 0$. Suppose $\label{eq:effective} c'_L(0) > \psi'_L(1)$ and $c'_H(0) = 0$. This says that a small amount of $y$ costs nothing for the high type but a lot for the low type.
The firms commit to a set of offers. Each offer specifies an amount of work $x$, a level of costly activity $y$, and a wage $w$. The literature has not reached a consensus on the choice of solution concept for competitive screening models. We say a set of offers $\{(x, y, w)\}$ is a \textit{separating set} if (i) the types separate and (ii) the firms earn zero payoff on each offer. A set of offers is a \textit{Pareto-optimal separating set} if it is (constrained) Pareto-optimal among all separating sets. This solution concept is weaker than the Pareto-dominant separating set, which is known to be equivalent to the reactive equilibrium of \citetapp{riley1979informational} in settings with one screening device (\citealtapp{engers1987market}).
This competitive screening model is analogous to the labor market application in \Cref{subsec:labor}. However, costly screening now emerges in equilibrium: \begin{prop}\label{prop:comp} A Pareto-optimal separating set exists and any Pareto-optimal separating set involves costly screening. \end{prop}
\begin{proof}[Proof of \Cref{prop:comp}] We first prove the second part. Suppose for contradiction that there exists a Pareto-optimal separating set $\{(x, y, w)\}$ that does not involve costly screening ($y = 0$). By the definition of a separating set, $x_H, x_L$ must differ. By the single-crossing property of $\psi$, we then have $x_H > x_L$. Note that $x_H$ cannot be $x_H^e$ because if so $\text{IC}[\theta_L \rightarrow \theta_H]$ would be violated: \[\theta_L x_L - \psi_L(x_L) \leq \theta_L x^e_L - \psi_L(x^e_L) < \theta_H x^e_H - \psi_L(x^e_H)\,,\] where the first inequality holds by definition of $x^e_L$ and the second inequality holds by assumption. Therefore, $\text{IC}[\theta_L \rightarrow \theta_H]$ must be binding. To see this, note that if the upward IC constraint is not binding, then one can move $x_H$ by a small enough $\delta$ toward $x^e_H$ without breaking the upward IC constraint. Since the surplus function $\theta_H x -\psi_H(x)$ is strictly concave, the modification increases the payoff of the high type and hence also preserves the downward IC constraint. But this means that the original set of offers is dominated by another separating set, contradicting Pareto optimality.
Since $x_H > x_L$ and the upward IC constraint is binding, the downward IC constraint must be slack by the single-crossing property of $\psi$. This implies that $x_L = x^e_L$ because otherwise moving $x_L$ slightly toward $x^e_L$ gives a contradiction by the same argument as above. We claim that $x_H > x^e_H$. To see this, let \[f(x) = (\theta_H x - \psi_L(x)) - (\theta_L x^e_L - \psi_L(x^e_L)) \,. \] Note that it is concave on $[x^e_L, x^e_H]$. Moreover, $f(x^e_L) = (\theta_H - \theta_L)x^e_L > 0$, and $f(x^e_H) = (\theta_H x^e_H - \psi_L(x^e_H)) - (\theta_L x^e_L - \psi_L(x^e_L)) > 0$ by assumption. Thus, $f(x) > 0$ for all $x \in [x^e_L, x^e_H]$ and hence $x_H$ cannot be in that region. Therefore, $x_H > x^e_H$.
Now consider the menu $\{(x_L, 0, \theta_L x_L), (x_H-\epsilon, \epsilon ,\theta_H (x_H - \epsilon))\}$ for $\epsilon > 0$. We claim that for $\epsilon$ small enough, the offer $(x_H-\epsilon, \epsilon ,\theta_H (x_H - \epsilon))$ increases the payoff of the high type. Let \[u_H(\epsilon) = \theta_H (x_H - \epsilon) -\psi_H(x_H - \epsilon) - c_H(\epsilon)\,.\] It is a continuously differentiable function of $\epsilon$. The right derivative of this function at $0$ is strictly positive because \[\partial_{+} u_H(0) = - (\theta_H - \partial_{-}\psi_H(x_H)) - \partial_{+} c_H(0) = - (\theta_H - \partial_{-}\psi_H(x_H)) > 0\,,\] where the second equality holds by assumption, and the last inequality holds by strict concavity of the surplus function $\theta_H x - \psi_H(x)$ together with $x_H > x^e_H$. Therefore, there exists some $\epsilon > 0$ such that $u'_H(s) > 0$ for all $s \in [0, \epsilon]$; the claim follows immediately.
We also claim that for $\epsilon > 0$ small enough, the modification still deters the low type from imitating the high type. To see this, let \[\hat{u}_L(\epsilon) = \theta_H (x_H - \epsilon) -\psi_L(x_H - \epsilon) - c_L(\epsilon) \,.\] It is a continuously differentiable function of $\epsilon$. The right derivative of this function at $0$ is strictly negative because \[\partial_+ \hat{u}_L(0)= -\theta_H + \partial_{-} \psi_L(x_H) - \partial_+ c_L(0) \leq \partial_{-} \psi_L(1) - \partial_+ c_L(0) < 0 \,,\] where the first inequality uses convexity of $\psi_L$ and the second inequality holds by assumption. Therefore, there exists some $\epsilon > 0$ such that $\hat{u}'_L(s) < 0$ for all $s \in [0, \epsilon]$; the claim follows immediately.
Hence, for $\epsilon > 0$ sufficiently small, the proposed menu is a separating set that Pareto-improves on the original one. Contradiction.
For the first part of the statement, consider the following optimization problem: \[\max_{(x,y)\in[0,1]^2} \theta_H x -\psi_H(x) - c_H(y) \] \[\text{subject to} \quad \theta_L x^e_L -\psi_L(x^e_L) \geq \theta_H x -\psi_L(x) - c_L(y)\,. \] An optimizer $(x^*, y^*)$ exists by standard compactness arguments. Moreover, $y^* \neq 0$ because otherwise it can be strictly improved by the argument above.
We claim that $\{(x^e_L, 0, \theta_L x^e_L), (x^*, y^*, \theta_H x^*)\}$ is a separating set. The low type chooses the first offer by construction. To see that the high type chooses the second offer, recall that by assumption there exists some $x \geq x^e_L$ such that $\theta_L x^e_L - \psi_L(x^e_L) \geq \theta_H x - \psi_L(x)$. Since this inequality is violated at $x^e_L$, by continuity, there exists some $x_H > x^e_L$ such that $\theta_L x^e_L - \psi_L(x^e_L) = \theta_H x_H - \psi_L(x_H)$. Then, by the single-crossing property of $\psi$, $\theta_H x_H - \psi_H(x_H) > \theta_L x^e_L - \psi_H(x^e_L)$. Thus, \[ \theta_H x^* -\psi_H(x^*) - c_H(y^*) \geq \theta_H x_H - \psi_H(x_H) > \theta_L x^e_L - \psi_H(x^e_L)\,, \] where the first inequality uses that $(x_H, 0)$ is a feasible solution. So the high type chooses the second offer.
Observe that $\{(x^e_L, 0, \theta_L x^e_L), (x^*, y^*, \theta_H x^*)\}$ must be Pareto-optimal among all separating sets. Suppose for contradiction that there is a separating set that Pareto-dominates it. Then the separating set must provide strictly higher payoff for the high type and maintain the same payoff for the low type because the low type already gets the maximal payoff. But that is impossible subject to $\text{IC}[\theta_L \rightarrow \theta_H]$ by the construction of $(x^*, y^*)$. \end{proof}
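The optimization problem defining $(x^*, y^*)$ in the proof can be illustrated numerically by a grid search. All primitives below are hypothetical choices satisfying the assumptions of the proposition, not taken from the text: quadratic effort costs $\psi_\theta(x) = x^2/(2\theta)$ (which satisfy single crossing), screening costs $c_L(y) = 3y$ and $c_H(y) = y^2$ (so that $\partial_+ c_H(0) = 0$), and type values $\theta_L = 0.5$, $\theta_H = 0.9$.

```python
import numpy as np

# Hypothetical primitives satisfying the assumptions of the proposition:
# effort costs psi_theta(x) = x^2/(2*theta) (single crossing) and
# screening costs c_L(y) = 3*y, c_H(y) = y^2 (so c_H'(0+) = 0).
th_L, th_H = 0.5, 0.9
psi = lambda th, x: x**2 / (2.0 * th)
x_eL = th_L**2                               # efficient effort: psi'(x) = theta
u_L = th_L * x_eL - psi(th_L, x_eL)          # low type's payoff at (x_eL, 0)

# Grid search for the high type's offer (x*, y*) subject to the upward
# IC constraint  u_L >= th_H*x - psi_L(x) - c_L(y)  from the proof.
xs = np.linspace(0.0, 1.0, 401)
ys = np.linspace(0.0, 1.0, 401)
X, Y = np.meshgrid(xs, ys, indexing="ij")
feasible = u_L >= th_H * X - psi(th_L, X) - 3.0 * Y
payoff_H = th_H * X - psi(th_H, X) - Y**2
payoff_H[~feasible] = -np.inf
i, j = np.unravel_index(np.argmax(payoff_H), payoff_H.shape)
x_star, y_star = xs[i], ys[j]
print(x_star, y_star)                        # y_star > 0: costly screening
```

With these primitives the maximizer indeed has $y^* > 0$: the high type distorts effort less and burns a small amount of the costly activity instead, consistent with the second part of the proposition.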
\subsection{Additional Proofs} \label{app:add}
\subsubsection{Measurable Monotone Coupling} In this appendix, we provide the proof of \Cref{lem:decomp} (stated in \Cref{app:proof}), building on \citetapp{kamae1978stochastic}. We start with a technical lemma.
\begin{lemma}\label{lem:eqv} Suppose $X$, $Y$ are two $\Theta$-valued random variables where $\Theta$ is a compact subset of $\mathds{R}^N$. Let $\preceq_{st}$ and $\preceq'_{st}$ denote the stochastic dominance partial orders on $\Delta(\mathds{R}^N)$ and $\Delta(\Theta)$ (i.e. $X \preceq'_{st} Y$ if $\E[f(X)]\leq \E[f(Y)]$ for all bounded nondecreasing measurable $f: \Theta \rightarrow \mathds{R}$). Then $X \preceq_{st} Y$ if and only if $X \preceq'_{st} Y$. \end{lemma}
\begin{proof}[Proof of \Cref{lem:eqv}]
$(\impliedby)$ Suppose $X \preceq'_{st} Y$. Note that for any bounded monotone measurable $f: \mathds{R}^N \rightarrow \mathds{R}$, the restriction $f|_{\Theta}: \Theta \rightarrow \mathds{R}$ is also a bounded monotone measurable function, and moreover
$\E[f(X)] = \E[f|_{\Theta}(X)]\leq \E[f|_{\Theta}(Y)] = \E[f(Y)]$ since $X$, $Y$ are $\Theta$-valued and $X \preceq'_{st} Y$. So $X \preceq_{st} Y$.
$(\implies)$ Suppose $X \preceq_{st} Y$. To show $X \preceq'_{st} Y$, by Theorem 1 of \citetapp{kamae1977stochastic}, it suffices to show that for any increasing set $B \subseteq \Theta$ closed in $\Theta$, we have $\E[\mathds{1}_{X \in B}]\leq \E[\mathds{1}_{Y \in B}]$ (we say a set $B$ is \textit{increasing} if $\mathds{1}_{B}$ is a nondecreasing function).
Fix any such $B$. Let $B^{\uparrow}:= \{y \in \mathds{R}^N: y \geq x \text{ for some } x \in B\}$ be the increasing hull of $B$ in $\mathds{R}^N$. We claim that $B^{\uparrow}$ is closed in $\mathds{R}^N$. To see this, fix any $y^n \rightarrow y$ in $\mathds{R}^N$ where $y^n \in B^{\uparrow}$. Since $y^n \in B^{\uparrow}$, there exists $x^n \in B$ such that $y^n \geq x^n$. Since $B$ is a closed subset of a compact set $\Theta$, $B$ is compact. Therefore, there exists a subsequence $x^{n_l}$ converging to some $x \in B$. Passing to this subsequence, we have $\displaystyle y = \lim_{l\rightarrow \infty} y^{n_l} \geq \lim_{l \rightarrow \infty} x^{n_l} = x \in B$ and hence $y \in B^{\uparrow}$. This proves that $B^{\uparrow}$ is closed in $\mathds{R}^N$, and hence measurable.
Because $X$ is $\Theta$-valued, we have $\E[\mathds{1}_{X\in B^{\uparrow}}] = \E[\mathds{1}_{X\in B^{\uparrow} \cap \Theta }]$. We claim that $ B^{\uparrow} \cap \Theta = B$. Since $B \subseteq \Theta$ and $B \subseteq B^{\uparrow}$, we have $ B \subseteq B^{\uparrow} \cap \Theta$. Now take any $y\in B^{\uparrow} \cap \Theta$. Then $y \in \Theta$ and there exists some $x \in B$ such that $y \geq x$. But because $B$ itself is an increasing set in $\Theta$, we must have $y \in B$. Thus $B^{\uparrow} \cap \Theta \subseteq B$. Therefore, $B^{\uparrow} \cap \Theta = B$. Now, we have \[\E[\mathds{1}_{X\in B}] = \E[\mathds{1}_{X\in B^{\uparrow} \cap \Theta }] = \E[\mathds{1}_{X\in B^{\uparrow}}] \leq \E[\mathds{1}_{Y\in B^{\uparrow}}] = \E[\mathds{1}_{Y\in B^{\uparrow} \cap \Theta }] = \E[\mathds{1}_{Y\in B}] \] where the inequality follows from that $X \preceq_{st} Y$ and $B^{\uparrow}$ is a measurable increasing set in $\mathds{R}^N$. Since this holds for all closed increasing sets $B$ in $\Theta$, the claim follows. \end{proof}
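In one dimension, the stochastic order used above admits the familiar CDF characterization: $X \preceq_{st} Y$ if and only if $F_X(t) \geq F_Y(t)$ for all $t$. The following sketch, with hypothetical discrete distributions, checks this against the defining condition via the nondecreasing indicator functions $\mathds{1}_{t \geq s}$, which generate the order on a finite support:

```python
import numpy as np

# Hypothetical discrete distributions on a common support.
support = np.array([0.0, 1.0, 2.0])
p_X = np.array([0.5, 0.3, 0.2])
p_Y = np.array([0.2, 0.3, 0.5])       # shifts probability mass upward

# 1-D characterization: X <=_st Y  iff  F_X(t) >= F_Y(t) for all t.
F_X, F_Y = np.cumsum(p_X), np.cumsum(p_Y)
dominates = bool(np.all(F_X >= F_Y))

# Defining condition, checked on the increasing indicators 1_{t >= s},
# which generate the stochastic order for finite supports.
ok = all(p_X @ (support >= s).astype(float)
         <= p_Y @ (support >= s).astype(float) for s in support)
print(dominates, ok)                   # True True
```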
\begin{proof}[Proof of \Cref{lem:decomp}]
Let $\mathcal{B}_{\Theta^B}$ be the Borel $\sigma$-algebra of $\Theta^B$. Let $\kappa: \Theta^A \times \mathcal{B}_{\Theta^B} \rightarrow [0, 1]$ be the regular conditional distribution of $\theta^B$ given $\theta^A$. For any $t \in \Theta^A$, define measure $P_t(\,\cdot\,) = \kappa(t, \,\cdot\,)$. Let $S$ be the support of $\theta^A$. By assumption, $\{P_t\}_{t \in S}$ is a stochastically nondecreasing family of probability measures on $\mathds{R}^N$. By \Cref{lem:eqv}, it is also a stochastically nondecreasing family of probability measures on $\Theta^B$.
Let \[\varphi(s) = \begin{cases} \max \{t:\, t \leq s, \, t \in S\} &\text{ if $s \geq \min(S)$} \\ \min(S) &\text{otherwise} \\ \end{cases} \,.\] Because $(-\infty, s] \cap S$ is compact, we have $\varphi(s) \in S$. For all $s \not \in S$, define $P_s = P_{\varphi(s)}$. Because $\varphi(\,\cdot\,)$ is nondecreasing and $\varphi(s) = s$ for all $s \in S$, $\{P_t\}_{t \in \mathds{R}}$ is a stochastically nondecreasing family of probability measures on $\Theta^B$. Invoking Theorem 6 of \citetapp{kamae1978stochastic}, we have a $\Theta^B$-valued stochastic process $\{X_t\}_{t\in \mathds{R}}$ on a probability space $(\Omega, \nu)$ such that (i) $X_s(\omega) \leq X_t(\omega)$ for all $s < t$ and all $\omega$ and (ii) $P_t$ is the distribution of $X_t$ for all $t$.
By the proof of Theorem 6 in \citetapp{kamae1978stochastic}, there exists a dense countable set $D \subset \mathds{R}$ such that for all $\omega$ and all $s \not \in D$ \[X_s(\omega) = \lim_{t \rightarrow s; \, t \in D, \,t \leq s} X_t(\omega)\,. \]
Let $X^i_t$ denote the $i$-th coordinate of $X_t$. We claim that for all $i$ and all $\omega$, the sample path $X^i_t(\omega)$ is left-continuous at all $t \not \in D$. To see this, fix any $i$, $\omega \in \Omega$, $t \not \in D$, and $\epsilon > 0$. By construction, $X^i_{t}(\omega) = \displaystyle \lim_{k \rightarrow \infty} X^i_{t_k}(\omega)$ for some sequence $t_k \uparrow t$, with $t_k \in D$. So there exists some $K \in \mathds{N}$ such that $X^i_t(\omega) - X^i_{t_{K}}(\omega)<\epsilon$. But then for any $s \in (t - \delta, t)$ where $\delta := t - t_{K}$, we have $|X^i_t(\omega) - X^i_s(\omega)| \leq X^i_t(\omega) - X^i_{t_{K}}(\omega) < \epsilon$ by monotonicity of $X^i_t(\omega)$.
Because $D$ is countable, $D^c$ is dense in $\mathds{R}$. Pick any dense countable set $Q \subset D^c$. For all $\omega$, define $\bar{X}_t(\omega) = X_t(\omega)$ for all $t \in D^c$ and \[\bar{X}_t(\omega) = \lim_{s \rightarrow t; \, s \in Q, \, s \leq t} \bar{X}_s(\omega)\] for all $t \in D$. Note that $\{\bar{X}_t\}_{t\in\mathds{R}}$ is also a nondecreasing stochastic process. By a similar argument as above, $\bar{X}^i_t(\omega)$ is left-continuous at all $t \in D$. Moreover, for any $t \in D^c$, and any sequence $t_k \uparrow t$, with $t_k \in Q$, we have $\bar{X}^i_t(\omega) = X^i_t(\omega) = \displaystyle \lim_{k \rightarrow \infty} X^i_{t_k}(\omega) = \displaystyle \lim_{k \rightarrow \infty} \bar{X}^i_{t_k}(\omega)$ by left continuity of $X^i_t(\omega)$ at $t \in D^c$. Therefore, by a similar argument as above, $\bar{X}^i_t(\omega)$ is also left-continuous at all $t \in D^c$. So $\{\bar{X}_t\}_{t\in\mathds{R}}$ is a left-continuous stochastic process, and thus the map $(t, \omega) \mapsto \bar{X}^i_t(\omega)$ is jointly measurable (see e.g. \citealtapp{Karatzas1998}, p. 5). Since $D$ is countable, $(t, \omega) \mapsto X^i_t(\omega)\mathds{1}_{t \in D}$ is also jointly measurable. Then, because $X^i_t(\omega) = \bar{X}^i_t(\omega)\mathds{1}_{t \not \in D} + X^i_t(\omega)\mathds{1}_{t \in D}$, $(t, \omega) \mapsto X^i_t(\omega)$ is jointly measurable. Therefore, $(t, \omega) \mapsto X_t(\omega)$ is jointly measurable.
Let $\mathcal{E} = \Omega$ and $h(\theta^A; \varepsilon) = X_{\theta^A}(\varepsilon)$ for all $\theta^A \in \Theta^A$ and $\varepsilon \in \mathcal{E}$. Then $h:\Theta^A \times \mathcal{E} \rightarrow \Theta^B$ is jointly measurable and nondecreasing in the first argument. Let $\mu$ denote the marginal distribution of $\theta^A$. By the construction of $h$, for any $(a, b) \in \mathds{R} \times \mathds{R}^N$, we have \begin{align*}
(\mu \times \nu)(\{(\theta^A, \varepsilon): \theta^A \leq a,\, h(\theta^A; \varepsilon) \leq b\}) &= \int \mathds{1}_{\theta^A \leq a} \mathds{1}_{\theta^A \in S} \Big(\int\mathds{1}_{h(\theta^A; \varepsilon)\leq b} \d\nu(\varepsilon) \Big)\d\mu(\theta^A)\\
&= \int \mathds{1}_{\theta^A \leq a} \mathds{1}_{\theta^A \in S} \kappa (\theta^A, \,\{\theta^B: \theta^B \leq b\} )\d\mu(\theta^A)\\
&= \int \mathds{1}_{\theta^A \leq a} \kappa(\theta^A, \,\{\theta^B: \theta^B \leq b\}) \d\mu(\theta^A) \\
& = \mathds{P}(\theta^A \leq a, \,\theta^B \leq b)\,, \end{align*} where we have also used $\mu(S) = 1$ and Fubini's theorem. Thus, $\theta \eqid (\theta^A, h(\theta^A, \varepsilon))$ when $\theta^A, \, \varepsilon$ are independently drawn from $\mu, \, \nu$ respectively. \end{proof}
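In the one-dimensional case ($N = 1$), the coupling constructed above reduces to the classical quantile coupling: $h(t, \varepsilon) = F_t^{-1}(\varepsilon)$ for a shared uniform $\varepsilon$, which is nondecreasing in $t$ whenever $\{P_t\}$ is stochastically nondecreasing and reproduces each marginal. A numerical sketch with a hypothetical discrete family:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical discrete family on support {0, 1, 2} with P_0 <=_st P_1.
support = np.array([0.0, 1.0, 2.0])
P = {0: np.array([0.6, 0.3, 0.1]),
     1: np.array([0.2, 0.3, 0.5])}

def h(t, eps):
    # generalized inverse CDF (quantile function) of P_t at eps
    cdf = np.cumsum(P[t])
    cdf[-1] = 1.0                      # guard against rounding in cumsum
    return support[np.searchsorted(cdf, eps)]

eps = rng.uniform(size=100_000)        # shared uniform randomness
x0, x1 = h(0, eps), h(1, eps)
assert np.all(x0 <= x1)                # h is nondecreasing in t, pathwise
freq1 = np.array([(x1 == s).mean() for s in support])
print(freq1)                           # close to P[1] = [0.2, 0.3, 0.5]
```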
\subsubsection{Stochastic Monotonicity and Affiliation} In this appendix, we show that affiliation implies stochastic monotonicity (this is standard; we include it here for completeness).
\begin{lemma}\label{lem:aff} Suppose $(\theta^A, \theta^B)$ is a jointly affiliated random variable (where $\theta^A$ is $\mathds{R}$-valued and $\theta^B$ is $\mathds{R}^N$-valued). Then, $\theta^B$ is stochastically nondecreasing in $\theta^A$. \end{lemma} \begin{proof}[Proof of \Cref{lem:aff}] By Theorem 3.10.16 in \citetapp{muller2002comparison}, affiliation implies the ``conditionally increasing'' property (CI) defined in Definition 3.10.9 in \citetapp{muller2002comparison}. Moreover, by Theorem 5.3 of \citetapp{block1985concept}, CI implies the ``positively dependent through stochastic ordering'' property (PDS) which is also defined in Definition 3.10.9 in \citetapp{muller2002comparison}. Note that $(\theta^A, \theta^B)$ satisfying PDS clearly implies that $\theta^B$ is stochastically nondecreasing in $\theta^A$, proving the claim. \end{proof}
\setlength\bibsep{12pt} \bibliographystyleapp{ecta} \bibliographyapp{references}
\end{document} |
\begin{document}
\title{
Computing the recession cone of a convex upper image via convex projection}
\let\thefootnote\relax\footnotetext{\textbf{Funding:} G. Kov\'{a}\v{c}ov\'{a} and F. Ulus acknowledge support from the OeNB anniversary fund, project number 17793.}
\begin{abstract}
It is possible to solve unbounded convex vector optimization problems (CVOPs) in two phases: (1) computing or approximating the recession cone of the upper image and (2) solving the equivalent bounded CVOP where the ordering cone is extended based on the first phase \cite{WURKH22}. In this paper, we consider unbounded CVOPs and propose an alternative solution methodology to compute or approximate the recession cone of the upper image. In particular, we relate the dual of the recession cone with the Lagrange dual of weighted sum scalarization problems whenever the dual problem can be written explicitly. Computing this set requires solving a convex (or polyhedral) projection problem. We show that this methodology can be applied to semidefinite, quadratic and linear vector optimization problems and provide some numerical examples.
\noindent {\bf Keywords:} Convex vector optimization, linear vector optimization, unbounded vector optimization, recession cone, convex projection
\noindent {\bf MSC 2010 Classification:} 90B50, 90C29, 90C25, 90C05, 90C20, 90C22.
\end{abstract}
\section{Introduction} \label{sect:intro}
Multiobjective optimization, in which there are multiple conflicting objective functions to be minimized simultaneously, has been studied extensively in the literature, as it has many application areas ranging from engineering to the natural sciences. Vector optimization is a generalization in which the order relation on the objective space is determined by an ordering cone, here denoted $C$, which may differ from the positive orthant. There are various solution concepts and approaches for vector optimization problems in the literature; see for instance \cite{Jahn04, Luc88}.
In this paper, we focus on convex vector optimization problems (CVOPs), where the objective function is assumed to be convex with respect to the order relation induced by the cone $C$ and the feasible region is assumed to be a convex set. An important subclass of CVOPs are linear vector optimization problems (LVOPs). What is conventionally understood as solving an LVOP is generating the set of all (weakly) minimal elements of its upper image, or, equivalently, generating its Pareto frontier. For more details, methods and algorithms see for instance \cite{Ben98,Csirmaz13,HLR13,Loehne11,ShaEhr08}. In the case of a CVOP it is commonly not possible to compute the exact Pareto frontier. Instead, one aims to approximate it.
Whether a CVOP is \textit{bounded} or \textit{unbounded} determines which solution concepts and solution methods are available. In vector optimization, a bounded problem is characterized by its upper image (and its Pareto frontier) being a subset of a shifted ordering cone $C$. In the multi-objective case, this simplifies to each objective being bounded from below on the feasible set. Bounded problems comprise the less challenging class: the Pareto frontier of a bounded LVOP can be generated by finding its extreme points. For a bounded CVOP, one aims to find finitely many (weakly) minimal elements that generate both inner and outer polyhedral approximations of the Pareto frontier. There are solution algorithms such as \cite{AraUlusUmer2021, DorLohSchWei2021,EhrShaSch11,LRU14} capable of solving bounded CVOPs in this sense.
Unbounded problems present an additional challenge -- one also has to compute (or approximate) the recession directions of the upper image. One of the solution methods for unbounded LVOPs can be found in \cite{Loehne11}. In the first phase, the so-called \emph{homogeneous problem} is solved to compute the recession cone of the upper image. In the second phase, the ordering cone $C$ is replaced by the found recession cone, which transforms the original unbounded LVOP into an equivalent bounded LVOP. Recently, \cite{WURKH22} proposed a solution concept and a solution algorithm to solve unbounded CVOPs. The solution approach is similar to the linear case and consists of two phases. In the first phase, an outer approximation of the recession cone is algorithmically computed. This outer approximation is then used to transform the problem into a bounded one, so that existing algorithms can be applied in the second phase.
In this paper, we propose an alternative way of approximating the recession cone of the upper image. We consider the dual of the recession cone and use a characterization given in terms of the well-known weighted sum scalarizations \cite{Ulus18}. We observe that for some classes of CVOPs, it is possible to write the dual of the recession cone explicitly. Then, computing this set reduces to solving a bounded convex projection problem \cite{KovacovaRudloff2021}. For the special case of LVOPs, it is possible to compute the recession cone exactly by solving a bounded polyhedral projection problem \cite{LoeWei15}. Moreover, in this case, it is possible to reduce the dimension of the projection problem by one.
When the dimension of the objective space is two, the procedure simplifies further: it reduces to solving two convex (or linear, if the corresponding problem is linear) scalar optimization problems. Compared to applying the algorithm from \cite{WURKH22} or solving a two-dimensional homogeneous problem as in \cite{Loehne11}, solving two convex or linear programs is simpler and more efficient.
The structure of the paper is as follows. In Section~\ref{sect:prelim} we provide notation and preliminaries. In Section~\ref{sect:problem} we introduce the convex vector optimization problem and the relevant solution concepts. Section~\ref{sect:recCone} introduces a method for approximating the recession cone of the upper image based on its connection to the set of weights for which the weighted sum scalarization is bounded. In Section~\ref{sect:ex} we discuss particular problem classes for which this method yields a representation in the form of a convex projection problem. Section~\ref{sect:comp} provides examples.
\section{Preliminaries} \label{sect:prelim}
Let $q \in \mathbb{N}$ and $\mathbb{R}^q$ be the $q$-dimensional Euclidean space. Throughout the paper we primarily use the $\ell_2$ norm $\|y\| := \norm{y}_2 = \left( \sum_{i=1}^q \abs{y_i}^2 \right)^{\frac{1}{2}}$ on $\mathbb{R}^q$. We will shortly remark on results under the $\ell_1$ norm $\norm{y}_1 = \sum_{i=1}^q \abs{y_i}$ and the $\ell_\infty$ norm $\norm{y}_\infty = \max_{i\in \{1,\ldots,q\}} \abs {y_i}$. The (closed $\ell_2$) ball centered at point $c \in \mathbb{R}^q$ with radius $r > 0$ is denoted by $B(c,r) :=\{y \in \mathbb{R}^q \mid \norm{y-c} \leq r\}$.
The interior, closure, boundary and convex hull of a set $A \subseteq \mathbb{R}^q$ are denoted by $\Int A, \cl A, \bd A$, and $\conv A$, respectively. The \textit{(convex) conic hull} of $A$, $${\rm cone\,} A := \left\lbrace\sum_{i=1}^n \lambda_ia_i \mid n\in \mathbb{N}, \lambda_1,\ldots,\lambda_n \geq 0, a_1,\ldots,a_n \in A\right\rbrace,$$ is the set of all conic combinations of points from $A$. The \textit{recession cone} of a set $A$ is $$A_{\infty} = \left\lbrace d \in \mathbb{R}^q \mid a + \lambda d \in A \quad \forall a \in A, \lambda \geq 0 \right\rbrace.$$
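For a nonempty polyhedron given in halfspace form, $A = \{y \in \mathbb{R}^q \mid My \leq b\}$, the recession cone has the explicit description $A_\infty = \{d \in \mathbb{R}^q \mid Md \leq 0\}$ (a standard fact). A minimal sketch with hypothetical data:

```python
import numpy as np

# A = {y : M y <= b} encodes y1 >= 0, y2 >= 0, y1 + y2 >= 1; then
# A_inf = {d : M d <= 0} (valid since A is nonempty).
M = np.array([[-1.0,  0.0],
              [ 0.0, -1.0],
              [-1.0, -1.0]])
b = np.array([0.0, 0.0, -1.0])

def in_recession_cone(d):
    # small tolerance for floating-point comparisons
    return bool(np.all(M @ d <= 1e-12))

print(in_recession_cone(np.array([1.0, 0.0])))   # True: A + t*d stays in A
print(in_recession_cone(np.array([-1.0, 0.0])))  # False: leaves A eventually
```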
For two sets $A,B\subseteq \mathbb{R}^q$, their sum is understood as their \textit{Minkowski sum} $$A+B:=\{a+b \in \mathbb{R}^q \mid a \in A, b\in B\}$$ and their distance is measured via the \textit{Hausdorff distance} $$d^H(A,B) := \max \left\lbrace \sup_{a\in A} \inf_{b\in B} \norm{a-b},\sup_{b\in B} \inf_{a\in A} \norm{a-b} \right\rbrace.$$ If a different norm is considered, the Hausdorff distance can be defined analogously. We denote by $A-B$, the set $A + (-1)\cdot B = \{a-b \mid a\in A, b\in B\}$.
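For finite point sets, the suprema and infima in the Hausdorff distance become maxima and minima, so $d^H$ can be computed directly from the definition; this is the quantity used later to measure the quality of cone approximations. A minimal sketch:

```python
import numpy as np

# Hausdorff distance between two finite point sets under the l2 norm,
# directly from the definition (sup/inf become max/min for finite sets).
def hausdorff(A, B):
    # D[i, j] = ||A[i] - B[j]||_2 for all pairs
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return max(D.min(axis=1).max(), D.min(axis=0).max())

A = np.array([[0.0, 0.0], [1.0, 0.0]])
B = np.array([[0.0, 0.0], [1.0, 1.0]])
print(hausdorff(A, B))   # 1.0: the point (1,1) is at distance 1 from A
```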
A set $A \subseteq \mathbb{R}^q$ is a \textit{polyhedron} if it can be identified through finitely many vertices $v_1, \dots, v_{k_v} \in \mathbb{R}^q, k_v \in \mathbb{N}$ and directions $d_1, \dots, d_{k_d} \in \mathbb{R}^q \setminus \{0\}, k_d \in \mathbb{N}\cup\{0\}$ as \begin{equation} \label{eq:Vrep}
A = \conv \{v_1, \dots, v_{k_v}\} + {\rm cone\,} \{d_1, \dots, d_{k_d}\}. \end{equation}
A polyhedron can also be represented as a finite intersection of halfspaces.
The \textit{dual cone} of a set $A \subseteq \mathbb{R}^q$ is $A^+ := \{ w \in \mathbb{R}^q \mid \forall a \in A: w^\mathsf{T} a \geq 0\}$. A cone $C \subseteq \mathbb{R}^q$ is \textit{nontrivial} if $\{0\} \subsetneq C \subsetneq \mathbb{R}^q$. It is \textit{pointed} if it does not contain any line through the origin. A cone $C \subseteq \mathbb{R}^q$ generates an order on $\mathbb{R}^q$ given through $$x \leq_C y \iff y \in \{x\} + C$$ for $x, y \in \mathbb{R}^q$. If $C$ is a nontrivial, pointed, convex ordering cone, then $\leq_C$ is a partial order. A function $f: \mathbb{R}^n \to \mathbb{R}^q$ is \textit{$C$-convex} if for all $x, y \in \mathbb{R}^n$ and all $\lambda \in [0, 1]$ it holds $$f(\lambda x + (1-\lambda)y ) \leq_C \lambda f(x) + (1-\lambda)f(y).$$
\textit{Convex projection} is a problem of the form \begin{align*} \text{compute } Y = \left\lbrace y \in \mathbb{R}^m \mid \exists x \in \mathbb{R}^n: (x, y) \in S \right\rbrace, \end{align*} where $S \subseteq \mathbb{R}^n\times \mathbb{R}^m$ is a convex feasible set. If the feasible set $S$ is a polyhedron, then the problem is a \textit{polyhedral projection}. Under solving a projection problem we understand computing the set $Y$ (if polyhedral) or an approximation of it (otherwise) in the sense of finding a representation as in \eqref{eq:Vrep}. More details on polyhedral projections can be found in \cite{LoeWei15}, on convex projection in \cite{KovacovaRudloff2021,SZC18}.
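For polyhedral projection, one classical (if not the most efficient) approach to computing $Y$ is Fourier--Motzkin elimination: each component of $x$ is eliminated in turn by pairing the inequalities that bound it from above with those that bound it from below. The following sketch of a single elimination step, on hypothetical data, is illustrative only and makes no claim about the methods of \cite{LoeWei15}:

```python
import numpy as np

# One Fourier--Motzkin step: given S = {(x, y) : A [x; y] <= b} with a
# single scalar x in the first column, eliminate x to get an inequality
# description of the projection Y = {y : exists x, (x, y) in S}.
def eliminate_first_var(A, b):
    a0 = A[:, 0]
    pos, neg, zero = a0 > 0, a0 < 0, a0 == 0
    rows, rhs = [A[zero, 1:]], [b[zero]]     # rows without x carry over
    for i in np.where(pos)[0]:               # each upper bound on x ...
        for j in np.where(neg)[0]:           # ... meets each lower bound
            rows.append((A[i, 1:] / a0[i] - A[j, 1:] / a0[j])[None, :])
            rhs.append(np.array([b[i] / a0[i] - b[j] / a0[j]]))
    return np.vstack(rows), np.concatenate(rhs)

# Hypothetical example: S = {(x, y) : 0 <= x <= 1, x <= y <= x + 1};
# projecting out x gives Y = [0, 2] (plus redundant inequalities).
A = np.array([[-1.0, 0.0], [1.0, 0.0], [1.0, -1.0], [-1.0, 1.0]])
b = np.array([0.0, 1.0, 0.0, 1.0])
Ap, bp = eliminate_first_var(A, b)
print(np.column_stack([Ap, bp]))   # contains y <= 2 and -y <= 0
```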
\section{Problem description} \label{sect:problem} In this section we introduce a convex vector optimization problem (CVOP) and its upper image. The main object of interest in this work is the recession cone of the upper image. Its importance can be seen by the role it plays in the boundedness properties of CVOPs and the appropriate solution concepts.
A convex vector optimization problem is \begin{align*} \tag{P} \label{eq:P} \text{minimize } f(x) \quad \text{ with respect to\ } \leq_C\quad\text{ subject to } h(x) \leq 0, \end{align*} where $C \subseteq \mathbb R^q$ is a nontrivial, pointed, convex ordering cone with non-empty interior and $h:\mathbb{R}^n \to \mathbb{R}^m$ and $f: \mathbb{R}^n \to \mathbb{R}^q$ are continuous functions that are $\mathbb{R}^m_+$- and $C$-convex, respectively. We denote the feasible region by $\mathcal{X} := \{x\in \mathbb{R}^n \mid h(x) \leq 0\}$ and its image by $f(\mathcal{X}):=\{f(x) \mid x \in \mathcal{X}\}$. The {\em{upper image}} of \eqref{eq:P} is defined as
$$\mathcal{P}:=\cl \left( f(\mathcal{X})+C \right).$$
Here we are particularly interested in the recession cone of the upper image, that is, $\mathcal{P}_\infty$. We encounter it within the boundedness notions for CVOPs recalled below. \begin{definition}\cite[Definitions 4.2, 4.5]{Ulus18} Problem \eqref{eq:P} is called \emph{bounded} if there is a point $\hat{p} \in \mathbb{R}^q$ such that $\mathcal{P} \subseteq \{\hat{p}\}+C$; and \emph{unbounded} otherwise. Problem \eqref{eq:P} is called \emph{self-bounded} if $\mathcal{P} \neq \mathbb{R}^q$ and there is a point $\hat{p} \in \mathbb{R}^q$ such that $\mathcal{P} \subseteq \{\hat{p}\}+\mathcal{P}_\infty$. \end{definition} Note that for a bounded problem it holds $\mathcal{P}_\infty = \cl C$. A bounded problem is, in particular, always self-bounded. However, an unbounded problem can be self-bounded or not. We refer the reader to~\cite{Ulus18} for examples and more in-depth discussion.
An appropriate solution concept for a CVOP depends on whether the problem is bounded or not. Solution concepts for bounded CVOPs are proposed in~\cite{AraUlusUmer2021, DorLohSchWei2021,LRU14} and for self-bounded problems in~\cite{Ulus18}. Such solutions generate both an inner and an outer approximation of the upper image $\mathcal{P}$ of the problem. The self-bounded case, however, contains challenges: In general, it is difficult to check if a CVOP is self-bounded. Moreover, the solution concept of~\cite{Ulus18} includes the recession cone $\mathcal{P}_\infty$. However, computing $\mathcal{P}_\infty$ exactly may not be possible if it is not polyhedral.
Recently, a generalized solution concept was proposed in~\cite{WURKH22}, which includes an approximation of the recession cone $\mathcal{P}_\infty$ of the upper image. This solution concept is tailored for unbounded problems, but it is applicable to a CVOP regardless of whether it is (self-)bounded or not. Similarly to the above, it also yields polyhedral approximations of the upper image. We provide this solution concept explicitly below, as it illustrates the importance of approximating the recession cone $\mathcal{P}_\infty$.
First, we define approximations of a convex cone. The interested reader can compare this to the definition in~\cite{Doerfler22} for convex sets. \begin{definition}{\label{defn:approxCone}} Let $K\subseteq \mathbb{R}^q$ be a convex cone. A finite set $\mathcal{Y} \subseteq \mathbb{R}^q$ is called a \emph{finite $\delta$--outer approximation} of $K$ if $K \subseteq {\rm cone\,} \mathcal{Y}$ and $d^H \left( K \cap B(0,1), {\rm cone\,} \mathcal{Y} \cap B(0,1) \right) \leq \delta$. Similarly, a finite set $\mathcal{Z} \subseteq \mathbb{R}^q$ is called a \emph{finite $\delta$--inner approximation} of $K$ if $K \supseteq {\rm cone\,} \mathcal{Z}$ and $d^H \left( K \cap B(0,1), {\rm cone\,} \mathcal{Z} \cap B(0,1) \right) \leq \delta$. \end{definition}
Definition~\ref{defn:approxCone} differs slightly from its counterpart in~\cite{WURKH22}: Here the $\ell_2$-norm (and the corresponding Hausdorff distance) is used, while~\cite{WURKH22} applied the $\ell_1$-norm to measure distance. The $\ell_1$-norm was chosen in~\cite{WURKH22} primarily for algorithmic reasons. Here we opt for the $\ell_2$-norm for pragmatic reasons: Since we will work with dual cones, the $\ell_2$-norm has the advantage of being self-dual. Alternatively, we could work with the pair of $\ell_1$- and $\ell_\infty$-norms, but this would create a clumsy terminology. When the choice of the norm(s) impacts the results of the paper, we provide corresponding remarks.
Now we can define a solution of a CVOP, where $c \in \Int C$ with $\Vert c \Vert = 1$ is a fixed element. First, recall that a point $\bar{x} \in \mathcal{X}$ is called a \emph{minimizer} for \eqref{eq:P} if $f(\bar{x})$ is a $C$-minimal element of $f(\mathcal{X})$, that is, if $(\{f(\bar{x})\}-C\setminus \{0\}) \cap f(\mathcal{X}) = \emptyset$. Similarly, $\bar{x} \in \mathcal{X}$ is called a \emph{weak minimizer} for \eqref{eq:P} if $f(\bar{x})$ is a weakly $C$-minimal element of $f(\mathcal{X})$, that is, if $(\{f(\bar{x})\}-\Int C) \cap f(\mathcal{X}) = \emptyset$.
\begin{definition}{\label{defn:solutionconcept}}
A pair $(\bar{\mathcal{X}},\mathcal{Y})$ is a \emph{(weak) $(\varepsilon,\delta)$--solution} of~\eqref{eq:P} if $\bar{\mathcal{X}} \neq \emptyset$ is a set of (weak) minimizers, $\mathcal{Y}$ is a $\delta$--outer approximation of $\mathcal{P}_{\infty}$ and it holds
\begin{align*}
\mathcal{P} \subseteq \conv \Gamma (\bar{\mathcal{X}})+{\rm cone\,} \mathcal{Y}-\varepsilon\{c\}.
\end{align*}
A (weak) $(\varepsilon,\delta)$--solution $(\bar{\mathcal{X}},\mathcal{Y})$ of~\eqref{eq:P} is a \emph{finite (weak) $(\varepsilon,\delta)$--solution} of~\eqref{eq:P} if the sets $\bar{\mathcal{X}},\mathcal{Y}$ consist of finitely many elements. \end{definition}
An approach to compute a solution of a CVOP in the sense of \Cref{defn:solutionconcept} is provided in \cite{WURKH22}. It was shown that once an outer approximation of $\mathcal{P}_\infty$ is available, the algorithms for bounded CVOPs can be used to find a solution in the sense of \Cref{defn:solutionconcept}. This is done by transforming the (unbounded) CVOP into a bounded one by replacing the ordering cone by the outer approximation of $\mathcal{P}_\infty$. \cite{WURKH22} also contains an algorithm for computing a finite $\delta$-outer approximation of $\mathcal{P}_\infty$.
In this paper, we provide an alternative approach to compute a polyhedral approximation of $\mathcal{P}_\infty$. We consider some special classes of CVOPs for which we can compute a finite $\delta$-outer approximation $\mathcal{Y}$ of $\mathcal{P}_\infty$ by solving a particular convex projection problem. For example, we will see that if we consider linear vector optimization problems, then we can compute the exact $\mathcal{P}_\infty$ by solving a polyhedral projection problem.
\section{Approximating $\mathcal{P}_\infty$ via $\mathcal{P}_\infty^+$} \label{sect:recCone}
Let us propose an approach to compute an approximation of $\mathcal{P}_\infty$ through approximating its dual $\mathcal{P}_\infty^+$. It is based on the known close connection between the dual of the recession cone $\mathcal{P}_\infty^+$ and the set of weights for which the weighted sum scalarization of the CVOP is a bounded problem. Boundedness of a scalar (weighted sum scalarization) problem can be verified through feasibility of its dual problem, assuming strong duality. Expressing the cone $\mathcal{P}_\infty^+$ through a set of weights for which the dual problem is feasible can be interpreted through the lens of a projection problem. This interpretation should become clearer for the particular special cases considered in Section~\ref{sect:ex}. Solving this projection problem provides an inner approximation of $\mathcal{P}_\infty^+$. We show that an inner approximation of $\mathcal{P}_\infty^+$ yields an outer approximation of $\mathcal{P}_\infty$ with an appropriate tolerance.
Let us start by recalling the weighted sum scalarization of \eqref{eq:P}, which is given by \begin{align*} \tag{P$_w$} \label{eq:Pw} \text{minimize } w^\mathsf{T} f(x) \quad\text{ subject to } \quad h(x) \leq 0 \end{align*} for $w \in \mathbb{R}^q\setminus\{0\}$. It is well known that if $w \in C^+\setminus \{0\},$ then an optimal solution of \eqref{eq:Pw} is a weak minimizer of \eqref{eq:P}, see~\cite[Theorem 5.28]{Jahn04}. On the other hand, for a weak minimizer $\bar{x}\in \mathbb{R}^n$, there exists $w\in C^+$ such that $\bar{x}$ is an optimal solution to \eqref{eq:Pw}, see~\cite[Theorem 5.13]{Jahn04}. This shows us that for CVOPs one is interested in solving \eqref{eq:Pw} for $w \in C^+$. However, the weighted sum scalarization problem may be unbounded for some $w \in C^+$ if \eqref{eq:P} is not bounded. The set of weights for which the weighted sum scalarization is bounded, denoted by $$W:= \{w \in C^+ \mid \inf_{x\in \mathcal{X}} w^\mathsf{T} f(x) \in \mathbb{R} \},$$ will play an important role. The following proposition gives a relationship between the dual cone of $\mathcal{P}_\infty$ and $W$.
\begin{proposition} \cite[Proposition 4.12 and Theorem 4.14]{Ulus18} \label{prop:weightset}
It holds true that $\mathcal{P}_\infty^+ = \cl W$. If \eqref{eq:P} is self-bounded, then $\mathcal{P}_\infty^+ = W$. Furthermore, if $\{0\} \neq \mathcal{P}_\infty^+ = W$, then the problem is self-bounded. \end{proposition}
Recall that $f$ is a $C$-convex function. Then, for every $w\in C^+$, the function $w^\mathsf{T} f: \mathbb{R}^n \to \mathbb{R}$ is convex and hence \eqref{eq:Pw} is a convex optimization problem. The Lagrangian $\mathcal{L}: \mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}$ for \eqref{eq:Pw} is given by \begin{equation*} \mathcal{L}(x,\nu):= w^\mathsf{T} f(x) + \nu^\mathsf{T} h (x) \end{equation*} and the dual problem is \begin{align*} \tag{D$_w$} \label{eq:Dw} \text{maximize } g(\nu) \quad\text{ subject to } \nu \in \mathbb{R}^m_+, \end{align*}
where the dual objective function $g:\mathbb{R}^m\to \mathbb{R} \cup\{\pm \infty\}$ is defined as $g(\nu):=\inf_{x\in\mathcal{X}}\mathcal{L}(x,\nu)$. We say that the dual problem is feasible if there exists $\nu \in \mathbb{R}^m_+$ such that $g(\nu) > -\infty$. Weak duality holds between the primal and dual problems \eqref{eq:Pw} and \eqref{eq:Dw}, that is,
$$p^w := \inf_{x\in \mathcal{X}} w^\mathsf{T} f(x) \geq \sup_{\nu\in\mathbb{R}^m_+} g(\nu) =:d^w.$$ Moreover, we say that strong duality holds if the values of the primal and dual problems coincide, that is, $p^w = d^w$. From now on, we assume the following.
\begin{assumption} \label{assm:const_qual}
The problem \eqref{eq:P} is feasible and it satisfies a constraint qualification such that the strong duality holds for the pair of \eqref{eq:Pw} and \eqref{eq:Dw} for any $w \in C^+$.
\end{assumption}
This assumption is satisfied, for example, if the problem has only affine constraints, or if the (generalized) Slater's condition holds, that is, there exists $\bar{x} \in {\rm ri\,} \mathcal{X} $ such that $h(\bar{x}) < 0$. Strong duality gives us the following result.
\begin{theorem} \label{thm:1} Suppose \Cref{assm:const_qual} holds. It holds true that
\begin{equation*}
W = \{w \in C^+ \mid \text{\eqref{eq:Dw} is feasible}\}.
\end{equation*} \end{theorem}
\begin{proof} Since \eqref{eq:P} is feasible, \eqref{eq:Pw} for any $w\in C^+$ is also a feasible problem. Then, $p^w < \infty$ holds and the weak duality implies that the dual problem \eqref{eq:Dw} is not unbounded. On the other hand, the strong duality implies that \eqref{eq:Pw} is bounded if and only if \eqref{eq:Dw} is feasible. \end{proof}
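The boundedness--feasibility equivalence can be illustrated on a hypothetical one-variable instance (not from the paper): minimize $w x^2$ subject to $x + 1 \leq 0$ for a scalar weight $w > 0$, for which both $p^w$ and $d^w$ are available in closed form.

```python
# Hypothetical 1-D instance (not from the paper): minimize w*x^2 subject
# to x + 1 <= 0, for a scalar weight w > 0.  Slater's condition holds
# (x = -2 is strictly feasible), so strong duality applies.

def primal_value(w):
    # minimizer of w*x^2 over x <= -1 is x = -1, so p^w = w
    return w * (-1.0) ** 2

def dual_value(w):
    # g(nu) = inf_x (w*x^2 + nu*(x + 1)) = nu - nu^2 / (4*w);
    # the first-order condition gives the maximizer nu* = 2*w
    nu = 2.0 * w
    return nu - nu ** 2 / (4.0 * w)

for w in (0.5, 1.0, 3.0):
    p, d = primal_value(w), dual_value(w)
    assert d <= p + 1e-12          # weak duality: d^w <= p^w
    assert abs(p - d) < 1e-12      # strong duality: d^w = p^w
```

Here the dual is feasible for every $w>0$ (take $\nu = 2w$), matching the fact that the scalarized problem is bounded for every such weight.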
For some classes of convex optimization problems, it is possible to write the constraints of the dual problem \eqref{eq:Dw} explicitly. In the following section, we will consider these classes, for which we will express $\mathcal{P}_\infty^+$ explicitly. This will provide a way to compute $\mathcal{P}_\infty^+$ or an approximation of it.
Recall that the initial aim was to compute $\mathcal{P}_\infty$ or its outer approximation. If $\mathcal{P}_\infty^+$ is determined by finitely many generators, it is easy to compute (the finitely many generators of) $\mathcal{P}_\infty$. What if its (inner) approximation is available instead? Will a dual cone of an approximation of $\mathcal{P}_\infty^+$ be an approximation of $\mathcal{P}_\infty$? The following proposition provides an answer.
\begin{proposition}\label{prop:aprox_error}
Let $\mathcal{Z} \subseteq \mathbb{R}^q$ be a finite $\delta$-inner approximation of $\mathcal{P}_\infty^+$ and $\mathcal{Y}$ be a finite set of generating vectors of $({\rm cone\,} \mathcal{Z})^+$, that is, $({\rm cone\,} \mathcal{Z})^+ = {\rm cone\,} \mathcal{Y}$. Then, $\mathcal{Y}$ is a finite $\delta$-outer approximation of $\mathcal{P}_\infty$. \end{proposition}
\begin{proof} Since ${\rm cone\,} \mathcal{Z} \subseteq \mathcal{P}_\infty^+$, we have ${\rm cone\,} \mathcal{Y} = ({\rm cone\,} \mathcal{Z})^+ \supseteq \mathcal{P}_\infty$. Let $y \in {\rm cone\,} \mathcal{Y} \cap B(0,1)$ and assume for contradiction that $(\{y\} + B(0,\delta)) \cap (\mathcal{P}_\infty \cap B(0,1)) = \emptyset$.
First, we show that $(\{y\} + B(0,\delta)) \cap (\mathcal{P}_\infty \cap B(0,1)) = \emptyset$ implies $(\{y\} + B(0,\delta)) \cap \mathcal{P}_\infty = \emptyset$. Assume, also by contradiction, that this is not the case; then there exists $b \in B(0,\delta)$ such that $y + b \in \mathcal{P}_\infty \setminus B(0,1)$. Since $\mathcal{P}_\infty$ is a cone, it holds $\lambda (y + b) \in \mathcal{P}_\infty$ for all $\lambda \geq 0$. Consider the convex quadratic optimization problem \begin{equation}\label{eq:quad_pr}
\min_{\lambda \geq 0} \Vert \lambda (y + b) - y \Vert^2 = \min_{\lambda \geq 0} \{\lambda^2 \Vert y + b \Vert^2 - 2\lambda y^\mathsf{T} (y+b) + \Vert y \Vert^2\}.
\end{equation} The first-order condition yields the minimizer \begin{align*} \lambda^* = \frac{y^\mathsf{T} (y+b)}{\Vert y + b \Vert^2} \leq \frac{\Vert y \Vert \Vert y + b \Vert}{\Vert y + b \Vert^2} \leq \frac{1}{\Vert y + b \Vert}. \end{align*} Therefore, it also holds $\lambda^* (y+b) \in B(0,1)$. Finally, since $\lambda^*$ is an optimal solution of \eqref{eq:quad_pr}, we also obtain $\Vert \lambda^* (y + b) - y \Vert \leq \Vert (y + b) - y \Vert = \Vert b \Vert \leq \delta$, which shows that $\lambda^* (y+b) \in \{y\} + B(0, \delta)$. This provides a contradiction $\lambda^* (y+b) \in (\{y\} + B(0,\delta)) \cap (\mathcal{P}_\infty \cap B(0,1))$, which proves the implication.
Second, we use $(\{y\} + B(0,\delta)) \cap \mathcal{P}_\infty = \emptyset$ to show that the initial assumption cannot hold. By separation arguments, there exists $w \in \mathbb{R}^q \setminus \{0\}$ such that $w^\mathsf{T}(y - b)< w^\mathsf{T} p$ for all $b \in B(0,\delta), p\in \mathcal{P}_\infty$. In particular, $w \in \mathcal{P}_\infty^+$ and $w^\mathsf{T}(y - b)< 0$ for all $b \in B(0,\delta)$. Without loss of generality we may assume $\norm{w} = 1$. The choice of $\bar{b} = -\delta w$ shows that it holds $w^\mathsf{T} y < w^\mathsf{T} \bar{b} = -\delta$.
On the other hand, since $w \in \mathcal{P}_\infty^+ \cap B(0,1)$, there exists $z \in {\rm cone\,} \mathcal{Z} \cap B(0,1)$ such that $\norm{w-z} \leq \delta.$ Since $z \in {\rm cone\,} \mathcal{Z}, y\in {\rm cone\,} \mathcal{Y}$, we have $y^\mathsf{T} z \geq 0. $ Then, using the Cauchy-Schwarz inequality, we obtain \begin{align*} 0 \leq y^\mathsf{T} z = y^\mathsf{T}(z-w) + y^\mathsf{T} w \leq \norm{y} \norm{z-w} + y^\mathsf{T} w < 0, \end{align*} which is a contradiction. \end{proof}
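In the plane, where closed convex cones are angle sectors, the statement of \Cref{prop:aprox_error} can be checked numerically. The sketch below uses hypothetical sector data (not from the paper), representing each cone by its boundary angles; the dual of the sector $[a,b]$ with $b-a<\pi$ is the sector $[b-\pi/2,\,a+\pi/2]$.

```python
import math

def dual_angles(a, b):
    # dual cone of the 2-D angle sector [a, b] (assuming b - a < pi)
    return (b - math.pi / 2, a + math.pi / 2)

def in_sector(a, b, t, tol=1e-12):
    return a - tol <= t <= b + tol

# Hypothetical dual cone P_inf^+: the sector [0.3, 1.2]; an inner
# approximation cone(Z): the slightly smaller sector [0.35, 1.15].
a_plus, b_plus = 0.3, 1.2
a_in, b_in = 0.35, 1.15

a_P, b_P = dual_angles(a_plus, b_plus)  # recession cone P_inf
a_Y, b_Y = dual_angles(a_in, b_in)      # cone(Y) = (cone Z)^+

# cone(Y) is an outer approximation: it contains P_inf ...
assert in_sector(a_Y, b_Y, a_P) and in_sector(a_Y, b_Y, b_P)
# ... and exceeds it by exactly the angular defect of cone(Z) (here 0.05)
assert abs((a_P - a_Y) - 0.05) < 1e-12 and abs((b_Y - b_P) - 0.05) < 1e-12
```

Shrinking the inner approximation of the dual cone enlarges its dual, so the outer approximation of $\mathcal{P}_\infty$ degrades gracefully with the inner tolerance, as the proposition quantifies.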
\begin{remark}\label{rem_1_infty} Let us now address the issue of the norm used. The above proposition holds for the $\ell_2$-norm. What if we instead used the pair of $\ell_1$- and $\ell_\infty$-norms and adapted Definition~\ref{defn:approxCone} of approximation of a cone correspondingly? Then the tolerance in the above result would need to be adjusted. Specifically, we would obtain the following: If $\mathcal{Z} \subseteq \mathbb{R}^q$ is a $\delta$-inner approximation of $\mathcal{P}_\infty^+$ in $\ell_\infty$ and $\mathcal{Y}$ is a finite set of generating vectors of $({\rm cone\,} \mathcal{Z})^+$, then $\mathcal{Y}$ is a $(2\delta)$--outer approximation of $\mathcal{P}_\infty$ in $\ell_1$. \end{remark}
\Cref{prop:aprox_error} suggests that by computing a $\delta$-inner approximation of $\mathcal{P}^+_\infty$, we can generate a $\delta$-outer approximation of $\mathcal{P}_\infty$, which can be used to compute a finite $(\epsilon,\delta)$-solution to problem \eqref{eq:P}. Note that it is sufficient to consider the set $W$ since this set corresponds, up to the closure, to the cone $\mathcal{P}_\infty^+$ of interest by Proposition~\ref{prop:weightset}. What about the closure? The set $W$ can be computed exactly if it is polyhedral (hence closed). Otherwise it needs to be approximated, in which case an approximation of $W$ is also an approximation of its closure.
From a practical point of view, instead of computing or approximating the cone $W$, we will compute or approximate the bounded convex set \begin{equation} \label{eq:Wc}
W_c := W \cap \{ w \in \mathbb{R}^q \mid w^\mathsf{T} c \leq 1\} \end{equation} for a fixed $c \in \Int C$ with $\norm{c} = 1$. The next proposition shows that the set $W$ can be approximated through an approximation of $W_c$.
\begin{proposition} \label{prop:Wc}
Let $W_c$ be as defined in \eqref{eq:Wc} for some $c \in \Int C$ with $\norm{c} = 1$. Assume that $\bar{W}$ is a finite $\delta$-inner approximation of $W_c$ in the sense that it holds
\begin{equation*} \label{eq:projection}
\bar{W} \subseteq W_c\quad \text{and} \quad d^H (W_c, \conv \bar{W})\leq \delta.
\end{equation*}
Then, $\bar{W}$ is also a finite $\delta$-inner approximation of the cone $W$. \end{proposition} \begin{proof} Consider an element $w \in W \cap B(0,1)$. Note that the Cauchy-Schwarz inequality implies $W \cap B(0,1) \subseteq W_c$, therefore, there exists $\bar{w} \in \conv \bar{W}$ with $\norm{w - \bar{w}} \leq \delta$. Our proof would be finished if $\norm{\bar{w}} \leq 1$. We proceed with the case $\norm{\bar{w}} > 1$ where we show that the orthogonal projection $\frac{w^\mathsf{T} \bar{w}}{\bar{w}^\mathsf{T} \bar{w}} \bar{w}$ provides the desired bound: Firstly, since $w^\mathsf{T} \bar{w} = \bar{w}^\mathsf{T} \bar{w} + (w - \bar{w})^\mathsf{T} \bar{w} \geq \norm{\bar{w}}(\norm{\bar{w}} - \delta) > 1 - \delta > 0$ (assuming $\delta < 1$), we know that $\frac{w^\mathsf{T} \bar{w}}{\bar{w}^\mathsf{T} \bar{w}} \bar{w} \in {\rm cone\,} \bar{W}$. Secondly, $\frac{w^\mathsf{T} \bar{w}}{\bar{w}^\mathsf{T} \bar{w}} \bar{w} \in B(0,1)$ holds since $\norm{ \frac{w^\mathsf{T} \bar{w}}{\bar{w}^\mathsf{T} \bar{w}} \bar{w} } = \frac{\vert w^\mathsf{T} \bar{w}\vert }{\norm{\bar{w}}} \leq \frac{\norm{w} \norm{\bar{w}} }{\norm{\bar{w}}} \leq 1$. And thirdly, for $\frac{w^\mathsf{T} \bar{w}}{\bar{w}^\mathsf{T} \bar{w}} = \argmin\limits_{\alpha \in \mathbb{R}} \norm{w - \alpha \bar{w}}$ it holds $\norm{ w - \frac{w^\mathsf{T} \bar{w}}{\bar{w}^\mathsf{T} \bar{w}} \bar{w}} \leq \norm{w - \bar{w}} \leq \delta$, which proves the claim. \end{proof}
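The projection step in the proof can be spot-checked numerically; the vectors below are hypothetical, chosen so that $\norm{w}\leq 1 < \norm{\bar w}$ as in the proof.

```python
import math

def norm(v):
    return math.sqrt(sum(x * x for x in v))

def proj_onto_ray(w, wb):
    """Orthogonal projection of w onto the ray {alpha * wb : alpha >= 0}."""
    alpha = sum(a * b for a, b in zip(w, wb)) / sum(b * b for b in wb)
    return [alpha * b for b in wb]

# Hypothetical data mimicking the proof: ||w|| <= 1 < ||w_bar||.
w = [0.6, 0.7]
wb = [0.7, 0.75]
assert norm(w) <= 1 < norm(wb)

p = proj_onto_ray(w, wb)
assert norm(p) <= 1.0 + 1e-12                      # projection stays in B(0,1)
dist_p = norm([a - b for a, b in zip(w, p)])
dist_wb = norm([a - b for a, b in zip(w, wb)])
assert dist_p <= dist_wb + 1e-12                   # no farther from w than w_bar
```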
\begin{remark} In light of \Cref{rem_1_infty} we also address the $\ell_1$- and $\ell_\infty$-norms in the context of \Cref{prop:Wc}: Use $c \in \Int C$ with $\norm{c}_1 = 1$ to define the set $W_c$ and let $\bar{W}$ be a finite $\delta$-inner approximation of $W_c$ in $\ell_\infty$. Then $\bar{W}$ is a finite $(2\delta)$-inner approximation of the cone $W$ in $\ell_\infty$. \end{remark}
A cone is determined by its base, such as $W \cap \{ w \in \mathbb{R}^q \mid w^\mathsf{T} c = 1\}$. However, a full-dimensional set $W_c$ is preferable for computational purposes. Alternatively, one could aim to replace the base by a $(q-1)$-dimensional set generating it. Assume without loss of generality that $c_q \neq 0$. Let $c_{-q}\in \mathbb{R}^{q-1}$ denote the first $q-1$ components of $c$ and $w :\mathbb{R}^{q-1} \to \mathbb{R}^q$ be defined as $$w (\lambda):= (\lambda^\mathsf{T}, \frac{1-\lambda^\mathsf{T} c_{-q}}{c_q})^\mathsf{T}$$ so that $c^\mathsf{T} w(\lambda) = 1$ holds for all $\lambda \in \mathbb{R}^{q-1}$. Then for the bounded set \begin{equation}\label{eq:weightset}
\Lambda:= \{\lambda \in \mathbb{R}^{q-1} \mid w(\lambda)\in C^+, \ \text{\eqref{eq:Dw} is feasible for } w = w(\lambda)\} \end{equation} we have $ W = {\rm cone\,}\{w(\lambda)\in \mathbb{R}^q \mid \lambda \in \Lambda\}$ by construction.
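The parametrization $w(\lambda)$ is straightforward to implement; the sketch below (with a hypothetical vector $c$, not from the paper) only checks the defining identity $c^\mathsf{T} w(\lambda) = 1$.

```python
# Map lambda in R^{q-1} to w(lambda) in R^q so that c^T w(lambda) = 1,
# assuming c_q != 0.  The vector c below is a hypothetical example.

def w_of(lam, c):
    assert c[-1] != 0
    last = (1.0 - sum(l * ci for l, ci in zip(lam, c[:-1]))) / c[-1]
    return list(lam) + [last]

c = [0.5, 0.25, 0.25]
for lam in ([0.0, 0.0], [1.0, -2.0], [0.3, 0.7]):
    w = w_of(lam, c)
    assert abs(sum(ci * wi for ci, wi in zip(c, w)) - 1.0) < 1e-12
```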
In particular, for the $q=2$ case, the set $\Lambda$ is a bounded interval and it suffices to solve two scalar problems to compute the bounds \begin{equation*}
\inf\{\lambda \in \mathbb{R} \mid w(\lambda)\in C^+, \ \text{\eqref{eq:Dw} is feasible for } w = w(\lambda)\} \end{equation*} and \begin{equation*}
\sup\{\lambda \in \mathbb{R} \mid w(\lambda)\in C^+, \ \text{\eqref{eq:Dw} is feasible for } w = w(\lambda)\}. \end{equation*}
The drawback of considering the $(q-1)$-dimensional set $\Lambda$ arises if the set cannot be computed exactly, but has to be approximated: the approximation error for the set $\Lambda$ is not preserved for the cone $W$, and the bound on the tolerance depends on the particular choice of the vector $c$. Nevertheless, we consider the approach through the set $\Lambda$ useful in at least two cases: (1) If the set $\Lambda$ can be computed exactly. (2) In the $q=2$ case when the interval $\Lambda$ is approximated through two scalar problems, since solvers for scalar problems can in practice usually be run at significantly lower tolerances (higher precision) than algorithms for multi-objective or projection problems.
\section{Computations for special cases} \label{sect:ex}
In this section we discuss three cases where Assumption~\ref{assm:const_qual} holds and the set \begin{align*}
W = \{w \in C^+ \mid \text{\eqref{eq:Dw} is feasible}\} \end{align*} can be expressed explicitly through the constraints of the dual problem. We start with a relatively wide class of semidefinite problems. \cite{BV96} surveys applications of single-objective semidefinite programming. In particular, the arguments of~\cite{BV96} can be straightforwardly extended to show that linear vector optimization and quadratic convex vector optimization problems with polyhedral ordering cones are special cases of semidefinite vector problems. Nevertheless, we also address linear and quadratic problems individually and provide further observations.
For the problems we consider below, the set of weights $W$, and consequently also the sets $W_c$ of~\eqref{eq:Wc} and $\Lambda$ of~\eqref{eq:weightset}, have a form of a convex (or polyhedral) projection. Methods for solving convex (or polyhedral) projections can, therefore, be used to approximate (or compute) the set $W_c$ (or the set $\Lambda$). In the light of \Cref{prop:Wc}, we obtain an approximation of~$\mathcal{P}_\infty^+$. Finally, a dual cone of this approximation provides the desired approximation of the recession cone $\mathcal{P}_\infty$ of the upper images as \Cref{prop:aprox_error} shows.
An outer approximation of the recession cone is needed to solve a CVOP in the sense of \Cref{defn:solutionconcept}. The method proposed in this paper can be used to replace the first phase of the algorithm proposed in~\cite{WURKH22}. Keep in mind that if the problem is self-bounded, then the recession cone itself can also be used to solve the problem. If this is not the case, however, an outer approximation of it is needed even if it is possible to compute $\mathcal{P}_\infty$ exactly. In the light of \Cref{prop:weightset}, unless the set $W$ is known to be closed, we need to look for its inner approximation.
\subsection{Semidefinite problems}\label{subsect:semid} The first class of problems we consider are the semidefinite problems. In the following, $S^k$ denotes the set of symmetric $k \times k$ matrices and $S^k_+$ denotes the set of symmetric, positive semidefinite $k \times k$ matrices. Consider a semidefinite vector program in inequality form, \begin{align*}\tag{SDVP} \label{SDVP}
\text{minimize } & \quad P^\mathsf{T} x \quad \text{ with respect to\ } \leq_C \\ \text{ subject to } & \quad x_1 F_1 + \ldots + x_n F_n + G \preceq 0, \end{align*} for some $P \in \mathbb{R}^{n\times q}, F_1,\ldots,F_n, G \in S^k, k\geq 2$. The weighted sum scalarization for a weight $w\in C^+$ is the scalar semidefinite program \begin{align*} \text{minimize } & \quad w^\mathsf{T} P^\mathsf{T} x \\ \text{ subject to } & \quad x_1 F_1 + \ldots + x_n F_n + G \preceq 0 \end{align*} and its Lagrange dual is \begin{align*} \text{maximize } & \quad {\rm tr} (GZ) \\ \text{ subject to } & \quad {\rm tr} (F_i Z) + e_i^T Pw = 0, \ i\in\{1,\ldots,n\}, \\ & \quad Z \succeq 0. \end{align*} We refer the reader interested in the derivation of the dual problem to \cite[Example 5.11]{Boyd}.
Assumption~\ref{assm:const_qual} on constraint qualification is satisfied if there exists $x\in \mathbb{R}^n$ such that $x_1 F_1 + \ldots + x_n F_n + G \prec 0$; see \cite[Equation 5.27]{Boyd}. Then strong duality yields the set $W$ in the convex projection form
$$W = \{w \in C^+ \mid \exists Z \succeq 0 : \ {\rm tr} (F_i Z) + e_i^T Pw = 0, \; i = 1,\ldots,n \}.$$
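For intuition, the feasibility condition can be worked out by hand on a toy one-variable instance (hypothetical, not from the paper): with $n=1$, $F_1 = I$, $G = -I \in S^2$ and objective coefficient $c = e_1^\mathsf{T} P w$, the constraint reads $x \leq 1$, and the dual constraint ${\rm tr}(Z) + c = 0$, $Z \succeq 0$ is satisfiable iff $c \leq 0$, which is exactly the set of weights for which the primal is bounded.

```python
# Toy 1-variable SDP (hypothetical, not from the paper):
# minimize c*x subject to x*I - I <= 0, i.e. x <= 1.

def primal_bounded(c):
    # inf over x <= 1 of c*x is finite iff c <= 0
    return c <= 0

def dual_feasible(c):
    # need a 2x2 PSD matrix Z with tr(Z) = -c; take Z = (-c/2)*I when c <= 0
    return -c >= 0

for c in (-2.0, -0.5, 0.0, 0.5, 2.0):
    assert primal_bounded(c) == dual_feasible(c)
```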
\subsection{Linear problems} \label{subsect:linear} Second, we look at linear problems. Given matrices $P\in \mathbb{R}^{n\times q}, A \in \mathbb{R}^{m\times n}$, a vector $b\in \mathbb{R}^m$ and a polyhedral ordering cone $C$, consider the linear vector optimization problem \begin{align*} \tag{LVP} \label{eq:Plin} \text{minimize } P^\mathsf{T} x \quad \text{ with respect to\ } \leq_C\quad\text{ subject to } A x \leq b. \end{align*} For a weight vector $w \in C^+$, the Lagrange dual \eqref{eq:Dw} of the weighted sum scalarization problem \eqref{eq:Pw} is given by \begin{align*}
\text{maximize } -b^\mathsf{T} y \quad \text{ subject to } \quad A^\mathsf{T} y = -P w, \quad y\geq 0. \end{align*} Applying \Cref{prop:weightset} and \Cref{thm:1}, we obtain \begin{align} \label{eq_lin1} \mathcal{P}_\infty^+ = W = \{w \in C^+ \mid \exists y \geq 0 \ : -P w = A^\mathsf{T} y\}. \end{align} The problem of computing the set~\eqref{eq_lin1} is a polyhedral projection problem. Closure is not needed on the right-hand side of~\eqref{eq_lin1} since the set is a polyhedron. The polyhedral dual cone $\mathcal{P}_\infty^+$ can be computed exactly, rather than approximated, which is appropriate since the linear problem is self-bounded per \Cref{prop:weightset}. Polyhedral projection is an alternative to computing the recession cone via the homogeneous problem, see \cite[Section 4.6]{Loehne11}.
As we suggested in the previous section, instead of computing the cone~\eqref{eq_lin1} in $\mathbb{R}^q$, we can compute the $(q-1)$-dimensional set
\begin{align*}
\Lambda = \{\lambda \in \mathbb{R}^{q-1} \mid w(\lambda) \in C^+, \exists y \geq 0 \ : -P w(\lambda) = A^\mathsf{T} y\},
\end{align*} which also corresponds to solving a polyhedral projection problem. Moreover, we know that $\Lambda$ is a closed interval if $q=2$. In this case, instead of a 2-dimensional unbounded projection (or a bi-objective homogeneous problem), it suffices to solve two scalar linear problems \begin{align*}
\text{minimize/maximize } & \quad \lambda \\
\text{ subject to } & \quad w(\lambda) \in C^+, \\
& \quad -P w(\lambda)= A^\mathsf{T} y,\\
& \quad \lambda \in \mathbb{R}, y \geq 0. \end{align*}
\subsection{Convex quadratic problems}\label{subsect:quad} Quadratic problems are the third class that we consider. We will see that if the problem contains at least one quadratic constraint, then the problem is bounded. Moreover, below we identify several conditions under which it holds $\mathcal{P}_\infty^+ = C^+$.
We consider the following convex quadratic vector optimization problem \begin{align*}\tag{QVP} \label{QVP} \text{minimize } & \quad f(x) \quad \text{ with respect to\ } \leq_C \\ \text{ subject to } & \quad x^\mathsf{T} Q_j x + c_j^\mathsf{T} x + r_j \leq 0, \quad j\in \{1,\ldots,p \}, \\ & \quad A x \leq b, \end{align*} where $Q_j \in S^{n}_+ \setminus \{0\}, c_j \in \mathbb{R}^n, r_j \in \mathbb{R}$ for $j\in\{1,\ldots,p\}$, $A \in \mathbb{R}^{m\times n}, b\in \mathbb{R}^m$, and the $C$-convex objective function $f = (f_1, \ldots,f_q)^\mathsf{T}:\mathbb{R}^n \to \mathbb{R}^q$ is given by $f_i(x) = x^\mathsf{T} P_i x + d_i^\mathsf{T} x$ with $P_i\in S^{n}, d_i \in \mathbb{R}^n$ for $i = 1,\ldots,q.$ Note that $f$ is $C$-convex if and only if $w^\mathsf{T} f$ is convex for all $w\in C^+$, or equivalently $\sum_{i=1}^q w_i P_i \succeq 0$ for all $w\in C^+$. In particular, for $C \supseteq \mathbb{R}^q_+$, convexity of each objective $f_1, \dots, f_q$ implies $C$-convexity of $f$. For $C=\mathbb{R}^q_+$, the reverse also holds.
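The $C$-convexity criterion is easy to test numerically. A minimal sketch with hypothetical $2\times 2$ data (not from the paper), using the fact that a symmetric $2\times 2$ matrix is positive semidefinite iff its trace and determinant are both nonnegative:

```python
# Toy check (hypothetical 2x2 data): for C = R^2_+, f is C-convex iff
# w1*P1 + w2*P2 is PSD for all w >= 0, which for C = R^q_+ reduces to
# each P_i being PSD.  A symmetric 2x2 matrix is PSD iff its trace and
# determinant are both nonnegative.

def psd_2x2(M):
    trace = M[0][0] + M[1][1]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return trace >= 0 and det >= 0

P1 = [[2.0, -1.0], [-1.0, 1.0]]   # PSD: trace 3, det 1
P2 = [[1.0, 0.0], [0.0, 0.0]]     # PSD: trace 1, det 0

assert psd_2x2(P1) and psd_2x2(P2)

# Spot-check the combination w1*P1 + w2*P2 on a grid of weights w >= 0.
for w1 in (0.0, 0.5, 1.0, 2.0):
    for w2 in (0.0, 0.5, 1.0, 2.0):
        S = [[w1 * P1[i][j] + w2 * P2[i][j] for j in range(2)]
             for i in range(2)]
        assert psd_2x2(S)
```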
Let us now examine what the weighted sum scalarization and its dual reveal about the problem. The weighted sum scalarization for a weight vector $w \in C^+$ \begin{align*}
\text{minimize } & x^\mathsf{T} \left(\sum_{i=1}^q w_i P_i\right) x + \left(\sum_{i=1}^q w_i d_i \right)^\mathsf{T} x \\ \text{ subject to } & \quad x^\mathsf{T} Q_j x + c_j^\mathsf{T} x + r_j \leq 0, \quad j\in \{1,\ldots,p \}, \\ & \quad A x \leq b \end{align*} yields a dual function \begin{align*} g(\nu,\mu) = &\inf_{x \in \mathbb{R}^n} \left(x^\mathsf{T} \left(\sum_{i=1}^q w_i P_i + \sum_{j=1}^p \nu_j Q_j \right) x + \left(\sum_{i=1}^q w_i d_i + \sum_{j=1}^p \nu_j c_j + A^\mathsf{T} \mu \right)^\mathsf{T} x\right) \\ &+ \nu^\mathsf{T} r -\mu^\mathsf{T} b. \end{align*} Keeping \Cref{thm:1} in mind, we are interested in the weights $w$ for which the dual problem is feasible. Given the infimum term in the dual function, we have feasibility in two cases: if the quadratic expression in $x$ is convex, or if the quadratic expression in $x$ is constant. This yields the following form of set $W$, \begin{align} \label{eq:PinftyQCQP} &W = \left\lbrace w \in C^+ \mid \exists \nu \in \mathbb{R}^p_+ : 0 \neq \sum_{i=1}^q w_i P_i + \sum_{j=1}^p \nu_j Q_j \succeq 0 \right\rbrace \ \cup \\ & \left\lbrace w \in C^+ \mid \exists \nu \in \mathbb{R}^p_+, \mu \in \mathbb{R}^m_+ : \ \sum_{i=1}^q w_i P_i + \sum_{j=1}^p \nu_j Q_j = 0, \sum_{i=1}^q w_i d_i + \sum_{j=1}^p \nu_j c_j + A^\mathsf{T} \mu = 0\right\rbrace . \notag \end{align}
Using the structure of $W$ given by \eqref{eq:PinftyQCQP}, we show in the following two propositions that either the set $W$ itself or its closure is equal to $C^+$ in some standard cases. \begin{proposition} \label{prop:quad1}
Consider problem \eqref{QVP}. In each of the following cases, $W = \mathcal{P}_\infty^+ = C^+$ holds, in particular, the problem is bounded.
\begin{enumerate}[(a)]
\item There is at least one nonlinear constraint, that is, $p>0$.
\item The objective function is nonlinear and $P_1,\ldots,P_q \in S^n$ are linearly independent.
\end{enumerate}
\end{proposition}
\begin{proof} For each case we will show $W=C^+$. This implies $W = \mathcal{P}_\infty^+$ and by \Cref{prop:weightset}, the problem is self-bounded. Indeed, it is bounded as we also have $\mathcal{P}_\infty^+ = C^+$.
\begin{enumerate}[(a)]
\item By convexity, we have $Q_1, \dots, Q_p \succeq 0$ and $\sum_{i=1}^q w_i P_i \succeq 0$ for arbitrary $w \in C^+$.
If $\sum_{i=1}^q w_i P_i \neq 0$, then the choice of $\nu = 0$ gives $0 \neq \sum_{i=1}^q w_i P_i + \sum_{j=1}^p \nu_j Q_j \succeq 0$. If $\sum_{i=1}^q w_i P_i = 0$, then the choice of $\nu_1 = 1, \nu_2 = \dots = \nu_p = 0$ gives $0 \neq \sum_{i=1}^q w_i P_i + \sum_{j=1}^p \nu_j Q_j \succeq 0$. This shows that
$$\left\lbrace w \in C^+ \mid \exists \nu \in \mathbb{R}^p_+ : 0 \neq \sum_{i=1}^q w_i P_i + \sum_{j=1}^p \nu_j Q_j \succeq 0 \right\rbrace = C^+.$$
Together with \eqref{eq:PinftyQCQP}, this implies that $W=C^+$.
\item By (a), it is sufficient to consider problems without nonlinear constraints, that is, $p=0$. In this case, the cone $W$ given by \eqref{eq:PinftyQCQP} simplifies to
\begin{align*}
\left\lbrace w \in C^+ \mid 0 \neq \sum_{i=1}^q w_i P_i \succeq 0 \right\rbrace \cup
\left\lbrace w \in C^+ \mid \sum_{i=1}^q w_i P_i =0, \exists \mu \geq 0 : \ \sum_{i=1}^q w_i d_i + A^\mathsf{T} \mu = 0\right\rbrace.
\end{align*} Since the $C$-convexity of the objective implies $\sum_{i=1}^q w_i P_i \succeq 0$ for all $w \in C^+$, we can write $W = (C^+\setminus W_1) \cup W_2,$ where
\begin{align} \label{eq:W1set}
\begin{split}
W_1 &:= \left\lbrace w \in C^+ \mid \sum_{i=1}^q w_i P_i = 0 \right\rbrace , \\
W_2 &:= \left\lbrace w \in C^+ \mid \exists \mu \geq 0 : \ \sum_{i=1}^q w_i P_i =0, \sum_{i=1}^q w_i d_i + A^\mathsf{T} \mu = 0\right\rbrace.
\end{split}
\end{align}
\end{enumerate}
If the matrices $P_1, \dots, P_q$ are linearly independent, then $W_1 = \{0\}$, since $\sum_{i=1}^q w_i P_i = 0$ occurs only for $w=0$. Since $0 \in W_2$ and $W = (C^+\setminus W_1) \cup W_2$, we conclude $W=C^+$. \end{proof} \begin{proposition}\label{prop:quad2}
Consider problem \eqref{QVP} and assume that the problem is nonlinear, that is, there is at least one nonlinear constraint or objective function. If $C = \mathbb{R}^q_+$, then $\mathcal{P}_\infty^+ = \mathbb{R}^q_+$. \end{proposition}
\begin{proof}
By \Cref{prop:quad1} (a), it is sufficient to consider problems without nonlinear constraints, that is, $p=0$. In this case, $W = (C^+\setminus W_1) \cup W_2,$ where $W_1,W_2$ are as in \eqref{eq:W1set}. If $W_1 = \{0\}$, then $\mathcal{P}_\infty^+ = C^+$ follows since $\mathcal{P}_\infty^+ = \cl W$. Assume $w \in W_1 \setminus\{0\}$. Since $C^+ = \mathbb{R}^q_+$, we have $w_j>0$ for some $j\in \{1,\ldots,q\}$. Consider the diagonal entries in the equation $\sum_{i=1}^q w_i P_i = 0$. Since the matrices $P_1, P_2, \dots, P_q$ are positive semidefinite for $C=\mathbb{R}^q_+$, all of their diagonal entries are nonnegative. Then, $\sum_{i=1}^q w_i P_i = 0$ together with $w_j > 0$ implies that all the diagonal entries of the matrix $P_j$ are zero and, therefore, $P_j$ is the zero matrix.
Since the problem is not linear, there exists $i \in\{1, \dots, q\}$ with $P_i \neq 0$. Then, for any $w \in W_1$ we can construct a sequence $w^{(n)} := w + \frac{1}{n}e^i \in \mathbb{R}^q_+ \setminus W_1$ converging to $w$. Hence, we conclude $\mathcal{P}_\infty^+ = \cl \left( \mathbb{R}^q_+ \setminus W_1 \right) = \mathbb{R}^q_+$.
\end{proof}
We see that the computation of $\mathcal{P}_\infty^+$ is only relevant if $C \neq \mathbb{R}^q_+$, \eqref{QVP} has only linear constraints and $P_1, \dots, P_q$ are linearly dependent. In that case, it can be done by computing the sets $W_1, W_2$ given by \eqref{eq:W1set} and setting $\mathcal{P}_\infty^+ = \cl\big((C^+\setminus W_1) \cup W_2\big).$ As long as the ordering cone is polyhedral, $W_2$ is in the form of a polyhedral projection, so $\mathcal{P}_\infty^+$ can be obtained through computations with polyhedra.
\section{Numerical Examples} \label{sect:comp} In this section we provide numerical examples to illustrate the proposed solution methodology. We consider a two-dimensional linear problem and two semidefinite programming problems with different objective functions minimized over the same feasible set. \begin{example}\label{example1} Consider the illustrative two-dimensional linear example \begin{align*} \min \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} \text{ w.r.t. } \leq_{\mathbb{R}^2_+} \text{ s.t. } \begin{pmatrix} -4 & -1 \\ -2 & -1 \\ -1 & -1 \\ -1 & -2 \\ -1 & -4 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} \leq \begin{pmatrix} -5 \\-5 \\ -4 \\ -5 \\ -5 \end{pmatrix}. \end{align*} As outlined in Section~\ref{subsect:linear}, to identify the recession cone of the upper image, it suffices to solve two scalar linear problems, \begin{align*} \text{minimize / maximize } \; \lambda \quad \text{ subject to } \begin{pmatrix} \lambda\\ 1- \lambda \end{pmatrix} \geq 0, \; y \geq 0, \; \begin{pmatrix} 4 & 2 & 1 & 1 & 1 \\ 1 & 1 & 1 & 2 & 4 \end{pmatrix} y = \begin{pmatrix} \lambda\\ 1- \lambda \end{pmatrix}. \end{align*}
These yield the optimal values $\lambda_{\min} = 0.2$ and $\lambda_{\max} = 0.8$, which generate the dual cone $W = \mathcal{P}_{\infty}^+ = {\rm cone\,} \left\lbrace \begin{pmatrix} 0.2 \\ 0.8 \end{pmatrix}, \begin{pmatrix} 0.8 \\ 0.2 \end{pmatrix} \right\rbrace$ and, consequently, the recession cone of the upper image $\mathcal{P}_{\infty} = {\rm cone\,} \left\lbrace \begin{pmatrix} -1 \\ 4 \end{pmatrix}, \begin{pmatrix} 4 \\ -1 \end{pmatrix} \right\rbrace$. The dual cone of weights $W$, the recession cone $\mathcal{P}_\infty$ and the upper image $\mathcal{P}$ are depicted in Figure~\ref{fig1}.
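The reported quantities can be verified directly in a few lines of pure Python; the dual certificates $y$ below are supplied by inspection (they are not stated in the example).

```python
# Spot-check of Example 1: the bounds lambda_min = 0.2, lambda_max = 0.8
# come with explicit dual certificates y >= 0, and the reported generators
# of P_inf form the dual cone of cone{(0.2, 0.8), (0.8, 0.2)}.

A_T = [[4, 2, 1, 1, 1],
       [1, 1, 1, 2, 4]]

def matvec(M, y):
    return [sum(m * yi for m, yi in zip(row, y)) for row in M]

def close(u, v, tol=1e-12):
    return all(abs(a - b) <= tol for a, b in zip(u, v))

# lambda_min = 0.2 is attained by the (hypothetical) certificate y = (0,0,0,0,0.2):
assert close(matvec(A_T, [0, 0, 0, 0, 0.2]), [0.2, 0.8])
# lambda_max = 0.8 is attained by y = (0.2,0,0,0,0):
assert close(matvec(A_T, [0.2, 0, 0, 0, 0]), [0.8, 0.2])

# Each generator of P_inf has nonnegative inner products with both
# generators of W, vanishing on one of them (boundary rays).
W_gen = [(0.2, 0.8), (0.8, 0.2)]
P_gen = [(-1.0, 4.0), (4.0, -1.0)]
for p in P_gen:
    dots = [w0 * p[0] + w1 * p[1] for (w0, w1) in W_gen]
    assert min(dots) > -1e-12 and min(abs(d) for d in dots) < 1e-12
```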
\begin{figure}
\caption{Linear problem from Example~\ref{example1}. Left: Recession cone $\mathcal{P}_\infty$ (dark purple) and the set of weights $W$ (lighter yellow). The depicted line $w_1 + w_2 = 1$ represents the choice of base of the cones, which is used for the two scalar problems solved. Right: Upper image with highlighted recession direction. }
\label{fig1}
\end{figure}
\end{example}
\begin{example}\label{example2} We consider the semidefinite problem \begin{align*} \text{minimize } & \quad P^\mathsf{T} x \quad \text{ with respect to\ } \leq_{\mathbb{R}^3_+} \\ \text{ subject to } & \quad x_1 \begin{pmatrix} -1 & 2 \\ 2 & 4 \end{pmatrix} + x_2 \begin{pmatrix} 2 & 1 \\ 1 & -1 \end{pmatrix} + x_3 \begin{pmatrix} 2 & 2 \\ 2 & 2 \end{pmatrix} \preceq 0 \end{align*} for objectives given by matrices \begin{align*} P_1 = \begin{pmatrix} 0 & 0 & -1\\ -1 & 1 & 0\\ 1 & 1 & -1 \end{pmatrix} \text{ and } P_2 = \begin{pmatrix} 1 &0 &-1\\ -1 &1 &0\\ 0 &0 &-1 \end{pmatrix}. \end{align*}
We find an approximation of the dual cone $\mathcal{P}_\infty^+$ through the set of weights $$W = \{w \in \mathbb{R}^3_+ \mid \exists Z \succeq 0: \ {\rm tr} (F_i Z) + e_i^T Pw = 0, \; i = 1,2,3 \}$$ via the convex projection problem of (approximately) computing the set \begin{align*} W_c = \{ w \in \mathbb{R}^3_+ \mid \exists Z \succeq 0: \ {\rm tr} (F_i Z) + e_i^T Pw = 0, \; i = 1,2,3, \; w_1 + w_2 + w_3 \leq \sqrt{3} \}. \end{align*} The convex projection yields both inner and outer approximations of the set $W_c$, which generate inner and outer approximations of the cones $W$ and $\mathcal{P}_\infty$. All of them are displayed in Figure~\ref{fig2} for the problem with objective $P_1$. The outer approximation of $\mathcal{P}_\infty$, whose approximation tolerance is guaranteed by Propositions~\ref{prop:aprox_error} and~\ref{prop:Wc}, is the one needed as part of a solution.
In Figure~\ref{fig4} we use the problem with objective $P_2$ to compare the approximations obtained via the set $W_c$ and via the set $\Lambda$. Recall that we only have tolerance guarantees for the approach through the set $W_c$.
\begin{figure}
\caption{ Semidefinite problem from Example~\ref{example2} with objective $P_1$ solved for $\epsilon = 0.08$ (top) and $\epsilon=0.01$ (bottom). Displayed are inner and outer approximations of the set $W$ (left) and the recession cone $\mathcal{P}_\infty$ (right).}
\label{fig2}
\end{figure}
\begin{figure}
\caption{Recession cone of the semidefinite problem from Example~\ref{example2} with objective $P_2$. Compare the approximations of $\mathcal{P}_\infty$ obtained via the set $W_c$ (left) and via the set $\Lambda$ (right), both convex projections were solved for tolerance $\epsilon = 0.005$. }
\label{fig4}
\end{figure}
\end{example}
\end{document} |
\begin{document}
\markboth{Sun, Nie, Deng}{Reduced basis method for fractional PDEs} \title{A reduced finite element formulation for space fractional partial differential equation}
\author[AUTHOR1, AUTHOR2 and AUTHOR3]{Jing Sun, Daxin Nie and Weihua Deng\corrauth} \address{School of Mathematics and Statistics, Gansu Key Laboratory of Applied Mathematics and Complex Systems, Lanzhou University, Lanzhou 730000, P.R. China.} \email{{\tt Sunj2015@lzu.edu.cn} (Jing Sun), {\tt ndx1993@163.com} (Daxin Nie), {\tt dengwh@lzu.edu.cn} (Weihua Deng)}
\begin{abstract} Applying proper orthogonal decomposition to a usual finite element (FE) formulation for space fractional partial differential equation, we get a reduced FE model, which greatly reduces the complexity of computation. Then, the stability analysis and error estimate for the reduced model are presented. Finally, we verify the effectiveness of the algorithm by numerical experiments. \end{abstract}
\keywords{Proper orthogonal decomposition, Finite element method, Space fractional partial differential equation}
\ams{65M10, 78A48}
\maketitle
\section{Introduction} \label{sec1} In recent years, fractional partial differential equations (FPDEs) have become a hot research topic with wide applications in many fields, such as physics \cite{Metaler2000}, chemistry \cite{Yuste2004}, finance \cite{Picozzi2002}, and so on. Compared with classical partial differential equations (PDEs), finding the exact solutions of FPDEs is much more challenging, or the solutions themselves are very complicated, being expressed in terms of transcendental functions or infinite series. Developing numerical methods for FPDEs thus naturally attracts the interest of scholars \cite{Cheng2015,Deng2008,Deng2013,DengHes2013,Li2010,LiuDu2015}. Because of the non-locality of fractional operators, one of the key issues is how to alleviate computational loads and resource demands, especially for space FPDEs. In practical problems, not only the accuracy of the model matters; its computational efficiency is likewise critical \cite{Hesthaven}. The goal of this paper is to develop the basic formulation of the reduced basis method for space FPDEs to balance accuracy and efficiency.
The central idea of the reduced basis approach is the identification of a suitable problem-dependent basis from snapshots that effectively represents the solutions of the FPDE, i.e., finding the most representative snapshots and determining when the basis is sufficiently rich. One sampling strategy is to perform a singular value decomposition of a large number of snapshots, namely the so-called proper orthogonal decomposition (POD),
which was put forward in the context of turbulence by Lumley \cite{Lumley1967}. Since then, POD has been successfully applied in various fields including pattern recognition \cite{Fukunaga}, coherent structures \cite{Sirovich1,Sirovich2,Sirovich3}, control theory \cite{Atwell,Kunisch1999}, and model reduction for PDEs. Several numerical methods combined with POD have been developed: combining POD with Galerkin methods to solve parabolic equations and fluid dynamics equations is discussed in \cite{Kunisch2003Galerkin,Kunisch2001Galerkin}; Refs. \cite{Luo2009Finite,luo2013a,Luo2011A,Luo2012A} incorporate POD into finite difference, finite element, and finite volume methods to solve classical parabolic problems, Navier-Stokes equations, solute transport problems, and so on; Ref. \cite{Liu2016} applies POD to a finite element scheme for time FPDEs. All of these methods reduce the computational and memory loads. To the best of our knowledge, there is no existing work on combining POD with the finite element method for space FPDEs.
In this paper, we get a reduced model based on POD and finite element methods for the following problem: Find $u=u(x,y,t)$ satisfying \begin{equation}\label{equation2D}
\left\{
\begin{aligned}
&\frac{\partial u(x,y,t)}{\partial t}-\frac{\partial ^{\alpha} u(x,y,t)}{\partial |x|^{\alpha}}-\frac{\partial ^{\beta} u(x,y,t)}{\partial |y|^{\beta}}=f(x,y,t), && (x,y,t)\in \Omega\times(0,T),\\
&u(x,y,0)=g(x,y), && (x, y)\in \Omega,\\
&u(x,y,t)=0, && (x, y)\in \mathbb{R}^2\backslash \Omega, ~~t\in (0,T),
\end{aligned}
\right.
\end{equation}
with $\Omega=(0,1)\times(0,1)$, $1< \alpha, \beta< 2$; and $\frac{\partial ^{\alpha} u}{\partial |x|^{\alpha}}$ denotes the Riesz fractional derivative, defined by \begin{equation}
\frac{\partial ^{\alpha} u}{\partial |x|^{\alpha}}=-\frac{1}{2\cos(\frac{\alpha\pi}{2})}(~_{-\infty}D_{x}^{\alpha}u+~_{x}D_{\infty}^{\alpha}u), \end{equation} where $~_{-\infty}D_{x}^{\alpha}u$ and $_{x}D_{\infty}^{\alpha}u$ are the left- and right-sided Riemann-Liouville derivatives, respectively. Since ${\rm supp}(u)\subset\Omega$, for $(x,y)\in \Omega$ we have $_{-\infty}D_x^{\alpha}u(x,y,t)=~_0D_x^{\alpha}u(x,y,t)$, $_{x}D_{\infty}^{\alpha}u(x,y,t)=~_xD_1^{\alpha}u(x,y,t)$, and likewise $_{-\infty}D_y^{\beta}u(x,y,t)=~_0D_y^{\beta}u(x,y,t)$, $_{y}D_{\infty}^{\beta}u(x,y,t)=~_yD_1^{\beta}u(x,y,t)$.
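The scheme developed in this paper discretizes these operators variationally, but for intuition, the left Riemann-Liouville derivative admits the classical Gr\"unwald-Letnikov approximation $_0D_x^{\alpha}u(x_i)\approx h^{-\alpha}\sum_{k=0}^{i}w_k u_{i-k}$ with weights $w_k=(-1)^k\binom{\alpha}{k}$. The following sketch illustrates this pointwise discretization only; it is not the paper's method, and the helper names are ours:

```python
import numpy as np

def gl_weights(alpha, K):
    """Grunwald-Letnikov weights w_k = (-1)^k * C(alpha, k),
    via the recurrence w_0 = 1, w_k = w_{k-1} * (k - 1 - alpha) / k."""
    w = np.empty(K + 1)
    w[0] = 1.0
    for k in range(1, K + 1):
        w[k] = w[k - 1] * (k - 1 - alpha) / k
    return w

def left_rl_derivative(u, alpha, h):
    """First-order GL approximation of the left Riemann-Liouville derivative
    _0 D_x^alpha u on a uniform grid with spacing h, for u vanishing at and
    to the left of the boundary."""
    n = len(u)
    w = gl_weights(alpha, n)
    out = np.empty(n)
    for i in range(n):
        out[i] = np.dot(w[: i + 1], u[i::-1]) / h**alpha
    return out
```

For $\alpha=1$ the weights collapse to $(1,-1,0,\dots)$ and the formula reduces to the usual backward difference, a quick consistency check on the recurrence.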
For Eq. (\ref{equation2D}), Ref. \cite{Bu2014} provides a finite element method to solve it numerically. Its FE scheme reveals two difficulties: the stiffness matrix is dense because of the non-locality of the operator, and a finer subdivision is needed to guarantee the accuracy of the numerical scheme, leading to a great increase in memory requirements and computational cost.
To overcome this problem, we combine POD with the finite element method to solve Eq. (\ref{equation2D}). Namely, we construct the POD basis, in the least squares sense, from snapshots taken at uniform intervals from the solutions of the general FE scheme, and only $d$ POD basis functions are needed when solving Eq. (\ref{equation2D}), where $d$ is the number of dominant eigenvalues retained from the matrix $G$ ($G$ is also called the correlation matrix). Thus the degrees of freedom are reduced and the computing time is greatly saved. The present method can be considered as an improvement of the classical finite element method.
This paper is organized as follows. In Sec. 2, some preliminaries needed in the paper are presented. Sec. 3 briefly recalls the classical FE method for Eq. (\ref{equation2D}). In Sec. 4, we choose the FE solutions as the snapshots to construct the POD basis in a certain least squares optimal sense and establish the reduced FE scheme based on POD. In Sec. 5, we give the stability analysis and error estimate for the reduced FE scheme. In the last section, we demonstrate the effectiveness of the model by numerical experiments. \section{Preliminaries} We provide the preliminary knowledge in this section.
\begin{definition}[\cite{Podlubny1998Fractional}]\label{def1}
The left- and right-sided Riemann-Liouville fractional integrals of order $\mu\,(\mu>0)$ are defined by \begin{equation*} _{-\infty}I^{\mu}_xu(x)=\frac{1}{\Gamma(\mu)}\int^x_{-\infty}(x-\xi)^{\mu-1}u(\xi)d\xi \end{equation*} and \begin{equation*} _xI^{\mu}_{\infty}u(x)=\frac{1}{\Gamma(\mu)}\int^{\infty}_x(\xi-x)^{\mu-1}u(\xi)d\xi. \end{equation*} \end{definition} \begin{definition}[\cite{Podlubny1998Fractional}]\label{def2} The left- and right-sided Riemann-Liouville fractional derivatives of order $\mu\,(\mu>0)$ are defined as \begin{equation*} _{-\infty}D^{\mu}_xu(x)=\frac{1}{\Gamma(n-\mu)}\frac{d^n}{dx^n}\int^x_{-\infty}(x-\xi)^{n-\mu-1}u(\xi)d\xi \end{equation*} and \begin{equation*} _xD^{\mu}_{\infty}u(x)=\frac{(-1)^n}{\Gamma(n-\mu)}\frac{d^n}{dx^n}\int^{\infty}_x(\xi-x)^{n-\mu-1}u(\xi)d\xi, \end{equation*} where $n-1<\mu<n$. \end{definition} \begin{lemma}[\cite{Zhang2010}] If $0<\mu<1$, $\mu\neq \frac{1}{2}$, $a, b\in\mathbb{R}$, and $u, v\in H_0^\mu(a,b)$, then \begin{equation*}
(~_aD^{2\mu}_xu,v)=(u,~_xD^{2\mu}_bv)=(~_aD^{\mu}_xu,~_xD^{\mu}_bv) \end{equation*} and \begin{equation*}
(~_aD^{2\mu}_xu,u)=(u,~_xD^{2\mu}_bu)=(~_aD^{\mu}_xu,~_xD^{\mu}_bu)=\cos(\mu\pi)\|~_aD^{\mu}_xu\|^2. \end{equation*} \end{lemma}
\begin{definition}[\cite{Ervin2006}] For $0\leq\mu<\infty$, we define the space \begin{equation*}
H^{\mu}(\mathbb{R}):=\{u\mid u \in L^{2}(\mathbb{R}),(1+|\omega|^{2})^{\frac{\mu}{2}}\hat{u}(\omega)\in L^{2}(\mathbb{R})\} \end{equation*} with the norm \begin{equation*}
\|u\|_{H^{\mu}(\mathbb{R})}:=(\|u\|^{2}+|u|_{\mu,\mathbb{R}})^{\frac{1}{2}} ~~~\forall u\in H^{\mu}(\mathbb{R}), \end{equation*}
where $|u|_{\mu,\mathbb{R}}:=\||\omega|^{\mu}\hat u\|$, and $\hat{u}$ denotes Fourier transform of $u$. \end{definition} \begin{definition}[\cite{Ervin2006}] For $-\infty \leq a < b \leq \infty$, we define \begin{equation*}
H^{\mu}(a,b):=\{v\mid_{(a,b)}~|v\in H^{\mu}(\mathbb{R})\} \end{equation*} with the norm \begin{equation*}
\|v\|_{H^{\mu}(a,b)}:=\inf_{\substack{
\tilde{v}\in H^{\mu}(\mathbb{R})\\
\tilde{v}\mid_{(a,b)}=v}} \|\tilde{v} \|_{H^\mu(\mathbb{R})}~~~~\forall v\in~H^\mu(a,b). \end{equation*} \end{definition}
Furthermore, let $\mathcal{D}(a,b)$ be the set of $C^{\infty}$ functions with compact support in $(a,b)$, and let $H_{0}^{\mu}(a,b)$ be the closure of $\mathcal{D}(a,b)$ with respect to $\|\cdot\|_{H^{\mu}(a,b)}$. \begin{lemma}[\cite{Ervin2006}] If $u \in H^\mu_0(\mathbb{R})$, $\mu\neq n+1/2$, $n \in \mathbb{N}$, then \begin{equation*} \| u\|\leq C| u|_\mu, \end{equation*} where $C>0$. \end{lemma} \begin{lemma}[\cite{luo2013a} Discrete Gr\"onwall Lemma] If $\{a_n\}$, $\{b_n\}$, $\{c_n\}$ are three positive sequences, $\{c_n\}$ is monotone, and they satisfy \begin{equation} a_n+b_n\leq c_n+\lambda \sum\limits_{i=0}^{n-1}a_i, ~~~~~~~~\lambda>0,~a_0+b_0\leq c_0, \end{equation} then $a_n+b_n\leq c_n \exp(n\lambda)$,~~~$n>0$. \end{lemma}
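As a quick numerical illustration (not part of the paper), the discrete Gr\"onwall lemma can be exercised on sequences built to satisfy the hypothesis nearly with equality; all constants below are arbitrary choices of ours:

```python
import numpy as np

def check_discrete_gronwall(a, b, c, lam):
    """Return (hypothesis, conclusion) of the discrete Gronwall lemma
    for positive sequences a, b, c and lam > 0."""
    n = len(a)
    hyp = a[0] + b[0] <= c[0] and all(
        a[k] + b[k] <= c[k] + lam * a[:k].sum() for k in range(1, n))
    concl = all(a[k] + b[k] <= c[k] * np.exp(k * lam) for k in range(1, n))
    return hyp, concl

# build sequences saturating the hypothesis up to a tiny slack
lam, n = 0.1, 50
b = np.full(n, 0.01)
c = np.ones(n)                         # constant, hence monotone
a = np.empty(n)
for k in range(n):
    a[k] = c[k] + lam * a[:k].sum() - b[k] - 1e-9
```

Even in this near-worst case, where $a_n$ grows like $(1+\lambda)^n$, the bound $c_n e^{n\lambda}$ still dominates. This lemma is the tool used in Sec. 5 to absorb the accumulated terms in the stability estimate.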
\section{Recall of classical FE formulation} \label{sec2}
To solve Eq. $(\ref{equation2D})$ numerically, we use the FE method to discretize the spatial variables and the backward Euler method for the time variable. Firstly, we take the FE space as \begin{align*} X_{h}=\{v_{h} \in H_0^{\frac{\alpha}{2}}(\Omega)\bigcap H_0^{\frac{\beta}{2}}(\Omega)\bigcap C^{0}(\Omega); v_{h} \in P_{m}(K)\,\forall K \in \Im_{h}\}, \end{align*} where $m\geq 1$, $P_{m}(K)$ denotes the piecewise polynomials of degree $\leq m$ on $K$, and $\{\Im_{h}\}$ stands for a uniformly regular family of triangulations of $\Omega$. Then the semi-discrete FE formulation can be written as: For every $t\in (0,T)$, find $u_h\in X_{h}$ such that \begin{align}\label{eq:finiteform2D} \begin{split}
\left(\frac{\partial u_{h}}{\partial t},v_h\right)-\left(\frac{\partial ^{\alpha} u_h}{\partial |x|^{\alpha}},v_h\right)-\left(\frac{\partial ^{\beta} u_h}{\partial |y|^{\beta}},v_h\right)= (f,v_h) \quad \forall v_h\in X_{h}, \end{split} \end{align} namely, \begin{align*} \left(\frac{\partial u_{h}}{\partial t},v_h\right)&+\frac{1}{2\cos(\frac{\alpha\pi}{2})}(~_{0}D_{x}^{\alpha}u_h,v_h)+\frac{1}{2\cos(\frac{\alpha\pi}{2})}(~_{x}D_{1}^{\alpha}u_h,v_h)\\ &+\frac{1}{2\cos(\frac{\beta\pi}{2})}(~_{0}D_{y}^{\beta}u_h,v_h)+\frac{1}{2\cos(\frac{\beta\pi}{2})}(~_{y}D_{1}^{\beta}u_h,v_h)= (f,v_h) \quad \forall v_h\in X_{h}. \end{align*} Next, let $N$ be a positive integer, $\tau=\frac{T}{N}$ the time step size, and $t_{n}=n\tau\,(0\leq n\leq N)$. Then the fully discrete FE formulation reads \begin{equation}\label{eq:2Ddiscretescheme} \begin{aligned} (u^{n}_h,v_h)&+\tau a(u^n_h,v_h) =\tau(f^n,v_h)+(u_h^{n-1},v_h), \end{aligned} \end{equation} where $C_\alpha =\frac{1}{2\cos(\alpha\pi/2)}$, $C_\beta =\frac{1}{2\cos(\beta\pi/2)}$, and \begin{equation}\label{aaaa} \begin{aligned} a(u_h,v_h)=&C_{\alpha}(~_{0}D_{x}^{\frac{\alpha}{2}}u_h,~_{x}D_{1}^{\frac{\alpha}{2}}v_h)+C_{\alpha}(~_{x}D_{1}^{\frac{\alpha}{2}}u_h,~_{0}D_{x}^{\frac{\alpha}{2}}v_h)\\ &+ C_{\beta}(~_{0}D_{y}^{\frac{\beta}{2}}u_h,~_{y}D_{1}^{\frac{\beta}{2}}v_h)+ C_{\beta}(~_{y}D_{1}^{\frac{\beta}{2}}u_h,~_{0}D_{y}^{\frac{\beta}{2}}v_h). \end{aligned} \end{equation} \begin{remark} The existence and uniqueness of the solution to Eq. (\ref{eq:2Ddiscretescheme}) can be obtained by the Lax-Milgram theorem \cite{Bu2014}. \end{remark}
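In matrix form, each time step of the fully discrete scheme is one linear solve $(\mathbf{M}+\tau\mathbf{A})\mathbf{u}^n=\mathbf{M}\mathbf{u}^{n-1}+\tau\mathbf{F}^n$, where $\mathbf{M}$ is the mass matrix and $\mathbf{A}$ the (dense) stiffness matrix of $a(\cdot,\cdot)$. The sketch below shows only the time-marching loop; as an assumption for brevity, a standard 1D Laplacian stiffness matrix and an identity mass matrix stand in for the fractional matrices assembled in the paper:

```python
import numpy as np

def backward_euler_march(M, A, u0, F, tau, N):
    """March (M + tau*A) u^n = M u^{n-1} + tau*F^n for n = 1,...,N.
    M: mass matrix, A: stiffness matrix of a(.,.), F: (constant) load vector."""
    S = M + tau * A              # in practice, factor S once and reuse it
    u = u0.copy()
    for _ in range(N):
        u = np.linalg.solve(S, M @ u + tau * F)
    return u

# stand-in matrices (NOT the fractional ones from the paper)
m = 20
h = 1.0 / (m + 1)
A = (2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / h**2
M = np.eye(m)
u = backward_euler_march(M, A, np.ones(m), np.zeros(m), tau=0.01, N=100)
```

With zero load the scheme is unconditionally stable, so the solution norm decays for any $\tau$, in line with the stability estimate of Sec. 5.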
When the source term $f$, the triangulation parameter $h$, the time step $\tau$, and the FE space $X_h$ are given, we can get an ensemble of solutions $\{{u_{h}^{n}}\}_{n=1}^{N}$ of Eq. (\ref{equation2D}). Then we choose $L\,(L\ll N,~{\rm usually}~L=20)$ instantaneous solutions $u_{h}^{n_i}\, (1\leq n_1\leq n_2\leq\cdots\leq n_L \leq N)$ at a uniform interval from the $N$ solutions $\{{u_{h}^{n}}\}_{n=1}^{N}$, which are referred to as the snapshots of the POD.
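Picking the $L$ snapshot indices $n_1\leq\cdots\leq n_L$ at a uniform interval over the $N$ stored time levels is a one-line computation; a sketch (the helper name is ours, and $L=20$ mirrors the text):

```python
import numpy as np

def snapshot_indices(N, L):
    """Return L time-level indices n_1 <= ... <= n_L spread uniformly
    over the stored levels 1..N (endpoints included)."""
    return np.linspace(1, N, L).round().astype(int)

idx = snapshot_indices(N=1000, L=20)
```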
\begin{remark} For practical problems, one can obtain the snapshots by drawing samples from experiments or from past data. After obtaining the ensemble of snapshots, one constructs the POD basis and substitutes the finite element space $X_h$ with the subspace spanned by the POD basis, so as to obtain the reduced formulation.
\end{remark} \section{Construction of the reduced FE formulation by POD} By POD, we build the reduced FE formulation for Eq. (\ref{equation2D}). \subsection{Generation of POD bases }
Suppose that $U_{i}(x,y)=u^{n_i}_{h}(x,y)\,(1\leq i\leq L)$, at least one of which is assumed to be nonzero. Let \begin{equation}
V={\rm span}\{U_1,\cdots,U_L\}, \end{equation} and $\{\psi_{j}\}_{j=1}^l$ stands for an orthonormal basis of $V$ with $l=\dim V\leq L$. Since $V\subseteq H_{0}^{\frac{\alpha}{2}}(\Omega)\bigcap H_0^{\frac{\beta}{2}}(\Omega)$ and $\{\psi_{j}\}_{j=1}^l$ is an orthonormal basis, we have \begin{equation}\label{(2D1.1)} U_i=\sum\limits_{j=1}^l\left(U_i,\psi_j\right)_w\psi_j,~~~~i=1,2,\cdots, L, \end{equation} where \begin{align*} \left(U_i,\psi_j\right)_w:=&C_\alpha\left(~_{0}D_{x}^{\frac{\alpha}{2}}U_i,~_{x}D_{1}^{\frac{\alpha}{2}}\psi_j\right)+C_\alpha\left(~_{x}D_{1}^{\frac{\alpha}{2}}U_i,~_{0}D_{x}^{\frac{\alpha}{2}}\psi_j\right)\\
&+C_\beta\left(~_{0}D_{y}^{\frac{\beta}{2}}U_i,~_{y}D_{1}^{\frac{\beta}{2}}\psi_j\right)+C_\beta\left(~_{y}D_{1}^{\frac{\beta}{2}}U_i,~_{0}D_{y}^{\frac{\beta}{2}}\psi_j\right). \end{align*}
\begin{definition}[\cite{Luo2012A}]
The method of POD consists in finding the orthonormal basis $\psi_{j}$ $(j=1,\cdots,L)$, such that for every $d\,(1\leq d\leq l)$, the mean square error between the elements $U_{i}$ and the corresponding $d$-th partial sum of $(\ref{(2D1.1)})$ is minimized on average, i.e., \begin{equation}\label{eq:2Doptimization}
\min\limits_{\{{\psi_j}\}_{j=1}^{d}}\frac{1}{L}\sum\limits_{i=1}^{L}\left\|U_{i}-\sum_{j=1}^{d}(U_i,\psi_j)_w\psi_j\right\|_{w}^{2}, \end{equation} subject to \begin{equation} \left(\psi_i, \psi_j\right)_w=\delta_{ij},~~~~1\leq i\leq d ,~~1\leq j\leq i, \end{equation} where \begin{equation} \begin{aligned}
\|U_{i}\|_w^{2} :=&C_\alpha\left(~_{0}D_{x}^{\frac{\alpha}{2}}U_i,~_{x}D_{1}^{\frac{\alpha}{2}}U_i\right)+C_\alpha\left(~_{x}D_{1}^{\frac{\alpha}{2}}U_i,~_{0}D_{x}^{\frac{\alpha}{2}}U_i\right) \\
&+C_\beta\left(~_{0}D_{y}^{\frac{\beta}{2}}U_i,~_{y}D_{1}^{\frac{\beta}{2}}U_i\right)+C_\beta\left(~_{y}D_{1}^{\frac{\beta}{2}}U_i,~_{0}D_{y}^{\frac{\beta}{2}}U_i\right).
\end{aligned}
\end{equation} \end{definition}
\begin{remark}
It is easy to verify that the norm $\|\cdot\|_w$ is equivalent to the norm of the space $H_{0}^{\frac{\alpha}{2}}(\Omega)\bigcap H_0^{\frac{\beta}{2}}(\Omega)$. \end{remark}
By the definition (\ref{(2D1.1)}) and the orthonormality of $\psi_i$, we can rewrite (\ref{eq:2Doptimization}) as \begin{equation}\label{www1} \begin{aligned}
\frac{1}{L}\sum\limits_{i=1}^{L}\left\|U_{i}-\sum_{j=1}^{d}\left(U_i,\psi_j\right)_w\psi_j\right\|_{w}^{2}&=\frac{1}{L}\sum_{i=1}^{L}\left\|\sum_{j=d+1}^{l}\left(U_i,\psi_j\right)_w\psi_j\right\|_{w}^{2}\\
&=\sum_{j=d+1}^{l}\left[\frac{1}{L}\sum_{i=1}^{L}\left|\left(U_i,\psi_j\right)^{2}_w\right|\right]. \end{aligned} \end{equation} Moreover, \begin{align*}
\sum_{j=1}^{l}\left[\frac{1}{L}\sum_{i=1}^{L}\left|\left(U_i,\psi_j\right)_w^{2}\right|\right]=\frac{1}{L}\sum_{i=1}^{L}\left\|\sum_{j=1}^{l}\left(U_i,\psi_j\right)_w\psi_j\right\|_{w}^{2}=\frac{1}{L}\sum_{i=1}^{L}\left\|U_{i}\right\|_{w}^{2} \end{align*} has a fixed value. Thus, in order to minimize (\ref{www1}), one only needs to find an orthonormal basis $\psi_j\,(j=1,2,\ldots,l)$ such that \begin{equation}\label{maxx}
\max\limits_{\{{\psi_j}\}_{j=1}^{d}}\sum\limits_{j=1}^{d}\left[\frac{1}{L}\sum_{i=1}^{L}\left|(U_i,\psi_j)_{w}^{2}\right|\right], \end{equation} subject to \begin{equation}\label{condition} (\psi_i, \psi_j)_w=\delta_{ij},~~~~1\leq i\leq d ,~~1\leq j\leq i. \end{equation}
Following \cite{Luo2012A,Luo2011A}, to solve (\ref{maxx})-(\ref{condition}), one can start by finding a function (or so-called POD basis element) $\psi$ that maximizes \begin{equation}\label{maxxxx}
\frac{1}{L}\sum\limits_{i=1}^{L}\left|(U_i,\psi)_{w}^{2}\right| \end{equation} satisfying $(\psi,\psi)_{w}=1$. Here, we choose $\psi$ having the form: $\psi=\sum\limits_{i=1}^{L}a_{i}U_{i}$, where $a_i$ is determined to make (\ref{maxxxx}) maximum. Then, define the operators \begin{equation} K\left((x,y),(x',y')\right)=\frac{1}{L}\sum_{i=1}^{L}U_{i}(x,y)U_{i}(x',y') \end{equation} and \begin{equation}\label{eq:2DRproj} \begin{aligned} R\psi= &C_\alpha\int\int_{\Omega}~_{0}D_{x'}^{\frac{\alpha}{2}}K\left((x,y),(x',y')\right)~_{x'}D_{1}^{\frac{\alpha}{2}}\psi(x',y')dx'dy'\\ &+C_\alpha\int\int_{\Omega}~_{0}D_{x'}^{\frac{\alpha}{2}}\psi(x',y')~_{x'}D_{1}^{\frac{\alpha}{2}}K\left((x,y),(x',y')\right)dx'dy'\\ &+C_\beta\int\int_{\Omega}~_{0}D_{y'}^{\frac{\beta}{2}}K\left((x,y),(x',y')\right)~_{y'}D_{1}^{\frac{\beta}{2}}\psi(x',y')dx'dy'\\ &+C_\beta\int\int_{\Omega}~_{0}D_{y'}^{\frac{\beta}{2}}\psi(x',y')~_{y'}D_{1}^{\frac{\beta}{2}}K\left((x,y),(x',y')\right)dx'dy', \end{aligned} \end{equation} where $R: H_0^{\frac{\alpha}{2}}(\Omega)\bigcap H_0^{\frac{\beta}{2}}(\Omega) \longrightarrow H_0^{\frac{\alpha}{2}}(\Omega) \bigcap H_0^{\frac{\beta}{2}}(\Omega)$. Direct calculation leads to \begin{align*} (R\psi,\psi)_{w}=&C_\alpha\int\int_{\Omega}~_{0}D_{x}^{\frac{\alpha}{2}}R\psi~_{x}D_{1}^{\frac{\alpha}{2}}\psi dxdy+C_\alpha\int\int_{\Omega}~_{0}D_{x}^{\frac{\alpha}{2}}\psi ~_{x}D_{1}^{\frac{\alpha}{2}}R\psi dxdy\\ &+C_\beta\int\int_{\Omega}~_{0}D_{y}^{\frac{\beta}{2}}R\psi~_{y}D_{1}^{\frac{\beta}{2}}\psi dxdy+C_\beta\int\int_{\Omega}~_{0}D_{y}^{\frac{\beta}{2}}\psi~ _{y}D_{1}^{\frac{\beta}{2}}R\psi dxdy\\
=&\frac{1}{L}\sum_{i=1}^{L}\left|(U_i,\psi)^{2}_{w}\right|. \end{align*} Furthermore, according to \begin{equation} (R\phi,\psi)_{w}=(R\psi,\phi)_{w}, \end{equation} it follows that $R$ is a nonnegative symmetric operator on $H_{0}^{\frac{\alpha}{2}}(\Omega)\bigcap H_{0}^{\frac{\beta}{2}}(\Omega)$. So we transform the problem (\ref{maxx})-(\ref{condition}) into finding the largest eigenvalue of the problem \begin{equation}\label{eigenvalue} R\psi=\lambda\psi ~~~~~~~~~~~\text{subject~to}~(\psi,\psi)_{w}=1. \end{equation} According to the definitions of $R$, $K$ and $\psi$, (\ref{eigenvalue}) becomes \begin{equation}\label{leftside} \begin{aligned} &\sum_{i=1}^{L}U_i(x,y)\sum_{j=1}^{L}a_j\left[\frac{ C_\alpha}{ L}\int\int_{\Omega}~_{0}D_{x'}^\frac{\alpha}{2}U_i(x',y')~_{x'}D_{1}^\frac{\alpha}{2}U_j(x',y')dx'dy'\right.\\ &+\frac{ C_\alpha}{ L}\int\int_{\Omega}~_{x'}D_{1}^\frac{\alpha}{2}U_i(x',y')~_{0}D_{x'}^\frac{\alpha}{2}U_j(x',y')dx'dy'\\ &+\frac{ C_\beta}{ L}\int\int_{\Omega}~_{0}D_{y'}^\frac{\beta}{2}U_i(x',y')~_{y'}D_{1}^\frac{\beta}{2}U_j(x',y')dx'dy'\\ &\left.+\frac{ C_\beta}{ L}\int\int_{\Omega}~_{y'}D_{1}^\frac{\beta}{2}U_i(x',y')~_{0}D_{y'}^\frac{\beta}{2}U_j(x',y')dx'dy'\right]\\ =&\lambda\sum_{i=1}^{L}a_iU_i(x,y). \end{aligned} \end{equation} Matching the coefficients of $U_i(x,y)$ in (\ref{leftside}), one can get \begin{align*} &\sum\limits_{j=1}^{L}a_j\left[\frac{C_\alpha }{ L}\int\int_{\Omega}~_{0}D_{x'}^\frac{\alpha}{2}U_i(x',y')~_{x'}D_{1}^\frac{\alpha}{2}U_j(x',y')dx'dy'\right.\\ &+\frac{ C_\alpha}{ L}\int\int_{\Omega}~_{x'}D_{1}^\frac{\alpha}{2}U_i(x',y')~_{0}D_{x'}^\frac{\alpha}{2}U_j(x',y')dx'dy'\\ &+\frac{C_\beta }{ L}\int\int_{\Omega}~_{0}D_{y'}^\frac{\beta}{2}U_i(x',y')~_{y'}D_{1}^\frac{\beta}{2}U_j(x',y')dx'dy'\\ &\left.+\frac{C_\beta }{ L}\int\int_{\Omega}~_{y'}D_{1}^\frac{\beta}{2}U_i(x',y')~_{0}D_{y'}^\frac{\beta}{2}U_j(x',y')dx'dy'\right]=\lambda a_i.
\end{align*}
Denote \begin{align*} G_{ij}=&\frac{C_\alpha }{L}\int\int_{\Omega}~_{0}D_{x'}^\frac{\alpha}{2}U_i(x',y')~_{x'}D_{1}^\frac{\alpha}{2}U_j(x',y')dx'dy'\\ &+\frac{C_\alpha }{ L}\int\int_{\Omega}~_{x'}D_{1}^\frac{\alpha}{2}U_i(x',y')~_{0}D_{x'}^\frac{\alpha}{2}U_j(x',y')dx'dy'\\ &+\frac{C_\beta }{ L}\int\int_{\Omega}~_{0}D_{y'}^\frac{\beta}{2}U_i(x',y')~_{y'}D_{1}^\frac{\beta}{2}U_j(x',y')dx'dy'\\ &+\frac{C_\beta }{ L}\int\int_{\Omega}~_{y'}D_{1}^\frac{\beta}{2}U_i(x',y')~_{0}D_{y'}^\frac{\beta}{2}U_j(x',y')dx'dy'. \end{align*} Then, the eigenvalue problem can be transformed to
\begin{equation}
G\mathbf{v}=\lambda \mathbf{v},~~~~~~~~~\mathbf{v}=[a_1,\cdots,a_L]^T.
\end{equation} Since the matrix $G$ is a symmetric positive semi-definite matrix with rank $l$, it has a complete set of orthonormal eigenvectors $\mathbf{v}^i=[a_1^i,a_2^i,\cdots,a_L^i]^T,~i=1,2,\cdots, l$, with corresponding eigenvalues $\lambda_1\geq \lambda_2\geq\cdots\geq \lambda_l>0$. Thus, the solution of the optimization problem (\ref{eq:2Doptimization}) is given by \begin{equation} \psi_1=\frac{1}{\sqrt{L\lambda_1}}\sum_{i=1}^{L}a_i^1U_i, \end{equation} where $a_i^1~(i=1,2,\cdots,L)$ are the elements of the eigenvector $\mathbf{v}^1$ corresponding to the largest eigenvalue $\lambda_1$.
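Numerically, the whole basis construction reduces to an eigen-decomposition of the $L\times L$ matrix $G$ followed by the rescaling $\psi_k=(L\lambda_k)^{-1/2}\sum_i a_i^k U_i$. A sketch under the assumption that a generic symmetric positive definite matrix `W` (identity here, for brevity) stands in for the fractional inner product $(\cdot,\cdot)_w$:

```python
import numpy as np

def pod_basis(S, W, d):
    """S: (m, L) snapshot matrix with columns U_i; W: SPD matrix defining
    (u, v)_w = u^T W v. Builds G_ij = (1/L)(U_i, U_j)_w, eigen-decomposes it,
    and returns psi_k = (L*lambda_k)^(-1/2) sum_i a_i^k U_i for k = 1..d."""
    L = S.shape[1]
    G = (S.T @ W @ S) / L
    lam, V = np.linalg.eigh(G)            # eigh returns ascending order
    lam, V = lam[::-1], V[:, ::-1]        # sort eigenpairs descending
    Psi = S @ V[:, :d] / np.sqrt(L * lam[:d])
    return Psi, lam

rng = np.random.default_rng(0)
S = rng.standard_normal((30, 8))          # 8 toy snapshots of dimension 30
W = np.eye(30)                            # identity weight, for the sketch only
Psi, lam = pod_basis(S, W, d=4)
```

By construction $\Psi^T W \Psi = I$, which is exactly the orthonormality computation $(\psi_k,\psi_{k'})_w=\delta_{kk'}$ carried out next in the text.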
Similarly, the POD basis functions $\psi_k \,(k=2,3,\cdots,l)$ are obtained from the other eigenvectors $\mathbf{v}^k\,(k=2,\cdots,l)$, \begin{equation} \psi_k=\frac{1}{\sqrt{L\lambda_k}}\sum_{i=1}^{L}a_i^kU_i,~~~~~k=2,3,\cdots,l. \end{equation} By the orthonormality of $\{\mathbf{v}^k: 1\leq k\leq l\}$, we have \begin{equation} \mathbf{v}^k\cdot\mathbf{ v}^{k'}=\sum_{i=1}^{L}a_{i}^{k}a_{i}^{k'}=\left\{ \begin{aligned} &1,~k=k',\\ &0,~k\neq k'. \end{aligned} \right. \end{equation} Furthermore, one can obtain that \begin{equation} \begin{aligned} (\psi_k,\psi_{k'})_{w}=&C_\alpha\int\int_{\Omega} \left(~_{0}D_{x}^{\frac{\alpha}{2}}\psi_k~_xD_1^{\frac{\alpha}{2}}\psi_{k'}+~_{0}D_{x}^{\frac{\alpha}{2}}\psi_{k'}~_xD_1^{\frac{\alpha}{2}}\psi_{k}\right)dxdy\\ &+C_\beta\int\int_{\Omega} \left(~_{0}D_{y}^{\frac{\beta}{2}}\psi_k~_yD_1^{\frac{\beta}{2}}\psi_{k'}+~_{0}D_{y}^{\frac{\beta}{2}}\psi_{k'}~_yD_1^{\frac{\beta}{2}}\psi_{k}\right)dxdy\\ =&C_\alpha\int\int_{\Omega}\left(~_{0}D_{x}^{\frac{\alpha}{2}}\left(\frac{1}{\sqrt{L\lambda_k}}\sum_{i=1}^{L}a_i^{k}U_i\right)~_{x}D_{1}^{\frac{\alpha}{2}}\left(\frac{1}{\sqrt{L\lambda_{k'}}}\sum_{i=1}^{L}a_i^{k'}U_i\right)\right.\\ &\left.+~_{0}D_{x}^{\frac{\alpha}{2}}\left(\frac{1}{\sqrt{L\lambda_{k'}}}\sum_{i=1}^{L}a_i^{k'}U_i\right)~_{x}D_{1}^{\frac{\alpha}{2}}\left(\frac{1}{\sqrt{L\lambda_k}}\sum_{i=1}^{L}a_i^{k}U_i\right)\right)dxdy\\ &+C_\beta\int\int_{\Omega}\left(~_{0}D_{y}^{\frac{\beta}{2}}\left(\frac{1}{\sqrt{L\lambda_k}}\sum_{i=1}^{L}a_i^{k}U_i\right)~_{y}D_{1}^{\frac{\beta}{2}}\left(\frac{1}{\sqrt{L\lambda_{k'}}}\sum_{i=1}^{L}a_i^{k'}U_i\right)\right.\\ &\left.+~_{0}D_{y}^{\frac{\beta}{2}}\left(\frac{1}{\sqrt{L\lambda_{k'}}}\sum_{i=1}^{L}a_i^{k'}U_i\right)~_{y}D_{1}^{\frac{\beta}{2}}\left(\frac{1}{\sqrt{L\lambda_k}}\sum_{i=1}^{L}a_i^{k}U_i\right)\right)dxdy\\ =&\frac{1}{\sqrt{\lambda_{k}\lambda_{k'}}}\sum_{i=1}^{L}a_i^k\sum_{j=1}^{L}G_{ij}a_j^{k'}\\ =&\frac{1}{\sqrt{\lambda_{k}\lambda_{k'}}}\mathbf{v^{k}} G\mathbf{ v^{k'}}\\ 
=&\frac{1}{\sqrt{\lambda_{k}\lambda_{k'}}}\mathbf{v^{k}} \lambda_{k'} \mathbf{v^{k'}}\\ =&\left\{ \begin{aligned} 1~~k=k'\\ 0~~k\neq k'. \end{aligned} \right. \end{aligned} \end{equation} So the POD basis $\{\psi_1,\psi_2\ldots,\psi_l\}$ forms an orthonormal set. Next, we give a theorem for proving that the basis obtained by the above POD method is optimal. \begin{theorem} The POD basis $\{\psi_i\}_{i=1}^{l}$ is an optimal one. \end{theorem} \begin{proof}
Assume that $\{\psi_i\}_{i=1}^{l}$ is not the optimal orthonormal basis, and denote the optimal one by $\{\phi_i\}_{i=1}^{l}$. Write \begin{equation} \begin{aligned} \psi=(\psi_1,\psi_2,\ldots,\psi_l)^T,\\ \phi=(\phi_1,\phi_2,\ldots,\phi_l)^T.\\ \end{aligned} \end{equation} Since two orthonormal bases are related by a unitary matrix, we have \begin{equation}
\phi=A\psi, \end{equation} where $A$ is a unitary matrix; and \begin{equation}
\begin{pmatrix}
\phi_{k_1}\\
\vdots\\
\phi_{k_d}
\end{pmatrix}
=\begin{pmatrix}
A_{k_1}\\
\vdots\\
A_{k_d}
\end{pmatrix}
\psi, \end{equation} where $A_i$ stands for the $i$-th row of matrix $A$ and $d\leq l$. Let $R$ be defined by $(\ref{eq:2DRproj})$, and let $\lambda_1\geq\lambda_2\geq\ldots \geq\lambda_l$ denote the eigenvalues associated with the basis $\psi$. Then \begin{equation}
\begin{aligned} \sum_{i=1}^{d}(R\psi_{i},\psi_{i})=&\sum_{i=1}^{d}(\lambda_i\psi_i,\psi_i)\\
=&\sum_{i=1}^{d}\lambda_i
\end{aligned} \end{equation} and \begin{equation}
\begin{aligned}
\sum_{i=1}^{d}(R\phi_{k_i},\phi_{k_i})=&\sum_{i=1}^{d}(RA_{k_i}\psi,A_{k_i}\psi)\\
=&\sum_{i=1}^{d}\sum_{j=1}^l a_{k_ij}^2\lambda_j.
\end{aligned} \end{equation} According to the properties of unitary matrices, $\sum_{i=1}^{d}\sum_{j=1}^l a_{k_ij}^2\lambda_j\leq\sum_{i=1}^{d}\lambda_i$, which contradicts the assumed strict optimality of $\{\phi_i\}_{i=1}^{l}$. Therefore, the POD basis $\{\psi_i\}_{i=1}^{l}$ obtained by the above POD method is an optimal one. \end{proof}
\subsection{Reduced FE formulation based on POD} Let $W^d={\rm span}\{\psi_1, \psi_2,\ldots, \psi_d\}$. Then $W^d \subset X_h$. Define the projection $P^d$: $X_h \rightarrow W^d$ by (see \cite{Rudin,Luo2012A}) \begin{equation}\label{projection2} a(P^dU,V_d)=a(U,V_d)~~~~~\forall V_d\in W^d, \end{equation}
where $a(u,v)$ is defined by (\ref{aaaa}). According to the theory of linear operators, there is an extension $P^h$: $H_{0}^{\frac{\alpha}{2}}\bigcap H_{0}^{\frac{\beta}{2}}\rightarrow X_h$ with $P^h|_{X_h}=P^d:\, X_h\rightarrow W^d$ satisfying \begin{equation} a(P^hU,V_h)=a(U,V_h)~~~~~\forall V_h\in X_h. \end{equation}
\begin{theorem}\label{prosp} When $U \in H^\alpha(\Omega)\bigcap H^\beta(\Omega)$, for every $d\,(1\leq d\leq l)$, the projection operator $P^d$ satisfies \begin{equation}\label{thm:2Dprojection:eq:1}
\frac{1}{L}\sum_{i=1}^L\left\|~_0D_x^{\frac{\alpha}{2}}\left(U_h^{n_i}-P^dU_h^{n_i}\right)\right\|^2+\frac{1}{L}\sum_{i=1}^L\left\|~_0D_y^{\frac{\beta}{2}}\left(U_h^{n_i}-P^dU_h^{n_i}\right)\right\|^2\leq C\sum_{j=d+1}^l\lambda_j, \end{equation} where $U_h^{n_i}$ is the solution of the FE scheme (\ref{eq:2Ddiscretescheme}). \end{theorem} \begin{proof} Since
\begin{equation}
a(U,V_h)=a(P^hU,V_h)~~~~~~\forall V_h\in X_h,
\end{equation} we have
\begin{equation}
a(U-P^hU,V_h)=0~~~~~~~~~~\forall V_h\in X_h.
\end{equation}
Moreover, according to
\begin{equation}
\left\|~_0D_x^{\frac{\alpha}{2}}(U-P^hU)\right\|^2+\left\|~_0D_y^{\frac{\beta}{2}}\left(U-P^hU\right)\right\|^2=a\left(U-P^hU,U-P^hU\right)
\end{equation} and
\begin{equation}
\begin{aligned}
&a\left(U-P^hU,U-P^hU\right)\\
=&a\left(U-P^hU,U-V_h\right)+a\left(U-P^hU,V_h-P^hU\right)\\
=&a\left(U-P^hU,U-V_h\right)\\
=&C_\alpha \left[\left(~_0D_x^{\frac{\alpha}{2}}\left(U-P^hU\right),~_xD_1^{\frac{\alpha}{2}}\left(U-V_h\right)\right)+\left(~_xD_1^{\frac{\alpha}{2}}\left(U-P^hU\right),~_0D_x^{\frac{\alpha}{2}}\left(U-V_h\right)\right)\right]\\
&+C_\beta \left[\left(~_0D_y^{\frac{\beta}{2}}\left(U-P^hU\right),~_yD_1^{\frac{\beta}{2}}\left(U-V_h\right)\right)+\left(~_yD_1^{\frac{\beta}{2}}\left(U-P^hU\right),~_0D_y^{\frac{\beta}{2}}\left(U-V_h\right)\right)\right]\\
\leq&C\left[\left\|~_0D_x^{\frac{\alpha}{2}}\left(U-P^hU\right)\right\| \left\|~_xD_1^{\frac{\alpha}{2}}\left(U-V_h\right)\right\|+\left\|~_xD_1^{\frac{\alpha}{2}}\left(U-P^hU\right)\right\|\left\|~_0D_x^{\frac{\alpha}{2}}\left(U-V_h\right)\right\|\right]\\
&+C\left[\left\|~_0D_y^{\frac{\beta}{2}}\left(U-P^hU\right)\right\|\left\|~_yD_1^{\frac{\beta}{2}}\left(U-V_h\right)\right\|+\left\|~_yD_1^{\frac{\beta}{2}}\left(U-P^hU\right)\right\|\left\|~_0D_y^{\frac{\beta}{2}}\left(U-V_h\right)\right\|\right]\\
\leq&C\left(\left\|~_0D_x^{\frac{\alpha}{2}}\left(U-P^hU\right)\right\|+\left\|~_0D_y^{\frac{\beta}{2}}\left(U-P^hU\right)\right\|\right)\left(\left\|~_0D_x^{\frac{\alpha}{2}}\left(U-V_h\right)\right\|+\left\|~_0D_y^{\frac{\beta}{2}}\left(U-V_h\right)\right\|\right),
\end{aligned}
\end{equation}
using
\begin{equation}
\begin{aligned}
&\left\|~_0D_x^{\frac{\alpha}{2}}\left(U-P^hU\right)\right\|^2+\left\|~_0D_y^{\frac{\beta}{2}}\left(U-P^hU\right)\right\|^2
\\
&\geq\frac{1}{2} \left(\left\|~_0D_x^{\frac{\alpha}{2}}\left(U-P^hU\right)\right\|+\left\|~_0D_y^{\frac{\beta}{2}}\left(U-P^hU\right)\right\|\right)^2,
\end{aligned}
\end{equation}
we have
\begin{equation}\label{projectionin}
\left\|~_0D_x^{\frac{\alpha}{2}}\left(U-P^hU\right)\right\|+\left\|~_0D_y^{\frac{\beta}{2}}\left(U-P^hU\right)\right\|\leq C\left(\left\|~_0D_x^{\frac{\alpha}{2}}(U-V_h)\right\|+\left\|~_0D_y^{\frac{\beta}{2}}\left(U-V_h\right)\right\|\right).
\end{equation}
Now take $U=U^{n_i}_h$ and restrict $P^h$ from $X_h$ to $W^d$, i.e., $P^hU=P^dU_h^{n_i}\in W^d$, and let $V_h=\sum_{j=1}^d\left(U_h^{n_i},\psi_j\right)_w\psi_j\in W^d\subset X_h$. Since
\begin{equation}
\begin{aligned}
\frac{1}{L}\sum_{i=1}^{L}\left\|U_i-\sum_{j=1}^{d}(U_i,\psi_j)_w\psi_j\right\|_w^2&=\frac{1}{L}\sum\limits_{j=d+1}^{l}\sum\limits_{i=1}^{L}\left|(U_i,\psi_j)_{w}^{2}\right|\\
&=\sum_{j=d+1}^l\lambda_j,
\end{aligned}
\end{equation} according to (\ref{projectionin}), we have
\begin{equation}
\begin{aligned}
&\frac{1}{L}\sum_{i=1}^L\left\|~_0D_x^{\frac{\alpha}{2}}\left(U_h^{n_i}-P^dU_h^{n_i}\right)\right\|^2+\frac{1}{L}\sum_{i=1}^L\left\|~_0D_y^{\frac{\beta}{2}}\left(U_h^{n_i}-P^dU_h^{n_i}\right)\right\|^2\\
\leq&C\frac{1}{L}\sum_{i=1}^L\left\|~_0D_x^{\frac{\alpha}{2}}\left(U_h^{n_i}-\sum_{j=1}^{d}\left(U_h^{n_i},\psi_j\right)_w\psi_j\right)\right\|^2\\
&+C\frac{1}{L}\sum_{i=1}^L\left\|~_0D_y^{\frac{\beta}{2}}\left(U_h^{n_i}-\sum_{j=1}^{d}\left(U_h^{n_i},\psi_j\right)_w\psi_j\right)\right\|^2\\
\leq&C\frac{1}{L}\sum_{i=1}^L\left\|U_h^{n_i}-\sum_{j=1}^{d}\left(U_h^{n_i},\psi_j\right)_w\psi_j\right\|_w^2\\
\leq&C\sum_{j=d+1}^l \lambda_j.
\end{aligned}
\end{equation}
The proof of $(\ref{thm:2Dprojection:eq:1})$ is completed. \end{proof}
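The key identity in this proof, that the mean squared $w$-norm truncation error of the snapshots equals the sum of the discarded eigenvalues, is easy to check numerically. A sketch (as an assumption, a generic SPD matrix `W` replaces the fractional inner product $(\cdot,\cdot)_w$; the helper name is ours):

```python
import numpy as np

def pod_truncation_identity(S, W, d):
    """Return both sides of the identity
    (1/L) sum_i ||U_i - sum_{j<=d} (U_i, psi_j)_w psi_j||_w^2 = sum_{j>d} lambda_j,
    for snapshot matrix S (columns U_i) and SPD weight W with (u,v)_w = u^T W v."""
    L = S.shape[1]
    G = (S.T @ W @ S) / L
    lam, V = np.linalg.eigh(G)
    lam, V = lam[::-1], V[:, ::-1]            # descending eigenvalues
    Psi = S @ V / np.sqrt(L * lam)            # full w-orthonormal POD basis
    P = Psi[:, :d]
    resid = S - P @ (P.T @ W @ S)             # snapshots minus their projections
    lhs = np.einsum('ij,ij->', resid, W @ resid) / L
    return lhs, lam[d:].sum()
```

This is the same bookkeeping as (\ref{www1}): the total $w$-energy of the snapshots is $\sum_j\lambda_j$, the retained part is $\sum_{j\leq d}\lambda_j$, and the difference is the truncation error.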
By using $W^d={\rm span}\left\{\psi_1, \psi_2,\cdots, \psi_d\right\}$, we obtain the POD-based reduced FE formulation: Find $u_d^n\in W^d$ such that \begin{equation}\label{eq:2DPODscheme} \left\{ \begin{aligned}
&\left(u^{n}_d,v_d\right)+\tau C_{\alpha}\left(~_{0}D_{x}^{\frac{\alpha}{2}}u^{n}_d,~_{x}D_{1}^{\frac{\alpha}{2}}v_d\right)+\tau C_{\alpha}\left(~_{x}D_{1}^{\frac{\alpha}{2}}u^{n}_d,~_{0}D_{x}^{\frac{\alpha}{2}}v_d\right)\\
&~~~~+\tau C_{\beta}\left(~_{0}D_{y}^{\frac{\beta}{2}}u^{n}_d,~_{y}D_{1}^{\frac{\beta}{2}}v_d\right)+\tau C_{\beta}\left(~_{y}D_{1}^{\frac{\beta}{2}}u^{n}_d,~_{0}D_{y}^{\frac{\beta}{2}}v_d\right)\\
&~~~~=\tau\left(f^n,v_d\right)+\left(u_d^{n-1},v_d\right) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\forall~v_d \in W^{d},\\
&u_d^0=P^du_h^0.
\end{aligned}
\right. \end{equation}
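In matrix terms, the reduced scheme replaces each large linear solve by a $d\times d$ one: with $\Psi$ holding the POD basis as columns, march $(\Psi^T\mathbf{M}\Psi+\tau\Psi^T\mathbf{A}\Psi)\mathbf{c}^n=\Psi^T\mathbf{M}\Psi\,\mathbf{c}^{n-1}+\tau\Psi^T\mathbf{F}^n$ and recover $u_d^n=\Psi\mathbf{c}^n$. A sketch with generic matrices (the actual $\mathbf{M}$, $\mathbf{A}$ come from the fractional bilinear form; this only illustrates the Galerkin projection):

```python
import numpy as np

def reduced_backward_euler(M, A, Psi, u0, F, tau, N):
    """Galerkin-POD reduction of the backward Euler scheme (sketch):
    project onto span(Psi), march d x d systems, lift back to full space."""
    Md, Ad = Psi.T @ M @ Psi, Psi.T @ A @ Psi   # d x d reduced matrices
    c = np.linalg.solve(Md, Psi.T @ M @ u0)     # M-projection of initial data
    for _ in range(N):
        c = np.linalg.solve(Md + tau * Ad, Md @ c + tau * (Psi.T @ F))
    return Psi @ c
```

When $\Psi$ spans the whole FE space the reduced march reproduces the full solution exactly; the savings come from taking $d\ll$ the number of FE degrees of freedom, since the dense fractional stiffness matrix then only enters through the small projected blocks.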
\section{Stability analysis and error estimates for the reduced FE formulation} Now, we perform the numerical stability analysis and provide the error estimates. \begin{theorem}\label{thm2Derror} Let $u_{h}^{n}\in X_{h}$ be the finite element solution of (\ref{eq:2Ddiscretescheme}), and $u_{d}^{n}\in W^{d}$ the solution of the reduced FE formulation (\ref{eq:2DPODscheme}). Then we have \begin{equation}
\left\|u_d^n\right\|^2+ (2\tau)\sum_{i=1}^{n}\left\|u_d^i\right\|^2_w\leq C\tau\sum_{i=1}^{n}\left\|f^i\right\|^2+\left\|u_d^0\right\|^2. \end{equation} If $L=O(N)$ and the snapshots are taken at uniform intervals, then \begin{equation}\label{error1}
\left\|u_{h}^{n}-u_{d}^{n}\right\|\leq M L\left(\sum_{j=d+1}^{l}\lambda_{j}\right)^{1/2}+M\tau. \end{equation} \end{theorem} \begin{proof} Taking $v_d=u_d^n$ in $(\ref{eq:2DPODscheme})$, we get \begin{equation} \begin{aligned}
&\left\|u_d^n\right\|^2+\tau\left\|u_d^n\right\|_w^2\leq\tau\left\|f^n\right\|\left\|u_d^n\right\|+\left\|u_d^{n-1}\right\|\left\|u_d^n\right\|\\
&\leq\frac{1}{2}\left(\tau\left\|f^n\right\|^2+\tau\left\|u_d^n\right\|^2\right)+\frac{1}{2}\left(\left\|u_d^{n-1}\right\|^2+\left\|u_d^n\right\|^2\right), \end{aligned} \end{equation} which leads to \begin{equation}
\left\|u_d^n\right\|^2+ (2\tau)\sum_{i=1}^{n}\left\|u_d^i\right\|^2_w\leq \tau\sum_{i=1}^{n}\left\|f^i\right\|^2+\left\|u_d^0\right\|^2+\tau\sum_{i=1}^{n}\left\|u_d^i\right\|^2. \end{equation} According to the discrete Gr\"onwall inequality, we have \begin{equation}
\left\|u_d^n\right\|^2+ (2\tau)\sum_{i=1}^{n}\left\|u_d^i\right\|^2_w\leq C\tau\sum_{i=1}^{n}\left\|f^i\right\|^2+\left\|u_d^0\right\|^2. \end{equation} By (\ref{eq:2Ddiscretescheme}) and (\ref{eq:2DPODscheme}), there exists \begin{equation}\label{eq:1Derroreq} \begin{aligned}
&\left(u^{n}_h-u^{n}_d,v_d\right)+\tau C_{\alpha}\left(~_{0}D_{x}^{\frac{\alpha}{2}}\left(u^{n}_h-u^{n}_d\right),~_{x}D_{1}^{\frac{\alpha}{2}}v_d\right)\\
&+\tau C_{\alpha}\left(~_{x}D_{1}^{\frac{\alpha}{2}}\left(u^{n}_h-u^{n}_d\right),~_{0}D_{x}^{\frac{\alpha}{2}}v_d\right)+\tau C_{\beta}\left(~_{0}D_{y}^{\frac{\beta}{2}}\left(u^{n}_h-u^{n}_d\right),~_{y}D_{1}^{\frac{\beta}{2}}v_d\right)\\
&+\tau C_{\beta}\left(~_{y}D_{1}^{\frac{\beta}{2}}\left(u^{n}_h-u^{n}_d\right),~_{0}D_{y}^{\frac{\beta}{2}}v_d\right)=\left(u^{n-1}_h-u^{n-1}_d,v_d\right).
\end{aligned} \end{equation} Combining (\ref{projection2}) with (\ref{eq:1Derroreq}), we have \begin{equation} \begin{aligned}
&\left\|P^du_h^n-u_d^n\right\|^2+\tau\left\|~_0D_x^{\frac{\alpha}{2}}(P^du_h^n-u_d^n)\right\|^2+\tau\left\|~_0D_y^{\frac{\beta}{2}}(P^du_h^n-u_d^n)\right\|^2\\ =&\left(P^du_h^n-u_d^n,P^du_h^n-u_d^n\right)\\ &+\tau C_{\alpha}\left(~_0D_x^{\frac{\alpha}{2}}\left(P^du_h^n-u_d^n\right),~_xD_1^{\frac{\alpha}{2}}\left(P^du_h^n-u_d^n\right)\right)\\ &+\tau C_{\alpha}\left(~_xD_1^{\frac{\alpha}{2}}\left(P^du_h^n-u_d^n\right),~_0D_x^{\frac{\alpha}{2}}\left(P^du_h^n-u_d^n\right)\right)\\ &+\tau C_{\beta}\left(~_0D_y^{\frac{\beta}{2}}\left(P^du_h^n-u_d^n\right),~_yD_1^{\frac{\beta}{2}}\left(P^du_h^n-u_d^n\right)\right)\\ &+\tau C_{\beta}\left(~_yD_1^{\frac{\beta}{2}}\left(P^du_h^n-u_d^n\right),~_0D_y^{\frac{\beta}{2}}\left(P^du_h^n-u_d^n\right)\right)\\ =&\left(P^du_h^n-u_h^n,P^du_h^n-u_d^n\right)+\left(u_h^n-u_d^n,P^du_h^n-u_d^n\right)\\ &+\tau C_{\alpha}\left(~_0D_x^{\frac{\alpha}{2}}\left(P^du_h^n-u_h^n\right),~_xD_1^{\frac{\alpha}{2}}\left(P^du_h^n-u_d^n\right)\right)\\ &+\tau C_{\alpha}\left(~_0D_x^{\frac{\alpha}{2}}\left(u_h^n-u_d^n\right),~_xD_1^{\frac{\alpha}{2}}\left(P^du_h^n-u_d^n\right)\right)\\ &+\tau C_{\alpha}\left(~_xD_1^{\frac{\alpha}{2}}\left(P^du_h^n-u_h^n\right),~_0D_x^{\frac{\alpha}{2}}\left(P^du_h^n-u_d^n\right)\right)\\ &+\tau C_{\alpha}\left(~_xD_1^{\frac{\alpha}{2}}\left(u_h^n-u_d^n\right),~_0D_x^{\frac{\alpha}{2}}\left(P^du_h^n-u_d^n\right)\right)\\ &+\tau C_{\beta}\left(~_0D_y^{\frac{\beta}{2}}\left(P^du_h^n-u_h^n\right),~_yD_1^{\frac{\beta}{2}}\left(P^du_h^n-u_d^n\right)\right)\\ &+\tau C_{\beta}\left(~_0D_y^{\frac{\beta}{2}}\left(u_h^n-u_d^n\right),~_yD_1^{\frac{\beta}{2}}\left(P^du_h^n-u_d^n\right)\right)\\ &+\tau C_{\beta}\left(~_yD_1^{\frac{\beta}{2}}\left(P^du_h^n-u_h^n\right),~_0D_y^{\frac{\beta}{2}}\left(P^du_h^n-u_d^n\right)\right)\\ &+\tau C_{\beta}\left(~_yD_1^{\frac{\beta}{2}}\left(u_h^n-u_d^n\right),~_0D_y^{\frac{\beta}{2}}\left(P^du_h^n-u_d^n\right)\right)\\ 
=&\left(P^du_h^n-u_h^n,P^du_h^n-u_d^n\right)+\left(u^{n-1}_h-u^{n-1}_d,P^du_h^n-u_d^n\right). \end{aligned} \end{equation} By the Cauchy-Schwarz inequality, \begin{equation}
\|P^du_h^n-u_d^n\|\leq \|P^du_h^n-u_h^n\|+\|u_h^{n-1}-u_d^{n-1}\|. \end{equation} Further using the triangle inequality, \begin{equation}
\left\|u_{h}^{n}-u_{d}^{n}\right\|\leq \left\|P^du_h^n-u_h^n\right\|+\left\|P^du_h^n-u_d^n\right\|, \end{equation} we get \begin{equation}\label{eq2d}
\left\|u_{h}^{n}-u_{d}^{n}\right\|\leq 2\left\|P^du_h^n-u_h^n\right\|+\left\|u_h^{n-1}-u_d^{n-1}\right\|. \end{equation} Summing (\ref{eq2d}) over $1,2,\cdots,n$, squaring, and using the Cauchy-Schwarz inequality, we get \begin{equation}
\left\|u_{h}^{n}-u_{d}^{n}\right\|^2\leq 4n\sum_{i=1}^{n}\left\|P^du_h^i-u_h^i\right\|^2. \end{equation}
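For completeness, the summing-and-squaring step can be spelled out. Assuming the reduced scheme is initialized with $u_d^0=u_h^0$, summing (\ref{eq2d}) telescopes the last term away, and squaring uses the elementary Cauchy-Schwarz bound:

```latex
\left\|u_h^n-u_d^n\right\| \le 2\sum_{i=1}^{n}\left\|P^d u_h^i-u_h^i\right\|,
\qquad
\Bigl(\sum_{i=1}^{n} a_i\Bigr)^{2} \le n \sum_{i=1}^{n} a_i^{2}
\quad (a_i \ge 0),
```

applied with $a_i = 2\left\|P^d u_h^i-u_h^i\right\|$, which yields the factor $4n$ above.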
For $1\leq n\leq N$, assume $n_i\leq n\leq n_{i+1}\leq N \,(i=1,2,\ldots,L-1)$ with $n_i\leq n\leq\frac{n_i+n_{i+1}}{2}$; expanding $u_h^n$ into a Taylor series about $t_{n_i}$ yields \begin{equation} u_h^n=u_h^{n_i}+\varepsilon_i\tau u_{ht}(\xi_i), ~~~~~~t_{n_i}\leq \xi_i \leq t_n, ~~i=1,2,\cdots,L, \end{equation}
where $\varepsilon_i$ is the step number from $t_{n_i}$ to $t_n$. If snapshots are taken at uniform intervals, then $|\varepsilon_i|\leq \frac{N}{2L}$ and \begin{equation}\label{equ:taylor1}
\left\|u_{h}^{n}-u_{d}^{n}\right\|^2\leq MN\frac{N}{L}\sum_{i=n_1}^{n_i}\left\|P^du_h^i-u_h^i\right\|^2+MN\varepsilon_i^2\tau^2\sum_{i=1}^{n}\left\|P^du_{ht}(\xi_i)-u_{ht}(\xi_i)\right\|^2. \end{equation} For the second term of (\ref{equ:taylor1}), we have the following estimate, \begin{equation}\label{equ:taylor2} \begin{aligned}
&\sum_{i=1}^{n}\|P^du_{ht}(\xi_i)-u_{ht}(\xi_i)\|^2\\
\leq&C\frac{N}{L}\sum_{i=1}^{L-1}\left\|P^d\left(\frac{u^{n_{i+1}}_{h}-u^{n_i}_{h}}{\varepsilon_i\tau}\right)-\frac{u^{n_{i+1}}_{h}-u^{n_i}_{h}}{\varepsilon_i\tau}+O(\tau)\right\|^2\\
\leq&C\frac{N}{L(\varepsilon_i\tau)^2}\sum_{i=1}^{L-1}\left\|P^d\left(u^{n_{i+1}}_{h}-u^{n_i}_{h}\right)-\left(u^{n_{i+1}}_{h}-u^{n_i}_{h}\right)+O(\tau^2)\right\|^2 \end{aligned} \end{equation} According to (\ref{equ:taylor2}) and (\ref{equ:taylor1}), we get \begin{equation}
\begin{aligned}
&\left\|u_{h}^{n}-u_{d}^{n}\right\|^2\\
\leq& MN\frac{N}{L}\sum_{i=n_1}^{n_i}\left\|P^du_h^i-u_h^i\right\|^2+MN\varepsilon_i^2\tau^2\sum_{i=1}^{n}\left\|P^du_{ht}(\xi_i)-u_{ht}(\xi_i)\right\|^2\\
\leq& M\frac{N^2}{L}\sum_{i=n_1}^{n_i}\left\|P^du_h^i-u_h^i\right\|^2+O(\tau^2)
\end{aligned} \end{equation} Taking $L=O(N)$ and using the fractional Poincar\'e inequality together with Theorem \ref{prosp}, we obtain \begin{equation}
\left\|u_{h}^{n}-u_{d}^{n}\right\|\leq M L\left(\sum_{j=d+1}^{l}\lambda_{j}\right)^{1/2}+M\tau. \end{equation} \end{proof} \begin{theorem} Let $u^n$ be the exact solution of Eq. (\ref{equation2D}) and $u_{d}^{n}$ the solution of the reduced formulation (\ref{eq:2DPODscheme}). Then we have \begin{equation}
\left\|u^n-u_{d}^{n}\right\|\leq M L\left(\sum_{j=d+1}^{l}\lambda_{j}\right)^{1/2}+M\tau+M h^{k+1-\gamma}, \end{equation} where $\gamma=\max(\alpha,\beta)$. \end{theorem} \begin{proof} According to \cite{Bu2014}, we have \begin{equation}
\left\|u^n-u_{h}^{n}\right\|\leq C\left(\tau+h^{k+1-\gamma}\right). \end{equation} Combining Theorem \ref{thm2Derror} and the triangle inequality leads to the desired result. \end{proof} \begin{remark} Since the term $L\left(\sum\limits_{j=d+1}^{l}\lambda_{j}\right)^{1/2}$ arises from the model reduction and the error formula, $L$ should not be taken too large. At the same time, the error estimate provides a criterion for deciding the number of POD bases: when $L=O(N)$, the number $d$ of POD bases should satisfy $L\left(\sum\limits_{j=d+1}^{l}\lambda_{j}\right)^{1/2}\leq \max\{\tau, h^{k+1-\gamma}\}$ to obtain the optimal convergence order. \end{remark}
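As an illustration of this basis-selection criterion, the following sketch (the method of snapshots; the function and variable names are illustrative, and `tol` stands for $\max\{\tau,h^{k+1-\gamma}\}$) picks the smallest $d$ with $L\bigl(\sum_{j=d+1}^{l}\lambda_j\bigr)^{1/2}\leq$ `tol`:

```python
import numpy as np

def pod_basis(snapshots, L, tol):
    """Method-of-snapshots POD: choose the smallest d such that
    L * sqrt(lambda_{d+1} + ... + lambda_l) <= tol, mirroring the
    criterion in the remark above. `snapshots` holds one snapshot
    per column."""
    U = np.asarray(snapshots, dtype=float)
    l = U.shape[1]
    K = U.T @ U / l                    # snapshot correlation matrix
    lam, V = np.linalg.eigh(K)         # eigh returns ascending order
    lam, V = lam[::-1], V[:, ::-1]     # reorder to descending
    lam = np.clip(lam, 0.0, None)      # clear tiny negative round-off
    d = l
    for dd in range(l + 1):
        if L * np.sqrt(lam[dd:].sum()) <= tol:
            d = dd
            break
    # Orthonormal POD modes: U v / ||U v|| with ||U v||^2 = l * lambda
    Phi = U @ V[:, :d] / np.sqrt(l * lam[:d])
    return Phi, d, lam
```

This is only a sketch of the truncation rule; the papers' scheme additionally projects the fully discrete fractional FE system onto `Phi`.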
\section{Numerical experiments} In this section, we verify the effectiveness of the algorithm and show the advantage of the reduced POD FE formulation through numerical examples. \begin{example}\label{example1} We take the exact solution of (\ref{equation2D}) to be \begin{equation} u=4\cos(1.5\pi/2)\cos(1.6\pi/2)\exp(-t)\sin(2\pi x)^2\sin(2\pi y)^2.
\end{equation}
We take $\Omega=[0,1]\times[0,1]$, $\alpha=1.5$, $\beta=1.6$, and $T=1$. The source term $f$ can be calculated numerically. First, we partition the domain $\Omega$ into 256 squares of side length $\bigtriangleup x= \bigtriangleup y=1/16$ and then draw the diagonal of each square, all in the same direction, to split it into two triangles; the resulting triangles constitute the triangulation $\{\Im_{h}\}$. The time step size is taken as $\tau=1/256$.
A group of numerical solutions is obtained by the classical finite element method, and then 17 snapshots are chosen at uniform intervals from the 256 transient solutions. Figure \ref{fig:2DT1stsumoflambda} shows how $\sum_{i=d+1}^{17}\lambda_i$ decreases as the number $d$ of POD bases increases; combining this with the theoretical error estimate, only 1 POD basis is needed to reach the requested accuracy. When $T=1$, the usual finite element method needs $16\times 16\times 2=512$ degrees of freedom and about 2.58 seconds of computing time, while the reduced FE formulation needs 1 degree of freedom and about 0.043 seconds, which shows that our method saves memory and computing time effectively. Figures \ref{fig:2DT1stPODsol} and \ref{fig:2DT1streal} depict the POD solution and the real solution when $T=1$, respectively; the POD solution is visually the same as the real solution. The numerical results confirm the theoretical analysis.
\begin{figure}
\caption{Sum of $\lambda_i$}
\label{fig:2DT1stsumoflambda}
\end{figure}
\begin{figure}
\caption{POD solution when $T=1$}
\label{fig:2DT1stPODsol}
\end{figure}
\begin{figure}
\caption{Real solution when $T=1$}
\label{fig:2DT1streal}
\end{figure} \end{example} Next, we consider an exact solution whose shape changes sharply as time evolves. Such problems are usually simulated with the adaptive finite element method; here we show that the reduced FE method also handles them well.
\begin{example} The exact solution of (\ref{equation2D}) is taken as \begin{equation} u=4\times 10^3\cos(1.5\pi/2)\cos(1.6\pi/2)\exp\left(-\frac{(x-t)^2+(y-t)^2}{0.04}\right)x^2(x-1)^2y^2(1-y)^2.
\end{equation}
Here, we take $\Omega=[0,1]\times[0,1]$, $\alpha=1.5 $, $\beta=1.6$, and $T=1$. The mesh is generated in the same way as in Example \ref{example1} and the time step size is $1/256$. The source term can be calculated numerically.
The finite element solutions are obtained with $h=1/16$ and $\tau=1/256$. We choose 34 of the 256 transient solutions, roughly one of every 7 values, as the set of snapshots. Figure \ref{fig:2DT1time} presents the CPU time of the general FE scheme and the reduced FE system; the computation time is reduced significantly by using the reduced system. Figure \ref{fig:2DT1error} shows the error of the POD solution for different numbers $d$ of POD bases; the error decreases as $d$ increases. Figure \ref{fig:2DT1sumoflambda} shows the trend of $\sum_{i=d+1}^{34}\lambda_i$ as $d$ increases, consistent with the theoretical estimate. Furthermore, as the error estimate predicts, combining Figures \ref{fig:2DT1error} and \ref{fig:2DT1sumoflambda} one can see that the error depends on $\sum_{i=d+1}^{34}\lambda_i$.
Figures \ref{fig:2DT1PODSol} and \ref{fig:2DT1real} depict the POD solution and the real solution graphically when $T=1$; from them, it can be noted that the POD solution is visually the same as the real solution, which shows the effectiveness of the provided algorithm. \begin{figure}
\caption{Used CPU Time}
\label{fig:2DT1time}
\end{figure}
\begin{figure}
\caption{Errors}
\label{fig:2DT1error}
\end{figure}
\begin{figure}
\caption{Sum of $\lambda$}
\label{fig:2DT1sumoflambda}
\end{figure}
\begin{figure}
\caption{POD solution when $T=1$}
\label{fig:2DT1PODSol}
\end{figure} \end{example}
\begin{figure}
\caption{Real solution when $T=1$}
\label{fig:2DT1real}
\end{figure}
\section{Conclusion}
This paper provides a basic framework for solving space FPDEs by a reduced FE model. The basic strategy for choosing the reduced basis is provided. Detailed numerical stability analysis and error estimates are presented for the reduced model. To show the effectiveness of the reduced model in alleviating the computational load, saving memory, and keeping accuracy, extensive numerical experiments are performed, which also confirm the theoretical results.
In future work, we will use the reduced FE method to solve large-scale models to reduce the memory storage requirements.
\section*{Acknowledgments}
This work was supported by the National Natural Science Foundation of China under Grant No. 11671182, and the Fundamental Research Funds for the Central Universities under Grant No. lzujbky-2017-ot10.
\end{document}
\begin{document}
\title{\bf Range Quantile Queries:\\ Another Virtue of Wavelet Trees
\thanks{This work was supported by the Sofja Kovalevskaja Award from the Alexander von Humboldt Foundation and the German Federal Ministry of Education and Research and by the Australian Research Council.} }
\author{ Travis Gagie\inst{1} \and Simon J. Puglisi\inst{2}\thanks{Corresponding Author.} \and Andrew Turpin\inst{2} }
\institute{
Research Group for Combinatorial Algorithms in Bioinformatics,\\
Bielefeld University, Germany\\
\email{travis.gagie@gmail.com}\\[1ex]
\and
School of Computer Science and Information Technology,\\
Royal Melbourne Institute of Technology, Australia\\
\email{\{simon.puglisi,andrew.turpin\}@rmit.edu.au} }
\maketitle \thispagestyle{empty}
\begin{abstract} We show how to use a balanced wavelet tree as a data structure that stores a list of numbers and supports efficient {\em range quantile queries}. A range quantile query takes a rank and the endpoints of a sublist and returns the number with that rank in that sublist. For example, if the rank is half the sublist's length, then the query returns the sublist's median. We also show how these queries can be used to support space-efficient {\em coloured range reporting} and {\em document listing}. \end{abstract}
\section{Introduction}
If we are given a list of the closing prices of a stock for the past $n$ days and asked to find the $k$th lowest price, then we can do so in $\Oh{n}$ time~\cite{BFP+73}. We can also preprocess the list in $\Oh{n \log n}$ time and store it in $\Oh{n}$ words such that, given $k$ later, we can find the answer in $\Oh{1}$ time: we simply sort the list. However, we might also later face {\em range quantile queries}, which have the form ``what was the $k$th lowest price in the interval between the $\ell$th and the $r$th days?''. Of course, we could precompute the answers to all such queries, but storing them would take \(\Omega (n^3 \log n)\) bits of space. In this paper we show how to use a balanced wavelet tree to store the list in $\Oh{n}$ words such that we can answer range quantile queries in $\Oh{\log \sigma}$ time, where $\sigma$ is the number of distinct items in the entire list.
We can generalize our result to any constant number of dimensions but, currently, only by using slightly super-linear space.
We know of no previous work on quantile queries\footnote{Henceforth, for brevity, we will use ``quantile query'' to mean ``range quantile query'', and similarly with other types of range queries.}, but several authors have written about {\em range median queries}, the special case in which $k$ is half the length of the interval between $\ell$ and $r$. Krizanc, Morin and Smid~\cite{KMS05} introduced the problem of preprocessing for median queries and gave four solutions, three of which have worse bounds than using a balanced wavelet tree; their fourth solution involves storing $\Oh{n^2 \log \log n / \log n}$ words to answer queries in $\Oh{1}$ time. Bose, Kranakis, Morin and Tang~\cite{BKMT05} then considered approximate queries, and Har-Peled and Muthukrishnan~\cite{HM08} and Gfeller and Sanders~\cite{GS??} considered batched queries. Recently, Krizanc {\em et al.}'s fourth solution was superseded by one due to Petersen and Grabowski~\cite{Pet08,PG09}, who reduced the space bound to $\Oh{n^2 (\log \log n)^2 / \log^2 n}$ words. Table~\ref{tab:prev} shows the bounds for Krizanc {\em et al.}'s first three solutions, for Petersen and Grabowski's solution, and for using a balanced wavelet tree.
Har-Peled and Muthukrishnan~\cite{HM08} describe applications of median queries to the analysis of Web advertising logs. In the final section of this paper we show that our solution for quantile queries can be used to support {\em coloured range reporting}, that is, to enumerate the distinct items in a sublist. This result immediately improves V{\"a}lim{\"a}ki and M{\"a}kinen's recent space-efficient solution to the {\em document listing problem}~\cite{m2002,vm2007}.
In the full version of this paper we will also discuss how to use a wavelet tree to answer range counting queries (see~\cite{MN07}), coloured range counting queries (returning the number of distinct elements in a range without enumerating them), and how to support updates at the cost of slowing queries down to take time proportional to the logarithm of the largest number allowed.
\begin{table}[t]
\begin{center} \caption{Bounds for range median queries.} \label{tab:prev}
\begin{tabular}{l|@{\hspace{2ex}}l@{\hspace{2ex}}l@{\hspace{2ex}}l} & space (words) & time & restriction\\ \hline\\[-2ex] Krizanc {\em et al.}~\cite{KMS05} & $\Oh{n}$ & $\Oh{n^\epsilon}$ & \(\epsilon > 0\)\\[.5ex] Krizanc {\em et al.}~\cite{KMS05} & $\Oh{n \log_b n}$ & $\Oh{b \log^2 n / \log b}$ & \(2 \leq b \leq n\)\\[.5ex] Krizanc {\em et al.}~\cite{KMS05} & $\Oh{n \log^2 n / \log \log n}$ & $\Oh{\log n}$ &\\[.5ex] Petersen and &&&\\[-1.75ex]
& $\Oh{n^2 (\log \log n)^2 / \log^2 n}$ & $\Oh{1}$ &\\[-1.75ex] Grabowski~\cite{PG09} &&&\\[.5ex] Theorem~\ref{thrm-quantile} & $\Oh{n}$ & $\Oh{\log n}$ & \end{tabular} \end{center}
\end{table}
\section{Wavelet Trees}
Grossi, Gupta and Vitter~\cite{GGV03} introduced wavelet trees for use in data compression, and Ferragina, Giancarlo and Manzini~\cite{FGM06} showed they have myriad virtues in this respect. Wavelet trees are also important for compressed full-text indexing~\cite{nm2007}. As we shall see, there is yet more to this intriguing data structure.
A wavelet tree $T$ for a sequence $s$ of length $n$ is an ordered, strictly binary tree whose leaves are labelled with the distinct elements in $s$ in order from left to right and whose internal nodes store binary strings. The binary string at the root contains $n$ bits and each is set to 0 or 1 depending on whether the corresponding character of $s$ is the label of a leaf in $T$'s left or right subtree. For each internal node $v$ of $T$, the subtree $T_v$ rooted at $v$ is itself a wavelet tree for the {\em subsequence} of $s$ consisting of the occurrences of its leaves' labels. For example, if \(s = \mathsf{a, b, r, a, c, a, d, a, b, r, a}\) and the leaves in $T$'s left subtree are labelled {\sf a}, {\sf b} and {\sf c}, then the root stores \(00100010010\), the left subtree is a wavelet tree for {\sf abacaaba} and the right subtree is a wavelet tree for {\sf rdr}.
The important properties of the wavelet tree for our purposes are summarized in the following lemma. \begin{thrm}[Grossi et al.~\cite{GGV03}] \label{thrm-wave-preprocess} The wavelet tree $T$ for a list of $n$ elements on an alphabet of size $\sigma$ requires $n\log\sigma(1 + o(1))$ bits of space, and can be constructed in $O(n\log \sigma)$ time. \end{thrm}
To see why the space bound is true, consider that the binary strings' total length is the sum over the distinct elements of their frequencies times their depths, which is $\Oh{n \log \sigma}$ bits. The construction time bound is easy to see from the recursive description of the wavelet tree given above.
We note as an aside that, while investigating data structures that support rank and select queries, M\"{a}kinen and Navarro~\cite{MN07} pointed out a connection between wavelet trees and a data structure due to Chazelle~\cite{Cha88} for two-dimensional range searching on sets of points.
\section{Range Quantile Queries}
We now describe how the wavelet tree can be used to answer quantile queries. Let $s$ be the list of $n$ numbers we want to query. We build and store the wavelet tree $T$ for $s$ and, at each internal node $v$, we store a small data structure that lets us perform $\Oh{1}$-time rank queries on $v$'s binary string. A rank query on a binary string takes a position and returns the number of 1s in the prefix that ends at that position. Jacobson~\cite{Jac89} and later Clark~\cite{c1996} showed we can support $\Oh{1}$-time rank queries on a binary string with a data structure that uses a sublinear number of extra bits, beyond those needed to store the string itself. It follows that the size of this preprocessed wavelet tree remains $\Oh{n\log \sigma}$ bits.
Given $k$, $\ell$ and $r$ and asked to find the $k$th smallest number in \(s [\ell..r]\), we start at the root of $T$ and consider its binary string $b$. We use the two rank queries $\rank{b}{\ell - 1}$ and $\rank{b}{r}$ to find the numbers of 0s and 1s in \(b [1..\ell - 1]\) and \(b [\ell..r]\). If there are at least $k$ copies of 0 in \(b [\ell..r]\), then our target is a label on one of the leaves in $T$'s left subtree, so we set $\ell$ to one more than the number of 0s in \(b [1..\ell - 1]\), set $r$ to the number of 0s in \(b [1..r]\), and recurse on the left subtree. Otherwise, our target is a label on one of the leaves in $T$'s right subtree, so we subtract from $k$ the number of 0s in \(b [\ell..r]\), set $\ell$ to one more than the number of 1s in \(b [1..\ell - 1]\), set $r$ to the number of 1s in \(b [1..r]\), and recurse on the right subtree. When we reach a leaf, we return its label. An example is given in Figure~\ref{fig:example}. Since $T$ is balanced and we spend constant time at each node as we descend (using the rank structures), our search takes $\Oh{\log \sigma}$ time. Thus, together with Theorem~\ref{thrm-wave-preprocess} we have the following.
\begin{figure}
\caption{A wavelet tree $T$ (left) for \(s = 6, 2, 0, 7, 9, 3, 1, 8, 5, 4\), and the values (right) the variables $k$, $\ell$ and $r$ take on as we search for the 5th smallest element in \(s [3..9]\). The dashed boxes in $T$ show the ranges from which we recursively select.}
\label{fig:example}
\end{figure}
\begin{thrm} \label{thrm-quantile} There exists a data structure of size $\Oh{n\log \sigma}$ bits which can be built in $\Oh{n\log\sigma}$ time that answers range quantile queries on $s[1..n]$ in $\Oh{\log \sigma}$ time. \end{thrm}
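The descent described above can be sketched as follows. This is only an illustrative implementation: plain Python prefix-count arrays stand in for the $o(n)$-bit rank structures, and splitting the value range plays the role of the leaf ordering, so the space bound of the theorem is not matched.

```python
class WaveletTree:
    """Balanced wavelet tree over a list of integers, supporting range
    quantile queries with O(log sigma) node visits."""
    def __init__(self, s, lo=None, hi=None):
        if lo is None:
            lo, hi = min(s), max(s)
        self.lo, self.hi = lo, hi
        if lo == hi or not s:
            self.left = self.right = None   # leaf (or empty subtree)
            return
        mid = (lo + hi) // 2                # labels <= mid go left
        # prefix[i] = number of 1s (right-going symbols) in s[:i]
        self.prefix = [0]
        for x in s:
            self.prefix.append(self.prefix[-1] + (x > mid))
        self.left = WaveletTree([x for x in s if x <= mid], lo, mid)
        self.right = WaveletTree([x for x in s if x > mid], mid + 1, hi)

    def quantile(self, k, l, r):
        """Return the k-th smallest (1-based) element of s[l..r]
        (0-based, inclusive)."""
        if self.left is None:
            return self.lo
        ones_before = self.prefix[l]
        ones_total = self.prefix[r + 1]
        zeros_in = (r - l + 1) - (ones_total - ones_before)
        if k <= zeros_in:       # target lies in the left subtree
            return self.left.quantile(k, l - ones_before, r - ones_total)
        return self.right.quantile(k - zeros_in, ones_before, ones_total - 1)
```

On the example of Figure~\ref{fig:example}, searching for the 5th smallest element of $s[3..9]$ (0-based $s[2..8]$) returns 7.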
Some comments on $\sigma$ are in order at this point. Firstly, and obviously, if $\sigma$ is constant, then so is our query time.
If we represent the binary strings at each level of the wavelet tree with a more complicated rank/select data structure of Raman et al.~\cite{rrr2002} (instead of Clark~\cite{c1996}, see~\cite{GGV03,MN07}), the size of the wavelet tree is reduced to $nH_0(s) + \Oh{n\log\log n/\log_{\sigma} n}$ bits without affecting the query time, where $H_0(s)$ is the zeroth order entropy of $s$. Prior solutions for median queries do not make such {\em opportunistic} use of space.
At the other extreme, if $\sigma$ is $\Omega(n)$ we can map the symbols in $s$ to the range $[1..n]$, by first sorting the items in $\Oh{n\log n}$ time, and storing the mapping in $\Oh{n\log\sigma}$ bits of space. Preprocessing the array this way, and then using the wavelet tree approach above, allows us to match the $\Omega(n\log n)$ time lower bound for median queries~\cite{KMS05}, when the number of queries is $\Oh{n}$. This lower bound applies to any computational model which has an $\Omega(n\log n)$ time lower bound on sorting $s$. Still, the solution is not completely satisfying, and we leave an open question: Does an $\Oh{n\log n}$ preprocessing algorithm exist that allows quantile (or even just median) queries to be answered in $o(\log n)$ time when $\sigma$ is $\Omega(n)$?
It is not difficult to generalize Theorem~\ref{thrm-quantile} to any constant number of dimensions, using slightly super-linear space. Suppose we are given a multidimensional array $A$ of total size $N$. We build a balanced binary search tree on the $\sigma'$ distinct elements in $A$ and, at each node $v$, we store a binary array of size $N$ with 1s indicating the positions of occurrences of elements in $v$'s subtree. We store each binary array in a folklore data structure (see, e.g.,~\cite[Lemma 2]{KN09}) that supports multidimensional range counting in $\Oh{1}$ time using $\Oh{m N^\epsilon}$ bits, where $m$ is the number of 1s and $\epsilon$ is any positive constant; thus, we use a total of $\Oh{N^{1 + \epsilon} \log \sigma'}$ bits. To find the $k$th smallest number in a given range in $A$, we start at the root of the tree and use a range counting query to find the numbers of 0s and 1s in the same range of the binary array stored there. If there are at least $k$ copies of 0 in the range, then we recurse on the left subtree; otherwise, we subtract the number of 0s from $k$ and recurse on the right subtree. Since we use a single range counting query at each node as we descend, we use a total of $\Oh{\log \sigma'}$ time.
\begin{theorem} \label{thm:multidimensional} For any constants $d$ and \(\epsilon > 0\), there exists a data structure of size $\Oh{N^{1 + \epsilon} \log \sigma'}$ bits that answers $d$-dimensional range quantile queries on $A$ in $\Oh{\log \sigma'}$ time. \end{theorem}
\section{Application to Space Efficient Document Listing}
The algorithm for quantile queries just described can, when coupled with another wavelet tree property, be used to enumerate the $d$ distinct items in a given sublist $s[\ell..r]$ in $\Oh{d\log\sigma}$ time as follows. Let $c_1, c_2, \ldots, c_d$ be the distinct elements in $s[\ell..r]$ and, without loss of generality, assume $c_1 < c_2 < \ldots < c_d$. Further, let $m_i$, $i \in 1..d$ be the number of times $c_i$ occurs in $s[\ell..r]$. To enumerate the $c_i$, we begin by finding $c_1$, which can be achieved in $\Oh{\log\sigma}$ time via a quantile query, as $c_1$ must be the element with rank~$1$ in $s[\ell..r]$. Observe now that $c_2$ must be the element in the range with rank $m_1+1$, and in general $c_i$ is the element with rank $1+\sum_{j=1}^{i-1}{m_{j}}$. Fortunately, each $m_i$ can be determined in $\Oh{\log\sigma}$ time by exploiting a well known property of wavelet trees, namely, their ability to return, in $\Oh{\log\sigma}$ time, the number of occurrences of a symbol in a prefix of $s$~(see \cite{GGV03}). Each $m_i$ is the difference of two such queries.
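The rank-jumping enumeration can be sketched as follows. The naive `sorted`/`count` helpers stand in for the wavelet-tree quantile and prefix-occurrence queries, so only the control flow, not the $\Oh{d\log\sigma}$ time bound, matches the description above.

```python
def list_distinct(s, l, r):
    """Enumerate the distinct elements of s[l..r] (0-based, inclusive)
    in increasing order by jumping ranks: after reporting c_i with
    multiplicity m_i, the next distinct element has rank
    1 + m_1 + ... + m_i."""
    sub = s[l:r + 1]
    out, rank = [], 1
    while rank <= len(sub):
        c = sorted(sub)[rank - 1]   # stands in for a quantile query
        m = sub.count(c)            # difference of two prefix-occurrence queries
        out.append(c)
        rank += m
    return out
```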
The {\em document listing problem}~\cite{m2002} is a variation on the classical pattern matching problem. Instead of returning all the positions at which a pattern $P$ occurs in the text $T$, we consider $T$ as a collection of $k$ documents (concatenated) and our task is to return the set of documents in which $P$ occurs.
Muthukrishnan~\cite{m2002}, who first considered the problem, gave an $\Oh{n\log n}$
bit data structure (essentially a heavily preprocessed suffix tree) that lists documents in optimal $\Oh{|P| + \mbox{\rm {\em ndoc}}}$ time, where $\mbox{\rm {\em ndoc}}$ is the number of documents containing $P$. Recently, V{\"a}lim{\"a}ki and M{\"a}kinen~\cite{vm2007} used more modern compressed and succinct data structures to reduce the space requirements of Muthukrishnan's approach at the cost of slightly increasing search to $\Oh{|P| + \mbox{\rm {\em ndoc}} \log k}$ time. Their data structure consists of three pieces: the {\em compressed suffix array} (CSA) of $T$; a wavelet tree built on an auxiliary array, $E$ (described shortly); and a succinct range minimum query data structure~\cite{f2007}.
Central to both Muthukrishnan's and V{\"a}lim{\"a}ki and M{\"a}kinen's solutions is the so-called ``document array'' $E[1..n]$, which is parallel to the suffix array $\mbox{\rm SA}[1..n]$: $E[i]$ is the document in which suffix $\mbox{\rm SA}[i]$ begins. Given an interval $\mbox{\rm SA}[i..j]$ where all the occurrences of a pattern lie, the document listing problem then reduces to enumerating the distinct items in $E[i..j]$. Without getting into too many details, V{\"a}lim{\"a}ki and M{\"a}kinen use the {\em compressed suffix array} (CSA) of $T$ to find the relevant sublist of $E$ in
$\Oh{|P|}$ time, and then a combination of $E$'s wavelet tree and a range minimum query data structure~\cite{f2007} to enumerate the distinct items in that sublist in $\Oh{\mbox{\rm {\em ndoc}} \log k}$ time. However, as we have described above, the wavelet tree of $E$ alone is sufficient to solve this problem in the same $\Oh{\mbox{\rm {\em ndoc}} \log k}$ time bound. In practice we may expect this new approach to be faster, as the avoidance of the minimum queries should reduce CPU cache misses. Also, because the wavelet tree of $E$ is already present in~\cite{vm2007} we have reduced the size of their data structure by $2n + o(n)$ bits, the size of the data structure for minimum queries.
\end{document}
\begin{document}
\pagestyle{plain}
\title {The Morava $K$-theory of $BO(q)$ and $MO(q)$} \author{Nitu Kitchloo} \author{W. Stephen Wilson} \address{Department of Mathematics, Johns Hopkins University, Baltimore, USA} \email{nitu@math.jhu.edu} \email{wsw@math.jhu.edu} \thanks{Nitu Kitchloo is supported in part by NSF through grant DMS
1307875.}
\date{\today}
{\abstract We give an easy proof that the Morava $K$-theories of $BO(q)$ and $MO(q)$ are in even degrees. Although this is a known result, it previously followed from a difficult proof that $BP^*(BO(q))$ is Landweber flat; Landweber flatness in turn follows from the even Morava $K$-theory. We go further and give an explicit description of $K(n)_*(BO(q))$ and $K(n)_*(MO(q))$ and reconcile it with the purely algebraic construct arising from Landweber flatness. }
\maketitle
\section{Introduction}
We are concerned with the (co)homology theory, Morava $K$-theory, $K(n)^*(-)$, where $K(n)_* = \mathbb Z/2 [v_n^{\pm 1}]$ with the degree of $v_n$ equal to $2(2^n - 1 ) $ (we are only concerned with $p=2$).
What brought us to the problem of computing the Morava $K$-theories of the spaces $BO(q)$ was a real need to have $BP^*(BO(q))$ be Landweber flat (in the sense of \cite{Land:Hom}) for \cite{Kitch-Wil-BO}. $BP^*(BO(q))$ had been computed in \cite{WSW:BO} and was shown to be Landweber flat in \cite{KY}, with some seriously complex computations. Kono and Yagita went on to show that $K(n)^*(BO(q))$ was concentrated in even degrees because $BP^*(BO(q))$ was.
The computation in \cite{KY} does not give an explicit answer to what $K(n)^*(BO(q))$ is, only that it is even degree. If it is known that $K(n)^*(BO(q))$ is even degree for all $n$, then the results of \cite{RWY} show that $BP^*(BO(q))$ is Landweber flat, without having to compute it.
We present here an easy proof that $K(n)_*(BO(q))$ is even degree and then go further and give a basis. Duality for Morava $K$-theory is straightforward, so $K(n)^*(BO(q))$ is also even degree.
\begin{thm}[\cite{KY}] \label{even}
\
\begin{enumerate}[(i)] \item $K(n)^*(BO(q))$ and $K(n)^*(MO(q))$ are even degree for all $n$. \item $BP^*(BO(q))$ is Landweber flat. \end{enumerate} \end{thm}
As mentioned, $(ii)$ follows directly from $(i)$ using \cite{RWY} but Kono and Yagita prove $(ii)$ first and then $(i)$. $(i)$ will be proven in Section 3.
We work with the homology version of the theories and have:
\begin{thm} \label{detail} \ \begin{enumerate}[(i)] \item There are elements $b_{2i} \in K(n)_{2i}(BO(1))$ for $0 < i < 2^n$ coming from $K(n)_{2i}(RP^\infty)$. \item There are elements $c_{4i} \in K(n)_{4i}(BO(2))$ for $2^n \le i $. \item Using products from the standard maps $BO(i)\times BO(j) \rightarrow BO(i+j)$, a basis for the reduced homology, $\widetilde{K(n)}_*(BO(q))$, is: $$
\{b_{2i_1}b_{2i_2}\ldots b_{2i_k} c_{4j_1}c_{4j_2}\ldots c_{4j_m}\} \quad 0 < k+2m \le q. $$ $$ 0 < i_1 \le i_2 \le \ldots \le i_k < 2^n \le j_1 \le j_2 \le \ldots \le j_m $$ \item $\widetilde{K(n)}_*(MO(q))$ is as above with $k+2m = q$.
\end{enumerate} \end{thm}
In \cite{WSW:BO}, it was shown that \begin{equation} \label{alg} BP^*(BO(q)) \simeq BP^*[[c_1,c_2,\ldots,c_q]]/(c_1 - c_1^*, c_2 - c_2^*, \ldots,c_q - c_q^*), \end{equation} where $c_j$ is the Conner-Floyd Chern class and $c_j^*$ is its complex conjugate. In \cite{KY}, Kono and Yagita show that $BP^*(BO(q))$ is Landweber flat and that \begin{equation} K(n)^*(BO(q)) \simeq K(n)^* \widehat{\otimes}_{BP^*} BP^*(BO(q)). \end{equation} This shows that the Morava $K$-theory is even degree. We have computed Morava $K$-theory directly to show it is even degree, so the results of \cite{RWY} also give us Landweber flatness for $BP^*(BO(q))$. Either approach gives us: \begin{equation} \label{cohomology} K(n)^*(BO(q)) \simeq K(n)^*[[c_1,c_2,\ldots,c_q]]/(c_1 - c_1^*, c_2 - c_2^*, \ldots,c_q - c_q^*). \end{equation}
This is a purely algebraic construct that looks nothing like the answer given in this paper. In Section 5 we reconcile it with our direct computation of $K(n)_*(BO(q))$ by finding a basis for it that is consistent with what we find for $K(n)_*(BO(q))$.
We review some facts about the standard homology of $BO(q)$ in Section 2 and prove the details of Theorem \ref{detail} in Section 4.
\section{The standard homology of $BO(q)$ and $MO(q)$}
We begin with some review of basic facts about the homology of $BO$ and $BO(n)$. All of our (co)homology will be with $\mathbb Z/2$ coefficients. We start with elements $$ b_i \in \tilde{H}_i(RP^\infty = BO(1)) \quad i > 0. $$ We have $$ \tilde{H}_*(BO(1)) = \mathbb Z/2\{b_i:i > 0\} $$ and maps $$ BO(1) \rightarrow \cdots \rightarrow BO(q-1) \rightarrow BO(q) \rightarrow \cdots \rightarrow BO. $$ The image of the above $b_i$ in $H_*(BO)$ gives us the well-known homology of $BO$ as a polynomial algebra: $$ H_*(BO) = \mathbb Z/2[b_1,b_2,\ldots]. $$
We also have the usual maps \begin{equation} \label{product} BO(q) \times BO(k) \longrightarrow BO(q+k). \end{equation} For homology we only need $$ \prod^q BO(1) \longrightarrow BO(q). $$ Because $b_i b_j = b_j b_i$, we have elements: $$ b_{i_1} b_{i_2} \cdots b_{i_k} \in \tilde{H}_*(BO(q)) \quad \mbox{for} \quad 0 < k \le q \quad \mbox{and} \quad 0 < i_1 \le i_2 \le \cdots \le i_k. $$ These elements form a basis for the reduced homology of $BO(q)$.
As an aside, if that is not commonly understood, we can quickly use the better known cohomology of $BO(q)$ to see that the size is right. We have $$ H^*(BO(q)) = \mathbb Z/2[w_1,w_2,\ldots,w_q], $$ a polynomial algebra on the Stiefel-Whitney classes. If, by induction, we know $H_*(BO(q-1))$, all we have to do to see the size is right is show that the elements with $k = q$ above are in one-to-one correspondence with the ideal generated by $w_q \in H^*(BO(q))$. That correspondence is easily given by $$ 0 < i_1 \le i_2 \le \cdots \le i_q \quad \mbox{ goes to } \quad w_q^{i_1} w_{q-1}^{i_2 - i_1} w_{q-2}^{i_3-i_2} \cdots w_1^{i_q - i_{q-1}}. $$
The Steenrod algebra operates on the mod 2 homology of $BO$ and $BO(q)$. As an element of the Steenrod algebra operates on an element $ b_{i_1} b_{i_2} \cdots b_{i_k} $, it does not alter the number of $b$'s, so we can define: $$ M_q = \mathbb Z/2\{ b_{i_1} b_{i_2} \cdots b_{i_q} \} \quad \mbox{for} \quad 0 < i_1 \le i_2 \le \cdots \le i_q $$ and we get the reduced homology \begin{equation} \label{equ1} \tilde{H}_*(BO(q)) = \bigoplus_{j=1}^q M_j \end{equation} and \begin{equation} \label{equ2} \tilde{H}_*(BO) = \bigoplus_{j=1}^\infty M_j \end{equation} as modules over the Steenrod algebra.
From \cite{MitPrid} we know that stably $BO(q) \simeq \Wedge_{1\le i \le q} MO(i)$, so, stably, $BO(q) \simeq BO(q-1) \Wedge MO(q)$. From this we see that $M_q = H_*(MO(q))$.
\section{The Morava $K$-theories of $BO(q)$ and $MO(q)$ are even}
The first differential in the Atiyah-Hirzebruch spectral sequence (AHSS), $H_*(X;K(n)_*)$, is just the Milnor primitive, $Q_n$, which is easy to evaluate in $H_*(BO(1))$ as it just takes $b_{2k}$ to $b_{2k+1-2^{n+1}}$, as long as $2k > 2^{n+1}-1$.
\begin{remark}
After the first differential, the AHSS for $K(n)_*(BO(1))$ collapses because what remains is concentrated in even degrees. The reduced homology is $K(n)_*$-free on $\{b_2,b_4, \ldots, b_{2^{n+1}-2} \}$.
\end{remark}
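This $Q_n$-homology computation is simple enough to machine-check. The sketch below (our illustration, with the module truncated at a top degree to keep things finite) recovers exactly $b_2, b_4, \ldots, b_{2^{n+1}-2}$ away from the truncation edge:

```python
def qn_survivors(n, max_deg):
    """Q_n-homology of Z/2{b_m : 0 < m <= max_deg}, where Q_n b_{2k} =
    b_{2k+1-2^{n+1}} (zero if the index is non-positive) and Q_n b_odd = 0.
    Degrees within `shift` of the truncation are discarded to avoid
    edge effects."""
    shift = 2**(n + 1) - 1
    gens = range(1, max_deg + 1)
    d = {m: m - shift for m in gens if m % 2 == 0 and m - shift > 0}
    cycles = [m for m in gens if m not in d]       # kernel of the differential
    boundaries = set(d.values())                   # image of the differential
    return [m for m in cycles
            if m not in boundaries and m <= max_deg - shift]
```

For $n = 2$ and a truncation at degree $40$ this returns $[2, 4, 6]$, i.e. $b_2, b_4, b_6 = b_2, \ldots, b_{2^{3}-2}$, as in the remark.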
\begin{remark}
More interesting is that for $BO$ we are also done after the first differential, obtaining the polynomial result from the AHSS: $$ K(n)_*(BO) \simeq K(n)_*[b_2,b_4,\ldots,b_{2^{n+1}-2}]\otimes K(n)_*[b_{2i}^2 : i \ge 2^n ], $$ which was done in \cite{RWY}. The differential, or as we prefer to say, the $Q_n$ homology, is computed by pairing up what is missing above as $$ P(b_{2i+1})\otimes E(b_{2i+2^{n+1}}). $$ Each of these has trivial $Q_n$ homology. The AHSS collapses after this first differential because what remains is in even degrees. Since $b_{2i}$, $i \ge 2^n$, is not itself an element of $K(n)_*(BO)$, the notation $b_{2i}^2$ is misleading. Later, we will give this generator the name $c_{4i}$. The element exists in $k(n)_*(BO)$ and reduces to $b_{2i}^2$ in $H_*(BO)$.
\end{remark}
\begin{proof}[Proof of Theorem \ref{even}] Now we know that the first differential of the AHSS is all it takes to get $K(n)_*(BO)$ and see that it is all in even degrees. The first differential is just an operation from the Steenrod algebra, $Q_n$. By Equation \eqref{equ2}, we must have the $Q_n$ homology of each $M_j$ in even degrees. From this we see that $K(n)_*(BO(q))$ and $K(n)_*(MO(q))$ must be in even degrees, and by standard Morava $K$-theory duality, $K(n)^*(BO(q))$ is in even degrees. This completes the proof of Theorem \ref{even}. \end{proof}
\section{The details of the Morava $K$-theories of $BO(q)$ and $MO(q)$ }
All of the homology of $BO(q)$ came from products of elements from $BO(1)$. For Morava $K$-theory we have to use elements from $BO(2)$ as well.
Two kinds of elements in $K(n)_*(BO(2))$ come from $K(n)_*(BO(1))$. First we have the image coming from the map $BO(1) \rightarrow BO(2)$, i.e.
$K(n)_*\{b_2,b_4, \ldots, b_{2^{n+1}-2} \}$. Our second kind comes from the product, $BO(1)\times BO(1) \rightarrow BO(2)$, which gives: $$ K(n)_*\{b_{2i_1}b_{2i_2}\} \quad 0 < i_1 \le i_2 < 2^n. $$ There are more elements that come from $M_2$ in $K(n)_*(BO(2))$. In particular, from the computation of $K(n)_*(BO)$ we know that all $b_{2j}^2$ survive. These elements live in $M_2$ so actually survive to $K(n)_*(BO(2))$. Consequently, between $K(n)_*(BO(1))$ and $K(n)_*(BO(2))$, we have all the multiplicative generators of $K(n)_*(BO)$. We easily see which $M_q$ these multiple products live in by the number of $b$'s.
We can now pretty much read off the description of a basis for $K(n)_*(BO(q))$. To make the description a little easier to read, we can consider the part that comes from $M_q$ and call it $M_q^K = K(n)_*(MO(q))$. Then we have: $$ K(n)_*(BO(q)) \simeq K(n)_*(BO(q-1)) \oplus K(n)_*(MO(q)). $$
We are not using the splitting from \cite{MitPrid} to compute $K(n)_*(BO(q))$, only to compute $K(n)_*(MO(q))$.
Let's give new names to the elements in $K(n)_*(BO(2))$ represented by $b_{2j}^2$ so we won't have the non-existent product hanging around. Let's set $c_{4j} = b_{2j}^2$ for $j \ge 2^n$.
We can now give an explicit description of $M_q^K = K(n)_*(MO(q))$. $$ M_q^K \simeq K(n)_*\{b_{2i_1}b_{2i_2}\ldots b_{2i_k} c_{4j_1}c_{4j_2}\ldots c_{4j_m}\} \quad k+2m=q. $$ $$ 0 < i_1 \le i_2 \le \ldots \le i_k < 2^n \le j_1 \le j_2 \le \ldots \le j_m $$
This completes the proof of Theorem \ref{detail}.
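The basis just described is easy to enumerate. The sketch below (our illustration; the unbounded $c$-indices are truncated at an arbitrary bound) lists the basis monomials of $M_q^K$, and for $q=1$ it recovers the $2^n-1$ generators $b_2,\ldots,b_{2^{n+1}-2}$ of the reduced $K(n)$-homology of $MO(1)=BO(1)$:

```python
from itertools import combinations_with_replacement as cwr

def mqk_basis(q, n, max_j):
    """Basis monomials b_{2i_1}...b_{2i_k} c_{4j_1}...c_{4j_m} of M_q^K,
    with k + 2m = q, 0 < i_1 <= ... <= i_k < 2^n <= j_1 <= ... <= j_m,
    and the c-indices truncated at max_j.  Each monomial is returned as
    a pair of index tuples (i-part, j-part)."""
    basis = []
    for m in range(q // 2 + 1):
        k = q - 2 * m
        for i_part in cwr(range(1, 2**n), k):
            for j_part in cwr(range(2**n, max_j + 1), m):
                basis.append((i_part, j_part))
    return basis
```

For instance, `mqk_basis(1, 2, 10)` has $2^2-1=3$ elements, corresponding to $b_2, b_4, b_6$.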
There is still one bit of unaccounted for structure that we should mention. Although $K(n)_*(BO(q))$ is not an algebra, it is a coalgebra. The coalgebra structure for the $b$'s comes from $BO(1)$, so, for $p < 2^n$, we get $$ \psi(b_{2p}) = \sum_{i+j=p} b_{2i} \otimes b_{2j}. $$ The $c_{4j}$ are written in terms of the $b$'s in the AHSS, so we also know their coproduct modulo $(v_n)$. It would just be $$ \psi(c_{4p}) = \psi(b_{2p}^2) = \sum_{i+j=p} b_{2i}^2 \otimes b_{2j}^2 \qquad \mod (v_n). $$ If $i \ge 2^n$, replace $b_{2i}^2$ with $ c_{4i}$. Do the same with $j$. We can work modulo $(v_n)$ because this single differential also computes $k(n)_*(BO(q))$ where we only have non-negative powers of $v_n$.
We know that $K(n)_*(BO) \subset K(n)_*(BU)$. In \cite{KLW}, there are elements of $K(n)_*(BU)$ named $z_q$ that are our $c_{4(2^n+q)}$. In \cite[Theorem 3.14]{KLW}, the $z_q$ are computed in terms of $K(n)_*(BU)$ modulo $(v_n^2)$, and their complexity, and consequently the complexity of the coproduct, shows up here already. This is to be expected given the complexity of the dual algebra structure from Equation \eqref{cohomology}.
\section{Reconciliation}
The map $BO(q) \rightarrow BU(q)$ automatically gives a map of the algebraic construct on the right side of Equation \eqref{alg} to $BP^*(BO(q))$. The work of \cite{WSW:BO} first involves showing the map is surjective, which is done with the Adams spectral sequence. To show injectivity, the algebraic construct is analyzed. We can use that analysis here to show what we want. We have to establish some notation first.
We have $BP^*(CP^\infty) \simeq BP^*[[x]]$, $x \in BP^2(CP^\infty)$ and $$ \xymatrix{ BP^*(\prod^q CP^\infty) & \simeq & BP^*[[x_1,x_2,\ldots,x_q]] \\ \cup & & \cup \\ BP^*(BU(q)) & \simeq & BP^*[[c_1,c_2,\ldots,c_q]] } $$
The inclusion is given by all of the symmetric functions, which are generated by the elementary symmetric functions given by the $c_k$.
For $I = (i_1,\ldots,i_q)$, let $x^I = x_1^{i_1} \ldots x_q^{i_q}$. Two monomials are equivalent if some permutation of the $x_i$ takes one to the other. Define the symmetric function $$ s_I = \sum x^I $$ where the sum goes over all monomials equivalent to $x^I$. The elementary symmetric function is $c_k = \sum x_1 \ldots x_k$.
Theorem 1.30 of \cite[page 358]{WSW:BO} computes $c_k^*$ for $BP$ as $$ c_k^* = (-1)^k c_k + \sum_{i > 0} v_i s_{2^i,\underbrace{1,1,\ldots,1}_{k-1}} \quad \mod J^2 $$ where $J=(2,v_1,v_2,\ldots)$. We know that the generators of $BP^*(BO(q))$ all map non-trivially to the cohomology $H^*(BO(q))$. As a result, we can look at this relation using only the coefficients of $k(n)^* = \mathbb Z/2[v_n]$ and consider the relation modulo $(v_n^2)$. Inductively, the only relation we need is $k=q$. This reduces to $$ c_q - c_q^* = v_n s_{2^n,\underbrace{1,1,\ldots,1}_{q-1}} \qquad \mod (v_n^2). $$ Note that for $BU(q)$, our relation is divisible by $c_q = x_1 \ldots x_q$, i.e. $$ s_{2^n,1,1,\ldots,1} = c_q s_{2^n-1}. $$ Because $K(n)^*(BU(q))\simeq K(n)^* \widehat{\otimes} BP^*(BU(q))$, we can be quite sloppy with our powers of $v_n$ because we are going to invert $v_n$ to get our algebraic description in the end. The degree of $v_n$ is negative, so the more powers of $v_n$, the higher the degree of the symmetric function.
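The divisibility identity $s_{2^n,1,\ldots,1} = c_q s_{2^n-1}$ is elementary and can be verified by brute force. In the sketch below (our illustration, chosen only for the check), a $\mathbb Z/2$ symmetric function in $q$ variables is stored as the set of exponent tuples of its monomials:

```python
from collections import Counter
from itertools import permutations

def s(I, q):
    """Monomial symmetric function s_I in q variables over Z/2, stored as
    the set of distinct exponent tuples in the orbit of x^I."""
    I = tuple(I) + (0,) * (q - len(I))
    return set(permutations(I))

def mul(f, g):
    """Product of two Z/2 polynomials given as sets of exponent tuples;
    monomials appearing an even number of times cancel."""
    c = Counter(tuple(a + b for a, b in zip(p, r)) for p in f for r in g)
    return {m for m, k in c.items() if k % 2 == 1}

def identity_holds(n, q):
    lhs = s((2**n,) + (1,) * (q - 1), q)            # s_{2^n,1,...,1}
    rhs = mul(s((1,) * q, q), s((2**n - 1,), q))    # c_q * s_{2^n - 1}
    return lhs == rhs
```

Both sides are the orbit sum of $x_1^{2^n} x_2 \cdots x_q$, so the check passes for every $n$ and $q$.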
The following theorem will reconcile our two different descriptions of $K(n)^*(BO(q))$.
\begin{thm} A basis for $ K(n)^*[[c_1,\ldots,c_q]]/(c_1 - c_1^*, \ldots,c_q - c_q^*)$ in terms of symmetric functions is given by $$ s_{IJ} = \sum x_1^{i_1}\ldots x_m^{i_m} x_{m+1}^{j_1}\ldots x_{m+p}^{j_p} $$ where $0 < i_1 < \ldots < i_m < 2^n$ and $0 \le j_1 \le \ldots \le j_p$ with $j_{2i-1} = j_{2i}$. \end{thm}
\begin{remark} The definition forces $p$ to be even. If we drop the $i_m < 2^n$ condition, any $s_K$ can be written in this form. First, just find all the pairs of equal exponents and create $J$. Finding $I$ is easy after that. \end{remark}
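The recipe in the remark is a one-liner to implement. The sketch below (our illustration, ignoring the $i_m < 2^n$ condition just as the remark does) pairs up equal exponents to create $J$ and puts the leftovers in $I$:

```python
from collections import Counter

def split_IJ(K):
    """Split an exponent multiset K into (I, J): J collects pairs of equal
    exponents (so j_{2i-1} = j_{2i} and J has even length), and I is the
    strictly increasing leftover.  A leftover zero exponent drops out,
    since x^0 = 1."""
    cnt = Counter(K)
    I, J = [], []
    for e in sorted(cnt):
        pairs, rem = divmod(cnt[e], 2)
        J += [e] * (2 * pairs)
        if rem and e > 0:
            I.append(e)
    return I, J
```

For example, `split_IJ((3, 3, 3, 5))` returns `([3, 5], [3, 3])`: one pair of $3$'s goes to $J$, and the remaining exponents $3 < 5$ form $I$.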
\begin{remark} All we do in our proof is reduce arbitrary elements to those in our theorem. Because we know $K(n)_*(BO(q))$, we know that there can be no further reduction, so this is a basis. This does reconcile the two descriptions though. \end{remark}
\begin{proof} The proof is by double induction. First, it is by induction on $q$. The induction is easy to start with $q=1$, where the result is well known and straightforward, but worth discussing anyway as it illustrates things to come in the proof.
The relation in $k(n)^*(BU(1))$ that gives $k(n)^*(BO(1))$ and then $K(n)^*(BO(1))$ is just $0 = c_1 - c_1^* = v_n s_{2^n} = v_n x^{2^n}$ modulo $(v_n^2)$. The induction is on the degree of the symmetric function, which in this case is just powers of $x$. Inverting $v_n$, we see that $x^{2^n}$ is zero modulo higher powers of $x$.
For any $s_{2^n+k} = x^{2^n+k}$, we have $$ 0 = s_{2^n} s_k = s_{2^n + k} \mod \text{ higher powers of $x$}. $$ That is, each $s_{2^n+k}$ is zero modulo higher degree symmetric products. By induction on the degree of the symmetric product (i.e. induction on $k$) we push the relation to higher and higher degrees. In the topology on $K(n)^*(BU(1)) \simeq K(n)^*[[x]]$, this converges to zero, and so each $s_{2^n+k}$, $k \ge 0$, is really zero. We remind the reader that our relation isn't really $s_{2^n,1,\ldots,1}=0$ modulo higher degree symmetric functions. The relation has a $v_n$ in front. Since our relation really is in $k(n)^*(-)$ because it comes from $BP^*(-)$, all powers of $v_n$ are positive. Since we are going to invert $v_n$ in the end in order to get $K(n)^*(-)$, we can be quite loose with our $v_n$'s.
The same thing will happen in the general, arbitrary $q$, case. However, for $q > 1$, there are non-trivial basis elements in high degrees, so this process doesn't have to go to zero in the limit, but could settle on a basis element. Either way, it works for our proof.
From our induction on $q$, we assume the result for $q-1$. Stably, $BO(q) \simeq BO(q-1) \Wedge MO(q)$ from \cite{MitPrid} as well as $BU(q) \simeq BU(q-1) \Wedge MU(q)$. From \cite{WSW:BO}, we know that $BP^*(MO(q))$ is the ideal in $BP^*(BO(q))$ generated by $c_q$, and so the same is true for $K(n)^*(BO(q))$. Of course, the same is true for $BP^*(MU(q))$, $BP^*(BU(q))$, and $K(n)^*(BU(q))$. Consequently, we can focus our attention on the symmetric functions divisible by $c_q$ when there are only $q$ variables.
We know that $H^*(BU(q))$ is free on the symmetric functions $s_I$ with $I= ( i_1,\ldots,i_q)$. If all $i_k > 0$, this is a basis for $H^*(MU(q))$ and if some are not greater than 0, they are part of the basis for $H^*(BU(q-1))$. This splitting is only additive, not multiplicative. Because there is no torsion, this is all true for $BP^*(-)$, $k(n)^*(-)$ and $K(n)^*(-)$ as well.
Our next induction is on the degree of the symmetric functions. We will show that elements not of the form in our theorem are zero modulo higher degree elements. We know that $K(n)^*(BO(q))$ is $K(n)^*(BU(q))$ modulo the relations already described and that $K(n)^*(BU(q))$ is just given by the usual symmetric functions. To prove our result, we will not mod out our relations, but work with $BU(q)$ and just describe how the relations accomplish what we want. This will suffice for our purposes. We begin our induction by noticing that all elements in degrees less than the degree of $s_{2^n,1,\ldots,1} =c_q s_{2^n-1}$ are in our desired basis. The only element in the degree of $s_{2^n,1,\ldots,1}$ not in the basis is our relation element, which is zero modulo higher degree symmetric functions (ignoring the $v_n$ as discussed above).
An arbitrary element not of the form in the theorem simply has $i_m \ge 2^n$ instead of $i_m < 2^n$. Having fixed a degree, we first consider the cases where $i_m = 2^n + k$, with $k > 0$. Since we are working with elements divisible by $c_q$, we can divide by $c_q$ to get a new symmetric function, $s_{I'J'}$, with each $i_s$ replaced by $i_s - 1$ and the same for the $j_s$. This symmetric function has $i_m' = 2^n +k -1$. Since $k > 0$, this is known to be zero modulo higher degrees by our induction on degree. Multiplying by $c_q$ to get our original symmetric function, we see it must be zero modulo higher degrees. Note that we are using our induction on $q$ here. If $i_1$ or $j_1$ (or both) are equal to $1$, then $s_{I'J'}$ is in $K(n)^*(BU(q-1))$ because it is not divisible by $c_q$. By our induction, we know the behavior of the relations here.
In our fixed degree, we have eliminated all of the bad elements except those with $i_m = 2^n$. From such a symmetric function $s_{IJ}$, we create a new symmetric function $s_{I'J'}$ by eliminating the $x_m^{i_m}=x_m^{2^n}$ term and subtracting $1$ from all of the other $i_s$ and $j_s$. We want to analyze $$ s_{2^n,1,1,\ldots,1}s_{I'J'}. $$ Since $s_{2^n,1,1,\ldots,1}$ is zero modulo higher degrees, this product is too. Multiplying symmetric functions can be tricky because the result can be a sum of symmetric functions. The easy one to deal with is when $i_1$ and $j_1$ are greater than one (recall that $m+p=q$). In this case, if our $x^{2^n}$ term is multiplied by any positive power of $x$, we are in the situation where our product has $x^{2^n + k}$, with $k >0$, and we have dealt with those terms already. The only thing left is to multiply the $x^{2^n}$ back into the place it was removed from and then all of the other exponents are raised by 1, giving us back our original $s_{IJ}$.
Things are slightly more complicated if $i_1$ or $j_1$ is 1. (They must be at least 1 because everything is divisible by $c_q$.) Again, if our $x^{2^n}$ is multiplied by a non-zero power of $x$, we get $x^{2^n+k}$ and these terms have been handled already. Our $x^{2^n}$ must hit an $x^0$ term, but by the definition of symmetric functions, these are all equivalent, so the other $x_i$ all just have their exponent raised by 1 in our product and we get our $s_{IJ}$ back, showing it is zero modulo higher degrees. \end{proof}
\end{document}
\begin{document}
\setcounter{page}{1}
\title[Complete systems of unitary invariants for $2$-isometries] {Complete systems of unitary invariants for \\ some classes of $2$-isometries}
\author[A. Anand, S. Chavan, Z.\ J.\ Jab{\l}o\'nski, \MakeLowercase{and} J. Stochel] {Akash Anand,$^1$ Sameer Chavan,$^1$ Zenon Jan Jab{\l}o\'nski,$^2$ \\ \MakeLowercase{and} Jan Stochel$^{2*}$}
\address{$^{1}$Department of Mathematics and Statistics\\ Indian Institute of Technology Kanpur, India.} \email{\textcolor[rgb]{0.00,0.00,0.84}{akasha@iitk.ac.in; chavan@iitk.ac.in}} \address{$^{2}$Instytut Matematyki, Uniwersytet Jagiello\'nski, ul.\ \L ojasiewicza 6, PL-30348 Kra\-k\'ow, Poland.} \email{\textcolor[rgb]{0.00,0.00,0.84}{Zenon.Jablonski@im.uj.edu.pl; Jan.Stochel@im.uj.edu.pl}}
\dedicatory{Dedicated to the memory of Professor Ronald G. Douglas}
\subjclass[2010]{Primary 47B20, 47B37; Secondary 47B49.}
\keywords{$2$-isometry, kernel condition, complete system of unitary invariants, weighted shift on a directed tree, Cauchy dual operator, $C_{0 \cdot}$ and $C_{\cdot 0}$ classes.}
\begin{abstract} The unitary equivalence of $2$-isometric operators satisfying the so-called kernel condition is characterized. It relies on a model for such operators built on operator valued unilateral weighted shifts and on a characterization of the unitary equivalence of operator valued unilateral weighted shifts in a fairly general context. A complete system of unitary invariants for $2$-isometric weighted shifts on rooted directed trees satisfying the kernel condition is provided. It is formulated purely in the language of graph theory, namely in terms of certain generation branching degrees. The membership of the Cauchy dual operators of $2$-isometries in classes $C_{0 \cdot}$ and $C_{\cdot 0}$ is also studied. \end{abstract}
\maketitle
\section{Introduction} We begin by defining the basic concepts discussed in this paper. Let $\hh$ be a (complex) Hilbert space and $\boldsymbol B(\hh)$ stand for the $C^*$-algebra of all bounded linear operators on $\hh$. We say that an operator $T\in \boldsymbol B(\hh)$ is
\begin{enumerate}
\item[$\bullet$] {\em hyponormal} if $T^*T-TT^* \Ge 0,$
\item[$\bullet$] {\em subnormal} if it has a normal extension in a possibly larger Hilbert space,
\item[$\bullet$] {\em $2$-hyperexpansive} if $I - 2 T^*T + T^{*2}T^2 \Le 0,$
\item[$\bullet$] {\em $2$-isometric} if $I - 2 T^*T + T^{*2}T^2 =0.$
\end{enumerate} Subnormal operators are hyponormal (see \cite[Proposition~ II.4.2]{Co}) and $2$-isome\-tries are $2$-hyperexpansive, but none of these implications can be reversed (see \cite[Exercise~ 3, p.\ 50]{Co} and \cite[Lemma~ 6.1]{Ja-St}, respectively). Moreover, hyponormal operators which are $2$-hyperexpansive are isometric (see \cite[Theorem~ 3.4]{Ja-St}). The theory of subnormal and hyponormal operators was initiated by Halmos \cite{Hal-s}. The notion of a $2$-isometry was invented by Agler \cite{Ag-0}, while the concept of a $2$-hyperexpansive operator goes back to Richter \cite{R-0} (see also \cite[Remark~ 2]{At}). The {\it Cauchy dual operator} $T'$ of a left-invertible operator $T$ is defined by $T'=T(T^*T)^{-1}.$ This concept is due to Shimorin \cite{Sh}. The basic relationship between $2$-hyperexpansions and hyponormal operators via the Cauchy dual transform is as follows (see \cite[Sect.\ 5]{Sh-1} and \cite[Theorem~ 2.9]{Ch-0}).
\begin{align} \label{2hypcon}
\begin{minipage}{70ex} {\em If $T\in \boldsymbol B(\hh)$ is a $2$-hyperexpansive operator, then $T$ is left-invertible and $T'$ is a hyponormal contraction.}
\end{minipage}
\end{align}
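A standard example illustrating \eqref{2hypcon} (not taken from the present paper) is the Dirichlet shift: the unilateral weighted shift $T$ with weights $w_n = \sqrt{(n+2)/(n+1)}$ is a $2$-isometry, and its Cauchy dual $T' = T(T^*T)^{-1}$ is the weighted shift with weights $1/w_n$ (the Bergman shift), a contraction. On the standard basis the defining identities reduce to scalar identities, which the sketch below checks in exact arithmetic:

```python
from fractions import Fraction

def w2(n):
    """Squared weight w_n^2 = (n+2)/(n+1) of the Dirichlet shift, a standard
    example of a 2-isometric unilateral weighted shift (illustration only)."""
    return Fraction(n + 2, n + 1)

def is_two_isometric(max_n=200):
    # (I - 2 T*T + T*^2 T^2) e_n = (1 - 2 w_n^2 + w_n^2 w_{n+1}^2) e_n
    return all(1 - 2 * w2(n) + w2(n) * w2(n + 1) == 0 for n in range(max_n))

def cauchy_dual_is_contraction(max_n=200):
    # T' = T(T*T)^{-1} has weights w_n / w_n^2 = 1/w_n, and 1/w_n^2 <= 1
    return all(1 / w2(n) <= 1 for n in range(max_n))
```

The check over the first $200$ weights passes; of course the scalar identity holds for every $n$ by a telescoping computation.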
In a recent paper \cite{A-C-J-S}, the present authors solved the Cauchy dual subnormality problem in the negative by showing that there are $2$-isometric operators $T$ whose Cauchy dual operators $T'$ are not subnormal. One of the ideas of constructing such counterexamples relies on perturbing the so-called kernel condition in the context of weighted shifts on directed trees (see \cite{JJS} for more information on this class of operators). Recall from \cite{A-C-J-S} that $T\in \boldsymbol B(\hh)$ satisfies the {\em kernel condition}~ if
\begin{align} \label{kc} T^*T (\ker T^*) \subseteq \ker T^*.
\end{align} It was proved in \cite[Theorem~ 6.5]{A-C-J-S} that if $\tcal$ is a rooted directed tree and $\slam$ is a $2$-isometric weighted shift on $\tcal$ with nonzero weights which satisfies the perturbed kernel condition, then the Cauchy dual operator $\slam'$ of $\slam$ is subnormal if and only if $\slam$ satisfies the kernel condition. Further, it was shown in \cite[Theorem~ 3.3]{A-C-J-S} that the Cauchy dual operator $T'$ of a $2$-isometry $T$ satisfying the kernel condition is always subnormal. This can in turn be derived from a model theorem for $2$-isometries satisfying the kernel condition (see \cite[Theorem~ 2.5]{A-C-J-S}). The model itself is built on operator valued unilateral weighted shifts and is the starting point of the present investigations. It is worth mentioning that there are Dirichlet-type models for cyclic analytic $2$-isometries and for finitely multicyclic $2$-isometries given by Richter \cite[Theorem~ 5.1]{R-1} and by Agler and Stankus \cite[Theorem~ 3.49]{Ag-St}, respectively. Richter used his model to characterize unitary equivalence of cyclic analytic $2$-isometries (see \cite[Theorem~ 5.2]{R-1}). As far as we know, there are no models for arbitrary $2$-isometries.
The paper is organized as follows. In Section \ref{Sec4}, looking for a complete system of unitary invariants for $2$-isometries satisfying the kernel condition, we first discuss the question of unitary equivalence of operator valued unilateral weighted shifts in a general context. This class of operators was investigated by Lambert \cite{lam-1}. Essential progress in their study, also relevant for our present work, was made in \cite{Ja-3}. As opposed to the previous approaches, ours does not require the operator weights even to be quasi-invertible. We only assume that they have dense range. We provide a characterization of unitary equivalence of such operators (see Theorem~ \ref{przepluni}). Under some carefully chosen constraints, we obtain a simpler characterization of their unitary equivalence (see Theorem~ \ref{Fran4}), which resembles that for scalar weighted shifts (cf.\ \cite[Theorem~ 1]{Shi}). We conclude this section by characterizing the unitary equivalence of orthogonal sums (of arbitrary cardinality) of injective unilateral weighted shifts (see Theorem~ \ref{desz1}). We want to draw the reader's attention to \cite{Ba-Mi}, where the so-called block shifts generalizing operator valued unilateral weighted shifts were studied.
In Section \ref{Sec5}, using the model for $2$-isometries satisfying the kernel condition (see \cite[Theorem~ 2.5]{A-C-J-S}), we answer the question of when two such operators are unitarily equivalent (see Theorem~ \ref{fincyc} and Lemma~ \ref{unrown}). We also answer the question of when a completely non-unitary $2$-isometry satisfying the kernel condition is unitarily equivalent to an orthogonal sum of scalar unilateral weighted shifts (see Theorem~ \ref{fincyc2}). This enables us to show that each finitely multicyclic completely non-unitary $2$-isometry satisfying the kernel condition is a finite orthogonal sum of weighted shifts (see Corollary~ \ref{mulcyc}). As a consequence, the adjoint of any such operator is in the Cowen-Douglas class (see \cite[Corollary~ 3.7]{Cha2008} for a more general result). We refer the reader to \cite{C-D} for the definition of the Cowen-Douglas class.
In Section \ref{Sec9}, we investigate $2$-isometric weighted shifts on directed trees satisfying the condition \eqref{hypo+}, which in general is stronger than the kernel condition. However, the two conditions coincide in the case when the directed tree is leafless and the weights of the weighted shift under consideration are nonzero (see \cite[Lemma~ 5.6]{A-C-J-S}). Example~ \ref{obustr} shows that, in contrast to the rooted case, in which such weighted shifts are completely non-unitary (see \cite[Lemma~ 5.3(viii)]{A-C-J-S}), a weighted shift on a rootless directed tree may fail to be completely non-unitary even though it is isometric and non-unitary. Theorem~ \ref{2isscs-t} provides a model for $2$-isometric weighted shifts on rooted directed trees that satisfy the condition \eqref{hypo+}. These operators are modelled by orthogonal sums of inflations of unilateral weighted shifts whose weights come from a single $2$-isometric unilateral weighted shift. What is more, the additive exponent of the $k$th inflation that appears in the orthogonal decomposition \eqref{zenob} is equal to ${\mathfrak j}^{\tcal}_k,$ the $k$th generation branching degree of the underlying graph $\tcal$. This enables us to answer the question of when two such operators are unitarily equivalent by using ${\mathfrak j}^{\tcal}_k$ (see Theorem~ \ref{equival}). We conclude this section by showing that there are two unitarily equivalent $2$-isometric weighted shifts on non-graph isomorphic directed trees with nonzero weights which satisfy the kernel condition (see Example~ \ref{2+3}).
In Section \ref{Sec6}, we continue our investigations of unitary invariants. We begin by calculating explicitly another unitary invariant, namely the SOT limit $\mathsf A_{T'}$ of the sequence $\{T'^{*n}T'^{n}\}_{n=1}^{\infty}$ for two classes of $2$-isometries $T$ (see Lemma~ \ref{convcd}). We next show that the Cauchy dual operator $T^\prime$ of a $2$-isometry $T$ is of class $C_{\cdot 0}$ if and only if $T$ is completely non-unitary. Under the additional assumption that $T$ satisfies the kernel condition, the Cauchy dual operator $T^\prime$ is of class $C_{0\cdot}$ if and only if $G(\{1\})=0,$ or equivalently if and only if $E(\{1\})=0,$ where $G$ and $E$ are the spectral measures of $T^*T$ and the zeroth weight $W_0$ of the model operator $W$ for $T,$ respectively (see Theorem~ \ref{coo}). Note that non-isometric quasi-Brownian isometries do not satisfy the kernel condition (see \cite[Example~ 4.4 and Corollary~ 4.6]{A-C-J-S}) and their Cauchy dual operators are never of class $C_{0\cdot}$ (see Proposition~ \ref{coo-qB}(i)).
Now we fix notation and terminology. Let $\C$ stand for the set of complex numbers. Denote by $\nbb$, $\zbb_+$ and $\rbb_+$ the sets of positive integers, nonnegative integers and nonnegative real numbers, respectively. Given a set $X$, we write $\card{X}$ for the cardinality of $X$ and denote by $\chi_{\varDelta}$ the characteristic function of a subset $\varDelta$ of $X$. The $\sigma$-algebra of all Borel subsets of a topological space $X$ is denoted by $\borel{X}$. In this paper, Hilbert spaces are assumed to be complex and operators are assumed to be linear. Let $\hh$ be a Hilbert space. As usual, we denote by $\dim \hh$ the orthogonal dimension of $\hh$. If $f \in \hh$, then $\langle f \rangle$ stands for the linear span of the singleton of $f$. Given another Hilbert space $\kk$, we denote by $\boldsymbol B(\hh,\kk)$ the Banach space of all bounded operators from $\hh$ to $\kk$. The kernel, the range and the modulus of an operator $T \in \boldsymbol B(\hh,\kk)$ are denoted by $\ker T,$
$\ran T$ and $|T|,$ respectively. We abbreviate $\boldsymbol B(\hh,\hh)$ to $\boldsymbol B(\hh)$ and regard $\boldsymbol B(\hh)$ as a $C^*$-algebra. Its unit, which is the identity operator on $\hh$, is denoted here by $I_\hh,$ or simply by $I$ if no ambiguity arises. We write $\sigma(T)$ for the spectrum of $T\in \boldsymbol B(\hh).$ Given $T \in \boldsymbol B(\hh)$ and a cardinal number $\mathfrak n$, we set $\hh^{\oplus{\mathfrak n}} = \bigoplus_{j\in J} \hh_j$ and $T^{\oplus{\mathfrak n}}= \bigoplus_{j\in J} T_j$ with $\hh_j=\hh$ and $T_j=T$ for all $j\in J$, where $J$ is an index set of cardinality $\mathfrak n$. We call $\hh^{\oplus{\mathfrak n}}$ and $T^{\oplus{\mathfrak n}}$ the {\em $\mathfrak n$-fold inflation} of $\hh$ and $T$, respectively. We adhere to the convention that $\hh^{\oplus{0}}=\{0\}$ and $T^{\oplus{0}}=0$. If $S$ and $T$ are Hilbert space operators which are unitarily equivalent, then we write $S \cong T$.
We say that an operator $T\in \boldsymbol B(\hh)$ is {\em completely non-unitary} (resp., {\em pure}) if there is no nonzero reducing closed vector subspace $\mathcal L$ of $\hh$ such that the restriction
$T|_{\mathcal L}$ of $T$ to $\mathcal L$ is a unitary (resp., a normal\/) operator. Following \cite{R-1}, we call $T$ {\em analytic} if $\bigcap_{n=1}^{\infty} T^n(\hh)=\{0\}$. Note that any analytic operator is completely non-unitary. It is well known that any operator $T\in \boldsymbol B(\hh)$ has a unique orthogonal decomposition $T=N\oplus P$ such that $N$ is a normal operator and $P$ is a pure operator (see \cite[Corollary~ 1.3]{Mo}). We shall refer to $N$ and $P$ as the {\em normal} and {\em pure} parts of $T$, respectively. The following fact can be deduced from \cite[Corollary~ 1.3]{Mo}.
\begin{lemma}\label{unrown} Operators $T_1\in \boldsymbol B(\hh_1)$ and $T_2\in \boldsymbol B(\hh_2)$ are unitarily equivalent if and only if their corresponding normal and pure parts are unitarily equivalent.
\end{lemma}
\section{\label{Sec4}Unitary equivalence of operator valued unilateral weighted shifts} In this section, the question of unitary equivalence of operator valued unilateral weighted shifts is revisited. First, we give a necessary and sufficient condition for two such operators whose weights have dense range to be unitarily equivalent (see Theorem~ \ref{przepluni}). This result generalizes in particular \cite[Corollary~ 3.3]{lam-1} in which weights are assumed to be invertible. If weights are more regular, where the regularity does not refer to invertibility, then the characterization of unitary equivalence takes on a much simpler form (see Theorem~ \ref{Fran4} and Corollary~ \ref{dom2}). As an application, we answer the question of when two orthogonal sums of uniformly bounded families of injective unilateral weighted shifts are unitarily equivalent (see Theorem~ \ref{desz1}).
We begin by proving a criterion for the modulus of a finite product of bounded operators to be equal to the product of their moduli.
\begin{lemma} \label{kommod} Let $n$ be an integer greater than or equal to $2$. Suppose $A_1, \ldots, A_n \in \boldsymbol B(\hh)$ are such that
$|A_i|$ commutes with $A_j$ whenever $i < j$. Then
\begin{enumerate}
\item[(i)] the operators $|A_1|, \ldots, |A_n|$ mutually commute,
\item[(ii)] $|A_1 \,\cdots\, A_n|^2 = |A_1|^2 \,\cdots\,
|A_n|^2$,
\item[(iii)] $|A_1 \,\cdots\, A_n| = |A_1| \,\cdots\,
|A_n|$.
\end{enumerate}
\end{lemma}
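Before the proof, a quick numerical sanity check of (iii) (our illustration only): if $A_1$ is unitary then $|A_1| = I$ commutes with everything, so the hypothesis of the lemma holds and $|A_1 A_2| = |A_1||A_2| = |A_2|$.

```python
import numpy as np

def modulus(A):
    """|A| = (A* A)^{1/2}, computed via the spectral decomposition of A* A."""
    w, V = np.linalg.eigh(A.conj().T @ A)
    return V @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.conj().T

rng = np.random.default_rng(0)
A1, _ = np.linalg.qr(rng.standard_normal((5, 5)))   # orthogonal, so |A1| = I
A2 = rng.standard_normal((5, 5))
# hypothesis holds (|A1| = I commutes with A2), and (iii) follows numerically
assert np.allclose(modulus(A1 @ A2), modulus(A1) @ modulus(A2))
```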
\begin{proof} (i) Fix integers $i,j \in \{1, \ldots, n\}$ such that
$i<j$. Since $|A_i| A_j = A_j |A_i|$, and thus $|A_i|
A_j^* = A_j^* |A_i|$, we see that
$|A_i||A_j|^2=|A_j|^2|A_i|$. Hence
$|A_i||A_j|=|A_j||A_i|$, which proves (i).
(ii) By our assumption and (i), we have \allowdisplaybreaks
\begin{align} \notag
|A_1\,\cdots\, A_n|^2 & = A_n^* \,\cdots\, A_2^*
|A_1|^2 A_2 \,\cdots\, A_n
\\ \notag
& = |A_1|^2 A_n^* \,\cdots\, A_3^* |A_2|^2 A_3 \,\cdots\, A_n
\\ \notag & \hspace{4.5ex} \vdots
\\ \label{Fran3}
& = |A_1|^2 \,\cdots\, |A_n|^2.
\end{align}
(iii) It follows from \eqref{Fran3} and (i) that
\begin{align*}
|A_1 \,\cdots\, A_n|^2 = (|A_1| \,\cdots\, |A_n|)^2.
\end{align*} Applying the square root theorem and the fact that the product of commuting positive bounded operators is positive, we conclude that (iii) holds.
\end{proof} Let us recall the definition of an operator valued unilateral weighted shift. Suppose $\mathcal M$ is a \underline{nonzero} Hilbert space. Denote by
$\ell^2_{\mathcal M}$ the Hilbert space of all vector sequences $\{h_n\}_{n=0}^{\infty} \subseteq \mathcal M$ such that $\sum_{n=0}^{\infty} \|h_n\|^2 < \infty$ equipped with the standard inner product
\begin{align*} \big\langle \{g_n\}_{n=0}^{\infty}, \{h_n\}_{n=0}^{\infty}\big\rangle = \sum_{n=0}^{\infty} \langle g_n, h_n \rangle, \quad \{g_n\}_{n=0}^{\infty}, \{h_n\}_{n=0}^{\infty} \in \ell^2_{\mathcal M}.
\end{align*} Let $\{W_n\}_{n=0}^{\infty} \subseteq \boldsymbol B(\mathcal M)$ be a uniformly bounded sequence of operators. Then the operator $W \in \boldsymbol B(\ell^2_{\mathcal M})$ defined by
\begin{align*} W(h_0, h_1, \ldots) = (0, W_0 h_0, W_1 h_1, \ldots), \quad (h_0, h_1, \ldots) \in \ell^2_{\mathcal M},
\end{align*} is called an {\em operator valued unilateral weighted shift} with weights $\{W_n\}_{n=0}^{\infty}$. It is easy to verify that \allowdisplaybreaks
\begin{align} \label{aopws} W^*(h_0, h_1, \ldots) &= (W_0^*h_1, W_1^*h_2, \ldots), \quad (h_0, h_1, \ldots) \in \ell^2_{\mathcal M},
\\ \label{aopws2} W^*W(h_0, h_1, \ldots) &= (W_0^*W_0h_0, W_1^*W_1h_1, \ldots), \quad (h_0, h_1, \ldots) \in \ell^2_{\mathcal M}.
\end{align} If each weight $W_n$ of $W$ is an invertible (resp., a positive) element of the $C^*$-algebra $\boldsymbol B(\mathcal M)$, then we say that $W$ is an operator valued unilateral weighted shift with {\em invertible} (resp., {\em positive}) weights. Putting $\mathcal M=\C$, we arrive at the well-known notion of a unilateral weighted shift in $\ell^2_{\C}=\ell^2$.
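A finite-section sketch (our illustration) of this definition: truncating to $N$ coordinates, $W$ becomes a block-subdiagonal matrix, and, in accordance with \eqref{aopws2}, $W^*W$ is block diagonal with blocks $W_n^*W_n$.

```python
import numpy as np

def block_shift(weights, M):
    """Finite section of an operator valued unilateral weighted shift:
    an (N M) x (N M) matrix with blocks W_0, ..., W_{N-2} on the block
    subdiagonal, where N = len(weights) + 1."""
    N = len(weights) + 1
    W = np.zeros((N * M, N * M))
    for n, Wn in enumerate(weights):
        W[(n + 1) * M:(n + 2) * M, n * M:(n + 1) * M] = Wn
    return W

rng = np.random.default_rng(1)
M = 3
weights = [rng.standard_normal((M, M)) for _ in range(4)]
W = block_shift(weights, M)
G = W.T @ W   # block diagonal, with n-th diagonal block W_n^T W_n
for n, Wn in enumerate(weights):
    assert np.allclose(G[n * M:(n + 1) * M, n * M:(n + 1) * M], Wn.T @ Wn)
```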
From now on, we assume that $\mathcal M^{(1)}$ and $\mathcal M^{(2)}$ are nonzero Hilbert spaces and $W^{(1)} \in \boldsymbol B(\ell^2_{\mathcal M^{(1)}})$ and $W^{(2)} \in \boldsymbol B(\ell^2_{\mathcal M^{(2)}})$ are operator valued unilateral weighted shifts with weights $\{W_n^{(1)}\}_{n=0}^{\infty} \subseteq \boldsymbol B(\mathcal M^{(1)})$ and $\{W_n^{(2)}\}_{n=0}^{\infty} \subseteq \boldsymbol B(\mathcal M^{(2)})$, respectively. Below, under the assumption that the weights of $W^{(1)}$ have dense range, we characterize bounded operators which intertwine $W^{(1)}$ and $W^{(2)}$ (see \cite[Lemma~ 2.1]{lam-1} for the case of invertible weights).
\begin{lemma} \label{przepl} Suppose that each operator $W_n^{(1)}$, $n \in \zbb_+$, has dense range. Let $A\in \boldsymbol B(\ell^2_{\mathcal M^{(1)}}, \ell^2_{\mathcal M^{(2)}})$ be an operator with the matrix representation $[A_{i,j}]_{i,j=0}^{\infty}$, where $A_{i,j}\in\boldsymbol B(\mathcal M^{(1)}, \mathcal M^{(2)})$ for all $i,j\in\zbb_+$. Then the following two conditions are equivalent{\em :}
\begin{enumerate}
\item[(i)] $AW^{(1)} = W^{(2)}A$,
\item[(ii)] $A$ is lower triangular, that is, $A_{i,j}=0$ whenever $i<j$, and
\begin{align} \label{ABBA0} A_{i,j} W_{j-1}^{(1)} \,\cdots\, W_{0}^{(1)} = W_{i-1}^{(2)} \,\cdots\, W_{i-j}^{(2)} A_{i-j,0}, \quad i \Ge j \Ge 1.
\end{align}
\end{enumerate}
\end{lemma}
\begin{proof} Denote by $\delta_{i,j}$ the Kronecker delta function. Since $W^{(k)}$ has the matrix representation $[\delta_{i,j+1} W_j^{(k)}]_{i,j=0}^{\infty}$ for $k=1,2$, we see that (i) holds if and only if $A_{i,j+1} W_j^{(1)} = W_{i-1}^{(2)} A_{i-1,j}$ for all $i,j \in \zbb_+$ (with the convention that $W_{-1}^{(2)}=0$ and $A_{-1,j}=0$ for $j\in \zbb_+$). Hence, (i) holds if and only if the following equations hold \allowdisplaybreaks
\begin{align} \label{ABBA1} A_{0,j} & = 0, \quad j \in \nbb,
\\ \label{ABBA3} A_{i+1,j+1} W_j^{(1)} & = W_i^{(2)} A_{i,j}, \quad i,j \in \zbb_+.
\end{align}
(i)$\Rightarrow$(ii) By induction, we infer from \eqref{ABBA3} that
\begin{align} \label{ABBA2} A_{i+k,j+k} W_{j+k-1}^{(1)} \,\cdots\, W_{j}^{(1)} = W_{i+k-1}^{(2)} \,\cdots\, W_{i}^{(2)} A_{i,j}, \quad i,j \in \zbb_+, \, k \in \nbb.
\end{align} This and \eqref{ABBA1}, combined with the assumption that each $W_n^{(1)}$ has dense range, imply that $A$ is lower triangular. It is a matter of routine to show that \eqref{ABBA2} implies \eqref{ABBA0}.
(ii)$\Rightarrow$(i) Since $A$ is lower triangular and \eqref{ABBA0} holds, it remains to show that \eqref{ABBA3} is valid whenever $i\Ge j\Ge 1$. Applying \eqref{ABBA0} again, we get \allowdisplaybreaks
\begin{align*} A_{i+1,j+1} W_j^{(1)} \Big(W_{j-1}^{(1)} \,\cdots\, W_0^{(1)}\Big) & = W_i^{(2)} \Big(W_{i-1}^{(2)} \,\cdots\, W_{i-j}^{(2)} A_{i-j,0}\Big)
\\ & = W_i^{(2)} A_{i,j} \Big(W_{j-1}^{(1)} \,\cdots\, W_0^{(1)}\Big).
\end{align*} Since each operator $W_n^{(1)}$ has dense range, we conclude that $A_{i+1,j+1} W_j^{(1)}=W_i^{(2)} A_{i,j}$. This completes the proof.
\end{proof} The question of when the operators $W^{(1)}$ and $W^{(2)}$ whose weights have dense range are unitarily equivalent is answered by the following theorem (see \cite[Corollary~ 3.3]{lam-1} for the case of invertible weights).
\begin{theorem} \label{przepluni} Suppose that for any $k=1,2$ and every $n \in \zbb_+$, the operator $W_n^{(k)}$ has dense range. Then the following two conditions are equivalent{\em :}
\begin{enumerate}
\item[(i)] $W^{(1)} \cong W^{(2)}$,
\item[(ii)] there exists a unitary isomorphism $U_0\in \boldsymbol B(\mathcal M^{(1)}, \mathcal M^{(2)})$ such that
\begin{align} \label{Fra1}
|W_{[i]}^{(1)}| = U_0^*|W_{[i]}^{(2)}| U_0, \quad i \in \nbb,
\end{align} where $W_{[i]}^{(k)} = W_{i-1}^{(k)} \,\cdots\, W_{0}^{(k)}$ for $i\in \nbb$ and $k=1,2$.
\end{enumerate}
\end{theorem}
\begin{proof} (i)$\Rightarrow$(ii) Suppose that $U\in \boldsymbol B(\ell^2_{\mathcal M^{(1)}}, \ell^2_{\mathcal M^{(2)}})$ is a unitary isomorphism such that $U W^{(1)} = W^{(2)}U$ and $[U_{i,j}]_{i,j=0}^{\infty}$ is the matrix representation of $U$, where $\{U_{i,j}\}_{i,j=0}^{\infty} \subseteq \boldsymbol B(\mathcal M^{(1)}, \mathcal M^{(2)})$. It follows from Lemma~ \ref{przepl} that the operator $U$ is lower triangular. Since $U^*=U^{-1}$ is a unitary isomorphism with the corresponding matrix representation $[(U_{j,i})^*]_{i,j=0}^{\infty}$ and $U^*W^{(2)} = W^{(1)}U^*$, we infer from Lemma~ \ref{przepl} that $U^*$ is lower triangular. In other words, $U_{i,j}=0$ whenever $i\neq j$. Since $U$ is a unitary isomorphism, we deduce that for any $i \in \zbb_+$, $U_i:=U_{i,i}$ is a unitary isomorphism. It follows from \eqref{ABBA0} that
\begin{align*} U_i W_{[i]}^{(1)} = W_{[i]}^{(2)} U_0, \quad i \in \nbb.
\end{align*} This yields
\begin{align*}
|W_{[i]}^{(1)}|^2 = (W_{[i]}^{(1)})^* U_i^* U_i W_{[i]}^{(1)} = U_0^* |W_{[i]}^{(2)}|^2 U_0, \quad i\in \nbb.
\end{align*} An application of the square root theorem now yields \eqref{Fra1}.
(ii)$\Rightarrow$(i) In view of \eqref{Fra1}, we have
\begin{align} \label{fran2}
\|W_{[i]}^{(1)} f\| = \||W_{[i]}^{(1)}| f\| =
\||W_{[i]}^{(2)}| U_0 f\| = \|W_{[i]}^{(2)} U_0 f\|, \quad f \in \mathcal M^{(1)}, \, i \in \nbb.
\end{align} By our assumption, for any $k=1,2$ and every $i\in \nbb$, the operator $W_{[i]}^{(k)}$ has dense range. Hence, by \eqref{fran2}, for every $i\in \nbb$, there exists a unique unitary isomorphism $U_i\in \boldsymbol B(\mathcal M^{(1)}, \mathcal M^{(2)})$ such that
\begin{align*} U_i W_{[i]}^{(1)} = W_{[i]}^{(2)} U_0, \quad i \in \nbb.
\end{align*} Set $U=\bigoplus_{i=0}^{\infty} U_i$. Applying Lemma~ \ref{przepl} to $A=U$, we get $UW^{(1)} = W^{(2)}U$ which completes the proof.
\end{proof} Under additional assumptions on weights, the above characterization of unitary equivalence of $W^{(1)}$ and $W^{(2)}$ can be substantially simplified.
\begin{theorem} \label{Fran4} Suppose that for every $n \in \zbb_+$, $\ker W_n^{(1)} = \{0\}$ and, for $k=1,2$, the operator $W_n^{(k)}$ has dense range and $|W_n^{(k)}|$ commutes with $W_m^{(k)}$ whenever $m < n$. Then the following two conditions are equivalent{\em :}
\begin{enumerate}
\item[(i)] $W^{(1)} \cong W^{(2)}$,
\item[(ii)] there exists a unitary isomorphism $U_0\in \boldsymbol B(\mathcal M^{(1)}, \mathcal M^{(2)})$ such that
\begin{align} \label{Fran1}
|W_{n}^{(1)}| = U_0^*|W_{n}^{(2)}| U_0, \quad n\in \zbb_+.
\end{align}
\end{enumerate}
\end{theorem}
\begin{proof} (i)$\Rightarrow$(ii) It follows from Theorem~ \ref{przepluni} that there exists a unitary isomorphism $U_0\in \boldsymbol B(\mathcal M^{(1)}, \mathcal M^{(2)})$ such that \eqref{Fra1} holds. We will show that \eqref{Fran1} is valid. The case of $n=0$ follows directly from \eqref{Fra1} with $i=1$. Suppose now that $n\in \nbb$. Then, by Lemma~ \ref{kommod} and \eqref{Fra1}, we have \allowdisplaybreaks
\begin{align} \notag
|W_n^{(1)}| |W_{[n]}^{(1)}| = |W_{[n+1]}^{(1)}| & =
U_0^*|W_{[n+1]}^{(2)}| U_0
\\ \notag
& = U_0^* |W_n^{(2)}| U_0 U_0^* |W_{[n]}^{(2)}| U_0
\\ \label{dom1}
& = U_0^* |W_n^{(2)}| U_0 |W_{[n]}^{(1)}|.
\end{align}
Since $W_{[n]}^{(1)}$ is injective, we deduce that the operator $|W_{[n]}^{(1)}|$ has dense range. Hence, by
\eqref{dom1}, $|W_n^{(1)}| = U_0^* |W_n^{(2)}| U_0$.
(ii)$\Rightarrow$(i) It follows from Lemma~ \ref{kommod} that
\begin{align*}
|W_{[i]}^{(k)}| = |W_{i-1}^{(k)}| \,\cdots\,
|W_{0}^{(k)}|, \quad i\in \nbb, \, k=1,2.
\end{align*} Hence, by \eqref{Fran1} and Lemma~ \ref{kommod}, we have
\begin{align*}
|W_{[i]}^{(1)}| = (U_0^*|W_{i-1}^{(2)}| U_0)
\,\cdots\, (U_0^* |W_{0}^{(2)}|U_0) = U_0^*
|W_{[i]}^{(2)}| U_0, \quad i\in \nbb.
\end{align*} In view of Theorem~ \ref{przepluni}, $W^{(1)} \cong W^{(2)}$. This completes the proof.
\end{proof}
\begin{corollary} \label{dom2} Suppose that for $k=1,2$, $\{W_n^{(k)}\}_{n=0}^{\infty}$ are injective diagonal operators with respect to the same orthonormal basis of $\mathcal M^{(k)}$. Then $W^{(1)} \cong W^{(2)}$ if and only if the condition {\em (ii)} of Theorem~ {\em \ref{Fran4}} is satisfied.
\end{corollary}
\begin{remark} First, it is easy to verify that Theorem~ \ref{Fran4} remains true if instead of assuming that the operators $\{W_n^{(1)}\}_{n=0}^{\infty}$ are injective, we assume that the operators $\{W_n^{(2)}\}_{n=0}^{\infty}$ are injective. Second, the assumption that the operators $\{W_n^{(1)}\}_{n=0}^{\infty}$ are injective was used only in the proof of the implication (i)$\Rightarrow$(ii) of Theorem~ \ref{Fran4}. Third, the assertion (ii) of Theorem~ \ref{Fran4} implies that the operators $\{W_n^{(1)}\}_{n=0}^{\infty}$ are injective if and only if the operators $\{W_n^{(2)}\}_{n=0}^{\infty}$ are injective.
$\diamondsuit$
\end{remark} We are now in a position to characterize the unitary equivalence of two orthogonal sums of uniformly bounded families of injective unilateral weighted shifts.
\begin{theorem} \label{desz1} Suppose that for $k=1,2$, $\varOmega_k$ is a nonempty set and $\{S_{\omega}^{(k)}\}_{\omega \in \varOmega_k} \subseteq \boldsymbol B(\ell^2)$ is a uniformly bounded family of injective unilateral weighted shifts. Then the following two conditions are equivalent{\em :}
\begin{enumerate}
\item[(i)] $\bigoplus_{\omega\in \varOmega_1} S_{\omega}^{(1)} \cong \bigoplus_{\omega\in \varOmega_2} S_{\omega}^{(2)}$,
\item[(ii)] there exists a bijection $\varPhi\colon \varOmega_1 \to \varOmega_2$ such that $S_{\varPhi(\omega)}^{(2)}=S_{\omega}^{(1)}$ for all $\omega \in \varOmega_1$.
\end{enumerate}
\end{theorem}
\begin{proof} (i)$\Rightarrow$(ii) For $k=1,2$, we denote by $\hh^{(k)}$ the Hilbert space in which the orthogonal sum $T^{(k)}:=\bigoplus_{\omega\in \varOmega_k} S_{\omega}^{(k)}$ acts and choose an orthonormal basis $\{e_{\omega, n}^{(k)}\}_{\omega \in \varOmega_k,n \in \zbb_+}$ of $\hh^{(k)}$ such that $T^{(k)} e_{\omega, n}^{(k)} = \lambda_{\omega, n}^{(k)} e_{\omega, n+1}^{(k)}$ for all $\omega \in \varOmega_k$ and $n\in \zbb_+$, where $\lambda_{\omega, n}^{(k)}$ are nonzero complex numbers. Clearly, the space $\bigoplus_{n \in \zbb_+} \langle e_{\omega, n}^{(k)} \rangle$ reduces $T^{(k)}$ to an operator which is unitarily equivalent to $S_{\omega}^{(k)}$ for all $\omega\in \varOmega_k$ and $k=1,2$.
Assume that $T^{(1)}\cong T^{(2)}$. First, we note that there is no loss of generality in assuming that $\varOmega_1 = \varOmega_2=:\varOmega$ because, due to $(T^{(1)})^*\cong (T^{(2)})^*$, we have
\begin{align*} \card{\varOmega_1}&=\dim \bigg(\bigoplus_{\omega\in \varOmega_1} \ker \big(S_{\omega}^{(1)}\big)^* \bigg) = \dim \ker (T^{(1)})^*
\\ & = \dim \ker (T^{(2)})^* = \card{\varOmega_2}.
\end{align*} In turn, by \cite[Corollary~ 1]{Shi}, we can assume that $\lambda_{\omega, n}^{(k)} > 0$ for all $\omega \in \varOmega,$ $n\in \zbb_+$ and $k=1,2$. For $k=1,2$, we denote by $\mathcal M^{(k)}$ the orthogonal sum $\bigoplus_{\omega \in \varOmega} \langle e_{\omega, 0}^{(k)} \rangle$ and by $W^{(k)}$ the operator valued unilateral weighted shift on $\ell^2_{\mathcal M^{(k)}}$ with weights $\{W_n^{(k)}\}_{n=0}^{\infty} \subseteq \boldsymbol B(\mathcal M^{(k)})$ uniquely determined by the following equations
\begin{align*} W_n^{(k)} e_{\omega, 0}^{(k)} = \lambda_{\omega, n}^{(k)} e_{\omega, 0}^{(k)}, \quad \omega \in \varOmega, \, n \in \zbb_+, \, k=1,2.
\end{align*}
($W^{(k)}$ is well-defined because $\|T^{(k)}\| = \sup_{n\in \zbb_+} \sup_{\omega\in \varOmega} \lambda_{\omega, n}^{(k)} = \sup_{n\in
\zbb_+}\|W_n^{(k)}\|$.) We claim that $T^{(k)} \cong W^{(k)}$ for $k=1,2$. Indeed, for $k=1,2,$ there exists a unique unitary isomorphism $V_k\in \boldsymbol B(\hh^{(k)},\ell^2_{\mathcal M^{(k)}})$ such that
\begin{align*} V_k e_{\omega, n}^{(k)} = (\underset{\substack{\phantom{a} \\ \langle 0 \rangle}} 0, \ldots, 0, \underset{\langle n \rangle}{e_{\omega, 0}^{(k)}}, 0, \dots), \quad \omega \in \varOmega,\, n\in \zbb_+.
\end{align*} It is a matter of routine to show that $V_k T^{(k)} e_{\omega, n}^{(k)}= W^{(k)} V_k e_{\omega, n}^{(k)}$ for all $\omega \in \varOmega,$ $n\in \zbb_+$ and $k=1,2.$ This implies the claimed unitary equivalence. As a consequence, we see that $W^{(1)} \cong W^{(2)}$. Hence, by Corollary~ \ref{dom2}, there exists a unitary isomorphism $U_0\in \boldsymbol B(\mathcal M^{(1)}, \mathcal M^{(2)})$ such that
\begin{align} \label{Jur3} U_0 W_n^{(1)} = W_n^{(2)} U_0, \quad n\in \zbb_+.
\end{align} Given $k,l \in \{1,2\}$ and $\omega_0 \in \varOmega$, we set
\begin{align*} \varOmega_{\omega_0}^{(k,l)}=\big\{\omega\in \varOmega \colon \lambda_{\omega, n}^{(k)} = \lambda_{\omega_0, n}^{(l)} \, \forall n \in \zbb_+\big\}=\big\{\omega \in \varOmega \colon S_{\omega}^{(k)} = S_{\omega_0}^{(l)}\big\}.
\end{align*} Our next goal is to show that
\begin{align} \label{Jur1} \card \varOmega_{\omega_0}^{(1,1)} = \card \varOmega_{\omega_0}^{(2,1)}, \quad \omega_0 \in \varOmega.
\end{align} For this, fix $\omega_0 \in \varOmega$. It follows from the injectivity of $U_0$ that \allowdisplaybreaks
\begin{align} \notag U_0 \bigg(\bigcap_{n=0}^{\infty} \ker (\lambda_{\omega_0, n}^{(1)} I - W_n^{(1)})\bigg) & = \bigcap_{n=0}^{\infty} U_0 \Big (\ker (\lambda_{\omega_0, n}^{(1)} I - W_n^{(1)})\Big )
\\ \label{Ko} & \hspace{-1.7ex}\overset{\eqref{Jur3}}= \bigcap_{n=0}^{\infty} \ker (\lambda_{\omega_0, n}^{(1)} I - W_n^{(2)}).
\end{align} Since
\begin{align*} \ker (\lambda_{\omega_0, n}^{(1)} I - W_n^{(k)}) = \bigoplus_{\substack{\omega\in \varOmega\colon \\ \lambda_{\omega, n}^{(k)} = \lambda_{\omega_0, n}^{(1)}}} \langle e_{\omega, 0}^{(k)} \rangle, \quad n \in \zbb_+, \, k=1,2,
\end{align*} and consequently
\begin{align*} \bigcap_{n=0}^{\infty} \ker (\lambda_{\omega_0, n}^{(1)} I - W_n^{(k)}) = \bigoplus_{\omega\in \varOmega_{\omega_0}^{(k,1)}} \langle e_{\omega, 0}^{(k)} \rangle, \quad k=1,2,
\end{align*} we deduce that \allowdisplaybreaks
\begin{align*} \card \varOmega_{\omega_0}^{(1,1)} &= \dim \bigoplus_{\omega\in \varOmega_{\omega_0}^{(1,1)}} \langle e_{\omega, 0}^{(1)} \rangle = \dim \bigcap_{n=0}^{\infty} \ker (\lambda_{\omega_0, n}^{(1)} I - W_n^{(1)})
\\ & \hspace{-1.7ex}\overset{\eqref{Ko}}= \dim \bigcap_{n=0}^{\infty} \ker (\lambda_{\omega_0, n}^{(1)} I - W_n^{(2)}) = \card \varOmega_{\omega_0}^{(2,1)}.
\end{align*} Hence, the condition \eqref{Jur1} holds. Since by \eqref{Jur3}, $U_0^* W_n^{(2)} = W_n^{(1)} U_0^*$ for all $n\in \zbb_+$, we infer from \eqref{Jur1} that
\begin{align} \label{Jur2} \card \varOmega_{\omega_0}^{(2,2)} = \card \varOmega_{\omega_0}^{(1,2)}, \quad \omega_0 \in \varOmega.
\end{align} Using the equivalence relations $\mathcal R_k \subseteq \varOmega \times \varOmega$, $k=1,2$, defined by
\begin{align*} \omega \mathcal R_k \omega^{\prime} \iff S_{\omega}^{(k)} = S_{\omega^{\prime}}^{(k)}, \quad \omega, \omega^{\prime} \in \varOmega, \, k \in \{1,2\},
\end{align*} and combining \eqref{Jur1} with \eqref{Jur2}, we see that corresponding equivalence classes of $\mathcal R_1$ and $\mathcal R_2$ (those on which $S^{(1)}$ and $S^{(2)}$ take the same value) have equal cardinalities. Gluing together bijections between the members of each such pair of classes yields a bijection $\varPhi$ as in (ii).
(ii)$\Rightarrow$(i) This implication is obvious.
\end{proof}
\section{\label{Sec5}Unitary equivalence of $2$-isometries satisfying the kernel condition} In view of the well-known characterizations of the unitary equivalence of normal operators (see e.g., \cite[Chap.\ 7]{b-s}), Lemma~ \ref{unrown} reduces the question of unitary equivalence of $2$-isometries satisfying the kernel condition to the consideration of pure operators in this class. By Theorem~ \ref{Zak1} below, a $2$-isometry satisfying the kernel condition is pure if and only if it is unitarily equivalent to an operator valued unilateral weighted shift $W$ on $\ell^2_{\mathcal M}$ with weights $\{W_n\}_{n=0}^{\infty}$ defined by \eqref{wagi}. Our first goal is to give necessary and sufficient conditions for two such operators to be unitarily equivalent (see Theorem~ \ref{fincyc}). Next, we discuss the question of when a pure $2$-isometry satisfying the kernel condition is unitarily equivalent to an orthogonal sum of unilateral weighted shifts (see Theorem~ \ref{fincyc2}). This enables us to answer the question of whether all finitely multicyclic pure $2$-isometries satisfying the kernel condition are necessarily finite orthogonal sums of weighted shifts (see Corollary~ \ref{mulcyc}).
Before stating a model theorem for pure $2$-isometries satisfying the kernel condition, we list some basic properties of the sequence $\{\xi_n\}_{n=0}^{\infty}$ of self-maps of the interval $[1,\infty)$ which are defined by
\begin{align} \label{xin} \xi_n(x) = \sqrt{\frac{1+ (n+1)(x^2-1)}{1+ n(x^2-1)}}, \quad x \in [1,\infty), \, n\in \zbb_+.
\end{align}
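The basic properties of $\{\xi_n\}_{n=0}^{\infty}$ are collected in the next lemma; they can also be checked numerically. The following Python sketch is a hypothetical sanity check only (the sample points and index ranges are arbitrary choices), verifying the identity map property, the semigroup law, the strict decrease towards $1$, and the recursion $\zeta_{n+1} = \sqrt{(2\zeta_n^2-1)/\zeta_n^2}$:

```python
import math

def xi(n, x):
    # xi_n(x) = sqrt((1 + (n+1)(x^2 - 1)) / (1 + n(x^2 - 1))), as in (xin)
    t = x * x - 1.0
    return math.sqrt((1.0 + (n + 1) * t) / (1.0 + n * t))

for x in (1.0, 1.3, 2.0, 5.0):
    assert math.isclose(xi(0, x), x)                 # xi_0 is the identity map
    for m in range(6):
        for n in range(6):
            # the semigroup law: xi_{m+n} = xi_m o xi_n
            assert math.isclose(xi(m + n, x), xi(m, xi(n, x)))
    for n in range(6):
        z = xi(n, x)
        # the recursion: xi_{n+1} = sqrt((2 xi_n^2 - 1) / xi_n^2)
        assert math.isclose(xi(n + 1, x), math.sqrt((2.0 * z * z - 1.0) / (z * z)))
        if x > 1.0:
            # strict decrease towards 1 on (1, infinity)
            assert xi(n, x) > xi(n + 1, x) > 1.0
print("properties of the maps xi_n verified numerically")
```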
\begin{lemma} \label{xin11}
\mbox{\phantom{a}} \begin{enumerate}
\item[(i)] $\xi_0$ is the identity map,
\item[(ii)] $\xi_{m+n} = \xi_m \circ \xi_n$ for all $m,n\in \zbb_+$,
\item[(iii)] $\xi_n(1)=1$ for all $n\in \zbb_+$,
\item[(iv)] $\xi_{n}(x) > \xi_{n+1}(x) > 1$ for all $x \in (1,\infty)$ and $n\in \zbb_+,$
\item[(v)] if $\{\zeta_n\}_{n=0}^{\infty}$ is a sequence of self-maps of $[1,\infty)$ such that $\zeta_0$ is the identity map and $\zeta_{n+1} = \sqrt{\frac{2\zeta_{n}^2-1}{\zeta_{n}^2}}$ for all $n\in \zbb_+,$ then $\zeta_{n}= \xi_{n}$ for all $n\in \zbb_+.$
\end{enumerate}
\end{lemma} The following model theorem, which is a part of \cite[Theorem~ 2.5]{A-C-J-S}, classifies (up to unitary equivalence) pure $2$-isometries satisfying the kernel condition.
\begin{theorem} \label{Zak1} If $\hh \neq \{0\}$ and $T\in \boldsymbol B(\hh)$, then the following are equivalent{\em :}
\begin{enumerate}
\item[(i)] $T$ is an analytic $2$-isometry satisfying the kernel condition,
\item[(ii)] $T$ is a completely non-unitary $2$-isometry satisfying the kernel condition,
\item[(iii)] $T$ is a pure $2$-isometry satisfying the kernel condition,
\item[(iv)] $T$ is unitarily equivalent to an operator valued unilateral weighted shift $W$ on $\ell^2_{\mathcal M}$ with weights\footnote{\;\label{Foot2}Note that the sequence $\{W_n\}_{n=0}^{\infty} \subseteq \boldsymbol B(\mathcal M)$ defined by \eqref{wagi} is uniformly bounded, and consequently $W\in \boldsymbol B(\ell^2_{\mathcal M})$.} $\{W_n\}_{n=0}^{\infty}$ given by
\begin{align} \label{wagi}
\left.
\begin{gathered} W_n = \int_{[1,\infty)} \xi_n(x) E(d x), \quad n \in \zbb_+,
\\
\begin{minipage}{63ex} where $E$ is a compactly supported $\boldsymbol B(\mathcal M)$-valued Borel spectral measure on the interval $[1,\infty)$.
\end{minipage}
\end{gathered}
\; \right\}
\end{align}
\end{enumerate}
\end{theorem} Now we answer the question of when two pure $2$-isometries satisfying the kernel condition are unitarily equivalent. We refer the reader to \cite[Section~ 2.2]{JJS} (resp., \cite[Chapter~ 7]{b-s}) for necessary information on the diagonal operators (resp., the spectral type and the multiplicity function of a selfadjoint operator, which is a complete system of its unitary invariants).
\begin{theorem} \label{fincyc} Suppose $W\in \boldsymbol B(\ell^2_{\mathcal M})$ is an operator valued unilateral weighted shift with weights $\{W_n\}_{n=0}^{\infty}$ given by
\begin{align*} W_n = \int_{[1,\infty)} \xi_n(x) E(d x), \quad n \in \zbb_+,
\end{align*} where $\{\xi_n\}_{n=0}^{\infty}$ are as in \eqref{xin} and $E$ is a compactly supported $\boldsymbol B({\mathcal M})$-valued Borel spectral measure on $[1,\infty)$. Let $({\widetilde W}, {\widetilde{\mathcal M}}, \{\widetilde W_n\}_{n=0}^{\infty}, {\widetilde E})$ be another such system. Then the following conditions are equivalent{\em :}
\begin{enumerate}
\item[(i)] $W\cong \widetilde W$,
\item[(ii)] $W_0\cong \widetilde W_0$,
\item[(iii)] the spectral types and the multiplicity functions of $W_0$ and $\widetilde W_0$ coincide,
\item[(iv)] the spectral measures $E$ and $\widetilde E$ are unitarily equivalent.
\end{enumerate} Moreover, if the operators $W_0$ and $\widetilde W_0$ are diagonal, then {\em (ii)} holds if and only if
\begin{enumerate}
\item[(v)] $\dim \ker(\lambda I - W_0) = \dim \ker(\lambda I - \widetilde W_0)$ for all $\lambda \in \C$.
\end{enumerate}
\end{theorem}
\begin{proof} Since $\xi_0(x) = x$ for all $x \in [1,\infty)$, $E$ and $\widetilde E$ are the spectral measures of $W_0$ and $\widetilde W_0$, respectively. Hence, the conditions (ii) and (iv) are equivalent. That (ii) and (iii) are equivalent follows from \cite[Theorem~ 7.5.2]{b-s}. Note that $\{W_n\}_{n=0}^{\infty}$ are commuting positive bounded operators such that $W_n \Ge I$ for all $n\in \zbb_+$. The same is true for $\{\widetilde W_n\}_{n=0}^{\infty}$. Therefore, $W$ and $\widetilde W$ satisfy the assumptions of Theorem~ \ref{Fran4}.
(i)$\Rightarrow$(ii) This is a direct consequence of Theorem~ \ref{Fran4}.
(iv)$\Rightarrow$(i) If $UE=\widetilde E U$, where $U\in \boldsymbol B({\mathcal M}, \widetilde {\mathcal M})$ is a unitary isomorphism, then by \cite[Theorem~ 5.4.9]{b-s} $UW_n=\widetilde W_nU$ for $n \in \zbb_+$. Hence, by Theorem~ \ref{Fran4}, $W\cong \widetilde W$.
It is a simple matter to show that if the operators $W_0$ and $\widetilde W_0$ are diagonal, then the conditions (ii) and (v) are equivalent. This completes the proof.
\end{proof} It follows from Theorems~ \ref{Zak1} and \ref{fincyc} that the spectral type and the multiplicity function of the spectral measure of $W_0$ form a complete system of unitary invariants for completely non-unitary $2$-isometries satisfying the kernel condition.
Theorem~ \ref{fincyc2} below answers the question of when a completely non-unitary $2$-isometry satisfying the kernel condition is unitarily equivalent to an orthogonal sum of unilateral weighted shifts. In the case when $\ell^2_{\mathcal M}$ is a separable Hilbert space, this result can in fact be deduced from \cite[Theorem~ 3.9]{lam-1}. There are two reasons why we have decided to include the proof of Theorem~ \ref{fincyc2}. First, our result is stated for Hilbert spaces which are not assumed to be separable. Second, an essential part of the proof of Theorem~ \ref{fincyc2} will be used later in the proof of Theorem~ \ref{2isscs-t}.
\begin{theorem} \label{fincyc2} Let $W\in \boldsymbol B(\ell^2_{\mathcal M})$ be an operator valued unilateral weighted shift with weights $\{W_n\}_{n=0}^{\infty}$ given by
\begin{align} \label{wnint} W_n = \int_{[1,\infty)} \xi_n(x) E(d x), \quad n \in \zbb_+,
\end{align} where $\{\xi_n\}_{n=0}^{\infty}$ are as in \eqref{xin} and $E$ is a compactly supported $\boldsymbol B({\mathcal M})$-valued Borel spectral measure on $[1,\infty)$. Then the following conditions are equivalent{\em :}
\begin{enumerate}
\item[(i)] $W \cong \bigoplus_{j\in J} S_j$, where $S_j$ are unilateral weighted shifts,
\item[(ii)] $W_0$ is a diagonal operator.
\end{enumerate} Moreover, if {\em (i)} holds, then the index set $J$ is of cardinality $\dim \ker W^*.$
\end{theorem}
\begin{proof} (ii)$\Rightarrow$(i) Since $W_0$ is a diagonal operator and $W_0\Ge I$, there exists an orthonormal basis $\{e_j\}_{j\in J}$ of $\mathcal M$ and a system $\{\lambda_j\}_{j\in J} \subseteq [1,\infty)$ such that
\begin{align*} W_0e_j = \lambda_j e_j, \quad j\in J.
\end{align*} By \eqref{aopws}, $\dim \ker W^*=\dim \mathcal M=\text{the cardinality of $J$}$. Note that $E$, which is the spectral measure of $W_0$, is given by
\begin{align} \label{edel} E(\varDelta) f = \sum_{j\in J} \chi_{\varDelta}(\lambda_j) \langle f, e_j \rangle e_j, \quad f \in \mathcal M, \, \varDelta \in \borel{[1,\infty)}.
\end{align} Let $S_j$ be the unilateral weighted shift in $\ell^2$ with weights $\{\xi_n(\lambda_j)\}_{n=0}^{\infty}$. By \cite[Lemma~ 6.1 and Proposition~ 6.2]{Ja-St}, $S_j$
is a $2$-isometry such that $\|S_j\|=\lambda_j$ for every $j\in J$. Since $\sup_{j\in J} \lambda_j < \infty$, we see that $\bigoplus_{j\in J} S_j \in \boldsymbol B\big((\ell^2)^{\oplus{\mathfrak n}}\big)$, where $\mathfrak{n}$ is the cardinal number of $J$. Define the operator $V \colon \ell^2_{\mathcal M} \to (\ell^2)^{\oplus{\mathfrak n}}$ by
\begin{align*} (V(h_0,h_1,\dots))_j = (\langle h_0, e_j\rangle, \langle h_1, e_j\rangle, \ldots), \quad j \in J, \, (h_0,h_1,\ldots) \in \ell^2_{\mathcal M}.
\end{align*} Since for every $(h_0,h_1,\ldots) \in \ell^2_{\mathcal M}$,
\begin{align*}
\sum_{j\in J} \sum_{n=0}^{\infty} |\langle h_n, e_j\rangle|^2 = \sum_{n=0}^{\infty} \sum_{j\in J}
|\langle h_n, e_j\rangle|^2 = \sum_{n=0}^{\infty}
\|h_n\|^2 = \|(h_0,h_1,\ldots)\|^2,
\end{align*} the operator $V$ is an isometry. Note that for all $j,k\in J$ and $m\in \zbb_+$,
\begin{align*} (V(\underset{\langle 0 \rangle} 0, \ldots, 0, \underset{\langle m \rangle}{e_k}, 0, \ldots))_j =
\begin{cases} (0,0,\ldots) & \text{if } j\neq k,
\\[1.5ex] (\underset{\langle 0 \rangle} 0, \ldots, 0, \underset{\langle m \rangle} 1, 0, \dots) & \text{if } j=k,
\end{cases}
\end{align*} which means that the range of $V$ is dense in $(\ell^2)^{\oplus{\mathfrak n}}$. Thus $V$ is a unitary isomorphism. It follows from \eqref{wnint} that
\begin{align} \label{wnej} W_n e_j = \int_{[1,\infty)} \xi_n(x) E(dx) e_j \overset{\eqref{edel}}= \xi_n(\lambda_j) e_j, \quad j \in J, \, n \in \zbb_+.
\end{align} This implies that \allowdisplaybreaks
\begin{align*} VW(h_0,h_1,\ldots) & = \{(0, \langle W_0h_0, e_j\rangle, \langle W_1h_1, e_j\rangle, \ldots)\}_{j\in J}
\\ & \hspace{-.7ex}\overset{\eqref{wnej}}= \{(0, \xi_0(\lambda_j)\langle h_0, e_j\rangle, \xi_1(\lambda_j)\langle h_1, e_j\rangle, \ldots)\}_{j\in J}
\\ & = \{S_j(V(h_0,h_1, \ldots))_j\}_{j\in J}
\\ & = \Big(\bigoplus_{j\in J} S_j\Big) V(h_0,h_1,\ldots), \quad (h_0,h_1,\ldots) \in \ell^2_{\mathcal M}.
\end{align*}
(i)$\Rightarrow$(ii) Suppose that $W\cong \bigoplus_{j\in J} S_j$, where $S_j$ are unilateral weighted shifts. Since $W$ is a $2$-isometry, so is $S_j$ for every $j\in J$. Hence $S_j$ is injective for every $j\in J$. As a consequence, there is no loss of generality in assuming that the weights of $S_j$ are positive (see \cite[Corollary~ 1]{Shi}). By \cite[Lemma~ 6.1(ii)]{Ja-St}, for every $j\in J$ there exists $\lambda_j \in [1,\infty)$ such that $\{\xi_n(\lambda_j)\}_{n=0}^{\infty}$ are weights of $S_j$. Let $\widetilde{\mathcal M}$ be a Hilbert space such that $\dim \widetilde{\mathcal M}=\text{the cardinality of $J$}$, $\{\tilde e_j\}_{j\in J}$ be an orthonormal basis of $\widetilde{\mathcal M}$ and $\widetilde E$ be a $\boldsymbol B(\widetilde{\mathcal M})$-valued Borel spectral measure on $[1,\infty)$ given by
\begin{align*} \widetilde E(\varDelta) f = \sum_{j\in J} \chi_{\varDelta}(\lambda_j) \langle f, \tilde e_j \rangle \tilde e_j, \quad f \in \widetilde {\mathcal M}, \, \varDelta \in \borel{[1,\infty)}.
\end{align*}
Since by \cite[Proposition~ 6.2]{Ja-St}, $\sup_{j\in J} \lambda_j = \sup_{j\in J} \|S_j\| < \infty$, the spectral measure $\widetilde E$ is compactly supported in $[1,\infty)$. Define the sequence $\{\widetilde W_n\}_{n=0}^{{\infty}} \subseteq \boldsymbol B(\widetilde{\mathcal M})$ by
\begin{align*} \widetilde W_n = \int_{[1,\infty)} \xi_n(x) \widetilde E(d x), \quad n \in \zbb_+.
\end{align*} Note that the sequence $\{\widetilde W_n\}_{n=0}^{{\infty}}$ is uniformly bounded (see Footnote \ref{Foot2}). Clearly, $\widetilde W_0 \tilde e_j = \lambda_j \tilde e_j$ for all $j \in J$, which means that $\widetilde W_0$ is a diagonal operator. Denote by $\widetilde W$ the operator valued unilateral weighted shift on $\ell^2_{\widetilde{\mathcal M}}$ with weights $\{\widetilde W_n\}_{n=0}^{{\infty}}$. It follows from the proof of the implication (ii)$\Rightarrow$(i) that $\widetilde W \cong \bigoplus_{j\in J} S_j$. Hence $W \cong \widetilde W$. By Theorem~ \ref{fincyc}, $W_0$ is a diagonal operator.
\end{proof}
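The unitary $V$ constructed in the proof of the implication (ii)$\Rightarrow$(i) can be exhibited concretely on a finite section. The following Python sketch is a hypothetical illustration (the eigenvalues $\lambda_j$ of the diagonal weight $W_0$ and the truncation length are arbitrary choices); it checks that $V$ is unitary and intertwines $W$ with $\bigoplus_j S_j$:

```python
import numpy as np

def xi(n, x):
    # xi_n from (xin), applied entrywise
    t = x * x - 1.0
    return np.sqrt((1.0 + (n + 1) * t) / (1.0 + n * t))

lam = np.array([1.0, 1.5, 2.0])  # hypothetical eigenvalues of the diagonal W_0
d, N = len(lam), 5               # dim M and truncation length

# Truncated operator valued shift W with diagonal weights W_n = diag(xi_n(lam_j))
W = np.zeros((N * d, N * d))
for n in range(N - 1):
    W[(n + 1) * d:(n + 2) * d, n * d:(n + 1) * d] = np.diag(xi(n, lam))

# Truncated scalar shifts S_j with weights xi_n(lam_j), assembled as a direct sum
S = np.zeros((N * d, N * d))
for j in range(d):
    for n in range(N - 1):
        S[j * N + n + 1, j * N + n] = xi(n, lam[j])

# The permutation V: (h_0, h_1, ...) |-> ({<h_n, e_j>}_n)_j, as in the proof
V = np.zeros((N * d, N * d))
for n in range(N):
    for j in range(d):
        V[j * N + n, n * d + j] = 1.0

assert np.allclose(V @ V.T, np.eye(N * d))  # V is unitary (a permutation)
assert np.allclose(V @ W, S @ V)            # V W = (direct sum of S_j) V
print("intertwining verified on the finite section")
```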
\begin{remark} Regarding Theorem~ \ref{fincyc2}, it is worth noting that if $\dim \ker W^* \Le \aleph_0$ and $W_0$ is diagonal, then $W$ can be modelled by a weighted composition operator on an $L^2$-space over a $\sigma$-finite measure space (use \cite[Section 2.3(g)]{BJJS2} and an appropriately adapted version of \cite[Corollary~ C.2]{BJJS1}).
$\diamondsuit$
\end{remark} Recall that for a given operator $T \in B (\hh)$, the smallest cardinal number $\mathfrak n$ for which there exists a closed vector subspace $\mathcal N$ of $\hh$ such that $\dim {\mathcal N} = \mathfrak n$ and $\hh = \bigvee_{n=0}^{\infty} T^n(\mathcal N)$ is called the {\em order of multicyclicity} of $T$. If the order of multicyclicity of $T$ is finite, then $T$ is called {\em finitely multicyclic}. As shown in Lemma~ \ref{mulcyc1} below, the order of multicyclicity of a completely non-unitary $2$-isometry can be calculated explicitly (in fact, the proof of Lemma~ \ref{mulcyc1} contains more information). Part (i) of Lemma~ \ref{mulcyc1} appeared in \cite[Proposition~ 1(i)]{Her} with a slightly different definition of the order of multicyclicity and a different proof. Part (ii) of Lemma~ \ref{mulcyc1} is covered by \cite[Lemma~ 2.19(b)]{Ch-0} in the case of finite multicyclicity. In fact, the proof of part (ii) of Lemma~ \ref{mulcyc1}, which is given below, works for analytic operators having Wold-type decomposition in the sense of Shimorin (see \cite{Sh}).
\begin{lemma} \label{mulcyc1} Let $T\in \boldsymbol B(\hh)$ be an operator. Then
\begin{enumerate}
\item[(i)] the order of multicyclicity of $T$ is greater than or equal to $\dim \ker T^*,$
\item[(ii)] if $T$ is a completely non-unitary $2$-isometry, then the order of multicyclicity of $T$ is equal to $\dim \ker T^*.$
\end{enumerate}
\end{lemma}
\begin{proof} (i) Let $\mathcal N$ be a closed vector subspace of $\hh$ such that $\hh = \bigvee_{n=0}^{\infty} T^n(\mathcal N)$ and $P\in \boldsymbol B(\hh)$ be the orthogonal projection of $\hh$ onto $\ker T^*.$ Clearly, $\ker T^* \perp T^n(\hh)$ for all $n \in\nbb.$ If $f \in \ker T^* \ominus \overline{P(\mathcal N)}$, then
\begin{align*} \langle f, T^0 h \rangle = \langle f, P h \rangle = 0, \quad h \in {\mathcal N},
\end{align*} which together with the previous statement yields $f \in \big(\bigvee_{n=0}^{\infty} T^n(\mathcal N) \big)^{\perp} = \{0\}.$ Hence $\overline{P(\mathcal N)} = \ker T^*.$ As a consequence, the operator
$P|_{\mathcal N}\colon \mathcal N\to \ker T^*$ has dense range, which implies that $\dim \ker T^* \Le \dim \mathcal N$ (see \cite[Problem 56]{Hal}). This gives~ (i).
(ii) Since, by \cite[Theorem~ 3.6]{Sh}, $\hh = \bigvee_{n=0}^{\infty} T^n(\ker T^*),$ we see that the order of multicyclicity of $T$ is less than or equal to $\dim \ker T^*.$ This combined with (i) completes the proof.
\end{proof} The following result generalizes the remarkable fact that a finitely multicyclic completely non-unitary isometry is unitarily equivalent to an orthogonal sum of finitely many unilateral unweighted shifts (cf. \cite[Proposition~ 2.4]{Kub}).
\begin{corollary} \label{mulcyc} A finitely multicyclic completely non-unitary $2$-isometry $T$ satisfying the kernel condition is unitarily equivalent to an orthogonal sum of $\mathfrak n$ unilateral weighted shifts, where $\mathfrak n$ equals the order of multicyclicity of $T$. Moreover, for each cardinal number $\mathfrak n \Ge \aleph_0$ there exists a completely non-unitary $2$-isometry satisfying the kernel condition, whose order of multicyclicity equals $\mathfrak n$ and which is not unitarily equivalent to any orthogonal sum of unilateral weighted shifts.
\end{corollary}
\begin{proof} Apply Theorem~ \ref{fincyc2}, Lemma~ \ref{mulcyc1} and the fact that positive operators on finite-dimensional Hilbert spaces are always diagonal, while on infinite-dimensional ones they need not be.
\end{proof}
\section{\label{Sec9} Unitary equivalence of $2$-isometric weighted shifts on directed trees satisfying the kernel condition} This section provides a model for a $2$-isometric weighted shift $\slam$ on a rooted directed tree $\tcal$ which satisfies the condition \eqref{hypo+} (see Theorem~ \ref{2isscs-t}). Although the kernel condition is weaker than the condition \eqref{hypo+}, both are equivalent if $\tcal$ is leafless and the weights of $\slam$ are nonzero. The aforesaid model enables us to classify (up to unitary equivalence) $2$-isometric weighted shifts on rooted directed trees satisfying the condition \eqref{hypo+} in terms of the $k$th generation branching degree (see Theorem~ \ref{equival}).
We begin with necessary information on weighted shifts on directed trees. The reader is referred to \cite{JJS} for more details on this subject (see also \cite{B-D-P,K-L-P,M-A} for very recent developments). Let $\tcal = (V,E)$ be a directed tree (if not stated otherwise, $V$ and $E$ stand for the sets of vertices and edges of $\tcal$, respectively). If $\tcal$ has a root, we denote it by $\rot$. We set $V^{\circ}=V$ if $\tcal$ is rootless and $V^{\circ}=V\setminus \{\rot\}$ otherwise. We say that $\tcal$ is {\em leafless} if $V=V^{\prime}$, where $V^{\prime} := \{u \in V\colon \child{u} \neq \emptyset\}$. Given $W\subseteq V$ and $n\in \zbb_+,$ we set $\childn{n}{W}=W$ if $n=0$ and $\childn{n}{W}=\child{\childn{n-1}{W}}$ if $n\Ge 1$, where $\child{W} = \bigcup_{u\in W} \{v\in V \colon (u,v) \in E\}.$ We put $\des{W}=\bigcup_{n=0}^{\infty} \childn{n}{W}.$ Given $v\in V$, we write $\child{v}=\child{\{v\}}$ and $\childn{n}{v}=\childn{n}{\{v\}}$. For $v\in V^{\circ}$, a unique $u\in V$ such that $(u,v)\in E$ is said to be the {\em parent} of $v$; we denote it by $\parent{v}$. The cardinality of $\child{v}$ is called the {\em degree} of a vertex $v\in V$ and denoted by $\deg{v}$. Recall that if $\tcal$ is rooted, then by \cite[Corollary~ 2.1.5]{JJS}, we have
\begin{align} \label{ind1} V = \bigsqcup_{n=0}^\infty \childn{n} {\rot} \quad \text{(the disjoint sum)}.
\end{align}
Following \cite[page 67]{JJS}, we define the
directed tree $\tcal_{\eta,\kappa} =
(V_{\eta,\kappa}, E_{\eta,\kappa})$ by
\allowdisplaybreaks
\begin{align} \label{tetak}
\left.
\begin{aligned} V_{\eta,\kappa} & = \big\{-k\colon k\in J_\kappa\big\} \cup \{0\} \cup \big\{(i,j)\colon i\in J_\eta,\, j \in J_{\infty}\big\},
\\ E_{\eta,\kappa} & = E_\kappa \cup \big\{(0,(i,1))\colon i \in J_\eta\big\} \cup \big\{((i,j),(i,j+1))\colon i\in J_\eta,\, j\in J_{\infty}\big\},
\\ E_\kappa & = \big\{(-k,-k+1) \colon k\in J_\kappa\big\},
\end{aligned}
\;\right\}
\end{align} where $\eta \in \{2,3,4,\ldots\} \cup \{\infty\}$, $\kappa \in \zbb_+ \cup \{\infty\}$ and $J_\iota = \{k \in \zbb_+\colon 1 \Le k\Le \iota\}$ for $\iota \in \zbb_+ \sqcup \{\infty\}$. The directed tree $\tcal_{\eta,\kappa}$ is leafless; it has exactly one branching vertex, namely $0$, and $\deg{0}=\eta$. Moreover, it is rooted if $\kappa < \infty$ and rootless if $\kappa=\infty$.
Let $\tcal=(V,E)$ be a directed tree. In what follows $\ell^2(V)$ stands for the Hilbert space of square summable complex functions on $V$ equipped with the standard inner product. If $W$ is a nonempty subset of $V,$ then we regard the Hilbert space $\ell^2(W)$ as a closed vector subspace of $\ell^2(V)$ by identifying each $f\in \ell^2(W)$ with the function $\widetilde f \in \ell^2(V)$ which extends $f$ and vanishes on the set $V \setminus W$. Note that the set $\{e_u\}_{u\in V}$, where $e_u\in \ell^2(V)$ is the characteristic function of $\{u\}$, is an orthonormal basis of $\ell^2(V)$. Given a system $\lambdab = \{\lambda_v\}_{v\in V^{\circ}}$ of complex numbers, we define the operator $\slam$ in $\ell^2(V)$, called a {\em weighted shift on} $\tcal$ with weights $\lambdab$ (or simply a weighted shift on $\tcal$), as follows
\begin{align*}
\begin{aligned} \mathscr D(\slam) & = \{f \in \ell^2(V) \colon \varLambda_\tcal f \in \ell^2(V)\},
\\ \slam f & = \varLambda_\tcal f, \quad f \in \mathscr D(\slam),
\end{aligned}
\end{align*} where $\mathscr D(\slam)$ stands for the {\em domain} of $\slam$ and $\varLambda_\tcal$ is the mapping defined on complex functions $f$ on $V$ by
\begin{align*} (\varLambda_\tcal f) (v) =
\begin{cases} \lambda_v \cdot f\big(\parent v\big) & \text{if } v\in V^\circ,
\\
0 & \text{if } v \text{ is a root of } \tcal.
\end{cases}
\end{align*}
Now we collect some properties of weighted shifts on directed trees that are needed in this paper (see \cite[Propositions~ 3.1.3, 3.1.8, 3.4.3 and 3.5.1]{JJS}). From now on, we adopt the convention that $\sum_{v\in \emptyset} x_v = 0$.
\begin{lemma} \label{basicws} Let $\slam$ be a weighted shift on $\tcal$ with weights $\lambdab=\{\lambda_v\}_{v\in V^{\circ}}$. Then
\begin{enumerate}
\item[(i)] $e_u$ is in $\mathscr D(\slam)$ if and only if $\sum_{v \in \child{u}}|\lambda_v|^2 < \infty;$ if $e_u \in \mathscr D(\slam)$, then $\slam e_u = \sum_{v \in \child{u}}\lambda_v e_v$ and
$\|\slam e_u\|^2 = \sum_{v \in
\child{u}}|\lambda_v|^2,$
\item[(ii)]
$\slam \in \boldsymbol B(\ell^2(V))$ if and only if $\sup_{u\in V} \sum_{v \in \child{u}}|\lambda_v|^2 < \infty;$ if this is the case, then $\|\slam\|^2=\sup_{u\in V}
\|\slam e_u\|^2 = \sup_{u\in V} \sum_{v \in
\child{u}}|\lambda_v|^2.$
\end{enumerate} Moreover, if $\slam \in \boldsymbol B(\ell^2(V))$, then
\begin{enumerate}
\item[(iii)] $\ker{\slam^*} =
\begin{cases} \langle e_{\rot} \rangle \oplus \bigoplus_{u \in V^\prime} \big(\ell^2(\child{u}) \ominus \langle \lambdab^u \rangle\big) & \text{if $\tcal$ is rooted,}
\\[.5ex] \bigoplus_{u \in V^\prime} \big(\ell^2(\child{u}) \ominus \langle \lambdab^u \rangle\big) & \text{otherwise,}
\end{cases}$ \\[1ex] where $\lambdab^u \in \ell^2(\child{u})$ is given by $\lambdab^u\colon \child{u} \ni v \to \lambda_v \in \C$,
\item[(iv)]
$|\slam| e_u = \|\slam e_u\|e_u$ for all $u\in V.$
\end{enumerate}
\end{lemma} According to \cite[Lemma~ 5.3(viii)]{A-C-J-S}, bounded weighted shifts on rooted directed trees are completely non-unitary. As shown in Example~ \ref{obustr} below, this is no longer true for bounded weighted shifts on rootless directed trees, even for those that are isometric and non-unitary (note that, by \cite[Proposition~ 3.5]{Ja-St}, $2$-isometric bilateral weighted shifts are always unitary).
\begin{example} \label{obustr} Let us consider any isometric weighted shift $\slam$ on the directed tree $\tcal_{\eta,\infty}$ (see \eqref{tetak}) with weights $\lambdab=\{\lambda_v\}_{v\in V_{\eta,\infty}}$, where $\eta \in \{2,3,4, \ldots\} \cup \{\infty\}$ is fixed. This means that $\sum_{i=1}^{\eta}
|\lambda_{i,1}|^2=1$ and $|\lambda_{i,j}| =
|\lambda_{-k}|=1$ for all $i\in J_{\eta}$, $j\in J_{\infty} \setminus \{1\}$ and $k\in \zbb_+$. We will show that $\slam$ is non-unitary but not completely non-unitary. For this, by Wold's decomposition theorem (see \cite[Theorem~ 1.1]{SF}), it suffices to prove that $\ker \slam^*\neq \{0\}$ and $\bigoplus_{n=0}^{\infty} \slam^n(\ker \slam^*) \neq \ell^2(V_{\eta,\infty})$. In view of Lemma~ \ref{basicws}(iii), we have
\begin{align} \label{orthd} \ker \slam^* = \bigoplus_{v \in V_{\eta,\infty}} \Big(\ell^2(\child{v}) \ominus \langle \lambdab^{v} \rangle\Big).
\end{align} Since $\eta \Ge 2$ and $\lambdab^{v}\neq 0$ for all $v\in V_{\eta,\infty}$, we deduce that the only nonzero term in the orthogonal decomposition \eqref{orthd} is $\ell^2(\child{0}) \ominus \langle \lambdab^{0} \rangle$. Hence $\ker \slam^*\neq \{0\}$ and
\begin{align*} \bigoplus_{n=0}^{\infty} \slam^n(\ker \slam^*) \subseteq \chi_{\varOmega} \cdot \ell^2(V_{\eta,\infty}) \neq \ell^2(V_{\eta,\infty}),
\end{align*} where $\varOmega = \bigcup_{n=1}^{\infty} \childn{n}{0}$. This proves our claim.
$\diamondsuit$
\end{example}
\begin{remark} By \cite[Remark~ 5.8 and Proposition~ 5.11]{A-C-J-S}, a $2$-isometric weighted shift on a rootless directed tree with nonzero weights which satisfies the kernel condition is isometric. Further, if $\slam$ is an isometric weighted shift on a rootless directed tree, then by Wold's decomposition theorem, it is (up to unitary equivalence) an orthogonal sum $W \oplus S^{\oplus{\mathfrak n}}$, where $W$ is a unitary operator, $S$ is the isometric unilateral shift of multiplicity $1$ and ${\mathfrak n}=\dim \ker \slam^*$. In particular, the isometry $\slam$ in Example~ \ref{obustr} is equal to $U \oplus S^{\oplus (\eta - 1)}$, where $U$ is the unitary bilateral shift of multiplicity~ $1$.
$\diamondsuit$
\end{remark} Recall that a weighted shift $\slam \in \boldsymbol B(\ell^2(V))$ on a leafless directed tree $\tcal$ with nonzero weights $\lambdab=\{\lambda_v\}_{v\in V^{\circ}}$ satisfies the kernel condition if and only if there exists a family $\{\alpha_v\}_{v\in V} \subseteq \rbb_+$ such that
\begin{align} \label{hypo+}
\|\slam e_u\|=\alpha_{\parent{u}}, \quad u \in V^{\circ}.
\end{align} In general, the condition \eqref{hypo+} is stronger than the kernel condition (see \cite[Lemma~ 5.6]{A-C-J-S}). In view of \cite[Remark~ 5.8 and Proposition~ 5.10]{A-C-J-S}, if $\slam\in \boldsymbol B(\ell^2(V))$ is a $2$-isometric weighted shift on a rooted directed tree $\tcal$ with nonzero weights
$\lambdab=\{\lambda_v\}_{v\in V^{\circ}}$ which satisfies the kernel condition, then $\tcal$ is leafless, $\|\slam e_v\| = \mathrm{const}$ on $\childn{n}{\rot}$ for every $n\in \zbb_+$, and the corresponding sequence of constants forms a sequence of positive weights of a $2$-isometric unilateral weighted shift (cf.\ \cite[Lemma~ 6.1(ii)]{Ja-St}). This suggests the following method of constructing such $\slam$'s.
\begin{procedure} \label{uwrem} Let $\tcal$ be a rooted and leafless directed tree. Take a sequence $\{\beta_n\}_{n=0}^{\infty}$ of positive weights of a $2$-isometric unilateral weighted shift. By \cite[Lemma~ 6.1(ii)]{Ja-St}, there exists $x\in [1,\infty)$ such that $\beta_n=\xi_n(x)$ for all $n\in\zbb_+$ (the converse statement is true as well). Then, using \eqref{ind1} and the following equation (cf.\ \cite[Eq.\ (2.2.6)]{BJJS})
\begin{align*} \childn{n+1}{\rot} = \bigsqcup_{u\in \childn{n}{\rot}} \child{u}, \quad n\in \zbb_+,
\end{align*} we can define inductively for every $n\in\zbb_+$ the system $\{\lambda_v\}_{v\in \childn{n+1}{\rot}}$ of complex numbers (not necessarily nonzero) such that
$\sum_{w\in \child{u}} |\lambda_w|^2 = \beta_{n}^2$ for all $u\in \childn{n}{\rot}$. Let $\slam$ be the weighted shift on $\tcal$ with the so-constructed weights $\lambdab=\{\lambda_v\}_{v \in V^{\circ}}.$ Clearly, in view of Lemma~ \ref{basicws}(i), we have
\begin{align*}
x=\beta_0=\|\slam e_{\rot}\|.
\end{align*} Since the sequence $\{\xi_n(t)\}_{n=0}^{\infty}$ is monotonically decreasing for every $t\in [1,\infty)$ (see Lemma~ \ref{xin11}), we infer from \eqref{ind1} and Lemma~ \ref{basicws}(ii) that $\slam \in
\boldsymbol B(\ell^2(V))$ and $\beta_0=\|\slam\|.$ By \cite[Proposition~ 5.10]{A-C-J-S}, $\slam$ is a $2$-isometric weighted shift on $\tcal$ which satisfies \eqref{hypo+} for some $\{\alpha_v\}_{v\in V} \subseteq \rbb_+.$ Hence, according to \cite[Lemma~ 5.6]{A-C-J-S}, $\slam$ satisfies the kernel condition.
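For instance, for the rooted directed tree $\tcal_{2,0}$ (see \eqref{tetak}), one admissible choice of weights in the above construction (among many others; the equal split at the branching vertex is just one possibility) is
\begin{align*}
\lambda_{(1,1)} = \lambda_{(2,1)} = \frac{\beta_0}{\sqrt{2}}, \qquad \lambda_{(i,j+1)} = \beta_j, \quad i \in J_2, \, j\in J_{\infty},
\end{align*}
which clearly yields $\sum_{w\in \child{u}} |\lambda_w|^2 = \beta_n^2$ for all $u\in \childn{n}{\rot}$ and $n\in \zbb_+$.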
$\diamondsuit$
\end{procedure} We will show in Theorem~ \ref{2isscs-t} below that a $2$-isometric weighted shift on a rooted directed tree which satisfies \eqref{hypo+} is unitarily equivalent to an orthogonal sum of $2$-isometric unilateral weighted shifts with positive weights; the orthogonal sum always contains a ``basic'' $2$-isometric unilateral weighted shift with weights $\{\xi_n(x)\}_{n=0}^{\infty}$ for some $x\in [1,\infty)$ and a number of inflations of $2$-isometric unilateral weighted shifts with weights $\{\xi_n(x)\}_{n=k}^{\infty}$, where $k$ varies over a (possibly empty) subset of $\nbb$ (cf.\ Remark~ \ref{3uw}).
For $x \in [1,\infty)$, we denote by $S_{[x]}$ the unilateral weighted shift in $\ell^2$ with weights $\{\xi_n(x)\}_{n=0}^{\infty}$, where $\{\xi_n\}_{n=0}^{\infty}$ is as in \eqref{xin}. Given a leafless directed tree $\tcal$ and $k\in \nbb$, we define the {\em $k$th generation branching degree} ${\mathfrak j}^{\tcal}_k$ of $\tcal$ by
\begin{align} \label{defjn} {\mathfrak j}^{\tcal}_k = \sum_{u \in \childn{k-1}{\rot}} (\deg{u}-1), \quad k\in \nbb.
\end{align} Let us note that the proof of Theorem~ \ref{2isscs-t}(i) relies on the technique involved in the proof of the implication (iii)$\Rightarrow$(v) of \cite[Theorem~ 2.5]{A-C-J-S}.
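To illustrate the definition \eqref{defjn}, consider the rooted directed tree $\tcal_{\eta,\kappa}$ with $\kappa < \infty$ (see \eqref{tetak}). Its only branching vertex $0$ belongs to $\childn{\kappa}{\rot}$ and has degree $\eta$, while all the remaining vertices have degree $1$. Hence
\begin{align*}
{\mathfrak j}^{\tcal_{\eta,\kappa}}_k =
\begin{cases} \eta - 1 & \text{if } k=\kappa+1,
\\ 0 & \text{otherwise.}
\end{cases}
\end{align*}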
\begin{theorem} \label{2isscs-t}
\mbox{\phantom{a}}
\begin{enumerate}
\item[(i)] Let $\slam \in \boldsymbol B(\ell^2(V))$ be a $2$-isometric weighted shift on a rooted directed tree $\tcal$ satisfying \eqref{hypo+} for some $\{\alpha_v\}_{v\in V} \subseteq \rbb_+.$ Then $\tcal$ is leafless and $\slam$ is unitarily equivalent to the orthogonal sum
\begin{align} \label{zenob} S_{[x]} \oplus \bigoplus_{k = 1}^{\infty} \big(S_{[\xi_k(x)]}\big)^{\oplus {j}_k},
\end{align}
where $x =\|\slam e_{\rot}\|$ and ${j}_k = {\mathfrak j}^{\tcal}_k$ for all $k\in \nbb.$ Moreover, if the weights of $\slam$ are nonzero, then ${j}_k \Le \aleph_0$ for all $k\in \nbb$.
\item[(ii)] For any $x\in [1,\infty)$ and any sequence of cardinal numbers $\{j_k\}_{k=1}^{\infty},$ the orthogonal sum \eqref{zenob} is unitarily equivalent to a $2$-isometric weighted shift $\slam \in \boldsymbol B(\ell^2(V))$ on a rooted directed tree $\tcal$ satisfying \eqref{hypo+} for some $\{\alpha_v\}_{v\in V}
\subseteq \rbb_+$ such that $x=\|\slam e_{\rot}\|.$ Moreover, if $j_k \Le \aleph_0$ for all $k\in \nbb$, then the weights of $\slam$ can be chosen to be~ positive.
\end{enumerate}
\end{theorem}
\begin{proof} (i) First, observe that by \cite[Lemma~ 5.7]{A-C-J-S}, $\tcal$ is leafless. To prove the unitary equivalence part, we show that $\slam$ is unitarily equivalent to an operator valued unilateral weighted shift $\widetilde W$ on $\ell^2_{\ker{\slam^*}}$ with weights $\{\widetilde W_n\}_{n=0}^{\infty}$ satisfying the assumptions of Theorem~ \ref{fincyc2} and that $\widetilde W_0$ is a diagonal operator.
It follows from \eqref{hypo+} and \cite[Lemma~ 5.6]{A-C-J-S} that $T:=\slam$ satisfies the kernel condition. By Lemma~ \ref{basicws}(iii), $\ker T^*\neq \{0\}$ and so $T$ is a non-unitary $2$-isometry. Hence, by \cite[Theorem~ 2.5]{A-C-J-S}, the spaces $\{T^n (\ker T^*)\}_{n=0}^{\infty}$ are mutually orthogonal. Since, by \cite[Lemma~5.3(viii)]{A-C-J-S}, $T$ is analytic, we infer from \cite[Theorem~ 3.6]{Sh} that
\begin{align} \label{ella} \ell^2(V) = \bigoplus_{n=0}^{\infty} \mathcal M_n,
\end{align} where $\mathcal M_n:=T^n(\ker T^*)$ for $n\in \zbb_+$. Given that $T$ is non-unitary and left-invertible, we see that $\mathcal M_n$ is a nonzero closed vector subspace of $\ell^2(V)$ and $\varLambda_n:=
T|_{\mathcal M_n}\colon \mathcal M_n \to \mathcal M_{n+1}$ is a linear homeomorphism for every $n\in \zbb_+$. Therefore, by \cite[Problem 56]{Hal}, the Hilbert spaces $\mathcal M_n$ and $\mathcal M_0$ are unitarily equivalent for every $n\in \zbb_+$. Set $V_0=I_{\mathcal M_0}$. Let $\varLambda_0 = U_0
|\varLambda_0|$ be the polar decomposition of $\varLambda_0$. Then $U_0\colon \mathcal M_0 \to \mathcal M_1$ is a unitary isomorphism. Put $V_1=U_0^{-1}\colon \mathcal M_1 \to \mathcal M_0$. For $n\Ge 2$, let $V_n\colon \mathcal M_n \to \mathcal M_0$ be any unitary isomorphism. By \eqref{ella}, we can define the unitary isomorphism $V\colon \ell^2(V) \to \ell^2_{\mathcal M_0}$ by
\begin{align*} V(h_0 \oplus h_1 \oplus \ldots) = (V_0h_0, V_1h_1, \ldots), \quad h_0 \oplus h_1 \oplus \ldots \in \ell^2(V).
\end{align*} Let $W\in \boldsymbol B(\ell^2_{\mathcal M_0})$ be the operator valued unilateral weighted shift with (uniformly bounded) invertible weights $\{V_{n+1} \varLambda_n V_n^{-1}\}_{n=0}^{\infty} \subseteq \boldsymbol B(\mathcal M_0)$. It is a routine matter to verify that $VT = WV.$ Therefore, $T=\slam$ is unitarily equivalent to $W$. Since the zeroth weight of $W$, say $W_0$, equals $V_1
\varLambda_0 V_0^{-1},$ we get $W_0=|\varLambda_0|$. A careful look at the proof of \cite[Proposition~ 2.2]{Ja-3} reveals that $W$ is unitarily equivalent to a $2$-isometric operator valued unilateral weighted shift $\widetilde W$ on $\ell^2_{\mathcal M_0}$ with invertible weights $\{\widetilde W_n\}_{n=0}^{\infty} \subseteq \boldsymbol B(\mathcal M_0)$ such that $\widetilde W_0
= |W_0|$ and $\widetilde W_n \, \cdots\, \widetilde W_0 \Ge 0$ for all $n \in \zbb_+$. Thus
\begin{align} \label{w0l0}
\widetilde W_0=|\varLambda_0|.
\end{align}
By \cite[Lemma~1]{R-0}, $\|\widetilde Wh\| \Ge \|h\|$ for all $h\in \ell^2_{\mathcal M_0}$, which yields
\begin{align*}
\|\widetilde W_0 h_0\| = \|(0, \widetilde W_0 h_0, 0,
\ldots)\| = \|\widetilde W(h_0, 0, \ldots)\| \Ge
\|h_0\|, \quad h_0 \in \mathcal M_0.
\end{align*} Hence, by \eqref{w0l0}, $\widetilde W_0 \Ge I$. This combined with the proof of \cite[Theorem~ 3.3]{Ja-3} and Lemma~ \ref{xin11}(v) implies that
\begin{align*}
\widetilde W_n = \int_{[1,\|\widetilde W_0\|]} \xi_n(x) E(d x), \quad n\in \zbb_+,
\end{align*} where $E$ is the spectral measure of $\widetilde W_0.$
Our next goal is to show that
\begin{align} \label{w0more}
\text{${\mathcal M_0}$ reduces $|\slam|$ and
$\widetilde W_0 = |\slam||_{\mathcal M_0}$.}
\end{align} For this, observe that $\slam$ extends the operator $\varLambda_0\colon \mathcal M_0 \to \mathcal M_1$ and consequently
\begin{align} \label{herb} \langle \varLambda_0^*\varLambda_0 f,g\rangle = \langle \slam^*\slam f,g\rangle, \quad f, g \in \mathcal M_0.
\end{align} Knowing that $\slam$ satisfies the kernel condition, we infer from \eqref{herb} that
$\varLambda_0^*\varLambda_0 = \slam^*\slam|_{\mathcal M_0}$. This means that the orthogonal projection of $\ell^2(V)$ onto $\mathcal M_0$ commutes with
$\slam^*\slam$. By the square root theorem, it commutes with $|\slam|$ as well, which together with \eqref{w0l0} implies \eqref{w0more}.
It follows from \eqref{ind1} and Lemma~ \ref{basicws}(iii) that
\begin{align} \label{w0more-3} \mathcal M_0 = \ker \slam^* = \langle e_{\rot} \rangle \oplus \bigoplus_{k=1}^{\infty} \mathcal G_k,
\end{align} where $\mathcal G_{k}=\bigoplus_{u\in \childn{k-1}{\rot}} \big(\ell^2(\child{u}) \ominus
\langle \lambdab^{u} \rangle\big)$ for $k\in \nbb$. In view of Lemma~ \ref{basicws}(iv) and \eqref{hypo+}, we see that $|\slam| e_{\rot} = \|\slam e_{\rot}\| e_{\rot}$ and
\begin{align*}
|\slam| f = \sum_{v\in \child{u}} f(v) |\slam| e_v = \alpha_u f, \quad f \in \ell^2(\child{u}), \, u \in V.
\end{align*} This combined with \eqref{w0more} and \cite[Lemma~ 5.9(iii)]{A-C-J-S} implies that
\begin{align} \label{afew}
\left.
\begin{aligned} &\text{$\widetilde W_0$ is a diagonal operator,}
\\
& \text{$\langle e_{\rot} \rangle$ reduces $\widetilde W_0$ and $\widetilde W_0|_{\langle e_{\rot} \rangle} =
x I_{\langle e_{\rot} \rangle}$ with $x:=\|\slam e_{\rot}\|$,}
\\ & \text{$\mathcal G_k$ reduces $\widetilde W_0$ and
$\widetilde W_0|_{\mathcal G_k}=\xi_k(x) I_{\mathcal G_k}$ for every $k\in \nbb$.}
\end{aligned}
\; \right\}
\end{align} Since $2$-isometries are injective and, by Lemma~
\ref{basicws}(i), $\|\slam e_u\|^2 = \sum_{v \in
\child{u}}|\lambda_v|^2$, we see that $\lambdab^{u} \neq 0$ for every $u\in V$. As a consequence, we have
\begin{align} \label{afew2} \dim \mathcal G_k = \sum_{u \in \childn{k-1}{\rot}} (\deg{u}-1)={\mathfrak j}^{\tcal}_k, \quad k\in\nbb.
\end{align} Now, following the proof of the implication (ii)$\Rightarrow$(i) of Theorem~ \ref{fincyc2} and applying \eqref{w0more-3}, \eqref{afew} and \eqref{afew2} we see that $\slam$ is unitarily equivalent to the orthogonal sum \eqref{zenob}. The ``moreover'' part is a direct consequence of \cite[Proposition~ 3.1.10]{JJS}.
(ii) Let $\{j_k\}_{k=1}^{\infty}$ be a sequence of cardinal numbers and $x\in [1,\infty).$ First, we construct a directed tree $\tcal$. Without loss of generality, we may assume that the set $\{n \in \nbb\colon j_n \Ge 1\}$ is nonempty. Let $1 \Le n_1 < n_2 < \ldots$ be a (finite or infinite) sequence of positive integers such that
\begin{align*} \{n \in \nbb\colon j_n \Ge 1\} = \{n_1, n_2, \ldots\}.
\end{align*} Then using induction one can construct a leafless directed tree $\tcal=(V,E)$ with root $\rot$ such that each set $\childn{n_k-1}{\rot}$ has exactly one vertex of degree $1+j_{n_k}$ and these particular vertices are the only vertices in $V$ of degree greater than one (see Figure \ref{Fig1c-general}). Note that if the sequence $\{n_k\}$ has at least three terms, then a directed tree with these properties is not unique (up to graph isomorphism). Using Procedure~ \ref{uwrem}, we can find a system $\lambdab = \{\lambda_v\}_{v\in V^{\circ}} \subseteq \rbb_+$ such that $\slam \in \boldsymbol B(\ell^2(V))$, $\slam$ is a $2$-isometry which satisfies \eqref{hypo+} for some $\{\alpha_v\}_{v\in V} \subseteq \rbb_+$ and
$x=\|\slam e_{\rot}\|$. If additionally $j_n \Le \aleph_0$ for all $n\in \nbb$, then the weights $\{\lambda_v\}_{v\in V^{\circ}}$ can be chosen to be positive (consult Procedure~ \ref{uwrem}). Since
\begin{align*} j_n=\sum_{u \in \childn{n-1}{\rot}} (\deg{u}-1), \quad n\in \nbb,
\end{align*} we deduce from part (i) of Theorem~ \ref{2isscs-t} that the orthogonal sum \eqref{zenob} is unitarily equivalent to $\slam$.
\end{proof}
\begin{figure}
\caption{An example of a leafless directed tree $\tcal$ with the properties required in the proof of Theorem~ \ref{2isscs-t}(ii).}
\label{Fig1c-general}
\end{figure} Combining Theorem~ \ref{2isscs-t}(i), Theorem~ \ref{desz1} and Lemma~ \ref{xin11}(iv), we get the following classification theorem.
\begin{theorem} \label{equival} For $k=1,2$, let $\tcal_k=(V_k,E_k)$ be a directed tree with root $\rot_{k}$ and let $S_{\lambdab_k}\in \boldsymbol B(\ell^2(V_k))$ be a $2$-isometric weighted shift on $\tcal_k$ with weights $\lambdab_k = \{\lambda_{k,v}\}_{v\in V_k^{\circ}}$ which satisfies the condition \eqref{hypo+} for some $\{\alpha_{k,v}\}_{v\in V_k} \subseteq \rbb_+$. Then $S_{\lambdab_1} \cong S_{\lambdab_2}$ if and only if one of the following conditions holds{\em :}
\begin{enumerate}
\item[(i)] $\|S_{\lambdab_1} e_{\rot_{1}}\| =
\|S_{\lambdab_2} e_{\rot_{2}}\| > 1$ and ${\mathfrak j}^{\tcal_1}_n = {\mathfrak j}^{\tcal_2}_n$ for every $n\in \nbb$,
\item[(ii)] $\|S_{\lambdab_1} e_{\rot_{1}}\|
= \|S_{\lambdab_2} e_{\rot_{2}}\| =1$ and $\sum_{n=1}^{\infty}{\mathfrak j}^{\tcal_1}_n = \sum_{n=1}^{\infty} {\mathfrak j}^{\tcal_2}_n$.
\end{enumerate}
\end{theorem} It is worth pointing out that, by \cite[Remark~ 5.8 and Lemma~ 5.9(iv)]{A-C-J-S} and Theorem~
\ref{equival}, the sequence $(\|\slam e_{\omega}\|, {\mathfrak j}^{\tcal}_1, {\mathfrak j}^{\tcal}_2, {\mathfrak j}^{\tcal}_3, \ldots)$ forms a complete system of unitary invariants for non-isometric $2$-isometric weighted shifts $\slam$ on rooted directed trees $\tcal$ with nonzero weights satisfying the kernel condition. In turn, the quantity $\sum_{n=1}^{\infty} {\mathfrak j}^{\tcal}_n$ forms a complete system of unitary invariants for isometric weighted shifts $\slam$ on rooted directed trees $\tcal$ (cf.\ \cite[Proposition~ 2.4]{Kub}).
\begin{remark} \label{3uw} Let us make a few observations concerning Theorem~ \ref{2isscs-t} (still under the assumptions of this theorem). First, if $\slam$ is not an isometry, then Lemma~ \ref{xin11}(iv) implies that the additive exponent ${j}_k$ of the inflation $\big(S_{[\xi_k(x)]}\big)^{\oplus {j}_k}$ that appears in the orthogonal decomposition \eqref{zenob} is maximal for every $k\in \nbb$. Second, by Lemma~ \ref{xin11}(ii), the weights of $S_{[\xi_k(x)]}$ take the form $\{\xi_n(x)\}_{n=k}^{\infty}$. Hence, the weights of components of the decomposition \eqref{zenob} are built on the weights of a single $2$-isometric unilateral weighted shift. Third, in view of Corollary~ \ref{mulcyc} and Theorem~ \ref{2isscs-t}, general completely non-unitary $2$-isometric operators satisfying the kernel condition cannot be modelled by weighted shifts on rooted directed trees. Finally, in view of Theorem~ \ref{equival}, there exist two unitarily equivalent $2$-isometric weighted shifts on the same rooted directed tree, one with nonzero weights, the other with some zero weights.
$\diamondsuit$
\end{remark} Concluding this section, we show that there are unitarily equivalent $2$-isometric weighted shifts on non-graph isomorphic directed trees that satisfy \eqref{hypo+}.
\begin{figure}
\caption{Two non-graph isomorphic directed trees used in Example~ \ref{2+3}.}
\label{fig1d}
\end{figure}
\begin{example} \label{2+3} For $k=1,2$, let $\tcal_k=(V_k,E_k)$ be a directed tree with root $\rot_{k}$ as in Figure \ref{fig1d}. Clearly, these two directed graphs are not graph isomorphic. Moreover, we have (see \eqref{defjn} for notation)
\begin{align*} {\mathfrak j}^{\tcal_{1}}_n = {\mathfrak j}^{\tcal_{2}}_n =
\begin{cases} 1 & \text{if } n=1,
\\ 2 & \text{if } n=2,
\\ 0 & \text{if } n\ge 3.
\end{cases}
\end{align*} Fix $x\in [1,\infty)$. Using Procedure~ \ref{uwrem}, one can construct for $k=1,2$, a $2$-isometric weighted shift $S_{\lambdab_k}\in \boldsymbol B(\ell^2(V_k))$ on $\tcal_k$ with weights $\lambdab_k = \{\lambda_{k,v}\}_{v\in V_k^{\circ}}$ which satisfies the condition \eqref{hypo+} for some
$\{\alpha_{k,v}\}_{v\in V_k} \subseteq \rbb_+$ and the equation $x=\|S_{\lambdab_k} e_{\rot_{k}}\|$. The above combined with Theorem~ \ref{2isscs-t} implies that
\begin{align*} S_{\lambdab_k} \cong S_{[x]} \oplus S_{[\xi_1(x)]} \oplus \big(S_{[\xi_2(x)]}\big)^{\oplus 2}, \quad k=1,2,
\end{align*} and so $S_{\lambdab_1} \cong S_{\lambdab_2}.$ In particular, if $x=1$, then $S_{\lambdab_1}$ and $S_{\lambdab_2}$ are unitarily equivalent isometries.
$\diamondsuit$
\end{example}
\section{\label{Sec6}The membership of the Cauchy dual operators in $C_{0 \cdot}$ and $C_{\cdot 0}$} We begin by recalling necessary concepts from \cite[Chapter~ II]{SF}. A contraction $S \in \boldsymbol B(\hh)$ is of class $C_{0 \cdot}$ (resp., $C_{\cdot 0}$) if $S^n f \rightarrow 0$ (resp., $S^{*n}f \rightarrow 0$) as $n \rightarrow \infty$ for all $f\in \hh$. If $S$ is of class $C_{0 \cdot}$ and of class $C_{\cdot 0}$, then we say that $S$ is of class $C_{00}$. Observe that the norm of a contraction which is not of class $C_{0 \cdot}$ (or not of class $C_{\cdot 0}$) must equal $1$. Clearly, a contraction $S$ is of class $C_{0 \cdot}$ if and only if $\mathsf A_S=0,$ where $\mathsf A_S$ stands for the limit in the strong (equivalently, weak) operator topology of the sequence $\{S^{*n}S^{n}\}_{n=1}^{\infty}.$ That such a limit exists plays a key role in the theory of unitary and isometric asymptotes (see \cite[Chapter~ IX]{SF}; see also \cite[Theorem~ 1]{Dou}). As we know, the Cauchy dual operator $T'$ of a $2$-isometry $T$ is always a contraction (see \eqref{2hypcon}), so we can look for an explicit description of $\mathsf A_{T'}.$ By examining the proof of \cite[Corollary~ 4.6]{A-C-J-S}, we can calculate $\mathsf A_{T^\prime}$ for two classes of $2$-isometries, namely quasi-Brownian isometries and $2$-isometries satisfying the kernel condition. Recall that an operator $T\in \boldsymbol B(\hh)$ is a {\em quasi-Brownian isometry} if $T$ is a $2$-isometry such that $\triangle_T T = \triangle_T^{1/2} T \triangle_T^{1/2},$ where $\triangle_T =T^*T-I.$ A quasi-Brownian isometry, called in \cite{Maj} a $\triangle_T$-regular $2$-isometry, generalizes the notion of a Brownian isometry introduced in \cite{Ag-St}.
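For instance, the isometric unilateral shift $S$ of multiplicity $1$ is of class $C_{\cdot 0}$ but not of class $C_{0 \cdot}$, because $S^{*n} f \to 0$ as $n \to \infty$ while $\|S^n f\| = \|f\|$ for all $f$ and $n$; accordingly, its adjoint $S^*$ is of class $C_{0 \cdot}$ but not of class $C_{\cdot 0}$.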
\begin{lemma} \label{convcd} Let $T\in \boldsymbol B(\hh)$ be a $2$-isometry and $G_T$ be the spectral measure of $T^*T$. Then the following assertions hold{\em :}
\begin{enumerate}
\item[(i)] if $T$ satisfies the kernel condition, then $\mathsf A_{T^\prime}=G_T(\{1\}),$
\item[(ii)] if $T$ is a quasi-Brownian isometry, then $\mathsf A_{T^\prime}=\frac{1}{2}G_T(\{1\}) + (I + T^*T)^{-1}.$
\end{enumerate}
\end{lemma} Before stating the main result of this section, we record the following fact.
\begin{lemma} \label{Wiktorek} If $T\in \boldsymbol B(\hh)$ is left-invertible and $T'$ is of class $C_{0\cdot}$ or of class $C_{\cdot 0},$ then $T$ is completely non-unitary.
\end{lemma}
\begin{proof} First, note the following.
\begin{align} \label{orthsum}
\begin{minipage}{70ex} {\em If $T$ is left-invertible and $T$ is an orthogonal sum of operators $A$ and $B$, i.e., $T=A\oplus B$, then $A$ and $B$ are left-invertible and $T^{\prime} = A^{\prime} \oplus B^{\prime}.$}
\end{minipage}
\end{align}
This together with the fact that the Cauchy dual operator of a unitary operator is unitary completes the proof.
\end{proof} Now, we can prove the main result of this section.
\begin{theorem} \label{coo} Let $T\in \boldsymbol B(\hh)$ be a $2$-isometry. Then
\begin{enumerate}
\item[(i)] $T'$ is of class $C_{\cdot 0}$ if and only if $T$ is completely non-unitary.
\end{enumerate} Moreover, if $T$ satisfies the kernel condition, then
\begin{enumerate}
\item[(ii)] $T'$ is of class $C_{0\cdot}$ if and only if $T$ is completely non-unitary and $E(\{1\})=0$, where $E$ is as in Theorem~ {\em \ref{Zak1}(iv)},
\item[(iii)] $T'$ is of class $C_{0\cdot}$ if and only if $T'$ is of class $C_{00}$, or equivalently if and only if $G_T(\{1\})=0,$ where $G_T$ is the spectral measure of $T^*T.$
\end{enumerate}
\end{theorem}
\begin{proof} First, observe that if $T'$ is of class $C_{0 \cdot}$ or of class $C_{\cdot 0}$, then by Lemma~ \ref{Wiktorek}, $T$ is completely non-unitary. Note also that the same conclusion holds if $G_T(\{1\})=0.$ Indeed, otherwise there exists a nonzero closed vector subspace $\mathcal M$ of $\hh$ reducing $T$ to a unitary operator. Then $T^*T = I$ on $\mathcal M$ and thus $1$ is in the point spectrum of $T^*T$, which implies that $G_T(\{1\}) \neq 0$, a contradiction. These two observations show that there is no loss of generality in assuming that $T$ is completely non-unitary.
(i) It is enough to prove that $T'$ is of class $C_{\cdot 0}$ (under the assumption that $T$ is completely non-unitary). Using \eqref{orthsum}, the well-known identity $(T')'=T$ (which holds for any left-invertible operator $T$) and observing that the Cauchy dual operator of a left-invertible normal operator is normal and a normal $2$-isometry is unitary (see \cite[Theorem~ 3.4]{Ja-St}), one can deduce from \eqref{2hypcon} that $T'$ is a pure and hyponormal contraction. Since, according to \cite[Theorem~ 3]{Put}, a pure and hyponormal contraction is of class $C_{\cdot 0},$ we are done.
(ii)\&(iii) Assume that $T$ satisfies the kernel condition. In view of Theorem~ \ref{Zak1}, we may further assume that $T=W,$ where $W$ is as in (iv) of this theorem. Using Lemma~ \ref{convcd}(i), we deduce that $W'$ is of class $C_{0\cdot}$ if and only if $G_W(\{1\})=0.$ We will show that
\begin{align} \label{wprim-4} \text{$G_W(\{1\})=0$ if and only if $E(\{1\})=0.$}
\end{align} Set $\eta=\sup(\supp{E})$. Note that $\eta \in [1, \infty)$. It follows from \eqref{aopws2} and \eqref{wagi} that
\begin{align} \label{wprim-2} W^{*}W = \bigoplus_{j=0}^{\infty} \int_{[1,\eta]} \phi_{j}(x) E(dx),
\end{align} where $\phi_{j}\colon [1,\eta] \to \rbb_+$ is given by $\phi_{j}(x)=\xi_j(x)^2$ for $x\in [1,\eta]$ and $j\in \zbb_+$. By Lemma~ \ref{xin11}, $1 \Le \phi_j \Le \eta^2$ for all $j\in \zbb_+$. This together with \eqref{wprim-2}, \cite[Theorem~ 5.4.10]{b-s} and the uniqueness part of the spectral theorem implies that
\begin{align*} G_W(\varDelta) = \bigoplus_{j=0}^{\infty} E(\phi_{j}^{-1}(\varDelta)), \quad \varDelta \in \borel{[1,\eta^2]}.
\end{align*} Since $\phi_{j}^{-1}(\{1\}) = \{1\}$ for all $j\in \zbb_+$, we conclude that \eqref{wprim-4} holds. This together with (i) completes the proof.
\end{proof}
\begin{remark} According to \cite[Theorem~ 3.1]{Ch-1}, all positive integer powers $T'^n$ of the Cauchy dual operator $T'$ of a $2$-hyperexpansive operator $T\in \boldsymbol B(\hh)$ are hyponormal. This immediately implies that if $T\in\boldsymbol B(\hh)$ is a $2$-hyperexpansive operator such that $T'$ is of class $C_{0\cdot}$, then $T'$ is of class $C_{0 0}.$
$\diamondsuit$
\end{remark} Regarding Theorem~ \ref{coo}, note that there exist completely non-unitary $2$-isome\-tries satisfying the kernel condition whose Cauchy dual operators are not of class $C_{0\cdot}.$ To see this, consider a nonzero Hilbert space $\mathcal M$ and a compactly supported $\boldsymbol B(\mathcal M)$-valued Borel spectral measure $E$ on the interval $[1,\infty)$ such that $E(\{1\}) \neq 0$. Then, by Theorems~ \ref{Zak1} and \ref{coo}(ii), the operator valued unilateral weighted shift $W$ on $\ell^2_{\mathcal M}$ with weights $\{W_n\}_{n=0}^{\infty}$ defined by \eqref{wagi} has all the required properties.
The following proposition shows that, unlike the case of $2$-isometries satisfying the kernel condition, the Cauchy dual operator of a quasi-Brownian isometry is never of class $C_{0\cdot}$ (see also Lemma~ \ref{convcd}(ii)).
\begin{proposition} \label{coo-qB} Let $T\in \boldsymbol B(\hh)$ be a $2$-isometry and let $T'$ be its Cauchy dual operator. Then the following assertions hold{\em :}
\begin{enumerate}
\item[(i)] if $T$ is a quasi-Brownian isometry, then for every $n\in \zbb_+$,
\begin{align} \label{coo-qBw}
\|T^{\prime n} f\|^2 \Ge c_n \|f\|^2, \quad f\in \hh,
\end{align}
where $c_n=\frac{1+\|T\|^{2(1-2n)}}{1+\|T\|^2}$ is the largest constant for which \eqref{coo-qBw} holds; in particular, $T'$ is not of class $C_{0\cdot}$ and
$\|T'\|=1,$
\item[(ii)] if $T$ satisfies the kernel condition, then for every $n\in \zbb_+$,
\begin{align} \label{coo-kcw}
\|T^{\prime n} f\|^2 \Ge c_n \|f\|^2, \quad f\in \hh,
\end{align}
where $c_n=\frac{1}{1+n(\|T\|^2-1)}$ is the largest constant for which \eqref{coo-kcw} holds.
\end{enumerate}
\end{proposition}
\begin{proof} (i) Fix $n\in \zbb_+.$ Note that $T^{\prime n}$ is left-invertible. Denote by $\hat c_n$ the largest positive constant for which \eqref{coo-qBw} holds. Define $s_n\colon [1,\infty) \to (0,\infty)$ by
\begin{align*} s_n(x) = \frac{1+x}{1+{x^{1-2n}}}, \quad x \in [1, \infty).
\end{align*} Using \cite[Theorem~ 4.5]{A-C-J-S}, the fact that $\sigma(T^*T) \subseteq [1,\infty)$ and the functional calculus (see \cite[Theorem~ VIII.2.6]{Con}), we deduce that
\begin{gather*}
\hat c_n = \frac{1}{\|(T'^{*n}T'^n)^{-1}\|} =
\frac{1}{\|s_n(T^*T)\|} = \frac{1}{\sup_{x\in \sigma(T^*T)}s_n(x)}
\\ = \frac{1}{s_n(\sup \sigma(T^*T))} =
\frac{1}{s_n(\|T\|^2)}.
\end{gather*} Due to \eqref{2hypcon}, the ``in particular'' part of (i) is now clear.
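For the reader's convenience, we spell out the final substitution: evaluating $s_n$ at $x=\|T\|^2$ recovers the constant stated in (i),
\begin{align*}
\hat c_n = \frac{1}{s_n(\|T\|^2)} = \frac{1+\|T\|^{2(1-2n)}}{1+\|T\|^2} = c_n.
\end{align*}
Moreover, since $\|T\|\Ge 1$, we have $c_n \Ge \frac{1}{1+\|T\|^2}>0$ for all $n\in \zbb_+$, and consequently $\inf_{n\in \zbb_+} \|T^{\prime n} f\|>0$ whenever $f\neq 0$.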
(ii) Argue as in (i) using \cite[Theorem~3.3]{A-C-J-S} in place of \cite[Theorem~4.5]{A-C-J-S}.
\end{proof} As a direct consequence of Proposition~\ref{coo-qB}
and the fact that $\|T\|\Ge 1$ for any $2$-isometry $T$ (see \cite[Lemma~1]{R-0}), we get
\begin{align*} \lim_{n\to \infty} c_n =
\begin{cases} 0 & \text{if $T$ is a $2$-isometry satisfying
\eqref{kc} and } \|T\|\neq 1,
\\
\frac{1}{1+\|T\|^2} & \text{if $T$ is a quasi-Brownian isometry and } \|T\|\neq 1.
\end{cases}
\end{align*} {\bf Acknowledgments.} A part of this paper was written while the second author visited Jagiellonian University in the summer of 2018. He wishes to thank the faculty and the administration of this unit for their warm hospitality.
\end{document}
\begin{document}
\title[Bidemocratic bases]{Bidemocratic bases and their connections with other greedy-type bases}
\author[Albiac]{Fernando Albiac} \address{Department of Mathematics, Statistics, and Computer Sciences--InaMat2\\ Universidad P\'ublica de Navarra\\ Campus de Arrosad\'{i}a\\ Pamplona\\ 31006 Spain} \email{fernando.albiac@unavarra.es}
\author[Ansorena]{Jos\'e L. Ansorena} \address{Department of Mathematics and Computer Sciences\\ Universidad de La Rioja\\ Logro\~no\\ 26004 Spain} \email{joseluis.ansorena@unirioja.es}
\author[Berasategui]{Miguel Berasategui} \address{Miguel Berasategui\\ IMAS - UBA - CONICET - Pab I, Facultad de Ciencias Exactas y Naturales\\ Universidad de Buenos Aires\\ (1428), Buenos Aires, Argentina} \email{mberasategui@dm.uba.ar}
\author[Bern\'a]{Pablo M. Bern\'a} \address{Pablo M. Bern\'a\\ Departamento de Matem\'atica Aplicada y Estad\'istica, Facultad de Ciencias Econ\'omicas y Empresariales, Universidad San Pablo-CEU, CEU Universities\\ Madrid, 28003 Spain.} \email{pablo.bernalarrosa@ceu.es}
\author[Lassalle]{Silvia Lassalle} \address{Silvia Lassalle\\ Departamento de Matem\'atica\\ Universidad de San Andr\'es, Vito Duma 284\\ (1644) Victoria, Buenos Aires, Argentina and\\ IMAS - CONICET} \email{slassalle@udesa.edu.ar}
\begin{abstract} In nonlinear greedy approximation theory, bidemocratic bases have traditionally played the role of dualizing democratic, greedy, quasi-greedy, or almost greedy bases. In this article we shift the viewpoint and study them for their own sake, just as we would with any other kind of greedy-type bases. In particular we show that bidemocratic bases need not be quasi-greedy, despite the fact that they retain a strong unconditionality flavor which brings them very close to being quasi-greedy. Our constructive approach shows that for each $1<p<\infty$ the space $\ell_p$ has a bidemocratic basis which is not quasi-greedy. We also present a novel method for constructing conditional quasi-greedy bases which are bidemocratic, and provide a characterization of bidemocratic bases in terms of the new concepts of truncation quasi-greediness and partially democratic bases. \end{abstract}
\subjclass[2010]{41A65, 41A46, 46B15, 46B45}
\keywords{Nonlinear approximation, Thresholding greedy algorithm, quasi-greedy basis, democracy}
\thanks{F. Albiac acknowledges the support of the Spanish Ministry for Science and Innovation under Grant PID2019-107701GB-I00 for \emph{Operators, lattices, and structure of Banach spaces}. F. Albiac and J.~L. Ansorena acknowledge the support of the Spanish Ministry for Science, Innovation, and Universities under Grant PGC2018-095366-B-I00 for \emph{An\'alisis Vectorial, Multilineal y Aproximaci\'on.} M. Berasategui and S. Lassalle were supported by ANPCyT PICT-2018-04104. P.~M. Bern\'a by Grants PID2019-105599GB-I00 (Agencia Estatal de Investigaci\'on, Spain) and 20906/PI/18 from Fundaci\'on S\'eneca (Regi\'on de Murcia, Spain). S. Lassalle was also supported in part by CONICET PIP 0483 and PAI UdeSA 2020-2021. F. Albiac, J.~L. Ansorena and P.~M. Bern\'a would like to thank the Erwin Schr\"odinger International Institute for Mathematics and Physics, Vienna, for support and hospitality during the programme \emph{Applied Functional Analysis and High-Dimensional Approximation}, held in the Spring of 2021, where work on this paper was undertaken.}
\maketitle
\section{Introduction and background}\noindent Let $\ensuremath{\mathbb{X}}$ be an infinite-dimensional separable Banach space (or, more generally, a quasi-Banach space) over the real or complex field $\ensuremath{\mathbb{F}}$. Throughout this paper, unless otherwise stated, by a \emph{basis} of $\ensuremath{\mathbb{X}}$ we mean a norm-bounded sequence $\ensuremath{\mathcal{X}}=(\ensuremath{\bm{x}}_n)_{n=1}^\infty$ that generates the entire space, in the sense that \[ \overline{\spn}(\ensuremath{\bm{x}}_n \colon n\in\ensuremath{\mathbb{N}})=\ensuremath{\mathbb{X}}, \] and for which there is a (unique) norm-bounded sequence $\ensuremath{\mathcal{X}}^{\ast}=(\ensuremath{\bm{x}}_{n}^{\ast})_{n=1}^\infty$ in the dual space $\ensuremath{\mathbb{X}}^{\ast}$ such that $(\ensuremath{\bm{x}}_{n}, \ensuremath{\bm{x}}_{n}^{\ast})_{n=1}^{\infty}$ is a biorthogonal system. We will refer to the basic sequence $\ensuremath{\mathcal{X}}^{\ast}$ as the \emph{dual basis} of $\ensuremath{\mathcal{X}}$.
We recall that the basis $\ensuremath{\mathcal{X}}=(\ensuremath{\bm{x}}_n)_{n=1}^\infty$ is called \emph{democratic} if there is a constant $\Delta$ such that \[ \left\Vert \sum_{k\in A}\ensuremath{\bm{x}}_k \right\Vert\le \Delta \left\Vert \sum_{k\in B}\ensuremath{\bm{x}}_k \right\Vert \]
whenever $A$ and $B$ are finite subsets of $\ensuremath{\mathbb{N}}$ with $|A|\le |B|$. The \emph{fundamental function} $\varphi\colon\ensuremath{\mathbb{N}}\to[0,\infty)$ of $\ensuremath{\mathcal{X}}$ is then defined by \[
\varphi(m)=\sup\limits_{|A|\le m}\left\Vert \sum_{k\in A}\ensuremath{\bm{x}}_k \right\Vert,\quad m\in\ensuremath{\mathbb{N}}, \] while the \emph{dual fundamental function} of $\ensuremath{\mathcal{X}}$ is just the fundamental function of its dual basis, i.e., \[
\varphi^{\ast}(m)=\sup\limits_{|A|\le m}\left\Vert \sum_{k\in A}\ensuremath{\bm{x}}_k^{\ast} \right\Vert, \quad m\in \ensuremath{\mathbb{N}}. \] In general it is not true that if a basis $\ensuremath{\mathcal{X}}=(\ensuremath{\bm{x}}_n)_{n=1}^\infty$ is democratic, then its dual basis $\ensuremath{\mathcal{X}}^{\ast}$ is democratic as well. For instance, the $L_1$-normalized Haar system is an unconditional democratic basis of the dyadic Hardy space $H_1$ \cites{Woj1982,Woj2000}, but the $L_\infty$-normalized Haar system is not democratic in the dyadic $\ensuremath{\mathrm{BMO}}$-space \cite{Oswald2001}. In order to understand better how certain greedy-like properties dualize, Dilworth et al.\ introduced in \cite{DKKT2003} a strengthened form of democracy. Notice that the elementary computation \[ m=\left(\sum_{k\in A} \ensuremath{\bm{x}}_k^{\ast}\right)\left(\sum_{k\in A} \ensuremath{\bm{x}}_k\right) \le \left\Vert\sum_{k\in A} \ensuremath{\bm{x}}_k^{\ast}\right\Vert \left\Vert\sum_{k\in A} \ensuremath{\bm{x}}_k\right\Vert \;\text{if}\;
|A|=m, \] yields the estimate \[ m\le \varphi(m)\,\varphi^{\ast}(m),\quad m\in\ensuremath{\mathbb{N}}. \] A basis $\ensuremath{\mathcal{X}}=(\ensuremath{\bm{x}}_n)_{n=1}^\infty$ is then said to be \emph{bidemocratic} if the reverse inequality is fulfilled up to a constant, i.e., $\ensuremath{\mathcal{X}}$ is bidemocratic with constant $\Delta_{b}$ ($\Delta_{b}$-bidemocratic for short) if \[ \varphi(m)\, \varphi^{\ast}(m)\le \Delta_{b}\, m, \quad m\in \ensuremath{\mathbb{N}}. \] Among other relevant results about this kind of bases in Banach spaces, Dilworth et al.\ showed in \cite{DKKT2003} that being quasi-greedy passes to dual bases under the assumption of bidemocracy (see \cite{DKKT2003}*{Theorem 5.4}). Since the dual basis of a bidemocratic basis is democratic, it follows that the corresponding result also holds for almost greedy and greedy bases. That is, if a bidemocratic basis is almost greedy (respectively, greedy), then so is its dual basis.
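A canonical example may help fix ideas: for $1<p<\infty$ and $1/p+1/p'=1$, the unit vector system of $\ell_p$ has fundamental function $\varphi(m)=m^{1/p}$, while its dual basis, the unit vector system of $\ell_{p'}$, has $\varphi^{\ast}(m)=m^{1/p'}$. Hence \[ \varphi(m)\,\varphi^{\ast}(m)=m^{1/p}\, m^{1/p'}=m, \quad m\in\ensuremath{\mathbb{N}}, \] so this basis is bidemocratic with constant $\Delta_b=1$.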
Despite the instrumental role played by bidemocratic bases as a key that permits dualizing some greedy-type properties, it is our contention in this paper that these bases are of interest by themselves and that they deserve to be studied as any other kind of greedy-like bases. For instance, the unconditionality constants of bidemocratic bases have been estimated (see Theorem~\ref{thm:BidCond} below), which sheds some light on the performance of the greedy algorithm when it is implemented specifically for these bases.
To undertake our task we must first put bidemocratic bases on the map by relating them to other types of bases that are relevant in the theory. In this respect the most important open question is whether bidemocratic bases are quasi-greedy. This problem is motivated by recent results that show that bidemocratic bases have uniform boundedness properties of certain (nonlinear) truncation operators that make them very close to quasi-greedy bases (see \cite{AABW2021}*{Proposition 5.7}). In our language, bidemocratic bases are truncation quasi-greedy. In Section~\ref{sect:BDNonQG} we will answer this question in the negative by proving that bidemocracy is not in general strong enough to ensure quasi-greediness, and by showing that for $1<p<\infty$ the space $\ell_p$ has a bidemocratic basis which is not quasi-greedy.
Before that, we will look for sufficient conditions for a basis to be bidemocratic. Here one must take into account that if $\ensuremath{\mathcal{X}}$ is bidemocratic then both $\ensuremath{\mathcal{X}}$ and $\ensuremath{\mathcal{X}}^{\ast}$ are democratic but the converse fails. The only positive result we find in the literature in the reverse direction is the aforementioned Theorem 5.4 from \cite{DKKT2003}, which tells us that if $\ensuremath{\mathcal{X}}$ and $\ensuremath{\mathcal{X}}^{\ast}$ are quasi-greedy and democratic then $\ensuremath{\mathcal{X}}$ is bidemocratic. In Section~\ref{sect:truncation quasi-greedy} we extend this result by relaxing the conditions on the bases $\ensuremath{\mathcal{X}}$ and $\ensuremath{\mathcal{X}}^{\ast}$ while still attaining the bidemocracy of $\ensuremath{\mathcal{X}}$.
Turning to quasi-greedy bases, it is natural and consistent with our discussion in this paper, to further the study of conditional quasi-greedy bases by looking for conditional bidemocratic quasi-greedy bases, i.e., conditional almost greedy bases whose dual bases are also almost greedy. The previous methods for building conditional almost greedy bases in Banach spaces yield either bases whose fundamental function coincides with the fundamental function of the canonical basis of $\ell_1$, or bases whose fundamental function increases steadily enough (formally, bases that have the upper regularity property and the lower regularity property). In the former case, the bases are not bidemocratic unless they are equivalent to the canonical $\ell_1$-basis; in the latter, the bases are always bidemocratic by \cite{DKKT2003}*{Proposition 4.4}. The existence of conditional bidemocratic quasi-greedy bases which do not have the upper regularity property seems to be an unexplored area. In Section~\ref{sect:NM} we contribute to this topic by developing a new method for building bidemocratic, conditional, quasi-greedy bases with arbitrary fundamental functions.
Throughout this paper we will use standard notation and terminology from Banach spaces and greedy approximation theory, as can be found, e.g., in \cite{AlbiacKalton2016}. We also refer the reader to the recent article \cite{AABW2021} for other, more specialized notation. We next single out, however, the most heavily used terminology.
For broader applicability, whenever it is possible we will establish our results in the setting of quasi-Banach spaces. Let us recall that a \emph{quasi-Banach space} is a vector space $\ensuremath{\mathbb{X}}$ over the real or complex field $\ensuremath{\mathbb{F}}$ equipped with a \emph{quasi-norm}, i.e., a map $\|\cdot\|\colon \ensuremath{\mathbb{X}}\to [0,\infty)$ that satisfies all the usual properties of a norm with the exception of the triangle law, which is replaced with the condition \begin{equation}\label{defquasinorm}
\|f+g\|\leq \kappa( \| f\| + \|g\|),\quad f,g\in \ensuremath{\mathbb{X}}, \end{equation}
for some $\kappa\ge 1$ independent of $f$ and $g$, and moreover $(\ensuremath{\mathbb{X}},\|\cdot\|)$ is complete. The \emph{modulus of concavity} of the quasi-norm is the smallest constant $\kappa\ge 1$ in \eqref{defquasinorm}. Given $0<p\le 1$, a \emph{$p$-Banach space} will be a quasi-Banach space whose quasi-norm is $p$-subadditive, i.e., \[ \Vert f+g\Vert^p \le \Vert f\Vert^p +\Vert g \Vert^p, \quad f,g\in\ensuremath{\mathbb{X}}. \]
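In the model case of $p$-Banach spaces, the $p$-subadditivity of the quasi-norm controls the modulus of concavity: by the monotonicity of power means, \[ \Vert f+g\Vert \le \left( \Vert f\Vert^p + \Vert g \Vert^p \right)^{1/p} \le 2^{1/p-1} \left( \Vert f\Vert + \Vert g\Vert \right), \quad f,g\in\ensuremath{\mathbb{X}}, \] so every $p$-Banach space has modulus of concavity $\kappa\le 2^{1/p-1}$. For instance, $\ell_{1/2}$ is a $\tfrac{1}{2}$-Banach space with $\kappa\le 2$.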
Some authors have studied the Thresholding Greedy Algorithm, or TGA for short, for more demanding types of bases that we will bring into play on occasion. A sequence $\ensuremath{\mathcal{X}}=(\ensuremath{\bm{x}}_n)_{n=1}^\infty$ of $\ensuremath{\mathbb{X}}$ is said to be a \emph{Schauder basis} if for every $f\in\ensuremath{\mathbb{X}}$ there is a unique sequence $(a_n)_{n=1}^\infty$ in $\ensuremath{\mathbb{F}}$ such that $f= \sum_{n=1}^{\infty} a_n\, \ensuremath{\bm{x}}_{n}$, where the convergence of the series is understood in the topology induced by the quasi-norm. If $\ensuremath{\mathcal{X}}$ is a Schauder basis we define the biorthogonal functionals associated to $\ensuremath{\mathcal{X}}$ by $\ensuremath{\bm{x}}_k^*(f)=a_k$ for all $f=\sum_{n=1}^{\infty} a_n \, \ensuremath{\bm{x}}_{n}\in\ensuremath{\mathbb{X}}$ and $k\in\ensuremath{\mathbb{N}}$. The \emph{partial-sum projections} $S_{m}\colon \ensuremath{\mathbb{X}}\to \ensuremath{\mathbb{X}}$ with respect to the Schauder basis $\ensuremath{\mathcal{X}}$, given by \[ f\mapsto S_{m}(f)= \sum_{n=1}^{m} \ensuremath{\bm{x}}_n^*(f)\, \ensuremath{\bm{x}}_{n}, \quad f\in\ensuremath{\mathbb{X}},\, m\in\ensuremath{\mathbb{N}}, \] are uniformly bounded, whence we infer that $\sup_n \Vert \ensuremath{\bm{x}}_n\Vert \, \Vert \ensuremath{\bm{x}}_n^*\Vert<\infty$. Hence, if a Schauder basis $\ensuremath{\mathcal{X}}$ is semi-normalized, i.e., \[ 0<\inf_n \Vert \ensuremath{\bm{x}}_n\Vert\le \sup_n \Vert \ensuremath{\bm{x}}_n\Vert<\infty, \] then $(\ensuremath{\bm{x}}_n^*)_{n=1}^\infty$ is norm-bounded and so $\ensuremath{\mathcal{X}}$ is a basis in the sense of this paper. If $\ensuremath{\mathcal{X}}=(\ensuremath{\bm{x}}_n)_{n=1}^\infty$ is a Schauder basis, then the \emph{coefficient transform} \[ f\mapsto (\ensuremath{\bm{x}}_n^{\ast}(f))_{n=1}^\infty, \quad f\in\ensuremath{\mathbb{X}}, \] is one-to-one, that is, the basis $\ensuremath{\mathcal{X}}$ is \emph{total}. 
In the case when $\Vert S_m\Vert \le 1$ for all $m\in\ensuremath{\mathbb{N}}$ the Schauder basis $\ensuremath{\mathcal{X}}$ is said to be \emph{monotone}.
Given $A\subseteq \ensuremath{\mathbb{N}}$, we will use $\ensuremath{\mathcal{E}}_A$ to denote the set consisting of all families $(\varepsilon_n)_{n\in A}$ in $\ensuremath{\mathbb{F}}$ with $|\varepsilon_n|=1$ for all $n\in A$. Given a basis $\ensuremath{\mathcal{X}}=(\ensuremath{\bm{x}}_n)_{n=1}^\infty$ of $\ensuremath{\mathbb{X}}$, a finite set $A\subseteq\ensuremath{\mathbb{N}}$ and $\varepsilon=(\varepsilon_n)_{n\in A}\in\ensuremath{\mathcal{E}}_A$ it is by now customary to use \[ \textstyle \ensuremath{\mathbbm{1}}_{\varepsilon,A}[\ensuremath{\mathcal{X}},\ensuremath{\mathbb{X}}]=\sum_{n\in A} \varepsilon_n\, \ensuremath{\bm{x}}_n \;(\text{resp.,}\; \ensuremath{\mathbbm{1}}^{\ast}_{\varepsilon,A}[\ensuremath{\mathcal{X}},\ensuremath{\mathbb{X}}]=\sum_{n\in A} \varepsilon_n \, \ensuremath{\bm{x}}_n^{\ast} ). \] If the basis and the space are clear from context we simply put $\ensuremath{\mathbbm{1}}_{\varepsilon,A}$ (resp., $\ensuremath{\mathbbm{1}}^{\ast}_{\varepsilon,A}$), and if $\varepsilon_n=1$ for all $n\in A$ we put $\ensuremath{\mathbbm{1}}_A$ (resp., $\ensuremath{\mathbbm{1}}^{\ast}_{A}$). Associated with the fundamental function $\varphi$ of the basis are the \emph{upper super-democracy function} of $\ensuremath{\mathcal{X}}$, \[
\ensuremath{\bm{\varphi_u}}(m)=\ensuremath{\bm{\varphi_u}}[\ensuremath{\mathcal{X}},\ensuremath{\mathbb{X}}](m)=\sup\left\lbrace \left\Vert \ensuremath{\mathbbm{1}}_{\varepsilon,A} \right\Vert \colon |A|\le m,\, \varepsilon\in\ensuremath{\mathcal{E}}_A \right\rbrace, \quad m\in\ensuremath{\mathbb{N}}. \] and the \emph{lower super-democracy function} of $\ensuremath{\mathcal{X}}$, \[
\ensuremath{\bm{\varphi_l}}(m)=\ensuremath{\bm{\varphi_l}}[\ensuremath{\mathcal{X}},\ensuremath{\mathbb{X}}](m)=\inf\left\lbrace \left\Vert \ensuremath{\mathbbm{1}}_{\varepsilon,A} \right\Vert \colon |A|\ge m,\, \varepsilon\in\ensuremath{\mathcal{E}}_A \right\rbrace, \quad m\in\ensuremath{\mathbb{N}}. \] The growth of $\ensuremath{\bm{\varphi_u}}$ is of the same order as $\varphi$ (see \cite{AABW2021}*{inequality (8.3)}), and so the basis $\ensuremath{\mathcal{X}}$ is bidemocratic if and only if \begin{equation*} \sup_{m\in\ensuremath{\mathbb{N}}} \frac{1}{m} \ensuremath{\bm{\varphi_u}}[\ensuremath{\mathcal{X}},\ensuremath{\mathbb{X}}](m) \ensuremath{\bm{\varphi_u}}[\ensuremath{\mathcal{X}}^{\ast},\ensuremath{\mathbb{X}}^{\ast}](m) <\infty \end{equation*} (see \cite{AABW2021}*{Lemma 5.5}).
The symbol $\alpha_j\lesssim \beta_j$ for $j\in J$ means that there is a positive constant $C$ such that the families of nonnegative real numbers $(\alpha_j)_{j\in J}$ and $(\beta_j)_{j\in J}$ are related by the inequality $\alpha_j\le C\beta_j$ for all $j\in J$. If $\alpha_j\lesssim \beta_j$ and $\beta_j\lesssim \alpha_j$ for $j\in J$ we say $(\alpha_j)_{j\in J}$ and $(\beta_j)_{j\in J}$ are equivalent, and write $\alpha_j\approx \beta_j$ for $j\in J$.
We finally recall that two bases $(\ensuremath{\bm{x}}_n)_{n=1}^\infty$ and $(\ensuremath{\bm{y}}_n)_{n=1}^\infty$ of quasi-Banach spaces $\ensuremath{\mathbb{X}}$ and $\ensuremath{\mathbb{Y}}$ are said to be equivalent if there is an isomorphism $T$ from $\ensuremath{\mathbb{X}}$ onto $\ensuremath{\mathbb{Y}}$ with $T(\ensuremath{\bm{x}}_n)=\ensuremath{\bm{y}}_n$ for all $n\in\ensuremath{\mathbb{N}}$.
\section{From truncation quasi-greedy to bidemocratic bases }\label{sect:truncation quasi-greedy}\noindent Let $\ensuremath{\mathcal{X}}=(\ensuremath{\bm{x}}_{n})_{n=1}^{\infty}$ be a semi-normalized basis for a quasi-Banach space $\ensuremath{\mathbb{X}}$ with dual basis $(\ensuremath{\bm{x}}_{n}^*)_{n=1}^{\infty}$. For each $f\in \ensuremath{\mathbb{X}}$ and each $B\subseteq\ensuremath{\mathbb{N}}$ finite, put \[
\ensuremath{\mathcal{U}}(f,B) = \min_{n\in B} |\ensuremath{\bm{x}}_n^{\ast}(f)| \sum_{n\in B} \sgn (\ensuremath{\bm{x}}_n^{\ast}(f)) \, \ensuremath{\bm{x}}_n. \] Given $m\in\ensuremath{\mathbb{N}}\cup\{0\}$, the $m$\emph{th-restricted truncation operator} $\ensuremath{\mathcal{U}}_m\colon \ensuremath{\mathbb{X}} \to \ensuremath{\mathbb{X}}$ is defined as \[ \ensuremath{\mathcal{U}}_m(f)=\ensuremath{\mathcal{U}}(f,A_m(f)), \quad f\in\ensuremath{\mathbb{X}}, \]
where $A=A_m(f)\subseteq\ensuremath{\mathbb{N}}$ is a \emph{greedy set} of $f$ of cardinality $m$, i.e., $|\ensuremath{\bm{x}}_{n}^{\ast}(f)|\ge| \ensuremath{\bm{x}}_{k}^{\ast}(f)|$ whenever $n\in A$ and $k\not\in A$. The set $A$ depends on $f$ and $m$, and may not be unique; if this happens we take any such set. We put \begin{equation*} \Lambda_u=\Lambda_u[\ensuremath{\mathcal{X}},\ensuremath{\mathbb{X}}]=\sup\{ \Vert \ensuremath{\mathcal{U}}(f,B)\Vert \colon B \;\text{greedy set of}\; f, \, \Vert f\Vert \le 1\}. \end{equation*} If the quasi-norm is continuous, applying a perturbation technique yields \[ \Lambda_u=\sup_m \Vert \ensuremath{\mathcal{U}}_m\Vert. \] Thus, the basis $\ensuremath{\mathcal{X}}$ is said to be \emph{truncation quasi-greedy} if $(\ensuremath{\mathcal{U}}_m)_{m=1}^\infty$ is a uniformly bounded family of (nonlinear) operators, or equivalently, if and only if $\Lambda_u<\infty$. In this case we will refer to $\Lambda_u$ as the \emph{truncation quasi-greedy constant} of the basis.
Quasi-greedy bases are truncation quasi-greedy (see \cite{DKKT2003}*{Lemma 2.2} and \cite{AABW2021}*{Theorem 4.13}), but the converse does not hold in general. The first case in point appeared in the proof of \cite{BBG2017}*{Proposition 5.6}, where the authors constructed a basis that dominates the unit vector system of $\ell_{1,\infty}$; hence, it is truncation quasi-greedy by \cite{AABW2021}*{Proposition 9.4}, but it is not quasi-greedy. In spite of that, truncation quasi-greedy bases still enjoy most of the nice unconditionality-like properties of quasi-greedy bases. For instance, they are quasi-greedy for large coefficients (QGLC for short), suppression unconditional for constant coefficients (SUCC for short), and lattice partially unconditional (LPU for short). See \cite{AABW2021}*{Sections 3 and 4} for the precise definitions and the proofs of these relations.
In turn, if $\ensuremath{\mathcal{X}}$ is bidemocratic then both $\ensuremath{\mathcal{X}}$ and its dual basis $\ensuremath{\mathcal{X}}^{\ast}$ are truncation quasi-greedy (\cite{AABW2021}*{Proposition 5.7}). In this section we study the converse implication, i.e., we want to know which additional conditions make a truncation quasi-greedy basis bidemocratic. A good starting point is the following result, which uses the upper regularity property (URP for short) and which is valid only for Banach spaces. Following \cite{DKKT2003} we shall say that a basis has the URP if there is an integer $b\ge 3$ so that its fundamental function $\varphi$ satisfies \begin{equation}\label{URPdef} 2\varphi(b m)\le {b} \varphi(m),\quad m\in\ensuremath{\mathbb{N}}. \end{equation}
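To illustrate condition \eqref{URPdef}, consider $\varphi(m)=m^{1/p}$ with $1<p<\infty$. Then \[ 2\varphi(bm)= 2 b^{1/p} m^{1/p} \le b\, m^{1/p} = b\,\varphi(m) \quad\Longleftrightarrow\quad b^{1-1/p}\ge 2, \] so any integer $b\ge \max\{3, 2^{p/(p-1)}\}$ works; for $p=1$, in contrast, no such $b$ exists, reflecting the fact that the fundamental function of the canonical $\ell_1$-basis fails the URP.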
\begin{theorem}[see \cite{AABW2021}*{Lemma 9.8 and Proposition 10.17(iii)}] Let $\ensuremath{\mathcal{X}}$ be a basis of a Banach space $\ensuremath{\mathbb{X}}$. Suppose that $\ensuremath{\mathcal{X}}$ is democratic, truncation quasi-greedy, and has the URP. Then $\ensuremath{\mathcal{X}}$ is bidemocratic (and so $\ensuremath{\mathcal{X}}^{\ast}$ is truncation quasi-greedy too). \end{theorem}
Can we do any better? Dilworth et al.\ characterized bidemocratic bases as those quasi-greedy bases which fulfill an additional condition, weaker than democracy, which they named conservative (\cite{DKKT2003}*{Theorem 5.4}). Recall that a basis is said to be \emph{conservative} if there is a constant $C$ such that $\Vert \ensuremath{\mathbbm{1}}_{A} \Vert \le C \Vert \ensuremath{\mathbbm{1}}_{B} \Vert$ whenever $|A|\le |B|$ and $\max (A) \le \min (B)$. Our objection to this concept is that it is not preserved under rearrangements of the basis. Thus, since the greedy algorithm is ``reordering invariant'' (i.e., if $\pi$ is a permutation of $\ensuremath{\mathbb{N}}$, the greedy algorithm with respect to the bases $(\ensuremath{\bm{x}}_n)_{n=1}^\infty$ and $(\ensuremath{\bm{x}}_{\pi(n)})_{n=1}^\infty$ is the same), when working with conservative bases we are bringing an outer element into the theory. This is the reason why we establish our characterization of bidemocratic bases below in terms of a new, reordering-invariant class of bases which is more general than the class of conservative bases and which we next define.
\begin{definition}
We say that a basis is \emph{partially democratic} if there is a constant $C$ such that for each $D\subseteq\ensuremath{\mathbb{N}}$ finite there is $D\subseteq E\subseteq\ensuremath{\mathbb{N}}$ finite such that $\Vert \ensuremath{\mathbbm{1}}_A\Vert \le C \Vert \ensuremath{\mathbbm{1}}_B\Vert$ whenever $A\subseteq D$ and $B\subseteq \ensuremath{\mathbb{N}}\setminus E$ satisfy $|A|\le |B|$. \end{definition}
The following lemma is well-known. \begin{lemma}[See \cite{AABW2021}*{Proposition 4.16} or \cite{AAW2021b}*{Lemma 5.2}]\label{lem:truncation quasi-greedyQU} Suppose that $\ensuremath{\mathcal{X}}=(\ensuremath{\bm{x}}_n)_{n=1}^\infty$ is a truncation quasi-greedy basis of a quasi-Banach space $\ensuremath{\mathbb{X}}$. Then there is a constant $C$ depending on the modulus of concavity of $\ensuremath{\mathbb{X}}$ and the truncation quasi-greedy constant of $\ensuremath{\mathcal{X}}$ such that \[ \left\Vert \sum_{n\in A} a_n\, \ensuremath{\bm{x}}_n\right\Vert \le C \Vert f\Vert \]
for all $f\in \ensuremath{\mathbb{X}}$, and $A\subseteq\ensuremath{\mathbb{N}}$ such that $\max_{n\in A}|a_n|\le \min_{n\in A} |\ensuremath{\bm{x}}_n^{\ast}(f)|$. \end{lemma}
\begin{theorem}\label{thm:PDtruncation quasi-greedy} Let $\ensuremath{\mathcal{X}}$ be a basis of a Banach space $\ensuremath{\mathbb{X}}$. Suppose that both $\ensuremath{\mathcal{X}}$ and $\ensuremath{\mathcal{X}}^{\ast}$ are truncation quasi-greedy and partially democratic. Then $\ensuremath{\mathcal{X}}$ is bidemocratic. \end{theorem}
\begin{proof} We will customize the proof of \cite{DKKT2003}*{Theorem 5.4} to suit our more general statement. By Lemma~\ref{lem:truncation quasi-greedyQU} there is a constant $\Lambda$ such that \begin{equation} \Vert \ensuremath{\mathbbm{1}}_{\varepsilon,A} \Vert\le \Lambda \Vert \ensuremath{\mathbbm{1}}_{\varepsilon,A} + f\Vert \label{anotherone} \end{equation}
for every $A\subseteq \ensuremath{\mathbb{N}}$ finite, every $\varepsilon\in\ensuremath{\mathcal{E}}_A$, and every $f\in\ensuremath{\mathbb{X}}$ with $\supp(f)\cap A=\emptyset$. Applying the Hahn--Banach theorem to the equivalence class of $\ensuremath{\mathbbm{1}}_{\varepsilon,A}$ in the quotient space $\ensuremath{\mathbb{X}}/\overline{\spn}(\ensuremath{\bm{x}}_n\colon n\notin A)$ yields $f^*\in\spn(\ensuremath{\bm{x}}_n^* \colon n\in A)$ with $\|f^{\ast}\|=1$ such that \[
{\|\ensuremath{\mathbbm{1}}_{\varepsilon, A}\|}\le {\Lambda} |f^{\ast}(\ensuremath{\mathbbm{1}}_{\varepsilon,A})|. \]
Set $\ensuremath{\bm{\varphi_u}}=\ensuremath{\bm{\varphi_u}}[\ensuremath{\mathcal{X}},\ensuremath{\mathbb{X}}]$ and $\ensuremath{\bm{\varphi_u}}^{\ast}=\ensuremath{\bm{\varphi_u}}[\ensuremath{\mathcal{X}}^{\ast},\ensuremath{\mathbb{X}}^{\ast}]$. Let $\Delta_d$ and $\Delta_d^{\ast}$ be the partial democracy constants of $\ensuremath{\mathcal{X}}$ and $\ensuremath{\mathcal{X}}^{\ast}$ respectively, and let $\Lambda_u^{\ast}$ be the truncation quasi-greedy constant of $\ensuremath{\mathcal{X}}^{\ast}$. Given $m\in \ensuremath{\mathbb{N}}$, fix $0<\epsilon<1$ and choose sets $B_1, B_2$ and signs $\varepsilon\in \ensuremath{\mathcal{E}}_{B_1}$, $\varepsilon'\in \ensuremath{\mathcal{E}}_{B_2}$ so that $|B_1|\le m$, $|B_2|\le m$, \begin{align}
\|\ensuremath{\mathbbm{1}}_{\varepsilon, B_1}\|\ge (1-\epsilon) \ensuremath{\bm{\varphi_u}}(m) \;\text{and}\;
\|\ensuremath{\mathbbm{1}}_{\varepsilon', B_2}^{\ast}\|\ge (1-\epsilon)\ensuremath{\bm{\varphi_u}}^{\ast}(m). \label{two} \end{align}
Use partial democracy to pick $D\subseteq\ensuremath{\mathbb{N}}$ disjoint from $B_1\cup B_2$ such that $|D|=2m$, $\Vert \ensuremath{\mathbbm{1}}_B\Vert \le \ensuremath{\mathbf{C}} \Vert \ensuremath{\mathbbm{1}}_A\Vert$, and $\Vert \ensuremath{\mathbbm{1}}_B^{\ast}\Vert \le \ensuremath{\mathbf{C}}^{\ast} \Vert \ensuremath{\mathbbm{1}}_A^{\ast}\Vert$ whenever $B\subseteq B_1\cup B_2$ and $A\subseteq D$ satisfy $|B|\le |A|$.
It follows from \eqref{two} and partial democracy that, for every $A\subseteq D$ with $|A|\ge m$, \begin{align}
(1-\epsilon)\ensuremath{\bm{\varphi_u}}(m)\le& \ensuremath{\mathbf{C}}\|\ensuremath{\mathbbm{1}}_{A}\|, \;\text{and}\;
(1-\epsilon)\ensuremath{\bm{\varphi_u}}^{\ast}(m)\le \ensuremath{\mathbf{C}}^{\ast}\|\ensuremath{\mathbbm{1}}_{A}^{\ast}\|,\label{one3} \end{align} where $\ensuremath{\mathbf{C}}= 2\lambda \Delta_d$ and $\ensuremath{\mathbf{C}}^{\ast}=2 \lambda \Delta_d^{\ast}$, with $\lambda=1$ if $\ensuremath{\mathbb{F}}=\ensuremath{\mathbb{R}}$ and $\lambda=2$ if $\ensuremath{\mathbb{F}}=\ensuremath{\mathbb{C}}$. For such subsets $A$ of $\ensuremath{\mathbb{N}}$ the set \[ \ensuremath{\mathcal{K}}_A=\left\{f^{\ast}\in
\spn(\ensuremath{\bm{x}}_n^{\ast} \colon n\in A) \colon \|f^{\ast}\|\le 1, \; f^{\ast}(\ensuremath{\mathbbm{1}}_{A})\ge \frac{ (1-\epsilon)\ensuremath{\varphi_u}(m)}{\ensuremath{\mathbf{C}}\Lambda}\right\} \] is convex and nonempty. Note that $\ensuremath{\mathcal{K}}_A$ increases with $A$, and that \begin{equation}\label{eq:TrivialEst}
\sum_{n\in A} |f^{\ast}(\ensuremath{\bm{x}}_n)| = f^{\ast}\left( \ensuremath{\mathbbm{1}}_{\overline{\varepsilon(f^{\ast})},A}\right) \le \Vert f^{\ast}\Vert \, \left\Vert \ensuremath{\mathbbm{1}}_{\overline{\varepsilon(f^{\ast})},A}\right\Vert
\le \ensuremath{\varphi_u}(|A|), \; f^{\ast}\in \ensuremath{\mathcal{K}}_A. \end{equation}
Pick $f^{\ast}\in\ensuremath{\mathcal{K}}_D$ that minimizes $\sum_{n\in D}|f^{\ast}(\ensuremath{\bm{x}}_n)|^2$. The geometric properties of minimizing vectors on convex subsets of Hilbert spaces yield \begin{equation}\label{eq:GeoH}
\sum_{n\in D}|f^{\ast}(\ensuremath{\bm{x}}_n)|^2\le \Re\left( \sum_{n\in D} f^{\ast}(\ensuremath{\bm{x}}_n) g^{\ast}(\ensuremath{\bm{x}}_n)\right), \quad g^{\ast}\in \ensuremath{\mathcal{K}}_D. \end{equation}
Let $E$ be a greedy set of $f^{\ast}$ with $|E|=m$, and put $A=D\setminus E$. Using that $\ensuremath{\mathcal{X}}^{\ast}$ is truncation quasi-greedy we obtain \begin{equation}\label{eq:truncation quasi-greedyD}
\min_{n\in E}|f^{\ast}(\ensuremath{\bm{x}}_n)|\, \|\ensuremath{\mathbbm{1}}_{E}^{\ast}\|\le \Lambda_u^{\ast}\|f^{\ast}\|\le \Lambda_u^{\ast}. \end{equation} Pick $g^{\ast}\in \ensuremath{\mathcal{K}}_A$. By \eqref{eq:GeoH}, \eqref{eq:TrivialEst}, \eqref{eq:truncation quasi-greedyD} and \eqref{one3}, \begin{align*}
\sum_{n\in D}|f^{\ast}(\ensuremath{\bm{x}}_n)|^2&\le \sum_{n\in A}|f^{\ast}(\ensuremath{\bm{x}}_n)||g^{\ast}(\ensuremath{\bm{x}}_n)|\\
&\le \min_{n\in E}|f^{\ast}(\ensuremath{\bm{x}}_n)| \sum_{n\in A} |g^{\ast}(\ensuremath{\bm{x}}_n)|\\ &\le\frac{\Lambda_u^{\ast}}{\Vert \ensuremath{\mathbbm{1}}_E^{\ast}\Vert} \ensuremath{\bm{\varphi_u}}(m) \\ &\le\frac{\Lambda_u^{\ast}\ensuremath{\mathbf{C}}^{\ast}}{(1-\epsilon)\ensuremath{\bm{\varphi_u}}^{\ast}(m)}\ensuremath{\varphi_u}(m). \end{align*} Hence, by the Cauchy--Bunyakovsky--Schwarz inequality, \begin{align*}
(1-\epsilon)^{2}(\ensuremath{\varphi_u}(m))^2&\le \ensuremath{\mathbf{C}}^2\Lambda^2 | f^{\ast}(\ensuremath{\mathbbm{1}}_D)|^2\\
&\le \ensuremath{\mathbf{C}}^2\Lambda^2\left(\sum_{n\in D}|f^{\ast}(\ensuremath{\bm{x}}_n)|\right)^2\\
&\le 2 \ensuremath{\mathbf{C}}^2\Lambda^2 m \sum_{n\in D}|f^{\ast}(\ensuremath{\bm{x}}_n)|^2\\ & \le 2m \frac{\ensuremath{\mathbf{C}}^2\ensuremath{\mathbf{C}}^{\ast} \Lambda^2\Lambda_u^{\ast}}{(1-\epsilon)} \frac{\ensuremath{\bm{\varphi_u}}(m)}{\ensuremath{\bm{\varphi_u}}^{\ast}(m)}. \end{align*} Since $\epsilon$ is arbitrary, we obtain \[ \ensuremath{\bm{\varphi_u}}(m)\ensuremath{\bm{\varphi_u}}^{\ast}(m)\le 2 \ensuremath{\mathbf{C}}^2\ensuremath{\mathbf{C}}^{\ast}\Lambda^2\Lambda_u^{\ast}m, \] and so the basis is bidemocratic. \end{proof}
\begin{corollary} Let $\ensuremath{\mathcal{X}}$ be a basis of a Banach space $\ensuremath{\mathbb{X}}$. Suppose that both $\ensuremath{\mathcal{X}}$ and $\ensuremath{\mathcal{X}}^{\ast}$ are truncation quasi-greedy and conservative. Then $\ensuremath{\mathcal{X}}$ is bidemocratic. \end{corollary}
\begin{proof} It follows readily from Theorem~\ref{thm:PDtruncation quasi-greedy} since conservative bases are partially democratic. \end{proof}
\begin{remark} Note that Theorem~\ref{thm:PDtruncation quasi-greedy} makes sense only for Banach spaces, i.e., it cannot be extended to nonlocally convex quasi-Banach spaces. Indeed, for $0<p<1$ the unit vector system of $\ell_p$ is a democratic unconditional basis whose dual basis is the unit vector system of $c_0$, which also is democratic; but the unit vector system of $\ell_p$ is not bidemocratic! \end{remark}
\section{Existence of bidemocratic non-quasi-greedy bases}\label{sect:BDNonQG}\noindent This section is geared towards proving the existence of bidemocratic bases which are not quasi-greedy. To that end, let us first set the minimum requirements on terminology we need for this section.
Suppose $\ensuremath{\mathcal{X}}=(\ensuremath{\bm{x}}_{n})_{n=1}^{\infty}$ is a democratic basis in a quasi-Banach space $\ensuremath{\mathbb{X}}$. We shall say that $\ensuremath{\mathcal{X}}$ has the \emph{lower regularity property} (LRP for short) if there is an integer $b\ge 2$ such that \begin{equation}\label{LRPdef} 2 \varphi(m) \le \varphi(bm), \quad m\in\ensuremath{\mathbb{N}}. \end{equation} In a sense, the LRP is the dual property of the URP. Abusing the language we will say that a sequence has the URP (respectively, LRP), if its terms verify the condition \eqref{URPdef} (respectively, \eqref{LRPdef}). Note that $(\varphi(m))_{m=1}^\infty$ has the LRP if and only if $(m/\varphi(m))_{m=1}^\infty$ has the URP. If $(\varphi(m))_{m=1}^\infty$ has the LRP then there are $a>0$ and $C\ge 1$ such that \begin{equation}\label{eq:LRP} \frac{m^a}{n^a}\le C \frac{\varphi(m)}{\varphi(n)}, \quad n\le m. \end{equation} In the case when $\varphi$ is non-decreasing and the sequence $(\varphi(m)/m)_{m=1}^\infty$ is non-increasing, $\varphi$ has the LRP if and only if the weight $\ensuremath{\bm{w}}=(w_n)_{n=1}^\infty$ defined by $w_n=\varphi(n)/n$ is a \emph{regular} weight, i.e., it satisfies the Dini condition \[ \sup_{n} \frac{1}{n w_n} \sum_{k=1}^n w_k <\infty \] (see \cite{AABW2021}*{Lemma 9.8}), in which case \begin{equation}\label{eq:LRPbis} \sum_{n=1}^m \frac{\varphi(n)}{n} \approx \varphi(m), \quad m\in\ensuremath{\mathbb{N}}. \end{equation} For instance, the power sequence $(m^{1/p})_{m=1}^{\infty}$ has the URP for $1<p<\infty$. In other words, the weight $\ensuremath{\bm{w}}= (n^{-a})_{n=1}^{\infty}$ is regular for $0<a<1$.
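As a sanity check (our computation, not part of the cited sources), the power sequence $\varphi(m)=m^{1/p}$ satisfies \eqref{LRPdef} with $b=\lceil 2^{p}\rceil$, and the weight $w_n=n^{-a}$, $0<a<1$, satisfies the Dini condition with an explicit bound:

```latex
\[
2\,\varphi(m) = 2\, m^{1/p} \le \bigl(2^{p} m\bigr)^{1/p}
 \le (b m)^{1/p} = \varphi(bm), \qquad b=\lceil 2^{p}\rceil,
\]
\[
\frac{1}{n w_n}\sum_{k=1}^{n} w_k
 = n^{a-1}\sum_{k=1}^{n} k^{-a}
 \le n^{a-1}\Bigl(1+\int_{1}^{n} u^{-a}\,du\Bigr)
 \le 1+\frac{1}{1-a}.
\]
```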
We will need the following elementary lemma about the \emph{harmonic numbers} \[ H_m=\sum_{n=1}^m \frac{1}{n}, \quad m\in\ensuremath{\mathbb{N}}\cup\{0\}. \] \begin{lemma}\label{lem:JarDif} For each $0<a<1$ there exists a constant $C(a)$ such that \begin{equation*} S(a,r,t):=\sum_{k=r+1}^t k^{-a}(k-r)^{a-1}\le C(a) (H_t-H_r), \quad t\ge 2r. \end{equation*} \end{lemma} \begin{proof} The inequality is trivial for $r=0$. So we assume that $r\ge 1$. If we define $f\colon [1,\infty) \to [0,\infty)$ by $ f(u) = u^{-a} (u-1)^{a-1}, $ we have \[ k^{-a}(k-r)^{a-1} \le x^{-a} (x-r)^{a-1}= \frac{1}{r} f\left( \frac{x}{r}\right), \quad k\in \ensuremath{\mathbb{N}}, \; x\in [k-1,k]. \] Hence, \[ S(a,r,t)\le \int_{r}^t f\left( \frac{x}{r}\right) \frac{dx}{r}=\int_1^{t/r} f(u) \, du. \] Since $f$ is integrable on $[1,2]$ and $f(u) \lesssim 1/u$ for $u\in[2,\infty)$, there is a constant $C_1$ such that $S(a,r,t) \le C_1 \log(t/r)$. Taking into account that $H_t-H_r\ge (t-r)/t\ge 1/2$, and that there is a constant $C_2$ such that $ \log m\le H_m\le \log m+C_2$ for all $m\in\ensuremath{\mathbb{N}}$ we are done. \end{proof}
For further reference, we record an easy lemma that we will use several times. Note that it applies in particular to the harmonic series.
\begin{lemma}\label{lem:Jar} Let $\sum_{n=1}^\infty c_n$ be a divergent series of nonnegative terms. Suppose that $\lim_n c_n=0$. Then, for every $m\in\ensuremath{\mathbb{N}}\cup\{0\}$ and $0\le a<b$, there are $m\le r<s$ such that $a\le \sum_{n=r+1}^s c_n <b$. \end{lemma}
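Lemma~\ref{lem:Jar} is stated without proof; a minimal argument (our sketch) runs as follows:

```latex
Since $\lim_n c_n=0$, pick $r\ge m$ with $c_n<b-a$ for all $n>r$.
The partial sums $\sigma_s=\sum_{n=r+1}^{s} c_n$ increase to $\infty$
in steps smaller than $b-a$, so the least $s>r$ with $\sigma_s\ge a$
satisfies $a\le \sigma_s<a+(b-a)=b$.
```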
We will also use the following well-known lemma. Note that it could be used to prove the divergence of the harmonic series.
\begin{lemma}[See \cite{Rudin1976}*{Exercise 11, p.\ 84}]\label{lem:AlsoDiverges} Let $\sum_{n=1}^\infty c_n$ be a divergent series of nonnegative terms. Then the (smaller) series \[ \sum_{n=1}^\infty \frac{c_n}{\sum_{k=1}^n c_k} \] also diverges. \end{lemma}
Lorentz sequence spaces $d_{1,q}(\ensuremath{\bm{w}})$ play a relevant role in the qualitative study of greedy-like bases. Let $\ensuremath{\bm{w}}=(w_n)_{n=1}^\infty$ be a weight (i.e., a sequence of nonnegative numbers with $w_1>0$) whose primitive weight $(s_m)_{m=1}^\infty$, defined by $s_m=\sum_{n=1}^m w_n$, is unbounded and \emph{doubling}, i.e., \[ \sup_m \frac{s_{2m}}{s_m} <\infty. \] Given $0<q\le \infty$, we will denote by $d_{1,q}(\ensuremath{\bm{w}})$ the quasi-Banach space of all $f\in c_0$ whose non-increasing rearrangement $(a_n)_{n=1}^\infty$ satisfies \[ \Vert f\Vert_{d_{1,q}(\ensuremath{\bm{w}})}=\left( \sum_{n=1}^\infty a_n^q s_n^{q-1}w_n\right)^{1/q}<\infty, \] with the usual modification if $q=\infty$. For power weights this definition yields the classical Lorentz sequence spaces $\ell_{p,q}$. To be precise, if $\ensuremath{\bm{w}}=(n^{1/p-1})_{n=1}^\infty$ for some $0<p<\infty$, then, up to an equivalent quasi-norm, $d_{1,q}(\ensuremath{\bm{w}}) =\ell_{p,q}$ and if $(a_n)_{n=1}^\infty$ is the non-increasing rearrangement of $f\in c_0$, \[ \Vert f \Vert_{\ell_{p,q}}=\left( \sum_{n=1}^\infty a_n^q n^{q/p-1}\right)^{1/q}. \] For a quick introduction to Lorentz sequence spaces, we refer the reader to \cite{AABW2021}*{Section 9.2}. Here we gather the properties of these spaces that are most pertinent for our purposes. Although it is customary to designate them after the weight $\ensuremath{\bm{w}}$, it must be conceded that as a matter of fact they depend on its primitive weight $(s_m)_{m=1}^\infty$ rather than on $\ensuremath{\bm{w}}$. That is, given weights $\ensuremath{\bm{w}}=(w_n)_{n=1}^\infty$ and $\ensuremath{\bm{w}}'=(w_n')_{n=1}^\infty$ with primitive weights $(s_m)_{m=1}^\infty$ and $(s_m')_{m=1}^\infty$, we have $d_{1,q}(\ensuremath{\bm{w}})=d_{1,q}(\ensuremath{\bm{w}}')$ (up to an equivalent quasi-norm) if and only if $s_m\approx s_m'$ for $m\in\ensuremath{\mathbb{N}}$. 
The fundamental function of the unit vector system of $d_{1,q}(\ensuremath{\bm{w}})$ is equivalent to $(s_m)_{m=1}^\infty$; thus, essentially, it does not depend on $q$. We have \[ d_{1,p}(\ensuremath{\bm{w}}) \subseteq d_{1,q}(\ensuremath{\bm{w}}), \quad 0<p<q\le \infty. \] To show that this inclusion is actually strict we can, for instance, use the sequence \[ H_m[\ensuremath{\bm{w}}]=\sum_{n=1}^m \frac{w_n}{s_n}, \quad m\in\ensuremath{\mathbb{N}}, \] and notice that $\lim_m H_m[\ensuremath{\bm{w}}]=\infty$ by Lemma~\ref{lem:AlsoDiverges}, and \begin{equation}\label{eq:NormLorentz} \left\Vert \sum_{n=1}^m \frac{1}{s_n}\, \ensuremath{\bm{e}}_n\right\Vert_{d_{1,q}(\ensuremath{\bm{w}})}=(H_m[\ensuremath{\bm{w}}])^{1/q}, \quad m\in\ensuremath{\mathbb{N}},\; 0<q<\infty. \end{equation}
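The identity \eqref{eq:NormLorentz} follows by direct computation (our verification): since $(s_n)_{n=1}^\infty$ is non-decreasing, $(1/s_n)_{n=1}^m$ is its own non-increasing rearrangement, so

```latex
\[
\left\Vert \sum_{n=1}^{m} \frac{1}{s_n}\, \ensuremath{\bm{e}}_n
\right\Vert_{d_{1,q}(\ensuremath{\bm{w}})}^{q}
 = \sum_{n=1}^{m} \Bigl(\frac{1}{s_n}\Bigr)^{q} s_n^{q-1} w_n
 = \sum_{n=1}^{m} \frac{w_n}{s_n}
 = H_m[\ensuremath{\bm{w}}].
\]
```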
\begin{lemma}\label{lem:LorentzLRP} Let $0<q\le \infty$, and let $(s_m)_{m=1}^\infty$ be the primitive weight of a weight $\ensuremath{\bm{w}}$. Suppose that $(s_m)_{m=1}^\infty$ has the LRP and that the weight $\ensuremath{\bm{w}}'=(w_n')_{n=1}^\infty$ given by $w_n'=s_n/n$ is non-increasing. Then: \begin{enumerate}[label=(\roman*), leftmargin=*, widest=iii] \item\label{LorentzLRP:1} $d_{1,q}(\ensuremath{\bm{w}})=d_{1,q}(\ensuremath{\bm{w}}')$; \item\label{LorentzLRP:2} for $0\le r \le t<\infty$, $H_t[\ensuremath{\bm{w}}']-H_r[\ensuremath{\bm{w}}']\approx H_t-H_r$ and \item\label{LorentzLRP:3} $ A(r,t):=\left\Vert \sum_{n=r+1}^t s_n^{-1}\, \ensuremath{\bm{e}}_n \right\Vert _{d_{1,q}(\ensuremath{\bm{w}})} \lesssim \max\{1, (H_t-H_r)^{1/q}\}. $ \end{enumerate} \end{lemma}
\begin{proof} The first part follows from \eqref{eq:LRPbis}. Let $(s_m')_{m=1}^\infty$ be the primitive weight of $\ensuremath{\bm{w}}'$. The equivalence \eqref{eq:LRPbis} also yields \[ \frac{w_n'}{s_n'}\approx \frac{1}{n}, \quad n\in\ensuremath{\mathbb{N}}. \] Hence, \ref{LorentzLRP:2} holds. Pick $0<a<1/q$ such that \eqref{eq:LRP} holds. On one hand, if $t\le 2r+1$, \[ A(r,t) \le \frac{1}{s_{r+1}} \left\Vert \sum_{n=r+1}^t \ensuremath{\bm{e}}_n\right\Vert_{d_{1,q}(\ensuremath{\bm{w}})}\lesssim \frac{s_{t-r}}{s_{r+1}}\le 1. \] On the other hand, if $t\ge 2r$ using again \ref{LorentzLRP:1} we obtain \[ A(r,t) \approx \left(\sum_{k=r+1}^t \frac{s_{k-r}^q}{s_k^q(k-r)} \right)^{1/q} \lesssim \left(\sum_{k=r+1}^t \frac{(k-r)^{aq}}{k^{aq}(k-r)} \right)^{1/q}. \] Hence, applying Lemma~\ref{lem:JarDif} yields the desired inequality. \end{proof}
To contextualize the assumptions in Theorem~\ref{theoremLp} below we must take into account that any basis $\ensuremath{\mathcal{X}}$ of an $r$-Banach space $\ensuremath{\mathbb{X}}$, $0<r\le 1$, is dominated by the unit vector basis of the Lorentz sequence space $d_{1,r}(\ensuremath{\bm{w}})$, where the primitive weight of $\ensuremath{\bm{w}}$ is $\ensuremath{\bm{\varphi_u}}[\ensuremath{\mathcal{X}},\ensuremath{\mathbb{X}}]$ (see \cite{AABW2021}*{Theorem 9.12}). Although it is not central in our study, in the proof of Theorem~\ref{theoremLp} we will keep track of the \emph{quasi-greedy parameters} of the basis, \[ \ensuremath{\overline{\bm{g}}}_m[\ensuremath{\mathcal{X}},\ensuremath{\mathbb{X}}]=\sup\{ \Vert S_A[\ensuremath{\mathcal{X}},\ensuremath{\mathbb{X}}](f) \Vert \colon A
\;\text{greedy set of}\; f\in B_\ensuremath{\mathbb{X}}, \, |A|= m\}, \] where for a finite subset $A\subseteq \ensuremath{\mathbb{N}}$, we let $S_A=S_A[\ensuremath{\mathcal{X}},\ensuremath{\mathbb{X}}]\colon \ensuremath{\mathbb{X}} \to\ensuremath{\mathbb{X}}$ denote the coordinate projection on $A$, i.e., \[ S_A(f)=\sum_{n\in A} \ensuremath{\bm{x}}_n^{\ast}(f)\, \ensuremath{\bm{x}}_n,\quad f\in \ensuremath{\mathbb{X}}. \] The quasi-greedy parameters are bounded above by the \emph{unconditionality parameters} \[
\ensuremath{\bm{k}}_m=\ensuremath{\bm{k}}_m[\ensuremath{\mathcal{X}},\ensuremath{\mathbb{X}}] :=\sup_{|A|= m} \Vert S_A\Vert, \quad m\in\ensuremath{\mathbb{N}}, \] which are used to quantify how far the basis is from being unconditional. Thus, the following result exhibits that bidemocratic bases are close to being quasi-greedy.
\begin{theorem}\label{thm:BidCond} Let $\ensuremath{\mathcal{X}}$ be a bidemocratic basis of a $p$-Banach space $\ensuremath{\mathbb{X}}$, $0<p\le 1$. Then, \[ \ensuremath{\bm{k}}_m[\ensuremath{\mathcal{X}},\ensuremath{\mathbb{X}}]\lesssim (\log m)^{1/p}, \quad m\ge 2. \] \end{theorem}
\begin{proof} Just combine \cite{AABW2021}*{Proposition 5.7} with \cite{AAW2021b}*{Theorem 5.1}. \end{proof}
Since $(\ensuremath{\overline{\bm{g}}}_m)_{m=1}^\infty$ need not be non-decreasing (see \cite{Oikhberg2018}*{Proposition 3.1}), we also set \[ \ensuremath{\bm{g}}_m=\ensuremath{\bm{g}}_m[\ensuremath{\mathcal{X}},\ensuremath{\mathbb{X}}]=\sup_{k\le m} \ensuremath{\overline{\bm{g}}}_k. \]
Of course, $\ensuremath{\mathcal{X}}$ is quasi-greedy if and only if $\sup_m \ensuremath{\bm{g}}_m=\sup_m \ensuremath{\overline{\bm{g}}}_m<\infty$, and $\ensuremath{\mathcal{X}}$ is unconditional if and only if $\sup_m \ensuremath{\bm{k}}_m<\infty$.
We will use the fact that quasi-greedy bases are in particular total bases (see \cite{AABW2021}*{Corollary 4.5}) to prove the advertised existence of bidemocratic non-quasi-greedy bases.
\begin{theorem}\label{theoremLp} Let $1<q<\infty$, and let $\ensuremath{\bm{w}}=(w_n)_{n=1}^\infty$ be a weight whose primitive weight $(s_m)_{m=1}^\infty$ is unbounded. Let $\ensuremath{\mathbb{X}}$ be a quasi-Banach space with a basis $\ensuremath{\mathcal{X}}$. Suppose that $\ensuremath{\mathcal{X}}$ is bidemocratic with $\ensuremath{\bm{\varphi_u}}[\ensuremath{\mathcal{X}},\ensuremath{\mathbb{X}}](m)\approx s_m$ for $m\in\ensuremath{\mathbb{N}}$, and that $\ensuremath{\mathcal{X}}$ has a subsequence dominated by the unit vector basis of $d_{1,q}(\ensuremath{\bm{w}})$. Then $\ensuremath{\mathbb{X}}$ has a non-total bidemocratic basis $\ensuremath{\mathcal{Y}}$ with \[ \ensuremath{\bm{\varphi_u}}[\ensuremath{\mathcal{Y}},\ensuremath{\mathbb{X}}](m)\approx s_m, \quad m\in \ensuremath{\mathbb{N}}. \] Moreover, if $(s_m)_{m=1}^\infty$ has the LRP and $(s_m/m)_{m=1}^\infty$ is non-increasing, \[ \ensuremath{\overline{\bm{g}}}_m[\ensuremath{\mathcal{Y}},\ensuremath{\mathbb{X}}] \gtrsim \left(\log m\right)^{{1/q'}}, \quad m\ge 2, \] where $1/q+1/q^{\prime}=1$. \end{theorem}
\begin{proof} Choose a subsequence $\left(\ensuremath{\bm{x}}_{\eta(k)}\right)_{k=1}^{\infty}$ of $\ensuremath{\mathcal{X}}=\left(\ensuremath{\bm{x}}_n\right)_{n=1}^{\infty}$ so that $\eta(1)\ge 2$ and the linear operator $T\colon d_{1,q}(\ensuremath{\bm{w}})\rightarrow \ensuremath{\mathbb{X}}$ given by \[ T\left(\ensuremath{\bm{e}}_k\right)= \ensuremath{\bm{x}}_{\eta(k)}, \quad k\in\ensuremath{\mathbb{N}}, \] is bounded. For each $n\in\ensuremath{\mathbb{N}}$, $n\ge 2$, define $\ensuremath{\bm{y}}_n=\ensuremath{\bm{x}}_n+\ensuremath{\bm{z}}_n$, where \[ \ensuremath{\bm{z}}_n =\begin{cases} w_k \, \ensuremath{\bm{x}}_1& \text{ if } n=\eta(k),\\ 0 & \text{otherwise}. \end{cases} \] It is clear that $(\ensuremath{\bm{y}}_n,\ensuremath{\bm{x}}_n^{\ast})_{n=2}^\infty$ is a biorthogonal system. Thus, in order to prove that $\ensuremath{\mathcal{Y}}:=\left(\ensuremath{\bm{y}}_n\right)_{n=2}^{\infty}$ is a basis of $\ensuremath{\mathbb{X}}$ with dual basis $\ensuremath{\mathcal{Y}}^*=(\ensuremath{\bm{x}}_n^*)_{n=2}^\infty$ it suffices to prove that $\ensuremath{\bm{x}}_1$ belongs to the closed linear span of $\ensuremath{\mathcal{Y}}$. For each $m\in\ensuremath{\mathbb{N}}$ we have \begin{align*} f_m&:=\frac{1}{H_m[\ensuremath{\bm{w}}]} \sum_{k=1}^m \frac{1}{s_k} \ensuremath{\bm{y}}_{\eta(k)}\\ &=\ensuremath{\bm{x}}_1+\frac{1}{H_m[\ensuremath{\bm{w}}]}\sum_{k=1}^m\frac{1}{s_k} \ensuremath{\bm{x}}_{\eta(k)}\\ &=\ensuremath{\bm{x}}_1+\frac{1}{H_m[\ensuremath{\bm{w}}]} T(g_m), \end{align*} where $g_m=\sum_{k=1}^m s_k^{-1} \ensuremath{\bm{e}}_k$. By \eqref{eq:NormLorentz}, \[ \Vert f_m-\ensuremath{\bm{x}}_1\Vert\le \Vert T \Vert(H_m[\ensuremath{\bm{w}}])^{-1/q'}, \quad m\in\ensuremath{\mathbb{N}}. \] Since by Lemma~\ref{lem:AlsoDiverges}, $\lim_m H_m[\ensuremath{\bm{w}}]=\infty$ we obtain $\lim_m f_m=\ensuremath{\bm{x}}_1$. Since $\ensuremath{\bm{y}}_n^{\ast}\left(\ensuremath{\bm{x}}_1\right)=0$ for all $n\ge 2$, $\ensuremath{\mathcal{Y}}$ is not a total basis.
Put $\ensuremath{\mathcal{Z}}=(\ensuremath{\bm{z}}_n)_{n=2}^\infty$. Then, \[ \ensuremath{\bm{\varphi_u}}[\ensuremath{\mathcal{Z}},\ensuremath{\mathbb{X}}](m)=\Vert \ensuremath{\bm{x}}_1\Vert \sum_{k=1}^m w_k \approx s_m, \quad m\in\ensuremath{\mathbb{N}}. \] Hence, \[ \ensuremath{\bm{\varphi_u}}[\ensuremath{\mathcal{Y}},\ensuremath{\mathbb{X}}](m)\lesssim \ensuremath{\bm{\varphi_u}}[\ensuremath{\mathcal{X}},\ensuremath{\mathbb{X}}](m)+ \ensuremath{\bm{\varphi_u}}[\ensuremath{\mathcal{Z}},\ensuremath{\mathbb{X}}](m)\lesssim s_m, \quad m\in\ensuremath{\mathbb{N}}, \] so that, since $\ensuremath{\bm{\varphi_u}}[\ensuremath{\mathcal{Y}}^*,\ensuremath{\mathbb{X}}](m)\lesssim m/s_m$ for $m\in\ensuremath{\mathbb{N}}$, $\ensuremath{\mathcal{Y}}$ is bidemocratic.
In the case when $(s_m)_{m=1}^\infty$ has the LRP and $(s_m/m)_{m=1}^\infty$ is non-increasing, by parts \ref{LorentzLRP:1} and~\ref{LorentzLRP:2} of Lemma~\ref{lem:LorentzLRP} we can assume without loss of generality that \begin{equation}\label{eq:HarmonicEquivalence} H_t[\ensuremath{\bm{w}}]-H_r[\ensuremath{\bm{w}}]\approx H_t-H_r, \quad 0\le r \le t. \end{equation}
To estimate the quasi-greedy parameters we appeal to Lemma~\ref{lem:Jar} to pick for each $m\ge 2$, natural numbers $r=r(m)$ and $s=s(m)$ with $m\le r \le s$, and \begin{equation}\label{eq:HarmonicEstimates} H_m[\ensuremath{\bm{w}}] \le H_s[\ensuremath{\bm{w}}]-H_r[\ensuremath{\bm{w}}]\le (H_m[\ensuremath{\bm{w}}])^{1/q}+H_m[\ensuremath{\bm{w}}]. \end{equation} Set $h_m=\sum_{k=r+1}^s s_k^{-1} \ensuremath{\bm{e}}_k$ and \begin{align*} u_m&=\frac{1}{H_m[\ensuremath{\bm{w}}]} \left( \sum_{k=1}^{m} \frac{1}{s_k} \ensuremath{\bm{y}}_{\eta(k)} -\sum_{k=r+1}^{s} \frac{1}{s_k} \ensuremath{\bm{y}}_{\eta(k)}\right)\\ &=\frac{1}{H_m[\ensuremath{\bm{w}}]} \left( T(g_m)-T(h_m) + (H_m[\ensuremath{\bm{w}}]-H_s[\ensuremath{\bm{w}}]+H_r[\ensuremath{\bm{w}}]) \ensuremath{\bm{x}}_1 \right). \end{align*} By Lemma~\ref{lem:LorentzLRP}~\ref{LorentzLRP:3}, \eqref{eq:NormLorentz}, \eqref{eq:HarmonicEquivalence} and \eqref{eq:HarmonicEstimates}, \[
\max\{\Vert g_m\Vert, \Vert h_m\Vert, |H_s[\ensuremath{\bm{w}}]-H_r[\ensuremath{\bm{w}}]-H_m[\ensuremath{\bm{w}}]|\}\lesssim H_m^{1/q}, \quad m\in\ensuremath{\mathbb{N}}. \] Hence, $ \Vert u_m\Vert \lesssim H_m^{-1/q'} $ for $m\in\ensuremath{\mathbb{N}}$. Since $A_m:=\{\eta(1),\dots,\eta(m)\}$ is a greedy set of $u_m$ with respect to $\ensuremath{\mathcal{Y}}$, and \[ \Vert S_{A_m}[\ensuremath{\mathcal{Y}},\ensuremath{\mathbb{X}}](u_m)\Vert=\Vert f_m\Vert \approx 1,\quad m\in\ensuremath{\mathbb{N}}, \] we are done. \end{proof}
\begin{corollary}\label{corbidem} Let $\ensuremath{\mathbb{X}}$ be a Banach space with a Schauder basis. Suppose that $\ensuremath{\mathbb{X}}$ has a complemented subspace isomorphic to $\ell_{p,q}$, where $p$, $q\in(1,\infty)$. Then $\ensuremath{\mathbb{X}}$ has a non-total bidemocratic basis $\ensuremath{\mathcal{Y}}$ with \[ \ensuremath{\bm{\varphi_u}}[\ensuremath{\mathcal{Y}},\ensuremath{\mathbb{X}}](m)\approx m^{{1/p}}, \quad m\in \ensuremath{\mathbb{N}}, \] and \[ \ensuremath{\overline{\bm{g}}}_m[\ensuremath{\mathcal{Y}},\ensuremath{\mathbb{X}}] \gtrsim \left(\log m\right)^{{1/q'}}, \quad m\ge 2. \] \end{corollary}
\begin{proof} An application of the Dilworth--Kalton--Kutzarova method, or DKK-method for short (see \cites{AADK2019b,DKK2003}), yields a bidemocratic Schauder basis of $\ensuremath{\mathbb{X}}$ with fundamental function equivalent to $(m^{1/p})_{m=1}^\infty$ (see \cite{AADK2019b}). The direct sum of this basis with the unit vector system of $\ell_{p,q}$ is a bidemocratic Schauder basis of $ \ensuremath{\mathbb{X}}\oplus\ell_{p,q}\approx\ensuremath{\mathbb{X}}$ that possesses a subsequence equivalent to the unit vector basis of $\ell_{p,q}$. Applying Theorem~\ref{theoremLp} we are done. \end{proof}
Note that Corollary~\ref{corbidem} can be applied with $1<p=q<\infty$, so that $\ell_{p,q}=\ell_p$. Hence as a consequence we obtain the result that we announced in the Introduction.
\begin{theorem}\label{thm:BDNotTotal} Let $1<p<\infty$. Then $\ell_p$ has a bidemocratic non-total (hence, non-quasi-greedy) basis. \end{theorem}
Theorem~\ref{thm:BDNotTotal} leads us naturally to the question of whether there exist bidemocratic non-total bases in $\ell_1$ and $c_0$. We make a brief detour to answer both questions in the negative. To that end we will apply the arguments that follow, keeping in mind that $\ell_1=(c_0)^{\ast}$ is a GT-space (see \cite{LinPel1968}).
\begin{proposition} Let $\ensuremath{\mathbb{X}}$ be a quasi-Banach space, and let $\ensuremath{\mathcal{X}}=(\ensuremath{\bm{x}}_n)_{n=1}^\infty$ and $\ensuremath{\mathcal{Y}}=(\ensuremath{\bm{x}}_n^{\ast})_{n=1}^\infty$ be sequences in $\ensuremath{\mathbb{X}}$ and $\ensuremath{\mathbb{X}}^{\ast}$, respectively. Suppose that $(\ensuremath{\bm{x}}_n,\ensuremath{\bm{x}}_n^{\ast})_{n=1}^\infty$ is a biorthogonal system and that \[ \ensuremath{\bm{\varphi_u}}[\ensuremath{\mathcal{X}},\ensuremath{\mathbb{X}}](m) \, \ensuremath{\bm{\varphi_u}}[\ensuremath{\mathcal{Y}},\ensuremath{\mathbb{X}}^{\ast}](m)\le C m, \quad m\in\ensuremath{\mathbb{N}}, \]
for some constant $C$. Then $\Vert \ensuremath{\mathbbm{1}}_{\varepsilon,A} [\ensuremath{\mathcal{X}},\ensuremath{\mathbb{X}}] \Vert\le C \Vert f \Vert$ whenever $A\subseteq \ensuremath{\mathbb{N}}$, $\varepsilon\in\ensuremath{\mathcal{E}}_A$, and $f\in\ensuremath{\mathbb{X}}$ are such that $|\ensuremath{\bm{x}}_n^{\ast}(f)|\ge 1$ on a set of cardinality at least $|A|$. \end{proposition}
\begin{proof} In the case when $\ensuremath{\mathcal{X}}$ spans the whole space $\ensuremath{\mathbb{X}}$, this proposition says that any bidemocratic basis is truncation quasi-greedy. In fact, the proof of \cite{AABW2021}*{Proposition 5.7} gives this slightly more general result. \end{proof}
\begin{theorem}\label{thm:AAW}
Let $\ensuremath{\mathbb{X}}$ be a GT-space and let $\ensuremath{\mathcal{X}}=(\ensuremath{\bm{x}}_n)_{n=1}^\infty$ and $(\ensuremath{\bm{x}}_n^{\ast})_{n=1}^\infty$ be sequences in $\ensuremath{\mathbb{X}}$ and $\ensuremath{\mathbb{X}}^{\ast}$, respectively. Suppose that $(\ensuremath{\bm{x}}_n,\ensuremath{\bm{x}}_n^{\ast})_{n=1}^\infty$ is a biorthogonal system and that there is a constant $C$ such that $\Vert \ensuremath{\mathbbm{1}}_{\varepsilon,A} [\ensuremath{\mathcal{X}},\ensuremath{\mathbb{X}}] \Vert\le C \Vert f \Vert$ whenever $A\subseteq \ensuremath{\mathbb{N}}$ and $f\in\ensuremath{\mathbb{X}}$ satisfy $|\ensuremath{\bm{x}}_n^{\ast}(f)|\ge 1 \ge |\ensuremath{\bm{x}}_k^{\ast}(f)|$ for $(n,k)\in A\times (\ensuremath{\mathbb{N}}\setminus A)$, and $\varepsilon=(\varepsilon_n)_{n\in A}\in\ensuremath{\mathcal{E}}_A$ is defined by $\ensuremath{\bm{x}}_n^{\ast}(f)=|\ensuremath{\bm{x}}_n^{\ast}(f)|\, \varepsilon_n$. Then, $\ensuremath{\bm{\varphi_l}}(m) \gtrsim m$ for $m\in\ensuremath{\mathbb{N}}$. \end{theorem}
\begin{proof} In the case when $\ensuremath{\mathcal{X}}$ spans the whole space $\ensuremath{\mathbb{X}}$, this theorem says that any truncation quasi-greedy basis of a GT-space is democratic with fundamental function equivalent to $(m)_{m=1}^\infty$ (see \cite{AAW2021}*{Theorem 4.3}). As a matter of fact, the proof of \cite{AAW2021}*{Theorem 4.3} gives this slightly more general result. \end{proof}
\begin{theorem}Let $\ensuremath{\mathcal{X}}$ be a bidemocratic basis of a Banach space $\ensuremath{\mathbb{X}}$. \begin{enumerate}[label=(\roman*),leftmargin=*,widest=ii] \item\label{GTspace}If $\ensuremath{\mathbb{X}}$ is a GT-space, then $\ensuremath{\mathcal{X}}$ is equivalent to the canonical basis of $\ell_1$. \item\label{predualGTspace}If $\ensuremath{\mathbb{X}}^{\ast}$ is a GT-space, then $\ensuremath{\mathcal{X}}$ is equivalent to the canonical basis of $c_0$. \end{enumerate} \end{theorem}
\begin{proof} Suppose that $\ensuremath{\mathbb{X}}$ (resp.,\ $\ensuremath{\mathbb{X}}^{\ast}$) is a GT-space. By Theorem~\ref{thm:AAW} $\ensuremath{\bm{\varphi_l}}[\ensuremath{\mathcal{X}},\ensuremath{\mathbb{X}}]$ (resp.,\ $\ensuremath{\bm{\varphi_l}}[\ensuremath{\mathcal{X}}^{\ast},\ensuremath{\mathbb{X}}^{\ast}]$) is equivalent to $(m)_{m=1}^\infty$. Hence, $\ensuremath{\bm{\varphi_u}}[\ensuremath{\mathcal{X}}^{\ast},\ensuremath{\mathbb{X}}^{\ast}]$ (resp.,\ $\ensuremath{\bm{\varphi_u}}[\ensuremath{\mathcal{X}},\ensuremath{\mathbb{X}}]$) is bounded. This readily gives that $\ensuremath{\mathcal{X}}^{\ast}$ (resp.\ $\ensuremath{\mathcal{X}}$) is equivalent to the canonical basis of $c_0$. To conclude the proof of \ref{GTspace}, we infer that $\ensuremath{\mathcal{X}}^{**}$ is equivalent to the canonical basis $\ensuremath{\mathcal{B}}_{\ell_1}$ of $\ell_1$. Since $\ensuremath{\mathcal{B}}_{\ell_1}$ dominates $\ensuremath{\mathcal{X}}$ and $\ensuremath{\mathcal{X}}$ dominates $\ensuremath{\mathcal{X}}^{**}$ we are done. \end{proof}
It is known that some results involving the TGA work for total bases but break down if we drop this assumption (see, e.g., \cite{BL2020}*{Theorem 4.2 and Example 4.5}). In view of this, another question springing from Theorem~\ref{thm:BDNotTotal} is whether working with total bases makes a difference, i.e., whether bidemocratic total bases are quasi-greedy. We solve this question in the negative by proving the following theorem.
\begin{theorem}\label{thm:BDTotalNotQG} Let $1<p<\infty$. Then any infinite-dimensional subspace of $\ell_p$ has a further subspace with a bidemocratic non-quasi-greedy total basis. \end{theorem}
Theorem~\ref{thm:BDTotalNotQG} will follow as a consequence of the following general result.
\begin{theorem}\label{theoremLp2} Let $\ensuremath{\bm{w}}=(w_n)_{n=1}^\infty$ be a weight, and suppose that its primitive weight $(s_m)_{m=1}^\infty$ has the LRP and that $(s_m/m)_{m=1}^\infty$ is non-increasing. Let $\ensuremath{\mathbb{X}}$ be a Banach space with a total basis $\ensuremath{\mathcal{X}}$. Suppose that $\ensuremath{\mathcal{X}}$ is bidemocratic with $\ensuremath{\bm{\varphi_u}}[\ensuremath{\mathcal{X}},\ensuremath{\mathbb{X}}](m)\approx s_m$ for $m\in\ensuremath{\mathbb{N}}$, and that $\ensuremath{\mathcal{X}}$ has a subsequence dominated by the unit vector basis of $d_{1,q}(\ensuremath{\bm{w}})$ for some $q>1$. Then $\ensuremath{\mathbb{X}}$ has a subspace $\ensuremath{\mathbb{Y}}$ with a basis $\ensuremath{\mathcal{Y}}$ satisfying the following properties: \begin{enumerate}[label=(\roman*),leftmargin=*,widest=iii] \item\label{bidemocraticlp2}$\ensuremath{\mathcal{Y}}$ is bidemocratic with $\ensuremath{\bm{\varphi_u}}[\ensuremath{\mathcal{Y}},\ensuremath{\mathbb{Y}}](m)\approx s_m$ for $m\in\ensuremath{\mathbb{N}}$. \item\label{markulp2} $\ensuremath{\mathcal{Y}}$ is total. \item\label{notqglps}$\ensuremath{\mathcal{Y}}$ is not quasi-greedy. \item\label{notslp2}$\ensuremath{\mathcal{Y}}$ is not Schauder in any order. \end{enumerate} \end{theorem}
\begin{proof} Choose a subsequence $\left(\ensuremath{\bm{x}}_{\eta(j)}\right)_{j=1}^{\infty}$ of $\ensuremath{\mathcal{X}}=\left(\ensuremath{\bm{x}}_n\right)_{n=1}^{\infty}$ so that $\ensuremath{\mathbb{N}}\setminus\eta(\ensuremath{\mathbb{N}})$ is infinite and the linear operator $T\colon d_{1,q}(\ensuremath{\bm{w}})\rightarrow \ensuremath{\mathbb{X}}$ given by \[ T\left(\ensuremath{\bm{e}}_j\right)= \ensuremath{\bm{x}}_{\eta(j)}, \quad j\in\ensuremath{\mathbb{N}}, \] is bounded. Let $\psi\colon\ensuremath{\mathbb{N}}\rightarrow\ensuremath{\mathbb{N}}$ be the increasing map defined by $\psi(\ensuremath{\mathbb{N}})=\ensuremath{\mathbb{N}}\setminus\eta(\ensuremath{\mathbb{N}})$. Since the harmonic series diverges we can recursively construct an increasing sequence $(t_k)_{k=0}^\infty$ of natural numbers with $t_0=0$ such that, if we put \[ \Lambda_k=H_{t_k} -H_{t_{k-1}}, \] then $\lim_k \Lambda_k=\infty$. For each $j\in\ensuremath{\mathbb{N}}$ define $\ensuremath{\bm{y}}_j=\ensuremath{\bm{x}}_{\eta(j)}+\ensuremath{\bm{z}}_j$, where \[ \ensuremath{\bm{z}}_j = \frac{s_j}{j} \ensuremath{\bm{x}}_{\psi(k)}, \quad k\in\ensuremath{\mathbb{N}},\; t_{k-1} <j \le t_k. \]
It is clear that $(\ensuremath{\bm{y}}_j,\ensuremath{\bm{x}}_{\eta(j)}^{\ast})_{j=1}^\infty$ is a biorthogonal system. Thus, to see that $\ensuremath{\mathcal{Y}}:=\left(\ensuremath{\bm{y}}_j\right)_{j=1}^{\infty}$ satisfies \ref{bidemocraticlp2} it suffices to prove that, if $\ensuremath{\mathcal{Z}}=(\ensuremath{\bm{z}}_j)_{j=1}^\infty$, $\ensuremath{\bm{\varphi_u}}[\ensuremath{\mathcal{Z}},\ensuremath{\mathbb{X}}](m) \lesssim s_m$ for $m\in\ensuremath{\mathbb{N}}$. Set $C_1=\sup_n \Vert \ensuremath{\bm{x}}_n\Vert$. For every $A\subseteq\ensuremath{\mathbb{N}}$ with $|A|=m<\infty$ and $\varepsilon\in\ensuremath{\mathcal{E}}_A$ we have \[ \Vert \ensuremath{\mathbbm{1}}_{\varepsilon,A}[\ensuremath{\mathcal{Z}},\ensuremath{\mathbb{X}}]\Vert \le C_1 \sum_{j\in A} \frac{s_j}{j} \le C_1\sum_{j=1}^m \frac{s_j}{j} \lesssim s_m. \]
Let us see that $\ensuremath{\mathcal{Y}}$ is a total basis of $\ensuremath{\mathbb{Y}}=[\ensuremath{\mathcal{Y}}]$. Set \[ \ensuremath{\bm{z}}_k^{\ast}=\ensuremath{\bm{x}}_{\psi(k)}^{\ast} -\sum_{j=1+t_{k-1}}^{t_k} \frac{s_j}{j} \, \ensuremath{\bm{x}}^{\ast}_{\eta(j)}, \quad k\in\ensuremath{\mathbb{N}}. \] We have $\ensuremath{\bm{z}}_k^{\ast}(\ensuremath{\bm{y}}_j)=0$ for all $j$ and $k\in\ensuremath{\mathbb{N}}$. Therefore $\ensuremath{\bm{z}}_k^{\ast}(f)=0$ for all $f\in\ensuremath{\mathbb{Y}}$ and $k\in\ensuremath{\mathbb{N}}$. Pick $f\in\ensuremath{\mathbb{Y}}$ and suppose that $\ensuremath{\bm{x}}_{\eta(j)}^{\ast}(f)=0$ for all $j\in\ensuremath{\mathbb{N}}$. We infer that $\ensuremath{\bm{x}}_{\psi(k)}^{\ast}(f)=0$ for all $k\in\ensuremath{\mathbb{N}}$. Since $\ensuremath{\mathcal{X}}$ is a total basis, $f=0$.
To prove that $\ensuremath{\mathcal{Y}}$ is neither a quasi-greedy basis nor a Schauder basis in any ordering, we pick a permutation $\pi$ of $\ensuremath{\mathbb{N}}$. For each $k\in\ensuremath{\mathbb{N}}$, choose $A_k\subseteq D_k:=[1+t_{k-1},t_k]\cap \ensuremath{\mathbb{N}}$ minimal with the properties \[ l:=\max( \pi^{-1}(A_k)) <\min (\pi^{-1}(D_k\setminus A_k)) \;\text{and}\; \Gamma_k:=\sum_{j\in A_k} \frac{1}{j} > \frac{\Lambda_k}{2}. \] By construction, \[ \frac{\Lambda_k}{2} \ge \Gamma_k-\frac{1}{\pi(l)} \ge \Gamma_k-1. \] Then, if we set \[ \Theta_k:=\sum_{j\in D_k\setminus A_k} \frac{1}{j}=\Lambda_k-\Gamma_k, \] we have $\Gamma_k-\Theta_k=-\Lambda_k+2\Gamma_k\in(0,2]$. Also by construction, if we set \[ g_k=\sum_{j\in A_k} \frac{1}{s_j} \ensuremath{\bm{y}}_j, \quad h_k=\sum_{j\in D_k\setminus A_k} \frac{1}{s_j} \ensuremath{\bm{y}}_j, \quad k\in\ensuremath{\mathbb{N}}, \] then $g_k$ is a partial-sum projection of $f_k:=g_k-h_k$ with respect to the rearranged basis $(\ensuremath{\bm{y}}_{\pi(i)})_{i=1}^\infty$. Moreover, in the case when $\pi$ is the identity map, $g_k$ is a greedy projection of $f_k$. On one hand, if we set \[ f_k'= \sum_{j\in A_k} \frac{1}{s_j} \ensuremath{\bm{e}}_j- \sum_{j\in D_k\setminus A_k} \frac{1}{s_j} \ensuremath{\bm{e}}_j,\\ \] we have $f_k=T(f_k') + (\Gamma_k-\Theta_k) \ensuremath{\bm{x}}_{\psi(k)}$ for all $k\in\ensuremath{\mathbb{N}}$. By Lemma~\ref{lem:LorentzLRP}~\ref{LorentzLRP:3}, \[ \Vert f_k'\Vert_{d_{1,q}(\ensuremath{\bm{w}})} = \left\Vert \sum_{j=1+t_{k-1}}^{t_k} \frac{1}{s_j} \ensuremath{\bm{e}}_j\right\Vert_{d_{1,q}(\ensuremath{\bm{w}})}\lesssim \max\{ 1, \Lambda_k^{1/q}\}\approx \Lambda_k^{1/q}. \] Hence, $\Vert f_k\Vert \lesssim \Lambda_k^{1/q}$ for $k\in\ensuremath{\mathbb{N}}$. On the other hand, since $\ensuremath{\bm{x}}_{\psi(k)}^{\ast} (g_k)=\Gamma_k$, we have \[ \Lambda_k<2\Gamma_k\le 2 C_2 \Vert g_k\Vert \] where $C_2=\sup_n \Vert \ensuremath{\bm{x}}_n^{\ast}\Vert$. 
Summing up, \[ \frac{ \Vert g_k\Vert}{\Vert f_k\Vert} \gtrsim \Lambda_k^{1/q'} \xrightarrow[k\to \infty]{}\infty. \qedhere \] \end{proof}
\begin{corollary}\label{corollaryl2} There is a bidemocratic total basis of $\ell_2$ that is not Schauder under any rearrangement of the terms nor quasi-greedy. \end{corollary}
Let us notice that the bases we construct to prove Theorem~\ref{thm:BDTotalNotQG} are \emph{not} Schauder bases. As the TGA does not depend on the particular way we reorder the basis, whereas being a Schauder basis does, studying the TGA within the framework of Schauder bases is somewhat unnatural. Nonetheless, Schauder bases have provided a friendly framework for developing greedy approximation theory with respect to bases since its beginnings at the turn of the century. In fact, it is nowadays unknown even whether certain results involving the TGA work outside the framework of Schauder bases (see, e.g., \cite{Berna2020})! Hence, in connection with our discussion it is natural to wonder whether bidemocratic Schauder bases are quasi-greedy. We close this section by providing a negative answer to this question too.
\begin{theorem}\label{thm:BDSchauderNotQG} There is a Banach space with a bidemocratic Schauder basis which is not quasi-greedy. \end{theorem}
The proof of Theorem~\ref{thm:BDSchauderNotQG} relies on a construction that has its roots in \cite{KoTe1999}, where it was used to build a conditional quasi-greedy basis. Variants of the original idea of Konyagin and Temlyakov have appeared in several papers with different motivations (see \cites{GHO2013,BBGHO2018,AABW2021,Oikhberg2018}). Prior to tackling the proof we introduce a quantitative version of \cite{DKKT2003}*{Theorem 5.4}.
\begin{theorem}\label{thm:dualQuantitative} Let $\ensuremath{\mathcal{X}}$ be a bidemocratic basis of a quasi-Banach space $\ensuremath{\mathbb{X}}$. Then \[ \ensuremath{\overline{\bm{g}}}_m[\ensuremath{\mathcal{X}}^{\ast},\ensuremath{\mathbb{X}}^{\ast}]\lesssim \ensuremath{\overline{\bm{g}}}_m[\ensuremath{\mathcal{X}},\ensuremath{\mathbb{X}}], \quad m\in\ensuremath{\mathbb{N}}. \] In particular, if $\ensuremath{\mathcal{X}}$ is a Schauder basis of a Banach space, then \[ \ensuremath{\overline{\bm{g}}}_m[\ensuremath{\mathcal{X}}^{\ast},\ensuremath{\mathbb{X}}^{\ast}]\approx \ensuremath{\overline{\bm{g}}}_m[\ensuremath{\mathcal{X}},\ensuremath{\mathbb{X}}], \quad m\in\ensuremath{\mathbb{N}}. \] \end{theorem}
\begin{proof} The proof of \cite{DKKT2003}*{Theorem 5.4} (see also \cite{AABW2021}*{Proof of Proposition 5.7}) yields the first estimate. To see the equivalence, we use that $\ensuremath{\mathcal{X}}^{**}$ is equivalent to $\ensuremath{\mathcal{X}}$ (see \cite{AlbiacKalton2016}*{Corollary 3.2.4}). \end{proof}
\begin{proposition}\label{theorembidemocraticnotqG} Let $1<p<\infty$. There is a Banach space $\ensuremath{\mathbb{X}}$ with a monotone Schauder basis $\ensuremath{\mathcal{X}}$ with the following properties: \begin{enumerate}[label=(\roman*),leftmargin=*,widest=ii] \item\label{1bidem}For all finite sets $A\subseteq \ensuremath{\mathbb{N}}$ and all $\varepsilon\in \ensuremath{\mathcal{E}}_A$, \[
\Vert \ensuremath{\mathbbm{1}}_{\varepsilon A}\Vert=\left|A\right|^{1/p} \;\text{and}\;
\Vert\ensuremath{\mathbbm{1}}_{\varepsilon A}^{\ast}\Vert=\left|A\right|^{1/p'}, \] where $1/p+1/p^{\prime}=1$. Therefore, $\ensuremath{\mathcal{X}}$ is $1$-bidemocratic. \item\label{notnQG} Neither $\ensuremath{\mathcal{X}}$ nor $\ensuremath{\mathcal{X}}^{\ast}$ is quasi-greedy. Quantitatively, \[ \ensuremath{\overline{\bm{g}}}_m\approx\ensuremath{\overline{\bm{g}}}_m^{\ast}\approx \ensuremath{\bm{k}}_m\approx \ensuremath{\bm{k}}_m^{\ast}\approx \left(\log{m}\right)^{1/p'} , \quad m\in\ensuremath{\mathbb{N}},\; m\ge 2. \] \end{enumerate} \end{proposition} \begin{proof} Put \[ \ensuremath{\mathcal{D}}:=\{(m,k)\in\ensuremath{\mathbb{N}}^2 \colon 1\le k \le m\}, \] where the elements are taken in the lexicographical order. Appealing to Lemma~\ref{lem:Jar}, we recursively construct a family $(r_{m,k},s_{m,k})_{(m,k)\in\ensuremath{\mathcal{D}}}$ in $\ensuremath{\mathbb{N}}^2$ such that \begin{align} m+1<r_{m,k}<s_{m,k},\quad&1\le k\le m,\label{movetotheright1}\\ s_{m,k}<r_{m,k+1}, \quad &1\le k<m, \;\text{and}\label{movetotheright2}\\ \frac{1}{k}- \frac{1}{m}\le T_{m,k}:= \sum_{j=r_{m,k}}^{s_{m,k}}\frac{1}{j}<\frac{1}{k},\quad &1\le k\le m. \label{conditionclosesums} \end{align}
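One concrete way to realize such a family is a greedy scan: once the starting point $r_{m,k}$ is large enough that every reciprocal is smaller than $1/m$, appending consecutive terms until the running sum reaches $1/k-1/m$ cannot overshoot $1/k$. The following numerical sketch illustrates this (the helper name \texttt{harmonic\_blocks} is ours, not from the text, and this is only one admissible choice of the family):

```python
def harmonic_blocks(m):
    """Greedy realization of (r_{m,k}, s_{m,k}) for a fixed m: scan consecutive
    integers and close block k as soon as T = sum_{j=r}^{s} 1/j reaches
    1/k - 1/m; since every term is < 1/m, the total then stays below 1/k."""
    blocks = []
    r = 2 * m + 2                      # r > m + 1, and 1/r + 1/(r+1) < 1/m
    for k in range(1, m + 1):
        lo = 1.0 / k - 1.0 / m
        s = r + 1
        total = 1.0 / r + 1.0 / s      # start with two terms, so r < s strictly
        while total < lo:              # each new term is < 1/m: no overshoot
            s += 1
            total += 1.0 / s
        blocks.append((r, s, total))
        r = s + 2                      # guarantees s_{m,k} < r_{m,k+1}
    return blocks
```

Running it for moderate $m$ confirms that the three conditions \eqref{movetotheright1}--\eqref{conditionclosesums} hold for every block.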
Next, we choose a sequence $( A_m)_{m=1}^\infty$ of integer intervals contained in $\ensuremath{\mathbb{N}}$ so that $\max(A_m)<\min(A_{m+1})$ for all $m\in\ensuremath{\mathbb{N}}$, and \begin{equation}\label{separatingthesets}
|A_m|=2m+\sum_{k=1}^{m}s_{m,k}-r_{m,k}. \end{equation} Let \[ i_{m,k}=\min A_m+\sum_{j=1}^{k-1}\left(s_{m,j}-r_{m,j}+2\right), \quad (m,k)\in\ensuremath{\mathcal{D}}. \] Fix $m\in\ensuremath{\mathbb{N}}$. For each $n\in A_m$ there are unique integers $1\le k \le m$ and $-1\le t \le s_{m,k}-r_{m,k}$ so that $n=i_{m,k}+1+t$. Let us set \[ (d_{m,n},\varepsilon_{m,n})= \begin{cases} (k,1) &\text{ if } t=-1,\\ (r_{m,k}+t,-1) &\text{otherwise.} \end{cases} \] Consider the subset of $\ensuremath{\mathbb{N}}$ given by \begin{equation*} B_m=\{ i_{m,k} \colon 1\le k \le m\}. \end{equation*} The family $(d_{m,n})_{n\in B_m}$ is increasing. By \eqref{movetotheright2}, $(d_{m,n})_{n\in A_m\setminus B_m}$ is also increasing, and by \eqref{movetotheright1}, \begin{equation}\label{eq:BlGreedy} \max_{n\in B_m} d_{m,n} < \min_{n\in A_m\setminus B_m} d_{m,n}. \end{equation} Set $b_{m,n}=d_{m,n}^{-1/p'}$ for $m\in\ensuremath{\mathbb{N}}$ and $n\in A_m$. Since the family $(d_{m,n})_{n\in A_m}$ consists of distinct positive integers, for each $m\in \ensuremath{\mathbb{N}}$ and $A\subseteq A_m$ we have \begin{align} \sum_{n\in A}b_{m,n}
&\le \sum_{n=1}^{\left|A\right|}n^{-1/p'}
\le p \left|A\right|^{1/p}, \;\text{and}\label{firstboundforfundamentalfunction}\\ \sum_{n\in A}b_{m,n}^{p'}
&\le H_{|A|}, \label{lastestimate} \end{align} where, as we said, $H_m$ denotes the $m$th harmonic number. Once the family $(b_{m,n})_{m\in\ensuremath{\mathbb{N}},n\in A_m}$ has been constructed, we define $\Vert\cdot\Vert_{\maltese}$ on $c_{00}$ by \[
\left\Vert(a_n)_{n=1}^\infty\right\Vert_{\maltese}=\frac{1}{p}\sup_{\substack{m\in\ensuremath{\mathbb{N}}\\ l\in A_m}}\left|\sum_{\substack{n\in A_m\\ n\le l}}a_{n}b_{m,n} \right|.\label{Snorm0}
\Vert \ensuremath{\mathbbm{1}}_{\varepsilon,A}\Vert_{\maltese}\le \frac{1}{p}\sup_{m\in \ensuremath{\mathbb{N}}}\sum_{n\in A\cap A_m} b_{m,n}\le \left|A\right|^{1/p},
\quad |A|<\infty, \; \varepsilon\in\ensuremath{\mathcal{E}}_A. \] By definition, there is a norm-one linear map from $\ensuremath{\mathbb{X}}$ into $\ell_p$ which maps $\ensuremath{\mathcal{X}}$ to the unit vector system of $\ell_p$. By duality, there is a norm-one map from $\ell_{p'}$ into $\ensuremath{\mathbb{X}}^{\ast}$ which maps the unit vector system of $\ell_{p'}$ to $\ensuremath{\mathcal{X}}^{\ast}$. In particular, \[
\Vert \ensuremath{\mathbbm{1}}_{\varepsilon,A}^{\ast}\Vert \le |A|^{1/p'}, \quad |A|<\infty, \; \varepsilon\in\ensuremath{\mathcal{E}}_A. \] We infer that \ref{1bidem} holds.
Define $a_{m,n}=\varepsilon_{m,n} d_{m,n}^{-1/p}$, so that $a_{m,n} b_{m,n}={\varepsilon_{m,n}}/{d_{m,n}}$ for $m\in\ensuremath{\mathbb{N}}$ and $n\in A_m$. For each $m\in\ensuremath{\mathbb{N}}$ set \[ f_m=\sum_{n\in A_m}a_{m,n}\, \ensuremath{\bm{x}}_n. \] Let $(m,k)\in\ensuremath{\mathcal{D}}$ and use the convention $i_{m,m+1}=1+\max(A_m)$. If $i_{m,k}\le l <i_{m,k+1}$ by construction we have \[ B_{m,k}(l):=\sum_{n=i_{m,k}}^{l} a_{m,n} b_{m,n}=\frac{1}{k}-\sum_{j=r_{m,k}}^{l-1+r_{m,k}-i_{m,k}} \frac{1}{j}. \] Thus, the maximum and minimum values of $B_{m,k}(l)$ on the interval $i_{m,k}\le l <i_{m,k+1}$ are $1/k$ and $1/k-T_{m,k}$, respectively. Since by the right hand-side inequality in \eqref{conditionclosesums}, $1/j-T_{m,j}>0$ for all $1\le j \le m$ we infer that \[
\|f_m\|_{\maltese} =\frac{1}{p} \max_{l\in A_m} \sum_{\substack{n\in A_m \\ n\le l}} a_{m,n} b_{m,n} =\frac{1}{p}\max_{1\le k \le m}\left( \frac{1}{k}+ \sum_{j=1}^{k-1} \left( \frac{1}{j}-T_{m,j}\right)\right). \] Using the left hand-side inequality in \eqref{conditionclosesums} we obtain \[
\|f_m\|_{\maltese}\le \frac{1}{p} \max_{1\le k \le m}\left( \frac{1}{k}+\frac{k-1}{m}\right)=\frac{1}{p}. \]
We also have \[ \Vert f_m\Vert_{p}^p=\sum_{k=1}^m\left( \frac{1}{k}+T_{m,k}\right) \le 2 H_m. \] Hence, $\Vert f_m\Vert\le 2^{1/p} H_m^{1/p}$ for all $m\in\ensuremath{\mathbb{N}}$.
By \eqref{eq:BlGreedy}, $B_m$ is a greedy set of $f_m$. Since every coefficient of $f_m$ is positive on $B_m$, \[ \Vert S_{B_m}(f_m) \Vert\ge \Vert S_{B_m}(f_m) \Vert_{\maltese}=\frac{1}{p}\sum_{n\in B_m} \frac{1}{d_{m,n}}=\frac{1}{p}H_m. \] Summing up, \[ \frac{\Vert S_{B_m}(f_m)\Vert} {\Vert f_m\Vert}\ge \frac{1}{p \, 2^{1/p}} H_m^{1/p'}, \quad m\in\ensuremath{\mathbb{N}}. \]
Since $|B_m|=m$, this shows that $\ensuremath{\bm{g}}_m\ge p^{-1} 2^{-1/p} H_m^{1/p'}$ for all $m\in\ensuremath{\mathbb{N}}$.
By Theorem~\ref{thm:dualQuantitative}, it only remains to obtain the upper estimate for the unconditionality constants of $\ensuremath{\mathcal{X}}$. By \eqref{lastestimate} and H\"older's inequality, for all $A\subseteq\ensuremath{\mathbb{N}}$ with $|A|\le m$ we have \[
\Vert S_A( f)\Vert_{\maltese}\le \frac{1}{p}\Vert f\Vert_p \sup_{j\in\ensuremath{\mathbb{N}}}\left( \sum_{n\in A\cap A_j} |b_{j,n}|^{p'}\right)^{1/p'} \le \frac{1}{p} H_m^{1/p'} \Vert f\Vert_p. \] Hence, $\ensuremath{\bm{k}}_m\le\max\{1, H_m^{1/p'}/p \}$ for all $m\in\ensuremath{\mathbb{N}}$. \end{proof}
\begin{remark} Given a basis $\ensuremath{\mathcal{X}}$ and an infinite subset $\ensuremath{\mathbf{n}}$ of $\ensuremath{\mathbb{N}}$, we say that $\ensuremath{\mathcal{X}}$ is $\ensuremath{\mathbf{n}}$-quasi-greedy if \[ \sup\left\lbrace \dfrac{\Vert S_A(f)\Vert}{\Vert f\Vert} \colon f\in\ensuremath{\mathbb{X}},\, A
\;\text{greedy set of}\; f,\, |A|\in\ensuremath{\mathbf{n}}\right\rbrace<\infty \] (see \cite{Oikhberg2018}). Note that the basis constructed in Proposition~\ref{theorembidemocraticnotqG} is not $\ensuremath{\mathbf{n}}$-quasi-greedy for any increasing sequence $\ensuremath{\mathbf{n}}$. \end{remark}
\begin{remark}\label{remarklp} The basis $\ensuremath{\mathcal{X}}$ in Proposition~\ref{theorembidemocraticnotqG} has a subbasis isometrically equivalent to the unit vector basis of $\ell_p$. Indeed, it is easy to check that $\left(\ensuremath{\bm{x}}_{i_{m,1}}\right)_{m=1}^\infty$ has this property. The basis $\ensuremath{\mathcal{X}}$ also has, as we next show, a block basis isometrically equivalent to the unit vector basis of $c_0$. Let $(A_m)_{m=1}^\infty$, $(B_m)_{m=1}^\infty$ and $(f_m)_{m=1}^\infty$ be as in that proposition, and define \[
g_m:=S_{B_m}(f_m), \quad h_m=\frac{g_m}{\|g_m\|_{\maltese}}, \quad m\in\ensuremath{\mathbb{N}}. \] Pick positive scalars $(\varepsilon_k)_{k=1}^\infty$ with $\sum_{k=1}^\infty \varepsilon_k^p=1$. Since \[ \lim_{m} \frac{ \Vert g_m\Vert_p}{\Vert g_m\Vert_{\maltese}}=0, \] there is a subsequence $\left(g_{m_k}\right)_{k=1}^\infty$ with $\left\Vert g_{m_k}\right\Vert_p\le \varepsilon_k \Vert g_{m_k}\Vert_{\maltese}$ for all $k\in\ensuremath{\mathbb{N}}$. Let $f=(a_k)_{k=1}^\infty\in c_{00}$. Since $\supp(h_m)\subseteq A_{m}$ for all $m$, we have \[ \left\Vert \sum_{k=1}^{\infty}a_k h_{m_k} \right\Vert_{\maltese}
=\max_{k\in\ensuremath{\mathbb{N}}} |a_k| \Vert h_{m_k}\Vert_{\maltese}
=\max_{k\in\ensuremath{\mathbb{N}}}|a_k| \] and \[ \left\Vert \sum_{k=1}^{\infty}a_k h_{m_k} \right\Vert_p
=\left( \sum_{k=1}^\infty |a_k|^p \Vert h_{m_k}\Vert_p^p\right)^{1/p}
\le \left( \sum_{k=1}^\infty |a_k|^p \varepsilon_k^p \right)^{1/p}
\le \max_{k\in\ensuremath{\mathbb{N}}} |a_k|. \]
Consequently, $\Vert \sum_{k=1}^\infty a_k h_{m_k}\Vert= \max_{k\in\ensuremath{\mathbb{N}}} |a_k|$. \end{remark}
\section{Building bidemocratic conditional quasi-greedy bases}\label{sect:NM}\noindent Probably the most versatile method for building conditional quasi-greedy bases is the previously mentioned DKK-method due to Dilworth, Kalton and Kutzarova, which works only in the locally convex setting (i.e., for Banach spaces). It produces conditional almost greedy bases whose fundamental function either is equivalent to $(m)_{m=1}^\infty$ or has both the LRP and the URP. Thus, the DKK-method serves as a tool for constructing Banach spaces with bidemocratic conditional quasi-greedy bases whose fundamental function has both the LRP and the URP. In this section we develop a new method for building conditional bases that allows us to construct bidemocratic conditional quasi-greedy bases with an arbitrary fundamental function.
We write $\ensuremath{\mathbb{X}}\oplus\ensuremath{\mathbb{Y}}$ for the Cartesian product of the quasi-Banach spaces $\ensuremath{\mathbb{X}}$ and $\ensuremath{\mathbb{Y}}$ endowed with the quasi-norm \[ \Vert (f,g)\Vert=\max\{ \Vert f\Vert, \Vert g\Vert\}, \quad f\in\ensuremath{\mathbb{X}},\, g\in\ensuremath{\mathbb{Y}}. \] Given sequences $\ensuremath{\mathcal{X}}=(\ensuremath{\bm{x}}_n)_{n=1}^\infty$ and $\ensuremath{\mathcal{Y}}=(\ensuremath{\bm{y}}_n)_{n=1}^\infty$ in quasi-Banach spaces $\ensuremath{\mathbb{X}}$ and $\ensuremath{\mathbb{Y}}$ respectively, its direct sum is the sequence $\ensuremath{\mathcal{X}}\oplus\ensuremath{\mathcal{Y}}=(\ensuremath{\bm{u}}_n)_{n=1}^\infty$ in $\ensuremath{\mathbb{X}}\oplus\ensuremath{\mathbb{Y}}$ given by \[ \ensuremath{\bm{u}}_{2n-1}=(\ensuremath{\bm{x}}_n,0), \quad \ensuremath{\bm{u}}_{2n}=(0,\ensuremath{\bm{y}}_n), \quad n\in\ensuremath{\mathbb{N}}. \] If $\ensuremath{\mathcal{X}}$ and $\ensuremath{\mathcal{Y}}$ are bidemocratic bases, and $\ensuremath{\bm{\varphi_u}}[\ensuremath{\mathcal{X}},\ensuremath{\mathbb{X}}]\approx \ensuremath{\bm{\varphi_u}}[\ensuremath{\mathcal{Y}},\ensuremath{\mathbb{Y}}]$, then the basis $\ensuremath{\mathcal{X}}\oplus \ensuremath{\mathcal{Y}}$ of $\ensuremath{\mathbb{X}}\oplus \ensuremath{\mathbb{Y}}$ is also bidemocratic with \begin{align*} \ensuremath{\bm{\varphi_u}}[\ensuremath{\mathcal{X}}\oplus \ensuremath{\mathcal{Y}},\ensuremath{\mathbb{X}}\oplus \ensuremath{\mathbb{Y}}]&\approx \ensuremath{\bm{\varphi_u}}[\ensuremath{\mathcal{X}},\ensuremath{\mathbb{X}}]\approx \ensuremath{\bm{\varphi_u}}[\ensuremath{\mathcal{Y}},\ensuremath{\mathbb{Y}}],\\ \ensuremath{\bm{g}}_m[\ensuremath{\mathcal{X}}\oplus \ensuremath{\mathcal{Y}},\ensuremath{\mathbb{X}}\oplus \ensuremath{\mathbb{Y}}]=&\max\{\ensuremath{\bm{g}}_m[\ensuremath{\mathcal{X}},\ensuremath{\mathbb{X}}], \ensuremath{\bm{g}}_m[\ensuremath{\mathcal{Y}},\ensuremath{\mathbb{Y}}]\},\\ \ensuremath{\bm{k}}_m[\ensuremath{\mathcal{X}}\oplus 
\ensuremath{\mathcal{Y}},\ensuremath{\mathbb{X}}\oplus \ensuremath{\mathbb{Y}}]=&\max\{\ensuremath{\bm{k}}_m[\ensuremath{\mathcal{X}},\ensuremath{\mathbb{X}}], \ensuremath{\bm{k}}_m[\ensuremath{\mathcal{Y}},\ensuremath{\mathbb{Y}}]\}. \end{align*}
Loosely speaking, we could say that $\ensuremath{\mathcal{X}}\oplus \ensuremath{\mathcal{Y}}$ inherits naturally the properties of $\ensuremath{\mathcal{X}}$ and $\ensuremath{\mathcal{Y}}$. In contrast, `rotating' $\ensuremath{\mathcal{X}}\oplus \ensuremath{\mathcal{Y}}$ gives rise to more interesting situations. In this section we study the `rotated' sequence $\ensuremath{\mathcal{X}}\diamond\ensuremath{\mathcal{Y}}=(\ensuremath{\bm{z}}_n)_{n=1}^\infty$ in $\ensuremath{\mathbb{X}}\oplus\ensuremath{\mathbb{Y}}$ given by \[ \ensuremath{\bm{z}}_{2n-1}=\frac{1}{\sqrt{2}}(\ensuremath{\bm{x}}_n,\ensuremath{\bm{y}}_n), \quad \ensuremath{\bm{z}}_{2n}=\frac{1}{\sqrt{2}}(\ensuremath{\bm{x}}_n,-\ensuremath{\bm{y}}_n), \quad n\in\ensuremath{\mathbb{N}}. \] Note that \[ \sum_{n=1}^\infty a_n\, \ensuremath{\bm{z}}_n =\frac{1}{\sqrt{2}}\left(\sum_{n=1}^\infty (a_{2n-1}+a_{2n})\ensuremath{\bm{x}}_n, \sum_{n=1}^\infty (a_{2n-1}-a_{2n})\ensuremath{\bm{y}}_n\right), \] whenever the series converges.
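The term `rotation' is literal in the finite-dimensional model: if $\ensuremath{\mathcal{X}}$ and $\ensuremath{\mathcal{Y}}$ are the unit vector systems of $\ensuremath{\mathbb{R}}^N$, the vectors $\ensuremath{\bm{z}}_n$ form an orthonormal basis of $\ensuremath{\mathbb{R}}^N\oplus\ensuremath{\mathbb{R}}^N$. A minimal numerical sketch of this and of the displayed recombination identity (the helper names are ours; indices are $0$-based, with index $2n$ playing the role of $\ensuremath{\bm{z}}_{2n-1}$):

```python
import math

def rotate_coeffs(a):
    """Recombine coefficients (a_1, ..., a_{2N}) of X ◊ Y into the pair of
    coefficient sequences w.r.t. X and Y, per the displayed identity:
    x-part (a_{2n-1}+a_{2n})/sqrt(2), y-part (a_{2n-1}-a_{2n})/sqrt(2)."""
    N = len(a) // 2
    x = [(a[2 * n] + a[2 * n + 1]) / math.sqrt(2) for n in range(N)]
    y = [(a[2 * n] - a[2 * n + 1]) / math.sqrt(2) for n in range(N)]
    return x, y

def z_vectors(N):
    """Model z_{2n-1} = (e_n, e_n)/sqrt(2), z_{2n} = (e_n, -e_n)/sqrt(2)
    in R^N ⊕ R^N, identified with R^{2N} (first N coordinates = x-part)."""
    zs = []
    for n in range(N):
        for sign in (1.0, -1.0):
            v = [0.0] * (2 * N)
            v[n] = 1.0 / math.sqrt(2)        # x-component
            v[N + n] = sign / math.sqrt(2)   # y-component
            zs.append(v)
    return zs
```

Expanding $\sum_n a_n \ensuremath{\bm{z}}_n$ coordinate by coordinate and comparing it with the output of \texttt{rotate\_coeffs} confirms the identity, and the pairwise inner products of the $\ensuremath{\bm{z}}_n$ confirm orthonormality.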
To deal with bases built using this method, we introduce some notation. Given $A\subseteq\ensuremath{\mathbb{N}}$ we set \[ A^{\ensuremath{\bm{o}}}=\{2n-1\colon n\in A\}, \quad A^{\ensuremath{\bm{e}}}=\{2n\colon n\in A\}. \] Consider also the onto map $\eta\colon\ensuremath{\mathbb{N}}\to\ensuremath{\mathbb{N}}$ given by $\eta(n)=\lceil n/2 \rceil$. Note that $\eta^{-1}(A)=A^{\ensuremath{\bm{o}}}\cup A^{\ensuremath{\bm{e}}}$ and $\eta(A^{\ensuremath{\bm{o}}})=\eta( A^{\ensuremath{\bm{e}}})=A$ for all $A\subseteq\ensuremath{\mathbb{N}}$.
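These bookkeeping identities are immediate, and easy to sanity-check numerically; a throwaway sketch (helper names are ours):

```python
def odd(A):
    """A^o = {2n - 1 : n in A}."""
    return {2 * n - 1 for n in A}

def even(A):
    """A^e = {2n : n in A}."""
    return {2 * n for n in A}

def eta(n):
    """The onto map eta(n) = ceil(n / 2)."""
    return (n + 1) // 2
```

For any finite $A\subseteq\ensuremath{\mathbb{N}}$ one checks that $\eta^{-1}(A)=A^{\ensuremath{\bm{o}}}\cup A^{\ensuremath{\bm{e}}}$ and $\eta(A^{\ensuremath{\bm{o}}})=\eta(A^{\ensuremath{\bm{e}}})=A$.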
Our first auxiliary result is elementary and well known. In its statement we implicitly use the natural identification of $(\ensuremath{\mathbb{X}}\oplus\ensuremath{\mathbb{Y}})^{\ast}$ with $\ensuremath{\mathbb{X}}^{\ast}\oplus\ensuremath{\mathbb{Y}}^{\ast}$.
\begin{lemma}[cf.\ \cite{AAW2019}*{Theorem 2.6}]\label{lem:dualdiamond} Suppose that $\ensuremath{\mathcal{X}}$ and $\ensuremath{\mathcal{Y}}$ are bases of $\ensuremath{\mathbb{X}}$ and $\ensuremath{\mathbb{Y}}$ respectively. Then $\ensuremath{\mathcal{X}}\diamond\ensuremath{\mathcal{Y}}$ is a basis of $\ensuremath{\mathbb{X}}\oplus\ensuremath{\mathbb{Y}}$ whose dual basis is $\ensuremath{\mathcal{X}}^{\ast}\diamond\ensuremath{\mathcal{Y}}^{\ast}$. Moreover, if $\ensuremath{\mathcal{X}}$ and $\ensuremath{\mathcal{Y}}$ are Schauder bases, so is $\ensuremath{\mathcal{X}}\diamond\ensuremath{\mathcal{Y}}$. \end{lemma}
\begin{lemma}\label{constantupperdemfunct} Let $\ensuremath{\mathcal{X}}$ be a basis of a quasi-Banach space. There is a constant $C$ such that \begin{equation*} \left\Vert \sum_{n=1}^\infty a_n\, \ensuremath{\bm{x}}_n \right\Vert \le C \ensuremath{\bm{\varphi_u}}(m) \end{equation*}
whenever $|a_n|\le 1$ for all $n\in\ensuremath{\mathbb{N}}$ and $a_n\not=0$ for at most $m$ indices. Moreover, if $\ensuremath{\mathbb{X}}$ is a $p$-Banach space, $0<p\le 1$, we can choose $C=(2^p-1)^{-1/p}$. \end{lemma}
\begin{proof} It follows readily from \cite{AABW2021}*{Corollary 2.3}. \end{proof}
\begin{lemma}\label{lem:SDDiamond} Suppose that $\ensuremath{\mathcal{X}}$ and $\ensuremath{\mathcal{Y}}$ are bases of $\ensuremath{\mathbb{X}}$ and $\ensuremath{\mathbb{Y}}$ respectively. Then \[ \ensuremath{\bm{\varphi_u}}[\ensuremath{\mathcal{X}}\diamond\ensuremath{\mathcal{Y}},\ensuremath{\mathbb{X}}\oplus\ensuremath{\mathbb{Y}}] \le C\max\{ \ensuremath{\bm{\varphi_u}}[\ensuremath{\mathcal{X}},\ensuremath{\mathbb{X}}], \ensuremath{\bm{\varphi_u}}[\ensuremath{\mathcal{Y}},\ensuremath{\mathbb{Y}}]\} \] for some constant $C$ that only depends on the spaces $\ensuremath{\mathbb{X}}$ and $\ensuremath{\mathbb{Y}}$ (and can be taken to be $\sqrt{2}$ if $\ensuremath{\mathbb{X}}$ and $\ensuremath{\mathbb{Y}}$ are Banach spaces). \end{lemma}
\begin{proof}
Let $m\in\ensuremath{\mathbb{N}}$, $A\subseteq\ensuremath{\mathbb{N}}$ with $|A|\le m$, and $\varepsilon=(\varepsilon_n)_{n\in A}\in\ensuremath{\mathcal{E}}_A$. We extend $\varepsilon$ by setting $\varepsilon_n=0$ if $n\in\ensuremath{\mathbb{N}}\setminus A$. Put \[ B=\{ n\in\ensuremath{\mathbb{N}} \colon 2n-1\in A\} \cup \{ n\in\ensuremath{\mathbb{N}} \colon 2n\in A\}, \]
that is, $B=\eta(A)$, so that $|B|\le |A|\le m$. We have $|\varepsilon_{2n-1} \pm \varepsilon_{2n}|\le 2 \chi_B(n)$ for all $n\in\ensuremath{\mathbb{N}}$. Thus, if $C$ is the constant in Lemma~\ref{constantupperdemfunct}, \begin{align*} \Vert \ensuremath{\mathbbm{1}}_{\varepsilon,A}&[\ensuremath{\mathcal{X}}\diamond\ensuremath{\mathcal{Y}},\ensuremath{\mathbb{X}}\oplus\ensuremath{\mathbb{Y}}]\Vert\\ &= \frac{1}{\sqrt{2}} \max \left\{ \left\Vert \sum_{n=1}^\infty (\varepsilon_{2n-1}+\varepsilon_{2n})\ensuremath{\bm{x}}_n \right\Vert, \left\Vert \sum_{n=1}^\infty (\varepsilon_{2n-1}-\varepsilon_{2n})\ensuremath{\bm{y}}_n \right\Vert \right\}\\ &\le \frac{2C}{\sqrt{2}} \max \left\{ \ensuremath{\bm{\varphi_u}}[\ensuremath{\mathcal{X}},\ensuremath{\mathbb{X}}](m), \ensuremath{\bm{\varphi_u}}[\ensuremath{\mathcal{Y}},\ensuremath{\mathbb{Y}}](m) \right\}.\qedhere \end{align*} \end{proof}
\begin{proposition}\label{prop:bidem} Suppose that $\ensuremath{\mathcal{X}}$ and $\ensuremath{\mathcal{Y}}$ are bidemocratic bases of quasi-Banach spaces $\ensuremath{\mathbb{X}}$ and $\ensuremath{\mathbb{Y}}$ respectively. Suppose also that \[ s_m:= \ensuremath{\bm{\varphi_u}}[\ensuremath{\mathcal{X}},\ensuremath{\mathbb{X}}](m) \approx \ensuremath{\bm{\varphi_u}}[\ensuremath{\mathcal{Y}},\ensuremath{\mathbb{Y}}](m), \quad m\in\ensuremath{\mathbb{N}}. \] Then $\ensuremath{\mathcal{X}}\diamond\ensuremath{\mathcal{Y}}$ is a bidemocratic basis of $\ensuremath{\mathbb{X}}\oplus\ensuremath{\mathbb{Y}}$. Moreover, \[ \ensuremath{\bm{\varphi_u}}[\ensuremath{\mathcal{X}}\diamond\ensuremath{\mathcal{Y}},\ensuremath{\mathbb{X}}\oplus\ensuremath{\mathbb{Y}}](m) \approx s_m, \quad m\in\ensuremath{\mathbb{N}}. \] \end{proposition}
\begin{proof}Since, by assumption, \[ \max\{\ensuremath{\bm{\varphi_u}}[\ensuremath{\mathcal{X}}^{\ast},\ensuremath{\mathbb{X}}](m) , \ensuremath{\bm{\varphi_u}}[\ensuremath{\mathcal{Y}}^{\ast},\ensuremath{\mathbb{Y}}](m)\} \lesssim \frac{m}{s_m}, \quad m\in\ensuremath{\mathbb{N}}, \] applying Lemma~\ref{lem:SDDiamond} yields \[ \ensuremath{\bm{\varphi_u}}[\ensuremath{\mathcal{X}}\diamond\ensuremath{\mathcal{Y}},\ensuremath{\mathbb{X}}\oplus\ensuremath{\mathbb{Y}}](m) \lesssim s_m, \quad m\in\ensuremath{\mathbb{N}},\] and \[ \ensuremath{\bm{\varphi_u}}[\ensuremath{\mathcal{X}}^{\ast}\diamond\ensuremath{\mathcal{Y}}^{\ast},\ensuremath{\mathbb{X}}^{\ast}\oplus\ensuremath{\mathbb{Y}}^{\ast}](m) \lesssim \frac{m}{s_m}, \quad m\in\ensuremath{\mathbb{N}}. \] Using Lemma~\ref{lem:dualdiamond}, these inequalities readily give the desired result. \end{proof}
\begin{proposition}\label{prop:diamondConditional} Let $\ensuremath{\mathcal{X}}=(\ensuremath{\bm{x}}_n)_{n=1}^\infty$ and $\ensuremath{\mathcal{Y}}=(\ensuremath{\bm{y}}_n)_{n=1}^\infty$ be non-equivalent bases of quasi-Banach spaces $\ensuremath{\mathbb{X}}$ and $\ensuremath{\mathbb{Y}}$ respectively. Then, $\ensuremath{\mathcal{X}}\diamond\ensuremath{\mathcal{Y}}$ is a conditional basis of $\ensuremath{\mathbb{X}}\oplus\ensuremath{\mathbb{Y}}$. Quantitatively, if \[ \textstyle
\ensuremath{\mathtt{c}}_m=\{(a_n)_{n=1}^\infty\in\ensuremath{\mathbb{F}}^\ensuremath{\mathbb{N}} \colon |\{n\in\ensuremath{\mathbb{N}}\colon a_n\not=0\}|\le m\} \] and \[ E_m[\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}}]=\sup_{(a_n)_{n=1}^\infty\in \ensuremath{\mathtt{c}}_m } \frac{\Vert \sum_{n=1}^\infty a_n\, \ensuremath{\bm{x}}_n\Vert}{\Vert \sum_{n=1}^\infty a_n\, \ensuremath{\bm{y}}_n\Vert}, \quad m\in\ensuremath{\mathbb{N}}, \] then \[ \ensuremath{\bm{k}}_m[\ensuremath{\mathcal{X}}\diamond\ensuremath{\mathcal{Y}},\ensuremath{\mathbb{X}}\oplus\ensuremath{\mathbb{Y}}]\ge \frac{1}{2}\max\{E_m[\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}}],E_m[\ensuremath{\mathcal{Y}},\ensuremath{\mathcal{X}}]\}, \quad m\in\ensuremath{\mathbb{N}}. \] \end{proposition}
\begin{proof} Given an eventually null sequence $(a_n)_{n\in A}$, define $(b_n)_{n\in A^{\ensuremath{\bm{o}}}}$ and $(c_n)_{n\in A^{\ensuremath{\bm{e}}}}$ by $b_{2n-1}=c_{2n}=a_n$ for all $n\in A$. If \[ f_{\ensuremath{\bm{o}}}=\sum_{n\in A^{\ensuremath{\bm{o}}}} b_n\, \ensuremath{\bm{z}}_n \;\text{and}\; f_{\ensuremath{\bm{e}}}=\sum_{n\in A^{\ensuremath{\bm{e}}}} c_n\, \ensuremath{\bm{z}}_n \] we have \begin{align*} f_{\ensuremath{\bm{o}}} &=\frac{1}{\sqrt{2}}\left(\sum_{n\in A} a_n\, \ensuremath{\bm{x}}_n,\sum_{n\in A} a_n\, \ensuremath{\bm{y}}_n\right),\\ f_{\ensuremath{\bm{o}}}+ f_{\ensuremath{\bm{e}}} &=\sqrt{2} \left(\sum_{n\in A} a_n\, \ensuremath{\bm{x}}_n,0\right), \;\text{and}\\ f_{\ensuremath{\bm{o}}}- f_{\ensuremath{\bm{e}}}&=\sqrt{2} \left(0,\sum_{n\in A} a_n\, \ensuremath{\bm{y}}_n\right). \end{align*} Therefore, \[
\frac{ \Vert f_{\ensuremath{\bm{o}}}\Vert}{ \Vert f_{\ensuremath{\bm{o}}}+f_{\ensuremath{\bm{e}}}\Vert }\ge\frac{1}{2}\frac{\|\sum_{n=1}^{\infty}a_n\ensuremath{\bm{y}}_n\|}{\| \sum_{n=1}^\infty a_n\, \ensuremath{\bm{x}}_n\|} \;\text{and}\;
\frac{ \Vert f_{\ensuremath{\bm{o}}}\Vert}{ \Vert f_{\ensuremath{\bm{o}}}-f_{\ensuremath{\bm{e}}}\Vert }\ge\frac{1}{2}\frac{\|\sum_{n=1}^{\infty}a_n\ensuremath{\bm{x}}_n\|}{\| \sum_{n=1}^\infty a_n\, \ensuremath{\bm{y}}_n\|}.\qedhere \] \end{proof}
Proposition~\ref{prop:diamondConditional} gives that the conditionality constants of $\ensuremath{\mathcal{X}}\diamond\ensuremath{\mathcal{Y}}$ are bounded below by \[ \frac{1}{2}\max\left\{ \frac{\ensuremath{\bm{\varphi_u}}[\ensuremath{\mathcal{X}},\ensuremath{\mathbb{X}}]}{\ensuremath{\bm{\varphi_u}}[\ensuremath{\mathcal{Y}},\ensuremath{\mathbb{Y}}]}, \frac{\ensuremath{\bm{\varphi_u}}[\ensuremath{\mathcal{Y}},\ensuremath{\mathbb{Y}}]}{\ensuremath{\bm{\varphi_u}}[\ensuremath{\mathcal{X}},\ensuremath{\mathbb{X}}]}, \frac{\ensuremath{\bm{\varphi_l}}[\ensuremath{\mathcal{X}},\ensuremath{\mathbb{X}}]}{\ensuremath{\bm{\varphi_l}}[\ensuremath{\mathcal{Y}},\ensuremath{\mathbb{Y}}]}, \frac{\ensuremath{\bm{\varphi_l}}[\ensuremath{\mathcal{Y}},\ensuremath{\mathbb{Y}}]}{\ensuremath{\bm{\varphi_l}}[\ensuremath{\mathcal{X}},\ensuremath{\mathbb{X}}]} \right\}. \] Thus, applying our method to bases with non-equivalent fundamental functions yields `highly' conditional bases. In contrast, since bidemocratic bases are truncation quasi-greedy (see \cite{AABW2021}*{Proposition 5.7}), a combination of Proposition~\ref{prop:bidem} with Theorem~\ref{thm:BidCond} shows that we can apply the `rotation method' to bidemocratic bases with equivalent fundamental functions to obtain bases whose conditionality constants grow `slowly'. However, the basis $\ensuremath{\mathcal{X}}\diamond\ensuremath{\mathcal{Y}}$ is always conditional unless $\ensuremath{\mathcal{X}}$ and $\ensuremath{\mathcal{Y}}$ are equivalent. In this context, since quasi-greedy bases are truncation quasi-greedy (see \cite{AABW2021}*{Theorem 4.13}), we ask ourselves whether our construction preserves quasi-greediness. Our next result provides an affirmative answer to this question.
\begin{theorem}\label{thm:QGC} Let $\ensuremath{\mathcal{X}}$ and $\ensuremath{\mathcal{Y}}$ be bidemocratic bases of quasi-Banach spaces $\ensuremath{\mathbb{X}}$ and $\ensuremath{\mathbb{Y}}$ respectively. Suppose that \[ \ensuremath{\bm{\varphi_u}}[\ensuremath{\mathcal{X}},\ensuremath{\mathbb{X}}](m) \approx \ensuremath{\bm{\varphi_u}}[\ensuremath{\mathcal{Y}},\ensuremath{\mathbb{Y}}](m), \quad m\in\ensuremath{\mathbb{N}}. \] Then, \[ \ensuremath{\bm{g}}_m[\ensuremath{\mathcal{X}}\diamond\ensuremath{\mathcal{Y}},\ensuremath{\mathbb{X}}\oplus\ensuremath{\mathbb{Y}}]\approx \max\{\ensuremath{\bm{g}}_m[\ensuremath{\mathcal{X}},\ensuremath{\mathbb{X}}], \ensuremath{\bm{g}}_m[\ensuremath{\mathcal{Y}},\ensuremath{\mathbb{Y}}]\}, \quad m\in\ensuremath{\mathbb{N}}. \] In particular, $\ensuremath{\mathcal{X}}\diamond\ensuremath{\mathcal{Y}}$ is quasi-greedy if and only if $\ensuremath{\mathcal{X}}$ and $\ensuremath{\mathcal{Y}}$ are quasi-greedy. \end{theorem}
Before the proof of Theorem~\ref{thm:QGC} we give two auxiliary lemmas.
\begin{lemma}\label{lem:ShareBidem} Let $\ensuremath{\mathcal{X}}=(\ensuremath{\bm{x}}_n)_{n=1}^\infty$ and $\ensuremath{\mathcal{Y}}=(\ensuremath{\bm{y}}_n)_{n=1}^\infty$ be bases of quasi-Banach spaces $\ensuremath{\mathbb{X}}$ and $\ensuremath{\mathbb{Y}}$ respectively. Suppose that $\ensuremath{\mathcal{Y}}$ is truncation quasi-greedy and that \[ \ensuremath{\bm{\varphi_u}}[\ensuremath{\mathcal{X}},\ensuremath{\mathbb{X}}](m) \lesssim \ensuremath{\bm{\varphi_l}}[\ensuremath{\mathcal{Y}},\ensuremath{\mathbb{Y}}](m), \quad m\in\ensuremath{\mathbb{N}}. \] Then, there is a constant $C_0$ such that \[ \left\Vert \sum_{n\in E} c_n \, \ensuremath{\bm{x}}_n \right\Vert \le C_0 \left\Vert \sum_{n=1}^\infty d_n\, \ensuremath{\bm{y}}_n \right\Vert \]
whenever $\max_{n\in E} |c_n|\le \min_{n\in E} |d_n|$. \end{lemma}
\begin{proof} It follows by combining Lemma~\ref{constantupperdemfunct} with Lemma~\ref{lem:truncation quasi-greedyQU}. \end{proof}
\begin{lemma}\label{cor:doubling} Let $\ensuremath{\mathcal{X}}$ be a basis of a quasi-Banach space $\ensuremath{\mathbb{X}}$. If $\ensuremath{\mathcal{X}}$ is truncation quasi-greedy there is a constant $C$ such that \[ \ensuremath{\overline{\bm{g}}}_m[\ensuremath{\mathcal{X}},\ensuremath{\mathbb{X}}]\le C\, \ensuremath{\overline{\bm{g}}}_k[\ensuremath{\mathcal{X}},\ensuremath{\mathbb{X}}], \quad k\le m \le 2k. \] In particular, the sequences $(\ensuremath{\overline{\bm{g}}}_m[\ensuremath{\mathcal{X}},\ensuremath{\mathbb{X}}])_{m=1}^\infty$ and $(\ensuremath{\bm{g}}_m[\ensuremath{\mathcal{X}},\ensuremath{\mathbb{X}}])_{m=1}^\infty$ are doubling. \end{lemma}
\begin{proof}
Let $A$ be a greedy set of $f\in\ensuremath{\mathbb{X}}$ with $|A|=m$. Pick a greedy set $B$ of $f$ with $B\subseteq A$ and $|B|=k$. Since $|A\setminus B|\le |B|$, applying Lemma~\ref{lem:ShareBidem} with $\ensuremath{\mathcal{X}}$ and a permutation of $\ensuremath{\mathcal{X}}$ yields $\Vert S_{A\setminus B}(f) \Vert \le C_0 \Vert f\Vert$, where $C_0$ only depends on $\ensuremath{\mathcal{X}}$ and $\ensuremath{\mathbb{X}}$. Hence, if $\kappa$ denotes the modulus of concavity of $\ensuremath{\mathbb{X}}$, $\Vert S_A(f)\Vert \le \kappa(C_0+\ensuremath{\overline{\bm{g}}}_k) \Vert f \Vert$. \end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:QGC}] Set $\ensuremath{\mathcal{X}}\diamond \ensuremath{\mathcal{Y}}=(\ensuremath{\bm{z}}_n)_{n=1}^\infty$. Choosing $f\in\ensuremath{\mathbb{X}}\oplus\ensuremath{\mathbb{Y}}$ with $\ensuremath{\bm{z}}_{2n-1}^{\ast}(f)=\pm \ensuremath{\bm{z}}_{2n}^{\ast}(f)$ yields \[ \ensuremath{\bm{h}}_m:= \max\{\ensuremath{\bm{g}}_m[\ensuremath{\mathcal{X}},\ensuremath{\mathbb{X}}], \ensuremath{\bm{g}}_m[\ensuremath{\mathcal{Y}},\ensuremath{\mathbb{Y}}]\}\le \ensuremath{\bm{g}}_{2m}[\ensuremath{\mathcal{X}}\diamond\ensuremath{\mathcal{Y}},\ensuremath{\mathbb{X}}\oplus\ensuremath{\mathbb{Y}}], \quad m\in\ensuremath{\mathbb{N}}. \] Using Lemma~\ref{cor:doubling}, we obtain the desired upper estimate for $\ensuremath{\bm{h}}_m$.
For $A\subseteq\ensuremath{\mathbb{N}}$, set $S_A=S_A[\ensuremath{\mathcal{X}}\diamond\ensuremath{\mathcal{Y}},\ensuremath{\mathbb{X}}\oplus\ensuremath{\mathbb{Y}}]$, $S_A^\ensuremath{\mathbb{X}}=S_A[\ensuremath{\mathcal{X}}, \ensuremath{\mathbb{X}}]$, and $S_A^\ensuremath{\mathbb{Y}}=S_A[\ensuremath{\mathcal{Y}},\ensuremath{\mathbb{Y}}]$. Given a greedy set $B$ of $f=(g,h)\in \ensuremath{\mathbb{X}}\oplus\ensuremath{\mathbb{Y}}$, let $A_1$, $A_2$ and $A_{12}$ be disjoint subsets of $\ensuremath{\mathbb{N}}$ such that \[ B=(A_{12}\cup A_1)^{\ensuremath{\bm{o}}} \cup (A_{12}\cup A_2)^{\ensuremath{\bm{e}}}. \]
We have $|B|=2|A_{12}|+|A_1|+|A_2|$. Set $A_0=\ensuremath{\mathbb{N}}\setminus (A_{12}\cup A_1\cup A_2)$. Let $(c_n)_{n=1}^\infty$ be the coefficients of $f$ relative to $\ensuremath{\mathcal{X}}\diamond\ensuremath{\mathcal{Y}}$, let $(a_n)_{n=1}^\infty$ be the coefficients of $g$ relative to $\ensuremath{\mathcal{X}}$, and let $(b_n)_{n=1}^\infty$ be the coefficients of $h$ with respect to $\ensuremath{\mathcal{Y}}$. If $c=\min\{|c_n| \colon n\in B\}$, \[
\max_{n\in A_0} \left\{\frac{1}{\sqrt{2}} |a_n+b_n|, \frac{1}{\sqrt{2}} |a_n-b_n|\right\} =
\max_{n\in A_0^{\ensuremath{\bm{o}}}\cup A_0^{\ensuremath{\bm{e}}}} |c_n| \le c. \]
Hence, $|a_n|$, $|b_n|\le \sqrt{2} c$ for all $n\in A_0$, i.e., $A_3\cup A_4\subseteq \ensuremath{\mathbb{N}}\setminus A_0$, where \[
A_3=\{n\in\ensuremath{\mathbb{N}} \colon |a_n|>\sqrt{2} c\}, \quad A_4=\{n\in\ensuremath{\mathbb{N}} \colon |b_n|>\sqrt{2} c\}. \] Note that $A_3$ is a greedy set of $g$, $A_4$ is a greedy set of $h$, and \[
\max\{ |A_3|,|A_4|\}\le |\ensuremath{\mathbb{N}}\setminus A_0|=|A_{12}\cup A_1\cup A_2|\le |A_{12}|+|A_1|+|A_2|\le |B|. \] Set $A_5=\ensuremath{\mathbb{N}}\setminus (A_3\cup A_0)$ and $A_6=\ensuremath{\mathbb{N}}\setminus (A_4\cup A_0)$. Taking into account that, for any $D\subseteq\ensuremath{\mathbb{N}}$, the coordinate projection on $\eta^{-1}(D)$ with respect to $\ensuremath{\mathcal{X}}\diamond\ensuremath{\mathcal{Y}}$ coincides with that with respect to the direct sum $\ensuremath{\mathcal{X}}\oplus\ensuremath{\mathcal{Y}}$ of bases $\ensuremath{\mathcal{X}}$ and $\ensuremath{\mathcal{Y}}$ we obtain \[ (S_{A_3}^\ensuremath{\mathbb{X}}(g),S_{A_4}^\ensuremath{\mathbb{Y}}(h)) -S_B(f)=S_{A_1^{\ensuremath{\bm{e}}}}(f) +S_{A_2^{\ensuremath{\bm{o}}}}(f)-(S_{A_5}^\ensuremath{\mathbb{X}}(g),S_{A_6}^\ensuremath{\mathbb{Y}}(h)). \] Therefore, it suffices to prove that \[ \max\{ \Vert S_{A_5}^\ensuremath{\mathbb{X}}(g)\Vert, \Vert S_{A_6}^\ensuremath{\mathbb{Y}}(h)\Vert, \Vert S_{A_1^{\ensuremath{\bm{e}}}}(f) \Vert , \Vert S_{A_2^{\ensuremath{\bm{o}}}}(f) \Vert\} \le C_1 \Vert f \Vert \] for some constant $C_1$. Thus, the result would follow by applying the next two claims to the pairs of bases $(\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}})$, $(\ensuremath{\mathcal{X}},\ensuremath{\mathcal{Y}}^-)$, $(\ensuremath{\mathcal{Y}},\ensuremath{\mathcal{X}})$, and $(\ensuremath{\mathcal{Y}}^-,\ensuremath{\mathcal{X}})$, where $\ensuremath{\mathcal{X}}=(\ensuremath{\bm{x}}_n)_{n=1}^\infty$, $\ensuremath{\mathcal{Y}}=(\ensuremath{\bm{y}}_n)_{n=1}^\infty$, and $\ensuremath{\mathcal{Y}}^-=(-\ensuremath{\bm{y}}_n)_{n=1}^\infty$. \begin{claim}\label{claim1} There is a constant $C$ such that \[ \left\Vert \sum_{n\in A} a_n \, \ensuremath{\bm{x}}_n\right\Vert \le C \left\Vert \sum_{n=1}^\infty a_n \, (\ensuremath{\bm{x}}_n,\ensuremath{\bm{y}}_n) + \sum_{n=1}^\infty b_n \, (\ensuremath{\bm{x}}_n,-\ensuremath{\bm{y}}_n) \right\Vert \]
whenever $A\subseteq\ensuremath{\mathbb{N}}$ is finite and $\max_{n\in A} |a_n|\le b:=\min_{n\in A} |b_n|$. \end{claim}
\begin{claim}\label{claim2} There is a constant $C$ such that \[ \left\Vert \sum_{n\in A} a_n \, \ensuremath{\bm{x}}_n\right\Vert \le C \left\Vert\left( \sum_{n=1}^\infty a_n \, \ensuremath{\bm{x}}_n , \sum_{n=1}^\infty b_n \,\ensuremath{\bm{y}}_n\right) \right\Vert \]
whenever $\max_{n\in A} |a_n|\le b:=\min_{n\in A} \max\{|a_n+b_n|,|a_n-b_n|\}$. \end{claim}
Let us prove Claim~\ref{claim1}. Set $D_1=\{n\in A \colon |a_n-b_n|\ge b\}$. If $n\in D_2:=A\setminus D_1$ then \[
|a_n+b_n|=|2b_n+(a_n-b_n)| \ge 2 |b_n|-|a_n-b_n|> 2b-b=b. \] Hence, if $\kappa$ is the modulus of concavity of $\ensuremath{\mathbb{X}}$, applying Lemma~\ref{lem:ShareBidem} we obtain \begin{align*} \left\Vert \sum_{n\in A} a_n \, \ensuremath{\bm{x}}_n\right\Vert &\le \kappa\left( \left\Vert \sum_{n\in D_1} a_n \, \ensuremath{\bm{x}}_n\right\Vert +\left\Vert \sum_{n\in D_2} a_n \, \ensuremath{\bm{x}}_n\right\Vert \right)\\ &\le \kappa C_0\left( \left\Vert \sum_{n=1}^\infty (a_n-b_n) \, \ensuremath{\bm{y}}_n\right\Vert + \left\Vert \sum_{n=1}^\infty (a_n+b_n)\ensuremath{\bm{x}}_n\right\Vert \right)\\ &\le 2 \kappa C_0 \max\left\{ \left\Vert \sum_{n=1}^\infty (a_n+b_n) \, \ensuremath{\bm{x}}_n\right\Vert , \left\Vert \sum_{n=1}^\infty (a_n-b_n) \ensuremath{\bm{y}}_n\right\Vert \right\}\\ &= 2 \kappa C_0\left\Vert \sum_{n=1}^\infty a_n \, (\ensuremath{\bm{x}}_n,\ensuremath{\bm{y}}_n) + \sum_{n=1}^\infty b_n \, (\ensuremath{\bm{x}}_n,-\ensuremath{\bm{y}}_n) \right\Vert. \end{align*}
We conclude by proving Claim~\ref{claim2}. Set $D_1=\{n\in A \colon |a_n|\le |b_n| \}$ and $D_2=A\setminus D_1$. Since, for every $n\in A$, \[
2\max\{ |a_n|,|b_n|\}\ge |a_n|+|b_n|\ge \max\{|a_n+b_n|,|a_n-b_n|\}\ge b, \]
we have $|b_n|\ge b/2$ for all $n\in D_1$ and $|a_n|\ge b/2$ for all $n\in D_2$. Therefore \[
\max_{n\in D_1} |a_n|\le 2 \min_{n\in D_1} |b_n|, \quad
\max_{n\in D_2} |a_n|\le 2 \min_{n\in D_2} |a_n|. \] Applying Lemma~\ref{lem:ShareBidem} we obtain \begin{align*} \left\Vert \sum_{n\in A} a_n \, \ensuremath{\bm{x}}_n\right\Vert &\le \kappa \left(\left\Vert \sum_{n\in D_1} a_n \, \ensuremath{\bm{x}}_n\right\Vert +\left\Vert \sum_{n\in D_2} a_n \, \ensuremath{\bm{x}}_n\right\Vert \right)\nonumber\\ &\le \kappa C_0\left(\left\Vert \sum_{n =1}^\infty 2 b_n \, \ensuremath{\bm{y}}_n\right\Vert +\left\Vert \sum_{n=1}^\infty 2 a_n \, \ensuremath{\bm{x}}_n\right\Vert \right)\nonumber\\ &\le 4\kappa C_0 \left\Vert \left( \sum_{n =1}^\infty a_n \, \ensuremath{\bm{x}}_n, \sum_{n=1}^\infty b_n \, \ensuremath{\bm{y}}_n\right)\right\Vert.\qedhere \end{align*} \end{proof}
If $\varphi$ is the fundamental function of a basis of a Banach space, then $(\varphi(m))_{m=1}^\infty$ and $(m/\varphi(m))_{m=1}^\infty$ are non-decreasing sequences (see \cite{DKKT2003}). Our next result shows that, conversely, every such $\varphi$ arises, up to equivalence, as the fundamental function of a bidemocratic basis of a Banach space.
\begin{theorem}\label{thm:BDQGAFF} Let $(s_m)_{m=1}^\infty$ be a non-decreasing unbounded sequence of positive scalars. Suppose that $(m/s_m)_{m=1}^\infty$ is unbounded and non-decreasing. Then there is a Banach space $\ensuremath{\mathbb{X}}$ and a conditional bidemocratic quasi-greedy basis $\ensuremath{\mathcal{X}}$ of $\ensuremath{\mathbb{X}}$ whose fundamental function grows as $(s_m)_{m=1}^\infty$. \end{theorem}
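Before the proof, we note two elementary families of sequences satisfying the hypotheses of Theorem~\ref{thm:BDQGAFF} (these sample choices are ours, for illustration only):

```latex
% Both (s_m) and (m/s_m) are non-decreasing and unbounded:
s_m=m^{\alpha}\ (0<\alpha<1),\quad\text{with}\quad m/s_m=m^{1-\alpha};
\qquad s_m=1+\log m,\quad\text{with}\quad m/s_m=\frac{m}{1+\log m}.
```

In the first case the associated weight satisfies $w_n=s_n-s_{n-1}\approx n^{\alpha-1}$, so $d_{1,1}(\ensuremath{\bm{w}})$ is a classical Lorentz sequence space.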
\begin{proof} Let $\ensuremath{\bm{w}}=(w_n)_{n=1}^\infty$ denote the weight whose primitive weight is $(s_m)_{m=1}^\infty$. Then $d_{1,1}(\ensuremath{\bm{w}})$ is a Banach space whose dual space is the Marcinkiewicz space $m(\ensuremath{\bm{w}})$, consisting of all $f\in c_0$ whose non-increasing rearrangement $(a_n)_{n=1}^\infty$ satisfies \[ \Vert f\Vert_{m(\ensuremath{\bm{w}})}=\sup_m \frac{1}{s_m}\sum_{n=1}^m a_n<\infty \] (see \cite{CRS2007}*{Theorems 2.4.14 and 2.5.10}). Let $m_0(\ensuremath{\bm{w}})$ be the separable part of $m(\ensuremath{\bm{w}})$, and let $\ensuremath{\bm{w}}^{\ast}$ be the weight whose primitive weight is $(m/s_m)_{m=1}^\infty$. We have the following chain of norm-one inclusions: \begin{equation}\label{eq:embeddingsM} d_{1,1}(\ensuremath{\bm{w}}) \subseteq m_0(\ensuremath{\bm{w}}^{\ast}) \subseteq m(\ensuremath{\bm{w}}^{\ast}) \subseteq d_{1,\infty}(\ensuremath{\bm{w}}). \end{equation} The right-hand inclusion is clear. Let us prove the left-hand inclusion. Let $(a_n)_{n=1}^\infty$ be the non-increasing rearrangement of $f\in c_0$. Given $m\in\ensuremath{\mathbb{N}}$ we define $(b_n)_{n=1}^\infty$ by $b_n=a_n$ if $n\le m$ and $b_n=0$ otherwise. Using Abel's summation formula we obtain \begin{align*} \frac{s_m}{m}\sum_{n=1}^m a_n &= \frac{s_m}{m} \sum_{n=1}^\infty (b_n-b_{n+1})n \\ &\le \sum_{n=1}^\infty (b_n-b_{n+1})s_n\\ &=\sum_{n=1}^m a_n w_n\le \Vert f\Vert_{1,\ensuremath{\bm{w}}}, \end{align*} where the middle inequality holds because $b_n-b_{n+1}\ge 0$ and $n s_m/m\le s_n$ for $1\le n\le m$, the latter since $(m/s_m)_{m=1}^\infty$ is non-decreasing.
We infer from \eqref{eq:embeddingsM} that $d_{1,1}(\ensuremath{\bm{w}})$ and $m_0(\ensuremath{\bm{w}}^{\ast})$ are Banach spaces for which the unit vector system is a symmetric basis with fundamental function $(s_m)_{m=1}^\infty$. Applying the rotation method with these bases yields a bidemocratic quasi-greedy basis of $d_{1,1}(\ensuremath{\bm{w}})\oplus m_0(\ensuremath{\bm{w}}^{\ast})$ with fundamental function equivalent to $(s_m)_{m=1}^\infty$.
To show that this basis is conditional, by Proposition~\ref{prop:diamondConditional} it suffices to show that $d_{1,1}(\ensuremath{\bm{w}})$ and $m_0(\ensuremath{\bm{w}}^{\ast})$ are not isomorphic, so that $d_{1,1}(\ensuremath{\bm{w}})\subsetneq m_0(\ensuremath{\bm{w}}^{\ast})$. For that, we note that the unit vector system is a boundedly complete basis of $d_{1,1}(\ensuremath{\bm{w}})$ and that $\ell_1$ is a complemented subspace of $d_{1,1}(\ensuremath{\bm{w}}^{\ast})$. Indeed, the first fact is clear. The second fact follows by noting that the proof in \cite{ACL1973} works even without requiring $\ensuremath{\bm{w}}^*$ to be non-increasing. An appeal to \cite{AlbiacKalton2016}*{Theorem 3.2.15} and \cite{AlbiacKalton2016}*{Theorem 3.3.1} concludes the proof. \end{proof}
\begin{remark} Notice that in Theorem~\ref{thm:BDQGAFF} we can obtain that $\ensuremath{\mathcal{X}}$ is $1$-bidemocratic with $\ensuremath{\bm{\varphi_u}}[\ensuremath{\mathcal{X}},\ensuremath{\mathbb{X}}](m)=s_m$ for all $m\in\ensuremath{\mathbb{N}}$. Indeed, if $(s_m)_{m=1}^\infty$ is a non-decreasing sequence of positive scalars such that $(m/s_m)_{m=1}^\infty$ is non-decreasing, and $\ensuremath{\mathbb{X}}$ is a $p$-Banach space, $0<p\le 1$, with a bidemocratic basis $\ensuremath{\mathcal{X}}$ such that $\ensuremath{\bm{\varphi_u}}[\ensuremath{\mathcal{X}},\ensuremath{\mathbb{X}}](m)\approx s_m$ for $m\in\ensuremath{\mathbb{N}}$, then, arguing as in the proof of \cite{DOSZ2011}*{Theorem 2.1} (where unconditionality plays no role), we obtain an equivalent $p$-norm for $\ensuremath{\mathbb{X}}$ with respect to which $\ensuremath{\bm{\varphi_u}}[\ensuremath{\mathcal{X}},\ensuremath{\mathbb{X}}](m)=s_m$ and $\ensuremath{\bm{\varphi_u}}[\ensuremath{\mathcal{X}}^{\ast},\ensuremath{\mathbb{X}}^{\ast}](m)=m/s_m$ for all $m\in\ensuremath{\mathbb{N}}$. \end{remark}
\begin{remark} In the case when $(s_m)_{m=1}^\infty$ has the URP we can give a more quantitative approach to the proof of Theorem~\ref{thm:BDQGAFF}. In this particular case we have $m(\ensuremath{\bm{w}}^{\ast})=d_{1,\infty}(\ensuremath{\bm{w}})$. Moreover, by \cite{CRS2007}*{Theorem 2.5.10}, $d_{1,q}(\ensuremath{\bm{w}})$ is a Banach space for every $1<q<\infty$. Notice that, in general, $d_{1,q}(\ensuremath{\bm{w}})$ is $r$-Banach for all $r<1$ and $1<q\le \infty$, and that $d_{1,q}(\ensuremath{\bm{w}})$ is $q$-Banach for all $0<q<1$. Applying the rotation method with the unit vector systems of $d_{1,p}(\ensuremath{\bm{w}})$ and $d_{1,q}(\ensuremath{\bm{w}})$, $0<p<q\le \infty$, yields a bidemocratic quasi-greedy basis (of a quasi-Banach space which is locally convex if $p\ge 1$) whose fundamental function is equivalent to $(s_m)_{m=1}^\infty$. Combining \eqref{eq:NormLorentz} with Proposition~\ref{prop:diamondConditional} gives that the conditionality constants $(\ensuremath{\bm{k}}_m)_{m=1}^\infty$ of the basis we obtain satisfy \[ \ensuremath{\bm{k}}_m\gtrsim (H_m[\ensuremath{\bm{w}}])^{1/p-1/q}, \quad m\in\ensuremath{\mathbb{N}}. \] In the particular case that $(s_m)_{m=1}^\infty$ has the LRP, by Lemma~\ref{lem:LorentzLRP}, \[ \ensuremath{\bm{k}}_m\gtrsim (\log m)^{1/p-1/q}, \quad m\in\ensuremath{\mathbb{N}},\; m\ge 2. \] Notice that, if $1<p<q<\infty$ and $(m/s_m^q)_{m=1}^\infty$ is non-decreasing, $\ensuremath{\mathbb{X}}$ is superreflexive (see \cite{Altshuler1975}). In particular, we find a bidemocratic quasi-greedy basis of a Banach space with $\ensuremath{\bm{k}}_m\gtrsim \log m$ for $m\ge 2$; and, for each $0<s<1$, a bidemocratic quasi-greedy basis of a superreflexive Banach space with $\ensuremath{\bm{k}}_m\gtrsim (\log m)^s$ for $m\ge 2$. Thus, the rotation method serves to build `highly conditional' almost greedy bases (see \cite{AADK2019b} for background on this topic). \end{remark}
\begin{example} Let $\ensuremath{\mathbb{X}}$ be a Banach space with a greedy, non-symmetric basis $\ensuremath{\mathcal{X}}$ whose dual basis is also greedy. Then, if $\ensuremath{\mathcal{X}}_\pi$ is a permutation of $\ensuremath{\mathcal{X}}$ nonequivalent to $\ensuremath{\mathcal{X}}$, we have that $\ensuremath{\mathcal{X}}\diamond\ensuremath{\mathcal{X}}_\pi$ is a conditional quasi-greedy basis of $\ensuremath{\mathbb{X}}\oplus\ensuremath{\mathbb{X}}$. For instance, in light of \cite{Temlyakov1998}*{Theorem 2.1}, this technique can be applied to the $L_p$-normalized Haar system to obtain a bidemocratic conditional quasi-greedy basis of $L_p([0,1])$, $p\in(1,2)\cup(2,\infty)$. Also, since, for the same values of $p$, the space $\ell_p$ has a greedy basis which is non-equivalent to the canonical basis (see \cite{DHK2006}*{Theorem 2.1}), this technique yields a bidemocratic conditional basis of $\ell_p$. \end{example}
\begin{bibdiv} \begin{biblist}
\bib{AABW2021}{article}{ author={Albiac, Fernando}, author={Ansorena, Jos\'{e}~L.}, author={Bern\'{a}, Pablo~M.}, author={Wojtaszczyk, Przemys{\l}aw}, title={Greedy approximation for biorthogonal systems in quasi-Banach spaces}, date={2021}, journal={Dissertationes Math. (Rozprawy Mat.)}, volume={560}, pages={1\ndash 88}, }
\bib{AADK2019b}{article}{ author={Albiac, Fernando}, author={Ansorena, Jos\'{e}~L.}, author={Dilworth, Stephen~J.}, author={Kutzarova, Denka}, title={Building highly conditional almost greedy and quasi-greedy bases in {B}anach spaces}, date={2019}, ISSN={0022-1236}, journal={J. Funct. Anal.}, volume={276}, number={6}, pages={1893\ndash 1924}, url={https://doi.org/10.1016/j.jfa.2018.08.015}, review={\MR{3912795}}, }
\bib{AAW2019}{article}{ author={Albiac, Fernando}, author={Ansorena, Jos\'{e}~L.}, author={Wojtaszczyk, Przemys{\l}aw}, title={Conditional quasi-greedy bases in non-superreflexive {B}anach spaces}, date={2019}, ISSN={0176-4276}, journal={Constr. Approx.}, volume={49}, number={1}, pages={103\ndash 122}, url={https://doi.org/10.1007/s00365-017-9399-x}, review={\MR{3895765}}, }
\bib{AAW2021b}{article}{ author={Albiac, Fernando}, author={Ansorena, Jos\'{e}~L.}, author={Wojtaszczyk, Przemys{\l}aw}, title={On certain subspaces of {$\ell_p$} for {$0<p\leq1$} and their applications to conditional quasi-greedy bases in {$p$}-{B}anach spaces}, date={2021}, ISSN={0025-5831}, journal={Math. Ann.}, volume={379}, number={1-2}, pages={465\ndash 502}, url={https://doi.org/10.1007/s00208-020-02069-3}, review={\MR{4211094}}, }
\bib{AAW2021}{article}{ author={Albiac, Fernando}, author={Ansorena, Jos\'{e}~L.}, author={Wojtaszczyk, Przemys{\l}aw}, title={Quasi-greedy bases in {$\ell_p$} {$(0<p<1)$} are democratic}, date={2021}, ISSN={0022-1236}, journal={J. Funct. Anal.}, volume={280}, number={7}, pages={108871, 21}, url={https://doi.org/10.1016/j.jfa.2020.108871}, review={\MR{4211033}}, }
\bib{AlbiacKalton2016}{book}{ author={Albiac, Fernando}, author={Kalton, Nigel~J.}, title={Topics in {B}anach space theory}, edition={Second}, series={Graduate Texts in Mathematics}, publisher={Springer, [Cham]}, date={2016}, volume={233}, ISBN={978-3-319-31555-3; 978-3-319-31557-7}, url={https://doi.org/10.1007/978-3-319-31557-7}, note={With a foreword by Gilles Godefroy}, review={\MR{3526021}}, }
\bib{Altshuler1975}{article}{ author={Altshuler, Zvi}, title={Uniform convexity in {L}orentz sequence spaces}, date={1975}, ISSN={0021-2172}, journal={Israel J. Math.}, volume={20}, number={3-4}, pages={260\ndash 274}, url={https://doi.org/10.1007/BF02760331}, review={\MR{385517}}, }
\bib{ACL1973}{article}{ author={Altshuler, Zvi}, author={Casazza, Peter~G.}, author={Lin, Bor~Luh}, title={On symmetric basic sequences in {L}orentz sequence spaces}, date={1973}, ISSN={0021-2172}, journal={Israel J. Math.}, volume={15}, pages={140\ndash 155}, url={https://doi.org/10.1007/BF02764600}, review={\MR{328553}}, }
\bib{BL2020}{article}{ author={Berasategui, Miguel}, author={Lassalle, Silvia}, title={Weak semi-greedy bases and the equivalence between semi-greedy, branch semi-greedy and almost greedy {M}arkushevich bases in {B}anach spaces}, date={2020}, journal={arXiv e-prints}, eprint={2004.06849}, }
\bib{Berna2020}{article}{ author={Bern\'{a}, Pablo~M.}, title={Characterization of weight-semi-greedy bases}, date={2020}, ISSN={1069-5869}, journal={J. Fourier Anal. Appl.}, volume={26}, number={1}, pages={Paper No. 21, 21}, url={https://doi.org/10.1007/s00041-020-09727-9}, review={\MR{4056847}}, }
\bib{BBG2017}{article}{ author={Bern\'{a}, Pablo~M.}, author={Blasco, \'{O}scar}, author={Garrig\'{o}s, Gustavo}, title={Lebesgue inequalities for the greedy algorithm in general bases}, date={2017}, ISSN={1139-1138}, journal={Rev. Mat. Complut.}, volume={30}, number={2}, pages={369\ndash 392}, url={https://doi.org/10.1007/s13163-017-0221-x}, review={\MR{3642039}}, }
\bib{BBGHO2018}{article}{ author={Bern\'{a}, Pablo~M.}, author={Blasco, Oscar}, author={Garrig\'{o}s, Gustavo}, author={Hern\'{a}ndez, Eugenio}, author={Oikhberg, Timur}, title={Embeddings and {L}ebesgue-type inequalities for the greedy algorithm in {B}anach spaces}, date={2018}, ISSN={0176-4276}, journal={Constr. Approx.}, volume={48}, number={3}, pages={415\ndash 451}, url={https://doi.org/10.1007/s00365-018-9415-9}, review={\MR{3869447}}, }
\bib{CRS2007}{article}{ author={Carro, Mar\'{\i}a~J.}, author={Raposo, Jos\'{e}~A.}, author={Soria, Javier}, title={Recent developments in the theory of {L}orentz spaces and weighted inequalities}, date={2007}, ISSN={0065-9266}, journal={Mem. Amer. Math. Soc.}, volume={187}, number={877}, pages={xii+128}, url={https://doi.org/10.1090/memo/0877}, review={\MR{2308059}}, }
\bib{DHK2006}{article}{ author={Dilworth, Stephen~J.}, author={Hoffmann, Mark}, author={Kutzarova, Denka}, title={Non-equivalent greedy and almost greedy bases in {$l_p$}}, date={2006}, ISSN={0972-6802}, journal={J. Funct. Spaces Appl.}, volume={4}, number={1}, pages={25\ndash 42}, url={https://doi.org/10.1155/2006/368648}, review={\MR{2194634}}, }
\bib{DKK2003}{article}{ author={Dilworth, Stephen~J.}, author={Kalton, Nigel~J.}, author={Kutzarova, Denka}, title={On the existence of almost greedy bases in {B}anach spaces}, date={2003}, ISSN={0039-3223}, journal={Studia Math.}, volume={159}, number={1}, pages={67\ndash 101}, url={https://doi.org/10.4064/sm159-1-4}, note={Dedicated to Professor Aleksander Pe{\l}czy\'nski on the occasion of his 70th birthday}, review={\MR{2030904}}, }
\bib{DKKT2003}{article}{ author={Dilworth, Stephen~J.}, author={Kalton, Nigel~J.}, author={Kutzarova, Denka}, author={Temlyakov, Vladimir~N.}, title={The thresholding greedy algorithm, greedy bases, and duality}, date={2003}, ISSN={0176-4276}, journal={Constr. Approx.}, volume={19}, number={4}, pages={575\ndash 597}, url={https://doi.org/10.1007/s00365-002-0525-y}, review={\MR{1998906}}, }
\bib{DOSZ2011}{article}{ author={Dilworth, Stephen~J.}, author={Odell, Edward~W.}, author={Schlumprecht, Thomas}, author={Zs\'{a}k, Andr\'{a}s}, title={Renormings and symmetry properties of $1$-greedy bases}, date={2011}, ISSN={0021-9045}, journal={J. Approx. Theory}, volume={163}, number={9}, pages={1049\ndash 1075}, url={https://doi.org/10.1016/j.jat.2011.02.013}, review={\MR{2832742}}, }
\bib{GHO2013}{article}{ author={Garrig\'os, Gustavo}, author={Hern\'{a}ndez, Eugenio}, author={Oikhberg, Timur}, title={Lebesgue-type inequalities for quasi-greedy bases}, date={2013}, ISSN={0176-4276}, journal={Constr. Approx.}, volume={38}, number={3}, pages={447\ndash 470}, url={https://doi.org/10.1007/s00365-013-9209-z}, review={\MR{3122278}}, }
\bib{KoTe1999}{article}{ author={Konyagin, Sergei~V.}, author={Temlyakov, Vladimir~N.}, title={A remark on greedy approximation in {B}anach spaces}, date={1999}, ISSN={1310-6236}, journal={East J. Approx.}, volume={5}, number={3}, pages={365\ndash 379}, review={\MR{1716087}}, }
\bib{LinPel1968}{article}{ author={Lindenstrauss, Joram}, author={Pe{\l}czy\'{n}ski, Aleksander}, title={Absolutely summing operators in {$L_{p}$}-spaces and their applications}, date={1968}, ISSN={0039-3223}, journal={Studia Math.}, volume={29}, pages={275\ndash 326}, url={https://doi.org/10.4064/sm-29-3-275-326}, review={\MR{0231188}}, }
\bib{Oikhberg2018}{article}{ author={Oikhberg, Timur}, title={Greedy algorithm with gaps}, date={2018}, ISSN={0021-9045}, journal={J. Approx. Theory}, volume={225}, pages={176\ndash 190}, url={https://doi.org/10.1016/j.jat.2017.10.006}, review={\MR{3733255}}, }
\bib{Oswald2001}{article}{ author={Oswald, Peter}, title={Greedy algorithms and best {$m$}-term approximation with respect to biorthogonal systems}, date={2001}, ISSN={1069-5869}, journal={J. Fourier Anal. Appl.}, volume={7}, number={4}, pages={325\ndash 341}, url={https://doi.org/10.1007/BF02514500}, review={\MR{1836816}}, }
\bib{Rudin1976}{book}{ author={Rudin, Walter}, title={Principles of mathematical analysis}, edition={Third}, publisher={McGraw-Hill Book Co., New York-Auckland-D\"{u}sseldorf}, date={1976}, note={International Series in Pure and Applied Mathematics}, review={\MR{0385023}}, }
\bib{Temlyakov1998}{article}{ author={Temlyakov, Vladimir~N.}, title={The best {$m$}-term approximation and greedy algorithms}, date={1998}, ISSN={1019-7168}, journal={Adv. Comput. Math.}, volume={8}, number={3}, pages={249\ndash 265}, url={https://doi.org/10.1023/A:1018900431309}, review={\MR{1628182}}, }
\bib{Woj1982}{article}{ author={Wojtaszczyk, Przemys{\l}aw}, title={The {F}ranklin system is an unconditional basis in {$H_{1}$}}, date={1982}, ISSN={0004-2080}, journal={Ark. Mat.}, volume={20}, number={2}, pages={293\ndash 300}, url={https://doi.org/10.1007/BF02390514}, review={\MR{686177}}, }
\bib{Woj2000}{article}{ author={Wojtaszczyk, Przemys{\l}aw}, title={Greedy algorithm for general biorthogonal systems}, date={2000}, ISSN={0021-9045}, journal={J. Approx. Theory}, volume={107}, number={2}, pages={293\ndash 314}, url={https://doi.org/10.1006/jath.2000.3512}, review={\MR{1806955}}, }
\end{biblist} \end{bibdiv}
\end{document}
\begin{document}
\title{Primitive tuning via quasiconformal surgery}
\begin{abstract} Using quasiconformal surgery, we prove that any primitive, postcritically-finite hyperbolic polynomial can be tuned with an arbitrary generalized polynomial with non-escaping critical points, generalizing a result of Douady-Hubbard for quadratic polynomials to the case of higher degree polynomials. This solves affirmatively a conjecture by Inou and Kiwi on surjectivity of the renormalization operator on higher degree polynomials in one complex variable. \end{abstract} \section{Introduction} Quasiconformal surgery is a powerful technique in the theory of holomorphic dynamics of one variable. The process often consists of two steps. First, we construct a quasi-regular map with certain dynamical properties, by modifying part of a holomorphic dynamical system, or extending the definition of a partially defined holomorphic dynamical system. Then we prove existence of a rational map which is (essentially) conjugate to the quasi-regular mapping. Successful applications of this technique include Douady-Hubbard's theory of polynomial-like mappings (\cite{DH2}) and Shishikura's sharp bounds on the number of Fatou cycles (\cite{Shishi}), among others.
Tuning is an operator introduced by Douady-Hubbard~\cite{DH2} to prove existence of small copies in the Mandelbrot set $\mathcal{M}$. Recall that the Mandelbrot set consists of all complex numbers $c$ for which $P_c(z)=z^2+c$ has a connected Julia set. Let $P_a(z)=z^2+a$ be a quadratic polynomial for which the critical point $0$ has period $p$. Douady-Hubbard proved that there is a homeomorphism $\tau=\tau_a$ from $\mathcal{M}$ into itself with $\tau_a(0)=a$ and with the property that the $p$-th iterate of $P_{\tau(c)}$ has a quadratic-like restriction which is hybrid equivalent to $P_c$. Intuitively, the filled Julia set of $P_{\tau(c)}$ is obtained from that of $P_a$ by replacing each bounded Fatou component of $P_a$ with a copy of the filled Julia set of $P_c$. Douady-Hubbard's argument involved detailed knowledge of the combinatorics of the Mandelbrot set as well as continuity of the straightening map in the quadratic case, and thus breaks down for higher degree polynomials.
In~\cite{IK}, Inou-Kiwi defined a natural analogue of the map $\chi=\tau^{-1}$ for higher degree polynomials, which we will call the {\em IK straightening map}. Given a postcritically finite hyperbolic polynomial $f_0$ of degree $d\ge 2$, together with an internal angle system, they defined a map $\chi$ from a certain space $\mathcal{R}(f_0)$ into a space $\mathcal{C}(T)$, which consists of the generalized polynomials over the reduced mapping scheme $T=T(f_0)$ of $f_0$ with fiberwise connected Julia set. The map $\chi$ is injective and in general not continuous~\cite{I3}. A map $f\in \mathcal{R}(f_0)$ is the tuning of $f_0$ with $\chi(f)$ in the sense of Douady and Hubbard when the Julia set of $\chi(f)$ is fiberwise locally connected. In the case that $f_0$ is {\em primitive}, i.e.\ the bounded Fatou components have pairwise disjoint closures, the set $\mathcal{R}(f_0)$ coincides with a combinatorially defined set $\mathcal{C}(f_0)$ which is known to be compact. Inou-Kiwi's argument is mostly combinatorial, and they conjectured that when $f_0$ is primitive, $\chi$ is surjective and has connected domain. Surjectivity of $\chi$ means that $f_0$ can be tuned by every $g\in \mathcal{C}(T)$.
In this paper, we shall prove the conjecture of Inou and Kiwi using the quasiconformal surgery technique. In particular, this shows that a primitive postcritically finite hyperbolic polynomial $f_0$ can be tuned by each $g\in \mathcal{C}(T(f_0))$.
\begin{maintheorem} Let $f_0$ be a postcritically finite hyperbolic polynomial that is primitive and fix an internal angle system for $f_0$. Then the IK straightening map $\chi:\mathcal{C}(f_0)\to \mathcal{C}(T)$ is bijective and $\mathcal{C}(f_0)$ is connected. \end{maintheorem} We shall recall the definition of Inou-Kiwi's straightening map later. Let us just mention now that if $f_0(z)=z^d+a$ for some $a\in \mathbb C$ then $\mathcal{C}(T)$ is the set of all monic centered polynomials of degree $d$ which have connected Julia sets.
The main part of the proof is to show the surjectivity of the map $\chi$. It is fairly easy to construct a quasi-regular map $\widetilde{f}$ which has a generalized polynomial-like restriction hybrid equivalent to a given generalized polynomial in $\mathcal{C}(T)$, via qc surgery in a union of annular domains around the critical Fatou domains of $f_0$. In order to show that there is a polynomial map with essentially the same dynamics as $\widetilde{f}$, we run Thurston's algorithm on $\widetilde{f}$. We modify an argument of Rivera-Letelier~\cite{R-L} to show the convergence. In order to control distortion, we use a result of Kahn~\cite{Kahn} on removability of certain Julia sets. After surjectivity is proved, we deduce connectivity of $\mathcal{C}(f_0)$ from that of $\mathcal{C}(T)$, which is a theorem of Branner-Hubbard~\cite{BH} and Lavaurs~\cite{La}.
In \cite{EY}, qc surgery was successfully applied to construct intertwining tuning. Our case is more complicated since the surgery involves the non-wandering set of the dynamics.
The paper is organized as follows. In \S\ref{sec:IK}, we recall the definition of generalized polynomial-like maps and of the IK straightening map. In \S\ref{sec:puzzle}, we construct a specific Yoccoz puzzle for postcritically finite hyperbolic primitive polynomials which is used to construct renormalizations. In \S\ref{sec:Kahn}, we prove a technical distortion lemma. In \S\ref{sec:Thurston}, we prove a convergence theorem for the Thurston algorithm. The proof of the main theorem is given in \S\ref{sec:surgery} using qc surgery.
\noindent {\bf Acknowledgment.} We would like to thank the referee for carefully reading the paper and, in particular, for pointing out an error in Section 3 of the first version. This work was supported by NSFC No. 11731003. \section{The IK straightening map}\label{sec:IK} We recall some basic notation and the definition of the IK straightening map. The multi-critical nature of the problem makes the definition a bit complicated.
Let $\text{Poly}_d$ denote the set of all monic centered polynomials of degree $d$, i.e. polynomials of the form $z\mapsto z^d+ a_{d-2}z^{d-2}+\cdots+a_0$, with $a_0,a_1,\cdots, a_{d-2}\in \mathbb C$. For each $f\in \text{Poly}_d$, let $K(f)$ and $J(f)$ denote the filled Julia set and the Julia set respectively. Let $P(f)$ denote the postcritical set of $f$: $$P(f)=\overline{\bigcup_{f'(c)=0, c\in \mathbb C}\bigcup_{n=1}^\infty \{f^n(c)\}}.$$
Let $$\mathcal{C}_d=\{f\in \text{Poly}_d: K(f) \text{ is connected}\}.$$ \subsection{External Rays and Equipotential Curves} For $f\in \text{Poly}_d$, the {\em Green function} is defined as
$$G_f(z)=\lim_{n\to\infty} \frac{1}{d^n}\log^+ |f^n(z)|,$$ where $\log^+=\max(\log , 0).$ The Green function is continuous and subharmonic in $\mathbb C$, satisfying $G_f(f(z))=dG_f(z)$. It is nonnegative, harmonic and takes positive values exactly in the attracting basin of $\infty$ \[B_f(\infty):=\{z\in \mathbb C\mid f^n(z) \to \infty \text{ as } n \to \infty\}=\mathbb C\setminus K(f).\]
Assume $f\in\mathcal{C}_d$. Then there exists a unique conformal map \[\phi_f:B_f(\infty) \to \{z\mid |z|>1\}\] such that $\phi_f(z)/z\to 1$ as $z\to\infty$ and such that $\phi_f\circ f(z)=(\phi_f(z))^d$ on $B_f(\infty)$. The {\em external ray of angle $t \in \mathbb{R}/\mathbb{Z}$} is defined as \[\mathcal R_{f}(t)=\{\phi_f^{-1}(re^{i2\pi t})\mid 1<r<\infty\},\] and the {\em equipotential curve of potential $l\in (0,\infty)$} as \[E_{f}(l)=\{\phi_f^{-1}(e^{l+i2\pi\theta})\mid 0\le\theta<1\}.\]
We say the external ray $\mathcal R_f(t)$ {\em lands} at some point $z_0$ if $\lim_{r\to 1}\phi_f^{-1}(re^{i2\pi t})=z_0$. It is known that for each $t\in \mathbb{Q}/\mathbb{Z}$, $\mathcal{R}_f(t)$ lands at some point. On the other hand, if $z_0$ is a repelling or parabolic periodic point, then at least one, and at most finitely many, external rays land at $z_0$. See for example~\cite{Mil1}.
The {\em rational lamination} of $f$, denoted by $\lambda(f)$, is the equivalence relation on $\mathbb{Q}/\mathbb{Z}$ so that $\theta_1\sim \theta_2$ if and only if $\mathcal{R}_f(\theta_1)$ and $\mathcal{R}_f(\theta_2)$ land at the same point.
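As a quick illustration, in the model case of the power map all of the objects above are explicit:

```latex
% Model case f(z)=z^d: K(f) is the closed unit disk, and
G_f(z)=\log^+|z|,\qquad \phi_f(z)=z,\qquad
\mathcal{R}_f(t)=\{re^{2\pi i t}\mid 1<r<\infty\}.
```

Each ray $\mathcal{R}_f(t)$ lands at $e^{2\pi i t}$, and distinct rays land at distinct points, so the rational lamination $\lambda(f)$ is the trivial equivalence relation on $\mathbb{Q}/\mathbb{Z}$.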
\subsection{Generalized polynomials over a scheme}\label{subsec:GPL}
Now let us consider a postcritically finite hyperbolic polynomial $f_0$. Following Milnor \cite{Mil3}, we define the {\em reduced mapping scheme} $$T(=T(f_0))=(|T|,\sigma,\delta)$$ of $f_0$ as follows. Let $|T|$ denote the collection of critical bounded Fatou components of $f_0$. For each $\mathbf{v}\in |T|$, let $r(\mathbf{v})$ be the minimal positive integer such that $f_0^{r(\mathbf{v})}(\mathbf{v})$ is again a critical Fatou component. Define $\sigma:|T|\to |T|$ and $\delta: |T|\to \{2,3,\cdots\}$ by
$$\sigma(\mathbf{v})=f_0^{r(\mathbf{v})}(\mathbf{v})\mbox{ and }\delta(\mathbf{v})=\text{deg}(f_0^{r(\mathbf{v})}|\mathbf{v})=\text{deg}(f_0|\mathbf{v}).$$
\begin{definition}[Generalized Polynomials] A generalized polynomial $g$ over $T$ is a map
\[g:|T|\times \mathbb C \to |T|\times \mathbb C\] such that $g(\mathbf{v},z)=(\sigma(\mathbf{v}),g_{\mathbf{v}}(z))$ where $g_{\mathbf{v}}(z)$ is a monic centered polynomial of degree $\delta(\mathbf{v})$. \end{definition}
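For example (the unicritical case, consistent with the remark following the Main Theorem), suppose $f_0(z)=z^d+a$ is postcritically finite hyperbolic with critical point $0$ of period $p$. Then the only critical bounded Fatou component is the one containing $0$, say $\mathbf{v}_0$, and the scheme is as simple as possible:

```latex
% Reduced mapping scheme in the unicritical case (our illustration):
|T|=\{\mathbf{v}_0\},\qquad r(\mathbf{v}_0)=p,\qquad
\sigma(\mathbf{v}_0)=\mathbf{v}_0,\qquad \delta(\mathbf{v}_0)=d.
```

A generalized polynomial over this scheme amounts to a single monic centered polynomial of degree $d$, so $\mathcal{C}(T)$ reduces to $\mathcal{C}_d$.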
Denote by $P(T)$ the set of all generalized polynomials over the scheme $T$. Given $g \in P(T)$, the {\em filled Julia set} $K(g)$ of $g$ is the set of points in $|T|\times \mathbb C$ whose forward orbits are precompact. The boundary $\partial K(g)$ of the filled Julia set is called the {\em Julia set} $J(g)$ of $g$. The filled Julia set $K(g)$ is called {\em fiberwise connected} if $K(g,\mathbf{v}):=K(g)\cap (\{\mathbf{v}\}\times \mathbb C)$ is connected for each $\mathbf{v}\in |T|$. Let $\mathcal C(T)$ be the set of all the generalized polynomials with fiberwise connected filled Julia set over $T$.
We shall need external rays and a Green function for $g\in \mathcal{C}(T)$, which can be defined similarly to the case of a single polynomial. Indeed, for each $\mathbf{v}\in |T|$ there exists a unique conformal map $\varphi_{\mathbf{v},g}:\mathbb C\setminus K(g,\mathbf{v})\to \mathbb C\setminus \overline{\mathbb{D}}$ such that $\varphi_{\mathbf{v},g}(z)/z\to 1$ as $z\to\infty$, and these maps satisfy $\varphi_{\sigma(\mathbf{v}),g}(g_\mathbf{v}(z))=\varphi_{\mathbf{v},g}(z)^{\delta(\mathbf{v})}$. For $t\in\mathbb{R}/\mathbb{Z}$, the {\em external ray} $\mathcal{R}_g(\mathbf{v},t)$ is defined as $\varphi_{\mathbf{v},g}^{-1}(\{re^{2\pi i t}: r>1\})$. The {\em Green function} of $g$ is defined as $$G_g(\mathbf{v},z)=\left\{\begin{array}{ll}
\log |\varphi_{\mathbf{v},g}(z)|, & \mbox{ if } z\not\in K(g,\mathbf{v});\\ 0 &\mbox{ otherwise.} \end{array} \right. $$
\subsection{Generalized polynomial-like maps}
We shall now recall the definition of generalized polynomial-like map over the mapping scheme $T$. We say $U$ is a {\em topological multi-disk} in $|T|\times \mathbb C$ if there exist topological disks $\{U_\mathbf{v}\}_{\mathbf{v}\in|T|}$ in $\mathbb C$ such that $U \cap (\{\mathbf{v}\}\times \mathbb C)=\{\mathbf{v}\}\times U_\mathbf{v}$. \begin{definition} An AGPL (almost generalized polynomial-like) map $g$ over the scheme $T$ is a map \[g:U \to U', (\mathbf{v},z)\mapsto (\sigma(\mathbf{v}),g_{\mathbf{v}}(z)),\] with the following properties: \begin{itemize}
\item $U, U'$ are two topological multi-disks in $|T|\times \mathbb C$ and $U_\mathbf{v}\subsetneq U'_\mathbf{v}$ for each $\mathbf{v}\in |T|$; \item $g_{\mathbf{v}}: U_\mathbf{v} \to U'_{\sigma(\mathbf{v})}$ is a proper map of degree $\delta(\mathbf{v})$ for each $\mathbf{v}$; \item The set $K(g):=\bigcap\limits_{n=0}^{\infty} g^{-n}(U)$, called the {\em filled-Julia set of $g$}, is compactly contained in $U$. \end{itemize} If in addition $U\Subset U'$, then we say that $g$ is a GPL (generalized polynomial-like) map. \end{definition}
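When $|T|$ consists of a single vertex $\mathbf{v}$ with $\sigma(\mathbf{v})=\mathbf{v}$ and $\delta(\mathbf{v})=d$, a GPL map is simply a polynomial-like map of degree $d$ in the sense of Douady-Hubbard. A standard way to produce one (sketched here for $f\in\mathcal{C}_d$, using the relation $G_f\circ f=dG_f$) is to restrict $f$ between sublevel sets of its Green function:

```latex
% For any l>0, the sublevel sets of G_f give a degree-d restriction:
U'=\{G_f<l\},\qquad U=f^{-1}(U')=\{G_f<l/d\}\Subset U'.
```

Then $f|_U:U\to U'$ is proper of degree $d$ and $\bigcap_{n\ge 0}f^{-n}(U)=K(f)\Subset U$, so $f|_U$ is a GPL map over this one-vertex scheme.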
It should be noted that every AGPL map has a GPL restriction with the same filled-Julia set. See \cite[Lemma 2.4]{LY}.
Let $g_1, g_2$ be two AGPL maps over $T$. We say that they are {\em qc conjugate} if there is a fiberwise qc map $\varphi$ from a neighborhood of $K(g_1)$ onto a neighborhood of $K(g_2)$, sending the $\mathbf{v}$ fiber of $g_1$ to the $\mathbf{v}$ fiber of $g_2$, such that $\varphi\circ g_1=g_2\circ \varphi$ near $K(g_1)$. We say that they are {\em hybrid equivalent} if they are qc conjugate and we can choose $\varphi$ such that $\bar{\partial} \varphi=0$ a.e. on $K(g_1)$.
The Douady-Hubbard straightening theorem~\cite{DH2} extends in a straightforward way: every AGPL map $g$ is hybrid equivalent to a generalized polynomial $G$. In the case that the filled Julia set of $g$ is fiberwise connected, $G$ is determined up to affine conjugacy. For monic centered quadratic polynomials, each affine conjugacy class consists of a single polynomial. For more general maps, it is convenient to introduce an (external) marking to uniquely determine $G$ for a given $g$.
\begin{definition}[Access and external marking]Let $f: U \to U'$ be an AGPL map over the mapping scheme $T$ with fiberwise connected filled Julia set. A path to $K(f)$ is a continuous map $\gamma: [0, 1] \to U'$ such that $\gamma ((0, 1]) \subset U' \backslash K(f)$ and $\gamma(0) \in J(f)$. We say two paths $\gamma_0$ and $\gamma_1$ to $K(f)$ are homotopic if there exists a continuous map $\tilde \gamma : [0, 1] \times [0, 1] \to U$ such that \begin{enumerate} \item $t \mapsto \tilde \gamma (s, t)$ is a path to $K(f)$ for all $s \in [0, 1]$; \item $\tilde \gamma (0, t) = \gamma_0(t)$ and $\tilde\gamma (1, t) = \gamma_1(t)$ for all $t \in [0, 1]$; \item $\tilde \gamma (s, 0) = \gamma_0(0)$ for all $s \in [0, 1]$. \end{enumerate}
An access to $K(f)$ is a homotopy class of paths to $K(f)$.\par An external marking of $f$ is a collection $\Gamma=(\Gamma_{\mathbf{v}})_{\mathbf{v} \in |T|}$ where each $\Gamma_{\mathbf{v}}$ is an access to $K(f)$, contained in $\{\mathbf{v}\}\times \mathbb C$, such that $\Gamma$ is forward invariant in the following sense. For every $\mathbf{v} \in |T|$ and every representative $\gamma_{\mathbf{v}} \subset (\{\mathbf{v}\}\times\mathbb C) \cap U$ of $\Gamma_{\mathbf{v}}$, the connected component of $f(\gamma_{\mathbf{v}})\cap U$ which intersects $J(f)$ is a representative of $\Gamma_{\sigma(\mathbf{v})}$. \end{definition}
For a generalized polynomial $g \in \mathcal C(T)$, there is a {\em standard external marking of $g$}, given by the external rays $(\mathcal R(\mathbf{v},0))_{\mathbf{v} \in |T|}$ with angle $0$.
\begin{Theorem}[Straightening] Let $g$ be an AGPL map over $T$ with fiberwise connected filled Julia set and let $\Gamma$ be an external marking of $g$. There is a unique $f\in \mathcal{C}(T)$ such that there is a hybrid conjugacy between $g$ and $f$ which sends the external marking $\Gamma$ to the standard marking of $f$. \end{Theorem}
See \cite[Theorem A]{IK} for a proof.
\subsection{The Inou-Kiwi map}
Let $f_0, T$ and $r:|T|\to\mathbb{N}$ be as in \S\ref{subsec:GPL} and assume $f_0$ is primitive. It is well-known that $\partial \mathbf{v}$ is a Jordan curve for each $\mathbf{v}\in |T|$. Let $$\mathcal{C}(f_0)=\{f\in \text{Poly}_d: \lambda(f)\supset \lambda(f_0)\},$$ where $\lambda(f)$ and $\lambda(f_0)$ are the rational laminations of $f$ and $f_0$ respectively.
For each $f\in \mathcal{C}(f_0)$, Inou-Kiwi constructed an AGPL (in fact, GPL) map \begin{equation}\label{eqn:lambda0R}
F:\bigcup_{\mathbf{v}\in |T|} \{\mathbf{v}\}\times U_\mathbf{v}\to \bigcup_{\mathbf{v}\in |T|} \{\mathbf{v}\}\times U'_\mathbf{v} \end{equation} over $T$ such that $F(\mathbf{v}, z)=(\sigma(\mathbf{v}), f^{r(\mathbf{v})}(z))$ for each $z\in U_\mathbf{v}$ and such that the filled Julia set of $F$ is the union of the fibers $\{\mathbf{v}\}\times K(\mathbf{v},f)$, $\mathbf{v}\in |T|$, with $$K(\mathbf{v},f)=\bigcap_{\theta\sim_{\lambda(f_0)} \theta'} \overline{S(\theta, \theta'; \mathbf{v})}\cap K(f),$$ where $S(\theta,\theta';\mathbf{v})$ is the component of $\mathbb C\setminus (\overline{\mathcal{R}_f(\theta)\cup \mathcal{R}_f(\theta')})$ which contains the external rays $\mathcal{R}_f(t)$ for which $\mathcal{R}_{f_0}(t)$ lands on $\partial \mathbf{v}$. We shall call an AGPL map $F$ as in (\ref{eqn:lambda0R}) a {\em $\lambda(f_0)$-renormalization} of $f$. While there are many choices of $U_\mathbf{v}$ and $U'_\mathbf{v}$, the hybrid class of $\lambda(f_0)$-renormalizations of $f$ is uniquely determined.
In order to choose an external marking for $F$, Inou-Kiwi first fixed an {\em internal angle system} which is, by definition, a collection of homeomorphisms $\alpha=(\alpha_{\mathbf{v}}:\partial \mathbf{v} \to \mathbb R/\mathbb Z)_{\mathbf{v}\in |T|}$ such that: \[\delta(\mathbf{v})\alpha_{\mathbf{v}}=\alpha_{\sigma(\mathbf{v})}\circ f_0^{r(\mathbf{v})}\mod 1.\]
An internal angle system is uniquely determined by the points $z_\mathbf{v}=\alpha_{\mathbf{v}}^{-1}(0)$, $\mathbf{v}\in |T|$, which are (pre-)periodic points of $f_0$. Choose for each $\mathbf{v}\in |T|$ an external angle $\theta_\mathbf{v}$ so that $\mathcal{R}_{f_0}(\theta_\mathbf{v})$ lands at $z_\mathbf{v}$. They observed that the external rays $\mathcal{R}_f(\theta_\mathbf{v})$ define an external marking of $F$ and this external marking is independent of the choices of $\theta_\mathbf{v}$. Indeed, we can choose $(\theta_\mathbf{v})_{\mathbf{v}\in |T|}$ such that $\delta_{\mathbf{v}}\theta_\mathbf{v}=\theta_{\sigma(\mathbf{v})}\mod 1$ for each $\mathbf{v}\in |T|$, see Lemma~\ref{lem:externalangle}.
The IK straightening map $\chi:\mathcal{C}(f_0)\to \mathcal{C}(T)$ is defined as follows. For each $f\in \mathcal{C}(f_0)$, $\chi(f)$ is the generalized polynomial in $\mathcal{C}(T)$ for which there is a hybrid conjugacy from a $\lambda(f_0)$-renormalization of $f$ to $\chi(f)$ sending the external marking determined by the internal angle system to the standard marking for $\chi(f)$.
\begin{Lemma}\label{lem:externalangle} Let $f_0$ be as above and let $\alpha$ be an internal angle system. Then there exist $\theta_\mathbf{v}\in \mathbb{Q}/\mathbb{Z}$, $\mathbf{v}\in |T|$, such that $\mathcal{R}_{f_0}(\theta_\mathbf{v})$ lands at $\alpha_\mathbf{v}^{-1}(0)$ and such that $f_0^{r(\mathbf{v})}(\mathcal{R}_{f_0}(\theta_\mathbf{v}))=\mathcal{R}_{f_0}(\theta_{\sigma(\mathbf{v})})$. \end{Lemma} \begin{proof} {\bf Claim.} Let $W$ be a fixed Fatou component of $f\in \mathcal{C}_d$ and let $p$ be a repelling fixed point of $f$ in $\partial W$. If the external ray $\mathcal{R}_f(t)$ lands at $p$, then $dt=t\mod 1$.
Let $\Theta\subset \mathbb{R}/\mathbb{Z}$ denote the set of external angles $\theta$ for which $\mathcal{R}_f(\theta)$ lands at $p$. It is well known that $m_d: t\mapsto dt\mod 1$ maps $\Theta$ onto itself and preserves the cyclic order. So all the angles $\theta\in \Theta$ have the same period under $m_d$. We may assume that $\Theta$ contains at least two points, for otherwise the unique angle in $\Theta$ is fixed by $m_d$ and the claim follows. Let $\Gamma$ denote the union of these external rays and $\{p\}$. Let $V$ denote the component of $\mathbb C\setminus \Gamma$ which contains $W$ and let $V_1$ denote the component of $\mathbb C\setminus f^{-1}(\Gamma)$ which contains $W$. Then $\partial V\subset \partial V_1$ and $f:V_1\to V$ is a proper map. It follows that $f$ must fix both of the external rays in $\partial V$. Since all the angles in $\Theta$ have the same period under $m_d$, every angle in $\Theta$ is fixed. This proves the claim.
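For instance, let $f(z)=z^2-1$ be the basilica polynomial and consider $f^2$, a monic centered polynomial of degree $4$. The bounded Fatou component $W\ni 0$ is fixed by $f^2$, and the external rays of angles $1/3$ and $2/3$ land at the repelling fixed point $\alpha\in \partial W$. In accordance with the claim, $4\cdot \tfrac{1}{3}=\tfrac{1}{3}$ and $4\cdot \tfrac{2}{3}=\tfrac{2}{3} \mod 1$.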
Now for each $\mathbf{v}\in |T|$, we have
$f_0^{r(\mathbf{v})}(\alpha_{\mathbf{v}}^{-1}(0))=\alpha_{\sigma(\mathbf{v})}^{-1}(0)$. So each $\alpha_{\mathbf{v}}^{-1}(0)$ eventually maps into a repelling periodic orbit. The claim enables us to choose $\theta_\mathbf{v}\in \mathbb{Q}/\mathbb{Z}$ such that $\mathcal{R}_{f_0}(\theta_\mathbf{v})$ lands at $\alpha_\mathbf{v}^{-1}(0)$ and $\delta_\mathbf{v}\theta_\mathbf{v}=\theta_{\sigma(\mathbf{v})}\mod 1$ for each $\mathbf{v}\in |T|$. \end{proof}
\section{Yoccoz puzzle}\label{sec:puzzle} Let $f_0$ be a monic centered polynomial of degree $d\ge 2$ which is postcritically finite, hyperbolic and primitive. In this section, we shall construct a specific Yoccoz puzzle for $f_0$ (Theorem~\ref{thm:puzzle}). Recall that $\text{Poly}_d$ denotes the collection of monic centered polynomials of degree $d$.
We say that a finite set $Z$ is {\em $f_0$-admissible} if the following hold: \begin{itemize} \item $f_0(Z)\subset Z$; \item each periodic point in $Z$ is repelling; \item for each $z\in Z$, there exist at least two external rays landing at $z$. \end{itemize} Let $\Gamma_0=\Gamma^Z_0$ denote the union of all the external rays landing on $Z$, the set $Z$ itself and the equipotential $\{G_{f_0}(z)=1\}$. For each $n\ge 1$, define $\Gamma^Z_n=f_0^{-n}(\Gamma^Z_0)$. A bounded component of $\mathbb C\setminus \Gamma^Z_n$ is called a {\em $Z$-puzzle piece} of depth $n$.
The aim of this section is to prove the following:
\begin{Theorem}\label{thm:puzzle} Let $f_0$ be a monic centered polynomial of degree $d\ge 2$ which is postcritically finite, hyperbolic and primitive. Assume that $f_0(z)\not=z^d$. Then there exists an $f_0$-admissible finite set $Z$ such that for each (finite) critical point $c$ of $f_0$, if $Y_n(c)$ denotes the $Z$-puzzle piece of depth $n$ which contains $c$ and $U(c)$ denotes the Fatou component containing $c$, then $\bigcap_{n=0}^\infty Y_n(c)=\overline{U(c)}$. \end{Theorem}
We say a point $a\in J(f_0)$ is {\em bi-accessible} if it is the common landing point of two distinct external rays. A point in $J(f_0)$ is called {\em buried} if it is not in the boundary of any bounded Fatou component. We shall need the following lemmas to find buried bi-accessible periodic points.
\begin{Lemma}\label{lem:puuzle-1} Let $f_0\in \text{Poly}_d$ be a postcritically finite hyperbolic polynomial with $f_0(z)\not=z^d$, where $d\ge 2$. Then any bi-accessible point in the boundary of a bounded Fatou component is eventually periodic. \end{Lemma} \begin{proof} Arguing by contradiction, assume that there exists a bi-accessible point $a_0$ which is in the boundary of a bounded Fatou component $U$ and such that $a_0$ is not eventually periodic. By Sullivan's no wandering domains theorem, $U$ is eventually periodic. So it suffices to consider the case that $U$ is fixed by $f_0$.
Let $t, t'\in \mathbb{R}/\mathbb{Z}$, $t\not=t'$, be such that $\mathcal{R}_{f_0}(t)$ and $\mathcal{R}_{f_0}(t')$ land at $a_0$. For each $n\ge 1$, let $a_n:=f_0^n(a_0)$ and let $t_n=d^n t$, $t'_n=d^n t'$. Then $a_n$ are pairwise distinct, $t_n, t'_n\not\in \mathbb{Q}/\mathbb{Z}$ and $t_n\not=t'_n$ for all $n\ge 0$. Let $\Gamma_n=\mathcal{R}_{f_0}(t_n)\cup \mathcal{R}_{f_0}(t'_n)\cup \{a_n\}$ and let $W_n$ and $W'_n$ be the components of $\mathbb C\setminus\Gamma_n$ so that $W'_n\supset U$ and $W_n\cap U=\emptyset$. Note that $\overline{W_n}\cap \overline{U}=\{a_n\}$ for each $n\ge 0$. Since $\partial W_n$, $n\ge 0$, are pairwise disjoint, $W_n$, $n\ge 0$, are pairwise disjoint. So there exists $n_0$ such that for all $n\ge n_0$, $W_n$ contains no critical point.
{\em Claim.} If $W_n$ contains no critical point, then $f_0(W_n)=W_{n+1}$.
To see this, let $\widehat{\Gamma}_n=f_0^{-1}(\Gamma_{n+1})$ which is a finite union of simple curves stretching to infinity on both sides. Let $V_{n}$ (resp. $V'_n$) denote the component of $\mathbb C\setminus \widehat{\Gamma}_n$ which contains $a_n$ in its boundary and such that $V_n\subset W_n$ and $\partial W_n\subset \partial V_n $ (resp. $V'_n\subset W'_n$ and $\partial W'_n\subset \partial V'_n $). Then $f_0(V_n)$ and $f_0(V'_n)$ are distinct components of $\mathbb C\setminus \Gamma_{n+1}$. Since $V'_n\supset U$, we have $f_0(V'_n)=W'_{n+1}$ and hence $f_0(V_n)=W_{n+1}$. Since $W_n$ contains no critical point, $f_0:V_n\to W_{n+1}$ is a conformal map, which implies that $\partial V_{n}$ consists of one component of $\widehat{\Gamma}_n$. Thus $\partial V_n=\partial W_n$, hence $W_n=V_n$, $f_0(W_n)=W_{n+1}$.
Thus, for all $n\ge n_0$, $f_0(W_n)=W_{n+1}$. It follows that $f^n_0(W_{n_0})=W_{n+n_0}$ for all $n\ge 0$. So $W_{n_0}$ is a wandering domain, which contradicts Sullivan's no wandering domains theorem. A more elementary argument is as follows: We can choose $\theta\in \mathbb Q/\mathbb Z$ such that $\mathcal R_{f_0}(\theta) \subset W_{n_0}$. As $\mathcal R_{f_0}(\theta)$ is eventually periodic, $W_{n_0}$ cannot be a wandering domain. \end{proof} \begin{Lemma}\label{lem:puzzle0} Let $f_0\in \text{Poly}_d$ be a postcritically finite hyperbolic polynomial with $f_0(z)\not=z^d$, where $d\ge 2$. Then $f_0$ has a bi-accessible repelling periodic point. \end{Lemma} \begin{proof} Without loss of generality, we assume that $f_0$ has a fixed bounded Fatou component $U$, and let $$\Lambda=\{t\in \mathbb{R}/\mathbb{Z}: \mathcal{R}_{f_0}(t) \textrm{ lands on } \partial U\}.$$ Since the Julia set of $f_0$ is locally connected, by the Carath\'eodory theorem, there is a continuous surjective map $\pi:\mathbb{R}/\mathbb{Z}\to J(f_0)$ such that the external ray $\mathcal{R}_{f_0}(t)$ lands at $\pi(t)$ and hence $\pi\circ m_d=f_0\circ \pi$, where $m_d: \mathbb{R}/\mathbb{Z}\to \mathbb{R}/\mathbb{Z}$, $t\mapsto d t\mod 1$. In particular, $\Lambda=\pi^{-1}(\partial U)$ is a non-empty compact subset of $\mathbb{R}/\mathbb{Z}$ which is forward invariant under the map $m_d$. Since $f_0(z)\not=z^d$, $J(f_0)$ is not a Jordan curve, so $\Lambda$ is a proper subset of $\mathbb{R}/\mathbb{Z}$. Since $\partial U$ is a Jordan curve while a proper compact subset of $\mathbb{R}/\mathbb{Z}$ is not homeomorphic to a circle, the continuous surjection $\pi: \Lambda\to \partial U$ (from a compact set onto a Hausdorff space) cannot be injective. In other words, there exists a bi-accessible point $w\in \partial U$. By Lemma~\ref{lem:puuzle-1}, $w$ is eventually periodic. Any periodic point in the orbit of $w$ is a bi-accessible repelling periodic point. 
\end{proof} \begin{Lemma}\label{lem:puzzle} If $f_0\in \text{Poly}_d$ is postcritically finite, hyperbolic and primitive and $f_0(z)\not=z^d$, then $f_0$ has a buried bi-accessible repelling periodic point. \end{Lemma} \begin{proof} In order to show that $f_0$ has a buried bi-accessible point, it is enough to show that for some $s\ge 1$, $f_0^s$ has a repelling fixed point which is the landing point of an external ray not fixed by $f_0^s$. (Such a point is bi-accessible, since the $f_0^s$-orbit of a non-fixed ray landing at it consists of at least two rays landing at that point.) Indeed, if a repelling fixed point $p$ of $f_0^s$ is in the boundary of a bounded Fatou component $V$, then $p\in \partial V\cap \partial f_0^s(V)$, so the assumption that $f_0$ is primitive gives $f_0^s(V)=V$; hence by the Claim in Lemma~\ref{lem:externalangle}, any external ray landing at $p$ is fixed by $f_0^s$. Therefore, it is enough to prove the following Statement $N$ for each $N\ge 0$.
{\bf Statement $N$.} Suppose that $f_0\in \text{Poly}_d$ is a postcritically finite, hyperbolic and primitive map and $f_0(z)\not=z^d$, where $d\ge 2$. If $f_0$ has at most $N$ attracting periodic points, then there exists $s\ge 1$ such that $f_0^s$ has a repelling fixed point $p$ which is the landing point of an external ray which is not fixed by $f_0^s$.
We proceed by induction on $N$. Statement $0$ is a null statement. Let $N\ge 1$ and assume that Statement $N'$ holds for all $0\le N'<N$. Let $f_0\in \text{Poly}_{d}$ be as in Statement $N$. By Lemma~\ref{lem:puzzle0}, $f_0$ has a bi-accessible repelling periodic point $p_0$. Let $\mathcal{R}_{f_0}(t_i)$, $1\le i\le q$, be the external rays landing at $p_0$, where $q\ge 2$. Replacing $f_0$ by an iterate of $f_0$, we assume that \begin{itemize} \item all the external rays $\mathcal{R}_{f_0}(t_i)$ are fixed by $f_0$; \item all attracting periodic points of $f_0$ are fixed by $f_0$. \end{itemize} In particular, $f_0(p_0)=p_0$. (Note that $f_0^k$, $k\ge 1$, satisfies the assumption of Statement $N$.) Let us construct a Yoccoz puzzle using $Z=\{p_0\}$. Let $Y_0^j$, $1\le j\le q$, denote the puzzle pieces of depth zero and for each $n\ge 1$, let $Y_n^j$ denote the puzzle piece of depth $n$ which satisfies $Y_n^j\subset Y_0^j$ and $p_0\in \overline{Y_n^j}$. Since all the external rays $\mathcal{R}_{f_0}(t_j)$ are fixed, $f_0: Y_n^j\to Y_{n-1}^j$ is a proper map. Let $d_{n,j}$ denote the degree of $f_0:Y_n^j\to Y_{n-1}^j$, let $D_{n,j}=d_{1,j}d_{2,j}\cdots d_{n,j}$ and let $d_j=\lim_{n\to\infty} d_{n,j}$ (the limit exists since $d_{n,j}$ is non-increasing in $n$). Let $\Delta_{n,j}=\{t\in\mathbb{R}/\mathbb{Z}: \mathcal{R}_{f_0}(t)\cap \overline{Y_n^j}\not=\emptyset\}$. Note that $\Delta_{0,j}$ is a closed interval and $\Delta_{n,j}$ is a disjoint union of $D_{n,j}$ closed intervals each of which is mapped onto $\Delta_{0,j}$ under $m_d^n$ diffeomorphically.
Let us show $d_{1,j}\ge 2$ for each $j$. Indeed, otherwise $f_0: Y_1^j\to Y_0^j$ is a conformal map, which implies that $m_d:\Delta_{1,j}\to \Delta_{0,j}$ is a homeomorphism, which is absurd since $\Delta_{1,j}$ intersects both endpoints of $\Delta_{0,j}$.
{\em Case 1.} Suppose that $d_j=1$ holds for some $j$. Let $s_0$ be such that $d_{s,j}=1$ for all $s> s_0$. Then $f_0^{s-s_0}|Y_s^j$ is univalent for all $s> s_0$. Since $f_0$ has only finitely many attracting periodic points and every attracting cycle of $f_0$ contains a critical point, there exists $s_1>s_0$ such that $\overline{Y_{s_1}^j}$ does not contain any attracting periodic point of $f_0$. Consider the map $f_0^{s_1}: \overline{Y_{s_1}^j}\to \overline{Y_0^j}$ which has degree $D:=D_{s_1,j}\ge d_{1,j}\ge 2$. By the thickening technique (\cite{Mil4}), it extends to a polynomial-like map $f_0^{s_1}: U_j\to U_j'$ of degree $D$ in the sense of Douady and Hubbard, so it has $D$
fixed points which are contained in $\overline{Y_{s_1}^j}$. By our choice of $s_1$, none of these fixed points is attracting. Since $f_0$ is hyperbolic, it follows that all the $D$ fixed points of $f_0^{s_1}: U_j\to U_j'$ are repelling. The number of external rays of $f_0$ which intersect $\overline{Y_{s_1}^j}$ and are fixed by $f_0^{s_1}$ is exactly $D$, with two of them landing at the same point $p_0$. It follows that one of the repelling fixed points $p$ of $f_0^{s_1}|\overline{Y_{s_1}^j}$ is not the landing point of any external ray fixed by $f_0^{s_1}$.
{\em Case 2.} Assume that $d_j>1$ holds for all $j$. Take $n_j$ sufficiently large so that $d_{n,j}=d_j$ for all $n\ge n_j$. Then the critical points of $f_0|Y_{n_j}^j$ do not escape from $Y_{n_j}^j$ under iteration, hence $f_0: \overline{Y_{n_j}^j}\to \overline{Y_{n_j-1}^j}$ is a proper map of degree $d_j$ with non-escaping critical points. Again by the thickening technique (\cite{Mil4}), it extends to a polynomial-like map $f_0: U_j\to U_j'$ of degree $d_j$ in the sense of Douady and Hubbard. Thus it is hybrid equivalent to a polynomial $g_j\in \text{Poly}_{d_j}$ with connected Julia set. The polynomial $g_j$ is hyperbolic, postcritically finite and primitive. Since for any $j'\not=j$, $f_0$ has an attracting fixed point in $Y_{n_{j'}}^{j'}$, the number of attracting periodic points of $f_0: U_j\to U'_j$ is less than $N$. Thus the number of attracting periodic points of $g_j$ is less than $N$.
{\em Subcase 2.1} Assume $g_j(z)\not=z^{d_j}$ for some $j$. Then by the induction hypothesis, $g_j$ has a periodic point $\tilde{p}$ of period $s$ which is not the landing point of any external ray fixed by $g_j^s$. Taking the corresponding periodic point $p$ of $f_0: U_j\to U_j'$, we are done.
{\em Subcase 2.2} Assume $g_j(z)=z^{d_j}$ for all $j$. This implies that the filled Julia set of $f_0: U_j\to U_j'$ is the closure of a Jordan disk $V_j$ which contains $p_0$. Each $V_j$ is a bounded Fatou component of $f_0$. These bounded Fatou components $V_j$, $1\le j\le q$, have $p_0$ as a common point in their closures, contradicting the assumption that $f_0$ is primitive. \end{proof}
\newcommand{\text{Crit}}{\text{Crit}} \begin{proof}[Proof of Theorem~\ref{thm:puzzle}]
Let $\text{Crit}_{per}$ denote the set of all periodic critical points of $f_0$ and for each $c\in \text{Crit}_{per}$, let $q(c)$ denote its period. For each admissible set $Z$ and $c\in\text{Crit}_{per}$, let $s_Z(c)$ denote the minimal positive integer such that $f_0^{s_Z(c)}(c)\in \bigcap_{n=0}^\infty Y_n^Z(c)$ and let $d_Z(c)=\lim_{n\to\infty} \text{deg} (f_0^{s_Z(c)}|Y_n^Z(c))$. Of course $q(c)\ge s_Z(c)$. Note that if $Z\subset Z'$ then $s_Z(c)\le s_{Z'}(c)$ for all $c\in\text{Crit}_{per}$, and if $s_Z(c)=s_{Z'}(c)$ then $d_{Z}(c)\ge d_{Z'}(c)$. Given admissible sets $Z\subset Z'$ we say that $Z'$ is a {\em (proper) refinement} of $Z$ if one of the following holds: \begin{itemize} \item there exists $c_0\in \text{Crit}_{per}$ such that $s_{Z'}(c_0)>s_{Z}(c_0)$; \item $s_{Z'}(c)=s_Z(c)$ for all $c\in \text{Crit}_{per}$ and there exists $c_0\in \text{Crit}_{per}$ such that $d_{Z'}(c_0)< d_{Z}(c_0)$. \end{itemize} Clearly, there does not exist an infinite sequence of admissible sets $\{Z_n\}_{n=1}^\infty$ such that for all $n$, $Z_{n+1}$ is a refinement of $Z_n$: indeed, the quantities $s_{Z_n}(c)$ are non-decreasing and bounded by $q(c)$, and once they stabilize the positive integers $d_{Z_n}(c)$ are non-increasing.
Let us say an $f_0$-admissible set $Z$ is {\em buried} if $Z$ is disjoint from the boundary of any bounded Fatou component. A buried $f_0$-admissible set exists by Lemma~\ref{lem:puzzle}. It suffices to prove that if $Z$ is a buried $f_0$-admissible set for which the property required by the theorem does not hold, then there exists a buried $f_0$-admissible set $Z'$ which is a refinement of $Z$.
To this end, assume that there exists $c_0\in \text{Crit}_{per}$ such that $\bigcap_{n=0}^\infty Y_n^Z(c_0)\supsetneq \overline{U(c_0)}$. Write $s=s_Z(c_0)$. When $N$ is large enough, the critical points of the proper map $g=f_0^s|Y_{N+s}(c_0)$ never escape from its domain. Using the thickening technique (\cite{Mil4}), $g$ extends to a Douady-Hubbard polynomial-like map with connected Julia set. Thus it is hybrid equivalent to a monic centered polynomial $P$ which is necessarily hyperbolic and postcritically finite. Let $D\ge 2$ denote the degree of $P$ and let $h$ denote a hybrid conjugacy. As the filled Julia set of $P$ is not a topological disk, $P(z)\not=z^D$. So by Lemma~\ref{lem:puzzle}, $P$ has a repelling periodic point $\hat{z}_1$ which is bi-accessible and buried. By~\cite[Lemma 3.6]{I1} (see also ~\cite[Theorem 7.11]{McM1}), $z_1=h^{-1}(\hat{z}_1)$ is a buried bi-accessible repelling periodic point of $f_0$. Let $Z'$ denote the union of $Z$ and the $f_0$-orbit of $z_1$. As $\bigcap_{n=0}^\infty Y_n^{Z'}(c_0)$ is a proper subset of $\bigcap_{n=0}^\infty Y_n^Z(c_0)$, either $s_{Z'}(c_0)>s_Z(c_0)$ or $d_{Z'}(c_0)< d_Z(c_0)$. This completes the proof. \end{proof}
We shall need the following result later. \begin{Proposition}\label{prop:noncrpiece} Let $f_0$ and $Z$ be as in Theorem~\ref{thm:puzzle}. Then $$\sup\{\text{diam}(Y): Y \text{ is a puzzle piece of depth }n, \overline{Y}\cap Z\not=\emptyset\}\to 0\text{ as } n\to\infty.$$ \end{Proposition}
\begin{proof} For each $n\ge 0$ and $z\in Z$, let $Y^*_n(z)$ denote the union of the closures of the puzzle pieces of depth $n$ which contain $z$ in their boundaries. Since $Z$ is finite, there exists $N$ such that $Y^*_N(z)\cap P(f_0)=\emptyset$ for all $z\in Z$. For each $n\ge 0$ and $z\in Z$, $f_0^n: Y_{n+N}^*(z)\to Y_N^*(f_0^n(z))$ is a conformal map which extends to a definite neighborhood of $f_0^n(z)$. It follows that $f_0^n|Y_{n+N}^*(z)$ has uniformly bounded distortion. Since $z\in J(f_0)$, this implies that $\text{diam}(Y_n^*(z))\to 0$ as $n\to\infty$. \end{proof}
We shall construct $\lambda(f_0)$-renormalizations using the puzzle given by Theorem~\ref{thm:puzzle}. The following is a criterion which will be used in the proof of the surjectivity part of the main theorem.
\begin{Proposition}\label{prop:com}
Let $N_0$ be a positive integer such that the puzzle pieces $f_0^j(Y_{N_0}(\mathbf{v}))$, $\mathbf{v}\in |T|$, $1\le j\le r(\mathbf{v})$, are pairwise disjoint and such that, for each $\mathbf{v}\in |T|$, $f_0^{r(\mathbf{v})}: Y_{N_0}(\mathbf{v})\to Y_{N_0-r(\mathbf{v})}(\sigma(\mathbf{v}))$ has degree $\delta(\mathbf{v})$. Assume that $f\in \text{Poly}_d$ satisfies the following: \begin{enumerate} \item there is a homeomorphism $\psi: \mathbb C\to \mathbb C$ with the following properties: \begin{itemize} \item $\phi_f\circ \psi=\phi_{f_0}$ holds near $\infty$, where $\phi_f$ and $\phi_{f_0}$ are the B\"ottcher maps of $f$ and $f_0$ respectively; \item $\psi \circ f_0(z)=f\circ \psi(z)$ for all $z\in \mathbb C\setminus \bigcup_{\mathbf{v}} Y_{N_0}(\mathbf{v})$. \end{itemize} \item The map $$F:\bigcup_{\mathbf{v}}\{\mathbf{v}\}\times \psi(Y_{N_0}(\mathbf{v}))\to \bigcup_{\mathbf{v}} \{\mathbf{v}\}\times \psi(Y_{N_0-r(\mathbf{v})}(\sigma(\mathbf{v}))),$$ defined as $F(\mathbf{v}, z)=(\sigma(\mathbf{v}),f^{r(\mathbf{v})}(z))$, is an AGPL map with fiberwise connected filled Julia set. \end{enumerate} Then $f\in \mathcal{C}(f_0)$ and $F$ is a $\lambda(f_0)$-renormalization of $f$.
\end{Proposition} \begin{proof} First of all, note that the assumptions imply that the filled Julia set $K(f)$ of $f$ is connected and that $\widehat{Z}=\psi(Z)$ is an admissible set for $f$. It suffices to show that $f\in \mathcal{C}(f_0)$. Once this is proved, the other statement follows from \cite[Proposition 3.13]{IK}. Let $L(\mathbf{v}, f)$ denote the filled Julia set of $F$ in the fiber $\{\mathbf{v}\}\times \mathbb C$.
{\bf Step 1.} We show by induction that for each $k\ge N_0$, there is a homeomorphism $\psi_k:\mathbb C\to\mathbb C$ which coincides with $\phi_f^{-1}\circ \phi_{f_0}$ on $\Gamma_k\setminus J(f_0)$.
For $k=N_0$, we choose $\psi_{N_0}=\psi$. Assume now that $\psi_k$ has been defined for some $k\ge N_0$ and let us construct $\psi_{k+1}$. For each $Y\subset \mathbb C$, denote $Y'=\psi_k(Y)$. It suffices to construct, for each puzzle piece $Y$ of depth $k$ (i.e., each $Y\in \mathcal{Y}_k$), a homeomorphism
$\psi_{k+1}: \overline{Y}\to \overline{Y'}$ so that $f\circ \psi_{k+1}(z)=\psi_k\circ f_0(z)$ for $z\in \overline{Y}\cap \Gamma_{k+1}$. Indeed, if $Y$ does not contain a critical point of $f_0$, then $f_0:Y\to f_0(Y)$ is a conformal map, and so is $f: Y'\to f(Y')$. In this case, we define $\psi_{k+1}|\overline{Y}=(f|\overline{Y'})^{-1}\circ (\psi_k|f_0(\overline{Y})) \circ (f_0|\overline{Y})$. Assume that $Y$ contains a critical point of $f_0$, so that $Y=Y_k(\mathbf{v})$ for some $\mathbf{v}\in |T|$ and hence $Y'\supset L(\mathbf{v}, f)$. Let $B=Y_{k-r(\mathbf{v})+1}(\sigma(\mathbf{v}))$, $A=Y_{k+1}(\mathbf{v})$ and $X=Y_{k-r(\mathbf{v})}(\sigma(\mathbf{v}))$. Then $B'\supset L(\sigma (\mathbf{v}), f)$, $A'\supset L(\mathbf{v}, f)$ and $X'\supset L(\sigma(\mathbf{v}), f)$. Since $f_0^{r(\mathbf{v})}: \overline{Y}\setminus A\to \overline{X}\setminus B$ and $f^{r(\mathbf{v})}: \overline{Y'}\setminus A'\to \overline{X'}\setminus B'$ are both $\delta(\mathbf{v})$-to-$1$ coverings, there is a homeomorphism $\psi_{k+1}: \overline{Y}\setminus A\to \overline{Y'}\setminus A'$ such that $f^{r(\mathbf{v})}\circ \psi_{k+1}=\psi_k\circ f_0^{r(\mathbf{v})}$ on $\overline{Y}\setminus A$ and $\psi_{k+1}=\psi_k$ on $\partial Y$. Extending the map $\psi_{k+1}$ in an arbitrary way to a homeomorphism from $\overline{Y}$ to $\overline{Y'}$, we obtain the desired map $\psi_{k+1}:\overline{Y}\to \overline{Y'}$.
{\bf Step 2.} For each $k\ge N_0$, there is a qc map $\Psi_k$ such that $\Psi_k=\phi_f^{-1}\circ \phi_{f_0}$ near infinity and such that $f\circ \Psi_k(z)=\Psi_k\circ f_0(z)$ for all $z\not\in \bigcup_\mathbf{v} Y_k(\mathbf{v})$. This is well-known. See for example~\cite[Section 5]{KSS}. This implies that if $\mathcal{R}_{f_0}(\theta_1)$ and $\mathcal{R}_{f_0}(\theta_2)$ land at a common point which is not in
$\bigcup_{n=0}^\infty f_0^{-n}\left(\bigcup_{\mathbf{v}\in |T|}\partial \mathbf{v}\right)$ ($\theta_1,\theta_2\in \mathbb{Q}/\mathbb{Z}$), then $\mathcal{R}_f(\theta_1)$ and $\mathcal{R}_{f}(\theta_2)$ have a common landing point as well. Indeed, there exists $k$ such that the whole $f_0$-orbit of the rays $\mathcal{R}_{f_0}(\theta_1)$ and $\mathcal{R}_{f_0}(\theta_2)$ lies outside $\bigcup_\mathbf{v} Y_k(\mathbf{v})$, so $\mathcal{R}_f(\theta_i)=\Psi_k(\mathcal{R}_{f_0}(\theta_i))$, $i=1,2$.
{\bf Step 3.} It remains to show that if $\mathcal{R}_{f_0}(\theta_1)$ and $\mathcal{R}_{f_0}(\theta_2)$, $\theta_1, \theta_2\in\mathbb{Q}/\mathbb{Z}$, land at a common point in $\bigcup_{n=0}^\infty f_0^{-n}\left(\bigcup_{\mathbf{v}\in |T|}\partial \mathbf{v}\right)$, then $\mathcal{R}_f(\theta_1)$ and $\mathcal{R}_{f}(\theta_2)$ have a common landing point.
Let us first assume that the common landing point is in $\partial \mathbf{v}_0$ for some $\mathbf{v}_0\in |T|$. Let $\Psi=\Psi_{N_0}$ be given by Step 2. We define a new qc map $H: \mathbb{C}\setminus \bigcup_{\mathbf{v}}\overline{\mathbf{v}} \to \mathbb{C}\setminus \bigcup_{\mathbf{v}} L(\mathbf{v}, f)$ such that $H=\Psi$ outside $\bigcup_{\mathbf{v}} Y_{N_0}(\mathbf{v})$ and such that $H\circ f_0^{r(\mathbf{v})}=f^{r(\mathbf{v})}\circ H$ inside $\bigcup_{\mathbf{v}} Y_{N_0}(\mathbf{v})$. Note that $H$ maps $\mathcal{R}_{f_0}(\theta_i)$ onto $\mathcal{R}_f(\theta_i)$, $i=1,2.$ For each $\mathbf{v}\not=\mathbf{v}_0$, choose a quasidisk $\Omega_{\mathbf{v}}\supset \overline{\mathbf{v}}$ so that these quasidisks are pairwise disjoint and disjoint from $\overline{\mathbf{v}_0}\cup \mathcal{R}_{f_0}(\theta_1)\cup\mathcal{R}_{f_0}(\theta_2)$. Let $H_0=H$ on $\mathbb{C} \setminus ( \overline{\mathbf{v}_0}\cup\bigcup_{\mathbf{v}\ne \mathbf{v}_0}\Omega_{\mathbf{v}})$, and then extend $H_0$ quasiconformally to $\mathbb C\setminus \overline{\mathbf{v}_0}$ by the Beurling-Ahlfors extension. We thus obtain a qc map $H_0: \mathbb{C}\setminus \overline{\mathbf{v}_0}\to \mathbb{C}\setminus L(\mathbf{v}_0,f)$ which again maps $\mathcal{R}_{f_0}(\theta_i)$ onto $\mathcal{R}_f(\theta_i)$, $i=1,2$. Let $\vartheta:\mathbb{C}\setminus \overline{\mathbb{D}}\to \mathbb{C}\setminus L(\mathbf{v}_0,f)$ denote a Riemann mapping. Since $\partial \mathbf{v}_0$ is a Jordan curve, $\vartheta^{-1}\circ H_0$ extends continuously to $\mathbb{C}\setminus \mathbf{v}_0$. Since the rays $\mathcal{R}_f(\theta_i)$, $i=1,2$, land and the curves $\vartheta^{-1}(\mathcal{R}_{f} (\theta_1))$ and $\vartheta^{-1}(\mathcal{R}_f(\theta_2))$ have a common landing point, we conclude by Lindel\"of's theorem that $\mathcal{R}_f(\theta_i)$, $i=1,2$, land at the same point.
For the general case, let $n\ge 1$ be minimal such that the common landing point of $\mathcal{R}_{f_0}(d^n \theta_i)$ ($i=1,2$) lies in $\bigcup_{\mathbf{v}} \partial \mathbf{v}$. As proved above, the external rays $\mathcal{R}_f(d^n \theta_i)$, $i=1,2$, have a common landing point $z$. Let $k$ be a large integer such that the external rays $\mathcal{R}_{f_0}(d^j \theta_i)$, $0\le j<n$, lie outside $\bigcup_\mathbf{v} Y_k(\mathbf{v})$, let $Y$ denote the $f_0$-puzzle piece of depth $k+n$ which contains the common landing point of $\mathcal{R}_{f_0}(\theta_1)$ and $\mathcal{R}_{f_0}(\theta_2)$ and let $Y'=\psi_{k+n}(Y)$. Then $f^n: Y'\to \psi_k(Y_k(\mathbf{v}))$ is a conformal map for some $\mathbf{v}\in |T|$ and the rays $\mathcal{R}_f(\theta_1)$ and $\mathcal{R}_f(\theta_2)$ enter $Y'$. Thus both of them have to land at the unique point in $\overline{Y'}$ which is mapped to $z$ by $f^n$. \end{proof}
\section{Kahn's quasiconformal distortion bounds}\label{sec:Kahn} In this section, we will modify the argument in~\cite{Kahn}\footnote{According to Kahn, Yoccoz may have a similar result.} to obtain a $K$-qc extension principle. The main result is Theorem~\ref{thm:Kqcpuzzle}, which will be used later to show the convergence of the Thurston Algorithm in the proof of the Main Theorem.
Throughout we fix a monic centered, hyperbolic, postcritically finite and primitive polynomial $f_0$ of degree $d$ such that $f_0(z)\not=z^d$. Let $Z$ be an admissible set given by Theorem~\ref{thm:puzzle} and let $Y_n(z)=Y_n^Z(z)$. Let $\text{Crit}(f_0)=\{c\in \mathbb C: f_0'(c)=0\}$ and let $L_n$ denote the domain of the first landing map to $\bigcup_{c\in \text{Crit}(f_0)} Y_n(c)$: \begin{equation}\label{eqn:dfnLn} L_n=\left\{z\in \mathbb C: \exists k\ge 0 \text{ such that } f_0^k(z)\in \bigcup_{c\in \text{Crit}(f_0)} Y_n(c)\right\}. \end{equation}
\begin{Theorem}\label{thm:Kqcpuzzle} There exists $N>0$ such that for any puzzle piece $Y$ of depth $m\ge 0$, there is a constant $C=C(Y)>1$ with the following property: if $Q:Y \to Q(Y)$ is a qc map which is conformal a.e. in $Y\setminus L_{m+N}$,
then there exists a $C$-qc map $\widetilde Q :Y\to Q(Y)$ such that $\widetilde Q=Q$ on $\partial Y$. \end{Theorem}
\subsection{Quasiconformal Distortion Bounds and a toy model} The difficulty in proving the theorem is that the landing domains $L_{m+N}$ may come arbitrarily close to the boundary of $Y$. To deal with the situation, we shall need a toy model developed by Kahn (\cite{Kahn}).
Let us first recall some terminology from \cite{Kahn}. Let $U \subset \mathbb C$ be a Jordan domain and $A$ be a measurable subset of $U$. We say that $(A,U)$ has {\em bounded qc distortion} if there exists a constant $K\ge 1$ with the following property: if $Q:U\to Q(U)$ is a quasiconformal map and $\bar\partial Q=0$ a.e. outside $A$, then there is a $K$-qc map $\tilde Q:U \to Q(U)$ such that $\tilde Q=Q$ on $\partial U$. Let $\mathcal{QD}(A,U)$ denote the smallest such $K$. Using this terminology, we can restate Theorem~\ref{thm:Kqcpuzzle} as follows:
\begin{theorem3'} There exists $N>0$ such that for every puzzle piece $Y$ of depth $m\ge 0$, \[\mathcal{QD}(L_{m+N}\cap Y, Y)<\infty.\] \end{theorem3'}
We shall need the following easy facts. \begin{Lemma}\label{compact}\cite[Fact~1.3.6]{Kahn} If $A \subset U$ is compact, then $\mathcal{QD}(A,U)<\infty$. \end{Lemma}
\begin{Lemma}\label{qc1}\cite[Fact~1.3.4]{Kahn} Let $U$ and $V$ be Jordan domains in $\mathbb C$ and $A$ be a measurable subset of $U$. If there exists an $L$-qc map $g:U\to V$ and $\mathcal{QD}(A,U)<\infty$, then \[\mathcal{QD}(g(A),V)\le L^2\mathcal{QD}(A,U).\] \end{Lemma}
\begin{Lemma}\label{qc2} The following statements are equivalent: \begin{enumerate} \item [(i)] $\mathcal{QD}(A,U)=C<\infty$; \item [(ii)] For any qc map $Q:U \to Q(U)$, if $\mathrm{Dil}(Q)\le K$ for some $K\ge 1$ a.e. outside $A$, then there is a $KC$-qc map $\tilde Q:U \to Q(U)$ such that $\tilde Q=Q$ on $\partial U$. \end{enumerate} \end{Lemma}
\begin{proof} It is obvious that (ii) implies (i). Let us show that (i) implies (ii). Let $\mu$ be the Beltrami differential such that $\mu =\bar\partial Q^{-1}/\partial Q^{-1}$ on $Q(U\backslash A)$ and $\mu=0$ otherwise. By the Measurable Riemann Mapping Theorem, there exists a quasiconformal map $g:\mathbb C \to \mathbb C$ with Beltrami differential $\mu$. Then $g\circ Q: U\to g\circ Q(U)$ is a quasiconformal map and $\bar\partial (g\circ Q)=0$ a.e. outside $A$. Thus there exists a $C$-qc map $G:U\to g\circ Q(U)$ such that $G=g\circ Q$ on $\partial U$, where $C=\mathcal{QD}(A,U)<\infty$. Finally, let $\tilde Q=g^{-1}\circ G$ and we are done. \end{proof}
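For the reader's convenience, let us verify that the constant in (ii) is indeed $KC$; the computation only uses the objects constructed in the proof above. Since $\mathrm{Dil}(Q)\le K$ a.e. outside $A$, on $Q(U\setminus A)$ the Beltrami differential satisfies $$|\mu|=\left|\frac{\bar\partial Q^{-1}}{\partial Q^{-1}}\right|\le \frac{K-1}{K+1}\quad \text{a.e.},$$ so the map $g$ is $K$-qc. By construction, the complex dilatation of $g$ cancels that of $Q^{-1}$ on $Q(U\setminus A)$, whence $g\circ Q$ is conformal a.e. outside $A$ and the map $G$ is $C$-qc. Therefore $\tilde Q=g^{-1}\circ G$ has maximal dilatation at most $KC$ and agrees with $Q$ on $\partial U$, as required.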
We shall now recall the {\em recursively notched square model} developed in~\cite{Kahn}. Let $S=(0,1)\times (-1/2,1/2)$. Let $\mathcal{I}$ denote the collection of the components of $(0,1)\setminus \mathcal{C}$, where $\mathcal{C}$ is the ternary Cantor set. Let \begin{equation}\label{eqn:setN}
\mathcal{N}=\bigcup_{I\in\mathcal{I}} \overline{I}\times [-|I|/2, |I|/2] \end{equation} which is a countable disjoint union of closed squares. The following is ~\cite[Lemma 2.1.1]{Kahn}: \begin{Theorem}\label{thm:Kahn} $\mathcal{QD}(\mathcal{N}, S)<\infty.$ \end{Theorem}
\subsection{Reduce to the toy model} We will work with the polynomial map $f_0$ fixed at the beginning of this section. In the following we write $\mathcal{R}(\theta)$ for $\mathcal{R}_{f_0}(\theta)$. A {\em geometric ray-pair} is, by definition, a simple curve consisting of two distinct external rays together with their common landing point. A {\em slice} is an open set $U$ bounded by two disjoint ray-pairs $\mathcal{R}(\theta_i)\cup \mathcal{R}(\theta_i')\cup \{a_i\}$, $i=1,2$, such that no external ray lying inside $U$ lands at either $a_1$ or $a_2$.
For every $z_0 \in Z$, the external rays landing at $z_0$ cut the complex plane $\mathbb C$ into finitely many sectors $S_1(z_0),\cdots,S_{n(z_0)}(z_0)$. Let $\mathcal S=\{S_j(z)\mid z\in Z,~1\le j\le n(z) \}$. We list the elements in $\mathcal S$ as $S_1, S_2,\cdots, S_{\nu}$ where $\nu=\# \mathcal S$. For each $j$, the boundary of $S_j$ is a geometric ray-pair: there exists $\alpha^j\in Z$ and $\theta^-_j,\theta^+_j\in\mathbb{R}/\mathbb{Z}$ such that $$\partial S_j=\mathcal R(\theta^-_j)\cup \{\alpha^j\}\cup \mathcal R(\theta^+_j).$$ We order $\theta^-_j, \theta^+_j$ in such a way that $$\{t\in \mathbb{R}/\mathbb{Z}: \mathcal{R}(t)\subset S_j\}=(\theta_j^-, \theta_j^+).$$
\begin{Proposition}\label{prop:slice} For each $j\in \{1,2,\ldots, \nu\}$ and each $n$ sufficiently large, there exists a geometric ray-pair $\gamma_n^j=\mathcal{R}(t_n^-(j))\cup \mathcal{R}(t_n^+(j))\cup \{\alpha_n^j\}$ contained in $S_j$ with the following properties: \begin{itemize} \item $\alpha_n^j\in f_0^{-n}(Z)$; \item $\theta^-_j, t_n^-(j), t_n^+(j), \theta^+_j$ lie in $\mathbb{R}/\mathbb{Z}$ in the anticlockwise order; \item $\partial S_j$ and $\gamma_n^j$ bound a slice; \item $t_n^-(j)\to \theta_j^-$, $t_n^+(j)\to \theta_j^+$ as $n\to\infty$. \end{itemize} \end{Proposition}
We postpone the proof of this proposition to the end of this section and show now how it implies Theorem~\ref{thm:Kqcpuzzle}.
\begin{proof}[Proof of Theorem 3'] For each $n$ large, let $\widehat{S}_n^j$ be the slice bounded by $\gamma_n^j$ and $\partial S_j$ given by Proposition~\ref{prop:slice}, and let $S_n^j=\{z\in \widehat{S}_n^j: G(z)<1/d^n\}$, where $G$ is the Green function of $f_0$. So $\overline{S_n^j}$ is a finite union of closures of puzzle pieces of depth $n$. Choose $N_*$ sufficiently large so that the closure of $S^j:=S^j_{N_*}$ is disjoint from the post-critical set of $f_0$. Let $q\gg N_*$ be a positive integer so that for any $j,j'$, the diameter of each component of $f_0^{-q}(S^j)$ is much smaller than that of $S^{j'}$. For each $j\in \{1,2,\ldots,\nu\}$, $f_0^q$ maps a neighborhood $Q_j$ of $\alpha^j$ conformally onto the component of the interior of $\bigcup_{j'=1}^{\nu} \overline{S^{j'}}$ which contains $f_0^q(\alpha^j)$. Let $A_j=Q_j\cap S^j$. Then $A_j$ is a quasi-disk which contains $\alpha^j$ in its boundary and there is $\sigma(j)\in \{1,2,\ldots, \nu\}$ such that $f_0^q(A_j)=S^{\sigma(j)}$. Similarly, there is a quasi-disk $B_j$ which is contained in $S^j$ and contains $\alpha_{N_*}^j$ in its boundary and $\tau(j)\in \{1,2,\ldots, \nu\}$ such that $f_0^q(B_j)=S^{\tau(j)}$. Choosing $q$ large enough, we can ensure that $\overline{A_j}\cap\overline{B_j}=\emptyset$. Note that $m_d^q(\theta^+_j)=\theta^+_{\sigma(j)}, m_d^q(\theta^-_j)=\theta^-_{\sigma(j)}$, $m_d^q(t_{N_*}^-(j))=\theta^+_{\tau(j)}$ and $m_d^q(t_{N_*}^+(j))=\theta^-_{\tau(j)}$, where $m_d:\mathbb{R}/\mathbb{Z}\to \mathbb{R}/\mathbb{Z}$ denotes the map $t\mapsto dt \mod 1$. Let $$F:\bigcup_{j=1}^{\nu} \overline{A_j}\cup\overline{B_j}\to \bigcup_{j=1}^{\nu} \overline{S^j}$$ be the restriction of $f_0^q$. 
Let $A=(0,1/3)\times (-1/6, 1/6)$, $B=(2/3, 1)\times (-1/6,1/6)$ and $S=(0,1)\times (-1/2,1/2)$ and define a map $$G: \bigcup_{j=1}^\nu \{j\}\times (A\cup B) \to \bigcup_{j=1}^\nu \{j\}\times S,$$ as follows: $$G(j,z)=\left\{\begin{array}{ll} (\sigma(j), 3z), &\mbox{ if } z\in A;\\ (\tau(j), 3(1-z)), &\mbox{ if } z\in B. \end{array} \right. $$ Let $C_j=\overline{\{z\in S^j\setminus (\overline{A_j\cup B_j}): G_{f_0}(z)\le d^{-q-N_*}\}}$, where $G_{f_0}$ denotes the Green function of $f_0$, and let $C=[1/3,2/3]\times [-1/6, 1/6]$.
{\bf Claim.} There is a qc homeomorphism $H:\bigcup_{j=1}^\nu S^j\to \bigcup_{j=1}^\nu \{j\}\times S$ such that \begin{enumerate} \item[(i)] $H(A_j)=\{j\}\times A, H(B_j)=\{j\}\times B, H(C_j)=\{j\}\times C,$ \item[(ii)] $H\circ F=G\circ H$ holds on $\bigcup_{j=1}^\nu \overline {A_j\cup B_j}$. \end{enumerate} It suffices to prove that there is a qc map $H_0:\bigcup_{j=1}^\nu S^j\to \bigcup_{j=1}^\nu \{j\}\times S$ such that (i) holds and (ii) holds on $\bigcup_{j} (\partial A_j\cup \partial B_j)$ (with $H$ replaced by $H_0$). Indeed, once such an $H_0$ is constructed, we can construct inductively a sequence $\{H_n\}_{n=0}^\infty$ of qc maps by pull-back with the following properties: \begin{itemize} \item $H_{n+1}=H_n$ on $\bigcup_{j=1}^\nu S^j \setminus (A_j\cup B_j);$ \item $H_{n}\circ F=G\circ H_{n+1}$ holds on $\bigcup_{j=1}^\nu \overline {A_j\cup B_j}$. \end{itemize} These maps $H_n$ have the same maximal dilatation as $H_0$, and they eventually stabilize at every point of the set $X=\{z\in \bigcup_{j=1}^\nu S^j: F^n(z)\not\in \bigcup A_j\cup B_j\mbox{ for some }n\}$. Since $F$ is uniformly expanding, the set $X$ is dense in $\bigcup_j S^j$; it follows that $H_n$ converges to a qc map $H$ which satisfies the requirements. As for the existence of $H_0$, a concrete homeomorphism with the desired properties can easily be constructed using the B\"ottcher coordinate, with the extra property that it is qc in $S^j\setminus \overline{A_j\cup B_j\cup C_j}$. It can be made globally qc since $S^j, A_j, B_j, C_j$ are all quasi-disks.
Now, let $$\mathcal{N}_j=\{z\in S^j: \exists n\ge 1\text{ such that }F^n(z) \text{ is well-defined and belongs to} \bigcup_{j'} C_{j'}\}.$$ Note that for each $j$, $H(\mathcal{N}_j)=\{j\}\times\mathcal{N},$ where $\mathcal{N}$ is as in (\ref{eqn:setN}). Therefore, $$Q:=\max_{j=1}^\nu \mathcal{QD}(\mathcal{N}_j, S^j)<\infty.$$
Let $N=q+N_*$. Then any landing domain of $Y_{N}:=\bigcup_{c\in\text{Crit}(f_0)} Y_N(c)$ does not intersect $\bigcup_{k=0}^{q-1}\bigcup_{j=1}^\nu f_0^k(\partial A_j\cup\partial B_j)$. Therefore, $L_N\cap S^j\subset \mathcal{N}_j,$ so that $$\mathcal{QD}(L_N\cap S^j, S^j)\le \mathcal{QD}(\mathcal{N}_j, S^j)\le Q.$$
Now let $Y$ be an arbitrary Yoccoz puzzle piece of depth $m\ge 0$. As in the construction of $A_j$ and $B_j$ above, for each $x\in \partial Y\cap J(f_0)$, there is a quasi-disk $V_x$ which is contained in $Y$ and contains $x$ in its closure such that $f_0^{m}$ maps $V_x$ onto $S^{j(x)}$ for some $j(x)\in \{1,2,\ldots, \nu\}$. Since $\overline{S^{j(x)}}$ is a finite union of closures of puzzle pieces of depth $N_*$ and disjoint from the postcritical set of $f_0$, for each $k=0,1,\ldots, m-1$, $f_0^k(V_x)$ is a finite union of closures of puzzle pieces of depth $N_*+m-k$ ($< m+N$) and does not contain a critical point of $f_0$, so $f_0^k(V_x)\cap Y_{m+N}=\emptyset$. It follows that $f_0^{m}$ maps $V_x\cap L_{m+N}$ onto $S^{j(x)}\cap L_{m+N}$. Therefore $$\mathcal{QD}(L_{m+N}\cap V_x, V_x)= \mathcal{QD}(L_{m+N}\cap S^{j(x)}, S^{j(x)})\le \mathcal{QD}(L_N\cap S^{j(x)}, S^{j(x)})\le Q.$$ These $V_x$'s are pairwise disjoint since each of them is mapped univalently onto a component of $\bigcup_j S^j$ under $f_0^{m}$. Noting that $(Y\cap L_{m+N})\setminus \bigcup_{x\in \partial Y\cap J(f_0)}V_x$ is compactly contained in $Y$, we conclude that $\mathcal{QD}(L_{m+N}\cap Y, Y)<\infty$. \end{proof}
\begin{proof}[Proof of Proposition~\ref{prop:slice}] We denote by $Y^j_n$ the unique puzzle piece of depth $n$ which has $\alpha^j$ on its boundary and is contained in the sector $S_j$. By Proposition~\ref{prop:noncrpiece}, when $n$ is sufficiently large, $\overline{Y_n^j}$ is disjoint from the postcritical set of $f_0$. Fix $n_0$ large so that for each $j$, there exists $\alpha^j_{n_0} \in \overline Y^j_{n_0}\cap J(f_0)$ with $\alpha^j_{n_0} \not\in Z$. Let $\mathcal R(s^-_j)$ and $\mathcal R(s^+_j)$ be the external rays landing at $\alpha^j_{n_0}$ which are the boundary curves of $Y^j_{n_0}$. We assume that $\theta^-_j, s^-_j, s^+_j, \theta^+_j$ lie in $\mathbb{R}/\mathbb{Z}$ in the anticlockwise cyclic order.
For $n\ge n_0$, we define two angles $t^-_n(j)$ and $t^+_n(j)$ as follows: \[t^-_n(j):=\sup\{\theta\in(\theta^-_j,s^-_j)\mid \mathcal R(\theta)\cap Y^j_{n}\ne \varnothing\}\] and \[t^+_n(j):=\inf\{\theta\in(s^+_j,\theta^+_{j})\mid \mathcal R(\theta)\cap Y^j_{n}\ne \varnothing\}.\]
{\bf Claim 1.} {\em For every $1\le j\le \nu$ and any $n\ge n_0$, the two external rays $\mathcal R(t^-_n(j))$ and $\mathcal R(t^+_n(j))$ land at the same point, denoted by $\alpha_n^j$.}
Let $\mathcal R^n(t)=\mathcal R(t)\cap \{z\mid G_{f_0}(z)<d^{-n}\}$. Clearly, $\mathcal R^n(t^-_n(j))$ and $\mathcal R^n(t^+_n(j))$ are on the boundary of $Y^j_n$ and are closest rays (on the boundary of $Y^j_n$) to $\mathcal R(s^-_j)$ and $\mathcal R(s^+_j)$ respectively. First, we show that $\mathcal R(t^-_{n_0+1}(j))$ and $\mathcal R(t^+_{n_0+1}(j))$ land at the same point.
{\em Case 1.} $\alpha_{n_0}^j\in \overline{Y_{n_0+1}^j}$. Then $t^{\pm}_{n_0+1}(j)=s^{\pm}_j$, and so the claim holds.
{\em Case 2.} $\alpha_{n_0}^j\not\in \overline{Y_{n_0+1}^j}$. Then there exist $t^-$ and $t^+$ such that \begin{itemize} \item $\mathcal{R}(t^{\pm})$ intersects the boundary of $Y_{n_0+1}^j$ with a common landing point $\alpha\not\in \{\alpha_{n_0}^j,\alpha^j\}$; \item $\mathcal{R}(t^+)\cup \mathcal{R}(t^-)\cup\{\alpha\}$ separates $\alpha_{n_0}^j$ from $\alpha^j$. \end{itemize} Without loss of generality, assume $t^-\in (\theta_j^-, s_j^-)$. Then $t^+\in (s_j^+,\theta_j^+)$. So for any $t\in (t^-, s_j^-)$, $\mathcal{R}(t)$ is disjoint from $Y_{n_0+1}^j$, hence $t^-=t^-_{n_0+1}(j)$. Similarly, $t^+=t^+_{n_0+1}(j)$. Thus $t^-_{n_0+1}(j)\sim_{\lambda_{f_0}} t^+_{n_0+1}(j)$.
The general case can be proved similarly by induction.
It is clear that $\partial S_j$ and $\gamma_n^j=\mathcal{R}(t^-_n(j))\cup\mathcal{R}(t^+_n(j))\cup \{\alpha_n^j\}$ bound a slice for each $n\ge n_0$. So it remains to show:
{\bf Claim 2.} {\em The sequence $\{t^-_n(j)\}_{n>n_0}$ decreases monotonically to $\theta^-_j$ and $\{t^+_n(j)\}_{n>n_0}$ increases monotonically to $\theta^+_{j}$ for all $j$.}
Since $\mathrm{diam}(Y^j_n)$ converges to $0$, $\alpha^j_n \to \alpha^j$. Obviously, $\{t^-_n(j)\}$ is monotonically decreasing, so it converges to some $\theta\in[\theta^-_j,s^-_j]$. If $\theta \ne \theta^-_j$, then $\alpha^j_n$ converges to the landing point of $\mathcal R(\theta)$, which is not equal to $\alpha^j$. This leads to a contradiction. The statement for $\{t^+_n(j)\}$ is proved similarly. \end{proof}
\section{Thurston's Algorithm}\label{sec:Thurston} \subsection{Thurston's Algorithm} The Thurston algorithm was introduced by Thurston to construct a rational map that is combinatorially equivalent to a given branched covering of the 2-sphere; see \cite{DH3}. The algorithm goes as follows. Let $\widetilde{f}:\widehat{\mathbb{C}}\to \widehat{\mathbb{C}}$ be a quasi-regular map of degree $d>1$. Given a qc map $h_0:\widehat{\mathbb{C}}\to \widehat{\mathbb{C}}$, let $\sigma_0$ be the standard complex structure on $\widehat{\mathbb{C}}$ and $\sigma=(h_0\circ\tilde f)^*\sigma_0$. By the Measurable Riemann Mapping Theorem, there exists a qc map $h_1:\widehat{\mathbb{C}}\to \widehat{\mathbb{C}}$ with $h_1^*\sigma_0=\sigma$, so that $Q_0:=h_0\circ \widetilde{f}\circ h_1^{-1}$ is a rational map of degree $d$. The qc map $h_1$ is unique up to composition with a M\"obius transformation. Applying the same argument to $h_1$ instead of $h_0$, we obtain a qc map $h_2$ and a rational map $Q_1$ of degree $d$ such that $Q_1\circ h_2=h_1\circ \widetilde{f}$. Repeating the argument, we obtain a sequence of normalized qc maps $\{h_n\}_{n=0}^\infty$ and a sequence of rational maps $\{Q_n\}_{n=0}^\infty$ of degree $d$ such that $Q_n\circ h_{n+1}=h_{n}\circ \widetilde{f}$. The question is to study the convergence of $\{h_n\}_{n=1}^\infty$ and $\{Q_n\}_{n=1}^\infty$ after suitable normalization.
In \cite{R-L}, Rivera-Letelier applied the algorithm to a certain class of quasi-regular maps which may have non-recurrent branched points with infinite orbits. In this section, we shall modify his argument and prove a convergence theorem for a quasi-regular map $\widetilde{f}:\widehat{\mathbb{C}}\to \widehat{\mathbb{C}}$ where the irregular part has a nice Markov structure.
For simplicity, we shall assume that $\widetilde{f}$ satisfies the following: $\widetilde{f}^{-1}(\infty)=\{\infty\}$ and $\widetilde{f}$ is holomorphic in a neighborhood of $\infty$. Below, we shall use the terminology {\em quasi-regular polynomial} for such a map.
An open set $\mathcal{B}$ is called {\em nice} if each component of $\mathcal{B}$ is a Jordan disk and $\widetilde{f}^k(\partial \mathcal{B})\cap \mathcal{B}=\emptyset$ for each $k\ge 1$. Let $$D(\mathcal{B})=\{z\in \mathbb C: \exists n\ge 1\text{ such that } \widetilde{f}^n(z)\in\mathcal{B}\}$$ denote the domain of the first entry map to $\mathcal{B}$. We say that a nice open set $\mathcal{B}$ is {\em free} if $$P(\widetilde{f})\cap\overline{\mathcal{B}}=\emptyset,$$ where $$P(\widetilde{f})=\overline{\bigcup_{c\in \text{Crit}(\widetilde{f})}\bigcup_{n\ge 1} \{\widetilde{f}^n(c)\}},$$ and $\text{Crit}(\widetilde{f})$ denotes the set of ramification points of $\widetilde{f}$. We say that an open set $\mathcal{B}$ is {\em $M$-nice} if it is nice and for each component $B$ of $\mathcal{B}$, the following three conditions hold: \begin{equation}\label{eqn:shapeB} \text{diam}(B)^2\le M \text{area} (B); \end{equation} \begin{equation}\label{eqn:Bretsmall} \frac{\mathrm{area}(B\setminus D(\mathcal{B}))}{\mathrm{area}(B)}>M^{-1}; \end{equation} \begin{equation}\label{eqn:Bretqc} \mathcal{QD}(D(\mathcal B)\cap B, B)\le M, \end{equation} where $\mathcal{QD}$ is as defined in \S\ref{sec:Kahn}.
\begin{Theorem}\label{thm:Thurston} Let $\widetilde{f}:\widehat{\mathbb{C}}\to \widehat{\mathbb{C}}$ be a quasi-regular polynomial of degree $d\ge 2$ and let $A\subset \widehat{\mathbb{C}}$ be a Borel set such that $\bar\partial \tilde f=0$ a.e. outside $A$. Assume that there is a free open set $\mathcal{B}$ which is $M$-nice for some $M<\infty$ and a positive integer $T$ such that $$(\ast) \text{ for every }z\text{ and }n\ge 1, \text{ if }\widetilde{f}^j(z)\not\in\mathcal{B}\text{ for each }0\le j <n,\text{ then }\#\{0\le k<n: \widetilde f^k(z) \in A\}\le T.$$ Then there is a continuous surjection $h:\widehat{\mathbb{C}}\to \widehat{\mathbb{C}}$ and a rational map $f:\widehat{\mathbb{C}}\to \widehat{\mathbb{C}}$ of degree $d$ such that $f\circ h=h\circ \tilde f$. Moreover, there is a qc map $\lambda_0$ such that $\lambda_0(z)=h(z)$ whenever $z\not\in \bigcup_{n=0}^\infty \widetilde{f}^{-n}(\mathcal{B})$ and such that $\bar{\partial} \lambda_0=0$ holds a.e. on the set $\{z\in \widehat{\mathbb{C}}: \widetilde{f}^n(z)\not\in A, \forall n\ge 0\}$. \end{Theorem} The rest of this section is devoted to the proof of the theorem. Without loss of generality, we may assume that $\mathcal{B}\subset A$. Note that ${D}(\mathcal{B})\cup \mathcal{B}$ is compactly contained in $\mathbb C$. Starting with $h_0=id$, we construct a sequence of qc maps $h_n$ as above, normalized so that $h_n(z)=z+o(1)$ near infinity. Note that there is a neighborhood $V$ of $\infty$ such that $\widetilde{f}(V)\subset V$ and $\widetilde{f}$ is holomorphic in $V$, so that all the maps $h_n$ are conformal in $V$.
For a map $\varphi:\mathbb C\to \mathbb C$ and $z\in \mathbb C$, let
$$H(\varphi;z)=\limsup_{r\searrow 0}\frac{\sup_{|w-z|=r} |\varphi(w)-\varphi(z)|}{\inf_{|w-z|=r}|\varphi(w)-\varphi(z)|}.$$ So if $\varphi$ is differentiable at $z$ with a positive Jacobian, then
$$H(\varphi;z)=\frac{|\partial \varphi (z)|+|\overline{\partial} \varphi(z)|}{|\partial \varphi (z)|-|\overline{\partial} \varphi(z)|}.$$
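The displayed formula follows from the first-order expansion of $\varphi$ at a point of differentiability: writing $w=z+re^{i\theta}$, $$\varphi(w)-\varphi(z)=\partial \varphi(z)\,re^{i\theta}+\overline{\partial} \varphi(z)\,re^{-i\theta}+o(r),$$ so that $$\sup_{|w-z|=r}|\varphi(w)-\varphi(z)|=\big(|\partial \varphi(z)|+|\overline{\partial} \varphi(z)|\big)r+o(r),\qquad \inf_{|w-z|=r}|\varphi(w)-\varphi(z)|=\big(|\partial \varphi(z)|-|\overline{\partial} \varphi(z)|\big)r+o(r),$$ and the positivity of the Jacobian $|\partial \varphi(z)|^2-|\overline{\partial} \varphi(z)|^2$ ensures that the denominator is positive.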
Fix $K>1$ such that $\widetilde{f}$ is $K$-quasi-regular. Let us call a point $z\in \mathbb C$ {\em regular} if the following hold: \begin{enumerate} \item $\widetilde{f}^n(z)$ is not a ramification point of $\widetilde{f}$ for any $n\ge 0$; \item for any non-negative $n$ and $m$, the maps $h_n$ and $\widetilde{f}$ are differentiable at $\widetilde{f}^m(z)$ with a positive Jacobian. Moreover,
$$H(\widetilde{f}; \widetilde{f}^m(z))\le K.$$ \end{enumerate} Note that Lebesgue almost every point in $\mathbb C$ is regular.
\begin{Lemma}\label{lem:hkh-n1st} If $z$ is a regular point, then for any integers $k>n\ge 0$, the following hold: \begin{equation} H(h_k\circ h_n^{-1}; h_n(z))=H(\widetilde{f}^{k-n};\widetilde{f}^n(z))\le \prod_{j=n}^{k-1} H(\widetilde{f}; \widetilde{f}^j(z))\le K^{\#\{n\le j<k: \widetilde{f}^j(z)\in A\}}. \end{equation} \end{Lemma} \begin{proof} The equality follows from the identity $$\widetilde{f}^{k-n}\circ (Q_0\circ Q_1\circ \cdots \circ Q_{n-1})\circ h_n =Q_0\circ \cdots \circ Q_{k-1}\circ h_k=\widetilde{f}^k.$$ The first inequality follows from the definition of $H$ and the second inequality follows from the assumption that $z$ is regular. \end{proof} Define
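Let us make the first inequality explicit. At points of differentiability with positive Jacobian, $H$ is submultiplicative under composition, $$H(\varphi\circ \psi; z)\le H(\varphi; \psi(z))\, H(\psi; z),$$ since the distortion of a composition of two $\mathbb R$-linear maps is at most the product of their distortions. Applying this repeatedly to $\widetilde{f}^{k-n}$ at the point $\widetilde{f}^n(z)$ gives the product bound. The last inequality then uses $H(\widetilde{f};\widetilde{f}^j(z))\le K$ when $\widetilde{f}^j(z)\in A$, together with $H(\widetilde{f};\widetilde{f}^j(z))=1$ when $\widetilde{f}^j(z)\not\in A$; the latter holds for Lebesgue almost every such point, since $\bar\partial \widetilde{f}=0$ a.e. outside $A$, and this is the sense in which the bound is used below.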
\[\widetilde{\mathcal{K}}(n):=\{z\mid \tilde f^k(z) \notin A,\text{ for all } k \ge n\}\] and \[\widetilde{\mathcal{L}}(n)=\{z\mid \tilde f^k(z) \notin \mathcal B,\text{ for all } k \ge n\} \supset \widetilde{\mathcal{K}}(n).\]
As $\mathcal{B}$ is open, $\widetilde{\mathcal{L}}(n)$ is closed for each $n\ge 0$.
\begin{Lemma}For every $k>n$, $\overline{\partial } (h_k\circ h^{-1}_n)=0$ a.e. on $h_n(\widetilde{\mathcal{K}}(n))$. Moreover, there exists a $K'$-qc map $h_{k,n}$ such that $h_{k,n}=h_k\circ h^{-1}_n$ on $h_n(\widetilde{\mathcal{L}}(n))$. \end{Lemma} \begin{proof}
For each regular $z\in \widetilde{\mathcal{K}}(n)$, $\#\{n\le j<k: \widetilde{f}^j(z)\in A\}=0$. So by Lemma~\ref{lem:hkh-n1st}, $H(h_k\circ h_n^{-1}; h_n(z))=1$. Since a qc map is absolutely continuous, the first statement follows.
Now let us turn to the second statement. Note that for each regular $z\in \widetilde{\mathcal{L}}(n)$, \begin{equation}\label{eqn:hkhn-1Ln} H(h_k\circ h_n^{-1}; h_n(z))\le K^T, \end{equation} since by assumption ($\ast$), $\#\{n\le j<k:\widetilde{f}^j(z)\in A\}\le T.$ Put $K'=K^{4T+1}M$.
{\bf Claim.} For each component
$W$ of $\mathbb C\setminus \widetilde{\mathcal{L}}(n)$, there is a $K'$-qc map $h^W_{k,n}$ from $h_n(W)$ onto its image such that $h_{k,n}^W|\partial h_n(W)=h_k\circ h_n^{-1}|h_n(\partial W)$.
Once this claim is proved, we can obtain a homeomorphism $h_{k,n}:\widehat{\mathbb{C}}\to \widehat{\mathbb{C}}$ which coincides with $h_k\circ h_n^{-1}$ outside $\widetilde{\mathcal{L}}(n)$ and coincides with $h_{k,n}^W$ on $h_n(W)$ for each component $W$ of $\mathbb C\setminus \widetilde{\mathcal{L}}(n)$. By \cite[Lemma 2 in Chapter 1]{DH2}, the map $h_{k,n}$ is $K'$-qc.
To prove the claim, let $w$ be the smallest integer such that $w\ge n$ and $\widetilde{f}^w(W)\cap\mathcal{B}\not=\emptyset$. Since $\mathcal{B}$ is nice and disjoint from the postcritical set of $\widetilde{f}$, $\widetilde{f}^w$ maps $W$ homeomorphically onto a component $B$ of $\mathcal{B}$, and $\widetilde{f}^k(W)\cap\mathcal{B}=\emptyset$ for each $n\le k<w$. By the assumption ($\ast$), for any $z\in W$, $\#\{n\le j<w: \widetilde{f}^j(z)\in A\}\le T.$ By Lemma~\ref{lem:hkh-n1st}, if $z$ is regular, then for any $n<k\le w$, $$H(h_k\circ h_n^{-1}; h_n(z))=H(\widetilde{f}^{k-n}; \widetilde{f}^n(z))\le K^T.$$
In particular, if $n<k\le w$, then $h_k\circ h_n^{-1}$ is $K^T$-qc on $h_n(W)$. Assume now that $k>w$ and let $W'=(\widetilde{f}^{w}|W)^{-1}(D(\mathcal{B})\cap B)$. Since $\widetilde{f}^{n}\circ h_n^{-1}$ is conformal on $h_n(W)$ and $\widetilde{f}^{w-n}$ is a $K^T$-qc map in $\widetilde{f}^n(W)$, by Lemma~\ref{qc1}, we have $$\mathcal{QD}(h_n(W'), h_n(W))\le K^{2T} \mathcal{QD}(D(\mathcal{B})\cap B, B)\le K^{2T}M.$$ For each $z\in W\setminus W'$, $\widetilde{f}^w(z)\not\in D(\mathcal{B})$, so $\widetilde{f}^j(z)\not\in \mathcal{B}$ for all $j>w$. By assumption ($\ast$), it follows that $$\#\{n\le j<k: \widetilde{f}^j(z)\in A\}\le 2T+1.$$ Thus for a regular $z\in W\setminus W'$, $H(h_k\circ h_n^{-1};h_n(z))\le K^{2T+1}.$ It follows by Lemma~\ref{qc2} that there is a $K'$-qc map defined on $h_n(W)$ which has the same boundary value as $h_k\circ h_n^{-1}$. \end{proof}
By compactness of normalized $K$-qc maps and the diagonal argument, there exists a sequence $\{k_j\}_{j=1}^\infty$ of positive integers such that for each $n$, $h_{k_j,n}$ converges locally uniformly in $\mathbb C$ to a $K'$-qc map $\lambda_n:\mathbb C\to\mathbb C$. Since $\overline{\partial } h_{k_j,n}$ and $\overline{\partial }\lambda_n$ have bounded norm in $L^2(\mathbb C)$ and $\overline{\partial } h_{k_j,n}$ converges to $\overline{\partial }\lambda_n$ in the sense of distribution, it follows that \begin{equation}\label{eqn:dbarchi} \overline{\partial }\lambda_n=0 \text{ a.e. on } h_n(\widetilde{\mathcal{K}}(n)). \end{equation}
\begin{Lemma}\label{lem:chinhn} For each $k>n\ge 0$, $$\lambda_n \circ h_n=\lambda_k \circ h_k\text{ on }\widetilde{\mathcal{L}}(n).$$ \end{Lemma} \begin{proof} For each $j$ large enough so that $k_j>k>n$, since $\widetilde{\mathcal{L}}(n)\subset \widetilde{\mathcal{L}}(k)$, \begin{eqnarray*} h_{k_j,n}\circ h_n=h_{k_j} = h_{k_j}\circ h^{-1}_k\circ h_k = h_{k_j,k}\circ h_k \end{eqnarray*} is valid on $\widetilde{\mathcal{L}}(n)$. Letting $j$ go to infinity, we obtain $\lambda_n\circ h_n=\lambda_k\circ h_k$ on $\widetilde{\mathcal{L}}(n)$. \end{proof}
\subsection{Limit geometry} Let $\mathcal{L}(n)=\lambda_n\circ h_n(\widetilde{\mathcal{L}}(n))$ and $\mathcal{K}(n)=\lambda_n\circ h_n(\widetilde{\mathcal{K}}(n))$. Since we assume $A\supset \mathcal{B}$, $\mathcal{K}(n)\subset \mathcal{L}(n)$. By Lemma~\ref{lem:chinhn}, for $k\ge n$, we have \[\mathcal{L}(n)=\lambda_k\circ h_k(\widetilde{\mathcal{L}}(n))\subset \lambda_k\circ h_k(\widetilde{\mathcal{L}}(k))=\mathcal{L}(k)\] and \[\mathcal{K}(n)=\lambda_k\circ h_k(\widetilde{\mathcal{K}}(n))\subset \lambda_k\circ h_k(\widetilde{\mathcal{K}}(k))=\mathcal{K}(k).\] By (\ref{eqn:dbarchi}), $\lambda^{-1}_n$ is conformal a.e. on $\mathcal{K}(n)$.
\begin{equation*} \xymatrix@C=4em@R=3em{ \cdots \ar[r] &\mathbb C\ar[r] &\mathbb C \ar[r] &\cdots \ar[r] &\mathbb C\ar[r] &\mathbb C\\ \cdots \ar[r] &\mathbb C \ar[u]^{\lambda_{n+1}}\ar[r]^{Q_n} &\mathbb C\ar[u]^{\lambda_{n}}\ar[r] &\cdots \ar[r] &\mathbb C \ar[u]^{\lambda_{1}}\ar[r]^{Q_0}&\mathbb C\ar[u]^{\lambda_{0}}\\ \cdots \ar[r] &\mathbb C \ar[u]^{h_{n+1}}\ar[r]^{\tilde f} &\mathbb C\ar[u]^{h_n}\ar[r] &\cdots \ar[r] &\mathbb C \ar[u]^{h_1}\ar[r]^{\tilde f} &\mathbb C \ar[u]^{\mathrm{id}}} \end{equation*}
\begin{Lemma}\label{lem:shapeW} There exists $M'>0$ such that for any $n\ge 0$ and any component $W$ of $\widehat{\mathbb{C}}\setminus \mathcal{L}(n)$, $$\text{diam} (W)^2\le M'\text{area}(W).$$ \end{Lemma} \begin{proof} Let $\widetilde{W}=(\lambda_n\circ h_n)^{-1}(W)$. Let $w$ be the minimal integer such that $w\ge n$ and $\widetilde{f}^w(\widetilde{W})\cap \mathcal{B}\not=\emptyset$. Then $\widetilde{W}$ is a component of $\widehat{\mathbb{C}}\setminus \widetilde{\mathcal{L}}(w)$ and $\widetilde{f}^w$ maps $\widetilde{W}$ homeomorphically onto a component $B$ of $\mathcal{B}$. So $\varphi:=Q_0\circ Q_1\circ \cdots \circ Q_{w-1}$ maps $h_w(\widetilde{W})$ conformally onto $B$. Moreover, since $B$ has a definite neighborhood disjoint from $P(\widetilde{f})$, the conformal map $\varphi$ has bounded distortion. Thus $\text{diam}(h_w(\widetilde{W}))^2/\text{area}(h_w(\widetilde{W}))$ is bounded from above. Since $\lambda_w$ are normalized $K'$-qc maps and $\lambda_w (h_w(\widetilde{W}))=W$, the statement follows. \end{proof}
\begin{Lemma}\label{lem:Lnsmall} \begin{enumerate} \item The Lebesgue measure of the set $\mathbb C\setminus \mathcal{L}(n)$ tends to zero as $n\to\infty$. \item $$\lim_{n\to\infty} \sup\{\text{diam}(W): W\text{ is a component of }\mathbb C\setminus \mathcal{L}(n)\}=0.$$ \end{enumerate} \end{Lemma} \begin{proof} The second statement follows from the first since all components of $\widehat{\mathbb{C}}\setminus \mathcal{L}(n)$ have uniformly bounded shape by Lemma~\ref{lem:shapeW}.
To prove the first statement, we shall use a martingale type argument. Let $\mathscr{W}$ denote the collection of components of $\widehat{\mathbb{C}}\setminus \mathcal{L}(n)$, where $n$ runs over all non-negative integers. Let $\mathscr{W}^0$ denote the maximal elements in $\mathscr{W}$, i.e., those that are not contained in any other. For each $k\ge 1$, define inductively $\mathscr{W}^k$ to be the maximal elements in $\mathscr{W}\setminus \bigcup_{0\le j<k} \mathscr{W}^j$. So $\mathscr{W}$ is the disjoint union of $\mathscr{W}^k$, $k=0,1,\ldots$. Note that $$\widehat{\mathbb{C}}\setminus \bigcup_n \mathcal{L}(n)\subset \bigcap_{k=0}^\infty \bigcup_{W\in \mathscr{W}^k} W.$$ It suffices to show that there is a constant $\lambda\in (0,1)$ such that for $k\ge 0$ and each $W\in \mathscr{W}^k$, \begin{equation}\label{eqn:Wfollower} \frac{\text{area}(W\cap (\bigcup_{W'\in \mathscr{W}^{k+1}}W'))}{\text{area} (W)}\le \lambda. \end{equation}
To this end, fix such a $W$. Note that there is $n$ such that $\widetilde{W}:=(\lambda_n\circ h_n)^{-1}(W)$ satisfies the following: $\widetilde{f}^n$ maps $\widetilde{W}$ homeomorphically onto a component $B$ of $\mathcal{B}$. So $\varphi:=Q_0\circ Q_1\circ \cdots\circ Q_{n-1}$ maps $\lambda_n^{-1}(W)$ conformally onto $B$ and maps $\lambda_n^{-1}(W\setminus (\bigcup_{W'\in \mathscr{W}^{k+1}}W'))$ onto $B\setminus D(\mathcal{B})$. Since $B$ has a definite neighborhood disjoint from $P(\widetilde{f})$, $\varphi|\lambda_n^{-1}(W)$ has bounded distortion. Thus $$\frac{\text{area}(\lambda_n^{-1}(W\setminus (\bigcup_{W'\in \mathscr{W}^{k+1}}W')))}{\text{area} (\lambda_n^{-1}(W))} \ge C\frac{\text{area}(B\setminus D(\mathcal{B}))}{\text{area}(B)}>\frac{C}{M},$$ where $C>0$ is independent of $W$. Since $\lambda_n$ are normalized $K'$-qc maps, (\ref{eqn:Wfollower}) follows. \end{proof}
\begin{Lemma} \label{lem:KnLn} $\bigcup_{n=0}^\infty \mathcal{K}(n)=\bigcup_{n=0}^\infty\mathcal{L}(n)$. \end{Lemma} \begin{proof} By the assumption ($\ast$), $\bigcup_n \widetilde{\mathcal{K}}(n)=\bigcup_n \widetilde{\mathcal{L}}(n)$. Arguing by contradiction, assume that there exists $z\in (\bigcup_n \mathcal{L}(n))\setminus (\bigcup_n \mathcal{K}(n))$. Then there exists $n_0$ such that for all $n\ge n_0,$ $z\in \mathcal{L}(n)\setminus \mathcal{K}(n)$. By definition, there exists $\tilde{z}_n\in \widetilde{\mathcal L}(n)\setminus \widetilde{\mathcal{K}}(n)$ such that $\lambda_n\circ h_n(\tilde{z}_n)=z$. Then $$\lambda_{n+1}\circ h_{n+1}(\tilde{z}_n)=\lambda_n\circ h_n(\tilde{z}_n)=z=\lambda_{n+1}\circ h_{n+1}(\tilde{z}_{n+1}),$$ so $\tilde{z}_{n+1}=\tilde{z}_n$. Therefore $\tilde{z}=\tilde{z}_{n_0}$ satisfies $\tilde{z}\in (\bigcup_{n=n_0}^\infty \widetilde{\mathcal{L}}(n))\setminus (\bigcup_{n=0}^\infty\widetilde{\mathcal{K}}(n))$. This is absurd. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:Thurston}] By Lemma~\ref{lem:chinhn}, $\lambda_n\circ h_n=\lambda_k\circ h_k$ on $\widetilde{\mathcal{L}}(n)$ for all $k>n$; hence $\lambda_n\circ h_n(\widetilde{W})=\lambda_k\circ h_k(\widetilde{W})$ for every component $\widetilde{W}$ of $\mathbb C\backslash \widetilde{\mathcal{L}}(n)$. By Lemma~\ref{lem:Lnsmall},
\[\sup\limits_{z\in \mathbb C}|\lambda_k\circ h_k(z)-\lambda_n\circ h_n(z)| \le 2\sup\limits_{\widetilde W}\mathrm{diam}~\lambda_n\circ h_n(\widetilde W) \to 0\] as $n$ tends to $\infty$, so $\lambda_n\circ h_n$ converges uniformly to a continuous function $h:\mathbb C\to \mathbb C$. Similarly, \begin{eqnarray*}
& &\sup\limits_{z\in \mathbb C}|\lambda_{k}\circ Q_{k}\circ \lambda^{-1}_{k+1}(z)-\lambda_{n}\circ Q_{n}\circ \lambda^{-1}_{n+1}(z)|\\
&=& \sup\limits_{z\in \mathbb C}|\lambda_{k}\circ h_k\circ\tilde f\circ h^{-1}_{k+1}\circ \lambda^{-1}_{k+1}(z)-\lambda_{n}\circ h_n\circ \tilde f \circ h^{-1}_{n+1}\circ \lambda^{-1}_{n+1}(z)| \\
&\le& 2 \sup\limits_{W}\mathrm{diam}\, W \to 0 \end{eqnarray*} as $n$ tends to $\infty$, so $\lambda_{n}\circ Q_{n}\circ \lambda^{-1}_{n+1}$ converges uniformly to a proper map $f:\mathbb C\to \mathbb C$.\par Recall that $\lambda^{-1}_n$ is a normalized $K'$-qc map and conformal Lebesgue a.e. on $\mathcal{K}(n)$. By Lemma~\ref{lem:Lnsmall} (1) and Lemma~\ref{lem:KnLn}, $\bigcup\limits_{n}\mathcal{K}(n)=\bigcup\limits_n \mathcal{L}(n)$ has full Lebesgue measure. By \cite[Lemma B.1]{R-L}, $\lambda^{-1}_n$ converges uniformly to the identity, hence so does $\lambda_n$.
We conclude that $h_n$ converges uniformly to a continuous map $h$ and $Q_n$ converges uniformly to a rational map $f$ of degree $d$. Thus $h\circ \widetilde{f}=f\circ h$. On $\widetilde{\mathcal{L}}(0)$, $h(z)=\lim_{n\to\infty}\lambda_n\circ h_n(z)=\lambda_0\circ h_0(z)=\lambda_0(z)$. \end{proof}
\section{Qc surgery and proof of the Main Theorem}\label{sec:surgery}
As before, we fix $f_0\in \text{Poly}(d)$ which is postcritically finite, hyperbolic and primitive, let $T=(|T|, \sigma, \delta)$ denote the reduced mapping scheme of $f_0$ and let $r:|T|\to \mathbb{N}$ denote the return time function. We also fix a collection $\{\theta_\mathbf{v}\}_{\mathbf{v}\in |T|}$ of external angles such that $d^{r(\mathbf{v})}\theta_\mathbf{v}=\theta_{\sigma(\mathbf{v})}\mod 1$ and such that $\mathcal{R}_{f_0}(\theta_\mathbf{v})$ lands on the boundary of $\mathbf{v}$, for each $\mathbf{v}\in |T|$, according to Lemma~\ref{lem:externalangle}.
Choose two large positive integers $N_0<N_1$ such that the following hold for each $\mathbf{v}\in |T|$: \begin{itemize}
\item $f_0^j(Y_{N_0}(\mathbf{v}))$, $\mathbf{v}\in |T|$, $0\le j<r(\mathbf{v})$, are pairwise disjoint; \item $f_0^{r(\mathbf{v})}: Y_{N_0}(\mathbf{v})\to Y_{N_0-r(\mathbf{v})}(\sigma(\mathbf{v}))$ has degree $\delta(\mathbf{v})$; \item putting $N'_0=N_0+N+\max_{\mathbf{v}} r(\mathbf{v})$, $Y_{N_1}(\mathbf{v})\Subset Y_{N'_0}(\mathbf{v})$. \end{itemize} Here $N$ is as in Theorem~\ref{thm:Kqcpuzzle}.
Applying the `thickening' procedure (\cite{Mil4} and \cite[Lemma 5.13]{IK}), we obtain quasi-disks $U'_{\mathbf{v}}\Subset U''_{\mathbf{v}}$, with $U'_{\mathbf{v}}\supset Y_{N_1+r(\mathbf{v})}(\mathbf{v})$ and $Y_{N_0+N}(\mathbf{v})\Supset U''_{\mathbf{v}}\supset Y_{N_1}(\mathbf{v})$ and such that $f_0^{r(\mathbf{v})}: U'_{\mathbf{v}}\to U''_{\mathbf{v}}$ again has degree $\delta(\mathbf{v})$. Then the map
$$F_0:\bigcup_{\mathbf{v}\in |T|} \{\mathbf{v}\}\times U'_\mathbf{v}\to \bigcup_{\mathbf{v}} \{\mathbf{v}\}\times U''_\mathbf{v}$$
defined by $F_0|U'_\mathbf{v}=f_0^{r(\mathbf{v})} |U'_\mathbf{v}$, is a GPL map over $T$, with filled Julia set equal to $\bigcup_{\mathbf{v}}\{\mathbf{v}\}\times\overline{\mathbf{v}}$. We may choose these domains $U'_\mathbf{v}$ and $U''_\mathbf{v}$ so that $\mathcal{R}_{f_0}(\theta_\mathbf{v})$ intersects $\partial U'_\mathbf{v}$ (resp. $\partial U''_\mathbf{v}$) at a single point. Let $U_\mathbf{v}=\{z\in U'_\mathbf{v}: F_0(\mathbf{v}, z)\in \{\sigma(\mathbf{v})\}\times U'_{\sigma(\mathbf{v})}\}$.
\begin{Theorem}\label{thm:tildef} Given $g\in\mathcal{C}(T)$ there exists a quasi-regular map $\widetilde{f}$ of degree $d$ with the following properties: \begin{enumerate} \item $f_0(z)=\widetilde{f}(z)$ for each $z\in \mathbb C\setminus (\bigcup_{\mathbf{v}} U'_\mathbf{v})$; \item There exist quasi-disks $U_{\mathbf{v}, g} \Subset U'_\mathbf{v}$ such that $\widetilde{f}$ is holomorphic in $U_{\mathbf{v},g}$ and the map $$\widetilde{F}:\bigcup_{\mathbf{v}} \{\mathbf{v}\}\times U_{\mathbf{v}, g} \to \bigcup_{\mathbf{v}} \{\mathbf{v}\}\times U'_\mathbf{v},\,\, (\mathbf{v}, z)\mapsto (\sigma(\mathbf{v}), \widetilde{f}^{r(\mathbf{v})}(z)),$$ is a GPL map over $T$ which is conformally conjugate to $g$ near their filled Julia set. More precisely, there are quasi-disks $V_{\mathbf{v}, g}\Subset V'_{\mathbf{v}, g}$ such that
$g:\bigcup_{\mathbf{v}} \{\mathbf{v}\}\times V_{\mathbf{v}, g} \to \bigcup_{\mathbf{v}} \{\mathbf{v}\}\times V'_{\mathbf{v},g}$ is a GPL map over $T$, and for each $\mathbf{v}\in |T|$ there is a conformal map $\varphi_\mathbf{v}: U'_\mathbf{v}\to V'_{\mathbf{v},g}$ such that $\varphi_{\mathbf{v}} (U_{\mathbf{v}, g})=V_{\mathbf{v},g}$ and $$\varphi_{\sigma(\mathbf{v})}\circ \widetilde{f}^{r(\mathbf{v})}=g\circ \varphi_\mathbf{v} \text{ holds on } U_{\mathbf{v},g}.$$ \item Furthermore, if $\ell_\mathbf{v}$ denotes the union of $\mathcal{R}_{f_0}(\theta_\mathbf{v})\setminus U'_\mathbf{v}$ and $\varphi_\mathbf{v}^{-1}(\mathcal{R}_g(\mathbf{v}, 0)\cap V'_{\mathbf{v},g})$, then $\ell_\mathbf{v}$ is a ray, that is, a simple curve starting from infinity,
and $\widetilde{f}^{r(\mathbf{v})}(\ell_\mathbf{v})=\ell_{\sigma(\mathbf{v})}$. \end{enumerate} \end{Theorem}
\begin{proof}
Let $a'_\mathbf{v}$ (resp. $a_\mathbf{v}$) denote the unique intersection point of $\mathcal{R}_{f_0}(\theta_\mathbf{v})$ with $\partial U'_\mathbf{v}$ (resp. $\partial U_\mathbf{v}$). Let $V'_{\mathbf{v},g}=\{z\in \mathbb C: |G_g(\mathbf{v},z)|<1\}$ and $V_{\mathbf{v},g}=\{z\in \mathbb C: |G_g(\mathbf{v},z)|<1/\delta(\mathbf{v})\}$, where $G_g$ is the Green function of $g$. Let $a'_{\mathbf{v},g}$ (resp. $a_{\mathbf{v}, g}$) denote the unique intersection point of the external ray $\mathcal{R}_g(\mathbf{v}, 0)$ with $\partial V'_{\mathbf{v}, g}$ (resp. $\partial V_{\mathbf{v}, g}$).
Let $\varphi_\mathbf{v}$ denote the unique Riemann mapping from $U'_\mathbf{v}$ onto $V'_{\mathbf{v},g}$ such that $\varphi_\mathbf{v}(a'_{\mathbf{v}})=a'_{\mathbf{v}, g}$ and $\varphi_\mathbf{v} (a_{\mathbf{v}})=a_{\mathbf{v}, g}$. Define $U_{\mathbf{v},g}=\varphi_{\mathbf{v}}^{-1}(V_{\mathbf{v},g})$ and define
$\widetilde{f}|U_{\mathbf{v},g}= (f_0^{r(\mathbf{v})-1}|f_0(U'_\mathbf{v}))^{-1}\circ (\varphi_{\sigma(\mathbf{v})}^{-1}\circ g\circ \varphi_\mathbf{v})$, which is a holomorphic proper map of degree $\delta(\mathbf{v})$. Finally, define $\widetilde{f}$ on each annulus $U'_{\mathbf{v}}\setminus U_{\mathbf{v}, g}$ so that $\widetilde{f}$ is a quasi-regular covering map from this annulus to $f_0(U'_{\mathbf{v}}\setminus U_{\mathbf{v}})$ of degree $\delta(\mathbf{v})$ and $\widetilde{f}$ maps the arc $\varphi^{-1}_{\mathbf{v}}(\mathcal{R}_g(\mathbf{v},0)\cap (V_{\mathbf{v},g}'\setminus V_{\mathbf{v},g}))$ onto the arc $f_0(\mathcal{R}_{f_0}(\theta_\mathbf{v})\cap (U_\mathbf{v}'\setminus U_\mathbf{v}))$. All the desired properties are easily checked. \end{proof}
\begin{proof}[Proof of the Main Theorem (Surjectivity)] Let $\widetilde f$ be the quasi-regular map constructed in Theorem~\ref{thm:tildef}, and let $C\ge 1$ be the maximal dilatation of $\widetilde{f}$. Let us check that it satisfies the conditions of Theorem~\ref{thm:Thurston}. Firstly, it is a quasi-regular polynomial and $\bar{\partial} \widetilde{f}=0$ a.e. on $\mathbb C\setminus A$, where
$$A:=\bigcup\limits_{\mathbf{v} \in |T|} U'_{\mathbf{v}}\backslash U_{\mathbf{v},g}\subset \bigcup_{\mathbf{v}} Y_{N_0+N}(\mathbf{v}).$$ To construct the set $\mathcal{B}$, let $$R(A)=\{x\in A:\exists n\ge 1\text{ such that } \widetilde{f}^n(x)\in A\}$$ be the return domain to $A$ under $\widetilde{f}$. For every $x \in R(A)$, there is a smallest integer $k=k(x)$ such that $\widetilde{f}^k(x) \in \bigcup\limits_{\mathbf{v}} Y_{N_0}(\mathbf{v})\backslash \overline{Y_{N_0+r(\mathbf{v})}(\mathbf{v})}=:\Omega$. It is easy to see that $Q:=\sup\limits_{x \in R(A)}k(x)<\infty$. Let $$E=\left\{z\in \Omega: \exists n\ge 1, \text{ such that } \widetilde{f}^n(z)\in \bigcup_{\mathbf{v}}Y_{N_0}(\mathbf{v})\right\},$$ and let $\mathcal{B}$ be the union of components of $D(E)$ which intersect $R(A)$. So $\mathcal{B}\supset R(A)$. Consequently, if $\widetilde{f}^j(z)\not\in \mathcal{B}$ for $0\le j<n$, then $\#\{0\le j<n: \widetilde{f}^j(z)\in A\}\le 1$. So the condition ($\ast$) in Theorem~\ref{thm:Thurston} holds with $T=1$.
Let us show that $\mathcal{B}$ is a free $M$-nice set for some $M>0$. To this end, we first observe that $E$ and hence $\mathcal{B}$ is a nice set of $\widetilde{f}$. Since $\overline{\Omega}$ is disjoint from the post-critical set of $\widetilde{f}$ and $\mathcal{B}\subset \bigcup_{k=0}^Q \widetilde{f}^{-k}(\Omega)$, $\mathcal{B}$ is free. Fix a component $B$ of $\mathcal{B}$, let $s$ be the entry time of $B$ into $E$, and let $t$ denote the return time of $\widetilde{f}^{s}(B)$ into $\bigcup_{\mathbf{v}} Y_{N_0}(\mathbf{v})$. Then $\widetilde{f}^{s+t}$ maps $B$ homeomorphically onto a component of $\bigcup_{\mathbf{v}}Y_{N_0}(\mathbf{v})$
and the map $\widetilde{f}^{s+t}|B$ is $C$-qc. In fact, for each $x\in B$, $\#\{0\le j<s: \widetilde{f}^j(x)\in A\}\le 1$ and $\widetilde{f}^t|\widetilde{f}^s(B)=f_0^t|\widetilde{f}^s(B)$ is conformal. Our assumption on $N_1$ and $N_0$ ensures that $B\subset \mathcal{B}\subset \bigcup_{\mathbf{v}}Y_{N_0+N}(\mathbf{v})$, so that $\widetilde{f}^{s+t} (D(\mathcal{B})\cap B)\subset L_{N_0+N}$. It follows from Lemma~\ref{qc1} and Theorem~3' that
\[\mathcal{QD}(D(\mathcal {B})\cap B, B) \le C^{2}\max_{\mathbf{v}\in |T|}\mathcal{QD}(L_{N_0+N}\cap Y_{N_0}(\mathbf{v}),Y_{N_0}(\mathbf{v}))=:M<\infty.\]
Since $\widetilde{f}^{s+t}|B$ extends to a $C$-qc map onto a neighborhood of $\widetilde{f}^{s+t}(B)$, enlarging $M$ if necessary, we have that $M\text{area}(B)\ge \text{diam}(B)^2$ and $M\text{area}(B\cap D(\mathcal{B}))>\text{area}(B)$. This proves that $\mathcal{B}$ is $M$-nice.
So by Theorem~\ref{thm:Thurston}, there is a continuous surjective map $h$ and a map $f\in \text{Poly}(d)$
such that $f\circ h=h\circ \widetilde{f}$. The map $h$ is holomorphic and $h(z)=z+o(1)$ near infinity. Near $\infty$, $\widetilde{f}=f_0$. Thus $h=\phi_f\circ \phi_{f_0}^{-1}$ near $\infty$, where $\phi_f$ and $\phi_{f_0}$ are the B\"ottcher map for $f$ and $f_0$ respectively. It follows that $h(\ell_\mathbf{v})$ is the external ray $\mathcal{R}_f(\theta_\mathbf{v})$. By Proposition~\ref{prop:com}, $f\in \mathcal{C}(f_0)$ and $F:\bigcup_{\mathbf{v}}\{\mathbf{v}\}\times h(Y_{N_0+r(\mathbf{v})}(\mathbf{v}))\to \bigcup_{\mathbf{v}}\{\mathbf{v}\}\times h(Y_{N_0}(\mathbf{v}))$, $F|h(Y_{N_0+r(\mathbf{v})}(\mathbf{v}))=f^{r(\mathbf{v})}$, is a $\lambda(f_0)$-renormalization of $f$.
In order to show that $\chi(f)=g$, we need to show that $F$ and $\widetilde{F}$ are hybrid equivalent. Let us consider the associated maps $\widetilde{\textbf{F}}: \bigcup_{\mathbf{v}} Y_{N_0+r(\mathbf{v})}(\mathbf{v})\to \bigcup_{\mathbf{v}} Y_{N_0}(\mathbf{v})$ and
$\textbf{F}:\bigcup_{\mathbf{v}} h(Y_{N_0+r(\mathbf{v})}(\mathbf{v}))\to \bigcup_{\mathbf{v}}h(Y_{N_0}(\mathbf{v}))$, where $\widetilde{\textbf{F}}|Y_{N_0+r(\mathbf{v})}(\mathbf{v})=\widetilde{f}^{r(\mathbf{v})}|Y_{N_0+r(\mathbf{v})}(\mathbf{v})$ and $\textbf{F}|h(Y_{N_0+r(\mathbf{v})}(\mathbf{v}))=f^{r(\mathbf{v})}|h(Y_{N_0+r(\mathbf{v})}(\mathbf{v}))$. It suffices to show that there is a qc map $H:\bigcup_{\mathbf{v}} U'_\mathbf{v}\to \bigcup_{\mathbf{v}} h(U'_\mathbf{v})$ such that $H\circ \widetilde{\textbf{F}}=\textbf{F}\circ H$ and such that $\bar{\partial}H=0$ a.e. on the filled Julia set $K(\widetilde{\textbf{F}})$ of $\widetilde{\textbf{F}}$. Note that $h\circ \widetilde{\textbf{F}}= \textbf{F}\circ h$ and $K(\widetilde{\textbf{F}})$ contains the postcritical set of $\widetilde{\textbf{F}}$. By Theorem~\ref{thm:Thurston}, there is a qc map $\lambda_0$ such that $\lambda_0=h$ outside $\mathcal{B}':=\bigcup_{k=0}^\infty \widetilde{f}^{-k}(\mathcal{B})$ and such that $\bar{\partial } \lambda_0=0$ a.e. on $\{z: \widetilde{f}^n(z)\notin A, \forall n\ge 0\}$. In particular, $\bar{\partial} \lambda_0=0$ a.e. on $K(\widetilde{\textbf{F}})$. Since $\mathcal{B}'$ is a countable union of Jordan disks with pairwise disjoint closures and it is disjoint from $K(\widetilde{\textbf{F}})$, $\lambda_0$ is homotopic to $h$ rel $K(\widetilde{\textbf{F}})$. Therefore, there is a sequence of qc maps $\lambda_k:\bigcup_{\mathbf{v}} Y_{N_0}(\mathbf{v})\to \bigcup_{\mathbf{v}} h(Y_{N_0}(\mathbf{v}))$, $k=1,2,\ldots$, all homotopic to $h$ rel $K(\widetilde{\textbf{F}})$, such that $\textbf{F}\circ \lambda_{k+1}=\lambda_k \circ \widetilde{\textbf{F}}$ and $\lambda_k=\lambda_0$ on $W:=\bigcup_{\mathbf{v}} (Y_{N_0}(\mathbf{v})\setminus Y_{N_0+r(\mathbf{v})}(\mathbf{v}))$ for $k=0,1,\ldots$. 
Since $\textbf{F}$ is holomorphic, $\widetilde{\textbf{F}}$ is holomorphic outside $A$, and each orbit of $\widetilde{\textbf{F}}$ passes through $A$ at most once, the maximal dilatation of $\lambda_k$ is uniformly bounded. Since $\lambda_k(z)$ eventually stabilizes for each $z$ in the domain of $\widetilde{\textbf{F}}$, $\lambda_k$ converges to a qc map $H$. Moreover, $\bar{\partial }H =0$ holds a.e. on $K(\widetilde{\textbf{F}})$ since so does $\bar{\partial}\lambda_k$ for each $k$. \end{proof}
In order to show that $\mathcal{C}(f_0)$ is connected, we shall make use of the following result. \begin{Theorem}[Branner-Hubbard-Lavaurs]\label{thm:bhl} The set $\mathcal{C}(T)$ is a connected compact set. \end{Theorem} \begin{proof} The proofs in the literature were stated for the case $\mathcal{C}(d)$. For $d=2$ this is due to Douady-Hubbard (\cite{DH1}), the case $d=3$ was proved by Branner-Hubbard (\cite{BH}) and for all $d\ge 3$ this was proved by Lavaurs (\cite{La}). The proof of Lavaurs generalizes to the case of $\mathcal{C}(T)$ in a straightforward way. See also \cite{DP} for a stronger result with a different proof. \end{proof} \begin{proof}[Proof of the Main Theorem (connectivity)] We shall show that if $E$ is a non-empty open and closed subset of $\mathcal{C}(f_0)$, then $\chi(E)$ is a closed subset of $\mathcal{C}(T)$. Together with connectivity of $\mathcal{C}(T)$ and bijectivity of the map $\chi$, this implies that $\mathcal{C}(f_0)$ is connected.
Suppose that $g_n$ is a sequence in $\chi(E)$ and $g_n\to g$ in $\mathcal{C}(T)$. We need to show that $g\in \chi(E)$. Let $f_n=\chi^{-1}(g_n)\in E$. Since $E$ is compact, passing to a subsequence we may assume that $f_n\to f\in E$. As in \cite[Section 7]{DH2}, we may choose hybrid conjugacies $h_n$ between $\lambda(f_0)$-renormalization of $f_n$ and $g_n$ so that the maximal dilatation of $h_n$ is uniformly bounded. Passing to a further subsequence, we see that the $\lambda(f_0)$-renormalization of $f$
is qc conjugate to $g$ respecting the external markings. Thus $f$ is conjugate to $\chi^{-1}(g)$ via a qc map $h:\mathbb C\to \mathbb C$ which is conformal outside the filled Julia set of $f$ and satisfies $h(z)=z+o(1)$ near infinity. The Beltrami path connecting $f$ and $\chi^{-1}(g)$ is contained in $E$ and thus $g=\chi(\chi^{-1}(g))\in \chi(E)$. \end{proof}
\end{document} |
\begin{document}
\begin{titlepage} \begin{center}
{\large \bf Path integral approach \\ to \\the full Dicke model}\\
{\large\em M.~Aparicio Alcalde,\footnotemark[1] and
B. M. Pimentel\,\footnotemark[2]}\\
Instituto de F\'{\i}sica Te\'orica, UNESP - S\~ao Paulo State University,\\
Caixa Postal 70532-2, 01156-970 S\~ao Paulo, SP, Brazil. \\
05/11/2010\\
\subsection*{Abstract} \end{center}
\baselineskip .1in
The full Dicke model describes a system of $N$ identical two-level atoms coupled to a single-mode quantized bosonic field. The model contains rotating and counter-rotating coupling terms between the atoms and the bosonic field, with coupling constants $g_1$ and $g_2$, respectively. We study finite temperature properties of the model using the path integral approach and functional methods. In the thermodynamic limit, $N\rightarrow\infty$, the system exhibits a phase transition from the normal to the superradiant phase at certain critical values of the temperature and coupling constants. We distinguish three particular cases: the first corresponds to the rotating wave approximation, in which $g_1\neq 0$ and $g_2=0$; the second corresponds to the case $g_1=0$ and $g_2\neq 0$; in these two cases the model has a continuous symmetry. The third corresponds to the case $g_1\neq 0$ and $g_2\neq 0$, in which the model has a discrete symmetry. The phase transition in each case is related to the spontaneous breaking of the respective symmetry. For each of these three cases, we find the asymptotic behaviour of the partition function in the thermodynamic limit and the collective spectrum of the system in the normal and superradiant phases. In the rotating wave approximation, and also in the case $g_1=0$ and $g_2\neq 0$, the collective spectrum in the superradiant phase has a zero-energy value, corresponding to the Goldstone mode associated with the continuous symmetry breaking of the model. Our analysis and results are valid in the limit of zero temperature, $\beta\rightarrow\infty$, in which the model exhibits a quantum phase transition.
PACS numbers: 03.65.Db, 05.30.Jp, 73.43.Nq, 73.43.Lp
\footnotetext[1]{e-mail: \,aparicio@ift.unesp.br} \footnotetext[2]{e-mail:\,\,pimentel@ift.unesp.br}
\end{titlepage}
\baselineskip .18in
\section{Introduction}
\quad $\,\,$ The Dicke model is an interesting spin-boson model because, while being simple, it exhibits the superradiance effect \cite{dicke}. This model describes a system of $N$ identical two-level atoms coupled to a single-mode radiation field, simplified according to the rotating wave approximation. In this context, superradiance is characterized as coherent spontaneous radiation emission with intensity proportional to $N^2$. Thermodynamic properties of the Dicke model were studied in the thermodynamic limit, $N\rightarrow\infty$. It is found that the model exhibits a second order phase transition from the normal to the superradiant phase at a certain critical temperature and sufficiently large values of the coupling constant between the atoms and the field \cite{hepp} \cite{wang}. The influence of the counter-rotating term on the thermodynamics of the Dicke model was also studied in the literature \cite{hepp2} \cite{duncan}. Using different coupling constants for the rotating and the counter-rotating couplings, the critical temperature and the free energy of the model were calculated \cite{hioe} \cite{pimentel1}. We call this generalization the full Dicke model. The path integral approach and functional methods were used to study spin-boson problems, yielding the critical temperature, free energy and collective spectrum of the models in the thermodynamic limit \cite{moshchi} \cite{yarunin}. With this approach, Popov and Fedotov \cite{popov1} \cite{popov2} rigorously calculated the partition function and collective spectrum of the Dicke model in the normal and superradiant phases. The relation between the phase transition and continuous symmetry breaking in the Dicke model was pointed out in reference \cite{popov3}. The full Dicke model was studied using the path integral approach in \cite{aparicio1}, where the authors find the asymptotic behaviour of the partition function and the collective spectrum in the normal phase. 
Using the same approach, thermodynamic properties of some other spin-boson models were also studied \cite{aparicio2} \cite{aparicio3}.
In this paper, using the path integral approach and functional methods, we find the asymptotic behaviour of the partition function and the collective spectrum of the full Dicke model in the thermodynamic limit, $N\rightarrow\infty$, in the normal and superradiant phases. The full Dicke model exhibits a phase transition from the normal to the superradiant phase at certain critical values of the temperature and coupling constants. In our study we distinguish three particular cases. The first corresponds to the rotating wave approximation, $g_1\neq 0$ and $g_2=0$; in this case the model has a continuous symmetry, associated with the conservation of the sum of the excitation number of the $N$ atoms and the excitation number of the boson field. The second corresponds to the model with $g_1=0$ and $g_2\neq 0$; in this case the model also has a continuous symmetry, associated with the conservation of the difference between the excitation number of the $N$ atoms and the excitation number of the boson field. The third corresponds to the case $g_1\neq 0$ and $g_2\neq 0$, in which the model has a discrete symmetry. The phase transition in each case is related to the spontaneous breaking of the respective symmetry. In the rotating wave approximation, and also in the case $g_1=0$ and $g_2\neq 0$, the collective spectrum in the superradiant phase has a zero-energy value, corresponding to the Goldstone mode associated with the breaking of the respective continuous symmetry. The collective spectrum obtained in this paper remains valid in the zero temperature limit, corresponding to the case of a quantum phase transition.
The practical realization of the full Dicke model in the laboratory was discussed by Dimer {\it et al.} \cite{dimer}. Since the radiation frequency and the energy separation between the two levels of the atoms exceed the coupling constant strength by many orders of magnitude, the counter-rotating terms have little effect on the dynamics. These authors proposed that in cavities containing $N$ qubits, a single mode of the quantized field and classical fields (lasers), it is possible to obtain an effective Hamiltonian equal to the full Dicke Hamiltonian. The parameters of this effective Hamiltonian can be controlled, and it is possible to operate in the phase transition regime. Other authors stressed the importance for quantum information technology of the experimental realization of generalizations of the Dicke model in cavity quantum electrodynamics \cite{harkonen} \cite{baumann}.
The quantum phase transition of the Dicke model, in the thermodynamic limit, is studied by diagonalizing the Hamiltonian \cite{hillery}. For this purpose the Holstein-Primakoff map is applied, which represents the total angular momentum of the $N$ atoms by a single bosonic field. These authors find the collective spectrum in the normal phase. A similar method was used by Emary and Brandes to study the connection between the quantum phase transition and quantum chaos in the Dicke model without using the rotating wave approximation \cite{emary2}. They find the collective spectrum of the model in the normal and superradiant phases, as well as other quantities characteristic of quantum chaos. The relationship between entanglement and the quantum phase transition in the Dicke model was also studied \cite{ent1} \cite{ent2}; the authors find that the atom-field entanglement entropy diverges at the critical point of the phase transition. Studies of this relationship between entanglement and quantum phase transitions for other collective models exist in the literature \cite{vidal}.
This paper is organized as follows. In section 2, we introduce the full Dicke Hamiltonian and study its symmetries. In section 3, we introduce a map between the spin operators of each atom and bilinear forms of fermionic operators, defining the fermion full Dicke model. In section 4, we introduce the path integral approach for the full Dicke model; using functional methods we obtain the critical temperature and the asymptotic behaviour of the partition function in some particular cases of the model. In section 5, the partition function and the collective spectrum of the model are presented in the normal phase. In section 6, the partition function and the collective spectrum of the model are presented in the superradiant phase. In section 7 we discuss our conclusions. In the paper we use $k_{B}=c=\hbar=1$.
\section{The full Dicke Hamiltonian and symmetries} \quad $\,\,$ The full Dicke model describes a system of $N$ identical two-level atoms coupled to a single-mode quantized bosonic field. The model includes rotating and counter-rotating coupling terms between the atoms and the bosonic field in the Hamiltonian, with coupling constants $g_1$ and $g_2$, respectively. The Hamiltonian of the full Dicke model can be written as
\begin{eqnarray} H\,=\,\frac{\Omega}{2}\,\sum_{j=1}^{N}\,\sigma_{(j)}^z+\omega_{0}\,b^{\dagger}\,b\,+\frac{g_1}{\sqrt{N}} \sum_{j=1}^{N}\, \Bigl(b\, \sigma_{(j)}^{+}+b^{\dagger}\sigma_{(j)}^{-}\Bigr)+\,\frac{g_2}{\sqrt{N}} \sum_{j=1}^{N}\, \Bigl(b\, \sigma_{(j)}^{-}+b^{\dagger}\sigma_{(j)}^{+}\Bigr)\,. \label{fullDHamil} \end{eqnarray}
In the above equation we define the operators $\sigma_{(j)}^{\pm}=\frac{1}{2}\,(\sigma_{(j)}^{1} \pm i\,\sigma_{(j)}^{2})$, where the operators $\sigma_{(j)}^1$, $\sigma_{(j)}^2$ and $\sigma_{(j)}^z=\sigma_{(j)}^3$ satisfy the commutation relations $[\sigma_{(j)}^p,\sigma_{(j)}^q]= 2\,i\,\epsilon^{pqr}\,\sigma_{(j)}^r$ with $p,q,r=1,2,3$. Therefore, $[\sigma_{(j)}^+,\sigma_{(j)}^-]=\sigma_{(j)}^z$ and $[\sigma_{(j)}^z,\sigma_{(j)}^{\pm}]= \pm\,2\,\sigma_{(j)}^{\pm}$. Here $b$ and $b^{\dagger}$ are the boson annihilation and creation operators of mode excitations, which satisfy the usual commutation relation rules.
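As a quick sanity check, not part of the derivation itself, the single-atom algebra above can be verified numerically with the standard $2\times 2$ Pauli matrices:

```python
import numpy as np

# Pauli matrices for one two-level atom
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)   # sigma^z

# raising/lowering operators as defined in the text
sp = 0.5 * (s1 + 1j * s2)
sm = 0.5 * (s1 - 1j * s2)

comm = lambda a, b: a @ b - b @ a

assert np.allclose(comm(s1, s2), 2j * s3)   # [sigma^1, sigma^2] = 2 i sigma^3
assert np.allclose(comm(sp, sm), s3)        # [sigma^+, sigma^-] = sigma^z
assert np.allclose(comm(s3, sp), 2 * sp)    # [sigma^z, sigma^+] = 2 sigma^+
assert np.allclose(comm(s3, sm), -2 * sm)   # [sigma^z, sigma^-] = -2 sigma^-
```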
Let us define three different operators. The first one is the operator $N$, defined by
\begin{eqnarray} N=b^{\dagger}b+\frac{1}{2}\sum_{i=1}^N\sigma_{(i)}^z\,. \label{Nexc1} \end{eqnarray}
The second one is the operator $N_-$, defined by
\begin{eqnarray} N_-=b^{\dagger}b-\frac{1}{2}\sum_{i=1}^N\sigma_{(i)}^z\,. \label{N-1} \end{eqnarray}
Finally, we define the parity operator $\Pi$ by
\begin{eqnarray} \Pi=e^{i\,\pi\,N}\,, \label{parity1} \end{eqnarray}
with the operator $N$ defined in Eq. (\ref{Nexc1}). In the particular case $g_1\neq 0$ and $g_2=0$, which corresponds to the rotating wave approximation, it is possible to show that $[H,N]=0$. In the particular case $g_1=0$ and $g_2\neq 0$, it is possible to show that $[H,N_-]=0$. Moreover, $[H,\Pi]=0$ for arbitrary non-negative values of $g_1$ and $g_2$. These commutation relations of the Hamiltonian with each operator defined above correspond to symmetries of the model in each case. It is interesting to see that, in the case $g_1\neq 0$ and $g_2\neq 0$, we only have $[H,\Pi]=0$, which means that the system only has the parity symmetry. The operators defined by $J^p=\frac{1}{2}\sum_{i=1}^N\sigma_{(i)}^p$ with $p=1,2,3$, satisfy the usual angular momentum
commutation relations. The Hilbert space corresponding to the atoms states can be generated by the basis $\{|j\,m\rangle\}$ with $j=N/2$ and $m=-j,-j+1,...,j-1,j$; each basis state satisfies
$J^3|j\,m\rangle =m|j\,m\rangle$ and ${\bf J}^2|j\,m\rangle =j(j+1)|j\,m\rangle$. The Hilbert space,
which the photon states are defined, can be generated by the basis $\{|n\rangle\}$, with their elements satisfying
$b^{\dagger}b|n\rangle =n|n\rangle$, in this case, $n$ is the number of
photons. Now we are able to construct a basis for the total system as a tensor product of the bases introduced above, i.e., the set $\{|n\rangle \otimes |j\,m\rangle\}$. The symmetries mentioned above are related to conserved quantities. In the case $g_1\neq 0$ and $g_2=0$, with $[H,N]=0$, the
excitation number of the system, $n+m$, is conserved. This means that a state $|n\rangle \otimes |j\,m\rangle$ only evolves toward states $|n'\rangle \otimes |j\,m'\rangle$ with $n'+m'=n+m$. In a similar fashion, for the case $g_1=0$ and $g_2\neq 0$, with $[H,N_-]=0$, the difference of excitation numbers, $n-m$, is conserved. When $g_1\neq 0$ and $g_2\neq 0$, for which $[H,\Pi]=0$, the value $e^{i\,\pi\, (n+m)}$ is conserved. It means that
a state $|n\rangle \otimes |j\,m\rangle$ only evolves toward states $|n'\rangle \otimes |j\,m'\rangle$ with $n+m$ and $n'+m'$ both even, or both odd. In all the mentioned cases, the phase transition is related to the spontaneous breaking of the respective symmetry. In the further analysis we shall see that the symmetry associated with the commutation relation $[H,\Pi]=0$ is discrete, while the symmetries associated with the commutation relations $[H,N]=0$ and $[H,N_-]=0$ are continuous. In the cases of continuous symmetry breaking the Goldstone theorem holds, with the appearance of a zero-energy value in the phase with the broken symmetry.
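These commutation relations can also be checked numerically on a small truncated system. The following sketch is our illustration only: the boson Fock-space cutoff, the number of atoms, and the parameter values are arbitrary choices, not taken from the paper. It builds the Hamiltonian of Eq. (\ref{fullDHamil}) for two atoms and verifies the three cases:

```python
import numpy as np

n_max, N_atoms = 4, 2        # boson cutoff and atom number (illustration only)
Omega, w0 = 1.0, 1.3         # arbitrary sample frequencies

def kron_all(ops):
    out = ops[0]
    for o in ops[1:]:
        out = np.kron(out, o)
    return out

# truncated boson annihilation operator and single-atom spin matrices
b = np.diag(np.sqrt(np.arange(1.0, n_max + 1)), k=1).astype(complex)
Ib = np.eye(n_max + 1, dtype=complex)
sz = np.diag([1.0, -1.0]).astype(complex)
sp = np.array([[0, 1], [0, 0]], dtype=complex)   # sigma^+
sm = sp.conj().T                                  # sigma^-
I2 = np.eye(2, dtype=complex)

def atom_op(op, i):
    # op on atom i, identity on the boson mode and the other atoms
    return kron_all([Ib] + [op if j == i else I2 for j in range(N_atoms)])

B = kron_all([b] + [I2] * N_atoms)
Bd = B.conj().T
Sz = sum(atom_op(sz, i) for i in range(N_atoms))

def H(g1, g2):
    h = w0 * Bd @ B + 0.5 * Omega * Sz
    for i in range(N_atoms):
        h = h + (g1 / np.sqrt(N_atoms)) * (B @ atom_op(sp, i) + Bd @ atom_op(sm, i))
        h = h + (g2 / np.sqrt(N_atoms)) * (B @ atom_op(sm, i) + Bd @ atom_op(sp, i))
    return h

Nop = Bd @ B + 0.5 * Sz       # the operator N defined in the text
Nminus = Bd @ B - 0.5 * Sz    # the operator N_-
# Nop is diagonal here, so the parity operator Pi = exp(i pi N) is too
Pi = np.diag(np.exp(1j * np.pi * np.diag(Nop).real))

comm = lambda A, C: np.linalg.norm(A @ C - C @ A)
assert comm(H(0.7, 0.0), Nop) < 1e-10      # RWA: [H, N] = 0
assert comm(H(0.0, 0.4), Nminus) < 1e-10   # [H, N_-] = 0
assert comm(H(0.7, 0.4), Pi) < 1e-10       # parity holds for any g1, g2
assert comm(H(0.7, 0.4), Nop) > 1e-3       # but N is no longer conserved
```

The rotating term changes $n+m$ by zero and the counter-rotating term changes it by $\pm 2$, which is why the parity check passes even when both couplings are switched on.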
\section{The fermion full Dicke model} \quad $\,\,$
Let us define the fermion full Dicke model. For this purpose, let us define the raising and lowering Fermi operators $\alpha^{\dagger}_{i}$, $\alpha_{i}$, $\beta^{\dagger}_{i}$ and $\beta_{i}$, which satisfy the anticommutation relations $\alpha_{i}\alpha^{\dagger}_{j}+\alpha^{\dagger}_{j}\alpha_{i} =\delta_{ij}$ and $\beta_{i}\beta^{\dagger}_{j}+\beta^{\dagger}_{j}\beta_{i} =\delta_{ij}$. In this analysis, we use a representation of the operators $\sigma_{(j)}^z$, $\sigma_{(j)}^+$ and $\sigma_{(j)}^-$ by the following bilinear combinations of Fermi operators: $\alpha^{\dagger}_{i}\alpha_{i} -\beta^{\dagger}_{i}\beta_{i}$, $\alpha^{\dagger}_{i}\beta_{i}$ and $\beta^{\dagger}_{i}\alpha_{i}$. The correspondence is given by
\begin{equation} \sigma_{(i)}^{z}\longrightarrow \alpha_{i}^{\dagger}\alpha_{i} -\beta_{i}^{\dagger}\beta_{i}\, , \label{34} \end{equation}
\begin{equation} \sigma_{(i)}^{+}\longrightarrow \alpha_{i}^{\dagger}\beta_{i}\, , \label{35} \end{equation}
and
\begin{equation} \sigma_{(i)}^{-}\longrightarrow \beta_{i}^{\dagger}\alpha_{i}\, . \label{36} \end{equation}
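The correspondence above can be illustrated with an explicit matrix realization. The following sketch (our illustration, using a Jordan-Wigner construction of the two Fermi modes of a single atom on its four-dimensional Fock space) checks that the bilinears reproduce the spin commutation relations:

```python
import numpy as np

# one-mode fermion annihilation operator (basis: |0>, |1>)
a = np.array([[0, 1], [0, 0]], dtype=complex)
string = np.diag([1.0, -1.0]).astype(complex)    # (-1)^n Jordan-Wigner string
I2 = np.eye(2, dtype=complex)

# two fermion modes alpha, beta on the 4-dimensional Fock space of one atom
alpha = np.kron(a, I2)
beta = np.kron(string, a)
ad, bd = alpha.conj().T, beta.conj().T

anti = lambda A, B: A @ B + B @ A
assert np.allclose(anti(alpha, ad), np.eye(4))   # {alpha, alpha^dag} = 1
assert np.allclose(anti(beta, bd), np.eye(4))    # {beta, beta^dag} = 1
assert np.allclose(anti(alpha, beta), 0)         # different modes anticommute

# bilinear forms representing sigma^z, sigma^+, sigma^- as in the text
Sz = ad @ alpha - bd @ beta
Sp = ad @ beta
Sm = bd @ alpha

comm = lambda A, B: A @ B - B @ A
assert np.allclose(comm(Sp, Sm), Sz)        # [sigma^+, sigma^-] = sigma^z
assert np.allclose(comm(Sz, Sp), 2 * Sp)    # [sigma^z, sigma^+] = 2 sigma^+
assert np.allclose(comm(Sz, Sm), -2 * Sm)   # [sigma^z, sigma^-] = -2 sigma^-
```

Note that the spin algebra holds on the whole four-dimensional space, but only the singly-occupied states $|n_\alpha n_\beta\rangle=|10\rangle,|01\rangle$ correspond to the physical atomic states.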
Using the representation given in Eq. (\ref{34}), Eq. (\ref{35}) and Eq. (\ref{36}) in the full Dicke Hamiltonian given by Eq. (\ref{fullDHamil}), we define the Hamiltonian of the fermion full Dicke model, $H_F$. Thus, we have
\begin{eqnarray} H_F=\omega_0\; b^{\dagger}b+ \frac{\Omega}{2}\sum_{i=1}^N \Bigl(\alpha_i^{\dagger}\alpha_i -\beta_i^{\dagger}\beta_i\Bigr) +\frac{g_1}{\sqrt{N}}\sum_{i=1}^N \Bigl(b\,\alpha_i^{\dagger}\beta_i \,+\, b^{\dagger}\, \beta_i^{\dagger}\alpha_i \Bigr)+\, \frac{g_2}{\sqrt{N}}\sum_{i=1}^N \Bigl(b^{\dagger}\,\alpha_i^{\dagger}\beta_i \,+\, b\, \beta_i^{\dagger}\alpha_i \Bigr)\, . \label{37} \end{eqnarray}
We are interested in studying thermodynamic properties of the system, therefore we must find the partition function $Z$. It is important to note that the Hamiltonians $H$ and $H_F$ are defined on different spaces. Each operator $\sigma_{(i)}^{p}$ appearing in the Hamiltonian $H$ acts on a two-dimensional Hilbert space, whereas the Fermi operators $\alpha^{\dagger}_{i}$, $\alpha_{i}$, $\beta^{\dagger}_{i}$ and $\beta_{i}$ appearing in the Hamiltonian $H_F$ act on a four-dimensional Fock space. The following property relates the partition function of the full Dicke model with the partition function of the fermion full Dicke model:
\begin{eqnarray} Z=Tr\Bigl(\exp(-\beta\,H)\Bigr)=i^N\,Tr\left(\exp\left(-\beta\,H_F-\frac{i\pi}{2}\,N_F\right)\right)\,. \label{partitionsfunctions} \end{eqnarray}
In this last relation $H$ is given by Eq. (\ref{fullDHamil}), $H_F$ is given by Eq. (\ref{37}) and the operator $N_F$ is defined by
\begin{eqnarray} N_F=\sum_{i=1}^{N}(\alpha_{i}^{\dagger}\alpha_{i}+\beta_{i}^{\dagger}\beta_{i})\,. \end{eqnarray}
The traces in Eq. (\ref{partitionsfunctions}) are taken over the respective space of each Hamiltonian. The relation given by Eq. (\ref{partitionsfunctions}) lets us express the partition function of the full Dicke model $Z$ in terms of the fermion full Dicke Hamiltonian given by Eq. (\ref{37}).
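The trace identity of Eq. (\ref{partitionsfunctions}) can be verified numerically for a small system. The sketch below is our illustration only, with a single atom ($N=1$), a boson mode truncated at an arbitrary cutoff and sample parameter values; the imaginary chemical potential makes the unphysical doubly- and un-occupied fermionic states cancel, so both traces agree:

```python
import numpy as np

n_max = 5
beta, Omega, w0, g1, g2 = 0.9, 1.0, 1.3, 0.7, 0.4   # arbitrary sample values

b = np.diag(np.sqrt(np.arange(1.0, n_max + 1)), k=1).astype(complex)
Ib = np.eye(n_max + 1, dtype=complex)
I2 = np.eye(2, dtype=complex)

# spin-side Hamiltonian H for N = 1 (boson factor first in the kron products)
sz = np.diag([1.0, -1.0]).astype(complex)
sp = np.array([[0, 1], [0, 0]], dtype=complex)
H = (w0 * np.kron(b.conj().T @ b, I2) + 0.5 * Omega * np.kron(Ib, sz)
     + g1 * (np.kron(b, sp) + np.kron(b.conj().T, sp.conj().T))
     + g2 * (np.kron(b, sp.conj().T) + np.kron(b.conj().T, sp)))

# fermion-side Hamiltonian H_F on boson (x) four-dimensional fermionic space
a = np.array([[0, 1], [0, 0]], dtype=complex)
string = np.diag([1.0, -1.0]).astype(complex)
alpha, bet = np.kron(a, I2), np.kron(string, a)
Szf = alpha.conj().T @ alpha - bet.conj().T @ bet
Spf = alpha.conj().T @ bet
Smf = bet.conj().T @ alpha
I4 = np.eye(4, dtype=complex)
HF = (w0 * np.kron(b.conj().T @ b, I4) + 0.5 * Omega * np.kron(Ib, Szf)
      + g1 * (np.kron(b, Spf) + np.kron(b.conj().T, Smf))
      + g2 * (np.kron(b, Smf) + np.kron(b.conj().T, Spf)))
NF = np.kron(Ib, alpha.conj().T @ alpha + bet.conj().T @ bet)

def expm_h(M, c):
    # exp(c*M) for Hermitian M, via eigendecomposition
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.exp(c * w)) @ V.conj().T

Z = np.trace(expm_h(H, -beta))
# [H_F, N_F] = 0, so the combined exponential factorizes
ZF = np.trace(expm_h(HF, -beta) @ expm_h(NF, -1j * np.pi / 2))
assert abs(1j ** 1 * ZF - Z) < 1e-8 * abs(Z)   # Z = i^N Tr exp(-beta H_F - i pi N_F/2)
```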
\section{The partition function with path integral approach}
In this section we perform calculations in order to obtain an asymptotic expression for the partition function $Z$ of the full Dicke model in the limit $N\rightarrow\infty$. For this purpose we use the path integral approach and functional methods. Let us define the Euclidean action $S$ of the full Dicke model in the following form
\begin{equation} S=\int_0^{\beta} d\tau \left(b^*(\tau)\,\partial_{\tau}b(\tau)+ \sum_{i=1}^{N} \Bigl(\alpha^*_i(\tau)\,\partial_{\tau}\alpha_i(\tau) +\beta^*_i (\tau)\,\partial_{\tau}\beta_i(\tau)\Bigr)\right) -\int_0^{\beta}d\tau H_{F}(\tau)\,, \label{66} \end{equation}
where $H_{F}(\tau)$ is the Hamiltonian of the fermion full Dicke model, given by
\begin{eqnarray} H_{F}(\tau)\,=\,\omega_{0}\,b^{\,*}(\tau)\,b(\tau)\,+ \,\frac{\Omega}{2}\,\displaystyle\sum_{i\,=\,1}^{N}\, \biggl(\alpha^{\,*}_{\,i}(\tau)\,\alpha_{\,i}(\tau)\,- \,\beta^{\,*}_{\,i}(\tau)\beta_{\,i}(\tau)\biggr)\,+ \nonumber\\ +\,\frac{g_{\,1}}{\sqrt{N}}\,\displaystyle\sum_{i\,=\,1}^{N}\, \biggl(\alpha^{\,*}_{\,i}(\tau)\,\beta_{\,i}(\tau)\,b(\tau)\,+ \alpha_{\,i}(\tau)\,\beta^{\,*}_{\,i}(\tau)\,b^{\,*}(\tau)\,\biggr)\,+ \nonumber\\ +\,\frac{g_{\,2}}{\sqrt{N}}\,\displaystyle\sum_{i\,=\,1}^{N}\, \biggl(\alpha_{\,i}(\tau)\,\beta^{\,*}_{\,i}(\tau)\,b(\tau)\,+ \,\alpha^{\,*}_{\,i}(\tau)\,\beta_{\,i}(\tau)\,b^{\,*}(\tau)\biggr). \label{66a} \end{eqnarray}
Let us define the formal quotient of the partition function of the full Dicke model and the partition function of the free Dicke model. Therefore we are interested in calculating the following quantity
\begin{equation} \frac{Z}{Z_0}=\frac{\int [d\eta]\,\exp{\left(\,S-\frac{i\pi}{2\beta}\int_0^{\beta}n(\tau)d\tau\right)}}{\int [d\eta]\,\exp{\left(\,S_{0}-\frac{i\pi}{2\beta}\int_0^{\beta}n(\tau)d\tau\right)}}\, , \label{65} \end{equation}
where the function $n(\tau)$ is defined by
\begin{eqnarray} n(\tau)=\sum_{i=1}^{N} \Bigl(\alpha^{\,*}_i(\tau)\,\alpha_i(\tau)+\beta^{\,*}_i(\tau)\beta_i(\tau)\Bigr)\,, \end{eqnarray}
$S=S(b,b^*,\alpha,\alpha^{\dagger},\beta,\beta^{\dagger})$ is the Euclidean action of the full Dicke model given by Eq. (\ref{66}), $S_0=S_{0}(b,b^*,\alpha,\alpha^{\dagger},\beta,\beta^{\dagger})$ is the free Euclidean action for the free single bosonic mode and the free atoms, i.e., the expression of the complete action $S$ taking $g_1=g_2=0$ and finally $[d\eta]$ is the functional measure.
The functional integrals involved in Eq. (\ref{65}), are functional integrals with respect to the complex functions $b^*(\tau)$ and $b(\tau)$ and Fermi fields $\alpha_i^*(\tau)$, $\alpha_i(\tau)$, $\beta_i^*(\tau)$ and $\beta_i(\tau)$. Since we are using thermal equilibrium boundary conditions, in the imaginary time formalism, the integration variables in Eq. (\ref{65}) obey periodic boundary conditions for the Bose field, i.e., $b(\beta)=b(0)$ and anti-periodic boundary conditions for Fermi fields i.e., $\alpha_i(\beta)=-\alpha_i(0)$ and $ \beta_i(\beta)=-\beta_i(0)$.
In section 2, we analysed the symmetries of the model by studying commutation relations between the Hamiltonian given by Eq. (\ref{fullDHamil}) and the operators defining the symmetries. Now we are able to analyse the symmetries of the model by studying the invariance of the action given by Eq. (\ref{66}) under symmetry transformations. In this way, let us introduce the following field transformations
\begin{eqnarray} \begin{array}{ccc} b(\tau)\rightarrow\,e^{i\,\gamma}\,b(\tau)\,,\;\; & \alpha(\tau)\rightarrow\,e^{i\,\theta}\,\alpha(\tau)\,,\;\; & \beta(\tau)\rightarrow\,e^{i\,\phi}\,\beta(\tau)\,,\\ \;\;\;\;b^*(\tau)\rightarrow\,e^{-i\,\gamma}\,b^*(\tau)\,,\;\; & \;\;\;\;\alpha^*(\tau)\rightarrow\,e^{-i\,\theta}\,\alpha^*(\tau)\,,\;\; & \;\;\;\;\beta^*(\tau)\rightarrow\,e^{-i\,\phi}\,\beta^*(\tau)\,. \label{symmtranf1} \end{array} \end{eqnarray}
In the case of $g_1\neq 0$ and $g_2=0$, corresponding to the rotating wave approximation, the respective action is invariant under the transformation given by Eq. (\ref{symmtranf1}) taking $\gamma=\theta-\phi$. In the case of $g_1=0$ and $g_2\neq 0$, the corresponding action is invariant under the transformation given by Eq. (\ref{symmtranf1}) taking $\gamma=\phi-\theta$. Finally, in the case of $g_1\neq 0$ and $g_2\neq 0$, the corresponding action is invariant under the transformation given by Eq. (\ref{symmtranf1}) with $\gamma=\theta-\phi=0$ or $\gamma=\theta-\phi=\pi$. In the first two cases, $g_1\neq 0$ and $g_2=0$, and $g_1=0$ and $g_2\neq 0$, the respective actions are invariant under a continuous transformation, $U(1)$, of the boson field $b(\tau)$. In the case of $g_1\neq 0$ and $g_2\neq 0$, the action is invariant under discrete transformations, $Z_2$, of the boson field $b(\tau)$, i.e., $b(\tau)\rightarrow b(\tau)$ and $b(\tau)\rightarrow -b(\tau)$.
Following with the purpose of calculating the quantity $\frac{Z}{Z_0}$ given by Eq. (\ref{65}), let us use the following transformation
\begin{eqnarray} \begin{array}{cc} \alpha_i(\tau)\rightarrow e^{\frac{i\pi}{2\beta}\tau}\,\alpha_i(\tau)\,,\;\;\; & \alpha_i^*(\tau)\rightarrow e^{-\,\frac{i\pi}{2\beta}\tau}\,\alpha_i^*(\tau)\,,\\ \beta_i(\tau)\rightarrow e^{\frac{i\pi}{2\beta}\tau}\,\beta_i(\tau)\,,\;\;\; & \beta_i^*(\tau)\rightarrow e^{-\,\frac{i\pi}{2\beta}\tau}\,\beta_i^*(\tau)\,. \end{array} \label{trans2} \end{eqnarray}
With this last transformation, the term $n(\tau)$ appearing in Eq. (\ref{65}) can be dropped. Therefore, applying the transformation given by Eq. (\ref{trans2}) to the expression given by Eq. (\ref{65}), we obtain
\begin{equation} \frac{Z}{Z_0}=\frac{\int [d\eta]\,e^S}{\int[d\eta]\,e^{S_0}}\,. \label{65n} \end{equation}
In Eq. (\ref{65n}), the Bose field obeys periodic boundary conditions, i.e., $b(\beta)=b(0)$, and the Fermi fields obey the following boundary conditions:
\begin{eqnarray} \begin{array}{cc} \alpha_i(\beta)=i\,\alpha_i(0)\,,\;\;\;& \alpha_i^*(\beta)=-\,i\,\alpha_i^*(0)\,,\\ \beta_i(\beta)=i\,\beta_i(0)\,,\;\;\;& \beta_i^*(\beta)=-\,i\,\beta_i^*(0)\,. \end{array} \label{nbound} \end{eqnarray}
The free action for the single mode bosonic field $S_{B0}(b)$ is given by
\begin{equation} S_{B0}(b) = \int_{0}^{\beta} d\tau\; b^{*}(\tau)\,\Bigl( \partial_{\tau}-\omega_{0}\Bigr)\,b(\tau)\, . \label{67} \end{equation}
Then we can write the action $S$ of the full fermion Dicke model, given by Eq. (\ref{66}), as the free action for the single mode bosonic field $S_{B0}(b)$ defined by Eq. (\ref{67}) plus an additional term that can be expressed in matrix form. Therefore the total action $S$ can be written as
\begin{equation} S = S_{B0}(b) + \int_{0}^{\beta} d\tau\,\sum_{i=1}^{N}\, \rho^{\dagger}_{i}(\tau)\, M(b^{*},b)\,\rho_{i}(\tau)\, , \label{68} \end{equation}
where the column matrix $\rho_{\,i}(\tau)$ is given in terms of the Fermi fields as
\begin{eqnarray} \rho_{\,i}(\tau) &=& \left( \begin{array}{c} \beta_{\,i}(\tau) \\ \alpha_{\,i}(\tau) \end{array} \right), \nonumber\\ \rho^{\dagger}_{\,i}(\tau) &=& \left( \begin{array}{cc} \beta^{*}_{\,i}(\tau) & \alpha^{*}_{\,i}(\tau) \end{array} \right) \label{69a} \end{eqnarray}
and the matrix $M(b^{*},b)$ is given by
\begin{equation} M(b^{*},b) = \left( \begin{array}{cc} L & (N)^{-1/2}\,\biggl(g_{1}\,b^{*}\,(\tau) + g_{2}\,b\,(\tau)\biggr)\\ (N)^{-1/2}\,\biggl(g_{1}\,b\,(\tau) + g_{2}\,b^{*}\,(\tau)\biggr) & L_* \end{array} \right)\,, \label{69b} \end{equation}
where the operators $L$ and $L_*$ are defined by $\partial_{\tau} + \Omega/2$ and $\partial_{\tau} - \Omega/2$, respectively. Substituting the action $S$ given by Eq. (\ref{68}) in the functional integral form of the partition function given by Eq. (\ref{65n}), we see that this functional integral is Gaussian in the Fermi fields. Integrating over these Fermi fields, we obtain
\begin{eqnarray} Z=\int[d\eta(b)]\,e^{S_{B0}} \Bigl(\det{M(b^{*},b)}\Bigr)^N\,, \label{Zop} \end{eqnarray}
where $[d\eta(b)]$ is the functional measure for the bosonic field alone. With the help of the following property for matrices with operator components
\begin{eqnarray} \det\left(\begin{array}{cc} A&B\\ C&D \end{array} \right)=\det\left(AD-ACA^{-1}B\right)\,, \label{mazprop} \end{eqnarray}
and determinant properties, we have that
\begin{eqnarray} \det{M(b^{*},b)}=\det{\Bigl(LL_*\Bigr)}\,\det{\left(1-N^{-1}L_*^{-1} \Bigl(g_1\,b+ g_2\,b^*\Bigr)L^{-1}\Bigl(g_1\,b^*+ g_2\,b\Bigr)\right)}\,. \label{Mop1} \end{eqnarray}
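As a side check, the finite-dimensional analogue of the identity in Eq. (\ref{mazprop}) follows from the Schur-complement factorization $\det(A)\det(D-CA^{-1}B)=\det(AD-ACA^{-1}B)$, valid for arbitrary square blocks with $A$ invertible. A minimal numerical sketch (the matrices below are purely illustrative, not taken from the model):

```python
from fractions import Fraction as F

def det(m):
    """Determinant by Laplace expansion along the first row (exact for Fractions)."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def inv2(m):
    """Inverse of a 2x2 matrix."""
    d = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return [[m[1][1] / d, -m[0][1] / d], [-m[1][0] / d, m[0][0] / d]]

def sub(a, b):
    return [[a[i][j] - b[i][j] for j in range(len(a[0]))] for i in range(len(a))]

# illustrative 2x2 blocks, A invertible
A = [[F(2), F(1)], [F(0), F(3)]]
B = [[F(1), F(4)], [F(2), F(1)]]
C = [[F(3), F(1)], [F(1), F(2)]]
D = [[F(5), F(2)], [F(1), F(4)]]

block = [A[0] + B[0], A[1] + B[1], C[0] + D[0], C[1] + D[1]]  # [[A, B], [C, D]]
lhs = det(block)
rhs = det(sub(matmul(A, D), matmul(matmul(matmul(A, C), inv2(A)), B)))
assert lhs == rhs
```

Exact rational arithmetic avoids any floating-point ambiguity in the comparison.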
Substituting Eq. (\ref{Zop}) and Eq. (\ref{Mop1}) in Eq. (\ref{65n}), we have that
\begin{eqnarray} \frac{Z}{Z_0}=\frac{Z_A}{\int[d\eta(b)]\,e^{S_{B0}}}\,, \label{ZA0} \end{eqnarray}
with $Z_A$ defined by
\begin{eqnarray} Z_A=\int[d\eta(b)]\exp{\left(S_{B0}+N\,tr\ln\biggl(1-N^{-1}L_*^{-1} \Bigl(g_1\,b+ g_2\,b^*\Bigr)L^{-1}\Bigl(g_1\,b^*+ g_2\,b\Bigr)\biggr)\right)}\,. \label{ZA1} \end{eqnarray}
We are interested in the asymptotic behaviour of the quotient $\frac{Z}{Z_0}$ in the thermodynamic limit, i.e., $N\rightarrow\infty$. With this intention, we analyse the asymptotic behaviour of $Z_A$. First, let us scale the bosonic field by $b\rightarrow\sqrt{N}\,b$ and $b^*\rightarrow\sqrt{N}\,b^*$, so that we get
\begin{eqnarray} Z_A=A(N)\int[d\eta(b)]\exp{\left(N\,\Phi(b^*,b)\right)}\,, \label{ZA2} \end{eqnarray}
with the function $\Phi(b^*,b)$ defined by
\begin{eqnarray} \Phi(b^*,b)=S_{B0}+tr\ln\biggl(1-L_*^{-1} \Bigl(g_1\,b+ g_2\,b^*\Bigr)L^{-1}\Bigl(g_1\,b^*+ g_2\,b\Bigr)\biggr)\,. \label{fi1} \end{eqnarray}
The term $A(N)$ in Eq. (\ref{ZA2}) comes from the transformation of the functional measure $[d\eta(b)]$ under the scaling $b\rightarrow\sqrt{N}\,b$ and $b^*\rightarrow\sqrt{N}\,b^*$. The asymptotic behaviour of the functional integral appearing in Eq. (\ref{ZA2}) when $N\rightarrow\infty$ can be obtained by the method of steepest descent \cite{amit}. In this method, we expand the function $\Phi(b^*,b)$ around points $b(\tau)=b_0(\tau)$ and $b^*(\tau)=b^*_0(\tau)$, which can be of two kinds: points that maximize $Re(\Phi(b^*,b))$, and saddle points. We keep the first terms of the expansion in the functional integral, since they are the leading contributions to its value. Both maxima and saddle points are found among the stationary points, which are solutions of the equations $\frac{\delta\, \Phi(b^*,b)}{\delta\,b(\tau)}=0$ and $\frac{\delta\,\Phi(b^*,b)}{\delta\,b^*(\tau)}=0$. For the full Dicke model, the stationary points are constant functions $b(\tau)=b_0$ and $b^*(\tau)=b^*_0$. It is not difficult to show that for $\beta\leq\beta_c$ the stationary point is $b_0=b_0^*=0$, which is a maximum. The critical value $\beta_c$ is obtained by solving the following equation
\begin{eqnarray} \frac{\omega_0\,\Omega}{(g_1+g_2)^2}=\tanh\left(\frac{\beta_c\,\Omega}{2} \right)\,. \label{tcrit} \end{eqnarray}
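Since $\tanh$ is monotone, Eq. (\ref{tcrit}) can be inverted in closed form, $\beta_c=(2/\Omega)\,\mathrm{arctanh}\!\left(\omega_0\Omega/(g_1+g_2)^2\right)$, which exists precisely when $(g_1+g_2)^2>\omega_0\Omega$. A minimal sketch (the parameter values are illustrative):

```python
import math

def beta_c(omega0, Omega, g1, g2):
    """Critical inverse temperature solving
    omega0*Omega/(g1+g2)**2 = tanh(beta_c*Omega/2)."""
    x = omega0 * Omega / (g1 + g2) ** 2
    if x >= 1.0:
        raise ValueError("no phase transition: (g1+g2)^2 <= omega0*Omega")
    return 2.0 * math.atanh(x) / Omega

# illustrative parameters with (g1+g2)^2 = 2.25 > omega0*Omega = 1
bc = beta_c(omega0=1.0, Omega=1.0, g1=0.8, g2=0.7)
# check that bc solves the original transcendental equation
assert abs(math.tanh(bc / 2) - 1.0 / 2.25) < 1e-12
```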
Eq. (\ref{tcrit}) admits a solution for $\beta_c$ only in the case $(g_1+g_2)^2>\omega_0\Omega$; under this condition the system undergoes a phase transition. When $\beta<\beta_c$ we say that the system is in the normal phase. For $\beta>\beta_c$ the stationary points $b(\tau)=b_0$ and $b^*(\tau)=b^*_0$ satisfy the following equation
\begin{eqnarray} \frac{\omega_0\,\Omega_{\Delta}}{(g_1+g_2)^2}=\tanh\left(\frac{\beta\,\Omega_{\Delta}}{2}\right)\,, \label{tra1} \end{eqnarray}
with $\Omega_{\Delta}$ defined by
\begin{eqnarray} \Omega_{\Delta}=
\sqrt{\Omega^2+4\,(g_1+g_2)^2\,|b_0|^2}\,. \label{omegadelta} \end{eqnarray}
A phase transition occurs if Eq. (\ref{tra1}) admits a real solution with $|b_0|\neq 0$, which is possible only when $(g_1+g_2)^2>\omega_0\,\Omega$ and $\beta>\beta_c$. In the case of $g_1\neq 0$ and $g_2=0$, and also in the case of $g_1=0$ and $g_2\neq 0$, the maximum points form a continuous set given by $b_0=\rho\,e^{i\,\phi}$ and $b^*_0=\rho\,e^{-i\,\phi}$ with $\phi\in [0,2\pi)$ and $\rho=|b_0|$, with $|b_0|$ defined by Eq. (\ref{tra1}). In the case of $g_1\neq 0$ and $g_2\neq 0$, we have two maximum points, given by $b^*_0=b_0=\pm |b_0|$, with $|b_0|$ defined by Eq. (\ref{tra1}). When $\beta>\beta_c$ we say that the system is in the superradiant phase.
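For $\beta>\beta_c$, the gap equation (\ref{tra1}) can be solved numerically for $\Omega_{\Delta}$, after which $|b_0|$ follows from Eq. (\ref{omegadelta}). A minimal bisection sketch, not the paper's procedure, just a numerical check (parameter values are illustrative):

```python
import math

def order_parameter(omega0, Omega, g1, g2, beta):
    """Solve omega0*OmD/(g1+g2)**2 = tanh(beta*OmD/2) for OmD > Omega,
    then return (|b0|, OmD) using OmD^2 = Omega^2 + 4*(g1+g2)^2*|b0|^2."""
    G2 = (g1 + g2) ** 2
    f = lambda x: math.tanh(beta * x / 2) - omega0 * x / G2
    hi = Omega
    while f(hi) > 0:          # bracket the nontrivial root from above
        hi *= 2
    lo = Omega
    for _ in range(200):      # plain bisection
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    OmD = 0.5 * (lo + hi)
    b0 = math.sqrt((OmD ** 2 - Omega ** 2) / (4 * G2))
    return b0, OmD

# deep in the superradiant phase with g1 = g2 = g, tanh -> 1 and OmD -> 4g^2/omega0
b0, OmD = order_parameter(omega0=1.0, Omega=1.0, g1=0.9, g2=0.9, beta=200.0)
assert abs(OmD - 4 * 0.9 ** 2) < 1e-6
```

The large-$\beta$ check uses the saturation $\tanh(\beta\Omega_{\Delta}/2)\rightarrow 1$, which gives $\Omega_{\Delta}\rightarrow(g_1+g_2)^2/\omega_0$.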
Let us continue with the computation of the asymptotic behaviour of the functional integral appearing in Eq. (\ref{ZA2}) in the thermodynamic limit, $N\rightarrow\infty$. In the following steps we find this asymptotic behaviour when there is only one maximum point, defined by $b_0=b^*_0$. The resulting expressions will be useful for the normal phase of the full Dicke model, and also for the superradiant phase in the case of $g_1\neq 0$ and $g_2\neq 0$. We consider the first two leading terms in the functional integral appearing in Eq. (\ref{ZA2}) coming from the expansion of $\Phi(b^*,b)$ around the maximum $b^*_0=b_0$; this expansion is given by
\begin{eqnarray} \Phi(b^*,b)=\Phi(b^*_0,b_0)+\frac{1}{2}\int_0^{\beta}d\tau_1\,d\tau_2\, (b^*(\tau_1)-b^*_0\,,\,b(\tau_1)-b_0)\,M_{\Phi} \left(\begin{array}{c} b^*(\tau_2)-b^*_0\\ b(\tau_2)-b_0 \end{array}\right)\,, \label{fi2} \end{eqnarray}
where the matrix $M_{\Phi}$ is given by
\begin{eqnarray} M_{\Phi}=\left(\begin{array}{cc} \frac{\delta^2\Phi(b^*,b)}{\delta b^*(\tau_1)\,\delta b^*(\tau_2)}&\frac{\delta^2\Phi(b^*,b)}{\delta b^*(\tau_1)\,\delta b(\tau_2)}\\ \frac{\delta^2\Phi(b^*,b)}{\delta b(\tau_1)\,\delta b^*(\tau_2)}&\frac{\delta^2\Phi(b^*,b)}{\delta b(\tau_1)\,\delta b(\tau_2)} \end{array}\right)
\Biggr|_{b^*=b=b_0}\,. \label{Mfi} \end{eqnarray}
Substituting this expansion given by Eq. (\ref{fi2}) in Eq. (\ref{ZA2}) we obtain
\begin{eqnarray} Z_A=e^{N\Phi(b^*_0,b_0)}\int[d\eta(b)]\exp{\left(\frac{1}{2}\int_0^{\beta}d\tau_1\,d\tau_2\, \Bigl(b^*(\tau_1)\,,\,b(\tau_1)\Bigr)\,M_{\Phi} \left(\begin{array}{c} b^*(\tau_2)\\ b(\tau_2) \end{array}\right)\right)}\,, \label{ZA3} \end{eqnarray}
To obtain the last expression, we have applied the transformation $b(\tau)\rightarrow \Bigl(b(\tau)+b_0\Bigr)/\sqrt{N}$ and $b^*(\tau)\rightarrow \Bigl(b^*(\tau)+b^*_0\Bigr)/\sqrt{N}$ in the functional integral involved. In order to simplify the integration of the functional integral given by Eq. (\ref{ZA2}), let us use the following transformation
\begin{eqnarray} c\,(\tau)&=&\alpha\,\Bigl(g_2\,b(\tau)+g_1\,b^*(\tau)\Bigr)\nonumber\\ c^*(\tau)&=&\alpha\,\Bigl(g_1\,b(\tau)+g_2\,b^*(\tau)\Bigr)\,, \label{Trv1} \end{eqnarray}
where the parameter $\alpha$ is defined by $\alpha^2=(g_2^2-g_1^2)^{-1}$. It is worth mentioning that the Jacobian of this transformation is $1$. Applying this transformation in Eq. (\ref{ZA2}) we obtain
\begin{eqnarray} Z_A=A(N)\int[d\eta(c)]\exp{\left(N\,\Phi_I(c^*,c)\right)}\,, \label{ZA4} \end{eqnarray}
where the function $\Phi_I(c^*,c)$ is given by
\begin{eqnarray} \Phi_I(c^*,c)&=&\alpha^2\int_0^{\beta}d\tau\,\Bigl(g_1\,c(\tau)-g_2\,c^*(\tau)\Bigr)\times\nonumber\\ &&\times\Bigl(\partial_{\tau}-\omega_0\Bigr)\, \Bigl(g_1\,c^*(\tau)- g_2\,c(\tau)\Bigr)+tr\ln\biggl(1-\alpha^{-2}L_*^{-1}c^*L^{-1}c\biggr)\,. \label{fi3} \end{eqnarray}
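The claim above that the Jacobian of the transformation (\ref{Trv1}) equals $1$ can be checked directly: with respect to $(b,b^*)$ the transformation is linear, with determinant $\alpha^2(g_2^2-g_1^2)=1$ by the definition of $\alpha$. A tiny numerical sketch (the couplings are illustrative, chosen with $g_2>g_1$ so that $\alpha$ is real):

```python
import math

# illustrative couplings with g2 > g1 so that alpha^2 = 1/(g2^2 - g1^2) > 0
g1, g2 = 0.3, 0.5
alpha = 1.0 / math.sqrt(g2**2 - g1**2)

# Jacobian matrix of (c, c*) with respect to (b, b*), read off from Eq. (Trv1)
J = [[alpha * g2, alpha * g1],
     [alpha * g1, alpha * g2]]
detJ = J[0][0] * J[1][1] - J[0][1] * J[1][0]
assert abs(detJ - 1.0) < 1e-12
```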
The maximum point corresponds to $c^*_0=c_0=\alpha(g_1+g_2)b_0$, since the point $b^*_0=b_0$ is a maximum of $Re(\Phi(b^*,b))$. Using the same expansion given in Eq. (\ref{fi2}) for $\Phi_I(c^*,c)$ and substituting in Eq. (\ref{ZA4}), we obtain
\begin{eqnarray} Z_A=e^{N\Phi(b^*_0,b_0)}\int[d\eta(c)]\exp{\left(\frac{1}{2}\int_0^{\beta}d\tau_1\,d\tau_2\, \Bigl(c^*(\tau_1)\,,\,c(\tau_1)\Bigr)\,M_{\Phi_I} \left(\begin{array}{c} c^*(\tau_2)\\ c(\tau_2) \end{array}\right)\right)}\,, \label{ZA5} \end{eqnarray}
where we have used the identity $\Phi_I(c^*_0,c_0)=\Phi(b^*_0,b_0)$, and the matrix $M_{\Phi_I}$ is defined by
\begin{eqnarray} M_{\Phi_I}=\left(\begin{array}{cc} \frac{\delta^2\Phi_I(c^*,c)}{\delta c^*(\tau_1)\,\delta c^*(\tau_2)}&\frac{\delta^2\Phi_I(c^*,c)}{\delta c^*(\tau_1)\,\delta c(\tau_2)}\\ \frac{\delta^2\Phi_I(c^*,c)}{\delta c(\tau_1)\,\delta c^*(\tau_2)}&\frac{\delta^2\Phi_I(c^*,c)}{\delta c(\tau_1)\,\delta c(\tau_2)} \end{array}\right)
\Biggr|_{c^*=c=c_0}\,. \label{MfiI} \end{eqnarray}
At this level, it is convenient to use the Fourier representation of the field $c(\tau)$ in the functional integral of Eq. (\ref{ZA5}). From the boundary conditions of the bosonic field $b(\tau)$ and from Eq. (\ref{Trv1}), we deduce that $c(\tau)$ and $c^*(\tau)$ satisfy the periodic boundary conditions $c(\beta)=c(0)$ and $c^*(\beta)=c^*(0)$, respectively. Therefore the Fourier representations of $c(\tau)$ and $c^*(\tau)$ are given by
\begin{eqnarray} c(\tau)&=&\frac{1}{\sqrt{\beta}}\sum_{\omega}c(\omega)e^{i\omega\tau}\,,\nonumber\\ c^*(\tau)&=&\frac{1}{\sqrt{\beta}}\sum_{\omega}c^*(\omega)e^{-i\omega\tau}\,, \label{fourier1} \end{eqnarray}
where the parameter $\omega$ takes the values $2\pi\,n/\beta$, with $n$ ranging over all integers. These values are the Matsubara frequencies for bosonic fields. Substituting this Fourier representation, Eq. (\ref{fourier1}), in Eq. (\ref{ZA5}), we obtain
\begin{eqnarray} Z_A=e^{N\Phi(b^*_0,b_0)}\int[d\eta(c)]\exp{\left(\frac{1}{2}\sum_{\omega_1\omega_2} \Bigl(c^*(\omega_1)\,,\,c(\omega_1)\Bigr)\,\delta^2\Phi(\omega_1,\omega_2) \left(\begin{array}{c} c^*(\omega_2)\\ c(\omega_2) \end{array}\right)\right)}\,, \label{ZA6} \end{eqnarray}
with $\delta^2\Phi(\omega_1,\omega_2)$ being defined by
\begin{eqnarray} \delta^2\Phi(\omega_1,\omega_2)=\left(\begin{array}{cc} \delta^2\Phi_{11}(\omega_1,\omega_2) & \delta^2\Phi_{12}(\omega_1,\omega_2)\\ \delta^2\Phi_{21}(\omega_1,\omega_2) & \delta^2\Phi_{22}(\omega_1,\omega_2) \end{array}\right)\,, \label{deltafi} \end{eqnarray}
and each component of this matrix satisfies
\begin{eqnarray} \delta^2\Phi_{11}(\omega_1,\omega_2)&=&\frac{1}{\beta}\int_0^{\beta}d\tau_1d\tau_2\,\, e^{-i\omega_1\tau_1}\,\frac{\delta^2\Phi_I(c^*,c)}{\delta c^*(\tau_1)\,\delta c^*(\tau_2)}
\Biggr|_{c=c^*=c_0}e^{-i\omega_2\tau_2}\,,\nonumber\\ \delta^2\Phi_{12}(\omega_1,\omega_2)&=&\frac{1}{\beta}\int_0^{\beta}d\tau_1d\tau_2\,\, e^{-i\omega_1\tau_1}\,\frac{\delta^2\Phi_I(c^*,c)}{\delta c^*(\tau_1)\,\delta c(\tau_2)}
\Biggr|_{c=c^*=c_0}e^{i\omega_2\tau_2}\,,\nonumber\\ \delta^2\Phi_{21}(\omega_1,\omega_2)&=&\frac{1}{\beta}\int_0^{\beta}d\tau_1d\tau_2\,\, e^{i\omega_1\tau_1}\,\frac{\delta^2\Phi_I(c^*,c)}{\delta c(\tau_1)\,\delta c^*(\tau_2)}
\Biggr|_{c=c^*=c_0}e^{-i\omega_2\tau_2}\,,\nonumber\\ \delta^2\Phi_{22}(\omega_1,\omega_2)&=&\frac{1}{\beta}\int_0^{\beta}d\tau_1d\tau_2\,\, e^{i\omega_1\tau_1}\,\frac{\delta^2\Phi_I(c^*,c)}{\delta c(\tau_1)\,\delta c(\tau_2)}
\Biggr|_{c=c^*=c_0}e^{i\omega_2\tau_2}\,. \label{deltaficomp} \end{eqnarray}
In this Fourier representation of the functional integral given by Eq. (\ref{ZA6}), the integration measure $[d\eta(c)]$ takes the tractable form $\prod_{\omega}{dc^*(\omega)\,dc(\omega)}$. Using the expression for $\Phi_I(c^*,c)$ given in Eq. (\ref{fi3}), we can calculate the matrix $\delta^2\Phi(\omega_1,\omega_2)$ with components given by Eq. (\ref{deltaficomp}). Performing these calculations we obtain
\begin{eqnarray} \delta^2\Phi_{11}(\omega_1,\omega_2)&=&\delta^2\Phi_{12}(\omega_1,\omega_2)= \delta_{\omega_1\,,\,-\omega_2}\,R(\omega_1)\,,\nonumber\\ \delta^2\Phi_{21}(\omega_1,\omega_2)&=&\delta^2\Phi_{22}(\omega_1,\omega_2)= \delta_{\omega_1\,,\,\omega_2}\,S(\omega_1)\,, \label{deltaficomp1} \end{eqnarray}
with $\delta_{\omega_1\,,\,\omega_2}$ being the Kronecker delta, and the functions $R(\omega)$ and $S(\omega)$ given by
\begin{eqnarray} R(\omega)&=&2\,\omega_0\,g_1\,g_2\,\alpha^2-\frac{(\,\Omega^2_{\Delta}-\Omega^2)\,\alpha^{-2}} {2\,\Omega_{\Delta}(\omega^2+\Omega^2_{\Delta})}\tanh{\left(\frac{\beta\,\Omega_{\Delta}}{2}\right)}\,,\nonumber\\ S(\omega)&=&i\,\omega\left(1-\frac{\Omega\,\alpha^{-2}} {\Omega_{\Delta}(\omega^2+\Omega^2_{\Delta})}\tanh{\left(\frac{\beta\,\Omega_{\Delta}}{2}\right)}\right)+\nonumber\\ &-& \omega_0\,(\,g_1^2+g_2^2\,)\,\alpha^2+\frac{(\,\Omega^2_{\Delta}+\Omega^2)\,\alpha^{-2}} {2\,\Omega_{\Delta}\,(\omega^2+\Omega^2_{\Delta})}\tanh{\left(\frac{\beta\,\Omega_{\Delta}}{2}\right)}\,. \label{RS} \end{eqnarray}
The expression for $\Omega_{\Delta}$ is given by Eq. (\ref{omegadelta}). Substituting the matrix $\delta^2\Phi(\omega_1,\omega_2)$, with components given by Eq. (\ref{deltaficomp1}), in the functional integral appearing in $Z_A$, given by Eq. (\ref{ZA6}), we obtain
\begin{eqnarray} Z_A=e^{N\Phi(b^*_0,b_0)}\int[d\eta(c)]\exp{\sum_{\omega}\left(\, S(\omega)\,c(\omega)\,c^*(\omega)+\frac{1}{2}\,R(\omega)\, \Bigl(\,c(\omega)\,c(-\omega)+c^*(\omega)\,c^*(-\omega)\,\Bigr)\right)}\,. \label{ZA7} \end{eqnarray}
Performing this Gaussian functional integral, we finally obtain that
\begin{eqnarray} Z_A=e^{N\Phi(b^*_0,b_0)}\frac{2\,\pi\,i}{(S^2(0)-R^2(0))^{1/2}}\,\, \prod_{\omega\geq 1}\frac{(\,2\,\pi\,i\,)^2}{\,S(\omega)\,S(-\omega)\,-\,R^2(\omega)}\,. \label{ZA8} \end{eqnarray}
In order to find the asymptotic behaviour of $\frac{Z}{Z_0}$ when $N\rightarrow\infty$, we must calculate $\int{[d\eta(b)]\,e^{S_{B0}}}$ appearing in Eq. (\ref{ZA0}). Using the free bosonic action $S_{B0}$ given by Eq. (\ref{67}), we obtain that
\begin{eqnarray} \int{[d\eta(b)]\,e^{S_{B0}}}=\prod_{\omega}\,\frac{2\,\pi\,i}{\omega_0-i\,\omega}\,. \label{sb0} \end{eqnarray}
Substituting Eq. (\ref{ZA8}) and Eq. (\ref{sb0}) in Eq. (\ref{ZA0}) we have that
\begin{eqnarray} \frac{Z}{Z_0}=e^{N\Phi(b^*_0,b_0)}\frac{1}{(H(0))^{1/2}}\,\, \prod_{\omega\geq 1}\,\frac{1}{\,H(\omega)}\,, \label{Zfin} \end{eqnarray}
where the function $H(\omega)$ is given by
\begin{eqnarray} H(\omega)=\frac{S(\omega)\,S(-\omega)-R^2(\omega)}{\omega^2+\omega^2_0}\,. \label{H1} \end{eqnarray}
Eq. (\ref{RS}) gives the expressions for the functions $S(\omega)$ and $R(\omega)$; substituting these functions in Eq. (\ref{H1}) we obtain
\begin{eqnarray} &&H(\omega)=\,1\,+\,\frac{(\,g_1^2-g_2^2\,)^2\,\Omega^2}{\Omega_{\Delta}^2\,(\omega^2+\Omega_{\Delta}^2)\, (\omega^2+\omega^2_0)}\tanh^2\left(\frac{\beta\,\Omega_{\Delta}}{2}\right)\,+\nonumber\\ &+&\frac{2\,(g_1^2-g_2^2\,)\,\Omega\,\omega^2\,-\,(\,g_1^2+g_2^2\,)\,(\Omega^2+\Omega_{\Delta}^2)\,\omega_0\,+\, 2\,g_1\,g_2\,(\Omega_{\Delta}^2-\Omega^2)\,\omega_0}{\Omega_{\Delta}\,(\omega^2+\Omega_{\Delta}^2)\, (\omega^2+\omega^2_0)}\, \tanh\left(\frac{\beta\,\Omega_{\Delta}}{2}\right)\,. \label{H2} \end{eqnarray}
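The closed form of Eq. (\ref{H2}) can be checked against the definition in Eq. (\ref{H1}): evaluate $R(\omega)$ and $S(\omega)$ from Eq. (\ref{RS}) at arbitrary parameter values and compare $(S(\omega)S(-\omega)-R^2(\omega))/(\omega^2+\omega_0^2)$ with the expression above. A minimal sketch (all parameter values are illustrative, with $g_2>g_1$ so that $\alpha^2>0$; $\Omega_\Delta$ and $\beta$ need not satisfy the gap equation for this algebraic identity):

```python
import math

# illustrative parameters
w0, Om, OmD, g1, g2, beta = 1.0, 1.0, 1.5, 0.3, 0.5, 2.0
a2 = 1.0 / (g2**2 - g1**2)              # alpha^2
T = math.tanh(beta * OmD / 2)

def R(w):                                # Eq. (RS), first line
    A = w**2 + OmD**2
    return 2 * w0 * g1 * g2 * a2 - (OmD**2 - Om**2) / a2 * T / (2 * OmD * A)

def S(w):                                # Eq. (RS), second line
    A = w**2 + OmD**2
    return (1j * w * (1 - Om / a2 * T / (OmD * A))
            - w0 * (g1**2 + g2**2) * a2
            + (OmD**2 + Om**2) / a2 * T / (2 * OmD * A))

def H_def(w):                            # Eq. (H1)
    return (S(w) * S(-w) - R(w)**2) / (w**2 + w0**2)

def H_closed(w):                         # Eq. (H2)
    A, B = w**2 + OmD**2, w**2 + w0**2
    return (1 + (g1**2 - g2**2)**2 * Om**2 * T**2 / (OmD**2 * A * B)
            + (2 * (g1**2 - g2**2) * Om * w**2
               - (g1**2 + g2**2) * (Om**2 + OmD**2) * w0
               + 2 * g1 * g2 * (OmD**2 - Om**2) * w0) * T / (OmD * A * B))

for w in (0.0, 1.0, 2.0, 3.5):
    assert abs(H_def(w) - H_closed(w)) < 1e-12
```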
The expression, given by Eq. (\ref{Zfin}), with $H(\omega)$ given by Eq. (\ref{H2}), provides a valid expression for the quotient $\frac{Z}{Z_0}$ in the normal phase, and also in the superradiant phase for the particular case of $g_1\neq 0$ and $g_2\neq 0$.
\section{Normal phase: $\beta <\beta_c$} In the normal phase, $\beta <\beta_c$, the stationary point is $b_0=b^*_0=0$, i.e. $\Omega_{\Delta}=\Omega$. Substituting this equality in Eq. (\ref{Zfin}) and Eq. (\ref{H2}), we obtain
\begin{eqnarray} \frac{Z}{Z_0}=\frac{1}{(H_I(0))^{1/2}}\,\, \prod_{\omega\geq 1}\,\frac{1}{\,H_I(\omega)}\,, \label{Zfinb0} \end{eqnarray}
where
\begin{eqnarray} H_I(\omega)&=&\,1\,+\,\frac{(\,g_1^2-g_2^2\,)^2}{(\omega^2+\Omega^2)\, (\omega^2+\omega^2_0)}\tanh^2\left(\frac{\beta\,\Omega}{2}\right)\,+\nonumber\\ &+&\frac{2\,(g_1^2-g_2^2\,)\,\omega^2\,-\,2\,(\,g_1^2+g_2^2\,)\,\Omega\,\omega_0} {(\omega^2+\Omega^2)\, (\omega^2+\omega^2_0)}\, \tanh\left(\frac{\beta\,\Omega}{2}\right)\,. \label{H21} \end{eqnarray}
Making the analytic continuation $(i\omega \rightarrow E)$ in $H_I(\omega)$, we solve the equation $H_I(-i\,E)=0$, which is the equation for the collective spectrum. Solving it, we have
\begin{eqnarray} 2\,E^2&=&\omega_0^2+\Omega^2\,+\,2\,(g_1^2-g_2^2)\,\tanh\left(\frac{\beta\,\Omega}{2}\right)+\nonumber\\ &\pm&\left(\Bigl(\omega_0^2-\Omega^2\Bigr)^2+4\,\Bigl(g_1^2\,(\omega_0+\Omega)^2-g_2^2\,(\omega_0-\Omega)^2\Bigr) \,\tanh\left(\frac{\beta\,\Omega}{2}\right)\right)^{1/2}\,. \label{Espb0} \end{eqnarray}
It is interesting to note that when $\beta =\beta_c$ we find the following roots \cite{aparicio1}
\begin{equation} E_{\,1}\,=\,0 \label{106} \end{equation}
and
\begin{equation} E_{\,2}\,=\,\Biggl(\,\frac{g_{\,1}\,(\Omega\,+\,\omega_{\,0})^{\,2}\,+\, g_{\,2}\,(\Omega\,-\,\omega_{\,0})^{\,2}}{(g_{\,1}\,+\,g_{\,2})}\,\Biggr)^{\,1/2}\,. \label{107} \end{equation}
With Eq. (\ref{Espb0}) we can obtain the collective spectrum for the two following known cases. The first one, $g_2=0$, corresponds to the Dicke model in the rotating wave approximation \cite{popov1}. Here we have
\begin{eqnarray} 2\,E=\omega_0+\Omega\,\pm\,\left(\Bigl(\omega_0-\Omega\Bigr)^2+4\,g_1^2 \,\tanh\left(\frac{\beta\,\Omega}{2}\right)\right)^{1/2}\,. \label{Espb0g20} \end{eqnarray}
The second one corresponds to $g_1=g_2=g$. Here we have that
\begin{eqnarray} 2\,E^2=\omega_0^2+\Omega^2\,\pm\,\left(\Bigl(\omega_0^2-\Omega^2\Bigr)^2+16\,g^2\, \omega_0\,\Omega\,\tanh\left(\frac{\beta\,\Omega}{2}\right)\right)^{1/2}\,. \label{Espb0g1g2} \end{eqnarray}
The quantum phase transition corresponds to the particular case $\beta=\infty$. Here, the collective spectrum is given by Eq. (\ref{Espb0g1g2}) with $\tanh\left(\beta\,\Omega/{2}\right)=1$ \cite{emary2}.
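The limiting cases above can be checked numerically against the general formula, Eq. (\ref{Espb0}). A minimal sketch (parameter values are illustrative; the small guard against roundoff at $\beta=\beta_c$ is ours):

```python
import math

def normal_spectrum(w0, Om, g1, g2, beta):
    """Collective energies E_+ >= E_- from Eq. (Espb0), normal phase."""
    t = math.tanh(beta * Om / 2)
    a = w0**2 + Om**2 + 2 * (g1**2 - g2**2) * t
    d = math.sqrt((w0**2 - Om**2)**2
                  + 4 * (g1**2 * (w0 + Om)**2 - g2**2 * (w0 - Om)**2) * t)
    # max(..., 0.0) guards against roundoff when a = d (i.e. at beta = beta_c)
    return math.sqrt((a + d) / 2), math.sqrt(max(a - d, 0.0) / 2)

# rotating wave approximation (g2 = 0): compare with Eq. (Espb0g20)
t = math.tanh(1.0 * 2.0 / 2)
Ep, Em = normal_spectrum(1.0, 2.0, 0.3, 0.0, 1.0)
rp = (3.0 + math.sqrt(1.0 + 4 * 0.09 * t)) / 2
rm = (3.0 - math.sqrt(1.0 + 4 * 0.09 * t)) / 2
assert abs(Ep - rp) < 1e-9 and abs(Em - rm) < 1e-9

# beta = beta_c: roots of Eqs. (106)-(107), here for w0 = Om = 1, g1 = 0.8, g2 = 0.7
bc = 2 * math.atanh(1.0 / 2.25)
Ep, Em = normal_spectrum(1.0, 1.0, 0.8, 0.7, bc)
assert Em < 1e-6                                   # Eq. (106): E_1 = 0
assert abs(Ep**2 - (0.8 * 4 + 0.7 * 0) / 1.5) < 1e-9   # Eq. (107)
```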
\section{Superradiant phase: $\beta >\beta_c$}
\subsection{Case of $g_1\neq 0$ and $g_2\neq 0$:}
In the superradiant phase, $\beta >\beta_c$, in the case of $g_1\neq 0$ and $g_2\neq 0$, we have two maximum points. Both contribute equally to the partition function; therefore $\frac{Z}{Z_0}$, given by Eq. (\ref{Zfin}), must be multiplied by a factor $2$. In this case $b_0\neq 0$, i.e. $\Omega_{\Delta}\neq\Omega$. From Eq. (\ref{Zfin}), the expression for $\frac{Z}{Z_0}$ is given by
\begin{eqnarray} \frac{Z}{Z_0}=2\,e^{N\phi}\frac{1}{(H_{II}(0))^{1/2}}\,\, \prod_{\omega\geq 1}\,\frac{1}{\,H_{II}(\omega)}\,, \label{Zfinb1} \end{eqnarray}
where the factor $\phi$ is defined by
\begin{eqnarray} \phi\,=\,-\,\frac{\omega_0\,\beta\,(\Omega_{\Delta}^2-\Omega^2)}{4\,(g_1+g_2)^2}+ \ln{\left(\frac{\cosh\left(\frac{\beta\,\Omega_{\Delta}}{2}\right)} {\cosh\left(\frac{\beta\,\Omega}{2}\right)}\right)}\,. \label{phi1} \end{eqnarray}
The function $H_{II}(\omega)$ has the form
\begin{eqnarray} &&H_{II}(\omega)=\frac{1}{(\omega^2+\Omega_{\Delta}^2)\,(\omega^2+\omega^2_0)}\,\times\nonumber\\ &&\times\left[\,\omega^4+\left(\omega_0^2+\Omega_{\Delta}^2+\frac{2\,(g_1^2-g_2^2)}{(g_1+g_2)^2}\,\omega_0\,\Omega\right)\, \omega^2+\frac{4\,g_1\,g_2}{(g_1+g_2)^2}\,\omega_0^2\,(\Omega_{\Delta}^2-\Omega^2)\right]\,, \label{H22} \end{eqnarray}
and setting $\omega=0$ in Eq. (\ref{H22}), we obtain the expression for $H_{II}(0)$, so that
\begin{eqnarray} H_{II}(0)=\frac{4\,g_1\,g_2\,(\Omega_{\Delta}^2-\Omega^2)}{(\,g_1+g_2\,)^2\,\Omega_{\Delta}^2}\,. \label{H022} \end{eqnarray}
Making the analytic continuation $(i\omega \rightarrow E)$ in $H_{II}(\omega)$ given by Eq. (\ref{H22}), we solve the equation $H_{II}(-i\,E)=0$. The solutions $E$ form the collective spectrum in the superradiant phase for the case of $g_1\neq 0$ and $g_2\neq 0$. Solving the equation, we have
\begin{eqnarray} 2\,E^2&=&\omega_0^2+\Omega_{\Delta}^2\,+\,2\,\frac{(\,g_1^2-g_2^2\,)}{(\,g_1+g_2)^2}\,\Omega\,\omega_0\,+\nonumber\\ &\pm&\left[\left(\omega_0^2+\Omega_{\Delta}^2\,+\,2\,\frac{(\,g_1^2-g_2^2\,)}{(\,g_1+g_2)^2}\,\Omega\,\omega_0\right)^2 -\frac{16\,g_1\,g_2}{(\,g_1+g_2)^2}\,\omega_0^2\,\Bigl(\Omega_{\Delta}^2-\Omega^2\Bigr)\right]^{1/2}\,. \label{Espb1} \end{eqnarray}
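Writing the numerator of $H_{II}(\omega)$ in Eq. (\ref{H22}) as $\omega^4+c_2\,\omega^2+c_0$, the continuation $\omega=-iE$ turns it into $E^4-c_2E^2+c_0$, whose roots in $E^2$ are exactly those of Eq. (\ref{Espb1}). A minimal numerical sketch (the value of $\Omega_{\Delta}$ is illustrative, not derived from the gap equation):

```python
import math

# illustrative superradiant-phase parameters, with OmD > Om
w0, Om, OmD, g1, g2 = 1.0, 1.0, 1.6, 0.8, 0.7
G2 = (g1 + g2) ** 2

# numerator of H_II(w) is w^4 + c2*w^2 + c0  (Eq. H22)
c2 = w0**2 + OmD**2 + 2 * (g1**2 - g2**2) / G2 * w0 * Om
c0 = 4 * g1 * g2 / G2 * w0**2 * (OmD**2 - Om**2)

# Eq. (Espb1): 2E^2 = c2 ± sqrt(c2^2 - 4*c0)
disc = math.sqrt(c2**2 - 4 * c0)
for E2 in ((c2 + disc) / 2, (c2 - disc) / 2):
    # each root in E^2 annihilates the numerator at w = -iE
    assert abs(E2**2 - c2 * E2 + c0) < 1e-9
```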
For the particular case of $g_1=g_2=g$, the collective spectrum energy takes the particular form
\begin{eqnarray} 2\,E^2\,=\,\omega_0^2+\Omega_{\Delta}^2\,\pm\,\left((\omega_0^2-\Omega_{\Delta}^2)^2+4\,\omega_0^2\,\Omega^2\right)^{1/2}\,. \label{Espb1g} \end{eqnarray}
In the limit of zero temperature, $\beta\rightarrow\infty$, from Eq. (\ref{tra1}) we have that $\Omega_{\Delta}=4\,g^2/\omega_0$. Consequently, at zero temperature we obtain that \cite{emary2}
\begin{eqnarray} 2\,E^2\,=\,\omega_0^2+\,\frac{16\,g^4}{\omega_0^2}\,\pm\, \left[\left(\omega_0^2-\frac{16\,g^4}{\omega_0^2}\right)^2+4\,\omega_0^2\,\Omega^2\right]^{1/2}\,. \label{Espb1g2} \end{eqnarray}
\subsection{Case of $g_1\neq 0$ and $g_2=0$:}
Now let us study the case of the rotating wave approximation, i.e., $g_1\neq 0$ and $g_2=0$, in the superradiant phase. Here, the expression for $\frac{Z}{Z_0}$ is obtained by setting $g_2=0$ in Eq. (\ref{ZA2}) and Eq. (\ref{fi1}); therefore we have
\begin{eqnarray} Z_A=A(N)\int[d\eta(b)]\exp{\left(N\,\Phi_{g_1}(b^*,b)\right)}\,, \label{ZApopov} \end{eqnarray}
where the function $\Phi_{g_1}(b^*,b)$ is defined by
\begin{eqnarray} \Phi_{g_1}(b^*,b)=\int_0^{\beta}d\tau\,b^*(\tau)\,(\partial_{\tau}-\omega_0)\,b(\tau)+tr\ln\biggl(1-\,g_1^2\,L_*^{-1} \,b\,L^{-1}\,b^*\biggr)\,. \label{fi1g1} \end{eqnarray}
In the last equation, Eq. (\ref{fi1g1}), we can see that the function $\Phi_{g_1}(b^*,b)$ is invariant under the transformation $b(\tau)\rightarrow\exp{(i\,\theta\,\tau)}\,b(\tau)$ and $b^*(\tau)\rightarrow\exp{(-i\,\theta\,\tau)}\,b^*(\tau)$, where $\theta$ is an arbitrary factor independent of $\tau$. This continuous invariance is responsible for the appearance of the Goldstone mode in the system. In order to perform the functional integral given by Eq. (\ref{ZApopov}), let us separate the function $b(\tau)$ in the following form
\begin{eqnarray} b\,(\tau)&=&b_c+b'\,(\tau)\,,\nonumber\\ b^*(\tau)&=&b^*_c+b'^{\,*}(\tau)\,, \label{bg1} \end{eqnarray}
where $b_c$ is a constant function, and the fields $b'(\tau)$ and $b'^{\,*}(\tau)$ satisfy the boundary conditions $b'(0)=b'(\beta)=0$ and $b'^{\,*}(0)=b'^{\,*}(\beta)=0$. Using the representation $b_c=\rho\, e^{i\,\phi}$ and $b^*_c=\rho\, e^{-i\,\phi}$ in the functional integral given by Eq. (\ref{ZApopov}) and Eq. (\ref{fi1g1}), and after applying the transformation $b'(\tau)\rightarrow e^{i\,\phi}\,b'(\tau)$ and $b'^{\,*}(\tau)\rightarrow e^{-i\,\phi}\,b'^{\,*}(\tau)$, we obtain
\begin{eqnarray} Z_A=2\,\pi\,i\,A(N)\,\int_0^{\infty}d\rho^2\,\int[d\eta(b')]\exp{\left(N\,\Phi_{g_1}(\rho,b'^{\,*},b')\right)}\,, \label{ZApopov1} \end{eqnarray}
where the function $\Phi_{g_1}(\rho,b'^{\,*},b')$ is given by
\begin{eqnarray} \Phi_{g_1}(\rho,b'^{\,*},b')&=&\int_0^{\beta}d\tau\,\Bigl(\rho+b'^{\,*}(\tau)\Bigr) \Bigl(\partial_{\tau}-\omega_0\Bigr)\Bigl(\rho+b'(\tau)\Bigr)\,+\nonumber\\ &+&tr\ln\left(1-g_1^2\,L_*^{-1}\,\Bigl(\rho+ b'\Bigr)\,L^{-1}\,\Bigl(\rho+b'^{\,*}\Bigr)\right)\,. \label{phipopov1} \end{eqnarray}
In the functional integral appearing in Eq. (\ref{ZApopov1}), one variable of integration is $\rho^2$. Here we use the steepest descent method in order to analyse the limit $N\rightarrow\infty$, finding the stationary point with respect to the variable $\rho^2$. The stationary point satisfies the equation $\frac{\delta\,\Phi_{g_1}}{\delta\,(\rho^2)}\Bigr|_{\rho=\rho_0}=0$ with $b'^{\,*}(\tau)=b'(\tau)=0$. In this case the value of $\rho_0$ is the same as $b_0$ defined by Eq. (\ref{tra1}) setting $g_2=0$. Let us consider the first two leading terms in the functional integral appearing in Eq. (\ref{ZApopov1}), coming from the expansion of $\Phi_{g_1}(\rho,b'^{\,*},b')$ around the point defined by $\rho_0$ and $b'^{\,*}(\tau)=b'(\tau)=0$, which gives the maximum of $Re\Bigl(\Phi_{g_1}(\rho,b'^{\,*},b')\Bigr)$. This expansion is given by
\begin{eqnarray} \Phi_{g_1}(\rho,b'^{\,*},b')&=&\Phi_{g_1}(\rho_0,0,0)+\frac{1}{2}\,\frac{\delta^2\Phi_{g_1}}
{\delta(\rho^2)^2}\Biggr|_{\rho=\rho_0,\,b'=b'^{*}=0}\,\Bigl(\rho^2-\rho_0^2\Bigr)^2\,+\nonumber\\ &+&\frac{1}{2}\int_0^{\beta}d\tau_1\,d\tau_2\, (b'^{\,*}(\tau_1)\,,\,b'(\tau_1))\,M_{\Phi_{g_1}} \left(\begin{array}{c} b'^{\,*}(\tau_2)\\ b'(\tau_2) \end{array}\right)\,, \label{fipopov2} \end{eqnarray}
where the matrix $M_{\Phi_{g_1}}$ is given by
\begin{eqnarray} \left(\begin{array}{cc} \frac{\delta^2\Phi_{g_1}}{\delta b'^{*}(\tau_1)\,\delta b'^{*}(\tau_2)}&\frac{\delta^2\Phi_{g_1}}{\delta b'^{*}(\tau_1)\,\delta b'(\tau_2)}\\ \frac{\delta^2\Phi_{g_1}}{\delta b'(\tau_1)\,\delta b'^{*}(\tau_2)}&\frac{\delta^2\Phi_{g_1}}{\delta b'(\tau_1)\,\delta b'(\tau_2)} \end{array}\right)
\Biggr|_{\rho=\rho_0,\,b'=b'^{*}=0}\,\,. \label{Mfipopov} \end{eqnarray}
Using the expansion given by Eq. (\ref{fipopov2}) to perform the functional integral given by Eq. (\ref{ZApopov1}), we have
\begin{eqnarray} Z_A&=&2\,\pi\,i\,\sqrt{N}\,e^{N\phi_{g_1}}\,\int_{-\sqrt{N}\rho^2_0}^{\infty}dy\,e^{\frac{1}{2}\,\frac{\delta^2\Phi_{g_1}}
{\delta(\rho^2)^2}\Bigr|_{\rho=\rho_0,\,b'=b'^{*}=0}\,y^2}\,\times\nonumber\\ &\times&\int[d\eta(b')]\exp\left(\frac{1}{2}\int_0^{\beta}d\tau_1\,d\tau_2\, (b'^{\,*}(\tau_1)\,,\,b'(\tau_1))\,M_{\Phi_{g_1}} \left(\begin{array}{c} b'^{\,*}(\tau_2)\\ b'(\tau_2) \end{array}\right)\right)\,, \label{ZApopov2} \end{eqnarray}
The expression $\phi_{g_1}$ corresponds to the expression of $\phi$ defined in Eq. (\ref{phi1}) taking $g_2=0$. The factor $\sqrt{N}$ appearing in Eq. (\ref{ZApopov2}) comes from the scaling $\rho^2\rightarrow\rho^2/\sqrt{N}$. For $N\rightarrow\infty$, the integrals appearing in Eq. (\ref{ZApopov2}) become Gaussian. We represent the functions $b'(\tau)$ and $b'^{\,*}(\tau)$ in Fourier series, which do not possess the zero mode, since they satisfy the boundary conditions $b'(0)=b'(\beta)=0$ and $b'^{\,*}(0)=b'^{\,*}(\beta)=0$. Therefore, performing the functional integral and substituting in Eq. (\ref{ZA0}), we obtain
\begin{eqnarray} \frac{Z}{Z_0}=\sqrt{N}\,e^{N\phi_{g_1}}\,\frac{1}{A_0}\, \prod_{\omega\geq 1}\,\frac{1}{\,H_{II}(\omega)}\,, \label{Zfinpopov} \end{eqnarray}
where the functions $\phi_{g_1}$ and $H_{II}(\omega)$ are given respectively by Eq. (\ref{phi1}) and Eq. (\ref{H22}) setting $g_2=0$, and $A_0$ is given by
\begin{eqnarray} A_0=\frac{g_1}{\Omega_{\Delta}\,\sqrt{\pi\,\beta\,\omega_0}}\,\left(1-\frac{\beta\, \Omega_{\Delta}}{\sinh(\beta\Omega_{\Delta})}\right)^{\frac{1}{2}}\,. \label{A0} \end{eqnarray}
Making the analytic continuation $(i\omega \rightarrow E)$ in $H_{II}(\omega)$ given by Eq. (\ref{H22}) setting $g_2=0$, the collective spectrum is obtained by solving the equation $H_{II}(-i\,E)=0$. In this way, we obtain the following spectrum
\begin{eqnarray} E_1=0\,, \label{Espg11} \end{eqnarray}
and
\begin{eqnarray} E_2^{\,2}=\omega_0^2+\Omega_{\Delta}^2\,+\,2\,\omega_0\,\Omega\,. \label{Espg12} \end{eqnarray}
The particular value of the spectrum given by $E_1=0$ in Eq. (\ref{Espg11}) corresponds to the Goldstone mode \cite{popov1}.
\subsection{Case of $g_1=0$ and $g_2\neq 0$:}
Now let us study the case of $g_1=0$ and $g_2\neq 0$, in the superradiant phase. Here, the expression for $\frac{Z}{Z_0}$ is obtained setting $g_1=0$ in Eq. (\ref{ZA2}) and Eq. (\ref{fi1}). For this case we have
\begin{eqnarray} Z_A=A(N)\int[d\eta(b)]\exp{\left(N\,\Phi_{g_2}(b^*,b)\right)}\,, \label{ZApopovg2} \end{eqnarray}
where the function $\Phi_{g_2}(b^*,b)$ is defined by
\begin{eqnarray} \Phi_{g_2}(b^*,b)=\int_0^{\beta}d\tau\,b^*(\tau)\,(\partial_{\tau}-\omega_0)\,b(\tau)+tr\ln\biggl(1-\,g_2^2\,L_*^{-1} \,b^*\,L^{-1}\,b\biggr)\,. \label{fi1g2} \end{eqnarray}
In the last equation, Eq. (\ref{fi1g2}), we can see that the function $\Phi_{g_2}(b^*,b)$ is invariant under the transformation $b(\tau)\rightarrow\exp{(i\,\theta\,\tau)}\,b(\tau)$ and $b^*(\tau)\rightarrow\exp{(-i\,\theta\,\tau)}\,b^*(\tau)$, where $\theta$ is an arbitrary factor independent of $\tau$. This continuous invariance is responsible for the appearance of the Goldstone mode in the system. Since Eq. (\ref{fi1g2}) is very similar to Eq. (\ref{fi1g1}), the calculation of $\frac{Z}{Z_0}$ in the case of $g_1=0$ follows the same steps as the calculation performed for the rotating wave approximation. Consequently, we have
\begin{eqnarray} \frac{Z}{Z_0}=\sqrt{N}\,e^{N\phi_{g_2}}\,\frac{1}{A_0}\, \prod_{\omega\geq 1}\,\frac{1}{\,H_{II}(\omega)}\,, \label{Zfinpopovg2} \end{eqnarray}
where $H_{II}(\omega)$ is given by Eq. (\ref{H22}) with $g_1=0$, the value $\phi_{g_2}$ corresponds to the expression for $\phi$ defined in Eq. (\ref{phi1}) with $g_1=0$, and $A_0$ is given by
\begin{eqnarray} A_0=\frac{g_2}{\Omega_{\Delta}\,\sqrt{\pi\,\beta\,\omega_0}}\,\left(1-\frac{\beta\, \Omega_{\Delta}}{\sinh(\beta\Omega_{\Delta})}\right)^{\frac{1}{2}}\,. \label{A0g2} \end{eqnarray}
Making the analytic continuation $(i\omega \rightarrow E)$ in $H_{II}(\omega)$, given by Eq. (\ref{H22}) with $g_1=0$, the collective spectrum is obtained by solving the equation $H_{II}(-i\,E)=0$. We obtain the following spectrum
\begin{eqnarray} E_1=0\,, \label{Espg21} \end{eqnarray}
and
\begin{eqnarray} E_2^{\,2}=\omega_0^2+\Omega_{\Delta}^2\,-\,2\,\omega_0\,\Omega\,. \label{Espg22} \end{eqnarray}
The particular value of the spectrum given by $E_1=0$ in Eq. (\ref{Espg21}) corresponds to the Goldstone mode.
\section{Summary}
\quad $\,\,$ In this paper, using the path integral approach and functional methods in the thermodynamic limit $N\rightarrow\infty$, we found the asymptotic behaviour of the partition function and the collective spectrum of the full Dicke model in the normal and superradiant phases. In our study we distinguished three particular cases. The first corresponds to the rotating wave approximation, $g_1\neq 0$ and $g_2=0$; in this case the model has a continuous symmetry, associated with the conservation of the sum of the excitation number of the $N$ atoms and the excitation number of the boson field. The second case corresponds to the model with $g_1=0$ and $g_2\neq 0$; in this case the model has a continuous symmetry, associated with the conservation of the difference between the excitation number of the $N$ atoms and the excitation number of the boson field. The last case, $g_1\neq 0$ and $g_2\neq 0$, has only a discrete symmetry. The phase transition in each case is related to the spontaneous breaking of the respective symmetry. In the rotating wave approximation, and also in the case of $g_1=0$ and $g_2\neq 0$, the collective spectrum has a zero-energy value, corresponding to the Goldstone mode associated with the continuous symmetry breaking in these cases.
\end{document}
\begin{document}
\def$\Box${$\Box$}
\begin{center} \vskip 1cm {\LARGE\bf Recurrence Divisibility Tests} \vskip 1cm
{\large Mehdi Hassani}
\vskip .5cm Department of Mathematics\\ Institute for Advanced Studies in Basic Sciences\\ Zanjan, Iran\\ \href{mailto:mhassani@iasbs.ac.ir}{\tt mhassani@iasbs.ac.ir} \end{center}
\newtheorem{thm}{Theorem} \newtheorem{prop}{Proposition} \newtheorem{lemma}{Lemma} \newtheorem{cor}{Corollary} \newtheorem{prob}{Note and Problem} \def\framebox(5.2,6.2){}{\framebox(5.2,6.2){}} \def\dashbox{2.71}(3.5,9.00){}{\dashbox{2.71}(3.5,9.00){}} \def\rule{5.25\unitlength}{9.75\unitlength}{\rule{5.25\unitlength}{9.75\unitlength}} \def\rule{8.00\unitlength}{12.00\unitlength}{\rule{8.00\unitlength}{12.00\unitlength}} \def\qed{\hbox{\hskip 6pt\vrule width 7pt height11pt depth1pt\hskip 3pt}
} \newenvironment{proof}{\trivlist\item[\hskip\labelsep{\bf Proof}:]}{
$\framebox(5.2,6.2){}$ \endtrivlist} \newcommand{\COM}[2]{{#1\choose#2}}
\thispagestyle{empty} \null \addtolength{\textheight}{1cm}
\begin{abstract} In this note we introduce recurrence divisibility tests for all primes other than 2 and 5. \end{abstract}
\hrule
\noindent 2000 {\it Mathematics Subject Classification}: 13A05, 11A41, 11B37, 11D04.\\ \noindent \emph{Keywords: divisibility, primes, recurrence, linear diophantine equation.}
\hrule
\vskip .2cm\hspace{-8mm} For $n\in{\mathbb N}$, we define $ud(n)$ to be the unit digit of $n$. As is well known, we have the following divisibility tests for 2 and 5: $$
2|n\Leftrightarrow 2|ud(n), $$ and $$
5|n\Leftrightarrow ud(n)\in\{0,5\}. $$ In this note, we find some \textit{recurrence divisibility tests} for the primes other than 2 and 5; the meaning of ``recurrence'' will become clear below. \begin{thm} For every prime $p\neq 2, 5$, there exists a unique $t(p)\in\mathbb{Z}_p$ such that $$
p|n\Leftrightarrow p\Big|\Big\lfloor\frac{n}{10}\Big\rfloor-t(p)ud(n). $$ Also, we have $$ t(p)=\left\{ \begin{array}{ll} \frac{p-1}{10} & {\rm if~} ud(p)=1,\\ \frac{7p-1}{10} & {\rm if~} ud(p)=3,\\ \frac{3p-1}{10} & {\rm if~} ud(p)=7,\\ \frac{9p-1}{10} & {\rm if~} ud(p)=9. \end{array} \right. $$ \end{thm} \begin{proof} Suppose $p\neq 2, 5$ is a prime. Since gcd$(10,p)=1$, the diophantine equation $10t+1=kp$ with $0\leq t<p$ has a unique solution modulo $p$, and clearly $k,t>0$. Let
$t=t(p)$. So, $p|10t(p)+1$ and $1\leq t(p)\leq p-1$.\\
Now, suppose $p|n$, and consequently $p|t(p)n$. Also, we have
$p|10t(p)+1$. Therefore, by considering $ud(n)=n-10\lfloor\frac{n}{10}\rfloor$, we have
$p|\lfloor\frac{n}{10}\rfloor-t(p)ud(n)$.\\
Conversely, suppose $p|\lfloor\frac{n}{10}\rfloor-t(p)ud(n)$, or equivalently
$p|(10t(p)+1)\lfloor\frac{n}{10}\rfloor-t(p)n$. Since
$p|10t(p)+1$, we obtain $p|t(p)n$, and since $1\leq t(p)\leq p-1$, we have gcd$(p,t(p))=1$; therefore $p|n$.\\ Now, we compute the value of $t(p)$. To do this, we note that since $p\neq 2, 5$, we have $ud(p)\in\{1,3,7,9\}$. If we let $k=k(p)$, we have the relation $10t(p)+1=k(p)p$ with $1\leq t(p)\leq p-1$. So, $ud(pk(p))=1$ always holds. Also, the condition $1\leq t(p)\leq p-1$ yields $\frac{11}{p}\leq k(p)\leq 10-\frac{9}{p}$. So, for all primes $p\neq 2, 5$ we have $1\leq k(p)\leq 9$. If $ud(p)=1$, then since $ud(pk(p))=1$ and $1\leq k(p)\leq 9$, we obtain $k(p)=1$ and so $t(p)=\frac{p-1}{10}$. The other cases are similar. This completes the proof. \end{proof} \textbf{Note 1.} We call this divisibility test ``recurrence'' because for all primes $p\neq 2, 5$ we have $$ \lfloor\log_{10}n\rfloor= 1+\Big\lfloor\log_{10}\Big(\Big\lfloor\frac{n}{10}\Big\rfloor-t(p)ud(n)\Big)\Big\rfloor. $$ That is, the test works by cancelling the digits of the given number one digit at a time.\\\\ \textbf{Note 2.} Similar divisibility tests can be obtained which cancel $m$ digits at a time.\\\\ \textbf{Note 3.} Analogous tests hold for every $n\in{\mathbb N}$ with $2\nmid n$ and $5\nmid n$.\\\\ \textbf{Acknowledgements.} I thank Mr. H. Osanloo, who brought recurrence divisibility tests in some special cases to my attention.
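The theorem translates directly into a short program. The following Python sketch (ours, not part of the original note; the names \texttt{t\_of} and \texttt{divisible\_by} are illustrative) computes $t(p)$ as $-10^{-1}\bmod p$ and iterates the step $n \mapsto |\lfloor n/10\rfloor - t(p)\,ud(n)|$, which preserves divisibility by $p$ at every stage:

```python
def t_of(p):
    """The unique t with 1 <= t <= p - 1 and p | 10*t + 1 (p prime, p != 2, 5)."""
    return (-pow(10, -1, p)) % p       # t = -(10^{-1}) mod p; nonzero for p != 2, 5

def divisible_by(n, p):
    """Recurrence test of the theorem: p | n  iff  p | floor(n/10) - t(p)*ud(n)."""
    tp = t_of(p)
    n = abs(n)
    while n > 9 * tp:                  # each step strictly decreases n in this range
        n = abs(n // 10 - tp * (n % 10))
    return n % p == 0                  # divisibility by p is preserved at every step
```

For example, $t(7)=2$ and the chain $1001 \to 100 - 2\cdot 1 = 98 \to 9 - 2\cdot 8 = -7$ confirms $7\,|\,1001$.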
\end{document}
\begin{document}
\tightenlines \title{Microscopic Origin of Spatial Coherence and Wolf Shifts\footnote{Festschrift in honor of Prof. E. Wolf, Edited by T. Jansen, SPIE Publication No (2004)}} \author{Girish S. Agarwal\footnote{email: gsa@prl.ernet.in}} \address{Physical Research Laboratory, Navrangpura, Ahmedabad-380 009, India} \date{\today} \maketitle \begin{abstract} \end{abstract}
\section{Introduction} Wolf$^{1,2,3,4}$ discovered how the spatial coherence characteristics of a source affect the spectrum of the radiation in the far zone. In particular, the spatial coherence of the source can result in either red or blue shifts of the measured spectrum. His predictions have been verified in a large number of different classes of systems. Wolf and coworkers usually assume a given form of the source correlations and study its consequences. In this paper we consider the microscopic origin of spatial coherence and radiation from a system of atoms$^{5,6,7,8}$. We discuss how the radiation differs from that produced by an independent system of atoms. We show that the process of radiation itself is responsible for the creation of spatial correlations within the source. We present different features of the spectrum and other statistical properties of the radiation, which show a strong dependence on the spatial correlations. We show the existence of a new type of two-photon resonance that arises as a result of such spatial correlations. We further show how the spatial coherence of the field can be used in the context of radiation generated by nonlinear optical processes. We conclude by demonstrating the universality of Wolf shifts and its application in the context of pulse propagation in a dispersive medium. \begin{figure}\label{fig1}
\end{figure} We start by giving a summary of Wolf's main results$^{1,2}$. Consider the radiation produced by two point sources $P_1$ and $P_2$ at the observation point $P$,
Fig.~\ref{fig1}. Let us consider for simplicity the case of scalar fields $U(P,\omega)$. The spectrum of the field at $P$ is given by \begin{equation} S_U(P,\omega)= \langle U^{*}(P,\omega)U(P,\omega)\rangle, \label{1} \end{equation} whereas the spectrum of the source is defined by \begin{eqnarray} S_{Q}(\omega)&=&\langle Q^{*}(P_1,\omega)Q(P_1,\omega)\rangle\nonumber\\ &=&\langle Q^{*}(P_2,\omega)Q(P_2,\omega)\rangle. \label{2} \end{eqnarray} We assume identical spectra for the two sources. Let $\mu_{Q}(\omega)$ be a measure of the correlation between the two sources:
\begin{eqnarray} \mu_{Q}=\frac{\langle Q^{*}(P_1,\omega)Q(P_2,\omega)\rangle}{S_{Q}(\omega)}. \label{3} \end{eqnarray} The spectral degree of coherence between the two sources is then $\mid\mu_{Q}\mid$. For two coherent sources $\mu$ is $1$ whereas for incoherent sources $\mu=0$. The field $U$ at the point $P$ can be related to the strength of the sources via \begin{eqnarray} U(P,\omega)=Q(P_1,\omega)\frac{e^{ikR_{1}}}{R_1}+ Q(P_2,\omega)\frac{e^{ikR_{2}}}{R_2}. \label{4} \end{eqnarray} Here we have ignored unnecessary numerical factors. Using Eq.~(\ref{4}), the spectrum of the field is related to the spectrum of the source and the degree of spatial coherence by \begin{eqnarray} S_U(P,\omega)=S_Q(\omega)\left(\frac{1}{R_1^2}+\frac{1}{R_2^2}+\frac{1}{R_1R_2} \left[\mu_Q(\omega)e^{ik(R_{2}-R_1)}+{\rm c. c.}\right]\right). \label{5} \end{eqnarray} Clearly, in general, the source spectrum and the spectrum at $P$ are not equal \begin{equation} S_U(P,\omega)\neq S_Q(\omega). \label{6} \end{equation} The measured spectral characteristics will thus also be determined by $\mu_{Q}$, and $S_U(P,\omega)$, in general, would exhibit correlation-induced spectral shifts. Wolf used a phenomenological model for $S_Q$ and $\mu_{Q}$ to demonstrate a variety of spectral shifts and even the correlation-induced splitting of a line into several lines. Clearly it is desirable to understand the origin of source correlations.
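The shift mechanism in Eq. (\ref{5}) can be checked numerically. The following Python sketch (not from the paper; all numerical values are assumed purely for illustration, in units with $c=1$) takes a Lorentzian source line and a real, frequency-independent $\mu_Q$, and locates the peak of $S_U(P,\omega)$: for $\mu_Q=0$ the far-zone spectrum is proportional to $S_Q$ and peaks at $\omega_0$, while for $\mu_Q=1$ the factor $\cos[\omega(R_2-R_1)/c]$ varies across the line and displaces the peak.

```python
import numpy as np

# Assumed illustrative parameters (units with c = 1): Lorentzian source line at w0
# of half-width gamma, sources at distances R1, R2 from the observation point P.
w0, gamma, c = 1.0, 0.01, 1.0
R1, R2 = 100.0, 130.0
w = np.linspace(w0 - 5.0 * gamma, w0 + 5.0 * gamma, 20001)
S_Q = (gamma / np.pi) / ((w - w0)**2 + gamma**2)   # Lorentzian source spectrum

def S_U(mu):
    """Far-zone spectrum of Eq. (5) for a real, frequency-independent mu_Q = mu."""
    return S_Q * (1.0 / R1**2 + 1.0 / R2**2
                  + (2.0 * mu / (R1 * R2)) * np.cos(w * (R2 - R1) / c))

peak_uncorrelated = w[np.argmax(S_U(0.0))]  # mu = 0: peak stays at w0
peak_correlated   = w[np.argmax(S_U(1.0))]  # mu = 1: correlation-induced shift
```

With these parameters the correlated peak is displaced by roughly a tenth of the linewidth; red or blue shifts are obtained depending on the sign of the phase $\omega_0(R_2-R_1)/c$.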
\section{microscopic origin of source correlations}
We thus examine the question of how atoms radiate. Consider for example an atom in its excited state. It interacts with the modes of the quantized electromagnetic field in the vacuum state. The atom makes a transition to the ground state by the emission of a photon. The photon can be emitted into any mode of the field, and the atom has an infinity of available modes. It is known that the emitted radiation has a Lorentzian spectrum \begin{equation} S_A(\omega)=\frac{\gamma/\pi}{(\omega-\omega_0)^2+\gamma^2}, \label{7} \end{equation} where $\omega_0$ is the frequency of the atomic transition and $\gamma$ is half the Einstein $A$ coefficient. \begin{figure}\label{fig2}
\end{figure}
Next consider two atoms located at $\vec{r}_{A}$ and $\vec{r}_{B}$. Let each atom be initially in its excited state. The question is whether the atoms radiate independently of each other {\it i.e.} whether the spectrum of the emitted photons factorizes \begin{equation} S(\omega_1,\omega_2)=S_A(\omega_1)S_B(\omega_2) \label{8} \end{equation} or not. The correlations between the two atoms$^{6,7,8}$ would invalidate (\ref{8}) and it would in general also imply that \begin{eqnarray} S_A(\omega_1)\neq\int S(\omega_1,\omega_2)d\omega_2, \label{9} \end{eqnarray} \begin{figure}\label{fig3}
\end{figure} \noindent {\it i.e.}, the spectrum of the emitted radiation would be different from the one if the other atom was absent. Note that both atoms interact with a common
quantized electromagnetic field. This interaction with a common field results
in an effective interaction between two
atoms even if the atoms do not interact. This can also be understood by
considering, say, the net field on the atom $B(A)$ which would consist of the
vacuum field and the field radiated by the atom $A(B)$. Let us denote by
$\chi_{ij}(\vec{r}_{A},\vec{r}_{B},\omega)$ the $i$-th component of the field at the position
$\vec{r}_{A}$ due to a unit dipole oriented in the direction $j$ at the position
$\vec{r}_{B}$.
\begin{figure}\label{fig4}
\end{figure} \noindent
This field$^{9,10,11}$ is well known from the solution of Maxwell equations
\begin{eqnarray}
\chi_{ij}(\vec{r}_{A},\vec{r}_{B},\omega)=
\left(\frac{\omega^2}{c^2}\delta_{ij}+\frac{\partial^2}
{\partial r_{A}\partial r_{B}}\right)
\frac{\exp(i|\vec{r}_{A}-\vec{r}_{B}|\omega/c)}
{|\vec{r}_{A}-\vec{r}_{B}|}.
\label{10}
\end{eqnarray}
This function has a close connection with the spatial coherence of the vacuum of
the electromagnetic field. Let us write the electric field operator in terms of
its positive and negative frequency parts
\begin{eqnarray}
E=E^{(+)}+E^{(-)}.
\label{11}
\end{eqnarray}
It is well known in quantum optics that $E^{(+)}$ ($E^{(-)}$)
corresponds to the
absorption (emission) of photons. Further, $E^{(+)}$ is an analytic signal.
Let us consider the second-order coherence function of the electromagnetic field
\begin{eqnarray}
S^{A}_{\alpha\beta}(\vec{r_1},\vec{r_2},\tau)=\langle
E_{\alpha}^{(+)}(\vec{r}_{1},t+\tau)E_{\beta}^{(-)}(\vec{r}_2)\rangle
\label{12}
\end{eqnarray}
which is non-vanishing even though the field is in the vacuum state. Its Fourier
transform is given by$^{9}$
\begin{eqnarray}
\int d\tau e^{i\omega\tau}S_{\alpha\beta}^{A}(\vec{r}_1,\vec{r}_2,\tau)&=&
2\hbar\, {\rm Im}\,\chi_{\alpha\beta}(\vec{r_1},\vec{r_2},\omega){\rm~~~ if~~}\omega>0\nonumber\\
&=&0{\rm~~~~if~~}\omega< 0.
\label{13}
\end{eqnarray}
We thus conclude that the vacuum of the electromagnetic field has spatial
coherence which extends over distances of the order of a wavelength. Therefore the
correlations between atoms would extend over at least such distances.
In a macroscopic sample these correlations could build up
over much larger distances. Explicit results for two atoms can be found in
Refs.~[6],[7],[8].
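The range of this vacuum spatial coherence can be read off from the scalar part of Eq. (\ref{10}): ${\rm Im}[\exp(i\omega r/c)/r]=\sin(\omega r/c)/r$, a sinc-like function of the separation $r$. A small Python sketch (ours, with an assumed wavelength purely for illustration) shows that it is of order one for $r\ll\lambda$ and decays beyond a wavelength:

```python
import numpy as np

lam = 1.0                               # assumed wavelength of the transition
k = 2.0 * np.pi / lam
r = np.array([0.05, 0.3, 2.2]) * lam    # separations below, near, and beyond lam

# Scalar part of Eq. (10): Im[exp(i k r)/r] = sin(k r)/r, normalized by its
# r -> 0 limit k, giving the dimensionless vacuum coherence of Eq. (13).
coherence = np.sin(k * r) / (k * r)
```

The correlation is close to unity for $r\ll\lambda$ and falls below $0.1$ by $r\sim 2\lambda$, consistent with the statement above that atom-atom correlations extend over distances of the order of a wavelength.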
\section{source correlation induced two photon resonance}
\begin{figure}\label{fig5}
\end{figure}
We next discuss several other situations where atom-atom correlations play an
important role. Consider first the case of two nonidentical atoms with transition
frequencies $\omega_A$ and $\omega_B$ which are located within a wavelength
of each other. Let both atoms start in the ground state and let them interact
with a laser field of frequency $\omega_l$. We now study the total intensity
$I(\omega_l)$ of the emitted radiation as a function of $\omega_l$. Clearly
$I(\omega_l)$ will exhibit single-photon resonances at
$\omega_l=\omega_{Aeg},\omega_{Beg}$. In principle there is also the
possibility of a two-photon resonance at $2\omega_l=\omega_{Aeg}+\omega_{Beg}$. It
turns out that in the absence of source correlations the two-photon resonance
does not occur, as the two paths
\begin{eqnarray}
|g_A,g_B\rangle\rightarrow|e_A,g_B\rangle\rightarrow|e_A,e_B\rangle,{\rm~~and~~}
|g_A,g_B\rangle\rightarrow|g_A,e_B\rangle\rightarrow|e_A,e_B\rangle \end{eqnarray} interfere destructively. Thus the source correlations are the key to the two-photon
resonance. In an earlier work the effect of source correlations on such a two-photon
resonance was studied in great detail$^{7}$; recently it has been observed
in experiments involving single molecules$^{12}$, and very recently we have shown how such source
correlations arise in a cavity$^{13}$.
\section{spatial coherence and emission in presence of a mirror}
\begin{figure}\label{fig6}
\end{figure}
Another class of systems where spatial coherence plays an important role is,
for example, the emission of radiation in front of a metallic mirror$^{10}$
or in a cavity formed by metallic or dielectric mirrors. The spectrum
of the emitted radiation depends on the distance $b$ of the atom from the mirror. As a
matter of fact both the line width and the line shift become $b$-dependent.
If the metallic mirror is treated as a perfect conductor, then the calculations show
that the line shifts, for example, are determined by the
spatial coherence of the field at the location of the atom and its image. Thus the correlation of the vacuum
$\langle\vec{E}^{(+)}(\vec{b},t)\vec{E}^{(-)}(-\vec{b},t^{'})\rangle$,
which is related to $\chi(\vec{b},-\vec{b},\omega)$, determines the line shifts and line
widths. Explicit results for the $b$-dependence of shifts and widths can be found in
Refs.~[10],[14].
\section{spatial coherence induced control of nonlinear generation}
We next discuss the effects of spatial coherence in the context of nonlinear
optics. We show that the generation of radiation by nonlinear
processes can be controlled by source correlations. Consider, for example, the
process of second harmonic generation (SHG) with $P=\chi^{(2)}E^2$, $E\sim
e^{i\vec{k}.\vec{r}}$.
The efficiency of the SHG depends on the phase matching integral
\begin{eqnarray}
f=\frac{1}{V}\int e^{-i\vec{q}.\vec{r}} e^{2i\vec{k}.\vec{r}}d^3r
\label{14}
\end{eqnarray}
which goes to unity if $\vec{q}=2\vec{k}$. The function $f$ determines the direction in
which second harmonic generation is dominant. \begin{figure}\label{fig7}
\end{figure} \noindent
If however the field $E$ is
partially coherent, then in place of (\ref{14}) we need to consider
\begin{eqnarray}
f=\int d^3r^{'}d^3r^{''}e^{-i\vec{q}.\vec{r}^{'}}
e^{2i\vec{k}.\vec{r}^{''}}\langle
P(\vec{r}^{'})P^{*}(\vec{r}^{''})\rangle.
\label{15}
\end{eqnarray}
\begin{figure}\label{fig8}
\end{figure} \noindent
Note that for SHG with coherent radiation
\begin{eqnarray}
\langle P(\vec{r}^{'})P^{*}(\vec{r}^{''})\rangle\equiv
\langle P(\vec{r}^{'})\rangle\langle
P^{*}(\vec{r}^{''})\rangle\equiv e^{2i\vec{k}.(\vec{r}^{'}-\vec{r}^{''})}
\label{16}
\end{eqnarray}
and then
\begin{equation}
I\propto (vol)^2.
\end{equation}
On the other hand for the case of incoherent radiation
\begin{eqnarray}
&&\langle P(\vec{r}^{'})P^{*}(\vec{r}^{''})\rangle\equiv|\wp|^2
\delta(\vec{r}^{'}-\vec{r}^{''}),\\
&& I\rightarrow|\wp|^2(vol).
\end{eqnarray}
For the partially coherent radiation
\begin{equation}
\langle P(\vec{r}^{'})P^{*}(\vec{r}^{''})\rangle=|\chi^{(2)}|^2\langle
E^2(\vec{r}^{'})E^{*2}(\vec{r}^{''})\rangle,
\end{equation}
which, under the assumption of a Gaussian field, becomes
\begin{eqnarray}
\langle P(\vec{r}^{'})P^{*}(\vec{r}^{''})\rangle=
2I^2|\mu(\vec{r}^{'}-\vec{r}^{''})|^2,
\label{21}
\end{eqnarray}
where $\mu(\vec{r}^{'}-\vec{r}^{''})$ denotes the degree of spatial coherence
of the incident field.
Thus SHG would now be determined by the integral
\begin{eqnarray}
&&|f(\vec{Q})|^2=\int\int
d^3r^{'}\,d^3r^{''}|\mu(\vec{r}^{'}-\vec{r}^{''})|^2
e^{i\vec{Q}.(\vec{r}^{'}-\vec{r}^{''})}\\
&&\vec{Q}=-\vec{q}+2\vec{k}.
\label{22}
\end{eqnarray}
Clearly now the direction of SHG would be determined by the spatial coherence
of the field. Thus spatial coherence can serve as a control parameter for the
nonlinear generation. The above ideas should also find interesting
applications in other areas of nonlinear optics.
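A one-dimensional numerical sketch of the integral (\ref{22}) illustrates this control. Assuming (purely for illustration) a Gaussian degree of coherence $\mu(s)=\exp[-s^2/(2\sigma^2)]$ and using the change of variable $s=r^{'}-r^{''}$, $|f(Q)|^2$ per unit volume reduces to the Fourier transform of $|\mu(s)|^2$, so a long coherence length $\sigma$ concentrates the second harmonic near $\vec{Q}=0$ while a short one spreads it out:

```python
import numpy as np

def f2(Q, sigma):
    """|f(Q)|^2 per unit volume for mu(s) = exp(-s^2/(2 sigma^2)), 1D Riemann sum."""
    s = np.linspace(-10.0 * sigma, 10.0 * sigma, 4001)
    ds = s[1] - s[0]
    # Fourier transform of |mu(s)|^2; the integrand is even, so cos suffices.
    return np.sum(np.exp(-s**2 / sigma**2) * np.cos(Q * s)) * ds

# SHG contrast between perfect phase matching (Q = 0) and a mismatch Q = 0.5:
contrast_coherent   = f2(0.0, 10.0) / f2(0.5, 10.0)  # long coherence length: sharp
contrast_incoherent = f2(0.0, 1.0) / f2(0.5, 1.0)    # short coherence length: broad
```

Analytically $|f(Q)|^2=\sigma\sqrt{\pi}\,e^{-Q^2\sigma^2/4}$, so the contrast between the two directions is large for $\sigma=10$ but close to unity for $\sigma=1$: the less coherent the field, the less directional the second harmonic.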
\section{universality of Wolf shift} Before concluding the paper we would also like to make some general remarks on the universality and applicability of Wolf shifts in the context of other systems. We know, for instance, that other standard equations of physics (such as those describing the vibrations of a string, or heat transport) admit the following relation between the effect $\Phi$ of the source $P$ at the observation point \begin{eqnarray} \Phi(\vec{r})=\int G(\vec{r},\vec{r}^{'})P(\vec{r}^{'})d^3r^{'}, \label{23} \end{eqnarray} where $G$ is the Green's function for the underlying equation. The observed quantities are usually quadratic in $\Phi$. Thus an observation at the point $\vec{r}$ would depend on the correlations of the source at two points. This is due to the nonlocal nature of the solution (\ref{23}).
\section{Fluctuating Pulses in a Dispersive medium} As another example of this universality we can consider the propagation of pulses in a dispersive medium, which is described by the equation \begin{eqnarray} i\frac{\partial{\it E}}{\partial z}=\frac{\tilde{k}}{2}\frac{\partial^{2} {\it E}}{\partial t^2}. \label{24} \end{eqnarray} The solution of this equation can be given in terms of the Green's function \begin{eqnarray} &&{\it E}(z,t)=\int G(z,t;0,t^{'}){\it E}(0,t^{'})dt^{'};\\ &&G=\frac{i}{2\pi z\tilde{k}}\exp\left(- \frac{i}{2z\tilde{k}}(t^2-2tt^{'}+t^{'2})\right). \label{25} \end{eqnarray} If the input pulse has fluctuations, then the intensity of the output pulse is determined by the correlations of the pulse on the input plane \begin{eqnarray} I(L,t)=\int\int dt^{'}dt^{''}G^{*}(L,t;0,t^{'})G(L,t;0,t^{''}) \langle{\it E}(t^{'}){\it E}^{*}(t^{''})\rangle. \label{26} \end{eqnarray} Clearly the intensity of the pulse at the output plane is not completely determined by the intensity of the pulse at the input plane.
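Equation (\ref{24}) is diagonal in the frequency domain: each spectral component acquires the phase $\exp(i\tilde{k}\omega^2 z/2)$, which is how the Green's function (\ref{25}) acts. The following Python sketch (ours; the input pulse and the values of $\tilde{k}$ and $z$ are assumed purely for illustration) propagates a unit-width Gaussian pulse and measures its dispersive spreading:

```python
import numpy as np

kt, z = 1.0, 5.0                       # assumed dispersion parameter (k tilde) and distance
t = np.linspace(-40.0, 40.0, 4096)
E_in = np.exp(-t**2 / 2.0)             # unit-width Gaussian input pulse

# i dE/dz = (kt/2) d^2 E/dt^2  =>  E(z, w) = E(0, w) exp(i kt w^2 z / 2)
w = 2.0 * np.pi * np.fft.fftfreq(t.size, d=t[1] - t[0])
E_out = np.fft.ifft(np.fft.fft(E_in) * np.exp(1j * kt * w**2 * z / 2.0))

def rms_width(I):
    """RMS temporal width of an intensity profile sampled on the grid t."""
    return np.sqrt(np.sum(t**2 * I) / np.sum(I))

# For a Gaussian pulse the width grows by sqrt(1 + (kt*z)^2):
spread = rms_width(np.abs(E_out)**2) / rms_width(E_in**2)
```

The pulse energy is conserved while the width grows and the peak intensity drops correspondingly; for a fluctuating input one would average $|E_{\rm out}|^2$ over realizations, which reproduces the two-point correlation structure of Eq. (\ref{26}).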
\section{conclusions} Thus in conclusion we have shown that the vacuum of the electromagnetic field has an intrinsic partial spatial coherence in the frequency domain which effectively extends over regions of the order of the wavelength $\lambda$. This spatial coherence leads to a dynamical coupling between atoms and is the cause of source correlations. We showed how such correlations can lead to a new type of two-photon resonance and how these are relevant for near-field optics. We further showed how the source spatial correlations can lead to new phase-matching conditions for nonlinear optical effects, leading to the possibility of using spatial coherence to produce tailor-made emission. We also discussed the universality of source correlation effects and, as a specific example, we treated the propagation of fluctuating pulses in a dispersive medium.
The author thanks E. Wolf for many discussions on the subject of correlation induced shifts. \begin{references} \bibitem{1} E. Wolf, "Invariance of the Spectrum of Light on Propagation," {\it Phys. Rev. Lett.\/} {\bf 56}, pp. 1370-1372 (1986);
E. Wolf, "Red shifts and blue shifts of spectral lines emitted by two correlated sources," {\it ibid.\/} {\bf 58}, pp. 2646-2648 (1987); E. Wolf, "Correlation-induced Doppler-type frequency shifts of spectral lines," {\it ibid.\/} {\bf 63}, pp. 2220-2223 (1989).
\bibitem{2} E. Wolf, "Non-cosmological redshifts of spectral lines," {\it Nature (London)} {\bf 326}, pp. 363-366 (1987). \bibitem{3} E. Wolf and D. F. V. James, "Correlation-induced spectral changes," {\it Rep. Prog. Phys.} {\bf 59}, pp. 771-818 (1996). \bibitem{4} L. Mandel and E. Wolf, {\it Optical Coherence and Quantum Optics} (Cambridge University Press, 1995). \bibitem{5} G. S. Agarwal, in {\it Quantum Optics} (Springer Tracts in Modern Physics, Vol. 70, 1974). \bibitem{6} G. Varada and G. S. Agarwal, "Microscopic approach to correlation-induced frequency shifts," {\it Phys. Rev. A} {\bf 44}, pp. 7626-7634 (1991). \bibitem{7} G. Varada and G. S. Agarwal, "Two-photon resonance induced by the dipole-dipole interaction," {\it Phys. Rev. A} {\bf 45}, pp. 6721-6729 (1992). \bibitem{8} D. F. V. James, "Frequency shifts in spontaneous emission from two interacting atoms," {\it Phys. Rev. A} {\bf 47}, pp. 1336-1346 (1993). \bibitem{9} G. S. Agarwal, "Quantum electrodynamics in the presence of dielectrics and conductors: I. Electromagnetic-field response functions and black-body fluctuations in finite geometries," {\it Phys. Rev. A} {\bf 11}, pp. 230-242 (1975). \bibitem{10} G. S. Agarwal, "Quantum electrodynamics in the presence of dielectrics and conductors: IV. General theory of spontaneous emission in finite geometries," {\it Phys. Rev. A} {\bf 12}, pp. 1475-1497 (1975). \bibitem{11} G. S. Agarwal, in {\it Quantum Electrodynamics and Quantum Optics}, ed. A. Barut (Plenum, 1983). \bibitem{12} C. Hettich, C. Schmitt, J. Zitzmann, S. Kuhn, I. Gerhardt, and V. Sandoghdar, "Nanometer resolution and coherent optical dipole coupling of two individual molecules," {\it Science} {\bf 298}, pp. 385-389 (2002). \bibitem{13} P. K. Pathak and G. S. Agarwal, "Giant two-atom two-photon vacuum Rabi oscillations in a high quality cavity," to be published. \bibitem{14} G. S. Agarwal and H. D.
Vollmer, "Surface polariton effects in spontaneous emission," {\it Physica Status Solidi B} {\bf 79}, 249 (1977); G. S. Agarwal and Vollmer, "Surface polariton effects in spontaneous emission. II. Effects of spatial dispersion," {\it ibid.\/} {\bf 85}, 301 (1978). \end{references} \end{document} \documentstyle[aps,amsmath,amssymb,epsfig]{revtex} \draft \begin{document} \tightenlines \title{Microscopic Origin of Spatial Coherence and Wolf Shifts\footnote{Festschrift in honor of Prof. E. Wolf, Edited by T. Jansen, SPIE Publication No (2004)}} \author{Girish S. Agarwal\footnote{email: gsa@prl.ernet.in}} \address{Physical Research Laboratory, Navrangpura, Ahmedabad-380 009, India} \date{\today} \maketitle \begin{abstract} \end{abstract}
\section{Introduction} Wolf $^{1,2,3,4}$ discovered how the spatial coherence characteristics of the source affect the spectrum of the radiation in the far zone. In particular the spatial coherence of the source can result either in red or blue shifts in the measured spectrum.His predictions have been verified in a large number of different classes of systems. Wolf and coworkers usually assume a given form of source correlations and study its consequence. In this paper we consider microscopic origin of spatial coherence and radiation from a system of atoms$^{5,6,7,8}$. We discuss how the radiation is different from that produced from an independent system of atoms. We show that the process of radiation itself is responsible for the creation of spatial correlations within the source. We present different features of the spectrum and other statistical properties of the radiation, which show strong dependence on the spatial correlations. We show the existence of a new type of two-photon resonance that arises as a result of such spatial correlations. We further show how the spatial coherence of the field can be used in the context of radiation generated by nonlinear optical processes. We conclude by demonstrating the universality of Wolf shifts and its application in the context of pulse propagation in a dispersive medium. \begin{figure}\label{fig1}
\end{figure} We start by giving a summary of Wolf's main results$^{1,2}$. Consider the radiation produced by two point sources $P_1$ and $P_2$ at the observation point $P$,
Fig.~\ref{fig1}. Let us consider for simplicity the case of scalar fields $U(P,\omega)$. The spectrum of the field at $P$ is given by \begin{equation} S_U(P,\omega)= \langle U^{*}(P,\omega)U(P,\omega)\rangle, \label{1} \end{equation} where as the spectrum of the source is defined by \begin{eqnarray} S_{Q}(\omega)&=&\langle Q^{*}(P_1,\omega)Q(P_1,\omega)\rangle\\ &=&\langle Q^{*}(P_2,\omega)Q(P_2,\omega)\rangle. \label{2} \end{eqnarray} We assume identical spectra for the two sources. Let $\mu_{Q}(\omega)$ be the spectral degree of coherence between two sources \begin{eqnarray} \mu_{Q}=\frac{\langle Q^{*}(P_1,\omega)Q(P_2,\omega)\rangle}{S_{Q}(\omega)}, \label{3} \end{eqnarray} This is a measure of correlation between the two sources. For two coherent sources $\mu$ is $1$ whereas for incoherent sources $\mu=0$. The field $U$ at the point $P$ can be related to the strength of the sources via \begin{eqnarray} U(P,\omega)=Q(P_1,\omega)\frac{e^{ikR_{1}}}{R_1}+ Q(P_2,\omega)\frac{e^{ikR_{2}}}{R_2}. \label{4} \end{eqnarray} Here we have ignored unnecessary numerical factors. Using Eq.~(\ref{4}) the spectrum of the field is related to the spectrum of the source and the degree of spatial coherence \begin{eqnarray} S_U(P,\omega)=S_Q(\omega)\left(\frac{1}{R_1^2}+\frac{1}{R_2^2}+\frac{1}{R_1R_2} \left[\mu_Q(\omega)e^{ik(R_{2}-R_1)}+{\rm c. c.}\right]\right). \label{5} \end{eqnarray} Clearly in general, the source spectrum and the spectrum at $P$ are not equal \begin{equation} S_U(P,\omega)\neq S_Q(\omega). \label{6} \end{equation} Clearly the measured spectral characteristics will also be determined by $\mu_{Q}$ and $S_U(P,\omega)$, in general, would exhibit correlation induced spectral shifts. Wolf used phenomenological model for $S_Q$ and $\mu_{Q}$ to demonstrate a variety of spectral shifts and even the correlation induced splitting of a line into several lines. Clearly it is desirable to understand the origin of source correlations.
\section{microscopic origin of source correlations}
We thus examine the question of how the atom radiate. Consider for example an atom in its excited state. It interacts with the modes of quantized electromagnetic field in vacuum state. The atom makes a transition to the ground state by the emission of a photon. The photon can be emitted in any mode of the field. The atom has infinity of available modes. It is known that the spectrum of the emitted radiation has Lorentzian spectrum \begin{equation} S_A(\omega)=\frac{\gamma/\pi}{(\omega-\omega_0)^2+\gamma^2}, \label{7} \end{equation} where $\omega_0$ is the frequency of the atomic transition and $\gamma$ is half the Einstein $A$ coefficient. \begin{figure}\label{fig2}
\end{figure}
Next consider two atoms located at $\vec{r}_{A}$ and $\vec{r}_{B}$. Let each atom be initially in its excited state. The question is whether the atoms radiate independently of each other {\it i.e.} whether the spectrum of the emitted photons factorizes \begin{equation} S(\omega_1,\omega_2)=S_A(\omega_1)S_B(\omega_2) \label{8} \end{equation} or not. The correlations between the two atoms$^{6,7,8}$ would invalidate (\ref{8}) and it would in general also imply that \begin{eqnarray} S_A(\omega_1)\neq\int S(\omega_1,\omega_2)d\omega_2, \label{9} \end{eqnarray} \begin{figure}\label{fig3}
\end{figure} \noindent {\it i.e.}, the spectrum of the emitted radiation would be different from the one if the other atom was absent. Note that both atoms interact with a common
quantized electromagnetic field. This interaction with a common field results
in an effective interaction between two
atoms even if the atoms do not interact. This can also be understood by
considering, say, the net field on the atom $B(A)$ which would consist of the
vacuum field and field radiated by the atom $A(B)$. Let us denote by
$\chi_{ij}(\vec{r}_{A},\vec{r}_{B},\omega)$ as the $i-th$ component of the field at position
$\vec{r}_{A}$ due to a unit dipole oriented in the direction $j$ at the position
$\vec{r}_{B}$.
\begin{figure}\label{fig4}
\end{figure} \noindent
This field$^{9,10,11}$ is well known from the solution of Maxwell equations
\begin{eqnarray}
\chi_{ij}(\vec{r}_{A},\vec{r}_{B},\omega)=
\left(\frac{\omega^2}{c^2}\delta_{ij}+\frac{\partial^2}
{\partial r_{A}\partial r_{B}}\right)
\frac{exp(i|\vec{r}_{A}-\vec{r}_{B}|\omega/c)}
{|\vec{r}_{A}-\vec{r}_{B}|}.
\label{10}
\end{eqnarray}
This function has close connection with the spatial coherence of the vacuum of
the electromagnetic field. Let us write the electric field operator in terms of
its positive and negative frequency parts
\begin{eqnarray}
E=E^{(+)}+E^{(-)}.
\label{11}
\end{eqnarray}
It is well known in quantum optics that $E^{(+)}(E^{(-)})$
corresponds to the
absorption (emission) of photons. Further $E^{(+)}$ is an analytical signal.
Let us consider second order coherence function of the electromagnetic field
\begin{eqnarray}
S^{A}_{\alpha\beta}(\vec{r_1},\vec{r_2},\tau)=\langle
E_{\alpha}^{(+)}(\vec{r}_{1},t+\tau)E_{\beta}^{(-)}(\vec{r}_2)\rangle
\label{12}
\end{eqnarray}
which is non-vanishing even though the field is in the vacuum state. Its Fourier
transform is given by$^{9}$
\begin{eqnarray}
\int d\tau e^{i\omega\tau}S_{\alpha\beta}^{A}(\vec{r}_1,\vec{r}_2,\tau)&=&
2\hbar\, {\rm Im}\,\chi_{\alpha\beta}(\vec{r}_1,\vec{r}_2,\omega){\rm~~~ if~~}\omega>0\nonumber\\
&=&0{\rm~~~~if~~}\omega< 0.
\label{13}
\end{eqnarray}
We thus conclude that the vacuum of the electromagnetic field has spatial
coherence which extends over distances of the order of a wavelength. Therefore the
correlation between atoms would extend at least over distances of the order of a
wavelength. Clearly, in a macroscopic sample these correlations could build up
over much larger distances. Explicit results for two atoms can be found in
Refs.~[6],[7],[8].
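The wavelength-scale range of this vacuum coherence is easy to illustrate numerically. Below is a minimal sketch (an illustration, not a computation from the text) of the normalized scalar kernel $\sin(kr)/(kr)$ that underlies ${\rm Im}\,\chi$ in free space; it equals $1$ at coinciding points and decays with envelope $1/(kr)$, so the coherence is appreciable only over distances of order $\lambda$.

```python
import numpy as np

def vacuum_coherence(r, wavelength):
    """Normalized scalar coherence kernel sin(kr)/(kr) of the free-space
    vacuum field; equals 1 at r = 0 and decays with envelope 1/(kr)."""
    k = 2 * np.pi / wavelength
    x = k * np.asarray(r, dtype=float)
    return np.sinc(x / np.pi)  # np.sinc(t) = sin(pi*t)/(pi*t), so this is sin(x)/x

# Coherence is strong within a wavelength and weak far beyond it.
close = vacuum_coherence(0.1, 1.0)   # r = lambda/10: kernel still above 0.9
far = vacuum_coherence(5.25, 1.0)    # r = 5.25*lambda, off the zeros of sin
```

The scalar form is a simplification of the full tensor $\chi_{ij}$, but it already shows the wavelength-scale falloff quoted above.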
\section{source correlation induced two photon resonance}
\begin{figure}\label{fig5}
\end{figure}
We next discuss several other situations where atom-atom correlations play an
important role. Consider first the case of two nonidentical atoms with transition
frequencies $\omega_A$ and $\omega_B$, which are located within a wavelength
of each other. Let both atoms start in the ground state and let them interact
with a laser field of frequency $\omega_l$. We now study the total intensity
$I(\omega_l)$ of the emitted radiation as a function of $\omega_l$. Clearly
$I(\omega_l)$ will exhibit single photon resonances at
$\omega_l=\omega_{A},\omega_{B}$. In principle there is also the
possibility of a two photon resonance at $2\omega_l=\omega_{A}+\omega_{B}$. It
turns out that in the absence of source correlations, the two photon resonance
does not occur as the two paths
\begin{eqnarray}
|g_A,g_B\rangle\rightarrow|e_A,g_B\rangle\rightarrow|e_A,e_B\rangle,{\rm~~and~~}
|g_A,g_B\rangle\rightarrow|g_A,e_B\rangle\rightarrow|e_A,e_B\rangle \end{eqnarray} interfere destructively. Thus the source correlations are the key to the two photon
resonance. In an earlier work the effect of source correlations on such a two
photon resonance was studied in great detail$^{7}$; recently it has been observed
in experiments involving single molecules$^{12}$, and very recently we have shown how the source
correlations arise in a cavity$^{13}$.
\section{spatial coherence and emission in presence of a mirror}
\begin{figure}\label{fig6}
\end{figure}
Another class of systems where spatial coherence plays an important role is, for
example, the emission of radiation in front of a metallic mirror$^{10}$
or in a cavity formed by metallic or dielectric mirrors. The spectrum
of the emitted radiation depends on the distance $b$ of the atom from the mirror. As a
matter of fact, both the line width and the line shift become $b$-dependent.
If the metallic mirror is treated as a perfect conductor, then the calculations show
that the line shifts, for example, are determined by the
spatial coherence of the field at the location of the atom and its image. Thus the correlation of the vacuum
$\langle\vec{E}^{(+)}(\vec{b},t)\vec{E}^{(-)}(-\vec{b},t^{'})\rangle$,
which is related to $\chi(\vec{b},-\vec{b},\omega)$, determines the line shifts and line
widths. Explicit results for the $b$-dependence of shifts and widths can be found in
Refs.~[10],[14].
\section{spatial coherence induced control of nonlinear generation}
We next discuss the effects of spatial coherence in the context of nonlinear
optics. We will show that the generation of radiation using nonlinear
processes can be controlled by source correlations. Consider, for example, the
process of second harmonic generation (SHG) with $P=\chi^{(2)}E^2$, $E\sim
e^{i\vec{k}.\vec{r}}$.
The efficiency of the SHG depends on the phase matching integral
\begin{eqnarray}
f=\frac{1}{V}\int e^{-i\vec{q}.\vec{r}} e^{2i\vec{k}.\vec{r}}d^3r
\label{14}
\end{eqnarray}
which goes to unity if $\vec{q}=2\vec{k}$. The function $f$ determines the direction in
which second harmonic generation is dominant. \begin{figure}\label{fig7}
\end{figure} \noindent
If however the field $E$ is
partially coherent, then in place of (\ref{14}) we need to consider
\begin{eqnarray}
f=\int d^3r^{'}d^3r^{''}e^{-i\vec{q}.\vec{r}^{'}}
e^{2i\vec{k}.\vec{r}^{''}}\langle
P(\vec{r}^{'})P^{*}(\vec{r}^{''})\rangle.
\label{15}
\end{eqnarray}
\begin{figure}\label{fig8}
\end{figure} \noindent
Note that for SHG with coherent radiation
\begin{eqnarray}
\langle P(\vec{r}^{'})P^{*}(\vec{r}^{''})\rangle\equiv
\langle P(\vec{r}^{'})\rangle\langle
P^{*}(\vec{r}^{''})\rangle\equiv e^{2i\vec{k}.(\vec{r}^{'}-\vec{r}^{''})}
\label{16}
\end{eqnarray}
and then
\begin{equation}
I\propto (vol)^2.
\end{equation}
On the other hand, for the case of incoherent radiation
\begin{eqnarray}
&&\langle P(\vec{r}^{'})P^{*}(\vec{r}^{''})\rangle\equiv|\wp|^2
\delta(\vec{r}^{'}-\vec{r}^{''}),\\
&& I\rightarrow|\wp|^2(vol).
\end{eqnarray}
For the partially coherent radiation
\begin{equation}
\langle P(\vec{r}^{'})P^{*}(\vec{r}^{''})\rangle=|\chi^{(2)}|^2\langle
E^2(\vec{r}^{'})E^{*2}(\vec{r}^{''})\rangle,
\end{equation}
which under the assumption of a Gaussian field becomes
\begin{eqnarray}
\langle P(\vec{r}^{'})P^{*}(\vec{r}^{''})\rangle=
2I^2|\mu(\vec{r}^{'}-\vec{r}^{''})|^2,
\label{21}
\end{eqnarray}
where $\mu(\vec{r}^{'}-\vec{r}^{''})$ denotes the degree of spatial coherence
of the incident field.
Thus SHG would now be determined by the integral
\begin{eqnarray}
&&|f(\vec{Q})|^2=\int\int
d^3r^{'}d^3r^{''}|\mu(\vec{r}^{'}-\vec{r}^{''})|^2
e^{i\vec{Q}.(\vec{r}^{'}-\vec{r}^{''})},\\
&&\vec{Q}=-\vec{q}+2\vec{k}.
\label{22}
\end{eqnarray}
Clearly now the direction of SHG would be determined by the spatial coherence
of the field. Thus spatial coherence can serve as a control parameter for the
nonlinear generation. Clearly the above ideas should find interesting
applications in other areas of nonlinear optics as well.
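The scalings above can be checked numerically. The following one-dimensional sketch (an illustration, not the paper's calculation) evaluates the phase-matching integral with a Gaussian degree of coherence $\mu(d)=e^{-d^2/2\ell^2}$: for $\ell$ much larger than the interaction length $L$ the coherent result $\propto L^2$ is recovered, while for small $\ell$ the growth is only linear in $L$, and the peak of $|f(\vec{Q})|^2$ sits at $\vec{Q}=0$ with a width set by $1/\ell$.

```python
import numpy as np

def f_sq(Q, L, ell, n=400):
    """|f(Q)|^2 = int int |mu(x'-x'')|^2 exp(iQ(x'-x'')) dx' dx'' over [0,L]^2,
    with a Gaussian degree of coherence mu(d) = exp(-d^2/(2*ell^2))."""
    x = np.linspace(0.0, L, n)
    dx = x[1] - x[0]
    d = x[:, None] - x[None, :]
    integrand = np.exp(-d**2 / ell**2) * np.exp(1j * Q * d)  # |mu|^2 * phase
    return float(np.real(np.sum(integrand)) * dx * dx)

coherent = f_sq(0.0, L=10.0, ell=1e6)    # ~ L^2: volume-squared scaling
incoherent = f_sq(0.0, L=10.0, ell=0.5)  # ~ L: only volume scaling
```

The correlation length $\ell$ thus interpolates between the $(vol)^2$ and $(vol)$ intensity laws quoted in the text.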
\section{universality of Wolf shift} Before concluding the paper we would also like to make some general remarks on the universality and applicability of Wolf shifts in the context of other systems. We know, for instance, that other standard equations of physics (such as those describing the vibrations of a string, or heat transport) admit the following relation between the effect $\Phi$ of the source $P$ at the observation point \begin{eqnarray} \Phi(\vec{r})=\int G(\vec{r},\vec{r}^{'})P(\vec{r}^{'})d^3r^{'}, \label{23} \end{eqnarray} where $G$ is the Green's function for the underlying equation. The observed quantities are usually quadratic in $\Phi$. Thus an observation at the point $\vec{r}$ would depend on the correlations of the source at two points. This is due to the nonlocal nature of the solution (\ref{23}).
\section{Fluctuating Pulses in a Dispersive medium} As another example of this universality we can consider the propagation of pulses in a dispersive medium, which is described by the equation \begin{eqnarray} i\frac{\partial{\it E}}{\partial z}=\frac{\tilde{k}}{2}\frac{\partial^{2} {\it E}}{\partial t^2}. \label{24} \end{eqnarray} The solution of this equation can be given in terms of the Green's function \begin{eqnarray} &&{\it E}(z,t)=\int G(z,t;0,t^{'}){\it E}(0,t^{'})dt^{'};\\ &&G=\left(\frac{i}{2\pi z\tilde{k}}\right)^{1/2}\exp\left(- \frac{i}{2z\tilde{k}}(t-t^{'})^{2}\right). \label{25} \end{eqnarray} If the input pulse has fluctuations, then the intensity of the output pulse is determined by the correlations of the pulse at the input plane \begin{eqnarray} I(L,t)=\int\int dt^{'}dt^{''}G^{*}(L,t;0,t^{'})G(L,t;0,t^{''}) \langle{\it E}(t^{'}){\it E}^{*}(t^{''})\rangle. \label{26} \end{eqnarray} Clearly, the intensity of the pulse at the output plane is not completely determined by the intensity of the pulse at the input plane.
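A short numerical sketch (an illustration of the dispersion equation above, in arbitrary units; the grid and pulse parameters are my own choices) makes the point concrete: two input pulses with identical intensity profiles but different phases emerge from the same dispersive propagation with different output intensities, so the input intensity alone does not fix the output.

```python
import numpy as np

def propagate(E0, t, z, ktilde):
    """Solve i dE/dz = (ktilde/2) d^2E/dt^2 by a frequency-domain phase factor:
    E(z, w) = exp(i*ktilde*w^2*z/2) * E(0, w)."""
    w = 2 * np.pi * np.fft.fftfreq(t.size, d=t[1] - t[0])
    return np.fft.ifft(np.exp(1j * ktilde * w**2 * z / 2) * np.fft.fft(E0))

t = np.linspace(-20, 20, 1024)
plain = np.exp(-t**2 / 2)                # unchirped Gaussian pulse
chirped = plain * np.exp(1j * t**2 / 2)  # same |E(0,t)|^2, quadratic phase

out_plain = propagate(plain, t, z=0.5, ktilde=1.0)
out_chirped = propagate(chirped, t, z=0.5, ktilde=1.0)
# Same input intensity, different output intensity: the chirped pulse compresses
# while the unchirped one broadens under the same dispersion.
```

Energy is conserved in the propagation, but the peak intensities of the two outputs differ, in line with Eq.~(\ref{26}).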
\section{conclusions} In conclusion, we have shown that the vacuum of the electromagnetic field has intrinsic partial spatial coherence in the frequency domain which effectively extends over regions of the order of the wavelength $\lambda$. This spatial coherence leads to a dynamical coupling between atoms and is the cause of source correlations. We showed how such correlations can lead to a new type of two photon resonance and how these are relevant for near field optics. We further showed how the source spatial correlations can lead to new phase matching conditions for nonlinear optical effects, leading to the possibility of using spatial coherence to produce tailor-made emissions. We also discussed the universality of source correlation effects and, as a specific example, we treated the propagation of fluctuating pulses in a dispersive medium.
The author thanks E. Wolf for many discussions on the subject of correlation induced shifts. \begin{references} \bibitem{1} E. Wolf, "Invariance of the Spectrum of Light on Propagation," {\it Phys. Rev. Lett.\/} {\bf 56}, pp. 1370-1372 (1986);
E. Wolf, "Red shifts and blue shifts of spectral lines emitted by two correlated sources," {\it ibid.\/} {\bf 58}, pp. 2646-2648 (1987); E. Wolf, "Correlation-induced Doppler-type frequency shifts of spectral lines," {\it ibid.\/} {\bf 63}, pp. 2220-2223 (1989).
\bibitem{2} E. Wolf, "Non-cosmological redshifts of spectral lines," {\it Nature (London)} {\bf 326}, pp. 363-366 (1987). \bibitem{3} E. Wolf and D. F. V. James, "Correlation-induced spectral changes," {\it Rep. Prog. Phys.} {\bf 59}, pp. 771-818 (1996). \bibitem{4} L. Mandel and E. Wolf, {\it Optical Coherence and Quantum Optics} (Cambridge University Press, 1995). \bibitem{5} G. S. Agarwal, in {\it Quantum Optics} (Springer Tracts in Modern Physics, Vol. 70, 1974). \bibitem{6} G. Varada and G. S. Agarwal, "Microscopic approach to correlation-induced frequency shifts," {\it Phys. Rev. A} {\bf 44}, pp. 7626-7634 (1991). \bibitem{7} G. Varada and G. S. Agarwal, "Two-photon resonance induced by the dipole-dipole interaction," {\it Phys. Rev. A} {\bf 45}, pp. 6721-6729 (1992). \bibitem{8} D. F. V. James, "Frequency shifts in spontaneous emission from two interacting atoms," {\it Phys. Rev. A} {\bf 47}, pp. 1336-1346 (1993). \bibitem{9} G. S. Agarwal, "Quantum electrodynamics in the presence of dielectrics and conductors: I. Electromagnetic-field response functions and black-body fluctuations in finite geometries," {\it Phys. Rev. A} {\bf 11}, pp. 230-242 (1975). \bibitem{10} G. S. Agarwal, "Quantum electrodynamics in the presence of dielectrics and conductors: IV. General theory of spontaneous emission in finite geometries," {\it Phys. Rev. A} {\bf 12}, pp. 1475-1497 (1975). \bibitem{11} G. S. Agarwal, in {\it Quantum Electrodynamics and Quantum Optics}, ed. A. Barut (Plenum, 1983). \bibitem{12} C. Hettich, C. Schmitt, J. Zitzmann, S. Kuhn, I. Gerhardt, and V. Sandoghdar, "Nanometer resolution and coherent optical dipole coupling of two individual molecules," {\it Science} {\bf 298}, pp. 385-389 (2002). \bibitem{13} P. K. Pathak and G. S. Agarwal, "Giant two-atom two-photon vacuum Rabi oscillations in a high quality cavity," to be published. \bibitem{14} G. S. Agarwal and H. D. 
Vollmer, "Surface polariton effects in spontaneous emission," {\it Physica Status Solidi B} {\bf 79}, 249 (1977); G. S. Agarwal and H. D. Vollmer, "Surface polariton effects in spontaneous emission. II. Effects of spatial dispersion," {\it ibid.\/} {\bf 85}, 301 (1978). \end{references}
\end{document}
\begin{document}
\title{On fractionality of the path packing problem} \author{Natalia Vanetik\footnote{Department of Computer Science, Ben-Gurion University, Israel, \textit{orlovn@cs.bgu.ac.il}}}
\maketitle
\begin{abstract} \small{In an undirected graph $G$ with node set $N$ and a subset $T\subseteq N$,
a \textit{fractional multiflow problem} is defined as finding
$\max_{f} \sum_{(u,v)}\omega(u,v)f[u,v]$ over all collections $f$ of
weighted paths with ends in $T$ (the $\omega$\textit{-problem}).
Here $f[u,v]$ denotes the total weight
of the paths with the end-pair $(u,v)$ in $f$.
The paths of $f$ must
satisfy the edge capacity constraint: the total weight of
the paths traversing a single edge does not exceed $1$.
We study a fractional multiflow problem with the reward function $\omega$ having values
$(0, 1)$ (a \textit{fractional path packing problem}),
and an auxiliary \textit{weak problem} where $\omega$ is a metric.
A. Karzanov in \cite{k89} defined the {\it fractionality}
of $\omega$ with respect to a
given class of networks $(G,T)$ as the least natural $D$ such that
for any network $(G,T)$ from the class, the $\omega$-problem has a
solution which becomes integer-valued when multiplied by $D$.
He proved that a fractional path packing problem has infinite
fractionality outside a very specific class of networks, and conjectured
that within this class, the fractionality does not exceed $4$
($2$ for Eulerian networks).
In this paper we prove Karzanov's conjecture by showing that the
fractionality of both fractional path packing and weak problems
is $1$ or $2$ for every Eulerian network in this class.} \end{abstract} \section{Introduction} In this paper we study collections of edge-disjoint paths in a network, also called \textit{path packings} or \textit{multiflows}, addressing an optimization problem of the following form. Let $G = (N, E)$ be a multigraph with node-set $N$ and edge-set $E$, and let $T\subseteq N$ be a set of nodes distinguished as \textit{terminals}. By a $T${\it -path} we mean an unclosed path with the ends in $T$, and by an {\it integer $T$-flow}, or an integer multiflow, we mean a collection of pairwise edge-disjoint $T$-paths in $G$. Let us define a \textit{fractional $T$-flow} as a non-negative weight function $f(P)$ on the set of all $T$-paths in $(G, T)$, satisfying the {\it edge capacity} constraints: \begin{equation}\label{capacity-c}\mbox{\sl $\sum_{P}f(P)I(P,(x,y))\leq c(x,y)$ for each adjacent pair $(x, y)$ of nodes in $N$.}\end{equation} Here $I(P, (x, y))$ denotes the number of $(x,y)$-edges of $G$ traversed by $P$, and $c(x, y)$ is the edge capacity, equal to the number of $(x,y)$-edges in $G$. Given non-negative ``rewards'' $\omega(u, v)$ assigned to the unordered pairs of terminals, the problem is to \begin{equation}\label{omega-problem}\mbox{\sl maximize
$\sum_{u,v}\omega(u,v)f[u,v]$ over the fractional $T$-flows $f$ in $(G,
T )$,}\end{equation} where $f[u,v]$ denotes the total weight of the $(u,v)$-paths in $f$. For short, \eqref{omega-problem} will be referred to as the $\omega$\textit{-problem}. This is one of the basic multiflow problems, having numerous applications in areas such as communication networks and VLSI design. Not surprisingly, for most reward functions the $\omega$-problem is known to be $\mathbb{NP}$-hard over integer multiflows, not only when a network $(G, T )$ is quite arbitrary, but even for such friendly classes as the planar or the Eulerian networks (the latter class is studied in this paper).
However, the more fragmented $f$ is among various paths, the less useful it is for discrete path packing. To make this precise, let us, following A. Karzanov \cite{k89}, define the {\it fractionality} of the reward function $\omega$ with respect to a given class of networks $(G, T )$: this is the least natural $D$ such that for any network $(G, T )$ from the class, the $\omega$-problem has a solution $f$ which becomes integer-valued when multiplied by $D$ (in short, a $\frac{1}{D}$-integer solution). For certain reward functions, the fractionality for general networks was found to be $2$ (see \cite{ikl00} and \cite{l04}); for some of them, the $\omega$-problem was also shown to have an integer solution provided that the non-terminal
(\textit{inner}) nodes of a network have even degrees; such networks are called \textit{Eulerian}.
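As a toy illustration of fractionality $2$ (a hypothetical star network invented for this sketch, not an example from the paper): take $G=K_{1,3}$ with the three leaves as terminals, unit edge capacities, and $\omega\equiv 1$ on all terminal pairs. An integer multiflow fits at most one $T$-path, while the three $T$-paths taken with weight $\half$ each are feasible and attain $\frac{3}{2}$:

```python
from itertools import product

# Star K_{1,3}: terminals t1,t2,t3, inner node c; a T-path ti-c-tj
# uses the two edges (ti,c) and (tj,c), each of capacity 1.
PATHS = [("t1", "t2"), ("t1", "t3"), ("t2", "t3")]
EDGES = ["t1", "t2", "t3"]  # identify edge (ti,c) with its terminal ti

def feasible(weights):
    """Edge capacity check: total path weight through each edge <= 1."""
    load = {e: 0.0 for e in EDGES}
    for (u, v), w in zip(PATHS, weights):
        load[u] += w
        load[v] += w
    return all(l <= 1.0 + 1e-9 for l in load.values())

# Best integer packing: enumerate 0/1 weights for the three paths.
best_integer = max(sum(w) for w in product([0, 1], repeat=3) if feasible(w))
# Half-integer packing: all three paths with weight 1/2.
half = (0.5, 0.5, 0.5)
best_half = sum(half) if feasible(half) else 0.0
```

Here the fractional optimum is exactly $\frac{3}{2}$, so any optimal solution needs denominator $2$, consistent with the finite fractionality values discussed below.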
Two specific classes of the reward function are of principal importance. One comprises the $(0, 1)$ reward functions. It is convenient to represent such a function by a demand graph (or scheme) $(T, S)$ where $S := \{(u, v):\: \omega(u, v) = 1\}$, and to call \eqref{omega-problem} the {\it $S$-problem}. Let a path in $G$ be called an \textit{$S$-path} if its end-pair belongs to $S$, and a collection of $S$-paths satisfying \eqref{capacity-c} be called an $S$-flow. Thus, the $S$-problem may be stated as maximizing $f[S] := \sum_{(u,v)\in S}f[u,v]$. A. Karzanov described the fractionality of the $(0, 1)$ reward functions (or the schemes $S$) in \cite{k89}. Namely, the fractionality of $S$ is finite iff any three distinct pairwise intersecting anticliques (i.e., inclusion-maximal stable sets) $A, B, C$ of $(T, S)$ satisfy \begin{equation}\label{kc}A\cap B = A\cap C = B\cap C,\end{equation} and the finite fractionality can only equal $1$, $2$, or $4$. He conjectured that this \begin{equation}\label{k-conjecture}\mbox{\sl finite fractionality can only be $1$ or $2$.}\end{equation}
Not long ago, H. Ilani and E. Barsky observed that the problem of discrete path packing is $\mathbb{NP}$-hard, even for Eulerian networks, for each demand graph violating \eqref{kc}. So, the investigation of the $S$-problem has focused on the schemes satisfying \eqref{kc}. In this paper we consider the $S$-problem for $S$ satisfying \eqref{kc} together with an auxiliary weak problem, called the $W$\textit{-problem}: an $\omega$-problem where $\omega$ is a metric defined by $\omega(u, v) = 1$ for $(u, v)\in S$, $\half$ for $(u, v)$ covered by exactly one anticlique of $(T, S)$, and $0$ for the others (i.e., those covered by at least two anticliques). An anticlique clutter of $(T,S)$ satisfying \eqref{kc} is called a \textit{K-clutter}, and an Eulerian network $(G,T,\K)$ with an anticlique K-clutter $\mathcal{K}$ of $(T,S)$ is called a \textit{K-network}. The maxima of the $S$- and $W$-problems are denoted by $\eta$ and $\theta$ respectively.
In this paper, we prove conjecture \eqref{k-conjecture}. Additionally, we show that the $W$-problem in a K-network also admits a solution of fractionality at most $2$. We use the following crucial fact: the $S$-problem and the $W$-problem in a network satisfying \eqref{kc} have a common solution (Theorem 1 of \cite{va07a}).
The bound on the fractionality is tight in both cases, as the example in Figure \ref{non-integral-figure} demonstrates. There we have $\mathcal{K}=\{\{s_{i},t_{j}\}\}$, $i,j\in \{1,2,3\}$, and every integer multiflow in this network has no more than $2$ $S$-paths, for example, paths $P$ and $Q$ in Figure \ref{non-integral-figure}(a). The maximum of the $W$-problem among integer multiflows is $2\half$. However, in this network there exists a half-integer multiflow $h=\{P_{1},P_{2},P_{3},Q_{1},Q_{2},Q_{3}\}$ with the weight of every path being $\half$ (see Figure \ref{non-integral-figure}(b)). The value of $\sum_{u,v}\omega(u,v)h[u,v]$ for both the $S$-problem and the $W$-problem is $3$. Thus, an integer solution to the $S$-problem or the $W$-problem does not always exist. \begin{figure}
\caption{The fractionality of $S$-problem and $W$-problem can be $2$.}
\label{non-integral-figure}
\end{figure} Table \ref{defs-table} summarizes notation used in this paper. \begin{table}[!h]
\small{
\begin{tabular}{|l|l|}
\hline
\textbf{Notation} & \textbf{Definition} \\ \hline \hline
$(G,T,\K)$ & a network $(G,(T,S))$ and the anticlique clutter $\mathcal{K}$ of $(T,S)$\\
\hline
$S$-path & a path whose end-pair is in $S$\\ \hline
$W$-path & a path whose end-pair is covered by exactly one member of $\mathcal{K}$\\ \hline
zero path & a path whose end-pair is covered by two members of $\mathcal{K}$\\ \hline
$d(X)$, $X\subset N$ & the number of $(X,\overline{X})$-edges in $G$\\
\hline
$\lambda(A)$, $A\subseteq T$ & $\min\{d(X): \; X\subset N,\;\;X\cap T=A\}$\\ \hline
$\beta(A)$, $A\subseteq T$ & $\frac{1}{2}(\sum_{t\in A}\lambda
(t)-\lambda(A))$; an integer in Eulerian networks\\ \hline
$A^{c}$, $A\subseteq T$ & $T\setminus A$\\ \hline
$\overline{A}$, $A\subseteq N$ & $N\setminus A$\\ \hline
an $(A,B)$-path (an $A$-path), $A,B\subseteq N$ & a path with ends in $A$ and
$B$ (with both ends in $A$)\\ \hline
$f[A,B]$ & the total weight of the $(A,B)$-paths in $f$ ($f[A]$ when $A=B$) \\ \hline
$w(P)$ & the weight of path $P$ \\ \hline
$xPy$ & an $(x,y)$-segment of a path $P$, where $x$ and $y$ are nodes\\
\hline
$|f|$ & the size of a multiflow $f$: the total weight of its paths\\ \hline
a maximum multiflow & a multiflow of maximum size \\ \hline
the fractionality of a multiflow & the largest denominator among its paths'
weights\\ \hline
$s\sim t$, $s,t\in T$ & $(s,t)$ is a zero pair\\ \hline
an atom & a set of terminals not separated by a member of $\mathcal{K}$\\ \hline
$\mathcal{K}$ is simple & every atom in $\mathcal{K}$ has size $1$\\ \hline
\end{tabular}}
\caption{\label{defs-table}Notation} \end{table}
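Since $\lambda(A)$ is the capacity of a minimum cut separating $A$ from $T\setminus A$, both $\lambda$ and $\beta$ can be computed with any max-flow routine. The following is a small stdlib sketch on a hypothetical Eulerian star network of my own (three terminals joined to one inner node by two parallel edges each); it is not a network from the paper.

```python
from collections import deque

def max_flow(cap, source, sink):
    """Edmonds-Karp on a capacity dict {u: {v: c}}; returns the max-flow value,
    which by max-flow/min-cut equals the minimum source-sink cut capacity."""
    res = {u: dict(vs) for u, vs in cap.items()}
    for u, vs in cap.items():
        for v in vs:
            res.setdefault(v, {}).setdefault(u, 0)  # ensure reverse arcs exist
    flow = 0
    while True:
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow
        path, v = [], sink          # recover the augmenting path
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(res[u][v] for u, v in path)
        for u, v in path:           # update residual capacities
            res[u][v] -= aug
            res[v][u] += aug
        flow += aug

# Star network: terminals a, b, s joined to inner node c by 2 parallel edges
# each (undirected edges modeled as arcs of equal capacity in both directions).
cap = {"a": {"c": 2}, "b": {"c": 2}, "s": {"c": 2}, "c": {"a": 2, "b": 2, "s": 2}}
T = ["a", "b", "s"]

def lam(A, T, cap):
    """lambda(A) = min{d(X): X cap T = A}, via a super-source/super-sink cut."""
    g = {u: dict(vs) for u, vs in cap.items()}
    g["SRC"] = {t: 10**9 for t in A}
    g["SNK"] = {}
    for t in set(T) - set(A):
        g[t]["SNK"] = 10**9
    return max_flow(g, "SRC", "SNK")

lam_a, lam_b = lam(["a"], T, cap), lam(["b"], T, cap)
lam_ab = lam(["a", "b"], T, cap)
beta_ab = (lam_a + lam_b - lam_ab) // 2  # beta({a,b}) per the table above
```

On this toy network $\lambda(\{a\})=\lambda(\{b\})=\lambda(\{a,b\})=2$, so $\beta(\{a,b\})=1$, an integer as the table notes for Eulerian networks.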
\section{Outline of the proof} We consider K-networks that are counterexamples to the fractionality conjecture for either the $W$- or the $S$-problem.
First, we prove the fractionality conjecture for the $W$-problem by showing that there exists a half-integer simple multiflow of the smallest size solving the $W$-problem. Second, we consider a minimal K-network that fails to satisfy the $S$-problem
fractionality conjecture and show that it admits a half-integer solution. \section{\label{locking-section}Operations on paths and locking} A pair of paths with disjoint end-pairs and a common node forms a \textit{cross}. A path is \textit{compound} if it traverses a terminal different from its ends, and \textit{simple} otherwise. A multiflow is called \textit{simple} if it contains only simple paths.
Let paths $P$ and $Q$ of a multiflow $f$ traverse an inner node $x$, so that $P=P^{\prime} xP^{\prime\prime}$ and $Q=Q^{\prime} xQ^{\prime\prime}$. \textit{Switching} $P$ and $Q$ in $x$ transforms them into $K=P^{\prime} xQ^{\prime}$ and $L=P^{\prime\prime} xQ^{\prime\prime}$ and $f$ into the multiflow $f\setminus\{P,Q\}\cup\{K,L\}$. A \textit{split} of an inner node $x$ is a graph transformation consisting of removal of $x$ and linking its neighbors by $\frac{d(x)}{2}$ edges so as to preserve their degrees. Given a multiflow $h$ in a network, an $h$\textit{-split} of an inner node is a split preserving the paths of $h$.
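The switching operation is purely combinatorial and easy to state in code. A minimal sketch (paths as node lists; the example paths are hypothetical): switching $P=P^{\prime}xP^{\prime\prime}$ and $Q=Q^{\prime}xQ^{\prime\prime}$ at $x$ rearranges the same edges into $K=P^{\prime}xQ^{\prime}$ and $L=P^{\prime\prime}xQ^{\prime\prime}$.

```python
def switch(P, Q, x):
    """Switch paths P and Q (node lists) at a common inner node x:
    P = P'xP'', Q = Q'xQ''  ->  K = P'xQ', L = P''xQ''.
    The multiset of edges of {K, L} equals that of {P, Q}.
    Assumes x occurs once on each path (true for simple paths)."""
    i, j = P.index(x), Q.index(x)
    K = P[:i + 1] + Q[:j][::-1]     # P-start ... x ... Q-start
    L = P[:i:-1] + [x] + Q[j + 1:]  # P-end ... x ... Q-end
    return K, L

def edges(path):
    """Undirected edge multiset of a path, for checking invariance."""
    return sorted(tuple(sorted(e)) for e in zip(path, path[1:]))

P = ["s", "u", "x", "v", "t"]
Q = ["a", "w", "x", "y", "b"]
K, L = switch(P, Q, "x")
# K = ['s', 'u', 'x', 'w', 'a'], L = ['t', 'v', 'x', 'y', 'b']
```

The end-pairs $(s,t),(a,b)$ are exchanged for $(s,a),(t,b)$ while every edge is reused, which is exactly what the augmenting-sequence arguments below exploit.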
A maximum multiflow $f$ \textit{locks} a set $A\subseteq T$ if it contains a maximum $(A,A^{c})$-flow, that is, if $f[A,A^c]=\lambda(A)$. Otherwise, $f$ \textit{unlocks} $A$. In other words, $f$ locks $A$ if it contains the smallest possible number of $A$-paths. A. Karzanov and M. Lomonosov introduced in \cite{kl78} the following application of the Ford-Fulkerson augmenting path procedure, assuming that a multiflow traverses each edge. A maximum multiflow unlocks $A\in \mathcal{K}$ if and only if it contains an \textit{augmenting sequence} $P_{1},x_{1},P_{2},x_{2},\ldots,x_{n-1},P_{n}$ of paths $P_{1}$ (an $A$-path), $P_{2},\ldots,P_{n-1}$ ($(A,A^{c})$-paths), $P_{n}$ (an $A^{c}$-path) and inner nodes $x_{1},\ldots,x_{n-1}$ such that $x_{i}\in P_{i},P_{i+1}$ for $i\in \{1,\ldots,n-1\}$ and $x_{i}$ is located on $P_{i}$ between $x_{i-1}$ and the $A$-end of $P_{i}$. In this paper, we use the fact that unlocking a member of $\mathcal{K}$ and the existence of an augmenting sequence are equivalent. When $\mathcal{K}$ is a K-clutter, there exists a series of switches of $P_{1},\ldots,P_{n}$ in $x_{1},\ldots,x_{n-1}$ that creates a maximum multiflow $f^{\prime}$ containing a cross and having $\Theta (f^{\prime})\geq \Theta(f)$. If $f$ solves the $W$-problem and unlocks $A\in \mathcal{K}$, switching $P_{1},\ldots,P_{n-1}$ in $x_{1},\ldots,x_{n-2}$ creates a multiflow $f^{\prime}$ with an $A$-path $P_{0}^{\prime}$ and an $A^{c}$-path $P_{1}^{\prime}$ having a common node $x_{n-1}$, such that every switch of $P_{0}^{\prime}$ and $P_{1}^{\prime}$ in $x_{n-1}$ preserves $\Theta(f^{\prime})=\theta$.
Let $P$ and $Q$ be an $A$-path and an $A^{c}$-path of a multiflow $h$ with a common inner node such that $w(P)=w(Q)$ and no switch of $P$ and $Q$ changes $\Theta(h)$. Let us denote the ends of $P$ and $Q$ by $p_{1},p_{2}$ and $q_{1},q_{2}$ respectively. Let w.l.o.g. $(p_{1},p_{2}),(p_{1},q_{1}),(p_{1},q_{2})\in W$, $(p_{2},q_{1}),(p_{2},q_{2}),(q_{1},q_{2})\in S$. A multiflow transformation that replaces $P$ and $Q$ with three $(p_{2},q_{1})$-, $(p_{2},q_{2})$- and $(q_{1},q_{2})$-paths of weight $\frac{w(P)}{2}$ (see Figure \ref{3/2-operation-figure}), is called a $\frac{3}{2}$\textit{-operation}. It preserves $\Theta(h)$ and increases $h[S]$ by $\frac{w(P)}{2}$. \begin{figure}
\caption{The $\frac{3}{2}$-operation.}
\label{3/2-operation-figure}
\end{figure}
\section{\label{WP-section}Fractionality of the $W$-problem} To prove the fractionality conjecture for the $W$-problem, we show the following: \begin{theorem}\label{kc-for-WP}In every K-network $(G,T,\K)$ there exists a
simple $W$-problem solution of the smallest size that is half-integer.\end{theorem} We later use this theorem to prove the fractionality conjecture for the $S$-problem. Let us consider a K-network $(G,T,\K)$ which is a minimal counterexample to Theorem \ref{kc-for-WP}. We assume that $(G,T,\K)$ has \textbf{inner node
degree $4$}, by the known reduction (see, e.g., \cite{frank}),
is \textbf{simple} (since atom compression preserves all $W$-problem solutions)
and is \textbf{minimal}, first in the fractionality $k$ of the smallest size $W$-problem
solution,
and then in $E$ as a set. Then $k=4$, for otherwise we can
duplicate each edge in $E$ and obtain a network with $W$-problem fractionality $\lceil\frac{k}{2}\rceil$. In this section, $f$ denotes \textit{a quarter-integer simple multiflow of the smallest size solving the $W$-problem} in $(G,T,\K)$. For simplicity, we assume that the paths of $f$ have weight $\quarter$. Let us denote \begin{equation}\label{heta}\mbox{\sl $\hat{\eta}$:=the maximum of the $S$-problem among simple multiflows in $(G,T,\K)$.}\end{equation} In the Appendix we prove the max-min theorem for the $W$-problem (Theorem \ref{wp-maxmin-theorem}), which implies that for every K-network $(G,T,\K)$, $2\theta(G,T,\K)\in \mathbb{N}$ and $2\hat{\eta}\in \mathbb{N}$. We use these facts in the proof. \subsection{General flow properties} Here, we study the behavior of $W$-problem solutions inside the members of $\mathcal{K}$. The series of properties below follows directly from the results of Lov\'{a}sz, Cherkassky and Lomonosov described in Section \ref{locking-section}. \begin{claim}\label{at-least-beta}Let $(G,T,\K)$ be a simple K-network, and let $h$ be a simple
multiflow of fractionality $k$ in it such that $h[A]<\beta(A)$ for some $A\in \mathcal{K}$. Then there exists a simple multiflow $h^{\prime}$ of fractionality $k$ having $\Theta(h^{\prime})\geq \Theta(h)+\half(\beta(A)-h[A])$.\end{claim} \textbf{Proof. } Since $h[A,A^{c}]\leq \lambda(A)$ by definition, and $$h[A]=\half(\sum_{t\in
A}h[t,t^{c}]-h[A,A^{c}])<\half(\sum_{t\in A}\lambda(t)-\lambda(A))=\beta(A),$$ $\sum_{t\in A}h[t,t^{c}]<\sum_{t\in A}\lambda(t)$. We modify $h$ by adding paths starting in $t\in A$ until $h[t,t^{c}]=\lambda(t)$ for all $t\in A$. Since we use edges not saturated by $h$, we obtain a simple multiflow of fractionality $k$, denoted $h^{\prime}$. If $W$- or $S$-paths of total weight no less than $\beta(A)-h[A]$ were added, $h^{\prime}$ is the required multiflow. Otherwise, some of these paths are cycles that traverse one terminal from $A$ each.
Let us modify $h^{\prime}$ into a multiflow without cyclic paths traversing terminals from $A$ using the Cherkassky procedure, and denote the resulting multiflow by $h^{\prime\prime}$. If $\Theta(h^{\prime\prime})\geq \Theta(h)+\half(\beta(A)-h[A])$, we are done. Otherwise, we have $\sum_{t\in A}h^{\prime\prime}[t,t^{c}]=\sum_{t\in A}\lambda(t)$ and $h^{\prime\prime}[A]<\beta(A)$, thus $h^{\prime\prime}[A,A^{c}]>\lambda(A)$ - a contradiction. \qed \begin{corollary}\label{smallest-solution-locks-K}Let $(G,T,\K)$ be a simple
K-network, and let $h$ be a simple
multiflow of the smallest size solving the $W$-problem in $(G,T,\K)$. Then $h$ locks $\mathcal{K}$.\end{corollary} \textbf{Proof. } By Claim \ref{at-least-beta}, $h[A]\geq \beta(A)$ for all $A\in \mathcal{K}$. If $h$ unlocks some $A\in \mathcal{K}$, i.e. has $h[A]>\beta(A)$, $h$ contains an augmenting sequence for $A$. Switching paths of this sequence creates a simple multiflow $h^{\prime}$ that has the same size as $h$, solves the $W$-problem and
allows us to perform a $\frac{3}{2}$-operation, which preserves $\Theta(h^{\prime})$ but decreases the size of $h^{\prime}$ - a contradiction. \qed
\subsection{Proof of the weak fractionality theorem}
Let us denote by $(G\p,T\p,\K \p)$ a network obtained from $(G,T,\K)$ by split-offs in one or more inner nodes. We denote the $W$-problem maximum in $(G\p,T\p,\K \p)$ by $\theta^{\prime}$, and let $A^{\prime}$ and $t^{\prime}$ denote a clutter member and a terminal corresponding to some $A\in \mathcal{K}$ and $t\in T$. We let $g$ denote a simple half-integer $W$-problem solution of the smallest size in $(G\p,T\p,\K \p)$. $g$ exists because $(G,T,\K)$ is minimal in $E$. Let us denote the value of \eqref{heta} in $(G\p,T\p,\K \p)$ by $\hat{\eta}^{\prime}$. Note that \begin{equation}\label{smaller-eta}\hat{\eta}^{\prime}\leq \hat{\eta},\end{equation} because by Theorem 1 from \cite{va07a} $f$ solves the $S$-problem in a network obtained from $(G,T,\K)$ by splitting every terminal $t$ into $d(t)$ equivalent terminals of degree $1$.
For this type of networks we prove the following series of claims. \begin{claim}\label{beta-decreases-or-same}Let $\theta^{\prime}=\theta-\half$ and
$\hat{\eta}-\hat{\eta}^{\prime}\leq 1$. Then $\sum_{A^{\prime}\in \mathcal{K}^{\prime}}\beta(A^{\prime})\leq \sum_{A\in \mathcal{K}}\beta(A)$.\end{claim} \textbf{Proof. } Let us assume that $\sum_{A^{\prime}\in \mathcal{K}^{\prime}}\beta(A^{\prime})>\sum_{A\in \mathcal{K}}\beta(A)$. As all $\beta(A)$ and $\beta(A^{\prime})$ are integers by definition, we have $$\theta-\theta^{\prime}=\half=\hat{\eta}-\hat{\eta}^{\prime}+(\sum_{A\in \mathcal{K}}\beta(A)-\sum_{A^{\prime}\in
\mathcal{K}^{\prime}}\beta(A^{\prime})),$$ thus $$1\geq \hat{\eta}-\hat{\eta}^{\prime}=\half+\sum_{A^{\prime}\in \mathcal{K}^{\prime}}\beta(A^{\prime})-\sum_{A\in
\mathcal{K}}\beta(A)>1,$$ a contradiction. \qed \begin{corollary}\label{beta-increases-or-same}Let $\theta^{\prime}=\theta-\half$ and
$\hat{\eta}-\hat{\eta}^{\prime}\leq 1$. Then for all $A\in \mathcal{K}$, $\beta(A^{\prime})\geq
\beta(A)$.\end{corollary} \textbf{Proof. } Let $\beta(A^{\prime})<\beta(A)$. Then by Claim \ref{at-least-beta}, $g$ can be completed to a half-integer simple flow $g^{\prime}$ in $(G,T,\K)$
with $\Theta(g^{\prime})=\theta$. Since $|g|=\hat{\eta}^{\prime}+\sum_{A^{\prime}\in \mathcal{K}^{\prime}}\beta(A^{\prime})<|f|$ by Claim \ref{beta-decreases-or-same} and \eqref{smaller-eta}, we have
$|g^{\prime}|\leq |f|$ - a contradiction. \qed \begin{corollary}\label{same-beta-eta-minus-half}Let $\theta^{\prime}=\theta-\half$ and
$\hat{\eta}-\hat{\eta}^{\prime}\leq 1$. Then for all $A\in \mathcal{K}$,
$\beta(A^{\prime})=\beta(A)$ and $\hat{\eta}-\hat{\eta}^{\prime}=\half$.\end{corollary} \textbf{Proof. } Follows from Claim \ref{beta-decreases-or-same} and Corollary \ref{beta-increases-or-same}. \qed
\begin{claim}\label{theta-decreases}$\theta^{\prime}\neq \theta$.\end{claim} \textbf{Proof. } Let us assume the contrary. Then for all $A\in \mathcal{K}$, $\beta(A^{\prime})\geq \beta(A)$, for otherwise by Claim \ref{at-least-beta}, in $(G,T,\K)$ $g$ can be modified into a multiflow $g^{\prime}$ with $\Theta(g^{\prime})>\theta$ - a contradiction. If $\sum_{A^{\prime}\in \mathcal{K}^{\prime}}\beta(A^{\prime})>\sum_{A\in \mathcal{K}}\beta(A)$, we have $$\theta-\theta^{\prime}=0=\hat{\eta}-\hat{\eta}^{\prime}+(\sum_{A^{\prime}\in \mathcal{K}^{\prime}}\beta(A^{\prime})-\sum_{A\in
\mathcal{K}}\beta(A))>1,$$
a contradiction because $\hat{\eta}>\hat{\eta}^{\prime}$ (otherwise, $g$ is the solution we seek). Then $g[W]=f[W]=\sum_{A\in \mathcal{K}}\beta(A)$ and $\Theta(g)=\Theta(f)$, resulting in $|g|=|f|$ - a contradiction. \qed
Let us call two paths traversing the same inner node $x$ \textit{opposite in} $x$ if they do not traverse the same edge incident to $x$. \begin{claim}\label{good-split}Let $x\in N\setminus
T$. Then there exists a
split of $x$ that decreases
$\theta$ by no more than $\half$.\end{claim} \textbf{Proof. } Let us assume the contrary. Let the number of paths of $f$ destroyed by a split of $x$
be $n$. Then the split decreases
$\Theta(f)$ by at least $1$ by Corollary \ref{theta-half-integer},
thus $8\geq n\geq 4$. Clearly, $n\neq 7,8$
for otherwise $x$ admits an $f$-split (see Figure \ref{good-split-figure}(a)).
\begin{figure}
\caption{Possible switches of $f$ in an inner node.}
\label{good-split-figure}
\end{figure}
Likewise, if $n\in \{5,6\}$, then the switch opposite to the chosen one destroys
no more than two paths of $f$ (see Figure \ref{good-split-figure}(b)) - a contradiction.
Therefore, $n=4$, and the paths destroyed by a split
contribute no more than $1$ to $\Theta(f)$.
By our assumption, the split decreases $\Theta(f)$ by $1$, and these paths
are $S$-paths of $f$ with two common ends.
By our assumption, two of these paths cannot be
switched so as to comply with the remaining paths traversing $x$.
If these two paths are opposite,
we switch one pair so as to comply with the other, and there are
two options to do so (see Figure \ref{good-split-figure}(c)). The opposite switch affects
the other $4$ paths of $f$ traversing $x$ and, like above, those paths can traverse $x$
in two different ways. We then select a common switch and obtain
a new multiflow $f^{\prime}$ that is a common solution in $(G,T,\K)$ and admits
an $f^{\prime}$-split in $x$ - a contradiction.
If the paths in question are not opposite (see Figure \ref{good-split-figure}(d)),
all the paths of $f$ traversing $x$ end in two terminals. Then there
exists a switch of paths of $f$ in $x$ allowing an $f$-split - a
contradiction. \qed
We can now finish the proof of the fractionality theorem for the $W$-problem. \begin{atheorem}{\ref{kc-for-WP}}Let $(G,T,\K)$ be a K-network. Then in $(G,T,\K)$ there
exists a simple half-integer $W$-problem solution of the smallest size.\end{atheorem} \textbf{Proof. } Let $(G\p,T\p,\K \p)$ be the network with $\theta^{\prime}=\theta-\half$ and $\hat{\eta}-\hat{\eta}^{\prime}\leq 1$, obtained from $(G,T,\K)$ by the maximum number of split-offs in inner nodes. At least one such network exists because of Claim \ref{good-split}. By Claim \ref{theta-decreases} and Corollary \ref{same-beta-eta-minus-half}, $\beta(A^{\prime})=\beta(A)$ for all $A\in \mathcal{K}$. Then $\hat{\eta}-\hat{\eta}^{\prime}=\half$.
Let $g$ denote a simple $W$-problem solution of the smallest size in $(G,T,\K)$.
Since $|g|=|f|-\half$, $g$ is not maximum and we can add a half-integer zero path $P$ to $g$ with an end in $t\in A$. We select $g$ so that $P$ is the longest w.r.t. number of edges. Let $P$ traverse edge $(t,x)$. Then a path $Q\in g$ opposite to $P$ in $x$ has no end in $t$ (otherwise, switching $P$ and $Q$ prolongs $P$). \begin{figure}
\caption{$\theta$-preserving split of an inner node.}
\label{zero-cycle-figure}
\end{figure}
Switching $P$ and $Q$ in $x$ cannot increase $g[S]$, for then the resulting half-integer flow $g^{\prime}$ has
$\Theta(g^{\prime})=\theta$ and $|g^{\prime}|\leq |f|$. Likewise, switching $P$ and $Q$ so as to allow a $g$-split in $x$ cannot increase $\Theta(g)$, for otherwise we obtain a network $(G\pp,T\pp,\K \pp)$ with $\theta^{\prime\prime}\geq \theta-\frac{1}{4}$ - a contradiction to Claim \ref{theta-decreases}. Therefore, $Q$ is a $t^{c}$-path and an $S$-path. Switching $P$ and $Q$ in $x$ so as to allow a $g$-split of $x$ produces two $W$-paths (see Figure \ref{zero-cycle-figure}). We switch $P$ and $Q$ in this way, obtain a new multiflow $g^{\prime\prime}$ and a network denoted $(G\pp,T\pp,\K \pp)$. Then $\theta^{\prime\prime}=\theta-\half$ and $\hat{\eta}^{\prime\prime}\geq \hat{\eta}^{\prime}-\half=\hat{\eta}-1$ while $(G\pp,T\pp,\K \pp)$ contains fewer inner nodes than $(G\p,T\p,\K \p)$, contrary to our choice. \qed
\section{Fractionality of the $S$-problem } We use Theorem \ref{kc-for-WP} to show that the fractionality conjecture for the $S$-problem
holds. Let us select a K-network $(G,T,\K)$ which is a counterexample to the conjecture,
$$\mbox{\sl minimal in fractionality $k$ and $\alpha:=\frac{\sum_{t\in T}|N(t)|}{|T|}$.}$$ As in Section \ref{WP-section}, we can assume that $k=4$.
\begin{claim}\label{alpha=1}$\alpha=1$\end{claim} \textbf{Proof. } Let us assume the contrary and select
$t\in T$ with $|N(t)|\geq 2$. Let $g$ be a quarter-integer common solution to the $W$- and $S$-problems in $(G,T,\K)$. Let us suppose first that no path of $g$ has an end in $t$. We turn $t$ into an inner node, adding a new terminal $t^{\prime}\sim t$ and an edge $(t,t^{\prime})$ if $d(t)$ is odd. In the resulting network $(G\p,T\p,\K \p)$, $\eta^{\prime}:=\eta(G\p,T\p,\K \p)=\eta$ because the reverse operation does not decrease $\eta^{\prime}$. Let us suppose now that $g$ contains paths with an end in $t$. Let $w_{g}(t)$ denote the total weight of $g$'s paths beginning in $t$. Then $w_{g}(t)\leq \frac{3}{4} d(t)$, for otherwise there exists an edge $(t,x)$ traversed by four paths of weight $\quarter$ with an end in $t$. We replace $(t,x)$ with a new edge $(t^{\prime},x)$, where $t^{\prime}\sim t$ is a new terminal, and turn $t$ into an inner node. We also add enough $(t,t^{\prime})$-edges to allow the paths of $g$ with an end in $t$ to end in $t^{\prime}$ instead and the degree of $t$ to be even. In the resulting network $(G\p,T\p,\K \p)$, $\alpha^{\prime}<\alpha$ and $\eta^{\prime}=\eta$ because the reverse operation does not decrease $\eta^{\prime}$. \qed
\begin{theorem}\label{kc-for-SP}Every K-network $(G,T,\K)$ admits a
half-integer least-size $W$-problem solution $f$ that also solves the $S$-problem.\end{theorem} \textbf{Proof. } Let $(G,T,\K)$ be a K-network. By Claim \ref{alpha=1}, we can transform $(G,T,\K)$ into a K-network $(G\p,T\p,\K \p)$ with $\alpha=1$, $\eta^{\prime}=\eta$ and $\theta^{\prime}=\theta$. Moreover, every $S$-problem or $W$-problem solution in $(G\p,T\p,\K \p)$ remains such in $(G,T,\K)$ after the reverse transformation. By Theorem \ref{kc-for-WP}, $(G\p,T\p,\K \p)$ admits a simple half-integer $W$-problem solution of the smallest size, denoted $f^{\prime}$. By Theorem 1 of \cite{va07a}, $f^{\prime}$ solves the $S$-problem in $(G\p,T\p,\K \p)$. Then the multiflow $f$ in $(G,T,\K)$, obtained from $f^{\prime}$, solves both $W$- and $S$-problems. \qed \begin{corollary}In a general, not necessarily Eulerian, network $(G,T)$ where the anticlique clutter of $(T,S)$ is a K-clutter, both the $W$-problem and the $S$-problem have fractionality $4$. \qed\end{corollary}
\section{Acknowledgments} The author expresses her deepest gratitude to Prof. Eyal S. Shimony for the help with this manuscript and the Lynn and William Fraenkel Center for Computer Science for partially supporting this work.
\section{Appendix: combinatorial max-min for the $W$-problem} Let $\mathcal{E}=\{\alpha,\beta,...\}$ be a partition of $T$ such that for each $\alpha\in \mathcal{E}$ any $t^{\prime},t^{\prime\prime}\in \alpha$ are equivalent (an \textit{equi-partition}). We call $\mathcal{X}=(X_{\alpha}:\: \alpha\in \mathcal{E})$ an \textit{expansion} if $X_{\alpha}\cap T=\alpha$, $\alpha\in \mathcal{E}$. Taking members of $\mathcal{X}$ as terminals and an induced clutter, we obtain a new network with a graph $G_{\mathcal{X}}$, terminals $\mathcal{X}$ and a clutter $\mathcal{K}_{\mathcal{X}}$ on $\mathcal{X}$ ($\mathcal{K}_{\mathcal{X}}$ is a K-clutter if $\mathcal{K}$ is a K-clutter). For $X_{\alpha},X_{\beta}\in \mathcal{X}$, we call $(X_{\alpha},X_{\beta})$ \textit{strong} or \textit{weak} if for every $s\in \alpha$ and $t\in \beta$, $(s,t)\in S$ or $(s,t)\in W$ respectively. Likewise, $X_{\alpha}\sim X_{\beta}$ if for every pair of terminals $s\in \alpha$ and $t\in \beta$, $s\sim t$. An $\mathcal{X}$\textit{-path} in $G$ is an $(x,y)$-path with $x,y$ lying in distinct members of $\mathcal{X}$. An $\mathcal{X}$\textit{-flow} is a flow in the network $(\Gx,\x,\kx)$ consisting of $\mathcal{X}$-paths. The $S$-problem and the $W$-problem in $(G_{\mathcal{X}},\mathcal{X},\mathcal{K}_{\mathcal{X}})$ are defined in the same way as for $(G,T,\K)$, and their maxima are denoted by $\eta_{\x}$ and $\theta_{\x}$ respectively.
We define a \textit{partial order} on expansions as follows. Let $\mathcal{E}$ and $\mathcal{F}$ be equi-partitions of $T$ and let $\mathcal{X}=(X_{\alpha}:\: \alpha\in \mathcal{E})$ and $\mathcal{Y}=(Y_{\alpha}:\: \alpha\in \mathcal{F})$ be expansions. Then $\mathcal{X}\preceq \mathcal{Y}$ if for every $X\in \mathcal{X}$ there exists $Y\in \mathcal{Y}$ so that $X\subset Y$. Note that for every $\mathcal{X}\preceq \mathcal{Y}$, every $\mathcal{X}$-flow is also a $\mathcal{Y}$-flow (though the converse may fail), hence $\theta_{\y}\geq \theta_{\mathcal{X}}$. Since a $T$-flow is also an $\mathcal{X}$-flow, $\theta_{\mathcal{X}}\geq \theta$. $\mathcal{X}$ is called \textit{critical} if $\theta_{\y}>\theta_{\x}$ for every $\mathcal{Y}\succ \mathcal{X}$. A critical $\mathcal{X}$ with $\theta_{\mathcal{X}}=\theta$ is called a \textit{dual solution}. The triangle theorem (\cite{l85}) ensures that: \begin{equation}\label{max-x-solution}\mbox{\textsl{there exists a maximum $\mathcal{X}$-flow $h$ such that} $\Theta_{\mathcal{X}}(h)=\theta_{\mathcal{X}}$.}\end{equation} We limit ourselves to networks $(G,T,\K)$ with simple $\mathcal{K}$. The results of this section that hold for simple clutters hold for general networks as well, because compressing a non-trivial atom into one terminal does not change $\theta$ by the triangle theorem from \cite{l85} and metric properties of a K-clutter. For a K-network with simple $\mathcal{K}$, every subset in an expansion $\mathcal{X}$ contains exactly one terminal; $X_{t}$ denotes a member of $\mathcal{X}$ containing $t\in T$. Then \eqref{max-x-solution} implies that for a maximum $\mathcal{X}$-flow $h$ (even when $\mathcal{X}=T$):
\begin{equation}\label{theta-with-size}\Theta_{\mathcal{X}}(h)=|h| - \half
h[W].\end{equation} We aim to prove the following max-min theorem for the fractional $W$-problem. \begin{theorem}\label{wp-maxmin-theorem}
In a K-network $(G,T,\K)$: \begin{equation}\label{wp-maxmin-equation2}\mbox{\sl $\mathrm{max}_{f} \Theta(f)=\mathrm{min}_{\mathcal{X}}( \frac{1}{2}\sum _{t\in
T}d(X_{t})-\frac{1}{2}\sum _{A\in \mathcal{K}_{\mathcal{X}}}\beta(A))$.}\end{equation} The maximum is taken over the fractional multiflows in $(G,T,\K)$, and the minimum is taken over all expansions in $(G,T,\K)$. Moreover, \eqref{wp-maxmin-equation2} holds as equality for every dual solution $\mathcal{X}$.\end{theorem} To prove this theorem, we state the following chain of inequalities for an expansion $\mathcal{X}$ and a $T$-flow $f$: \begin{equation}\label{max-min}\Theta (f)\stackrel{(a)}{\leq} \theta \stackrel{(b)}{\leq} \Theta _{\mathcal{X}}(h) \stackrel{(c)}{\leq} \frac{1}{2}\sum _{t\in T}d(X_{t})-\frac{1}{2}\sum _{A\in \mathcal{K}_{\mathcal{X}}}\beta(A). \end{equation} We aim to show that \eqref{max-min} holds as inequality for every expansion and as equality for every critical expansion. \eqref{max-min}(a) follows directly from the definition of $\theta $. \eqref{max-min}(b) holds because $f$ is also an $\mathcal{X}$-flow. \eqref{max-min}(c) holds because there exists a maximum $\mathcal{X}$-flow $h$ that solves the $W$-problem in $\mathcal{X}$. For such $h$ the minimum of $\sum _{A\in \mathcal{K}_{\mathcal{X}}}h[A]$ is achieved when all $A\in \mathcal{K}_{\mathcal{X}}$ are locked by $h$, i.e. $\sum _{A\in \mathcal{K}_{\mathcal{X}}}h[A]\leq \sum _{A\in \mathcal{K}_{\mathcal{X}}}\beta(A)$
and $|h| =\frac{1}{2}\sum _{t\in T}\lambda (X_{t})$ by the Lov\'{a}sz--Cherkassky theorem (\cite{lo76,ch77}). We need the following two claims to show that \eqref{max-min}(c) is an equality. \begin{claim}\label{saturation}Let $(G,T,\K)$ be a simple K-network, and
let $\mathcal{X}$ be a dual solution in it.
A maximum fractional $\mathcal{X}$-flow $h$ that satisfies $\Theta _{\mathcal{X}}(h)=\theta_{\x}$ (that is, solves the $W$-problem in $(\Gx,\x,\kx)$) locks $X_{t}$ for all $t\in T$.\end{claim} \textbf{Proof. } First, let us show that $h$ saturates every $(X_{t},\overline{X_{t}})$-edge. Let $e$ be an $(x,y)$-edge with $x\in X_{t}$ and $y\in \overline{X_{t}}$. Let $\mathcal{Y}\succ \mathcal{X}$ be an expansion where $Y_{s}=X_{s}$ for terminal $s\neq t$ and $Y_{t}=X_{t}\cup \{y\}$. Since $\mathcal{X}$ is critical, $\theta_{\y}>\theta_{\x}$ and there exists a $\mathcal{Y}$-flow $g$ such that $\Theta _{\mathcal{Y}}(g)>\theta_{\x}$. Let us denote the unused capacity of $e$ by $\varepsilon $ and let $\delta =g[y,\cup _{s\neq t}X_{s}]$.
Clearly, $\varepsilon <\delta $. We turn $g$ into an $\mathcal{X}$-flow by prolonging each of its paths starting at $y$ through the edge $e$ so that it starts at $x$ instead. Let $g^{\prime}$ be the function on $\mathcal{X}$-paths thus obtained; $g^{\prime}$ does not satisfy the capacity constraint on $(x,y)$. Then there exists $0<\alpha<1$ such that $h^{\prime}=(1-\alpha )h+\alpha g^{\prime}$ is an $\mathcal{X}$-flow.
$h^{\prime}$ satisfies all capacity constraints and has $\Theta _{\mathcal{X}}(h^{\prime})\geq (1-\alpha )\Theta _{\mathcal{X}}(h)+\alpha \Theta _{\mathcal{Y}}(g)>\theta_{\x}$, contradicting the definition of $\mathcal{X}$.
Let us assume now that a $(p,q)$-path $P$ of $h$, $p\in X_{t}$, contains two $(X_{t},\overline{X_{t}})$-edges, $e_{1}=(x_{1},y_{1})$ and $e=(x_{2},y_{2})$ where $x_{1},x_{2}\in X_{t},y_{1},y_{2}\in \overline{X_{t}}$ and $y_{1},x_{1},x_{2},y_{2}$ appear on $P$ in this order. Then by replacing $P$ with $x_{2}Pq$ we obtain an $\mathcal{X}$-flow $g$ for which $\Theta _{\mathcal{X}}(g)=\theta_{\x}$ and the edge $(x_{1},y_{1})$ is not saturated by $g$, a contradiction. \qed
\begin{claim}\label{locking-clutter} Let $(G,T,\K)$ be a simple K-network, and
let $\mathcal{X}$ be a dual solution. A maximum fractional $\mathcal{X}$-flow $h$ satisfies $\Theta _{\mathcal{X}}(h)=\theta_{\x}$ if and only if every $A\in \mathcal{K}_{\mathcal{X}}$ is locked by $h$.\end{claim} \textbf{Proof. } For the ``if'' direction, let $h$ be a maximum $\mathcal{X}$-flow with $\Theta _{\mathcal{X}}(h)=\theta_{\x}$ that locks every member of $\mathcal{K}_{\mathcal{X}}$. Because of Claim \ref{saturation} and the simplicity of $\mathcal{K}_{\mathcal{X}}$, we get $\Theta(h)=\frac{1}{2}\sum _{X\in \mathcal{X}}d(X)-\frac{1}{2}\sum _{A\in
\mathcal{K}_{\mathcal{X}}}\beta _{A}$ and thus $\Theta(h)\geq \theta_{\x}$ by \eqref{max-min}(c).
For the ``only if'' direction, assume that $h$ is a maximum $\mathcal{X}$-flow that has $\Theta_{\mathcal{X}}(h)=\theta_{\x}$ and unlocks $A\in \mathcal{K}_{\mathcal{X}}$. Let $A^{c}$ in the context of $\mathcal{K}_{\mathcal{X}}$ denote the members of $\mathcal{X}$ that do not lie in $A$. Then $h$ contains an augmenting sequence $P_{0},x_{0},...,x_{m-1},P_{m}$, where $P_{0}$ is an $A$-path, $P_{m}$ is an $A^{c}$-path, and each one of $P_{1},...,P_{m-1}$ is an $(A,A^{c})$-path. We can choose $h$ so that $m=1$. Let $P_{0}$ and $P_{1}$ be $(s^{\prime},t^{\prime})$- and $(q^{\prime},r^{\prime})$-paths with weights $\alpha $ and $\beta $ respectively where $s^{\prime}\in X_{s}$, $t^{\prime}\in X_{t}$, $q^{\prime}\in X_{q}$ and $r^{\prime}\in X_{r}$. Since a switch of $P_{0}$ and $P_{1}$ in $x_{0}$ cannot increase $\Theta(h)$, we can assume that w.l.o.g. $(X_{q},X_{r})$, $(X_{t},X_{r})$ and $(X_{t},X_{q})$ are $S$-pairs while $(X_{s},X_{q})$ and $(X_{s},X_{r})$ are $W$-pairs by the simplicity of $\mathcal{K}_{\mathcal{X}}$.
We construct a new flow $f$ from $h$ by replacing $P_{0}$ and $P_{1}$ with $(t^{\prime},r^{\prime})$, $(t^{\prime},q^{\prime})$, $(q^{\prime},r^{\prime})$ and $(s^{\prime},t^{\prime})$-paths of weights $\frac{\varepsilon }{2}$, $\frac{\varepsilon }{2}$, $\beta -\frac{\varepsilon }{2}$
and $\alpha -\varepsilon $ respectively (this is the $\frac{3}{2}$\textit{-operation}, see Figure \ref{3/2-operation}). It follows that $|f| =|h| -\frac{\varepsilon }{2}$ and $f[W]= h[W]-\varepsilon$ since $(X_{q},X_{t}),(X_{q},X_{r}),(X_{r},X_{t})\in S$
and $\Theta _{\mathcal{X}}(f)=\Theta _{\mathcal{X}}(h)$. \begin{figure}
\caption{The fractional $\frac{3}{2}$-operation.}
\label{3/2-operation}
\end{figure}
The subpath $s^{\prime} P_{0}x_{0}$ does not have common nodes with any other
$\mathcal{X}$-path $Q$ whose ends do not lie in $X_{s}\cup X_{t}$. Indeed, if it did, then the above $\frac{3}{2}$-operation could be applied to both $P_{0},P_{1}$ and $P_{0},Q$, creating a flow $f^{\prime}$ with $|f^{\prime}| =|h| -\frac{\varepsilon}{2}$ and $f^{\prime}[W]=h[W]-2\varepsilon$, which contradicts the maximality of $\Theta _{\mathcal{X}}(h)$. Therefore, there exists an edge $(s^{\prime},x)$ of $s^{\prime}P_{0}x_{0}$ which is not saturated by $f$ - a contradiction to Claim \ref{saturation}. \qed
Theorem \ref{wp-maxmin-theorem} follows from Claims \ref{saturation} and \ref{locking-clutter}. \begin{corollary}\label{theta-half-integer}$2\theta(G,T,\K)\in
\mathbb{N}.$\end{corollary} \textbf{Proof. } Let $\mathcal{X}$ be an expansion that achieves equality in Theorem \ref{wp-maxmin-theorem} for $(G,T,\K)$. Then $\theta(G,T,\K)=\half\sum_{X\in \mathcal{X}}d(X)-\half\sum_{A\in \mathcal{K}_{\mathcal{X}}}\beta(A)$, while $\sum_{X\in \mathcal{X}}d(X)$ is always even in an Eulerian network and every $\beta(A)$ is an integer by definition. Thus, a split of an inner node
in $(G,T,\K)$ decreases $\theta$ by $\frac{k}{2}$, $k\in
\mathbb{N}\cup \{0\}$. \qed \begin{corollary}\label{eta-half-integer}
Let $(G,T,\K)$ be a simple K-network and let $h$ be a simple $W$-problem solution in $(G,T,\K)$
with $\sum_{A\in\mathcal{K}}h[A]=\sum_{A\in \mathcal{K}}\beta(A)$.
Then $2h[S]\in \mathbb{N}$.\end{corollary} \textbf{Proof. } $2h[S]$ is an integer because $\theta=h[S]+\half h[W]=h[S]+\half \sum_{A\in
\mathcal{K}}\beta(A)$ and $\theta$ is half-integer by Corollary \ref{theta-half-integer}. \qed
\begin{center}\large \bf Keywords\end{center}
Path packing, multiflow, fractionality
\begin{center}\large \bf Contact author\end{center}
Natalia Vanetik
Department of Computer Science, Ben-Gurion University, Israel.
\textbf{E-mail address}: \textit{orlovn@cs.bgu.ac.il}
\textbf{Fax}: +972-8-6477650
\textbf{Phone}: +972-8-6477866
\textbf{Address}:
\hspace{2cm}N. Vanetik
\hspace{2cm}Department of Computer Science
\hspace{2cm}Ben Gurion University of the Negev
\hspace{2cm}P.O.B 653 Be'er Sheva 84105
\hspace{2cm}Israel
\begin{center}\large \bf Footnotes\end{center}
\textbf{Author affiliation:}
N. Vanetik, Department of Computer Science, Ben-Gurion University, Israel, \textit{orlovn@cs.bgu.ac.il}
\end{document}
\begin{document}
\title{New Examples of Compact Manifolds with Holonomy $\mathrm{Spin}(7)$} \author{Robert Clancy} \email{clancy@maths.ox.ac.uk}
\begin{abstract}
We find new examples of compact $\mathrm{Spin}(7)$-manifolds using a construction of Joyce \cite{Joyce:1999fk,joyce2000compact}.
The essential ingredient in Joyce's construction is a Calabi--Yau 4-orbifold with particular singularities admitting an antiholomorphic involution whose fixed points are exactly those singularities.
We search the class of well-formed quasismooth hypersurfaces in weighted projective spaces for suitable Calabi--Yau 4-orbifolds.
We find that different hypersurfaces within the same family of Calabi--Yau 4-orbifolds may result in different $\mathrm{Spin}(7)$-manifolds. \end{abstract}
\maketitle \section{Introduction}
The holonomy group of a connected Riemannian manifold is the group of parallel transport maps around piecewise smooth loops based at a point. Berger \cite{Berger:1955fk} classified the possible holonomy groups of irreducible, nonsymmetric Riemannian metrics on simply-connected manifolds in 1955.
\begin{thm}[Berger] Suppose $M$ is a simply-connected manifold and $g$ is a Riemannian metric on $M$, which is irreducible and nonsymmetric. Then one of the following cases holds: \begin{itemize} \item[(i)] $\Hol(g)=\mathrm{SO}(n)$, \item[(ii)] $n=2m$ with $m\geq 2$, and $\Hol(g)=\mathrm{U}(m)$ in $\mathrm{SO}(2m)$, \item[(iii)] $n=2m$ with $m\geq 2$, and $\Hol(g)=\mathrm{SU}(m)$ in $\mathrm{SO}(2m)$, \item[(iv)] $n=4m$ with $m\geq2$, and $\Hol(g)=\mathrm{Sp}(m)$ in $\mathrm{SO}(4m)$, \item[(v)] $n=4m$ with $m\geq 2$, and $\Hol(g)=\mathrm{Sp}(m)\mathrm{Sp}(1)$ in $\mathrm{SO}(4m)$, \item[(vi)] $n=7$ and $\Hol(g)= G_2$ in $\mathrm{SO}(7)$, or \item[(vii)] $n=8$ and $\Hol(g)=\mathrm{Spin}(7)$ in $\mathrm{SO}(8)$. \end{itemize} \end{thm}
The question of whether there existed manifolds with holonomy group $G_2$ or $\mathrm{Spin}(7)$ would not be resolved for more than 30 years. Bryant \cite{Bryant:1987uq} in 1987 used the theory of exterior differential systems to show the existence of many metrics with holonomy $G_2$ and $\mathrm{Spin}(7)$ on small balls in $\mathbb{R}^7$ and $\mathbb{R}^8$, respectively. Then Bryant and Salamon \cite{Bryant:1989kx} constructed examples of complete metrics with holonomy $G_2$ and $\mathrm{Spin}(7)$ on non-compact manifolds, which were vector bundles over manifolds of dimensions 3 and 4. In 1994--5 Joyce \cite{Joyce:1996gt,Joyce:1996st} constructed examples of compact manifolds with holonomy $G_2$ and $\mathrm{Spin}(7)$ by resolving quotients of tori by finite groups.
Joyce \cite{Joyce:1999fk} gives a second construction of manifolds with holonomy $\mathrm{Spin}(7)$ whose basic ingredient is a Calabi--Yau 4-orbifold with an antiholomorphic involution. The exact conditions on the Calabi--Yau 4-orbifold are stated in Condition \ref{cond:Y}.
In this paper we will find all examples of suitable Calabi--Yau 4-orbifolds arising as well-formed quasismooth hypersurfaces in weighted projective spaces. We will then find the Betti numbers of the $\mathrm{Spin}(7)$-manifolds that result from the construction given in \cite{Joyce:1999fk}.
\section {Review of $\mathrm{Spin}(7)$ geometry} The material of this section is entirely from \cite{joyce2000compact}. We will recall the basic definitions and properties of Riemannian holonomy groups and then discuss the group $\mathrm{Spin}(7)$. In this section $M$ will denote a connected manifold.
\begin{defn} Let $E$ be a vector bundle over $M$, and $\nabla^E$ a connection on $E$. Let $p\in M$ be a point. We say $\gamma$ is a \emph{loop based at $p$} if $\gamma:[0,1]\rightarrow M$ is a piecewise-smooth curve with $\gamma(0)=\gamma(1)=p$. If $\gamma$ is a loop based at $p$, then the parallel transport map $P_\gamma:E_p\rightarrow E_p$ is an invertible linear map. Define the \emph{holonomy group $\Hol_p(\nabla^E)$ of $\nabla^E$ based at $p$} to be \[ \Hol_p(\nabla^E)=\{P_\gamma:\text{ $\gamma$ is a loop based at $p$}\}\subset \GL(E_p). \] \end{defn}
Since $M$ is connected, $\Hol_p(\nabla^E)$ and $\Hol_q(\nabla^E)$ are conjugate as subgroups of $\GL(k,\mathbb{R})$, if $k$ is the rank of $E$ and we have chosen identifications $E_p\simeq \mathbb{R}^k\simeq E_q$. We write $\Hol(\nabla^E)$ to mean this conjugacy class of subgroups of $\GL(k,\mathbb{R})$. The following proposition is a very useful property of holonomy groups.
\begin{prop}
\label{prop:ParallelSectionsActionOfHolonomy} Let $M$ be a manifold, $E$ a vector bundle over $M$, and $\nabla^E$ a connection on $E$. Let $p\in M$ be a point. Then the parallel sections of $E$ are in one-to-one correspondence with the fixed points of the action of $\Hol_p(\nabla^E)$ on $E_p$. \end{prop}
If $(M,g)$ is a Riemannian manifold we define the holonomy group of $g$, $\Hol(g)$, to be the holonomy group of the Levi-Civita connection of $(M,g)$.
Note that the holonomy group of a Riemannian manifold comes equipped with a representation on the fibres of the tangent bundle. Therefore when we say that a manifold has holonomy $\mathrm{Spin}(7)$ we must also say what representation of $\mathrm{Spin}(7)$ we are considering.
$\mathrm{Spin}(7)$ can be defined as the simply-connected double cover of $\mathrm{SO}(7)$. We will, however, define it as the stabiliser of a certain 4-form on $\mathbb{R}^8$; this determines an embedding of $\mathrm{Spin}(7)$ in $\GL(8,\mathbb{R})$ and hence the irreducible 8-dimensional representation to which Berger's theorem refers.
\begin{defn} Let $\mathbb{R}^8$ have coordinates $(x_1,\dotsc, x_8)$. Let $\mathrm{d} x_{ijkl}$ denote the 4-form $\mathrm{d} x_i\wedge \mathrm{d} x_j\wedge \mathrm{d} x_k\wedge \mathrm{d} x_l$. We define the \emph{Cayley form}, $\Omega_0$, by \begin{equation*} \begin{split} \Omega_0&=\mathrm{d} x_{1234}+\mathrm{d} x_{1256}+\mathrm{d} x_{1278}+\mathrm{d} x_{1357}-\mathrm{d} x_{1368}-\mathrm{d} x_{1458}-\mathrm{d} x_{1467} \\ &+\mathrm{d} x_{5678}+\mathrm{d} x_{3478}+\mathrm{d} x_{3456}+\mathrm{d} x_{2468}-\mathrm{d} x_{2457}-\mathrm{d} x_{2367}-\mathrm{d} x_{2358} \end{split} \end{equation*}
$\mathrm{Spin}(7)$ is the subgroup of $\GL(8,\mathbb{R})$ preserving $\Omega_0$. \end{defn}
The 4-form above can be motivated by the structure of the octonions. The relationship between the octonions and the Cayley form can be found in, for example, \cite[Section IV.1.C]{Harvey:1982fk}. It should be noted that the Cayley form given above differs from that in \cite{Harvey:1982fk} by an orientation-preserving permutation of the coordinates and an overall change in sign.
Since we have defined $\mathrm{Spin}(7)$ as the stabiliser group of the Cayley form, Proposition \ref{prop:ParallelSectionsActionOfHolonomy} implies that if $(M,g)$ is an oriented Riemannian $8$-manifold with $\Hol(g)\subseteq \mathrm{Spin}(7)$, then $M$ admits a (not necessarily unique) parallel 4-form $\Omega$ such that for any $p\in M$ there exists an oriented isometry $T_pM\rightarrow \mathbb{R}^8$, which takes $\Omega_p$ to $\Omega_0$.
We will define a $\mathrm{Spin}(7)$-manifold to include a choice of Cayley form. This fixes a particular embedding of $\Hol(g)\subseteq \mathrm{Spin}(7)$.
\begin{defn} A \emph{$\mathrm{Spin}(7)$-manifold} is a triple $(M,\Omega,g)$ where $(M,g)$ is an oriented Riemannian $8$-manifold, $\Hol(g)\subseteq \mathrm{Spin}(7)$ and $\Omega$ is a parallel 4-form such that for any $p\in M$ there exists an oriented isometry $T_pM\rightarrow \mathbb{R}^8$, which takes $\Omega_p$ to $\Omega_0$.
\end{defn}
We can break up the condition of being a $\mathrm{Spin}(7)$-manifold into a topological one, namely the existence of a reduction of the structure group of $TM$ to $\mathrm{Spin}(7)$, and an integrability condition on this reduction.
\begin{defn} Let $M$ be an oriented $8$-manifold. A \emph{$\mathrm{Spin}(7)$-structure on $M$} is a pair $(\Omega,g)$ where $g$ is a Riemannian metric and for any $p\in M$ there exists an oriented isometry $T_pM\rightarrow \mathbb{R}^8$, which takes $\Omega_p$ to $\Omega_0$. \end{defn}
A $\mathrm{Spin}(7)$-structure is equivalent to a reduction of the structure group of $TM$ to $\mathrm{Spin}(7)$. The existence of a $\mathrm{Spin}(7)$-structure is a topological property of $M$ as the following result from \cite[Th. 10.7]{Lawson:1989fk} shows.
\begin{prop}
Let $M$ be an oriented $8$-manifold. $M$ admits a $\mathrm{Spin}(7)$-structure if and only if $w_2(M)=0$ and
\begin{equation*}
p_1(M)^2-4p_2(M)+ 8\chi(M)=0.
\end{equation*} \end{prop}
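As a quick illustration of this criterion (a standard example, not part of the construction below), the flat torus $T^8$ admits a $\mathrm{Spin}(7)$-structure: all of its Pontryagin and Stiefel--Whitney classes vanish and its Euler characteristic is zero, so
\begin{equation*}
w_2(T^8)=0 \quad\text{and}\quad p_1(T^8)^2-4p_2(T^8)+8\chi(T^8)=0.
\end{equation*}
This is consistent with the fact that the flat metric on $T^8$ has trivial holonomy, which is contained in $\mathrm{Spin}(7)$.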
For $M$ to be a $\mathrm{Spin}(7)$-manifold it must satisfy an extra integrability condition, as the following proposition from \cite[Prop. 10.5.3]{joyce2000compact} shows.
\begin{prop}
Let $M$ be an oriented $8$-manifold with $\mathrm{Spin}(7)$-structure $(\Omega,g)$. Then $\Hol(g)\subseteq \mathrm{Spin}(7)$ and $\Omega$ is the induced $4$-form if and only if $\mathrm{d}\Omega=0$. In this case we say the $\mathrm{Spin}(7)$-structure $(\Omega,g)$ is \emph{torsion-free}. \end{prop}
The construction of Joyce uses a Calabi--Yau 4-orbifold to construct a $\mathrm{Spin}(7)$-manifold. Any Calabi--Yau 4-fold carries an $S^1$ of torsion-free $\mathrm{Spin}(7)$-structures as we will soon see. We will now define $\mathrm{SU}(4)$ as the stabiliser group of a set of tensors and show that $\mathrm{SU}(4)$ embeds into $\mathrm{Spin}(7)$.
$\mathrm{SU}(4)$ can be defined as the stabiliser of a metric, a K\"ahler form $\omega_0$ and holomorphic volume form $\theta_0$. If we let $(z_1,z_2,z_3,z_4)$ be coordinates on $\mathbb{C}^4$ we can write $\omega_0$, $ \theta_0$ as
\begin{equation*}
\omega_0=\frac{i}{2}(\mathrm{d} z_1\wedge \mathrm{d} \bar{z}_1+\dotsb +\mathrm{d} z_4\wedge \mathrm{d} \bar{z}_4) \text{ and }
\theta_0=\mathrm{d} z_1\wedge\dotsb \wedge \mathrm{d} z_4. \end{equation*}
We define a Calabi--Yau $4$-fold as a quadruple $(X,g,\omega,\theta)$ consisting of a K\"ahler manifold $(X,g,\omega)$ and a holomorphic $(4,0)$-form $\theta$ such that $|\theta|\equiv 4$. It can be shown that for any $p\in X$ there exists an isometry $T_pX\rightarrow \mathbb{C}^4$ taking $(\omega,\theta)$ to $(\omega_0,\theta_0)$.
\begin{prop} \label{prop:cyisspin7} Let $(X,g,\omega,\theta)$ be a Calabi--Yau $4$-fold. Define a $4$-form by $\Omega=\frac{1}{2}\omega\wedge\omega+\re{\theta}$, then $(\Omega,g)$ is a torsion-free $\mathrm{Spin}(7)$-structure on $X$. \end{prop} \begin{proof} Let $p\in X$ and identify $\omega_p$ and $\theta_p$ with the standard forms on $\mathbb{C}^4$. Identifying $\mathbb{C}^4$ with $\mathbb{R}^8$ via $z_j=x_{2j-1}+i x_{2j}$
and comparing the expressions for $\Omega_p$ and $\Omega_0$ we see that $(\Omega,g)$ defines a $\mathrm{Spin}(7)$-structure. Since $(X,g,\omega,\theta)$ is a Calabi-Yau manifold we have $\mathrm{d}\omega=\mathrm{d}\theta=0$, which implies $\mathrm{d}\Omega=0$. \end{proof}
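For concreteness, under the identification $z_j=x_{2j-1}+ix_{2j}$ the two summands of $\Omega$ at $p$ become
\begin{equation*}
\tfrac{1}{2}\omega_0\wedge\omega_0=\mathrm{d} x_{1234}+\mathrm{d} x_{1256}+\mathrm{d} x_{1278}+\mathrm{d} x_{3456}+\mathrm{d} x_{3478}+\mathrm{d} x_{5678}
\end{equation*}
and
\begin{equation*}
\re\theta_0=\mathrm{d} x_{1357}+\mathrm{d} x_{2468}-\mathrm{d} x_{1368}-\mathrm{d} x_{1458}-\mathrm{d} x_{1467}-\mathrm{d} x_{2358}-\mathrm{d} x_{2367}-\mathrm{d} x_{2457},
\end{equation*}
and the sum of these two expressions is precisely the Cayley form $\Omega_0$.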
The proposition above describes a particular embedding of $\mathrm{SU}(4)\hookrightarrow \mathrm{Spin}(7)$.
Let $(X,g,\omega,\theta)$ be a Calabi--Yau $4$-fold. Then the 4-form $\Omega_\phi=\frac{1}{2}\omega\wedge\omega+\re(e^{i\phi}\theta)$ for $\phi\in[0,2\pi)$ also defines a torsion-free $\mathrm{Spin}(7)$-structure on $X$ and a different embedding of $\mathrm{SU}(4)\hookrightarrow \mathrm{Spin}(7)$.
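Torsion-freeness of each $\Omega_\phi$ follows exactly as in Proposition \ref{prop:cyisspin7}: since $\mathrm{d}\omega=\mathrm{d}\theta=0$,
\begin{equation*}
\mathrm{d}\Omega_\phi=\omega\wedge\mathrm{d}\omega+\re\bigl(e^{i\phi}\,\mathrm{d}\theta\bigr)=0.
\end{equation*}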
\section{Construction of $\mathrm{Spin}(7)$-manifolds} The essential idea in Joyce's constructions of manifolds with exceptional holonomy is that of resolving the singularities of orbifolds within a particular holonomy group. We will therefore review the definitions of orbifolds and discuss Riemannian metrics and their holonomy groups on orbifolds. We will then give a short overview of the construction of manifolds with holonomy $\mathrm{Spin}(7)$ from Calabi--Yau 4-orbifolds. We direct the reader to \cite{Joyce:1999fk} and \cite[Ch. 10]{joyce2000compact} for the details of the construction.
\subsection{Orbifolds} \begin{defn} An \emph{orbifold} is a singular manifold $M$ of dimension $n$ whose singularities are locally isomorphic to quotient singularities $\mathbb{R}^n/G$ for finite subgroups $G\subset \GL(n,\mathbb{R})$, such that if $1\neq \gamma\in G$, then the subspace $V_\gamma$ of $\mathbb{R}^n$ fixed by $\gamma$ has $\dim V_\gamma\leq n-2$. \end{defn}
We say a point $p$ in $M$ is an \emph{orbifold point with orbifold group $G$} if $M$ is locally isomorphic to $\mathbb{R}^n/G$ at $p$ with $G$ non-trivial.
\begin{defn} A \emph{Riemannian metric $g$ on an orbifold $M$} is a Riemannian metric in the usual sense on the nonsingular part of $M$ such that, where $M$ is locally isomorphic to $\mathbb{R}^n/G$, the metric $g$ can be identified with the quotient of a $G$-invariant Riemannian metric defined on an open neighbourhood of $0$ in $\mathbb{R}^n$. We define the \emph{holonomy group} $\Hol(g)$ of $g$ to be the holonomy group of the restriction of $g$ to the nonsingular part of $M$. \end{defn}
If $p$ is an orbifold point of $M$ with orbifold group $G$ then we have an inclusion of groups \[ G\subseteq \Hol(g). \] Therefore for an orbifold to have holonomy $\mathrm{Spin}(7)$ we must have that each orbifold group $G$ lies in (the conjugacy class of subgroups of) $\mathrm{Spin}(7)$.
Many results for manifolds carry over with small modifications to orbifolds. In particular the Calabi conjecture holds for compact K\"ahler orbifolds. As a consequence we have the following theorem \cite[Th. 6.5.6]{joyce2000compact}, which we can use to find Calabi--Yau metrics on orbifolds.
\begin{thm} \label{thm:ccorbifolds} Let $X$ be a compact complex orbifold with $c_1(X)=0$ admitting K\"ahler metrics. Then there is a unique Ricci-flat K\"ahler metric in every K\"ahler class on $X$. \end{thm}
\subsection{Resolution of singularities} Given a Riemannian orbifold $(M,g)$ with holonomy $\Hol(g)$ we would like to know whether we can find a resolution $(\hat{M},\hat{g})$ of $M$ such that $\Hol(\hat{g})\subseteq \Hol(g)$.
If $\Hol(g)\subset \mathrm{SU}(n)$ then we can use complex geometry to determine whether resolutions exist which do not change the holonomy group. Suppose $X$ is a Gorenstein algebraic variety, so that the canonical sheaf $\mathcal{O}(K_X)$ is invertible. A resolution $\pi:\hat{X}\rightarrow X$ is called \emph{crepant} if $\pi^*(\mathcal{O}(K_X))=\mathcal{O}(K_{\hat{X}})$. The importance of crepant resolutions is that a crepant resolution of a Calabi--Yau orbifold is a Calabi--Yau manifold. The question of existence and uniqueness of crepant resolutions of quotient singularities $\mathbb{C}^n/G$ for finite groups $G\subset \mathrm{SU}(n)$ is a difficult one, to which we will return later.
If we wish to construct manifolds with special holonomy then we will not, in general, be able to use algebraic techniques; instead we must rely on finding singularities of a type which we know we can resolve within a given holonomy group.
\subsection{Review of construction} \label{sec:ReviewofConstruction} Now suppose $Y$ is a complex $4$-orbifold admitting metrics with holonomy $\mathrm{SU}(4)$. We require $Y$ to have isolated singularities $\{p_1,\dotsc,p_k\}$ modelled on $\mathbb{C}^4/\mathbb{Z}_4$, where the generator of $\mathbb{Z}_4$ acts as \begin{equation}
\label{eq:alpha}
\alpha:(z_1,z_2,z_3,z_4)\mapsto (iz_1,iz_2,iz_3,iz_4). \end{equation} The group $\mathbb{Z}_4$ lies in $\mathrm{SU}(4)$, which is consistent with $Y$ having holonomy $\mathrm{SU}(4)$. We also require that $Y$ admits an antiholomorphic isometric involution $\tau$ with fixed points the finite set $\{p_1,\dotsc,p_k\}$.
Since $\tau$ is antiholomorphic, the complex structure does not descend to the quotient of $Y$ by $\tau$. However we can form a torsion-free $\mathrm{Spin}(7)$-structure $(\Omega,g)$ given by $\Omega=\frac{1}{2}\omega\wedge\omega+\re{\theta}$, where $\omega$ is the K\"ahler form and $\theta$ the holomorphic volume form on $Y$, and this $\mathrm{Spin}(7)$-structure is $\tau$-invariant.
Defining $Z=Y/\langle \tau\rangle$ we have that this $\tau$-invariant torsion-free $\mathrm{Spin}(7)$-structure on $Y$ descends to $Z$ and the orbifold singularities of $Z$ are modelled on $\mathbb{R}^8/G$ where $G$ is a finite subgroup of $\mathrm{Spin}(7)$. We now wish to resolve these singularities by gluing in ALE $\mathrm{Spin}(7)$-manifolds to construct a $\mathrm{Spin}(7)$-manifold $M$.
It is shown in \cite[Prop. 5.3]{Joyce:1999fk} that all the singularities are of the same form. If we define coordinates on $\mathbb{R}^8$ as $(x_1,\dotsc,x_8)$ and complex coordinates by $z_i=x_{2i-1}+i x_{2i}$ then the singularities are of the form $\mathbb{R}^8/G$ where $G=\langle\alpha,\beta\rangle$ is a finite non-abelian subgroup of $\mathrm{Spin}(7)$ generated by $\alpha$, which is described in Eq. \eqref{eq:alpha}, and $\beta$, whose action on $\mathbb{C}^4$ is given by \begin{equation*}
\beta:(z_1,z_2,z_3,z_4)\mapsto (\overline{z}_2,-\overline{z}_1,\overline{z}_4,-\overline{z}_3). \end{equation*}
The singularity $\mathbb{R}^8/G$ can be resolved with holonomy contained in $\mathrm{Spin}(7)$ in two ways. Both resolutions $X_1,X_2$ have the same holonomy as abstract Lie groups but the embeddings into $\GL(8,\mathbb{R})$ (or $\mathrm{Spin}(7)$) are different. If we choose the resolutions of the singularities of $Z$ wisely we can ensure that the resolution of the orbifold $Z$ has holonomy exactly $\mathrm{Spin}(7)$ and not a proper subgroup of it.
The precise necessary conditions on the complex 4-orbifold, $Y$, are stated below.
\begin{cond} Let $Y$ be a compact complex 4-orbifold with $c_1(Y)=0$, admitting K\"ahler metrics. Let $\tau$ be an antiholomorphic involution on $Y$. We require that $Y$ have isolated singularities $\{p_1,\dotsc,p_k\}$, with $k\geq 1$, modelled on $\mathbb{C}^4/\mathbb{Z}_4$ as described above and that the fixed point set of $\tau$ is $\{p_1,\dotsc,p_k\}$. We also require that $Y\setminus \{p_1,\dotsc,p_k\}$ is simply-connected and $h^{2,0}(Y)=0$. \label{cond:Y} \end{cond}
We will use the following theorem of Joyce to construct examples of $\mathrm{Spin}(7)$-manifolds from appropriate complex $4$-orbifolds, which can be found in \cite[Th. 5.14]{Joyce:1999fk}.
\begin{thm} Suppose $Y$ satisfies Condition \ref{cond:Y}. Let $M$ be the resulting compact $8$-manifold defined in \cite[Def. 5.8]{Joyce:1999fk}. Then there exist torsion-free $\mathrm{Spin}(7)$-structures $(\Omega,g)$ on $M$. We can choose the resolutions of the singularities so that $\Hol(g)=\mathrm{Spin}(7)$. \end{thm}
\section{Weighted projective spaces and hypersurfaces}
Our goal is now to find complex 4-orbifolds which satisfy Condition \ref{cond:Y}. We will use algebraic geometry to find examples of such orbifolds. Hypersurfaces in weighted projective spaces provide a large source of orbifolds with specified (cyclic quotient) singularities. We will therefore begin by reviewing weighted projective spaces, their singularities and hypersurfaces contained in weighted projective spaces. The majority of this section is from \cite{ianofletcher}.
\begin{defn} Let $a_0,\dotsc,a_n$ be positive integers with $\gcd(a_0,\dotsc,a_n)=1$. The \emph{weighted projective space} $\mathbb{C}\mathbb{P}^n_{a_0,\dotsc,a_n}$ with weights $a_0,\dotsc,a_n$ is the quotient of $\mathbb{C}^{n+1}\setminus\{0\}$ by the action of $\mathbb{C}^*$ given by \[ \lambda:(z_0,\dotsc,z_n)\mapsto (\lambda^{a_0}z_0,\dotsc,\lambda^{a_n}z_n). \] \end{defn}
In general $\mathbb{C}\mathbb{P}^n_{a_0,\dotsc,a_n}$ will have singularities where the action of $\mathbb{C}^*$ on $\mathbb{C}^{n+1}\setminus\{0\}$ is not free. The stabiliser groups of these points are finite and so we can treat $\mathbb{C}\mathbb{P}^n_{a_0,\dotsc,a_n}$ as an orbifold. We can also treat $\mathbb{C}\mathbb{P}^n_{a_0,\dotsc,a_n}$ as a singular algebraic variety by considering $\mathbb{C}\mathbb{P}^n_{a_0,\dotsc,a_n}$ as $\Proj$ of a graded ring. This will be a useful viewpoint for us because it shows the similarities between weighted projective spaces and ordinary projective space.
Let $R$ be the graded ring $\mathbb{C}[z_0,\dotsc,z_n]$ where $z_i$ has weight $a_i$. $R$ has a direct sum decomposition $R=\bigoplus_dR_d$ into its graded pieces. Elements of $R_d$ will be called \emph{weighted homogeneous polynomials of degree $d$} but we will soon drop the term weighted and leave it as understood.
We can treat $\mathbb{C}\mathbb{P}^n_{a_0,\dotsc,a_n}$ as the variety $\Proj(R)$. From generalities on taking $\Proj$ of graded rings, see \cite[Prop. 5.11]{Hartshorne:1977fk}, we have that a finitely generated graded $R$-module determines a coherent sheaf of $\mathcal{O}_{\mathbb{C}\mathbb{P}^n_{a_0,\dotsc,a_n}}$-modules. In particular the module $R(m)$ determines a sheaf, which we will denote $\mathcal{O}(m)$ for brevity.
We should be careful when distinguishing between the two viewpoints. For example we have the following result from \cite[Cor. 5.9]{ianofletcher}, which holds only when considering $\mathbb{C}\mathbb{P}^n_{a_0,\dotsc,a_n}$ as a variety.
\begin{lem} Let $a_0,\dotsc,a_n$ be positive integers with $\gcd(a_0,\dotsc,a_n)=1$. Let $q=\gcd(a_1,\dotsc,a_n)$. Then $\mathbb{C}\mathbb{P}^n_{a_0,\dotsc,a_n}\simeq \mathbb{C}\mathbb{P}^n_{a_0,a_1/q,\dotsc,a_n/q}$ as varieties. \end{lem}
We have the following corollary.
\begin{cor} Let $a_0,\dotsc,a_n$ be positive integers with $\gcd(a_0,\dotsc,a_n)=1$. Then $\mathbb{C}\mathbb{P}^n_{a_0,\dotsc,a_n}\simeq \mathbb{C}\mathbb{P}^n_{b_0,\dotsc,b_n}$ as varieties for some weights $b_0,\dotsc,b_n$ such that $ \gcd(b_0,\dotsc,b_{i-1},b_{i+1},\dotsc,b_n)=1 $
for each $i$. \end{cor}
This motivates the following definition.
\begin{defn} We say $\mathbb{C}\mathbb{P}^n_{a_0,\dotsc,a_n}$ is \emph{well-formed} if \[ \gcd(a_0,\dotsc,a_{i-1},a_{i+1},\dotsc,a_n)=1 \text{ for each } i. \] \end{defn}
The condition of being well-formed is related to the structure of the singularities of $\mathbb{C}\mathbb{P}^n_{a_0,\dotsc,a_n}$. The singularities of $\mathbb{C}\mathbb{P}^n_{a_0,\dotsc,a_n}$ are all cyclic quotient singularities. We say a cyclic quotient singularity $\mathbb{C}^n/\mathbb{Z}_m$ is of type $\frac{1}{m}(a_1,\dotsc,a_n)$ if $\mathbb{Z}_m$ acts on $\mathbb{C}^n$ as \[ (z_1,\dotsc,z_n)\xrightarrow{\xi}(\xi^{a_1}z_1,\dotsc,\xi^{a_n}z_n) \] where $\xi^m=1$.
For any subset $I\subset \{0,\dotsc,n\}$ we define $S_I=\{[z_0,\dotsc,z_n]:z_{j}=0\text{ for all }j\notin I\}\allowbreak\subset \mathbb{C}\mathbb{P}^n_{a_0,\dotsc,a_n} $. Now suppose $\gcd(a_{i_0},\dotsc,a_{i_k})=m\neq 1$; then a generic point $p \in S_{i_0,\dotsc,i_k}$ is an orbifold point modelled on the singularity $\mathbb{C}^k\times \mathbb{C}^{n-k}/\mathbb{Z}_m$. If we extend the sequence $(i_0,\dotsc,i_k)$ to a permutation $(i_0,\dotsc,i_n)$ of the sequence $(0,\dotsc,n)$ then the singularity $\mathbb{C}^{n-k}/\mathbb{Z}_m$ is of type $\frac{1}{m}(a_{i_{k+1}},\dotsc,a_{i_n})$.
\begin{exmp} Consider the weighted projective space $\mathbb{C}\mathbb{P}^3_{1,2,3,6}$. The singular locus consists of the union of the two curves $S_{1,3}\cup S_{2,3}$, which intersect at the singular point $S_{3}$. The singularity at a generic point of $S_{1,3}$ is modelled on $\mathbb{C}\times \mathbb{C}^2/\mathbb{Z}_2$, where $\mathbb{Z}_2$ acts as $(z_0,z_2)\mapsto(-z_0,-z_2)$. The singularity at a generic point of $S_{2,3}$ is modelled on $\mathbb{C}\times\mathbb{C}^2/\mathbb{Z}_3$, where $\mathbb{Z}_3$ acts as $(z_0,z_1)\mapsto (\xi z_0,\xi^{-1}z_1)$ and $\xi^3=1$. Finally we have a nonisolated singular point at $S_{3}$, which is modelled on $\mathbb{C}^3/\mathbb{Z}_6$, where $\mathbb{Z}_6$ acts as $(z_0,z_1,z_2)\mapsto (\zeta z_0,\zeta^2z_1,\zeta^3z_2)$ and $\zeta^6=1$.
\end{exmp}
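The stratum-by-stratum computation of singularity types described above is mechanical: for each index set $I$ whose weights have $\gcd$ $m>1$, the transverse action is by the remaining weights reduced mod $m$. A minimal illustrative sketch (the helper name \verb|singularity_strata| is ours, not from any package):

```python
from itertools import combinations
from math import gcd
from functools import reduce

def singularity_strata(weights):
    """For each index set I whose weights have gcd m > 1, return the
    type 1/m(b_1,...) of the transverse singularity at a generic point
    of the stratum S_I, following the discussion above."""
    n = len(weights)
    strata = {}
    for r in range(1, n):
        for I in combinations(range(n), r):
            m = reduce(gcd, [weights[i] for i in I])
            if m > 1:
                # weights transverse to S_I, reduced modulo m
                transverse = tuple(weights[j] % m for j in range(n) if j not in I)
                strata[I] = (m, transverse)
    return strata

# CP^3_{1,2,3,6} from the example above
print(singularity_strata([1, 2, 3, 6]))
```

Applied to the weights $(1,2,3,6)$ this recovers the types $\frac{1}{2}(1,1)$ along $S_{1,3}$, $\frac{1}{3}(1,2)$ along $S_{2,3}$ and $\frac{1}{6}(1,2,3)$ at $S_3$ from the example.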
From the description of singularities above we see that the condition of being well-formed is equivalent to $\mathbb{C}\mathbb{P}^n_{a_0,\dotsc,a_n}$ having only singularities in complex codimension greater than 1.
\subsection{Hypersurfaces}
A section $f\in \Gamma(\mathbb{C}\mathbb{P}^n_{a_0,\dotsc,a_n},\mathcal{O}(d))=R_d$, i.e. a weighted homogeneous polynomial of degree $d$, determines a hypersurface of degree $d$ in $\mathbb{C}\mathbb{P}^n_{a_0,\dotsc,a_n}$. It can be shown that any hypersurface is determined by such a section \cite[Th. 3.7]{Cox:1995uq}.
\subsubsection{Quasismoothness}
Recall that a projective variety is smooth if the affine cone is smooth away from the origin. We shall define quasismoothness in a similar way. Let $\pi:\mathbb{C}^{n+1}\setminus\{0\}\rightarrow \mathbb{C}\mathbb{P}^n_{a_0,\dotsc,a_n}$ be the projection.
\begin{defn}
Let $Y\subset \mathbb{C}\mathbb{P}^n_{a_0,\dotsc,a_n}$ be an algebraic variety. We say $Y$ is \emph{quasismooth} if $\pi^{-1}(Y)$ is smooth.
\end{defn}
If $Y$ is quasismooth then the only singularities of $Y$ are those coming from the orbifold singularities of $\mathbb{C}\mathbb{P}^n_{a_0,\dotsc,a_n}$. We will restrict our attention to quasismooth hypersurfaces because we can understand their singularities easily in terms of those of the ambient weighted projective space. Regarding $\mathbb{C}\mathbb{P}^n_{a_0,\dotsc,a_n}$ as an orbifold, quasismoothness of $Y$ is equivalent to $Y$ being a sub-orbifold, see \cite[Prop. 3.5]{batyrev1994hodge}.
Note that a generic hypersurface of a fixed degree in a particular weighted projective space is not necessarily quasismooth.
\begin{exmp} Consider the graded ring $R=\mathbb{C}[z_0,z_1,z_2]$ where the weights are $1,2,2$ respectively, then $\Proj(R)=\mathbb{C}\mathbb{P}^2_{1,2,2}$. A generic weighted homogeneous polynomial of degree $3$ is of the form $f=\lambda_1z_0z_1+\lambda_2z_0z_2+\lambda_3z_0^3$. The hypersurface $Y_3=V(f)$ is not quasismooth since the affine variety defined by $f$ is singular along the set $\{(z_0,z_1,z_2)\in \mathbb{C}^3:\text{ $z_0=0$ and $\lambda_1 z_1+\lambda_2 z_2=0$}\}$. \end{exmp}
The conditions for the generic hypersurface of degree $d$ to be quasismooth are described below, taken from \cite[Th. 8.1]{ianofletcher}.
\begin{thm} The generic hypersurface $Y_d$ of degree d in $\mathbb{C}\mathbb{P}^n_{a_0,\dotsc,a_n}$ is quasismooth if and only if either $a_i=d$ for some $i$, i.e. $Y_d$ is a linear cone, or for every nonempty subset $\{i_0,\dotsc,i_k\}\subset \{0,\dotsc,n\}$ either \begin{enumerate} \item there exists a monomial $z_{i_0}^{d_0}\dotsm z_{i_k}^{d_k}$ of degree $d$; or \item for $j=0,\dotsc,k$ there exist monomials $z_{i_0}^{d_{0,j}}\dotsm z_{i_k}^{d_{k,j}} z_{e_j}$ of degree $d$, where the $e_j\notin\{i_0,\dotsc,i_k\}$ are distinct. \end{enumerate} \end{thm}
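The criterion is finite to check for given weights and degree, since the existence of a monomial of degree $d$ in a chosen set of variables is a representability question for nonnegative integer combinations of the corresponding weights. A minimal sketch, with function names of our own choosing, assuming the theorem exactly as stated:

```python
from itertools import combinations

def representable(t, weights):
    """Does a monomial of degree t in variables of the given weights exist,
    i.e. is t a nonnegative integer combination of the weights?"""
    reachable = {0}
    for s in range(1, t + 1):
        if any(w <= s and (s - w) in reachable for w in weights):
            reachable.add(s)
    return t in reachable

def generic_quasismooth(weights, d):
    """Test the criterion above for the generic degree-d hypersurface
    in the weighted projective space with the given weights."""
    n = len(weights)
    if d in weights:                      # Y_d is a linear cone
        return True
    for k in range(1, n + 1):
        for S in combinations(range(n), k):
            sub = [weights[i] for i in S]
            # (1) a monomial of degree d purely in the variables of S
            if representable(d, sub):
                continue
            # (2) at least |S| distinct indices e outside S admitting
            #     a monomial z_S^{...} z_e of degree d
            count = sum(1 for e in range(n)
                        if e not in S and representable(d - weights[e], sub))
            if count < k:
                return False
    return True
```

For the example above, \verb|generic_quasismooth([1, 2, 2], 3)| returns \verb|False|, while the ordinary cubic case \verb|generic_quasismooth([1, 1, 1], 3)| returns \verb|True|.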
\subsubsection{Canonical Sheaf of a Hypersurface}
Recall that if $Y_d\subset \mathbb{C}\mathbb{P}^n$ is a smooth hypersurface of degree $d$, i.e. defined by a homogeneous polynomial of degree $d$, then the adjunction formula gives us that $K_{Y_d}=\mathcal{O}(d-n-1)|_{Y_d}$. We would like a similar result for weighted projective spaces so that we can test the triviality of the canonical sheaf easily.
Fortunately we have such a result for a large class of hypersurfaces, namely those which are quasismooth and well-formed.
\begin{defn} Let $Y\subset \mathbb{C}\mathbb{P}^n_{a_0,\dotsc,a_n}$ be a hypersurface.
We say $Y$ is \emph{well-formed} if $\mathbb{C}\mathbb{P}^n_{a_0,\dotsc,a_n}$ is well-formed and $Y$ does not contain any codimension-2 singular stratum of $\mathbb{C}\mathbb{P}^n_{a_0,\dotsc,a_n}$.
\end{defn}
We have the following criterion for well-formedness for generic hypersurfaces from \cite[Prop. 6.10]{ianofletcher}. \begin{prop}
\label{prop:GenericHypersurfaceWellFormed} The generic hypersurface of degree $d$ in $\mathbb{C}\mathbb{P}^n_{a_0,\dotsc,a_n}$ is well-formed if and only if \begin{enumerate} \item $\gcd(a_0,\dotsc,a_{i-1},a_{i+1},\dotsc,a_{j-1},a_{j+1},\dotsc,a_n)\mid d$ for all $i,j$ and \item $\gcd(a_0,\dotsc,a_{i-1},a_{i+1},\dotsc,a_n)=1$ for all $i$. \end{enumerate} \end{prop}
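Proposition \ref{prop:GenericHypersurfaceWellFormed} is likewise a finite gcd check. A minimal sketch (helper names are ours):

```python
from math import gcd
from functools import reduce

def gcd_omitting(weights, omit):
    """gcd of the weights whose indices are not in the set `omit`."""
    return reduce(gcd, [a for i, a in enumerate(weights) if i not in omit])

def generic_well_formed(weights, d):
    """Numerical criterion above for the generic degree-d hypersurface
    in the weighted projective space with the given weights."""
    n = len(weights)
    # (2) the ambient weighted projective space is well-formed
    if any(gcd_omitting(weights, {i}) != 1 for i in range(n)):
        return False
    # (1) the gcd of the weights omitting any two indices divides d
    return all(d % gcd_omitting(weights, {i, j}) == 0
               for i in range(n) for j in range(i + 1, n))
```

For instance \verb|generic_well_formed([1, 1, 21, 21, 12, 28], 84)| returns \verb|True|, consistent with the hypersurface $Y_{84}$ appearing later, while \verb|generic_well_formed([1, 2, 2], 5)| fails because $\mathbb{C}\mathbb{P}^2_{1,2,2}$ is not itself well-formed.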
\begin{prop}
Let $Y_d\subset\mathbb{C}\mathbb{P}^n_{a_0,\dotsc,a_n}$ be a well-formed quasismooth hypersurface of degree $d$. Then the canonical sheaf is $K_{Y_d}=\mathcal{O}(d-\sum_{i=0}^na_i)|_{Y_d}$.
\end{prop}
Hence a well-formed quasismooth hypersurface of degree $d$ has trivial canonical bundle if $d=\sum_{i=0}^na_i$.
\section{Antiholomorphic involutions} Let $Y\subset \mathbb{C}\mathbb{P}^5_{a_0,\dotsc,a_5}$ be a well-formed quasismooth hypersurface with trivial canonical bundle. We now wish to consider antiholomorphic involutions on $Y$. We will consider only antiholomorphic involutions which arise as restrictions of antiholomorphic involutions on $\mathbb{C}\mathbb{P}^5_{a_0,\dotsc,a_5}$. The main result of this section is the classification of antiholomorphic involutions of weighted projective spaces, Proposition \ref{prop:antiholinv}.
It is shown in \cite{partouche2001rolling} that for standard projective space $\mathbb{C}\mathbb{P}^n$ the number of conjugacy classes of antiholomorphic involutions is either 1 or 2 depending on whether $n$ is even or odd respectively. If $n$ is even, then the only antiholomorphic involution of $\mathbb{C}\mathbb{P}^n$ up to conjugacy is the standard one: \[ [z_0,\dotsc,z_n]\longmapsto [\overline{z}_0,\dotsc,\overline{z}_n]. \] If $n$ is odd we also have the involution \[ [z_0,\dotsc,z_n]\longmapsto [\overline{z}_1,-\overline{z}_0,\dotsc,\overline{z}_n,-\overline{z}_{n-1}]. \]
We will consider antiholomorphic involutions up to conjugation by automorphisms of $\mathbb{C}\mathbb{P}^n_{a_0,\dotsc,a_n}$. Therefore we should first describe $\Aut(\mathbb{C}\mathbb{P}^n_{a_0,\dotsc,a_n})$.
\subsection{Automorphisms of weighted projective spaces} Consider the action of $\mathbb{C}^*$ on $\mathbb{C}^{n+1}$ defining a weighted projective space with weights $a_0,\dotsc,a_n$. We want to decompose $\mathbb{C}^{n+1}$ by the action of $\mathbb{C}^*$. We relabel the collection of weights $w_1<\dotsb <w_m$ and let $k_i$ be the number of times $w_i$ appears in the sequence $a_0,\dotsc,a_n$. We decompose $\mathbb{C}^{n+1}=\bigoplus_{i\in I}W_i$ where $\mathbb{C}^*$ acts on $W_i$ with weight $w_i$, $\dim(W_i)=k_i$ and $I=\{1,\dotsc,m\}$.
Any automorphism of $\mathbb{C}^{n+1}\setminus \{0\}$ extends to an automorphism of $\mathbb{C}^{n+1}$ for $n>0$ by Hartogs' Theorem. Let $\Aut_{\mathbb{C}^*}(\mathbb{C}^{n+1})$ denote the $\mathbb{C}^*$-equivariant automorphisms of $\mathbb{C}^{n+1}$. We can also describe $\Aut_{\mathbb{C}^*}(\mathbb{C}^{n+1})$ as the centralizer of $\mathbb{C}^*$ in $\Aut(\mathbb{C}^{n+1})$. Any $\mathbb{C}^*$-equivariant automorphism of $\mathbb{C}^{n+1}$ descends to an automorphism of $\mathbb{C}\mathbb{P}^n_{a_0,\dotsc,a_n}$ and the converse can be shown to hold.
A $\mathbb{C}^*$-equivariant morphism $F:\mathbb{C}^{n+1}\rightarrow \mathbb{C}^{n+1}$ is determined by a collection of polynomials $(F_{i,j})$ where $i\in I$, $1\leq j\leq k_i$ and $F_{i,j}$ is of degree $w_i$. Each polynomial $F_{i,j}$ can be decomposed \[
F_{i,j}=A_{i,j}+f_{i,j} \] into a linear part and a non-linear, i.e. weighted homogeneous quadratic and higher, part.
\begin{exmp} Consider the graded ring $R=\mathbb{C}[z_0,z_1,z_2]$ where $z_0,z_1,z_2$ have weights $1,1,2$ respectively. $\mathbb{C}^3$ splits as $\mathbb{C}^3=W_1\oplus W_2$ as representations of $\mathbb{C}^*$ where $\mathbb{C}^*$ acts on $W_1$ with weight 1 and on $W_2$ with weight 2.
A $\mathbb{C}^*$-equivariant morphism $F:\mathbb{C}^3\rightarrow\mathbb{C}^3$ is determined by a collection $F_{1,1},F_{1,2},F_{2}$ where $F_{1,1},F_{1,2}$ are linear functions of $z_0,z_1$ and $F_2$ is a sum of a linear multiple of $z_2$ and a homogeneous quadratic polynomial in $z_0,z_1$. \end{exmp}
For each $i\in I$ we define $A_i$ to be the matrix formed from the rows $(A_{i,j})_{1\leq j\leq k_i}$. The morphism $F$ is invertible on $W_1$ if $A_1$ is invertible since there are no polynomials with degree less than $w_1$. An inductive argument gives us that $F$ is invertible if and only if each $A_i$ is.
The map that sends a morphism $F$ to the corresponding collection of linear maps $A_i$ is a surjective homomorphism \[ \Aut_{\mathbb{C}^*}(\mathbb{C}^{n+1})\longrightarrow \prod_{i\in I} \GL(W_i). \]
The kernel of this homomorphism is the set of morphisms of the form $F_{i,j}=z_{i,j}+f_{i,j}$ where $(z_{i,j})_{1\leq j\leq k_i}$ are coordinates on $W_i$. Let us denote this kernel by $H$. We have an inclusion of groups $\prod_{i\in I}\GL(W_i)\hookrightarrow \Aut_{\mathbb{C}^*}(\mathbb{C}^{n+1})$ and hence the short exact sequence \[ 1\longrightarrow H\longrightarrow \Aut_{\mathbb{C}^*}(\mathbb{C}^{n+1})\longrightarrow \prod_{i\in I} \GL(W_i)\longrightarrow 1 \] is right split and we have $\Aut_{\mathbb{C}^*}(\mathbb{C}^{n+1})\simeq H\rtimes \prod_{i\in I}\GL(W_i)$.
Each element of $\Aut_{\mathbb{C}^*}(\mathbb{C}^{n+1})$ determines an automorphism of $\mathbb{C}\mathbb{P}^n_{a_0,\dotsc,a_n}$, but in order to determine an automorphism uniquely we must take a quotient by the diagonal action of $\mathbb{C}^*$ on $\Aut_{\mathbb{C}^*}(\mathbb{C}^{n+1})$.
More explicitly, consider the homomorphism \begin{align*} \mathbb{C}^*&\hookrightarrow \GL(W_1)\times\dotsb\times \GL(W_m) \\ \lambda&\mapsto (\lambda^{w_1},\lambda^{w_2},\dotsc,\lambda^{w_m}) \end{align*} which is an embedding since $\gcd(w_1,\dotsc,w_m)=1$. Then the quotient of $\Aut_{\mathbb{C}^*}(\mathbb{C}^{n+1})$ by this subgroup is isomorphic to $\Aut(\mathbb{C}\mathbb{P}^n_{a_0,\dotsc,a_n})$.
The linear structure on polynomials gives a linear structure to the group $H$. The group $\prod_{i\in I}\GL(W_i)$ acts on $H$ via the adjoint action, which we will denote $\Ad_A:H\rightarrow H$ for $A\in \prod_{i\in I}\GL(W_i)$. With respect to this linear structure the adjoint action of $\prod_{i \in I}\GL(W_i)$ on $H$ is linear. $H$ decomposes as a vector space as $H=\bigoplus_{i\in I} H_i$ where $H_i$ consists of the morphisms in $H$ with $f_{i^\prime,j}=0$ for $i^\prime\neq i$.
We can describe some of the group structure on $H$ using the order on $I$ given by $i<i^\prime$ if $w_i<w_{i^\prime}$.
For $f,g\in H$, let $i$ be such that $f_{i^\prime,j}=0$ for all $i^\prime <i$; then $(gf)_{i,j}=g_{i,j}+f_{i,j}$. In particular $(f^{-1})_{i,j}=-f_{i,j}$.
\subsection{Classification of Antiholomorphic Involutions} Any invertible antiholomorphic map can be written as the composition of the standard antiholomorphic involution coming from complex conjugation on $\mathbb{C}^{n+1}$, which we will denote by $c$, followed by an automorphism of $\mathbb{C}\mathbb{P}^n_{a_0,\dotsc,a_n}$ so let us write $\widetilde{\Aut}(\mathbb{C}\mathbb{P}^n_{a_0,\dotsc,a_n})$ for $\Aut( \mathbb{C}\mathbb{P}^n_{a_0,\dotsc,a_n})\rtimes \mathbb{Z}_2$ where $\mathbb{Z}_2$ is generated by $c$.
By the discussion above we can write any antiholomorphic involution of $\mathbb{C}\mathbb{P}^n_{a_0,\dotsc,a_n}$ as a composition $(f,A,c)$ where $f\in H$ and $A\in \prod_{i\in I}\GL(W_i)$. Up to conjugation by elements of $\Aut(\mathbb{C}\mathbb{P}^n_{a_0,\dotsc, a_n})$ we can ignore $H$ due to the following lemma.
\begin{lem} \label{lem:antihol} Let $\tau=(f,A,c)\in\widetilde{\Aut}(\mathbb{C}\mathbb{P}^n_{a_0,\dotsc,a_n})$ be an antiholomorphic involution. Then $\tau$ is conjugate to $(0,A,c)$. \end{lem} \begin{proof} Since $\tau$ is an involution we have $f\Ad_{A}(\bar{f})=1$. Let $f=f_1+\dotsb+f_m$ be the direct sum decomposition of $f$ and let $i$ be minimal such that $f_i\neq 0$. Since $i$ is minimal and the action of $\Ad_{A}$ on $H$ fixes each $H_j$ we have $(f\Ad_{A}\bar{f})_i=f_i+\Ad_{A}(\bar{f}_i)=0$.
Let $h\in H$ be such that $h_i=-\frac{1}{2}f_i$ and $h_j=0$ for $j<i$. Then $(h\tau h^{-1})_j=0$ for $j<i$ and \begin{align*} (h\tau h^{-1})_i&=h_i+f_i-\Ad_{A}(\bar{h}_i)\\ &=-\frac{1}{2}f_i+f_i+\frac{1}{2}\Ad_{A}(\bar{f}_i)\\ &=\frac{1}{2}(f_i+\Ad_{A}(\bar{f}_i))\\ &=0, \end{align*} where we have used the linearity of the action of $\prod_{i\in I}\GL(W_i)$ on $H$ and the discussion of the group structure of $H$. Now by induction on $i$ we find that $(0,A,c)$ is in the conjugacy class of $\tau$.
\end{proof}
\begin{prop} \label{prop:antiholinv} The number of conjugacy classes of antiholomorphic involutions of $\mathbb{C}\mathbb{P}^n_{a_0,\dotsc,a_n}$ is either $1$ or $2$. Let $k_j$, $w_j$ be defined in terms of $a_0,\dotsc,a_n$ as above. Then $\mathbb{C}\mathbb{P}^n_{a_0,\dotsc,a_n}$ admits a non-standard antiholomorphic involution if and only if $w_i k_i$ is even for each $i$. \end{prop} \begin{proof}
Let $\tau\in\widetilde{\Aut}(\mathbb{C}\mathbb{P}^n_{a_0,\dotsc,a_n})$ be an antiholomorphic involution. By Lemma \ref{lem:antihol} we have that $\tau$ is conjugate to $A\circ c$ so we will assume $\tau=A\circ c$. Now $A\in \prod_{i\in I}\GL(W_i)$ and for $\tau$ to be an involution we must have \begin{equation} \label{eq:AcA} A_j \overline{A}_j=\lambda^{w_j} \end{equation}
for some $\lambda\in\mathbb{C}^*$. Taking trace shows that $\lambda^{w_j}$ is real for each $j$ so $\lambda$ is real. By an action of the diagonal $\mathbb{C}^*$ we can ensure that $|\lambda|=1$. Taking determinants of Eq. \eqref{eq:AcA} then implies that $\lambda^{w_jk_j}=1$.
We know that, up to conjugation and scale, there are exactly two antiholomorphic involutions of $\mathbb{C}\mathbb{P}^{k_j-1}$ for $k_j$ even and one for $k_j$ odd. For the standard antiholomorphic involution we have $A_j\overline{A}_j=1$ and for the non-standard involution we have $A_j\overline{A}_j=-1$.
For $A_j$ to be non-standard we must have $\lambda^{w_j}=-1$ so $w_j$ must be odd and $k_j$ must be even. In this case $\lambda=-1$ and since $(-1)^{w_ik_i}=1$ we must have $w_ik_i$ even for each $i$.
\end{proof}
The standard antiholomorphic involution has a fixed point locus of (real) dimension $n$ so the fixed point locus of the involution restricted to a hypersurface will never consist of isolated points. We therefore must consider only weighted projective spaces which admit non-standard involutions.
Let $\tau$ be a non-standard antiholomorphic involution and let $w_j,k_j$ be defined as before. The fixed point locus of $\tau\circ c$ is of (complex) dimension \[\Bigl(\sum_{j:w_j\in 2\mathbb{Z}}k_j\Bigr)-1.\]
Since we want $\tau$ to have isolated fixed points when acting on a generic hypersurface we therefore require that $\sum_{j:w_j\in 2\mathbb{Z}}k_j=2$.
For the case we are interested in, namely $\mathbb{C}\mathbb{P}^5_{a_0,\dotsc,a_5}$, the discussion above imposes conditions on the allowed sets of weights. In order for $\mathbb{C}\mathbb{P}^5_{a_0,\dotsc,a_5}$ to admit an antiholomorphic involution whose fixed locus has (real) dimension 1 we must have, without loss of generality, $a_0=a_1$ and $a_2=a_3$, all of which are odd, and $a_4$, $a_5$ both even. The action of $\tau$ on $\mathbb{C}\mathbb{P}^5_{a_0,\dotsc,a_5}$ can be given as \begin{equation}
\label{eq:ActionofSigma}
\tau:[z_0,z_1,z_2,z_3,z_4,z_5]\mapsto [\overline{z}_1,-\overline{z}_0,\overline{z}_3,-\overline{z}_2,\overline{z}_4,\overline{z}_5]. \end{equation} From now on we will assume $\tau$ is of this form.
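One can check numerically that the map \eqref{eq:ActionofSigma} squares to the rescaling by $\lambda^{a_i}$ with $\lambda=-1$, and hence descends to an involution of $\mathbb{C}\mathbb{P}^5_{a_0,\dotsc,a_5}$ whenever $a_0,\dotsc,a_3$ are odd and $a_4,a_5$ are even. A small numerical sketch (using one admissible row of weights that appears later; the setup is ours):

```python
import numpy as np

# tau(z) = A conj(z), with A built from the 2x2 block J = [[0,1],[-1,0]]
J = np.array([[0., 1.], [-1., 0.]])
A = np.block([
    [J, np.zeros((2, 4))],
    [np.zeros((2, 2)), J, np.zeros((2, 2))],
    [np.zeros((2, 4)), np.eye(2)],
])

def tau(z):
    return A @ np.conj(z)

weights = np.array([1, 1, 21, 21, 12, 28])  # a_0..a_3 odd, a_4, a_5 even
rng = np.random.default_rng(0)
z = rng.standard_normal(6) + 1j * rng.standard_normal(6)

# tau^2 multiplies z_i by (-1)^{a_i}, i.e. it acts as lambda = -1 in C^*,
# so tau is an involution on the weighted projective space
assert np.allclose(tau(tau(z)), ((-1.0) ** weights) * z)
```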
\section{Singularities}
\label{sec:DesirableSingularities}
Recall that we require the Calabi--Yau 4-orbifold to have singularities of the type $\frac{1}{4}(1,1,1,1)$, which are fixed by $\tau$.
The fixed point locus of $\tau$ is contained in $S_{4,5}$ so we should find hypersurfaces with singularities of the correct type in $S_{4,5}$.
Suppose the weights $a_0,\dotsc,a_n$ have been chosen so that a generic hypersurface of degree $d=\sum_ia_i$ is well-formed and quasismooth. The isolated singularities of type $\frac{1}{4}(1,1,1,1)$ can occur in two ways, either $Y_d$ transversely intersects the singular locus $S_{4,5}$ at a generic point and $\gcd(a_4,a_5)=4$ or $Y_d$ contains a point $S_{4}$ or $S_5$ with $a_4=4$ or $a_5=4$ respectively.
For $Y_d$ to intersect a generic point of $S_{4,5}$ transversely we must have that there exist at least two monomials $z_4^{d_4}z_5^{d_5}$ of degree $d$. These singularities are of the type $\frac{1}{4}(1,1,1,1)$ if $a_k\equiv 1\mod 4$ for $k\neq 4,5$.
$Y_d$ contains the point $S_{4}$ if $a_4 \nmid d$ so that there does not exist a monomial $z_4^{d_4}$ of degree $d$. For $Y_d$ to be quasismooth we must have that $a_4 \mid d-a_j$ for some $j$ so there exists a monomial $z_4^{d_4}z_j$ of degree $d$. As before in order for this singularity to be of the type $\frac{1}{4}(1,1,1,1)$ we must have $a_k\equiv 1\mod 4$ for $k\neq 4,j$, however $a_4$ and $a_5$ are both even so therefore we must have $j=5$.
If $\gcd(a_4,a_5)=2$ then for $Y_d$ to be quasismooth we must have that either $Y_d$ intersects $S_{4,5}$ transversely at generic points with singularities modelled on $\mathbb{C}^4/\mathbb{Z}_2$ or we have a monomial $z_4z_5$ of degree $d$. We eliminate the first possibility because the singularity of type $\frac{1}{2}(1,1,1,1)$ does not admit a crepant resolution, and we eliminate the second because $d=\sum_ia_i>a_4+a_5$, so no such monomial exists. Hence $\gcd(a_4,a_5)=4$ and $Y_d$ does not contain $S_4$ or $S_{5}$. We summarize our results in the following proposition.
\begin{prop}
\label{prop:CorrectSings}
Suppose the generic hypersurface of degree $d=\sum_ia_i$ in $\mathbb{C}\mathbb{P}^5_{a_0,\dotsc,a_5}$ has isolated singularities of type $\frac{1}{4}(1,1,1,1)$ and $\mathbb{C}\mathbb{P}^5_{a_0,\dotsc,a_5}$ admits an antiholomorphic involution which fixes only these points in $Y_d$. Then without loss of generality the weights $a_0,\dotsc,a_5$ satisfy \begin{itemize}
\item[(i)] $a_0=a_1$ and $a_2=a_3$, and
\item[(ii)] $\gcd(a_4,a_5)=4$, and
\item[(iii)] $a_i\equiv 1\mod 4$ for $0\leq i\leq 3$, and
\item[(iv)] $a_4\mid d$ and $a_5\mid d$. \end{itemize} \end{prop}
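Conditions (i)--(iv) are immediate to test for a candidate weight system; a minimal sketch (the helper name is ours):

```python
from math import gcd

def weights_admissible(a):
    """Necessary conditions (i)-(iv) of the proposition above on the
    weights a_0,...,a_5, taken in the stated order."""
    d = sum(a)
    return (a[0] == a[1] and a[2] == a[3]             # (i)
            and gcd(a[4], a[5]) == 4                  # (ii)
            and all(a[i] % 4 == 1 for i in range(4))  # (iii)
            and d % a[4] == 0 and d % a[5] == 0)      # (iv)
```

For example the weight system $(1,1,21,21,12,28)$ with $d=84$ passes all four conditions, while $(1,1,3,3,4,4)$ fails condition (iii) since $3\not\equiv 1\bmod 4$.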
\subsection{Resolving Undesired Singularities} $Y_d$ may have other singularities, which we first need to resolve.
We will use methods from \cite{dais2006existence} to determine whether a given cyclic quotient singularity of dimension 4 admits a crepant resolution.
We will assume that the reader is familiar with the basic definitions of toric geometry \cite{Fulton:1993fk}. Consider a cyclic quotient singularity of the type $\frac{1}{m}(a_1,a_2,a_3,a_4)$. We can describe this as an affine toric variety. Let $N=\mathbb{Z}^4+\mathbb{Z}\cdot\frac{1}{m}(a_1,a_2,a_3,a_4)$ be a lattice and $\sigma\subset N_\mathbb{Q}=N\otimes_\mathbb{Z}\mathbb{Q}$ the cone spanned by the unit vectors $e_1=(1,0,0,0),\dotsc,e_4=(0,0,0,1)$. The affine toric variety associated to the cone $\sigma$ is isomorphic to the cyclic quotient singularity of type $\frac{1}{m}(a_1,a_2,a_3,a_4)$.
The set $\sigma_i$ of elements of age $i$ is defined to be the set of lattice points of $N$ lying in the convex hull of $\{ie_1,ie_2,ie_3,ie_4\}\subset N_\mathbb{Q}$. The following theorem, from \cite[Th. 6.1]{dais2006existence}, gives a necessary condition for the cyclic quotient singularity to admit a crepant resolution.
\begin{thm} \label{thm:neccrepres} Let $\mathbb{C}^n/G$ be a quotient singularity, where $G\subset \SL(n,\mathbb{C})$ is a finite abelian group. If $\mathbb{C}^n/G$ admits a crepant resolution, then the set of elements of age 1, $\sigma_1$, is a minimal generating set for $\sigma$ over $\mathbb{Z}$. \end{thm}
Theorem \ref{thm:neccrepres} gives us a necessary condition for a given cyclic quotient singularity to admit a crepant resolution. This condition is sufficient for all quotient singularities of dimension $4$ where the cyclic group has order less than 39 and is sufficient in all but 10 cases for quotient singularities of dimension $4$ with cyclic group of order less than 100 \cite{dais2006existence}.
\begin{exmp} Consider the isolated cyclic quotient singularity of type $\frac{1}{2}(1,1,1,1)$. The elements of age 1 are $e_1,\dotsc,e_4$. The element $\frac{1}{2}(1,1,1,1)\in \sigma$ cannot be written as a sum of $e_1,\dotsc,e_4$ with integer coefficients. Therefore the elements of age 1 do not form a generating set for $\sigma$ over $\mathbb{Z}$ and hence the singularity of type $\frac{1}{2}(1,1,1,1)$ does not admit a crepant resolution. \end{exmp}
\begin{exmp}
The generic hypersurface, $Y_{84}$, of degree 84 in $\mathbb{C}\mathbb{P}^5_{1,1,21,21,12,28}$ is a well-formed quasismooth Calabi--Yau hypersurface. The singularities of $Y_{84}$ consist of the curves $Y_{84}\cap S_{2,3,4}$ and $Y_{84}\cap S_{2,3,5}$, which intersect in the 4 points $\{p_1,\dotsc,p_4\}=Y_{84}\cap S_{2,3}$. The singularities of $Y_{84}$ at each of the $p_i$ are of type $\frac{1}{21}(1,1,7,12)$.
Let $N=\mathbb{Z}^4+\mathbb{Z}\cdot\frac{1}{21}(1,1,7,12)$ and $\sigma\subset N_\mathbb{Q}$ the cone spanned by $e_1=(1,0,0,0),\dotsc,e_4=(0,0,0,1)$.
The elements of age 1, which are listed in Table \ref{tab:exampleage1}, are a minimal generating set for $\sigma$ and hence the singularity $\mathbb{C}^4/\mathbb{Z}_{21}$ admits a crepant resolution. \begin{table}[htbp] \begin{tabular}{r r r r} $(1,0,0,0)$, & $(0,1,0,0)$, & $(0,0,1,0)$, & $(0,0,0,1)$, \\[3pt] $\frac{1}{21}(1,1,7,12)$, & $\frac{1}{21}(2,2,14,3)$,& $\frac{1}{21}(3,3,0,15)$, & $\frac{1}{21}(4,4,7,6)$, \\[3pt] $\frac{1}{21}(6,6,0,9)$, & $\frac{1}{21}(7,7,7,0)$, & $\frac{1}{21}(9,9,0,3)$, \\[3pt] \end{tabular}
\caption{\label{tab:exampleage1}Elements of age 1 in the cone $\sigma$, which defines the cyclic quotient singularity of type $\frac{1}{21}(1,1,7,12)$.} \end{table} \end{exmp}
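The age-1 elements in these examples can be enumerated directly: the relevant lattice points of $N$ are the unit vectors together with the fractional points $\frac{1}{m}(ka_1 \bmod m,\dotsc,ka_n \bmod m)$ for $1\leq k<m$, and the age is the sum of the coordinates. A minimal sketch (helper name is ours; exact arithmetic via rationals):

```python
from fractions import Fraction

def age1_elements(m, a):
    """Age-1 lattice points of N = Z^n + Z*(1/m)(a_1,...,a_n): the unit
    vectors together with the fractional generators whose coordinates
    sum to 1."""
    n = len(a)
    units = [tuple(Fraction(int(i == j)) for j in range(n)) for i in range(n)]
    juniors = []
    for k in range(1, m):
        v = tuple(Fraction(k * ai % m, m) for ai in a)
        if sum(v) == 1:
            juniors.append(v)
    return units + juniors
```

For $\frac{1}{21}(1,1,7,12)$ this yields the 11 elements of Table \ref{tab:exampleage1}, while for $\frac{1}{2}(1,1,1,1)$ only the four unit vectors have age 1, so $\sigma_1$ cannot generate the age-2 element $\frac{1}{2}(1,1,1,1)$ over $\mathbb{Z}$.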
We are now in a position to determine whether a particular weighted projective space contains a suitable Calabi--Yau 4-orbifold as a well-formed quasismooth hypersurface. \begin{prop}
\label{prop:AdmissableWeights} The weights $a_0,\dotsc,a_5$ such that \begin{itemize} \item[(i)] The generic hypersurface of degree $d=\sum_ia_i$ in $\mathbb{C}\mathbb{P}^5_{a_0,\dotsc,a_5}$ is well-formed and quasismooth; \item[(ii)] $Y_d$ has isolated singularities of the type $\frac{1}{4}(1,1,1,1)$; \item[(iii)] $\mathbb{C}\mathbb{P}^5_{a_0,\dotsc,a_5}$ admits an antiholomorphic involution whose fixed point locus intersects $Y_d$ at the isolated singularities of type $\frac{1}{4}(1,1,1,1)$; \item[(iv)] any other singularities of $Y_d$ admit crepant resolutions; \end{itemize} are listed in Table \ref{tab:AdmissableWeights}. \end{prop}
\begin{proof}
Lynker et al. \cite{lynker1999landau} determined the complete set of weighted projective spaces of dimension $5$ such that the generic hypersurface of degree $d=\sum_ia_i$ is quasismooth.
The list of weights can be found at \url{http://thp.uni-bonn.de/Supplements/cy.html}.
Propositions \ref{prop:GenericHypersurfaceWellFormed} and \ref{prop:CorrectSings} translate conditions (i)--(iii) into numerical conditions on the weights $a_0,\dotsc,a_5$.
We use a computer programme to search the list of 1,100,055 sets of weights given by Lynker et al., obtaining a list of $18$ sets of weights for which conditions (i)--(iii) hold.
Then we use Theorem \ref{thm:neccrepres} to test whether any undesired singularities of the generic hypersurface admit crepant resolutions.
This test eliminates four of the $18$ sets; the remaining sets of weights are listed in Table \ref{tab:AdmissableWeights}. \end{proof} \begin{table}[htbp]
\begin{tabular}{|r r r r r r| r r r r r r|}
\hline
\multicolumn{12}{|c|}{$\{a_0,\dotsc,a_5\}$} \\
\hline
1& 1& 1& 1& \phantom{2}4& 4 & \phantom{2}1& \phantom{2}1& 1& 1& 4& 8 \\ 1& 1& 1& 1& 8& 12 & 1& 1& 5& 5& 8& 20 \\ \hline 1& 1& 9& 9& 4& 4 & 5& 5& 13& 13& 4& 4 \\ 1& 1& 13& 13& 4& 8 & 1& 1& 21& 21& 4& 16 \\ 5& 5& 25& 25& 4& 16 & 1& 1& 21& 21& 12& 28 \\ 1& 1& 37& 37& 8& 28 & 1& 1& 53& 53& 20& 32 \\ 21& 21& 49& 49& 4& 24 & 1& 1& 69& 69& 16& 52 \\ \hline \end{tabular}
\caption{\label{tab:AdmissableWeights}The admissible weights of the ambient weighted projective spaces of Calabi--Yau 4-orbifolds. The weighted projective spaces with weights listed in the first two rows appear as ambient spaces for Calabi--Yau 4-orbifolds in \cite{Joyce:1996st}.} \end{table}
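The well-formedness part of these conditions reduces to a gcd condition on the weights, which can be spot-checked directly. The sketch below only verifies this gcd condition and the degree $d=\sum_i a_i$ of the earlier example; the quasismoothness and singularity conditions require the criteria of Propositions \ref{prop:GenericHypersurfaceWellFormed} and \ref{prop:CorrectSings}, which are not reproduced here:

```python
from math import gcd
from functools import reduce

# Well-formedness of the weighted projective space CP^5_{a_0,...,a_5}:
# removing any single weight must leave a set of weights with gcd 1.
def well_formed(ws):
    return all(reduce(gcd, ws[:i] + ws[i+1:]) == 1 for i in range(len(ws)))

weights_table = [
    (1,1,1,1,4,4),  (1,1,1,1,4,8),   (1,1,1,1,8,12),   (1,1,5,5,8,20),
    (1,1,9,9,4,4),  (5,5,13,13,4,4), (1,1,13,13,4,8),  (1,1,21,21,4,16),
    (5,5,25,25,4,16),(1,1,21,21,12,28),(1,1,37,37,8,28),(1,1,53,53,20,32),
    (21,21,49,49,4,24),(1,1,69,69,16,52),
]
assert all(well_formed(ws) for ws in weights_table)
# the degree of the hypersurface is d = sum of the weights,
# e.g. d = 84 for the weights (1,1,21,21,12,28) of the earlier example
assert sum((1, 1, 21, 21, 12, 28)) == 84
```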
\section{Determining Betti Numbers of $M$}
Let us now suppose that $Y$ is a Calabi--Yau 4-orbifold contained in one of the weighted projective spaces we have determined in Proposition \ref{prop:AdmissableWeights}. In general $Y$ will have singularities, which we first need to resolve. We will denote the resolution of $Y$ by $\hat{Y}$ and let us assume that $\tau$ lifts to $\hat{Y}$ so that $\hat{Y}$ satisfies Condition \ref{cond:Y}.
For the moment let us assume that we can determine the Hodge numbers of $\hat{Y}$. We can determine the Betti numbers of $Z=\hat{Y}/\langle \tau\rangle$ and the resulting $\mathrm{Spin}(7)$-manifold $M$ as follows.
\begin{prop}
\label{prop:BettiNumbersofMintermsofZ}
Let $\hat{Y}$, $Z$, $M$ be as above.
Suppose $Z$ has $k$ singularities modelled on $\mathbb{R}^8/G$ as in Section \ref{sec:ReviewofConstruction}.
Then the Betti numbers of $M$ are \begin{align*} b^2(M)&=b^2(Z), & b^4_+(M)&=\frac{1}{2}(h^{2,2}(\hat{Y})+k)-b^2(Z)+1,\\ b^3(M)&=\frac{1}{2}b^3(\hat{Y}), & b^4_-(M)&=h^{3,1}(\hat{Y})+b^2(\hat{Y})-b^2(Z)+k-1. \end{align*} \end{prop} \begin{proof}
Let $h^{p,p}_\tau(\hat{Y})$ be the dimension of the $\tau$-invariant part of $H^{p,p}(\hat{Y})$.
Noting that in all of the cases we will discuss we have $h^{2,0}(\hat{Y})=h^{3,0}(\hat{Y})=0$ and $h^{4,0}(\hat{Y})=1$ since $\hat{Y}$ has $\Hol(\hat{Y})=SU(4)$,
the Betti numbers of $Z$ can then be expressed as
\begin{align*}
b^2(Z)&=h^{1,1}_\tau(\hat{Y}), & b^4_+(Z)&=h^{2,2}_\tau(\hat{Y})-h^{1,1}(\hat{Y})+h^{1,1}_\tau(\hat{Y})+2, \\
b^3(Z)&=h^{2,1}(\hat{Y}), & b^4_-(Z)&=h^{3,1}(\hat{Y})+h^{1,1}(\hat{Y})-h^{1,1}_\tau(\hat{Y})-1.
\end{align*}
Applying the Lefschetz fixed point theorem we find that
\begin{equation*}
k=2+4h^{1,1}_\tau(\hat{Y})-2h^{1,1}(\hat{Y})+2h^{2,2}_\tau(\hat{Y})-h^{2,2}(\hat{Y}),
\end{equation*}
which we use to eliminate $h^{2,2}_\tau(\hat{Y})$ from the expressions for the Betti numbers of $Z$.
The ALE $\mathrm{Spin}(7)$-manifolds that we use to resolve the quotient singularities of $Z$ have Betti numbers $b^1=b^2=b^3=b^4_+=0$ and $b^4_-=1$; hence the Betti numbers of $M$ satisfy
\begin{equation*}
\begin{split}
b^j(M)&=b^j(Z) \text{ for $j=1$, $2$, $3$} \\
b^4_+(M)&= b^4_+(Z) \text{ and } b^4_-(M)=b^4_-(Z)+k.
\end{split} \end{equation*}
Combining these facts gives us the result. \end{proof}
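The elimination of $h^{2,2}_\tau(\hat{Y})$ in the proof can be verified numerically: sampling arbitrary integer Hodge data, defining $k$ by the Lefschetz relation, and computing the Betti numbers of $M$ both from the expressions for $Z$ and from the formulas of the proposition gives the same answer. A sketch (the sampled values are arbitrary and carry no geometric meaning):

```python
import random

random.seed(0)
for _ in range(100):
    # sample Hodge data of Y-hat and tau-invariant dimensions
    h11  = random.randint(1, 50)
    h11t = random.randint(0, h11)    # dim of tau-invariant part of H^{1,1}
    h21  = random.randint(0, 20)
    h22t = random.randint(0, 200)    # dim of tau-invariant part of H^{2,2}
    h31  = random.randint(0, 100)
    h22  = random.randint(0, 200)
    # number of fixed points, from the Lefschetz fixed point theorem
    k = 2 + 4*h11t - 2*h11 + 2*h22t - h22
    # Betti numbers of Z = Y-hat / <tau>
    b2Z  = h11t
    b4pZ = h22t - h11 + h11t + 2
    b4mZ = h31 + h11 - h11t - 1
    # Betti numbers of M: resolving the k singularities adds k to b^4_-
    b4pM, b4mM = b4pZ, b4mZ + k
    # formulas of the proposition, with h22t eliminated via k
    assert (h22 + k) % 2 == 0
    assert b4pM == (h22 + k)//2 - b2Z + 1
    assert b4mM == h31 + h11 - b2Z + k - 1
```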
From Proposition \ref{prop:BettiNumbersofMintermsofZ} we see that to determine the Betti numbers of $M$ it suffices to know the Hodge numbers of the orbifold $\hat{Y}$ and to understand how $\tau$ acts on $H^{1,1}(\hat{Y})$. We will use techniques from toric geometry both to determine the Hodge numbers and to understand the action of $\tau$.
The rest of this section will almost entirely be material from \cite{batyrev1994dual}, and we direct the reader to this paper for more details.
\subsection{Lattice Polytopes} We will now change our viewpoint from weighted projective spaces to toric varieties associated to reflexive polytopes. With this change we will find the Hodge numbers of the resolved hypersurface $\hat{Y}$, show that the antiholomorphic involution $\tau$ lifts to $\hat{Y}$, and determine the dimension of $H^2_\tau(\hat{Y})$.
Batyrev and Cox \cite{batyrev1994dual,batyrev1996strong} have determined the Hodge numbers of the crepant resolutions of Calabi--Yau hypersurfaces in toric varieties associated to reflexive polytopes. We will give the definition of reflexive polytope and show how to associate a toric variety to such a polytope.
In this subsection $N$ will denote a lattice and $M=\Hom(N,\mathbb{Z})$ its dual. We will denote a fan by $\Sigma$ and a rational strongly convex cone by $\sigma$.
Let $\Delta\subset M$ be an $n$-dimensional lattice polytope, i.e. a polytope with vertices in $M$, and suppose $\Delta$ contains the origin. We associate a toric variety to $\Delta$ by taking cones over the maximal faces of $\Delta$ as described in the following proposition. \begin{prop}
For every $k$-dimensional face $\Theta\subset \Delta$ let $\check{\sigma}(\Theta)\subset M_\mathbb{Q}=M\otimes_\mathbb{Z}\mathbb{Q}$ be the cone over $\Theta$, $\check{\sigma}(\Theta)=\{\lambda x\in M_\mathbb{Q}\text{ : $x\in \Theta$ and $\lambda \in \mathbb{Q}_{\geq 0}$}\}$, and let $\sigma(\Theta)\subset N_\mathbb{Q}$ be the $(n-k)$-dimensional dual cone. Then the collection of dual cones $\Sigma(\Delta)=\{\sigma(\Theta)\text{ : $\Theta\subset\Delta$}\}$ is a fan and hence determines a toric variety $P_\Delta$. \end{prop}
\begin{exmp} We recall how weighted projective space can be constructed as a toric variety.
Let $a_0,\dotsc,a_n$ be positive integers with $\gcd(a_0,\dotsc,a_n)=1$. Let $\overline{N}$ be generated by $e_0,\dotsc,e_{n}$ and let $N=\overline{N}/\mathbb{Z}\cdot(a_0e_0+\dotsb+a_ne_n)$. $N$ is a lattice since $\gcd(a_0,\dotsc,a_n)=1$. $\mathbb{C}\mathbb{P}^n_{a_0,\dotsc,a_n}$ is the toric variety associated to the fan whose $n$-dimensional cones are given by $\linspan(e_0,\dotsc,e_{i-1},e_{i+1},\dotsc,e_n)$ for $i=0,\dotsc,n$ in $N_\mathbb{Q}$. \end{exmp}
\begin{exmp}
Let $\overline{N}$ be generated by $e_0,\dotsc,e_n$ and let $\overline{M}$ be the dual lattice. Suppose $x\in \mathbb{Z}_{>0}e_0+\dotsb+\mathbb{Z}_{>0}e_n$ is a primitive element in $\overline{N}$, i.e. $x=a_0e_0+\dotsb+a_ne_n$ where $\gcd(a_0,\dotsc,a_n)=1$ and $a_i>0$ for each $i$. Then the set $\Delta=\{y\in \overline{M}\text{ : $\langle y,x\rangle=0$ and $\langle y,e_i\rangle\geq-1$}\}$ is an $n$-dimensional lattice polytope in the lattice $M=\{y\in\overline{M}\text{ : $\langle y,x\rangle=0$}\}$.
The dual lattice $N=\Hom(M,\mathbb{Z})$ can be identified with the quotient $\overline{N}/\mathbb{Z}\cdot(a_0e_0+\dotsb+a_ne_n)$. If $\gcd(a_0,\dotsc,\widehat{a_i},\dotsc,a_n)=1$ for each $i$ so that $\mathbb{C}\mathbb{P}^n_{a_0,\dotsc,a_n}$ is well-formed, then the fan $\Sigma(\Delta)$ is a refinement of the fan of $\mathbb{C}\mathbb{P}^n_{a_0,\dotsc,a_n}$ so the toric variety $P_\Delta$ is a partial resolution of $\mathbb{C}\mathbb{P}^n_{a_0,\dotsc,a_n}$.
\end{exmp}
The previous example shows that to any weighted projective space we can associate a lattice polytope, and after applying the toric construction to the polytope we get a toric variety, which is a partial resolution of the original weighted projective space.
A polytope $\Delta$ determines not only a toric variety $P_\Delta$ but also a choice of ample invertible sheaf $\mathcal{O}_\Delta(1)$ on $P_\Delta$ by \cite[Prop. 2.1.5]{batyrev1994dual}.
\begin{defn} Let $M$ be a lattice and $\Delta\subset M$ a lattice polytope of the same dimension as $M$ containing the origin. We define the \emph{dual polytope} $\Delta^*\subset N_\mathbb{Q}$ as the set \[ \Delta^*=\{x\in N_\mathbb{Q}\text{ : $\langle y,x\rangle\geq -1$ for all $y\in\Delta$}\} \] We say a lattice polytope $\Delta$ is \emph{reflexive} if $\Delta^*$ is also a lattice polytope, i.e. if the vertices of $\Delta^*$ lie in $N$ and not just $N_\mathbb{Q}$. \end{defn}
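Reflexivity is easy to test by brute force in low dimensions. The sketch below checks the standard two-dimensional example $\Delta=\mathrm{conv}\{(1,0),(0,1),(-1,-1)\}$, the polytope of $\mathbb{C}\mathbb{P}^2$ (a toy case; the polytopes appearing in this chapter are five-dimensional):

```python
from itertools import product

# Lattice points of the dual polytope: all x with <y, x> >= -1
# for every vertex y of the given polytope (brute force over a box).
def dual_lattice_points(vertices, box=5):
    pts = []
    for x in product(range(-box, box + 1), repeat=2):
        if all(v[0]*x[0] + v[1]*x[1] >= -1 for v in vertices):
            pts.append(x)
    return sorted(pts)

# Delta = conv{(1,0), (0,1), (-1,-1)}, the polytope of CP^2
verts = [(1, 0), (0, 1), (-1, -1)]
dual = dual_lattice_points(verts)
# Delta* = conv{(-1,-1), (2,-1), (-1,2)} contains 10 lattice points
assert len(dual) == 10
# reflexivity: the dual of Delta* recovers the lattice points of Delta
assert dual_lattice_points(dual) == sorted([(1, 0), (0, 1), (-1, -1), (0, 0)])
```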
The relevance of reflexive polytopes to Calabi--Yau orbifolds is described in the following result from \cite[Th. 4.1.9]{batyrev1994dual}.
\begin{thm}
Let $\Delta$ be an integral polytope and $P_\Delta$ the corresponding projective toric variety. The following conditions are equivalent.
\begin{enumerate}
\item the ample invertible sheaf $\mathcal{O}_\Delta(1)$ on $P_\Delta$ is anticanonical;
\item $\Delta$ is reflexive.
\end{enumerate} \end{thm}
Now suppose $\Delta$ is reflexive. Then a generic section of the sheaf $\mathcal{O}_\Delta(1)$ determines a quasismooth Calabi--Yau hypersurface, by an application of the adjunction formula.
The lattice polytopes associated to weighted projective spaces as described above are not always reflexive. For example, the weights $1, 1, 1, 1, 1, 2$ do not determine a reflexive polytope. Fortunately, the lattice polytopes associated to the weights described in Proposition \ref{prop:AdmissableWeights} are all reflexive.
The Hodge numbers of the resolved hypersurface $\hat{Y}$ are given in terms of combinatorial properties of the lattice polytope $\Delta$ defined by the weights $a_0,\dotsc,a_5$. We will not give the formulae here but direct the reader to \cite{batyrev1994dual,batyrev1994hodge}.
\subsection{$\tau$-Equivariant resolutions of singularities} If $Y\subset P_\Delta$ is a Calabi--Yau hypersurface then a crepant resolution of the singularities of $P_\Delta$ will induce a crepant resolution of the singularities of $Y$. We will resolve the singularities of $P_\Delta$ and in the process desingularize all of the Calabi--Yau hypersurfaces.
A subdivision of the polytope $\Delta^*$ will determine a refinement of the fan of $P_\Delta$ and hence a partial resolution of $P_\Delta$. A subdivision of $\Delta^*$ is called a \emph{(maximal) triangulation} if every lattice point in $\Delta^*$ is the vertex of some simplex in the subdivision. A triangulation of $\Delta^*$ will determine a maximal partial crepant resolution of $P_\Delta$ and hence of $Y$. A triangulation is called \emph{projective} if the associated resolution is. The existence of projective triangulations is guaranteed by \cite[Prop. 4]{Gelfand:1989fk}
and the resulting resolution will, very importantly, be crepant by \cite[Th. 2.2.24]{batyrev1994dual}.
However we require that $\tau$ lifts to the resolution of $P_\Delta$, which we will denote by $\hat{P}_\Delta$. The antiholomorphic involution $\tau$ can be decomposed into three parts \begin{equation*}
\tau=t\cdot \tau_m \cdot c, \end{equation*} where $c$ denotes the standard antiholomorphic involution, $\tau_m$ is an element of the torus in $P_\Delta$, and $t$ is a morphism of $P_\Delta$ induced by an involution of the lattice $N$, which fixes the polytope $\Delta^*$ (and $\Delta$).
The involution $\tau$ will lift to the resolution if the triangulation of $\Delta^*$ is invariant under the action of $t$, by which we mean that $t$ sends a simplex in the triangulation to another simplex in the triangulation. Unfortunately we cannot always find such an invariant triangulation as the following example shows.
\begin{exmp}
Let $e_0=(0,0,-2)$, $e_1=(1,1,1)$, $e_2=(1,-1,1)$, $e_3=(-1,-1,1)$, and $e_4=(-1,1,1)$ be points in $\mathbb{R}^3$.
Let $N$ be the lattice generated by $e_1,e_2,e_3$ and let $\Delta^*$ denote the reflexive polytope given by the convex hull of the set $\{e_0,\dotsc,e_4\}$.
The polytope $\Delta^*$ can be pictured as a cone over a square with vertices $e_1,e_2,e_3,e_4$.
Let $t$ be the lattice isomorphism defined by $t(e_1)=e_2$, $t(e_2)=e_1$ and $t(e_3)=e_4$, which is reflection in a plane in $\mathbb{R}^3$.
There are two triangulations of the polytope $\Delta^*$.
One contains the edge joining $e_1$ and $e_3$ and the other contains the edge joining $e_2$ and $e_4$.
Neither of these triangulations is invariant under the action of $t$ and we see that the map induced by $t$ on the toric variety does not lift to any resolution.
It is interesting to note that the polytope is not simplicial and hence the associated toric variety is not an orbifold.
We have not found an example of a simplicial reflexive polytope with an involution $t$ that does not admit a $t$-invariant projective triangulation.
\end{exmp}
Projective triangulations of the lattice polytope $\Delta^*$ are in one-to-one correspondence with the faces of an associated polytope, known as the secondary polytope \cite[Ch. 7]{Gelfand:1994fk}. For $\Delta^*$ to admit a $t$-invariant triangulation, we require that $t$ fix a face of the secondary polytope, or equivalently that the fixed point set of $t$ intersect the interior of a face of the secondary polytope. The secondary polytope sits inside the vector space $A_{n-1}(\hat{P}_\Delta)\otimes \mathbb{Q}$, where $\hat{P}_\Delta$ is any maximal partial crepant resolution of $P_\Delta$. Recall that $A_{n-1}(\hat{P}_\Delta)$ is determined by an exact sequence \cite{Fulton:1993fk} \begin{equation*}
\label{eq:DefOfChowGroup}
0\longrightarrow M\longrightarrow \sum_{\rho\in\Delta^*\setminus{\{0\}}} \mathbb{Z}\cdot e_\rho\longrightarrow A_{n-1}(\hat{P}_\Delta)\longrightarrow 0, \end{equation*} where $m\in M\mapsto \sum_{\rho\in\Delta^*\setminus{\{0\}}}\langle m,\rho\rangle e_\rho$. The lattice isomorphism $t$ acts on $M$ and $\Delta^*$ and hence on $A_{n-1}(\hat{P}_\Delta)$. If we tensor this sequence with $\mathbb{Q}$ we can check that for all of the cases we are interested in, $t$ in fact fixes the whole secondary polytope. This means that $\tau$ will lift to any maximal crepant resolution of $P_\Delta$.
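As a toy illustration of the exact sequence, the sketch below computes the rank of $A_{n-1}$ for $\mathbb{C}\mathbb{P}^2$, whose fan has the three rays $(1,0)$, $(0,1)$, $(-1,-1)$: after tensoring with $\mathbb{Q}$, the rank is the number of rays minus the rank of $M$, giving the expected Picard rank $1$.

```python
# Rank of the Chow group A_{n-1} from the exact sequence, for CP^2.
rays = [(1, 0), (0, 1), (-1, -1)]
# the map M -> Z^{rays} sends m to (<m, rho>)_rho;
# its matrix has the rays as rows
matrix = [list(r) for r in rays]

def rank2(M):
    # rank of an integer matrix with 2 columns, via 2x2 minors
    for i in range(len(M)):
        for j in range(i + 1, len(M)):
            if M[i][0]*M[j][1] - M[i][1]*M[j][0] != 0:
                return 2
    return 1 if any(any(row) for row in M) else 0

assert len(rays) - rank2(matrix) == 1  # Picard rank of CP^2
```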
\subsection{Hodge Numbers of $Y$}
The Hodge numbers of the resolved Calabi--Yau hypersurface, $\hat{Y}$, are given by formulae presented in \cite{batyrev1994dual,batyrev1996strong}. To understand how $\tau$ acts on $H^{1,1}(\hat{Y})$ we will describe in more detail how $h^{1,1}(\hat{Y})$ is calculated.
The basic idea is that $h^{1,1}(\hat{Y})$ counts the components of the intersection of $\hat{Y}$ with the union of all irreducible toric divisors in the desingularization of $\mathbb{C}\mathbb{P}^n_{a_0,\dotsc,a_n}$. If the intersection of $\hat{Y}$ with a toric divisor has dimension $>0$, then it is irreducible, and the action of $\tau$ on the corresponding component is determined by whether $\tau$ fixes the toric divisor or swaps it with another.
We denote the torus in $\hat{P}_\Delta$ by $T$ and let $X$ denote the intersection of $\hat{Y}$ with the union of all irreducible $T$-invariant divisors in $\hat{P}_\Delta$. We have a short exact sequence of cohomology groups \begin{equation*}
0\longrightarrow H^6_c(\hat{Y})\longrightarrow H^6_c(X)\longrightarrow H^7_c(\hat{Y}\setminus X)\longrightarrow 0, \end{equation*} which is natural under $\tau$. By Poincar\'e duality we have that $b^2(\hat{Y})=\dim(H^6_c(\hat{Y}))$ and hence $b^2_\tau(\hat{Y})$ is the difference of the dimensions of the $\tau$-invariant parts of $H^6_c(X)$ and $H^7_c(\hat{Y}\setminus X)$. Using a Lefschetz-type theorem \cite{Danilov:1986uq} for affine hypersurfaces in algebraic tori we obtain an isomorphism $H^7_c(\hat{Y}\setminus X)\simeq H^9_c(T)$. Given the description of $\tau$ as in Eq. \eqref{eq:ActionofSigma}, it is easy to check that the dimension of the $\tau$-invariant part of $H^9_c(T)$ is 3.
The irreducible $T$-invariant divisors in $\hat{P}_\Delta$ are indexed by the set $\Delta^*\setminus{\{0\}}$. Let $\rho\in \Delta^*\setminus{\{0\}}$ and $D_\rho$ the corresponding $T$-invariant divisor. If $\rho$ is contained in the interior of a face of codimension $\geq 3$, then the intersection $D_\rho\cap \hat{Y}$ is irreducible, while if $\rho$ is contained in the interior of a face of codimension $2$, then $D_\rho$ intersects $\hat{Y}$ in isolated points, the number of which is determined by the polytope.
If $D_\rho\cap \hat{Y}$ is irreducible, then the action of $\tau$ is determined by whether $\tau$ fixes $\rho\in \Delta^*\setminus{\{0\}}$ or not. If $\tau$ fixes $\rho$, then $\rho$ does not contribute to $b^2_\tau(\hat{Y})$ and a pair $\rho_1,\rho_2$ of $T$-invariant divisors, which are swapped by $\tau$, contribute 1 to $b^2_\tau(\hat{Y})$.
If $D_\rho$ intersects $\hat{Y}$ in $d$ points, then for a generic $Y$, $\tau$ will swap $d/2$ pairs of points if $d$ is even and $(d-1)/2$ pairs of points if $d$ is odd. However, we can choose $Y$ so that $\tau$ swaps $k$ pairs of points where $0\leq k\leq d/2$, in which case this contributes $k$ to $b^2_\tau(\hat{Y})$. In this way we see that we can obtain $\mathrm{Spin}(7)$-manifolds with different topological invariants arising from the same family of Calabi--Yau 4-orbifolds.
We have used the software PALP \cite{kreuzer2004palp} to find the toric divisors and to determine the toric divisors fixed by $\tau$.
\begin{exmp}
Consider the reflexive polytope $\Delta\subset M$ associated to the weights $1,1,9,9,4,4$. Let $N=M^*$ and let $\Delta^*$ be the dual polytope of $\Delta$.
The points of $\Delta^*\setminus\{0\}$ correspond to toric divisors in $P_\Delta$.
There are exactly 11 of these, which are listed in Table \ref{tab:ToricDivisorsExample} with respect to a particular basis of $N$.
The antiholomorphic involution swaps the elements in the first column in pairs and leaves the other 7 invariant.
\begin{table}[htbp]
\begin{tabular}{l l l l}
$(\phantom{-}1, \phantom{-}0, \phantom{-}0, \phantom{-}0, \phantom{-}0)$, & $(0, \phantom{-}0, \phantom{-}0, \phantom{-}1, \phantom{-}0)$, & $(0, -5, -5, -2, -2)$, \\
$(\phantom{-}0, \phantom{-}1, \phantom{-}0, \phantom{-}0, \phantom{-}0)$, & $(0, \phantom{-}0, \phantom{-}0, \phantom{-}0, \phantom{-}1)$, & $(0, -3, -3, -1, -1)$, \\
$(\phantom{-}0, \phantom{-}0, \phantom{-}1, \phantom{-}0, \phantom{-}0)$, & $(0, -7, -7, -3, -3)$, & $(0, -2, -2, -1, -1)$, \\
$(-1, -9, -9, -4, -4)$, & $(0, -1, -1, \phantom{-}0, \phantom{-}0)$, &
\end{tabular}
\caption{\label{tab:ToricDivisorsExample} $T$-invariant divisors in a maximal partial crepant resolution of the reflexive polytope with weights $1,1,9,9,4,4$.}
\end{table}
There is one divisor contained in the interior of a face of codimension 2, which corresponds to the singularities of our Calabi--Yau hypersurface of type $\frac{1}{4}(1,1,1,1)$.
A generic hypersurface intersects the divisor in 7 points.
We can choose the hypersurface so that $\tau$ swaps $k$ pairs of points where $0\leq k\leq 3$ and fixes the remaining singular points.
After taking the quotient by $\tau$ and resolving the quotient singularities in the usual way, we can find $\mathrm{Spin}(7)$-manifolds with $0\leq b^2(M)\leq 3$.
\end{exmp}
\section{Results and further work}
Table \ref{tab:results} lists the examples of $\mathrm{Spin}(7)$-manifolds constructed from well-formed quasismooth hypersurfaces in weighted projective spaces. For the sake of completeness we also include the examples already given in \cite[Ch. 15]{joyce2000compact}, which appear as the first four rows.
\begin{table}[tbp]
\begin{tabular}{| r r r r r r | c| c| c| c| } \hline
\multicolumn{6}{|c|}{$\{a_0,\dotsc,a_5\}$} & $b^2$ & $b^3$ & $b^4_+$ & $b^4_-$ \\ \hline 1&1&1&1&4&4&$0\leq k\leq 1$ & 0 & $1639-k$ & $807-k$ \\ 1&1&1&1&4&8& 0 & 0 & 3175 & 1575 \\ 1&1&1&1&8&12& 0 & 0 & 7784 & 3879 \\ 1&1&5&5&8&20& 0 & 6 & 2493 & 1237 \\ \hline 1&1&9&9&4&4&$0\leq k\leq 3$ & 0 & $1415-k$ & $695-k$ \\ 5&5&13&13&4&4&$0\leq k\leq 5$ & 0 & $295-k$ & $135-k$ \\ 1&1&13&13&4&8& $0\leq k\leq 2$ & 0 & $983-k$ & $1991-k$ \\ 1&1&21&21&4&16& $0\leq k\leq 1$ & 0 & $3927-k$ & $1951-k$\\ 5&5&25&25&4&16& $0\leq k\leq 2$ & 0 &$487-k$ & $231-k$\\ 1&1&21&21&12&28& $0\leq k\leq 2$ & 0 &$2983-k$&$1479-k$\\ 1&1&37&37&8&28& 0 & 0 & 5911 & 2943 \\ 1&1&53&53&20&32&0 & 0 & 6055&3015 \\ 21&21&49&49&4&24&$0\leq k\leq 9$& 0 &$263-k$&$119-k$ \\ 1&1&69&69&16&52&0 & 0 & 10087 & 5031 \\ \hline \end{tabular}
\caption{\label{tab:results}The weights of the ambient weighted projective spaces of the Calabi--Yau 4-folds and Betti numbers of the resulting $\mathrm{Spin}(7)$-manifolds.} \end{table}
The sets of Betti numbers realised by the manifolds constructed in this thesis are all distinct from those of previously known compact manifolds with holonomy $\mathrm{Spin}(7)$. It should also be noted that the example with $b^4=15118$ and $b^4_-=5031$ has the largest known values of $b^4$ and $b^4_-$ for a compact manifold with holonomy $\mathrm{Spin}(7)$.
For a fixed weighted projective space, consider the family of Calabi--Yau 4-folds that are fixed by the antiholomorphic involution $\tau$. It is interesting to note that the $\mathrm{Spin}(7)$-manifolds coming from resolutions of different Calabi--Yau 4-folds in the family all have the same value of $b^4_-+b^2+1$, which is the dimension of the conformal field theory moduli space \cite{shatashvili1995superstrings}.
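This constancy can be read off directly from Table \ref{tab:results}: within each family the table gives $b^2=k$ and $b^4_-=Q-k$ for a constant $Q$, so $b^4_-+b^2+1=Q+1$ is independent of $k$. A sketch checking this, together with the value $b^4=b^4_+ + b^4_-$ of the largest example:

```python
# Rows of the results table with a k-dependent Betti number b^2 = k:
# weights -> (k_max, P, Q), meaning b^4_+ = P - k and b^4_- = Q - k.
families = {
    (1,1,1,1,4,4):     (1, 1639,  807),
    (1,1,9,9,4,4):     (3, 1415,  695),
    (5,5,13,13,4,4):   (5,  295,  135),
    (1,1,13,13,4,8):   (2,  983, 1991),
    (1,1,21,21,4,16):  (1, 3927, 1951),
    (5,5,25,25,4,16):  (2,  487,  231),
    (1,1,21,21,12,28): (2, 2983, 1479),
    (21,21,49,49,4,24):(9,  263,  119),
}
for (kmax, P, Q) in families.values():
    # b^4_- + b^2 + 1 over the whole family is the single value Q + 1
    values = {(Q - k) + k + 1 for k in range(kmax + 1)}
    assert len(values) == 1
# the largest example (weights 1,1,69,69,16,52): b^4 = b^4_+ + b^4_-
assert 10087 + 5031 == 15118
```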
The next step in looking for Calabi--Yau orbifolds satisfying Condition \ref{cond:Y} would be to look for hypersurfaces in toric varieties coming from reflexive polytopes that are invariant under an involution of the lattice. One could also look for orbifolds with more general types of singularities that admit resolutions with holonomy $\mathrm{Spin}(7)$, for example the singularities in \cite[Sec. 4.3]{Joyce:1999fk}. The methods we have described for calculating the Betti numbers would apply immediately.
It should be noted that there is a notion of mirror symmetry for Calabi--Yau hypersurfaces in toric varieties determined by reflexive polytopes. The mirror polytope will also admit a non-standard antiholomorphic involution but there will be a choice involved. Also the mirror Calabi--Yau will, in general, not have the type of singularities we described in Section \ref{sec:ReviewofConstruction}.
\end{document}
\begin{document}
\title{Positive and negative results on the internal controllability of parabolic equations coupled by zero and first order terms}
\begin{abstract}
This paper is devoted to studying the null and approximate controllability
of two linear coupled parabolic equations posed on a smooth domain $\Omega$ of $\mathbb R^N$ ($N\geqslant 1$)
with coupling terms of zero and first orders and one control
localized in some arbitrary nonempty open subset $\omega$ of the domain $\Omega$. We prove null controllability under a new sufficient condition, and we also provide the first example of a system that is not approximately controllable in the case where the support of one of the nontrivial first order coupling terms intersects the control domain $\omega$.
\end{abstract}
\textbf{Keywords:} Controllability; Parabolic systems; Fictitious control method; Algebraic solvability.
\textbf{MSC Classification:} 93B05; 93B07; 35K40.
\section{Introduction} \subsection{Presentation of the problem and main results}
\hspace*{4mm} Let $T>0$, let $\Omega$ be a bounded domain of $\mb{R}^N$ ($N\in\mb{N}^*$)
of class $\mathcal C^2$
and let $\omega$ be an arbitrary nonempty open subset of $\Omega$. Let $Q_T:=(0,T)\times\Omega$, $q_T:=(0,T)\times\omega$ and $\Sigma_T:=(0,T)\times\partial\Omega$. We consider the following system of two parabolic linear equations with variable coefficients and coupling terms of order zero and one \begin{equation}\label{system primmal}
\left\{\begin{array}{ll} \partial_ty_1=\Div (d_1\nabla y_1)+g_{11}\cdot\nabla y_1+g_{12}\cdot\nabla y_2+a_{11}y_1+a_{12}y_2+\mathds{1}_{\omega}u&\mbox{in }Q_T,\\\noalign{
} \partial_ty_2=\Div (d_2\nabla y_2)+g_{21}\cdot\nabla y_1+g_{22}\cdot\nabla y_2+a_{21}y_1+a_{22}y_2&\mbox{in } Q_T,\\\noalign{
} y=0&\mbox{on }\Sigma_T,\\\noalign{
} y(0,\cdot)=y^0&\mbox{in }\Omega,
\end{array} \right. \end{equation} where $y^0\in L^2(\Omega)^2$ is the initial condition and $u\in L^2(Q_T)$ is the control.
The zero and first order coupling terms
$(a_{ij})_{1\leqslant i,j\leqslant 2}$ and $(g_{ij})_{1\leqslant i,j\leqslant 2}$
are assumed (for the moment) to be in $ L^\infty(Q_T)$ and
in $L^\infty(0,T;W^{1}_{\infty}(\Omega))^N$, respectively.
For $l\in\{1,2\}$, the second order elliptic self-adjoint operator $\Div (d_l\nabla)$ is given by
\begin{equation*}
\Div (d_l\nabla)=\sum\limits_{i,j=1}^N\partial_i(d^{ij}_l\partial_j),
\end{equation*} with \begin{equation*} \left\{
\begin{array}{l}
d^{ij}_l\in W^{1}_{\infty}(Q_T),\\\noalign{
}
d^{ij}_l= d^{ji}_l\mbox{ in }Q_T,
\end{array}\right. \end{equation*} where the coefficients $d^{ij}_l$ satisfy the uniform ellipticity condition \begin{equation*}
\sum\limits_{i,j=1}^N d^{ij}_l\xi_i\xi_j\geqslant d_0|\xi|^2\mbox{ in }Q_T,~\forall \xi\in\mb{R}^N, \end{equation*} for a constant $d_0>0$.
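As a concrete (hypothetical) illustration of the ellipticity condition, the constant matrix $d=\begin{pmatrix}2&1\\1&2\end{pmatrix}$ is symmetric with eigenvalues $1$ and $3$, so the condition holds with $d_0=1$; a numerical spot-check:

```python
import random

# Hypothetical diffusion matrix d = [[2, 1], [1, 2]] (symmetric,
# eigenvalues 1 and 3), so the uniform ellipticity constant is d_0 = 1:
# the quadratic form satisfies xi.d.xi >= |xi|^2 for every xi.
d = [[2.0, 1.0], [1.0, 2.0]]

random.seed(1)
for _ in range(1000):
    xi = [random.uniform(-1, 1), random.uniform(-1, 1)]
    norm2 = xi[0]**2 + xi[1]**2
    quad = sum(d[i][j] * xi[i] * xi[j] for i in range(2) for j in range(2))
    # slack for floating-point rounding
    assert quad >= 1.0 * norm2 - 1e-12
```

Here the inequality reduces to $\xi_1^2+2\xi_1\xi_2+\xi_2^2=(\xi_1+\xi_2)^2\geqslant 0$, which is why it holds for every $\xi$.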
It is well-known (see for instance \cite[Th. 3, p. 356-358]{MR2597943}) that for every initial data $y^0\in L^2(\Omega)^2$ and every control $u\in L^2(Q_T)$,
System \eqref{system primmal}
admits a unique solution $y$ in $W(0,T)^2$, where \begin{equation*} W(0,T):=L^2(0,T;H^1_0(\Omega )) \cap H^1(0,T;H^{-1}(\Omega ))\hookrightarrow\mc{C}^0([0,T];L^2(\Omega)). \end{equation*}
In this article, we are concerned with the approximate or null controllability of System \eqref{system primmal}. Let us recall the precise definitions of these notions. We say that System \eqref{system primmal} is \begin{itemize} \item[$\bullet$]\textit{approximately controllable} on the time interval $(0,T)$ if for every initial condition $y^0\in L^2(\Omega)^2$, every target $y^T\in L^2(\Omega)^2$ and every $\varepsilon>0$, there exists a control $u\in L^2(Q_T)$ such that the corresponding solution $y$ to System \eqref{system primmal} satisfies \begin{equation*}
\|y(T,\cdot)-y^T\|_{L^2(\Omega)^2}\leqslant \varepsilon. \end{equation*} \item[$\bullet$]\textit{null controllable} on the time interval $(0,T)$ if for every initial condition $y^0\in L^2(\Omega)^2$, there exists a control $u\in L^2(Q_T)$ such that the corresponding solution $y$ to System \eqref{system primmal} satisfies \begin{equation*} y(T,\cdot)=0\mbox{ in }\Omega. \end{equation*} \end{itemize}
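A finite-dimensional analogue may help fix ideas (this is only a heuristic, not part of the PDE argument): for a cascade ODE system $y'=Ay+Bu$ with the control acting on the first component only and the second component driven through a coupling coefficient, controllability is equivalent to the Kalman rank condition, and it holds precisely because the coupling is nonzero.

```python
# Cascade ODE analogue of the system above: the control acts on y_1,
# and y_2 is driven only through the coupling coefficient a21
# (a stand-in for the coupling terms a_{21}, g_{21} of the PDE system).
a21 = 1.0
A = [[0.0, 0.0],
     [a21, 0.0]]
B = [[1.0],
     [0.0]]

def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

# Kalman rank condition: rank [B, AB] = 2  <=>  controllability
col0 = [B[0][0], B[1][0]]
col1 = mat_vec(A, col0)
det = col0[0]*col1[1] - col0[1]*col1[0]
assert det != 0  # controllable precisely because a21 != 0
```

With $a_{21}=0$ the determinant vanishes and the second component cannot be influenced at all, mirroring the role of the coupling terms in System \eqref{system primmal}.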
It is well-known that if a parabolic system like \eqref{system primmal} is null controllable on the time interval $(0,T)$,
then it is also approximately controllable on the time interval $(0,T)$
(this is an easy consequence of usual results
of backward uniqueness for parabolic equations as given for example in \cite{MR0338517}).
Since the case $a_{21}\neq0$ and $g_{21}=0$ in $(t_0,t_1)\times\omega_0\subset q_T$ has already been studied in \cite{gonzalez2010controllability}, we will always work under the following assumption. \begin{ass}\label{MainAss} There exist $t_0<t_1$ in $(0,T)$ and a nonempty open subset $\omega_0$ of $\omega$ such that \begin{equation*} g_{21}\neq0\mbox{ in }(t_0,t_1)\times\omega_0. \end{equation*} \end{ass}
Moreover, as we will see in Section \ref{sec:simpl coupl}, it is possible, with the help of appropriate changes of variables and unknowns (we lose a little bit of regularity on the coefficients though, see Section \ref{sec:simpl coupl}), to replace the coupling operator
$g_{21}\cdot \nabla +a_{21}$
by the simpler coupling operator $\partial_{x_1}$ (where $x_1$ is the first direction in space), at least locally on some subset of $q_T$.
Hence, without loss of generality, we can also work under the following assumption.
\begin{ass}\label{MainAss2} There exists a nonempty open subset $\mathcal O$ of $\omega_0$ such that $$g_{21}\cdot \nabla +a_{21}=\partial_{x_1}\mbox{ on } \mathcal O_T:=(t_0,t_1)\times \mathcal O .$$ \end{ass}
For a nonempty set $\omega_T\subset \mathbb{R}^{N+1}$, let us denote by $\mc{C}^0_{t,x_2,...,x_N}( \overline{\omega}_T)$ the subset of $\mc{C}^0(\overline{\omega}_T)$ consisting of the functions depending only on the variables $t,~x_2,~x_3,...,x_N$. Let us now introduce the following condition, which will be crucial in the results that follow, and which is closely related to the particular form of the coupling term given in Assumption \ref{MainAss2} (removing this assumption would make Condition \ref{cond:modul} impossible to write down explicitly).
\begin{cond}\label{cond:modul} There exists a nonempty open set $\omega_T\subset (t_0,t_1)\times\mathcal O$ such that \begin{equation}\label{alg resol:cond cont} \left\{\begin{array}{l} \widetilde{a}_{22}\mbox{ is not an element of the } \mc{C}^0_{t,x_2,...,x_N}(\overline{\omega}_T)\mbox{-module }\\\noalign{
} \left\langle 1,\widetilde{g}_{22}^2,...,\widetilde{g}_{22}^N,d_2^{22},...,d_2^{NN}\right\rangle_{\mc{C}^0_{t,x_2,...,x_N}(\overline{\omega}_T)}, \end{array}\right. \end{equation} where \begin{equation}\label{resol alg:def g22 tilde} \left\{\begin{array}{l} \widetilde{g}_{22}^i:=g_{22}^i-\sum\limits_{j=1}^N\partial_{x_j}d_{22}^{ij},\\\noalign{
} \widetilde{a}_{22}:=- a_{22}+\Div (g_{22}). \end{array}\right. \end{equation} \end{cond}
Our first main result is the following:
\begin{theo}\label{theo:positif} Let $d_i^{kl}, g_{ij}^k\in \mc{C}^{N^2+3}(\overline{\omega}_T)$ and $a_{ij}\in \mc{C}^{N^2+2}(\overline{\omega}_T)$ for every $i,j\in\{1,2\}$ and $k,l\in\{1,...,N\}$.
Assume that Assumptions \ref{MainAss}, \ref{MainAss2} and Condition \ref{cond:modul} hold. Then System \eqref{system primmal} is null controllable at any time $T>0$. \end{theo}
\begin{rem} Theorem \ref{theo:positif} is stated and will be proved in the case of two coupled parabolic equations and one control. However, as in \cite{ML15}, it is possible to extend Theorem \ref{theo:positif} to systems of $m$ parabolic equations controlled by $m-1$ controls for arbitrary $m\geqslant 2$. More precisely, consider the system \begin{equation}\label{system primmalg}
\left\{\begin{array}{ll} \partial_ty_1=\Div(d_1\nabla y_1)+\sum_{i=1}^mg_{1i}\cdot\nabla y_i+\sum_{i=1}^{m}a_{1i}y_i+\mathds{1}_{\omega}u_1&\mbox{in }Q_T,\\\noalign{
} \partial_ty_2=\Div(d_2\nabla y_2)+\sum_{i=1}^mg_{2i}\cdot\nabla y_i+\sum_{i=1}^{m}a_{2i}y_i+\mathds{1}_{\omega}u_2&\mbox{in }Q_T,\\\noalign{
} \vdots\\\noalign{
} \partial_ty_{m-1}=\Div(d_{m-1}\nabla y_{m-1})+\sum_{i=1}^mg_{(m-1)i}\cdot\nabla y_i+\sum_{i=1}^{m}a_{(m-1)i}y_i+\mathds{1}_{\omega}u_{m-1}&\mbox{in } Q_T,\\\noalign{
} \partial_ty_m=\Div (d_m\nabla y_m)+\sum_{i=1}^mg_{mi}\cdot\nabla y_i+\sum_{i=1}^{m}a_{mi}y_i&\mbox{in }Q_T,\\\noalign{
} y_1=\ldots=y_m=0&\mbox{on } \Sigma_T,\\\noalign{
} y_1(0,\cdot)=y_1^0,\ldots,~y_m(0,\cdot)=y_m^0&\mbox{in } \Omega,
\end{array} \right. \end{equation} where $y^0:=(y_1^0,\ldots,y_m^0)\in L^2(\Omega)^m$ is the initial data and $u:=(u_1,\ldots,u_{m-1})\in L^2(Q_T)^{m-1}$ is the control. Let us suppose that there exists $i\in \{1,...,m\}$, $t_0<t_1\in (0,T)$ and a nonempty open subset $\omega_0$ of $\omega$ such that $g_{mi}(t,x)\neq0$ on $q_T:=(t_0,t_1)\times\omega_0$. As explained in Section \ref{sec:simpl coupl}, we can suppose that the operator $g_{mi}\cdot\nabla +a_{mi}$ is equal to $\partial_{x_1}$ in $(t_0,t_1)\times\mathcal O\subset q_T$. Assume that there exists an open set $\omega_T\subset (t_0,t_1)\times\mathcal O$ such that \begin{equation*} \left\{\begin{array}{l} \widetilde{a}_{mm}\mbox{ is not an element of the } \mc{C}^0_{t,x_2,...,x_N}(\overline{\omega}_T)\mbox{-module }\\\noalign{
} \left\langle 1,\widetilde{g}_{mm}^2,...,\widetilde{g}_{mm}^N,d_2^{22},...,d_2^{NN}\right\rangle _{\mc{C}^0_{t,x_2,...,x_N}(\overline{\omega}_T)}, \end{array}\right. \end{equation*} where \begin{equation*} \left\{\begin{array}{l} \widetilde{g}_{mm}^i:=g_{mm}^i-\sum\limits_{j=1}^N\partial_{x_j}d_{mm}^{ij},\\\noalign{
} \widetilde{a}_{mm}:=- a_{mm}+\Div (g_{mm}). \end{array}\right. \end{equation*} Then we can adapt the proof of Theorem \ref{theo:positif} to prove that System \eqref{system primmalg} is null controllable on the time interval $(0,T)$ under suitable regularity conditions on the coefficients. \end{rem} \begin{rem}Condition \ref{cond:modul} is clearly technical since it does not even cover the case of constant coefficients proved in \cite{ML15}, the general case given in \cite{benabdallah2014} (under some assumption on the control domain) or the one-dimensional result given in \cite{duprez2016}. However, Theorem \ref{theo:negatif} implies that one cannot expect the null controllability to be true in general without extra assumptions on the coefficients. We do not know what would be a reasonable necessary and sufficient condition on the coupling terms for the null controllability of System \eqref{system primmal}. \end{rem}
The second main result of the present paper is the following surprising result.
\begin{theo}\label{theo:negatif2} Let us assume that $\omega\subset\subset \Omega$. Let $\omega_1$ be a nonempty regular open set satisfying $\omega\subset\subset \omega_1\subset\subset\Omega$, and consider a function $\theta\in \mathcal{C}^{\infty}(\overline{\Omega})$ satisfying \begin{equation*} \left\{\begin{array}{l} \theta=1\mbox{ in }\omega,\\\noalign{
} \Supp(\theta)\subset\overline{\omega}_1,\\\noalign{
} \theta>0\mbox{ in } \omega_1. \end{array}\right. \end{equation*} Then there exists $a\in \mc{C}^{\infty}(\overline{\Omega})$ such that the system \begin{equation}\label{syst simpl 1D} \left\{\begin{array}{ll} \partial_ty_1=\Delta y_1+\mathds{1}_{\omega}u&\mbox{in } Q_T,\\\noalign{
} \partial_ty_2=\Delta y_2+ay_2+\partial_{x_1}(\theta y_1)&\mbox{in } Q_T,\\\noalign{
} y=0&\mbox{on } \Sigma_T,\\\noalign{
} y(0,\cdot)=y^0&\mbox{in }\Omega \end{array}\right. \end{equation} is not approximately controllable (hence not null controllable) on the time interval $(0,T)$. \end{theo}
In other words, Theorem \ref{theo:negatif2} tells us that for every control set $\omega$ strongly included in $\Omega$, there exists a potential $a$ for which approximate controllability of \eqref{syst simpl 1D} does not hold, in any space dimension. We can improve this result slightly in the one-dimensional case, where we obtain the following statement: for some well-constructed potential $a$, there exists one control domain on which System \eqref{syst simpl} is not approximately controllable (hence not null controllable) and another control domain on which System \eqref{syst simpl} is null controllable (hence approximately controllable),
highlighting the surprising fact that some \emph{geometrical conditions} on the control domain have to be imposed in order to obtain a controllability result.
\begin{theo}\label{theo:negatif} Consider the following system \begin{equation}\label{syst simpl} \left\{\begin{array}{ll} \partial_ty_1=\partial_{xx} y_1+\mathds{1}_{\omega}u&\mbox{in } (0,T)\times(0,\pi),\\\noalign{
} \partial_ty_2=\partial_{xx} y_2+ay_2+\partial_{x} y_1&\mbox{in } (0,T)\times(0,\pi),\\\noalign{
} y(\cdot,0)=y(\cdot,\pi)=0&\mbox{on } (0,T),\\\noalign{
} y(0,\cdot)=y^0&\mbox{in }(0,\pi). \end{array}\right. \end{equation}
There exists a coefficient $a\in \mc{C}^{\infty}([0,\pi])$ such that: \begin{enumerate}
\item There exists an open interval $(\alpha,\beta)\subset\subset (0,\pi)$ such that, for the control domain $\omega=(\alpha,\beta)$ and all $T>0$, System \eqref{syst simpl} is null controllable (hence approximately controllable) at time $T$. \item There exists an open interval $(\alpha,\beta)\subset\subset (0,\pi)$ such that, for the control domain $\omega=(\alpha,\beta)$ and all $T>0$, System \eqref{syst simpl} is not approximately controllable (hence not null controllable) at time $T$. \end{enumerate} \end{theo}
\begin{rem} Let us mention that Theorems \ref{theo:negatif2} and \ref{theo:negatif} are the first negative results for the controllability of System \eqref{system primmal} with distributed controls when the support of the first order coupling term intersects the control domain.
The authors want to highlight that the coupling operator is constant on the whole domain, and nevertheless the system may or may not be controllable depending on the location of the control domain, which is an unexpected phenomenon. \end{rem}
\subsection{State of the art}
\hspace*{4mm} Many models of interest involve coupled (linear or nonlinear) parabolic equations, notably in fluid mechanics, medicine, chemistry, ecology, geology, etc., which explains why the controllability properties of linear and nonlinear parabolic systems have attracted increasing interest in recent years (see for example the survey \cite{ammar2011recent}). The main issue is what is called \emph{indirect} controllability, that is to say one wants to control many equations with fewer controls than equations, by acting indirectly on the equations where no control term appears thanks to the coupling terms of the system. This notion is fundamental for real-life applications, since in some complex systems only certain quantities can be effectively controlled. Here, we will concentrate on previous results concerning the null or approximate controllability of linear parabolic systems with distributed controls, but
there are also many other results concerning boundary controls or other classes of
systems like hyperbolic systems.
First of all, in the case of zero order coupling terms, the case of constant coefficients is now completely treated and we refer to \cite{ammar2009generalization} and \cite{ammar2009kalman} for parabolic systems having constant coupling coefficients (though with diffusion coefficients that may depend on the space variable) and for some results in the case of time-dependent coefficients. In the case of zero and first order coupling terms with constant coefficients, a necessary and sufficient condition for $m$ equations and $m-1$ controls is provided in \cite{ML15} by the authors.
The case of space-varying coefficients remains widely open despite many new partial results in recent years. In the case where the support of the coupling terms intersects the control domain, a general result is proved in \cite{gonzalez2010controllability} for parabolic systems in cascade form with one control force (and possibly first order coupling terms). We also mention \cite{MR2226005}, where a result of null controllability is proved in the case of a system of two equations with one control force, with an application to the controllability of a nonlinear system of transport-diffusion equations. In the situation where the coupling regions do not intersect the control domain, the situation is still not well understood and only partial results are available, in general under technical and geometrical restrictions, notably on the control domain (see for example \cite{alabau2013}, \cite{MR3039207}, \cite{MR2783322} and \cite{CherifMinimalTimeDisjoint}). Let us mention that in this case, there might appear a minimal time for the null controllability of System \eqref{system primmal} (see \cite{Ammar-Khodja2015}), which is a very surprising phenomenon for parabolic equations, because of the infinite speed of propagation of the information.
Concerning the case of first order coupling terms, we mention \cite{gonzalez2010controllability} which gives some controllability results when the coefficient $g_{21}$ is equal to zero on the control domain.
Let us also mention the recent work \cite{benabdallah2014}, which concerns small systems in low dimension, that is to say $2\times2$ and $3\times 3$ systems. The authors of \cite{benabdallah2014} suppose that the control domain contains a part of the boundary $\partial\Omega$.
Recently, in \cite{duprez2016}, the first author studied a particular cascade system with space-dependent coefficients in dimension one, using the moment method, and obtained necessary and sufficient conditions on the coupling terms of order $0$ and $1$ for the null controllability. To conclude, let us also mention another result given in \cite{ML15} by the authors, which provides a sufficient condition for null controllability in dimension one for space and time-varying coefficients under some technical conditions on the coefficients, a condition which turns out to be exactly equivalent to Condition \ref{cond:modul} under Assumption \ref{MainAss2} (but with more regularity than in Assumption \ref{MainAss}). Hence, Theorem \ref{theo:positif} can be seen as a multi-dimensional generalization of the one-dimensional result given in \cite{ML15}. For a more detailed state of the art concerning this problem, we refer to \cite{ML15}.
Hence, the present paper improves the previous results in the following sense: \begin{itemize} \item Contrary to \cite{benabdallah2014,guerrerosyst22,duprez2016,ML15}, we prove in Theorem \ref{theo:positif} the null controllability of System \eqref{system primmal} with a condition on $a_{22}$ but for space/time dependent coefficients, in any space dimension and without any condition on the control domain.
\item In the previous results, it was surprising to have very different sufficient conditions for the null controllability of System \eqref{system primmal} in the case of first order coupling terms, for example on one hand constant coupling coefficients and on the other hand a control region which intersects the boundary of the domain. Through the examples of systems that are not approximately controllable given in Theorems \ref{theo:negatif2} and \ref{theo:negatif}, we can now better understand why such different conditions appeared, since the expected general condition for the null controllability of System \eqref{system primmal} with space and time-varying coefficients (namely, that it is sufficient that the control and coupling regions intersect) may be false in general if $\omega\subset\subset\Omega$.
\end{itemize}
\section{Simplification of the coupling term}\label{sec:simpl coupl} In this section, we will prove that it is possible to locally replace the coupling operator $g_{21}\cdot \nabla +a_{21}$ by $\partial_{x_1}$, where $x_1$ is the first direction in space. This kind of simplification has already been used in \cite[Lemma 2.6]{benabdallah2014} for example, and we refer to this article for a more detailed proof (see also \cite{duprez2016}). Let us remark that the regularities stated in Lemma \ref{reduc1} are higher than those stated in Theorem \ref{theo:positif} due to technical reasons appearing in the proofs of Lemmas \ref{reduc1} and \ref{reduc2}.
\begin{Lemme}\label{reduc1} Let $ d_i^{kl},~g_{ij}^k,~a_{ij}\in \mc{C}^{N^2+4}([t_0,t_1]\times\overline{\omega}_0)$ for every $i,j\in\{1,2\}$ and $k,l\in\{1,...,N\}$. Suppose that Assumption \ref{MainAss} is verified. Then, there exist a nonempty open subset $U$ of $\mathbb R^{N-1}$, a positive real number $\varepsilon$ and a $\mathcal{C}^{N^2+3}$-diffeomorphism $\Lambda$ from $U_{\varepsilon}:=(t_0,t_1)\times(0,\varepsilon)\times U $ to an open set $(t_0,t_1)\times \mathcal O\subset (t_0,t_1)\times\omega_0$ that keeps $t$ invariant and such that if we call $\widetilde y_1:=y_1\circ\Lambda$ and $\widetilde y_2:=y_2\circ\Lambda$, then there exist a matrix $\widetilde d_2\in \mathcal M_N(\mathcal C^{N^2+3}(U_{\varepsilon}))$, a vector $\widetilde g_{22}\in (\mathcal C^{N^2+3}(U_{\varepsilon}))^N$ and coefficients $\widetilde a_{21},~\widetilde a_{22}\in \mathcal C^{N^2+3}(U_{\varepsilon})$ such that locally on $U_{\varepsilon}$ one has
\begin{gather}\label{imp}
\partial_t\widetilde y_2=\Div (\widetilde d_2\nabla \widetilde y_2)+\widetilde g_{22}\cdot\nabla \widetilde y_2+\widetilde a_{22} \widetilde y_2 +\partial_{x_1} \widetilde{y_1}+\widetilde a_{21} \widetilde y_1\mbox{ in } U_{\varepsilon}.
\end{gather} \end{Lemme}
\textbf{Proof of Lemma~\ref{reduc1}}\\ Let us consider some open hyper-surface $\gamma$ of class $\mathcal{C}^\infty$ included in $\omega_0$ on which $g_{21}\cdot\nu<0$, where $\nu$ is the normalized outward normal on $\gamma$ (this can always be done since $g_{21}\not =0$ on $(t_0,t_1)\times\omega_0$ and is at least continuous), small enough such that it can be parametrized by a local diffeomorphism $$F:s_0:=(s_2,\ldots,s_N)\in U\subset\mathbb R^{N-1}\mapsto F(s_0)\in\gamma,$$ where $U$ is a nonempty open set. We call $\gamma_T:=(t_0,t_1)\times\gamma$. Let us consider some $\mathcal{C}^{N^2+4}$ extension of $g_{21}$ (that exists thanks to the regularity of $\gamma$ and $g_{21}$) that we denote by $g^T_{21}:(t,x)\in\mathbb R^{N+1}\mapsto(0,g_{21}(t,x))\in\mathbb R^{N+1}$. Using the Cauchy-Lipschitz Theorem, we infer that for every $(t,\sigma)\in \gamma_T$, there exists a unique global solution to the Cauchy Problem
\begin{equation*}
\left\{\begin{array}{ll} \frac{d}{ds}\Phi(t,s,\sigma)=g^T_{21}(\Phi(t,s,\sigma)),\\\noalign{
} \Phi(t,0,\sigma)=(t,\sigma).
\end{array} \right. \end{equation*} Since $\Phi$ is continuous and $g_{21}\cdot\nu<0$ on $\gamma_T$, we deduce that there exists some $\varepsilon>0$ such that $\Phi(t,s,\sigma)\in (t_0,t_1)\times\omega_0$ for every $s\in (0,\varepsilon)$ and every $(t,\sigma)\in \gamma_T$. We define $$\Lambda:(t,s,z)\in (t_0,t_1)\times(0,\varepsilon)\times U\mapsto \Phi(t,s,F(z)).$$ Then, by the inverse mapping theorem, $\Lambda$ is a $\mathcal C^{N^2+4}$-diffeomorphism from $U_{\varepsilon}$ to $(t_0,t_1)\times\mathcal O:=\Lambda(U_{\varepsilon})$ with $\mathcal O\subset\omega_0$. Let us call $\widetilde y_1(t,s,z):=y_1(\Lambda(t,s,z))$ and $\widetilde y_2(t,s,z):=y_2(\Lambda(t,s,z))$, then it is clear that $$\partial_t \widetilde y_i(t,s,z)=(\partial_t y_i)\circ\Lambda(t,s,z) \mbox{ for }i=1,2~ \mbox{ and }\partial_s \widetilde y_2(t,s,z)=(g_{21}\cdot\nabla y_2)\circ\Lambda(t,s,z),$$ and hence we obtain \eqref{imp} and the desired regularity of the new coefficients by writing down the equation satisfied by $\widetilde y_2$. \cqfd
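To fix ideas, the construction above becomes completely explicit in the simplest (purely illustrative) situation where the coupling field is constant, say $g_{21}\equiv e_1$, and $\gamma$ is a small piece of hyperplane transversal to $e_1$:

```latex
% Illustrative sketch (assumption: g_{21} \equiv e_1 constant, so the flow is affine).
\[
\Phi(t,s,\sigma)=(t,\sigma+s\,e_1),
\qquad
\Lambda(t,s,z)=\Phi(t,s,F(z))=(t,F(z)+s\,e_1),
\]
\[
\partial_s \widetilde y_2(t,s,z)
=\nabla y_2(\Lambda(t,s,z))\cdot e_1
=(g_{21}\cdot\nabla y_2)\circ\Lambda(t,s,z),
\]
% so in the straightened variables the coupling operator is exactly \partial_s,
% which plays the role of \partial_{x_1} in \eqref{imp}.
```

In general the flow lines are curved, but the same chain rule computation yields \eqref{imp}.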
Let us now perform a second useful reduction. \begin{Lemme}\label{reduc2} There exist an open subset $\mc{O}_T$ of $U_{\varepsilon}$ and
a function $\theta\in \mathcal C^{N^2+4}(\overline{Q}_T)$ such that $|\theta(t,x)|\geqslant C$ for some constant $C>0$ and if $$\overline y_1(t,x):=\theta^{-1}(t,x) \widetilde y_1(t,x)$$ and $$\overline y_2(t,x):=\theta^{-1}(t,x) \widetilde y_2(t,x),$$ then there exist coefficients $\overline a_{22}\in \mathcal C^{N^2+2}(\mc{O}_T)$ and $\overline g_{22}\in \mathcal C^{N^2+3}(\mc{O}_T)^N$ such that locally on $\mathcal O_T$ one has
\begin{gather}\label{imp2} \partial_t\overline y_2=\Div (\widetilde d_2\nabla \overline y_2)+\partial_{x_1}\overline y_1+\overline g_{22}\cdot\nabla \overline y_2 +\overline a_{22} \overline y_2 \mbox{ in } {\mathcal O_T}. \end{gather} \end{Lemme}
\textbf{Proof of Lemma~\ref{reduc2}}\\
Let us consider some $\theta\in \mathcal C^{N^2+4}(\overline{Q}_T)$ such that $|\theta(t,x)|\geqslant C$ for some constant $C>0$, and consider the change of unknowns $$\left\{\begin{array}{l} \overline y_1(t,x):=\theta^{-1}(t,x) \widetilde y_1(t,x),\\\noalign{
} \overline y_2(t,x):=\theta^{-1}(t,x) \widetilde y_2(t,x).\end{array}\right.$$ Using equation \eqref{imp}, we infer that $\overline y_2$ verifies $$\partial_t\overline y_2=\Div (\widetilde{d}_2\nabla \overline y_2)+\overline g_{22}\cdot\nabla \overline y_2+\overline a_{22} \overline y_2 +\partial_{x_1} \overline y_1 +\theta^{-1}(\partial_{x_1}\theta+\widetilde{a}_{21}\theta)\overline y_1,$$ where $\overline g_{22}:=2\theta^{-1}\widetilde{d}_2\nabla\theta+\widetilde{g}_{22}$ and $\overline a_{22}:=\theta^{-1}\Div(\widetilde{d}_2\nabla\theta)+\theta^{-1}\widetilde{g}_{22}\nabla\theta+\widetilde{a}_{22}$.
Hence, if we choose $\theta\in \mathcal C^{N^2+4}(\overline{Q}_T)$ satisfying $\partial_{x_1}\theta+\widetilde{a}_{21}\theta=0$ and $|\theta(t,x)|\geqslant C$ in $Q_T$, which is always possible, then $\overline y_1$ and $\overline y_2$ satisfy \eqref{imp2} and we have $\overline a_{22}\in \mathcal C^{N^2+2}(\mc{O}_T)$ and $\overline g_{22}\in \mathcal C^{N^2+3}(\mc{O}_T)^N$.
\cqfd
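In the local coordinates of Lemma \ref{reduc1}, where $x_1$ ranges over $(0,\varepsilon)$, one admissible choice of $\theta$ (a sketch only; any smooth nonvanishing solution of the transport equation works) is the explicit exponential:

```latex
% Explicit choice solving \partial_{x_1}\theta + \widetilde a_{21}\theta = 0,
% written with x' := (x_2,...,x_N):
\[
\theta(t,x_1,x')
:=\exp\Big(-\int_0^{x_1}\widetilde a_{21}(t,\xi,x')\,d\xi\Big).
\]
% Since x_1 \in (0,\varepsilon), this function satisfies
%   |\theta(t,x)| \geqslant \exp(-\varepsilon\,\|\widetilde a_{21}\|_{\infty}) =: C > 0,
% and it inherits its regularity from \widetilde a_{21},
% up to a suitable extension to the whole domain.
```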
\section{Proof of Theorem \ref{theo:positif}} During all this Section, we will always assume that Assumptions \ref{MainAss} and \ref{MainAss2} are satisfied. \subsection{Strategy : Fictitious control method }\label{section strategy}
The fictitious control method has already been used for instance in \cite{gonzalezperez2006}, \cite{coronlissy2014}, \cite{ACO}, \cite{CG16} and \cite{ML15}. Roughly, the method is the following: we first control the equations with two controls (one on each equation) and we try to eliminate the control on the last equation thanks to algebraic manipulations locally on the control domain. For more details, see for example \cite[Section 1.3]{ML15}. Let us be more precise and decompose the problem into three different steps:
\vspace*{0.2cm}
\begin{itemize}
\item[(i)] \textbf{Analytic Problem: Null controllability by two forces}\\ Find a solution $(\widehat{y},\widehat{u})$ in an appropriate space
to the control problem by two controls \begin{equation}\label{strat:syst lin pb ana}
\left\{\begin{array}{ll} \partial_t\widehat{y}_1=\Div (d_1\nabla \widehat{y}_1)+g_{11}\cdot\nabla \widehat{y}_1+g_{12}\cdot\nabla \widehat{y}_2+a_{11}\widehat{y}_1+a_{12}\widehat{y}_2+ \widehat{u}_1&\mbox{in }Q_T,\\\noalign{
} \partial_t\widehat{y}_2=\Div (d_2\nabla \widehat{y}_2)+g_{21}\cdot\nabla \widehat{y}_1+g_{22}\cdot\nabla \widehat{y}_2+a_{21}\widehat{y}_1+a_{22}\widehat{y}_2+\widehat{u}_2&\mbox{in } Q_T,\\\noalign{
} \widehat{y}=0&\mbox{on }\Sigma_T,\\\noalign{
} \widehat{y}(0,\cdot)=y^0,~\widehat{y}(T,\cdot)=0&\mbox{in }\Omega,
\end{array} \right. \end{equation} where the controls $\widehat{u}_1$ and $\widehat{u}_2$ are regular enough and with support strongly included in $\omega_T$ (recall that $\omega_T$ was introduced in Condition \ref{cond:modul}).
Solving Problem \eqref{strat:syst lin pb ana} is easier than solving the null controllability on the time interval $(0,T)$ of System \eqref{system primmal}, because we control System \eqref{strat:syst lin pb ana} with one control on each equation. The important point is that the control has to be regular enough, so that it can be differentiated a certain number of times with respect to the space and/or time variables (see the next section about the algebraic resolution).
\begin{prop}\label{prop:contr regul} Let $k\in\mathbb{N}^*$. Suppose that $ d_i^{kl},~g_{ij}^k\in \mc{C}^{k+2}(\overline{\omega}_T)$ and $a_{ij}\in \mc{C}^{k+1}(\overline{\omega}_T)$ for every $i,j\in\{1,2\}$ and $k,l\in\{1,...,N\}$. Then there exists a constant $C_k>0$ such that for every initial condition $y^0\in L^2(\Omega)^2$ one can find a control $u\in \mathcal{C}^k(Q_T)^2$ satisfying $\Supp(u)\subset \subset \omega_T$
for which the solution to System \eqref{strat:syst lin pb ana} is equal to zero at time $T$ and the following estimate holds: \begin{equation}\label{intro:exp control}
\|u\|_{\mathcal{C}^k(Q_T)^2}\leqslant C_k\|y^0\|_{L^2(\Omega)^2}. \end{equation}
\end{prop}
The controllability of parabolic systems with regular controls is nowadays well-known. For a proof of Proposition \ref{prop:contr regul}, one can adapt the strategy developed in \cite{gonz_perez_insensit04,perez_gonz_insent04,bodart_burgos_perez_local_04, gonzalezperez2006} where the authors prove the controllability of parabolic systems with $L^{\infty}$ controls thanks to the fictitious control method and the local regularity of parabolic equations. For more details, we refer to \cite[Chap. I, Sec. 2.4]{these_michel}. It is also possible to use Carleman estimates (see for instance \cite{MR1751309} and \cite[Section 2.3]{ML15}); however, this would require the coefficients of System \eqref{strat:syst lin pb ana} to be regular on the whole of $Q_T$ (and would require higher regularity on $\Omega$).
\item[(ii)] \textbf{Algebraic Problem: Null controllability by one force}\\ For given $\widehat{u}_1,\widehat{u}_2$ with $\Supp(\widehat{u}_1,\widehat{u}_2)$ strictly included in $\omega_T$, find $(z,v)$, in an appropriate space, satisfying the following control problem: \begin{equation}\label{strat:probleme ramene a tout l espace}
\left\{\begin{array}{ll} \partial_tz_1=\Div (d_1\nabla z_1)+g_{11}\cdot\nabla z_1+g_{12}\cdot\nabla z_2+a_{11}z_1+a_{12}z_2+\widehat{u}_1+v&\mbox{in }\omega_T,\\\noalign{
} \partial_tz_2=\Div (d_2\nabla z_2)+\partial_{x_1} z_1+g_{22}\cdot\nabla z_2+a_{22}z_2+\widehat{u}_2&\mbox{in } \omega_T,
\end{array} \right. \end{equation} with $\Supp(z,v)$ strictly included in $\omega_T$, which imposes the initial and final data and the boundary conditions. We recall that $g_{21}\cdot\nabla+a_{21}$ is equal to $\partial_{x_1}$ in $\omega_T$.
We will solve this problem using the notion of \emph{algebraic resolvability} of differential systems, which is based on ideas coming from \cite[Section 2.3.8]{Gromovbook} and was already used in some different contexts in \cite{coronlissy2014}, \cite{ACO}, \cite{ML15} or \cite{CG16}. The idea is to write System \eqref{strat:probleme ramene a tout l espace} as an \emph{underdetermined}
system in the variables $z$ and $v$ and to see $\widehat{u}$ as a source term. More precisely, we remark that System \eqref{strat:probleme ramene a tout l espace} can be rewritten as
\begin{equation}\label{def L Strategy 2}
\mathcal{L}(z,v)=f,
\end{equation}
where $f:=\widehat{u}$ and
\begin{equation*} \mathcal{L}(z,v):=
\left(\begin{array}{c} \partial_tz_1-\Div (d_1\nabla z_1)-g_{11}\cdot\nabla z_1-g_{12}\cdot\nabla z_2-a_{11}z_1-a_{12}z_2-v\\\noalign{
} \partial_tz_2-\Div (d_2\nabla z_2)-\partial_{x_1} z_1-g_{22}\cdot\nabla z_2-a_{22}z_2
\end{array} \right). \end{equation*} The goal in Section \ref{sec:resol alg} will then be to find a partial differential operator $\mc{M}$ satisfying \begin{gather} \label{strat:LMB} \mc{L}\circ\mc{M}=Id\mbox{ in }\omega_T. \end{gather} Thus, to solve the control problem \eqref{strat:probleme ramene a tout l espace}, it suffices to take \begin{equation*} (z,v):=\mathcal{M}(f). \end{equation*} When \eqref{strat:LMB} is satisfied, we say that System \eqref{def L Strategy 2} is \emph{algebraically solvable}.
\item[(iii)] \textbf{Conclusion}\\ If we are able to solve the analytic and algebraic problems, then it is easy to check that $(y,u):=(\widehat{y}-z,-v)$ will be a solution to System (\ref{system primmal}) in an appropriate space and will satisfy $y(T,\cdot)\equiv0$ in $\Omega$ (for more explanations, see \cite[Prop. 1]{coronlissy2014} and the proof of Theorem \ref{theo:positif} below).
\end{itemize}
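The subtraction argument in step (iii) can be written out in one line: extending $(z,v)$ by zero outside $\omega_T$ (licit since $\Supp(z,v)\subset\subset\omega_T$) and subtracting the system satisfied by $z$ from the one satisfied by $\widehat y$, the fictitious control on the second equation cancels:

```latex
% Sketch of the cancellation (coupling and lower order terms abbreviated by dots).
\[
\partial_t(\widehat y_1-z_1)
=\Div\big(d_1\nabla(\widehat y_1-z_1)\big)+\dots+\widehat u_1-(\widehat u_1+v)
=\dots-v,
\]
\[
\partial_t(\widehat y_2-z_2)
=\Div\big(d_2\nabla(\widehat y_2-z_2)\big)+\dots+\widehat u_2-\widehat u_2
=\dots,
\]
% so y := \widehat y - z solves System \eqref{system primmal} with the single
% control u := -v supported in \omega_T, and
% y(T,\cdot) = \widehat y(T,\cdot) - z(T,\cdot) = 0 in \Omega.
```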
\subsection{Algebraic solvability of the linear control problem}\label{sec:resol alg}
The goal of this section is to solve the algebraic problem \eqref{def L Strategy 2}. We will use the following lemma:
\begin{Lemme}\label{lemme:resol alg} Let $\omega$ be a nonempty open subset of $\mathbb{R}^n$ \emph{(}$n\geqslant 1$\emph{)} and let $R\in\mathbb N^*$. Consider two differential operators $\mathcal L_1$ and $\mathcal L_2$ defined for every $\varphi\in \mathcal C^{\infty}(\overline{\omega})$ by \begin{equation*}
\mathcal{L}_1\varphi:=\partial_{x_1}\varphi
\mbox{ and }
\mathcal{L}_2\varphi:=a_0\varphi+\sum\limits_{i=1}^R a_iD^{\alpha_i}\varphi, \end{equation*} where, for $\alpha_i=(\alpha_i^2,...,\alpha_i^n)$, $D^{\alpha_i}:=\partial_{x_2}^{\alpha_i^2}\cdots\partial_{x_n}^{\alpha_i^n}$. If $a_i\in\mc{C}^M(\overline{\omega})$ for every $i\in\{0,...,R\}$ where $$M:=\sum\limits_{j=1}^R\beta_j\mbox{ with }\beta_j\mbox{ the order of the operator }\sum\limits_{i=j}^Ra_iD^{\alpha_i}$$ and $a_0$ is not an element of the $\mc{C}^0_{x_2,...,x_n}(\overline{\widetilde\omega})$-module \begin{equation}\label{module} \begin{array}{l} \left\langle a_1,...,a_R\right\rangle_{\mc{C}^0_{x_2,...,x_n}(\overline{\widetilde\omega})}, \end{array} \end{equation} for a nonempty open subset $\widetilde\omega$ of $\omega$, then there exists two differential operators $\mathcal M_1$ and $\mathcal M_2$ such that \begin{equation}\label{M1M2} \mathcal M_1\circ \mathcal L_1+\mathcal M_2\circ \mathcal L_2=Id\mbox{ in }\mathcal C^{\infty}(\overline{\widetilde\omega}). \end{equation} \end{Lemme}
\textbf{Proof of Lemma \ref{lemme:resol alg}}\\ The goal is to apply some differential operators $\mathcal M_1$ and $\mathcal M_2$ to $\mathcal{L}_1\varphi$ and $ \mathcal{L}_2\varphi$ in order to obtain $\varphi$. Since $\varphi$ itself does not appear in $\mathcal{L}_1\varphi$, we would like to eliminate all the derivatives $D^{\alpha_i}\varphi$ in the expression of $\mathcal L_2\varphi$ by differentiations and linear combinations.
If $a_0\not =0$ and $a_i=0$ in $\omega$ for every $i\in \{1,\ldots,R\}$, we define $$\mathcal N:=\mathcal L_2.$$
If not, let $k_1$ be the smallest index in $\{1,\ldots,R\}$ such that there exists a nonempty open subset $\omega_1$ of $\omega$ where $|a_{k_1}|\geqslant\delta$ for some $\delta>0$.
Then we consider $\mc{L}_3$ the commutator of $\mc{L}_1$ and $a_{k_1}^{-1}\mc{L}_2$: \begin{equation*} \mc{L}_3\varphi:=[\mc{L}_1,a_{k_1}^{-1}\mc{L}_2]\varphi =\partial_{x_1}\left(\frac{a_0}{a_{k_1}}\right)\varphi+\sum\limits_{i={k_1}+1}^R \partial_{x_1}\left(\frac{a_i}{a_{k_1}}\right)D^{\alpha_i}\varphi. \end{equation*} Again, if for every $i\in \{k_1+1,....,R\}$, we have $\partial_{x_1}\left(\frac{a_i}{a_{k_1}}\right)=0$ in $\omega$, we define $$\mathcal N:=\mathcal L_3.$$ If not, let $k_2$ be the smallest number of $\{k_1+1,....,R\}$ such that there exists a nonempty open subset $\omega_2$ of $\omega_1$
where $|\partial_{x_1}\left(\frac{a_{k_2}}{a_{k_1}}\right)|>\delta>0$. Then we consider $\mc{L}_4$ the commutator of $\mc{L}_1$ and $\left[\partial_{x_1}\left(\frac{a_{k_2}}{a_{k_1}}\right)\right]^{-1}\mc{L}_3$: \begin{equation*} \mc{L}_4\varphi:=[\mc{L}_1,\left[\partial_{x_1}\left(\frac{a_{k_2}}{a_{k_1}}\right)\right]^{-1}\mc{L}_3]\varphi =\partial_{x_1}\left(\frac{\partial_{x_1}\left(\frac{a_0}{a_{k_1}}\right)}{\partial_{x_1}\left(\frac{a_{k_2}}{a_{k_1}}\right)}\right)\varphi +\sum\limits_{i=k_2+1}^R \partial_{x_1}\left(\frac{\partial_{x_1}\left(\frac{a_i}{a_{k_1}}\right)}{\partial_{x_1}\left(\frac{a_{k_2}}{a_{k_1}}\right)}\right)D^{\alpha_i}\varphi. \end{equation*}
Again, if, for every $i\in \{k_2+1,....,R\}$, we have $ \partial_{x_1}\left(\frac{\partial_{x_1}\left(\frac{a_i}{a_{k_1}}\right)}{\partial_{x_1}\left(\frac{a_{k_2}}{a_{k_1}}\right)}\right)=0$ in $\omega_2$, we define $$\mathcal N:=\mathcal L_4.$$ If not, we continue the same reasoning that will stop at some point since there is only a finite order of derivatives $R$. Hence, we obtain, for a $m\in\{1,...,R\}$, a nonempty open subset $\widetilde\omega$ of $\omega$ and an operator \begin{equation}\label{expression N} \mathcal N\varphi :=\mathcal L_{m+2}\varphi =\partial_{x_1}\left(\frac{\partial_{x_1}\left(\cdots\frac{\partial_{x_1}\left(\frac{a_{0}}{a_{k_1}}\right)} {\vdots}\right)} {\partial_{x_1}\left(\cdots\frac{\partial_{x_1}\left(\frac{a_{k_m}}{a_{k_1}}\right)} {\vdots}\right)}\right) \varphi\mbox{ in }\widetilde\omega. \end{equation} Moreover, $\mathcal N$ is obtained by making iterated commutators of operators involving only $\mathcal L_1$ and $\mathcal L_2$. Hence it is clear that there exists two linear partial differential operators $\widetilde {\mathcal M}_1$ and $\widetilde {\mathcal M}_2$ such that $$\mathcal N=\widetilde {\mathcal M}_1\mathcal L_1+\widetilde {\mathcal M}_2\mathcal L_2.$$
Hence, in view of \eqref{expression N}, we will have the desired conclusion as soon as the coefficient in the right-hand side in \eqref{expression N} is different from zero. Let us explain into more details what this condition exactly means. For the sake of clarity, let us assume that $m=3$ (but the following reasoning can be extended to any $m\in \{1,\ldots, R\}$). We remark that \begin{equation}\label{express0} \partial_{x_1}\left(\frac{\partial_{x_1}\left(\frac{\partial_{x_1}\left(\frac{a_{0}}{a_{k_1}}\right)} {\partial_{x_1}\left(\frac{a_{k_2}}{a_{k_1}}\right)}\right)} {\partial_{x_1}\left(\frac{\partial_{x_1}\left(\frac{a_{k_3}}{a_{k_1}}\right)} {\partial_{x_1}\left(\frac{a_{k_2}}{a_{k_1}}\right)}\right)}\right) = 0 \end{equation} holds only if, for some $\lambda_3\in \mathcal C^0_{x_2,...,x_n}(\overline{\widetilde\omega})$, we have \begin{equation*} \frac{\partial_{x_1}\left(\frac{\partial_{x_1}\left(\frac{a_{0}}{a_{k_1}}\right)} {\partial_{x_1}\left(\frac{a_{k_2}}{a_{k_1}}\right)}\right)} {\partial_{x_1}\left(\frac{\partial_{x_1}\left(\frac{a_{k_3}}{a_{k_1}}\right)} {\partial_{x_1}\left(\frac{a_{k_2}}{a_{k_1}}\right)}\right)} = \lambda_3. \end{equation*} The last expression can be rewritten as \begin{equation}\label{expres1} \partial_{x_1}\left(\frac{\partial_{x_1}\left(\frac{a_{0}-\lambda_3a_{k_3}}{a_{k_1}}\right)} {\partial_{x_1}\left(\frac{a_{k_2}}{a_{k_1}}\right)}\right) =0. \end{equation} Again, \eqref{expres1} holds only if, for some $\lambda_2\in \mathcal C^0_{x_2,...,x_n}(\overline{\widetilde\omega})$, we have \begin{equation*} \frac{\partial_{x_1}\left(\frac{a_{0}-\lambda_3a_{k_3}}{a_{k_1}}\right)} {\partial_{x_1}\left(\frac{a_{k_2}}{a_{k_1}}\right)} =\lambda_2, \end{equation*} or, equivalently, \begin{equation*} \partial_{x_1}\left(\frac{a_{0}-\lambda_3a_{k_3}-\lambda_2a_{k_2}}{a_{k_1}}\right) =0. 
\end{equation*} Thus \eqref{express0} is satisfied only if, for some $\lambda_1,~\lambda_2,~\lambda_3\in \mathcal C^0_{x_2,...,x_n}(\overline{\widetilde\omega})$, we have \begin{equation*} a_{0}=\lambda_3a_{k_3}+\lambda_2a_{k_2}+\lambda_1a_{k_1}. \end{equation*} Hence, we recover Condition \ref{cond:modul} and the proof of Lemma \ref{lemme:resol alg} is complete.
\cqfd
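To see the mechanism at work on a minimal example (with $n=2$ and $R=1$), take $\mathcal L_1\varphi=\partial_{x_1}\varphi$ and $\mathcal L_2\varphi=a_0\varphi+a_1\partial_{x_2}\varphi$ with $|a_1|\geqslant\delta>0$. A single commutator already eliminates the derivative:

```latex
\[
[\mathcal L_1,a_1^{-1}\mathcal L_2]\varphi
=\partial_{x_1}\Big(\tfrac{a_0}{a_1}\varphi+\partial_{x_2}\varphi\Big)
-\Big(\tfrac{a_0}{a_1}\partial_{x_1}\varphi+\partial_{x_2}\partial_{x_1}\varphi\Big)
=\partial_{x_1}\Big(\tfrac{a_0}{a_1}\Big)\varphi.
\]
% If c := \partial_{x_1}(a_0/a_1) does not vanish on some nonempty open
% \widetilde\omega (which corresponds to a_0 \notin \langle a_1\rangle in the
% sense of \eqref{module}), then the operators
%   \mathcal M_2 := c^{-1}\,\partial_{x_1}\circ(a_1^{-1}\,\cdot\,),
%   \mathcal M_1 := -c^{-1}\big(\tfrac{a_0}{a_1}+\partial_{x_2}\big)
% satisfy \mathcal M_1\circ\mathcal L_1+\mathcal M_2\circ\mathcal L_2 = Id
% on \widetilde\omega, as in \eqref{M1M2}.
```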
We are now able to prove the algebraic solvability of \eqref{def L Strategy 2}. \begin{prop}\label{prop:resol alg}
Suppose that $d_i^{kl},~ g_{ij}^k,~a_{ij}\in \mc{C}^{N^2}(\overline{\omega}_T)$ for every $i,j\in\{1,2\}$ and $k,l\in\{1,...,N\}$.
Then, under Condition \ref{cond:modul}, System \eqref{def L Strategy 2} is algebraically solvable with an operator $\mc M$ of order $N^2$. \end{prop}
\textbf{Proof of Proposition \ref{prop:resol alg}}\\ Let us remark that the first equation of System \eqref{def L Strategy 2} can be rewritten locally on $\omega_T$ as $$v=\partial_tz_1-\Div (d_1\nabla z_1)-g_{11}\cdot\nabla z_1-g_{12}\cdot\nabla z_2-a_{11} z_1-a_{12} z_2-f_1,$$ hence one can always first solve algebraically the second equation of System \eqref{def L Strategy 2};
$v$ is then given in terms of $z_1$, $z_2$ and $f_1$. Hence, solving \eqref{def L Strategy 2} is equivalent to solving \begin{equation*} \mc{L}_0z=f_2, \end{equation*} where \begin{gather}\label{def L02}
\mc{L}_0z:=
\partial_tz_2-\Div (d_2\nabla z_2)
-\partial_{x_1} z_1
- g_{22}\cdot\nabla z_2- a_{22}z_2\mbox{ in } \omega_T.
\end{gather} Hence, finding a differential operator $\mc{M}$ such that \eqref{strat:LMB} is satisfied is now equivalent to finding a differential operator $\mc{M}_0$ such that \begin{equation}\label{L0M0}
\mc{L}_0\circ\mc{M}_0=Id. \end{equation}
We can remark that equality \eqref{L0M0} is formally equivalent to \begin{equation}\label{M*L*} \mc{M}_0^*\circ \mc{L}_0^*=Id, \end{equation} where the formal adjoint $ \mc{L}_0^*$ of the operator $ \mc{L}_0$ is given for every $\varphi\in \mc{C}^{\infty}(\overline{\omega}_T)$ by \begin{equation}\label{def L*} \begin{array}{rcl}
\mc{L}^*_0\varphi&:=& \left(\begin{array}{c}
{\mc{L}}_1\varphi\\\noalign{
}
{\mc{L}}_{2}\varphi
\end{array}\right) = \left(\begin{array}{c}\partial_{x_1}\varphi \\\noalign{
} -\partial_t\varphi-\Div ( d_2\nabla \varphi) +\Div (g_{22} \varphi) - a_{22}\varphi
\end{array}\right). \end{array}\end{equation} Operator $\mc{L}_2$ can be rewritten as \begin{equation*} \begin{array}{rcl}
{\mc{L}}_{2}\varphi =-\partial_t\varphi-\sum\limits_{i,j=1}^Nd_2^{ij}\partial_{x_ix_j}\varphi + \sum\limits_{i=1}^N\widetilde{g}_{22}^i\partial_{x_i}\varphi +\widetilde{a}_{22}\varphi, \end{array}\end{equation*} where $\widetilde{g}_{22}^i$ and $\widetilde{a}_{22}$ are given in \eqref{resol alg:def g22 tilde}. Let us first consider the following linear combination of $\mc{L}_1$ and $\mc{L}_2$: \begin{equation*} \begin{array}{rcl}
{\mc{L}}_{3}\varphi
&=& {\mc{L}}_{2}\varphi
-[-d_2^{11}\partial_{x_1}-2\sum\limits_{i=2}^Nd_2^{i1}\partial_{x_i}
+ \widetilde{g}_{22}^1]\mc{L}_1\varphi\\ &=&-\partial_t\varphi-\sum\limits_{i,j=2}^Nd_2^{ij}\partial_{x_ix_j}\varphi + \sum\limits_{i=2}^N\widetilde{g}_{22}^i\partial_{x_i}\varphi +\widetilde{a}_{22}\varphi. \end{array}\end{equation*} Lemma \ref{lemme:resol alg} then leads to the algebraic resolvability of System \eqref{def L Strategy 2} under Condition \ref{cond:modul}.
Concerning the order of $\mathcal M$,
if we follow the proof of Lemma \ref{lemme:resol alg} step by step, we apply at most $N\times(N-1)/2$ operators of order two to eliminate the terms $d_2^{ij}\partial_{x_ix_j}$ with $i,j\in\{2,...,N\}$ (thanks to the symmetry property of $d_2$), then at most $N-1$ operators of order one for the term $\widetilde{g}_{22}^i\partial_{x_i}$ with $i\in\{2,...,N\}$ and finally an operator of order at most one for $\partial_t$. Thus the operator $\mathcal M$ is of order at most $N\times(N-1)+(N-1)+1=N^2.$ \cqfd
We are now ready for the proof of Theorem \ref{theo:positif}.
\textbf{Proof of Theorem \ref{theo:positif}.}\\ We apply Proposition \ref{prop:contr regul} with $k=N^2+1$ and obtain the existence of a constant $C>0$ such that for every initial condition $y^0\in L^2(\Omega)^2$ one can find a control $\widehat u\in \mathcal{C}^{N^2+1}(\overline{Q_T})$ satisfying $\Supp(\widehat{u})\subset \subset \omega_T$ for which the solution $\widehat y$ to System \eqref{strat:syst lin pb ana} vanishes at time $T$ and the following estimate holds: \begin{equation}\label{regexp}
\|\widehat u\|_{\mathcal{C}^{N^2+1}(Q_T)^2}\leqslant C\|y^0\|_{L^2(\Omega)^2}. \end{equation}
Now, using Proposition \ref{prop:resol alg}, locally on $\omega_T$ there exists a solution $(z,v)\in \mc{C}^1(\overline{Q}_T)^3\subset W(0,T)^2\times L^2(Q_T)$ to the following control problem: \begin{equation*}
\left\{\begin{array}{ll} \partial_tz_1=\Div (d_1\nabla z_1)+g_{11}\cdot\nabla z_1+g_{12}\cdot\nabla z_2+a_{11}z_1+a_{12}z_2+\widehat{u}_1+v&\mbox{in }\omega_T,\\\noalign{
} \partial_tz_2=\Div (d_2\nabla z_2)+\partial_{x_1} z_1+g_{22}\cdot\nabla z_2+a_{22}z_2+\widehat{u}_2&\mbox{in } \omega_T,
\end{array} \right. \end{equation*} with $(\widehat u_1,\widehat u_2):=\widehat u$. Moreover, since $\Supp(z)\subset \subset \omega_T$, we have $z(0,\cdot)=z(T,\cdot)=0$ in $\Omega$.
We conclude by remarking that $(y,u):=(\widehat{y}-z,-v)$ is a solution to System \eqref{system primmal} which satisfies $y(T,\cdot)\equiv0$ in $\Omega$.
\cqfd
\section{Proof of Theorem \ref{theo:negatif2}}
\hspace*{4mm}
Let $\omega_1$ be a nonempty regular open set satisfying $\omega\subset\subset \omega_1\subset\subset\Omega$. Let $\theta$ be a function of $\mathcal{C}^{\infty}(\overline{\Omega})$ satisfying \begin{equation*}\left\{\begin{array}{l} \theta=1\mbox{ in }\omega_0,\\\noalign{
} \Supp(\theta)\subset\omega_1,\\\noalign{
} \theta>0\mbox{ in } \omega_1. \end{array}\right. \end{equation*} Consider the following system \begin{equation}\label{syst simplp} \left\{\begin{array}{ll} \partial_ty_1=\Delta y_1+\mathds{1}_{\omega}u&\mbox{in } Q_T,\\\noalign{
} \partial_ty_2=\Delta y_2+ay_2+\partial_{x_1}(\theta y_1)&\mbox{in } Q_T,\\\noalign{
} y=0&\mbox{on } \partial\Omega,\\\noalign{
} y(0,\cdot)=y^0&\mbox{in }\Omega, \end{array}\right. \end{equation} where $u\in L^2(Q_T)$ is the control and $a\in L^{\infty}(\Omega)$ will be specified later. If we can approximately control System \eqref{syst simplp}, then we are also able to approximately control the following equation: \begin{equation}\label{syst primal} \left\{\begin{array}{ll} \partial_tz=\Delta z+az+\partial_{x_1}(\theta v)&\mbox{in } Q_T,\\\noalign{
} z=0&\mbox{on } \partial\Omega,\\\noalign{
} z(0,\cdot)=y^0_2&\mbox{in }\Omega, \end{array}\right. \end{equation} where $v\in L^2((0,T),H^1(\Omega))$ is the control. Since $\theta>0$ on $\omega_1$, the approximate controllability on the time interval $(0,T)$ of System \eqref{syst primal} is equivalent to the following property, called the Fattorini criterion (see \cite[Theorem 1 \& Section 3]{olive_bound_appr_2014}): \begin{theo}\label{theo:fattorini2} System \eqref{syst primal} is approximately controllable on the time interval $(0,T)$, if and only if for every $s\in\mathbb{C}$ and every $\varphi \in D (\Delta )$, we have \begin{equation*} \left. \begin{array}{ll} -\Delta \varphi-a\varphi = s\varphi &\mbox{ in } \Omega\\\noalign{
} \partial_{x_1} \varphi= 0&\mbox{ in }\omega_1 \end{array}\right\} \Rightarrow \varphi = 0. \end{equation*} \end{theo} Since $\omega_1\subset\subset\Omega$, there exists an open set $\omega_2$ such that $\omega_1\subset\subset \omega_2\subset\subset \Omega$. The first eigenfunction $\varphi_1$ of $-\Delta$ is well-known to be positive in $\Omega$, so we can define a function $\varphi\in \mathcal{C}^{\infty}(\overline\Omega)$ satisfying \begin{equation*} \left\{\begin{array}{ll} \varphi =\varphi_1 &\mbox{ in } \Omega\backslash \omega_2,\\\noalign{
} \varphi = 1&\mbox{ in }\omega_1,\\\noalign{
} \varphi>\delta>0&\mbox{ in }\omega_2. \end{array}\right. \end{equation*} For instance, if $\Omega:=(0,\pi)$ and $\omega_1:=(2\pi/5,3\pi/5)$, as in Figure 1, we may construct a function $\varphi\in \mathcal{C}^2([0,\pi])$ satisfying \begin{equation*} \left\{\begin{array}{ll} \varphi(x) = \sin(x) &\mbox{ for every } x\in [0,\pi/5]\cup[4\pi/5,\pi ],\\\noalign{
} \varphi(x) = 1&\mbox{ for every }x\in [2\pi/5,3\pi/5],\\\noalign{
} \varphi>\delta>0&\mbox{ in }[\pi/5,4\pi/5]. \end{array}\right. \end{equation*} \begin{figure}
\caption{Example of function $\varphi$ on $[0,\pi]$}
\end{figure} Consider \begin{equation*} a:=\dfrac{-\Delta \varphi-\varphi}{\varphi}. \end{equation*} Thanks to the definition of $\varphi$, the function $a$ is well defined in $\overline{\Omega}$ and is an element of $\mc{C}^{\infty}(\overline{\Omega})$.
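For instance, on the regions where $\varphi$ is prescribed explicitly, the quotient can be computed directly. Writing $\lambda_1$ for the first eigenvalue of $-\Delta$ (so that in the one-dimensional example above $\varphi_1(x)=\sin(x)$ and $\lambda_1=1$), we obtain \begin{equation*} a=\dfrac{-\Delta \varphi_1-\varphi_1}{\varphi_1}=\lambda_1-1=0 \quad\mbox{ in } \Omega\backslash\omega_2, \qquad a=\dfrac{-\Delta 1-1}{1}=-1\quad\mbox{ in }\omega_1. \end{equation*}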
Thus $\varphi$ satisfies \begin{equation*} \left\{ \begin{array}{ll} -\Delta \varphi-a\varphi = \varphi &\mbox{ in } \Omega,\\\noalign{
} \partial_{x_1} \varphi = 0&\mbox{ in }\omega_1,\\\noalign{
} \varphi\neq0. \end{array}\right. \end{equation*} Using Theorem \ref{theo:fattorini2}, System \eqref{syst primal} is not approximately controllable on the time interval $(0,T)$.
\cqfd \begin{rem} Let us emphasize that in this case, as expected, Condition \ref{cond:modul} is not verified: on $\omega$ we have by definition $a_{22}=-1$, $g_{22}=0$ and $d^{ii}_2=0$ for every $i\in\{2,\ldots,N\}$, which implies that $\widetilde{a}_{22}=-1$ on $\omega$ and $\widetilde{g}_{22}=0$, hence \begin{equation*} \left\{\begin{array}{l} \widetilde{a}_{22}\mbox{ is an element of the } \mc{C}^0_{t,x_2,...,x_N}(\overline{\omega}_T)\mbox{-module }\\\noalign{
} \left\langle 1,\widetilde{g}_{22}^2,...,\widetilde{g}_{22}^N,d_2^{22},...,d_2^{NN}\right\rangle_{\mc{C}^0_{t,x_2,...,x_N}(\overline{\omega}_T)}. \end{array}\right. \end{equation*} This will also be the case for the potential constructed in the first part of the proof of Theorem \ref{theo:negatif}. \end{rem}
\section{Proof of Theorem \ref{theo:negatif}}
\hspace*{4mm}
Let $\Omega:=(0,\pi)$ and $\omega:=(7\pi/15,8\pi/15)$. Consider the following system \begin{equation}\label{syst simpl2} \left\{\begin{array}{ll} \partial_ty_1=\Delta y_1+\mathds{1}_{\omega}u&\mbox{in } Q_T,\\\noalign{
} \partial_ty_2=\Delta y_2+ay_2+\partial_{x} y_1&\mbox{in } Q_T,\\\noalign{
} y=0&\mbox{on } \Sigma_T,\\\noalign{
} y(0,\cdot)=y^0&\mbox{in }\Omega, \end{array}\right. \end{equation} where $u\in L^2(Q_T)$ is the control and $a\in \mc{C}^{\infty}(\overline\Omega)$ will be specified later.
As in the previous section, it is well-known that the approximate controllability on the time interval $(0,T)$ of System \eqref{syst simpl2} is equivalent to the following property:
\begin{theo}\label{theo:fattorini} System \eqref{syst simpl2} is approximately controllable on the time interval $(0,T)$, if and only if for every $s\in\mathbb{C}$ and every $(\varphi,\psi) \in D (\Delta )^2$, we have \begin{equation*} \left. \begin{array}{ll} -\Delta \varphi-\partial_{x} \psi = s\varphi &\mbox{ in } \Omega\\\noalign{
} -\Delta\psi -a\psi= s\psi&\mbox{ in } \Omega\\\noalign{
} \varphi= 0&\mbox{ in }\omega \end{array}\right\} \Rightarrow (\varphi,\psi) = (0,0). \end{equation*} \end{theo}
Let us construct three functions $\varphi$, $\psi$, $a\in\mc{C}^{\infty}(\overline{\Omega})$ satisfying \begin{equation}\label{cond:constr simpl} \left\{ \begin{array}{ll} -\Delta \varphi-\partial_{x} \psi = 9\varphi &\mbox{ in } \Omega,\\\noalign{
} -\Delta\psi -a\psi= 9\psi&\mbox{ in } \Omega,\\\noalign{
} \varphi(0)=\varphi(\pi)=\psi(0)=\psi(\pi)=0,&\\\noalign{
} \varphi= 0&\mbox{ in }\omega,\\\noalign{
} \varphi\neq 0,~\psi\neq0&\mbox{ in }\Omega. \end{array}\right. \end{equation} The idea will be to construct the function $\psi$ as a perturbation of $x\mapsto\sin(3x)$. Consider $\psi$ a function of $\mc{C}^{\infty}(\overline{\Omega})\cap D(\Delta)$ satisfying \begin{equation}\label{cond:constr simpl2} \left\{ \begin{array}{ll} \psi(x)=\sin(3x)+C_1\theta_1+C_2\theta_2+C_3\theta_3&\mbox{ for all }x\in\overline{\Omega},\\\noalign{
} \psi(x)=\sin(7\pi/5)&\mbox{ for all }x\in\overline{\omega},\\\noalign{
}
|\psi(x) - \sin(3x)|<\varepsilon &\mbox{ for all }x\in[6\pi/15,7\pi/15]\cup[8\pi/15,9\pi/15], \end{array}\right. \end{equation} where $\theta_1,~\theta_2,~\theta_3$ are three nontrivial functions of $\mc{C}^{\infty}(\overline{\Omega})$ satisfying \begin{equation} \left\{ \begin{array}{l} \Supp(\theta_1)\subset(\pi/12,\pi/6),\\\noalign{
} \Supp(\theta_2)\subset(9\pi/12,5\pi/6),\\\noalign{
} \Supp(\theta_3)\subset(5\pi/6,11\pi/12),\\\noalign{
} \theta_1,~\theta_2,~\theta_3\geqslant0\mbox{ in } \Omega, \end{array}\right. \end{equation} $\varepsilon>0$ small enough and $C_1,~C_2,~C_3$ are three positive constants to be determined (see Figure \ref{fig:contre exemple} for some examples of the function $\psi$). Let us remark that, for a constant $\alpha\in\mb{R}$ to be determined, the function $\varphi\in\mc{C}^{\infty}(\overline{\Omega})$ defined for all $x\in \overline{\Omega}$ by \begin{equation*} \begin{array}{rcl} \varphi(x)&:=&\alpha\sin(3x)-\frac13\displaystyle\int_0^x\sin(3(x-y))\partial_x\psi(y)dy \end{array} \end{equation*} is a solution to the first equation of \eqref{cond:constr simpl}. In order to apply Theorem \ref{theo:fattorini}, let us first prove that $C_1$ and $\alpha$ can be chosen such that $\varphi=0$ in $\omega$. Since $\psi=\sin(7\pi/5)$ in $\omega$, \begin{equation*} \begin{array}{rcl} \varphi(x) &=&\left[\alpha-\frac13\cos(7\pi/5)\sin(7\pi/5)-\displaystyle\int_0^{7\pi/15}\sin(3y)\psi(y)dy\right]\sin(3x)\\ &&\hspace*{2cm} +\left[\frac13\sin(7\pi/5)^2-\displaystyle\int_0^{7\pi/15}\cos(3y)\psi(y)dy\right]\cos(3x), \end{array} \end{equation*} for all $x\in\omega$. Since $\cos(3x)>0,~\sin(3x)>0$ for all $x$ in $(\pi/12,\pi/6)$ and \begin{equation*} \frac13\sin(7\pi/5)^2-\displaystyle\int_0^{7\pi/15}\cos(3y)\sin(3y)dy>0, \end{equation*} then, according to the last line of \eqref{cond:constr simpl2}, for $\varepsilon$ small enough, it is possible to choose $C_1>0$ in order to obtain \begin{equation*} \frac13\sin(7\pi/5)^2-\displaystyle\int_0^{7\pi/15}\cos(3y)\psi(y)dy=0. \end{equation*} Thus, for $\alpha$ given by \begin{equation*} \alpha:=\frac13\cos(7\pi/5)\sin(7\pi/5)+\displaystyle\int_0^{7\pi/15}\sin(3y)\psi(y)dy, \end{equation*} we obtain $\varphi=0$ in $\omega$. By definition of $\varphi$, we have $\varphi(0)=0$. Let us now prove that for some appropriate $C_2$ and $C_3$, we have $\varphi(\pi)=0$. We remark that \begin{equation*} \varphi(\pi)=\frac13\displaystyle\int_0^{\pi}\cos(3y)\psi(y)dy. 
\end{equation*} Let us distinguish two cases: \begin{enumerate} \item If \begin{equation}\label{quantity phi(pi)} \frac13\displaystyle\int_0^{2\pi/3}\cos(3y)\psi(y)dy+\frac13\displaystyle\int_{2\pi/3}^{\pi}\cos(3y)\sin(3y)dy \end{equation} is negative, then, using the fact that $\sin(3x),~\cos(3x)>0$ for all $x\in (9\pi/12,5\pi/6)$, one can choose $C_3:=0$ and find some $C_2>0$ such that $\varphi(\pi)=0$. \item If now the quantity \eqref{quantity phi(pi)} is positive, since $\sin(3x)>0$ and $\cos(3x)<0$ for all $x\in (5\pi/6,11\pi/12)$,
one can choose $C_2:=0$ and find some $C_3>0$ such that $\varphi(\pi)=0$. \end{enumerate} The function $\psi$ will have one of the two forms shown in Figure \ref{fig:contre exemple}. \begin{figure}
\caption{Examples of function $\psi$ on $[0,\pi]$}
\label{fig:contre exemple}
\end{figure}
To satisfy the second equality in \eqref{cond:constr simpl}, we define the function $a\in\mc{C}^{\infty}(\overline{\Omega})$ as follows \begin{equation}\label{def:a} a:=\dfrac{-\Delta \psi-9\psi}{\psi}. \end{equation} This function $a$ is bounded since at each point where $\psi$ is null, i.e. at $0$, $\pi/3$, $2\pi/3$ and $\pi$, there exists a neighbourhood in which $\psi(x)$ is equal to $\sin(3x)$. Thus the constructed $\varphi$, $\psi$ and $a$ verify \eqref{cond:constr simpl}. Using Theorem \ref{theo:fattorini}, System \eqref{syst simpl2} is not approximately controllable on the time interval $(0,T)$.
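The boundedness of $a$ near the zeros of $\psi$ can also be seen by a direct computation: in a neighbourhood of each of these zeros we have $\psi(x)=\sin(3x)$, whence the numerator of \eqref{def:a} vanishes identically there, \begin{equation*} -\Delta \psi(x)-9\psi(x)=9\sin(3x)-9\sin(3x)=0, \end{equation*} so that $a$ extends smoothly by $0$ across the zeros of $\psi$.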
Let us now prove the second item of Theorem \ref{theo:negatif}. We remark that it is possible to choose $\theta_1=\mbox{exp}$ in $\omega_1\subset(\pi/12,\pi/6)$ with $\omega_1$ small enough. Then, for all $x\in\omega_1$, $$a(x)=\dfrac{-10C_1\exp(x)}{\sin(3x)+C_1\exp(x)}.$$ Thus $a$ satisfies Condition \ref{cond:modul} for $\omega:=\omega_1$, that is, $a$ is non-constant in the space variable on $\omega_1$. We conclude by applying Theorem \ref{theo:positif} for $\omega:=\omega_1$.
\cqfd
\textbf{{\Large Funding}}
Pierre Lissy is partially supported by the project IFSMACS funded by the French Agence Nationale de la Recherche, 2015--2019 (Reference: ANR-15-CE40-0010).
\textbf{{\Large Conflict of Interest}}
The authors declare that they have no conflict of interest.
\end{document}
\begin{document}
\title{Computing Longest (Common) Lyndon Subsequences} \begin{abstract}
Given a string~$T$ with length~$n$ whose characters are drawn from an ordered alphabet of size $\sigma$,
its longest Lyndon subsequence is a longest subsequence of~$T$ that is a Lyndon word.
We propose algorithms for finding such a subsequence in \Oh{n^3} time with \Oh{n} space,
or \emph{online} in \Oh{n^3 \sigma} space and time.
Our first result can be extended to find the longest common Lyndon subsequence of two strings of length~$n$ in \Oh{n^4 \sigma} time using \Oh{n^3} space. \end{abstract} \keywords{Lyndon word, subsequence, dynamic programming}
\section{Introduction} A recent theme in the study of combinatorics on words has been the generalization of regularity properties from substrings to subsequences. For example, given a string~$T$ over an ordered alphabet, the longest increasing subsequence problem is to find the longest subsequence of increasing symbols in~$T$~\cite{GBRobinson1938,Schensted61}. Several variants of this problem have been proposed~\cite{Knuth70,elmasry10longest}. These problems generalize to the task of finding such a subsequence that is not only present in one string, but common in two given strings~\cite{kutz11faster,ta21computing,he18longest}, which can also be viewed as a specialization of the longest common subsequence problem~\cite{wagner74correction,kiyomi21longest,hirschberg77algorithms}.
More recently, the problem of computing the longest square word that is a subsequence~\cite{kosowski04efficient}, the longest palindrome that is a subsequence~\cite{chowdhury14computing,inenaga18hardness}, the lexicographically smallest absent subsequence~\cite{kosche21absent}, and longest rollercoasters~\cite{DBLP:conf/gd/BiedlCDJL17,DBLP:conf/stacs/GawrychowskiMS19,DBLP:journals/siamdm/BiedlBCLMNS19,DBLP:conf/spire/FujitaNIBT21} have been considered.
Here, we focus on subsequences that are Lyndon, i.e., strings that are lexicographically smaller than all of their non-empty proper suffixes~\cite{lyndon54}. Lyndon words are objects of longstanding combinatorial interest, and have also proved to be useful algorithmic tools in various contexts (see, e.g.,~\cite{BIINTT17}). The longest Lyndon \emph{substring} of a string is the longest factor of the Lyndon factorization of the string~\cite{chen58lyndon}, and can be computed in linear time~\cite{duval83lyndon}. The longest Lyndon \emph{subsequence} of a unary string is just a single letter, which is also its only Lyndon subsequence. A (naive) solution to find the longest Lyndon subsequence is to enumerate all distinct Lyndon subsequences, and pick the longest one. However, the number of distinct Lyndon subsequences can be as large as $2^n$, considering a string of increasing numbers $T = 1 \cdots n$. In fact, there are no bounds known (except when $\sigma=1$) that bring this number into a polynomial relation with the text length~$n$ and the alphabet size~$\sigma$~\cite{hirakawa21counting}, and thus deriving the longest Lyndon subsequence from all distinct Lyndon subsequences can be infeasible. In this paper, we focus on the algorithmic aspects of computing this {longest} Lyndon subsequence in polynomial time without the need to consider all Lyndon subsequences. In detail, we study the problems of computing \begin{enumerate} \item the lexicographically smallest (common) subsequence for each length online, cf.~\cref{secLexicographicallySmallest}, and \item the longest subsequence that is Lyndon, cf.~\cref{secLongestLyndon}, with two variations: performing the computation online, and requiring the subsequence to be common to two given strings. \end{enumerate} The first problem serves as an appetizer. 
Although the notions of \emph{Lyndon} and \emph{lexicographically smallest} share common traits, our solutions to the two problems are independent, but we will reuse some tools for the online computation.
\section{Preliminaries}
Let $\Sigma$ denote a totally ordered set of symbols called the alphabet. An element of $\Sigma^*$ is called a string. Given a string $S \in \Sigma^*$, we denote its length with $|S|$, its $i$-th symbol with $S[i]$ for $i \in [1..|S|]$. Further, we write $S[i..j] = S[i]\cdots S[j]$. A \teigi{subsequence} of a string~$S$ with length~$\ell$ is a string $S[i_1] \cdots S[i_\ell]$ with $i_1 < \ldots < i_\ell$.
Let $\bot$ be the empty string. We stipulate that $\bot$ is lexicographically larger than every string of $\Sigma^+$. For a string $S$, appending $\bot$ to~$S$ yields~$S$.
A string $S \in \Sigma^*$ is a \teigi{Lyndon word}~\cite{lyndon54} if $S$ is lexicographically smaller than all its non-empty proper suffixes. Equivalently, a string $S$ is a Lyndon word if and only if it is smaller than all its proper cyclic rotations.
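To make the definition concrete, the following short Python sketch (the function names are ours, not from the paper) checks both characterizations:

```python
def is_lyndon(s: str) -> bool:
    # S is a Lyndon word iff S is strictly smaller than all of its
    # non-empty proper suffixes.
    return bool(s) and all(s < s[i:] for i in range(1, len(s)))

def is_lyndon_by_rotations(s: str) -> bool:
    # Equivalent characterization: S is strictly smaller than all of
    # its proper cyclic rotations.
    return bool(s) and all(s < s[i:] + s[:i] for i in range(1, len(s)))
```

For example, \texttt{aab} is a Lyndon word, while \texttt{aba} is not, since its suffix \texttt{a} is lexicographically smaller.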
The algorithms we present in the following may apply techniques limited to integer alphabets. However, since the final space and running times are not better than $\Oh{n}$ space and $\Oh{n \lg n}$ time, respectively, we can reduce the alphabet of $T$ to an integer alphabet by sorting the characters in $T$ with a comparison based sorting algorithm taking \Oh{n \lg n} time and \Oh{n} space, removing duplicate characters, and finally assigning each distinct character a unique rank within $[1..n]$. Hence, we assume in the following that $T$ has an alphabet of size $\sigma \le n$.
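This rank reduction is elementary; a possible Python sketch (our naming) is:

```python
def to_integer_alphabet(T: str) -> list[int]:
    # Sort the distinct characters (comparison-based, O(n log n) time)
    # and replace each character by its rank, yielding an integer
    # alphabet [0..sigma-1] with sigma <= n.
    rank = {c: r for r, c in enumerate(sorted(set(T)))}
    return [rank[c] for c in T]
```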
\section{Lexicographically Smallest Subsequence}\label{secLexicographicallySmallest}
As a starter, we propose a solution for the following related problem: Compute the lexicographically smallest subsequence of~$T$ for each length $\ell \in [1..n]$ online.
\subsection{Dynamic Programming Approach}\label{secFirstApproachLexicographicallySmallest}
The idea is to apply dynamic programming dependent on the length~$\ell$ and the length of the prefix $T[1..i]$ in which we compute the lexicographically smallest subsequence of length~$\ell$. We show that the lexicographically smallest subsequence of $T[1..i]$ of length~$\ell$, denoted by $D[i,\ell]$, is either $D[i-1,\ell]$ or $D[i-1,\ell-1]T[i]$, where $D[0,\cdot] = D[\cdot,0] = \bot$ is the empty word. See \Cref{algoLexSmallest} for pseudocode.
\begin{algorithm}[t]
\DontPrintSemicolon{}
$D[0,1] \gets \bot$ \;
\For(\Comment*[f]{Initialize $D[\cdot,1]$}){$i = 1$ to $n$}{$D[i,1] \gets \min_{j \in [1..i]} T[j] = \min(D[i-1,1],T[i])$ \Comment*{\Oh{1} time per entry}
} \For(\Comment*[f]{Induce $D[\cdot,\ell]$ from $D[\cdot,\ell-1]$}){$\ell = 2$ to $n$}{\For(\Comment*[f]{Induce $D[i,\ell]$}){$i = 1$ to $n$}{\lIf{$i < \ell$}{$D[i, \ell] \gets \bot$
}\lElse{$D[i, \ell] \gets \min(D[i-1,\ell], D[i-1,\ell-1]T[i])$ \label{lineDIEll}
}
}
}
\caption{Computing the lexicographically smallest subsequence $D[i,\ell]$ in $T[1..i]$ of length~$\ell$.}
\label{algoLexSmallest} \end{algorithm}
\begin{figure}
\caption{Sketch of the proof of \cref{lemLexSmallestDP}.
We can fill the fields shaded in blue (the first row and the diagonal) in a precomputation step.
Further, we know that entries left of the diagonal are all empty.
A cell to the right of it (red) is based on its left-preceding and diagonal-preceding cell (green).
}
\label{figLexSmallestDP}
\end{figure}
\begin{lemma}\label{lemLexSmallestDP}
\Cref{algoLexSmallest} correctly computes $D[i,\ell]$, the lexicographically smallest subsequence of $T[1..i]$ with length~$\ell$. \end{lemma} \begin{proof}
The proof is done by induction over the length~$\ell$ and the prefix~$T[1..i]$.
We observe that $D[i,\ell] = \bot$ for $i < \ell$ and $D[i,i] = T[1..i]$
since $T[1..i]$ has only one subsequence of length~$i$.
Hence, for (a) $\ell = 1$ as well as for (b) $i \le \ell$, the claim holds.
See \cref{figLexSmallestDP} for a sketch.
Now assume that the claim holds for
$D[i',\ell']$ with (a) $\ell' < \ell$ and all $i' \in [1..n]$, as well as (b) $\ell' = \ell$ and all $i' \in [1..i-1]$.
In what follows, we show that the claim also holds for $D[i,\ell]$ with $i > \ell > 1$.
For that, let us assume that $T[1..i]$ has a subsequence~$L$ of length~$\ell$ with $L \prec D[i,\ell]$.
If $L[\ell] \not= T[i]$, then $L$ is a subsequence of $T[1..i-1]$, and therefore $D[i-1,\ell] \preceq L$ according to the induction hypothesis.
But $D[i,\ell] \preceq D[i-1,\ell]$, a contradiction.
If $L[\ell] = T[i]$, then $L[1..\ell-1]$ is a subsequence of $T[1..i-1]$,
and therefore $D[i-1,\ell-1] \preceq L[1..\ell-1]$ according to the induction hypothesis.
But $D[i,\ell] \preceq D[i-1,\ell-1]T[i] \preceq L[1..\ell-1]T[i] = L$, a contradiction.
Hence, $D[i,\ell]$ is the lexicographically smallest subsequence of $T[1..i]$ of length~$\ell$. \end{proof}
Unfortunately, the lexicographically smallest subsequence of a given length is not a Lyndon word in general, so this dynamic programming approach does not solve our problem of finding the longest Lyndon subsequence. In fact, if $T$ has a longest Lyndon subsequence of length~$\ell$, then there can be a lexicographically smaller subsequence of the same length. For instance, with $T = \texttt{aba}$, we have the longest Lyndon subsequence \texttt{ab}, while the lexicographically smallest length-2 subsequence is \texttt{aa}.
Analyzing the complexity bounds of \Cref{algoLexSmallest}, we need \Oh{n^2} space for storing the two-dimensional table $D[1..n,1..n]$. Its initialization costs \Oh{n^2} time. Line~\ref{lineDIEll} is executed \Oh{n^2} times. There, we compute the lexicographical minimum of two subsequences. If we evaluate this computation with naive character comparisons, for which we need to check \Oh{n} characters, we pay \Oh{n^3} time in total, which is also the bottleneck of this algorithm.
\begin{lemma}\label{thmDPCubed}
We can compute the lexicographically smallest subsequence of $T$ for each length~$\ell$
online in \Oh{n^3} time with \Oh{n^2} space. \end{lemma}
\subsection{Speeding Up String Comparisons}\label{secOrderMaintenance}
\newcommand*{\functionname}[1]{{\ensuremath{\renewcommand{\rmdefault}{ptm}\fontfamily{ppl}\selectfont\textrm{\textup{#1}}}}} \newcommand*{\fnPreceding}{\functionname{precedes}} \newcommand*{\fnInsert}{\functionname{insert}} \newcommand*{\fnDepth}{\functionname{depth}} \newcommand*{\fnLevelanc}{\functionname{level-anc}}
Below, we improve the time bound of \cref{thmDPCubed} by representing each cell of $D[1..n,1..n]$ with a node in a trie, which supports the following methods: \begin{itemize}
\item $\fnInsert(v,c)$: adds a new leaf to a node~$v$ with an edge labeled with character~$c$, and returns a handle to the created leaf.
\item $\fnPreceding(u, v)$: returns true if the string represented by the node~$u$ is lexicographically smaller than the string represented by the node~$v$. \end{itemize} Each cell of~$D$ stores a handle to its respective trie node. The root node of the trie represents the empty string~$\bot$, and we associate $D[0,\ell] = \bot$ with the root node for all $\ell$. A node representing $D[i-1,\ell-1]$ has a child representing $D[i,\ell]$ connected with an edge labeled with~$c$ if $D[i,\ell] = D[i-1,\ell-1]c$, which is a concept similar to the LZ78 trie. If $D[i,\ell] = D[i-1,\ell]$, then both strings are represented by the same trie node. Since each node stores a constant number of words and an array storing its children, the trie takes \Oh{n^2} space.
\paragraph*{\bf Insert.} A particularity of our trie is that it stores the children of a node in the order of their creation, i.e., we always make a new leaf the last among its siblings. This allows us to perform \fnInsert{} in constant time by representing the pointers to the children of a node by a plain dynamic array. When working with the trie, we assure that we do not insert edges into the same node with the same character label (to prevent duplicates).
We add leaves to the trie as follows: Suppose that we compute $D[i,\ell]$. If we can copy $D[i-1,\ell]$ to $D[i,\ell]$ (Line~\ref{lineDIEll}), we just copy the handle of~$D[i-1,\ell]$ pointing to its respective trie node to $D[i,\ell]$. Otherwise, we create a new trie leaf for the new entry of~$D$, obtained by selecting a new character ($\ell = 1$) or by appending a character to one of the existing strings in~$D$. We do not create duplicate edges since we prioritize copying over the creation of a new trie node: For an entry~$D[i,\ell]$, we first default to the previous occurrence~$D[i-1,\ell]$, and only create a new string~$D[i-1,\ell-1]T[i]$ if $D[i-1,\ell-1]T[i] \prec D[i-1,\ell]$. $D[i-1,\ell-1]T[i]$ cannot have an occurrence represented in the trie. To see that, we observe that $D$ obeys the invariants that (a) $D[i,\ell] = \min_{j \in [1..i]} D[j,\ell]$ (where $\min$ selects the lexicographically minimal string) and (b) all pairs of rows $D[\cdot,\ell]$ and $D[\cdot,\ell']$ with $\ell \neq \ell'$ have different entries. Since \Cref{algoLexSmallest} fills the entries in $D[\cdot,\ell]$ in lexicographically non-increasing order for each length~$\ell$, we cannot create duplicates (otherwise, an earlier computed entry would be lexicographically smaller than a later computed entry having the same length). The string comparison $D[i-1,\ell-1]T[i] \prec D[i-1,\ell]$ is done by calling $\fnPreceding{}$, which works as follows:
\paragraph*{\bf Precedes.} We can implement the function \fnPreceding{} efficiently by augmenting our trie with the dynamic data structure of \cite{cole05dynamic} supporting lowest common ancestor (LCA) queries in constant time and the dynamic data structure of \cite{dietz91finding} supporting level ancestor queries $\fnLevelanc(u,d)$ returning the ancestor of a node~$u$ on depth~$d$ in amortized constant time. Both data structures conform with our definition of \fnInsert{}, which only supports the insertion of \emph{leaves}. With these data structures, we can implement $\fnPreceding(u,v)$ by first computing the lowest common ancestor~$w$ of $u$ and $v$, selecting the children $u'$ and $v'$ of~$w$ on the paths downwards to $u$ and $v$, respectively, by two level ancestor queries~$\fnLevelanc(u,\fnDepth(w)+1)$ and~$\fnLevelanc(v,\fnDepth(w)+1)$, and finally returning true if the label of the edge $(w,u')$ is smaller than that of $(w,v')$.
We use \fnPreceding{} as follows for deciding whether $D[i-1,\ell-1]T[i] \prec D[i-1,\ell]$ holds: Since we know that $D[i-1,\ell-1]$ and $D[i-1,\ell]$ are represented by nodes~$u$ and $v$ in the trie, respectively, we first check whether $u$ is a child of $v$. In that case, we only have to compare $T[i]$ with $D[i-1,\ell][\ell]$. If not, then we know that $D[i-1,\ell-1]$ cannot be a prefix of $D[i-1,\ell]$, and $\fnPreceding(u,v)$ determines whether $D[i-1,\ell-1]$ or the $\ell-1$-th prefix of $D[i-1,\ell]$ is lexicographically smaller.
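The trie interface can be sketched in Python as follows; for brevity this sketch answers the LCA and level-ancestor queries by naively walking parent pointers (linear in the depth) instead of using the constant-time dynamic structures cited above, and all class and function names are ours:

```python
class TrieNode:
    """A trie node storing its parent, incoming edge label, depth,
    and children in creation order."""
    __slots__ = ("parent", "char", "depth", "children")

    def __init__(self, parent=None, char=None):
        self.parent = parent
        self.char = char  # label of the edge from the parent
        self.depth = parent.depth + 1 if parent else 0
        self.children = []

def insert(v, c):
    """Add a leaf below v with edge label c (constant time)."""
    leaf = TrieNode(v, c)
    v.children.append(leaf)
    return leaf

def precedes(u, v):
    """True iff the string represented by u is lexicographically
    smaller than the string represented by v (naive LCA walk)."""
    a, b, ca, cb = u, v, None, None
    while a.depth > b.depth:          # lift the deeper node,
        ca, a = a.char, a.parent      # remembering the edge just below
    while b.depth > a.depth:
        cb, b = b.char, b.parent
    while a is not b:                 # lift both until the LCA
        ca, a = a.char, a.parent
        cb, b = b.char, b.parent
    if ca is None:                    # u is an ancestor of v:
        return u is not v             # a proper prefix precedes v
    if cb is None:                    # v is a proper prefix of u
        return False
    return ca < cb                    # compare the edges below the LCA
```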
\begin{theorem}
We can compute the table $D[1..n,1..n]$
in \Oh{n^2} time using \Oh{n^2} words of space. \end{theorem}
\newcommand*{\Stack}{\ensuremath{\mathsf{S}}}
\begin{figure}
\caption{Longest Lyndon subsequences of prefixes of a text~$T$.
The $i$-th row of bars below $T$ depicts the selection of characters forming a Lyndon sequence.
In particular, the $i$-th row corresponds to the longest subsequence of
$T[1..9]$ for $i=1$ (green), $T[1..11]$ for $i=2$ (blue), and of $T[1..12]$ for $i=3$ (red).
The first row (green) corresponds also to a longest Lyndon subsequence of $T[1..10]$ and $T[1..11]$.
Extending the second Lyndon subsequence with $T[12]$ gives also a Lyndon subsequence, but is shorter than the third Lyndon subsequence (red).
Having only the information of the Lyndon subsequences in $T[1..i]$ at hand seems not to give us a solution for $T[1..i+1]$.
}
\label{figCounterExampleDP}
\end{figure}
\newcommand*{\Top}{\textsf{top}}
\subsection{Most Competitive Subsequence} If we want to find only the lexicographically smallest subsequence for a fixed length~$\ell$, this problem is also known as
\emph{Find the Most Competitive Subsequence}\footnote{\url{https://leetcode.com/problems/find-the-most-competitive-subsequence/}}. For that problem, there are linear-time solutions using a stack~\Stack{} storing the positions of the lexicographically smallest subsequence of length~$\ell$ for any prefix~$T[1..i]$ with $\ell \le i$. The idea is to first fill \Stack{} with $1,\ldots,\ell$. Let $\Top$ denote the top element of \Stack{}. Given that we are at text position $i > \ell$, we recursively pop \Top{} as long as $T[\Top] > T[i]$ and $n-i \ge \ell - |\Stack|$. The latter condition ensures that when we are near the end of the text, we still have enough positions left to fill up \Stack{} to a sequence of $\ell$ text positions. Finally, we push $i$ on top of $\Stack{}$ if $|\Stack{}| < \ell$. Since a text position gets inserted into \Stack{} and removed from \Stack{} at most once, the algorithm runs in linear time. Consequently, if the whole text $T$ is given (i.e., not online), this solution solves our problem in the same time and space bounds by running the algorithm for each $\ell$ separately.
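This stack scan can be sketched in Python as follows (0-based positions; \texttt{most\_competitive} is our name for the routine):

```python
def most_competitive(T: str, ell: int) -> str:
    """Lexicographically smallest subsequence of T of length ell,
    computed in linear time with a stack of text positions."""
    n = len(T)
    stack = []  # positions of the current best subsequence prefix
    for i in range(n):
        # Pop while the new character improves the subsequence and
        # enough characters remain to still reach length ell.
        while stack and T[stack[-1]] > T[i] \
                and len(stack) - 1 + (n - i) >= ell:
            stack.pop()
        if len(stack) < ell:
            stack.append(i)
    return "".join(T[i] for i in stack)
```

For instance, on $T = \texttt{aba}$ and $\ell = 2$ the scan yields \texttt{aa}, the smallest length-2 subsequence discussed earlier.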
\subsection{Lexicographically Smallest Common Subsequence}
Another variation is to ask for the lexicographically smallest subsequence of each distinct length that is common to two strings~$X$ and~$Y$. Luckily, our ideas of \cref{secFirstApproachLexicographicallySmallest,secOrderMaintenance} can be straightforwardly translated. For that, our matrix~$D$ becomes a cube $D_3[1..L,1..|X|,1..|Y|]$ with $L := \min(|X|,|Y|)$, and we set \[ D_3[\ell,x+1,y+1] = \min \begin{cases} D_3[\ell-1,x,y]X[x+1] \text{~if~} X[x+1] = Y[y+1], \\ D_3[\ell,x,y+1], \\ D_3[\ell,x+1,y], \end{cases} \]
with $D_3[0,\cdot,\cdot] = D_3[\ell,x,y] = \bot$ for all $\ell,x,y$ with $\LCS(X[1..x],Y[1..y]) < \ell$, where $\LCS$ denotes the length of a longest common subsequence of $X$ and $Y$. This gives us an induction basis similar to the one used in the proof of \cref{lemLexSmallestDP}, such that we can use its induction step analogously. The table $D_3$ has $\Oh{n^3}$ cells, and filling each cell can be done in constant time by representing each cell as a pointer to a node in the trie data structure proposed in \cref{secOrderMaintenance}. For that, we ensure that we never insert a subsequence of $D_3$ into the trie twice. To see that, let $L \in \Sigma^+$ be a subsequence computed in $D_3$, and let $D_3[\ell,x,y] = L$ be the entry at which we called $\fnInsert$ to create a trie node for $L$ (for the first time). Then $\ell = |L|$, and $X[1..x]$ and $Y[1..y]$ are the shortest prefixes of $X$ and $Y$, respectively, containing $L$ as a subsequence. Since $D_3[\ell,x,y] = \min_{x' \in [1..x], y' \in [1..y]} D_3[\ell,x',y']$, all other entries $D_3[\ell,x',y']=L$ satisfy $D_3[\ell,x'-1,y']=L$ or $D_3[\ell,x',y'-1]=L$, so we copy the trie node handle representing $L$ instead of calling \fnInsert{} when filling out $D_3[\ell,x',y']$.
\begin{theorem}
Given two strings $X, Y$ of length~$n$,
we can compute the lexicographically smallest common subsequence for each length $\ell \in [1..n]$
in $\Oh{n^3}$ time using $\Oh{n^3}$ space. \end{theorem}
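The recurrence for $D_3$ can be transcribed directly in Python; as before, we use plain strings instead of trie handles (so this sketch does not meet the stated time bound, but it reproduces the table $D_3$ entry by entry):

```python
def smallest_common_subsequences(X: str, Y: str) -> list:
    """Entry ell-1 of the result is the lexicographically smallest
    common subsequence of X and Y of length ell (None if none exists)."""
    L = min(len(X), len(Y))
    # D3[ell][x][y]: smallest common subsequence of X[1..x] and Y[1..y]
    # of length ell; None plays the role of bottom.
    D3 = [[[None] * (len(Y) + 1) for _ in range(len(X) + 1)]
          for _ in range(L + 1)]
    for x in range(len(X) + 1):
        for y in range(len(Y) + 1):
            D3[0][x][y] = ""
    for ell in range(1, L + 1):
        for x in range(1, len(X) + 1):
            for y in range(1, len(Y) + 1):
                cands = [s for s in (D3[ell][x - 1][y], D3[ell][x][y - 1])
                         if s is not None]
                if X[x - 1] == Y[y - 1] \
                        and D3[ell - 1][x - 1][y - 1] is not None:
                    cands.append(D3[ell - 1][x - 1][y - 1] + X[x - 1])
                D3[ell][x][y] = min(cands) if cands else None
    return [D3[ell][len(X)][len(Y)] for ell in range(1, L + 1)]
```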
\section{Computing the Longest Lyndon Subsequence}\label{secLongestLyndon}
In the following, we want to compute the longest Lyndon subsequence of $T$. See \cref{figCounterExampleDP} for examples of longest Lyndon subsequences. Compared to the previously introduced dynamic programming approach for the lexicographically smallest subsequences, we follow the sketched solution for the most competitive subsequence using a stack, which here simulates a traversal of the trie~$\tau$ storing all pre-Lyndon subsequences. $\tau$ is a subgraph of the trie storing all subsequences, sharing the same root. This subgraph is connected since, by definition, there is no string~$S$ such that $WS$ forms a pre-Lyndon word for a non-pre-Lyndon word~$W$ (otherwise, we could extend $WS$ to a Lyndon word, and hence $W$, too). We say that the \teigi{string label} of a node~$v$ is the string read from the edges on the path from the root to~$v$. We associate the label~$c$ of each edge of the trie with the leftmost possible position such that the string label~$V$ of~$v$
is associated with the sequence of text positions $i_1 < i_2 < \cdots < i_{|V|}$ and $T[i_1]T[i_2] \cdots T[i_{|V|}] = V$.
\subsection{Basic Trie Traversal}\label{secBasicTrieTraversal} Problems already emerge when considering the construction of $\tau$ since there are texts like $T = 1 \cdots n$ for which $\tau$ has $\Ot{2^n}$ nodes. Instead of building~$\tau$, we simulate a preorder traversal on it. By simulation we mean that we enumerate the pre-Lyndon subsequences of~$T$ in lexicographic order. For that, we maintain a stack~\Stack{} storing the text positions~$(i_1, \ldots, i_\ell)$ with $i_1 < \cdots < i_\ell$ associated with the path from the root to the node~$v$ we currently visit, i.e., $i_1, \ldots, i_\ell$ are the smallest positions such that $T[i_1] \cdots T[i_\ell]$ is the string label of~$v$, which is a pre-Lyndon word. When walking down, we select the next text position~$i_{\ell+1}$ such that $T[i_1] \cdots T[i_\ell]T[i_{\ell+1}]$ is a pre-Lyndon word. If no such text position exists, we backtrack by popping $i_\ell$ from~\Stack{}, push the smallest text position~$i'_\ell > i_{\ell-1}$ with $T[i'_\ell] > T[i_\ell]$ onto~\Stack{}, and recurse. Finally, we check at each state of~\Stack{} storing the text positions~$(i_1, \ldots, i_\ell)$ whether $T[i_1] \cdots T[i_\ell]$ is a Lyndon word. For that, we make use of the following facts:
\paragraph*{\bf Facts about Lyndon Words. } A Lyndon word cannot have a \teigi{border}, that is, a non-empty proper prefix that is also a suffix of the string. A \teigi{pre-Lyndon word} is a (not necessarily proper) prefix of a Lyndon word. Given a string $S$ of length $n$, an integer $p \in [1..n]$ is a \teigi{period} of $S$ if $S[i] = S[i+p]$ for all $i \in [1..n-p]$. The length of a string is always one of its periods. We use the following facts: \begin{enumerate}[label=(Fact~\arabic*), ref=\arabic*,leftmargin=*]
\item The only period of a Lyndon word~$S$ is its length~$|S|$.
\item The prefix $S[1..p]$ of a pre-Lyndon word~$S$ with period~$p$ is a Lyndon word.
In particular, a pre-Lyndon word~$S$ with period~$|S|$ is a Lyndon word.
\label{factPreLyndonPeriod}
\item Given a pre-Lyndon word~$S$ with period~$p$ and a character~$c \in \Sigma$, then
\begin{itemize}
\item $Sc$ is a pre-Lyndon word with the same period~$p$ if and only if $S[|S|-p+1] = c$, and
\item $Sc$ is a Lyndon word if and only if $S[|S|-p+1] < c$.
In particular, if $S$ is a Lyndon word, then $Sc$ is a Lyndon word if and only if $S[1]$ is smaller than $c$.
\end{itemize}
\label{factLyndonExtension} \end{enumerate}
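Fact~3 yields a constant-time classification of the extension $Sc$, given the smallest period~$p$ of~$S$. A minimal sketch (illustrative function name, $0$-based Python indexing, so $S[|S|-p+1]$ becomes `S[len(S) - p]`):

```python
def classify_extension(S, p, c):
    """Classify the extension S+c of a nonempty pre-Lyndon word S with
    smallest period p, following Fact 3 (constant time given p)."""
    t = S[len(S) - p]                  # the character S[|S|-p+1] (1-based)
    if c == t:
        return ("pre-lyndon", p)       # S+c is pre-Lyndon, keeps period p
    if c > t:
        return ("lyndon", len(S) + 1)  # S+c is Lyndon; its period is |S|+1
    return ("not-pre-lyndon", None)    # S+c cannot extend to a Lyndon word
```

For instance, for the pre-Lyndon word `"aba"` with period $2$, appending `"b"` keeps the period (`"abab"`), appending `"c"` gives the Lyndon word `"abac"`, and appending `"a"` leaves the set of pre-Lyndon words.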
\paragraph*{\bf Checking pre-Lyndon Words. } Now suppose that our stack~\Stack{} stores the text positions~$(i_1, \ldots, i_\ell)$. To check whether $T[i_1] \cdots T[i_\ell] c$ for a character $c \in \Sigma$ is a pre-Lyndon word or whether it is a Lyndon word, we augment each position~$i_j$ stored in~\Stack{} with the period of $T[i_1] \cdots T[i_j]$, for $j \in [1..\ell]$, such that we can make use of Fact~\ref{factLyndonExtension} to compute the period and check whether $T[i_1] \cdots T[i_j] c$ is a pre-Lyndon word, both in constant time, for $c \in \Sigma$.
\paragraph*{\bf Trie Navigation. } To find the next text position~$i_{\ell+1}$, we may need to scan \Oh{n} characters in the text, and hence need \Oh{n} time for walking down from a node to one of its children. If we restrict the alphabet to be integer, we can augment each text position~$i$ to store the smallest text position~$i_c$ with $i < i_c$ for each character~$c \in \Sigma$ such that we can visit the trie nodes in constant time per node during our preorder traversal.
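For an integer alphabet, these successor pointers can be precomputed in $\Oh{n\sigma}$ time and space by a right-to-left scan; a possible sketch ($0$-based positions, illustrative name):

```python
def build_next_occurrence(T, sigma):
    """nxt[i][c] = smallest position j >= i with T[j] == c, or n if none
    (T over the integer alphabet [0..sigma)). The occurrence strictly
    after position i is nxt[i + 1][c]. O(n * sigma) time and space."""
    n = len(T)
    nxt = [[n] * sigma for _ in range(n + 1)]
    for i in range(n - 1, -1, -1):
        nxt[i] = nxt[i + 1][:]  # copy the row of the following position
        nxt[i][T[i]] = i
    return nxt
```

With this table, walking down from a trie node to the child labeled~$c$ is a single array lookup.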
This already gives an algorithm that computes the longest Lyndon subsequence with \Oh{n \sigma} space and time linear in the number of nodes in $\tau$. However, since the number of nodes can be exponential in the text length, we present ways to omit nodes that do not lead to the solution. Our aim is to find a rule judging whether a trie node contributes to the longest Lyndon subsequence, so that we can leave certain subtrees of the trie unexplored. For that, we use the following property:
\begin{lemma}\label{lemTakeLongerLyndon}
Given a Lyndon word~$V$ and
two strings~$U$ and $W$
such that $UW$ is a Lyndon word,
$V \prec U$, and $|V| \ge |U|$, then
$VW$ is also a Lyndon word with $VW \prec UW$. \end{lemma} \begin{proof}
Since $V \prec U$ and $V$ is not a prefix of~$U$ (because $|V| \ge |U|$), we have $U \succ VW$.
In what follows, we show that $S \succ VW$ for every proper suffix~$S$ of $VW$.
\begin{itemize}
\item If $S$ is a suffix of~$W$, then
$S \succeq UW \succeq U \succ VW$
because $S$ is a suffix of the Lyndon word~$UW$.
\item Otherwise, ($|S| > |W|$), $S$ is of the form $V'W$ for a proper suffix~$V'$ of~$V$.
Since $V$ is a Lyndon word, $V' \succ V$, and $V'$ is not a prefix of~$V$ (Lyndon words are border-free).
Hence, $V' W \succeq V' \succ VW$.
\end{itemize} \end{proof} Note that $U$ in \cref{lemTakeLongerLyndon} is a pre-Lyndon word since it is the prefix of the Lyndon word~$UW$.
\newcommand*{\Len}{\ensuremath{\mathsf{L}}}
Our algorithmic idea is as follows: We maintain an array~$\Len[1..n]$, where $\Len[\ell]$ is the smallest text position~$i$ such that our traversal has already explored a length-$\ell$ Lyndon subsequence of $T[1..i]$. We initialize the entries of $\Len$ with $\infty$ at the beginning. Now, whenever we visit a node~$u$ whose string label is a pre-Lyndon subsequence~$U = T[i_1] \cdots T[i_\ell]$ with $\Len[\ell] \le i_\ell$, we do not explore the children of~$u$. In this case, we call $u$ \teigi{irrelevant}. By skipping the subtree rooted at~$u$, we do not omit the solution due to \cref{lemTakeLongerLyndon}: When $\Len[\ell] \le i_\ell$, then there is a Lyndon subsequence~$V$ of $T[1..i_\ell]$ with $V \prec U$ (since we traverse the trie in lexicographic order)
and $|V|=|U|$. Given there is a Lyndon subsequence $UW$ of $T$, then we have already found $VW$ earlier, which is also a Lyndon subsequence of $T$ with $|VW| = |UW|$.
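To make the pruning rule concrete, the following hypothetical Python sketch simulates the pruned preorder traversal: the stack of text positions is kept together with the periods of its prefixes (Fact~3), Lyndon nodes update $\Len$, and irrelevant nodes are skipped. The helper rescans the text for the children of a node instead of using the next-position pointers, so this sketch illustrates the pruning only and does not attain the stated time bounds.

```python
def longest_lyndon_subsequence(T):
    """Simulated preorder traversal of the pre-Lyndon subsequence trie
    with Len-based pruning; returns a longest Lyndon subsequence of T."""
    n = len(T)
    # Len[l]: smallest end position (0-based) of an explored length-l
    # Lyndon subsequence; n + 1 plays the role of infinity.
    Len = [n + 1] * (n + 1)
    best = [""]

    def children(last):
        # leftmost occurrence of each distinct character after `last`,
        # in lexicographic order of the edge labels
        seen = {}
        for i in range(last + 1, n):
            seen.setdefault(T[i], i)
        return sorted(seen.items())

    def dfs(seq, per):  # seq: stack of positions; per: periods of prefixes
        l = len(seq)
        if l:
            if Len[l] <= seq[-1]:  # irrelevant node: prune its subtree
                return
            if per[-1] == l:       # period |S| means the label S is Lyndon
                Len[l] = seq[-1]
                if l > len(best[0]):
                    best[0] = "".join(T[i] for i in seq)
        for c, i in children(seq[-1] if seq else -1):
            if not seq:
                dfs([i], [1])
            else:
                t = T[seq[l - per[-1]]]  # the character S[|S|-p+1]
                if c == t:               # extension keeps the period
                    dfs(seq + [i], per + [per[-1]])
                elif c > t:              # extension is a Lyndon word
                    dfs(seq + [i], per + [l + 1])
                # c < t: S c is not pre-Lyndon, skip

    dfs([], [])
    return best[0]
```

On `"ababb"` the whole text is returned, whereas on `"bab"` the subsequence `"ab"` is found; the branch for `"bb"` is pruned because a length-$1$ Lyndon subsequence ending earlier was already explored.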
Next, we analyze the complexity of this algorithm, and propose an improved version. For that, we say that a string is \teigi{immature} if it is pre-Lyndon but not Lyndon. We also consider a subtree rooted at a node~$u$ as pruned if $u$ is irrelevant, i.e., the algorithm does not explore this subtree. Consequently, irrelevant nodes are leaves in the pruned subtree, but not all leaves are irrelevant (consider a Lyndon subsequence using the last text position~$T[n]$). Further, we call a node Lyndon or immature if its string label is Lyndon or immature, respectively. (All nodes in the trie are either Lyndon or immature.)
\paragraph*{\bf Time Complexity. } Suppose that we have the text positions~$(i_1, \ldots, i_\ell)$ on~\Stack{} such that $U := T[i_1] \cdots T[i_\ell]$ is a Lyndon word. If $\Len[\ell] > i_\ell$, then we lower $\Len[\ell] \gets i_\ell$. We can lower an individual entry of~$\Len$ at most $n$ times, or at most $n^2$ times in total for all entries. If a visited node is Lyndon, we only explore its subtree if we were able to lower an entry of $\Len$. Hence, we visit at most $n^2$ Lyndon nodes that trigger a decrease of the values in $\Len$. While each node can have at most $\sigma$ children, at most one child can be immature. Since the depth of the trie is at most $n$, we therefore visit $\Oh{n\sigma}$ nodes between two updates of $\Len$. These nodes are leaves (of the pruned trie) or immature nodes. Thus, we traverse $\Oh{n^3 \sigma}$ nodes in total.
\begin{theorem}
We can compute the longest Lyndon subsequence
of a string of length~$n$
in $\Oh{n^3 \sigma}$ time using $\Oh{n \sigma}$ words of space. \end{theorem}
\subsection{Improving Time Bounds}
We further improve the time bounds by avoiding visiting irrelevant nodes, based on the following observations: First, the number of {\em relevant} (i.e., non-irrelevant) nodes that are Lyndon is $\Oh{n^2}$. Since all nodes have a depth of at most $n$, the total number of relevant nodes is $\Oh{n^3}$. Suppose we are at a node~$u$, and~\Stack{} stores the positions $(i_1, \ldots, i_\ell)$ such that $T[i_1] \cdots T[i_\ell]$ is the string label of~$u$. Let $p$ denote the smallest period of $T[i_1]\cdots T[i_\ell]$. Then we do not want to consider all $\sigma$ children of~$u$, but only those whose edges to~$u$ have a label~$c \geq T[i_{\ell-p+1}]$ such that $c$ occurs in $T[i_\ell+1..\Len[\ell+1]-1]$ (otherwise, there is already a Lyndon subsequence of length~$\ell+1$ lexicographically smaller than $T[i_1]\cdots T[i_\ell] c$). In the context of our preorder traversal, each such child can be found iteratively using range successor queries: starting from $b = T[i_{\ell-p+1}]-1$, we want to find the \emph{lexicographically smallest} character~$c > b$ that occurs in $T[i_\ell+1..\Len[\ell+1]-1]$, and in particular the leftmost such occurrence. A data structure answering these queries is the wavelet tree~\cite{grossi03wavelet}, which returns the position of the \emph{leftmost} such $c$ (if it exists) in $\Oh{\lg \sigma}$ time. In particular, we can use the wavelet tree instead of the $\Oh{n \sigma}$ pointers to the subsequent characters to arrive at \Oh{n} words of space. Finally, we do not want to query the wavelet tree each time, but only when we are sure that it will lead us to a relevant Lyndon node. For that, we build a range maximum query (RMQ) data structure on the characters of the text~$T$ in a preprocessing step. The RMQ data structure of~\cite{bender05lowest} can be built in $\Oh{n}$ time and answers queries in constant time.
Now, in the context of the above traversal where we are at a node~$u$ with \Stack{} storing $(i_1, \ldots, i_\ell)$, we query this RMQ data structure for the largest character~$c$ in $T[i_\ell+1..\Len[\ell+1]-1]$ and check whether the sequence $S := T[i_1] \cdots T[i_\ell]c$ forms a (pre-)Lyndon word. \begin{itemize}
\item If $S$ is not pre-Lyndon, i.e., $T[i_{\ell-p+1}] > c$ for $p$ being the smallest period of $T[i_1] \cdots T[i_\ell]$,
we are sure that the children of $u$ cannot lead to Lyndon subsequences~\cite[Prop.~1.5]{duval83lyndon}.
\item If $S$ is immature, i.e., $T[i_{\ell-p+1}] = c$,
$u$ has exactly one child, and this child's string label is $S$.
Hence, we do not need to query for other Lyndon children.
\item Finally, if $S$ is Lyndon, i.e., $T[i_{\ell-p+1}] < c$,
we know that there is at least one child of $u$ that will trigger an update in $\Len$ and thus is a relevant node. \end{itemize} This observation allows us to find all relevant children of $u$ (including the single immature child, if any) by iteratively conducting $\Oh{k}$ range successor queries, where $k$ is the number of children of $u$ that are relevant Lyndon nodes. Thus, if we condition the execution of the aforementioned wavelet tree query with an RMQ query result on the same range, the total number of wavelet tree queries can be bounded by $\Oh{n^2}$. This gives us $\Oh{n^3 + n^2 \lg \sigma} = \Oh{n^3}$ time for $\sigma = \Oh{n}$ (which can be achieved by an $\Oh{n\log n}$ time re-enumeration of the alphabet in a preliminary step).
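The three-way case distinction above can be sketched as follows; a naive maximum over the window stands in for the constant-time RMQ structure, and the function name and return labels are illustrative:

```python
def classify_candidate(T, seq, p, hi):
    """Decide whether the current node can have relevant Lyndon children.
    seq: text positions on the stack (0-based), p: smallest period of the
    node's string label, hi: exclusive right end of the candidate window,
    i.e. Len[l + 1] in the text's notation."""
    window = T[seq[-1] + 1 : hi]
    if not window:
        return "no-children"
    c = max(window)               # RMQ over T[i_l + 1 .. Len[l+1] - 1]
    t = T[seq[len(seq) - p]]      # the character S[|S|-p+1]
    if c < t:
        return "no-children"      # S c is not pre-Lyndon: prune
    if c == t:
        return "single-immature-child"
    return "has-relevant-lyndon-child"
```

Only in the last case is a wavelet tree query issued, which is what bounds the number of such queries by the number of relevant Lyndon nodes.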
\begin{theorem}
We can compute the longest Lyndon subsequence
of a string of length~$n$
in \Oh{n^3} time using \Oh{n} words of space. \end{theorem}
In particular, the algorithm computes the lexicographically smallest one among all longest Lyndon subsequences: Assume that this subsequence~$L$ is not computed, then we did not explore the subtree of the original trie~$\tau$ (before pruning) containing the node with string label~$L$. Further, assume that this subtree is rooted at an irrelevant node~$u$ whose string label is the pre-Lyndon subsequence~$U$. Then $U$ is a prefix of $L$,
and because $u$ is irrelevant (i.e., we have not explored $u$'s children), there is a node~$v$ whose string label is a Lyndon word~$V$ with $V \prec U$ and $|V| = |U|$. In particular, the edge of~$v$ to $v$'s parent is associated with a text position equal to or smaller than the associated text position of the edge between~$u$ and $u$'s parent. Hence, we can extend $V$ to the Lyndon subsequence $V L[|U|+1..]$, which is lexicographically smaller than $L$, a contradiction.
\subsection{Online Computation}\label{secLyndonOnlineComputation} If we allow increasing the space usage in order to maintain the trie data structure introduced in \cref{secOrderMaintenance}, we can modify our \Oh{n^3 \sigma}-time algorithm of \cref{secBasicTrieTraversal} to perform the computation online, i.e., with $T$ given as a text stream. For that, we explicitly represent the visited nodes of the trie $\tau$ with an explicit trie data structure~$\tau'$ such that we can create pointers to the nodes. (In other words, $\tau'$ is a lazy representation of $\tau$.) The problem is that we can no longer perform the traversal in lexicographic order; instead, we keep multiple fingers in the trie $\tau'$ constructed so far, and use these fingers to advance the trie traversal in text order.
With a different traversal order, we need an updated definition of $\Len[1..n]$: Now, while the algorithm processes $T[i]$, the entry $\Len[\ell]$ stores the lexicographically smallest length-$\ell$ Lyndon subsequence of $T[1..i]$ (represented by a pointer to the corresponding node of $\tau'$). Further, we maintain $\sigma$ lists storing pointers to nodes of $\tau'$. Initially, $\tau'$ consists only of the root node, and each list stores only the root node. Whenever we read a new character~$T[i]$ from the text stream, for each node~$v$ of the \mbox{$T[i]$-th} list, we add a leaf~$\lambda$ connected to $v$ by an edge with label~$T[i]$. Our algorithm adheres to the invariant that $\lambda$'s string label $S$ is a pre-Lyndon word so that $\tau'$ is always a subtree of $\tau$.
If $S$ is a Lyndon word satisfying $S\prec \Len[|S|]$ (which can be tested using the data structure of \cref{secOrderMaintenance}), we further set $\Len[|S|]:=S$.
This completes the process of updating $\Len[1..n]$. Next, we clear the $T[i]$-th list and iterate again over the newly created leaves. For each such leaf $\lambda$ with label $S$, we check whether $\lambda$ is relevant, i.e., whether $S \preceq \Len[|S|]$. If $\lambda$ turns out irrelevant, we are done with processing it. Otherwise, we put $\lambda$ into the $c$-th list for each character $c \in \Sigma$ such that $Sc$ is a pre-Lyndon word. By doing so, we effectively create new events that trigger a call-back to the point where we stopped the trie traversal.
Overall, we generate exactly the nodes visited by the algorithm of \cref{secBasicTrieTraversal}. In particular, there are \Oh{n^3} relevant nodes, and for each such node, we issue \Oh{\sigma} events. The operations of \cref{secOrderMaintenance} take constant amortized time, so the overall time and space complexity of the algorithm are \Oh{n^3\sigma}.
\begin{theorem}
We can compute the longest Lyndon subsequence online in \Oh{n^3 \sigma} time using \Oh{n^3 \sigma} space. \end{theorem}
\section{Longest Common Lyndon Subsequence}\label{secLongestCommonLyndon}
Given two strings $X$ and $Y$, we want to compute the longest common subsequence of $X$ and $Y$ that is Lyndon. For that, we extend our algorithm finding the longest Lyndon subsequence of a single string as follows. First, we explore in depth-first order the trie of all \emph{common} pre-Lyndon subsequences of $X$ and $Y$. A node~$v$ of depth~$\ell$ is represented by a pair of positions $(x_\ell, y_\ell)$ such that the path from the root to~$v$ visits the nodes $(x_1, y_1), \ldots, (x_\ell, y_\ell)$, where $L = X[x_1] \cdots X[x_\ell] = Y[y_1] \cdots Y[y_\ell]$ is a pre-Lyndon word and $L$ is neither a subsequence of $X[1..x_\ell-1]$ nor of $Y[1..y_\ell-1]$, i.e., $x_\ell$ and $y_\ell$ are the leftmost such positions. The depth-first search works like an exhaustive search in that it tries to extend $L$ with each possible character in $\Sigma$ having an occurrence in both remaining suffixes $X[x_{\ell}+1..]$ and $Y[y_{\ell}+1..]$, and then, after having explored the subtree rooted at~$v$, visits its lexicographically succeeding sibling nodes (and descends into their subtrees)
by checking whether $L[1..|L|-1]$ can be extended with a character $c > L[|L|]$ appearing in both suffixes $X[x_{\ell-1}+1..]$ and $Y[y_{\ell-1}+1..]$.
The algorithm uses again the array $\Len$ to check whether we have already found a lexicographically smaller Lyndon subsequence with equal or smaller ending positions in $X$ and $Y$ than the currently constructed pre-Lyndon subsequence. For that, $\Len[\ell]$ stores not only one position, but a list of positions $(x,y)$ such that $X[1..x]$ and $Y[1..y]$ have a \emph{common} Lyndon subsequence of length~$\ell$. Although there can be $n^2$ such pairs of positions, we only store those that are pairwise non-dominated. A pair of positions $(x_1,y_1)$ is called \teigi{dominated} by a pair $(x_2,y_2) \neq (x_1,y_1)$ if $x_2 \le x_1$ and $y_2 \le y_1$. A set storing pairs in $[1..n] \times [1..n]$ can have at most $n$ elements that are pairwise non-dominated, and hence $|\Len[\ell]| \le n$.
At the beginning, all lists of $\Len$ are empty. Suppose that we visit a node $v$ with pair $(x_\ell,y_\ell)$ representing a common Lyndon subsequence of length~$\ell$. Then we query whether $\Len[\ell]$ has a pair dominating $(x_\ell,y_\ell)$. In that case, we can skip $v$ and its subtree. Otherwise, we insert $(x_\ell,y_\ell)$ and remove pairs in $\Len[\ell]$ that are dominated by $(x_\ell,y_\ell)$. Such an insertion can happen at most $n^2$ times. Since $\Len[1..n]$ maintains $n$ lists, we can update $\Len$ at most $n^3$ times in total. Checking for domination and insertion into $\Len$ takes \Oh{n} time. The former can be accelerated to constant time by representing $\Len[\ell]$ as an array~$R_\ell$ storing in $R_\ell[i]$ the value $y$ of the tuple $(x,y) \in \Len[\ell]$ with $x \le i$ and the lowest possible $y$, for each $i \in [1..n]$. Then a pair $(x,y) \not\in \Len[\ell]$ is dominated if and only if $R_\ell[x] \le y$.
\begin{example}
For $n = 10$, let $\Len[\ell] = [(3,9), (5,4), (8,2)]$.
Then all elements in $\Len[\ell]$ are pairwise non-dominated, and
$R_\ell = [\infty,\infty,9,9,4,4,4,2,2,2]$.
Inserting $(3,2)$ would remove all elements of $\Len[\ell]$ and update all entries of $R_\ell$.
Alternatively, inserting $(7,3)$ would only involve updating $R_\ell[7] \gets 3$;
since the subsequent entry $R_\ell[8] = 2$ is less than $R_\ell[7]$, no subsequent entries need to be updated. \end{example}
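The domination test and the insertion via $R_\ell$ can be sketched as follows ($1$-based positions, index $0$ unused, entries initialized to infinity; function names are illustrative). Only the array $R$ is maintained here, which suffices for the domination test:

```python
def dominated(R, x, y):
    """(x, y) is dominated by a stored pair iff R[x] <= y, where
    R[i] = min{ y' : (x', y') stored with x' <= i }, or infinity."""
    return R[x] <= y

def insert(R, x, y):
    """Insert the non-dominated pair (x, y): lower suffix entries of R
    until one at most y is reached; pairs dominated by (x, y) disappear
    implicitly since only R is consulted afterwards."""
    i = x
    while i < len(R) and R[i] > y:
        R[i] = y
        i += 1
```

Replaying the example: after inserting $(3,9)$, $(5,4)$, $(8,2)$, the array equals $[\infty,\infty,9,9,4,4,4,2,2,2]$, and inserting $(7,3)$ updates only $R_\ell[7]$.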
An update in $\Len[\ell]$ involves changing \Oh{n} entries of $R_\ell$, but that cost is dwarfed by the cost of finding the next common Lyndon subsequence that updates $\Len$. Such a subsequence can be found while visiting $\Oh{n \sigma}$ irrelevant nodes during a naive depth-first search (cf.\ the solution of \cref{secFirstApproachLexicographicallySmallest} computing the longest Lyndon subsequence of a single string). Hence, the total time is $\Oh{n^4 \sigma}$.
\begin{theorem}
We can compute the longest common Lyndon subsequence
of two strings of length~$n$
in $\Oh{n^4 \sigma}$ time using $\Oh{n^3}$ words of space. \end{theorem}
\section{Open Problems} Since this paper is, to the best of our knowledge, the first to study the computation of the longest (common) Lyndon subsequence, we do not know whether our solutions are optimal. It would be interesting to find non-trivial lower bounds that would justify our rather large time and space complexities.
\subsubsection*{Acknowledgments} This work was supported by JSPS KAKENHI Grant Numbers JP20H04141 (HB), JP19K20213 (TI), JP21K17701 and JP21H05847 (DK). TK was supported by NSF 1652303, 1909046, and HDR TRIPODS 1934846 grants, and an Alfred P. Sloan Fellowship.
\end{document}
\begin{document}
\begin{abstract} We provide a local classification of self-dual Einstein Riemannian four-manifolds admitting a positively oriented Hermitian structure and characterize those which carry a hyperhermitian, non-hyperk{\"a}hler structure compatible with the negative orientation. We finally show that self-dual Einstein 4-manifolds obtained as quaternionic quotients of the Wolf spaces ${\mathbb H}P^2$, ${\mathbb H}H^2$, $SU(4)/S(U(2)U(2))$, and $SU(2,2)/S(U(2)U(2))$ are always Hermitian.
\noindent 2000 {\it Mathematics Subject Classification}. Primary 53B35, 53C55\\ {\it Keywords:} Einstein metrics; complex structures; hypercomplex structures; quaternionic K{\"a}hler manifolds. \end{abstract} \maketitle \section*{Introduction}
The main goal of this paper is to give a local description of all self-dual Einstein $4$-manifolds $(M,g)$ which admit a positive Hermitian structure.
It follows from a (weak) Riemannian version of the Goldberg-Sachs theorem \cite{PB,Bo4,Nu,AG} that a Riemannian Einstein $4$-manifold locally admits a positive Hermitian structure if and only if the {\it self-dual Weyl tensor} $W^+$ is {\it
degenerate}. This means that at any point of $M$ at least two of the three eigenvalues of $W^+$ coincide, when $W ^+$ is viewed as a symmetric traceless operator acting on the three-dimensional space of self-dual 2-forms.
Riemannian Einstein 4-manifolds with degenerate self-dual Weyl tensor have been much studied by A. Derdzi\'nski; we here recall the following facts taken from \cite{De}: \begin{enumerate}
\item[{\rm (i)}] $W^+$ either vanishes identically or else has no
zero, i.e. has exactly two distinct eigenvalues at any point (one of them, say $\lambda$,
is simple; the
other one is
of multiplicity $2$, and therefore equals $- \frac{\lambda}{2}$ as $W ^+$ is trace-free).
\item[{\rm (ii)}] In the latter case, the K{\"a}hler form of the Hermitian structure $J$ is a generator of the simple eigenspace of $W^+$ --- in particular, the conjugacy class of $J$ is uniquely defined by the metric ---
and the conformal metric ${\bar g} = |W^+|^{\frac{2}{3}}g$ is {\it K{\"a}hler} with respect to $J$.
\item[{\rm (iii)}] If, moreover, $g$ is assumed to be {\it
self-dual} --- meaning that the {\it anti-self-dual} Weyl tensor,
$W ^-$, vanishes identically ---
the simple eigenvalue $\lambda$ of $W ^+$ is constant (equivalently, the norm $|W ^+|$ is constant) if and only if $(M, g)$ is locally symmetric, i.e., a real or complex space form.
\end{enumerate}
We then have a natural bijection between the following three classes of Riemannian $4$-manifolds (see Lemma \ref{de-ga} below):
\begin{enumerate}
\item Self-dual Einstein $4$-manifolds with degenerate self-dual Weyl
tensor $W ^+$, such that $|W ^+|$ is not constant.
\item Self-dual Einstein Hermitian $4$-manifolds which are neither conformally-flat nor K{\"a}hler.
\item Self-dual K{\"a}hler manifolds with nowhere vanishing and non-constant scalar curvature.
\end{enumerate}
In this correspondence, the Riemannian metrics are defined on the same manifold and belong to the same conformal class. Observe that each class is defined by an algebraic closed condition (the vanishing of some tensors) and an open genericity condition.
Since the compact case is completely understood, see e.g. \cite{BYC} or \cite{De,Bu,It,Bo4, ADM} for a classification, the paper will concentrate on the local situation.
The first known examples of (non-locally-symmetric) self-dual Einstein Hermitian metrics were metrics of cohomogeneity one under the isometric action of a four-dimensional Lie group. Einstein metrics which are of cohomogeneity one under the action of a four-dimensional Lie group are automatically Hermitian \cite{De}. By using this remark, A. Derdzi\'nski constructed \cite{de} a family of cohomogeneity-one self-dual Einstein Hermitian metrics under the action of ${\mathbb R}\times {\rm Isom}({\Bbb R}^2)$, U(1,1) and U(2); this family actually includes (in a rather implicit way) the well-known {\it Pedersen-LeBrun metrics} \cite{P,Le} which play an important r\^ole in Section 3 of this paper.
It is {\it a priori} far from obvious that there are any other examples of self-dual Einstein Hermitian 4-manifolds, since the conditions of being self-dual, Einstein and Hermitian constitute an over-determined second order PDE system for the metric $g$. We show however that there are actually many other examples; more precisely, we classify all local solutions of this system and provide a simple, {\it explicit} (local) Ansatz for self-dual Einstein Hermitian 4-manifolds (see Theorem \ref{th3} and Lemma \ref{integrate} for a precise statement).
An amazing, a priori unexpected fact comes out of the argument and explains a posteriori the integrability of the above-mentioned Frobenius system: {\it all self-dual Einstein Hermitian metrics admit a local isometric action of ${\Bbb R}^2$ with two-dimensional orbits} (Theorem \ref{th3} and Remark 3). In particular, these metrics locally fall into the more general context of self-dual metrics with torus action considered in \cite{joyce} and, more recently, in \cite{Ca0,Ca1} (see Remark \ref{rem3} (ii)).
It turns out that this property of having more (local) symmetries than expected is actually shared by K{\"a}hler metrics with vanishing Bochner tensor in all dimensions, as shown in the recent work of R. Bryant \cite{Br} (see \cite{Br} for precise statements). Since the Bochner tensor of a K{\"a}hler manifold of real dimension four is the same as the anti-self-dual tensor $W ^-$ --- so that Bochner-flat K{\"a}hler metrics are a natural generalization of self-dual K{\"a}hler metrics in higher dimensions --- by using the correspondence given by Lemma 2, Bryant's work provides an alternative approach to our classification in Section 2.
Moreover, Bryant's work includes a large section devoted to complete metrics; in particular, by specifying his general techniques to dimension four, he has been able (again via Lemma 2) to give {\it complete} examples of self-dual Einstein Hermitian 4-manifolds, corresponding to the generic case considered in Theorem \ref{th3}.
The paper is organized as follows:
Section 1 displays the background material; the notation closely follows our previous work \cite{AG} --- with the exception of the Lee form, whose definition here is slightly different --- and we send back the reader to \cite{AG} for more details and references.
Section 2.1 provides a complete description of (locally defined) cohomogeneity-one self-dual Einstein Hermitian metrics (Theorem \ref{th2}). It turns out that they all admit a local isometric action (with three-dimensional orbits) of certain four-dimensional Lie groups, such that the metrics can be put in a {\it diagonal} form; in other words, they are {\it biaxial diagonal Bianchi metrics of type A}, see {e.g.} \cite{Tod1,CP}. Theorem \ref{th2} relies on the fact that every (non-locally-symmetric) self-dual Einstein Hermitian metric $(g,J)$ has a distinguished non-trivial Killing field, namely
$K= J{\rm grad}_g(|W^+|^{-\frac{1}{3}})$, \cite{De}. Then, the Jones-Tod reduction with respect to $K$ \cite{Tod2} provides a three-dimensional space of {\it constant curvature}. The diagonal form of the metrics follows from \cite{Tod2} and \cite{Tod1} (a unified presentation for these cohomogeneity-one metrics also appears in \cite{CP}). To the best of our knowledge, apart from these metrics no other examples of self-dual Einstein Hermitian metrics were known in the literature (see however Section 4).
Section 2.2 is devoted to the generic case, when the metric is neither locally-symmetric nor of cohomogeneity one. Our approach is similar to Armstrong's approach in \cite{Arm}: When considering the Einstein condition alone, the Riemannian Goldberg-Sachs theorem together with Derdzi\'nski's results reported above imply a number of relations for the 4-jet of an Einstein Hermitian metric (Sec. 2.1, Proposition \ref{prop2}); these happen to be the only obstructions for prolonging the 3-jet solutions of the problem to 4-jet solutions, and no further obstructions appear when reducing the equations for non-K{\"a}hler, non-anti-self-dual Hermitian Einstein 4-manifolds to a (simple) perturbed ${\rm SU}(\infty)$-Toda field equation \cite{Arm, Pl-Pr}. If, moreover, we insist that $g$ be also {\it self-dual}, we find further relations for the 5-jet of the metric and we show that they have the form of an integrable closed Frobenius system of PDEs for the parameter space of the 4-jet of the metric. We thus prove the local existence of non-locally symmetric and non-cohomogeneity-one self-dual Einstein Hermitian metrics (Theorem \ref{th3}). It turns out that this Frobenius system can be explicitly integrated (Lemma 3). We thus obtain a uniform local description for {\it all} self-dual Einstein Hermitian metrics in an explicit way.
Section 3 is devoted to the subclass of self-dual Einstein Hermitian metrics which admit a compatible, non-closed, anti-self-dual hypercomplex structure. This is the same, locally, as the class of self-dual Einstein Hermitian
metrics which admit a non-closed Einstein-Weyl connection (see
Section 1.2). From this viewpoint, it is a particular case of
four-dimensional conformal metrics which admit two distinct
Einstein-Weyl connections. In our case, one of them is the
Levi-Civita connection of the Einstein metric, whereas the other one
is {\it non-closed}, hence, because of Proposition \ref{prop4},
attached to a non-closed hyperhermitian structure. (Recall that a conformal 4-manifold admitting two distinct {\it closed} Einstein-Weyl structures is necessarily conformally flat (folklore), and that, conversely, every conformally flat 4-manifold only admits closed Einstein-Weyl structures \cite{ET}, see also Proposition \ref{prop4} and Corollary \ref{cor1} below).
It turns out that self-dual Einstein Hermitian
metrics which admit a compatible, non-closed, anti-self-dual hypercomplex structure, actually admit a second one and thus fall in the {\it bi-hypercomplex} situation described by Madsen in
\cite{Mad}; in particular, these metrics admit a local action of ${\rm U(2)}$, with three-dimensional orbits, and are diagonal Bianchi IX
metrics, see Theorem \ref{th1} below.
Notice that a general description of (anti-self-dual) metrics admitting two distinct compatible hypercomplex structures appears in \cite{Ca2}, see also \cite{BCM}, whereas a family of self-dual Einstein metrics with compatible non-closed hyperhermitian structures, parameterized by holomorphic functions of one variable, has been constructed in \cite{CT}.
In Section 4, we show that all anti-self-dual,
Einstein four dimensional {\it orbifolds} obtained by quaternionic
K{\"a}hler reduction from the eight dimensional quaternionic K{\"a}hler
Wolf spaces ${\mathbb H}{P}^2$, $SU(4)/S(U(2)U(2))$
and their non-compact duals
(see \cite{galicki1,galicki2} and \cite{G-L}) are
actually Hermitian with respect to the opposite orientation,
hence locally isomorphic to metrics described in Section 2.
These orbifolds include the {\it weighted projective planes}
${\mathbb C} P^{[p_1,p_2,p_3]}$ for
integers $0<p_1\le p_2\le p_3$ satisfying $p_3<p_1+p_2$,
cf. \cite[Sec. 4]{G-L}. On these orbifolds,
Bryant has constructed Bochner-flat
K{\"a}hler metrics with everywhere positive scalar curvature,
hence also self-dual, Einstein Hermitian metrics
according to Lemma 2 below, \cite[Sec. 4.3]{Br};
in view of the results of Section 2,
Galicki-Lawson's and Bryant's metrics agree locally,
but the issue as to whether they agree globally remains unclear.
\noindent {\bf Acknowledgments.} The first-named author thanks the Dipartimento di Matematica, Universit{\`a} di Rome Tre and the Max-Planck-Institut in Bonn for hospitality during the preparation of this paper. He would like to express his gratitude to J. Armstrong for explaining his approach to Einstein Hermitian metrics and many illuminating discussions. The authors warmly thank S. Salamon for being an initiator of this work and for gently sharing his expertise, and C. LeBrun, C. Boyer, K. Galicki, whose comments are at the origin of the last section of the paper. It is also a pleasure for us to thank N. Hitchin, S. Marchiafava, H. Pedersen, P. Piccinni, M. Pontecorvo and K.P. Tod for their interest and stimulating conversations, and R. Bryant for his interest and remarks.
Finally, a special acknowledgment is due to D. Calderbank for his friendly assistance in carefully reading the manuscript, checking computations, correcting mistakes and suggesting improvements; in particular, he decisively contributed to improving the paper by pointing out a mistake in a former version and thus revealing the rational character of the metrics described in Section 2.2.
\section{Einstein metrics, Hermitian structures and Einstein-Weyl geometry in dimension 4}
\subsection{Einstein metrics and compatible Hermitian structures}
In the whole paper $(M, g)$ denotes an oriented Riemannian four-dimensional manifold.
A specific feature of the four-dimensional Riemannian geometry is the splitting \begin{equation} \label{split1} AM = A ^+M \oplus A ^-M, \end{equation} of the Lie algebra bundle, $AM$, of skew-symmetric endomorphisms of the tangent bundle, $TM$, into the direct sum of two Lie algebra subbundles, $A ^{\pm}M$, derived from the Lie algebra splitting $\mathfrak{so} (4) = \mathfrak{so} (3) \oplus \mathfrak{so}(3)$ of the orthogonal Lie algebra $\mathfrak{so} (4)$ into the direct sum of two copies of $\mathfrak{so} (3)$.
A similar decomposition occurs for the bundle $\Lambda ^2M$ of $2$-forms \begin{equation} \label{split2} \Lambda ^2 M = \Lambda ^+M \oplus
\Lambda ^-M, \end{equation} given by the spectral decomposition of the Hodge-star operator, $*$, whose restriction to $\Lambda ^2 M$ is an involution; here, $\Lambda ^{\pm} M$ is the eigen-subbundle for the eigenvalue $\pm 1$ of $*$.
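For concreteness (a standard local description, recalled here for the reader's convenience), with respect to any oriented orthonormal coframe $e^1, e^2, e^3, e^4$, the bundles $\Lambda ^{\pm} M$ are locally trivialized by the $2$-forms $$e^1 \wedge e^2 \pm e^3 \wedge e^4, \quad e^1 \wedge e^3 \mp e^2 \wedge e^4, \quad e^1 \wedge e^4 \pm e^2 \wedge e^3,$$ as one checks directly from $*(e^1 \wedge e^2) = e^3 \wedge e^4$, $*(e^1 \wedge e^3) = - e^2 \wedge e^4$ and $*(e^1 \wedge e^4) = e^2 \wedge e^3$; in particular, $\Lambda ^+ M$ and $\Lambda ^- M$ are both of rank $3$.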
Both decompositions are actually determined by the conformal metric $[g]$ only. When $g$ is fixed, $\Lambda ^2 M$ is identified to $AM$ by setting: $\psi (X, Y) = g(\Psi (X), Y)$, for any $\Psi$ in $AM$ and any vector fields $X, Y$; then, we can arrange signs in (\ref{split1}) so that (\ref{split1}) and (\ref{split2}) are identified to each other. A similar decomposition and a similar identification occur for the bundle $\Lambda ^2 (TM)$ of bivectors.
Sections of $\Lambda ^+M$, resp. $\Lambda ^-M$, are called {\it
self-dual}, resp. {\it anti-self-dual}, and similarly for
sections of $AM$ or $\Lambda ^2 (TM)$.
In the sequel, the vector bundles $AM$, $\Lambda ^2 M$ and $\Lambda ^2 (TM)$ will be freely identified to each other; similarly, the cotangent bundle $T ^*M$ will be freely identified to $TM$; when no confusion can arise, the inner product determined by $g$ will be simply denoted by $(\cdot, \cdot)$; we adopt the convention that $(\Psi _1 , \Psi _2) = - \frac{1}{2} {\rm tr} \, (\Psi _1 \circ \Psi _2)$, for sections of $AM$, and the corresponding convention for $\Lambda ^2 M$ and $\Lambda ^2 (TM)$.
The Riemannian curvature, $R$, is defined by $R _{X, Y} = D ^g _{[X, Y]} - [D ^g _X, D ^g _Y],$ where $D ^g$ denotes the Levi-Civita connection of $g$; $R$ is thus an $AM$-valued $2$-form, but will rather be considered as a section of the bundle $S^2 (\Lambda ^2M)$ of symmetric endomorphisms of $\Lambda ^2 M$.
The Weyl tensor, $W$, commutes with $*$ and, accordingly, splits as $W = W ^+ + W ^-$, where $W ^{\pm} = \frac{1}{2} (W \pm W \circ *)$; $W ^+$ is called the {\it self-dual Weyl tensor}; it acts trivially on $\Lambda ^- M$ and will be considered in the sequel as a field of (symmetric, trace-free) endomorphisms of $\Lambda ^+ M$; similarly, the {\it anti-self-dual Weyl tensor} $W ^-$ will be considered as a field of endomorphisms of $\Lambda ^ - M$.
The Ricci tensor, ${\rm Ric}$, is the symmetric bilinear form defined by ${\rm Ric} (X, Y) = {\rm tr} \, \{ Z \to R _{X, Z} Y \}$; alternatively, ${\rm Ric} (X, Y) = \sum _{i = 1}^4 (R _{X, e _i} Y, e _i)$ for any $g$-orthonormal basis $\{ e _i \}$. We then have ${\rm Ric} = \frac{s}{4} \, g + {\rm Ric} _0$, where $s$ is the scalar curvature (= the trace of ${\rm Ric}$ with respect to $g$) and ${\rm Ric} _0$ is the {\it trace-free Ricci tensor}. The latter can be made into a section of $S ^2 (\Lambda ^2 M)$, then denoted by $\widetilde{{\rm Ric} _0}$, by putting $ \widetilde{{\rm Ric} _0} (X \wedge Y) = {\rm Ric} _0 (X) \wedge Y + X
\wedge {\rm Ric} _0 (Y). $
It is readily checked that $\widetilde{{\rm Ric} _0}$ satisfies the first Bianchi identity, i.e. $\widetilde{{\rm Ric} _0}$ is a tensor of the same kind as $R$ itself, as well as $W ^+$ and $W ^-$; moreover, $ \widetilde{{\rm Ric} _0}$ anti-commutes with $*$, so that it can be viewed as a field of homomorphisms from $\Lambda ^+ M$ into $\Lambda ^- M$, or from $\Lambda ^- M$ into $\Lambda ^+ M$ (adjoint to each other); we eventually get the well-known Singer-Thorpe decomposition of $R$, see e.g. \cite{besse}:
\begin{equation}\label{SO(4)} R = \frac{s}{12} \, {\rm Id} _{| \Lambda ^2 M} +
\frac{1}{2} \widetilde{{\rm Ric}} _0 +
W ^+ + W ^-, \end{equation} or, in a more pictorial way \begin{equation*} R = \begin{pmatrix} W ^+ + \frac{s}{12} \, {\rm Id} _{| \Lambda ^+ M} & \frac{1}{2} \widetilde{{\rm Ric} _0} _{| \Lambda ^- M} \\[2mm] \frac{1}{2} \widetilde{{\rm Ric} _0} _{| \Lambda ^+ M} & W ^- + \frac{s}{12} \, {\rm Id} _{| \Lambda ^- M} \end{pmatrix} \end{equation*}
The metric $g$ is {\it Einstein} if ${\rm Ric} _0 = 0$ (equivalently, $g$ is Einstein if $R$ commutes with $*$).
The metric $g$ (or rather the conformal class $[g]$) is {\it
self-dual} if $W ^- = 0$; {\it anti-self-dual} if $W ^+ = 0$.
An {\it almost-complex structure} $J$ is a field of automorphisms of $TM$ of square $ - {\rm Id}|_{TM}$. An {\it integrable} almost-complex structure is simply called a {\it complex structure}.
In this paper, the metric $g$, or its conformal class $[g]$, is fixed and we only consider $g$-orthogonal almost-complex structures, i.e. almost-complex structures $J$ satisfying the identity $g (JX, JY) = g (X, Y)$, so that the pair $(g, J)$ is an {\it almost-Hermitian structure}; then, the associated bilinear form, $F$, defined by $F (X, Y) = g(JX, Y)$ is a $2$-form, called the {\it
K{\"a}hler form}.
The pair $(g, J)$ is {\it Hermitian} if $J$ is integrable; {\it
K{\"a}hler} if $J$ is parallel with respect to the Levi-Civita
connection $D ^g$; if $(g, J)$ is K{\"a}hler then $J$ is integrable and $F$
is closed; conversely, these two conditions together imply that $(g, J)$ is
K{\"a}hler.
A $g$-compatible almost-complex structure $J$ is either a section of $A ^+M$ or a section of $A ^-M$; it is called {\it positive}, or {\it self-dual}, in the former case, {\it negative}, or {\it anti-self-dual}, in the latter case. Accordingly, the K{\"a}hler form is either self-dual or
anti-self-dual. Conversely, any section $\Psi$ of $A ^+M$, resp. $A ^- M$, such that $|\Psi| ^2 = 2$, is a positive, resp. negative, $g$-orthogonal almost-complex structure. It follows that any non-vanishing section, $\Psi$, of $A ^+M$ --- if any --- determines a (positive) almost-complex structure
$J$, defined by $J = \sqrt{2} \frac{\Psi}{|\Psi|}$ (similarly for non-vanishing sections of $A ^- M$).
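As a concrete sanity check (our own illustration on flat ${\mathbb R}^4$, not part of the text), the following snippet realizes the standard frame $I_1, I_2, I_3$ of $A ^+ M$ as $4 \times 4$ matrices, with the inner product convention $(\Psi _1, \Psi _2) = - \frac{1}{2} {\rm tr} \, (\Psi _1 \circ \Psi _2)$ adopted above, and verifies that every normalized section of $A ^+ M$ is a $g$-orthogonal almost-complex structure:

```python
import numpy as np

# Standard frame I1, I2, I3 of A^+M on flat R^4, written as matrices via
# psi(X, Y) = g(Psi(X), Y) in the oriented orthonormal basis e1, ..., e4.
I1 = np.array([[0., -1, 0, 0], [1, 0, 0, 0], [0, 0, 0, -1], [0, 0, 1, 0]])
I2 = np.array([[0., 0, -1, 0], [0, 0, 0, 1], [1, 0, 0, 0], [0, -1, 0, 0]])
I3 = np.array([[0., 0, 0, -1], [0, 0, -1, 0], [0, 1, 0, 0], [1, 0, 0, 0]])

def inner(P, Q):
    # the paper's convention: (Psi1, Psi2) = -1/2 tr(Psi1 o Psi2)
    return -0.5 * np.trace(P @ Q)

# quaternionic relations: I1 o I2 = -I2 o I1 = I3
assert np.allclose(I1 @ I2, I3) and np.allclose(I2 @ I1, -I3)
# each I_i is an almost-complex structure, normalized so that |I_i|^2 = 2
for I in (I1, I2, I3):
    assert np.allclose(I @ I, -np.eye(4)) and np.isclose(inner(I, I), 2.0)

# any section Psi = a I1 + b I2 + c I3 satisfies Psi o Psi = -(|Psi|^2/2) Id,
# so a normalized section (|Psi|^2 = 2) squares to -Id
a, b, c = 0.3, -1.2, 0.7
Psi = a * I1 + b * I2 + c * I3
assert np.allclose(Psi @ Psi, -0.5 * inner(Psi, Psi) * np.eye(4))
J = np.sqrt(2) * Psi / np.sqrt(inner(Psi, Psi))
assert np.allclose(J @ J, -np.eye(4))
```

The identity $\Psi \circ \Psi = - \frac{1}{2} |\Psi| ^2 \, {\rm Id}$ used in the last step follows from the quaternionic relations, since the cross terms $I_i \circ I_j + I_j \circ I_i$, $i \neq j$, cancel.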
Whereas the existence of a (positive) $g$-orthogonal almost-complex structure is a purely topological problem, the similar issue for {\it complex} structures heavily depends on the geometry of $g$, and this dependence is essentially measured by the self-dual Weyl tensor $W ^+$.
This assertion can be made more precise in the following way. We denote by $\lambda _ + \geq \lambda _0 \geq \lambda _-$ the eigenvalues of $W ^+$ at some point, $x$, of $M$, and we assume that $W ^+$ does not vanish at $x$; equivalently, since $W ^+$ is trace-free, we assume that $\lambda _+ -
\lambda _-$ is positive; we denote by $F _{+}$ an eigenform of $W ^+$ with respect to $\lambda _+$, normalized by $|F _+| ^2 = 2$; similarly, $F _-$ denotes an eigenform of $W ^+$ for $\lambda _-$, again normalized by $|F _-| ^2 = 2$; the {\it roots}, $P$, of $W ^+$ at $x$ are then defined by $P = \frac{(\lambda_+ - \lambda_0 )^{\frac{1}{2}}}{(\lambda_+ -
\lambda_- )^{\frac{1}{2}}}F_- + \frac{(\lambda_0 - \lambda_- )^{\frac{1}{2}}}{(\lambda_+ - \lambda_-)^{\frac{1}{2}}}F_+;$ it is easily checked that this expression actually determines {\it
two} distinct pairs of opposite roots in the generic case, when the eigenvalues are all distinct, and {\it one} pair in the degenerate case, when $\lambda _0$ is equal to either $\lambda _+$ or $\lambda _-$.
It is a basic fact that when $J$ is a positive, $g$-orthogonal {\it complex} structure defined on $M$, the value of $J$ at any point $x$ where $W ^+$ does not vanish must be equal to a root of $W ^+$ at that point. This means that on the open subset of $M$ where $W ^+$ does not vanish, the conjugacy class of a positive, $g$-orthogonal complex structure --- if any --- is almost entirely determined by $g$ (in fact by $[g]$), with at most a $2$-fold ambiguity.
On the other hand, it is an easy consequence of the integrability theorem in \cite{AHS} that $A ^+M$ can be locally trivialized by integrable (positive, $g$-orthogonal) almost-complex structures if and only if $[g]$ is anti-self-dual.
In the sequel, $W ^+$ will be called {\it degenerate} at some point $x$ if it has at most two distinct eigenvalues at that point. The terms {\it
anti-self-dual} and {\it non-anti-self-dual} will be abbreviated as ASD and non-ASD respectively.
For a given non-ASD metric $g$ it is a subtle question to decide whether the roots of $W ^+$ actually provide complex structures (this is of course not true in general). The situation is quite different if $g$ is Einstein. It is then settled by the following (weak) Riemannian version of the Goldberg-Sachs theorem, cf. \cite{De,PB,Nu,AG}: \begin{prop}\label{prop1} Let $(M,g)$ be an oriented Einstein 4-manifold; then the following three conditions are equivalent: \begin{enumerate} \item[(i)] $W^+$ is everywhere degenerate;
\item[(ii)] there exists a positive $g$-orthogonal complex structure in a neighbourhood of each point of $M$;
\item[(iii)] $(M,g)$ is either ASD or $W^+$ has two distinct eigenvalues at each point. \end{enumerate} \end{prop}
A consequence of this proposition is that the self-dual Weyl tensor $W ^+$ of a non-ASD Einstein Hermitian $4$-manifold nowhere vanishes and has two distinct eigenvalues at any point, one simple, the other one of multiplicity $2$; moreover, the K{\"a}hler form $F$ is an eigenform of $W ^+$ for the simple eigenvalue. Conversely, for any oriented, Einstein $4$-manifold whose $W ^+$ has two distinct eigenvalues, the generator of the simple eigenspace of $W^+$ determines a (positive) Hermitian structure.
For any positive $g$-orthogonal almost-complex structure $J$, $A ^+ M$ splits as follows: \begin{equation} \label{splitJ1} A ^+ M = {\mathbb R}\cdot {J} \oplus A ^{+, 0} M, \end{equation} where ${\mathbb R}\cdot{J}$ is the trivial subbundle generated by $J$ and $A ^{+, 0} M$ is the orthogonal complement (equivalently, $A ^{+, 0} M$ is the subbundle of elements of $A ^+ M$ that anticommute with $J$); $A ^{+, 0} M$ is a rank $2$ vector bundle and will be also considered as a complex line bundle by putting $J \Phi = J \circ \Phi$. We have the corresponding decomposition \begin{equation} \label{splitJ2} \Lambda ^+ M = {\mathbb R}\cdot{F} \oplus \Lambda
^{+, 0} M, \end{equation} where $\Lambda
^{+, 0} M$ is the subbundle of $J$-anti-invariant $2$-forms,
i.e. $2$-forms satisfying $\phi (JX, JY) = - \phi (X, Y)$; again, $\Lambda
^{+, 0} M$ is considered as a complex line bundle by putting $(J
\phi) (X, Y) = - \phi (JX, Y) = - \phi (X, JY)$. As complex line
bundles, both $A ^{+, 0} M$ and $\Lambda
^{+, 0} M$ are identified to the {\it anti-canonical bundle} $K
^{-1} M = \Lambda ^{0, 2} M$ of the (almost-complex) manifold $(M,
J)$.
For an Einstein, Hermitian $4$-manifold, the action of $W ^+$ preserves the decompositions (\ref{splitJ1}) and (\ref{splitJ2}).
The {\it Lee form} of an almost-Hermitian structure $(g, J)$ is the real $1$-form, $\theta$, defined by \begin{equation} \label{lee} {\rm d} F = - 2 \theta \wedge F; \end{equation} equivalently, $\theta = - \frac{1}{2} J \, \delta F$, where $\delta$ denotes the co-differential with respect to $g$ (here, and henceforth, the action of $J$ on $1$-forms is defined via the identification $T ^*M \simeq TM$ given by the metric; we thus have $(J \alpha ) (X) = - \alpha (JX)$, for any $1$-form $\alpha$). The reason for the choice of the factor $-2$ in (\ref{lee}) will be clear in the next subsection (notice that a different normalization is used in our previous work \cite{AG}).
When $(g, J)$ is Hermitian, it is K{\"a}hler if and only if $\theta$ vanishes identically; it is conformally K{\"a}hler if and only if $\theta$ is exact, i.e. $\theta = - {\rm d} \ln{f}$ for a positive smooth real function $f$ (then, $J$ is K{\"a}hler with respect to the conformal metric $g' = f ^{-2} \, g)$; it is locally conformally K{\"a}hler --- lcK for short --- if and only if $\theta$ is closed, hence locally of the above type.
The Lee form clearly satisfies $({\rm d} \theta, F) = 0$; this means that the self-dual part, ${\rm d}
\theta ^+$, of ${\rm d}
\theta $ is a section of the rank $2$ subbundle,
$\Lambda ^{+,0} M$.
In the Hermitian case, ${\rm d}
\theta ^+$ is an eigenform of $W ^+$ for the mid-eigenvalue $\lambda
_0$; moreover, $\lambda _0 = - \frac{\kappa}{12}$, where $\kappa$ is the {\it
conformal scalar curvature}, of which a more direct definition is
given in the next subsection; ${\kappa}$ is related to the (Riemannian) scalar curvature $s$ by \begin{equation} \kappa = s +
6 \, (\delta \theta - |\theta| ^2), \end{equation} and we also have \begin{equation} \kappa = 3 \, (W ^+ (F), F), \end{equation} see \cite{Va2,Ga1}. Notice that, in the Hermitian case, the mid-eigenvalue $\lambda _0$ of $W ^+$ is always a {\it smooth} function (this, however, is not true in general for the remaining two eigenvalues of $W ^+$, $\lambda _+$ and $ \lambda _-$, which are given by: $$\lambda_{\pm}=\frac{1}{24}\kappa \pm \frac{1}{8}(\kappa^2 +
32|{\rm d} \theta ^+|^2)^{\frac{1}{2}},$$ cf. \cite{AG}).
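As a consistency check of these formulae (not part of the original argument), since $W ^+$ is trace-free and $\lambda _0 = - \frac{\kappa}{12}$, one must have $\lambda _+ + \lambda _- = \frac{\kappa}{12}$; indeed the displayed expressions give $$\lambda _+ + \lambda _- = \frac{\kappa}{12}, \qquad \lambda _+ - \lambda _- = \frac{1}{4} \big( \kappa ^2 + 32 |{\rm d} \theta ^+| ^2 \big) ^{\frac{1}{2}},$$ and $\lambda _0$ coincides with $\lambda _+$ or $\lambda _-$, i.e. $W ^+$ is degenerate, exactly when $(\kappa ^2 + 32 |{\rm d} \theta ^+| ^2) ^{\frac{1}{2}} = |\kappa|$, that is, when ${\rm d} \theta ^+ = 0$.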
It follows that for Hermitian 4-manifolds the following three conditions are equivalent (cf. \cite{Bo2,AG}):
\begin{enumerate} \item[(i)] ${\rm d} \theta ^+=0$;
\item[(ii)] $W^+$ is degenerate;
\item[(iii)] $F$ is an eigenform of $W^+$. \end{enumerate}
\noindent (In the latter case $F$ is actually an eigenform for the simple eigenvalue of $W ^+$, which is then equal to $\frac{\kappa}{6}$, {\rm also} equal to $\lambda _+$ or $\lambda _-$ according as $\kappa$ is positive or negative). If, moreover, $M$ is compact, any one of the above three conditions is equivalent to $(g,J)$ being locally conformally K{\"a}hler; if, in addition, the first Betti number of $M$ is even, $(g, J)$ is then globally conformally K{\"a}hler \cite{Va1}.
By Proposition \ref{prop1} we conclude that for every Einstein Hermitian $4$-manifold, we have ${\rm d} \theta ^+=0$, i.e. ${\rm d} \theta$ is self-dual. In fact, a stronger statement is true, see \cite[Prop.1]{AG} and \cite[Prop.4]{De}: \begin{prop}\label{prop2} Let $(M,g,J)$ be an Einstein, non-ASD Hermitian 4-manifold. Then the conformal scalar curvature $\kappa$ nowhere vanishes and the Lee form $\theta$
is given by \ $\theta = \frac{1}{3}{\rm d}\ln{|\kappa|}$ {\rm (}in particular, $(g,J)$ is conformally K{\"a}hler{\rm )}.
If, moreover, $\kappa$ is not constant, i.e. if $(g,J)$ is not K{\"a}hler, then $K=J\,{\rm grad}_g(\kappa^{-\frac{1}{3}})$ is a non-trivial Killing vector field with respect to $g$, holomorphic with respect to $J$. \end{prop}
\subsection{Einstein-Weyl structures and anti-self-dual conformal
metrics}
Another specific feature of the four-dimensional geometry is that to each conformal Hermitian structure $([g], J)$ is canonically attached a unique {\it Weyl connection} $D$ such that $J$ is parallel with respect to $D$; in other words, any Hermitian structure is ``K{\"a}hler'' in the extended context of Weyl structures (of course, $(g, J)$ is K{\"a}hler in the usual sense --- the only one used in this paper --- if and only if $D$ is the Levi-Civita connection of some metric in the conformal class $[g]$).
Recall that, given a conformal metric $[g]$, a Weyl connection (with respect to $[g]$) is a torsion-free linear connection, $D$, on $M$ which preserves $[g]$; the latter condition can be reformulated as follows: for any metric $g$ in $[g]$, there exists a real $1$-form $\theta _g$ such that $D g = - 2 \theta _g \otimes g$; $\theta _g$ is called the {\it Lee form} of $D$ with respect to $g$; then, the Weyl connection $D$ and the Levi-Civita connection $D ^g$ are related by $D = D ^g + \tilde{\theta} _g$, meaning \begin{equation}\label{D^J} D _X Y = D ^g _X Y + \theta _g (X) Y + \theta _g (Y)
X - g(X, Y) \, \theta _g ^{\sharp _g}, \end{equation} where $\theta _g ^{\sharp _g}$ is the Riemannian dual of $\theta _g$ with respect to $g$. If $g' = f ^{-2} g$ is another metric in $[g]$, the Lee form, $\theta _{g'}$, of $D$ with respect to $g'$ is related to $\theta _g$ by $\theta _{g'} = \theta _g + {\rm d} \ln{f}$.
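As an elementary check of the two defining properties (our own finite-dimensional illustration, not taken from the text), one may evaluate (\ref{D^J}) on flat ${\mathbb R}^4$ with constant vector fields and a constant Lee form $\theta$: the resulting connection is torsion-free and satisfies $Dg = - 2 \theta \otimes g$:

```python
import numpy as np

# Finite check of D_X Y = D^g_X Y + theta(X) Y + theta(Y) X - g(X,Y) theta^sharp
# on flat R^4: for constant fields, D^g_X Y = 0 and [X, Y] = 0, and with
# g = identity the dual theta^sharp is theta itself.
rng = np.random.default_rng(0)
g = np.eye(4)                # flat metric
theta = rng.normal(size=4)   # constant Lee form

def D(X, Y):
    # the Levi-Civita term vanishes for constant X, Y on flat R^4
    return (theta @ X) * Y + (theta @ Y) * X - (X @ g @ Y) * theta

X, Y, Z = rng.normal(size=(3, 4))
# torsion-free: D_X Y - D_Y X = [X, Y] = 0 for constant fields
assert np.allclose(D(X, Y), D(Y, X))
# (D_X g)(Y, Z) = X.g(Y,Z) - g(D_X Y, Z) - g(Y, D_X Z); the first term
# vanishes since g(Y, Z) is constant, and the rest equals -2 theta(X) g(Y,Z)
DXg = -(D(X, Y) @ g @ Z) - (Y @ g @ D(X, Z))
assert np.isclose(DXg, -2 * (theta @ X) * (Y @ g @ Z))
```

Expanding $- g(D _X Y, Z) - g(Y, D _X Z)$ term by term, the mixed contributions $\theta (Y) g(X, Z)$ and $\theta (Z) g(X, Y)$ cancel in pairs, leaving exactly $- 2 \theta (X) g(Y, Z)$, which is what the last assertion tests.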
A Weyl connection $D$ is the Levi-Civita connection of some metric in the conformal class $[g]$ if and only if its Lee form with respect to any metric $g$ in $[g]$ is exact, i.e. $\theta _g = - {\rm
d} \ln{f}$; then, $D = D ^{f ^{-2} g}$; such a Weyl connection is called {\it exact}. More generally, a Weyl connection is said to be {\it closed} if its Lee form with respect to any metric in $[g]$ is closed; then, $D$ is locally of the above type, i.e. locally the Levi-Civita connection of a (local) metric in $[g]$.
The definitions of the curvature $R ^D$ and the Ricci tensor ${\rm Ric} ^D$ of a Weyl connection $D$ are formally identical to the ones we gave for $D ^g$ (notice that the derivation of ${\rm Ric} ^D$ from $R ^D$ requires no metric); however, $R ^D$ is now an $AM \oplus {\mathbb R} \, {{\rm Id}|_{TM}}$-valued $2$-form, i.e. has a {\it scalar part} equal to $F ^D \otimes{{\rm Id}|_{TM}}$, where the real $2$-form $F ^D$, the so-called {\it Faraday tensor} of the Weyl connection, is equal to $- {\rm d} \theta _g$ for any metric $g$ in $[g]$; moreover, ${\rm Ric} ^D$ is not symmetric in general: its skew-symmetric part is equal to $\frac{1}{2} F ^D$; ${\rm Ric}^D$ is thus symmetric if and only if $D$ is closed.
A Weyl connection $D$ is called {\it Einstein-Weyl} if the symmetric, trace-free part of ${\rm Ric} ^D$ vanishes; with respect to any metric $g$ in $[g]$, and by writing $\theta$ instead of $\theta _g$, this condition reads \begin{equation} \label{EW} D ^g \theta - \theta \otimes \theta + \frac{1}{4}
(\delta \theta + |\theta| ^2) \, g - \frac{1}{2} {\rm d} \theta -
\frac{1}{2} {\rm Ric} _0 = 0, \end{equation} see {e.g.} \cite{Ga2}; for a fixed metric $g$, (\ref{EW}) should be considered as an equation for an
unknown $1$-form $\theta$.
The {\it conformal scalar curvature} of $D$ with respect to $g$, denoted by $\kappa _g$, is the trace of ${\rm Ric}^D$ with respect to $g$; it is related to the (Riemannian) scalar curvature $s$ by: \begin{equation}\label{kappag}
\kappa _g = s + 6 \, (\delta \theta - |\theta |^2), \end{equation} see { e.g.} \cite{Ga2}.
A key observation is that the Lee form, $\theta$, of an almost-Hermitian structure $(g, J)$ is also the Lee form with respect to $g$ of the Weyl connection canonically attached to the conformal almost-Hermitian structure $([g], J)$; in other words, the Weyl connection $D$ defined by $ D = D ^g + \tilde{\theta}$ is actually independent of $g$ in its conformal class $[g]$. The Weyl connection $D$ defined in this way is called the {\it canonical Weyl connection} of the (conformal) almost-Hermitian structure $([g], J)$.
The scalar curvature $\kappa _g$ of $D$ with respect to $g$ is called the {\it conformal scalar curvature} of $(g, J)$; it coincides with the function $\kappa$ introduced in the previous subsection.
The canonical Weyl connection is an especially interesting object when $J$ is integrable, because of the following lemma:
\begin{Lemma} \label{weyl} {\rm (i)} $J$ is integrable if and only if $DJ = 0$.
{\rm (ii)} If $J _1$ and $J _2$ are two $g$-orthogonal complex structures, the corresponding canonical connections $D ^1$ and $D ^2$ coincide if and only if the scalar product $(J _1, J_2)$ is constant. \end{Lemma} \begin{proof} (i) The condition $DJ = 0$ reads \begin{equation} \label{integrable} D ^g _X J = [X \wedge \theta, J]; \end{equation} this identity is proved e.g. in \cite{Ga1,Va2}.
(ii) Let $p$ denote the
{\it angle function} of $J _1$ and $J_2$, defined by $p = -
\frac{1}{4} {\rm tr} \, (J _1 \circ J _2) = \frac{1}{2} (J _1,
J_2)$; we then have \begin{equation} J_1 \circ J _2 + J _2 \circ J _1 = - 2p \, {{\rm Id}|_{TM}}. \end{equation} Let $\theta _1$ and $\theta _2$ be the Lee forms of $D ^1$, $D ^2$; from (\ref{integrable}) applied to $J _1$, we infer $(D ^g _X J _1, J_2) = ([J_1, J_2] X, \theta _1)$; similarly, we have $(D ^g _X J _2, J_1) = ([J_2, J_1] X, \theta _2)$; putting together these two identities, we get \begin{equation} {\rm d} p = - \frac{1}{2} [J_1, J_2] (\theta _1 -
\theta _2). \end{equation} This obviously implies ${\rm d} p = 0$ if $D ^1 = D ^2$; the converse is also true, as the commutator $[J _1, J_2]$ is invertible at each point where $J _2 \neq \pm J_1$. \end{proof}
An {\it almost-hypercomplex structure} is the datum of three almost-complex structures, $I _1, I_2, I_3$, such that $$ I _1 \circ I _2 = - I _2 \circ I _1 = I _3.$$
Since $M$ is a four-dimensional manifold, any almost-hypercomplex structure $I _1, I_2, I_3$ determines a conformal class $[g]$ with respect to which each $I_i$ is orthogonal: $[g]$ is defined by decreeing that, for any non-vanishing (local) vector field $X$, the frame $X, I_1X, I_2X, I_3X$ is (conformally) orthonormal; for any $g$ in the conformal class defined in this way, we thus get an {\it
almost-hyperhermitian structure} $(g, I_1, I_2,I_3)$; notice that the $I_i$'s are pairwise orthogonal with respect to $g$, so that $I _1, I_2, I_3$ is a (normalized) orthonormal frame of $A ^+ M$; conversely, for a given Riemannian metric $g$, any (normalized) orthonormal frame of $A ^+ M$ is an almost-hypercomplex structure and, together with $g$, forms an almost-hyperhermitian structure.
An almost-hyperhermitian structure $(g, I_1, I_2,I_3)$ is called {\it hyperhermitian} if all $I_i$'s are integrable; it is called {\it
hyperk{\"a}hlerian} if the $I_i$'s are all parallel with respect to the Levi-Civita connection $D ^g$.
In the hyperhermitian case the canonical Weyl connections, $D ^1, D ^2, D ^3$, of the almost-Hermitian structures $(g, I_1)$, $(g, I_2)$, $(g, I_3)$ are the same by Lemma \ref{weyl}; the common Weyl connection, $D$, is called the {\it canonical Weyl connection} of the hyperhermitian structure.
Conversely, the condition $D ^1 = D ^2 = D ^3$ implies that $(g, I_1, I_2,I_3)$ is hyperhermitian (this observation is due to S. Salamon and F. Battaglia, see e.g. \cite{GT}).
The canonical Weyl connection of a hyperhermitian structure $(g, I_1, I_2,I_3)$ is closed if and only if $I_1, I_2,I_3$ is locally hyperk{\"a}hler with respect to some (local) metric belonging to the conformal class $[g]$; for brevity, a hyperhermitian structure will be called {\it closed} or {\it non-closed} according as its canonical Weyl connection is closed or non-closed.
\begin{rem} {\rm In general, for any given hypercomplex structure $I _1,
I_2, I _3$ on an $n$-dimensional manifold, there exists a {\it unique}
torsion-free linear connection on $M$ that preserves the $I
_i$'s, called the {\it Obata connection}; the canonical connection thus coincides with the
Obata connection; for $n > 4$ however, there is no conformal
metric canonically attached to $I _1,
I_2, I _3$ and, in general, the Obata connection is
not a Weyl connection.} \end{rem}
If $(g, I_1, I_2,I_3)$ is hyperhermitian, we have $D I_1 = D I_2 = D I_3 = 0$, where $D$ is the canonical Weyl connection acting on sections of $A ^+ M$; it follows that the connection of $A ^+M$ induced by $D$ is {\it flat}; conversely, if $D$ is a Weyl connection, whose induced connection on $A ^+ M$ is flat, then $A ^+ M$ can be locally trivialized by a $D$-parallel (normalized) orthonormal frame $I_1, I_2,I_3$, which, together with $g$, constitute a hyperhermitian structure.
The curvature, $R ^{D, A ^+M} $, of the induced connection is given by $R ^{D, A ^+M} _{X, Y} \Psi = [R ^D _{X, Y}, \Psi]$, where $R ^D _{X, Y}$ is understood as a field of endomorphisms of $TM$ --- more precisely a section of $A M \oplus {\mathbb R} \, {{\rm Id}|_{TM}}$ --- and $[R ^D _{X, Y}, \Psi]$ is the commutator of $R ^D _{X, Y}$ and $\Psi$; we easily infer that the vanishing of $R ^{D, A ^+M} $ is equivalent to the following four conditions:
\begin{enumerate} \label{swann}
\item $W ^+ = 0$;
\item $(F ^D) ^+ = 0$; if $\theta$ denotes the Lee form of $D$, this
also reads ${\rm d} \theta ^+ = 0$;
\item $D$ is Einstein-Weyl, i.e. the Lee form $\theta$ is solution of
(\ref{EW});
\item The scalar curvature of $D$ vanishes identically; in view of
(\ref{kappag}), this condition reads \begin{equation}\label{hypherm}
s = 6 \, (-\delta \theta + |\theta|^2). \end{equation}
\end{enumerate}
It follows from this discussion that, for an ASD Riemannian 4-manifold, the existence of a compatible hypercomplex structure is locally equivalent to the existence of an Einstein-Weyl connection satisfying the above conditions 2 and 4 (cf. \cite{PS} or \cite{GT}). In this correspondence, conformally hyperk{\"a}hler structures correspond to closed Einstein-Weyl structures. The existence of a non locally hyperk{\"a}hler, hyperhermitian structure is actually (locally) equivalent to the existence of a non-closed Einstein-Weyl connection, in view of the following result of D. Calderbank: \begin{prop}\label{prop4}{\rm (\cite{Ca})} Let $(M,[g],D)$ be an anti-self-dual Einstein-Weyl 4-manifold. Then either $D$ is closed, or else $D$ satisfies conditions 2 and 4 above, i.e. is the canonical Weyl connection of a hyperhermitian structure. \end{prop}
Notice that in the case when $M$ is compact, ${\rm d} \theta ^+ = 0$ implies ${\rm d}\theta=0$, hence {\it any} hyperhermitian structure is locally conformally hyperk{\"a}hler; a complete classification appears in \cite{Bo3}.
\section{Self-dual Einstein Hermitian 4-manifolds}
By Proposition \ref{prop2}, a Hermitian, Einstein 4-manifold, whose self-dual Weyl tensor $W ^+$ has constant eigenvalues is either anti-self-dual or K{\"a}hler-Einstein, \cite{De}. If, moreover, the metric $g$ is self-dual, this happens precisely when $g$ is locally-symmetric, i.e. when $(M, g)$ is a real or a complex space form, see
\cite{TV}. More generally, a self-dual Einstein 4-manifold is
locally-symmetric if and only if $W ^+$ is degenerate, with constant
eigenvalues, \cite{De}.
In the opposite case, we have the following lemma: \begin{Lemma} \label{de-ga} Non-locally-symmetric self-dual Einstein Hermitian metrics are in one-to-one correspondence with self-dual K{\"a}hler metrics of nowhere vanishing and non-constant scalar curvature. \end{Lemma} \begin{proof} Every self-dual Einstein Hermitian 4-manifold $(M,g,J)$ of non-constant curvature is conformally related (via Proposition \ref{prop2}) to a self-dual K{\"a}hler metric ${\bar g}$ of nowhere vanishing scalar curvature. A self-dual K{\"a}hler metric is locally-symmetric if and only if its scalar curvature is constant \cite{De}; thus, one direction of the correspondence stated in the lemma follows by observing that ${\bar g}$ is locally-symmetric as soon as $g$ is. Conversely, since the Bach tensor of a self-dual metric vanishes \cite{Gau3}, it follows from \cite[Prop.4]{De} that any self-dual K{\"a}hler metric of nowhere vanishing scalar curvature gives rise to an Einstein Hermitian metric in the same conformal class. \end{proof}
In the remainder of this section, $(M, g, J)$ is an Einstein, self-dual Hermitian $4$-manifold, and we assume that $g$ is {\it not}
locally-symmetric; in particular, $W ^+$ is degenerate, but its eigenvalues, $\lambda, - \frac{\lambda}{2}$, or, equivalently, its norm $|W^+| = \sqrt{\frac{3}{2}} \, |\lambda|$, are not constant.
Since $(M,g,J)$ is not K{\"a}hler (Proposition \ref{prop2}), by replacing $M$ with the dense open subset where the Lee form $\theta$ does not vanish, we shall assume throughout this section that $D^g J$ nowhere vanishes, see (\ref{integrable}).
For convenience, we choose a (local, normalized) orthonormal frame of
$\Lambda ^{+, 0} M$ of the form $\{ \phi, J \phi \}$, where $|\phi| = \sqrt{2}$; such a frame will be called a {\it gauge}. Then, the triple $\{ F, \phi, J \phi \}$ is a (local, normalized) orthonormal frame of $\Lambda ^ + M$.
Recall that by Proposition \ref{prop1} we have \begin{equation} \label{lambda0} W ^+ (\psi) = - \frac{\kappa}{12} \psi, \end{equation} for any section $\psi$ of $\Lambda ^{+, 0} M$, whereas \begin{equation} \label{lambda+} W ^+ (F) = \frac{\kappa}{6} F. \end{equation} With respect to the gauge $\{ \phi, J \phi \}$, the covariant derivative $ D^g F$ is written as \begin{equation}\label{DF} D^g F = \alpha\otimes \phi + J\alpha\otimes J\phi, \end{equation} where \begin{equation} \alpha = \phi (J \theta); \end{equation} equivalently,
\begin{equation} \label{phialpha} \phi = -\frac{1}{|\theta|^2}\big(\alpha\wedge J\theta + J\alpha\wedge \theta \big); \ \
J\phi = \frac{1}{|\theta|^2}\big(\alpha\wedge \theta - J\alpha\wedge J\theta \big). \end{equation} We also have \begin{equation} \label{Dphi} D^g \phi = - \alpha\otimes F + \beta\otimes J\phi; \ \ D^g (J\phi)= -J\alpha\otimes F - \beta\otimes \phi, \end{equation} for some 1-form $\beta$.
From (\ref{DF}), we infer \begin{eqnarray}\nonumber
(D^g)^2|_{\Lambda^2M} F &=& ({\rm d}\alpha + J\alpha\wedge \beta)\otimes \phi + ({\rm d}(J\alpha) -\alpha\wedge \beta)\otimes J\phi \\ \nonumber
& =& -R(J\phi)\otimes \phi + R(\phi)\otimes J\phi. \end{eqnarray} Because of (\ref{lambda0}), this reduces to \begin{equation}\label{ricci1} \left\{ \begin{array}{c@{ = }c} {\rm d}\alpha - \beta\wedge J\alpha \ & \frac{(\kappa -s)}{12}J\phi\\ {\rm d}(J\alpha) + \beta\wedge \alpha \ & -\frac{(\kappa -s)}{12}\phi. \end{array} \right. \end{equation} Similarly, because of (\ref{lambda+}), we infer the following additional relation from (\ref{Dphi}): \begin{equation}\label{ricci2} {\rm d}\beta + \alpha \wedge J\alpha = - \frac{(s + 2\kappa)}{12}F. \end{equation} Notice that the 1-forms $\alpha$ and $\beta$ are both {\it gauge dependent}; if $$\phi' = (\cos\varphi ) \phi + (\sin\varphi )J\phi $$ is another gauge, they transform to $$\alpha'= (\cos\varphi) \alpha + (\sin\varphi) J\alpha; \ \ \beta'= \beta + {\rm d}\varphi.$$
We next introduce 1-forms $n_i, m_i, i=1,2$ by \begin{equation}\label{Dtheta} D^g \theta = m_1\otimes \theta + n_1\otimes J\theta + m_2 \otimes \alpha + n_2\otimes J\alpha. \end{equation}
By (\ref{DF}) and (\ref{phialpha}) we derive \begin{equation}\label{DJtheta} \begin{array}{c@{}c} &D^g (J\theta) = -n_1\otimes \theta + m_1\otimes J\theta -(n_2+J\alpha)\otimes\alpha +(m_2 + \alpha)\otimes J\alpha; \\ & \ \ \ \ \ D^g \alpha \ = -m_2\otimes \theta + (n_2 + J\alpha)\otimes J\theta +
m_1\otimes\alpha - (n_1 -\beta)\otimes J\alpha; \\ & D^g(J\alpha) = - n_2\otimes \theta -(m_2 + \alpha)\otimes J\theta + (n_1-\beta)\otimes \alpha + m_1\otimes J\alpha. \end{array} \end{equation}
A straightforward computation, using identities (\ref{ricci1}) and the fact that the vector field $K= ({\kappa}^{-\frac{1}{3}}J\theta)^{\sharp_g}$, the dual of ${\kappa}^{-\frac{1}{3}}J\theta$, is Killing (see Proposition \ref{prop2}), gives the following expressions for $m_i$ and $n_i$: \begin{equation}\label{mn} \left\{ \begin{array}{c@{ = }c}
m_1 & m_0 + (p-\frac{(\kappa -s)}{24|\theta|^2} +\frac{1}{2}) \theta \\
n_1 & Jm_0 +(p-\frac{(\kappa -s)}{24|\theta|^2} -\frac{1}{2}) J\theta \\
m_2 & J\phi(m_0) -(p +\frac{(\kappa -s)}{24|\theta|^2} +\frac{1}{2})\alpha \\
n_2 & -\phi(m_0) -(p +\frac{(\kappa -s)}{24|\theta|^2} +\frac{1}{2})J\alpha, \end{array} \right. \end{equation} where $p$ is a smooth function, and $m_0$ is a 1-form which belongs to the distribution ${\mathcal D}^{\perp}= {\rm span} \{ \alpha , J\alpha \}$, the orthogonal complement of ${\mathcal D} = {\rm span} \{ \theta , J\theta \}$.
Since
$m_1 = {\rm d}\ln |\theta|$, the 1-form $m_0$ is nothing else than the projection of
${\rm d}\ln |\theta|$ to the subbundle ${\mathcal D}^{\perp}$. Moreover, with respect to any gauge $\phi$, we write \begin{equation}\label{m0} m_0 = q\alpha + r J\alpha, \end{equation} for some smooth functions $q$ and $r$.
In view of (\ref{integrable}), identities (\ref{Dtheta}) and (\ref{mn}) are conditions on the 2-jet of $J$. Since $J$ is completely determined by $W^+$ (see Proposition \ref{prop1}), these are the conditions on the 4-jet of the metric referred to in the introduction.
This completes the analysis of the Einstein condition; we now turn to how the vanishing of $W^-$ constrains further jets of $g$.
For that, we introduce the ``mirror frame'' of $\Lambda^-M$:
$${\bar F}= -F + \frac{2}{|\theta|^2}\theta\wedge J\theta; \ \
{\bar \phi} = \phi + \frac{2}{|\theta|^2} J\alpha\wedge \theta;$$
$$I{\bar \phi}= J\phi + \frac{2}{|\theta|^2} J\alpha\wedge J\theta, $$ where the {\it negative} almost Hermitian structure $I$, of which the anti-self-dual 2-form ${\bar F}$ is the K{\"a}hler form, is equal to $J$ on ${\mathcal D}$ and $-J$ on ${\mathcal D}^{\perp}$. By (\ref{DJtheta}) and the fact that $\theta= \frac{{\rm d}\kappa}{3\kappa}$, we obtain the following expression for the covariant derivative of the Killing vector field $K=(\kappa^{-\frac{1}{3}}J\theta)^{{\sharp_g}}$ \begin{equation}\label{DK}
D^g K = \kappa^{-\frac{1}{3}}|\theta|^2\big(q{\bar \phi} - rI{\bar\phi} - (p-\frac{1}{2}){\bar F} + \frac{(\kappa -s)}{24|\theta|^2}F\big). \end{equation} Moreover, since $K$ is Killing, we have \begin{equation} \label{killing} D^g_X \Psi = R(K,X), \end{equation} where $\Psi = D^g K$.
Considering the ASD parts of both sides of (\ref{killing}), we infer that the condition $W^-=0$ is equivalent to \begin{equation}\label{W^-=0} D^g(\Psi^-) = \frac{s}{24}(\bar{\phi}(K)\otimes {\bar \phi} + I\bar{\phi}(K)\otimes I\bar{\phi} + IK\otimes {\bar F}), \end{equation} where
$$\Psi^-= \kappa^{-\frac{1}{3}}|\theta|^2\big(q{\bar \phi} - rI{\bar\phi} - (p-\frac{1}{2}){\bar F}\big)$$ is the ASD part of $\Psi=D^gK$, see (\ref{DK}). Furthermore, by (\ref{Dtheta}) and (\ref{DJtheta}) one gets \begin{eqnarray}\nonumber D^g {\bar F} &=& -(2m_2 + \alpha)\otimes {\bar \phi} + (2Jm_2 +J\alpha) \otimes I{\bar \phi};\\ \label{I} D^g {\bar \phi} &=& \ \ \ (2m_2 +\alpha) \otimes {\bar F} + (2n_1- \beta)\otimes I{\bar \phi}; \\ \nonumber D^g I{\bar \phi} &=& -(2Jm_2+ J\alpha)\otimes {\bar F} - (2n_1 -\beta)\otimes {\bar \phi}. \end{eqnarray} Keeping in mind that
$\theta=\frac{{\rm d}\kappa}{3\kappa}$ and $m_1={\rm d}\ln|\theta|$, (\ref{W^-=0}) then reduces to \begin{eqnarray}\label{system1} {\rm d}p &=& -(p-\frac{1}{2})(2m_1 - \theta) + q(m_2+ \alpha) \\ \nonumber
& & + r(Jm_2 + J\alpha) -
\frac{s}{24|\theta|^2} \theta \\ \label{system2} {\rm d}q &=& -(p-\frac{1}{2})(m_2+ \alpha) - q(2m_1 - \theta) \\\nonumber
& & - r(2n_1 - \beta) - \frac{s}{24|\theta|^2} \alpha \\\label{system3} {\rm d}r &=& -(p-\frac{1}{2})(Jm_2+ J\alpha) + q(2n_1 - \beta) \\\nonumber
& & - r(2m_1 - \theta)-
\frac{s}{24|\theta|^2} J\alpha. \end{eqnarray} Now, taking into account (\ref{ricci1}) and (\ref{ricci2}), (\ref{system1})--(\ref{system3}) constitute a closed differential system that a self-dual Einstein Hermitian metric must satisfy; by (\ref{ricci1}), (\ref{ricci2}), (\ref{DJtheta}) and (\ref{mn}) one can directly check that the integrability conditions ${\rm d(d}p)={\rm d(d}q)={\rm d(d}r)=0$ are satisfied. This is a first indication that the existence of self-dual Einstein Hermitian metrics with prescribed 4-jet at a given point can be expected. To carry out this program explicitly, we first consider the case when $q\equiv 0, r\equiv 0$ and show that it precisely corresponds to {\it cohomogeneity-one} self-dual Einstein Hermitian metrics.
\subsection{Self-dual Einstein Hermitian metrics of cohomogeneity one}
A Riemannian 4-manifold $(M,g)$ is said to be (locally) {\it of cohomogeneity one}, if it admits a (local) isometric action of a Lie group $G$, with three-dimensional orbits. The manifold $M$ is then locally a product $$ M \cong (t_1,t_2)\times G/H.$$ The metric $g$ descends to a left invariant metric $h(t)$ on each orbit $\{ t \} \times G/H$, and, by an appropriate choice of the parameter $t$, can be written as $$g= dt^2 + h(t).$$ If, moreover, $(M,g)$ is Einstein and self-dual, and $G$ is at least of dimension four, then, according to a result of A. Derdzi\'nski \cite{De}, the spectrum of the self-dual Weyl tensor of $g$ is everywhere degenerate, and $g$ is Hermitian with respect to some invariant complex structure.
Here is a way of constructing such metrics, all belonging to the class of {\it diagonal Bianchi metrics of type A} (see e.g. \cite{Tod1}). Let $\widetilde{G}$ be one of the following six three-dimensional Lie groups: ${\Bbb R}^3$, ${\rm Nil}^3, {\rm Sol}^3$, Isom(${\Bbb R}^2$), SU(1,1) or SU(2); let $H$ be a discrete subgroup of $\widetilde{G}$ and consider, on $\widetilde{G} / H$, the family of diagonal metrics $h (t)$ of the form \begin{equation}\label{diagonal}
h(t) = A(t)\sigma_1^2 + B(t)\sigma_2^2 + C(t)\sigma_3^2, \end{equation} where $A,B,C$ are positive smooth functions, and $\sigma_i$ are the standard left invariant generators of the corresponding Lie algebras; we thus have $$d\sigma_1 = n_1 \sigma_2\wedge \sigma_3; \ d\sigma_2=-n_2\sigma_1\wedge\sigma_3; \ d\sigma_3 = n_3\sigma_1\wedge \sigma_2$$ for a triple $(n_1,n_2,n_3)$, $n_i \in \{-1,0,1\}$, depending on the chosen group, according to the following table:
\begin{center}
\begin{tabular}{|c|c|c|} \hline {\rm class} & $ n_1 \ \ \ n_2 \ \ \ n_3 $ & ${\widetilde G}$ \\ \hline \hline {\rm I} & $0 \ \ \ \ 0 \ \ \ \ 0\ $ & ${\Bbb R}^3$ \\ \hline {\rm II} & $0 \ \ \ \ 0 \ \ \ \ 1\ $ & ${\rm {Nil}^3}$\\ \hline ${\rm VI}_0$ & $1 \ \ {-1} \ \ \ 0\ $ & ${\rm {Sol}^3}$ \\ \hline ${\rm VII}_0$& $1 \ \ \ \ 1 \ \ \ \ 0\ $ & ${\rm Isom}({\Bbb R}^2)$ \\ \hline {\rm VIII} & $1 \ \ \ \ 1 \ {-1}\ $ & ${\rm SU}(1,1)$ \\ \hline {\rm IX} & $1 \ \ \ \ 1 \ \ \ \ 1\ $ & ${\rm SU(2)}$ \\ \hline \end{tabular} \end{center}
Except in Class ${\rm VI}_0$, when $A = B$ all these metrics admit a further (local) symmetry which rotates the $\{\sigma_1, \sigma_2 \}$-plane, i.e. we get the so-called {\it biaxial} Bianchi metrics, see e.g. \cite{CP}. We thus obtain diagonal Bianchi metrics of Class A, admitting a local isometric action of a four-dimensional Lie group $G$, where $G$ is ${\Bbb R}\times{\rm Isom({\Bbb R}^2)}$, {\rm U(1,1)}, {\rm U(2)}, or the non-trivial central extension of ${\rm Isom}({\Bbb R}^2)$ corresponding to biaxial Class II metrics. Clearly, any such metric admits a positive {\it and} a negative invariant Hermitian structure, $J$ and $I$, whose K{\"a}hler forms are given by $$F= \sqrt{C}dt\wedge \sigma_3 + A\sigma_1\wedge \sigma_2,$$ and $${\bar F} =\sqrt{C}dt\wedge \sigma_3 - A\sigma_1\wedge \sigma_2,$$ respectively. When imposing the Einstein and the self-duality conditions, we obtain an ODE system for the unknown functions $A$ and $C$, which can be explicitly solved, cf. e.g. \cite{P}, \cite{Le}, \cite{DS}, \cite{Tod1}, \cite{CP}, \cite{bergery}.
In the sequel, we shall simply refer to these (self-dual, Einstein, Hermitian) metrics as {\it diagonal Bianchi} metrics.
Notice that $4$-dimensional locally symmetric metrics, i.e. real and complex space forms, can also be written (in several ways) as diagonal Bianchi metrics. For example, self-dual Einstein Hermitian metrics in Class I are all flat \cite{Tod1}.
Our next result shows that, apart from locally symmetric spaces, diagonal Bianchi metrics in the above sense are actually {\it
all} (non-locally symmetric) cohomogeneity-one self-dual Einstein Hermitian metrics, and, in fact, can be characterized by the property $m_0\equiv 0$ in the notation of the preceding section. More
precisely, we have:
\begin{theo}\label{th2} Let $(M,g)$ be a self-dual Einstein 4-manifold. Suppose that $(M,g)$ is not locally symmetric. Then the following three conditions are equivalent: \begin{enumerate} \item[(i)] $(M,g)$ is of cohomogeneity one and the spectrum of $W^+$ is degenerate. \item[(ii)] $(M,g)$ admits a local isometric action of a Lie group of
dimension at least four, with three-dimensional orbits, and is locally isometric to a diagonal Bianchi self-dual Einstein Hermitian metric belonging to one of the classes ${\rm II}$, ${\rm VII}_0$, ${\rm VIII}$ or ${\rm IX}$. \item[(iii)] $(M,g)$ admits a positive, non-K{\"a}hler Hermitian structure $J$, and a negative Hermitian structure $I$ such that $I$ is equal to $J$ on ${\mathcal D}={\rm span} \{\theta , J\theta \}$ and to $-J$ on the orthogonal complement ${\mathcal D}^{\perp}$ ; equivalently, the 1-form $m_0$ of $(g,J)$ vanishes identically.
\end{enumerate} \end{theo} \begin{proof}
${\rm (i)} \Rightarrow {\rm (iii)}$. By Propositions \ref{prop1} and \ref{prop2}, $W^+$ has two distinct, non-constant eigenvalues at any point and there exists a positive, non-K{\"a}hler Hermitian structure $J$ whose K{\"a}hler form $F$ generates the eigenspace of $W^+$ corresponding to the simple eigenvalue. It follows that the Hermitian structure is preserved by the action of $G$, and therefore both functions
$|D^g F|^2 = 2|\theta|^2$ and
$|W^+|^2=\frac{{\kappa}^2}{24}$ are constant along the orbits of $G$; in particular,
${\rm d}\ln|\theta|$ is collinear with $\theta = \frac{{\rm d}\kappa}{3\kappa}$ at any point; this means that $m_0=0$; by (\ref{I}) and (\ref{mn}), the vanishing of $m_0$ is equivalent to the integrability of the negative almost Hermitian structure $I$.
${\rm (iii)} \Rightarrow {\rm (ii)}$. If $m_0\equiv 0$ or, equivalently, if the negative almost Hermitian structure $I$ is integrable, then, by (\ref{I}), the Lee form $\theta_I$ of $(g,I)$ reads: \begin{equation}\label{thetaI}
\theta_I =(2p +\frac{(\kappa -s)}{12|\theta|^2})\theta. \end{equation} According to (\ref{mn}) we also have
$m_1={\rm d}\ln|\theta| =(p-\frac{(\kappa -s)}{24|\theta|^2} +
\frac{1}{2}) \theta$ and $\theta= \frac{1}{3}{\rm d}\ln|\kappa|$; it follows that ${\rm d}\theta_I=0$; then, locally, $\theta_I = {\rm d}f$ for a positive function $f$, i.e., $g$ is conformal to a K{\"a}hler metric ${g'} = f^2g$. Since $W^-=0$, the K{\"a}hler metric $g'$ is of zero scalar curvature. Clearly, the Killing field $K$ preserves both $J$ and $g$, hence, also, the K{\"a}hler structure $(g',I)$. Two cases occur, according as $g '$ is homothetic or not to $g$.
(a) Suppose ${g'}$ is {\it not} homothetic to $g$; equivalently, the scalar curvature $s$ of $g$ does not vanish; then, by \cite{De}, $K'=I{\rm grad}_g(f^{-1})$ is a Killing vector field for $g$ and $g'$ and is holomorphic with respect to $I$. By the very definition of $I$ we have that
$J|_{\mathcal D} = I|_{\mathcal D}$; the Killing vector fields $K'$ and $K$ are thus collinear everywhere (see (\ref{thetaI})); it follows that $K'$ is a constant multiple of $K$. By considering $z=f^2$ as a local coordinate on $M$ and by introducing a holomorphic coordinate $x+iy$ on the (locally defined) orbit-space for the holomorphic action of $K+\sqrt{-1}IK$ on $(M,I)$, the metric $g$ can be written in the following form: \begin{equation}\label{g} g = \frac{1}{z^2}[e^{u}w({\rm d}x^2 + {\rm d}y^2) + w {\rm d}z^2 + w^{-1}\omega^2], \end{equation} where $u(x,y,z)$ is a smooth function satisfying the ${\rm SU(\infty )}$ Toda field equation: $$u_{xx} + u_{yy} + (e^u)_{zz}=0,$$ $w$ is a positive function given by $$w= \frac{6(zu_z -2)}{s},$$ and $\omega$ is a connection 1-form of the ${\Bbb R}$-bundle $M \to N=\{(x,y,z)\} \subset {\Bbb R}^3$, whose curvature is given by \begin{equation}\label{domega} {\rm d}\omega = -w_x{\rm d}y\wedge {\rm d}z - w_y {\rm d}z\wedge {\rm d}x - (we^u)_z {\rm d}x\wedge {\rm d}y, \end{equation} (see e.g. \cite{Tod2}). Moreover, the Killing field $K$ is dual to $\frac{1}{wz^2}\omega$, and the (anti-self-dual) K{\"a}hler form of the negative Hermitian structure $I$ is given by \begin{equation}\label{barF} {\bar F} =\frac{1}{z^2}\big(we^u {\rm d}x\wedge {\rm d}y - {\rm d}z\wedge \omega \big). \end{equation} By (\ref{thetaI}) we have that
${\mathcal D}={\rm span}\{ \theta , J\theta \}= {\rm span}\{\theta_I, I\theta_I \}={\rm span}\{K^{\sharp_g},IK^{\sharp_g}\}$, so that the K{\"a}hler form $F$ of the positive Hermitian structure $J$ is given by \begin{equation}\label{F} F = \frac{1}{z^2}\big(we^u {\rm d}x\wedge {\rm d}y + {\rm d}z\wedge \omega \big). \end{equation} It is now easily seen that (\ref{barF}) and (\ref{F}) simultaneously define integrable almost complex structures if and only if $w_x=w_y=0$, or equivalently if and only if $u(x,y,z)=u_1(x,y) + u_2(z)$. This means that $u$ is a {\it separable} solution to the ${\rm SU(\infty )}$ Toda field equation. Up to a change of the holomorphic coordinate $x+ iy$, it is explicitly given by \cite{Tod2} $$e^u = \frac{4(c + bz + az^2)}{(1 + a(x^2 + y^2))^2},$$ for properly chosen constants $a,b,c$. Any such solution gives rise to a {\it diagonal Bianchi} self-dual Einstein Hermitian metric pertaining to one of the classes II, ${\rm VII}_0$, VIII or IX, depending on the choice of the constants $a,b,c$ (see e.g. \cite[Sec.~8]{CP} for a discussion of these metrics in the Bianchi IX case).
(b) If $g ' $ is homothetic to $g$, i.e. $(g,I)$ is itself a K{\"a}hler structure of zero scalar curvature, then $g$ is locally hyperk{\"a}hler and $K$ is a Killing vector field preserving the K{\"a}hler structure $I$. Then, one of the two following situations occurs:
(b1) {\it $K$ is {\it triholomorphic}}, i.e. $K$ preserves each K{\"a}hler structure in the hyperk{\"a}hler family: Then the quotient space, $N$, for the (real) action of $K$ is flat and is endowed with a field of parallel straight lines. This situation is described by the Gibbons-Hawking Ansatz \cite{GH}, and the metric $g$ has the form: $$g = w({\rm d}x^2 + {\rm d}y^2 + {\rm d}z^2) + \frac{1}{w}\omega^2,$$ for a positive harmonic function $w(x,y,z)$ on $N$ and a 1-form $\omega$ on $M$ satisfying $${\rm d}\omega = -w_x{\rm d}y\wedge {\rm d}z - w_y {\rm d}z\wedge {\rm d}x - w_z {\rm d}x\wedge {\rm d}y.$$ The Killing field $K$ is dual to $\frac{1}{w}\omega$ and one may consider that the positive and negative Hermitian structures, $J$ and $I$, correspond to the 2-forms $$F= w{\rm d}x\wedge {\rm d}y + {\rm d}z\wedge \omega; \ \ {\bar F}= w{\rm d}x\wedge {\rm d}y - {\rm d}z\wedge \omega,$$ respectively. We again conclude $w_x=0, w_y=0$, and therefore $w=az +b$. The case $a=0$ corresponds to flat metrics in Class I, whereas, when $a\neq 0$, by putting $at = az + b, \sigma_1 = {\rm d}x, \sigma_2 = {\rm d}y, \sigma_3 = \omega$, the metric becomes a diagonal Bianchi metric of Class II.
(b2) {\it $K$ is {\it not} triholomorphic}: Since, nevertheless, $K$ preserves $(g,I)$, the metric $g$ takes the form \cite{BoF} $$ g = e^{u}w({\rm d}x^2 + {\rm d}y^2) + w {\rm d}z^2 + w^{-1}\omega^2, $$ where $u(x,y,z)$ is a solution to the ${\rm SU(\infty)}$ Toda field equation, $w=au_z$, $\omega$ satisfies (\ref{domega}) and $a$ is a constant. Moreover, $K$ is dual to $\frac{1}{w}\omega$, and $I$ is defined by the anti-self-dual form $${\bar F} = we^u {\rm d}x\wedge {\rm d}y - {\rm d}z\wedge \omega.$$ Similar arguments as above show that $w_x=w_y=0$, i.e., $u$ is a separable solution to the ${\rm SU(\infty )}$ Toda field equation, and therefore our metric is again a diagonal Bianchi metric in one of the classes II, ${\rm VII}_0$, VIII or IX, cf. \cite{CP}.
The implication ${\rm (ii)} \Rightarrow {\rm (i)}$ is clear. \end{proof}
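Two computational facts used in the proof above lend themselves to direct symbolic verification. The following SymPy sketch (an illustrative check, not part of the original argument; the variable names are ours) confirms that the quoted separable solution of the ${\rm SU(\infty)}$ Toda field equation is indeed a solution, and that in the Gibbons-Hawking Ansatz the prescribed 2-form ${\rm d}\omega$ is closed exactly when $w$ is harmonic.

```python
import sympy as sp

x, y, z, a, b, c = sp.symbols('x y z a b c')

# --- Case (a): the separable SU(infinity) Toda solution quoted above ---
# e^u = 4(c + b z + a z^2) / (1 + a(x^2 + y^2))^2, constants a, b, c
eu = 4*(c + b*z + a*z**2) / (1 + a*(x**2 + y**2))**2
u = sp.log(eu)
toda = sp.diff(u, x, 2) + sp.diff(u, y, 2) + sp.diff(eu, z, 2)
assert sp.simplify(toda) == 0  # u solves u_xx + u_yy + (e^u)_zz = 0

# --- Case (b1): Gibbons-Hawking consistency ---
# d(omega) = -w_x dy^dz - w_y dz^dx - w_z dx^dy; applying d once more gives
# -(w_xx + w_yy + w_zz) dx^dy^dz, so closedness is exactly harmonicity of w.
w = sp.Function('w')(x, y, z)
P, Q, R = -sp.diff(w, x), -sp.diff(w, y), -sp.diff(w, z)
d_domega = sp.diff(P, x) + sp.diff(Q, y) + sp.diff(R, z)  # coeff. of dx^dy^dz
laplacian = sp.diff(w, x, 2) + sp.diff(w, y, 2) + sp.diff(w, z, 2)
assert sp.simplify(d_domega + laplacian) == 0

# The solution w = a z + b singled out in case (b1) is trivially harmonic.
w0 = a*z + b
assert sp.diff(w0, x, 2) + sp.diff(w0, y, 2) + sp.diff(w0, z, 2) == 0
```

Both assertions reduce to rational-function identities in $x,y,z$, so `sympy.simplify` settles them exactly.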
\begin{rem}{\rm A weaker version of Theorem \ref{th2} was announced in \cite{de} (see \cite[Rem.~1.3]{de} and Lemma 2 above).} \end{rem}
\subsection{The generic case} We now consider the generic case, when $m_0$ is a {\it non-vanishing} section of ${\mathcal D}^\perp$ and hence determines a gauge $\phi$ such that $r\equiv 0, q\neq 0$ in (\ref{mn}). According to (\ref{mn}), the 1-form $\alpha$ is then given by \begin{equation}\label{alpha}
m_1 = {\rm d}\ln|\theta| = q\alpha + (p- \frac{(\kappa -s)}{24|\theta|^2} + \frac{1}{2})\theta; \end{equation} moreover, by (\ref{system1})--(\ref{system3}), we have that \begin{eqnarray}\label{beta}
\beta &= & \frac{1}{q}\Big(p(2p + \frac{(\kappa -s)}{12|\theta|^2} - 1) - \frac{\kappa}{24|\theta|^2} + 2q^2\Big)J\alpha \\\nonumber
& &-\frac{(\kappa -s)}{12|\theta|^2}J\theta, \end{eqnarray} \begin{eqnarray}\label{frobenius1}
{\rm d}p &=&\Big(2q^2 - p(2p - \frac{(\kappa -s)}{12|\theta|^2} - 1)
- \frac{\kappa}{24|\theta|^2}\Big)\theta \\ \nonumber
& & - q\Big(4p +\frac{(\kappa -s)}{12|\theta|^2} - 1\Big)\alpha,
\\\label{frobenius2}
{\rm d}q &=& - q\Big(4p -\frac{(\kappa -s)}{12|\theta|^2} - 1\Big)\theta
\\ \nonumber
& & - \Big(2q^2 -p(2p + \frac{(\kappa -s)}{12|\theta|^2} - 1)
+ \frac{\kappa}{24|\theta|^2}\Big)\alpha. \end{eqnarray} By differentiating (\ref{alpha}) and by making use of (\ref{frobenius1})--(\ref{frobenius2}), we get \begin{equation}\label{dalpha}
{\rm d}\alpha = \frac{(\kappa -s)}{12|\theta|^2}\alpha \wedge \theta = \alpha\wedge J\beta; \end{equation} this is nothing else than the first relation in (\ref{ricci1}), when $\beta$ is given by (\ref{beta}); by substituting the expression (\ref{beta}) for $\beta$ into the second relation of (\ref{ricci1}), we obtain \begin{equation}\label{dJalpha} {\rm d}(J\alpha) = J\alpha \wedge J\beta. \end{equation} In view of (\ref{alpha}) and (\ref{frobenius1})--(\ref{frobenius2}), it is not hard to check that the 1-form $J\beta$ is equivalently given by \begin{equation}\label{dJbeta}
J\beta = {\rm d}\ln(\frac{|\kappa|}{|q||\theta|^4}), \end{equation} so that (\ref{dJalpha}) becomes \begin{equation}\label{dJa}
{\rm d}(\frac{\kappa}{q|\theta|^4}J\alpha)=0; \end{equation} from (\ref{DJtheta}) we get \begin{equation}\label{dJthe}
{\rm d}(J\theta)= J\theta\wedge\big(\frac{1}{3}{\rm d}\ln|\kappa| -2{\rm d}\ln|\theta|\big) + J\alpha \wedge \eta, \end{equation} or, equivalently, \begin{equation}\label{dJtheta}
{\rm d}(\frac{\kappa^{\frac{1}{3}}}{|\theta|^2} J\theta)=
\frac{\kappa^{\frac{1}{3}}}{|\theta|^2}J\alpha\wedge \eta, \end{equation} where
$$\eta = -2q\theta + (2p + \frac{(\kappa -s)}{12|\theta|^2} -1)\alpha.$$
We are now ready to prove the existence of self-dual Einstein Hermitian metrics with $m_0\neq 0$. More precisely, we exhibit a one-to-one correspondence between these metrics and the set of solutions of the integrable Frobenius system (\ref{frobenius1})--(\ref{frobenius2}). We start with the data
$(s, \kappa, |\theta|)$ consisting of a constant $s$ (the scalar curvature), a
nowhere vanishing smooth function $\kappa$ (the conformal scalar curvature), and a positive smooth function $|\theta|$ (the norm of the Lee form $\theta= \frac{{\rm d}\kappa}{3\kappa}$), defined on an open subset ${U}$ of $M$,
such that $\theta\wedge {\rm d}|\theta|^2$ has no zero on ${U}$
(equivalently, $m_0$ does not vanish on ${U}$). We then introduce local coordinates $x= \kappa^{\frac{1}{3}} \neq 0$ and $y=|\theta|^2 >0$. Observe that $x$ is a {\it momentum map} for the Killing field $K$ with respect to the self-dual K{\"a}hler metric
${\bar g}={\kappa}^{\frac{2}{3}}g$ while $y=|K|_{\bar g}^2$ is the square-norm of $K$ with respect to ${\bar g}$ (see Proposition \ref{prop2}). The Lee form $\theta$ is then given by \begin{equation}\label{deftheta} \theta= \frac{{\rm d}x}{x}, \end{equation} and the 1-form $\alpha$ is given by (\ref{alpha}) for some smooth functions $p(x,y)$ and $q(x,y)\neq 0$ of $x,y$, i.e. \begin{equation}\label{defalpha} \alpha = \frac{1}{q}\Big( \frac{{\rm d}y}{2y} - \frac{1}{x}(p -\frac{(x^3-s)}{24y} + \frac{1}{2}){\rm d}x\Big). \end{equation} Then, (\ref{frobenius1})--(\ref{frobenius2}) can be made into the following Frobenius system for the (unknown) functions $p$ and $q^2$: \begin{eqnarray}\label{defp} {\rm d}p &=& \frac{1}{x}\Big[ 2q^2 + 2(p+\frac{(x^3 -s)}{24 y})(p-\frac{(x^3 -s)}{24 y} +1) -\frac{1}{2} - \frac{x^3}{24y}\Big]{\rm d}x \\ \nonumber
& & - \frac{1}{y}\Big[2p + \frac{(x^3 -s)}{24y} - \frac{1}{2}\Big]{\rm d}y \end{eqnarray} \begin{eqnarray}\label{defq} {\rm d}(q^2) &=& -\frac{1}{y}\Big[ 2q^2 -2p(p+\frac{(x^3 -s)}{24y} - \frac{1}{2}) +\frac{x^3}{24y}\Big]{\rm d}y\\ \nonumber & &- \frac{2}{x}\Big[\Big(p-\frac{(x^3 -s)}{24y} + \frac{1}{2}\Big)\Big(2p(p +\frac{(x^3-s)}{24y} -\frac{1}{2}) - \frac{x^3}{24y}\Big)\\ \nonumber
& & \ \ \ \ \ \ \ - 2q^2(1-p)\Big]{\rm d}x \end{eqnarray} A straightforward computation shows that the integrability condition ${\rm d}({\rm d}p)={\rm d}({\rm
d}q^2)=0$ is satisfied (as a matter of fact, the explicit solutions are given in Lemma 3 below). The above-mentioned correspondence between solutions to (\ref{defp})--(\ref{defq}) and self-dual Einstein Hermitian metrics with $m_0\neq 0$ now goes as follows. Since (\ref{defp})--(\ref{defq}) is integrable, each value of $(p,q)$ at a given point $(x_0,y_0)$ can be extended to a solution of (\ref{defp})--(\ref{defq}) in some neighborhood $V$ of $(x_0,y_0)$; moreover, by choosing $q(x_0,y_0) \neq 0$, we may assume that $q$ has no zero on $V$; by (\ref{defalpha}) and (\ref{defp})--(\ref{defq}), one immediately obtains (\ref{dalpha}) for the corresponding 1-form $\alpha$. We then introduce a third local coordinate, $z$, such that \begin{equation}\label{defJalpha} J\alpha = \frac{qy^2}{x^3}{\rm d}z, \end{equation} see (\ref{dJa}). Finally, since the 1-form $J\theta$ satisfies (\ref{dJthe}) or, equivalently, (\ref{dJtheta}), the integrability condition reads as follows: $${\rm d}(\frac{qy}{x^2}\eta)=0,$$ see (\ref{dJa}) and (\ref{dJthe}); by using (\ref{frobenius1})--(\ref{dJalpha}), one easily checks that the integrability condition is actually satisfied, so that \begin{equation}\label{defJtheta} J\theta= \frac{y}{x}({\rm d}t + h{\rm d}z), \end{equation} where $t$ is a suitable coordinate transversal to $(x,y,z)$, and $h(x,y)$ is a smooth function on $V$, defined by $${\rm d}h =-\frac{qy}{x^2}\eta.$$ It is an easy consequence of (\ref{defp}) that the above equation is solved by \begin{equation}\label{defh} h = \frac{yp}{x^2} + \frac{x}{24}. \end{equation} The metric $g$ and the orthogonal almost complex structure $J$ are then given by
$$g = \frac{1}{|\theta|^2}(\theta\otimes\theta + J\theta\otimes J\theta + \alpha\otimes \alpha + J\alpha\otimes J\alpha);$$ according to (\ref{deftheta}),(\ref{defalpha}),(\ref{defJalpha}) and (\ref{defJtheta}), and by using the coordinates $(x,y,z,t)$, the metric $g$ takes the form \begin{equation}\label{canonic} g = \frac{1}{y}\Big[\frac{{\rm d}x^2}{x^2} + \frac{1}{q^2}\Big(\frac{{\rm d}y}{2y} - \frac{1}{x}(p -\frac{(x^3 -s)}{24y} + \frac{1}{2}){\rm d}x\Big)^2 + \frac{q^2y^4}{x^6}{\rm d}z^2 + \frac{y^2}{x^2}({\rm d}t + h{\rm d}z)^2\Big]; \end{equation} this shows that any self-dual Einstein Hermitian metric with $m_0\neq 0$ is locally isometric to a metric of the above form for some solution $(p,q)$ to (\ref{defp})--(\ref{defq}).
Conversely, for any solution to (\ref{defp})--(\ref{defq}), the corresponding almost-Hermitian metric $(g,J)$ is a self-dual Einstein Hermitian metric with $m_0\neq 0$. Indeed, by (\ref{dalpha}), (\ref{dJalpha}) and (\ref{dJtheta}), $J$ is integrable and it is easily checked that $\theta=\frac{{\rm d}x}{x}$ is the Lee form for $(g,J)$, i.e., $${\rm d}F = -2\theta\wedge F;$$ moreover, the 1-form $\alpha$ corresponds to the gauge $$ \phi = -\frac{1}{y}\big(\alpha\wedge J\theta + J\alpha\wedge \theta \big),$$ meaning that $\alpha = \phi(J\theta)$; one directly computes $${\rm d}\phi = (\theta + J\beta)\wedge \phi,$$ where the 1-form $\beta$ is given by (\ref{beta}); it follows that $\beta$ is precisely the 1-form defined by (\ref{Dphi}) and that (\ref{dalpha})--(\ref{dJalpha}) are nothing else than the Ricci identities (\ref{ricci1}); this allows us to recognize the curvature: by (\ref{ricci1}), the Ricci tensor of $(g,J)$ is $J$-invariant, and, since $\theta= \frac{{\rm d}x}{x}$, the dual vector field $K$ of $\kappa^{-\frac{1}{3}}J\theta=\frac{1}{x}J\theta$ is Killing, cf. e.g. \cite{AG}; by (\ref{dJtheta}) and (\ref{DF}), the covariant derivative of $\theta$ is given by (\ref{Dtheta}) for $p$ and $q$ constructed as above, and $r\equiv 0$; hence, (\ref{beta}) and (\ref{frobenius1})--(\ref{frobenius2}) (equivalently, (\ref{defp})--(\ref{defq})) are the same as relations (\ref{system1})--(\ref{system3}); these, in turn, are a way of re-writing (\ref{W^-=0}); it follows that the projection of the curvature to $\Lambda^-M$ reduces to
$\frac{s}{12}{\rm Id}|_{\Lambda^-M}$, i.e. the Hermitian metric $g$ is Einstein and self-dual, with scalar curvature equal to $s$, see (\ref{SO(4)}); turning back to (\ref{dalpha}), we conclude that the conformal scalar curvature is $\kappa=x^3$, see (\ref{ricci1}); the metric constructed in this way is not of cohomogeneity one, as $m_0\neq 0$, see Theorem \ref{th2}. Finally, different solutions $(p,q)$ of (\ref{defp})--(\ref{defq}) give rise to non-isometric metrics,
as $p$ and $q$ are completely determined by $|W^+|, {\rm d}|W^+|$ and ${\rm d}|D^gW^+|$, see Sec.~2 and (\ref{alpha}).
We finally observe that the metric (\ref{canonic}) admits two commuting vector fields, $\frac{\partial}{\partial t}$ and $\frac{\partial}{\partial z}$.
We summarize the results obtained so far as follows: \begin{theo}\label{th3} Let $(M,g,J)$ be a self-dual Einstein Hermitian 4-manifold. Suppose that $(M,g,J)$ is neither locally-symmetric nor of cohomogeneity one. Then, on an open dense subset of $M$, $g$ is locally given by (\ref{canonic}). In particular, $(M,g)$ admits a local isometric action of ${\mathbb R}^2$ almost-everywhere. \end{theo}
\begin{rem} \label{rem3} {\rm (i) It is easily seen that the metrics (\ref{canonic}) have only 2-dimensional continuous symmetries. Moreover, as we already observed, the coordinate $x=\kappa^{\frac{1}{3}}$ is a {momentum map} of the Killing vector field $\frac{\partial}{\partial t}$ with respect to the K{\"a}hler metric ${\bar g}= x^2 g$ while, by (\ref{defp}) and (\ref{defh}), a momentum map $\tilde{\mu}$ of the second Killing field, $\frac{\partial}{\partial z}$, is given by $$2x{\tilde \mu} = y + \frac{x^3 + s}{12},$$ where $\frac{x^3 + s}{12} = \frac{\kappa + s}{12}$ is the (pointwise constant) holomorphic sectional curvature of $(g,J)$.
The momentum map $x$ is also equal to the scalar curvature of the K{\"a}hler metric ${\bar g}$. A straightforward computation shows that the second momentum map $\tilde{\mu}$ defined above is related to the Pfaffian of the {\it normalized Ricci form} ${\bar \sigma}$ of the K{\"a}hler metric ${\bar g}$ by $$ \tilde{\mu} = 12 \, ( {\rm Pfaff} \, {\bar \sigma} + b),$$ where $b$ is the constant appearing in (\ref{explicitef}) below. This fits with an observation of R. Bryant in \cite{Br}. ({\rm Recall that for any
$2$-form $\psi$, the Pfaffian of $\psi$ with respect to ${\bar g}$
is defined by: $\psi \wedge \psi = 2\, {\rm Pfaff}\, \psi \, v _{{\bar
g}}$, where $v _{{\bar
g}}$ is the volume form of ${\bar g}$; the
normalized Ricci form ${\bar \sigma}$ is the $(1,1)$-form associated
to the normalized Ricci tensor, ${\bar S}$, appearing in the
usual decomposition ${\bar R} = {\bar S} \wedge {\bar g} + W$ of the
curvature operator of ${\bar g}$; it is related to the usual Ricci
form ${\bar \rho}$ by ${\bar \sigma} = \frac{1}{2} \, ({\bar \rho} _0 +
\frac{x}{12} \, {\bar \omega})$, where ${\bar \rho} _0$ is the
trace-free part of ${\bar \rho}$; since $g = x ^{-2} {\bar g}$ is
Einstein and ${\rm d}^c x$ is the dual of a Killing vector field,
we have that ${\bar \rho} _0 = - \frac{1}{x} \, ({\rm d}
{\rm d}^c x) _0$; the result follows easily}).
(ii) It follows from Theorems \ref{th2} and \ref{th3} that every self-dual Einstein Hermitian 4-manifold admits a (local) isometric ${\Bbb R}^2$-action compatible with a {\it product structure} in the sense of \cite{joyce}; the general considerations in \cite[Sec.2]{joyce} therefore apply to the present situation; a detailed analysis of self-dual Einstein 4-manifolds admitting ${\Bbb R}^2$-continuous symmetry has been carried out by D. Calderbank \cite{Ca0}, based on results of \cite{Ca1}. } \end{rem}
We end this section by providing an explicit form for the metric (\ref{canonic}), in view of the following \begin{Lemma}\label{integrate} The solutions $p(x,y)$ and $q(x,y)$ of the system (\ref{defp})--(\ref{defq}) are explicitly given by \begin{equation}\label{explicitep} p = \frac{f}{y^2} - \frac{(x^3 -s)}{24y} + \frac{1}{4}; \end{equation} \begin{equation}\label{expliciteq} q^2= \frac{1}{y^2}\Big[\frac{x}{2}f' -f + \big(\frac{x^3-s}{24}\Big)^2\Big] - \frac{x^3}{24y} - p^2, \end{equation} where \begin{equation}\label{explicitef} f(x)= ax^2 + bx^4 - \frac{(x^6 - s^2)}{576}, \end{equation}
$a$ and $b$ are constants, constrained by the positivity of the right-hand side of (\ref{expliciteq}), and $f'$ stands for the first derivative of $f$. \end{Lemma} \begin{proof} We first observe that (\ref{defp}) can be equivalently written as $${\rm d}\Big(y^2(p + \frac{(x^3 -s)}{24y} - \frac{1}{4})\Big) = $$ $$\frac{y^2}{x}\Big[2q^2 + 2(p + \frac{(x^3 -s)}{24y})(p - \frac{(x^3 -s)}{24y}) + 2(p + \frac{(x^3 -s)}{24y} - \frac{1}{4}) + \frac{x^3}{12y}\Big]{\rm d}x;$$ this shows that $y^2(p + \frac{(x^3 -s)}{24y} - \frac{1}{4})$ is a function of $x$, say $f$; from the above equality, we get (\ref{explicitep}) and (\ref{expliciteq}), where $f$ is a (still unknown) smooth function; in order to determine $f$, we differentiate (\ref{expliciteq}) by using (\ref{explicitep}) and substitute into (\ref{defq}); then, cancellations occur and (\ref{defq}) eventually reduces to \begin{equation}\label{ODE} x^2f'' -5xf' + 8f + \frac{(x^6 -s^2)}{72}=0; \end{equation} the solutions of (\ref{ODE}) are given by (\ref{explicitef}). \end{proof}
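The content of Lemma \ref{integrate}, together with the claim that (\ref{defh}) solves ${\rm d}h = -\frac{qy}{x^2}\eta$, can be machine-checked. The following SymPy sketch (an illustrative verification, not part of the original text; the variable names are ours) confirms the ODE (\ref{ODE}), the two components of each of (\ref{defp}) and (\ref{defq}) for the explicit $p$ and $q^2$, and the two components of ${\rm d}h$; in the latter the factor $q$ cancels, so only $q^2$ enters.

```python
import sympy as sp

x, y, a, b, s = sp.symbols('x y a b s')

# Explicit solution (explicitef); a, b free constants, s the scalar curvature
f = a*x**2 + b*x**4 - (x**6 - s**2)/576
half = sp.Rational(1, 2)

# f solves  x^2 f'' - 5x f' + 8f + (x^6 - s^2)/72 = 0
ode = x**2*sp.diff(f, x, 2) - 5*x*sp.diff(f, x) + 8*f + (x**6 - s**2)/72
assert sp.simplify(ode) == 0

# p and q^2 as in (explicitep)-(expliciteq); write A for (x^3 - s)/(24 y)
A = (x**3 - s)/(24*y)
p = f/y**2 - A + sp.Rational(1, 4)
q2 = (x*sp.diff(f, x)/2 - f + ((x**3 - s)/24)**2)/y**2 - x**3/(24*y) - p**2

# the dx- and dy-components of (defp)
dp_dx = (2*q2 + 2*(p + A)*(p - A + 1) - half - x**3/(24*y))/x
dp_dy = -(2*p + A - half)/y
assert sp.simplify(sp.diff(p, x) - dp_dx) == 0
assert sp.simplify(sp.diff(p, y) - dp_dy) == 0

# the dx- and dy-components of (defq)
dq2_dy = -(2*q2 - 2*p*(p + A - half) + x**3/(24*y))/y
dq2_dx = -(2/x)*((p - A + half)*(2*p*(p + A - half) - x**3/(24*y))
                 - 2*q2*(1 - p))
assert sp.simplify(sp.diff(q2, y) - dq2_dy) == 0
assert sp.simplify(sp.diff(q2, x) - dq2_dx) == 0

# h = yp/x^2 + x/24 solves dh = -(qy/x^2) eta; after cancelling q, the two
# components of this identity read as follows.
h = y*p/x**2 + x/24
dh_dx = (y/x**3)*(2*q2 + (2*p + 2*A - 1)*(p - A + half))
dh_dy = -(2*p + 2*A - 1)/(2*x**2)
assert sp.simplify(sp.diff(h, x) - dh_dx) == 0
assert sp.simplify(sp.diff(h, y) - dh_dy) == 0
```

All checks are identities between rational functions of $x,y$ (for generic $a$, $b$, $s$), so `simplify` decides them exactly; only the dx-component of (\ref{defq}) actually uses the ODE for $f$.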
\section{Self-dual Einstein Hermitian metrics with hyperhermitian structures}
In this section, we consider self-dual, Einstein, Hermitian metrics which in addition admit a {\it non-closed} hyperhermitian structure compatible with the negative orientation. It is well-known that LeBrun-Pedersen metrics, which are of cohomogeneity one under the action of the unitary group ${\rm U(2)}$, carry such hyperhermitian structures; in LeBrun's coordinates \cite{Le} these metrics read as follows: \begin{equation}\label{g1} g = \frac{1}{(b t^2 + 4c)^2} \Big((1 + \frac{8b}{t^2} + \frac{16c}{t^4})^{-1}{\rm d}t^2 + \frac{t^2}{4} \big[\sigma_1^2 + \sigma_2^2 + (1 + \frac{8b}{t^2} + \frac{16c}{t^4}) \sigma_3^2 \ \big]\Big), \end{equation} where $b$ and $c$ are properly chosen constants \cite{Mad}; more precisely, we have the following \begin{prop}\label{prop5}{\rm (\cite{Mad})} Let $(M,g)$ be an oriented self-dual Einstein 4-manifold. Assume that $(M,g)$ admits a $\rm{U}(2)$ isometric action with generically three-dimensional $\rm{SU}(2)$-orbits. If $g$ admits a non-closed, $\rm{U}(2)$-invariant negative hyperhermitian structure, then $g$ is isometric to (\ref{g1}) with $c > b^2$, and actually admits exactly two distinct invariant hyperhermitian structures. \end{prop} We here prove the following more general result: \begin{theo}\label{th1} A self-dual Einstein Hermitian 4-manifold $(M,g,J)$ locally admits a non-closed, negative hyperhermitian structure if and only if $g$ is locally isometric to one of the $\rm{U}(2)$-invariant metrics (\ref{g1}) with $c > b^2$; then, $(M, g)$ actually carries exactly two distinct hyperhermitian structures, each of them $\rm{U}(2)$-invariant. \end{theo}
We first establish general facts concerning self-dual Einstein 4-manifolds which carry a {\it non-closed} hyperhermitian structure compatible with the negative orientation. As already observed in Sec.2, a (negative) hyperhermitian structure $(g, I_1,I_2,I_3)$ is determined by a real $1$-form $\theta$ --- the common Lee form of $(g,I_i)$, also the Lee form of the Obata connection --- satisfying conditions (\ref{EW}) and (\ref{hypherm}), and such that $\Phi := {\rm d} \theta$ is self-dual; in particular, the 2-form $\Phi$ is harmonic. The next Lemma shows that the self-dual Weyl tensor of $g$ is completely determined by $\theta$, $\Phi$ and the first covariant derivative $D^g\Phi$ of $\Phi$. \begin{Lemma}\label{gauduchon1} Let $(M,g)$ be an oriented self-dual Einstein 4-manifold and assume that $(M, g)$ carries a negative hyperhermitian structure. Then, as a symmetric operator acting on $\Lambda ^ + M$, the self-dual Weyl tensor $W^+$ is given by \begin{equation}\label{gau1}
W^+(\psi) = \frac{1}{2}[\psi, \Phi] + \frac{1}{|\theta|^2} D^g_{\psi(\theta)} \Phi, \end{equation} where $\psi$ is any self-dual 2-form, $\theta$ is viewed as a vector field by Riemannian duality, and $[\cdot, \cdot]$ denotes the commutator of 2-forms, viewed as skew-symmetric endomorphisms of the tangent bundle. Moreover, $\theta$ and $\Phi$ are related by \begin{equation}\label{gau3}
D^g_{\theta} \Phi = 2|\theta|^2\Phi, \end{equation} and \begin{equation}\label{gau2}
{\rm d}|\theta|^2 -(\frac{s}{12} + |\theta|^2)\theta + \Phi(\theta)=0, \end{equation} \end{Lemma} \begin{proof} By using (\ref{EW}), the right-hand side of $$R_{X,Y}\theta= (D^g)^2_{Y,X}\theta - (D^g)^2_{X,Y}\theta $$ is easily computed; we thus obtain: \begin{eqnarray}\label{util1}
R(\theta\wedge Z) &=& -\frac{1}{2}{\rm d}|\theta|^2\wedge Z - \frac{1}{2}(\frac{s}{12} -|\theta|^2)\theta \wedge Z\\ \nonumber & & -\frac{1}{2}\Phi(Z)\wedge \theta - \frac{1}{2}D^g_Z\Phi + \theta(Z)\Phi. \end{eqnarray}
Since $g$ is self-dual and Einstein, $R = \frac{s}{12}{\rm Id}|_{\Lambda^2M} + W^+$, see (\ref{SO(4)}). Then, by projecting (\ref{util1}) to $\Lambda^-M$, we get (\ref{gau2}), whereas the projection of (\ref{util1}) to $\Lambda^+M$ gives (\ref{gau1}) and (\ref{gau3}). \end{proof} \begin{cor}\label{cor1} {\rm (\cite{ET,Ca})} Every hyperhermitian structure on a conformally flat 4-manifold is closed. \end{cor} \begin{proof} If we assume that $\Phi \neq 0$ somewhere on $M$ and that the anti-self-dual Weyl tensor is identically zero, then, after contracting (\ref{gau1}) and (\ref{gau3}) with $\Phi$, we obtain
$\theta = \frac{1}{4} {\rm d}\ln|\Phi|^2$, which contradicts $\Phi = {\rm d}\theta \neq 0$. \end{proof}
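For the reader's convenience, we spell out the contraction argument in the above proof. Assume $W^+\equiv 0$; evaluating (\ref{gau1}) on an arbitrary self-dual 2-form $\psi$ and taking the inner product with $\Phi$, the commutator term drops out, since $([\psi,\Phi],\Phi)$ is proportional to ${\rm tr}\big((\psi\Phi - \Phi\psi)\Phi\big) = 0$ by the cyclicity of the trace; we thus get $$\big(D^g_{\psi(\theta)}\Phi, \Phi\big) = \frac{1}{2}{\rm d}|\Phi|^2\big(\psi(\theta)\big) = 0.$$ As $\psi$ runs over $\Lambda^+M$, the vectors $\psi(\theta)$ span the orthogonal complement of $\theta$, so ${\rm d}|\Phi|^2$ vanishes on $\theta^{\perp}$; on the other hand, (\ref{gau3}) gives $\frac{1}{2}{\rm d}|\Phi|^2(\theta) = (D^g_{\theta}\Phi,\Phi) = 2|\theta|^2|\Phi|^2$. It follows that ${\rm d}|\Phi|^2 = 4|\Phi|^2\theta$, whence $\theta = \frac{1}{4}{\rm d}\ln|\Phi|^2$ wherever $\Phi\neq 0$; in particular $\Phi = {\rm d}\theta$ vanishes there, a contradiction.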
We can compute the covariant derivative $D^g_{\theta}W^+$ of $W^+$ along the dual vector field of $\theta$ (still denoted by $\theta$), by using (\ref{gau1}) together with (\ref{gau3}) and (\ref{gau2}) (the latter are used for evaluating the term $(D^g)^2_{\theta,\psi(\theta)} \Phi$ which appears in the calculation); we thus get \begin{Lemma}\label{gauduchon2} Let $(M,g)$ be an oriented self-dual Einstein 4-manifold, admitting a negative hyperhermitian structure; then, the covariant derivative $D^g_{\theta} W^+$ of the self-dual Weyl tensor $W^+$ along the dual vector field of the Lee form $\theta$ is given by \begin{eqnarray}\nonumber \big( (D^g_{\theta} W^+)(\psi), \phi \big) &=& \big( [W^+(\phi),\psi] + [W^+(\psi),\phi], \Phi \big) \\ \nonumber
& & + (4|\theta|^2 - \frac{s}{6})\big( W^+(\psi), \phi \big) \\ \label{gau4}
& & + |\Phi|^2\big( \psi, \phi \big) - 3\big( \Phi, \psi \big) \big( \Phi, \phi \big) , \end{eqnarray} for any sections, $\phi$ and $\psi$, of $\Lambda^+M$. \end{Lemma}
From Lemma \ref{gauduchon2} and Propositions \ref{prop1} and \ref{prop2}, we infer \begin{prop}\label{prop6} Let $(M,g)$ be an oriented self-dual Einstein 4-manifold, admitting a non-closed hyperhermitian structure compatible with the negative orientation. Then the following three conditions are equivalent: \begin{enumerate} \item[{\rm(i)}] the spectrum of $W^+$ is everywhere degenerate; \item[{\rm(ii)}] $W^+$ has two distinct eigenvalues at any point; \item[{\rm(iii)}] the self-dual 2-form $\Phi$ is a nowhere vanishing eigenform for $W^+$ with respect to the simple eigenvalue, and is proportional to a positive Hermitian structure $J$. \end{enumerate}\ \end{prop} \begin{proof}
${\rm (i)} \Rightarrow {\rm (ii)}$. According to Proposition \ref{prop1}, if the spectrum of $W^+$ is everywhere degenerate, then either $W^+$ vanishes identically (and therefore
the hyperhermitian structure is closed by Corollary \ref{cor1}) or $W^+$ has two distinct eigenvalues $\lambda$ and $-\frac{\lambda}{2}$ at any point.
${\rm (ii)} \Rightarrow {\rm (iii)}$. By Proposition \ref{prop1}, we know that a normalized generator $F$ of the $\lambda$-eigenspace of $W^+$ is the K{\"a}hler form of a positive Hermitian structure $J$. Let $\phi$ be any self-dual 2-form orthogonal to
$F$, with $|\phi | ^2 = 2$; then, $\phi$ and $\psi = (J\circ \phi)$ are orthogonal, $(-\frac{\lambda}{2})$-eigenforms of $W^+$; by substituting into (\ref{gau4}), we get $$ 0= \big( (D^g_{\theta} W^+)(\phi), \psi \big) = - 3\big( \Phi, \psi \big) \big( \Phi, \phi \big),$$
$$ -{\rm d}\lambda (\theta)= \big( (D^g_{\theta} W^+)(\phi), \phi \big) = -(4|\theta|^2 -\frac{s}{6})\lambda +2|\Phi|^2 - 3\big( \Phi, \phi \big)^2,$$
$$ -{\rm d}\lambda (\theta)= \big( (D^g_{\theta} W^+)(\psi), \psi \big) = -(4|\theta|^2 -\frac{s}{6})\lambda + 2|\Phi|^2 - 3\big( \Phi, \psi \big)^2.$$ From the last two equalities, we get $\big( \Phi, \psi \big) = \pm \big( \Phi, \phi \big) $, and by the first one we conclude that $\big( \Phi, \psi \big) = \big( \Phi, \phi \big) =0$. This shows that $\Phi$ is a multiple of $F$. It remains to prove that $\Phi$ does not vanish on $M$; by taking a two-fold cover of $M$ if necessary, we may assume that the Hermitian structure $J$ is globally defined on $M$; by
Proposition \ref{prop2}, $(g,J)$ is conformally K{\"a}hler and $\lambda^{\frac{2}{3}}F$ is the corresponding closed K{\"a}hler form; but $\Phi$ is also a closed, self-dual 2-form, and a multiple of ${F}$, hence a constant (non zero) multiple of $\lambda^{\frac{2}{3}}F$.
${\rm (iii)} \Rightarrow {\rm (i)}$. This is an immediate consequence of Proposition \ref{prop1}. \end{proof}
\noindent {\bf Convention:} From now on, we assume that $(M,g)$ is an oriented self-dual Einstein 4-manifold whose self-dual Weyl tensor $W^+$ has degenerate spectrum, and which admits a {\it non-closed} hyperhermitian structure compatible with the negative orientation of $M$. According to Proposition \ref{prop6}, $W^+$ has two distinct eigenvalues which we denote by $\lambda$ and $-\frac{\lambda}{2}$, and the harmonic self-dual 2-form $\Phi$ defines a positive Hermitian structure $J$ on $(M,g)$ whose K{\"a}hler form, $F$, is a $\lambda$-eigenform for $W^+$. Moreover, it follows from Proposition \ref{prop2} that, after rescaling the metric if necessary, we may assume: \begin{equation}\label{util2} \Phi = \frac{1}{2}\lambda^{\frac{2}{3}}F. \end{equation} In the notation of Sec.2.1, the conformal scalar curvature $\kappa$ of $(g,J)$ is thus equal to $6\lambda$; the Lee form $\theta_J$ and the Killing vector field $K$, rescaled by an appropriate positive constant, are therefore given by: \begin{equation}\label{theta-X} \theta_J= \frac{{\rm d}\lambda}{3\lambda}; \ \ K=J\,{\rm grad}_g(\lambda^{-\frac{1}{3}}), \end{equation} (see Proposition \ref{prop2}).
At this point, our main technical result reads as follows: \begin{prop}\label{prop7} A self-dual Einstein Hermitian 4-manifold $(M,g,J)$ admits a non-closed, hyperhermitian structure compatible with the negative orientation if and only if the Lee form $\theta_J$ satisfies \begin{equation} \label{gau5} \begin{split} D^g \theta_J &= \frac{(1+ \lambda^{\frac{2}{3}})(s +
3\lambda^{\frac{1}{3}})}{12}g \\ & \ \ \ + \frac{(1 + 2\lambda^{\frac{2}{3}})}{(1 + \lambda^{\frac{2}{3}})} \theta_J\otimes \theta_J + \frac{\lambda^{\frac{2}{3}}}{(1+\lambda^{\frac{2}{3}})}J\theta_J\otimes J\theta_J. \end{split} \end{equation} In this case, $(M,g)$ actually admits exactly two non-closed hyperhermitian structures $\{ I_1',I_2',I_3' \}$ and $\{ I_1'', I_2'', I_3'' \}$ whose Lee forms, $\theta '$ and $\theta ''$, are given by $$\theta ' = \frac{1}{(1 + \lambda^{\frac{2}{3}})}\big( \theta_J - \lambda^{\frac{1}{3}} J\theta_J \big),$$ $$\theta '' = \frac{1}{(1 + \lambda^{\frac{2}{3}})}\big( \theta_J + \lambda^{\frac{1}{3}} J\theta_J \big)$$ respectively. Moreover, the Killing vector field $K$ is triholomorphic for both hyperhermitian structures, i.e., $K$ preserves all complex structures $I_i'$ and $I_i''$, $i=1,2,3$. \end{prop} \begin{proof}
We first show that if $(M,g,J)$ admits a non-closed hyperhermitian structure compatible with the negative orientation, then the corresponding Lee form $\theta$ must be one of the forms $\theta'$ and $\theta''$ given in Proposition \ref{prop7}.
From (\ref{gau3}) and the fact that $\Phi$ is a $\lambda$-eigenform of $W^+$, we infer \begin{equation}\label{util3}
{\rm d}|\Phi|^2 =4|\Phi|^2\theta + 4\lambda \Phi(\theta). \end{equation} By differentiating (\ref{util3}) and by using (\ref{gau2}) in order to compute ${\rm d}(\Phi(\theta ))$, we obtain
$$({\rm d}\lambda - 3\lambda \theta)\wedge \Phi(\theta ) + \big(|\Phi|^2 + \lambda(\frac{s}{12} +
|\theta|^2)\big)\Phi =0; $$ we infer: \begin{equation}\label{util4}
|\Phi|^2 = - \lambda(\frac{s}{12}+ |\theta|^2). \end{equation}
By substituting the above expression of $|\Phi|^2$ in (\ref{util3}), and by using (\ref{gau2}) again, we get \begin{equation}\label{util5}
{\rm d}\lambda - 3\lambda\theta = \frac{3\lambda^2}{|\Phi|^2}\Phi(\theta). \end{equation} Now, according to the above convention, by (\ref{theta-X}) and (\ref{util2}) we end up with the following expression for $\theta$: \begin{equation}\label{util7} \theta = \frac{1}{(1 + \lambda^{\frac{2}{3}})}\big( \theta_J - \lambda^{\frac{1}{3}} J\theta_J \big). \end{equation} This shows that every non-closed hyperhermitian structure is completely determined by the self-dual harmonic 2-form $\Phi$. It remains to prove that $\Phi$ itself is determined, up to sign, by the metric $g$; then, the two possible values of $\theta$ appearing in Proposition \ref{prop7} will only differ by conjugation of $J$ or, equivalently, by substituting $ - \Phi$ for $\Phi$. Notice that, according to our convention, at this stage we have the freedom to rescale the $2$-form $\Phi$ by a non-zero constant. In other words, by fixing one non-closed hyperhermitian structure and by following our convention, we know that any other non-closed hyperhermitian structure corresponds to a harmonic 2-form of the form $a \Phi = \frac{a}{2}\lambda^{\frac{2}{3}}F$, where $a$ is a non-zero constant. Our claim is that $a=\pm 1$; to see this, by using (\ref{gau1}) and (\ref{gau3}), we calculate
$$|D^g \Phi |^2 = 2|\theta|^2(3|\Phi|^2 + |W^+|^2);$$ in the present situation, when $W^+$ has degenerate spectrum,
the norm of $W^+$ is given by $|W^+|^2=\frac{3}{2}\lambda^2$; then, by (\ref{util4}), the above equality reduces to \begin{equation}\label{util6}
|D^g \Phi |^2 = -(\frac{|\Phi|^2}{\lambda} + \frac{s}{12})(6|\Phi|^2 + 3\lambda^2); \end{equation} it is readily checked that if the $2$-forms $\Phi$ and $a \Phi$ simultaneously satisfy (\ref{util6}), then $a=\pm 1$.
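For the reader's convenience, we spell out this last verification. Since $|D^g(a\Phi)|^2 = a^2|D^g\Phi|^2$ and $|a\Phi|^2 = a^2|\Phi|^2$, writing (\ref{util6}) for $a\Phi$ and subtracting $a^2$ times (\ref{util6}) for $\Phi$, all terms cancel except $$\big(a^2 - 1\big)\Big(\frac{6a^2|\Phi|^4}{\lambda} - \frac{s\lambda^2}{4}\Big) = 0.$$ If $a^2\neq 1$, the second factor has to vanish identically; since $|\Phi|^2 = \frac{1}{2}\lambda^{\frac{4}{3}}$ by (\ref{util2}), this would force $6a^2 = s\lambda^{\frac{1}{3}}$, hence $\lambda$ constant (this also covers the case $s=0$, when the second factor is nowhere zero); but then $\theta_J = 0$ by (\ref{theta-X}), so $\theta=0$ by (\ref{util7}) and $\Phi = {\rm d}\theta = 0$, a contradiction. Hence $a=\pm 1$.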
We now check that the conditions (\ref{EW})\&(\ref{hypherm}) for either $\theta'$ or $\theta''$ are equivalent to (\ref{gau5}). Keeping (\ref{util2}) in mind, we see that (\ref{util5}) can be equivalently re-written as \begin{equation}\label{util8} \theta_J= \theta + \lambda^{\frac{1}{3}}J\theta; \end{equation} then, the equivalence ``(\ref{gau5}) $\Leftrightarrow$ (\ref{EW})\&(\ref{hypherm})'' follows by a straightforward computation involving the expressions (\ref{util7}) and (\ref{util8}), and using formula (\ref{integrable}); the 1-forms $\theta'$ and $\theta''$ thus correspond to two distinct, non-closed hyperhermitian structures $\{ I_1',I_2',I_3' \}$ and $\{ I_1'', I_2'', I_3'' \}$ provided that (\ref{gau5}) holds, see Sec. 1.2.
As a final step, we have to prove that $K$ is triholomorphic with respect to both hyperhermitian structures. For a general hyperhermitian structure $I_i, i=1,2,3$, with Lee form $\theta$, and for any Killing field $K$, we have $${\mathcal L}_K I_i = D_K I_i -[DK,I_i],$$ where $D$ is the Weyl derivative given by (\ref{D^J}); we thus only need to check that in our specific situation $D K$ commutes with $I_i$; by using (\ref{D^J}), (\ref{theta-X}), (\ref{integrable}) and (\ref{gau5}), we get $$DK = \theta(K){{\rm Id}|_{TM}} + \frac{(1+ \lambda^{\frac{2}{3}})}{4} J;$$ the claim follows immediately. \end{proof}
\begin{cor}\label{cor2} {\rm (\cite{ET})} A locally-symmetric self-dual Einstein 4-manifold does not admit non-closed hyperhermitian structures. \end{cor} \begin{proof} Any such manifold is either a space of constant curvature, hence conformally flat, or a K{\"a}hler manifold of constant holomorphic sectional curvature (see Propositions \ref{prop1} and \ref{prop2}). In the former case, the claim follows by Corollary \ref{cor1}, whereas in the latter case $\theta_J=0$; we then conclude by using Proposition \ref{prop7}. \end{proof}
\begin{rem} {\rm D. Calderbank proved that any conformal self-dual 4-manifold admitting two distinct Einstein-Weyl structures is equipped with a canonical conformal submersion to an Einstein-Weyl 3-manifold \cite{Ca2}. In the situation described by Proposition \ref{prop7}, this conformal submersion is seen as follows: the hyperhermitian structures $\{ I_1',I_2',I_3' \}$ and $\{ I_1'', I_2'', I_3'' \}$ determine an ${\rm SO(3)}$-valued function, $p$, on $M$ defined by: $$I_i'' = \sum_{j=1}^3 a_{ij}I_j'; \ A=(a_{ij})\in {\rm SO(3)};$$ we claim that $p$ is a conformal submersion of $(M,g)$ to ${\rm SO(3)}={\mathbb R}P^3$: The differential of $p$ is easily computed by using the fact that $I_i''$ and $I_j'$ are both integrable; we thus obtain: \begin{equation}\label{dA} {\rm d}(a_{ij}) + \frac{\lambda^{\frac{2}{3}}}{2(1+ \lambda^{\frac{2}{3}})}\sum_{k=1}^3 a_{ik}\big([I'_k,I'_j] K\big)^{\sharp_g}=0; \end{equation} here, $[\cdot, \cdot]$ denotes the commutator of endomorphisms of $TM$ and $^{{\sharp}_g}$ stands for the Riemannian duality; from (\ref{dA}), we infer: $${\mathcal L}_{K} a_{ij} =0,$$ $$\sum_{i,j} \big(da_{ij}(X)\big)^2 = \frac{\lambda^{\frac{4}{3}}}{2(1+\lambda^{\frac{2}{3}})^2}g(X,X), \ \forall X\in K^{\perp};$$ The first equality shows that $p$ coincides with the projection of $M$ to the space, $N$, of orbits of $K$, whereas the second equality means that the $K$-invariant metric ${\bar g}=\frac{\lambda^{\frac{2}{3}}}{(1+\lambda^{\frac{2}{3}})}g$ descends to the round metric of ${\rm SO(3)} = {\mathbb R} P^3$; in other words, $p$ defines a Riemannian submersion from $(M,{\bar g})$ to ${\rm SO(3)}$.} \end{rem}
\noindent {\bf Proof of Theorem \ref{th1}.} We first notice that the Killing vector field $K$ is trivial if and only if $\lambda$ is constant (see (\ref{theta-X})), or, equivalently, $\theta_J=0$. Thus, according to Propositions \ref{prop6} and \ref{prop7}, if $(M,g,J)$ is a self-dual Einstein Hermitian 4-manifold admitting a {\it non-closed} hyperhermitian structure, the Killing vector field $K$ does not vanish on an open, dense subset of $M$. It then follows from \cite{GT,CT,CP} that self-dual Einstein 4-manifolds admitting two distinct hyperhermitian structures and a non-trivial triholomorphic Killing vector field are locally given by Proposition \ref{prop5}.
For completeness, however, we here give a different and more direct argument adapted to our ``Hermitian'' situation.
By Proposition \ref{prop5} it is sufficient to show that our metric can be written in the diagonal form (\ref{diagonal}). Since the eigenvalues of $W^+$ are not constant, i.e., $\theta_J\neq 0$ (Proposition \ref{prop7}), we introduce the variable $t=\lambda^{\frac{1}{3}}$; the Lee form $\theta_J$ is then equal to $\frac{{\rm d}t}{t}$, whereas the dual $1$-form of the Killing vector field is given by $-\frac{1}{t^2}J{\rm d}t$. We set: $\sigma_3 = f(t)J{\rm d}t$, for some smooth function $f$ of $t$, and we insist that \begin{equation}\label{dsigma3} {\rm d}\sigma_3 = \sigma_1\wedge \sigma_2, \end{equation} where the 1-forms $\sigma_1$ and $\sigma_2=J\sigma_1$ are both orthogonal to ${\rm d}t$ and satisfy \begin{equation}\label{dsigma1} {\rm d}\sigma_1 =\sigma_2\wedge \sigma_3; \ \ {\rm d}\sigma_2 = \sigma_3\wedge \sigma_1. \end{equation} We then derive $f$ from (\ref{dsigma3}): By differentiating (\ref{util7}) and by making use of (\ref{util2}), we obtain \begin{equation}\label{dJr} {\rm d}(J{\rm d}t) = -\frac{(1+t^2)t^2}{2}F + \frac{2t}{(1+t^2)}{\rm d}t\wedge J{\rm d}t. \end{equation} By (\ref{util8}), (\ref{util4}) and (\ref{util2}), we also get
$$|{\rm d}t|^2 =-(\frac{t}{2} + \frac{s}{12})(t^4 + t^2);$$ it follows that $\big({\rm d}\sigma_3, {\rm d}t\wedge J{\rm d}t \big) =0$ if and only if $(\ln f)'= -\frac{2t}{(1+t^2)} - \frac{1}{(t + \frac{s}{6})}$, where the prime stands for $\frac{{\rm d}}{{\rm d}t}$; we then have $f= \frac{a}{(1+t^2)(t+ \frac{s}{6})}$, hence \begin{equation}\label{sigma3} \sigma_3 = \frac{a}{(1+t^2)(t + \frac{s}{6})} J{\rm d}t \end{equation} for a positive constant $a$.
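Indeed, integrating $(\ln f)'= -\frac{2t}{(1+t^2)} - \frac{1}{(t + \frac{s}{6})}$ term by term gives $\ln f = -\ln(1+t^2) - \ln(t+\frac{s}{6}) + {\rm const}$, which is the expression (\ref{sigma3}) for $f$.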
In order to determine the 1-forms $\sigma_1$ and $\sigma_2$, we choose a gauge $\phi$ or, equivalently, a 1-form $\alpha =\phi(J\theta_J) \in {\mathcal
D}^{\perp}$; since $\sigma_1$ and $\sigma_2=J\sigma_1$ are orthogonal to ${\rm d}t$, there certainly exists a smooth function $h$ of $t$ and a smooth function $\varphi$ on $M$, such that $$\sigma_1 = h(\cos\varphi \alpha + \sin\varphi J\alpha); \sigma_2=h(-\sin\varphi \alpha + \cos\varphi J\alpha);$$ by (\ref{sigma3}) and (\ref{dsigma3}), we obtain the following expression for $h$: \begin{equation}\label{h} h^2 = \frac{at^2}{(t + \frac{s}{6})^2(1+t^2)}; \end{equation} by using (\ref{sigma3}) and (\ref{ricci1}), we now see that the conditions (\ref{dsigma1}) are equivalent to \begin{equation}\label{varphi} {\rm d}\varphi + \beta + \frac{(\frac{s}{6}-t^3 +at)}{t(1+t^2)(\frac{s}{6}+ t)}J{\rm d}t =0; \end{equation} therefore, the existence of a smooth function $\varphi$ satisfying (\ref{varphi}) is equivalent to the following condition: $${\rm d}(\beta + \frac{(\frac{s}{6}-t^3 +at)}{t(1+t^2)(\frac{s}{6}+ t)}J{\rm d}t)=0;$$ a straightforward computation involving (\ref{ricci2}) and (\ref{dJr}) shows that the above equality holds whenever the constant $a$ is chosen equal to $1 + \frac{s^2}{36}.$ \ \
\section{Hermitian structures on quaternionic quotients}
Let $(N,g)$ be a quaternionic K{\"a}hler manifold of real dimension $4n$, endowed with a non-trivial Killing field $K$ which preserves the quaternionic structure. According to Galicki \cite{galicki1, galicki2} and Galicki-Lawson \cite{G-L}, under some ``non-degeneracy'' condition for $K$ one can define a $4(n-1)$-dimensional quaternionic orbifold $(M,g^*)$ via the so-called {\it quaternionic reduction construction}. This can be described as follows. We first consider the following orthogonal splitting of the bundle of 2-forms: \begin{equation}\label{quaternion-split} \Lambda^2N = \Lambda^+N \oplus \Lambda^{1,1}N \oplus \Lambda^{\perp}N, \end{equation} where: \begin{enumerate} \item[$\bullet$] $\Lambda^+ N$ is the 3-dimensional sub-bundle of ``self-dual'' 2-forms which determines the {quaternionic structure} (also identified with a sub-bundle $A^+N$ of skew-symmetric endomorphisms of $TN$): both $A^+N$ and $\Lambda^+N$ are preserved by the Levi-Civita connection, $D ^g$, and at each point $x$ of $N$
there is an orthonormal basis $\{ I_1, I_2, I_3 \}$ of $A^+N \subset {\rm End}(T_x N)$ with the property that: $I_i\circ I_j = -\delta_{ij}{\rm Id}|_{TN} + \epsilon_{ijk} I_k$ (resp. $\Lambda^+N = {\rm span}(\omega_1,\omega_2,\omega_3)$, where $\omega_i$ are the fundamental 2-forms of the almost Hermitian structures $(g,I_i)$). In the sequel, we refer to any such choice of $I_l$'s (resp. $\omega_l$'s) as a {\it trivialization} of $A^+ N$ (resp. $\Lambda^+ N$); \item[$\bullet$] $\Lambda^{1,1}N$ is the sub-bundle of 2-forms which are $I_i$-invariant for any section of $A^+N$; \item[$\bullet$] $\Lambda^{\perp}N$ denotes the orthogonal complement of $\Lambda^+N \oplus \Lambda^{1,1}N$ in $\Lambda^2N$. \end{enumerate} We denote by ${\Pi^+}$ the projection of $\Lambda^2N$ to $\Lambda^+N$; for any trivialization $\{ \omega_1, \omega_2, \omega_3 \}$ of $\Lambda^+ N$ we then have $$\Pi^+ = \frac{1}{2n}\sum_{l} \omega_l\otimes \omega_l,$$ and $\Pi^+_{K} := \frac{1}{2n}\sum_{l} (i_K\omega_l \otimes \omega_l)$ is a section of $T^*N\otimes \Lambda^+ N$. Then, Galicki-Lawson showed \cite[Th.~2.4]{G-L} that there exists a section $f_K$ of $\Lambda^+N$ such that $${\rm d}^{D^g} f_K = D ^g f_K = \Pi^+_K.$$ The section $f_K$ is called {\it the momentum map} associated to $(N,g,K)$ and it is easily seen that the ``level set'' \begin{equation}\label{LK}\nonumber L_{K} := \{ x\in N: f_K(x)=0 \} \end{equation} is $K$-invariant.
Assuming that $K_x \neq 0$ at $x\in L_{K}$, Galicki-Lawson proved that $L_{K}$ is regular, i.e. $L_K$ is a smooth submanifold of $N$. If moreover the quotient space $M:= L_{K}/K$ is (locally) a $(4n-4)$-dimensional manifold (or just an orbifold), then it becomes a quaternionic K{\"a}hler manifold with respect to the ``projected'' quaternionic structure, $g^*$, of $N$. Thus, when $N$ is 8-dimensional, the quaternionic reduction gives rise to a four-dimensional {\it anti-self-dual} Einstein orbifold (with respect to the canonical orientation induced by $N$). Note that when $K$ is the generator of an $S^1$-quaternionic action on $N$, under the non-degeneracy condition as above $M$ always inherits an orbifold structure, cf. \cite[Th. 3.1 \& Cor. 3.2]{G-L}.
The above construction applies in particular to $N = {\mathbb H}{P}^2$ endowed with certain {\it weighted} $S^1$-actions; one thus obtains a wealth of examples of {\it compact} anti-self-dual Einstein orbifolds; as shown by Galicki-Lawson, the corresponding orbifolds are all weighted projective planes ${\mathbb C} P^{[p_1,p_2,p_3]}$ for some integers $0<p_1\le p_2\le p_3$ satisfying $p_3<p_1+p_2$, \cite[Sec. 4]{G-L}. Notice that, with respect to the orientation induced by the canonical complex structure, the metric becomes {\it self-dual}. (In the case when $p_1=p_2=p_3$ one obtains the Fubini-Study metric on ${\mathbb C}{P}^2$).
On the other hand, R. Bryant showed \cite[Sec. 4.2]{Br} that each weighted projective plane admits a self-dual K{\"a}hler metric which, under the above assumption on the weights, has everywhere positive scalar curvature. Therefore, according to \cite[Lemma \ref{de-ga}]{AG}, Bryant's metric gives rise to a self-dual {\it Einstein} Hermitian metric on ${\mathbb C} P^{[p_1,p_2,p_3]}$, $p_3<p_1+p_2$.
Considering these two results together, the following natural question arises:
\noindent {\bf Question.} \cite{LeBrun} Are the Galicki-Lawson metrics on ${\mathbb C} P^{[p_1,p_2,p_3]}$ Hermitian with respect to some anti-self-dual complex structure?
In this section we show that this is indeed the case, at least on a dense open subset; more generally, we show that the answer to the above question is essentially yes for any anti-self-dual Einstein 4-orbifold obtained by quaternionic reduction from the 8-dimensional Wolf spaces ${\mathbb H}P^2$, $SU(4)/S(U(2)U(2))$ and the corresponding non-compact dual spaces (but according to \cite{kris} the argument fails for quaternionic quotients of the exceptional 8-spaces $G_2/SO(4)$ and $G^2_2/SO(4)$). More precisely, we have the following
\begin{prop}\label{quat-quot} Let $(N,g)$ be ${\mathbb H}{P}^2, SU(4)/S(U(2)U(2))$, or one of the corresponding non-compact dual spaces. Then, any anti-self-dual, Einstein 4-orbifold $(M,g^*)$ which is obtained as a quaternionic reduction of $(N,g)$ by a quaternionic Killing field $K$ locally admits a (negatively oriented) Hermitian structure $J$. In particular, the metric $g^*$ is locally given by the explicit constructions in Sec. 2. \end{prop}
The proof is based on the following simple observation. \begin{Lemma} \label{Phi} Let $(N,g)$ be a quaternionic K{\"a}hler manifold of non-zero scalar curvature and $K$ be a Killing field on $N$. Denote by $\Psi(X,Y)=(D ^g_X K, Y)$ the 2-form corresponding to $D ^g K$ and let $\Psi^+ = \Pi^+(\Psi)$ be the projection of $\Psi$ to $\Lambda^+ N$. Then, up to multiplication by a constant, the momentum map $f_K$ of $K$ is given by $\Psi^+$. \end{Lemma} \begin{proof} Since $K$ is Killing, equality (\ref{killing}) $$D ^g_X \Psi = R(K\wedge X)$$ holds. For a quaternionic K{\"a}hler manifold the curvature operator $R$
acts on $\Lambda^+ N$ by $\lambda {\rm Id}|_{\Lambda^+N}$, where $\lambda$ is a positive multiple of the scalar curvature, cf. e.g. \cite{salamon}. Thus, since $R$ preserves the splitting (\ref{quaternion-split}), projecting (\ref{killing}) to $\Lambda^+N$ we get $D ^g_X \Psi^+ = \lambda\, \Pi^+(K\wedge X) = \lambda\, \Pi^+_K(X). $ \end{proof}
By Lemma \ref{Phi} the ``level set'' $L_K$ of $K$ is the same as the set of points $x\in N$ where $\Psi^+_x =0$. Thus, at any point $x\in L_K$ the tangent space $T_xL_K$ is given by $T_xL_K = \{ T_xN \ni X : D ^g_X \Psi^+ =0 \}.$ Since by assumption $K$ does not vanish on $L_K$, we conclude by (\ref{killing}) and the fact that
$R|_{\Lambda^+N}= \lambda {\rm Id}|_{\Lambda^+N}$ that $$T_xL_K = {\rm span}(I_1K,I_2K,I_3K)^{\perp},$$ where $\{I_1, I_2,I_3\}$ is any trivialization of $A^+N$.
We also observe that the 2-form $\Psi$ is a section of $\Lambda^+N \oplus \Lambda^{1,1}N$, provided that $K$ preserves the quaternionic structure. Indeed, $$[D ^g K, I_l] = D ^g _K I_l -{\mathcal L}_K I_l, $$ where $[\cdot ,\cdot]$ stands for the commutator of ${\rm End}(TN)$. Since $K$ is quaternionic, the left-hand side of the above equality is a section of $\Lambda^+ N$. By summing over $l$ in the above relation we get \begin{equation}\label{la11} \Psi + 2\Pi^{1,1}(\Psi) \in \Lambda^+N, \end{equation} where $\Pi^{1,1}$ denotes the projection to $\Lambda^{1,1}N$: \begin{equation}\label{pi11} \Pi^{1,1}(\psi)(\cdot,\cdot) = \frac{1}{4} \Big[\psi(\cdot, \cdot) + \sum_l\psi(I_l\cdot ,I_l\cdot)\Big], \ \forall \psi \in \Lambda^2N. \end{equation} Thus, $\Psi$ is a section of $\Lambda^+N \oplus \Lambda^{1,1}N$, and at $x\in L_K$, $\Psi_x$ actually belongs to $\Lambda_x^{1,1}N$.
Since $\Psi = \frac{1}{2} {\rm d} K^{\sharp}$, where $K^{\sharp}$ is the $g$-dual 1-form of $K$, we conclude that
$${\mathcal L}_K \Psi = {\rm d}(i_K(\Psi)) = -\frac{1}{2} {\rm d}({\rm d}|K|^2)=0, $$ i.e. $\Psi$ is a closed $K$-invariant 2-form. This shows that $\Psi$ projects to $M= L_K/K$ to define an {\it anti-self-dual} form on $(M,g^*)$, then denoted by $\Psi^*$. Considering the Riemannian submersion $$\pi: L_K \longrightarrow M = L_K/K,$$ the {\it horizontal} space, $H$, of $TL_K$ is given by $$H = {\rm span}(K,I_1K,I_2K,I_3K)^{\perp}.$$ Note that $H$ is $I_l$-invariant for any section $I_l$ of $A^+N$. Using the above remarks we calculate: \begin{equation}\label{important} (D ^{g^*}_{U^*} \Psi^*)(V^*, T^*) = (D ^g_U \Psi)(V,T)
-\frac{4}{|K|^2_g}\Pi^{1,1}(i_U\Psi \wedge i_K\Psi)(V,T), \end{equation} where $D ^{g^*}$ is the Levi-Civita connection of $g^*$, $U^*,V^*,T^*$ are any vectors on $M$, and $U,V,T$ are the corresponding horizontal lifts.
By assumption, $K$ has no zero on $L_K$; it then follows from (\ref{important}) and (\ref{killing}) that
$\Psi^*$ does not vanish identically on $M$. Thus, on the open subset of $(M,g^*)$ where $\Psi^* \neq 0$ the normalised ASD form $\frac{{\sqrt 2} \Psi^*}{|\Psi^*|_{g^*}}$ determines a {\it negative} almost Hermitian structure $J$. By virtue of the Riemannian Goldberg-Sachs theorem (\cite[Prop. 1]{AG}), Proposition \ref{quat-quot} follows from the following \begin{Lemma}\label{integrab} The almost-complex structure $J$ is integrable. \end{Lemma} \begin{proof} We denote by $Z^*_i$ any complex (1,0)-vector field of $(M,J)$ and by $Z_i$ the corresponding horizontal lift (considered as a complex vector in $T_x^{\mathbb C} N$); then, $J$ is integrable if and only if the following identity holds: \begin{equation}\label{integrability0}
D ^{g ^*}_{Z^*_i} (\frac{{\sqrt 2}\Psi^*}{|\Psi^*|_{g^*}})(Z^*_j,Z^*_k) = (D ^{g ^*}_{Z^*_i} \Psi^*)(Z^*_j, Z^*_k) =0 \ \forall i,j,k ; \end{equation} by the very definition of $J$ we have $\Psi(Z_i,Z_j)=0$; moreover, since $\Psi$ belongs to $\Lambda^{1,1}N$ on $L_K$, the almost complex structure $J$ (defined on $H$) commutes with $I_l$'s for any trivialization $\{I_1,I_2,I_3\}$ of $A^+N$. Then, by (\ref{important}) and (\ref{killing}) it is easily seen that the integrability condition (\ref{integrability0}) for $J$ is the same as \begin{equation}\label{integrability} (D ^{g ^*}_{Z^*_i} \Psi^*)(Z^*_j, Z^*_k)= ({D^g}_{Z_i} \Psi)(Z_j, Z_k) = (R(K\wedge Z_i),Z_j\wedge Z_k) = 0. \end{equation}
We now derive (\ref{integrability}) from the structure of the curvature tensor of the Riemannian symmetric spaces ${\mathbb H}{P}^2, SU(4)/S(U(2)U(2))$ and the corresponding non-compact duals, ${\mathbb H}{H}^2$ and $SU(2,2)/S(U(2)U(2))$ (we refer to \cite{salamon, gauduchon} for a general description of the curvature operator, $R$, of a Riemannian symmetric space).
We first consider the simplest case of $N={\mathbb H}{P}^2 = Sp(3)/(Sp(1)Sp(2))$ (or its non-compact dual). The eigenspaces of $R$ are then the simple factors ${\bf sp}(1)$ and ${\bf sp}(2)$ of the isotropy Lie sub-algebra ${\bf h} = {\bf sp}(1) \oplus {\bf sp}(2)$, and the orthogonal complement ${\bf h}^{\perp}$ of ${\bf h}$ in the space ${\rm Skew}({\bf m})$ of the skew-symmetric endomorphisms of ${\bf m}= {\bf sp}(3)/{\bf h}$ (note that $R$ acts trivially on ${\bf h}^{\perp}$); the decomposition ${\rm Skew}({\bf m}) = {\bf sp}(1) \oplus {\bf sp}(2) \oplus {\bf h}^{\perp}$ into eigenspaces of $R$ then fits with the splitting (\ref{quaternion-split}); $\Lambda^+N$ is thus identified with ${\bf sp}(1)$, and $\Lambda^{1,1}N$ with ${\bf sp}(2)$, whereas $\Lambda^{\perp}N$ corresponds to the kernel of $R$, the space ${\bf h}^{\perp}$. This shows that the curvature operator acts on the first two factors in (\ref{quaternion-split}) by multiplication with a non-zero constant (a certain multiple of the scalar curvature), and acts trivially on the third factor (thus $R$ has three distinct eigenvalues, $\lambda$, $\mu$ and $0$); this observation also shows that any Killing field on ${\mathbb H}P^2$ is necessarily quaternionic.
As already observed, the almost complex structure $J$ (defined on $H$) commutes with the $I_l$'s, so that $I_l(Z_k)$ is again a (1,0)-vector of $(H,J)$; we thus get $$\Pi^+(Z_j\wedge Z_k) = \sum_{l} (Z_j, I_l(Z_k))\omega_l= 0,$$ which means that $Z_j\wedge Z_k$ is an element of $\Lambda_x^{1,1}N \oplus \Lambda_x^{\perp}N$. It then follows that \begin{eqnarray}\nonumber (R(K\wedge Z_i), Z_j\wedge Z_k) &=& (R(Z_j\wedge Z_k), K\wedge Z_i) \\ \nonumber
&=& \mu(\Pi^{1,1}(Z_j\wedge Z_k), K\wedge Z_i). \end{eqnarray} But $\Pi^{1,1}(Z_j\wedge Z_k)$ is again a (2,0)-vector of $(M,J)$ (see formula (\ref{pi11})), so that $(\Pi^{1,1}(Z_j\wedge Z_k), K\wedge Z_i)=0$; this implies (\ref{integrability}).
The same argument holds for the non-compact dual space ${\mathbb H}H^2$.
The case of $N=SU(4)/S(U(2)U(2))$ (or its non-compact dual) is similar, but $N$ is now a {\it Hermitian symmetric} space, whose canonical Hermitian structure $I$ commutes with any $I_i \in A_x^+ N$. The corresponding K{\"a}hler form, $\Omega_I$, then belongs to the space $\Lambda^{1,1}N$ and gives rise to a further splitting $$\Lambda^{1,1}N = {\mathbb R}\cdot \Omega_I \oplus \Lambda^{1,1}_0 N, $$ where $\Lambda^{1,1}_0 N$ is the orthogonal complement of $\Omega_I$. Correspondingly, the eigenspaces of the curvature $R$ are the bundles $\Lambda^+ N$, ${\mathbb R}\cdot \Omega_I$, $\Lambda^{1,1}_0 N$, and $\Lambda^{\perp} N$. Note that $R$ acts trivially on $\Lambda^{\perp}N$, whereas $\Omega_I$ is an eigenform of $R$ corresponding to the simple eigenvalue; in particular, $K$ must preserve $I$ and $\Omega_I$, so that $\Psi$ is of type $(1,1)$ with respect to $I$; in other words, the almost complex structure $I$ commutes with $J$, when acting on $H$. It follows that $Z_i\wedge Z_j$ belongs to $\Lambda^{1,1}_0 N \oplus \Lambda^{\perp}N$, and we conclude as in the case of ${\mathbb H}P^2$. \end{proof}
\noindent
{\bf Remark 5.} (i) By (\ref{important}) and Lemma \ref{integrab}, we see that $\frac{1}{|K|^2} \Psi^*$ is a harmonic 2-form on $(M,g^*)$; it is actually the K{\"a}hler form of a self-dual K{\"a}hler metric in the conformal class of $g^*$ (see \cite[Prop. 2]{AG}). In particular, if $(M,g^*)$ is not a real space form, then $\Psi^*$ has no zero on $M$. By construction,
$\frac{2}{|K|^2}\Psi^*$ is the curvature form of the submersion $\pi: L_K \longrightarrow M$. It follows that $L_K$ is a Sasakian manifold fibered over a K{\"a}hler self-dual --- equivalently, a Bochner-flat --- four-manifold. It is well known that the corresponding CR-structure of $L_K$ has vanishing fourth-order Chern-Moser curvature; therefore $L_{K}$ is uniformized over $S^5$ with respect to ${\rm Aut}_{CR}(S^5)=PU(3,1)$, cf. \cite{webster}.
(ii) As observed in \cite[p. 20]{G-L}, the quaternionic reduction procedure can be applied to the quaternionic hyperbolic space to obtain {\it smooth}, {\it complete} (non locally symmetric) Einstein self-dual metrics of negative scalar curvature, which are necessarily Hermitian by Lemma \ref{integrab}; see also \cite{Br} for another construction of complete Einstein self-dual Hermitian metrics. In view of our first remark, these examples seem to contradict some results in \cite{kamishima}.
\end{document}
\begin{document}
\title{Half-Space Proximal Stochastic Gradient Method for Group-Sparsity Regularized Problem}
\begin{abstract} Optimizing with group sparsity is significant in enhancing model interpretability in machine learning applications, \textit{e.g.}, feature selection, compressed sensing and model compression. However, for large-scale stochastic training problems,
effective group sparsity exploration is typically hard to achieve.
Particularly, the state-of-the-art stochastic optimization algorithms usually generate merely dense solutions.
To overcome this shortcoming, we propose a stochastic method---the Half-Space Stochastic Projected Gradient (HSPG) method---to
search for solutions of high group sparsity while maintaining convergence. Initialized by a simple Prox-SG Step, the HSPG{}{} method relies on a novel~\text{Half-Space Step}{} to substantially boost the sparsity level.
Numerically, HSPG{}{} demonstrates its superiority in
deep neural networks, \textit{e.g.},~\text{VGG16}{},~\text{ResNet18}{} and~\text{MobileNetV1}, by computing solutions of higher group sparsity, competitive objective values and generalization accuracy. \end{abstract}
\section{Introduction}
In many recent machine learning optimization tasks, researchers not only focus on finding solutions with small prediction/generalization error
but also concentrate on improving model interpretability by filtering out redundant parameters and achieving slimmer model architectures. One technique to achieve this goal is to augment the raw objective functions with sparsity-inducing regularization terms so as to generate sparse solutions (containing numerous zero elements). The popular $\ell_1$-regularization promotes the sparsity of solutions by penalizing the optimization variables element-wise. However, in many practical applications, there exist additional constraints on the variables such that the zero coefficients are often not randomly distributed but tend to be clustered into more sophisticated sparsity structures, \textit{e.g.}, disjoint and overlapping groups and hierarchies \citep{yuan2006model,huang2010benefit,huang2009learning}.
As the most important and natural form of structured sparsity, disjoint group-sparsity regularization assumes that pre-specified disjoint blocks of variables are selected (non-zero variables) or ignored (zero variables) simultaneously~\citep{bach2012structured}. It plays a central role in general structured sparsity learning tasks, since other instances such as overlapping group and hierarchical sparsity are typically solved by converting them into equivalent disjoint group versions via latent variables~\citep{bach2012structured}. It has found numerous applications in computer vision \citep{elhamifar2012see}, signal processing \citep{chen2014group}, medical imaging \citep{liu2018mri}, and deep learning~\citep{scardapane2017group}, especially in the model compression of deep neural networks, where group sparsity\footnote{Group sparsity is defined as the number of zero groups, where a zero group means all its variables are exactly zero.} is leveraged to directly remove entire redundant hidden structures.
\textbf{Problem Setting.} We study the disjoint group sparsity regularization problem, which can be typically formulated as the mixed $\ell_1/\ell_p$-regularization problem; we pay special attention to the most popular and widely used instance $p=2$~\citep{bach2012structured,el2018combinatorial}, \begin{equation}\label{prob.x} \minimize{\bm{x}\in \mathbb{R}^n}\ \Big\{\Psi(\bm{x})\ \myeq\ f(\bm{x})+\lambda \Omega(\bm{x})={\frac{1}{N}\sum_{i=1}^N f_i(\bm{x})}+\lambda{\sum_{g\in\mathcal{G}}\norm{[\bm{x}]_g}}\Big\}, \end{equation} where $\lambda>0$ is a weighting factor, $\norm{\cdot}$ denotes the $\ell_2$-norm, $f(\bm{x})$ is the average of $N$ continuously differentiable instance functions $f_i : \mathbb{R}^n\rightarrow\mathbb{R}$, such as loss functions measuring the deviation from the observations in various data fitting problems, $\Omega(\bm{x})$ is the so-called mixed $\ell_1/\ell_2$ norm, and $\mathcal{G}$ is a prescribed fixed partition of the index set $\mathcal{I}=\{1,2,\cdots, n\}$, wherein each component $g\in\mathcal{G}$ indexes a group of variables determined by the application. Theoretically, a larger $\lambda$ typically results in higher group sparsity at the cost of greater bias in the model estimation; hence $\lambda$ needs to be carefully tuned to achieve both a low $f$ and highly group-sparse solutions.
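For concreteness, the objective in~(\ref{prob.x}) can be evaluated as in the following sketch (a minimal illustration; the group partition, data, and least-squares loss are hypothetical choices of our own):

```python
import numpy as np

def omega(x, groups):
    # Mixed l1/l2 norm: sum of l2-norms over the disjoint groups in G
    return sum(np.linalg.norm(x[g]) for g in groups)

def psi(x, groups, lam, A, b):
    # Psi(x) = f(x) + lam * Omega(x), with f taken here (illustratively)
    # as an average least-squares loss over N = len(b) instances
    f = 0.5 * np.mean((A @ x - b) ** 2)
    return f + lam * omega(x, groups)
```

With the partition `groups = [[0,1], [2,3]]`, a vector whose second block is all zero contributes a single zero group to the group sparsity count.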
\textbf{Literature Review.\;} Problem~(\ref{prob.x}) has been well studied in deterministic optimization, with various algorithms that are capable of returning solutions with both a low objective value and high group sparsity under a proper $\lambda$~\citep{yuan2006model,roth2008group,huang2011learning,ndiaye2017gap}. Proximal methods are classical approaches for solving the structured non-smooth optimization problem~\eqref{prob.x}, including the popular proximal gradient method (Prox-FG), which only uses first-order derivative information. When $N$ is huge, stochastic methods that operate on a small subset of instances become ubiquitous, since they avoid the costly evaluation over all instances required by deterministic methods for large-scale problems. The proximal stochastic gradient method~(\text{Prox-SG})~\citep{duchi2009efficient} is the natural stochastic extension of Prox-FG. The regularized dual-averaging method (\text{RDA})~\citep{xiao2010dual,yang2010online} extends the dual averaging scheme of~\citep{nesterov2009primal}. To improve the convergence rate,
there exists a family of incremental gradient methods inspired by SAG~\citep{roux2012stochastic} that utilize the average of accumulated past gradients. For example,
proximal stochastic variance-reduced gradient method (\text{Prox-SVRG}{})~\citep{xiao2014proximal} and proximal spider (\text{Prox-Spider})~\citep{zhang2019multi} are developed to adopt multi-stage schemes based on the well-known variance reduction technique SVRG proposed in~\citep{johnson2013accelerating} and Spider developed in~\citep{fang2018spider} respectively. \text{SAGA}{}~\citep{defazio2014saga} stands as the midpoint between SAG and Prox-SVRG.
Compared to deterministic methods, studies of the mixed $\ell_1/\ell_2$-regularization~(\ref{prob.x}) in the stochastic setting are somewhat rare and limited. \text{Prox-SG}{},~\text{RDA}{},~\text{Prox-SVRG}{}, Prox-Spider and~\text{SAGA}{} are valuable state-of-the-art stochastic algorithms for solving problem~(\ref{prob.x}), but with apparent weaknesses. In particular, these existing stochastic algorithms typically have difficulty achieving both decent convergence and effective group sparsity identification simultaneously (e.g., small function values but merely dense solutions), because of the randomness and their limited sparsity-promotion mechanisms. In depth,~\text{Prox-SG}{},~\text{RDA}{},~\text{Prox-SVRG}{},~\text{Prox-Spider}{} and~\text{SAGA}{} derive from the proximal gradient method and utilize the proximal operator to produce groups of zero variables. Such an operator is generic to extensive non-smooth problems, and consequently perhaps not sufficiently tailored when the target problems possess certain properties,~\textit{e.g.}, the group sparsity structure of problem~(\ref{prob.x}). In fact, in the convex setting, the proximal operator suffers from the variance of the gradient estimate; and in the non-convex setting, especially deep learning, the small step size (learning rate) further deteriorates its effectiveness on group sparsity promotion, as will be shown in Section~\ref{sec.algorithm}: the projection region vanishes rapidly for all these methods except~\text{RDA}{}. \text{RDA}{} is superior to the others at finding the manifold structure~\citep{lee2012manifold}, but inferior in objective convergence. Besides, variance reduction techniques typically require measurements over a huge mini-batch of data points in both theory and practice, which is probably prohibitive for large-scale problems, and they have sometimes been observed to be ineffective for deep learning applications~\citep{defazio2019ineffectiveness}. 
On the other hand, to introduce sparsity, there exist heuristic weight pruning methods~\citep{li2016pruning,luo2017thinet}; however, they commonly come without theoretical guarantees, and thus easily diverge and hurt generalization accuracy.
\textbf{Our Contributions.} The Half-Space Stochastic Projected Gradient{} (HSPG{}) method overcomes the limitations of the existing stochastic algorithms on group sparsity identification, while maintaining comparable convergence characteristics. While the mainstream works on (group) sparsity have focused on using proximal operators of the regularization, our method is unique in enforcing group sparsity more effectively by leveraging the half-space structure, and it is well supported by theoretical analysis and empirical evaluations. We now summarize our contributions as follows.
\begin{itemize}[leftmargin=*]
\item \emph{Algorithmic Design:} We propose the HSPG{}{} method to solve the disjoint group sparsity regularized problem~(\ref{prob.x}). Initialized with a Prox-SG Step for seeking a close-enough but perhaps dense solution estimate, the algorithmic framework relies on a novel~\text{Half-Space Step}{} to exploit group sparse patterns. We carefully design the Half-Space Step with the following main features: \textit{(i)} it utilizes the previous iterate as the normal direction to construct a reduced space consisting of a set of half-spaces and the origin; \textit{(ii)} a new group projection operator maps groups of variables onto zero if they fall outside the constructed reduced space, identifying group sparsity considerably more effectively than the proximal operator; and \textit{(iii)} with a proper step size, the Half-Space Step enjoys the sufficient decrease property, and achieves progress towards the optimum in both theory and practice.
\item \emph{Theoretical Guarantee:} We provide convergence guarantees for~HSPG{}{}. Moreover, we prove that HSPG{}{} has looser requirements for identifying the sparsity pattern than Prox-SG, revealing its superiority in group sparsity exploration. In particular, for sparsity pattern identification, the distance to the optimal solution $\bm{x}^*$ required by~HSPG{}{} is larger, and hence easier to satisfy, than the distance required by~\text{Prox-SG}{}.
\item \emph{Numerical Experiments:} Experimentally, HSPG{}{} outperforms the state-of-the-art methods in terms of group sparsity exploration, and achieves competitive objective value convergence and runtime in both convex and non-convex problems. In popular deep learning tasks, HSPG{}{} usually computes solutions with several times higher group sparsity than, and similar generalization performance on unseen testing data to, those generated by the competitors; such solutions may be further used to construct smaller and more efficient network architectures.
\end{itemize}
\section{The HSPG{}\ method }\label{sec.algorithm} We state the~Half-Space Stochastic Projected Gradient{} (HSPG{}) method in Algorithm~\ref{alg:main.x.outline}. In general, it contains two stages: the Initialization Stage and the Group-Sparsity Stage. The Initialization Stage employs the~\text{Prox-SG Step}{} (Algorithm~\ref{alg:main.x.prox_sg_step}) to search for a close-enough but usually non-sparse solution estimate. The second and fundamental stage then performs the \text{Half-Space Step}{} (Algorithm~\ref{alg:main.x.halfspacestep}), started from the non-sparse solution estimate, to effectively exploit the group sparsity within a sequence of reduced spaces, and converges to group-sparse solutions with a theoretical convergence guarantee.
\begin{algorithm}[h!]
\caption{Outline of HSPG{}{} for solving \eqref{prob.x}.}
\label{alg:main.x.outline}
\begin{algorithmic}[1]
\State \textbf{Input:} $x_0\in\mathbb{R}^n$, $ \alpha_0\in(0,1), \epsilon\in [0,1)$, and $ N_\mathcal{P}\in \mathbb{Z}^+ $.
\For{$k = 0,1,2,\dots$ }
\If{$k< N_\mathcal{P}$} \label{line:switch_prox_sg_step}
\State Compute $x_{k+1}\leftarrow \text{Prox-SG}(x_k,\alpha_k)$ by Algorithm~\ref{alg:main.x.prox_sg_step}.
\Else{}
\State Compute $x_{k+1}\leftarrow\text{Half-Space}(x_k,\alpha_k, \epsilon)$
by Algorithm \ref{alg:main.x.halfspacestep}. \hspace{0.3in}
\EndIf
\State Update $\alpha_{k+1}$.
\EndFor
\end{algorithmic} \end{algorithm}
\begin{algorithm}[h]
\caption{Prox-SG Step.}
\label{alg:main.x.prox_sg_step}
\begin{algorithmic}[1]
\State \textbf{Input:} Current iterate $x_k$, and step size $ \alpha_k$.
\State Compute the stochastic gradient of $f$ on mini-batch $ \mathcal{B}_k $
\begin{equation}
\nabla f_{\mathcal{B}_k}(x_k)\leftarrow\frac{1}{|\mathcal{B}_k|}\sum_{i\in \mathcal{B}_k}\Grad f_i(x_k).
\end{equation}\label{line:g_t_estimate_prox_sg}
\State \textbf{Return }
$x_{k+1}\leftarrow\text{Prox}_{\alpha_k\lambda\Omega(\cdot)}\left(x_k-\alpha_k\nabla f_{\mathcal{B}_k}(x_k)\right)$ \label{line:prox}.
\end{algorithmic} \end{algorithm}
\paragraph{Initialization Stage.} The Initialization Stage performs the vanilla proximal stochastic gradient method (Prox-SG, Algorithm~\ref{alg:main.x.prox_sg_step}) to approach a solution of~(\ref{prob.x}). At the $k$th iteration, a mini-batch $\mathcal{B}_k$ is sampled to generate an unbiased estimator of the full gradient of $f$ (line~\ref{line:g_t_estimate_prox_sg}, Algorithm~\ref{alg:main.x.prox_sg_step}) to compute a trial iterate $\widehat{x}_{k+1}:=x_k-\alpha_k \Grad f_{\mathcal{B}_k}(x_k)$, where $\alpha_k$ is the step size, and $f_{\B_k}$ is the average of the instance functions $f_i$ across $\B_k$. The next iterate $x_{k + 1}$ is then updated based on the proximal mapping \begin{equation}\label{eq:proxmapping} \begin{split} x_{k+1}&=\text{Prox}_{\alpha_k\lambda\Omega(\cdot)}(\hat{x}_{k+1})=\argmin_{x\in \mathbb{R}^{n}}\ \frac{1}{2\alpha_k}\norm{x-\hat{x}_{k+1}}^2+ \lambda\Omega(x), \end{split} \end{equation} where the regularization term $\Omega(x)$ is defined in~(\ref{prob.x}). Notice that the above subproblem~(\ref{eq:proxmapping}) has a closed-form solution, where for each $g\in \G$, we have
\begin{equation}\label{eq:proximal_x_kp1} [x_{k+1}]_g=\max\left\{0,1-\alpha_k\lambda /\norm{[\widehat{x}_{k+1}]_g}\right\}\cdot [\widehat{x}_{k+1}]_g. \end{equation} In~HSPG{}{}, the Initialization Stage performs the Prox-SG Step $N_\P$ times as a localization mechanism to seek an estimate which is close enough to a solution of problem~(\ref{prob.x}), where $N_\P:=\min\{k:k\in\mathbb{Z}^+, \norm{\bm{x}_k-\bm{x}^*}\leq R/2\}$ is associated with a positive constant $R$ related to the optima; see~\eqref{def:R} in Appendix~\ref{appendix:convergence_analysis}. In practice, although the close-enough requirement is perhaps hard to verify, we empirically suggest keeping the Prox-SG Step running until observing some stage-switch signal obtained by testing the stationarity of the objective values, the norm of the (sub)gradient, or the validation accuracy, similarly to~\citep{zhang2020statistical}. However, the Initialization Stage alone is \textit{insufficient} to exploit the group sparsity structure, \textit{i.e.}, the computed solution estimate is typically dense, due to the randomness and the moderate truncation mechanism of the proximal operator constrained to its projection region, \textit{i.e.}, the trial iterate $[\widehat{x}_{k+1}]_g$ is projected to zero by~(\ref{eq:proximal_x_kp1}) only if it falls into an $\ell_2$-ball centered at the origin with radius $\alpha_k\lambda$. Our remedy is to follow it with the Half-Space Step below, which exhibits an effective sparsity promotion mechanism while still retaining the convergence property.
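The closed-form group soft-thresholding~(\ref{eq:proximal_x_kp1}) can be sketched as follows (an illustrative NumPy sketch, not the authors' implementation; the function and variable names are our own):

```python
import numpy as np

def group_prox(x_hat, groups, alpha, lam):
    # Closed-form proximal map of alpha*lam*Omega: group soft-thresholding.
    # Each group is zeroed iff its l2-norm falls inside the ball of
    # radius alpha*lam, otherwise it is shrunk toward the origin.
    x = x_hat.copy()
    for g in groups:
        norm_g = np.linalg.norm(x_hat[g])
        if norm_g <= alpha * lam:
            x[g] = 0.0                               # whole group zeroed
        else:
            x[g] = (1.0 - alpha * lam / norm_g) * x_hat[g]
    return x
```

For example, with `alpha = lam = 1`, a group of norm $5$ is shrunk by the factor $1-1/5=0.8$, while a group of norm $0.01$ is mapped to zero.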
\begin{algorithm}[h]
\caption{\text{Half-Space Step}}
\label{alg:main.x.halfspacestep}
\begin{algorithmic}[1]
\State \textbf{Input:} Current iterate $x_k$, step size $ \alpha_k$, and $ \epsilon $.
\State Compute the stochastic gradient of {$\Psi$} on $ \mathcal{I}^{\neq 0}(x_k) $ by mini-batch $ \mathcal{B}_k$
\begin{align}
& [\Grad\Psi_{\mathcal{B}_k}(x_k)]_{\mathcal{I}^{\neq 0}(x_k) }\gets \frac{1}{|{\mathcal{B}_k}|}\sum_{i\in \mathcal{B}_k}[\Grad\Psi_i(x_k)]_{\mathcal{I}^{\neq 0}(x_k) }
\end{align}\label{line:g_k_estimate_half_space_step}
\State Compute
$[\tilde{x}_{k+1}]_{\mathcal{I}^{\neq 0}(x_k)}\leftarrow [x_{k}-\alpha_k\Grad \Psi_{\mathcal{B}_k}(x_k)]_{\mathcal{I}^{\neq 0}(x_k) }$ and $[\tilde{x}_{k+1}]_{\mathcal{I}^{ 0}(x_k)}\leftarrow 0$.\label{line:half_space_trial_iterate}
\For{each group $ g $ in $ \I^{\neq 0}(x_{k}) $}\label{line:half_space_project_start}
\If{$ [\tilde{x}_{k+1}]_{g}^\top [x_{k}]_g <\epsilon\norm{[x_k]_g}^2$} \label{line:half_space_set_new_iterate_start}
\State $ [\tilde{x}_{k+1}]_{g} \gets 0 $. \label{line:half_space_project}
\EndIf
\EndFor
\State \textbf{Return}\ $x_{k+1}\gets \tilde{x}_{k+1}$.\label{line:half_space_set_new_iterate_end}
\end{algorithmic} \end{algorithm}
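The loop of Algorithm~\ref{alg:main.x.halfspacestep} can be sketched as follows (an illustrative NumPy sketch under our own naming conventions; the stochastic gradient of $\Psi$ is assumed precomputed and passed in):

```python
import numpy as np

def half_space_step(x_k, grad_psi, groups, alpha, eps):
    # One Half-Space Step: an SGD step on the nonzero groups followed by
    # the half-space projection; zero groups stay fixed at zero.
    x_next = np.zeros_like(x_k)
    for g in groups:
        if np.linalg.norm(x_k[g]) == 0:               # g in I^0(x_k): frozen
            continue
        trial = x_k[g] - alpha * grad_psi[g]          # gradient trial point
        if trial @ x_k[g] >= eps * (x_k[g] @ x_k[g]): # half-space test
            x_next[g] = trial                         # keep the group
        # otherwise the whole group is projected onto zero
    return x_next
```

Note that the half-space test compares $[\tilde{x}_{k+1}]_g^\top [x_k]_g$ against $\epsilon\norm{[x_k]_g}^2$, so a group whose trial point points away from $[x_k]_g$ is removed in one shot.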
\paragraph{Group-Sparsity Stage.} The Group-Sparsity Stage is designed to effectively determine the groups of zero variables while preserving the convergence characteristics, in sharp contrast to heuristic aggressive weight pruning methods, which typically lack theoretical guarantees~\citep{li2016pruning,luo2017thinet}. The underlying intuition of its atomic~\text{Half-Space Step}{}~(Algorithm~\ref{alg:main.x.halfspacestep}) is to project $[x_k]_g$ to zero only if $-[x_k]_g$ serves as a descent step for $\Psi(x_k)$, \textit{i.e.}, $-[x_k]_g^\top[\Grad \Psi(x_k)]_g<0$, so that updating $[x_{k+1}]_g\gets[x_k]_g-[x_k]_g=0$ still results in some progress towards optimality. Before introducing it, we first define the following index sets for any $ x\in\mathbb{R}^n $: \begin{equation}\label{def:I_set} \mathcal{I}^0(x) := \{g: g\in\mathcal{G}, [x]_g=0\}\ \text{and}\ \mathcal{I}^{\neq 0}(x) :=\{g: g\in\mathcal{G}, [x]_g\neq 0\}, \end{equation} where $\I^{0}(x)$ represents the indices of the groups of zero variables at $x$, and $\I^{\neq 0}(x)$ indexes the groups of nonzero variables at $x$. To proceed, we further define an artificial set that $x$ lies in: \begin{equation}\label{def:polytope} \S(x)\coloneqq \left\{z\in\mathbb{R}^n : [z]_g=0 \ \text{if}\ g\in\I^0{(x)},\text{and}\ [z]_g^T[x]_g\geq \epsilon \norm{[x]_g}^2\ \text{if}\ g\in\I^{\neq 0}(x)\right\} \bigcup \{0\}, \end{equation} which consists of half-spaces and the origin. Here the parameter $\epsilon \geq 0$ controls the grey region presented in Figure~\ref{figure:project_region}, and the exact way to set $\epsilon$ will be discussed in Section~\ref{sec:exp} and the Appendix. Hence, $x$ inhabits $\S(x_k)$, \textit{i.e.}, $x\in\S(x_k)$, only if: \textit{(i)} $[x]_g$ lies in the upper half-space for all $g\in\mathcal{I}^{\neq 0}(x_k)$ for some prescribed $\epsilon\in[0,1)$ as shown in Figure~\ref{figure:half_space_projection}; and \textit{(ii)} $[x]_g$ equals zero for all $g\in\mathcal{I}^0(x_k)$. \begin{figure}
\caption{Half-Space Projection}
\caption{Projection Region}
\label{figure:half_space_projection}
\label{figure:project_region}
\label{figure:proj_euclidean}
\end{figure} The fundamental assumption for the~Half-Space Step to succeed is that the~Initialization Stage has produced a (possibly \textit{non-sparse}) solution estimate $x_k$ near a group sparse solution $x^*$ of problem~(\ref{prob.x}), \textit{i.e.}, the distance to the optimum $\norm{x_k-x^*}$ is sufficiently small. As seen in the Appendix, this further indicates that the group sparse optimal solution $x^*$ inhabits $\S_k:=\S(x_k)$, which implies that $\S_k$ already covers the group-support of $x^*$, \textit{i.e.}, $\I^{\neq 0}(x^*)\subseteq \I^{\neq 0}(x_k)$. Our goal now becomes minimizing $\Psi(x)$ over $\S_k$ to identify the remaining groups of zero variables, \textit{i.e.}, $\I^0(x^*)\setminus\I^{0}(x_k)$, which is formulated as the following smooth optimization problem: \begin{equation}\label{prob.half_space_sub_problem} x_{k+1}=\argmin_{x\in\S_k}\ \Psi(x)=f(x)+\lambda\Omega(x). \end{equation} By the definition of $\S_k$, the entries $[x]_{\mathcal{I}^0(x_k)}\equiv 0$ are kept fixed while Algorithm~\ref{alg:main.x.halfspacestep} proceeds, and only the entries in $\I^{\neq 0}(x_k)$ are allowed to move. Hence $\Psi(x)$ is smooth on $\S_k$, and~(\ref{prob.half_space_sub_problem}) is a reduced space optimization problem. A standard way to solve problem~(\ref{prob.half_space_sub_problem}) would be stochastic gradient descent equipped with the Euclidean projection~\citep{nocedal2006numerical}. However, such a projected method rarely produces zero (group) variables, as the dense $\hat{x}_{E}$ illustrated in Figure~\ref{figure:half_space_projection} shows. To address this, we introduce a novel projection operator to effectively conduct the group projection as follows.
As stated in Algorithm~\ref{alg:main.x.halfspacestep}, we first approximate the gradient of $\Psi$ on the free variables in $\I^{\neq 0}(x_k)$ by $[\Grad \Psi_{\B_k}(x_k)]_{\I^{\neq 0}(x_k)}$ (line~\ref{line:g_k_estimate_half_space_step}, Algorithm~\ref{alg:main.x.halfspacestep}), then employ SGD to compute a trial point $\widetilde{x}_{k+1}$ (line~\ref{line:half_space_trial_iterate}, Algorithm~\ref{alg:main.x.halfspacestep}) which is passed into a new projection operator $\proj_{\S_k}(\cdot)$ defined as \begin{equation}\label{def:proj} \left[\proj_{\S_k}(z)\right]_g\coloneqq\bigg\{ \begin{array}{ll} [z]_g & \text{if}\ [z]_g^T[x_k]_g\geq \epsilon\norm{[x_k]_g}^2,\\ 0 & \text{otherwise}. \end{array} \end{equation} The above projector of form~(\ref{def:proj}) is not the standard Euclidean projection operator in most cases\footnote{ Unless $\Omega(x)$ is $\norm{x}_1$, where each $g\in\G$ is a singleton; then $\S_k$ becomes an orthant face~\citep{chen2020orthant}.}, but it still enjoys the following two advantages: \textit{(i)} the actual search direction $d_k:=(\proj_{\S_k}(\tilde{x}_{k+1})-x_k)/\alpha_k$ serves as a descent direction for $\Psi_{\B_k}(x_k):=f_{\B_k}(x_k)+\lambda\Omega(x_k)$, \textit{i.e.}, $[d_k]_g^\top[\Grad \Psi_{\B_k}(x_k)]_g<0$ since $\theta<90^{\circ}$ in Figure~\ref{figure:half_space_projection}, so progress towards the optimum is made via the sufficient decrease property as shown in Lemma~\ref{lemma:sufficient_decrease_half_space}; and \textit{(ii)} it effectively projects groups of variables to zero simultaneously if the inner product of the corresponding entries is sufficiently small. In contrast, the Euclidean projection operator is far less effective at promoting group sparsity, as the Euclidean projected point $\hat{x}_E\neq 0$ versus $x_{k+1}=\proj_{\S_k}(\tilde{x}_{k+1})=0$ shown in Figure~\ref{figure:half_space_projection} illustrates. \begin{lemma}\label{lemma:sufficient_decrease_half_space}
Algorithm~\ref{alg:main.x.halfspacestep} yields the next iterate $x_{k+1}$ as $\textit{Proj}_{\S_k}(x_k-\alpha_k\Grad \Psi_{\B_k}(x_k))$; then the search direction $d_k:=(x_{k+1}-x_k)/\alpha_k$ is a descent direction for $\Psi_{\B_k}(x_k)$, \textit{i.e.}, $d_k^\top\Grad \Psi_{\B_k}(x_k)<0$. Moreover, letting $L$ be the Lipschitz constant of $ \Grad \Psi_{\mathcal{B}_k} $ on the feasible domain, and $\hat{\mathcal{G}}_k:=\I^{\neq 0}(x_k)\bigcap \I^{0}(x_{k+1})$ and $\tilde{\mathcal{G}}_k:=\I^{\neq 0}(x_k)\bigcap \I^{\neq 0}(x_{k+1})$ be the sets of groups that are and are not projected onto zero, respectively, we have
\begin{equation}
\small
\begin{split}
\Psi_{\mathcal{B}_k}(x_{k+1})\leq & \Psi_{\mathcal{B}_k}(x_{k})-\left(\alpha_k-\frac{\alpha_k^2L}{2}\right)\sum_{g\in\tilde{\G}_k}\norm{[\Grad \Psi_{\mathcal{B}_k}(x_{k})]_g}^2-\left(\frac{1-\epsilon}{\alpha_k}-\frac{L}{2}\right)\sum_{g\in\hat{\G}_k}\norm{[x_{k}]_g}^2.
\end{split}
\end{equation}
\end{lemma} We now intuitively illustrate the strength of~HSPG{}{} in group sparsity exploration. In fact, the half-space projection~(\ref{def:proj}) is a more effective sparsity promotion mechanism than those of the existing methods. In particular, it benefits from a much larger projection region for mapping a reference point $\hat{x}_{k+1}:=x_k-\alpha_k\Grad f_{\mathcal{B}_k}(x_k)$ or its variants to zero. As in the 2D case depicted in Figure~\ref{figure:project_region}, the projection regions of~Prox-SG, Prox-SVRG,~\text{Prox-Spider}{} and SAGA are $\ell_2$-balls with radius $\alpha_k\lambda$. In stochastic learning, especially deep learning tasks, the step size $\alpha_k$ is usually selected around $10^{-3}$ to $10^{-4}$ or even smaller for convergence. Together with the common setting $\lambda \ll 1$, their projection regions vanish rapidly, resulting in difficulties in producing group sparsity. In sharp contrast, even when $\alpha_k\lambda$ is near zero, the projection region of~HSPG{}{}, $\{x: x_k^Tx< (\alpha_k\lambda + \epsilon\norm{x_k})\norm{x_k}\}$ (see the Appendix), is still an open half-space which contains those $\ell_2$-balls as well as~\text{RDA}'s if $\epsilon$ is large enough. Moreover, the control parameter $\epsilon$ adjusts the level of aggressiveness of the group sparsity promotion~(\ref{def:proj}),~\textit{i.e.}, the larger the more aggressive, while maintaining the progress towards optimality by Lemma~\ref{lemma:sufficient_decrease_half_space}. In practice, proper fine-tuning of $\epsilon$ is sometimes required to achieve both group sparsity enhancement and a sufficient decrease of the objective value, as will be seen in Section~\ref{sec:exp}.
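To make the projection-region comparison concrete, the following sketch (with illustrative numbers of our own choosing) checks whether a single 2D group would be zeroed by the proximal $\ell_2$-ball versus by the half-space region stated above:

```python
import numpy as np

alpha, lam, eps = 1e-3, 1e-2, 0.1      # typical small deep-learning scales
x_k = np.array([1.0, 0.0])             # current (nonzero) group
x_hat = np.array([-0.05, 0.5])         # reference point x_k - alpha * grad f

# Prox-SG zeroes the group only inside the tiny ball of radius alpha*lam:
prox_zeroes = np.linalg.norm(x_hat) <= alpha * lam

# HSPG zeroes it on a whole open half-space:
hspg_zeroes = x_hat @ x_k < (alpha * lam + eps * np.linalg.norm(x_k)) * np.linalg.norm(x_k)
```

Here the reference point lies far outside the ball of radius $\alpha_k\lambda=10^{-5}$, so the proximal operator keeps the group dense, while the half-space test removes it.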
\textbf{Intuition of the Two-Stage Method:} To end this section, we discuss the advantage of designing such a two-stage scheme rather than adaptively switching back and forth between the Prox-SG Step and the~\text{Half-Space Step}{} based on some switching criterion, as many multi-step deterministic optimization algorithms do~\citep{chen2017reduced}. In fact, we numerically observed that switching back to the Prox-SG Step consistently deteriorates the progress of group sparsity exploration made by the~\text{Half-Space Step}{}, without an obvious gain in convergence. Such a regression of group sparsity caused by the Prox-SG Step is unattractive in realistic applications, \textit{e.g.}, model compression, where one usually starts from heavy models of high generalization accuracy and wants to filter out the redundancy effectively. Therefore, for ease of application, we settle on organizing the Prox-SG Step and the~\text{Half-Space Step}{} as such a two-stage scheme, controlled by a switching hyperparameter $N_\P$. In theory, we require $N_\P$ to be sufficiently large so that the initial iterate of the~\text{Half-Space Step}{} is close enough to the local minimizer, as shown in Section~\ref{sec:convergence}. In practice,~HSPG{}{} is sensitive to the choice of $N_\P$ at early iterations, \textit{i.e.}, switching to the~\text{Half-Space Step}{} too early may result in accuracy loss. But such sensitivity vanishes rapidly if one switches to the~\text{Half-Space Step}{} after some acceptable switching criterion is met.
\section{Convergence Analysis}\label{sec:convergence}
In this section, we give the convergence guarantee of our~HSPG{}{}. Towards that end, we make the following assumptions, widely used in the optimization literature~\citep{xiao2014proximal,yang2019stochastic} and in the active set identification analysis of regularized problems~\citep{nutini2019active,chen2018farsa}.
\begin{assumption}\label{assumption}
Each $f_i:\mathbb{R}^n\to \mathbb{R}$, for $i=1,2,\cdots, N$, is differentiable and bounded below. Their gradients $\Grad f_{i}(\bm{x})$ are Lipschitz continuous, and let $L$ be the shared Lipschitz constant. \end{assumption} \begin{assumption}
The least and the largest $\ell_2$-norms of the non-zero groups of $\bm{x}^*$ are lower and upper bounded by some constants, \textit{i.e.}, $0<2\delta_1 :=\min_{g\in \mathcal{I}^{\neq 0}(\bm{x}^*)}\norm{[\bm{x}^*]_g}$ and $\ 0<2\delta_2 :=\max_{g\in \mathcal{I}^{\neq 0}(\bm{x}^*)}\norm{[\bm{x}^*]_g}$. Moreover, we require a common strict complementarity condition on any $\B$, \textit{i.e.}, $0<2\delta_3:=\min_{g\in\mathcal{I}^0(\bm{x}^*)}\left(\lambda - \norm{[\Grad f_{\B}(\bm{x}^*)]_g}\right)$, for the regularized optimization. \end{assumption} \textbf{Notations:} Let $\bm{x}^*$ be a local minimizer of problem (\ref{prob.x}) with the group sparsity property, $\Psi^*$ be the local minimum value corresponding to $\bm{x}^*$, and $\{\bm{x}_k\}_{k=0}^\infty$ be the iterates generated by Algorithm~\ref{alg:main.x.outline}. Denote the gradient mapping of $\Psi(\bm{x})$ and its estimator on a mini-batch $\mathcal{B}$ by $\bm{\bm{\xi}}_{\eta}(\bm{x}):=\frac{1}{\eta}\left(\bm{x}-\text{Prox}_{\eta\lambda\Omega(\cdot)}(\bm{x}-\eta \Grad f(\bm{x}))\right)$ and $\bm{\bm{\xi}}_{\eta,\mathcal{B}}(\bm{x}):=\frac{1}{\eta}\left( \bm{x}-\text{Prox}_{\eta\lambda\Omega(\cdot)}(\bm{x}-\eta \Grad f_\mathcal{B}(\bm{x}))\right)$, respectively. We say $\tilde{\bm{x}}$ is a stationary point of $\Psi(\bm{x})$ if $\bm{\bm{\xi}}_{\eta}(\tilde{\bm{x}})=0$.
For simplicity, let $\widetilde{\mathcal{X}}$ be a neighborhood of $\bm{x}^*$, namely $\widetilde{\mathcal{X}}:=\{\bm{x}: \norm{\bm{x}-\bm{x}^*}\leq R\}$ with $R$ a positive constant related to $\delta_1, \delta_2$ and $\epsilon$ (see~\eqref{def:R} in Appendix~\ref{appendix:convergence_analysis}), and let $M$ be the supremum of $\norm{\partial \Psi(\bm{x})}$ on the compact set $\widetilde{\mathcal{X}}$.
\textbf{Remark:} Assumption~\ref{assumption} implies that $\Grad f_{\B}(\bm{x})$, measured on a mini-batch $\B$, is Lipschitz continuous on $\mathbb{R}^n$ with the same Lipschitz constant $L$, while $\Grad \Psi_{\B}(\bm{x})$ is not, as shown in the Appendix. However, the Lipschitz continuity of $\Grad \Psi_{\B}(\bm{x})$ still holds on $\mathcal{X}=\{\bm{x}: \norm{[\bm{x}]_g}\geq \delta_1\ \text{for each}\ g\in\mathcal{G}\}$, obtained by excluding, groupwise, an $\ell_2$-ball centered at the origin with radius $\delta_1$ from $\mathbb{R}^n$. For simplicity, let $\Grad \Psi_\B(\bm{x})$ share the same Lipschitz constant $L$ on $\mathcal{X}$ as $\Grad f_\B(\bm{x})$, since we can always select the bigger value as their shared Lipschitz constant. Now, we state the first main theorem of~HSPG{}{}. \begin{theorem}\label{thm:convergence}
Suppose $f$ is convex on $\widetilde{\mathcal{X}}$, $\epsilon\in\left[0,\min\left\{\frac{\delta_1^2}{\delta_2}, \frac{2\delta_1-R}{2\delta_2+R}\right\}\right)$, $\norm{\bm{x}_{K}-\bm{x}^*}\leq\frac{R}{2}$ for $K\geq N_\P$. Set $k:=K+t$, $(t\in\mathbb{Z}^+)$. Then for any $\tau\in(0,1)$, there exist step size $\alpha_k=\O(\frac{1}{\sqrt{N}t})\in\left(0,\min\left\{\frac{2(1-\epsilon)}{L}, \frac{1}{L},\frac{2\delta_1-R-\epsilon(2\delta_2+R)}{M}\right\}\right)$, and mini-batch size $|\B_k|=\O(t)\leq N-\frac{N}{2M}$, such that $\{\bm{x}_k\}$ converges to some stationary point in expectation with probability at least $1-\tau$, \textit{i.e.}, $\mathbb{P}(\lim_{k\rightarrow \infty} \mathbb{E}\left[ \norm{\bm{\bm{\xi}}_{\alpha_k,\mathcal{B}_k}(\bm{x}_k)}\right]=0)\geq 1-\tau$. \end{theorem}
\textbf{Remark:} Theorem~\ref{thm:convergence} only requires local convexity of $f$ on a neighborhood $\widetilde{\mathcal{X}}$ of $\bm{x}^*$, while $f$ itself can be non-convex in general. This local convexity assumption appears in many non-convex analyses, such as tensor decomposition~\citep{ge2015escaping} and shallow neural networks~\citep{zhong2017recovery}. Theorem~\ref{thm:convergence} implies that if the $K$th iterate lies close enough to $\bm{x}^*$ and the step size $\alpha_k$ and mini-batch size $|\B_k|$ are set as above (which further indicates that $\bm{x}^*$ inhabits the sets $\{\S_k\}_{k\geq K}$ of all subsequent iterates updated by the~\text{Half-Space Step}{} with high probability; see the Appendix), then the~\text{Half-Space Step}{} in Algorithm~\ref{alg:main.x.halfspacestep} guarantees convergence to a stationary point. The $\O(t)$ mini-batch size is commonly used in the analysis of stochastic algorithms, \textit{e.g.}, Adam and Yogi~\citep{zaheer2018adaptive}. Based on the numerical results in Section~\ref{sec:exp}, we observe that a much weaker increasing or even constant mini-batch size is sufficient. In fact, experiments show that, practically, a reasonably large mini-batch size can work well if the variance is not large. Although the assumption $\norm{\bm{x}_{K}-\bm{x}^*}<R/2$ is hard to verify in practice, setting $N_\mathcal{P}$ large enough usually performs quite well.
We next present the sparsity identification guarantee of~HSPG{}{}, stated as Theorem~\ref{thm:sparsity_recovery_rate_hbproxsg}.
\begin{theorem}\label{thm:sparsity_recovery_rate_hbproxsg}
If $k\geq N_\P$ and $\norm{\bm{x}_k-\bm{x}^*}\leq \frac{2\alpha_k\delta_3}{1-\epsilon+\alpha_kL}$, then~HSPG{}{} yields $\I^0(\bm{x}^*)\subseteq \I^0(\bm{x}_{k+1})$. \end{theorem}
\textbf{Remark:} Theorem~\ref{thm:sparsity_recovery_rate_hbproxsg} shows that when $\bm{x}_k$ is in the $\ell_2$-ball centered at $\bm{x}^*$ with radius $\frac{2\alpha_k\delta_3}{1-\epsilon+\alpha_kL}$,~HSPG{}{} identifies the optimal sparsity pattern, \textit{i.e.}, $\I^0(\bm{x}^*)\subseteq \I^0(\bm{x}_{k+1})$. In contrast, to identify the sparsity pattern,~\text{Prox-SG}{} requires the iterates to fall into the $\ell_2$-ball centered at $\bm{x}^*$ with radius $\alpha_k \delta_3$~\citep{nutini2019active}. Since $\alpha_k\leq 1/L$ and $\epsilon\in [0,1)$, we have $1-\epsilon+\alpha_kL\leq 2$ and hence $\frac{2\alpha_k\delta_3}{1-\epsilon+\alpha_kL}\geq\alpha_k\delta_3$, so the $\ell_2$-ball of~HSPG{}{} contains that of~\text{Prox-SG}{}, \textit{i.e.},~HSPG{}{} is stronger at sparsity pattern identification. Therefore, Theorem~\ref{thm:sparsity_recovery_rate_hbproxsg} reveals a better sparsity identification property of~HSPG{}{} than that of~\text{Prox-SG}{}, and to our knowledge no similar results exist for other methods.
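To make the radius comparison concrete, the snippet below evaluates both identification radii for illustrative, hypothetical values of $\alpha_k$, $L$, $\epsilon$ and $\delta_3$ (any values with $\alpha_k\leq 1/L$ and $\epsilon\in[0,1)$ would do).

```python
# Hypothetical constants satisfying alpha <= 1/L and eps in [0, 1).
alpha, L, eps, delta3 = 0.05, 10.0, 0.02, 1.0
assert alpha <= 1 / L and 0 <= eps < 1

r_hspg = 2 * alpha * delta3 / (1 - eps + alpha * L)  # HSPG identification radius
r_prox = alpha * delta3                              # Prox-SG identification radius

# Since 1 - eps + alpha*L <= 2, the HSPG ball always contains the Prox-SG ball.
assert r_hspg >= r_prox
print(r_hspg, r_prox)
```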
\noindent \textbf{The Initialization Stage Selection:} To satisfy the prerequisite for convergence of the Half-Space Step in Theorem~\ref{thm:convergence}, \textit{i.e.}, an initial iterate close enough to $\bm{x}^*$, there exist several proper candidates, \textit{e.g.},~\text{Prox-SG}{},~\text{Prox-SVRG}{} and~\text{SAGA}{}, to serve as the Initialization Stage. Considering the tradeoff between computational efficiency and theoretical convergence, our default setting is~\text{Prox-SG}{}. Although~\text{Prox-SVRG}{}/SAGA may have better theoretical convergence properties than~\text{Prox-SG}{}, they require higher time and space complexity to compute or estimate the full gradient on a huge mini-batch or to store previous gradients, which may be prohibitive for large-scale training, especially when memory is limited. Besides, it has been observed that SVRG does not work as desired on popular non-convex deep learning applications~\citep{defazio2019ineffectiveness, chen2020orthant}. In contrast, Prox-SG is efficient and can also achieve the good initialization assumption in Theorem~\ref{thm:convergence}, \textit{i.e.}, $\norm{\bm{x}_{N_\P}-\bm{x}^*}\leq R/2$, with high probability when performed sufficiently many times, as revealed in Appendix~\ref{appendx:upper_bound_n_p} by leveraging related literature~\citep{rosasco2019convergence} under an additional strong convexity assumption. However, one should notice that Prox-SG does not guarantee any group sparsity of $\bm{x}_{N_\mathcal{P}}$ due to its limited projection region and randomness.
\textbf{Remark}: We emphasize that this paper focuses on improving group sparsity identification, which is rarely explored yet a key indicator of success for structured sparsity regularization problems. Meanwhile, we would like to point out that improving the convergence rate has been very well explored in a series of works~\citep{reddi2016proximal,li2018simple}, but is outside our main focus.
\section{Numerical Experiments} \label{sec:exp}
In this section, we present results of several benchmark numerical experiments on deep neural networks to illustrate the superiority of~HSPG{}{} over related algorithms in group sparsity exploration, as well as its comparable convergence. Besides, two convex experiments are conducted in the Appendix to empirically demonstrate the validity and superiority of the group sparsity identification of~HSPG{}{}.
\paragraph{Image Classification:}
We now consider the popular Deep Convolutional Neural Networks (DCNNs) for image classification tasks.
Specifically, we select several popular and benchmark DCNN architectures, \textit{i.e.}, \text{VGG16}{}~\citep{simonyan2014very}, \text{ResNet18}{}~\citep{he2016deep} and~\text{MobileNetV1}{}~\citep{howard2017mobilenets} on two benchmark datasets CIFAR10~\citep{Krizhevsky09} and~\text{Fashion-MNIST}{}~\citep{xiao2017online}.
We conduct all experiments
for 300 epochs with a mini-batch size of 128 and $\lambda=10^{-3}$, since this returns testing accuracy competitive with models trained without regularization (see more in Appendix~\ref{appendix:nonconvex_exp}).
The step size $\alpha_k$ is initialized as $0.1$, and decayed by a factor of $0.1$ periodically. We set each filter in the convolution layers as a group variable.
In these experiments, we perform a test of objective value stationarity similar to~\citep[Section 2.1]{zhang2020statistical} and switch to the Half-Space Step at roughly 150 epochs, with $N_\mathcal{P}$ set as $150N/|\mathcal{B}|$. The control parameter $\epsilon$ in the half-space projection~\eqref{def:proj} controls the aggressiveness of group sparsity promotion; it is first set to 0, then fine-tuned to around $0.02$ to favor the sparsity level without hurting the target objective $\Psi$; the detailed procedure is in the Appendix. We exclude~\text{RDA}{} because no acceptable experimental results were attained in our tests with the step size parameter $\gamma$ set to every power of 10 from $10^{-3}$ to $10^3$, and skip Prox-Spider and SAGA since Prox-SVRG is a superb representative of the proximal incremental gradient methods.
Table~\ref{table:nonconvex} demonstrates the effectiveness and superiority of~HSPG{}{}, where we mark the best values in bold, and the group sparsity ratio is defined as the percentage of zero groups. In particular, \textit{(i)} HSPG{}{} computes remarkably higher group sparsity than the other methods on all tests under both $\epsilon=0$ and fine-tuned $\epsilon$; its solutions are typically several times sparser in terms of groups than those of~\text{Prox-SG}{}, while~\text{Prox-SVRG}{} does not perform comparably since the variance reduction techniques may not work as desired for deep learning applications~\citep{defazio2019ineffectiveness}; \textit{(ii)} HSPG{}{} performs competitively with respect to the final objective values $\Psi$ and $f$ (see $f$ in the Appendix). In addition, all the methods reach comparable generalization performance on unseen test data. On the other hand, sparse regularization methods may yield solutions with entries that are not exactly zero but very small. Sometimes all entries below a certain threshold ($\mathcal{T}$) are set to zero~\citep{jenatton2010structured,el2018combinatorial}. However, such a simple truncation mechanism is heuristic, hence may hurt convergence and accuracy. To illustrate this, we set the groups of the solutions of~\text{Prox-SG}{} and~\text{Prox-SVRG}{} to zero if the magnitudes of the group variables are less than some $\mathcal{T}$, and denote the corresponding solutions as \text{Prox-SG}{}* and~\text{Prox-SVRG}{}*.
As shown in Figure~\ref{figure:simple_truncation}\textcolor{red}{(i)}, under the $\mathcal{T}$ with no accuracy regression,~\text{Prox-SG}{}* and~\text{Prox-SVRG}{}* reach higher group sparsity ratios of 60\% and 32\% compared to Table~\ref{table:nonconvex}, but still significantly lower than the 70\% of HSPG under $\epsilon=0.05$ without simple truncation. Under the $\mathcal{T}$ required to reach the same group sparsity ratio as HSPG, the testing accuracy of~\text{Prox-SG}{}* and~\text{Prox-SVRG}{}* regresses drastically to 28\% and 17\%, respectively, in Figure~\ref{figure:simple_truncation}\textcolor{red}{(ii)}. We remark that although further refitting the models from \text{Prox-SG}{}* and~\text{Prox-SVRG}{}* on the active (non-zero) groups of weights may recover the accuracy regression, it requires additional engineering effort and training cost, which is less attractive and convenient than~HSPG{}{} (with no need to refit).
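For clarity, a minimal sketch of the simple truncation mechanism used to build \text{Prox-SG}{}* and \text{Prox-SVRG}{}* follows; the solution vector and threshold value here are hypothetical, chosen only to show how the group sparsity ratio is computed.

```python
import numpy as np

# Zero out every group whose l2 norm falls below a threshold T -- the heuristic
# post-processing used to build Prox-SG* and Prox-SVRG* above.
def truncate_groups(groups, T):
    return [g if np.linalg.norm(g) >= T else np.zeros_like(g) for g in groups]

# Hypothetical solution: one clearly active group, two near-zero groups.
solution = [np.array([0.5, -0.2]), np.array([1e-4, 2e-4]), np.array([3e-5, 0.0])]
truncated = truncate_groups(solution, T=1e-3)
group_sparsity = sum(np.all(g == 0) for g in truncated) / len(truncated)
print(group_sparsity)  # fraction of zero groups after truncation
```

The threshold trades sparsity against accuracy with no convergence guarantee, which is exactly the drawback discussed above.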
\begin{table}[t]
\centering
\caption{Final $\Psi$/group sparsity ratio/testing accuracy for tested algorithms on non-convex problems.}
\label{table:nonconvex}
\resizebox{\textwidth}{!}{
\begin{tabular}{
cccccc}
\Xhline{3\arrayrulewidth}
\multirow{2}{*}{Backbone} & \multirow{2}{*}{Dataset} &\multirow{2}{*}{\text{Prox-SG}{}} & \multirow{2}{*}{\text{Prox-SVRG}{}} & \multicolumn{2}{c}{HSPG{}{}}\\
\cline{5-6}
& & & & $\epsilon$ as $0$ & fine tuned $\epsilon$ \\
\hline
\multirow{2}{*}{\text{VGG16}{}} & \text{CIFAR10}{} & \textbf{0.59}\ /\ 53.95\%\ /\ 90.57\% & 0.82\ /\ 14.73\%\ /\ 89.42\% & \textbf{0.59}\ /\ 74.60\%\ /\ \textbf{91.10\%} & \textbf{0.59}\ /\ \textbf{75.61\%} \ /\ 90.92\% \\
& \text{Fashion-MNIST}{} & 0.54\ /\ 15.63\%\ /\ \textbf{92.99\%} & 2.66\ /\ 0.45\%\ /\ 92.69\% & 0.54\ /\ 22.18\%\ /\ 92.98\% & \textbf{0.53}\ /\ \textbf{60.77}\%\ /\ 92.87\% \\\hdashline
\multirow{2}{*}{\text{ResNet18}{}} & \text{CIFAR10}{} & \textbf{0.31}\ /\ 19.50\%\ /\ 94.09\% & 0.36\ /\ 2.79\%\ /\ 94.17\% & \textbf{0.31}\ /\ 41.58\%\ /\ 94.39\% & \textbf{0.31}\ /\ \textbf{62.97\%}\ /\ \textbf{94.53\%} \\
& \text{Fashion-MNIST}{} & 0.14\ /\ 0.00\%\ /\ 94.82\% & 0.19\ /\ 0.00\%\ /\ 94.64\% & \textbf{0.13}\ /\ 6.60\%\ /\ \textbf{94.93\%} & \textbf{0.13}\ /\ \textbf{63.93\%}\ /\ 94.86\%\\
\hdashline
\multirow{2}{*}{\text{MobileNetV1}{}} & \text{CIFAR10}{} & \textbf{0.40}\ /\ 57.81\% \ /\ 91.60\% & 0.65\ /\ 32.22\% \ /\ 90.08\% & \textbf{0.40}\ /\ 65.04\% \ /\ \textbf{91.86\%} & 0.41\ /\ \textbf{71.66\%} \ /\ 91.54\% \\
& \text{Fashion-MNIST}{} & \textbf{0.22}\ /\ 65.80\% \ /\ 94.36\% & 0.48\ /\ 38.76\% \ /\ 93.95\% & 0.23\ /\ 74.52\% \ /\ 94.43\% & 0.24\ /\ \textbf{83.71\%}\ /\ \textbf{94.44\%} \\
\Xhline{3\arrayrulewidth}
\end{tabular}}
\end{table}
\begin{figure}
% Panel images omitted; panel titles: (a) Objective $\Psi$, (b) Group Sparsity Ratio, (c) Testing Accuracy, (d) HSPG vs.\ Truncation.
\caption{\small On~\text{ResNet18}{} with~\text{CIFAR10}{}, (a)-(c): evolution of $\Psi$, group sparsity ratio and testing accuracy; (d):~HSPG{}{} versus~\text{Prox-SG}{}* and~\text{Prox-SVRG}{}* (\text{Prox-SG}{} and \text{Prox-SVRG}{} with the simple truncation mechanism).}
\label{figure:group_sparsity}
\label{figure:simple_truncation}
\end{figure}
Finally, we investigate the group sparsity evolution under different $\epsilon$'s. As shown in Figure~\ref{figure:group_sparsity}, HSPG{}{} produces the most group-sparse solutions among the compared methods. Notably, during the first $N_{\mathcal{P}}$ iterations,~HSPG{}{} performs much the same as~\text{Prox-SG}{}. However, after switching to the \text{Half-Space Step}{} at the 150th epoch, HSPG{}{} outperforms all the other methods dramatically, and larger $\epsilon$ results in a higher sparsity level. This is strong evidence that our half-space based technique is much more successful than the proximal mechanism and its variants in terms of group sparsity identification. Besides, the evolutions of $\Psi$ and testing accuracy confirm the comparable convergence of the tested algorithms. In particular, the objective $\Psi$ generally decreases monotonically for small $\epsilon$ ($0$ to $0.02$), and experiences a mild pulse after the switch to~\text{Half-Space Step}{} for larger $\epsilon$, \textit{e.g.}, $0.05$, which matches Lemma~\ref{lemma:sufficient_decrease_half_space}. As a result, with similar generalization accuracy, HSPG{}{} allows dropping entire hidden units of networks, which may further achieve automatic dimension reduction and yield smaller model architectures for efficient inference.
\section{Conclusions and Future Work}
We proposed a new~Half-Space Stochastic Projected Gradient{} (HSPG{}) method for disjoint group-sparsity induced regularized problems, which can be applied to various structured-sparsity stochastic learning problems. HSPG{}{} makes use of the proximal stochastic gradient method to seek a near-optimal solution estimate, followed by a novel half-space group projection to effectively exploit the group sparsity structure. In theory, we provided a convergence guarantee and showed its better sparsity identification performance. Experiments on both convex and non-convex problems demonstrated that~HSPG{}{} usually achieves solutions with competitive objective values and significantly higher group sparsity compared with state-of-the-art stochastic solvers. Further study is needed to investigate the proper leverage of group sparsity in diverse deep learning applications, \textit{e.g.}, to help design and understand optimal network architectures by removing redundant hidden structures.
\appendix
\section{Projection Region}\label{appendix:projection_region}
In this Appendix, we derive the projection region of~HSPG{}{}, and reveal that it is a superset of those of~\text{Prox-SG}{},~\text{Prox-SVRG}{} and~\text{Prox-Spider}{} under the same $\alpha_k$ and $\lambda$. \begin{proposition} The~\text{Half-Space Step}{} of~HSPG{}{} yields the next iterate $x_{k+1}$ based on the trial iterate $\hat{x}_{k+1}=x_k-\alpha_k\Grad f_{\mathcal{B}_k}(x_k)$ as follows, for each $g\in\mathcal{I}^{\neq 0}(x_k)$: \begin{equation} [x_{k+1}]_g= \begin{cases} [\hat{x}_{k+1}]_g-\alpha_k\lambda \frac{[x_k]_g}{\norm{[x_k]_g}}& \text{if}\ [\hat{x}_{k+1}]_g^\top[x_k]_g> (\alpha_k\lambda + \epsilon\norm{[x_k]_g})\norm{[x_k]_g}, \\ 0 & \text{otherwise}. \end{cases} \end{equation} Consequently, if $\norm{[\hat{x}_{k+1}]_g}\leq \alpha_k\lambda$, then $[x_{k+1}]_g=0$ for any $\epsilon\geq 0$. \end{proposition} \begin{proof} For $g\in\I^{\neq 0}(x_k)\bigcap\mathcal{I}^{\neq 0}(x_{k+1})$, by Algorithm~\ref{alg:main.x.halfspacestep}, it is equivalent to \begin{equation} \begin{split} \left[x_k-\alpha_k\Grad f_{\mathcal{B}_k}(x_k)-\alpha_k\lambda \frac{[x_k]_g}{\norm{[x_k]_g}}\right]_g^\top[x_k]_g> \epsilon \norm{[x_k]_g}^2,\\ [\hat{x}_{k+1}]_g^\top[x_k]_g -\alpha_k\lambda \norm{[x_k]_g} > \epsilon \norm{[x_k]_g}^2,\\ [\hat{x}_{k+1}]_g^\top[x_k]_g > (\alpha_k\lambda+\epsilon\norm{[x_k]_g})\norm{[x_k]_g}. \end{split} \end{equation} Similarly, $g\in\I^{\neq 0}(x_k)\bigcap\mathcal{I}^{0}(x_{k+1})$ is equivalent to \begin{equation}\label{eq:xkp1_equals_zero} \begin{split} \left[x_k-\alpha_k\Grad f_{\mathcal{B}_k}(x_k)-\alpha_k\lambda \frac{[x_k]_g}{\norm{[x_k]_g}}\right]_g^\top[x_k]_g\leq \epsilon \norm{[x_k]_g}^2,\\ [\hat{x}_{k+1}]_g^\top[x_k]_g -\alpha_k\lambda \norm{[x_k]_g} \leq \epsilon \norm{[x_k]_g}^2,\\ [\hat{x}_{k+1}]_g^\top[x_k]_g \leq (\alpha_k\lambda+\epsilon\norm{[x_k]_g})\norm{[x_k]_g}. \end{split} \end{equation} If $\norm{[\hat{x}_{k+1}]_g}\leq \alpha_k\lambda$, then \begin{equation} [\hat{x}_{k+1}]_g^\top[x_k]_g \leq \norm{[\hat{x}_{k+1}]_g}\norm{[x_k]_g}\leq \alpha_k\lambda \norm{[x_k]_g}. \end{equation} Hence $[x_{k+1}]_g=0$ holds for any $\epsilon\geq 0$ by~\eqref{eq:xkp1_equals_zero}, which implies that the projection regions of~\text{Prox-SG}{} and its variance reduction variants, \textit{e.g.},~\text{Prox-SVRG}{},~\text{Prox-Spider}{} and~\text{SAGA}{}, are subsets of~HSPG{}{}'s. \end{proof}
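To make the case distinction concrete, here is a minimal Python sketch of the per-group update described in the proposition above; the numeric inputs at the bottom are hypothetical and only illustrate that a group whose trial iterate has norm at most $\alpha_k\lambda$ is projected to zero.

```python
import numpy as np

def half_space_step(x_g, grad_g, alpha, lam, eps):
    """One group update of the Half-Space Step, per the proposition above.

    x_g    : current (nonzero) group variable [x_k]_g
    grad_g : mini-batch gradient of f restricted to group g
    Returns [x_{k+1}]_g.
    """
    x_hat = x_g - alpha * grad_g   # trial iterate
    norm_x = np.linalg.norm(x_g)
    # Keep the group only if the trial iterate stays in the half-space;
    # otherwise project the entire group to zero.
    if x_hat @ x_g > (alpha * lam + eps * norm_x) * norm_x:
        return x_hat - alpha * lam * x_g / norm_x
    return np.zeros_like(x_g)

# Hypothetical inputs: this trial iterate is exactly zero, so its norm is
# <= alpha*lam and the group is projected to zero, as the proposition states.
x = np.array([1e-3, -1e-3])
print(half_space_step(x, x / 0.1, alpha=0.1, lam=0.01, eps=0.0))
```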
\section{Non-Lipschitz Continuity of $\Grad \Psi(x)$ on $\mathbb{R}^n$}\label{sec.Psi_not_lipschitz_countinuous_gradient}\label{appendix:nonlipschitz-continuity}
The gradient of $\Psi(x)$, at any $x$ with $[x]_g\neq 0$ for all $g\in\mathcal{G}$, can be written as \begin{equation} \Grad \Psi(x)=\Grad f(x)+\lambda\sum_{g\in\mathcal{G}}\frac{[x]_g}{\norm{[x]_g}}. \end{equation}
We next show that $\frac{[x]_g}{\norm{[x]_g}}$ is not Lipschitz continuous on $\mathbb{R}^n$ if $|g|\geq 2$. As an example, consider a group with $[x]_g\in\mathbb{R}^2$ and select the two points $x_1=(t, a_1t)^\top$, $x_2=(t, a_2t)^\top$ with $a_1\neq a_2$ and $t\neq 0$. Suppose there exists a positive constant $L<\infty$ such that Lipschitz continuity holds, \textit{i.e.}, \begin{equation} \begin{split} \norm{\frac{x_1}{\norm{x_1}}-\frac{x_2}{\norm{x_2}}}&\leq L\norm{x_1-x_2}\\
\norm{\frac{(1,a_1)}{\sqrt{1+a_1^2}}-\frac{(1,a_2)}{\sqrt{1+a_2^2}}}&\leq L|a_1-a_2|\cdot|t| \end{split} \end{equation} for any $t\neq 0$, and note that the left-hand side is a positive constant independent of $t$. Letting $t\to 0$, the inequality forces $L\to \infty$, which contradicts $L<\infty$. Therefore, $\frac{[x]_g}{\norm{[x]_g}}$ is not Lipschitz continuous on $\mathbb{R}^2$, specifically in the region surrounding the origin.
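The contradiction above can also be checked numerically. This short sketch, using the illustrative values $a_1=0$ and $a_2=1$, shows the lower bound on any admissible Lipschitz constant growing like $1/|t|$:

```python
import numpy as np

# Any Lipschitz constant L for x -> x/||x|| must satisfy
# L >= lhs / ||x1 - x2||, and this lower bound blows up as t -> 0.
a1, a2 = 0.0, 1.0
ratios = []
for t in [1.0, 0.1, 0.01]:
    x1, x2 = np.array([t, a1 * t]), np.array([t, a2 * t])
    lhs = np.linalg.norm(x1 / np.linalg.norm(x1) - x2 / np.linalg.norm(x2))
    ratios.append(lhs / np.linalg.norm(x1 - x2))  # lower bound on any valid L
print(ratios)  # grows like 1/t, so no finite L exists
```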
Although $[\Grad \Psi(x)]_{\I^{\neq 0}(x)}$ is not Lipschitz continuous on $\mathbb{R}^{n}$, Lipschitz continuity still holds after excluding a fixed-size $\ell_2$-ball centered at the origin for each group of non-zero variables in $\I^{\neq 0}(x)$. For our paper, we define the region where Lipschitz continuity of $[\Grad \Psi(x)]_{\I^{\neq 0}(x)}$ still holds as \begin{equation}\label{def:mathcal_X} \mathcal{X}=\{x: \norm{[x]_g}\geq \delta_1\ \text{for each}\ g\in\I^{\neq 0}(x),\ \text{and}\ [x]_g=0\ \text{for each}\ g\in\I^{0}(x) \}. \end{equation}
\section{Convergence Analysis Proof}\label{appendix:convergence_analysis}
Denote the following frequently used constant $R$ describing the size of the neighborhood around $x^*$. \begin{equation}\label{def:R} R:=\min\left\{\frac{-(\delta_1+2\epsilon\delta_2)+\sqrt{(\delta_1+2\epsilon\delta_2)^2-4\epsilon^2\delta_2^2+4\epsilon\delta_1^2}}{\epsilon},\delta_1\right\}>0. \end{equation} \textbf{Remark:}~\eqref{def:R} is well defined for $0<\epsilon< \frac{\delta_1^2}{\delta_2}$, and degenerates to $\delta_1$ as $\epsilon\to 0$.
\subsection{Sufficient Decrease of~\text{Prox-SG Step}{} and~\text{Half-Space Step}{}}\label{appendix:proof_sufficient_decrease}
Our convergence analysis relies on the following sufficient decrease properties of~\text{Half-Space Step}{} and~\text{Prox-SG Step}{}.
\paragraph{Sufficient Decrease of~\text{Half-Space Step}{} as Lemma~\ref{lemma:sufficient_decrease_half_space}:}
\begin{proof}
It follows from Algorithm~\ref{alg:main.x.halfspacestep} and the definitions of $\tilde{\G}_k$ and $\hat{\G}_k$ that $x_{k+1}=x_k+\alpha_kd_k$, where $d_k$ is \begin{equation}\label{eq:proof_d_k_def} [d_k]_g= \begin{cases} -[\partial \Psi_{\mathcal{B}_k}(x_{k})]_g& \text{if}\ g\in\tilde{\mathcal{G}}_k=\I^{\neq 0}(x_k)\bigcap \I^{\neq 0}(x_{k+1}),\\ -[x_{k}]_g/\alpha_k & \text{if}\ g \in \hat{\mathcal{G}}_k=\I^{\neq 0}(x_k)\bigcap \I^{0}(x_{k+1}),\\ 0 & \text{otherwise}. \end{cases} \end{equation} We also notice that for any $g\in\hat{\mathcal{G}}_k$, the following holds: \begin{equation}\label{eq:descent_direction_tmp1} \begin{split} [x_k-\alpha_k\partial \Psi_{\mathcal{B}_k}(x_k)]_g^\top[x_k]_g<\epsilon \norm{[x_k]_g}^2,\\ (1-\epsilon)\norm{[x_k]_g}^2< \alpha_k[\partial \Psi_{\mathcal{B}_k}(x_k)]_g^\top[x_k]_g. \end{split} \end{equation} For simplicity, let $\I^{\neq 0}_k:=\I^{\neq 0}(x_k)$. Since $[d_k]_g=0$ for any $g\in \I^{0}(x_k)$, by~(\ref{eq:proof_d_k_def}) and~(\ref{eq:descent_direction_tmp1}) we have \begin{equation} \begin{split} d_k^\top\partial \Psi_{\B_k}(x_k)&=[d_k]_{\I^{\neq 0}_k}^\top[\partial \Psi_{\B_k}(x_k)]_{\I^{\neq 0}_k}\\ &=-\sum_{g\in\tilde{\G}_k}\norm{[\partial \Psi_{\B_k}(x_k)]_g}^2-\sum_{g\in \hat{\G}_k}\frac{1}{\alpha_k}[x_k]_g^\top[\partial \Psi_{\B_k}(x_k)]_g\\ &\leq -\sum_{g\in\tilde{\G}_k}\norm{[\partial \Psi_{\B_k}(x_k)]_g}^2-\sum_{g\in \hat{\G}_k}\frac{1}{\alpha_k^2}(1-\epsilon)\norm{[x_k]_g}^2< 0 \end{split} \end{equation} for any $\epsilon\in[0,1)$, which implies that $d_k$ is a descent direction for $\Psi_{\B_k}$ at $x_k$.
Now we prove the sufficient decrease of the~\text{Half-Space Step}{}. By the descent lemma, $x_k\in\mathcal{X}$ and the Lipschitz continuity of $[\partial \Psi_{\mathcal{B}_k}]_{\I_k^{\neq 0}}$ on $\mathcal{X}$, we have that \begin{equation}\label{eq:decentlemma} \Psi_{\mathcal{B}_k}(x_{k}+\alpha_k d_{k})\leq \Psi_{\mathcal{B}_k}(x_{k})+\alpha_k[\partial \Psi_{\mathcal{B}_k}(x_{k})]_{\I_k^{\neq 0}}^\top[d_{k}]_{\I_k^{\neq 0}}+\frac{L}{2}\alpha_k^2\norm{[d_{k}]_{\I_k^{\neq 0}}}^2. \end{equation} It then follows from~\eqref{eq:proof_d_k_def} that~\eqref{eq:decentlemma} can be rewritten as \begin{equation}\label{eq:decentlemma_more} \begin{split} &\Psi_{\mathcal{B}_k}(x_{k}+\alpha_k d_{k})\\ \leq& \Psi_{\mathcal{B}_k}(x_{k})+\alpha_k[\partial \Psi_{\mathcal{B}_k}(x_{k})]_{\I^{\neq 0}_k}^\top[d_{k}]_{\I^{\neq 0}_k}+\frac{L}{2}\alpha_k^2\norm{[d_{k}]_{\I^{\neq 0}_k}}^2\\ =&\Psi_{\mathcal{B}_k}(x_{k})-\sum_{g\in\tilde{\G}_k}\norm{[\partial \Psi_{\mathcal{B}_k}(x_{k})]_g}^2\left(\alpha_k-\frac{L}{2}\alpha_k^2\right)-\sum_{g\in\hat{\G}_k} \left\{[\partial \Psi_{\mathcal{B}_k}(x_{k})]_g^\top[x_{k}]_g-\frac{L}{2}\norm{[x_{k}]_g}^2\right\}. \end{split} \end{equation} Consequently, combining $\epsilon\in [0,1)$ with~\eqref{eq:descent_direction_tmp1}, \eqref{eq:decentlemma_more} further yields \begin{equation} \Psi_{\mathcal{B}_k}(x_{k+1})\leq \Psi_{\mathcal{B}_k}(x_{k})-\left(\alpha_k-\frac{\alpha_k^2L}{2}\right)\sum_{g\in\tilde{\G}_k}\norm{[\partial \Psi_{\mathcal{B}_k}(x_{k})]_g}^2-\left(\frac{1-\epsilon}{\alpha_k}-\frac{L}{2}\right)\sum_{g\in\hat{\G}_k}\norm{[x_{k}]_g}^2, \end{equation} which completes the proof.
\end{proof}
\paragraph{Sufficient Decrease of~\text{Prox-SG Step}:} The second lemma is a well-known property of the proximal operator, stated under our notation. We include its proof for completeness.
\begin{lemma}\label{lemma:Psi_decrease_proxsg}
Line~\ref{line:prox} of Algorithm~\ref{alg:main.x.prox_sg_step} yields that $x_{k+1}=x_{k}-\alpha_k\xi_{\alpha_k, \mathcal{B}_k}(x_k)$, where
\begin{equation}
\xi_{\alpha_k, \mathcal{B}_k}(x_k)\in \Grad f_{\mathcal{B}_k}(x_{k})+\lambda \partial \Omega(x_{k+1}).
\end{equation}
Moreover, the objective value $ \Psi_{\mathcal{B}_k} $ satisfies
\begin{equation}\label{eq:decrease_p_batch}
\Psi_{\mathcal{B}_k}(x_{k+1})\leq \Psi_{\mathcal{B}_k}(x_{k})-\left(\alpha_k-\frac{\alpha_k^2L}{2}\right)\norm{\xi_{\alpha_k, \mathcal{B}_k}(x_k)}^2.
\end{equation} \end{lemma} \begin{proof}
It follows from line~\ref{line:prox} in Algorithm~\ref{alg:main.x.prox_sg_step} and the definition of the proximal operator that
\begin{equation}
\begin{split}
\quad x_{k+1}&=\argmin_{x\in \mathbb{R}^n}\ \frac{1}{2\alpha_k}\norm{x-(x_k-\alpha_k\Grad f_{\mathcal{B}_k}(x_k))}^2+\lambda\Omega(x)\\
&=\argmin_{x\in \mathbb{R}^n}\ \Grad f_{\mathcal{B}_k}(x_k)^\top(x-x_k)+\lambda\Omega(x)+\frac{1}{2\alpha_k}\norm{x-x_k}^2
\end{split}
\end{equation}
By the optimality condition, we have
\begin{equation}
\begin{split}
0 & \in \frac{1}{\alpha_k}(x_{k+1}-x_k)+\Grad f_{\mathcal{B}_k}(x_k)+\lambda\partial\Omega(x_{k+1}).
\end{split}
\end{equation}
Since $ x_{k+1}=x_k-\alpha_k\xi_{\alpha_k,\mathcal{B}_k}(x_k)$, we have
\begin{equation}
0 \in -\xi_{\alpha_k,\mathcal{B}_k}(x_k)+\Grad f_{\mathcal{B}_k}(x_k)+\lambda\partial\Omega(x_{k+1}),
\end{equation}
which implies that
\begin{equation}
\xi_{\alpha_k,\mathcal{B}_k}(x_k)\in \Grad f_{\mathcal{B}_k}(x_k)+\lambda \partial\Omega(x_{k+1}).
\end{equation}
Thus there exists some $v\in \partial \Omega(x_{k+1})$ such that
\begin{equation}\label{eq:st}
\xi_{\alpha_k,\mathcal{B}_k}(x_k)= \Grad f_{\mathcal{B}_k}(x_k) +\lambda v.
\end{equation}
\noindent
By Lipschitz continuity of $ \Grad f_{\mathcal{B}_k} $ and convexity of $\Omega(\cdot)$, we have
\begin{equation}\label{eq:f_lipschiz_convex}
\begin{split}
f_{\mathcal{B}_k}(x_{k+1})&=f_{\mathcal{B}_k}(x_k-\alpha_k\xi_{\alpha_k,\mathcal{B}_k}(x_k))\\
&\leq f_{\mathcal{B}_k}(x_k)-\alpha_k\Grad f_{\mathcal{B}_k}(x_k)^\top\xi_{\alpha_k,\mathcal{B}_k}(x_k)+\frac{\alpha_k^2L}{2}\norm{\xi_{\alpha_k,\mathcal{B}_k}(x_k)}^2
\end{split}
\end{equation}
and
\begin{equation}\label{eq:omega_convex}
\begin{split}
\lambda\Omega(x_{k+1})&=\lambda\Omega(x_k-\alpha_k\xi_{\alpha_k,\mathcal{B}_k}(x_k))\\
&\leq \lambda \Omega(x_k) + \lambda v^\top(x_k-\alpha_k\xi_{\alpha_k,\mathcal{B}_k}(x_k)-x_k)\\
&=\lambda \Omega(x_k) - \alpha_k\lambda v^\top\xi_{\alpha_k,\mathcal{B}_k}(x_k).
\end{split}
\end{equation}
Hence, by \eqref{eq:st}, \eqref{eq:f_lipschiz_convex} and~\eqref{eq:omega_convex}, the objective $\Psi_{\mathcal{B}_k}(x_{k+1})$ satisfies
\begin{equation*}\label{eq:Psi_decrease}
\begin{split}
&\Psi_{\mathcal{B}_k}(x_{k+1})=f_{\mathcal{B}_k}(x_{k+1}) + \lambda\Omega(x_{k+1})\\
\leq& f_{\mathcal{B}_k}(x_k)-\alpha_k\Grad f_{\mathcal{B}_k}(x_k)^\top\xi_{\alpha_k,\mathcal{B}_k}(x_k)+\frac{\alpha_k^2L}{2}\norm{\xi_{\alpha_k,\mathcal{B}_k}(x_k)}^2+\lambda \Omega(x_k) - \alpha_k\lambda v^\top\xi_{\alpha_k,\mathcal{B}_k}(x_k)\\
=&\Psi_{\mathcal{B}_k}(x_k)-\alpha_k(\Grad f_{\mathcal{B}_k}(x_k)+\lambda v)^\top\xi_{\alpha_k,\mathcal{B}_k}(x_k)+\frac{\alpha_k^2L}{2}\norm{\xi_{\alpha_k,\mathcal{B}_k}(x_k)}^2\\
=&\Psi_{\mathcal{B}_k}(x_k) -\left(\alpha_k-\frac{\alpha_k^2L}{2}\right)\norm{\xi_{\alpha_k,\mathcal{B}_k}(x_k)}^2,
\end{split}
\end{equation*}
which completes the proof.
\end{proof}
According to Lemma~\ref{lemma:sufficient_decrease_half_space} and Lemma~\ref{lemma:Psi_decrease_proxsg}, the objective value on a mini-batch achieves a sufficient decrease in both the Prox-SG Step and the Half-Space Step provided $\alpha_k$ is small enough. By taking the expectation on both sides, we obtain the following result characterizing the sufficient decrease from $\Psi(x_{k})$ to $\mathbb{E}\left[\Psi(x_{k+1})\right]$.
\begin{corollary}\label{corollary:Psi_epoch_decrease}
For iteration $k$, we have
\begin{enumerate}[label=(\roman*),leftmargin=0.5cm]
\item if the $k$th iteration conducts the~\text{Prox-SG Step}, then
\begin{equation}
\mathbb{E}\left[\Psi(x_{k+1})\right]\leq \Psi(x_k)-\left(\alpha_k-\frac{\alpha_k^2L}{2}\right)\mathbb{E}\left[\norm{\xi_{\alpha_k,\mathcal{B}_k}(x_k)}^2\right].
\end{equation}
\item if the $k$th iteration conducts the~\text{Half-Space Step}{} and $x_k\in\mathcal{X}$, then
\begin{equation}
\mathbb{E}\left[\Psi(x_{k+1})\right]\leq \Psi(x_k)-\left(\alpha_k-\frac{\alpha_k^2L}{2}\right)\sum_{g\in\tilde{\mathcal{G}}_k}\mathbb{E}\left[\norm{[\partial \Psi_{\mathcal{B}_k}(x_k)]_g}^2\right]-\left(\frac{1-\epsilon}{\alpha_k}-\frac{L}{2}\right)\sum_{g\in\hat{\G}_k}\norm{[x_{k}]_g}^2.
\end{equation}
\end{enumerate} \end{corollary}
Corollary~\ref{corollary:Psi_epoch_decrease} shows that the decrease of $\Psi$ depends on the step size $\alpha_k$ and the norm of the search direction. It further indicates that both the \text{Half-Space Step}{} and the \text{Prox-SG Step}{} make progress toward optimality under a proper selection of $\alpha_k$.
\subsection{Proof of Theorem~\ref{thm:convergence}}\label{appendix:convergence_theorem}
Toward that end, we first show that if the distance from $x_k$ to the local minimizer $x^*$ is sufficiently small, then the iterate of~HSPG{}{} already covers the support of $x^*$,~\textit{i.e.}, $\I^{\neq 0}(x^*)\subseteq \I^{\neq 0}(x_k)$. \begin{lemma}\label{lemma:support_cover} If $\norm{x_k-x^*}\leq R$, then $\I^{\neq 0}(x^*)\subseteq \I^{\neq 0}(x_k)$. \end{lemma} \begin{proof} For any $g\in \I^{\neq 0}(x^*)$, by the assumption of this lemma, the definition of $R$ in~\eqref{def:R} and of $\delta_1$ in Assumption~\ref{assumption:delta}, we have that \begin{equation} \begin{split} \norm{[x^*]_g}-\norm{[x_k]_g}&\leq \norm{[x_k-x^*]_g}\leq \norm{x_k-x^*}\leq R\leq \delta_1,\\ \norm{[x_k]_g}&\geq \norm{[x^*]_g}-\delta_1\geq 2\delta_1-\delta_1=\delta_1>0. \end{split} \end{equation} Hence $\norm{[x_k]_g}\neq 0$, \textit{i.e.}, $g\in \I^{\neq 0}(x_k)$. Therefore, $\I^{\neq 0}(x^*)\subseteq \I^{\neq 0}(x_k)$. \end{proof}
The next lemma shows that if the distance between the current iterate $x_k$ and $x^*$, \textit{i.e.}, $\norm{x_k-x^*}$, is sufficiently small, then $x^*$ inhabits the reduced space $\S_k:=\S(x_k)$. \begin{lemma}\label{lemma:x_star_in_polyhedron}
Under Assumption~\ref{assumption}, if $0\leq \epsilon<\frac{\delta_1^2}{\delta_2}$, $\norm{x_{k}-x^*}\leq R$, then for each $g\in\mathcal{I}^{\neq 0}(x^*)$,
\begin{equation}
[x_{k}]_g^\top[x^*]_g\geq \epsilon\norm{[x_k]_g}^2
\end{equation}
Consequently, $x^*\in\S_k$ by the definition in~\eqref{def:polytope}. \end{lemma} \begin{proof}
It follows from the assumption of this lemma, the definition of $R$ in~\eqref{def:R}, and the definitions of $\delta_1$ and $\delta_2$ in Assumption~\ref{assumption:delta} that for any $g\in\mathcal{I}^{\neq 0}(x^*)$,
\begin{equation}
\begin{split}
\norm{[x_k]_g}\leq \norm{[x^*]_g} +R\leq 2\delta_2+R,
\end{split}
\end{equation}
and the quantity $\left[-(\delta_1+2\epsilon\delta_2)+\sqrt{(\delta_1+2\epsilon\delta_2)^2-4\epsilon^2\delta_2^2+4\epsilon\delta_1^2}\right]/\epsilon$ in~\eqref{def:R} is the positive root of the quadratic $\epsilon z^2+(4\epsilon\delta_2+2\delta_1)z+4\epsilon\delta_2^2-4\delta_1^2=0$ in $z$. Then we have that
\begin{equation}
\begin{split}
[x_{k}]_g^\top[x^*]_g=&[x_{k}-x^*+x^*]_g^\top [x^*]_g\\
=&[x_{k}-x^*]_g^\top[x^*]_g+\norm{[x^*]_g}^2\\
\geq& \norm{[x^*]_g}^2-\norm{[x_k-x^*]_g}\norm{[x^*]_g}\\
=& \norm{[x^*]_g}(\norm{[x^*]_g}-\norm{[x_k-x^*]_g})\\
\geq & 2\delta_1(2\delta_1-R)\geq \epsilon (2\delta_2+R)^2\\
\geq &\epsilon\norm{[x_k]_g}^2
\end{split}
\end{equation}
holds for any $g\in\mathcal{I}^{\neq 0}(x^*)$, where the second-to-last inequality holds because $2\delta_1(2\delta_1-R)=\epsilon(2\delta_2+R)^2$ when $R=\left[-(\delta_1+2\epsilon\delta_2)+\sqrt{(\delta_1+2\epsilon\delta_2)^2-4\epsilon^2\delta_2^2+4\epsilon\delta_1^2}\right]/\epsilon$. Combining with the definition of $\S_k$ in~\eqref{def:polytope}, we conclude that $x^*\in\S_k$, which completes the proof. \end{proof}
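The choice of the radius $R$ above can be sanity-checked numerically. The snippet below uses illustrative values of $\delta_1$, $\delta_2$, $\epsilon$ (not taken from the paper) and verifies that at $z=R$, the positive root of the quadratic, the two sides of the key inequality $2\delta_1(2\delta_1-R)\geq \epsilon(2\delta_2+R)^2$ coincide:

```python
import math

# Hypothetical values satisfying 0 <= epsilon < delta_1^2 / delta_2 (illustrative only).
d1, d2, eps = 1.0, 2.0, 0.1

# Positive root of eps*z^2 + (4*eps*d2 + 2*d1)*z + 4*eps*d2^2 - 4*d1^2 = 0,
# i.e. the radius R used in the lemma.
R = (-(d1 + 2 * eps * d2)
     + math.sqrt((d1 + 2 * eps * d2) ** 2 - 4 * eps ** 2 * d2 ** 2 + 4 * eps * d1 ** 2)) / eps

lhs = 2 * d1 * (2 * d1 - R)    # lower bound on ||[x*]_g|| (||[x*]_g|| - ||[x_k - x*]_g||)
rhs = eps * (2 * d2 + R) ** 2  # upper bound on eps * ||[x_k]_g||^2
assert abs(lhs - rhs) < 1e-9   # equality holds exactly at z = R
```

With these values $R\approx 0.832\leq\delta_1$, consistent with the requirement $R\leq\delta_1$ used in Lemma~\ref{lemma:support_cover}.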
Furthermore, if $\norm{x_k-x^*}$ is small enough and the step size is selected properly, every group of variables projected to zero by the~\text{Half-Space Step}{} is guaranteed to be a zero group of $x^*$, as stated in the following lemma. \begin{lemma}\label{lemma.project_as_zero_group}
Suppose $k\geq N_\P$, $\norm{x_{k}-x^*}\leq R$, $0\leq \epsilon<\frac{2\delta_1-R}{2\delta_2+R}$ and $0<\alpha_k\leq\frac{2\delta_1-R-\epsilon(2\delta_2+R)}{M}$. Then any $g\in\hat{\G}_k={\I^{\neq 0}(x_k)}\bigcap \I^0(x_{k+1})$ must satisfy $g\in\I^{0}(x^*)$. \end{lemma} \begin{proof} To prove this by contradiction, suppose there exists some $g\in\hat{\G}_k$ such that $g\in \I^{\neq 0}(x^*)$. Since $g\in\hat{\G}_k={\I^{\neq 0}(x_k)}\bigcap \I^0(x_{k+1})$, the group projection~\eqref{def:proj} is triggered at $g$, so that \begin{equation}\label{eq:hypothesis} \begin{split} [\tilde{x}_{k+1}]_{g}^\top[x_k]_{g}&=[x_k-\alpha_k\Grad \Psi_{\B_k}(x_k)]_{g}^\top[x_k]_{g}\\ &=\norm{[x_k]_{g}}^2-\alpha_k [\Grad \Psi_{\B_k}(x_k)]_{g}^\top [x_k]_{g}< \epsilon \norm{[x_k]_g}^2. \end{split} \end{equation} On the other hand, it follows from the assumption of this lemma and $g\in\I^{\neq 0}(x^*)$ that \begin{equation}
\norm{[x_{k}-x^*]_{g}}\leq \norm{x_k-x^*}\leq R. \end{equation} Combining with the definitions of $\delta_1$ and $\delta_2$, we have that \begin{equation} \begin{split} \norm{[x_k]_{g}}\geq \norm{[x^*]_{g}}-R\geq 2\delta_1 -R,\\ \norm{[x_k]_{g}}\leq \norm{[x^*]_{g}}+R\leq 2\delta_2 +R.\\ \end{split} \end{equation} It then follows from $0<\alpha_k\leq\frac{2\delta_1-R-\epsilon(2\delta_2+R)}{M}$, where $2\delta_1-R-\epsilon(2\delta_2+R)>0$ since $R\leq \delta_1$ and $\epsilon<\frac{2\delta_1-R}{2\delta_2+R}$, that \begin{equation} \begin{split} [\tilde{x}_{k+1}]_{g}^\top[x_k]_{g}&=\norm{[x_k]_{g}}^2-\alpha_k [\Grad \Psi_{\B_k}(x_k)]_{g}^\top [x_k]_{g}\\ &\geq \norm{[x_k]_{g}}^2-\alpha_k\norm{[\Grad \Psi_{\B_k}(x_k)]_g}\norm{[x_k]_g}\\ &= \norm{[x_k]_{g}}(\norm{[x_k]_{g}}-\alpha_k\norm{[\Grad \Psi_{\B_k}(x_k)]_g})\\ &\geq \norm{[x_k]_{g}}(\norm{[x_k]_{g}}-\alpha_kM)\\ &\geq \norm{[x_k]_{g}}\left[(2\delta_1-R)-\alpha_kM\right]\\ &\geq \norm{[x_k]_{g}}\left[(2\delta_1-R)-\frac{2\delta_1-R-\epsilon(2\delta_2+R)}{M}M\right]\\ &= \epsilon\norm{[x_k]_{g}}(2\delta_2+R)\\ &\geq \epsilon\norm{[x_k]_g}^2, \end{split} \end{equation} which contradicts~\eqref{eq:hypothesis}. Hence, any group $g$ of variables projected to zero, \textit{i.e.}, $g\in\hat{\G}_k={\I^{\neq 0}(x_k)}\bigcap \I^0(x_{k+1})$, must also be a zero group of the optimal solution $x^*$, \textit{i.e.}, $g\in\I^{0}(x^*)$. \end{proof}
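The projection test analyzed in this proof is the core of the Half-Space Step: take a gradient step on a group, then zero the group if the trial point leaves the half-space $\{z : z^\top [x_k]_g \geq \epsilon\norm{[x_k]_g}^2\}$. A minimal sketch on a single group, with hypothetical data and the name \texttt{half\_space\_step\_group} introduced here only for illustration:

```python
import numpy as np

def half_space_step_group(x_g, grad_g, alpha, eps):
    """Gradient step on group g, then project the group to zero if the trial
    point leaves the half-space {z : z^T x_g >= eps * ||x_g||^2}."""
    x_trial = x_g - alpha * grad_g
    if x_trial @ x_g < eps * (x_g @ x_g):
        return np.zeros_like(x_g)   # group projection triggered
    return x_trial

rng = np.random.default_rng(0)
x_g = rng.normal(size=5)
# A small gradient with a small step keeps x_trial^T x_g close to ||x_g||^2,
# so a group far from zero is not projected (cf. the lemma above).
out = half_space_step_group(x_g, 0.01 * rng.normal(size=5), alpha=0.1, eps=0.1)
assert np.any(out != 0)
# A gradient aligned with x_g and a huge step drives the trial point past the
# half-space boundary, so the whole group is zeroed out.
out = half_space_step_group(x_g, x_g, alpha=2.0, eps=0.1)
assert np.all(out == 0)
```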
We next show that if an iterate of~\text{Half-Space Step}{} is close enough to the optimal solution $x^*$, then $x^*$ lies in all reduced spaces constructed by the subsequent iterates of~\text{Half-Space Step}{} with high probability. To establish this result, we require the following two lemmas. The first bounds the accumulated error due to random sampling.
\begin{lemma}\label{lemma:convergence-series}
Given any $\theta > 1$ and $K\geq N_\P$, let $k:=K+t$ with $t\in\mathbb{Z}^+\bigcup \{0\}$. Then there exist $\alpha_k=\O(1/t)$ and $|\B_k|=\O(t)$ such that \begin{align*}
\max_{\{y_t\}_{t = 0}^{\infty} \in \mathcal{X}^{\infty}} \sum_{t = 0}^{\infty} \alpha_{k} \|e_{\mathcal{B}_{k}}(y_{t})\|_2 \leq \frac{3R^2}{8(4R + 1)} \end{align*} holds with probability at least $1 - \frac{1}{\theta^2}$. \end{lemma} \begin{proof}
Define the random variable $Y_t := \alpha_{K + t} \|e_{\mathcal{B}_{K + t}}(y_{t})\|_2$ for all $t \geq 0$. Since $\{y_t\}_{t = 0}^{\infty}$ are arbitrarily chosen, the random variables $\{Y_t\}_{t = 0}^{\infty}$ are independent. Let $Y := \sum_{t= 0}^{\infty} Y_t$. Using Chebyshev's inequality, we obtain \begin{align}\label{eq:chevshev_inequality}
\mathbb{P}\left( Y \geq \mathbb{E}[Y] + \theta \sqrt{\text{Var}[Y]} \right) \leq \mathbb{P}\left( |Y - \mathbb{E}[Y]| \geq \theta \sqrt{\text{Var}[Y]} \right) \leq \frac{1}{\theta^2}. \end{align}
Based on Assumption~\ref{assumption}, there exists an upper bound $\sigma^2>0$ for the variance of the random noise $e(x)$ generated from a one-point mini-batch, \textit{i.e.}, $\mathcal{B}=\{i\}$, $i = 1,\ldots, N$. Consequently, for each $t \geq 0$, we have $\mathbb{E}[Y_t] \leq \frac{\alpha_{K + t} \sigma}{\sqrt{|\mathcal{B}_{K + t}|}}$ and $\text{Var}[Y_t] \leq \frac{\alpha_{K + t}^2 \sigma^2}{|\mathcal{B}_{K + t}|}$. Combining with~\eqref{eq:chevshev_inequality}, we have \begin{align}
Y &\leq \mathbb{E}[Y] + \theta \sqrt{\text{Var}[Y]} \\
&\leq \sum_{t = 0}^{\infty} \frac{\alpha_{K + t} \sigma}{\sqrt{|\mathcal{B}_{K + t}|}} + \theta \cdot \sqrt{\sum_{t = 0}^{\infty} \frac{\alpha_{K + t}^2 \sigma^2}{|\mathcal{B}_{K + t}|}}\\
&\leq \sum_{t = 0}^{\infty} \frac{\alpha_{K + t} \sigma}{\sqrt{|\mathcal{B}_{K + t}|}} + \theta \cdot \sum_{t = 0}^{\infty} \frac{\alpha_{K + t} \sigma}{\sqrt{|\mathcal{B}_{K + t}|}} =(1+\theta)\sum_{t = 0}^{\infty} \frac{\alpha_{K + t} \sigma}{\sqrt{|\mathcal{B}_{K + t}|}} \end{align}
holds with probability at least $1 - \frac{1}{\theta^2}$. Here, the second inequality uses $\mathbb{E}[\sum_{t = 0}^{\infty} Y_t] = \sum_{t = 0}^{\infty} \mathbb{E}[ Y_t]$, which holds whenever $\sum_{t = 0}^{\infty} \mathbb{E}[|Y_t|]$ converges, see Section 2.1 in \cite{mitzenmacher2005probability}, together with $\text{Var}[Y]=\sum_{t=0}^{\infty}\text{Var}[Y_t]$ by independence; the third inequality uses the subadditivity of the square root, $\sqrt{\sum_t a_t}\leq \sum_t \sqrt{a_t}$ for $a_t\geq 0$.
Given any $\theta > 1$, there exist $\alpha_{k}=\O(1/t)$ and $|\B_{k}|=\O(t)$ such that the above series converges and satisfies \begin{align*}
(1+\theta)\sum_{t = 0}^{\infty} \frac{\alpha_{K + t} \sigma}{\sqrt{|\mathcal{B}_{K + t}|}} \leq \frac{3R^2}{8(4R + 1)} \end{align*} holds. Notice that the above proof holds for any given sequence $\{y_t\}_{t = 0}^{\infty} \in \mathcal{X}^{\infty}$, thus \begin{align*}
\max_{\{y_t\}_{t = 0}^{\infty} \in \mathcal{X}^{\infty}} \sum_{t = 0}^{\infty} \alpha_{k} \|e_{\mathcal{B}_{k}}(y_{t})\|_2 \leq \frac{3R^2}{8(4R + 1)} \end{align*} holds with probability at least $1 - \frac{1}{\theta^2}$. \end{proof}
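The convergence of the series in the last step can be illustrated numerically: with $\alpha_{K+t}=\O(1/t)$ and $|\B_{K+t}|=\O(t)$, each term of $\sum_t \alpha_{K+t}\sigma/\sqrt{|\B_{K+t}|}$ scales like $t^{-3/2}$, a convergent series. A small check with hypothetical constants $\sigma$, $c$, $b$:

```python
import math

# alpha_t ~ c/t and |B_t| ~ b*t, so each term is (c/t) * sigma / sqrt(b*t) ~ t^(-3/2).
sigma, c, b = 1.0, 0.01, 64
partial = [0.0]
s = 0.0
for t in range(1, 100000):
    s += (c / t) * sigma / math.sqrt(b * t)
    partial.append(s)

# The tail contribution past the halfway point is negligible,
# consistent with convergence of the series.
assert partial[-1] - partial[len(partial) // 2] < 1e-4
```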
The second lemma shows that if a previous iterate of~\text{Half-Space Step}{} falls into the neighborhood of $x^*$, then under an appropriate step size and mini-batch setting, the current iterate also remains in the neighborhood with high probability.
\begin{lemma}\label{lemma:k_plus_1_optimal_dist_non_increase}
Under the assumptions of Lemma~\ref{lemma:convergence-series}, suppose that $\norm{x_{K}-x^*}\leq R/2$ and that, for any $\ell$ satisfying $K\leq \ell<K+ t$, $0<\alpha_{\ell}\leq \min\{\frac{1}{L},\frac{2\delta_1-R-\epsilon(2\delta_2+R)}{M}\}$, $|\B_\ell|\geq N-\frac{N}{2M}$ and $\norm{x_{\ell}-x^*}\leq R$ hold. Then
\begin{equation}
\norm{x_{K+t}-x^*}\leq R
\end{equation}
holds with probability at least $1-\frac{1}{\theta^2}$. \end{lemma} \begin{proof} It follows the assumptions of this lemma, Lemma~\ref{lemma.project_as_zero_group} that for any $\ell$ satisfying $K\leq \ell<K+ t$ \begin{equation} \norm{[x^*]_g}=0,\ \text{for any}\ g\in \hat{\mathcal{G}}_{\ell}. \end{equation} Hence we have that for $K\leq \ell<K+ t$, \begin{equation}\label{eq:optimal_dist_ell} \begin{split} &\norm{x_{\ell+1}-x^*}^2\\ =&\sum_{g\in\tilde{\mathcal{G}}_\ell}\norm{[x_{\ell}-x^*-\alpha_\ell\Grad \Psi(x_{\ell})-\alpha_\ell e_{\B_\ell}(x_\ell)]_g}^2+\sum_{g\in\hat{\mathcal{G}}_k}\norm{[x_{\ell}-x^*-x_{\ell}]_g}^2\\ =&\sum_{g\in\tilde{\mathcal{G}}_\ell}\left\{\norm{[x_{\ell}-x^*]_g}^2-2\alpha_\ell[x_{\ell}-x^*]_g^\top[\Grad \Psi(x_{\ell})+e_{\B_\ell}(x_\ell)]_g+\alpha_\ell^2\norm{[\Grad \Psi(x_{\ell})+e_{\B_\ell}(x_\ell)]_g}^2\right\}+\sum_{g\in\hat{\G}_\ell}\norm{[x^*]_g}^2\\ =&\sum_{g\in\tilde{\mathcal{G}}_\ell}\left\{\norm{[x_{\ell}-x^*]_g}^2-2\alpha_\ell[x_{\ell}-x^*]_g^\top[\Grad \Psi(x_{\ell})]_g-2\alpha_\ell[x_{\ell}-x^*]_g^\top[e_{\B_\ell}(x_\ell)]_g+\alpha_\ell^2\norm{[\Grad \Psi(x_{\ell})+e_{\B_\ell}(x_\ell)]_g}^2\right\}\\ \leq&\sum_{g\in\tilde{\G}_\ell}\norm{[x_{\ell}-x^*]_g}^2-\norm{[\Grad \Psi(x_{\ell})]_g}^2\left(2\frac{\alpha_\ell}{L}-\alpha_\ell^2\right)-2\alpha_\ell[x_{\ell}-x^*]_g^\top[e_{\B_\ell}(x_\ell)]_g+\alpha_\ell^2\norm{[e_{\B_\ell}(x_\ell)]_g}^2\\ &+2\alpha_\ell^2[\Grad \Psi(x_\ell)]_g^\top[e_{\B_\ell}(x_\ell)]_g\\ \leq& \sum_{g\in\tilde{\G}_\ell}\norm{[x_{\ell}-x^*]_g}^2-\norm{[\Grad \Psi(x_{\ell})]_g}^2\left(2\frac{\alpha_\ell}{L}-\alpha_\ell^2\right) +2\alpha_\ell\norm{[x_{\ell}-x^*]_g}\norm{[e_{\B_\ell}(x_\ell)]_g}+\alpha_\ell^2\norm{[e_{\B_\ell}(x_\ell)]_g}^2\\ &+2\alpha_\ell^2\norm{[\Grad \Psi(x_\ell)]_g}\norm{[e_{\B_\ell}(x_\ell)]_g}\\ \leq& \sum_{g\in\tilde{\G}_\ell}\norm{[x_{\ell}-x^*]_g}^2-\norm{[\Grad \Psi(x_{\ell})]_g}^2\left(2\frac{\alpha_\ell}{L}-\alpha_\ell^2\right) 
+(2\alpha_\ell+2\alpha_\ell^2L)\norm{[x_{\ell}-x^*]_g}\norm{[e_{\B_\ell}(x_\ell)]_g}+\alpha_\ell^2\norm{[e_{\B_\ell}(x_\ell)]_g}^2\\ \leq& \sum_{g\in\tilde{\G}_\ell}\left\{\norm{[x_{\ell}-x^*]_g}^2-\norm{[\Grad \Psi(x_{\ell})]_g}^2\left(2\frac{\alpha_\ell}{L}-\alpha_\ell^2\right)\right\} +(2\alpha_\ell+2\alpha_\ell^2L)\norm{x_{\ell}-x^*}\norm{e_{\B_\ell}(x_\ell)}+\alpha_\ell^2\norm{e_{\B_\ell}(x_\ell)}^2 \end{split} \end{equation}
On the other hand, by the definition of $e_{\B}(x)$, we have that \begin{equation}\label{eq:error_eq} \begin{split} e_{\B}(x)=&[\Grad \Psi_{\B}(x)-\Grad \Psi(x)]_{\I^{\neq 0}(x)}=[\Grad f_{\B}(x)-\Grad f(x)]_{\I^{\neq 0}(x)}\\
=&\frac{1}{|\B|}\sum_{j\in \B}[\Grad f_{j}(x)]_{\I^{\neq 0}(x)}-\frac{1}{N}\sum_{i=1}^N [\Grad f_i(x)]_{\I^{\neq 0}(x)}\\
=&\frac{1}{N}\sum_{j\in\B}\left[\frac{N}{|\B|}[\Grad f_{j}(x)]_{\I^{\neq 0}(x)}-[\Grad f_j(x)]_{\I^{\neq 0}(x)}\right]-\frac{1}{N}\sum_{\substack{i=1\\ i\notin \B}}^N[\Grad f_i(x)]_{\I^{\neq 0}(x)}\\
=&\frac{1}{N}\sum_{j\in\B}\left[\frac{N-|\B|}{|\B|}[\Grad f_{j}(x)]_{\I^{\neq 0}(x)}\right]-\frac{1}{N}\sum_{\substack{i=1\\ i\notin \B}}^N[\Grad f_i(x)]_{\I^{\neq 0}(x)}\\ \end{split} \end{equation} Thus, taking the norm on both sides of~\eqref{eq:error_eq} and using the triangle inequality results in the following: \begin{equation}\label{eq:bound_error} \begin{split}
\norm{e_\B(x)}&\leq \frac{1}{N}\sum_{j\in\B}\left[\frac{N-|\B|}{|\B|}\norm{[\Grad f_{j}(x)]_{\I^{\neq 0}(x)}}\right]+\frac{1}{N}\sum_{\substack{i=1\\ i\notin \B}}^N\norm{[\Grad f_i(x)]_{\I^{\neq 0}(x)}}\\
&\leq \frac{1}{N} \frac{N-|\B|}{|\B|} |\B| M + \frac{1}{N} (N-|\B|)M\leq \frac{2(N-|\B|)M}{N}. \end{split} \end{equation}
Since $\alpha_{\ell}\leq 1$ and $|\B_\ell|\geq N-\frac{N}{2M}$, the bound~\eqref{eq:bound_error} gives $\norm{e_{\B_{\ell}}(x_{\ell})}\leq 1$ and hence $\alpha_{\ell}\norm{e_{\B_{\ell}}(x_{\ell})}\leq 1$. Then combining with $\alpha_\ell\leq 1/L$, which guarantees $2\frac{\alpha_\ell}{L}-\alpha_\ell^2\geq 0$ and $2\alpha_\ell+2\alpha_\ell^2L\leq 4\alpha_\ell$,~\eqref{eq:optimal_dist_ell} can be further simplified as \begin{equation}\label{eq:x_kp1_optimal_dist_2} \begin{split} &\norm{x_{\ell+1}-x^*}^2\\ \leq & \sum_{g\in\tilde{\G}_\ell}\left\{\norm{[x_{\ell}-x^*]_g}^2-\norm{[\Grad \Psi(x_{\ell})]_g}^2\left(2\frac{\alpha_\ell}{L}-\alpha_\ell^2\right)\right\} +(2\alpha_\ell+2\alpha_\ell^2L)\norm{x_{\ell}-x^*}\norm{e_{\B_\ell}(x_\ell)}+\alpha_\ell^2\norm{e_{\B_\ell}(x_\ell)}^2\\ \leq & \sum_{g\in\tilde{\G}_\ell}\norm{[x_{\ell}-x^*]_g}^2+4\alpha_\ell\norm{x_{\ell}-x^*}\norm{e_{\B_\ell}(x_\ell)}+\alpha_\ell^2\norm{e_{\B_\ell}(x_\ell)}^2\\ \leq &\norm{x_\ell-x^*}^2+4\alpha_\ell\norm{x_{\ell}-x^*}\norm{e_{\B_\ell}(x_\ell)}+\alpha_\ell\norm{e_{\B_\ell}(x_\ell)} \end{split} \end{equation} It follows from the assumption $\norm{x_{\ell}-x^*}\leq R$ that~\eqref{eq:x_kp1_optimal_dist_2} can be further simplified as \begin{equation}\label{eq:x_kp1_optimal_dist_3} \begin{split} \norm{x_{\ell+1}-x^*}^2\leq & \norm{x_\ell-x^*}^2+4\alpha_\ell R\norm{e_{\B_\ell}(x_\ell)}+\alpha_\ell\norm{e_{\B_\ell}(x_\ell)}\\ \leq & \norm{x_\ell-x^*}^2+(4R+1)\alpha_\ell\norm{e_{\B_\ell}(x_\ell)} \end{split} \end{equation} Summing both sides of~\eqref{eq:x_kp1_optimal_dist_3} from $\ell=K$ to $\ell=K+t-1$ results in \begin{equation} \begin{split} \norm{x_{K+t}-x^*}^2\leq \norm{x_K-x^*}^2+(4R+1)\sum_{\ell=K}^{K+t-1}\alpha_{\ell}\norm{e_{\B_{\ell}}(x_{\ell})}\\ \end{split} \end{equation} It follows from Lemma~\ref{lemma:convergence-series} that the following holds with probability at least $1-\frac{1}{\theta^2}$, \begin{equation}
\sum_{\ell = K}^{\infty} \alpha_{\ell} \|e_{\mathcal{B}_{\ell}}(x_{\ell})\| \leq \frac{3R^2}{8(4R + 1)}. \end{equation} Thus we have that \begin{equation} \begin{split} \norm{x_{K+t}-x^*}^2&\leq \norm{x_K-x^*}^2+\left(4R+1\right)\sum_{\ell=K}^{K+t-1}\alpha_{\ell}\norm{e_{\B_{\ell}}(x_{\ell})}\\
&\leq \norm{x_K-x^*}^2+\left(4R+1\right)\sum_{\ell = K}^{\infty} \alpha_{\ell} \|e_{\mathcal{B}_{\ell}}(x_{\ell})\|\\ &\leq \frac{R^2}{4}+(4R+1)\frac{3R^2}{8(4R+1)}= \frac{R^2}{4}+\frac{3R^2}{8}\leq R^2 \end{split} \end{equation} holds with probability at least $1-\frac{1}{\theta^2}$, which completes the proof.
\end{proof}
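The mini-batch error bound~\eqref{eq:bound_error} used in this proof can be sanity-checked numerically. The snippet below uses synthetic stand-ins for $\Grad f_i(x)$ and verifies $\norm{e_\B(x)}\leq 2(N-|\B|)M/N$ on random mini-batch draws:

```python
import numpy as np

# Numerical sanity check (hypothetical data) of the bound
# ||e_B(x)|| <= 2 (N - |B|) M / N, where M bounds each ||grad f_i(x)||.
rng = np.random.default_rng(1)
N, n = 50, 8
grads = rng.normal(size=(N, n))            # stand-ins for grad f_i(x)
M = np.linalg.norm(grads, axis=1).max()    # per-sample gradient bound

full = grads.mean(axis=0)                  # full gradient (1/N) sum_i grad f_i(x)
for batch_size in (10, 25, 49):
    B = rng.choice(N, size=batch_size, replace=False)
    e = grads[B].mean(axis=0) - full       # e_B(x) restricted to the support
    assert np.linalg.norm(e) <= 2 * (N - batch_size) * M / N + 1e-12
```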
Based on the above lemmas, Lemma~\ref{lemma:x_k_in_neghibors} below shows that if the initial iterate of~\text{Half-Space Step}{} lies close enough to $x^*$, the step size $\alpha_k$ polynomially decreases, and the mini-batch size $|\B_k|$ polynomially increases, then $x^*$ lies in all subsequent reduced spaces $\{\S_k\}_{k=K}^{\infty}$ constructed in~\text{Half-Space Step}{} with high probability.
\begin{lemma}\label{lemma:x_k_in_neghibors}
Suppose $\norm{x_{K}-x^*}\leq \frac{R}{2}$, $K\geq N_\P$, $k=K+t$, $t\in\mathbb{Z}^+$, $0<\alpha_k=\O(1/(\sqrt{N}t))\leq \min\{\frac{2(1-\epsilon)}{L}, \frac{1}{L},\frac{2\delta_1-R-\epsilon(2\delta_2+R)}{M}\} $ and $|\B_k|=\O(t)\geq N-\frac{N}{2M}$. Then for any constant $\tau\in (0,1)$, $\norm{x_k-x^*}\leq R$ with probability at least $1-\tau$ for any $k\geq K$.
\end{lemma} \begin{proof}
It follows from Lemma~\ref{lemma:x_star_in_polyhedron} and the assumptions of this lemma that $x^*\in\S_K$. Moreover, it follows from the assumptions of this lemma, Lemmas~\ref{lemma:convergence-series} and~\ref{lemma:k_plus_1_optimal_dist_non_increase}, the definition of the finite-sum $f(x)$ in \eqref{prob.x}, and the error bound~\eqref{eq:bound_error} that
\begin{equation}
\mathbb{P}(\{x_k\}_{k=K}^{\infty}\in \{x: \norm{x-x^*}\leq R\}^{\infty})\geq \left(1-\frac{1}{\theta^2}\right)^{\O(N-K)}\geq 1-\tau,
\end{equation}
where the last two inequalities hold because the error vanishes to zero as $|\B_k|$ reaches the upper bound $N$, and because $\theta$ is chosen sufficiently large depending on $\tau$ and $\O(N-K)$. \end{proof}
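The choice of $\theta$ in this last step can be made concrete: by Bernoulli's inequality, $(1-1/\theta^2)^m\geq 1-m/\theta^2$, so taking $\theta^2\geq m/\tau$ suffices for a product of $m$ high-probability events. A quick numerical check with hypothetical values of $\tau$ and $m$:

```python
import math

# (1 - 1/theta^2)^m >= 1 - tau whenever theta^2 >= m / tau,
# since (1 - x)^m >= 1 - m*x for x in [0, 1] (Bernoulli's inequality).
tau, m = 0.05, 1000
theta = math.sqrt(m / tau)
assert (1 - 1 / theta ** 2) ** m >= 1 - tau
```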
\begin{corollary}\label{corollary:x_star_in_all_polyhedrons} Lemma~\ref{lemma:x_k_in_neghibors} further implies $x^*$ inhabits all subsequent $\S_k$, i.e., $x^*\in \S_{k}$ for any $k\geq K$. \end{corollary}
Next, we establish that after a finite number of iterations,~HSPG{}{} generates iterates that lie in the feasible domain $\mathcal{X}$ where the Lipschitz continuity of $\Psi$ holds. \begin{lemma} Suppose the assumptions of Lemma~\ref{lemma:x_k_in_neghibors} hold; then after a finite number of iterations, all subsequent iterates satisfy $x_k\in\mathcal{X}$ with high probability. \end{lemma} \begin{proof} It follows from Lemma~\ref{lemma:x_k_in_neghibors} that all subsequent $x_k$ satisfy $\norm{x_k-x^*}\leq R$ with high probability. Combining with Lemma~\ref{lemma:support_cover}, we have that $\I^{\neq 0}(x^*)\subseteq\I^{\neq 0}(x_k)$ for all $k\geq K$ with high probability. Then for any $g\in\I^{\neq 0}(x_k)$, there are two possibilities: either $g\in\I^{\neq 0}(x^*)$ or $g\in \I^{0}(x^*)$. For the first case, $g\in\I^{\neq 0}(x^*)\bigcap\I^{\neq 0}(x_k)$, it follows from the definitions of $R$ in~\eqref{def:R} and $\delta_1$ that \begin{equation} \begin{split} \norm{[x_k-x^*]_g}&\leq \norm{x_k-x^*}\leq R\leq \delta_1\\ \norm{[x^*]_g}-\norm{[x_k]_g}&\leq \delta_1\\ \norm{[x_k]_g}&\geq \norm{[x^*]_g}-\delta_1\geq2\delta_1-\delta_1=\delta_1 \end{split} \end{equation} For any $g\in \I^{0}(x^*)\bigcap\I^{\neq 0}(x_k)$, by Algorithm~\ref{alg:main.x.halfspacestep}, its norm is bounded below by \begin{equation} \delta_1\geq \norm{[x_k-x^*]_g}=\norm{[x_k]_g}\geq \epsilon^t\norm{[x_K]_g}, \end{equation} where, by Theorem~\ref{thm:sparsity_recovery_rate_hbproxsg} to be shown in Appendix~\ref{appendix:sparsity_recovery_thm}, if $\norm{[x_k]_g}\leq \frac{2\alpha_k\delta_3}{1-\epsilon+\alpha_kL}$, then $[x_{k+1}]_g$ equals zero and stays fixed at zero since Algorithm~\ref{alg:main.x.halfspacestep} operates on $\S_k$ as in~\eqref{def:polytope}. Note $\alpha_k=\O(1/t)$; following~\cite[Theorem 4]{karimi2016linear} and~\cite[Theorem 3.2]{drusvyatskiy2018error}, $\mathbb{E}[\norm{[x_k]_g}^2]=\O(1/t)$.
If $\epsilon>0$, then after a finite number of iterations, $\O(1/\epsilon^2)$, each $g\in \I^{0}(x^*)\bigcap\I^{\neq 0}(x_k)$ becomes zero. If $\epsilon=0$, note that $|\B_k|=\O(t)$ and $f$ is a finite sum, so a similar result holds by~\cite[Theorem 2.3, Theorem 3.2]{robert2018convergegd} ($f$ additionally needs to be strongly convex on $\tilde{\mathcal{X}}$). Hence with high probability, after a finite number of iterations, denoted by $T$, all subsequent $x_k$, $k\geq K+T$, lie in $\mathcal{X}$. Regarding $[x_k]_{g\in \I^{0}(x^*)\bigcap\I^{\neq 0}(x_k)}$ for $K\leq k\leq K+T$, note that $\epsilon^{t}\norm{[x_K]_g}$ is also bounded below by the constant $\epsilon^{T}\norm{[x_K]_g}>0$ given $x_K$; for simplicity, we denote the Lipschitz constant of $[\Grad \Psi(x_k)]_g$ by $L$ as well. \end{proof}
We now prove the first main theorem of~HSPG{}{}.\\ \textbf{Proof of Theorem~\ref{thm:convergence}} We know that Algorithm~\ref{alg:main.x.outline} performs an infinite sequence of iterations. It follows Corollary~\ref{corollary:Psi_epoch_decrease} that for any $\ell\in\mathbb{Z}^+$, \begin{equation} \begin{split} &\mathbb{E}[\Psi(x_K)]-\mathbb{E}[\Psi(x_{\ell+1})]=\sum_{k=K}^{\ell}\left\{ \mathbb{E}[\Psi(x_{k})]-\mathbb{E}[\Psi(x_{k+1})]\right\}\\ \geq&\sum_{K\leq k\leq \ell}\left(\alpha_k-\frac{\alpha_k^2L}{2}\right)\sum_{g\in\tilde{\G}_k}\mathbb{E}\left[\norm{[\Grad \Psi(x_{k})]_g}^2\right]+\sum_{K\leq k\leq \ell}\left(\frac{1-\epsilon}{\alpha_k}-\frac{L}{2}\right)\sum_{g\in\hat{\G}_k}\norm{[x_{k}]_g}^2. \end{split} \end{equation} Combining the assumption that $\Psi$ is bounded below and letting $\ell\rightarrow \infty$, we obtain \begin{equation}\label{series:sum_convergent_half_space} \begin{split} \sum_{k\geq K}\left(\alpha_k-\frac{\alpha_k^2L}{2}\right)\sum_{g\in\tilde{\G}_k}\mathbb{E}\left[\norm{[\Grad \Psi(x_{k})]_g}^2\right]+\sum_{k\geq K}\left(\frac{1-\epsilon}{\alpha_k}-\frac{L}{2}\right)\sum_{g\in\hat{\G}_k}\norm{[x_{k}]_g}^2<\infty \end{split} \end{equation} By~Algorithm~\ref{alg:main.x.halfspacestep}, variables on $\I^{0}(x_k)$ are fixed during~$k$th~\text{Half-Space Step}{} and $n$ is finite, then the group projection appears finitely many times, consequently, \begin{equation} \sum_{k\geq K}\left(\frac{1-\epsilon}{\alpha_k}-\frac{L}{2}\right)\sum_{g\in\hat{\G}_k}\norm{[x_{k}]_g}^2<\infty. \end{equation}
Thus~\eqref{series:sum_convergent_half_space} implies that \begin{align}\label{series:sum_convergent_half_space_grad_Psi} &\sum_{k\geq K}\left(\alpha_k-\frac{\alpha_k^2L}{2}\right)\sum_{g\in\tilde{\G}_k}\mathbb{E}\left[\norm{[\Grad \Psi(x_{k})]_g}^2\right]\\ =&\sum_{k\geq K}\alpha_k\sum_{g\in\tilde{\G}_k}\mathbb{E}\left[\norm{[\Grad \Psi(x_{k})]_g}^2\right]- \sum_{k\geq K}\frac{\alpha_k^2L}{2}\sum_{g\in\tilde{\G}_k}\mathbb{E}\left[\norm{[\Grad \Psi(x_{k})]_g}^2\right]<\infty \end{align} Since $\alpha_k=\O(1/(\sqrt{N}t))$, we have $\sum_{k\geq K}\alpha_k=\infty$ and $\sum_{k\geq K}\alpha_k^2<\infty$. Combining with~\eqref{series:sum_convergent_half_space_grad_Psi} and the boundedness of $\partial \Psi$, this implies \begin{equation}\label{series:sum_convergent_half_space_alpha_grad_Psi} \sum_{k\geq K}\alpha_k\sum_{g\in\tilde{\G}_k}\mathbb{E}\left[\norm{[\Grad \Psi(x_{k})]_g}^2\right]<\infty. \end{equation} By $\sum_{k\geq K}\alpha_k=\infty$ and~\eqref{series:sum_convergent_half_space_alpha_grad_Psi}, we have that \begin{equation} \liminf_{k\geq K} \sum_{g\in\tilde{\G}_k}\mathbb{E}\left[\norm{[\Grad \Psi(x_{k})]_g}^2\right]=0, \end{equation} so there exists a subsequence $\mathcal{K}$ such that \begin{equation}\label{series:sum_convergent_half_space_grad_Psi_subsequence} \lim_{k\in\mathcal{K}}\sum_{g\in\tilde{\G}_k}\mathbb{E}\left[\norm{[\Grad \Psi(x_{k})]_g}^2\right]=0 \end{equation}
It follows from the assumptions of this theorem, Lemmas~\ref{lemma:support_cover} to~\ref{lemma:x_k_in_neghibors}, and Corollary~\ref{corollary:x_star_in_all_polyhedrons} that with probability at least $1-\tau$, for each $k\geq K$, $x^*$ lies in $\S_k$. Note that as $|\B_k|=\O(t)$ linearly increases, the error of the gradient estimate vanishes. Hence,~\eqref{series:sum_convergent_half_space_grad_Psi_subsequence} implies that the sequence $\{x_k\}_{k\in\mathcal{K}}$ converges to some stationary point with high probability. We can extend $\mathcal{K}$ to $\{k: k\geq K\}$ due to the bounded distance to the optimal solution established in Lemma~\ref{lemma:x_k_in_neghibors}. By the above, we conclude that \begin{equation}
\mathbb{P}(\lim_{k\rightarrow \infty} \mathbb{E}\left[ \norm{\xi_{\alpha_k,\mathcal{B}_k}(x_k)}\right]=0)\geq 1-\tau. \end{equation}
\subsection{Proof of Theorem~\ref{thm:sparsity_recovery_rate_hbproxsg}}\label{appendix:sparsity_recovery_thm}
In this appendix, we compare the group-sparsity identification properties of~HSPG{}{} and~\text{Prox-SG}{}. We first show the generic sparsity identification property of~\text{Prox-SG}{} for any mixed $\ell_1/\ell_p$ regularization with $p\geq 1$. \begin{lemma}\label{lemma:next_iterate_as_zero1_prox_sg}
If $\norm{x_{k}-x^*}_{p'}\leq \min\{\delta_3/L, \alpha_k\delta_3\}$, where $1/p+1/p'=1$ $(p'=\infty\ \text{if}\ p = 1)$, then \text{Prox-SG}{} yields $[x_{k+1}]_g = 0$ for each $g\in \I^0(x^*)$, \textit{i.e.}, $\I^0(x^*)\subseteq \I^0(x_{k+1})$. \end{lemma} \begin{proof}
It follows from the reverse triangle inequality, basic norm inequalities, Lipschitz continuity of $\Grad f(x)$ and the assumption of this lemma that for any $g\in \mathcal{G}$,
\begin{equation}\label{eq:grad_f_star_pprime}
\begin{split}
\norm{[\Grad f_{\mathcal{B}_k} (x_{k})]_g}_{p'}-\norm{[\Grad f_{\mathcal{B}_k}(x^*)]_g}_{p'}&\leq\norm{[\Grad f_{\mathcal{B}_k}(x_{k})-\Grad f_{\mathcal{B}_k}(x^*)]_g}_{p'}\\
&\leq \norm{\Grad f_{\mathcal{B}_k}(x_{k})-\Grad f_{\mathcal{B}_k}(x^*)}_{p'}\\
&\leq L\norm{x_{k}-x^*}_{p'}\leq L\cdot\frac{\delta_3}{L}=\delta_3.
\end{split}
\end{equation}
By~\eqref{eq:grad_f_star_pprime}, we have that for any $g\in \I^{0}(x^*)$,
\begin{equation}\label{eq:grad_f_x_k_delta3}
\begin{split}
\norm{[\Grad f_{\mathcal{B}_k}(x_{k})]_g}_{p'}&\leq \norm{[\Grad f_{\mathcal{B}_k}(x^*)]_g}_{p'} + \delta_3\\
&\leq \lambda - 2\delta_3 + \delta_3 = \lambda - \delta_3
\end{split}
\end{equation}
Combining~\eqref{eq:grad_f_x_k_delta3} with the assumption of this lemma, which gives $\norm{[x_{k}]_g}_{p'}=\norm{[x_k-x^*]_g}_{p'}\leq \alpha_k\delta_3$ for each $g\in\I^0(x^*)$, we obtain
\begin{equation}
\begin{split}
\norm{[x_{k}-\alpha_k\Grad f_{\mathcal{B}_k}(x_{k})]_g}_{p'}&\leq \norm{[x_{k}]_g}_{p'} + \norm{[\alpha_k\Grad f_{\mathcal{B}_k}(x_{k})]_g}_{p'}\\
&\leq \alpha_k\delta_3+\alpha_k(\lambda - \delta_3)=\alpha_k\lambda
\end{split}
\end{equation}
which further implies that the Euclidean projection satisfies
\begin{equation}\label{eq:proj_x_k_grad_f}
\proj^E_{\mathcal{B}(\norm{\cdot}_{p'},\alpha_k\lambda)} ([x_{k}-\alpha_k\Grad f_{\mathcal{B}_k}(x_k)]_g) = [x_{k}-\alpha_k\Grad f_{\mathcal{B}_k} (x_{k})]_g.
\end{equation}
Combining~\eqref{eq:proj_x_k_grad_f} with the fact that the proximal operator is the residual of the identity operator minus the Euclidean projection onto the dual-norm ball~\citep{chen2018fast}, we have that
\begin{equation}
\begin{split}
[x_{k+1}]_g&=\text{Prox}_{\alpha_k\lambda \norm{\cdot}_p}([x_{k}-\alpha_k \Grad f_{\mathcal{B}_k}(x_{k})]_g)\\
&=\left[I-\proj^E_{\mathcal{B}(\norm{\cdot}_{p'},\alpha_k\lambda)}\right]\left[x_{k}-\alpha_k \Grad f_{\mathcal{B}_k}(x_k)\right]_g\\
&=\left[x_{k}-\alpha_k \Grad f_{\mathcal{B}_k}(x_{k})\right]_g-\left[x_{k}-\alpha_k \Grad f_{\mathcal{B}_k}(x_{k})\right]_g=0,
\end{split}
\end{equation}
consequently $\I^0(x^*)\subseteq \I^0(x_{k+1})$, which completes the proof. \end{proof}
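The identity-minus-projection fact invoked above (the Moreau decomposition) can be sketched for $p=2$, where the proximal operator of $t\norm{\cdot}_2$ reduces to block soft-thresholding; this is a minimal illustration, not the paper's implementation:

```python
import numpy as np

def proj_ball(v, t):
    """Euclidean projection onto the l2 ball of radius t."""
    nv = np.linalg.norm(v)
    return v if nv <= t else (t / nv) * v

def prox_group_l2(v, t):
    """prox of t*||.||_2 via Moreau decomposition: identity minus projection
    onto the dual-norm ball (for p = 2 the dual ball is again the l2 ball)."""
    return v - proj_ball(v, t)

v = np.array([3.0, 4.0])           # ||v|| = 5
# Block-soft-thresholding form: (1 - t/||v||)_+ * v
assert np.allclose(prox_group_l2(v, 1.0), (1 - 1.0 / 5.0) * v)
# When ||v|| <= t the whole group is mapped exactly to zero, which is the
# mechanism behind the sparsity identification argument in the lemma above.
assert np.allclose(prox_group_l2(v, 6.0), 0.0)
```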
Now we establish the group-sparsity identification property of~HSPG{}{}. \\\\ \textbf{Proof of Theorem~\ref{thm:sparsity_recovery_rate_hbproxsg}:}
Suppose $\norm{x_k-x^*}\leq \frac{2\alpha_k\delta_3}{1-\epsilon+\alpha_kL}$. There is nothing to prove if $g\in \I^{0}(x^*)\bigcap \I^{0}(x_k)$. For $g\in \I^0(x^*)\bigcap \I^{\neq 0}(x_k)$, we compute that
\begin{equation}\label{eq:tmp_1}
\begin{split}
&[x_k-\alpha_k\Grad \Psi_{\B_k}(x_k)]_g^\top[x_k]_g- \epsilon\norm{[x_k]_g}^2\\
=&\norm{[x_k]_g}^2-\alpha_k[\Grad \Psi_{\B_k}(x_k)]_g^\top[x_k]_g-\epsilon\norm{[x_k]_g}^2\\
=&(1-\epsilon)\norm{[x_k]_g}^2-\alpha_k\left([\Grad f_{\B_k}(x_k)]_g+\lambda \frac{[x_k]_g}{\norm{[x_k]_g}}\right)^\top[x_k]_g\\
=&(1-\epsilon)\norm{[x_k]_g}^2-\alpha_k[\Grad f_{\B_k}(x_k)]_g^\top[x_k]_g-\alpha_k\lambda \norm{[x_k]_g}\\
\leq & (1-\epsilon)\norm{[x_k]_g}^2+\alpha_k\norm{[\Grad f_{\B_k}(x_k)]_g}\norm{[x_k]_g}-\alpha_k\lambda \norm{[x_k]_g}\\
=& \norm{[x_k]_g}\left\{(1-\epsilon)\norm{[x_k]_g}+\alpha_k\norm{[\Grad f_{\B_k}(x_k)]_g}-\alpha_k\lambda\right\}
\end{split}
\end{equation}
By the Lipschitz continuity of $\Grad f$, we have that for each $g\in \I^0(x^*)\bigcap \I^{\neq 0}(x_k)$,
\begin{equation}
\begin{split}
\norm{[\Grad f_{\B_k}(x_k)-\Grad f_{\B_k}(x^*)]_g}&\leq L\norm{[x_k-x^*]_g}=L\norm{[x_k]_g}\\
\norm{[\Grad f_{\B_k}(x_k)]_g}&\leq L\norm{[x_k]_g}+\norm{[\Grad f_{\B_k}(x^*)]_g}
\end{split}
\end{equation}
Combining with the definition of $\delta_3$, which implies $\norm{[\Grad f_{\B_k}(x^*)]_g}\leq \lambda-2\delta_3$, we obtain
\begin{equation}
\norm{[\Grad f_{\B_k}(x_k)]_g}\leq L\norm{[x_k]_g}+\lambda -2\delta_3
\end{equation}
Hence combining with $\norm{[x_k]_g}\leq \frac{2\alpha_k\delta_3}{1-\epsilon+\alpha_kL}$,~\eqref{eq:tmp_1} can be further written as
\begin{equation}
\begin{split}
&[x_k-\alpha_k\Grad \Psi_{\B_k}(x_k)]_g^\top[x_k]_g- \epsilon\norm{[x_k]_g}^2\\
\leq & \norm{[x_k]_g}\left\{(1-\epsilon)\norm{[x_k]_g}+\alpha_k\norm{[\Grad f_{\B_k}(x_k)]_g}-\alpha_k\lambda\right\}\\
\leq & \norm{[x_k]_g}\left\{(1-\epsilon)\norm{[x_k]_g}+\alpha_kL \norm{[x_k]_g} +\alpha_k\lambda-2\alpha_k\delta_3-\alpha_k\lambda\right\}\\
= & \norm{[x_k]_g}\left\{(1-\epsilon+\alpha_kL)\norm{[x_k]_g}-2\alpha_k\delta_3\right\}\\
\leq & \norm{[x_k]_g}\left\{(1-\epsilon+\alpha_kL)\frac{2\alpha_k\delta_3}{1-\epsilon+\alpha_kL}-2\alpha_k\delta_3\right\}\\
= & \norm{[x_k]_g}\left(2\alpha_k\delta_3-2\alpha_k\delta_3\right)=0.
\end{split}
\end{equation}
which shows that $[x_k-\alpha_k\Grad \Psi_{\B_k}(x_k)]_g^\top[x_k]_g\leq \epsilon\norm{[x_k]_g}^2$. Hence the group projection operator is triggered on $g$ to map the variables to zero, so $g\in \I^{0}(x_{k+1})$,~\textit{i.e.}, $[x_{k+1}]_g=0$. Therefore, the group sparsity of $x^*$ is successfully identified by~\text{Half-Space Step}{}, \textit{i.e.}, $\I^0(x^*)\subseteq \I^0(x_{k+1})$.\\\\
Finally, under additional assumptions, we can further show group-support recovery. \begin{corollary} Under the assumptions of Theorem~\ref{thm:sparsity_recovery_rate_hbproxsg}, if moreover $\norm{x_k-x^*}\leq R$, $x^*\in \S_k$, $0\leq \epsilon<\min\left\{\frac{\delta_1^2}{\delta_2}, \frac{2\delta_1-R}{2\delta_2+R}\right\}$ and $\alpha_k \leq\frac{2\delta_1-R-\epsilon(2\delta_2+R)}{M}$, then $\I^0(x^*)= \I^0(x_{k+1})$ and $\I^{\neq 0}(x_{k+1})=\I^{\neq 0}(x^*)$. \end{corollary} \begin{proof}
The condition $x^*\in \S_k$ indicates that $\I^{\neq 0}(x^*)\subseteq\I^{\neq 0}(x_k)$ by the definition of~$\S_k$. This containment still holds for $x_{k+1}$ by Lemma~\ref{lemma.project_as_zero_group}, \textit{i.e.}, $\I^{\neq 0}(x^*)\subseteq\I^{\neq 0}(x_{k+1})$. Combining with $\I^{0}(x^*)\subseteq \I^{0}(x_{k+1})$ from Theorem~\ref{thm:sparsity_recovery_rate_hbproxsg}, we have that both the group support and the group sparsity of $x^*$ are identified by~HSPG{}{}, \textit{i.e.}, $\I^{\neq 0}(x^*)= \I^{\neq 0}(x_{k+1})$ and $\I^{0}(x^*)= \I^{0}(x_{k+1})$. \end{proof}
\subsection{Upper bound of $N_\P$ under strong convexity}\label{appendx:upper_bound_n_p}
\begin{proposition} Suppose the following conditions hold:
\begin{itemize}
\item (A1) $\mathbb{E}[\nabla f_{\mathcal{B}_k}(\bm{x})] = \nabla f(\bm{x})$.
\item (A2) there exists a $\sigma > 0$ such that $\mathbb{E}_{\mathcal{B}} [\| \nabla f_{\mathcal{B}}(\bm{x}) - \nabla f(\bm{x})\|^2] \leq \sigma^2$ for any mini-batch $\mathcal{B}$.
\item (A3) there exists a $\beta \in (0,1)$ such that $0 < \alpha_k < \frac{1 -\beta}{L}$.
\item (A4) $f$ is $\mu$-strongly convex.
\end{itemize}
Set the step size $\alpha_k = \frac{1}{2\mu\beta k}$ and $k_0 = \lceil\max\{1, \frac{1}{2\mu\beta}\} \rceil$. For any $\tau\in (0,1)$, there exists an $N_\P\in\mathbb{Z}^+$ with $N_\P\geq\left\lceil\max\left\{ \frac{8 k_0 \mathbb{E}[\|\bm{x}_{k_0} - \bm{x}^{\ast}\|^2] }{R^2 \tau}, ~~ \frac{8 \sigma^2 \log (N_\P-1)}{\mu^2 \beta^2 R^2 \tau} \right\} \right\rceil$ such that performing Prox-SG for $N_\P$ iterations yields
\begin{equation}
\|\bm{x}_{N_\P} - \bm{x}^{\ast}\|\leq R/2
\end{equation}
with probability at least $1-\tau$.
\end{proposition}
\begin{proof}
By the conditions (A1, A2, A3), Assumption 3.1 and Theorem 3.2 in~\cite{rosasco2019convergence}, we have for any $k\geq 2$,
\begin{align}
\mathbb{E}[\|\bm{x}_{k} - \bm{x}^{\ast}\|^2] \leq \mathbb{E}[\|\bm{x}_{k_0} - \bm{x}^{\ast}\|^2] \left( \frac{k_0}{k} \right) + \frac{\sigma^2}{\mu^2 \beta^2} \frac{\log (k-1)}{k}.
\end{align}
Let $\mathbb{E}[\|\bm{x}_{k_0} - \bm{x}^{\ast}\|^2] = s_{k_0}$. For any $\tau \in (0,1)$, there exists an $N_\P\in\mathbb{Z}^+$ satisfying
\begin{equation}
N_\P \geq \left\lceil\max\left\{ \frac{8 k_0 s_{k_0} }{R^2 \tau}, ~~ \frac{8 \sigma^2 \log (N_\P-1)}{\mu^2 \beta^2 R^2 \tau} \right\}\right\rceil,
\end{equation}
so that
\begin{align}
\mathbb{E}[\|\bm{x}_{N_\P} - \bm{x}^{\ast}\|^2] \leq \frac{R^2 \tau}{4}.
\end{align}
Therefore, by Markov's inequality,
\begin{align}
\|\bm{x}_{N_\P} - \bm{x}^{\ast}\|^2 \leq \frac{R^2}{4} ~~ \Leftrightarrow ~~ \|\bm{x}_{N_\P} - \bm{x}^{\ast}\| \leq \frac{R}{2}
\end{align}
holds with probability at least $1 - \tau$.
\end{proof}
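For intuition, the diminishing schedule $\alpha_k = \frac{1}{2\mu\beta k}$ from the proposition can be illustrated on a toy problem. The following sketch is a stand-in, not the paper's experiment: the quadratic objective, noise level, and all constants are our own illustrative assumptions, with additive gradient noise playing the role of the mini-batch error in (A1)--(A2).

```python
import numpy as np

# Toy illustration of the step-size schedule alpha_k = 1/(2*mu*beta*k) on a
# mu-strongly convex quadratic f(x) = (mu/2)*||x - x_star||^2, with additive
# gradient noise of standard deviation sigma standing in for mini-batch error.
rng = np.random.default_rng(0)
mu, beta, sigma, R = 1.0, 0.5, 0.1, 1.0
x_star = np.array([1.0, -2.0])
x = x_star + np.array([3.0, 3.0])            # start far from the optimum

for k in range(1, 5001):
    alpha = 1.0 / (2.0 * mu * beta * k)      # diminishing step size
    grad = mu * (x - x_star) + sigma * rng.standard_normal(2)
    x = x - alpha * grad

dist = float(np.linalg.norm(x - x_star))     # expected to fall below R/2
```

On this well-conditioned toy problem the iterate enters the ball of radius $R/2$ after far fewer steps than the worst-case bound suggests.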
\section{Additional Numerical Experiments}\label{appendix:experiments}
In this section, we provide additional numerical experiments to \textit{(i)} validate the group sparsity identification of HSPG; \textit{(ii)} provide a comprehensive comparison with Prox-SG, RDA and Prox-SVRG on benchmark convex problems; and \textit{(iii)} describe further details of the non-convex deep learning experiments shown in the main body.
\subsection{Linear Regression on Synthetic Data}\label{appendix:convex_exp_linear_regression}
We first numerically validate the group sparsity identification of the proposed HSPG on linear regression problems with $\ell_1/\ell_2$ regularization using synthetic data. Given a data matrix $A\in\mathbb{R}^{N\times n}$ consisting of $N$ instances and a target variable $y\in\mathbb{R}^N$, we are interested in the following problem: \begin{equation}\label{eq:lr}
\minimize{x\in\mathbb{R}^n}\ \frac{1}{2N}\|Ax-y\|^2+ \lambda \sum_{g\in \G}\norm{[x]_g}. \end{equation}
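For reference, proximal-gradient-type steps on the problem above apply the proximal operator of the group regularizer, which reduces to block soft-thresholding. The sketch below is an illustrative reimplementation under our own naming and grouping assumptions, not the released code.

```python
import numpy as np

# Proximal operator of thresh * sum_g ||[x]_g|| (block soft-thresholding),
# the map applied to the gradient step in each Prox-SG-type iteration.
# `groups` is a list of index arrays partitioning {0, ..., n-1}.
def prox_group_l2(x, groups, thresh):
    out = x.copy()
    for g in groups:
        norm_g = np.linalg.norm(x[g])
        if norm_g <= thresh:
            out[g] = 0.0                      # the whole group is zeroed out
        else:
            out[g] *= 1.0 - thresh / norm_g   # shrink the group toward zero
    return out

x = np.array([3.0, 4.0, 0.1, -0.1])
groups = [np.array([0, 1]), np.array([2, 3])]
z = prox_group_l2(x, groups, thresh=1.0)
# first group (norm 5) shrinks by factor 0.8; second group (norm ~0.14) is zeroed
```

This group-wise zeroing is what produces exact group sparsity in the iterates, in contrast to plain SGD.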
Our goal is to empirically show that HSPG is able to identify the ground truth zero groups on synthetic data. We conduct the experiments as follows: \textit{(i)} generate the data matrix $A$ with elements uniformly distributed in $[-1, 1]$; \textit{(ii)} generate a vector $x^*$ serving as the ground truth solution, with elements uniformly distributed in $[-1, 1]$ and coordinates equally divided into 10 groups ($|\G|=10$); \textit{(iii)} randomly set a number of groups of $x^*$ to 0 according to a pre-specified group sparsity ratio; \textit{(iv)} compute the target variable $y=Ax^*$; \textit{(v)} solve problem \eqref{eq:lr} for $x$ given $A$ and $y$ only, and then evaluate the Intersection over Union (IoU) between the sets of zero groups of the solution estimate $\hat x$ computed by HSPG and of the ground truth $x^*$.
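The IoU evaluation in step (v) can be sketched as follows. This is an illustrative reimplementation: the function name, the tolerance parameter, and the equal-split grouping are our assumptions, not the paper's code.

```python
import numpy as np

# IoU between the sets of zero groups of an estimate x_hat and the ground
# truth x_star, with coordinates equally divided into n_groups groups.
def zero_group_iou(x_hat, x_star, n_groups=10, tol=0.0):
    gh = np.split(x_hat, n_groups)
    gs = np.split(x_star, n_groups)
    zh = {i for i in range(n_groups) if np.linalg.norm(gh[i]) <= tol}
    zs = {i for i in range(n_groups) if np.linalg.norm(gs[i]) <= tol}
    if not (zh | zs):
        return 1.0          # no zero groups on either side
    return len(zh & zs) / len(zh | zs)

x_star = np.zeros(20); x_star[10:] = 1.0   # groups 0-4 zero, groups 5-9 nonzero
x_hat  = np.zeros(20); x_hat[10:]  = 0.9   # same zero-group pattern
# perfect identification of the zero groups gives IoU = 1.0
```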
We test HSPG on \eqref{eq:lr} under different problem settings. For a slim matrix $A$ with $N\ge n$, we test group sparsity ratios in $\{0.1,0.3,0.5,0.7,0.9\}$; for a fat matrix $A$ with $N<n$, we only test a single group sparsity ratio per setting, since recovery of $x^*$ requires the number of non-zero elements in $x^*$ to be bounded by $N$. Throughout the experiments, we set $\lambda$ to $100/N$, the mini-batch size $|\mathcal{B}|$ to 64, and the step size $\alpha_k$ to 0.1 (constant), and fine-tune $\epsilon$ per problem. Based on a statistical test of objective function stationarity similar to~\citep{zhang2020statistical}, we switch to the \text{Half-Space Step}{} roughly after 30 epochs. Table~\ref{tb:lr} shows that under each setting, the proposed HSPG correctly identifies the groups of zeros, as indicated by $\textrm{IoU}(\hat{x},x^*)=1.0$, which is strong evidence for the correctness of the group sparsity identification of HSPG.
\begin{table}[h] \centering \caption{Linear regression problem settings and IoU of the recovered solutions by HSPG.} \label{tb:lr}
\begin{tabular}{c|cccc} \hline
& \quad $N$\quad & \quad $n$ \quad & \quad Group sparsity ratio of $x^*$ \quad & \quad IoU($\hat x,x^*$) \quad \\ \hline \multirow{4}{*}{\begin{tabular}[c]{@{}l@{}} Slim $A$ \\ \end{tabular}} & \quad 10000\quad & \quad1000 \quad & \quad\{0.1, 0.3, 0.5, 0.7, 0.9\} \quad& \quad1.0 \quad\\
& \quad 10000\quad & \quad2000 \quad & \quad\{0.1, 0.3, 0.5, 0.7, 0.9\} \quad & \quad1.0 \quad \\
& \quad 10000\quad & \quad3000 \quad & \quad\{0.1, 0.3, 0.5, 0.7, 0.9\} \quad & \quad1.0 \quad\\
& \quad 10000\quad & \quad4000 \quad & \quad\{0.1, 0.3, 0.5, 0.7, 0.9\} \quad& \quad1.0 \quad \\ \hline \multirow{4}{*}{\begin{tabular}[c]{@{}l@{}}Fat $A$\\ \end{tabular}} & \quad200 \quad & \quad1000 \quad& \quad0.9 \quad & \quad 1.0 \quad \\
& \quad300 \quad & \quad1000 \quad& \quad 0.8 \quad & \quad1.0 \quad\\
& \quad400 \quad & \quad1000 \quad& \quad0.7 \quad & \quad1.0 \quad\\
& \quad500 \quad & \quad1000 \quad & \quad 0.6 \quad & \quad1.0 \quad\\ \hline \end{tabular} \end{table}
\subsection{Logistic Regression}\label{appendix:convex_exp_logistic_regression}
We then focus on the benchmark convex logistic regression problem with mixed $\ell_1/\ell_2$-regularization. Given $N$ examples $(d_1, l_1), \cdots, (d_N, l_N)$, where $d_i\in \mathbb{R}^n$ and $l_i \in \{-1, 1\}$, the problem takes the form
\begin{equation}\label{def:minimize_logistic_l1} \small \minimize{(x; b)\in \mathbb{R}^{n+1}}\ \frac{1}{N}\sum_{i=1}^N \log(1 + e^{-l_i (x^T d_i +b)}) + \lambda \sum_{g\in \G}\norm{[x]_g}, \end{equation} for binary classification with a bias $b\in\mathbb{R}$. We set the regularization parameter $\lambda$ as $100/N$ throughout the experiments, since it yields highly sparse solutions and low objective values $f$, equally decompose the variables into 10 groups to form $\mathcal{G}$, and test~problem~\eqref{def:minimize_logistic_l1} on 8 standard publicly available large-scale datasets from the LIBSVM repository~\citep{chang2011libsvm}
as summarized in Table~\ref{table:datasets}. All convex experiments are conducted on a 64-bit operating system with an Intel(R) Core(TM) i7-7700K CPU $@$ 4.20 GHz and 32 GB random-access memory.
We run the solvers for a maximum of $60$ epochs.
The mini-batch size $|\mathcal{B}|$ is set to $\min\{256, \lceil{0.01N\rceil}\}$, similarly to~\citep{yang2019stochastic}. The step-size setting follows Section 4 of~\citep{xiao2014proximal}. In particular, we first compute a Lipschitz constant $L$ as $\max_{i}\norm{d_i}^2/4$, then fine-tune and select the constant step size $\alpha_k\equiv\alpha=1/L$ for~\text{Prox-SG}{} and~\text{Prox-SVRG}{}, since it exhibits the best results. For~\text{RDA}{}, the step-size parameter $\gamma$ is fine-tuned as the best-performing value among all powers of $10$. For~HSPG{}{}, we set $\alpha_k$ the same as for~\text{Prox-SG}{} and \text{Prox-SVRG}{}. We set $N_\mathcal{P}$ as $30N/|\mathcal{B}|$ so that the \text{Half-Space Step}{} is triggered after 30 epochs of the Prox-SG Step, similarly to Appendix~\ref{appendix:convex_exp_linear_regression}, and test two values of the control parameter $\epsilon$ in~\eqref{def:proj}, namely $0$ and $0.05$. The final objective values $\Psi$ and $f$ and the group sparsity of the solutions are reported in Tables~\ref{table:object_Psi_value_convex}--\ref{table:group_sparsity_convex}, where the best values are marked in bold to facilitate comparison. Furthermore, Figure~\ref{figure:runtime_convex} plots the relative runtime of the solvers for each dataset, scaled by the runtime of the most time-consuming solver.
Table~\ref{table:group_sparsity_convex} shows that~HSPG{}{} is clearly the best solver at exploring the group sparsity of the solutions. In fact,~HSPG{}{} with $\epsilon=0.05$ performs best on all datasets except \textit{ijcnn1}.~\text{Prox-SVRG}{} is the second best solver in group sparsity exploration, which demonstrates that the variance-reduction technique promotes sparsity well in the convex setting, though not in non-convex settings. HSPG{}{} with $\epsilon=0$ performs much better than~\text{Prox-SG}{}, which matches the superior sparsity recovery property of~HSPG{}{} stated in Theorem~\ref{thm:sparsity_recovery_rate_hbproxsg}, even with $\epsilon$ as small as $0$. Moreover, as shown in Tables~\ref{table:object_Psi_value_convex} and~\ref{table:object_f_value_convex}, all solvers except~\text{RDA}{} perform quite competitively in terms of final objective values (rounded to 3 decimals), which demonstrates that HSPG{}{} converges comparably to~\text{Prox-SG}{} and~\text{Prox-SVRG}{} in practice. Finally, Figure~\ref{figure:runtime_convex} indicates that Prox-SG, RDA and HSPG{}{} have similar computational cost, while~\text{Prox-SVRG}{} is slower due to its periodic full gradient computation.
\begin{table}[h]
\scriptsize
\centering
\def1.1{1.1}
\caption{Summary of datasets.\label{table:datasets}}
\begin{tabular}{ccccccccc}
\Xhline{2\arrayrulewidth}
Dataset & $N$ & $n$ & Attribute & & Dataset & $N$ & $n$ & Attribute \\
\hline
a9a & 32561 & 123 & binary $\{0, 1\}$ & & news20 & 19996 & 1355191 & unit-length \\
higgs & 11000000 & 28 & real $[-3, 41]$ & & real-sim & 72309 & 20958 & real $[0, 1]$\\
ijcnn1 & 49990 & 22 & real $[-1, 1]$ & & url\_combined & 2396130 & 3231961 & real $[-4, 9]$ \\
kdda & 8407752 & 20216830 & real $[-1, 4]$ & & w8a & 49749 & 300 & binary $\{0, 1\}$\\
\Xhline{2\arrayrulewidth}
\end{tabular} \end{table}
\begin{table}[h]
\centering
\def1.1{1.1}
\caption{Final objective values $\Psi$ for tested algorithms on convex problems.}
\label{table:object_Psi_value_convex}
{\scriptsize
\begin{tabularx}{\textwidth} {
>{\centering\arraybackslash}X
>{\centering\arraybackslash}X
>{\centering\arraybackslash}X
>{\centering\arraybackslash}X
>{\centering\arraybackslash}X
>{\centering\arraybackslash}X }
\Xhline{3\arrayrulewidth}
\multirow{2}{*}{Dataset} & \multirow{2}{*}{\text{Prox-SG}{}} & \multirow{2}{*}{\text{RDA}} & \multirow{2}{*}{\text{Prox-SVRG}{}} & \multicolumn{2}{c}{HSPG{}{}} \\
\cline{5-6}
& & & & $\epsilon$ as $0$ & $\epsilon$ as $0.05$\\
\hline
a9a & \textbf{0.355} & 0.359 & \textbf{0.355} & \textbf{0.355} & \textbf{0.355} \\
higgs & \textbf{0.357} & 0.360 & 0.365 & 0.358 & 0.358\\
ijcnn1 & \textbf{0.248} & 0.278 & \textbf{0.248} & \textbf{0.248} & \textbf{0.248}\\
kdda & \textbf{0.103} & 0.124 & \textbf{0.103} & \textbf{0.103} & \textbf{0.103}\\
news20 & \textbf{0.538} & 0.693 & \textbf{0.538} & \textbf{0.538} & \textbf{0.538} \\
real-sim & \textbf{0.242} & 0.666 & 0.244 & \textbf{0.242} & \textbf{0.242} \\
url\_combined & 0.397 & 0.579 & \textbf{0.391} & 0.405 & 0.405\\
w8a & \textbf{0.110} & 0.111 & 0.112 & \textbf{0.110} & \textbf{0.110}\\
\Xhline{3\arrayrulewidth}
\end{tabularx}
}
\caption{Final objective values $f$ for tested algorithms on convex problems.}
\label{table:object_f_value_convex}
{\scriptsize
\begin{tabularx}{\textwidth} {
>{\centering\arraybackslash}X
>{\centering\arraybackslash}X
>{\centering\arraybackslash}X
>{\centering\arraybackslash}X
>{\centering\arraybackslash}X
>{\centering\arraybackslash}X }
\Xhline{3\arrayrulewidth}
\multirow{2}{*}{Dataset} & \multirow{2}{*}{\text{Prox-SG}{}} & \multirow{2}{*}{\text{RDA}} & \multirow{2}{*}{\text{Prox-SVRG}{}} & \multicolumn{2}{c}{HSPG{}{}} \\
\cline{5-6}
& & & & $\epsilon$ as $0$ & $\epsilon$ as $0.05$\\
\hline
a9a & \textbf{0.329} & 0.338 & \textbf{0.329} & \textbf{0.329} & \textbf{0.329} \\
higgs & \textbf{0.357} & 0.360 & 0.365 & 0.358 & 0.358 \\
ijcnn1 & \textbf{0.213} & 0.270 & \textbf{0.213} & \textbf{0.213} & 0.214\\
kdda & \textbf{0.103} & 0.124 & \textbf{0.103} & \textbf{0.103} & \textbf{0.103}\\
news20 & 0.373 & 0.693 & 0.381 & \textbf{0.372} & \textbf{0.372} \\
real-sim & \textbf{0.148} & 0.665 & 0.159 & \textbf{0.148} & \textbf{0.148} \\
url\_combined & 0.397 & 0.579 & \textbf{0.391} & 0.405 & 0.405\\
w8a & \textbf{0.089} & 0.098 & 0.091 & \textbf{0.089} & \textbf{0.089}\\
\Xhline{3\arrayrulewidth}
\end{tabularx}
}
\caption{Group sparsity for tested algorithms on convex problems.}
\label{table:group_sparsity_convex}
{\scriptsize
\begin{tabularx}{\textwidth} {
>{\centering\arraybackslash}X
>{\centering\arraybackslash}X
>{\centering\arraybackslash}X
>{\centering\arraybackslash}X
>{\centering\arraybackslash}X
>{\centering\arraybackslash}X }
\Xhline{3\arrayrulewidth}
\multirow{2}{*}{Dataset} & \multirow{2}{*}{\text{Prox-SG}{}} & \multirow{2}{*}{\text{RDA}{}} & \multirow{2}{*}{\text{Prox-SVRG}{}} & \multicolumn{2}{c}{HSPG{}{}} \\
\cline{5-6}
& & & & $\epsilon$ as $0$ & $\epsilon$ as $0.05$\\
\hline
a9a & 20\% & \textbf{30\%} & \textbf{30\%} & \textbf{30\%} & \textbf{30\%} \\
higgs & 0\% & 10\% & 0\% & 0\% & \textbf{30\%}\\
ijcnn1 & 50\% & \textbf{70\%} & 60\% & 60\% & 60\% \\
kdda & 0\% & 0\% & 0\% & 0\% & \textbf{80\%}\\
news20 & 20\% & 80\% & \textbf{90\%} & 80\% & \textbf{90\%} \\
real-sim & 0\% & 0\% & \textbf{80\%} & 0\% & \textbf{80\%} \\
url\_combined & 0\% & 0\% & 0\% & 0\% & \textbf{90\%} \\
w8a & \textbf{0\%} & \textbf{0\%} & \textbf{0\%} & \textbf{0\%} & \textbf{0\%} \\
\Xhline{3\arrayrulewidth}
\end{tabularx}
} \end{table}
\begin{figure}
\caption{Relative runtime.}
\label{figure:runtime_convex}
\end{figure}
\subsection{Deep Learning Experiments}\label{appendix:nonconvex_exp}
We conduct all deep learning experiments on one GeForce GTX 1080 Ti GPU, and describe here in detail how to fine-tune the control parameter $\epsilon$ in~\eqref{def:proj}. According to Theorem~\ref{thm:sparsity_recovery_rate_hbproxsg}, a larger $\epsilon$ results in faster group sparsity identification; on the other hand, by Lemma~\ref{lemma:sufficient_decrease_half_space}, too large an $\epsilon$ may cause a significant regression of the target objective $\Psi$, \textit{i.e.}, the $\Psi$ value increases considerably. Hence, from the optimization point of view, we search for a proper $\epsilon$ as follows: starting from $\epsilon=0.0$ and the model trained by $N_\P$~\text{Prox-SG Step}{}s, we incrementally increase $\epsilon$ by 0.01, check whether $\Psi$ exhibits an obvious increase on the first~\text{Half-Space Step}{}, and accept the largest $\epsilon$ without regression of $\Psi$ as the fine-tuned value reported in the main body of the paper. In particular, the fine-tuned $\epsilon$'s are 0.03, 0.05, 0.02 and 0.02 for \text{VGG16}{} on~\text{CIFAR10}{}, \text{VGG16}{} on~\text{Fashion-MNIST}{},~\text{ResNet18}{} on~\text{CIFAR10}{} and~\text{ResNet18}{} on~\text{Fashion-MNIST}{}, respectively. Note that different applications may use different criteria to fine-tune $\epsilon$; \textit{e.g.}, for model compression, one may accept $\epsilon$ based on the validation accuracy regression in order to reach higher group sparsity.
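The search procedure above can be sketched as follows. Here `psi_fn` is a hypothetical stand-in for evaluating $\Psi$ after the first Half-Space Step at a given $\epsilon$ on the Prox-SG-warm-started model; it is not part of the paper's code, and the tolerance and grid are our assumptions.

```python
# Incremental epsilon search: accept the largest eps whose Psi after the
# first Half-Space Step shows no obvious increase over the baseline.
def tune_epsilon(psi_fn, psi_baseline, tol=1e-3, step=0.01, n_max=10):
    best = 0.0
    for k in range(n_max + 1):
        eps = k * step
        if psi_fn(eps) > psi_baseline + tol:   # obvious increase: regression on Psi
            break
        best = eps                             # largest eps without regression so far
    return best

# Toy stand-in: Psi stays flat up to eps ~ 0.03 and then jumps.
toy_psi = lambda eps: 0.5 if eps < 0.035 else 1.5
eps_star = tune_epsilon(toy_psi, psi_baseline=0.5)
# eps_star is approximately 0.03
```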
Additionally, we report the final $f$ comparison in Table~\ref{table:object_f_value_nonconvex} and its evolution on~\text{ResNet18}{} with~\text{CIFAR10}{} in Figure~\ref{figure:f_evolution}, where all tested algorithms achieve competitive $f$ values, as they do in the convex settings. The evolution of $f$ is similar to that of $\Psi$, \textit{i.e.}, the raw objective $f$ generally decreases monotonically for small $\epsilon=0$ to $0.02$, and experiences a mild pulse after switching to the~\text{Half-Space Step}{} for larger $\epsilon$, \textit{e.g.}, 0.05, which matches Lemma~\ref{lemma:sufficient_decrease_half_space_restated}. \begin{table}[h]
\centering
\def1.1{1.1}
\caption{Final objective values $f$ for tested algorithms on non-convex problems.}
\label{table:object_f_value_nonconvex}
{\scriptsize
\begin{tabularx}{\textwidth} {
>{\centering\arraybackslash}X
>{\centering\arraybackslash}X
>{\centering\arraybackslash}X
>{\centering\arraybackslash}X
>{\centering\arraybackslash}X
>{\centering\arraybackslash}X }
\Xhline{3\arrayrulewidth}
\multirow{2}{*}{Backbone} & \multirow{2}{*}{Dataset} & \multirow{2}{*}{\text{Prox-SG}{}} & \multirow{2}{*}{\text{Prox-SVRG}{}} & \multicolumn{2}{c}{HSPG{}{}} \\
\cline{5-6}
& & & & $\epsilon$ as $0$ & fine tuned $\epsilon$\\
\hline
\multirow{2}{*}{\text{VGG16}{}}& \text{CIFAR10}{} & 0.010 & 0.036 & 0.010 & \textbf{0.009} \\
& \text{Fashion-MNIST}{} & 0.181 & \textbf{0.165} & 0.181 & 0.182\\
\hdashline
\multirow{2}{*}{\text{ResNet18}{}} & \text{CIFAR10}{} & \textbf{0.001} & 0.002 & \textbf{0.001} & 0.004 \\
&\text{Fashion-MNIST}{} & 0.006 & 0.008 & \textbf{0.005} & 0.010 \\
\hdashline
\multirow{2}{*}{\text{MobileNetV1}{}} & \text{CIFAR10}{} & \textbf{0.021} & 0.031 & \textbf{0.021} & 0.031 \\
&\text{Fashion-MNIST}{} & 0.074 & \textbf{0.057} & 0.074 & 0.088 \\
\Xhline{3\arrayrulewidth}
\end{tabularx}
} \end{table} \begin{figure}
\caption{Evolution of $f$ value on~\text{ResNet18}{} with~\text{CIFAR10}{}.}
\label{figure:f_evolution}
\end{figure}
\end{document} |
\begin{document}
\title{Chiral excitation of a single atom by a single photon in a guided mode of a nanofiber}
\author{Fam Le Kien} \affiliation{Okinawa Institute of Science and Technology Graduate University, Onna, Okinawa 904-0495, Japan}
\author{S\'{i}le Nic Chormaic} \affiliation{Okinawa Institute of Science and Technology Graduate University, Onna, Okinawa 904-0495, Japan}
\author{Thomas Busch} \affiliation{Okinawa Institute of Science and Technology Graduate University, Onna, Okinawa 904-0495, Japan}
\date{\today}
\begin{abstract} We study the interaction between a single two-level atom and a single-photon probe pulse in a guided mode of a nanofiber. We examine the situation of chiral interaction, where the atom has a dipole rotating in the meridional plane of the nanofiber, and the probe pulse is quasilinearly polarized along the radial direction of the atom position in the fiber transverse plane. We show that the atomic excitation probability, the photon transmission flux, and the photon transmission probability depend on the propagation direction of the probe pulse along the fiber axis. In contrast, the reflection flux and the reflection probability do not depend on the propagation direction of the probe pulse. We find that the asymmetry parameter for the atomic excitation probability does not vary in time and does not depend on the probe pulse shape. \end{abstract}
\pacs{} \maketitle
\section{Introduction}
The manipulation and control of coupling between light and matter at a single quantum level lie at the heart of quantum optics and quantum information processing and, therefore, have received a lot of attention in the past \cite{Cirac1997,Haroche1997,Duan2001,Dayan2014}. The interaction between a single two-level atom and a quantized single-photon light pulse has been studied extensively \cite{Domokos2002,Leuchs2007,Leuchs2009,Fan2010,Wang2011,Koenderink2011,Wang2012,GB}. It has been shown that the transient excitation probability of a single two-level atom interacting with a quantized single-photon pulse can achieve higher values than in the steady-state regime. In particular, it has been predicted that the excitation probability of the atom can, in principle, approach unity if the photon waveform matches both spatially and temporally the time-reversed version of a spontaneously emitted photon \cite{Leuchs2007,Leuchs2009,Fan2010}. This condition means that the spatial profile of the incident photon should match the atomic dipole emission pattern and that the temporal shape of the incident photon should be a rising exponential \cite{Leuchs2007,Leuchs2009,Fan2010}. Schemes for efficient excitation involving free-space interaction \cite{Leuchs2007,Leuchs2009} as well as waveguides \cite{Fan2010,Wang2011,Koenderink2011,Wang2012,GB} have been studied. The analogy between a single atom and an optical resonator in the absorption of a light pulse has been investigated \cite{Leuchs2010,Leuchs2013}. Experiments on the use of rising exponential pulses for efficient atomic excitation, photon absorption, and loading of photons into a cavity at a single quantum level have been reported \cite{Du2012,Kurtsiefer2013,Martinis2014,Du2014,Kurtsiefer2016}.
It is difficult to achieve spatial mode matching between the incident photon wave packet and the atomic dipole emission profile when the atom is in free space. In contrast, the use of a waveguide provides strong spatial mode matching and hence simplifies practical implementations \cite{Fan2010,Wang2011,Koenderink2011,Wang2012,GB}. This strong mode matching is also the source of efficient channeling of spontaneous emission from atoms into fibers \cite{Jhe,Klimov,cesium decay}.
The efficient coupling between atoms and light can be seen clearly in nanofiber-based systems. Nanofibers are vacuum-clad, ultrathin optical fibers that allow tightly radially confined light to propagate over a long distance (the range of several millimeters is typical) and to interact efficiently with nearby atoms \cite{TongNat03,review2016,review2017,Nayak2018}. It has been shown that, for atoms near a nanofiber, spontaneous emission may become asymmetric with respect to opposite propagation directions \cite{Fam2014,Petersen2014,Mitsch14b,sponhigh}. This directional effect is a signature of spin-orbit coupling of light carrying transverse spin angular momentum \cite{Zeldovich,Bliokh review,Bliokh2014,Bliokh review2015,Bliokh2015,Banzer review2015,Lodahl2017}. The chirality of the field in a nanofiber-guided mode occurs as a consequence of the fact that the field has a nonzero longitudinal component, which oscillates in phase quadrature with respect to the radial transverse component. The chiral interaction of the guided field with a nearby atom appears when the atom has a dipole rotating in the meridional plane of the nanofiber.
The purpose of this paper is to study the chiral interaction between a single two-level atom and a single-photon probe pulse in a guided mode of a nanofiber. We show that the atomic excitation probability, the photon transmission flux, and the photon transmission probability depend on the propagation direction of the probe pulse along the fiber axis.
The paper is organized as follows. In Sec.~\ref{sec:model} we describe the model and the Hamiltonian of the system. Section \ref{sec:equations} is devoted to the dynamical equations. In Sec. \ref{sec:numerical}, we present the results of numerical calculations. Our conclusions are given in Sec.~\ref{sec:summary}.
\section{Model and Hamiltonian} \label{sec:model}
We consider a single two-level atom interacting with an injected quantized near-resonant light pulse in a guided mode of a vacuum-clad, ultrathin optical fiber (see Fig.~\ref{fig1}). The atom has an upper energy level $|e\rangle$ and a lower energy level $|g\rangle$, with energies $\hbar\omega_e$ and $\hbar\omega_g$, respectively, and is located at a fixed point outside the fiber. We assume that the central frequency $\omega_L$ of the probe pulse is close to the transition frequency $\omega_0=\omega_e-\omega_g$ of the atom, and the spectral pulse width is small compared to the optical frequency. The fiber is a dielectric cylinder of radius $a$ and refractive index $n_1>1$ and is surrounded by an infinite background vacuum or air medium of refractive index $n_2=1$. We are interested in vacuum-clad silica-core ultrathin fibers with diameters in the range of hundreds of nanometers, which can support only the fundamental HE$_{11}$ mode and a few higher-order modes in the optical region. Such optical fibers are usually called nanofibers \cite{TongNat03,review2016,review2017,Nayak2018}. In view of the very low losses of silica in the wavelength range of interest, we neglect material absorption.
We use Cartesian coordinates $\{x,y,z\}$, where $z$ is the coordinate along the fiber axis, and also cylindrical coordinates $\{r,\varphi,z\}$, where $r$ and $\varphi$ are the polar coordinates in the fiber transverse plane $xy$. We assume that the atom is located at a point $\mathbf{R}\equiv (r,\varphi,z)$ in the cylindrical coordinates. We use the notation $\mathbf{r}=(r,\varphi)$ for the position of the atom in the fiber transverse plane.
\begin{figure}
\caption{Two-level atom interacting with a quantized light pulse in a guided mode of an optical nanofiber.
}
\label{fig1}
\end{figure}
The atom interacts with the full quantum electromagnetic field, which includes the injected quantum field in the input mode and the vacuum quantum field in other modes. In the presence of the fiber, the quantum field can be decomposed into the contributions from guided and radiation modes \cite{fiber books}. In the interaction picture, the Hamiltonian for the atom-field interaction in the dipole and rotating-wave approximations can be written as \cite{cesium decay,sponhigh} \begin{equation}\label{v1} \begin{split} H_{\mathrm{int}}&=-i\hbar\sum_{\alpha=\mu,\nu}(G_{\alpha}\sigma^{\dagger} a_{\alpha}e^{-i(\omega-\omega_0)t}-\mbox{H.c.}). \end{split} \end{equation}
Here, $\sigma=|g\rangle\langle e|$ and $\sigma^\dagger=|e\rangle\langle g|$ are the atomic transition operators, $a_{\alpha}$ and $a_{\alpha}^\dagger$ are the photon operators,
and $G_\alpha$ is the coupling coefficient for the interaction between the atom and the quantum field in mode $\alpha$. To describe the atom, we use not only the transition operators $\sigma$ and $\sigma^\dagger$ but also the operators $\sigma_{ee}=|e\rangle\langle e|$ and $\sigma_{gg}=|g\rangle\langle g|$ for the populations of the excited and ground states, respectively, and the operator $\sigma_z=\sigma_{ee}-\sigma_{gg}$ for the level population difference.
In Eq.~(\ref{v1}), the notations $\alpha=\mu,\nu$ and $\sum_{\alpha}=\sum_{\mu}+\sum_{\nu}$ stand for the mode index and the mode summation. The index $\mu=(\omega \mathcal{N} f p)$ labels guided modes. Here, $\omega$ is the mode frequency, $\mathcal{N}=\mathrm{HE}_{lm}$, EH$_{lm}$, TE$_{0m}$, or TM$_{0m}$ is the mode type, with $l=1,2,\dots$ and $m=1,2,\dots$ being the azimuthal and radial mode orders, $f=\pm1$ denotes the positive or negative propagation direction along the fiber axis $z$, and $p=\pm1$ for HE and EH modes and $0$ for TE and TM modes is the phase circulation direction index \cite{fiber books}. The longitudinal propagation constant $\beta$ of a guided mode is determined by the fiber eigenvalue equation. Meanwhile, the index $\nu=(\omega \beta l p)$ labels radiation modes. Here, $\beta$ is the longitudinal propagation constant, $l=0,\pm1,\pm2,\dots$ is the mode order, and $p=+,-$ is the mode polarization index. The longitudinal propagation constant $\beta$ of a radiation mode of frequency $\omega$ can vary continuously, from $-kn_2$ to $+kn_2$ (with $k=\omega/c$). The notations $\sum_{\mu}=\sum_{\mathcal{N} fp}\int_0^{\infty}d\omega$ and $\sum_{\nu}=\sum_{lp}\int_0^{\infty}d\omega\int_{-kn_2}^{kn_2}d\beta$ denote the generalized summations over guided and radiation modes, respectively.
The expressions for the coupling coefficients $G_{\alpha}$ with $\alpha=\mu,\nu$ are given as \cite{cesium decay,sponhigh} \begin{eqnarray}\label{v2} G_{\mu}&=&\sqrt{\frac{\omega\beta'}{4\pi\epsilon_0\hbar}}\; (\mathbf{d}\cdot\mathbf{e}^{(\mu)})e^{i(f\beta z+pl\varphi)},\nonumber\\ G_{\nu}&=&\sqrt{\frac{\omega}{4\pi\epsilon_0\hbar}}\; (\mathbf{d}\cdot\mathbf{e}^{(\nu)})e^{i(\beta z+l\varphi)}, \end{eqnarray} where $\mathbf{e}^{(\mu)}=\mathbf{e}^{(\mu)}(\mathbf{r})$ and $\mathbf{e}^{(\nu)}=\mathbf{e}^{(\nu)}(\mathbf{r})$ are the normalized mode functions given in \cite{fiber books, sponhigh}, $\beta'$ is the derivative of $\beta$ with respect to $\omega$, and $\mathbf{d}$ is the dipole matrix element of the atom. In general, the dipole matrix element $\mathbf{d}$ can be a complex vector.
\section{Dynamical equations} \label{sec:equations}
In this section, we derive the dynamical equations for the interaction between the atom and a quantized probe light pulse in a guided mode of the nanofiber. In this derivation, we closely follow the techniques of Refs.~\cite{Domokos2002,Wang2011,Wang2012,GB} and extend them to include the specific characteristics of the nanofiber.
\subsection{Heisenberg-Langevin equation for the atom interacting with a quantized guided light pulse}
In this subsection, we extend the Weisskopf-Wigner theory \cite{Scully} to describe the observables of the internal state of the atom interacting with a quantized guided light pulse of the nanofiber. Let $\mathcal{O}$ be an arbitrary atomic operator. The Heisenberg equation for this operator is \begin{equation}\label{v3} \begin{split} \dot{\mathcal{O}}&=\sum_{\alpha}(G_{\alpha} [\sigma^{\dagger},\mathcal{O}] a_{\alpha} e^{-i(\omega-\omega_0)t}\\ &\quad+G_{\alpha}^{*}a_{\alpha}^{\dagger}[\mathcal{O},\sigma] e^{i(\omega-\omega_0)t}). \end{split} \end{equation} Meanwhile, the Heisenberg equation for the photon annihilation operator $a_{\alpha}$ is $\dot{a}_{\alpha}=G_{\alpha}^*\sigma e^{i(\omega-\omega_0)t}$. Integrating this equation, we obtain \begin{equation}\label{v4} \begin{split} a_{\alpha}(t)&=a_{\alpha}(t_0)+G_{\alpha}^*\int\limits _{t_0}^t dt'\, \sigma(t')e^{i(\omega-\omega_0)t'}, \end{split} \end{equation} where $t_0$ is the initial time.
We assume that the evolution time $t-t_0$ and the characteristic atomic lifetime $\tau_0$ are large compared to the atomic transition period $2\pi/\omega_0$. When the continuum of the guided and radiation modes is regular and broadband around the atomic frequency $\omega_0$, the Markov approximation $\sigma(t')=\sigma(t)$ can be applied to describe the back action of the second term in Eq.~(\ref{v4}) on the atom. Under the condition $t-t_0\gg 2\pi/\omega_0$, we calculate the integral with respect to $t'$ in the limit $t-t_0\to\infty$. We set aside the imaginary part of the integral, which describes the frequency shift. Such a frequency shift is usually small, and we can effectively account for it by incorporating it into the atomic frequency. With the above approximations and procedures, we find $a_{\alpha}(t)=a_{\alpha}(t_0)+\pi G_{\alpha}^*\sigma(t)\delta(\omega-\omega_0)$. We insert this expression into Eq.~(\ref{v3}). Then, we obtain the following Heisenberg-Langevin equation: \begin{equation}\label{v6} \begin{split} \dot{\mathcal{O}}&=\sum_{\alpha}(G_{\alpha} [\sigma^{\dagger},\mathcal{O}] a_{\alpha}(t_0) e^{-i(\omega-\omega_0)t}\\ &\quad+G_{\alpha}^{*}a_{\alpha}^{\dagger}(t_0)[\mathcal{O},\sigma] e^{i(\omega-\omega_0)t})\\ &\quad +\frac{1}{2}\gamma( [\sigma^{\dagger},\mathcal{O}] \sigma +\sigma^{\dagger}[\mathcal{O},\sigma]) +\xi_{\mathcal{O}}. \end{split} \end{equation} Here, the coefficient
$\gamma=2\pi\sum_{\alpha=\mu,\nu}|G_{\alpha}|^2\delta(\omega-\omega_0)$
is the total spontaneous emission rate of the atom and $\xi_{\mathcal{O}}$ is the noise operator. Note that the total spontaneous emission rate $\gamma$ can be decomposed as $\gamma=\gamma_g+\gamma_r$, where $\gamma_g=2\pi\sum_{\mu}|G_{\mu}|^2\delta(\omega-\omega_0)$ and
$\gamma_r=2\pi\sum_{\nu}|G_{\nu}|^2\delta(\omega-\omega_0)$ are the rates of spontaneous emission into guided and radiation modes, respectively.
We assume that the initial field is a quantum pulse light field propagating in a superposition of guided modes $(\omega\mathcal{N}_Lf_Lp_L)$ with the frequency $\omega$ varying in a small interval around a central frequency $\omega_L$. We introduce the label $\mu_L=(\mathcal{N}_Lf_Lp_L)$ for this integral mode. When the bandwidth of the pulse is narrow and the field central frequency $\omega_L$ is close to the atomic transition frequency $\omega_0$, we can use the approximation $\sum_{\alpha}G_{\alpha}a_{\alpha}(t_0)e^{-i(\omega-\omega_0)t}\cong G_{L}\int_0^{\infty} a_{\omega}(t_0)e^{-i(\omega-\omega_0)(t-f_Lz/v_{g_L})}d\omega$. Here, $G_{L}=G_{\omega_0\mathcal{N}_Lf_Lp_L}$, $a_{\omega}=a_{\omega\mathcal{N}_Lf_Lp_L}$, and $v_{g_L}=1/\beta'_L(\omega_0)$ are the coupling coefficient, the photon operator, and the group velocity of the input guided mode, respectively. Then, we can rewrite Eq.~\eqref{v6} as \begin{equation}\label{v8} \begin{split} \dot{\mathcal{O}}&=\sqrt{2\pi}(G_{L} [\sigma^{\dagger},\mathcal{O}] a_{t_d}+G_{L}^{*}a_{t_d}^{\dagger} [\mathcal{O},\sigma])\\ &\quad+\frac{1}{2}\gamma( [\sigma^{\dagger},\mathcal{O}] \sigma +\sigma^{\dagger}[\mathcal{O},\sigma]) +\xi_{\mathcal{O}}, \end{split} \end{equation} where $t_d=t-f_Lz/v_{g_L}$ and \begin{equation}\label{v9} a_t=\frac{1}{\sqrt{2\pi}}\int_0^{\infty} a_{\omega}(t_0) e^{-i(\omega-\omega_0)t}d\omega. \end{equation} We note that Eq.~(\ref{v8}) is in agreement with Eqs.~(13) and (14) of Ref.~\cite{Wang2012}.
In deriving Eq.~(\ref{v8}), we have used the mode function for the quasicircularly polarized mode $\mu_L=(\mathcal{N}_Lf_Lp_L)$ to describe the input field. However, this equation can also be used for the quasilinearly polarized mode $\mu_L=(\mathcal{N}_Lf_L\varphi_{\mathrm{pol}})$, where the angle $\varphi_{\mathrm{pol}}$ characterizes the orientation of the principal polarization axis in the fiber transverse plane $xy$.
For this mode, the coupling coefficient is given as $G_L=(e^{-i\varphi_{\mathrm{pol}}}G_{\omega_0\mathcal{N}_Lf_L,p_L=+}+e^{i\varphi_{\mathrm{pol}}}G_{\omega_0\mathcal{N}_Lf_L,p_L=-})/\sqrt2$. Note that the rate of spontaneous emission from the atom into the input guided mode $\mu_L$ is $\gamma_L=2\pi |G_L|^2$. This rate characterizes the strength of the coupling between the atom and the input field. The coupling efficiency is characterized by the parameter $\eta_L=\gamma_L/\gamma$.
\subsection{Quantized light pulses}
Quantized light pulses are described by the continuous-mode quantization formalism \cite{Loudon}. We briefly summarize below the key points of this description \cite{Loudon,Wang2011,Wang2012}.
A quantized light pulse can be considered as a photon wave packet. The photon wave-packet creation operator is defined as \cite{Loudon} \begin{equation}\label{v10} A^\dagger=\int_{-\infty}^\infty F_t a^\dagger_t dt=\int_{0}^\infty F_\omega a^\dagger_{\omega} d\omega, \end{equation} where $a^\dagger_t$ and $a^\dagger_\omega=a^\dagger_\omega(t_0)$ are the photon creation operators in the time and frequency domains, respectively, and $F_t$ and $F_\omega$ are the temporal shape and spectral distribution of the wave packet. They are related by the Fourier transformation \begin{equation}\label{v11} \begin{split} a_t&=\frac{1}{\sqrt{2\pi}}\int_{0}^\infty e^{-i(\omega-\omega_0) t} a_{\omega} d\omega,\\ F_t&=\frac{1}{\sqrt{2\pi}}\int_{0}^\infty e^{-i(\omega-\omega_0) t} F_\omega d\omega. \end{split} \end{equation} The amplitudes $F_t$ and $F_\omega$ are normalized as
$\int_{-\infty}^\infty |F_t|^2 dt=\int_{0}^\infty |F_\omega|^2 d\omega=1$.
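As an illustration, the normalization of the Fourier pair $(F_t,F_\omega)$ can be checked numerically; by Parseval's theorem, a unit-norm spectral amplitude yields a unit-norm temporal amplitude. The following sketch assumes an illustrative Gaussian spectrum (not a specific experimental pulse):

```python
import numpy as np

# Numerical sketch (illustrative Gaussian spectrum) of the Fourier pair
# (F_t, F_omega): a unit-norm spectral amplitude gives a unit-norm
# temporal amplitude (Parseval's theorem).
T = 1.0                                     # characteristic duration (arb. units)
dw = np.linspace(-30.0, 30.0, 2001)         # omega - omega_0 grid
step_w = dw[1] - dw[0]
F_w = (2 * T**2 / np.pi)**0.25 * np.exp(-(dw * T)**2)   # unit-norm spectrum

t = np.linspace(-8.0, 8.0, 801)
step_t = t[1] - t[0]
# F_t = (2*pi)^{-1/2} * integral of F_omega e^{-i(omega-omega_0)t} d omega
F_t = np.sum(F_w[None, :] * np.exp(-1j * dw[None, :] * t[:, None]),
             axis=1) * step_w / np.sqrt(2 * np.pi)

norm_w = np.sum(np.abs(F_w)**2) * step_w
norm_t = np.sum(np.abs(F_t)**2) * step_t
print(norm_w, norm_t)   # both close to 1
```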
The Fock state of the wave packet with the photon number $n=0,1,2,\dots$ is defined as \cite{Loudon} \begin{equation}\label{v13}
|n\rangle=\frac{1}{\sqrt{n!}} (A^{\dagger})^n |0\rangle. \end{equation}
The Fock state $|n\rangle$ has the properties $a_t|n\rangle=\sqrt{n} F_t|n-1\rangle$, $a_\omega|n\rangle=\sqrt{n} F_\omega|n-1\rangle$,
$A|n\rangle=\sqrt{n}|n-1\rangle$, and $A^\dagger|n\rangle=\sqrt{n+1}|n+1\rangle$.
The coherent state of the wave packet with the complex amplitude $\alpha$ is defined as \cite{Loudon} \begin{equation}\label{v16}
|\alpha\rangle=e^{-|\alpha|^2/2}\sum_n \frac{\alpha^n}{\sqrt{n!}}|n\rangle. \end{equation}
It has the properties $a_t|\alpha\rangle=\alpha F_t|\alpha\rangle$, $a_\omega|\alpha\rangle=\alpha F_\omega|\alpha\rangle$, and $A|\alpha\rangle=\alpha|\alpha\rangle$.
In the continuous-mode quantization formalism, the photon number operator is defined as $\hat{n}=\int_{-\infty}^\infty a^\dagger_t a_t dt=\int_{0}^\infty a^\dagger_{\omega} a_{\omega} d\omega$ \cite{Loudon}. We have $\hat{n}|n\rangle=n|n\rangle$ and
$\langle\alpha|\hat{n}|\alpha\rangle=|\alpha|^2$.
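Within the single wave-packet mode, the operator $A$ acts as a standard annihilation operator, so the listed properties can be verified in a truncated matrix representation. The dimension $D$ and amplitude $\alpha$ below are arbitrary illustrative choices:

```python
import numpy as np

# Truncated-matrix sketch of the wave-packet operator algebra:
# A|n> = sqrt(n)|n-1>, A|alpha> = alpha|alpha>, <alpha|n|alpha> = |alpha|^2.
# D (truncation) and alpha are illustrative values.
D = 40
A = np.diag(np.sqrt(np.arange(1.0, D)), k=1)   # A|n> = sqrt(n)|n-1>
n_op = A.conj().T @ A                          # photon number operator

alpha = 0.8
coh = np.zeros(D)
coh[0] = np.exp(-abs(alpha)**2 / 2)
for k in range(1, D):                          # alpha^k / sqrt(k!) recursion
    coh[k] = coh[k - 1] * alpha / np.sqrt(k)

print(np.linalg.norm(A @ coh - alpha * coh))   # ~0: eigenstate property
print(coh @ n_op @ coh)                        # ~|alpha|^2 = 0.64
```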
\subsection{Interaction of the atom with a Fock- or coherent-state pulse}
In this subsection, we derive the dynamical equations for the atom interacting with a Fock- or coherent-state light pulse.
First, we consider the interaction of the atom with a Fock-state pulse of $N$ photons.
We assume that the atom is initially in the ground state $|g\rangle$. We introduce the notation $|g,n,0\rangle$ for the state where the atom is in the ground state with $n$ photons in the pulse field and no photons in the other modes.
We also introduce the notation $\langle\mathcal{O}\rangle_{nn'}=\langle g,n,0|\mathcal{O}|g,n',0\rangle$. Without loss of generality, we assume that the axial coordinate of the atom is $z=0$. In this case, we have $t_d=t$. Then, Eq.~(\ref{v8}) yields \cite{Wang2012} \begin{eqnarray}\label{v19} \langle\dot{\sigma}_z\rangle_{nn'}&=&-\gamma(\langle\sigma_z\rangle_{nn'}+\delta_{nn'}) -2\sqrt{2\pi n'}G_{L} F_t \langle\sigma^\dagger\rangle_{n,n'-1}\nonumber\\ &&\mbox{} -2\sqrt{2\pi n}G_{L}^{*}F_t^*\langle\sigma\rangle_{n-1,n'},\nonumber\\ \langle\dot{\sigma}\rangle_{nn'}&=&-\frac{\gamma}{2} \langle\sigma\rangle_{nn'} +\sqrt{2\pi n'} G_{L}F_t\langle\sigma_z\rangle_{n,n'-1}, \end{eqnarray} where $n$ and $n'$ run from 0 to $N$. The initial conditions are $\langle\sigma_z(t_0)\rangle_{nn'}=-\delta_{nn'}$ and $\langle\sigma(t_0)\rangle_{nn'}=0$. It follows from these initial conditions and Eqs.~(\ref{v19}) that the only nonzero matrix elements of the atomic operators are $\langle\sigma_z\rangle_{nn}$, $\langle\sigma^\dagger\rangle_{n,n-1}$, and $\langle\sigma\rangle_{n-1,n}$. The time dependencies of these matrix elements are governed by the coupled equations \cite{Wang2012} \begin{eqnarray}\label{v20} \langle\dot{\sigma}_z\rangle_{nn}&=&-\gamma(\langle\sigma_z\rangle_{nn}+1) -2\sqrt{2\pi n}G_{L} F_t \langle\sigma^\dagger\rangle_{n,n-1}\nonumber\\ &&\mbox{} -2\sqrt{2\pi n}G_{L}^{*}F_t^*\langle\sigma\rangle_{n-1,n},\nonumber\\ \langle\dot{\sigma}\rangle_{n-1,n}&=&-\frac{\gamma}{2} \langle\sigma\rangle_{n-1,n} +\sqrt{2\pi n} G_{L}F_t\langle\sigma_z\rangle_{n-1,n-1},\qquad \end{eqnarray} where $n$ runs from 1 to $N$. The corresponding initial conditions are $\langle\sigma_z(t_0)\rangle_{nn}=-1$ and $\langle\sigma(t_0)\rangle_{n,n+1}=0$. Note that $\langle\sigma_z(t)\rangle_{00}=-1$ for any $t\ge t_0$.
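The hierarchy of matrix-element equations~(\ref{v20}) is readily integrated numerically. The following sketch assumes a resonant Gaussian pulse, a real coupling coefficient, and an illustrative coupling efficiency $\eta_L$; it propagates $\langle\sigma_z\rangle_{nn}$ and $\langle\sigma\rangle_{n-1,n}$ for an $N$-photon Fock state with a fixed-step Runge-Kutta scheme:

```python
import numpy as np

# Illustrative integration of the Fock-state hierarchy, Eqs. (v20).
# gamma, eta_L, N, and the Gaussian pulse are assumed example values.
gamma = 1.0                        # total spontaneous emission rate
eta_L = 0.1                        # assumed coupling efficiency gamma_L/gamma
G_L = np.sqrt(eta_L * gamma / (2 * np.pi))   # real coupling coefficient
N = 2                              # photon number of the Fock state
T = 1.0 / gamma                    # pulse duration
F = lambda t: (2 * np.pi * T**2)**-0.25 * np.exp(-t**2 / (4 * T**2))

def rhs(t, y):
    # y packs s_z[n] = <sigma_z>_{nn} (n = 0..N) and
    # s[n-1] = <sigma>_{n-1,n} (n = 1..N) into one complex vector
    sz, s = y[:N + 1], y[N + 1:]
    dsz = np.zeros(N + 1, complex)
    ds = np.zeros(N, complex)
    dsz[0] = -gamma * (sz[0] + 1)          # <sigma_z>_{00} stays at -1
    for n in range(1, N + 1):
        drive = np.sqrt(2 * np.pi * n) * G_L * F(t)
        dsz[n] = (-gamma * (sz[n] + 1) - 2 * drive * np.conj(s[n - 1])
                  - 2 * np.conj(drive) * s[n - 1])
        ds[n - 1] = -0.5 * gamma * s[n - 1] + drive * sz[n - 1]
    return np.concatenate([dsz, ds])

# fixed-step 4th-order Runge-Kutta from -8T to 8T
t, h = -8 * T, 16 * T / 4000
y = np.concatenate([-np.ones(N + 1, complex), np.zeros(N, complex)])
Pmax = 0.0
for _ in range(4000):
    k1 = rhs(t, y); k2 = rhs(t + h / 2, y + h / 2 * k1)
    k3 = rhs(t + h / 2, y + h / 2 * k2); k4 = rhs(t + h, y + h * k3)
    y, t = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4), t + h
    Pmax = max(Pmax, 0.5 * (1 + y[N].real))   # <sigma_ee>_{NN}
print(Pmax)   # peak excitation probability for the N-photon pulse
```

Note that $\langle\sigma_z\rangle_{00}$ indeed stays at $-1$ throughout the evolution.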
We now consider the interaction of the atom with a pulse in a coherent state $\alpha$. We introduce the notations
$\langle\sigma_z\rangle=\langle g,\alpha,0|\sigma_z|g,\alpha,0\rangle$ and
$\langle\sigma\rangle=\langle g,\alpha,0|\sigma|g,\alpha,0\rangle$. Then, Eq.~(\ref{v8}) yields \cite{Wang2011,Wang2012} \begin{eqnarray}\label{v22} \langle\dot{\sigma}_z\rangle&=&-\gamma (\langle\sigma_z\rangle+1) -2\sqrt{2\pi}\alpha G_{L} F_t \langle\sigma^\dagger\rangle \nonumber\\&&\mbox{} -2\sqrt{2\pi}\alpha^* G_{L}^{*}F_t^*\langle\sigma\rangle,\nonumber\\ \langle\dot{\sigma}\rangle&=&-\frac{\gamma}{2} \langle\sigma\rangle +\sqrt{2\pi}\alpha G_{L}F_t\langle\sigma_z\rangle. \end{eqnarray} Note that Eqs.~(\ref{v22}) are the same as the equations for a two-level atom interacting with a classical driving field.
\subsection{Interaction of the atom with a single-photon Fock-state pulse}
In this subsection, we consider the case of a single-photon Fock-state pulse, that is, the case where the pulse is initially in the Fock state $|N\rangle$ with the photon number $N=1$. In this case, Eqs.~(\ref{v20}) reduce to \cite{Wang2011,Wang2012} \begin{eqnarray}\label{v23} \dot{P}&=&-\gamma P -\sqrt{2\pi}G_{L} F_t Q^* -\sqrt{2\pi}G_{L}^{*}F_t^*Q,\nonumber\\ \dot{Q}&=&-\frac{\gamma}{2} Q -\sqrt{2\pi} G_{L}F_t, \end{eqnarray}
where $P=(1+\langle g,1,0|\sigma_z|g,1,0\rangle)/2$ and
$Q=\langle g,0,0|\sigma|g,1,0\rangle$, with the initial conditions $P(t_0)=0$ and $Q(t_0)=0$. The quantities $P$ and $Q$ are the excitation probability and the induced dipole amplitude, respectively, of the atom. The solution of Eqs.~(\ref{v23}) for $t\geq t_0$ reads \cite{GB} \begin{equation}\label{v25a}
P=2\pi|G_{L}|^2\left|\int_{t_0}^t e^{-\gamma(t-t')/2} F_{t'}dt' \right|^2 \end{equation} and \begin{equation}\label{v25b} Q=-\sqrt{2\pi} G_{L}\int_{t_0}^t e^{-\gamma (t-t')/2} F_{t'} dt'. \end{equation}
It is clear that $P=|Q|^2$. Note that, in the case of a coherent-state pulse with mean photon number $\bar{N}=|\alpha|^2=1$, Eqs.~(\ref{v22}) do not reduce to Eqs.~(\ref{v23}). The two sets of equations agree with each other only in the case of $\langle\sigma_z\rangle\simeq-1$, that is, the case of weak atomic excitation.
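The near-agreement at weak excitation can be checked by integrating both sets of equations side by side. The sketch below uses assumed illustrative parameters (small $\eta_L$, resonant Gaussian pulse) so that $\langle\sigma_z\rangle\simeq-1$ holds:

```python
import numpy as np

# Illustrative comparison: coherent-state equations (v22) with alpha = 1
# versus single-photon equations (v23).  Parameters are assumed values
# chosen to put the atom in the weak-excitation regime.
gamma, T, eta_L = 1.0, 1.0, 0.02
G_L = np.sqrt(eta_L * gamma / (2 * np.pi))
F = lambda t: (2 * np.pi * T**2)**-0.25 * np.exp(-t**2 / (4 * T**2))
g = lambda t: np.sqrt(2 * np.pi) * G_L * F(t)     # resonant drive amplitude

def rk4(rhs, y0, t0, t1, steps):
    h, y, t = (t1 - t0) / steps, np.array(y0, complex), t0
    for _ in range(steps):
        k1 = rhs(t, y); k2 = rhs(t + h / 2, y + h / 2 * k1)
        k3 = rhs(t + h / 2, y + h / 2 * k2); k4 = rhs(t + h, y + h * k3)
        y, t = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4), t + h
    return y

# coherent state, alpha = 1: y = (<sigma_z>, <sigma>), Eq. (v22)
coh = lambda t, y: np.array([
    -gamma * (y[0] + 1) - 2 * g(t) * (np.conj(y[1]) + y[1]),
    -gamma / 2 * y[1] + g(t) * y[0]])
# single photon: y = (P, Q), Eq. (v23)
fock = lambda t, y: np.array([
    -gamma * y[0] - g(t) * (np.conj(y[1]) + y[1]),
    -gamma / 2 * y[1] - g(t)])

sz, _ = rk4(coh, [-1.0, 0.0], -6.0, 1.0, 7000)
P, _ = rk4(fock, [0.0, 0.0], -6.0, 1.0, 7000)
print((1 + sz.real) / 2, P.real)   # nearly equal at weak excitation
```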
We note that the temporal shape of the single-photon probe pulse can be arbitrary and is described by the profile function $F_t$. It has been shown in Refs.~\cite{Wang2011,Wang2012,GB} that the excitation of the atom depends on the temporal profile of the probe pulse. According to these references, the maximal value of the excitation probability $P$ is $P_{\mathrm{max}}=\eta_L=\gamma_L/\gamma=2\pi |G_L|^2/\gamma$. This value can be achieved at $t=0$ for a rising exponential resonant pulse, $F_t=T^{-1/2}e^{t/2T}$ for $t\leq0$ and 0 for $t>0$, with the time constant $T=1/\gamma$. It is worth mentioning here that the techniques for generation of single-photon pulses of various shapes have been demonstrated \cite{Du2012,Kurtsiefer2013,Martinis2014,Du2014,Kurtsiefer2016}. Below, we extend the treatment of Ref.~\cite{GB} and present the explicit analytical expressions for $P$ and $Q$ in the particular cases where the shape of the single-photon probe pulse is Gaussian, exponentially rising, or exponentially decaying, with a possible detuning $\Delta$.
\subsubsection{Gaussian pulse}
First, we consider the case of a Gaussian single-photon Fock-state pulse, where the pulse form function is $F_t=(2\pi T^2)^{-1/4}e^{-t^2/4T^2-i\Delta t}$. Here, $T$ is the characteristic pulse duration and $\Delta=\omega_L-\omega_0$ is the detuning of the field central frequency $\omega_L$ from the atomic transition frequency $\omega_0$. In this case, we find \cite{GB} \begin{eqnarray}\label{v26} P&=& (\pi T^2/2)^{1/2}
2\pi|G_{L}|^2e^{-\gamma t+(\gamma^2-4\Delta^2)T^2/2} \nonumber\\ &&\mbox{}\times
\Big|1+\mathrm{erf}\Big(\frac{t}{2T}-\frac{\gamma T}{2}+i\Delta T\Big)\Big|^2,\nonumber\\
Q&=& -(\pi T^2/2)^{1/4}\sqrt{2\pi} G_{L}e^{-\gamma t/2+(\gamma-2i\Delta)^2T^2/4}\nonumber\\ &&\mbox{}\times \Big[1+\mathrm{erf}\Big(\frac{t}{2T}-\frac{\gamma T}{2}+i\Delta T\Big)\Big]. \end{eqnarray}
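The closed form above can be cross-checked against direct quadrature of the convolution in Eq.~(\ref{v25b}). The sketch below uses illustrative values of $\gamma$, $T$, $\Delta$, and a real $G_L$:

```python
import numpy as np
from scipy.special import erf   # supports complex arguments

# Cross-check sketch: closed-form dipole amplitude Q of Eq. (v26) for a
# detuned Gaussian pulse versus direct numerical quadrature of the
# convolution in Eq. (v25b).  gamma, T, Delta, G_L are illustrative.
gamma, T, Delta, G_L = 1.0, 0.8, 0.5, 0.1
F = lambda t: (2 * np.pi * T**2)**-0.25 * np.exp(-t**2 / (4 * T**2) - 1j * Delta * t)

def Q_closed(t):
    pref = -(np.pi * T**2 / 2)**0.25 * np.sqrt(2 * np.pi) * G_L
    return (pref * np.exp(-gamma * t / 2 + (gamma - 2j * Delta)**2 * T**2 / 4)
            * (1 + erf(t / (2 * T) - gamma * T / 2 + 1j * Delta * T)))

def Q_numeric(t, t0=-25.0, n=200001):
    # trapezoid rule for the convolution integral of Eq. (v25b)
    tp = np.linspace(t0, t, n)
    f = np.exp(-gamma * (t - tp) / 2) * F(tp)
    h = tp[1] - tp[0]
    return -np.sqrt(2 * np.pi) * G_L * (np.sum(f) - 0.5 * (f[0] + f[-1])) * h

errs = [abs(Q_closed(t) - Q_numeric(t)) for t in (0.0, 1.0, 3.0)]
print(errs)   # all small
```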
\subsubsection{Rising exponential pulse}
Next, we consider the case of a rising exponential single-photon Fock-state pulse, where the pulse form function is $F_t=T^{-1/2}e^{t/2T-i\Delta t}$ for $t\leq0$ and 0 for $t>0$. In this case, we find \cite{GB} \begin{eqnarray}\label{v27} P&=&
\frac{8\pi T}{(1+\gamma T)^2+4\Delta^2 T^2} |G_{L}|^2 e^{t/T},\nonumber\\ Q&=& -\frac{2\sqrt{T}}{1+\gamma T-2i\Delta T}\sqrt{2\pi} G_{L}e^{t/2T-i\Delta t}\qquad \end{eqnarray} for $t\leq0$, and \begin{eqnarray}\label{v28} P&=&
\frac{8\pi T}{(1+\gamma T)^2+4\Delta^2 T^2} |G_{L}|^2e^{-\gamma t},\nonumber\\ Q&=& -\frac{2\sqrt{T}}{1+\gamma T-2i\Delta T}\sqrt{2\pi} G_{L}e^{-\gamma t/2}\qquad \end{eqnarray}
for $t>0$. It is clear that the maximal value of the excitation probability is $P_{\mathrm{max}}=2\pi |G_L|^2/\gamma=\gamma_L/\gamma$ and can be achieved at $t=0$ for a rising exponential resonant pulse with $T=1/\gamma$ and $\Delta=0$.
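The statement that $P_{\mathrm{max}}=\gamma_L/\gamma$ is reached at $t=0$ for $T=1/\gamma$ and $\Delta=0$ follows directly from Eqs.~(\ref{v27}) and (\ref{v28}) and can be confirmed numerically; the coupling efficiency below is an assumed example value:

```python
import numpy as np

# Quick check (illustrative coupling efficiency) that the rising
# exponential result, Eqs. (v27)-(v28), peaks at t = 0 with
# P_max = gamma_L/gamma when T = 1/gamma and Delta = 0.
gamma, eta_L = 1.0, 0.13                 # eta_L = gamma_L/gamma (assumed)
GL2 = eta_L * gamma / (2 * np.pi)        # |G_L|^2
T, Delta = 1.0 / gamma, 0.0

pref = 8 * np.pi * T / ((1 + gamma * T)**2 + 4 * Delta**2 * T**2) * GL2
t = np.linspace(-5 * T, 5 * T, 20001)    # grid contains t = 0
P = pref * np.where(t <= 0, np.exp(t / T), np.exp(-gamma * t))
print(P.max(), t[np.argmax(P)])          # maximum eta_L, attained at t = 0
```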
\subsubsection{Decaying exponential pulse}
Finally, we consider the case of a decaying exponential single-photon Fock-state pulse, where the pulse form function is $F_t=T^{-1/2}e^{-t/2T-i\Delta t}$ for $t\geq0$ and 0 for $t<0$. In this case, we find \cite{GB} \begin{eqnarray}\label{v29} P&=&
\frac{8\pi T}{(1-\gamma T)^2+4\Delta^2 T^2} |G_{L}|^2 (e^{-t/T}+e^{-\gamma t}\nonumber\\ &&\mbox{} -2e^{-t/2T-\gamma t/2}\cos\Delta t),\nonumber\\ Q&=&\frac{2\sqrt{T}}{1-\gamma T+2i\Delta T}\sqrt{2\pi} G_{L}(e^{-t/2T-i\Delta t}-e^{-\gamma t/2})\qquad \end{eqnarray} for $t\geq0$, and $P=Q=0$ for $t<0$.
We note that, in the case where $\Delta=0$, Eqs.~(\ref{v26})--(\ref{v29}) reduce to the results of Ref.~\cite{GB}.
\subsection{Photon transmission and reflection fluxes}
In this subsection, we derive the expressions for the fluxes of transmitted and reflected photons.
In the framework of the continuous-mode quantization formalism, the flux of photons in the guided modes propagating in the direction $f$ through the fiber cross-sectional plane at a position $z$ is given by \cite{Loudon} \begin{equation}\label{v30} I_f(z,t)=\sum_{\mathcal{N}p}\langle A_{\mathcal{N}fp}^\dagger(z,t) A_{\mathcal{N}fp}(z,t)\rangle, \end{equation} where \begin{equation}\label{v31} A_{\mathcal{N}fp}(z,t)=\frac{1}{\sqrt{2\pi}} \int_0^{\infty}d\omega\, a_{\omega\mathcal{N}fp}(t) e^{-i(\omega t-f\beta z)} \end{equation} is the Fourier-transformed photon operator.
Let the atom be located at a point $\mathbf{R}_a=(r_a,\varphi_a,z_a)$. We insert Eq.~(\ref{v4}) into Eq.~(\ref{v31}). Under the condition of narrow bandwidth, we use the approximations $G_{\omega\mathcal{N} fp}(\mathbf{R}_a)=G_{\omega_0\mathcal{N} fp}(\mathbf{R}_a)\exp[if\beta'_0(\omega-\omega_0)z_a]$ and $\beta=\beta_0+\beta'_0(\omega-\omega_0)$ to calculate the integral with respect to $\omega$ in expression (\ref{v31}). In addition, we extend the lower bound of the frequency integration to $-\infty$. This procedure artificially restores the effects of the missing counter-rotating terms in the Hamiltonian \cite{Loudon}. As a result, we obtain \begin{eqnarray}\label{v32} A_{\mathcal{N}fp}(z,t)&=&A_{\mathcal{N}fp}^{(\mathrm{in})}(z,t) +\sqrt{2\pi}G^*_{\omega_0\mathcal{N}fp} e^{-i(\omega_0t-f\beta_0z)} \nonumber\\&&\mbox{}\times
\sigma(t-|z-z_a|/v_g)\Theta[f(z-z_a)] \nonumber\\&&\mbox{}\times
\Theta(t-|z-z_a|/v_g -t_0), \end{eqnarray} where \begin{equation}\label{v33} A_{\mathcal{N}fp}^{(\mathrm{in})}(z,t)=\frac{1}{\sqrt{2\pi}}\int_0^{\infty}d\omega\, a_{\omega\mathcal{N} fp}(t_0) e^{-i(\omega t-f\beta z)} \end{equation} is the injected field. In Eq.~(\ref{v32}), the coupling coefficient $G_{\omega_0\mathcal{N}fp}$ is evaluated at the atomic transition frequency $\omega_0$ and the atomic position $\mathbf{R}_a$. The notation $v_g=1/(d\beta/d\omega)$ stands for the group velocity and is evaluated at the atomic transition frequency $\omega_0$. The notation $\Theta(x)$ stands for the Heaviside step function, equal to zero for negative argument and one for positive argument.
We study the case where the input guided pulse is prepared in a Fock state of $N$ photons, propagates in a direction $f_L=\pm$ along the fiber axis, and has a pulse shape $F_t$. The flux of transmitted photons at a position $z$ satisfying the condition $f_L(z-z_a)>0$ is given by $I_T(z,t)=I_{f=f_L}(z,t)$. When we insert Eq.~\eqref{v32} into Eq.~\eqref{v30} and take $f=f_L$ and $f_L(z-z_a)>0$, we obtain \begin{eqnarray}\label{v34}
\lefteqn{I_{T}(z,t)=N|F_{t-f_Lz/v_{g_L}}|^2} \nonumber\\&&\mbox{}
+\sum_{\mathcal{N}p}\gamma_{\mathcal{N}f_Lp}\langle\sigma_{ee}(t-|z-z_a|/v_g)\rangle_{NN} \nonumber\\&&\mbox{} +\sqrt{2\pi N}G_{L}F_{t-f_Lz/v_{g_L}}
\langle\sigma^\dagger(t-|z-z_a|/v_{g_L})\rangle_{N,N-1} \nonumber\\&&\mbox{} +\sqrt{2\pi N}G^*_{L}F^*_{t-f_Lz/v_{g_L}}
\langle\sigma(t-|z-z_a|/v_{g_L})\rangle_{N-1,N},\qquad \end{eqnarray} where
$\gamma_{\mathcal{N}fp}=2\pi |G_{\omega_0\mathcal{N}fp}|^2$ is the rate of spontaneous emission into the guided mode $\mathcal{N}fp$.
Meanwhile, the flux of reflected photons at a position $z$ satisfying the condition $f_L(z-z_a)<0$ is given by $I_R(z,t)=I_{f=-f_L}(z,t)$. When we insert Eq.~\eqref{v32} into Eq.~\eqref{v30} and take $f=-f_L$ and $f_L(z-z_a)<0$, we obtain \begin{eqnarray}\label{v35}
I_R(z,t)&=&\sum_{\mathcal{N}p}\gamma_{\mathcal{N},-f_L,p}
\langle\sigma_{ee}(t-|z-z_a|/v_g)\rangle_{NN}. \end{eqnarray}
Without loss of generality, we assume that the atom is located at a point with the axial coordinate $z_a=0$. In addition, we assume that, in Eqs.~(\ref{v34}) and (\ref{v35}), the group delay $|z|/v_g$ for all guided modes $\mathcal{N}fp$ is small compared to the characteristic pulse duration $T$. Then, Eqs.~(\ref{v34}) and (\ref{v35}) reduce to \begin{eqnarray}\label{v36}
I_T&=&N|F_t|^2+\gamma_{g}^{(\mathrm{fw})}\langle\sigma_{ee}\rangle_{NN} \nonumber\\&&\mbox{} +\sqrt{2\pi N}(G_{L} F_t \langle\sigma^\dagger\rangle_{N,N-1} +G_{L}^{*} F_t^* \langle\sigma\rangle_{N-1,N})\qquad \end{eqnarray} and \begin{equation}\label{v37} I_{R}=\gamma_{g}^{(\mathrm{bw})}\langle\sigma_{ee}\rangle_{NN}, \end{equation} where $\gamma_{g}^{(\mathrm{fw})}=\gamma_g^{(f_L)}$ and $\gamma_{g}^{(\mathrm{bw})}=\gamma_g^{(-f_L)}$ are the rates of spontaneous emission into guided modes in the forward direction $f=f_L$ and the backward direction $f=-f_L$, respectively. Here, $\gamma_g^{(f)}=\sum_{\mathcal{N}p}\gamma_{\mathcal{N}fp}$ is the rate of spontaneous emission into guided modes with the propagation direction $f$.
The expression on the right-hand side of Eq.~(\ref{v36}) has three terms. The first term, $N|F_t|^2$, is the flux of the incident field. The second term, $\gamma_{g}^{(\mathrm{fw})}\langle\sigma_{ee}\rangle_{NN}$, is the rate of scattering into guided modes in the forward direction $f=f_L$. The last term, proportional to $\sqrt{N}(G_{L} F_t \langle\sigma^\dagger\rangle_{N,N-1}+\mathrm{c.c.})$, describes the effect of the interference between the incident and forward scattered fields. Meanwhile, the expression on the right-hand side of Eq.~(\ref{v37}) is the rate of scattering into guided modes in the backward direction $f=-f_L$. According to Eq.~(\ref{v37}), the photon reflection flux $I_R$ and the atomic excitation probability $\langle\sigma_{ee}\rangle_{NN}$ are proportional to each other. Consequently, the time dependencies of $I_R$ and $\langle\sigma_{ee}\rangle_{NN}$ have the same shape.
We introduce the notation $I_{\mathrm{rad}}=\gamma_r \langle\sigma_{ee}\rangle_{NN}$ for the rate of scattering into radiation modes, where $\gamma_r$ is the rate of spontaneous emission into radiation modes. We find the relation $I_T+I_R+I_{\mathrm{rad}}+\langle\dot{\sigma}_{ee}\rangle_{NN}=N|F_t|^2$, in agreement with the energy conservation law.
We introduce the notations $P_T=\int_{t_0}^{\infty}I_T(t)dt$ and $P_R=\int_{t_0}^{\infty}I_R(t)dt$ for the mean numbers of transmitted and reflected photons, respectively. We also introduce the notation $P_{\mathrm{rad}}=\int_{t_0}^{\infty}I_{\mathrm{rad}}(t)dt$ for the mean number of photons scattered into radiation modes. We find $P_T+P_R+P_{\mathrm{rad}}=N$. The extinction of the pulse is $P_{\mathrm{ext}}=N-P_T=P_R+P_{\mathrm{rad}}$.
In the case of single-photon pulses ($N=1$), we can rewrite Eqs.~(\ref{v36}) and (\ref{v37}) in the form \begin{equation}\label{v36a}
I_T=|F_t|^2+\gamma_{g}^{(\mathrm{fw})}P+\sqrt{2\pi}(G_{L} F_t Q^*+G_{L}^{*} F_t^* Q) \end{equation} and \begin{equation}\label{v37c} I_{R}=\gamma_{g}^{(\mathrm{bw})}P. \end{equation}
In addition, we find $I_T+I_R+I_{\mathrm{rad}}+\dot{P}=|F_t|^2$ and $P_T+P_R+P_{\mathrm{rad}}=1$.
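The single-photon conservation relation $P_T+P_R+P_{\mathrm{rad}}=1$ can be verified numerically by combining the closed-form resonant Gaussian solution of Eq.~(\ref{v26}) with the flux expressions (\ref{v36a}) and (\ref{v37c}); the emission rates used below are assumed example values:

```python
import numpy as np
from scipy.special import erf

# Illustrative check of single-photon number conservation,
# P_T + P_R + P_rad = 1, for a resonant Gaussian pulse, combining the
# closed-form Q of Eq. (v26) at Delta = 0 with the flux expressions
# (v36a) and (v37c).  The emission rates are assumed values.
gamma = 1.0
gam_fw, gam_bw = 0.10, 0.06          # guided emission rates (fw/bw), assumed
gam_rad = gamma - gam_fw - gam_bw    # radiation-mode emission rate
G_L = np.sqrt(gam_fw / (2 * np.pi))  # take gamma_L = gamma_g^(fw)
T = 1.0 / gamma

t = np.linspace(-12.0, 40.0, 400001)
dt = t[1] - t[0]
F = (2 * np.pi * T**2)**-0.25 * np.exp(-t**2 / (4 * T**2))
Q = (-(np.pi * T**2 / 2)**0.25 * np.sqrt(2 * np.pi) * G_L
     * np.exp(-gamma * t / 2 + gamma**2 * T**2 / 4)
     * (1 + erf(t / (2 * T) - gamma * T / 2)))
P = Q**2                             # P = |Q|^2 (Q is real on resonance)

I_T = F**2 + gam_fw * P + np.sqrt(2 * np.pi) * G_L * 2 * F * Q
I_R = gam_bw * P
I_rad = gam_rad * P
P_T, P_R, P_rad = np.sum(I_T) * dt, np.sum(I_R) * dt, np.sum(I_rad) * dt
print(P_T, P_R, P_rad, P_T + P_R + P_rad)   # sum close to 1
```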
Equations (\ref{v36a}) and (\ref{v37c}) are in agreement with the results of Ref.~\cite{Domokos2002} for single-photon light pulses. With the help of the relation $P=|Q|^2$, valid for the case of single-photon pulses, we can rewrite Eq.~(\ref{v36a}) as $I_T=(\gamma_{g}^{(\mathrm{fw})}-\gamma_L)P+|F_t+\sqrt{2\pi}G_L^*Q|^2$.
In the particular case where $\gamma_{g}^{(\mathrm{fw})}=\gamma_L$, we obtain $I_T=|F_t+\sqrt{2\pi}G_L^*Q|^2$, in agreement with the results of Ref.~\cite{Kurtsiefer2016}.
In the case of single-photon pulses, $P_T$ and $P_R$ are the probabilities of transmission and reflection, respectively, $P_{\mathrm{rad}}$ is the probability of scattering into radiation modes, and $P_{\mathrm{ext}}=1-P_T=P_R+P_{\mathrm{rad}}$ is the extinction probability. When we integrate the atomic excitation probability $P(t)$ over the time $t$ for the whole interaction process, we obtain the quantity $\tau_{e}=\int_{t_0}^{\infty} P(t) dt$, which can be called the effective excitation time of the atom. With the help of Eqs.~(\ref{v27})--(\ref{v29}), we can show that single-photon rising and decaying exponential pulses with the same pulse duration $T$ produce the same effective excitation time \begin{equation}\label{v37b2} \tau_e=4T\frac{\gamma_L}{\gamma}\frac{1+\gamma T}{(1+\gamma T)^2+4\Delta^2 T^2}. \end{equation} Hence, the reflection probability $P_R=\gamma_{g}^{(\mathrm{bw})}\tau_{e}$, the probability of emission into radiation modes $P_{\mathrm{rad}}=\gamma_r\tau_{e}$, the extinction probability $P_{\mathrm{ext}}=(\gamma_{g}^{(\mathrm{bw})}+\gamma_r)\tau_{e}$, and the transmission probability $P_T=1-(\gamma_{g}^{(\mathrm{bw})}+\gamma_r)\tau_{e}$ do not depend on whether the pulse is exponentially rising or decaying \cite{Kurtsiefer2016}.
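The equality of the effective excitation times for rising and decaying exponential pulses of the same duration, Eq.~(\ref{v37b2}), can be checked by numerically integrating $P(t)$ from Eqs.~(\ref{v27})--(\ref{v29}); all parameter values below are illustrative:

```python
import numpy as np

# Illustrative check: rising and decaying exponential single-photon
# pulses of equal duration T give the same effective excitation time
# tau_e, Eq. (v37b2), despite very different P(t).
gamma, T, Delta, eta_L = 1.0, 0.7, 0.4, 0.1
GL2 = eta_L * gamma / (2 * np.pi)      # |G_L|^2, so gamma_L = eta_L*gamma

t = np.linspace(-60.0, 80.0, 1400001)
dt = t[1] - t[0]

d_rise = (1 + gamma * T)**2 + 4 * Delta**2 * T**2
P_rise = 8 * np.pi * T / d_rise * GL2 * np.where(
    t <= 0, np.exp(np.minimum(t, 0) / T), np.exp(-gamma * np.maximum(t, 0)))

d_dec = (1 - gamma * T)**2 + 4 * Delta**2 * T**2
tp = np.maximum(t, 0)
P_dec = np.where(t >= 0, 8 * np.pi * T / d_dec * GL2 *
                 (np.exp(-tp / T) + np.exp(-gamma * tp)
                  - 2 * np.exp(-tp / (2 * T) - gamma * tp / 2)
                  * np.cos(Delta * tp)), 0.0)

tau_rise = np.sum(P_rise) * dt
tau_dec = np.sum(P_dec) * dt
tau_formula = 4 * T * eta_L * (1 + gamma * T) / d_rise   # Eq. (v37b2)
print(tau_rise, tau_dec, tau_formula)   # all nearly equal
```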
We note that, in the case where the injected pulse is prepared in a coherent state $\alpha$, we have \begin{eqnarray}\label{v38}
I_T&=&|\alpha|^2|F_t|^2+\gamma_g^{(\mathrm{fw})}\langle\sigma_{ee}\rangle
+\sqrt{2\pi} (\alpha G_{L} F_t \langle\sigma^\dagger\rangle
\nonumber\\&&\mbox{}
+\alpha^* G_{L}^{*} F_t^* \langle\sigma\rangle) \end{eqnarray} and \begin{equation}\label{v39} I_{R}=\gamma_{g}^{(\mathrm{bw})}\langle\sigma_{ee}\rangle. \end{equation}
It is worth mentioning that, by using appropriate expressions for the coupling coefficient $G_L$, we can apply Eqs.~(\ref{v36}) and (\ref{v37}) to not only quasicircularly polarized modes but also quasilinearly polarized modes.
\subsection{Chiral coupling between an atom and a quasilinearly polarized hybrid guided field}
In this subsection, we study the dependence of the interaction between the atom and the guided probe pulse on the propagation direction of the pulse.
We assume that the probe pulse is prepared in a quasilinearly polarized hybrid guided mode $\mu_L$ of the nanofiber. Quasilinearly polarized hybrid modes are linear superpositions of counterclockwise and clockwise quasicircularly polarized hybrid modes. The amplitude of the guided field in a quasilinearly polarized hybrid mode can be written in the form \cite{highorder} \begin{eqnarray}\label{c12} \mathbf{e}^{(\mu_L)}&=&\sqrt2[e_r\cos (l\varphi-\varphi_{\mathrm{pol}})\,\hat{\mathbf{r}} +i e_\varphi\sin (l\varphi-\varphi_{\mathrm{pol}})\,\hat{\boldsymbol{\varphi}} \nonumber\\&&\mbox{} +f_Le_z\cos (l\varphi-\varphi_{\mathrm{pol}})\,\hat{\mathbf{z}}], \end{eqnarray} where $e_r$, $e_\varphi$, and $e_z$ are the cylindrical components of the profile function of the corresponding quasicircularly polarized hybrid guided modes and are evaluated at the frequency $\omega=\omega_0$. The phase angle $\varphi_{\mathrm{pol}}$ determines the orientation of the symmetry axes of the mode profile in the fiber transverse plane. In particular, the specific values $\varphi_{\mathrm{pol}}=0$ and $\pi/2$ define two orthogonal polarization profiles, called even and odd, respectively. We again use the notation $\mathbf{R}=(r,\varphi,z)$ for the position of the atom.
The coupling coefficient $G_L$ for the atom and the quasilinearly polarized hybrid guided field is given as \begin{equation}\label{v2a} G_L=\sqrt{\frac{\omega_0\beta_L'}{4\pi\epsilon_0\hbar}}\; (\mathbf{d}\cdot\mathbf{e}^{(\mu_L)})e^{if_L\beta_L z}, \end{equation} where $\beta_L$ and $\beta_L'$ are evaluated at the frequency $\omega=\omega_0$. We assume that the atom is located on the positive side of the $x$ axis, that is, $\varphi=0$. When we insert Eq.~(\ref{c12}) into Eq.~(\ref{v2a}), we obtain \begin{equation}\label{v2b1}
|G_L(\varphi_{\mathrm{pol}}=0)|=\sqrt{\frac{\omega_0\beta_L'}{2\pi\epsilon_0\hbar}}\; |d_xe_r+f_Ld_ze_z| \end{equation} and \begin{equation}\label{v2b2}
|G_L(\varphi_{\mathrm{pol}}=\pi/2)|=\sqrt{\frac{\omega_0\beta_L'}{2\pi\epsilon_0\hbar}}\;
|d_y e_\varphi|. \end{equation} Here, $d_x=d_r$, $d_y=d_\varphi$, and $d_z$ are the components of the dipole matrix element vector $\mathbf{d}$ in the Cartesian and cylindrical coordinate systems.
According to Eq.~(\ref{v2b2}), the absolute value $|G_L|$ of the coupling coefficient $G_L$ for the quasilinearly polarized guided mode of the odd type (with the polarization angle $\varphi_{\mathrm{pol}}=\pi/2$) does not depend on the propagation direction $f_L$. The reason is that the polarization of the field at the position of the atom is linear.
Meanwhile, Eq.~(\ref{v2b1}) shows that the absolute value $|G_L|$ of the coupling coefficient $G_L$ for the quasilinearly polarized guided mode of the even type (with the polarization angle $\varphi_{\mathrm{pol}}=0$) depends on the field propagation direction $f_L$ if \begin{equation}\label{v2c} \mathrm{Re}\,(d_xd_z^*e_re_z^*)\not=0. \end{equation} It is known that both the radial component $e_r$ and the axial component $e_z$ of the mode function of quasicircularly polarized hybrid modes are nonzero and their relative phase is $\pi/2$ \cite{fiber books,highorder}. Hence, condition (\ref{v2c}) reduces to the condition \begin{equation}\label{v2d} \mathrm{Im}\,(d_xd_z^*)\not=0 \end{equation}
for the atomic dipole. This condition means that the atom has a dipole rotating in the meridional plane $zx$, that is, the atom is chiral. The ellipticity vector of the dipole of this atom overlaps with the ellipticity vector of the quasilinearly polarized field mode of the even type \cite{Fam2014,Petersen2014,Mitsch14b,Lodahl2017,sponhigh}. The directional dependence of the absolute value of the coupling coefficient $G_L$ leads to the directional dependence of the coupling parameter $\gamma_L=2\pi |G_L|^2$ and, hence, to the directional dependence of the atomic excitation probability $P$ [see Eq.~(\ref{v25a})].
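Condition (\ref{v2d}) and the resulting directional asymmetry of $|G_L|^2$ can be illustrated with a small numerical example. The mode amplitudes below are made-up numbers chosen only to satisfy the $\pi/2$ relative phase between $e_r$ and $e_z$, not an actual fiber-mode solution:

```python
import numpy as np

# Illustrative check of the chirality condition (v2d): a dipole rotating
# in the zx plane, combined with e_r and e_z in phase quadrature, makes
# |G_L|^2 depend on the propagation direction f_L.  The mode amplitudes
# are made-up numbers, not an actual fiber-mode solution.
e_r, e_z = 0.8, 0.35j                        # relative phase pi/2 (assumed)
d_x, d_z = 1j / np.sqrt(2), -1 / np.sqrt(2)  # d/|d| = (i x_hat - z_hat)/sqrt(2)

print(np.imag(d_x * np.conj(d_z)))           # nonzero: condition (v2d) holds

# |G_L|^2 up to a common prefactor, cf. Eq. (v2b1), for f_L = +1 and -1
g2 = {f_L: abs(d_x * e_r + f_L * d_z * e_z)**2 for f_L in (+1, -1)}
print(g2[+1], g2[-1])                        # unequal: chiral coupling
```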
Similar to the probe-atom coupling parameter $\gamma_L$, the rate $\gamma_{g}^{(f)}$ of spontaneous emission into guided modes in the direction $f$ under condition (\ref{v2d}) is asymmetric with respect to the opposite directions $f=+$ and $f=-$ \cite{Fam2014,Petersen2014,Mitsch14b,Lodahl2017,sponhigh}. The directional dependencies of $\gamma_L$ and $\gamma_{g}^{(f)}$ are the signatures of spin-orbit coupling of light carrying transverse spin angular momentum \cite{Zeldovich,Bliokh review,Bliokh review2015,Bliokh2014,Bliokh2015,Lodahl2017,Banzer review2015}. They are due to the existence of a nonzero longitudinal component of the field in the presence of the nanofiber. This component oscillates in phase quadrature with respect to the radial transverse component and, hence, makes the field chiral. The directional dependencies of $\gamma_L$ and $\gamma_{g}^{(f)}$ determine the directional dependencies of the transmission and reflection fluxes and the corresponding transmission and reflection probabilities.
\section{Numerical results} \label{sec:numerical}
In this section, we present the results of numerical calculations for the interaction between the atom and a quantized light pulse in a guided mode of the nanofiber.
We use the atomic transition wavelength $\lambda_0=852$ nm and the natural linewidth $\gamma_0/2\pi=5.2$ MHz, which correspond to the transitions in the $D_2$ line of atomic cesium. The atomic dipole matrix element $d$ is calculated from the formula $\gamma_0=d^2\omega_0^3/3\pi\epsilon_0\hbar c^3$ for the natural linewidth of a two-level atom \cite{Loudon,Scully}.
In order to maximize the coupling efficiency between the guided probe field and the atom, we use a single-mode nanofiber. We assume that the fiber radius is $a=200$ nm, and the refractive indices of the fiber and the vacuum cladding are $n_1=1.45$ and $n_2=1$, respectively. This thin fiber supports only the fundamental mode HE$_{11}$ at the wavelength $\lambda_0$ of the atom considered. The quasilinearly polarized HE$_{11}$ modes with the polarization angles $\varphi_{\mathrm{pol}}=0$ and $\pi/2$ of the nanofiber are called $x$- and $y$-polarized guided modes, respectively. We assume that the injected field is prepared in the $x$-polarized guided mode. We note that the spatial intensity distribution of the injected field is maximal on the $x$ axis, where the atom is positioned. In order to get a chiral effect in the interaction between the atom and the probe guided light field, we consider the case where the atomic dipole rotates in the meridional plane containing the atomic position. In this case, the dipole matrix element vector $\mathbf{d}$ is a complex vector in the $zx$ plane. To be concrete, we take $\mathbf{d}=d(i\hat{\mathbf{x}}-\hat{\mathbf{z}})/\sqrt2$. This matrix element corresponds to a $\sigma_+$-type transition between the magnetic levels of an alkali-metal atom that are specified with the use of the axis $y$ as the quantization axis. Since condition (\ref{v2d}) is satisfied, the absolute value of the coupling coefficient for the atom and the $x$-polarized fundamental mode depends on the field propagation direction [see Eq.~(\ref{v2b1})]. Meanwhile, since $d_y=0$, the atom does not interact with the $y$-polarized fundamental mode [see Eq.~(\ref{v2b2})]. Hence, we have $\gamma_L=\gamma_g^{(f_L)}=\gamma_g^{(\mathrm{fw})}$, that is, the probe-atom coupling parameter $\gamma_L$ is equal to the rate $\gamma_g^{(f_L)}$ of spontaneous emission into guided modes in the forward direction $f_L$.
\begin{figure}
\caption{Radial dependencies of (a) the total spontaneous emission rate $\gamma$, (b) the probe-atom coupling parameter $\gamma_L=2\pi|G_L|^2$, and (c) the coupling efficiency $\eta_L=\gamma_L/\gamma$. In (b) and (c), the solid red and dashed blue lines correspond to the probe propagation directions $f_L=+$ and $f_L=-$, respectively.}
\label{fig2}
\end{figure}
We calculate the total spontaneous emission rate $\gamma$,
the probe-atom coupling parameter $\gamma_L=2\pi |G_L|^2$, and the coupling efficiency $\eta_L=\gamma_L/\gamma$. We plot in Fig.~\ref{fig2} the radial dependencies of these characteristics. We observe from the figure that $\gamma$, $\gamma_L$, and $\eta_L$ decrease rapidly with increasing distance from the atom to the fiber surface. Figures \ref{fig2}(b) and \ref{fig2}(c) show that the values of the coupling parameter $\gamma_L$ and the coupling efficiency $\eta_L$ for the probe field with the propagation direction $f_L=+$ (solid red lines) are larger than those for the probe field with the propagation direction $f_L=-$ (dashed blue lines). It follows from the dependence of $\gamma_L$ on $f_L$ and the relation $\gamma_L=\gamma_g^{(\mathrm{fw})}$ that the rates $\gamma_g^{(\mathrm{fw})}$ and $\gamma_g^{(\mathrm{bw})}$ of spontaneous emission into guided modes in the forward and backward directions also depend on $f_L$. We show below that the directional dependencies of the coupling parameter $\gamma_L$ and the rates $\gamma_g^{(\mathrm{fw})}$ and $\gamma_g^{(\mathrm{bw})}$ lead to the directional dependencies of the atomic excitation probability, the photon transmission flux, and the photon transmission probability.
\subsection{Atomic excitation probability}
We use Eqs.~(\ref{v23}) or the analytical expressions (\ref{v25a})--(\ref{v29}) to calculate the internal state of the atom interacting with a single-photon guided light pulse. We plot in Fig.~\ref{fig3} the time dependence of the atomic excitation probability $P$ for the case of a single-photon Gaussian guided light pulse. We observe from the solid red curve of Fig.~\ref{fig3}(b) that, for an atom at the fiber surface, the excitation probability $P$ can be as large as $\approx 0.13$ even though the incident guided light pulse has just a single photon. Comparison between different curves of Fig.~\ref{fig3}(b) as well as Fig.~\ref{fig3}(c) shows that the peak value of the excitation probability decreases with increasing distance from the atom to the fiber surface. This behavior is a consequence of the evanescent-wave nature of the guided field. We observe that the arrival of the peak is delayed by a significant amount of time, which is comparable to the free-space lifetime $\tau_0=1/\gamma_0$ of the atom. More importantly, comparison between Figs.~\ref{fig3}(b) and \ref{fig3}(c) shows that the excitation probability $P$ strongly depends on the propagation direction $f_L$ of the pulse. The directional dependence of $P$ is a chiral effect and is a consequence of spin-orbit coupling of guided light carrying transverse spin angular momentum \cite{Zeldovich,Bliokh review,Bliokh review2015,Bliokh2014,Bliokh2015,Lodahl2017,Banzer review2015}.
\begin{figure}
\caption{Excitation of the atom by a single-photon Gaussian light pulse in the $x$-polarized fundamental mode HE$_{11}$. (a) Temporal pulse profile function $|F_t|^2$. (b),(c)
Time dependence of the atomic excitation probability $P$ of the atom
interacting with the pulse with the propagation direction $f_L=+$ (b) or $f_L=-$ (c). The radial position of the atom is $r/a=1$ (solid red lines), 1.5 (dashed green lines), and 2 (dotted blue lines).
The quantized pulse is at exact resonance with the atom. The characteristic pulse length is $T=1/\gamma_0\simeq 30$ ns. Other parameters are as for Fig.~\ref{fig2}.
The vertical dotted black line indicates the pulse peak time $t=0$. }
\label{fig3}
\end{figure}
We plot in Figs.~\ref{fig4} and \ref{fig5} the time dependencies of the atomic excitation probability $P$ for single-photon rising and decaying exponential pulses. We observe that the excitation probability of the atom substantially depends on the pulse shape \cite{Wang2011,Wang2012,GB,Kurtsiefer2016}. Comparison between Figs.~\ref{fig3}--\ref{fig5} shows that the rising exponential pulse shape is more favorable for exciting the atom than the other pulse shapes. The magnitude of $P$ can be as high as $\approx 0.2$, achieved for a rising exponential pulse interacting with an atom at the fiber surface [see the solid red line in Fig.~\ref{fig4}(b)]. Comparison between Figs.~\ref{fig4}(b) and \ref{fig4}(c) and between Figs.~\ref{fig5}(b) and \ref{fig5}(c) confirms that, as in the case of Fig.~\ref{fig3}, the excitation probability $P$ in the cases of Figs.~\ref{fig4} and \ref{fig5} strongly depends on the propagation direction $f_L$ of the pulse. We observe again that the peak value of $P$ decreases with increasing distance from the atom to the fiber surface.
\begin{figure}
\caption{Excitation of the atom by a single-photon rising exponential light pulse in the $x$-polarized fundamental mode HE$_{11}$. (a) Temporal pulse profile function $|F_t|^2$. (b),(c) Time dependence of the atomic excitation probability $P$ of the atom interacting with the pulse with the propagation direction $f_L=+$ (b) or $f_L=-$ (c). Other parameters are as for Fig.~\ref{fig3}.
}
\label{fig4}
\end{figure}
\begin{figure}
\caption{Excitation of the atom by a single-photon decaying exponential light pulse in the $x$-polarized fundamental mode HE$_{11}$. (a) Temporal pulse profile function $|F_t|^2$. (b),(c) Time dependence of the atomic excitation probability $P$ of the atom interacting with the pulse with the propagation direction $f_L=+$ (b) or $f_L=-$ (c). Other parameters are as for Fig.~\ref{fig3}.
}
\label{fig5}
\end{figure}
The relative difference between the excitation probabilities $P_{\pm}=P$ for the opposite propagation directions $f_L=\pm$ can be characterized by the asymmetry parameter $\eta_{\mathrm{asym}}=(P_+-P_-)/(P_++P_-)$. It follows from Eq.~(\ref{v25a}) that $\eta_{\mathrm{asym}}=(\gamma_L^{(+)}-\gamma_L^{(-)})/(\gamma_L^{(+)}+\gamma_L^{(-)})$, where $\gamma_L^{(\pm)}=\gamma_L$ for $f_L=\pm$. It is clear that the asymmetry parameter $\eta_{\mathrm{asym}}$ does not vary in time and does not depend on the pulse shape. We observe these features in Fig.~\ref{fig6}, where the asymmetry parameter $\eta_{\mathrm{asym}}$ is plotted as a function of time for single-photon light pulses of arbitrary shape.
\begin{figure}
\caption{Asymmetry parameter $\eta_{\mathrm{asym}}$ as a function of time for single-photon guided light pulses of arbitrary shape.}
\label{fig6}
\end{figure}
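Because Eq.~(\ref{v25a}) factorizes $P_{\pm}(t)=\gamma_L^{(\pm)}|I(t)|^2$ with a common temporal factor $|I(t)|^2$, the asymmetry parameter reduces to a ratio of rates alone. A minimal numerical check of this time independence (the directional rates below are hypothetical):

```python
# P_±(t) = gamma_L^(±) * |I(t)|^2 share one temporal factor, so
# eta = (P_+ - P_-)/(P_+ + P_-) is the same at every time and for every pulse shape.
gamma_plus, gamma_minus = 0.8, 0.3        # hypothetical directional coupling rates
profile = [0.1, 0.7, 1.0, 0.4]            # |I(t)|^2 sampled at a few times
eta = [(gamma_plus * c - gamma_minus * c) / (gamma_plus * c + gamma_minus * c)
       for c in profile]                  # identical entries
```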
\subsection{Photon reflection and transmission fluxes}
We use Eqs.~(\ref{v36a}) and (\ref{v37c}) to calculate the photon reflection flux $I_R$ and the photon transmission flux $I_T$. We plot in Figs.~\ref{fig7} and \ref{fig8} the results of calculations for the time dependencies of the fluxes $I_R$ and $I_T$ for the atom interacting with a single-photon Gaussian pulse. Comparison between Figs.~\ref{fig3} and \ref{fig7} shows that the time dependencies of the atomic excitation probability $P$ and the photon reflection flux $I_R$ have the same shape, in agreement with Eq.~(\ref{v37c}). Like the peak of $P$ in Fig.~\ref{fig3}, the peak of $I_R$ in Fig.~\ref{fig7} is delayed by a significant amount of time.
\begin{figure}
\caption{Time dependence of the photon reflection flux $I_R$ for a single-photon Gaussian guided light pulse. The propagation direction of the pulse is $f_L=+$ or $-$. Other parameters are as for Fig.~\ref{fig3}.
}
\label{fig7}
\end{figure}
\begin{figure}
\caption{Time dependence of the photon transmission flux $I_T$ for a single-photon Gaussian guided light pulse. The propagation direction of the pulse is $f_L=+$ (a) or $-$ (b). Other parameters are as for Fig.~\ref{fig3}.
}
\label{fig8}
\end{figure}
The numerical results presented in Fig.~\ref{fig7} show that, unlike the atomic excitation probability $P$, the photon reflection flux $I_R$ does not depend on the propagation direction $f_L$ of the probe pulse. This feature is a consequence of the fact that the reflection involves two processes, namely the atomic excitation by the pulse propagating in one direction [see Eq.~(\ref{v25a})] and the subsequent photon re-emission into the guided modes propagating in the opposite direction [see Eq.~(\ref{v37c})]. Due to this fact, the dependence of $I_R$ on $f_L$ is contained in the proportionality factor $\gamma_g^{(\mathrm{bw})}\gamma_L$ [see Eqs.~(\ref{v25a}) and (\ref{v37c})]. For the considered atomic dipole and guided probe pulse, we have $\gamma_L=\gamma_g^{(\mathrm{fw})}$. Therefore, the proportionality factor is $\gamma_g^{(\mathrm{bw})}\gamma_L=\gamma_g^{(\mathrm{bw})}\gamma_g^{(\mathrm{fw})}=\gamma_g^{(+)}\gamma_g^{(-)}$. It is clear that this factor does not depend on $f_L$ and hence neither does the reflection flux $I_R$.
Comparison between Figs.~\ref{fig8}(a) and \ref{fig8}(b) shows that the photon transmission flux $I_T$ depends on the field propagation direction $f_L$. In the case of the solid red curve in Fig.~\ref{fig8}(a), where $f_L=+$ and $r/a=1$, we observe a significant advance (negative delay) of the time for the arrival of the peak of the pulse. This advance is related to the anomalous dispersion of the susceptibility of resonant two-level atoms \cite{Scully}.
\begin{figure}
\caption{Time dependence of the photon reflection flux $I_R$ for a single-photon rising exponential guided light pulse. The propagation direction of the pulse is $f_L=+$ or $-$. Other parameters are as for Figs.~\ref{fig3} and \ref{fig4}.
}
\label{fig9}
\end{figure}
\begin{figure}
\caption{Time dependence of the photon transmission flux $I_T$ for a single-photon rising exponential guided light pulse. The propagation direction of the pulse is $f_L=+$ (a) or $-$ (b). Other parameters are as for Figs.~\ref{fig3} and \ref{fig4}.
}
\label{fig10}
\end{figure}
\begin{figure}
\caption{Time dependence of the photon reflection flux $I_R$ for a single-photon decaying exponential guided light pulse. The propagation direction of the pulse is $f_L=+$ or $-$. Other parameters are as for Figs.~\ref{fig3} and \ref{fig5}.
}
\label{fig11}
\end{figure}
\begin{figure}
\caption{Time dependence of the photon transmission flux $I_T$ for a single-photon decaying exponential guided light pulse. The propagation direction of the pulse is $f_L=+$ (a) or $-$ (b). Other parameters are as for Figs.~\ref{fig3} and \ref{fig5}.
}
\label{fig12}
\end{figure}
We plot in Figs.~\ref{fig9}--\ref{fig12} the time dependencies of the photon reflection and transmission fluxes $I_R$ and $I_T$ for single-photon rising and decaying exponential pulses. We observe that the temporal shapes of the reflection and transmission fluxes substantially depend on the pulse shape. Like in the case of Gaussian pulses, we observe in the cases of rising and decaying exponential pulses that the reflection flux $I_R$ does not depend on the pulse propagation direction $f_L$, while the transmission flux $I_T$ depends on $f_L$.
\subsection{Photon reflection and transmission probabilities}
\begin{figure}
\caption{Dependence of the reflection probability $P_R$ on the field detuning $\Delta$ of a single-photon Gaussian guided light pulse. The propagation direction of the pulse is $f_L=+$ or $-$. Other parameters are as for Fig.~\ref{fig3}.
}
\label{fig13}
\end{figure}
\begin{figure}
\caption{Dependence of the transmission probability $P_T$ on the field detuning $\Delta$ of a single-photon Gaussian guided light pulse. The propagation direction of the pulse is $f_L=+$ (a) or $-$ (b). Other parameters are as for Fig.~\ref{fig3}.
}
\label{fig14}
\end{figure}
We calculate the photon reflection probability $P_R=\int_{t_0}^{\infty}I_R(t)dt$ and the photon transmission probability $P_T=\int_{t_0}^{\infty}I_T(t)dt$ by integrating the corresponding fluxes. We plot in Figs.~\ref{fig13} and \ref{fig14} the dependencies of $P_R$ and $P_T$ on the field detuning $\Delta=\omega_L-\omega_0$ for a single-photon Gaussian pulse. We depict in Figs.~\ref{fig15} and \ref{fig16} the corresponding results for single-photon rising and decaying exponential pulses. It is clear that the curves are symmetric with respect to $\Delta$. According to the numerical results presented in Figs.~\ref{fig13} and \ref{fig15}, the reflection probability $P_R$ has the same magnitude for pulses with the opposite propagation directions $f_L=\pm$. This feature occurs as a consequence of the fact that the photon reflection flux $I_R$ does not depend on the propagation direction $f_L$ of the pulse. Comparison between Figs.~\ref{fig14}(a) and \ref{fig14}(b) and between Figs.~\ref{fig16}(a) and \ref{fig16}(b) shows that the transmission probability $P_T$ depends on the field propagation direction $f_L$. We observe from Figs.~\ref{fig13}--\ref{fig16} that the linewidths of the curves for the frequency dependencies of $P_R$ and $P_T$ increase with decreasing distance from the atom to the fiber surface. This feature is a consequence of the dependence of the total decay rate $\gamma$ on the radial position of the atom [see Fig.~\ref{fig2}(a)]. The numerical results presented in Figs.~\ref{fig15} and \ref{fig16} show that the probabilities $P_R$ and $P_T$ do not depend on whether the single-photon probe pulse is exponentially rising or decaying. This result is in agreement with the results of Ref.~\cite{Kurtsiefer2016} for the extinction probability $P_{\mathrm{ext}}=1-P_T$ of a single photon interacting with a single trapped atom. 
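The quadratures $P_R=\int_{t_0}^{\infty}I_R(t)\,dt$ and $P_T=\int_{t_0}^{\infty}I_T(t)\,dt$ are straightforward to evaluate numerically. A hedged sketch using the trapezoidal rule, with a purely illustrative flux profile standing in for Eqs.~(\ref{v36a}) and (\ref{v37c}):

```python
import numpy as np

# Hypothetical reflection flux with known integral: I_R(t) = 0.05 * t * exp(-t)
# for t >= 0, so the exact probability is P_R = 0.05 * Gamma(2) = 0.05.
t = np.linspace(0.0, 20.0, 2001)
I_R = 0.05 * t * np.exp(-t)
dt = t[1] - t[0]
P_R = float(np.sum(0.5 * (I_R[:-1] + I_R[1:])) * dt)  # trapezoidal rule
```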
Comparison between the curves of Figs.~\ref{fig13}--\ref{fig16} shows that, for increasing distance from the atom to the fiber surface, the reflection probability $P_R$ decreases and the transmission probability $P_T$ increases.
\begin{figure}
\caption{Dependence of the reflection probability $P_R$ on the field detuning $\Delta$ of a single-photon rising or decaying exponential guided light pulse. The propagation direction of the pulse is $f_L=+$ or $-$. Other parameters are as for Figs.~\ref{fig3}--\ref{fig5}.
}
\label{fig15}
\end{figure}
\begin{figure}
\caption{Dependence of the transmission probability $P_T$ on the field detuning $\Delta$ of a single-photon rising or decaying exponential guided light pulse. The propagation direction of the pulse is $f_L=+$ (a) or $-$ (b). Other parameters are as for Figs.~\ref{fig3}--\ref{fig5}.
}
\label{fig16}
\end{figure}
\section{Summary} \label{sec:summary}
In conclusion, we have studied the interaction between a single two-level atom and a single-photon probe pulse in a guided mode of a nanofiber. We have focused on the situation of chiral interaction where the atom has a dipole rotating in the meridional plane of the nanofiber, and the probe pulse is quasilinearly polarized along the radial direction of the position of the atom in the fiber transverse plane. We have shown that, for increasing distance from the atom to the fiber surface, the peak atomic excitation probability and the photon reflection probability decrease, while the photon transmission probability increases. We have found that the atomic excitation probability, the photon transmission flux, and the photon transmission probability depend on the propagation direction of the probe pulse along the fiber axis. These directional dependencies are the consequences of spin-orbit coupling of light carrying transverse spin angular momentum. We have shown that the asymmetry parameter for the atomic excitation probability does not vary in time and does not depend on the probe pulse shape. Unlike the photon transmission flux and the photon transmission probability, the reflection flux and the reflection probability do not depend on the propagation direction of the probe pulse. In the case of single-photon Gaussian pulses, we have observed a time delay of the peak of the photon reflection flux and a time advance of the peak of the photon transmission flux. We have shown that, for an arbitrary detuning, the reflection probability and the transmission probability do not depend on whether the pulse is exponentially rising or decaying.
Our results are important, as they can be used to control and manipulate the directional dependence of the interaction between a single atom and a single-photon guided light pulse. They may have a significant influence on ongoing and future experiments in nanofiber quantum optics. Due to the high efficiencies that can be achieved for coupling into the fiber, our scheme could be more efficient than the scheme for single-photon scattering by a single atom in free space \cite{Kurtsiefer2016}. Our scheme can also be extended for use as a one-atom switch for single-photon routing controlled by a single photon \cite{Dayan2014}. Compared to the microcavity-based system \cite{Dayan2014}, a nanofiber-based system is likely to be less efficient, though somewhat simpler in design.
\begin{acknowledgments} This work was supported by the Okinawa Institute of Science and Technology Graduate University. \end{acknowledgments}
\end{document}
\begin{document}
\title[Linkage of modules over Cohen-Macaulay rings]
{Linkage of modules over Cohen-Macaulay rings}
\author[M. T. Dibaei]{Mohammad T. Dibaei$^{1}$}
\author[M. Gheibi]{Mohsen Gheibi$^2$}
\author[S. H. Hassanzadeh]{S. H. Hassanzadeh$^{3}$}
\author[A. Sadeghi]{Arash Sadeghi$^4$}
\address{$^{1, 2, 3, 4}$ Faculty of Mathematical Sciences and Computer, Tarbiat Moallem University, Tehran, Iran.}
\address{$^{1, 2, 4}$ School of Mathematics, Institute for Research in Fundamental Sciences (IPM), P.O. Box: 19395-5746, Tehran, Iran.} \email{dibaeimt@ipm.ir} \email{mohsen.gheibi@gmail.com} \email{sadeghiarash61@gmail.com}
\address{$^{3}$ Departamento de Matem\'{a}tica, CCEN, Universidade Federal de Pernambuco, 50740-540 Recife, PE, Brazil.} \email{hamid@dmat.ufpe.br}
\keywords{Linkage of modules, sliding depth of extension modules, modules with Cohen-Macaulay extensions, sequentially Cohen-Macaulay \\ 1. M.T. Dibaei was supported in part by a grant from IPM (No. 89130114)\\
3. S.H.Hassanzadeh was partially supported by a grant from CNPq (Brazil).}
\subjclass[2000]{13C40, 13D45,13C14}
\begin{abstract} Inspired by works in the linkage theory of ideals, the concept of sliding depth of extension modules is defined in order to prove the Cohen-Macaulayness of a linked module when the base ring is merely Cohen-Macaulay. Some relations between this new condition and other module-theoretic conditions, such as G-dimension and sequential Cohen-Macaulayness, are established. Along the way, several known theorems in linkage theory are improved or recovered by new approaches.
\end{abstract}
\maketitle
\section{introduction} Classification is one of the main perspectives in any field of mathematics. Among the theories that pursue this viewpoint in commutative algebra and algebraic geometry, linkage theory is one that has been well developed over the decades. Classically it goes back to Halphen (1870) and M. Noether \cite{No} (1882), who worked to classify space curves. In the 1940s and 1950s, Ap\'ery and Gaeta made new contributions to the classification of curves in $\mathbb{P}^3$; for instance, the feeling that the twisted cubic curve is particularly well behaved can be traced to the fact that it is linked to a line. In 1974 the significant work of Peskine and Szpiro \cite{PS} brought a breakthrough to this theory and stated it in modern algebraic language: two closed subschemes $V_1$ and $V_2$ in $\mathbb{P}^n$ are said to be linked if they are unmixed, have no common component, and their union is a complete intersection. More precisely, two ideals $I$ and $J$ in a Cohen-Macaulay local ring $R$ are said to be linked if there is a regular sequence $\alpha$ in their intersection such that $I=\alpha:J$ and $J=\alpha:I$. The first main theorem in the theory of linkage is the following \cite{PS}.
\begin{quote}\it Theorem A. If $(R,\mathfrak{m})$ is a Gorenstein local ring and $I$ and $J$ are two linked ideals of $R$, then $R/I$ is Cohen-Macaulay if and only if $R/J$ is so.\end{quote}
Attempts to generalize this theorem have led to several developments in linkage theory, especially the works of C. Huneke and B. Ulrich \cite{Hu}, \cite{HU1}. A counterexample given by Peskine and Szpiro in the same article shows that {\it Theorem A} is no longer true if the base ring $R$ is only Cohen-Macaulay (from now on CM). Trying to determine an accurate condition on an ideal $I$ in a CM local ring under which every ideal linked to $I$ is CM, Huneke \cite{Hu} introduced the strongly Cohen-Macaulay (SCM) condition: an ideal $I$ is SCM if all of the Koszul homology modules with respect to some generating set of $I$ are Cohen-Macaulay. He proved that {\it in a Cohen-Macaulay local ring any ideal linked to a strongly Cohen-Macaulay ideal is Cohen-Macaulay.} Herzog, Vasconcelos and Villarreal \cite{HVV} replaced the SCM condition by the so-called sliding depth condition: an ideal $I$ satisfies the sliding depth condition if $\mbox{depth}\, H_i \geq \mbox{dim}\,(R)-r+i$ for $i > 0$, where $H_i$ is the $i$th Koszul homology module with respect to some generating set of $I$ and $r$ is the number of elements of this generating set.
A more recent advance in linkage theory is the work of Martsinkovsky and Strooker
\cite{MS}, which established the concept of linkage of modules.
Their paper has attracted considerable interest: they not only recovered some
of the known theorems in the theory of linked ideals, such as the ones in \cite{Sc}, but also presented
new conceptual ideas that exist only in module theory.
In this paper, inspired by the works in the ideal case, we extend the strongly Cohen-Macaulay and sliding depth conditions
to modules, so that we can state Theorem A for linked modules over
CM local rings.
In Section 2, as mentioned above, we define the new sliding depth conditions for modules, called SDE (sliding depth of Ext modules) and CME (Cohen-Macaulay Ext modules). Some sufficient conditions for being SDE or CME are given; for example, in Proposition \ref{B0} it is shown that Cohen-Macaulay $R$--modules of finite G--dimension are CME. It is also proven that, over a Cohen-Macaulay local ring $R$, if $M$ is SDE, then $\lambda M$ is maximal Cohen-Macaulay (see Corollary \ref{B1}).
Trying to detect module-theoretic invariants that have no counterpart in the linkage theory of ideals, we encounter the combinatorial notion of a \emph{sequentially Cohen-Macaulay} module. We first present a computational criterion for this concept, involving ideas from the linkage of modules, in Corollary \ref{CC}. Finally, we extend a theorem of Foxby \cite{F} to the class of CME modules: over a Cohen-Macaulay local ring with canonical module $\omega_R$, it is shown that $M$ is CME if and only if $M\otimes_R\omega_R$ is sequentially Cohen-Macaulay and $\mbox{Tor}\,_i^R(M,\omega_R)=0$ for $i>0$ (Theorem \ref{A3}).
In Section 3, for a finite $R$--module $M$ over a Cohen-Macaulay local ring $R$ of dimension $d\geq 2$ with canonical module $\omega_R$,
we establish a duality between the local cohomology modules of $M\otimes_R\omega_R$
and those of $\lambda M$ (Theorem \ref{A4}), provided $M\otimes_R\omega_R$ is generalized
Cohen-Macaulay. This theorem is a generalization of \cite[Theorem 10]{MS} and
also \cite{Sc}, while for its proof we appeal to spectral sequences instead of techniques from the derived category.
Moreover, whenever $M$ is generalized Cohen-Macaulay, under some vanishing assumptions on the Tor modules of $M$ and $\omega_R$, we show that $\mbox{H}^i_\mathfrak{m}(\lambda M)\cong \mbox{Ext}\,^i_R(M,R)$ for $i=1,\ldots ,d-1$ (Corollary \ref{B2}).
\section{SDE and CME modules} Throughout, $R$ is a Noetherian ring and $M$ is a finitely generated $R$--module. Assume that $M$ is a stable $R$--module (i.e. $M$ has no projective summand). Let $P_1\overset{f}{\rightarrow}P_0\rightarrow M\rightarrow 0$ be a finite projective presentation of $M$. The transpose $\mbox{Tr}\, M$ of $M$ is defined to be $\mbox{Coker}\, f^*$, where $(-)^* := \mbox{Hom}\,_R(-,R)$; it is unique up to projective equivalence. Thus the minimal projective presentations of $M$ represent isomorphic transposes of $M$, and $\mbox{Tr}\, M$ is also a stable $R$--module (see \cite[Theorem 32.13]{AF}). Let $P\overset{\alpha}{\rightarrow}M$ be an epimorphism with $P$ projective. The syzygy module of $M$, denoted by $\Omega M$, is the kernel of $\alpha$, which is unique up to projective equivalence. Thus $\Omega M$ is determined uniquely up to isomorphism if $P\rightarrow M$ is a projective cover. The operator $\lambda = \Omega\mbox{Tr}\,$, introduced by Martsinkovsky and Strooker, enabled them to define linkage for modules: two finitely generated $R$--modules $M$ and $N$ are said to be \emph{horizontally linked} if $M\cong \lambda N$ and $N\cong\lambda M$. Thus, $M$ is horizontally linked (to $\lambda M$) if and only if $M\cong\lambda^2M$. It is shown in \cite[Proposition 8 in section 4]{MS} that, over a Gorenstein local ring $R$, a stable $R$--module $M$ with $\mbox{dim}\, M= \mbox{dim}\, R$ is maximal Cohen-Macaulay if and only if $\lambda M$ is maximal Cohen-Macaulay and $M$ is unmixed.
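For orientation, the two constructions above fit into a single exact sequence: applying $(-)^*$ to a projective presentation of $M$ gives, up to projective equivalence,

```latex
\[
0 \longrightarrow M^{*} \longrightarrow P_0^{*} \xrightarrow{\,f^{*}\,} P_1^{*}
\longrightarrow \operatorname{Tr} M \longrightarrow 0,
\qquad
\lambda M = \Omega\operatorname{Tr} M \cong \operatorname{im} f^{*}
\cong \operatorname{coker}\bigl(M^{*} \hookrightarrow P_0^{*}\bigr).
\]
```

In particular one obtains the short exact sequence $0\rightarrow M^*\rightarrow P_0^*\rightarrow \lambda M\rightarrow 0$, which is used repeatedly below.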
If the ring $R$ is merely a Cohen-Macaulay local ring, this statement is no longer true \cite[section 6]{MS}.
The following definition is the module-theoretic version of the strongly Cohen-Macaulay and sliding depth conditions. \begin{defn}\label{D1}
\emph{Let $R$ be a local ring of dimension $d$ and let $M$ be a finitely generated $R$--module.
The module $M$ is said to be SDE (having Sliding Depth of Extension modules) if either $\mbox{Ext}\,^i_R(M,R)=0$ or
$\mbox{depth}\,_R(\mbox{Ext}\,^i_R(M,R))\geq d-i$ for all $i=1,\ldots ,d-1$. Also, $M$ is said to be CME (having Cohen-Macaulay
Extension modules) if either $\mbox{Ext}\,^i_R(M,R)=0$ or $\mbox{Ext}\,^i_R(M,R)$ is Cohen-Macaulay of dimension $d-i$ for all $i=1,\ldots ,d-1$.} \end{defn} To see that CME modules are abundant, it is shown in the next proposition that any Cohen-Macaulay module of finite G-dimension is CME. Clearly any CME module is SDE.
For the definition of G-dimension we refer to \cite{C}.
\begin{prop}\label{B0}
Let $R$ be a Cohen-Macaulay local ring of dimension $d$. Then any Cohen-Macaulay $R$--module with finite G--dimension is \emph{CME}. \end{prop} \begin{proof} Let $M$ be a Cohen-Macaulay $R$--module with finite G--dimension. The Auslander-Bridger formula $\mbox{G--dim}\,_R(M)+ \mbox{depth}\,_R(M)=\mbox{depth}\, R$ \cite[Theorem 1.4.8]{C}, in conjunction with the Cohen-Macaulayness of $M$, implies that $\mbox{grade}\,_R(M)=\mbox{G--dim}\,_R(M)=:g$, so that $\mbox{Ext}\,^i_R(M, R)= 0$ for all $i\neq g$. Choose $\underline{x}:=x_1, \ldots , x_g$ to be a maximal $R$--sequence contained in $\mbox{Ann}\,_R(M)$. We have $\mbox{Ext}\,^{g}_R(M,R)\cong \mbox{Hom}\,_{R/(\underline{x})}(M,R/(\underline{x}))$ and $\mbox{Ext}\,^i_{R/(\underline{x})}(M,R/(\underline{x}))=0$ for all $i> 0$. Since $M$ is a maximal Cohen-Macaulay $R/(\underline{x})$--module, by \cite[Proposition 3.3.3]{BH}, $\mbox{Hom}\,_{R/(\underline{x})}(M,R/(\underline{x}))$ is a maximal Cohen-Macaulay $R/(\underline{x})$--module. Therefore $\mbox{Ext}\,^{g}_R(M,R)$ is Cohen-Macaulay of dimension $d-g$. \end{proof}
Determining the depth of linked ideals is at the center of the questions on the arithmetic properties of ideals. In the linkage of modules, the depth of modules linked to SDE modules is rather well under control.
\begin{prop}\label{A1}
Let $(R,\mathfrak{m})$ be a local ring and let $M$ be an \emph{SDE} $R$--module. Then $\emph\mbox{depth}\,_R(\lambda M) \geq\min\{\emph\mbox{depth}\,_R(M),\emph\mbox{depth}\, R\}$. \end{prop} \begin{proof} Set $t=\min\{\mbox{depth}\,_R(M),\mbox{depth}\,_R(R)\}$. The case $t=0$ is trivial, so suppose that $t>0$. Set $X:=\displaystyle\cup^{d-1}_{i=1}\mbox{Ass}\,_R(\mbox{Ext}\,^i_R(M,R))\cup \mbox{Ass}\,_R(M)\cup \mbox{Ass}\,_R(R)$. As $M$ is SDE and $t>0$, there is $x\in\mathfrak{m}\setminus\bigcup_{\mathfrak{p}\in X}\mathfrak{p}$. Set $\overline{M}=M/xM$ and $\overline{R}=R/xR$. The exact sequence $0\rightarrow R\overset{x}{\longrightarrow}R\longrightarrow \overline R\longrightarrow 0$ induces the exact sequence\\ \centerline{$0\longrightarrow M^* \overset{x}{\longrightarrow} M^* \longrightarrow\mbox{Hom}\,_R(M,\overline{R}) \longrightarrow \mbox{Ext}\,^1_R(M,R)\overset{x}{\longrightarrow}\mbox{Ext}\,^1_R(M,R)\longrightarrow \cdots.$} As each map $\mbox{Ext}\,^i_R(M,R)\overset{x}{\longrightarrow}\mbox{Ext}\,^i_R(M,R)$ is injective for $i=0, \ldots, d-1$, we have the standard isomorphisms $\mbox{Ext}\,^i_{\overline{R}}( \overline{M},\overline{R})\cong \mbox{Ext}\,^i_R(M,\overline{R})\cong \mbox{Ext}\,^i_R(M,R)/x\mbox{Ext}\,^{i}_R(M,R)$ for all $i=0,1,\ldots ,d-2$. Therefore $\overline{M}$ is $\mbox{SDE}\,$ as an $\overline{R}$--module. Let $P_1\longrightarrow P_0\longrightarrow M\longrightarrow 0$
be a minimal projective presentation of $M$ and consider the exact sequence $0\longrightarrow M^*\longrightarrow
P^*_0\longrightarrow \lambda M\longrightarrow 0$. As $\lambda M$ is a syzygy module, $x$ is also a non-zero-divisor on
$\lambda M$. Thus there is a commutative diagram with exact rows $$\begin{CD} &&&&&&&&\\ \ \ &&&& 0 @>>> M^*/{xM^*} @>>>P^*_0/{xP^*_0} @>>> {\lambda M}/{x\lambda M} @>>>0& \\ &&&&&& @VV{\cong}V @VV{\cong}V \\ \ \ &&&& 0 @>>>\mbox{Hom}\,_{\overline R}(\overline M,\overline R) @>>> \mbox{Hom}\,_{\overline R}(\overline P_0,\overline R)
@>>>\lambda_{\overline{R}}\overline{M} @>>>0&\\ \end{CD}$$\\ which implies that $\lambda M/{x\lambda M} \cong \lambda_{\overline{R}}\overline{M}$.\\ By induction $\mbox{depth}\,_{\overline{R}}(\lambda_{\overline{R}}\overline{M})\geq \min\{\mbox{depth}\,_{ \overline{R}}( \overline{M}),\mbox{depth}\,_{\overline{R}}(\overline{R})\}=t-1$. Thus $\mbox{depth}\,_R(\lambda M)\geq t$. \end{proof}
As a corollary of the above general proposition, we have the following generalization of \emph{Theorem A}, which is in fact the module-theoretic version of \cite[Proposition 1.1]{Hu}.
\begin{cor}\label{B1}
Let $R$ be a Cohen-Macaulay local ring and let $M$ be a maximal Cohen-Macaulay $\emph{SDE}$ $R$--module.
Then $\lambda M$ is maximal Cohen-Macaulay. \end{cor}
The composed functors $\mathcal{T}_i:=\mbox{Tr}\, \Omega ^{i-1}$ for $i>0$ have been already introduced by Auslander and Bridger \cite{AB} and recently used by
Nishida \cite{N} to relate linkage and duality. In the following result, over a Cohen-Macaulay
local ring, we characterize an SDE module $M$ in terms of the depths of the $R$--modules $\mathcal{T}_iM$ and
$\lambda\Omega^iM$. Moreover, it follows that for $\lambda M$ to be
maximal Cohen-Macaulay we only need $M$ to be SDE.
\begin{thm}\label{A2}
Let $R$ be a Cohen-Macaulay local ring of dimension $d\geq2$, $M$ a finitely generated $R$--module.
The following statements are equivalent. \begin{itemize}
\item[(i)]{$M$ is $\emph{SDE}$.}
\item[(ii)]{$\emph{depth}_R(\mathcal{T}_iM)\geq d-i$ for all $i=1,\ldots ,d-1$.}
\item[(iii)]{$\emph{depth}_R(\lambda\Omega^iM)\geq d-i$ for all $i=0,\ldots ,d-2$.}
\end{itemize} \end{thm} \begin{proof}For a stable finite $R$--module $N$, there is an exact sequence (\cite[section 5]{MS}), \centerline{$ 0\longrightarrow \mbox{Ext}\,^1_R(\mbox{Tr}\, N,R)\longrightarrow N\longrightarrow \lambda ^2N\longrightarrow 0$.} Note that since the transpose of every finite $R$--module is either stable or zero, $\mbox{Tr}\,\mathcal{T}_iM$ is stably isomorphic to $\Omega^{i-1}M$ for $i>0$, and $\mbox{Ext}\,^1_R(\Omega^{i-1}M,R)\cong \mbox{Ext}\,^i_R(M,R)$, so we have the exact sequence \begin{equation}\label{e} 0\longrightarrow \mbox{Ext}\,^i_R(M,R)\longrightarrow \mathcal{T}_iM\longrightarrow \lambda^2 \mathcal{T}_iM\longrightarrow 0. \end{equation} Also, since $\lambda ^2\mathcal{T}_iM$ is stably isomorphic to $\Omega\mathcal{T}_{i+1}M$ and $R$ is
Cohen-Macaulay, we have \begin{equation}\label{e2} \mbox{depth}\,_R(\lambda ^2\mathcal{T}_iM)=\mbox{depth}\,_R
(\Omega\mathcal{T}_{i+1}M).
\end{equation}
(i)$\Longrightarrow$(ii). We proceed by induction on $i$. From the exact sequence (\ref{e}) we have $\mbox{depth}\,_R(\mathcal{T}_{d-1}M)\geq 1$. Now suppose that $i\leq d-2$ and $\mbox{depth}\,_R(\mathcal{T}_{i+1}M)\geq d-i-1$, accordingly, $\mbox{depth}\,_R(\Omega\mathcal{T}_{i+1}M)\geq d-i$ which in turn implies $\mbox{depth}\,_R(\mathcal{T}_iM)\geq d-i$, using (\ref{e2}) and (\ref{e}).
(ii)$\Longrightarrow$(i). By (\ref{e2}) and the assumption, $\mbox{depth}\,_R(\lambda ^2\mathcal{T}_iM)=\mbox{depth}\,_R (\Omega\mathcal{T}_{i+1}M)\geq d-i$
for all $i=1,\ldots ,d-1$. Using (\ref{e}), we get either $\mbox{Ext}\,^i_R(M,R)=0$ or $\mbox{depth}\,_R(\mbox{Ext}\,^i_R(M,R))
\geq d-i$ for all $i=1,\ldots ,d-1$.\\ (ii)$\Longleftrightarrow$(iii) Note that $\Omega\mathcal{T}_{i+1}M=\lambda\Omega^iM$ for each $i$. Thus $\mbox{depth}\,_R(\mathcal{T}_iM)\geq d-i$
for all $i=1,\ldots ,d-1$ if and only if $\mbox{depth}\,_R(\lambda\Omega^iM)=\mbox{depth}\,_R(\Omega\mathcal{T}_{i+1}M)
\geq d-i$ for all $i=0,\ldots ,d-2$. \end{proof}
A shellable simplicial complex is a special kind of Cohen-Macaulay complex with a simple combinatorial definition. Shellability is a simple but powerful tool for proving the Cohen-Macaulay property. A simplicial complex $\Delta$ is pure if each facet (= maximal face) has the same dimension (cf. \cite[Section II]{S}).
The concept of a \emph{sequentially Cohen-Macaulay} module was defined by combinatorial commutative algebraists (\emph{loc. cit.} 3.9) in answer to the basic question of finding a ``nonpure'' generalization of the concept of a Cohen-Macaulay module, one for which the face ring of a shellable (nonpure) simplicial complex has this property.
This concept was then applied by commutative algebraists to study algebraic invariants and special algebras arising from graphs (cf. \cite{AH}). In the following propositions we describe the relations among sequential Cohen-Macaulayness, SDE, and CME, as well as a way to construct a family of modules with these properties. \begin{defn}
\emph{Let $(R,\mathfrak{m})$ be a local Noetherian ring and let $M$ be a finitely generated $R$--module. A finite filtration $0=M_0\subset M_1\subset M_2\subset\ldots \subset M_r=M $ of submodules of $M$ is called a} Cohen-Macaulay filtration, \emph{if each quotient $M_i/M_{i-1}$ is Cohen-Macaulay, and $\mbox{dim}\,_R(M_1/M_0)<\mbox{dim}\,_R(M_2/M_1)<\ldots <\mbox{dim}\,_R(M_r/M_{r-1})$. The module $M$ is called} Sequentially Cohen-Macaulay \emph{if $M$ admits a Cohen-Macaulay filtration.} \end{defn}
A basic fact about Sequentially Cohen-Macaulay modules is the following theorem of Herzog and Popescu \cite[Theorem 2.4]{HP}.
\begin{thm}\label{SQ}
Let $R$ be a Cohen-Macaulay local ring of dimension $d$ with canonical module $\omega_R$, and let $M$ be a finitely generated $R$--module. The following conditions are equivalent. \begin{itemize}
\item[(i)] $M$ is Sequentially Cohen-Macaulay.
\item[(ii)] $\emph\mbox{Ext}\,^{d-i}_R(M,\omega _R)$ is either $0$
or Cohen-Macaulay of dimension $i$, for all $i\geq 0$.
\end{itemize} \end{thm}
Thus one observes that over a Gorenstein local ring the conditions SDE, CME and Sequentially
Cohen-Macaulay are equivalent. Hence Theorem \ref{A2} provides the following computable characterization of
sequentially Cohen-Macaulay modules.
\begin{cor}\label{CC}
Let $R$ be a Gorenstein local ring of dimension $d\geq 2$ and let $M$ be a finitely generated $R$--module. The following conditions are equivalent. \begin{itemize}
\item[(i)] $M$ is Sequentially Cohen-Macaulay.
\item[(ii)] $\emph\mbox{depth}\,_R(\lambda{\Omega^i M})\geq d-i
$ for all $i$, $0\leq i\leq d-2$.
\end{itemize} \end{cor}
To prove Proposition \ref{P}, we need to recall the next generalization of the definition of linkage of modules, \cite[Definition 4]{MS}.
\begin{defn}
\emph{Let $M$ and $N$ be two finitely generated $R$--modules. The module $M$
is said to be} linked \emph{to $N$ by an ideal $\mathfrak{c}$ of $R$, if $\mathfrak{c} \subseteq \mbox{Ann}\,_R(M) \cap \mbox{Ann}\,_R(N)$
and $M$ and $N$ are horizontally linked as $R/\mathfrak{c}$--modules.} \end{defn} The following result shows that, over a Gorenstein local ring, the property of being sequentially Cohen-Macaulay (or equivalently, SDE or CME) is preserved under even linkage.
\begin{prop}\label{P}
Let $R$ be a Gorenstein local ring. Then the property of being sequentially Cohen-Macaulay (or equivalently, \emph{SDE} or \emph{CME}) is preserved under even linkage by ideals. \end{prop} \begin{proof} Set $d:=\mbox{dim}\, R$. Let $\mathfrak{c}_1$ and $\mathfrak{c}_2$ be Gorenstein ideals. Assume that $M_1$, $M$, and $M_2$ are $R$--modules such that $M_1$ is linked to $M$ by $\mathfrak{c}_1$ and $M$ is linked to $M_2$ by $\mathfrak{c}_2$. For each $i > 0$, by \cite[Lemma 11 and Proposition 16]{MS}, we have \[\begin{array}{rl} \mbox{Ext}\,^{i+g}_R(M_1,R)&\cong \mbox{Ext}\,^i_{R/\mathfrak{c}_1}(M_1,R/\mathfrak{c}_1)\\ &\cong\mbox{Ext}\,^i_{R/\mathfrak{c}_2}(M_2,R/\mathfrak{c}_2)\\ &\cong \mbox{Ext}\,^{i+g}_R(M_2,R) \end{array}\]
where $g=\mbox{ht}\,{\mathfrak{c}_1}=\mbox{ht}\,{\mathfrak{c}_2}=\mbox{grade}\,_R{M_1}=\mbox{grade}\,_R{M_2}$.
Suppose that $M_1$ is sequentially Cohen-Macaulay. By Theorem \ref{SQ},
$\mbox{Ext}\,^{d-i}_R(M_1,R)$ is either zero or Cohen-Macaulay of dimension $i$ for each $i$. Hence
$\mbox{Hom}\,_{R/{\mathfrak{c}_1}}(M_1,{R/{\mathfrak{c}_1}})\ (\cong\mbox{Ext}\,^g_R(M_1,R))$ is a Cohen-Macaulay $R$--module of dimension $d-g$,
and so it is a maximal Cohen-Macaulay $R/{\mathfrak{c}_1}$--module.
Note that, for $j=1, 2$ there are exact sequences\\
\centerline{$ 0 \rightarrow \mbox{Hom}\,_{R/{\mathfrak{c}_j}}(M_j,{R/{\mathfrak{c}_j}}) \rightarrow P_j \rightarrow
{\lambda_{R/{\mathfrak{c}_j}}M_j} \rightarrow 0 $} where $P_j$ is a
projective ${R/{\mathfrak{c}_j}}$--module. Thus we have $\mbox{depth}\,_{R/\mathfrak{c}_2}(\lambda_{R/{\mathfrak{c}_2}}M_2)= \mbox{depth}\,_R(M)=
\mbox{depth}\,_R(\lambda_{R/{\mathfrak{c}_1}}M_1)\geq d-g-1$.
It follows that $\mbox{Hom}\,_{R/\mathfrak{c}_2}(M_2,{R/\mathfrak{c}_2})$
is a maximal Cohen-Macaulay $R/{\mathfrak{c}_2}$--module, and so $\mbox{Ext}\,^g_R(M_2,R)$
is a Cohen-Macaulay $R$--module of dimension $d-g$.
Hence $M_2$ is a sequentially Cohen-Macaulay $R$--module by Theorem \ref{SQ}. \end{proof}
As mentioned just after Theorem \ref{SQ}, over Gorenstein local rings, CME modules are exactly the sequentially Cohen-Macaulay modules. On the other hand, when $(R,\mathfrak{m})$ is a Cohen-Macaulay ring with canonical module $\omega_R$, it follows from a result of Foxby \cite[Theorem 2.5]{F} that $\mbox{Tor}\,_i^R(M,\omega_R)=0$ for all $i>0$ whenever $\mbox{G--dim}\,_R M <\infty$.
Moreover, Khatami and Yassemi \cite[Theorem 1.11]{KY} proved that whenever $(R,\mathfrak{m})$ is a Cohen-Macaulay ring with canonical module $\omega_R$ and $M$ is an $R$--module of finite Gorenstein dimension, then
$M\otimes_R \omega_R$ is Cohen-Macaulay if and only if $M$ is Cohen-Macaulay. Note that, by Lemma \ref{B0}, if $\mbox{G--dim}\,_R M <\infty$
and $M$ is Cohen-Macaulay then $M$ is CME, i.e.\ the class of CME modules contains
the class of Cohen-Macaulay modules of finite G-dimension.
Hence the following question arises naturally.
{\it What happens if, in the results of Foxby and of Khatami and Yassemi, one replaces
the finite G-dimension and Cohen-Macaulay conditions on $M$ with the condition
that $M$ is \emph{CME}?}
The following theorem provides an answer to this question.
\begin{thm}\label{A3}
Let $R$ be a Cohen-Macaulay local ring with the canonical module $\omega_R$, and let $M$ be a finitely generated $R$--module. Then the following two statements are equivalent. \begin{itemize}
\item[(i)] $M$ is \emph{CME}.
\item[(ii)]{$M\otimes_R\omega_R$ is sequentially Cohen-Macaulay
and $\emph{Tor}^R_i(M,\omega_R)=0$ for all $i>0$.}
\end{itemize} \end{thm} \begin{proof} (i)$\Rightarrow$(ii). Let $P_\bullet: \cdots\rightarrow P_1\rightarrow
P_0\rightarrow 0$ be a projective resolution of $M$, and let $I^\bullet: 0\rightarrow I^0\rightarrow I^1\rightarrow \cdots$ be an injective resolution of $\omega_R$ and construct the third quadrant double complex $F:=\mbox{Hom}\,_R(\mbox{Hom}\,_R(P_\bullet,R),I^\bullet).$ Let $^v\mbox{E}$ (resp. $^h\mbox{E}$) denote the vertical (resp. horizontal) spectral sequence associated to the double complex $F$. Then $^v\mbox{E}^{i,j}_2 \cong\mbox{Ext}\,^i_R(\mbox{Ext}\,^j_R(M,R),\omega_R)$. Since $\mbox{Ext}\,^i_R(M,R)$ is either zero or is Cohen-Macaulay of dimension $d-i$, we have $$^v\mbox{E}^{i,j}_2\cong \left\lbrace
\begin{array}{c l}
\mbox{Ext}\,^i_R(\mbox{Ext}\,^j_R(M,R),\omega_R)\ \ & \text{ \ \ $i=j$,}\\
0\ \ & \text{ \ \ $\textrm{otherwise}$.}
\end{array}
\right.$$\\ By using the equivalence of the functors $\mbox{Hom}\,_R(\mbox{Hom}\,_R(X,R),Y)$ and $X\otimes_RY$ when $X$ (resp.\ $Y$) belongs to the subcategory of projective (resp.\ injective) $R$--modules, we find that the double complex $\mbox{Hom}\,_R(\mbox{Hom}\,_R(P_\bullet,R),I^\bullet)$ is isomorphic to the third quadrant double complex $P_\bullet\otimes_RI^\bullet$. Now we may use this double complex to find that $$^h\mbox{E}^{i,j}_2\cong \left\lbrace
\begin{array}{c l}
\mbox{Tor}\,^R_i(M,\omega_R)\ \ & \text{ \ \ $j=0$,}\\
0\ \ & \text{ \ \ $\textrm{otherwise}$.}
\end{array}
\right.$$\\ It follows that $^h\mbox{E}_{\infty}=\ ^h\mbox{E}_2$ and $^v\mbox{E}_{\infty}=\
^v\mbox{E}_2$. By comparing the two spectral sequences $^h\mbox{E}$ and $^v\mbox{E}$ we get $\mbox{Tor}\,^R_i(M,\omega_R)= 0$ for all $i>0$. Thus there is a filtration $0=\Phi_{d+1}\subset\Phi_d\subset\ldots \subset\Phi_0=M\otimes_R\omega_R$ of $M\otimes_R\omega_R$ such that
$\mbox{Ext}\,^i_R(\mbox{Ext}\,^i_R(M,R),\omega_R)\cong\Phi_i/\Phi_{i+1}$ for $i=0,\ldots ,d$. Note that, by
\cite[Theorem 3.3.10]{BH}, $\mbox{Ext}\,^i_R(\mbox{Ext}\,^i_R(M,R),\omega_R)$
is either zero or Cohen-Macaulay of dimension $d-i$. In other words $M\otimes_R\omega_R$ is
sequentially Cohen-Macaulay.\\
(ii)$\Rightarrow$(i). Let $E^\bullet: 0\rightarrow E^0\rightarrow E^1\rightarrow\cdots$ be an injective resolution of $\omega_R$ and consider the third quadrant double complex $\mbox{Hom}\,_R(P_\bullet \otimes_R \omega_R , E^\bullet )$. Using the same notation as before, let $^v\mbox{E}$ (resp. $^h\mbox{E}$) denote the vertical (resp. horizontal) spectral sequence associated
to the double complex $\mbox{Hom}\,_R(P_\bullet \otimes_R \omega_R , E^\bullet ).$ Then $^v\mbox{E}^{i,j}_2\cong\mbox{Ext}\,^i_R (\mbox{Tor}\,^R_j(M,\omega_R),\omega_R)$, which vanishes for all $j>0$ by our assumption. By using the equivalence of the functors $\mbox{Hom}\,_R(X\otimes_R\omega_R,Y)$ and $\mbox{Hom}\,_R(X,\mbox{Hom}\,_R(\omega_R,Y))$ in the category of $R$--modules, we find the following isomorphism of double complexes: $\mbox{Hom}\,_R(P_\bullet\otimes_R\omega_R,E^\bullet) \cong\mbox{Hom}\,_R(P_\bullet,\mbox{Hom}\,_R(\omega_R,E^\bullet)).$ Thus we get $^h\mbox{E}^{i,j}_2\cong\mbox{Ext}\,^i_R(M,\mbox{Ext}\,^j_R(\omega_R,\omega_R))$ for all $i,j\geq0$. As $\mbox{Ext}\,^i_R(\omega_R,\omega_R)=0$ for $i>0$ and $\mbox{Hom}\,_R(\omega_R,\omega_R)\cong R$, we get $$^h\mbox{E}^{i,j}_2\cong \left\lbrace
\begin{array}{c l}
\mbox{Ext}\,^i_R(M,R)\ \ & \text{ \ \ $j=0$,}\\
0\ \ & \text{ \ \ $\textrm{otherwise}$.}
\end{array}
\right.$$\\ As the two spectral sequences $^v\mbox{E}$ and $^h\mbox{E}$ collapse, we have $^h\mbox{E}_\infty=\ ^h\mbox{E}_2$ and $^v\mbox{E}_\infty=\ ^v\mbox{E}_2$, and so $\mbox{Ext}\,^i_R(M,R)\cong\mbox{Ext}\,^i_R(M\otimes_R\omega_R,\omega_R)$. Since $M\otimes_R\omega_R$ is sequentially Cohen-Macaulay, $\mbox{Ext}\,^i_R(M,R)$ is either zero or Cohen-Macaulay of dimension $d-i$ (see \cite[Theorem 1.9]{BH}), i.e.\ $M$ is $\mbox{CME}\,$. \end{proof}
\begin{cor}
Let $(R,\mathfrak{m})$ be a Cohen-Macaulay local ring and let $M$ be a finitely generated $R$--module. Let $\omega_{\widehat{R}}$ denote the canonical module of $\widehat{R}$, the completion of $R$ with respect to the $\mathfrak{m}$--adic topology. Then the following are equivalent. \begin{itemize}
\item[(i)]{$M$ is $\emph{CME}$.}
\item[(ii)]{$\widehat{M}\otimes_{\widehat{R}}\omega_{\widehat{R}}$ is sequentially Cohen-Macaulay and
$\emph{Tor}^{\widehat{R}}_i(\widehat{M},\omega_{\widehat{R}})=0$ for all $i>0$.}
\end{itemize}
\end{cor}
\section{Local cohomology and linkage}
The main purpose of this section is to give a generalization of \cite[Theorem 10]{MS}, which states that $\mbox{H}^i_\mathfrak{m}(\lambda M)\cong \mbox{D}(\mbox{H}^{d-i}_\mathfrak{m}(M))$ for $i=1,\ldots ,d-1$, whenever $M$ is a generalized Cohen-Macaulay module over a Gorenstein local ring $R$; here $\mbox{D}(-)$ denotes the Matlis duality functor. We assume instead that $R$ is Cohen-Macaulay with canonical module $\omega_R$ and that $M\otimes_R\omega_R$ is generalized Cohen-Macaulay; it is then shown that $\mbox{H}^i_{\mathfrak{m}}(M\otimes_R\omega_R)\cong \mbox{D}(\mbox{H}^{d-i}_{\mathfrak{m}}(\lambda M))$ for each $i=1,\ldots ,d-1$. Moreover, whenever $M$ is generalized Cohen-Macaulay, under a vanishing assumption on the Tor-modules of $M$ and $\omega_R$, we show that $\mbox{H}^i_\mathfrak{m}(\lambda M)\cong \mbox{Ext}\,^i_R(M,R)$ for $i=1,\ldots ,d-1$ (see Corollary \ref{B2}).
The next proposition leads to a ``cohomological criterion'' for generalized Cohen-Macaulay modules to be linked (Corollary \ref{Clink}). The proposition is also of independent interest, as it establishes the exactness of the sequence (\ref{mes}). Although this exact sequence may already be known, we include it for the sake of a detailed statement and proof.
\begin{prop}\label{A}
Let $(R,\mathfrak{m})$ be a Cohen-Macaulay local ring of dimension $d$ with canonical module $\omega_R$. Assume that $M$ is a finitely generated $R$--module such that $\emph\mbox{Ass}\,_R(M)\subseteq \emph\mbox{Ass}\, R\cup\{\mathfrak{m}\}$ and $M$ satisfies the Serre condition $(S_2)$ on the punctured spectrum. Set $M^\upsilon =\emph\mbox{Hom}\,_R(M, \omega_R)$. Let $\phi :M\longrightarrow M^{\upsilon\upsilon}$ be the natural map, $K:=\emph\mbox{Ker}\,(\phi)$ and $C:=\emph\mbox{Coker}\,(\phi)$. The following statements hold true. \begin{itemize} \item[(i)] If $d=0$ then $K=0$. \item[(ii)] If $d\leq 1$ then $C= 0$. \item[(iii)] If $d\geq 1$ then $K\cong\Gamma_\mathfrak{m}(M)$. \item[(iv)] If $d\geq 2$ then $C\cong\emph\mbox{H}_\mathfrak{m}^1(M)$ and so there is an exact sequence \begin{equation}\label{mes} 0\longrightarrow \Gamma_\mathfrak{m}(M)\longrightarrow M\longrightarrow M^{\upsilon\upsilon}
\longrightarrow \emph\mbox{H}^1_\mathfrak{m}(M)\longrightarrow 0. \end{equation} \end{itemize} \end{prop} \begin{proof} If $d= 0$, it is clear by \cite[Theorem 3.3.10]{BH} that $C=0$ and $K=0$. Assume that $d\geq 1$. One has $\mbox{depth}\,_R(M^{\upsilon\upsilon})\geq \min\{2,\mbox{depth}\,_R(\omega_R)\}\geq 1$, and so $\Gamma_{\mathfrak{m}}(M^{\upsilon\upsilon})=0$. By applying $\Gamma_{\mathfrak{m}}(-)$ to the exact sequence
\begin{equation}\label{c} 0\longrightarrow K\longrightarrow
M\overset{\phi}{\longrightarrow} M^{\upsilon\upsilon}\longrightarrow C\longrightarrow 0,\end{equation} it follows that $\Gamma_{\mathfrak{m}}(K)=\Gamma_{\mathfrak{m}}(M)$. Suppose now that $d=1$. For each $\mathfrak{p}\in \mbox{Spec}\, R\setminus \{\mathfrak{m}\}$ one has $M_{\mathfrak{p}}\cong (M^{\upsilon\upsilon})_{\mathfrak{p}}\cong (M_{\mathfrak{p}})^{\upsilon\upsilon}$, which implies that $\mbox{Supp}\,_R(K)\subseteq \{\mathfrak{m}\}$, i.e.\ $K=\Gamma_\mathfrak{m}(M)$. Hence we get the exact sequence \begin{equation}\label{a} 0\longrightarrow M/{\Gamma_{\mathfrak{m}}(M)}\longrightarrow M^{\upsilon\upsilon}\longrightarrow C\longrightarrow0,\end{equation} from which, by applying $\Gamma_{\mathfrak{m}}(-)$, we obtain the exact sequence \begin{equation}\label{b} 0\longrightarrow \Gamma_\mathfrak{m}(C)\longrightarrow \mbox{H}^1_{\mathfrak{m}}(M)\longrightarrow \mbox{H}^1_{\mathfrak{m}}(M^{\upsilon\upsilon})\longrightarrow \mbox{H}^1_{\mathfrak{m}}(C).\end{equation} As $\mbox{depth}\,_R(M^\upsilon)\geq\min\{2, \mbox{depth}\,_R(\omega_R)\}\geq 1$, $M^\upsilon$ is maximal Cohen-Macaulay. Therefore the natural map $M^\upsilon\longrightarrow M^{\upsilon\upsilon\upsilon}$ is an isomorphism. Applying the local duality theorem functorially gives the commutative diagram
$$\begin{CD}
0 @>>> \Gamma_\mathfrak{m}(C) @>>> \mbox{H}^1_{\mathfrak{m}}(M) @>>> \mbox{H}^1_{\mathfrak{m}}(M^{\upsilon\upsilon})\\
@. @. @VV{\cong}V @VV{\cong}V\\
 @. @. D(M^\upsilon) @>{\cong}>> D(M^{\upsilon\upsilon\upsilon}),
\end{CD}$$
where $D(-)=\mbox{Hom}\,_R(-,E(R/\mathfrak{m}))$. Thus we get $\Gamma_\mathfrak{m}(C)=0$. Note that if $\mathfrak{p}\in\mbox{Spec}\, R\setminus\{\mathfrak{m}\}$, then $\mbox{dim}\, R_\mathfrak{p}=0$
and so $C_\mathfrak{p}=0=K_\mathfrak{p}$ by \cite[Theorem 3.3.10]{BH}. Hence $C=\Gamma_\mathfrak{m}(C)=0$.
In case $d\geq 2$, $\mbox{depth}\,_R(M^{\upsilon\upsilon})\geq\min\{2, \mbox{depth}\,_R(\omega_R)\}\geq 2$ and (\ref{b}) implies that \begin{equation}\label{d}\Gamma_\mathfrak{m}(C)\cong\mbox{H}_\mathfrak{m}^1(M).\end{equation}
Finally, we prove by induction on $d\geq 2$ that $K=\Gamma_\mathfrak{m}(M)$ and $C\cong\mbox{H}_\mathfrak{m}^1(M)$. Assume that the statement holds for rings of
dimension smaller than $d$. Let $\mathfrak{p}\in\mbox{Supp}\,_R(M)\setminus\{\mathfrak{m}\}$. We first show that
$\mathfrak{p}\not\in\mbox{Supp}\,_R(K)\cup\mbox{Supp}\,_R(C)$.
If $\mbox{ht}\,\mathfrak{p}=0$, the claim holds true as before. Assume that $\mbox{ht}\,\mathfrak{p}\geq1$. As $\mbox{dim}\, R_\mathfrak{p}<d$,
the induction hypothesis applied to $R_\mathfrak{p}$ implies that $K_{\mathfrak{p}}=\Gamma_{\mathfrak{p} R_{\mathfrak{p}}}(M_{\mathfrak{p}})$ and $C_\mathfrak{p}=\mbox{H}_{\mathfrak{p} R_\mathfrak{p}}^1(M_\mathfrak{p})$. Since $\mathfrak{p}\not\in \mbox{Ass}\,_R(R)$, also $\mathfrak{p}\not\in \mbox{Ass}\,_R(M)$; hence $\mbox{depth}\,_{R_\mathfrak{p}}(M_{\mathfrak{p}})\geq 1$ and thus $K_{\mathfrak{p}}=0$, i.e.\ $\mathfrak{p}\not\in\mbox{Supp}\,_R(K)$. For the case $\mbox{ht}\,\mathfrak{p}=1$ we already have $C_{\mathfrak{p}}=0$. Assume that $\mbox{ht}\,\mathfrak{p}\geq2$. As $R$ is Cohen-Macaulay and $\mbox{Ass}\,_R(M)\subseteq\mbox{Ass}\,(R)\cup\{\mathfrak{m}\}$, we have $\mbox{dim}\,_{R_{\mathfrak{p}}}(M_{\mathfrak{p}})=\mbox{dim}\,_{R_{\mathfrak{p}}}(R_{\mathfrak{p}})=\mbox{ht}\,{\mathfrak{p}}\geq 2$, and so $\mbox{depth}\,_{R_{\mathfrak{p}}}(M_{\mathfrak{p}})\geq \min\{2,\mbox{dim}\,_{R_{\mathfrak{p}}}(M_{\mathfrak{p}})\}=2$ because $M$ satisfies $(S_2)$ on the punctured spectrum. Hence $\mbox{H}^1_{{\mathfrak{p}}R_{\mathfrak{p}}}(M_{\mathfrak{p}})=0$, and therefore $C_{\mathfrak{p}}=0$, i.e.\ $\mathfrak{p}\not\in\mbox{Supp}\,_R(C)$. In particular, $K=\Gamma_\mathfrak{m}(K)$ and $C=\Gamma_\mathfrak{m}(C)$. Now, from the exact sequence (\ref{c}) and the fact that $\mbox{depth}\,_R(M^{\upsilon\upsilon})>1$, we get $K=\Gamma_{\mathfrak{m}}(K)=\Gamma_{\mathfrak{m}}(M)$, and $C\cong\mbox{H}_\mathfrak{m}^1(M)$ by (\ref{d}). \end{proof}
\begin{cor}\label{Clink}
Let $R$ be a Gorenstein local ring and let $M$ be a generalized Cohen-Macaulay stable $R$--module with $\emph\mbox{dim}\,_R(M)=\emph\mbox{dim}\, R$. A necessary and sufficient condition for $M$ to be horizontally linked is that $\Gamma_\mathfrak{m}(M)=0$. \end{cor} \begin{proof} Note that, by \cite[Exercise 9.5.6]{BS}, $\mbox{Ass}\,_R(M)\subseteq \mbox{Ass}\,_R(R)\cup \{\mathfrak{m}\}$ and $M$ satisfies $(S_2)$ on the punctured spectrum. As $M$ is horizontally linked if and only if the natural map $M\rightarrow M^{**}$ is injective, the result follows from Proposition \ref{A}. \end{proof}
In the following result, we extend \cite[Theorem 10]{MS} to Cohen-Macaulay rings with canonical module.
\begin{thm}\label{A4}
Let $(R,\mathfrak{m})$ be a local Cohen-Macaulay ring of dimension $d\geq 2$ with canonical module $\omega_R$. Let $M$ be a finitely generated $R$--module of dimension $d$ such that $M\otimes_R\omega_R$ is generalized Cohen-Macaulay. Then for each $i=1,\ldots ,d-1$, $\emph{H}^i_{\mathfrak{m}}(M\otimes_R\omega_R)\cong \emph{Hom}_R(\emph{H}^{d-i}_{\mathfrak{m}}(\lambda M),E(R/{\mathfrak{m}}))$. \end{thm} \begin{proof} First we examine the general situation for an $R$--module $N$ which is generalized
Cohen-Macaulay of dimension $d$. Let $0\rightarrow I^0\rightarrow I^1\rightarrow \cdots$ be an injective resolution of $\omega_R$ and
$\cdots\rightarrow P_1\rightarrow P_0\rightarrow 0$ be a projective resolution of $N$
and construct the third quadrant double complex
$F:=\mbox{Hom}\,_R(\mbox{Hom}\,_R(P_{\bullet},\omega_R),I^{\bullet})$. Let $^v\mbox{E}$ (resp. $^h\mbox{E}$)
denote the vertical (resp. horizontal) spectral
sequence associated to the double complex $F$. Then $^v\mbox{E}^{i,j}_2 \cong\mbox{Ext}\,^i_R(\mbox{Ext}\,^j_R(N,\omega_R),\omega_R)$. As $N$ is generalized Cohen-Macaulay, by the local duality theorem, $\mbox{Ext}\,^i_R(N,\omega_R)$ is of finite length for all $i=1,\ldots ,d$. Therefore $$^v\mbox{E}^{i,j}_2\cong \left\lbrace
\begin{array}{c l}
\mbox{Ext}\,^i_R(N^\upsilon ,\omega_R) \ \ & \text{if \ \ $j=0$,} \\
\mbox{H}^{d-j}_{\mathfrak{m}}(N) \ \ & \text{if \ \ $j\neq 0 ,i= d$,}\\
0\ \ & \text{if \ \ $j\neq 0\ ,i\neq d$.}
\end{array}
\right.$$\\ As the map $d^r$ is of bidegree $(r,1-r)$, one can observe that $^v\mbox{E}^{r,0}_r\cong \mbox{Ext}\,^{d-r}_R(N^\upsilon,\omega_R)$, for $r\geq 2$. Thus we have the following diagram: $$
\xymatrix{
\mbox{Ext}\,^0_R(N^\upsilon,\omega_R)\ar@{..>}^{d^d}[rrrrddd]& \cdots &\mbox{Ext}\,^{d-2}_R(N^\upsilon,\omega_R)\ar^{d^2}[rrd] &\mbox{Ext}\,^{d-1}_R(N^\upsilon,\omega_R) &\mbox{Ext}\,^{d}_R(N^\upsilon,\omega_R) \\
0 & & &0 & \mbox{H}^{d-1}_{\mathfrak{m}}(N)\\
\vdots & & & & \vdots\\
0 & \cdots & &0 & \mbox{H}^{1}_{\mathfrak{m}}(N)\\
0 & \cdots & &0 & \mbox{H}^{0}_{\mathfrak{m}}(N)\\
}
$$
To compute $^h\mbox{E}_2$, we change our double complex with the functorial isomorphisms $\mbox{Hom}\,_R(\mbox{Hom}\,_R(P_i,\omega_R),I^j)\cong P_i\otimes_R\mbox{Hom}\,_R(\omega_R,I^j).$ Thus we get
$$^h\mbox{E}^{i,j}_2\cong\left\lbrace
\begin{array}{c l}
N \ \ & \text{if \ \ $i=0, j=0$,} \\
0\ \ & \text{otherwise}.
\end{array}
\right.$$\\ As $\mbox{Ker}\, d^r$ and $\mbox{Coker}\, d^r$ must agree with the corresponding entries of $^v\mbox{E}_\infty$, comparing the two spectral sequences one gets isomorphisms $d^r:\mbox{Ext}\,^{d-r}_R(N^\upsilon,\omega_R)\longrightarrow \mbox{H}^{d-r+1}_\mathfrak{m}(N)$ for $r=2,\ldots ,d-1$. Therefore $\mbox{Ext}\,^{d-r}_R(N^\upsilon,\omega_R)$ is of finite length and so, by the local duality theorem, $\mbox{Ext}\,^{d-r}_R(N^\upsilon,\omega_R)\cong \mathrm{D}(\mbox{H}^r_\mathfrak{m}(N^\upsilon)).$ Hence one obtains the isomorphisms $\mbox{H}^{d-r+1}_\mathfrak{m}(N)\cong \mathrm{D}(\mbox{H}^r_\mathfrak{m}(N^\upsilon))$ for all $r=2,\ldots ,d-1$.
Replacing $N$ by $M\otimes_R\omega_R$ gives $$\mbox{H}^{d-i+1}_\mathfrak{m}(M\otimes_R\omega_R)\cong \mathrm{D} \big( \mbox{H}^i_\mathfrak{m}( \mbox{Hom}\,_R( M\otimes_R\omega_R,\omega_R) ) \big) \cong \mathrm{D}( \mbox{H}^i_\mathfrak{m}(M^*)),$$ for all $i=2,\ldots ,d-1.$ Consider the exact sequence $0\rightarrow M^* \rightarrow P_0^* \rightarrow \lambda M \rightarrow 0.$ Applying $\Gamma_\mathfrak{m}(-)$ we get $\mbox{H}^{i+1}_\mathfrak{m}(M^*)\cong \mbox{H}^i_\mathfrak{m}(\lambda M)$ for $i=0,\ldots ,d-2$. Therefore we have isomorphisms $\mbox{H}^i_\mathfrak{m}(M\otimes_R\omega_R)\cong \mathrm{D}(\mbox{H}^{d-i}_\mathfrak{m}(\lambda M))$, for $i=2,\ldots ,d-1$. It remains to prove the claim for $i=1$. Applying Proposition \ref{A} to $M\otimes_R\omega_R$ and applying the functor $\mbox{Hom}\,_R(-,\omega_R)$ to the exact sequence $0\rightarrow M^* \rightarrow P_0^* \rightarrow \lambda M \rightarrow 0$, we get the following commutative diagram with exact rows and columns:
$$\begin{CD}
P_0\otimes_R\omega_R @>>> M\otimes_R\omega_R @>>> 0 @.\\
@V{\cong}VV @VVV @. @.\\
\mbox{Hom}\,_R(P_0^*,\omega_R) @>>> {(M\otimes_R\omega_R)}^{\upsilon\upsilon} @>>> \mbox{Ext}\,^1_R(\lambda M,\omega_R) @>>> 0\\
@. @. @VVV @.\\
 @. @. \mbox{H}^1_{\mathfrak{m}}(M\otimes_R\omega_R) @.\\
@. @. @VVV @.\\
 @. @. 0 @.
\end{CD}$$
This implies that $\mbox{H}^1_\mathfrak{m}(M\otimes_R\omega_R)\cong \mbox{Ext}\,^1_R(\lambda M,\omega_R)\cong \mathrm{D}(\mbox{H}^{d-1}_\mathfrak{m}(\lambda M) ).$ \end{proof}
As a final result, we state the following corollary of Theorem \ref{A4}. \begin{cor}\label{B2}
Let $R$ be a Cohen-Macaulay ring of dimension $d\geq2$ with canonical module $\omega_R$, and let $M$ be a finitely
generated $R$--module. Suppose that $\emph{Ext}^i_R(M,R)$ is of finite length for $i=0,\ldots
,d-1$ and $\emph{Tor}^R_i(M,\omega_R)=0$
for $i>0$. Then $\emph{H}^i_{\mathfrak{m}}(\lambda M)\cong \emph{Ext}^i_R(M,R)$, $i=1,\ldots ,d-1$,
and so $\lambda M$ is generalized Cohen-Macaulay. \end{cor} \begin{proof} Since $\mbox{Tor}\,^R_i(M,\omega_R)=0$ for all $i>0$, as in the proof of Theorem \ref{A3} ((ii)$\Rightarrow$(i)) we have $\mbox{Ext}\,^i_R(M\otimes_R\omega_R,\omega_R)\cong \mbox{Ext}\,^i_R(M,R)$ for $i=0,\ldots ,d-1$. Hence $\mbox{Ext}\,^i_R(M\otimes_R\omega_R,\omega_R)$ is of finite length for $i=1,\ldots ,d-1$, and so $M\otimes_R\omega_R$ is generalized Cohen-Macaulay. Therefore the result follows from the local duality theorem and Theorem \ref{A4}. \end{proof}
\end{document}
\begin{document}
\title{Optimal Measures for Multivariate Geometric Potentials} \begin{abstract} We study measures and point configurations optimizing energies based on multivariate potentials. The emphasis is put on potentials defined by geometric characteristics of sets of points, which serve as multi-input generalizations of the well-known Riesz potentials for pairwise interaction. One such potential is the squared volume of the simplex with vertices at the $k \ge 3$ given points: we show that the arising energy is maximized by balanced isotropic measures, in contrast to the classical two-input energy. These results are used to obtain interesting geometric optimality properties of the regular simplex. As the main machinery, we adapt the semidefinite programming method to this context and establish relevant versions of the $k$-point bounds.
\end{abstract} \tableofcontents
\section{Introduction}
A variety of problems in many areas of mathematics and science can be formulated as discrete or continuous energy optimization problems for two-point interaction potentials. The discrete energy and the continuous energy integral in this setup are defined as \begin{equation}\label{e.2ener} \frac{1}{N^2} \sum_{x,y \in \omega_N} K (x,y) \,\,\, \textup{ or } \int_\Omega \int_\Omega K(x,y) \, d\mu (x) \, d\mu (y), \end{equation}
where $K: \Omega \times \Omega \rightarrow \mathbb R$ is a potential function. In the former case, the energy is determined for discrete sets $\omega_N$ of $N$ points in $\Omega$; in the latter, for probability measures $\mu$ on the domain $\Omega$. For $\Omega \subset \mathbb R^d$, undoubtedly among the most well-studied energies of this type are the Riesz energies with kernels $K(x,y) = \| x-y \|^s$ (diagonal terms need to be dropped in the discrete case for $s<0$). We refer the reader to \cite{BHS} for an excellent exposition of the subject.
However, numerous applications (e.g. Menger curvature \cite{MMV}, $U$-statistics \cite{L,V}, $k$-point bounds \cite{BV, DMOV, Mu, CW}, three-nucleon force in physics \cite{Z}, etc) call for energies that depend on interactions of triples or $k$-tuples of particles, rather than just pairwise interactions, i.e.\ energies of the type \begin{align}\label{e.nener} E_K (\omega_N) & = \frac{1}{N^k} \sum_{ z_1,\dots,z_k \in \omega_N} K (z_1,\ldots,z_k), \\ \label{e.neneri} I_K (\mu) & = \int_\Omega \dots \int_\Omega K (x_1,\dots,x_k) \, d\mu (x_1) \,\dots \, d\mu (x_k), \end{align} with $k\ge 3$. The question of interest is finding point configurations and measures optimizing such energies.\\
Continuing the general study initiated in \cite{BFGMPV1}, in this paper we study multivariate potentials that are determined by geometric characteristics of sets of $k$ points in $\mathbb R^d$ and, at the same time, serve as generalizations of classical pairwise potentials ubiquitous in the literature, in particular, the aforementioned Riesz potentials. There are two main classes of such potentials which we investigate here. \\
\noindent {\bf The potential $V$: volume of the parallelepiped spanned by $k$-tuples of vectors.} Let $2\leq k\leq d$ and define the $k$-input kernel $V(x_1,\ldots, x_k)$ as the $k$-dimensional volume of the parallelepiped spanned by the {\emph{vectors}} $x_1$, $\ldots$, $x_k$, i.e.\ the parallelepiped whose edges at the origin are these $k$ vectors (equivalently, $V$ is the volume of the simplex spanned by these $k$ vectors, scaled by a factor of $k!$). Note that $V^2$ is the determinant of the Gram matrix of the set of vectors $\{x_1,\ldots, x_k\}$.\\
\noindent {\bf The potential $A$: volume (area) of the simplex spanned by $k$-tuples of points.} For $2\leq k\leq d+1$, define the $k$-input kernel $A(x_1,\ldots, x_k)$ as the $(k-1)$-dimensional volume of the simplex whose vertices are the points $x_1$, $\ldots$, $x_k$. Similarly to $V^2$, the potential $A^2$ can be represented as a determinant of a matrix based on scalar products of the set of vectors $\{x_1,\ldots, x_k\}$, see Lemma \ref{lem:A-formula}.\\
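As a quick numerical illustration (the helper below is ours, not part of the paper), the potential $A$ can be computed from the Gram determinant of the edge vectors: the $(k-1)$-dimensional volume of the simplex with vertices $x_1,\ldots,x_k$ equals $\frac{1}{(k-1)!}\sqrt{\det G}$, where $G_{ij}=\langle x_{i+1}-x_1,\, x_{j+1}-x_1\rangle$.

```python
from math import factorial, sqrt

import numpy as np

def simplex_volume(points):
    """A(x_1, ..., x_k): the (k-1)-dimensional volume of the simplex
    with the given vertices, computed via the Gram determinant of the
    edge vectors x_2 - x_1, ..., x_k - x_1."""
    pts = np.asarray(points, dtype=float)
    edges = pts[1:] - pts[0]                 # (k-1) x d matrix of edge vectors
    gram = edges @ edges.T                   # Gram matrix of the edges
    det = max(np.linalg.det(gram), 0.0)      # clip tiny negative round-off
    return sqrt(det) / factorial(len(pts) - 1)

# k = 2 recovers the distance A(x, y) = ||x - y||:
print(simplex_volume([(0, 0), (3, 4)]))          # 5.0
# k = 3: area of the right triangle with legs 1, 1:
print(simplex_volume([(0, 0), (1, 0), (0, 1)]))  # 0.5
```

The $k=2$ call matches the remark below that $A(x,y)=\|x-y\|$.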
Observe that in the case $k=3$, $V$ is simply the three-dimensional {\emph{volume}} of the parallelepiped spanned by the vectors $x_1$, $x_2$, $x_3$, while $A$ is the {\emph{area}} of the triangle with vertices at the points $x_1$, $x_2$, $x_3$. This explains the notation chosen for these potentials. The three-input case of the potentials will be the focus of Section \ref{sec:SD}. Note also that for $k=2$, $A(x,y) = \| x -y\|$, so the potentials $A^s$ are direct multivariate generalizations of the Riesz potentials. This was also remarked upon in \cite{KPS}, where the authors studied the gradient flow of $A^2$ as a generalization of the linear consensus model. The two-input case for both of these potentials is discussed in Section \ref{sec:k=2}. \\
In direct analogy to the Riesz energies, we shall study multi-input energies with kernels given by powers of these potentials: $A^{s}$ and $V^s$ (in this paper, the powers are mostly assumed to be positive, i.e.\ $s>0$). Due to the nature of these potentials for $s>0$, one is generally interested in measures and point configurations that {\emph{maximize}} (rather than minimize) the corresponding energies. The geometric setting will be primarily restricted to the case when the domain $\Omega$ is the unit sphere $\mathbb S^{d-1}$, as well as $\Omega= \mathbb R^d$ with certain moment restrictions on the underlying probability measures \eqref{eq:moment}. In the former case, one of the most natural questions is whether the normalized uniform surface measure $\sigma$ on the sphere $\mathbb S^{d-1}$ maximizes the energies $I_{A^s} (\mu)$ and $I_{V^s} (\mu)$.
These questions can be reformulated in probabilistic terms as follows: assume that $k$ random points/vectors are chosen on the sphere $\mathbb S^{d-1}$ independently according to the probability distribution $\mu$. Which probability distribution $\mu$ maximizes the expected $s^{th}$ power of the volume of the parallelepiped spanned by the vectors (or, respectively, of the volume of the simplex spanned by the random points)? Is the uniform distribution $\sigma$ optimal? \\
The case $s=2$ appears to be more manageable than others, since, as mentioned above, both $V^2$ and $A^2$ can be expressed as polynomials. In fact, it has already been shown by Cahill and Casazza \cite{CC} (see also Theorem \ref{thm:volume-gen} below) that $I_{V^2}$ is maximized by isotropic measures on the sphere (see \eqref{eq:isotropic_init} for the definition), which include $\sigma$. Based on this result we show that $I_{V^s}$ is maximized by the discrete measure uniformly distributed over an orthonormal basis when $s>2$ (Corollary \ref{cor:V^s for s>2}). Other main results of the present paper concerning multivariate geometric potentials include:
\begin{itemize} \item The maximizers of the energy $I_{A^2}(\mu)$ on the sphere $\mathbb S^{d-1}$ are exactly the balanced isotropic measures (which includes the uniform surface measure $\sigma$, see Section \ref{sec:not} for the relevant definitions). This is proved in Theorem \ref{thm:area-gen} in full generality (for all $d\ge 2$ and $3\le k \le d+1$), but different proofs of partial cases are also given in Theorem \ref{thm: triangle area squared max} (the case of $k=3$ inputs, i.e. area squared of a triangle) and Theorem \ref{thm:A^2maxGen} ($k=d+1$ inputs in dimension $d\ge 2$, i.e. a full-dimensional simplex; this theorem also applies to measures on $\mathbb R^d$ with unit second moment). \item When $k=d+1$ and $s>2$, the energy $I_{A^s}(\mu)$ is maximized by the discrete measure uniformly distributed over the vertices of a regular $d$-dimensional simplex (Corollary \ref{cor:A^s for s>2}). \item For $0<s\le 2$, the discrete energies $E_{V^s}$ and $E_{A^s}$ with $N=d+1$ points are maximized by the vertices of a regular $d$-dimensional simplex, see Corollary \ref{cor:Simplex Best A, V}. As a corollary, a regular $d$-dimensional simplex maximizes the sum of volumes of $j$-dimensional faces ($1\le j \le d$) among all simplices of a given circumradius (Corollary \ref{cor:Geometric Simplex is best}). \end{itemize} \noindent For more precise technical statements of these results the reader is directed to the theorems and corollaries referenced above. \\
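In the vein of the preceding discussion of $I_{V^2}$, the value attained by the measure uniform on an orthonormal basis is elementary to compute: $\det\operatorname{Gram}(e_{i_1},\ldots,e_{i_k})$ equals $1$ when the indices are distinct and $0$ otherwise, so for this measure $I_{V^2}=\frac{d!}{(d-k)!\,d^k}$. The brute-force computation below (an illustrative sketch of ours, not code from the paper) confirms the count.

```python
from itertools import product
from math import factorial

import numpy as np

def I_V2_basis(d, k):
    """I_{V^2} for the probability measure putting mass 1/d on each
    vector of an orthonormal basis of R^d: the average of
    det Gram(e_{i_1}, ..., e_{i_k}) over all d^k index tuples."""
    basis = np.eye(d)
    total = 0.0
    for idx in product(range(d), repeat=k):
        X = basis[list(idx)]             # k x d matrix of chosen vectors
        total += np.linalg.det(X @ X.T)  # squared k-volume V^2
    return total / d**k

d, k = 4, 3
print(I_V2_basis(d, k))                          # 24/64 = 0.375
print(factorial(d) / (factorial(d - k) * d**k))  # same closed form
```

Since this basis measure is isotropic, by the Cahill--Casazza theorem every isotropic measure (including $\sigma$) attains this same maximal value of $I_{V^2}$.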
The case $s=2$ is also special due to the fact that in the classical two-input setting this is exactly the phase transition for the Riesz energy on the sphere, which is maximized uniquely by the uniform surface measure $\sigma$ for $0<s<2$ and by discrete measures for $s>2$ \cite{Bj} (see also Proposition \ref{prop:k=2} below). Some of our main results suggest that similar behavior persists in the multivariate case, although the case $0<s<2$ (including the very natural $s=1$) remains out of reach. We conjecture that the uniform surface measure $\sigma$ maximizes both $I_{A^s} (\mu)$ and $I_{V^s} (\mu)$ when $0<s<2$. \\
The main machinery for our optimization results is a variant of the semidefinite programming method. We adapt the method developed by Bachoc and Vallentin for finding three-point packing bounds for spherical codes \cite{BV}. Three-point bounds were also applied to energy optimization for pair potentials in \cite{CW} and for the multivariate $p$-frame energy in \cite{BFGMPV2}. The approach of Bachoc and Vallentin was later generalized by Musin \cite{Mu}, who established the $k$-point version of the packing bounds. This method is actively utilized for solving packing/energy problems (see, e.g.\ \cite{DMOV, DDM}), but its applicability is typically limited due to the complexity of the actual semidefinite programs. Our paper seems to be the first one where general $k$-point bounds are explicitly used for all positive integers $k$.\\
The paper is organized as follows. Section \ref{sec:not} describes the relevant background, definitions, notation, and covers the two-input case of the energies. Section \ref{sec:Discrete A and V} presents the applications of our main results to some geometric optimality properties of the regular simplex. In Section \ref{sec:SD} we discuss the semidefinite programming approach of \cite{BV} and demonstrate how it leads to optimization results for $3$-input energies with geometric kernels. Section \ref{sec:vol} shows how the known results about $I_{V^2}$ \cite{CC} can be used to obtain partial results for $I_{A^2}$, as well as the discreteness of maximizers for $I_{V^s}$ and $I_{A^s}$ with $s>2$. In Section \ref{sec:k-point} we provide a self-contained description of $k$-point semidefinite bounds for the sphere and give a general construction of $k$-positive definite multivariate functions based on these bounds. Finally, in the main result of Section \ref{sec:A^2 on Sphere}, we use multivariate functions from Section \ref{sec:k-point} to prove that the energy $I_{A^2}$ based on the squared volume of a simplex is maximized by balanced isotropic measures on the sphere. In the Appendix (Section \ref{sec:Appen A^2}) we give an explicit expression for the potential $A^2$.
\section{Background and notation}\label{sec:not}
The notation in this paper generally follows \cite{BFGMPV1}. Most of the optimization problems, with a few exceptions, will be formulated for measures or finite configurations of points on the unit Euclidean sphere $\mathbb{S}^{d-1}$. Often the potentials will be invariant under replacing an argument by its opposite; essentially, this means that the underlying space is the real projective space $\mathbb{RP}^{d-1}$, but we will still formulate our results in terms of the unit sphere.
In what follows, the domain $\Omega$ is either the sphere $\mathbb S^{d-1}$ or the Euclidean space $\mathbb R^d$.
Assume $ k \in \mathbb{N}\setminus\{1\}$ is the number of inputs and the kernel $K: \Omega^k \rightarrow \mathbb{R}$ is continuous. We denote by $\mathcal{M}(\Omega)$ the set of finite signed Borel measures on $\Omega$, and by $\mathcal{P}(\Omega)$ the set of Borel probability measures on $\Omega$. If $\Omega = \mathbb{R}^d$, we define $\mathcal{P}^*(\mathbb R^d)$ to be the set of Borel probability measures $\mu$ on $\mathbb R^d$ satisfying \begin{equation}\label{eq:moment}
\int_{\mathbb R^d} \| x \|^2 d \mu(x) = 1. \end{equation} Observe that, by a slight abuse of notation, $\mathcal{P}(\mathbb S^{d-1}) \subset \mathcal{P}^*(\mathbb R^d)$.
Let $\omega_N = \{ z_1, z_2, \ldots, z_N\}$ be an $N$-point configuration (multiset) in $\Omega$, for $N \geq k$. Then the discrete $K$-energy of $\omega_N$ is defined to be \begin{equation*}\label{eq:DiscreteEnergyDef} E_K(\omega_N) := \frac{1}{N^k} \sum_{j_1=1}^{N} \cdots \sum_{j_k = 1}^{N} K(z_{j_1}, \ldots, z_{j_k}). \end{equation*} Similarly, we define the energy integral for measures on $\Omega$: for $\mu \in \mathcal{M}(\Omega)$, \begin{equation*}\label{eq:ContEnergyDef} I_K(\mu ) = \int_{ \Omega} \cdots \int_{\Omega} K(x_1, \ldots, x_k) \,d\mu(x_1) \cdots d\mu(x_k), \end{equation*} when absolutely convergent, as will be the case in all of the contexts considered below. In the present paper we shall be interested in finding probability measures ($\mu \in \mathcal{P}(\mathbb S^{d-1} ) $ or $\mu \in \mathcal{P}^*(\mathbb R^d)$) which optimize (in most cases, maximize) the energy integrals $I_K$.
\subsection{Isotropic measures and frame energy}
The \textit{$p$-frame potential} is defined as $|\langle x,y \rangle|^p$. The notion of the $2$-frame potential, or simply \textit{frame potential}, was introduced by Benedetto and Fickus \cite{BF}, and later generalized to $p \in (0, \infty)$ by Ehler and Okoudjou \cite{EO}. Minimization of the frame energy is well understood: the following lemma is usually stated for $\mu \in \mathcal P (\mathbb S^{d-1})$, see e.g. Theorem 4.10 in \cite{BF}, but the extension to $ \mathcal{P}^*(\mathbb R^d)$ is straightforward (see also Remark \ref{rem:proj} below).
\begin{lemma}\label{lem:frame} For any {$\mu\in \mathcal{P}^*(\mathbb R^d)$}, and hence also any $\mu \in \mathcal P (\mathbb S^{d-1})$, \begin{equation*} \int_{ \Omega} \int_{\Omega} \langle x,y \rangle^2\, d\mu(x) d\mu(y) \geq \frac 1 d. \end{equation*} \end{lemma}
It is easy to see that the equality in the estimate above is achieved precisely for the measures which satisfy \begin{equation}\label{eq:isotropic_init} \int_{ \Omega} x x^T \, d\mu(x)=\frac 1 d I_d, \end{equation} where $I_d$ is the $d\times d$ identity matrix. It will be convenient for us to use this condition in the following form: for any $y\in\mathbb{S}^{d-1}$, \begin{equation}\label{eq:isotropic} \int_{ \Omega} \langle x, y\rangle^2\, d\mu(x)=\frac 1 d. \end{equation}
Measures which satisfy \eqref{eq:isotropic_init} or, equivalently, \eqref{eq:isotropic}, are called \textit{isotropic}. We note that $\operatorname{Tr} (x x^T)=\|x\|^2$, so \eqref{eq:isotropic_init} implies $\int_{\Omega} \| x \|^2 \,d \mu(x) = 1$; in other words, all isotropic measures on $\mathbb{R}^d$ automatically belong to $\mathcal{P}^*(\mathbb{R}^d)$.
The discrete version of Lemma \ref{lem:frame} states that for $N \geq d$ and $\{ x_1,\dots,x_N\} \subset \mathbb S^{d-1}$, \begin{equation*} \sum_{i=1}^{N}\sum_{j = 1}^{N} \langle x_i, x_j \rangle^2\geq \frac {N^2} d. \end{equation*} Discrete sets for which this bound is sharp are known as \textit{unit norm tight frames}, which explains the term \emph{frame energy}. The lower bound for the discrete frame energy is a special case of bounds by Welch \cite{W} and Sidelnikov \cite{Sid}.
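These bounds are easy to probe numerically. The following sketch (the helper names `discrete_energy`, `frame_potential`, and `rand_unit` are ours, purely for illustration) checks that an orthonormal basis, a unit norm tight frame, attains the bound $1/d$ for the normalized discrete frame energy, while random configurations stay above it:

```python
import itertools
import math
import random

def discrete_energy(K, points, k):
    """Discrete K-energy: average of K over all ordered k-tuples of points."""
    N = len(points)
    total = sum(K(*tup) for tup in itertools.product(points, repeat=k))
    return total / N ** k

def dot(a, b):
    return sum(p * q for p, q in zip(a, b))

def frame_potential(x, y):
    return dot(x, y) ** 2

def rand_unit(d):
    """A uniformly random point on the unit sphere in R^d."""
    v = [random.gauss(0.0, 1.0) for _ in range(d)]
    n = math.sqrt(sum(c * c for c in v))
    return [c / n for c in v]

d = 3
onb = [[1.0 if i == j else 0.0 for j in range(d)] for i in range(d)]
# An orthonormal basis is a unit norm tight frame: the bound 1/d is attained.
tight_energy = discrete_energy(frame_potential, onb, 2)
```

With the $1/N^k$ normalization used above (matching the definition of $E_K$), the Welch--Sidelnikov bound reads $E_{\langle \cdot,\cdot\rangle^2}(\omega_N) \geq 1/d$.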
There is a natural projection $\pi:\mathcal{P}^*(\mathbb{R}^d)\rightarrow \mathcal{P}(\mathbb{S}^{d-1})$ that maps isotropic measures in $\mathbb{R}^d$ onto isotropic measures in $\mathbb{S}^{d-1}$. First, we define the projection $\pi_0:\mathbb{R}^d\setminus\{0\}\rightarrow\mathbb{S}^{d-1}$ by $\pi_0(x)=x/\|x\|$. Now for any $\mu\in \mathcal{P}^*(\mathbb{R}^d)$, we define $\mu^*=\pi(\mu)$ as {the pushforward measure $ (\pi_0)_\# \|x\|^2\, d\mu(x) $, that is}: for any Borel subset $B$ of $\mathbb{S}^{d-1}$, we set
$$\mu^*(B)=\int_{\pi_0^{-1}(B)} \|x\|^2\, d\mu(x).$$ Clearly, $\mu^*$ is a Borel probability measure on $\mathbb{S}^{d-1}$. Checking \eqref{eq:isotropic}, we can also see that for an isotropic $\mu$, $\pi(\mu)$ is isotropic too.
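For a discrete measure $\mu = \sum_i w_i \delta_{x_i}$ the projection is an explicit reweighting: the atom $x_i$ moves to $x_i/\|x_i\|$ and its weight becomes $w_i\|x_i\|^2$. A small sketch (the example measure and helper names are our own; the weights were solved by hand so that the second moment is $I_2/2$ and the total mass is $1$):

```python
import math

def project(atoms):
    """pi: atom (x, w) in R^d maps to (x/||x||, w*||x||^2) on the sphere."""
    out = []
    for x, w in atoms:
        r2 = sum(c * c for c in x)
        r = math.sqrt(r2)
        out.append((tuple(c / r for c in x), w * r2))
    return out

def second_moment(atoms, d):
    """The matrix sum of w * x x^T over the atoms, as nested lists."""
    M = [[0.0] * d for _ in range(d)]
    for x, w in atoms:
        for i in range(d):
            for j in range(d):
                M[i][j] += w * x[i] * x[j]
    return M

# An isotropic probability measure in R^2 supported on two radii.
mu = [((2.0, 0.0), 0.1), ((0.0, 2.0), 0.1),
      ((0.5, 0.0), 0.4), ((0.0, 0.5), 0.4)]
```

Running `project(mu)` and recomputing the second moment confirms numerically that isotropy survives the projection.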
\begin{remark}\label{rem:proj} For potentials $K$ that are homogeneous of degree $2$ in each variable, the energy $I_K(\mu)$ is invariant under the projection $\pi$. The kernel $V^2$ is such a function, since it is the determinant of the Gram matrix of $\{ x_1,\dots,x_k\}$. This property is also satisfied by the frame potential $K(x,y) = \langle x,y \rangle^2$. In such cases, it is sufficient to find optimizers for probability measures on the sphere in order to solve an optimization problem in $\mathcal{P}^*(\mathbb{R}^d)$. \end{remark}
We call a measure $\mu$ \textit{balanced} if $\int_{\Omega} x\, d\mu(x) = 0$, i.e. the center of mass is at the origin. Balanced isotropic measures can be used to construct isotropic measures in higher dimensions, as will be seen in the proof of Theorem~\ref{thm:A^2maxGen}.
\subsection{Linear programming and positive definite kernels} The linear programming method, developed for the spherical case in \cite{DGS}, appeared to be successful in finding optimizing measures and point configurations as well as in giving lower bounds for two-point interaction energies (see, e.g., \cite{BGMPV1,CK, Y}). Here we briefly describe how it works. In Sections \ref{sec:SD} and \ref{sec:k-point}, we explain in more detail how the method is extended to semidefinite bounds for $k$-point energies.
A symmetric kernel $K: (\mathbb S^{d-1})^2 \rightarrow \mathbb{R}$ is called \textit{positive definite} if for every $\nu \in \mathcal{M}(\mathbb S^{d-1})$, the energy integral satisfies $I_K(\nu) \geq 0$. A classical theorem of Schoenberg characterizes positive definite kernels via Gegenbauer polynomials \cite{Sch}. The Gegenbauer polynomials $P_m^d$, $m\geq 0$, form an orthogonal basis on $[-1,1]$ with respect to the measure $(1-t^2)^{\frac{d-3}2}dt$. Here, $P_m^d$ is normalized so that $P_m^d(1) = 1$. Every continuous function on $[-1,1]$ admits an expansion \begin{equation}\label{eq:GegenbauerExpansion} f(t)=\sum_{m=0}^{\infty} \hat{f}_m P_m^d(t), \end{equation} where the sum converges uniformly and absolutely if $K(x,y) = f( \langle x, y \rangle)$ is positive definite on $\mathbb{S}^{d-1}$ (due to Mercer's theorem). Rotationally invariant positive definite kernels on the sphere are exactly characterized by the positivity of their Gegenbauer coefficients.
\begin{theorem}[Schoenberg \cite{Sch}]\label{thm:sch} The kernel $K(x,y)=f(\langle x,y \rangle)$ is positive definite on $\mathbb{S}^{d-1}$ if and only if all coefficients $\hat{f}_m$ of the Gegenbauer expansion \eqref{eq:GegenbauerExpansion} are non-negative. \end{theorem}
More background on Gegenbauer polynomials, energy, and positive definite kernels on the sphere can be found in \cite{AH, BHS}.
If one can bound a given function $f$ from below by a positive definite (modulo a constant) function $h$, usually a polynomial, then the linear programming bounds on the energy of $f$ are then essentially consequences of the inequalities $$\int_{\mathbb{S}^{d-1}}\int_{\mathbb{S}^{d-1}} P_m^d(\langle x,y \rangle) \,d\mu(x) d\mu(y)\geq 0.$$ For example, $P_2^d(t)=\frac {dt^2-1} {d-1}$ and the inequality above immediately implies the lower bound in Lemma \ref{lem:frame}. \\
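A discrete analogue of the displayed inequality is the positive semidefiniteness of the quadratic form $\sum_{i,j} w_i w_j P_2^d(\langle x_i, x_j\rangle)$ for arbitrary signed weights $w_i$. A quick numerical check of this for $m = 2$ (a sketch with our own helper names, not a library API):

```python
import math
import random

def P2(t, d):
    """Normalized degree-2 Gegenbauer polynomial: P_2^d(t) = (d t^2 - 1)/(d - 1)."""
    return (d * t * t - 1.0) / (d - 1.0)

def quad_form(points, weights, d):
    """Discrete analogue of the double integral of P_2^d against a signed measure."""
    return sum(wi * wj * P2(sum(a * b for a, b in zip(xi, xj)), d)
               for xi, wi in zip(points, weights)
               for xj, wj in zip(points, weights))

def rand_unit(d):
    v = [random.gauss(0.0, 1.0) for _ in range(d)]
    n = math.sqrt(sum(c * c for c in v))
    return [c / n for c in v]
```

The normalization $P_2^d(1) = 1$ and the nonnegativity of the quadratic form for random signed weights can both be confirmed directly.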
\subsection{$k$-positive definite kernels} As an extension of the notion of positive definite kernels to the multivariate case, we define \textit{$k$-positive definite kernels}. Let $K: (\mathbb S^{d-1})^k \rightarrow \mathbb{R}$ be continuous and symmetric in the first two variables. We define the \textit{potential function of $K$ for fixed $z_3, \ldots, z_k$} as \begin{equation}\label{eq:PotentialFuncDef} U_{K}^{z_3, \ldots, z_{k}}(x,y):= K(x,y, z_3, \ldots, z_k). \end{equation} We call $K$ $k$-positive definite if for any $z_3, \ldots, z_k \in \mathbb S^{d-1}$, the potential function $U_{K}^{z_3, \ldots, z_{k}}(x,y)$ is positive definite as a function of $x$ and $y$. For kernels symmetric in all variables, this definition is the same as the one given in \cite{BFGMPV1}. A kernel $Y$ is $k$-negative definite if $-Y$ is $k$-positive definite. In Section \ref{sec:k-point} we provide a self-contained construction of large classes of $k$-positive definite kernels for $\mathbb{S}^{d-1}$. Here we collect some general results about positive definiteness and energy minimization for multivariate kernels.
\begin{lemma}\label{lem:Schur's Lemma} Suppose that $K_1, K_2, \ldots$ are $k$-positive definite. Then $K_1 + K_2$ and $K_1 K_2$ are $k$-positive definite. If the sequence of $K_j$'s converges (uniformly in the first two variables and pointwise in the others) to a kernel $K$, then $K$ is also $k$-positive definite. \end{lemma}
This result follows immediately from the same results for two-input kernels. Similarly, we have the following:
\begin{lemma}\label{lem:Schur's Lemma2} Suppose that $K_1, K_2, \ldots$ are kernels such that each $I_{K_j}$ is minimized by some probability measure $\mu$. Then $I_{K_1 + K_2}$ is also minimized by $\mu$. If the sequence of $K_j$'s converges (uniformly in the first two variables and pointwise in the others) to a kernel $K$, then $I_K$ is also minimized by $\mu$. \end{lemma} As in the two-input case, multiplication does not generally preserve the minimizers of energies.
\begin{proposition}\label{prop:kPosDef and Energy Zero is Min} Suppose that $Y$ is a $k$-positive definite kernel on $\mathbb S^{d-1}$ and $\mu \in \mathcal{P}(\mathbb S^{d-1})$ with $I_Y(\mu) = 0$. Then $\mu$ is a minimizer of $I_Y$. \end{proposition}
\begin{proof} Let $\nu \in \mathcal{P}(\mathbb S^{d-1})$. Then, since $Y$ is $k$-positive definite, \begin{equation*} I_Y(\nu) \geq \min_{z_3, \ldots, z_k \in \mathbb S^{d-1}} \int_{\mathbb S^{d-1}} \int_{\mathbb S^{d-1}} Y(x,y, z_3, \ldots, z_k)\, d\nu(x) d\nu(y) \geq 0 = I_Y(\mu). \end{equation*} \end{proof}
We can create multivariate kernels from kernels with fewer inputs in a natural way that preserves minimizers of the energy. \begin{lemma}\label{lem:kPosDef to nPosDef} For some kernel $Y: (\mathbb S^{d-1})^k \rightarrow \mathbb{R}$ and $n > k$, let \begin{equation}
K(x_1, x_2, \ldots, x_n) = \frac{1}{|S|} \sum_{\pi \in S} Y(x_1, x_2, x_{\pi(3)}, \ldots, x_{\pi(k)}), \end{equation} where $S$ is a nonempty set of permutations of the set $\{ 3, \ldots, n\}$. Then $I_K$ is minimized by $\mu \in \mathcal{P}(\mathbb S^{d-1})$ if and only if $I_Y$ is as well. In addition, if $Y$ is $k$-positive definite, then $K$ is $n$-positive definite. \end{lemma}
Note that if $S$ is the set of all such permutations, then $K$ is symmetric in the last $n-2$ variables.
\begin{proof} For any $\nu \in \mathcal{M}(\mathbb S^{d-1})$, we see that \begin{equation*}
I_K(\nu) = I_Y(\nu), \end{equation*} meaning their minimizers must be the same, and for any $z_3, \ldots, z_n \in \mathbb S^{d-1}$, \begin{equation*}
\int_{\mathbb S^{d-1}} \int_{\mathbb S^{d-1}} K(x,y, z_3, \ldots, z_n)\, d\nu(x) d\nu(y) = \frac{1}{|S|} \sum_{\pi \in S} \int_{\mathbb S^{d-1}} \int_{\mathbb S^{d-1}} Y(x, y, z_{\pi(3)}, \ldots, z_{\pi(k)})\, d\nu(x)\, d\nu(y), \end{equation*} which is non-negative if $Y$ is $k$-positive definite. \end{proof}
\begin{proposition}\label{prop:Min is Min for Symmetrization} For some kernel $Y: (\mathbb S^{d-1})^k \rightarrow \mathbb{R}$ let $K$ be defined by $$K( x_1,\ldots, x_k) = \frac{1}{k!} \sum_{\pi} Y( x_{\pi(1)}, \ldots, x_{\pi(k)}),$$ where $\pi$ varies over all permutations of $\{1, \ldots, k\}$. Then $K$ is a symmetric kernel, and $I_K$ is minimized by $\mu \in \mathcal{P}(\mathbb S^{d-1})$ if and only if $I_Y$ is as well. \end{proposition}
The proof is identical to that of Lemma \ref{lem:kPosDef to nPosDef}. We note that, unlike in Lemma \ref{lem:kPosDef to nPosDef}, $k$-positive definiteness of $Y$ does not imply that $K$ is also $k$-positive definite. In fact, in the three-input case, $-V^2$ and $-A^2$ in this paper are examples of symmetric kernels that are not $3$-positive definite \cite[Propositions 6.9 and 6.10]{BFGMPV1} but are the symmetrizations of $3$-positive definite kernels (modulo a constant), see \eqref{eq:Volume^2 SDP decomposition} and \eqref{eq:Area^2 SDP Decomposition}.
We finally remark that the discussion of this section generalizes to arbitrary compact metric spaces in place of the sphere $\mathbb S^{d-1}$.
\subsection{Two-input volumes}\label{sec:k=2}
Here, we address the two-input versions of $V^2$ and $A^2$ on the sphere.
\begin{proposition}\label{prop:vsas} Let $k=2$. On the sphere $\mathbb{S}^{d-1}$, $\sigma$ is a maximizer of the two-input energies $I_{A^s}$ and $I_{V^s}$ for $0<s<2$. Moreover, in the case of $A^s$, $\sigma$ is the unique maximizer. \end{proposition}
\begin{proof} It is well known, see e.g. \cite[Proposition 2.3]{BGMPV1}, that $\sigma$ is a minimizer of $I_K$, where $K(x,y)=f(\langle x,y\rangle)$, if and only if for the Gegenbauer expansion $f(t)=\sum_{m=0}^{\infty} \hat{f}_m P_m^d(t)$, $\hat{f}_m\geq 0$ for all $m\geq 1$ (which, according to Theorem \ref{thm:sch}, is equivalent to the fact that $f$ is positive definite on $\mathbb S^{d-1}$ modulo an additive constant). Moreover, $\sigma$ is the unique minimizer if $\hat{f}_m> 0$ for all $m\geq 1$.
We note that it is sufficient to use a weaker condition based on Maclaurin expansions of $f$. Assume $f(t)=\sum_{m=0}^{\infty} f^*_m t^m$ for $t\in[-1,1]$, with the series converging uniformly and absolutely. Each function $t^m$ is positive definite on $\mathbb{S}^{d-1}$ by Schur's product theorem, and, by Theorem \ref{thm:sch}, it can be represented as a non-negative combination of Gegenbauer polynomials with, in particular, a positive coefficient for $P_m^d(t)$. This means that, whenever all $f^*_m$ are non-negative (positive) for $m\geq 1$, then all Gegenbauer coefficients $\hat{f}_m$ for $m\geq 1$ are also non-negative (positive). We need to show that $\sigma$ is a maximizer, so it is sufficient to check that all coefficients, starting from $m=1$, of the Maclaurin expansions of $V^s$ and $A^s$ are nonpositive.
Indeed, \begin{equation*} V^s(x,y) = (V^2)^{s/2} = \left( 1-\langle x,y\rangle^2 \right)^{s/2} = \sum_{m=0}^{\infty} (-1)^m \binom{s/2}{m} \langle x,y \rangle^{2m}. \end{equation*} Similarly, \begin{equation*} A^s(x,y) = (A^2)^{s/2} = (2-2\langle x,y\rangle)^{s/2}=2^{s/2} (1-\langle x,y\rangle)^{s/2} = 2^{s/2}\sum_{m=0}^{\infty} (-1)^m \binom{s/2}{m} \langle x,y \rangle^m. \end{equation*} In both cases, $(-1)^m \binom{s/2}{m}$ is negative for all $m\geq 1$ so $\sigma$ is a maximizer for $V^s$ and the unique maximizer for $A^s$. \end{proof}
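The sign pattern of the coefficients $(-1)^m\binom{s/2}{m}$ is easy to verify numerically; the following sketch uses a hand-rolled generalized binomial coefficient (our own helper, not a library call):

```python
from math import factorial

def gen_binom(a, m):
    """Generalized binomial coefficient binom(a, m) for real a and integer m >= 0."""
    num = 1.0
    for i in range(m):
        num *= a - i
    return num / factorial(m)

# For 0 < s < 2 and m >= 1 the product (-1)^m * binom(s/2, m) is negative,
# which is exactly the sign condition used in the proof above.
```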
\begin{remark} Since $V$ is invariant under central symmetry, it would be natural to consider it as a potential on the projective space $\mathbb{RP}^{d-1}$. Under this setup the uniform distribution over $\mathbb{RP}^{d-1}$ is the unique maximizer of $I_{V^s}$. \end{remark}
Since $A(x,y) = \| x-y \|$, the statements about $A^s$ in Proposition \ref{prop:vsas} can be viewed as a special case of a more general result of Bjorck \cite{Bj}. Below we collect his results specialized to the sphere.
\begin{proposition}[Bjorck \cite{Bj}]\label{prop:k=2}
Let $k=2$, i.e. $A (x,y) = \| x- y \|$. For the two-input energy $I_{A^s}$ on the sphere $\mathbb{S}^{d-1}$, \begin{itemize} \item if $0<s<2$, then $\sigma$ is the unique maximizer of $I_{A^s}$; \item if $s = 2$, then $\mu$ is a maximizer of $I_{A^s}$ if and only if $\mu$ is balanced; \item if $s > 2$, then the maximizers of $I_{A^s}$ are exactly measures of the form $\frac{1}{2}( \delta_{p} + \delta_{-p})$, for some $p \in \mathbb{S}^{d-1}$. \end{itemize} \end{proposition} A similar proposition about the maximizers over $\mathcal P(\mathbb S^{d-1})$ can be formulated for powers of $V$ in the two-input case.
\begin{proposition}\label{prop:Vk=2} Let $k=2$, i.e. $V (x,y) = \left( 1-\langle x,y\rangle^2 \right)^{1/2} $. For the two-input energy $I_{V^s}$ on the sphere $\mathbb{S}^{d-1}$, \begin{itemize} \item if $0<s<2$, then $\sigma$ is a maximizer of $I_{V^s}$; \item if $s = 2$, then $\mu$ is a maximizer of $I_{V^s}$ if and only if $\mu$ is isotropic; \item if $s > 2$, then the only maximizers (up to central symmetry and rotation) of $I_{V^s}$ are uniform measures on the elements of an orthonormal basis of $\mathbb R^d$, i.e. measures of the form $\frac{1}{d} \sum_{i=1}^d \delta_{e_i}$, where $\{e_i\}_{i=1}^d$ is an orthonormal basis of $\mathbb R^d$. \end{itemize} \end{proposition}
The case $0<s<2$ is covered in Proposition \ref{prop:vsas} above. The phase transition case $s=2$ follows from the case of equality in Lemma \ref{lem:frame}. The case $s>2$ can be easily handled by the linear programming method, but we give the proof of a more general statement for all $2\le k\le d$ in Corollary \ref{cor:V^s for s>2}. Exposition on the logarithmic and singular energies ($s < 0$) can be found in \cite{BHS} (and the references therein) for $A^s$ and \cite{CHS} for $V^s$.
\subsection{Comparison of two-input and multi-input energies}\label{sec:compare} The multi-input, i.e. $k\geq 3$, generalizations of Propositions \ref{prop:k=2} and \ref{prop:Vk=2}, which are naturally more complicated, require different methods and form the main purpose of this paper. As stated in the introduction, we believe that the uniform measure $\sigma$ still maximizes both $I_{A^s}$ and $I_{V^s}$ in the range $0<s<2$ for $k\ge 3$, but this remains a conjecture.
When $s=2$ and $k\ge 3$, maximizers of $I_{V^2}$ are, as in Proposition \ref{prop:Vk=2},
exactly the isotropic measures on $\mathbb S^{d-1}$ \cite{CC} (see Theorem \ref{thm:volume-gen}). However, we shall show (see Theorem \ref{thm:area-gen}, as well as Theorems \ref{thm: triangle area squared max} and \ref{thm:A^2maxGen}) that the maximizers of $I_{A^2}$ for $3\le k \le d+1$ are exactly \emph{balanced isotropic measures} (and not just balanced as in Proposition \ref{prop:k=2} for $k=2$).
The case $s>2$ of Proposition \ref{prop:Vk=2} for $I_{V^s}$ still holds for all $2\le k \le d$ (Corollary \ref{cor:V^s for s>2}). However, we are only able to prove an analogue of this case for $A^s$ when $k=d+1$ (Corollary \ref{cor:A^s for s>2}): the uniform measure on the vertices of a regular simplex replaces the two poles as the unique (up to rotations) maximizer of $I_{A^s}$ for $s>2$. We conjecture that maximizers of $I_{A^s}$ with $s>2$ are discrete for all $3\le k \le d+1$, but their exact structure remains elusive (see end of Section~\ref{sec:A^2 on Sphere}).
This discussion shows that in the multi-input case $k\ge 3$ the behavior of $A^s$ is significantly more complicated than that of $V^s$, which is evidenced already by the fact that the polynomial representation of $A^2$ (Lemma \ref{lem:A-formula}) is more involved than that of $V^2$.
\section{Discrete Energies and Optimality of the Regular Simplex}\label{sec:Discrete A and V}
Before presenting the study of maximizers of continuous energies with kernels $V^s$ and $A^s$, we discuss their discrete analogues with $N=d+1$ points, and find that the regular simplex is a maximizer for $0 < s < 2$. Consequently, we discover a new geometrically optimal property of the regular simplex. These statements use the results from Sections \ref{sec:vol} and \ref{sec:A^2 on Sphere} about continuous $k$-point energies as a tool. We chose to open with these discrete results since, in our opinion, they yield particularly elegant applications of the theory. We start with a general statement:
\begin{theorem}\label{thm: simplex best, general} Let $2 \leq k \leq d+1$ and $B: ( \mathbb{S}^{d-1})^k \rightarrow [0, \infty)$ be a polynomial kernel of degree at most two in each variable, such that $\sigma$ maximizes $I_B$ and whenever $x_i = x_j$ for some $i \neq j$, then $B(x_1, x_2, \ldots, x_k) = 0$.
Let $f: [0, \infty) \rightarrow \mathbb{R}$ be concave, increasing, and such that $f(0) = 0$, and define the kernel $K(x_1, \ldots, x_k) = f(B(x_1, \ldots, x_k))$. If $N= d+1$, then the set of vertices of a regular $(N-1)$-simplex inscribed in $\mathbb{S}^{d-1}$ maximizes the discrete energy $E_K(\omega_N)$ over all $N$-point configurations on the sphere.
Moreover, if $f$ is strictly concave and strictly increasing, then the vertices of regular $(N-1)$-simplices are the only maximizers of the energy (if $B$ doesn't contain terms which are linear in some of the variables, the uniqueness is up to changing any individual vertex $x$ to its opposite $-x$). \end{theorem}
\begin{proof}
Let $\omega_N = \{ z_1, \ldots, z_N\}$ be an arbitrary point configuration on $\mathbb{S}^{d-1}$. Since $B$ is zero if two of its inputs are the same, we can restrict the sum to $k$-tuples with distinct entries. Combining this with the fact that $f$ is increasing and concave, using Jensen's inequality, we have \begin{align*} E_K(\omega_N) & := \frac{1}{N^k} \sum_{z_1, \ldots, z_k \in \omega_N} f(B(z_1, \ldots, z_k)) \\ & \leq \frac{N (N-1) \cdots (N-k+1)}{N^k} f \left( \sum_{\substack{z_{j_1}, \ldots, z_{j_k} \in \omega_N \\ j_1, \ldots, j_k \text{ distinct}}} \frac{B(z_{j_1}, \ldots, z_{j_k})}{N (N-1) \cdots (N-k+1)}\right)\\ & = \frac{N (N-1) \cdots (N-k+1)}{N^k} f \left( \frac{ N^k E_B (\omega_N)}{N (N-1) \cdots (N-k+1)}\right)\\ & \leq \frac{N (N-1) \cdots (N-k+1)}{N^k} f \left( \frac{ N^k I_B(\sigma)}{N (N-1) \cdots (N-k+1)}\right).\\ \end{align*} The first inequality becomes an equality if \begin{equation*} B(y_1,\ldots, y_k) = \frac{N^k E_B (\omega_N)}{N (N-1) \cdots (N-k+1)} \end{equation*} for all distinct $y_1, \ldots, y_k \in \omega_N$, while the second becomes an equality if the point configuration is a spherical $2$-design, in particular, if $\omega_N$ is a regular simplex. The case of uniqueness is similar. \end{proof}
This generalizes some known results for $B(x,y) = \| x-y\|^2$ \cite{Y, CK}. Note that this proof also extends to provide an upper bound of the energy $E_{f \circ B}(\omega_N)$ for every $N \geq k$, and that this upper bound is achieved whenever $B(z_{j_1}, \ldots, z_{j_k})$ is constant for every $k$-tuple of distinct points, meaning that one may find additional optimizers of the energy. For instance, for $B=V^2$ and $N = d$, any orthonormal basis would be a maximizer (though this would not work for $B = A^2$, since an orthonormal basis is not balanced). An upper bound of this form was given for $E_{V}(\omega_N)$ in \cite[Corollary 5.2]{CC}.
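As an illustration of this optimality in the smallest nontrivial case ($d = 2$, $N = 3$, $k = 2$, $B(x,y) = \|x-y\|^2$, $f(t) = \sqrt t$), a Monte Carlo sanity check (our own sketch; the function names are ours) compares the equilateral triangle against random configurations on the circle:

```python
import math
import random

def energy_dist(points, s=1.0):
    """Discrete two-input energy: normalized sum of |x - y|^s over ordered pairs."""
    N = len(points)
    total = sum(math.dist(x, y) ** s for x in points for y in points)
    return total / N ** 2

def circle_pts(angles):
    return [(math.cos(a), math.sin(a)) for a in angles]

# Regular 2-simplex (equilateral triangle) inscribed in the unit circle.
simplex = circle_pts([0.0, 2 * math.pi / 3, 4 * math.pi / 3])
best = energy_dist(simplex)  # equals 2*sqrt(3)/3
```

The equilateral triangle has all pairwise distances equal to $\sqrt 3$, so its normalized energy is $6\sqrt 3/9 = 2\sqrt 3/3$, and no random triple should exceed it.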
We will show in subsequent sections that $\sigma$ maximizes the continuous energies with kernels $V^2$ and $A^2$ (Theorems \ref{thm:area-gen} and \ref{thm:volume-gen}), both of which are polynomials of degree two. Hence Theorem \ref{thm: simplex best, general} applies, immediately yielding the following corollary:
\begin{corollary}\label{cor:Simplex Best A, V} Assume that either $K(x_1, \ldots, x_k) = V(x_1, \ldots, x_k)^s$ with $2 \leq k \leq d$, or $K(x_1, \ldots, x_k) = A(x_1, \ldots, x_k)^s$ with $2 \leq k \leq d+1$.
Let $0 < s \le 2$. For $N=d+1$ points, the discrete $k$-input energy $E_K$ on the sphere $\mathbb S^{d-1}$ is uniquely (up to rotations, and up to central symmetry in the case of $V^2$) maximized by the vertices of a regular simplex inscribed in $\mathbb{S}^{d-1}$. \end{corollary}
\begin{proof} For $0<s<2$, the function $f(t) = t^{s/2}$ is strictly concave and strictly increasing, so Theorem \ref{thm: simplex best, general} immediately applies. The uniqueness in the case $s=2$ needs a separate discussion. By Theorems \ref{thm:area-gen} and \ref{thm:volume-gen}, $I_{V^2}$ is maximized by isotropic measures on $\mathbb S^{d-1}$, and $I_{A^2}$ by balanced isotropic measures. In the discrete case, isotropic measures on $\mathbb S^{d-1}$ are exactly unit norm tight frames. The only tight frames on $\mathbb S^{d-1}$ with $N=d+1$ elements (up to central symmetry and rotations) are the vertices of a regular simplex \cite[Theorem 2.6]{GK}.
\end{proof}
Taking $K = A^s$ in Corollary \ref{cor:Simplex Best A, V}, and setting $j=k-1$, we obtain an interesting geometric result:
\begin{corollary}\label{cor:Geometric Simplex is best} Let $1 \leq j \leq d$, $0 < s \leq 2$, $S$ be a $d$-simplex inscribed in $\mathbb{S}^{d-1}$, $\mathcal{F}_{j}$ the set of $j$-dimensional faces of $S$, and $\mathrm{Vol}_{j}(C)$ the $j$-dimensional volume of a set $C$. Then \begin{equation}\label{eq:T-functional} \sum_{F \in \mathcal{F}_{j}} \mathrm{Vol}_{j}(F)^s \end{equation} achieves its maximum if and only if $S$ is a regular simplex. \end{corollary}
In the case $s = 1$, this generalizes the known results for $j=1$, i.e. the sum of distances between vertices \cite{F1}, $j=d-1$, i.e. the surface area \cite{Ta2}, and $j=d$, i.e. the volume \cite{Jo, Ta1, Ball2, HL}. We also note that \eqref{eq:T-functional} is a special case of the $T$-functional, which has received a fair amount of study, mostly in Stochastic Geometry (see e.g. \cite{A, GKT, HoLe, HMR, KMTT, KTT}).
\begin{remark} We note that by adjusting the definition of the discrete energy $E_K$ to only include summands where all inputs are distinct, we can study lower-semicontinuous kernels $K: (\mathbb{S}^{d-1})^k \rightarrow ( - \infty, \infty]$. In this case, if we define $f$ in the statement of Theorem \ref{thm: simplex best, general} as a decreasing, convex function $f: (0, \infty) \rightarrow \mathbb{R}$, with $f(0) = \lim_{x \rightarrow 0^+} f(x)$, an identical proof shows that the vertices of a regular simplex minimize $E_{f \circ B}$, as a generalization of \cite[Theorem 1.2]{CK}. In particular, as an extension of Corollary \ref{cor:Simplex Best A, V}, this shows that regular simplices are optimal for $-\log(A)$ and $-\log(V)$, as well as $A^{s}$ and $V^{s}$ for $s < 0$. \end{remark}
\section{Semidefinite Programming and Three-input Volumes}\label{sec:SD} In this section we recall the basics of semidefinite programming and apply it to the maximization of integral functionals with three-input kernels. It will be shown that isotropic probability measures on $\mathbb S^{d-1}$ maximize $I_{V^2}$, where $V$ is the volume of the parallelepiped spanned by the three inputs, among all probability measures, while the maximizers of the squared-area energy integral $I_{A^2}$ are exactly the balanced isotropic probability measures. In Sections \ref{sec:Powers of V} and \ref{sec:A^2 on Sphere}, these results are generalized to larger numbers of inputs and to measures on $\mathbb{R}^d$.
For brevity, we denote $u=\langle y,z \rangle$, $v=\langle x,z \rangle$, $t=\langle x,y \rangle$. We also take $\sigma$, as before, to be the uniform probability measure on the sphere $\mathbb{S}^{d-1}$. In \cite{BV}, Bachoc and Vallentin produced a class of infinite matrices and associated polynomials of the form \begin{equation}\label{eq:SemDefProgYs} (Y_{m}^d)_{i+1,j+1} (x,y,z) := Y_{m,i,j}^d(x,y,z) := P_i^{d + 2m}(u) P_{j}^{d + 2m}(v) Q_m^d(u,v,t), \end{equation} where $m,i,j \in \mathbb{N}_0$, $P_m^h$ is the normalized Gegenbauer polynomial of degree $m$ on $\mathbb{S}^{h-1}$ and \begin{equation}\label{eq:BachocValQKernel} Q_m^d(u,v,t) = ((1-u^2)(1-v^2))^{\frac{m}{2}}P_{m}^{d-1}\left(\frac{t-uv}{\sqrt{(1-u^2)(1-v^2)}}\right). \end{equation}
\begin{remark} Polynomials $Y_{m,i,j}^d$ in \cite{BV} were defined with certain coefficients which we omit here for the sake of simplicity. \end{remark}
Here we provide the upper left $3\times 3$, $2 \times 2$, and $1 \times 1$ submatrices of the infinite matrices $Y_0^d$, $Y_1^d$, and $Y_2^d$, respectively, which is all that we need for the rest of this section: $$\begin{pmatrix} 1 & v & \frac {dv^2-1} {d-1}\\ u & uv & u \frac {dv^2-1} {d-1}\\ \frac {dv^2-1} {d-1} & \frac {du^2-1} {d-1} v & \frac {du^2-1} {d-1} \frac {dv^2-1} {d-1}\end{pmatrix},$$ $$ \begin{pmatrix} t-uv & u (t-uv)\\ v(t-uv) & uv(t-uv)\end{pmatrix}, \begin{pmatrix} \frac {(d-1)(t-uv)^2-(1-u^2)(1-v^2)} {d-2}\end{pmatrix}.$$ By letting $\pi$ run through the group of all permutations of the variables $x$, $y$, and $z$, and averaging, they defined the following symmetric matrices and associated polynomials \begin{equation*} (S_{m}^d)_{i+1,j+1} (x,y,z) := S_{m,i,j}^d (x,y,z) := \frac{1}{6} \sum_{\pi} Y_{m,i,j}^d (\pi(x), \pi(y), \pi(z)). \end{equation*} These polynomials and matrices have a variety of nice properties: \begin{enumerate} \item\label{i} For any $\mu \in \mathcal{P}(\mathbb{S}^{d-1})$ and $e \in \mathbb{S}^{d-1}$, $$ \int_{\mathbb{S}^{d-1}} \int_{\mathbb{S}^{d-1}} Y_{m}^d(x,y,e)\, d\mu(x) d \mu(y)$$ and $$ S_m^d(\mu) := \int_{\mathbb{S}^{d-1}} \int_{\mathbb{S}^{d-1}} \int_{\mathbb{S}^{d-1}} S_{m}^d(x,y,z)\, d\mu(x) d \mu(y) d \mu(z)$$ are positive semidefinite, i.e. all principal minors (formed by finite submatrices) are nonnegative. \item\label{ii} For $(m,i,j) \neq (0,0,0)$, $I_{S_{m,i,j}^d}(\sigma) = 0$, and for all $e \in \mathbb{S}^{d-1}$, $ I_{Y_{m,i,j}^d}(\sigma, \sigma, \delta_{e}) = 0$. \item\label{iii} For $m \geq 1$ and $e \in \mathbb{S}^{d-1}$, $I_{S_{m,i,j}^d}(\delta_e) = I_{Y_{m,i,j}^d}(\delta_e) = 0$. \end{enumerate} We note that the paper \cite{BV} was only concerned with finite point sets. 
However, the results naturally extend to the continuous setting, and \eqref{i} is simply the extension of Corollary 3.5 in \cite{BV}, while \eqref{ii} follows from the construction of $Y_{m,i,j}$'s from spherical harmonics (see Theorem 3.1 and the preceding text, as well as equation (11), in \cite{BV}). Finally, \eqref{iii} follows from the fact that $Q_m^d(1,1,1) = 0$.
Now consider an infinite, symmetric, positive semidefinite matrix $A$ with finitely many nonzero entries. Then for any $m \geq 1$ and $\mu \in \mathcal{P}( \mathbb{S}^{d-1})$, $\operatorname{Tr}( S_m^d(\mu) A) \geq 0$, with equality if $\mu = \sigma$. (Indeed, observe that, for two positive semidefinite matrices $B=(b_{ij})$, $C = (c_{ij})$, Schur's product theorem implies that the Hadamard product $B\circ C = (b_{ij} c_{ij})$ is positive semidefinite, which leads to the inequality $\operatorname{Tr} (BC) = \sum_{i,j} b_{ij} c_{ij} \ge 0$.)
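The parenthetical trace argument can be sanity-checked numerically. In the sketch below (helper names are our own), random positive semidefinite matrices are generated in factored form as $R^T R$, and the trace of their product is verified to be nonnegative:

```python
import random

def transpose(A):
    return [list(row) for row in zip(*A)]

def mat_mul(A, B):
    """Product of two square matrices given as nested lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

def rand_psd(n):
    """A random positive semidefinite matrix, built as R^T R."""
    R = [[random.gauss(0.0, 1.0) for _ in range(n)] for _ in range(n)]
    return mat_mul(transpose(R), R)
```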
Likewise, let $A_0$ be an infinite, symmetric, positive semidefinite matrix with finitely many nonzero entries and such that all entries in the first row and first column are zeros. Then for any probability measure $\mu$, $\operatorname{Tr}( S_0^d(\mu) A_0) \geq 0$, with equality if $\mu = \sigma$. In this case, we require zeros in the first row and column because $S_{0,0,0}^d$ is a constant, so we would not otherwise get equality for $\sigma$ in the above inequality. This gives us the following: \begin{theorem}\label{thm:SemiDefMin} Let $n \in \mathbb{N}_0$. For each $m \leq n$, let $A_m$ be an infinite, symmetric, positive semidefinite matrix with finitely many nonzero entries, with the additional requirement that $A_0$ has only zeros in its first row and first column. Let $$K(x,y,z) = \sum_{m=0}^{n} \operatorname{Tr}( S_{m}^d(x,y,z)\, A_m).$$ Then $\sigma$ is a minimizer of $I_K$ over probability measures on the sphere $\mathbb{S}^{d-1}$. \end{theorem} \noindent Naturally, adding a constant to $K$ does not change this statement, and multiplying by $-1$ turns it into a maximization result. Observe also that, when $A$ is a diagonal matrix, $\operatorname{Tr}( S_m^d A) $ is simply a positive linear combination of the diagonal elements of $ S_m^d $. Theorem \ref{thm:SemiDefMin} is often applied in this way (see the proofs of Theorems \ref{thm:vol squared max} and \ref{thm: triangle area squared max} below), in close analogy to Theorem \ref{thm:sch}.
\subsection{Volume of a parallelepiped}
Maximizing the sum of distances between points on a space (or the corresponding distance integrals) is a very natural optimization problem for two-input kernels, and one which has garnered a fair amount of attention (see, e.g. \cite{B, BBS, BS, BD, BDM, Bj, F1, F2, Sk, St}), and, as mentioned in the introduction, higher dimensional analogues, such as area and volume, yield natural extensions for kernels with more inputs. In this section we discuss such questions for $k=3$ inputs, focusing on volume squared and area squared, as these produce polynomials which are easier to work with.
We first consider the kernel \begin{equation*} K(x,y,z) = V^2(x,y,z) = \det\begin{pmatrix} 1 & u & v \\ u & 1 & t \\ v & t & 1 \end{pmatrix}=1-u^2-v^2-t^2+2uvt \end{equation*} on the sphere $\mathbb{S}^{d-1}$, with $d > 2$, and where $V(x,y,z)$ is the volume of the parallelepiped formed by the vectors $x$, $y$, and $z$. As mentioned in \cite{BFGMPV1}, $-V^2$ is not 3-positive definite (modulo a constant) but as we show here, $\sigma$ is a minimizer of $I_{-V^2}$, i.e. a maximizer of $I_{V^2}$.
Indeed, we see that \begin{equation}\label{eq:Volume^2 SDP decomposition} V^2(x,y,z) = \frac{(d-1)(d-2)}{d^2} - \frac{(d-1)(d-2)}{d^2} S_{0,2,2}^d - \frac{4(d-2)}{d} S_{1,1,1}^d - \frac{(3d-4)(d-2)}{d (d-1)} S_{2,0,0}^d, \end{equation} so Theorem \ref{thm:SemiDefMin} tells us that $\sigma$ is a maximizer. Moreover, since $V^2$ is a polynomial of degree two in every variable, and has no linear terms, any isotropic measure on the sphere is also a maximizer, and in fact this classifies all maximizers. \begin{theorem}\label{thm:vol squared max} Isotropic probability measures on the sphere maximize $I_{V^2}$ over $\mathcal{P}(\mathbb{S}^{d-1})$. \end{theorem}
\subsection{Area of a triangle} Using the same method as in Theorem \ref{thm:vol squared max}, we can show that $\sigma$ is a maximizer of $I_{A^2}$, where $A(x,y,z)$ is the area of a triangle, since \begin{equation}\label{eq:Area^2 SDP decomposition} A^2(x,y,z) = \frac{1}{4} \Big(3 \frac{d-1}{d} - 3 \frac{d-2}{d-1} S_{2,0,0}^d -6 S_{1,1,1}^d - 6S_{1,0,0}^d - 3 \frac{d-1}{d}S_{0,2,2}^d \Big). \end{equation}
However, we can also prove this with a slightly different method, as a special case of Theorem \ref{thm:area-gen}, a more general statement that we will prove by means of $k$-point bounds.
\begin{theorem}\label{thm: triangle area squared max} Suppose $d \geq 2$, and let $ A^2(x,y,z) $ be the square of the area of the triangle with vertices at $x$, $y$, $z\in \mathbb S^{d-1}$. Then the uniform surface measure $\sigma$ maximizes $I_{A^2} (\mu)$ over $\mathcal{P} (\mathbb{S}^{d-1})$. Moreover, any balanced, isotropic measure $\mu \in \mathcal{P} (\mathbb{S}^{d-1})$ maximizes $I_{A^2}$. \end{theorem}
\begin{proof} Using Heron's formula, we express $A^2(x,y,z)$ via the scalar products of $x,y,z$: \begin{equation}\label{eq:Area^2 SDP Decomposition} A^2 (x,y,z) = \frac34 -\frac12 (u+v+t) + \frac12 (uv + vt +tu) - \frac{1}4 ( u^2 + v^2 + t^2) = \frac {3} {4} - \frac 3 2 S_{1,0,0}^d - \frac{1}4 ( u^2 + v^2 + t^2). \end{equation}
Note that $\sigma$ minimizes both $S_{1,0,0}^d$ and $u^2+v^2+t^2$ by Theorem \ref{thm:SemiDefMin} and Lemma \ref{lem:frame}, respectively. Therefore, for all $\mu \in \mathcal{P}(\mathbb{S}^{d-1})$, $$I_{A^2}(\mu)\leq \frac 3 4 - \frac 1 4 \cdot \frac 3 d = \frac {3(d-1)} {4d} = I_{A^2}(\sigma).$$ More generally, maximizers of $I_{A^2}$ must be isotropic measures in order to achieve the sharp bound from Lemma \ref{lem:frame} and must be balanced so that $S_{1,0,0}^d$ vanishes on them. \end{proof}
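The Heron-based polynomial identity for $A^2$ and the extremal value $3(d-1)/(4d)$ can both be verified numerically. The sketch below (plain Python; the helpers \texttt{area\_sq} and \texttt{heron\_poly} are illustrative names of ours) checks the identity on random unit vectors, and then evaluates the energy exactly for the balanced isotropic measure that is uniform on $\{\pm e_1,\ldots,\pm e_d\}$, which by the theorem attains the maximum.

```python
import math, random
from itertools import product

random.seed(1)

def dot(a, b):
    return sum(p*q for p, q in zip(a, b))

def area_sq(x, y, z):
    # squared triangle area via edge vectors a = y - x, b = z - x:
    # A^2 = (|a|^2 |b|^2 - <a,b>^2) / 4
    a = [p - q for p, q in zip(y, x)]
    b = [p - q for p, q in zip(z, x)]
    return 0.25 * (dot(a, a)*dot(b, b) - dot(a, b)**2)

def heron_poly(u, v, t):
    # A^2 = 3/4 - (u+v+t)/2 + (uv+vt+tu)/2 - (u^2+v^2+t^2)/4
    return 0.75 - 0.5*(u+v+t) + 0.5*(u*v+v*t+t*u) - 0.25*(u*u+v*v+t*t)

# 1) pointwise identity on random unit vectors
d = 4
for _ in range(200):
    pts = []
    for _ in range(3):
        w = [random.gauss(0, 1) for _ in range(d)]
        n = math.sqrt(dot(w, w))
        pts.append([c/n for c in w])
    x, y, z = pts
    assert abs(area_sq(x, y, z) - heron_poly(dot(x, y), dot(x, z), dot(y, z))) < 1e-10

# 2) the balanced isotropic measure uniform on {±e_1, ..., ±e_d}
#    attains the claimed maximum 3(d-1)/(4d)
pm_basis = []
for j in range(d):
    e = [0.0]*d
    e[j] = 1.0
    pm_basis.append(e)
    pm_basis.append([-c for c in e])
vals = [area_sq(x, y, z) for x, y, z in product(pm_basis, repeat=3)]
avg = sum(vals)/len(vals)
assert abs(avg - 3*(d-1)/(4*d)) < 1e-12
```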
An alternative proof of this result is also given in \cite[Theorem 6.7]{BFGMPV1}. We also would like to remark that since \begin{equation*} 3 \frac{d-2}{d-1} S_{2,0,0}^d +6 S_{1,1,1}^d + 3 \frac{d-1}{d}S_{0,2,2}^d + \frac{3}{d} = u^2 + v^2 + t^2, \end{equation*} which follows from \cite[Proposition 3.6]{BV}, Lemma \ref{lem:frame} follows from Theorem \ref{thm:SemiDefMin}, which demonstrates an instance of obtaining $2$-point bounds from $3$-point bounds.
\section{Maximizing $k$-volumes}\label{sec:vol}
This section collects results on maximization of volume integral functionals over probability measures with unit second moment in $ \mathbb R^d $, denoted as before by $ \mathcal P^*(\mathbb R^d) $, for $k\ge 3$ inputs. In some cases we will further restrict the supports of such measures to the unit sphere, thereby optimizing over $ \mathcal P(\mathbb S^{d-1}) $.

As in the rest of the paper, we are interested in powers of two kernels: the $k$-dimensional Euclidean volume of the parallelepiped $V(x_1,\ldots,x_k)$, and the $(k-1)$-dimensional volume of the simplex $A (x_1,\ldots,x_k)$.
\subsection{Maximizing the powers of $V$}\label{sec:Powers of V}
As in the previous section, we start with $ V^2 $, i.e. the squared $k$-dimensional volume of a parallelepiped spanned by the vectors $x_1, \ldots, x_k$, equal to the determinant of the Gram matrix of the set of vectors $\{x_1,\ldots,x_k\}\subset \mathbb R^d$. Alternatively, $\frac 1 {(k!)^2} V^2(x_1, \ldots, x_k)$ can be seen as the square of the Euclidean volume of the simplex with vertices $0, x_1, \ldots, x_k$. The following theorem (in a slightly different form) can be found in the literature:
\begin{theorem}\label{thm:volume-gen} Let $d\ge 3 $ and $3\le k \le d$. The set of maximizing measures of $I_{V^2}$ in $\mathcal{P}^*(\mathbb{R}^d)$ is the set of isotropic measures on $\mathbb{R}^d$. The value of the maximum is $\frac {k!}{d^k}\binom{d}{k}$.
As a corollary, isotropic measures on $\mathbb S^{d-1}$ (which include the uniform surface measure $\sigma$) are exactly the maximizers of $I_{V^2}$ over $\mathcal{P}(\mathbb S^{d-1})$. \end{theorem}
This theorem was proved by Rankin \cite{R} for $k=d$ and by Cahill and Casazza \cite{CC} in the general case. In both papers, the statements are for finite spherical sets, but the proofs work for measures in $\mathcal{P}^*(\mathbb{R}^d)$ with only minor adjustments. We also note that, due to Remark \ref{rem:proj}, it is sufficient to prove the result for the spherical case only, since $V^2$ is homogeneous of degree two in each of its inputs. The equality case of Theorem \ref{thm:volume-gen} was also treated in a more general context in \cite{FNZ, P}.
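The maximal value $\frac{k!}{d^k}\binom{d}{k}$ can be confirmed exactly for one particular isotropic measure, the uniform distribution on an orthonormal basis: averaging the Gram determinant over all $d^k$ tuples of basis vectors counts exactly the tuples with distinct indices. The following sketch (plain Python; the \texttt{det} helper is an illustrative Gaussian-elimination routine of ours) performs this enumeration for $d=4$, $k=3$.

```python
import math
from itertools import product

def dot(a, b):
    return sum(p*q for p, q in zip(a, b))

def det(M):
    # determinant via Gaussian elimination with partial pivoting
    M = [row[:] for row in M]
    n, sign, out = len(M), 1.0, 1.0
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        if abs(M[p][i]) < 1e-12:
            return 0.0
        if p != i:
            M[i], M[p] = M[p], M[i]
            sign = -sign
        out *= M[i][i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n):
                M[r][c] -= f * M[i][c]
    return sign * out

d, k = 4, 3
basis = [[1.0 if i == j else 0.0 for i in range(d)] for j in range(d)]

total = 0.0
for tup in product(basis, repeat=k):
    gram = [[dot(a, b) for b in tup] for a in tup]
    total += det(gram)  # 1 if all indices distinct, else 0
avg = total / d**k
expected = math.factorial(k) / d**k * math.comb(d, k)
assert abs(avg - expected) < 1e-12
```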
Theorem \ref{thm:volume-gen} allows one to characterize the maximizers of $ I_{V^s} $ with $ s>2 $ as well.
\begin{corollary}\label{cor:V^s for s>2} For $s > 2$, the energy $I_{V^s}$ on $\mathbb{S}^{d-1}$ is uniquely (up to rotations and central symmetry) maximized by the uniform measure on an orthonormal basis. \end{corollary}
\begin{proof} For $s > 2$, $V^2(x_1, \ldots, x_k) \geq V^s(x_1, \ldots, x_k)$ for all $x_1, \ldots, x_k \in \mathbb{S}^{d-1}$, with equality exactly when $x_1, \ldots, x_k$ is an orthonormal set (so the volume is $1$) or when $x_1, \ldots, x_k$ are linearly dependent (so the volume is $0$). Thus, for all $\mu \in \mathcal{P}(\mathbb{S}^{d-1})$, \begin{equation} \frac {k!}{d^k}\binom{d}{k} \geq I_{V^2}(\mu) \geq I_{V^s}(\mu). \end{equation} The first inequality becomes an equality if $\mu$ is isotropic, and the second inequality becomes equality if any $x_1, \ldots, x_k \in \operatorname{supp}(\mu)$ are either orthonormal or linearly dependent. Since the support of an isotropic measure must be full-dimensional, both of these conditions occur simultaneously if and only if $\mu(\{- e_j, e_j\}) = \frac{1}{d}$ for $j = 1, \ldots, d$ for some orthonormal basis $e_1, \ldots, e_d$. \end{proof}
It is easy to see that the uniform distribution on an orthonormal basis is not a maximizer of $I_{V^s}$ for $0 < s < 2$.
\subsection{Maximizing the powers of $A$}\label{sec:Powers of A}
We now turn to the powers of $A (x_1,\ldots,x_k)$, the $(k-1)$-dimensional volume of the simplex with vertices $x_1, \ldots, x_k$, and again start by considering the kernel $A^2$.
The result of Theorem \ref{thm:volume-gen} for $V^2$ can be used to obtain a similar statement for the measures in $\mathcal{P}^*(\mathbb{R}^d)$ maximizing $ I_{A^2} $, in the case of $k= d+1$ inputs, i.e.\ when the simplex is full-dimensional. The main idea is to embed $ \mathbb R^d $ into $ \mathbb R^{d+1} $, treat the value of $ A^2(x_1,\ldots,x_{d+1}) $ as the value of $ V^2(y_1,\ldots,y_{d+1}) $ for suitable $ y_1,\ldots, y_{d+1} $, and then use Theorem \ref{thm:volume-gen}.
We shall defer the calculation of the maximal value of $ I_{A^2} $ until Theorem~\ref{thm:area-gen}, which also gives an alternative proof of the characterization of maximizers on the sphere $\mathbb S^{d-1}$ and, moreover, addresses the case of \emph{any} number of inputs $3\le k \le d+1$, rather than just $k=d+1$ as in the theorem below. However, the result of this section, Theorem \ref{thm:A^2maxGen}, applies to measures on $\mathbb R^d$, while Theorem~\ref{thm:area-gen} is restricted to the sphere.
\begin{theorem}\label{thm:A^2maxGen} For $d \geq 2$ and $k = d+1$, maximizers of $I_{A^2}$ in $\mathcal{P}^*(\mathbb{R}^d)$ are the balanced, isotropic probability measures on $\mathbb{R}^d$. \end{theorem}
\begin{proof} In this proof, we say that a measure $ \mu $ on $ \mathbb R^d $ is $ d $-isotropic if equation \eqref{eq:isotropic_init} holds. Given a unit basis vector $ e_{d+1}\in \mathbb R^{d+1} $, we identify $ \mathbb R^d $ with the hyperplane in $ \mathbb R^{d+1} $, orthogonal to $ e_{d+1} $ and passing through the origin.
To reduce the value of $ I_{A^2} $ to that of $ I_{V^2} $, given a measure $ \mu $ on $ \mathbb R^d $, denote its pushforward to $\mathbb R^{d+1}$ by $\hat\mu$: \begin{equation}
\label{eq:muhat}
\hat\mu:= \psi_\# \mu, \end{equation} where the map $ \psi: \mathbb R^d \to \mathbb R^{d+1} $ is \[
\psi(x) := \sqrt{\frac{d}{d+1}}x + \frac1{\sqrt{d+1}}e_{d+1}. \] It is understood here that $ x\in\mathbb R^d \subset \mathbb R^{d+1} $, so the addition on the right-hand side takes place in \(\mathbb R^{d+1}\).
Recall that a $ (d+1) $-dimensional simplex with base of $ d $-dimensional volume $ S $ and height $ h $ has $ (d+1) $-dimensional volume ${ S h}/{(d+1)} $. This gives, with $V^2 = V^2(x_1,\ldots,x_{d+1}) $ the square of the $ (d+1) $-dimensional volume of the parallelepiped spanned by its inputs, \begin{equation}\label{eq:AtoV} I_{V^2}(\hat\mu) = (d!)^2 \frac{d^{d}}{(d+1)^{d+1}}I_{A^2}(\mu), \end{equation} where we account for the fact that $ V $ includes a factor of $ (d+1)! $, whereas $ A $ does not.
By Theorem \ref{thm:volume-gen}, the functional $I_{V^2}$ on the left-hand side of equation~\eqref{eq:AtoV} is maximized over $\mathcal{P}^*(\mathbb{R}^{d+1})$ exactly when $\hat{\mu}$ is isotropic. To finish the proof, it remains to observe that the pushforward $ \hat\mu = \psi_\#\mu $ is $ (d+1)$-isotropic if and only if $ \mu $ is $ d $-isotropic and balanced. Indeed, writing $ x = (x^{(1)}, \ldots, x^{(d+1)}) = (y, x^{(d+1)}) $, we have for $ 1\leq i \leq j \leq d+1 $ \begin{equation}
\label{eq:isotropyGamma}
\int_{\mathbb{R}^{d+1}} x^{(i)}x^{(j)}\, d\hat\mu(x) =
\begin{cases}
\frac{d}{d+1} \int_{\mathbb{R}^d} y^{(i)}y^{(j)}\, d\mu(y) & j \leq d,\\
\frac{\sqrt{d }}{d+1} \int_{\mathbb{R}^d} y^{(i)}\, d\mu(y) & i \leq d, \ j=d+1,\\
\frac{1}{d+1} & i=j=d+1.
\end{cases} \end{equation} By \eqref{eq:isotropic_init}, $ (d+1) $-isotropy of $ \hat\mu $ means the integral on the left-hand side of \eqref{eq:isotropyGamma} is equal to $ \delta_{ij}/(d+1) $, $ 1\leq i\leq j\leq d+1 $. In particular, the first integral on the right-hand side is then equal to $ \delta_{ij}/d $, which is precisely the condition for $ d $-isotropy of $ \mu $, and the integrals of $ y^{(i)} $ are all equal to zero, implying $ \mu $ is balanced. The converse implication follows along the same lines. \end{proof}
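The embedding step of the proof is a pointwise identity and can be checked numerically: $\psi$ maps the unit sphere of $\mathbb R^d$ into that of $\mathbb R^{d+1}$, and $V^2$ of the lifted points equals $(d!)^2 \frac{d^d}{(d+1)^{d+1}} A^2$ of the original simplex. A sketch (plain Python, $d=3$; the helpers \texttt{det}, \texttt{rand\_unit}, and \texttt{psi} are illustrative names of ours):

```python
import math, random

random.seed(2)
d = 3  # simplices with k = d + 1 = 4 vertices in R^d

def dot(a, b):
    return sum(p*q for p, q in zip(a, b))

def det(M):
    # determinant via Gaussian elimination with partial pivoting
    M = [row[:] for row in M]
    n, sign, out = len(M), 1.0, 1.0
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        if abs(M[p][i]) < 1e-12:
            return 0.0
        if p != i:
            M[i], M[p] = M[p], M[i]
            sign = -sign
        out *= M[i][i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n):
                M[r][c] -= f * M[i][c]
    return sign * out

def rand_unit(n):
    w = [random.gauss(0, 1) for _ in range(n)]
    s = math.sqrt(dot(w, w))
    return [c/s for c in w]

def psi(x):
    # embed R^d into R^{d+1}: sqrt(d/(d+1)) x + e_{d+1}/sqrt(d+1)
    return [math.sqrt(d/(d+1))*c for c in x] + [1/math.sqrt(d+1)]

for _ in range(50):
    xs = [rand_unit(d) for _ in range(d+1)]
    # A^2: squared d-volume of the simplex with vertices xs (edge Gram determinant)
    edges = [[a - b for a, b in zip(x, xs[0])] for x in xs[1:]]
    A2 = det([[dot(a, b) for b in edges] for a in edges]) / math.factorial(d)**2
    # V^2: squared (d+1)-volume of the parallelepiped on psi(xs)
    ys = [psi(x) for x in xs]
    assert all(abs(dot(y, y) - 1) < 1e-12 for y in ys)  # psi maps into the sphere
    V2 = det([[dot(a, b) for b in ys] for a in ys])
    ratio = math.factorial(d)**2 * d**d / (d+1)**(d+1)
    assert abs(V2 - ratio*A2) < 1e-9
```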
We can use the same methods as in Corollary \ref{cor:V^s for s>2} to find the maximizers of $ I_{A^s} $, for larger powers, on the sphere $\mathbb S^{d-1}$. \begin{corollary}\label{cor:A^s for s>2} Let $s > 2$ and $A(x_1, \ldots, x_{d+1})$ be the $d$-dimensional volume of a simplex with vertices $x_1, \ldots, x_{d+1} \in \mathbb{S}^{d-1}$. Then $I_{A^s}$ is uniquely (up to rotations) maximized by the uniform distribution on the vertices of a regular $d$-simplex. \end{corollary}
\begin{proof} We know that $A(x_1, \ldots, x_{d+1})$ is maximized exactly when $x_1, \ldots, x_{d+1}$ are the vertices of a regular simplex (see, e.g. \cite{Jo, Ta1, Ball2, HL}, see also the case $j=d$ of Corollary \ref{cor:Geometric Simplex is best}). Let $\alpha$ be that maximum volume. We see that for $s > 2$, \begin{equation*}
A^2(x_1, \ldots, x_{d+1}) \geq \alpha^{2-s} A^s(x_1, \ldots, x_{d+1}) \end{equation*} for all $x_1, \ldots, x_{d+1} \in \mathbb{S}^{d-1}$, with equality exactly when $A(x_1, \ldots, x_{d+1})$ is $0$ or $\alpha$.
We know that, for all $\mu \in \mathcal{P}(\mathbb{S}^{d-1})$ \begin{equation} \frac{d+1}{d! d^d} \geq I_{A^2}(\mu) \geq \alpha^{2-s} I_{A^s}(\mu). \end{equation} The first inequality becomes an equality when $\mu$ is balanced and isotropic, and the second becomes an equality when $A(x_1, \ldots, x_{d+1})$ is $0$ or $\alpha$ for all $x_1, \ldots, x_{d+1} \in \operatorname{supp}(\mu)$. These both occur exactly when $\mu$ is the uniform distribution on the vertices of a regular $d$-simplex. \end{proof}
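The extremal value $\frac{d+1}{d!\,d^d}$ and the extremal configuration can be checked by direct enumeration in the smallest case $d=2$, where the regular simplex is an equilateral triangle inscribed in the unit circle. A sketch (plain Python; the helper \texttt{area\_sq} is an illustrative name of ours):

```python
import math
from itertools import product

d = 2
# vertices of the regular d-simplex (equilateral triangle) inscribed in the unit circle
verts = [(math.cos(2*math.pi*j/3), math.sin(2*math.pi*j/3)) for j in range(3)]

def dot(a, b):
    return sum(p*q for p, q in zip(a, b))

def area_sq(x, y, z):
    # squared triangle area via edge vectors
    a = [p - q for p, q in zip(y, x)]
    b = [p - q for p, q in zip(z, x)]
    return 0.25 * (dot(a, a)*dot(b, b) - dot(a, b)**2)

# energy of the uniform distribution on the three vertices:
# only the 6 triples with distinct vertices contribute, each with A^2 = 27/16
avg = sum(area_sq(x, y, z) for x, y, z in product(verts, repeat=3)) / 27
# claimed maximum: (d+1) / (d! d^d) = 3/8 for d = 2
assert abs(avg - 3/8) < 1e-12

alpha_sq = area_sq(*verts)  # squared area of the inscribed equilateral triangle
assert abs(alpha_sq - 27/16) < 1e-12
```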
For $0 < s < 2$, the uniform distribution on the vertices of a regular simplex is not a maximizer of $I_{A^s}$. It is also not clear which measures maximize $I_{A^s}$ for $s>2$ and general $k< d+1$. We conjecture that the maximizers are again discrete in this case (see the discussion at the end of Section~\ref{sec:A^2 on Sphere}).
\section{Kernels for \texorpdfstring{$k$}{k}-point bounds}\label{sec:k-point}
In this section, we explain how to construct a class of continuous (in certain cases, polynomial) kernels which are $k$-positive definite and whose energy is minimized by $\sigma$. These kernels are a generalization of those developed by Bachoc and Vallentin in \cite{BV}, and similar to the kernels given in \cite{Mu} and \cite{DMOV}, all of which were used for obtaining $k$-point semidefinite programming bounds. We provide a slight alteration to these kernels, so that the inputs are no longer restricted to being linearly independent, or constrained to some proper subset of the sphere.
Consider the points $\{x_1,\ldots, x_k\}\subset \mathbb S^{d-1}$ with $k \leq d+1$. Suppose that $x_3,\ldots, x_k$ are linearly independent and $x_1, x_2 \not\in X = \operatorname{span}\{x_3, \ldots, x_k\}$, and denote the orthogonal projections of $x_1$ and $x_2$ onto $X^{\perp}$ as $y_1$ and $y_2$, respectively. Then the normalized vectors $\frac{y_1}{\|y_1\|}$ and $\frac{y_2}{\|y_2\|}$ belong to the unit sphere in the $(d-k+2)$-dimensional space $X^{\perp}$. If $k \leq d$, then on this unit sphere, the kernel given by $P_l^{d-k+2}( \langle x, y \rangle)$ is positive definite, suggesting that we may be able to build a $k$-positive definite kernel from \begin{equation}\label{eq:SDP Building Block}
P_l^{d-k+2}\Big( \Big\langle \frac{y_1}{\|y_1\|}, \frac{y_2}{\|y_2\|} \Big\rangle \Big). \end{equation}
If $k = d+1$, then $\frac{y_1}{\|y_1\|}, \frac{y_2}{\|y_2\|} \in \mathbb{S}^0 = \{ -1, 1 \}$, and we see that $1$ and $\frac{y_1}{\|y_1\|} \frac{y_2}{\|y_2\|}$ form a basis for positive definite functions. Of course, for $l > 0$, $P_l^{d-k+2}\Big( \Big\langle \frac{y_1}{\|y_1\|}, \frac{y_2}{\|y_2\|} \Big\rangle \Big)$ is not well defined, as a function of $x_1, \ldots, x_k$, if $x_1$ or $x_2$ is in $X$, and may not be continuous whenever the dimension of $X$ changes. We can modify \eqref{eq:SDP Building Block} to account for these issues, arriving at the following polynomial kernel. In what follows, we denote by $W$ the Gram matrix of $x_3, \ldots, x_k$, and we set $P^1_0 =1$ and $P^1_1(t) = t$ (these are the only cases when $P^1_j$ are defined).
\begin{theorem}\label{thm:Qdef} With the notation above, for any $l \in \mathbb{N}_0$, the function $Q_{k,l}^{d}: \Big( \mathbb{S}^{d-1} \Big)^k \rightarrow \mathbb{R}$ defined by \begin{equation}\label{eq:Qdef}
Q_{k,l}^{d}( x_1, \ldots, x_k) = \det(W)^l \| y_1 \|^{l} \| y_2 \|^{l} P_l^{d-k+2}\Big( \Big\langle \frac{y_1}{\|y_1\|}, \frac{y_2}{\|y_2\|} \Big\rangle \Big) \end{equation} is a rotationally-invariant $k$-positive definite polynomial kernel and $I_{Q_{k,l}^d}$ is minimized by $\sigma$. \end{theorem}
We note that these kernels are symmetric in the last $k-2$ variables as well as in the first two variables.
\begin{proof} Note that $Q_{k,0}^d = 1$, so our claim holds in that case. Now, assume that $l \in \mathbb{N}$.
Rotational-invariance follows immediately from the rotational-invariance of $W$, $\| y_1 \|$, $\|y_2\|$ and $\langle y_1, y_2 \rangle$.
In what follows, we denote $u_{i,j} = \langle x_i, x_j \rangle$ for all $i$ and $j$, and for $h \in \{ 1, 2\}$, $w_h = (u_{h,3}, \ldots, u_{h,k})^T$, $y_h^{\perp} = x_h - y_h$, and $z_h = \frac{y_h}{\|y_h\|}$.
We first must show that our kernel $Q_{k,l}^{d}$ is well-defined. Write $y_1^{\perp} = \sum_{j=3}^{k} \alpha_j x_j$ and $y_2^{\perp} = \sum_{j=3}^{k} \beta_j x_j$, and denote $\alpha = ( \alpha_3 , \ldots, \alpha_k)^T$ and $\beta = ( \beta_3, \ldots, \beta_k)^T$. Since for $3 \leq j \leq k$, \begin{equation*} u_{1,j} = \langle x_1, x_j \rangle = \langle y_1^{\perp}, x_j \rangle = \sum_{i=3}^{k} \alpha_i u_{i,j} \quad \text{ and } \quad u_{2,j} = \langle x_2, x_j \rangle = \langle y_2^{\perp}, x_j \rangle = \sum_{i=3}^{k} \beta_i u_{i,j} \end{equation*} we conclude that $w_1 = W \alpha$ and $w_2 = W \beta$.
Now assume that $x_3, \ldots, x_k$ are linearly independent. Consequently, $\alpha = W^{-1} w_1$ and $\beta = W^{-1} w_2$, so \begin{equation} \langle y_1^{\perp}, y_2^{\perp} \rangle = \alpha^T W \beta = w_1^T W^{-1} W W^{-1} w_2 = w_1^T W^{-1} w_2, \end{equation} and similarly \begin{equation} \langle y_1^{\perp}, y_1^{\perp} \rangle = w_1^T W^{-1} w_1 \text{ and } \langle y_2^{\perp}, y_2^{\perp} \rangle = w_2^T W^{-1} w_2. \end{equation} We then see that \begin{equation} \langle y_1, y_2 \rangle = \langle x_1, x_2 \rangle - \langle y_1^{\perp}, y_2^{\perp} \rangle = u_{1,2} - w_1^T W^{-1} w_2, \end{equation} \begin{equation}
\| y_1 \|^2 = 1 - w_1^T W^{-1} w_1 \text{ and } \| y_2 \|^2 = 1 - w_2^T W^{-1} w_2. \end{equation} Thus, if $x_1, x_2 \not\in X$, we can rewrite \eqref{eq:Qdef} as \begin{align} Q_{k,l}^d (x_1, \ldots, x_k) = \det(W)^l \Big( \big(1-w_1^T W^{-1} w_1 \big) & \big(1-w_2^T W^{-1} w_2 \big) \Big)^{l/2} \\ \nonumber & \times P_l^{d-k+2}\left(\frac {u_{1,2} - w_1^T W^{-1} w_2} {\sqrt{1-w_1^T W^{-1} w_1}\sqrt{1- w_2^T W^{-1} w_2}} \right). \end{align}
Letting $P_l^{d-k+2}(t) = \sum_{m=0}^{ \lfloor \frac{l}{2} \rfloor} a_{l-2m} t^{l-2m}$, we see that \begin{align}\label{eq:Qexpansion1} Q_{k,l}^{d}( x_1, \ldots, x_k) = \sum_{m=0}^{ \lfloor \frac{l}{2} \rfloor} a_{l-2m} & \Big( \det(W) u_{1,2} -w_1^{T} \operatorname{adj}(W) w_2 \Big)^{l - 2m} \; \Big(\det(W) -w_1^{T} \operatorname{adj}(W) w_1 \Big)^{m} \; \\ \nonumber & \times \Big(\det(W) -w_2^{T} \operatorname{adj}(W) w_2 \Big)^{m}, \end{align} where $\operatorname{adj}(W)$ is the adjugate matrix of $W$. This is a polynomial of the inner products of $x_1,\ldots, x_k$, and so is well defined for all $x_1, \ldots, x_k \in \mathbb{S}^{d-1}$.
In addition, by rewriting \eqref{eq:Qdef} as \begin{equation}\label{eq:Qexpansion2}
Q_{k,l}^{d}( x_1, \ldots, x_k) = \det(W)^l \sum_{m=0}^{ \lfloor \frac{l}{2} \rfloor} a_{l-2m} \langle y_1, y_2 \rangle^{l - 2m} \|y_1\|^{2m} \|y_2\|^{2m}, \end{equation} for $k \leq d$ and \begin{equation}\label{eq:Qexpansion2, k=d+1} Q_{d+1,1}^d(x_1, \ldots, x_k) = \det(W) \langle y_1, y_2 \rangle, \end{equation} we see that $Q_{k,l}^d$ is zero if $x_3, \ldots, x_k$ are linearly dependent.
If $k = d+1$ and $l =1$, \eqref{eq:Qexpansion2, k=d+1} shows us that for any fixed $x_3, \ldots, x_{d+1} \in \mathbb{S}^{d-1}$ and $\mu \in \mathcal{M}(\mathbb{S}^{d-1})$, \begin{equation}\label{eq:I_Q k = d+1} I_{U_{Q_{d+1,l}^d}^{x_3, \ldots, x_{d+1}}}(\mu) = \int_{\mathbb{S}^{d-1}} \int_{\mathbb{S}^{d-1}} \det(W) y_1 y_2\, d\mu(x_1) d\mu( x_2) = \det(W) \Big( \int_{\mathbb{S}^{d-1}} y_1 d \mu(x_1) \Big)^2 \geq 0, \end{equation} so $Q_{d+1, 1}^d$ is $(d+1)$-positive definite; here we identify $y_1$ and $y_2$ with scalars, since $X^{\perp}$ is one-dimensional. Note that since $W$ is the Gram matrix of $x_3, \ldots, x_k$, its determinant is nonnegative. We now want to show that this energy is zero when $\mu = \sigma$. We first note that this occurs if $x_3, \ldots, x_k$ are linearly dependent, so let us assume that $x_3,\ldots, x_k$ are linearly independent, and set $f( y_1^{\perp}, y_1) = f(x_1) = y_1$. Denoting the unit ball in $\mathbb{R}^n$ as $\mathbb{B}^n$, we have, by Lemma A.5.4 of \cite{DX}, that \begin{align*}
\int_{\mathbb{S}^{d-1}} f(x_1)\, d\sigma(x_1) & = \int_{\mathbb{B}^{d-1}} \frac{f(y_1^{\perp}, \sqrt{ 1 - \| y_1^{\perp}\|^2}) + f(y_1^{\perp}, -\sqrt{ 1 - \| y_1^{\perp}\|^2})}{\sqrt{ 1 - \| y_1^{\perp}\|^2}} d y_1^{\perp} \\
& = \int_{\mathbb{B}^{d-1}} \frac{\sqrt{ 1 - \| y_1^{\perp}\|^2} + (-\sqrt{ 1 - \| y_1^{\perp}\|^2})}{\sqrt{ 1 - \| y_1^{\perp}\|^2}} d y_1^{\perp} = 0. \\ \end{align*} It now follows from \eqref{eq:I_Q k = d+1} that $I_{Q_{d+1,1}^d}(\sigma) = 0$, so $\sigma$ minimizes $I_{Q_{d+1,1}^d}$.
When $k \leq d$, we need a bit more machinery. Let $Y_1,\ldots, Y_{\operatorname{dim}(\mathcal{H}_l^{d-k+1})}$ be an orthonormal basis of $\mathcal{H}_l^{d-k+1}$, the space of spherical harmonics of degree $l$ on $\mathbb{S}^{d-k+1}$. Then the addition formula (see \cite[Theorem 1.2.6]{DX}) tells us that \begin{equation}\label{eq:addition formula for Q}
Q_{k,l}^{d}( x_1, \ldots, x_k) = \det(W)^l \| y_1 \|^{l} \| y_2 \|^{l} \frac{1}{\operatorname{dim}(\mathcal{H}_l^{d-k+1})} \sum_{j=1}^{\operatorname{dim}(\mathcal{H}_l^{d-k+1})} Y_j( z_1) Y_j(z_2). \end{equation} Thus for any fixed $x_3, \ldots, x_k$ and $\mu \in \mathcal{M}(\mathbb{S}^{d-1})$, \begin{align*} I_{U_{Q_{k,l}^d}^{x_3, \ldots, x_k}}(\mu) & = \int_{\mathbb{S}^{d-1}} \int_{\mathbb{S}^{d-1}} Q_{k,l}^d( x_1, x_2, \ldots, x_k)\, d\mu(x_1) d\mu(x_2) \\
& = \int_{\mathbb{S}^{d-1}} \int_{\mathbb{S}^{d-1}}\det(W)^l \| y_1 \|^{l} \| y_2 \|^{l} \frac{1}{\operatorname{dim}(\mathcal{H}_l^{d-k+1})} \sum_{j=1}^{\operatorname{dim}(\mathcal{H}_l^{d-k+1})} Y_j( z_1) Y_j(z_2) \,d\mu(x_1) d\mu(x_2) \\
& = \frac{\det(W)^l}{\operatorname{dim}(\mathcal{H}_l^{d-k+1})} \sum_{j=1}^{\operatorname{dim}(\mathcal{H}_l^{d-k+1})} \Bigg( \int_{\mathbb{S}^{d-1}} Y_j( z_1) \| y_1 \|^{l} d \mu(x_1) \Bigg)^2 \geq 0. \end{align*} Note that since $W$ is the Gram matrix of $x_3, \ldots, x_k$, its determinant is nonnegative. Thus, $Q_{k,l}^d$ is indeed $k$-positive definite, so for any $\mu \in \mathcal{P}(\mathbb{S}^{d-1})$, $I_{Q_{k,l}^d}(\mu) \geq 0$.
We now show that $\sigma$ minimizes the energy $I_{Q_{k,l}^d}$. For any fixed $x_3, \ldots, x_k \in \mathbb{S}^{d-1}$, we see that
$$I_{U_{Q_{k,l}^d}^{x_3, \ldots, x_k}}(\sigma) = \frac{\det(W)^l}{\operatorname{dim}(\mathcal{H}_l^{d-k+1})} \sum_{j=1}^{\operatorname{dim}(\mathcal{H}_l^{d-k+1})} \Bigg( \int_{\mathbb{S}^{d-1}} Y_j( z_1) \| y_1 \|^{l} d \sigma(x_1) \Bigg)^2.$$ If $x_3, \ldots, x_k$ are linearly dependent, we know this is zero. Assume that $x_3, \ldots, x_k$ are linearly independent, and for $1 \leq j \leq \operatorname{dim}(\mathcal{H}_l^{d-k+1})$, let
$$f_j(x_1) = f_j( y_1^{\perp}, y_1) = Y_j( z_1) \| y_1 \|^{l} .$$ By Lemma A.5.4 of \cite{DX}, we have that \begin{align*}
\int_{\mathbb{S}^{d-1}} f_j(x_1) d \sigma (x_1) & = \int_{\mathbb{B}^{k-2}} ( 1 - \|y_1^{\perp} \|^2 )^{\frac{d-k}{2}} \left[ \int_{\mathbb{S}^{d-k+1}} f_j( y_1^{\perp} , \sqrt{1 - \| y_1^{\perp} \|^2} \xi ) d \sigma (\xi) \right] dy_1^{\perp} \\
& = \int_{\mathbb{B}^{k-2}} ( 1 - \| y_1^{\perp} \|^2 )^{\frac{d-k}{2}} \left[ \int_{\mathbb{S}^{d-k+1}} Y_j(\xi) (1 - \| y_1^{\perp} \|^2)^{\frac{l}{2}} d \sigma (\xi) \right] dy_1^{\perp} \\ & = 0. \end{align*} Thus, for any fixed $x_3, \ldots, x_k \in \mathbb{S}^{d-1}$, $$ \int_{ \mathbb{S}^{d-1}} \int_{\mathbb{S}^{d-1}} Q_{k,l}^d(x_1, x_2, \ldots, x_k) d \sigma(x_1) d \sigma(x_2) = 0,$$ meaning that $$ I_{Q_{k,l}^d}(\sigma) = 0,$$ so $\sigma$ is indeed a minimizer of $I_{Q_{k,l}^d}$.
\end{proof} We note here that $Q_{k,l}^d$ is zero if $x_3, \ldots, x_k$ are linearly dependent or if $x_1$ or $x_2$ are in $X$.
For $k=3$, these kernels are essentially \eqref{eq:BachocValQKernel}, introduced by Bachoc and Vallentin \cite{BV}. In this instance, note that $\det(W) =1$ is constant. The general case was covered by Musin \cite{Mu}, who used the kernels to formulate general SDP bounds for spherical codes and, with some additional machinery, generalized the result of Schoenberg \cite{Sch} to characterize all positive definite kernels invariant under the stabilizer of $X$. However, in that setting, it was assumed that $x_3, \ldots, x_k$ were fixed and linearly independent, so no factor such as $\det(W)^l$ was included and the functions were only really functions of two variables. Recently, similar kernels with $k\geq 4$ were used for finding new bounds for sizes of equiangular sets of lines in \cite{DMOV}, where kernels were constructed under the assumption that the distance set of the last $k-2$ inputs had finitely many values, making them multivariate functions, but not allowing the last $k-2$ inputs to take arbitrary values on the sphere. The authors of \cite{DMOV} even discuss the difficulty of such a task. Our inclusion of $\det(W)$ as a factor of $Q$ allows us to address this issue, though this alone would not allow us to construct functions which are not constant when $x_3, \ldots, x_k$ are linearly dependent, or more complicated positive definite functions, such as semidefinite combinations of the functions \eqref{eq:SemDefProgYs}. We discuss how to construct such functions later in this section.
For the main result of Section \ref{sec:A^2 on Sphere}, it is sufficient to use the case $l=1$ so we formulate the relevant statement as a separate lemma.
\begin{lemma}\label{lem:Musin-1} For any set of fixed vectors $x_3, \ldots, x_k\in\mathbb{S}^{d-1}$, the kernel $$Q_{k,1}^d (x_1, \ldots, x_k) = \det(W) \langle y_1, y_2 \rangle = \det(W)u_{1,2} - w_1^T \operatorname{adj}(W) w_2 $$ is $k$-positive definite, and $I_{Q_{k,1}^d}$ is minimized by $\sigma$. \end{lemma}
For small values of $k$, these kernels take the form: \begin{align*} Q_{3,1}^d &= u_{1,2} - u_{1,3}u_{2,3},\\ Q_{4,1}^d &= u_{1,2} -u_{1,2}u_{3,4}^2 - u_{1,3}u_{2,3} - u_{1,4}u_{2,4} + u_{1,3}u_{2,4}u_{3,4} + u_{1,4}u_{2,3}u_{3,4}. \end{align*}
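These explicit polynomial forms can be checked against the projection definition $\det(W)\langle y_1, y_2\rangle$ numerically. A sketch (plain Python; the helper \texttt{proj\_perp}, which projects onto $X^{\perp}$ via the normal equations, is an illustrative name of ours):

```python
import math, random

random.seed(3)
d = 5

def dot(a, b):
    return sum(p*q for p, q in zip(a, b))

def rand_unit(n):
    w = [random.gauss(0, 1) for _ in range(n)]
    s = math.sqrt(dot(w, w))
    return [c/s for c in w]

def proj_perp(x, basis):
    # orthogonal projection of x onto span(basis)^perp, for two basis vectors,
    # by solving the normal equations W alpha = w with Cramer's rule
    W = [[dot(a, b) for b in basis] for a in basis]
    w = [dot(x, b) for b in basis]
    detW = W[0][0]*W[1][1] - W[0][1]*W[1][0]
    alpha = [(w[0]*W[1][1] - w[1]*W[0][1]) / detW,
             (W[0][0]*w[1] - W[1][0]*w[0]) / detW]
    xpar = [alpha[0]*basis[0][c] + alpha[1]*basis[1][c] for c in range(d)]
    return [a - b for a, b in zip(x, xpar)]

for _ in range(100):
    x1, x2, x3, x4 = (rand_unit(d) for _ in range(4))
    u12, u13, u14 = dot(x1, x2), dot(x1, x3), dot(x1, x4)
    u23, u24, u34 = dot(x2, x3), dot(x2, x4), dot(x3, x4)

    # k = 3: y_i = x_i - <x_i, x3> x3, det(W) = 1
    y1p = [a - u13*b for a, b in zip(x1, x3)]
    y2p = [a - u23*b for a, b in zip(x2, x3)]
    assert abs((u12 - u13*u23) - dot(y1p, y2p)) < 1e-10

    # k = 4: polynomial form of Q_{4,1}^d ...
    poly = (u12 - u12*u34**2 - u13*u23 - u14*u24
            + u13*u24*u34 + u14*u23*u34)
    # ... versus the definition det(W) <y1, y2>
    detW = 1 - u34**2
    y1 = proj_perp(x1, [x3, x4])
    y2 = proj_perp(x2, [x3, x4])
    assert abs(poly - detW*dot(y1, y2)) < 1e-10
```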
We can use these kernels $Q_{k,l}^d$ to construct various other kernels which are $k$-positive definite and whose energies are minimized by $\sigma$. Similar objects were studied in \cite{KV}.
\begin{corollary}\label{cor:MoreGenPosDef} Let $G: (\mathbb{S}^{d-1} )^{k-1} \rightarrow \mathbb{R}$ be a continuous function such that, for $\eta_1, \eta_2, \ldots, \eta_{k-1} \in \mathbb{S}^{d-1}$, $G(\eta_1, \ldots, \eta_{k-1})$ depends only on the inner products $\langle \eta_i, \eta_j \rangle$, $1 \leq i < j \leq k-1$. Then the kernel \begin{equation}\label{eq:MorGenPosDef} T( x_1, x_2, \ldots, x_k) = G(x_1, x_3, \ldots, x_k) G( x_2, x_3,\ldots, x_k) Q_{k,l}^d( x_1, x_2, \ldots, x_k ) \end{equation} is rotationally-invariant and $k$-positive definite. If $l \geq 1$, $T$ satisfies \begin{equation} \inf_{\mu \in \mathcal{P}(\mathbb{S}^{d-1})} I_{T}(\mu) = I_{T}(\sigma) = 0. \end{equation} \end{corollary}
From the way we defined $T$, we can see that $T$ is indeed continuous and symmetric in the first two variables.
\begin{proof} We will use the same notation as in the proof of Theorem \ref{thm:Qdef}. We see immediately that the rotational-invariance of $T$ follows from the rotational-invariance of $Q_{k,l}^d$ and the inner products $\langle x_i, x_j \rangle$. We also have that for fixed $x_3, \ldots, x_k$, $G(x_i, x_3, \ldots, x_k)$ depends only on $y_i^{\perp}$, the orthogonal projection of $x_i$ onto $X$.
For $k \leq d$, that $T$ is $k$-positive definite can be seen by the fact that for fixed $x_3, \ldots, x_k$ and $\mu \in \mathcal{M}(\mathbb{S}^{d-1})$, \eqref{eq:addition formula for Q} gives us
$$I_{U_T^{x_3, \ldots, x_k}}(\mu) = \frac{\det(W)^l}{\operatorname{dim}(\mathcal{H}_l^{d-k+1})} \sum_{j=1}^{\operatorname{dim}(\mathcal{H}_l^{d-k+1})} \Bigg( \int_{\mathbb{S}^{d-1}} Y_j( z_1) \| y_1 \|^{l} G( x_1, x_3,\ldots, x_k) d \mu(x_1) \Bigg)^2 \geq 0.$$
If $ x_3, \ldots, x_k$ are linearly dependent, then $T = 0$, so assume that $x_3, \ldots, x_k$ are linearly independent, and for $1 \leq j \leq \operatorname{dim}(\mathcal{H}_l^{d-k+1})$, let
$$f_j(x_1) = f_j( y_1^{\perp}, y_1) = Y_j( z_1) \| y_1 \|^{l} G( x_1, x_3,\ldots, x_k).$$ By Lemma A.5.4 of \cite{DX}, and since $G$ does not depend on $y_1$, we have \begin{align*}
\int_{\mathbb{S}^{d-1}} f_j(x_1) d \sigma (x_1) & = \int_{\mathbb{B}^{k-2}} ( 1 - \| y_1^{\perp} \|^2 )^{\frac{d-k}{2}} \left[ \int_{\mathbb{S}^{d-k+1}} f_j( y_1^{\perp} , \sqrt{1 - \| y_1^{\perp} \|^2} \xi ) d \sigma (\xi) \right] dy_1^{\perp} \\
& = \int_{\mathbb{B}^{k-2}} ( 1 - \| y_1^{\perp} \|^2 )^{\frac{d-k}{2}} (1 - \| y_1^{\perp} \|^2)^{\frac{l}{2}} G(x_1, x_3,\ldots, x_k) \left[ \int_{\mathbb{S}^{d-k+1}} Y_j(\xi) d \sigma (\xi) \right] d y_1^{\perp} \\ & = 0. \end{align*} Thus, for any fixed $x_3, \ldots, x_k \in \mathbb{S}^{d-1}$, $$ \int_{ \mathbb{S}^{d-1}} \int_{\mathbb{S}^{d-1}} T(x_1, x_2, \ldots, x_k) d \sigma(x_1) d \sigma(x_2) = 0,$$ meaning that $$ I_{T}(\sigma) = 0,$$ so $\sigma$ is indeed a minimizer of $I_{T}$.
The case of $k = d+1$ is similar.
\end{proof}
\begin{lemma}\label{lem:l=0PosDef} Let $G: \big( \mathbb{S}^{d-1} \big)^{k-1} \rightarrow \mathbb{R}$ be continuous, depend only on the inner products of its inputs, and satisfy $\int_{\mathbb{S}^{d-1}} G( \eta_1, \ldots, \eta_{k-1}) d \sigma(\eta_1) = 0$. Then the kernel \begin{equation}\label{eq:l=0PosDef} H(x_1, x_2, \ldots, x_k) = G(x_1, x_3, \ldots, x_k) G( x_2, x_3, \ldots, x_k) \end{equation} is rotationally-invariant, $k$-positive definite, and satisfies \begin{equation} \inf_{\mu \in \mathcal{P}(\mathbb{S}^{d-1})} I_H(\mu) = I_H(\sigma) = 0. \end{equation} \end{lemma}
The formulation of $T$ and $H$ in the corollary and lemma, together with the fact that the sum of $k$-positive definite kernels minimized by $\sigma$ is a $k$-positive definite kernel minimized by $\sigma$, allows us to recover Theorem \ref{thm:SemiDefMin}. In \cite{BV}, the authors created matrices $Y_l^d$ of polynomials, and then took the trace of the product of a positive semidefinite matrix and a $Y_l^d$. When $l = 0$, this would lead to a sum of kernels of the form \eqref{eq:l=0PosDef}, and for $l > 0$ this would lead to a sum of kernels of the form \eqref{eq:MorGenPosDef}.
By combining Lemmas \ref{lem:Schur's Lemma}, \ref{lem:Schur's Lemma2}, \ref{lem:kPosDef to nPosDef}, and \ref{lem:l=0PosDef} with Corollary \ref{cor:MoreGenPosDef} we can now construct a wide range of rotationally-invariant $k$-positive definite kernels whose energies are minimized by $\sigma$ from the kernels $Q_{n,l}^d$'s for $n < k$. In particular, we can construct kernels which are not constant when $x_3, \ldots, x_n$ are linearly dependent, unlike $Q_{k,l}^d$.
\section{Maximizing the integral of $A^2$ on the sphere}\label{sec:A^2 on Sphere}
We now turn to the last main results of the paper. As an analogue of the result by Cahill and Casazza (Theorem \ref{thm:volume-gen}), we solve the optimization problem for $A^2$, the square of the $(k-1)$-dimensional volume of the simplex, for an arbitrary number of inputs $3\le k \le d+1$. We have already proved some partial cases of the theorem below: Theorem \ref{thm: triangle area squared max} (for the case $k=3$ and $d\ge 2$, i.e. the area of the triangle) and Theorem \ref{thm:A^2maxGen} (for full-dimensional simplices, i.e. $k=d+1\ge 3$). We would like to point out that the latter theorem applies to measures on $\mathbb R^d$. The following theorem, while restricted to the sphere, covers the whole range $3\le k \le d+1$.
\begin{theorem}\label{thm:area-gen} Let $d\ge 2$ and $3\leq k\leq d+1$. Let $A(x_1, \ldots, x_k)$ be the $(k-1)$-dimensional Euclidean volume of a simplex with vertices $x_1, \ldots, x_k \in \mathbb{S}^{d-1}$. Then the set of maximizing measures of $I_{A^2}$ in $\mathcal{P}(\mathbb S^{d-1})$ is the set of balanced isotropic measures on $\mathbb S^{d-1}$. In particular, the uniform surface measure $\sigma$ maximizes $I_{A^2}$. The value of the maximum is $\frac {k}{(k-1)! d^{k-1}}\binom{d}{k-1}$. \end{theorem}
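The proof below rests on a bordered-determinant formula for $A^2$ (Lemma \ref{lem:A-formula}). As a sanity check, this formula can be compared numerically with the edge-vector Gram determinant, which also equals $((k-1)!)^2 A^2$. A sketch (plain Python, $d=4$, $k=3$; the \texttt{det} helper is an illustrative Gaussian-elimination routine of ours):

```python
import math, random

random.seed(5)

def dot(a, b):
    return sum(p*q for p, q in zip(a, b))

def det(M):
    # determinant via Gaussian elimination with partial pivoting
    M = [row[:] for row in M]
    n, sign, out = len(M), 1.0, 1.0
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        if abs(M[p][i]) < 1e-12:
            return 0.0
        if p != i:
            M[i], M[p] = M[p], M[i]
            sign = -sign
        out *= M[i][i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n):
                M[r][c] -= f * M[i][c]
    return sign * out

def rand_unit(n):
    w = [random.gauss(0, 1) for _ in range(n)]
    s = math.sqrt(dot(w, w))
    return [c/s for c in w]

d, k = 4, 3
for _ in range(100):
    xs = [rand_unit(d) for _ in range(k)]
    U = [[dot(a, b) for b in xs] for a in xs]
    # bordered Gram matrix from the lemma: ((k-1)!)^2 A^2 = -det([[U, 1], [1^T, 0]])
    bordered = [row + [1.0] for row in U] + [[1.0]*k + [0.0]]
    lhs = -det(bordered)
    # ((k-1)!)^2 A^2 also equals the Gram determinant of the edge vectors
    edges = [[a - b for a, b in zip(x, xs[0])] for x in xs[1:]]
    rhs = det([[dot(a, b) for b in edges] for a in edges])
    assert abs(lhs - rhs) < 1e-9
```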
\begin{proof} Let $U$ be the Gram matrix of vectors $\{x_1,\ldots,x_k\}\subset \mathbb S^{d-1}$ with entries $u_{i,j}$, i.e. $\langle x_i,x_j \rangle = u_{i,j}$ for $1\leq i,j\leq k$. For $I,J\subseteq \{1,\ldots,k\}$, we denote by $U_{I,J}$ the submatrix of $U$ obtained by deleting rows with numbers from $I$ and columns with numbers from $J$. By Lemma \ref{lem:A-formula}, whose proof is postponed to the Appendix, $$((k-1)!)^2 A^2 = - \det \begin{pmatrix}U&\mathbf{1}\\ \mathbf{1}^T&0\end{pmatrix}.$$ We expand the determinant along the last row and the last column: for each $i,j \in \{ 1,\ldots, k\}$, we choose the element in the $i$-th row of the last column and the element in the $j$-th column of the last row. We treat the cases $i=j$ and $i\neq j$ separately.
\begin{align*} ((k-1)!)^2 A^2 & = - \sum\limits_{i=1}^k (-1)^{k+1+i+k+i} \det(U_{\{i\},\{i\}}) - \sum\limits_{i\neq j} (-1)^{k + 1 + i+ k +j} \det(U_{\{i\},\{j\}})\\ &=\sum\limits_{i=1}^k \det(U_{\{i\},\{i\}}) + \sum\limits_{i\neq j} (-1)^{i+j} \det(U_{\{i\},\{j\}}) \end{align*}
For each $i \in \{ 1, \ldots, k \}$, $\det(U_{\{i\},\{i\}})$ is the $(k-1)$-point kernel $V^2 (x_1,\dots, x_{i-1},x_{i+1},\dots,x_k)$ from Theorem \ref{thm:volume-gen}. Theorem \ref{thm:volume-gen} then implies that the energy integral for the kernel defined by the first sum is at most $k \frac {(k-1)!} {d^{k-1}} \binom{d}{k-1}$.
It is now sufficient to show that the contribution of the second sum is nonpositive. Let us fix $i,j \in \{1, \ldots, k\}$, with $i \neq j$, and denote $U_{\{i,j\},\{i,j\}}$ by $U'$. We expand $\det(U_{\{i\},\{j\}})$ by row $j$ of $U$ and column $i$ of $U$ taking an element $u_{j,m}$, $m\neq j$, from the row and $u_{n,i}$, $n\neq i$, from the column, respectively.
If $m = i$ and $n = j$, then we take $u_{j,i}$ both for the row and the column expansion. The contribution of this case to $\det(U_{\{i \}, \{ j\}})$ is then $(-1)^{i+j -1} u_{j,i} \det(U')$.
Let us now consider the case where $m\neq i$ and $n\neq j$. Without loss of generality, let us assume that $i < j$ (the case of $i > j$ is similar). Let $n'$ be the position of row $n$ of $U$ after rows $i$ and $j$ are deleted, i.e. $n'=n$ if $n<i$, $n'=n-1$ if $i < n < j$, and $n' = n-2$ if $j < n$. Similarly we define $m'=m$ if $m<i$, $m'=m-1$ if $i < m < j$, and $m' = m-2$ if $j < m$. This guarantees that $U_{\{i,j,n\},\{i,j,m\} } = U'_{ \{n'\},\{m'\}}$. A careful examination of the signs shows that the contribution of this expansion in the sum is then \begin{align*} (-1)^{i +n' + j + m'} u_{n,i} u_{j, m} \det(U'_{\{n'\},\{m'\}}) &= (-1)^{p} u_{n,i} u_{j, m} \det(U'_{\{n'\},\{m'\}}) \\ &= (-1)^{ p} u_{n,i} u_{j, m} \det(U_{\{i, j, n\},\{i, j, m\}}) \end{align*} where $$p = i + n + j + m + \frac{\operatorname{sgn}(n-i) + \operatorname{sgn}(n-j) + \operatorname{sgn}(m-i) + \operatorname{sgn}(m-j)}{2} - 2.$$
Overall, we have \begin{align*} (-1)^{i+j}\det(U_{\{i\},\{j\}}) & = (-1)^{2i + 2j -1}u_{j,i} \det(U') + \sum_{\substack{1\leq m,n\leq k \\ m,n\notin \{i,j\}}} (-1)^{2i + 2j + m' + n'} u_{n,i} u_{j,m}\det(U'_{\{n'\},\{m'\}}) \\ & = -u_{j,i} \det(U') + \sum_{\substack{1\leq m,n\leq k\\ m,n\neq i,j}} (-1)^{m'+n'}u_{n,i} u_{j,m} \det(U'_{\{n'\},\{m'\}}) \\ & = -(u_{j,i} \det(U') - {u_i'}^T \operatorname{adj}(U') u_j')\\ & = - Q_{k,1}^d( x_i, x_j, x_{l_1}, \ldots, x_{l_{k-2}}), \end{align*} where $u_i'=(u_{1,i},\ldots,u_{k,i})^T$ with the first index running through all $n\neq i,j$, $u_j'=(u_{j,1},\ldots,u_{j,k})^T$ with the second index running through all $m\neq i,j$, and $\{l_1, \ldots, l_{k-2} \} = \{ 1, \ldots, k \} \setminus \{i, j \}$. For the last identity above, see Lemma \ref{lem:Musin-1}. From Theorem \ref{thm:Qdef} (or Lemma \ref{lem:Musin-1} specifically for this case), we know that $-Q_{k,1}^d$ is $k$-negative definite, so its contribution to $I_{A^2}$, and therefore the contribution of $$K (x_1, \ldots, x_k) = \sum_{i \neq j} (-1)^{i+j} \det(U_{\{i \}, \{j\}}) = - \frac{\binom{k}{2}}{k!} \sum_{\pi} Q_{k,1}^d( x_{\pi(1)}, \ldots, x_{\pi(k)}),$$ to $I_{A^2}$ is nonpositive, so $I_{A^2} \leq \frac{k}{(k-1)! d^{k-1}} \binom{d}{k-1}$.
It remains to find out which measures maximize $I_{A^2}$. Due to Theorem \ref{thm:volume-gen}, any maximizing measure $\mu$ must be isotropic. In order to find the measures on which the second part vanishes, we return to Lemma \ref{lem:Musin-1}. The necessary and sufficient condition for the vanishing of $Q_{k,1}^d(x_1, \ldots, x_k)$ is the following: for any linearly independent $x_3, \ldots, x_k$ from the support of $\mu$, with $X$ the linear space they generate, the projection of $\mu$ onto $X^{\perp}$ must be balanced. In other words, the center of mass of $\mu$ must belong to $X$. An isotropic measure must be full-dimensional, so there must exist $d$ linearly independent vectors in $\operatorname{supp}(\mu)$. The intersection of all linear spaces generated by any $k-2$ of these $d$ vectors is only the origin, so the center of mass of $\mu$ must be at the origin. Clearly, balanced isotropic measures attain the found maximum. \end{proof}
\begin{remark} In the last part of the proof we could also have shown that $\sigma$ is a maximizer, then noted that the potential is a polynomial of degree at most two in any of its variables, with some parts being of degree one, meaning that balanced isotropic measures are the maximizers, since they yield the same value of energy as $\sigma$ (this is in direct analogy to spherical $2$-designs). \end{remark}
We note that in the case that $k = 2$, $A(x,y)$ is simply the Euclidean distance between $x$ and $y$. If we were to split $\| x-y\|^2$ into a linear part and a ``volume" part as in the proof, then the volume is simply the distance from the origin to a point on the sphere, which is always $1$. Thus, in that particular case, only the linear part matters, so maximizers of $I_{A^2}$ on the sphere are all balanced measures, as shown in \cite{Bj}. The case of $k = 3$ was handled by Theorem \ref{thm: triangle area squared max}, in which case $Q_{k,1}^d = Y_{1,0,0}$ and the $(k-1)$-volume squared function is $V^2(x,y) = 1- \langle x, y \rangle^2$, as discussed in Section \ref{sec:k=2}.
Finally, we note that despite having this result for $A^2$ on the sphere, Corollary \ref{cor:A^s for s>2} does not hold if $A$ has $k < d+1$ inputs. As $s \rightarrow \infty$, we should expect the maximizer of $I_{A^s}$ to be supported on some set such that $A$ takes only its minimum (zero) and maximum values. This will be the set of vertices of a regular $(k-1)$-simplex on some $(k-2)$-dimensional subsphere. Such a measure is not isotropic on $\mathbb{S}^{d-1}$ for $k < d+1$, so we cannot use the same proof method to determine maximizers. We do, however, conjecture that the maximizers of $I_{A^s}$ are discrete when $s>2$, for all $3\le k \le d+1$.
\section{Acknowledgments}
We would like to thank Danylo Radchenko and David de Laat for fruitful discussions and useful suggestions. All of the authors express gratitude to ICERM for hospitality and support during the Collaborate{@}ICERM program in 2021. D.~Bilyk has been supported by the NSF grant DMS-2054606 and Simons Collaboration Grant 712810. D.~Ferizovi\'{c} thankfully acknowledges support by the Methusalem grant of the Flemish Government. A.~Glazyrin was supported by the NSF grant DMS-2054536. R.W.~Matzke was supported by the Doctoral Dissertation Fellowship of the University of Minnesota, the Austrian Science Fund FWF project F5503 part of the Special Research Program (SFB) ``Quasi-Monte Carlo Methods: Theory and Applications", and NSF Postdoctoral Fellowship Grant 2202877. O.~Vlasiuk was supported by an AMS-Simons Travel Grant.
\section{Appendix: Expressing \texorpdfstring{$A^2$}{A2} through Gram determinants}\label{sec:Appen A^2}
Let $U$ be the Gram matrix of vectors $\{x_1,\ldots,x_k\}\subset \mathbb S^{d-1}$ with entries $u_{i,j}$, i.e. $\langle x_i,x_j \rangle = u_{i,j}$ for $1\leq i,j\leq k$. The following lemma provides a linear-algebraic description of $A^2$.
\begin{lemma}\label{lem:A-formula} $$A^2 (x_1,\ldots,x_k) = -\frac 1 {((k-1)!)^2} \det \begin{pmatrix}U&\mathbf{1}\\ \mathbf{1}^T&0\end{pmatrix},$$ where $\mathbf{1}$ is the column vector of $k$ ones. \end{lemma}
\begin{proof}
$A^2$ can be found from the Gram matrix of the vectors $x_2-x_1, \ldots, x_k-x_1$. \begin{align*} A^2 (x_1,\ldots,x_k) &= \frac 1 {((k-1)!)^2} \det\begin{pmatrix}\langle x_2-x_1, x_2-x_1\rangle&\ldots&\langle x_2-x_1, x_k-x_1\rangle \\ \vdots & \ddots& \vdots \\ \langle x_k-x_1, x_2-x_1\rangle & \ldots & \langle x_k-x_1, x_k-x_1\rangle\end{pmatrix} \\ & = \frac 1 {((k-1)!)^2} \det\begin{pmatrix}2-2u_{1,2}&\ldots&1+u_{2,k} - u_{1,2} - u_{1,k} \\ \vdots & \ddots& \vdots \\ 1+u_{k,2} - u_{1,k} - u_{1,2}& \ldots & 2-2u_{1,k}\end{pmatrix} \\ & = \frac {-1} {((k-1)!)^2} \det\begin{pmatrix}0 & 0 & \ldots & 0 &1\\0 & 2-2u_{1,2}&\ldots&1+u_{2,k} - u_{1,2} - u_{1,k} & 0\\ \vdots & \vdots & \ddots& \vdots &\vdots\\0& 1+u_{k,2} - u_{1,k} - u_{1,2}& \ldots & 2-2u_{1,k}&0\\ 1 & 0 & \ldots & 0 & 0\end{pmatrix}\\ & = \frac {-1} {((k-1)!)^2} \det\begin{pmatrix}0 & u_{1,2} & \ldots & u_{1,k} &1\\u_{1,2} & 2-2u_{1,2}&\ldots&1+u_{2,k} - u_{1,2} - u_{1,k} & 0\\ \vdots & \vdots & \ddots& \vdots &\vdots\\u_{1,k}& 1+u_{k,2} - u_{1,k} - u_{1,2}& \ldots & 2-2u_{1,k}&0\\ 1 & 0 & \ldots & 0 & 0\end{pmatrix}. \end{align*}
Note that in the third equality, we created a $(k+1) \times (k+1)$ matrix whose determinant is the negative of the determinant of our original matrix, since the only nonzero entries in the last row and column are the ones in the upper right and lower left corners. This also means that inserting the $u_{1,j}$'s into the first row and column does not affect the determinant.
Now we add the first row and column to all rows and columns except for the last ones.
$$A^2=-\frac 1 {((k-1)!)^2} \det\begin{pmatrix}0 & u_{1,2} & \ldots & u_{1,k} &1\\u_{1,2} & 2&\ldots&1+u_{2,k} & 1\\ \vdots & \vdots & \ddots& \vdots &\vdots\\u_{1,k}& 1+u_{k,2}& \ldots & 2&1\\ 1 & 1 & \ldots & 1 & 0\end{pmatrix}.$$
We subtract the last column from all columns except for the first one, then add the bottom row to the top row, and see that \begin{align*} A^2 & = -\frac 1 {((k-1)!)^2} \det\begin{pmatrix}0 & u_{1,2}-1 & \ldots & u_{1,k}-1 &1\\u_{1,2} & 1&\ldots&u_{2,k} & 1\\ \vdots & \vdots & \ddots& \vdots &\vdots\\u_{1,k}& u_{k,2}& \ldots & 1&1\\ 1 & 1 & \ldots & 1 & 0\end{pmatrix} \\ & = -\frac 1 {((k-1)!)^2} \det\begin{pmatrix}1 & u_{1,2}& \ldots & u_{1,k}&1\\u_{1,2} & 1&\ldots&u_{2,k} & 1\\ \vdots & \vdots & \ddots& \vdots &\vdots\\u_{1,k}& u_{k,2}& \ldots & 1&1\\ 1 & 1 & \ldots & 1 & 0\end{pmatrix}= -\frac 1 {((k-1)!)^2} \det\begin{pmatrix}U&\mathbf{1}\\ \mathbf{1}^T&0\end{pmatrix}. \end{align*}
\end{proof}
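The identity of Lemma \ref{lem:A-formula} is easy to sanity-check numerically. The following sketch (our own illustration; the helper names are not from the paper) compares the bordered Gram determinant with directly computed triangle areas for small examples.

```python
def det(M):
    """Determinant by Laplace expansion along the first row (fine for small k)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0.0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def area_sq_bordered(pts):
    """A^2 = -det([[U, 1], [1^T, 0]]) / ((k-1)!)^2, with U the Gram matrix."""
    k = len(pts)
    B = [[dot(p, q) for q in pts] + [1.0] for p in pts]
    B.append([1.0] * k + [0.0])
    fact = 1
    for m in range(1, k):
        fact *= m
    return -det(B) / fact ** 2

# Vertices e1, e2, e3 in R^3: an equilateral triangle with side sqrt(2),
# whose area is sqrt(3)/2, so A^2 = 3/4.
pts = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
print(area_sq_bordered(pts))  # prints 0.75
```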
\end{document} |
\begin{document}
\title{\textbf{Cooling algorithms based on the 3-bit majority}}
\begin{abstract} Algorithmic cooling is a potentially important technique for making scalable NMR quantum computation feasible in practice. Given the constraints imposed by this approach to quantum computing, the most likely cooling algorithms to be practicable are those based on simple reversible polarization compression (RPC) operations acting locally on small numbers of bits. Several different algorithms using 2- and 3-bit RPC operations have appeared in the literature, and these are the algorithms I consider in this note. Specifically, I show that the RPC operation used in all these algorithms is essentially a majority vote of 3 bits, and prove the optimality of the best such algorithm. I go on to derive some theoretical bounds on the performance of these algorithms under some specific assumptions about errors. \end{abstract}
\section{Background}
Consider a probabilistic bit that equals 0 with probability $p$. Define the {\it bias} of the bit to be \[\mathcal{B}=p-(1-p)=2p-1,\] which is the difference between the probability that the bit equals 0 and the probability that the bit equals 1. (The symbol ``$\varepsilon$'' is usually used to denote the bias in the literature on algorithmic cooling; I prefer to reserve this symbol for error rates.) The problem addressed by algorithmic cooling is the following. Given some number of bits initially having a common bias $\mathcal{B}_\text{i} > 0$, distill out some smaller number of bits having greater bias. This should be accomplished without the need for any pure ancillary bits initialized to 0, since preparing such initialized bits is the problem to be solved. Also, we should assume that we cannot perform measurements.
Algorithmic cooling has significant relevance to quantum computing, because for physical systems like nuclear spins controlled using nuclear magnetic resonance (NMR), obtaining a pure initial state can be very challenging. It is this fact that has motivated recent research on the implementation of algorithmic cooling in NMR quantum computers, as well as theoretical investigations of the efficiency and performance of cooling algorithms.
Algorithmic cooling in the context of NMR quantum computation first appeared in \cite{SV99}. The authors presented a method for implementing {\it reversible polarization compression} (RPC). The idea of RPC is to use reversible logic to implement a permutation on the (classical) states of $n$ bits, so that the bias of some of the bits is increased, while the bias of others is decreased (this is closely connected to data compression). Unfortunately, RPC alone is limited by the Shannon bound, which says that reversible operations cannot decrease the total entropy of a closed system.
An alternative algorithm was proposed in \cite{BMRVV} to enable cooling below the Shannon bound. The idea is to use a second register of bits that quickly {\it relax} to the initial bias $\mathcal{B}_\text{i}$. Call these the {\it relaxation bits}, and refer to the bits on which we perform the RPC operation as the {\it compression bits}. The algorithm first uses RPC to increase the bias of some of the compression bits, while decreasing the bias of the other compression bits. Then the hotter compression bits (i.e. those having decreased bias) are swapped with the relaxation bits, where they will quickly relax back to the initial bias $\mathcal{B}_\text{i}$. Repeating this procedure effectively pumps heat out of some of the compression bits, cooling them to a bias much higher than $\mathcal{B}_\text{i}$. This system is analogous to a kitchen refrigerator, where the relaxation bits behave like the radiator on the back of the refrigerator, dumping the heat taken from the refrigerator compartment out into the surrounding environment. This approach is often referred to as ``heat-bath algorithmic cooling'', and the relaxation bits are often referred to as the ``heat bath''.
Another approach to heat-bath algorithmic cooling was introduced in \cite{FLMR04}. Their algorithm has a simpler analysis than the algorithm in \cite{BMRVV}, and gives a better bound on the size of molecule required to cool a single bit.
In \cite{SMW05} the physical limits of heat-bath cooling are explored. In their analysis, the assumption is that the basic operations can be implemented perfectly, without errors. Even given this assumption, the authors show that if the heat bath temperature is above a certain temperature threshold, no cooling procedure can initialize the system sufficiently for quantum computation. A heat-bath cooling algorithm called the ``partner pairing algorithm'' (PPA) is introduced to derive bounds on the best possible performance of algorithmic cooling with a heat bath. The PPA performs better than the previous algorithms, but it is unclear whether implementing the required permutations will be realistic in practice. In this paper I will focus on cooling algorithms based on repeated application of simple 2 or 3-bit RPC steps.
\section{Architecture}\label{sec_architecture}
To be useful for NMR quantum computing, we should implement cooling algorithms on a register of quantum bits all having some initial bias $\mathcal{B}_\text{i}$, without access to any ``clean'' ancillary bits. Further, we should be careful about how much ``local control'' we assume is directly provided by the system. In \cite{SV98}, four primitive computational operations are proposed as being supported by NMR quantum computers. For implementing the cooling algorithm, the first two of these suffice: \begin{enumerate} \item[$o_1$)] Cyclically shift the $n$ bits clockwise or counterclockwise one position. \item[$o_2$)] Apply an arbitrary two-bit operation to the first two bits (i.e. to the bits under a fixed ``tape-head''). \end{enumerate} To implement the two operations, \cite{SV98} suggested to use a repeating polymer like the $ABC$-chains used for global control schemes (e.g. \cite{Llo93}). The chain could be configured as a closed loop. To mark the position of the ``first two bits'' of the chain (for operation $o_2$), an atom of a fourth type, $D$, is positioned adjacent to the chain, in the desired location.
Notice that a system supporting operations $o_1$ and $o_2$ above can be re-phrased in terms of a fixed ``tape'' containing the bit-string, and a moving ``head'' that can be positioned over any adjacent pair of tape cells. For convenience, the tape can be viewed as a closed loop. In \cite{SV99} an architecture is proposed that uses a repeating polymer with 8 species to implement a system having four such tapes. A rather complicated scheme for implementing the cooling algorithm is described for this four-tape machine.
Some versions of the cooling algorithm (\cite{BMRVV}, \cite{FLMR04}) use (classical reversible) 3-bit operations: generalized Toffoli gates (from which controlled-\mbox{\sc swap}\xspace operations can be implemented).\footnote{By ``generalized Toffoli'' I mean any 3-bit gate that applies a $\mbox{\sc not}\xspace$ operation to one of the bits conditioned on a specific pattern of the basis states of the other two bits.} Without access to ancillary bits, the Toffoli cannot be implemented by classical 2-bit gates (\mbox{\sc cnot}\xspace and \mbox{\sc not}\xspace gates). It can be implemented without ancilla {\it if} we also have access to arbitrary single-qubit quantum gates \cite{BBC+95}. So to implement the algorithms of \cite{BMRVV} and \cite{FLMR04} using operations $o_1$ and $o_2$ would require inherently quantum operations. An error analysis of the cooling algorithms is greatly simplified, however, if we assume a ``classical'' implementation. Fortunately, $ABC$-chains naturally support generalized Toffoli operations directly, since the transition frequency of one species will be affected by the states of the neighbouring bits of two other species.
It is worth revisiting the idea put forth in \cite{SV98}, to use an $ABC$-chain. I propose an alternative set of operations that should be supported (these are sufficient for cooling, although obviously not for quantum computing):
\begin{enumerate} \item[$o^\prime_1$)] Move any three bits into adjacent positions under a fixed ``tape head'' (which covers three bits). \item[$o^\prime_2$)] Apply any generalized Toffoli or \mbox{\sc cnot}\xspace operation to the bits under the tape head. \end{enumerate}
Using the scheme described in Appendix \ref{append_abc_ring_scheme}, $o^\prime_1$ and $o^\prime_2$ can be implemented on an $ABC$-chain which is configured as a closed loop.\footnote{We could alternatively use a linear configuration, but would then have to be careful about the behaviour at the ends of the chain. One approach would be to have the chain be long enough that the bits of interest are sufficiently far into the interior of the chain that the effects of the ends are irrelevant.} An atom of a fourth type, $D$, is positioned adjacent to some $ABC$-triple selected (arbitrarily) to be the position of the tape head.
The cooling algorithms work by moving some bits under the tape head and applying a basic (2-bit or 3-bit) RPC step. The resulting cooler bits are then moved to one side of the array (tape), while the hotter bits are moved to the other side. The RPC step is repeated to cool several bits, and then recursively applied to these cooled bits.
\section{The Reversible Polarization Compression Step}
We will assume that our initial configuration is some string of bits, each of which is (independently) in state 0 with some probability $p>\frac{1}{2}$. Equivalently, we assume the bits all have an identical bias $\mathcal{B}>0$ before applying the polarization compression step. The assumption of independence (i.e. a binomial distribution on the strings) is required for the analysis.\footnote{In \cite{SV99} it is suggested that by performing an initial permutation of the bits we can limit our reliance on the assumption of independence.} Algorithmic cooling only amplifies an existing bias, and hence the initial bias $\mathcal{B}$ must be positive.
The basic idea behind RPC is to implement a permutation that maps strings with low Hamming weight (i.e. having many 0's) to strings having a long prefix of 0's. Because it will be useful to implement cooling algorithms on systems for which we don't have arbitrary local control, we will construct RPC permutations based on basic ``RPC steps''. An RPC step will be a permutation on the states of a small number of bits (2 or 3 in the examples I consider). The overall system will be cooled by recursively applying the basic RPC step to all the bits. If we apply the RPC steps to disjoint pairs or triples of bits at each stage, the assumption of independence will hold throughout.
In the following sections we will examine candidates for the RPC step, and discuss how they may be implemented.
\subsection{The 2-bit RPC step}\label{sec_2bit_step}
The algorithms described in \cite{SV99} and \cite{BMRVV} both use a very simple 2-bit operation for the basic RPC step. The operation begins with a \mbox{\sc cnot}\xspace gate. Suppose the \mbox{\sc cnot}\xspace is applied to two bits initially having some positive bias $\mathcal{B}$. After the \mbox{\sc cnot}\xspace, the target bit is 0 if both bits were originally equal, and is 1 if both bits were originally different. In the case that they were both the same, the control bit has an amplified bias after the \mbox{\sc cnot}\xspace. So, conditioned on the outcome of the target bit, the control bit is either accepted as a new bit with higher bias and is subsequently moved to the ``colder'' side of the array with a sequence of controlled-\mbox{\sc swap}\xspace operations, or it is rejected and subsequently moved to the ``warmer'' side of the array. For specificity, I will refer to this 2-bit RPC step as ``2BC''.
Suppose the values of the control and target bits before the \mbox{\sc cnot}\xspace are $b_c$ and $b_t$ respectively. Then after the \mbox{\sc cnot}\xspace the value of the target bit is $b_c+b_t$ (addition mod 2). The control bit is accepted iff this value equals 0. The probability that $b_c=0$ given that $b_c+b_t=0$ is \begin{equation} \frac{P(b_c=0 \wedge b_t=0)}{P(b_c+b_t=0)} =\frac{1}{2}+\frac{\mathcal{B}}{1+\mathcal{B}^2} \end{equation} and so in this case the bias of the control bit is \begin{equation} \mathcal{B}^\prime=\frac{2\mathcal{B}}{1+\mathcal{B}^2}. \end{equation} The probability that the control bit is accepted equals the probability that $b_c+b_t=0$, which is \begin{equation} \frac{1+\mathcal{B}^2}{2}. \end{equation}
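These formulas are easy to confirm by enumerating the four classical configurations of the two bits. The sketch below (our own illustration; helper names are made up) computes the acceptance probability and the conditional bias of the control bit after the \mbox{\sc cnot}\xspace.

```python
def two_bc_stats(bias):
    """Enumerate (b_c, b_t) for two independent bits with the given bias.

    Returns (p_accept, new_bias): the probability that the CNOT target
    ends up 0 (control accepted), and the control bit's bias conditioned
    on acceptance.
    """
    p0 = (1 + bias) / 2  # probability that a bit equals 0
    prob = {0: p0, 1: 1 - p0}
    p_accept = 0.0
    p_control_zero = 0.0
    for bc in (0, 1):
        for bt in (0, 1):
            if (bc + bt) % 2 == 0:          # target is 0 after the CNOT
                p_accept += prob[bc] * prob[bt]
                if bc == 0:
                    p_control_zero += prob[bc] * prob[bt]
    new_bias = 2 * (p_control_zero / p_accept) - 1
    return p_accept, new_bias

# Compare against the closed forms (1 + B^2)/2 and 2B/(1 + B^2).
B = 0.3
pa, nb = two_bc_stats(B)
```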
If the control bit is rejected, it has bias 0. To achieve the polarization compression, the \mbox{\sc cnot}\xspace must be followed by an operation that selects the accepted bits to be retained. This is accomplished in the 2BC operation by controlled-\mbox{\sc swap}\xspace operations that move the bit to the left or right according to whether it was accepted or rejected.\footnote{In Section \ref{sec_2BC_3BC_equivalence} we will show that the \mbox{\sc cnot}\xspace followed by a controlled-\mbox{\sc swap}\xspace actually computes the majority of three bits, and thus the 2BC operation is equivalent to the 3BC operation defined in Section \ref{sec_3bit_step}.}
A cooling algorithm can work by recursive application of the 2BC step across many bits having an initial bias $\mathcal{B}_\text{i}$. First some of the bits will be cooled by one application of 2BC, while others are warmed. The cooled bits will be moved away from the warmed bits, and then cooled further by another application of 2BC, and so on. The total number of starting bits required is determined by the depth of recursion required to obtain a single bit cooled to the desired target bias.
\subsection{A 3-bit RPC step}\label{sec_3bit_step}
The algorithm described in \cite{FLMR04} uses a 3-bit reversible polarization compression step (3BC). This RPC step is implemented by a permutation on the basis states of a 3-bit register that has the effect of increasing the bias of one of the bits, while decreasing the bias of the other two. Experimental demonstration of the 3-bit RPC step has been conducted using NMR \cite{BMRNL}. The implementation of the 3BC operation given in \cite{FLMR04} uses a \mbox{\sc cnot}\xspace gate followed by a controlled-\mbox{\sc swap}\xspace gate. Recall from our discussion in Section \ref{sec_architecture} that we are assuming that the bits have already been moved onto an $ABC$-triple under the ``tape-head'', and that we can implement any reversible 3-bit (classical) operation on them. The quantum circuit model is a convenient paradigm for describing the operations\footnote{Current NMR experiments in algorithmic cooling \cite{BMRNL} do not implement the 3-bit permutation through a decomposition into a sequence of gates such as we consider here, but rather use a more direct method. This direct method is not scalable in the number of bits over which the majority is being computed.}. Note that the controlled-\mbox{\sc swap}\xspace can be implemented by generalized Toffoli operations, as shown in Figure \ref{fig_cswap_cooling}. (Approaches for implementing such generalized Toffoli gates on $ABC$-chains are described for example in \cite{Llo93} and \cite{Ben00}.)
\begin{figure}\label{fig_cswap_cooling}
\end{figure}
The permutation implemented by the circuit in Figure \ref{fig_cswap_cooling} results in the majority value of the three bits (before the operation) being encoded into bit $A$. Since we are only interested in the final bias of bit $A$, we can use any permutation that has this effect. In fact, the following claim says that such a permutation is the best choice for a 3-bit RPC step.
\begin{claim} Suppose we have a register of $n$ bits independently having identical bias $\mathcal{B}>0$, where $n$ is odd. Suppose we want to implement a permutation that has the effect of increasing the bias of the first bit as much as possible. Then the best choice is a permutation that computes the majority value of the $n$ bits into the first bit. \end{claim}
\begin{quote} {\bf Proof} Since each bit has bias $\mathcal{B}>0$, each bit is independently 0 with probability $p>\frac{1}{2}$. An optimal permutation for increasing the bias of the first bit is one which maps the $2^{n-1}$ most-likely strings to strings having a 0 in the first bit. The $2^{n-1}$ most-likely strings are precisely those having at least $\lceil \frac{n}{2}\rceil$ bits in the state 0, i.e. those whose majority value is 0.\hspace{5mm}$\square$ \end{quote}
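For $n=3$ the claim can also be checked exhaustively: enumerating all $8!$ permutations of the basis states confirms that none makes the first bit more biased than the majority permutation does. A sketch of ours (helper names are illustrative):

```python
from itertools import permutations

def string_prob(bits, p0):
    """Probability of a bit string when each bit is independently 0 w.p. p0."""
    pr = 1.0
    for b in bits:
        pr *= p0 if b == 0 else 1 - p0
    return pr

def first_bit_bias(perm, p0, n=3):
    """Bias of the first bit after applying permutation perm to the basis states."""
    p_zero = 0.0
    for state in range(2 ** n):
        if perm[state] < 2 ** (n - 1):  # leading bit of the output is 0
            bits = [(state >> (n - 1 - i)) & 1 for i in range(n)]
            p_zero += string_prob(bits, p0)
    return 2 * p_zero - 1

B = 0.2
p0 = (1 + B) / 2

# Bias achieved by any permutation sending majority-0 strings to outputs
# with leading bit 0 (i.e. a majority permutation).
maj_bias = 0.0
for state in range(8):
    bits = [(state >> (2 - i)) & 1 for i in range(3)]
    if sum(bits) <= 1:                  # majority of the three bits is 0
        maj_bias += string_prob(bits, p0)
maj_bias = 2 * maj_bias - 1

# Exhaustive search over all 40320 permutations of the 8 basis states.
best = max(first_bit_bias(perm, p0) for perm in permutations(range(8)))
```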
The circuit shown in Figure \ref{fig_3bitmaj} is an alternative implementation of the 3-bit majority, which is simpler in terms of Toffoli and \mbox{\sc cnot}\xspace operations. I will henceforth refer to the operation implemented by this circuit as 3BC. Note that the circuit of Figure \ref{fig_3bitmaj} implements a different permutation than that implemented by the circuit of Figure \ref{fig_cswap_cooling}, but the effect on bit $A$ (i.e. after tracing out bits $B$ and $C$) is the same for both circuits (assuming the input bits are independently distributed).
\begin{figure}\label{fig_3bitmaj}
\end{figure}
Since Toffoli and \mbox{\sc cnot}\xspace operations are ``classical'' in the sense that they do not generate nontrivial superpositions given basis states as inputs, we can analyze the behaviour of the 3BC circuit entirely in the computational basis. In the following, I will phrase the analysis in terms of classical bits.
Consider the effect on the bias of bit $A$ after applying the circuit of Figure \ref{fig_3bitmaj}. The majority value is computed into bit $A$. Suppose initially the bias of each of the three bits is $\mathcal{B}$. So the probability for each bit equaling 0 is initially $(1+\mathcal{B})/2$. After the 3BC operation, the probability that bit $A$ (which now equals the majority of the initial values of $A,B,C$) equals zero is \begin{align} p^{(A)}&=\left(\frac{1+\mathcal{B}}{2}\right)^3+3\left(\frac{1+\mathcal{B}}{2}\right)^2\left(\frac{1-\mathcal{B}}{2}\right)\\ &=\frac{1}{4}(2+3\mathcal{B}-\mathcal{B}^3). \end{align} So the bias of bit $A$ after the 3BC operation is \begin{align} \mathcal{B}^\prime&=2p^{(A)}-1\\ &=\frac{3}{2}\mathcal{B}-\frac{1}{2}\mathcal{B}^3\label{3BC_newB}. \end{align}
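This derivation can be confirmed by exact enumeration of the eight classical inputs (an illustrative check of ours, not taken from \cite{FLMR04}):

```python
from itertools import product

def majority_bias(bias):
    """Exact bias of the majority of three independent bits with equal bias."""
    p0 = (1 + bias) / 2
    p_maj_zero = 0.0
    for bits in product((0, 1), repeat=3):
        pr = 1.0
        for b in bits:
            pr *= p0 if b == 0 else 1 - p0
        if sum(bits) <= 1:   # at least two zeros: the majority value is 0
            p_maj_zero += pr
    return 2 * p_maj_zero - 1

# Matches B' = (3/2)B - (1/2)B^3 from the text.
```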
\subsection{Equivalence between the 2BC and 3BC operations}\label{sec_2BC_3BC_equivalence}
Recall that 2BC is \mbox{\sc cnot}\xspace followed by controlled-\mbox{\sc swap}\xspace operations which moves the control bit (of the \mbox{\sc cnot}\xspace) to the left or right conditioned on the state of the target bit. The \mbox{\sc cnot}\xspace itself has no effect on the bias of the control bit. It is the value of the target bit after the \mbox{\sc cnot}\xspace that provides some information about the state of the control bit. In the case that the target bit equals zero, the control bit is more likely to be 0, and hence has a greater bias. So the 2BC step is really a method for gaining some information about which bits are more likely to be zero, and moving these off to one side. After a single application of 2BC on two bits having equal bias, we may or may not be left with a bit having increased bias.
The 3BC step, on the other hand, deterministically increases the bias of one of the bits at the expense of decreasing the bias of the other two. Every time we apply the 3BC step to three bits having equal bias we are certain to be left with a bit whose bias has been increased. This property makes it somewhat simpler to analyze the efficiency of algorithms based on 3BC. The analysis of the 2BC-based heat-bath algorithm in \cite{BMRVV} relies on the law of large numbers, and gives a worse bound than does the analysis of the 3BC-based algorithm of \cite{FLMR04}.
In the algorithms of \cite{SV99} and \cite{BMRVV} the \mbox{\sc cnot}\xspace of the 2BC step is always followed by a controlled-\mbox{\sc swap}\xspace operation. An important observation is that the \mbox{\sc cnot}\xspace followed by a controlled-\mbox{\sc swap}\xspace actually computes the three-bit majority (indeed this is the way the 3BC step was implemented in \cite{FLMR04}). Specifically, suppose we first apply a \mbox{\sc cnot}\xspace between bits in states $b_1$ and $b_2$ (with $b_1$ as the control bit), and then apply a controlled-\mbox{\sc swap}\xspace between $b_1$ and a third bit in state $c$, controlled on the target bit of the \mbox{\sc cnot}\xspace being 0. The final state of $c$ is \begin{equation} b_1c+b_2c+b_1b_2 \end{equation} which, with arithmetic mod 2, is the majority of $b_1,b_2,c$. So if we explicitly include the extra target bit of the controlled-\mbox{\sc swap}\xspace operation, the 2BC step is equivalent to the 3BC step.
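The identity can be verified over all eight classical inputs by simulating the two gates directly (a sketch of ours, with the gate order and bit roles as described above):

```python
from itertools import product

def cnot_then_cswap(b1, b2, c):
    """Apply a CNOT (control b1, target b2), then swap b1 and c iff the target is 0.

    Returns the final value of the third bit c.
    """
    b2 = b1 ^ b2                 # CNOT: target becomes b1 XOR b2
    if b2 == 0:                  # controlled swap of b1 and c
        b1, c = c, b1
    return c

for b1, b2, c in product((0, 1), repeat=3):
    maj = 1 if b1 + b2 + c >= 2 else 0
    assert cnot_then_cswap(b1, b2, c) == maj
    assert cnot_then_cswap(b1, b2, c) == (b1 * c + b2 * c + b1 * b2) % 2
```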
This suggests an equivalence between the early algorithms described in terms of a 2BC operation and algorithms phrased in terms of a 3-bit majority vote (3BC). For this reason, in the following I will restrict attention to algorithms based on the 3BC operation.
\section{Efficiency}\label{sec_efficiency}
\subsection{The simple recursive algorithm}
We will analyze the efficiency of a simple algorithm that recursively partitions the string of bits into triplets and applies 3BC to these triplets. After each 3BC step (say on bits $A,B,C$), the $B$ and $C$ bits which become heated are discarded. Thus at each level of recursion the total number of bits is reduced by a factor of 3, and the remaining bits' bias is increased from $\mathcal{B}$ to a new value \begin{equation} \mathcal{B}^\prime=\frac{3}{2}\mathcal{B}-\frac{1}{2}\mathcal{B}^3. \end{equation} To simplify the analysis we will approximate $\mathcal{B}^\prime$ by \begin{equation} \mathcal{B}^\prime\approx\frac{3}{2}\mathcal{B}. \end{equation}
After $k$ levels of recursion the bias is increased to \begin{equation} \mathcal{B}_k\approx\left(\frac{3}{2}\right)^k \mathcal{B}. \end{equation} This gives us an estimate of the number of levels of recursion $k$ required to achieve some target bias $\mathcal{B}_t<1$ on a single bit: \begin{equation} k\approx\log_{3/2}\left(\frac{\mathcal{B}_t}{\mathcal{B}}\right). \end{equation} Therefore the total number of bits starting at bias $\mathcal{B}$ required to obtain one bit with a target bias of $\mathcal{B}_t$ is $3^k$, which is polynomial in the ratio $\mathcal{B}_t/\mathcal{B}$. For example, suppose we start with a bias of $\mathcal{B}=10^{-5}$ (see \cite{M05}). Then the number of bits required to yield a single bit with bias $0.1$ is about $6.9\times 10^{10}$, and the number required to yield a bit with bias $0.9999$ is about $3.5\times 10^{13}$.
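The arithmetic behind these estimates is easy to reproduce under the $\mathcal{B}^\prime\approx\frac{3}{2}\mathcal{B}$ approximation (a rough check of ours):

```python
import math

def bits_needed(b_initial, b_target):
    """Bits required by the simple recursive 3BC algorithm, using B' ~ (3/2)B.

    Returns (total_bits, levels): 3^k bits for k levels of recursion.
    """
    k = math.log(b_target / b_initial) / math.log(1.5)
    return 3 ** k, k

n1, k1 = bits_needed(1e-5, 0.1)     # roughly 6.9e10 bits, ~23 levels
n2, k2 = bits_needed(1e-5, 0.9999)  # roughly 3.5e13 bits, ~28 levels
```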
Note that this analysis has only given the number of bits required. To obtain a good estimate of the time complexity, we would have to specify the computational model more precisely, and account for the time required to shuttle the states around as required by the architecture and the algorithm.
\subsection{Algorithms using a heat bath}
There are many ways in which the recursive algorithm might be modified to take advantage of a heat bath. The heat bath is a mechanism by which a heated bit can be exchanged for a fresh bit having initial bias $\mathcal{B}_\text{i}$ (taken from the environment). For a rough analysis, we ignore the details of how the heat-bath contact will be implemented, and assume we can apply an operation which resets a bit's bias to $\mathcal{B}_\text{i}$ on-demand (this may be an unrealistically optimistic assumption).
One approach to using the heat bath in a 3BC algorithm is as follows. First apply the 3BC step as in the simple recursive algorithm. At this point we have $n/3$ bits cooled to $\mathcal{B}^\prime$. Now, instead of discarding the $2n/3$ bits that were heated in this process, send them to the heat bath to return them to bias $\mathcal{B}_\text{i}$. Then partition these $2n/3$ bits into triples, and apply the 3BC step to them. This yields another $2n/9$ bits of bias $\mathcal{B}^\prime$. Repeat this process until there are fewer than $3$ bits left having bias less than $\mathcal{B}^\prime$ (there will always be exactly 2 bits left over). Now we have $n-2$ bits cooled to bias $\mathcal{B}^\prime$ and we can proceed to the next level of recursion. As before, the number of levels of recursion $k$ required to achieve a bit having some target bias $\mathcal{B}_t<1$ is \begin{equation} k\approx\log_{3/2}\left(\frac{\mathcal{B}_t}{\mathcal{B}_\text{i}}\right). \end{equation} This time, however, a logarithmic amount of additional work is done for each level of recursion. By taking this extra time, we save on the total number of bits required. After each level of recursion an additional 2 bits are discarded. So the total number of bits required to obtain one bit cooled to $\mathcal{B}_t$ by this method is $2k$, which is logarithmic in $\mathcal{B}_t/\mathcal{B}_\text{i}$. As before, supposing we start with a bias of $\mathcal{B}_\text{i}=10^{-5}$, then the number of bits required to yield a single bit with bias $0.1$ is about $46$, and the number required to yield a bit with bias $0.9999$ is about $57$.
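The $2k$ bit count can be checked quickly (a sketch; the function name is mine):

```python
import math

def bits_heat_bath(b_initial, target):
    """Total bits drawn from the initial supply: 2 discarded per level, k levels."""
    k = math.log(target / b_initial) / math.log(1.5)
    return 2 * k

print(math.ceil(bits_heat_bath(1e-5, 0.1)))     # 46
print(math.ceil(bits_heat_bath(1e-5, 0.9999)))  # 57
```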
Another approach to using the heat bath is described in \cite{SMW07}. Their algorithm repeatedly applies the 3BC step to three bits having bias values $\mathcal{B}_{j-2},\mathcal{B}_{j-1}$ and $\mathcal{B}_j$. This requires more careful analysis. Consider applying 3BC to three bits $b_{j-2},b_{j-1},b_j$ having initial bias values $\mathcal{B}_{j-2},\mathcal{B}_{j-1}$ and $\mathcal{B}_j$ respectively, where the majority is computed into the third bit $b_j$. The resulting bias of the third bit is \begin{equation} \mathcal{B}_j^\prime=\frac{\mathcal{B}_{j-2}+\mathcal{B}_{j-1}+\mathcal{B}_j-\mathcal{B}_{j-2}\mathcal{B}_{j-1}\mathcal{B}_j}{2}. \end{equation} Now suppose the first two bits are sent to the heat bath, and then run back through the cooling procedure to regain bias values of $\mathcal{B}_{j-2}$ and $\mathcal{B}_{j-1}$. Then 3BC is applied again (on the same three bits, except this time the third bit starts with bias $\mathcal{B}_j^\prime$). If this process is repeated several times, the bias of the third bit reaches a steady state value of \begin{equation} \frac{\mathcal{B}_{j-2}+\mathcal{B}_{j-1}}{1+\mathcal{B}_{j-2}\mathcal{B}_{j-1}}. \end{equation}
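The steady-state value can be verified numerically by iterating the 3BC bias formula with the first two biases held fixed (a sketch; variable names are mine):

```python
def majority_bias(b1, b2, b3):
    """Bias of the majority bit after 3BC on bits with biases b1, b2, b3."""
    return (b1 + b2 + b3 - b1 * b2 * b3) / 2

# The first two bits are re-cooled to fixed biases between applications.
b_a, b_b = 0.2, 0.3
b = 0.0
for _ in range(100):
    b = majority_bias(b_a, b_b, b)

steady = (b_a + b_b) / (1 + b_a * b_b)
print(b, steady)  # both ~0.47170
```

With the first two biases fixed the map is affine in the third bias, so convergence to the fixed point is geometric.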
The algorithm described by \cite{SMW07} is based on this process. Suppose the algorithm has built up an array of $k>3$ cooled bits $b_1,b_2,\ldots,b_k$ having bias values $\mathcal{B}_1,\mathcal{B}_2,\ldots,\mathcal{B}_{k}$ in that order, where $\mathcal{B}_j=\frac{\mathcal{B}_{j-2}+\mathcal{B}_{j-1}}{1+\mathcal{B}_{j-2}\mathcal{B}_{j-1}}$ for each $3<j<k$. Then in the next stage of the algorithm a new bit $b_{k+1}$ is introduced having the heat-bath bias $\mathcal{B}_\text{i}$. The 3BC procedure is applied to the three bits $b_{k-1},b_k,b_{k+1}$ repeatedly, where between each application the algorithm is recursively repeated to re-establish the bias values $\mathcal{B}_{k-1},\mathcal{B}_k$ on bits $b_{k-1},b_k$. Repeating this process several times, the bias of bit $b_{k+1}$ will reach the steady state value $\mathcal{B}_{k+1}=\frac{\mathcal{B}_{k-1}+\mathcal{B}_{k}}{1+\mathcal{B}_{k-1}\mathcal{B}_{k}}$.
Starting with $n$ bits of bias $\mathcal{B}_\text{i}$, the algorithm of \cite{SMW07} achieves one bit of bias approximately $\mathcal{B}_n=\mathcal{B}_\text{i} F(n)$, where $F(n)$ is the $n^\text{th}$ Fibonacci number. This is even better than the simple recursive heat-bath method described previously. Starting with a bias of $\mathcal{B}_\text{i}=10^{-5}$, the number of bits required for this method to yield a single bit with bias $0.1$ is about $20$, and the number required to yield a bit with bias $0.9999$ is about $28$. There is a polynomial cost in time incurred by the repeated re-cooling of bits from the point of heat-bath contact at the left end of the chain up to $\mathcal{B}_{j-2}$ and $\mathcal{B}_{j-1}$.
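The Fibonacci growth can be checked directly. The counts below (a sketch; names are mine) use the convention $F(1)=F(2)=1$; the exact values depend on the indexing convention, and the linear approximation $\mathcal{B}_n=\mathcal{B}_\text{i} F(n)$ overstates the bias as it approaches 1, so the figures near saturation differ slightly from those quoted above:

```python
def fibonacci_bits_needed(b_initial, target):
    """Smallest n with b_initial * F(n) >= target, taking F(1) = F(2) = 1."""
    f_prev, f_curr, n = 1, 1, 2
    while b_initial * f_curr < target:
        f_prev, f_curr = f_curr, f_prev + f_curr
        n += 1
    return n

print(fibonacci_bits_needed(1e-5, 0.1))     # 21
print(fibonacci_bits_needed(1e-5, 0.9999))  # 26
```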
Notice that in the heat bath algorithms we have described, after a 3BC operation the two bits that have become heated by this operation are both sent to the heat bath. In the early stages of an algorithm, this would be sensible, because the 3BC operation will have warmed those two bits to bias values less than the initial bias $\mathcal{B}_\text{i}$. Towards the end of the algorithm, however, 3BC will be applied to triples of bits that are all very cold, and the bits that become heated may still have bias considerably higher than $\mathcal{B}_\text{i}$. In this case, sending these bits to the heat bath does not seem like the right thing to do. To analyze the performance of algorithms, however, it is extremely convenient to assume we always do so. If we do not send the two heated bits back to the heat bath after a 3BC application, the bits' values are no longer described by independent probability distributions, and bias values are no longer well-defined. It is convenient to model the process of a 3BC application followed by sending the two heated bits to the heat bath as a single operation, as follows.
\begin{definition} Consider three bits $b_1,b_2,b_3$ having bias values $\mathcal{B}_1\leq\mathcal{B}_2\leq\mathcal{B}_3$ respectively. Define $\textnormal{3BC}_\textnormal{hb}$ as the three-bit majority on $b_1,b_2,b_3$ (where the majority is computed into $b_3$) followed by sending $b_1$ and $b_2$ to the heat bath. The bias values of the three bits after this operation are $\mathcal{B}_\text{i},\mathcal{B}_\text{i},\frac{\mathcal{B}_1+\mathcal{B}_2+\mathcal{B}_3-\mathcal{B}_1\mathcal{B}_2\mathcal{B}_3}{2}$ respectively. \end{definition}
The heat-bath algorithms described above can all be described as a sequence of operations $(\textnormal{3BC}_\textnormal{hb}, P_1, \textnormal{3BC}_\textnormal{hb}, P_2, \textnormal{3BC}_\textnormal{hb}, P_3, \ldots)$ where each $\textnormal{3BC}_\textnormal{hb}$ is applied to three bits in some specific positions (e.g. under a ``tape head''), and each $P_i$ is some permutation of the positions of the bits in the string. The following claim shows that the algorithm of \cite{SMW07} is the best such algorithm (this is not claimed in \cite{SMW07}).
\begin{claim} Consider a string of bits each having initial bias value $\mathcal{B}_\text{i}$. Let $\mathcal{A}$ be any cooling algorithm described by a sequence of operations \[\textnormal{3BC}_\textnormal{hb}, P_1, \textnormal{3BC}_\textnormal{hb}, P_2, \textnormal{3BC}_\textnormal{hb}, P_3, \ldots\] where each $\textnormal{3BC}_\textnormal{hb}$ is applied to three bits in some specific positions (e.g. under a ``tape head''), and each $P_i$ is some permutation of the positions of the bits in the string. At any stage of the algorithm, suppose we arrange the bits in a nondecreasing order of their bias values $\mathcal{B}_1,\ldots,\mathcal{B}_N$. Then we have $\mathcal{B}_j\leq \mathcal{B}_\text{i} F_j$ for all $1\leq j\leq N$, where $F_j$ is the $j^\text{th}$ Fibonacci number. \end{claim}
\begin{quote} The proof is by induction. The claim is initially true (before starting the algorithm) by assumption. Since the only operation allowed in $\mathcal{A}$ that changes the bias values is the $\textnormal{3BC}_\textnormal{hb}$ operation, it suffices to show that after an arbitrary $\textnormal{3BC}_\textnormal{hb}$ operation the claim is still true. Suppose the ordered bias values before the 3BC operation are \[\mathcal{B}_1,\mathcal{B}_2,\ldots, \mathcal{B}_N.\] Then suppose the $\textnormal{3BC}_\textnormal{hb}$ operation is applied to any three bits, say those having bias values $\mathcal{B}_k,\mathcal{B}_l$ and $\mathcal{B}_m$, where $k<l<m$. We assume that after the 3BC operation the value of $\mathcal{B}_m$ is not decreased. This is a safe assumption, because otherwise algorithm $\mathcal{A}$ would have done just as well not to apply that 3BC operation.
After the $\textnormal{3BC}_\textnormal{hb}$ operation, the new bias of the bit originally indexed by $m$ is \[\mathcal{B}_r^\prime =\frac{\mathcal{B}_k+\mathcal{B}_l+\mathcal{B}_m-\mathcal{B}_k\mathcal{B}_l\mathcal{B}_m }{2},\] where $r$ denotes the position this bit occupies after reordering. By assumption we have $\mathcal{B}_m\leq \mathcal{B}_\text{i} F_{m}$, $\mathcal{B}_l\leq \mathcal{B}_\text{i} F_{m-1}$ and $\mathcal{B}_k\leq \mathcal{B}_\text{i} F_{m-2}$. Since $F_m=F_{m-1}+F_{m-2}$ by definition, we have \begin{equation}\label{eqn_bound_Br} \mathcal{B}_r^\prime\leq \mathcal{B}_\text{i} F_m. \end{equation} Now, suppose the re-ordered bias values are \[\mathcal{B}^\prime_1,\ldots,\mathcal{B}^\prime_N.\] Since two of the bits were subjected to heat bath contact, we have $\mathcal{B}^\prime_1=\mathcal{B}^\prime_2=\mathcal{B}_\text{i}$ and $\mathcal{B}_j^\prime\leq\mathcal{B}_j$ for $3\leq j\leq m-1$. So the claim is true for the first $m-1$ bias values. By the ordering we have $\mathcal{B}^\prime_m,\ldots,\mathcal{B}^\prime_{r-1}\leq \mathcal{B}^\prime_r$, and by (\ref{eqn_bound_Br}) we know these are all at most $\mathcal{B}_\text{i} F_m$; since $F_m\leq F_j$ for $j\geq m$, the claim is true for these bias values. For the remaining bits we have $\mathcal{B}^\prime_j=\mathcal{B}_j\leq \mathcal{B}_\text{i} F_j$ for $r+1\leq j\leq N$, and so the claim is true for them as well. This completes the proof.$\hspace{5mm}\square$
\end{quote}
\subsection{Accounting for the heat bath as a computational resource}\label{sec_heat_bath_resource}
The heat bath is typically modeled by a process whereby a hot bit is magically replaced by a fresh bit having the initial bias $\mathcal{B}_\text{i}$. Usually we would make some assumption about where the heat-bath contact occurs, for example requiring that only the bit on the end of a chain can be replaced with a fresh bit.
From a complexity theory point of view, the heat bath is a resource that should be accounted for. For modeling the physics of the situation it might be very convenient to draw a conceptual boundary between the system we are trying to cool and the heat bath, which for all practical purposes might be extremely large. Continuing our previous analogy between heat-bath cooling and a kitchen refrigerator, if we put the refrigerator in a large enough room we won't have to account for the fact that the room itself is gradually heated by the radiator on the back of the fridge. While heat-bath techniques appear to drastically reduce the number of bits required to achieve a target bias, it should be recognized that this hasn't come for free. The extra bits ultimately come from the heat bath. In practice, it may be very reasonable to assume we get these bits ``for free'', since we don't have to exercise control over the heat bath the way we do with the bits directly involved in the algorithm.
\section{Accounting for errors in an analysis of cooling}
In the following sections I investigate the performance of cooling algorithms when errors can occur in the RPC step. The bounds I will derive will apply to cooling algorithms that are based on recursive application of the 3BC step, where the step is always applied to 3 bits that have been previously cooled to equal bias values. In Section \ref{sec_general_3BC_algs} I discuss how the same approach can be applied to analyze more general algorithms based on the 3BC step. I do not account for errors that might occur between applications of the RPC steps, such as when bits are being shuttled around, or placed in an external heat bath. For this reason the bounds apply quite generally, independent of implementation details and low-level algorithmic details.
The most general way to analyze the effect of errors on a quantum circuit is to examine the effect of the errors on the density matrix of the state as it evolves through the circuit. As we observed above, the 3BC step can be implemented by classical operations, and can be analyzed entirely in the computational basis. I therefore perform the error analysis in a classical setting.
Suppose we implement the RPC operation in a system subject to errors described by a set of error patterns $\{S_j\}$. The error pattern is a record of what errors actually occurred. For each error pattern $S_j$ we can analyze the effect by considering a new circuit containing the original RPC circuit as well as the error operations that occurred. We can then find the probability $p_j$ that the cooled bit would be in state 0 after applying this new circuit. Thus the probability that the cooled bit equals 0 for the overall process is \begin{equation} p=\sum_j p_j\Pr(S_j) \end{equation} where $\Pr(S_j)$ is the probability that error pattern $S_j$ occurs. The new bias of the cooled bit after the process is then \begin{equation} \mathcal{B}^\prime=2p-1. \end{equation} Equivalently, we could compute the new bias $\mathcal{B}_j^\prime$ of the cooled bit resulting from application of the RPC step for each error pattern $S_j$, and take a weighted sum of these bias values (weighted by the probabilities $\Pr(S_j)$): \begin{equation} \mathcal{B}^\prime=\sum_j \mathcal{B}_j^\prime\Pr(S_j). \end{equation} After obtaining the new bias $\mathcal{B}^\prime$ of the cooled bit for the overall process, we can obtain theoretical limits on the performance of the cooling algorithm by analyzing the condition \begin{equation} \mathcal{B}^\prime > \mathcal{B} \end{equation} where $\mathcal{B}$ is the bias before the RPC and error channel were applied (this simply says that the bias should be greater after application of the 3BC step). In practice, to analyze the inequality \begin{equation}\label{general_bias_boost_inequal} \mathcal{B}^\prime - \mathcal{B} > 0 \end{equation} we study the expression $\mathcal{B}^\prime - \mathcal{B}$, which for the error models we consider will be a quadratic or cubic polynomial in $\mathcal{B}$ (and also a function of the error rates). 
By studying the roots of this polynomial we can find ranges of values for the error rates for which inequality (\ref{general_bias_boost_inequal}) has solutions $\mathcal{B}>0$, and also obtain the maximum value of $\mathcal{B}$ which is a solution (this maximum value will be the maximum bias achievable by the RPC step for the given error rates).
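The pattern-weighted bias computation is simple enough to state as code (a sketch; the function name is mine, and the two-pattern example anticipates the bit-flip model analyzed next):

```python
def overall_bias(pattern_biases, pattern_probs):
    """B' = sum_j B'_j Pr(S_j): probability-weighted bias over error patterns."""
    return sum(bj * pj for bj, pj in zip(pattern_biases, pattern_probs))

# Two-pattern illustration: with probability 1 - eps no error occurs (bias x);
# with probability eps a bit flip negates the bias.
eps, x = 0.1, 0.6
print(overall_bias([x, -x], [1 - eps, eps]))  # (1 - 2*eps) * x, i.e. ~0.48
```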
\section{The symmetric bit-flip channel}
The first error model we will consider is the symmetric bit-flip model, in which a bit's value is flipped with probability $\varepsilon<\frac{1}{2}$ (``symmetric'' in this context means that the probability of a bit flip is independent of the initial state of the bit).
If a bit-flip error occurs on a bit initially having bias $\mathcal{B}$, the bias is negated to $-\mathcal{B}$; applying the full channel (a flip with probability $\varepsilon$, no flip with probability $1-\varepsilon$) therefore yields bias $(1-2\varepsilon)\mathcal{B}$.
\begin{comment} \subsection{2BC followed by a symmetric bit-flip}\label{sec_sym_after_2BC}
We begin by considering the case in which a bit-flip error may occur immediately after the 2BC step. For this error model there are two error patterns. Pattern $S_1$ represents the case where a bit flip does not occur. In this case, the final bias of bit $A$ is \begin{equation} \mathcal{B}_1^{\prime}=\frac{3}{2}\mathcal{B}-\frac{1}{2}\mathcal{B}^2 \end{equation} as we saw in Section \ref{sec_2bit_step} (equation (\ref{2BC_newB})).
The error pattern $S_2$ represents the case where the bit flip occurs on the newly biased bit. In this case, the bias is negated, and so the new bias is \begin{equation} \mathcal{B}_2^{\prime}=-\frac{3}{2}\mathcal{B}+\frac{1}{2}\mathcal{B}^2. \end{equation} So the new bias of $A$ for the overall process is \begin{align} \mathcal{B}^\prime&=(1-\varepsilon)\mathcal{B}_1^\prime+\varepsilon \mathcal{B}_2^\prime\\ &=\left(\frac{3}{2}\mathcal{B}-\frac{1}{2}\mathcal{B}^2\right)(1-2\varepsilon)\label{sym_after_2BC_newB}. \end{align}
Then the condition that $\mathcal{B}^\prime>\mathcal{B}$ gives \begin{equation}\label{sym_after_2BC_thresh_inequal} -(1-2\varepsilon)\mathcal{B}-6\varepsilon+1 > 0 \end{equation} which leads to \begin{align} \varepsilon& < \frac{1}{2}-\frac{1}{3-\mathcal{B}}\\ & < \frac{1}{6}\label{sym_after_2BC_thresh} \end{align} (where in the second line we used the assumption that $\mathcal{B}$ is positive). So for this simple error model $\varepsilon_{\text{th}}=1/6$ is an error threshold beyond which the 2BC procedure can have no positive effect on the bias (regardless of how low the initial bias is). The error tolerance (threshold) decreases as the bias increases. For a fixed error rate $\varepsilon\leq\varepsilon_\text{th}$ the more interesting result is a bound on the maximum bias that will be achievable. This is obtained by solving for $\mathcal{B}$ in (\ref{sym_after_2BC_thresh_inequal})): \begin{equation}\label{sym_after_2BC_Blim} \mathcal{B}<\frac{1-6\varepsilon}{1-2\varepsilon}\equiv \mathcal{B}_\text{lim}. \end{equation} Once the bias exceeds $\mathcal{B}_\text{lim}$, the 3BC procedure will no longer be effective in increasing the bias, and the algorithm will yield no further improvement. So $\mathcal{B}_\text{lim}$ represents the limit of the polarization bias that can be achieved by any cooling algorithm that is based on the 2BC step, under this error model.
To facilitate comparison with later results, it is useful to give a power series approximation of (\ref{sym_after_2BC_Blim}). For small values of $\varepsilon$, we can approximate the expression to second-order (taking $\varepsilon^3\approx 0$). \begin{equation}\label{sym_after_2BC_Blim_approx} \mathcal{B}_\text{lim}\approx 1-4\varepsilon-8\varepsilon^2. \end{equation}
For error rates $\varepsilon$ below 1\%, the approximate value in (\ref{sym_after_2BC_Blim_approx}) is within 0.01\% of the value in (\ref{sym_after_2BC_Blim}). \end{comment}
\subsection{3BC followed by a symmetric bit-flip error}\label{sec_sym_after_3BC}
We will now consider the case in which a bit-flip error can occur after the 3BC step has been performed (and errors do not occur between applications of the gates in Figure \ref{fig_3bitmaj}).
There are two error patterns. Pattern $S_1$ represents the case where a bit flip does not occur. In this case, the final bias of bit $A$ is \begin{equation} \mathcal{B}_1^{\prime}=\frac{3}{2}\mathcal{B}-\frac{1}{2}\mathcal{B}^3 \end{equation} as we found in Section \ref{sec_3bit_step} (equation (\ref{3BC_newB})). The error pattern $S_2$ represents the case where the bit flip occurs on the newly biased bit. In this case, the bias is negated, and so the new bias is \begin{equation} \mathcal{B}_2^{\prime}=-\frac{3}{2}\mathcal{B}+\frac{1}{2}\mathcal{B}^3 \end{equation} So the new bias of $A$ for the overall process is \begin{align} \mathcal{B}^\prime&=(1-\varepsilon)\mathcal{B}_1^\prime+\varepsilon \mathcal{B}_2^\prime\\ &=\left(\frac{3}{2}\mathcal{B}-\frac{1}{2}\mathcal{B}^3\right)(1-2\varepsilon)\label{sym_after_3BC_newB}. \end{align}
Then the condition that $\mathcal{B}^\prime > \mathcal{B}$ gives \begin{equation}\label{sym_after_3BC_thresh_inequal} -(1-2\varepsilon)\mathcal{B}^2-6\varepsilon+1 > 0 \end{equation} which leads to \begin{align} \varepsilon& < \frac{1}{2}-\frac{1}{3-\mathcal{B}^2}\\ & < \frac{1}{6}.\label{sym_after_3BC_thresh} \end{align} So for this simple error model $\varepsilon_{\text{th}}=1/6$ is an error threshold beyond which the 3BC procedure can have no positive effect on the bias (regardless of how low the initial bias is). For a fixed error rate $\varepsilon<\varepsilon_\text{th}$ a bound on the maximum bias that will be achievable is obtained by solving for $\mathcal{B}$ in (\ref{sym_after_3BC_thresh_inequal}): \begin{equation}\label{sym_after_3BC_Blim} \mathcal{B} < \sqrt{\frac{1-6\varepsilon}{1-2\varepsilon}}=\mathcal{B}_\text{lim}. \end{equation} Approximating the expression to second order gives \begin{equation}\label{sym_after_3BC_Blim_approx} \mathcal{B}_\text{lim}\approx 1-2\varepsilon-6\varepsilon^2. \end{equation}
Once the bias exceeds $\mathcal{B}_\text{lim}$, the 3BC procedure will no longer be effective in increasing the bias, and the algorithm will yield no further improvement. So $\mathcal{B}_\text{lim}$ represents the limit of the bias that can be achieved by any cooling algorithm that is based on the 3BC step, under this error model.
For error rates $\varepsilon$ below 1\%, the approximate value in (\ref{sym_after_3BC_Blim_approx}) is within 0.01\% of the value in (\ref{sym_after_3BC_Blim}).
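The limiting bias can also be observed dynamically: iterating the map $\mathcal{B}\mapsto(\frac{3}{2}\mathcal{B}-\frac{1}{2}\mathcal{B}^3)(1-2\varepsilon)$, which models one 3BC step followed by the bit-flip channel, drives the bias to the nonzero fixed point, which is exactly $\mathcal{B}_\text{lim}$. A sketch (names are mine):

```python
import math

def cooled_then_flipped(bias, eps):
    """One 3BC step followed by the symmetric bit-flip channel."""
    return (1.5 * bias - 0.5 * bias ** 3) * (1 - 2 * eps)

eps = 0.05
b = 0.01
for _ in range(500):
    b = cooled_then_flipped(b, eps)

b_lim = math.sqrt((1 - 6 * eps) / (1 - 2 * eps))
print(b, b_lim)  # both ~0.88192
```

The fixed point is stable for $\varepsilon<1/6$ (the map's derivative there is $6\varepsilon<1$), so the iteration converges from any small positive starting bias.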
\subsection{Symmetric bit-flip errors during application of 3BC}\label{sec_sym_during_3BC}
We will now do a more careful analysis accounting for the possibility of errors occurring during the application of the 3BC step. Specifically, we consider independent bit-flip errors on each bit with probability $\varepsilon$, where the errors can occur at each time step; that is, immediately after the application of any gate in the circuit of Figure \ref{fig_3bitmaj} (equivalently after the application of each $o_2^\prime$ operation). This is only one possible decomposition of the majority-vote operation into a sequence of basic operations, but it serves to illustrate the technique for analysis. A similar analysis can easily be conducted given an alternative decomposition of the majority vote into a sequence of basic operations.
There are 9 possible sites for bit-flip errors in Figure \ref{fig_3bitmaj}, but two of these can be ignored (errors on the $B$ or $C$ bits after the final Toffoli have no effect on the final bias of the $A$ bit). Figure \ref{fig_3bitmaj_err1} illustrates the circuit including the possible error operations. The binary variables $e_i$ shown on the circuit are taken to be ``1'' if a bit-flip error occurs in that location, and ``0'' otherwise.
\begin{figure}
\caption{The 3BC circuit of Figure \ref{fig_3bitmaj} with the seven possible bit-flip error sites included. Each binary variable $e_i$ equals 1 if a bit-flip error occurs at the corresponding location, and 0 otherwise.}
\label{fig_3bitmaj_err1}
\end{figure}
Suppose the value of the $(A,B,C)$ register is initially $(a,b,c)$, where $a$, $b$, and $c$ are the binary values of the three bits. Analyzing the circuit in Figure \ref{fig_3bitmaj_err1}, the final value of the $A$ bit is found to be \begin{equation}\label{3bitmaj_err1_newA} a+e_1+e_4+e_7+(a+b+e_2+e_5)(a+c+e_1+e_3+e_6)\mod 2. \end{equation} Since the errors occur independently with probability $\varepsilon$ at each position, the probability associated with each error pattern $S_i=(e_1,e_2,\ldots,e_7)$ (where $i=\sum_{k=1}^7e_k2^{k-1}$ indexes the possible patterns) can be evaluated as \begin{equation}\label{3bitmaj_err1_prob_pattern} \text{Pr}(S_i)=\varepsilon^{e_1+e_2+e_3+e_4+e_5+e_6+e_7} (1-\varepsilon)^{\bar{e}_1+\bar{e}_2+\bar{e}_3+\bar{e}_4+\bar{e}_5+\bar{e}_6+\bar{e}_7} \end{equation} where $\bar{x}\equiv 1+x\mod 2$. Initially, the probability that each bit $a,b$ or $c$ equals 0 is $p=\frac{\mathcal{B}+1}{2}$. So the tuple $(a,b,c,e_1,\ldots,e_7)$ describes the situation where the register was initially in the state $(a,b,c)$ and the error described by $(e_1,e_2,\ldots,e_7)$ occurred, and this happens with probability \begin{equation}\label{3bitmaj_sym_err_prob_scen} \text{Pr}(a,b,c,e_1,\ldots,e_7)\equiv(1-p)^{a+b+c}p^{\bar{a}+\bar{b}+\bar{c}}\varepsilon^{e_1+e_2+e_3+e_4+e_5+e_6+e_7} (1-\varepsilon)^{\bar{e_1}+\bar{e_2}+\bar{e_3}+\bar{e_4}+\bar{e_5}+\bar{e_6}+\bar{e_7}}. \end{equation} Let $p^{(A)}$ be the probability that the final value of $A$ for the overall process equals 0. The value of $p^{(A)}$ is obtained by adding the probabilities $\text{Pr}(a,b,c,e_1,\ldots,e_7)$ over all those tuples $(a,b,c,e_1,\ldots,e_7)$ for which the value of (\ref{3bitmaj_err1_newA}) equals 0. The new bias of $A$ is then determined as \begin{equation}\label{newbias_pA} \mathcal{B}^\prime=2p^{(A)}-1. 
\end{equation} This value is \begin{equation} \mathcal{B}^\prime=(2\varepsilon-1)^3\left[1+4\varepsilon^2(\varepsilon-1)-4p\varepsilon(6\varepsilon^2-8\varepsilon+3)-2p^2(2p-3)(2\varepsilon-1)^3\right] \end{equation} which can be expressed in terms of the original bias by substituting $p=\frac{\mathcal{B}+1}{2}$: \begin{equation}\label{sym_during_3BC_newB} \mathcal{B}^\prime= \frac{1}{2}\mathcal{B}(1-2\varepsilon)^3\left(3-6\varepsilon+4\varepsilon^2-\mathcal{B}^2(1-2\varepsilon)^3\right). \end{equation}
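Since everything here is binary, the closed form can be verified by exhaustive enumeration over the $2^{10}$ tuples $(a,b,c,e_1,\ldots,e_7)$, using the expression (\ref{3bitmaj_err1_newA}) for the final value of the $A$ bit. A sketch (function names are mine):

```python
from itertools import product

def bias_by_enumeration(bias, eps):
    """Brute-force: sum Pr(a,b,c,e1,...,e7) over tuples where the final A bit is 0."""
    p = (bias + 1) / 2  # probability that a bit with this bias equals 0
    p_a0 = 0.0
    for a, b, c, e1, e2, e3, e4, e5, e6, e7 in product((0, 1), repeat=10):
        final_a = (a + e1 + e4 + e7
                   + (a + b + e2 + e5) * (a + c + e1 + e3 + e6)) % 2
        if final_a == 0:
            n_err = e1 + e2 + e3 + e4 + e5 + e6 + e7
            p_a0 += ((1 - p) ** (a + b + c) * p ** (3 - (a + b + c))
                     * eps ** n_err * (1 - eps) ** (7 - n_err))
    return 2 * p_a0 - 1

def bias_closed_form(bias, eps):
    """The closed-form expression for the new bias in terms of B and eps."""
    return 0.5 * bias * (1 - 2 * eps) ** 3 * (
        3 - 6 * eps + 4 * eps ** 2 - bias ** 2 * (1 - 2 * eps) ** 3)

print(abs(bias_by_enumeration(0.3, 0.05) - bias_closed_form(0.3, 0.05)))
# agrees to machine precision
```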
Now the condition $\mathcal{B}^\prime>\mathcal{B}$ leads to \begin{equation}\label{sym_during_3BC_inequal} -2+(1-2\varepsilon)^3\left(3-6\varepsilon+4\varepsilon^2-\mathcal{B}^2(1-2\varepsilon)^3\right)>0. \end{equation}
The expression on the left side of (\ref{sym_during_3BC_inequal}) represents the improvement in the bias. It decreases monotonically as $\mathcal{B}$ increases from 0, and so an upper bound can be obtained by setting $\mathcal{B}=0$. Then, by studying the real roots of the resulting polynomial in $\varepsilon$ we can determine the range of values for which the improvement is positive. By numerical computation, the threshold is found to be
\begin{equation}\label{sym_during_3BC_thresh} \varepsilon < 0.048592 \equiv\varepsilon_\text{th}. \end{equation}
For a fixed $\varepsilon < \varepsilon_{\text{th}}$, inequality (\ref{sym_during_3BC_inequal}) also gives a bound on the maximum bias achievable by the 3BC step under the given error model. \begin{equation}\label{sym_during_3BC_Blim} \mathcal{B} < \frac{\sqrt{1-24\varepsilon+76\varepsilon^2-120\varepsilon^3+96\varepsilon^4-32\varepsilon^5}} {(1-2\varepsilon)^3}\equiv \mathcal{B}_\text{lim} \end{equation}
For small values of $\varepsilon$, we can approximate (\ref{sym_during_3BC_Blim}) to second order. \begin{equation}\label{sym_during_3BC_Blim_approx} \mathcal{B}_\text{lim}\approx 1-6\varepsilon-82\varepsilon^2. \end{equation}
For error rates $\varepsilon$ below 1\%, the approximate value in (\ref{sym_during_3BC_Blim_approx}) is within 0.1\% of the value in (\ref{sym_during_3BC_Blim}).
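The threshold and the quality of the approximation can be confirmed numerically (a sketch; function names are mine):

```python
import math

def improvement_at_zero_bias(eps):
    """Left-hand side of the threshold inequality evaluated at B = 0."""
    return -2 + (1 - 2 * eps) ** 3 * (3 - 6 * eps + 4 * eps ** 2)

def b_lim(eps):
    """Exact limiting bias for bit-flip errors during 3BC."""
    radicand = 1 - 24*eps + 76*eps**2 - 120*eps**3 + 96*eps**4 - 32*eps**5
    return math.sqrt(radicand) / (1 - 2 * eps) ** 3

def b_lim_approx(eps):
    """Second-order approximation of the limiting bias."""
    return 1 - 6 * eps - 82 * eps ** 2

# The sign change brackets the numerically computed threshold 0.048592.
print(improvement_at_zero_bias(0.0485) > 0, improvement_at_zero_bias(0.0487) < 0)
print(b_lim(0.005), b_lim_approx(0.005))  # agree to roughly 0.01%
```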
\section{Debiasing errors}\label{sec_debiasing_errors}
We will now consider a more general error model for a classical bit. Under this error model, called the {\it asymmetric bit-flip channel}, a bit transforms from 0 to 1 with some probability $\varepsilon_0$, and transforms from 1 to 0 with some probability $\varepsilon_1$.
A fixed-point probability distribution for the asymmetric bit-flip channel is \begin{align} p[0]&=\frac{\varepsilon_1}{\varepsilon_0+\varepsilon_1}\\ p[1]&=\frac{\varepsilon_0}{\varepsilon_0+\varepsilon_1}. \end{align}
If left to evolve under the asymmetric channel, a bit will eventually settle to a bias value of \begin{equation} \mathcal{B}_\text{steady}=\frac{\varepsilon_1-\varepsilon_0}{\varepsilon_0+\varepsilon_1}. \end{equation} The rate at which the bias approaches this fixed point is related to $(\varepsilon_0+\varepsilon_1)$.
It will be convenient to make a couple of assumptions about the error rates. First, we will assume that errors cause the system to tend back to the initial bias $\mathcal{B}_\text{i}$ (which would be the same as, or close to, the bias of the ``heat bath'' for cooling algorithms that use this device). That is, \begin{equation} \mathcal{B}_\text{steady}=\mathcal{B}_\text{i}. \end{equation}
In other words, errors cause a partial debiasing of the cooled bits (ideally, this will happen very slowly, and so the value of the sum of the error rates, $(\varepsilon_0+\varepsilon_1)$, will be small). In the following, I will refer to this type of asymmetric bit-flip error as a {\it debiasing error}.
Since $\mathcal{B}_\text{i}>0$, we have \begin{equation} \varepsilon_1-\varepsilon_0>0. \end{equation}
We will also assume that the error rates $\varepsilon_0$ and $\varepsilon_1$ are both less than $\frac{1}{2}$. In this case we have \begin{align} \varepsilon_1-\varepsilon_0<\mathcal{B}_\text{i}. \end{align} Since we assumed that the bias of the bit being cooled starts at $\mathcal{B}_\text{i}$ and is thereafter nondecreasing, we can say that at any stage of the algorithm we have \begin{align} \varepsilon_1-\varepsilon_0<\mathcal{B} \end{align} where $\mathcal{B}$ is the current bias of the bits that the RPC step is being applied to.
Consider what happens to a bit initially having some bias $\mathcal{B}$ when we apply the asymmetric bit-flip channel once. A simple calculation shows the resulting bias to be \begin{equation}\label{1_asym_step_newB_e0e1} \mathcal{B}^\prime=\mathcal{B}(1-(\varepsilon_0+\varepsilon_1))+(\varepsilon_1-\varepsilon_0). \end{equation} In the following analysis, it will be convenient to make a change of variables, letting \begin{align} s&\equiv \varepsilon_0+\varepsilon_1\text{ , and}\\ d&\equiv \varepsilon_1-\varepsilon_0. \end{align} Then our assumptions are $s<1$, and $0<d<\mathcal{B}$, and equation (\ref{1_asym_step_newB_e0e1}) becomes \begin{equation}\label{1_asym_step_newB} \mathcal{B}^\prime=\mathcal{B}(1-s)+d. \end{equation} Notice that $d<s$ is also an obvious condition.
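Both the single-step formula and the fixed point can be checked directly from the channel's action on the underlying probabilities (a sketch; names are mine):

```python
def asym_channel_bias(bias, e0, e1):
    """One application of the asymmetric bit-flip channel, via probabilities."""
    p0 = (bias + 1) / 2                     # probability the bit equals 0
    p0_new = p0 * (1 - e0) + (1 - p0) * e1  # 0 survives, or 1 flips to 0
    return 2 * p0_new - 1

e0, e1 = 0.02, 0.05
s, d = e0 + e1, e1 - e0

# A single application reproduces B' = B(1 - s) + d exactly.
print(asym_channel_bias(0.4, e0, e1), 0.4 * (1 - s) + d)

# Iterating the channel drives any starting bias to the fixed point d/s.
b = 0.9
for _ in range(2000):
    b = asym_channel_bias(b, e0, e1)
print(b, d / s)  # both ~0.42857
```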
Consider the special case of the symmetric bit-flip channel. In this case $\mathcal{B}_\text{steady}=0$, and so $\mathcal{B}_\text{steady}<\mathcal{B}_\text{i}$. This is why we obtained positive threshold error rates for the RPC step to increase the bias. Now, under our assumption $\mathcal{B}_\text{steady}=\mathcal{B}_\text{i}$, we will not obtain such a threshold. Even with high error rates (fast debiasing) the RPC step will increase the bias above $\mathcal{B}_\text{i}$ by some positive amount.
It is still important to analyze the effect of these errors on the RPC step, because they will imply a limiting value on the highest bias achievable. The RPC step tends to increase the bias away from the value $\mathcal{B}_\text{i}=d/s$, while the errors tend to force the bias back towards $\mathcal{B}_\text{i}$. The maximum achievable value of $\mathcal{B}$ will be determined by $d$ and $s$, or equivalently, by $\mathcal{B}_\text{i}$ and $s$. Recall that $s$ can be seen as a measure of the rate at which the errors force the bias towards the initial value $\mathcal{B}_\text{i}$. Thus the maximum achievable bias is limited by the initial bias, and by the rate at which errors cause the system to tend back to the initial bias.
\begin{comment} \subsection{2BC followed by a debiasing error}\label{sec_asym_after_2BC}
Consider first the scenario in which debiasing errors can occur immediately following the 2BC step. The bound obtained here will apply to any implementation of the 3BC step. Assuming the bias before the 2BC step is $\mathcal{B}$, the bias of the resulting bit (accounting for the possibility of an error) is \begin{equation}\label{asym_after_2BC_newB} \mathcal{B}^\prime=\left(\frac{3}{2}\mathcal{B}-\frac{1}{2}B^2\right)(1-s)+d. \end{equation} The condition that $B^\prime>B$ leads to \begin{equation}\label{asym_after_2BC_inequal_newB} \mathcal{B}^2(s-1)+\mathcal{B}(1-3s)+2d>0. \end{equation} For values of $s,d$ satisfying $0<d<s<1$, the expression on the left side of (\ref{asym_after_2BC_inequal_newB}) is a quadratic in $\mathcal{B}$ having two real roots (one positive and one negative), and having positive values when $\mathcal{B}$ is between the roots. So inequality (\ref{asym_after_2BC_inequal_newB}) is satisfied only for values of $\mathcal{B}$ less than the positive root of the quadratic. That is, \begin{equation} \mathcal{B}<\frac{1-3s+\sqrt{(1-3s)^2+8d(1-s)}}{2(1-s)}. \end{equation} This is the maximum value of $\mathcal{B}$ that is achievable by the 2BC step under this error model. To second-order in $s$ and $d$, we have \begin{equation}\label{asym_after_2BC_Blim_approx_sd} \mathcal{B}_\text{lim}\approx 1-2s+2d-2s^2-4d^2+6sd. \end{equation} Consider this in the symmetric case, where $\varepsilon_0=\varepsilon_1$. Then we have $s=2\varepsilon$, and $d=0$ and the bound (\ref{asym_after_2BC_Blim_approx_sd}) agrees with the bound (\ref{sym_after_2BC_Blim_approx}) which we found in Section \ref{sec_sym_after_2BC}.
In terms of $\mathcal{B}_\text{i}$ and $s$, (\ref{asym_after_2BC_Blim_approx_sd}) is \begin{equation}\label{asym_after_2BC_Blim_approx} \mathcal{B}_\text{lim} \approx 1-2s-2s^2+2\mathcal{B}_\text{i} s+6\mathcal{B}_\text{i} s^2-4\mathcal{B}_\text{i}^2 s^2. \end{equation}
For error rates less than 1\%, the approximate value (\ref{asym_after_2BC_Blim_approx}) agrees with the actual value to within $10^{-6}$. \end{comment}
\subsection{3BC followed by a debiasing error}\label{sec_asym_after_3BC}
Consider the scenario in which a debiasing error may occur immediately after the 3BC operation. The bound obtained here will apply regardless of how the 3BC step is implemented. Assuming all three bits initially start with bias $\mathcal{B}$, the bias of bit $A$ after the process (the 3BC circuit followed by a debiasing error) is \begin{equation}\label{asym_after_3BC_newB} \mathcal{B}^\prime=\left(\frac{3}{2}\mathcal{B}-\frac{1}{2}\mathcal{B}^3\right)(1-s)+d. \end{equation} The condition that $\mathcal{B}^\prime > \mathcal{B}$ leads to \begin{equation}\label{asym_after_3BC_inequal_newB} \mathcal{B}^3(s-1)+\mathcal{B}(1-3s)+2d > 0. \end{equation} For values of $s<1/3$ (recall the threshold condition $\varepsilon<1/6$ we obtained in Section \ref{sec_sym_after_3BC}) and for $d<s$, the cubic polynomial on the left-hand side of (\ref{asym_after_3BC_inequal_newB}) has one positive real root (and the value of this root will be less than 1). A positive value of $\mathcal{B}$ will satisfy inequality (\ref{asym_after_3BC_inequal_newB}) only if it is less than the value of this root. That is, \begin{small} \begin{equation}\label{asym_after_3BC_Blim} \mathcal{B} < \frac{i\left(-3(\sqrt{3}-i)(s-1)(3s-1)+(\sqrt{3}+i) (-27d(s-1)^2+\sqrt{729d^2(s-1)^4+(-3+12s-9s^2)^3})^\frac{2}{3}\right)} {6(s-1)\left(-27d(s-1)^2+\sqrt{729d^2(s-1)^4+(-3+12s-9s^2)^3}\right)^\frac{1}{3}}. \end{equation} \end{small} The appearance of nonreal numbers in (\ref{asym_after_3BC_Blim}) is unavoidable\footnote{This is {\it Casus Irreducibilis}: in certain cases, any expression for the roots of a cubic polynomial in terms of radicals must involve nonreal expressions, even if all the roots are real.}. To second order in $d$ and $s$, (\ref{asym_after_3BC_Blim}) gives \begin{equation}\label{asym_after_3BC_Blim_approx_sd} \mathcal{B}_\text{lim} \approx 1-s+d-\frac{3}{2}s^2-\frac{3}{2}d^2+3ds.
\end{equation} In the symmetric case, the bound (\ref{asym_after_3BC_Blim_approx_sd}) agrees with the bound (\ref{sym_after_3BC_Blim_approx}) which we found in Section \ref{sec_sym_after_3BC}.
In terms of $s$ and $\mathcal{B}_\text{i}$, (\ref{asym_after_3BC_Blim_approx_sd}) is \begin{equation}\label{asym_after_3BC_Blim_approx} \mathcal{B}_\text{lim} \approx 1-s-\frac{3}{2}s^2+\mathcal{B}_\text{i} s +3\mathcal{B}_\text{i} s^2-\frac{3}{2}\mathcal{B}_\text{i}^2s^2. \end{equation}
For error rates less than 1\%, the approximate value (\ref{asym_after_3BC_Blim_approx}) agrees with the actual value to within $10^{-5}$.
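The update rule (\ref{asym_after_3BC_newB}) can also be checked by direct enumeration over the three input bits and the action of the error channel. The following sketch (exact rational arithmetic; it assumes the bias convention $\Pr(\text{bit}=0)=(1+\mathcal{B})/2$, with $\varepsilon_0$ the probability of a $0\to 1$ flip and $\varepsilon_1$ of a $1\to 0$ flip) compares the exact post-channel bias of the majority bit against the closed form.

```python
from fractions import Fraction
from itertools import product

def bias_after_3bc_with_error(B, eps0, eps1):
    """Exact bias of bit A after a majority-of-3 vote followed by an
    asymmetric bit-flip (debiasing) channel.  Convention assumed here:
    Pr(bit = 0) = (1 + B)/2, eps0 = Pr(0 -> 1), eps1 = Pr(1 -> 0)."""
    p0 = (1 + B) / 2
    pr_maj0 = Fraction(0)
    for bits in product([0, 1], repeat=3):
        pr = Fraction(1)
        for b in bits:
            pr *= p0 if b == 0 else 1 - p0
        if sum(bits) <= 1:              # at least two zeros: majority is 0
            pr_maj0 += pr
    pr0 = pr_maj0 * (1 - eps0) + (1 - pr_maj0) * eps1   # apply the channel
    return 2 * pr0 - 1

def predicted(B, eps0, eps1):
    """The closed form (3B/2 - B^3/2)(1 - s) + d, s = e0 + e1, d = e1 - e0."""
    s, d = eps0 + eps1, eps1 - eps0
    return (Fraction(3, 2) * B - B**3 / 2) * (1 - s) + d
```

Running the comparison for rational values of $\mathcal{B}$, $\varepsilon_0$ and $\varepsilon_1$ gives exact agreement.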
\subsection{Debiasing errors during application of 3BC}\label{sec_asym_during_3BC}
We will now consider the error model in which debiasing errors can occur at each time step (i.e. immediately after the application of any gate in the circuit of Figure \ref{fig_3bitmaj}, or equivalently after each $o_2^\prime$ operation). The analysis is performed similarly to what we did in Section \ref{sec_sym_during_3BC}, by considering the probability associated with each binary tuple $(a,b,c,e_1,\ldots,e_7)$ for which the resulting value of bit $A$ equals 0. For the asymmetric model, by tracing through the circuit of Figure \ref{fig_3bitmaj}, we find that equation (\ref{3bitmaj_sym_err_prob_scen}) generalizes to \begin{small} \begin{equation}\label{3bitmaj_asym_err_prob_scen} \Pr(a,b,c,e_1,\ldots,e_7)\equiv(1-p)^{a+b+c}p^{\bar{a}+\bar{b}+\bar{c}} \left(\varepsilon_0^{\sum_{i=1}^7\bar{\phi}_ie_i}\right) \left((1-\varepsilon_0)^{\sum_{i=1}^7\bar{\phi}_i\bar{e}_i}\right) \left(\varepsilon_1^{\sum_{i=1}^7\phi_ie_i}\right)\left((1-\varepsilon_1)^{\sum_{i=1}^7\phi_i\bar{e}_i}\right) \end{equation} \end{small} where $\bar{x}\equiv(1+x\mod 2)$ and \begin{align} \phi_1&=a\\ \phi_2&=a+b\mod 2\\ \phi_3&=c\\ \phi_4&=\phi_1+e_1\mod 2\\ \phi_5&=\phi_2+e_2\mod 2\\ \phi_6&=\phi_3+e_3+\phi_4\mod 2\\ \phi_7&=\phi_4+e_4+(\phi_5+e_5)(\phi_6+e_6) \mod 2. \end{align} Again we can sum the probabilities $\Pr(a,b,c,e_1,\ldots,e_7)$ over those tuples for which the final value of bit $A$ equals 0 (the probability of each such tuple being given by equation (\ref{3bitmaj_asym_err_prob_scen})), and then compute the new bias. The new bias, approximated to second-order in $s$ and $d$, is \begin{equation}\label{asym_during_3BC_newB} \mathcal{B}^\prime\approx\frac{1}{2}\left((5d+4d^2-6sd)+(3-12s+19s^2-d^2+4ds)\mathcal{B}+d\mathcal{B}^2+(-1+6s-15s^2)\mathcal{B}^3\right). \end{equation} Then the condition $\mathcal{B}^\prime>\mathcal{B}$ gives \begin{equation}\label{asym_during_3BC_inequal_newB} (5d+4d^2-6sd)+(1-12s+19s^2-d^2+4ds)\mathcal{B}+d\mathcal{B}^2+(-1+6s-15s^2)\mathcal{B}^3>0.
\end{equation} For values of $s\lesssim 0.04$ (recall the threshold condition we obtained in Section \ref{sec_sym_during_3BC}) and for $d<s$, the cubic polynomial on the left-hand side of (\ref{asym_during_3BC_inequal_newB}) has one positive real root (whose value will be less than 1, modulo the error in the second-order approximation). A positive value of $\mathcal{B}$ will satisfy inequality (\ref{asym_during_3BC_inequal_newB}) only if it is not greater than the value of this root, which is (to second order in $s$ and $d$) \begin{equation}\label{asym_during_3BC_Blim_approx_sd} \mathcal{B}_\text{lim}\approx 1-3s+3d-9d^2-\frac{41}{2}s^2+32ds. \end{equation} In the symmetric case, the bound (\ref{asym_during_3BC_Blim_approx_sd}) agrees with the bound (\ref{sym_during_3BC_Blim_approx}) that we obtained in Section \ref{sec_sym_during_3BC}. In terms of $s$ and $\mathcal{B}_\text{i}$ we have,
\begin{equation}\label{asym_during_3BC_Blim_approx} \mathcal{B}_\text{lim}\approx 1-3s-\frac{41}{2}s^2+3\mathcal{B}_\text{i} s+32\mathcal{B}_\text{i} s^2-9\mathcal{B}_\text{i}^2 s^2. \end{equation}
For error rates less than 1\%, the approximate value (\ref{asym_during_3BC_Blim_approx}) agrees with the actual value to within $10^{-4}$.
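Because the exact limiting bias in this model is only available as the root of a cubic, the second-order approximation (\ref{asym_during_3BC_Blim_approx_sd}) is conveniently checked numerically. The sketch below finds the positive root of the cubic in (\ref{asym_during_3BC_inequal_newB}) by plain bisection (the sample values $s=0.01$, $d=0.005$ and the tolerance used in the check are illustrative choices, not taken from the text).

```python
def blim_during_3bc(s, d):
    """Positive root in (0, 1) of the cubic condition that limits the
    achievable bias when a debiasing error can occur after every gate of
    the 3BC circuit; coefficients follow the second-order expansion."""
    def g(B):
        return ((5*d + 4*d*d - 6*s*d)
                + (1 - 12*s + 19*s*s - d*d + 4*d*s) * B
                + d * B * B
                + (-1 + 6*s - 15*s*s) * B**3)
    lo, hi = 0.0, 1.0                  # g(0) > 0 > g(1) for small s > d > 0
    for _ in range(100):               # plain bisection
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)
    return (lo + hi) / 2

def blim_approx(s, d):
    """Second-order approximation 1 - 3s + 3d - 9d^2 - (41/2)s^2 + 32ds."""
    return 1 - 3*s + 3*d - 9*d*d - 41*s*s/2 + 32*d*s
```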
\section{More general algorithms based on 3BC}\label{sec_general_3BC_algs}
In all of the above error analysis, we have assumed that the 3BC step is applied to three bits having identical bias at each stage of the algorithm. Recall that in Section \ref{sec_efficiency} it was mentioned that an algorithm proposed in \cite{SMW07} is structured somewhat differently, and applies the 3BC step to three bits having different bias values $\mathcal{B}_{j-2}$, $\mathcal{B}_{j-1}$ and $\mathcal{B}_j$. We can still learn something by performing the previous analysis assuming all three bits have bias $\text{max}(\mathcal{B}_{j-2},\mathcal{B}_{j-1},\mathcal{B}_j)$, but it is worth briefly considering how we could analyze this more general scenario directly. Consider applying the debiasing error channel with error parameters $\varepsilon_0$ and $\varepsilon_1$ immediately after the 3BC step is applied. In this case, the bias of the third bit after the process is \begin{equation}\label{equation_general_3BC_alg_err1} \frac{\mathcal{B}_{j-2}+\mathcal{B}_{j-1}+\mathcal{B}_j-\mathcal{B}_{j-2}\mathcal{B}_{j-1}\mathcal{B}_j }{2}(1-s)+d \end{equation} (recall $s=\varepsilon_0+\varepsilon_1$ and $d=\varepsilon_1-\varepsilon_0$). As in Section \ref{sec_debiasing_errors}, we assume that the error parameters satisfy $s<1$, $d>0$ and $d$ is less than each of $\mathcal{B}_{j-2}$, $\mathcal{B}_{j-1}$ and $\mathcal{B}_j$. We also assume that $\frac{d}{s}$ is less than each of $\mathcal{B}_{j-2}$, $\mathcal{B}_{j-1}$ and $\mathcal{B}_j$, so that the errors are indeed driving the system towards a lower bias.
Now suppose we proceed as in \cite{SMW07} and send the first two bits back to the heat bath, re-cool them to bias values $\mathcal{B}_{j-2}$ and $\mathcal{B}_{j-1}$, and again apply 3BC. Without errors, we mentioned previously that by repeating this process several times the third bit reaches a steady-state bias value of \begin{equation} \frac{\mathcal{B}_{j-2}+\mathcal{B}_{j-1}}{1+\mathcal{B}_{j-2}\mathcal{B}_{j-1}}. \end{equation} With the debiasing error channel being applied after every application of 3BC, this steady-state bias value is reduced to \begin{equation}\label{equation_general_3BC_alg_err2} \frac{(\mathcal{B}_{j-2}+\mathcal{B}_{j-1})(1-s)+2d}{1+\mathcal{B}_{j-2}\mathcal{B}_{j-1}(1-s)+s}. \end{equation} Equations (\ref{equation_general_3BC_alg_err1}) and (\ref{equation_general_3BC_alg_err2}) can be used to analyze more general algorithms based on repeated application of 3BC, including the Fibonacci sequence algorithm proposed in \cite{SMW07}, under the effect of debiasing errors that may occur after each application. We could similarly decompose the 3BC step into a suitable sequence of discrete operations, and proceed as we have done above to analyze the effect of errors that may occur after each discrete step.
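The steady-state value (\ref{equation_general_3BC_alg_err2}) is the fixed point of repeatedly applying the update (\ref{equation_general_3BC_alg_err1}) with the first two bits refreshed to $\mathcal{B}_{j-2}$ and $\mathcal{B}_{j-1}$ before each round. A short numerical sketch (floating point; the parameter values in the check are illustrative) confirms that the iteration converges to the closed form.

```python
def steady_state_by_iteration(B1, B2, s, d, rounds=200):
    """Repeated refresh-and-3BC: bits 1 and 2 are re-cooled to biases
    B1, B2, then 3BC is applied, followed by a debiasing error with
    s = e0 + e1 and d = e1 - e0.  Returns the bias of the third bit."""
    B3 = 0.0
    for _ in range(rounds):
        B3 = (B1 + B2 + B3 - B1 * B2 * B3) / 2 * (1 - s) + d
    return B3

def steady_state_closed_form(B1, B2, s, d):
    """The claimed steady-state value of the third bit's bias."""
    return ((B1 + B2) * (1 - s) + 2 * d) / (1 + B1 * B2 * (1 - s) + s)
```

The iteration contracts at rate roughly $(1-\mathcal{B}_{j-2}\mathcal{B}_{j-1})(1-s)/2$ per round, so a few hundred rounds suffice for machine precision.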
\section{Conclusions and other considerations}
I have studied the performance of cooling algorithms that use the 3-bit majority as the compression step (e.g. \cite{FLMR04}, \cite{SMW07}) and argued that previously discovered algorithms (e.g. \cite{SV99}, \cite{BMRVV}) can be re-cast in this way. I have proven the optimality of the best such algorithm for obtaining one cold bit with the minimum possible number of initially mixed bits. An error analysis of these algorithms has been conducted, first under a simple error model (symmetric bit-flip), and then under a more realistic model of debiasing. Since the implementations of the RPC steps are inherently ``classical'' (states do not leave the computational basis), it is reasonable to restrict attention to these classical error models. In each case, I first derived some bounds assuming that errors may occur immediately after the RPC step. Since this may be taken as a best-case scenario, these bounds apply regardless of the implementation. I also derived bounds assuming that the 3BC cooling step is implemented by a sequence of physical operations that simulate a sequence of \mbox{\sc cnot}\xspace and Toffoli gates (i.e. a sequence of $o_2^\prime$ operations). Specifically, we considered the simplest such arrangement for implementing the 3BC step, shown in Figure \ref{fig_3bitmaj}. The results are summarized below (approximated to second-order).
{\renewcommand\arraystretch{1.5}
\begin{tabular}{*{3}{|l}|} \hline {\bf Error Model}& {\bf Threshold} & {\bf Maximum achievable bias}\\
\hline Symmetric bit-flip after 3BC &$\varepsilon<\frac{1}{6}$ & $1-2\varepsilon-6\varepsilon^2$ \\ \hline Symmetric bit-flip during 3BC &$\varepsilon\lesssim 0.048592$&$1-6\varepsilon-82\varepsilon^2$ \\
\hline Debiasing error after 3BC &N/A & $1-s-\frac{3}{2}s^2+\mathcal{B}_\text{i} s +3\mathcal{B}_\text{i} s^2-\frac{3}{2}\mathcal{B}_\text{i}^2s^2$\\ \hline Debiasing error during 3BC &N/A& $1-3s-\frac{41}{2}s^2+3\mathcal{B}_\text{i} s+32\mathcal{B}_\text{i} s^2-9\mathcal{B}_\text{i}^2 s^2$ \\ \hline \end{tabular}}
Given a specific low-level implementation of a cooling algorithm, specified as a sequence of pulses applied to an $ABC$-chain or some other suitable hardware, a detailed error analysis could be conducted in a manner similar to the approach I have taken here. For specific cooling algorithms it will also be interesting to analyze the effects of errors occurring between applications of the RPC step (for example, while the bits are being permuted to move the required bits into position for the next application of the cooling step). By studying the time-complexity of a specific algorithm implemented on a specific architecture, we can determine the balance between the rate at which the algorithm increases the bias, and the rate at which debiasing errors are causing the bias to decrease.
Cooling algorithms can be built from basic steps other than the 3-bit majority. For those that have ``classical'' implementations (that is, can be built from some sequence of generalized Toffoli gates) the approach we have taken here could be employed to conduct a similar error analysis. For basic RPC steps operating on more than 3 bits, the kind of analysis we have performed here would require examining higher-order polynomials, and this may have to be done numerically.
For RPC steps that are implemented ``quantumly'' (i.e. using gates that force states to leave the computational basis), more general quantum error models will have to be considered, and a different approach to the error analysis will be required.
\section{Architecture using an $ABC$-chain}\label{append_abc_ring_scheme}
Recall the following two operations we proposed that should be supported by an NMR computer for implementing algorithmic cooling.
\begin{enumerate} \item[$o^\prime_1$)] Move any three bits into adjacent positions under a fixed ``tape head'' (which covers three bits). \item[$o^\prime_2$)] Apply any generalized Toffoli or \mbox{\sc cnot}\xspace operation to the bits under the tape head. \end{enumerate}
Here we sketch a method for implementing $o_1^\prime$ and $o_2^\prime$ on an $ABC$-chain which is configured as a closed loop. Note that we could alternatively use a linear configuration, but would then have to be careful about the behaviour at the ends of the chain (one approach would be to have the chain be long enough so that the bits of interest are sufficiently far into the interior of the chain that the effects of the ends are negligible). We will also assume that the loop consists of an odd number of $ABC$-triples.
An atom of a fourth type, $D$, is positioned adjacent to some $ABC$-triple selected (arbitrarily) to be the position of the tape head.
We assume that the physical system directly supports the implementation of a generalized Toffoli or \mbox{\sc cnot}\xspace operation on all the $ABC$-triples in parallel (or alternatively on all $BCA$-triples, or on all $CAB$-triples)\footnote{Details are described in \cite{Llo93}.}. We also assume that we can implement any such operation on \emph{only} the $ABC$-triple under the tape head, by making use of the effect of the proximity of the $D$ atom. So operation $o_2^\prime$ is directly supported by the hardware. Given these primitives, we focus on the problem of implementing operation $o_1^\prime$.
For clarity of exposition, we will refer to the physical bits of species $A,B,C$ as ``cells'' of ``types'' $A,B,C$. When we talk about ``moving a bit to a cell'', we are referring to a sequence of logical operations (usually nearest-neighbour \mbox{\sc swap}\xspace operations) that permute the logical states of the physical bits on the chain. We will use the following lemma.
\begin{lemma} For any pair of bits initially occupying adjacent cells, there exists a sequence of primitive operations that has the effect of bringing those bits into adjacent positions under the tape head. \end{lemma}
\begin{quote} \textbf{Proof Sketch} Let $S_{AB}$ be the operation that swaps the states of the bits on the $A$-cells with the bits in the neighbouring-$B$ cells. The sequence $(S_{AC},S_{AB},S_{BC},S_{AB})$ has the effect of moving every bit initially in an $A$-cell to the $A$-cell of the next $ABC$-triple to the left (counterclockwise). It also moves every bit initially in a $C$-cell to the $C$-cell of the next $ABC$-triple to the right (clockwise). It leaves the bits on the $B$-cells fixed. By permuting the labels of the species we have similar sequences for moving the bits in the $A$- and $B$-cells, while keeping the bits in the $C$-cells fixed. Suppose we have a pair of bits $(b_i,c_i)$ in adjacent $B$- and $C$-cells, that we wish to move into adjacent positions under the tape head. First we apply the sequence that moves the bits in the $A$- and $B$-cells (keeping the bits in the $C$-cells fixed) until $b_i$ is in the $B$-cell under the tape-head. Then we apply the sequence that moves the bits in the $A$- and $C$-cells (keeping the bits in the $B$-cells fixed), until $c_i$ is in the $C$-cell under the tape head (beside $b_i$). Similar procedures will bring any pairs of adjacent bits into adjacent positions under the tape head.\hspace{4mm}$\square$ \end{quote}
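The shifting sequence in the proof sketch can be simulated directly. The sketch below fixes one concrete indexing convention (cells $3i$, $3i+1$, $3i+2$ of a flat list of length $3n$ are the $A$-, $B$- and $C$-cells of triple $i$, and $S_{AC}$ swaps each $A$-cell with the $C$-cell of the preceding triple; the left/right orientation is a labeling choice). With this convention the sequence $(S_{AC},S_{AB},S_{BC},S_{AB})$ sends each $A$-bit one triple in one direction, each $C$-bit one triple in the other, and fixes every $B$-bit, as claimed.

```python
def apply_parallel_swap(chain, kind, n):
    """Apply S_AB, S_BC or S_AC to a closed loop of n ABC-triples,
    stored as a flat list: cells 3i, 3i+1, 3i+2 hold triple i.
    The swapped cell pairs are disjoint, so the parallel operation
    can be carried out pair by pair."""
    out = chain[:]
    for i in range(n):
        if kind == 'AB':                 # A-cell with B-cell of same triple
            a, b = 3 * i, 3 * i + 1
        elif kind == 'BC':               # B-cell with C-cell of same triple
            a, b = 3 * i + 1, 3 * i + 2
        else:                            # 'AC': A-cell with the C-cell
            a, b = 3 * i, (3 * i - 1) % (3 * n)   # of the preceding triple
        out[a], out[b] = out[b], out[a]
    return out

def shift_sequence(chain, n):
    """The sequence (S_AC, S_AB, S_BC, S_AB) from the proof sketch."""
    for kind in ('AC', 'AB', 'BC', 'AB'):
        chain = apply_parallel_swap(chain, kind, n)
    return chain
```

Simulating an odd-length loop (say $n=5$) and tracking labeled bits verifies the three claims of the proof sketch in a few lines.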
From the Lemma, it follows that we can implement a nearest-neighbour \mbox{\sc swap}\xspace operation between any adjacent pair of bits on the tape. First we move the pair under the tape-head, and then use an $o_2^\prime$ operation to \mbox{\sc swap}\xspace the states of these bits. Finally, we apply the inverse of the moving sequence to return all the bits to their original positions (modulo the swapped pair). Now it follows that we can implement an arbitrary permutation of the bits on the tape (by a suitable sequence of nearest-neighbour transpositions), of which operation $o_1^\prime$ is a special case. Notice that this also allows us to perform the permutations of the bits required to move cooled bits to one side of the tape and move warmer bits to the other side, as would be required between applications of the basic cooling step.
\end{document}
\begin{document}
\title{Dual Half-integrality for Uncrossable Cut Cover and its Application to Maximum Half-Integral Flow}
\author{Naveen Garg, Nikhil Kumar \\ Indian Institute of Technology Delhi, India}
\maketitle
\begin{abstract} Given an edge weighted graph and a forest $F$, the {\em 2-edge connectivity augmentation problem} is to pick a minimum weighted set of edges, $E'$, such that every connected component of $E'\cup F$ is 2-edge connected. Williamson et al. gave a 2-approximation algorithm (WGMV) for this problem using the primal-dual schema. We show that when edge weights are integral, the WGMV procedure can be modified to obtain a half-integral dual. The 2-edge connectivity augmentation problem has an interesting connection to routing flow in graphs where the union of supply and demand is planar. The half-integrality of the dual leads to a tight 2-approximate max-half-integral-flow min-multicut theorem. \end{abstract}
\section{Introduction} Let $G=(V,E)$ be an undirected graph with integer edge costs $c:E\rightarrow\mathbb{Z}^+$ and let $f:2^V\rightarrow\mathbb{Z}^+$ be a requirement function on sets of vertices. We wish to find a set of edges, $E'$, of minimum total cost such that for every set $S$ the number of edges in $E'$ across $S$ is at least the requirement of $S$, i.e.\ $f(S)$. This problem captures many scenarios in network design and has been the subject of much investigation. The Steiner forest problem, minimum weight maximum matching and other problems can be modeled by requirement functions which are {\em proper} and 0-1 (see Definition \ref{def:proper}), and for such functions Agrawal, Klein and Ravi~\cite{AKR} and Goemans and Williamson~\cite{GW} gave primal-dual algorithms that are 2-approximations. The key idea of primal-dual algorithms is to use complementary slackness to guide the construction of the dual and primal solutions, which are within a factor 2 of each other.
To use this approach for the Steiner network design problem, where the requirements of sets are not just 0-1, Williamson et al.~\cite{WGMV} extend the primal-dual algorithm of GW to the setting of 0-1 {\em uncrossable} requirement functions (see Definition \ref{def: uncrossable}); we call this the WGMV algorithm. The idea was to augment the connectivity of the solution in rounds, with each round augmenting the requirements of unsatisfied sets by 1. The WGMV algorithm for uncrossable functions also builds a dual solution, and while the primal solution constructed is integral, nothing is known about the integrality of the dual solution. In particular, while for proper functions it is possible to argue that the dual solution constructed by the GW procedure is half-integral, the same is not true for the WGMV procedure for uncrossable functions, as is illustrated by the example in Section \ref{WGMV:counter}.
For weakly supermodular requirement functions (see Definition \ref{def:weakly-sup}), Jain~\cite{Jain95} gave a 2-approximation algorithm based on iterative rounding. Although this algorithm does not build a dual solution, the iterative rounding technique saw a lot of interesting applications and quickly became an integral part of the toolkit of approximation algorithms. This, together with the fact that the dual solution constructed by the WGMV procedure seemed useful only for certifying the approximation guarantee of the procedure, meant that there were no further results on the nature and properties of the dual solution.
In~\cite{KGS} the authors show that the problem of finding a maximum multiflow, when the union of the supply and demand edges forms a planar graph, can be reduced to the problem of finding a large dual solution for a suitable cut-covering problem with an uncrossable requirement function. In addition, a primal solution would correspond to a multicut and a half-integral dual solution would correspond to a half-integral multiflow. Therefore, a primal solution which is within twice a half-integral dual solution would imply a 2-approximate max-half-integral-multiflow min-multicut theorem for such graph classes. In \cite{KGS} the authors also show instances where the max-half-integral-multiflow min-multicut gap can be arbitrarily close to 2, implying that our result is best possible.
In this paper we show that a suitable modification to the WGMV procedure does indeed lead to a half-integral dual solution of value at least half the primal solution. \begin{theorem} \label{thm : half integer WGMV} Let $G=(V,E)$ be an undirected graph with edge costs $c:E\rightarrow\mathbb{Z}^+$ and an uncrossable requirement function $f:2^V\rightarrow\set{0,1}$. One can find a subset of edges $F$ and an assignment, $y$, of non-negative half-integral dual variables to sets such that for all edges $e\in E$, $\sum_{S:e\in\delta(S)} y_S\le c_e$ and $\sum_{e\in F} c_e\le 2\sum_S f(S)y_S$. \end{theorem} To achieve this we need to build an alternate, stronger analysis of the 2-approximation of the WGMV algorithm, and these are the main results of this paper. In Section~\ref{sec:proper} we argue that the Goemans-Williamson algorithm for proper functions leads to half-integral duals. To prove the above, we come up with a notion of parity of a node with respect to the current dual solution. The crux of our argument is to show that all nodes in an active set have the same parity. We then employ the idea of ensuring that all nodes in an active set have the same parity to modify the WGMV procedure in Section~\ref{sec:WGMVmodified}. However, our procedure for ensuring uniform parity entails reducing some edge costs by 1/2. Since this decrease in edge costs also needs to be bounded by the dual solution, we need a stronger guarantee on the total degree of the active sets in each iteration of the WGMV procedure. We develop this alternate analysis in Section~\ref{sec:WGMVanalysis}. Finally, Section~\ref{sec:Seymour} shows how maximum flow in Seymour graphs corresponds to building the dual solution for a suitable uncrossable cut cover problem and lets us claim the following result, which is also best possible. \begin{theorem} \label{half integral maxflow mincut} Let $G+H$, the union of the supply graph $G$ and the demand graph $H$, be planar. There exists a feasible half-integral flow of value $F$ and a multicut of value $C$ such that $C \leq 2F$.
Further, such a flow and cut can be computed in polynomial time. \end{theorem}
\section{Preliminaries}
Given a graph $G=(V,E)$ with edge costs $c:E\rightarrow\mathbb{R^+}$ and a 0-1 requirement function $f:2^V\rightarrow\{0,1\}$ we are interested in picking a subset of edges $E'$ of minimum total cost such that every set with requirement 1 has at least one edge of $E'$ across it. In other words, for all $S\subseteq V$, $|\delta_{E'}(S)|\ge f(S)$, where $|\delta_{E'}(S)|$ is the number of edges in $E'$ which have exactly one endpoint in $S$.
\begin{definition} \label{def:proper} A function $f:2^{V}\rightarrow \{0,1\}$ is called \textbf{proper} if $f(V)=0,f(S)=f(V-S)$ for all $S \subseteq V$ and for any disjoint $A,B \subseteq V$, $f(A \cup B)\leq \max \{f(A),f(B) \}$. \end{definition}
\begin{definition} \label{def:weakly-sup} A function $f:2^{V}\rightarrow \{0,1\}$ is called \textbf{weakly supermodular} if $f(V)=f(\phi)=0$ and for any $A,B \subseteq V$, $f(A)+f(B) \leq \max \{f(A \cap B)+f(A \cup B),f(A \setminus B) + f(B \setminus A)\}$. \end{definition}
\begin{definition} \label{def: uncrossable} A function $f:2^{V}\rightarrow \{0,1\}$ is called \textbf{uncrossable} if $f(V)=f(\phi)=0$ and for any $A,B \subseteq V$, if $f(A)=f(B)=1$, then either $f(A \cap B)=f(A \cup B)=1$ or $f(A \setminus B)=f(B \setminus A)=1$. \end{definition}
It is easy to argue that every proper function is also weakly supermodular and every weakly supermodular function is also uncrossable. In this paper we will only be interested in uncrossable requirement functions and shall refer to the problem in this setting as the {\em uncrossable cut cover problem} (\mbox{\tt UCC}). The following integer program for $\mbox{\tt UCC}$ is well known. \begin{lp}{minimize}{\sum_{e \in E} c_e x_e} \cnstr{\sum_{e \in \delta(S) } x_e}{\ge}{f(S)}{ S \subseteq V} \cnstr{x_e}{\in}{\{0,1\}}{e \in E} \end{lp} We can relax the integrality constraint on $x_{e}$ to $0 \leq x_{e} \leq 1$ to get a linear programming relaxation of the above. The dual program of the relaxation can be given as: \begin{lp}{maximize}{\sum_{S \subseteq V} f(S)y(S)} \cnstr{\sum_{S: e \in \delta(S) } y_S}{\le}{c_{e}}{e \in E} \cnstr{y_S}{\ge}{0}{S \subseteq V} \end{lp} Williamson et al. \cite{WGMV} gave a primal-dual 2-approximation algorithm for the above integer program for uncrossable $f$.
\section{Half-integrality of the GW-dual for proper functions} \label{sec:proper} We first argue that the Goemans-Williamson (GW) algorithm - for the case when requirement functions are proper and edge costs are integral - constructs a half-integral dual whose value is at least half the primal integral solution.
The GW algorithm proceeds by raising dual variables corresponding to sets of vertices and picking edges which are tight into the current solution. An edge $e$ is tight when the sum of dual variables of sets containing exactly one end-point of $e$ equals $c(e)$. The algorithm raises dual of all minimal sets $S$ such that $f(S)=1$ but no edge going across $S$ has been picked in the current solution. We imagine growing the duals in a continuous manner and define a notion of time: $t=0$ at start of the algorithm and $y_S$ increases by $\delta$ during $[t,t+\delta]$ if $S$ is a minimally unsatisfied set at every point of time in $[t,t+\delta]$. If $f$ is proper, these minimal sets correspond exactly to the connected components formed by the set of tight edges. Let $C$ be a connected component at time $t$. If $f(C)=1$ then $C$ is active while $C$ is inactive if $f(C)=0$. In each iteration, the GW procedure raises dual variables of all active sets simultaneously till an edge goes tight. At this point the connected components are recomputed and the algorithm continues with the next iteration unless all sets are inactive. Let $F$ be the set of tight edges picked after the first phase. In a second phase, called the {\em reverse delete}, the GW algorithm considers the edges of $F$ in the reverse order in which they were added to $F$. If the removal of an edge from $F$ does not violate the requirement function of any set then the edge is removed.
We shall only be concerned with the first phase of the GW algorithm since it is in this phase that the dual variables, $y:2^V\rightarrow\mathbb{R}_{\ge 0}$ are set. Let $\mathcal{S}=\set{S:y_S>0}$ and note that this family of sets is laminar. For $v\in S, S\in\mathcal{S}$, we define the parity of $v$ with respect to $S$ as $\pi_v(S)=\left\{\sum_{T:v\in T\subseteq S}y_T\right\}$, where $\{x\}$ denotes the fractional part of $x$.
If $S$ is active at time $t$ then there exists a vertex $v\in S$ which for all times in $[0,t]$ was in an active component; we call such a vertex an {\em active vertex} of set $S$.
We now argue that the GW procedure ensures that for all $S\in\mathcal{S}$, for all $ u,v\in S$, $\pi_u(S)=\pi_v(S)$. We call this quantity the parity of set $S$, $\pi(S)$, and show that $\pi(S)\in\set{0,1/2}$. Let $S$ be formed by the merging of sets $S_1,S_2$ at time $t$. We induct on the iterations of the GW procedure and assume that all vertices in $S_1$ (respectively $S_2$) have the same parity with respect to $S_1$ (respectively $S_2$). If $S_1$ is active at time $t$ then $\pi(S_1)=\pi_v(S_1)=\{t\}$ where $v$ is an active vertex of set $S_1$. Similarly if $S_2$ is active at time $t$ then $\pi(S_2)=\{t\}$. Thus if both $S_1,S_2$ are active at time $t$ then $\pi(S_1)=\pi(S_2)$ and hence all vertices of $S$ have the same parity with respect to $S$. Let $e=(u,v), u\in S_1, v\in S_2$ be the edge which gets tight when $S_1,S_2$ merge at time $t$. Since $c(e)$ is integral and $\sum_{T:u\in T\subseteq S_1} y_T+\sum_{T:v\in T\subseteq S_2} y_T=c(e)$, we have that $\pi(S_1)=\pi(S_2)\in\set{0,1/2}$.
Suppose only $S_1$ is active at time $t$. By our induction hypothesis $\pi(S_2)\in\set{0,1/2}$. Once again, since $c(e)$ is integral and $\sum_{T:u\in T\subseteq S_1} y_T+\sum_{T:v\in T\subseteq S_2} y_T=c(e)$, we have that $\pi(S_1)=\pi(S_2)$ which implies that all vertices of $S$ have the same parity with respect to $S$.
Since, for any $v\in S_1$, $y_S$ is the difference of the two partial sums $\sum_{T:v\in T\subseteq S}y_T$ and $\sum_{T:v\in T\subseteq S_1}y_T$, whose fractional parts $\pi(S),\pi(S_1)\in\set{0,1/2}$, it must be the case that $\set{y_S}\in\set{0,1/2}$. Since this is true for all sets $S\in\mathcal{S}$, the duals constructed by the GW procedure are half-integral.
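The half-integrality claim can be observed on a toy instance. The following event-driven sketch of the growth phase (exact rational arithmetic; the instance in the accompanying check, a path on four vertices with terminal pair $\{0,3\}$ and integer costs, is an illustrative choice of ours) raises the duals of all active components simultaneously, merges components along tight edges, and allows one to verify that every dual raised is a multiple of $1/2$.

```python
from fractions import Fraction

def gw_growth(n, edges, active):
    """Growth phase of the Goemans-Williamson primal-dual algorithm.
    edges: dict {(u, v): integer cost}; active(C) says whether a
    connected component C (a set of vertices) has f(C) = 1.
    Returns the duals y as a dict {frozenset: Fraction}."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    comp = {v: {v} for v in range(n)}      # root -> vertices of component
    load = [Fraction(0)] * n               # sum of duals of sets containing v
    y = {}
    while True:
        act = [r for r in comp if active(comp[r])]
        if not act:
            return y
        # next edge to go tight: minimise slack / (#active endpoint components)
        best = None
        for (u, v), c in edges.items():
            ru, rv = find(u), find(v)
            if ru == rv:
                continue
            rate = (ru in act) + (rv in act)
            if rate == 0:
                continue
            dt = (c - load[u] - load[v]) / rate
            if best is None or dt < best[0]:
                best = (dt, ru, rv)
        dt, ru, rv = best
        for r in act:                      # grow all active duals by dt
            S = frozenset(comp[r])
            y[S] = y.get(S, Fraction(0)) + dt
            for w in comp[r]:
                load[w] += dt
        parent[ru] = rv                    # merge along the tight edge
        comp[rv] |= comp.pop(ru)
```

On the path $0\mathord{-}1\mathord{-}2\mathord{-}3$ with costs $1,3,1$ and the proper function $f(S)=1$ iff $S$ separates $0$ from $3$, the procedure produces duals $1,1,\frac{3}{2},\frac{3}{2}$, all half-integral as the argument above predicts.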
\section{The WGMV algorithm}
We now give a brief description of the algorithm in \cite{WGMV}. Given an undirected graph $G=(V,E)$ with edge costs $c_{e} \geq 0$ and an uncrossable function $f$, we wish to find a set of edges $F' \subseteq E$ such that for any $S \subseteq V, |F' \cap \delta(S)| \geq f(S)$. A set $S$ is said to be unsatisfied if $f(S)=1$ but no edge crosses $S$ in the current solution.
The algorithm works in iterations. At the beginning of every iteration the algorithm computes a collection of minimally unsatisfied sets. Williamson et al.~\cite{WGMV} show that minimally unsatisfied sets are disjoint and can be found in polynomial time (this follows easily from uncrossability). Raise the dual variables corresponding to all minimally unsatisfied sets simultaneously until some edge is tight (the total dual across it equals its cost). All edges that go tight are added to a set $T$. The edges of $T$ are considered in an arbitrary order and $e\in T$ is added to $F$ if it crosses a minimally unsatisfied set. Note that whenever an edge is added to $F$ the collection of minimally unsatisfied sets is recomputed. The {\em growth phase} of the WGMV procedure stops when all sets are satisfied; let $F$ be the set of edges picked in this phase.
The edges of $F$ are considered in the reverse order in which they were picked. An edge $e\in F$ is dropped from the solution if its removal keeps the current solution feasible.
At the end of the procedure, we have a set of edges $F$ and a feasible dual solution $y$ such that $\sum_{e \in F} c_{e} \leq 2 \sum_{S}f(S)y_S$. By weak duality, $\sum_{S}f(S)y_S$ is at most the cost of an optimal solution, and this shows that the cost of the solution picked by the algorithm is at most twice the optimal. \begin{algorithm} \caption{Primal-Dual Algorithm for uncrossable functions}\label{alg:euclid} \begin{algorithmic}[1] \Procedure{WGMV} { $G=(V,E)$ with cost $c_{e}$, uncrossable function $f$} \State $y\leftarrow 0$, $F \leftarrow \phi$ \While {$\exists S \subseteq V$ such that $S$ is not satisfied} \State Compute $\mathcal{C}$, the collection of minimally unsatisfied sets with respect to $F$. \State Increase $y_{C}$ for all $C \in \mathcal{C}$ simultaneously until some edge $e \in \delta(C), C \in \mathcal{C}$ is tight ($c_e=\sum_{S:e\in \delta(S)}y_S$) \State Add all tight edges to $T$ \ForAll{$e\in T$} \If{$\exists C\in\mathcal{C}, e\in\delta(C)$} \State $F\leftarrow F\cup\set{e}$; Recompute $\mathcal{C}$ \EndIf \EndFor \EndWhile \ForAll{$e\in F$} \State // Edges of $F$ are considered in the reverse order in which they were added to $F$ \If{$F\setminus\set{e}$ is feasible} \State $F\leftarrow F\setminus\set{e}$ \EndIf \EndFor \State \textbf{return} $F$ \EndProcedure \end{algorithmic} \end{algorithm}
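For very small instances the WGMV procedure can be run with the requirement function supplied explicitly as a table over all vertex subsets, which makes the growth and reverse-delete phases easy to follow. The brute-force sketch below (exponential in $|V|$ and purely illustrative; the toy instance used to exercise it, a path with two crossing demands, is our own choice) recomputes the minimally unsatisfied sets after every edge addition, exactly as in Algorithm \ref{alg:euclid}.

```python
from fractions import Fraction
from itertools import combinations

def wgmv(n, costs, f):
    """Growth + reverse-delete of the WGMV procedure, with the 0-1
    requirement function f given explicitly as {frozenset: 0/1}."""
    V = range(n)
    subsets = [frozenset(c) for r in range(1, n) for c in combinations(V, r)]
    y = {S: Fraction(0) for S in subsets}
    crossing = lambda e, S: (e[0] in S) != (e[1] in S)

    def minimal_unsatisfied(F):
        U = [S for S in subsets if f.get(S, 0) == 1
             and not any(crossing(e, S) for e in F)]
        return [S for S in U if not any(T < S for T in U)]

    F = []
    while minimal_unsatisfied(F):
        C = minimal_unsatisfied(F)
        # raise all duals in C until some crossing edge becomes tight
        dt = min((costs[e] - sum(y[S] for S in subsets if crossing(e, S)))
                 / sum(crossing(e, S) for S in C)
                 for e in costs if any(crossing(e, S) for S in C))
        for S in C:
            y[S] += dt
        for e in costs:                   # add tight edges that still cross
            C = minimal_unsatisfied(F)    # a minimally unsatisfied set
            tight = sum(y[S] for S in subsets if crossing(e, S)) == costs[e]
            if tight and e not in F and any(crossing(e, S) for S in C):
                F.append(e)
    for e in reversed(list(F)):           # reverse delete
        if not minimal_unsatisfied([x for x in F if x != e]):
            F.remove(e)
    return F, y
```

On a proper instance such as the path used in the check, the duals happen to come out half-integral; the example of Section \ref{WGMV:counter} shows that for general uncrossable $f$ the unmodified procedure need not have this property.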
\subsection{Duals constructed by WGMV are not half-integral} \label{WGMV:counter} In the example in Figure~\ref{fig:counter}, the red edges are not edges of the graph $G$. For a set $S\subseteq V$, $f(S)=1$ iff there is exactly one red edge with exactly one endpoint in $S$. Thus this problem corresponds to picking edges so as to augment the red tree into a 2-edge-connected graph. It is known that $f$ is uncrossable. In each iteration the WGMV procedure raises the dual variables corresponding to all minimally unsatisfied sets. The edge $(c,d)$ gets tight in the first iteration. At the end of the first iteration $y_{\set{e}}=1/2$, and so in the second iteration $y_{\set{e}}$ increases to $3/4$ and $y_{\set{b,c,d}}$ to $1/4$ before edge $(b,e)$ goes tight. \begin{figure}
\caption{Example showing that the duals constructed by the WGMV procedure are not half-integral}
\label{fig:counter}
\end{figure}
\section{A stronger analysis of the WGMV algorithm} \label{sec:WGMVanalysis} To analyse the algorithm, Williamson et al.\cite{WGMV} argue that in each iteration the total contribution of the dual variables to the primal solution is at most twice the increase in the value of the dual solution. Added over all iterations, this implies that $\sum_{e\in F} c_e x_e\le 2\sum_S f(S)y_S$. If in an iteration the dual values of all active sets increase by $\delta$, then the contribution of the dual variables to the primal solution equals $\delta$ times the total degree of the active sets in $F$. On the other hand, the increase in the value of the dual solution is $\delta$ times the number of active sets, and hence Williamson et al. argue that in each iteration the average degree of the active sets in $F$ is at most 2.
Let $\mathcal{S}$ be the collection of minimally unsatisfied sets identified during a run of the algorithm. Note that we do not claim that $y_S >0$ for $S\in\mathcal{S}$. The uncrossability of $f$ implies that $\mathcal{S}$ is a laminar family. Add $V$, the set of all vertices, to $\mathcal{S}$ and construct a tree, $\mathcal{T}=(X,Y)$, which has vertex set $X=\set{v_S|S\in\mathcal{S}}$; $v_A$ is the parent of $v_B$ iff $A$ is the minimal set in $\mathcal{S}$ properly containing $B$.
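The parent relation on the laminar family can be computed directly from set inclusion. A minimal Python sketch (frozensets stand in for the members of $\mathcal{S}$; all names are illustrative): since the family is laminar, the supersets of any member form a chain, so the first proper superset met in order of increasing size is the minimal one.

```python
def laminar_tree(family):
    """Parent map for a laminar family: parent[A] is the minimal member
    properly containing A (the root, here V, has no parent)."""
    ordered = sorted(family, key=len)          # smaller sets first
    parent = {}
    for i, A in enumerate(ordered):
        for B in ordered[i + 1:]:              # first proper superset is minimal
            if A < B:
                parent[A] = B
                break
    return parent

V = frozenset("abcde")
family = [frozenset("a"), frozenset("ab"), frozenset("d"), V]
parent = laminar_tree(family)
# v_{ab} is the parent of v_a; V is the parent of v_{ab} and of v_d.
```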
Each set $S\in\mathcal{S}$ is labelled with the number of the iteration in which $S$ became satisfied; let $l:\mathcal{S}\rightarrow[T]$ be this function. Let $\mathcal{S}^i$ be the sets with label at least $i$; these are the minimally unsatisfied sets encountered in iteration $i$ or later. Similarly, each edge $e\in F$ is labelled with the number of the iteration in which it became tight. We overload notation and let $l:F\rightarrow[T]$ also denote this function. Let $F^i\subseteq F$ be the edges with label at least $i$. We note a few properties of these labels. \begin{enumerate}
\item If $B\subset A$ then $l(B)\le l(A)$.
\item If $e\in\delta_F(S)$ then $l(e)\ge l(S)$. \end{enumerate}
Let $v_{B_1},v_{B_2},\ldots v_{B_p}$ be the children of node $v_A$ in $\mathcal{T}$ (see Figure~\ref{fig:notation}). We number sets so that $l(B_1)\ge l(B_2)\ge \cdots \ge l(B_p)$. Let $p_i\in[p]$ be the largest index such that $l(B_{p_i})\ge i$. Hence all sets $B_j, j\in[p_i]$ are in $\mathcal{S}^i$. Let $X^i_A=A\setminus\cup_{j\in [p_i]} B_j$ and $H^i_A$ be a graph whose nodes correspond to sets $X^i_A, B_1,\ldots, B_{p_i}$ and edges correspond to the edges between these sets in $F$. Since sets $B_j, j\in[p_i]$ have label at least $i$, edges in $H^i_A$ have label at least $i$ and hence they are in $F^i$.
\begin{claim} $H^i_A$ is a forest. \end{claim}
\begin{proof} For contradiction assume $H^i_A$ has a cycle and consider the edge of the cycle, say $e=(u,v)$, which was added last to $F$. We consider two cases.
{\em Case 1: $u \in B_r$ and $v \in B_s$, $r,s\in[p_i]$.} When $e$ was picked, both $B_r$ and $B_s$ had another edge of $F$ across them and were therefore satisfied. Recall that $\mathcal{S}$ is the collection of all the minimally unsatisfied sets encountered during the growth phase of the algorithm. Picking $e$ did not lead to any unsatisfied set in $\mathcal{S}$ getting satisfied, and this is a contradiction.
{\em Case 2: $u \in B_r$ and $v \in X^i_A$, $r\in[p_i]$.} No subset of vertices in $X^i_A$ is unsatisfied in the $i^{\rm th}$ (or any subsequent) iteration. When $e$ was picked, $B_r$ had another edge of $F$ across it and was therefore satisfied. Once again, picking $e$ did not lead to any unsatisfied set in $\mathcal{S}$ getting satisfied, and this is a contradiction. \end{proof}
Since $H^i_A$ is a forest on $p_i+1$ vertices it contains at most $p_i$ edges.
\begin{definition} A set $A\in\mathcal{S}$ is {\em critical} in iteration $i$ if $H^i_A$ is a tree in which the node corresponding to $X^i_A$ is a leaf. \end{definition} For a set $A\in\mathcal{S}^i$, let $\alpha^i(A)= \delta_{F}(A)\setminus\cup_{S\subset A, S\in\mathcal{S}^i} \delta_{F}(S)$. Thus $\alpha^i(A)$ is the set of edges of $F$ which have one endpoint in the set $A\setminus\cup_{S\subset A, S\in\mathcal{S}^i} S$ and the other endpoint in $V\setminus A$. Equivalently, $\alpha^i(A)$ is the subset of edges in $\delta_F(A)$ which are incident on vertices in $X^i_A$. We note the following important property of $\alpha^i(A)$. \begin{claim}
Let $A\in\mathcal{S}^i$. The collection of sets $\set{\alpha^i(S)| S\in\mathcal{S}^i, S\subseteq A}$ forms a partition of the set $\delta_F(A)$. \end{claim}
Let $\mathcal{A}^i$ be the collection of minimally unsatisfied sets whose dual is raised in iteration $i$ of the WGMV algorithm. These are the {\em active sets} in iteration $i$. Note that \begin{enumerate}
\item $\mathcal{A}^i\subseteq \mathcal{S}^i$.
\item A set $S \in \mathcal{S}$ is contained in $\mathcal{S}^i$ if and only if there exists an $A\in\mathcal{A}^i$ such that $A\subseteq S$.
\item If $A\in\mathcal{A}^i$ then no subset of $A$ is in $\mathcal{S}^i$ which implies $\alpha^i(A)=\delta_F(A)$. \end{enumerate}
\begin{lemma}\label{lem:equal} $\sum_{S\in\mathcal{S}^i} \abs{\alpha^i(S)} \le 2\abs{\mathcal{A}^i}-2+\abs{\mathcal{R}^i}$ where $\mathcal{R}^i$ is the collection of critical sets in iteration $i$. \end{lemma} \begin{proof} We prove the lemma by a token redistribution argument. We begin by assigning every node of the tree $\mathcal{T}$ a number of tokens equal to two less than twice the number of its children in $\mathcal{S}^i$. Thus a node with 1 child in $\mathcal{S}^i$ gets no tokens. We also give every node that corresponds to a critical set in iteration $i$ an additional token. It is easy to see that the total number of tokens distributed initially is $2\abs{\mathcal{A}^i}-2+\abs{\mathcal{R}^i}$.
$v_A$ transfers one token to each edge in $H^i_A$ incident on $X^i_A$ and 2 tokens each to remaining edges in $H^i_A$. If $v_A$ has $p_i$ children in $\mathcal{S}^i$ and is critical in iteration $i$, it was assigned $2p_i-1$ tokens and these are sufficient to undertake the above assignment. If $v_A$ is not critical then it was assigned $2p_i-2$ tokens and again this is sufficient to complete the transfer of tokens to edges in $H^i_A$.
For every edge $e\in F^i$ there is a unique $A\in\mathcal{S}^i$ such that $e$ is in $H^i_A$. If $e$ has an endpoint in $X^i_A$ it is assigned 1 token by $v_A$. Note that this edge contributes 1 to the sum on the left. The remaining edges of $F^i$ are assigned 2 tokens each, and this is also their contribution to the sum on the left. This establishes that the sum on the left equals the number of tokens assigned to edges, which is at most the number of tokens assigned to nodes, which in turn equals the quantity on the right. \end{proof}
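The initial token count in the proof of Lemma~\ref{lem:equal} can be sanity-checked numerically. In the sketch below (all names hypothetical, a toy tree), every node with $c\ge 1$ children receives $2c-2$ tokens and every critical set one extra token; the leaves of the toy tree play the role of the sets in $\mathcal{A}^i$:

```python
def initial_tokens(children, critical):
    """Total tokens distributed: 2c-2 per node with c >= 1 children,
    plus one extra token per critical set."""
    from_nodes = sum(2 * len(c) - 2 for c in children.values() if c)
    return from_nodes + len(critical)

# Toy tree: V -> {A, B}, A -> {a1, a2}.  Leaves (role of A^i): B, a1, a2.
children = {"V": ["A", "B"], "A": ["a1", "a2"], "B": [], "a1": [], "a2": []}
leaves = [v for v, c in children.items() if not c]
# Matches 2|A^i| - 2 + |R^i| with A the only critical set: 2*3 - 2 + 1 = 5.
assert initial_tokens(children, critical={"A"}) == 2 * len(leaves) - 2 + 1
```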
\begin{lemma}\label{lem:one} If $A$ is critical in iteration $i$ then $\alpha^i(A)\neq\phi$. \end{lemma} \begin{proof} Since $A$ is critical, $H^i_A$ is a tree and $X^i_A$ is a leaf node. Let $e$ be the unique edge in $H^i_A$ incident to $X^i_A$. Consider the step in the reverse delete phase when edge $e$ was considered and was retained in $F$ only because its deletion would have caused some set to become unsatisfied. Let $U\subseteq V$ be the minimal such set and note that, at this step, $e$ is the only edge of the current solution across $U$.
\begin{claim} $\forall j\in[p_i]$, $U\cap B_j =\phi$ or $U\cap B_j=B_j$. \end{claim} \begin{proof} For a contradiction assume that for some $j\in[p_i]$, $\phi\neq U\cap B_j\subset B_j$. Since $f(B_j)=f(U)=1$ by uncrossability either $f(B_j\cap U)=1$ or $f(B_j\setminus U)=1$. In either case, during the growth phase we must have added an edge, say $g$, to $F$ between $B_j\setminus U$ and $B_j\cap U$ in an iteration before $B_j$ became a minimally unsatisfied set. Thus, in the reverse delete phase when we considered $e$, edge $g$ was in $F$ and hence $e$ was not the only edge across $U$. \end{proof}
\begin{figure}
\caption{ Illustrating the notation used. $A$ is a critical set. The thick edges are the edges in $H^i_A$.}
\label{fig:notation}
\end{figure} If $U\cap A$ includes some of the sets $B_j, j\in[p_i]$, and not the others, then the number of edges across the set $U$ will be more than 1. Thus either $\cup_{j\in[p_i]} B_j\subseteq A\cap U$ or $\cup_{j\in[p_i]} B_j\subseteq A\setminus U$. Since $f(A)=f(U)=1$, by uncrossability we have either $f(A\cap U)=1$ or $f(A\setminus U)=1$. If $\cup_{j\in[p_i]} B_j \subseteq A\cap U$ then $f(A\setminus U)\neq 1$, as that would imply a minimally unsatisfied set in $X^i_A$, which would be a contradiction. Similarly, if $\cup_{j\in[p_i]} B_j\subseteq A\setminus U$ then $f(A\cap U)\neq 1$. Hence we need to consider only two cases \begin{enumerate} \item $\cup_{j\in[p_i]} B_j \subseteq A\cap U$ and $f(A\cap U)=f(A\cup U)=1$: $F$ should have an edge across the set $A\cup U$. Since the only edge across $U$ goes to $A\setminus U$, there should be an edge across $A$ that is incident to $X^i_A$. \item $\cup_{j\in[p_i]} B_j \subseteq A\setminus U$ and $f(A\setminus U)=f(U\setminus A)=1$: $F$ should have an edge across the set $U\setminus A$. Since the only edge across $U$ goes from $A\cap U$ to $A\setminus U$, there should be an edge across $A$ that is incident to $A\cap U \subseteq X^i_A$. \end{enumerate} Hence in both cases we conclude that there is an edge across $A$ incident to $X^i_A$, which implies $\alpha^i(A)\neq\phi$. \end{proof}
\begin{lemma} \label{lem:twice} The total degree (in $F$) of sets in $\mathcal{A}^i$ is at most twice $\abs{\mathcal{A}^i}$. \end{lemma} \begin{proof} A set in $\mathcal{A}^i$ cannot be critical in iteration $i$. Further, for $S\in\mathcal{A}^i$, $\abs{\alpha^i(S)}$ equals the degree of $S$ in $F$. By Lemma~\ref{lem:one}, if $A$ is critical in iteration $i$ then $\alpha^i(A)\neq\phi$. Hence $\sum_{S\in\mathcal{S}^i} \abs{\alpha^i(S)} \ge \sum_{S\in\mathcal{A}^i} \abs{\delta_F(S)}+\abs{\mathcal{R}^i}$ where $\mathcal{R}^i$ is the collection of critical sets. Applying Lemma~\ref{lem:equal}, we obtain $\sum_{S\in\mathcal{A}^i} \abs{\delta_F(S)}\le 2\abs{\mathcal{A}^i}-2$, which proves the lemma. \end{proof} \remove{ $A$ became minimally unsatisfied only after sets $B_1,B_2,\ldots, B_p$ were satisfied by inclusion of some tight edges in $F$; let $e\in F$ be one such tight edge. Note that $e$ has both endpoints within the set $A$ since if one endpoint of $e$ was not in $A$ then $A$ would have been satisfied when $e$ was added to $F$ and would never become a minimally unsatisfied set.}
\section{Modifying WGMV} \label{sec:WGMVmodified} We now modify the WGMV algorithm so that the duals obtained are half-integral while ensuring that the primal solution has cost at most twice the dual solution. In doing so we are guided by the fact that the GW algorithm constructed half-integral duals since the parity of all vertices in a set was identical. This property does not hold true for the WGMV algorithm as seen in the example in Figure~\ref{fig:counter}.
As before, let $\mathcal{S}$ be the set of minimally unsatisfied sets during a run of the algorithm. Our modification to the WGMV algorithm involves reducing the costs of some edges in $\delta(S), S\in\mathcal{S}$, by 1/2. Let $\delta'(S)\subseteq\delta(S)$ denote the subset of edges whose cost was reduced by 1/2 when considering $S$. We now define the {\em parity} of an edge $e$ with respect to a set $S\in\mathcal{S}, e\in\delta(S)$ as $$\pi_e(S)=\set{\sum_{e\in\delta(T),T\subseteq S} y_T +\frac{1}{2}\abs{\set{T\subseteq S|e\in\delta'(T)}}}$$ where, as before, $\set{x}$ denotes the fractional part of $x$. Our modification to the WGMV procedure is: \begin{quote} Let $S$ be a set which becomes minimally unsatisfied at time $t$ and let $x\in S$ be an active vertex of set $S$. Then $\pi_x(S)=\set{t}$. For edge $e\in\delta(S)$, if $\pi_e(S)\neq\set{t}$ then decrease $c_e$ by 1/2 (note $e$ gets included in $\delta'(S)$). \end{quote} We decrease the costs of edges in $\delta(S)$ in the above manner only when $S$ becomes minimally unsatisfied, and we need to argue that the total cost of edges in $F$ can still be bounded by twice the sum of the dual variables. Our modification allows us the following claim. \begin{claim} $\forall S\in\mathcal{S}$, $\forall e,f\in\delta(S)$, $\pi_e(S)=\pi_f(S)$ \end{claim}
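The displayed definition of $\pi_e(S)$ is easy to evaluate mechanically. The sketch below is a direct transcription with hypothetical data: `parity` sums the duals $y_T$ of the sets $T\subseteq S$ whose boundary contains $e$, adds $1/2$ per set with $e\in\delta'(T)$, and takes the fractional part:

```python
def frac(x):
    """Fractional part of a nonnegative number."""
    return x - int(x)

def parity(e, inner_sets, y, delta, delta_prime):
    """Evaluate pi_e(S): fractional part of the total dual raised across e
    by sets T inside S, plus 1/2 for each such T with e in delta'(T)."""
    total = sum(y[T] for T in inner_sets if e in delta[T])
    total += 0.5 * len([T for T in inner_sets if e in delta_prime[T]])
    return frac(total)

# Hypothetical data: T1 inside T2 inside S, edge e crossing both, and e's
# cost was reduced once (when T1 was considered).
y = {"T1": 0.5, "T2": 1.0}
delta = {"T1": {"e"}, "T2": {"e"}}
delta_prime = {"T1": {"e"}, "T2": set()}
print(parity("e", ["T1", "T2"], y, delta, delta_prime))  # 0.0
```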
When we increase the dual variables of sets in $\mathcal{A}^i$ in iteration $i$, one or more edges go tight and these are added to a set $T$. Let $t^i$ be the time at which we stop growing the dual variables of sets in $\mathcal{A}^i$. The edges of $T$ are considered in an arbitrary order and $e\in T$ is added to $F$ if it crosses a minimally unsatisfied set. Note that whenever an edge is added to $F$ the collection of minimally unsatisfied sets is recomputed. Let $\mathcal{C}$ be the collection of minimally unsatisfied sets after all edges in $T$ have been considered. For every $S\in\mathcal{C}$ and every edge $e\in\delta(S)$, if $\pi_e(S)\neq\set{t^i}$ then we reduce the cost of edge $e$ by 1/2. All edges that go tight after this step are included in $T$ and the process is repeated until no edge gets added to $T$. The minimally unsatisfied sets at this stage are the active sets, $\mathcal{A}^{i+1}$, for iteration $i+1$.
\begin{algorithm} \caption{Modification to an iteration of the WGMV algorithm} \label{alg:modified} \begin{algorithmic}[1] \State $\mathcal{C}$ is the collection of minimally unsatisfied sets with respect to $F$. \State $T$ is the set of tight edges which have not yet been included in $F$. \Repeat \ForAll{$e\in T$} \If{$\exists C\in\mathcal{C}, e\in\delta(C)$} \State $F\leftarrow F\cup\set{e}$; compute $\mathcal{C}$ \EndIf \EndFor \State $T\leftarrow\phi$ \ForAll{$C\in\mathcal{C}$} \ForAll{$e\in\delta(C)$} \If{$\pi_e(C)\neq\set{t}$} \State $c_e\leftarrow c_e-1/2$ \If{$e$ is tight} \State $T\leftarrow T\cup\set{e}$ \EndIf \EndIf \EndFor \EndFor \Until {$T=\phi$} \end{algorithmic} \end{algorithm}
Let $\mathcal{C}^i$ be the collection of sets in $\mathcal{S}$ which properly contain a set in $\mathcal{A}^i$ and are subsets of some set in $\mathcal{A}^{i+1}$. Formally, $\mathcal{C}^i=\set{S\in\mathcal{S}|\exists A\in\mathcal{A}^i,\exists B\in\mathcal{A}^{i+1}, A\subset S\subseteq B}$. Note that \begin{enumerate}
\item $\mathcal{A}^{i+1}\setminus \mathcal{A}^i\subseteq \mathcal{C}^i$,
\item $\mathcal{A}^i\cap\mathcal{C}^i=\phi$,
\item any edge whose cost is reduced by 1/2 in iteration $i$ goes across a set in $\mathcal{C}^i$,
\item $\mathcal{C}^i\cap\mathcal{C}^{i+1}=\phi$ for $i\in[T-1]$ \end{enumerate}
Before $A\in\mathcal{C}^i$ was considered in iteration $i$ we would have considered the sets in $\mathcal{S}^i$ corresponding to the children of node $v_A$ in the tree $\mathcal{T}$. Let $B_j, j\in[p_i]$ be these sets and note that they belong to $\mathcal{C}^i\cup\mathcal{A}^i$. For each $B_j, j\in[p_i]$ we would already have reduced the cost of the edges $e\in\delta(B_j)$ with $\pi_e(B_j)\neq\set{t^i}$. Hence when considering $A$ we would only be reducing the cost of edges in $\delta(A)$ which are incident to $A\setminus\cup_{j\in[p_i]} B_j=X^i_A$. Thus the edges of $F$ whose cost is reduced when considering $A\in\mathcal{C}^i$ form a subset of $\alpha^i(A)$; call this subset $\beta^i(A)$.
After iteration $i$, the (reduced) cost of an edge $e$ is $c_e-\sum_{S:e \in \delta(S)}y_S$, where $y$ is the dual value after iteration $i$. Note that as the algorithm proceeds, the (reduced) costs of edges decrease. To prove that the modified WGMV procedure gives a 2-approximate solution, we bound the total reduction in the costs of edges in $F$ by twice the total increase in the value of the dual variables. In iteration $i$, the total reduction in the edge costs of $F$ due to the increase of the dual variables of sets in $\mathcal{A}^i$ equals $\gamma^i\sum_{S\in\mathcal{A}^i} \abs{\delta_F(S)}=\gamma^i\sum_{S\in\mathcal{A}^i} \abs{\alpha^i(S)}$, where $\gamma^i=t^i-t^{i-1}$ is the increase in the dual variable of a set in $\mathcal{A}^i$. The other reduction occurs when we reduce by 1/2 the costs of edges due to parity considerations. The total reduction in the cost of edges of $F$ due to this reason is at most $(1/2)\sum_{S\in\mathcal{C}^i}\abs{\beta^i(S)}$.
To prove the approximation guarantee of WGMV, the authors of \cite{WGMV} show that in every iteration the total reduction in the cost of edges in $F$ is at most twice the total increase in dual values in that iteration. To prove the approximation guarantee of the modified WGMV, we need to charge the reduction in edge costs across iterations. To do this, we introduce a procedure for marking and unmarking sets. All sets are unmarked before the first iteration of the algorithm. In the first iteration a set $A\in\mathcal{S}$ is not marked \begin{enumerate} \item if $A$ is critical, or \item if node $v_A$ exhausts all its tokens and $\alpha^1(A)=\phi$. \end{enumerate} All other sets in $\mathcal{S}$ are marked in iteration 1. Let $M$ be the number of sets which are marked.
In iteration $i$ we unmark a set $S\in\mathcal{C}^i$ if it is critical and $\beta^i(S)\neq\phi$. Let $M_i$ be the number of sets unmarked in iteration $i$. In Lemma~\ref{lem:marking} we argue that we unmark a set only if it has a mark on it.
\begin{lemma}\label{lem:modified} In any iteration $i>1$, $$\gamma^i\sum_{S\in\mathcal{A}^i}\abs{\alpha^i(S)} + (1/2)\sum_{S\in\mathcal{C}^i}\abs{\beta^i(S)} - M_i/2 \le 2\gamma^i(\abs{\mathcal{A}^i}-1)$$ \end{lemma} \begin{proof} Recall $\mathcal{R}^i$ is the collection of critical sets in iteration $i$. \begin{equation}\label{eq:one} \gamma^i\sum_{S\in\mathcal{S}^i} \abs{\alpha^i(S)} \ge \gamma^i\sum_{S\in\mathcal{A}^i} \abs{\alpha^i(S)} + \gamma^i\sum_{S\in\mathcal{C}^i} \abs{\alpha^i(S)} + \gamma^i\sum_{S\in\mathcal{S}^i\setminus(\mathcal{A}^i\cup\mathcal{C}^i)} \abs{\alpha^i(S)} \end{equation} By Lemma~\ref{lem:one} we obtain \begin{equation}\label{eq:two} \gamma^i\sum_{S\in\mathcal{S}^i\setminus(\mathcal{A}^i\cup\mathcal{C}^i)} \abs{\alpha^i(S)} \ge \gamma^i\abs{\mathcal{R}^i\setminus\mathcal{C}^i} \end{equation} and the unmarking procedure gives \begin{equation}\label{eq:three} \gamma^i\sum_{S\in\mathcal{C}^i} \abs{\alpha^i(S)}+M_i/2 \ge (1/2)\sum_{S\in\mathcal{C}^i} \abs{\beta^i(S)} + \gamma^i\abs{\mathcal{R}^i\cap\mathcal{C}^i} \end{equation} Inequality \ref{eq:three} holds since \begin{enumerate} \item if $S$ is not critical, it contributes $\gamma^i\abs{\alpha^i(S)}$ to the left and $(1/2)\abs{\beta^i(S)}$ to the right, and $\beta^i(S)\subseteq\alpha^i(S)$ with $\gamma^i\ge 1/2$. \item if $S$ is critical but $\beta^i(S)=\phi$, then $S$ contributes $\gamma^i\abs{\alpha^i(S)}$ to the left and $\gamma^i$ to the right, and $\alpha^i(S)\neq\phi$. \item if $S$ is critical and $\beta^i(S)\neq\phi$, then $S$ contributes $\gamma^i\abs{\alpha^i(S)}+1/2$ to the left and $(1/2)\abs{\beta^i(S)}$ $+ \gamma^i$ to the right. Since $\phi\neq\beta^i(S)\subseteq\alpha^i(S)$ and $\gamma^i\ge 1/2$, the contribution of $S$ to the left is at least its contribution to the right.
\end{enumerate} Adding inequalities \ref{eq:one}, \ref{eq:two} and \ref{eq:three} we get \begin{equation}\label{eq:four} \gamma^i\sum_{S\in\mathcal{S}^i} \abs{\alpha^i(S)} \ge \gamma^i\sum_{S\in\mathcal{A}^i} \abs{\alpha^i(S)} + (1/2)\sum_{S\in\mathcal{C}^i} \abs{\beta^i(S)} + \gamma^i\abs{\mathcal{R}^i} -M_i/2 \end{equation} Combining inequality \ref{eq:four} with $\gamma^i$ times the inequality in Lemma~\ref{lem:equal} proves the lemma. \end{proof}
Iteration 1 differs from the other iterations since we mark sets in this iteration. For iteration 1 we make the following claim. \begin{lemma}\label{lem:modified1} $$\gamma^1\sum_{S\in\mathcal{A}^1}\abs{\alpha^1(S)} + (1/2)\sum_{S\in\mathcal{C}^1}\abs{\beta^1(S)} + (1/2)(M-M_1)\le 2\gamma^1(\abs{\mathcal{A}^1}-1)$$ \end{lemma} \begin{proof} Inequalities \ref{eq:one} and \ref{eq:three} remain unchanged for iteration 1 (with 1 replacing $i$) while inequality \ref{eq:two} is modified due to the marks placed on sets. $A$ is marked if it is not critical and $\alpha^1(A)\neq\phi$; let $m$ be the number of such sets. This together with Lemma~\ref{lem:one} gives \begin{equation}\label{eq:modtwo} \gamma^1\sum_{S\in\mathcal{S}^1\setminus(\mathcal{A}^1\cup\mathcal{C}^1)} \abs{\alpha^1(S)} \ge \gamma^1\abs{\mathcal{R}^1\setminus\mathcal{C}^1}+m/2 \end{equation} Adding inequalities~\ref{eq:one}, \ref{eq:three} (with $i=1$) and inequality \ref{eq:modtwo} we get \begin{equation}\label{eq:modfour} \gamma^1\sum_{S\in\mathcal{S}^1} \abs{\alpha^1(S)} \ge \gamma^1\sum_{S\in\mathcal{A}^1} \abs{\alpha^1(S)} + (1/2)\sum_{S\in\mathcal{C}^1} \abs{\beta^1(S)} + \gamma^1\abs{\mathcal{R}^1} +(1/2)(m-M_1) \end{equation} Recall that we also mark a set $A$ when node $v_A$ does not exhaust all its tokens. Note that the number of such sets is $M-m$ and hence the inequality in Lemma~\ref{lem:equal} becomes
\begin{equation}\label{eq:equal} \sum_{S\in\mathcal{S}^1} \abs{\alpha^1(S)} + M-m \le 2(\abs{\mathcal{A}^1}-1)+\abs{\mathcal{R}^1} \end{equation} Combining inequality \ref{eq:modfour} and inequality~\ref{eq:equal} and using the fact that $\gamma^1\ge 1/2$ proves the lemma. \end{proof}
\remove{ \begin{enumerate} \item Once a set is marked in an iteration it remains marked until it is unmarked in a subsequent iteration. A set once unmarked is not marked again. \item A set $S\in\mathcal{S}^i\setminus\mathcal{A}^i\cup\mathcal{C}^i$ is {\em marked} in iteration $i$ if $S$ is not critical in iteration $i$ and $\alpha^i(S)\ge 1$. \item In iteration $i$ we unmark a set $S\in\mathcal{C}^i$ if it is critical and $\beta^i(S)\ge 1$. In Lemma~\ref{lem:marking} we argue that we unmark a set only if it has a mark on it. \end{enumerate}}
Summing the inequalities in Lemma~\ref{lem:modified1} and Lemma~\ref{lem:modified} over all iterations gives us \begin{equation*} \sum_{i\in[T]}\left(\gamma^i\sum_{S\in\mathcal{A}^i}\abs{\alpha^i(S)} + (1/2)\sum_{S\in\mathcal{C}^i}\abs{\beta^i(S)} - M_i/2\right) + M/2 \le \sum_{i\in[T]} 2\gamma^i(\abs{\mathcal{A}^i}-1) \end{equation*} Since we unmark a set only if it has been marked in iteration 1 (Lemma \ref{lem:marking}), $M\ge\sum_{i\in[T]} M_i$. Therefore, the total reduction in the cost of the edges of $F$ over all iterations (which is the total cost of the edges in $F$) is at most the quantity on the left of the above inequality. Hence the cost of the solution $F$ is at most twice the total dual raised over all iterations, and this completes the proof of Theorem \ref{thm : half integer WGMV}.
It remains to show that a set is unmarked only if it has been marked in iteration 1.
\begin{lemma}\label{lem:marking}
If $A\in\mathcal{C}^i$ is critical in iteration $i$ but not marked in iteration 1 then $\beta^i(A)=\phi$.
\end{lemma}
\begin{proof}
Let $\set{B_j|j\in[p]}$ be the sets corresponding to the children of $v_A$ and $X^1_A=A\setminus\cup_{j\in[p]} B_j$. If $H^1_A$ contains a tree spanning the nodes corresponding to the sets $X^1_A$, $B_j, j\in[p]$, then the edges of $\delta(A)$ would have equal parity. If $A$ becomes a minimally unsatisfied set at time $t$ then $B_1$ was active till time $t$. Therefore the parity of the edges in $\delta(B_1)$, and hence that of all edges in $\delta(A)$, would equal $\set{t}$, which would imply $\beta^i(A)=\phi$.
Since $A$ is not marked, either it is critical in iteration 1 or $v_A$ exhausts all its tokens and $\alpha^1(A)=\phi$. In the former case we have a tree spanning the nodes corresponding to the sets $X^1_A$, $B_j, j\in[p]$. In the latter case, if there is no such tree, then there is a tree spanning the nodes corresponding to the sets $B_j, j\in[p]$, and no edge of $\delta_F(A)$ is incident to $X^1_A$. Again, this implies that all edges in $\delta_F(A)$ have equal parity.
\end{proof}
\remove{Consider the time, $t$, at which set $A$ becomes minimally unsatisfied. If in the preceding iteration, node $v_A$ was a good node then in that iteration we can include the degree of the node $v_A$ in $Z\setminus Z_A$ while bounding the total degree of the leaves in $Z$ by twice the number of leaves. Since in this iteration sets $B_1,B_2,\ldots B_p$ are active we would not have to decrease edge costs of the edges incident to these sets while ensuring uniform parity of nodes in the set $A$. The dual variables increased by at least 1/2 in this iteration and we can thus account for the decrease by at most 1/2 of edges of $F$ incident to $A$ caused by our modification to the WGMV procedure.
Suppose instead that, $v_A$ was a critical node in the iteration preceding the one in which $A$ became minimally unsatisfied. If $v_A$ was a good node in any of the preceding iterations then we can in that iteration save a dual value of 1/2 and associate it with the node $v_A$. Note that this saving of 1/2 is all we need to be able to bound the decrease in the costs of the edges incident to $A$ in $F$ by 1/2 when $A$ becomes minimally unsatisfied. If however, $v_a$ was never a good node then we claim we do not need to decrease costs of edges incident to $A$ after $A$ became minimally unsatisfied. Suppose $z\in Z_A$ was incident to $B_1$. Among the edges of $Z_A$, $z$ would be the first edge to get tight since if some other edge went tight ahead of $z$, $z$ could never go tight. Since the parity of the node (with respect to set $A$) in $A\setminus \cup_i B_i$ to which edge $z$ is incident is 0, all nodes in $B_1$ would also have parity 0. Continuing in this manner we can argue that all nodes in $A$ would have parity 0 when $A$ became minimally unsatisfied.}
\section{Computing half-integral flow in Seymour graphs} \label{sec:Seymour} \def\mbox{\tt 2ECAP}{\mbox{\tt 2ECAP}}
In this section, we describe the connection between multicommodity flows/multicuts and connectivity augmentation problems from \cite{KGS}. In particular, we will be interested in \mbox{\tt 2ECAP}, a special case of the \mbox{\tt UCC}\ problem defined in \cite{KGS}. \begin{definition} \textbf{2-edge connectivity Augmentation Problem (\mbox{\tt 2ECAP})}: Given an undirected graph (without loops but possibly with parallel edges) $G=(V,E \cup Y)$ and edge weights $w:E\rightarrow \mathbb{Z}_{\geq 0}$, find a minimum weight set of edges $E' \subseteq E$ such that each connected component of $(V,E' \cup Y)$ is 2-edge connected. \end{definition} Let $f:2^V\rightarrow\{0,1\}$ be defined as follows: for every $S\subseteq V$, $f(S)=1$ iff exactly one edge of $Y$ crosses the cut $(S,V \setminus S)$; otherwise $f(S)=0$. $\mbox{\tt 2ECAP}$ can be formulated equivalently as: find a minimum weight subset of edges $E' \subseteq E$ such that at least $f(S)$ edges of $E'$ cross each cut $(S, V \setminus S)$. It is well known that $f$ as defined above is uncrossable, and hence the WGMV algorithm can be used to compute a 2-approximate solution.
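The cut requirement function of \mbox{\tt 2ECAP} is easy to state in code. A minimal sketch (illustrative names) on a tiny instance where $Y$ is a path $a$--$b$--$c$:

```python
def make_f(Y):
    """Cut requirement for 2ECAP: f(S) = 1 iff exactly one edge of Y
    crosses the cut (S, V - S); such a set S must gain an augmenting edge."""
    def f(S):
        crossing = sum(1 for (u, v) in Y if (u in S) != (v in S))
        return 1 if crossing == 1 else 0
    return f

# The cut ({a}, {b,c}) is crossed by one edge of Y, so f = 1;
# the cut ({b}, {a,c}) is crossed by two, so f = 0.
f = make_f([("a", "b"), ("b", "c")])
print(f({"a"}), f({"b"}))  # 1 0
```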
Now we define the multicommodity flow problem. Let $G=(V,E)$ be a simple undirected graph with edge capacities $c:E\rightarrow\mathbb{Z}_{\geq 0}$ (called the supply graph) and let $H=(V,F)$ be a simple graph each edge of which corresponds to a commodity, the endpoints of the edge being the source/sink of that commodity (called the demand graph). Given $G$ and $H$, an instance of sum multicommodity flow asks for a feasible flow of maximum total value between the endpoints of the demand edges. A minimum multicut is a set of edges of $G$ of minimum total capacity whose removal disconnects the endpoints of all the demand edges in $G$. It is easy to see that the value of a minimum multicut ($C$) is always at least the value of the maximum flow ($F$). Given a class of instances, the maximum value of the ratio between $C$ and $F$ is known as the $\texttt{flow-multicut gap}$ for the class. This gap is $\Theta(\log k)$, where $k$ is the number of commodities, for general $G,H$, while it is $O(1)$ for planar $G$ and arbitrary $H$. There is a rich literature on proving $\texttt{flow-multicut gaps}$ \cite{garg1996approximate,garg1997primal,klein1993excluded}.
If we restrict the flow to be integral (resp. half-integral), we call the corresponding gap the integral (resp. half-integral) $\texttt{flow-multicut gap}$. An instance of the multicommodity flow/multicut problem is called a Seymour instance if the union of the supply and demand graphs is planar. In \cite{KGS}, the authors establish a \texttt{flow-multicut gap} of at most 2 for Seymour instances by showing that the problem of computing a multicut in a Seymour instance is equivalent to solving an appropriate instance of $\mbox{\tt 2ECAP}$ in the planar dual of the union of the supply and demand graphs. Given a planar graph $G$, let $G^{*}$ denote its planar dual. Formally, \begin{lemma}[\cite{KGS}] $C$ is a multicut for the instance $(G,H)$ if and only if $C^{*}$ is a feasible solution to $\mbox{\tt 2ECAP}$ for the instance $(V^{*},E^{*}\cup F^{*})$. \end{lemma} The WGMV algorithm immediately gives a 2-approximation algorithm for multicuts in Seymour instances. In order to prove the \texttt{flow-multicut gap}, \cite{KGS} shows that the duals constructed by the WGMV algorithm correspond to flow paths in $G$ and that this correspondence is value preserving, i.e., the total flow is equal to the total value of the dual, and if the duals constructed are integral (resp. half-integral), then the corresponding flows are integral (resp. half-integral). Formally, \begin{lemma}[\cite{KGS}] There exists a flow of value $\sum_{S \subseteq V^{*}}y_S$ in $G$. \end{lemma} \cite{KGS} show how to extract a half-integral flow of value at least half of any given fractional flow, and an integral flow of value at least half of any given half-integral flow. This shows a half-integral (resp. integral) flow-multicut gap of 4 (resp. 8). Using our modified WGMV algorithm, we obtain a half-integral dual (and hence a half-integral flow) of value at least half the cost of the $\mbox{\tt 2ECAP}$ solution and hence of the multicut. This gives us a 2 (resp. 4) approximate half-integral (resp. integral) \texttt{flow-multicut} theorem for Seymour instances.
\begin{theorem} \label{integral maxflow mincut} Let $G+H$ be planar. There exists a feasible integral flow of value $F$ and a multicut of value $C$ such that $C \leq 4F$. Further, such a flow and cut can be computed in polynomial time. \end{theorem} \cite{KGS} shows a class of Seymour instances for which the half-integral flow-multicut gap approaches 2 from below. This, along with our upper bound of 2, proves that Theorem~\ref{half integral maxflow mincut} is tight. The best known lower bound for the integral \texttt{flow-multicut gap} is also 2, and it remains an interesting open question to determine the exact gap.
In this paper we have shown that the Williamson et al. procedure for the weakly supermodular cut cover problem can be modified to yield a dual solution that is half-integral. This implies a 2-approximate half-integral max-multicommodity-flow min-multicut theorem for instances where the union of the supply and demand edges is planar. We conjecture that the gap between the maximum integral flow and the minimum multicut for such instances is also bounded by 2 and leave this as an open problem.
\end{document}
\begin{document}
\begin{abstract} We use an alternative definition of topological complexity to show that the topological complexity of the mapping telescope of a sequence $X_1\stackrel{f_1}{\longrightarrow}X_2\stackrel{f_2}{\longrightarrow}X_3\stackrel{f_3}{\longrightarrow}\ldots$ is bounded above by $2\max\{\mathord{\mathrm{TC}}(X_i);\;i=1,2,\ldots\}$. \end{abstract} \title{Topological complexity of the telescope}
\section{Introduction}
The notion of topological complexity was first introduced by Farber in \cite{Farber:TC}:
\begin{definition}\label{FarberDef} {\em Topological complexity} $\mathord{\mathrm{TC}}(X)$ of a space $X$ is the least integer $n$ for which there exist an open cover $\{U_1, U_2,\ldots, U_n\}$ of $X\times X$ and sections $s_i\colon U_i\rightarrow X^I$ of the fibration $\pi\colon X^I\rightarrow X\times X$, $\alpha\mapsto (\alpha(0),\alpha(1))$. If no such integer exists we write $\mathord{\mathrm{TC}}(X)=\infty$. \end{definition}
In \cite{IS}, Iwase and Sakai proved that (for nice spaces $X$) topological complexity is a special case of what James and Morris \cite{JamesMorris} call {\em fibrewise pointed LS category}. A {\em fibrewise pointed space} over a {\em base} $B$ is a topological space $E$, supplied with a {\em projection} $p\colon E\rightarrow B$ and a {\em section} $s\colon B\rightarrow E$. Fibrewise pointed spaces over a base $B$ form a category and the notions of fibrewise pointed maps and fibrewise pointed homotopies are defined as one would expect. More details can be found in \cite{James} and \cite{JamesMorris}.
We consider the product $X\times X$ as a fibrewise pointed space over the base $X$ with the projection to the first component and the diagonal section $\Delta\colon X\rightarrow X\times X$. According to Theorem 1.7 of \cite{IS}, we do not have to work with the fibrewise pointed homotopies but can instead use the less restrictive notion of (unpointed) fibrewise homotopies. A fibrewise homotopy in this case is any homotopy $H\colon X\times X\times I\rightarrow X\times X$ that fixes the first coordinate. So, $H(x,y,t)=(x,h(x,y,t))$ for some homotopy $h\colon X\times X\times I\rightarrow X$. For obvious reasons we call them {\em vertical homotopies}.
We can therefore consider the following theorem as an alternative definition of topological complexity: \begin{theorem}\label{IwaseDef} {\em Topological complexity} $\mathord{\mathrm{TC}}(X)$ of a space $X$ is the least integer $n$ for which there exists an open cover $\{U_1, U_2,\ldots, U_n\}$ of $X\times X$ such that each $U_i$ is vertically compressible to the diagonal $\Delta(X)$. If no such integer exists we write $\mathord{\mathrm{TC}}(X)=\infty$. \end{theorem}
Note that we do not require the homotopies to be stationary on the section $\Delta(X)$, nor do we require the sets $U_i$ to contain the section.
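As a quick illustration of this characterization (a standard computation, added here only for concreteness and not used in the sequel), consider $X=S^1$, for which $\mathord{\mathrm{TC}}(S^1)=2$:

```latex
% Two open sets suffice; each is vertically compressible to the diagonal.
\begin{align*}
U_1 &= \{(x,y)\in S^1\times S^1 : y\neq -x\},\\
U_2 &= \{(x,y)\in S^1\times S^1 : y\neq x\}.
\end{align*}
```

On $U_1$ the homotopy fixes $x$ and slides $y$ to $x$ along the shorter arc (well defined since $y\neq -x$); on $U_2$ it rotates $y$ counterclockwise to $x$ at constant speed. Both homotopies fix the first coordinate, hence are vertical in the sense above, so $\mathord{\mathrm{TC}}(S^1)\leq 2$; and $\mathord{\mathrm{TC}}(S^1)>1$ since $S^1$ is not contractible.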
Our result is analogous to the statement concerning LS category proven by Ganea in \cite{Ganea}. He gave an example to show that the LS category of the telescope is not necessarily equal to the LS categories of its parts. As we will see, this is also true for topological complexity. In \cite{Hardie}, Hardie improved Ganea's bound by 1 and Ganea's example shows that Hardie's bound is sharp.
\section{Topological complexity of the telescope}
We approach the problem indirectly by first estimating the topological complexity of an increasing union. The increasing union is much easier to handle, and we can explicitly construct a cover with the required properties. We then use the homotopy invariance of topological complexity to apply the result to mapping telescopes.
\begin{theorem}\label{maintheorem} Let $X=\bigcup_{i=1}^{\infty}X_i$ be the increasing union of closed subspaces with the property that for each $i$ there exists an open set $Y_i\subset X$ such that $X_i\subset Y_i\subset\mathord{\mathrm{cl}}(Y_i)\subset \mathord{\mathrm{int}}{(X_{i+1})}$. If $\mathord{\mathrm{TC}}(X_i)\leq n$ for all $i$, then $\mathord{\mathrm{TC}}(X)\leq 2n$. \end{theorem}
\begin{proof} Since $X_i\subset X_{i+1}$ for all $i$, we have $X_i\times X_i\subset X_{i+1}\times X_{i+1}$ for all $i$ and the product $X\times X=\bigcup_{i=1}^{\infty}X_i\times X_i$ is an increasing union of its subspaces. Let $\{U_j^{(i)}\}_{j=1}^n$ be an open cover of $X_i\times X_i$ with sets $U_j^{(i)}$ vertically compressible to the diagonal $\Delta(X_i)\subset\Delta(X)$. Define $L_i=\mathord{\mathrm{int}}(X_i\times X_i)-\mathord{\mathrm{cl}}(Y_{i-2}\times Y_{i-2})$ for $i>2$, $L_2=\mathord{\mathrm{int}}(X_2\times X_2)$, $L_1=\mathord{\mathrm{int}}(X_1\times X_1)$. Here, $\mathord{\mathrm{int}}(A)$ and $\mathord{\mathrm{cl}}(A)$ denote the interior and the closure of $A$ as a subset of $X\times X$. Let $V_j^{(i)}=U_j^{(i)}\cap L_i$ and consider the sets $$W_1 = \bigcup_{i=1}^\infty V_1^{(2i-1)}, W_2 = \bigcup_{i=1}^\infty V_1^{(2i)},\ldots, W_{2n-1} = \bigcup_{i=1}^\infty V_n^{(2i-1)}, W_{2n} = \bigcup_{i=1}^\infty V_n^{(2i)}.$$ Figure \ref{figure} illustrates the construction of the first three sets from $W_1$.
\begin{figure}
\caption{The shaded areas represent the sets $V_1^{(1)}$, $V_1^{(3)}$ and $V_1^{(5)}$. These sets are all part of $W_1$.}
\label{figure}
\end{figure}
We observe the following: \begin{itemize} \item Every $(x,y)\in X\times X$ belongs to $L_i$ for some $i$ and is therefore contained in $V_j^{(i)}$ for some $j$. So, $\{W_k\}_{k=1}^{2n}$ covers $X\times X$.
\item Each $V_j^{(i)}$ can be compressed to $\Delta(X_i)\subset\Delta(X)$ by the restriction of the vertical homotopy defined on $U_j^{(i)}$. For all positive integers $l$ and $m$ we have $L_{l}\cap L_{m}=\emptyset$ as long as $|l-m|\geq 2$, so $V_j^{(l)}\cap V_j^{(m)}=\emptyset$ for $|l-m|\geq 2$. The vertical homotopies we defined on $V_j^{(i)}$ can therefore be combined to define a (continuous) homotopy that vertically compresses $W_k$ to $\Delta(X)$. \item The sets $L_i$ are open in $X\times X$, so $V_j^{(i)}=U_j^{(i)}\cap L_i$ are open in $L_i$ and therefore in $X\times X$. Each $W_k$ is defined as a union of open sets, so all $W_k$ are open. \end{itemize} From this we infer that $\{W_k\}_{k=1}^{2n}$ is indeed an open cover of $X\times X$ with each $W_k$ vertically compressible to $\Delta(X)$. The conclusion now follows from Theorem \ref{IwaseDef}. \end{proof}
\begin{remark} The proof of Theorem \ref{maintheorem} can be reused with only minor alterations to notation to prove a slightly more general statement. For a fibrewise pointed space $p\colon E\rightarrow B$ with section $s$ denote by $\mathord{\mathrm{cat}}_B^*(E)$ the {\em fibrewise unpointed category} as in Definition 1.6 of \cite{IS}. Assume that $E=\bigcup_{i=1}^{\infty}E_i$ is an increasing union of closed subspaces with the property that $s(p(E_i))\subset E_i$ and that there exist open sets $Y_i\subset E$ such that $E_i\subset Y_i\subset\mathord{\mathrm{cl}}{(Y_i)}\subset\mathord{\mathrm{int}}{(E_{i+1})}$. Let $B_i=p(E_i)$ and denote by $p_i\colon E_i\rightarrow B_i$ the restriction of $p$ to $E_i$ with the section $s_i$ being the restriction of section $s$ to $B_i$. If $\mathord{\mathrm{cat}}_{B_i}^*(E_i)\leq n$, then $\mathord{\mathrm{cat}}_{B}^*(E)\leq 2n$. \end{remark}
We now represent a mapping telescope as an increasing union of subspaces and obtain the following result:
\begin{coro}\label{maincoro} Let $X=\bigcup_{i=1}^{\infty}X_i\times[i-1,i]$ be the mapping telescope of a sequence of maps $$X_1\stackrel{f_1}{\longrightarrow}X_2\stackrel{f_2}{\longrightarrow}X_3\stackrel{f_3}{\longrightarrow}\ldots$$ and let $\mathord{\mathrm{TC}}(X_i)\leq n$ for all $i$. Then $\mathord{\mathrm{TC}}(X)\leq 2n$. \end{coro}
\begin{proof} Define $X'_n=\bigcup_{i=1}^{n}X_i\times[i-1,i]$ to be the union of the first $n$ mapping cylinders in the telescope $X=\bigcup_{i=1}^{\infty}X_i\times[i-1,i]$. Then $X$ is the increasing union $X=\bigcup_{i=1}^{\infty}X'_i$ and we can take $$Y_i=\left(\bigcup_{j=1}^{i}X_j\times[j-1,j]\right)\cup X_{i+1}\times\left[i,i+1/2\right).$$ Since $X'_i$ is homotopy equivalent to $X_i$ for each $i$, we have $\mathord{\mathrm{TC}}(X'_i)=\mathord{\mathrm{TC}}(X_i)\leq n$ for all $i$. The conclusion now follows from Theorem~\ref{maintheorem}. \end{proof}
Finally, here is an equivalent formulation of Corollary \ref{maincoro}:
\begin{coro} Let $X=\bigcup_{i=1}^{\infty}X_i\times[i-1,i]$ be the mapping telescope of a sequence of maps $$X_1\stackrel{f_1}{\longrightarrow}X_2\stackrel{f_2}{\longrightarrow}X_3\stackrel{f_3}{\longrightarrow}\ldots.$$ Then $\mathord{\mathrm{TC}}(X)\leq 2\max\{\mathord{\mathrm{TC}}(X_i);\;i=1,2,\ldots\}$. \end{coro}
\begin{proof} If $\mathord{\mathrm{TC}}(X_i)$ are not bounded above, then $\max\{\mathord{\mathrm{TC}}(X_i);\;i=1,2,\ldots\}=\infty$ and the statement is trivially true. If $\max\{\mathord{\mathrm{TC}}(X_i);\;i=1,2,\ldots\}=M<\infty$, then $\mathord{\mathrm{TC}}(X_i)\leq M$ for all $i$ and Corollary~\ref{maincoro} implies that $\mathord{\mathrm{TC}}(X)\leq 2M$. \end{proof}
\begin{example} The mapping telescope of the sequence $$S^1\stackrel{\cdot 2}{\longrightarrow}S^1\stackrel{\cdot 2}{\longrightarrow}S^1\stackrel{\cdot 2}{\longrightarrow}\ldots$$ is $X=K(\mathbb{Z}[\frac{1}{2}],1)$. We have $\mathord{\mathrm{TC}}(S^1)=2$ and Corollary \ref{maincoro} implies that $\mathord{\mathrm{TC}}(X)\leq 4$. The cohomology of $X$ is nontrivial only in dimension 2, and there we have $H^2(X;\mathbb{Z})=\hat{\mathbb{Z}}_2/\mathbb{Z}$, where $\hat{\mathbb{Z}}_2$ denotes the group of $2$-adic integers (detailed calculations can be found in \cite{Hatcher}, Section 3F, in particular Example 3F.9). Elements of finite order in $\hat{\mathbb{Z}}_2/\mathbb{Z}$ are represented by rational numbers. Since $\hat{\mathbb{Z}}_2/\mathbb{Z}$ is uncountable, there exists an element $u\in H^2(X;\mathbb{Z})$ of infinite order and we obtain a non-trivial product of length $2$: $$(1\otimes u-u\otimes 1)^2=-2u\otimes u\in H^2(X;\mathbb{Z})\otimes H^2(X;\mathbb{Z}),$$ where we have used that $u^2=0$, since the cohomology of $X$ vanishes in dimension 4. Combining Theorem 7 of \cite{Farber:TC} and Theorem 4 of \cite{Schwarz} we get a lower bound in terms of zero-divisors: $\mathord{\mathrm{TC}}(X)\geq 3$. So, $3\leq\mathord{\mathrm{TC}}(X)\leq 4$.
Notice that in this example our upper bound is better than the standard upper bounds in terms of dimension and LS category (see \cite{Farber:TC}, Theorem 4 and Theorem 5), although it is not low enough to determine $\mathord{\mathrm{TC}}(X)$. \end{example} This example shows that the topological complexity of the telescope $X$ can be greater than the topological complexity of its parts $X_i$. It remains open whether our bound can be improved by 1.
\end{document}
\begin{document}
\title{Symmetries of second order differential equations on Lie algebroids} \author{Liviu Popescu} \maketitle
\begin{abstract} In this paper we investigate the relations between semispray, nonlinear connection, dynamical covariant derivative and Jacobi endomorphism on Lie algebroids. Using these geometric structures, we study the symmetries of second order differential equations in the general framework of Lie algebroids. \end{abstract}
MSC2010: 17B66, 34A26, 53C05, 70S10
Keywords: Lie algebroids, symmetries, semispray, nonlinear connection, dynamical covariant derivative, Jacobi endomorphism.
\section{\textbf{Introduction}}
The geometry of second order differential equations (SODE) on the tangent bundle $TM$ of a differentiable manifold $M$ is closely related to the geometry of nonlinear connections \cite{Cr1, Gr1}. A system of SODE can be represented using the notion of semispray, which, together with a nonlinear connection, induces two important concepts: the dynamical covariant derivative and the Jacobi endomorphism \cite{Bu2, Bu3, Cr2, Gr2, Ma, Sa}. The notion of dynamical covariant derivative was first introduced in the case of the tangent bundle by J. Cari\~nena and E. Mart\'\i nez \cite{Ca0} as a derivation of degree 0 along the tangent bundle projection. The notion of symmetry in field theory has been intensively studied in various geometric frameworks (see for instance \cite{Ab, Bua, Bu3, Ga, Le0, Le1, Pr1, Pr2, Ro}). The notion of Lie algebroid is a natural generalization of both the tangent bundle and the Lie algebra. In the last decades Lie algebroids \cite{Mk1, Mk2} have been the object of intensive study, with applications to mechanical systems and optimal control \cite{Ar2, Co, Fe, Le, Li, Ma2, Ma4, Me, Po1, Po2, Po2', Po3, Po4, We}, and they provide a natural framework in which to develop the theory of differential equations, where the notion of symmetry plays a very important role.
In this paper we study some properties of semisprays, generalize the notion of symmetry for second order differential equations to Lie algebroids, and characterize its properties using the dynamical covariant derivative and the Jacobi endomorphism. The paper is organized as follows. In Section 2 the preliminary geometric structures on Lie algebroids are introduced and some relations between them are given. We present the Jacobi endomorphism on Lie algebroids and find its relation with the curvature tensor of the Ehresmann nonlinear connection. In Section 3 we study the dynamical covariant derivative on Lie algebroids. Using a semispray and an arbitrary nonlinear connection, we introduce the dynamical covariant derivative on Lie algebroids as a tensor derivation and prove that the compatibility condition with the tangent structure fixes the canonical nonlinear connection. In the case of the canonical nonlinear connection induced by a semispray, further properties of the dynamical covariant derivative are obtained. In the case of homogeneous second order differential equations (sprays) the relation between the dynamical covariant derivative and the Berwald linear connection is given. In the last section we study dynamical symmetries, Lie symmetries, Newtonoid sections and Cartan symmetries on Lie algebroids and find the relations between them. These structures were first studied on the tangent bundle by G. Prince in \cite{Pr1, Pr2}. We also prove that an exact Cartan symmetry induces a conservation law and conversely, which extends the work developed in \cite{Mar}. Moreover, we find the invariant equations of dynamical symmetries, Lie symmetries and Newtonoid sections in terms of the dynamical covariant derivative and Jacobi endomorphism, which generalize some results from \cite{Bu3, Pr1, Pr2}. 
We mention that Noether-type theorems for Lagrangian systems on Lie algebroids can be found in \cite{Ca, Ma2}, and Jacobi sections for second order differential equations on Lie algebroids are studied in \cite{Ca1}. Finally, using an example from optimal control theory (driftless control affine systems), we show that the framework of Lie algebroids is more useful than the tangent bundle for finding the symmetries of the dynamics induced by a Lagrangian function. Also, using the $k$-symplectic formalism on Lie algebroids developed in \cite{Le2}, one can study symmetries in this new framework, generalizing the results from \cite{Bua}.
\section{\textbf{Lie algebroids}}
Let $M$ be a real, $C^\infty $-differentiable, $n$-dimensional manifold and $(TM,\pi _M,M)$ its tangent bundle. A Lie algebroid over a manifold $M$ is a triple $(E,[\cdot ,\cdot ]_E,\sigma )$, where $(E,\pi ,M)$ is a vector bundle of rank $m$ over $M$ which satisfies the following conditions: \\a) the $C^\infty (M)$-module of sections $\Gamma (E)$ is equipped with a Lie algebra structure $[\cdot ,\cdot ]_E$; \\b) $\sigma :E\rightarrow TM$ is a bundle map (called the anchor) which induces a Lie algebra homomorphism (also denoted $\sigma $) from the Lie algebra of sections $(\Gamma (E),[\cdot ,\cdot ]_E)$ to the Lie algebra of vector fields $(\mathcal{\chi }(M),[\cdot ,\cdot ])$ satisfying the Leibniz rule \begin{equation} \lbrack s_1,fs_2]_E=f[s_1,s_2]_E+(\sigma (s_1)f)s_2,\ \forall s_1,s_2\in \Gamma (E),\ f\in C^\infty (M). \end{equation}
From the above definition it follows that: \\$1^{\circ }$ $[\cdot ,\cdot ]_E$ is an $\Bbb{R}$-bilinear operation, \\$2^{\circ }$ $[\cdot ,\cdot ]_E$ is skew-symmetric, i.e. $[s_1,s_2]_E=-[s_2,s_1]_E,\quad \forall s_1,s_2\in \Gamma (E),$\\$3^{\circ }$ $[\cdot ,\cdot ]_E$ verifies the Jacobi identity \begin{equation*} \lbrack s_1,[s_2,s_3]_E]_E+[s_2,[s_3,s_1]_E]_E+[s_3,[s_1,s_2]_E]_E=0, \end{equation*} and the fact that $\sigma $ is a Lie algebra homomorphism means that $\sigma [s_1,s_2]_E=[\sigma (s_1),\sigma (s_2)].$
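Two standard examples, included here only for orientation:

```latex
\begin{itemize}
\item $E=TM$ with $\sigma =\mathrm{id}_{TM}$ and $[\cdot ,\cdot ]_E$ the usual
Lie bracket of vector fields; the Leibniz rule reduces to the classical
identity $[X,fY]=f[X,Y]+(Xf)Y$.
\item A Lie algebra $\mathfrak{g}$, regarded as a vector bundle over a
one-point base, with $\sigma =0$; here the Leibniz rule is vacuous and
$[\cdot ,\cdot ]_E$ is the Lie algebra bracket.
\end{itemize}
```

These two cases motivate viewing Lie algebroids as a common generalization of tangent bundles and Lie algebras.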
The existence of a Lie bracket on the space of sections of a Lie algebroid leads to a calculus on its sections analogous to the usual Cartan calculus on differential forms. If $f$ is a function on $M$, then $d^Ef(x)\in E_x^{*}$ is given by $\left\langle d^Ef(x),a\right\rangle =\sigma (a)f$, for all $a\in E_x$. For $\omega \in \bigwedge^k(E^{*})$ the \textit{exterior derivative} $d^E\omega \in \bigwedge^{k+1}(E^{*})$ is given by the formula \begin{eqnarray*} d^E\omega (s_1,...,s_{k+1}) &=&\overset{k+1}{\sum_{i=1}}(-1)^{i+1}\sigma (s_i)\omega (s_1,...,\overset{\symbol{94}}{s}_i,...,s_{k+1})+ \\ &&\ \ \ \ \ \ \ +\sum_{1\leq i<j\leq k+1}(-1)^{i+j}\omega ([s_{i,}s_j]_E,s_1,...,\overset{\symbol{94}}{s_i},...,\overset{\symbol{94}}{ s_j},...s_{k+1}), \end{eqnarray*} where $s_i\in \Gamma (E)$, $i=\overline{1,k+1}$, and the hat over an argument means the absence of that argument. It follows that \begin{equation*} (d^E)^2=0,\quad d^E(\omega _1\wedge \omega _2)=d^E\omega _1\wedge \omega _2+(-1)^{\deg \omega _1}\omega _1\wedge d^E\omega _2. \end{equation*} The cohomology associated with $d^E$ is called the \textit{Lie algebroid cohomology} of $E$. Also, for $\xi \in \Gamma (E)$ one can define the \textit{Lie derivative} with respect to $\xi$, given by $\mathcal{L}_\xi =i_\xi \circ d^E+d^E\circ i_\xi $, where $i_\xi $ is the contraction with $\xi $. We recall that if $L$ and $K$ are $(1,1)$-type tensor fields, the Fr\"olicher-Nijenhuis bracket $[L,K]$ is the vector-valued 2-form \cite{Fr} \begin{eqnarray*} \lbrack L,K]_E(X,Y) &=&[LX,KY]_E+[KX,LY]_E+(LK+KL)[X,Y]_E- \\ &&\ \ \ \ \ \ -L[X,KY]_E-K[X,LY]_E-L[KX,Y]_E-K[LX,Y]_E, \end{eqnarray*} and the Nijenhuis tensor of $L$ is given by \begin{equation*} \mathbf{N}_L(X,Y)=\frac 12[L,L]_E(X,Y)=[LX,LY]_E+L^2[X,Y]_E-L[X,LY]_E-L[LX,Y]_E. 
\end{equation*} For a vector field $X\in\mathcal{X}(E)$ and a $(1,1)$-type tensor field $L$ on $E$, the Fr\"olicher-Nijenhuis bracket $[X,L]_E=\mathcal{L}_XL$ is the $(1,1)$-type tensor field on $E$ given by \begin{equation*} \mathcal{L}_XL=\mathcal{L}_X\circ L-L\circ \mathcal{L}_X, \end{equation*} where $\mathcal{L}_X$ is the usual Lie derivative. If we take local coordinates $(x^i)$ on an open set $U\subset M$, a local basis $\{s_\alpha \}$ of the sections of the bundle $\pi ^{-1}(U)\rightarrow U$ generates local coordinates $(x^i,y^\alpha )$ on $E$. The local functions $\sigma _\alpha ^i(x)$, $L_{\alpha \beta }^\gamma (x)$ on $M$ given by \begin{equation} \sigma (s_\alpha )=\sigma _\alpha ^i\frac \partial {\partial x^i},\quad [s_\alpha ,s_\beta ]_E=L_{\alpha \beta }^\gamma s_\gamma ,\quad i=\overline{1,n},\quad \alpha ,\beta ,\gamma =\overline{1,m}, \end{equation} are called the \textit{structure functions of the Lie algebroid}, and satisfy the \textit{structure equations} on Lie algebroids \begin{equation*} \sum_{(\alpha ,\beta ,\gamma )}\left( \sigma _\alpha ^i\frac{\partial L_{\beta \gamma }^\delta }{\partial x^i}+L_{\alpha \eta }^\delta L_{\beta \gamma }^\eta \right) =0,\quad \sigma _\alpha ^j\frac{\partial \sigma _\beta ^i}{\partial x^j}-\sigma _\beta ^j\frac{\partial \sigma _\alpha ^i}{\partial x^j}=\sigma _\gamma ^iL_{\alpha \beta }^\gamma . \end{equation*} Locally, if $f\in C^\infty (M)$ then $d^Ef=\frac{\partial f}{\partial x^i}\sigma _\alpha ^is^\alpha ,$ where $\{s^\alpha \}$ is the dual basis of $\{s_\alpha \}$, and if $\theta \in \Gamma (E^{*}),$ $\theta =\theta _\alpha s^\alpha ,$ then \begin{equation*} d^E\theta =\left( \sigma _\alpha ^i\frac{\partial \theta _\beta }{\partial x^i}-\frac 12\theta _\gamma L_{\alpha \beta }^\gamma \right) s^\alpha \wedge s^\beta. \end{equation*} In particular, we get $d^Ex^i=\sigma _\alpha ^is^\alpha $ and $d^Es^\alpha =-\frac 12L_{\beta \gamma }^\alpha s^\beta \wedge s^\gamma .$
\subsection{\textbf{The prolongation of a Lie algebroid over the vector bundle projection}}
Let $(E,\pi ,M)$ be a vector bundle. For the projection $\pi :E\rightarrow M$ we can construct the prolongation of $E$ (see \cite{Hi, Le, Ma2}). The associated vector bundle is $(\mathcal{T}E,\pi _2,E)$ where \begin{equation*} \mathcal{T}E=\underset{_{w\in E}}{\cup }\mathcal{T}_wE,\quad \mathcal{T}_wE=\{(u_x,v_w)\in E_x\times T_wE\mid \sigma (u_x)=T_w\pi (v_w),\quad \pi (w)=x\in M\}, \end{equation*} and the projection $\pi _2(u_x,v_w)=\pi _E(v_w)=w$, where $\pi _E:TE\rightarrow E$ is the tangent projection. We also have the canonical projection $\pi _1:\mathcal{T}E\rightarrow E$ given by $\pi _1(u,v)=u$. The projection onto the second factor $\sigma ^1:\mathcal{T}E\rightarrow TE$, $\sigma ^1(u,v)=v$, will be the anchor of a new Lie algebroid over the manifold $E$. An element of $\mathcal{T}E$ is said to be vertical if it is in the kernel of the projection $\pi _1$. We will denote by $(V\mathcal{T}E,\pi _{2\mid _{V\mathcal{T}E}},E)$ the vertical bundle of
$(\mathcal{T}E,\pi _2,E) $ and $\sigma ^1\left| _{V\mathcal{T}E}\right. :V\mathcal{T}E\rightarrow VTE$ is an isomorphism. If $f\in C^\infty (M)$, we will denote by $f^c$ and $f^v$ the \textit{complete and vertical lifts} to $E$ of $f$, defined by \begin{equation*} f^c(u)=\sigma (u)(f),\quad f^v(u)=f(\pi (u)),\quad u\in E. \end{equation*} For $s\in \Gamma (E)$ we can consider the \textit{vertical lift} of $s$ given by $s^v(u)=s(\pi (u))_u^v,$ for $u\in E,$ where $_u^v:E_{\pi (u)}\rightarrow T_u(E_{\pi (u)})$ is the canonical isomorphism. There exists a unique vector field $s^c$ on $E$, the \textit{complete lift} of $s$, satisfying the following conditions:
i) $s^c$ is $\pi $-projectable on $\sigma (s),$
ii) $s^c(\overset{\wedge }{\alpha })=\widehat{\mathcal{L}_s\alpha },$\\for all $\alpha \in \Gamma (E^{*}),$ where $\overset{\wedge }{\alpha }(u)=\alpha (\pi (u))(u)$, $u\in E$ (see \cite{Gu1, Gu2}). \\Considering the prolongation $\mathcal{T}E$ of $E$ \cite{Ma2}, we may introduce the \textit{vertical lift} $s^{\mathrm{v}}$ and the \textit{complete lift} $s^{\mathrm{c}}$ of a section $s\in \Gamma (E)$ as the sections of $\mathcal{T}E\rightarrow E$ given by \begin{equation*} s^{\mathrm{v}}(u)=(0,s^v(u)),\quad s^{\mathrm{c}}(u)=(s(\pi (u)),s^c(u)),\quad u\in E. \end{equation*} Two other canonical objects on $\mathcal{T}E$ are the \textit{Euler section} $\Bbb{C}$ and the \textit{tangent structure} (\textit{vertical endomorphism}) $J$. The Euler section $\Bbb{C}$ is the section of $\mathcal{T}E\rightarrow E$ defined by $\Bbb{C}(u)=(0,u_u^v),\ \forall u\in E.$ The vertical endomorphism is the section of $(\mathcal{T}E)\oplus (\mathcal{T}E)^{*}\rightarrow E$ characterized by $J(s^{\mathrm{v}})=0,$ $J(s^{\mathrm{c}})=s^{\mathrm{v}}$, $s\in \Gamma (E)$, which satisfies \begin{equation*} J^2=0,\ ImJ=\ker J=V\mathcal{T}E,\quad \ [\Bbb{C},J]_{\mathcal{T}E}=-J. \end{equation*} A section $\mathcal{S}$ of $\mathcal{T}E\rightarrow E$ is called a \textit{semispray} (\textit{second order differential equation, SODE}) on $E$ if $J(\mathcal{S})=\Bbb{C}$. The local basis of $\Gamma (\mathcal{T}E)$ is given by $\{\mathcal{X}_\alpha ,\mathcal{V}_\alpha \}$, where \begin{equation}
\mathcal{X}_\alpha (u)=\left( s_\alpha (\pi (u)),\left. \sigma _\alpha ^i\frac \partial {\partial x^i}\right| _u\right) ,\quad
\mathcal{V}_\alpha (u)=\left( 0,\left. \frac \partial {\partial y^\alpha }\right| _u\right), \end{equation} and $(\partial /\partial x^i,\partial /\partial y^\alpha )$ is the local basis on $TE$ (see \cite{Ma2}). The structure functions of $\mathcal{T}E$ are given by the following formulas \begin{equation} \sigma ^1(\mathcal{X}_\alpha )=\sigma _\alpha ^i\frac \partial {\partial x^i},\quad \sigma ^1(\mathcal{V}_\alpha )=\frac \partial {\partial y^\alpha }, \end{equation} \begin{equation} \lbrack \mathcal{X}_\alpha ,\mathcal{X}_\beta ]_{\mathcal{T}E}=L_{\alpha \beta }^\gamma \mathcal{X}_\gamma ,\quad [\mathcal{X}_\alpha ,\mathcal{V} _\beta ]_{\mathcal{T}E}=0,\quad [\mathcal{V}_\alpha ,\mathcal{V}_\beta ]_{ \mathcal{T}E}=0. \end{equation} The vertical lift of a section $\rho =\rho ^\alpha s_\alpha $ is $\rho ^{ \mathrm{v}}=\rho ^\alpha \mathcal{V}_\alpha $.$\,$ The coordinate expression of Euler section is $\Bbb{C}=y^\alpha \mathcal{V}_\alpha $ and the local expression of $J$ is given by $J=\mathcal{X}^\alpha \otimes \mathcal{V} _\alpha ,$ where $\{\mathcal{X}^\alpha ,\mathcal{V}^\alpha \}$ denotes the corresponding dual basis of $\{\mathcal{X}_\alpha ,\mathcal{V}_\alpha \}$. The Nijenhuis tensor of the vertical endomorphism vanishes and it results that $J$ is integrable. The expression of the complete lift of a section $ \rho =\rho ^\alpha s_\alpha $ is \begin{equation} \rho ^{\mathrm{c}}=\rho ^\alpha \mathcal{X}_\alpha +(\sigma _\varepsilon ^i \frac{\partial \rho ^\alpha }{\partial x^i}-L_{\beta \varepsilon }^\alpha \rho ^\beta )y^\varepsilon \mathcal{V}_\alpha . 
\end{equation} In particular $s_\alpha ^{\mathrm{v}}=\mathcal{V}_\alpha $, $s_\alpha ^{\mathrm{c}}=\mathcal{X}_\alpha -L_{\alpha \varepsilon }^\beta y^\varepsilon \mathcal{V}_\beta .$ The local expression of the differential of a function $L$ on $\mathcal{T}E$ is $d^EL=\sigma _\alpha ^i\frac{\partial L}{\partial x^i}\mathcal{X}^\alpha +\frac{\partial L}{\partial y^\alpha }\mathcal{V}^\alpha $ and we have $d^Ex^i=\sigma _\alpha ^i\mathcal{X}^\alpha $, $\ d^Ey^\alpha =\mathcal{V}^\alpha$. The differential of sections of $(\mathcal{T}E)^{*}$ is determined by \begin{equation*} d^E\mathcal{X}^\alpha =-\frac 12L_{\beta \gamma }^\alpha \mathcal{X}^\beta \wedge \mathcal{X}^\gamma ,\quad d^E\mathcal{V}^\alpha =0. \end{equation*} In local coordinates a semispray has the expression \begin{equation} \mathcal{S}(x,y)=y^\alpha \mathcal{X}_\alpha +\mathcal{S}^\alpha (x,y)\mathcal{V}_\alpha , \end{equation} and the following equality holds \begin{equation} J[\mathcal{S},JX]_{\mathcal{T}E}=-JX,\ X\in \Gamma (\mathcal{T}E). \end{equation} The integral curves of $\sigma ^1(\mathcal{S})$ satisfy the differential equations \begin{equation*} \frac{dx^i}{dt}=\sigma _\alpha ^i(x)y^\alpha ,\quad \frac{dy^\alpha }{dt}=\mathcal{S}^\alpha (x,y). \end{equation*} If the relation $[\Bbb{C},\mathcal{S}]_{\mathcal{T}E}=\mathcal{S}$ holds, then $\mathcal{S}$ is called a spray and the functions $\mathcal{S}^\alpha $ are homogeneous functions of degree $2$ in $y^\alpha .$ Let us consider a regular Lagrangian $L$ on $E$, that is, the matrix \begin{equation*} g_{\alpha \beta }=\frac{\partial ^2L}{\partial y^\alpha \partial y^\beta}, \end{equation*} has constant rank $m$. 
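For orientation (our specialization, not part of the original text): taking $E=TM$ with the coordinate basis $s_\alpha =\partial /\partial x^\alpha$, so that $\sigma _\alpha ^i=\delta _\alpha ^i$ and $L_{\alpha \beta }^\gamma =0$, the integral-curve equations above reduce to the classical second order differential equation on the tangent bundle:

```latex
\begin{equation*}
\frac{dx^i}{dt}=y^i,\qquad \frac{dy^i}{dt}=\mathcal{S}^i(x,y),
\qquad\text{i.e.}\qquad \ddot{x}^i=\mathcal{S}^i(x,\dot{x}).
\end{equation*}
```

This is the sense in which a semispray on a Lie algebroid generalizes a SODE on $TM$.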
We have the Cartan 1-section $\theta _L=\frac{\partial L}{\partial y^\alpha }\mathcal{X}^\alpha$ and the Cartan 2-section $\omega _L=d^E\theta _L$, which is the symplectic structure induced by $L$, given by \cite{Ma2} \begin{equation*} \omega _L=g_{\alpha \beta }\mathcal{V}^\beta \wedge \mathcal{X}^\alpha +\frac 12\left( \sigma _\alpha ^i\frac{\partial ^2L}{\partial x^i\partial y^\beta }-\sigma _\beta ^i\frac{\partial ^2L}{\partial x^i\partial y^\alpha } -\frac{\partial L}{\partial y^\varepsilon }L_{\alpha \beta }^\varepsilon \right) \mathcal{X}^\alpha \wedge \mathcal{X}^\beta . \end{equation*} Considering the energy function $E_L=\Bbb{C}(L)-L$, with local expression \begin{equation*} E_L=y^\alpha \frac{\partial L}{\partial y^\alpha }-L, \end{equation*} the symplectic equation \begin{equation*} i_S\omega _L=-d^EE_L, \end{equation*} determines the components of the canonical semispray \cite{Ma2} \begin{equation} S^\varepsilon =g^{\varepsilon \beta }\left( \sigma _\beta ^i\frac{\partial L}{\partial x^i}-\sigma _\alpha ^i\frac{\partial ^2L}{\partial x^i\partial y^\beta }y^\alpha -L_{\beta \alpha }^\gamma y^\alpha \frac{\partial L}{\partial y^\gamma }\right) , \end{equation} where $g_{\alpha \beta }g^{\beta \gamma }=\delta _\alpha ^\gamma $; these components depend only on the regular Lagrangian and the structure functions of the Lie algebroid.
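As a consistency check (our specialization, assuming $E=TM$ with $\sigma _\beta ^i=\delta _\beta ^i$ and $L_{\beta \alpha }^\gamma =0$), the components of the canonical semispray become

```latex
\begin{equation*}
S^{k}=g^{kj}\left( \frac{\partial L}{\partial x^{j}}
-\frac{\partial ^{2}L}{\partial x^{i}\partial y^{j}}\,y^{i}\right),
\end{equation*}
```

and a direct computation shows that the integral curves of the associated SODE are exactly the solutions of the Euler--Lagrange equations $\frac{d}{dt}\frac{\partial L}{\partial \dot{x}^{k}}-\frac{\partial L}{\partial x^{k}}=0$.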
\subsection{\textbf{Nonlinear connections on Lie algebroids}}
A nonlinear connection is an important tool in the geometry of systems of second order differential equations. The system of SODE can be represented using the notion of semispray, which, together with a nonlinear connection, induces two important concepts (the dynamical covariant derivative and the Jacobi endomorphism) that are used to find the invariant equations of the symmetries of SODE. \begin{definition} A nonlinear connection on $\mathcal{T}E$ is an almost product structure $\mathcal{N}$ on $\pi _2:\mathcal{T}E\rightarrow E$ (i.e. a bundle morphism $\mathcal{N}:\mathcal{T}E\rightarrow \mathcal{T}E$, such that $\mathcal{N}^2=Id$) smooth on $\mathcal{T}E\backslash \{0\}$ such that $V\mathcal{T}E=\ker (Id+\mathcal{N}).$ \end{definition} If $\mathcal{N}$ is a connection on $\mathcal{T}E$ then $H\mathcal{T}E=\ker (Id-\mathcal{N})$ is the horizontal subbundle associated to $\mathcal{N}$ and $\mathcal{T}E=V\mathcal{T}E\oplus H\mathcal{T}E.$ Each $\rho \in \Gamma (\mathcal{T}E)$ can be written as $\rho =\rho ^{\mathrm{h}}+\rho ^{\mathrm{v}},$ where $\rho ^{\mathrm{h}}$, $\rho ^{\mathrm{v}}$ are sections of the horizontal and vertical subbundles, respectively. If $\rho ^{\mathrm{h}}=0,$ then $\rho $ is called \textit{vertical}, and if $\rho ^{\mathrm{v}}=0,$ then $\rho $ is called \textit{horizontal}. A connection $\mathcal{N}$ on $\mathcal{T}E$ induces two projectors $\mathrm{h},\mathrm{v}:\mathcal{T}E\rightarrow \mathcal{T}E$ such that $\mathrm{h}(\rho )=\rho ^{\mathrm{h}}$ and $\mathrm{v}(\rho )=\rho ^{\mathrm{v}}$ for every $\rho \in \Gamma (\mathcal{T}E)$. We have \begin{equation*} \mathrm{h}=\frac 12(Id+\mathcal{N}),\quad \mathrm{v}=\frac 12(Id-\mathcal{N}),\quad \ker \mathrm{h}=Im\mathrm{v}=V\mathcal{T}E,\quad Im\mathrm{h}=\ker \mathrm{v}=H\mathcal{T}E, \end{equation*} \begin{equation*} \mathrm{h}^2=\mathrm{h},\quad \mathrm{v}^2=\mathrm{v},\quad \mathrm{hv}=\mathrm{vh}=0,\quad \mathrm{h}+\mathrm{v}=Id,\quad \mathrm{h}-\mathrm{v}=\mathcal{N}. 
\end{equation*} \begin{equation*} J\mathrm{h}=J,\quad \mathrm{h}J=0,\quad J\mathrm{v}=0,\quad \mathrm{v}J=J. \end{equation*} Locally, a connection can be expressed as $\mathcal{N}(\mathcal{X}_\alpha )=\mathcal{X}_\alpha -2\mathcal{N}_\alpha ^\beta \mathcal{V}_\beta $, $\mathcal{N}(\mathcal{V}_\beta )=-\mathcal{V}_\beta ,$ where $\mathcal{N}_\alpha ^\beta =\mathcal{N}_\alpha ^\beta (x,y)$ are the local coefficients of $\mathcal{N}$. The sections \begin{equation*} \delta _\alpha =\mathrm{h}(\mathcal{X}_\alpha )=\mathcal{X}_\alpha -\mathcal{N}_\alpha ^\beta \mathcal{V}_\beta , \end{equation*} generate a basis of $H\mathcal{T}E$. The frame $\{\delta _\alpha ,\mathcal{V}_\alpha \}$ is a local basis of $\mathcal{T}E$ called the Berwald basis. The dual adapted basis is $\{\mathcal{X}^\alpha ,\delta \mathcal{V}^\alpha \}$, where $\delta \mathcal{V}^\alpha =\mathcal{V}^\alpha -\mathcal{N}_\beta ^\alpha \mathcal{X}^\beta .$ The Lie brackets of the adapted basis $\{\delta _\alpha ,\mathcal{V}_\alpha \}$ are \cite{Po2} \begin{equation} \lbrack \delta _\alpha ,\delta _\beta ]_{\mathcal{T}E}=L_{\alpha \beta }^\gamma \delta _\gamma +\mathcal{R}_{\alpha \beta }^\gamma \mathcal{V}_\gamma ,\quad [\delta _\alpha ,\mathcal{V}_\beta ]_{\mathcal{T}E}=\frac{\partial \mathcal{N}_\alpha ^\gamma }{\partial y^\beta }\mathcal{V}_\gamma ,\quad [\mathcal{V}_\alpha ,\mathcal{V}_\beta ]_{\mathcal{T}E}=0, \end{equation} \begin{equation} \mathcal{R}_{\alpha \beta }^\gamma =\delta _\beta (\mathcal{N}_\alpha ^\gamma )-\delta _\alpha (\mathcal{N}_\beta ^\gamma )+L_{\alpha \beta }^\varepsilon \mathcal{N}_\varepsilon ^\gamma . \end{equation}
\begin{definition} The curvature of the nonlinear connection $\mathcal{N}$ on $\mathcal{T}E$ is $\Omega =-\mathbf{N}_{\mathrm{h}}$, where $\mathrm{h}$ is the horizontal projector and $\mathbf{N}_{\mathrm{h}}$ is the Nijenhuis tensor of $\mathrm{h}$. \end{definition} In local coordinates we have \begin{equation*} \Omega =-\frac 12\mathcal{R}_{\alpha \beta }^\gamma \mathcal{X}^\alpha \wedge \mathcal{X}^\beta \otimes \mathcal{V}_\gamma , \end{equation*} where $\mathcal{R}_{\alpha \beta }^\gamma $ are given by (11) and represent the local coordinate functions of the curvature tensor. The curvature of the nonlinear connection is an obstruction to the integrability of $H\mathcal{T}E$, in the sense that a vanishing curvature entails that horizontal sections are closed under the Lie algebroid bracket of $\mathcal{T}E$: the horizontal distribution $H\mathcal{T}E$ is integrable if and only if the curvature $\Omega $ of the nonlinear connection vanishes. Also, from the Jacobi identity we obtain \begin{equation*} \lbrack \mathrm{h},\Omega ]_{\mathcal{T}E}=0. \end{equation*} Let us consider a semispray $\mathcal{S}$ and an arbitrary nonlinear connection $\mathcal{N}$ with induced $(\mathrm{h},\mathrm{v})$ projectors. Then we give the following definition (see also \cite{Po3}).
\begin{definition} The vertically valued $(1,1)$-type tensor field on the Lie algebroid $\mathcal{T}E$ given by \begin{equation} \Phi =-\mathrm{v}\circ \mathcal{L}_{\mathcal{S}}\mathrm{v}, \end{equation} will be called the Jacobi endomorphism. \end{definition} The Jacobi endomorphism $\Phi$ has been used in the study of Jacobi equations for SODE on Lie algebroids in \cite{Ca1} and to express one of the Helmholtz conditions of the inverse problem of the calculus of variations on Lie algebroids \cite{Po3} (see also \cite{Bar}).
We obtain \begin{equation*} \Phi =-\mathrm{v}\circ \mathcal{L}_{\mathcal{S}}\mathrm{v}=\mathrm{v}\circ \mathcal{L}_{\mathcal{S}}\mathrm{h}=\mathrm{v}\circ (\mathcal{L}_{\mathcal{S}}\circ \mathrm{h}-\mathrm{h}\circ \mathcal{L}_{\mathcal{S}})=\mathrm{v}\circ \mathcal{L}_{\mathcal{S}}\circ \mathrm{h}, \end{equation*} and in local coordinates the action of the Lie derivative on the Berwald basis is given by \begin{equation} \mathcal{L}_{\mathcal{S}}\delta _\beta =\left( \mathcal{N}_\beta ^\alpha -L_{\beta \varepsilon }^\alpha y^\varepsilon \right) \delta _\alpha +\mathcal{R}_\beta ^\gamma \mathcal{V}_\gamma ,\quad \mathcal{L}_{\mathcal{S}}\mathcal{V}_\beta =-\delta _\beta -\left( \mathcal{N}_\beta ^\alpha +\frac{\partial \mathcal{S}^\alpha }{\partial y^\beta }\right) \mathcal{V}_\alpha. \end{equation} The Jacobi endomorphism has the local form \begin{equation} \Phi =\mathcal{R}_\beta ^\alpha \mathcal{V}_\alpha \otimes \mathcal{X}^\beta ,\quad \mathcal{R}_\beta ^\gamma =-\sigma _\beta ^i\frac{\partial \mathcal{S}^\gamma }{\partial x^i}-\mathcal{S}(\mathcal{N}_\beta ^\gamma )+\mathcal{N}_\beta ^\alpha \mathcal{N}_\alpha ^\gamma +\mathcal{N}_\beta ^\alpha \frac{\partial \mathcal{S}^\gamma }{\partial y^\alpha }+\mathcal{N}_\varepsilon ^\gamma L_{\alpha \beta }^\varepsilon y^\alpha . \end{equation}
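The local formula for $\mathcal{R}_\beta^\gamma$ can be checked in the simplest setting: a one-dimensional tangent bundle, where $\sigma=1$ and $L=0$. A minimal sympy sketch for the damped-oscillator semispray $\mathcal{S}^1=-\omega^2 x-\mu y$ with its canonical connection (an illustrative choice, not an example from the paper):

```python
import sympy as sp

# One-dimensional tangent-bundle check of the local formula (14) for the Jacobi
# endomorphism, with sigma = 1 and L = 0.  The damped-oscillator semispray
# S^1 = -w**2*x - mu*y is an illustrative (hypothetical) choice.
x, y, w, mu = sp.symbols('x y w mu')
S1 = -w**2*x - mu*y                      # semispray coefficient S^1(x, y)
N = -sp.Rational(1, 2)*sp.diff(S1, y)    # canonical connection N^1_1 = mu/2

def S_of(f):
    """Action of the semispray S = y d/dx + S^1 d/dy on a function f(x, y)."""
    return y*sp.diff(f, x) + S1*sp.diff(f, y)

# Formula (14) with all indices equal to 1 and L = 0:
#   R = -dS/dx - S(N) + N*N + N*dS/dy
R = sp.simplify(-sp.diff(S1, x) - S_of(N) + N*N + N*sp.diff(S1, y))
```

The result is $\mathcal{R}=\omega^2-\mu^2/4$, the familiar coefficient of the Jacobi (variational) equation of the damped oscillator.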
\begin{proposition} The following formula holds \begin{equation} \Phi =i_{\mathcal{S}}\Omega +\mathrm{v}\circ \mathcal{L}_{\mathrm{v}\mathcal{ S}}\mathrm{h}. \end{equation} \end{proposition}
\textbf{Proof}. Indeed, $\Phi (\rho )=\mathrm{v}\circ \mathcal{L}_{\mathcal{S }}\mathrm{h}\rho =\mathrm{v}\circ \mathcal{L}_{\mathrm{h}\mathcal{S}}\mathrm{ h}\rho +\mathrm{v}\circ \mathcal{L}_{\mathrm{v}\mathcal{S}}\mathrm{h}\rho $ and $\Omega (\mathcal{S},\rho )=\mathrm{v}[\mathrm{h}\mathcal{S},\mathrm{h} \rho ]_{\mathcal{T}E}=\mathrm{v}\circ \mathcal{L}_{\mathrm{h}\mathcal{S}} \mathrm{h}\rho$, which yields $\Phi (\rho )=\Omega (\mathcal{S},\rho )+ \mathrm{v}\circ \mathcal{L}_{\mathrm{v}\mathcal{S}}\mathrm{h}\rho .$
\hbox{\rlap{$\sqcap$}$\sqcup$}
For a given semispray $\mathcal{S}$ on $\mathcal{T}E$ the Lie derivative $\mathcal{L}_\mathcal{S}$ defines a tensor derivation on $\mathcal{T}E$, but it does not preserve geometric structures such as the tangent structure and the nonlinear connection. Next, using a nonlinear connection, we introduce a tensor derivation on $\mathcal{T}E$, called the dynamical covariant derivative, which does preserve several of these structures.
\section{\textbf{Dynamical covariant derivative on Lie algebroids}}
In the following we will introduce the notion of dynamical covariant derivative on Lie algebroids as a tensor derivation and study its properties. We will use the Jacobi endomorphism and the dynamical covariant derivative in the study of symmetries for SODE on Lie algebroids.
\begin{definition} \cite{Po3} A map $\nabla :\frak{T}(\mathcal{T}E\backslash \{0\})\rightarrow \frak{T}(\mathcal{T}E\backslash \{0\})$ is said to be a tensor derivation on $\mathcal{T}E\backslash \{0\}$ if the following conditions are satisfied:\\ i) $\nabla $ is $\Bbb{R}$-linear;\\ii) $\nabla $ is type preserving, i.e. $\nabla (\frak{T}_s^r(\mathcal{T}E\backslash \{0\}))\subset \frak{T}_s^r(\mathcal{T}E\backslash \{0\})$, for each $(r,s)\in \Bbb{N}\times \Bbb{N}$;\\ iii) $\nabla $ obeys the Leibniz rule $\nabla (P\otimes S)=\nabla P\otimes S+P\otimes \nabla S$, for any tensors $P,S$ on $\mathcal{T}E\backslash \{0\}$;\\iv) $\nabla \,$commutes with all contractions; here $\frak{T}_{\bullet }^{\bullet }(\mathcal{T}E\backslash \{0\})$ denotes the space of tensors on $\mathcal{T}E\backslash \{0\}.$ \end{definition}
For a semispray $\mathcal{S}$ and an arbitrary nonlinear connection $\mathcal{N}$ we consider the $\Bbb{R}$-linear map $\nabla :\Gamma (\mathcal{T}E\backslash \{0\})\rightarrow \Gamma (\mathcal{T}E\backslash \{0\})$ given by \begin{equation} \nabla =\mathrm{h}\circ \mathcal{L}_{\mathcal{S}}\circ \mathrm{h}+\mathrm{v}\circ \mathcal{L}_{\mathcal{S}}\circ \mathrm{v}, \end{equation} which will be called the dynamical covariant derivative induced by the semispray $\mathcal{S}$ and the nonlinear connection $\mathcal{N}$. By setting $\nabla f=\mathcal{S}(f)$ for $f\in C^\infty (E\backslash \{0\})$ and using the Leibniz rule together with the requirement that $\nabla $ commutes with all contractions, we can extend the action of $\nabla $ to arbitrary tensors on $\mathcal{T}E\backslash \{0\}$. For a $1$-form $\varphi $ on $\mathcal{T}E\backslash \{0\}$ the dynamical covariant derivative is given by $(\nabla \varphi )(\rho )=\mathcal{S}(\varphi (\rho ))-\varphi (\nabla \rho ).$ For a $(1,1)$-type tensor field $T$ on $\mathcal{T}E\backslash \{0\}$ it takes the form \begin{equation} \nabla T=\nabla \circ T-T\circ \nabla, \end{equation} and by direct computation using (17) we obtain \begin{equation*} \nabla \mathrm{h}=\nabla \mathrm{v}=0, \end{equation*} which means that $\nabla$ preserves the horizontal and vertical sections.
Also, we get \begin{equation*} \nabla \mathcal{V}_\beta =\mathrm{v}[\mathcal{S},\mathcal{V}_\beta ]_{\mathcal{T}E}=-\left( \mathcal{N}_\beta ^\alpha +\frac{\partial \mathcal{S}^\alpha }{\partial y^\beta }\right) \mathcal{V}_\alpha ,\quad \nabla \delta \mathcal{V}^\beta =\left( \mathcal{N}_\alpha ^\beta +\frac{\partial \mathcal{S}^\beta }{\partial y^\alpha }\right) \delta \mathcal{V}^\alpha, \end{equation*} \begin{equation*} \nabla \delta _\beta =\mathrm{h}[\mathcal{S},\delta _\beta ]_{\mathcal{T}E}=\left( \mathcal{N}_\beta ^\alpha -L_{\beta \varepsilon }^\alpha y^\varepsilon \right) \delta _\alpha,\quad \nabla \mathcal{X}^\beta =-\left( \mathcal{N}_\alpha ^\beta -L_{\alpha \varepsilon }^\beta y^\varepsilon \right) \mathcal{X}^\alpha . \end{equation*} The action of the dynamical covariant derivative on a horizontal section $X=\mathrm{h}X$ is given by the following relations \begin{equation} \nabla X=\nabla \left( X^\alpha \delta _\alpha \right) =\nabla X^\alpha \delta _\alpha ,\quad \nabla X^\alpha =\mathcal{S}(X^\alpha )+\left( \mathcal{N}_\beta ^\alpha +y^\varepsilon L_{\varepsilon \beta }^\alpha \right) X^\beta. \end{equation}
\begin{proposition} The following results hold \begin{equation} \mathrm{h}\circ \mathcal{L}_{\mathcal{S}}\circ J=-\mathrm{h},\quad J\circ \mathcal{L}_{\mathcal{S}}\circ \mathrm{v}=-\mathrm{v}, \end{equation} \begin{equation} \nabla J=\mathcal{L}_{\mathcal{S}}J+\mathcal{N},\quad \nabla J=-\left( \frac{ \partial \mathcal{S}^\beta }{\partial y^\alpha }-y^\varepsilon L_{\alpha \varepsilon }^\beta +2\mathcal{N}_\alpha ^\beta \right) \mathcal{V}_\beta \otimes \mathcal{X}^\alpha. \end{equation} \end{proposition}
\textbf{Proof}. From (8) we get \begin{equation*} J[\mathcal{S},JX]_{\mathcal{T}E}=-JX\Rightarrow J\left( [\mathcal{S},JX]_{\mathcal{T}E}+X\right) =0\Rightarrow [\mathcal{S},JX]_{\mathcal{T}E}+X\in V\mathcal{T}E, \end{equation*} \begin{equation*} \mathrm{h}\left( [\mathcal{S},JX]_{\mathcal{T}E}+X\right) =0\Rightarrow \mathrm{h}[\mathcal{S},JX]_{\mathcal{T}E}=-\mathrm{h}X\Leftrightarrow \mathrm{h}\circ \mathcal{L}_{\mathcal{S}}\circ J=-\mathrm{h}. \end{equation*} Also, setting $JX=\mathrm{v}Z$ in $J[\mathcal{S},JX]_{\mathcal{T}E}+JX=0$ it results $J[\mathcal{S},\mathrm{v}Z]_{\mathcal{T}E}=-\mathrm{v}Z\Leftrightarrow J\circ \mathcal{L}_{\mathcal{S}}\circ \mathrm{v}=-\mathrm{v}.$ Next, \begin{eqnarray*} \nabla \circ J &=&\mathrm{h}\circ \mathcal{L}_{\mathcal{S}}\circ \mathrm{h}\circ J+\mathrm{v}\circ \mathcal{L}_{\mathcal{S}}\circ \mathrm{v}\circ J=\mathrm{v}\circ \mathcal{L}_{\mathcal{S}}\circ J= \\ &=&(Id-\mathrm{h})\circ \mathcal{L}_{\mathcal{S}}\circ J=\mathcal{L}_{\mathcal{S}}\circ J-\mathrm{h}\circ \mathcal{L}_{\mathcal{S}}\circ J=\mathcal{L}_{\mathcal{S}}\circ J+\mathrm{h}. \end{eqnarray*} On the other hand, \begin{equation*} J\circ \nabla =J\circ \mathcal{L}_{\mathcal{S}}\circ \mathrm{h}=J\circ \mathcal{L}_{\mathcal{S}}\circ (Id-\mathrm{v})=J\circ \mathcal{L}_{\mathcal{S}}-J\circ \mathcal{L}_{\mathcal{S}}\circ \mathrm{v}=J\circ \mathcal{L}_{\mathcal{S}}+\mathrm{v}, \end{equation*} and we obtain \begin{equation*} \nabla \circ J-J\circ \nabla =\mathcal{L}_{\mathcal{S}}\circ J+\mathrm{h}-J\circ \mathcal{L}_{\mathcal{S}}-\mathrm{v}\Rightarrow \nabla J=\mathcal{L}_{\mathcal{S}}J+\mathrm{h}-\mathrm{v}=\mathcal{L}_{\mathcal{S}}J+\mathcal{N}. \end{equation*} For the last relation, we have
\begin{eqnarray*} \nabla J &=&\nabla \left( \mathcal{X}^\beta \otimes \mathcal{V}_\beta \right) =\nabla \mathcal{X}^\beta \otimes \mathcal{V}_\beta +\mathcal{X} ^\beta \otimes \nabla \mathcal{V}_\beta \\ \ &=&-\left( \mathcal{N}_\alpha ^\beta -L_{\alpha \varepsilon }^\beta y^\varepsilon \right) \mathcal{X}^\alpha \otimes \mathcal{V}_\beta +\mathcal{ X}^\beta \otimes \left( -\mathcal{N}_\beta ^\alpha -\frac{\partial \mathcal{S }^\alpha }{\partial y^\beta }\right) \mathcal{V}_\alpha \\ \ &=&-\mathcal{N}_\alpha ^\beta \mathcal{X}^\alpha \otimes \mathcal{V}_\beta +L_{\alpha \varepsilon }^\beta y^\varepsilon \mathcal{X}^\alpha \otimes \mathcal{V}_\beta -\mathcal{N}_\beta ^\alpha \mathcal{X}^\beta \otimes \mathcal{V}_\alpha -\frac{\partial \mathcal{S}^\alpha }{\partial y^\beta } \mathcal{X}^\beta \otimes \mathcal{V}_\alpha \\ \ &=&\left( L_{\alpha \varepsilon }^\beta y^\varepsilon -\frac{\partial \mathcal{S}^\beta }{\partial y^\alpha }-2\mathcal{N}_\alpha ^\beta \right) \mathcal{X}^\alpha \otimes \mathcal{V}_\beta . \end{eqnarray*}
\hbox{\rlap{$\sqcap$}$\sqcup$}
The above proposition leads to the following result:
\begin{theorem} For a semispray $\mathcal{S}$, an arbitrary nonlinear connection $\mathcal{N} $ and $\nabla$ the dynamical covariant derivative induced by $\mathcal{S}$ and $ \mathcal{N}$, the following conditions are equivalent:\\ $i)$ $\nabla J=0,$\\ $ii)$ $\mathcal{L}_SJ+\mathcal{N}=0,$\\ $iii)$ $\mathcal{N}_\alpha ^\beta =\frac 12\left( -\frac{\partial \mathcal{S} ^\beta }{\partial y^\alpha }+y^\varepsilon L_{\alpha \varepsilon }^\beta \right)$. \end{theorem}
\textbf{Proof}. The proof follows from the relations (20).
\hbox{\rlap{$\sqcap$}$\sqcup$}
This theorem shows that the compatibility condition $\nabla J=0$ of the dynamical covariant derivative with the tangent structure determines the nonlinear connection $\mathcal{N}=-\mathcal{L}_{\mathcal{S}}J$. In the particular case of the tangent bundle we recover the results from \cite{Bu3}. In the following we work with this nonlinear connection induced by the semispray.
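To make condition $iii)$ of the theorem concrete, here is a minimal sympy sketch on the tangent bundle (so $\sigma_\alpha^i=\delta_\alpha^i$ and $L_{\alpha\varepsilon}^\beta=0$), for the geodesic spray of the Euclidean metric in polar coordinates; this is an illustrative example, not one taken from the paper. The canonical coefficients $\mathcal{N}_\alpha^\beta=-\frac 12\,\partial \mathcal{S}^\beta/\partial y^\alpha$ come out linear in $y$, with the Christoffel symbols as coefficients:

```python
import sympy as sp

# Geodesic spray of the Euclidean metric dr^2 + r^2 dth^2 in polar coordinates:
#   r'' = r*(th')^2,   th'' = -(2/r)*r'*th'   (hypothetical illustration).
r, th, y1, y2 = sp.symbols('r th y1 y2', positive=True)
ys = [y1, y2]
S = [r*y2**2, -2*y1*y2/r]                # spray coefficients S^1, S^2

# Canonical nonlinear connection of condition iii) with L = 0:
#   N^b_a = -(1/2) dS^b/dy^a
N = [[-sp.Rational(1, 2)*sp.diff(S[b], ys[a]) for a in range(2)]
     for b in range(2)]

# Nonzero Christoffel symbols of the metric, Gamma[(b, a, c)] = Gamma^b_{ac}
Gamma = {(0, 1, 1): -r, (1, 0, 1): 1/r, (1, 1, 0): 1/r}
N_from_Gamma = [[sum(Gamma.get((b, a, c), 0)*ys[c] for c in range(2))
                 for a in range(2)] for b in range(2)]
```

Both constructions agree, and the spray coefficients satisfy the Euler relation for 2-homogeneity in $y$.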
\subsection{The canonical nonlinear connection induced by a semispray}
A semispray $\mathcal{S},$ together with the condition $\nabla J=0,$ determines the canonical nonlinear connection $\mathcal{N}=-\mathcal{L}_{\mathcal{S}}J$ with local coefficients \begin{equation*} \mathcal{N}_\alpha ^\beta =\frac 12\left( -\frac{\partial \mathcal{S}^\beta }{\partial y^\alpha }+y^\varepsilon L_{\alpha \varepsilon }^\beta \right) . \end{equation*} In this case the following equations hold \begin{equation*} \lbrack \mathcal{S},\mathcal{V}_\beta ]_{\mathcal{T}E}=-\delta _\beta +\left( \mathcal{N}_\beta ^\alpha -L_{\beta \varepsilon }^\alpha y^\varepsilon \right) \mathcal{V}_\alpha , \end{equation*} \begin{equation*} \lbrack \mathcal{S},\delta _\beta ]_{\mathcal{T}E}=\left( \mathcal{N}_\beta ^\alpha -L_{\beta \varepsilon }^\alpha y^\varepsilon \right) \delta _\alpha +\mathcal{R}_\beta ^\alpha \mathcal{V}_\alpha , \end{equation*} where \begin{equation} \mathcal{R}_\beta ^\alpha =-\sigma _\beta ^i\frac{\partial \mathcal{S}^\alpha }{\partial x^i}-\mathcal{S}(\mathcal{N}_\beta ^\alpha )-\mathcal{N}_\gamma ^\alpha \mathcal{N}_\beta ^\gamma +(L_{\varepsilon \beta }^\gamma \mathcal{N}_\gamma ^\alpha +L_{\gamma \varepsilon }^\alpha \mathcal{N}_\beta ^\gamma )y^\varepsilon \end{equation} are the local coefficients of the Jacobi endomorphism. \begin{proposition} If $\mathcal{S}$ is a spray, then the Jacobi endomorphism is the contraction with $\mathcal{S}$ of the curvature of the nonlinear connection, \begin{equation*} \Phi =i_{\mathcal{S}}\Omega . \end{equation*} \end{proposition}
\textbf{Proof}. If $\mathcal{S}$ is a spray, then the coefficients $\mathcal{S}^\alpha $ are 2-homogeneous with respect to the variables $y^\beta$, and it follows that \begin{equation*} 2\mathcal{S}^\alpha =\frac{\partial \mathcal{S}^\alpha }{\partial y^\beta }y^\beta =-2\mathcal{N}_\beta ^\alpha y^\beta +L_{\beta \gamma }^\alpha y^\beta y^\gamma =-2\mathcal{N}_\beta ^\alpha y^\beta \end{equation*} (the term $L_{\beta \gamma }^\alpha y^\beta y^\gamma $ vanishes by the skew-symmetry of the structure functions), \begin{equation*} \mathcal{S}=\mathrm{h}\mathcal{S}=y^\alpha \delta _\alpha ,\quad \mathrm{v}\mathcal{S}=0,\quad \mathcal{N}_\beta ^\alpha =\frac{\partial \mathcal{N}_\varepsilon ^\alpha }{\partial y^\beta }y^\varepsilon +L_{\beta \varepsilon }^\alpha y^\varepsilon , \end{equation*} which together with (15) yields $\Phi =i_{\mathcal{S}}\Omega$. Locally, we get $\mathcal{R}_\beta ^\alpha =\mathcal{R}_{\varepsilon \beta }^\alpha y^\varepsilon $, which is the local relation between the Jacobi endomorphism and the curvature of the nonlinear connection. Also, we have $\Phi(\mathcal{S})=0$.
\hbox{\rlap{$\sqcap$}$\sqcup$}\\ Next, we introduce the almost complex structure in order to find the decomposition formula for the dynamical covariant derivative. \begin{definition} The almost complex structure is given by the formula \begin{equation*} \Bbb{F}=\mathrm{h}\circ \mathcal{L}_{\mathcal{S}}\mathrm{h}-J. \end{equation*} \end{definition}
We have to show that $\Bbb{F}^2=-Id$. Indeed, from the relation $\mathcal{L} _{\mathcal{S}}\mathrm{h}=\mathcal{L}_{\mathcal{S}}\circ \mathrm{h}-\mathrm{h} \circ \mathcal{L}_{\mathcal{S}}$ we obtain $\Bbb{F}=\mathrm{h}\circ \mathcal{L}_{\mathcal{S}}\circ \mathrm{h}- \mathrm{h}\circ \mathcal{L}_{\mathcal{S}}-J=\mathrm{h}\circ \mathcal{L}_{ \mathcal{S}}\circ (\mathrm{h}-Id)-J=-\mathrm{h}\circ \mathcal{L}_{\mathcal{S} }\circ \mathrm{v}-J$ and $\Bbb{F}^2=\left( -\mathrm{h}\circ \mathcal{L}_{\mathcal{S}}\circ \mathrm{v} -J\right) \circ \left( -\mathrm{h}\circ \mathcal{L}_{\mathcal{S}}\circ \mathrm{v}-J\right) =\mathrm{h}\circ \mathcal{L}_{\mathcal{S}}\circ \mathrm{v }\circ \mathrm{h}\circ \mathcal{L}_{\mathcal{S}}\circ \mathrm{v}+\mathrm{h} \circ \mathcal{L}_{\mathcal{S}}\circ \mathrm{v}\circ J+$\\ $+J\circ \mathrm{h}\circ \mathcal{L}_{\mathcal{S}}\circ \mathrm{v}+J^2= \mathrm{h}\circ \mathcal{L}_{\mathcal{S}}\circ J+J\circ \mathcal{L}_{ \mathcal{S}}\circ \mathrm{v}$ $=-\mathrm{h}-\mathrm{v}=-Id.$
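The computation above can also be seen matricially: in the adapted Berwald basis $\{\delta_\alpha,\mathcal{V}_\alpha\}$ one has $\Bbb{F}(\delta_\alpha)=-\mathcal{V}_\alpha$ and $\Bbb{F}(\mathcal{V}_\alpha)=\delta_\alpha$, so $\Bbb{F}$ is represented by a standard symplectic-type block matrix whose square is $-Id$. A minimal sympy sketch (the rank $n=3$ is an arbitrary choice):

```python
import sympy as sp

# Matrix of F in the adapted basis, blocks ordered (delta_a, V_a):
#   F(delta_a) = -V_a,  F(V_a) = delta_a   =>   F = [[0, I], [-I, 0]].
n = 3
I, Z = sp.eye(n), sp.zeros(n, n)
F = Z.row_join(I).col_join((-I).row_join(Z))   # 2n x 2n block matrix
```

Squaring the block matrix reproduces $\Bbb{F}^2=-Id$ directly.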
\begin{proposition} The following results hold \begin{equation*} \begin{array}{c} \Bbb{F}\circ J=\mathrm{h},\quad J\circ \Bbb{F}=\mathrm{v},\quad \mathrm{v} \circ \Bbb{F}=\Bbb{F}\circ \mathrm{h}=-J, \\ \mathrm{h}\circ \Bbb{F}=\Bbb{F}\circ \mathrm{v}=\Bbb{F}+J,\quad \mathcal{N} \circ \Bbb{F}=\Bbb{F}+2J,\quad \Phi =\mathcal{L}_{\mathcal{S}}\mathrm{h}- \Bbb{F}-J. \end{array} \end{equation*} \end{proposition}
\textbf{Proof}. Using the relations (19) we obtain\\
$\Bbb{F}\circ J=\left( -\mathrm{h}\circ \mathcal{L}_{\mathcal{S}}\circ \mathrm{v}-J\right) \circ J=$ $-\mathrm{h}\circ \mathcal{L}_{\mathcal{S} }\circ \mathrm{v}\circ J-J^2$ $=-\mathrm{h}\circ \mathcal{L}_{\mathcal{S} }\circ J=\mathrm{h}$,\\ $J\circ \Bbb{F}=-J\circ \left( \mathrm{h}\circ \mathcal{L}_{\mathcal{S} }\circ \mathrm{v}+J\right) =-J\circ \mathrm{h}\circ \mathcal{L}_{\mathcal{S} }\circ \mathrm{v}-J^2=-J\circ \mathcal{L}_{\mathcal{S}}\circ \mathrm{v}= \mathrm{v}$,\\ $\mathrm{v}\circ \Bbb{F}=\mathrm{v}\circ \left( \mathrm{h}\circ \mathcal{L}_{ \mathcal{S}}\mathrm{h}-J\right) =-\mathrm{v}\circ J=-J$, $\Bbb{F}\circ \mathrm{h}=\left( -\mathrm{h}\circ \mathcal{L}_{\mathcal{S}}\circ \mathrm{v} -J\right) \circ \mathrm{h}=-J\circ \mathrm{h}=-J,$ $\mathrm{h}\circ \Bbb{F}=\mathrm{h}\circ \left( \mathrm{h}\circ \mathcal{L}_{ \mathcal{S}}\mathrm{h}-J\right) =\mathrm{h}\circ \mathcal{L}_{\mathcal{S}} \mathrm{h}=\Bbb{F}+J$, $\Bbb{F}\circ \mathrm{v=}\left( -\mathrm{h}\circ \mathcal{L}_{\mathcal{S}}\circ \mathrm{v}-J\right) \circ \mathrm{v}=-\mathrm{ h}\circ \mathcal{L}_{\mathcal{S}}\circ \mathrm{v}=$ $\Bbb{F}+J$. In the same way, the other relations can be proved.
\hbox{\rlap{$\sqcap$}$\sqcup$}\\ In local coordinates we have \begin{equation*} \Bbb{F}=-\mathcal{V}_\alpha \otimes \mathcal{X}^\alpha +\delta_\alpha \otimes \delta \mathcal{V}^\alpha . \end{equation*} For a semispray $\mathcal{S}$ and the associated nonlinear connection we consider the $\Bbb{R}$-linear map $\nabla _0:\Gamma (\mathcal{T}E\backslash \{0\})\rightarrow \Gamma (\mathcal{T}E\backslash \{0\})$ given by \begin{equation*} \nabla _0\rho =\mathrm{h}[\mathcal{S},\mathrm{h}\rho ]_{\mathcal{T}E}+\mathrm{v}[\mathcal{S},\mathrm{v}\rho ]_{\mathcal{T}E},\quad \forall \rho \in \Gamma (\mathcal{T}E\backslash \{0\}). \end{equation*} It follows that \begin{equation*} \nabla _0(f\rho )=\mathcal{S}(f)\rho +f\nabla _0\rho ,\quad \forall f\in C^\infty (E\backslash \{0\}),\ \rho \in \Gamma (\mathcal{T}E\backslash \{0\}). \end{equation*} Any tensor derivation on $\mathcal{T}E\backslash \{0\}$ is completely determined by its action on smooth functions and on sections of $\mathcal{T}E\backslash \{0\}$ (see the generalized Willmore theorem in \cite{Sz2}). Therefore, there exists a unique tensor derivation $\nabla $ on $\mathcal{T}E\backslash \{0\}$ such that \begin{equation*} \nabla \mid _{C^\infty (E)}=\mathcal{S},\quad \nabla \mid _{\Gamma (\mathcal{T}E\backslash \{0\})}=\nabla _0. \end{equation*} We call the tensor derivation $\nabla $ the \textit{dynamical covariant derivative} induced by the semispray $\mathcal{S}$ (see \cite{Bu2} for the tangent bundle case). \begin{proposition} The dynamical covariant derivative has the following decomposition \begin{equation} \nabla =\mathcal{L}_{\mathcal{S}}+\Bbb{F}+J-\Phi. \end{equation} \end{proposition}
\textbf{Proof}. Using the formula (16) and the expressions of $\Bbb{F}$ and $\Phi $ we obtain \begin{eqnarray*} \nabla &=&\mathrm{h}\circ \mathcal{L}_{\mathcal{S}}\circ \mathrm{h}+\mathrm{v}\circ \mathcal{L}_{\mathcal{S}}\circ \mathrm{v}= \\ &=&\mathrm{h}\circ \left( \mathcal{L}_{\mathcal{S}}\mathrm{h}+\mathrm{h}\circ \mathcal{L}_{\mathcal{S}}\right) +\mathrm{v}\circ \left( \mathcal{L}_{\mathcal{S}}\mathrm{v}+\mathrm{v}\circ \mathcal{L}_{\mathcal{S}}\right) = \\ &=&\mathrm{h}\circ \mathcal{L}_{\mathcal{S}}\mathrm{h}+\mathrm{v}\circ \mathcal{L}_{\mathcal{S}}\mathrm{v}+(\mathrm{h}+\mathrm{v})\circ \mathcal{L}_{\mathcal{S}}=\mathcal{L}_{\mathcal{S}}+\mathrm{h}\circ \mathcal{L}_{\mathcal{S}}\mathrm{h}+\mathrm{v}\circ \mathcal{L}_{\mathcal{S}}\mathrm{v}= \\ &=&\mathcal{L}_{\mathcal{S}}+\Bbb{F}+J-\Phi . \end{eqnarray*} In this case the dynamical covariant derivative is characterized by the following formulas \begin{equation*} \nabla \mathcal{V}_\beta =\mathrm{v}[\mathcal{S},\mathcal{V}_\beta ]_{\mathcal{T}E}=\left( \mathcal{N}_\beta ^\alpha -L_{\beta \varepsilon }^\alpha y^\varepsilon \right) \mathcal{V}_\alpha =-\frac 12\left( \frac{\partial \mathcal{S}^\alpha }{\partial y^\beta }+L_{\beta \varepsilon }^\alpha y^\varepsilon \right) \mathcal{V}_\alpha , \end{equation*} \begin{equation*} \nabla \delta _\beta =\mathrm{h}[\mathcal{S},\delta _\beta ]_{\mathcal{T}E}=\left( \mathcal{N}_\beta ^\alpha -L_{\beta \varepsilon }^\alpha y^\varepsilon \right) \delta _\alpha =-\frac 12\left( \frac{\partial \mathcal{S}^\alpha }{\partial y^\beta }+L_{\beta \varepsilon }^\alpha y^\varepsilon \right) \delta _\alpha. \end{equation*} These formulas show that $\nabla$ acts in the same way on the horizontal and the vertical distributions, so it suffices to know the action of $\nabla$ on either one of them.
\begin{proposition} The dynamical covariant derivative induced by the semispray $\mathcal{S}$ is compatible with $J$ and $\Bbb{F}$, that is \begin{equation*} \nabla J=0,\ \nabla \Bbb{F}=0. \end{equation*} \end{proposition}
\textbf{Proof}. $\nabla J=0$ follows from (20). Using the formula $\Bbb{F}=- \mathrm{h}\circ \mathcal{L}_{\mathcal{S}}\circ \mathrm{v}-J$ and $\nabla \Bbb{F}=\nabla \circ \Bbb{F}-\Bbb{F}\circ \nabla$ we obtain \begin{eqnarray*} \nabla \Bbb{F} &=&(\mathrm{h}\circ \mathcal{L}_{\mathcal{S}}\circ \mathrm{h}+ \mathrm{v}\circ \mathcal{L}_{\mathcal{S}}\circ \mathrm{v})\circ (-\mathrm{h} \circ \mathcal{L}_{\mathcal{S}}\circ \mathrm{v})-(-\mathrm{h}\circ \mathcal{L }_{\mathcal{S}}\circ \mathrm{v})\circ (\mathrm{h}\circ \mathcal{L}_{\mathcal{ S}}\circ \mathrm{h}+\mathrm{v}\circ \mathcal{L}_{\mathcal{S}}\circ \mathrm{v} )= \\ \ &=&-\mathrm{h}\circ \mathcal{L}_{\mathcal{S}}\circ \mathrm{h}\circ \mathcal{L}_{\mathcal{S}}\circ \mathrm{v}+\mathrm{h}\circ \mathcal{L}_{ \mathcal{S}}\circ \mathrm{v}\circ \mathcal{L}_{\mathcal{S}}\circ \mathrm{v}= \\ \ &=&\mathrm{h}\circ \mathcal{L}_{\mathcal{S}}\circ (\mathrm{v}-\mathrm{h} )\circ \mathcal{L}_{\mathcal{S}}\circ \mathrm{v}=\mathrm{h}\circ \mathcal{L} _{\mathcal{S}}\circ \mathcal{L}_{\mathcal{S}}J\circ \mathcal{L}_{\mathcal{S} }\circ \mathrm{v}= \\ \ &=&\mathrm{h}\circ \mathcal{L}_{\mathcal{S}}\circ (\mathcal{L}_{\mathcal{S} }\circ J-J\circ \mathcal{L}_{\mathcal{S}})\circ \mathcal{L}_{\mathcal{S} }\circ \mathrm{v}= \\ \ &=&\mathrm{h}\circ \mathcal{L}_{\mathcal{S}}\circ \mathcal{L}_{\mathcal{S} }\circ (J\circ \mathcal{L}_{\mathcal{S}}\circ \mathrm{v})-(\mathrm{h}\circ \mathcal{L}_{\mathcal{S}}\circ J)\circ \mathcal{L}_{\mathcal{S}}\circ \mathcal{L}_{\mathcal{S}}\circ \mathrm{v}= \\ \ &=&-\mathrm{h}\circ \mathcal{L}_{\mathcal{S}}\circ \mathcal{L}_{\mathcal{S} }\circ \mathrm{v}+\mathrm{h}\circ \mathcal{L}_{\mathcal{S}}\circ \mathcal{L} _{\mathcal{S}}\circ \mathrm{v}=0. \end{eqnarray*}
\hbox{\rlap{$\sqcap$}$\sqcup$}\\ The next proposition shows that $\nabla$ has additional properties in the case of a spray. \begin{proposition} If the dynamical covariant derivative is induced by a spray $\mathcal{S}$ then \begin{equation*} \nabla \mathcal{S}=0,\ \nabla \Bbb{C}=0. \end{equation*} \end{proposition}
\textbf{Proof}. Indeed, if $\mathcal{S}$ is a spray then we have $\mathcal{S}= \mathrm{h}\mathcal{S}$ and $\mathrm{v}\mathcal{S}=0$ and it results $\nabla \mathcal{S}=\mathrm{h}\circ \mathcal{L}_{\mathcal{S}}\circ \mathrm{h} \mathcal{S}+\mathrm{v}\circ \mathcal{L}_{\mathcal{S}}\circ \mathrm{v} \mathcal{S}=\mathrm{h}\circ \mathcal{L}_{\mathcal{S}}\circ \mathcal{S}=0.$ Also $\nabla \Bbb{C}=0$ follows from $\mathrm{h}\Bbb{C}=0$, $\mathrm{v}\Bbb{C }=\Bbb{C}$ and $[\Bbb{C},\mathcal{S}]_{\mathcal{T}E}=\mathcal{S}$.
\hbox{\rlap{$\sqcap$}$\sqcup$}\\ Next, we introduce the Berwald linear connection induced by a nonlinear connection and prove that, for homogeneous second order differential equations, the dynamical covariant derivative coincides with the Berwald covariant derivative along $\mathcal{S}$. The Berwald linear connection is given by \begin{equation*} \mathcal{D}:\Gamma (\mathcal{T}E\backslash \{0\})\times \Gamma (\mathcal{T}E\backslash \{0\})\rightarrow \Gamma (\mathcal{T}E\backslash \{0\}) \end{equation*} \begin{equation*} \mathcal{D}_XY=\mathrm{v}[\mathrm{h}X,\mathrm{v}Y]_{\mathcal{T}E}+\mathrm{h}[\mathrm{v}X,\mathrm{h}Y]_{\mathcal{T}E}+J[\mathrm{v}X,(\Bbb{F}+J)Y]_{\mathcal{T}E}+(\Bbb{F}+J)[\mathrm{h}X,JY]_{\mathcal{T}E}. \end{equation*}
\begin{proposition} The Berwald linear connection has the following properties \begin{equation*} \mathcal{D}\mathrm{h}=0,\quad \mathcal{D}\mathrm{v}=0,\quad \mathcal{D} J=0,\quad \mathcal{D}\Bbb{F}=0. \end{equation*} \end{proposition}
\textbf{Proof}. Using the properties of the vertical and horizontal projectors we obtain\\ $\mathcal{D}_X\mathrm{v}Y=\mathrm{v}[\mathrm{h}X,\mathrm{v}Y]_{\mathcal{T} E}+J[\mathrm{v}X,(\Bbb{F}+J)Y]_{\mathcal{T}E}$ and\\ $\mathrm{v}(\mathcal{D}_XY)=\mathrm{v}[\mathrm{h}X,\mathrm{v}Y]_{\mathcal{T} E}+J[\mathrm{v}X,(\Bbb{F}+J)Y]_{\mathcal{T}E}$ which yields $\mathcal{D} \mathrm{v}=0$. Also,\\ $\mathcal{D}_X\mathrm{h}Y=\mathrm{h}[\mathrm{v}X,\mathrm{h}Y]_{\mathcal{T} E}+(\Bbb{F}+J)[\mathrm{h}X,JY]_{\mathcal{T}E}=\mathrm{h}(\mathcal{D}_XY)$ and it results $\mathcal{D}\mathrm{h}=0$. Moreover,\\ $\mathcal{D}_XJY=\mathrm{v}[\mathrm{h}X,JY]_{\mathcal{T}E}+J[\mathrm{v}X, \mathrm{h}Y]_{\mathcal{T}E}$ and $J(\mathcal{D}_XY)=J[\mathrm{v}X,\mathrm{h} Y]_{\mathcal{T}E}+\mathrm{v}[\mathrm{h}X,JY]_{\mathcal{T}E}$ and we obtain $\mathcal{D}J=0.$ From\\ $\mathcal{D}_X\Bbb{F}Y=\mathrm{v}[\mathrm{h}X,-JY]_{\mathcal{T}E}+\mathrm{h}[ \mathrm{v}X,(\Bbb{F}+J)Y]_{\mathcal{T}E}+J[\mathrm{v}X,-\mathrm{h}Y]_{ \mathcal{T}E}+(\Bbb{F}+J)[\mathrm{h}X,\mathrm{v}Y]_{\mathcal{T}E}$ and\\ $\Bbb{F}(\mathcal{D}_XY)=(\Bbb{F}+J)[\mathrm{h}X,\mathrm{v}Y]_{\mathcal{T} E}-J[\mathrm{v}X,\mathrm{h}Y]_{\mathcal{T}E}+\mathrm{h}[\mathrm{v}X,(\Bbb{F} +J)Y]_{\mathcal{T}E}-v[\mathrm{h}X,JY]_{\mathcal{T}E}=$\\ $\mathcal{D}_X\Bbb{F}Y$ we get $\mathcal{D}\Bbb{F}=0.$
\hbox{\rlap{$\sqcap$}$\sqcup$}\\ It results that the Berwald connection preserves both horizontal and vertical sections. Moreover, $\mathcal{D}$ has the same action on horizontal and vertical distributions and locally we have the following formulas \begin{equation*} \mathcal{D}_{\delta _\alpha }\delta _\beta =\frac{\partial \mathcal{N} _\alpha ^\gamma }{\partial y^\beta }\delta _\gamma ,\quad \mathcal{D} _{\delta _\alpha }\mathcal{V}_\beta =\frac{\partial \mathcal{N}_\alpha ^\gamma }{\partial y^\beta }\mathcal{V}_\gamma ,\quad \mathcal{D}_{\mathcal{V }_\alpha }\delta _\beta =0,\quad \mathcal{D}_{\mathcal{V}_\alpha }\mathcal{V} _\beta =0. \end{equation*} We can see that the dynamical covariant derivative has the same properties and this leads to the next result. \begin{proposition} If $\mathcal{S}$ is a spray then the following equality holds \begin{equation*} \nabla =\mathcal{D}_{\mathcal{S}}. \end{equation*} \end{proposition} \textbf{Proof}. If $\mathcal{S}$ is a spray then $\mathcal{S}=\mathrm{h} \mathcal{S}$ and $\mathrm{v}\mathcal{S}=0$ which implies \begin{equation*} \mathcal{D}_{\mathcal{S}}Y=\mathrm{v}[\mathcal{S},\mathrm{v}Y]_{\mathcal{T} E}+(\Bbb{F}+J)[\mathcal{S},JY]_{\mathcal{T}E}. \end{equation*} But $\nabla Y=\mathrm{h}[\mathcal{S},\mathrm{h}Y]_{\mathcal{T}E}+\mathrm{v}[ \mathcal{S},\mathrm{v}Y]_{\mathcal{T}E}$ and we will prove that $\mathrm{h}[ \mathcal{S},\mathrm{h}Y]_{\mathcal{T}E}=(\Bbb{F}+J)[\mathcal{S},JY]_{ \mathcal{T}E}$ using the computation in local coordinates. 
Let us consider $ Y=X^\alpha (x,y)\mathcal{X}_\alpha +Y^\beta (x,y)\mathcal{V}_\beta $ and using (10) we get \begin{equation*} \lbrack \mathcal{S},\mathrm{h}Y]_{\mathcal{T}E}=[y^\alpha \delta _\alpha ,X^\beta \delta _\beta ]_{\mathcal{T}E}=y^\alpha X^\beta \mathcal{R}_{\alpha \beta }^\varepsilon \mathcal{V}_\varepsilon +y^\alpha X^\beta L_{\alpha \beta }^\varepsilon \delta _\varepsilon +y^\alpha \delta _\alpha (X^\beta )\delta _\beta +X^\beta N_\beta ^\alpha \delta _\alpha, \end{equation*}
\begin{equation*} \mathrm{h}[\mathcal{S},\mathrm{h}Y]_{\mathcal{T}E}=\left( y^\alpha \delta _\alpha (X^\beta )+X^\alpha N_\alpha ^\beta+y^\alpha X^\varepsilon L_{\alpha \varepsilon}^\beta \right) \delta _\beta. \end{equation*} Next \begin{equation*} \lbrack \mathcal{S},JY]_{\mathcal{T}E}=[y^\alpha \delta _\alpha ,X^\beta \mathcal{V}_\beta ]_{\mathcal{T}E}=y^\alpha X^\beta \frac{\partial N_\alpha ^\varepsilon }{\partial y^\beta }\mathcal{V}_\varepsilon +y^\alpha \delta _\alpha (X^\beta )\mathcal{V}_\beta -X^\beta \delta _\beta . \end{equation*} Also, we have \begin{equation*} y^\alpha X^\beta \frac{\partial N_\alpha ^\varepsilon }{\partial y^\beta }= \mathcal{N}_\beta ^\varepsilon X^\beta -L_{\beta \alpha }^\varepsilon y^\alpha X^\beta, \end{equation*} and using the relations $(\Bbb{F}+J)(\mathcal{V}_\alpha )=\delta _\alpha $, $ (\Bbb{F}+J)(\delta _\alpha )=0$ we obtain the result which ends the proof.
\hbox{\rlap{$\sqcap$}$\sqcup$}\\ Moreover, $\nabla{\mathcal{S}} =\mathcal{D}_{\mathcal{S}}{\mathcal{S}}=0$ and it results that the integral curves of the spray are geodesics of the Berwald linear connection.
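The local Berwald coefficients $\partial \mathcal{N}_\alpha^\gamma/\partial y^\beta$ can be made explicit for a spray on the tangent bundle ($L=0$), where they equal $-\frac 12\,\partial^2\mathcal{S}^\gamma/\partial y^\alpha\partial y^\beta$ and are therefore symmetric in the lower indices. A minimal sympy sketch for the geodesic spray of the Euclidean metric in polar coordinates (an illustrative choice, not an example from the paper):

```python
import sympy as sp

# Berwald coefficients B^g_{ab} = dN^g_a/dy^b for a spray on the tangent bundle,
# with the canonical connection N^g_a = -(1/2) dS^g/dy^a (hypothetical example:
# the geodesic spray of dr^2 + r^2 dth^2 in polar coordinates).
r, th, y1, y2 = sp.symbols('r th y1 y2', positive=True)
ys = [y1, y2]
S = [r*y2**2, -2*y1*y2/r]

N = [[-sp.Rational(1, 2)*sp.diff(S[g], ys[a]) for a in range(2)]
     for g in range(2)]
B = [[[sp.diff(N[g][a], ys[b]) for b in range(2)]
      for a in range(2)] for g in range(2)]
```

The symmetry $B^\gamma_{\alpha\beta}=B^\gamma_{\beta\alpha}$ holds, and the coefficients recover the Christoffel symbols of the metric, e.g. $B^r_{\theta\theta}=-r$.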
\section{\textbf{Symmetries for semispray}}
In this section we study the symmetries of SODE on Lie algebroids and prove that the canonical nonlinear connection is determined by these symmetries. We find the relations between dynamical symmetries, Lie symmetries, Newtonoid sections, Cartan symmetries and conservation laws, and show when one of them implies the others. We also obtain the invariant equations of these symmetries, using the dynamical covariant derivative and the Jacobi endomorphism. In the particular case of the tangent bundle we recover some results from \cite{Bu3, Mar, Pr1, Pr2}.
\begin{definition} A section $X\in \Gamma (\mathcal{T}E\backslash \{0\})$ is a dynamical symmetry of semispray $\mathcal{S}$ if $[\mathcal{S},X]_{\mathcal{T}E}=0.$ \end{definition}
In local coordinates for $X=X^\alpha (x,y)\mathcal{X}_\alpha +Y^\alpha (x,y) \mathcal{V}_\alpha $ we obtain \begin{equation*} \lbrack \mathcal{S},X]_{\mathcal{T}E}=\left( y^\alpha L_{\alpha \gamma }^\beta X^\gamma -Y^\beta +\mathcal{S}(X^\beta )\right) \mathcal{X}_\beta +\left( \mathcal{S}(Y^\beta )-X(\mathcal{S}^\beta )\right) \mathcal{V}_\beta , \end{equation*} and it results that the dynamical symmetry is characterized by the equations \begin{equation} Y^\alpha =\mathcal{S}(X^\alpha )+y^\varepsilon L_{\varepsilon \beta }^\alpha X^\beta , \end{equation} \begin{equation} \mathcal{S}(Y^\alpha )-X(\mathcal{S}^\alpha )=0. \end{equation} Introducing (23) into (24) we obtain \begin{equation*} \mathcal{S}^2(X^\alpha )-X(\mathcal{S}^\alpha )=\left( \sigma _\gamma ^i \frac{\partial L_{\varepsilon \beta }^\alpha }{\partial x^i}X^\beta +L_{\varepsilon \beta }^\alpha \sigma _\gamma ^i\frac{\partial X^\beta }{ \partial x^i}\right) y^\gamma y^\varepsilon +\mathcal{S}^\gamma \left( L_{\gamma \beta }^\alpha X^\beta +y^\varepsilon L_{\varepsilon \beta }^\alpha \frac{\partial X^\beta }{\partial y^\gamma }\right). \end{equation*}
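Equations (23)–(24) can be verified in the simplest setting: the free-particle semispray on a one-dimensional tangent bundle ($\sigma=1$, $L=0$, $\mathcal{S}^1=0$), for which the dilation section $X=x\,\mathcal{X}_1+y\,\mathcal{V}_1$ is a dynamical symmetry. A minimal sympy sketch (an illustrative example, not one from the paper):

```python
import sympy as sp

# Free-particle semispray S = y d/dx (S^1 = 0) on TR and the dilation section
# X = x X_1 + y V_1; we check the dynamical-symmetry equations (23)-(24)
# in the tangent-bundle specialization sigma = 1, L = 0 (hypothetical example).
x, y = sp.symbols('x y')
S1 = sp.Integer(0)          # semispray coefficient S^1
X1, Y1 = x, y               # horizontal and vertical components of X

def S_of(f):
    """Semispray action S(f) = y df/dx + S^1 df/dy."""
    return y*sp.diff(f, x) + S1*sp.diff(f, y)

eq23 = sp.simplify(Y1 - S_of(X1))                                   # (23) with L = 0
eq24 = sp.simplify(S_of(Y1) - (X1*sp.diff(S1, x) + Y1*sp.diff(S1, y)))  # (24)
```

Both residuals vanish, so the bracket $[\mathcal{S},X]_{\mathcal{T}E}$ is zero for this pair.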
\begin{definition} A section $\widetilde{X}=\widetilde{X}^\alpha (x,y)s_\alpha $ on $E\backslash \{0\}$ is a Lie symmetry of a semispray if its complete lift $\widetilde{X}^c$ is a dynamical symmetry, that is $[\mathcal{S},\widetilde{X}^c]_{\mathcal{T}E}=0.$ \end{definition}
\begin{proposition} The local expression of a Lie symmetry is given by \begin{equation*} \mathcal{S}^\alpha \frac{\partial \widetilde{X}^\beta }{\partial y^\alpha }=0, \end{equation*} \begin{equation*} \mathcal{S}^\alpha \widetilde{X}_{\mid _\alpha }^\beta +y^\alpha y^\varepsilon \sigma _\alpha ^i\frac{\partial \widetilde{X}_{\mid _\varepsilon }^\beta }{\partial x^i}-\widetilde{X}^\alpha \sigma _\alpha ^i\frac{\partial \mathcal{S}^\beta }{\partial x^i}-y^\varepsilon \widetilde{X}_{\mid _\varepsilon }^\alpha \frac{\partial \mathcal{S}^\beta }{\partial y^\alpha }+\mathcal{S}^\alpha y^\varepsilon \left( \sigma _\varepsilon ^i\frac{\partial ^2\widetilde{X}^\beta }{\partial y^\alpha \partial x^i}-L_{\gamma \varepsilon }^\beta \frac{\partial \widetilde{X}^\gamma }{\partial y^\alpha }\right) =0, \end{equation*} where \begin{equation*} \widetilde{X}_{\mid_\varepsilon }^\alpha :=\sigma_\varepsilon ^i\frac{\partial \widetilde{X}^\alpha }{\partial x^i}-L_{\beta \varepsilon}^\alpha \widetilde{X}^\beta . \end{equation*} \end{proposition}
\textbf{Proof}. Considering $\widetilde{X}^c=\widetilde{X}^\alpha \mathcal{X} _\alpha +y^\varepsilon \widetilde{X}_{\mid \varepsilon }^\alpha \mathcal{V} _\alpha $ and using (1) we obtain
\begin{equation*} \lbrack \mathcal{S},\widetilde{X}^c]_{\mathcal{T}E}=\left( \widetilde{X}^\alpha y^\varepsilon L_{\varepsilon \alpha }^\beta +y^\alpha \sigma _\alpha ^i\frac{\partial \widetilde{X}^\beta }{\partial x^i}-y^\varepsilon \widetilde{X}_{\mid _\varepsilon }^\beta +\mathcal{S}^\alpha \frac{\partial \widetilde{X}^\beta }{\partial y^\alpha }\right) \mathcal{X}_\beta + \end{equation*}
\begin{equation*} \ \ \ \ \left( y^\alpha y^\varepsilon \sigma _\alpha ^i\frac{\partial \widetilde{X}_{\mid _\varepsilon }^\beta }{\partial x^i}-\widetilde{X}^\alpha \sigma _\alpha ^i\frac{\partial \mathcal{S}^\beta }{\partial x^i}+\mathcal{S}^\alpha \widetilde{X}_{\mid _\alpha }^\beta -y^\varepsilon \widetilde{X}_{\mid _\varepsilon }^\alpha \frac{\partial \mathcal{S}^\beta }{\partial y^\alpha }+S^\alpha y^\varepsilon \left( \sigma _\varepsilon ^i\frac{\partial ^2\widetilde{X}^\beta }{\partial y^\alpha \partial x^i}-L_{\gamma \varepsilon }^\beta \frac{\partial \widetilde{X}^\gamma }{\partial y^\alpha }\right) \right) \mathcal{V}_\beta. \end{equation*} We deduce that $\widetilde{X}^\alpha y^\varepsilon L_{\varepsilon \alpha }^\beta +y^\alpha \sigma _\alpha ^i\frac{\partial \widetilde{X}^\beta }{\partial x^i}-y^\varepsilon \widetilde{X}_{\mid _\varepsilon }^\beta =0$, which yields the local expression of a Lie symmetry.
\hbox{\rlap{$\sqcap$}$\sqcup$}\\ We remark that a section $\widetilde{X}=\widetilde{X}^\alpha (x)s_\alpha $ on $E\backslash \{0\}$ is a Lie symmetry if and only if (see also \cite{Pe}) \begin{equation*} y^\alpha y^\varepsilon \sigma _\alpha ^i\frac{\partial \widetilde{X}_{\mid _\varepsilon }^\beta }{\partial x^i}-\widetilde{X}^\alpha \sigma _\alpha ^i \frac{\partial \mathcal{S}^\beta }{\partial x^i}+\mathcal{S}^\alpha \widetilde{X}_{\mid _\alpha }^\beta -y^\varepsilon \widetilde{X}_{\mid _\varepsilon }^\alpha \frac{\partial \mathcal{S}^\beta }{\partial y^\alpha } =0. \end{equation*} In this case, a direct computation shows that the components $\widetilde{X}^\alpha (x)$ satisfy equations (23) and (24).
\begin{definition} A section $X\in \Gamma (\mathcal{T}E\backslash \{0\})$ is called Newtonoid if $J[\mathcal{S},X]_{\mathcal{T}E}=0.$ \end{definition} In local coordinates we obtain \begin{equation*} J[\mathcal{S},X]_{\mathcal{T}E}=\left( \mathcal{S}(X^\alpha )-Y^\alpha +y^\varepsilon L_{\varepsilon \beta }^\alpha X^\beta \right) \mathcal{V}_\alpha , \end{equation*} which yields \begin{equation} Y^\alpha =\mathcal{S}(X^\alpha )+y^\varepsilon L_{\varepsilon \beta }^\alpha X^\beta ,\quad X=X^\alpha \mathcal{X}_\alpha +\left( \mathcal{S}(X^\alpha )+y^\varepsilon L_{\varepsilon \beta }^\alpha X^\beta \right) \mathcal{V}_\alpha . \end{equation} We remark that a section $X\in \Gamma (\mathcal{T}E\backslash \{0\})$ is a dynamical symmetry if and only if it is a Newtonoid and satisfies equation (24). The set of Newtonoid sections, denoted $\frak{X}_{\mathcal{S}}$, is given by \begin{equation*} \frak{X}_{\mathcal{S}}=Ker(J\circ \mathcal{L}_{\mathcal{S}})=Im(Id+J\circ \mathcal{L}_{\mathcal{S}}). \end{equation*} In the following we will use the dynamical covariant derivative in order to find the invariant equations of Newtonoid sections and dynamical symmetries on Lie algebroids. Let $\mathcal{S}$ be a semispray, $\mathcal{N}$ an arbitrary nonlinear connection and $\nabla $ the induced dynamical covariant derivative. We set:
\begin{proposition} A section $X\in \Gamma (\mathcal{T}E\backslash \{0\})$ is a Newtonoid if and only if \begin{equation} \mathrm{v}(X)=J(\nabla X), \end{equation} which locally yields \begin{equation*} X=X^\alpha \delta _\alpha +\nabla X^\alpha \mathcal{V}_\alpha, \end{equation*} with $\nabla X^\alpha $ given by formula (18). \end{proposition}
\textbf{Proof}. We know that $J\circ \nabla =J\circ \mathcal{L}_{\mathcal{S}}+\mathrm{v}$, and it follows that $J[\mathcal{S},X]_{\mathcal{T}E}=0$ if and only if $\mathrm{v}(X)=J(\nabla X).$ In local coordinates we obtain \begin{eqnarray*} X &=&X^\alpha \left( \delta _\alpha +\mathcal{N}_\alpha ^\beta \mathcal{V}_\beta \right) +\left( \mathcal{S}(X^\alpha )+y^\varepsilon L_{\varepsilon \beta }^\alpha X^\beta \right) \mathcal{V}_\alpha \\ \ &=&X^\alpha \delta _\alpha +\left( \mathcal{S}(X^\alpha )+X^\beta \left( \mathcal{N}_\beta ^\alpha +y^\varepsilon L_{\varepsilon \beta }^\alpha \right) \right) \mathcal{V}_\alpha \\ \ &=&X^\alpha \delta _\alpha +\nabla X^\alpha \mathcal{V}_\alpha. \end{eqnarray*}
\hbox{\rlap{$\sqcap$}$\sqcup$}
\begin{proposition} A section $X\in \Gamma (\mathcal{T}E\backslash \{0\})$ is a dynamical symmetry if and only if $X$ is a Newtonoid and \begin{equation} \nabla (J\nabla X)+\Phi (X)=0. \end{equation} \end{proposition}
\textbf{Proof}. If $X$ is a dynamical symmetry then $\mathrm{h}[\mathcal{S} ,X]_{\mathcal{T}E}=\mathrm{v}[\mathcal{S},X]_{\mathcal{T}E}=0$ and composing by $J$ we get $J[\mathcal{S},X]_{\mathcal{T}E}=0$ that means $X$ is a Newtonoid. Therefore, $\mathrm{v}[\mathcal{S},X]_{\mathcal{T}E}=\mathrm{v}[ \mathcal{S},\mathrm{v}X]_{\mathcal{T}E}+\mathrm{v}[\mathcal{S},\mathrm{h}X]_{ \mathcal{T}E}=\nabla (\mathrm{v}X)+\Phi (X)$ and using (26) we get $\nabla (J\nabla X)+\Phi (X)=0.$
\hbox{\rlap{$\sqcap$}$\sqcup$}\\
For $f\in C^\infty (E)$ and $X\in \Gamma (\mathcal{T}E\backslash \{0\})$ we define the product \begin{equation*} f*X=(Id+J\circ \mathcal{L}_{\mathcal{S}})(fX)=fX+fJ[\mathcal{S},X]_{\mathcal{T}E}+\mathcal{S}(f)JX, \end{equation*} and remark that a section $X\in \Gamma (\mathcal{T}E\backslash \{0\})$ is a Newtonoid if and only if \begin{equation*} X=X^\alpha (x,y)*\mathcal{X}_\alpha . \end{equation*} If $X\in \frak{X}_{\mathcal{S}}$ then \begin{equation*} f*X=fX+\mathcal{S}(f)JX. \end{equation*} The next result proves that the canonical nonlinear connection can be determined by symmetry. \begin{proposition} Let us consider a semispray $\mathcal{S}$, an arbitrary nonlinear connection $\mathcal{N}$ and $\nabla $ the dynamical covariant derivative. The following conditions are equivalent:\\ $i)$ $\nabla$ restricts to $\nabla :\frak{X}_{\mathcal{S}}\rightarrow \frak{X}_{\mathcal{S}}$ and satisfies the Leibniz rule with respect to the $*$ product.\\ $ii)$ $\nabla J=0$,\\ $iii)$ $\mathcal{L}_{\mathcal{S}}J+\mathcal{N}=0,$\\ $iv)$ $\mathcal{N}_\alpha ^\beta =\frac 12\left( -\frac{\partial \mathcal{S}^\beta }{\partial y^\alpha }+y^\varepsilon L_{\alpha \varepsilon }^\beta \right)$. \end{proposition}
\textbf{Proof}. For $ii)\Rightarrow i)$ we consider $X\in \frak{X}_{\mathcal{ S}}$ and using (26) we have $\mathrm{v}X=J(\nabla X)$ which leads to $\nabla (\mathrm{v}X)=\nabla (J\nabla X)$. It results $(\nabla \mathrm{v})X+\mathrm{v }(\nabla X)=(\nabla J)(\nabla X)+J\nabla (\nabla X)$ and using the relations $\nabla \mathrm{v}=0$ and $\nabla J=0$ we obtain $\mathrm{v}(\nabla X)=J\nabla (\nabla X)$ which implies $\nabla X\in \frak{X}_{\mathcal{S}}$ . For $X\in \frak{X}_{\mathcal{S}}$ we have \begin{equation*} \nabla \left( f*X\right) =\nabla (fX+\mathcal{S}(f)JX)=\mathcal{S} (f)X+f\nabla X+\mathcal{S}^2(f)JX+\mathcal{S}(f)\nabla (JX), \end{equation*} \begin{equation*} \nabla f*X+f*\nabla X=\mathcal{S}(f)X+\mathcal{S}^2(f)JX+f\nabla X+\mathcal{S }(f)J(\nabla X). \end{equation*} But $\nabla (JX)=(\nabla J)X+J(\nabla X)$ and from $\nabla J=0$ we obtain $ \nabla (JX)=J(\nabla X)$ which leads to $\nabla \left( f*X\right) =\nabla f*X+f*\nabla X$.
For $i)\Rightarrow ii)$ we consider the set $\frak{X}_{\mathcal{S}}\cup \Gamma ^{\mathrm{v}}(\mathcal{T}E\backslash \{0\})$, which is a set of generators for $\Gamma (\mathcal{T}E\backslash \{0\})$. We have $\nabla J(X)=0$ for $X\in \Gamma ^{\mathrm{v}}(\mathcal{T}E\backslash \{0\})$, and for $X\in \frak{X}_{\mathcal{S}}$, using $\nabla \left( f*X\right) =\nabla f*X+f*\nabla X$, it follows that $\mathcal{S}(f)\nabla (JX)=\mathcal{S}(f)J(\nabla X),$ which implies $\mathcal{S}(f)(\nabla J)X=0,$ for an arbitrary function $f\in C^\infty (E\backslash \{0\})$. Therefore, $\nabla J=0$ on $\frak{X}_{\mathcal{S}}$, which ends the proof. The equivalence of the conditions $ii)$, $iii)$, $iv)$ has been proved in Theorem 1.
\hbox{\rlap{$\sqcap$}$\sqcup$}
Next, we consider the dynamical covariant derivative $\nabla$ induced by the semispray $\mathcal{S}$, the canonical nonlinear connection $\mathcal{N}=- \mathcal{L}_{\mathcal{S}}J$ and find the invariant equations of dynamical and Lie symmetries.
\begin{proposition} A section $X\in \Gamma (\mathcal{T}E\backslash \{0\})$ is a dynamical symmetry if and only if $X$ is a Newtonoid and \begin{equation} \nabla ^2JX+\Phi (X)=0, \end{equation} which locally yields \begin{equation*} \nabla ^2X^\alpha +\mathcal{R}_\beta ^\alpha X^\beta =0. \end{equation*} \end{proposition}
\textbf{Proof}. From (20) it follows that $\nabla J=0$, and using (27) and (17) we get (28). Next, using (25) and (14), the local components of the vertical section $\nabla ^2JX+\Phi (X)$ are $\nabla ^2X^\alpha +\mathcal{R}_\beta ^\alpha X^\beta$.
\hbox{\rlap{$\sqcap$}$\sqcup$}
\begin{proposition} A section $\widetilde{X}\in \Gamma (E\backslash \{0\})$ is a Lie symmetry of $\mathcal{S}$ if and only if \begin{equation} \nabla ^2\widetilde{X}^v+\Phi (\widetilde{X}^c)=0 \end{equation} \end{proposition}
\textbf{Proof}. Using (28) and the relation $J(\widetilde{X}^c)=\widetilde{X} ^v$ we obtain (29).
\hbox{\rlap{$\sqcap$}$\sqcup$}\\
Let us consider in the following a regular Lagrangian $L$ on $E$, the Cartan 1-section $\theta_L$, the symplectic structure $\omega_L=d^E\theta_L$, the energy function $E_L$ and the induced canonical semispray $\mathcal{S}$ with the components given by the relation (9).
\begin{proposition} If $\widetilde{X}$ is a section on $E$ such that $\mathcal{L}_{\widetilde{X}^c}\theta _L$ is closed and $d^E(\widetilde{X}^cE_L)=0$, then $\widetilde{X}$ is a Lie symmetry of the canonical semispray $\mathcal{S}$ induced by $L$. \end{proposition}
\textbf{Proof}. We have \begin{eqnarray*} i_{[\widetilde{X}^c,\mathcal{S}]}\omega _L &=&\mathcal{L}_{\widetilde{X} ^c}(i_{\mathcal{S}}\omega _L)-i_{\mathcal{S}}(\mathcal{L}_{\widetilde{X} ^c}\omega _L)=-\mathcal{L}_{\widetilde{X}^c}d^EE_L-i_{\mathcal{S}}(\mathcal{L }_{\widetilde{X}^c}d^E\theta _L) \\ &=&-d^E\mathcal{L}_{\widetilde{X}^c}E_L-i_{\mathcal{S}}d^E(\mathcal{L}_{ \widetilde{X}^c}\theta _L)=-d^E(\widetilde{X}^cE_L)-i_{\mathcal{S}}d^E( \mathcal{L}_{\widetilde{X}^c}\theta _L)=0.\end{eqnarray*} But $\omega _L$ is a symplectic structure ($L$ is regular) and we get $[ \widetilde{X}^c,\mathcal{S}]=0$ which ends the proof.
\hbox{\rlap{$\sqcap$}$\sqcup$}
\begin{definition} a) A section $X\in \Gamma (\mathcal{T}E\backslash \{0\})$ is called a Cartan symmetry of the Lagrangian $L$ if $\mathcal{L}_X\omega _L=0$ and $\mathcal{L}_XE_L=0$.\\ b) A function $f\in C^\infty (E)$ is a constant of motion (or a conservation law) for the Lagrangian $L$ if $\mathcal{S}(f)=0$. \end{definition}
\begin{proposition} The canonical semispray induced by the regular Lagrangian $L$ is a Cartan symmetry. \end{proposition}
\textbf{Proof}. Using the relation $i_{\mathcal{S}}\omega_L=-d^EE_L$ and the skew symmetry of the symplectic 2-section $\omega _L$ we obtain \[ 0=i_{\mathcal{S}}\omega _L(\mathcal{S})=-d^EE_L(\mathcal{S})=-\mathcal{S}(E_L)=- \mathcal{L}_SE_L. \] Also, from $d^E\omega _L=0$ we get \[ \mathcal{L}_{\mathcal{S}}\omega _L=d^Ei_{\mathcal{S}}\omega _L+i_{\mathcal{S} }d^E\omega _L=-d^E(d^EE_L)=0, \] and it results that the semispray $\mathcal{S}$ is a Cartan symmetry.
\hbox{\rlap{$\sqcap$}$\sqcup$}
\begin{proposition} A Cartan symmetry $X$ of the Lagrangian $L$ is a dynamical symmetry for the canonical semispray $\mathcal{S}$. \end{proposition}
\textbf{Proof}. From the symplectic equation $i_{\mathcal{S}}\omega _L=-d^EE_L$, applying the Lie derivative to both sides, we obtain \begin{equation*} \mathcal{L}_X(i_{\mathcal{S}}\omega _L)=-\mathcal{L}_Xd^EE_L=-d^E\mathcal{L}_XE_L=0. \end{equation*} Also, using the formula $i_{[X,Y]_{\mathcal{T}E}}=\mathcal{L}_X\circ i_Y-i_Y\circ \mathcal{L}_X$, it follows that \begin{equation*} \mathcal{L}_X(i_{\mathcal{S}}\omega _L)=-i_{[\mathcal{S},X]_{\mathcal{T}E}}\omega _L+i_{\mathcal{S}}\mathcal{L}_X\omega _L=-i_{[\mathcal{S},X]_{\mathcal{T}E}}\omega _L \end{equation*} which yields \begin{equation} i_{[\mathcal{S},X]_{\mathcal{T}E}}\omega _L=0. \end{equation} But $\omega _L$ is a symplectic 2-section and we conclude that $[\mathcal{S},X]_{\mathcal{T}E}=0$, so $X$ is a dynamical symmetry.
\hbox{\rlap{$\sqcap$}$\sqcup$}\\ Since Lie and exterior derivatives commute, we obtain \[ d^E\mathcal{L}_X\theta _L=\mathcal{L}_Xd^E\theta _L=\mathcal{L}_X\omega _L=0, \] so that, for a Cartan symmetry, $\mathcal{L}_X\theta _L$ is a closed 1-section. \begin{definition} A Cartan symmetry $X$ is said to be an exact Cartan symmetry if the 1-section $\mathcal{L}_X\theta _L$ is exact. \end{definition} The next result proves that there is a one-to-one correspondence between exact Cartan symmetries and conservation laws. Also, if $X$ is an exact Cartan symmetry, then there is a function $f\in C^\infty (E)$ such that $\mathcal{L}_X\theta _L=d^{E}f$. \begin{proposition} If $X$ is an exact Cartan symmetry, then $f-\theta_L(X)$ is a conservation law for the Lagrangian $L$. Conversely, if $f\in C^\infty (E)$ is a conservation law for $L$, then the unique solution $X\in \Gamma (\mathcal{T}E\backslash \{0\})$ of the equation $i_X\omega_L=-d^Ef$ is an exact Cartan symmetry. \end{proposition} \textbf{Proof}. We have $\mathcal{S}(f-\theta _L(X)) =d^E(f-\theta _L(X))(\mathcal{S})=\left( \mathcal{L}_X\theta _L-d^Ei_X(\theta _L)\right) (\mathcal{S})=i_Xd^E\theta _L(\mathcal{S}) =i_X\omega _L(\mathcal{S})=-i_{\mathcal{S}}\omega _L(X)=d^EE_L(X)=0,$ and it follows that $f-\theta _L(X)$ is a conservation law for the dynamics associated to the regular Lagrangian $L$. Conversely, if $X$ is the solution of the equation $i_X\omega_L=-d^Ef$, then $\mathcal{L}_X\theta _L=i_X\omega _L+d^Ei_X\theta _L=d^E\left( \theta _L(X)-f\right)$ is an exact 1-section. Consequently, $0=d^E\mathcal{L}_X\theta _L=\mathcal{L}_Xd^E\theta _L=\mathcal{L}_X\omega _L.$ Also, $f$ is a conservation law, and we have $0=\mathcal{S}(f)=d^Ef(\mathcal{S})=-i_X\omega _L(\mathcal{S})=i_{\mathcal{S}}\omega _L(X)=-d^EE_L(X)=-X(E_L).$ Therefore, we obtain $\mathcal{L}_XE_L=0$ and $X$ is an exact Cartan symmetry.
\hbox{\rlap{$\sqcap$}$\sqcup$} \\We have to mention that the Noether type theorems for Lagrangian systems on Lie algebroids are studied in \cite{Ca, Ma2} and Jacobi sections for second order differential equations on Lie algebroids are investigated in \cite{Ca1}. \quad
\subsection{Example}
Next, we consider an example from optimal control theory and show that the framework of Lie algebroids is more useful than the tangent bundle for calculating some symmetries of the dynamics induced by a Lagrangian function. Let us consider the following distributional system in $\Bbb{R}^3$ (driftless control affine system) \cite{Po2'}: \[ \left\{ \begin{array}{l} \dot x^1=u^1+u^2x^1 \\ \dot x^2=u^2x^2 \\ \dot x^3=u^2 \end{array} \right. \] Let $x_0$ and $x_1$ be two points in $\Bbb{R}^3$. An optimal control problem consists of finding the trajectories of our control system which connect $x_0 $ and $x_1$ and minimize the Lagrangian \[ {\min }\int_0^T\mathcal{L}(u(t))dt,\ \mathcal{L} (u)=\frac 12\left( (u^1)^2+(u^2)^2\right) ,\quad x(0)=x_0,\ x(T)=x_1, \] where $\dot x^i=\frac{dx^i}{dt}$ and $u^1,u^2$ are control variables. From the system of differential equations we obtain $u^2=\dot x^3$, $u^1=\dot x^1-\dot x^3x^1$. The Lagrangian function on the tangent bundle $T\Bbb{R}^3$ has the form \[ \mathcal{L}=\frac 12\left( (\dot x^1-\dot x^3x^1)^2+(\dot x^3)^2\right) , \] with the constraint \[ \dot x^2=\dot x^3x^2. \] Then, using the Lagrange multiplier $\lambda =\lambda (t)$, we obtain the total Lagrangian (including the constraints) given by \[ L(x,\dot x)=\mathcal{L}(x,\dot x)+\lambda \left( \dot x^2-\dot x^3x^2\right) =\frac 12\left( (\dot x^1-\dot x^3x^1)^2+(\dot x^3)^2\right) +\lambda \left( \dot x^2-\dot x^3x^2\right) . \] We observe that the Hessian matrix of $L$ with respect to the velocities is singular, so $L$ is a degenerate Lagrangian (not regular). The corresponding Euler-Lagrange equations lead to a complicated system of second order differential equations.
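The singularity of the velocity Hessian of $L$ can be confirmed symbolically. The following SymPy sketch (ours, not part of the original paper; the symbol names are hypothetical) builds the total Lagrangian and checks that the determinant of its Hessian in the velocities vanishes:

```python
import sympy as sp

x1, x2, lam = sp.symbols('x1 x2 lam')
v1, v2, v3 = sp.symbols('v1 v2 v3')   # the velocities xdot^1, xdot^2, xdot^3

# Total Lagrangian L = (1/2)((v1 - v3*x1)^2 + v3^2) + lam*(v2 - v3*x2)
L = sp.Rational(1, 2)*((v1 - v3*x1)**2 + v3**2) + lam*(v2 - v3*x2)

# Hessian of L with respect to the velocities only
H = sp.hessian(L, (v1, v2, v3))

# The middle row/column (the v2 direction) is identically zero,
# so the Hessian is singular and L is degenerate (not regular).
assert sp.simplify(H.det()) == 0
```

The vanishing determinant reflects that $L$ is linear in $\dot x^2$, which is precisely why the semispray coefficients cannot be solved for from $i_{\mathcal{S}}\omega_L=-dE_L$ on the tangent bundle.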
Moreover, because the Lagrangian is not regular, we cannot obtain the explicit coefficients of the semispray $\mathcal{S}$ from the equation $i_{\mathcal{S}}\omega _L=-dE_L$, and it is difficult to study the symmetries of the SODE in this case.\\ For this reason, we will use a different approach, considering the framework of Lie algebroids. The system can be written in the following form \begin{eqnarray*} \left. \dot x=u^1X_1+u^2X_2,\quad x=\left( \begin{array}{c} x^1 \\ x^2 \\ x^3 \end{array} \right) \in \Bbb{R}^3,\ X_1=\left( \begin{array}{c} 1 \\ 0 \\ 0 \end{array} \right) ,\ X_2=\left( \begin{array}{c} x^1 \\ x^2 \\ 1 \end{array} \right) .\right. \end{eqnarray*} The associated distribution $\Delta =span\{X_1,X_2\}$ has constant rank $2$ and is holonomic, because \[ X_1=\frac \partial {\partial x^1},\quad X_2=x^1\frac \partial {\partial x^1}+x^2\frac \partial {\partial x^2}+\frac \partial {\partial x^3},\quad [X_1,X_2]=X_1. \] By the Frobenius theorem, the distribution $\Delta $ is integrable; it determines a foliation, and two points can be joined by an optimal trajectory if and only if they are situated on the same leaf (see \cite{Po2'}). In order to apply the theory of Lie algebroids, we take the Lie algebroid to be the distribution itself, $E=\Delta $, and the anchor $\sigma :E\rightarrow T\Bbb{R}^3$ is the inclusion, with the components \[ \sigma _\alpha ^i=\left( \begin{array}{cc} 1 & x^1 \\ 0 & x^2 \\ 0 & 1 \end{array} \right) . \] From the relation \[ \lbrack X_\alpha ,X_\beta ]=L_{\alpha \beta }^\gamma X_\gamma ,\quad \alpha ,\beta ,\gamma =1,2, \] we obtain the non-zero structure functions \[ L_{12}^1=1,\ L_{21}^1=-1. \] The components of the semispray from (9) are given by \[ \mathcal{S}^1=-u^1u^2,\ \mathcal{S}^2=(u^1)^2. \] The functions $\mathcal{S}^\alpha $ are homogeneous of degree 2 in $u$, so $\mathcal{S}$ is a spray.
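The bracket relation $[X_1,X_2]=X_1$, and hence the structure functions $L_{12}^1=1=-L_{21}^1$, can be verified mechanically. A short SymPy sketch (ours, not part of the original computation), using the coordinate formula $[X,Y]^i=X^j\partial Y^i/\partial x^j-Y^j\partial X^i/\partial x^j$:

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
coords = [x1, x2, x3]

X1 = sp.Matrix([1, 0, 0])       # X_1 = d/dx^1
X2 = sp.Matrix([x1, x2, 1])     # X_2 = x^1 d/dx^1 + x^2 d/dx^2 + d/dx^3

def lie_bracket(X, Y):
    # [X, Y]^i = X^j dY^i/dx^j - Y^j dX^i/dx^j
    return sp.Matrix([
        sum(X[j]*sp.diff(Y[i], coords[j]) - Y[j]*sp.diff(X[i], coords[j])
            for j in range(3))
        for i in range(3)])

assert lie_bracket(X1, X2) == X1    # [X_1, X_2] = X_1, hence L^1_{12} = 1
```

Antisymmetry of the bracket then gives $L_{21}^1=-1$, matching the structure functions stated above.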
By straightforward computation we obtain the expression of the canonical spray induced by $\mathcal{L}$ \[ \mathcal{S}(x,u)=(u^1+u^2x^1)\frac \partial {\partial x^1}+u^2x^2\frac \partial {\partial x^2}+u^2\frac \partial {\partial x^3}-u^1u^2\frac \partial {\partial u^1}+(u^1)^2\frac \partial {\partial u^2}. \] From the Proposition 17 it results that $\mathcal{S}(x,u)$ is a Cartan symmetry of the dynamics associated to the regular Lagrangian $\mathcal{L}$ on Lie algebroids.\\ The coefficients of the canonical nonlinear connection $\mathcal{N}=- \mathcal{L}_{\mathcal{S}}J$ are given by \[ \mathcal{N}_1^1=u^2,\ \mathcal{N}_2^1=0,\ \mathcal{N}_1^2=u^1,\ \mathcal{N}_2^2=0, \] and the components of the Jacobi endomorphism from (21) have the form \[ \mathcal{R}_1^1=-(u^2)^2,\ \mathcal{R}_1^2=-u^1u^2,\ \mathcal{R} _2^1=u^1u^2,\ \mathcal{R}_2^2=(u^1)^2. \] Also, the non-zero coefficients of the curvature from (11) of $\mathcal{N}$ are \[ \mathcal{R}_{12}^1=u^2,\ \mathcal{R}_{12}^2=u^1,\ \mathcal{R}_{21}^1=-u^2,\ \mathcal{R}_{21}^2=-u^1, \] and we obtain that the Jacobi endomorphism is the contraction with $\mathcal{S}$ of the curvature of $\mathcal{N}$, or locally $\mathcal{R}_\beta ^\alpha =\mathcal{R}_{\varepsilon \beta }^\alpha u^\varepsilon $.\\ The Euler-Lagrange equations on Lie algebroids given by (see \cite{We}) \[ \frac{dx^i}{dt}=\sigma _\alpha ^iu^\alpha ,\quad \frac d{dt}\left( \frac{ \partial \mathcal{L}}{\partial u^\alpha }\right) =\sigma _\alpha ^i\frac{ \partial \mathcal{L}}{\partial x^i}-L_{\alpha \beta }^\varepsilon u^\beta \frac{\partial \mathcal{L}}{\partial u^\varepsilon }, \] lead to the following differential equations \[ \dot u^1=-u^1u^2,\quad \dot u^2=(u^1)^2, \] which can be written in the form \[ \frac{dx^i}{dt}=\sigma _\alpha ^iu^\alpha ,\quad \frac{du^\alpha }{dt}= \mathcal{S}^\alpha (x,u), \] and give the integral curves of $\mathcal{S}$. 
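As an informal cross-check of the component formulas above (our own script; the variable names are ours), one can verify with SymPy that the energy $E_{\mathcal{L}}=\frac12((u^1)^2+(u^2)^2)$ is constant along the integral curves $\dot u^\alpha=\mathcal{S}^\alpha$, and that the Jacobi endomorphism equals the contraction of the curvature with $\mathcal{S}$, i.e. $\mathcal{R}_\beta^\alpha=\mathcal{R}_{\varepsilon\beta}^\alpha u^\varepsilon$:

```python
import sympy as sp

u1, u2 = sp.symbols('u1 u2')
S = {1: -u1*u2, 2: u1**2}            # semispray coefficients S^1, S^2
u = {1: u1, 2: u2}

# Energy E = ((u1)^2 + (u2)^2)/2 is conserved along du^a/dt = S^a:
E = (u1**2 + u2**2)/2
dE = sp.diff(E, u1)*S[1] + sp.diff(E, u2)*S[2]
assert sp.simplify(dE) == 0

# Jacobi endomorphism R^a_b (row a, column b) as stated in the text:
R = sp.Matrix([[-u2**2, u1*u2],
               [-u1*u2, u1**2]])

# Non-zero curvature components R^a_{eb}, keyed by (a, e, b):
Rcurv = {(1, 1, 2): u2, (2, 1, 2): u1, (1, 2, 1): -u2, (2, 2, 1): -u1}

# Check the contraction identity R^a_b = R^a_{eb} u^e:
for a in (1, 2):
    for b in (1, 2):
        contraction = sum(Rcurv.get((a, e, b), 0)*u[e] for e in (1, 2))
        assert sp.simplify(R[a - 1, b - 1] - contraction) == 0
```

Both checks pass symbolically, consistent with the stated coefficients of $\mathcal{N}$, $\mathcal{R}_\beta^\alpha$ and $\mathcal{R}_{\alpha\beta}^\gamma$.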
The Cartan 1-section $\theta _{\mathcal{L}}$ has the form \[ \theta _{\mathcal{L}}=u^1dx^1+u^2(x^1dx^1+x^2dx^2+dx^3), \] and the symplectic structure is $\omega _{\mathcal{L}}=d^E\theta _{\mathcal{L}}$. The energy of the Lagrangian $\mathcal{L}$ is \[ E_{\mathcal{L}}=\frac 12\left( (u^1)^2+(u^2)^2\right). \] For the optimal solution of the control system (using the framework of Lie algebroids) see \cite{Po2'}.\\
\quad \\ \textbf{Conclusions}. The main purpose of this paper is to study the symmetries of SODE on Lie algebroids and the relations between them, using the dynamical covariant derivative and the Jacobi endomorphism. A semispray $\mathcal{S}$ together with an arbitrary nonlinear connection $\mathcal{N}$ defines a dynamical covariant derivative and a Jacobi endomorphism. Let us remark that at this point we do not have any relation between $\mathcal{S}$ and the nonlinear connection $\mathcal{N}.$ Such a relation is obtained by imposing the compatibility condition between the dynamical covariant derivative and the tangent structure, $\nabla J=0$, which fixes the canonical nonlinear connection $\mathcal{N}=-\mathcal{L}_{\mathcal{S}}J$. This canonical nonlinear connection depends only on the semispray. In this case we have the decomposition $\nabla =\mathcal{L}_{\mathcal{S}}+\Bbb{F}+\mathcal{J}-\Phi $, which can be compared with the tangent bundle case from \cite{Bu3, Ma}. Also, in the case of homogeneous SODE (sprays), the dynamical covariant derivative coincides with the Berwald linear connection and the Jacobi endomorphism is the contraction with $\mathcal{S}$ of the curvature of the nonlinear connection. We study dynamical symmetries, Lie symmetries, Newtonoid sections and Cartan symmetries on Lie algebroids and find their invariant equations with the help of the dynamical covariant derivative and the Jacobi endomorphism. Finally, we give an example from optimal control theory which shows that the framework of Lie algebroids is more useful than the tangent bundle for finding the symmetries of the dynamics induced by a Lagrangian function. For further developments one can study the symmetries using the $k$-symplectic formalism on Lie algebroids given in \cite{Le2}.\\ \textbf{Acknowledgments}. The author wishes to thank the referees for useful comments and suggestions concerning this paper.
Author's address: \\University of Craiova,\\Dept. of Statistics and Informatics,\\ 13, Al. I. Cuza, st., Craiova 200585, Romania\\e-mail: liviupopescu@central.ucv.ro; liviunew@yahoo.com
\end{document}
\begin{document}
\title{A Proof of the Conjecture by Carpentier-De Sole-Kac}
\begin{center} 0. Introduction \end{center}
Let $\mathcal{K}$ be a differential field with derivation $\partial$, and let $R$ be a differential subring of $\mathcal{K}$. We consider an $n \times n$ matrix $M$ whose elements lie in the ring $R[\partial]$. Several useful notions can be defined for such a matrix, including the following. \newline\newline For a matrix $A$ with entries in the ring $\mathcal{K}[\partial]$, the Dieudonn\'e determinant has the form $det A = det_1 A\lambda^d$, where $det_1 A \in \mathcal{K}$, $\lambda$ is an indeterminate, and $d$ is an integer. Some of its characterizing properties are $det A \cdot det B = det AB$, that $det A$ changes sign upon permuting two rows of $A$, and that subtracting $h$ times $A$'s ith row from its jth row leaves $det A$ unchanged, when $h \in \mathcal{K}[\partial]$ and $i \neq j$. Furthermore, if $A$ is upper triangular, then $det_1 A$ is the product of the leading coefficients of the diagonal entries, and $d$ is the sum of their orders. In such a case, if any of the diagonal entries is 0, then $det_1 A=0$ and $d=-\infty$. That a determinant with these properties exists is shown in a more general context in [Die43]. \newline\newline \textbf{Definition 0.1.} The total order of a matrix $A$ is $$ tord(A) = \max_{\sigma \in S_n}\sum_{i=1}^{n}ord(A_{i,\sigma (i)}) $$ where $S_n$ denotes the group of permutations of $\{1,2,\dots ,n\}$. We can then also define the degeneracy degree of $A$ by $$ dd(A) = tord(A) - d(A) $$ where $d(A)$ is the order of $det A$. \newline\newline \textbf{Definition 0.2.} A system of integers $(N_1,N_2,\dots N_n,h_1,h_2,\dots,h_n)$ is called a majorant of $A$ if for all $i,j \in \{1,2,\dots ,n\}$ $$ ord(A_{ij})\leq N_j - h_i. $$ Given a matrix $A$ and a majorant of that matrix, one can associate a characteristic matrix $\bar{A}(\lambda)$ to the majorant by the equation $$ \bar{A}_{ij}(\lambda) = A_{ij;N_j-h_i} \lambda^{N_j - h_i} $$ where $A_{ij;N_j-h_i}$ is the coefficient of $\partial^{N_j-h_i}$ in $A_{ij}$.
We also note that for any majorant $$ \sum_{i=1}^{n}(N_i-h_i) \geq tord(A). $$ We call a majorant optimal when equality holds in the above statement.
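These notions are finitary and easy to experiment with. The following Python sketch (ours; the function names and the example matrix are hypothetical) computes $tord$ by brute force over permutations from a matrix of entry orders, and checks a candidate majorant for the defining inequality and for optimality:

```python
from itertools import permutations

def tord(ord_matrix):
    """Total order: max over permutations sigma of sum_i ord(A_{i,sigma(i)})."""
    n = len(ord_matrix)
    return max(sum(ord_matrix[i][s[i]] for i in range(n))
               for s in permutations(range(n)))

def is_majorant(ord_matrix, N, h):
    """Check ord(A_ij) <= N_j - h_i for all i, j."""
    n = len(ord_matrix)
    return all(ord_matrix[i][j] <= N[j] - h[i]
               for i in range(n) for j in range(n))

def is_optimal(ord_matrix, N, h):
    """A majorant is optimal iff sum_i (N_i - h_i) equals tord(A)."""
    return (is_majorant(ord_matrix, N, h)
            and sum(N) - sum(h) == tord(ord_matrix))

# Orders of a 2x2 operator matrix, e.g. A = [[d^2, d], [d, 1]]:
orders = [[2, 1], [1, 0]]
assert tord(orders) == 2                              # both permutations give 2
assert is_optimal(orders, N=[2, 1], h=[0, 1])         # equality holds
assert not is_optimal(orders, N=[3, 1], h=[0, 1])     # a majorant, not optimal
```

(Zero entries would have order $-\infty$ and can be represented by `float('-inf')`; the degeneracy degree additionally needs $d(A)$, i.e. the Dieudonn\'e determinant, which this sketch does not compute.)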
The following theorem of [CDSK12] follows from the results in [Huf65]. \newline\newline \textbf{Theorem 0.3.} Let $A$ be a matrix with elements in $\mathcal{K}[\partial]$ and let $det A \neq 0$. Then: \begin{enumerate}[label = \roman*.] \item $dd(A) \geq 0$
\item there exists an optimal majorant of A
\item if $dd(A) \geq 1$, then $det(\bar{A}(\lambda)) = 0$ for any majorant
\item if $dd(A) = 0$, then $det(\bar{A}(\lambda)) = 0$ for any majorant which is not optimal, and $det(\bar{A}(\lambda)) = det A$ for any majorant which is optimal \end{enumerate}
It is obvious that $det_1 A \in R$ if $dd(A) = 0$, and it was shown in [CDSK12] that $det_1 A$ always lies in the integral closure of $R$ in $\mathcal{K}$. However, in general $det_1 A \notin R$; there is a counterexample in [CDSK12] with $dd(A) = 2$. It was conjectured in [CDSK12] that $det_1 A \in R$ when $dd(A) = 1$. In the present paper this conjecture is proved.
\begin{center} 1. Proof of the Carpentier-De Sole-Kac conjecture. \end{center}
\textbf{Proposition 1.1.} Let $\mathcal{K}$ be a differential field, and let $R$ be a differential subring of $\mathcal{K}$. Now, let $M$ be a matrix whose elements lie in $R[\partial]$, and let $D = det_1 M$. If $dd(M) = 1$, then $D \in R$. \newline\newline Proof. First, note that we may increase the order of each term in a given row or column by either left or right multiplying by a matrix of the form $diag(1\: \ldots 1\: \partial \: 1\: \ldots 1)$. This operation results in a matrix whose entries are still in $R[\partial]$ and leaves the coefficient of the determinant unchanged. Furthermore, it increases both the total order and the degree of the determinant by 1, leaving the degeneracy degree unchanged. Next, we would like to extract an optimal majorant $N_1, N_2,\ldots N_n, h_1,h_2,\ldots h_n$. Both the definition and some of its basic properties can be found in [CDSK12, Def 4.6] and [CDSK12, Thm 4.7]. An optimal majorant exists as long as $det M\neq 0$, which we may assume, since otherwise $D=0$ lies in $R$ anyway. Now, define $N$ to be the largest of all the $N_i$ and $h$ the minimum of the $h_i$. Note that when we use the process described earlier to increase the degrees of the elements of column i by 1, we can get an optimal majorant for the new matrix by simply taking the old majorant and increasing $N_i$ by 1. Similarly, if we increase the degrees in row i by 1, then the new majorant is just the old majorant with $h_i$ decreased by 1.
We can now use this to create a new matrix $M'$ as follows. Increase the degrees in column i by 1 $N - N_i$ times, and increase the degrees in row i by 1 $h_i - h$ times. Then, $M'$ will have a determinant coefficient of $D$, degeneracy degree 1, and an optimal majorant of $N,N,\ldots N,h,h,\ldots h$. From this optimal majorant, we may extract two properties of $M'$. First, every entry has degree less than or equal to $N-h$. Also, the total order is $n(N-h)$, found by summing over the majorant. These two facts together imply that every row must contain at least one term of degree exactly $N-h$. In fact, they show something stronger, but this is all we will need.
We now know that the largest degree term has degree $N-h$. We can now express the matrix $M'$ in the form $A\partial^{N-h} + B\partial^{N-h-1} \ldots$, where $A$ is the leading coefficient matrix of terms of degree $N-h$, $B$ contains the terms of degree $N-h-1$, and so on.
Now, we know that since $M'$ is degenerate, its characteristic matrix has determinant 0 (see [CDSK12,Thm4.7]). In this case, the characteristic matrix is exactly $A$. Letting the rows of $A$ be called $A_1,A_2,\ldots A_n$, the fact that $A$ has determinant 0 implies a linear relation amongst the rows with coefficients in the fraction field of $R$. We may clear denominators to get a relation with coefficients strictly in the ring $R$. $$c_1A_1 + c_2A_2 \ldots + c_nA_n = 0$$
Next, multiply the first row of $M'$ by $c_1$ via the use of an elementary matrix. From the total order and degeneracy of $M'$, we can see that the determinant of this product is $c_1D\lambda^{n(N-h)-1}$. We can further alter the matrix by next subtracting $c_i$ times the ith row from the first row, for each $i\neq1$. This operation leaves the determinant unchanged, and yields a new matrix we call $M''$. We express $M''$ as a sum similar to the one we used before, so that $M'' = A'\partial^{N-h} + B'\partial^{N-h-1} \ldots$ for new matrices $A'$ and $B'$. These matrices take on the following forms: $$ A'= \begin{bmatrix} 0\\ A_2\\ A_3\\ \vdots\\ A_n\\ \end{bmatrix} \qquad\qquad B'= \begin{bmatrix} c_1B_1 + c_2B_2 \ldots + c_nB_n\\ B_2\\ B_3\\ \vdots\\ B_n\\ \end{bmatrix} $$
We now note that the total order of $M''$ is at least $n(N-h)-1$, the order of its determinant. On the other hand, $N,N,\ldots N,h+1,h,h,\ldots h$ is a majorant, which implies that not only is it an optimal majorant, but that $M''$ is non-degenerate, and thus that its determinant is the determinant of the characteristic matrix [CDSK12,Thm.4.7]. To finish the proof, we now show that the determinant of the characteristic matrix is a multiple of $c_1$. As this is equal to $c_1D$, we can then apply the cancellation law to show that $D$ lies in the ring $R$.
The characteristic matrix is $$ M''_{char} = \begin{bmatrix} c_1B_1 + c_2B_2 \ldots + c_nB_n\\ A_2\\ A_3\\ \vdots\\ A_n\\ \end{bmatrix} $$
Since the determinant is linear in the first row, we can set the determinant of $M''_{char}$ equal to $det(M''_1) + det(M''_2) + \ldots + det(M''_n)$, where $$ M''_i = \begin{bmatrix} c_iB_i\\ A_2\\ A_3\\ \vdots\\ A_n\\ \end{bmatrix} $$
$M''_1$ has a first row that is a multiple of $c_1$, so its determinant is already a multiple of $c_1$. We now look at the other $M''_i$: Since we can multiply by elementary matrices, we can divide out the factor of $c_i$ from the top row and multiply the ith row by $c_i$, leaving the determinant unaffected. Next, for each $j\neq 1,i$, add $c_j$ times row j to row i. The resulting row i can then be simplified using our earlier linear relation and becomes a multiple of $c_1$. Thus, the determinant of $M''_i$ is a multiple of $c_1$, as desired. Below, these last steps are written out explicitly: $$ \begin{bmatrix} c_iB_i\\ A_2\\ A_3\\ \vdots\\ A_n\\ \end{bmatrix} \rightarrow \begin{bmatrix} B_i\\ A_2\\ A_3\\ \vdots\\ c_iA_i\\ \vdots\\ A_n\\ \end{bmatrix} \rightarrow \begin{bmatrix} B_i\\ A_2\\ A_3\\ \vdots\\ c_2A_2+c_3A_3+\ldots+c_nA_n\\ \vdots\\ A_n\\ \end{bmatrix} \rightarrow \begin{bmatrix} B_i\\ A_2\\ A_3\\ \vdots\\ -c_1A_1\\ \vdots\\ A_n\\ \end{bmatrix} $$ \newline\newline \begin{center} References \end{center} \begin{flushleft} [CDSK12] S. Carpentier, A. De Sole, V.G. Kac, \textit{Some Algebraic Properties of Differential Operators}, arXiv:1201.1992v1
[Die43] J. Dieudonn\'e \textit{Les d\'eterminants sur un corps non commutatif}, Bull. Soc. Math. France \textbf{71}, (1943), 27-45.
[Huf65] G. Hufford, \textit{On the characteristic matrix of a matrix of differential operators}, J. Differential Equations \textbf{1}, (1965) 27–38. \end{flushleft}
\end{document}
\begin{document}
\title{Witness for initial correlations among environments } \author{F. T. Tabesh} \affiliation{Department of Physics, University of Kurdistan, P.O. Box 66177-15175, Sanandaj, Iran} \affiliation{Turku Center for Quantum Physics, Department of Physics and Astronomy, University of Turku, FIN-20014 Turku, Finland} \author{ S. Salimi} \affiliation{Department of Physics, University of Kurdistan, P.O. Box 66177-15175, Sanandaj, Iran} \author{ A. S. Khorashad} \email{a.sorouri@uok.ac.ir} \affiliation{Department of Physics, University of Kurdistan, P.O. Box 66177-15175, Sanandaj, Iran} \date{\today} \begin{abstract} A quantum system inevitably interacts with its surroundings. In general, one does not have detailed information on an environment. Identifying the environmental features can help us to control the environment and its effects on the dynamics of an open system. Here, we consider a tripartite system and introduce a witness for the initial correlations among environments by means of the concept of the trace distance. Due to the existence of initial environmental correlations, a tight upper bound is obtained for the growth of the trace distance between states of an open quantum system. Therefore, the initial correlations among the environments, subject to particular conditions, can be detected by measurements on the open system. \end{abstract} \pacs{03.65.Yz, 42.50.Lc, 03.65.Ud, 05.30.Rt} \maketitle \section{Introduction} In the real world, quantum systems are open systems interacting with their environments. The dynamics of an open system can be described by either a Markovian or a non-Markovian approach. Markovian dynamics is based on the assumptions that the coupling between the system under study and its environment is weak and that the initial system-environment (S-E) state is factorized, neglecting all memory effects.
Violation of any one of these conditions may lead to non-Markovian dynamics, which guarantees the existence of memory effects in the time evolution of an open system \cite{Breuer,Wolf}.\\ \indent As mentioned above, initial correlation between a system and its environment is one of the important factors determining the Markovianity or non-Markovianity of the dynamics; thus it plays a very important role in the time evolution of an open system. If there are no initial correlations, the dynamics of an open system is described by a completely positive map \cite{Breuer1,Nielsen}. In recent years, many attempts have been made to study open quantum systems with initial S-E correlations. In the presence of initial correlations, it has been shown that the dynamics of an open quantum system may not be completely positive \cite{Pechukas}. In fact, it has been indicated that entangled initial states can lead to non-completely positive maps \cite{Jordan,Carteret}. In the case that the quantum discord of the initial state vanishes, the dynamics is described by a completely positive map \cite{Rosario}. Shabani and Lidar showed that the above-mentioned condition is not only sufficient but also necessary for complete positivity of the corresponding map \cite{Shabani1,Shabani2}. Recently, some examples were provided to show that the relation between complete positivity and quantum discord does not generalize to all cases \cite{Brodutch,Buscemi,Dominy}.\\ \indent The initial S-E correlations may cause the trace distance to increase above its initial value \cite{Laine}. According to the definition of the trace distance between two arbitrary states \cite{Nielsen,Breuer}, it can be regarded as a measure of the degree of distinguishability of the two states. If the value of the trace distance during a system's evolution is not constant, one can conclude that there is a flow of information between the system and its environment \cite{Laine}.
A tight upper bound for this increase has been derived, which can be considered as a witness for initial S-E correlations \cite{Laine,Wismann,Smirne,Breuer2}.\\ \indent Therefore, a lot of effort has been put into investigating the influence of initial S-E correlations on an open system's dynamics. Unfortunately, a clear general relation between them has not yet been found, and the following questions need to be answered: How do initial environmental correlations affect the dynamics of an open system? How can we obtain information about the initial states of an environment?\\ \indent In this paper, we study the role of initial correlations among environments in the dynamics of an open system. For this purpose, we consider a tripartite system. In a tripartite system one encounters three scenarios: a system and two environments; two systems and one environment; and one system, one environment, and one ancilla. Here, we find an upper bound for the time evolution of the trace distance in the first scenario. When the trace distance grows above its initial value, the upper bound can be regarded as a witness for initial environmental correlations. We also consider some examples to illustrate the tightness of the upper bound. It should be noted that detecting initial environmental correlations may help us to characterize the environment and control its effects. In the following, we will discuss the above-mentioned questions in detail with the help of a three-qubit Heisenberg XX spin chain, two Jaynes-Cummings systems, two amplitude damping channels, and an experimental example. We will see that the initial correlations alter the information flow. Accordingly, initial correlations can be witnessed from the dynamical features of the open system.\\ \indent The paper is organized as follows. In Sec. II a review of the concept of the trace distance is provided, and its important role in determining the direction of information flow and the amount of total correlations is explained.
An upper bound for the growth of the distinguishability is derived in Sec. III. In order to witness initial correlations, backflow of information is investigated for some examples in Sec. IV. The paper concludes in Sec. V. \section{Trace distance} The trace distance of two quantum states $\rho$ and $\sigma$ is defined as \begin{equation} D(\rho,\sigma) = \dfrac{1}{2}\Vert\rho-\sigma\Vert_{1}, \end{equation} where the trace norm of an operator $A$ is defined by $\Vert A\Vert_{1}=Tr\vert A\vert =Tr\sqrt{A^{\dagger}A}$ \cite{Nielsen}. It represents a metric on the space of physical states, because $D\in[0,1]$ ($D(\rho,\sigma) = 0$ if and only if $\rho =\sigma$, and $D(\rho,\sigma) = 1$ if and only if $\rho$ and $\sigma$ have orthogonal supports) and it satisfies the triangle inequality, $D(\rho,\sigma)\leq D(\rho,\tau)+ D(\tau,\sigma)$.\\ \indent The other properties of the trace distance are its subadditivity with respect to the tensor product, \begin{equation}\label{5} D(\rho_{1}\otimes\sigma_{1},\rho_{2}\otimes\sigma_{2})\leq D(\rho_{1},\rho_{2})+D(\sigma_{1},\sigma_{2}), \end{equation} and its contractivity under all trace-preserving positive maps, i.e., $D(\Lambda\rho,\Lambda\sigma)\leq D(\rho,\sigma)$, where the equality holds if $\Lambda$ is a unitary transformation. It is well known that the trace distance can be interpreted as a measure of the distinguishability of the states; therefore, a trace-preserving positive map can never increase the distinguishability of any two quantum states \cite{Breuer}.\\ \indent The variation of the distinguishability of two states can be considered as a witness for the flow of information in an open quantum system. Let $S$ be an open quantum system interacting with an environment $E$. If $\rho^{S}_{1,2}(0)$ are two different initial states of $S$, their time evolutions obey $\rho^{S}_{1,2}(t)=\Phi_{t}\rho^{S}_{1,2}(0)$, where $\Phi_{t}$ denotes the corresponding quantum dynamical map.
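The metric properties listed above can be illustrated numerically. The following sketch is our own (not part of the original analysis); the helper `rand_state`, which samples Ginibre-random density matrices, is a name introduced here.

```python
import numpy as np

def trace_distance(rho, sigma):
    # D(rho, sigma) = (1/2) ||rho - sigma||_1; for the Hermitian difference
    # the trace norm equals the sum of the absolute eigenvalues.
    return 0.5 * np.abs(np.linalg.eigvalsh(rho - sigma)).sum()

def rand_state(dim, rng):
    # Random density matrix: G G^dagger / Tr(G G^dagger), with G Ginibre.
    g = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    rho = g @ g.conj().T
    return rho / np.trace(rho).real

rng = np.random.default_rng(0)
rho, sigma, tau = (rand_state(2, rng) for _ in range(3))

# D takes values in [0, 1] and vanishes only for identical states ...
assert 0.0 <= trace_distance(rho, sigma) <= 1.0
assert trace_distance(rho, rho) < 1e-12
# ... satisfies the triangle inequality ...
assert trace_distance(rho, sigma) <= (trace_distance(rho, tau)
                                      + trace_distance(tau, sigma) + 1e-12)
# ... and is subadditive with respect to the tensor product, Eq. (2).
assert trace_distance(np.kron(rho, tau), np.kron(sigma, tau)) <= \
    trace_distance(rho, sigma) + trace_distance(tau, tau) + 1e-12
```

For two pure states the same routine reproduces the familiar value $D=\sqrt{1-|\langle\psi|\phi\rangle|^{2}}$, which is used again in the examples below.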
The time variation of the trace distance is interpreted as the \emph{information flow}, and is given by \begin{equation} \sigma(t) = \frac{d}{dt}D\left(\rho_{1}^{S}(t),\rho_{2}^{S}(t)\right). \end{equation} Positive values of $\sigma(t)$ in some time intervals correspond to information backflow from the environment to the system, and negative values indicate information flow from the system to the environment. The quantity \begin{equation} I(\rho^{S}) = D\left(\rho_{1}^{S}(t),\rho_{2}^{S}(t)\right)-D\left( \rho_{1}^{S}(0),\rho_{2}^{S}(0)\right), \end{equation} can be regarded as a quantifier of the information exchange between an open system and its environment \cite{schmidt}. In Eq. (4), $D\left(\rho_{1}^{S}(t),\rho_{2}^{S}(t)\right)$ can be interpreted as the information inside the system at time $t$; therefore, $I(\rho^{S})$ gives the difference between the information inside the system at time $t$ and at $t=0$ \cite{Breuer2}. When both $I(\rho^{S})$ and $\sigma(t)$ are positive, one can obtain more information than is contained in the initial state of the system.\\ \indent For any state $\rho^{AB}$, the quantity $D(\rho^{AB},\rho^{A}\otimes\rho^{B})$ describes how well $\rho^{AB}$ can be distinguished from the fully uncorrelated product state $\rho^{A}\otimes\rho^{B}$. Thus, $D(\rho^{AB},\rho^{A}\otimes\rho^{B})$ can be interpreted as a measure of the total amount of correlations in the state $\rho^{AB}$ \cite{Laine}. It should be mentioned that one cannot distinguish the types of correlations by using the trace distance.\\ \indent Suppose an open system $S$ is coupled to its environment $E$, with initial states $\rho_{1,2}^{SE}(0)$.
Using the subadditivity and the triangle inequality of the trace distance, one can obtain the following inequality \cite{Laine}: \begin{eqnarray} &&I(\rho^{S})=D(\rho_{1}^{S}(t),\rho_{2}^{S}(t))-D(\rho_{1}^{S}(0),\rho_{2}^{S}(0))\leq\nonumber\\ &&D(\rho_{1}^{E}(0),\rho_{2}^{E}(0)) + \sum_{i=1}^{2}D(\rho_{i}^{SE}(0),\rho_{i}^{S}(0)\otimes\rho_{i}^{E}(0)).\nonumber\\ \end{eqnarray} The above inequality provides an upper bound on the information backflow from the environment to the system. The upper bound implies that a probable increase of the distinguishability over its initial value is due to the initial correlations in the total initial states $\rho_{i}^{SE}(0)$ and/or to different initial states of the environment $E$. Note that these terms quantify both quantum and classical correlations of the total system states.\\ \indent In the next section, using the properties of the trace distance, we obtain the upper bound on the backflow of information in tripartite systems. \begin{figure}
\caption{(Color online) Schematic diagrams of a tripartite quantum system: (a) first scenario, (b) second scenario.}
\label{fig12}
\end{figure}
\section{Dynamics of the trace distance in tripartite quantum systems} Assume a tripartite quantum system consisting of three subsystems $A$, $B$, and $C$, which can be coupled to each other. They form an isolated system described by the initial state $\rho^{ABC}(0)$. The state of the total system at time $t$ can be written as $\rho^{ABC}(t)=U_{t}\rho^{ABC}(0)U_{t}^{\dagger}$, where $U_{t}=\exp(\frac{-i Ht}{\hbar})$ represents the unitary time evolution operator of the composite system with total Hamiltonian $H$. In a tripartite system one encounters three scenarios: a system and two environments; two systems and one environment; and one system, one environment, and one ancilla. The first and the second scenarios are shown in Fig. 1. Here, we investigate the first scenario.\\
Consider the subsystem $A$ as an open system $S$ and the subsystems $B$ and $C$ as its environments; the environment $E$ thus comprises the two subsystems $B$ and $C$ [see Fig. 1(a)]. Suppose two initial states $\rho^{ABC}_{1,2}(0)$ for the total system, with corresponding reduced open-system states $\rho^{A}_{1,2}(0)=Tr_{BC}\left(\rho^{ABC}_{1,2}(0)\right)$ and environment states $\rho^{BC}_{1,2}(0)=Tr_{A}\left(\rho^{ABC}_{1,2}(0)\right)$. According to Eq. (5), the dynamics of the trace distance for the open system $A$ can be written as \begin{eqnarray} &&D\left(\rho_{1}^{A}(t),\rho_{2}^{A}(t)\right)-D\left(\rho_{1}^{A}(0),\rho_{2}^{A}(0)\right)\leq \nonumber \\ &&\hspace{8mm}\sum_{i=1}^{2}D\left(\rho_{i}^{ABC}(0),\rho_{i}^{A}(0)\otimes\rho_{i}^{BC}(0)\right)\nonumber \\ &&\hspace{8mm}+\hspace{1mm}D\left(\rho_{1}^{BC}(0),\rho_{2}^{BC}(0)\right). \end{eqnarray} As stated in the introduction, our main aim is to find a witness for initial environmental correlations; therefore, we consider the second term on the right-hand side of the above inequality.
Applying the subadditivity of the trace distance and the triangle inequality (twice) to
$ D\left(\rho_{1}^{BC}(0),\rho_{2}^{BC}(0)\right)$, one can obtain \begin{eqnarray} &&D\left(\rho_{1}^{BC}(0),\rho_{2}^{BC}(0)\right)\leq \sum_{i=1}^{2}D\left(\rho_{i}^{BC}(0),\rho_{i}^{B}(0)\otimes\rho_{i}^{C}(0)\right) \nonumber \\ &&\hspace{8mm}+\hspace{1mm}D\left(\rho_{1}^{B}(0),\rho_{2}^{B}(0)\right)+D\left(\rho_{1}^{C}(0),\rho_{2}^{C}(0)\right). \end{eqnarray} Substituting the above inequality into Eq. (6), we find \begin{eqnarray} &&D\left(\rho_{1}^{A}(t),\rho_{2}^{A}(t)\right)-D\left(\rho_{1}^{A}(0),\rho_{2}^{A}(0)\right)\leq \nonumber \\ &&\hspace{5mm}\sum_{i=1}^{2}D\left(\rho_{i}^{ABC}(0),\rho_{i}^{A}(0)\otimes\rho_{i}^{BC}(0)\right) \nonumber \\ &&\hspace{3mm}+\hspace{1mm}\sum_{i=1}^{2}D\left(\rho_{i}^{BC}(0),\rho_{i}^{B}(0)\otimes\rho_{i}^{C}(0)\right) \nonumber \\ &&\hspace{3mm}+\hspace{1mm}D\left(\rho_{1}^{B}(0),\rho_{2}^{B}(0)\right)+D\left(\rho_{1}^{C}(0),\rho_{2}^{C}(0)\right). \end{eqnarray} The above inequality generalizes the result of Eq. (5); it shows that, in the most general case, an increase of the distinguishability above its initial value implies that there are initial S-E correlations, initial correlations among the environments, or different initial environmental states.
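Because the inequality in Eq. (8) follows only from subadditivity, the triangle inequality, and contractivity, it must hold for arbitrary three-qubit states and any joint unitary. The sketch below is our own numerical check; `ptrace`, `rand_state`, and `rand_unitary` are helper names chosen here.

```python
import numpy as np

def trace_distance(rho, sigma):
    return 0.5 * np.abs(np.linalg.eigvalsh(rho - sigma)).sum()

def rand_state(dim, rng):
    g = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    rho = g @ g.conj().T
    return rho / np.trace(rho).real

def rand_unitary(dim, rng):
    # Random unitary via QR of a Ginibre matrix (phases fixed).
    g = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    q, r = np.linalg.qr(g)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def ptrace(rho, keep, n=3):
    # Partial trace of an n-qubit state, keeping the qubits in `keep`
    # (qubit 0 is the leftmost tensor factor).
    t = rho.reshape([2] * (2 * n))
    for k, q in enumerate(sorted(set(range(n)) - set(keep))):
        ax = q - k                       # row axis, shifted by earlier traces
        t = np.trace(t, axis1=ax, axis2=ax + (n - k))
    d = 2 ** len(keep)
    return t.reshape(d, d)

rng = np.random.default_rng(1)
U = rand_unitary(8, rng)                 # joint evolution of A+B+C
for _ in range(20):
    r1, r2 = rand_state(8, rng), rand_state(8, rng)
    r1t, r2t = U @ r1 @ U.conj().T, U @ r2 @ U.conj().T
    lhs = (trace_distance(ptrace(r1t, [0]), ptrace(r2t, [0]))
           - trace_distance(ptrace(r1, [0]), ptrace(r2, [0])))
    rhs = 0.0
    for r in (r1, r2):                   # correlation terms of Eq. (8)
        rA, rBC = ptrace(r, [0]), ptrace(r, [1, 2])
        rB, rC = ptrace(r, [1]), ptrace(r, [2])
        rhs += trace_distance(r, np.kron(rA, rBC))
        rhs += trace_distance(rBC, np.kron(rB, rC))
    rhs += trace_distance(ptrace(r1, [1]), ptrace(r2, [1]))
    rhs += trace_distance(ptrace(r1, [2]), ptrace(r2, [2]))
    assert lhs <= rhs + 1e-10            # Eq. (8) holds
```

The same loop with the correlation and marginal terms removed would fail for correlated inputs, which is precisely the witness property discussed in the text.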
For the special case that there are no initial S-E correlations, the first summation in Eq. (8) vanishes and we have \begin{eqnarray} &&D\left(\rho_{1}^{A}(t),\rho_{2}^{A}(t)\right)-D\left(\rho_{1}^{A}(0),\rho_{2}^{A}(0)\right)\leq \nonumber \\ &&\sum_{i=1}^{2}D\left(\rho_{i}^{BC}(0),\rho_{i}^{B}(0)\otimes\rho_{i}^{C}(0)\right) \nonumber \\ &&+\hspace{1mm}D\left(\rho_{1}^{B}(0),\rho_{2}^{B}(0)\right)+D\left(\rho_{1}^{C}(0),\rho_{2}^{C}(0)\right). \end{eqnarray} Let us consider a further important special case, which discloses most clearly the role of initial environmental correlations, and is obtained if we assume $\rho_{2}^{BC}(0)=\rho_{1}^{B}(0)\otimes\rho_{1}^{C}(0)$. Then the inequality in Eq. (9) simplifies to \begin{eqnarray} &&D\left(\rho_{1}^{A}(t),\rho_{2}^{A}(t)\right)-D\left(\rho_{1}^{A}(0),\rho_{2}^{A}(0)\right)\leq \nonumber \\ &&D\left(\rho_{1}^{BC}(0),\rho_{1}^{B}(0)\otimes\rho_{1}^{C}(0)\right), \end{eqnarray} where the quantity on the right-hand side of Eq. (10) can be larger than zero because of the presence of initial environmental correlations in $\rho_{1}^{BC}(0)$. This inequality shows that any increase of the trace distance above its initial value is a \emph{witness} for the presence of initial environmental correlations. When the inequality in Eq. (10) becomes an equality at a certain time $t$, we detect the initial environmental correlations completely. Otherwise, the initial correlations are not transferred completely to the open system during the dynamics.\\ \indent At this point, one can ask questions like: where is the rest of the information stored? Has it been transformed into other forms, or is it still frozen in bipartite environmental correlations? To answer these questions, let us recall the definition of $I_{int}(t)$ ($I_{ext}(t)$) as the information inside (outside of) the open system.
Mathematically, they are written as \cite{Breuer2} \begin{eqnarray} && I_{int}(t)=D(\rho_{1}^{S}(t),\rho_{2}^{S}(t)),\nonumber \\ && I_{ext}(t) = D(\rho_{1}^{SE}(t),\rho_{2}^{SE}(t))-D(\rho_{1}^{S}(t),\rho_{2}^{S}(t)). \nonumber \\ \end{eqnarray} Due to the unitary dynamics of the total system, one has \begin{eqnarray} && I_{ext}(0)+I_{int}(0) =I_{ext}(t)+I_{int}(t),\nonumber \\ &&I(\rho^{S})=-[I_{ext}(t)-I_{ext}(0)].\nonumber \\ \end{eqnarray} It can clearly be seen that if $I_{int}(t)$ increases, $I_{ext}(t)$ decreases and vice versa. The second equation of Eq. (12) can be regarded as defining the information exchanged between the open system and the environment. Rewriting the first equation of Eq. (12) as $I_{ext}(0)=I_{ext}(t)+I_{int}(t)-I_{int}(0)$ shows that the initially inaccessible information can either flow to the open system or remain as external information at time $t$. With the help of Eqs. (7) and (11), one can obtain the following inequality for all $t \geq 0$: \begin{eqnarray} &&I_{ext}(t)\leq \sum_{i=1}^{2}D(\rho^{ABC}_{i}(t),\rho^{A}_{i}(t)\otimes\rho^{BC}_{i}(t)) \nonumber \\ &&\hspace{14mm}+ \sum_{i=1}^{2}D(\rho^{BC}_{i}(t),\rho^{B}_{i}(t)\otimes\rho^{C}_{i}(t))\nonumber \\ &&\hspace{14mm}+ D(\rho^{B}_{1}(t),\rho^{B}_{2}(t))+ D(\rho^{C}_{1}(t),\rho^{C}_{2}(t)).\nonumber \\ \end{eqnarray} The right-hand side of the above inequality consists of six terms: the first summation measures the total correlations between the system and the environments, and the second summation measures the environmental correlations. The third and fourth terms are the trace distances of the corresponding environmental states. Thus, when $I_{ext}(t)$ grows above its initial value, $I_{ext}(0)$, system-environment or environment-environment correlations are created, or the environmental states become more different, implying an increase of the distinguishability of the environmental states.
This demonstrates that the corresponding decrease in $I_{int}(t)$ always affects degrees of freedom that are inaccessible to measurements on the open system. Conversely, if $I_{int}(t)$ starts to increase at time $t$, the corresponding decrease in $I_{ext}(t)$ implies that the relevant correlations already exist and/or the environmental states differ at time $t$.
Therefore, according to Eqs. (12) and (13), the rest of the initially inaccessible information is stored in the system-environment or the environment-environment correlations, or inside each environment. Hence, initial environmental correlations may be transformed into other forms of bipartite or tripartite correlations.
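Since the total state evolves unitarily and the trace distance is invariant under unitary conjugation, the first line of Eq. (12) is an identity. A minimal numerical check (our own sketch; the Ginibre-sampling helpers are names introduced here) for a qubit system coupled to a qubit environment:

```python
import numpy as np

def trace_distance(rho, sigma):
    return 0.5 * np.abs(np.linalg.eigvalsh(rho - sigma)).sum()

def rand_state(dim, rng):
    g = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    rho = g @ g.conj().T
    return rho / np.trace(rho).real

def rand_unitary(dim, rng):
    g = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    q, r = np.linalg.qr(g)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def ptrace_env(rho_se):
    # Trace out a qubit environment (second tensor factor) of a 4x4 state.
    return np.trace(rho_se.reshape(2, 2, 2, 2), axis1=1, axis2=3)

def info_split(r1, r2):
    # I_int and I_ext of Eq. (11): internal vs. external information.
    i_int = trace_distance(ptrace_env(r1), ptrace_env(r2))
    i_ext = trace_distance(r1, r2) - i_int
    return i_int, i_ext

rng = np.random.default_rng(2)
r1, r2 = rand_state(4, rng), rand_state(4, rng)
U = rand_unitary(4, rng)
i_int0, i_ext0 = info_split(r1, r2)
i_intt, i_extt = info_split(U @ r1 @ U.conj().T, U @ r2 @ U.conj().T)
# First line of Eq. (12): total information is conserved under unitary dynamics.
assert abs((i_int0 + i_ext0) - (i_intt + i_extt)) < 1e-10
# I_ext is nonnegative, since discarding the environment is contractive.
assert i_ext0 > -1e-12 and i_extt > -1e-12
```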
Here, we discuss some examples to illustrate that the inequality in Eq. (10) is tight. Suppose four qubits such that the first and second qubits are regarded as an open system S (control qubits), and the third and fourth qubits are regarded as an environment (target qubits), where the first (second) qubit interacts locally with the third (fourth) qubit. We first apply a controlled-NOT gate and then a swap operation on each qubit pair. Thus, the interaction is given by the unitary operator $U=U_{1}\otimes U_{2}$, where $U_{i}=U_{swap}U_{c}$ ($i=1,2$). We consider two total initial states as \begin{eqnarray}
&&\rho_{1}^{SE}(0)=|\varphi\rangle_{S}\langle\varphi|\otimes|\psi\rangle_{E}\langle\psi|, \nonumber\\
&&\rho_{2}^{SE}(0)=|\varphi\rangle_{S}\langle\varphi|\otimes\rho_{1}^{E_{1}}(0)\otimes\rho_{1}^{E_{2}}(0), \end{eqnarray}
in which $|\varphi\rangle_{S}= a|00\rangle+b|11\rangle$ and $|\psi\rangle_{E}= \alpha|00\rangle+\beta|11\rangle$, with $\alpha,\beta\neq 0$ and $a,b\neq 0$; here $\rho_{1}^{E}=|\psi\rangle_{E}\langle\psi|$ and $\rho_{1}^{E_{1,2}}=Tr_{E_{2,1}}(\rho_{1}^{E})$. The state $\rho_{1}^{E}$ is a pure entangled state and $\rho_{2}^{E}=\rho_{1}^{E_{1}}(0)\otimes\rho_{1}^{E_{2}}(0)$ is the product of the marginal states of $\rho_{1}^{E}$. For these total states, the open-system states are the same.
Under the action of the unitary operator $U$, and noting that the reduced system states initially coincide, the left-hand side of Eq. (10) is found to be \begin{eqnarray}
D(Tr_{E}(U\rho_{1}^{SE}(0)U^{\dagger}),Tr_{E}(U\rho_{2}^{SE}(0)U^{\dagger})) =|\alpha\beta|^{2}+|\alpha\beta|,\nonumber\\ \end{eqnarray}
which shows that the trace distance of the open-system states increases above its initial value. This means that the environmental state $\rho_{1}^{E}$ must be initially correlated. We also have $D(\rho_{1}^{E}(0),\rho_{1}^{E_{1}}(0)\otimes\rho_{1}^{E_{2}}(0)) =|\alpha\beta|^{2}+|\alpha\beta|$, which shows that the upper bound of the inequality in Eq. (10) is reached. Thus, the information initially stored in the environment state is transferred completely to the open system by applying the unitary operator $U$. Now, we study a situation in which the initial environmental state has only classical correlations. Assume two total initial states as \begin{eqnarray}
&&\rho_{1}^{SE}(0)=|\phi\rangle_{S}\langle\phi|\otimes(|\alpha|^{2}|00\rangle\langle00|+|\beta|^{2}|11\rangle\langle11|)_{E}, \nonumber\\
&&\rho_{2}^{SE}(0)=|\phi\rangle_{S}\langle\phi|\otimes\rho_{1}^{E_{1}}(0)\otimes\rho_{1}^{E_{2}}(0), \end{eqnarray}
where $\rho_{1}^{E}$ is a purely classical state and $|\phi\rangle_{S}= a |01\rangle+ b |10\rangle$. Then one obtains \begin{eqnarray}
&&D(Tr_{E}(U\rho_{1}^{SE}(0)U^{\dagger}),Tr_{E}(U\rho_{2}^{SE}(0)U^{\dagger})) =2|\alpha\beta|^{2},\nonumber\\ \end{eqnarray}
and the trace distance of the initial environmental states is found to be $D(\rho_{1}^{E}(0),\rho_{1}^{E_{1}}(0)\otimes\rho_{1}^{E_{2}}(0))=2|\alpha\beta|^{2}$. We then see that the equality sign in Eq. (10) holds; the tightness of the bound is illustrated again. This also means that the trace distance can increase even when the initial environmental states are mixed.\\ \indent In order to construct initial conditions for Eq. (10), we need a second reference state $\rho_{2}^{ABC}(0)$ whose evolution is compared with that of the state $\rho_{1}^{ABC}(0)$. Therefore, we introduce three operators. The first one is the operator $\textsc{\textbf{P}}$, which removes the correlations between the open system and the environments, i.e., $\textsc{\textbf{P}}(\rho_{1}^{ABC}(0))=\rho_{1}^{A}(0)\otimes\rho_{1}^{BC}(0)$. The second one is a local trace-preserving quantum operation generating a new state for the open system, i.e., $(\Lambda^{A}\otimes\textsc{\textbf{I}}^{BC})\circ\textsc{\textbf{P}}(\rho_{1}^{ABC}(0)) =\rho^{A}_{2}(0)\otimes\rho_{1}^{BC}(0)$. Finally, the third one is an operator which destroys the correlations among the environments, \begin{eqnarray} &&\rho^{ABC}_{2}(0)=(\textsc{\textbf{I}}^{A}\otimes\Omega^{BC})\circ(\Lambda^{A}\otimes\textsc{\textbf{I}}^{BC})\circ\textsc{\textbf{P}}(\rho^{ABC}_{1}(0))\nonumber\\ && \hspace{14mm}=\rho^{A}_{2}(0)\otimes\rho^{B}_{1}(0)\otimes\rho^{C}_{1}(0). \end{eqnarray} Consequently, we have $\rho_{2}^{BC}(0)=\rho_{1}^{B}(0)\otimes\rho_{1}^{C}(0)$.\\
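Both worked examples above are easy to reproduce numerically, because $U_{i}=U_{swap}U_{c}$ maps the basis state $|s,e\rangle$ to $|e\oplus s,s\rangle$, so the total $U$ is a permutation matrix. The sketch below is our own (helper names are ours; the amplitudes are illustrative choices) and confirms the values $|\alpha\beta|+|\alpha\beta|^{2}$ and $2|\alpha\beta|^{2}$.

```python
import numpy as np

def trace_distance(rho, sigma):
    return 0.5 * np.abs(np.linalg.eigvalsh(rho - sigma)).sum()

def ptrace_env(rho, n=4, keep=(0, 1)):
    # Keep the two system qubits (leftmost factors), trace out the rest.
    t = rho.reshape([2] * (2 * n))
    for k, q in enumerate(sorted(set(range(n)) - set(keep))):
        t = np.trace(t, axis1=q - k, axis2=q - k + (n - k))
    d = 2 ** len(keep)
    return t.reshape(d, d)

# U = U_1 x U_2 with U_i = U_swap U_c: |s1 s2 e1 e2> -> |e1+s1, e2+s2, s1, s2>
U = np.zeros((16, 16))
for s1 in (0, 1):
    for s2 in (0, 1):
        for e1 in (0, 1):
            for e2 in (0, 1):
                col = ((s1 * 2 + s2) * 2 + e1) * 2 + e2
                row = (((e1 ^ s1) * 2 + (e2 ^ s2)) * 2 + s1) * 2 + s2
                U[row, col] = 1.0

a, b = 0.6, 0.8          # system amplitudes, a^2 + b^2 = 1
al, be = 0.6, 0.8        # environment amplitudes alpha, beta

def evolve_reduced(rho):
    return ptrace_env(U @ rho @ U.T)

# Case 1: pure entangled environment vs. the product of its marginals.
phiS = np.array([a, 0, 0, b])          # a|00> + b|11>
psiE = np.array([al, 0, 0, be])        # alpha|00> + beta|11>
margE = np.diag([al**2, be**2])        # marginal of |psi><psi|
rho1 = np.kron(np.outer(phiS, phiS), np.outer(psiE, psiE))
rho2 = np.kron(np.outer(phiS, phiS), np.kron(margE, margE))
D1 = trace_distance(evolve_reduced(rho1), evolve_reduced(rho2))
assert abs(D1 - (al * be + (al * be) ** 2)) < 1e-12   # |ab| + |ab|^2

# Case 2: classically correlated environment.
phiS2 = np.array([0, a, b, 0])         # a|01> + b|10>
clE = np.diag([al**2, 0, 0, be**2])    # alpha^2 |00><00| + beta^2 |11><11|
rho1c = np.kron(np.outer(phiS2, phiS2), clE)
rho2c = np.kron(np.outer(phiS2, phiS2), np.kron(margE, margE))
D2 = trace_distance(evolve_reduced(rho1c), evolve_reduced(rho2c))
assert abs(D2 - 2 * (al * be) ** 2) < 1e-12           # 2|ab|^2
```

Since the system states of each pair coincide at $t=0$, these trace distances are exactly the left-hand side of Eq. (10), and both saturate the corresponding bound.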
In the next section, the trace distance dynamics will be illustrated by means of a three-qubit Heisenberg XX spin chain, two Jaynes-Cummings systems, two amplitude damping channels, and an experimental example. We will see that the bound in Eq. (10) is reached for the two Jaynes-Cummings systems, and that the growth of the distinguishability witnesses the correlations in the initial state of the environments in these cases. \section{ Examples}\label{III} \subsection{ Three-Qubit Heisenberg XX Spin Chain} Here, we investigate the interactions between three qubits forming a three-qubit Heisenberg XX spin chain \cite{Wang}. The Hamiltonian describing the chain subject to a uniform magnetic field is \begin{equation}\label{17} H=\frac{J}{2}\sum_{n=1}^{3}(\sigma^{x}_{n}\sigma^{x}_{n+1}+\sigma^{y}_{n}\sigma^{y}_{n+1})+ B\sum_{n=1}^{3} \sigma^{z}_{n}, \end{equation} where $J$ is the exchange interaction constant, $\sigma^{\alpha}_{n}$ $(\alpha=x,y,z)$ are the Pauli matrices of the $n$th qubit, and $B$ is the magnitude of the uniform magnetic field. Introducing the spin raising and lowering operators of the $n$th qubit, $\sigma_{n}^{\pm}=\frac{1}{2} (\sigma_{n}^{x}\pm i\sigma_{n}^{y})$, the Hamiltonian can be rewritten as \begin{equation} H=J\sum_{n=1}^{3}(\sigma^{+}_{n}\sigma^{-}_{n+1}+\sigma^{-}_{n}\sigma^{+}_{n+1})+ B\sum_{n=1}^{3} \sigma^{z}_{n}. \end{equation} Applying the periodic boundary conditions, $\sigma^{x}_{1}=\sigma^{x}_{4}$ and $\sigma^{y}_{1}=\sigma^{y}_{4}$, leads to the following eigenvalues and eigenstates of the Hamiltonian, \begin{eqnarray} &&E_{0}=-E_{7}=-3B, \nonumber \\ &&E_{1}=E_{2}=-J-B, \nonumber \\ &&E_{4}=E_{5}=-J+B, \nonumber \\ &&E_{3}=2J-B, \nonumber \\ &&E_{6}=2J+B, \end{eqnarray} and \begin{eqnarray}
|\psi_{0}\rangle&=&|000\rangle,\nonumber\\
|\psi_{1}\rangle&=&\frac{1}{\sqrt{3}}(e^{\frac{2 i\pi}{3}}|001\rangle+e^{\frac{-2 i\pi}{3}}|010\rangle+|100\rangle),\nonumber\\
|\psi_{2}\rangle&=&\frac{1}{\sqrt{3}}(e^{\frac{-2 i\pi}{3}}|001\rangle+e^{\frac{2 i\pi}{3}}|010\rangle+|100\rangle),\nonumber\\
|\psi_{3}\rangle&=&\frac{1}{\sqrt{3}}(|001\rangle+|010\rangle+|100\rangle),\nonumber\\
|\psi_{4}\rangle&=&\frac{1}{\sqrt{3}}(e^{\frac{2 i\pi}{3}}|110\rangle+e^{\frac{-2 i\pi}{3}}|101\rangle+|011\rangle),\nonumber\\
|\psi_{5}\rangle&=&\frac{1}{\sqrt{3}}(e^{\frac{-2 i\pi}{3}}|110\rangle+e^{\frac{2 i\pi}{3}}|101\rangle+|011\rangle),\nonumber\\
|\psi_{6}\rangle&=&\frac{1}{\sqrt{3}}(|110\rangle+|101\rangle+|011\rangle),\nonumber \\
|\psi_{7}\rangle&=&|111\rangle, \end{eqnarray} respectively.\\ \begin{figure}
\caption{(Color online) Plot of the trace distance of the open system A, $D(\rho^{A}_{1}(t),\rho^{A}_{2}(t))$, as a function of time $t$ (in arbitrary units), for the three-qubit Heisenberg XX spin chain example. We have used $\alpha=1$ in (a), $\alpha=0.6$ in (b), $\alpha=0.2$ in (c), and $\alpha=0$ in (d). Parameters: $f =g=1/\sqrt{2}$, $l=\sqrt{3/7}$, and $m=\sqrt{4/7}$. }
\label{fig2}
\end{figure} If the normalized initial state is chosen as \begin{equation}
|\Psi(0)\rangle=\alpha|001\rangle+\beta|010\rangle+\gamma|100\rangle, \end{equation} then, with the help of Eqs. (21) and (22), its time evolution will be \begin{equation}
|\Psi(t)\rangle=a(t)|001\rangle+b(t)|010\rangle+c(t)|100\rangle, \end{equation} where \begin{equation} \begin{split} a(t)= \frac{1}{3}( e^{it(J+B)}(2\alpha-\beta-\gamma)+ K(t)),\\ b(t)=\frac{1}{3}( e^{it(J+B)}(2\beta-\alpha-\gamma)+ K(t)),\\ c(t)=\frac{1}{3}( e^{it(J+B)}(2\gamma-\alpha-\beta)+ K(t)), \end{split} \end{equation} in which $K(t)=e^{-it(2J-B)} (\alpha+\beta+\gamma)$.\\ \indent As a different case, one can assume that there are two excitations in the total system. Thus, the initial state is defined as \begin{equation}
|\Phi(0)\rangle=\alpha_{1}|110\rangle+\beta_{1}|101\rangle+\gamma_{1}|011\rangle, \end{equation} and its time evolution is determined by \begin{equation}
|\Phi(t)\rangle=a_{1}(t)|110\rangle+b_{1}(t)|101\rangle+c_{1}(t)|011\rangle, \end{equation} where \begin{equation} \begin{split} a_{1}(t)= \frac{1}{3}(e^{-it(-J+B)}(2\alpha_{1}-\beta_{1}-\gamma_{1})+Z(t)),\\ b_{1}(t)=\frac{1}{3}(e^{-it(-J+B)}(2\beta_{1}-\alpha_{1}-\gamma_{1})+ Z(t)),\\ c_{1}(t)=\frac{1}{3}(e^{-it(-J+B)}(2\gamma_{1}-\alpha_{1}-\beta_{1})+ Z(t)), \end{split} \end{equation} in which $Z(t)=e^{-it(2J+B)}(\alpha_{1}+\beta_{1}+\gamma_{1})$.\\ \indent In order to show the influence of the initial environmental correlations on the trace distance dynamics, we illustrate three situations. Note that we regard the first qubit as an open system $S$ and the other two qubits as its environment $E$ [see Fig. 1(a)].\\ \indent i) For the first case, let us assume two environmental states such that only one of them has initial correlations. Hence, we take the total initial states as \begin{equation}
\rho_{1}(0)=|\varphi\rangle_{A}\langle\varphi|\otimes\left(\frac{1-\alpha}{4}I+\alpha|\psi^{-}\rangle\langle\psi^{-}|\right)_{BC}, \end{equation} and \begin{equation}
\rho_{2}(0)=|\phi\rangle_{A}\langle\phi|\otimes\frac{1}{2}I_{B}\otimes\frac{1}{2}I_{C}, \end{equation}
where $\rho^{BC}_{1}(0)$ is a Werner state, $|\varphi\rangle_{A}=f|0\rangle+g|1\rangle$, $|\phi\rangle_{A}=l|0\rangle+m|1\rangle$, and $|\psi^{-}\rangle=\frac{1}{\sqrt{2}}(|01\rangle-|10\rangle)$.
\indent For these states, we have $D\left(\rho^{B}_{1}(0),\rho^{B}_{2}(0)\right)=0$, $D\left(\rho^{C}_{1}(0),\rho^{C}_{2}(0)\right)=0$, $D\left(\rho^{BC}_{2}(0),\rho^{B}_{2}(0)\otimes\rho^{C}_{2}(0)\right)=0$, and the initial S-E correlations vanish. According to Eq. (10), the upper bound on the increase of the trace distance is set solely by the initial correlations among the environments in $\rho_{1}(0)$.
\indent In order to calculate the trace distance dynamics of the open system A, we find the time evolution of these total states from Eqs. (24) and (27). Then, by tracing over the environments (B+C), the reduced open-system dynamics can be obtained. The behavior of the trace distance of $\rho^{A}$ as a function of $t$ is plotted in Fig. 2. Different initial states are considered with parameters $f =g=1/\sqrt{2}, l=\sqrt{3/7}$, and $m=\sqrt{4/7}$. In Figs. 2(a), (b), (c), and (d) the values of $\alpha$ are assumed to be $1, 0.6, 0.2$, and $0$, respectively.\\ \indent In Fig. 2(a), the initial state of the environments in $\rho_{1}(0)$ is defined by a Bell state ($\alpha=1$), a maximally entangled state. As can be seen, the trace distance begins to increase after the initial time. This means that an amount of the initial environmental correlations flows to the open system from the beginning of the dynamics. Furthermore, it has a periodic behavior during the dynamics. In Fig. 2(b), the initial state of the environments in $\rho_{1}(0)$ is not a maximally entangled state; it is characterized by $\alpha=0.6$. From the figure one can see that the amount of information backflow is reduced by decreasing the initial environmental correlations, although the dynamical behavior is similar to that of Fig. 2(a).
\indent The value $\alpha=0.2$ is used in Fig. 2(c), where the initial quantum correlations decrease such that the entanglement is zero but the discord is not. We remark that the trace distance starts decreasing already at the initial time and then begins to grow at a later time. In Fig. 2(d), the initial state of the environments in $\rho_{1}(0)$ is given by $\alpha=0$. Note that in this case the trace distance does not increase above its initial value, since there is no initial correlation between the environments. \\ \indent In brief, Fig. 2 shows the effect of initial correlations among the environments on the trace distance dynamics of the open system. We conclude, for this example, that the amount of information backflow from the environments to the open system increases with increasing initial quantum correlations among the environments, and that it can raise the distinguishability above its initial value. In the situations investigated in Fig. 2, the maximum of the trace distance as a function of time does not equal the upper bound given by Eq. (10). This means that the information initially inaccessible to the open system has not been transferred completely to it during the dynamics.
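The dynamics behind Fig. 2 can be reproduced by direct diagonalization of the spin-chain Hamiltonian. The following sketch is our own (helper names are ours; $J=B=1$ and $\hbar=1$ are illustrative choices) and computes $D(\rho^{A}_{1}(t),\rho^{A}_{2}(t))$ for the Werner-state example:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0]).astype(complex)
I2 = np.eye(2, dtype=complex)

def site(op, n):
    # Embed a single-qubit operator at site n of the 3-qubit chain.
    ops = [I2, I2, I2]
    ops[n] = op
    return np.kron(np.kron(ops[0], ops[1]), ops[2])

J = B = 1.0   # illustrative parameter choice
H = sum(J / 2 * (site(sx, n) @ site(sx, (n + 1) % 3)
                 + site(sy, n) @ site(sy, (n + 1) % 3))
        + B * site(sz, n) for n in range(3))
evals, V = np.linalg.eigh(H)

def U(t):
    return V @ np.diag(np.exp(-1j * evals * t)) @ V.conj().T

def trace_distance(rho, sigma):
    return 0.5 * np.abs(np.linalg.eigvalsh(rho - sigma)).sum()

def reduced_A(rho):
    # Trace out qubits B and C (the last two tensor factors).
    return np.trace(np.trace(rho.reshape(2, 2, 2, 2, 2, 2),
                             axis1=2, axis2=5), axis1=1, axis2=3)

psi_m = np.array([0, 1, -1, 0]) / np.sqrt(2)          # singlet |psi^->
def werner(alpha):
    return (1 - alpha) / 4 * np.eye(4) + alpha * np.outer(psi_m, psi_m)

f = g = 1 / np.sqrt(2)
l, m = np.sqrt(3 / 7), np.sqrt(4 / 7)
phi1, phi2 = np.array([f, g]), np.array([l, m])

def D_of_t(alpha, ts):
    r1 = np.kron(np.outer(phi1, phi1), werner(alpha)).astype(complex)
    r2 = np.kron(np.outer(phi2, phi2), np.eye(4) / 4).astype(complex)
    out = []
    for t in ts:
        u = U(t)
        out.append(trace_distance(reduced_A(u @ r1 @ u.conj().T),
                                  reduced_A(u @ r2 @ u.conj().T)))
    return np.array(out)

ts = np.linspace(0, 10, 201)
D0 = np.sqrt(1 - abs(phi1 @ phi2) ** 2)   # pure-state trace distance at t=0
Ds = D_of_t(1.0, ts)                      # alpha = 1, cf. Fig. 2(a)
assert abs(Ds[0] - D0) < 1e-10
# alpha = 0: identical uncorrelated environments, so no growth above D(0).
assert D_of_t(0.0, ts).max() <= D0 + 1e-8
```

The last assertion is exactly the statement of Eq. (10) for $\alpha=0$: with a common product environment, the reduced dynamics is a single CPTP map and contractivity forbids any increase above $D(0)$.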
\indent ii) For the second situation, let us study an example in which both initial environmental states have quantum correlations. In this and the next example, we use Eq. (9) to witness the initial environmental correlations. The total initial states can be taken as \begin{eqnarray}
&& \rho_{1}(0)=|\varphi\rangle_{A}\langle\varphi|\otimes(\frac{1-\alpha_{1}}{4}I+\alpha_{1}|\psi^{-}\rangle\langle\psi^{-}|)_{BC},\nonumber \\
&& \rho_{2}(0)=|\phi\rangle_{A}\langle\phi|\otimes(\frac{1-\alpha_{2}}{4}I+\alpha_{2}|\psi^{-}\rangle\langle\psi^{-}|)_{BC}. \nonumber \\ \end{eqnarray} In Fig. 3(a) the dynamics of $D(\rho_{1}^{A}(t),\rho_{2}^{A}(t))$ is shown for $\alpha_{1}=1$ and $\alpha_{2}=0.6$. Comparing this figure with Figs. 2(a) and (b), one realizes that the presence of quantum correlations in both environmental states has a destructive effect on the distinguishability of the open-system states, which means that little information flows back to the system. The relation \begin{eqnarray}
D(\rho^{BC}_{1}(0),\rho^{BC}_{2}(0))=\frac{3}{4}|\alpha_{1}-\alpha_{2}|,\nonumber \\ \end{eqnarray} implies that the maximum information outside the open system is obtained for $\alpha_{1}=1$, $\alpha_{2}=0$, or $\alpha_{1}=0$, $\alpha_{2}=1$. Therefore, the larger the difference between the initial quantum correlations (initial environmental states), the more information is initially stored outside the open system, and as a result the distinguishability of the open-system states can increase above its initial value. Indeed, for more information to flow back to the open system, the initial quantum correlations must differ more. A maximally entangled state and a product state are suitable candidates for this purpose (as the initial environmental states).
\indent iii) For the third situation, let us consider a case in which there is quantum correlation in one of the two initial environmental states and classical correlation in the other. An example of this case is \begin{eqnarray}
&&\rho_{1}(0)=|\varphi\rangle_{A}\langle\varphi|\otimes(\frac{1-\alpha}{4}I+\alpha|\psi^{-}\rangle\langle\psi^{-}|)_{BC},\nonumber \\
&&\rho_{2}(0)=|\phi\rangle_{A}\langle\phi|\otimes\frac{1}{2}(|00\rangle\langle 00|+|11\rangle\langle 11|). \nonumber \\ \end{eqnarray} Fig. 3(b) shows the time behavior of the trace distance of the open-system states for $\alpha=1$. As can be seen, the maximum value of the distinguishability is 0.75. Comparing Fig. 3(b) with Fig. 3(a) and Fig. 2 leads us to the fact that maximal classical and quantum correlations are the best choice for obtaining the maximum amount of inaccessible initial information. Thus, states with the above-mentioned properties strongly influence the growth of the distinguishability of the open-system states. This is confirmed by \begin{eqnarray} D(\rho^{BC}_{1}(0),\rho^{BC}_{2}(0))=\frac{1+\alpha}{2},\nonumber \\ \end{eqnarray} showing that the information outside the open system attains its maximum value when $\alpha=1$ (maximally entangled state).\\ \indent Studying the above examples shows that the more distinguishable the environmental states are, the more information is stored outside the open system; the information returned to the open system is maximal if there are initial classical and quantum correlations. Although the presence of quantum correlations in both of the initial environmental states has a destructive effect on the growth of the distinguishability of the open-system states, initial quantum-classical correlations affect the distinguishability constructively.\\ \indent In the next subsection we introduce two Jaynes-Cummings systems by which one can show that the inequality in Eq. (10) is tight. \begin{figure}
\caption{(Color online) Plot of $D(\rho_{1}^{A}(t),\rho_{2}^{A}(t))$, for the three-qubit Heisenberg XX spin chain example, as a function of time $t$, \textbf{in arbitrary units}. We have used $\alpha_{1}=1$ and $\alpha_{2}=0.6$ in (a) and $\alpha=1$ in (b). }
\label{fig12}
\end{figure}
\subsection{ Two Jaynes-Cummings systems} i) Suppose that one provides two Jaynes-Cummings systems in which each atom is locally coupled to a single-mode field. In this case, the open system of the tripartite system is assumed to include two atoms and each field is regarded as an environment. The total Hamiltonian is given by \begin{eqnarray}\label{17} &&H=H^{(1)}+H^{(2)}, \end{eqnarray} where \begin{eqnarray} &&H^{(j)}=\omega_{0}^{j}\sigma_{+}^{j}\sigma_{-}^{j}+\omega^{j}b^{j \dag}b^{j}+g^{j}(\sigma_{+}^{j}b^{j}+\sigma_{-}^{j}b^{j \dag}),\nonumber \end{eqnarray} in which $\sigma_{+}^{j}(\sigma_{-}^{j})$ is the raising (lowering) operator of the $j$th atom, $b^{j\dag}$ $(b^{j})$ is the creation (annihilation) operator of the $j${th} field, $\omega_{0}^{j}$ is the frequency of the $j${th} atom, $\omega^{j}$ is the frequency of the $j${th} field, and $g^{j}$ is the coupling constant between the $j${th} atom and the $j${th} field ($j=1,2$). In the interaction picture the Hamiltonian takes the following form \begin{equation}\label{17} H_{I}^{(j)}=g^{j}(\sigma_{+}^{j}b^{j} e^{i \Delta^{j} t}+\sigma_{-}^{j}b^{j \dag}e^{-i \Delta^{j} t}), \end{equation} where $\Delta^j=\omega_{0}^j-\omega^j$ is the detuning between the $j$th atom and the $j$th field. Let us assume that $b^{1}=b^{2}=b$, $g^{1}=g^{2}=g$, $\omega_{0}^{1}=\omega_{0}^{2}=\omega_{0}$, and $\omega^{1}=\omega^{2}=\omega$, hence, $\Delta=\Delta^1=\Delta^2= \omega_{0}-\omega$. The local time evolution operator in the interaction picture can be written as \begin{equation}\label{covariance matrix} U^{(j)}(t)= \begin{pmatrix}
c(\hat{n}+1,t) & d(\hat{n}+1,t)b \\
-b^{\dag}d^{\dag}(\hat{n}+1,t) & c(\hat{n},t) \\ \end{pmatrix}, \end{equation} where \begin{eqnarray} &&c(\hat{n},t)=e^{i \Delta t/2} \left[\cos\left(\Omega(\hat{n})\frac{t}{2}\right)-i\frac{\Delta}{\Omega(\hat{n})}\sin\left(\Omega(\hat{n})\frac{t}{2}\right)\right],\nonumber\\ &&d(\hat{n},t)=- i e^{i \Delta t/2} \frac{2g}{\Omega(\hat{n})}\sin\left(\Omega(\hat{n})\frac{t}{2}\right),\nonumber\\ \end{eqnarray} in which $\Omega(\hat{n})=\sqrt{\Delta^{2}+4 g^{2} \hat{n}}$ \cite{puri}.
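As a quick numerical check (a sketch with our own function names, not tied to any library), unitarity of $U^{(j)}(t)$ requires $|c(n,t)|^{2}+n\,|d(n,t)|^{2}=1$ on each excitation sector, which the scalar forms of $c$ and $d$ above satisfy identically since $\Omega(n)^{2}=\Delta^{2}+4g^{2}n$:

```python
import numpy as np

def Omega(n, g, Delta):
    # Rabi frequency on the n-excitation sector
    return np.sqrt(Delta**2 + 4 * g**2 * n)

def c(n, t, g, Delta):
    Om = Omega(n, g, Delta)
    return np.exp(1j * Delta * t / 2) * (
        np.cos(Om * t / 2) - 1j * (Delta / Om) * np.sin(Om * t / 2))

def d(n, t, g, Delta):
    Om = Omega(n, g, Delta)
    return -1j * np.exp(1j * Delta * t / 2) * (2 * g / Om) * np.sin(Om * t / 2)

# unitarity check: |c(n,t)|^2 + n |d(n,t)|^2 = 1 for every n >= 1
vals = [abs(c(n, t, 1.0, 0.1))**2 + n * abs(d(n, t, 1.0, 0.1))**2
        for n in range(1, 6) for t in (0.3, 1.7, 5.0)]
```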
The $i$th reduced density matrix of the system at time $t$ can be written as \begin{eqnarray}\label{17} &&\rho^{S}_{i}(t)=\nonumber\\ &&Tr_{E}\left[U^{(1)}(t)\otimes U^{(2)}(t)\left(\rho_{i}(0)\right)U^{(1)\dag}(t) \otimes U^{(2)\dag}(t)\right],\nonumber\\ \end{eqnarray}
where $\rho_{i}(0)$ is the $i$th initial state of the total system and it is assumed to be a product state as $\rho_{i}(0)=\rho^{S}(0)\otimes\rho^{BC}_{i}(0)\quad (i=1,2,3)$. Let the initial state of the open system be $\rho^{S}(0)=|ee\rangle\langle ee|$. The first environmental initial state is taken as \begin{eqnarray}\label{17} \rho^{BC}_{1}(0)=
(\alpha|0,n\rangle+\beta|n,0\rangle)(\alpha^{\ast}\langle 0,n|+\beta\langle n,0|),\nonumber\\ \end{eqnarray} which shows entanglement among the environments. The second one is built by the marginal states of the first environmental initial state as
$\rho^{BC}_{2}(0)=\rho^{B}_{1}(0)\otimes \rho^{C}_{1}(0)$ which is obtained as \begin{eqnarray}\label{17}
&&\rho^{BC}_{2}(0)=|\alpha|^{4}|0,n\rangle\langle 0,n|+|\beta|^{4}|n,0\rangle\langle n,0|\nonumber\\
&&+|\alpha|^{2}|\beta|^{2}\left( |0,0\rangle\langle 0,0|+|n,n\rangle\langle n,n| \right).\nonumber\\ \end{eqnarray} Finally, the third state is chosen to be a classically correlated state \begin{eqnarray}\label{17}
&&\rho^{BC}_{3}(0)=|\alpha|^{2}|0,0\rangle\langle 0,0|+|\beta|^{2}|n,n\rangle\langle n,n|.\nonumber\\ \end{eqnarray} \begin{figure}
\caption{(Color online) The trace distance dynamics of the open system for the Jaynes-Cummings example as a function of time $t$, \textbf{in arbitrary units}, and $g = 1$ (a) $D(\rho^{S}_{1}(t),\rho^{S}_{2}(t))$, with $\Delta=0.1$ and $n=1$ (b) $D(\rho^{S}_{1}(t),\rho^{S}_{3}(t))$, with $n=7$ and $\Delta=0$ (c) and (d) $D(\rho^{S}_{3}(t),\rho^{S}_{2}(t))$, with $\Delta=0$, $n=10$ and $n=50$, respectively. The horizontal line denotes the upper bound of Eq. (10) (Eq. (5)) in figures (a), (c), and (d) ((b)).}
\label{fig12}
\end{figure} \indent Substituting the above three initial states into Eq. (39) and taking into account Eqs. (37) and (38), one can obtain the dynamics of the open system. The trace distance dynamics of the open system states is plotted in Fig. 4 for $g=1$ and $\alpha=\beta=1/\sqrt{2}$. Fig. 4(a) shows the time behavior of $D(\rho^{S}_{1}(t),\rho^{S}_{2}(t))$ for $n=1$ and $\Delta=0.1$. The distinguishability value of the initial environmental states is $D(\rho^{BC}_{1}(0),\rho^{BC}_{2}(0))=0.75$, where $\rho^{BC}_{1}(0)$ is a maximally entangled state and $\rho^{BC}_{2}(0)$ is a product one. As can be seen, the total initial entanglement among the two modes flows to the system at some points of time. This shows that the bound of the inequality in Eq. (10) is tight. \\ \indent For $n=7$ and $\Delta=0$, $D(\rho^{S}_{1}(t),\rho^{S}_{3}(t))$ is plotted against time in Fig. 4(b). In this case, $D(\rho^{BC}_{1}(0),\rho^{BC}_{3}(0))=1$, and $D(\rho^{BC}_{1}(0),\rho^{BC}_{2}(0))+D(\rho^{BC}_{2}(0),\rho^{BC}_{3}(0))=1.25$, which is greater than $1$. According to Eqs. (40) and (42), one can see that there is quantum correlation in $\rho^{BC}_{1}(0)$, whereas $\rho^{BC}_{3}(0)$ is a classically correlated state. This is an example for which the inequality in Eq. (5) is tight but the one in Eq. (9) is not. The plot shows that the trace distance reaches $1$ at some values of time, and therefore the open system states become completely distinguishable at those times.\\ \indent In Figs. 4(c) and 4(d), $D(\rho^{S}_{3}(t),\rho^{S}_{2}(t))$ is depicted for $\Delta=0$, and for two values of $n$, $10$ and $50$, respectively. The trace distance of the initial environmental states is $D(\rho^{BC}_{3}(0),\rho^{BC}_{2}(0))=0.5$. As can be seen, the upper bound is reached for both values of $n$.\\ \indent In summary, Fig. 4 shows that the upper bound is tight and the distinguishability reaches $1$ when there are initial quantum-classical correlations among the fields. 
Furthermore, it indicates that initial quantum correlations make the trace distance increase more than classical correlations do.\\ \indent ii) Let us consider an example showing the tightness of the upper bound for classical states. To this aim, the total initial states are taken as \begin{eqnarray}\label{17}
&&\rho_{1}(0)=|ee\rangle\langle ee|\otimes \frac{1}{2}(|\beta,-\beta\rangle\langle \beta,-\beta|+|-\beta,\beta\rangle\langle -\beta,\beta|),\nonumber\\
&&\rho_{2}(0)=|ee\rangle\langle ee|\otimes \frac{1}{2}(|\beta\rangle\langle \beta|+|-\beta\rangle\langle -\beta|)\nonumber\\
&&\hspace{24mm}\otimes \frac{1}{2}(|-\beta\rangle\langle -\beta|+|\beta\rangle\langle\beta|),\nonumber\\ \end{eqnarray}
in which $|\beta\rangle= e^{-|\beta|^{2}/2}\sum_{n=0}^{\infty}\frac{\beta^{n}}{\sqrt{n!}}|n\rangle$ is a coherent state with mean photon number $\langle n\rangle=|\beta|^{2}$. It is well known that a coherent state always has minimum uncertainty and resembles a classical state. Substituting the initial states into Eq. (39), one can obtain $D(\rho^{S}_{1}(t),\rho^{S}_{2}(t))$. For $\Delta=0$ and $g=1$, the trace distance dynamics is plotted for $|\beta|^{2}=100$ and $|\beta|^{2}=200$, in Figs. 5(a) and 5(b), respectively. One can see that as the average number of photons increases, the initial total classical correlation among the modes is detected at a given time. Therefore, the bound is tight for classical states and our witness can be applied to them.\\ \indent The above two examples indicate that one can detect the initial quantum and classical correlations among two fields by studying the dynamics of the trace distance of the system states.\\ In the following, the witness is applied to a dissipative dynamics. For this purpose a discussion on amplitude damping channels is provided. \begin{figure}
\caption{(Color online) Plot of $D(\rho^{S}_{1}(t),\rho^{S}_{2}(t))$ as a function of time $t$, \textbf{in arbitrary units}, and $\Delta=0$ and $g = 1$, for the Jaynes-Cummings example. In both figures the horizontal line marks the upper bound of Eq. (10). (a) $|\beta|^{2}=100$; for this value the bound is not tight. (b) $|\beta|^{2}=200$; as can be seen, the total initial classical correlation can be observed at a given time.}
\label{fig12}
\end{figure} \subsection{ Amplitude damping model} Here, we consider an open system consisting of two atoms locally interacting with amplitude damping reservoirs. The Hamiltonian $H$ of the whole system is defined as \begin{eqnarray}\label{17} H=H^{(1)}+H^{(2)},\nonumber\\ \end{eqnarray} where \begin{eqnarray}\label{17} H^{(i)}=\omega_{0}^{i}\sigma_{+}^{i}\sigma_{-}^{i}+\sum_{k=0}\omega_{k}^{i}b_{k}^{i \dag }b_{k}^{i}+\sum_{k=0}g_{k}^{i}(\sigma_{+}^{i}b_{k}^{i}+\sigma_{-}^{i}b_{k}^{ i \dag});\nonumber\\ \nonumber \end{eqnarray} in which $b_{k}^{i\dag}$ ($b_{k}^{i}$) is the creation (annihilation) operator corresponding to the $k$th mode of the $i${th} reservoir, $\omega_{k}^{i}$ is the frequency of the $k$th mode of the $i${th} reservoir, $\omega_{0}^{i}$ is the frequency related to the transition energy of the $i${th} atom, $ g_{k}^{i}$ is the coupling constant between the $i${th} atom and the $k$th mode of the $i${th} reservoir, and $\sigma_{+}^{i}(\sigma_{-}^{i})$ is the raising (lowering) operator of the $i${th} atom ($i=1,2$). We suppose that the two atoms have the same transition energy and the same coupling to the reservoirs. Furthermore, we assume that both reservoirs have the same Lorentzian spectral density\cite{ban,ban2}.\\
\indent In order to introduce an initial state, let us define the vacuum state as $|\textbf{0}\rangle=|0_{1}0_{2}...0_{k}...\rangle$; therefore a first excited state is $|\textbf{1}_{k}\rangle=|0_{1}0_{2}...0_{k-1}1_{k}0_{k+1}...\rangle$ in which $|1_{k}\rangle=b_{k}^{\dag}|0_{k}\rangle$. It is obvious that both states are orthogonal, i.e. $\langle \textbf{0}|\textbf{1}\rangle=0$. The total initial state is assumed to be a superposition of two states. In one state, the atoms are in a Bell state and the reservoirs are in the vacuum states. In the other, the two atoms are in their ground states and one of the two reservoirs has a single excitation. Thus the initial state of the total system is written as \begin{eqnarray}\label{17}
&&|\psi(0)\rangle=c_{eg}(0)|\psi_{+}\rangle|\textbf{0},\textbf{0}\rangle+\sum_{k}c_{k}(0)|g,g\rangle\otimes|\textbf{1}_{k},\textbf{0}\rangle \nonumber\\ &&\hspace{14mm}+\sum_{k}d_{k}(0)|g,g\rangle\otimes|\textbf{0},\textbf{1}_{k}\rangle;\nonumber\\ \end{eqnarray}
where $|\psi_{+}\rangle=\frac{1}{\sqrt{2}}(|e,g\rangle+|g,e\rangle)$ is a Bell state and $|g\rangle$ ($|e\rangle$) refers to the ground (excited) state of each atom. The normalization condition for $|\psi(0)\rangle$ is $|c_{eg}(0)|^{2}+|\sum_{k}c_{k}(0)|^{2}+|\sum_{k}d_{k}(0)|^{2}=1$. In the case $c_{k}(0)=d_{k}(0)$, the state of the whole system at time $t$ is written as \begin{eqnarray}\label{17}
&&|\psi(t)\rangle=c_{eg}(t)|\psi_{+}\rangle|\textbf{0},\textbf{0}\rangle+\sqrt{1-|c_{eg}(t)|^{2}}|g,g\rangle\otimes|\psi_{+}^{t}\rangle, \nonumber\\ \end{eqnarray} where \begin{eqnarray}\label{17}
&&c_{eg}(t)=h_{1}(t)c_{eg}(0)+h_{2}(t)\sqrt{1-|c_{eg}(0)|^{2}}, \nonumber\\ \end{eqnarray} in which \begin{eqnarray}\label{17} &&h_{1}(t)= e^{-\frac{1}{2}\lambda t}\left[\cosh\left(\frac{\lambda a}{2}t\right)+\frac{1}{a}\sinh\left(\frac{\lambda a}{2}t\right)\right], \nonumber\\ &&h_{2}(t)=-i e^{-\frac{1}{2}\lambda t}\left[\sqrt{\frac{1}{a^{2}}-1}\sinh\left(\frac{\lambda a}{2}t\right)\right], \end{eqnarray}
with $a=\sqrt{1-2\frac{\gamma}{\lambda}}$, where $\gamma$ is connected to the time scale of the system and $\lambda$ is the spectral width of the coupling. Also in Eq. (46), $|\psi_{+}^{t}\rangle$ is a Bell state of the two reservoirs, given by $\frac{1}{\sqrt{2}}(|\textbf{1}^{t},\textbf{0}\rangle+|\textbf{0},\textbf{1}^{t}\rangle)$. The first excitation state of each reservoir depends on time as \begin{eqnarray}\label{17}
&&|\textbf{1}^{t}\rangle= \frac{1}{\sqrt{\sum_{k}|c_{k}(t)|^{2}}}\sum_{k}c_{k}(t)|\textbf{1}_{k}\rangle ,\nonumber\\ \end{eqnarray}
which is normalized, $\langle \textbf{1}^{t}| \textbf{1}^{t}\rangle=1$, and orthogonal to $|\textbf{0}\rangle$, $\langle \textbf{0}| \textbf{1}^{t}\rangle=0$. \\ \indent An initial state of the whole system can be obtained if one has
$c_{eg}(0)=0$, $|\sum_{k}c_{k}(0)|^{2}=1/2$, and $|\textbf{1}\rangle= (\sum_{k}|c_{k}(0)|^{2})^{-1/2}\sum_{k}c_{k}(0)|\textbf{1}_{k}\rangle$, which results in $|\psi_{1}(0)\rangle=|gg\rangle \otimes\frac{1}{\sqrt{2}}(|\textbf{1},\textbf{0}\rangle+|\textbf{0},\textbf{1}\rangle)$. Therefore, the initial environmental state is an entangled state. Under the above assumptions, the state of the atoms at time $t$ is \begin{eqnarray}\label{17}
&&\rho^{S}_{1}(t)=|h_{2}(t)|^{2}|\psi_{+}\rangle\langle\psi_{+}|+(1-|h_{2}(t)|^{2})|gg\rangle\langle gg|. \nonumber\\ \end{eqnarray} Another initial state is assumed to be a product state as \begin{eqnarray}\label{17}
&&\rho_{2}(0)=|gg\rangle\langle gg|\otimes \rho^{B}_{1}(0)\otimes\rho^{C}_{1}(0), \nonumber\\ \end{eqnarray} in which \begin{eqnarray}\label{17}
&&\rho^{B}_{1}(0)=\rho^{C}_{1}(0)=\frac{1}{2}(|\textbf{0}\rangle\langle \textbf{0}|+|\textbf{1}\rangle\langle \textbf{1}|).\nonumber\\ \nonumber \end{eqnarray} Regarding Eq. (46) and the corresponding equations in \cite{ban}, the reduced density matrix of the atoms gets the following form \begin{equation}\label{covariance matrix} \rho^{S}_{2}(t)= \frac{1}{4}\begin{pmatrix}
\rho_{ee}(t) & 0& 0&0 \\
0 & \rho_{eg}(t) &0&0\\
0 &0 & \rho_{ge}(t)&0\\
0 &0 &0 &\rho_{gg}(t) \\ \end{pmatrix}, \end{equation} with \begin{eqnarray}\label{17}
&&\rho_{ee}(t)=|h_{2}(t)|^{4}, \nonumber\\
&&\rho_{eg}(t)=\rho_{ge}(t)=|h_{2}(t)|^{2}(2-|h_{2}(t)|^{2}), \nonumber\\
&&\rho_{gg}(t)=(2-|h_{2}(t)|^{2})^{2}.\nonumber\\ \end{eqnarray} \begin{figure}
\caption{(Color online) Plot of $D(\rho^{S}_{1}(t),\rho^{S}_{2}(t))$ as a function of scaled time $\lambda t$ for the amplitude damping example (a) local non-Markovian dynamics ($\gamma/\lambda=1000$), (b) local Markovian dynamics ($\gamma/\lambda=0.1$).}
\label{fig12}
\end{figure} \indent For two amplitude damping channels, the trace distance dynamics of the open system (the atoms), $D(\rho^{S}_{1}(t),\rho^{S}_{2}(t))$, is plotted against time ($\lambda t$) for $\gamma/\lambda=1000$ (local non-Markovian dynamics) in Fig. 6(a) and for $\gamma/\lambda=0.1$ (local Markovian dynamics) in Fig. 6(b). Here, the value of the initial environment-environment correlation is $D(\rho^{BC}_{1}(0),\rho^{BC}_{2}(0))=0.75$, and as can be seen in Fig. 6 the upper bound is not reached in either case. It is clear from Fig. 6(a) that the trace distance oscillates with damping as a function of time, whereas no oscillation can be seen in Fig. 6(b). The oscillation of the trace distance in Fig. 6(a) shows that information is repeatedly exchanged between the system and environments; and comparing the plot in Fig. 6(a) with that in Fig. 6(b) indicates that the value of the exchanged information in the first case is greater than that in the second case. It should be mentioned that in the case of initial classical correlation, our calculations show that the inequality in Eq. (10) is not tight.\\ \indent As a final example, in the next subsection, let us consider an experimental one introduced in \cite{Laine1}. \subsection{ Experimental example} As an experimental example, we consider two entangled photons whose polarization degrees of freedom locally interact with their frequency degrees of freedom. The polarization degrees of freedom of the photons are regarded as an open system and their frequency degrees of freedom form two environments\cite{Laine1}. The Hamiltonian of the local interaction is defined by \begin{equation}\label{17}
H_{i}=- \int d\omega_{i}\omega_{i}(n_{V}|V\rangle\langle V|+n_{H}|H\rangle\langle H|)|\omega_{i}\rangle\langle\omega_{i}|, \end{equation}
where $|H\rangle$ ($|V\rangle$) and $|\omega_{i}\rangle$ indicate the state of a photon with horizontal (vertical) polarization and frequency $\omega_{i}$, respectively. The refractive index for a photon with polarization $H$ ($V$) is denoted by $n_{H}$ ($n_{V}$). We assume the total initial state as \begin{equation}\label{17}
|\Psi(0)\rangle=|\psi^{12}\rangle\otimes\int d\omega_{1}d\omega_{2} g(\omega_{1},\omega_{2})|\omega_{1},\omega_{2}\rangle, \end{equation}
where $|\psi^{12}\rangle=a|HH\rangle+b|HV\rangle+c|VH\rangle+d|VV\rangle$ and $g(\omega_{1},\omega_{2})$ denotes the probability amplitude for the first photon to have frequency $\omega_{1}$ and the second photon to have frequency $\omega_{2}$, with the corresponding joint probability distribution
$P(\omega_{1},\omega_{2})=|g(\omega_{1},\omega_{2})|^{2}$.\\
\indent Due to the initial product system-environments state, the time evolution of the open system can be described as $\rho^{S}(t)=\Phi_{t}^{12}(\rho^{S}(0))$, $\rho^{S}(0)=|\psi^{12}\rangle\langle\psi^{12}|$, where $\Phi_{t}^{12}$ is a dynamical map which maps the initial polarization state to the polarization state at time $t$. The state of the open system at time $t$ is given by \begin{equation}\label{covariance matrix} \rho^{S}(t)= \begin{pmatrix}
|a|^{2} & ab^{\ast}\kappa_{2}(t) & ac^{\ast}\kappa_{1}(t) & ad^{\ast}\kappa_{12}(t) \\
b a^{\ast}\kappa^{\ast}_{2}(t) & |b|^{2} & bc^{\ast}\Lambda_{12}(t) &bd^{\ast}\kappa_{1}(t) \\
c a^{\ast}\kappa^{\ast}_{1}(t) &bc^{\ast}\Lambda^{\ast}_{12}(t) & |c|^{2} & cd^{\ast}\kappa_{2}(t) \\
d a^{\ast}\kappa^{\ast}_{12}(t) &db^{\ast}\kappa^{\ast}_{1}(t) &dc^{\ast}\kappa^{\ast}_{2}(t) & |d|^{2} \\ \end{pmatrix}, \end{equation} in which $\kappa_{1}(t)=G(\Delta n t_{1},0)$, $\kappa_{2}(t)=G(0,\Delta n t_{2})$, $\kappa_{12}(t)=G(\Delta n t_{1},\Delta n t_{2})$, and $\Lambda_{12}(t)=G(\Delta n t_{1},-\Delta n t_{2})$, where \begin{equation}\label{17} G(\tau_{1},\tau_{2})=\int d\omega_{1}d\omega_{2} P(\omega_{1},\omega_{2})e^{-i(\omega_{1}\tau_{1}+\omega_{2}\tau_{2})} \end{equation} is the Fourier transform of the joint probability distribution and $\Delta n= n_{V}-n_{H}$. \begin{figure}\label{fig12}
\end{figure} \indent Note that the dynamical map $\Phi_{t}^{12}$ can be written as a product of local dynamical maps, i.e. $\Phi_{t}^{12}=\Phi_{t}^{1}\otimes\Phi_{t}^{2}$, if and only if $\Lambda_{12}(t)=\kappa_{1}(t)\kappa^{\ast}_{2}(t)$ and $\kappa_{12}(t)=\kappa_{1}(t)\kappa_{2}(t)$. This holds when the frequencies $\omega_{1}$ and $\omega_{2}$ are not correlated.
We assume a Gaussian frequency distribution whose Fourier transform is obtained as \begin{equation}\label{17} G(\tau_{1},\tau_{2})= e^{i \omega_{0}(\tau_{1}+\tau_{2})/2-C_{11}(\tau^{2}_{1}+\tau^{2}_{2}+2K \tau_{1}\tau_{2})/2}, \end{equation} where $C_{ij}=\langle\omega_{i}\omega_{j}\rangle-\langle\omega_{i}\rangle\langle\omega_{j}\rangle$ are the elements of the covariance matrix, $\langle\omega_{i}\rangle=\langle\omega_{j}\rangle=\omega_{0}/2$, and $K=C_{12}/C_{11}$ is the correlation coefficient.
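For concreteness, the factorization criterion can be checked numerically. The sketch below assumes the standard bivariate Gaussian characteristic function, with cross term $2K\tau_{1}\tau_{2}$ in the exponent; the parameter values are ours and purely illustrative:

```python
import numpy as np

C11, omega0 = 1.0, 5.0

def G(t1, t2, K):
    # characteristic function of a bivariate Gaussian with <w_i> = omega0/2,
    # common variance C11 and correlation coefficient K
    return np.exp(1j * omega0 * (t1 + t2) / 2
                  - C11 * (t1**2 + t2**2 + 2 * K * t1 * t2) / 2)

t1, t2 = 0.7, 1.3
kappa1, kappa2 = G(t1, 0.0, -1.0), G(0.0, t2, -1.0)
kappa12 = G(t1, t2, -1.0)

# the map factorizes, kappa12 = kappa1 * kappa2, only for K = 0
factorizes_K0 = bool(np.isclose(G(t1, t2, 0.0), G(t1, 0.0, 0.0) * G(0.0, t2, 0.0)))
```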
In order to examine Eq. (10) as a witness for initial environmental correlations, we assume two total states $\rho_{1}(0)$ and $\rho_{2}(0)$ such that $|\psi_{1}^{12}\rangle=|\psi_{2}^{12}\rangle=1/\sqrt{2}(|HH\rangle+|VV\rangle)$, and the environmental state of $\rho_{1}(0)$ is correlated whereas the environmental state of $\rho_{2}(0)$ is not. Thus, according to Eq. (10), $I\left(\rho^{S}\right)$ is always strictly positive due to the initial environmental correlations. We have plotted $D\left(\rho^{S}_{1}(t),\rho^{S}_{2}(t)\right)$ in terms of $\sqrt{C_{11}}\Delta n t$ for different values of the correlation coefficient in Fig. 7(a). As can be seen, the trace distance for $K=-1$, where the frequencies $\omega_{1}$ and $\omega_{2}$ are anticorrelated, shows the largest growth and after a certain time approaches the value 0.5. For $K=0$ the frequencies are not correlated and the trace distance is always zero. For the other values of $K$, the trace distance first increases, then decreases and approaches zero. In Fig. 7(b), the initial states of the open system are $|\psi_{1}^{12}\rangle=1/\sqrt{2}(|HH\rangle+|VV\rangle)$ and $|\psi_{2}^{12}\rangle=\sqrt{16/18}|HH\rangle+\sqrt{2/18}|VV\rangle$, and the initial environmental states are the same as those in Fig. 7(a). It is clear that the trace distance rises above its initial value for $K=-1$, $K=-0.99$, and $K=-0.95$, meaning that the more anticorrelated (more distinguishable) the frequencies $\omega_{1}$ and $\omega_{2}$ are, the more information is stored outside of the open system. It has been shown that when the frequencies $\omega_{1}$ and $\omega_{2}$ become more anticorrelated, the nature of the global dynamics becomes more non-Markovian, while the local dynamics is Markovian\cite{Laine1}. Comparing these results with what is shown in Fig. 
6, it seems that the time behavior of the trace distance, with different initial environmental states, can be considered as a witness for determining the type of the local dynamics of the system under study. \\ \indent Finally, we conclude from Fig. 7 that the trace distance may increase over its initial value due to the initial environmental correlations and that the best value of the correlation coefficient for witnessing the initial correlations is $K=-1$.\\ \section{CONCLUSIONS} The dynamics of the trace distance with initial correlations has been studied in tripartite systems. We considered a scenario consisting of one system and two environments, and obtained a bound for the growth of distinguishability in the open system. The bound can be used as a witness for initial correlations among environments. The obtained inequality is general and can be applied to any interaction among three systems. We demonstrated that initial correlations among environments can, under particular conditions, be witnessed by local measurements on the open quantum system. We illustrated that the bound is tight for initial classical and quantum environmental correlations. Generally, since we do not have enough information about the initial states of environments, the inequality can be applied to obtain more information about the environments.\\ \indent To confirm our results we studied different tripartite systems such as a three-qubit Heisenberg XX spin chain, two Jaynes-Cummings systems, two qubits interacting with amplitude damping environments, and an experimentally realizable example. We indicated that the distinguishability increases over its initial value due to initial correlations among the environments.\\ \indent Generalization to systems including more than three subsystems is straightforward. \section*{ACKNOWLEDGMENTS} \indent F.T.T. and A.S.K. would like to thank H.-P. Breuer for his expert advice and comments. F.T.T. also thanks J. Piilo, S. Maniscalco, and E.-M. 
Laine for their useful discussions and comments.
\end{document}
\begin{document}
\title[Prandtl equation] { Long time well-posedness\\ of the Prandtl equations in Sobolev space} \author[C.-J. Xu]{Chao-Jiang Xu} \date{May 8, 2016} \address{Chao-Jiang XU \newline\indent School of Mathematics and Statistics, Wuhan University \newline\indent 430072, Wuhan, P. R. China \newline \indent and \newline \indent Universit\'e de Rouen, CNRS, UMR 6085-Laboratoire de Math\'ematiques \newline \indent 76801 Saint-Etienne du Rouvray, France } \email{chao-jiang.xu@univ-rouen.fr }
\author[X. Zhang]{Xu Zhang}
\address{Xu ZHANG \newline\indent School of Mathematical Sciences, Xiamen University, Xiamen, Fujian 361005, China \newline \indent and \newline \indent Universit\'e de Rouen, CNRS, UMR 6085-Laboratoire de Math\'ematiques
\newline \indent
76801 Saint-Etienne du Rouvray, France} \email{xu.zhang1@etu.univ-rouen.fr}
\keywords{Prandtl boundary layer equation, energy method, well-posedness theory, monotonic condition, Sobolev space} \subjclass[2010]{35M13, 35Q35, 76D10, 76D03, 76N20}
\begin{abstract} In this paper, we study the long time well-posedness of the nonlinear Prandtl boundary layer equation on the half plane. When the initial data are small perturbations of a monotonic shear profile, we prove the existence, uniqueness and stability of solutions in weighted Sobolev spaces by energy methods. The key point is that the life span of the solution can be any large $T$ as long as its initial data is a perturbation around the monotonic shear profile of size $e^{-T}$. The nonlinear cancellation properties of the Prandtl equations under the monotonicity assumption are the main ingredients in establishing a new energy estimate. \end{abstract}
\maketitle \tableofcontents
\section{Introduction}
In this work, we study the initial-boundary value problem for the Prandtl boundary layer equation in two dimension, which reads \begin{equation*} \begin{cases} \partial_t u + u \partial_x u + v\partial_y u + \partial_x p = \partial^2_y u,\quad t>0,\, (x, y)\in\mathbb{R}^2_+, \\ \partial_x u +\partial_y v =0, \\
u|_{y=0} = v|_{y=0} =0 , \ \lim\limits_{y\to+\infty} u = U(t,x), \\
u|_{t=0} =u_0 (x,y)\, , \end{cases} \end{equation*} where $\mathbb{R}^2_+=\{(x, y)\in \mathbb{R}^2;\, y>0\}$, $u(t,x,y)$ represents the tangential velocity and $v(t, x, y)$ the normal velocity. $p(t, x)$ and $U(t, x)$ are the boundary values of the Euler pressure and tangential velocity, determined by Bernoulli's law: $ \partial_t U(t,x) + U(t,x)\partial_x U(t,x) + \partial_x p =0.$
The Prandtl equations constitute a major achievement in the progress of understanding the famous D'Alembert paradox in fluid mechanics. In a word, D'Alembert's paradox can be stated as follows: when a solid body moves in an incompressible and inviscid potential flow, it experiences neither drag nor buoyancy. This of course contradicts everyday experience. In 1904, Prandtl observed that, in a fluid of small viscosity, the behavior of the fluid near the boundary is completely different from that away from the boundary. The part away from the boundary can almost be treated as an ideal fluid, but the part near the boundary is deeply affected by the viscous force and is described by the Prandtl boundary layer equation, first derived formally by Prandtl in 1904 (\cite{prandtl1904uber}).
From the mathematical point of view, the well-posedness and justification of the Prandtl boundary layer theory do not yet have a satisfactory theory and remain open for general cases. During the past century, many mathematicians have investigated these problems. The Russian school has contributed a lot to the boundary layer theory and their works were collected in \cite{oleinik1999prandtl}. Up to now, the local existence theory for the Prandtl boundary layer equation has been achieved when the initial data belong to some special functional spaces: 1) the analytic space, or spaces analytic with respect to the tangential variable \cite{vicol2013,sammartino2003onlyx,sammartino1998analytic-1,sammartino1998analytic-2}; 2) Sobolev spaces or H\"older spaces under a monotonicity assumption \cite{xu2014local,wangyaguang2014three,masmoudi2012local,oleinik1999prandtl,xin2004global}; 3) recently \cite{masmoudi2013gevrey} the Gevrey class with a non-degenerate critical point. See also \cite{vicol2014} where the initial data is monotone on a number of intervals and analytic on the complement.
Besides explaining D'Alembert's paradox, the Prandtl equations play a vital role in a challenging problem: the inviscid limit. { Indeed, as pointed out by Grenier-Guo-Nguyen \cite{ggn1,ggn2,ggn3}, the long time behavior of the Prandtl equations is important for progress towards the inviscid limit of the Navier-Stokes equations. We must understand the behavior of solutions on a time interval longer than the one on which the instability used to prove ill-posedness develops. }
To the best of our knowledge, under the monotonicity assumption, by using the Crocco transformation, Oleinik (\cite{oleinik1999prandtl}) obtained long-time smooth solutions in H\"older spaces for the Prandtl equation defined on the interval $0\le x\le L$ with $L$ very small. Xin-Zhang (\cite{xin2004global}) proved the global existence of weak solutions if the pressure gradient has a favorable sign, that is $\partial_x p\le 0$. See \cite{wangyaguang2015global} for a similar work in the 3-D case. The global existence of smooth solutions in the monotonic case remains open.
In the analytic setting, Ignatova-Vicol (\cite{vicol2015}) recently obtained an almost global-in-time solution which is analytic with respect to the tangential variable; see also \cite{zhangping2014longtime} for a similar attempt using a refined Littlewood-Paley analysis. On the other side, without the monotonicity assumption, E and Engquist in \cite{e-2} constructed finite time blowup solutions to the Prandtl equation. After this work, there have been many instability and strong ill-posedness results. In particular, G\'erard-Varet and Dormy \cite{gerard2010ill} showed that the linearized Prandtl equation around a shear flow with a non-degenerate critical point is ill-posed in the sense of Hadamard in Sobolev spaces. See also \cite{e-1,gerard2010remark,grenier-2,guo2011nonlinear,renardy2009ill-steady-hydrostatic} for related works.
Besides, the Crocco transformation cannot be applied to the Navier-Stokes equations. The best remaining choice is to obtain long time well-posedness by energy methods, since they work well for both the Navier-Stokes equations and the Euler equations. Recently, there have been two works \cite{xu2014local,masmoudi2012local} where local-in-time well-posedness is obtained by different kinds of energy methods. One uses a Nash-Moser-H\"ormander iteration. The other uses uniform estimates of a regularized parabolic equation together with the maximum principle.
Motivated by the above analysis, in this work we use the energy method directly to prove the long time existence of smooth solutions of the Prandtl equations in Sobolev spaces. { In detail, for any fixed $T>0$, we show that if the initial perturbation is of size $e^{-T}$, then the lifespan of solutions to the Prandtl equations is at least $T$.}
In what follows, we choose the uniform outflow $ U(t,x)= 1$, which implies $p_x=0$. In other words, the following problem for the Prandtl equation is considered: \begin{equation}\label{full-prandtl} \begin{cases} \partial_t u + u \partial_x u + v\partial_y u = \partial^2_y u,\,\, t>0,\,\, (x, y)\in\mathbb{R}^2_+, \\ \partial_x u +\partial_y v =0, \\
u|_{y=0} = v|_{y=0} =0 , \ \lim\limits_{y\to+\infty} u =1, \\
u|_{t=0} =u_0(x, y). \end{cases} \end{equation}
The weighted Sobolev spaces (similar to \cite{masmoudi2012local}) are defined as follows: \begin{equation*}
\|f\|_{H^n_\lambda(\mathbb{R}^2_+)}^2 = \sum\limits_{|\alpha_1+\alpha_2|\le n}\int_{\mathbb{R}^2_+} \langle y \rangle^{2\lambda+2\alpha_2} |\partial^{\alpha_1}_{x}\partial^{\alpha_2}_{y} f|^2 dx dy\, ,~~~ \lambda > 0,~n \in \mathbb{N}^+. \end{equation*}
In particular, $\|f\|_{L^2_\lambda(\mathbb{R}^2_+)} = \|f\|_{H^0_\lambda(\mathbb{R}^2_+)} $ and $H^n$ stands for the usual Sobolev space.
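The weighted norm above can be approximated by simple quadrature. A minimal sketch (the test function, grid, and truncation of the half plane are our own illustrative choices):

```python
import numpy as np

def weighted_L2_norm_sq(f, lam, x, y):
    """Midpoint-rule approximation of ||f||_{L^2_lambda}^2 =
    int_{R^2_+} <y>^{2 lam} |f(x,y)|^2 dx dy, with <y> = (1 + y^2)^{1/2}."""
    X, Y = np.meshgrid(x, y, indexing="ij")
    weight = (1.0 + Y**2) ** lam          # <y>^{2 lam}
    dx = x[1] - x[0]
    dy = y[1] - y[0]
    return float(np.sum(weight * np.abs(f(X, Y))**2) * dx * dy)

# midpoint grids on a truncated piece of the half plane
dx = dy = 0.02
x = np.arange(-8.0, 8.0, dx) + dx / 2
y = np.arange(0.0, 30.0, dy) + dy / 2

f = lambda X, Y: np.exp(-X**2 - Y)        # a rapidly decaying test function

n0 = weighted_L2_norm_sq(f, 0.0, x, y)    # analytic value: sqrt(pi/2)/2
n2 = weighted_L2_norm_sq(f, 2.0, x, y)    # heavier weight gives a larger norm
```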
{\bf Initial data of shear flow.} Loosely speaking, a shear flow is a solution to the Prandtl equations that is independent of $x$. For more details, please check the {\it analysis of shear flow } part in Section \ref{section2} and Lemma \ref{shear-profile}. We denote the shear flow by $u^s$. From now on, we consider solutions to the Prandtl equations as perturbations around some shear flow. That is to say, \[ u(t, x, y) = u^s(t, y) + \tilde{u}(t, x, y), \quad t \ge 0.\] Assume that $u^s_0$ (the initial datum of the shear flow) satisfies the following conditions: \begin{align} \label{shear-critical-momotone} \begin{cases} u^s_0\in C^{m+4}([0, +\infty[),\,\,\, \lim\limits_{y \to + \infty} u^s_0(y)=1;\\ (\partial^{2p}_y u^s_0)(0) = 0,\,\,\,0\le 2p\le m+4;\\ c_1\langle y \rangle^{-k}\le (\partial_y u^s_0)(y)\le c_2 \langle y \rangle^{-k}, ~~ \forall\,y\ge 0,\\
|(\partial_y^p u^s_0)(y)| \le c_2 \langle y \rangle^{-k-p+1},\,\, \forall\,\,y\ge 0,\,\, 1\le p\le m+4, \end{cases} \end{align} for certain $c_1, c_2>0$ and even integer $m$.
We have the following long-time well-posedness result.
\begin{theorem}\label{main-theorem} Let $m\ge 6$ be an even integer, $k>1$ and $-\frac12<\nu<0$. Assume that $u^s_0$ satisfies \eqref{shear-critical-momotone}, the initial data $\tilde u_0=(u_0-u^s_0) \in H^{m+3}_{k + \nu }(\mathbb{R}^2_+)$, and $\tilde u_0$ satisfies the compatibility condition up to order $m+2$. Then for any $T>0$, there exists $\delta_0>0$ small enough such that if \begin{equation}\label{initial-small}
\|\tilde u_0 \|_{H^{m+1}_{k + \nu }(\mathbb{R}^2_+)}\le \delta_0, \end{equation} then the initial-boundary value problem \eqref{full-prandtl} admits a unique solution $(u, v)$ with $$ (u-u^s)\in L^\infty([0, T]; H^{m}_{k+\nu-\delta'}(\mathbb{R}^2_+)),\,\, v\in L^\infty([0, T]; L^\infty(\mathbb{R}_{y, +}; H^{m-1}(\mathbb{R}_x))), $$ where $\delta'>0$ satisfies $\nu+\frac 12<\delta'<\nu+1$ and $k+\nu-\delta'>\frac 12$.
Moreover, we have the stability with respect to the initial data in the following sense: given any two initial data $$ u^1_0=u^s_0+\tilde{u}^1_0,\quad u^2_0=u^s_0+\tilde{u}^2_0, $$ if $u^s_0$ satisfies \eqref{shear-critical-momotone} and $\tilde{u}^1_0, \,\tilde{u}^2_0$ satisfy \eqref{initial-small}, then the solutions $u^1$ and $u^2$ to \eqref{full-prandtl} satisfy $$
\|u^1-u^2\|_{L^\infty([0, T]; H^{m-3}_{k+\nu-\delta'}(\mathbb{R}^2_+))} \le C\|u^1_0-u^2_0 \|_{ H^{m+1}_{k +\nu}(\mathbb{R}^2_+)}, $$ where the constant $C$ depends on the norm of $\partial_y{u}^1, \partial_y{u}^2$ in $L^\infty([0, T]; H^m_{k+\nu-\delta'+1}(\mathbb{R}^2_+))$. \end{theorem}
\begin{remark}~
\begin{itemize}
\item [1.] We can also verify that
\begin{align*} \partial_y (u-u^s)\in L^\infty([0, T]; H^{m}_{k+\nu-\delta' + 1}(\mathbb{R}^2_+)),\, \partial_y v\in L^\infty([0, T]; H^{m-1}_{k+\nu-\delta'}(\mathbb{R}^2_+)).
\end{align*}
\item [2.] From \eqref{c-tilde} and \eqref{bound-2}, the relationship between the lifespan $T$ and the size of the initial data is:
$$
\delta_0\,\,\approx\,\, e^{-T}.
$$
\item [3.] The results of the main theorem can be extended to the periodic case where $x$ lies in the torus.
\item [4.] We find that the weight of the solution $u(t) - u^s(t)$ is smaller than that of the initial datum $u_0 - u^s_0$.
This means that there is a loss of decay of order $\delta'>0$, which may be very small. It results from the term $v\,\partial_y u$, which is the major difficulty in the analysis of the Prandtl equation.
\end{itemize} \end{remark}
This article is organized as follows. In Section \ref{section2}, we explain the main difficulties in the study of the Prandtl equation and present an outline of our approach. In Section \ref{section3}, we study the approximate solutions to \eqref{full-prandtl} obtained by a parabolic regularization. In Section \ref{section4}, we prepare some technical tools and the formal transformation for the Prandtl equations. Section \ref{section5} is devoted to the uniform estimates of the approximate solutions obtained in Section \ref{section3}. Finally, we prove the main theorem in Sections \ref{section7}--\ref{section8}.
\noindent {\bf Notations: } The letter $C$ stands for various suitable constants, independent of the functions and parameters involved, which may vary from line to line and step to step. When a constant depends on some crucial parameter, we indicate it with a sub-index, such as $C_\epsilon$; these constants may also vary from line to line.
\section{Preliminary}\label{section2}
\noindent {\bf Difficulties and our approach.} Now, we explain the main difficulties in proving Theorem \ref{main-theorem}, and present the strategies of our approach.
It is well known that the major difficulty in the study of the Prandtl equation \eqref{full-prandtl} is the term $v\,\partial_y u$, where the vertical velocity behaves like $$ v(t, x, y)=-\int^y_0 \partial_x u(t, x, \tilde y)d\tilde y, $$ by the divergence-free condition and the boundary conditions. So it introduces a loss of one $x$-derivative. The $y$-integration also creates a loss of weight in the $y$-variable. Hence the standard energy estimates do not work. This explains why there are few existence results in the literature.
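To make this loss concrete, here is a schematic top-order computation (an illustration only, not an estimate used later): applying $\partial_x^m$ to the first equation of \eqref{full-prandtl} and pairing with $\partial_x^m u$ in $L^2$ yields
\begin{equation*}
\frac12\frac{d}{dt}\|\partial_x^m u\|^2_{L^2}+\|\partial_y\partial_x^m u\|^2_{L^2}
=-\int_{\mathbb{R}^2_+}(\partial_x^m v)\,(\partial_y u)\,\partial_x^m u\, dx dy+\cdots,
\qquad
\partial_x^m v=-\int_0^y \partial_x^{m+1}u(t, x, \tilde y)\, d\tilde y,
\end{equation*}
so the right-hand side involves $m+1$ derivatives in $x$ while the left-hand side controls only $m$, and the $y$-integration defining $\partial_x^m v$ costs a power of the weight $\langle y\rangle$.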
Recall that in \cite{xu2014local} (see also \cite{masmoudi2012local} for a similar transformation), under the monotonicity assumption $\partial_y u>0$, one divides the Prandtl equation by $\partial_y u$ and then takes a derivative with respect to $y$, obtaining an equation for the new unknown function $f=\left(\frac{u}{\partial_y u}\right)_y$. In the new equation, the term $v$ disappears thanks to the divergence-free condition. Here, slightly differently from \cite{xu2014local}, we use $ g_m =\left(\frac{\partial_x^m u}{\partial_y u}\right)_y$, where $m$ stands for the highest order of derivatives in $x$. From \cite{masmoudi2012local}, we can observe that only the highest-order $x$-derivative causes trouble. This is why we only define $g_m$.
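The cancellation behind this choice can be sketched as follows (a formal computation). Differentiating the first equation of \eqref{full-prandtl} with respect to $y$ and using $\partial_x u+\partial_y v=0$ gives
\begin{equation*}
\partial_t\partial_y u+u\,\partial_x\partial_y u+v\,\partial^2_y u=\partial^3_y u.
\end{equation*}
Applying $\partial_x^m$ to this equation and to the original one, the only terms in which $v$ carries $m$ derivatives in $x$ are $(\partial_x^m v)\,\partial^2_y u$ and $(\partial_x^m v)\,\partial_y u$, respectively. In the combination
\begin{equation*}
(\partial_y u)\, g_m=\partial_x^m\partial_y u-\frac{\partial^2_y u}{\partial_y u}\,\partial_x^m u,
\end{equation*}
these contributions appear as
\begin{equation*}
(\partial_x^m v)\,\partial^2_y u-\frac{\partial^2_y u}{\partial_y u}\,(\partial_x^m v)\,\partial_y u=0,
\end{equation*}
which removes the loss of the $x$-derivative at top order.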
In order to prove the existence of solutions, following the idea of Masmoudi-Wong (\cite{masmoudi2012local}), we will construct an approximation scheme and study the parabolic regularized Prandtl equation \eqref{shear-prandtl-approxiamte}, which preserves the nonlinear structure of the original Prandtl equation \eqref{full-prandtl}, as well as the nonlinear cancellation properties. Then, by uniform energy estimates for the approximate solutions, the existence of solutions to the original Prandtl equation \eqref{full-prandtl} follows. This energy estimate also implies the uniqueness and the stability. The uniform energy estimate for the approximate solutions is the main task of this paper.
\noindent {\bf Analysis of shear flow.} We write the solution $(u, v)$ of system \eqref{full-prandtl} as
\begin{align*}
u(t, x, y) = u^s(t, y) + \tilde{u}(t, x, y),\,\, v(t, x, y)=\tilde v(t, x, y),
\end{align*} where $u^s(t,y)$ is the solution of the following heat equation \begin{align} \label{shear-flow} \begin{cases} \partial_t u^s - \partial_y^2 u^s =0,\\
u^s|_{y=0} = 0, \lim\limits_{y \to + \infty} u^s(t,y) = 1,\\
u^s|_{t=0} = u^s_0(y). \end{cases}
\end{align} Then \eqref{full-prandtl} can be written as \begin{equation}\label{non-shear-prandtl} \begin{cases} \partial_t\tilde{u} + (u^s + \tilde{u}) \partial_x\tilde{u} + \tilde v (u^s_y +\partial_y \tilde{u}) = \partial^2_y\tilde{u}, \\ \partial_x\tilde{u} +\partial_y\tilde{v} =0, \\
\tilde{u}|_{y=0} = \tilde{v}|_{y=0} =0 , \ \lim\limits_{y\to+\infty} \tilde{u} = 0, \\
\tilde{u}|_{t=0} =\tilde{u}_0 (x,y)\, . \end{cases} \end{equation} We first study the shear flow.
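The reduction to the heat equation \eqref{shear-flow} can be seen directly (a brief sketch): if $u$ is independent of $x$, then $\partial_y v=-\partial_x u=0$, which together with $v|_{y=0}=0$ gives $v\equiv 0$, and \eqref{full-prandtl} reduces to
\begin{equation*}
\partial_t u-\partial^2_y u=0,\qquad u|_{y=0}=0,\qquad \lim_{y\to+\infty}u=1.
\end{equation*}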
\begin{lemma} \label{shear-profile} Assume that the initial datum $u^s_0$ satisfies \eqref{shear-critical-momotone}. Then for any $T>0$, there exist $\tilde{c}_1, \tilde{c}_2, \tilde{c}_3>0$ such that the solution $u^s(t,y)$ of the initial boundary value problem \eqref{shear-flow} satisfies \begin{align} \label{shear-critical-momotone-2} \begin{cases} \tilde{c}_1\langle y \rangle^{-k}\le \partial_y u^s(t, y) \le \tilde{c}_2 \langle y \rangle^{-k}, ~~ \forall\,(t, y)\in [0, T]\times \bar{\mathbb{R}}_+,\\
|\partial_y^p u^s(t, y)| \le \tilde{c}_3 \langle y \rangle^{-k-p+1},\,\, \forall\,\,(t, y)\in [0, T]\times \bar{\mathbb{R}}_+,\,\, 1\le p\le m+4,
\end{cases} \end{align} where $\tilde{c}_1, \tilde{c}_2, \tilde{c}_3$ depend on $T$. \end{lemma}
\begin{proof} Firstly, the solution of \eqref{shear-flow} can be written as \begin{align*} u^s (t,y) &=\frac{1}{2\sqrt {\pi t}} \int^{+\infty}_{0} \Big( e^{-\frac{(y-\tilde{y})^2}{4 t}}-e^{-\frac{(y+\tilde{y})^2} {4 t}}\Big) u_0^s (\tilde{y}) d\tilde{y}\\ &=\frac{1}{\sqrt {\pi}} \Big(\int^{+\infty}_{- {\frac{y}{2\sqrt t}}}
e^{-\xi^2} u_0^s (2\sqrt t \xi +y) d\xi - \int^{+\infty}_{
{\frac{y}{2\sqrt t}}} e^{-\xi^2}u_0^s (2\sqrt t \xi -y)d\xi \Big), \end{align*} which gives \begin{align*} \partial_t u^s(t, y) =& \frac{1}{\sqrt {\pi t}} \Big( \int^{+\infty}_{- {\frac{y}{2\sqrt t}}} {\xi}\, e^{-\xi^2} (\partial_y u_0^s) (2\sqrt t \xi +y) d\xi \\ &\qquad- \int^{+\infty}_{ {\frac{y}{2\sqrt t}}}{\xi}\, e^{-\xi^2}(\partial_y u_0^s) (2\sqrt t \xi -y)d\xi \Big). \end{align*} By using $(\partial_y^{2j}u_0^s)(0)=0$ for $0\le 2j\le m+4$, it follows \begin{align}\label{u-0} \begin{split} \partial^p_y u^s(t, y) =& \frac{1}{\sqrt \pi} \Big( \int^{+\infty}_{- {\frac{y}{2\sqrt t}}} e^{-\xi^2} (\partial^p_yu_0^s) (2\sqrt t \xi+y) d\xi\\
&\quad+ (-1)^{p+1}\int^{+\infty}_{ {\frac{y}{2\sqrt t}}}
e^{-\xi^2}(\partial^p_yu_0^s) (2\sqrt t \xi -y)d\xi \Big)\\
&=\frac{1}{2\sqrt {\pi t}} \int^{+\infty}_{0} \Big( e^{-\frac{(y-\tilde{y})^2}{4 t}}+ (-1)^{p+1}e^{-\frac{(y+\tilde{y})^2} {4 t}}\Big) (\partial^p_yu_0^s) (\tilde{y}) d\tilde{y}, \end{split} \end{align} for all $1\le p\le m+4$.
For $p=1$, we have, \begin{align*} \partial_y u^s(t, y) =& \frac{1}{\sqrt \pi} \Big( \int^{+\infty}_{- {\frac{y}{2\sqrt t}}} e^{-\xi^2} (\partial_yu_0^s) (2\sqrt t \xi+y) d\xi\\
&\quad+\int^{+\infty}_{ {\frac{y}{2\sqrt t}}}
e^{-\xi^2}(\partial_yu_0^s) (2\sqrt t \xi -y)d\xi \Big)\\
&=\frac{1}{2\sqrt {\pi t}} \int^{+\infty}_{0} \Big( e^{-\frac{(y-\tilde{y})^2}{4 t}}+e^{-\frac{(y+\tilde{y})^2} {4 t}}\Big) (\partial_yu_0^s) (\tilde{y}) d\tilde{y}\,. \end{align*} Thanks to the monotonic assumption \eqref{shear-critical-momotone}, we have that \begin{align*} \partial_y u^s(t, y) &\approx \frac{1}{2\sqrt {\pi t}} \int^{+\infty}_{0} \Big( e^{-\frac{(y-\tilde{y})^2}{4 t}}+e^{-\frac{(y+\tilde{y})^2} {4 t}}\Big) \langle \tilde{y}\rangle^{-k} d\tilde{y}\\ &\approx \frac{1}{2\sqrt {\pi t}} \int^{+\infty}_{-\infty} e^{-\frac{(y+\tilde{y})^2} {4 t}} \langle \tilde{y}\rangle^{-k} d\tilde{y}\,. \end{align*} Recalling now Peetre's inequality, for any $\lambda\in\mathbb{R}$ \begin{equation*}
\tilde{c}_0\langle y\rangle^{\lambda}\langle y +\tilde{y}\rangle^{-|\lambda|}\le \langle \tilde{y}\rangle^{\lambda}\le
\tilde{c}^{-1}_0\langle {y}\rangle^{\lambda}\langle y+ \tilde{y}\rangle^{|\lambda|}, \end{equation*} then for $\lambda=-k$, we get the first estimate of \eqref{shear-critical-momotone-2} with \begin{equation}\label{c-tilde} \tilde{c}_1=c_1\tilde{c}_0 (1+T)^{-\frac k2},\,\,\tilde{c}_2=c_2\tilde{c}^{-1}_0 (1+T)^{\frac k2}. \end{equation}
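For completeness, we recall a short proof of Peetre's inequality (one admissible choice being $\tilde{c}_0=2^{-|\lambda|/2}$): writing $\tilde y=(y+\tilde y)-y$,
\begin{equation*}
\langle \tilde{y}\rangle^{2}=1+\tilde y^2\le 1+2y^2+2(y+\tilde y)^2\le 2\,\langle y\rangle^{2}\langle y+\tilde y\rangle^{2},
\end{equation*}
so $\langle \tilde{y}\rangle\le \sqrt2\,\langle y\rangle\langle y+\tilde y\rangle$; raising this to the power $|\lambda|$, and exchanging the roles of $y$ and $\tilde y$ for the other bound (then taking reciprocals when $\lambda<0$), gives both sides of the inequality.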
For the second estimate of \eqref{shear-critical-momotone-2}, \eqref{u-0} implies \begin{align*}
|\partial^p_y u^s(t, y)| &\le \frac{c_2}{2\sqrt {\pi t}} \int^{+\infty}_{0} \Big( e^{-\frac{(y-\tilde{y})^2}{4 t}}+e^{-\frac{(y+\tilde{y})^2} {4 t}}\Big) \langle \tilde{y}\rangle^{-k-p+1} d\tilde{y}\\ &\le \frac{c_2}{2\sqrt {\pi t}} \int^{+\infty}_{-\infty} e^{-\frac{(y+\tilde{y})^2} {4 t}} \langle \tilde{y}\rangle^{-k-p+1} d\tilde{y}\,. \end{align*} Using now Peetre's inequality, with $\lambda=-k-p+1$, we get \begin{align*}
|\partial^p_y u^s(t, y)|\le c_2\tilde{c}^{-1}_0(1+T)^{\frac {k+p-1}2}\langle {y}\rangle^{-k-p+1}, \end{align*} for any $(t, y)\in [0, T]\times \mathbb{R}_+$. \end{proof}
\noindent {\bf Compatibility conditions and reduction of boundary data.} We now give the precise version of the compatibility conditions for the nonlinear system \eqref{non-shear-prandtl} and the reduction properties of the boundary data.
\begin{proposition}\label{prop-comp} Let $m\ge 6$ be an even integer, and assume that $\tilde{u}$ is a smooth solution of the system \eqref{non-shear-prandtl}, then the initial data $\tilde u_0$ have to satisfy the following compatibility conditions up to order $m+2$: \begin{equation}\label{compatibility-a1} \begin{cases} &\tilde{u}_0(x, 0)=0, \quad\,(\partial^2_y \tilde{u}_0)(x, 0)=0, \,\,\forall x\in \mathbb{R},\\ &(\partial^4_y \tilde u_0)(x, 0)=\big(\partial_yu^s_0(0) + (\partial_y\tilde{u}_0)(x, 0)\big) (\partial_y\partial_x\tilde{u}_0)(x, 0),\forall x\in \mathbb{R}, \end{cases} \end{equation} and for $4\le 2p\le m$, \begin{equation}\label{compatibility-a2}
(\partial^{2(p+1)}_y \tilde{u}_0)(x, 0)=\sum^p_{q=2}\sum_{(\alpha, \beta)\in \Lambda_q}C_{\alpha,\beta}\prod\limits_{j=1}^q \partial_x^{\alpha_j}\partial_y^{\beta_j +1} \big( u^s_0 + \tilde{u}_0 \big)\big|_{y=0}\,,\, \,\,\forall x\in \mathbb{R}, \end{equation} where \begin{align}\label{Lambda-p} \begin{split} \Lambda_q=&\bigg\{ (\alpha, \beta)=(\alpha_1, \cdots, \alpha_q; \beta_1, \cdots, \beta_q)\in \mathbb{N}^{q}\times\,\mathbb{N}^{q};\\ &\qquad\alpha_j+\beta_j\le 2p-1,\,\,\, 1\le j\le q;\,\,~\sum^q_{j=1}(3\alpha_j + \beta_j) = 2p +1;\\ &\qquad\quad\quad\qquad~~\sum\limits_{j=1}^{q}\beta_j \le 2p -2,~\,0<\sum\limits_{j=1}^{q} \alpha_j \le p - 1\bigg\}\, . \end{split} \end{align} \end{proposition} Remark that for $\alpha_j>0$, we have $\partial_x^{\alpha_j}\partial_y^{\beta_j +1} \big( u^s+ \tilde{u} \big)=\partial_x^{\alpha_j}\partial_y^{\beta_j +1} \tilde{u}$. So the condition $0<\sum\limits_{j=1}^{q} \alpha_j$ implies that, for each term of \eqref{compatibility-a2}, there is at least one factor of the form $\partial_x^{\alpha_j}\partial_y^{\beta_j +1} \tilde{u}_0$.
\begin{proof} By assumption, $\tilde u$ is a smooth solution. For the trace of $\partial_y^{m+2} \tilde u$ on $y=0$ to exist, we need at least that $\tilde{u}\in L^\infty([0, T]; H^{m+3}_{k+\ell-1}(\mathbb{R}^2_+))$.
Recalling the boundary conditions in \eqref{non-shear-prandtl}: \begin{equation*} \tilde u(t, x, 0)=0, \quad \tilde v(t, x, 0)=0,\quad (t, x)\in [0, T]\times \mathbb{R}, \end{equation*} the following is obvious: \begin{equation*} (\partial_t\partial^n_x\tilde u)(t, x, 0)=0, \quad (\partial_t\partial^n_x \tilde v)(t, x, 0)=0,\quad (t, x)\in [0, T]\times \mathbb{R}, \, 0\le n \le m. \end{equation*} Thus the first condition of \eqref{compatibility-a1} is exactly the compatibility of the solution with the initial data at $t=0$. For the second condition of \eqref{compatibility-a1}, using the equation of \eqref{non-shear-prandtl}, we find that, for $0\le n\le m$, \begin{equation*} (\partial^2_{y}\partial^n_x\tilde{u})(t, x, 0)=0,\quad (\partial_t\partial^2_{y}\partial^n_x\tilde{u})(t, x, 0)=0,\quad (t, x)\in [0, T]\times \mathbb{R}. \end{equation*} Differentiating the equation of \eqref{non-shear-prandtl} with respect to $y$, $$ \partial_t\partial_y\tilde{u} + \partial_y\big((u^s + \tilde{u}) \partial_x\tilde{u}\big) +\partial_y\big(\tilde {v} (u^s_y + \partial_y \tilde{u})\big) = \partial^3_{y}\tilde{u}, $$ and observing \begin{align*}
\Big(\partial_y\big((u^s + \tilde{u}) \partial_x\tilde{u}\big) +\partial_y\big(\tilde {v} (u^s_y + \partial_y \tilde{u})\big)\Big)\Big|_{y=0}=0, \end{align*} then we get \begin{equation*}
(\partial_t\partial_y\tilde{u})|_{y=0}
= (\partial^3_{y}\tilde{u})|_{y=0} . \end{equation*} Differentiating the equation of \eqref{non-shear-prandtl} again with respect to $y$, $$ \partial_t\partial^2_y\tilde{u} + \partial^2_y\bigg((u^s + \tilde{u}) \partial_x\tilde{u}\bigg) +\partial^2_y\bigg(\tilde {v} (u^s_y + \partial_y \tilde{u})\bigg) = \partial^4_{y}\tilde{u}, $$ using the Leibniz formula \begin{align*} &\partial^2_y\bigg((u^s + \tilde{u}) \partial_x\tilde{u}\bigg) +\partial^2_y\bigg(\tilde {v} (u^s_y + \partial_y \tilde{u})\bigg)\\ &=(\partial^2_y(u^s + \tilde{u})) \partial_x\tilde{u} +(\partial^2_y\tilde {v})(u^s_y + \partial_y \tilde{u})\\ &\quad+(u^s + \tilde{u}) \partial^2_y\partial_x\tilde{u} +\tilde {v} \partial^2_y(u^s_y + \partial_y \tilde{u})\\ &\qquad+2(\partial_y(u^s + \tilde{u})) \partial_y\partial_x\tilde{u} +2(\partial_y\tilde {v})\partial_y(u^s_y + \partial_y \tilde{u}), \end{align*} thus, \begin{equation*} (\partial^4_y \tilde u)(t, x, 0)=\bigg(u^s_y(t, 0) + (\partial_y\tilde{u})(t, x, 0)\bigg) (\partial_y\partial_x\tilde{u})(t, x, 0), \end{equation*} and \begin{equation}\label{boundary-a15} \begin{split} (\partial_t\partial^4_y \tilde u)(t, x, 0)=&\bigg(\partial_y u^s(t, 0) + (\partial_y\tilde{u})(t, x, 0)\bigg)\bigg((\partial^3_y\partial_x\tilde{u})(t, x, 0)\bigg) \\ & + \bigg(\partial^3_y u^s(t, 0) + (\partial^3_y\tilde{u})(t, x, 0)\bigg)\bigg((\partial_y\partial_x\tilde{u})(t, x, 0)\bigg). 
\end{split} \end{equation} For $p=2$, we have $$ \partial_t\partial^4_y\tilde{u}+ \partial^4_y\bigg((u^s + \tilde{u}) \partial_x\tilde{u}\bigg) +\partial^4_y\bigg(\tilde {v} (u^s_y + \partial_y \tilde{u})\bigg) = \partial^6_{y}\tilde{u}, $$ using the Leibniz formula \begin{align*} &\partial^4_y\bigg((u^s + \tilde{u}) \partial_x\tilde{u}\bigg) +\partial^4_y\bigg(\tilde {v} (u^s_y + \partial_y \tilde{u})\bigg)\\ &=(\partial^4_y(u^s + \tilde{u})) \partial_x\tilde{u} +(\partial^4_y\tilde {v})(u^s_y + \partial_y \tilde{u}) +(u^s + \tilde{u}) \partial^4_y\partial_x\tilde{u} +\tilde {v} \partial^4_y(u^s_y + \partial_y \tilde{u})\\ &\qquad+\sum_{1\le j\le 3}C^4_j \bigg((\partial^j_y(u^s + \tilde{u})) \partial^{4-j}_y\partial_x\tilde{u} +(\partial^j_y\tilde {v})\partial^{4-j}_y(u^s_y + \partial_y \tilde{u})\bigg), \end{align*} thus, by \eqref{boundary-a15}, \begin{equation}\label{boundary-16-0} \begin{split} &(\partial^6_y \tilde u)(t, x, 0)= (\partial_t\partial^4_y \tilde u)(t, x, 0) -(\partial^3_y\partial_x\tilde{u})(u^s_y + \partial_y \tilde{u})(t, x, 0)\\ &\quad+\sum_{1\le j\le 3}C^4_j \bigg((\partial^j_y(u^s + \tilde{u})) \partial^{4-j}_y\partial_x\tilde{u} +(\partial^j_y\tilde {v})\partial^{4-j}_y(u^s_y + \partial_y \tilde{u})\bigg)(t, x, 0)\\ &=\quad\quad \bigg(\partial^3_y u^s(t, 0) + (\partial^3_y\tilde{u})(t, x, 0)\bigg)\bigg((\partial_y\partial_x\tilde{u})(t, x, 0)\bigg)\\ &\quad+\sum_{1\le j\le 3}C^4_j \bigg((\partial^j_y(u^s + \tilde{u})) \partial^{4-j}_y\partial_x\tilde{u} -(\partial^{j-1}_y\partial_x\tilde {u})\partial^{4-j}_y(u^s_y + \partial_y \tilde{u})\bigg)(t, x, 0). \end{split} \end{equation} Taking the values at $t=0$, we have proven \eqref{compatibility-a2} for $p=2$. The case $p\ge 3$ then follows by induction. \end{proof}
\begin{remark} By similar methods, we can prove that if $\tilde u$ is a smooth solution of the system \eqref{non-shear-prandtl}, then we have \begin{equation*} \begin{cases} &\tilde{u}(t, x, 0)=0, \,\,(\partial^2_y \tilde{u})(t, x, 0)=0, \,\,\forall (t, x)\in [0, T]\times \mathbb{R},\\ &(\partial^4_y \tilde u)(t, x, 0)=\big(u^s_y(t, 0) + (\partial_y\tilde{u})(t, x, 0)\big) (\partial_y\partial_x\tilde{u})(t, x, 0),\forall (t, x)\in [0, T]\times \mathbb{R}, \end{cases} \end{equation*} and for $4\le 2p\le m$, \begin{equation}\label{boundary-data1-e} (\partial^{2(p+1)}_y \tilde{u})(t, x, 0)=\sum^p_{q=2}\sum_{(\alpha, \beta)\in \Lambda_q}C_{\alpha,\beta}\prod\limits_{j=1}^q \partial_x^{\alpha_j}\partial_y^{\beta_j +1} \Big( u^s(t, 0) + \tilde{u}(t, x, 0) \Big), \end{equation} for all $ (t, x)\in [0, T]\times \mathbb{R}$, where $\Lambda_q$ is defined in \eqref{Lambda-p}.
See Lemma 5.9 of \cite{masmoudi2012local} and Lemma 4 of \cite{masmoudi2013gevrey} for the similar results. \end{remark}
Remark that the condition $0<\sum\limits_{j=1}^{q} \alpha_j$ implies that, for each term of \eqref{boundary-data1-e}, there is at least one factor of the form $\partial_x^{\alpha_j}\partial_y^{\beta_j +1} \tilde{u}(t, x, 0)$.
\section{The approximate solutions} \label{section3}
To prove the existence of solutions to the Prandtl equation, we study a parabolic regularized equation, for which we can obtain existence by using the classical energy method.
\noindent {\bf Nonlinear regularized Prandtl equation.} We study the following nonlinear regularized Prandtl equation, for $0<\epsilon\le 1$, \begin{equation}\label{shear-prandtl-approxiamte} \left\{\begin{array}{l} \partial_t\tilde{u}_\epsilon + (u^s + \tilde{u}_\epsilon) \partial_x\tilde{u}_\epsilon +{v}_\epsilon (u^s_y + \partial_y \tilde{u}_\epsilon) = \partial^2_{y}\tilde{u}_\epsilon + \epsilon \partial^2_{x}\tilde{u}_\epsilon, \\ \partial_x\tilde{u}_\epsilon +\partial_y{v}_\epsilon =0, \\
\tilde{u}_\epsilon|_{y=0} = {v}_\epsilon|_{y=0} =0 , \ \lim\limits_{y\to+\infty} \tilde{u}_\epsilon = 0, \\
\tilde{u}_\epsilon|_{t=0}=\tilde{u}_{0, \epsilon} =\tilde{u}_0+\epsilon \mu_\epsilon \, , \end{array}\right. \end{equation} where we choose the corrector $\epsilon \mu_\epsilon $ such that $\tilde{u}_0 +\epsilon \mu_\epsilon $ satisfies the compatibility condition up to order $m+2$ for the regularized system \eqref{shear-prandtl-approxiamte}.
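To illustrate the effect of the regularization (a schematic computation only): multiplying the first equation of \eqref{shear-prandtl-approxiamte} by $\tilde u_\epsilon$ and integrating by parts, the transport terms cancel by the divergence-free condition and the boundary conditions, leaving
\begin{equation*}
\frac12\frac{d}{dt}\|\tilde u_\epsilon\|^2_{L^2}+\|\partial_y\tilde u_\epsilon\|^2_{L^2}+\epsilon\,\|\partial_x\tilde u_\epsilon\|^2_{L^2}
=-\int_{\mathbb{R}^2_+}v_\epsilon\, u^s_y\,\tilde u_\epsilon\, dx dy.
\end{equation*}
The extra dissipation $\epsilon\|\partial_x\tilde u_\epsilon\|^2_{L^2}$ makes the system parabolic in both variables, so for fixed $\epsilon>0$ the classical energy method yields a solution.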
We now study the boundary data of the solution of the regularized nonlinear system \eqref{shear-prandtl-approxiamte}, which also gives the precise version of the compatibility conditions for the system \eqref{shear-prandtl-approxiamte}; see \cite{cannone-non1,cannone-non2} for the Prandtl equation with non-compatible data.
\begin{proposition}\label{prop-comp-b} Let $m\ge 6$ be an even integer, $k>1$, $0< \ell<\frac12$ and $k+\ell>\frac 32$, and assume that $\tilde{u}_0$ satisfies the compatibility conditions \eqref{compatibility-a1} and \eqref{compatibility-a2} for the system \eqref{non-shear-prandtl}, and $\mu_\epsilon \in H^{m+3}_{k +\ell'-1}(\mathbb{R}^2_+)$ for some $\frac 12 <\ell'<\ell+\frac 12$ such that $\tilde{u}_0 +\epsilon \mu_\epsilon $ satisfies the compatibility conditions up to order $m+2$ for the regularized system \eqref{shear-prandtl-approxiamte}. If $\tilde{u}_\epsilon \in L^\infty ([0, T]; H^{m+3}_{k +\ell}(\mathbb{R}^2_+))\cap Lip([0, T]; H^{m+1}_{k +\ell}(\mathbb{R}^2_+))$ is a solution of the system \eqref{shear-prandtl-approxiamte}, then we have \begin{equation*} \begin{cases} &\tilde{u}_\epsilon(t, x, 0)=0, \,\,(\partial^2_y \tilde{u}_\epsilon)(t, x, 0)=0, \,\,\forall (t, x)\in [0, T]\times \mathbb{R},\\ &(\partial^4_y \tilde u_\epsilon)(t, x, 0)=\big(u^s_y(t, 0) + (\partial_y\tilde{u}_\epsilon)(t, x, 0)\big) (\partial_y\partial_x\tilde{u}_\epsilon)(t, x, 0),\forall (t, x)\in [0, T]\times \mathbb{R}, \end{cases} \end{equation*} and for $4\le 2p\le m$, \begin{equation}\label{boundary-data1b} \begin{split} (\partial^{2(p+1)}_y \tilde{u}_\epsilon)(t, x, 0)=& \sum^p_{q=2}\sum^{q-1}_{l=0}\epsilon^l\sum_{(\alpha^l, \beta^l)\in \Lambda^l_q}C_{\alpha^l,\beta^l} \\& \qquad\times\, \prod\limits_{j=1}^q \partial_x^{\alpha^l_j}\partial_y^{\beta^l_j +1} \big( u^s(t, 0) + \tilde{u}_\epsilon(t, x, 0) \big), \end{split} \end{equation} for all $ (t, x)\in [0, T]\times \mathbb{R}$, where \begin{equation*} \begin{split} \Lambda^l_q=&\bigg\{(\alpha, \beta)=(\alpha_1, \cdots, \alpha_q; \beta_1, \cdots, \beta_q)\in \mathbb{N}^{q}\times \mathbb{N}^q;\\ &\qquad \alpha_j+\beta_j\le 2p-1,~~1\le j\le q; \,\,~\sum^q_{j=1}(3\alpha_j + \beta_j) = 2p +4l+1;\\ &\qquad\qquad\sum\limits_{j=1}^{q}\beta_j \le 2p -2l-2,~\,0<\sum\limits_{j=1}^{q} \alpha_j \le p +2l - 1\bigg\}. 
\end{split} \end{equation*} \end{proposition} \begin{remark}\label{remark3.2} \begin{itemize} \item[1.] Remark that the condition $0<\sum\limits_{j=1}^{q} \alpha^l_j$ implies that, for each term of \eqref{boundary-data1b}, there is at least one factor of the form $\partial_x^{\alpha^l_j}\partial_y^{\beta^l_j +1} \tilde{u}_\epsilon(t, x, 0)$. \item[2.] Here we change the notation for the weighted index of the function spaces; in fact, using the notations of Theorem \ref{main-theorem}, we have $$ \ell=\nu-\delta'+1,\quad \ell'=\nu+1. $$ \end{itemize} \end{remark}
\begin{proof} Firstly, for $ p\le \frac m2$, we have $\partial^{2p+2}_y\tilde{u}_\epsilon \in L^\infty ([0, T]; H^{1}_{k +\ell + 2p + 1}(\mathbb{R}^2_+))$. So the trace of $\partial^{2p+2}_y\tilde{u}_\epsilon$ exists on $y=0$.
Using the boundary condition of \eqref{shear-prandtl-approxiamte}, we have, for $0\le n\le m+2$, \begin{equation*} \partial^n_x\tilde u_\epsilon(t, x, 0)=0, \quad \partial^n_xv_\epsilon(t, x, 0)=0,\quad (t, x)\in [0, T]\times \mathbb{R}, \end{equation*} and for $0\le n\le m$ \begin{equation*} (\partial_t\partial^n_x\tilde u_\epsilon)(t, x, 0)=0, \quad (\partial_t\partial^n_x v_\epsilon)(t, x, 0)=0,\quad (t, x)\in [0, T]\times \mathbb{R}. \end{equation*} From the equation of \eqref{shear-prandtl-approxiamte}, we get also \begin{equation}\label{boundary-12b} (\partial^2_{y}\partial^n_x\tilde{u}_\epsilon)(t, x, 0)=0,\quad (\partial_t\partial^2_{y}\partial^n_x\tilde{u}_\epsilon)(t, x, 0)=0,\quad (t, x)\in [0, T]\times \mathbb{R}. \end{equation} On the other hand, $$ \partial_t\partial_y\tilde{u}_\epsilon + \partial_y\big((u^s + \tilde{u}_\epsilon) \partial_x\tilde{u}_\epsilon\big) +\partial_y\big({v}_\epsilon (u^s_y + \partial_y \tilde{u}_\epsilon)\big) = \partial^3_{y}\tilde{u}_\epsilon + \epsilon \partial^2_{x}\partial_y\tilde{u}_\epsilon, $$ observing \begin{align*}
\big[\partial_y\big((u^s + \tilde{u}_\epsilon) \partial_x\tilde{u}_\epsilon\big) +\partial_y\big({v}_\epsilon (u^s_y + \partial_y \tilde{u}_\epsilon)\big)\big]\big|_{y=0}=0, \end{align*} we get \begin{equation*}
(\partial_t\partial_y\tilde{u}_\epsilon)|_{y=0}
= (\partial^3_{y}\tilde{u}_\epsilon)|_{y=0} + \epsilon (\partial^2_{x}\partial_y\tilde{u}_\epsilon)|_{y=0}. \end{equation*} We have also $$ \partial_t\partial^2_y\tilde{u}_\epsilon + \partial^2_y\big((u^s + \tilde{u}_\epsilon) \partial_x\tilde{u}_\epsilon\big) +\partial^2_y\big({v}_\epsilon (u^s_y + \partial_y \tilde{u}_\epsilon)\big) = \partial^4_{y}\tilde{u}_\epsilon + \epsilon \partial^2_{x}\partial^2_y\tilde{u}_\epsilon, $$ using Leibniz formula \begin{align*} &\partial^2_y\big((u^s + \tilde{u}_\epsilon) \partial_x\tilde{u}_\epsilon\big) +\partial^2_y\big({v}_\epsilon (u^s_y + \partial_y \tilde{u}_\epsilon)\big)\\ &=(\partial^2_y(u^s + \tilde{u}_\epsilon)) \partial_x\tilde{u}_\epsilon +(\partial^2_y{v}_\epsilon)(u^s_y + \partial_y \tilde{u}_\epsilon)\\ &\quad+(u^s + \tilde{u}_\epsilon) \partial^2_y\partial_x\tilde{u}_\epsilon +{v}_\epsilon \partial^2_y(u^s_y + \partial_y \tilde{u}_\epsilon)\\ &\qquad+2(\partial_y(u^s + \tilde{u}_\epsilon)) \partial_y\partial_x\tilde{u}_\epsilon +2(\partial_y{v}_\epsilon)\partial_y(u^s_y + \partial_y \tilde{u}_\epsilon), \end{align*} thus, \begin{equation}\label{boundary-14} (\partial^4_y \tilde u_\epsilon)(t, x, 0)=\left(u^s_y(t, 0) + (\partial_y\tilde{u}_\epsilon)(t, x, 0)\right) (\partial_y\partial_x\tilde{u}_\epsilon)(t, x, 0). \end{equation} Applying $\partial_t$ to \eqref{boundary-14}, we have \begin{equation*} \begin{split} &(\partial_t\partial^4_y \tilde u_\epsilon)(t, x, 0)=\left(\partial^3_y u^s(t, 0) + (\partial^3_y\tilde{u}_\epsilon)(t, x, 0)+\epsilon (\partial^2_x\partial_y \tilde u_\epsilon)(t, x, 0)\right)(\partial_y\partial_x\tilde{u}_\epsilon)(t, x, 0)\\ &+\left(u^s_y(t, 0) + (\partial_y\tilde{u}_\epsilon)(t, x, 0)\right) \left((\partial^3_y\partial_x\tilde{u}_\epsilon)(t, x, 0)+\epsilon (\partial^3_x\partial_y \tilde u_\epsilon)(t, x, 0)\right). 
\end{split} \end{equation*} On the other hand, we have $$ \partial_t\partial^4_y\tilde{u}_\epsilon + \partial^4_y\big((u^s + \tilde{u}_\epsilon) \partial_x\tilde{u}_\epsilon\big) +\partial^4_y\big({v}_\epsilon (u^s_y + \partial_y \tilde{u}_\epsilon)\big) = \partial^6_{y}\tilde{u}_\epsilon + \epsilon \partial^2_{x}\partial^4_y\tilde{u}_\epsilon, $$ using Leibniz formula \begin{align*} &\partial^4_y\big((u^s + \tilde{u}_\epsilon) \partial_x\tilde{u}_\epsilon\big) +\partial^4_y\big({v}_\epsilon (u^s_y + \partial_y \tilde{u}_\epsilon)\big)\\ &=(\partial^4_y(u^s + \tilde{u}_\epsilon)) \partial_x\tilde{u}_\epsilon +(\partial^4_y{v}_\epsilon)(u^s_y + \partial_y \tilde{u}_\epsilon)\\ &\quad+(u^s + \tilde{u}_\epsilon) \partial^4_y\partial_x\tilde{u}_\epsilon +{v}_\epsilon \partial^4_y(u^s_y + \partial_y \tilde{u}_\epsilon)\\ &\qquad+\sum_{1\le j\le 3}C^4_j \big((\partial^j_y(u^s + \tilde{u}_\epsilon)) \partial^{4-j}_y\partial_x\tilde{u}_\epsilon +(\partial^j_y{v}_\epsilon)\partial^{4-j}_y(u^s_y + \partial_y \tilde{u}_\epsilon)\big), \end{align*} thus, \begin{equation*} \begin{split} &(\partial^6_y \tilde u_\epsilon)(t, x, 0)= (\partial_t\partial^4_y \tilde u_\epsilon)(t, x, 0) -(\partial^3_y\partial_x{u}_\epsilon)(u^s_y + \partial_y \tilde{u}_\epsilon)(t, x, 0)\\ &\quad+\sum_{1\le j\le 3}C^4_j \big[(\partial^j_y(u^s + \tilde{u}_\epsilon)) \partial^{4-j}_y\partial_x\tilde{u}_\epsilon +(\partial^j_y{v}_\epsilon)\partial^{4-j}_y(u^s_y + \partial_y \tilde{u}_\epsilon)\big](t, x, 0)\\ &\qquad\qquad -\underline{\epsilon \partial^2_{x}\partial^4_y\tilde{u}_\epsilon(t, x, 0)}. 
\end{split} \end{equation*} Using \eqref{boundary-14}, we then get \begin{equation}\label{boundary-16} \begin{split} &(\partial^6_y \tilde u_\epsilon)(t, x, 0) = \big(\partial^3_y u^s(t, 0) + \partial^3_y\tilde{u}_\epsilon (t, x, 0)\big)\partial_y\partial_x\tilde{u}_\epsilon(t, x, 0)\\ &\hskip 5cm -\underline{2\epsilon \partial_x\partial_y\tilde{u}_\epsilon(t, x, 0) (\partial_y\partial_x^2\tilde{u}_\epsilon)(t, x, 0)}\\ &+\sum_{1\le j\le 3}C^4_j \big[(\partial^j_y(u^s + \tilde{u}_\epsilon)) \partial^{4-j}_y\partial_x\tilde{u}_\epsilon - \partial^{j - 1}_y\partial_x \tilde{u}_\epsilon\partial^{4-j}_y(u^s_y + \partial_y \tilde{u}_\epsilon)\big](t, x, 0). \end{split} \end{equation} Compared to \eqref{boundary-16-0}, the underlined term is the new term.
This is Proposition \ref{prop-comp-b} for $p=2$. The proof of Proposition \ref{prop-comp-b} is then completed by induction. \end{proof}
The proof of the above Proposition also implies the following result. \begin{corollary}\label{coro-boundary} Let $m\ge 6$ be an even integer, and assume that $\tilde{u}_0$ satisfies the compatibility conditions \eqref{compatibility-a1} - \eqref{compatibility-a2} for the system \eqref{non-shear-prandtl} and $\partial_y\tilde{u}_{0}\in H^{m+2}_{k+\ell'}(\mathbb{R}^2_+)$. Then there exists $\epsilon_0>0$ such that, for any $0<\epsilon\le \epsilon_0$, there exists $\mu_\epsilon \in H^{m+3}_{k +\ell'-1}(\mathbb{R}^2_+)$ such that $\tilde{u}_0 +\epsilon \mu_\epsilon $ satisfies the compatibility conditions up to order $m+2$ for the regularized system \eqref{shear-prandtl-approxiamte}. Moreover, for any $m\le \tilde m\le m+2$, $$
\|\partial_y\tilde{u}_{0, \epsilon}\|_{H^{\tilde m}_{k+\ell'}(\mathbb{R}^2_+)}\le \frac 32 \|
\partial_y\tilde{u}_{0}\|_{H^{\tilde m}_{k+\ell'}(\mathbb{R}^2_+)}, $$ and $$
\lim_{\epsilon\to 0}\|\partial_y\tilde{u}_{0, \epsilon}-\partial_y\tilde{u}_{0}\|_{H^{\tilde m}_{k+\ell'}(\mathbb{R}^2_+)}=0. $$ \end{corollary}
\begin{proof} We follow the proof of Proposition \ref{prop-comp-b}.
Taking the values at $t=0$ in \eqref{boundary-12b}, \eqref{compatibility-a1} implies that the function $\mu_\epsilon$ satisfies \begin{equation*} (\partial^n_x\mu_\epsilon )(x, 0)=0, \quad (\partial^2_y \partial^n_x\mu_\epsilon )(x, 0)=0,\quad x\in \mathbb{R}\,. \end{equation*} Taking $t=0$ in \eqref{boundary-14}, we have \begin{align*} (\partial^4_y \tilde u_0)(x, 0)+\epsilon(\partial^4_y \mu_\epsilon)(x, 0)=&\big[\partial_yu^s_0(0) + (\partial_y\tilde{u}_0)(x, 0)+\epsilon(\partial_y \mu_\epsilon)(x, 0)\big]\\ &\times\big[(\partial_y\partial_x\tilde{u}_0)(x, 0)+\epsilon(\partial_y \partial_x\mu_\epsilon)(x, 0)\big], \end{align*} and using \eqref{compatibility-a1}, we find that $\mu_\epsilon$ satisfies \begin{equation*} \begin{split} (\partial^4_y \mu_\epsilon )(x, 0)=&\big(\partial_yu^s_0(0) + (\partial_y\tilde{u}_0)(x, 0)\big)(\partial_y \partial_x\mu_\epsilon )(x, 0)\\ &+(\partial_y \mu_\epsilon )(x, 0)(\partial_y\partial_x\tilde{u}_0)(x, 0)\\ &+\epsilon(\partial_y \mu_\epsilon )(x, 0)(\partial_y \partial_x\mu_\epsilon )(x, 0). \end{split} \end{equation*} We also have \begin{equation*} \begin{split} (\partial_t\partial^4_y \tilde u_\epsilon)(0, x, 0)=&\big(\partial^3_y u^s_0(0) + (\partial^3_y\tilde{u}_\epsilon)(0, x, 0)+\epsilon (\partial^2_x\partial_y \tilde u_\epsilon)(0, x, 0)\big)\\ &\times \big((\partial^3_y\partial_x\tilde{u}_\epsilon)(0, x, 0)+\epsilon (\partial^3_x\partial_y \tilde u_\epsilon)(0, x, 0)\big). \end{split} \end{equation*} Taking the values at $t=0$ in \eqref{boundary-16}, we obtain a constraint condition for $(\partial^6_y \mu_\epsilon)(x, 0)$, \begin{align*}
\partial_y^6 \mu_\epsilon (x, 0) & =( (\partial_y^3 u^s_0 + \partial_y^3 \tilde{u}_0 ) \partial_y \partial_x \mu_\epsilon)|_{y=0} + \partial_y^3 \mu_\epsilon \partial_y \partial_x \tilde{u}_0|_{y=0} + \epsilon \partial_y^3 \mu_\epsilon \partial_y \partial_x \mu_\epsilon|_{y=0}\\
& - \underline{2 \partial_x\partial_y\tilde{u}_0(x, 0) (\partial_y\partial_x^2\tilde{u}_0)( x, 0)}- 2 \epsilon \partial_x\partial_y\tilde{u}_0( x, 0) (\partial_y\partial_x^2\mu_\epsilon)(x, 0) \\
& - 2 \epsilon \partial_x\partial_y\mu_\epsilon(x, 0) (\partial_y\partial_x^2\tilde{u}_0)(x, 0) - 2 \epsilon^2 \partial_x\partial_y\mu_\epsilon(x, 0) (\partial_y\partial_x^2\mu_\epsilon)(x, 0) \\
& + \sum\limits_{1 \le j \le 3}C_j^4 \big[ \partial_y^j\big( u^s_0 + \tilde{u}_0 \big)\partial_y^{4 - j}\partial_x \mu_\epsilon + \partial_y^j \mu_\epsilon \partial_y^{4 - j} \partial_x \tilde{u}_0 + \epsilon\partial_y^j \mu_\epsilon \partial_y^{4 - j} \partial_x \mu_\epsilon \big]\big|_{y=0}\\
& - \sum\limits_{ 1 \le j \le 3 }C_j^4 \big[ \partial_y^{j-1} \partial_x \tilde{u}_0 \partial_y^{4 - j} \mu_\epsilon + \epsilon \partial_y^{j - 1} \partial_x \mu_\epsilon \partial_y^{4 - j} \partial_y \mu_\epsilon \big]\big|_{y = 0}\\
& - \sum\limits_{ 1 \le j \le 3 }C_j^4 \partial_y^{j - 1} \partial_x \mu_\epsilon \partial_y^{4 - j}( \partial_y u^s_0 + \partial_y \tilde{u}_0 ) \big|_{y = 0}, \end{align*} thus \begin{equation}\label{mu-6} \begin{split}
\partial_y^6 \mu_\epsilon (x, 0) & = - ~ \underline{2 \partial_x\partial_y\tilde{u}_0(x, 0) (\partial_y\partial_x^2\tilde{u}_0)( x, 0)}\\
& + \sum\limits_{\alpha_1, \beta_1; \alpha_2, \beta_2}C_{\alpha_1, \beta_1; \alpha_2, \beta_2} \partial_x^{\alpha_1}\partial_y^{\beta_1 + 1} ( u^s_0 + \tilde{u}_0) \partial_x^{\alpha_2}\partial_y^{\beta_2 + 1}\mu_\epsilon(x, 0)\\
& + \sum\limits_{\alpha_1, \beta_1; \alpha_2, \beta_2}C_{\alpha_1, \beta_1; \alpha_2, \beta_2} \partial_x^{\alpha_1}\partial_y^{\beta_1 + 1} \mu_\epsilon \partial_x^{\alpha_2}\partial_y^{\beta_2 + 1}\mu_\epsilon(x, 0), \end{split} \end{equation} where the summation is over the indices $\alpha_2+\beta_2\le 3,\, \alpha_1+\beta_1+\alpha_2+\beta_2\le 3$. The underlined term in the above equality is deduced from the underlined term in \eqref{boundary-16}. All these underlined terms come from the added regularizing term $\epsilon\partial_x^2 \tilde{u}$ in the equation \eqref{shear-prandtl-approxiamte}. This means that the regularizing term $\epsilon \partial_x^2 \tilde{u}$ has an effect on the boundary values, and this is why we add a corrector term.
More generally, for $6\le 2p\le m$, we have that $(\partial^{2(p+1)}_y \mu_\epsilon)(x, 0) $ is a linear combination of terms of the form $$
\prod\limits_{j=1}^{q_1}\left( \partial_x^{\alpha^1_j}\partial_y^{\beta^1_j +1} \big( u^s_0 + \tilde{u}_0 \big)\right)\bigg|_{y=0},\,\quad \prod\limits_{i=1}^{q_2}\left( \partial_x^{\alpha^2_i}\partial_y^{\beta^2_i+1} \mu_\epsilon\right)\bigg|_{y=0}\,, $$ and $$
\prod\limits_{j=1}^{q_1}\left( \partial_x^{\alpha^1_j}\partial_y^{\beta^1_j +1} \big( u^s_0 + \tilde{u}_0 \big)\right)\bigg|_{y=0}\,\times \, \prod\limits_{i=1}^{q_2}\left( \partial_x^{\alpha^2_i}\partial_y^{\beta^2_i+1} \mu_\epsilon\right)\bigg|_{y=0}\,, $$ where the coefficients of the combination may depend on $\epsilon$, but only with non-negative powers. We also have $\alpha^l_j+\beta^l_j+1\le 2p,\, l=1, 2$, so $(\partial^{2(p+1)}_y \mu_\epsilon)(x, 0)$ is determined by the lower-order derivatives of $\mu_\epsilon$ and those of $\tilde u_0$.
We now construct a polynomial function $\tilde \mu_\epsilon$ in $y$ by the following Taylor expansion, $$ \tilde \mu_\epsilon(x, y)=\sum^{\frac m2+1}_{p=3} \tilde \mu^{2p}_\epsilon(x)\frac{ y^{2p}}{(2p)!}\,, $$ where $$ \tilde \mu^{6}_\epsilon(x)=-2 (\partial_x\partial_y\tilde{u}_0)(x, 0)(\partial_y\partial^2_x\tilde{u}_0)(x, 0), $$
and $\tilde \mu^{2p}_\epsilon(x)$ is given successively by $(\partial^{2q}_y \mu_\epsilon)(x, 0)$ with $(\partial^{2q+1}_y \mu_\epsilon)(x, 0)=0,\, q=0, \cdots, m$, so it is determined by $(\partial^\alpha_x\partial^\beta_y\tilde u_0)|_{y=0}$. Finally we take $\mu_\epsilon= \chi(y)\tilde \mu_\epsilon$ with $\chi\in C^\infty([0, +\infty[);\, \chi(y)=1,\, 0\le y\le 1;\, \chi(y)=0,\, y\ge 2$. This completes the proof of the Corollary. \end{proof}
\begin{remark}\label{remark-corrector} Suppose that $\tilde u_0$ satisfies the compatibility conditions up to order $m+2$
for the system \eqref{non-shear-prandtl} with $m\ge 4$, then for the regularized
system \eqref{shear-prandtl-approxiamte}, if we want to obtain a smooth solution $\tilde w_\epsilon$, we have to add a non-trivial corrector $\mu_\epsilon$ to the initial data such that $\tilde u_0+\epsilon\mu_\epsilon$ satisfies the compatibility conditions
up to order $m+2$ for the system \eqref{shear-prandtl-approxiamte}. In fact, if we take $\mu_\epsilon$ with $$ (\partial^{j}_y \mu_\epsilon)(x, 0)=0,\quad 0\le j\le 5, $$
then \eqref{mu-6} implies $$ (\partial^{6}_y \mu_\epsilon)(x, 0)=-2 (\partial_x\partial_y\tilde{u}_0)(x, 0)(\partial_y\partial^2_x\tilde{u}_0)(x, 0), $$ which is not equal to $0$ in general. So adding a corrector to the initial data of the regularized system is necessary. \end{remark}
We will prove the existence of the approximate solutions of the system \eqref{shear-prandtl-approxiamte} by using the equation for the vorticity $ \tilde{w}_\epsilon=\partial_y\tilde{u}_\epsilon $, which reads \begin{equation} \label{shear-prandtl-approxiamte-vorticity} \begin{cases} & \partial_t\tilde{w}_\epsilon + (u^s + \tilde{u}_\epsilon) \partial_x\tilde{w}_\epsilon +{v}_\epsilon (u^s_{yy} + \partial_y\tilde{w}_{\epsilon}) = \partial^2_{y}\tilde{w}_\epsilon + \epsilon \partial^2_{x}\tilde{w}_\epsilon, \\
& \partial_y\tilde{w}_{\epsilon}|_{y=0}=0,\\
& \tilde{w}_{\epsilon}|_{t=0}=\tilde{w}_{0, \epsilon}=\tilde{w}_0+\epsilon \partial_y\mu_{\epsilon}, \end{cases} \end{equation} where \begin{equation}\label{u-v-w} \tilde{u}_\epsilon(t, x, y)=-\int^{+\infty}_y \tilde{w}_\epsilon(t, x, \tilde y) d\tilde y,\quad \tilde{v}_\epsilon(t, x, y)=-\int^{y}_0\partial_x \tilde{u}_\epsilon(t, x, \tilde y) d\tilde y. \end{equation} We have the following theorem for the existence of approximate solutions. \begin{theorem}\label{theorem3.1} Let $\partial_y \tilde{u}_{0}\in H^{m+2}_{k+\ell}(\mathbb{R}^2_+)$, let $m\ge 6$ be an even integer, $k>1, 0\le \ell<\frac12, k+\ell>\frac32$, and assume that $\tilde{u}_0$ satisfies the compatibility conditions of order $m+2$ for the system \eqref{non-shear-prandtl}. Suppose that the shear flow satisfies $$
|\partial^{p+1}_y u^s(t, y)|\le C\langle y \rangle^{-k-p}, \quad (t, y)\in [0, T_1]\times \mathbb{R}_+,\,\, 0\le p\le m+2. $$ Then, for any $0<\epsilon\le \epsilon_0$ and $\bar\zeta>0$, there exists $T_\epsilon>0$ which depends on $\epsilon$ and $\bar\zeta$, such that if $$
\|\tilde{w}_{0}\|_{H^{m+2}_{k+\ell}(\mathbb{R}^2_+)}\le \bar\zeta, $$ then the system \eqref{shear-prandtl-approxiamte-vorticity}-\eqref{u-v-w} admits a unique solution $$ \tilde{w}_\epsilon\in L^\infty([0, T_\epsilon]; H^{m+2}_{k+\ell}(\mathbb{R}^2_+)), $$ which satisfies \begin{equation}\label{2-estimate}
\|\tilde{w}_\epsilon\|_{L^\infty([0, T_\epsilon];H^m_{k+\ell}(\mathbb{R}^2_+))}\le \frac 43
\|\tilde{w}_{0, \epsilon}\|_{H^m_{k+\ell}(\mathbb{R}^2_+)}\le 2
\|\tilde{w}_0\|_{H^m_{k+\ell}(\mathbb{R}^2_+)}. \end{equation} \end{theorem}
\begin{remark}
\begin{itemize} \item[(1)] Remark that $T_\epsilon$ depends on $\epsilon$ and $\bar\zeta$, and $T_\epsilon\to 0$ as $\epsilon \to 0$. So this is not a uniform estimate for the sequence of approximate solutions $\{u^s+\tilde{u}_\epsilon;\, 0<\epsilon\le \epsilon_0\}$, where $\epsilon_0>0$ is given in Corollary \ref{coro-boundary}. When the initial data $\tilde u_{0}$ is small enough, we observe that $u^s+\tilde{u}_\epsilon$ preserves the monotonicity and convexity of the shear flow on $[0, T_\epsilon]$.
\item [(2)] In this theorem, for the regularized Prandtl equation, there are no constraint conditions on the initial data, meaning that we need neither the monotonicity nor the convexity of the shear flow $u^s$, and $\bar\zeta$ is also arbitrary. \end{itemize} \end{remark}
If $ \tilde{w}_\epsilon$ is a solution of the system \eqref{shear-prandtl-approxiamte-vorticity}-\eqref{u-v-w}, then \eqref{Hardy1} together with $\lim_{y\to +\infty} \tilde u_\epsilon=0$ implies $$ \tilde{u}_\epsilon\in L^\infty([0, T_\epsilon]; H^{m+2}_{k+\ell-1}(\mathbb{R}^2_+)), $$ and $$ \tilde v_\epsilon\in L^\infty([0, T_\epsilon]; L^\infty(\mathbb{R}_{y, +}; H^{m+1}(\mathbb{R}_x)). $$ Integrating the equation of \eqref{shear-prandtl-approxiamte-vorticity} over $[y, +\infty[$ implies that $(\tilde u_\epsilon, \tilde v_\epsilon)$ is a solution of the system \eqref{shear-prandtl-approxiamte}, except for the boundary condition, which remains to be checked: \begin{equation}\label{u-w-0} \tilde{u}_\epsilon(t, x, 0)=-\int^{+\infty}_0 \tilde{w}_\epsilon(t, x, \tilde y) d\tilde y=0,\quad (t, x)\in [0, T_\epsilon]\times \mathbb{R}. \end{equation} In fact, setting $f(t, x)=-\int^{+\infty}_0 \tilde{w}_\epsilon(t, x, \tilde y) d\tilde y =\tilde{u}_\epsilon(t, x, 0)$, a direct calculation gives \begin{equation} \label{3.00} \begin{cases} & \partial_t f+f \partial_x f = \epsilon \partial^2_{x}f, \quad (t, x)\in ]0, T_\epsilon]\times \mathbb{R};\\
& f|_{t=0}=0, \end{cases} \end{equation} where we have used \begin{align*}
\int^{+\infty}_0 {v}_\epsilon (u^s_{yy} + \partial_y\tilde{w}_{\epsilon}) dy&=
\big[{v}_\epsilon (u^s_{y} + \tilde{w}_{\epsilon})\big]^{+\infty}_0 -\int^\infty_0 (\partial_y{v}_\epsilon) (u^s_{y} + \tilde{w}_{\epsilon}) dy\\ &=\int^\infty_0 (\partial_x\tilde{u}_\epsilon) \partial_y(u^s + \tilde{u}_{\epsilon}) dy\\ &=
\big[(\partial_x\tilde{u}_\epsilon) (u^s + \tilde{u}_{\epsilon})\big]\big|^{+\infty}_0 -\int^\infty_0 (\partial_x\tilde{w}_\epsilon) (u^s+ \tilde{u}_{\epsilon}) dy\\ &=- f \partial_x f -\int^\infty_0 (\partial_x\tilde{w}_\epsilon) (u^s+ \tilde{u}_{\epsilon}) dy. \end{align*} Since $f\in L^\infty([0, T_\epsilon], H^{m+2}(\mathbb{R}))$, the uniqueness of the solution of equation \eqref{3.00} implies that $f=0$ on $[0, T_\epsilon]\times \mathbb{R}$. Moreover, \eqref{u-w-0} implies \begin{equation*} \tilde{u}_\epsilon(t, x, y)=-\int^{+\infty}_y \tilde{w}_\epsilon(t, x, \tilde y) d\tilde y= \int^{y}_0 \tilde{w}_\epsilon(t, x, \tilde y) d\tilde y,\quad (t, x, y)\in [0, T_\epsilon]\times \mathbb{R}^2_+. \end{equation*}
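The uniqueness used here is standard; we sketch the energy argument for the reader's convenience (assuming the stated regularity of $f$): if $f_1, f_2\in L^\infty([0, T_\epsilon]; H^{m+2}(\mathbb{R}))$ are two solutions of \eqref{3.00} and $g=f_1-f_2$, then
\begin{equation*}
\partial_t g+f_1 \partial_x g+g\,\partial_x f_2 = \epsilon \partial^2_x g,\qquad g|_{t=0}=0,
\end{equation*}
and multiplying by $g$, integrating over $\mathbb{R}$ and integrating by parts yields
\begin{equation*}
\frac{d}{dt}\|g(t)\|^2_{L^2(\mathbb{R})}\le \big(\|\partial_x f_1(t)\|_{L^\infty(\mathbb{R})}+2\|\partial_x f_2(t)\|_{L^\infty(\mathbb{R})}\big)\,\|g(t)\|^2_{L^2(\mathbb{R})},
\end{equation*}
so Gr\"onwall's inequality gives $g\equiv 0$; since $f\equiv 0$ solves \eqref{3.00}, it follows that $f=0$.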
We will prove Theorem \ref{theorem3.1} by the following three Propositions, where the first one is devoted to the local existence of the approximate solution $\tilde{w}_\epsilon$ of \eqref{shear-prandtl-approxiamte-vorticity}.
\begin{proposition}\label{prop3.0} Let $\tilde{w}_{0, \epsilon}\in H^{m+2}_{k+\ell}(\mathbb{R}^2_+)$, $m\ge 6$ be an even integer, $k>1, 0\le \ell<\frac12, k+\ell> \frac 32$, and satisfy the compatibility conditions up to order $m+2$ for \eqref{shear-prandtl-approxiamte-vorticity}. Suppose that the shear flow satisfies $$
|\partial^{p+1}_y u^s(t, y)|\le C\langle y \rangle^{-k-p}, \quad (t, y)\in [0, T_1]\times \mathbb{R}_+,\,\, 0\le p\le m+2. $$ Then, for any $0<\epsilon\le 1$ and $\bar\zeta>0$, there exists $T_\epsilon>0$ such that if $$
\|\tilde{w}_{0, \epsilon}\|_{H^{m+2}_{k+\ell}(\mathbb{R}^2_+)}\le \bar\zeta, $$ then the system \eqref{shear-prandtl-approxiamte-vorticity} admits a unique solution $$ \tilde{w}_\epsilon\in L^\infty([0, T_\epsilon]; H^{m+2}_{k+\ell}(\mathbb{R}^2_+))\, . $$ \end{proposition} \begin{remark}\label{remark3.5} If $\tilde{w}_{0}\in H^{m+2}_{k+\ell}(\mathbb{R}^2_+)$ is the initial data in Theorem \ref{theorem3.1}, using Corollary \ref{coro-boundary}, there exists $\epsilon_0>0$, and for any $0<\epsilon\le \epsilon_0$, there exists $\mu_\epsilon \in H^{m+3}_{k+\ell}(\mathbb{R}^2_+)$ such that $\tilde{w}_{0,\epsilon}= \tilde{w}_{0}+\epsilon \partial_y \mu_\epsilon $ satisfies the compatibility conditions up to order $m+2$ for the system \eqref{shear-prandtl-approxiamte-vorticity}, and $$
\|\tilde{w}_{0, \epsilon}\|_{H^{m+2}_{k+\ell}(\mathbb{R}^2_+)}\le \frac 32 \|\tilde{w}_{0}\|_{H^{m+2}_{k+\ell}(\mathbb{R}^2_+)}. $$ Then, using Proposition \ref{prop3.0}, we also obtain the existence of the approximate solution under the assumptions of Theorem \ref{theorem3.1}. \end{remark}
The proof of this Proposition is standard since the equation in \eqref{shear-prandtl-approxiamte-vorticity} is of parabolic type. Firstly, we establish the {\it a priori} estimate, and then prove the existence of the solution by the standard iteration and weak convergence methods. Since we work in weighted Sobolev spaces and the computation is not so trivial, we give a detailed proof in Appendix \ref{section-a3} to make the paper self-contained. So the rest of this section is devoted to proving the estimate \eqref{2-estimate}.
\noindent {\bf Uniform estimate with loss of $x$-derivative.} In the proof of Proposition \ref{prop3.0} (see Lemma \ref{lemmab.2}), we already obtained the {\it a priori} estimate for $\tilde{w}_\epsilon$. We now prove the estimate \eqref{2-estimate} in a new way, and our objective is to establish a uniform estimate with respect to $\epsilon>0$. We first treat the easy part in this subsection.
We define the non-isotropic Sobolev norm, \begin{equation}\label{norm-1}
\|f\|^2_{H^{m, m-1}_{k+\ell}(\mathbb{R}^2_+)}=
\sum_{\alpha_1+\alpha_2\le m,\, \alpha_1\le m-1}\|\langle y\rangle^{k+\ell+\alpha_2}\,\partial^{\alpha_1}_x \partial^{\alpha_2}_y f\|_{L^{2}(\mathbb{R}^2_+)}^2, \end{equation} which does not include the $m$-th order derivative with respect to the $x$-variable. Then $$
\|f\|^2_{H^{m}_{k+\ell}(\mathbb{R}^2_+)}=
\|f\|^2_{H^{m,m-1}_{k+\ell}(\mathbb{R}^2_+)}+\|\partial^m_x f\|^2_{L^{2}_{k+\ell}(\mathbb{R}^2_+)}. $$
\begin{proposition}\label{prop3.1} Let $m\ge 6$ be an even integer, $k>1, 0< \ell<\frac12, k+\ell> \frac 32$, and assume that $\tilde{w}_\epsilon\in L^\infty([0, T_\epsilon]; H^{m+2}_{k+\ell}(\mathbb{R}^2_+))$ is a solution to \eqref{shear-prandtl-approxiamte-vorticity}, then we have
\begin{equation}
\label{approx-less-k}
\begin{split}
&\frac{d}{dt}\|\tilde{w}_\epsilon\|^2_{H^{m, m-1}_{k+\ell}(\mathbb{R}^2_+)}+ \|\partial_y\tilde{w}_\epsilon\|^2_{H^{m, m-1}_{k+\ell}(\mathbb{R}^2_+)}\\
&\qquad+ \epsilon\|\partial_x\tilde{w}_\epsilon\|^2_{H^{m, m-1}_{k+\ell}(\mathbb{R}^2_+)}
\le C_1\bigg( \| \tilde{w}_\epsilon\|_{H^m_{k+\ell}(\mathbb{R}^2_+)}^2 + \| \tilde{w}_\epsilon\|_{H^m_{k+\ell}(\mathbb{R}^2_+)}^m \bigg),
\end{split}
\end{equation}
where $C_1>0$ is independent of $\epsilon$. \end{proposition}
\noindent
{\bf Remark.} The above estimate is uniform with respect to $\epsilon>0$, but on the left-hand side of \eqref{approx-less-k} the term $\|\partial^{m}_x\tilde{w}_\epsilon\|_{L^{2}_{k+\ell}}^2$ is missing. This is because we cannot control the term $$ \partial^{m}_x\tilde{v}_\epsilon(t, x, y)=- \int^y_0\partial^{m+1}_x\tilde{u}_\epsilon(t, x, \tilde{y}) d\tilde{y}, $$ which is the major difficulty in the study of the Prandtl equation. We will first study this term in the next Proposition with a non-uniform estimate, and then focus on proving the uniform estimate in the rest of this paper.
\begin{proof}
For $|\alpha|=\alpha_1+\alpha_2\le m, \alpha_1\le m-1$, we have \begin{equation}\label{non-approx-est-less-s} \begin{split}
&\partial_t \partial^{\alpha} \tilde{w}_\epsilon - \epsilon \partial^2_x\partial^{\alpha} \tilde{w}_\epsilon - \partial_y^2 \partial^{\alpha}\tilde{w}_\epsilon \\
&= - \partial^{\alpha} \big((u^s + \tilde{u}_\epsilon)\partial_x \tilde{w}_\epsilon \big) - \partial^{\alpha} \big( \tilde{v}_\epsilon ( u^s_{yy}+ \partial_y\tilde{w}_{\epsilon} ) \big). \end{split} \end{equation} Multiplying \eqref{non-approx-est-less-s} by $ \langle y \rangle^{2(k+\ell+{\alpha_2})} \partial^{\alpha} \tilde{w}_\epsilon $ and integrating over $\mathbb{R}^2_+$, we obtain \begin{equation*} \begin{split}
&\int_{\mathbb{R}^2_+} (\partial_t \partial^{\alpha} \tilde{w}_\epsilon) \langle y \rangle^{2(k+\ell)+2{\alpha_2}} \partial^{\alpha} \tilde{w}_\epsilon dx dy - \epsilon \int_{\mathbb{R}^2_+} (\partial^2_x\partial^{\alpha} \tilde{w}_\epsilon) \langle y \rangle^{2(k+\ell)+2{\alpha_2}} \partial^{\alpha} \tilde{w}_\epsilon dx dy \\ &\qquad\qquad- \int_{\mathbb{R}^2_+} (\partial^2_y\partial^{\alpha} \tilde{w}_\epsilon) \langle y \rangle^{2(k+\ell)+2{\alpha_2}} \partial^{\alpha} \tilde{w}_\epsilon dx dy \\
&= - \int_{\mathbb{R}^2_+} \partial^{\alpha} \big((u^s + \tilde{u}_\epsilon)\partial_x \tilde{w}_\epsilon + \tilde{v}_\epsilon ( u^s_{yy}+ \partial_y\tilde{w}_{\epsilon} )\big) \langle y \rangle^{2(k+\ell)+2{\alpha_2}} \partial^{\alpha} \tilde{w}_\epsilon dx dy . \end{split} \end{equation*} Remark that for $\tilde{w}_\epsilon\in L^\infty([0, T_\epsilon]; H^{m+2}_{k+\ell}(\mathbb{R}^2_+))$, all the above integrals are in the classical sense. We deal with each term on the left-hand side respectively. After integration by parts, we have \begin{align*}
& \int_{\mathbb{R}^2_+} (\partial_t \partial^{\alpha} \tilde{w}_\epsilon) \langle y \rangle^{2(k+\ell)+2{\alpha_2}} \partial^{\alpha} \tilde{w}_\epsilon dx dy =\frac 12 \frac{d}{ dt}\| \partial^{\alpha} \tilde{w}_\epsilon\|_{L^2_{k+\ell+\alpha_2}(\mathbb{R}^2_+)}^2,\\
& -\epsilon\int_{\mathbb{R}^2_+} ( \partial_x^{2} \partial^{\alpha} \tilde{w}_\epsilon )\langle y \rangle^{2(k+\ell)+2{\alpha_2}} \partial^{\alpha} \tilde{w}_\epsilon dx dy =\epsilon\|\partial_x \partial^{\alpha} \tilde{w}_\epsilon\|_{L^2_{k+\ell+\alpha_2}(\mathbb{R}^2_+)}^2,
\end{align*} and \begin{align*} &\quad -\int_{\mathbb{R}^2_+} \partial_y^{2} \partial^{\alpha} \tilde{w}_\epsilon \langle y \rangle^{2(k+\ell)+2{\alpha_2}} \partial^{\alpha} \tilde{w}_\epsilon dx dy \\
&=\|\partial_y \partial^{\alpha} \tilde{w}_\epsilon\|_{L^2_{k+\ell+\alpha_2}(\mathbb{R}^2_+)}^2+\int_{\mathbb{R}^2_+} \partial^{\alpha}\partial_y\tilde{w}_\epsilon (\langle y \rangle^{2(k+\ell)+2{\alpha_2}} )'\partial^{\alpha} \tilde{w}_\epsilon dx dy\\
&\qquad\qquad+\int_{\mathbb{R}} \left(\partial^{\alpha}\partial_y\tilde{w}_\epsilon \partial^{\alpha} \tilde{w}_\epsilon\right)\big|_{y=0} dx . \end{align*} Cauchy-Schwarz inequality implies \begin{align*}
&\left|\int_{\mathbb{R}^2_+} \partial^{\alpha}\partial_y\tilde{w}_\epsilon (\langle y \rangle^{2(k+\ell)+2{\alpha_2}} )'\partial^{\alpha} \tilde{w}_\epsilon dx dy\right|\\
&\qquad\le\frac{1}{16} \|\partial_y \partial^{\alpha} \tilde{w}_\epsilon\|_{L^2_{k+\ell+\alpha_2}(\mathbb{R}^2_+)}^2 +C\| \partial^{\alpha} \tilde{w}_\epsilon\|_{L^2_{k+\ell+\alpha_2-1}(\mathbb{R}^2_+)}^2. \end{align*} We study now the term $$
\int_{\mathbb{R}} \left(\partial^{\alpha}\partial_y\tilde{w}_\epsilon \partial^{\alpha} \tilde{w}_\epsilon\right)\big|_{y=0} dx. $$
{\bf Case : $|\alpha|\le m-1$}, using the trace Lemma \ref{lemma-trace}, we have \begin{align*}
\left|\int_{\mathbb{R}} \left(\partial^{\alpha}\partial_y\tilde{w}_\epsilon \partial^{\alpha} \tilde{w}_\epsilon\right)\big|_{y=0} dx\right|
&\le\|(\partial^{\alpha}\partial_y\tilde{w}_\epsilon)|_{y=0}\|_{L^2(\mathbb{R})}
\|(\partial^{\alpha}\tilde{w}_\epsilon)|_{y=0}\|_{L^2(\mathbb{R})}\\
&\le C\|\partial^{\alpha}\partial^2_y\tilde{w}_\epsilon\|_{L^2_{k+\ell}(\mathbb{R}^2_+)}
\|\partial^{\alpha}\partial_y\tilde{w}_\epsilon\|_{L^2_{k+\ell}(\mathbb{R}^2_+)}\\
&\le C\|\partial_y\tilde{w}_\epsilon\|_{H^{m, m-1}_{k+\ell}(\mathbb{R}^2_+)}\|\tilde{w}_\epsilon\|_{H^{m}_{k+\ell} (\mathbb{R}^2_+)}\\
&\le \frac 1{16}\|\partial_y\tilde{w}_\epsilon\|^2_{H^{m, m-1}_{k+\ell}(\mathbb{R}^2_+)}+C\|\tilde{w}_\epsilon\|^2_{H^{m}_{k+\ell} (\mathbb{R}^2_+)}. \end{align*} {\bf Case : $\alpha_1=m-1, \alpha_2=1$}, using \eqref{boundary-12b}, we have $$
(\partial^{\alpha}\tilde{w}_\epsilon)|_{y=0}=
(\partial_x^{\alpha_1}\partial_y^{2} \tilde{u}_\epsilon)|_{y=0} = 0, $$ thus $$
\int_{\mathbb{R}} \left(\partial^{\alpha}\partial_y\tilde{w}_\epsilon \partial^{\alpha} \tilde{w}_\epsilon\right)|_{y=0} dx=0. $$ {\bf Case : $\alpha_1=0, \alpha_2=m$}. Only in this case, we need to suppose that $m$ is even. Using again the trace Lemma \ref{lemma-trace}, we have \begin{align*}
\left|\int_{\mathbb{R}} \left(\partial^{m+1}_y\tilde{w}_\epsilon \partial^{m}_y \tilde{w}_\epsilon\right)|_{y=0} dx\right|
&\le\|(\partial^{m+2}_y\tilde{u}_\epsilon)|_{y=0}\|_{L^2(\mathbb{R})}
\|(\partial^{m}_y\tilde{w}_\epsilon)|_{y=0}\|_{L^2(\mathbb{R})}\\
&\le C\|(\partial^{m+2}_y\tilde{u}_\epsilon)|_{y=0}\|_{L^2(\mathbb{R})}
\|\partial^{m+1}_y\tilde{w}_\epsilon\|_{L^2_{k+\ell}(\mathbb{R}^2_+)}\\
&\le \frac 1{16}\|\partial_y\tilde{w}_\epsilon\|^2_{H^{m, m-1}_{k+\ell}(\mathbb{R}^2_+)}+C\|(\partial^{m+2}_y\tilde{u}_\epsilon)
|_{y=0}\|^2_{L^2(\mathbb{R})}. \end{align*}
Using Proposition \ref{prop-comp-b} and the trace Lemma \ref{lemma-trace}, we can estimate the last term above, $\|(\partial^{m+2}_y\tilde{u}_\epsilon)|_{y=0}\|^2_{L^2(\mathbb{R})}$, by a finite sum of terms of the following form
$$
\|\prod^{p}_{j=1} (\partial^{\alpha_j}_x\partial^{\beta_j+1}_y( u^s + \tilde u_\epsilon))|_{y=0}\|^2_{L^2(\mathbb{R})}\le C\|\partial_y\prod^{p}_{j=1} (\partial^{\alpha_j}_x\partial^{\beta_j+1}_y( u^s + \tilde u_\epsilon)) \|^2_{L^2_{\frac 12+\delta}(\mathbb{R}^2_+)} $$ with $2\le p\le \frac m2$, $\alpha_j + \beta_j \le m -1$ and $\{j; \alpha_j>0\}\not=\emptyset$. Then using the Sobolev inequality and $m\ge 6$, we get $$
\|(\partial^{m+2}_y\tilde{u}_\epsilon)
|_{y=0}\|_{L^2(\mathbb{R})}\le C \|\tilde{w}_\epsilon\|^{m/2}_{H^{m}_{k+\ell} (\mathbb{R}^2_+)}. $$ {\bf Case : $1\le \alpha_1\le m-2, \alpha_1+\alpha_2=m, \alpha_2$ even}, using the same argument as in the preceding case, we have \begin{align*}
&\left|\int_{\mathbb{R}}(\partial^{\alpha}\partial_y\tilde{w}_\epsilon \partial^{\alpha} \tilde{w}_\epsilon)|_{y=0}dx \right|=\left|\int_{\mathbb{R}}
(\partial^{\alpha_1}_x\partial^{\alpha_2+1}_y\tilde{w}_\epsilon \partial^{\alpha_1}_x\partial^{\alpha_2}_y \tilde{w}_\epsilon)|_{y=0}dx \right|\\
&\qquad\le \|(\partial^{\alpha_1}_x\partial^{\alpha_2+1}_y\tilde{w}_\epsilon)|_{y=0}\|_{L^2(\mathbb{R})} \|(\partial^{\alpha_1}_x\partial^{\alpha_2}_y \tilde{w}_\epsilon)|_{y=0}\|_{L^2(\mathbb{R})}\\
&\qquad\le \frac 1{16}\|\partial_y\tilde{w}_\epsilon\|^2_{H^{m, m-1}_{k+\ell}(\mathbb{R}^2_+)}+C\|(\partial^{\alpha_1}_x
\partial^{\alpha_2+2}_y\tilde{u}_\epsilon)|_{y=0}\|^2_{L^2(\mathbb{R})}\\
&\qquad\le \frac 1{16}\|\partial_y\tilde{w}_\epsilon\|^2_{H^{m, m-1}_{k+\ell}(\mathbb{R}^2_+)}+C\|\tilde{w}_\epsilon\|^{\alpha_2}_{H^{m}_{k+\ell} (\mathbb{R}^2_+)}. \end{align*}
{\bf Case : $1\le \alpha_1\le m-2, \alpha_1+\alpha_2=m, \alpha_2$ odd}, integration by parts with respect to the $x$ variable implies \begin{align*}
&\left|\int_{\mathbb{R}}
(\partial^{\alpha_1}_x\partial^{\alpha_2+1}_y\tilde{w}_\epsilon \partial^{\alpha_1}_x\partial^{\alpha_2}_y \tilde{w}_\epsilon)|_{y=0}dx \right|=\left|\int_{\mathbb{R}}
(\partial^{\alpha_1-1}_x\partial^{\alpha_2+1}_y\tilde{w}_\epsilon \partial^{\alpha_1+1}_x\partial^{\alpha_2}_y \tilde{w}_\epsilon)|_{y=0}dx \right|\\
&\qquad\le \|(\partial^{\alpha_1-1}_x\partial^{\alpha_2+1}_y\tilde{w}_\epsilon)|_{y=0}\|_{L^2(\mathbb{R})} \|(\partial^{\alpha_1+1}_x\partial^{\alpha_2}_y \tilde{w}_\epsilon)|_{y=0}\|_{L^2(\mathbb{R})}\\
&\qquad\le \frac 1{16}\|\partial_y\tilde{w}_\epsilon\|^2_{H^{m, m-1}_{k+\ell}(\mathbb{R}^2_+)}+C\|(\partial^{\alpha_1+1}_x
\partial^{\alpha_2+1}_y\tilde{u}_\epsilon)|_{y=0}\|^2_{L^2(\mathbb{R})}\\
&\qquad\le \frac 1{16}\|\partial_y\tilde{w}_\epsilon\|^2_{H^{m, m-1}_{k+\ell}(\mathbb{R}^2_+)}+C\|\tilde{w}_\epsilon\|^{\alpha_2-1}_{H^{m}_{k+\ell} (\mathbb{R}^2_+)}. \end{align*}
Finally, we have proven \begin{align*} \begin{split}
& \int_{\mathbb{R}^2_+} \big(\partial_t \partial^{\alpha} \tilde{w}_\epsilon - \partial_y^2\partial^{\alpha} \tilde{w}_\epsilon - \epsilon \partial^2_x\partial^{\alpha} \tilde{w}_\epsilon\big) \langle y \rangle^{2 (k+\ell+\alpha_2)} \partial^{\alpha} \tilde{w}_\epsilon dx dy\\
& \ge\frac12 \frac{d}{ dt}\|\partial^{\alpha} \tilde{w}_\epsilon\|_{L^2_{k+\ell+\alpha_2}}^2+ \epsilon\|\partial_x\partial^{\alpha} \tilde{w}_\epsilon\|_{L^2_{k+\ell+\alpha_2}}^2 +\|\partial_y \partial^{\alpha} \tilde{w}_\epsilon\|_{L^2_{k+\ell+\alpha_2}}^2\\
&\qquad-\frac{1}{4} \|\partial_y\tilde{w}_\epsilon\|^2_{H^{m, m-1}_{k+\ell}(\mathbb{R}^2_+)}-C\|\tilde{w}_\epsilon\|^{m}_{H^{m}_{k+\ell} (\mathbb{R}^2_+)}. \end{split} \end{align*}
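For the reader's convenience, the trace estimates used repeatedly above can be sketched as follows (a minimal sketch, assuming sufficient decay in $y$; the precise statement is the trace Lemma \ref{lemma-trace}): for $g$ vanishing as $y\to+\infty$ and any small $\delta>0$, the Cauchy-Schwarz inequality gives
\begin{equation*}
|g(x, 0)|=\Big|\int^{+\infty}_0 \partial_y g(x, y)\, dy\Big|
\le \Big(\int^{+\infty}_0 \langle y\rangle^{-1-2\delta} dy\Big)^{\frac12}
\Big(\int^{+\infty}_0 \langle y\rangle^{1+2\delta}|\partial_y g(x, y)|^2 dy\Big)^{\frac12},
\end{equation*}
so that, after integration in $x$,
\begin{equation*}
\|g|_{y=0}\|_{L^2(\mathbb{R})}\le C_\delta \|\partial_y g\|_{L^2_{\frac12+\delta}(\mathbb{R}^2_+)}\le C_\delta \|\partial_y g\|_{L^2_{k+\ell}(\mathbb{R}^2_+)},
\end{equation*}
since $k+\ell>\frac12+\delta$ for $\delta$ small.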
We now estimate the right-hand side of \eqref{non-approx-est-less-s}. For the first term, we need to split it into two parts
\begin{align*}
- \partial^{\alpha} \big((u^s + \tilde{u}_\epsilon)\partial_x \tilde{w}_\epsilon \big) = - (u^s + \tilde{u}_\epsilon)\partial_x \partial^{\alpha} \tilde{w}_\epsilon + [ (u^s + \tilde{u}_\epsilon), \partial^{\alpha}]\partial_x \tilde{w}_\epsilon.
\end{align*}
Firstly, we have
\begin{align*}
\int_{\mathbb{R}^2_+} \big((u^s + \tilde{u}_\epsilon) \partial_x \partial^{\alpha} \tilde{w}_\epsilon\big) \langle y \rangle^{2(k+\ell+{\alpha_2})}\partial^{\alpha} \tilde{w}_\epsilon dx dy \le \|\partial_x \tilde{u}_\epsilon\|_{L^\infty}\|\partial^{\alpha} \tilde{w}_\epsilon\|_{L^2_{k+\ell+{\alpha_2}}}^2,
\end{align*}
then using \eqref{sobolev-1}, we get
\begin{align*}
\left|\int_{\mathbb{R}^2_+} \big((u^s + \tilde{u}_\epsilon) \partial_x \partial^{\alpha} \tilde{w}_\epsilon\big)\langle y \rangle^{2(k+\ell+{\alpha_2})}\partial^{\alpha} \tilde{w}_\epsilon dx dy\right| \le \| \tilde{w}_\epsilon\|_{H^3_1}\|\partial^{\alpha} \tilde{w}_\epsilon\|_{L^2_{k+\ell+{\alpha_2}}}^2.
\end{align*}
The commutator term can in fact be written as
\begin{align*}
& [ (u^s + \tilde{u}_\epsilon), \partial^{\alpha}]\partial_x \tilde{w}_\epsilon = \sum\limits_{\beta \le \alpha,\, 1\le|\beta|}C^\beta_\alpha \,\,\partial^{\beta}( u^s + \tilde{u}_\epsilon) \partial^{\alpha - \beta}\partial_x \tilde{w}_\epsilon.
\end{align*}
Then for $|\alpha|\le m, m\ge 4$, using the Sobolev inequality again and Lemma \ref{inequality-hardy},
$$
\| [ (u^s + \tilde{u}_\epsilon), \partial^{\alpha}]\partial_x \tilde{w}_\epsilon\|_{L^2_{k+\ell+\alpha_2}}\le C( \|\tilde{w}_\epsilon\|_{H^m_{k+\ell}}
+\|\tilde{w}_\epsilon\|^2_{H^m_{k+\ell}}).
$$
Thus
\begin{align*}
\left|\int_{\mathbb{R}^2_+} \langle y \rangle^{2(k+\ell+\alpha_2)}\big( [ (u^s + \tilde{u}_\epsilon), \partial^{\alpha}]\partial_x \tilde{w}_\epsilon \big)\cdot \partial^{\alpha} \tilde{w}_\epsilon dx dy \right|\le C \big( \|\tilde{w}_\epsilon\|^2_{H^m_{k+\ell}}
+\|\tilde{w}_\epsilon\|^3_{H^m_{k+\ell}} \big) ,
\end{align*}
and
\begin{align*}
\left|\int_{\mathbb{R}^2_+} \langle y \rangle^{2(k+\ell+\alpha_2)}\big( \partial^{\alpha} \big( (u^s + \tilde{u}_\epsilon)\partial_x \tilde{w}_\epsilon \big)\big)\partial^{\alpha} \tilde{w}_\epsilon dx dy\right| \le C \big( \|\tilde{w}_\epsilon\|^2_{H^m_{k+\ell}}
+\|\tilde{w}_\epsilon\|^3_{H^m_{k+\ell}} \big),
\end{align*}
where $C$ is independent of $\epsilon$.
For the next term, similarly to the first term on the right-hand side of \eqref{non-approx-est-less-s}, we have
\begin{align*}
\partial^{\alpha} \big( \tilde{v}_\epsilon ( u^s_{yy}+\partial_y\tilde{w}_\epsilon ) \big) & = \tilde{v}_\epsilon \partial_y \partial^{\alpha} \tilde{w}_\epsilon - [\tilde{v}_\epsilon, \partial^{\alpha} ] \partial_y \tilde{w}_\epsilon+ \partial^{\alpha}
(\tilde{v}_\epsilon u^s_{yy} ).
\end{align*}
Then
\begin{align*}
\left|\int_{\mathbb{R}^2_+} \tilde{v}_\epsilon \langle y \rangle^{ 2(k+\ell+\alpha_2)} (\partial_y\partial^{\alpha} \tilde{w}_\epsilon)\cdot \partial^{\alpha} \tilde{w}_\epsilon dx dy\right|
&\le \|\tilde{v}_\epsilon\|_{L^\infty(\mathbb{R}^2_+)}
\|\partial_y \tilde{w}_\epsilon\|_{H^m_{k+\ell}}\| \tilde{w}_\epsilon\|_{H^m_{k+\ell}}\\
&\le \frac 1{4} \|\partial_y \tilde{w}_\epsilon\|^2_{H^m_{k+\ell}(\mathbb{R}^2_+)}+C\| \tilde{w}_\epsilon\|^4_{H^m_{k+\ell}(\mathbb{R}^2_+)},
\end{align*}
where we have used
\begin{equation*}
\begin{split}
&\|\tilde{v}_{\epsilon}\|_{L^\infty(\mathbb{R}^2_+)} \le C\|\partial_x\tilde{u}_{\epsilon}\|_{L^\infty(\mathbb{R}_x; L^2_{\frac 12 +\delta}(\mathbb{R}_{y, +}))}\\
&\le C \Big(\int_{\mathbb{R}^2_+}\langle y\rangle ^{1+2\delta}(|\partial_x\tilde{u}_{\epsilon}|^2+|\partial^2_x\tilde{u}_{\epsilon}|^2)
dx dy\Big)^{\frac 12}\\
&\le C \Big(\int_{\mathbb{R}^2_+}\langle y\rangle ^{3+2\delta}(|\partial_x\tilde{w}_{\epsilon}|^2+|\partial^2_x\tilde{w}_{\epsilon}|^2)
dx dy\Big)^{\frac 12}\le C\| \tilde{w}_\epsilon\|_{H^2_{\frac 32 +\delta}},
\end{split}
\end{equation*}
where $\delta>0$ is small.
Notice that
\begin{align*}
[ \tilde{v}_\epsilon, \partial^{\alpha} ] \partial_y \tilde{w}_\epsilon & = \sum\limits_{ \beta \le \alpha, 1\le |\beta|} C^\beta_\alpha \,\,\, \partial^{\beta} \tilde{v}_\epsilon \,\,\partial^{\alpha - \beta}\partial_y \tilde{w}_\epsilon.
\end{align*}
Since $H^m_\ell$ is an algebra for $m\ge 6$, we only need to pay attention to the order of the derivatives in the above formula. Firstly, for $|\beta|\ge 1$ and $|\alpha-\beta|+1\le m$, we have
$$
-\partial^{\beta} \tilde{v}_\epsilon=\partial^{\beta_1}_x \partial^{\beta_2}_y \int^y_0\tilde{u}_{\epsilon, x} d\tilde {y}
=\left\{\begin{array}{ll}
\partial^{\beta_1+1}_x \partial^{\beta_2-1}_y \tilde{u}_{\epsilon},&\quad \beta_2\ge 1,\\
\int^y_0 \partial^{\beta_1+1}_x \tilde{u}_{\epsilon} d\tilde {y},&\quad\beta_2=0.
\end{array}\right.
$$
Now using the hypothesis $\beta\le \alpha$, $1\le |\beta|$, $\beta_1\le \alpha_1\le m-1$ and Lemma \ref{inequality-hardy}, we get
$$
\| [ \tilde{v}_\epsilon, \partial^{\alpha} ] \partial_y \tilde{w}_\epsilon \|_{L^2_{k+\ell+\alpha_2}}\le C\|\tilde{w}_\epsilon\|^2_{H^m_{k+\ell}}.
$$
On the other hand, if $\alpha_2=0$, using $-1+\ell<-\frac 12$, we can get
$$
\| \partial^{m-1}_x( \tilde{v}_\epsilon u^s_{yy}) \|_{L^2_{k+\ell}}\le
C\|\partial^{m}_x\tilde{u}_\epsilon\|_{L^2_{\frac 12+\delta}(\mathbb{R}^2_+)}\|u^s_{yy} \|_{L^2_{k+\ell}(\mathbb{R}_+)}
\le C\|\tilde{w}_\epsilon\|_{H^m_{\frac 32+\delta}}.
$$
By similar computations in the other cases, we get, for $\alpha_2>0, \alpha_1+\alpha_2\le m$,
$$
\| \partial^{\alpha}( \tilde{v}_\epsilon u^s_{yy}) \|_{L^2_{k+\ell+\alpha_2}}\le C\|\tilde{w}_\epsilon\|_{H^m_{k+\ell}}.
$$ Combining the above estimates, we finish the proof of Proposition \ref{prop3.1}. \end{proof}
\noindent {\bf Smallness of approximate solutions.} To close the energy estimate, we still need to estimate the term $\partial^m_x \tilde{w}_\epsilon$.
\begin{proposition} \label{prop3.2}
Under the hypotheses
of Theorem \ref{theorem3.1}, and with the same notations as in Proposition \ref{prop3.1}, we have
\begin{align}
\label{approx-k}
\begin{split}
& \frac 12\frac{d}{dt}\|\partial_x^m \tilde{w}_\epsilon\|_{L^2_{k+\ell}}^2 + \frac{3\epsilon}{4}\|\partial_x^{m+1}\tilde{w}_\epsilon\|_{L^2_{k+\ell}}^2 + \frac{3}{4} \|\partial_y \partial_x^m \tilde{w}_\epsilon\|_{L^2_{k+\ell}}^2 \\
& \le C\big( \| \tilde{w}_\epsilon\|_{H^m_{k+\ell}}^2 + \| \tilde{w}_\epsilon\|_{H^m_{k+\ell}}^3 \big) + \frac{32}{\epsilon}\big(\|\tilde{w}_\epsilon\|_{H^m_{k+\ell}}^4+\|\tilde{w}_\epsilon\|_{H^m_{k+\ell}}^2\big).
\end{split}
\end{align} \end{proposition}
\begin{proof}
We have
\begin{align*}
\partial_t \partial_x^m \tilde{w}_\epsilon - \partial_y^2 \partial_x^m \tilde{w}_\epsilon - \epsilon \partial_x^m\partial^2_x \tilde{w}_{\epsilon}= - \partial_x^m \big((u^s + \tilde{u}_\epsilon)\partial_x \tilde{w}_\epsilon \big) - \partial_x^m \big( \tilde{v}_\epsilon (\partial_y\tilde{w}_\epsilon + u^s_{yy}) \big),
\end{align*}
then the same computations as in Proposition \ref{prop3.1} give
\begin{align}
\label{approx-k-part-1}
\begin{split}
& \frac 12\frac{d}{dt}\|\partial_x^m \tilde{w}_\epsilon\|_{L^2_{{k+\ell}}}^2 + \epsilon\|\partial_x^{m+1} \tilde{w}_\epsilon\|_{L^2_{k+\ell}}^2 + \frac{3}{4} \|\partial_y \partial_x^m\tilde{w}_\epsilon\|_{L^2_{k+\ell}}^2\\
&\le C( \|\tilde{w}_\epsilon \|_{H^m_{k+\ell}}^2+\|\tilde{w}_\epsilon \|_{H^m_{k+\ell}}^3)\\
&+\left|\int_{\mathbb{R}^2_+} \partial_x^m \big( \tilde{v}_\epsilon (\partial_y\tilde{w}_\epsilon + u^s_{yy}) \big) \langle y \rangle^{2(k+\ell)} \partial_x^m \tilde{w}_\epsilon dxdy\right|,
\end{split}
\end{align} where the boundary terms are easier to control, since $$ (\partial_y \partial_x^m\tilde{w}_\epsilon)(t, x, 0)=(\partial^2_y \partial_x^m\tilde{u}_\epsilon)(t, x, 0)=0,\,\,\,(t, x)\in [0, T]\times \mathbb{R}. $$ The estimate of the last term on the right-hand side is the main obstacle in the study of the Prandtl equations. We have
\begin{align*}
\partial_x^m \big( \tilde{v}_\epsilon (\partial_y\tilde{w}_\epsilon + u^s_{yy}) \big) & = \tilde{v}_\epsilon \partial_x^m \partial_y \tilde{w}_\epsilon + (\partial^m_x\tilde{v}_\epsilon) (\partial_y \tilde{w}_\epsilon+u^s_{yy})\\
&\quad +\sum_{1\le j\le m-1}C^j_m\, \partial^j_x\tilde{v}_\epsilon\partial^{m-j}_x \partial_y\tilde{w}_\epsilon.
\end{align*} For the first term
\begin{align*}
\int_{\mathbb{R}^2_+} \tilde{v}_\epsilon(\partial_x^m \partial_y \tilde{w}_\epsilon) \langle y \rangle^{ 2(k+\ell)} (\partial_x^m \tilde{w}_\epsilon)dx dy
&=\frac{1}{2}\int \tilde{v}_\epsilon \langle y \rangle^{ 2(k+\ell)} \partial_y(\partial_x^m \tilde{w}_\epsilon)^2 dx dy \\
& =
\frac{1}{2}\int \tilde{u}_{\epsilon, x} \langle y \rangle^{ 2(k+\ell)} (\partial_x^m \tilde{w}_\epsilon)^2 dx dy\\
& - (k+\ell) \int \tilde{v}_\epsilon \, y\, \langle y \rangle^{ 2(k+\ell) - 2} (\partial_x^m \tilde{w}_\epsilon)^2 dx dy\\
& \le C \|\tilde{w}_\epsilon\|_{H^m_{k+\ell}}^3,
\end{align*}
where we have used $\tilde{v}_\epsilon|_{y=0}=0$, and
\begin{align*}
\left|\int_{\mathbb{R}^2_+} \Big(\sum_{1\le j\le m-1}C^j_m\, \partial^j_x\tilde{v}_\epsilon\,\partial^{m-j}_x \partial_y\tilde{w}_\epsilon\Big) \langle y \rangle^{ 2(k+\ell)} (\partial_x^m \tilde{w}_\epsilon)dx dy\right| \le C \|\tilde{w}_\epsilon\|_{H^m_{k+\ell}}^3.
\end{align*}
Finally, for the worst term, we have
\begin{align*}
&\left|\int_{\mathbb{R}^2_+} (\partial^m_x \tilde{v}_\epsilon)(\partial_y \tilde{w}_\epsilon
+u^s_{yy}) \langle y \rangle^{ 2(k+\ell)} (\partial_x^m \tilde{w}_\epsilon)dx dy \right|\\
&\le C\|\partial^m_x \tilde{v}_\epsilon\|_{L^2(\mathbb{R}_x; L^\infty(\mathbb{R}_+))} \|\partial_y\tilde{w}_\epsilon\|_{L^\infty(\mathbb{R}_x; L^2_{k+\ell}(\mathbb{R}_+))} \|\tilde{w}_\epsilon\|_{H^m_{k+\ell}}\\
&\qquad\qquad +\|\partial^m_x \tilde{v}_\epsilon u^s_{yy}\|_{L^2_{k+\ell}(\mathbb{R}^2_+)} \|\tilde{w}_\epsilon\|_{H^m_{k+\ell}}.
\end{align*}
On the other hand, observing
$$
\partial^m_x \tilde{v}_\epsilon (t, x, y)=-\int^y_0 \partial^{m+1}_x \tilde{u}_\epsilon(t, x, \tilde{y})d\tilde y,
$$ then using the Sobolev inequality and Lemma \ref{inequality-hardy}, for $\delta>0$ small, we get
$$
\|\partial^m_x \tilde{v}_\epsilon\|_{L^2(\mathbb{R}_x; L^\infty(\mathbb{R}_+))}\le C\|\partial^{m+1}_x \tilde{u}_\epsilon\|_{L^2_{\frac 12+\delta}(\mathbb{R}^2_+)} \le C\|\partial^{m+1}_x \tilde{w}_\epsilon\|_{L^2_{\frac 32+\delta}(\mathbb{R}^2_+)}.
$$
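For the reader's convenience, the first of these inequalities can be seen by the Cauchy--Schwarz inequality in the $y$-variable, since $\langle y\rangle^{-\frac 12-\delta}\in L^2(\mathbb{R}_+)$ for $\delta>0$:
\begin{align*}
|\partial^m_x \tilde{v}_\epsilon(t, x, y)| &\le \int^\infty_0 |\partial^{m+1}_x \tilde{u}_\epsilon(t, x, \tilde y)|\, d\tilde y\\
&\le \Big(\int^\infty_0 \langle \tilde y\rangle^{-1-2\delta}\, d\tilde y\Big)^{\frac 12} \Big(\int^\infty_0 \langle \tilde y\rangle^{1+2\delta}\, |\partial^{m+1}_x \tilde{u}_\epsilon(t, x, \tilde y)|^2\, d\tilde y\Big)^{\frac 12}\\
&\le C_\delta\, \|\partial^{m+1}_x \tilde{u}_\epsilon(t, x, \cdot)\|_{L^2_{\frac 12+\delta}(\mathbb{R}_+)},
\end{align*}
while the second presumably follows from the Hardy-type inequality of Lemma \ref{inequality-hardy}, applied in the $y$-variable to $\tilde{u}_\epsilon$ and $\tilde{w}_\epsilon=\partial_y \tilde{u}_\epsilon$.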
Using the hypothesis for the shear flow $u^s$ and $\ell-1<-\frac 12$,
\begin{align*}
\|\partial^m_x (\tilde{v}_\epsilon u^s_{yy})\|_{L^2_{k+\ell}(\mathbb{R}^2_+)}&\le \|\partial^m_x \tilde{v}_\epsilon\|_{L^2(\mathbb{R}_x; L^\infty(\mathbb{R}_+))}\| u^s_{yy}\|_{L^2_{k+\ell}(\mathbb{R}_+)}\\
&\le C\|\partial^{m+1}_x \tilde{w}_\epsilon\|_{L^2_{\frac 32+\delta}(\mathbb{R}^2_+)},
\end{align*}
and for $k+\ell\ge \frac 32+\delta$,
$$
\|\partial_y\tilde{w}_\epsilon\|_{L^\infty(\mathbb{R}_x; L^2_{k+\ell}(\mathbb{R}_+))} \le C \|\partial_y\tilde{w}_\epsilon\|_{H^1(\mathbb{R}_x; L^2_{k+\ell}(\mathbb{R}_+))}\le C\|\tilde{w}_\epsilon\|_{H^m_{k+\ell}(\mathbb{R}^2_+)}.
$$ Thus, we have
\begin{align}
\label{approx-k-part-3}
\begin{split}
&\left|\int \big(\partial_x^m \big( \tilde{v}_\epsilon (\partial_y\tilde{w}_\epsilon + u^s_{yy}) \big)\big) \, \langle y \rangle^{2(k+\ell)} \partial_x^m \tilde{w}_\epsilon dx dy \right|\\
& \le C \| \tilde{w}_\epsilon \|_{H^m_{k+\ell}}^3 + \frac{32}{\epsilon}\big(\|\tilde{w}_\epsilon\|_{H^m_{k+\ell}}^4+\|\tilde{w}_\epsilon\|_{H^m_{k+\ell}}^2\big) + \frac{\epsilon}{4}\| \partial_x^{m+1} \tilde{w}_\epsilon\|_{L^2_{\frac 32+\delta}}^2.
\end{split}
\end{align}
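Here, the last term was absorbed via Young's inequality: with $X=\|\partial^{m+1}_x\tilde w_\epsilon\|_{L^2_{\frac 32+\delta}}$ and $Y=\|\tilde w_\epsilon\|_{H^m_{k+\ell}}$, the estimates above give a bound of the form $C\,X(Y+Y^2)+C\,Y^3$, and
\begin{align*}
C\,X(Y + Y^2) \ \le\ \frac{\epsilon}{4}X^2 + \frac{C^2}{\epsilon}(Y+Y^2)^2 \ \le\ \frac{\epsilon}{4}X^2 + \frac{2C^2}{\epsilon}\big(Y^2 + Y^4\big),
\end{align*}
which yields \eqref{approx-k-part-3}, the factor $\frac{32}{\epsilon}$ standing for the resulting harmless constant $\frac{2C^2}{\epsilon}$ after adjusting $C$.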
From \eqref{approx-k-part-1} and \eqref{approx-k-part-3}, we have, if $k+\ell >\frac 32$,
\begin{align*}
\begin{split}
&\frac 12 \frac{d}{dt}\|\partial_x^m \tilde{w}_\epsilon\|_{L^2_{k+\ell}}^2 + \frac{3\epsilon}{4}\|\partial_x^{m+1}\tilde{w}_\epsilon\|_{L^2_{k+\ell}}^2 + \frac{3}{4} \|\partial_y \partial_x^m \tilde{w}_\epsilon\|_{L^2_{k+\ell}}^2 \\
& \le C\big( \| \tilde{w}_\epsilon\|_{H^m_{k+\ell}}^2 + \| \tilde{w}_\epsilon\|_{H^m_{k+\ell}}^3 \big) + \frac{32}{\epsilon}\big(\|\tilde{w}_\epsilon\|_{H^m_{k+\ell}}^4+\|\tilde{w}_\epsilon\|_{H^m_{k+\ell}}^2\big).
\end{split}
\end{align*} \end{proof}
\begin{proof}[{\bf End of proof of Theorem \ref{theorem3.1}}]
Combining \eqref{approx-less-k} and \eqref{approx-k}, for $m\ge 6, k>1, \frac 32 -k<\ell<\frac 12 $ and $0<\epsilon\le 1$, we get
\begin{align}
\label{approx-total}
\frac{d}{dt}\| \tilde{w}_\epsilon\|_{H^m_{k+\ell}(\mathbb{R}^2_+)}^2 &\le \frac{C}{\epsilon}\big( \| \tilde{w}_\epsilon\|_{H^m_{k+\ell}(\mathbb{R}^2_+)}^2 + \| \tilde{w}_\epsilon\|_{H^m_{k+\ell}(\mathbb{R}^2_+)}^m \big),
\end{align}
with $C>0$ independent of $\epsilon$.
From \eqref{approx-total}, by a nonlinear Gronwall inequality, we have
$$
\| \tilde{w}_\epsilon(t)\|^{m-2}_{H^m_{k+\ell}(\mathbb{R}^2_+)} \le\, \frac{\|\tilde{w}_\epsilon(0)\|_{H^m_{k+\ell}}^{m-2}}
{e^{-\frac{C}{\epsilon}t(\frac m2-1)}-(\frac m2-1)\frac{C}{\epsilon}t
\|\tilde{w}_\epsilon(0)\|_{H^m_{k+\ell}}^{m-2} },\,\,~~0< t \le T_\epsilon,
$$ where we choose $T_\epsilon>0$ such that \begin{align} \label{time-1} \left(e^{-\frac{C}{\epsilon}T_\epsilon(\frac m2-1)}-(\frac m2-1)\frac{C}{\epsilon}T_\epsilon \bar \zeta^{\,m-2} \right)^{-1}=\left(\frac 43\right)^{m-2}. \end{align}
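For completeness, here is the Bernoulli-type substitution behind the bound above (a sketch, with $C$ the constant of \eqref{approx-total}): writing $\varphi(t)=\| \tilde{w}_\epsilon(t)\|^2_{H^m_{k+\ell}}$ and $p=\frac m2$, \eqref{approx-total} reads $\varphi'\le \frac{C}{\epsilon}(\varphi+\varphi^{p})$, so $\psi=\varphi^{1-p}$ satisfies
\begin{align*}
\psi' = (1-p)\,\varphi^{-p}\varphi' \ \ge\ -(p-1)\frac{C}{\epsilon}\,(\psi + 1),
\end{align*}
hence
\begin{align*}
\Big(e^{(p-1)\frac{C}{\epsilon}t}\,\psi\Big)' \ \ge\ -(p-1)\frac{C}{\epsilon}\, e^{(p-1)\frac{C}{\epsilon}t},
\end{align*}
and integrating on $[0, t]$,
\begin{align*}
\psi(t) \ \ge\ e^{-(p-1)\frac{C}{\epsilon}t}\,\psi(0) - \Big(1 - e^{-(p-1)\frac{C}{\epsilon}t}\Big) \ \ge\ e^{-(p-1)\frac{C}{\epsilon}t}\,\psi(0) - (p-1)\frac{C}{\epsilon}t .
\end{align*}
Taking reciprocals and recalling $\varphi^{p-1}=\| \tilde{w}_\epsilon\|^{m-2}_{H^m_{k+\ell}}$ gives the stated bound, as long as the right-hand side stays positive.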
Finally, for any $\| \tilde{w}_\epsilon(0)\|_{H^m_{k+\ell}}\le \bar\zeta$ and $0<\epsilon\le \epsilon_0$, we get
\begin{align*}
\| \tilde{w}_\epsilon(t)\|_{H^m_{k+\ell}(\mathbb{R}^2_+)} \le \frac 43\| \tilde{w}_\epsilon(0)\|_{H^m_{k+\ell}(\mathbb{R}^2_+)}\le 2\| \tilde{w}_0\|_{H^m_{k+\ell}(\mathbb{R}^2_+)},~~~~~0< t \le T_\epsilon.
\end{align*} \end{proof}
The rest of this paper is dedicated to improving the result of Proposition \ref{prop3.2}, in order to obtain a uniform estimate with respect to $\epsilon$. Of course, we have to recall the assumption on the shear flow in the main Theorem \ref{main-theorem}.
\section{Formal transformations}\label{section4}
Since the estimate \eqref{approx-less-k} is independent of $\epsilon$, we only need to treat \eqref{approx-k} in a new way to get an estimate which is also independent of $\epsilon$. To simplify the notation, from now on we drop the tilde and the subscript $\epsilon$; that is, with no risk of confusion, we write $$ u=\tilde{u}_\epsilon,\quad v=\tilde{v}_\epsilon,\quad w=\tilde{w}_\epsilon. $$
Let $w\in L^\infty ([0, T]; H^{m}_{k+\ell}(\mathbb{R}^2_+))$, with $m\ge 6, k>1, 0<\ell<\frac12,~~ \frac{1}{2}<\ell' < \ell + \frac{1}{2},~ k+\ell>\frac 32$, be a classical solution of \eqref{shear-prandtl-approxiamte-vorticity} which satisfies the following {\em a priori} condition \begin{equation}\label{apriori}
\|w\|_{ L^\infty ([0, T]; H^{m}_{k+\ell}(\mathbb{R}^2_+))}\le \zeta. \end{equation} Then \eqref{sobolev-1} gives \begin{align*}
\|\langle y\rangle ^{k+\ell} w\|_{ L^\infty ([0, T]\times\mathbb{R}^2_+)}\le
&C(\|\langle y\rangle ^{\frac 12+\delta}(\langle y\rangle ^{k+\ell}w)_y \|_{ L^\infty ([0, T]; L^2(\mathbb{R}^2_+))}\\ &+
\|\langle y\rangle ^{\frac 12+\delta}(\langle y\rangle ^{k+\ell}w)_{xy} \|_{ L^\infty ([0, T]; L^2(\mathbb{R}^2_+))})\\
&\le C_m \|w\|_{ L^\infty ([0, T]; H^{m}_{k+\ell}(\mathbb{R}^2_+))}, \end{align*} which implies $$
|\partial_y u(t, x, y)|=|w(t, x, y)|\le C_m\, \zeta \, \langle y\rangle^{-k-\ell},\quad (t, x, y)\in [0, T]\times \mathbb{R}^2_+. $$ We assume that {\bf $\zeta$ is small enough} such that \begin{equation}\label{C0} C_m\, \zeta \le \frac{\tilde c_1}{4}, \end{equation} where $C_m$ is the above Sobolev embedding constant. Then we have for $\ell\ge 0$, \begin{align}
\label{pior-2}\frac{\tilde c_1}4 \langle y \rangle^{-k} \le |u^s_y + u_y|\le 4\tilde c_2 \langle y \rangle^{-k},\quad (t, x, y)\in [0, T]\times \mathbb{R} \times \mathbb{R}^+. \end{align}
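Indeed, if the shear flow satisfies two-sided bounds of the form $\tilde c_1 \langle y \rangle^{-k}\le u^s_y \le \tilde c_2 \langle y \rangle^{-k}$ (we assume here that Lemma \ref{shear-profile} provides bounds of this type), then \eqref{C0}, the pointwise bound $|u_y|\le C_m\, \zeta\, \langle y\rangle^{-k-\ell}$ and $\langle y\rangle^{-\ell}\le 1$ give
\begin{align*}
|u^s_y + u_y| \ \ge\ \tilde c_1 \langle y \rangle^{-k} - \frac{\tilde c_1}{4}\langle y \rangle^{-k-\ell} \ \ge\ \frac{3\tilde c_1}{4}\langle y \rangle^{-k} \ \ge\ \frac{\tilde c_1}{4}\langle y \rangle^{-k},
\end{align*}
and, since $\tilde c_1\le \tilde c_2$,
\begin{align*}
|u^s_y + u_y| \ \le\ \tilde c_2 \langle y \rangle^{-k} + \frac{\tilde c_1}{4}\langle y \rangle^{-k-\ell} \ \le\ 2\tilde c_2 \langle y \rangle^{-k} \ \le\ 4\tilde c_2 \langle y \rangle^{-k}.
\end{align*}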
\noindent {\bf The formal transformation of equations.} Under the conditions \eqref{C0} and \eqref{pior-2}, in this subsection, we will introduce the following formal transformations of system \eqref{shear-prandtl-approxiamte}. {
Set, for $0\le n\le m$ \[ g_n = \left( \frac{\partial^n_x u}{u^s_y + u_y} \right)_y,\,~\eta_1 = \frac{u_{xy}}{u^s_y + u_y},~ \eta_2 = \frac{u^s_{yy} + u_{yy}}{{u^s_y +u_y}}, \,\forall (t, x, y)\in [0, T]\times \mathbb{R}^2_+ . \] Formally, we will use the following notations \[ \partial^{-1}_y g_n(t, x, y) = \frac{\partial^n_x u}{u^s_y + {u}_y}(t,x, y),\, \partial_y \partial_y^{-1} g_n = g_n,~\forall (t, x, y)\in [0, T]\times \mathbb{R}^2_+ \]
Applying $\partial^n_x$ to \eqref{shear-prandtl-approxiamte}, we have \begin{align}\label{shear-prandtl-approxiamte-aa} \begin{split} \partial_t\partial^n_x u + (u^s + {u}) \partial_x\partial^n_x u &+(\partial^n_x {v}) (u^s_y + \partial_y {u})\\ &= \partial^2_{y}\partial^n_x u + \epsilon \partial^2_{x}\partial^n_x u+ A^1_n+A^2_n, \end{split} \end{align} where \begin{align*} A^1_n=-[\partial^n_x,\, (u^s + {u})] \partial_x u=- \sum_{i=1}^{n}C^i_n\partial_x^i {u} \, \partial^{n + 1 -i}_x {u}, \,\,\\ A^2_n=-[\partial^n_x,\, (u^s_y + \partial_y {u})] {v}= -\sum_{i=1}^{n}C^i_n\partial_x^i {w} \,\partial^{n -i}_x {v}. \end{align*} Dividing \eqref{shear-prandtl-approxiamte-aa} with $(u^s_y + {u}_y)$ and performing $\partial_y$ on the resulting equation, observing $$
\partial_x\partial^n_x u +\partial_y\partial^n_x v= \partial^n_x(\partial_x u +\partial_y v)=0, $$ we have for $j=1, 2$, \begin{equation*} \begin{split} & \partial_y\left(\frac{\partial_t {\partial^n_x u}}{u^s_y + {u}_y}\right) + (u^s + {u}) \partial_y\left(\frac{\partial_x{\partial^n_x u}} {u^s_y + {u}_y}\right)\\ &\qquad\qquad= \partial_y\left(\frac{\partial^2_{y}{\partial^n_x u} + \epsilon \partial^2_{x}{\partial^n_x u}}{u^s_y + {u}_y}\right) + \partial_y\left(\frac{A^1_n+A^2_n}{u^s_y + {u}_y}\right). \end{split} \end{equation*} We now compute each term.
\begin{align*}
\partial_y\left(\frac{\partial_t {\partial^n_x u} }{u^s_y + {u}_y}\right) &= \partial_y\bigg(\partial_t \frac{{\partial^n_x u} }{u^s_y + {u}_y} + \partial_y^{-1} g_n \, \frac{\partial_t {u}_y + \partial_t u^s_y }{u^s_y + {u}_y} \bigg)\\
&= \partial_t g_n + \partial_y\bigg( \partial_y^{-1} g_n \, \frac{ \partial_t u^s_y+\partial_t {u}_y}{u^s_y + u_y} \bigg),
\end{align*}
\begin{align*}
(u^s + {u})\partial_y \left(\frac{\partial_x {\partial^n_x u}}{u^s_y + {u}_y}\right) & = (u^s + {u})\bigg\{\partial_x\partial_y\left(\frac{{\partial^n_x u}}{u^s_y + {u}_y}\right) +\partial_y \left(\frac{{\partial^n_xu}}{u^s_y +{u}_y}\right) \, \frac{{u}_{xy}}{u^s_y + {u}_y} \\
&\hskip 3cm + \left(\frac{\partial^n_x{u}}{u^s_y + {u}_y}\right)\partial_y\left( \frac{{u}_{xy}}{u^s_y + {u}_y}\right)\bigg\}\\
& = (u^s + {u})(\partial_x g_n + g_n\, \eta_1 + \partial_y^{-1} g_n \, \partial_y \eta_1),
\end{align*}
\begin{align*}
\frac{{\partial^2_{y} {\partial^n_x u}}}{u^s_y + {u}_y} =\partial^2_y \left(\frac{{ {\partial^n_x u}}}{u^s_y + {u}_y} \right) + 2\left(\frac{{ \partial_y {\partial^n_x u}}}{u^s_y + {u}_y}\right)\frac{u^s_{yy} + {u}_{yy}}{{u^s_y + {u}_y}} - {\partial^n_x u} \, \partial^2_y\left(\frac{1}{u^s_y + {u}_y}\right),
\end{align*}
\begin{align*} \partial^2_y \left(\frac{1}{u^s_y + {u}_y}\right) = - \partial_y\left(\frac{u_{yy}^s + {u}_{yy}}{(u^s_y + {u}_y)^2}\right)= -\frac{u_{yyy}^s + {u}_{yyy}}{(u^s_y + {u}_y)^2} + 2 \left(\frac{u_{yy}^s + {u}_{yy}}{(u^s_y + {u}_y)}\right)^2 \frac{1}{u^s_y + {u}_y},
\end{align*}
\begin{align*}
\frac{{ \partial_y {\partial^n_x u}}}{u^s_y + {u}_y}\frac{u^s_{yy} +{u}_{yy}}{{u^s_y + {u}_y}} = \left(\frac{{{\partial^n_xu}}}{u^s_y + {u}_y}\right)_y \frac{u^s_{yy} + {u}_{yy}}{{u^s_y + {u}_y}} - \frac{{{\partial^n_xu}}}{u^s_y + {u}_y}\left(\frac{u_{yy}^s + {u}_{yy}}{(u^s_y + {u}_y)}\right)^2. \end{align*}
So \begin{align*} \frac{{\partial^2_{y}{\partial^n_x u}}}{u^s_y + {u}_y} = \partial_y g_n + 2(g_n \eta_2 - 2 \partial^{-1}_y g_n \eta_2^2) + \partial^{-1}_y g_n \left(\frac{u_{yyy}^s + {u}_{yyy}}{u^s_y + u_y}\right), \end{align*} \begin{align*} \partial_y\left(\frac{{\partial^2_{y} {\partial^n_x u}}}{u^s_y + {u}_y}\right)&=\partial^2_y g_n + 2 (\partial_y g_n) \eta_2 + 2g_n \partial_y \eta_2 - 4g_n\eta_2^2\\
&- 8 \partial_y^{-1} g_n \eta_2 \partial_y \eta_2 + \partial_y\bigg(\partial_y^{-1} g_n\, \frac{u_{yyy}^s + {u}_{yyy}}{u^s_y + {u}_y}\bigg). \end{align*} Similarly, we have \begin{align*} \frac{{\partial^2_{x} {\partial^n_x u}}}{u^s_y + {u}_y} &= \partial^2_x\left(\frac{{{\partial^n_x u}}}{u^s_y + {u}_y} \right) + 2\left(\frac{{{\partial^n_x u}}}{u^s_y + {u}_y}\right)_x\frac{ {u}_{xy}}{{u^s_y + {u}_y}} \\ &- 2\frac{{{\partial^n_x u}}}{u^s_y +{u}_y}\left( \frac{{u}_{xy}}{(u^s_y +{u}_y)}\right)^2 + \frac{{{\partial^n_x u}}}{u^s_y + {u}_y}\frac{{u}_{xxy}}{(u^s_y +{u}_y)}, \end{align*} \begin{align*} \partial_y \left(\frac{{\partial^2_{x}{\partial^n_x u}}}{u^s_y + {u}_y}\right)& = \partial^2_xg_n + 2\partial_x g_n \eta_1 + 2\partial_x \partial_y^{-1} g_n \partial_y \eta_1 \\
&- 2g_n\eta_1^2 - {4 \partial_y^{-1} g_n \eta_1 \partial_y \eta_1} +\partial_y\bigg(\partial_y^{-1} g_n\,\frac{ {u}_{xxy}}{u^s_y +{u}_y}\bigg)\, . \end{align*} For the boundary condition, we only need to pay attention to $j=1$. From \eqref{shear-prandtl-approxiamte-aa} and the boundary condition for $(u, v)$ in \eqref{shear-prandtl-approxiamte}, we observe $$
\partial^n_x u|_{y=0}=0,\,\,\partial^2_y \partial^n_x u|_{y=0}=0,\,\, (u^s_y+u_y)|_{y=0}\not=0. $$ At the same time, \begin{align*}
0=\frac{{\partial^2_{y}{\partial^n_x u}}}{u^s_y + {u}_y}\bigg|_{y=0} &= \partial_y g_n|_{y=0} + 2(g_n \eta_2 - 2 (\partial^{-1}_y g_n )\eta_2^2)|_{y=0}\\ &
\qquad+ \partial^{-1}_y g_n \left(\frac{u_{yyy}^s + {u}_{yyy}}{u^s_y + u_y}\right)\bigg|_{y=0}, \end{align*} and $$
\eta_2|_{y=0} = \frac{u^s_{yy} + u_{yy}}{{u^s_y +u_y}}\bigg|_{y=0}=0, \quad
\partial^{-1}_y g_n(t, x, y)|_{y=0} = \frac{\partial^n_x u}{u^s_y + {u}_y}(t,x, y)\bigg|_{y=0}=0, $$ we then get $$
(\partial_y g_n)|_{y=0}=0,\quad 0\le n\le m. $$ Finally, we have, for $j=1, 2$, \begin{equation} \begin{cases} \label{non-monotone-transformation} \partial_t g_n + ( u^s + {u}) \partial_x g_n - \partial^2_y g_n - \epsilon \partial^2_xg_n\\ \qquad\qquad\qquad - \epsilon \,2\, (\partial_x \partial_y^{-1} g_n) \partial_y \eta_1 = M_n,\\
(\partial_y g_n)|_{y=0}=0, \\
g_n|_{t=0}= g_{n,0}, \end{cases} \end{equation} with $M_n=\sum^6_{j=1}M_j^n$, \begin{align*} & M^n_1 = -(u^s + {u})( g_n \eta_1 +( \partial_y^{-1} g_n ) \partial_y \eta_1) ,\\ & M^n_2 = 2(\partial_y g_n) \eta_2 + 2g_n (\partial_y \eta_2 - 2\eta_2^2) - 8 (\partial_y^{-1} g_n)\, \eta_2 \partial_y \eta_2 ,\\ & M^n_3 = \epsilon \big(2(\partial_x g_n) \eta_1 - 2 g_n\,\eta_1^2 - 4{(\partial_y^{-1} g_n) \eta_1 \partial_y \eta_1}\big), \\ & M^n_4 = \partial_y\bigg(\partial_y^{-1} g_n \, \frac{(u^s + {u}) {w}_x + {v} ({w}_y + u^s_{yy})}{u^s_y + {u}_y}\bigg),\\ & M^n_5 = -\partial_y\bigg(\frac{\sum_{i=1}^{n}C^i_n\partial_x^i {u} \cdot \partial^{n + 1 -i}_x {u} }{ u^s_y +{u}_y} \bigg)\,,\\ & M^n_6 = -\partial_y\bigg(\frac{ \sum_{i=1}^{n}C^i_n\partial_x^i {w} \cdot \partial^{n -i}_x {v}}{ u^s_y +{u}_y} \bigg)\, , \end{align*} where we have used the relation, $$ \partial_t u^s_y+\partial_t {u}_y -(u_{yyy}^s + {u}_{yyy})-\epsilon {u}_{xxy}=-(u^s + {u}) {w}_x + {v} (u^s_{yy}+{w}_y). $$
\section{Uniform estimate}\label{section5}
In a later application (see Lemma \ref{lemma-g-h-w}), we need the weight of $g_m$ to be bigger than $\frac 12$; but from the definition, $ w\in H^{m+2}_{k + \ell}(\mathbb{R}^2_+)$ implies only $g_m\in H^{2}_{\ell}(\mathbb{R}^2_+)$ with $0<\ell<\frac 12$. So the first step is to improve this weight when the weight of the initial data is bigger. We first have \begin{lemma}\label{lemma-initial-deta} If $\tilde w_0\in H^{m+2}_{k + \ell'}(\mathbb{R}^2_+), m\ge 6, k>1, 0< \ell<\frac12,~~ \frac{1}{2}<\ell' < \ell + \frac{1}{2},~ k+\ell>\frac 32$, which satisfies \eqref{apriori}-\eqref{C0} with $0<\zeta\le1$, then $( g_m)(0)\in H^2_{\ell'}(\mathbb{R}^2_+)$, and we have $$
\|( g_m)(0)\|_{H^2_{\ell'}(\mathbb{R}^2_+)}\le C \|\tilde w_0\|_{H^{m+2}_{k + \ell'}(\mathbb{R}^2_+)}. $$ \end{lemma} \noindent {\bf Remark.} In fact, observing \begin{align*}
g_m(0)= \left( \frac{\partial_x^m \tilde{u}_0}{u^s_{0,y} + \tilde u_{0, y}} \right)_y=
\frac{\partial_y\partial_x^m \tilde{u}_0}{u^s_{0,y} + \tilde u_{0, y}}- \frac{\partial_x^m \tilde{u}_0}{u^s_{0,y} + \tilde u_{0, y}} \eta_2(0), \end{align*} then \eqref{pior-2} implies \begin{align*}
\langle y \rangle ^{\ell'}| g_m(0)|\le C\langle y \rangle ^{k + \ell'}| \partial_x^m \tilde{w}_0|+C\langle y \rangle ^{k + \ell'-1}| \partial_x^m \tilde{u}_0|, \end{align*} which, together with similar estimates for the derivatives, finishes the proof of this Lemma.
\begin{proposition}\label{lemma-non-x-k-monotone-part1-2} Let $w \in L^\infty ([0, T]; H^{m+2}_{k+\ell}(\mathbb{R}^2_+))$, with $m\ge 6, k>1, 0\le \ell<\frac12, ~\ell'> \frac{1}{2},~~\ell' - \ell < \frac{1}{2},~k+\ell>\frac 32$, satisfy \eqref{apriori}-\eqref{C0} with $0<\zeta\le1$. Assume that the shear flow $u^s$ verifies the conclusion of Lemma \ref{shear-profile}, and that $g_n$ satisfies the equation \eqref{non-monotone-transformation} for $ 1\le n\le m$. Then we have the following estimate, for $t\in [0, T]$, \begin{equation} \label{uniform-part2-2} \begin{split}
\frac{d}{dt}\sum^m_{n=1}\| g_n \|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2 & + \sum^m_{n=1}\| \partial_y g_n\|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2 + \epsilon \sum^m_{n=1}\| \partial_x g_n\|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2\\
& \le C_2(\sum^m_{n=1}\| g_n\|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2 + \|{w}\|_{H^m_{k+\ell}(\mathbb{R}^2_+)}^2), \end{split} \end{equation} where $C_2$ is independent of $\epsilon$. \end{proposition}
\noindent {\bf\em Approach of the proof of Proposition \ref{lemma-non-x-k-monotone-part1-2}:} We cannot prove \eqref{uniform-part2-2} directly, since the approximate solution $w_\epsilon$ obtained in Theorem \ref{theorem3.1} belongs to $L^\infty([0, T_\epsilon]; H^{m+2}_{k+\ell}(\mathbb{R}^2_+))$, which implies only $ g_n\in L^\infty([0, T_\epsilon]; H^{2}_{\ell}(\mathbb{R}^2_+))$. We therefore cannot use $\langle y\rangle^{2\ell'} g_n \in L^\infty([0, T_\epsilon]; H^{2}_{\ell-2\ell'}(\mathbb{R}^2_+)) $ as a test function for the equation \eqref{non-monotone-transformation}. To overcome this difficulty, we regard \eqref{non-monotone-transformation} as a linear system for $ g_n, n=1, \cdots, m$, whose coefficients and source terms depend on $w$ and its derivatives up to order $m$; we will justify this claim in the proof of Proposition \ref{lemma-non-x-k-monotone-part1-2} below. We now prove the estimate \eqref{uniform-part2-2} by the following approach: for the linear system \eqref{non-monotone-transformation}, we first establish \eqref{uniform-part2-2} as an {\em a priori} estimate. Lemma \ref{lemma-initial-deta} implies that $ {g}_{n}(0)\in H^2_{\ell'}(\mathbb{R}^2_+), n=1,\cdots, m$; then, by the Hahn-Banach theorem, this {\em a priori} estimate implies the existence of solutions $$
g_n\in L^\infty([0, T]; H^2_{\ell'}(\mathbb{R}^2_+)),\quad n=1, \cdots, m. $$ Finally, by uniqueness, these solutions coincide with the functions $g_n$ defined from $w$, and the estimate \eqref{uniform-part2-2} holds for them. The proof of Proposition \ref{lemma-non-x-k-monotone-part1-2} is therefore reduced to the proof of the {\em a priori} estimate \eqref{uniform-part2-2}.
\begin{proof}[{\bf Proof of the {\em a priori} estimate \eqref{uniform-part2-2}}] We multiply the linear system \eqref{non-monotone-transformation} by $\langle y\rangle^{2\ell'}g_{n}\in L^\infty([0, T]; H^2_{-\ell'}(\mathbb{R}^2_+)) $ and integrate over $\mathbb{R} \times \mathbb{R}^+$. We deal with the left-hand side of \eqref{non-monotone-transformation} first; we have \begin{align*}
\int_{\mathbb{R}^2_+} \partial_t g_{n} \, \langle y\rangle^{2\ell'} g_{n} dx dy = \frac 12\frac{d}{dt}\|g_{n}\|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2, \end{align*} and \begin{align*} \int_{\mathbb{R}^2_+} ( u^s + {u})\partial_x g_{n} \, \langle y\rangle^{2\ell'} g_{n}dx dy =& \frac{1}{2}\int_{\mathbb{R}^2_+} ( u^s + {u}) \cdot \partial_x (\langle y\rangle^{2\ell'} g_{n}^2) dx dy \\
&\le \frac 12 \|{u}_x\|_{L^\infty(\mathbb{R}^2_+)}\|g_{n}\|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2\\
&\le C\|w\|_{H^2_{1}(\mathbb{R}^2_+)}\|g_{n}\|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2. \end{align*} Integrating by parts, where the boundary term vanishes thanks to $(\partial_y g_{n})|_{y=0}=0$, \begin{align*} - \int_{\mathbb{R}^2_+} \partial_y^2 g_{n} \, \langle y\rangle^{2\ell'} g_{n}dx dy
&= \|\partial_y g_{n} \|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2 + \int_{\mathbb{R}^2_+} \partial_y g_{n} (\langle y\rangle^{2\ell'})' g_{n} dx dy\\
& \ge \frac{3}{4}\| \partial_y g_{n} \|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2 - 4 \| g_{n}\|_{L^2(\mathbb{R}^2_+)}^2, \end{align*} and \begin{align*}
-\epsilon\int_{\mathbb{R}^2_+} \partial_x^2 g_{n} \, \langle y\rangle^{2\ell'} g_{n} dx dy =\epsilon\| \partial_x g_{n}\|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2. \end{align*} We have also \begin{align*} &-\epsilon \int_{\mathbb{R}^2_+} \big(\partial_x \partial_y^{-1} g_{n} \big)\partial_y \eta_1 \langle y\rangle^{2\ell'}g_{n} dx dy \\ & = \epsilon \int_{\mathbb{R}^2_+} \partial_y^{-1} g_{n} \partial_y \eta_1 \langle y\rangle^{2\ell'} \partial_x g_{n} dx dy\\
&\quad+ \epsilon \int_{\mathbb{R}^2_+} \partial_y^{-1} g_{n} (\partial_y \partial_x \eta_1)\langle y\rangle^{2\ell'} g_{n}dx dy\\
& \le \epsilon\| \partial_y^{-1} g_{n} \partial_y \eta_1 \|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2 + \frac{\epsilon}{8}\|\partial_x g_{n}\|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2\\
& \quad+ \epsilon\| \partial_y^{-1} g_{n}\partial_y \partial_x \eta_1\|_{L^2(\mathbb{R}^2_+)}^2 + \epsilon\|g_{n}\|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2. \end{align*} So by \eqref{non-monotone-transformation} and $0<\epsilon\le 1$, we obtain \begin{equation*} \begin{split}
&\frac{d}{dt}\|g_{n}\|^2_{L^2_{\ell'}(\mathbb{R}^2_+)}+\| \partial_yg_{n}\|^2_{L^2_{\ell'}(\mathbb{R}^2_+)}
+\epsilon \|\partial_x g_{n}\|^2_{L^2_{\ell'}(\mathbb{R}^2_+)}\\
&\le C \|g_{n}\|^2_{L^2_{\ell'}(\mathbb{R}^2_+)}+\|(\partial_y^{-1} g_{n}) \partial_y \eta_1 \|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2 \\
&\qquad+ \| (\partial_y^{-1} g_{n}) \partial_y \partial_x \eta_1\|_{L^2(\mathbb{R}^2_+)}^2 + 2\sum_{j=1}^{6}\bigg|\int_{\mathbb{R}^2_+} {M}^n_{j} \, \langle y\rangle^{2\ell'}g_{n} dx dy\bigg|. \end{split} \end{equation*} We can then finish the proof of the {\em a priori} estimate \eqref{uniform-part2-2} by the following four lemmas. \end{proof}
\begin{lemma}\label{lemma5.1} Under the assumption of Proposition \ref{lemma-non-x-k-monotone-part1-2}, we have \begin{equation*} \begin{split}
\| \partial_y^{-1} g_{n} \partial_y \eta_1 \|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2 + \| \partial_y^{-1} g_{n} \partial_y \partial_x \eta_1\|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2 \le C \|g_{n}\|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2, \end{split} \end{equation*} where $C$ is independent of $\epsilon$. \end{lemma}
\begin{proof} Notice that \eqref{apriori} and \eqref{C0} imply \begin{align*}
&|\eta_1| \le C \langle y \rangle^{-\ell},\quad |\partial_x \eta_1 |\le C\langle y \rangle^{-\ell},\\
&|\partial_y \eta_1 |\le C \langle y \rangle^{-\ell - 1},\quad
|\partial_y \partial_x \eta_1| \le C \langle y \rangle^{-\ell - 1}. \end{align*} Then $\ell'>\frac 12$ and $\ell'-\ell<\frac12$ imply \begin{align*} \begin{split}
\|\partial_y^{-1} g_{n} (\partial_y \partial_x \eta_1)\|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2 & \le C\int_{\mathbb{R}^2_+}\langle y \rangle^{2(\ell' - \ell-1)} \Big(\int^y_0 g_n (t, x, \tilde y)d\tilde y \Big)^2 dx dy\\
& \le C\| g_{n} \|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2. \end{split} \end{align*} Similarly, we also obtain \begin{align*} \begin{split}
\| \partial_y^{-1} g_{n} \partial_y \eta_1\|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2 \le C\| g_{n} \|_{L^2_{ \ell'}(\mathbb{R}^2_+)}^2. \end{split} \end{align*} \end{proof} \begin{lemma}\label{lemma5.2} Under the assumption of Proposition \ref{lemma-non-x-k-monotone-part1-2}, we have \begin{equation*} \begin{split}
\left|\int_{\mathbb{R}^2_+} \sum^4_{j = 1}{M}^n_{j} \, \langle y\rangle^{2\ell'}g_{n} dx dy\right|\le &
\frac18\|\partial_y g_{n}\|^2_{L^2_{\ell'}(\mathbb{R}^2_+)}+
\frac{\epsilon}8\| \partial_x g_{n}\|^2_{L^2_{\ell'}(\mathbb{R}^2_+)} \\
&\quad+ \tilde{C}(\|g_{n}\|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2
+\|w\|_{H^m_{k+\ell}(\mathbb{R}^2_+)}^2), \end{split} \end{equation*} where $\tilde{C}$ is independent of $\epsilon$. \end{lemma} \begin{proof}
Recalling $ {M}^n_{1 } = -(u^s + {u})\big( g_{n} \eta_1 +( \partial_y^{-1} g_{n} ) \partial_y \eta_1 \big) $, by Lemma \ref{lemma5.1}, \begin{align*}
\left|\int_{\mathbb{R}^2_+} (u^s + {u})g_{n} \eta_1 \, \langle y\rangle^{2\ell'}g_{n} dxdy\right| &\le C\|g_{n}\|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2,\\
\int_{\mathbb{R}^2_+} |(u^s + {u}) ( \partial_y^{-1} g_n) \partial_y \eta_1\, \langle y \rangle^{2\ell'} g_{n} | dx dy &\le C \| w\|_{H^n_{k+ \ell}}^2 + C \| g_{n}\|_{L^2_{\ell'}}^2. \end{align*} Hence, we have \begin{align*}
\left|\int_{\mathbb{R}^2_+} {M}^n_{1 } \, \langle y\rangle^{2\ell'}g_{n} dx dy\right|\le {C}(\|g_{n}\|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2
+\|w\|_{H^n_{k+\ell}(\mathbb{R}^2_+)}^2). \end{align*}
The estimates of $ {M}^n_2$ and $ {M}^n_3$ need the following decay rates of $\eta_2$: \begin{align*}
&|\eta_2| \le C \langle y \rangle^{-1},\quad |\partial_x \eta_2| \le C \langle y \rangle^{-\ell-1},\\
&|\partial_y \eta_2 |\le C \langle y \rangle^{-2},\quad |\partial_y \partial_x \eta_2| \le C \langle y \rangle^{-\ell - 2}. \end{align*} Recall $ {M}_{2 }^n= 2 \partial_y g_{n} \eta_2 + 2g_{n} (\partial_y \eta_2 - 2\eta_2^2) - 8 \partial_y^{-1} g_{n}\, \eta_2 \partial_y \eta_2$. We have \begin{align*}
& \left|\int_{\mathbb{R}^2_+} g_{n} (\partial_y \eta_2 - 2\eta_2^2) \, \langle y\rangle^{2\ell'} g_{n}dx dy\right| \le C\|g_{n}\|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2,\\
&\left|\int_{\mathbb{R}^2_+} (\partial_y g_{n}) \eta_2 \, \langle y\rangle^{2\ell'}g_{n} dx dy\right| \le C\|g_{n}\|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2 + \frac{1}{8}\| \partial_y g_{n}\|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2,\\
&\left|8 \int_{\mathbb{R}^2_+} \partial_y^{-1} g_{n} \eta_2 \, \partial_y \eta_2 \langle y\rangle^{2\ell'}g_{n} dx dy\right| \\
&\qquad\qquad \le C\| \langle y \rangle^{\ell'-3}\partial_y^{-1} g_n \|_{L^2}^2 + \| g_{n}\|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2\le C\|g_{n}\|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2. \end{align*} All together, we conclude \begin{align*}
\left|\int_{\mathbb{R}^2_+} {M}^n_{2 } \langle y\rangle^{2\ell'}g_{n} dx dy\right| &\le C(\|g_{n}\|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2 +\|w\|_{H^n_{k+\ell}(\mathbb{R}^2_+)}^2)+ \frac{1}{8}\| \partial_y g_n\|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2, \end{align*} and exactly the same computation also gives \begin{align*}
\left|\int_{\mathbb{R}^2_+} {M}^n_{3 } \langle y\rangle^{2\ell'}g_{n} dx dy\right| &\le C(\|g_{n}\|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2 +\|w\|_{H^n_{k+\ell}(\mathbb{R}^2_+)}^2)+ \frac{\epsilon}{8}\|\partial_x g_n\|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2. \end{align*} Now using \eqref{apriori}-\eqref{C0} and $m\ge 6$, with the same computation as above, we can get $$
\left|\int_{\mathbb{R}^2_+} {M}^n_{4 } \langle y\rangle^{2\ell'}g_{n} dx dy\right| \le C\left(\|g_{n}\|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2 +\|w\|_{H^n_{k+\ell}(\mathbb{R}^2_+)}^2\right), $$ which finishes the proof of Lemma \ref{lemma5.2}. \end{proof}
\begin{lemma}\label{lemma5.3} Under the assumption of Proposition \ref{lemma-non-x-k-monotone-part1-2}, we have \begin{equation*} \begin{split}
&\left|\int_{\mathbb{R}^2_+} {M}^n_5 \, \langle y\rangle^{2\ell'}g_{n} dx dy\right|\le
\tilde{C}\left(\sum^n_{p=1}\| g_p\|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2
+\|w\|_{H^n_{k + \ell}(\mathbb{R}^2_+)}^2\right), \end{split} \end{equation*} where $\tilde{C}$ is independent of $\epsilon$. \end{lemma} \noindent
\begin{proof} Recall that, up to a sign, \begin{align*} {M}_{5 }^n & =\sum_{i\ge 4}C^i_ng_{i}\, \partial^{n + 1 -i}_x u + \sum_{1\le i\le 3}C^i_n\partial_x^i u\, g_{n+1-i }\\ &\quad+\sum_{i\ge 4}C^i_n (\partial_y^{-1} g_{i})\, \partial^{n + 1 -i}_x w + \sum_{1\le i\le 3}C^i_n\partial_x^i w\, \partial_y^{-1} g_{n+ 1 - i } , \end{align*}
where if $n\le 3$ only the sums over $1\le i\le n$ appear. Then, for { $\|w\|_{H^{m}_{k+\ell}}\le \zeta\le 1, m\ge 6$}, \begin{align*}
&\sum_{i\ge 4}C^i_n\|g_{i}\, \partial^{n + 1 -i}_x u\|_{L^2_{\ell'}(\mathbb{R}^2_+)} +
\sum_{1\le i\le 3}\|\partial_x^i u\, g_{n+1-i }\|_ {L^2_{\ell'}(\mathbb{R}^2_+)}\\
&\le \sum_{i\ge 4}C^i_n\|g_{i}\|_{L^2_{\ell'}(\mathbb{R}^2_+)}\|\partial^{n + 1 -i}_x u\|_{L^\infty(\mathbb{R}^2_+)} \\ &\qquad+
\sum_{1\le i\le 3}C^i_n\|\partial_x^i u\|_{L^\infty(\mathbb{R}^2_+)}\, \| g_{n+1-i }\|_ {L^2_{\ell'}(\mathbb{R}^2_+)}\\
&\le C \sum_{i\ge 4}C^i_n\|g_{i}\|_{L^2_{\ell'}(\mathbb{R}^2_+)}\|w\|_{H^{n + 3 -i}_1} \\ &\qquad+ C
\sum_{1\le i\le 3}C^i_n\|w\|_{H^{i + 3}_1}\, \|g_{n+1-i }\|_ {L^2_{\ell'}(\mathbb{R}^2_+)}\\
& \le C \sum_{i=1}^{n} \| g_{i}\|_{L^2_{\ell'}}. \end{align*} Similarly, for the second line in ${M}^n_{5}$, by Lemma \ref{lemma5.1}, we have \begin{align*}
\sum_{i\ge 4}C^i_n\| (\partial^{-1}_y g_{i}) \partial^{n + 1 -i}_x w\|_{L^2_{\ell'}(\mathbb{R}^2_+)} &\le \sum_{i\ge 4}C^i_n\| \langle y \rangle^{\ell' - \ell - 1} (\partial^{-1}_y g_{i}) \|_{L^2(\mathbb{R}^2_+)} \| \langle y \rangle^{\ell + 1}\partial^{n + 1 -i}_x w\|_{L^\infty}\\
& \le C \sum_{i = 1}^{n} \|g_{i}\|_{L^2_{\ell'}(\mathbb{R}^2_+)}.
\end{align*} This proves Lemma \ref{lemma5.3}. \end{proof}
\begin{lemma}\label{lemma5.4} Under the assumption of Proposition \ref{lemma-non-x-k-monotone-part1-2}, we have \begin{equation*} \begin{split}
&\left|\int_{\mathbb{R}^2_+} {M}^n_6 \, \langle y\rangle^{2\ell'}g_{n} dx dy\right|\\
&\quad\le \frac 1{8m} \sum^n_{p=1}\|\partial_y g_p\|^2_{L^2_{\ell'}(\mathbb{R}^2_+)}+
\tilde{C}\left(\sum^n_{p=1}\| g_p\|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2
+\|w\|_{H^n_{k+\ell}(\mathbb{R}^2_+)}^2\right), \end{split} \end{equation*} where $\tilde{C}$ is independent of $\epsilon$. \end{lemma}
\begin{proof} Recall \begin{align*} {M}^n_{6 } & = \sum_{i=1}^{n}C^i_n g_{i}\eta_2 \partial^{n -i}_x {v}+ \sum_{i=1}^{n}C^i_n g_{i} \partial^{n+1 -i}_x {u} + \sum_{i=1}^{n}C^i_n \partial_y g_{i} \partial^{n -i}_x {v}\\ & + \sum_{i=1}^{n} \partial_y^{-1} g_{i} \left( C_n^i\partial^{n -i}_x {v} \partial_y \eta_2 + C_n^i \partial^{n+1 -i}_x {u} \eta_2\right). \end{align*} {In ${M}_6^n$, we study only the term $\partial_y g_{1} \partial_x^{n - 1} v $ as an example; the other terms are similar. We have \begin{align*} \int_{\mathbb{R}^2_+} \partial_y g_{1 } \partial_x^{n - 1} v \, \langle y \rangle^{2\ell'} g_{n} dx dy & = - \int_{\mathbb{R}^2_+} g_{1 } \partial_x^{n - 1} v \, \langle y \rangle^{2\ell'} \partial_y g_{n} dx dy - 2\ell' \int_{\mathbb{R}^2_+} g_{1 } \partial_x^{n - 1} v \, y\,\langle y \rangle^{2\ell'-2} g_{n} dx dy\\ & + \int_{\mathbb{R}^2_+} g_{1 } \partial_x^{n } u\, \langle y \rangle^{2\ell'} g_{n} dx dy, \end{align*} where the second term on the right is bounded by $C\| g_{1 } \partial_x^{n - 1} v \|_{L^2_{\ell'}}^2 + \|g_{n}\|_{L^2_{\ell'}}^2$, and \begin{align*}
\int_{\mathbb{R}^2_+} g_{1 } \partial_x^{n - 1} v \, \langle y \rangle^{2\ell'} \partial_y g_{n} dx dy & \le \frac{1}{8m} \|\partial_y g_{n}\|_{L^2_{\ell'}}^2 + C\| g_{1 } \partial_x^{n - 1} v \|_{L^2_{\ell'}}^2, \end{align*} \begin{align*}
\| g_{1 } \partial_x^{n - 1} v \|_{L^2_{\ell'}}^2 & \le \sup_{x \in \mathbb{R}} \int_{0}^{+\infty} \langle y \rangle^{2\ell'}g_{1 }^2 dy \, \sup_{y \in \mathbb{R}_+} \int_{-\infty}^{+\infty} \left|\int_{0}^{y}\partial_x^{n} u dz\right|^2 dx\\
& \le \bigg( \|g_{1 }\|_{L^2_{\ell'}(\mathbb{R}_+^2)}^2 + \|\partial_x g_{1 }\|_{L^2_{\ell'}(\mathbb{R}_+^2)}^2 \bigg) \int_{-\infty}^{+\infty} \left|\int_{0}^{+\infty}|\partial_x^{n} u| dz\right|^2 dx\\
& \le C \bigg( \|g_{1 }\|_{L^2_{\ell'}(\mathbb{R}_+^2)}^2 + \|\partial_x g_{1 }\|_{L^2_{\ell'}(\mathbb{R}_+^2)}^2 \bigg)\\
&\qquad \times \int_{-\infty}^{+\infty} \left|\int_{0}^{+\infty}\langle z \rangle^{- k - \ell + 1} \, \langle z \rangle^{k + \ell - 1}|\partial_x^{n} u| dz\right|^2 dx\\
& \le C \bigg( \|g_{1 }\|_{L^2_{\ell'}(\mathbb{R}_+^2)}^2 + \| g_{2 }\|_{L^2_{\ell'}(\mathbb{R}_+^2)}^2 + \|w\|_{H^m_{k +\ell}}^2\bigg)\\
& \qquad\times \int_{-\infty}^{+\infty} \left|\int_{0}^{+\infty}\langle z \rangle^{- k - \ell + 1} \, \langle z \rangle^{k + \ell - 1}|\partial_x^{n} u| dz\right|^2 dx\\
& \le C\sum_{i=1}^{2}\|g_{i}\|_{L^2_{\ell'}}^2 + C \|w\|_{H^m_{k +\ell}}^2. \end{align*} Here we have used Lemma \ref{lemma5.1} and
\[ k + \ell - 1 > \frac{1}{2},~~ \| w\|_{H^m_{k + \ell}} \le 1, \] and \[ \partial_x g_j = g_{j+1} - g_j \eta_1 - \partial_y^{-1} g_{j} \, \partial_y \eta_1. \] } Treating the other terms by the same trick, we complete the proof of this lemma. \end{proof}
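For completeness, the identity for $\partial_x g_j$ used in the proof above follows directly from the definitions of $g_j$ and $\eta_1$ (recall that $u^s$ is independent of $x$):
\begin{align*}
\partial_x g_j = \partial_y \partial_x\left(\frac{\partial^j_x u}{u^s_y + u_y}\right) = \partial_y\left(\frac{\partial^{j+1}_x u}{u^s_y + u_y} - \frac{\partial^j_x u}{u^s_y + u_y}\,\frac{u_{xy}}{u^s_y + u_y}\right) = g_{j+1} - g_j\,\eta_1 - (\partial_y^{-1} g_j)\,\partial_y \eta_1.
\end{align*}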
\section{Existence of the solution}\label{section7}
Now, we can conclude the following energy estimate for the sequence of approximate solutions.
\begin{theorem}\label{energy} Assume $u^s$ satisfies Lemma \ref{shear-profile}. Let $m\ge 6$ be an even integer, $k + \ell >\frac{3}{2}$, $0 < \ell<\frac12$, $\frac{1}{2}<\ell' < \ell + \frac{1}{2}$, and let $\tilde{u}_{0}\in H^{m+3}_{k + \ell'-1}(\mathbb{R}^2_+)$ satisfy the compatibility conditions \eqref{compatibility-a1}-\eqref{compatibility-a2}. Suppose that $\tilde{w}_\epsilon \in L^\infty ([0, T]; H^{m+2}_{k+\ell}(\mathbb{R}^2_+))$ is a solution to \eqref{shear-prandtl-approxiamte-vorticity} such that
\begin{equation*}
\|\tilde{w}_\epsilon\|_{ L^\infty ([0, T]; H^m_{k+\ell}(\mathbb{R}^2_+))}\le \zeta \end{equation*} with $$ 0<\zeta\le1, \quad C_m\zeta\le \frac{\tilde c_1}{2}, $$ where $0<T\le T_1$, $T_1$ is the lifespan of the shear flow $u^s$ in Lemma \ref{shear-profile}, and $C_m$ is the Sobolev embedding constant in \eqref{C0}. Then there exists $C_T>0$ such that \begin{equation}\label{energy estimate-A}
\|\tilde{w}_\epsilon\|_{ L^\infty ([0, T]; H^m_{k+\ell}(\mathbb{R}^2_+))}\le C_T\|\tilde{u}_{0}\|_{H^{m+1}_{k + \ell'-1}(\mathbb{R}^2_+)}, \end{equation} where $C_T>0$ is increasing with respect to $0<T\le T_1$ and independent of $0<\epsilon\le 1$. \end{theorem} Firstly, we collect some results to be used from Sections \ref{section3}--\ref{section5}. We come back to the notation with tilde and the sub-index $\epsilon$. Then $g^\epsilon_m, h^\epsilon_m$ are the functions defined by $\tilde{u}_\epsilon$. Under the hypothesis of Theorem \ref{energy}, we have proven the estimates \eqref{approx-less-k} and \eqref{uniform-part2-2} \begin{equation} \label{approx-less-k-b} \begin{split}
\frac{d}{ dt}\| \tilde{w}_\epsilon\|_{H^{m, m-1}_{k+\ell}(\mathbb{R}^2_+)}^2 &+ \|\partial_y\tilde{w}_\epsilon\|_{H^{m, m-1}_{k+\ell}(\mathbb{R}^2_+)}^2\\
&+ \epsilon\|\partial_x\tilde{w}_\epsilon\|_{H^{m, m- 1}_{k+\ell}(\mathbb{R}^2_+)}^2
\le C_1 \| \tilde{w}_\epsilon\|_{H^m_{k+\ell}(\mathbb{R}^2_+)}^2, \end{split} \end{equation}
\begin{equation} \label{uniform-part2-1b} \begin{split}
\frac{d}{dt}\sum^m_{n=1}\| g^\epsilon_n\|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2 & + \sum^m_{n=1}\| \partial_y g^\epsilon_n\|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2 + \epsilon \sum^m_{n=1}\| \partial_x g^\epsilon_n\|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2\\
& \le C_2(\sum^m_{n=1}\| g^\epsilon_n\|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2 + \|\tilde{w}_\epsilon\|_{H^m_{k+\ell}(\mathbb{R}^2_+)}^2)\,, \end{split} \end{equation}
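For the reader's convenience, we record how these two inequalities are combined. Setting, as in Lemma \ref{lemma-initial} below,
\[
T^\epsilon_m(g, w)(t) = \sum^m_{n=1}\| g^\epsilon_n(t)\|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2+\| \tilde{w}_\epsilon(t)\|_{H^{m, m -1}_{k+\ell}(\mathbb{R}^2_+)}^2,
\]
adding \eqref{approx-less-k-b} and \eqref{uniform-part2-1b} and dropping the nonnegative dissipation terms on the left-hand side yields
\[
\frac{d}{dt}\, T^\epsilon_m(g, w)(t) \le C_2\, T^\epsilon_m(g, w)(t) + (C_1+C_2)\, \|\tilde{w}_\epsilon(t)\|_{H^m_{k+\ell}(\mathbb{R}^2_+)}^2,
\]
and Gronwall's lemma then gives
\[
T^\epsilon_m(g, w)(t) \le e^{C_2 t}\, T^\epsilon_m(g, w)(0) + (C_1+C_2)\, e^{C_2 t}\int^t_0 e^{-C_2 \tau}\|\tilde{w}_\epsilon(\tau)\|_{H^m_{k+\ell}(\mathbb{R}^2_+)}^2\, d\tau,
\]
which is the mechanism behind \eqref{uniform-full-1b} below.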
\begin{lemma}\label{lemma-initial} For the initial data, we have \begin{align*}
T^\epsilon_m(g, w)(0)&= \sum^m_{n=1}\| g^\epsilon_n(0)\|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2+\| \tilde{w}_\epsilon(0)\|_{H^{m, m -1}_{k+\ell}(\mathbb{R}^2_+)}^2\\
&\le C\| \tilde{u}_0\|_{H^{m+1}_{k + \ell'-1}(\mathbb{R}^2_+)}^2, \end{align*} where $C$ is independent of $\epsilon$. \end{lemma} \begin{proof} Notice that for any $1\le n\le m$, \[
g^\epsilon_n = \big(\frac{\partial_x^n \tilde{u}_\epsilon}{u^s_y + \tilde{w}_\epsilon}\big)_y = \frac{\partial_x^n\partial_y \tilde{u}_\epsilon}{u^s_y + \tilde{w}_\epsilon}
- \frac{\partial_x^n \tilde{u}_\epsilon}{u^s_y + \tilde{w}_\epsilon} \eta_2, \] and $\tilde{u}_\epsilon(0)=\tilde{u}_0$, then we deduce, for any $1\le n\le m$, \begin{align*}
&\| g^\epsilon_n(0)\|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2\le 2\left\| \frac{\partial_x^n\partial_y \tilde{u}_0}{u^s_{0, y} + \tilde{w}_0}\right\|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2+2\left\| \frac{\partial_x^n\tilde{u}_0}{u^s_{0, y} + \tilde{w}_0}\eta_2(0)\right\|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2\\
&\le C\big(\| \partial_x^n\partial_y \tilde{u}_0\|_{L^2_{k+\ell'}(\mathbb{R}^2_+)}^2+\|\partial_x^n\tilde{u}_0\|_{L^2_{k+\ell'-1}(\mathbb{R}^2_+)}^2\big)
\le C\|\tilde{u}_0\|_{H^{m+1}_{k+\ell'-1}(\mathbb{R}^2_+)}^2. \end{align*} \end{proof} From \eqref{approx-less-k-b} and \eqref{uniform-part2-1b}, we have
\begin{align} \label{uniform-full-1b} \begin{split}
& \| g^\epsilon_m\|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2 + \| \tilde{w}_\epsilon\|_{H^{m, m-1}_{k+\ell}(\mathbb{R}^2_+)}^2 \\
& \le C_8 e^{C_2 t}\int^t_0e^{-C_2 \tau}\| \tilde{w}_\epsilon(\tau)\|_{H^m_{k+\ell}(\mathbb{R}^2_+)}^2 d\tau+C_9e^{C_2 t}\| \tilde{u}_0\|_{H^{m+1}_{k + \ell'-1}(\mathbb{R}^2_+)}^2 . \end{split} \end{align} \begin{lemma}\label{lemma-g-h-w} We also have the following estimate: $$
\|\partial^m_x\tilde{w}_\epsilon\|_{L^2_{k+\ell}(\mathbb{R}^2_+)}^2\le
\tilde C \|{g}_m^\epsilon\|_{L^2_{\ell'}}^2, $$ where $\tilde C$ is independent of $\epsilon$. \end{lemma} \begin{proof} By definition, \[ \partial_x^m \tilde{u}_\epsilon(t, x, y) = ( u^s_y+ \tilde{w}_\epsilon)\int_{0}^{y} g^\epsilon_m(t, x, \tilde y) d\tilde y,\quad y\in \mathbb{R}_+. \] Therefore, \[ \partial_x^m \tilde{w}_\epsilon = ( u^s_{yy}+(\tilde{w}_\epsilon)_y)\int_{0}^{y} g^\epsilon_m(t, x, \tilde y) d\tilde y - ( u^s_y+ \tilde{w}_\epsilon) g^\epsilon_m(t, x, y) ,\,\,~~ y \ge 0 , \] and \begin{align*}
\| \partial_x^m \tilde{w}_\epsilon\|_{L^2_{k + \ell}}^2 & \le C \int_{\mathbb{R}_+^2}\langle y \rangle^{2\ell - 2} \bigg( \int_{0}^{y} g_m^\epsilon(t,x, z) dz \bigg)^2 dx dy + \|g_m^\epsilon(t)\|_{L^2_{\ell'}(\mathbb{R}_+^2)}^2\\
& \le C \|g_m^\epsilon(t)\|_{L^2_{\ell'}(\mathbb{R}_+^2)}^2, \end{align*} where we have used $\ell - 1 < - \frac{1}{2}$ and $\frac12 < \ell'$.
\end{proof}
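For the reader's convenience, the first integral in the proof above is handled by the Cauchy--Schwarz inequality: since $2\ell' > 1$, for every $(x, y)$,
\[
\bigg( \int_{0}^{y} g_m^\epsilon(t,x, z)\, dz \bigg)^2 \le \int_{0}^{+\infty} \langle z \rangle^{-2\ell'} dz \int_{0}^{+\infty} \langle z \rangle^{2\ell'} g_m^\epsilon(t,x, z)^2\, dz \le C_{\ell'} \int_{0}^{+\infty} \langle z \rangle^{2\ell'} g_m^\epsilon(t,x, z)^2\, dz,
\]
uniformly in $y$, and since $2\ell - 2 < -1$, integrating in $y$ and then in $x$ gives
\[
\int_{\mathbb{R}_+^2}\langle y \rangle^{2\ell - 2} \bigg( \int_{0}^{y} g_m^\epsilon(t,x, z)\, dz \bigg)^2 dx\, dy \le C\, \|g_m^\epsilon(t)\|_{L^2_{\ell'}(\mathbb{R}_+^2)}^2.
\]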
\begin{proof}[{\bf End of proof of Theorem \ref{energy}}]
Combining \eqref{uniform-full-1b}, Lemma \ref{lemma-initial} and Lemma \ref{lemma-g-h-w}, we get, for any $t\in ]0, T]$, \begin{align*} \begin{split}
\|\tilde{w}_\epsilon(t)\|_{H^{m}_{k+\ell}(\mathbb{R}^2_+)}^2\le & \tilde C_8 e^{C_2 t}\int^t_0e^{-C_2 \tau}\| \tilde{w}_\epsilon(\tau)\|_{H^m_{k+\ell}(\mathbb{R}^2_+)}^2 d\tau\\
&+\tilde C_9e^{C_2 t}\| \tilde{u}_0\|_{H^{m+1}_{k + \ell'-1}(\mathbb{R}^2_+)}^2, \end{split} \end{align*} with $\tilde C_8, \tilde C_9$ independent of $0<\epsilon\le 1$. By Gronwall's inequality, we have, for any $t\in ]0, T]$, $$
\|\tilde{w}_\epsilon(t)\|_{H^{m}_{k+\ell}(\mathbb{R}^2_+)}^2\le \tilde C_9e^{(C_2+\tilde C_8) t}\| \tilde{u}_0\|_{H^{m+1}_{k + \ell'-1}(\mathbb{R}^2_+)}^2. $$ So it is enough to take \begin{align}\label{bound-2} C^2_T=\tilde C_9e^{(C_2+\tilde C_8) T} \end{align} which gives \eqref{energy estimate-A}, with $C_T$ increasing with respect to $T$. This finishes the proof of Theorem \ref{energy}. \end{proof}
\begin{theorem}\label{uniform-existence} Assume $u^s$ satisfies Lemma \ref{shear-profile}, and let $\tilde{u}_{0}\in H^{m+3}_{k + \ell'-1}(\mathbb{R}^2_+)$, $m\ge 6$ be an even integer, $k>1$, $0<\ell<\frac12$, $\frac{1}{2}<\ell' < \ell + \frac{1}{2}$, $k+\ell>\frac 32$, and $$ 0<\zeta\le 1\,\,\,\mbox{with}\,\,\, C_m\zeta\le \frac{\tilde c_1}{2}, $$ where $C_m$ is the Sobolev embedding constant. Then there exists $\zeta_0>0$ small enough such that, if
\begin{equation*}
\|\tilde{u}_0\|_{H^{m+1}_{k + \ell'-1}(\mathbb{R}^2_+)}\le \zeta_0, \end{equation*} then there exists $\epsilon_0>0$ such that, for any $0<\epsilon\le \epsilon_0$, the system \eqref{shear-prandtl-approxiamte-vorticity} admits a unique solution $\tilde{w}_\epsilon$ satisfying $$
\|\tilde{w}_\epsilon\|_{L^\infty ([0, T_1]; H^{m}_{k+\ell}(\mathbb{R}^2_+))}\le \zeta, $$ where $T_1$ is the lifespan of the shear flow $u^s$ in Lemma \ref{shear-profile}. \end{theorem}
\begin{remark}\label{remark7.1} Under the uniform monotonicity assumption \eqref{shear-critical-momotone}, the result of the above theorem holds for any fixed $T>0$. However, $\zeta_0$ decreases as $T$ increases, according to \eqref{c-tilde}. \end{remark}
\begin{proof} We fix $0<\epsilon\le 1$. Then for any $\tilde{w}_{0}\in H^{m+2}_{k+\ell}(\mathbb{R}^2_+)$, Theorem \ref{theorem3.1} ensures that there exist $\epsilon_0>0$ and, for any $0<\epsilon\le \epsilon_0$, a time $T_\epsilon>0$ such that the system \eqref{shear-prandtl-approxiamte-vorticity} admits a unique solution $\tilde{w}_\epsilon \in L^\infty ([0, T_\epsilon]; H^{m+2}_{k+\ell}(\mathbb{R}^2_+))$ which satisfies $$
\|\tilde{w}_\epsilon\|_{L^\infty ([0, T_\epsilon]; H^{m}_{k+\ell}(\mathbb{R}^2_+))}\le \frac 43 \|\tilde{w}_\epsilon(0)\|_{H^{m}_{k+\ell}(\mathbb{R}^2_+)}\le 2 \|\tilde{u}_0\|_{H^{m+1}_{k+\ell-1}(\mathbb{R}^2_+)}. $$ Now choose $\zeta_0$ such that \begin{equation*} \max\{2, C_{T_1}\} \zeta_0\le \frac{\zeta}{2}. \end{equation*} On the other hand, taking $\tilde{w}_\epsilon(T_\epsilon)$ as initial data for the system \eqref{shear-prandtl-approxiamte-vorticity}, Theorem \ref{theorem3.1} ensures that there exists $T'_\epsilon>0$, which is defined by \eqref{time-1} with $\bar\zeta=\frac{\zeta}{2}$, such that the system \eqref{shear-prandtl-approxiamte-vorticity} admits a unique solution $\tilde{w}'_\epsilon \in L^\infty ([T_\epsilon, T_\epsilon+T'_\epsilon]; H^{m}_{k+\ell}(\mathbb{R}^2_+))$ which satisfies $$
\|\tilde{w}'_\epsilon\|_{L^\infty ([T_\epsilon, T_\epsilon+T'_\epsilon]; H^{m}_{k+\ell}(\mathbb{R}^2_+))}\le \frac 43 \|\tilde{w}_\epsilon(T_\epsilon)\|_{H^{m}_{k+\ell}(\mathbb{R}^2_+)}\le \zeta. $$ Now, we extend $\tilde{w}_\epsilon$ to $[0, T_\epsilon+T'_\epsilon]$ by $\tilde{w}'_\epsilon$, then we get a solution $\tilde{w}_\epsilon \in L^\infty ([0, T_\epsilon+T'_\epsilon]; H^{m}_{k+\ell}(\mathbb{R}^2_+))$ which satisfies $$
\|\tilde{w}_\epsilon\|_{L^\infty ([0, T_\epsilon+T'_\epsilon]; H^{m}_{k+\ell}(\mathbb{R}^2_+))}\le \zeta. $$ So if $T_\epsilon+T'_\epsilon<T_1$, we can apply Theorem \ref{energy} to $\tilde{w}_\epsilon$ with $T=T_\epsilon+T'_\epsilon$, and use \eqref{energy estimate-A}, this gives $$
\|\tilde{w}_\epsilon\|_{L^\infty ([0, T_\epsilon+T'_\epsilon]; H^{m}_{k+\ell}(\mathbb{R}^2_+))}\le C_{T_1} \|\tilde{u}_0\|_{H^{m+1}_{k+\ell-1}(\mathbb{R}^2_+)}\le \frac{\zeta}{2}. $$ Now taking $\tilde{w}_\epsilon(T_\epsilon+T'_\epsilon)$ as initial data for the system \eqref{shear-prandtl-approxiamte-vorticity}, applying again Theorem \ref{theorem3.1}, for the same $T'_\epsilon>0$, the system \eqref{shear-prandtl-approxiamte-vorticity} admits a unique solution $\tilde{w}'_\epsilon \in L^\infty ([T_\epsilon+T'_\epsilon, T_\epsilon+2T'_\epsilon]; H^{m}_{k+\ell}(\mathbb{R}^2_+))$ which satisfies $$
\|\tilde{w}'_\epsilon\|_{L^\infty ([T_\epsilon+T'_\epsilon, T_\epsilon+2T'_\epsilon]; H^{m}_{k+\ell}(\mathbb{R}^2_+))}\le \frac 43 \|\tilde{w}_\epsilon(T_\epsilon+T'_\epsilon)\|_{H^{m}_{k+\ell}(\mathbb{R}^2_+)}\le \zeta. $$ Now, we extend $\tilde{w}_\epsilon$ to $[0, T_\epsilon+2T'_\epsilon]$ by $\tilde{w}'_\epsilon$, then we get a solution $\tilde{w}_\epsilon \in L^\infty ([0, T_\epsilon+2T'_\epsilon]; H^{m}_{k+\ell}(\mathbb{R}^2_+))$ which satisfies $$
\|\tilde{w}_\epsilon\|_{L^\infty ([0, T_\epsilon+2T'_\epsilon]; H^{m}_{k+\ell}(\mathbb{R}^2_+))}\le \zeta. $$ So if $T_\epsilon+2T'_\epsilon<T_1$, we can apply Theorem \ref{energy} to $\tilde{w}_\epsilon$ with $T=T_\epsilon+2T'_\epsilon$, and use \eqref{energy estimate-A}, this gives again $$
\|\tilde{w}_\epsilon\|_{L^\infty ([0, T_\epsilon+2T'_\epsilon]; H^{m}_{k+\ell}(\mathbb{R}^2_+))}\le C_{T_1} \|\tilde{u}_0\|_{H^{m+1}_{k+\ell-1}(\mathbb{R}^2_+)}\le \frac{\zeta}{2}. $$ Then, by recurrence, we can extend the solution $\tilde{w}_\epsilon$ to $[0, T_1]$; hence the lifespan of the approximate solution equals that of the shear flow provided the initial data $\tilde{u}_0$ is small enough. \end{proof}
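Let us summarize the recurrence. The argument iterates, for $j=1, 2, \dots$, the implication
\[
\|\tilde{w}_\epsilon\|_{L^\infty ([0,\, T_\epsilon+jT'_\epsilon];\, H^{m}_{k+\ell}(\mathbb{R}^2_+))}\le C_{T_1} \|\tilde{u}_0\|_{H^{m+1}_{k+\ell-1}(\mathbb{R}^2_+)}\le \frac{\zeta}{2}
\;\Longrightarrow\;
\|\tilde{w}_\epsilon\|_{L^\infty ([0,\, T_\epsilon+(j+1)T'_\epsilon];\, H^{m}_{k+\ell}(\mathbb{R}^2_+))}\le \zeta,
\]
where the first bound comes from the a priori estimate of Theorem \ref{energy} and the second from the local existence result of Theorem \ref{theorem3.1}. Since the step $T'_\epsilon>0$ depends only on $\bar\zeta=\frac{\zeta}{2}$ and not on $j$, finitely many iterations exhaust $[0, T_1]$.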
\label{sec-exi} We have obtained the following estimate, for $m\ge 6$ and $0<\epsilon\le\epsilon_0$, \begin{align*}
\|\tilde{w}_\epsilon(t)\|_{H^m_{k+\ell}(\mathbb{R}^2_+)} \le \zeta,\quad t \in [0, T_1]. \end{align*} By using the equation \eqref{shear-prandtl-approxiamte-vorticity} and the Sobolev inequality, we get, for $0<\delta<1$, \[
\|\tilde{w}_\epsilon\|_{Lip ([0, T_1]; C^{2, \delta}(\mathbb{R}^2_+))}\le M<+\infty. \] Then, taking a subsequence, we have, for $0<\delta'<\delta$, \[ \tilde{w}_\epsilon \to \tilde{w}\,\,({\epsilon\,\to\,0}),\,\, \text{locally strongly in }~~C^0([0, T_1]; C^{2, \delta'}(\mathbb{R}^2_+))\,, \] and \[ \partial _t \tilde{w} \in L^\infty ([0, T_1]; H^{m-2}_{k+\ell}(\mathbb{R}^2_+)),\quad \tilde{w} \in L^\infty ([0, T_1]; H^{m}_{k+\ell}(\mathbb{R}^2_+)), \] with \begin{align*}
\|\tilde{w}\|_{L^\infty ([0, T_1]; H^{m}_{k+\ell}(\mathbb{R}^2_+))} \le \zeta. \end{align*} Then we have \begin{align*} \tilde{u}=\partial^{-1}_y \tilde{w}\in L^\infty([0, T_1]; H^{m}_{k+\ell-1}(\mathbb{R}^2_+)), \end{align*} where we use the Hardy inequality \eqref{Hardy1}, since $$ \lim_{y\to+\infty} \tilde{u}(t, x, y)=-\lim_{y\to+\infty} \int^{+\infty}_y \tilde{w}(t, x, \tilde{y} )d\tilde{y}=0. $$ In fact, we also have $$ \lim_{y\to 0} \tilde{u}(t, x, y)=\lim_{y\to 0} \int^y_0 \tilde{w}(t, x, \tilde{y} )d\tilde{y}=0. $$ Using the condition $k+\ell-1>\frac 12$, we also have $$ \tilde{v}=-\int^y_0 \tilde{u}_x\, d \tilde{y} \in L^\infty ([0, T_1]; L^\infty(\mathbb{R}_{+, y}); H^{m-1}(\mathbb{R}_x)). $$ We have proven that $\tilde{w}$ is a classical solution to the following vorticity Prandtl equation \begin{align*} \begin{cases} & \partial_t\tilde{w} + (u^s + \tilde{u}) \partial_x\tilde{w} + \tilde{v} \partial_y(u^s_y+\tilde{w}) = \partial^2_y\tilde{w},\\
& \partial_y \tilde{w}|_{y=0} = 0,\\
& \tilde{w}|_{t=0} = \tilde{w}_0, \end{cases} \end{align*} and $(\tilde{u}, \tilde{v})$ is a classical solution to \eqref{non-shear-prandtl}. Finally, $(u, v)=(u^s+\tilde{u}, \tilde{v})$ is a classical solution to \eqref{full-prandtl} and satisfies \eqref{main-energy}. In conclusion, we have proved the following theorem, which is the existence part of the main Theorem \ref{main-theorem}.
\begin{theorem}\label{main-theorem-bis} Let $m\ge 6$ be an even integer, $k>1$, $0< \ell<\frac12$, $\frac 12< \ell' < \ell+ \frac 12$, $k+\ell>\frac 32$. Assume that $u^s_0$ satisfies \eqref{shear-critical-momotone}, the initial data $\tilde{u}_0 \in H^{m+3}_{k + \ell' -1 }(\mathbb{R}^2_+)$, and $\tilde{u}_0 $ satisfies the compatibility conditions \eqref{compatibility-a1}-\eqref{compatibility-a2} up to order $m+2$. Then there exists $T>0$ such that if \begin{equation*}
\|\tilde{u}_0 \|_{H^{m+1}_{k + \ell' -1 }(\mathbb{R}^2_+)}\le \delta_0, \end{equation*} for some $\delta_0>0$ small enough, then the initial-boundary value problem \eqref{non-shear-prandtl} admits a solution $(\tilde{u}, \tilde{v})$ with
\begin{align*}
&\tilde{u}\in L^\infty([0, T]; H^{m}_{k+\ell-1}(\mathbb{R}^2_+)),\quad \partial_y\tilde{u}\in L^\infty([0, T]; H^{m}_{k+\ell}(\mathbb{R}^2_+)).
\end{align*} Moreover, we have the following energy estimate, \begin{align}\label{main-energy} \begin{split}
\|\partial_y\tilde{u}\|_{L^\infty([0, T]; H^m_{k+\ell}(\mathbb{R}^2_+))} \le C\|\tilde{u}_0 \|_{ H^{m+1}_{k + \ell' -1 }(\mathbb{R}^2_+)}. \end{split} \end{align} \end{theorem}
\section{Uniqueness and stability}\label{section8}
Now, we study the stability of solutions, which immediately implies the uniqueness of the solution.
Let $\tilde{u}^1, \tilde{u}^2$ be two solutions obtained in Theorem \ref{main-theorem-bis} with respect to the initial data $\tilde{u}^1_0, \tilde{u}^2_0$ respectively. Denote $\bar u = \tilde{u}^1 - \tilde{u}^2$ and $\bar v= \tilde{v}^1-\tilde{v}^2$; then \begin{equation*} \begin{cases} \partial_t \bar{u} + (u^s + \tilde{u}_1)\partial_x \bar{u} + (u^s_y + \tilde{u}_{1, y})\bar{v} = \partial^2_y \bar{u} - \tilde{v}_2 \partial_y\bar{u} -(\partial_x\tilde{u}_2) \bar{u},\\ \partial_x \bar{u}+\partial_y\bar{v}=0,\\
\bar{u}|_{y=0}=\bar{v}|_{y=0}=0,\\
\bar{u}|_{t=0}=\tilde{u}^1_0 - \tilde{u}^2_0 . \end{cases} \end{equation*} So it is a linear equation for $\bar{u}$. We also have for the vorticity $\bar w= \partial_y \bar u$, \begin{equation}\label{stability-2} \begin{cases} \partial_t \bar{w} + (u^s + \tilde{u}_1)\partial_x \bar{w} + (u^s_{yy} + \tilde{w}_{1, y})\bar{v} = \partial^2_y \bar{w} - \tilde{v}_2 \partial_y\bar{w} -(\partial_x\tilde{w}_2) \bar{u},\\
\partial_y\bar{w}|_{y=0}=0,\\
\bar{w}|_{t=0}=\tilde{w}^1_0 - \tilde{w}^2_0 . \end{cases} \end{equation}
\noindent {\bf Estimate with a loss of $x$-derivative.} Firstly, for the vorticity $\bar w=\partial_y \bar u$, we deduce an energy estimate with a loss of $x$-derivative in the anisotropic norm defined by \eqref{norm-1}. \begin{proposition}\label{prop8.1} Let $\tilde{u}^1, \tilde{u}^2$ be two solutions obtained in Theorem \ref{main-theorem-bis} with respect to the initial data $\tilde{u}^1_0, \tilde{u}^2_0$. Then we have \begin{equation} \label{w-bar-less-k} \begin{split}
\frac{d}{ dt}\| \bar{w}\|_{H^{m-2, m-3}_{k+\ell}(\mathbb{R}^2_+)}^2+ \|\partial_y\bar{w}\|_{H^{m-2, m-3}_{k+\ell}(\mathbb{R}^2_+)}^2\le \bar C_1\| \bar{w}\|_{H^{m-2}_{k+\ell}}^2, \end{split} \end{equation} where the constant $\bar C_1$ depends on the norm of $\tilde{w}^1, \tilde{w}^2$ in $L^\infty([0, T]; H^m_{k+\ell}(\mathbb{R}^2_+))$. \end{proposition} \begin{proof}
The proof of this Proposition is similar to that of Proposition \ref{prop3.1}, and we need to use that $m-2$ is even. We only give the calculation for the terms which need a different argument. Moreover, we also explain why we only get the estimate on $\|\bar{w}\|_{H^{m-2}_{k+\ell}}^2$ but require the norm of $\tilde{w}^1, \tilde{w}^2$ in $L^\infty([0, T]; H^m_{k+\ell}(\mathbb{R}^2_+))$. Without loss of generality, we suppose that $\|\bar{w}\|_{H^{m-2}_{k+\ell}}\le 1, \|\tilde{w}^1\|_{H^{m}_{k+\ell}}\le 1$ and $\|\tilde{w}^2\|_{H^{m}_{k+\ell}}\le 1$.
Applying $\partial^\alpha=\partial^{\alpha_1}_x\partial^{\alpha_2}_y$, with $|\alpha|=\alpha_1+\alpha_2\le m-2, \alpha_1\le m-3$, to the equation in \eqref{stability-2} gives \begin{align}\label{8.1} \begin{split} &\partial_t \partial^{\alpha} \bar{w} - \partial_y^2 \partial^{\alpha}\bar{w}= - \partial^{\alpha} \big((u^s + \tilde{u}_1)\partial_x \bar{w}+ \tilde{v}_2 \partial_y\bar{w}\\ &\qquad\qquad+( u^s_{yy}+ \tilde{w}_{1, y} )\bar{v} +(\partial_x\tilde{w}_2) \bar{u} \big). \end{split} \end{align} Multiplying the above equation by $ \langle y \rangle^{2(k + \ell+{\alpha_2})} \partial^{\alpha} \bar{w}$, the same computation as in the proof of Proposition \ref{prop3.1} (in particular, the reduction of the boundary data is the same) gives
\begin{align*}
\begin{split}
& \int_{\mathbb{R}^2_+} \bigg(\partial_t \partial^{\alpha} \bar{w} - \partial_y^2\partial^{\alpha} \bar{w} \bigg) \langle y \rangle^{2 (k+\ell+\alpha_2)} \partial^{\alpha} \bar{w} dx dy\\
& \ge\frac 12 \frac{d}{dt}\|\partial^{\alpha}\bar{w}\|_{L^2_{k+\ell+\alpha_2}}^2+ \frac{3}{4} \|\partial_y \bar{w}\|_{H^{m-2, m-3}_{k+\ell}}^2 - C\|\bar{w}\|_{H^{m-2}_{k+\ell}}^2.
\end{split}
\end{align*}
As for the right-hand side of \eqref{8.1}, we split the first term into two parts
\begin{align*}
- \partial^{\alpha} \bigg((u^s + \tilde{u}_1)\partial_x \bar{w}\bigg) = - (u^s + \tilde{u}_1)\partial_x \partial^{\alpha} \bar{w} + [ (u^s + \tilde{u}_1), \partial^{\alpha}]\partial_x \bar{w}. \end{align*} Firstly, we have \begin{align*}
\left|\int_{\mathbb{R}^2_+} \big((u^s + \tilde{u}_1) \partial_x \partial^{\alpha} \bar{w}\big)\langle y \rangle^{2(k+\ell+{\alpha_2})}\partial^{\alpha} \bar{w} dx dy\right| \le \| \tilde{w}_1\|_{H^3_1}\|\partial^{\alpha} \bar{w}\|_{L^2_{k+\ell+{\alpha_2}}}^2. \end{align*} For the commutator, we have \begin{align*}
\| [ (u^s + \tilde{u}_1), \partial^{\alpha}]\partial_x \bar{w}\|_{L^2_{k+\ell+\alpha_2}}&\le C \|\tilde{w}_1\|_{H^{m-2}_{k+\ell}(\mathbb{R}^2_+)}\|\bar{w}\|_{H^{m-2, m-3}_{k+\ell}(\mathbb{R}^2_+)}. \end{align*} Notice that for this term there is no loss of $x$-derivative.
By a similar method applied to the term $\tilde{v}_2 \partial_y\bar{w}$, we get \begin{align*}
\left|\int_{\mathbb{R}^2_+} \tilde{v}_2 \partial_y\bar{w}\langle y \rangle^{2(k+\ell+{\alpha_2})}\partial^{\alpha} \bar{w} dx dy\right| &\le \| \tilde{w}_2\|_{H^{m-2}_{k+\ell}(\mathbb{R}^2_+)}\|
\bar{w}\|_{H^{m-2, m-3}_{k+\ell}(\mathbb{R}^2_+)}^2. \end{align*} For the next one, we have $$ \partial^{\alpha} \bigg(( u^s_{yy}+\partial_y\tilde{w}_1) \bar{v} \bigg)
= \sum\limits_{ \beta \le \alpha } C^\alpha_\beta\, \partial^{\beta} ( u^s_{yy}+\partial_y\tilde{w}_1) \partial^{\alpha - \beta}\bar{v}, $$ and thus \begin{align*}
&\left\|\sum\limits_{ \beta \le \alpha, 1\le |\beta|<|\alpha| } C^\alpha_\beta\, \partial^{\beta} ( u^s_{yy}+\partial_y\tilde{w}_1)\partial^{\alpha - \beta}\bar{v}\right\|_{L^2_{k+\ell+{\alpha_2}}}\\
&\qquad\le C\| \tilde{w}_1\|_{H^{m-2}_{k+\ell}(\mathbb{R}^2_+)}\|
\bar{w}\|_{H^{m-2, m-3}_{k+\ell}(\mathbb{R}^2_+)}. \end{align*} On the other hand, using Lemma \ref{inequality-hardy} and $\frac 32 -k<\ell<\frac 12$, \begin{align*}
&\left\|\big(\partial^{\alpha} ( u^s_{yy}+\partial_y\tilde{w}_1)\big)\bar{v}\right\|_{L^2_{k+\ell+{\alpha_2}}}\le
\left\|\big(\partial^{\alpha} u^s_{yy}\big)\bar{v}\right\|_{L^2_{k+\ell+{\alpha_2}}}+
\left\|\big(\partial^{\alpha} \partial_y\tilde{w}_1\big)\bar{v}\right\|_{L^2_{k+\ell+{\alpha_2}}}\\
&\qquad\qquad\le C \left\|\bar{v}\right\|_{L^2(\mathbb{R}_x; L^\infty(\mathbb{R}_+))}+
C\| \tilde{w}_1\|_{H^{m}_{k+\ell}(\mathbb{R}^2_+)}\left\|\bar{v}\right\|_{L^\infty(\mathbb{R}^2_+)}\\
&\qquad\qquad\le C \left\|\bar{u}_x\right\|_{L^2_{\frac 12+\delta}(\mathbb{R}^2_+)}+
C\| \tilde{w}_1\|_{H^{m}_{k+\ell}(\mathbb{R}^2_+)}(\left\|\bar{u}_x\right\|_{L^2_{\frac 12+\delta}(\mathbb{R}^2_+)}
+\left\|\bar{u}_{xx}\right\|_{L^2_{\frac 12+\delta}(\mathbb{R}^2_+)})\\
&\qquad\qquad\le C(1+\| \tilde{w}_1\|_{H^{m}_{k+\ell}(\mathbb{R}^2_+)})\left\|\bar{w}\right\|_{H^2_{\frac 12+\delta}(\mathbb{R}^2_+)}\\
&\qquad\qquad\le C(1+\| \tilde{w}_1\|_{H^{m}_{k+\ell}(\mathbb{R}^2_+)})\left\|\bar{w}\right\|_{H^2_{k+\ell}(\mathbb{R}^2_+)}. \end{align*}
So this term requires the norm $\| \tilde{w}_1\|_{H^{m}_{k+\ell}(\mathbb{R}^2_+)}$.
Moreover, if $\alpha_2\not =0$, \begin{align*}
&\left\|( u^s_{yy}+\partial_y\tilde{w}_1)\partial^{\alpha} \bar{v}\right\|_{L^2_{k+\ell+{\alpha_2}}}=
\left\|( u^s_{yy}+\partial_y\tilde{w}_1)\partial^{\alpha_1}_x\partial^{\alpha_2-1}_y \bar{u}_x\right\|_{L^2_{k+\ell+{\alpha_2}}}\\
&\qquad\qquad\le C (1+\| \tilde{w}_1\|_{H^{m-1}_{k+\ell}(\mathbb{R}^2_+)})\left\|\bar{w}
\right\|_{H^{m-2}_{k+\ell}(\mathbb{R}^2_+)}, \end{align*} and, if $\alpha_2 =0$, \begin{align*}
&\left\|( u^s_{yy}+\partial_y\tilde{w}_1)\partial^{\alpha_1}_x \bar{v}\right\|_{L^2_{k+\ell}}=
\left\|( u^s_{yy}+\partial_y\tilde{w}_1)\partial^{-1}_y\partial^{\alpha_1}_x \bar{u}_x\right\|_{L^2_{k+\ell}}\\
&\qquad\qquad\le C (1+\| \tilde{w}_1\|_{H^{m-1}_{k+\ell}(\mathbb{R}^2_+)})\left\|\partial^{\alpha_1+1}_x\bar{w}\right\|_{L^2_{\frac 32+\delta}(\mathbb{R}^2_+)}. \end{align*} These two cases exhibit the loss of one $x$-derivative.
A similar argument also gives \begin{align*}
\left|\int_{\mathbb{R}^2_+} \big(\partial^{\alpha}(\partial_x\tilde{w}_2) \bar{u}\big)\langle y \rangle^{2(k+\ell+{\alpha_2})}\partial^{\alpha} \bar{w} dx dy\right| \le C \| \tilde{w}_2\|_{H^m_{k+\ell}(\mathbb{R}^2_+)}\| \bar{w}\|_{H^{m-2}_{k+\ell}(\mathbb{R}^2_+)}^2, \end{align*} which finishes the proof of Proposition \ref{prop8.1}. \end{proof}
\noindent {\bf Estimate on the loss term.} To close the estimate \eqref{main-energy}, we need to study the term $\|\partial^{m-2}_x \bar w\|_{L^2_{k+\ell}(\mathbb{R}^2_+)}$, which is missing from the left-hand side of \eqref{w-bar-less-k}.
Similar to the argument in Section \ref{section7}, we will recover this term by estimating the functions \begin{align*} \bar g_n& = \left( \frac{\partial_x^n \bar{u}}{u^s_y + \tilde u_{1,y}} \right)_y, \quad \forall (t, x, y)\in [0, T]\times \mathbb{R} \times \mathbb{R}^+. \end{align*}
\begin{proposition} \label{prop8.2b} Let $\tilde{u}^1, \tilde{u}^2$ be two solutions obtained in Theorem \ref{main-theorem-bis} with respect to the initial data $\tilde{u}^1_0, \tilde{u}^2_0$. Then we have \begin{equation*} \begin{split}
\frac{d}{dt}\sum^{m-2}_{n=1}\| \bar g_n \|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2 & + \sum^{m-2}_{n=1}\| \partial_y \bar g_n\|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2\\
& \le \bar C_2(\sum^{m-2}_{n=1}\| \bar g_n\|_{L^2_{\ell'}(\mathbb{R}^2_+)}^2 + \|\bar {w}\|_{H^{m-2}_{k+\ell}}^2), \end{split} \end{equation*} where the constant $\bar C_2$ depends on the norm of $\tilde{w}^1, \tilde{w}^2$ in $L^\infty([0, T]; H^m_{k+\ell}(\mathbb{R}^2_+))$. \end{proposition}
This Proposition can be proven by exactly the same calculation as in Section \ref{section5}. The only difference is that, when we use the Leibniz formula, the derivative of order $|\alpha|=m-2$ may act on the coefficients, which depend on $\tilde{u}^1, \tilde{u}^2$; therefore, we need their norms at order $(m-2)+1$. We thus omit the proof of this Proposition here.
By an argument similar to the proof of Theorem \ref{energy}, we get \begin{equation*}
\|\bar{w}\|_{ L^\infty ([0, T]; H^{m-2}_{k+\ell}(\mathbb{R}^2_+))}\le C \|\bar{u}_{0}\|_{H^{m+1}_{k + \ell'-1}(\mathbb{R}^2_+)}, \end{equation*} which finishes the proof of Theorem \ref{main-theorem}.
\appendix
\section{Some inequalities} We will use the following Hardy type inequalities. \begin{lemma}\label{inequality-hardy} Let $f : \mathbb{R} \times \mathbb{R}^+\to \mathbb{R}$. Then \begin{itemize}
\item[(i)] if $\lambda > - \frac{1}{2}$ and $ \lim\limits_{y \to \infty} f(x,y) = 0$, then
\begin{equation} \label{Hardy1}
\|\langle y \rangle^\lambda f\|_{L^2 (\mathbb{R}^2_+)} \le C_\lambda
\|\langle y \rangle^{\lambda +1} \partial_y f\|_{L^2 (\mathbb{R}^2_+)};
\end{equation} \item[(ii)] if $-1 \le \lambda < - \frac{1}{2}$ and $f(x, 0) = 0$, then
\begin{equation*}
\|\langle y \rangle^\lambda f\|_{L^2 (\mathbb{R}^2_+)} \le C_\lambda
\| \langle y \rangle^{\lambda + 1} \partial_y f \|_{L^2 (\mathbb{R}^2_+)}.
\end{equation*}
\end{itemize}
Here $C_\lambda \to +\infty$ as $\lambda \to -\frac 12$.
\end{lemma}
We need the following trace theorem in the weighted Sobolev space. \begin{lemma}\label{lemma-trace} Let $\lambda>\frac 12$, then there exists $C>0$ such that for any function $f$ defined on $\mathbb{R}^2_+$, if $\partial_y f\in L^2_{\lambda}(\mathbb{R}^2_+)$, it admits a trace on $\mathbb{R}_x\times\{0\}$, and satisfies $$
\|\gamma_0(f)\|_{L^2 (\mathbb{R}_x)}\le C
\|\partial_y f\|_{L^2_\lambda(\mathbb{R}^2_+)}, $$ where $\gamma_0(f)(x)=f(x, 0)$ is the trace operator. \end{lemma} The proofs of the above two lemmas are elementary, so we leave them to the reader.
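For completeness, we indicate the standard integration by parts behind case (i) of Lemma \ref{inequality-hardy}; case (ii) and Lemma \ref{lemma-trace} are treated in the same spirit. Since $\langle y \rangle$ is comparable to $1+y$, for $\lambda > -\frac{1}{2}$ and, say, $f$ compactly supported in $y$ (the general case follows by density),
\begin{align*}
\int_{0}^{+\infty} (1+y)^{2\lambda} f^2\, dy &= \Big[\frac{(1+y)^{2\lambda+1}}{2\lambda+1}\, f^2\Big]_{0}^{+\infty} - \frac{2}{2\lambda+1}\int_{0}^{+\infty} (1+y)^{2\lambda+1} f\, \partial_y f\, dy\\
&\le \frac{2}{2\lambda+1}\, \|(1+y)^{\lambda} f\|_{L^2(\mathbb{R}_+)}\, \|(1+y)^{\lambda+1} \partial_y f\|_{L^2(\mathbb{R}_+)},
\end{align*}
since the boundary term at $y=0$ is nonpositive (as $2\lambda+1>0$) and the one at $y=+\infty$ vanishes. Dividing by $\|(1+y)^{\lambda} f\|_{L^2(\mathbb{R}_+)}$ and integrating in $x$ gives \eqref{Hardy1} with $C_\lambda$ of order $\frac{2}{2\lambda+1}$, which indeed blows up as $\lambda \to -\frac{1}{2}$.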
We also use the following Sobolev inequality and algebraic properties of $H^m_{k+\ell}(\mathbb{R}^2_+)$. \begin{lemma}\label{lemma2.4} For suitable functions $f, g$, we have:
\noindent 1) If the function $f$ satisfies $f(x, 0) = 0$ or $\lim_{y \to +\infty}f(x, y)=0$, then for any small $\delta>0$, \begin{equation}\label{sobolev-1} \begin{split}
\|f\|_{L^\infty(\mathbb{R}^2_+)}\le C(\| f_y\|_{L^2_{\frac 12+\delta}(\mathbb{R}^2_+)}+\| f_{x y}\|_{L^2_{\frac 12+\delta}(\mathbb{R}^2_+)}). \end{split} \end{equation}
2) For $m\ge 6, k+\ell>\frac 32$, and any $\alpha, \beta\in \mathbb{N}^2$ with $|\alpha|+|\beta|\le m$, we have \begin{equation}\label{sobolev-2} \begin{split}
\|(\partial^\alpha f)(\partial^\beta g)\|_{L^2_{k+\ell+\alpha_2+\beta_2}(\mathbb{R}^2_+)}\le C\|f\|_{H^m_{k+\ell}(\mathbb{R}^2_+)}\|g\|_{H^m_{k+\ell}(\mathbb{R}^2_+)}. \end{split} \end{equation}
3) For $m\ge 6, k+\ell>\frac 32$, and any $\alpha\in \mathbb{N}^2, p\in\mathbb{N}$ with $|\alpha|+p\le m$, we have, $$
\|(\partial^\alpha f)(\partial^p_x (\partial^{-1}_y g))\|_{L^2_{k+\ell+\alpha_2}(\mathbb{R}^2_+)}\le C\|f\|_{H^m_{k+\ell}(\mathbb{R}^2_+)}\|g\|_{H^m_{\frac 12+\delta}(\mathbb{R}^2_+)}, $$ where $\partial^{-1}_y$ is the inverse of derivative $\partial_y$, meaning, $\partial^{-1}_y g=\int^y_0 g(x, \tilde y) \, d\tilde{y}$. \end{lemma} \begin{proof} For (1), using $f(x, 0) = 0$, we have \begin{equation*} \begin{split}
\|f\|_{L^\infty(\mathbb{R}^2_+)}&= \left\|\int^y_0 (\partial_y f)(x, \tilde y) \, d\tilde{y}\right\|_{L^\infty(\mathbb{R}^2_+)}
\le C\|\partial_y f\|_{L^\infty(\mathbb{R}_x; L^2_{\frac 12+\delta}(\mathbb{R}_+))}\\
&\le C(\| \partial_y f\|_{L^2_{\frac 12+\delta}(\mathbb{R}^2_+)}+\| \partial_x\partial_y f\|_{L^2_{\frac 12+\delta}(\mathbb{R}^2_+)}). \end{split} \end{equation*} If $\lim_{y\to+\infty} f(x, y)=0$, we use $$ f(x, y)=-\int^\infty_y (\partial_y f)(x, \tilde y) \, d\tilde{y}. $$
For (2), firstly, $m\ge 6$ and $|\alpha|+|\beta|\le m$ imply $|\alpha|\le m-2$ or $|\beta|\le m-2$, without loss of generality, we suppose that $|\alpha|\le m-2$. Then, using the conclusion of (1), we have \begin{equation*} \begin{split}
\|(\partial^\alpha f)(\partial^\beta g)\|_{L^2_{k+\ell+\alpha_2+\beta_2}(\mathbb{R}^2_+)}
&\le \|\langle y\rangle^{\alpha_2}(\partial^\alpha f)\|_{L^\infty(\mathbb{R}^2_+)}\|\partial^\beta g\|_{L^2_{k+\ell+\beta_2}(\mathbb{R}^2_+)}\\
&\le C\|f\|_{H^{|\alpha|+2}_{\frac 12 +\delta}(\mathbb{R}^2_+)}\|\partial^\beta g\|_{L^2_{k+\ell+\beta_2}(\mathbb{R}^2_+)}, \end{split} \end{equation*} which gives \eqref{sobolev-2}.
For (3), if $|\alpha|\le m-2$, we have \begin{equation*} \begin{split}
\|(\partial^\alpha f)&(\partial^p_x (\partial^{-1}_y g))\|_{L^2_{k+\ell+\alpha_2}(\mathbb{R}^2_+)}\\
&\le \|\langle y\rangle^{k+\ell+\alpha_2}(\partial^\alpha f)\|_{L^2(\mathbb{R}_{y, +}; L^\infty(\mathbb{R}_x))}\|\partial^p_x (\partial^{-1}_y g)\|_{L^\infty(\mathbb{R}_{y, +}; L^2(\mathbb{R}_x))}\\
&\le C\|f\|_{H^{|\alpha|+2}_{k+\ell}(\mathbb{R}^2_+)}\|\partial^p_x g\|_{L^2_{\frac 12+\delta}(\mathbb{R}^2_+)}. \end{split} \end{equation*} If $p\le m-2$, we have \begin{equation*} \begin{split}
\|(\partial^\alpha f)&(\partial^p_x (\partial^{-1}_y g))\|_{L^2_{k+\ell+\alpha_2}(\mathbb{R}^2_+)}\\
&\le \|\langle y\rangle^{k+\ell+\alpha_2}(\partial^\alpha f)\|_{L^2(\mathbb{R}^2_+)}\|\partial^p_x (\partial^{-1}_y g)\|_{L^\infty(\mathbb{R}^2_+)}\\
&\le C\|f\|_{H^{|\alpha|}_{k+\ell}(\mathbb{R}^2_+)}\|\partial^p_x g\|_{L^\infty(\mathbb{R}_x; L^2_{\frac 12+\delta}(\mathbb{R}_{y, +}))}\\
&\le C\|f\|_{H^{|\alpha|}_{k+\ell}(\mathbb{R}^2_+)}\|g\|_{H^m_{\frac 12+\delta}(\mathbb{R}^2_+)}. \end{split} \end{equation*} This completes the proof of the Lemma. \end{proof}
\section{The existence of approximate solutions} \label{section-a3} Now, we prove Proposition \ref{prop3.0}, the existence of a solution to the equation for the vorticity $ \tilde{w}_\epsilon=\partial_y\tilde{u}_\epsilon$. We suppose that $m, k, \ell$ and $u^s(t, y)$ satisfy the assumptions of Proposition \ref{prop3.0}: \begin{align} \label{apendix-vorticity-bb} \begin{cases} & \partial_t\tilde{w}_\epsilon + (u^s + \tilde{u}_\epsilon) \partial_x\tilde{w}_\epsilon +\tilde{v}_\epsilon (u^s_{yy} + \partial_y \tilde{w}_{\epsilon}) = \partial^2_{y}\tilde{w}_\epsilon + \epsilon \partial^2_{x}\tilde{w}_\epsilon, \\
& \partial_y \tilde{w}_{\epsilon}|_{y=0}=0\\
&\tilde{w}_\epsilon|_{t=0}=\tilde{w}_{0, \epsilon}, \end{cases} \end{align} where \begin{equation*} \tilde{u}_\epsilon(t, x, y)=-\int^{+\infty}_y \tilde{w}_\epsilon(t, x, \tilde y) d\tilde y,\quad \tilde{v}_\epsilon(t, x, y)=-\int^{y}_0\partial_x \tilde{u}_\epsilon(t, x, \tilde y) d\tilde y. \end{equation*} We will use the following iteration process to prove the existence of a solution, where $ w^0=\tilde{w}_{0, \epsilon}$: \begin{align} \label{apendix-vorticity-iteration} \begin{cases} & \partial_tw^n + (u^s+{u}^{n-1} )\partial_x w^{n} + (u^s_{yy} +\partial_y w^{n-1}){v}^{n} = \partial^2_{y}w^n + \epsilon \partial^2_{x}w^n , \\
& \partial_y w^n |_{y=0}=0\\
&w^n|_{t=0}=\tilde{w}_{0, \epsilon}, \end{cases} \end{align} with \begin{equation*} u^{n-1}(t, x, y)=-\int^{+\infty}_y w^{n-1}(t, x, \tilde y) d\tilde y, \end{equation*} and \begin{align*} v^{n}(t, x, y)&=-\int^{y}_0\partial_x u^{n}(t, x, \tilde y) d\tilde y\\ &=\int^{y}_0\int^{+\infty}_{\tilde y} \partial_x w^{n}(t, x, z) dz d\tilde y. \end{align*}
Here for the boundary data, we have $$
\partial^3_{y}w^n|_{y=0}=((u^s_y+{w}^{n-1} )\partial_x w^{n})|_{y=0}, $$ \begin{equation*} \begin{split} &\qquad(\partial^5_y w^n)(t, x, 0)\\ &= \left(\partial^3_y u^s(t, 0) + \partial^2_y w^{n-1}(t, x, 0)+\epsilon (\partial^2_x w^{n-1})(t, x, 0)\right)( \partial_x w^n )(t, x, 0)\\ &\qquad+\left(u^s_y(t, 0) + (w^{n-1})(t, x, 0)\right) \left((\partial^2_y\partial_x w^{n})(t, x, 0)+\epsilon (\partial^3_x w^{n})(t, x, 0)\right)\\ &\qquad\qquad\qquad\qquad-(\partial_y\partial_x w^{n})(u^s_y + w^{n-1})(t, x, 0)\\ &\quad+\sum_{1\le j\le 3}C^4_j \bigg((\partial^j_y(u^s + u^{n-1})) \partial^{4-j}_y\partial_x u^{n} - (\partial^{j - 1}_y\partial_x \tilde{u}^n )\partial^{4-j}_y(u^s_y + w^{n-1})\bigg)(t, x, 0)\\ &\qquad\qquad -\epsilon \partial^2_{x}\bigg(\left(u^s_y(t, 0) + (w^{n-1})(t, x, 0)\right) (\partial_x w^n )(t, x, 0)\bigg). \end{split} \end{equation*}
and also for $3\le p\le \frac m2+1$, $\partial^{2p+1}_{y}w^n|_{y=0}$ is a linear combination of the terms of the form: \begin{align}\label{mu-4}
\prod^{q_1}_{j=1}\bigg(\partial_x^{\alpha_j} \partial_y^{\beta_j + 1}\big( u^s + {u}^n \big) \bigg)\bigg|_{y=0}\times \prod^{q_2}_{l=1} \bigg(\partial_x^{\tilde \alpha_l} \partial_y^{\tilde\beta_l + 1}\big( u^s + {u}^{n-i} \big)\bigg)\bigg|_{y=0}\,\, , \end{align} where $2\le q_1+q_2\le p,\,\, 1\le i \le \min\{n,\, p\} $ and \begin{align*} &\alpha_j + \beta_j\le 2p - 1, \,\, 1\le j\le q_1;\,\, \tilde\alpha_l + \tilde\beta_l \le 2p - 1,\,\, 1\le l\le q_2;&\\ &\sum^{q_1}_{j=1} (3\alpha_j + \beta_j) +\sum^{q_2}_{l=1}(3\tilde \alpha_l + \tilde\beta_l )= 2p +1\,;&\\ &~\sum\limits_{j=1}^{q_1}\beta_j+\sum\limits_{l=1}^{q_2}\tilde \beta_l \le 2p -2;\,\,~\sum\limits_{j=1}^{q_1} \alpha_j +\sum\limits_{l=1}^{q_2} \tilde\alpha_l \le p - 1, \,\,\,0<\sum\limits_{j=1}^{q_1} \alpha_j .& \end{align*} Remark that the condition $0<\sum\limits_{j=1}^{q_1} \alpha_j$ implies that, in \eqref{mu-4}, there is at least one factor of the form $\partial_x^{\alpha_j}\partial_y^{\beta_j +1} {u}^n(t, x, 0)$.
For given $w^{n-1}$, we have ${u}^{n-1}=\partial^{-1}_yw^{n-1}$ and ${v}^{n}=-\partial^{-1}_y{u}^{n}_{x}$. We will first prove the existence and boundedness in $L^\infty([0, T_\epsilon]; H^{m+2}_{k+\ell}(\mathbb{R}^2_+))$ of the sequence $\{ w^n, n\in\mathbb{N}\}$ of solutions to the linear equations \eqref{apendix-vorticity-iteration}; the existence of a solution to \eqref{apendix-vorticity-bb} then follows by standard weak convergence methods.
\begin{lemma}\label{lemma-app-vorticity-iteration-1} Assume that $w^{n-i}\in L^\infty([0, T]; H^{m+2}_{k+\ell}(\mathbb{R}^2_+))$, $1\le i \le \min\{n, \frac{m}{2} +1\}$, and that $\tilde w_{0, \epsilon}$ satisfies the compatibility conditions up to order $m+2$ for the system \eqref{apendix-vorticity-bb}. Then the initial-boundary value problem \eqref{apendix-vorticity-iteration} admits a unique solution $w^n$ such that, for any $t\in [0, T]$, \begin{equation}\label{appendix-ck-a}
\frac {d}{dt}\|w^n (t)\|_{H^{m+2}_{k+\ell}(\mathbb{R}^2_+)}^2\le B^{n-1}_T\|{w}^n(t)\|^2_{H^{m+2}_{k+\ell}(\mathbb{R}^2_+)}+
D^{n-1}_T\| w^n\|^{m+2}_{H^{m+2}_{k+\ell}(\mathbb{R}^2_+)}, \end{equation} where \begin{align*}
B^{n-1}_T=C\bigg(1+&{ \sum\limits_{i=1}^{\min\{n, {m}/{2} +1\}}}\|w^{n-i}\|_{L^\infty([0, T]; H^{m+2}_{k+\ell}(\mathbb{R}^2_+))}\\ &+
(1+\frac{1}{\epsilon} ) { \sum\limits_{i=1}^{\min\{n, {m}/{2} +1\}}}\|w^{n-i}\|^2_{L^\infty([0, T]; H^{m+2}_{k+\ell}(\mathbb{R}^2_+))} \bigg), \end{align*} and $$
D^{n-1}_T=C{ \sum\limits_{i=1}^{\min\{n, {m}/{2} +1\}}}\|w^{n-i}\|^{m+2}_{L^\infty([0, T];H^{m+2}_{k+\ell}(\mathbb{R}^2_+))}\,. $$ \end{lemma}
\begin{proof} Once we obtain the {\em a priori} estimate for this linear problem, the existence of a solution is guaranteed by the Hahn-Banach theorem, so we only prove the {\em a priori} estimate for smooth solutions.
For any $\alpha\in \mathbb{N}^2$, $|\alpha|\le m+2$, applying $\partial^\alpha$ to equation \eqref{apendix-vorticity-iteration}, multiplying the resulting equation by $\langle y \rangle^{2k + 2\ell+2\alpha_2} \partial^\alpha w^n $ and integrating by parts over $\mathbb{R}^2_+$, one obtains \begin{align}\label{appendix-ck} \begin{split}
&\frac{1}{2}\frac{d}{dt}\|w^n \|_{H^{m+2}_{k+\ell}(\mathbb{R}^2_+)}^2 + \|\partial_y w^n \|_{H^{m+2}_{k+\ell}(\mathbb{R}^2_+)}^2 + \epsilon \|\partial_x w^n \|_{H^{m+2}_{k+\ell}(\mathbb{R}^2_+)}^2\\ &
=\sum_{|\alpha|\le {m+2}}\int_{\mathbb{R}^2_+} \langle y \rangle^{2k+ 2\ell +2\alpha_2 }\partial^\alpha\big((u^s+{u}^{n-1})\partial_x w^{n} \\ &\qquad\qquad\qquad \qquad -(\partial^{-1}_y {u}^n_{x})(u^s_{yy} +\partial_y {w}^{n-1})\big)\partial^\alpha w^n dx dy\\
&\quad+\sum_{|\alpha|\le {m+2}}\int_{\mathbb{R}^2_+} (\langle y \rangle^{2k+ 2\ell +2\alpha_2 })'\partial^\alpha w^{n}\,\partial^\alpha\partial_y w^{n}dxdy \\
&\qquad \qquad +\sum_{|\alpha|\le {m+2}}\int_{\mathbb{R}}
(\partial^\alpha\partial_y w^{n}\,\partial^\alpha w^{n})\big|_{y=0}dx. \end{split} \end{align} By an analysis similar to that of Section \ref{section5}, we have \begin{align*} \begin{split}
&\left|\int_{\mathbb{R}^2_+}
\langle y \rangle^{2k+ 2\ell +2\alpha_2 }(u^s+{u}^{n-1})\partial_x\partial^\alpha w^{n}\partial^\alpha w^n dx dy\right|\\
&=\left|-\frac 12 \int_{\mathbb{R}^2_+}
\langle y \rangle^{2k+ 2\ell +2\alpha_2 }\partial_x(u^s+{u}^{n-1})\partial^\alpha w^{n}\partial^\alpha w^n dx dy\right|\\
&\le C \|{u}^{n-1}\|_{L^\infty(\mathbb{R}^2_+)} \|w^n\|^2_{H^{m+2}_{k+\ell}(\mathbb{R}^2_+)}, \end{split} \end{align*} and \begin{align*} \begin{split}
&\left|\int_{\mathbb{R}^2_+}
\langle y \rangle^{2k+ 2\ell +2\alpha_2 }[\partial^\alpha, (u^s+{u}^{n-1})]\partial_x w^{n}\partial^\alpha w^n dx dy\right|\\
&\le C(1+ \|{w}^{n-1}\|_{H^{m+2}_{k+\ell}(\mathbb{R}^2_+)} ) \|w^n\|^2_{H^{m+2}_{k+\ell}(\mathbb{R}^2_+)}. \end{split} \end{align*} For the second term on the right hand of \eqref{appendix-ck}, by using the Leibniz formula, we need to pay more attention to the following two terms \begin{align*} \begin{split}
&\left|\int_{\mathbb{R}^2_+}
\langle y \rangle^{2k+ 2\ell +2\alpha_2}\big(\partial^\alpha\partial^{-1}_y {u}^n_{x}\big)(u^s_{yy} +\partial_y {w}^{n-1})\partial^\alpha w^n dx dy\right|\\
&\le C(1+\|{w}^{n-1}\|_{H^{m+2}_{k+\ell}(\mathbb{R}^2_+)})\|\partial_x w^n\|_{H^{m+2}_{k+\ell}(\mathbb{R}^2_+)}
\|w^n\|_{H^{m+2}_{k+\ell}(\mathbb{R}^2_+)}\\
&\le\frac{\epsilon}{2}\|\partial_x w^n\|^2_{H^{m+2}_{k+\ell}(\mathbb{R}^2_+)}+
\frac{C}{\epsilon}(1+\|{w}^{n-1}\|_{H^{m+2}_{k+\ell}(\mathbb{R}^2_+)})^2
\|w^n\|^2_{H^{m+2}_{k+\ell}(\mathbb{R}^2_+)}, \end{split} \end{align*} and \begin{align*} \begin{split} &\int_{\mathbb{R}^2_+} \langle y \rangle^{2k+ 2\ell +2\alpha_2 } {v}^n\big(\partial^\alpha\partial_y {w}^{n-1}\big)\partial^\alpha w^n dx dy\\ &=-\int_{\mathbb{R}^2_+}\partial_y \big( \langle y \rangle^{2k+ 2\ell +2\alpha_2 }(\partial^{-1}_y {u}^n_{x})\big)\big(\partial^\alpha{w}^{n-1}\big)\partial^\alpha w^n dx dy\\ &\quad-\int_{\mathbb{R}^2_+}\big( \langle y \rangle^{2k+ 2\ell +2\alpha_2 }(\partial^{-1}_y {u}^n_{x})\big)\big(\partial^\alpha {w}^{n-1}\big)\partial_y \partial^\alpha w^n dx dy, \end{split} \end{align*}
here we have used $v^n|_{y=0}=0$, thus \begin{align*} \begin{split}
&\left|\int_{\mathbb{R}^2_+}
\langle y \rangle^{2k+ 2\ell +2\alpha_2} {v}^n\big(\partial^\alpha\partial_y {w}^{n-1}\big)\partial^\alpha w^n dx dy\right|\\
&\le C\|{w}^{n-1}\|_{H^{m+2}_{k+\ell}(\mathbb{R}^2_+)}
\big(\|w^n\|^2_{H^{m+2}_{k+\ell}(\mathbb{R}^2_+)}+\|\partial_y w^n\|_{H^{m+2}_{k+\ell}(\mathbb{R}^2_+)}
\|w^n\|_{H^{m+2}_{k+\ell}(\mathbb{R}^2_+)}\big). \end{split} \end{align*} For the boundary term, similarly to the proof of Proposition \ref{prop3.1}, we obtain \begin{align*}
&\sum_{|\alpha|\le {m+2}}\left|\int_{\mathbb{R}}
(\partial^\alpha\partial_y w^{n}\,\partial^\alpha w^{n})\big|_{y=0}dx\right|\\
&\le \frac 1{16} \|\partial_y w^n\|^2_{H^{m+2}_{k+\ell}(\mathbb{R}^2_+)}+
C\|w^{n-1}\|^{m+2}_{H^{m+2}_{k+\ell}(\mathbb{R}^2_+)}\| w^n\|^{m+2}_{H^{m+2}_{k+\ell}(\mathbb{R}^2_+)}. \end{align*} We finally obtain \begin{align*} \begin{split}
\frac {d}{dt}\|w^n (t)\|_{H^{m+2}_{k+\ell}(\mathbb{R}^2_+)}^2 &+ \|\partial_y w^n(t) \|_{H^{m+2}_{k+\ell}(\mathbb{R}^2_+)}^2 + \epsilon \|\partial_x w^n(t) \|_{H^{m+2}_{k+\ell}(\mathbb{R}^2_+)}^2 \\
&\le B^{n-1}_T\|{w}^n(t)\|^2_{H^{m+2}_{k+\ell}(\mathbb{R}^2_+)}+
D^{n-1}_T\| w^n\|^{m+2}_{H^{m+2}_{k+\ell}(\mathbb{R}^2_+)}\, . \end{split} \end{align*} \end{proof}
\begin{lemma}\label{lemmab.2} Suppose that $m, k, \ell$ and $u^s(t, y)$ satisfy the assumptions of Proposition \ref{prop3.0} and $\bar\zeta>0$. Then for any $0<\epsilon\le 1$, there exists $T_\epsilon>0$ such that for any $\tilde{w}_{0, \epsilon}\in H^{m+2}_{k+\ell}(\mathbb{R}^2_+)$ with $$
\|\tilde{w}_{0, \epsilon}\|_{H^{m+2}_{k+\ell}(\mathbb{R}^2_+)}\le \bar \zeta, $$ the iteration equations \eqref{apendix-vorticity-iteration} admit a sequence of solutions $\{w^n, n\in\mathbb{N}\}$ such that, for any $t\in [0, T_\epsilon]$, $$
\|w^n(t)\|_{H^{m+2}_{k+\ell}(\mathbb{R}^2_+)}\le \frac 43\|\tilde{w}_{0, \epsilon}\|_{H^{m+2}_{k+\ell}(\mathbb{R}^2_+)} ,\quad \forall n\in\mathbb{N}. $$ \end{lemma} \noindent {\bf Remark.} Here $\bar\zeta$ is arbitrary.
\begin{proof} Integrating \eqref{appendix-ck-a} over $[0, t]$, for $0<t\le T$ with $T>0$ small, we obtain $$
\|w^n(t)\|^{m}_{H^{m+2}_{k+\ell}(\mathbb{R}^2_+)}\le \frac{\|\tilde{w}_{0, \epsilon}\|^{m}_{H^{m+2}_{k+\ell}(\mathbb{R}^2_+)}}{e^{-\frac m2 B^{n-1}_T t}-\frac m2 D^{n-1}_T t\|\tilde{w}_{0, \epsilon}\|^{m}_{H^{m+2}_{k+\ell}(\mathbb{R}^2_+)}}. $$ We prove the Lemma by induction. For $n=1$, we have \begin{align*}
B^{0}_T&=C\left(1+\|\tilde w_{0, \epsilon}\|_{ H^{m+2}_{k+\ell}(\mathbb{R}^2_+)}+
(1+\frac{1}{\epsilon} ) \|\tilde w_{0, \epsilon}\|^2_{ H^{m+2}_{k+\ell}(\mathbb{R}^2_+)} \right)\\ &\le C\left(1+\bar \zeta+ (1+\frac{1}{\epsilon} ) \bar \zeta^2 \right) , \end{align*} and $$
D^{0}_T=C\|\tilde w_{0, \epsilon}\|^{m+2}_{H^{m+2}_{k+\ell}(\mathbb{R}^2_+)}\le C\bar \zeta^{m+2}. $$ Choosing $T_\epsilon>0$ small such that $$ \left(e^{-\frac m2 C\left(1+2\bar \zeta+ 4(1+\frac{1}{\epsilon} ) \bar \zeta^2 \right) T_\epsilon}-\frac m2 C(2\bar \zeta)^{m+2} T_\epsilon (2\bar \zeta)^{m}\right)^{-1}=\left(\frac 43\right)^m, $$ we get $$
\|w^1(t) \|_{H^{m+2}_{k+\ell}(\mathbb{R}^2_+)} \le \frac 43 \|\tilde{w}_{0, \epsilon}\|_{H^{m+2}_{k+\ell}(\mathbb{R}^2_+)}. $$ Now the induction hypothesis is: for $0\le t\le T_\epsilon$, $$
\|w^{n-1}(t) \|_{H^{m+2}_{k+\ell}(\mathbb{R}^2_+)} \le \frac 43 \|\tilde{w}_{0, \epsilon}\|_{H^{m+2}_{k+\ell}(\mathbb{R}^2_+)}, $$ thanks to the choice of $T_\epsilon$, we also have $$
\left(e^{-\frac m2 B^{n-1}_{T_\epsilon} T_\epsilon}-\frac m2 D^{n-1}_{T_\epsilon} T_\epsilon\|\tilde{w}_{0, \epsilon}\|^{m}_{H^{m+2}_{k+\ell}(\mathbb{R}^2_+)}\right)^{-1}\le \left(\frac 43\right)^m $$ for any $t\in [0, T_\epsilon]$, which finishes the proof of Lemma \ref{lemmab.2}. \end{proof}
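For completeness, the ODE comparison step that produces the first display in the proof above (a standard Bernoulli-type substitution) can be sketched as follows; we abbreviate $B=B^{n-1}_T$, $D=D^{n-1}_T$ and suppress the norm subscripts:

```latex
% From \eqref{appendix-ck-a}, with y(t) = \|w^n(t)\|^2:
%   y' \le B y + D y^{1 + m/2}.
% The substitution z = y^{-m/2} gives z' = -(m/2) y^{-m/2-1} y', hence
%   z' \ge -\tfrac{m}{2}\big(B z + D\big),
% so (e^{\frac{m}{2}Bt} z)' \ge -\tfrac{m}{2} D e^{\frac{m}{2}Bt}.
% Integrating over [0,t] and using \int_0^t e^{\frac m2 Bs}\,ds \le t e^{\frac m2 Bt},
%   z(t) \ge e^{-\frac{m}{2}Bt} z(0) - \tfrac{m}{2} D t.
% Since \|w^n\|^m = z^{-1} and z(0) = \|\tilde{w}_{0,\epsilon}\|^{-m}, this reads
\begin{equation*}
  \|w^n(t)\|^{m}
  \le \frac{\|\tilde{w}_{0,\epsilon}\|^{m}}
           {e^{-\frac{m}{2} B t} - \frac{m}{2} D t\,
            \|\tilde{w}_{0,\epsilon}\|^{m}},
\end{equation*}
% valid as long as the denominator stays positive, which is what the
% choice of T_\epsilon guarantees.
```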
\section*{Acknowledgments}
The first author was partially supported by ``the Fundamental Research Funds for the Central Universities'' and the NSF of China (No. 11171261). The second author was supported by a sixteen-month scholarship from the State Scholarship Fund of China, and he would like to thank the ``Laboratoire de math\'ematiques Rapha\"el Salem de l'Universit\'e de Rouen'' for its hospitality.
\end{document}
\begin{document}
\title{Reconfiguration of 3D Pivoting Modular Robots}
\begin{abstract} \label{abstract} We study a new model of 3-dimensional modular self-reconfigurable robots, Rhombic Dodecahedral (RD). By extending results on the 2D analog of this model we characterize the free space requirements for a pivoting move and investigate the \emph{reconfiguration problem}, that is, given two configurations $s$ and $t$, is there a sequence of moves that transforms $s$ into $t$? We show reconfiguration is PSPACE-hard for RD modules in a restricted pivoting model. In a more general model, we show that RD configurations are not universally reconfigurable despite the fact that their 2D analog is [Akitaya et al., SoCG 2021]. Additionally, we present a new class of RD configurations that we call \textit{super-rigid}. Such a configuration remains rigid even as a subset of any larger configuration, a phenomenon that does not occur in the 2D setting. \end{abstract}
\section{Introduction} \label{Intro}
\emph{Programmable matter} refers to matter with the ability to change its physical properties, such as shape, on demand. A popular approach to implement such a system is through \emph{modular self-reconfigurable robotic systems} (MSR) where small robotic units called \textit{modules} can attach and detach from each other, communicate, and move relative to each other, changing the shape of the system.
We require configurations to be \emph{connected}, meaning the adjacency graph of modules is connected (known as the \emph{single backbone condition}~\cite{dumitrescu2004motion}). A \emph{move} is an operation that transforms a configuration into another by changing the position of a single module.
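The single backbone condition above is just connectivity of the adjacency graph, which can be checked by breadth-first search. The sketch below is a minimal illustration (ours, not from any cited implementation); `cube_neighbors` uses 6-neighbor cubic adjacency as a stand-in for whichever lattice the modules live on.

```python
from collections import deque

def is_connected(modules, neighbors):
    """Check the single backbone condition: the adjacency graph of the
    module set is connected.  `modules` is a set of lattice positions;
    `neighbors(p)` yields the lattice positions adjacent to p."""
    if not modules:
        return True
    start = next(iter(modules))
    seen = {start}
    queue = deque([start])
    while queue:
        p = queue.popleft()
        for q in neighbors(p):
            if q in modules and q not in seen:
                seen.add(q)
                queue.append(q)
    return len(seen) == len(modules)

# Cubic (6-neighbor) adjacency, used here purely as an example lattice.
def cube_neighbors(p):
    x, y, z = p
    for dx, dy, dz in [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                       (0, -1, 0), (0, 0, 1), (0, 0, -1)]:
        yield (x + dx, y + dy, z + dz)
```

In particular, under the single backbone condition a module whose removal disconnects the remaining modules (a cut vertex of the adjacency graph) can never move.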
We focus on the \emph{pivoting model} where the moving module rotates around a static module. This model is similar to many hardware implementations~\cite{romanishin2013m,piranda2018designing} but it is challenging from an algorithmic perspective as a single pivoting move can collide with several modules.
If certain local configurations are forbidden, there are polynomial-time algorithms for square, hexagonal, and cube modules that compute a sequence of pivoting moves to transform a configuration into another~\cite{DBLP:conf/icra/SungBRR15, DBLP:journals/ral/FeshbachS21}.
Akitaya et al.~\cite{Reconfiguration,Musketeers} classified three sets of pivoting moves for square modules and two for hexagonal modules and described their required free space, Fig.~\ref{fig:plane-moves}. The \emph{restricted hexagonal model} only uses the restricted move while the \emph{monkey hexagonal model} uses both restricted and monkey moves.
The general problem was shown to be PSPACE-hard for the restricted hexagonal model~\cite{Reconfiguration}. However, \cite{Reconfiguration} also gave a universal reconfiguration algorithm for the monkey hexagonal model that produces move sequences of length $O(n^3)$, for configurations of $n$ robots.
Not much is known about general reconfiguration in 3D models.
A candidate to generalize hexagonal modules is the rhombic dodecahedron (RD). Implementations of RD-like modules exist~\cite{piranda2018designing}, but free space constraints for pivoting moves of these modules have never been described as in~\cite{Reconfiguration,Musketeers}. We present such free space constraints to facilitate the description of 3D configurations and moves of such models. We then generalize the PSPACE-hardness proofs from~\cite{Reconfiguration} to RD pivoting models.
Finally, we show there are configurations of RD that are rigid even if they are a subset of a larger configuration, implying the configuration space of RD is disconnected.
\begin{figure}
\caption{2-dimensional (a,c) restricted moves, (d) leapfrog, and (b, e) monkey moves.}
\label{fig:plane-moves}
\end{figure}
\section{Free-Space} \label{FreeSpace}
\begin{figure}
\caption{Equivalency between rhombic dodecahedra and hexagons. Red and blue modules are drawn small for legibility but are, in reality, as large as the central grey modules. }
\label{fig:RD2HEX}
\end{figure}
An RD lattice can be represented as stacked layers of hexagons (Fig.~\ref{fig:RD2HEX}), and RD are space-filling polyhedra, making them a logical 3D analog to hexagonal MSR. We define restricted and monkey moves as in~\cite{Reconfiguration}. During a pivoting move, an RD module might collide with modules that are in layers adjacent to its plane of movement, so three layers of modules are needed to describe their free-space requirements, presented in Fig.~\ref{fig:RDMoves}.
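Concretely, the centers of a rhombic-dodecahedral tiling form a face-centered cubic (FCC) lattice, so each module has 12 face neighbors; viewed along a suitable diagonal, these split into stacked hexagonal layers: 6 in the module's own layer, 3 above, and 3 below. The sketch below enumerates the neighbor offsets in one standard coordinatization (our choice for illustration, not the paper's).

```python
from itertools import product

# Centers of a rhombic-dodecahedral tiling form an FCC lattice.  In one
# standard coordinatization, the 12 face-neighbor offsets are the
# vectors with exactly two nonzero coordinates, each equal to +-1:
fcc_offsets = [(dx, dy, dz)
               for dx, dy, dz in product((-1, 0, 1), repeat=3)
               if abs(dx) + abs(dy) + abs(dz) == 2]

# Viewing the lattice along the (1, 1, 1) direction splits the neighbors
# into hexagonal layers according to the coordinate sum (-2, 0, or +2).
def layer(offset):
    return sum(offset)

same  = [o for o in fcc_offsets if layer(o) == 0]   # 6 in-layer neighbors
above = [o for o in fcc_offsets if layer(o) == 2]   # 3 in the layer above
below = [o for o in fcc_offsets if layer(o) == -2]  # 3 in the layer below
```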
An RD lattice is 3-cyclic, so the bottom and top layer free-space requirements may flip based on a module's lattice position or direction of movement; hence Fig.~\ref{fig:RDMoves} is presented w.l.o.g.
\begin{figure}
\caption{3-dimensional RD Moves. The moving module $m$ is pink, and the modules and the edges that $m$ pivots around are highlighted in yellow. The required free lattice positions (resp. top, resp. bottom) are marked with a grey (blue, red) asterisk.}
\label{fig:RDMoves}
\end{figure}
\section{Reconfiguration is Hard} \label{Hardness}
\begin{theorem} General Reconfiguration of RD with Restricted moves is PSPACE-hard. \end{theorem} We use the configurations in Fig.~\ref{fig:roofs} to extend the gadgets used by Akitaya et al. \cite{Reconfiguration} from 2D to 3D. A roof configuration is a gadget contained in a single layer formed by a cycle of modules, the \emph{boundary}, and all positions that are enclosed by the boundary. A cap configuration takes a path of modules and makes one end rigid. We use the cap to make our roof rigid by attaching a path of modules to each module in the boundary of a roof and terminating it with a cap.
By placing instances of the roof configurations two layers above and below the gadgets, no module can move up or down a layer, essentially restricting the gadgets to two dimensions. A similar approach can extend the square results from \cite{Reconfiguration} to prove reconfiguration of cubes under restricted, leapfrog, and monkey moves is PSPACE-hard as well. Details will be in an upcoming full version.
\begin{figure}
\caption{Roof and Cap configurations for rhombic dodecahedron.}
\label{fig:roofs}
\end{figure}
\section{Rhombic Dodecahedra Are Super Rigid} \label{SuperRigid} In this section, we define a new class of configurations we call \emph{super-rigid}. \begin{definition} A configuration $G$ is super rigid if given any configuration $C$ where $G$ is a subconfiguration, the modules in $G$ cannot be moved. \end{definition}
\begin{figure}
\caption{A super rigid configuration; positions U, V, W, X, Y, and Z are empty.}
\label{fig:SuperRigid}
\end{figure}
\begin{figure}
\caption{In (b) the addition of the blue module makes configuration (a) mobile}
\label{fig:FreeRigid}
\end{figure}
A \emph{subconfiguration} of a configuration $C$ is any connected configuration that can be obtained by removing any number of modules from $C$. To our knowledge, super-rigid configurations are novel, and likely only possible in MSR models where moving modules can have collisions outside their plane of movement. Previously known rigid configurations (including all 2D rigid configurations) are like configuration (a) in Fig. \ref{fig:FreeRigid}, where adding just one new module (in blue) makes the configuration mobile (b).
\begin{theorem} The configuration in Fig.~\ref{fig:SuperRigid} is super rigid under restricted and monkey moves.
\end{theorem}
The proof (Appendix~\ref{Super-Rigid-Appen}) uses the free-space requirements in Fig. \ref{fig:RDMoves} to verify no module in $G$ is mobile and none can be made mobile by adding any modules to $G$. A surprising consequence of this is a solid ``cube'' containing Fig. \ref{fig:SuperRigid} cannot be reconfigured into a single row of modules in a line.
\begin{corollary} The Rhombic Dodecahedral MSR model is not universally reconfigurable under Restricted or Monkey moves. \end{corollary}
\appendix \section{Expanded PSPACE Construction Discussion} \label{PSPACE-Appen}
This construction has two pieces, a roof and a cap; the cube and RD variants of both are shown in Fig.~\ref{fig:roofs}. A roof configuration is a gadget contained in a single layer formed by a cycle of modules, the \emph{boundary}, and all positions that are enclosed by the boundary. A cap configuration can be placed at the end of a path of modules to make it rigid. The general idea of a cap configuration is the same for cubes and RD. A line of modules comes in from one end (shown by the dashed arrow) and then the configuration wraps around itself such that the final module cannot move and no other module can move without disconnecting the configuration. \par In the cube cap, a path of modules makes a loop and then positions its terminal module (in pink) so the top face is surrounded on all sides by other modules. Thus the module cannot move without hitting a module. The RD cap is similar, though instead of completely surrounding the terminal module, our loop puts a module one layer above and one layer below it, on directly opposite sides of the terminal module. These lock it in place, as any possible move will hit one of these two modules.\par As a module surrounded on all sides is clearly immobile, the only mobile modules in a roof could be those on the boundary (shown in dark grey). At each boundary module we attach a path of modules running away from the roof. These need to alternate between adjacent edge modules: one path goes away horizontally, the next goes away vertically from the roof, and so on. This way no path contains a module adjacent to another path, avoiding cycles in the adjacency graph which would likely create mobile modules. After some arbitrary constant length, we end each path with a cap configuration. Now each boundary module has a collection of modules attached to it and cannot move without disconnecting the attached modules. Every module in a path is either immobile or cannot move without disconnecting its end.
Finally the internal modules in the roof configuration are surrounded and therefore immobile, so the roof configuration is immobile.
\section{Super Rigid Case Analysis} \label{Super-Rigid-Appen} We show there are no modules that could be added to the configuration in Fig.~\ref{fig:SuperRigid}, call it $G$, to make it mobile. As the grey layer is symmetric, there are only 3 modules we need to consider: A, B, and C. Additionally, the red and blue layers are mirror images of each other, so we can reduce our case analysis to the blue layer, and further we only need to consider modules 1 and 2.
\begin{itemize}
\item[] \underline{\textbf{Modules 1 and C:}} Both of these modules have two neighbors on directly opposite sides. A module that is sandwiched like this can never move unless one of its neighbors moves first. Therefore there is no position where we could add a module to make these mobile, under either restricted or monkey moves.
\item[] \underline{\textbf{Module B:}} There are 4 positions in the grey layer adjacent to B. If we add a module at Z or X, B gains no new options for movement as A and C block any in-layer rotation, and red and blue modules block any possible out-of-layer movement. If a module were added at either Y or W, then B would have two adjacent modules on directly opposite sides, either $\{A, Y\}$ or $\{C, W\}$, making B immobile. Therefore the only remaining options are an out-of-layer move or a monkey move. B cannot make a monkey move in any direction as the existence of 1 and 2 forbids it. B cannot rotate out of the layer with a restricted move as 1 and 2 prevent it from moving up a layer or down a layer. If B tried to move down to the red layer, its neighbors in the grey layer would prevent such a move. The existence of A and C also prohibits any possible face move by B.
\item[] \underline{\textbf{Module A:}} If we add a module at Z or U, then A would have two adjacent modules on directly opposite sides, making A immobile. If we added a module at position W, A would be unable to make any in-layer rotation around it as its grey neighbors would interfere. Similarly to B, 2 and 3 prevent A from moving up or down a layer with a base in the grey or blue layer. If A moved down a layer using a base in the red layer, its neighbors in the grey layer would prevent such a move. Finally, A could never make a monkey move as it has two neighbors in the layers above and below it, which violates both monkey move free-space requirements.
\item[] \underline{\textbf{Module 2:}} The modules below 2 in the grey layer prevent any restricted move 2 would make. 2 could not make a move across the face of the module below it as 1 and 3 are non-adjacent neighbors which instantly prevents this move. 1 and 3 also prevent a move up a layer with a base in the blue layer. The grey layer modules also prevent an upward or downward move using an out-of-layer base. Finally, 1 and 3 again prevent any monkey move as they are two non-adjacent neighbors of 2. \end{itemize}
As there is no position where we could add a module to make $G$ mobile, there is no configuration with $G$ as a subconfiguration that is not at least locked, if not rigid. Therefore $G$ is a super rigid configuration.
\end{document}
\begin{document}
\title[On van der Corput property of shifted primes]{On van der Corput property of shifted primes} \author{Sini\v{s}a Slijep\v{c}evi\'{c}} \address{Department of Mathematics, Bijeni\v{c}ka 30, Zagreb, Croatia} \email{slijepce@math.hr} \urladdr{} \date{October 27, 2011} \subjclass[2000]{Primary 11P99; Secondary 37A45} \keywords{S\'{a}rk\"{o}zy theorem, recurrence, primes, difference sets, positive definiteness, van der Corput property, Fourier analysis}
\begin{abstract} We prove that the upper bound for the van der Corput property of the set of shifted primes is $O((\log n)^{-1+o(1)})$, giving an answer to a problem considered by Ruzsa and Montgomery for the set of shifted primes $p-1$. We construct normed non-negative valued cosine polynomials with the spectrum in the set $p-1$, $p\leq n$, and a small free coefficient $a_{0}=O((\log n)^{-1+o(1)})$. This implies the same bound for the intersective property of the set $p-1$, and also bounds for several properties related to uniform distribution of related sets. \end{abstract}
\maketitle
\section{Introduction}
We say that a set of integers $\mathcal{S}$ is a \textit{van der Corput} (or \textit{correlative}) set if, whenever a real sequence $(x_{n})_{n\in N}$ is such that all the sequences $(x_{n+d}-x_{n})_{n\in N}$, $d\in \mathcal{S}$, are uniformly distributed $\func{mod}1$, the sequence $(x_{n})_{n\in N}$ is itself uniformly distributed $\func{mod}1$. The property was introduced by Kamae and Mend\`{e}s France (\cite{Kamae:77}), and is important as it is closely related to the intersective property of integers, discussed below. Classical examples of van der Corput sets are sets of squares, shifted primes $p+1$, $p-1$, and also sets of values $P(n)$, where $P$ is any polynomial with integer coefficients such that $P(n)\equiv 0\ (\func{mod}k)$ has a solution for all $k$. All van der Corput sets are intersective sets, but the converse does not hold, as was shown by Bourgain (\cite{Bourgain:87}).
We first recall the key characterization of the van der Corput property. If $ \mathcal{S}$ is a set of positive integers, then let $\mathcal{S}_{n}= \mathcal{S}\cap \{1,...,n\}$. We denote by $\mathcal{T}(\mathcal{S})$ the set of all cosine polynomials \begin{equation} T(x)=a_{0}+\tsum_{d\in \mathcal{S}_{n}}a_{d}\cos (2\pi dx)\text{,} \label{r:d0} \end{equation} $T(0)=1$, $T(x)\geq 0$ for all $x$, where $n$ is any integer and $a_{0}$, $ a_{d}$ are real numbers (i.e. $T$ is a non-negative normed cosine polynomial with the spectrum in $\mathcal{S}\cup \{0\}$). Kamae and Mend\`{e}s France proved that a set is a van der Corput set if and only if (\cite{Kamae:77}, \cite{Montgomery:94}) \begin{equation} \inf_{T\in \mathcal{T}(\mathcal{S})}a_{0}=0. \label{r:d2} \end{equation}
We can define a function which measures how quickly a set is becoming a van der Corput set with \begin{equation} \gamma (n)=\inf_{T\in \mathcal{T}(\mathcal{S}_{n})}a_{0}, \label{r:dgamma} \end{equation} and then a set is van der Corput if and only if $\gamma (n)\rightarrow 0$ as $n\rightarrow \infty $.
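For intuition, the full set $\mathcal{S}=\{1,\dots,n\}$ already satisfies \eqref{r:d2} with $a_{0}=1/(n+1)$, witnessed by the normalized Fej\'er kernel $T(x)=\frac{1}{n+1}\left\vert \sum_{k=0}^{n}e(kx)\right\vert ^{2}$. The following numerical sketch (purely illustrative, and ours rather than anything from the literature on nontrivial sets) checks the two defining constraints $T(0)=1$ and $T\geq 0$:

```python
import math

def fejer_T(n, x):
    """Normalized Fejer kernel T(x) = a0 + sum_{d=1}^{n} a_d cos(2 pi d x):
    a nonnegative cosine polynomial with spectrum in {0, 1, ..., n},
    T(0) = 1, and free coefficient a0 = 1/(n+1), so gamma(n) <= 1/(n+1)
    for the full set S = {1, ..., n}."""
    a0 = 1.0 / (n + 1)
    s = a0
    for d in range(1, n + 1):
        a_d = 2.0 * (1 - d / (n + 1)) / (n + 1)
        s += a_d * math.cos(2 * math.pi * d * x)
    return s
```

The whole difficulty of Theorem \ref{t:main01} is, of course, achieving a small $a_{0}$ with the spectrum restricted to the sparse set $p-1$.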
Ruzsa and Montgomery posed the problem of finding any upper bound for the function $\gamma $ for any non-trivial van der Corput set (\cite{Montgomery:94}, unsolved problem 3; \cite{Ruzsa:84a}). Ruzsa in \cite{Ruzsa:81} announced the result that for the set of squares, $\gamma (n)=O((\log n)^{-1/2})$, but the proof was never published. The author in \cite{Slijepcevic:09} proved that for the set of squares, $\gamma (n)=O((\log n)^{-1/3})$. In this paper we prove the following result:
\begin{theorem} \label{t:main01}If $\mathcal{S}$ is the set of shifted primes $p-1$, then $ \gamma (n)=O((\log n)^{-1+o(1)})$. \end{theorem}
The gap between the upper bound and the best available lower bound remains very large, as in the case of the sets of recurrence discussed below. The lower bound below relies on a construction of Ruzsa \cite{Ruzsa84b}:
\begin{theorem} \label{t:main02}If $\mathcal{S}$ is the set of shifted primes $p-1$, then $ \gamma (n)\,\gg n^{\left( -1+\frac{\log 2-\varepsilon }{\log \log n}\right) } $, where $\varepsilon >0$ is an arbitrary real number. \end{theorem}
\textbf{Structure of the proof and its limitations.} We define a cosine polynomial \begin{equation} F_{N,d}(\theta )=\frac{1}{k}\func{Re}\sum_{\substack{ p\leq dN+1 \\ p\equiv 1(\func{mod}d)}}\log p\cdot e((p-1)\theta ), \label{r:defF} \end{equation} where $e(\theta )=\exp (2\pi i\theta )$ and $k$ is chosen so that $ F_{N,d}(0)=1$. We show in Sections 2 and 3 by using exponential sum estimates along major and minor arcs that \begin{equation*} F_{N,d}(\theta )\geq \tau (d,q)+E(d,q,\kappa ,N). \end{equation*} Here $\kappa =\theta -a/q$, the function $E$ is the error term and $\tau
(d,q)$ is the principal part, which equals (for square-free $d$) $1$ if $q|d$, $0$ if $q$ is not square-free, and $-1/\varphi (q/(q,d))$ otherwise ($\varphi$ being Euler's totient function and $(q,d)$ the greatest common divisor). In Section 4 we demonstrate that for a given $\delta >0$, one can find a collection of positive integers $\mathcal{D}$ not exceeding $\exp ((\log 1/\delta )^{2+o(1)})$ and weights $\sum_{d\in \mathcal{D}}w(d)=1$ such that for any integer $q>0$, \begin{equation*} \sum_{d\in \mathcal{D}}w(d)\tau (d,q)\geq -\delta /2\text{.} \end{equation*}
In addition, one can find constants $R,N$ not exceeding $O(\exp ((\log 1/\delta )^{4+o(1)}))$ for any given $\theta $ such that if $a/q$ is the Dirichlet's approximation of $\theta =a/q+\kappa $, $\kappa \leq 1/(qR)$, then the error term $|E(d,q,\kappa ,N)|\leq \delta /2$. This seemingly implies effectively the same upper bound for $\gamma (n)$ as obtained in \cite{Ruzsa:08} for a stronger intersective property of sets of integers (see below).
Unfortunately, in our calculations the constants $R,N$ cannot be chosen so that for all $\theta \in \boldsymbol{T}=\boldsymbol{R}/\boldsymbol{Z}$ the error term is small. Namely, for $d\theta $ close to an integer, the error term is $O(dN/R)$, and for $\theta $ on minor arcs, the error term is $ O(d^{2}\sqrt{R}/\sqrt{N})$. We resolve this by choosing a geometric sequence of constants $N_{1},...,N_{4/\delta }$, which results in the bound in Theorem \ref{t:main01}. We finalize the proof in Section 5 by constructing the required cosine polynomial as a convex combination of $F_{N,d}$ over $ d\in \mathcal{D}$ and $N_{j}$.
\textbf{Applications.} We say a set $\mathcal{S}$ is an \textit{intersective set} (or a \textit{set of recurrence}, or a \textit{Poincar\'{e} set}), if for any set $A$ of integers with positive upper Banach density \begin{equation*}
\rho (A)=\lim \sup_{n\rightarrow \infty }|A\cap \lbrack 1,n]|/n>0, \end{equation*} its difference set $A-A$ contains an element of $\mathcal{S}$. Given any set of integers $\mathcal{S}$, one can define the function $\alpha :\boldsymbol{N }\rightarrow \lbrack 0,1]$ as $\alpha (n)=\sup \rho (A)$, where $A$ goes over all sets of integers whose difference set does not contain an element of $\mathcal{S}\cap \lbrack 1,n]$ (equivalent definitions of $\alpha $ can be found in \cite{Ruzsa:84a}). A set is an intersective set if and only if $ \lim_{n\rightarrow \infty }\alpha (n)=0$. Ruzsa in \cite{Ruzsa:84a} also proved that if $\mathcal{S}$ is a van der Corput set, then it is also an intersective set, and \begin{equation*} \alpha (n)\leq \gamma (n). \end{equation*}
The bound $\alpha (n)=O((\log n)^{-1+o(1)})=O(\exp ((-1+o(1))\log \log n))$ for the set of shifted primes follows then as a corollary of Theorem \ref {t:main01}. This is worse than the bound $\alpha (n)=O(\exp (-c\sqrt[4]{\log n}))$ obtained by Ruzsa and Sanders in \cite{Ruzsa:08}, but better than earlier bounds in \cite{Lucier:08} and \cite{Sarkozy:78b}.
The function $\gamma (n)$ has different characterizations and further applications discussed in detail in \cite{Montgomery:94}. We discuss in Section 9 the Heilbronn property of the set of shifted primes, which specifies how well the expression $x(p-1)$ can approximate integers uniformly in $x\in \boldsymbol{R}$, by choosing for a given $x$ some prime $ p\leq n$ so that $x(p-1)$ is as close to an integer as possible.
\section{The major arcs}
If $\Lambda $ is the von Mangoldt function, we define as in \cite{Ruzsa:08} \begin{equation*} \Lambda _{N,d}(x):=\left\{ \begin{array}{cc} \Lambda (dx+1) & \text{if }1\leq x\leq N \\ 0 & \text{otherwise,} \end{array} \right. \end{equation*} and let $\Lambda _{N}(x)=\Lambda _{N,1}(x)$. The Fourier transform $\widehat{.}:l^{1}(\boldsymbol{Z})\rightarrow L^{\infty }(\boldsymbol{R})$ is defined as the map which takes $f\in l^{1}(\boldsymbol{Z})$ to $\widehat{f}(\theta )=\sum_{x\in \boldsymbol{Z}}f(x)\overline{e(x\theta )}$, thus $\widehat{\Lambda _{N,d}}(\theta )$ is the exponential sum \begin{equation*} \widehat{\Lambda _{N,d}}(\theta )=\sum_{x\leq N}\Lambda (dx+1)\overline{e(x\theta )}\text{.} \end{equation*}
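As a purely illustrative check of these definitions, $\widehat{\Lambda _{N,d}}(\theta )$ can be evaluated naively for small parameters; the trial-division implementation below is our own, chosen for clarity rather than efficiency.

```python
import math

def von_mangoldt(n):
    """Lambda(n) = log p if n = p^k for a prime p and k >= 1, else 0."""
    if n < 2:
        return 0.0
    for p in range(2, math.isqrt(n) + 1):
        if n % p == 0:
            m = n
            while m % p == 0:
                m //= p
            return math.log(p) if m == 1 else 0.0
    return math.log(n)  # n itself is prime

def lambda_hat_Nd(N, d, theta=0.0):
    """The exponential sum \\hat{Lambda}_{N,d}(theta)
    = sum_{x <= N} Lambda(d x + 1) * conj(e(x theta))."""
    return sum(von_mangoldt(d * x + 1) *
               complex(math.cos(2 * math.pi * x * theta),
                       -math.sin(2 * math.pi * x * theta))
               for x in range(1, N + 1))
```

For instance, $\widehat{\Lambda _{N,1}}(0)$ is just the Chebyshev-type sum $\sum_{x\leq N}\Lambda (x+1)$.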
The classical estimates for the Fourier transforms of $\Lambda _{N,d}(x)$ were adapted by Ruzsa and Sanders to the class of problems studied in this paper. They studied two cases related to the generalized Riemann hypothesis: given a pair of integers $D_{1}\geq D_{0}\geq 2$, either there exists an exceptional Dirichlet character of modulus $d_{D}\leq D_{0}$ or there does not (\cite{Ruzsa:08}, Proposition 4.7). They then obtained the following estimates (we will be more specific about the assumptions below): if $\kappa =\theta -a/q$, where $\theta \in \boldsymbol{T}$, then \begin{eqnarray}
\left\vert \widehat{\Lambda _{N,d}}(\theta )\right\vert &\leq &\frac{|\tau _{a,d,q}|}{\varphi (q)}\widehat{\Lambda _{N,d}}(0)+O\left( (1+|\kappa
|N)E_{N,D_{1}}\right) \text{,} \label{r:rs1} \\ \left\vert \widehat{\Lambda _{N,d}}(0)\right\vert &\gg &\frac{N}{\varphi (d)} +O\left( E_{N,D_{1}}\right) \text{,} \label{r:rs2} \end{eqnarray} where \begin{eqnarray*} E_{N,D_{1}} &=&ND_{1}^{2}\exp \left( -\frac{c_{1}\log N}{\sqrt{\log N}+\log D_{1}}\right) , \\ \tau _{a,d,q} &=&\sum_{\substack{ m=0 \\ (md+1,q)=1}}^{q-1}e\left( m\frac{a }{q}\right) \text{.} \end{eqnarray*}
\begin{proposition} \label{p:rusa}(Ruzsa, Sanders). There is an absolute constant $c_{1}$ such that for any pair of integers $D_{1}\geq D_{0}\geq 2$, one of the following possibilities holds:

(i)\ ($(D_{1},D_{0})$ is exceptional). There is an integer $d_{D}\leq D_{0}$ such that for all non-negative integers $N,a,q,d$, where $1\leq dq\leq D_{1}$, $d_{D}|d$, and $(a,q)=1$, and for any $\theta \in \boldsymbol{T}$, (\ref{r:rs1}) and (\ref{r:rs2}) hold, where $\kappa =\theta -a/q$.

(ii)\ ($(D_{1},D_{0})$ is unexceptional). For all non-negative integers $N,a,q,d$, where $1\leq dq\leq D_{0}$ and $(a,q)=1$, and for any $\theta \in \boldsymbol{T}$, (\ref{r:rs1}) and (\ref{r:rs2}) hold, where $\kappa =\theta -a/q$. \end{proposition}
\begin{proof} (\cite{Ruzsa:08}), Propositions 5.3. and 5.5. (Note that (\ref{r:rs1}) is explicitly obtained at the end of the proof of Proposition 5.3.) \end{proof}
We now define a function $\tau $ closely related to $\tau _{a,d,q}$ above, which will be the main term when estimating the cosine polynomials $F_{N,d}$. Let \begin{equation} \tau (d,q)=\left\{ \begin{array}{ll}
1, & q|d \\ 0, & (d,r)>1\text{ or }r\text{ not square-free} \\ -1/\varphi (r), & \text{otherwise,} \end{array} \right. \label{r:deftau} \end{equation} where $r=q/(q,d)$. Note that for $d$ square-free, the condition in the second row above is equivalent to $q$ not being square-free.
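Since the case analysis in this definition is easy to get wrong, a direct transcription into Python may be useful (our illustration, not part of the paper):

```python
from math import gcd

def phi(n):
    """Euler's totient function, by trial-division factorization."""
    res, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            res -= res // p
        p += 1
    if m > 1:
        res -= res // m
    return res

def squarefree(n):
    p = 2
    while p * p <= n:
        if n % (p * p) == 0:
            return False
        p += 1
    return True

def tau(d, q):
    """tau(d, q) as in the displayed definition, with r = q/(q, d)."""
    r = q // gcd(q, d)
    if d % q == 0:                 # q | d (then r = 1)
        return 1.0
    if gcd(d, r) > 1 or not squarefree(r):
        return 0.0
    return -1.0 / phi(r)
```

For instance, `tau(2, 6)` returns $-1/\varphi (3)=-1/2$, while `tau(1, 4)` vanishes because $r=4$ is not square-free.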
\begin{lemma} \label{l:tau}Let $a,d,q$ be positive integers, $(a,q)=1$, $r=q/(q,d)>1$ and $ a^{\ast }=ad/(q,d)$. Then \begin{equation}
\frac{|\tau _{a^{\ast },d,r}|}{\varphi (r)}=|\tau (d,q)|\text{.} \label{r:help} \end{equation} \end{lemma}
\begin{proof} As was noted in \cite{Ruzsa:08}, Section 5, \begin{equation*} \tau _{a,d,q}=\left\{ \begin{array}{ll} c_{q}(a)e(-m_{d,q}a/q) & \text{if }(d,q)=1 \\ 0 & \text{otherwise,} \end{array} \right. \end{equation*} where $c_{q}(a)$ is the Ramanujan sum and $m_{d,q}$ is a solution of $m_{d,q}d\equiv 1(\func{mod}q)$. Now if $q|d$, then $\tau _{a^{\ast },d,r}=\tau _{a^{\ast },d,1}=1$, so both sides of (\ref{r:help}) are equal to $1$. If $(d,r)>1$, then $\tau _{a^{\ast },d,r}=0$, and if $r$ is not square-free, then $\tau _{a^{\ast },d,r}=0$ as well, since the Ramanujan sum $c_{r}(a^{\ast })$ vanishes when $r$ is not square-free. The remaining case follows since $(a^{\ast },r)=1$ and $r$ square-free imply $|c_{r}(a^{\ast })|=1$. \end{proof}
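The identity (\ref{r:help}) can also be confirmed numerically from the defining sum for $\tau _{a,d,q}$. The following self-contained Python sketch (ours, for illustration only) checks it over small ranges of $a,d,q$:

```python
import cmath
from math import gcd

def phi(n):
    # Euler's totient by trial division
    res, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            res -= res // p
        p += 1
    if m > 1:
        res -= res // m
    return res

def squarefree(n):
    p = 2
    while p * p <= n:
        if n % (p * p) == 0:
            return False
        p += 1
    return True

def tau_def(d, q):
    # tau(d, q) as defined in (r:deftau), with r = q/(q, d)
    r = q // gcd(q, d)
    if d % q == 0:
        return 1.0
    if gcd(d, r) > 1 or not squarefree(r):
        return 0.0
    return -1.0 / phi(r)

def tau_sum(a, d, q):
    # tau_{a,d,q} = sum over 0 <= m < q with gcd(m d + 1, q) = 1 of e(m a / q)
    return sum(cmath.exp(2j * cmath.pi * m * a / q)
               for m in range(q) if gcd(m * d + 1, q) == 1)

def lemma_discrepancy(dmax=12, qmax=12):
    """Max of | |tau_{a*,d,r}| / phi(r) - |tau(d,q)| | over small a, d, q with r > 1."""
    worst = 0.0
    for d in range(1, dmax + 1):
        for q in range(2, qmax + 1):
            g = gcd(q, d)
            r = q // g
            if r <= 1:
                continue
            for a in range(1, q):
                if gcd(a, q) != 1:
                    continue
                a_star = a * d // g
                lhs = abs(tau_sum(a_star, d, r)) / phi(r)
                worst = max(worst, abs(lhs - abs(tau_def(d, q))))
    return worst
```

Running `lemma_discrepancy()` returns a value at floating-point noise level, consistent with the lemma.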
It is easy to see that there exists a constant $c_{2}$ depending only on $ c_{1}$ such that if \begin{equation} \log N\geq c_{2}(\log D_{1})^{2}, \label{r:new1} \end{equation} then \begin{equation} D_{1}^{2}\exp \left( -\frac{c_{1}\log N}{\sqrt{\log N}+\log D_{1}}\right) \leq \frac{1}{D_{1}^{2}}\text{.} \label{r:new2} \end{equation}
We first discuss the case of $q$ not dividing $d$, and then $q|d$.
\begin{proposition} \label{p:major}Assume all the assumptions of Proposition \ref{p:rusa} hold for $D_{0},D_{1},\theta ,N,a,q,d,\kappa $, and that in addition (\ref{r:new1}) and (\ref{r:new2}) hold. If $q$ does not divide $d$, then \begin{equation*}
F_{N,d}(\theta )\geq \tau (d,q)+O\left( \frac{1}{D_{1}}+|\kappa |N\right) \text{.} \end{equation*} \end{proposition}
\begin{proof} If we write \begin{eqnarray*} \psi (x;q,a) &=&\sum_{\substack{ n\leq x \\ n\equiv a(\func{mod}q)}}\Lambda (n), \\ \vartheta (x;q,a) &=&\sum_{\substack{ p\leq x \\ p\equiv a(\func{mod}q)}} \log (p), \end{eqnarray*} then $\widehat{\Lambda _{N,d}}(0)=\psi (Nd+1;d,1)$ and $k=\vartheta (Nd+1;d,1)$, where $k$ is the denominator in (\ref{r:defF}). By the well-known relation between the functions $\psi $ and $\vartheta $ (see e.g. \cite{Montgomery:07}, p.381), \begin{equation*} \psi (Nd+1;d,1)-\vartheta (Nd+1;d,1)\ll \sqrt{dN}\text{.} \end{equation*} Relations (\ref{r:rs2}), (\ref{r:new2}) and $\varphi (d)<D_{1}$ imply that \begin{equation} \frac{N}{\left\vert \widehat{\Lambda _{N,d}}(0)\right\vert }\ll D_{1}\text{.} \label{r:lambda0} \end{equation} If we use the shorthand notation $F=\func{Re}\sum_{p\leq dN+1,\,p\equiv 1(\func{mod}d)}\log p\cdot e((p-1)\theta )$, so that $F_{N,d}(\theta )=F/k$, we see from the definitions that $F$ is approximately $\func{Re}\widehat{\Lambda _{N,d}}(d\theta )$, or more precisely \begin{equation*}
|\func{Re}\widehat{\Lambda _{N,d}}(d\theta )-F|\leq \widehat{\Lambda _{N,d}} (0)-k\ll \sqrt{dN}. \end{equation*} Putting these three inequalities together, \begin{equation} \left\vert \frac{F}{k}-\frac{\func{Re}\widehat{\Lambda _{N,d}}(d\theta )}{ \widehat{\Lambda _{N,d}}(0)}\right\vert \leq \left\vert \frac{F}{k}
\right\vert \frac{|\widehat{\Lambda _{N,d}}(0)-k|}{|\widehat{\Lambda _{N,d}}
(0)|}+\frac{|\func{Re}\widehat{\Lambda _{N,d}}(d\theta )-F|}{|\widehat{
\Lambda _{N,d}}(0)|}\ll \frac{\sqrt{d}}{\sqrt{N}}D_{1}\text{.} \label{r:relf} \end{equation} Now if $\theta -a/q=\kappa $, then $d\theta -a^{\ast }/r=d\kappa $, where $ a^{\ast }=ad/(d,q)$, $r=q/(d,q)$. Combining (\ref{r:rs1}), (\ref{r:rs2}), ( \ref{r:new2}) and (\ref{r:lambda0}) we easily get that \begin{equation*} \left\vert \frac{\widehat{\Lambda _{N,d}}(d\theta )}{\widehat{\Lambda _{N,d}}
(0)}\right\vert \leq \frac{|\tau _{a^{\ast },d,r}|}{\varphi (r)}+O\left(
\frac{1}{D_{1}}+|\kappa |N\right) \text{.} \end{equation*} The last two relations combined (noting that if $d\leq D_{1}$ and (\ref {r:new1}), then $\sqrt{d}D_{1}/\sqrt{N}\ll 1/D_{1}$)\ and Lemma \ref{l:tau} complete the proof. \end{proof}
\begin{proposition} \label{p:main2}Let $d,N$ be positive integers, $\theta \in \boldsymbol{T}$, and let $\kappa =\theta -a/q$, where $(a,q)=1$ and $q|d$. Then \begin{equation}
F_{N,d}(\theta )\geq 1+O(dN|\kappa |)\text{.} \label{p:m2} \end{equation} \end{proposition}
\begin{proof} We first recall that $\func{Re}e(\theta )=\cos (2\pi \theta )\geq 1-2\pi \left\Vert \theta \right\Vert $, where $\left\Vert \cdot \right\Vert $ denotes the distance to the nearest integer. Thus if $|dN\kappa |\leq 1/2$, then for each $p\leq dN+1$ with $d|(p-1)$, we get $\left\Vert (p-1)\theta \right\Vert =(p-1)|\kappa |$ and $\func{Re}e((p-1)\theta )\geq 1-2\pi dN|\kappa |$, which easily implies (\ref{p:m2}). \end{proof}
\section{The minor arcs}
We start with the minor arc estimate from \cite{Ruzsa:08}, Corollary 6.2, which is derived from the classical result of Vinogradov (\cite {Montgomery:94}, Theorem 2.9).
\begin{proposition} \label{t:vin}Suppose that $d\leq N$ and $q\leq R$ are positive integers, $
\theta \in \boldsymbol{T}$, $(a,q)=1$ and $|\theta -a/q|\leq 1/qR$. Then \begin{equation} \left\vert \widehat{\Lambda _{N,d}}(\theta )\right\vert \ll d(\log N)^{4}\left( \frac{N}{\sqrt{q}}+N^{4/5}+\sqrt{NR}\right) \text{.} \label{r:a1} \end{equation} \end{proposition}
The minor arc estimate for $F_{N,d}(\theta )$ now follows.
\begin{corollary} \label{c:main3}Suppose $d\leq D_{1}$, $q\leq R$, and $N$ are positive integers, $\theta \in \boldsymbol{T}$, $(a,q)=1$ and $|\theta -a/q|\leq 1/qR$. Assume also that (\ref{r:new1}) and (\ref{r:new2}) hold. Then \begin{equation}
|F_{N,d}(\theta )|\ll D_{1}^{2}(\log N)^{4}\left( \frac{1}{\sqrt{q}} +N^{-1/5}+\frac{\sqrt{R}}{\sqrt{N}}\right) \text{.} \label{p:m3} \end{equation} \end{corollary}
\begin{proof} First note that as $d\leq D_{1}$, Proposition \ref{p:rusa} implies that (\ref {r:rs2}) holds. Then similarly as in the proof of Proposition \ref{p:major}, \begin{equation} \frac{N}{\left\vert \widehat{\Lambda _{N,d}}(0)\right\vert }\ll D_{1} \label{r:a2} \end{equation} and \begin{equation} \left\vert \frac{F}{k}-\frac{\func{Re}\widehat{\Lambda _{N,d}}(d\theta )}{ \widehat{\Lambda _{N,d}}(0)}\right\vert \ll \frac{\sqrt{d}}{\sqrt{N}} D_{1}\leq \frac{D_{1}^{3/2}}{\sqrt{N}}\text{.} \label{r:a3} \end{equation} We complete the proof by combining (\ref{r:a1}), (\ref{r:a2}) and (\ref{r:a3} ). \end{proof}
\section{Cancelling out the main term}
Recall the definition of the arithmetic function $\tau $ in (\ref{r:deftau} ). We first cancel out the main terms in the unexceptional case.
\begin{theorem} \label{t:cancelling}For a given $\delta >0$ smaller than some $\delta _{0}>0$ there exist a collection $\mathcal{D}$ of positive integers, each not greater than $\exp ((\log 1/\delta )^{2+o(1)})$, and weights $w:\mathcal{D\rightarrow }\boldsymbol{R}$, $\sum_{d\in \mathcal{D}}w(d)=1$, such that for all positive integers $q$, \begin{equation} \sum_{d\in \mathcal{D}}w(d)\tau (d,q)\geq -\delta /2. \label{r:A} \end{equation} \end{theorem}
\begin{proof} We first define the set $\mathcal{D}$, which depends on three constants $p^{-}<p^{+}$ and $l$ specified below. Let \begin{equation*} d^{\ast }=\prod_{p\leq p^{-}}p \end{equation*} (the product being over primes $p$, as usual), and let $\mathcal{D}(j)$ be the set of all square-free numbers $d^{\ast }d$, with $d$ containing in its decomposition only primes $p^{-}<p\leq p^{+}$, and such that $\omega (d)=j$, where $\omega (d)$ denotes the number of distinct primes dividing $d$. We now set \begin{eqnarray*} p^{+} &=&2/\delta +1, \\ l &=&\left\lceil 2\log (1/\delta )\left( \frac{2\log \log (2/\delta )}{\log 2}+1\right) \right\rceil =\log (1/\delta )^{1+o(1)}, \\ p^{-} &=&2l^{2}+1=\log (1/\delta )^{2+o(1)}, \\ \mathcal{D} &=&\mathcal{D}(l), \\ W(j) &=&\sum_{d^{\ast }d\in \mathcal{D}(j)}1/\varphi (d), \\ w(d^{\ast }d) &=&\frac{1}{W(l)}\frac{1}{\varphi (d)}\text{,} \end{eqnarray*} where $\left\lceil x\right\rceil $ is the smallest integer $\geq x$. We denote the left-hand side of (\ref{r:A}) by $A(q)$.
By using $\prod_{p\leq x}p=\exp (x^{(1+o(1))})$ (see e.g. \cite {Montgomery:07}, Corollary 2.6), we easily see that for each $d^{\ast }d\in \mathcal{D}$, \begin{equation*} d^{\ast }d\leq \prod_{p\leq p^{-}}p\cdot (p^{+})^{l}=\exp (\log (1/\delta )^{2+o(1)}). \end{equation*}
If $q$ is not square-free, or $q$ contains a prime factor larger than $p^{+}$, the claim $A(q)\geq -\delta /2$ is straightforward, as for all $d$ we have $\tau (d,q)=0$, respectively $\tau (d,q)\geq -1/\varphi (p^{+})\geq -\delta /2$.
We can now without loss of generality assume that $q$ is square-free, containing no prime $>p^{+}$ or $\leq p^{-}$ in its decomposition (the latter can be eliminated, as primes $\leq p^{-}$ do not affect the value of $\tau (d^{\ast }d,q)$ for square-free $q$). We define the following constants and sets to assist us in the calculations: \begin{eqnarray*} k &=&\log (1/\delta ), \\ \mathcal{D}(j;q) &=&\{d^{\ast }d\in \mathcal{D}(j)\text{, }(d,q)=1\}, \\ W(j;q) &=&\sum_{d^{\ast }d\in \mathcal{D}(j;q)}1/\varphi (d), \\ W &=&W(1)=\sum_{p^{-}<p\leq p^{+}}\frac{1}{\varphi (p)}=\sum_{p^{-}<p\leq p^{+}}\frac{1}{p-1}\text{.} \end{eqnarray*}
The remaining cases will be distinguished by $\omega (q)$.
(i) Assume $\omega (q)\leq 2k$. We will show that the terms for which $q|d$ dominate all the others. We first show the following: for $j_{1}<j_{2}$, \begin{equation} W(j_{2};q)\leq \frac{W^{j_{2}-j_{1}}W(j_{1};q)}{j_{2}(j_{2}-1)...(j_{1}+1)} \text{.} \label{r:laminq} \end{equation} Indeed, if we define \begin{equation*} W^{\ast }(j;q)=\sum_{(p_{1},p_{2},...,p_{j})}\frac{1}{\varphi (p_{1}p_{2}...p_{j})}\text{,} \end{equation*} where the sum goes over all ordered $j$-tuples of pairwise different primes $p_{i}$, $p^{-}<p_{i}\leq p^{+}$, with each $p_{i}$ coprime to $q$, then $W(j;q)=W^{\ast }(j;q)/j!$. However, as $\varphi $ is multiplicative over coprime integers, \begin{equation} W^{\ast }(j_{2};q)\leq W^{j_{2}-j_{1}}W^{\ast }(j_{1};q) \label{r:laminq2} \end{equation} (we first choose the first $j_{2}-j_{1}$ primes and then the remaining $j_{1}$). We obtain (\ref{r:laminq}) by dividing (\ref{r:laminq2}) by $j_{2}!$.
The definition of $A(q)$ now yields: \begin{equation*}
A(q)=\sum_{q|d}\frac{1}{\varphi (d)}-\sum_{q/(q,d)>1}\frac{1}{\varphi (d)} \frac{1}{\varphi (r)}, \end{equation*} where the sums above and below are over $d^{\ast }d\in \mathcal{D}$ unless specified otherwise and $r$ always denotes $r=q/(q,d)$ (recall that we assumed that $q$ and $d^{\ast }$ are coprime). We first compute the first term: \begin{equation*}
\sum_{q|d}\frac{1}{\varphi (d)}=\sum_{d^{\ast }d\in \mathcal{D}(l-\omega (q);q)}\frac{1}{\varphi (d)}\frac{1}{\varphi (q)}=W(l-\omega (q);q)\frac{1}{ \varphi (q)}. \end{equation*}
If $\omega ((d,q))=j$, we can choose $(d,q)$ as a factor of $q$ in $\binom{\omega (q)}{j}$ ways. Using this, (\ref{r:laminq}), and, in the last two rows, $\omega (q)\leq 2k$ and $(1+x/n)^{n}<\exp (x)$, we obtain \begin{eqnarray*} \sum_{q/(q,d)>1}\frac{1}{\varphi (d)}\frac{1}{\varphi (r)} &=&\sum_{j=0}^{\omega (q)-1}\sum_{\omega ((d,q))=j}\frac{1}{\varphi (d)} \frac{1}{\varphi (r)}=\sum_{j=0}^{\omega (q)-1}W(l-j;q)\binom{\omega (q)}{j} \frac{1}{\varphi (q)}\leq \\ &\leq &\sum_{j=0}^{\omega (q)-1}\frac{W^{\omega (q)-j}}{(l-j)...(l-\omega (q)+1)}\binom{\omega (q)}{j}\cdot \frac{W(l-\omega (q);q)}{\varphi (q)}\leq \\ &\leq &\frac{W(l-\omega (q);q)}{\varphi (q)}\sum_{j=0}^{\omega (q)-1}\binom{ \omega (q)}{j}\frac{W^{\omega (q)-j}}{(l-\omega (q))^{\omega (q)-j}}\leq \\ &\leq &\frac{W(l-\omega (q);q)}{\varphi (q)}\left[ \left( 1+\frac{W}{l-2k} \right) ^{2k}-1\right] < \\ &<&\frac{W(l-\omega (q);q)}{\varphi (q)}\left[ \exp \left( \frac{W}{l/(2k)-1} \right) -1\right] . \end{eqnarray*} As by e.g. \cite{Montgomery:07}, Theorem 2.7.(d), \begin{equation} \sum_{p\leq x}\frac{1}{p-1}=\log \log x\cdot (1+o(1)), \label{r:sump} \end{equation} we get that \begin{equation} W=\sum_{p^{-}<p\leq p^{+}}\frac{1}{p-1}=\log \log (p^{+})(1+o(1))\leq 2\log \log (2/\delta ). \label{r:below} \end{equation} It is easy to check that the definitions of $l,k$ imply that \begin{equation*} 1-\left[ \exp \left( \frac{2\log \log (2/\delta )}{l/(2k)-1}\right) -1\right] \geq 0\text{.} \end{equation*}
Putting all of the above together we get $A(q)>0$.
(ii)\ Assume $2k<\omega (q)\leq 2l$. We now show that all the terms are small. First assume $\omega ((q,d))=j\geq k$. By the same reasoning as in (\ref{r:laminq}), one gets for $j\leq l$, \begin{equation*}
W(l;q)\geq \frac{(W-\sum_{p|q}1/\varphi (p))^{l-j}W(j;q)}{l(l-1)...(j+1)}\text{.} \end{equation*}
Now, by definition, $W(l)\geq W(l;q)$. Applying again (\ref{r:sump}), we see that for $\delta $ small enough, \begin{equation*}
W-\sum_{p|q}1/\varphi (p)\geq \log \log (p^{+})(1+o(1))-\log \log (2l)(1+o(1))\geq 1\text{.} \end{equation*} Combining all of this, one gets \begin{equation*} \frac{W(j;q)}{W(l)}\leq l^{l-j}\text{.} \end{equation*}
Furthermore, as by Stirling's formula $k!\geq k^{k}\exp (-k)$ and $k=\log (1/\delta )$, we get for $\delta $ small enough \begin{equation*} \frac{l}{k!}\leq \frac{\log (1/\delta )^{1+o(1)}}{\log (1/\delta )^{\log (1/\delta )}\exp (-\log (1/\delta ))}\leq \delta /4. \end{equation*}
Putting all of this together and summing over $d^{\ast }d\in \mathcal{D}$ similarly as above, we get \begin{eqnarray}
\sum_{j=k}^{l}\sum_{\omega ((d,q))=j}|w(d^{\ast }d)\tau (d^{\ast }d,q)| &=& \frac{1}{W(l)}\sum_{j=k}^{\min \{l,\omega (q)\}}\sum_{\omega (d,q)=j}\frac{1 }{\varphi (d)}\frac{1}{\varphi (r)}= \notag \\ &=&\sum_{j=k}^{\min \{l,\omega (q)\}}\binom{\omega (q)}{j}\frac{W(l-j;q)}{ W(l)}\frac{1}{\varphi (q)}\leq \notag \\ &\leq &\sum_{j=k}^{\min \{l,\omega (q)\}}\frac{(2l)^{j}}{j!}l^{j}\frac{1}{ (p^{-}-1)^{\omega (q)}}\leq \notag \\ &\leq &\frac{1}{k!}\sum_{j=k}^{\min \{l,\omega (q)\}}\left( \frac{2l^{2}}{ p^{-}-1}\right) ^{\omega (q)}\leq \frac{l}{k!}\leq \delta /4\text{.} \label{r:one} \end{eqnarray}
For $\omega ((q,d))=j<k$, $\omega (r)=\omega (q)-j>k$ (where $r=q/(q,d)$). We now see that for $\delta >0$ small enough, \begin{equation}
|\tau (d^{\ast }d,q)|=1/\varphi (r)\leq 1/(p^{-}-1)^{k}=\log (1/\delta )^{(-2-o(1))\log (1/\delta )}\leq \delta /4\text{,} \label{r:oneB} \end{equation} thus \begin{equation}
\sum_{j=0}^{k-1}\sum_{\omega ((d,q))=j}|w(d^{\ast }d)\tau (d^{\ast
}d,q)|\leq \frac{\delta }{4}\sum_{d^{\ast }d\in \mathcal{D}}|w(d^{\ast
}d)|=\delta /4. \label{r:two} \end{equation}
Relations (\ref{r:one})\ and (\ref{r:two})\ give $|A(q)|\leq \delta /2.$
(iii) Assume $2l<\omega (q)$. Then it is enough to see that for all $d^{\ast
}d\in \mathcal{D}$, $\omega (r)\geq l>k$. We now obtain in the same way as in (\ref{r:oneB}) that $|\tau (d^{\ast }d,q)|\leq \delta /4$, but now for all $d^{\ast }d\in \mathcal{D}$, thus $|A(q)|\leq \delta /4$. \end{proof}
We now modify this for the exceptional case.
\begin{theorem} \label{t:cancelling2}Assume $\delta >0$ is smaller than some $\delta _{0}>0$ and let $d_{D}$ be a positive integer with $d_{D}=\exp ((\log 1/\delta )^{2+o(1)})$. Then there exist a collection $\mathcal{D}$ of positive integers, each divisible by $d_{D}$ and not greater than $\exp ((\log 1/\delta )^{2+o(1)})$, and weights $w:\mathcal{D\rightarrow }\boldsymbol{R}$, $\sum_{d\in \mathcal{D}}w(d)=1$, such that for all positive integers $q$, \begin{equation} \sum_{d\in \mathcal{D}}w(d)\tau (d,q)\geq -\delta /2. \label{r:B} \end{equation} \end{theorem}
\begin{proof} We define $d^{\ast }=d_{D}\prod_{p\leq p^{-}}p$, where $p^{-}$ and all the other constants remain the same as in the proof of Theorem \ref{t:cancelling}. Let $\mathcal{D}$ be the set of all numbers $d^{\ast }d$, with $d$ square-free, relatively prime to $d^{\ast }$, containing in its decomposition only primes $p^{-}<p\leq p^{+}$, and such that $\omega (d)=l$. The rest of the proof is analogous to the proof of Theorem \ref{t:cancelling}, with all calculations the same, and is thus omitted. \end{proof}
\section{Proof of Theorem \ref{t:main01}}
We complete the proof of Theorem \ref{t:main01} in this section. We will choose the constants $Q,R$ below, and will use the major arc estimates for $q\leq Q$ and the minor arc estimates for $Q<q\leq R$. We will assume that $a/q$ is the Dirichlet approximation of $\theta \in \boldsymbol{T}$, $|\theta -a/q|\leq 1/qR$, $(a,q)=1$. The error terms in Propositions \ref{p:major}, \ref{p:main2} are then \begin{eqnarray*} E_{1} &=&O\left( \frac{1}{D_{1}}+\frac{N}{R}\right) , \\ E_{2} &=&O\left( D_{1}N/R\right) , \end{eqnarray*}
as $|\kappa |\leq 1/qR$ and $d\leq D_{1}$. The error term for the minor arcs is the entire right-hand side of (\ref{p:m3}); thus, as $q>Q$, it is \begin{equation*} E_{3}=O\left( D_{1}^{2}(\log N)^{4}\left( \frac{1}{\sqrt{Q}}+N^{-1/5}+\frac{\sqrt{R}}{\sqrt{N}}\right) \right) . \end{equation*} To complete the proof, we need to choose the constants $D_{1},N,Q,R$ so that the error terms $E_{1},E_{2},E_{3}\leq \delta /2$ for all $\theta \in \boldsymbol{T}$ on major, respectively minor, arcs. As was noted in the introduction, this is impossible, so we proceed as follows. We define \begin{equation*} Q=\exp (\log (1/\delta )^{2+o(1)}) \end{equation*} (the constant obtained as the upper bound on $\mathcal{D}$ in Theorem \ref{t:cancelling}), and let \begin{equation*} D_{0}=Q^{2},\text{ }D_{1}=Q^{4}\text{.} \end{equation*} If $(D_{0},D_{1})$ is unexceptional, we construct the set $\mathcal{D}$ according to Theorem \ref{t:cancelling}, and if it is exceptional with the modulus of the exceptional character $d_{D}\leq D_{0}$, then according to Theorem \ref{t:cancelling2}. Now let $N_{0}=\exp (c_{2}(\log D_{1})^{2})$, where $c_{2}$ is the constant in (\ref{r:new1}). We now define \begin{eqnarray*} N_{j} &=&N_{0}D_{1}^{8j}, \\ R_{j}^{\ast } &=&N_{0}D_{1}^{8j+2}\text{,} \end{eqnarray*} where $j=1,...,m$, $4/\delta \leq m<4/\delta +1$. Then, for $0<\delta \leq \delta _{0}$ with $\delta _{0}$ small enough and any index $j^{\ast }\leq m$, it is easy to see that for $j\leq j^{\ast }$ the error terms $E_{1},E_{2}\leq \delta /4$ for the constants $Q,D_{1},N_{j},R_{j^{\ast }}^{\ast }$. Furthermore, if $j\geq j^{\ast }+1$, the error term $E_{3}\leq \delta /4$ for the constants $Q,D_{1},N_{j},R_{j^{\ast }}^{\ast }$.
For a given $\theta \in \boldsymbol{T}$, let the rational $a_{j}^{\ast }/q_{j}^{\ast }$, $(a_{j}^{\ast },q_{j}^{\ast })=1$, be the Dirichlet approximation of $\theta $, $|\theta -a_{j}^{\ast }/q_{j}^{\ast }|\leq 1/q_{j}^{\ast }R_{j}^{\ast }$. Without loss of generality, we can also assume that $a_{j}^{\ast }/q_{j}^{\ast }$ is the rational with the smallest $q_{j}^{\ast }$ for a given $R_{j}^{\ast }$. Then the sequence $q_{j}^{\ast }$ is non-decreasing.
Let $j_{0}$ be the smallest index such that $q_{j_{0}}^{\ast }>Q$ (with $j_{0}=m+1$ if $q_{j}^{\ast }\leq Q$ for all $j$). We define \begin{eqnarray*} a_{j}/q_{j} &=&a_{j_{0}-1}^{\ast }/q_{j_{0}-1}^{\ast }\text{, } R_{j}=R_{j_{0}-1}^{\ast }\text{ for }j\leq j_{0}-1\text{,} \\ a_{j}/q_{j} &=&a_{j_{0}}^{\ast }/q_{j_{0}}^{\ast }\text{, } R_{j}=R_{j_{0}}^{\ast }\text{ for }j\geq j_{0}\text{.} \end{eqnarray*}
Now one can easily check that for any $d\in \mathcal{D}$ and any $j\leq j_{0}-1$, the assumptions of Proposition \ref{p:major} in the case that $q$ does not divide $d$, respectively of Proposition \ref{p:main2} in the case $q|d$, do hold for the constants $D_{0},D_{1},Q,a_{j},q_{j},R_{j},N_{j}$, and as was noted above, $E_{1},E_{2}\leq \delta /4$, thus \begin{equation} F_{d,N_{j}}(\theta )\geq \tau (d,q_{j})-\delta /4\text{.} \label{r:sum1} \end{equation} Similarly for $j\geq j_{0}+1$ and $d\leq D_{1}$, the assumptions of Corollary \ref{c:main3} hold and $E_{3}\leq \delta /4$, therefore \begin{equation} F_{d,N_{j}}(\theta )\geq -\delta /4. \label{r:sum2} \end{equation} Also by definition, \begin{equation} F_{d,N_{j_{0}}}(\theta )\geq -1\text{.} \label{r:sum3} \end{equation} Now the required polynomial is \begin{equation*} T=\frac{1}{m}\sum_{d\in \mathcal{D}}\sum_{j=1}^{m}w(d)F_{d,N_{j}}. \end{equation*} By applying (\ref{r:sum1}), (\ref{r:sum2}) and (\ref{r:sum3}) to the normalized sum over $j$, and (\ref{r:A}), respectively (\ref{r:B}), to the sum over $d\in \mathcal{D}$, we get that for any $\theta \in \boldsymbol{T}$, $T(\theta )\geq -\delta $. As the largest non-zero coefficient in $T$ is $dN_{m}\leq N_{0}D_{1}^{8(4/\delta +1)+1}=\exp ((1/\delta )^{1+o(1)})$, this completes the proof.
\section{The lower bound}
In this section we prove Theorem \ref{t:main02} on the lower bound for $\gamma (n)$ associated to the set $p-1$. Ruzsa in \cite{Ruzsa84b}, Section 5, constructed for a given $n$ a subset $A$ of integers not larger than $n$ with $|A|\gg n^{((\log 2-\varepsilon )/\log \log n)}$ such that $A-A$ contains no shifted prime $p-1$. We now construct a set $B$ of positive integers by the following rule: if $x\equiv a(\func{mod}2n)$ for some $a\in A$, then $x\in B$; otherwise $x\not\in B$. Now clearly the upper Banach density of $B$ satisfies \begin{equation} \rho (B)\gg n^{(-1+(\log 2-\varepsilon )/\log \log n)} \label{r:upperbanach} \end{equation} and $B-B$ contains no shifted prime $p-1$ smaller than $n$. Recall the measure of intersectivity $\alpha (n)$ defined in the introduction, which satisfies $\gamma \geq \alpha $. As $\alpha (n)$ is by definition $\gg $ the right-hand side of (\ref{r:upperbanach}), the proof is complete.
\section{Application:\ Heilbronn property of shifted primes}
An estimate for the Heilbronn property of shifted primes is an example of an application of Theorem \ref{t:main01}. If $\mathcal{H}$ is a set of positive integers, we say that it is a Heilbronn set if $\eta =0$, where \begin{equation*}
\eta =\sup_{\theta \in \boldsymbol{T}}\inf_{h\in \mathcal{H}}||h\theta || \end{equation*} (for a more detailed discussion, see \cite{Montgomery:94}, Section 2.7, or \cite{Schmidt:77}). One can quantify the Heilbronn property similarly to the van der Corput and Poincar\'{e} properties of integers, and define \begin{equation} \eta (n)=\sup_{\theta \in \boldsymbol{T}}\inf_{h\in \mathcal{H}_{n}}||h\theta ||, \label{r:dnu} \end{equation} where $\mathcal{H}_{n}=\mathcal{H}\cap \{1,...,n\}$. One can show that a set is a Heilbronn set if and only if $\lim_{n\rightarrow \infty }\eta (n)=0$ (\cite{Montgomery:94}, Section 2.7). All van der Corput sets are Heilbronn sets (the converse does not hold), and as was shown in \cite{Montgomery:94}, Theorem 2.9, \begin{equation} \eta (n)\leq \gamma (n)\text{.} \label{r:heilbronn} \end{equation}
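The quantity $\eta (n)$ is also amenable to direct numerical exploration: evaluating the inner infimum on a discrete grid of $\theta $ gives a lower bound on the supremum. A small Python sketch (our illustration; the grid approximation is heuristic, not rigorous):

```python
from math import isqrt

def primes_upto(n):
    """Primes <= n by the sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[:2] = b"\x00\x00"
    for p in range(2, isqrt(n) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return [p for p in range(2, n + 1) if sieve[p]]

def dist_to_int(x):
    """||x||: distance from x to the nearest integer."""
    return abs(x - round(x))

def eta_lower(n, grid=20000):
    """Grid lower bound for eta(n) = sup_theta min_{p<=n} ||(p-1) theta||."""
    shifted = [p - 1 for p in primes_upto(n)]
    return max(min(dist_to_int(h * (i + 0.5) / grid) for h in shifted)
               for i in range(grid))
```

Since $1=2-1$ is itself a shifted prime, $\eta (n)\leq 1/2$ trivially, and the computed lower bound decreases as $n$ grows, in line with the Heilbronn property.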
Various estimates for the function $\eta $ have been obtained by Schmidt \cite{Schmidt:77} for sets of values of polynomials with integer coefficients. An upper bound for the set of shifted primes follows from\ Theorem \ref{t:main01} and (\ref{r:heilbronn}).
\begin{corollary} If $\eta $ is the arithmetic function (\ref{r:dnu}) associated to the set of shifted primes $\mathcal{H}$, then $\eta (n)=O((\log n)^{-1+o(1)})$. \end{corollary}
\end{document} |
\begin{document}
\author{ {\normalsize \'Alvaro Piedrafita\textsuperscript{1,2}\thanks{AP completed most of this work while at ETH Z\"urich.} and Joseph M.\ Renes\textsuperscript{1}}\\ \emph{\normalsize \textsuperscript{1}Institute for Theoretical Physics, ETH Z\"urich, Switzerland}\\ \emph{\normalsize \textsuperscript{2}Qusoft and CWI, Amsterdam, The Netherlands} }
\title{\large {\bf Reliable channel-adapted error correction: Bacon-Shor code recovery from amplitude damping}}
\date{
}
\maketitle
\begin{abstract} We construct two simple error correction schemes adapted to amplitude damping noise for Bacon-Shor codes and investigate their prospects for fault-tolerant implementation. Both consist solely of Clifford gates and require far fewer qubits, relative to the standard method, to achieve correction to a desired order in the damping rate. The first, employing one-bit teleportation and single-qubit measurements, needs only one fourth as many physical qubits, while the second, using just stabilizer measurements and Pauli corrections, needs only half. We show that existing fault-tolerance methods can be employed for the latter, while the former can be made to avoid potential catastrophic errors and can easily cope with damping faults in ancilla qubits. \end{abstract}
\section{Introduction} The performance of quantum error correction over a given channel can be substantially improved by adapting the decoding algorithm to the channel. For the amplitude damping channel, which describes the effects of energy relaxation, Leung \emph{et al.}\ illustrate this quite dramatically with a four qubit code capable of correcting single damping errors~\cite{leung_approximate_1997}, one qubit fewer than the minimum number needed to recover from an arbitrary single-qubit error~\cite{bennett_mixed-state_1996,knill_theory_1997}. However, there is no guarantee that the performance available to a given code and channel can be realized by simple correction circuits. For instance, although the Shor code~\cite{shor_scheme_1995} can in principle correct two amplitude damping errors~\cite{gottesman_stabilizer_1997}, standard recovery based on stabilizer measurement and Pauli recovery operations will correct only one. And the problem of finding simple high-performance decoders is compounded when requiring the decoder to tolerate noise during its implementation. Nonetheless, reducing the qubit overhead needed for error correction would be tremendously useful in near term efforts to construct quantum information processors.
In this paper we investigate the performance of simple decoding schemes for the Bacon-Shor code family~\cite{bacon_operator_2006} adapted to amplitude damping noise. The $(n,m)$ Bacon-Shor code combines two classical repetition codes, a length-$n$ code for phase flips and a length-$m$ code for bit flips, and by standard stabilizer recovery can correct up to $\min(\lfloor(n-1)/2\rfloor,\lfloor(m-1)/2\rfloor)$ arbitrary single-qubit errors. Duan \emph{et al.}\ observe that for amplitude damping noise the $(t+1,t+1)$ code exactly satisfies the error-correcting conditions to order $t$ in the damping probability $\gamma$~\cite{duan_multi-error-correcting_2010}, meaning Bacon-Shor codes can in principle correct twice as many damping errors as arbitrary single-qubit errors. Here we present an error correcting procedure using only Clifford operations that achieves this performance, as well as a syndrome-based procedure (requiring only stabilizer measurements) which perfectly recovers $(t+1,2t+1)$ codewords to order $t$.
Furthermore, we also investigate the prospects for fault-tolerant implementation of the two correction schemes. Our syndrome correction method also corrects the Pauli twirl~\cite{bennett_mixed-state_1996} of the amplitude damping channel, which makes it amenable to the use of standard fault-tolerant gadgets. The Clifford correction does not, however, complicating the analysis. Nevertheless, we find that while single, ill-timed damping events can be catastrophic for direct implementation of the correction procedure, this problem can be avoided by suitable alterations. Additionally, we show that damping errors in ancillas used for error correction propagate into the data as phase errors, and are therefore correctable by subsequent correction cycles. This would seem to require the use of the $(2t+1,2t+1)$ code for syndrome correction, wiping out all qubit savings relative to standard decoding. But the $(2t+1,t+1)$ code would suffice for Clifford correction, meaning it could reliably recover from combined amplitude damping and dephasing noise (often characterized as $T_1$ and $T_2$ decay times, respectively~\cite{nielsen_quantum_2000}) using half as many qubits as the standard method.
Channel-adapted error correction has been studied by several authors. Fletcher \emph{et al.}\ consider the problem of finding structured decoders in~\cite{fletcher_structured_2008} and \cite{fletcher_channel-adapted_2008}. The latter, which is specifically focused on amplitude damping, extends the construction of Leung \emph{et al.}\ to higher rates and gives a stabilizer-based recovery, but does not consider fault-tolerance. They do describe a stabilizer-based decoder for the Shor code which outperforms standard decoding, but it does not correct all errors to second order. Several further codes adapted to the amplitude damping channel have been found in~\cite{duan_multi-error-correcting_2010,shor_high_2011,grassl_quantum_2014,jackson_concatenated_2016}. Indeed, Duan \emph{et al.}\ observe that the $(t+1,t+1)$ code can correct damping to order $t$, but do not give an explicit error correction scheme. A comparison of various short codes subject to generalized amplitude damping is given in \cite{cafaro_approximate_2014}. Meanwhile, the fault-tolerance of Bacon-Shor codes has been analyzed for different noise models. Ralph \emph{et al.}\ considered protection of optically-encoded quantum information against photon loss (erasure)~\cite{ralph_loss-tolerant_2005}, and this approach was recently analyzed for constructing a quantum repeater~\cite{muralidharan_ultrafast_2014}. The effectiveness of concatenated and plain Bacon-Shor codes against Pauli noise is considered in~\cite{aliferis_subsystem_2007,cross_comparative_2009} and \cite{napp_optimal_2013,brooks_fault-tolerant_2013}, respectively.
We have structured the presentation of our results as follows. The next section details the amplitude damping channel and Bacon-Shor codes. We describe the two correction methods in \S\ref{sec:ec} and prospects for fault-tolerant implementation in \S\ref{sec:ft}.
\section{Setup} \subsection{Amplitude damping channel} The amplitude damping channel describes the process of energy relaxation of a system from an excited state to its ground state. Supposing that the probability for decay is $\gamma$ and we are only interested in the two-dimensional space of the excited and ground states, the action of the channel $\mathcal N_\gamma$ is given by the following two Kraus operators \begin{align} \label{eq:krausops} A_0&=\begin{pmatrix}1 & 0\\ 0& \sqrt{1-\gamma}\end{pmatrix}\qquad \text{and}\qquad A_1=\begin{pmatrix}0 & \sqrt{\gamma}\\ 0& 0\end{pmatrix}. \end{align} An arbitrary state $\rho$ is transformed into $\mathcal N_\gamma(\rho)=A_0\rho A_0^\dagger+A_1\rho A_1^\dagger$.
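A direct transcription of these two Kraus operators makes the channel easy to experiment with. The following Python sketch (ours, not part of the paper) implements $\mathcal N_\gamma$ with plain $2\times 2$ matrices; one can check that $A_0^\dagger A_0+A_1^\dagger A_1=\mathtt{I}$ and that the excited state $|1\rangle \langle 1|$ is mapped to $\gamma |0\rangle \langle 0|+(1-\gamma )|1\rangle \langle 1|$:

```python
import math

def mat_mul(A, B):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def dagger(A):
    """Conjugate transpose of a 2x2 matrix."""
    return [[A[j][i].conjugate() for j in range(2)] for i in range(2)]

def kraus_ad(gamma):
    """Kraus operators A0, A1 of the amplitude damping channel N_gamma."""
    A0 = [[1.0, 0.0], [0.0, math.sqrt(1 - gamma)]]
    A1 = [[0.0, math.sqrt(gamma)], [0.0, 0.0]]
    return A0, A1

def apply_channel(kraus, rho):
    """rho -> sum_j A_j rho A_j^dagger."""
    out = [[0.0, 0.0], [0.0, 0.0]]
    for A in kraus:
        t = mat_mul(mat_mul(A, rho), dagger(A))
        out = [[out[i][j] + t[i][j] for j in range(2)] for i in range(2)]
    return out
```
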
Unlike a Pauli channel, neither of the Kraus operators has trivial effect on the input state. Damping, the action of $A_1$, is of course linear in the probability $\gamma$ (acting on the density operator, not state vector). Meanwhile, the nondamping operator can be expressed as a superposition of identity $\mathtt{I}$ and phase flip ${\mathtt{Z}}=\sigma_z$, \begin{subequations} \label{eq:A0} \begin{align} A_0&=\tfrac12\left((1+\sqrt{1-\gamma})\mathtt{I}+(1-\sqrt{1-\gamma}){\mathtt{Z}}\right)\\ &=(1-\tfrac\gamma4)\mathtt{I}+\tfrac\gamma4{\mathtt{Z}}+O(\gamma^2). \end{align} \end{subequations} Thus, by the usual discretization argument~\cite{shor_scheme_1995,gottesman_introduction_2010}, phase error correction is sufficient to reverse the action of $A_0$ to second order in $\gamma$.
It will also be useful to consider the Pauli twirl of $\mathcal N_\gamma$~\cite{bennett_mixed-state_1996}, the Pauli channel resulting from randomizing the orientation of the input and output in the following manner: \begin{align} \mathcal N^{\text{Pauli}}_\gamma(\rho)=\tfrac14 \sum_{k=0}^{3} \sigma_k^\dagger \mathcal N_\gamma(\sigma_k\rho\sigma_k^\dagger)\sigma_k\,, \end{align} where we have used the usual notation for the Pauli operators for convenience, including $\sigma_0={\mathtt{I}}$. Amplitude damping is of course already covariant with respect to rotations about the $z$ axis, so only the ${\mathtt{X}}=\sigma_x$ randomization has any effect here. It is easy to work out that the probabilities of the various Pauli operators in $\mathcal N^{\text{Pauli}}_\gamma$ are just $\tfrac14\gamma$ for both ${\mathtt{X}}$ and ${\mathtt{Y}}$ and $\tfrac14(2-\gamma\pm 2\sqrt{1-\gamma})$ for ${\mathtt{I}}$ and ${\mathtt{Z}}$, respectively. Expanding the latter to second order, we have $1-\tfrac12\gamma-\tfrac1{16}\gamma^2$ for ${\mathtt{I}}$ and $\tfrac1{16}\gamma^2$ for ${\mathtt{Z}}$, corresponding to the contribution of phase errors by $A_0$ at second order.
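These twirl probabilities are easy to check numerically using the standard expression $p_k=\sum_i|\operatorname{Tr}(\sigma_k^\dagger A_i)/2|^2$ for the Pauli twirl of a qubit channel with Kraus operators $A_i$ (a Python sketch):

```python
import numpy as np

gamma = 0.1
A = [np.array([[1, 0], [0, np.sqrt(1 - gamma)]]),   # A0
     np.array([[0, np.sqrt(gamma)], [0, 0]])]       # A1
paulis = {'I': np.eye(2),
          'X': np.array([[0, 1], [1, 0]]),
          'Y': np.array([[0, -1j], [1j, 0]]),
          'Z': np.diag([1.0, -1.0])}

# p_k = sum_i |Tr(sigma_k^† A_i)/2|^2 : squared Pauli-basis coefficients of each Kraus op
p = {k: sum(abs(np.trace(P.conj().T @ Ai) / 2) ** 2 for Ai in A)
     for k, P in paulis.items()}

assert np.isclose(p['X'], gamma / 4) and np.isclose(p['Y'], gamma / 4)
assert np.isclose(p['I'], (2 - gamma + 2 * np.sqrt(1 - gamma)) / 4)
assert np.isclose(p['Z'], (2 - gamma - 2 * np.sqrt(1 - gamma)) / 4)
```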
\subsection{Bacon-Shor codes} \label{sec:BS} Bacon-Shor codes are stabilizer-based subsystem codes built from two classical repetition codes~\cite{bacon_operator_2006}. The $(n,m)$ code encodes a single qubit into $nm$ physical qubits, which can be thought of as arranged on a rectangular $n\times m$ lattice. The stabilizer generators of the code are then given by the products of ${\mathtt{X}}$ operators on all qubits in any two neighboring rows and the products of ${\mathtt{Z}}$ operators on all qubits in two neighboring columns. Additional operators are needed to fix the ``gauge'' of the code. For our purposes we will be interested in the ${\mathtt{Z}}$ gauge, where the gauge operators are ${\mathtt{Z}}$ operators on pairs of qubits anywhere in a row. Note that the ${\mathtt{Z}}$-type stabilizers are products of these ${\mathtt{Z}}{\mathtt{Z}}$ gauge operators. In the complementary ${\mathtt{X}}$ gauge, the additional stabilizers are ${\mathtt{X}}$ operators on pairs of neighboring qubits anywhere in a column; these similarly subsume the ${\mathtt{X}}$-type stabilizers. Our convention is that the logical operators $\bar {\mathtt{X}}$ and $\bar{\mathtt{Z}}$ of the code are just the row of ${\mathtt{X}}$ operators along the top and the column of ${\mathtt{Z}}$ operators along the left, respectively. Let us denote these operators by ${\mathtt{X}}^{\text T}$ and ${\mathtt{Z}}^{\text L}$, respectively.
Unsurprisingly, the Shor code is indeed the $(3,3)$ ${\mathtt{Z}}$-gauge code (with logical ${\mathtt{X}}$ and ${\mathtt{Z}}$ swapped) and it happens that the Leung \emph{et al.}\ code is the $(2,2)$ ${\mathtt{Z}}$-gauge code. By using the stabilizers to separately perform standard correction of the repetition code for bit flips and phase flips, the $(n,m)$ code can protect its single encoded qubit from $\lfloor(n-1)/2\rfloor$ ${\mathtt{Z}}$ errors and $\lfloor(m-1)/2\rfloor$ ${\mathtt{X}}$ errors, irrespective of gauge. We refer to this procedure as standard correction.
The ${\mathtt{Z}}$ gauge codewords can be simply described as follows. The ${\mathtt{Z}}{\mathtt{Z}}$ gauge operators enforce the parity checks of the (${\mathtt{Z}}$-basis) $m$-fold repetition code in each row, so that the codewords must be spanned by $\ket{\ubar{0}}=\ket{0}\otimes \cdots\otimes \ket{0}$ or $\ket{\ubar{1}}=\ket{1}\otimes\cdots\otimes \ket{1}$, where each $\ket{\ubar{i}}$ consists of $m$ qubits. Now it is easy to build up the codewords recursively by adding rows.
For $|{\bar i'}\rangle$ a logical ${\mathtt{Z}}$ codeword of the $(n-1,m)$ code, the codewords of the $(n,m)$ code are just
$|{\bar 0}\rangle =|{\ubar{0}}\rangle\otimes |{\bar 0'}\rangle+\ket{\ubar{1}}\otimes |\bar 1'\rangle$ and
$|{\bar 1}\rangle =\ket{\ubar{0}}\otimes |{\bar 1'}\rangle+\ket{\ubar{1}}\otimes |\bar 0'\rangle$. This can be more succinctly expressed by writing the $(n,m)$ encoded state $\ket{\bar\varphi}$ in terms of the same state $\ket{\bar\varphi'}$ encoded in the $(n-1,m)$ code and its logical ${\mathtt{X}}$ operator as \begin{align} \label{eq:recursion} \ket{\bar\varphi}=\ket{\ubar{0}}\otimes \ket{\bar\varphi'}+\ket{\ubar{1}}\otimes \bar {\mathtt{X}}'\ket{\bar \varphi'}\,. \end{align}
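The recursion \eqref{eq:recursion} is easy to implement directly on (unnormalized) state vectors; the following Python sketch builds the ${\mathtt{Z}}$-gauge codewords row by row and checks that the $(2,2)$ code reproduces the Leung \emph{et al.}\ codewords $\ket{0000}+\ket{1111}$ and $\ket{0011}+\ket{1100}$:

```python
import numpy as np

def codewords(n, m):
    """Unnormalized logical |0>,|1> of the (n,m) Z-gauge Bacon-Shor code."""
    z0 = np.zeros(2 ** m); z0[0] = 1    # |00...0> on one row of m qubits
    z1 = np.zeros(2 ** m); z1[-1] = 1   # |11...1>
    b0, b1 = z0.copy(), z1.copy()       # the (1,m) code is a repetition code
    for _ in range(n - 1):              # add one row at a time via the recursion
        b0, b1 = (np.kron(z0, b0) + np.kron(z1, b1),
                  np.kron(z0, b1) + np.kron(z1, b0))
    return b0, b1

b0, b1 = codewords(2, 2)
assert set(np.flatnonzero(b0)) == {0b0000, 0b1111}  # |0000> + |1111>
assert set(np.flatnonzero(b1)) == {0b0011, 0b1100}  # |0011> + |1100>
```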
\section{Error correction} \label{sec:ec} \subsection{Damping of Bacon-Shor codewords} Duan \emph{et al.}\ observed that the $(t+1,t+1)$ code satisfies the error-correcting conditions to order $t$~\cite{duan_multi-error-correcting_2010}. It is important to note that this only holds in ${\mathtt{Z}}$ gauge. Consider the $(2,2)$ code in ${\mathtt{X}}$ gauge, whose codewords are $\ket{\hat 0}=\ket{0000}+\ket{0101}+\ket{1010}+\ket{1111}$ and $\ket{\hat 1}=\ket{0011}+\ket{0110}+\ket{1001}+\ket{1100}$. It is easy to see that the error-correcting conditions are not satisfied for single jumps. If the first qubit is damped, the state resulting from $\ket{\hat 1}$ will contain a term $\ket{0001}$, but this term will also be present in the result of damping the second qubit of $\ket{\hat 0}$.
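This failure is simple to verify directly (a Python sketch; qubits are indexed from the left and the unnormalized jump $A_1\propto\ket{0}\bra{1}$ is applied to a single qubit):

```python
import numpy as np

def damp(state, qubit, nq=4):
    """Apply the (unnormalized) jump |0><1| to one qubit of an nq-qubit state."""
    out = np.zeros_like(state)
    bit = 1 << (nq - 1 - qubit)           # qubit 0 is the leftmost
    for i in np.flatnonzero(state):
        if i & bit:                        # only |1> components survive damping
            out[i ^ bit] += state[i]       # ...and are flipped to |0>
    return out

hat0 = np.zeros(16); hat0[[0b0000, 0b0101, 0b1010, 0b1111]] = 1
hat1 = np.zeros(16); hat1[[0b0011, 0b0110, 0b1001, 0b1100]] = 1

e1 = damp(hat1, 0)  # damp the first qubit of |hat 1>
e0 = damp(hat0, 1)  # damp the second qubit of |hat 0>
# both corrupted states contain |0001>, so they are not orthogonal:
assert e1[0b0001] != 0 and e0[0b0001] != 0 and e1 @ e0 != 0
```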
Before proceeding to the details of our two correction schemes, let us first consider which error operators are relevant to order $t$. Each qubit is afflicted with either Kraus operator $A_0$ or $A_1$, and since damping is linear in $\gamma$ we need only consider products of Kraus operators with at most $t$ factors of $A_1$. As $A_0$ effectively contributes phase errors starting at second order in $\gamma$, error operators with $k\leq t$ factors of $A_1$ can be regarded as $k$ damping errors on the corresponding qubits and no more than $\lfloor(t-k)/2\rfloor$ phase flips on the remaining qubits. For any given $A_0$ factor we are really only concerned with the first term in the expansion given in \eqref{eq:A0}, since using the higher order terms only reduces the number of possible phase errors on other qubits.
It is also useful to examine the effect of damping errors on ${\mathtt{Z}}$-gauge Bacon-Shor codewords. Consider the effect of damping on a particular row. Since the codewords are constructed from $\ket{\ubar{0}}$ and $\ket{\ubar{1}}$, by symmetry it suffices to consider damping of the first $k$ qubits by the error operator $E_k=A_1^{\otimes k}\otimes A_0^{\otimes {m-k}}$, for some $0<k\leq m$. From \eqref{eq:recursion} we have $E_k\ket{\bar\varphi}=\ket{0}^{\otimes k}\otimes A_0^{\otimes {m-k}}\ket{1}^{\otimes m-k}\otimes \bar {\mathtt{X}}'\ket{\bar\varphi'}$. Thus, damping decouples the given row from the codeword and maps its logical information to the Bacon-Shor code with one fewer row, applying a logical bit flip along the way. Moreover, it manifests itself by altering at least one of the parity checks in the row. Logical ${\mathtt{Z}}$ is now $-{\mathtt{Z}}^{\text L}_{(n-1,m)}$, the left column of ${\mathtt{Z}}$ operators in the $(n-1,m)$ code, and the logical ${\mathtt{X}}$ is just ${\mathtt{X}}^{\text T}_{(n-1,m)}$. Damping of further rows moves the quantum information into fewer and fewer rows, applying a logical bit flip each time.
\subsection{Clifford decoder} \label{sec:decoderA} In light of the above, the following procedure will recover the quantum information encoded in the $(n,m)$ code to order $\min(n-1,m-1)$ in $\gamma$. This is an explicit scheme enabling the $(t+1,t+1)$ code to be perfectly recovered to order $t$. First, the ${\mathtt{Z}}{\mathtt{Z}}$ gauge operators are measured to identify rows afflicted with damping errors. Next the remaining $n'$ undamped rows are treated as an $(n',m)$ codeword, and its ${\mathtt{X}}$ stabilizers measured. After performing standard phase error correction, a logical ${\mathtt{X}}$ is applied if $n-n'$ is odd. The original encoded information can then be obtained by reversing the $(n',m)$ encoding circuit.
To verify the error correcting capability of this scheme, first observe that $E_k$ will produce a state with nontrivial syndrome unless $k=m$, i.e.\ unless all qubits in a row are damped. We therefore require $m\geq t+1$ in order to avoid this possibility at order $t$. Similarly, as the encoded information is lost if at least one qubit in each row is damped, we also require $n\geq t+1$. Given that $n'$ undamped rows remain, to reverse the action of $A_0$ on each qubit to order $t$, by the usual discretization argument~\cite{shor_scheme_1995,gottesman_introduction_2010} it is sufficient to correct at most $\lfloor (t-n+n')/2\rfloor$ phase errors. Standard phase error correction achieves this for any $n'\leq n$ since we have already chosen $n\geq t+1$. Finally, the last step reverses the logical bit flip acquired from each damped row.
Recovery to order $\min(n-1,m-1)$ is precisely the same performance the code has against erasures~\cite{gottesman_stabilizer_1997}. The reason this works for amplitude damping is that we need only detect damping events, not determine precisely which qubits were damped. Hence, the bit-flip error-detecting properties of the code suffice. Phase errors, meanwhile, are handled using error correction, but there are essentially only half as many phase errors to correct. Note that neither this nor any procedure can recover $(t+1,t+1)$ codewords to order $t$ from the twirled channel $\mathcal N_\gamma^{\text{Pauli}}$. Doing so would require correcting $t$ bit flips in a single row, which is impossible for the $(t+1)$-bit repetition code.
\subsection{Clifford codeword correction} Combining the above decoder with one-bit teleportation~\cite{zhou_methodology_2000} yields a simple Clifford correction circuit which restores the input codeword without completely decoding and re-encoding it. This is more amenable to fault-tolerant implementation. The action of the one-bit teleportation circuit at the logical level is depicted in Figure~\ref{fig:teleport}, while for encoded states we interchange the order of error correction and teleportation steps as follows.
\begin{figure}
\caption{ One-qubit teleportation circuit.}
\label{fig:teleport}
\end{figure}
First prepare $\ket{\bar +}_B$ in the new block $B$. Then measure $\tilde {\mathtt{Z}}$, the product of ${\mathtt{Z}}$ on qubits in the first column of $A$ and the first column of $B$. Next measure the ${\mathtt{Z}}{\mathtt{Z}}$ gauge operators on $A$. Depending on the results, measure all qubits in each undamped row in the ${\mathtt{X}}$ basis and the first qubit in each damped row in the ${\mathtt{Z}}$ basis. Finally, the necessary Pauli correction operators are determined from the measurement results. The value of $\bar {\mathtt{Z}}_A\bar {\mathtt{Z}}_B$ is given by $\bar {\mathtt{Z}}_A\bar {\mathtt{Z}}_B=(-1)^\ell \tilde {\mathtt{Z}}$, where $\ell$ is the number of rows in which the first qubit is found to be in state $\ket 0$. Meanwhile, the value of $\bar {\mathtt{X}}_A$ is found by multiplying the individual ${\mathtt{X}}$ measurement results in each undamped row (treating them as $\pm 1$) and then taking the majority.
To see that this procedure indeed restores the original codeword, first note that the output is certainly a valid codeword, since measurement of $\tilde {\mathtt{Z}}$ commutes with the ${\mathtt{Z}}{\mathtt{Z}}$ gauge operators and ${\mathtt{X}}$ stabilizers in block $B$. Thus, we need only ensure that the logical information is properly extracted. The logical $\bar {\mathtt{X}}_A$ operator is determined in the usual way from individual ${\mathtt{X}}$ measurements and standard error correction. Matters are more subtle for logical $\bar {\mathtt{Z}}_A$. Unlike the description above, here it makes sense to express $\bar {\mathtt{Z}}$ in terms of ${\mathtt{Z}}^{\text L}_{(n,m)}$ and not in terms of the logical operator of the subcode of undamped rows. This is easily done. Suppose one row is damped, so that, as above, $\bar {\mathtt{Z}}=-{\mathtt{Z}}^{\text L}_{(n-1,m)}$. If the first qubit in the damped row is damped, then it is necessarily in the ${\mathtt{Z}}=+1$ eigenstate $\ket 0$, meaning $\bar {\mathtt{Z}}=-{\mathtt{Z}}\otimes {\mathtt{Z}}^{\text L}_{(n-1,m)}=-{\mathtt{Z}}^{\text L}_{(n,m)}$. On the other hand, if the first qubit was not damped, it is left in the state $\ket 1$ by virtue of being in a damped row, and we have $\bar {\mathtt{Z}}={\mathtt{Z}}^{\text L}_{(n,m)}$. A similar argument applies to any number of damped rows, and shows that the procedure above extracts the correct value of $\bar {\mathtt{Z}}_A\bar {\mathtt{Z}}_B$ from the measurement of $\tilde {\mathtt{Z}}$.
\subsection{Syndrome correction} \label{sec:synd}
A simpler, purely syndrome-based correction scheme is also possible, one where the only quantum operations needed are stabilizer measurements and Pauli corrections. This method has less noise tolerance, but is capable of correcting $\mathcal N_\gamma^{\text{Pauli}}$ as well as $\mathcal N_\gamma$. The basic idea is to use the ${\mathtt{Z}}{\mathtt{Z}}$ gauge operators to identify the position of damped qubits, i.e.\ to use the ${\mathtt{Z}}{\mathtt{Z}}$ gauge operators for error correction and not just detection. Correlations between bit and phase errors in $\mathcal N_\gamma^{\text{Pauli}}$ can be utilized to enable recovery to order $\min(n-1,\lfloor (m-1)/2\rfloor)$, meaning $(t+1,2t+1)$ codewords can be perfectly restored to order $t$.
The scheme is as follows. First, the ${\mathtt{Z}}{\mathtt{Z}}$ gauge and ${\mathtt{X}}$-type stabilizer operators are measured. The measurement results of the former are then used to perform standard bit-flip error correction in each damaged row. Next, standard phase-flip error correction is performed on the $(n',m)$ code consisting of the undamaged rows; the necessary ${\mathtt{X}}$-type syndromes are computed from the ${\mathtt{X}}$ stabilizer measurement results. Finally, any suitable pattern of ${\mathtt{I}}$ and ${\mathtt{Z}}$ is applied to the first qubits of damaged rows to ensure all remaining ${\mathtt{X}}$ syndromes are reset to $+1$.
The ${\mathtt{Z}}{\mathtt{Z}}$ syndromes enable the decoder to identify and undo the actual bit-flip error pattern as usual, and therefore recovery to order $t$ requires a code of at least $2t+1$ columns. Correction of phase errors, however, does not proceed by attempting to identify the precise phase error pattern. Note that in ${\mathtt{Z}}$ gauge we may as well regard any (odd number of) phase flips in a row as a phase flip on the first qubit, and so phase error correction can be reduced to recovery from the usual one-dimensional repetition code. But, and here is the distinction from the usual correction scheme, the error model does not just assume independent noise. Instead, rows containing bit-flipped qubits are as likely as not to be afflicted by phase flips, while phase flips occur at a lower rate in undamaged rows. Hence the damaged rows should simply be excluded in majority-vote decoding. Therefore, we can recover the quantum codeword by performing error correction on the undamaged rows and then resetting the ${\mathtt{X}}$ stabilizers involving the remaining rows by action on the damaged rows as needed. Following the calculation in \S\ref{sec:decoderA}, this requires at least $t+1$ rows to recover from order $t$ error events.
\section{Fault-tolerant implementation} \label{sec:ft}
Now we turn to the question of whether the qubit savings afforded by either of the two correction schemes can be realized when damping noise also afflicts the correction operations themselves. There are two immediate issues to confront. One is the potential for catastrophic error propagation in the correction implementation itself, excluding ancillas. The other is how damping noise on ancilla systems and its subsequent propagation affects correctability in future correction cycles. We do not attempt a complete analysis of fault-tolerant implementation. Rather, our goal is to show that neither of these issues is immediately fatal to the proposed correction schemes, so there is indeed hope for realizing the savings offered by adapting the correction to the noise model.
First, though, it is important to note that it is not possible to reach arbitrarily low logical noise rates by choosing large enough $n$ and $m$. As with Pauli noise~\cite{napp_optimal_2013}, Bacon-Shor codes have no nontrivial noise threshold against amplitude damping even for ideal error correction. Consider the event in which every row has at least one damped qubit. This occurs with probability $(1-(1-\gamma)^m)^n$ for the $(n,m)$ code, since the probability of any qubit in a row being damped is $1-(1-\gamma)^m$. Taking $m=n$, we even have $\lim_{n\to \infty} (1-(1-\gamma)^n)^n=1$, which is easily seen by employing l'H\^opital's rule on the logarithm of the probability. The increased error correcting power of larger codes is outstripped by the number of likely errors needing correction, and so code concatenation or some combination with another coding scheme is thus required to have a noise threshold. Nonetheless, encoding into small codes will result in a lower logical error rate than no encoding at all, just as observed for Pauli noise in \cite{napp_optimal_2013,brooks_fault-tolerant_2013}. We leave finding the optimal code size for a given damping rate $\gamma$ as an open question.
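The absence of a threshold is also visible numerically: even for small $\gamma$, the probability $(1-(1-\gamma)^n)^n$ that every row of an $(n,n)$ code contains a damped qubit tends to $1$ as $n$ grows (a Python sketch with an illustrative value of $\gamma$):

```python
def p_all_rows_damped(gamma, n, m):
    """Probability that each of the n rows contains at least one damped qubit."""
    return (1 - (1 - gamma) ** m) ** n

gamma = 0.01
small = p_all_rows_damped(gamma, 5, 5)
large = p_all_rows_damped(gamma, 5000, 5000)
assert small < 1e-6   # for small codes the failure event is rare...
assert large > 0.99   # ...but for large n it becomes all but certain
```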
\subsection{Catastrophic errors} Catastrophic errors are not particularly an issue for syndrome correction, as it can be understood in the usual Pauli error framework. Indeed, this fact means that we can appeal to standard fault-tolerant methods such as Steane or Knill error correction and passive implementation of Pauli corrections by tracking the ``Pauli frame''~\cite{steane_active_1997,knill_quantum_2005,gottesman_introduction_2010}. Nonetheless, these methods do not guarantee recovery of a $(t+1,2t+1)$ code to order $t$, as a proper accounting of the contributions from the various possible faults is still needed. This issue is investigated in the next section.
For the Clifford decoder and teleportation correction, however, catastrophic errors are a major concern. Indeed, damping of even a single qubit of an arbitrarily-sized codeword can cause a logical bit flip error in the Clifford decoder if it occurs between the ${\mathtt{Z}}{\mathtt{Z}}$ gauge and ${\mathtt{X}}$ stabilizer measurements. Take the $(2,2)$ code, for instance, and imagine that one of the first two qubits is damped after the ${\mathtt{Z}}{\mathtt{Z}}$ operators are found to both have the value $+1$. The stabilizers are now $-{\mathtt{Z}}{\mathtt{Z}}{\mathtt{I}}{\mathtt{I}}$, ${\mathtt{I}}{\mathtt{I}}{\mathtt{Z}}{\mathtt{Z}}$, and $(-1)^j{\mathtt{Z}}{\mathtt{I}}{\mathtt{I}}{\mathtt{I}}$, where $j\in\{0,1\}$ indicates which qubit was damped, while the logical operators can be expressed as ${\mathtt{I}}{\mathtt{I}}{\mathtt{X}}{\mathtt{X}}$ and $-(-1)^j{\mathtt{Z}}{\mathtt{I}}{\mathtt{Z}}{\mathtt{I}}$. Measurement of ${\mathtt{X}}{\mathtt{X}}{\mathtt{X}}{\mathtt{X}}$ will remove $(-1)^j{\mathtt{Z}}{\mathtt{I}}{\mathtt{I}}{\mathtt{I}}$ as a stabilizer, leaving us with no access to the value of $j$, and so error-correction fails. The difficulty is that the location of damping determines whether the encoded bit value is flipped or not, and the ${\mathtt{X}}$ stabilizer measurement destroys this information, even in codes of arbitrary size. Note that the syndrome decoder avoids this problem by using the ${\mathtt{Z}}{\mathtt{Z}}$ checks to determine where damping occurred.
A similar problem plagues teleportation correction. One source of trouble is damping occurring between the $\tilde {\mathtt{Z}}$ and ${\mathtt{Z}}{\mathtt{Z}}$ measurements. We can think of the $\tilde {\mathtt{Z}}$ measurement as transferring the value of ${\mathtt{Z}}^{\text L}_A$ to the second block, and this value will be either $\pm \bar {\mathtt{Z}}$ depending on whether any first qubits are damped before or after the $\tilde {\mathtt{Z}}$ measurement itself. But in the scheme we have no way of knowing which is the case. This problem can be avoided by measuring $\tilde {\mathtt{Z}}$ in all the columns and making use of the fact that the logical operator could be defined using any column, not just the first. (This will not unduly disturb $\bar {\mathtt{X}}_B$ since the product of $\tilde {\mathtt{Z}}$ in two columns is a ${\mathtt{Z}}$-type stabilizer.) After the remaining measurements of the correction cycle, we can determine which column is unaffected by damping and take it as the value of $\bar {\mathtt{Z}}_A\bar {\mathtt{Z}}_B$. To order $t$ there will definitely be at least one good column, since there are $t+1$ in total.
Another issue is damping subsequent to the ${\mathtt{Z}}{\mathtt{Z}}$ checks. This will lead to a random value of the computed ${\mathtt{X}}$ value of the affected row, and thus behaves as a phase error. Hence phase errors now essentially occur at order $\gamma$, not just $\gamma^2$, which implies that a $(2t+1,t+1)$ code will be needed to recover to order $t$. Though note that this should be understood as a lower bound on the code size, since we have not considered the effects of multiple damping events distributed over the different stages of the correction procedure. We will see in the next section that the additional rows will anyway be needed to combat phase errors arising from damping of ancilla qubits.
\subsection{Error propagation}
The second issue is whether subsequent error correction cycles can deal with the effects of damping noise in the ancillas needed for correction in earlier cycles. Let us examine this effect for single-qubit ancilla schemes for measuring the ${\mathtt{Z}}{\mathtt{Z}}$ check operators, since this possibility is indeed one appeal of Bacon-Shor codes in the first place.
The most straightforward measurement setup is to prepare a qubit in the $\ket 0$ state, sequentially apply \textsc{cnot} gates from the two data qubits to the ancilla, and then measure the ancilla in the standard basis. However, it is not difficult to work out that damping of the ancilla qubit after the first \textsc{cnot} will project the data qubit onto the $\ket 1$ state. The Kraus operator for this case is simply $(\mathbbm 1\otimes A_1)U_{\textsc{cnot}}(\mathbbm 1\otimes \ket 0)=\sqrt{\gamma}\ketbra 1\otimes \ket 0$, where the first qubit is the control and the second the target. This effectively introduces a phase error in the data block, since an otherwise undamaged codeword $\ket{\bar\varphi}$ will be transformed into $\ket{\ubar{1}}\otimes \bar {\mathtt{X}}'\ket{\bar\varphi'}$. Logical $\bar {\mathtt{Z}}$ is unaffected, but both ${\mathtt{X}}$ stabilizers involving the damaged row will be flipped with probability one half, just as with a phase error. Moreover, the output state is such that phase error correction will catch this error and recover.
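The Kraus operator identity above can be checked by direct matrix multiplication (a Python sketch; the $4\times 2$ map embeds the fresh ancilla in $\ket 0$):

```python
import numpy as np

gamma = 0.1
A1 = np.array([[0, np.sqrt(gamma)], [0, 0]])
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])   # data qubit controls, ancilla is the target
ket0 = np.array([[1.0], [0.0]])

emb = np.kron(np.eye(2), ket0)             # isometry preparing the ancilla in |0>
K = np.kron(np.eye(2), A1) @ CNOT @ emb    # ancilla damped after the first CNOT

# equals sqrt(gamma) |1><1| (x) |0>: the data qubit is projected onto |1>
proj1 = np.array([[0, 0], [0, 1]])
assert np.allclose(K, np.sqrt(gamma) * np.kron(proj1, ket0))
```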
This same effect occurs in the Pauli noise model, since a damping event is now a bit flip, which is accompanied by a phase error that will propagate into the data block. And the same conclusion holds for a measurement setup using a single ancilla qubit in the $\ket +$ state and \textsc{cphase} gates.
Ultimately, in any of these cases, phase errors effectively occur in the data block at order $\gamma$, reducing the error-resilience of the code. For the syndrome decoder, this would imply that a $(2t+1,2t+1)$ code is needed, wiping out any advantage over the standard syndrome decoder. Perhaps this issue can be avoided by using Steane error correction. The one-bit teleportation scheme still retains an advantage over the standard decoder, as only a $(2t+1,t+1)$ code is needed.
\section{Conclusions}
We have shown that Bacon-Shor codes offer the possibility of considerably reduced qubit overhead for error correction against amplitude damping noise. From a theoretical point of view, it is interesting that the error correcting conditions can be met by a Clifford circuit, since neither is the channel a Pauli channel, nor does the circuit correct its Pauli twirl. From a more applied point of view our work invites a more detailed analysis of fault-tolerant implementation to determine if the qubit savings could be realized in a more realistic setting, such as combined amplitude damping and dephasing noise. Moreover, as adapting to the amplitude damping channel is relatively simple, this raises the question of whether similar gains can be found for other codes, e.g.\ surface codes. In their investigation of optimal decoding of small surface codes~\cite{darmawan_tensor-network_2016}, Darmawan and Poulin recently observed that non-square lattices with effectively half as many qubits performed better than full square lattices. It seems plausible that significant overhead reductions are also possible for simple correction algorithms.
\noindent{\bf Acknowledgments.} This work was supported by the Swiss National Science Foundation (SNSF) via the National Centre of Competence in Research ``QSIT'', and by the European Commission via the project ``RAQUEL''. AP thanks the NWO WISE grant and the La Caixa Foundation for support.
\printbibliography[heading=bibintoc,title=References]
\end{document} |
\begin{document}
\maketitle
\begin{abstract} We show that there exist real quadratic maps of the interval whose attractors are computationally intractable. This is the first known class of such natural examples. \end{abstract}
\section{Introduction} A simple dynamical system, which is easy to implement numerically, can nevertheless exhibit chaotic dynamics. This makes it impractical to compute the behaviour of an individual trajectory of the system over an extended period of time: small computational errors are magnified very rapidly. Thus, the modern paradigm of the numerical study of chaos is the following: since the simulation of an individual orbit for an extended period of time does not make practical sense, one should study the limit set of a typical orbit (both as a spatial object and as a statistical distribution). Such limit sets are known as {\it attractors}; we refer the reader to \cite{Mil} for a detailed discussion of the relevant definitions.
From the theoretical computability point of view, the principal problem thus becomes:
\noindent {\it Suppose that a dynamical system with a single attractor can be numerically simulated. Can its attractor be effectively computed? }
\noindent
The first author with M. Hoyrup and S. Galatolo constructed in \cite{HRG} a computable map of the unit circle for which the orbit of every point accumulates in a set that is not effectively computable. However, the dynamics restricted to this set is not transitive, and the class of maps one obtains is rather artificial.
The second author and M.~Braverman obtained a natural class of counter-examples in the setting of one-dimensional complex dynamics. Recall that for a rational map $R(z)$ of $\hat\mathbb{C}$ with $\operatorname{deg} R\geq 2$, the Julia set $J(R)$ is the {\it repeller} (that is, the attractor for the multi-valued dynamics of $R^{-1}$). In a series of works \cite{BY1,BY2,BY3} they showed that there exist quadratic polynomials $P_c(z)=z^2+c$ with {\it computable} values of $c$ whose Julia sets $J(P_c)$ cannot be effectively computed.
In this work we present a different class of examples which are even more striking. Indeed, they occur in the same quadratic family $P_c$ but this time with real values of $c$ and viewed as maps of the interval, as opposed to maps of the complex plane. The study of such dynamical systems, known as {\it unimodal maps}, has been the cornerstone of one-dimensional dynamics, a subject that blossomed in the 1970s and has been at the center of attention since.
As the reader will see below, these maps have attractors (in the classical sense) with a well-understood topological structure. In contrast with Julia sets, given access to the value of the parameter $c$, such an attractor is {\it always} computable. However, this is only true in theoretical terms. Our main result is:
\noindent
{\sl Given an arbitrary lower bound on time complexity $f:{\mathbb N}\to{\mathbb N}$, we can produce a parameter $c$ such that any algorithm which computes the attractor of the unimodal map $P_c$ has a running time worse than $f$.}
\noindent Of course, for a sufficiently ``bad'' lower bound, this renders the computation impossible in practice.
Similarly to the case of non-computable quadratic Julia sets, our construction is quite delicate, and involves modern tools of Complex Dynamics, such as parabolic implosion and renormalization.
\section{Preliminaries}
\subsection*{Computational Complexity of sets}
We give a very brief summary of relevant notions of Computability Theory and Computable Analysis. For a more in-depth introduction, the reader is referred to e.g. \cite{BY3}. As is standard in Computer Science, we formalize the notion of an algorithm as a {\it Turing Machine} \cite{Tur}.
Let us begin by giving the modern definition of the notion of computable real number, which goes back to the seminal paper of Turing \cite{Tur}. By identifying $\mathbb{Q}$ with $\mathbb{N}$ through some effective enumeration, we can assume algorithms can operate on $\mathbb{Q}$. Then a real number $x\in{\mathbb R}$ is called \defin{computable} if there is an algorithm $M$ which, upon input $n$, halts and outputs a rational number $q_n$ such that $|q_n-x|<2^{-n}$.
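For instance, $\sqrt 2$ is computable in this sense: the following Python sketch produces, on input $n$, a rational within $2^{-n}$ of $\sqrt 2$ using only exact integer arithmetic:

```python
import math
from fractions import Fraction

def sqrt2_approx(n):
    """Return a rational q_n with |q_n - sqrt(2)| < 2**-n."""
    N = 2 ** (n + 1)
    # isqrt(2*N*N) = floor(sqrt(2)*N), so the error is below 1/N = 2**-(n+1)
    return Fraction(math.isqrt(2 * N * N), N)

q = sqrt2_approx(20)
assert abs(float(q) - math.sqrt(2)) < 2 ** -20
```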
Algebraic numbers and familiar constants such as $\pi$, $e$, or the Feigenbaum constant are computable real numbers. However, the set of all computable real numbers ${\mathbb R}_C$ is necessarily countable, as there are only countably many Turing Machines.
Computability of compact subsets of ${\mathbb R}^k$ is defined by following the same principle. Let us say that a point in ${\mathbb R}^k$ is a {\it dyadic rational with denominator} $2^{-n}$ if it is of the form $\bar v\cdot 2^{-n}$, where $\bar v\in{\mathbb Z}^k$ and $n\in{\mathbb N}$. Recall that the {\it Hausdorff distance} between two compact sets $K_1$, $K_2$ is $$\operatorname{dist}_H(K_1,K_2)=\inf\{\epsilon>0: K_1\subset U_\epsilon(K_2)\text{ and }K_2\subset U_\epsilon(K_1)\},$$ where $U_\epsilon(K)=\bigcup_{z\in K}B(z,\epsilon)$ stands for an $\epsilon$-neighbourhood of a set.
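For finite point sets the infimum is attained, and the Hausdorff distance reduces to a simple max-min computation (a Python sketch with Euclidean balls):

```python
import math

def hausdorff(K1, K2):
    """Hausdorff distance between two finite point sets in R^k."""
    def h(A, B):  # directed distance: how far A sticks out of B
        return max(min(math.dist(a, b) for b in B) for a in A)
    return max(h(K1, K2), h(K2, K1))

K1 = [(0.0, 0.0), (1.0, 0.0)]
K2 = [(0.0, 0.0)]
assert hausdorff(K1, K2) == 1.0   # K1 has a point at distance 1 from K2
assert hausdorff(K1, K1) == 0.0
```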
\begin{defn}\label{1}We say that a compact set $K\Subset{\mathbb R}^k$ is {\it computable} if there exists an algorithm $M$ with a single input $n\in{\mathbb N}$, which outputs a finite set $C_n\subset{\mathbb R}^k$ of dyadic rational points such that $$\operatorname{dist}_H(C_n,K)<2^{-n}.$$ \end{defn}
An equivalent way of defining computability of sets is the following. For $\bar x=(x_1,\ldots,x_k)\in{\mathbb R}^k$ let the norm $||\bar x||_1$ be given by
$$||\bar x||_1=\max_{1\leq i\leq k}|x_i|.$$ \begin{defn}
\label{def-local}
A compact set $K\Subset{\mathbb R}^k$ is computable if there exists an algorithm $M$ with a single input $n\in{\mathbb N}$ and a dyadic rational point $x$ with denominator $2^{-n}$, such that the following holds. $M$ outputs $0$ if $x$ is at least $2\cdot 2^{-n}$-far from $K$ in $||\cdot||_1$ norm, outputs $1$ if $x$ is at most $2^{-n}$-far from $K$, and outputs either $0$ or $1$ in the ``borderline'' case. \end{defn} In the familiar context of $k=2$, such an algorithm can be used to ``zoom into'' the set $K$ on a computer screen with $W\times H$ square pixels to draw an accurate picture of the portion of $K$ inside a rectangle of width $W\cdot 2^{-n}$ and height $H\cdot 2^{-n}$. $M$ decides which pixels in this picture have to be black (if their centers are $2^{-n}$-close to $K$) or white (if their centers are $2\cdot 2^{-n}$-far from $K$), allowing for some ambiguity in the intermediate case.
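To make Definition~\ref{def-local} concrete, here is a sketch (our illustration, with a deliberately simple choice of $K$) of such a pixel-decision procedure for the segment $K=[0,1]\times\{0\}\subset{\mathbb R}^2$, whose max-norm distance function is explicitly computable.

```python
def dist_inf_to_segment(x, y):
    """Max-norm distance from (x, y) to K = [0,1] x {0}."""
    nx = min(max(x, 0.0), 1.0)        # nearest point of K in the x-coordinate
    return max(abs(x - nx), abs(y))

def pixel(x, y, n):
    """Decision procedure in the spirit of Definition 2 for K = [0,1] x {0}:
    outputs 1 when (x, y) is 2^{-n}-close to K, 0 when it is 2*2^{-n}-far,
    and may answer either way in between (here: 1 below 1.5 * 2^{-n})."""
    return 1 if dist_inf_to_segment(x, y) <= 1.5 * 2.0 ** -n else 0
```

The threshold $1.5\cdot 2^{-n}$ lies between the two distances of the definition, so both requirements are met while the borderline case is resolved arbitrarily.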
Let $C=\operatorname{dist}_H(K,\{0\})$. For an algorithm $M$ as in Definition~\ref{def-local} let us denote by $T_{M}(n)$ the supremum of running times of $M$ over all dyadic points with denominator $2^{-n}$ which are inside the ball of radius $2C$ centered at the origin: this is the computational cost of using $M$ for deciding the hardest pixel at the given resolution.
\begin{defn} We say that a function $T:{\mathbb N}\to{\mathbb N}$ is a {\it lower bound} on time complexity of $K$ if for any $M$ as in Definition~\ref{def-local} there exists an infinite sequence $\{n_i\}$ such that $$T_M(n_i)\geq T(n_i). $$ Similarly, we say that $T(n)$ is an {\it upper bound} on time complexity of $K$ if there exists an algorithm $M$ as in Definition~\ref{def-local} such that for all $n\in{\mathbb N}$ $$T_M(n)\leq T(n).$$ \end{defn}
In this paper, we will be interested in the time complexity of attractors of quadratic maps of the form $x^2 + c$, with $c\in {\mathbb R}$. As is standard in computing practice, we will assume that the algorithm can read the value of $c$ externally to produce a zoomed in picture of the attractor. More formally, let us denote ${\cal D}_n\subset {\mathbb R}$ the set of dyadic rational numbers with denominator $2^{-n}$. We say that a function $\phi:{\mathbb N}\to{\mathbb Q}$ is an {\it oracle} for $c\in {\mathbb R}$ if for every $m\in {\mathbb N}$ $$\phi(m)\in {\cal D}_m\text{ and }d(\phi(m),c)<2^{-(m-1)}.$$ We amend our definitions of computability and complexity of a compact set $K$ by allowing {\it oracle Turing Machines} $M^\phi$ where $\phi$ is any function as above. On each step of the algorithm, $M^\phi$ may read the value of $\phi(m)$ for an arbitrary $m\in{\mathbb N}$.
This approach allows us to separate the questions of computability and computational complexity of a parameter $c$ from that of the attractor. It is crucial to note that reading the values of $\phi$ comes with a computational cost:
\noindent {\it querying $\phi$ with precision $m$ counts as $m$ time units. In other words, it takes
$m$ ticks of the clock to read the first $m$ dyadic digits of $c$. }
\noindent This is again in a full agreement with computing practice: to produce a verifiable picture of a set, we have to use the ``long arithmetic'' for constants, which are represented by sequences of dyadic bits. The computational cost grows with the precision of the computation, and manipulating a single bit takes one unit of machine time.
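A minimal sketch of such an oracle (here the parameter is itself an exact rational, standing in for an arbitrary real; the cost model of the queries is of course not visible in the code):

```python
from fractions import Fraction

def make_oracle(c):
    """Oracle phi for the parameter c: phi(m) is a dyadic rational with
    denominator 2^m satisfying |phi(m) - c| < 2^{-(m-1)}."""
    c = Fraction(c)
    def phi(m):
        # Round c to the nearest multiple of 2^{-m}; error <= 2^{-(m+1)}.
        v = round(c * 2 ** m)
        return Fraction(v, 2 ** m)
    return phi
```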
\subsection*{Attractors of quadratic maps of the interval and the statement of the main result} Consider a real quadratic polynomial $P_c(x)=x^2+c$, with $c\in[-2,-1]$. We denote $$I_c\equiv [c,P_c(c)].$$ It is easy to see that $P_c(I_c)=I_c$; we will refer to the invariant interval $I_c$ as the {\it dynamical interval} of $P_c$. We denote $\Omega(P_c)$ the postcritical set $$\Omega(P_c)\equiv\overline{\cup_{n\geq 0}P_c^n(0) }.$$
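The invariance $P_c(I_c)=I_c$ is easy to observe numerically; the following informal sketch (not part of the argument) checks that the critical orbit never leaves the dynamical interval.

```python
def dynamical_interval(c):
    """I_c = [c, P_c(c)] for P_c(x) = x^2 + c."""
    return (c, c * c + c)

def check_invariance(c, num_iter=200):
    """Numerically check that the critical orbit stays in I_c,
    a consequence of P_c(I_c) = I_c for c in [-2, -1]."""
    lo, hi = dynamical_interval(c)
    x = 0.0
    for _ in range(num_iter):
        x = x * x + c
        if not (lo - 1e-12 <= x <= hi + 1e-12):
            return False
    return True
```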
Let us say that $P_c$ is {\it infinitely renormalizable} if there exists an infinite nested sequence of cycles of periodic sub-intervals of $I_c$: $$\mathcal{C}_0\supset \mathcal{C}_1\supset \mathcal{C}_2\supset\cdots\supset \omega(0)$$ with increasing periods. We say that the {\it Feigenbaum-like Cantor set} of an infinitely renormalizable polynomial is the intersection $\cap_{k\in{\mathbb N}}\mathcal{C}_k.$ It is known (see \cite{G}) that: \begin{theorem}
Let $c\in[-2,-1]$ and let $P_c$ be infinitely renormalizable. Denote $\mathcal{A}$ the Feigenbaum-like Cantor set of $P_c$. Then $\mathcal{A}=\omega(0)$. Furthermore, for Lebesgue almost every $x\in I_c$, the limit set $\omega(x)=\mathcal{A}$; the same is true for a dense-$G_\delta$ set of $x\in I_c$. \end{theorem} Thus, the Feigenbaum-like Cantor set is an attractor both in the measure-theoretic and in the topological sense (cf. \cite{Mil}). We will refer to it as the {\it Feigenbaum-like attractor}.
It is worthwhile to note that there are only three possibilities for the structure of an attractor of a real quadratic polynomial (see \cite{Lyu3} where the classification was completed, and references therein): \begin{theorem} Let $P_c$ and $I_c$ be as above. Then there is a unique set $\mathcal{A}$ (a measure-theoretic attractor in the sense of Milnor) such that $\mathcal{A} = \omega(x)$ for Lebesgue almost all $x\in I_c$, and only one of the following three possibilities can occur: \begin{enumerate} \item $\mathcal{A}$ is a limit cycle; \item $\mathcal{A}$ is a cycle of intervals; \item $\mathcal{A}$ is a Feigenbaum-like attractor. \end{enumerate} In all of the above cases, $\mathcal{A}$ is also the topological attractor of $P_c$. \end{theorem} Our main result is the following.
\noindent\textbf{Main Theorem.} \emph{
The attractor of a quadratic map $P_c$ is always computable given a parameter $c$. However, given any function $f:{\mathbb N}\to{\mathbb N}$, there exists a value of $c$ such that the map $P_c$ has a Feigenbaum-like attractor $\mathcal{A}$, whose computational complexity is bounded below by $f(n)$.
}
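To make the trichotomy above concrete, here is a purely numerical (and non-rigorous) sketch that detects the period of an attracting limit cycle by iterating the critical orbit. The parameter values in the test are standard examples: an attracting fixed point at $c=-0.5$, the superattracting $2$-cycle at $c=-1$, and an approximation of the superattracting period-$3$ parameter $c\approx-1.7548776662$; for a Feigenbaum-like parameter no short cycle would be found.

```python
def cycle_period(c, max_period=64, transient=10000, tol=1e-9):
    """Detect the period of an attracting cycle of P_c(x) = x^2 + c by
    iterating the critical orbit; returns None if no cycle of period up to
    max_period is seen. Purely numerical: a sketch, not a decision procedure."""
    x = 0.0
    for _ in range(transient):          # let the orbit settle onto the cycle
        x = x * x + c
    y = x
    for p in range(1, max_period + 1):
        y = y * y + c
        if abs(y - x) < tol:
            return p
    return None
```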
\section{Combinatorics of renormalization}
\subsection*{Renormalization windows} Given two points $a\neq b$, we will denote $[a,b]$ the closed interval connecting them without regard to their linear order. For our purposes, {\it a unimodal map} of the interval $[a,b]\ni 0$ is an analytic map with a single extremum at $0$, such that $f([a,b])=[a,b]$ and $a=f(0)$, $b=f^2(0)$. We call $I_f\equiv [a,b]$ the {\it dynamical interval } of $f$.
We say that a unimodal map $f$ is {\it renormalizable} if there exists $n>1$ and a sub-interval $J\ni 0$ such that $$f^n:J\to J$$ is a unimodal map. We call the lowest such $n$ the {\it period of renormalization}, and write $$n\equiv p(f);$$ we call the corresponding $J$ {\it a renormalization interval}.
The {\it renormalization } ${\cal R}(f)$ is the unimodal map
$${\cal R}(f)=\Lambda^{-1}\circ f^n|_J\circ \Lambda,\text{ where }\Lambda(x)\equiv (f|_J)^{n}(0)\cdot x. $$ The map
$f^n|_J$ can be renormalizable in its turn, and so on, giving rise to a sequence of periods $1<n_1, n_2,\ldots$ and a nested sequence of renormalization intervals $J_1=J\supset J_2\supset\cdots$. If this sequence is infinite, we call $f$ {\it infinitely renormalizable}. Each $J_i$ is periodic with the period $n_1n_2\cdots n_i\equiv p_i$. We denote $\mathcal{C}_i$ and $\hat{\mathcal{C}}_i$ the collections of intervals $$\mathcal{C}_i=\cup _{k=0}^{p_i-1}f^k(J_i)\text{ and }\hat{\mathcal{C}}_i=\cup_{k=0}^{n_i-1}f^{kp_{i-1}}(J_i).$$ Thus, $\hat{\mathcal{C}}_i$ consists of the intervals of the cycle $\mathcal{C}_i$ contained in the renormalization interval of the previous level $J_{i-1}$. The iterate $f^{p_{i-1}}$ induces a permutation of the intervals of $\hat{\mathcal{C}}_i$; we will call the corresponding element of the symmetric group $S_{n_i}$ {\it the combinatorial type} of the $i$-th renormalization of $f$, and denote it $\tau_i(f)$.
Denote $W_\tau$ the set of real renormalizable quadratic polynomials with combinatorial type of the first renormalization equal to $\tau$. This set is a closed interval known as a {\it renormalization window}; it is equal to the intersection of a small copy of the Mandelbrot set $\mathcal{M}_\tau$ with the real line.
Let us denote $\chi$ the Douady-Hubbard {\it straightening map} \cite{DH}. To each quadratic-like map $f$ with a connected Julia set there corresponds a unique parameter $c$ in the Mandelbrot set ${\mathcal M}$ such that $f$ is hybrid equivalent to $P_c$. For $c\in W_\tau$, the mapping $$c\mapsto \chi({\cal R}(P_c))$$ is a homeomorphism between $W_\tau$ and $[-2,0.25]={\mathcal M}\cap{\mathbb R}$; it naturally extends to a homeomorphism ${\mathcal M}_\tau\to{\mathcal M}$.
Renormalization windows are dense in $[-2,0.25]$. Let $n$ be the renormalization period of $W_\tau$ (that is, $\tau\in S_n$). The left endpoint of $W_\tau$ is the parameter $l$ such that the critical value $P^n_l(0)$ is a pre-fixed point of $P^n_l$: \begin{equation}
\label{left}
P^{2n}_l(0)=P^{3n}_l(0).
\end{equation} The right end-point $b$ is the cusp of $\mathcal{M}_\tau$: the map $P^n_b$ has a parabolic fixed point with multiplier $1$. It is the latter case that will be at the center of our attention. We will see below that one can find two small renormalization windows $W_1$ and $W_2$ which are both arbitrarily close to $b$ on the left-hand side such that the postcritical sets of maps in these windows are drastically different. This is the well-studied phenomenon of {\it parabolic implosion}; we will review some of its applications below.
By way of example, consider the cyclical permutation $\tau=(2,3,1)$. The orbit of the renormalization interval for a map in $W_\tau$ is illustrated in the top portion of Figure~\ref{fig-tau}. The interval $W_\tau=[l,r]$ is the intersection of a small copy of the Mandelbrot set ${\mathcal M}_\tau$ with the real line; this is illustrated in the bottom portion of the same figure. This is the unique small copy with period $3$ intersecting the real line; we will sometimes refer to it as ${\mathcal M}^3$. The right end-point $r=-1.75$ corresponds to the polynomial with a periodic point of period $3$ with the multiplier equal to $1$.
\begin{figure}
\caption{Above: the orbit of the renormalization interval for the combinatorial type $\tau=(2,3,1)$. Below: the small copy ${\mathcal M}_\tau$ inside the Mandelbrot set. }
\label{fig-tau}
\end{figure}
\subsection*{Definition of the essential period} A detailed discussion of the combinatorics of renormalization goes beyond the scope of this paper. We will recall some of the relevant concepts briefly.
Let $f:I_f=[a,b]\to[a,b]$ be a renormalizable unimodal map. Denote $\alpha_f\in I_f$ the fixed point of $f$. The {\it principal nest} of $f$ is the sequence of intervals $$[-\alpha_f,\alpha_f]\equiv I^0\supset I^1\supset I^2\supset\cdots$$ where $I^m\ni 0$ is the central component of the domain of the first return map to $I^{m-1}$, $$g_m:\cup I_i^m\to I^{m-1}.$$
A level $m>0$ is {\it non-central}, if $g_m(0)\in I^{m-1}\setminus I^m$. If $m$ is non-central, then $g_{m+1}|_{I^{m+1}}$ is not merely a restriction of the central branch of $g_m$, but a different iterate of $f$. Set $m(0)=0$, and let $$m(0)<m(1)<m(2)<\cdots<m(\kappa)$$ be the sequence of non-central levels. The map
$$g_{m(\kappa)+1}|_{I^{m(\kappa)+1}}\equiv f^{n_1}.$$ For $0\leq k<\kappa$ the nested intervals $$I^{m(k)+1}\supset I^{m(k)+2}\supset\cdots\supset I^{m(k+1)}$$
form a {\it central cascade}, whose {\it length} is $m(k+1)-m(k)$. Lyubich called a cascade {\it saddle-node} if $0\notin g_{m(k)+1}(I^{m(k)+1})$. The reason for this terminology is that if the length of a saddle-node cascade is large, then $g_{m(k)+1}|_{I^{m(k)+1}}$ is combinatorially close to the saddle-node quadratic map $x\mapsto x^2+1/4$.
Let $x\in P(f)\cap (I^{m(k)}\setminus I^{m(k)+1})$ and set $d_k(x)=\min \{j-m(k),m(k+1)-j\}$, where $g_{m(k)+1}(x)\in I^j\setminus I^{j+1}$. This number shows how deep the image of $x$ lands inside the cascade. Let us now define $d_k$ as the maximum of $d_k(x)$ over all points $x\in P(f)\cap (I^{m(k)}\setminus I^{m(k)+1})$. For a saddle-node cascade the levels $l$ such that $m(k)+d_k<l<m(k+1)-d_k$ are {\it neglectable}. Now we define the essential period of $f$ as follows. Set $J=I^{m(\kappa)+1}$, and let $p$ be its period, that is, the smallest positive integer for which $f^p(J)\ni 0$. Consider the orbit $J_0\equiv J$, $J_i=f^i(J_0)$, $i\leq p-1$. For each $J_k$ consider the deepest cascade which contains this interval, and call $J_k$ neglectable if the cascade is saddle-node and $J_k$ is contained in a neglectable level of the cascade. Now count the non-neglectable intervals in the orbit $\{J_i\}_{i=0}^{p-1}$. Their number is the {\it essential period}, $p_e(f)$. Recall that an infinitely renormalizable map $f$ has a {\it bounded combinatorial type} if there is a finite upper bound on the periods of its renormalizations. Similarly, $f$ is said to have an {\it essentially bounded combinatorial type} if $\sup_k p_e({\cal R}^k f)<\infty$.
We say that two renormalization types $\tau$ and $\tau'$ are {\it essentially equivalent} if removing the neglectable intervals from both renormalization cycles, we obtain the same permutation.
\noindent \subsection*{An example of a map with essentially bounded combinatorics} The definition given above is rather delicate. It is useful therefore to provide the reader with a simple yet archetypical example of an infinitely renormalizable map of unbounded but essentially bounded combinatorial type (cf. \cite{Hinkle,Yam-bounds}). This map is constructed in such a way that its every renormalization is a small perturbation of a unimodal map with a period $3$ parabolic orbit (see Figure~\ref{period3}). Closeness to a parabolic will ensure that the renormalization periods are high, but the essential periods will all be bounded.
\begin{figure}
\caption{Above: the dynamics of the map $z\mapsto z^2-1.75$. Center: the domain of $g=f^3|_{I^1}$.
Below: a small perturbation $f_\epsilon$, as in our example, with the orbit of the renormalization interval indicated. Note the long saddle-node cascade of iterates of $f_\epsilon^3$ which arises in the vicinity of the parabolic point $w$ of the unperturbed map $f$. }
\label{period3}
\end{figure}
Before constructing the example, let us consider the dynamics of the quadratic map $f:z\mapsto z^2-1.75$. This polynomial has a parabolic orbit of period $3$ on the real line, let us denote $w$ the element of this orbit which is nearest to $0$. Recall that $I^0=[-\alpha_f,\alpha_f]$, and $I^1$ is the central component of the domain of the
first return map $g:I^0\to I^0$. For this map we have $g|_{I^1}\equiv f^3$, $w\in I^0$, and $f^{3n}(0)\to w$. The map $g$ has two non-central components; denoting $I^1_1$ the one whose boundary contains $\alpha_f$, we have $g=f^2:I^1_1\to I^0$. For a small $\epsilon>0$ let us set $f_\epsilon(z)=z^2-1.75+\epsilon$. The orbit of $0$ under $f_\epsilon$ eventually escapes $I^0$. Let us define $\epsilon_n$ as the parameter value for which $$P^{3i}_{\epsilon_n}(0)\in I^1, \;i\leq n-1,$$ $$P_{\epsilon_n}^{3n}(0)\in I^1_1\text{, and }P_{\epsilon_n}^{3n+1}(0)=0.$$ These maps correspond to the centers of a sequence of small copies ${\mathcal M}_n^{3}$ of the Mandelbrot set converging to the cusp $c=-1.75$ of the real period $3$ copy ${\mathcal M}^{3}$. For each $P_{\epsilon_n}$ the essential period is $p_e(P_{\epsilon_n})=4$, while obviously $p(P_{\epsilon_n})\to\infty$. Now consider an infinitely renormalizable unimodal map $h$ such that the combinatorial type $\tau({\cal R}^kh)=\tau(P_{\epsilon_{n_k}})$, with $n_k\to\infty$. This is the desired example. We can, of course, select $h$ in the real quadratic family, picking an infinitely renormalizable parameter value $c\in{\mathcal M}$ such that $\chi({\cal R}^k(f_c))\in{\mathcal M}^{3}_{n_k}$. This amounts to blowing up a small copy ${\mathcal M}^{3}_{n_1}$, finding its period $3$ cusp and the corresponding sequence of small copies converging to this cusp, blowing up one of them, {\it ad infinitum} (see Figure~\ref{blow-up}).
\begin{figure}
\caption{ An airplane inside of an airplane: consecutive blow-ups of a Julia set of a map with essentially bounded combinatorics, and the corresponding blow-ups of the Mandelbrot set}
\label{blow-up}
\end{figure}
\subsection*{Applications of parabolic implosion to limits of maps with essentially bounded combinatorics} The theory of parabolic implosion is the principal mechanism used in our proof of the main result. It is quite involved, and we will not attempt to give a self-contained review here. For a beautiful introduction, see the paper of Douady \cite{Do}. The applications to the dynamics of quadratic polynomials are described in the paper of the second author \cite{Yam-bounds} and the work of Hinkle \cite{Hinkle}. Before giving a brief summary of the relevant results below, let us very informally describe their main thrust. Consider a sequence of quadratic polynomials $P_{c_n}$ with $c_n\to c_*$. The limiting map $P_{c_*}$ can be described as the {\it algebraic limit} of the sequence $P_{c_n}$. The {\it geometric limit} of the same sequence consists of all of the analytic maps $\{ g\}$ which can be obtained as limits of uniformly converging subsequences of arbitrary iterates of our polynomials:
$$P^{m(n_k)}_{c_{n_k}}|_\Omega\rightrightarrows g,\text{ where }\Omega\text{ is a subdomain of }{\mathbb C}.$$ Clearly, the geometric limit contains all of the iterates of $P_{c_*}$, but may {\it a priori} be larger. As an example, consider the parabolic quadratic polynomial $P_{1/4}$. It has a fixed point $p=\frac{1}{2}$ with multiplier $1$, and its critical orbit $$P_{1/4}^n(0)\nearrow \frac{1}{2}.$$ In particular, no iterate of $0$ under $P_{1/4}$ lies to the right of $\frac{1}{2}$. On the other hand, {\it for every} $\epsilon>0$ \begin{equation}
\label{div}
P_{1/4+\epsilon}^n(0)\nearrow \infty. \end{equation} An easy way to see this is to apply the coordinate change $w=z-\frac{1}{4}$, which transforms $P_{1/4+\epsilon}$ into $$f_\epsilon(w)\equiv w+w^2+\epsilon;$$ clearly, $f^n_\epsilon(w)\geq w+ n\epsilon$ for all $w\in{\mathbb R}$. Hence, for every sequence $P_{1/4+\epsilon_n}$ with $1>\epsilon_n\to 0$ the geometric limit will contain maps $g$ which map $0$ to a point between $1$ and $2$. It will thus be larger than the algebraic limit. More importantly for our needs, this will be reflected in the fact that any Hausdorff limit point of the postcritical sets of $P_{1/4+\epsilon_n}$ will be larger than the postcritical set of $P_{1/4}$: in particular, it will contain points in the interval $[1,2]$. The theory of parabolic implosion provides a description of such Hausdorff limit points, and relates them to particular sequences of perturbations of parabolic parameter values.
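The contrast between $P_{1/4}$ and its perturbations can be observed numerically. The informal sketch below (ours, for illustration only) counts how many iterations the critical orbit needs to escape: for $c=1/4+\epsilon$ the orbit spends roughly $\pi/\sqrt{\epsilon}$ steps passing through the parabolic ``gate'' near $1/2$ before it diverges, while for $c=1/4$ it never escapes.

```python
def orbit_escape_time(c, bound=2.0, max_iter=10**6):
    """Number of iterations for the critical orbit of x^2 + c to exceed
    `bound`; returns None if it stays below `bound` for max_iter steps."""
    x = 0.0
    for k in range(max_iter):
        x = x * x + c
        if x > bound:
            return k
    return None
```

In particular, the escape time grows without bound as $\epsilon\to 0^+$, which is the quantitative reason the geometric limit is larger than the algebraic one.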
We now proceed to quote several facts we are going to need. All of them are consequences of the main rigidity result of \cite{Hinkle}, which is, in turn, a version of the Rigidity Theorem for parabolic towers of \cite{Ep1} (a formal statement of the Tower Rigidity theorem is beyond the scope of the present paper).
\begin{theorem} \label{thm:sequence1} Let $n\in{\mathbb N}$. Suppose $c_k$ is a sequence of parameter values in ${\mathcal M}$ such that the following properties hold. \begin{itemize}
\item Each $P_{c_k}$ is $n$-times renormalizable, and
for each $j\leq n$ we have $\tau_j(P_{c_k})\equiv \tau_j$ are identical.
\item Furthermore, the combinatorial types
$\tau_{n+1}(P_{c_k})$ are essentially equivalent, and periods $p({\cal R}^n(P_{c_k}))\to\infty$.
\item For each $k$ the renormalization
${\cal R}^{n}(P_{c_k})$ has a single saddle-node cascade, whose period is greater than the essential period $p_e({\cal R}^n(P_{c_k}))$.
\item Finally, assume that $\chi({\cal R}^{n+1}(P_{c_k}))$ does not depend on $k$ and is a parabolic parameter ${c_*}$. \end{itemize} Then we have: \begin{enumerate} \item the parameters $c_k$ have a limit, which is the cusp of a small copy of the Mandelbrot set; \item the postcritical sets of $P_{c_k}$ have a limit, which only depends on $c_*$ and is different for different values of $c_*$. \end{enumerate}
\end{theorem}
\noindent We also note \begin{theorem}
\label{thm:sequence2}
Suppose $\tau_n$ is a sequence of essentially equivalent combinatorial types with a single saddle-node cascade whose period is greater than $p_e(\tau_n)$ and whose periods $p(\tau_n)\underset{n\to\infty}{\longrightarrow}\infty$. Then
the renormalization windows $W_{\tau_n}$ converge to a parabolic parameter $\hat{c}$.
\end{theorem}
\noindent Finally, \begin{theorem}
\label{thm:sequence3}
For $i=1,2$, let $c_k^i$ be two different sequences of parameter values in ${\mathcal M}$ satisfying Theorem~\ref{thm:sequence1} for the same $n$ and $c_*$.
Furthermore, assume that for all $j\leq n$, the combinatorial types $\tau_j(c_k^i)$ are identical, and that
$\tau_{n+1}(c_k^1)$ is not essentially equivalent to $\tau_{n+1}(c_k^2)$. Then the limits of the postcritical sets of $P_{c_k^i}$ are different for $i=1,2$. \end{theorem}
\section{Proof of the Main Theorem} \label{section:proof}
\subsection{Computability of the attractor ${\mathcal A}$} In this sub-section we will show that the attractor ${\mathcal A}$ of the map $P_c:I_c\to I_c$, $c\in[-2,0.25]$, is always computable by a machine $M^\phi$ with an oracle for $c$. Recall that there are three possible types for ${\mathcal A}$: \begin{enumerate} \item ${\mathcal A}$ is a limit cycle $$w_0\overset{P_c}{\mapsto} w_1\overset{P_c}{\mapsto}\cdots\overset{P_c}{\mapsto}w_{n-1}\overset{P_c}{\mapsto}w_n=w_0.$$
Note that in this case, the cycle is either (a) attracting: $0<|DP_c^n(w_0)|<1$, (b) super-attracting: $|DP_c^n(w_0)|=0$, or (c) parabolic: $DP_c^n(w_0)=\pm 1$. \item ${\mathcal A}$ is a periodic cycle of intervals $$J_0\overset{P_c}{\mapsto} J_1\overset{P_c}{\mapsto}\cdots\overset{P_c}{\mapsto}J_{n-1}\overset{P_c}{\mapsto}J_n=J_0\text{ with }0\in J_0.$$
In this case, $P_c$ is $k$-times renormalizable and $J_0$ is the renormalization interval of level $k$.
\item $P_c$ is infinitely renormalizable and ${\mathcal A}$ is its Feigenbaum-like Cantor set. \end{enumerate} We prove the following: \begin{theorem}
\label{thm:comput}
The attractor ${\mathcal A}$ is always computable by a Turing machine with an oracle for $c$. Moreover:
\begin{itemize}
\item all attractors of type (1a) are uniformly computable;
\item all attractors of types (1b) and (1c) are computable by an oracle machine $M^\phi$ which uses as non-uniform information the period $n$ of the cycle;
\item all attractors of type (2) are computable by an oracle machine $M^\phi$ which uses as non-uniform information the period $n$ of the cycle of intervals;
\item all attractors of type (3) are uniformly computable.
\end{itemize} \end{theorem} \begin{proof} {\sl Case (1a).} \\
We run a brute force search to find a round disk $D$ with a dyadic radius centered at a dyadic point $x\in I_c$ and $n\in{\mathbb N}$ such that
$P^n_c$ is univalent on $D$ (which can be verified using the Argument Principle) and $P_c^n(D)\Subset D$. Such a disk will always exist, since any sufficiently small disk centered at a point of the attracting limit cycle has this property. By the Schwarz Lemma, $D$ contains an attracting periodic point, and
$$\operatorname{diam}(P_c^{nk}(D))\overset{k\to\infty}{\longrightarrow}0$$
geometrically fast. Since $P_c$ can have at most one non-repelling orbit, the images $P_c^{nk}$ converge to a point in the limit cycle ${\mathcal A}$.
We iterate $P_c^n$ on $D$ until $P_c^{nk}(D)$ is small enough, to find a sufficiently good approximation of the cycle.
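On the real line, the shrinking-disk argument of Case (1a) can be sketched with naive interval arithmetic (exact images of intervals under $x\mapsto x^2+c$). This is an informal illustration we add here, not the complex-analytic verification used in the proof; in particular, the univalence check via the Argument Principle is omitted.

```python
def interval_image(lo, hi, c):
    """Exact image of the interval [lo, hi] under x -> x^2 + c."""
    candidates = [lo * lo, hi * hi]
    sq_lo = 0.0 if lo <= 0.0 <= hi else min(candidates)
    return (sq_lo + c, max(candidates) + c)

def locate_cycle_point(c, n, lo, hi, eps=1e-12):
    """If P_c^n maps [lo, hi] strictly into itself, iterate P_c^n on the
    interval until its diameter drops below eps; by the argument above it
    then encloses a point of the attracting n-cycle."""
    def Pn(I):
        for _ in range(n):
            I = interval_image(I[0], I[1], c)
        return I
    J = Pn((lo, hi))
    assert lo < J[0] and J[1] < hi, "P_c^n(D) is not compactly inside D"
    while J[1] - J[0] > eps:
        J = Pn(J)
    return 0.5 * (J[0] + J[1])
```

For example, for $c=-1$ the interval $(-0.3,0.3)$ is mapped compactly into itself by $P_c^2$, and the iteration collapses onto the cycle point $0$ of the superattracting $2$-cycle $\{0,-1\}$.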
\noindent
{\sl Case (1b).} \\
In this case, the critical point $0$ lies in the cycle, and hence the proof is trivial.
\noindent
{\sl Case (1c).}\\ We can use Weyl's algorithm \cite{Wey} to find all roots of the equation $P^n_c(w)=w$. An obvious modification of the argument from Case (1a) allows us to compute all {\it repelling} cycles of periods $\leq n$. Since all orbits of $P_c$ except for the limit cycle are repelling, the only orbit that is left is the limit cycle ${\mathcal A}$.
\noindent
{\sl Case (2).}\\
In this case we have that ${\mathcal A} = \cup_{i=0}^{n-1} J_{i}$ where $$J_0\overset{P_c}{\mapsto} J_1\overset{P_c}{\mapsto}\cdots\overset{P_c}{\mapsto}J_{n-1}\overset{P_c}{\mapsto}J_n=J_0.$$ ${\mathcal A}$ is therefore clearly computable, since $J_{0}=[P^{n}_{c}(0),P^{2n}_{c}(0)]$.
\noindent
{\sl Case (3).}\\ A renormalization window $W_\tau$ with period $p$ is bounded by parameters $a$, $b$ for which $\chi({\cal R}(P_c))=-2$ and $\chi({\cal R}(P_c))=0.25$ respectively. These can be identified algebraically. Indeed, $P_{-2}(z)=z^2-2$ is a Chebyshev polynomial for which $0\mapsto -2\mapsto 2\mapsto 2$ and is the unique quadratic map with this combinatorics of the critical orbit; and $P_{0.25}$ is a parabolic map with a fixed point which has multiplier $1$ and is, again, the only such quadratic map. Thus, for $P_a^p$ the critical point $0$ is pre-fixed, and $P_b^p$ has a parabolic fixed point. Since there are only finitely many such parameters for each $p$, we can compute all renormalization windows $W_\tau$ with a given period $p$ with an arbitrary precision.
Using an exhaustive search, we can thus identify the renormalization window $W_{\tau_1}\ni c$ with the smallest period. Proceeding inductively, we can identify the renormalization window $W_{\tau_2}\subset W_{\tau_1}$ which corresponds to the $2$-nd renormalization ${\cal R}^2(P_c)$ and so on. At each renormalization level, we can then compute the corresponding period, say $p$, from which we can compute the corresponding cycle of periodic intervals as in Case (2). We proceed as above to find one-by-one $2^{-(n+2)}$-approximations of the nested cycles of periodic intervals $\mathcal{C}_k$. We halt when we obtain a set for which every connected component has diameter $\leq 2^{-(n+2)}$: it is the desired $2^{-n}$-approximation of the Feigenbaum-like Cantor set ${\mathcal A}$.
\end{proof}
\subsection{Constructing Feigenbaum-like sets with high complexity} Without loss of generality, we can specialize to the case of a monotone function $f:{\mathbb N}\nearrow {\mathbb N}$. There are countably many Turing Machines, and we begin by enumerating them in some arbitrary computable fashion: $M_1^\phi$, $M_2^\phi,\ldots$ so that every machine appears infinitely many times in the enumeration. For $i=1,2$ let $\hat\tau^i_n$ be an infinite sequence of combinatorial types such that: \begin{itemize} \item all $\hat\tau_n^i$ for the same value of $i=1,2$ are essentially equivalent; \item $\hat\tau_n^1$ is not essentially equivalent to $\hat\tau_n^2$; \item $\hat\tau_n^i$ has a single saddle-node cascade, whose period is greater than the essential period of $\hat\tau_n^i$; \item periods $p(\hat\tau_n^i)\underset{n\to\infty}{\longrightarrow}\infty$; \item for each $i=1,2$, the sequence of renormalization windows $W_{\hat\tau^i_n}$ converges to $-1.75$.
\end{itemize} We will proceed constructing the value of $c$ inductively. At step $n$ of the induction, we will have a parabolic parameter $c_n$, and a natural number $l_n$ such that: \begin{enumerate}
\item $|c_n-c_{n+1}|<2^{-3l_n}$; \item $d(\Omega(P_{c_n}),\Omega(P_{c_{n+1}}))<2^{-3l_n}$; \item given an oracle for $c_n$, the machine $M_n^\phi$ cannot compute a $2\cdot 2^{-l_n}$-approximation of $\Omega(P_{c_n})$ in time $f(l_n)$. More precisely, $M_n^\phi$ either does not halt in time $f(l_n)$, or outputs a set $K$ such that
$$d(K,\Omega(P_{c_n}))>3\cdot 2^{-l_n};$$ \item $P_{c_n}$ is $n$ times renormalizable; \item fix $j\in{\mathbb N}$ and let $m,n\geq j$. Then $\tau_j(P_{c_n})=\tau_j(P_{c_m})$. \end{enumerate}
\noindent
{\bf Base of induction.} For $i=1,2$ and $n\in{\mathbb N}$ let $c^i_n$ be renormalizable parameters such that $P_{i,n}\equiv P_{c^i_n}$ has the properties
\begin{itemize}
\item $\tau(P_{i,n})=\hat\tau^i_n$;
\item $\chi({\cal R}(P_{i,n}))=-1.75$.
\end{itemize}
By Theorem~\ref{thm:sequence1}, for each $i=1,2$ there exists a Hausdorff limit $\omega_i$ of the postcritical sets $\Omega(P_{c^i_n})$.
By Theorem~\ref{thm:sequence3}, there exists $l_1\in{\mathbb N}$ such that the distance
\begin{equation}\label{eq1}
\operatorname{dist}(\omega_1,\omega_2)>4\cdot 2^{-l_1}.
\end{equation}
We let the machine $M_1^\phi$ compute the attractor ${\mathcal A}$ with precision $2^{-l_1}$, giving it $c=-1.75$ as the parameter.
The first possibility we consider is that the machine does not
halt in the time $f(l_1)$. Then we set $c_1\equiv c^1_n$ such that $|c^1_n+1.75|<2^{-f(l_1)}$. Note that in the running time $f(l_1)$, the machine $M^\phi_1$ cannot tell the difference between these parameters, and therefore, it will not halt in the time $f(l_1)$.
The second possibility is that the machine does halt and outputs a set $A_1$. By (\ref{eq1}), there exist $i\in\{1,2\}$ and $n\in{\mathbb N}$ such that
$|c^i_n+1.75|<2^{-f(l_1)}$ and $$\operatorname{dist}(\Omega(c^i_n),A_1)>2\cdot 2^{-l_1}.$$ We set $c_1\equiv c^i_n$.
\noindent
{\bf Step of induction.}
By Theorem~\ref{thm:sequence1}, there exist two sequences of parabolic parameter values $c^1_k$, $c^2_k$ which converge to $c_n$, and such that the combinatorial type of ${\cal R}^n(P_{c^i_k})$ is equal to $\hat \tau^i_k$ and
$$\chi({\cal R}^{n+1}(P_{c^i_k}))=-1.75.$$
By Theorems \ref{thm:sequence1} and \ref{thm:sequence3}, the
corresponding sequences of postcritical sets $\Omega(P_{c^i_k})$ converge to different limits $\Omega_1\neq \Omega_2$. Let $l_{n+1}\geq l_n+1$
be such that
$$\operatorname{dist} (\Omega_1,\Omega_2)>2\cdot 2^{-l_{n+1}}.$$
Now the argument proceeds as above. If the machine $M^\phi_{n+1}$ does not halt in time $f(l_{n+1})$, then, by continuity, there exists
$k\in{\mathbb N}$ such that the value $c_{n+1}=c_k^1$ satisfies the conditions (1)-(2) of the induction, and we make it our choice.
Otherwise, if the machine $M^\phi_{n+1}$ does halt and outputs a set $K$, then there is $i\in \{1,2\}$ and $k_0\in{\mathbb N}$ such that for all $k\geq k_0$
the property (3) is satisfied for $c_{n+1}=c^i_k$. We select $k$ large enough so that (1)-(2) are satisfied as well.
\end{document} |
\begin{document}
\maketitle \begin{abstract} The paper is devoted to finding algorithms which allow one to generate images of attractors of \emph{generalized iterated function systems} (GIFS in short), which are a certain generalization of classical iterated function systems, defined by Mihail and Miculescu in 2008 and intensively investigated in recent years (the idea is that instead of selfmaps of a metric space $X$, we consider mappings from the Cartesian product $X\times...\times X$ to $X$).\\ The first two presented algorithms are counterparts of the classical \emph{deterministic algorithm} and the so-called \emph{chaos game}. The third and fourth ones are tailored to a special kind of GIFSs, namely \emph{affine} GIFSs, which are, in turn, also investigated. \end{abstract}
\section{Introduction} If $X$ is a metric space and $\mathcal F=\{f_1,...,f_n\}$ is a finite family of continuous selfmaps of $X$, then by the same letter $\mathcal F$ we will also denote the function $\mathcal F:\mathbb K(X)\to\mathbb K(X)$ defined by $$ \mathcal F(K):=f_1(K)\cup...\cup f_n(K), $$ where $\mathbb K(X)$ denotes the family of all nonempty and compact subsets of $X$ (considered as a metric space with the classical Hausdorff-Pompeiu metric, which will be denoted by the letter $H$).\\ The following Hutchinson-Barnsley theorem (first proved by Hutchinson \cite{H}, then popularized by Barnsley \cite{B}) is one of the milestones of fractal theory. \begin{theorem}\label{f1} Assume that $(X,d)$ is a complete metric space and $\mathcal F=\{f_1,...,f_n\}$ is a finite family of Banach contractions of $X$ (i.e., selfmaps of $X$ with Lipschitz constants less than $1$). Then there is a unique set $A_\mathcal F\in \mathbb K(X)$ such that $$A_\mathcal F=\mathcal F(A_\mathcal F)=f_1(A_\mathcal F)\cup...\cup f_n(A_\mathcal F)$$ Moreover, for every $K\in\mathbb K(X)$, the sequence of iterations $\mathcal F^{(k)}(K)$ converges to $A_\mathcal F$ with respect to the Hausdorff-Pompeiu metric. \end{theorem} Note that in the above theorem, instead of Banach contractions, one can consider many weaker types of contractive mappings (we will discuss this topic in more detail later).\\ In the above framework, the family of mappings $\mathcal F$ is called an \emph{iterated function system} (IFS in short), and the set $A_\mathcal F$ the \emph{fractal generated by $\mathcal F$}. Also, a compact set $A$ is called a \emph{Hutchinson-Barnsley fractal} if it is a fractal generated by some IFS.\\ It turns out that many classical fractals, like the Cantor ternary set, the Sierpiński gasket etc., are H-B fractals. Also many objects from nature (like trees, clouds etc.)
can be modelled as H-B fractals.\\ In connection with the above statement, a natural problem arises: \begin{problem}\label{f2} Given an IFS $\mathcal F$ on a Euclidean space $\mathbb R^d$, how to make an image of its fractal $A_\mathcal F$? \end{problem} There are two main algorithms that can be easily adjusted to computer programs:\\ The first one, called the \emph{deterministic algorithm}, is based on the second part of Theorem \ref{f1}: we choose a compact set $K$, then we compute $K_1:=\mathcal F(K)$, then $K_2:=\mathcal F(K_1)$ and so on. By Theorem \ref{f1}, the sets $K_n$ are better and better approximations of the fractal $A_\mathcal F$.\\ The second one, called the \emph{chaos game algorithm}, goes in the following way: we choose any point $x_0\in X$, then we choose randomly $i_1\in\{1,...,n\}$ and put $x_1:=f_{i_1}(x_0)$. Then we choose randomly $i_2\in\{1,...,n\}$ and put $x_2:=f_{i_2}(x_1)$, and so on. As a result, we obtain a sequence $x_0,x_1,x_2,...$, and it turns out that the sets $\{x_k,...,x_N\}$, for appropriately large $k<N$, are good approximations of the fractal $A_\mathcal F$.\\ In this paper we are going to consider Problem \ref{f2} for fractals generated by \emph{generalized iterated function systems}, introduced by Mihail and Miculescu and then intensively investigated by them, by Strobin and Swaczyna, and by Secelean - see for example \cite{M},\cite{M1},\cite{MM1},\cite{MM2},\cite{MS}, \cite{SS1} and \cite{SS2}. We now recall the notion of GIFSs.\\ Let $(X,d)$ be a metric space and $m\in\mathbb N$. By $X^m$ we denote the Cartesian product of $m$ copies of $X$.
We endow $X^m$ with the maximum metric $d_m$: $$ d_m((x_1,...,x_m),(y_1,...,y_m)):=\max\{d(x_1,y_1),...,d(x_m,y_m)\} $$ We say that $f:X^m\to X$ is a \emph{generalized Matkowski contraction (of order $m$)}, if there is a nondecreasing function $\varphi:[0,\infty)\to[0,\infty)$ such that $\varphi^{(k)}(t)\to 0$ for every $t>0$ (where $\varphi^{(k)}$ denotes the $k$-th iterate of $\varphi$), and $$ d(f(x),f(y))\leq \varphi(d_m(x,y)),\;\;\;x,y\in X^m $$ Note that if $m=1$, then we get a \emph{Matkowski contraction} \cite{Ma}, and it is well known that Theorem \ref{f1} can be strengthened by considering IFSs consisting of Matkowski contractions (clearly, each Banach contraction is a Matkowski contraction, but the converse need not be true).\\ Finally, a finite family $\mathcal F=\{f_1,...,f_n\}$ of generalized Matkowski contractions of order $m$ is called a \emph{generalized iterated function system of order $m$} (GIFS in short).\\ Also by $\mathcal F$ we will denote the mapping $\mathcal F:\mathbb K(X)^m\to \mathbb K(X)$ defined by $$ \mathcal F(K_1,...,K_m):=f_1(K_1\times...\times K_m)\cup...\cup f_n(K_1\times...\times K_m) $$ The following result is an extension of Theorem \ref{f1} (see \cite{MM1},\cite{MM2},\cite{SS1}, \cite{SS2}): \begin{theorem}\label{f3} Let $X$ be a complete metric space and $\mathcal F$ be a GIFS of order $m$. Then there exists a unique set $A_\mathcal F\in\mathbb K(X)$ such that $$ A_\mathcal F=\mathcal F(A_\mathcal F,...,A_\mathcal F)=f_1(A_\mathcal F\times...\times A_\mathcal F)\cup...\cup f_n(A_\mathcal F\times...\times A_\mathcal F) $$ Moreover, for all $K_0,...,K_{m-1}\in\mathbb K(X)$, the sequence $(K_k)$, defined by $K_{k+m}:=\mathcal F(K_k,...,K_{m+k-1})$, converges to $A_\mathcal F$ with respect to the Hausdorff-Pompeiu metric. \end{theorem}
In the above frame, the set $A_\mathcal F$ is called the \emph{fractal generated by $\mathcal F$}. Also, a compact set $A\subset X$ is called \emph{a Hutchinson-Barnsley generalized fractal}, if it is a fractal generated by some GIFS.\\ Note that GIFSs give us some new fractals - there are sets which are attractors of some GIFSs but are not attractors of any IFSs (even of those consisting of Matkowski contractions), and there are compact sets which are not attractors of any GIFSs - see \cite{S}. Also see \cite{MM2} for another example.\\ We will also need some facts which involve the notion of a \emph{code space} for GIFSs (for a detailed discussion see \cite{M1}, \cite{SS2} and \cite{SRum}). The constructions are a bit technical, but, in fact, they are natural counterparts of the classical constructions for IFSs.\\
So assume that $\mathcal F=\{f_1,...,f_n\}$ is a GIFS of order $m$, on a space $X$.\newline Define $\Omega _{1},\Omega _{2},\ldots $ by the following inductive formula: \begin{equation*} \Omega _{1}:=\{1,...,n\} \end{equation*} \begin{equation*} \Omega _{k+1}:=\underbrace{\Omega _{k}\times \ldots \times \Omega _{k}}_{m \mbox{ times}}\;\;\;\mbox{for }k\geq 1 \end{equation*} \indent Then for every $k\in\mathbb{N}$, let \begin{equation*} {}_{k}\Omega :=\Omega _{1}\times \ldots \times \Omega _{k} \end{equation*} and define \begin{equation*} \Omega _{<}:=\bigcup_{k\in\mathbb{N}}{}_{k}\Omega \end{equation*} and \begin{equation*} \Omega :=\Omega _{1}\times \Omega _{2}\times \Omega _{3}\times \ldots = \underset{i\in\mathbb{N}}{\Pi }\Omega _{i}. \end{equation*}
The space $\Omega $ is called a \emph{code space for $\mathcal F$}.
\begin{remark} \emph{\ In the case $m=1$ we have \begin{equation*} \Omega =\{1,...,n\}^\mathbb N \end{equation*} so $\Omega$ is the standard code space for an iterated function system consisting of $n $ mappings (see \cite{H},\cite{B}).} \end{remark} Now we will define the families of mappings $\mathcal{F}^{k}$, $k\in\mathbb{N}$, inductively with respect to $k$.\newline If $k>1$ and \begin{equation*} \alpha =(\alpha ^{1},\ldots ,\alpha ^{k})=(\alpha ^{1},(\alpha _{1}^{2},\ldots ,\alpha _{m}^{2}),\ldots ,(\alpha _{1}^{k},\ldots ,\alpha _{m}^{k}))\in {}_{k}\Omega \end{equation*} then for any $i=1,\ldots ,m$, we set \begin{equation} \alpha (i):=(\alpha _{i}^{2},\alpha _{i}^{3},\ldots ,\alpha _{i}^{k}). \label{n1} \end{equation} Clearly, $\alpha (i)\in {}_{k-1}\Omega $.\newline If $\alpha \in \Omega $, we define $\alpha (i)\in \Omega $ in an analogous way.\newline Also, define spaces $X_1,X_2,...$ by the following inductive formula: \begin{equation}\label{filip1}X_1:=\underbrace{X\times \ldots \times X}_{m \mbox{ times}}\end{equation} \begin{equation}\label{filip2} X_{k+1}:=\underbrace{X_{k}\times \ldots \times X_{k}}_{m \mbox{ times}}\end{equation} We are ready to define the families $\mathcal{F}^{k}$, $k\in\mathbb{N}$.\newline Define \begin{equation*} \mathcal{F}^{1}:=\mathcal{F} \end{equation*} and observe that (since $_1\Omega=\Omega_1=\{1,...,n\}$) we can write $\mathcal{F}^{1}=\{f_{\alpha }:\alpha \in \;_{1}\Omega \}$ and each $f_{\alpha }\in \mathcal{F}^{1}$ is a function from $X_{1}$ to $X$.
Assume that we have already defined $\mathcal{F}^k=\{f_\alpha:\alpha\in\;_k \Omega\}$ such that each $f_\alpha\in\mathcal{F}^k$ is a function from $X_k$ to $X$. Then for every $\alpha=(\alpha^1,\ldots ,\alpha^k,\alpha^{k+1})\in{} _{k+1}\Omega$, let $f_\alpha:X_{k+1}\to X$ be defined by \begin{equation*} f_\alpha(x_1,\ldots ,x_m)=f_{\alpha^1}(f_{\alpha(1)}(x_1),\ldots ,f_{\alpha(m)}(x_m)), \end{equation*} for each $(x_{1},...,x_m) \in X_{k}\times\dots\times X_k=X_{k+1}$.\newline Then set \begin{equation*} \mathcal{F}^{k+1}:=\{f_\alpha:\alpha\in\;_{k+1}\Omega\} \end{equation*} Finally, define $\mathcal F^{<}:=\bigcup_{k\in\mathbb N}\mathcal F^k=\{f_\alpha:\alpha\in\Omega_{<}\}$.
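The inductive construction above is easy to make concrete in code. The following Python sketch (the helper names are ours, not from the paper) builds the code sets $_k\Omega$ as nested tuples and evaluates $f_\alpha$ via the recursion $f_\alpha(x_1,\ldots,x_m)=f_{\alpha^1}(f_{\alpha(1)}(x_1),\ldots,f_{\alpha(m)}(x_m))$:

```python
from itertools import product

def level_codes(n, m, k):
    """Build _kΩ = Ω_1 × ... × Ω_k, where Ω_1 = {1,...,n} and
    Ω_{j+1} = Ω_j^m; codes are represented as nested tuples."""
    omega = [tuple(range(1, n + 1))]                       # Ω_1
    for _ in range(k - 1):
        omega.append(tuple(product(omega[-1], repeat=m)))  # Ω_{j+1} = Ω_j^m
    return list(product(*omega))                           # _kΩ = Ω_1 × ... × Ω_k

def f_alpha(maps, alpha, xs):
    """Evaluate f_α on xs ∈ X_k, following the inductive definition
    f_α(x_1,...,x_m) = f_{α^1}(f_{α(1)}(x_1), ..., f_{α(m)}(x_m))."""
    if len(alpha) == 1:                  # α ∈ _1Ω: just apply f_{α^1}
        return maps[alpha[0] - 1](*xs)
    head, *tail = alpha
    m = len(xs)
    # α(i) collects the i-th components of α^2, ..., α^k (cf. (1) in the text)
    subs = [tuple(t[i] for t in tail) for i in range(m)]
    return maps[head - 1](*(f_alpha(maps, subs[i], xs[i]) for i in range(m)))
```

For instance, with $m=2$ and maps $f_1(x,y)=x+y$, $f_2(x,y)=xy$, the code $\alpha=(1,(2,1))$ evaluates as $f_1(f_2(x_1),f_1(x_2))$ on a point of $X_2$.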
\begin{remark} \emph{\ Clearly, in the case when $m=1$, if $\alpha=(\alpha^1,...,\alpha^k) \in\;_k\Omega$, then $f_\alpha=f_{\alpha_1}\circ...\circ f_{\alpha_k}$, hence the families of mappings defined above are natural generalizations of compositions.} \end{remark} If $D\in\mathbb K(X)$, then we define sets $D_1,D_2,...$ in the same way as the spaces $X_1,X_2,...$ in (\ref{filip1}) and (\ref{filip2}), and then, for every $k\in\mathbb N$ and $\alpha\in\;_k\Omega$, we define $$ D_\alpha:=f_\alpha(D_k) $$ Also, we set $A_\alpha:=(A_\mathcal F)_\alpha$ for $\alpha\in\Omega_<$.\\ The following result can be found in \cite{SS2}. \begin{lemma}\label{l1} Let $\mathcal F$ be a GIFS of order $m$ on a complete metric space $X$. Then \begin{itemize} \item[(a)] For every $D\in\mathbb K(X)$, $\lim_{k\to\infty}\max\{\operatorname{diam}D_\alpha:\alpha\in\;_k\Omega\}=0$. \item[(b)] For every $k\in\mathbb N$, $A_\mathcal F=\bigcup_{\alpha\in\;_k\Omega}A_\alpha$. \end{itemize} \end{lemma} We will also need the following \begin{lemma}\label{filip4} Let $\mathcal F$ be a GIFS of order $m$ on a complete metric space $X$. For every closed and bounded set $D\subset X$ and $k\in\mathbb N$, define $$K_k=\bigcup_{\alpha\in\;_k\Omega}f_\alpha(D_k)$$ where $D_1,D_2,...$ are defined as in (\ref{filip1}) and (\ref{filip2}). Then $K_k\to A_\mathcal F$ with respect to the Hausdorff-Pompeiu metric. \end{lemma} \begin{proof} By the previous Lemma, for every $k\in\mathbb N$, we have $A_\mathcal F=\bigcup_{\alpha\in\;_k\Omega}A_\alpha$.
Hence, using a standard property of the Hausdorff-Pompeiu metric, we get $$ H(A_\mathcal F,K_k)=H\left(\bigcup_{\alpha\in\;_k\Omega}A_\alpha,\bigcup_{\alpha\in\;_k\Omega}f_\alpha(D_k)\right)\leq \max\left\{H\left(A_\alpha,f_\alpha(D_k)\right):\alpha\in\;_k\Omega\right\}\leq\varphi^{(k)}(H(A_\mathcal F,D)) $$ where $\varphi$ is a witness to the fact that $\mathcal F$ is a GIFS (clearly, by taking the pointwise maximum, we can assume that one function $\varphi$ works for all $f_1,...,f_n$). Note that the last inequality can be proved in a standard way - the case $k=1$ can be proved as in \cite[Theorem 3.7]{SS1}, and then we proceed by induction.\\ Now, using the known (and easy) fact that $\varphi^{(k)}(t)\to 0$ for all $t\geq 0$, we get the assertion. \end{proof} In the next section we introduce the counterpart of the deterministic algorithm for GIFSs. Section 3 is devoted to the counterpart of the chaos game algorithm for GIFSs. Finally, in Section 4 we study affine GIFSs and introduce a (deterministic) algorithm for such GIFSs. \section{Deterministic Algorithm for GIFSs} Assume that $\mathcal F$ is a GIFS of order $m$ on a complete metric space $X$. The pseudocode for the deterministic algorithm for $\mathcal F$ is the following \begin{center}\textbf{Pseudocode for deterministic algorithm for GIFSs}\end{center} Initially chosen compact sets: $K_0,...,K_{m-1}\subset X$.\\ Initially defined objects: constant: $m$, mappings: $f_1,...,f_n$, variables: $i,D_0,...,D_{m-1}$.\\ Initial values: $D_0:=K_0$,...,$D_{m-1}:=K_{m-1}$.\\ Main loop:
$\;\;\;\;\;\;\;\;\;\;\;\;\;$ $K:=\mathcal F(D_0,...,D_{m-1})$
$\;\;\;\;\;\;\;\;\;\;\;\;\;$ For $i$ from $0$ to $m-2$
$\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;$ $D_i:=D_{i+1}$
$\;\;\;\;\;\;\;\;\;\;\;\;\;$ $D_{m-1}:=K$\\
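For finite starting sets, the pseudocode above can be sketched in Python as follows (a minimal sketch under our own naming; the maps are assumed to take $m$ points each, so $\mathcal F(D_0,\ldots,D_{m-1})$ is computed by brute force over the Cartesian product):

```python
from itertools import product

def gifs_deterministic(maps, starts, iterations):
    """Deterministic algorithm for a GIFS acting on finite point sets.

    starts = [K_0, ..., K_{m-1}]; in every round we compute
    K := F(D_0, ..., D_{m-1}) and then shift: D_0 := D_1, ..., D_{m-1} := K,
    exactly as in the pseudocode."""
    D = [set(K) for K in starts]
    K = set()
    for _ in range(iterations):
        # K := F(D_0,...,D_{m-1}) = union of f(combo) over all maps and combos
        K = {f(*combo) for f in maps for combo in product(*D)}
        D = D[1:] + [K]
    return K
```

With a single scalar map $f(x,y)=0.25x+0.25y+0.25$ (order $m=2$), the iterates collapse onto the unique fixed point $0.5$, in accordance with Theorem \ref{f3}.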
By Theorem \ref{f3}, the sets $K$ are closer and closer to the fractal $A_\mathcal F$.\\ Now we give some examples: \begin{example}\label{e1}\emph{ Consider the GIFSs $\mathcal F=\{f_1,f_2\}$ and $\mathcal G=\{g_1,g_2\}$ on $\mathbb R^2$, where, for $x=(x_1,x_2)$ and $y=(y_1,y_2)$, we have\\ $\;$\\ $ f_1(x,y):=(0.1x_1+0.15y_1+0.04y_2\;\mbox{\textbf{;}}\;0.16x_2-0.04y_1+0.15y_2+1.6) $\\ $ f_2(x,y):=(0.1x_1-0.15x_2-0.1y_1+0.15y_2+1.6\;\mbox{\textbf{;}}\;0.15x_1+0.15x_2+0.15y_1+0.07) $\\$\;$\\ $ g_1(x,y):=(0.05x_1+0.02y_1+0.1y_2+0.635\;\mbox{\textbf{;}}\;0.1x_1+0.2x_2+0.08y_1+0.15y_2+0.5) $\\ $ g_2(x,y):=(0.15x_1+0.05y_1+0.1y_2+0.5\;\mbox{\textbf{;}}\;0.15x_1+0.15x_2+0.45) $\\$\;$\\ It can easily be checked that $\mathcal F$ and $\mathcal G$ are indeed GIFSs.\\ Using the deterministic algorithm for these GIFSs, we get the following images:}
\begin{center} \includegraphics{Fdeterministic.png}
\end{center}
\begin{center} \includegraphics{Gdeterministic.png}
\end{center} \end{example} \begin{remark}\emph{ Observe that if $\mathcal F$ is a GIFS on $X$ and $K_0,...,K_{m-1}\in\mathbb K(X)$ are such that $K_i\subset A_\mathcal F$ for $i=0,...,m-1$, then each $K_k\subset A_\mathcal F$, where $(K_k)$ is defined as in Theorem \ref{f3}. Hence, if we ensure that our starting sets are subsets of the attractor, then we can draw all the points from each step. }\end{remark} \begin{remark}\emph{ Assume that $\mathcal F$ is a GIFS on $X$ and $K_0,...,K_{m-1}\in\mathbb K(X)$ are such that the cardinality $card(K_i)\leq M$ for $i=0,...,m-1$. By an inductive argument, we can calculate that for every $k\in\mathbb N$, $card(K_{m+k})\leq n^{2^k}M^{2^k(m-1)+1}$, where $(K_k)$ is defined as in Theorem \ref{f3}. Hence we can easily estimate the maximal memory required for calculating $K_{m+k}$ (we need to remember $ card(K_{k})+...+card(K_{k+m-1}) $ points). }\end{remark}
\begin{remark} \emph{Note that if $\mathcal F$ is a GIFS on $X$ such that $c:=\max\{Lip(f):f\in\mathcal F\}<1$, and $K_0=...=K_{m-1}=K$ for some $K\in\mathbb K(X)$, then, using standard properties of the Hausdorff-Pompeiu metric, for every $k\geq 0$ we have $$ H(K_k,A_\mathcal F)\leq c^{\lfloor \frac{k}{m} \rfloor}H(K,A_\mathcal F) $$ where $(K_k)$ is defined as in Theorem \ref{f3}. This gives some control on the speed of convergence of the sequence $(K_k)$ to the attractor. } \end{remark} \begin{remark}\emph{Observe that one can get the attractor $A_\mathcal F$ of a GIFS $\mathcal F$ by using another version of the deterministic algorithm. Indeed, consider the mapping $\tilde{\mathcal F}:\mathbb K(X)\to\mathbb K(X)$ given by $$ \tilde{\mathcal F}(K):=f_1(K\times ...\times K) \cup ... \cup f_n(K\times ...\times K) $$ Then $\tilde{\mathcal F}$ is a Matkowski contraction, so each sequence of iterations $(\tilde{\mathcal F}^k(K))$ converges to $A_\mathcal F$. Note that if $c:=\max\{Lip(f):f\in\mathcal F\}<1$, then $Lip(\tilde{\mathcal F})\leq c$, so the speed of convergence of $(\tilde{\mathcal F}^k(K))$ is of order $c^k$; hence such an approach seems to be even more efficient. }\end{remark}
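The operator $\tilde{\mathcal F}$ from the last remark is straightforward to iterate on finite point sets; a minimal sketch (the helper name is ours):

```python
from itertools import product

def tilde_step(maps, m, K):
    """One application of F~(K) = f_1(K x ... x K) ∪ ... ∪ f_n(K x ... x K)
    to a finite point set K; each map takes m points."""
    return {f(*combo) for f in maps for combo in product(K, repeat=m)}
```

Iterating `tilde_step` from any bounded finite set then approximates $A_\mathcal F$, with the $c^k$ speed noted above.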
\section{Chaos Game Algorithm for GIFSs} The counterpart of the chaos game for GIFSs is much more complicated, since so is the counterpart of composition for GIFSs.
At first, we define a certain bijection $H:\mathbb N\times\mathbb N\to\mathbb N$:\\ Set $H(1,1):=1$. Assume that we have defined $H(j,k)$ for some $(j,k)$:\\ -- if $j\operatorname{mod} m\neq 0$, then we set $H(m^{k-1}\cdot j+1,1):=H(j,k)+1$,\\ -- if $j\operatorname{mod} m=0$, then we set $H\left(\frac{j}{m},k+1\right):=H(j,k)+1$. \begin{remark}\label{filip3}\emph{ Note that the definition of $H$ is correct - Remark \ref{f3} below shows how it works (for $m=2$) - the rows correspond to the ``levels'' $k$, and in each row, the index $j$ increases according to the enumeration from the left. So, for example (when $m=2$), we have $H(1,1)=1$, $H(2,1)=2$, $H(1,2)=3$, $H(3,1)=4$, $H(4,1)=5$, $H(2,2)=6$, $H(1,3)=7$, $H(5,1)=8$,...,$H(2,3)=14$, $H(1,4)=15$,...} \end{remark}
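The recursion defining $H$ is easy to implement; the following Python sketch (the helper name is ours) enumerates the first values of $H$ and reproduces the table from Remark \ref{filip3}:

```python
def enumerate_H(m, count):
    """Enumerate the bijection H: N x N -> N used to order the chaos-game
    points.  Returns {(j, k): H(j, k)} for the first `count` values,
    following the recursion: H(1,1) = 1, and if H(j,k) = v, then v + 1 is
    assigned to (m**(k-1)*j + 1, 1) when m does not divide j, and to
    (j // m, k + 1) when m divides j."""
    H = {}
    j, k = 1, 1
    for v in range(1, count + 1):
        H[(j, k)] = v
        if j % m != 0:
            j, k = m ** (k - 1) * j + 1, 1
        else:
            j, k = j // m, k + 1
    return H
```

For $m=2$ this yields $H(1,1)=1$, $H(2,1)=2$, $H(1,2)=3$, $H(3,1)=4$, $H(4,1)=5$, $H(2,2)=6$, $H(1,3)=7$, matching the remark.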
Now let $\mathcal F=\{f_1,...,f_n\}$ be a GIFS of order $m$ on a complete space $X$, and choose a sequence $(\gamma_i)\in\{1,...,n\}^\mathbb N$ and $x_0\in X$. Then we can define the sequence $(x_i)\subset X$ by the following recursive formula: $$x_1:=f_{\gamma_{1}}(x_0,...,x_0),\;x_2:=f_{\gamma_{2}}(x_0,...,x_0),\;...,\;x_{m}:=f_{\gamma_{m}}(x_0,...,x_0)$$ and if $i\geq m+1$ and $i=H(j,k)$, then:
$\;\;\;\;\;$if $k>1$, then $$x_i=x_{H(j,k)}:= f_{\gamma_i}\left(x_{H(mj-m+1,k-1)},...,x_{H(mj,k-1)}\right)$$
$\;\;\;\;\;$and if $k=1$, then $$x_i:=f_{\gamma_i}(x',...,x'),$$
$\;\;\;\;\;$where $x':=x_{l}$ and $l:=H(j',k')$ is the maximal number with the property that $l<i$ and $k'>1$.
\begin{remark}\label{f3}\emph{ Observe that the definition of the sequence $(x_i)$ is, in fact, very natural. For example, if $m=2$, then we have:}\\
$k=1\;\;\;\;\;\;x_1\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;x_2\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;x_4\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;x_5\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;x_8\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;x_9\;\;\;\;\;\;\;\;\;\;\;\;\;x_{11}\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;x_{12}$\\
$k=2\;\;\;\;\;\;\;\;\;\;\;\;\;x_3=f_{\gamma_3}(x_1,x_2)\;\;\;\;\;\;\;\;\;\;\;\;x_6=f_{\gamma_6}(x_4,x_5)\;\;\;\;\;\;\;\;\;\;\;x_{10}=f_{\gamma_{10}}(x_8,x_9)\;\;\;\;\;\;\;\;x_{13}=f_{\gamma_{13}}(x_{11},x_{12})$\\
$k=3\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;x_{7}=f_{\gamma_7}(x_3,x_6)\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;x_{14}=f_{\gamma_{14}}(x_{10},x_{13})$\\
$k=4\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;x_{15}=f_{\gamma_{15}}(x_{7},x_{14})$\\
\emph{and} $x_4=f_{\gamma_4}(x_3,x_3)$, $x_5=f_{\gamma_5}(x_3,x_3)$, $x_8=f_{\gamma_8}(x_7,x_7)$, $x_9=f_{\gamma_9}(x_7,x_7)$, $x_{11}=f_{\gamma_{11}}(x_{10},x_{10})$, $x_{12}=f_{\gamma_{12}}(x_{10},x_{10})$, and, for example, $x_{16}=f_{\gamma_{16}}(x_{15},x_{15})$.
\end{remark} The following result justifies the correctness of the chaos game algorithm, which will be presented later. The idea is based on the proof of the chaos game for IFSs presented in \cite{Martyn}.\\ If $p_1,...,p_n>0$ are such that $p_1+...+p_n=1$, then on the space $\{1,...,n\}^\mathbb N$ we consider the product probability measure generated by the measure $P(\{i\})=p_i$ for $i\in\{1,...,n\}$ (see \cite{Mi} for a deeper discussion of spaces of such measures associated with GIFSs). \begin{theorem}\label{cg-main} In the above setting, put $K_k:=\overline{\{x_i:i\geq k\}}$ for $k\in\mathbb N$. Then, with probability $1$, we have that $K_k\to A_\mathcal F$ with respect to the Hausdorff-Pompeiu metric and, moreover, if $x_0\in A_\mathcal F$, then $K_k=A_\mathcal F$ for every $k\in\mathbb N$. \end{theorem}
\begin{proof} We give the proof only for the case $m=2$. The general case goes in the same way but is technically more complicated.\\ For any subsequence $(k_i)$ of the naturals, consider the following condition $(*)$ on a sequence $(\gamma_i)$:\\ $(*)\;\;$for every finite sequence $(\tilde{\gamma}_1,...,\tilde{\gamma}_l)$, there are infinitely many $i\in\mathbb N$ such that $\gamma_{k_i}=\tilde{\gamma}_1,...,\gamma_{k_i+l-1}=\tilde{\gamma}_l$.\\ By the Borel-Cantelli lemma (\cite{Fe}), condition $(*)$ is satisfied with probability $1$ (i.e., there is a set $\mathcal{A}\subset \{1,...,n\}^\mathbb N$ such that $P(\mathcal{A})=1$, and condition $(*)$ is satisfied for each $(\gamma_i)\in\mathcal{A}$).
For every $(j,k)\in\mathbb N\times\mathbb N$, we now define $\alpha_{(j,k)}\in\;_k\Omega$, inductively with respect to $k$.\\ So let $\alpha_{(j,1)}:=\gamma_{H(j,1)}$ for $j\in\mathbb N$
and if $\alpha_{(j,k)}$ is defined for all $j\in\mathbb N$ and some $k\in\mathbb N$, then let $\alpha_{(j,k+1)}$ be such that $\alpha_{(j,k+1)}^1=\gamma_{H(j,k+1)}$ (i.e., the first coordinate is equal to $\gamma_{H(j,k+1)}$), and $\alpha_{(j,k+1)}(1)=\alpha_{(2j-1,k)}$ and $\alpha_{(j,k+1)}(2)=\alpha_{(2j,k)}$ (cf. (\ref{n1})).\\ \emph{For example, $\alpha_{(3,2)}=(\gamma_{10},(\gamma_8,\gamma_9))$ and $\alpha_{(1,3)}=(\gamma_7,(\gamma_{3},\gamma_6),((\gamma_1,\gamma_2),(\gamma_4,\gamma_5)))$, see Remark \ref{f3}.}\\ Assume first that $x_0\in A_\mathcal F$. By induction and the above construction, it is easy to see that (recall the definition of $A_\alpha$ from the earlier section): \begin{equation}\label{f4} x_{H(j,k)}\in A_{\alpha_{(j,k)}}\;\;\mbox{for all}\;\;(j,k)\in\mathbb N\times\mathbb N \end{equation} According to our observation from the beginning of the proof, with probability $1$, each sequence $\alpha\in\;_k\Omega$ is realized as a sequence $\alpha_{(j,k)}$ infinitely many times (in a proper order - \emph{for example, to get $\alpha=(1,(2,3),((1,2),(4,3)))$, we have to choose $(1,2,2,4,3,3,1)$ - compare it with $\alpha_{(1,3)}$ above}) - here we have to take, for example, the subsequence $k_i=2^i$ (as we need more and more space). Hence, by (\ref{f4}), for every $\alpha\in\Omega_{<}$, there are infinitely many $i\in\mathbb N$ such that $x_i\in A_\alpha$. Thus, Lemma \ref{l1} implies that with probability $1$, for every $k\in\mathbb N$, $\overline{\{x_i:i\geq k\}}=A_\mathcal F$, and the second assertion is proved.\\ Now assume that $y_0\in X$, and consider the sequence $(y_i)$ defined as the sequence $(x_i)$, but with $y_0$ as the starting point.
In order to prove the first assertion, it suffices to show that $d(x_i,y_i)\to 0$ (because then $H(\overline{\{x_i:i\geq k\}},\overline{\{y_i:i\geq k\}})\to 0$).\\ First we will prove that \begin{itemize} \item[(a)] $d\left(x_{H(j,k)},y_{H(j,k)}\right)\leq d(x_0,y_0)$ for all $(j,k)$; \item[(b)] $d\left(x_{H(j,k)},y_{H(j,k)}\right)\leq \max\left\{\varphi^{(k-1)}\left(d(x_{H(2^{k-1}j-2^{k-1}+i,1)},y_{H(2^{k-1}j-2^{k-1}+i,1)})\right):\;1\leq i\leq 2^{k-1}\right\}$ for all $(j,k)$; \item[(c)] $d\left(x_{H(j,1)},y_{H(j,1)}\right)\leq \varphi\left(d(x_{H(1,{p+1})},y_{H(1,{p+1})})\right)$, where $j\geq 3$ and $p\geq 1$ is such that $2^p<j\leq 2^{p+1}$; \end{itemize} where $\varphi$ is a witness to the fact that $\mathcal F$ is a GIFS, and we also set $\varphi^{(0)}(t):=t$.\\
\emph{ For example, $(b)$ says that $d(x_{14},y_{14})\leq \max\{\varphi^{(2)}(d(x_i,y_i)):i=8,9,11,12\}$ and $d(x_{15},y_{15})\leq \max\{\varphi^{(3)}(d(x_i,y_i)):i=1,2,4,5,8,9,11,12\}$,\\ and $(c)$ says that $d(x_5,y_5)\leq \varphi(d(x_3,y_3))$ and $d(x_{11},y_{11})\leq \varphi(d(x_7,y_7))$, see Remark \ref{f3}.}\\ Assertion $(a)$ is obvious. Assertion $(b)$ for $k=1$ is a trivial equality. Assume that $(b)$ holds for some $k\in\mathbb N$ and all $j\in\mathbb N$; we will prove it for $k+1$ and all $j\in\mathbb N$. So let $j\in\mathbb N$. We have $$ d(x_{H(j,k+1)},y_{H(j,k+1)})=d(f_{\gamma_{H(j,k+1)}}(x_{H(2j-1,k)},x_{H(2j,k)}),f_{\gamma_{H(j,k+1)}}(y_{H(2j-1,k)},y_{H(2j,k)}))\leq $$ $$ \leq\varphi(\max\{d(x_{H(2j-1,k)},y_{H(2j-1,k)}),d(x_{H(2j,k)},y_{H(2j,k)})\})\leq $$ $$ \leq \varphi(\max\{\max\{\varphi^{(k-1)}(d(x_{H(2^{k-1}(2j-1)-2^{k-1}+i,1)},y_{H(2^{k-1}(2j-1)-2^{k-1}+i,1)})):\;1\leq i\leq 2^{k-1}\},$$ $$\max\{\varphi^{(k-1)}(d(x_{H(2^{k-1}\cdot 2j-2^{k-1}+i,1)},y_{H(2^{k-1}\cdot 2j-2^{k-1}+i,1)})):\;1\leq i\leq 2^{k-1}\}\})= $$ $$ =\max\{\varphi^{(k)}(d(x_{H(2^{k}j-2^k+i,1)},y_{H(2^{k}j-2^k+i,1)})):\;1\leq i\leq 2^{k}\} $$ where the inequalities and the last equality follow from the fact that $\varphi$ is nondecreasing. Hence we get $(b)$. Now we prove $(c)$. If $j=2^p+s$ for $s=1,2$, then the assertion follows from the definition (as $x_{H(2^p+s,1)}=f_{\gamma_{H(2^p+s,1)}}(x_{H(1,p+1)},x_{H(1,p+1)})$, and $y_{H(2^p+s,1)}$ is defined analogously). Now assume that $(c)$ holds for a fixed $p\in\mathbb N$ and $j=2^p+1,2^p+2,...,2l-1,2l$, where $2^p<2l<2^{p+1}$. Then for $s=1,2$ and proper $(j,k)$, we have by definition that $x_{H(2l+s,1)}=f_{\gamma_{H(2l+s,1)}}(x_{H(j,k)},x_{H(j,k)})$, and $y_{H(2l+s,1)}$ is defined analogously.
Then by $(b)$, we have $$ d(x_{H(2l+s,1)},y_{H(2l+s,1)})\leq\varphi\left(d(x_{H(j,k)},y_{H(j,k)})\right)\leq$$ $$\leq \varphi(\max\{\varphi^{(k-1)}(d(x_{H(2^{k-1}j-2^{k-1}+i,1)},y_{H(2^{k-1}j-2^{k-1}+i,1)})):\;1\leq i\leq 2^{k-1}\})\leq $$ $$\leq\varphi(d(x_{H(1,{p+1})},y_{H(1,{p+1})})) $$ as, by the construction and the inductive assumption, each $$d(x_{H(2^{k-1}j-2^{k-1}+i,1)},y_{H(2^{k-1}j-2^{k-1}+i,1)})\leq \varphi(d(x_{H(1,p+1)},y_{H(1,p+1)})).$$ This gives us $(c)$.\\ We are ready to show that $d(x_i,y_i)\to 0$. We will use the fact that a sequence converges iff each of its subsequences has a further subsequence converging to the same limit.\\ Hence let $(r_i)$ be any subsequence of the naturals, and for any $i\in\mathbb N$, let $(j_i,k_i)$ be such that $x_{r_i}=x_{H(j_i,k_i)}$ and $y_{r_i}=y_{H(j_i,k_i)}$. Consider two cases:\\ Case 1: $\sup\{k_i:i\in\mathbb N\}=\infty$. Then, switching to an appropriate subsequence, we can assume that $k_i\to\infty$ and hence, by $(a)$ and $(b)$, we have $$ d(x_{r_i},y_{r_i})\leq\varphi^{(k_i-1)}(d(x_0,y_0))\to 0. $$ Case 2: $\sup\{k_i:i\in\mathbb N\}<\infty$. Then, switching to an appropriate subsequence, we can assume that $k_i=k$ for all $i\in\mathbb N$ and some $k\in\mathbb N$, and, consequently, $j_i\to\infty$. Then also $p_i\to\infty$, where $p_i$ is such that $2^{p_i}<j_i\leq 2^{p_i+1}$. Therefore, combining $(a)$, $(b)$ and $(c)$, we get $$ d(x_{r_i},y_{r_i})\leq\varphi(d(x_{H(1,{p_i+1})},y_{H(1,{p_i+1})}))\leq \varphi(\varphi^{(p_i)}(d(x_0,y_0)))=\varphi^{(p_i+1)}(d(x_0,y_0))\to 0 $$ All in all, $d(x_i,y_i)\to 0$ and the proof is finished. \end{proof}
Now we give a pseudocode of the chaos game for GIFSs. It will not be as simple as for IFSs, since some points have to be remembered for a ``long'' time (for example, to get $x_{14}$, we have to remember $x_{10}$ and $x_{13}$, and to get $x_{15}$, we have to have $x_{7}$ and $x_{14}$). Fortunately, the algorithm can be arranged so that only $m$ lists of the same, reasonable length have to be remembered. \begin{center}\textbf{Pseudocode for chaos game algorithm for GIFSs}\end{center} Initially chosen point: $x_0\in X$.\\ Initially defined objects: constants: $m,n$, mappings: $f_1,...,f_n$, variables: $i,j,k$, list $x$ of length $2$, lists: $z_1,..,z_m$.\\ Initial values: $j:=1$, $k:=1$, $z_1[0]:=x_0$,..., $z_m[0]:=x_0$.\\ First chosen point: choose randomly $\gamma\in\{1,...,n\}$
$\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;$ $x[j,k]:=f_\gamma(z_1[0],...,z_m[0])$
$\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;$ $z_1[1]:=x[j,k]$
$\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;$ print $x[j,k]$.\newline\\ Main loop: $\;$ choose randomly $\gamma\in\{1,...,n\}$
$\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;$ if $\;j\operatorname{mod}m\neq 0$, then
$\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;j:=m^{k-1}\cdot j+1$
$\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;k:=1$
$\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;x[j,k]:=f_\gamma(z_1[k-1],...,z_m[k-1])$
$\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;$if $j\operatorname{mod}m=0$, then $i:=m$, else $i:=j\operatorname{mod}m$
$\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;z_i[k]:=x[j,k]$
$\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;$ else
$\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;j:=\frac{j}{m}$
$\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;k:=k+1$
$\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;x[j,k]:=f_\gamma(z_1[k-1],...,z_m[k-1])$
$\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;$if $j\operatorname{mod}m=0$, then $i:=m$, else $i:=j\operatorname{mod}m$
$\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;z_i[k]:=x[j,k]$
$\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;$ if $j\operatorname{mod}m\neq 0$, then $z_1[0]:=x[j,k],$..., $z_m[0]:=x[j,k]$
$\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;$ print $x[j,k]$.\\
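A compact Python sketch of the chaos game for GIFSs (our own naming; for clarity it stores the points $x_{H(j,k)}$ in a dictionary instead of the $m$ lists of the pseudocode, so it trades the memory bound for simplicity):

```python
import random

def gifs_chaos_game(maps, m, x0, steps, seed=0):
    """Generate x_1, ..., x_steps of the chaos-game sequence for a GIFS
    of order m; (j, k) walks along the bijection H from this section."""
    rng = random.Random(seed)
    x = {}            # x[(j, k)] = x_{H(j,k)}
    last_deep = x0    # most recent point computed on a level k > 1 (the x')
    j, k = 1, 1
    pts = []
    for i in range(1, steps + 1):
        f = maps[rng.randrange(len(maps))]   # the map f_{gamma_i}
        if i <= m:
            args = (x0,) * m                 # x_1, ..., x_m use x_0
        elif k > 1:
            # x_{H(j,k)} = f(x_{H(mj-m+1,k-1)}, ..., x_{H(mj,k-1)})
            args = tuple(x[(m * j - m + 1 + t, k - 1)] for t in range(m))
        else:
            args = (last_deep,) * m          # k = 1: reuse the last deep point
        p = f(*args)
        x[(j, k)] = p
        if k > 1:
            last_deep = p
        pts.append(p)
        # advance (j, k) along the recursion defining H
        if j % m != 0:
            j, k = m ** (k - 1) * j + 1, 1
        else:
            j, k = j // m, k + 1
    return pts
```

With the affine maps of the examples below (points in $\mathbb R^2$ as tuples), plotting `pts` reproduces the images.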
Now we present examples: \begin{example}\emph{ Consider the GIFS $\mathcal F$ from Example \ref{e1}, and $\mathcal{H}=\{h_1,h_2,h_3\}$, where for any $x=(x_1,x_2),y=(y_1,y_2)\in\mathbb R^2$, we have \\$\;$\\ $ h_1(x,y):=(0.25x_1+0.2y_2\;\mbox{\textbf{;}}\;0.25x_2+0.2y_2) $\\ $ h_2(x,y):=(0.25x_1+0.2y_1\;\mbox{\textbf{;}}\;0.25x_2+0.1y_2+0.5) $\\ $ h_3(x,y):=(0.25x_1+0.1y_1+0.5\;\mbox{\textbf{;}}\;0.25x_2+0.2y_2) $\\$\;$\\ Using the chaos game for GIFSs, we get the following images: }
\begin{center} \includegraphics{chaosgameF.png}
\end{center}
\begin{center} \includegraphics{Hchaosgame.png}
\end{center} \end{example} \begin{remark}\emph{ Let us remark that the lengths of the lists $z_1,...,z_m$ are indeed reasonable - if we want to generate $\sum_{i=0}^{k-1}m^i=\frac{m^k-1}{m-1}$ points, then the lengths of $z_1,...,z_m$ should be not less than $k$. Also, in this case, we need to remember at most $km$ points to carry out all the calculations.
Hence the above algorithm is indeed quite efficient.} \end{remark} \begin{remark}\emph{ It is easy to see that the algorithm produces exactly the sequence $(x_i)$ (because $x[j,k]=x_{H(j,k)}$, and we get the points in the natural order $x_1,x_2,x_3,...$), with randomly chosen values $(\gamma_i)$.\\ We also want to point out that the presented algorithm draws all points of the sequence $(x_i)$. However, it is obvious how to modify it in order to get the sequence $(x_i)$ without some of its first points - such a change may be important if we start with $x_0$ which does not belong to $A_\mathcal F$. On the other hand, fixed points of the mappings $f$ from a GIFS $\mathcal F$ belong to $A_\mathcal F$, so they are natural candidates for starting points.} \end{remark} \begin{remark}\emph{ One might attempt to define the chaos game for GIFSs in an alternative way, even more similar to the classical chaos game. Namely, for a given sequence $(\gamma_i)$, define the sequence $(x_i)$ by the following procedure:\\
$x_0,...,x_{m-1}$ are chosen points from $X$, and $x_{k+m}:=f_{\gamma_{k+m}}(x_k,...,x_{k+m-1})$ for $k\geq 0$. \\ Is it true that (with probability $1$) the sets $\{x_i:i\geq k\}$ are closer and closer to the attractor $A_\mathcal F$?\\ Unfortunately, this does not work:\\ Let $x_0,x_1$ be any points from the attractor $A_\mathcal F$. Then (assuming the case $m=2$) $$ x_6=f_{\gamma_6}(x_4,x_5)=f_{\gamma_6}\left(f_{\gamma_4}(x_2,x_3),f_{\gamma_5}(x_3,x_4)\right)=$$ $$=f_{\gamma_6}\left(f_{\gamma_4}\left(f_{\gamma_2}(x_0,x_1),f_{\gamma_3}(x_1,x_2)\right),f_{\gamma_5}\left(f_{\gamma_3}(x_1,x_2),f_{\gamma_4}(x_2,x_3)\right)\right)\in A_{(\gamma_6,(\gamma_4,\gamma_5),((\gamma_2,\gamma_3),(\gamma_3,\gamma_4)))} $$ so some entries of the code necessarily coincide. In particular, $x_6$ cannot belong to $A_\beta$ for $\beta:=(1,(1,1),((1,2),(1,2)))$. As $x_0,x_1$ were taken arbitrarily, this shows that for any $i\geq 6$, $x_i\notin A_\beta$, provided $A_\beta$ is disjoint from the other sets $A_\alpha$, $\alpha\in\;_3\Omega$.\\ For the Cantor set defined in \cite{S}, the sets $A_\alpha$, $\alpha\in\;_3\Omega$, are pairwise disjoint, even in the stronger sense that for every $\alpha\in \;_3\Omega$, there is an open set $U_\alpha\subset \mathbb R^2$ such that $A_\alpha\subset U_\alpha$ and the $U_\alpha$'s are pairwise disjoint (as it is a standard construction of a Cantor-like set; of course, the same holds for any given $k$, not necessarily $k=3$). Therefore, $\overline{\{x_i:i\geq 6\}}\cap A_\beta=\emptyset$ and, in particular, $\overline{\{x_i:i\geq 0\}}\neq\bigcup_{\alpha\in\;_3\Omega}A_\alpha=A_\mathcal F$ (see Lemma \ref{l1}).\\ In fact, we could take many other, much longer $\beta$'s - it seems that the set $\overline{\{x_n:n\geq 0\}}$ touches only selected sets $A_\alpha$.} \end{remark}\label{e2}
\section{Affine GIFSs} Observe that, for a GIFS $\mathcal F$, by Lemma \ref{filip4}, for every closed and bounded set $D$, the sets $\bigcup_{\alpha\in\;_k\Omega}f_\alpha(D_k)$, $k\in\mathbb N$, are closer and closer to the attractor $A_\mathcal F$. Hence, if we have a nice formula for the mappings $f_\alpha$, $\alpha\in\Omega_<$, then we can easily get a good approximation of the attractor $A_\mathcal F$. In this section we will show that this can be done for a natural class of GIFSs.\\ We say that a GIFS $\mathcal F=\{f_1,...,f_n\}$ on a Euclidean space $\mathbb R^d$ is \emph{affine}, if each $f_i$ is an affine mapping, i.e., it is of the form: \begin{equation}\label{aff} f_i(x_1,...,x_m):=A^i_1\cdot x_1+...+A^i_m\cdot x_m+B^i,\;\;\;x_1,...,x_m\in\mathbb R^d \end{equation} where $A^i_1,...,A^i_m$ are real square matrices of dimension $d$, $B^i\in\mathbb R^d$, and $\cdot$ and $+$ denote the standard matrix-vector multiplication and addition.\\ It turns out that in this case, the mappings $f_\alpha$, $\alpha\in\Omega_<$, have a natural description.
Before we introduce it, let us fix some further notation.\\ If $X$ is a metric space, then let $X_1,X_2,...$ have the meaning as in (\ref{filip1}) and (\ref{filip2}).\\ If $x\in X_1=X\times...\times X$, then let $x_{(1)},...,x_{(m)}\in X$ be such that $x=(x_{(1)},...,x_{(m)})$.\\ Assume that for some $k\in\mathbb N$ and all $x\in X_k$, we have defined $x_{(\epsilon_1,...,\epsilon_k)}\in X$ for $(\epsilon_1,...,\epsilon_k)\in\{1,...,m\}^k$.\\ If $x=(x_1,...,x_m)\in X_k\times...\times X_k=X_{k+1}$, then for every $(\epsilon_1,...,\epsilon_{k+1})\in\{1,...,m\}^{k+1}$, define $$x_{(\epsilon_1,...,\epsilon_{k+1})}:=(x_{\epsilon_1})_{(\epsilon_2,...,\epsilon_{k+1})}$$ All in all, for any $k\in\mathbb N$, any $x\in X_k$ and any $(\epsilon_1,...,\epsilon_k)\in\{1,...,m\}^k$, we have defined $x_{(\epsilon_1,...,\epsilon_k)}\in X$.\\ In fact, we can give a bit less formal, but more natural, description of the elements $x_{(\epsilon_1,...,\epsilon_k)}$:\\ Observe that for every $k\in\mathbb N$, there is a natural bijection between $X_k$ and $X^{m^k}$, which assigns to any $x\in X_k$ the sequence $\tilde{x}\in X^{m^k}$ created from $x$ by erasing all but the outermost brackets.\\ \emph{For example, if $x=(((1,2),(3,1)),((2,1),(1,4)))$, then $\tilde{x}=(1,2,3,1,2,1,1,4)$.}\\ Now we can enumerate the elements of $\tilde{x}$ using the $m$-ary system, from left to right, but with values $\{1,...,m\}$ instead of $\{0,...,m-1\}$.
Then $x_{(\epsilon_1,...,\epsilon_k)}$ is exactly the $(\epsilon_1,...,\epsilon_k)$-th term of the sequence $\tilde{x}$.\\ \emph{For example, $x=(((x_{(1,1,1)},x_{(1,1,2)}),(x_{(1,2,1)},x_{(1,2,2)})),((x_{(2,1,1)},x_{(2,1,2)}),(x_{(2,2,1)},x_{(2,2,2)})))$}.\\ In a similar way, for every $k\geq 2$ and $\alpha\in\;\Omega_k$, we define elements $\alpha_{(\epsilon_1,...,\epsilon_{k-1})}$.\\ \emph{For example, $\alpha=((\alpha_{(1,1)},\alpha_{(1,2)}),(\alpha_{(2,1)},\alpha_{(2,2)}))$.}\\ We are ready to state the theorem (recall that if $\alpha\in\;_k\Omega$, then $\alpha=(\alpha^1,...,\alpha^k)$ and $\alpha^i\in\Omega_i$ for $i=1,...,k$; see the notation from the first section).
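The flattening $x\mapsto\tilde x$ and the $m$-ary indexing just described are easy to mechanise; the following Python sketch (the helper names are ours) reproduces the example above:

```python
def flatten(x):
    """Erase all brackets but the outermost ones: nested tuple -> flat tuple x~."""
    if not isinstance(x, tuple):
        return (x,)
    return tuple(v for part in x for v in flatten(part))

def entry(x, eps, m=2):
    """Return x_{(eps_1,...,eps_k)}: the term of x~ whose m-ary index,
    written with digits 1..m, is (eps_1,...,eps_k)."""
    pos = 0
    for e in eps:
        pos = pos * m + (e - 1)   # shift digits 1..m down to 0..m-1
    return flatten(x)[pos]

# The example from the text:
x = (((1, 2), (3, 1)), ((2, 1), (1, 4)))
```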
\begin{theorem}\label{it2} Let $\mathcal F=\{f_1,...,f_n\}$ be an affine GIFS which consists of functions of the form (\ref{aff}).
For every $k\in\mathbb N$, $\alpha\in {_k\Omega}$ and $x\in X_k$,
we have (all $\epsilon$'s below range from $1$ to $m$) \begin{equation}\label{aff1} f_{\alpha}(x) =
\sum_{\epsilon_1, \epsilon_2, ..., \epsilon_k } A^{\alpha ^1}_{\epsilon_1} \cdot A^{\alpha^2_{(\epsilon_1)}}_{\epsilon_2} \cdot ... \cdot A^{\alpha^k_{(\epsilon_1, \epsilon_2, ..., \epsilon_{k-1})}}_{\epsilon_k} \cdot x_{(\epsilon_1, \epsilon_2, ..., \epsilon_k)} + {B^{\alpha}} \end{equation} where \begin{equation}\label{aff2}
{B^{\alpha}} := B^{\alpha^1} + \sum_{\epsilon_1} A^{\alpha^1}_{\epsilon_1}\cdot B^{\alpha^2_{(\epsilon_1)}} + ... + \sum_{\epsilon_1, \epsilon_2, ..., \epsilon_{k-1}} A^{\alpha ^1}_{\epsilon_1} \cdot A^{\alpha^2_{(\epsilon_1)}}_{\epsilon_2} \cdot ... \cdot A^{\alpha^{k-1}_{(\epsilon_1, \epsilon_2, ..., \epsilon_{k-2})}}_{\epsilon_{k-1}} \cdot B^{\alpha^k_{(\epsilon_1, \epsilon_2, ..., \epsilon_{k-1})}} \end{equation} \end{theorem}
\begin{proof} The proof goes by induction on $k$.
For $k=1$ let us take some $\alpha \in \{1, 2, ..., n\}$ and $x=(x_1, x_2, ..., x_m) \in X^m$. We have: $$ f_{\alpha}(x) = A^{\alpha}_1 \cdot x_1+...+A^{\alpha}_m \cdot x_m+B^{\alpha} = \sum_{ \epsilon_1} A^{\alpha^1}_{\epsilon_1} \cdot x_{(\epsilon_1)} + B^{\alpha^1} $$ so (\ref{aff1}) holds for $k=1$.\\ Now assume that it holds for some $k\geq 1$, and let us take some $\alpha=(\alpha^1,...,\alpha^{k},\alpha^{k+1}) \in {_{k+1}\Omega}$ and $x=(x_1,...,x_m)\in X_{k}\times...\times X_k=X_{k+1}$. We have (see (\ref{f1}) and the other notation from the first section):
$$f_{{\alpha}} (x) = f_{{\alpha}} (x_1,...,x_m) =
f_{\alpha^1}\left(f_{{\alpha}(1)}(x_1),f_{{\alpha}(2)}(x_2),..., f_{{\alpha}(m)}(x_m)\right) = $$ $$ = A^{\alpha^1}_1 \cdot f_{{\alpha}(1)}(x_1) +A^{\alpha^1}_2 \cdot f_{{\alpha}(2)}(x_2) + ... +A^{\alpha^1}_m \cdot f_{{\alpha}(m)}(x_m) + B^{\alpha^1} = [\mbox{by definition and inductive assumption}]= $$
$$ = A^{\alpha^1}_1 \cdot \left(\sum_{\epsilon_1, \epsilon_2, ..., \epsilon_k} A^{{\alpha}(1)^1}_{\epsilon_1} \cdot A^{{\alpha}(1)^2_{(\epsilon_1)}}_{\epsilon_2} \cdot ... \cdot A^{{\alpha}(1)^k_{(\epsilon_1, \epsilon_2, ..., \epsilon_{k-1})}}_{\epsilon_k} \cdot x_{(1, \epsilon_1, \epsilon_2, ..., \epsilon_k)} + {B^{{\alpha}{(1)}}}\right) + $$ $$ \ \ \ \ \ +A^{\alpha^1}_2 \cdot \left(\sum_{\epsilon_1, \epsilon_2, ..., \epsilon_k} A^{{\alpha}(2)^1}_{\epsilon_1} \cdot A^{{\alpha}(2)^2_{(\epsilon_1)}}_{\epsilon_2} \cdot ... \cdot A^{{\alpha}(2)^k_{(\epsilon_1, \epsilon_2, ..., \epsilon_{k-1})}}_{\epsilon_k} \cdot x_{(2, \epsilon_1, \epsilon_2, ..., \epsilon_k)} +{B^{{\alpha}{(2)}}}\right) + ... + $$ $$ \ \ \ \ \ \ \ \ \ \ \ \ + A^{\alpha^1}_m \cdot \left(\sum_{\epsilon_1, \epsilon_2, ..., \epsilon_k} A^{{\alpha}(m)^1}_{\epsilon_1} \cdot A^{{\alpha}(m)^2_{(\epsilon_1)}}_{\epsilon_2} \cdot ... \cdot A^{{\alpha}(m)^k_{(\epsilon_1, \epsilon_2, ..., \epsilon_{k-1})}}_{\epsilon_k} \cdot x_{(m, \epsilon_1, \epsilon_2, ..., \epsilon_k)} + {B^{{\alpha}{(m)}}}\right) + B^{\alpha^1} = \bigotimes $$ Now observe that for any $i=1, 2, ..., m$ and any $s=1,...,k$, we have $\alpha(i)^s = \alpha^{s+1}_i$, provided $\alpha=(\alpha^1,(\alpha^2_1,...,\alpha^2_m),...,(\alpha^{k+1}_1,...,\alpha^{k+1}_m))$. With this observation we come to: $$ \bigotimes= A^{\alpha^1}_1 \cdot \left(\sum_{\epsilon_1, \epsilon_2, ..., \epsilon_k} A^{{\alpha}^2_1}_{\epsilon_1} \cdot A^{({\alpha}^3_1)_{(\epsilon_1)}}_{\epsilon_2} \cdot ... \cdot A^{({\alpha}^{k+1}_1)_{(\epsilon_1, \epsilon_2, ..., \epsilon_{k-1})}}_{\epsilon_k} \cdot x_{(1, \epsilon_1, \epsilon_2, ..., \epsilon_k)} + {B^{{\alpha}{(1)}}}\right) + $$ $$ \ \ \ \ \ \ \ A^{\alpha^1}_2 \cdot \left(\sum_{\epsilon_1, \epsilon_2, ..., \epsilon_k} A^{{\alpha}^2_2}_{\epsilon_1} \cdot A^{({\alpha}^3_2)_{(\epsilon_1)}}_{\epsilon_2} \cdot ... 
\cdot A^{({\alpha}^{k+1}_2)_{(\epsilon_1, \epsilon_2, ..., \epsilon_{k-1})}}_{\epsilon_k} \cdot x_{(2, \epsilon_1, \epsilon_2, ..., \epsilon_k)} + {B^{{\alpha}{(2)}}}\right) + ... + $$ $$ \ \ \ \ \ \ \ \ \ \ \ \ \ A^{\alpha^1}_m \cdot \left(\sum_{\epsilon_1, \epsilon_2, ..., \epsilon_k} A^{{\alpha}^2_m}_{\epsilon_1} \cdot A^{({\alpha}^3_m)_{(\epsilon_1)}}_{\epsilon_2} \cdot ... \cdot A^{({\alpha}^{k+1}_m)_{(\epsilon_1, \epsilon_2, ..., \epsilon_{k-1})}}_{\epsilon_k} \cdot x_{(m, \epsilon_1, \epsilon_2, ..., \epsilon_k)} + {B^{{\alpha}{(m)}}}\right) + B^{\alpha^1}= \bigotimes $$ Now observe that for any $i=1,...,m$ and $s=3,...,k+1$, and every $\epsilon_1,...,\epsilon_{s-2}$, we have $(\alpha^s_i)_{(\epsilon_1,...,\epsilon_{s-2})}=\alpha^s_{(i,\epsilon_1,...,\epsilon_{s-2})}$, and, similarly, $\alpha^2_i=\alpha^2_{(i)}$. Hence $$ \bigotimes= A^{\alpha^1}_1 \cdot \left(\sum_{\epsilon_1, \epsilon_2, ..., \epsilon_k} A^{{\alpha}^2_{(1)}}_{\epsilon_1} \cdot A^{{\alpha}^3_{(1, \epsilon_1)}}_{\epsilon_2} \cdot ... \cdot A^{{\alpha}^{k+1}_{(1, \epsilon_1, \epsilon_2, ..., \epsilon_{k-1})}}_{\epsilon_k} \cdot x_{(1, \epsilon_1, \epsilon_2, ..., \epsilon_k)} + {B^{{\alpha}{(1)}}}\right) + $$ $$ \ \ \ \ \ \ \ \ A^{\alpha^1}_2 \cdot \left(\sum_{\epsilon_1, \epsilon_2, ..., \epsilon_k} A^{{\alpha}^2_{(2)}}_{\epsilon_1} \cdot A^{{\alpha}^3_{(2, \epsilon_1)}}_{\epsilon_2} \cdot ... \cdot A^{{\alpha}^{k+1}_{(2, \epsilon_1, \epsilon_2, ..., \epsilon_{k-1})}}_{\epsilon_k} \cdot x_{(2, \epsilon_1, \epsilon_2, ..., \epsilon_k)} + {B^{{\alpha}{(2)}}}\right) + ... + $$ $$ \ \ \ \ \ \ \ \ \ \ \ \ \ \ A^{\alpha^1}_m \cdot \left(\sum_{\epsilon_1, \epsilon_2, ..., \epsilon_k} A^{{\alpha}^2_{(m)}}_{\epsilon_1} \cdot A^{{\alpha}^3_{(m, \epsilon_1)}}_{\epsilon_2} \cdot ... 
\cdot A^{{\alpha}^{k+1}_{(m, \epsilon_1, \epsilon_2, ..., \epsilon_{k-1})}}_{\epsilon_k} \cdot x_{(m, \epsilon_1, \epsilon_2, ..., \epsilon_k)} + {B^{{\alpha}{(m)}}}\right) + B^{\alpha^1} =\bigotimes $$ Now, renumbering the $\epsilon$'s and rearranging coefficients, we get $$ \bigotimes= \sum_{\epsilon_1, \epsilon_2, \epsilon_3, ..., \epsilon_{k+1}} A^{\alpha^1}_{\epsilon_1} \cdot A^{{\alpha}^2_{(\epsilon_1)}}_{\epsilon_2} \cdot A^{{\alpha}^3_{(\epsilon_1, \epsilon_2)}}_{\epsilon_3} \cdot ... \cdot A^{{\alpha}^{k+1}_{(\epsilon_1, \epsilon_2, \epsilon_3, ..., \epsilon_k)}}_{\epsilon_{k+1}} \cdot x_{(\epsilon_1, \epsilon_2, \epsilon_3, ..., \epsilon_{k+1})}+ $$ $$ + A^{\alpha^1}_1 \cdot {B^{{\alpha}{(1)}}} + A^{\alpha^1}_2 \cdot {B^{{\alpha}{(2)}}}+...+ A^{\alpha^1}_m \cdot {B^{{\alpha}{(m)}}}+B^{\alpha^1} $$ Hence it remains to show that \begin{equation}\label{fifi1} B^\alpha=A^{\alpha^1}_1 \cdot {B^{{\alpha}{(1)}}} + A^{\alpha^1}_2 \cdot {B^{{\alpha}{(2)}}}+...+ A^{\alpha^1}_m \cdot {B^{{\alpha}{(m)}}}+B^{\alpha^1} \end{equation} We have (we will use the same tricks as earlier without mentioning them) $$ A^{\alpha^1}_1 \cdot {B^{{\alpha}{(1)}}} + A^{\alpha^1}_2 \cdot {B^{{\alpha}{(2)}}}+...+ A^{\alpha^1}_m \cdot {B^{{\alpha}{(m)}}}+B^{\alpha^1}= \sum_{\epsilon_1} A^{\alpha^1}_{\epsilon_1} \cdot {B^{{\alpha}(\epsilon_1)}} + B^{\alpha^1} = [\mbox{by inductive assumption}]= $$ $$ =B^{\alpha^1}+ \sum_{\epsilon_1} A^{\alpha^1}_{\epsilon_1} \cdot \left(B^{{\alpha}(\epsilon_1)^1} + \sum_{\epsilon_2} A^{{\alpha}(\epsilon_1)^1}_{\epsilon_2} \cdot B^{{\alpha}(\epsilon_1)^2_{(\epsilon_2)}} + ...
+ \sum_{\epsilon_2, ..., \epsilon_k} A^{{\alpha}(\epsilon_1)^1}_{\epsilon_2} \cdot ... \cdot A^{{\alpha}(\epsilon_1)^{k-1}_{(\epsilon_2, ..., \epsilon_{k-1})}}_{\epsilon_k} \cdot B^{{\alpha}(\epsilon_1)^k_{(\epsilon_2, ..., \epsilon_k)}}\right) = $$ $$ = B^{\alpha^1}+ \sum_{\epsilon_1} A^{\alpha^1}_{\epsilon_1} \cdot \left(B^{\alpha^2_{\epsilon_1}} + \sum_{\epsilon_2} A^{\alpha^2_{\epsilon_1}}_{\epsilon_2} \cdot B^{(\alpha^3_{\epsilon_1})_{(\epsilon_2)}} + ... + \sum_{\epsilon_2, ..., \epsilon_k} A^{\alpha^2_{\epsilon_1}}_{\epsilon_2} \cdot ... \cdot A^{(\alpha^k_{\epsilon_1})_{(\epsilon_2, ..., \epsilon_{k-1})}}_{\epsilon_k} \cdot B^{(\alpha^{k+1}_{\epsilon_1})_{(\epsilon_2, ..., \epsilon_k)}}\right) = $$ $$ =B^{\alpha^1}+ \sum_{\epsilon_1} A^{\alpha^1}_{\epsilon_1} \cdot B^{\alpha^2_{(\epsilon_1)}} + \sum_{\epsilon_1,\epsilon_2} A^{\alpha^1}_{\epsilon_1} \cdot A^{\alpha^2_{(\epsilon_1)}}_{\epsilon_2} \cdot B^{\alpha^3_{(\epsilon_1, \epsilon_2)}} + ... + \sum_{\epsilon_1, ..., \epsilon_k} A^{\alpha^1}_{\epsilon_1} \cdot A^{\alpha^2_{(\epsilon_1)}}_{\epsilon_2} \cdot ... \cdot A^{\alpha^k_{(\epsilon_1, \epsilon_2, ..., \epsilon_{k-1})}}_{\epsilon_k} \cdot B^{\alpha^{k+1}_{(\epsilon_1, \epsilon_2, ..., \epsilon_k)}} =
{B^{{\alpha}}}. $$ \end{proof}
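For a sanity check of Theorem \ref{it2}, one can compare the closed form (\ref{aff1})--(\ref{aff2}) with the recursive definition of $f_\alpha$ numerically. The Python sketch below does this for $k=2$ with $n=m=2$ and scalar ($d=1$) coefficients; the coefficient values are illustrative random numbers, and the indices are $0$-based:

```python
import itertools
import random

# Numerical check of the closed form (aff1)-(aff2) for k = 2, with n = 2 maps
# of order m = 2 and scalar (d = 1) coefficients; values are illustrative.
random.seed(0)
m, n = 2, 2
A = [[random.uniform(-0.5, 0.5) for _ in range(m)] for _ in range(n)]
B = [random.uniform(-1.0, 1.0) for _ in range(n)]

def f(i, xs):
    """f_i(x_1,...,x_m) = sum_j A^i_j x_j + B^i."""
    return sum(A[i][j] * xs[j] for j in range(m)) + B[i]

# alpha = (alpha^1, (alpha^2_1, alpha^2_2)) in _2Omega, x in X_2 = X_1 x X_1.
a1, a2 = 0, (1, 0)
x = ((0.3, -0.7), (1.1, 0.4))

# Recursive definition: f_alpha(x) = f_{alpha^1}(f_{alpha^2_1}(x_1), f_{alpha^2_2}(x_2)).
recursive = f(a1, tuple(f(a2[j], x[j]) for j in range(m)))

# Closed form: sum over (e1, e2) of A^{alpha^1}_{e1} A^{alpha^2_{(e1)}}_{e2} x_{(e1,e2)},
# plus B^alpha = B^{alpha^1} + sum_{e1} A^{alpha^1}_{e1} B^{alpha^2_{(e1)}}.
B_alpha = B[a1] + sum(A[a1][e1] * B[a2[e1]] for e1 in range(m))
closed = sum(A[a1][e1] * A[a2[e1]][e2] * x[e1][e2]
             for e1, e2 in itertools.product(range(m), repeat=2)) + B_alpha
```

The two values agree up to floating-point error, as the closed form is exactly the expanded recursion.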
Now we present pseudocode for an algorithm that generates the values \begin{equation}\label{baff3} A^\alpha_{\epsilon_1,...,\epsilon_k}:=A^{\alpha ^1}_{\epsilon_1} \cdot A^{\alpha^2_{(\epsilon_1)}}_{\epsilon_2} \cdot ... \cdot A^{\alpha^k_{(\epsilon_1, \epsilon_2, ..., \epsilon_{k-1})}}_{\epsilon_k} \end{equation} and \begin{equation}\label{baff4} B^\alpha:=B^{\alpha^1} + \sum_{\epsilon_1} A^{\alpha^1}_{\epsilon_1}\cdot B^{\alpha^2_{(\epsilon_1)}} + ... + \sum_{\epsilon_1, \epsilon_2, ..., \epsilon_{k-1}} A^{\alpha ^1}_{\epsilon_1} \cdot A^{\alpha^2_{(\epsilon_1)}}_{\epsilon_2} \cdot ... \cdot A^{\alpha^{k-1}_{(\epsilon_1, \epsilon_2, ..., \epsilon_{k-2})}}_{\epsilon_{k-1}} \cdot B^{\alpha^k_{(\epsilon_1, \epsilon_2, ..., \epsilon_{k-1})}} \end{equation}
As we want to work with numbers rather than with many nested lists, we need some further notation and remarks. Assume that $\mathcal F=\{f_1,...,f_n\}$ is a GIFS of order $m$.\\ At first observe that for any $k\in\mathbb N$, we can assign to each $\alpha\in\;\Omega_k$ (and $\alpha\in\;_k\Omega$) the sequence $\tilde{\alpha}$ of length $m^{k-1}$ (respectively, $1+...+m^{k-1}=\frac{1-m^k}{1-m}$), in a similar way as we did for the $x$'s earlier, i.e., by erasing all brackets but the outermost ones.\\ \emph{For example, if $\alpha=(1,(2,1),((3,2),(4,1)))$, then $\tilde{\alpha}=(1,2,1,3,2,4,1)$.}\\ Therefore ($\operatorname{card}(\cdot)$ denotes the cardinality of a set) \begin{equation}\label{afff1} \operatorname{card}(\Omega_k)=n^{m^{k-1}}\;\;\;\mbox{and}\;\;\;\operatorname{card}(_k\Omega)=n^{\frac{1-m^k}{1-m}} \end{equation} Now to any $\alpha\in\;_k\Omega$ we can assign the number $N(\alpha,k)$ such that the sequence $\tilde{\alpha}$ is its representation in the $n$-ary system, but with the use of digits $1,...,n$ instead of $0,...,n-1$. In a similar way we assign to each $\alpha\in\;\Omega_k$ the number $M(\alpha,k)$.\\ \emph{For example, for $\alpha$ as above (and for $n=4$), we have (note that here $6=\frac{1-2^3}{1-2}-1=\frac{1-m^k}{1-m}-1$) $$N(\alpha,3)= 0\cdot 4^6+1\cdot 4^5+0\cdot 4^4+2\cdot 4^3+1\cdot 4^2+3\cdot 4^1+0\cdot 4^0=$$ $$=(1-1)\cdot 4^6+(2-1)\cdot 4^5+(1-1)\cdot 4^4+(3-1)\cdot 4^3+(2-1)\cdot 4^2+(4-1)\cdot 4^1+(1-1)\cdot 4^0$$} Clearly, the mappings $\alpha\to N(\alpha,k)$ and $\alpha\to M(\alpha,k)$ are bijections between $_k\Omega$ and $\left\{0,...,n^{\frac{1-m^k}{1-m}}-1\right\}$, and between $\Omega_k$ and $\left\{0,...,n^{m^{k-1}}-1\right\}$, respectively.\\ Also, by definition, if $\alpha\in\;_{k-1}\Omega$ and $\gamma\in\Omega_k$, then \begin{equation}\label{afff2} N(\alpha\hat\;\gamma,k)=N(\alpha,k-1)\cdot n^{m^{k-1}}+M(\gamma,k), \end{equation} where $\alpha\hat\;\gamma$ denotes the extension of $\alpha$ by $\gamma$. 
Similarly, to any sequence $\epsilon=(\epsilon_1,...,\epsilon_k)\in\{1,...,m\}^k$ we assign the number \begin{equation}\label{aa1}P(\epsilon,k):=(\epsilon_1-1)\cdot m^{k-1}+...+(\epsilon_{k-1}-1)\cdot m^1+(\epsilon_{k}-1)\cdot m^0 \end{equation} Clearly, if $\epsilon=(\epsilon_1,...,\epsilon_k)$ and $\epsilon'=(\epsilon_1,...,\epsilon_{k-1})$, then \begin{equation}\label{aa2}P(\epsilon,k)=m\cdot P(\epsilon',k-1)+\epsilon_k-1\end{equation} Now we show that if $\alpha\in\Omega_k$ and $\epsilon=(\epsilon_1,...,\epsilon_{k-1})\in\{1,...,m\}^{k-1}$, then \begin{equation}\label{aff3} \alpha_{(\epsilon_1,...,\epsilon_{k-1})}=\left(\operatorname{floor}\left(\frac{M(\alpha,k)}{n^{m^{k-1}-1-P(\epsilon,k-1)}}\right)\right)\operatorname{mod}n+1 \end{equation} where $\operatorname{floor}$ is the integer part of a number.\\ To see (\ref{aff3}), observe that $\alpha_{(\epsilon_1,...,\epsilon_{k-1})}$ is exactly the $(m^{k-1}-P(\epsilon,k-1))$-th term (counting from the right) of the sequence $\tilde{\alpha}$ (which can be proved by induction with respect to $k$, with the use of (\ref{aa2})), and the formula in (\ref{aff3}) extracts this term from $\tilde{\alpha}$ (we need to add $1$ because we use digits $1,...,n$ instead of $0,...,n-1$).\\ \emph{For example, let $\alpha=(((1,2),(4,3)),((3,2),(1,2)))$ (and $n=4$), and $\epsilon=(1,2,1)$. Then $\alpha_{(1,2,1)}=4$, and the formula (\ref{aff3}) gives the same: $M(\alpha,4)=7825$, $P(\epsilon,3)=2$, and $\left(\operatorname{floor}\left(\frac{7825}{4^{2^{4-1}-1-2}}\right)\right)\operatorname{mod}n+1=4$. }\\ Now, for every $\alpha=(\alpha^1,...,\alpha^k)\in\;_k\Omega$ and $\epsilon=(\epsilon_1,...,\epsilon_k)\in\{1,...,m\}^k$, denote $$ A[k,N(\alpha,k),P(\epsilon,k)]:=A^{\alpha}_{\epsilon_1,...,\epsilon_k}$$ $$B[k,N(\alpha,k)]:=B^\alpha$$ and if $k\geq 2$, then $$C[k,N(\alpha,k),P(\epsilon',k-1)]:=A^{\alpha'}_{\epsilon_1,...,\epsilon_{k-1}}\cdot B^{\alpha^k_{(\epsilon_1,...,\epsilon_{k-1})}} $$
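The worked example above can be verified mechanically. In the Python sketch below (the helper names are ours), $M(\alpha,4)$, $P(\epsilon,3)$ and the right-hand side of (\ref{aff3}) are recomputed for $\alpha=(((1,2),(4,3)),((3,2),(1,2)))$, $n=4$, $m=2$:

```python
def M_number(alpha_flat, n):
    """M(alpha,k): read the flattened sequence as an n-ary number whose
    digits 1..n are shifted down to 0..n-1."""
    v = 0
    for a in alpha_flat:
        v = v * n + (a - 1)
    return v

def P_number(eps, m):
    """P(eps,k) = (eps_1 - 1) m^{k-1} + ... + (eps_k - 1) m^0, cf. (aa1)."""
    v = 0
    for e in eps:
        v = v * m + (e - 1)
    return v

# The worked example: alpha = (((1,2),(4,3)),((3,2),(1,2))), n = 4, m = 2, k = 4,
# so the flattened sequence is (1,2,4,3,3,2,1,2), and eps = (1,2,1).
n, m, k = 4, 2, 4
alpha_flat = (1, 2, 4, 3, 3, 2, 1, 2)
eps = (1, 2, 1)

M = M_number(alpha_flat, n)                           # should be 7825
P = P_number(eps, m)                                  # should be 2
digit = (M // n ** (m ** (k - 1) - 1 - P)) % n + 1    # formula (aff3): should be 4
```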
where $\alpha'=(\alpha^1,...,\alpha^{k-1})$ and $\epsilon'=(\epsilon_1,...,\epsilon_{k-1})$. Then by (\ref{baff3}), (\ref{baff4}), (\ref{aa1}), (\ref{aa2}) and (\ref{afff2}), we have for $k\geq 2$: \begin{equation}\label{aa3} A[k,N(\alpha,k),P(\epsilon,k)]=A\left[k,N(\alpha',k-1)\cdot n^{m^{k-1}}+M(\alpha^k,k),P(\epsilon',k-1)\cdot m+\epsilon_k-1\right]=\end{equation} $$=A[k-1,N(\alpha',k-1),P(\epsilon',k-1)]\cdot A\left[1,\alpha^k_{(\epsilon_1,...,\epsilon_{k-1})}-1,\epsilon_{k}-1\right] $$ and \begin{equation}\label{aa4} C[k,N(\alpha,k),P(\epsilon',k-1)]=A[k-1,N(\alpha',k-1),P(\epsilon',k-1)]\cdot B\left[1,\alpha^k_{(\epsilon_1,...,\epsilon_{k-1})}-1\right] \end{equation} and, finally, \begin{equation}\label{aa5} B[k,N(\alpha,k)]=B[k-1,N(\alpha',k-1)]+\sum_{\epsilon=(\epsilon_1,...,\epsilon_{k-1})}C[k,N(\alpha,k),P(\epsilon,k-1)] \end{equation} We are ready to give a pseudocode of an algorithm which will generate values (which are, in fact, matrices) $A[k,N,P]$, $C[k,N,P]$ and $B[k,N]$, according to the above procedure. Then, having these values, we automatically have the mappings $f_\alpha$ and hence, using Lemma \ref{filip4}, we can easily make images of $A_\mathcal F$. Note that in the main loop below we used (\ref{aa3}), (\ref{aa4}) and (\ref{aa5}).
\begin{center}\textbf{Pseudocode for affine GIFSs - defining all coefficients}\end{center} Initially defined objects: constants: $m,n,A^i_j,B^i$, $i=1,...,n$, $j=1,...,m$, variables: $N,M,P,I,k$, matrices: $A,B,C$.\\
First defined values:
$\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;$ $k:=1$
$\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;$ For $N$ from $0$ to $n-1$
$\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;$ $B[k,N]:=B^{N+1}$
$\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;$ For $P$ from $0$ to $m-1$
$\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;$ $A[k,N,P]:=A^{N+1}_{P+1}$
Main loop
$\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;$ $k:=k+1$
$\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;$ For $N$ from $0$ to $n^{\frac{m^{k-1}-1}{m-1}}-1$
$\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;$ For $M$ from $0$ to $n^{m^{k-1}}-1$
$\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;$ $B\left[k,N\cdot n^{m^{k-1}}+M\right]:=B[k-1,N]$
$\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;$ For $P$ from $0$ to $m^{k-1}-1$
$\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;$ $C\left[k,N\cdot n^{m^{k-1}}+M,P\right]:=$
$\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;$ $:=A[k-1,N,P]\cdot B\left[1,\left(\operatorname{floor}\left(\frac{M}{n^{m^{k-1}-1-P}}\right)\right)\operatorname{mod}n\right]$
$\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;$ $B\left[k,N\cdot n^{m^{k-1}}+M\right]:=B\left[k,N\cdot n^{m^{k-1}}+M\right]+C\left[k,N\cdot n^{m^{k-1}}+M,P\right]$
$\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;$ For $I$ from $0$ to $m-1$
$\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;$ $A\left[k,N\cdot n^{m^{k-1}}+M,P\cdot m+I\right]:=$
$\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;$
$:=A[k-1,N,P]\cdot A\left[1,\left(\operatorname{floor}\left(\frac{M}{n^{m^{k-1}-1-P}}\right)\right)\operatorname{mod}n,I\right]$
$\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;$
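The pseudocode above translates almost verbatim into Python; the sketch below (scalar $d=1$ coefficients with illustrative values, $0$-based indices, and one pass of the main loop, i.e., up to $k=2$, with $n=m=2$) stores the values in dictionaries and checks the result against the defining formula for $B^\alpha$:

```python
# Direct transcription of the pseudocode above, with dictionaries A[(k,N,P)],
# B[(k,N)], C[(k,N,P)] playing the role of the matrices.
m, n = 2, 2
A1 = [[0.4, 0.1], [0.2, 0.3]]   # A1[i][j] stands for A^{i+1}_{j+1}
B1 = [0.0, 1.0]                 # B1[i] stands for B^{i+1}

A, B, C = {}, {}, {}

# First defined values (k = 1):
for N in range(n):
    B[(1, N)] = B1[N]
    for P in range(m):
        A[(1, N, P)] = A1[N][P]

# Main loop, one pass (k = 2):
k = 2
for N in range(n ** ((m ** (k - 1) - 1) // (m - 1))):
    for M in range(n ** (m ** (k - 1))):
        idx = N * n ** (m ** (k - 1)) + M
        B[(k, idx)] = B[(k - 1, N)]
        for P in range(m ** (k - 1)):
            digit = (M // n ** (m ** (k - 1) - 1 - P)) % n
            C[(k, idx, P)] = A[(k - 1, N, P)] * B[(1, digit)]
            B[(k, idx)] += C[(k, idx, P)]
            for I in range(m):
                A[(k, idx, P * m + I)] = A[(k - 1, N, P)] * A[(1, digit, I)]

# Sanity check: for alpha = (a+1, (b+1, c+1)) stored at index a*4 + b*2 + c,
# B^alpha must equal B^{a+1} + A^{a+1}_1 B^{b+1} + A^{a+1}_2 B^{c+1}.
a, b, c = 0, 1, 0
assert abs(B[(2, a * 4 + b * 2 + c)]
           - (B1[a] + A1[a][0] * B1[b] + A1[a][1] * B1[c])) < 1e-12
```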
\begin{remark}\emph{ Note that for computing $B^\alpha$ and the $A$'s at step $k$, we need $n+n^{\frac{m^{k-1}-1}{m-1}}\left(1+m^{k-1}\right)$ places for the matrices from the first and the $(k-1)$-st steps, which are necessary for further computations, and $n^{\frac{m^k-1}{m-1}}\left(1+(m+1)m^{k-1}\right)$ places for the result. } \end{remark}
The algorithm given above is quite expensive, even if we make it a bit more efficient (for example, we can avoid repeating some computations by defining certain lists at the beginning).
Therefore we also present a shortcut. The crucial observation is that if in Lemma \ref{filip4} we take $D:=\{0\}$, then for every $\alpha\in\;_k\Omega$, $f_\alpha(D_k)=\{B^\alpha\}$ (because in this case, $x_{(\epsilon_1,...,\epsilon_k)}=0$). Hence the sets $\{B^\alpha:\alpha\in\;_k\Omega\}$ are better and better approximations of the attractor $A_\mathcal F$. Also, by the proof of Theorem \ref{it2} (see (\ref{fifi1})), we have $$ B^\alpha=A^{\alpha^1}_1 \cdot {B^{{\alpha}{(1)}}} + A^{\alpha^1}_2 \cdot {B^{{\alpha}{(2)}}}+...+ A^{\alpha^1}_m \cdot {B^{{\alpha}{(m)}}}+B^{\alpha^1} $$ for all $\alpha\in\;_k\Omega$, $k\geq 2$. This suggests how an appropriate algorithm should work (of course, the previous algorithm also yields the elements $B^\alpha$).\\ Let $\alpha=(\alpha^1,(\alpha^2_1,...,\alpha^2_m),...,(\alpha^k_1,...,\alpha^k_m))\in\;_k\Omega$, $k\geq 2$. By similar reasoning as earlier, we can see that for any $i=2,...,k$ and $j=1,...,m$, we have \begin{equation}\label{fifi2} M(\alpha^i_j,i-1)=\left(\operatorname{floor}\left(\frac{N(\alpha,k)}{n^{\frac{1-m^k}{1-m}-\frac{1-m^{i-1}}{1-m}-j\cdot m^{i-2}}}\right)\right) \operatorname{mod}n^{m^{i-2}} \end{equation} and \begin{equation}\label{fifi3} N(\alpha(j),k-1)=M\left(\alpha^k_j,k-1\right)+M\left(\alpha^{k-1}_j,k-2\right)\cdot n^{m^{k-2}}+ M\left(\alpha^{k-2}_j,k-3\right)\cdot n^{m^{(k-2)}+m^{(k-3)}}+...+\end{equation} $$+...+M\left(\alpha^2_j,1\right)\cdot n^{m^{(k-2)}+m^{(k-3)}+...+m^{1}} $$ provided $k>2$, and \begin{equation}\label{fifi4} N(\alpha(j),1)=M\left(\alpha^2_j,1\right) \end{equation} provided $k=2$. Observe that (\ref{fifi3}) and (\ref{fifi4}) can be written in the common form $$ N(\alpha(j),k-1)=\sum_{i=2}^{k}M(\alpha^i_j,i-1)n^{\frac{m^{i-1}-m^{k-1}}{1-m}} $$
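The recursion for $B^\alpha$ can be iterated directly on the finite set of points $\{B^\alpha:\alpha\in\;_k\Omega\}$. The Python sketch below (our own illustrative scalar data, not from the paper) does so in the degenerate case $n=1$, $m=2$, where $_k\Omega$ is a singleton and the single point must converge to the fixed point $p=2c$ of $f(x_1,x_2)=\frac{x_1}{4}+\frac{x_2}{4}+c$:

```python
import itertools

# Iterate B^alpha = B^{alpha^1} + sum_j A^{alpha^1}_j * B^{alpha(j)} on the set
# of points {B^alpha : alpha in _kOmega}.  Degenerate scalar example: n = 1,
# m = 2, f(x1, x2) = x1/4 + x2/4 + c, whose attractor is the single point 2c.
# For general n the list below has n^{(m^k-1)/(m-1)} entries, so only a few
# levels are feasible.
m, n, c = 2, 1, 1.0
A = [[0.25, 0.25]]   # A^1_1, A^1_2
B = [c]              # B^1

points = list(B)     # {B^alpha : alpha in _1Omega}
for _ in range(30):  # pass from _kOmega to _{k+1}Omega
    points = [B[i] + sum(A[i][j] * bs[j] for j in range(m))
              for i in range(n)
              for bs in itertools.product(points, repeat=m)]
```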
We are ready to give the pseudocode for the algorithm. Here $N$ stands for $N(\alpha,k)$, $B[k,N]$ stands for $B[k,N(\alpha,k)]$, and $P[j]$ stands for $N(\alpha(j),k-1)$.
\begin{center}\textbf{Pseudocode for affine GIFSs - defining variables $B^\alpha$}\end{center}
Initially defined objects: constants $m,n,A^i_j,B^i$, $i=1,...,n$, $j=1,...,m$, variables: $N,i,j,k,M$, list: $P$, matrices: $A,B$
Initially defined values:
$\;\;\;\;\;\;\;\;\;\;\;\;\;\;$ $k:=1$
$\;\;\;\;\;\;\;\;\;\;\;\;\;\;$ For $N$ from $0$ to $n-1$
$\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;$ $B[k,N]:=B^{N+1}$
$\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;$ For $j$ from $1$ to $m$
$\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;$ $A[N,j]:=A^{N+1}_j$
Main loop:
$\;\;\;\;\;\;\;\;\;\;\;\;\;\;$ $k:=k+1$
$\;\;\;\;\;\;\;\;\;\;\;\;\;\;$ For $N$ from $0$ to $n^{\frac{m^k-1}{m-1}}-1$
$\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;$ For $j$ from $1$ to $m$
$\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;$ $P[j]:=\sum_{i=2}^k\left(\left(\operatorname{floor}\left(\frac{N}{n^{\frac{1-m^k}{1-m}-\frac{1-m^{i-1}}{1-m}-j\cdot m^{i-2}}}\right)\right) \operatorname{mod}n^{m^{i-2}}\right)\cdot n^{\frac{m^{i-1}-m^{k-1}}{1-m}}$
$\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;$ $B[k,N]:=\sum_{j=1}^mA\left[\operatorname{floor}\left(\frac{N}{n^{\frac{m^{k}-1}{m-1}-1}}\right),j\right]\cdot B\left[k-1,P[j]\right]+B\left[1,\operatorname{floor}\left(\frac{N}{n^{\frac{m^{k}-1}{m-1}-1}}\right)\right]$\\
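As with the first algorithm, the pseudocode can be transcribed directly; the Python sketch below (scalar illustrative coefficients, $0$-based indices, one pass of the main loop, i.e., $k=2$ with $n=m=2$) checks the computed $B[k,N]$ against the defining formula for $B^\alpha$:

```python
# Transcription of the pseudocode above for one pass of the main loop.
m, n = 2, 2
A1 = [[0.4, 0.1], [0.2, 0.3]]   # A1[i][j] stands for A^{i+1}_{j+1}
B1 = [0.0, 1.0]                 # B1[i] stands for B^{i+1}

A, B = {}, {}
for N in range(n):
    B[(1, N)] = B1[N]
    for j in range(1, m + 1):
        A[(N, j)] = A1[N][j - 1]

k = 2
for N in range(n ** ((m ** k - 1) // (m - 1))):
    # P[j] = N(alpha(j), k-1), computed from N as in the pseudocode.
    P = {}
    for j in range(1, m + 1):
        P[j] = sum(((N // n ** ((m ** k - 1) // (m - 1)
                                - (m ** (i - 1) - 1) // (m - 1)
                                - j * m ** (i - 2)))
                    % n ** (m ** (i - 2)))
                   * n ** ((m ** (k - 1) - m ** (i - 1)) // (m - 1))
                   for i in range(2, k + 1))
    head = N // n ** ((m ** k - 1) // (m - 1) - 1)   # this is alpha^1 - 1
    B[(k, N)] = (sum(A[(head, j)] * B[(k - 1, P[j])] for j in range(1, m + 1))
                 + B[(1, head)])

# For alpha = (a+1, (b+1, c+1)) stored at N = a*n^2 + b*n + c, B[2,N] must equal
# B^{a+1} + A^{a+1}_1 B^{b+1} + A^{a+1}_2 B^{c+1}.
a, b, c = 1, 0, 1
assert abs(B[(2, a * 4 + b * 2 + c)]
           - (B1[a] + A1[a][0] * B1[b] + A1[a][1] * B1[c])) < 1e-12
```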
Finally, we present an example. \begin{example}\label{e3}\emph{ Consider the GIFSs $\mathcal F$ and $\mathcal G$ from Example \ref{e1}. Clearly, they are affine. Using the above algorithm, we obtain the following images:} \begin{center} \includegraphics{Faffine.png}
\end{center} \begin{center} \includegraphics{Gaffine.png}
\end{center} \end{example}
\begin{remark}\emph{ Note that in the ``shorter'' version of the algorithm, for computing $B^\alpha$ at step $k$, we need $n^{\frac{m^{k-1}-1}{m-1}}+n(m+1)$ places for the matrices from the first and the $(k-1)$-st steps, which are necessary for further computations, and $m+n^{\frac{m^k-1}{m-1}}$ places for the result. } \end{remark}
\end{document}
\begin{document}
\title{On the testability and repair of hereditary hypergraph properties} \author{Tim Austin \and Terence Tao} \date{}
\maketitle
\newenvironment{nmath}{\begin{center}\begin{math}}{\end{math}\end{center}}
\newtheorem{theorem}{Theorem}[section] \newtheorem{thm}{Theorem}[section] \newtheorem{lemma}[thm]{Lemma} \newtheorem{lem}[thm]{Lemma} \newtheorem{prop}[thm]{Proposition} \newtheorem{proposition}[thm]{Proposition} \newtheorem{corollary}[thm]{Corollary} \newtheorem{cor}[thm]{Corollary} \newtheorem{conj}[thm]{Conjecture} \newtheorem{definition}[thm]{Definition} \newtheorem{dfn}[thm]{Definition} \newtheorem{example}[thm]{Example} \newtheorem{examples}[thm]{Examples} \newtheorem{remark}[thm]{Remark} \newtheorem{prob}[thm]{Problem} \newtheorem{ques}[thm]{Question} \newtheorem{assertion}[thm]{Assertion}
\newcommand{\bb}[1]{\mathbb{#1}} \renewcommand{\rm}[1]{\mathrm{#1}} \renewcommand{\it}[1]{\mathit{#1}} \renewcommand{\cal}[1]{\mathcal{#1}} \renewcommand{\bf}[1]{\mathbf{#1}} \renewcommand{\frak}[1]{\mathfrak{#1}}
\begin{abstract} Recent works of Alon-Shapira \cite{AloSha2} and R\"odl-Schacht \cite{rs2} have demonstrated that every hereditary property of undirected graphs or hypergraphs is testable with one-sided error; informally, this means that if a graph or hypergraph satisfies that property ``locally'' with sufficiently high probability, then it can be perturbed (or ``repaired'') into a graph or hypergraph which satisfies that property ``globally''.
In this paper we make some refinements to these results, some of which may be surprising. In the positive direction, we strengthen the results to cover hereditary properties of multiple directed polychromatic graphs and hypergraphs. In the case of undirected graphs, we extend the result to continuous graphs on probability spaces, and show that the repair algorithm is ``local'' in the sense that it only depends on a bounded amount of data; in particular, the graph can be repaired in a time linear in the number of edges. We also show that local repairability also holds for monotone or partite hypergraph properties (this latter result is also implicitly in \cite{ishi}). In the negative direction, we show that local repairability breaks down for directed graphs, or for undirected $3$-uniform hypergraphs. The reason for this contrast in behavior stems from (the limitations of) Ramsey theory. \end{abstract}
\parskip 0pt
\tableofcontents
\parskip 7pt
\section{Introduction}
The purpose of this paper is to investigate various generalisations of some recent graph and hypergraph property testing results of Alon-Shapira \cite{AloSha2}, R\"odl-Schacht \cite{rs2}, and others, when the graphs and hypergraphs are allowed to become coloured, non-uniform, directed, and/or to contain loops. We also investigate a stronger property than local testability of such properties, which we call ``local repairability''. Very briefly, our conclusions will be that the local testability results of R\"odl and Schacht extend to very general settings, but that the stronger local repairability results of Alon and Shapira are largely restricted to the setting of undirected graphs.
\subsection{Previous results}\label{prevsec}
Before discussing the general setting of coloured, non-uniform, directed hypergraphs in which our main results will take place, we first discuss the more familiar setting of monochromatic, uniform, undirected graphs and hypergraphs, which is the focus of most of the previous literature on this subject.
We begin with the property testing theory for (monochromatic, undirected) graphs $G = (V,E)$, where $V$ is a finite vertex set and $E \subset \binom{V}{2}$ is\footnote{We use $\binom{V}{k} := \{e \subset V: |e|=k\}$ to denote the $k$-element subsets of $V$, and $|e|$ to denote the cardinality of a finite set $e$.} a set of edges in $V$. One can also view such a graph as a map\footnote{The notational conventions in this section may seem somewhat odd, but will become clearer in the next section when we generalise these notions to coloured, non-uniform, and directed hypergraphs. The subscript $2$, in particular, has to do with the $2$-uniform nature of graphs, i.e. that all edges consist of two vertices; the set $\{0,1\}$, meanwhile, is there to emphasise the monochromatic nature of the graph.} $G_2: \binom{V}{2} \to \{0,1\}$, where $G_2(\{v,w\})$ equals $1$ when $\{v,w\}$ lies in $E$ and equals zero otherwise. The set of all graphs on a fixed vertex set $V$ will be denoted $2^{\binom{V}{2}}$.
A \emph{graph property} $\mathcal{P}$ is an assertion which holds true for some graphs and not for others. More formally, such a property assigns to each vertex set $V$ a collection $\mathcal{P}^{(V)} \subset \{0,1\}_2^{(V)}$ of graphs on $V$, defined as the set of graphs on $V$ that obey $\mathcal{P}$. Thus, for instance, if $\mathcal{P}$ is the property of being bipartite, then $\mathcal{P}^{(V)}$ is the collection of bipartite graphs on $V$.
We will restrict attention to two special types of graph properties, namely the monotone and hereditary properties. A graph property $\mathcal{P}$ is \emph{hereditary} if, for every injection $\phi: V \to W$ between two finite sets $V, W$, and any graph $G \in \mathcal{P}^{(W)}$ on $W$ obeying $\mathcal{P}$, the pullback graph (or \emph{induced graph}) $\{0,1\}_2^{(\phi)}(G)$ on $V$ (defined by declaring an edge $\{v_1,v_2\}$ to lie in $\{0,1\}_2^{(\phi)}(G)$ if and only if $\{\phi(v_1),\phi(v_2)\}$ lies in $G$) also obeys $\mathcal{P}$; in other words, the pullback map $\{0,1\}_2^{(\phi)}$ maps $\mathcal{P}^{(W)}$ to $\mathcal{P}^{(V)}$. In particular, this implies that the graph property is invariant with respect to graph isomorphism, and is also preserved by passing from a graph $G \in 2^{\binom{V}{2}}$ to an induced subgraph $G \downharpoonright_W \in 2^{\binom{W}{2}}$ for any $W \subset V$. A \emph{monotone} graph property is a hereditary graph property with the additional property that if one takes a graph in $\mathcal{P}^{(V)}$ and removes one or more edges from it, then the resulting graph continues to have the property $\mathcal{P}$.
\begin{example} The properties of being $4$-colourable, bipartite, or triangle-free are monotone (and hence hereditary). Given any $k > 0$, the properties of being connected, or of avoiding either the empty graph on $k$ vertices or the complete graph on $k$ vertices, are hereditary (but not monotone). The properties of having an odd number of edges, or of containing a Hamiltonian cycle, are neither monotone nor hereditary. It is not hard to show that a graph property $\mathcal{P}$ is monotone if and only if there is a (possibly infinite) family ${\mathcal F}$ of ``forbidden subgraphs'', such that a graph $G$ obeys $\mathcal{P}$ if and only if it does not have any of the graphs in ${\mathcal F}$ as subgraphs, while $\mathcal{P}$ is hereditary if and only if there is a family ${\mathcal F}$ of ``forbidden induced subgraphs'' such that a graph $G$ obeys $\mathcal{P}$ if and only if it does not have any of the graphs in ${\mathcal F}$ as \emph{induced} subgraphs. For further discussion of monotone and hereditary graph properties, see \cite{AloSha2}. \end{example}
We now come to the key notion of \emph{testability}.
\begin{definition}[Testability for graph properties]\label{testgraph}\cite{RS} A graph property $\mathcal{P}$ is said to be \emph{locally testable with one-sided error}, or \emph{testable} for short, if for every $\varepsilon > 0$ there exists $N \geq 1$ and a real number $\delta > 0$ with the following property: whenever $G = (V,E)$ is a graph with $N \leq |V| < \infty$ which locally almost obeys $\mathcal{P}$ in the sense that \begin{equation}\label{voom}
\frac{1}{|\binom{V}{N}|} |\{ W \in \binom{V}{N}: G \downharpoonright_W \in \mathcal{P}^{(W)} \}| \geq 1-\delta \end{equation}
(thus most $N$-element induced subgraphs of $G$ obey $\mathcal{P}$), then there exists $G' = (V,E')$ obeying $\mathcal{P}$ which is close to $G$ in the sense that $\frac{1}{|\binom{V}{2}|} |E \Delta E'| \leq \varepsilon$. \end{definition}
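To illustrate the quantity on the left-hand side of (\ref{voom}), the following Python sketch (our own illustrative code, with the hereditary property ``triangle-free'' standing in for $\mathcal{P}$) computes the fraction of $N$-element vertex subsets whose induced subgraph obeys the property; for large graphs one would of course sample subsets rather than enumerate them:

```python
import itertools

def triangle_free(V, E):
    """True iff no three vertices of V span a triangle in the edge set E."""
    return not any(all(frozenset(p) in E for p in itertools.combinations(t, 2))
                   for t in itertools.combinations(V, 3))

def local_fraction(V, E, N):
    """Fraction of W in binom(V,N) whose induced subgraph is triangle-free."""
    subsets = list(itertools.combinations(V, N))
    good = sum(1 for W in subsets
               if triangle_free(W, {e for e in E if e <= set(W)}))
    return good / len(subsets)

V = range(6)
E_path = {frozenset({i, i + 1}) for i in range(5)}                    # a path: no triangle
E_tri = {frozenset(p) for p in itertools.combinations(range(3), 2)}   # one triangle
```

A triangle-free graph passes every local test, while a graph containing a triangle fails on some subsets.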
\begin{remark} See \cite{AloSha2} for a discussion as to why the above concept is equivalent to testability with one-sided error, as defined in \cite{RS}. \end{remark}
The following is the main result of \cite{AloSha2}:
\begin{theorem}\label{as-basic}\cite{AloSha2} Every hereditary graph property is testable. \end{theorem}
See \cite{AloSha2} for a history of this result and for a survey of the many prior results in this direction, including the earlier result in \cite{AloSha} that every monotone graph property is testable. The proof of this theorem is rather intricate, involving repeated application of the Szem\'eredi regularity lemma, as well as Ramsey's theorem.
Theorem \ref{as-basic} has been generalised in two different ways. Firstly, the work of R\"odl and Schacht \cite{rs2} found a somewhat simpler (but more indirect) argument, avoiding Ramsey's theorem and using only a single instance of the (hypergraph) regularity lemma, which extended Theorem \ref{as-basic} to the setting of hypergraph properties for $k$-uniform hypergraphs $G = (V,E)$, where $k \geq 2$ and $E \subset \binom{V}{k}$. It is straightforward to extend all of the above definitions to the $k$-uniform hypergraph setting, basically by replacing $2$ with $k$ in all the above definitions; we omit the details (and in any case, we will also make further generalisations of these definitions in the next section).
The main result of \cite{rs2} can now be stated as follows:
\begin{theorem}\label{rs-basic}\cite{rs2} Let $k \geq 2$. Then every hereditary $k$-uniform hypergraph property is testable. \end{theorem}
This builds upon a number of earlier hypergraph results which can be interpreted as testability results, such as the hypergraph removal lemma \cite{Gow1}, \cite{rs}, \cite{rodl2} or the induced $3$-uniform hypergraph removal lemma in \cite{KohNagRod}. We refer the reader to \cite{rs2} for further references and discussion.
There is however a different way to generalise Theorem \ref{as-basic}, in which we stay in the setting of graphs, but instead replace testability by a stronger property which we call \emph{local repairability}, and which is analogous to the notion of \emph{local correctability} in the theory of error-correcting codes. (Actually, we will eventually discuss two such properties, \emph{strong local repairability} and \emph{weak local repairability}, but we will only discuss the weak one for now.) For simplicity we now restrict attention to graphs rather than $k$-uniform hypergraphs.
To motivate this concept, recall that if $G$ is a graph that locally almost obeys a testable property $\mathcal{P}$, then it is guaranteed that there is a way to modify a small number of edges of $G$ to create a graph $G'$ which truly does obey $\mathcal{P}$. We will refer to the act of replacing $G$ by $G'$ as \emph{repairing} the graph $G$. However, note that no \emph{algorithm} is provided in order to actually execute this repair; one can of course perform a brute force search among all possible candidate graphs $G'$, but this will take a time which is at least exponential in the number of vertices and is thus impractical. It is thus of interest to determine whether a testable graph (or hypergraph) property $\mathcal{P}$ also comes with an ``efficient'' algorithm that can repair a graph $G$ quickly. We will focus on a rather strong notion of efficiency here, namely that of a \emph{local} repair algorithm, in which any edge of the repaired graph $G'$ can be decided upon using only a \emph{bounded} number of queries to the original graph $G$ (which in particular implies that the entire graph can be repaired in time linear in the total number $\binom{|V|}{2}$ of possible edges). More precisely, we seek repair algorithms which are given by a \emph{local modification rule}, which we will define shortly. For technical reasons we will have to delete a small set $A$ of ``training'' vertices in order to perform this rule; thus the rule will start with a graph $G = (A \uplus V, E)$ almost obeying $\mathcal{P}$, where $A \uplus V$ is the disjoint union of a large vertex set $V$ and a small set $A$ of training vertices, and return a repaired graph $G' = (V,E')$ which obeys $\mathcal{P}$ exactly, but for which the training vertices $A$ have been deleted.
To motivate the concept of a local modification rule, let us discuss (somewhat informally) a specific example of repairability, in which $\mathcal{P}$ is the property of being a complete bipartite graph. (For instance, one could think of the vertex set of a graph obeying $\mathcal{P}$ as a collection of positive and negative charges, with an edge between two vertices if they have opposite charge.) Now consider a large graph $G_0 = (A \uplus V,E_0)$ obeying $\mathcal{P}$, and ``corrupt'' it to create a new graph $G = (A \uplus V,E)$ formed by adding or removing a small fraction of the edges of $E_0$. (For instance, one could imagine a large collection of real-world charged particles $A \uplus V$, with an edge between two vertices $v,w$ in $E$ if the two particles are observed to attract each other in some (mostly reliable) measurement; in this case, the corruption between $E$ and the ``true'' graph $E_0$ would be caused by measurement error.) Then $G$ approximately obeys $\mathcal{P}$. If one is given $G$ (but not $G_0$), we now consider the task of repairing $G$ to form a graph $G' = (V,E')$ close to $G$ which obeys $\mathcal{P}$. (Ideally, we would like $G'$ to recover the original uncorrupted graph $G_0$, but there is not enough information given to do so exactly, and we will settle for obtaining a slightly different repaired graph $G'$ which is still complete and bipartite.) Continuing our measurement example, this task would correspond to that of using the measured attraction and repulsion data to assign ``charges'' to various particles, thus attempting to correct for corrupted measurements and giving a prediction as to what the ``true'' attraction between any two particles is.
To do this, we first look at the restriction $G\downharpoonright_A$ of $G$ to the training vertices $A$. If the training vertices were a sufficiently representative subset of the whole set $A \uplus V$ (which, in practice, we will ensure with high probability by drawing $A$ randomly from the vertex set of $G$), then we expect $G \downharpoonright_A$ to be very close to a complete bipartite graph. By performing a brute force search on $A$ only, we can then find a complete bipartite graph $G'_A := (A, E'_A)$ on $A$ which is very close to $G\downharpoonright_A$ (and thus, presumably, also close to $G_0\downharpoonright_A$). Note that while a brute force search on all of $V$ is exponentially expensive, if $A$ has bounded size then it will only take a bounded amount of time to locate $G'_A$. Let $A = A_1 \uplus A_2$ be the partition of $A$ corresponding to the complete bipartite graph $G'_A$. (This partition is only unique up to interchange of the labels $1,2$, but this will not concern us.) We can then use this partition to create a partition $V = V_1 \uplus V_2$ of the larger vertex set $V$, by the following rule: a vertex $v$ will lie in $V_1$ if it is connected to more vertices of $A_2$ than to $A_1$, and in $V_2$ otherwise. (Informally, $G'_A$ has ``decided'' which of the training vertices in $A$ are positively charged or negatively charged, and then one tests those charged particles against any other vertex $v \in V$ to decide whether $v$ should be classified as positive or negative.) We then define $G' = (V,E')$ to be the complete bipartite graph between $V_1$ and $V_2$. Clearly $G'$ obeys $\mathcal{P}$; and it is intuitively clear that if $G$ is sufficiently close to $G_0$, and $A$ is sufficiently large (but still bounded) and drawn randomly from $G$, then $G'$ will be close to $G$ with high probability. (In particular, if $G$ was exactly equal to $G_0$, one easily sees that $G'$ is equal to $G$.)
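The repair procedure just described can be sketched as follows (an informal Python illustration under our own encoding of graphs as sets of two-element frozensets; the helper names are our own, not notation from the text):

```python
import itertools, random

def best_bipartition(A, edges):
    """Brute-force search on the small training set A: find the
    bipartition of A whose complete bipartite graph disagrees with the
    observed (possibly corrupted) edges on the fewest pairs.  The 2^|A|
    enumeration is a bounded amount of work when |A| is bounded."""
    A = list(A)
    best, best_cost = (set(A), set()), None
    for mask in range(1 << len(A)):
        A1 = {a for i, a in enumerate(A) if mask >> i & 1}
        A2 = set(A) - A1
        # cost = (edges within a class) + (missing edges across classes)
        cost = sum((frozenset({a, b}) in edges) != ((a in A1) != (b in A1))
                   for a, b in itertools.combinations(A, 2))
        if best_cost is None or cost < best_cost:
            best, best_cost = (A1, A2), cost
    return best

def repair_bipartite(V, A, edges):
    """Locally repair: put v in class V1 iff it has more neighbours in
    A2 than in A1, then output the complete bipartite graph on V."""
    A1, A2 = best_bipartition(A, edges)
    side = {v: sum(frozenset({v, a}) in edges for a in A2)
               > sum(frozenset({v, a}) in edges for a in A1)
            for v in V}  # True means class V1
    return {frozenset({u, v}) for u, v in itertools.combinations(V, 2)
            if side[u] != side[v]}
```

Each output edge is decided by querying $G$ only on pairs involving $A$ and the two vertices in question, which is the locality property formalised below.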
Now we make these concepts more precise.
\begin{definition}[Local modification rule]\label{lacma} A \emph{local modification rule} is a pair $(A,T)$, where $A$ is a finite set, and $T: 2^{\binom{A \uplus [2]}{2}} \to \{0,1\}$ is a map from graphs on $A \uplus [2]$ to $\{0,1\}$, where $[2] := \{1,2\}$, which is symmetric with respect to interchange of the $1$ and $2$ labels. Given any vertex set $V$, we define a \emph{modification map} $\overline{T}^{(V)}: 2^{\binom{A \uplus V}{2}} \to 2^{\binom{V}{2}}$ by declaring an edge $(v_1,v_2)$ in $V$ to lie in $\overline{T}^{(V)}(G)$ for some $G \in 2^{\binom{A \uplus V}{2}}$ if and only if $T( \{0,1\}_2^{(\operatorname{id}_A \oplus \phi)}(G) ) = 1$, where $\operatorname{id}_A \oplus \phi: A \uplus [2] \to A \uplus \{v_1,v_2\}$ is the map which is the identity on $A$ and maps $1,2$ to $v_1,v_2$ respectively. \end{definition}
\begin{example} The rule $G \mapsto G'$ defined in the preceding discussion can be viewed as a local modification rule, in which $T(G)$ for $G \in 2^{\binom{A \uplus [2]}{2}}$ is defined by first constructing the graph $G'_A$ and the partition $A=A_1 \uplus A_2$ as above, then assigning the vertices $1,2$ to partition classes by the majority rule described above, and finally setting $T(G) = 1$ if $1, 2$ lie in distinct partition classes, and $T(G)=0$ otherwise. \end{example}
\begin{remark}\label{lacma2} Informally, a local modification rule only has to query $G$ between vertices in $\{v_1,v_2\} \cup A$ to decide how $v_1$ and $v_2$ are connected in $G' := \overline{T}^{(V)}(G)$; furthermore, all pairs $\{v_1,v_2\}$ are ``treated equally'' in the sense that the same modification function $T$ is applied to each of them. There is also an equivalent category-theoretic definition of a local modification rule $(A,T)$, namely it is a finite set $A$ together with a \emph{natural transformation} $\overline{T}$, or more precisely a collection of maps $\overline{T}^{(V)}: 2^{\binom{A \uplus V}{2}} \to 2^{\binom{V}{2}}$ for every vertex set $V$ which obeys the natural transformation property $$ \overline{T}^{(W)} \circ \{0,1\}_2^{(\operatorname{id}_A \oplus \phi)} = \{0,1\}_2^{(\phi)} \circ \overline{T}^{(V)}$$ for all injections $\phi: W \to V$ between two finite vertex sets $V, W$, where $\operatorname{id}_A \oplus \phi: A \uplus W \to A \uplus V$ is the extension of $\phi$ which is the identity on $A$. This alternate characterisation of a local modification rule will be more convenient for us in later sections when we generalise to hypergraphs which may be multicoloured, non-uniform, directed, and/or infinite. \end{remark}
\begin{definition}[Weak local repairability]\label{wlr} Let ${\mathcal P}$ be a graph property. We say that ${\mathcal P}$ is \emph{weakly locally repairable} if for every $\varepsilon > 0$ there exists a finite set $A$, an integer $N \geq |A|+2$, and a $\delta > 0$ with the following property: if $G = (V,E)$ is a graph with $N \leq |V| < \infty$ which almost obeys ${\mathcal P}$ in the sense of \eqref{voom}, then there exists an embedding of $A$ in $V$ (thus identifying $V$ with $A \uplus V'$ for some $|V'| = |V| - |A|$) and a local modification rule $(A,T)$ such that $G' = (V',E') := \overline{T}^{(V')}(G)$ obeys ${\mathcal P}$, and $G'$ is close to $G$ in the sense that
$$|E' \Delta (E\downharpoonright_{V'})| \leq \varepsilon |\binom{V'}{2}|$$ where $E\downharpoonright_{V'} := E \cap \binom{V'}{2}$. \end{definition}
\begin{remark} Observe that weak local repairability is stronger than local testability in the sense that the repaired graph $G'$ is given from $G$ by a local modification rule, but weaker because one has to remove a small number of vertices; see Remark \ref{easy-rem} for further discussion. Also, observe that the embedding of $A$ in $V$ is not specified, and the rule $(A,T)$ is only guaranteed to produce a graph $G'$ obeying ${\mathcal P}$ for the chosen input $G$. Later on we shall introduce the notion of \emph{strong local repairability}, which roughly speaking is similar to weak local repairability, but the embedding of $A$ in $V$ is now chosen at random (and the algorithm has a small probability of failure), the rule $(A,T)$ now entails the property ${\mathcal P}$ for \emph{all} choices of input graph $G$, rather than being permitted to depend on $G$, and furthermore the graph $G$ is allowed to be infinite (or even ``continuous'') rather than just finite (or discrete). However, to keep the discussion simple for now, we will not formally define strong local repairability until later sections. \end{remark}
An inspection of the arguments in \cite{AloSha2} then reveals the following strengthening of Theorem \ref{as-basic}:
\begin{theorem}\label{as-2} Every hereditary graph property is weakly locally repairable. \end{theorem}
Strictly speaking, this result is not explicitly stated in \cite{AloSha2}, but is an implicit consequence of their methods, together with the observation that Szemer\'edi partitions can be constructed using random neighbourhoods (see e.g. \cite{ishi-0}). In any event we will establish a stronger version of this theorem in the next section.
\begin{example} We have informally discussed this result in the case when ${\mathcal P}$ is the property of being a complete bipartite graph. Another illustrative example is the property of being triangle-free, which is a monotone property. The local testability of this property is a well-known fact, often called the ``triangle-removal lemma'', and is due to Ruzsa and Szemer\'edi \cite{rsz}. To repair an almost-triangle-free graph into a genuinely triangle-free graph, the standard approach is to apply the Szemer\'edi regularity lemma \cite{szemeredi-reg} to the graph, and then delete all edges between pairs of cells of that partition that are too small, have too low an edge density, or are too irregular. This regularisation can be done in a purely local fashion, by randomly selecting vertex neighbourhoods to create the partition (see e.g. \cite{ishi-0}), and this can be used to create a local modification rule to repair corrupted triangle-free graphs. \end{example}
\subsection{General setup}
The prior results were restricted to properties for uniform monochromatic undirected graphs or hypergraphs without loops. We now generalise much of the above discussion to a more general setting which allows for the hypergraphs to be non-uniform, directed, multi-coloured, and/or contain loops. As such, there will be some overlap between the discussion here and that in the preceding section.
\begin{definition}[Vertex sets]\label{vertset} A \emph{vertex set} is any set which is at most countable. If $V$ and $W$ are vertex sets, we define a \emph{morphism} from $W$ to $V$ to be an injective map $\phi: W \to V$, and use $\operatorname{Inj}(W,V)$ to denote the space of such morphisms. We use $\operatorname{id}_V \in \operatorname{Inj}(V,V)$ to denote the identity map from $V$ to itself, and if $W \subset V$, we use $\iota_{W \subset V} \in \operatorname{Inj}(W,V)$ to denote the inclusion map. If $N$ is a non-negative integer, we use $[N] := \{ 1,\ldots,N\}$ to denote the vertex set of integers from $1$ to $N$. If $v_1,\ldots,v_N$ are distinct vertices of $V$, we use $(v_1,\ldots,v_N) \in \operatorname{Inj}([N],V)$ to denote the morphism that sends $i$ to $v_i$ for all $i \in [N]$ (in particular, we canonically embed $\operatorname{Inj}([N],V)$ in $V^N$, and the unique element of $\operatorname{Inj}([0],V)$ is denoted $()$). If $V$ is a set, we use $|V|$ to denote the cardinality of $V$, and for any $k \geq 0$ we let
$$\binom{V}{k} := \{ e \subset V: |e| = k \} \equiv \operatorname{Inj}([k],V)/\operatorname{Inj}([k],[k])$$ denote the $k$-element subsets of $V$. If $V, W$ are vertex sets, we use $V \uplus W := (V \times \{0\}) \cup (W \times \{1\})$ to denote the disjoint union of $V$ and $W$. We often abuse notation and view $V$ and $W$ as subsets of $V \uplus W$. If $\phi_1 \in \operatorname{Inj}(W_1,V_1)$ and $\phi_2 \in \operatorname{Inj}(W_2,V_2)$, we use $\phi_1 \oplus \phi_2 \in \operatorname{Inj}(W_1 \uplus W_2, V_1 \uplus V_2)$ to denote the direct sum of $\phi_1$ and $\phi_2$. \end{definition}
\begin{remark} One can view the collection of all vertex sets and their morphisms as a category. We will make this category-theoretic perspective more explicit later in our analysis, as it contains a number of useful notions for us, most notably that of a \emph{natural transformation}. However, readers who are not familiar with category theory can safely skip all remarks in this introductory section referring to this subject. \end{remark}
\begin{definition}[Palettes] A \emph{finite palette} is a sequence $K = (K_j)_{j=0}^\infty$ of finite non-empty sets, all but finitely many of which are singleton sets. We refer to the singleton components as \emph{points} and denote them by $\operatorname{pt}$. We define the \emph{order} of $K$ to be the greatest integer $k$ for which $K_k$ is not a point (or $-1$ if all components are points). We shall often abbreviate $K$ as $(K_0,\ldots,K_k)$ (thus discarding the trivial palettes $K_j = \operatorname{pt}$ for $j > k$). For any $k \geq 0$, we define the \emph{monochromatic palette} $\{0,1\}_k$ of order $k$ to be the palette whose $k^{{\operatorname{th}}}$ component is $\{0,1\}$ and all other components are points. If $j \in \mathbf{Z}$, we let $K_{\leq j}$ (resp. $K_{<j}$, $K_{\geq j}$, $K_{>j}$, $K_{=j}$) be the palette whose $i^{{\operatorname{th}}}$ component is $K_i$ when $i \leq j$ (resp. $i < j$, $i \geq j$, $i>j$, $i=j$), and is $\operatorname{pt}$ otherwise, thus for instance $K = K_{\geq 0} = K_{>-1}$. \end{definition}
\begin{definition}[Hypergraphs]\label{hyperdef} If $V$ is a vertex set, we define a \emph{$K$-coloured (directed) hypergraph} to be a tuple $G = (G_j)_{j=0}^\infty$, where each $G_j: \operatorname{Inj}([j],V) \to K_j$ is a function. (Note that $G_j$ will be trivial when $K_j$ is a point, and so only finitely many of the $G_j$ are of any interest. We will often abuse notation slightly by omitting the trivial components $G_j$ of a hypergraph.) We let $$K^{(V)} \equiv \prod_{j=0}^\infty K_j^{\operatorname{Inj}([j],V)}$$ denote the collection of all $K$-coloured hypergraphs on $V$. We say that the hypergraph is \emph{undirected} if we have the symmetry property $G_j( \phi \circ \sigma ) = G_j(\phi)$ for all $j \geq 0$, all $\sigma \in \operatorname{Inj}([j],[j])$, and all $\phi \in \operatorname{Inj}([j],V)$. If $\phi \in \operatorname{Inj}(W,V)$ is a morphism between vertex sets, we define the \emph{pullback map} $K^{(\phi)}: K^{(V)} \to K^{(W)}$ by defining $K^{(\phi)}(G)_j(\psi) := G_j(\phi \circ \psi)$ for all $G = (G_j)_{j=0}^\infty \in K^{(V)}$, $j \geq 0$, and $\psi \in \operatorname{Inj}([j],W)$. If $W$ is a subset of $V$, we write $G\downharpoonright_W$ for $K^{(\iota_{W \subset V})}(G)$, and refer to $G\downharpoonright_W$ as the \emph{restriction} of $G$ to $W$. \end{definition}
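As an informal illustration of the pullback and restriction operations (our own encoding, not from the text: a hypergraph is a dictionary mapping each order $j$ to a colouring function on injective $j$-tuples of vertices; the helper names are our own):

```python
def pullback(G, phi):
    """The pullback K^(phi): turn a K-coloured hypergraph on V into one
    on W along an injection phi: W -> V (encoded as a dict w -> phi(w)).
    The colour of a j-tuple psi in W is the colour of phi . psi in V."""
    return {j: (lambda Gj: lambda psi: Gj(tuple(phi[w] for w in psi)))(Gj)
            for j, Gj in G.items()}

def restrict(G, W):
    """The restriction G|_W: the pullback along the inclusion of W into V."""
    return pullback(G, {w: w for w in W})
```

For example, a directed graph on $\{0,1,2\}$ with an edge from $u$ to $v$ exactly when $u<v$ restricts to the analogous graph on $\{0,2\}$.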
\begin{example} An ordinary undirected graph $G = (V,E)$, where $E \subset \binom{V}{2}$ can be viewed as an undirected $\{0,1\}_2$-coloured hypergraph; similarly, a $k$-uniform hypergraph can be viewed as an undirected $\{0,1\}_k$-coloured hypergraph. In particular, $2^{\binom{V}{2}}$ is nothing more than the hypergraphs in $\{0,1\}_2^{(V)}$ which are undirected. More generally, if $G = (G_j)_{j=0}^\infty \in K^{(V)}$ is undirected, then the maps $G_j: \operatorname{Inj}([j],V) \to K_j$ can be viewed instead as maps from $\binom{V}{j}$ to $K_j$. A bipartite graph can be viewed as an undirected $(\operatorname{pt},\{0,1\},\{0,1\})$-coloured hypergraph, in which the order $1$ palette $\{0,1\}$ is used for the vertex partition, and the order $2$ palette $\{0,1\}$ is used to describe the edges of the graph. One can similarly view partite hypergraphs using this framework; see also Definition \ref{partite} below. Later on we will need to generalise the notion of a palette to allow the palettes $K_j$ to be sub-Cantor spaces instead of finite sets; see Definition \ref{subcantor}. \end{example}
\begin{remark} In the language of category theory, one can view the palette $K$ as a \emph{contravariant functor} $V \mapsto K^{(V)}$, $\phi \mapsto K^{(\phi)}$ between the category of vertex sets $V$ (whose morphisms are the injective maps $\phi: W \to V$), and the category of sub-Cantor spaces (see Definition \ref{subcantor} below), whose morphisms are the continuous maps (and more generally, the probability kernels, see Appendix \ref{prob}). This category-theoretic language seems to be a natural framework to phrase many of our notions, such as local repairability, as we shall see in later sections. \end{remark}
\begin{definition}[Hereditary hypergraph properties]\label{hered} Let $K = (K_j)_{j=0}^\infty$ be a finite palette. A \emph{hereditary $K$-property} is an assignment $\mathcal{P}: V \mapsto \mathcal{P}^{(V)}$ of a collection $\mathcal{P}^{(V)} \subset K^{(V)}$ of $K$-coloured hypergraphs on $V$ for every\footnote{Technically, the class of finite vertex sets is not itself a set, and so $\mathcal{P}$ is a class function rather than a function. If one wishes to work with actual functions, one can restrict attention to vertex sets which are (for instance) subsets of the integers. As this issue does not make any actual impact on our arguments, we shall henceforth ignore it.} finite vertex set $V$, such that \begin{equation}\label{kphi} K^{(\phi)}( \mathcal{P}^{(V)} ) \subset \mathcal{P}^{(W)} \end{equation} for every morphism $\phi \in \operatorname{Inj}(W,V)$ between finite vertex sets. In particular, the $K$-property $\mathcal{P}$ is invariant under hypergraph isomorphism and preserved under hypergraph restriction\footnote{In category-theoretic language, one can view $\mathcal{P}$ (like $K$) as a contravariant functor, in which $\mathcal{P}^{(\phi)}: \mathcal{P}^{(V)} \to \mathcal{P}^{(W)}$ is the restriction of the pullback map $K^{(\phi)}$ to $\mathcal{P}^{(V)}$ for any injection $\phi: W \to V$; see Example \ref{propfunc}.}. We say that the $K$-property $\mathcal{P}$ is \emph{undirected} if $\mathcal{P}^{(V)}$ consists entirely of undirected hypergraphs for each vertex set $V$. We extend $\mathcal{P}$ to countably infinite vertex sets $V$ by declaring $$\mathcal{P}^{(V)} := \{ G \in K^{(V)}: G\downharpoonright_W \in \mathcal{P}^{(W)} \hbox{ for all finite } W \subset V \}.$$ We say that a hypergraph $G \in K^{(V)}$ \emph{obeys} $\mathcal{P}$ if $G \in \mathcal{P}^{(V)}$. \end{definition}
\begin{examples} In the case of $\{0,1\}_2$-coloured hypergraphs (i.e. graphs), the properties of being undirected and connected, of being bipartite, of being undirected and free of triangles, of being planar, and of being four-colourable, are all hereditary $\{0,1\}_2$-properties. \end{examples}
\begin{definition}[Testability]\label{Testdef}\cite{RS} Let $K$ be a finite palette of some order $k \geq 0$, and let $\mathcal{P}$ be a hereditary $K$-property. We say that $\mathcal{P}$ is \emph{testable with one-sided error} if, for every $\varepsilon > 0$, there exists an integer $N \geq 1$ and a real number $\delta > 0$ with the following property: if $G \in K^{(V)}$ is a $K$-coloured hypergraph with $N \leq |V| < \infty$ which locally almost obeys $\mathcal{P}$ in the sense that \begin{equation}\label{injv}
\frac{1}{|\binom{V}{N}|} |\{ W \in \binom{V}{N}: G\downharpoonright_W \in \mathcal{P}^{(W)} \}| \geq 1-\delta, \end{equation} then there exists $G' \in \mathcal{P}^{(V)}$ which is close to $G$ in the sense that \begin{equation}\label{gv}
\frac{1}{|\binom{V}{k}|} |\{ W \in \binom{V}{k}: G\downharpoonright_W \neq G'\downharpoonright_W \}| \leq \varepsilon. \end{equation} \end{definition}
This definition of course generalises Definition \ref{testgraph}.
We can now state the main results of Alon-Shapira and R\"odl-Schacht again:
\begin{theorem}[Every hereditary undirected hypergraph property is testable]\label{as-thm}\cite{AloSha2},\cite{rs2} If $k \geq 0$, then every hereditary undirected $\{0,1\}_k$-property is testable with one-sided error. \end{theorem}
\begin{remark} See \cite{AloSha2} for further discussion of this result, and why it is natural to restrict attention to hereditary properties. The cases $k=0,1$ of this result are easy. In the case of graphs $k=2$, this result was first obtained by \cite{AloSha2}, after building upon several earlier results in this direction; see \cite{GGR}, \cite{AloSha}, \cite{AloFisKriSze} and the references therein. For general $k$, this result was first obtained in \cite{rs2}, with several earlier results in this direction in \cite{Gow1}, \cite{rs}, \cite{ars}, \cite{ishi}, \cite{KohNagRod}. The special case of the above theorem in which $\mathcal{P}$ is the $\{0,1\}_k$-property of not containing any embedded copy of a fixed hypergraph is known as the \emph{hypergraph removal lemma} and is already a non-trivial result, implying for instance the multidimensional Szemer\'edi theorem; see \cite{Gow1}, \cite{rs} for further discussion. \end{remark}
The Alon-Shapira argument \cite{AloSha2} that gave the $k=2$ case was somewhat intricate, using the Szemer\'edi regularity lemma three times and also using Ramsey's theorem for graphs. The R\"odl-Schacht argument \cite{rs2}, in contrast, avoided Ramsey's theorem and used fewer applications of the (hypergraph) regularity lemma, leading to a simpler proof (though of course the fact that it dealt with general $k$ rather than $k=2$ led to several notational complications). On the other hand, the R\"odl-Schacht argument was more indirect than the Alon-Shapira one and did not yield explicitly quantitative bounds. One of the purposes of this paper is to explain why this difference is in fact essential: the Alon-Shapira argument cannot extend to the case of general hypergraphs, for reasons which we shall explain below.
\subsection{New positive results}
In this paper we explore some generalisations and refinements of the above theorem, as well as counterexamples to some of these extensions. Some obvious generalisations include that of allowing more general palettes $K$, allowing directed edges, allowing loops, and replacing the finite vertex set $V$ with a more general probability space such as $[0,1]$ with uniform measure. Another direction to pursue is to determine the relationship between the original hypergraph $G$ in the above theorems and the ``repaired'' hypergraph $G'$. For instance, the argument in \cite{AloSha2} gives an effective procedure to locate $G'$ (albeit one which requires heavy use of the regularity lemma); in contrast, the argument in \cite{rs2} is indirect (proceeding by contradiction) and does not obviously provide any algorithm for locating $G'$ other than brute force search.
In the positive direction we have three main results. The first result extends Theorem \ref{as-thm} to the directed multicoloured case:
\begin{theorem}[Every hereditary directed hypergraph property is testable]\label{rs-thm-dir} Let $K$ be a finite palette, and let $\mathcal{P}$ be a hereditary $K$-property. Then $\mathcal{P}$ is testable with one-sided error. \end{theorem}
The proof of Theorem \ref{rs-thm-dir} follows the R\"odl-Schacht argument and is given in Section \ref{posi}.
\begin{remark}\label{directed} As is well known, one can identify a directed graph with an undirected bipartite graph on twice as many vertices, and similar identifications also exist for hypergraphs. However, it does not appear possible to use such identifications to deduce the testability of directed hypergraph properties from the testability of undirected hypergraph properties, because one cannot canonically recover the directed graph from the undirected one without knowledge of the specific identification used. Indeed, the negative result in Theorem \ref{negate} below shows that the directed and undirected cases are in fact quite different. On the other hand, this distinction between directed and undirected hypergraphs disappears for partite properties; see Remark \ref{partite-directed}. \end{remark}
The next result extends Theorem \ref{as-thm} (in the graph case $k=2$) in a different direction, namely showing that hereditary undirected graph properties are not only testable with one-sided error, but enjoy the stronger property of being \emph{locally repairable}. Roughly speaking, local repairability (which is somewhat analogous to the concept of \emph{local correctability} in coding theory) shows that the repaired graph $G'$ can be (probabilistically) obtained from $G$ in a ``local'' manner, in that every edge of $G'$ can be determined using only knowledge of $O(1)$ edges of $G$. Because of this locality, the testability theorem can in fact be extended from finite graphs $G$ to infinite graphs $G$ (with a probability measure on the vertices), and also one can allow the graphs to contain loops. In fact this turns out to be a natural setting in which to study a certain strong form of local repairability.
To make this more precise we need more definitions, beginning with a continuous analogue of a graph or hypergraph.
\begin{definition}[Continuous hypergraphs]\label{contmap} Let $K$ be a finite palette. A \emph{$K$-coloured continuous hypergraph} is a quadruplet $G = (V,\mathcal{B},\nu,(G_j)_{j=0}^\infty)$, where $(V,\mathcal{B},\nu)$ is a probability space, and $G_j: V^j \to K_j$ is a measurable map for each $j \geq 0$. If $W$ is a vertex set, we define the \emph{sampling map} $\overline{G}^{(W)}: V^W \to K^{(W)}$ by the formula $$ \overline{G}^{(W)}( v )_j(\phi) = G_j(v \circ \phi)$$ for all $j \geq 0$, all $\phi \in \operatorname{Inj}([j],W)$, and all $v \in V^W$, where we view $v$ as a function from $W$ to $V$ (and identify $V^j$ with $V^{[j]}$). If $\mathcal{P}$ is a $K$-property, we say that $G$ \emph{obeys} $\mathcal{P}$ if $\overline{G}^{(W)}(v) \in \mathcal{P}^{(W)}$ for all vertex sets $W$ and all $v \in V^W$. \end{definition}
\begin{example} A $\{0,1\}_2$-coloured continuous hypergraph $G$ is essentially a probability space $(V,\mathcal{B},\nu)$, together with a measurable subset $G_2$ of $V \times V$, which can be viewed as a continuous analogue of a set of edges on $V$. In particular, if one takes $V$ to be the unit interval $[0,1]$ with the standard Borel $\sigma$-algebra $\mathcal{B}$ and Lebesgue measure $\nu$, a $\{0,1\}_2$-continuous hypergraph becomes a measurable subset $G_2$ of the unit square. The sampling map $\overline{G}^{([n])}: [0,1]^n \to \{0,1\}_2^{([n])}$ then maps any $n$-tuple $v_1,\ldots,v_n \in [0,1]$ of ``sampling vertices'' to the directed graph $([n],E)$ on $n$ vertices, with $(i,j) \in E$ if and only if $(v_i,v_j) \in G_2$. Thus, if one selects a point in $[0,1]^n$ uniformly at random, the image of this point under $\overline{G}^{([n])}$ is a randomly sampled graph of order $n$ from the continuous graph $G$. Note that we do not exclude the diagonal of $V \times V$ from $G_2$, and so we allow continuous hypergraphs to contain loops. \end{example}
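The sampling map in this example can be sketched in code (an informal Python illustration; the function names and the particular indicator function are our own choices, not from the text):

```python
import random

def sample_graph(G2, n, rng=random):
    """The sampling map for a {0,1}_2-coloured continuous hypergraph on
    [0,1]: draw n vertices uniformly at random and read off the directed
    graph given by the measurable indicator G2: [0,1]^2 -> {0,1}.  Edges
    are indexed by injective pairs (i, j), matching Inj([2],[n])."""
    vs = [rng.random() for _ in range(n)]
    edges = {(i, j) for i in range(n) for j in range(n)
             if i != j and G2(vs[i], vs[j])}
    return vs, edges

# A continuous 'complete bipartite' graph: an edge iff the two
# coordinates lie on opposite sides of 1/2.
bipartite = lambda x, y: (x < 0.5) != (y < 0.5)
```

Every graph sampled from this particular $G_2$ is complete bipartite (with the vertex classes determined by which half of $[0,1]$ each sampled vertex landed in), illustrating how a continuous hypergraph obeying a hereditary property yields samples obeying it too.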
\begin{remark} In the language of category theory, one can view the map $\overline{G}: W \mapsto \overline{G}^{(W)}$ as a \emph{natural transformation} from the contravariant functor $W \mapsto V^W$ to the contravariant functor $W \mapsto K^{(W)}$. If $G$ obeys $\mathcal{P}$, then the natural transformation $\overline{G}$ factors through the inclusion natural transformation from $\mathcal{P}$ to $K$. \end{remark}
\begin{example}\label{extend} Any ordinary hypergraph $G \in K^{(V)}$ on a finite set $V$ can be extended (somewhat arbitrarily) to a continuous hypergraph $\tilde G$, by endowing $V$ with the discrete $\sigma$-algebra $\mathcal{B}$ and the uniform probability measure $\nu$, and defining $\tilde G_j: V^j \to K_j$ to be an arbitrary extension of $G_j: \operatorname{Inj}([j],V) \to K_j$, where we view $\operatorname{Inj}([j],V)$ as a subset of $V^j$ in the obvious manner. One can view $\tilde G$ as a looped version of the hypergraph $G$. Observe that if any one of these extensions $\tilde G$ obeys a hereditary hypergraph property $\mathcal{P}$, then $G$ does also. The framework of continuous hypergraphs also allows for placing weights on the vertices by adjusting the probability measure $\nu$ accordingly. \end{example}
\begin{example}[$0-1$ graphons] Let $E \subset [0,1]^2$ be a measurable subset of the unit square. Then the quadruplet $G = ([0,1],\mathcal{B},\nu,\mathcal{I}(E))$, where $\mathcal{B}$ is the Borel $\sigma$-algebra on the unit interval $[0,1]$, $\nu$ is the uniform measure on $[0,1]$, and $\mathcal{I}(E): [0,1]^2 \to \{0,1\}$ is the indicator function of $E$, is a continuous $\{0,1\}_2$-coloured hypergraph (abusing notation slightly by dropping all the trivial components $G_j$ of the graph $G$ for $j \neq 2$). If $\mathcal{P}$ is the $\{0,1\}_2$-property of being undirected and triangle-free, then $G$ obeys $\mathcal{P}$ if and only if $E$ is symmetric (thus $(x,y) \in E$ if and only if $(y,x) \in E$) and contains no sets of the form $\{ (x,y),(y,z),(z,x)\}$ for $x,y,z \in [0,1]$. \end{example}
Now we generalise the local modification rules from Definition \ref{lacma} to more general hypergraphs (including continuous ones). We give two equivalent definitions of this concept, a concrete one (resembling Definition \ref{lacma}) and a category-theoretic one (resembling Remark \ref{lacma2}):
\begin{definition}[Local modification rule, concrete definition]\label{concmod} Let $K = (K_j)_{j=0}^k$ be a finite palette. A \emph{local modification rule} is a pair $T = (A, T)$, where $A$ is a finite vertex set, and $T$ is a collection of maps $T_j: K^{(A \uplus [j])} \to K_{=j}^{([j])}$ for $0 \leq j \leq k$ which obey the $\operatorname{Inj}([j],[j])$-equivariance condition $$ K_{=j}^{(\phi)} \circ T_j = T_j \circ K^{(\operatorname{id}_A \uplus \phi)}$$ for all $\phi \in \operatorname{Inj}([j],[j])$. Given such a rule, and given a vertex set $V$, we define the modification map $\overline{T}^{(V)}: K^{(A \uplus V)} \to K^{(V)}$ by the formula $$ \overline{T}^{(V)}(G)_j(\phi) := T_j(K^{(\operatorname{id}_A \uplus \phi)}(G))(\phi)$$ for every vertex set $V$, all $0 \leq j \leq k$, all $G \in K^{(A \uplus V)}$, and all $\phi \in \operatorname{Inj}([j],V)$; the components $\overline{T}^{(V)}(G)_j$ for $j>k$ are of course trivial. \end{definition}
\begin{definition}[Local modification rule, categorical definition]\label{locmod} Let $K$ be a finite palette. A \emph{local modification rule} is a pair $T = (A, T)$, where $A$ is a finite vertex set, and $T$ is an assignment of a map $\overline{T}^{(V)}: K^{(A \uplus V)} \to K^{(V)}$ for every vertex set $V$ (where $A \uplus V$ denotes the disjoint union of $A$ and $V$), such that the diagram \begin{equation}\label{natural} \begin{CD} K^{(A \uplus V)} @>{\overline{T}^{(V)}}>> K^{(V)} \\ @VV{K^{(\operatorname{id}_A \oplus \phi)}}V @VV{K^{(\phi)}}V \\ K^{(A \uplus W)} @>{\overline{T}^{(W)}}>> K^{(W)} \end{CD} \end{equation} commutes for any morphism $\phi \in \operatorname{Inj}(W,V)$ between two vertex sets $W,V$. \end{definition}
It is not difficult to see that the two definitions are equivalent. For instance, given a modification rule $(A,T)$ defined by Definition \ref{locmod}, the corresponding maps $T_j$ for Definition \ref{concmod} can be defined by the formula $$ T_j := \pi_{K \to K_{=j}}^{([j])} \circ \overline{T}^{([j])},$$ where $\pi_{K \to K_{=j}}^{([j])}: K^{([j])} \to K_{=j}^{([j])}$ is the projection map. In our proofs, we shall adopt a category-theoretic viewpoint and rely on the latter definition rather than the former. However, for the purpose of understanding the results, the reader may safely ignore the category-theoretic definition.
\begin{remark} The commutative diagram \eqref{natural} is asserting that $\overline{T}$ is a \emph{natural transformation} between the functors $V \mapsto K^{(A \uplus V)}$ and $V \mapsto K^{(V)}$. It is this natural transformation property that makes the repair rule \emph{local} (and invariant under relabeling of vertices); it implies that the value of a modified edge $T_v(G)_j(\phi)$ for a continuous graph depends only on the edges that involve the vertices $v$ and the vertices of $\phi$, and similarly for the modified edges $T_\phi(G)_j(\psi)$ of finite graphs. \end{remark}
We now use local modification rules to modify discrete and continuous hypergraphs in order to ensure (or \emph{entail}) certain properties $\mathcal{P}$.
\begin{definition}[Entailment and modification]\label{entail} Let $(A,T)$ be a local modification rule. We say that this rule \emph{entails} a $K$-property $\mathcal{P}$ if $\overline{T}^{(V)}( K^{(A \uplus V)} ) \subset \mathcal{P}^{(V)}$ for any vertex set $V$. \begin{itemize} \item If $G = (V,\mathcal{B},\nu,(G_j)_{j=0}^\infty)$ is a continuous $K$-coloured hypergraph, and $v = (v_a)_{ a \in A} \in V^A$ is a collection of vertices in $V$, we define the \emph{modification} $T_v(G) = (V, \mathcal{B}, \nu, (G'_j)_{j=0}^\infty)$ of $G$ to be the continuous $K$-coloured hypergraph given by the requirement that $$ \overline{T_v(G)}^{(W)}( w ) = \overline{T}^{(W)}( \overline{G}^{(A \uplus W)}( v, w ) )$$ for all vertex sets $W$ and all $w \in V^W$; one can verify that this requirement uniquely defines a continuous $K$-coloured hypergraph $T_v(G)$. Note that if $T$ entails $\mathcal{P}$, then $T_v(G)$ obeys $\mathcal{P}$ for every continuous $K$-coloured hypergraph $G$ on a vertex set $V$, and any $v \in V^A$. \item If $G = (G_j)_{j=0}^\infty$ is a $K$-coloured hypergraph on a vertex set $V$, and $\phi \in \operatorname{Inj}(A,V)$, then we define the \emph{modification} $T_\phi(G)$ of $G$ to be the $K$-coloured hypergraph $G' = (G'_j)_{j=0}^\infty$ on $V \backslash \phi(A)$ defined by the formula $$ T_\phi(G) = \overline{T}^{(V \backslash \phi(A))}( K^{(\phi \uplus \operatorname{id}_{V \backslash \phi(A)})}(G) ),$$ where $\phi \uplus \operatorname{id}_{V \backslash \phi(A)}: A \uplus (V \backslash \phi(A)) \to V$ is the bijection formed by the direct sum of $\phi: A \to \phi(A)$ and the identity map $\operatorname{id}_{V \backslash\phi(A)}: V \backslash \phi(A) \to V \backslash \phi(A)$. Again, note that if $T$ entails $\mathcal{P}$, then $T_\phi(G)$ obeys $\mathcal{P}$ for every $K$-coloured hypergraph on a vertex set $V$, and any $\phi \in \operatorname{Inj}(A,V)$. \end{itemize} \end{definition}
\begin{example}\label{bipart} Let $K = \{0,1\}_2$, so that $K$-coloured hypergraphs are just directed graphs. We define the local modification rule $T = (A,T)$ by setting $A := [1] = \{1\}$, and setting $\overline{T}^{(V)}(G) \in K^{(V)}$ for any vertex set $V$ and any directed graph $G \in K^{(A \uplus V)}$ (thus $G$ can be identified with a map $G_2: (A \uplus V) \times (A \uplus V) \to \{0,1\}$) to be the collection of all edges $(v,w) \in \operatorname{Inj}([2],V)$ such that $G_2(v,w)=G_2(w,1)=1$ and $G_2(v,1)=0$. In words, $\overline{T}^{(V)}(G)$ creates a bipartite directed graph from $G$ by deleting all edges from $G$ except those which connect a vertex of $V$ which does not have an edge to $1$ to a vertex of $V$ which does have an edge to $1$. In particular, if $\mathcal{P}$ is the $\{0,1\}_2$-property of being bipartite, then it is clear that $T$ entails $\mathcal{P}$. If $G = (V,\mathcal{B},\nu,G_2)$ is a continuous $K$-coloured hypergraph (ignoring the trivial components $G_0,G_1$), and $v_1 \in V$, then the modified continuous graph $T_{v_1}(G) = (V,\mathcal{B},\nu,G'_2)$ is given by requiring that $G'_2(v,w) = 1$ whenever $G_2(v,w)=G_2(w,v_1)=1$ and $G_2(v,v_1)=0$. Similarly, if $G = (G_2)$ is a directed graph on a vertex set $V$, and $v_1$ is a vertex in $V$, then the modified directed graph $T_{v_1}(G) = G'_2$ is given by requiring that $G'_2(v,w) = 1$ whenever $G_2(v,w)=G_2(w,v_1)=1$ and $G_2(v,v_1)=0$. \end{example}
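A small Python sketch of this modification rule acting on a finite directed graph may help; the function and variable names are hypothetical, and the graph is stored naively as a dictionary of $0/1$ colours on ordered pairs.

```python
import random

# Sketch of the rule of this example on a finite directed graph: keep
# exactly the edges (v, w) with G(v, w) = G(w, v1) = 1 and G(v, v1) = 0.
# Every surviving edge then runs from {v : G(v, v1) = 0} to
# {w : G(w, v1) = 1}, so the result is bipartite by construction.
def modify(edges, v1):
    return {(v, w): 1 for (v, w), c in edges.items()
            if c == 1
            and edges.get((w, v1), 0) == 1
            and edges.get((v, v1), 0) == 0}

random.seed(1)
V = range(8)
G = {(v, w): random.randint(0, 1) for v in V for w in V if v != w}

Gp = modify(G, 0)  # apply the rule with v1 = 0

# Check the bipartition: tails avoid v1's in-neighbourhood, heads lie in it.
parts_ok = all(G.get((v, 0), 0) == 0 and G.get((w, 0), 0) == 1
               for (v, w) in Gp)
print(parts_ok)  # prints: True
```

As the example notes, this rule deletes far too many edges to be useful for repair on its own; the sketch only demonstrates that it always entails bipartiteness.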
We can now generalise Definition \ref{wlr}:
\begin{definition}[Local repairability]\label{locrep} Let $K$ be a finite palette of some order $k$, and let $\mathcal{P}$ be a hereditary $K$-property. \begin{itemize} \item We say that $\mathcal{P}$ is \emph{strongly locally repairable} if for every $\varepsilon > 0$ there exists a finite set $A$, an $N > 0$, and a real number $\delta > 0$ with the following property: Whenever $G = (V,\mathcal{B},\nu,(G_j)_{j=0}^k)$ is a continuous $K$-coloured hypergraph which approximately locally obeys $\mathcal{P}$ in the sense that\footnote{We use $\mathcal{I}(E)$ to denote the indicator of an event $E$, thus $\mathcal{I}(E)=1$ when $E$ is true and $\mathcal{I}(E)=0$ otherwise.} \begin{equation}\label{gp}
\int_{V^{[N]}} \mathcal{I}\left(\overline{G}^{([N])}(v) \in \mathcal{P}^{([N])}\right)\ d\nu^{[N]}(v) \geq 1-\delta, \end{equation} where $\nu^{[N]}$ is the $N$-fold product measure of $\nu$ on $V^{[N]}$, then there exists a local modification rule $T = (A,T)$ that entails $\mathcal{P}$, which does not significantly modify $G$ in the sense that \begin{equation}\label{intv}
\int_{V^A} \int_{V^{[k]}} \mathcal{I}\left( \overline{T_v(G)}^{([k])}(w) \neq \overline{G}^{([k])}(w) \right)\ d\nu^A(v) d\nu^{[k]}(w) \leq \varepsilon. \end{equation}
\item We say that $\mathcal{P}$ is \emph{weakly locally repairable} if for every $\varepsilon > 0$ there exists a finite set $A$, an integer $N \geq |A|+k$, and a real number $\delta > 0$ with the following property: whenever $G$ is a $K$-coloured hypergraph on a vertex set $V$ with $N \leq |V| < \infty$ which approximately obeys $\mathcal{P}$ in the sense of \eqref{injv}, then there exists a local modification rule $T = (A,T)$ and $\phi \in \operatorname{Inj}(A,V)$ such that $T_\phi(G)$ obeys $\mathcal{P}$, and which is close to $G$ in the sense that \begin{equation}\label{psieps}
\frac{1}{\left|\binom{V \backslash \phi(A)}{k}\right|} \left|\left\{ W \in \binom{V \backslash \phi(A)}{k} : T_\phi(G)\downharpoonright_W \neq G\downharpoonright_W \right\}\right| \leq \varepsilon.
\end{equation} \end{itemize} \end{definition}
\begin{example} Let $\mathcal{P}$ be the $\{0,1\}_2$-property of being a bipartite graph. The local rule in Example \ref{bipart} entails $\mathcal{P}$, but is not strong enough by itself to show that $\mathcal{P}$ is strongly or weakly locally repairable, because it tends to delete far too many edges to force bipartiteness. However, one can improve this rule by enlarging the set $A$ and using a rule closer to that discussed in Section \ref{prevsec}; we omit the details. \end{example}
\begin{remark} Informally, local repairability is the assertion that if a hypergraph locally obeys $\mathcal{P}$ (in the sense that most hypergraphs of order $N$ obtained by randomly sampling $N$ vertices from $V$ will obey $\mathcal{P}$), then there is a modification rule which is guaranteed to produce a new hypergraph which obeys $\mathcal{P}$, and which is also close to the original hypergraph in the sense that most random $k$-element samples of the two hypergraphs will agree. (Note that this automatically implies the same statement for random $j$-element samples for any $j < k$.)
The differences between strong and weak local repairability are that for strong local repairability, one can handle infinite hypergraphs, as well as hypergraphs with loops; one does not need to delete any vertices when repairing the hypergraph; and furthermore, the local modification rule $T$ modifies \emph{all} hypergraphs to obey $\mathcal{P}$, not just the original hypergraph $G$, and the repaired hypergraph is likely to stay close to $G$ for \emph{most} choices of $v \in V^A$, and not just for a \emph{single} $\phi \in \operatorname{Inj}(A,V)$. \end{remark}
\begin{remark} Suppose that $\mathcal{P}$ is weakly (or strongly) locally repairable. As stated, the repair algorithm $T$ appearing in the above definition depends on the hypergraph $G$ as well as on the data $\mathcal{P}$ and $\varepsilon$. With a bit more effort, one can show that there exists a repair algorithm $T$ which depends only on $\mathcal{P}$ and $\varepsilon$, and which works (with high probability) for \emph{all} hypergraphs (or continuous hypergraphs) $G$ that obey \eqref{gp}. To see this, observe that as $A$ does not depend on $G$, the number of possible repair algorithms $T$ that can arise is bounded (for fixed $\mathcal{P}$ and $\varepsilon$). Thus one can simply try all of these algorithms in turn on a large random portion of $G$ and verify empirically whether any of them obey \eqref{intv}, and then use the ``winner'' to then repair the rest of the hypergraph. We omit the details. \end{remark}
We make the following simple observations:
\begin{proposition}[Easy implications]\label{easy} Let $\mathcal{P}$ be a $K$-property for some finite palette $K$. If $\mathcal{P}$ is strongly locally repairable, then it is weakly locally repairable, and also testable with one-sided error. \end{proposition}
\begin{proof}(Sketch) Let $k$ be the order of $K$. To show that strong local repairability implies weak local repairability, we start with a large finite hypergraph $G$ on at least $N$ vertices obeying \eqref{injv} (for some $N$ and $\delta$ to be chosen later), extend it to a continuous hypergraph $\tilde G$ as in Example \ref{extend}, and apply strong local repairability to obtain a local repair rule $T = (A,T)$ entailing $\mathcal{P}$ and obeying \eqref{intv} with $\varepsilon$ replaced by some slightly smaller quantity $\varepsilon'$ depending on $k$ and $\varepsilon$, and assuming that $N$ and $\delta$ were sufficiently large and small respectively depending on $\varepsilon'$. If $N$ is large enough, we can use \eqref{intv} and the pigeonhole principle to find $\phi \in \operatorname{Inj}(A,V) \subset V^A$ such that\footnote{Here and in the sequel we use $X \ll Y$ and $Y \gg X$ synonymously with $X = O(Y)$ or $Y = \Omega(X)$ for non-negative $X,Y$; if the implied constant depends on some parameters, we will indicate this by appropriate subscripting.}
$$ \frac{1}{|V^{[k]}|} \left|\left\{ w \in V^{[k]}: \overline{T_\phi(\tilde G)}^{([k])}(w) \neq \overline{\tilde G}^{([k])}(w) \right\}\right| \ll_{k} \varepsilon'$$ which then implies \eqref{psieps} if $N$ is large enough and $\varepsilon'$ is sufficiently small depending on $k$ and $\varepsilon$. Also, since $T$ entails $\mathcal{P}$, $T_\phi(G)$ will obey $\mathcal{P}$, and we are done. A similar argument gives testability with one-sided error, by setting $G'$ to be the hypergraph corresponding to $T_\phi(\tilde G)$ (basically, by reversing Example \ref{extend} and deleting all the loops); we omit the details. \end{proof}
\begin{remark}\label{easy-rem} It is almost true that weak local repairability implies testability with one-sided error; the one problem is that the repair provided by weak local repairability is forced to delete a bounded number of vertices from the hypergraph. If one strengthens the notion of weak local repairability to allow $T$ to entail $\mathcal{P}$, rather than merely assume that $T_\phi(G)$ obeys $\mathcal{P}$, then one can easily fix the problem by adding a bounded number of ``dummy'' vertices to $G$ to create a slightly enlarged graph $G'$, so that $T_\phi(G')$ still obeys $\mathcal{P}$ and has the same number of vertices as $G$; we leave the details to the reader. On the other hand, this strengthened notion of weak local repairability becomes equivalent to the strong notion of local repairability, as one can see by viewing a continuous hypergraph as the limit of a sequence of finite hypergraphs (and using the fact that for fixed $A$, the number of possible modification rules $T$ is finite); we omit the details. Indeed we do not know any example of a hereditary property which is weakly locally repairable but not strongly locally repairable. \end{remark}
We can now quickly state our next main theorem.
\begin{theorem}[Every hereditary undirected graph property is locally repairable]\label{lgr} Let $K$ be a finite palette of order at most $2$, and let $\mathcal{P}$ be a hereditary undirected $K$-property. Then $\mathcal{P}$ is strongly locally repairable (and hence also weakly locally repairable). \end{theorem}
The proof of Theorem \ref{lgr} follows the Alon-Shapira argument and is given in Section \ref{posi}.
\begin{remark} Theorem \ref{lgr} implies the existence of a probabilistic algorithm that can generate each edge of the graph $G'$ in Theorem \ref{as-thm} in time $O_{\mathcal{P},\varepsilon}(1)$ (and using $O_{\mathcal{P},\varepsilon}(1)$ queries to $G$), i.e. in a time bounded by a quantity depending only\footnote{We caution however that our result, which is proven by indirect means, is \emph{ineffective} or \emph{non-uniform} in the sense that we do not provide a way to explicitly compute this bound $O_{\mathcal{P},\varepsilon}(1)$ given $\mathcal{P}$ and $\varepsilon$. Indeed, given the discussion in \cite{AloSha}, \cite{AloSha2}, it is extremely likely that the bound here is \emph{uncomputable} from that data in general, even when $\mathcal{P}$ itself is computable; the issue seems to be related to that of solving various halting problems associated to $\mathcal{P}$. In particular, we have a somewhat subtle distinction: for any \emph{fixed} $\mathcal{P}$, $\varepsilon$, and $G$, the repair algorithm $T$ can be described in a finite (but uncomputable) amount of time, but we do not have an algorithm to \emph{compute} this description from $\mathcal{P}$, $\varepsilon$, and $G$.} on $\mathcal{P}$ and $\varepsilon$. In particular, the entire graph $G'$ can be reconstructed in time $O_{\mathcal{P},\varepsilon}(|V|^2)$ (of course, one needs to query the entire graph $G$ to do this). Similar remarks apply to Theorems \ref{monotone}, \ref{part} below. \end{remark}
Another way to contrast local repairability with testability is to observe that Theorem \ref{lgr} also easily implies Ramsey's theorem:
\begin{corollary}[Ramsey's theorem]\label{ramsey} Let $K$ be a finite palette of order at most $2$ and let $n \geq 1$. If $N'$ is sufficiently large depending on $K$ and $n$, then for every undirected graph $G \in K^{([N'])}$ there exists a set $W \subset [N']$ with $|W| = n$ such that the induced graph $G\downharpoonright_W \in K^{(W)}$ is monochromatic, or equivalently that $K^{(\phi)}( G\downharpoonright_W ) = G\downharpoonright_W$ for all $\phi \in \operatorname{Inj}(W,W)$. \end{corollary}
\begin{remark} Ramsey's theorem is of course also true for palettes $K$ of order greater than $2$, but Theorem \ref{lgr} turns out to fail in this case, due to the failure of a generalised version of Ramsey's theorem: see Theorem \ref{negate} below. \end{remark}
\begin{proof} Let $\mathcal{P}$ be the $K$-property of being undirected and not containing any monochromatic induced sub-hypergraph on $n$ vertices. This is clearly a hereditary $K$-property, and hence strongly locally repairable by Theorem \ref{lgr}. On the other hand, it is impossible for any non-empty $K$-coloured continuous graph $G = (V,\mathcal{B},\nu,G_2)$ to obey this property\footnote{For closely related reasons, it is also impossible to find a local repair rule $T$ which entails $\mathcal{P}$.}, since if $v \in V^n$ is any $n$-tuple with all coordinates equal then $\overline{G}^{([n])}(v)$ is a monochromatic hypergraph on $n$ vertices. Applying Theorem \ref{lgr} in the contrapositive, we conclude the existence of an $N \geq 1$ and $\delta > 0$ such that
$$ \frac{1}{|V|^N} \left|\left\{ v \in V^{[N]}: \overline{G}^{([N])}(v) \hbox{ obeys } \mathcal{P} \right\}\right| < 1-\delta.$$
On the other hand, if $G$ contained no induced monochromatic sub-hypergraphs on $n$ vertices, the left-hand side would be\footnote{We use subscripts on the $O()$ notation to indicate that the implied constants in that notation depend on the variables in the subscripts.} $1 - O_{N,n,K}(1/|V|)$. The claim then follows by taking $N'$ sufficiently large depending on $N, n, K, \delta$. \end{proof}
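For the smallest nontrivial instance of the corollary, the claim can also be verified exhaustively. The following self-contained Python check (independent of the argument above, with our own naming) confirms that every $2$-colouring of the edges of $K_6$ contains a monochromatic triangle, i.e. $R(3,3) \leq 6$, while $K_5$ admits a colouring with no monochromatic triangle.

```python
from itertools import combinations, product

# Brute-force check: every 2-colouring of the 15 edges of K_6 contains
# a monochromatic triangle, while the pentagon/pentagram colouring of
# K_5 contains none.  Edges are stored as sorted pairs (a, b), a < b.
def has_mono_triangle(colouring, n):
    return any(colouring[(a, b)] == colouring[(b, c)] == colouring[(a, c)]
               for a, b, c in combinations(range(n), 3))

edges6 = list(combinations(range(6), 2))          # the 15 edges of K_6
ramsey_6 = all(has_mono_triangle(dict(zip(edges6, bits)), 6)
               for bits in product((0, 1), repeat=len(edges6)))

# K_5, coloured so that colour 1 forms a 5-cycle (and colour 0 the
# complementary 5-cycle): neither colour class contains a triangle.
cycle = {(0, 1), (1, 2), (2, 3), (3, 4), (0, 4)}
k5 = {e: (1 if e in cycle else 0) for e in combinations(range(5), 2)}

print(ramsey_6, has_mono_triangle(k5, 5))  # prints: True False
```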
It does not appear possible to similarly deduce Ramsey's theorem just from Theorem \ref{as-thm}. One indirect piece of evidence for this claim is that the arguments in \cite{rs2} do not invoke Ramsey-theoretic arguments anywhere, but are still able to obtain Theorem \ref{as-thm}. On the other hand, the Alon-Shapira arguments used to prove Theorem \ref{as-thm} in the $k=2$ case crucially rely on Ramsey's theorem. Similarly, our proof of Theorem \ref{lgr} will also invoke Corollary \ref{ramsey} at a key juncture (see Section \ref{tech}).
The arguments used to prove Theorem \ref{lgr} can also be used (after some modification) to establish local repairability of monotone hypergraph properties and partite hypergraph properties. More precisely, we have the following two results.
\begin{definition}[Monotonicity]\label{mono} An \emph{ordered finite palette} is a finite palette $K = (K_j)_{j=0}^\infty$, together with a partial ordering $<_j$ on each component $K_j$ which is a \emph{meet-semilattice}, in the sense that any two elements $c_j, c'_j$ in $K_j$ have a unique meet\footnote{We say that $z = x \wedge y$ is the \emph{meet} of two elements $x,y$ of a partially ordered set if $z \leq x,y$, and if $z \geq z'$ for any $z' \leq x,y$.} $c_j \wedge c'_j$; note that this is automatically a commutative and associative operation.
Now let $K$ be an ordered finite palette and $\mathcal{P}$ a hereditary $K$-property. \begin{itemize} \item We say that $\mathcal{P}$ is \emph{monotone} if, given any vertex set $V$ and any $K$-coloured hypergraph $G \in \mathcal{P}^{(V)}$, any hypergraph $G' \in K^{(V)}$ with the property that $G'_j(\phi) \leq G_j(\phi)$ for all $j \geq 0$ and $\phi \in \operatorname{Inj}([j],V)$, will obey $\mathcal{P}$. (Informally: ``deleting'' edges (or lowering the colour of edges) will preserve the property $\mathcal{P}$.) \item We say that $\mathcal{P}$ is \emph{weakly monotone} if given any vertex set $V$ and any $K$-coloured hypergraphs $G,G' \in \mathcal{P}^{(V)}$, the hypergraph $G \wedge G' \in K^{(V)}$ defined by $(G \wedge G')_j(\phi) := G_j(\phi) \wedge G'_j(\phi)$ for all $j \geq 0$ and $\phi \in \operatorname{Inj}([j],V)$, also obeys $\mathcal{P}$. (Informally, the ``intersection'' (or colour-meet) of two hypergraphs obeying $\mathcal{P}$, continues to obey $\mathcal{P}$.) \end{itemize} \end{definition}
\begin{example} Suppose we are in the ``boolean'' case where $K = \{0,1\}_k$ is the monochromatic finite palette of some order $k \geq 0$, so that a $K$-coloured hypergraph on a vertex set $V$ can be identified with a set $E \subset \operatorname{Inj}([k],V)$ of morphisms from $[k]$ to $V$. A hereditary $K$-property $\mathcal{P}$ is then monotone if, given any $E \subset \operatorname{Inj}([k],V)$ which obeys $\mathcal{P}$, the hypergraph associated to any subset of $E$ also obeys $\mathcal{P}$. Similarly, $\mathcal{P}$ is weakly monotone if, given any two $E, E' \subset \operatorname{Inj}([k],V)$ which obey $\mathcal{P}$, the hypergraph associated to $E \cap E'$ also obeys $\mathcal{P}$. Note that any directed monotone or undirected monotone hypergraph property is weakly monotone. However, one can easily concoct examples of weakly monotone properties which are not monotone (e.g. the property of being a complete hypergraph is weakly monotone but not monotone). \end{example}
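To illustrate the boolean case concretely, here is a small Python sketch (all names our own, purely illustrative) of weak monotonicity for the property of triangle-freeness of undirected graphs: the intersection of two triangle-free edge sets is again triangle-free.

```python
import random
from itertools import combinations

# Boolean-case sketch: triangle-freeness is weakly monotone, since any
# triangle of the intersection E1 & E2 is already a triangle of E1.
# Edges are stored as sorted pairs (a, b), a < b.
def triangle_free(E, n):
    return not any({(a, b), (b, c), (a, c)} <= E
                   for a, b, c in combinations(range(n), 3))

def random_triangle_free(n, rng):
    # Greedily insert random edges, discarding any that close a triangle.
    E, pairs = set(), list(combinations(range(n), 2))
    rng.shuffle(pairs)
    for e in pairs:
        E.add(e)
        if not triangle_free(E, n):
            E.discard(e)
    return E

rng = random.Random(0)
n = 10
E1, E2 = random_triangle_free(n, rng), random_triangle_free(n, rng)
meet = E1 & E2  # the colour-meet of the two graphs in the boolean case
print(triangle_free(meet, n))  # prints: True
```

Triangle-freeness is of course also monotone; a weakly monotone property that is not monotone, such as completeness, would pass the intersection test while failing under edge deletion.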
\begin{theorem}[Every weakly monotone directed hypergraph property is locally repairable]\label{monotone} Let $K$ be an ordered finite palette, and let $\mathcal{P}$ be a weakly monotone $K$-property. Then $\mathcal{P}$ is strongly locally repairable (and hence also weakly locally repairable). \end{theorem}
\begin{definition}[Partiteness]\label{partite} Let $K$ be a palette of order $k \geq 1$. If $G \in K^{(V)}$ is a $K$-coloured hypergraph, $0 \leq j \leq k$, and $\phi \in \operatorname{Inj}([j],V)$, we say that $\phi$ is a \emph{partite edge} of $G$ if the map $G_1: V \to K_1$ is injective on $\phi([j])$. If $G, G' \in K^{(V)}$, we say that $G, G'$ are \emph{partite equivalent} if $G_1 = G'_1$ and if $G_j(\phi) = G'_j(\phi)$ for every $0 \leq j \leq k$ and every partite edge $\phi \in \operatorname{Inj}([j],V)$ of $G$ (and thus of $G'$). We say that a hereditary $K$-property $\mathcal{P}$ is \emph{partite} if it is preserved under partite equivalence, thus if $G \in \mathcal{P}^{(V)}$ and $G'$ is partite equivalent to $G$, then $G' \in \mathcal{P}^{(V)}$. \end{definition}
\begin{example}[Tripartite triangle-freeness] Let $K$ be the finite palette $K := (\operatorname{pt}, \{1,2,3\}, \{0,1\})$ of order $2$. Thus a $K$-coloured graph $G \in K^{(V)}$ on a vertex set $V$ can be viewed as a vertex colouring $G_1: V \to \{1,2,3\}$, together with a set $E_2 \subset \operatorname{Inj}([2],V)$ of edges. Let $\mathcal{P}$ be the $K$-property of being undirected (thus $(v,w) \in E_2$ if and only if $(w,v) \in E_2)$, partite (thus $(v,w) \in E_2$ only if $G_1(v) \neq G_1(w)$), and triangle-free (thus there do not exist $u,v,w \in V$ such that $(u,v), (v,w), (w,u) \in E_2$). With our definitions, $\mathcal{P}$ is hereditary but is not a partite $K$-property, because it is not preserved under partite equivalent operations, such as adding edges $(v,w)$ within a single vertex colour class $G_1^{-1}(\{i\})$. However, if we define $\mathcal{P}'$ to be the $K$-property that $G'$ obeys $\mathcal{P}$, where $G'$ is the $K$-coloured graph with the same vertex colouring $G'_1 := G_1$ as $G$, and whose edge set $E'_2$ consists of those edges $(v,w) \in E_2$ for which $G_1(v) \neq G_1(w)$, then $\mathcal{P}'$ is a hereditary partite $K$-property. \end{example}
\begin{remark}\label{partite-directed} In Remark \ref{directed} we commented that property testing of directed hypergraph properties could not be easily reduced to the property testing of undirected hypergraph properties. However, in the case of partite properties one can canonically convert directed hypergraphs into undirected hypergraphs in a manner which allows one to transfer property testing results back and forth between the directed and undirected cases. For instance, given a bipartite directed graph $G = (V,E)$ (so the palette here is $(\operatorname{pt}, \{0,1\},\{0,1\})$), one can lift $G$ to an undirected bipartite $(\operatorname{pt}, \{0,1\}, \{ \emptyset, (0,0), (0,1), (1,0), (1,1)\})$-coloured graph $G'$, by declaring the colour of an undirected edge $\{v_0,v_1\}$ in $G'$, where $v_0$ and $v_1$ are in the $0$-vertex and $1$-vertex classes respectively, to be the ordered pair consisting of the colour of the directed edges $(v_0, v_1)$ and $(v_1,v_0)$ in $G$ respectively (and all edges not connecting a $0$-vertex to a $1$-vertex can be assigned the colour $\emptyset$). It is then not hard to see that a partite property $\mathcal{P}$ of directed bipartite graphs $G$ can be lifted to an equivalent partite property $\mathcal{P}'$ on undirected bipartite graphs $G'$, and that local testability or repair results for $\mathcal{P}$ are equivalent to those for $\mathcal{P}'$. 
More generally, if $K$ is any finite palette and $G \in K^{(V)}$ is a directed $K$-coloured hypergraph, one can create an undirected $K$-coloured hypergraph $G' \in (K')^{(V)}$, where the finite palette $K' = (K'_j)_{j=0}^\infty$ is defined by setting $K'_j := K_j$ for $j =0,1$ and $K'_j := \binom{\operatorname{Inj}([j],K_1) \times K_j}{j!} \cup \{\emptyset\}$ for $j > 1$, by setting $G'_j := G_j$ for $j=0,1$, and setting $G'_j(\phi) := \{ (G_1 \circ \phi \circ \psi, G_j(\phi \circ \psi)): \psi \in \operatorname{Inj}([j],[j]) \}$ when $j \geq 2$ and $\phi$ is a partite edge, and $G'_j(\phi) := \emptyset$ when $j \geq 2$ and $\phi$ is not a partite edge. Then one can identify each directed partite $K$-property $\mathcal{P}$ with a undirected partite $K'$-property $\mathcal{P}'$, such that $G$ obeys $\mathcal{P}$ if and only if $G'$ obeys $\mathcal{P}'$; we omit the details. \end{remark}
\begin{theorem}[Every partite hypergraph property is locally repairable]\label{part} Let $K$ be a finite palette of order $k \geq 1$, and let $\mathcal{P}$ be a partite hereditary $K$-property. Then $\mathcal{P}$ is strongly locally repairable (and hence also weakly locally repairable). \end{theorem}
\begin{remark} A similar result to Theorem \ref{part} implicitly appears in \cite{ishi}. It is also quite likely that Theorem \ref{monotone} can be deduced from the methods in \cite{ars}, although this is not done explicitly in that paper. \end{remark}
Theorems \ref{rs-thm-dir}, \ref{lgr}, \ref{monotone}, and \ref{part} will all be proven in Section \ref{posi}. The arguments have many features in common (and in fact share many key propositions) and so will be proven concurrently. To do this, we will use a version of the hypergraph correspondence principle \cite{Tao3}, combined with a structure theorem \cite{Aus1} for exchangeable random hypergraphs, to convert these problems into an infinitary\footnote{There are a number of advantages in working in the infinitary framework. One is that there are fewer epsilons that one needs to manage in the argument. Another is that one gains access to a number of useful infinitary tools, such as the Lebesgue dominated convergence theorem, Littlewood's principle that measurable functions are almost continuous, and the Lebesgue-Radon-Nikodym theorem. While each of these infinitary tools does have some sort of finitary analogue, these analogues are significantly messier to use (and are less well known) than their infinitary counterparts.} one concerning the testability and repairability of certain ``infinitely regular'' exchangeable random hypergraphs (or more precisely, for exchangeable ``recipes'' for producing such hypergraphs, whose palettes are sub-Cantor sets rather than finite sets). This conversion, which is completed in Section \ref{reduce-sec}, is analogous to the exploitation of graph and hypergraph limits in \cite{LovSze2}, \cite{EleSze}, with the infinitely regular exchangeable random hypergraphs being closely related to the graphons and hypergraphons from those papers.
The (infinitary versions of) three local repairability results (Theorem \ref{lgr}, \ref{monotone}, and \ref{part}) will then be deduced from a single ``non-exchangeable'' local repairability result, Proposition \ref{repair}, in Section \ref{tech}. It is at this stage that a certain amount of Ramsey theory is needed, and assumptions such as undirectedness, monotonicity, or partiteness become crucial. On the other hand, the result in Proposition \ref{repair} does not require any Ramsey theory, and works for arbitrary hereditary properties.
Proposition \ref{repair}, as well (the infinitary version of) Theorem \ref{rs-thm-dir}, is then deduced from two discretisation results, Propositions \ref{disc-ident} and \ref{disc-ident2}, which construct certain discretisation transformations from continuous palettes to discrete palettes that converge in certain technical senses to the identity as the discrete palette becomes increasingly fine. These propositions form the heart of the paper and are proven in Sections \ref{disc-sec}, \ref{disc2-sec}. Proposition \ref{disc-ident}, which underlies the local testability result in Theorem \ref{rs-thm-dir} (and is also used in the proof of Proposition \ref{repair}) follows the R\"odl-Schacht approach and is relatively easy, whereas Proposition \ref{disc-ident2}, which is needed only for the repairability results, uses the Alon-Shapira method and is significantly more technical due to the breakdown of independence caused by ``indistinguishable'' edges\footnote{In the setting of \cite{AloSha2}, this corresponds to the difficulty of repairing edges that connect a single cell in a Szemer\'edi partition to itself. Once one considers (not necessarily undirected) hypergraphs of higher order, more complicated forms of indistinguishability also appear.}.
\subsection{New negative results}\label{subs:negative}
The above positive results are fairly unsurprising, given the prior work in this direction such as \cite{AloSha2}, \cite{rs2}, and \cite{ishi}. On the other hand, the following negative results seem to be somewhat more unexpected.
\begin{theorem}[Negative results]\label{negate} \begin{itemize} \item[(a)] (Directed graph properties are not locally repairable) There exists a hereditary $\{0,1\}_2$-property which is not weakly locally repairable. \item[(b)] (Undirected $\leq 3$-uniform hypergraph properties are not locally repairable) There exists a hereditary undirected $(\operatorname{pt},\{0,1\},\{0,1\},\{0,1\})$-property which is not weakly locally repairable. \item[(c)] (Undirected $3$-uniform hypergraph properties are not locally repairable) There exists a hereditary undirected $\{0,1\}_3$-property which is not weakly locally repairable. \end{itemize} \end{theorem}
\begin{remark} Combining this theorem with Theorem \ref{rs-thm-dir} we see that there exist hereditary undirected hypergraph properties $\mathcal{P}$ which are testable with one-sided error, but not weakly or strongly locally repairable. Informally, what this means is that for hypergraphs $G$ which almost obey such properties $\mathcal{P}$, there do exist nearby hypergraphs $G'$ which genuinely obey $\mathcal{P}$, but such hypergraphs cannot be obtained from $G$ by purely local modifications. We will make this more precise in Section \ref{negchap}, when we prove more refined versions of Theorem \ref{negate}. \end{remark}
\begin{remark} There are analogous results\footnote{We are indebted to Luca Trevisan for this remark.} in the coding theory literature. For instance, in \cite{GS} one finds constructions of locally testable codes which map messages of length $k$ to strings of length $k^{1+o(1)}$, but such codes cannot be locally correctable due to the lower bound results in \cite{katz}. \end{remark}
We prove Theorem \ref{negate} in Section \ref{negchap}. For part (a), the directed graph property is actually very simple\footnote{This example is of course closely related to the example of the \emph{half-graph}, which is a familiar counterexample to many overly strong assertions about graph regularity or graph property testing.}: it is the property that a directed graph determines a total ordering on $V$. The theorem is thus asserting that a lightly corrupted total ordering on an extremely large vertex set cannot be ``cleaned up'' by a purely local algorithm. The failure in (a) can ultimately be traced back to the simple fact that directed graphs do not obey the Ramsey theorem (which in turn reflects the basic fact that the two directed edges connecting two vertices $v$ and $w$ may well have distinct colours). Parts (b) and (c) are derived from the counterexample in (a) and some \emph{ad hoc} combinatorial constructions, which ``encode'' the property of being a directed graph as a $\leq 3$-uniform undirected property, and then as a $3$-uniform undirected property. It is somewhat surprising that one has failure of local repairability in these undirected cases, since Ramsey's theorem is known to be true for hypergraphs. The problem is rather subtle, and lies in the fact that in the $3$-uniform case, Ramsey's theorem fails for a certain generalisation of a hypergraph known as a \emph{hypergraphon}, in which the colour of a given $3$-uniform edge is not completely determined by its three vertices, but is also dependent on the colour of the $\binom{3}{2}$ $2$-uniform edges between those vertices, which are in turn not completely determined by the vertices themselves.
\begin{remark} In \cite{KohNagRod}, a positive property testing result for $3$-uniform hypergraphs was proven in the case that $\mathcal{P}$ was the $\{0,1\}_3$-property of not containing a fixed hypergraph as an induced subhypergraph. This argument relied on Ramsey theory and it seems likely that the repaired hypergraph $G'$ given by this argument could be generated by a local modification rule, though we were unable to fully verify that the arguments in \cite{KohNagRod} would yield this conclusion. If this is the case, it illustrates an interesting contrast with Theorem \ref{negate}(c), in that arbitrary hereditary properties can in fact behave differently from the properties formed from forbidding a single hypergraph. Unsurprisingly, the counterexample for local repair of $3$-uniform hypergraphs can be modified to also be a counterexample for local repair of $k$-uniform hypergraphs for any $k \geq 3$, but we will not detail this here. \end{remark}
\subsection{Summary of notation}
For the reader's convenience we summarise some of the key notation used in this paper.
The cardinality of a finite set $E$ is denoted $|E|$, and we write $\binom{V}{j} := \{ e \subset V: |e|=j\}$. For any positive integer $N$, we write $[N] := \{1,\ldots,N\}$. For any event $E$, we write $\mathcal{I}(E)$ for the indicator function of $E$. The set of injections $\phi$ from $V$ to $W$ is denoted $\operatorname{Inj}(V,W)$. The notation $X \ll Y$, $Y \gg X$, $X=O(Y)$, or $Y = \Omega(X)$ is used to denote the estimate $X \leq CY$ for some absolute constant $C$; if $C$ needs to depend on some additional parameters such as $\varepsilon$, we will denote this by subscripting, e.g. $X \ll_\varepsilon Y$ or $X=O_\varepsilon(Y)$.
Hypergraphs and pullback maps $K^{(\phi)}$ are defined in Definition \ref{hyperdef}. Hereditary properties are defined in Definition \ref{hered}. Testability is defined in Definition \ref{Testdef}, and local repairability is defined in Definition \ref{locrep}, after introducing the notions of a continuous hypergraph (Definition \ref{contmap}), a local modification rule (Definition \ref{concmod} or \ref{locmod}), and entailment and modification (Definition \ref{entail}).
In Appendix \ref{prob} a number of key probabilistic concepts are defined, including the conditioning $(\mu|E)$ of a probability measure to an event of positive probability, the notion of a probability kernel $P: Y \rightsquigarrow X$, and the composition $P \circ Q$ of two such kernels.
\section{Proofs of negative results}\label{negchap}
We begin with the proofs of the various counterexamples to local repairability in Theorem \ref{negate}. The material here is largely independent of that of the positive results, which will be given in Section \ref{posi}.
\subsection{The counterexample for directed graphs}\label{lrs}
In this section we construct a counterexample that will demonstrate part (a) of Theorem \ref{negate}. We set $K := \{0,1\}_2$ and $k := 2$. Note in this case that we can identify a $K$-coloured hypergraph $G$ on a vertex set $V$ with a \emph{directed graph} $G = (V,<_G)$, where $<_G$ is a binary relation $<_G: V \times V \to \{\hbox{true}, \hbox{false}\}$ on $V$. We let $\mathcal{P}$ be the $\{0,1\}_2$-property that $<_G$ is a total ordering; this is clearly a hereditary $K$-property. It will suffice to show that $\mathcal{P}$ is not weakly locally repairable.
In order to illustrate some of the ideas involved, let us first demonstrate the much simpler fact that $\mathcal{P}$ is not \emph{strongly} locally repairable. Consider the continuous $K$-coloured hypergraph $G$ in which $V$ is the unit interval $[0,1]$ with the Borel $\sigma$-algebra $\mathcal{B}$ and Lebesgue measure $\mu$, and $<_G$ is the usual ordering relation on $[0,1]$. Then we certainly have \eqref{gp}; in fact we can take $\delta=0$ in this case. On the other hand, it is impossible to repair $G$ to a new continuous hypergraph $G'$ that obeys $\mathcal{P}$, because if $W$ is any finite set with at least two elements, then $\overline{G'}^{(W)}(v)$ cannot obey $\mathcal{P}$ whenever $v$ has a repeated coefficient (thus $v_w = v_{w'}$ for some distinct $w,w' \in W$), since the statements $w <_{\overline{G'}^{(W)}(v)} w'$ and $w' <_{\overline{G'}^{(W)}(v)} w$ would have the same truth value, which is inconsistent with $\mathcal{P}$. Thus $\mathcal{P}$ is not strongly locally repairable.
Now we disprove weak local repairability for the same property $\mathcal{P}$. This counterexample will be so strong that the parameter $\varepsilon$ in Definition \ref{locrep} (and the estimate \eqref{psieps}) will play no role whatsoever. (However, we will take advantage of \eqref{psieps} for some suitably small $\varepsilon$ when proving parts (b) and (c) of Theorem \ref{negate}.)
Let $A$ be an arbitrary finite non-empty set, let $N > 0$ be an integer, and let $\delta > 0$ be an arbitrary small number, which we can assume to be small compared to $A,N$. Let $\sigma > 0$ be an even smaller number (depending on these parameters) to be chosen later, and then let $M$ be an enormous number (depending on all previous parameters), again to be chosen later. We set $V := [M]$.
To prove Theorem \ref{negate}(a), it will suffice to construct a directed graph $G = (V,<_G)$ obeying \eqref{injv} for which there does \emph{not} exist a local modification rule $T = (A,T)$ and $\phi \in \operatorname{Inj}(A,V)$ such that the repaired graph $T_{\phi}(G)$ obeys $\mathcal{P}$ (note that by construction, our counterexample $V$ can be larger than any specified size). Our construction will be probabilistic in nature.
To define $G$, we first define an ``uncorrupted'' directed graph $G^{(0)} = (V, <_{G^{(0)}})$ by letting $<_{G^{(0)}} = <$ be the standard total ordering on $V=[M]$, thus $G^{(0)}$ obeys $\mathcal{P}$. Now let $G = ([M], <_G)$ be a corrupted version of $G^{(0)}$, in which for any $(v,w) \in \operatorname{Inj}([2],[M])$, the statements $v <_G w$ and $v < w$ have the same truth value with probability $1-\sigma$ and the opposite truth value with probability $\sigma$, with these events being independent as $(v,w)$ varies.
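The corruption model just described is straightforward to simulate. The following Python sketch is our own illustration, not part of the paper; it builds the relation $<_G$ on $V = [M]$, flipping each ordered pair independently with probability $\sigma$:

```python
import random

# Illustration only: build the corrupted relation <_G on V = [M].
# less[(v, w)] records the truth value of "v <_G w"; each ordered
# pair agrees with the uncorrupted ordering < with probability
# 1 - sigma, and is flipped with probability sigma, independently.
def corrupted_order(M, sigma, rng=random):
    less = {}
    for v in range(1, M + 1):
        for w in range(1, M + 1):
            if v == w:
                continue
            truth = v < w              # the uncorrupted ordering <_{G^{(0)}}
            if rng.random() < sigma:
                truth = not truth      # corruption event
            less[(v, w)] = truth
    return less
```

With $\sigma = 0$ this reproduces the uncorrupted total ordering $G^{(0)}$ exactly.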
Since the uncorrupted $G^{(0)}$ obeys $\mathcal{P}$, and $G$ is a random corruption of $G^{(0)}$, it is easy to see that for each fixed morphism $\phi \in \operatorname{Inj}([N],V)$, $K^{(\phi)}( G )$ will obey $\mathcal{P}$ with probability $1 - O_N(\sigma)$. By the first moment method and linearity of expectation, we conclude that \eqref{injv} holds with probability $1 - O_{N,\delta}(\sigma)$. Let us now condition on the event that \eqref{injv} holds.
Now suppose for contradiction that there exists a local modification rule $T = (A,T)$ and $\phi \in \operatorname{Inj}(A,V)$ such that the repaired graph $G' = (V \backslash \phi(A), <') := T_{\phi}(G)$ obeys $\mathcal{P}$.
Let us say that two distinct vertices $v_1, v_2 \in V \backslash \phi(A)$ are \emph{indistinguishable} if the graph $K^{(\phi \uplus (v_1,v_2))}(G) \in K^{(A \uplus \{1,2\})}$ is invariant under permutation of the $1$ and $2$ indices; more explicitly, $v_1, v_2$ are indistinguishable whenever one has the symmetries $$ \mathcal{I}( v_1 <_G a ) = \mathcal{I}( v_2 <_G a )$$ and $$ \mathcal{I}( a <_G v_1 ) = \mathcal{I}( a <_G v_2 )$$ for all $a \in A$, as well as the symmetry $$ \mathcal{I}( v_1 <_G v_2 ) = \mathcal{I}( v_2 <_G v_1 ).$$ Note that if an indistinguishable pair $v_1, v_2$ of vertices exists, then by \eqref{natural} (applied to the map from $V$ to itself interchanging $v_1$ and $v_2$) the statements $v_1 <_{G'} v_2$ and $v_2 <_{G'} v_1$ have the same truth value, which implies that $G'$ cannot obey $\mathcal{P}$, a contradiction. Thus, in order to establish Theorem \ref{negate}(a), it will suffice (by the probabilistic method) to show
\begin{lemma}\label{indis} Suppose $M$ is sufficiently large (depending on $N,\delta,\sigma,A$). Then with probability $1-O_{A}(\sigma)$, it is true that for every $\phi \in \operatorname{Inj}(A,V)$, there exists at least one pair $(v_1,v_2)$ of distinct but indistinguishable vertices in $V \backslash \phi(A)$. \end{lemma}
\begin{proof} Let $c > 0$ be a small constant depending on $A$ to be chosen later (actually, one can take $c := 100^{-|A|}$). Let $B$ be an arbitrary subset of $V$ of cardinality at least $cM$. We assume $M$ is large enough that $cM > 2$. Call $B$ \emph{corrupted} if there exist distinct $v_1, v_2 \in B$ such that $v_1 <_G v_2$ and $v_2 <_G v_1$ have the same truth value. Observe from the construction of $G$ that for any distinct $v_1, v_2 \in V$, the statements $v_1 <_G v_2$ and $v_2 <_G v_1$ have the same truth value with probability $\gg \sigma$. By independence, we conclude that $B$ will be corrupted with probability at least $1 - \exp( - \Omega( \sigma c^2 M^2 ) )$. On the other hand, the total number of sets $B$ is at most $2^M$. Also, the total number of choices for $\phi$ can be crudely bounded by $M^{|A|}$. By the union bound, we conclude that with probability at least $1 - 2^M M^{|A|} \exp( - \Omega( \sigma c^2 M^2 ) )$, \emph{every} set of cardinality at least $cM$ is corrupted, for all choices of $\phi$. If $M$ is large enough depending on $A, \sigma$, we thus see that this event holds with probability $1-O_{A}(\sigma)$.
Let us condition on the above event, and let $\phi \in \operatorname{Inj}(A,V)$ be arbitrary. Let $\Omega := 2^A$ denote the power set of $A$. We can then partition $$ V = \phi(A) \cup \bigcup_{U, U' \in \Omega} V_{U,U'}$$ where for each pair $U, U' \in \Omega$, $V_{U,U'}$ is the set of all $v \in V \backslash \phi(A)$ such that $$ U = \{ a \in A: v <_G \phi(a) \} \hbox{ and } U' = \{ a \in A: \phi(a) <_G v \}.$$
The total number of pairs $(U,U')$ is $O_{A}(1)$. Thus by the pigeonhole principle (and taking $M$ large enough), we can find $U, U'$ such that $|V_{U,U'}| \geq cM$, if $c$ is sufficiently small depending on $A$. In particular, $V_{U,U'}$ is corrupted and we can find distinct $v_1,v_2 \in V_{U,U'}$ such that $v_1 <_G v_2$ and $v_2 <_G v_1$ have the same truth value. By construction, we see that $v_1,v_2$ are indistinguishable, and the claim follows. \end{proof}
The proof of Theorem \ref{negate}(a) is now complete.
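To make the notion of indistinguishability concrete, here is a small Python check (our own sketch, not part of the paper). It takes a dictionary `less` mapping each ordered pair $(v,w)$ to the truth value of $v <_G w$, together with the list of anchor vertices $\phi(a)$, $a \in A$:

```python
# Illustration only: decide whether two vertices v1, v2 outside phi(A)
# are indistinguishable, i.e. whether swapping them leaves the
# pulled-back graph invariant.  `less[(v, w)]` is the truth value of
# "v <_G w" and `anchors` lists the vertices phi(a), a in A.
def indistinguishable(less, v1, v2, anchors):
    for a in anchors:
        if less[(v1, a)] != less[(v2, a)]:   # v1, v2 must relate to each
            return False                     # anchor in the same way
        if less[(a, v1)] != less[(a, v2)]:
            return False
    # the pair itself must be symmetric: v1 <_G v2 iff v2 <_G v1
    return less[(v1, v2)] == less[(v2, v1)]
```

By the argument above, the existence of any such pair already rules out a repaired graph $T_\phi(G)$ obeying $\mathcal{P}$.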
\subsubsection{Further remarks} We close this section with some further remarks about Theorem \ref{negate}(a). Informally, the above result asserts that there does not exist a repair algorithm to convert a corrupted total ordering $G$ on a large finite set into an exact total ordering $<'$, in which the order relationship of two vertices $v_1, v_2$ of the set is repaired by inspecting the corrupted relationship between $v_1, v_2$ and a bounded number of other vertices, selected in advance. It is likely that this result can be strengthened to allow for a more adaptive repair algorithm in which the other vertices that one queries need not be selected in advance, and for which the probability of the algorithm successfully obtaining a total ordering is lowered to, say, $2/3$ rather than $1$. One should also be able to obtain a similar result even if the algorithm is allowed to retain a bounded amount of ``memory'' between repairing one edge and the next. However, we will not pursue such strengthenings here.
On the other hand, once one relaxes the requirement of locality (or bounded memory), it becomes very easy to repair the corrupted total ordering $G$ used in the above proof to obtain an exact total ordering $<'$, while only modifying a proportion $O(\varepsilon)$ of the edges. We sketch the details as follows. Fix $\varepsilon > 0$, let $A = [N']$ for some large integer $N'$, and select $\phi \in \operatorname{Inj}(A,V)$ at random. With probability $1-O_{A}(\delta)$, the directed graph $K^{(\phi)}(G)$ is totally ordered; we condition on this event, and then without loss of generality (relabeling $A$ if necessary) we may assume that the total ordering on $K^{(\phi)}(G)$ is the usual ordering on $A$.
For each $0 \leq i \leq N'$, let $V_i$ be the set of all vertices $v \in V \backslash \phi(A)$ such that $$ \{ j \in A: i < j \leq N' \} = \{ 1 \leq j \leq N': v <_G \phi(j) \} $$ and $$ \{ j \in A: 1 \leq j \leq i \} = \{ 1 \leq j \leq N': \phi(j) <_G v \};$$ roughly speaking, $V_i$ is the set of those vertices which $\phi(A)$ ``predicts'' should lie in the interval between $\phi(i)$ and $\phi(i+1)$.
These sets are clearly disjoint, and using the first moment method one can show that with probability $1 - O_{N',\varepsilon}(\delta)$, these sets cover a proportion $1-O(\varepsilon)$ of the vertices in $V$. We then define the total order $<'$ by declaring $v_i <' v_j$ whenever $v_i \in V_i, v_j \in V_j$, and $i < j$, placing an arbitrary total ordering $<'$ on each of the $V_i$ separately, and also completing the total ordering to the complement of $\bigcup_i V_i$ (these are the non-local components of the repair algorithm). It is not difficult to show that for $\delta$ sufficiently small, and then $M$ sufficiently large, with probability $1 - O_{A,\varepsilon}(\delta)$ this total order $<'$ will differ from $G$ on only $O(\varepsilon)$ of the edges; we omit the details. Note that the run time of this algorithm will be linear in the number of edges (i.e. the run time will be $O(|V|^2)$).
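The sketch above can be turned into a few lines of Python (our own illustration, under the simplifying assumption that the sampled anchors $\phi(1), \ldots, \phi(N')$ are already correctly ordered); `less[(v, w)]` again denotes the truth value of $v <_G w$:

```python
# Illustration only: the non-local repair procedure sketched above.
# `anchors` is the list phi(1), ..., phi(N'), assumed correctly ordered.
def repair_total_order(vertices, less, anchors):
    Nprime = len(anchors)
    buckets = [[] for _ in range(Nprime + 1)]   # V_0, ..., V_{N'}
    leftover = []                               # vertices with inconsistent votes
    for v in vertices:
        if v in anchors:
            continue
        below = {j for j in range(Nprime) if less[(v, anchors[j])]}
        above = {j for j in range(Nprime) if less[(anchors[j], v)]}
        placed = False
        # v belongs to V_i if the anchors "predict" it lies strictly
        # between anchors[i-1] and anchors[i]
        for i in range(Nprime + 1):
            if below == set(range(i, Nprime)) and above == set(range(i)):
                buckets[i].append(v)
                placed = True
                break
        if not placed:
            leftover.append(v)
    # concatenate: V_0, phi(1), V_1, ..., phi(N'), V_{N'}, then leftovers;
    # the ordering inside each bucket is arbitrary (here: sorted)
    order = []
    for i in range(Nprime + 1):
        order.extend(sorted(buckets[i]))
        if i < Nprime:
            order.append(anchors[i])
    order.extend(sorted(leftover))
    return order
```

On an uncorrupted input this recovers the standard ordering of $[M]$ exactly; with small corruption rate, only the misclassified and leftover vertices (an $O(\varepsilon)$ proportion, with high probability) end up out of place.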
\subsection{The counterexample for undirected $\leq 3$-uniform hypergraphs}\label{leq3}
We now prove Theorem \ref{negate}(b). We fix $k = 3$ and $K = ( \operatorname{pt}, \{0,1\}, \{0,1\}, \{0,1\} )$. Note that a $K$-coloured undirected hypergraph $G$ on a vertex set $V$ can thus be viewed as a quadruplet $G = (V, E_1, E_2, E_3)$, where $E_1 \subset V$ is a set of vertices, $E_2 \subset \binom{V}{2}$ is a set of undirected $2$-edges, and $E_3 \subset \binom{V}{3}$ is a set of undirected $3$-edges. The basic idea will be to ``encode'' the notion of a total ordering using the undirected data $E_1, E_2, E_3$.
Let us introduce the following notation. Given a $K$-coloured undirected hypergraph $G$ and vertices $r, b, b' \in V$, we say that \begin{itemize} \item $b$ is \emph{$G$-blue} if $\{b\} \in E_1$; \item $r$ is \emph{$G$-red} if $\{r\} \not \in E_1$; \item $r$ \emph{$G$-likes} $b$ if $r$ is $G$-red, $b$ is $G$-blue, and $\{r,b\} \in E_2$; \item $r$ \emph{$G$-prefers} $b$ to $b'$ if $r$ is $G$-red, $b, b'$ are $G$-blue, $r$ $G$-likes $b$, and $r$ does not $G$-like $b'$; \item $r$ \emph{ranks $\{b,b'\}$ $G$-correctly} if $r$ is $G$-red, $b,b'$ are $G$-blue, and $\{r,b,b'\} \in E_3$; \item We write $b >_{G,r} b'$ if $r$ either (a) $G$-prefers $b$ to $b'$ and ranks $\{b,b'\}$ $G$-correctly, or (b) $G$-prefers $b'$ to $b$ and does not rank $\{b,b'\}$ $G$-correctly; \item The hypergraph $G$ is \emph{consistently orderable} if there exists a total ordering $>_G$ on $V$ such that $b >_G b'$ whenever $r, b, b'$ are such that $b >_{G,r} b'$. \end{itemize}
We let $\mathcal{P}$ be the $K$-property of being undirected and consistently orderable. One easily verifies that $\mathcal{P}$ is a hereditary undirected $K$-coloured hypergraph property. To show Theorem \ref{negate}(b), it suffices to show that $\mathcal{P}$ is not weakly locally repairable.
Let $\varepsilon > 0$ be a small absolute constant (one could take $\varepsilon = \frac{1}{1000}$ for concreteness), let $A$ be an arbitrary finite non-empty set, let $N > 0$ be an integer, and let $\delta > 0$ be an arbitrary small number, which we can assume to be small compared to $A,N$. Let $\sigma > 0$ be an even smaller number (depending on these parameters) to be chosen later, and then let $M$ be an enormous number (depending on all previous parameters), again to be chosen later. We set $V := [M]$.
To prove Theorem \ref{negate}(b), it will suffice to construct a $K$-coloured undirected hypergraph $G = (V,E_1,E_2,E_3)$ obeying \eqref{injv}, for which there does \emph{not} exist a local modification rule $T = (A,T)$ and $\phi \in \operatorname{Inj}(A,V)$ such that the repaired hypergraph $T_{\phi}(G)$ obeys $\mathcal{P}$ and \eqref{psieps}. (Again, note that by construction $V$ can be made larger than any specified number.)
As before, to define $G$ we first define a (random) ``uncorrupted'' $K$-coloured hypergraph $$G^{(0)} = (V, E^{(0)}_1, E^{(0)}_2, E^{(0)}_3)$$ by the following construction:
\begin{itemize} \item $E^{(0)}_1 := [M/2]$ (thus vertices between $1$ and $M/2$ are $G^{(0)}$-blue, and vertices between $M/2+1$ and $M$ are $G^{(0)}$-red); \item $E^{(0)}_2$ is a random graph on $V$, with each edge $\{v_1,v_2\}$ lying in $E^{(0)}_2$ with a probability of $1/2$, with these events being jointly independent. (Thus, a given $G^{(0)}$-red vertex will $G^{(0)}$-like a given $G^{(0)}$-blue vertex with a probability of $1/2$, independently of all other instances of the $G^{(0)}$-like relation.) \item $E^{(0)}_3$ is the set of unordered triples $\{r,b,b'\}$ such that $r$ is $G^{(0)}$-red, $b, b'$ are $G^{(0)}$-blue, and one of the following statements holds: \begin{itemize} \item[(i)] $r$ $G^{(0)}$-likes both $b$ and $b'$; \item[(ii)] $r$ does not $G^{(0)}$-like either $b$ or $b'$; \item[(iii)] $r$ $G^{(0)}$-prefers $b$ to $b'$, and $b > b'$. \end{itemize} \end{itemize}
\begin{figure}
\caption{The three types of triples (indicated by shaded triangles) which lie in $E^{(0)}_3$. Solid lines indicate edges in $E_2 = E^{(0)}_2$, while dashed lines indicate edges not in $E_2 = E^{(0)}_2$. The vertices on the top row are red, while the bottom vertices are blue; the blue points are ordered so that the larger points are on the right.}
\label{fig1}
\end{figure}
It is not hard to verify that $G^{(0)}$ is consistently orderable (with $>_{G^{(0)}}$ being the usual ordering $>$ on $[M]$) and so obeys $\mathcal{P}$.
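A Python sketch of the construction of $G^{(0)}$ may clarify rules (i)--(iii); this is our own illustration, not part of the paper. Note that rules (i)--(iii) combined place $\{r,b,b'\}$ in $E^{(0)}_3$ unless $r$ likes the smaller blue vertex but not the larger one:

```python
import random
from itertools import combinations

# Illustration only: build the uncorrupted hypergraph G^(0) on [M].
# blue = [1 .. M/2], red = [M/2+1 .. M]; E2 is a random graph with
# edge density 1/2, and a triple {r, b, b'} enters E3 per rules (i)-(iii).
def build_G0(M, rng=random):
    blue = set(range(1, M // 2 + 1))
    red = set(range(M // 2 + 1, M + 1))
    E2 = {frozenset(e) for e in combinations(range(1, M + 1), 2)
          if rng.random() < 0.5}
    likes = lambda r, b: frozenset({r, b}) in E2
    E3 = set()
    for r in red:
        for b, bp in combinations(sorted(blue), 2):     # b < bp
            if likes(r, b) and likes(r, bp):            # (i) likes both
                E3.add(frozenset({r, b, bp}))
            elif not likes(r, b) and not likes(r, bp):  # (ii) likes neither
                E3.add(frozenset({r, b, bp}))
            elif likes(r, bp) and not likes(r, b):      # (iii) prefers the
                E3.add(frozenset({r, b, bp}))           #       larger vertex
    return blue, red, E2, E3
```

In particular, whenever $r$ prefers one of $b, b'$ to the other, membership of $\{r,b,b'\}$ in $E^{(0)}_3$ records whether the preferred vertex is the larger one, which is how the ordering on the blue vertices is encoded.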
Next, we define the ``corrupted'' $K$-coloured undirected hypergraph $$G = (V, E) = (V, E_1, E_2, E_3)$$ as follows:
\begin{itemize} \item $V = [M]$; \item $E_j = E^{(0)}_j$ for $j=1,2$ (thus, $G$ and $G^{(0)}$ have the same notions of red, blue, like, and prefer). \item For each $e \in \binom{V}{3}$, the statements $e \in E^{(0)}_3$ and $e \in E_3$ have the same truth value with probability $1-\sigma$, and have opposite truth value with probability $\sigma$, independently of each other and of the random graph $E^{(0)}_2$. (Thus the relations $>_{G,r}$ will be a slight corruption of $>_{G^{(0)},r}$.) \end{itemize}
Since $G^{(0)}$ obeys $\mathcal{P}$, we can use the first moment method as in the preceding section to conclude that \eqref{injv} holds with probability $1 - O_{N,\delta}(\sigma)$. Let us now condition on the event that \eqref{injv} holds.
Suppose for contradiction that there exists a local modification rule $T = (A,T)$ and a morphism $\phi: A \to V$ such that the repaired hypergraph $G' = (V \backslash \phi(A), E'_1,E'_2,E'_3) := T_{\phi}(G)$ obeys $\mathcal{P}$ and \eqref{psieps}. From \eqref{psieps} we see in particular that \begin{equation}\label{lowcorrupt}
|E'_j \Delta E_j| \ll \varepsilon M^j \end{equation} for $j=1,2,3$, where $\Delta$ denotes the symmetric difference operator.
Fix $T,\phi$. Call a quadruplet $(r_1,r_2,b_1,b_2)$ of distinct vertices in $V \backslash \phi(A)$ \emph{inconsistent} (relative to $T$ and $\phi$) if the following properties hold:
\begin{itemize} \item[(i)] $r_1,r_2$ are both $G$-red and $G'$-red, and $b_1,b_2$ are both $G$-blue and $G'$-blue. \item[(ii)] $r_1$ $G'$-prefers $b_1$ to $b_2$, and $r_2$ $G'$-prefers $b_2$ to $b_1$. \item[(iii)] The undirected hypergraph $K^{(\phi \uplus (r_1,r_2,b_1,b_2))}(G) \in K^{(A \uplus [4])}$ is invariant under the morphism $\operatorname{id}_A \oplus (2,1,4,3) \in \operatorname{Inj}(A \cup [4], A \cup [4])$, where $(2,1,4,3) \in \operatorname{Inj}([4],[4])$ is the permutation which switches $1$ and $2$, and also switches $3$ and $4$. More explicitly, for any $a \in A$, we have the $E_2$ symmetries $\mathcal{I}( \{ b_1, \phi(a) \} \in E_2 ) = \mathcal{I}( \{ b_2, \phi(a) \} \in E_2 )$ and $\mathcal{I}( \{ r_1, \phi(a) \} \in E_2 ) = \mathcal{I}( \{ r_2, \phi(a) \} \in E_2 )$, as well as the $E_3$ symmetries \begin{equation}\label{symmetry} \begin{split} \mathcal{I}( \{r_1,r_2,b_1\} \in E_3 ) &= \mathcal{I}( \{r_1,r_2,b_2\} \in E_3 ) \\ \mathcal{I}( \{b_1,b_2,r_1\} \in E_3 ) &= \mathcal{I}( \{b_1,b_2,r_2\} \in E_3 ) \\ \mathcal{I}( \{r_1,b_1,\phi(a)\} \in E_3 ) &= \mathcal{I}( \{r_2,b_2,\phi(a)\} \in E_3 ) \hbox{ for all } a \in A\\ \mathcal{I}( \{r_1,b_2,\phi(a)\} \in E_3 ) &= \mathcal{I}( \{r_2,b_1,\phi(a)\} \in E_3 ) \hbox{ for all } a \in A\\ \mathcal{I}( \{r_1,\phi(a),\phi(a')\} \in E_3 ) &= \mathcal{I}( \{r_2,\phi(a),\phi(a')\} \in E_3 ) \hbox{ for all } \{a,a'\} \in \binom{A}{2}\\ \mathcal{I}( \{b_1,\phi(a),\phi(a')\} \in E_3 ) &= \mathcal{I}( \{b_2,\phi(a),\phi(a')\} \in E_3 ) \hbox{ for all } \{a,a'\} \in \binom{A}{2}. \end{split} \end{equation} \end{itemize}
\begin{figure}
\caption{A partial depiction of an inconsistent quadruple $(r_1,r_2,b_1,b_2)$, surrounded by a number of vertices $\phi(a)$ with $a \in A$. The connectivity between $(r_1,r_2,b_1,b_2)$ and $\phi(A)$ needs to be symmetric with respect to the ``reflection map'' $\operatorname{id}_A \oplus (2,1,4,3)$ which swaps $r_1$ and $r_2$, and swaps $b_1$ and $b_2$, but leaves the vertices in $\phi(A)$ unchanged.}
\label{fig2}
\end{figure}
Observe that if $(r_1,r_2,b_1,b_2)$ are inconsistent, then from properties (iii) and Definition \ref{locmod} we conclude that $$ \mathcal{I}( \{ r_1, b_1, b_2 \} \in E'_3 ) = \mathcal{I}( \{ r_2, b_2, b_1 \} \in E'_3 ).$$ By properties (i) and (ii), this implies either that $b_1 <_{G',r_1} b_2$ and $b_2 <_{G',r_2} b_1$ are both true, or that $b_2 <_{G',r_1} b_1$ and $b_1 <_{G',r_2} b_2$ are both true. But this implies that $G'$ is not consistently orderable and thus does not obey $\mathcal{P}$, a contradiction. Thus to conclude the proof of Theorem \ref{negate}(b), it suffices to show
\begin{lemma}\label{triple} Suppose $\varepsilon > 0$ is sufficiently small, and $M$ is sufficiently large (depending on $N,\sigma,A,\varepsilon$). Then with probability $1-O_{A,\varepsilon}(\sigma)$, it is true that for all morphisms $\phi \in \operatorname{Inj}(A,V)$ and all local modification rules $T$ obeying \eqref{lowcorrupt}, there exists at least one quadruplet $(r_1,r_2,b_1,b_2)$ of inconsistent vertices in $V \backslash \phi(A)$. \end{lemma}
\begin{proof}
Let $c > 0$ be a small number depending on $\varepsilon, A$ to be chosen later. Recall that the $2$-uniform graph $E_2 \subset \binom{V}{2}$ was selected to be a random graph on $V = [M]$, with edge density $1/2$. By standard arguments (similar\footnote{In other words, one shows that \eqref{regular} holds for each pair $X, Y$ with super-exponentially high probability $1 - \exp(-\Omega_c(|V|^2))$, and then applies the union bound. See also \cite{wilson} for a proof that random graphs are regular.} to that used to prove Lemma \ref{indis}), we thus see that if $M$ is sufficiently large depending on $c, \sigma$, with probability $1 - O(\sigma)$, the graph $E_2$ is \emph{$c$-regular} in the sense that \begin{equation}\label{regular}
|\{ (a,b) \in X \times Y: \{a,b\} \in E_2 \}| = ( \frac{1}{2} + O(c) ) |X| |Y| \end{equation}
for all disjoint $X, Y \subset V$ with cardinality $|X|, |Y| \geq c |V|$. Let us now condition on the event that we have this $c$-regularity, and freeze $E_2$ (and hence $G^{(0)}$).
Next, by paying a factor of $M^{|A|}$ in all future probability upper bounds, we may freeze the morphism $\phi$. The total number of possible modification rules $T$ is clearly $O_A(1)$, so by paying this factor as well we may also freeze $T$.
We now freeze the set $E_3 \backslash \binom{ V \backslash \phi(A) }{3}$, which describes all the edges of $E_3$ which contain at least one vertex from $\phi(A)$. Now that we have frozen these edges, as well as $E_2$ and $T$, we see from Definition \ref{locmod} that $E'_1$ and $E'_2$ are also frozen.
The only randomness that remains after all this freezing comes from the random variables $\mathcal{I}(e \in E_3 \Delta E^{(0)}_3)$ for $e \in \binom{ V \backslash \phi(A) }{3}$, which are jointly independent (even after all the freezing) and equal $1$ with probability $\sigma$ each. From \eqref{natural} we conclude that if $e \in \binom{V}{3}$ intersects $\phi(A)$ then the quantity $\mathcal{I}(e \in E'_3)$ is now deterministic, whereas if $e$ does not intersect $\phi(A)$ then the quantity $\mathcal{I}(e \in E'_3)$ depends only on the quantity $\mathcal{I}(e \in E_3 \Delta E^{(0)}_3)$ (as well as all the frozen data, of course).
Since $E'_1$ and $E'_2$ are frozen, we may condition on the event that \eqref{lowcorrupt} holds for $j=1,2$ without difficulty. (We will not attempt to condition on the event that \eqref{lowcorrupt} holds for $j=3$, because this creates the technical problem that such a conditioning will disrupt the joint independence of the events $e \in E_3 \Delta E^{(0)}_3$, which we will need to exploit later.)
Let $V_R$ denote the set of vertices in $V \backslash \phi(A)$ which are both $G$-red and $G'$-red, and similarly let $V_B$ denote the set of vertices in $V \backslash \phi(A)$ which are both $G$-blue and $G'$-blue. From \eqref{lowcorrupt} for $j=1$ we have \begin{equation}\label{vrb}
|V_R|, |V_B| \geq M/4 \end{equation} if $\varepsilon$ is small enough.
Let $E^*_2 \subset V_R \times V_B$ be the set of all pairs $(r,b) \in V_R \times V_B$ such that $\{r,b\} \in E_2 \Delta E'_2$. From \eqref{lowcorrupt} for $j=2$ and \eqref{vrb}, we have \begin{equation}\label{vrbb}
|E^*_2| \ll \varepsilon |V_R| |V_B|. \end{equation}
Let $\Omega = 2^{A}$ be the power set of $A$. If $U_R \in \Omega$, define $V_{R,U_R}$ to be the set of all vertices $r \in V_R$ such that $$ U_R = \{ a \in A: \{\phi(a),r\} \in E_2 \}.$$ Similarly, for any $U_B \in \Omega$, define $V_{B,U_B}$ to be the set of all $b \in V_B$ such that $$ U_B = \{ a \in A: \{\phi(a),b\} \in E_2 \}.$$ Then we have the partitions $$ V_R = \bigcup_{U_R \in \Omega} V_{R,U_R}; \quad V_B = \bigcup_{U_B \in \Omega} V_{B,U_B}$$ and thus $$ V_R \times V_B = \bigcup_{U_R, U_B \in \Omega} V_{R,U_R} \times V_{B,U_B}.$$ The number of pairs $(U_R, U_B)$ is $O_{A}(1)$. By the pigeonhole principle (first discarding all small pairs $V_{R,U_R} \times V_{B,U_B}$) we can choose a pair $(U_R,U_B)$ such that \begin{equation}\label{verb}
|V_{R,U_R}|, |V_{B,U_B}| \gg_{\varepsilon,A} M \end{equation} and \begin{equation}\label{es2}
|E^*_2 \cap (V_{R,U_R} \times V_{B,U_B})| \ll \varepsilon |V_{R,U_R}| |V_{B,U_B}|.
\end{equation} Fix this pair $(U_R, U_B)$ (if there are multiple pairs available, choose one arbitrarily).
By \eqref{regular} and standard ``counting lemma'' arguments (see e.g. \cite{wilson}), we see that there exist $\gg |V_{R,U_R}|^2 |V_{B,U_B}|^2$ quadruplets $(r_1,r_2,b_1,b_2)$ with $r_1, r_2 \in V_{R,U_R}$ and $b_1, b_2 \in V_{B,U_B}$ such that $r_1$ $G$-prefers $b_1$ to $b_2$, and $r_2$ $G$-prefers $b_2$ to $b_1$. In view of \eqref{es2}, we conclude (if $\varepsilon$ is small enough) that the same assertion holds with ``$G$-prefers'' replaced by ``$G'$-prefers''.
Call a quadruplet $(r_1,r_2,b_1,b_2)$ \emph{admissible} if it is of the above form, thus $r_1, r_2 \in V_{R,U_R}$ and $b_1, b_2 \in V_{B,U_B}$ are such that $r_1$ $G'$-prefers $b_1$ to $b_2$, and $r_2$ $G'$-prefers $b_2$ to $b_1$. From \eqref{verb} we thus see that there are $\gg_{\varepsilon,A} M^4$ admissible quadruplets.
From chasing all the definitions, we see that if an admissible quadruplet $(r_1,r_2,b_1,b_2)$ obeys \eqref{symmetry}, then it is inconsistent. Thus, it will suffice to upper bound the probability that no admissible quadruplet obeys \eqref{symmetry} for any choice of $\phi$.
Since $E_2$ and $E'_2$ are already frozen, so is the set of admissible quadruplets $(r_1,r_2,b_1,b_2)$. Observe from the construction of $E_3$ that for any admissible quadruplet $(r_1,r_2,b_1,b_2)$, the probability that this quadruplet obeys\footnote{Note from construction that only the first two conditions in \eqref{symmetry} are in doubt; the remaining conditions, which involve at least one element from $\phi(A)$, are automatic due to $r_1, r_2$ and $b_1, b_2$ lying in the same cells $V_{R,U_R}$ and $V_{B,U_B}$ respectively.} \eqref{symmetry} is $\Omega_{\sigma,A}(1)$, and thus the probability that it does \emph{not} obey \eqref{symmetry} is $\exp( - \Omega_{\sigma,A}(1) )$. Furthermore, the events that a family of quadruplets do not obey \eqref{symmetry} will be jointly independent as long as no two of these quadruplets share more than two vertices in common (recall that we are freezing all the edges of $E_3$ which intersect $\phi(A)$). Since there are $\gg_{\varepsilon,A} M^4$ admissible quadruplets, an easy greedy algorithm argument allows us to find $\gg_{\varepsilon,A} M$ admissible quadruplets for which no two share more than two vertices in common. Thus the probability that no admissible quadruplet obeys \eqref{symmetry} is at most $\exp( - \Omega_{\sigma,\varepsilon,A}(M) )$. Combining this with our previous factors of $M^{|A|}$ and $O_{A}(1)$ introduced earlier, we obtain the claim if $M$ is sufficiently large. \end{proof}
The proof of Theorem \ref{negate}(b) is now complete.
\subsection{The counterexample for undirected $3$-uniform hypergraphs}\label{eq3}
We now adapt the methods of the previous section to prove Theorem \ref{negate}(c). The main challenge is to find analogues of $G^{(0)}$ and $\mathcal{P}$ in the $3$-uniform setting rather than the $\leq 3$-uniform setting. This will be done in a rather artificial and \emph{ad hoc} fashion, encoding a $\leq 3$-uniform hypergraph property in a $3$-uniform one.
We fix $k = 3$ and $K = \{ 0,1\}_3$. Note that a $K$-coloured undirected hypergraph $G$ on a vertex set $V$ can thus be viewed as a pair $G = (V, E)$, where $E \subset \binom{V}{3}$ is a set of $3$-edges.
In order to motivate the property $\mathcal{P}$ that we will need here, we first construct the uncorrupted $K$-coloured hypergraph $G^{(1)} = (V, E^{(1)})$ which will play the role of $G^{(0)}$ in the previous section.
Let $M$ be a large integer. Then we define the $(\operatorname{pt},\{0,1\},\{0,1\},\{0,1\})$-coloured undirected hypergraph $G^{(0)} = ([M], E^{(0)}_1, E^{(0)}_2, E^{(0)}_3)$ as in the previous section. We then define the notions of ``red'', ``blue'', ``likes'', ``prefers'', ``ranks correctly'' as before (dropping the $G^{(0)}$ prefix). We then let $V := [2M]$. We call the vertices in $[2M] \backslash [M]$ \emph{green} (thus every vertex in $V$ is either red, blue, or green). We then define $G^{(1)} = (V, E^{(1)})$ to be the $3$-uniform hypergraph, where $E^{(1)}$ consists of all triples $\{x,y,z\} \in \binom{V}{3}$ for which one of the following statements is true: \begin{itemize} \item $x,y,z$ are all green. \item $\{x,y,z\}$ consists of a red vertex, a blue vertex, and a green vertex, and the red vertex likes the blue vertex. \item $\{x,y,z\}$ consists of a red vertex and two blue vertices, and the red vertex ranks the two blue vertices correctly. \end{itemize}
Note how $E^{(1)}$ involves the three components $E^{(0)}_1, E^{(0)}_2, E^{(0)}_3$ of $E^{(0)}$.
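The encoding of the three components of $E^{(0)}$ into the single $3$-uniform edge set $E^{(1)}$ can be sketched in Python as follows (our own illustration, not part of the paper; the callbacks `likes` and `ranks` stand in for membership of $\{r,b\}$ in $E^{(0)}_2$ and of $\{r,b,b'\}$ in $E^{(0)}_3$ respectively):

```python
from itertools import combinations

# Illustration only: encode the <=3-uniform data of G^(0) on [M] into
# the single 3-uniform edge set E^(1) on [2M].  `blue` and `red`
# partition [M]; the vertices in [M+1 .. 2M] are green.
def build_E1(M, blue, red, likes, ranks):
    green = set(range(M + 1, 2 * M + 1))
    E = set()
    # rule 1: all-green triples
    for t in combinations(sorted(green), 3):
        E.add(frozenset(t))
    # rule 2: red-blue-green triples where the red vertex likes the blue one
    for r in red:
        for b in blue:
            if likes(r, b):
                for g in green:
                    E.add(frozenset({r, b, g}))
    # rule 3: red-blue-blue triples that are ranked correctly
    for r in red:
        for b, bp in combinations(sorted(blue), 2):
            if ranks(r, b, bp):
                E.add(frozenset({r, b, bp}))
    return E
```

Note how the green vertices act as markers: all-green triples identify the green set itself, and red-blue-green triples transport the $2$-uniform ``likes'' relation into the $3$-uniform setting.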
\begin{figure}
\caption{The various types of triples that make up $E^{(1)}$. In addition to the triples that are inherited from $E^{(0)}_3$, one also has triples that connect three green vertices together, or else connect a green vertex to a red vertex that likes a blue vertex. Note that the four green vertices on the right will in fact form a tetrahedron (and thus be $G$-green), whereas any quadruple of vertices which is not entirely green cannot form such a tetrahedron.}
\label{fig3}
\end{figure}
Now we define $\mathcal{P}$. For any $K$-coloured undirected hypergraph $G = (V,E)$, we introduce the following notation:
\begin{itemize} \item We call an element $g_1 \in V$ \emph{$G$-green} if there exists $\{g_2,g_3,g_4\} \in \binom{V \backslash \{g_1\}}{3}$ such that $\binom{\{g_1,g_2,g_3,g_4\}}{3} \subset E$. \item We call an element $x \in V$ \emph{$G$-nongreen}\footnote{We allow for the possibility that a vertex is both $G$-green and $G$-nongreen, or is neither $G$-green nor $G$-nongreen. However, these situations will not occur for the model graph $G^{(1)}$.} if there exist distinct $G$-green vertices $g, g'$ such that $\{x,g,g'\} \not \in E$. \item If $x, y \in V$ are distinct, we say that $x$ \emph{$G$-likes} $y$ if they are both $G$-nongreen, and there exists a $G$-green vertex $g$ such that $\{ x, y, g \} \in E$. \item Two vertices $x, x' \in V$ are \emph{$G$-similar} if there exists $y$ such that $x, x'$ both $G$-like $y$. \item If $r, b \in V$ are distinct, we say that $r$ \emph{$G$-dislikes} $b$ if $r,b$ are both $G$-nongreen, and there exists a $G$-green vertex $g$ such that $\{ r, b, g \} \not \in E$. \item If $b, b', r$ are distinct elements of $V$, we say that $r$ \emph{$G$-prefers $b$ to $b'$} if $r,b,b'$ are $G$-nongreen, $b,b'$ are $G$-similar, $r$ $G$-likes $b$, and $r$ $G$-dislikes $b'$. \item If $b, b', r$ are distinct elements of $V$, we write $b >_{G,r} b'$ if either (a) $r$ $G$-prefers $b$ to $b'$ and $\{r,b,b'\} \in E$; or (b) $r$ $G$-prefers $b'$ to $b$ and $\{r,b,b'\} \not \in E$. \item The hypergraph $G$ is \emph{consistently orderable} if there exists a total ordering $>_G$ on $V$ such that $b >_G b'$ whenever $r, b, b'$ are such that $b >_{G,r} b'$. \end{itemize}
We say that a $K$-coloured hypergraph obeys $\mathcal{P}$ if it is undirected and consistently orderable. One can verify with some tedious effort that $\mathcal{P}$ is an undirected $K$-property. One can also verify that when $G = G^{(1)}$, the $G^{(1)}$-green vertices are precisely the green vertices, the $G^{(1)}$-nongreen vertices the red and blue vertices, and $G^{(1)}$-similar vertices are either both red or both blue. From this one can easily verify that $G^{(1)}$ obeys $\mathcal{P}$ (using the usual ordering $>$ on $[2M]$ for $>_{G^{(1)}}$).
We set $\varepsilon > 0$ to be small ($\varepsilon := \frac{1}{1000}$ will do). Let $A$, $N > 0$, and $\delta > 0$ be arbitrary, and let $\sigma > 0$ be sufficiently small depending on all these parameters. We then let $M$ be a large integer (depending on all previous parameters). We define the ``corrupted'' $3$-uniform hypergraph $G = (V, E)$ by declaring $\mathcal{I}( e \in E ) := \mathcal{I}( e \in E^{(1)} )$ with probability $1-\delta$ and $\mathcal{I}( e \in E ) := 1-\mathcal{I}( e \in E^{(1)} )$ with probability $\delta$, independently for each $e \in \binom{V}{3}$.
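The corruption mechanism is just independent bitwise noise on the vector of edge indicators. A minimal Python sketch (ours, not from the paper; \texttt{p} denotes the flip probability, and edges are frozensets):

```python
import random
from itertools import combinations

def corrupt(V, E1, p, rng=None):
    """Flip the membership of each triple of V independently with probability p."""
    rng = rng or random.Random(0)
    E = set()
    for t in combinations(sorted(V), 3):
        t = frozenset(t)
        if (t in E1) != (rng.random() < p):  # XOR original membership with a flip
            E.add(t)
    return E
```

With \texttt{p = 0} the input is returned unchanged, while for large vertex sets $|E \Delta E^{(1)}|$ concentrates around \texttt{p} times $\binom{|V|}{3}$.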
Since $G^{(1)}$ obeys $\mathcal{P}$, we can use the first moment method as in the preceding two sections to conclude that \eqref{injv} holds with probability $1 - O_{N,\delta}(\sigma)$. Let us now condition on the event that \eqref{injv} holds.
To prove Theorem \ref{negate}(c), it will suffice to show that there does \emph{not} exist a local modification rule $T = (A,T)$ and a morphism $\phi \in \operatorname{Inj}(A,V)$ such that the repaired hypergraph $T_{\phi}(G)$ obeys $\mathcal{P}$ and \eqref{psieps}.
Suppose for contradiction that $T$ and $\phi$ exist with the above properties. We write $G' = (V \backslash \phi(A),E') := T_\phi(G)$. From \eqref{psieps} we thus have \begin{equation}\label{lowc}
|E' \Delta E| \ll \varepsilon M^3. \end{equation} Let us call a $9$-tuple \begin{equation}\label{rgb} (r_1, r_2, r_3, b_1, b_2, g_1, g_2, g_3, g_4 ) \end{equation} of distinct vertices in $V \backslash \phi(A)$ \emph{inconsistent} if the following properties hold:
\begin{itemize} \item[(i)] $\binom{\{g_1,g_2,g_3,g_4\}}{3} \subset E'$. \item[(ii)] For $x \in \{r_1,r_2,r_3,b_1,b_2\}$ we have $\{x,g_1,g_2\} \not \in E'$. \item[(iii)] For $i \in \{1,2,3\}$ and $j \in \{1,2\}$ we have $\{ r_i, b_j, g_1 \} \in E'$ if and only if $(i,j) \not \in \{ (1,1), (2,2) \}$. \item[(iv)] We have the symmetries \eqref{symmetry} (with $E_3$ replaced by $E$). \end{itemize}
Suppose that we can locate an inconsistent $9$-tuple \eqref{rgb}. From property (i) we see that $g_1,g_2,g_3,g_4$ are $G'$-green. From property (ii) we then conclude that $r_1,r_2,r_3,b_1,b_2$ are $G'$-nongreen. From property (iii) we conclude that for $i \in \{1,2,3\}$ and $j \in \{1,2\}$, $r_i$ $G'$-likes $b_j$ whenever $(i,j) \not \in \{ (1,1), (2,2)\}$. In particular, $b_1, b_2$ are $G'$-similar (thanks to $r_3$). From property (iii) again we also see that $r_1$ $G'$-dislikes $b_1$ and $r_2$ $G'$-dislikes $b_2$. Thus $r_1$ $G'$-prefers $b_2$ to $b_1$, and $r_2$ $G'$-prefers $b_1$ to $b_2$. On the other hand from property (iv) and Definition \ref{locmod} as in the previous section we see that $\mathcal{I}( \{ r_1, b_1, b_2\} \in E' ) = \mathcal{I}( \{ r_2, b_2, b_1\} \in E' )$. Thus either $b_1 >_{G',r_1} b_2$ and $b_2 >_{G',r_2} b_1$ are both true, or $b_2 >_{G',r_1} b_1$ and $b_1 >_{G',r_2} b_2$ are both true, and so $G'$ is not consistently orderable and thus does not obey $\mathcal{P}$, a contradiction. Thus to conclude the proof of Theorem \ref{negate}(c), it will suffice to show
\begin{lemma} Suppose $\varepsilon > 0$ is sufficiently small, and $M \geq N_*$ is sufficiently large (depending on $N,\delta,\sigma,A,N_*,\varepsilon$). Then with probability $1-O_{A,\delta,\varepsilon}(\sigma)$, there will exist at least one $9$-tuple \eqref{rgb} of inconsistent vertices in $V \backslash \phi(A)$, for all choices of morphism $\phi$ and modification rule $T$ for which \eqref{lowc} holds. \end{lemma}
\begin{proof}
Let $c > 0$ be a small number depending on $\varepsilon, A$ to be chosen later. Recall the $2$-uniform random graph $E_2$ on $[M]$ used to construct $G^{(0)}$. By arguing exactly as in the proof of Lemma \ref{triple}, we see (for $M$ large enough) that with probability $1-O(\sigma)$, we have the regularity property \eqref{regular} for all disjoint $X, Y \subset [M]$ with $|X|, |Y| \geq cM$. Let us condition on the event that this regularity property holds. We now freeze $E_2$, which in turn freezes $G^{(1)}$ and $E^{(1)}$.
As in the proof of Lemma \ref{triple}, we pay a factor of $O_{A}(M^{|A|})$ in all future probability upper bounds in order to freeze $\phi$ and $T$.
By construction, each $\{ v_1, v_2, v_3 \} \in \binom{V}{3}$ lies in $E \Delta E^{(1)}$ with an independent probability $\delta$. From Chernoff's inequality, we conclude that for each $v_1, v_2 \in V$, \begin{equation}\label{v3}
| \{ v_3 \in V \backslash \{v_1,v_2\}: \{v_1,v_2,v_3\} \in E \Delta E^{(1)} \} | \leq \delta^{1/2} M \end{equation} with probability at least $1 - \exp( - \Omega_{\delta}(M) )$. For technical reasons (related to the reason we did not condition on \eqref{lowcorrupt} for $j=3$ in the previous section), we will weaken \eqref{v3} to \begin{equation}\label{v3-weak}
| \{ v_3 \in V \backslash \{v_1,v_2\}: \{v_1,v_2,v_3\} \in (E \Delta E^{(1)}) \backslash \binom{[M] \backslash \phi(A)}{3} \} | \leq \delta^{1/2} M \end{equation} in order not to destroy the independence of the events $\{v_1,v_2,v_3\} \in E \Delta E^{(1)}$ when $v_1,v_2,v_3$ lie in $[M] \backslash \phi(A)$.
By the union bound, we thus see (if $M$ is sufficiently large) that with probability $1 - O( M^2 \exp( - \Omega_{\delta}(M) ) ) = 1 - O(\sigma)$, the assertion \eqref{v3-weak} holds for \emph{all} pairs $v_1,v_2 \in V$. We now condition on the event that this holds.
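For the reader's convenience, we record the standard Chernoff computation behind \eqref{v3} (our expansion, not part of the original argument): the left-hand side of \eqref{v3} is a sum of $2M-2$ independent indicator variables, each of mean $\delta$, so its expectation is at most $2\delta M \leq \frac{1}{2} \delta^{1/2} M$ once $\delta \leq 1/16$, and the multiplicative Chernoff bound then gives
$$ \mathbf{P}\left( | \{ v_3 \in V \backslash \{v_1,v_2\}: \{v_1,v_2,v_3\} \in E \Delta E^{(1)} \} | > \delta^{1/2} M \right) \leq \exp( - c \delta^{1/2} M ) = \exp( - \Omega_{\delta}(M) )$$
for some absolute constant $c > 0$.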
We now freeze the restriction $E \backslash \binom{[M] \backslash \phi(A)}{3}$ of $E$ to those edges which are not contained in $[M] \backslash \phi(A)$. Thus the only randomness remaining comes from the random variables $\mathcal{I}(e \in E^{(1)} \Delta E)$ for $e \in \binom{[M] \backslash \phi(A)}{3}$, which are jointly independent with probability $\delta$ each. Note (from Definition \ref{locmod}) that the quantity $\mathcal{I}(e \in E')$ for $e \in \binom{[M]}{3}$ is now deterministic unless $e \in \binom{[M] \backslash \phi(A)}{3}$, in which case it depends only on the quantity $\mathcal{I}(e \in E^{(1)} \Delta E)$ (as well as frozen data, of course).
We would like to condition on the event that \eqref{lowc} holds, but this would destroy the joint independence of the events $e \in E^{(1)} \Delta E$, which will be important later. So we shall be content to condition on the slightly weaker statement \begin{equation}\label{small-delta-weak}
\frac{1}{|\binom{V}{3}|} \left|(E \Delta E') \backslash \binom{[M] \backslash \phi(A)}{3}\right| \ll \varepsilon \end{equation} as this is a deterministic statement that does not depend on the truth value of any of the events $e \in E^{(1)} \Delta E$ for $e \in \binom{[M] \backslash \phi(A)}{3}$.
The next step is to select some good vertex sets to work with. From \eqref{small-delta-weak} we have
$$ \sum_{v_1 \in V \backslash \phi(A)} \left| \left\{ \{ v_2, v_3 \} \in \binom{V \backslash \{v_1\}}{2}: \{v_1,v_2,v_3 \} \in (E \Delta E') \backslash \binom{[M] \backslash \phi(A)}{3} \right\} \right| \ll \varepsilon M^3$$
and so (for $M$ large enough) by Markov's inequality we can find a subset $V' \subset V \backslash \phi(A)$ with $|V \backslash V'| \ll \varepsilon^{1/2} M$ such that \begin{equation}\label{vsting}
\left| \left\{ \{ v_2, v_3 \} \in \binom{V \backslash \{v_1\}}{2}: \{v_1,v_2,v_3 \} \in (E \Delta E') \backslash \binom{[M] \backslash \phi(A)}{3} \right\} \right| \ll \varepsilon^{1/2} M^2 \end{equation} for all $v_1 \in V'$.
Set \begin{align*} V_B &:= [M/2] \cap V'\\ V_R &:= ([M] \backslash [M/2]) \cap V'\\ V_G &:= ([2M] \backslash [M]) \cap V'. \end{align*}
In particular (for $\varepsilon$ small enough) we have $|V_B|, |V_R|, |V_G| \gg M$.
For $b \in V_B$ and $r \in V_R$, define \begin{equation}\label{eep}
f(r,b) := | \{ v \in V \backslash \{r,b\}: \{r,b,v\} \in (E \Delta E') \backslash \binom{[M] \backslash \phi(A) }{3} \}|, \end{equation} thus $0 \leq f(r,b) \ll M$. From \eqref{lowc} we observe that
$$\sum_{r \in V_R} \sum_{b \in V_B} f(r,b) \ll \varepsilon |V_R| |V_B| M.$$ Thus if we define \begin{equation}\label{es2-again} E^*_2 := \{ (r,b) \in V_R \times V_B: f(r,b) \geq \sqrt{\varepsilon} M \} \end{equation} then by Markov's inequality we have \begin{equation}\label{fiber}
|E^*_2| \ll \sqrt{\varepsilon} |V_R| |V_B|. \end{equation}
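To spell out the Markov step (a routine verification, included for the reader's convenience): each pair in $E^*_2$ contributes at least $\sqrt{\varepsilon} M$ to the double sum, so
$$ \sqrt{\varepsilon} M \, |E^*_2| \leq \sum_{r \in V_R} \sum_{b \in V_B} f(r,b) \ll \varepsilon |V_R| |V_B| M,$$
and dividing by $\sqrt{\varepsilon} M$ gives \eqref{fiber}.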
Let $\Omega_R := 2^{\binom{A}{2}}$ and $\Omega_B := 2^{\binom{A}{\leq 2}}$ be the power sets of $\binom{A}{2}$ and $\binom{A}{\leq 2} := \bigcup_{j \leq 2} \binom{A}{j}$ respectively. If $U_R \in \Omega_R$, define $V_{R,U_R}$ to be the set of all vertices $r \in V_R$ such that $$ U_R = \{ \{a,a'\} \in \binom{A}{2}: \{\phi(a),\phi(a'),r\} \in E_2 \}.$$ Similarly, for any $U_B \in \Omega_B$, define $V_{B,U_B}$ to be the set of all $b \in V_{B}$ such that $$ U_B = \{ \{ a \} \in \binom{A}{1}: b < \phi(a) \} \cup \{ \{a,a'\} \in \binom{A}{2}: \{\phi(a),\phi(a'),b\} \in E_2 \}.$$
The $V_{R,U_R}$ and $V_{B,U_B}$ partition $V_R$ and $V_B$ respectively. Since $|\Omega_R|, |\Omega_B| \ll_{A} 1$, we thus see from \eqref{fiber} and the pigeonhole principle that there exists $U_R \in \Omega_R$ and $U_B \in\Omega_B$ with \begin{equation}\label{verb-again}
|V_{R,U_R}|, |V_{B,U_B}| \gg_{A} M
\end{equation} and \begin{equation}\label{estar2}
|E^*_2 \cap (V_{R,U_R} \times V_{B,U_B})| \ll \sqrt{\varepsilon} |V_{B,U_B}| |V_{R,U_R}|. \end{equation} Henceforth we fix $U_B$ and $U_R$ so that \eqref{verb-again}, \eqref{estar2} hold.
To locate inconsistent $9$-tuples \eqref{rgb} we shall construct a nested sequence $\Sigma_0 \supset \ldots \supset \Sigma_7$ of candidate $9$-tuples as follows. We let $\Sigma_0$ be the collection of all $9$-tuples \eqref{rgb} such that $\{r_1,r_2,r_3\} \in \binom{V_{R,U_R}}{3}$, $\{b_1,b_2\} \in \binom{V_{B,U_B}}{2}$, and $\{g_1,g_2,g_3,g_4\} \in \binom{V_G}{4}$. Clearly we have $|\Sigma_0| \gg |V_{R,U_R}|^3 |V_{B,U_B}|^2 M^4$.
Let $\Sigma_1$ be the collection of all $9$-tuples \eqref{rgb} in $\Sigma_0$ such that for all $i \in \{1,2,3\}$ and $j \in \{1,2\}$, we have $\{ r_i, b_j \} \in E_2$ if and only if $(i,j) \not\in \{ (1,1), (2,2) \}$. Using \eqref{regular} and standard ``counting lemma'' arguments we see that if $c$ is sufficiently small (depending on $N'$), then $|\Sigma_1| \gg |V_{R,U_R}|^3 |V_{B,U_B}|^2 M^4$.
Let $\Sigma_2$ be the collection of all $9$-tuples \eqref{rgb} in $\Sigma_1$ such that $(r_i,b_j) \not \in E^*_2$ for all $i \in \{1,2,3\}$ and $j \in \{1,2\}$. From \eqref{estar2} we have $|\Sigma_1 \backslash \Sigma_2| \ll \sqrt{\varepsilon} |V_{R,U_R}|^3 |V_{B,U_B}|^2 M^4$. Thus if $\varepsilon$ is sufficiently small we have $|\Sigma_2| \gg |V_{R,U_R}|^3 |V_{B,U_B}|^2 M^4$.
Let $\Sigma_3$ be the collection of all $9$-tuples \eqref{rgb} in $\Sigma_2$ such that $\{r_i,b_j,g_k\} \not \in E \Delta E'$ for all $i \in \{1,2,3\}$, $j \in \{1,2\}$ and $k \in \{1,2,3,4\}$. From \eqref{eep}, \eqref{es2-again} we see that $|\Sigma_2 \backslash \Sigma_3| \ll \sqrt{\varepsilon} |V_{R,U_R}|^3 |V_{B,U_B}|^2 M^4$. Thus if $\varepsilon$ is sufficiently small we have $|\Sigma_3| \gg |V_{R,U_R}|^3 |V_{B,U_B}|^2 M^4$.
Let $\Sigma_4$ be the collection of all $9$-tuples \eqref{rgb} in $\Sigma_3$ such that $\{x, y, z \} \not \in E \Delta E'$ for all $x \in \{r_1,r_2,r_3,b_1,b_2\}$ and distinct $y,z \in \{g_1,g_2,g_3,g_4\}$. From \eqref{vsting} we see that $|\Sigma_3 \backslash \Sigma_4| \ll \sqrt{\varepsilon} |V_{R,U_R}|^3 |V_{B,U_B}|^2 M^4$. Thus if $\varepsilon$ is sufficiently small we have $|\Sigma_4| \gg |V_{R,U_R}|^3 |V_{B,U_B}|^2 M^4$.
Let $\Sigma_5$ be the collection of all $9$-tuples \eqref{rgb} in $\Sigma_4$ such that $\{x, y, z \} \not \in E \Delta E'$ for all $\{x,y,z\} \in \binom{\{g_1,g_2,g_3,g_4\}}{3}$. From \eqref{small-delta-weak} we observe that $|\Sigma_4 \backslash \Sigma_5| \ll \varepsilon |V_{R,U_R}|^3 |V_{B,U_B}|^2 M^4$. Thus if $\varepsilon$ is sufficiently small we have $|\Sigma_5| \gg |V_{R,U_R}|^3 |V_{B,U_B}|^2 M^4$. In particular, by \eqref{verb-again} we have $|\Sigma_5| \gg_{A} M^{9}$.
Let $\Sigma_6$ be the collection of all $9$-tuples \eqref{rgb} in $\Sigma_5$ such that $\{x,y,z\} \not \in E \Delta E^{(1)}$ for all \begin{equation}\label{xyz}
\{x,y,z\} \in \binom{\{ r_1, r_2, r_3, b_1, b_2, g_1,g_2,g_3,g_4 \} \cup \phi(A)}{3} \backslash \left(\binom{\phi(A)}{3} \cup \binom{\{r_1,r_2,b_1,b_2\}}{3} \right). \end{equation}
From \eqref{v3-weak} we see that $|\Sigma_5 \backslash \Sigma_6| \ll_{N'} \delta^{1/2} M^{9}$. Thus if $\delta$ is sufficiently small (depending on $N'$) then $|\Sigma_6| \gg_{A} M^{9}$.
Let $\Sigma_7$ be the collection of all $9$-tuples \eqref{rgb} in $\Sigma_6$ such that $$ \mathcal{I}( \{r_1,b_1,b_2\} \in E ) = \mathcal{I}( \{r_2,b_1,b_2\} \in E ) \hbox{ and } \mathcal{I}( \{b_1,r_1,r_2\} \in E ) = \mathcal{I}( \{b_2,r_1,r_2\} \in E ). $$
To estimate the size of $\Sigma_7$ we will need a slightly different type of argument from those used in the previous paragraphs, namely a probabilistic one. Let $\operatorname{Inj}([9],[2M])$ denote the space of all $9$-tuples \eqref{rgb} of distinct vertices. From the lower bound $|\Sigma_6| \gg_{A} M^{9}$ we see that a randomly selected $9$-tuple \eqref{rgb} in $\operatorname{Inj}([9],[2M])$ lies in $\Sigma_6$ with probability $\gg_{A} 1$.
Now observe from construction of $\Sigma_6$ and $E$ that the event that \eqref{rgb} lies in $\Sigma_6$ is independent\footnote{Note how it is important here that $\binom{\{r_1,r_2,b_1,b_2\}}{3}$ is excluded in \eqref{xyz}.} of the events $\{x,y,z\} \in E \Delta E^{(1)}$ for $\{x,y,z\} \in \binom{\{r_1,r_2,b_1,b_2\}}{3}$, which each occur with an independent probability of $\delta$. Thus, regardless of the truth values of $\{x,y,z\} \in E^{(1)}$ for $\{x,y,z\} \in \binom{\{r_1,r_2,b_1,b_2\}}{3}$, we see that if one conditions on the event \eqref{rgb} lies in $\Sigma_6$, then \eqref{rgb} will lie in $\Sigma_7$ with probability $\gg_{\delta} 1$. Undoing the conditioning on $\Sigma_6$, we see that a randomly chosen $9$-tuple \eqref{rgb} in $\operatorname{Inj}([9],[2M])$ lies in $\Sigma_7$ with probability $\gg_{\delta,A} 1$.
Let $m := \lfloor M^{0.1} \rfloor$. We pick $m$ $9$-tuples $t_1,\ldots,t_m \in \operatorname{Inj}([9],[2M])$ independently at random (and independently of $E_2$ and $E$). With probability $1 - O( M^{-0.8} )$, these tuples will be disjoint; we condition on this event. Now we make the crucial observation that the events $t_i \in \Sigma_7$ are jointly independent for $i=1,\ldots,m$. Indeed, in view of all the frozen data, the event that $t_i$ lies in $\Sigma_7$ depends only on the truth value of the events $\{x,y,z\} \in (E \Delta E^{(1)}) \cap \binom{[M] \backslash \phi(A)}{3}$, where $\{x,y,z\}$ lies in $t_i$, and the independence assertion follows. (It is for this reason that we have jealously guarded the joint independence of the edge events associated to $\binom{[M] \backslash \phi(A)}{3}$.)
Now for any $1 \leq i \leq m$, if we condition on $t_1,\ldots,t_{i-1}$ then each $t_i$ will lie in $\Sigma_7$ with probability $\gg_{\delta,N'} 1$ (the constraint that $t_1,\ldots,t_m$ are all disjoint only distorts this probability by $O(M^{-0.8})$, which is negligible if $M$ is large enough). Multiplying this together we see that with probability at least $1 - \exp( -\Omega_{\delta,N'}(M^{0.1}) )$, at least one of the $t_i$ will lie in $\Sigma_7$. Unfreezing $\phi$ and $T$, we conclude from the union bound that with probability $1 - O_{A,\delta}( M^{|A|} \exp( -\Omega_{\delta,A}(M^{0.1}) ) )$, we have $\Sigma_7$ non-empty for \emph{all} choices of $\phi(A)$ and $T$. In particular, this event occurs with probability $1 - O(\sigma)$ if $M$ is large enough.
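The amplification used in the last two paragraphs is the usual independent-trials computation, which we spell out for convenience (our expansion): if $k$ jointly independent events each occur with probability at least $c > 0$, then the probability that none of them occurs is at most
$$ (1-c)^k \leq \exp(-ck).$$
Here $k = \lfloor M^{0.1} \rfloor$ and $c = c(\delta, N') > 0$, so the resulting failure probability $\exp(-\Omega_{\delta,N'}(M^{0.1}))$ remains negligible even after paying the $O_{A}(M^{|A|})$ union-bound factor, provided $M$ is large enough.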
To conclude the lemma, it will suffice to show that every $9$-tuple in $\Sigma_7$ is inconsistent. Let \eqref{rgb} be a tuple in $\Sigma_7$. By definition of $\Sigma_0$, we have $g_1,g_2,g_3,g_4 \in V_G$, and thus by definition of $E^{(1)}$ $$ \binom{\{g_1,g_2,g_3,g_4\}}{3} \subset E^{(1)}.$$ From the definition of $\Sigma_6$ we thus have $$ \binom{\{g_1,g_2,g_3,g_4\}}{3} \subset E$$ and then by definition of $\Sigma_5$ we have $$ \binom{\{g_1,g_2,g_3,g_4\}}{3} \subset E'$$ which is part (i) of the definition of inconsistency.
Similarly, by definition of $\Sigma_0$ and $E^{(1)}$ we have $\{x,g_1,g_2\} \not \in E^{(1)}$ for all $x \in \{r_1,r_2,r_3,b_1,b_2\}$. By definition of $\Sigma_6$ we then have $\{x,g_1,g_2\} \not \in E$, and by definition of $\Sigma_4$ we have $\{x,g_1,g_2\} \not \in E'$. This is part (ii) of the definition of inconsistency.
From the definition of $\Sigma_0$, $g_1$ is green. From the definitions of $\Sigma_1$ and $E^{(1)}$ we then have for every $i \in \{1,2,3\}$ and $j \in \{1,2\}$ that $\{ r_i, b_j, g_1 \} \in E^{(1)}$ if and only if $(i,j) \not \in \{ (1,1), (2,2) \}$. By definition of $\Sigma_6$, the same statement holds with $E^{(1)}$ replaced by $E$, and by definition of $\Sigma_3$ the same statement holds with $E$ replaced by $E'$. This is part (iii) of the definition of inconsistency.
It remains to verify \eqref{symmetry} (with $E_3$ replaced by $E$). The first two symmetries follow from the definition of $\Sigma_7$. The last two symmetries follow from the definitions of $\Sigma_0$ and $V_{R,U_R}$, $V_{B,U_B}$. To verify the middle two symmetries, we see from the definition of $\Sigma_6$ that it suffices to show that $\mathcal{I}( \{r_1,b_1,\phi(a)\} \in E^{(1)} ) = \mathcal{I}( \{r_2,b_2,\phi(a)\} \in E^{(1)} )$ and $\mathcal{I}( \{r_1,b_2,\phi(a)\} \in E^{(1)} ) = \mathcal{I}( \{r_2,b_1,\phi(a)\} \in E^{(1)} )$ for all $a \in A$.
Fix $a$. There are several cases. If $\phi(a)$ is green, then the claim follows from the definitions of $E^{(1)}$ and $\Sigma_1$. If $\phi(a)$ is red, then by definition of $E^{(1)}$, none of the $\{r_j,b_k,\phi(a)\}$ lie in $E^{(1)}$, and the claim follows. Finally, suppose that $\phi(a)$ is blue. By definition of $V_{B,U_B}$ and $V_{R,U_R}$, we see that $\mathcal{I}( b_1 < \phi(a) ) = \mathcal{I}( b_2 < \phi(a) )$ and $\mathcal{I}( \{r_1,\phi(a)\} \in E_2 ) = \mathcal{I}( \{r_2,\phi(a)\} \in E_2 )$. Also, by definition of $\Sigma_1$ we have $\mathcal{I}( \{r_1,b_1\} \in E_2 ) = \mathcal{I}( \{r_2,b_2\} \in E_2 )$ and $\mathcal{I}( \{r_1,b_2\} \in E_2 ) = \mathcal{I}( \{r_2,b_1\} \in E_2 )$. The claim then follows from the definition of $E^{(1)}$. \end{proof}
This concludes the proof of Theorem \ref{negate}(c).
\begin{remark} One does not need the full strength of consistent orderability to define $\mathcal{P}$; it is enough that there do not exist $r,r',b,b'$ such that $b >_{G,r} b'$ and $b' >_{G,r'} b$. With this modification, the property $\mathcal{P}$ can now be expressed as a single first-order sentence\footnote{Equivalently, there exists a finite collection of ``forbidden'' hypergraphs which describe $\mathcal{P}$, in the sense that $G$ obeys $\mathcal{P}$ if and only if it contains no induced copy of any of the forbidden hypergraphs. In contrast, hereditary properties are associated to an \emph{at most countable} family of forbidden hypergraphs.} using only the universal quantifier $\forall$, which is a slightly stronger statement than saying that $\mathcal{P}$ is hereditary. This gives a slight strengthening to Theorem \ref{negate}(c). \end{remark}
\section{Proofs of the positive results}\label{posi}
We now begin the proofs of the positive results. Except in side remarks and examples, the material here is independent of that in Section \ref{negchap}.
\subsection{An infinitary setting: exchangeable random hypergraphs and their structure}
In order to prove our new positive results, it will be helpful to recast the graphs and hypergraphs that we are studying into a more infinitary form (although the actual arguments will still be structured much as in the finitary presentations in Alon and Shapira~\cite{AloSha} and elsewhere). The formalism we will use is that of `exchangeable random hypergraphs', which have already appeared in the study of single hypergraph removal lemmas in~\cite{Tao3} and whose structure is examined in more detail in~\cite{Aus1}. In addition to providing a reasonably clean language for handling continuous graphs, these objects admit their own versions of the theorems we shall prove: in their statements, the existence of an $\varepsilon$-modification of a given graph or hypergraph to another one satisfying a certain property is replaced by the existence of a near-diagonal joining of a given exchangeable random graph or hypergraph to another one that satisfies the relevant property almost surely.
The infinitary setting offers several advantages. Firstly, it conceals from view many quantitative parameters such as $\varepsilon$ and $N$ which would otherwise have to be managed directly by hand; the process of taking a limit sends most (though not all) of these parameters to zero or infinity, and the remaining parameters often just need to be controlled qualitatively (e.g. knowing that they are finite) rather than quantitatively (i.e. with an explicit bound). Secondly, it allows one to use the standard tools and intuition from basic infinitary theories, most notably topology, measure theory, and probability theory. For instance, the well-known fact that measurable functions can be approximated by continuous ones will form a partial substitute for the Szemer\'edi regularity lemma.
The purpose of this section is to review the relevant theory from \cite{Aus1} which we will need here. To begin with we shall work with undirected graphs, and then discuss the (minor) modifications needed to handle directed graphs later in this section.
\subsubsection{The category of sub-Cantor spaces}
Our infinitary analysis will take place in the category of \emph{sub-Cantor spaces}, which we now pause to define.
\begin{dfn}[Sub-Cantor spaces]\label{subcantor} A \emph{sub-Cantor space} is a topological space $Z$ which is homeomorphic to a compact subset of the standard Cantor space $\{0,1\}^\mathbf{N}$. We always endow sub-Cantor spaces with their Borel $\sigma$-algebra generated by the open sets (or compact sets). We say that a sub-Cantor space is \emph{trivial} or a \emph{point} if $Z$ is a singleton set, and write $Z = \operatorname{pt}$ in this case. \end{dfn}
\begin{examples} Any finite set is a sub-Cantor space, a closed subspace of a sub-Cantor space is again a sub-Cantor space, and any at most countable product of sub-Cantor spaces is again a sub-Cantor space. In particular, $K^{(V)}$ is a sub-Cantor space for any finite palette $K$ and any vertex set $V$. \end{examples}
\begin{remark}\label{borel} By a theorem of Borel, a space is a sub-Cantor space if and only if it is totally disconnected, compact, and metrisable. However, we will not need that characterisation here. We also make the useful observation that the topology of a sub-Cantor space can be generated from a countable algebra of clopen sets, as this property can be easily verified for the Cantor space $\{0,1\}^\mathbf{N}$ and is preserved under passage to compact subspaces. \end{remark}
We will view the class of sub-Cantor spaces as a category, where the morphisms are the \emph{probability kernels} $P: X \rightsquigarrow Y$ between sub-Cantor spaces $X,Y$; see Appendix \ref{prob} for a definition of probability kernels and their relevant properties. (This is distinct from the category of vertex sets, defined in Definition \ref{vertset}.) Informally, one can think of a probability kernel as a stochastic analogue of a function from $X$ to $Y$, mapping points in $X$ to probability distributions on $Y$ rather than to deterministic points. We distinguish several special types of probability kernels between sub-Cantor spaces:
\begin{itemize} \item A probability kernel $P: X \rightsquigarrow Y$ is \emph{deterministic} if we have $P(x) = \delta_{\phi(x)}$ for all $x \in X$ and some measurable $\phi: X \to Y$; \item A probability kernel $P: X \rightsquigarrow Y$ is \emph{deterministically continuous} if we have $P(x) = \delta_{\phi(x)}$ for all $x \in X$ and some continuous $\phi: X \to Y$; \item A probability kernel $P: X \rightsquigarrow Y$ is \emph{weakly continuous} if the function $x \mapsto \int_Y f(y)\ P(x,dy)$ is continuous for every continuous function $f: Y \to \mathbf{R}$. \end{itemize}
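Over finite spaces the kernel formalism can be made completely concrete. In the toy Python model below (ours, not from the text; a distribution is a dict from outcomes to probabilities), deterministic kernels arise as a special case, and composition is the Chapman--Kolmogorov sum:

```python
def deterministic(phi):
    """The kernel x ~> delta_{phi(x)} induced by a map phi."""
    return lambda x: {phi(x): 1.0}

def compose(P, Q):
    """Composition of kernels Q: X ~> Y and P: Y ~> Z, giving X ~> Z."""
    def R(x):
        out = {}
        for y, py in Q(x).items():
            for z, pz in P(y).items():
                out[z] = out.get(z, 0.0) + py * pz  # Chapman-Kolmogorov sum
        return out
    return R
```

Composing two deterministic kernels again yields a deterministic kernel, mirroring the fact that deterministic kernels behave like ordinary (measurable) maps.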
\begin{remark} Recall from Remark \ref{borel} that a sub-Cantor space has a countable base of clopen sets. Because of this, one can easily verify that a probability kernel is deterministically continuous if and only if it is both deterministic and weakly continuous. As we will show later (see Proposition \ref{tprop} and Definition \ref{infitest}), the concept of weak continuity will correspond to testability with one-sided error, while deterministic continuity will correspond to strong local repairability. Roughly speaking, weak continuity is the minimal amount of regularity necessary for one to be able to transfer infinitary results back to the finitary setting, while deterministic continuity, in view of the sub-Cantor structure, means that the relevant continuous maps $\phi: X \to Y$ between sub-Cantor spaces ``depend on only finitely many coordinates'' and will thus define a local modification rule. \end{remark}
Rather than work on an individual sub-Cantor space, it will be useful to conduct our analysis on \emph{families} of sub-Cantor spaces indexed by vertex sets, with various morphisms between these spaces. The most convenient way to handle these families is via the notion of a \emph{contravariant functor} from category theory.
\begin{definition}[Contravariant functor] A \emph{contravariant functor} $Z$ is an assignment of a sub-Cantor space $Z^{(V)}$ to every vertex set $V$, together with a probability kernel $Z^{(\phi)}: Z^{(V)} \to Z^{(W)}$ for every morphism\footnote{Recall that in the category of vertex sets (as opposed to that of sub-Cantor spaces), the morphisms are just the (deterministic) injective maps between vertex sets.} $\phi \in \operatorname{Inj}(W,V)$ between vertex sets, such that $Z^{(\operatorname{id}_V)}: Z^{(V)} \to Z^{(V)}$ is the identity probability kernel on $Z^{(V)}$ for every vertex set $V$, and such that $Z^{(\psi \circ \phi)} = Z^{(\phi)} \circ Z^{(\psi)}$ for any morphisms $\phi \in \operatorname{Inj}(W,V)$ and $\psi \in \operatorname{Inj}(V,U)$ between vertex sets. We say that the contravariant functor is \emph{deterministically continuous} (resp. \emph{weakly continuous}) if all the probability kernels $Z^{(\phi)}$ are deterministically continuous (resp. weakly continuous). If $z \in Z^{(V)}$ and $W \subset V$, we write $z\downharpoonright_W \in Z^{(W)}$ for $Z^{(\iota_{W \subset V})}(z)$, and refer to $z\downharpoonright_W$ as the \emph{restriction} of $z$ to $W$. Similarly, if $\mu \in \operatorname{Pr}(Z^{(V)})$ and $W \subset V$, we write $\mu\downharpoonright_W \in \operatorname{Pr}(Z^{(W)})$ for the projected measure $Z^{(\iota_{W \subset V})} \circ \mu$.
If $Z$ is a contravariant functor and $S$ is a vertex set, we define the \emph{shift} $Z^{\uplus S}$ to be the contravariant functor given by requiring that $$ (Z^{\uplus S})^{(V)} := Z^{(V \uplus S)}$$ for all vertex sets $V$ and $$ (Z^{\uplus S})^{(\phi)} := Z^{(\phi \oplus \operatorname{id}_S)}$$ for all morphisms $\phi$. One easily verifies that $Z^{\uplus S}$ is a contravariant functor, which is deterministically continuous (resp. weakly continuous) if $Z$ is. \end{definition}
\begin{remark} Intuitively, a contravariant functor is a recipe that assigns to every vertex set $V$ a space of objects $Z^{(V)}$, on which one can meaningfully perform operations such as relabeling $V$, or restricting $V$ to a subset $W$. A typical example of such a space $Z^{(V)}$ would be $K^{(V)}$, the space of $K$-coloured hypergraphs on $V$. Note however that we allow the relabeling and restriction operations to be stochastic rather than deterministic. \end{remark}
In this paper we will only be dealing with either deterministically continuous or weakly continuous contravariant functors. One such functor is the \emph{trivial functor} $\operatorname{pt}$, which maps every vertex set to a point (and every morphism to the unique probability kernel between two points). More generally, an important source of such functors for us will come from \emph{sub-Cantor palettes}.
\begin{definition}[Sub-Cantor palettes] A \emph{sub-Cantor palette} is a tuple $Z = (Z_j)_{j=0}^\infty$ of sub-Cantor spaces, all but finitely many of which are trivial. We define the \emph{order} of $Z$ to be the largest $k$ for which $Z_k$ is non-trivial, or $-1$ if all components $Z_j$ are trivial. We identify $Z$ with a deterministically continuous contravariant functor by defining $$ Z^{(V)} := \prod_{j=0}^\infty Z_j^{\operatorname{Inj}([j],V)}$$ for all vertex sets $V$, and defining $Z^{(\phi)}: Z^{(V)} \to Z^{(W)}$ for all morphisms $\phi \in \operatorname{Inj}(W,V)$ by the formula $$ Z^{(\phi)}( ( ( z_j(\psi) )_{\psi \in \operatorname{Inj}([j],V)} )_{j=0}^\infty ) = ( ( z_j(\phi \circ \psi) )_{\psi \in \operatorname{Inj}([j],W)} )_{j=0}^\infty $$ for all $( ( z_j(\psi) )_{\psi \in \operatorname{Inj}([j],V)} )_{j=0}^\infty \in Z^{(V)}$. One easily verifies that $Z$ is indeed a deterministically continuous contravariant functor.
If $j$ is an integer, we write $Z_{\leq j}$ (resp. $Z_{<j}$, $Z_{\geq j}$, $Z_{>j}$, $Z_{=j}$) for the sub-Cantor palette whose $i^{{\operatorname{th}}}$ component is $Z_i$ when $i \leq j$ (resp. $i<j$, $i\geq j$, $i>j$, $i=j$) and a point otherwise. \end{definition}
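As a quick sanity check on the definition (our example, not from the text): let $Z$ be the palette of order $3$ with $Z_3 = \{0,1\}$ and all other components trivial. Then $Z^{(V)} \cong \{0,1\}^{\operatorname{Inj}([3],V)}$, so a point of $Z^{(V)}$ assigns a bit to each ordered triple of distinct vertices of $V$, i.e.\ it is a directed $3$-uniform hypergraph on $V$, and the kernel $Z^{(\phi)}$ simply pulls these assignments back along $\phi$.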
\begin{example} The finite palettes in Definition \ref{hyperdef} are sub-Cantor palettes. \end{example}
\begin{example}[Sub-Cantor spaces as contravariant functors]\label{trivial} A sub-Cantor space $X$ can be viewed as a sub-Cantor palette of order $0$, and can therefore be viewed as a contravariant functor, in which $X^{(V)} = X$ and $X^{(\phi)} = \operatorname{id}_X$ for all vertex sets $V$ and morphisms $\phi$. \end{example}
\begin{example}[Hypergraph properties as contravariant functors]\label{propfunc} If $K$ is a finite palette and $\mathcal{P}$ is a hereditary $K$-property, one easily verifies for every vertex set $V$ that $\mathcal{P}^{(V)}$ is a closed subspace of $K^{(V)}$ and is therefore itself a sub-Cantor space. From this and the hereditary nature of $\mathcal{P}$ we see that $\mathcal{P}$ is in fact a contravariant functor. \end{example}
We will also need to deal with families of probability kernels between one family of sub-Cantor spaces and another. The most convenient way to handle such a concept is using the notion of a \emph{natural transformation} from category theory.
\begin{definition}[Natural transformation]\label{natdef} A \emph{natural transformation} $N: Z \to Y$ between two contravariant functors $Z, Y$ is an assignment of a probability kernel $N^{(V)}: Z^{(V)} \rightsquigarrow Y^{(V)}$ for every vertex set $V$, such that the diagram \begin{equation}\label{natprop} \begin{CD} Z^{(V)} @>{N^{(V)}}>> Y^{(V)} \\ @VV{Z^{(\phi)}}V @VV{Y^{(\phi)}}V \\ Z^{(W)} @>{N^{(W)}}>> Y^{(W)} \end{CD} \end{equation} commutes for every morphism $\phi \in \operatorname{Inj}(W,V)$ between vertex sets (the horizontal arrows here being probability kernels rather than continuous maps). We say that the natural transformation is \emph{deterministically continuous} (resp. \emph{weakly continuous}) if all the probability kernels $N^{(V)}$ are deterministically continuous (resp. weakly continuous).
An \emph{exchangeable $Z$-recipe} on a contravariant functor $Z$ is a natural transformation $\mu: \operatorname{pt} \to Z$ from the trivial functor to $Z$, or equivalently an assignment of a probability measure $\mu^{(V)} \in \operatorname{Pr}(Z^{(V)})$ to every vertex set $V$, such that one has the exchangeability property \begin{equation}\label{zphi} Z^{(\phi)} \circ \mu^{(V)} = \mu^{(W)} \end{equation} for all morphisms $\phi \in \operatorname{Inj}(W,V)$ between two vertex sets. If $S$ is a vertex set, we define the exchangeable $Z^{\uplus S}$-recipe $\mu^{\uplus S}: \operatorname{pt} \to Z^{\uplus S}$ by the formula $(\mu^{\uplus S})^{(V)} := \mu^{(V \uplus S)}$. \end{definition}
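The simplest nontrivial exchangeable recipe is the Erd\H{o}s--R\'enyi one. The Python sketch below (ours; vertex sets are Python sets, edges frozensets, and \texttt{p} is the edge probability) samples it; the law is invariant under relabeling, and restricting a sample on $V$ to $W \subset V$ keeps exactly the edges inside $W$, which has the law of a sample on $W$: this is the content of \eqref{zphi} in this special case.

```python
import random
from itertools import combinations

def sample_gnp(V, p, rng):
    """Sample an Erdos-Renyi graph on V: each pair is an edge with probability p."""
    return {frozenset(e) for e in combinations(sorted(V), 2) if rng.random() < p}

def restrict(E, W):
    """Restriction of an edge set to the vertex subset W."""
    return {e for e in E if e <= W}
```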
\begin{remark} The condition \eqref{natprop} can be divided into two sub-conditions, namely \emph{equivariance} (or \emph{exchangeability}) \begin{equation}\label{exchange}
Y^{(\phi)} \circ N^{(V)} = N^{(V)} \circ Z^{(\phi)} \hbox{ for all } \phi \in \operatorname{Inj}(V,V)
\end{equation} and \emph{locality} \begin{equation}\label{local}
N^{(V)}(z)\downharpoonright_W = N^{(W)}(z\downharpoonright_W) \hbox{ for all } W \subset V \hbox{ and } z \in Z^{(V)}. \end{equation} Similarly, if $\mu$ is an exchangeable $Z$-recipe, then $\mu^{(V)}$ is an $\operatorname{Inj}(V,V)$-invariant measure on $Z^{(V)}$, and the pushforward of $\mu^{(V)}$ under the restriction map to a subset $W$ of $V$ is the measure $\mu^{(W)}$.
Intuitively, a natural transformation $N: Z \to Y$ is a rule (which may be either deterministic or stochastic) for converting $Z$-type objects on a given vertex set $V$ to $Y$-type objects on the same vertex set, in a manner which is both local (in the sense of \eqref{local}) and exchangeable (in the sense of \eqref{exchange}). We will shortly give a number of examples of natural transformations, such as recolouring maps and local modification rules.
If $Z$ is a palette, one can view an exchangeable $Z$-recipe as a means for constructing a random $Z$-coloured hypergraph on any vertex set $V$, which is exchangeable with respect to relabeling of $V$, and also respects restriction from one vertex set to a subset. \end{remark}
\begin{remark} For future reference we observe the obvious fact that the composition $N_1 \circ N_2: Z \to X$ of two natural transformations $N_1: Y \to X$ and $N_2: Z \to Y$, defined by $(N_1 \circ N_2)^{(V)} := N_1^{(V)} \circ N_2^{(V)}$, is again a natural transformation. \end{remark}
Many important combinatorial operations on hypergraphs can be interpreted as natural transformations\footnote{Informally, any operation on hypergraphs which is both local (the effect of an operation on a subset $W$ of the vertex set $V$ depends only on the restriction of the hypergraph to $W$) and exchangeable (the operation respects hypergraph isomorphism) will have an interpretation as a natural transformation.}. We list some examples of relevance to our applications here.
\begin{definition}[Colouring as a natural transformation]\label{colour} Let $Z = (Z_j)_{j=0}^\infty$ be a sub-Cantor palette. A \emph{colouring} $\alpha: Z \to A$ of $Z$ is a tuple $\alpha = (\alpha_j)_{j=0}^\infty$ of \emph{continuous}\footnote{Informally, this means that the colour assigned to any point in $Z$ depends only on ``finitely many coordinates'' of that point.} maps $\alpha_j: Z_j \to A_j$, where $A = (A_j)_{j=0}^\infty$ is a finite palette. Each individual map $\alpha_j$ can be interpreted as a deterministically continuous natural transformation $\overline{\alpha_j}: Z_j \to A_j$ defined by the formula $$ \overline{\alpha_j}^{(V)}( (z(\phi))_{\phi \in \operatorname{Inj}([j],V)} ) := ( \alpha_j(z(\phi)))_{\phi \in \operatorname{Inj}([j],V)}$$ and then the entire colouring can be viewed as a deterministically continuous natural transformation $\overline{\alpha}: Z \to A$ by $$ \overline{\alpha}^{(V)}( (z_j)_{j=0}^\infty ) := ( \overline{\alpha_j}^{(V)}(z_j) )_{j=0}^\infty.$$ One easily verifies that $\overline{\alpha_j}$ and $\overline{\alpha}$ are indeed deterministically continuous natural transformations. We say that a colouring $\alpha: Z \to A$ \emph{refines} or \emph{is finer than} another $\kappa: Z \to K$ if we have $\kappa = \sigma \circ \alpha$ for some colouring $\sigma: A \to K$. \end{definition}
\begin{example}[Probability measures as exchangeable recipes]\label{trivial-2} If $X$ is a sub-Cantor space (which we can view as a palette of order 0 and thus as a contravariant functor, by Example \ref{trivial}), then an exchangeable $X$-recipe $\mu$ is nothing more than a probability measure $\mu \in \operatorname{Pr}(X)$ on $X$. \end{example}
\begin{definition}[Sampling as an exchangeable recipe]\label{sampling} Let $K = (K_j)_{j=0}^k$ be a finite palette, and let $G = (V,\mathcal{B},\nu,(G_j)_{j=0}^k)$ be a continuous $K$-coloured hypergraph. For any vertex set $S$, the sampling map $\overline{G}^{(S)}: V^S \to K^{(S)}$ is a measurable map, and $\overline{\nu}^{(S)} := \nu^S$ is a probability measure on $V^S$. Thus the pushforward measure $\overline{G}^{(S)} \circ \overline{\nu}^{(S)}$ is a probability measure on $K^{(S)}$, which can be viewed as a probability kernel from $\operatorname{pt}$ to $K^{(S)}$. We can then define the exchangeable $K$-recipe $\overline{G} \circ \overline{\nu}: \operatorname{pt} \to K$ by letting $(\overline{G} \circ \overline{\nu})^{(S)} := \overline{G}^{(S)} \circ \nu^S$; one easily verifies that this is indeed an exchangeable $K$-recipe. (If $V$ was a sub-Cantor space, and thus identifiable with a sub-Cantor palette of order $1$, one could interpret $\overline{\nu}: \operatorname{pt} \to V$ as an exchangeable $V$-recipe, and $\overline{G}: V \to K$ as a deterministic natural transformation; however, we will not need to adopt this perspective here.) \end{definition}
\begin{example}[Inclusion as a natural transformation] If $K$ is a finite palette and $\mathcal{P}$ is a hereditary $K$-property, then the inclusion natural transformation $\iota: \mathcal{P} \to K$ is a deterministically continuous natural transformation. \end{example}
\begin{example}[Local modification rule as natural transformation]\label{lmr-nat} A local modification rule $T = (T,A)$ on a finite palette $K$ can be viewed as a deterministically continuous natural transformation $\overline{T}: K^{\uplus A} \to K$, with the maps $\overline{T}^{(V)}: K^{(A \uplus V)} \to K^{(V)}$ given by either Definition \ref{concmod} or Definition \ref{locmod}; the locality condition \eqref{local} reflects the fact that the colour assigned to an edge $\phi \in \operatorname{Inj}([j],V)$ by such a rule only depends on the restriction of the original graph to $A \cup \phi([j])$. If $\mathcal{P}$ is a hereditary $K$-property, then $T$ entails $\mathcal{P}$ if and only if the associated natural transformation $\overline{T}$ factors through the inclusion natural transformation $\iota: \mathcal{P} \to K$. \end{example}
\begin{definition}[Direct sum of natural transformations] If $Y_1$ and $Y_2$ are contravariant functors, we define the \emph{Cartesian product} $Y_1 \times Y_2$ to be the contravariant functor defined by $(Y_1 \times Y_2)^{(V)} :=Y_1^{(V)} \times Y_2^{(V)}$ for all vertex sets $V$, and $(Y_1 \times Y_2)^{(\phi)}(y_1,y_2) := (Y_1^{(\phi)}(y_1), Y_2^{(\phi)}(y_2))$ for all morphisms $\phi \in \operatorname{Inj}(W,V)$ and points $y_1 \in Y_1^{(V)}$, $y_2 \in Y_2^{(V)}$; one easily verifies that $Y_1 \times Y_2$ is indeed a contravariant functor. If $N_1: Z_1 \to Y_1$ and $N_2: Z_2 \to Y_2$ are natural transformations, we define the \emph{direct sum} $N_1 \oplus N_2: Z_1 \times Z_2 \to Y_1 \times Y_2$ to be the natural transformation defined by $(N_1 \oplus N_2)^{(V)}(z_1,z_2) = (N_1^{(V)}(z_1), N_2^{(V)}(z_2))$ for all vertex sets $V$ and points $z_1 \in Z_1^{(V)}$ and $z_2 \in Z_2^{(V)}$; one easily verifies that $N_1 \oplus N_2$ is indeed a natural transformation. \end{definition}
\begin{example} If $Z = (Z_j)_{j=0}^k$ is a sub-Cantor palette, then we have $Z = Z_{=0} \times \ldots \times Z_{=k}$ as contravariant functors. If $\alpha = (\alpha_j)_{j=0}^k: Z \to A$ is a colouring, then we have $\overline{\alpha} = \overline{\alpha_{0}} \oplus \ldots \oplus \overline{\alpha_{k}}$. \end{example}
We now turn to an important weak compactness property of recipes, which in fact is the main reason why we have set up all this infinitary machinery in the first place.
\begin{definition}[Vague convergence of recipes] Let $Z$ be a sub-Cantor palette, let $\mu_n: \operatorname{pt} \to Z$ be a sequence of exchangeable $Z$-recipes, and let $\mu: \operatorname{pt} \to Z$ be another exchangeable $Z$-recipe. We say that $\mu_n$ \emph{converges vaguely} to $\mu$ if $\mu_n^{(V)}$ converges vaguely to $\mu^{(V)}$ for every vertex set $V$ (see Appendix \ref{prob} for a definition of vague convergence of measures). \end{definition}
\begin{lemma}[Vague sequential compactness of recipes]\label{compact} Let $Z$ be a sub-Cantor palette, and let $\mu_n: \operatorname{pt} \to Z$ be a sequence of exchangeable $Z$-recipes. Then there exists a subsequence $\mu_{n_j}: \operatorname{pt} \to Z$ which converges vaguely to another exchangeable $Z$-recipe $\mu: \operatorname{pt} \to Z$. \end{lemma}
\begin{proof} Let $S$ be a countably infinite vertex set. Then by Lemma \ref{seq}, we can find a subsequence $\mu_{n_j}: \operatorname{pt} \to Z$ such that the probability measures $\mu_{n_j}^{(S)} \in \operatorname{Pr}(Z^{(S)})$ converge vaguely to a measure $\mu^{(S)} \in \operatorname{Pr}(Z^{(S)})$. Observe from \eqref{zphi} that $Z^{(\phi)} \circ \mu_{n_j}^{(S)} = \mu_{n_j}^{(S)}$ for all $\phi \in \operatorname{Inj}(S,S)$. Since $Z^{(\phi)}$ is continuous, we can use vague convergence and conclude that \begin{equation}\label{zamu} Z^{(\phi)} \circ \mu^{(S)} = \mu^{(S)}. \end{equation} We can then define the exchangeable $Z$-recipe $\mu: \operatorname{pt} \to Z$ by defining $\mu^{(V)} := Z^{(\phi)} \circ \mu^{(S)}$ for any vertex set $V$ and any morphism $\phi \in \operatorname{Inj}(V,S)$; one easily verifies from \eqref{zamu} that $\mu$ is well-defined and is an exchangeable $Z$-recipe. Also, as $\mu_{n_j}^{(S)}$ converges vaguely to $\mu^{(S)}$, one can see (by pulling back by an arbitrary morphism $\phi \in \operatorname{Inj}(V,S)$) that $\mu_{n_j}^{(V)}$ converges vaguely to $\mu^{(V)}$ for all vertex sets $V$. The claim follows. \end{proof}
\subsubsection{A structure theorem for exchangeable random hypergraphs}
In the infinitary framework, graphs and hypergraphs will be modeled by exchangeable recipes, via the sampling operation in Definition \ref{sampling}. In order to use this formalism, we will need a classification of all the possible exchangeable recipes that one could associate with a given palette $Z$. Such a classification is analogous to the Szemer\'edi and hypergraph regularity lemmas in the finitary setting, or to the description of `limit objects' of certain sequences of finite graphs or hypergraphs in terms of `graphons' and `hypergraphons' in the works of Lov\'asz and Szegedy~\cite{LovSze2} and Elek and Szegedy~\cite{EleSze}. (The $k=1$ version of this classification is essentially de Finetti's theorem, a foundational result in the study of exchangeable probability measures.)
In fact, the classification that we need has been available in the probabilistic literature for quite some time, appearing first in the study of `exchangeable arrays of random variables' in the work of Hoover~\cite{Hoo1,Hoo2}, Aldous~\cite{Ald1,Ald2,Ald3} and Kallenberg~\cite{Kal}. Their formalism is slightly removed from the more combinatorial set-up and demands of the present paper, and so we refer the reader to~\cite{Aus1} for a description of the relationship between them and versions of these results suited to our present purposes.
Let us first give some illustrative examples of exchangeable $Z$-recipes that provide simple instances of the general result to follow.
\begin{example}[Random vertex colouring]\label{rvc} Let $Z = (Z_0,Z_1)$ be a palette of order $1$, let $P_0 \in \operatorname{Pr}(Z_0)$ be a probability measure (and thus identifiable with an exchangeable $Z_{\leq 0}$-recipe $P_0: \operatorname{pt} \to Z_{\leq 0}$), and let $Q_1: Z_0 \rightsquigarrow Z_1$ be a probability kernel. If we then define the natural transformation $P_1: Z_{\leq 0} \to Z$ by the formula $P_1^{(V)}(z) := \delta_z \times Q_1(z)^V$ for all vertex sets $V$ and $z \in Z_0$, where we identify $Z^{(V)}$ with $Z_0 \times Z_1^V$, then $\mu := P_1 \circ P_0$ is an exchangeable $Z$-recipe. This recipe colours a given vertex set $V$ by first assigning a colour $z \in Z_0$ at random with law $P_0$ to the empty set, and then assigning a colour in $Z_1$ to each vertex independently at random with law $Q_1(z)$.
A classical theorem of de Finetti asserts (in this language) that if $Z_1$ is a sub-Cantor space and $\mu_{=1}$ is an exchangeable $Z_{=1}$-recipe, then there exist $Z_0, P_0, Q_1, \mu$ as above such that $\mu_{=1} = \pi \circ \mu$, where $\pi: Z \to Z_{=1}$ is the projection map. This theorem gives a satisfactory classification of exchangeable recipes on palettes of order $1$, and the later work of Hoover, Aldous and Kallenberg was motivated by an effort to generalise this special case. \end{example}
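The de Finetti phenomenon is easy to see by direct computation. The following Python sketch is a toy instance of Example \ref{rvc} only: the two-point hidden state space $Z_0$ and the biases are our own illustrative choices, not objects from the text. It builds the exact mixture law $\mu^{(V)}$ and checks that it is exchangeable yet not a product measure.

```python
import itertools
import math

# Toy instance of Example (rvc) / de Finetti: a hidden state z in Z_0
# fixes a coin bias, and vertices are coloured i.i.d. given z.  The
# two-point state space and the biases are illustrative stand-ins.
P0 = {'lo': 0.5, 'hi': 0.5}     # law P_0 on Z_0
Q1 = {'lo': 0.1, 'hi': 0.9}     # Q_1(z) = P(colour 1 | hidden state z)

def mu(n):
    """Exact law mu^(V) on {0,1}^V for V = {0,...,n-1}, with the hidden
    state marginalised out: exchangeable, but not a product measure."""
    law = {}
    for cols in itertools.product([0, 1], repeat=n):
        law[cols] = sum(pz * math.prod(Q1[z] if c else 1 - Q1[z] for c in cols)
                        for z, pz in P0.items())
    return law

law = mu(3)
# exchangeable: invariant under relabelling the vertices
assert abs(law[(1, 0, 0)] - law[(0, 0, 1)]) < 1e-12
# not independent: colours are positively correlated through the hidden state
p1 = sum(pr for cols, pr in law.items() if cols[0] == 1)
p11 = sum(pr for cols, pr in law.items() if cols[0] == 1 and cols[1] == 1)
assert p11 > p1 * p1
```

Conditioned on the hidden state the colours are independent; marginalising it out leaves exchangeability but destroys independence, which is exactly the content of the de Finetti decomposition.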
\begin{example}[Erd\H{o}s-R\'enyi hypergraphs]\label{eph} Let $Z = \{0,1\}_k$ for some $k \geq 1$, and let $0 < p < 1$. Then we can define the exchangeable $Z$-recipe $\mu: \operatorname{pt} \to Z$ by setting $\mu^{(V)} = \prod_{e \in \binom{V}{k}} \mu_{p,e}$ for all vertex sets $V$, where we identify $\{0,1\}_k^{(V)}$ with $\prod_{e \in \binom{V}{k}} \{0,1\}_k^{(e)}$, and $\mu_{p,e} \in \operatorname{Pr}( \{0,1\}_k^{(e)} )$ is the law of the random hypergraph of order $k$ on $e$ which is complete with probability $p$ and empty with probability $1-p$; thus $\mu^{(V)}$ is the law of a random undirected hypergraph of Erd\H{o}s-R\'enyi type. \end{example}
\begin{example}[Random complete bipartite graph]\label{rcbg} Let $Z = (\operatorname{pt},\{0,1\},\{0,1\})$, and let $Q_1 \in \operatorname{Pr}(\{0,1\})$ be the uniform measure on $\{0,1\}$. From Example \ref{rvc}, $Q_1$ induces an exchangeable $Z_{\leq 1}$-recipe $P_1: \operatorname{pt} \to Z_{\leq 1}$. We also define a natural transformation $P_2: Z_{\leq 1} \to Z$ by the formula $P_2^{(V)}(z) := \delta_z \times \prod_{e \in \binom{V}{2}} Q_2^{(e)}(z\downharpoonright_e)$ for all vertex sets $V$, where we identify $Z^{(V)}$ with $Z_{\leq 1}^{(V)} \times \prod_{e \in \binom{V}{2}} Z_{=2}^{(e)}$, and for any $e = \{v,w\}$ and $z = (z_v,z_w) \in Z_{\leq 1}^{(e)}$, $Q_2^{(e)}(z)$ is the law of the random graph on $e$ which is complete when $z_v \neq z_w$ and empty otherwise. The recipe $\mu := P_2 \circ P_1$ is then an exchangeable $Z$-recipe, which describes a random complete bipartite graph on any given vertex set $V$. \end{example}
\begin{example}[Erd\H{o}s-R\'enyi graphs with random density] Let $Z = (Z_0,\operatorname{pt},\{0,1\})$, let $P_0 \in \operatorname{Pr}(Z_0)$ be a probability measure, and let $p: Z_0 \to [0,1]$ be a measurable function. We can view $P_0$ as a natural transformation $P_0: \operatorname{pt} \to Z_{\leq 0}$; since $Z_1 = \operatorname{pt}$, we may identify $Z_{\leq 0}$ with $Z_{\leq 1}$. We can then define the natural transformation $P_2: Z_{\leq 1} \to Z$ by setting $P_2^{(V)}(z_0) = \delta_{z_0} \times \prod_{e \in \binom{V}{2}}\mu_{p(z_0),e}$ for all vertex sets $V$ and all $z_0 \in Z_0 \equiv Z_{\leq 1}^{(V)}$, where we identify $Z^{(V)}$ with $Z_{\leq 1}^{(V)} \times \prod_{e \in \binom{V}{2}} Z_{=2}^{(e)}$, and $\mu_{p,e}$ are the measures defined in Example \ref{eph}. Then $\mu := P_2 \circ P_0$ is an exchangeable $Z$-recipe, which describes an Erd\H{o}s-R\'enyi random graph whose expected edge density $p$ is itself a random variable. \end{example}
\begin{example}[Random directed complete graph] Let $Z = \{0,1\}_2$ and let $P_2: \operatorname{pt} \to Z$ be the exchangeable $Z$-recipe $P_2^{(V)} = \prod_{e \in \binom{V}{2}} Q_2^{(e)}$, where for each $e = \{v,w\}$, $Q_2^{(e)} \in \operatorname{Pr}( \{0,1\}_2^{(e)} )$ is the law of the random directed graph $G_2: \operatorname{Inj}([2],e) \to \{0,1\}$ such that $G_2(v,w) = 1$ and $G_2(w,v)=0$ with probability $1/2$, and $G_2(v,w)=0$ and $G_2(w,v)=1$ with probability $1/2$. Thus $P_2^{(V)}$ is the law of a random directed complete graph on $V$, in which, given any two vertices $v$ and $w$, exactly one of the directed edges $(v,w)$ and $(w,v)$ will lie in the graph, with an equal probability $1/2$ of each. \end{example}
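This recipe (a random tournament) is straightforward to simulate. The sketch below is illustrative only; the vertex set and seed are our own choices.

```python
import itertools
import random

# Sketch of the random directed complete graph (tournament) recipe: for
# each unordered pair of vertices, exactly one of the two orientations is
# chosen, each with probability 1/2, independently across pairs.
def sample_tournament(V, rng):
    g = {}
    for v, w in itertools.combinations(V, 2):
        if rng.random() < 0.5:
            g[(v, w)], g[(w, v)] = 1, 0
        else:
            g[(v, w)], g[(w, v)] = 0, 1
    return g

g = sample_tournament(range(5), random.Random(0))
# exactly one orientation of every pair lies in the graph
assert all(g[(v, w)] + g[(w, v)] == 1
           for v, w in itertools.combinations(range(5), 2))
```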
\begin{example}[Random 3-uniform hypergraphs]\label{r3h} We now consider a somewhat more general example than those above. Let $Z = (\operatorname{pt},Z_1,Z_2,\{0,1\})$, let $Q_1 \in \operatorname{Pr}(Z_1)$ be a probability measure, let $Q_2: Z_1 \times Z_1 \rightsquigarrow Z_2$ be a symmetric probability kernel, and let $p: Z_{\leq 2}^{([3])} \to [0,1]$ be a measurable function which is symmetric with respect to the $\operatorname{Inj}([3],[3])$ action on the base $Z_{\leq 2}^{([3])} \equiv Z_1^3 \times Z_2^6$. (Actually, for this construction, only the values of $p$ on \emph{undirected} hypergraphs in $Z_{\leq 2}^{([3])}$ (a set which is identifiable with $Z_1^3 \times Z_2^3$) will be relevant.) From Example \ref{rvc} with $Z_0 = \operatorname{pt}$, $Q_1$ induces a natural transformation $P_1: \operatorname{pt} \to Z_{\leq 1}$. Similarly, the map $Q_2$ induces a natural transformation $P_2: Z_{\leq 1} \to Z_{\leq 2}$ defined by $P_2^{(V)}(z) := \delta_z \times \prod_{e \in \binom{V}{2}} Q_2^{(e)}(z\downharpoonright_e)$ for all vertex sets $V$ and $z \in Z_{\leq 1}^{(V)}$, where for each $e = \{v,w\}$, $Q_2^{(e)}(z_v,z_w)$ is the law of the random hypergraph $G_e$ in $Z_{=2}^{(e)}$ which is symmetric (thus $G_e(v,w)=G_e(w,v)$) and such that $G_e(v,w)$ has law $Q_2(z_v,z_w)=Q_2(z_w,z_v)$. The function $p$ also induces a natural transformation $P_3: Z_{\leq 2} \to Z$, defined by $P_3^{(V)}(z) := \delta_z \times \prod_{e \in \binom{V}{3}} Q_3^{(e)}(z\downharpoonright_e)$ for all vertex sets $V$ and $z \in Z_{\leq 2}^{(V)}$, where $Q_3^{(\{v_1,v_2,v_3\})}(y)$ is the law of the random hypergraph in $\{0,1\}_3^{(\{v_1,v_2,v_3\})}$ which is complete with probability $p( Z_{\leq 2}^{(v_1,v_2,v_3)}(y) )$ and empty otherwise (note that the exact ordering of $\{v_1,v_2,v_3\}$ is irrelevant due to the symmetry assumptions on $p$). 
This generates an exchangeable $Z$-recipe $\mu := P_3 \circ P_2 \circ P_1$, which creates a hypergraph on any vertex set $V$ by first using $Q_1$ to colour the vertices, then $Q_2$ to colour $2$-edges, and finally $Q_3$ to colour $3$-edges. This sort of recipe has also appeared, for example, in the different formalism of `hypergraphons' studied in~\cite{EleSze}, and can be viewed as the infinitary analogue of the regularisations of finitary hypergraphs given for instance in \cite{gowers-3} or \cite{rodl}. \end{example}
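The layered sampling procedure $P_3 \circ P_2 \circ P_1$ can be sketched directly. In the code below, $Z_1$, the kernel $Q_2$ and the density function $p$ are illustrative stand-ins of our own devising, not objects from the text; the point is only the layer-by-layer conditional independence.

```python
import itertools
import random

# Sketch of the layered recipe mu = P_3 o P_2 o P_1 of Example (r3h):
# colour the vertices, then the 2-edges, then the 3-edges, each layer
# i.i.d. conditioned on the layers below.  Z_1 = {'a','b'}, the kernel
# Q_2 and the density p are illustrative stand-ins.
Z1 = ['a', 'b']

def sample(V, rng):
    # P_1: i.i.d. vertex colours with law Q_1 (uniform here)
    g1 = {v: rng.choice(Z1) for v in V}
    # P_2: each 2-edge coloured independently given the vertex colours
    def q2(zv, zw):
        return int(rng.random() < (0.9 if zv != zw else 0.2))
    g2 = {frozenset(e): q2(g1[e[0]], g1[e[1]])
          for e in itertools.combinations(V, 2)}
    # P_3: each 3-edge present with a probability depending only on the
    # restriction of the lower layers to that triple
    def p(c1, c2, c3):
        return 0.8 if c1 + c2 + c3 == 3 else 0.0
    g3 = {}
    for u, v, w in itertools.combinations(V, 3):
        q = p(g2[frozenset((u, v))], g2[frozenset((v, w))],
              g2[frozenset((u, w))])
        g3[frozenset((u, v, w))] = int(rng.random() < q)
    return g1, g2, g3

g1, g2, g3 = sample(range(6), random.Random(0))
# with this choice of p, a present 3-edge forces all three of its 2-edges
for t, c in g3.items():
    if c:
        assert all(g2[frozenset(e)]
                   for e in itertools.combinations(sorted(t), 2))
```

Locality is visible in the code: the colour of each $3$-edge is computed from the restriction of the lower layers to that triple alone, which is the content of \eqref{local} for $P_3$.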
These examples can be generalized to create exchangeable recipes of any given order. To do this, we introduce some more notation.
\begin{definition}[Independence] Let $X$ be a sub-Cantor space with a probability measure $\mu \in \operatorname{Pr}(X)$, and let $\pi_\alpha: X \to Y_\alpha$, $\alpha \in A$ be a collection of measurable maps to other sub-Cantor spaces $Y_\alpha$. We say that the maps $\pi_\alpha$ are \emph{jointly independent relative to $\mu$} if we have $$ \int_X (\prod_{\alpha \in A'} f_\alpha \circ \pi_\alpha)\ d\mu = \prod_{\alpha \in A'} \int_X f_\alpha \circ \pi_\alpha\ d\mu$$ for all finite subsets $A'$ of $A$ and all bounded measurable functions $f_\alpha: Y_\alpha \to \mathbf{R}$. \end{definition}
\begin{remark} If we choose $x \in X$ at random with law $\mu$, then the $\pi_\alpha$ are jointly independent relative to $\mu$ if and only if the random variables $\pi_\alpha(x) \in Y_\alpha$ are jointly independent in the usual probabilistic sense. \end{remark}
\begin{definition}[$j$-independence]\label{j-indep} Let $N: Z \to Y$ be a natural transformation, and let $j \geq 0$. We say that $N$ is \emph{$j$-independent} if for every vertex set $V$ and every $z \in Z^{(V)}$, the restriction maps $\pi_W: Y^{(V)} \to Y^{(W)}$ for $W \in \binom{V}{j}$ are jointly independent relative to the measure $N^{(V)}(z) \in \operatorname{Pr}(Y^{(V)})$. \end{definition}
\begin{remark} Informally, $j$-independence asserts that for any fixed $z \in Z^{(V)}$, the $j$-edges of the random element of $Y^{(V)}$ drawn using the law $N^{(V)}(z)$ are jointly independent random variables. For instance in Example \ref{rvc}, once one fixes $z \in Z_0$, $P_1$ colours the vertices in $V$ independently with law $Q_1(z)$, and thus $P_1: Z_{<1} \to Z_{\leq 1}$ is $1$-independent. (Note, however, that if $z \in Z_0$ is chosen randomly rather than deterministically, then the colours assigned to vertices in $V$ need not be independent any more.) More generally, in all of the examples discussed earlier in this section, the natural transformations $P_j: Z_{<j} \to Z_{\leq j}$ that appear in those examples are $j$-independent. \end{remark}
\begin{example}\label{jep} Let $Z$ be a contravariant functor and $j \geq 0$. Let $Y_{=j}$ be a sub-Cantor palette with only the $j^{\operatorname{th}}$ component non-trivial, and let $Q^{([j])}: Z^{([j])} \rightsquigarrow Y_{=j}^{([j])}$ be a probability kernel which is $\operatorname{Inj}([j],[j])$-equivariant, thus $Y_{=j}^{(\phi)} \circ Q^{([j])} = Q^{([j])} \circ Z^{(\phi)}$ for all $\phi \in \operatorname{Inj}([j],[j])$. If we then define the natural transformation $Q: Z \to Y_{=j}$ by $$ Q^{(V)}(z) := \prod_{e \in \binom{V}{j}} Y_{=j}^{(\phi_e^{-1})} \circ Q^{([j])}( Z^{(\phi_e)}(z) )$$ for all vertex sets $V$ and all $z \in Z^{(V)}$, where we identify $Y_{=j}^{(V)}$ with $\prod_{e \in \binom{V}{j}} Y_{=j}^{(e)}$, and where we choose an arbitrary morphism $\phi_e \in \operatorname{Inj}([j],e)$ for each $e \in \binom{V}{j}$ (the exact choice of $\phi_e$ is irrelevant, thanks to the $\operatorname{Inj}([j],[j])$-equivariance of $Q^{([j])}$), then one verifies that $Q$ is a $j$-independent natural transformation. Indeed, $Q$ is the unique $j$-independent natural transformation which agrees with $Q^{([j])}$ at $[j]$, and conversely every $j$-independent natural transformation from $Z$ to $Y_{=j}$ arises in this fashion. \end{example}
\begin{definition}[Regular exchangeable recipes]\label{regexp} Let $Z$ be a sub-Cantor palette of some order $k \geq 0$, and let $\mu: \operatorname{pt} \to Z$ be an exchangeable $Z$-recipe. We say that $\mu$ is \emph{regular} if there exists a factorisation $$ \mu = P_k \circ \ldots \circ P_0$$ where for each $0 \leq j \leq k$, $P_j: Z_{<j} \to Z_{\leq j}$ is a $j$-independent natural transformation which partially inverts the projection natural transformation $\pi_j: Z_{\leq j} \to Z_{<j}$ in the sense that $\pi_j \circ P_j = \operatorname{id}_{Z_{<j}}$. (Here we identify $Z$ with $Z_{\leq k}$ in the obvious manner.) \end{definition}
\begin{remark} If one sets $\mu_{\leq j} = \mu_{<j+1} := P_j \circ \ldots \circ P_0$, then the situation can be described by a commutative diagram whose $j^{\operatorname{th}}$ layer for $j=0,\ldots,k$ takes the form \begin{equation}\label{mupj} \begin{CD} \operatorname{pt} @>{\mu_{\leq j}}>> Z_{\leq j} @>{\operatorname{id}_{Z_{\leq j}}}>> Z_{\leq j} \\
@| @AA{P_j}A @V{\pi_j}VV \\ \operatorname{pt} @>{\mu_{<j}}>> Z_{<j} @>{\operatorname{id}_{Z_{<j}}}>> Z_{<j} \end{CD} \end{equation} \end{remark}
\begin{example} Let $Z$ be a sub-Cantor palette of order $k \ge 0$. If $0 \leq j \leq k$ and $Q_j^{([j])}: Z_{<j}^{([j])} \rightsquigarrow Z_{=j}^{([j])}$ is an $\operatorname{Inj}([j],[j])$-equivariant probability kernel, and $Q_j: Z_{<j} \to Z_{=j}$ is the associated $j$-independent natural transformation, as defined by Example \ref{jep}, then the natural transformation $P_j: Z_{<j} \to Z_{\leq j}$ defined by $P_j^{(V)}(z) := \delta_z \times Q_j^{(V)}(z)$ for all vertex sets $V$ and $z \in Z_{<j}^{(V)}$, where we identify $Z_{\leq j}^{(V)}$ with $Z_{<j}^{(V)} \times Z_{=j}^{(V)}$, is a $j$-independent natural transformation with $\pi_j \circ P_j = \operatorname{id}_{Z_{<j}}$. Thus by selecting a $Q_j^{([j])}$ for each $0 \leq j \leq k$ and then composing the resulting $P_j$ together, one obtains a regular exchangeable $Z$-recipe $\mu$; conversely, all such regular exchangeable $Z$-recipes arise in this manner. \end{example}
In terms of the notation set out above, we can now state the full structure theorem that we need.
\begin{theorem}[Structure theorem]\label{struct} Let $K$ be a finite palette of some order $k \geq 0$, let $\mu: \operatorname{pt} \to K$ be an exchangeable $K$-recipe and let $S$ be a countably infinite vertex set. Then there exists a sub-Cantor palette $Z$, a deterministically continuous natural transformation $\Lambda: K^{\uplus S} \to Z$, and a colouring map $\kappa:Z\to K$ such that the natural transformation $\overline{\kappa} \circ \Lambda: K^{\uplus S} \to K$ is just the restriction map, thus \begin{equation}\label{kappalam} (\overline{\kappa} \circ \Lambda)^{(V)}(G) = G\downharpoonright_V \end{equation} for all vertex sets $V$ and all $G \in K^{(V \uplus S)}$, and such that $\Lambda \circ \mu^{\uplus S}: \operatorname{pt} \to Z$ is a regular exchangeable $Z$-recipe. \end{theorem}
\begin{remark} The situation in the structure theorem can be summarised by the following commutative diagram, \begin{equation}\label{mustruct} \begin{CD} K^{\uplus S} @>{\Lambda}>> Z @>{\overline{\kappa}}>> K \\ @AA{\mu^{\uplus S}}A @AA{\Lambda \circ \mu^{\uplus S}}A @AA{\mu}A\\ \operatorname{pt} @= \operatorname{pt} @= \operatorname{pt} \end{CD} \end{equation} with the map from $K^{\uplus S}$ to $K$ being the restriction map (by \eqref{kappalam}), and the middle vertical map being an exchangeable $Z$-recipe and thus factorable as $P_k \circ \ldots \circ P_0$ for some $j$-independent ingredients $P_j: Z_{<j} \to Z_{\leq j}$. \end{remark}
\begin{proof} See Theorem~3.15 (for the undirected case) and Theorem~3.22 (for the general case) in~\cite{Aus1}. \end{proof}
Informally, the above theorem asserts that any exchangeable recipe can (after adding a sufficient number of ``hidden variables'') be constructed from randomly colouring $0$-edges, then $1$-edges, then $2$-edges, etc. as in Examples \ref{rvc}-\ref{r3h}. It is analogous to the hypergraph regularity lemma, which roughly speaking asserts that any $k$-uniform hypergraph $G = (V,E)$ can be regularised by first colouring (i.e. partitioning) the $1$-edges (i.e. vertices), and then on each pair of $1$-cells, colouring/partitioning the $2$-edges between those cells in a regular fashion (regularity being the analogue of $2$-independence), then on each triplet of $1$-cells and triplet of $2$-cells, colouring/partitioning the $3$-edges with vertices in the $1$-cells and $2$-edges in the $2$-cells in a (hypergraph)-regular fashion, and so forth.
\subsection{Infinitary reductions of main theorems}\label{reduce-sec}
In this section, we use the structure theorem to deduce the main positive results of this paper (Theorems \ref{rs-thm-dir}, \ref{lgr}, \ref{monotone}, \ref{part}) from infinitary counterparts (Propositions \ref{rs-prop}, \ref{lgr-prop}, \ref{monotone-prop}, \ref{part-prop}), which will then be proven in later sections.
We begin with some notation.
\begin{definition}[Entailment] Let $K$ be a finite palette, let $\mathcal{P}$ be a hereditary $K$-property, and let $N: Z \to K$ be a natural transformation from some contravariant functor $Z$. We say that $N$ \emph{almost entails} $\mathcal{P}$ if we have $N^{(V)}(z)(\mathcal{P}^{(V)}) = 1$ for all vertex sets $V$ and all $z \in Z^{(V)}$. We say that $N$ \emph{entails} $\mathcal{P}$ if $N$ is deterministically continuous and almost entails $\mathcal{P}$. \end{definition}
\begin{remark} If $N$ is deterministically continuous, then $N^{(V)}$ can be viewed as a continuous function from $Z^{(V)}$ to $K^{(V)}$, and then the assertion that $N$ entails $\mathcal{P}$ is equivalent to the claim that $N^{(V)}(Z^{(V)}) \subset \mathcal{P}^{(V)}$. Note that this notion of entailment is consistent with that in Definition \ref{locmod} after using Example \ref{lmr-nat}. \end{remark}
\begin{remark}[Alon-Shapira finitisation trick]\label{sat} From \eqref{kphi} in Definition \ref{hered} and \eqref{natprop} in Definition \ref{natdef} we see that to verify that $N$ almost entails $\mathcal{P}$, it suffices to verify $N^{(V)}(z)(\mathcal{P}^{(V)}) = 1$ for a single countably infinite vertex set $V$ and all $z \in Z^{(V)}$. Actually, from countable additivity and the way $\mathcal{P}$ is extended from finite hypergraphs to infinite ones, it suffices to verify $N^{(V)}(z)(\mathcal{P}^{(V)}) = 1$ for all finite vertex sets $V$ and all $z \in Z^{(V)}$. For similar reasons, to verify that a continuous natural transformation $N: Z \to K$ entails $\mathcal{P}$, it suffices to show that $N^{(V)}(Z^{(V)}) \subset \mathcal{P}^{(V)}$ for all finite $V$. This ability to reduce entailment to verification on finite vertex sets is crucial to our arguments; not coincidentally, an analogous finitisation observation played a similarly central role in \cite{AloSha2}. \end{remark}
\begin{definition}[Infinitary repairability and testability]\label{infitest} Let $K$ be a finite palette of some order $k \geq 0$, and let $\mathcal{P}$ be a hereditary $K$-property. We say that $\mathcal{P}$ is \emph{infinitarily testable with one-sided error} (resp. \emph{infinitarily strongly locally repairable}) if given any sub-Cantor palette $Z$ of order $k$, any colouring $\kappa: Z \to K$, any regular exchangeable $Z$-recipe $\mu: \operatorname{pt} \to Z$ such that $\overline{\kappa} \circ \mu$ almost entails $\mathcal{P}$, and every $\varepsilon > 0$, there exists a weakly continuous (resp. deterministically continuous) natural transformation $T: Z \to K$ that almost entails (resp. entails) $\mathcal{P}$ and is close to $\overline{\kappa}$ in the sense that \begin{equation}\label{mukan} \int_{Z^{([k])}} T^{([k])}(z)( K^{([k])} \backslash \{\overline{\kappa}^{([k])}(z)\} )\ d\mu^{([k])}(z) < \varepsilon. \end{equation} \end{definition}
\begin{remark}\label{jojo-rem} When $T$ is deterministically continuous, \eqref{mukan} simplifies to \begin{equation}\label{jojo} \mu^{([k])}( \{ z \in Z^{([k])}: T^{([k])}(z) \neq \overline{\kappa}^{([k])}(z) \} ) < \varepsilon. \end{equation} \end{remark}
\begin{example}[Testing and repair of the triangle-free property, I]\label{triangle-test} Let $Z = (\operatorname{pt}, Z_1, \{0,1\})$ be a sub-Cantor palette, let $K := \{0,1\}_2$, and let $\kappa: Z \to K$ be the colouring map which is the identity on the order $2$ component and trivial on lower order components. Let $Q_1 \in \operatorname{Pr}(Z_1)$, and let $P_1: \operatorname{pt} \to Z_{\leq 1}$ be as in Example \ref{rvc}. Let $p: Z_1 \times Z_1 \to [0,1]$ be a symmetric measurable function, and let $P_2: Z_{\leq 1} \to Z$ be the $2$-independent natural transformation $P_2^{(V)}(z) := \delta_z \times \prod_{e \in \binom{V}{2}} Q_2^{(e)}(z\downharpoonright_e)$ for all vertex sets $V$ and $z \in Z_{\leq 1}^{(V)}$, where for each $e = \{v,w\}$,
$Q_2^{(e)}(z_v,z_w)$ is the law of the random graph on the doubleton $e$ which is complete with probability $p(z_v,z_w)$ and empty otherwise. Then $\mu := P_2 \circ P_1$ is a regular exchangeable $Z$-recipe (closely related to the \emph{graphons} introduced in \cite{LovSze2}); it randomly colours any vertex set $V$ by assigning each vertex $v \in V$ a random colour $G_1(v)$ in $Z_1$ with law $Q_1$, and then assigns any edge $\{v,w\}$ the colour $1$ with probability $p(G_1(v),G_1(w))$, independently for all edges $\{v,w\}$ (once the colours $G_1(v)$ have all been picked). Let $\mathcal{P}$ be the hereditary $K$-property of being undirected and triangle-free. Observe that $\mu$ will almost entail $\mathcal{P}$ if we have $p(x,y)p(y,z)p(z,x)=0$ for $Q_1$-almost every $x,y,z \in Z_1$; suppose that this is the case. Now we seek a weakly (resp. deterministically) continuous natural transformation $T: Z \to K$ that almost entails (resp. entails) $\mathcal{P}$, and is close to $\overline{\kappa}: Z \to K$ (observe that $\overline{\kappa}$ itself does not entail $\mathcal{P}$ at all) in the sense of \eqref{mukan}. We know of two methods to achieve this, which we shall call the \emph{R\"odl-Schacht method} and the \emph{Alon-Shapira method}, being loosely based on the constructions in \cite{rs2} and \cite{AloSha2} respectively (we will also discuss finitary analogues of these schemes in the next remark). Both methods proceed by first choosing a refinement $\alpha: Z \to A$ of $\kappa: Z \to K$, which amounts to subdividing the vertex space $Z_1$ into finitely many clopen ``cells'' $\alpha_1^{-1}(\{a\})$; the finer one takes the colouring $\alpha$, the better the value of $\varepsilon$ one will eventually obtain in \eqref{mukan}. The R\"odl-Schacht method then constructs the law $T^{(V)}(z) \in \operatorname{Pr}(K^{(V)})$ of a random $K$-coloured graph on a vertex set $V$, starting from a $Z$-coloured graph $z \in Z^{(V)}$, as follows. 
For each vertex $v \in V$, one looks at the cell $C_v := \alpha_1^{-1}(\alpha_1(z_1(v))) \subset Z_1$ that $z_1(v)$ lives in. If this cell has positive measure with respect to $Q_1$, then we select a point $\zeta_v \in C_v$ at random with law $(Q_1|C_v)$ (see Appendix \ref{prob} for the definition of conditioned measure). Otherwise, we select $\zeta_v \in Z_1$ with law $Q_1$. Note that in either case, the law of $\zeta_v$ is absolutely continuous with respect to $Q_1$. We perform this selection procedure independently for each $v \in V$. One now selects $T^{(V)}(z)_2(v,w) = T^{(V)}(z)_2(w,v) \in \{0,1\}$ for each $(v,w) \in \operatorname{Inj}([2],V)$ separately by the following rule: \begin{itemize} \item If $z_2(v,w)=z_2(w,v)$ and $p(\zeta_v,\zeta_w) = p(\zeta_w,\zeta_v) \neq 1-z_2(v,w)$, then set $T^{(V)}(z)_2(v,w)=z_2(v,w)$ and $T^{(V)}(z)_2(w,v)=z_2(w,v)$. \item Otherwise, set $T^{(V)}(z)_2(v,w)=T^{(V)}(z)_2(w,v)$ equal to $1$ with probability $p(\zeta_w,\zeta_v)$, and equal to $0$ otherwise. \end{itemize} One can verify that $T$ is a weakly continuous natural transformation which almost entails $\mathcal{P}$, and that \eqref{mukan} is obeyed for sufficiently fine $\alpha$, which demonstrates that $\mathcal{P}$ is infinitarily testable with one-sided error.
Now we turn to the Alon-Shapira method, which is more complicated, but constructs a natural transformation $T: Z \to K$ which is deterministically continuous rather than weakly continuous. To simplify matters we shall take advantage of the monotonicity of the property $\mathcal{P}$, and also make the additional assumption that the measure $Q_1$ is \emph{atomless} (i.e. $Q_1(\{z\})=0$ for all $z \in Z_1$). Let $\alpha: Z \to A$ be as before. For each $a \in A_1$ in turn, we form the cell $C_a := \alpha_1^{-1}(\{a\})$, and independently select $\zeta_a \in Z_1$ at random with law $(Q_1|C_a)$ if $Q_1(C_a) > 0$, and with law $Q_1$ otherwise. For each pair $\{a,a'\} \in \binom{A_1}{2}$, we then select $\zeta_{a,a'} = \zeta_{a',a} \in \{0,1\}$ independently at random, such that $\zeta_{a,a'}=1$ with probability $p( \zeta_a, \zeta_{a'} )$. With all these choices, we then define the (random) deterministically continuous natural transformation $T: Z \to K$ by setting $T^{(V)}(z)_2(v,w)=T^{(V)}(z)_2(w,v)$ for vertex sets $V$, $z \in Z^{(V)}$, and $(v,w) \in \operatorname{Inj}([2],V)$ by the following rule, abbreviating $\alpha_1(z_1(v))$ as $\alpha_1(v)$: \begin{itemize} \item If $\alpha_1(v)\neq \alpha_1(w)$, $z_2(v,w)=z_2(w,v)$, and $p(\zeta_{\alpha_1(v)},\zeta_{\alpha_1(w)})=p(\zeta_{\alpha_1(w)},\zeta_{\alpha_1(v)}) \neq 1-z_2(v,w)$, then we set $T^{(V)}(z)_2(v,w)=T^{(V)}(z)_2(w,v) = z_2(v,w)$. \item If $\alpha_1(v)\neq \alpha_1(w)$ but we are not in the previous case, we set $T^{(V)}(z)_2(v,w) = \zeta_{\alpha_1(v),\alpha_1(w)}$. \item If we are in the ``diagonal case'' $\alpha_1(v)=\alpha_1(w)$ then we set $T^{(V)}(z)_2(v,w) = 0$. \end{itemize} One can verify that with probability 1, $T$ is a deterministically continuous transformation which entails $\mathcal{P}$; the monotonicity of $\mathcal{P}$ is used to ensure that the ``zeroing out'' of the diagonal case does not interfere with this entailment. 
One can also verify \eqref{mukan} if the colouring $\alpha$ is sufficiently fine; the atomless nature of $Q_1$ is used to ensure that the contribution of the diagonal case can be made arbitrarily small. (One can handle the diagonal contributions of any atoms in $Q_1$ by adding an additional case to the above rule; we leave the details to the interested reader.) If $\mathcal{P}$ is not monotone, the diagonal case causes much more difficulty, and needs to be coloured according to a colour provided by an application of Ramsey's theorem; see \cite{AloSha2} for details (albeit in a rather different language). \end{example}
\begin{example}[Testing and repair of the triangle-free property, II] We now adapt the above discussion to the finitary setting, to help provide a partial dictionary between the finitary and infinitary worlds. Our discussion will be somewhat informal. We start with a fixed \emph{graphon}, that is, a measurable symmetric function $p: [0,1] \times [0,1] \to [0,1]$. Given such a graphon, and given a vertex set $V$, we construct a random graph $G = (V,E)$ by the following procedure. First, randomly assign to each vertex $v \in V$ a colour $G_1(v) \in [0,1]$ using the uniform distribution on $[0,1]$, with each vertex being coloured independently. (Note that the uniform distribution on $[0,1]$ is atomless, thus avoiding some of the technicalities alluded to in the previous example.) Next, we define the edge set $E$ of $G$ by declaring each edge $\{v,w\}$ to lie in $E$ with probability $p(G_1(v),G_1(w))$, with these events being independent once the colours of the vertices have been chosen.
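As an informal aside, this two-step sampling procedure is easy to simulate directly. The following Python sketch is an illustration only; the graphon \texttt{p\_bipartite} is a hypothetical example not taken from the text, chosen because a bipartite graphon satisfies $p(x,y)p(y,z)p(z,x)=0$ everywhere, so the sampled graphs are triangle-free with probability $1$.

```python
import random

def sample_graphon_graph(p, n, seed=0):
    """Sample a random graph on vertices 0..n-1 from a graphon p,
    following the two-step procedure in the text: first colour the
    vertices independently and uniformly in [0,1], then include each
    edge {v,w} independently with probability p(colour(v), colour(w))."""
    rng = random.Random(seed)
    colours = [rng.random() for _ in range(n)]
    edges = {frozenset((v, w))
             for v in range(n) for w in range(v + 1, n)
             if rng.random() < p(colours[v], colours[w])}
    return colours, edges

# Hypothetical example graphon: connect only across the two halves of
# [0,1].  Then p(x,y) p(y,z) p(z,x) vanishes identically (any three
# points have two in the same half), so sampled graphs are triangle-free.
def p_bipartite(x, y):
    return 0.8 if (x < 0.5) != (y < 0.5) else 0.0
```

Sampling with \texttt{p\_bipartite} produces a random bipartite graph, the simplest instance of the "almost entails triangle-freeness" situation above.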
The finitary analogue of the R\"odl-Schacht method involves two vertex sets\footnote{Of course, in the initial setup in \cite{rs2} no graphon is initially provided. Instead, one takes a hypothetical sequence of increasingly large counterexamples to the local testability claim, passes to a subsequence which does converge to a graphon $p$ (cf. Lemma \ref{compact}), selects two widely separated elements of this sequence, and then applies the argument described here.}, a relatively small one $V$ and a very large one $V^*$, and generates two random graphs $G = (V,E)$, $G^* = (V^*,E^*)$ using the same graphon $p$. We assume that the large graph $G^*$ is very close to being triangle-free, and in particular we assume that the triangle density of $G^*$ is extremely small compared to the size $|V|$ of the smaller graph. On the other hand, the triangle density of $G^*$ is extremely close to the quantity \begin{equation}\label{evo}
\int_0^1 \int_0^1 \int_0^1 p(x,y) p(y,z) p(z,x)\ dx dy dz. \end{equation}
We may thus assume (with high probability) that this quantity is very small, smaller than any fixed quantity depending only on $|V|$.
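The quantity \eqref{evo} is simply the triangle density of the graphon $p$, and can be estimated by Monte Carlo sampling. The following sketch is illustrative only (the function name and the example graphons are hypothetical, not from the text):

```python
import random

def triangle_density(p, samples=20000, seed=0):
    """Monte Carlo estimate of the integral of
    p(x,y) p(y,z) p(z,x) over (x,y,z) in [0,1]^3."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        x, y, z = rng.random(), rng.random(), rng.random()
        total += p(x, y) * p(y, z) * p(z, x)
    return total / samples

# For a "bipartite" graphon, vanishing unless its arguments lie in
# different halves of [0,1], the integrand is identically zero: among
# any three points, two share a half, so one factor vanishes.
bipartite = lambda x, y: 0.8 if (x < 0.5) != (y < 0.5) else 0.0
```

For the constant graphon $p \equiv 1$ the integrand is identically $1$, and for \texttt{bipartite} it is identically $0$, so the estimator returns these values exactly.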
We then use the nearly triangle-free nature of $G^*$ to obtain a genuinely triangle-free perturbation $G' = (V,E')$ of $G$ as follows. Pick a large number $N$ (much larger than $|V|$) and subdivide the interval $[0,1]$ into $N$ intervals $I_1,\ldots,I_N$ of equal length. We then define a random map $\zeta: V \to [0,1]$ as follows. For each $v \in V$, we look at the colour $G_1(v) \in [0,1]$ of $v$; this falls into one of the intervals $I_i$ of $[0,1]$. We then pick an element of $I_i$ uniformly at random and call this $\zeta_v$. (Note that different $v, v' \in V$ may correspond to the same $i$, but in such cases we pick $\zeta_v, \zeta_{v'}$ independently; in any event, such collisions will be rare if $N$ is chosen large enough depending on $|V|$.) This gives rise to a random map $\zeta: V \to [0,1]$. From the smallness of \eqref{evo} and the first moment method we see that the quantity \begin{equation}\label{poof}
\sum_{u,v,w \in V, \hbox{ distinct}} p( \zeta_u, \zeta_v ) p( \zeta_v, \zeta_w ) p( \zeta_w, \zeta_u ) \end{equation}
can be made (with high probability) to be as small as desired depending on $|V|$.
We use this map $\zeta$ to construct $G'=(V,E')$ as follows. We will need a small threshold $\sigma > 0$ depending on $|V|$. Let $\{v,w\}$ be a pair of distinct vertices of $V$. \begin{itemize} \item If $\{v,w\} \in E$, and $p( \zeta_v, \zeta_w ) \geq \sigma$, we place $\{v,w\}$ in $E'$. \item If $\{v,w\} \in E$, and $p( \zeta_v, \zeta_w ) < \sigma$, we exclude $\{v,w\}$ from $E'$. \item If $\{v,w\} \not \in E$ and $p( \zeta_v, \zeta_w ) \leq 1-\sigma$, we exclude $\{v,w\}$ from $E'$. \item If $\{v,w\} \not \in E$ and $p( \zeta_v, \zeta_w ) > 1-\sigma$, we place $\{v,w\}$ in $E'$. \end{itemize}
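The four cases amount to a rounding rule: an existing edge survives unless $p(\zeta_v,\zeta_w)$ is within $\sigma$ of $0$, and a non-edge is inserted only when $p(\zeta_v,\zeta_w)$ is within $\sigma$ of $1$. A direct transcription (hypothetical helper name; the two non-edge cases are read as complementary, with $p \leq 1-\sigma$ excluding and $p > 1-\sigma$ inserting):

```python
def repaired_edge(in_E, p_vw, sigma):
    """Membership of {v,w} in the repaired edge set E', given its
    status in E and the value p_vw = p(zeta_v, zeta_w)."""
    if in_E:
        return p_vw >= sigma        # delete edges with p within sigma of 0
    return p_vw > 1.0 - sigma       # insert non-edges with p within sigma of 1
```

In particular, for $\sigma \leq 1/2$ every edge of $G'$ has $p(\zeta_v,\zeta_w) \geq \sigma$, so a triangle in $G'$ would contribute at least $\sigma^3$ to \eqref{poof}; once \eqref{poof} is below $\sigma^3$, the graph $G'$ must be triangle-free.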
One can check (if \eqref{poof} is sufficiently small) that $G'$ is genuinely triangle-free; meanwhile, from the Lebesgue differentiation theorem we know that $p$ is approximately constant on most cells $I_i \times I_j$; since $G$ is generated using $p$, this can be used to show that $G$ and $G'$ differ in a relatively small number of edges if the parameters are selected correctly (it is here that it is crucial that $N$ is large compared with $|V|$). Note however that the rule generating $G'$ from $G$ is not local in nature, as it requires an initial assignment of a real number $\zeta_v \in [0,1]$ to each vertex $v$ and thus requires far more ``memory'' than is available to a local modification rule. Also, the ``complexity'' $N$ of the modification procedure here has to be large compared with $|V|$, and in particular this procedure would not work if $V$ were infinite.
Now we briefly sketch the Alon-Shapira approach to constructing $G'$. Here we will not use the large graph $G^*$, and work solely with $G$. We assume that $G$ is close to triangle-free, thus we may assume that \eqref{evo} is small; but now the bound is much weaker. More precisely, for any $\delta > 0$, we may assume that \eqref{evo} is less than $\delta$, but only if $|V|$ is sufficiently large depending on $\delta$; we no longer have the luxury of assuming \eqref{evo} to be arbitrarily small depending on $|V|$.
We now construct the perturbation $G'$ by a variant of the R\"odl-Schacht method. We pick an $N$ which is moderately large, but now \emph{independent} of $|V|$, and create the intervals $I_1,\ldots,I_N$ as before; this induces a partition $V = V_1 \cup \ldots \cup V_N$ of $V$ into cells which (with high probability) are of roughly equal size. Rather than assign a number $\zeta_v \in [0,1]$ to each vertex $v \in V$, we now only assign a number $\zeta_i \in I_i$ for each $1 \leq i \leq N$, drawn uniformly at random from $I_i$ and independently for each $i$. We then construct $G' = (V,E')$ as follows for $v \in V_i, w \in V_j$, this time with a threshold $\sigma > 0$ that is small compared with $N$, but independent of $|V|$: \begin{itemize} \item If $\{v,w\} \in E$, $i \neq j$, and $p( \zeta_i, \zeta_j ) \geq \sigma$, we place $\{v,w\}$ in $E'$. \item If $\{v,w\} \in E$, $i \neq j$, and $p( \zeta_i, \zeta_j ) < \sigma$, we exclude $\{v,w\}$ from $E'$. \item If $\{v,w\} \not \in E$, $i \neq j$, and $p( \zeta_i, \zeta_j ) \leq 1-\sigma$, we exclude $\{v,w\}$ from $E'$. \item If $\{v,w\} \not \in E$, $i \neq j$, and $p( \zeta_i, \zeta_j ) > 1-\sigma$, we place $\{v,w\}$ in $E'$. \item If $i=j$, we exclude $\{v,w\}$ from $E'$. \end{itemize} This procedure will (if $|V|$ is large enough to ensure that \eqref{evo} is sufficiently small depending on $N,\sigma$) create a triangle-free graph $G'$ which is close (with high probability) to $G$. Technically, $G'$ is not obtained from $G$ by a local modification rule; however, the rule that decides when an edge $\{v,w\}$ belongs to $G'$ depends only on whether $\{v,w\}$ lies in $G$, and on the cells $V_i, V_j$ that $v,w$ lie in. As mentioned before, the $V_i$ can be viewed as a Szemer\'edi partition of the graph $G$. Another way to obtain a Szemer\'edi partition is to select a number of random vertices $v_1,\ldots,v_k$ and use the neighbourhoods of these vertices to determine a partition; see e.g. \cite{ishi-0}. 
Using such a regularisation instead of the one based on the intervals $I_1,\ldots,I_N$, one can eventually obtain a local modification rule that repairs $G$ to a triangle-free graph and which only modifies a small number of edges on the average; we omit the details, which could in principle be extracted from the argument in \cite{AloSha2} using random vertex neighbourhoods to regularise graphs as in \cite{ishi-0}. \end{example}
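To make the cell-based rule of the Alon-Shapira method concrete, here is a self-contained toy transcription in Python (all names are hypothetical, and the bipartite graphon is an illustrative example rather than anything from the text): the colours are partitioned into $N$ intervals, one representative $\zeta_i$ is drawn per cell, and edges are rounded cell-by-cell, with the diagonal zeroed out.

```python
import random

def alon_shapira_repair(n, p, N=10, sigma=0.05, seed=0):
    """Toy version of the cell-based repair: sample G from the graphon
    p, then build G' by rounding each off-diagonal pair of cells
    according to p evaluated at one random representative per cell,
    and deleting all edges inside a single cell."""
    rng = random.Random(seed)
    colours = [rng.random() for _ in range(n)]
    cell = [min(int(c * N), N - 1) for c in colours]   # index of interval I_i
    E = {frozenset((v, w))
         for v in range(n) for w in range(v + 1, n)
         if rng.random() < p(colours[v], colours[w])}
    zeta = [(i + rng.random()) / N for i in range(N)]  # representative of I_i
    Eprime = set()
    for v in range(n):
        for w in range(v + 1, n):
            i, j = cell[v], cell[w]
            if i == j:
                continue                               # diagonal case: zero out
            q = p(zeta[i], zeta[j])
            in_E = frozenset((v, w)) in E
            keep = (q >= sigma) if in_E else (q > 1.0 - sigma)
            if keep:
                Eprime.add(frozenset((v, w)))
    return E, Eprime

# Illustrative bipartite graphon (hypothetical): with only two halves
# there are no three pairwise-crossing cells, so G' is triangle-free.
p_bip = lambda x, y: 0.8 if (x < 0.5) != (y < 0.5) else 0.0
```

With \texttt{p\_bip}, an edge of $G'$ between cells $i$ and $j$ exists only when $\zeta_i$ and $\zeta_j$ lie in different halves of $[0,1]$, and a triangle would require three cells whose representatives lie pairwise in different halves, which is impossible.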
The connection of the notions in Definition \ref{infitest} to those in Definitions \ref{Testdef}, \ref{locrep} is given by the following correspondence principle.
\begin{proposition}[Correspondence principle]\label{tprop} Let $K$ be a finite palette, and let $\mathcal{P}$ be a hereditary $K$-property. \begin{itemize} \item[(i)] If $\mathcal{P}$ is infinitarily testable with one-sided error, then $\mathcal{P}$ is testable with one-sided error. \item[(ii)] If $\mathcal{P}$ is infinitarily strongly locally repairable, then $\mathcal{P}$ is strongly locally repairable. \end{itemize} \end{proposition}
\begin{proof} Let $k$ denote the order of $K$. We first prove (i). Suppose for contradiction that $\mathcal{P}$ is infinitarily testable with one-sided error but not testable with one-sided error. Carefully negating all the quantifiers, we conclude that there exists an error tolerance $\varepsilon > 0$ and a sequence $G_n \in K^{(V_n)}$ of $K$-coloured hypergraphs on finite vertex sets $V_n$ with $|V_n| \geq \max(n,k)$, which increasingly locally obey $\mathcal{P}$ in the sense that \begin{equation}\label{gnp}
\frac{1}{|\binom{V_n}{n}|} \left|\left\{ W \in \binom{V_n}{n}: G_n\downharpoonright_W \in \mathcal{P}^{(W)} \right\}\right| \geq 1-\frac{1}{n}, \end{equation} but are far from $\mathcal{P}$ in the sense that for any $n$, there does not exist any $G'_n \in \mathcal{P}^{(V_n)}$ for which \begin{equation}\label{joy}
\frac{1}{|\binom{V_n}{k}|} \left|\left\{ W \in \binom{V_n}{k}: G_n\downharpoonright_W \neq G'_n\downharpoonright_W \right\}\right| \leq \varepsilon. \end{equation} From \eqref{gnp} and the hereditary nature of $\mathcal{P}$ we easily see that \begin{equation}\label{gnp2}
\lim_{n \to \infty} \frac{1}{|\binom{V_n}{N}|} \left|\left\{ W \in \binom{V_n}{N}: G_n\downharpoonright_W \in \mathcal{P}^{(W)} \right\}\right| = 1 \end{equation} for all fixed $N \ge 1$.
We now arbitrarily extend each $G_n \in K^{(V_n)}$ to a continuous $K$-coloured hypergraph $\tilde G_n$ on $V_n$ as in Example \ref{extend}, endowing each $V_n$ with the uniform probability measure $\nu_n$. By Definition \ref{sampling}, we thus have a sequence of exchangeable $K$-recipes $\overline{\tilde G_n} \circ \overline{\nu_n}: \operatorname{pt} \to K$. From \eqref{gnp2} (and the fact that $|V_n| \to \infty$ as $n \to \infty$) we see that the $\overline{\tilde G_n} \circ \overline{\nu_n}$ increasingly entail $\mathcal{P}$ in the sense that \begin{equation}\label{nuf} \lim_{n \to \infty} (\overline{\tilde G_n} \circ \overline{\nu_n})^{([N])}( \mathcal{P}^{([N])} ) = 1 \end{equation} for any $N \ge 1$.
By Lemma \ref{compact}, and passing to a subsequence if necessary, we may assume that $\tilde G_n \circ \nu_n$ converges vaguely to an exchangeable $K$-recipe $\mu: \operatorname{pt} \to K$. From \eqref{nuf} (and the fact that $\mathcal{P}^{([N])}$ is clopen) we conclude that $\mu^{([N])}(\mathcal{P}^{([N])}) = 1$ for all $N$. By Remark \ref{sat}, we conclude that $\mu$ almost entails $\mathcal{P}$.
Let $S$ be a countably infinite vertex set. We now invoke Theorem \ref{struct} to obtain a sub-Cantor palette $Z$, a natural transformation $\Lambda: K^{\uplus S} \to Z$ and a colouring $\kappa: Z \to K$ such that $\Lambda \circ \mu^{\uplus S}: \operatorname{pt} \to Z$ is a regular exchangeable $Z$-recipe. From \eqref{mustruct} we see that $\overline{\kappa} \circ \Lambda \circ \mu^{\uplus S} = \mu$, thus $\overline{\kappa} \circ \Lambda \circ \mu^{\uplus S}$ almost entails $\mathcal{P}$.
Let $\delta$ be a small number (depending on $\varepsilon$ and $k$) to be chosen later. As $\mathcal{P}$ is infinitarily testable with one-sided error, we can find a weakly continuous natural transformation $T: Z \to K$ that almost entails $\mathcal{P}$ such that \begin{equation}\label{mukan2} \int_{Z^{([k])}} T^{([k])}(z)( K^{([k])} \backslash \{\overline{\kappa}^{([k])}(z)\} )\ d\Lambda^{([k])} \circ \mu^{([k] \uplus S)}(z) < \delta. \end{equation}
The situation can be summarised by the commutative diagram \begin{equation}\label{tstruct} \begin{diagram}
& & K & \lTo^{\iota} & \mathcal{P} && \\
& & \uTo^T & \ruTo & &&\\ K^{\uplus S} & \rTo^\Lambda & Z & \rTo^{\overline{\kappa}} & K &\lTo^\iota & \mathcal{P}\\ \uTo^{\mu^{\uplus S}} & & \uTo^{\Lambda \circ \mu^{\uplus S}} & & \uTo^\mu & \ruTo &\\ \operatorname{pt} & \rEq & \operatorname{pt} & \rEq & \operatorname{pt} & & \end{diagram} \end{equation} where the two maps $T$, $\overline{\kappa}$ are close in the sense of \eqref{mukan}. The fact that $\mu$ almost entails $\mathcal{P}$ means that it in fact factors through the inclusion map $\iota: \mathcal{P} \to K$, and similarly for $T$.
Fix this $T$, and let $n$ be a large integer to be chosen later. We perform the following random construction. Let $\mathbf{N}$ be the natural numbers (actually, we could use any countably infinite vertex set here). Let $\psi \in V_n^{\mathbf{N}}$ be a point drawn at random with law $\nu_n^{\mathbf{N}}$ (or equivalently, $\psi: \mathbf{N} \to V_n$ is a random function from $\mathbf{N}$ to $V_n$). Then the point \begin{equation}\label{zDef} z := \Lambda^{(\mathbf{N})}(\overline{\tilde G_n}^{(\mathbf{N})}(\psi)) \end{equation} is a random point in $Z^{(\mathbf{N})}$ with law $(\Lambda \circ \overline{\tilde G_n} \circ \overline{\nu_n})^{(\mathbf{N})}$.
After choosing $\psi$ and hence $z$, let $G \in K^{(\mathbf{N})}$ be drawn at random with law $T^{(\mathbf{N})}(z)$. By construction of $T$, we see that $G$ almost surely obeys $\mathcal{P}$.
We now claim that for $n$ sufficiently large we have \begin{equation}\label{pze}
\mathbf{P}( \overline{\kappa}^{(e)}(z\downharpoonright_e) \neq G\downharpoonright_e ) < \delta \end{equation} for all $e \in \binom{\mathbf{N}}{k}$. As the joint distribution of $(z,G)$ is exchangeable with respect to the action of $\operatorname{Inj}(\mathbf{N},\mathbf{N})$, we see that the probability on the left is independent of the choice of $e$, and so it suffices to verify \eqref{pze} for $e=[k]$. Since $T$ is a natural transformation, we observe that for fixed $z$, $G\downharpoonright_{[k]}$ has the distribution $T^{([k])}(z\downharpoonright_{[k]})$. Also, $z\downharpoonright_{[k]}$ has the distribution $\Lambda^{([k])} \circ (\overline{\tilde G_n}\circ \overline{\nu_n})^{([k] \uplus S)}$. We can thus re-express the left-hand side of \eqref{pze} as $$ \int_{Z^{([k])}} T^{([k])}(z)( K^{([k])} \backslash \{ \overline{\kappa}^{([k])}(z) \} )\ d\Lambda^{([k])} \circ (\overline{\tilde G_n}\circ \overline{\nu_n})^{([k] \uplus S)}(z).$$ But as $T$ is weakly continuous, the integrand here is continuous. Since $\overline{\tilde G_n}\circ \overline{\nu_n}$ converges vaguely to $\mu$, the claim \eqref{pze} thus follows from \eqref{mukan2}.
Now let $M = M_n$ be a large integer (depending on $|V_n|$ and $\varepsilon$) to be chosen later. The vertices $\psi(1),\ldots,\psi(M) \in V_n$ are drawn uniformly at random, so by the law of large numbers we see (if $M$ is sufficiently large) that with probability at least $1/2$ we have \begin{equation}\label{mvn}
\frac{M}{2|V_n|} \leq |\{ m \in [M]: \psi(m) = v \}| \leq \frac{2M}{|V_n|} \end{equation}
for all $v \in V_n$. (Note that it is crucial here that $M$ is taken large compared to $|V_n|$; it is because of this that we only obtain testability here rather than local repairability.)
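The bound \eqref{mvn} is a routine concentration estimate, and is easy to probe numerically; the sketch below (names and parameters hypothetical) checks that every vertex receives between $M/2|V_n|$ and $2M/|V_n|$ of the $M$ uniform samples once $M$ is large compared with $|V_n|$.

```python
import random
from collections import Counter

def occupancy_ok(n_vertices, M, seed=0):
    """Draw psi(1),...,psi(M) uniformly from a vertex set of size
    n_vertices and check the two-sided bound (mvn):
        M/(2n) <= #{m : psi(m) = v} <= 2M/n   for every vertex v."""
    rng = random.Random(seed)
    counts = Counter(rng.randrange(n_vertices) for _ in range(M))
    lo, hi = M / (2 * n_vertices), 2 * M / n_vertices
    return all(lo <= counts[v] <= hi for v in range(n_vertices))
```

For example, with $10$ vertices and $M = 10^4$ samples the bound holds easily, while with $100$ vertices and only $M = 150$ samples some vertex is almost surely never hit, so the lower bound fails.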
We now condition on the event that \eqref{mvn} holds. Because this event has probability at least $1/2$, we see that after this conditioning, $G$ still continues to obey $\mathcal{P}$ almost surely, and from \eqref{pze} we have \begin{equation}\label{kzkz}
\mathbf{P}( \overline{\kappa}^{(e)}(z\downharpoonright_e) \neq G\downharpoonright_e ) \ll \delta \end{equation} for all $e \in \binom{\mathbf{N}}{k}$.
For any $v \in V_n$, let $m_v \in [M]$ be chosen uniformly and independently at random from the set $\{ m \in [M]: \psi(m) = v \}$, which is non-empty by \eqref{mvn}. This gives us a random function $m \in \operatorname{Inj}(V_n,\mathbf{N})$ which partially inverts $\psi$. We then define the hypergraph $G'_n \in K^{(V_n)}$ by the formula $G'_n := K^{(m)}(G)$. From \eqref{zDef} we also have $G_n = K^{(m)}(\kappa^{(\mathbf{N})}(z))$.
Since $G$ almost surely obeys $\mathcal{P}$, the hypergraph $G'_n = K^{(m)}(G)$ does also. From \eqref{mvn}, \eqref{kzkz}, and the construction of $m$ we also see that
$$ \sum_{W \in \binom{V_n}{k}} \mathbf{P}( G_n\downharpoonright_W \neq G'_n\downharpoonright_W ) \ll_k \delta |\operatorname{Inj}([k],V_n)|;$$ by linearity of expectation, we thus have
$$ \mathbf{E}( \frac{1}{|\binom{V_n}{k}|} \left|\left\{ W \in \binom{V_n}{k}: G_n\downharpoonright_W \neq G'_n\downharpoonright_W \right\}\right| ) \ll_k \delta.$$ Thus by the first moment method, there exists a deterministic hypergraph $G'_n \in \mathcal{P}^{(V_n)}$ such that
$$ \frac{1}{|\binom{V_n}{k}|} \left|\left\{ W \in \binom{V_n}{k}: G_n\downharpoonright_W \neq G'_n\downharpoonright_W \right\}\right| \ll_k \delta.$$ Choosing $\delta$ sufficiently small depending on $\varepsilon$, we obtain \eqref{joy}, which is a contradiction. This concludes the proof of (i).
Now we prove (ii). Suppose for contradiction that $\mathcal{P}$ is infinitarily strongly locally repairable but not strongly locally repairable. Carefully negating all the quantifiers, we conclude that there exists an error tolerance $\varepsilon > 0$ and a sequence of $K$-coloured continuous hypergraphs $(G_n)_{n\geq 1}$, each on a different probability space $(V_n, \mathcal{B}_n, \nu_n)$, which increasingly obey $\mathcal{P}$ in the sense that $$ \lim_{n \to \infty} \int_{V_n^{[N]}} \mathcal{I}(\overline{G_n}^{([N])}(v) \in \mathcal{P}^{([N])})\ d\nu_n^{[N]}(v) = 1$$
for every $N$, but such that for each $n$, there does not exist any local modification rule $(T, A)$ entailing $\mathcal{P}$ with $|A| \leq n$ for which \begin{equation}\label{joy2} \int_{V_n^A} \int_{V_n^{[k]}} \mathcal{I}\left( \overline{T_v(G_n)}^{([k])}(w) \neq \overline{G_n}^{([k])}(w) \right)\ d\nu_n^A(v)\, d\nu_n^{[k]}(w) < \varepsilon. \end{equation}
As in the proof of (i), we may assume after passing to a subsequence that the exchangeable $K$-recipes $\overline{G_n} \circ \overline{\nu_n}: \operatorname{pt} \to K$ converge vaguely to an exchangeable $K$-recipe $\mu: \operatorname{pt} \to K$ which almost entails $\mathcal{P}$.
Let $S$ be a countably infinite vertex set. As before, we invoke Theorem \ref{struct} to obtain a sub-Cantor palette $Z$ and natural transformations $\Lambda: K^{\uplus S} \to Z$ and $\kappa: Z \to K$, such that $\Lambda \circ \mu^{\uplus S}: \operatorname{pt} \to Z$ is a regular exchangeable $Z$-recipe, and such that $\overline{\kappa} \circ \Lambda \circ \mu^{\uplus S}$ almost entails $\mathcal{P}$ (by \eqref{mustruct}).
As $\mathcal{P}$ is infinitarily strongly locally repairable, we can find a deterministically continuous natural transformation $\tilde T: Z \to K$ entailing $\mathcal{P}$ such that \begin{equation}\label{mukan-3} \Lambda^{([k])} \circ \mu^{([k] \uplus S)}( \{ z \in Z^{([k])}: \tilde T^{([k])}(z) \neq \overline{\kappa}^{([k])}(z) \} ) < \varepsilon. \end{equation} The situation is once again depicted by \eqref{tstruct}, except with the weakly continuous $T$ replaced by the deterministically continuous $\tilde T$.
Now consider the map $$ (\tilde T \circ \Lambda)^{([k])}: K^{([k] \uplus S)} \to K^{([k])}.$$ This is a continuous map from the sub-Cantor space $K^{([k] \uplus S)}$ to the finite space $K^{([k])}$. As such, all of its level sets are clopen, and thus factor through $K^{([k] \uplus A)}$ for some finite subset $A$ of $S$. In other words, we can find a finite set $A \subset S$ and a continuous map $\overline{T}^{([k])}: K^{([k] \uplus A)} \to K^{([k])}$ such that $$ (\tilde T \circ \Lambda)^{([k])} = \overline{T}^{([k])} \circ \pi_A^{([k])}$$ where $\pi_A: K^{\uplus S} \to K^{\uplus A}$ is the restriction natural transformation. If we then define the natural transformation $\overline{T}: K^{\uplus A} \to K$ by requiring that $$ K^{(\phi)} \circ \overline{T}^{(V)} = \overline{T}^{([k])} \circ K^{(\phi)}$$ for all vertex sets $V$ and all $\phi \in \operatorname{Inj}([k],V)$, one easily verifies that $\overline{T}$ is well-defined, is a deterministically continuous natural transformation, and that the diagram \begin{equation}\label{tcirc} \begin{CD} K^{\uplus A} @>{\overline{T}}>> K \\ @AA{\pi_A}A @AA{\tilde T}A \\ K^{\uplus S} @>{\Lambda}>> Z \end{CD} \end{equation} commutes. (The reader may wish to connect this diagram together with \eqref{tstruct}, with $T$ again replaced by $\tilde T$ of course.) In particular, $(T, A)$ is a local modification rule in the sense of Definition \ref{locmod}.
Since $\tilde T$ entails $\mathcal{P}$, we see from \eqref{tcirc} and the surjectivity of $\pi_A$ that $\overline{T}$ also entails $\mathcal{P}$. By chasing all the definitions we conclude that the local modification rule $(T,A)$ also entails $\mathcal{P}$.
Now we turn to \eqref{mukan-3}. From \eqref{tcirc} and the structure theorem (Theorem \ref{struct}) we can rewrite this as $$ \mu^{([k] \uplus A)}( \{ H \in K^{([k] \uplus A)}: \overline{T}^{([k])}(H) \neq H\downharpoonright_{[k]} \} ) < \varepsilon. $$ Since the set here is clopen, and $\overline{G_n} \circ \overline{\nu_n}$ converges vaguely to $\mu$, we conclude that $$ (\overline{G_n} \circ \overline{\nu_n})^{([k] \uplus A)}( \{ H \in K^{([k] \uplus A)}: \overline{T}^{([k])}(H) \neq H\downharpoonright_{[k]} \} ) < \varepsilon $$ for all sufficiently large $n$. But the left-hand side can be rearranged using Definition \ref{contmap} and Definition \ref{locmod} as $$ \int_{V_n^A} \int_{V_n^{[k]}} \mathcal{I}\left( \overline{T_v(G_n)}^{([k])}(w) \neq \overline{G_n}^{([k])}(w) \right)\ d\nu_n^{[k]}(w)\, d\nu_n^A(v).$$ But this contradicts \eqref{joy2} (for $n$ sufficiently large). This concludes the proof of (ii). \end{proof}
In view of the above correspondence principle, Theorems \ref{rs-thm-dir}, \ref{lgr}, \ref{monotone}, \ref{part} now follow immediately from the following four infinitary counterparts respectively.
\begin{proposition}[Every hereditary directed hypergraph property is testable]\label{rs-prop} Let $K$ be a finite palette, and let $\mathcal{P}$ be a hereditary $K$-property. Then $\mathcal{P}$ is infinitarily testable with one-sided error. \end{proposition}
\begin{proposition}[Every hereditary undirected graph property is locally repairable]\label{lgr-prop} Let $K$ be a finite palette of order at most $2$, and let $\mathcal{P}$ be a hereditary undirected $K$-property. Then $\mathcal{P}$ is infinitarily strongly locally repairable. \end{proposition}
\begin{proposition}[Every weakly monotone directed hypergraph property is locally repairable]\label{monotone-prop} Let $K$ be an ordered finite palette, and let $\mathcal{P}$ be a weakly monotone $K$-property. Then $\mathcal{P}$ is infinitarily strongly locally repairable. \end{proposition}
\begin{proposition}[Every partite hypergraph property is locally repairable]\label{part-prop} Let $K$ be a finite palette of order $k \geq 1$, and let $\mathcal{P}$ be a partite hereditary $K$-property. Then $\mathcal{P}$ is infinitarily strongly locally repairable. \end{proposition}
We will prove these four propositions in future sections, with the proof of Propositions \ref{lgr-prop}, \ref{monotone-prop}, \ref{part-prop} being started in Section \ref{tech}, after some preliminaries in Section \ref{fine}, and Proposition \ref{rs-prop} being started in Section \ref{discred-sec}. For now, we reinterpret the negative results from Section \ref{negchap} by indicating why infinitary strong local repairability fails\footnote{The authors in fact discovered this failure at the infinitary level first, and only converted it to the finitary counterexamples in Section \ref{negchap} afterwards, and with some non-trivial effort.} for directed graph properties or undirected hypergraph properties of order $\leq 3$.
\subsubsection{Directed graph properties are not infinitarily strongly repairable}
We begin by recasting the argument in Section \ref{lrs} in the infinitary setting. Let $Z_1 = C \subset \mathbf{R}$ be the standard middle-thirds Cantor set, consisting of the numbers in $[0,1]$ whose base $3$ expansion consists only of $0$s and $2$s, equipped with the Cantor measure $Q_1 = \mu_C$ (which is the law of a random base $3$ string in $[0,1]$ consisting of $0$s and $2$s); by Example \ref{rvc}, this induces a natural transformation $P_1: \operatorname{pt} \to Z_{\leq 1}$. We set $Z := (\operatorname{pt},Z_1,\{0,1\})$ and $k:=2$, and let $P_2: Z_{<2} \to Z$ be the natural transformation defined by $P_2^{(V)}(z) := \delta_z \times \prod_{e \in \binom{V}{2}} Q^{(e)}(z\downharpoonright_e)$ for all vertex sets $V$ and $z \in Z_{<2}^{(V)}$, where $Q^{(\{v,w\})}(z)$ is the law of the directed graph $G_2$ in $\{0,1\}_2^{(\{v,w\})}$ defined by $G_2(v,w) := \mathcal{I}( z(v) < z(w) )$ and $G_2(w,v) := \mathcal{I}( z(w) < z(v) )$. Then $\mu := P_2 \circ P_1$ is a regular exchangeable $Z$-recipe. We let $K := \{0,1\}_2$ and let $\kappa: Z \to K$ be the colouring map which is the identity on the second component and trivial on the zeroth and first components. Then we easily check that $\overline{\kappa} \circ \mu$ almost entails the $\{0,1\}_2$-property $\mathcal{P}$ of being a total ordering (as in Section \ref{lrs}). However, one cannot find any deterministically continuous natural transformation $T: Z \to K$ which entails $\mathcal{P}$: any $Z$-coloured hypergraph $z \in Z^{(V)}$ which has a pair $v,w$ of vertices which are indistinguishable, in the sense that $z_1(v)=z_1(w)$ and $z_2(v,w)=z_2(w,v)$, will necessarily map under $T$ to a directed graph $G \in K^{(V)}$ such that $G_2(v,w)=G_2(w,v)$, which implies that $G$ cannot obey $\mathcal{P}$. Thus the $\{0,1\}_2$-property $\mathcal{P}$ is not infinitarily strongly repairable.
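The recipe $\mu$ above simply orders the vertices by their Cantor colours. The following sketch (hypothetical helper names) samples the colours as random base-$3$ strings of $0$s and $2$s and builds the directed graph $G_2(v,w) = \mathcal{I}(z_1(v) < z_1(w))$; since distinct vertices receive distinct colours almost surely, the result is almost surely a strict total ordering.

```python
import random

def random_cantor_point(rng, digits=30):
    """A random point of the middle-thirds Cantor set: a base-3
    expansion whose digits are independent uniform choices from {0, 2}."""
    return sum(rng.choice((0, 2)) * 3.0 ** -(k + 1) for k in range(digits))

def ordering_tournament(n, seed=0):
    """Sample z_1(v) for n vertices and form the directed graph
    G_2(v, w) = 1 iff z_1(v) < z_1(w), as in the recipe above."""
    rng = random.Random(seed)
    z = [random_cantor_point(rng) for _ in range(n)]
    return {(v, w): int(z[v] < z[w])
            for v in range(n) for w in range(n) if v != w}
```

A pair of vertices with equal colours (a probability-zero event, but one that no deterministic rule can repair) is exactly the obstruction described above: such a pair receives $G_2(v,w)=G_2(w,v)=0$, and the output fails to be a total ordering.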
One can view the argument in Section \ref{lrs} that shows that $\mathcal{P}$ is not strongly repairable as the finitary analogue of the argument above. (The much more complicated demonstration that $\mathcal{P}$ is also not weakly repairable does not seem to have an easily describable infinitary counterpart.)
\subsubsection{$\leq 3$-uniform hypergraph properties are not infinitarily strongly repairable}
Now let $Z_1 = C \cup \{R\}$, where $C$ is the middle-thirds Cantor set and $R$ is an abstract ``red'' point, and let $Q_1 := \frac{1}{2} \mu_C + \frac{1}{2} \delta_R$, thus the red point has mass $1/2$ and the Cantor set has total mass $1/2$. We set $Z := (\operatorname{pt},Z_1,\{0,1\},\{0,1\})$ and $k := 3$, thus by Example \ref{rvc}, $Q_1$ induces a natural transformation $P_1: \operatorname{pt} \to Z_{\leq 1}$. We then define a natural transformation $P_2: Z_{\leq 1} \to Z_{\leq 2}$ by $P_2^{(V)}(z) := \delta_z \times \prod_{e \in \binom{V}{2}} Q^{(e)}_2(z\downharpoonright_e)$, where $Q^{(\{v,w\})}_2(z)$ is the law of the random graph in $\{0,1\}_2^{(\{v,w\})}$ which is complete with probability $1/2$ and empty otherwise if $z_1(v) \neq z_1(w)$, and always empty when $z_1(v)=z_1(w)$. We then define a natural transformation $P_3: Z_{\leq 2} \to Z$ by $P_3^{(V)}(z) := \delta_z \times \prod_{e \in \binom{V}{3}} Q^{(e)}_3(z\downharpoonright_e)$, where $Q^{(e)}_3(z)$ is the law of the random hypergraph in $\{0,1\}_3^{(e)}$ which is empty unless $e$ can be expressed as $\{r,b,b'\}$ where $z_1(r)=R$, $z_1(b) > z_1(b')$ lie in $C$, $z_2(r,b)=1$, and $z_2(r,b')=0$, in which case the hypergraph is complete. Then $\mu := P_3 \circ P_2 \circ P_1$ is a regular exchangeable $Z$-recipe. If we let $K := (\operatorname{pt},\{0,1\},\{0,1\},\{0,1\})$, and let $\kappa: Z \to K$ be the colouring which is trivial on the zeroth component, the identity on the second and third components, and maps $C$ to $0$ and $R$ to $1$ on the first component, one verifies that $\overline{\kappa} \circ \mu$ almost entails the $K$-property $\mathcal{P}$ defined in Section \ref{leq3}.
Now let $V := \{r_1,r_2,b_1,b_2\}$ be an abstract set with four elements, and consider the $Z$-coloured hypergraphs $z \in Z^{(V)}$ such that \begin{itemize} \item $z_1(r_1)=z_1(r_2)=R$ and $z_1(b_1)=z_1(b_2) \in C$; \item $z_2(r_1,b_1)=z_2(r_2,b_2)=1$ and $z_2(r_1,b_2)=z_2(r_2,b_1)=0$; \item $z$ is symmetric with respect to the morphism $\phi \in\operatorname{Inj}(V,V)$ which swaps $r_1$ and $r_2$, and swaps $b_1$ and $b_2$. \end{itemize}
If $T: Z \to K$ is a deterministically continuous natural transformation and $G := T^{(V)}(z) \in K^{(V)}$, then we see that $G$ is also symmetric with respect to the morphism $\phi$ mentioned above. If $G_1 = \kappa_1 \circ z_1$ and $G_2 = z_2$, then we also have $G_1(r_1)=G_1(r_2)=1$, $G_1(b_1)=G_1(b_2)=0$, $G_2(r_1,b_1)=G_2(r_2,b_2)=1$ and $G_2(r_1,b_2)=G_2(r_2,b_1)=0$. But this implies that either $b_1 >_{G,r_1} b_2$ and $b_2 >_{G,r_2} b_1$ are both true, or $b_2 >_{G,r_1} b_1$ and $b_1 >_{G,r_2} b_2$ are both true, which in either case is incompatible with $T$ entailing $\mathcal{P}$. Thus the only way that $T$ can entail $\mathcal{P}$ is if we have $G_1 \neq \kappa_1 \circ z_1$ or $G_2 \neq z_2$ for all $z \in Z^{(V)}$ of the above form. But this can be shown to be inconsistent with $T$ obeying \eqref{mukan} for $\varepsilon$ sufficiently small, and so $\mathcal{P}$ is not infinitarily strongly locally repairable.
One can perform a similar infinitary translation of the scenario in Section \ref{eq3}; we leave this to the reader.
\subsection{The asymptotics of increasingly fine colourings}\label{fine}
Much of our analysis will revolve around the colouring of an infinite palette $Z$ by a finite palette $A$; such a colouring is roughly analogous to dividing the vertices (or lower-order edges) of a graph (or hypergraph) into cells, as is done in the graph and hypergraph regularity lemmas. We will need a notion of a statement becoming asymptotically true for ``sufficiently fine'' colourings, similar to how a graph becomes increasingly regular as one partitions the vertices into finer and finer cells, or how a measurable function increasingly resembles a continuous one when viewed at finer and finer scales. In this section we set out some notation that will help us achieve these goals.
\begin{definition}[Colouring topology]\label{color-top} Let $Z$ be a sub-Cantor palette of order at most $k$. For each $0 \leq j \leq k$, we let $\operatorname{Col}_j(Z)$ denote the collection of all finite $\sigma$-algebras $\mathcal{B}$ of $Z_j$ that are generated by clopen sets, and let $\operatorname{Col}(Z) := \prod_{j=0}^k \operatorname{Col}_j(Z)$. Note that every colouring $\alpha = (\alpha_j)_{j=0}^\infty: Z \to A$ generates an element $\mathcal{B}_\alpha = (\mathcal{B}_{\alpha_j})_{j=0}^k$ of $\operatorname{Col}(Z)$, where $\mathcal{B}_{\alpha_j}$ is the $\sigma$-algebra of $Z_j$ generated by the level sets of $\alpha_j: Z_j \to A_j$. (The maps $\alpha_j$ for $j>k$ are trivial and thus of no consequence.)
We endow $\operatorname{Col}(Z)$ with the topology whose sub-basic open sets take the form \begin{equation}\label{sub-basic} \{ (\mathcal{B}_j)_{j=0}^k \in \operatorname{Col}(Z): \mathcal{B}_i \supset F( \mathcal{B}_{i+1},\ldots, \mathcal{B}_k ) \} \end{equation} where $0 \leq i \leq k$ and $F: \operatorname{Col}_{i+1}(Z) \times \ldots \times \operatorname{Col}_k(Z) \to \operatorname{Col}_i(Z)$ is an arbitrary function. Thus a set is open if it is the union of sets which are finite intersections of sets of the form \eqref{sub-basic}. We make the simple but important observation that the intersection of finitely many non-empty open sets in $\operatorname{Col}(Z)$ is again a \emph{non-empty} open set.
Let $\alpha: Z \to A$ be a colouring. A statement involving $\alpha$ is said to hold \emph{for sufficiently fine $\alpha$} if there exists a non-empty open set $U \subset \operatorname{Col}(Z)$ such that the statement holds whenever $\mathcal{B}_\alpha \in U$. If $c(\alpha) \in \mathbf{R}$ is a real-valued quantity depending\footnote{Technically, the class of all colourings on a given palette $Z$ is not a set, so that $c$ here is a class function rather than a function, but one can rectify this by any number of artificial expedients, for instance by forcing all palettes to take values in the set of integers.} on $\alpha$, and $c_\infty$ is a real number, we say that $c(\alpha)$ \emph{tends to $c_\infty$ as $\alpha \to \infty$}, and write $\lim_{\alpha \to \infty} c(\alpha) = c_\infty$ or $c(\alpha) = c_\infty + o_{\alpha \to \infty}(1)$, if for every $\varepsilon > 0$, the statement $|c(\alpha) - c_\infty| \leq \varepsilon$ is true for sufficiently fine $\alpha$. \end{definition}
\begin{remark} Readers familiar with the hypergraph regularity lemma may recall that in order to usefully regularise a hypergraph of order $k$ on a vertex set $V$, one must partition each of the edge classes $\binom{V}{j}$ for $1 \leq j \leq k$ into cells. Typically, the regularisation will only be useful if the partitions for lower values of $j$ are sufficiently fine compared to higher values of $j$, as the lower order partitions are used to regularise the higher order ones. Our notion of sufficiently fine colourings in the above definition captures the infinitary analogue of this phenomenon. \end{remark}
One can treat the limit $\alpha \to \infty$ much like a sequential limit $n \to \infty$. For instance, any finite linear combination of quantities which are $o_{\alpha \to \infty}(1)$ is also $o_{\alpha \to \infty}(1)$. More generally, we have
\begin{lemma}[Dominated convergence theorem]\label{dct} Let $Z$ be a sub-Cantor palette, and let $(X,\nu)$ be a probability space. For each colouring $\alpha: Z \to A$, let $F_\alpha: X \to [-1,1]$ be a measurable function. If we have \begin{equation}\label{falpha} \lim_{\alpha \to \infty} F_\alpha(x) = 0 \end{equation} for $\nu$-almost every $x \in X$, then we have $$ \lim_{\alpha \to \infty} \int_X F_\alpha\ d\nu(x) = 0.$$ \end{lemma}
\begin{proof} By splitting $F_\alpha$ into positive and negative components we may assume that all the $F_\alpha$ are non-negative. Let $\varepsilon > 0$ be arbitrary. Since $$ \int_X F_\alpha\ d\nu(x) \leq \varepsilon + \nu( \{ x \in X: F_\alpha(x) > \varepsilon \} )$$ it will suffice to show that $F_\alpha$ converges to zero in measure, in the sense that $\nu( \{ x \in X: F_\alpha(x) > \varepsilon \} )\leq \varepsilon$ for all sufficiently fine $\alpha$ (depending on $\varepsilon$).
Since any sub-Cantor space has at most countably many clopen subsets, we see that the set $\operatorname{Col}(Z)$ is at most countable. We can thus find a sequence $\alpha_n$ of colourings whose associated $\sigma$-algebras $\mathcal{B}_{\alpha_n}$ exhaust $\operatorname{Col}(Z)$. From this and the hypothesis \eqref{falpha}, we see that $$\nu( \bigcap_{n=1}^\infty \{ x \in X: F_{\alpha_n}(x) > \varepsilon \} ) = 0.$$ By the monotone convergence theorem, we thus have $$\nu( \bigcap_{n=1}^N \{ x \in X: F_{\alpha_n}(x) > \varepsilon \} ) < \varepsilon$$ for some finite $N$. Taking $\alpha$ to be finer than $\alpha_1,\ldots,\alpha_N$, the claim follows. \end{proof}
An important principle for us will be \emph{Littlewood's principle}, which asserts that measurable functions are almost continuous at sufficiently fine scales. We shall need the following technical version of this principle.
\begin{lemma}[Littlewood's principle]\label{dom} Let $Z$ be a sub-Cantor palette of order $k \geq 0$, and let $\alpha: Z \to A$ be a colouring of $Z$. Let $V$ be a finite vertex set, and let $0 \le j \le k$. Let $H$ be a finite-dimensional Hilbert space, let $\mu \in \operatorname{Pr}(Z_{=j}^{(V)})$ be a probability measure, and let $F: Z_{=j}^{(V)} \to H$ be a bounded measurable function; we allow $H, \mu, F$ to depend on $\alpha_{j+1},\ldots,\alpha_k,V$, but they must be independent of $\alpha_0,\ldots,\alpha_j$. For any $a \in A_{=j}^{(V)}$, let $C_a \subset Z_{=j}^{(V)}$ be the set $C_a := (\overline{\alpha_{=j}}^{(V)})^{-1}(\{a\})$. Then $F$ is almost continuous on most cells $C_a$ in the sense that $$ \sum_{a \in A_{=j}^{(V)}} \mu(C_a)
\int_{Z_{=j}^{(V)}} \left\| F(z) - \int_{Z_{=j}^{(V)}} F(w)\ d(\mu|C_a)(w) \right\|_H\ d(\mu|C_a)(z) = o_{\alpha \to \infty}(1),$$
where the conditioning $(\mu|C_a)$ is defined in Appendix \ref{prob}, and we adopt the convention that the summand vanishes when $\mu(C_a)=0$. \end{lemma}
\begin{proof} Fix $V, j, \alpha_{j+1},\ldots,\alpha_k$, which then fixes $H,\mu,F$, and let $\varepsilon > 0$. It suffices to show that $$ \sum_{a \in A_{=j}^{(V)}} \mu(C_a)
\int_{Z_{=j}^{(V)}} \left\| F(z) - \int_{Z_{=j}^{(V)}} F(w)\ d(\mu|C_a)(w) \right\|_H\ d(\mu|C_a)(z) \ll \varepsilon$$ for all sufficiently fine $\alpha_{j}$.
As the topology of $Z_{=j}^{(V)}$ has a countable base of clopen sets, we can approximate the bounded measurable function $F$ to within $O(\varepsilon)$ in $L^1(\mu)$ norm by a finite linear combination $G$ of indicator functions of clopen sets. Then we have $$ \sum_{a \in A_{=j}^{(V)}} \mu(C_a)
\int_{Z_{=j}^{(V)}} \| F(z) - G(z) \|_H\ d(\mu|C_a)(z) = \|F-G\|_{L^1(\mu)} \ll \varepsilon$$ and similarly (by the triangle inequality) $$ \sum_{a \in A_{=j}^{(V)}} \mu(C_a)
\int_{Z_{=j}^{(V)}} \| \int_{Z_{=j}^{(V)}} F(w)\ d(\mu|C_a)(w) - \int_{Z_{=j}^{(V)}} G(w)\ d(\mu|C_a)(w) \|_H\ d(\mu|C_a)(z) \leq \|F-G\|_{L^1(\mu)} \ll \varepsilon$$ so by the triangle inequality again, it suffices to show that $$ \sum_{a \in A_{=j}^{(V)}} \mu(C_a)
\int_{Z_{=j}^{(V)}} \| G(z) - \int_{Z_{=j}^{(V)}} G(w)\ d(\mu|C_a)(w) \|_H\ d(\mu|C_a)(z) \ll \varepsilon$$ for all sufficiently fine $\alpha_{j}$. But by the nature of $G$ we see that $G$ will be constant on all of the cells $C_a$ if $\alpha_{j}$ is fine enough. The claim follows. \end{proof}
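As a finitary illustration of Lemma \ref{dom} (a sketch only, not used in the sequel), the following fragment estimates the $L^1$ oscillation of a bounded measurable function on $[0,1)$ around its cell averages, for the dyadic colouring into $2^n$ cells; the particular test function and grid size are arbitrary choices.

```python
import math

def cell_oscillation(f, n, samples=4096):
    """Estimate the integral of |f - E[f | B_n]| over [0,1) with Lebesgue
    measure, where B_n is the dyadic partition into 2**n cells, via a
    midpoint grid of the given number of sample points."""
    xs = [(i + 0.5) / samples for i in range(samples)]
    # group the sample points by the dyadic cell containing them
    cells = {}
    for x in xs:
        cells.setdefault(int(x * 2 ** n), []).append(f(x))
    total = 0.0
    for vals in cells.values():
        avg = sum(vals) / len(vals)  # the conditional expectation on the cell
        total += sum(abs(v - avg) for v in vals)
    return total / samples

# a bounded measurable (here: piecewise smooth) test function
f = lambda x: math.sin(7 * x) if x < 0.4 else x * x
```

Refining the colouring drives the oscillation toward zero, which is the finitary shadow of the $o_{\alpha \to \infty}(1)$ behaviour asserted by the lemma.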
\subsection{Reduction of repairability to non-exchangeable repairability}\label{tech}
We need to prove three infinitary strong local repair results\footnote{Readers who are only interested in the testability result may skip ahead to Section \ref{discred-sec}.}, namely Proposition \ref{lgr-prop}, which addresses undirected graph properties; Proposition \ref{monotone-prop}, which addresses monotone hypergraph properties; and Proposition \ref{part-prop}, which addresses partite hypergraph properties. We shall deduce all three propositions from the following somewhat technical proposition, which pertains to arbitrary hereditary hypergraph properties. Instead of constructing a deterministically continuous natural transformation $T: Z \to K$ that entails $\mathcal{P}$, it settles for constructing a single map $U: A^{(V)} \to \mathcal{P}^{(V)}$ on a very large but finite vertex set $V$, which satisfies the locality property \eqref{local} but not the exchangeability property \eqref{exchange}. More precisely, we have
\begin{proposition}[Non-exchangeable repair of hereditary properties]\label{repair} Let $K$ be a finite palette of order $k \geq 0$, let $\mathcal{P}$ be a hereditary $K$-property, let $Z$ be a sub-Cantor palette, let $\kappa: Z \to K$ be a colouring, and let $\mu: \operatorname{pt} \to Z$ be a regular exchangeable $Z$-recipe such that $\overline{\kappa} \circ \mu$ almost entails $\mathcal{P}$. Then for any colouring $\alpha: Z \to A$ which refines $\kappa$ through $\sigma$ (as in Definition \ref{colour}) and any finite vertex set $V$, there exists a map $U: A^{(V)} \to \mathcal{P}^{(V)}$ which is \emph{local} in the sense that for any $W \subset V$ and any $a, a' \in A^{(V)}$ with $a\downharpoonright_W = a'\downharpoonright_W$, we have $U(a)\downharpoonright_W = U(a')\downharpoonright_W$, and which locally resembles $\overline{\sigma}^{(V)}$ in the sense that \begin{equation}\label{repo} (\overline{\alpha} \circ \mu)^{([k])}(\Omega_U) \geq 1 - o_{\alpha \to \infty}(1) \end{equation} where $\Omega_U \subset A^{([k])}$ is the set of all $b \in A^{([k])}$ such that $K^{(\phi)}(U(a)) = \overline{\sigma}^{([k])}(b)$ for all $\phi \in \operatorname{Inj}([k],V)$ and $a \in A^{(V)}$ with $A^{(\phi)}(a)=b$, and the expression $o_{\alpha \to \infty}(1)$ is uniform in the choice of $V$. \end{proposition}
\begin{remark} The fact that the error $o_{\alpha \to \infty}(1)$ is uniform in $V$ is crucial for establishing testability properties for general properties $\mathcal{P}$. Without this uniformity, one would only be able to test properties that were equivalent to forbidding a finite number of induced hypergraphs. (We will eventually be generating this finite set $V$ from the Alon-Shapira finitisation trick, Remark \ref{sat}, and as such there is no good control as to the size of $V$ other than that it is finite.) The need to pass from the finite setting to the infinite setting, but then back again to the finite setting, is somewhat analogous to the presence of several regularisations in the Alon-Shapira argument \cite{AloSha2} at radically different scales; roughly speaking, the finest such regularisation corresponds to the infinitary framework here, but we still have to treat the remaining regularisations finitarily. \end{remark}
In the remainder of this section we show how Proposition \ref{repair} implies the three infinitary strong local repair results. We begin with the repairability of weakly monotone hypergraph properties.
\begin{proof}[Proof of Proposition \ref{monotone-prop} assuming Proposition \ref{repair}] Let $K$ be an ordered finite palette of order $k \geq 0$, let $\mathcal{P}$ be a weakly monotone $K$-property, let $Z$ be a sub-Cantor palette, let $\kappa: Z \to K$ be a colouring, and let $\mu: \operatorname{pt} \to Z$ be a regular exchangeable $Z$-recipe such that $\overline{\kappa} \circ \mu$ almost entails $\mathcal{P}$, and let $\varepsilon > 0$. Our task is to locate a deterministically continuous natural transformation $T: Z \to K$ entailing $\mathcal{P}$ which obeys \eqref{mukan}. Note (as observed in Remark \ref{jojo-rem}) that as $T$ is deterministically continuous, the left-hand side of \eqref{mukan} simplifies to $$ \mu^{([k])}( \{ z \in Z^{([k])}: T^{([k])}(z) \neq \overline{\kappa}^{([k])}(z) \} ).$$
Let $\alpha: Z \to A$ be a sufficiently fine colouring to be chosen later; note that for $\alpha$ fine enough we may assume that $\kappa = \sigma \circ \alpha$ for some colouring $\sigma: A \to K$. We will find a deterministically continuous natural transformation $S: A \to K$ entailing $\mathcal{P}$ with the property that \begin{equation}\label{mussel} (\overline{\alpha} \circ \mu)^{([k])}( \{ b \in A^{([k])}: S^{([k])}(b) \neq \overline{\sigma}^{([k])}(b) \} ) = o_{\alpha \to \infty}(1). \end{equation} Once we do this, Proposition \ref{monotone-prop} follows by setting $T := S \circ \overline{\alpha}$ and taking $\alpha$ sufficiently fine.
It remains to locate a natural transformation $S$ with the required properties. We first use a finitisation trick of Alon and Shapira. Observe (from Remark \ref{sat}) that if a deterministically continuous natural transformation $S: A \to K$ does \emph{not} entail $\mathcal{P}$, then there exists a finite integer $N$ such that $S^{([N])}(A^{([N])}) \not \subset \mathcal{P}^{([N])}$. This integer $N$ ostensibly depends on $S$; however, since $A$ and $K$ are both finite palettes, the number of deterministically continuous natural transformations $S: A \to K$ which do not entail $\mathcal{P}$ is also finite. Thus (by enlarging $N$ if necessary) one can make $N$ independent of $S$. In other words, there exists\footnote{Note however that this $N$ is \emph{ineffectively} finite, as one needs to solve a ``halting problem'' for $\mathcal{P}$ in order to compute it.} an $N = N_{A,K,\mathcal{P}}$ which serves as a \emph{certificate} for $\mathcal{P}$ in the following sense: if $S: A \to K$ is a deterministically continuous natural transformation such that \begin{equation}\label{snap} S^{([N])}(A^{([N])}) \subset \mathcal{P}^{([N])}, \end{equation} then $S$ entails $\mathcal{P}$.
Fix this value of $N$; by increasing $N$ if necessary we may assume $N \geq k$. Our objective is now to locate a deterministically continuous natural transformation $S: A \to K$ that obeys \eqref{mussel} and \eqref{snap}.
Let $V := [N]$. We apply Proposition \ref{repair} to obtain a local map $U: A^{([N])} \to \mathcal{P}^{([N])}$ obeying \eqref{repo}.
We will now use $U$ and the weakly monotone nature of $\mathcal{P}$, to build the deterministically continuous natural transformation $S: A \to K$. We first define the map $S^{([N])}: A^{([N])} \to K^{([N])}$ by the formula \begin{equation}\label{sm}
S^{([N])}(a) := \bigwedge_{\phi \in \operatorname{Inj}([N],[N])} K^{(\phi)}(U(a))
\end{equation} for all $a \in A^{([N])}$, where the meet of $K$-coloured hypergraphs was defined in Definition \ref{mono} (note that this operation is both commutative and associative). Since $U(a) \in \mathcal{P}^{(V)}$ and $\mathcal{P}$ is weakly monotone, we see that \eqref{snap} holds.
The map $S^{([N])}$ is clearly $\operatorname{Inj}([N],[N])$-equivariant; since $U$ is local, $S^{([N])}$ is also. From this (and the assumption $N \geq k$) we see that $S^{([N])}$ extends uniquely to a deterministically continuous natural transformation $S: A \to K$.
Finally, it remains to verify \eqref{mussel}. From \eqref{sm} we see that $$ S^{([k])}(b) = \bigwedge_{a \in A^{([N])}, \phi \in \operatorname{Inj}([k],[N]): A^{(\phi)}(a)=b} K^{(\phi)}(U(a))$$ for all $b \in A^{([k])}$. The claim \eqref{mussel} now follows from \eqref{repo}. \end{proof}
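The meet operation underlying \eqref{sm} can be sanity-checked concretely. The sketch below assumes (as suggested by Definition \ref{mono} for an ordered palette) that the meet of two $K$-coloured hypergraphs on a common vertex set is taken pointwise, edge by edge; commutativity and associativity are then inherited from $\min$.

```python
def meet(g, h):
    """Pointwise meet of two coloured hypergraphs on the same vertex set,
    each encoded as a dict from (ordered) edges to colours in a total order."""
    assert g.keys() == h.keys()
    return {e: min(g[e], h[e]) for e in g}

# three colourings of the same small edge set, with integer colours
g = {("1",): 2, ("2",): 0, ("1", "2"): 1, ("2", "1"): 3}
h = {("1",): 1, ("2",): 2, ("1", "2"): 2, ("2", "1"): 0}
w = {("1",): 0, ("2",): 1, ("1", "2"): 3, ("2", "1"): 1}
```

Since $\min$ is commutative, associative, and idempotent, so is the edge-wise meet, which is what allows the unordered meet over all $\phi$ in \eqref{sm} to be well defined.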
Now we turn to the repairability of undirected graph properties.
\begin{proof}[Proof of Proposition \ref{lgr-prop} assuming Proposition \ref{repair}] By increasing $k$ and adding some dummy palettes if necessary we can take $k=2$. We then repeat the proof of Proposition \ref{monotone-prop}, with $K$ a finite palette of order at most $2$ and $\mathcal{P}$ a hereditary undirected $K$-property, and let $Z, \kappa, \mu, \alpha, A, \sigma, N$ be as in the previous proof. As before, our objective is to locate a deterministically continuous natural transformation $S: A \to K$ obeying \eqref{mussel} and \eqref{snap}. The main difference is that we will use Ramsey theory instead of monotonicity to construct $S$.
Let $V$ be a sufficiently large finite vertex set (depending on $N, A, K$) to be chosen later. We apply Proposition \ref{repair} to obtain a local map $U: A^{(V)} \to \mathcal{P}^{(V)}$ obeying \eqref{repo}.
We now use Ramsey-theoretic tools to restrict $U$ to a smaller vertex set on which one has more monochromaticity; in these arguments we will rely crucially on the fact that $k$ is equal to $2$.
Since $U$ is local, we see that $U$ uniquely defines a map $U_W: A^{(W)} \to \mathcal{P}^{(W)} \subset K^{(W)}$ for all $W \subset V$, defined by requiring $U_W(a\downharpoonright_W) = U(a)\downharpoonright_W$ for all $a \in A^{(V)}$. Applying this with $W=\emptyset$ we obtain a map $U_0: A_0 \to K_0$. Applying this instead with $W = \{v\}$ equal to a singleton set, we obtain a map $U_v: A_0 \times A_1 \to K_0 \times K_1$. The number of possible maps $U_v$ is finite, and so by the pigeonhole principle we can find a subset $V' \subset V$ and a map $U_1: A_0 \times A_1 \to K_0 \times K_1$ such that $U_v = U_1$ for all $v \in V'$. Furthermore, we can make $V'$ as large as desired (depending on $N,A,K$) by making $V$ sufficiently large (depending on $N,A,K$).
We would like to perform the same analysis for doubleton sets $W = \{v,w\}$, but one runs into the difficulty that there is an $\operatorname{Inj}([2],W)$-ambiguity when trying to identify $A^{(W)}$ (for instance) with $A_0 \times A_1^2 \times A_2^2$. We shall rectify this by \emph{ad hoc} combinatorial trickery when $k=2$ by exploiting the undirected nature of $\mathcal{P}$, but the ambiguity is much more serious\footnote{Specifically, the problem is that the colour in $K_3$ that $U(a)$ assigns to a $3$-edge $\{u,v,w\}$ depends not only on the vertex colours $a_1(u), a_1(v), a_1(w) \in A_1$ and the $3$-edge colour $a_3(u,v,w) \in A_3$, but also depends on the $2$-edge colours $a_2(u,v), a_2(v,w), a_2(w,u) \in A_2$, in a manner which may not be completely symmetric, even when $\mathcal{P}$ is undirected. Unsurprisingly, it is this potential for asymmetry within an undirected hypergraph property which is exploited in Sections \ref{leq3} and \ref{eq3}.} when $k \geq 3$ (even for undirected $\mathcal{P}$) when one has to consider tripleton sets $W = \{u,v,w\}$ or worse, and indeed as we see from Theorem \ref{negate}, the analogue of Proposition \ref{lgr-prop} fails in this case.
We turn to the details. Let $M$ be a large number depending on $N, A, K$ to be chosen later. If $V$ (and hence $V'$) is chosen sufficiently large depending on $M, A, K$, we can find disjoint sets $V_{a_0,a_1}$ in $V'$ for $a_0 \in A_0$ and $a_1 \in A_1$ such that $|V_{a_0,a_1}| \geq M$.
Suppose $a_0 \in A_0$ and $a_1,a'_1 \in A_1$. Then we can define a map $U_{v,v'}: A_2 \to K_2$ for any $v \in V_{a_0,a_1}$ and $v' \in V_{a_0,a'_1}$ by setting $U_{v,v'}(a_2) := U_{\{v,v'\}}(a)_2(v,v')$ for all $a_2 \in A_2$, where $a \in A^{(\{v,v'\})}$ is the undirected hypergraph defined explicitly by $$ a_0() := a_0; a_1(v) := a_1; a_1(v') := a'_1; a_2(v,v') = a_2(v',v) = a_2.$$ Now we crucially use the fact that $\mathcal{P}$ is undirected to conclude that $U_{v,v'} = U_{v',v}$. Thus $U_{v,v'}$ can be viewed as describing a $K_2^{A_2}$-coloured graph $G_{a_0}$ on the vertex set $\bigcup_{a_1 \in A_1} V_{a_0,a_1}$ for each $a_0 \in A_0$, and in particular defining bipartite graphs between $V_{a_0,a_1}$ and $V_{a_0,a'_1}$ when $a_1 \neq a'_1$. Applying Ramsey's theorem (as well as the bipartite Ramsey theorem\footnote{See for instance \cite[\S 1.2, 5.1]{GraRotSpe} for statements and proofs of these theorems. These theorems can be deduced from Theorem \ref{lgr} by a slight modification of the arguments used to prove Corollary \ref{ramsey}, but we of course cannot do so here as that would be circular.}) repeatedly, we thus conclude (if $M$ is sufficiently large depending on $N, A, K$) that we can find subsets $V'_{a_0,a_1} \subset V_{a_0,a_1}$ for $a_0 \in A_0$ and $a_1 \in A_1$ of size \begin{equation}\label{vaa}
|V'_{a_0,a_1}| = N \end{equation} such that $G_{a_0}$ is monochromatic on $V'_{a_0,a_1} \times V'_{a_0,a'_1}$ for all $a_0 \in A_0$ and $a_1,a'_1 \in A_1$ (not necessarily distinct). In other words, we can find maps $U_{a_0,a_1,a'_1}: A_2 \to K_2$ for $a_0 \in A_0$ and $a_1,a'_1 \in A_1$ with $U_{a_0,a_1,a'_1} = U_{a_0,a'_1,a_1}$ such that $U_{v,v'} = U_{a_0,a_1,a'_1}$ for all $v \in V'_{a_0,a_1}$ and $v' \in V'_{a_0,a'_1}$.
Let us place an arbitrary total ordering $<$ on $K_2$, which in particular defines a minimum function $\min: K_2 \times K_2 \to K_2$. We now define a deterministically continuous natural transformation $S: A \to K$ by setting \begin{align*} S^{(W)}(a)_0() &:= U_0(a_0) \\ S^{(W)}(a)_1(w) &:= U_1(a_0,a_1(w))_1 \\ S^{(W)}(a)_2(w,w') &:= \min( U_{a_0, a_1(w), a_1(w')}(a_2(w,w')), U_{a_0, a_1(w), a_1(w')}(a_2(w',w)) ) \end{align*} for all vertex sets $W$ and all $a \in A^{(W)}$. One easily verifies that $S$ is indeed a deterministically continuous natural transformation. Now we verify \eqref{snap}. If $a \in A^{([N])}$, observe (from \eqref{vaa}) that we can find a morphism $\Phi \in \operatorname{Inj}([N], V)$ such that $\Phi(n) \in V'_{a_0(), a_1(n)}$ for all $n \in [N]$. Define the symmetrisation $\tilde G \in K^{([N])}$ of any $G \in K^{([N])}$ by defining $\tilde G_0 := G_0$, $\tilde G_1 := G_1$, and $\tilde G_2( n, m) := \min( G_2(n,m), G_2(m,n) )$ for all $(n,m) \in \operatorname{Inj}([2],[N])$; in particular, $\tilde G = G$ whenever $G$ is undirected. By chasing all the definitions we see that $$ S^{([N])}(a) = \widetilde{K^{(\Phi)}( U( b ) )}$$ for any $b \in A^{(V)}$ with $A^{(\Phi)}(b) = a$. Since $U(b)$ obeys $\mathcal{P}$ and is thus undirected, we obtain \eqref{snap} as desired.
Finally, we verify \eqref{mussel}. Let $b \in A^{([k])}$ be drawn at random with law $(\overline{\alpha} \circ \mu)^{([k])}$, and let $G := \overline{\sigma}^{([k])}(b) \in K^{([k])}$. By \eqref{repo}, we see that with probability $1 - o_{\alpha \to \infty}(1)$ we have \begin{equation}\label{kphu} K^{(\phi)}(U(a)) = G \end{equation} whenever $\phi \in \operatorname{Inj}([k],V)$ and $a \in A^{(V)}$ satisfies $A^{(\phi)}(a) = b$; let us now condition on this event. Since $U(a) \in \mathcal{P}^{(V)}$, we conclude that $G \in \mathcal{P}^{([k])}$; in particular, $G$ is undirected.
To prove \eqref{mussel}, it will suffice to show that $S^{([k])}(b) = G$. In view of the definition of $S$, it will suffice to show that \begin{align*} G_0() &= U_0(b_0) \\ G_1(i) &= U_1(b_0,b_1(i))_1 \\ G_2(i,j) &= U_{b_0,b_1(i), b_1(j)}(b_2(i,j)) \\ G_2(i,j) &= U_{b_0,b_1(i), b_1(j)}(b_2(j,i)) \end{align*} for all distinct $i, j \in [k]$. But these claims all follow from \eqref{kphu} and the definition of $U_0$, $U_1$, $U_{b_0,b_1(i),b_1(j)}$ by choosing $\phi$ appropriately. \end{proof}
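The symmetrisation $G \mapsto \tilde G$ used in the preceding proof is easy to check in code. The sketch below encodes only the order-$2$ component $G_2$, as a dict on ordered pairs with colours from an arbitrarily totally ordered $K_2$ (here integers).

```python
def symmetrise(G2):
    """~G_2(n, m) := min(G_2(n, m), G_2(m, n)); the result is undirected."""
    return {(n, m): min(G2[(n, m)], G2[(m, n)]) for (n, m) in G2}

# a directed edge-colouring on the vertex set {1, 2, 3}
G2 = {(1, 2): 1, (2, 1): 0, (1, 3): 2, (3, 1): 2, (2, 3): 0, (3, 2): 1}
```

In particular $\tilde G = G$ whenever $G$ is already undirected, which is what lets \eqref{snap} be deduced from $U(b) \in \mathcal{P}^{(V)}$.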
Finally, we establish the repairability of partite hypergraph properties.
\begin{proof}[Proof of Proposition \ref{part-prop} assuming Proposition \ref{repair}] Once again we repeat the proof of Proposition \ref{monotone-prop}, with $K$ a finite palette of order $k \geq 1$ and $\mathcal{P}$ a partite hypergraph $K$-property, and let $Z, \kappa, \mu, \alpha, A, \sigma, N$ be as in the previous proof. As before, our objective is to locate a deterministically continuous natural transformation $S: A \to K$ obeying \eqref{mussel} and \eqref{snap}. In this case we will use partite Ramsey theory instead of Ramsey theory or monotonicity to construct $S$.
Let $M$ be a large integer (depending on $N,A,K$) to be chosen later. We let $V := [M] \times A_1$, thus $V$ is the disjoint union of the sets $V_{a_1} := [M] \times \{a_1\}$ of cardinality $M$ for $a_1 \in A_1$. We apply Proposition \ref{repair} to obtain a local map $U: A^{(V)} \to \mathcal{P}^{(V)}$ obeying \eqref{repo}. From locality as before, we also have maps $U_W: A^{(W)} \to \mathcal{P}^{(W)}$ for all $W \subset V$.
Let $0 \leq j \leq k$, and let $\psi \in \operatorname{Inj}([j],A_1)$ be a morphism. For any vertices $v_1 \in V_{\psi(1)}, \ldots, v_j \in V_{\psi(j)}$, one can define a map $U_{\psi; v_1,\ldots,v_j}: A^{([j])} \to \mathcal{P}^{([j])}$ by the formula $$ U_{\psi; v_1,\ldots,v_j} := K^{(v)} \circ U_{\{v_1,\ldots,v_j\}} \circ A^{(v^{-1})}$$ where $v: [j] \to \{v_1,\ldots,v_j\}$ is the bijection that sends $i$ to $v_i$ for $i \in [j]$. One can view this map as defining a $j$-partite $j$-uniform $(\mathcal{P}^{([j])})^{A^{([j])}}$-coloured hypergraph on the disjoint vertex classes $V_{\psi(1)},\ldots,V_{\psi(j)}$.
The number of pairs $(j, \psi)$ is finite (and independent of $M$), and the sizes of the palettes $(\mathcal{P}^{([j])})^{A^{([j])}}$ are also finite and independent of $M$. Thus by applying the partite hypergraph Ramsey theorem (see e.g. \cite[\S 5.1]{GraRotSpe}) repeatedly, we conclude (if $M$ is sufficiently large depending on $N,A,K$) that there exist sets $V'_{a_1} \subset V_{a_1}$ of cardinality $|V'_{a_1}| = N$ for all $a_1 \in A_1$ such that all the partite hypergraphs mentioned above are monochromatic, or in other words that for every $0 \leq j \leq k$ and $\psi \in \operatorname{Inj}([j],A_1)$ there exists a map $U_\psi: A^{([j])} \to \mathcal{P}^{([j])}$ such that $U_{\psi; v_1,\ldots,v_j} = U_\psi$ for all $v_1 \in V'_{\psi(1)},\ldots,v_j \in V'_{\psi(j)}$.
Fix the $V'_{a_1}$ and $U_\psi$. We now introduce the deterministically continuous natural transformation $S: A \to K$ by defining $S^{(W)}(a)_j(\phi) \in K_j$ for vertex sets $W$, hypergraphs $a \in A^{(W)}$, integers $0 \leq j \leq k$, and morphisms $\phi \in \operatorname{Inj}([j],W)$ according to the following rule. If $\phi$ is a partite edge for $a$ (thus the map $a_1 \circ \phi: [j] \to A_1$ is a morphism) then we set \begin{equation}\label{swaj}
S^{(W)}(a)_j(\phi) := U_{a_1 \circ \phi}( A^{(\phi)}(a) )_j(\phi);
\end{equation} otherwise, if $\phi$ is not a partite edge, we set \begin{equation}\label{swaj-2} S^{(W)}(a)_j(\phi) := \sigma_j( a_j(\phi) ). \end{equation} One easily verifies that $S$ is a deterministically continuous natural transformation. Now we verify \eqref{snap}. Let $a \in A^{([N])}$ be arbitrary. Since each of the $V'_{a_1}$ has cardinality $N$, we can find a morphism $\Phi: [N] \to V$ such that $\Phi(n) \in V'_{a_1(n)}$ for all $n \in [N]$. Let $b \in A^{(V)}$ be any hypergraph such that $A^{(\Phi)}(b) = a$. By chasing all the definitions (and using the local nature of $U$), we conclude that $$ S^{([N])}(a)_j(\phi) = U(b)_j( \Phi \circ \phi )$$ for all $0 \leq j \leq k$ and all partite edges $\phi \in \operatorname{Inj}([j],[N])$. By Definition \ref{partite}, we conclude that $S^{([N])}(a)$ is partite equivalent to $K^{(\Phi)}(U(b))$. Since $U(b) \in \mathcal{P}^{(V)}$ and $\mathcal{P}$ is partite, we obtain $S^{([N])}(a) \in \mathcal{P}^{([N])}$ as required.
Now we prove \eqref{mussel}. Let $b \in A^{([k])}$ be drawn at random with law $(\overline{\alpha} \circ \mu)^{([k])}$, and let $G := \overline{\sigma}^{([k])}(b) \in K^{([k])}$. By \eqref{repo}, we see with probability $1-o_{\alpha \to \infty}(1)$ that \eqref{kphu} holds whenever $\phi \in \operatorname{Inj}([k],V)$ and $a \in A^{(V)}$ satisfies $A^{(\phi)}(a) = b$. Conditioning on this event, we conclude from \eqref{swaj} and the definition of the $U_\psi$ that $S^{([k])}(b)_j(\phi) = \sigma_j(b_j(\phi))$ whenever $0 \leq j \leq k$ and $\phi \in \operatorname{Inj}([j],[k])$ is a partite edge of $b$. Combining this with \eqref{swaj-2} we obtain \eqref{mussel}. \end{proof}
To conclude the proof of all our main theorems, it remains to establish Proposition \ref{rs-prop} and Proposition \ref{repair}. This will be the purpose of the remaining sections.
\subsection{Reduction to discretisations of the identity}\label{discred-sec}
In the previous sections, we have reduced all of our testability and repair claims to two propositions, namely Propositions \ref{rs-prop} and \ref{repair}. In this section, we show how these propositions follow from two further propositions, which assert the existence of two different ways to approximate the identity natural transformation $\operatorname{id}_Z: Z \to Z$ by more discrete natural transformations that factor through a colouring $\alpha: Z \to A$.
\begin{proposition}[First discretisation of the identity]\label{disc-ident} Let $Z$ be a sub-Cantor palette of some order $k \geq 0$, and let $\mu: \operatorname{pt} \to Z$ be a regular exchangeable $Z$-recipe. Then for any colouring $\alpha: Z \to A$ there exist $j$-independent natural transformations $Q_{\alpha,j}: Z_{<j} \times A_{\geq j} \to Z_{\leq j} \times A_{>j}$ for each $0 \leq j \leq k$ with the following properties: \begin{itemize} \item[(i)] ($Q_{\alpha,j}$ only modifies the $j$ component) For each $\alpha$ and each $0 \leq j \leq k$, the diagram $$ \begin{CD} Z_{<j} \times A_{\geq j} @>{Q_{\alpha,j}}>> Z_{\leq j} \times A_{>j} \\ @VVV @VVV \\ Z_{<j} \times A_{>j} @= Z_{<j} \times A_{>j} \end{CD} $$ commutes, where the vertical arrows denote the obvious projection natural transformations. \item[(ii)] (Absolute continuity) For each $\alpha$, every finite vertex set $V$, and every $a \in A^{(V)}$, we have $$ (Q_{\alpha,k} \circ \ldots \circ Q_{\alpha,0})^{(V)}(a) \ll \mu^{(V)}.$$ \item[(iii)] (Convergence to the diagonal) Given any finite vertex set $V$ and any continuous function $F: Z^{(V)} \times Z^{(V)} \to \mathbf{R}$, we have $$ \lim_{\alpha \to \infty} \int_{Z^{(V)}} \left( \int_{Z^{(V)}} F( z, z')\ (Q_{\alpha,k} \circ \ldots \circ Q_{\alpha,0} \circ \overline{\alpha})^{(V)}(z,dz') \right)\ d\mu^{(V)}(z) = \int_{Z^{(V)}} F(z,z)\ d\mu^{(V)}(z).$$ \end{itemize} \end{proposition}
\begin{example}\label{d1} Let $Z = (\operatorname{pt}, Z_1)$ and $k=1$ for some sub-Cantor space $Z_1$, and let $\mu \in \operatorname{Pr}(Z_1)$ be a probability measure, which can be identified with an exchangeable $Z$-recipe by Example \ref{rvc}. Let $\alpha: Z \to A$ be a colouring of $Z$, let $Q_0: A \to A$ be the identity map, and let $Q_1: A \to Z$ be the natural transformation defined by $Q_1^{(V)}(a) := \prod_{v \in V} \mu_{a_1(v)}$ for any vertex set $V$ and $a \in A^{(V)}$, where we identify $Z^{(V)}$ with $Z_1^V$ and for any $a_1 \in A_1$, $\mu_{a_1} \in \operatorname{Pr}(Z_1)$ is the measure $(\mu|C_{a_1})$ if $\mu(C_{a_1})>0$ and $\mu$ otherwise, where $C_{a_1} := \alpha_1^{-1}(\{a_1\})$. Then one easily verifies that $Q_0, Q_1$ obey the properties described above. Roughly speaking, the map $Q_1$ maps points $z$ in $Z_1$ to the uniform distribution on the $A_1$-cell that $z$ lies in; as the colouring $\alpha$ gets finer and finer, this map converges to the identity in a weak sense, which corresponds to the property (iii) above. \end{example}
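A brief simulation of Example \ref{d1} (an illustrative sketch only): take $Z_1 = [0,1)$ with $\mu$ Lebesgue measure and the dyadic colouring into $2^n$ cells, so that $Q_1$ sends a point $z$ to the conditional measure $(\mu|C_{\alpha_1(z)})$ on its cell; property (iii) then appears as the mean-square distance between $z$ and a $Q_1$-sample shrinking as $n$ grows.

```python
import random

def q1_sample(z, n, rng):
    """Draw z' from (mu | C_{a_1}) where a_1 = alpha_1(z) indexes the dyadic
    cell [c/2**n, (c+1)/2**n) containing z, and mu is Lebesgue on [0,1)."""
    c = int(z * 2 ** n)
    return (c + rng.random()) / 2 ** n

rng = random.Random(1)
zs = [rng.random() for _ in range(3000)]  # mu-distributed sample points

def mean_sq_gap(n):
    """Monte Carlo estimate of the double integral of (z - z')^2 against
    Q_1(z, dz') and d(mu)(z), i.e. property (iii) with F(z,z') = (z-z')^2."""
    return sum((z - q1_sample(z, n, rng)) ** 2 for z in zs) / len(zs)
```

Since a $Q_1$-sample stays in the same cell as $z$, the gap is bounded by the cell width $2^{-n}$, exhibiting the weak convergence to the identity as the colouring refines.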
\begin{proposition}[Second discretisation of the identity]\label{disc-ident2} Let $Z$ be a sub-Cantor palette of some order $k \geq 0$, and let $\mu: \operatorname{pt} \to Z$ be a regular exchangeable $Z$-recipe. Then for any colouring $\alpha: Z \to A$ there exists a sub-Cantor space $X_\alpha$ (which we view as a sub-Cantor palette of order $0$) with a probability measure $\nu_\alpha \in \operatorname{Pr}(X_\alpha)$ (which we view as a natural transformation $\nu_\alpha: \operatorname{pt} \to X_\alpha$), together with a deterministically continuous natural transformation $\zeta_\alpha: A \times X_\alpha \to Z$, with the following properties: \begin{itemize} \item[(i)] (Asymptotic absolute continuity) The measure $$ \zeta_\alpha^{([k])} \circ ((\overline{\alpha} \circ \mu) \oplus \nu_\alpha)^{([k])} \in \operatorname{Pr}(Z^{([k])})$$ is $o_{\alpha \to \infty}(1)$-absolutely continuous with respect to $\mu^{([k])}$ (see Definition \ref{epsac-def} for a definition of $\varepsilon$-absolute continuity). \item[(ii)] (Convergence to the diagonal) Given any finite vertex set $V$ and any continuous function $F: Z^{(V)} \times Z^{(V)} \to \mathbf{R}$, we have $$ \lim_{\alpha \to \infty} \int_{Z^{(V)}} \int_{X_\alpha} F( z, \zeta_\alpha^{(V)}( \overline{\alpha}^{(V)}(z), x) )\ d\nu_\alpha(x) d\mu^{(V)}(z) = \int_{Z^{(V)}} F(z,z)\ d\mu^{(V)}(z).$$ \end{itemize} \end{proposition}
\begin{remark} The situation in the above proposition can be depicted by the following diagram, $$ \begin{CD} Z @<<< Z \times X_\alpha @>{\overline{\alpha}\oplus \operatorname{id}}>> A \times X_\alpha @>{\zeta_\alpha}>> Z\\ @A{\mu}AA @A{\mu \oplus \nu_\alpha}AA @A{(\overline{\alpha} \circ \mu) \oplus \nu_\alpha}AA @.\\ \operatorname{pt} @= \operatorname{pt} @= \operatorname{pt} @. \end{CD}; $$ informally speaking, the proposition asserts that the right map from $Z \times X_\alpha$ to $Z$ is asymptotically absolutely continuous and asymptotically convergent to the left map. \end{remark}
\begin{example}\label{d2} Let $Z = (\operatorname{pt}, Z_1)$ and $k=1$ for some sub-Cantor space $Z_1$, and let $\mu \in \operatorname{Pr}(Z_1)$ be a probability measure, which can be identified with an exchangeable $Z$-recipe by Example \ref{rvc}. For technical reasons we also select an arbitrary element $z_*$ of $Z_1$. Let $\alpha: Z \to A$ be a colouring of $Z$. For each $a_1 \in A_1$, we define the cell $C_{a_1} := \alpha_1^{-1}(\{a_1\})$, and draw $\zeta_{a_1} \in Z_1$ independently at random for each $a_1$ with law $(\mu|C_{a_1})$ if $\mu(C_{a_1}) > 0$, or with law $\delta_{z_*}$ otherwise. We then define the natural transformation $\zeta_\alpha: A \to Z$ by setting $\zeta_\alpha^{(V)}(a)_1(v) := \zeta_{a_1(v)}$ for all vertex sets $V$, all $a \in A^{(V)}$, and all $v \in V$; one easily verifies that this transformation obeys all the required properties; compare this construction with that in Example \ref{d1}. As we shall see in Section \ref{disc2-sec}, the situation becomes more complicated when $k>1$ due to the presence of ``indistinguishable'' pairs of elements of $A^{([j])}$ for $1 < j \leq k$ which are coupled together, which forces some modification to the above procedure of selecting each value of $\zeta$ independently. \end{example}
In the rest of this section we show how Proposition \ref{rs-prop} follows from Proposition \ref{disc-ident}, and how Proposition \ref{repair} follows by combining Proposition \ref{disc-ident} with Proposition \ref{disc-ident2}.
\begin{proof}[Proof of Proposition \ref{rs-prop} assuming Proposition \ref{disc-ident}] Let $K$ be a finite palette of some order $k \geq 0$, let $\mathcal{P}$ be a hereditary $K$-property, let $Z$ be a sub-Cantor palette, let $\kappa: Z \to K$ be a colouring, and let $\mu: \operatorname{pt} \to Z$ be a regular exchangeable $Z$-recipe such that $\kappa \circ \mu$ almost entails $\mathcal{P}$, and let $\varepsilon > 0$. Our task is to construct a weakly continuous natural transformation $T: Z \to K$ which almost entails $\mathcal{P}$, and such that \eqref{mukan} holds.
Let $\alpha: Z \to A$ be a sufficiently fine colouring of $Z$ to be chosen later. We apply Proposition \ref{disc-ident} to obtain natural transformations $Q_{\alpha,j}: Z_{<j} \times A_{\geq j} \to Z_{\leq j} \times A_{>j}$ for $0 \leq j \leq k$ with the stated properties. We then define $T = T_\alpha: Z \to K$ to be the natural transformation $$ T := \overline{\kappa} \circ Q_{\alpha,k} \circ \ldots \circ Q_{\alpha,0} \circ \overline{\alpha}.$$ Since $T$ factors through the natural transformation $\overline{\alpha}: Z \to A$, and $A$ is a finite palette, we see that $T$ must be weakly continuous. Now we verify that $T$ almost entails $\mathcal{P}$. If $V$ is a finite vertex set and $z \in Z^{(V)}$, then by Proposition \ref{disc-ident}(ii) we see that the probability measure $T^{(V)}(z)$ is absolutely continuous with respect to $(\overline{\kappa} \circ \mu)^{(V)}$. Since $\overline{\kappa} \circ \mu$ almost entails $\mathcal{P}$, we see that $\mathcal{P}^{(V)}$ has full measure with respect to $(\overline{\kappa} \circ \mu)^{(V)}$ and hence $T^{(V)}(z)$. The claim now follows from Remark \ref{sat}.
Finally, we need to verify \eqref{mukan}. Let $F: Z^{([k])} \times Z^{([k])} \to \mathbf{R}$ be the indicator function \begin{equation}\label{fdef} F(z,z') := \mathcal{I}(\overline{\kappa}^{([k])}(z) \neq \overline{\kappa}^{([k])}(z')). \end{equation} Observe that $F$ is a continuous function which vanishes on the diagonal $z=z'$, and so by Proposition \ref{disc-ident}(iii) we have $$ \int_{Z^{([k])}} F( z, (Q_{\alpha,k} \circ \ldots \circ Q_{\alpha,0} \circ \overline{\alpha})^{([k])}(z) )\ d\mu^{([k])}(z) < \varepsilon$$ for sufficiently fine $\alpha$. But by chasing all the definitions we see that this is equivalent to \eqref{mukan}. \end{proof}
\begin{remark} Note that we did not use the full strength of Proposition \ref{disc-ident} in order to establish Proposition \ref{rs-prop}. However we will need to exploit Proposition \ref{disc-ident} more thoroughly when establishing Proposition \ref{repair} below. \end{remark}
\begin{proof}[Proof of Proposition \ref{repair} assuming Proposition \ref{disc-ident} and Proposition \ref{disc-ident2}] Let $K$ be a finite palette of order $k \geq 0$, let $\mathcal{P}$ be a hereditary $K$-property, let $Z$ be a sub-Cantor palette, let $\kappa: Z \to K$ be a colouring, and let $\mu: \operatorname{pt} \to Z$ be a regular exchangeable $Z$-recipe such that $\overline{\kappa} \circ \mu$ almost entails $\mathcal{P}$. Let $\varepsilon > 0$. Our task is to show that if $\alpha: Z \to A$ is a sufficiently fine colouring which refines $\kappa$ in the sense that $\kappa = \sigma \circ \alpha$ for some $\sigma: A \to K$, then for any finite vertex set $V$ there exists a local map $U: A^{(V)} \to \mathcal{P}^{(V)}$ such that \begin{equation}\label{repo2} (\overline{\alpha} \circ \mu)^{([k])}(\Omega_U) \geq 1 - \varepsilon. \end{equation}
Let $\alpha$ be as above. As in the proof of Proposition \ref{rs-prop}, we let $F: Z^{([k])} \times Z^{([k])} \to \mathbf{R}$ be the indicator function \eqref{fdef}. If $\alpha$ is sufficiently fine, then by Proposition \ref{disc-ident2} we can find a sub-Cantor space $X_\alpha$ with a probability measure $\nu_\alpha: \operatorname{pt} \to X_\alpha$ and a deterministically continuous natural transformation $\zeta_\alpha: A \times X_\alpha \to Z$ with \begin{equation}\label{zzf} \int_{Z^{([k])}} F( z, (\zeta_\alpha \circ (\overline{\alpha} \oplus \nu_\alpha))^{([k])}(z) )\ d\mu^{([k])}(z) < \varepsilon/3 \end{equation} such that $\zeta_\alpha^{([k])} \circ ((\overline{\alpha} \circ \mu) \oplus \nu_\alpha)^{([k])}$ is $\varepsilon/3$-absolutely continuous with respect to $\mu^{([k])}$. By Proposition \ref{epsac}, we can find a compact set $E_{\alpha} \subset A^{([k])} \times X_\alpha$ such that \begin{equation}\label{excep}
((\overline{\alpha} \circ \mu) \oplus \nu_\alpha)^{([k])}(E_{\alpha}) < \varepsilon/3 \end{equation} and \begin{equation}\label{ac-out}
\zeta_\alpha^{([k])} \circ \mathcal{I}(E_{\alpha}^c) ((\overline{\alpha} \circ \mu) \oplus \nu_\alpha)^{([k])} \ll \mu^{([k])}. \end{equation}
Now let $V$ be an arbitrary finite vertex set. We let $\alpha': Z \to A'$ be another colouring (it will depend on\footnote{This introduction of a second colouring has an analogue in \cite{AloSha}, \cite{AloSha2}, in which one uses a fine Szemer\'edi partition to decide how to colour a coarse Szemer\'edi partition.} $V$ and $\alpha$) to be chosen later. We apply Proposition \ref{disc-ident} to obtain $j$-independent natural transformations $Q_{\alpha',j}: Z_{<j} \times {A'}_{\geq j} \to Z_{\leq j} \times {A'}_{>j}$ for $0 \leq j \leq k$ with the stated properties.
For each $-1 \leq j \leq k$ in turn, we use the $Q_{\alpha',j}$ to construct random local maps $U'_{\leq j}: {A'}_{\leq j}^{(V)} \to Z_{\leq j}^{(V)}$ recursively as follows. The map $U'_{\leq -1}: \operatorname{pt} \to \operatorname{pt}$ is of course the trivial map. Now suppose recursively that $0 \leq j \leq k$ and the local map $U'_{<j} := U'_{\leq j-1}: {A'}_{<j}^{(V)} \to Z_{<j}^{(V)}$ has already been chosen. For any $e \in \binom{V}{j}$, the local map $U'_{<j}$ then induces a map $U'_{<j,e}: {A'}_{<j}^{(e)} \to Z_{<j}^{(e)}$. We then randomly select, independently for each $e$, a map $U'_{\leq j,e}: {A'}_{\leq j}^{(e)} \to Z_{\leq j}^{(e)}$ by choosing $U'_{\leq j,e}(a)$ independently at random for each $a \in {A'}_{\leq j}^{(e)}$ with law $Q_{\alpha',j}^{(e)}( U'_{<j,e}(a_{<j}), a_j )$, where $a_{<j} \in {A'}_{<j}^{(e)}$ and $a_j \in {A'}_{=j}^{(e)} = {A'}_{\geq j}^{(e)}$ are the components of $a$. From Proposition \ref{disc-ident}(i), we see that we almost surely have the commutative diagram
\begin{equation}\label{upcom} \begin{CD} {A'}_{\leq j}^{(e)} @>{U'_{\leq j,e}}>> Z_{\leq j}^{(e)} \\ @VVV @VVV \\ {A'}_{<j}^{(e)} @>{U'_{<j,e}}>> Z_{<j}^{(e)} \end{CD},
\end{equation} where the vertical arrows are the obvious projection maps. We now condition on this probability $1$ event.
We then define the local map $U'_{\leq j}: {A'}_{\leq j}^{(V)} \to Z_{\leq j}^{(V)}$ to be the unique local map whose restrictions to each $e \in \binom{V}{j}$ are given by $U'_{\leq j,e}$; the condition \eqref{upcom} (and the local nature of $U'_{<j}$) ensures that the local map $U'_{\leq j}$ is well-defined.
By the $j$-independent nature of the $Q_{\alpha',j}$ (see Definition \ref{j-indep} and Proposition \ref{disc-ident}(i)), we see by induction on $j$ that for any $-1 \leq j \leq k$ and any $a \in {A'}_{\leq j}^{(V)}$, the random variable $U'_{\leq j}(a) \in Z_{\leq j}^{(V)}$ is distributed with law $Q_{\leq j}(a)$, where $Q_{\leq j}: {A'}_{\leq j} \to Z_{\leq j}$ is the unique natural transformation obeying the commutative diagram \begin{equation}\label{upcom2} \begin{CD} A' @>{Q_{\alpha',j} \circ \ldots \circ Q_{\alpha',0}}>> Z_{\leq j} \times {A'}_{>j} \\ @VVV @VVV \\ {A'}_{\leq j} @>{\rlap{$\scriptstyle{\ \ \ \ \ \ Q_{\leq j}}$}\phantom{Q_{\alpha',j} \circ \ldots \circ Q_{\alpha',0}}}>> Z_{\leq j} \end{CD}, \end{equation} where the vertical arrows are the obvious projection natural transformations. Applying this with $j=k$, we conclude that for any $a \in (A')^{(V)}$, the random variable $U'_{\leq k}(a) \in Z^{(V)}$ is distributed with law $(Q_{\alpha',k} \circ \ldots \circ Q_{\alpha',0})^{(V)}(a)$. In particular, we see from Proposition \ref{disc-ident}(ii) that the distribution of $U'_{\leq k}(a)$ is absolutely continuous with respect to $\mu^{(V)}$. Since $\overline{\kappa} \circ \mu$ almost entails $\mathcal{P}$, we conclude that $\overline{\kappa}^{(V)} \circ U'_{\leq k}(a)$ obeys $\mathcal{P}$ almost surely. In other words, we see with probability $1$ that the map $\overline{\kappa}^{(V)} \circ U'_{\leq k}$ maps $(A')^{(V)}$ to $\mathcal{P}^{(V)}$. We now condition on this probability $1$ event.
We choose $x \in X_\alpha$ at random with law $\nu_\alpha$ (independently of all previous random choices), and define the (probabilistic) map $U = U_x: A^{(V)} \to \mathcal{P}^{(V)}$ by composing together the chain \begin{equation}\label{uxdef0} \begin{CD} A^{(V)} @>{\operatorname{id} \times x}>> A^{(V)} \times X_\alpha @>{\zeta_\alpha^{(V)}}>> Z^{(V)} @>{\overline{\alpha'}^{(V)}}>> (A')^{(V)} @>{U'_{\leq k}}>> Z^{(V)} @>{\overline{\kappa}^{(V)}}>> K^{(V)} \end{CD} \end{equation} or in other words by the formula \begin{equation}\label{uxdef}
U_x(a) := (\overline{\kappa}^{(V)} \circ U'_{\leq k} \circ \overline{\alpha'}^{(V)} \circ \zeta_\alpha^{(V)})(a,x)
\end{equation} for all $a \in A^{(V)}$.
By construction we see that (with probability $1$) $U_x$ does indeed map $A^{(V)}$ to $\mathcal{P}^{(V)}$; since $U'_{\leq k}$ is local and $\overline{\kappa}$, $\overline{\alpha'}$, $\zeta_\alpha$ are deterministically continuous natural transformations, we see that $U_x$ is also almost surely local. To establish the claim \eqref{repo2}, it thus suffices by the probabilistic method to show that $$ \mathbf{E} (\overline{\alpha} \circ \mu)^{([k])}(\Omega_{U_x}) \geq 1 - \varepsilon.$$
Accordingly, let us select $b \in A^{([k])}$ at random with law $(\overline{\alpha} \circ \mu)^{([k])}$. By \eqref{excep}, we see that $(b,x) \not \in E_{\alpha}$ with probability at least $1-\varepsilon/3$. Also, by \eqref{zzf}, we see that $\overline{\sigma}^{([k])}(b) = (\overline{\kappa} \circ \zeta_\alpha)^{([k])}(b,x)$ with probability at least $1-\varepsilon/3$. Thus it suffices to show that the event $$ (b,x) \not \in E_{\alpha} \hbox{ and } K^{(\phi)}(U_x(a)) \neq (\overline{\kappa} \circ \zeta_\alpha)^{([k])}(b,x) \hbox{ for some } \phi \in \operatorname{Inj}([k],V) \hbox{ and } a \in A^{(V)} \hbox{ with } A^{(\phi)}(a)=b $$ has probability at most $\varepsilon/3$.
Fix $\phi \in \operatorname{Inj}([k],V)$; by the union bound, it suffices to show that the event $$ (b,x) \not \in E_{\alpha} \hbox{ and } K^{(\phi)}(U_x(a)) \neq (\overline{\kappa} \circ \zeta_\alpha)^{([k])}(b,x) \hbox{ for some } a \in A^{(V)} \hbox{ with } A^{(\phi)}(a)=b $$
has probability at most $\varepsilon/(3|\operatorname{Inj}([k],V)|)$.
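As a sanity check on the arithmetic, summing this bound over the $|\operatorname{Inj}([k],V)|$ choices of $\phi$ recovers exactly the $\varepsilon/3$ allowance for this event:

```latex
\[
\sum_{\phi \in \operatorname{Inj}([k],V)}
  \frac{\varepsilon}{3\,|\operatorname{Inj}([k],V)|}
  = \frac{\varepsilon}{3},
\]
% which, together with the two events of probability at most
% \varepsilon/3 isolated earlier, accounts for the total failure
% probability \varepsilon in \eqref{repo2}.
```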
Write $z:= \zeta_\alpha^{([k])}(b,x)$, $a' := \overline{\alpha'}^{([k])}(z)$, and $e := \phi([k])$. From \eqref{uxdef} (or \eqref{uxdef0}) we see that $$ K^{(\phi)}(U_x(a)) = \overline{\kappa}^{([k])} \circ Z^{(\phi)} \circ U'_{\leq k, e} \circ (A')^{(\phi^{-1})}(a')$$ whenever $a \in A^{(V)}$ is such that $A^{(\phi)}(a) = b$, where $U'_{\leq k,e}: (A')^{(e)} \to Z^{(e)}$ is the localisation of the local map $U'_{\leq k}: (A')^{(V)} \to Z^{(V)}$. Thus, if we write $z' := U'_{\leq k, e} \circ (A')^{(\phi^{-1})}(a')$, it suffices to show that the event $$ (b,x) \not \in E_{\alpha} \hbox{ and } \overline{\kappa}^{([k])}(z') \neq \overline{\kappa}^{([k])}(z) $$
has probability at most $\varepsilon/(3|\operatorname{Inj}([k],V)|)$. By \eqref{fdef}, it thus suffices to show that \begin{equation}\label{evff}
\mathbf{E} \left( \mathcal{I}((b,x) \not \in E_{\alpha}) F(z,z') \right) < \frac{\varepsilon}{3|\operatorname{Inj}([k],V)|}. \end{equation}
Recall that for any $a \in (A')^{(V)}$, the random variable $U'_{\leq k}(a) \in Z^{(V)}$ is distributed with law $(Q_{\alpha',k} \circ \ldots \circ Q_{\alpha',0})^{(V)}(a)$. This implies for fixed $a'$ that $z'$ is distributed with law $(Q_{\alpha',k} \circ \ldots \circ Q_{\alpha',0} \circ \overline{\alpha'})^{([k])}(z)$. Thus we can write the left-hand side of \eqref{evff} as $$ \int_{Z^{([k])}} f_{\alpha'}(z)\ d\mu_\alpha(z)$$ where $f_{\alpha'}: Z^{([k])} \to [0,1]$ is the measurable function $$ f_{\alpha'}(z) := \int_{Z^{([k])}} F(z,z')\ (Q_{\alpha',k} \circ \ldots \circ Q_{\alpha',0} \circ \overline{\alpha'})^{([k])}(z, dz')$$ and $\mu_\alpha$ is the finite measure $$ \mu_\alpha := \zeta_\alpha^{([k])} \circ \mathcal{I}(E_{\alpha}^c) ((\overline{\alpha} \circ \mu) \oplus \nu_\alpha)^{([k])}.$$ Now, by Proposition \ref{disc-ident}(iii), we have $$ \lim_{\alpha' \to \infty} \int_{Z^{([k])}} f_{\alpha'}(z)\ d\mu^{([k])}(z) = 0;$$ thus (by Markov's inequality) $f_{\alpha'}$ converges in measure to zero with respect to $\mu^{([k])}$ as $\alpha' \to \infty$. On the other hand, from \eqref{ac-out} we see that $\mu_\alpha$ is absolutely continuous with respect to $\mu^{([k])}$. By Proposition \ref{epsac}, we conclude that $f_{\alpha'}$ also converges in measure to zero with respect to $\mu_\alpha$, and so $$ \lim_{\alpha' \to \infty} \int_{Z^{([k])}} f_{\alpha'}(z)\ d\mu_\alpha(z) = 0.$$ Thus, by choosing $\alpha'$ sufficiently fine depending on $\alpha, V, \varepsilon$, we obtain \eqref{evff} as required for every choice of $\phi$. \end{proof}
To complete the proof of our testability and local repair results, it suffices to prove Proposition \ref{disc-ident} and Proposition \ref{disc-ident2}. This is the purpose of the next two sections.
\subsection{Proof of Proposition \ref{disc-ident}}\label{disc-sec}
We now prove Proposition \ref{disc-ident}. Let $Z, k, \mu, \alpha$ be as in that proposition. By Definition \ref{regexp}, we can factor $$ \mu = P_k \circ \ldots \circ P_0$$ where for each $0 \leq j \leq k$, $P_j: Z_{<j} \to Z_{\leq j}$ is a $j$-independent natural transformation such that $\pi_{<j} \circ P_j = \operatorname{id}_{Z_{<j}}$, where $\pi_{<j}: Z_{\leq j} \to Z_{<j}$ is the projection natural transformation. From Definition \ref{j-indep}, we conclude that \begin{equation}\label{pjw}
P_j^{(V)}(z) = \delta_z \times Q_j^{(V)}(z) = \delta_z \times \prod_{e \in \binom{V}{j}} Q_j^{(e)}(z\downharpoonright_e) \end{equation} for all vertex sets $V$ and all $z \in Z_{<j}^{(V)}$, and some $j$-independent natural transformation $Q_j: Z_{<j} \to Z_{=j}$, where we identify $Z_{\leq j}^{(V)}$ with $Z_{<j}^{(V)} \times \prod_{e \in \binom{V}{j}} Z_{=j}^{(e)}$ in the obvious manner.
Suppose that $e$ is a vertex set of size $|e|=j$, $z \in Z_{<j}^{(e)}$, and $a \in A_{=j}^{(e)}$. We define the \emph{cell} $C_a \subset Z_{=j}^{(e)}$ associated to $a$ by the formula $$ C_a := (\alpha_{=j}^{(e)})^{-1}(\{a\}) = \{ y \in Z_{=j}^{(e)}: \alpha_j(y(\phi)) = a(\phi) \hbox{ for all } \phi \in \operatorname{Inj}([j],e) \}$$
and then define the measure $\nu_{e,z,a} \in \operatorname{Pr}( Z_{=j}^{(e)} )$ to equal the conditioned measure $(Q_j^{(e)}(z)|C_a)$ (as defined in Appendix \ref{prob}) if $Q_j^{(e)}(z)(C_a) > 0$, or $Q_j^{(e)}(z)$ otherwise.
We then define the natural transformation $Q_{\alpha,j}: Z_{<j} \times A_{\geq j} \to Z_{\leq j} \times A_{>j}$ by the formula \begin{equation}\label{qjw} Q_{\alpha,j}^{(V)}( z_{<j}, a_j, a_{>j} ) := \delta_{z_{<j}} \times \prod_{e \in \binom{V}{j}} \nu_{e,z_{<j}\downharpoonright_e,a_j\downharpoonright_e} \times \delta_{a_{>j}} \end{equation} for all vertex sets $V$ and all $z_{<j} \in Z_{<j}^{(V)}$, $a_j \in A_{=j}^{(V)}$, $a_{>j} \in A_{>j}^{(V)}$, where we identify $Z_{<j}^{(V)} \times A_{\geq j}^{(V)}$ with $Z_{<j}^{(V)} \times A_{=j}^{(V)} \times A_{>j}^{(V)}$ and $Z_{\leq j}^{(V)} \times A_{>j}^{(V)}$ with $Z_{<j}^{(V)} \times \prod_{e \in \binom{V}{j}} Z_{=j}^{(e)} \times A_{>j}^{(V)}$ in the obvious manner. Note that we can factor $Q_{\alpha,j} = {Q'}_{\alpha,j} \oplus \operatorname{id}_{A_{>j}}$, where ${Q'}_{\alpha,j}: Z_{<j} \times A_{=j} \to Z_{\leq j}$ is defined by \begin{equation}\label{qjw2} {Q'}_{\alpha,j}^{(V)}( z_{<j}, a_j ) := \delta_{z_{<j}} \times \prod_{e \in \binom{V}{j}} \nu_{e,z_{<j}\downharpoonright_e,a_j\downharpoonright_e}. \end{equation}
(Compare this with Example \ref{d1}.)
One easily verifies that $Q_{\alpha,j}$ is a natural transformation, is $j$-independent, and obeys claim (i) of Proposition \ref{disc-ident}. Also observe from construction that $\nu_{e,z,a}$ is absolutely continuous with respect to $Q_j^{(e)}(z)$ for all $0 \leq j \leq k$, $|e|=j$, $z \in Z_{<j}^{(e)}$, and $a \in A_{=j}^{(e)}$. By \eqref{pjw}, \eqref{qjw}, and Lemma \ref{pac}, we conclude the absolute continuity relationship $$ Q_{\alpha,j}^{(V)}( z_{<j}, a_j, a_{>j} ) \ll P_j^{(V)}(z_{<j}) \times \delta_{a_{>j}}$$ for all finite vertex sets $V$, all $0 \leq j \leq k$, and all $z_{<j} \in Z_{<j}^{(V)}$, $a_j \in A_{=j}^{(V)}$, $a_{>j} \in A_{>j}^{(V)}$. Iterating this using Lemma \ref{pac}, we obtain claim (ii) of Proposition \ref{disc-ident}.
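Spelled out, the iterated absolute continuity chain reads as follows (this merely restates claim (ii) in displayed form, using the factorisation of $\mu$ above):

```latex
\[
(Q_{\alpha,k} \circ \ldots \circ Q_{\alpha,0} \circ \overline{\alpha})^{(V)}(z)
\;\ll\;
(P_k \circ \ldots \circ P_0)^{(V)}
= \mu^{(V)}
\qquad \text{for all finite vertex sets } V
\text{ and all } z \in Z^{(V)}.
\]
```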
It remains to prove claim (iii) of Proposition \ref{disc-ident}, which is the most difficult estimate. The key tool will be Littlewood's principle (Lemma \ref{dom}). For inductive reasons we need to prove the following rather technical statement. For any $-1 \leq j \leq k$, we introduce the exchangeable $Z_{\leq j}$-recipe $\mu_{\leq j}: \operatorname{pt} \to Z_{\leq j}$ by the formula $$ \mu_{\leq j} := P_j \circ \ldots \circ P_0$$ and the natural transformation $T_{\leq j}: Z_{\leq j} \to Z_{\leq j}$ to be the unique natural transformation such that $$ T_{\leq j} \circ \pi_{Z \to Z_{\leq j}} = \pi_{Z_{\leq j} \times A_{>j} \to Z_{\leq j}} \circ Q_{\alpha,j} \circ \ldots \circ Q_{\alpha,0} \circ \overline{\alpha}$$ where $\pi_{Z \to Z_{\leq j}}$ and $\pi_{Z_{\leq j} \times A_{>j} \to Z_{\leq j}}$ are the projection natural transformations.
\begin{lemma}[Convergence to the diagonal]\label{coerce-conv} Let the notation and assumptions be as above. Let $V$ be a finite vertex set, let $-1 \leq j \leq k$, let $H$ be a finite-dimensional Hilbert space depending on $\alpha_{j+1},\ldots,\alpha_k,V$ but independent of $\alpha_0,\ldots,\alpha_j$, and let $F: Z_{\leq j}^{(V)} \to H$ be a bounded measurable function which can depend on $\alpha_{j+1},\ldots,\alpha_k,V$ but is independent of $\alpha_0,\ldots,\alpha_j$. Then \begin{equation}\label{zetaj}
\int_{Z_{\leq j}^{(V)}}
\left[ \int_{Z_{\leq j}^{(V)}} \| F(z)-F(w) \|_H\ T_{\leq j}^{(V)}(z, dw) \right]\ d\mu_{\leq j}^{(V)}(z) = o_{\alpha \to \infty}(1). \end{equation} \end{lemma}
\begin{proof} We induct on $j$. The case $j =-1$ is vacuously true, so suppose that $0 \leq j \leq k$ and that the claim has already been proven for $j-1$.
Fix $V, H, F$; we may normalise $F$ to be bounded in magnitude by $1$. It is convenient to use the language of probability rather than measure theory. Let $z \in Z_{\leq j}^{(V)}$ be drawn at random with law $\mu_{\leq j}^{(V)}$, and then for fixed $z$, let $w$ be drawn at random with law $T_{\leq j}^{(V)}(z)$. Our task is to show that \begin{equation}\label{fzw}
\mathbf{E} \| F(z) - F(w) \|_H = o_{\alpha \to \infty}(1). \end{equation}
We split $z = (z_{<j},z_j)$ and $w = (w_{<j},w_j)$ for $z_{<j}, w_{<j} \in Z_{<j}^{(V)}$ and $z_j, w_j \in Z_j^{(V)}$. We similarly split $a := \overline{\alpha_{\leq j}}^{(V)}(z) \in A_{\leq j}^{(V)}$ as $a = (a_{<j}, a_j)$. Observe from construction that
\begin{itemize} \item $z_{<j} \in Z_{<j}^{(V)}$ has the distribution of $\mu_{\leq j-1}^{(V)}$; \item Given $z_{<j}$, $a_{<j}$ is determined by the formula $a_{<j} = \overline{\alpha_{<j}}^{(V)}(z_{<j})$; \item Given $z_{<j}$, $z$ is a random variable with law $P_j^{(V)}(z_{<j})$; \item Given $z$, $a_j$ is determined by the formula $a_j = \overline{\alpha_{j}}^{(V)}(z_j)$; \item Given $z_{<j}$, $w_{<j}$ is a random variable with law $T_{\leq j-1}^{(V)}(z_{<j})$; \item Given $w_{<j}$ and $a_j$, $w$ is a random variable with law ${Q'}_{\alpha,j}^{(V)}(w_{<j},a_j)$ (defined in \eqref{qjw2}). \end{itemize}
Now we write the left-hand side of \eqref{fzw} as
$$ \sum_{b \in A_{=j}^{(V)}} \mathbf{E} \left(\mathcal{I}( a_j = b ) \| F(z) - F(w) \|_H \right)$$ and estimate this using the triangle inequality as the sum of the three expressions \begin{equation}\label{x1}
\sum_{b \in A_{=j}^{(V)}} \mathbf{E} \left(\mathcal{I}( a_j = b ) \| F(z) - G_{b}(z_{<j}) \|_H\right) \end{equation} \begin{equation}\label{x2}
\sum_{b \in A_{=j}^{(V)}} \mathbf{E} \left(\mathcal{I}( a_j = b ) \| G_{b}(z_{<j}) - G_{b}(w_{<j}) \|_H\right) \end{equation} and \begin{equation}\label{x3}
\sum_{b \in A_{=j}^{(V)}} \mathbf{E} \left(\mathcal{I}( a_j = b ) \| F(w) - G_{b}(w_{<j}) \|_H\right) \end{equation} where $G_{b}: Z_{<j}^{(V)} \to H$ is the measurable function $$ G_{b}( z_{<j} ) := \int_{Z_{\leq j}^{(V)}} F(z)\ {Q'}_{\alpha,j}^{(V)}( (z_{<j},b), dz ).$$
We will show that each of \eqref{x1}, \eqref{x2}, \eqref{x3} is $o_{\alpha \to \infty}(1)$.
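For the reader's convenience, the split into \eqref{x1}, \eqref{x2}, \eqref{x3} is just the pointwise triangle inequality in $H$ applied on each event $\{a_j = b\}$:

```latex
\[
\| F(z) - F(w) \|_H
\;\le\;
\| F(z) - G_b(z_{<j}) \|_H
+ \| G_b(z_{<j}) - G_b(w_{<j}) \|_H
+ \| F(w) - G_b(w_{<j}) \|_H .
\]
```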
By the induction hypothesis we have
$$ \mathbf{E} \sum_{b \in A_{=j}^{(V)}} \| G_{b}(z_{<j}) - G_{b}(w_{<j}) \|_H = o_{\alpha \to \infty}(1)$$ and so the contribution of \eqref{x2} is acceptable.
Now let us look at \eqref{x1}. In view of the distribution of $z_{<j}$ and $z$, we can rewrite this expression as $\mathbf{E} f_{\alpha_j}( z_{<j} )$, where $$ f_{\alpha_j}(z_{<j}) := \sum_{b \in A_{=j}^{(V)}} P_j^{(V)}(z_{<j})(C_b)
\int_{Z_{\leq j}^{(V)}} \left\| F(y) - \int_{Z_{\leq j}^{(V)}} F(u)\ (P_j^{(V)}(z_{<j})|C_b)(du) \right\|_H\ (P_j^{(V)}(z_{<j})|C_b)(dy)$$ and $$ C_b := (\overline{\alpha_{=j}}^{(V)})^{-1}(\{b\}),$$ where we can of course ignore all summands on which $P_j^{(V)}(z_{<j})(C_b) = 0$. By Lemma \ref{dom}, $f_{\alpha_j}(z_{<j}) = o_{\alpha \to \infty}(1)$ for each $z_{<j}$. Applying Lemma \ref{dct}, we conclude that \eqref{x1} is $o_{\alpha \to \infty}(1)$ as desired.
Finally we look at \eqref{x3}. For each $b_j \in A_{=j}^{(V)}$, let $\Omega_{b_j} \subset Z_{<j}^{(V)}$ be the set of all $z_{<j}$ such that the event $\{ a_j = b_j \}$ has non-zero measure with respect to $P_j^{(V)}(z_{<j})$. We split \eqref{x3} further into \begin{equation}\label{x3-a}
\sum_{b_j \in A_{=j}^{(V)}} \mathbf{E} \left( \mathcal{I}( a_j = b_j ) \mathcal{I}( w_{<j} \in \Omega_{b_j} ) \| F(w) - G_{b_j}(w_{<j}) \|_H\right) \end{equation} and \begin{equation}\label{x3-b}
\sum_{b_j \in A_{=j}^{(V)}} \mathbf{E} \left(\mathcal{I}( a_j = b_j ) \mathcal{I}( w_{<j} \not \in \Omega_{b_j} ) \| F(w) - G_{b_j}(w_{<j}) \|_H\right) \end{equation} Consider the expression \eqref{x3-a}. If $w_{<j} \in \Omega_{b_j}$ and $a_j = b_j$ are fixed, then $w$ has the law of the conditioned measure $(P_j^{(V)}(w_{<j})|C_{b_j})$ appearing in the definition of $f_{\alpha_j}$ (with $z_{<j}$ replaced by $w_{<j}$). Thus we can bound \eqref{x3-a} by $$ \mathbf{E} f_{\alpha_j}(w_{<j}).$$
By the induction hypothesis, we have $\mathbf{E} |f_{\alpha_j}(w_{<j}) - f_{\alpha_j}(z_{<j})| = o_{\alpha \to \infty}(1)$, and so the contribution of \eqref{x3-a} is acceptable by our analysis of \eqref{x1}.
Finally, we turn to \eqref{x3-b}. As $F$ is bounded in magnitude by $1$, we may bound this expression crudely by $$ 2 \sum_{b_j \in A_{=j}^{(V)}} \mathbf{E} \left(\mathcal{I}( a_j = b_j ) \mathcal{I}( w_{<j} \not \in \Omega_{b_j})\right).$$ By the induction hypothesis we have
$$ 2 \sum_{b_j \in A_{=j}^{(V)}} \mathbf{E} \left|\mathcal{I}( w_{<j} \not \in \Omega_{b_j}) - \mathcal{I}( z_{<j} \not \in \Omega_{b_j})\right| = o_{\alpha \to \infty}(1)$$ so it suffices to show that $$ 2 \sum_{b_j \in A_{=j}^{(V)}} \mathbf{E} \left(\mathcal{I}( a_j = b_j ) \mathcal{I}( z_{<j} \not \in \Omega_{b_j})\right) = o_{\alpha \to \infty}(1).$$ But if $z_{<j} \in \Omega_{b_j}^c$ then $a_j$ has a zero probability of equaling $b_j$, and so the left-hand side is zero. The claim follows. \end{proof}
Now we prove Claim (iii) of Proposition \ref{disc-ident}. Observe from the Stone-Weierstrass theorem that we can approximate any continuous function $F: Z^{(V)} \times Z^{(V)} \to \mathbf{R}$ uniformly by finite linear combinations of tensor products $f(z) g(z')$, where $f: Z^{(V)} \to \mathbf{R}$ and $g: Z^{(V)} \to \mathbf{R}$ are continuous. By linearity, we may assume that $F$ itself is of this form; thus our task is to show that $$ \lim_{\alpha \to \infty} \int_{Z^{(V)}} f(z) \left( \int_{Z^{(V)}} g( z')\ T_{\leq k}^{(V)}(z,dz') \right)\ d\mu^{(V)}(z) = \int_{Z^{(V)}} f(z) g(z)\ d\mu^{(V)}(z).$$ By the triangle inequality, it suffices to show that
$$ \lim_{\alpha \to \infty} \int_{Z^{(V)}} \left( \int_{Z^{(V)}} |g( z')-g(z)|\ T_{\leq k}^{(V)}(z,dz')\right)\ d\mu^{(V)}(z) = 0.$$ But this follows immediately from Lemma \ref{coerce-conv}. The proof of Proposition \ref{disc-ident} (and thus also Theorem \ref{rs-thm-dir}) is now complete.
\subsection{Proof of Proposition \ref{disc-ident2}}\label{disc2-sec}
We now prove Proposition \ref{disc-ident2}, which is the most difficult proposition to establish in this paper. In Example \ref{d2}, we already saw how the $k=1$ case of this proposition proceeded. Unfortunately, this case does not capture the full complexity of this proposition, as it does not reveal the difficulty of dealing with ``indistinguishable'' pairs of inputs. To illustrate the problem, let us informally consider a model case in which $k=2$, $Z = (\operatorname{pt},Z_1,\{0,1\})$, and $\mu = P_2 \circ P_1$ where $P_1: \operatorname{pt} \to Z_{\leq 1}$ is given from a probability measure $Q_1 \in \operatorname{Pr}(Z_1)$ as in Example \ref{rvc}, and $P_2: Z_{\leq 1} \to Z$ takes the form $P_2^{(V)}(z) = \delta_z \times Q_2^{(V)}(z)$ for some \emph{deterministic} natural transformation $Q_2: Z_{\leq 1} \to Z_{=2}$. We will also assume that $A_2=\{0,1\}$ and that $\alpha_2: \{0,1\} \to \{0,1\}$ is the identity.
We can view the deterministically continuous natural transformation $\zeta_{\alpha}: A \times X_\alpha \to Z$ as a \emph{random} deterministically continuous natural transformation $\zeta: A \to Z$. Such a natural transformation can be built out of two functions $\zeta_1: A^{([1])} \to Z_{=1}^{([1])}$ and $\zeta_2: A^{([2])} \to Z_{=2}^{([2])}$ by requiring that $$ \zeta^{([j])}(a)_j(\phi) = \zeta_j(a)(\phi)$$ for $j=1,2$, $a \in A^{([j])}$, and $\phi \in \operatorname{Inj}([j],[j])$. Any two functions $\zeta_1, \zeta_2$ will determine a deterministically continuous natural transformation, so long as $\zeta_2$ is $\operatorname{Inj}([2],[2])$-equivariant. On the other hand, to get the convergence to the diagonal, we would like to have $\alpha_{=j}^{([j])}(\zeta_j(a)) = a_j$ for $j=1,2$ and ``most'' $a \in A^{([j])}$ (with respect to the measure $(\overline{\alpha} \circ \mu)^{([j])}$). We also need to select the $\zeta_j(a)$ in a suitably ``absolutely continuous'' manner.
We build $\zeta_1$ and $\zeta_2$ as follows. For each $a_1 \in A_1 \equiv A^{([1])}$, define the cell $C_{a_1} := \alpha_1^{-1}(\{a_1\})$, and select $\zeta_1(a_1) \in Z_1^{([1])}$ independently at random with law $(Q_1|C_{a_1})$ if $Q_1(C_{a_1})>0$, and with law $Q_1$ otherwise; this already ensures that $\alpha_1 \circ \zeta_1$ converges to the diagonal (by Littlewood's principle). Then, we can define $\zeta_2$ by $\zeta_2(a) := Q_2(\zeta_1 \circ a_1)$. Note that as long as $1$ and $2$ are \emph{distinguishable} in the sense that $a_1(1)\neq a_1(2)$, the distribution of $\zeta_1 \circ a_1 \in Z_1^2$ will be absolutely continuous with respect to $Q_1^2$, as $\zeta_1(a_1(1))$ and $\zeta_1(a_1(2))$ are independent and individually absolutely continuous with respect to $Q_1$. The required absolute continuity and convergence properties would be relatively easy to establish if the distinguishable case were the only case. However, in the indistinguishable case $a_1(1)=a_1(2)$, the random variable $\zeta_1 \circ a_1$ is no longer absolutely continuous with respect to $Q_1^2$, being concentrated on the diagonal of $Z_1^2$ (which can have zero measure), and so convergence and absolute continuity in this case is not immediately clear. To resolve this issue, observe that if $Z_1$ is atomless with respect to $Q_1$ then this indistinguishable case will be asymptotically negligible for sufficiently fine $A_1$; on the other hand, if $Z_1$ does contain atoms, then the diagonal of $Z_1^2$ acquires a positive measure with respect to $Q_1^2$, and so the difficulty again disappears. Note however that our analysis had to take note of what symmetries were obeyed by the input $a$. Later on we shall see that we will need to describe these symmetries in general by a certain \emph{groupoid} $R_a$.
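The first observation can be quantified by a short computation (a sketch under the stated atomless assumption, in the notation of the model case): if $a_1(1)$ and $a_1(2)$ are the colours of two independent $Q_1$-distributed points, then

```latex
\[
\mathbf{P}\bigl( a_1(1) = a_1(2) \bigr)
 = \sum_{a_1 \in A_1} Q_1(C_{a_1})^2
 \;\le\; \max_{a_1 \in A_1} Q_1(C_{a_1})
 = o_{\alpha \to \infty}(1),
\]
% since for an atomless Q_1 the maximal cell mass tends to zero as the
% colouring \alpha_1 is refined.
```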
We now begin the full proof of Proposition \ref{disc-ident2}. Let $Z, k, \mu, \alpha$ be as in that proposition. To simplify the notation slightly we shall omit some subscripts on $\alpha$. By Definition \ref{regexp}, we can factor $$ \mu = P_k \circ \ldots \circ P_0$$ where for each $0 \leq j \leq k$, $P_j: Z_{<j} \to Z_{\leq j}$ is a $j$-independent natural transformation such that \begin{equation}\label{pizz} \pi_{Z_{\leq j} \to Z_{<j}} \circ P_j = \operatorname{id}_{Z_{<j}}. \end{equation}
Our objective is to find a probability sub-Cantor space $(X,\nu)$ and a deterministically continuous natural transformation $\zeta: A \times X \to Z$ such that $$ \zeta^{([k])} \circ ((\overline{\alpha} \circ \mu) \oplus \nu)^{([k])}$$ is $o_{\alpha \to \infty}(1)$-absolutely continuous with respect to $\mu^{([k])}$, and which converges to the diagonal in the sense that \begin{equation}\label{lima} \lim_{\alpha \to \infty} \int_{Z^{(V)}} \int_{X} F( z, \zeta^{(V)}( \overline{\alpha}^{(V)}(z),x) )\ d\nu(x) d\mu^{(V)}(z) = \int_{Z^{(V)}} F(z,z)\ d\mu^{(V)}(z) \end{equation} for all finite vertex sets $V$ and all continuous $F: Z^{(V)} \times Z^{(V)} \to \mathbf{R}$.
This will follow from the $j=k$ case of the following inductive proposition.
\begin{proposition}[Inductive discretisation]\label{indiscrete} For any $-1 \leq j \leq k$, there exists a probability sub-Cantor space $(X_j,\nu_j)$ and a deterministically continuous natural transformation $\zeta_{\leq j}: A_{\leq j} \times X_j \to Z_{\leq j}$ such that \begin{equation}\label{ejc-ac}
\zeta_{\leq j}^{(V)} \circ ((\overline{\alpha}_{\leq j} \circ \mu_{\leq j}) \oplus \nu_j)^{(V)} \ll_{o_{\alpha \to \infty}(1)} \mu_{\leq j}^{(V)} \end{equation} for all finite vertex sets $V$, and for which we have the convergence property \begin{equation}\label{limz} \lim_{\alpha \to \infty} \int_{Z_{\leq j}^{(V)}}
\left[ \int_{Z_{\leq j}^{(V)}} \| F(z)-F(w) \|_H\ T_{\leq j}^{(V)}(z, dw) \right]\ d\mu_{\leq j}^{(V)}(z) = 0 \end{equation} for all finite vertex sets $V$, all finite-dimensional Hilbert spaces $H$, and all bounded measurable $F: Z_{\leq j}^{(V)} \to H$, where $T_{\leq j}: Z_{\leq j} \to Z_{\leq j}$ is the natural transformation $$ T_{\leq j} := \zeta_{\leq j} \circ (\overline{\alpha}_{\leq j} \oplus \nu_j),$$ $\mu_{\leq j}: \operatorname{pt} \to Z_{\leq j}$ is the exchangeable $Z_{\leq j}$-recipe $$ \mu_{\leq j} := P_j \circ \ldots \circ P_0,$$ and we allow $H$, $F$ to depend on $\alpha_{j+1},\ldots,\alpha_k$ (but must be independent of $\alpha_0,\ldots,\alpha_j$). \end{proposition}
Indeed, to establish \eqref{lima} from the $j=k$ case of \eqref{limz} one repeats the arguments at the end of the previous section.
\begin{remark} It will be more convenient to interpret \eqref{limz} probabilistically, as the assertion that if $V$ is a vertex set, $F: Z_{\leq j}^{(V)} \to H$ is a bounded measurable map into a finite-dimensional Hilbert space, $z \in Z_{\leq j}^{(V)}$ is drawn at random with law $\mu_{\leq j}^{(V)}$, $x \in X_j$ is drawn at random with law $\nu_j$, $a := \overline{\alpha_{\leq j}}^{(V)}(z) \in A_{\leq j}^{(V)}$, and $w := \zeta_{\leq j}^{(V)}(a,x)$, then \begin{equation}\label{limzo}
\mathbf{E} \|F(z) - F(w)\|_H = o_{\alpha \to \infty}(1) \end{equation} where the decay rate $o_{\alpha \to \infty}$ depends of course on $F$ and $H$. \end{remark}
It remains to prove Proposition \ref{indiscrete}. The case $j=-1$ is trivial, so suppose inductively that $0 \leq j \leq k$ and that the claim has already been proven for $j-1$. To simplify the notation slightly we shall just consider the case $j=k$; actually, we can reduce to this case by discarding all components of $Z, \alpha, A$ of order greater than $j$, and then reducing $k$ to $j$.
Henceforth $j=k$. Let $(X_{k-1},\nu_{k-1})$ and $\zeta_{<k} := \zeta_{\leq k-1}$ be given by the inductive hypothesis.
\subsubsection{Construction of $X_k$ and $\zeta_{\leq k}$}
Let $\Xi$ denote the collection of all $\operatorname{Inj}([k],[k])$-equivariant maps $\xi: A^{([k])} \to Z_{=k}^{([k])}$; observe that $\Xi$ is a compact subset of $(Z_{=k}^{([k])})^{A^{([k])}}$ and is thus a sub-Cantor space. We refer to elements $\xi$ of $\Xi$ as \emph{$k$-rules}.
We set $X_k := X_{k-1} \times \Xi$, and let $\zeta_{\leq k}: A \times X_k \to Z$ be the unique deterministically continuous natural transformation with the following two properties: \begin{itemize} \item ($\zeta_{\leq k}$ extends $\zeta_{<k}$) We have the identity $$ \pi_{Z \to Z_{<k}} \circ \zeta_{\leq k} = \zeta_{<k} \circ \pi_{A \times X_k \to A_{<k} \times X_{k-1}}$$ where $\pi_{Z \to Z_{<k}}: Z \to Z_{<k}$ and $\pi_{A \times X_k \to A_{<k} \times X_{k-1}}: A \times X_k \to A_{<k} \times X_{k-1}$ are the projection natural transformations. \item ($\zeta_{\leq k}$ extends $\xi$) We have $$ \zeta_{\leq k}^{([k])}(a,(x,\xi))_k := \xi(a)$$ for all $a \in A^{([k])}$, $x \in X_{k-1}$, and $\xi \in \Xi$. \end{itemize}
More explicitly, $\zeta_{\leq k}$ is given by the formula $$ \zeta_{\leq k}^{(V)}( (a_{<k}, a_k), (x,\xi) ) := \left( \zeta_{<k}^{(V)}(a_{<k}, x) , \left( Z^{(\phi_e^{-1})}_{=k}(\xi( A^{(\phi_e)}(a_{<k},a_k) ) ) \right)_{e \in \binom{V}{k}} \right)$$ for all vertex sets $V$, all $(a_{<k}, a_k) \in A^{(V)} \equiv A_{<k}^{(V)} \times A_{=k}^{(V)}$, all $x \in X_k$, and $\xi \in \Xi$, where for each $e \in \binom{V}{k}$, $\phi_e$ is an arbitrary morphism from $[k]$ to $e$ (the exact choice of morphism is not relevant, thanks to the $\operatorname{Inj}([k],[k])$-invariance of $\xi$), and where we identify $Z^{(V)}$ with $Z_{<k}^{(V)} \times \prod_{e \in \binom{V}{k}} Z_{=k}^{(e)}$. One easily verifies that $\zeta_{\leq k}$ is a deterministically continuous natural transformation.
\subsubsection{Construction of $\nu_k$}
To construct the measure $\nu_k \in \operatorname{Pr}(X_k)$ we will need some more notation.
\begin{definition}[Invariant space, stabiliser, indistinguishability]\label{indistinguished} Let $Y$ be a sub-Cantor palette, and $V$, $W$ be vertex sets. \begin{itemize} \item If $G \leq \operatorname{Inj}(V,V)$ is a group, we define the \emph{$G$-invariant space} $(Y^{(V)})^G := \{ y \in Y^{(V)}: Y^{(\phi)}(y) = y \hbox{ for all } \phi \in G$\}; this is a compact subspace of $Y^{(V)}$. \item If $y \in Y^{(V)}$, we define the \emph{stabiliser} ${\operatorname{stab}}(y) := \{ \phi \in \operatorname{Inj}(V,V): Y^{(\phi)}(y) = y\}$; this is a subgroup of $\operatorname{Inj}(V,V)$. \item We say that two elements $y \in Y^{(V)}$, $y' \in Y^{(W)}$ are \emph{indistinguishable} if there exists an invertible $\phi \in \operatorname{Inj}(V,W)$ such that $Y^{(\phi)}(y) = y'$ (in particular, this requires $V$ and $W$ to have equal cardinality), and \emph{distinguishable} otherwise. \end{itemize} \end{definition}
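Definition \ref{indistinguished} can be made concrete in a small finite setting. The sketch below (helper names are illustrative; tuples over the vertex set $\{0,1,2\}$ stand in for points of $Y^{(V)}$, with $Y^{(\phi)}$ acting by relabelling coordinates) computes stabilisers and tests indistinguishability by brute force over permutations.

```python
from itertools import permutations

def act(phi, y):
    """Toy pullback action Y^{(phi)}: relabel the coordinates of y by phi."""
    return tuple(y[phi[i]] for i in range(len(y)))

def stabiliser(y):
    """All permutations phi of the index set with Y^{(phi)}(y) = y."""
    return [phi for phi in permutations(range(len(y))) if act(phi, y) == y]

def indistinguishable(y, yp):
    """True iff some relabelling carries y to yp (so |V| = |W| is forced)."""
    return len(y) == len(yp) and any(
        act(phi, y) == yp for phi in permutations(range(len(y))))

y = ('a', 'a', 'b')
assert len(stabiliser(y)) == 2            # identity and the swap of the two 'a' slots
assert indistinguishable(y, ('a', 'b', 'a'))
assert not indistinguishable(y, ('a', 'b', 'b'))
```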
\begin{remark} Note that deterministically continuous natural transformations are forced to map indistinguishable elements to indistinguishable elements. In particular, the images of indistinguishable elements cannot be set independently. This lack of independence will cause significant technical difficulties in our arguments. A similar difficulty will also be caused by the fact that deterministically continuous natural transformations must map $G$-invariant spaces into $G$-invariant spaces (or equivalently, they cannot decrease the stabiliser of an element). \end{remark}
\begin{definition}[Vertical ingredient]\label{vertmes} We define $Q: Z_{<k} \to Z_{=k}$ to be the unique natural transformation such that \begin{equation}\label{pjwq} \delta_{z} \times Q^{(V)}(z) := P_k^{(V)}( z ) \end{equation} for all vertex sets $V$ and all $z \in Z_{<k}^{(V)}$; this is well defined from \eqref{pizz}. \end{definition}
\begin{definition}[Cell]\label{celldef} If $V$ is a vertex set, $G \leq \operatorname{Inj}(V,V)$ and $a_k \in A_{=k}^{(V)}$, we define the \emph{cell} $$C_{V,G,a_k} := \{ z \in (Z_{=k}^{(V)})^G: \overline{\alpha_{=k}}^{(V)}(z) = a_k\};$$ this is a compact subspace of $Z_{=k}^{(V)}$. \end{definition}
\begin{definition}[Default point] We arbitrarily select a \emph{default point} $z_* \in Z_{=k}$. For any $V$, we define $\overline{z_*}^{(V)} \in Z_{=k}^{(V)}$ by setting $\overline{z_*}^{(V)}(\phi) := z_*$ for all $\phi \in \operatorname{Inj}([k],V)$. \end{definition}
\begin{remark} The point $z_*$ is only needed for technical reasons, as a sort of ``error message'' to output when certain inputs are ``bad''. The exact value of $z_*$ plays no role in our arguments. \end{remark}
\begin{definition}[Quadruples]\label{quad} If $V$ is a vertex set, $G \leq \operatorname{Inj}(V,V)$, $a_k \in A_{=k}^{(V)}$, and $z_{<k} \in Z_{<k}^{(V)}$, we say that $(V,G,a_k,z_{<k})$ is \emph{good} if $Q^{(V)}(z_{<k})(C_{V,G,a_k}) > 0$, and \emph{bad} otherwise. We define the probability measure $\rho_{V,G,a_k,z_{<k}} \in \operatorname{Pr}( (Z_{=k}^{(V)})^G )$ to equal the conditioned measure $(Q^{(V)}(z_{<k})|C_{V,G,a_k})$ (as defined in Appendix \ref{prob}) if $(V,G,a_k,z_{<k})$ is good, and the point mass $\delta_{\overline{z_*}^{(V)}}$ otherwise. \end{definition}
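The dichotomy in Definition \ref{quad} can be sketched for a finite toy measure (the dict-based representation and the helper name \texttt{conditioned} are illustrative assumptions, not from the text): condition on the cell when it has positive mass (the good case), and otherwise fall back to the point mass at the default point $z_*$ (the bad case).

```python
def conditioned(Q, cell, default):
    """Return (Q | cell) when the cell has positive mass (the 'good' case),
    and the point mass at the default 'error message' otherwise (the 'bad'
    case).  Q is a dict mapping points to probabilities."""
    mass = sum(p for z, p in Q.items() if z in cell)
    if mass > 0:
        return {z: p / mass for z, p in Q.items() if z in cell}
    return {default: 1.0}

Q = {'u': 0.25, 'v': 0.25, 'w': 0.5}
good = conditioned(Q, {'u', 'v'}, default='z*')   # cell has mass 0.5
bad = conditioned(Q, {'x'}, default='z*')         # empty cell: bad quadruple
assert good == {'u': 0.5, 'v': 0.5}
assert bad == {'z*': 1.0}
```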
By using the natural transformation properties heavily, we observe that the probability measures $\rho_{V,G,a_k,w_{<k}}$ are invariant under relabeling, in the sense that for any $G \leq \operatorname{Inj}(V,V)$, $a_k \in A_{=k}^{(V)}$, $w_{<k} \in Z_{<k}^{(V)}$, and any bijection $\phi: W \to V$, we have \begin{equation}\label{zanu} \rho_{W, \phi^{-1} G \phi, A_{=k}^{(\phi)}(a_k), Z_{<k}^{(\phi)}(w_{<k})} = Z_{=k}^{(\phi)} \circ \rho_{V,G,a_k,w_{<k}}. \end{equation}
\begin{definition}[Random $k$-rules]\label{rkr} If $x \in X_{k-1}$, we define $\eta_{x} \in \operatorname{Pr}( \Xi )$ to be the unique law for a random $k$-rule $\xi \in \Xi$ with the following properties: \begin{itemize} \item For each $a = (a_{<k},a_k) \in A^{([k])}$, the random variable $\xi(a) \in Z_{=k}^{([k])}$ has the law $\rho_{[k], {\operatorname{stab}}(a), a_k, \zeta_{<k}^{([k])}(a_{<k},x)}$. \item If $a_1,\ldots,a_n \in A^{([k])}$ are pairwise distinguishable, then the random variables $\xi(a_1),\ldots,\xi(a_n) \in Z_{=k}^{([k])}$ are jointly independent. \end{itemize} \end{definition}
\begin{remark} The probability distribution $\eta_{x}$ can be constructed explicitly as follows. The equivalence relation of indistinguishability partitions $A^{([k])}$ into finitely many equivalence classes. For each equivalence class $O$, select a representative $a \in O$ arbitrarily, and draw $\xi(a)$ independently at random with law $\rho_{[k], {\operatorname{stab}}(a), a_k, \zeta_{<k}^{([k])}(a_{<k},x)}$. Then for any $\phi \in \operatorname{Inj}([k],[k])$, we set $\xi( A^{(\phi)}(a)) := Z_{=k}^{(\phi)}(\xi(a))$. One easily verifies (using \eqref{zanu}) that this defines a random $k$-rule $\xi$, and that the law $\eta_{x}$ for $\xi$ has the desired properties; it is also easy to see that this law is unique. \end{remark}
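For $k = 2$ the explicit construction in the preceding remark can be simulated directly. In this sketch (a toy with finite alphabets; \texttt{random\_2\_rule} and the value spaces are hypothetical names, not from the text), one value is drawn per indistinguishability class and then extended equivariantly; at an indistinguishable pair $(a,a)$ the drawn value is forced to be ${\operatorname{stab}}$-invariant, i.e.\ to lie on the diagonal, mirroring the discussion in Example \ref{d2}.

```python
import random
from itertools import product, permutations

def act(phi, y):
    """Toy pullback action: relabel the coordinates of y by phi."""
    return tuple(y[phi[i]] for i in range(len(y)))

def random_2_rule(Zvals, seed=None):
    """Sample a toy 2-rule xi: A^2 -> Z^2, equivariant under coordinate
    swaps.  One independent draw is made per indistinguishability class;
    values at a pair (a, a) are forced onto the diagonal, so that xi(a)
    is invariant under stab((a, a)) and the extension is well defined."""
    rng = random.Random(seed)
    A = ['r', 'g', 'b']
    xi = {}
    for a in product(A, repeat=2):
        if a in xi:
            continue
        if a[0] == a[1]:                        # indistinguishable coordinates
            c = rng.choice(Zvals)
            xi[a] = (c, c)                      # stab-invariant (diagonal) value
        else:
            v = (rng.choice(Zvals), rng.choice(Zvals))
            xi[a] = v
            xi[(a[1], a[0])] = (v[1], v[0])     # extend equivariantly
    return xi

xi = random_2_rule([0, 1, 2, 3], seed=7)
# Equivariance: xi(A^{(phi)}(a)) = Z^{(phi)}(xi(a)) for every relabelling phi.
for a, v in xi.items():
    for phi in permutations(range(2)):
        assert xi[act(phi, a)] == act(phi, v)
```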
With all these definitions, we can now define the measure $\nu_{k} \in \operatorname{Pr}(X_k)$ by the formula $$ \nu_{k} := \int_{X_{k-1}} \delta_x \times \eta_x\ d\nu_{k-1}(x).$$ Informally, $\nu_{k}$ is the law of the pair $(x,\xi)$, where $x \in X_{k-1}$ is selected with law $\nu_{k-1}$, and then $\xi \in \Xi$ is selected with law $\eta_x$.
\begin{remark} When $k=1$, the construction simplifies substantially since it is not possible for two distinct elements of $A^{([1])}$ to be indistinguishable, and one essentially obtains the construction in Example \ref{d2}. \end{remark}
It remains to verify the properties \eqref{ejc-ac}, \eqref{limz} (with $j=k$).
\subsubsection{Most quadruples are good}
The first task is to show that the bad quadruples (which output the ``error message'' $z_*$) are negligible.
\begin{proposition}[Most quadruples are good]\label{quadgood} Let $z \in Z^{([k])}$ be drawn at random with law $\mu^{([k])}$, let $a = (a_{<k},a_k) := \overline{\alpha}^{([k])}(z) \in A^{([k])}$, and let $x \in X_{k-1}$ be drawn independently at random with law $\nu_{k-1}$. Let $w_{<k} := \zeta_{<k}^{([k])}(a_{<k},x) \in Z_{<k}^{([k])}$. Then the quadruple $([k], {\operatorname{stab}}(a), a_k, w_{<k})$ is good with probability $1 - o_{\alpha \to \infty}(1)$. \end{proposition}
\begin{proof} The key idea of the proof is to exploit the fact (essentially arising from the monotone (or dominated) convergence theorem) that most elements of the cells $C_{V,G,a_k}$ tend to inherit the symmetries of their colour $a_k$ when the colouring $\alpha$ is sufficiently fine.
By Definition \ref{quad}, our task is to show that $$ \mathbf{P}( Q^{([k])}(w_{<k})(C_{[k],{\operatorname{stab}}(a),a_k}) = 0 ) = o_{\alpha \to \infty}(1).$$ The number of possible stabiliser subgroups ${\operatorname{stab}}(a) \leq \operatorname{Inj}([k],[k])$ is bounded (independently of $A$ or $\alpha$), so it suffices by the union bound to show that $$ \mathbf{P}( {\operatorname{stab}}(a) = G \hbox{ and } Q^{([k])}(w_{<k})(C_{[k],G,a_k}) = 0 ) = o_{\alpha \to \infty}(1)$$ for each fixed group $G \leq \operatorname{Inj}([k],[k])$.
Fix $G$. If ${\operatorname{stab}}(a)=G$, then $a \in (A^{([k])})^G$; thus (by the natural transformation properties of $\zeta_{<k}$) we see that $w_{<k}$ lies in $(Z_{<k}^{([k])})^G$. From this and Definition \ref{celldef} we see that $$ \{ w_{<k}\} \times C_{[k],G,a_k} = (\{w_{<k}\} \times Z_{=k}^{([k])}) \cap (Z^{([k])})^G \cap C'_{a_k}$$ where for any $b \in A_{=k}^{([k])}$, $C'_b \subset Z^{([k])}$ is the set $$ C'_{b} := Z_{<k}^{([k])} \times (\overline{\alpha_{=k}}^{([k])})^{-1}(\{b\}).$$ From this and Definition \ref{vertmes}, we see that $$ Q^{([k])}(w_{<k})(C_{[k],G,a_k}) = P_k^{([k])}(w_{<k})( (Z^{([k])})^G \cap C'_{a_k} )$$ so it suffices to show that for every $\varepsilon > 0$, we have $$ \mathbf{P}( {\operatorname{stab}}(a) = G \hbox{ and } F_{a_k}(w_{<k}) = 0 ) \ll \varepsilon$$ for all sufficiently fine $\alpha$ (depending on $\varepsilon$), where for any $b \in A_{=k}^{([k])}$, $F_b: Z_{<k}^{([k])} \to [0,1]$ is the measurable function $$ F_b(y) := P_k^{([k])}(y)( (Z^{([k])})^G \cap C'_{b} ).$$
Fix $\varepsilon$, and set $\delta := \varepsilon/|A_{=k}^{([k])}|$. Observe that $$ \mathcal{I}( F_{a_k}(w_{<k}) = 0 )
\leq \sum_{b \in A_{=k}^{([k])}} \frac{1}{\delta} |F_{b}(w_{<k}) - F_{b}(z_{<k})| + \mathcal{I}( F_{a_k}(z_{<k}) \leq \delta ).$$ From the inductive hypothesis \eqref{limz} (or \eqref{limzo}) we have
$$ \mathbf{E} \sum_{b \in A_{=k}^{([k])}} \frac{1}{\delta} |F_{b}(w_{<k}) - F_{b}(z_{<k})| \ll \varepsilon$$ for sufficiently fine $\alpha$, so it suffices to show that \begin{equation}\label{astab} \mathbf{P}( {\operatorname{stab}}(a) = G \hbox{ and } F_{a_k}(z_{<k}) \leq \delta ) \ll \varepsilon. \end{equation} Recall that $z_{<k}$ is distributed with law $\mu_{<k}^{([k])}$, and for fixed $z_{<k}$, $z_k$ is distributed with law $P_k^{([k])}(z_{<k})$. In particular, for any $b \in A_{=k}^{([k])}$ and fixed $z_{<k}$, we have $a_k = b$ with probability $P_k^{([k])}(z_{<k})( C'_{b} )$. Thus we may express the left-hand side of \eqref{astab} as $$ \mathbf{E} \sum_{b \in A_{=k}^{([k])}} P_k^{([k])}(z_{<k})( C'_{b} ) \mathcal{I}({\operatorname{stab}}(a_{<k},b) = G) \mathcal{I}( F_b(z_{<k}) \leq \delta ).$$ We can split $$P_k^{([k])}(y)( C'_{b} ) = F_b(y) + P_k^{([k])}(y)( C'_{b} \backslash (Z^{([k])})^G ).$$ Since $$ \mathbf{E} \sum_{b \in A_{=k}^{([k])}} F_b(z_{<k}) \mathcal{I}( F_b(z_{<k}) \leq \delta )
\leq |A_{=k}^{([k])}| \delta = \varepsilon,$$ it thus suffices to show that $$ \mathbf{E} \sum_{b \in A_{=k}^{([k])}} P_k^{([k])}(z_{<k})( C'_{b} \backslash (Z^{([k])})^G ) \mathcal{I}({\operatorname{stab}}(a_{<k},b) = G) \ll \varepsilon.$$ Now observe that if ${\operatorname{stab}}(a_{<k},b) = G$, then on the support $\{z_{<k}\} \times Z_{=k}^{([k])}$ of $P_k^{([k])}(z_{<k})$, the set $C'_{b}$ is contained in the set $(\overline{\alpha}^{([k])})^{-1}(( A^{([k])})^G)$. Thus we have $$ \sum_{b \in A_{=k}^{([k])}} P_k^{([k])}(z_{<k})( C'_{b} \backslash (Z^{([k])})^G ) \mathcal{I}({\operatorname{stab}}(a_{<k},b) = G) \leq P_k^{([k])}(z_{<k})( (\overline{\alpha}^{([k])})^{-1}(( A^{([k])})^G) \backslash (Z^{([k])})^G );$$ since $\mu^{([k])} = P_k^{([k])} \circ\mu_{<k}^{([k])}$, it thus suffices to show that \begin{equation}\label{ag} \mu^{([k])}( (\overline{\alpha}^{([k])})^{-1}(( A^{([k])})^G) \backslash (Z^{([k])})^G ) \ll \varepsilon \end{equation} for sufficiently fine $\alpha$.
Now observe that if $z \in Z^{([k])}$ is not $G$-invariant (i.e. $z \not \in (Z^{([k])})^G$), then (since the algebra of clopen subsets in a sub-Cantor space separates points) there exists a colouring $\alpha: Z \to A$ such that $\overline{\alpha}^{([k])}(z)$ is also not $G$-invariant. This property is then inherited by all refinements of $\alpha$. As a consequence we see that $$ \mathcal{I}( z \in (\overline{\alpha}^{([k])})^{-1}((A^{([k])})^G) \backslash (Z^{([k])})^G) = o_{\alpha \to \infty}(1)$$ for all $z \in Z^{([k])}$. The claim \eqref{ag} then follows from the dominated convergence theorem (Lemma \ref{dct}). \end{proof}
\subsubsection{Decoupling}
Let $V$ be a finite vertex set, let $z \in Z^{(V)}$ be drawn at random with law $\mu^{(V)}$, let $a := \overline{\alpha}^{(V)}(z) \in A^{(V)}$, and let $x \in X_{k-1}$ be drawn independently at random with law $\nu_{k-1}$. We then draw $\xi \in \Xi$ with law $\eta_x$, and set $w \in Z^{(V)}$ to be the point $w := \zeta_{\leq k}^{(V)}(a,(x,\xi))$. We split $z = (z_{<k},z_k)$, $a = (a_{<k},a_k)$, and $w = (w_{<k},w_k)$ in the usual manner.
Let us temporarily freeze $z, a, x$, so that the only remaining source of randomness comes from $\xi$. The lower order components $w_{<k}$ of $w$ do not depend on $\xi$ and are now deterministic; indeed, we have $w_{<k} = \zeta_{<k}(a_{<k},x)$. If we split the top component $w_k$ as $w_k = (w_k\downharpoonright_e)_{e \in \binom{V}{k}}$, then we see that each piece $w_k\downharpoonright_e$ depends on $\xi$ via the formula $$ Z^{(\phi_e^{-1})}_{=k}(\xi( A^{(\phi_e)}(a_{<k},a_k) ) ).$$ From this and Definition \ref{rkr} (and \eqref{zanu}), we see that $w_k\downharpoonright_e$ is distributed (for fixed $z,a,x$) according to the law $$\rho_{e, {\operatorname{stab}}(a\downharpoonright_e), a_k\downharpoonright_e, \zeta_{<k}^{(e)}(a_{<k}\downharpoonright_e,x)} = \rho_{e, {\operatorname{stab}}(a\downharpoonright_e), a_k\downharpoonright_e, w_{<k}\downharpoonright_e}.$$ In particular, we almost surely have the constraint \begin{equation}\label{wke}
w_k\downharpoonright_e \in (Z_{=k}^{(e)})^{{\operatorname{stab}}(a\downharpoonright_e)}, \end{equation} thus $w_k\downharpoonright_e$ needs to inherit all the symmetries that $a\downharpoonright_e$ has.
From Definition \ref{rkr}, we also see that for $e_1,\ldots,e_n \in \binom{V}{k}$, the pieces $w_k\downharpoonright_{e_1}, \ldots, w_k\downharpoonright_{e_n}$ are jointly independent so long as the $a\downharpoonright_{e_1},\ldots,a\downharpoonright_{e_n}$ are pairwise distinguishable.
On the other hand, if $e, e' \in \binom{V}{k}$ are such that $a\downharpoonright_{e}$ and $a\downharpoonright_{e'}$ are indistinguishable, so that $a\downharpoonright_{e} = A^{(\phi)}(a\downharpoonright_{e'})$ for some $\phi \in \operatorname{Inj}(e,e')$, then $w_k\downharpoonright_e$ and $w_k\downharpoonright_{e'}$ are coupled together via the constraint \begin{equation}\label{wke2} w_k\downharpoonright_e = Z_{=k}^{(\phi)}( w_k\downharpoonright_{e'} ). \end{equation} Note that \eqref{wke} can be viewed as the special case $e=e'$ of \eqref{wke2}.
Motivated by the above discussion, for every $b \in A^{(V)}$, let $R_b$ be the set of all triples $(e,e',\phi)$, where $e,e' \in \binom{V}{k}$ and $\phi \in \operatorname{Inj}(e,e')$ is such that $A^{(\phi)}(b\downharpoonright_{e'}) = b\downharpoonright_e$, thus $R_b$ collects all the ways in which components of $b$ are indistinguishable from each other. The set $R_b$ is a \emph{groupoid}, in the sense that \begin{itemize} \item For every $e \in \binom{V}{k}$, the triple $(e,e,\operatorname{id}_e)$ lies in $R_b$. \item If $(e,e',\phi)$ lies in $R_b$, then $(e',e,\phi^{-1})$ lies in $R_b$. \item If $(e,e',\phi)$ and $(e',e'',\psi)$ lie in $R_b$, then $(e,e'',\psi \circ \phi)$ lies in $R_b$. \end{itemize} Observe that for any $e \in \binom{V}{k}$, the stabiliser ${\operatorname{stab}}(b\downharpoonright_e)$ of $b$ restricted to $e$ can be recovered from $R_b$ by the formula $${\operatorname{stab}}(b\downharpoonright_e) = \{ \phi \in \operatorname{Inj}(e,e): (e,e,\phi) \in R_b \}.$$
Let us call $e,e' \in \binom{V}{k}$ \emph{$R$-indistinguishable} for some groupoid $R$ if there exists $\phi \in \operatorname{Inj}(e,e')$ with $(e,e',\phi) \in R$, and \emph{$R$-distinguishable} otherwise. As $R$ is a groupoid, the property of being $R$-indistinguishable is an equivalence relation.
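The groupoid $R_b$ and its axioms can be checked mechanically in a small case. The sketch below (hypothetical helper names; for $k=2$, with $b$ assigning a colour directly to each ordered pair of distinct vertices as a toy stand-in for the restrictions $b\downharpoonright_e$) enumerates all triples $(e,e',\phi)$ and verifies the identity and inverse axioms, together with one instance each of $R$-indistinguishability and $R$-distinguishability.

```python
from itertools import combinations, permutations

def build_groupoid(b, V, k=2):
    """R_b: all triples (e, e', phi) with phi a bijection e -> e' such that
    pulling b|e' back along phi recovers b|e.  Here b maps ordered pairs of
    distinct vertices to colours (a toy stand-in for A^{(e)})."""
    R = []
    edges = list(combinations(V, k))
    for e in edges:
        for ep in edges:
            for img in permutations(ep):
                phi = dict(zip(e, img))
                if all(b[(phi[v], phi[w])] == b[(v, w)]
                       for (v, w) in permutations(e, 2)):
                    R.append((e, ep, frozenset(phi.items())))
    return R

V = (0, 1, 2)
# Edges {0,1} and {0,2} carry the same colour pattern; {1,2} differs.
b = {(0, 1): 'x', (1, 0): 'x', (0, 2): 'x', (2, 0): 'x',
     (1, 2): 'y', (2, 1): 'x'}
R = build_groupoid(b, V)

for e in combinations(V, 2):                              # identity axiom
    assert (e, e, frozenset((v, v) for v in e)) in R
for (e, ep, phi) in R:                                    # inverse axiom
    assert (ep, e, frozenset((w, v) for (v, w) in phi)) in R
# (composition holds by a similar enumeration)
assert any(e == (0, 1) and ep == (0, 2) for (e, ep, _) in R)      # indistinguishable
assert not any(e == (0, 1) and ep == (1, 2) for (e, ep, _) in R)  # distinguishable
```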
Given a groupoid $R$, an element $b \in A_{=k}^{(V)}$, and an element $y \in Z_{<k}^{(V)}$, we then define the probability measure $\sigma_{V,R,b,y} \in \operatorname{Pr}(Z^{(V)})$ to be the unique probability distribution of a random variable $w = (w_{<k},w_k) \in Z^{(V)}$ such that \begin{itemize} \item $w_{<k} = y$; \item For each $e \in \binom{V}{k}$, $w_k\downharpoonright_e$ has the distribution of $\rho_{e,G_e,b\downharpoonright_e, y\downharpoonright_e}$, where $G_e \leq \operatorname{Inj}(e,e)$ is the group $G_e := \{ \phi \in \operatorname{Inj}(e,e): (e,e,\phi) \in R \}$; \item For any $(e,e',\phi) \in R$, we have the constraint \eqref{wke2}; \item For any $e_1,\ldots,e_n \in \binom{V}{k}$ which are pairwise $R$-distinguishable, the random variables $w_k\downharpoonright_{e_1},\ldots,w_k\downharpoonright_{e_n}$ are jointly independent. \end{itemize}
One can construct $\sigma_{V,R,b,y}$ more explicitly by choosing one representative $e$ from each $R$-indistinguishable equivalence class, selecting $w_k\downharpoonright_e$ independently at random for each such representative with law $\rho_{e,G_e,b\downharpoonright_e, y\downharpoonright_e}$, and then extending to all other $e$ by \eqref{wke2}.
By the previous discussion, we see that for fixed $z,a,x$, $w$ is distributed according to the law $\sigma_{V,R_a,a_k,w_{<k}}$.
We would like to remove the couplings \eqref{wke}, \eqref{wke2} from this distribution. To this end, we define the \emph{trivial groupoid} $R_0 := \{ (e,e,\operatorname{id}_e): e \in \binom{V}{k} \}$. We will show that the probability measure $\sigma_{V,R_a,a_k,w_{<k}}$ is close to $\sigma_{V,R_0,a_k,w_{<k}}$
in the total variation norm $\|\cdot \|_{M(Z^{(V)})}$ on $Z^{(V)}$, as defined in Appendix \ref{prob}. This is accomplished by the following key estimate.
\begin{proposition}[$\sigma_{V,R_a,a_k,w_{<k}}$ approximates $\sigma_{V,R_0,a_k,w_{<k}}$]\label{decouple} Let $V$ be a finite vertex set, let $z \in Z^{(V)}$ be drawn at random with law $\mu^{(V)}$, let $a := \overline{\alpha}^{(V)}(z) \in A^{(V)}$, which we split as $a = (a_{<k},a_k)$, and let $x \in X_{k-1}$ be drawn independently at random with law $\nu_{k-1}$. Set $w_{<k} := \zeta_{<k}^{(V)}(a_{<k},x)$. Then
$$ \mathbf{E} \| \sigma_{V,R_a,a_k,w_{<k}} - \sigma_{V,R_0,a_k,w_{<k}} \|_{M(Z^{(V)})} = o_{\alpha \to \infty}(1).$$ \end{proposition}
\begin{proof} From the inductive hypothesis \eqref{limz} (or \eqref{limzo}) we have $$
\mathbf{E} \sum_{b \in A_{=k}^{(V)}} |\| \sigma_{V,R_{a_{<k},b},b,w_{<k}} - \sigma_{V,R_0,b,w_{<k}} \|_{M(Z^{(V)})} - \| \sigma_{V,R_{a_{<k},b},b,z_{<k}} - \sigma_{V,R_0,b,z_{<k}} \|_{M(Z^{(V)})}| =o_{\alpha \to \infty}(1)$$ so it suffices to show that
$$ \mathbf{E} \| \sigma_{V,R_a,a_k,z_{<k}} - \sigma_{V,R_0,a_k,z_{<k}} \|_{M(Z^{(V)})} = o_{\alpha \to \infty}(1).$$
The main difficulty here is to understand the effect of the constraints \eqref{wke}, \eqref{wke2} caused by $R_a$. The number of possible groupoids $R_a$ is bounded independently of $\alpha$. Thus it suffices to show that \begin{equation}\label{rar}
\mathbf{E} \mathcal{I}(R_a=R) \| \sigma_{V,R,a_k,z_{<k}} - \sigma_{V,R_0,a_k,z_{<k}} \|_{M(Z^{(V)})} = o_{\alpha \to \infty}(1)
\end{equation} for each groupoid $R$.
Fix $R$. Recall that $z_{<k}$ is distributed with law $\mu_{<k}^{(V)}$, and for fixed $z_{<k}$, $z_k$ is distributed with law $Q^{(V)}(z_{<k})$, and so for any $b \in A_{=k}^{(V)}$ and fixed $z_{<k}$, $a_k$ will equal $b$ with probability $Q^{(V)}(z_{<k})(C_{V,\{\operatorname{id}\},b})$. Thus we can rewrite the left-hand side of \eqref{rar} as $$ \int_{Z_{<k}^{(V)}} \sum_{b \in A_{=k}^{(V)}} Q^{(V)}(y)(C_{V,\{\operatorname{id}\},b}) \mathcal{I}( R_{\overline{\alpha_{<k}}^{(V)}(y),b} = R )
\| \sigma_{V,R,b,y} - \sigma_{V,R_0,b,y} \|_{M(Z^{(V)})} \ d\mu_{<k}^{(V)}(y).$$
Let $Z_R \subset Z^{(V)}$ denote the set $$ Z_R := \{ y \in Z^{(V)}: Z^{(\phi)}(y \downharpoonright_{e'}) = y\downharpoonright_e \hbox{ for all } (e,e',\phi) \in R \}$$ and let $A_{R} \subset A^{(V)}$ denote the set $$ A_R := \{ b \in A^{(V)}: A^{(\phi)}(b\downharpoonright_{e'}) = b\downharpoonright_e \hbox{ for all } (e,e',\phi) \in R \}.$$ As in the proof of Proposition \ref{quadgood}, we can use the fact that clopen subsets in sub-Cantor spaces separate points to conclude that $$ \mathcal{I}( y \in (\overline{\alpha}^{(V)})^{-1}(A_R) \backslash Z_R) = o_{\alpha \to \infty}(1)$$ for each $y \in Z^{(V)}$, and hence by Lemma \ref{dct} $$ \mu^{(V)}( (\overline{\alpha}^{(V)})^{-1}(A_R) \backslash Z_R ) = o_{\alpha \to \infty}(1)$$ or in other words $$ \int_{Z_{<k}^{(V)}} \sum_{b \in A_{=k}^{(V)}} Q^{(V)}(y)(C_{V,\{\operatorname{id}\},b} \backslash Z_R) \mathcal{I}( R_{\overline{\alpha_{<k}}^{(V)}(y),b} = R )\ d\mu_{<k}^{(V)}(y) = o_{\alpha \to \infty}(1).$$ Since we have $$ \int_{Z_{<k}^{(V)}} \sum_{b \in A_{=k}^{(V)}} Q^{(V)}(y)(C_{V,\{\operatorname{id}\},b})\ d\mu_{<k}^{(V)}(y) = 1,$$ we may thus apply Markov's inequality and locate an exceptional set $E \subset Z_{<k}^{(V)} \times A_{=k}^{(V)}$ with $$ \int_{Z_{<k}^{(V)}} \sum_{b \in A_{=k}^{(V)}} Q^{(V)}(y)(C_{V,\{\operatorname{id}\},b}) \mathcal{I}( (y,b) \in E )\ d\mu_{<k}^{(V)}(y) = o_{\alpha \to \infty}(1)$$ such that \begin{equation}\label{qvy}
Q^{(V)}(y)(C_{V,\{\operatorname{id}\},b}) > 0 \end{equation} and \begin{equation}\label{qvy2}
Q^{(V)}(y)(C_{V,\{\operatorname{id}\},b} \backslash Z_R) = o_{\alpha \to \infty}(1) Q^{(V)}(y)(C_{V,\{\operatorname{id}\},b}) \end{equation} for all $(y,b) \in Z_{<k}^{(V)} \times A_{=k}^{(V)} \backslash E$.
Fix this set $E$. To finish the proof of \eqref{rar}, it thus suffices to show that
$$ \| \sigma_{V,R,b,y} - \sigma_{V,R_0,b,y} \|_{M(Z^{(V)})} = o_{\alpha \to \infty}(1)$$ uniformly for all $(y,b) \in Z_{<k}^{(V)} \times A_{=k}^{(V)} \backslash E$ with \begin{equation}\label{rr} R_{\overline{\alpha_{<k}}^{(V)}(y),b} = R. \end{equation}
Fix $y,b$ as above. From \eqref{qvy} (and the $k$-independence of $P_k$, and hence of $Q$) we see that $(e,\{\operatorname{id}\},b\downharpoonright_e,y\downharpoonright_e)$ is good for every $e \in \binom{V}{k}$. Thus by Definition \ref{quad}, we have
$$ \rho_{e, \{\operatorname{id}\}, b\downharpoonright_e, y\downharpoonright_e} = (Q^{(e)}(y\downharpoonright_e)|C_{e,\{\operatorname{id}\},b\downharpoonright_e})$$ and hence (by the $k$-independence of $Q$ again) we see from the construction of $\sigma_{V,R_0,b,y}$ that
$$ \sigma_{V,R_0,b,y} = (Q^{(V)}(y)|C_{V,\{\operatorname{id}\},b}).$$ From \eqref{qvy2} and Lemma \ref{cond} we thus have
$$ \| \sigma_{V,R_0,b,y} - (Q^{(V)}(y)|C_{V,\{\operatorname{id}\},b} \cap Z_R) \|_{M(Z^{(V)})} = o_{\alpha \to \infty}(1)$$ so by the triangle inequality it suffices to show that \begin{equation}\label{sby}
\| \sigma_{V,R,b,y} - (Q^{(V)}(y)|C_{V,\{\operatorname{id}\},b} \cap Z_R) \|_{M(Z^{(V)})} = o_{\alpha \to \infty}(1). \end{equation}
Let $w$ be drawn using law $\sigma_{V,R,b,y}$, and let $w'$ be drawn using law $(Q^{(V)}(y)|C_{V,\{\operatorname{id}\},b} \cap Z_R)$. The lower order components $w_{<k}$, $w'_{<k}$ of $w, w'$ are both equal to $y$, so we focus on the top order components $w_k, w'_k$, which we split as $(w_k\downharpoonright_e)_{e \in \binom{V}{k}}$ and $(w'_k\downharpoonright_e)_{e \in \binom{V}{k}}$ respectively.
If $e_1,\ldots,e_n \in \binom{V}{k}$ are pairwise $R$-distinguishable, then by construction of $\sigma_{V,R,b,y}$ we have that $w_k\downharpoonright_{e_1}, \ldots, w_k\downharpoonright_{e_n}$ are jointly independent. Conversely, if $e, e'$ are $R$-indistinguishable, so that $(e,e',\phi) \in R$ for some $\phi \in \operatorname{Inj}(e,e')$, then from \eqref{wke2} we have the constraint $$ w_k\downharpoonright_e = Z_{=k}^{(\phi)}( w_k\downharpoonright_{e'} ).$$
Now we observe that the random variables $w'_k \downharpoonright_e$ obey exactly the same independence and constraint properties. Indeed, if $(e,e',\phi) \in R$, then the constraint $$ w'_k\downharpoonright_e = Z_{=k}^{(\phi)}( w'_k\downharpoonright_{e'} )$$ holds almost surely, since $w'$ is constrained to lie in $Z_R$ almost surely. On the other hand, if $e_1,\ldots,e_n \in \binom{V}{k}$ are pairwise $R$-distinguishable, and thus lie in disjoint equivalence classes of $R$-indistinguishability, then we claim that the random variables $w'_k\downharpoonright_{e_1}, \ldots, w'_k\downharpoonright_{e_n}$ are jointly independent. Indeed, this claim is clearly true if $w'$ is drawn with law $Q^{(V)}$ (as all of the $w'_k\downharpoonright_e$ are jointly independent in this case), and the conditioning to $C_{V,\{\operatorname{id}\},b} \cap Z_R$ only couples together those pairs $w'_k\downharpoonright_e$, $w'_k\downharpoonright_{e'}$ which lie in the same equivalence class.
In view of the above discussion (and the fact that the cardinality of $\binom{V}{k}$ is independent of $\alpha$), we see that in order to conclude \eqref{sby}, it suffices to show that for each $e \in \binom{V}{k}$ separately, the laws of $w_k\downharpoonright_e$ and $w'_k\downharpoonright_e$ differ in $M(Z_{=k}^{(e)})$ norm by $o_{\alpha \to \infty}(1)$, uniformly in $y$ and $b$.
Fix $e$. From the definition of $\sigma_{V,R,b,y}$, we see that $w_k\downharpoonright_e$ is distributed according to the law $\rho_{e, G_e, b\downharpoonright_e, y\downharpoonright_e}$. The distribution of $w'_k\downharpoonright_e$ is more complicated. However, by \eqref{qvy2} we know that this law differs from the measure $\pi_{V \to e} \circ (Q^{(V)}(y)|C_{V,\{\operatorname{id}\},b})$, where $\pi_{V \to e}: Z_{=k}^{(V)} \to Z_{=k}^{(e)}$ is the restriction map, by $o_{\alpha \to \infty}(1)$ in the total variation norm $M(Z_{=k}^{(e)})$. Thus it suffices to show that
$$ \| \rho_{e, G_e, b\downharpoonright_e, y\downharpoonright_e} - \pi_{V \to e} \circ (Q^{(V)}(y)|C_{V,\{\operatorname{id}\},b}) \|_{M(Z_{=k}^{(e)})} = o_{\alpha \to \infty}(1).$$ But since $P_k$ (and hence $Q$) is $k$-independent, we have
$$\pi_{V \to e} \circ (Q^{(V)}(y)|C_{V,\{\operatorname{id}\},b}) =
(Q^{(e)}(y\downharpoonright_e)|C_{e,\{\operatorname{id}\},b\downharpoonright_e}).$$ Meanwhile, from \eqref{qvy}, \eqref{qvy2} we have $$ Q^{(V)}(y)(C_{V,\{\operatorname{id}\},b} \cap Z_R) > 0.$$ Using the inclusion \begin{equation}\label{include} \pi_{V \to e}(C_{V,\{\operatorname{id}\},b} \cap Z_R) \subset C_{e,G_e,b\downharpoonright_e} \end{equation} and using the $k$-independence of $Q$ once again, we conclude $$ Q^{(e)}(y\downharpoonright_e)( C_{e,G_e,b\downharpoonright_e} ) > 0$$ and thus by Definition \ref{quad}
$$ \rho_{e, G_e, b\downharpoonright_e, y\downharpoonright_e} = ( Q^{(e)}(y\downharpoonright_e) | C_{e,G_e,b\downharpoonright_e} ).$$ Our task is thus to show that
$$ \| (Q^{(e)}(y\downharpoonright_e)|C_{e,\{\operatorname{id}\},b\downharpoonright_e}) -
(Q^{(e)}(y\downharpoonright_e)|C_{e,G_e,b\downharpoonright_e}) \|_{M(Z_{=k}^{(e)})} = o_{\alpha \to \infty}(1).$$ But from \eqref{qvy2}, the inclusion \eqref{include} and the $k$-independence of $Q$ once again, we have $$ Q^{(e)}(y\downharpoonright_e)(C_{e,\{\operatorname{id}\},b\downharpoonright_e} \backslash C_{e,G_e,b\downharpoonright_e} ) = o_{\alpha \to \infty}(1)$$ and the claim follows from Lemma \ref{cond}. \end{proof}
\subsubsection{Approximate absolute continuity}
We can now quickly prove \eqref{ejc-ac}. We can phrase this claim in probabilistic language as follows. Let $z \in Z^{(V)}$ be drawn at random with law $\mu^{(V)}$, let $a := \overline{\alpha}^{(V)}(z) \in A^{(V)}$, let $x \in X_{k-1}$ be drawn independently with law $\nu_{k-1}$, let $\xi \in \Xi$ be drawn with law $\eta_x$, and let $w := \zeta_{\leq k}(a,(x,\xi)) \in Z^{(V)}$. Let $\varepsilon > 0$ be arbitrary. Our task is to show that if $\alpha$ is sufficiently fine depending on $\varepsilon$, then the distribution of $w$ is $\varepsilon$-absolutely continuous with respect to $\mu^{(V)}$. Thus, let $E \subset Z^{(V)}$ be a measurable set such that $\mu^{(V)}(E) = 0$. Our task is to show that \begin{equation}\label{pwe} \mathbf{P}( w \in E ) \leq \varepsilon. \end{equation}
From \eqref{mupj} we have $\mu^{(V)} = P_k^{(V)} \circ \mu_{<k}^{(V)}$. Since $\mu^{(V)}(E) = 0$, we conclude that the set $E' := \{ y \in Z_{<k}^{(V)}: P_k^{(V)}(y)(E) > 0 \}$ has measure zero with respect to $\mu_{<k}^{(V)}$. By the inductive hypothesis \eqref{ejc-ac}, we already know that the distribution of $w_{<k} \in Z_{<k}^{(V)}$ is $\varepsilon/4$-absolutely continuous with respect to $\mu_{<k}^{(V)}$ if $\alpha$ is sufficiently fine. Thus we have $$ \mathbf{P}( w_{<k} \in E' ) < \varepsilon/4.$$
Furthermore, by Proposition \ref{quadgood}, we see that $$ \mathbf{P}( ([k], {\operatorname{stab}}(a), a_k, w_{<k}) \hbox{ bad } ) < \varepsilon/4$$ for $\alpha$ sufficiently fine, which implies that $$ \mathbf{P}( ([k], \{\operatorname{id}\}, a_k, w_{<k}) \hbox{ bad } ) < \varepsilon/4.$$
Also, by Proposition \ref{decouple} and Markov's inequality, we have
$$ \mathbf{P}( \| \sigma_{V,R_a,a_k,w_{<k}} - \sigma_{V,R_0,a_k,w_{<k}} \|_{M(Z^{(V)})} > \varepsilon/4 ) < \varepsilon/4 $$ if $\alpha$ is sufficiently fine.
Now let us fix $z,a,x$ (and hence $w_{<k}$), and condition on the events that $w_{<k} \not \in E'$ (so $P_k^{(V)}(w_{<k})(E) = 0$) and that \begin{equation}\label{events}
([k], \{\operatorname{id}\}, a_k, w_{<k}) \hbox{ good }; \quad \| \sigma_{V,R_a,a_k,w_{<k}} - \sigma_{V,R_0,a_k,w_{<k}}
\|_{M(Z^{(V)})} \leq \varepsilon/4. \end{equation} By the preceding discussion, the event \eqref{events} occurs with probability at least $1-3\varepsilon/4$. As discussed previously, the random variable $w$ now has the distribution of $\sigma_{V,R_a,a_k,w_{<k}}$. By \eqref{events}, we thus have the conditional probability estimate
$$ \mathbf{P}( w \in E | z,a,x ) \leq \sigma_{V,R_0,a_k,w_{<k}}(E) + \varepsilon/4.$$ But as $([k], \{\operatorname{id}\}, a_k, w_{<k})$ is good, we see from construction of $\sigma_{V,R_0,a_k,w_{<k}}$ (and the $k$-independence of $P_k$) that $\sigma_{V,R_0,a_k,w_{<k}}$ is absolutely continuous with respect to $P_k^{(V)}$, and thus by \eqref{events} we have $\sigma_{V,R_0,a_k,w_{<k}}(E) = 0$. Integrating over $z,a,x$ and applying the union bound we obtain the claim \eqref{pwe}.
\subsubsection{Convergence to the diagonal}
Now, we verify \eqref{limz}. We shall modify the argument used to establish Lemma \ref{coerce-conv}. Fix $V, H, F$ as in the Proposition; we may normalise $F$ to be bounded in norm by $1$. As in the proof of Lemma \ref{coerce-conv}, it is convenient to use the probabilistic formulation \eqref{limzo}. Let $z \in Z^{(V)}$ be drawn at random with law $\mu^{(V)}$, and then for fixed $z$, let $x \in X_{k-1}$ be drawn independently at random with law $\nu_{k-1}$, $\xi \in \Xi$ drawn with law $\eta_x$, and set $a := \overline{\alpha}^{(V)}(z)$ and $w := \zeta_{\leq k}^{(V)}(a,(x,\xi))$. Our task is to show that $$
\mathbf{E} \| F(z) - F(w) \|_H = o_{\alpha \to \infty}(1), $$ where the decay rate $o_{\alpha \to \infty}(1)$ is allowed to depend on $V$, $F$ and $H$.
As usual, we decompose $z = (z_{<k},z_k)$, $a = (a_{<k},a_k)$, and $w = (w_{<k},w_k)$. From the inductive hypothesis \eqref{limzo} we have $$ \mathbf{P}( \overline{\alpha_{<k}}^{(V)}(z_{<k}) \neq \overline{\alpha_{<k}}^{(V)}(w_{<k}) ) = o_{\alpha \to \infty}(1).$$ Since $\overline{\alpha_{<k}}^{(V)}(z_{<k}) = a_{<k}$, it thus suffices to show that \begin{equation}\label{efs}
\mathbf{E} \| F(z) - F(w) \|_H \mathcal{I}( S ) = o_{\alpha \to \infty}(1) \end{equation} where $S$ is the event that \begin{equation}\label{aw} \overline{\alpha_{<k}}^{(V)}(w_{<k}) = a_{<k}. \end{equation}
For fixed $z,a,x$ (and hence $w_{<k}$), we recall that $w$ has the distribution of $\sigma_{V,R_a,a_k,w_{<k}}$. Thus we can express the left-hand side of \eqref{efs} as
$$ \mathbf{E} (\int_{Z^{(V)}} \|F(z)-F(y)\|_H\ d\sigma_{V,R_a,a_k,w_{<k}}(y)) \mathcal{I}(S).$$ From Proposition \ref{decouple} (and the boundedness of $F$) we have
$$ \mathbf{E} (\int_{Z^{(V)}} \|F(z)-F(y)\|_H\ d\sigma_{V,R_a,a_k,w_{<k}}(y)) \mathcal{I}(S)
\leq \mathbf{E} (\int_{Z^{(V)}} \|F(z)-F(y)\|_H\ d\sigma_{V,R_0,a_k,w_{<k}}(y)) \mathcal{I}(S) + o_{\alpha \to \infty}(1)$$ and so it suffices to show that
$$ \mathbf{E} (\int_{Z^{(V)}} \|F(z)-F(y)\|_H\ d\sigma_{V,R_0,a_k,w_{<k}}(y)) \mathcal{I}(S) = o_{\alpha \to \infty}(1).$$ Using \eqref{aw}, we can bound the left-hand side by
$$ \mathbf{E} \int_{Z^{(V)}} \|F(z)-F(y)\|_H\ d\sigma_{V,R_0,(\overline{\alpha_{<k}}^{(V)}(w_{<k}),a_k),w_{<k}}(y).$$ By the triangle inequality, we can bound this in turn by the sum of
$$ \mathbf{E} \| F(z) - G_{a_k}(w_{<k})\|_H$$ and $$ \mathbf{E} J_{a_k}(w_{<k})$$ where $G_{a_k}: Z_{<k}^{(V)} \to H$, $J_{a_k}: Z_{<k}^{(V)} \to \mathbf{R}$ are the bounded measurable functions $$ G_{a_k}(w_{<k}) := \int_{Z^{(V)}} F(y) d\sigma_{V,R_0,(\overline{\alpha_{<k}}^{(V)}(w_{<k}),a_k),w_{<k}}(y)$$ and
$$ J_{a_k}(w_{<k}) := \int_{Z^{(V)}} \|G_{a_k}(w_{<k})-F(y)\|_H\ d\sigma_{V,R_0,(\overline{\alpha_{<k}}^{(V)}(w_{<k}),a_k),w_{<k}}(y).$$ From the inductive hypothesis \eqref{limzo} we have
$$ \mathbf{E} \sum_{b \in A_{=k}^{(V)}} \| G_b(w_{<k}) - G_b(z_{<k}) \|_H = o_{\alpha \to \infty}(1)$$ and
$$ \mathbf{E} \sum_{b \in A_{=k}^{(V)}} |J_b(w_{<k}) - J_b(z_{<k})| = o_{\alpha \to \infty}(1)$$ and so by the triangle inequality it suffices to show that \begin{equation}\label{fg}
\mathbf{E} \| F(z) - G_{a_k}(z_{<k})\|_H = o_{\alpha \to \infty}(1) \end{equation} and \begin{equation}\label{fg2}
\mathbf{E} J_{a_k}(z_{<k}) = o_{\alpha \to \infty}(1). \end{equation}
Let us temporarily freeze $z_{<k}$ (and thus $a_{<k}$); then $z$ has the distribution of $P_k^{(V)}(z_{<k})$. In particular, for any $b \in A_{=k}^{(V)}$, the probability that $a_k = b$ (conditionally on $z_{<k}$) is equal to $P_k^{(V)}(z_{<k})( C'_b )$, where $$ C'_{b} := Z_{<k}^{(V)} \times (\overline{\alpha_{=k}}^{(V)})^{-1}(\{b\}).$$ Thus we see that those $b$ for which $P_k^{(V)}(z_{<k})( C'_b )=0$ will almost surely not be equal to $a_k$; in other words, we almost surely have $$ P_k^{(V)}(z_{<k})( C'_{a_k} ) > 0.$$ From Definition \ref{quad}, we conclude that $(V, \{\operatorname{id}\}, a_k,z_{<k})$ is almost surely good. Since $P_k$ is $k$-independent, we conclude that $(e, \{\operatorname{id}\}, a_k\downharpoonright_e, z_{<k}\downharpoonright_e)$ is also almost surely good for all $e \in \binom{V}{k}$. From this, the $k$-independence of $P_k$ again, and the definition of $\sigma_{V,R_0,a_k,z_{<k}}$, we conclude that
$$ \sigma_{V,R_0,a_k,z_{<k}} = ( P_k^{(V)}(z_{<k}) | C'_{a_k} )$$
almost surely. Also, note that for any $b \in A_{=k}^{(V)}$, the distribution of $z$ conditioned to the event $a_k=b$ is also given by $( P_k^{(V)}(z_{<k}) | C'_{a_k} )$. From this, we see that the left-hand sides of \eqref{fg} and \eqref{fg2} are both equal to $$ \int_{Z_{<k}^{(V)}} \sum_{b \in A_{=k}^{(V)}} P_k^{(V)}(v)(C'_b)
\int_{Z^{(V)}} \| F(y) - \int_{Z^{(V)}} F(u)\ ( P_k^{(V)}(v,du) | C'_b ) \|_H
( P_k^{(V)}(v,dy) | C'_b )\ d\mu_{<k}^{(V)}(v).$$ From Lemma \ref{dom}, we have $$ \sum_{b \in A_{=k}^{(V)}} P_k^{(V)}(v)(C'_b)
\int_{Z^{(V)}} \| F(y) - \int_{Z^{(V)}} F(u)\ ( P_k^{(V)}(v,du) | C'_b ) \|_H
( P_k^{(V)}(v,dy) | C'_b ) = o_{\alpha \to \infty}(1)$$ for all $v \in Z_{<k}^{(V)}$. The claim then follows from Lemma \ref{dct}.
This (finally!) completes the proof of Proposition \ref{indiscrete} and thus Proposition \ref{disc-ident2}, which in turn completes the proof of all the local repairability results claimed in the introduction.
\appendix
\section{Some measure theory and probability}\label{prob}
In this appendix we recall some notions from measure theory and probability which we will rely on to establish our positive results.
We will work throughout this paper with sub-Cantor spaces (as defined in Definition \ref{subcantor}). All of the notation here, however, extends to the larger category of standard Borel spaces, i.e. Polish spaces (complete separable metrisable spaces) together with their Borel $\sigma$-algebras, which are generated by the open sets.
If $X$ is a sub-Cantor space, we will write $\operatorname{Pr}(X)$ for the space of all Borel probability measures on $X$. This is a convex subset of the space $M(X)$ of all finite real measures on $X$, equipped with the usual total variation norm
$$\| \mu \|_{M(X)} := |\mu|(X) = \sup\{ |\mu(E)-\mu(F)|: E, F \subset X, \hbox{ disjoint} \}.$$
An important operation for us will be that of \emph{conditioning}: if $\mu \in \operatorname{Pr}(X)$ is a probability measure and $E \subset X$ is an event with $\mu(E) > 0$, we define the \emph{conditioning} $(\mu|E) \in \operatorname{Pr}(X)$ of $\mu$ to $E$ to be the probability measure defined by the usual formula
$$ (\mu|E)(F) := \frac{\mu(E \cap F)}{\mu(E)}.$$ The following computation is easily verified:
\begin{lemma}[Conditioning by high probability events is mild]\label{cond} Let $\mu \in \operatorname{Pr}(X)$ and $E \subset X$ be such that $\mu(E) \geq 1-\varepsilon$ for some $0 < \varepsilon < 1/2$. Then $\| \mu - (\mu|E) \|_{M(X)} \ll \varepsilon$. \end{lemma}
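The lemma can be checked directly in the discrete setting. The following sketch (with hypothetical data, not part of the formal development) represents measures on a finite space as dictionaries and verifies that $\| \mu - (\mu|E) \|_{M(X)} = 2(1-\mu(E)) \leq 2\varepsilon$ when $\mu(E) \geq 1-\varepsilon$:

```python
# Discrete sanity check of the conditioning lemma: on a finite space,
# ||mu - (mu|E)|| = 2(1 - mu(E)), hence <= 2*eps when mu(E) >= 1 - eps.
# (Hypothetical data; measures are dicts mapping points to masses.)

def conditioned(mu, E):
    """The conditioning (mu|E): restrict mu to E and renormalise."""
    mE = sum(mu[x] for x in E)
    return {x: (mu[x] / mE if x in E else 0.0) for x in mu}

def tv_norm(nu):
    """Total variation norm of a discrete signed measure."""
    return sum(abs(w) for w in nu.values())

mu = {0: 0.4, 1: 0.3, 2: 0.2, 3: 0.06, 4: 0.04}
E = {0, 1, 2}                                 # mu(E) = 0.9, i.e. eps = 0.1
muE = conditioned(mu, E)
diff = {x: mu[x] - muE[x] for x in mu}
distance = tv_norm(diff)                      # equals 2 * (1 - mu(E))
```

In particular the implied constant in the lemma can be taken to be $2$ in the discrete case.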
The space $M(X)$ (and hence $\operatorname{Pr}(X)$) comes equipped with the \emph{vague topology} (or \emph{weak-* topology}), defined as the topology induced by the functionals $\mu \mapsto \int_X f\ d\mu$ for all bounded continuous $f$. The following lemma is well-known:
\begin{lemma}[Prokhorov's theorem]\label{seq} Let $X$ be a sub-Cantor space, and let $\mu_n$ be a sequence of measures in $\operatorname{Pr}(X)$. Then there is some subsequence $\mu_{n_j}$ of $\mu_n$ which converges vaguely to another measure $\mu \in \operatorname{Pr}(X)$. \end{lemma}
The space $\operatorname{Pr}(X)$ also comes with a $\sigma$-algebra, induced by the evaluation mappings $\mu \mapsto \mu(A)$ for all measurable $A \subset X$. This allows us to introduce the notion of a \emph{probability kernel}, which is fundamental to our arguments for our positive results:
\begin{definition}[Probability kernels] Let $X,Y$ be sub-Cantor spaces. A \emph{probability kernel from $Y$ to $X$} is a measurable function $P: Y \to \operatorname{Pr}(X)$ from $Y$ to $\operatorname{Pr}(X)$. We will use the notation $P:Y \rightsquigarrow X$ to denote the fact that $P$ is a probability kernel from $Y$ to $X$. If $y \in Y$ and $f: X \to \mathbf{R}$ is measurable, we use $\int_X f(x)\ P(y, dx)$ to denote the integral of $f$ against the measure $P(y) \in \operatorname{Pr}(X)$. We call a probability kernel $P: Y \rightsquigarrow X$ \emph{trivial} if $X$ is a point. \end{definition}
\begin{remark} A probability kernel $P: Y \rightsquigarrow X$ can be viewed as describing the law for some random variable on $X$, where the distribution of that law depends on the value of a parameter $y$ in $Y$. Indeed, one common way to construct probability kernels is to condition one random variable on the value of another; in measure-theoretic terms, this is closely related to the operation of \emph{disintegrating} a measure with respect to a factor. \end{remark}
Two important special cases of a probability kernel arise from probability measures and from measurable functions. Indeed, if $\mu \in \operatorname{Pr}(X)$ is a probability measure, we can (by abuse of notation) identify $\mu$ with a probability kernel $\mu: \operatorname{pt} \rightsquigarrow X$ which maps the point in $\operatorname{pt}$ to $\mu$. Similarly, if $\phi: Y \to X$ is a measurable function, we can (by further abuse of notation) identify $\phi$ with a probability kernel $\phi: Y \rightsquigarrow X$ which maps any point $y \in Y$ to the Dirac mass $\delta_{\phi(y)}$ at $\phi(y)$. These abuses of notation shall be in effect throughout the paper.
We now define two important notions on probability kernels, namely composition and product.
\begin{definition}[Composition of kernels] If $P: Y \rightsquigarrow X$ and $Q:Z \rightsquigarrow Y$ are probability kernels between sub-Cantor spaces, we define the \emph{composition} $P\circ Q: Z\rightsquigarrow X$ by the formula \[P\circ Q(z)(E) := \int_Y P(y)(E)\,Q(z,dy)\] for all $z \in Z$ and all measurable $E \subset X$. \end{definition}
\begin{example}[Special cases of composition] Let $\phi: Y \to X$ and $\psi: Z \to Y$ be measurable maps, which we then identify with probability kernels, and let $\mu \in \operatorname{Pr}(Y)$ be a probability measure (which we also identify with a probability kernel). Then $\phi \circ \psi$ is just the usual composition of $\phi$ and $\psi$, while $\phi \circ \mu$ is the pushforward of $\mu$ under $\phi$. \end{example}
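When all spaces are finite, composition of kernels is simply matrix multiplication. The following sketch (with hypothetical two-point spaces) encodes a kernel $P\colon Y \rightsquigarrow X$ as a row-stochastic matrix whose $y$th row is the measure $P(y)$:

```python
# Finite-state sketch of kernel composition: with kernels stored as
# row-stochastic matrices, (P o Q)(z)(E) = sum_y P(y)(E) Q(z)(y), i.e.
# the composition is the matrix product of Q (as Z ~> Y) with P (as Y ~> X).

def compose(P, Q):
    """Composition P o Q of kernels Q: Z ~> Y and P: Y ~> X (as matrices)."""
    Y = len(Q[0])
    X = len(P[0])
    return [[sum(Q[z][y] * P[y][x] for y in range(Y)) for x in range(X)]
            for z in range(len(Q))]

Q = [[0.5, 0.5], [0.2, 0.8]]    # kernel Z ~> Y
P = [[1.0, 0.0], [0.25, 0.75]]  # kernel Y ~> X
PQ = compose(P, Q)              # kernel Z ~> X; each row is again a measure
```

In this encoding a measurable map $\phi$ becomes a $0$--$1$ matrix and a measure a single row, recovering the special cases of the example above.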
\begin{remark} For future reference we observe that a probability kernel $P: Y \rightsquigarrow X$ not only pushes forward probability measures $\mu \in \operatorname{Pr}(Y)$ to probability measures $P \circ \mu \in \operatorname{Pr}(X)$, but in fact can push forward arbitrary finite Borel measures $\mu$ on $Y$ to finite Borel measures $P \circ \mu$ on $X$, by the formula $$ P \circ \mu(E) := \int_Y P(y)(E)\ d\mu(y)$$ for all measurable $E \subset X$. \end{remark}
\begin{definition}[Product of kernels] If $S$ is an at most countable set, and $P_s: Y \rightsquigarrow X_s$ is a probability kernel between sub-Cantor spaces for each $s \in S$, then we define the product $\bigotimes_{s \in S} P_s: Y \rightsquigarrow \prod_{s \in S} X_s$ by defining $\bigotimes_{s \in S} P_s(y)$ for each $y \in Y$ to be the product of the probability measures $P_s(y)$ for $s \in S$. We also write $P^{\otimes S}$ for $\bigotimes_{s \in S} P$. \end{definition}
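Continuing the finite sketch above (hypothetical data again, with $S$ a two-element index set), the product of two kernels with a common source takes, for each $y$, the product measure of the corresponding rows:

```python
# Finite sketch of the product of kernels P1: Y ~> X1 and P2: Y ~> X2:
# row y of the product kernel is the product measure P1(y) x P2(y),
# a measure on the product space X1 x X2.

def product_kernel(P1, P2):
    """(P1 (x) P2)(y) as a dict {(x1, x2): mass} for each source point y."""
    return [{(x1, x2): row1[x1] * row2[x2]
             for x1 in range(len(row1)) for x2 in range(len(row2))}
            for row1, row2 in zip(P1, P2)]

P1 = [[0.5, 0.5], [1.0, 0.0]]
P2 = [[0.3, 0.7], [0.3, 0.7]]   # constant in y, so its factor decouples
prod = product_kernel(P1, P2)   # each prod[y] is a measure on X1 x X2
```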
Finally, we define the notion of one probability kernel being absolutely continuous with respect to another.
\begin{definition}[Absolute continuity] If $\mu, \nu \in \operatorname{Pr}(X)$ are two probability measures on a sub-Cantor space, we say that $\mu$ is \emph{absolutely continuous with respect to $\nu$}, and write $\mu \ll \nu$, if for every measurable $E \subset X$ we have $\mu(E)=0$ whenever $\nu(E)=0$. If $P, P': Y \rightsquigarrow X$ are probability kernels, we say that $P'$ is \emph{absolutely continuous with respect to $P$}, and write $P' \ll P$, if we have $P'(y) \ll P(y)$ for all $y \in Y$. \end{definition}
\begin{example}\label{condit} If $\mu \in \operatorname{Pr}(X)$ is a probability measure, and $E \subset X$ is such that $\mu(E) > 0$, then $(\mu|E) \ll \mu$. \end{example}
The notion of absolute continuity is clearly a partial ordering on probability kernels between two given sub-Cantor spaces. It also interacts nicely with both composition and finite products:
\begin{lemma}[Preservation of absolute continuity]\label{pac} \begin{itemize} \item Let $P, P': Y \rightsquigarrow X$ and $Q, Q': Z \rightsquigarrow Y$ be probability kernels. If $P' \ll P$ and $Q' \ll Q$, then $P' \circ Q' \ll P \circ Q$. \item Let $S$ be a finite set, and for each $s \in S$ let $P_s, P'_s: Y \rightsquigarrow X_s$ be probability kernels such that $P'_s \ll P_s$. Then $\bigotimes_{s \in S} P'_s \ll \bigotimes_{s \in S} P_s$. \end{itemize} \end{lemma}
\begin{proof} Both claims follow immediately from the Fubini-Tonelli theorem. \end{proof}
In some of our arguments we will need a perturbed version of absolute continuity.
\begin{definition}[$\varepsilon$-absolute continuity]\label{epsac-def} Let $\varepsilon \ge 0$. If $\mu, \nu \in \operatorname{Pr}(X)$ are two probability measures on a sub-Cantor space, we say that $\mu$ is \emph{$\varepsilon$-absolutely continuous with respect to $\nu$}, and write $\mu \ll_\varepsilon \nu$, if for every measurable $E \subset X$ we have $\mu(E) \leq \varepsilon$ whenever $\nu(E)=0$. \end{definition}
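In the discrete setting this definition has a transparent meaning, sketched below (hypothetical finite measures): the smallest admissible $\varepsilon$ is exactly the $\mu$-mass of the set where $\nu$ vanishes.

```python
# Sketch of eps-absolute continuity for finite measures: mu <<_eps nu
# exactly when the mu-mass carried by the nu-null points is at most eps.

def min_eps(mu, nu):
    """Smallest eps with mu <<_eps nu, i.e. mu({x : nu(x) = 0})."""
    return sum(m for x, m in mu.items() if nu.get(x, 0.0) == 0.0)

nu = {0: 0.5, 1: 0.5}
mu = {0: 0.6, 1: 0.35, 2: 0.05}   # puts mass 0.05 where nu vanishes
eps = min_eps(mu, nu)             # so mu <<_eps nu only for eps >= 0.05
```

Note that $0$-absolute continuity recovers ordinary absolute continuity, as in the case $\mu = \nu$ restricted above.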
From the Lebesgue-Radon-Nikodym theorem we have several equivalent characterisations of $\varepsilon$-absolute continuity:
\begin{proposition}[Equivalent formulations of $\varepsilon$-absolute continuity]\label{epsac} Let $\varepsilon \ge 0$, and let $\mu, \nu \in \operatorname{Pr}(X)$ be two probability measures on a sub-Cantor space $X$. Then the following statements are equivalent: \begin{itemize} \item $\mu$ is $\varepsilon$-absolutely continuous with respect to $\nu$. \item For every $\varepsilon' > \varepsilon$ there exists $\delta > 0$ such that we have $\mu(E) \leq \varepsilon'$ for every measurable $E \subset X$ with $\nu(E) < \delta$. \item There exists a compact set $E \subset X$ with $\mu(E) \leq \varepsilon$ and $\nu(E)=0$ such that $\mathcal{I}(E^c) \mu \ll \nu$. \end{itemize} \end{proposition}
\parskip 0pt
\textsc{Department of Mathematics\\ University of California at Los Angeles, Los Angeles, CA 90095, USA}
Email: \verb|timaustin@math.ucla.edu|
Web: \verb|www.math.ucla.edu/~timaustin|
Email: \verb|tao@math.ucla.edu|
Web: \verb|www.math.ucla.edu/~tao|
\end{document} |
\begin{document}
\title{A Guide for Computing Stable Homotopy Groups}
\author[A. Beaudry]{Agn\`es Beaudry} \address{Department of Mathematics\\ University of Colorado at Boulder \\ \newline Campus Box 395 \\ Boulder \\ Colorado \\ 80309}
\author[J. Campbell]{Jonathan A. Campbell} \address{Department of Mathematics \\ Vanderbilt University \\ \newline 1326 Stevenson Center \\ Nashville \\ Tennessee \\ 37240}
\date{\today}
\begin{abstract} This paper contains an overview of background from stable homotopy theory used by Freed--Hopkins in their work on invertible extended topological field theories. We provide a working guide to the stable homotopy category, to the Steenrod algebra and to computations using the Adams spectral sequence. Many examples are worked out in detail to illustrate the techniques. \end{abstract}
\maketitle
\setcounter{tocdepth}{1} \tableofcontents
\section{Introduction and organization}
\subsection{Introduction} The main theorem of \cite{FH} states that deformation classes of reflection positive invertible $n$-dimensional extended topological field theories with symmetry group $H_n$ are classified by the torsion in \[ [MTH, \Sigma^{n+1}I_{{{\mathbb{Z}}}}] . \] Here, $MTH$ is the Madsen--Tillmann spectrum associated to a group $H$ which is a stabilization of $H_n$, $I_{{{\mathbb{Z}}}}$ is the Anderson dual of the sphere spectrum, and $[-, -]$ denotes the \emph{stable} homotopy classes of maps. These concepts will be discussed in \fullref{sec:spectra}.
In order to complete the classification problem, it is necessary to be able to compute stable homotopy classes of maps from a spectrum $X$ to $I_{{{\mathbb{Z}}}}$. This problem can be reduced to the computation of the stable homotopy groups of $X$ itself as will be described in \fullref{sec:andersondual}.
In general, it is notoriously difficult, if not impossible, to completely compute the homotopy groups of a spectrum $X$. However, homotopy theorists are very good at doing these computations in small ranges, and the problems motivated by physics only require information in small dimensions, making the two a perfect match.
The main tool used to compute low-dimensional homotopy groups of spectra is the Adams spectral sequence. Adams initially introduced this spectral sequence in order to resolve the Hopf invariant one problem \cite{adams_cohomology, adams_hopf_inv_one}. It has been a standard tool in homotopy theory since then. In brief, the Adams spectral sequence takes in information about the cohomology of a space or spectrum and outputs information about its stable homotopy groups.
The Steenrod algebra, which we denote by $\mathcal{A}$, is one of the classical structures in homotopy theory. The mod-$2$ cohomology $H^\ast (X;{{\mathbb{Z}}}/2)$ of any space or spectrum $X$ is a module over $\mathcal{A}$. This module is the input to the Adams spectral sequence. Although it can be difficult to compute the $\mathcal{A}$-module structure of the cohomology of an arbitrary space or spectrum $X$, we work under the favorable circumstance that the examples we consider are related to the classifying spaces of various Lie groups. With the $\mathcal{A}$-module structure of $H^*(X;{{\mathbb{Z}}}/2)$ in hand, and some knowledge of homological algebra over the Steenrod algebra, the $E_2$-page of the Adams spectral sequence can be computed. In the low-dimensional range, we are lucky, and every example we consider is fully computable by hand.
The aim of this paper is to introduce the reader to enough of the machinery of spectra, the Steenrod algebra and the Adams spectral sequence to understand the computation of the homotopy groups $\pi_\ast MTH$. To illustrate how one applies the theory, we do the computations for a few examples. In particular, we go over the cases when $H$ is $\Spin^c$ and $\Pin^c$ in detail, an exercise which was left to the reader in Section 10 of \cite{FH} and was not covered in \cite{campbell}.
\subsection{Organization}
In order to fully explicate the computations for readers unfamiliar with stable homotopy theory, we include an introduction to spectra in \fullref{sec:spectra}. Among other topics, we discuss the category of spectra and its homotopy category (\fullref{sec:catspectra} and \fullref{sec:homotopycat}), the homotopy groups of spectra (\fullref{sec:homotopygroups}), the Anderson dual (\fullref{sec:andersondual}) and the construction of Thom spectra (\fullref{sec:thomspectra}). These latter are integral to the Freed--Hopkins classification since it is Thom spectra that are tightly linked with cobordism groups and the cobordism hypothesis.
In \fullref{sec:steenrod} we discuss the Steenrod algebra, $\mc{A}$, which is a non-commutative, infinitely generated algebra that acts on the cohomology of all spaces and spectra. In \fullref{sec:ans}, we introduce $\mathcal{A}_1$, an eight dimensional sub-algebra of $\mathcal{A}$ that will play a crucial role in the computations. In \fullref{sec:compA1} we compute the $\mathcal{A}_1$-module structure for some examples of cobordism spectra. This computation depends on knowing how to determine the $\mathcal{A}_1$-module structure of the cohomology of classifying spaces, along with the Thom isomorphism and the Wu formula. These things are discussed in \fullref{sec:SWclasses}.
The Adams spectral sequence is introduced in \fullref{sec:ass}. The primary tool for computation with the Adams spectral sequence is homological algebra over $\mathcal{A}$ and, in our examples, over $\mathcal{A}_1$. This section includes a discussion of resolutions (\fullref{sec:MinRes}) and computations of $\mathrm{Ext}_{\mathcal{A}_1}$ for a menagerie of $\mathcal{A}_1$-modules. It includes explanations of Adams charts (\fullref{sec:achart}), of certain multiplicative structures on $\mathrm{Ext}$ (\fullref{sec:multiplicative}) and a variety of useful tricks. In \fullref{sec:assconstruction}, we formally construct the spectral sequence and in \fullref{sec:usingASS}, we provide a ``user's manual''.
In \fullref{sec:examples}, we come to the main event. In the range $0 \leq n \leq 4$ we compute $\pi_n MTH(s)$ and $\pi_n MTH^c(s)$ in all of the cases that were not explained in further detail in \cite{campbell}. The computations rely on the Adams spectral sequence, and we use all of the material developed in \fullref{sec:steenrod} and \fullref{sec:ass} to compute the $E_2$-pages. In such a small range, and in these cases, the spectral sequences collapse and the homotopy groups can be read off of the Adams charts.
\section{A working guide to spectra}\label{sec:spectra} In this section, we give an introduction to spectra. If one is interested in computing homotopy groups, then one can often get away with an understanding of the properties of the homotopy category of spectra (see \fullref{sec:homotopycat}). Some of the information we include is not strictly necessary for this understanding, but we tried to strike a balance between too little and too much information.
For a more in-depth introduction to spectra, a starting point would be Section 1.4 of Lurie \cite{lurie} and the introduction by Elmendorff--Kriz--Mandell--May to Chapter 6 of \cite{athandbook} (a book that contains other hidden gems). One could then move on to Part III of Adams \cite{adams}, Chapter 10.9 of Weibel \cite{weibel} and Chapter 12 of Aguilar--Gitler--Prieto \cite{aguilar}. For serious treatments of different modern models of the category of spectra together with all of its structure, see the first parts of Schwede \cite{schwede}, Mandell--May--Schwede--Shipley \cite{mmss}, Elmendorff--Kriz--Mandell--May \cite{ekmm} or Lurie \cite{lurie}. For the equivariant treatment, see Lewis--May--Steinberger \cite{lms}.
\begin{notation} We let $\Ab$ be the category of graded abelian groups. We let $\mathrm{Top}_*$ be a category of suitably nice based topological spaces with continuous maps that preserve the base points. \end{notation}
Motivation for the category of spectra comes from at least two directions. First, there is Brown's representability theorem that states that a cohomology theory $E^* \colon \mathrm{Top}_*^{\text{op}} \to \operatorname{Ab}$ has a sequence of representing spaces $E_n$. That is, $E^n(X) \cong [X, E_n]$. We will let $\Sigma(-)$ be the reduced suspension and $\Omega(-)$ be the based loops functor. The isomorphism $[\Sigma X, E_n] \cong [X, \Omega E_n]$ together with the suspension isomorphism $E^n(X) \cong E^{n+1}(\Sigma X)$ give rise to an isomorphism \[\xymatrix{ [X, E_n] \ar[r]^-{\cong} & [ X, \Omega E_{n+1}]}\] which is natural in $X$. By the Yoneda Lemma, this corresponds to a weak equivalence $\omega_n \colon E_n \xrightarrow{\simeq} \Omega E_{n+1}$. Further, to discuss natural transformations between cohomology theories, one is led to discuss maps between these sequences of spaces. It thus behooves us to construct a category which consists of sequences of spaces.
Another motivation is via Freudenthal's suspension theorem. Let $X$ be a $k$-connected topological space. Freudenthal's suspension theorem states that the map $\pi_n (X) \to \pi_{n+1} (\Sigma X)$ is an isomorphism if $n \leq 2k$. For a fixed $n$ and connected $X$, this implies that $\pi_{n+k} (\Sigma^k X)$ stabilizes as $k$ goes to infinity. This motivates the definition of the \emph{$n$th stable homotopy group} \[\pi_n^s X = \colim_k \pi_{n+k} \Sigma^k X \cong \pi_{n+m} (\Sigma^m X) \ \ \ m \gg 0. \] An amazingly useful fact is that $\pi_*^s \colon \mathrm{Top}_* \to \Ab$ is a homology theory, making the stable homotopy groups often (slightly) more computable than the usual, \emph{unstable}, homotopy groups. It is useful to consider the sequences of spaces $\{\Sigma^n X\}$ as the fundamental objects, and we come again to a point where it is necessary to define some category of sequences of spaces.
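As a concrete illustration (quoting standard low-dimensional values rather than computing them here), for $X = S^0$ the groups $\pi_{n+k}(S^k)$ stabilize to the first few stable stems:

```latex
\[ \pi_0^s S^0 \cong {{\mathbb{Z}}}, \qquad \pi_1^s S^0 \cong {{\mathbb{Z}}}/2, \qquad
   \pi_2^s S^0 \cong {{\mathbb{Z}}}/2, \qquad \pi_3^s S^0 \cong {{\mathbb{Z}}}/24, \]
```

where $\pi_1^s S^0$ is generated by the Hopf map $\eta$. The bound $n \leq 2k$ in Freudenthal's theorem is sharp: $\pi_3(S^2) \cong {{\mathbb{Z}}}$ while $\pi_4(S^3) \cong {{\mathbb{Z}}}/2$, so genuine information is lost in the passage to the stable range.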
We will define the following five categories in the next few sections: \begin{enumerate}[(1)] \item The category of prespectra, denoted $\mathrm{PreSp}$. See \fullref{defn:prespectra}. \item The category of spectra, denoted $\mathrm{Sp}$. See \fullref{defn:spectra}. \item The category of CW-prespectra, denoted $\mathrm{CWPreSp}$. See \fullref{defn:CWprespectra}. \item The category of CW-spectra, denoted $\mathrm{CWSp}$. See \fullref{defn:CWspectra}. \item The homotopy category of spectra, denoted $\mathrm{hSp}$. See \fullref{sec:homotopycat}. \end{enumerate} The first four are a means to the fifth. We justify this complication by the following analogy inspired from Chapter 10 of \cite{weibel} and, in particular, Analogy 10.9.7. The reader can skip this analogy now and come back to it at the end of \fullref{sec:homotopycat}.
\begin{analogy}\label{analogy} To justify having both spectra and prespectra, we make an analogy with the categories of sheaves and presheaves. Although we do homological algebra in the category of sheaves, some constructions are easier to make in the category of presheaves. The forgetful functor from sheaves to presheaves has a left adjoint, the sheafification functor. This allows one to transport constructions from presheaves to sheaves.
In this part of the analogy, spectra are the sheaves and prespectra are the presheaves. The analogue of the sheafification functor is called spectrification and is denoted $L \colon \mathrm{PreSp} \to \mathrm{Sp}$. It is the left adjoint to a forgetful functor from $\mathrm{Sp}$ to $\mathrm{PreSp}$. See \fullref{rem:functorL}.
Now, switching gears, we think of the category $\mathcal{C}$ of bounded below chain complexes of $R$-modules. There are two important kinds of equivalences in this category, the chain homotopy equivalences and the quasi-isomorphisms. The derived category ${D}(\mathcal{C})$ is characterized as the initial category which receives a functor $\mathcal{C} \to {D}(\mathcal{C})$ such that the quasi-isomorphisms are mapped to isomorphisms in ${D}(\mathcal{C})$. Chain homotopy equivalence is an equivalence relation, but the property of being quasi-isomorphic is not. In theory, it takes more work to invert the quasi-isomorphisms than it does to invert the chain homotopy equivalences. However, a quasi-isomorphism between bounded below projective chain complexes is a chain homotopy equivalence and, further, any chain complex is quasi-isomorphic to a projective one. Therefore, a model for ${D}(\mathcal{C})$ is the category whose objects are projective chain complexes and morphisms are chain homotopy equivalences of maps.
In this part of the analogy, the topological spaces are the $R$-modules and the category of spectra is the analogue of $\mathcal{C}$. The chain homotopy equivalences correspond to the homotopy equivalences and the quasi-isomorphisms to the weak homotopy equivalences. The homotopy category of spectra is analogous to ${D}(\mathcal{C})$. The projective chain complexes are the analogues to CW-spectra, and a model for the homotopy category of spectra is the category of CW-spectra together with homotopy classes of maps between them.
We have not mentioned CW-prespectra and use it to tie the knot between the two analogies: CW-prespectra are easy to define in prespectra, and the spectrification functor is used to transfer the definition to spectra. \end{analogy}
\subsection{The categories}\label{sec:catspectra}
\begin{defn}[Prespectra]\label{defn:prespectra} A \emph{prespectrum} $X$ is a sequence of spaces $ X_n \in \mathrm{Top}_*$ for $n\geq 0$ and continuous maps $\sigma_n \colon \Sigma X_n \to X_{n+1}$. We let $\omega_n \colon X_n \to \Omega X_{n+1}$ be the adjoint of $\sigma_n$ and note that giving the structure maps $\sigma_n$ of a prespectrum is equivalent to specifying the maps $\omega_n$. A map of prespectra $f \colon X \to Y$ of degree $r$ is a sequence of continuous, based maps $f_n \colon X_n \to Y_{n-r}$ such that the following diagram commutes: \[\xymatrix{ \Sigma X_n \ar[r]^-{\Sigma f_n} \ar[d]_{\sigma_n} & \Sigma Y_{n-r} \ar[d]^{\sigma_{n-r}} \\ X_{n+1} \ar[r]_-{f_{n+1}} & Y_{n+1-r}.}\] We let $\mathrm{PreSp}$ denote the category of prespectra. (The plural of prespectrum is \emph{prespectra}.) \end{defn}
\begin{rem} If the maps $\omega_n$ are weak homotopy equivalences, then $X$ is often called an \emph{$\Omega$-prespectrum}. \end{rem}
\begin{defn}[Spectra]\label{defn:spectra} A prespectrum is called a \emph{spectrum} if the maps $\omega_n$ are homeomorphisms. We let $\mathrm{Sp}$ denote the full subcategory of prespectra generated by the objects which are spectra. \end{defn}
\begin{defn}[CW-prespectra]\label{defn:CWprespectra} We call a prespectrum a \emph{CW-prespectrum} if the spaces $X_n$ are CW-complexes and the maps $\Sigma X_n \to X_{n+1}$ are cellular inclusions. We let $\mathrm{CWPreSp}$ denote the full subcategory of prespectra generated by the objects which are CW-prespectra. \end{defn}
\begin{ex}
The standard example is the suspension prespectrum $\Sigma^{\infty}A$ of a based topological space $A$. Its $n$th space is given by $\Sigma^n A$ and the structure maps are identities $\Sigma \Sigma^n A \cong \Sigma^{n+1}A \to \Sigma^{n+1}A$. In fact, this extends to a functor $\Sigma^{\infty}\colon \mathrm{Top}_* \to \mathrm{PreSp}$ which sends a space $A$ to $\Sigma^{\infty}A$. The functor $\Sigma^{\infty}$ is left adjoint to the functor $\Omega^{\infty} \colon \mathrm{PreSp} \to \mathrm{Top}_*$ which sends a prespectrum to its zeroth space.
\end{ex}
\begin{ex} The Eilenberg-MacLane prespectrum $HG$, where $G$ is an abelian group, has $n$th space $K(G,n)$. The structure maps of $HG$ are the adjoints to the homotopy equivalences $\omega_n \colon K(G,n) \to \Omega K(G,n+1)$. A homomorphism of abelian groups $G_1 \to G_2$ gives rise to a map of prespectra $HG_1 \to HG_2$.
\end{ex}
\begin{ex} Another example is given by $K$-theory. The odd spaces of $K$ are the infinite unitary group $U$ and the even spaces are ${{\mathbb{Z}}} \times BU$, where $BU$ is the classifying space of $U$. The structure maps $\omega_n$ are the equivalences given by Bott Periodicity. Similarly, real $K$-theory is denoted by $KO$. Its spaces repeat with period eight starting with ${{\mathbb{Z}}} \times BO$, where $BO$ is the classifying space of the infinite orthogonal group $O$.
\end{ex}
\begin{ex} If $X=X_0$ is an infinite loop space, so that there exist spaces $X_k$ such that $X \simeq \Omega^k X_k $ for all $k\geq 0$, then the $X_k$ assemble into a prespectrum.
\end{ex}
\begin{defn}\label{def:functorL}\label{rem:functorL} The \emph{spectrification functor} $L \colon \mathrm{PreSp} \to \mathrm{Sp}$ is the left adjoint to the forgetful functor $U \colon \mathrm{Sp} \to \mathrm{PreSp}$. \end{defn}
\begin{rem} The functor $L$ exists by Freyd's adjoint functor theorem. It can be constructed easily if the maps $\omega_n$ are inclusions (for example, if $X$ is a CW-prespectrum). In this case, $LX$ is the spectrum whose $k$th space is \[ LX_k = \colim_n \Omega^{n+k}X_n,\] where the colimit is taken over $ \Omega^{n+k}(\omega_n) \colon \Omega^{n+k}X_n \to \Omega^{n+k+1}X_{n+1}$. If $X$ is already a spectrum, then $LX \cong X$ as these maps are all homeomorphisms. For a general definition, we refer the reader to Appendix A.1 of \cite{lms}. \end{rem} \begin{warn} We abuse notation and write $ULX$ simply as $LX$. Further, we often omit the $L$ if we are not emphasizing the replacement. For example, we write $\Sigma^{\infty}A = L( \Sigma^{\infty}A )$, $HG =L(HG)$, etc. \end{warn}
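For example (a standard consequence of the construction, stated here only as an illustration), applying $L$ to a suspension prespectrum recovers the classical $Q$-construction:

```latex
\[ \Omega^{\infty} L \Sigma^{\infty} A \;=\; \colim_n \Omega^{n} \Sigma^{n} A \;=:\; QA,
   \qquad \pi_k (QA) \cong \pi_k^s A, \]
```

so the zeroth space of the spectrified suspension prespectrum computes the stable homotopy groups of $A$ from \fullref{sec:spectra}'s second motivating discussion.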
\begin{defn}[CW-Spectra]\label{defn:CWspectra} The category of \emph{CW-spectra}, denoted $\mathrm{CWSp}$, is the full subcategory of spectra generated by the image of the restriction of $L$ to CW-prespectra. That is, $X \in \mathrm{CWSp}$ if it is of the form $LY$ for some $Y \in \mathrm{CWPreSp}$. \end{defn}
We summarize the discussion by the following diagram of adjunctions, where $\Omega^{\infty} \colon \mathrm{Sp} \to \mathrm{Top}_*$ is also the zeroth space functor: \[\xymatrix{ \mathrm{Top}_* \ar@<.5ex>[rr]^-{\Sigma^{\infty}} \ar@/^2pc/[rrrr]^{\Sigma^{\infty}} & & \mathrm{PreSp} \ar@<.5ex>[ll]^-{\Omega^{\infty}} \ar@<.5ex>[rr]^-{L} & &\mathrm{Sp} \ar@<.5ex>[ll]^-U \ar@/^2pc/[llll]^-{\Omega^{\infty}}} \]
The coproduct in $\mathrm{Top}_*$ is the wedge $A \vee B$. The category $\mathrm{Top}_*$ is a closed symmetric monoidal category, where the hom objects are the spaces of continuous based maps $\Maps(A, B)$ and the symmetric monoidal product is the smash product $A \wedge B$. There is an associated homeomorphism \[ \Maps(A \wedge B, C) \cong \Maps(A, \Maps(B, C)).\] We briefly discuss related constructions in (pre)spectra.
For $X$ a prespectrum and $A$ a based topological space, we let $X \wedge A$ be the prespectrum whose spaces are given by $X_n \wedge A$ and structure maps by $\sigma_n \wedge \id_A$. We define $\Sigma^r X = X \wedge S^r$ with $\Sigma =\Sigma^1$.
Similarly, we let $F(A, X)$ be the prespectrum whose $n$th space is $\Maps(A,X_n)$ and whose structure maps are given by $f \mapsto \omega_n \circ f$, using the identification \[\Omega \Maps(A, X_{n+1}) \cong \Maps(A, \Omega X_{n+1}) .\] We let $ \Omega(X) = F(S^1, X)$. In the homotopy category (defined in \fullref{sec:homotopycat}), the functors $\Omega(-)$ and $\Sigma(-)$ become inverses, so we let $\Sigma^{-1}(-)=\Omega(-)$.
In prespectra, the coproduct is also a wedge construction. The spaces of $X \vee Y$ are $X_n \vee Y_n$ with structure maps $\sigma_n \vee \sigma_n$, using the fact that $\Sigma(X_n \vee Y_n) \cong \Sigma X_n \vee \Sigma Y_n$.
These constructions transfer to spectra via the spectrification functor $L$, and we abuse notation by dropping the $L$ from the notation. For example, we write $\Sigma X= L(\Sigma X) $.
\begin{rem}\label{rem:smashfun} Smash products of spectra and function spectra are harder to construct, and we will not do this here. We do note however that there are versions of the category of spectra which are closed symmetric monoidal with respect to an appropriate smash product. The first such construction is due to Elmendorf--Kriz--Mandell--May \cite{ekmm}. However, up to homotopy (see the definition of homotopy in these categories below), the smash product $X \wedge Y$ was constructed directly two decades prior. This is called Boardman's \emph{handicrafted} smash product and the construction is described in \cite{adams}. One can also construct a function spectrum $F(X,Y)$ so that $F(X,F(Y,Z)) \simeq F(X\wedge Y,Z)$. We will only use these constructions up to homotopy and we take them for granted. \end{rem}
\subsection{Homotopies and homotopy groups}\label{sec:homotopygroups} Let $I_+$ be the unit interval $[0,1]$ with a disjoint basepoint. Then the prespectrum $X\wedge I_+$ admits a map \[X \vee X \xrightarrow{i_0 \vee i_1} X\wedge I_+\] defined levelwise on each factor by the inclusions at $0$ and $1$ respectively. As in $\mathrm{Top}_*$, we can use the prespectrum $X\wedge I_+$ to define homotopies between maps.
Two maps of prespectra $f,g \colon X \to Y$ are \emph{homotopic}, denoted $f \simeq g$, if there is a map $H \colon X\wedge I_+ \to Y $ which restricts to $f \vee g$ along the inclusion \[X \vee X \xrightarrow{i_0 \vee i_1} X\wedge I_+ \xrightarrow{H} Y.\] Maps of spectra are homotopic if they are homotopic as maps of prespectra. We will let the set of homotopy classes of maps between two (pre)spectra $X$ and $Y$ be denoted by $\{X,Y\}$. If $Y$ is an $\Omega$-prespectrum, this is in fact an abelian group. Similarly, homotopy classes of maps of degree $r$ are denoted by $\{X,Y\}_r$. Two (pre)spectra $X$ and $Y$ are \emph{homotopy equivalent} if there are maps $f \colon X \to Y$ and $g \colon Y \to X$ such that $f\circ g \simeq \id_{Y}$ and $g \circ f \simeq \id_{X}$.
\begin{defn} Let $X$ be a (pre)spectrum and $n\in {{\mathbb{Z}}}$. The \emph{$n$th homotopy group} of $X$ is \[\pi_n X = \colim_{k} \pi_{n+k}X_k\] where the maps in the colimit take an element $S^{n+k} \to X_k$ to the composite $S^{n+k+1} \to \Sigma X_k \xrightarrow{\sigma_{k}} X_{k+1}$. A map of (pre)spectra is a \emph{weak homotopy equivalence} if it induces an isomorphism on homotopy groups. \end{defn}
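The following standard example illustrates the definition.
\begin{ex} For the suspension spectrum of a based space $A$, the definition gives \[\pi_n \Sigma^{\infty}A = \colim_{k}\pi_{n+k}(\Sigma^k A),\] the $n$th stable homotopy group of $A$. When $A$ is a finite CW-complex, the Freudenthal suspension theorem implies that the maps in this colimit are eventually isomorphisms. In particular, $\pi_n S^0 = \colim_k \pi_{n+k}(S^k)$ is the $n$th stable stem; for example, $\pi_0 S^0 \cong {{\mathbb{Z}}}$, generated by the identity via the degree. \end{ex}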
\begin{rem} The unit of the adjunction $X \to LX$ is a functorial replacement of $X$ by the weakly homotopy equivalent spectrum $LX$. \end{rem}
\begin{rem}[Whitehead's theorem] A map of CW-spectra which is a weak homotopy equivalence is also a homotopy equivalence. \end{rem}
\subsection{The homotopy category of spectra and its triangulation}\label{sec:homotopycat} First, we recall the analogous object for $\mathrm{Top}_*$. The homotopy category of based topological spaces $\mathrm{hTop}_*$ is the initial category receiving a functor from $\mathrm{Top}_*$ which sends weak homotopy equivalences to isomorphisms. Using Whitehead's theorem and CW-approximation, one model for $\mathrm{hTop}_*$ has objects the pointed CW-complexes and morphisms the based homotopy classes of maps between them. The map $\mathrm{Top}_* \to \mathrm{hTop}_*$ sends $A$ to a CW-approximation $\Gamma A$, which is functorial up to homotopy, and a map $f$ to the homotopy equivalence class of $\Gamma f$.
There are many constructions of the homotopy category of spectra, which we denote by $\mathrm{hSp}$, including through the theory of $\infty$-categories. These all give equivalent categories and $\mathrm{hSp}$ is one of the modern settings for homotopy theory. In this section, we give some of the standard tools to work in $\mathrm{hSp}$.
The homotopy category $\mathrm{hSp}$ is initial among categories that admit a functor out of $\mathrm{(Pre)Sp}$ which sends the weak homotopy equivalences to isomorphisms. In particular, any functor $\mathrm{(Pre)Sp} \to \mathcal{D} $ with this property factors through the functor $\mathrm{(Pre)Sp} \to \mathrm{hSp} $: \[\xymatrix{ \mathrm{PreSp} \ar[dr] \ar[r]^-{L} & \mathrm{Sp} \ar[d] \ar[r] & \mathrm{hSp} \ar@{-->}[dl] \\ & \mathcal{D} & }\]
The objects of $\mathrm{hSp}$ are simply called \emph{spectra}. The category $\mathrm{hSp}$ is a triangulated category with shift operator given by the suspension $\Sigma(-)$, which in $\mathrm{hSp}$ becomes inverse to $\Sigma^{-1}(-) = \Omega(-)$. We define \[[X,Y]_r := \mathrm{hSp}_r(X,Y) = \mathrm{hSp}(\Sigma^r X, Y) \] and let $[X,Y] = [X,Y]_0$. These are abelian groups for all $X$, $Y$ and $r$. The isomorphisms in $\mathrm{hSp}$ are denoted by $\simeq$ because of their relationship to the weak homotopy equivalences.
\begin{rem} The morphisms in $\mathrm{hSp}$ must be computed with care, and we remind the reader of \fullref{analogy}. With this analogy in mind, note that $[X,Y]$ is not in general isomorphic to $\{X,Y\}$. Here, $\{X,Y\}$ denotes the homotopy classes of maps as defined in \fullref{sec:homotopygroups}. This is the essence of the ``cells now --- maps later'' discussion on p.142 of \cite{adams}. \end{rem}
\begin{rem}[CW-approximation]\label{rem:CWapprox} For any prespectrum $X$, there is a CW-spectrum $\Gamma X$ connected to $X$ by a zig-zag of weak homotopy equivalences. The construction is functorial up to homotopy. \end{rem}
\begin{rem} We use CW-approximation to describe models for $ \mathrm{hSp}$. The first has objects CW-spectra and morphisms homotopy classes of maps between them. In particular, if $X$ and $Y$ are CW-spectra, then $[X,Y]\cong \{X,Y\}$. The functor $ \mathrm{(Pre)Sp} \to \mathrm{hSp}$ sends $X$ to $\Gamma X$ and a map $f$ to the homotopy equivalence class of $\Gamma f$. A slightly larger model is to let the objects be CW-prespectra and morphisms $[X,Y] \cong \{LX, LY\} \cong \{X, LY\} $. One can also take objects to be all prespectra and morphisms to be $[X,Y] \cong \{\Gamma X, \Gamma Y\}$.
The point we want to stress here is that, for any two $X$ and $Y$, whether they be prespectra, spectra, CW-prespectra or CW-spectra, it makes sense to write down $[X,Y]$. Every point of view yields isomorphic abelian groups. In $\mathrm{hSp}$, we forget the distinctions: All objects have equal dignity and are called \emph{spectra}. \end{rem}
We extend $\Sigma^{\infty}$ to a functor $\Sigma^{\infty} \colon \mathrm{Top}_* \to \mathrm{hSp}$ by sending $A$ to the image of $\Sigma^{\infty}A \in \mathrm{hSp}$. We often simply write $A$ to denote $\Sigma^{\infty}A \in \mathrm{hSp}$. For example, $S^t$ as a spectrum is \[S^t \simeq \Sigma^{\infty} S^t \simeq \Sigma^t \Sigma^{\infty} S^0 \simeq \Sigma^t S^0. \] The \emph{sphere spectrum} is the spectrum $S^0$. On the other hand, $\Omega^{\infty}$ induces a functor $\Omega^{\infty} \colon \mathrm{hSp} \to \mathrm{hTop}_*$.
We let $F(X,Y)$ and $X \wedge Y$ be the function spectrum and smash product in $\mathrm{hSp}$. See \fullref{rem:smashfun}. The category $\mathrm{hSp}$ is a closed symmetric monoidal category so that \begin{align}\label{eq:adjunctionF}
F(X \wedge Y, Z ) \simeq F(X ,F(Y, Z) ). \end{align} The sphere spectrum $S^0$ is the unit for the symmetric monoidal structure and \begin{align*} S^0 \wedge X &\simeq X, & F(S^0,X)&\simeq X. \end{align*} If $X$ is a spectrum and $A$ is a based topological space, then for the constructions described in \fullref{sec:catspectra}, we have $A \wedge X \simeq (\Sigma^{\infty}A) \wedge X$ and $F(A,X) \simeq F(\Sigma^{\infty}A, X)$.
There is an identity \[[X,Y]_t = \pi_tF(X,Y).\] In particular, if $\pi_*X$ denotes the homotopy groups of $X$, \[\pi_tX \cong [S^t, X] \cong \pi_{0}F(S^t, X) \cong \pi_t F(S^0, X) .\]
The category $\mathrm{hSp}$ has arbitrary products and coproducts. Further, for a collection of objects $X_{\alpha}$, $\alpha\in I$ with the property that, for every $k \in {{\mathbb{Z}}}$, $\pi_kX_\alpha =0$ for all but finitely many $\alpha \in I$, the map \begin{equation}\label{eq:prodcoprod} \xymatrix{\bigvee_{\alpha \in I} X_\alpha \ar[r]^-{\simeq} & \prod_{\alpha \in I} X_\alpha} \end{equation} is an isomorphism.
Pushout and pullback diagrams also coincide in $\mathrm{hSp}$. The exact triangles \[X \to Y \to Z \to \Sigma X\] are equivalently called \emph{cofiber} and \emph{fiber} sequences. The spectrum $Z$ is called the \emph{cofiber} of $X \to Y$, while $X$ is called the \emph{fiber} of $Y \to Z$. A map $X \to Y$ is null homotopic if and only if $Z \simeq Y \vee \Sigma X$.
A standard example of an exact triangle in $\mathrm{hSp}$ is constructed by killing an element in homotopy. For example, if $\alpha \colon S^n \to S^m$ is an element of $\pi_n S^m$, then $C(\alpha)$ is defined by the exact triangle \[S^n \xrightarrow{\alpha} S^m \to C(\alpha) \to S^{n+1}.\]
If $X \to Y \to Z \to \Sigma X$ is an exact triangle, then so are the four term sequences obtained by applying $W \wedge (-)$, $F(W,-)$ or $F(-,W)$. Further, applying either of $[-,X]$ and $[X,-]$ to an exact triangle gives rise to a long exact sequence of abelian groups. In particular, there are long exact sequences on homotopy groups $\pi_*(-)$.
A useful fact about the functor $\Sigma^{\infty}$ is that it commutes with $\wedge$ and with $\vee$. Also, applying $\Sigma^{\infty}$ to a homotopy cofiber sequence $A \to B \to C$ of spaces gives an exact triangle in $ \mathrm{hSp}$. In particular, the cofiber sequence $A \vee B \to A \times B \to A \wedge B$ gives rise to a split cofiber sequence of spectra so that \[\Sigma^{\infty}(A \times B) \simeq \Sigma^{\infty}A \vee \Sigma^{\infty} B \vee \Sigma^{\infty}(A \wedge B).\]
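For instance, taking $A = B = S^1$ gives the stable splitting of the torus \[\Sigma^{\infty}(S^1 \times S^1) \simeq \Sigma^{\infty}S^1 \vee \Sigma^{\infty}S^1 \vee \Sigma^{\infty}S^2 \simeq \Sigma S^0 \vee \Sigma S^0 \vee \Sigma^2 S^0,\] using $S^1 \wedge S^1 \cong S^2$. No such splitting of $S^1 \times S^1$ exists in $\mathrm{hTop}_*$ (the cup product of the two degree one classes is nonzero, while cup products between positive-degree classes from distinct wedge summands vanish), so this is a genuinely stable phenomenon.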
\begin{warn} From this point onwards, when we say ``spectrum'', we mean an element of $\mathrm{hSp}$ unless otherwise specified. \end{warn}
\subsection{Cohomology and Homology Theories.} A generalized homology theory is a collection of functors $E_n \colon \mathrm{hSp} \to \Ab$ indexed by ${{\mathbb{Z}}}$, together with natural isomorphisms $E_{n+1}(\Sigma -) \xrightarrow{\cong} E_{n}( -)$ such that $E_n$ takes arbitrary coproducts to direct sums and exact triangles to exact sequences. A generalized cohomology theory is a collection of contravariant functors $E^n \colon \mathrm{hSp}^{\mathrm{op}} \to \Ab$ indexed by ${{\mathbb{Z}}}$ and natural isomorphisms $E^{n}( -) \xrightarrow{\cong} E^{n+1}(\Sigma -)$ such that $E^n$ takes arbitrary coproducts to direct products and exact triangles to exact sequences. We refer the reader to Whitehead \cite[Section 5]{whitehead} for more on generalized homology and cohomology theories.
Any spectrum in $\mathrm{hSp}$ gives rise to generalized homology and cohomology theories $E_* \colon \mathrm{hSp} \to \Ab$ and $E^* \colon \mathrm{hSp}^{\mathrm{op}} \to \Ab$. Further, by precomposing with $\Sigma^{\infty} \colon \mathrm{Top}_* \to \mathrm{hSp}$, we obtain (reduced) theories defined on topological spaces. If $E \in \mathrm{hSp}$, \[ E^n(X) = [X, E]_{-n}\cong \pi_{-n}F(X,E) \cong [X, \Sigma^n E] \] and \[ E_n(X) = \pi_n( E \wedge X).\] Conversely, the Brown representability theorem implies that any homology or cohomology theory is represented by a spectrum $E = \{E_n\}$ so that $E^n(X) = [X, E_n]$.
\begin{rem} If $E \in \mathrm{PreSp}$ is a prespectrum, and $A$ is a topological space, \[ E^n(A) \cong [A , (LE)_n] \] where the right hand side denotes homotopy classes of maps in $\mathrm{Top}_*$. In particular, if $E \in \mathrm{Sp}$, then $E^n(A) \cong [A,E_n]$. In fact, for this to hold, it is enough that the structure maps $\omega_n$ be weak homotopy equivalences (i.e., that $E$ be an $\Omega$-prespectrum).
If $E \in \mathrm{CWPreSp}$ is such that $E_n$ is $(n-1)$-connected, then \[E_n(A) \cong \pi_n(E \wedge A) \cong \colim_{k} \pi_{n+k} (E_k \wedge A). \] \end{rem}
\begin{ex} If $E = HG$ is the Eilenberg--MacLane spectrum for an abelian group $G$, and $A$ is a based space or $B_+$ is an unbased space $B$ with a disjoint base point added, then \begin{align*} HG^*(A) &= \widetilde{H}^*(A ; G) & HG^*(B_+) &= {H}^*(B ; G). \end{align*} Further, by definition, $\widetilde{H}^*(A;G) \cong HG^*(\Sigma^{\infty}A)$. \end{ex}
\subsection{Connective spectra}\label{sec:connective} Let $A \in \mathrm{Top}_*$ be a connected CW-complex. For every $m\geq 0$, there is a space $A_{\tau \geq m} $ with the property that $\pi_nA_{\tau \geq m}=0$ if $n<m$, together with a map $A_{\tau \geq m} \to A$ which is an isomorphism on $\pi_n$ if $n \geq m$. The space $A_{\tau \geq m}$ is called the \emph{$m$th connective cover of $A$}, and is obtained as the $m$th stage of the Whitehead tower. This can be done functorially and the spaces $A_{\tau \geq m}$ are unique up to canonical isomorphism in $\mathrm{hTop}_*$.
Note that the homotopy groups of spectra are defined for any integer $n \in{{\mathbb{Z}}}$. In particular, some spectra have negative homotopy groups. If $X \in \mathrm{hSp}$ and $m \in {{\mathbb{Z}}}$, the \emph{$m$th connective cover} of $X$ is a spectrum $X_{\tau \geq m} $ with the property that $\pi_nX_{\tau \geq m} =0$ for $n<m$, together with a map $X_{\tau \geq m} \to X$ which is an isomorphism on $\pi_n$ if $n\geq m$. If $X \in \mathrm{hSp}$ is represented by a prespectrum with spaces $X_n$, then $X_{\tau \geq m}$ is represented by a prespectrum whose spaces are $(X_{\tau \geq m})_n = (X_n)_{\tau \geq m+n}$ and whose structure maps are obtained from those of $X$ using the functoriality and uniqueness. The spectrum $X_{\tau \geq 0} $ is called the \emph{connective cover} of $X$. \begin{notation} The spectrum $\ku$ denotes the connective cover of the $K$-theory spectrum $K$. The spectrum $\ko$ denotes the connective cover of the real $K$-theory spectrum $KO$. \end{notation}
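The following standard computation illustrates these definitions.
\begin{ex} By Bott periodicity, $\pi_n K \cong {{\mathbb{Z}}}$ for all even $n \in {{\mathbb{Z}}}$ and $\pi_n K = 0$ for $n$ odd. The connective cover therefore has \[\pi_n \ku \cong \begin{cases} {{\mathbb{Z}}} & n \geq 0 \text{ even}, \\ 0 & \text{otherwise}. \end{cases}\] Similarly, $\pi_n \ko$ vanishes for $n<0$, and for $n \geq 0$ it is given by the $8$-periodic pattern \[{{\mathbb{Z}}},\ {{\mathbb{Z}}}/2,\ {{\mathbb{Z}}}/2,\ 0,\ {{\mathbb{Z}}},\ 0,\ 0,\ 0 \qquad (n \equiv 0, 1, \ldots, 7 \bmod 8).\] \end{ex}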
\subsection{Multiplicative Homology Theories.} One of the main reasons for introducing a symmetric monoidal product on the category of spectra $\mathrm{Sp}$ or on its homotopy category $\mathrm{hSp}$ is the discussion of \textit{ring spectra}. The cohomology theory that one first encounters, singular cohomology, has the structure of a graded ring. By the Brown representability theorem, this gives rise to maps $H {{\mathbb{Z}}} \wedge H {{\mathbb{Z}}} \to H{{\mathbb{Z}}}$ for the Eilenberg--MacLane spectrum $H{{\mathbb{Z}}}$. Many cohomology theories come equipped with this structure; for example, $K$- and $KO$-theory and nearly all cobordism theories.
We give some definitions in $\mathrm{hSp}$. A \emph{ring spectrum} is a spectrum $R \in \mathrm{hSp}$ together with a multiplication map $\mu \colon R \wedge R \to R$ and a unit map $\eta \colon S^0 \to R$ such that the diagram \[ \xymatrix{
S^0 \wedge R\ar[dr]_-{\simeq} \ar[r]^-{\eta \wedge \id_R} & R \wedge R \ar[d]^\mu & R \wedge S^0\ar[dl]^-{\simeq} \ar[l]_-{\id_R \wedge \eta}\\
& R& } \] commutes (in $\mathrm{hSp}$). Granted a notion of ring spectrum, we can define commutative ring spectra, and module spectra. A commutative ring spectrum is one such that the diagram \[ \xymatrix{
R \wedge R \ar[dr]_-{\mu}\ar[rr]^-{\text{tw}} & & R \wedge R \ar[dl]^-{\mu} \\
& R } \] commutes, where $\operatorname{tw}$ is the map that exchanges the two copies of $R$. For $R$ a ring spectrum, an $R$-module spectrum is a spectrum $M$ together with a map $R \wedge M \to M$ which fits into the commutative diagrams that categorify the notion of a module over a ring.
Much of the intuition from homological algebra can be carried over to the context of ring spectra and module spectra. For example, one can define resolutions in this context. The homotopy groups of a resolution will reflect properties of the homotopy groups of the spectrum it resolves. See, for example, \cite{millerrelations}. This is one of the ideas in the construction of the Adams spectral sequence. See \fullref{sec:assconstruction}.
A construction from algebra that requires more care with the smash product when being adapted to spectra is the notion of quotient modules. This is solved in the modern categories of spectra, but is not needed here.
\subsection{Spanier--Whitehead duality} The functional dual of a spectrum $X$ is the function spectrum $F(X, S^0)$. This is often denoted by $DX$ in analogy with Spanier--Whitehead duality. If $X \simeq \Sigma^{\infty}A$ for a finite CW-complex $A$, then $DX$ is the classical Spanier--Whitehead dual of $A$.
The enriched adjunction \eqref{eq:adjunctionF} gives rise to certain important maps. First, there are the units and the counits, which are ``coevaluations'' and ``evaluations'' respectively: \begin{align*} coev &\colon Y \to F(X, X\wedge Y) & ev &\colon X \wedge F(X,Y) \to Y \end{align*} Using the adjunction \eqref{eq:adjunctionF} and $ev$, for any spectra $X$, $Y$ and $Z$, the adjoint to the map $X \wedge F(X,Y) \wedge Z \xrightarrow{ev \wedge Z} Y \wedge Z$ gives a map \begin{equation}\label{eq:dualstruc}
F(X, Y)\wedge Z \to F(X, Y\wedge Z)
\end{equation} which may or may not be an isomorphism in $\mathrm{hSp}$.
The spectrum $Z$ is called \emph{dualizable} if this is an isomorphism in $\mathrm{hSp}$ for all spectra $X$ and $Y$. Examples of dualizable spectra are the spheres $S^t = \Sigma^tS^0$ and, more generally, the suspension spectrum $\Sigma^{\infty}A$ of any finite CW-complex $A$. Finally, to verify that $Z$ is dualizable, it is enough to check that \[ DZ \wedge Z \simeq F(Z, S^0)\wedge Z \to F(Z, Z)\] is a weak equivalence.
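As a simple illustration, the dual of a sphere is again a sphere: \[DS^t = F(\Sigma^t S^0, S^0) \simeq \Sigma^{-t}F(S^0, S^0) \simeq \Sigma^{-t}S^0 = S^{-t},\] and under these identifications the comparison map $DS^t \wedge S^t \to F(S^t, S^t)$ becomes an equivalence between spectra each equivalent to $S^0$, so $S^t$ is dualizable.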
\subsection{Brown--Comenetz and Anderson duality}\label{sec:andersondual} For any injective abelian group $A$, the functor from $\mathrm{Top}_*$ to abelian groups given by \[ I_A^n(X) = \Hom_{{{\mathbb{Z}}}}(\pi_{n}( \Sigma^{\infty}X) ,A)\] defines a cohomology theory, which is represented by a spectrum denoted $I_{A}$. For example, if $A=\mathbb{Q}$, then \[I_{\mathbb{Q}}^n(X) \cong \widetilde{H}^{n}(X ; \mathbb{Q}) ,\] and $I_{\mathbb{Q}}$ is equivalent to $H\mathbb{Q}$.
Since $\mathbb{Q}/{{\mathbb{Z}}}$ is an injective abelian group, we also obtain a spectrum $I_{\mathbb{Q}/{{\mathbb{Z}}}}$, which is often called the \emph{Brown--Comenetz spectrum}. The natural map $\mathbb{Q} \to \mathbb{Q}/{{\mathbb{Z}}}$ together with the Yoneda Lemma gives rise to a map of spectra $I_{\mathbb{Q}} \to I_{\mathbb{Q}/{{\mathbb{Z}}}}$. Then $I_{{{\mathbb{Z}}}}$ is defined by the exact triangle in $\mathrm{hSp}$ \begin{align}\label{eq:fibIZ} I_{{{\mathbb{Z}}}} \to I_{\mathbb{Q}} \to I_{\mathbb{Q}/{{\mathbb{Z}}}} \to \Sigma I_{{{\mathbb{Z}}}}. \end{align} The spectrum $I_{{{\mathbb{Z}}}}$ is called the \emph{Anderson dual spectrum}.
Associated to \eqref{eq:fibIZ} is a long exact sequence on cohomology \[ \ldots \to {I_\mathbb{Q}}^{*-1}(X) \to I_{\mathbb{Q}/{{\mathbb{Z}}}}^{*-1}(X) \to I_{{{\mathbb{Z}}}}^*(X) \to {I_\mathbb{Q}}^*(X) \to I_{\mathbb{Q}/{{\mathbb{Z}}}}^*(X) \to \ldots \] If the homotopy groups $\pi_*(\Sigma^{\infty}X)$ are finitely generated abelian groups in each degree, one can deduce from this long exact sequence that there is an isomorphism \[ I_{{{\mathbb{Z}}}}^*(X) \cong \mathrm{Torsion}(\pi_{*-1}(\Sigma^{\infty}X) ) \oplus \mathrm{Free}(\pi_{*}(\Sigma^{\infty}X)). \] So, computing $I_{{{\mathbb{Z}}}}^*(X) \cong [X, \Sigma^*I_{{{\mathbb{Z}}}}]$ is equivalent to computing the (stable) homotopy groups of $X$.
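As a sanity check, take the sphere spectrum, whose homotopy groups are the stable stems and are finitely generated in each degree (with $\pi_n S^0$ finite for $n>0$ by a theorem of Serre): since $\pi_0 S^0 \cong {{\mathbb{Z}}}$ is free and $\pi_{-1}S^0 = 0$, the formula gives \[ I_{{{\mathbb{Z}}}}^0(S^0) \cong {{\mathbb{Z}}}, \qquad I_{{{\mathbb{Z}}}}^{n+1}(S^0) \cong \pi_n S^0 \quad \text{for } n > 0.\]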
\subsection{Thom Spectra}\label{sec:thomspectra} Let $B$ be a topological space and $\nu \colon E \to B$ be an $n$-dimensional real vector bundle on $B$. Then $\mathrm{Sph}(\nu) \colon \mathrm{Sph}(E) \to B $ is the $n$-sphere bundle whose fibers are the one-point compactifications of the fibers of $\nu$. The bundle $ \mathrm{Sph}(E)$ has a section $s \colon B \to \mathrm{Sph}(E)$ which sends $b$ to the point at infinity in the fiber $ \mathrm{Sph}(E)_b$. Then the Thom space of $\nu$ is defined as \[ B^{\nu} = \mathrm{Sph}(E)/s(B) .\] The \emph{Thom spectrum}, also denoted by $B^{\nu}$, is the suspension spectrum of the Thom space. The composite \[\mathrm{Sph}(E) \to \mathrm{Sph}(E) \times \mathrm{Sph}(E) \to B \times B^{\nu},\] which is the diagonal map followed by the product of $\mathrm{Sph}(\nu)$ and the quotient map, induces a map $B^{\nu} \to B_+ \wedge B^{\nu}$ called the \emph{Thom diagonal}.
If $\nu = \alpha \oplus \mathbf{n}$ where $\mathbf{n}$ is the trivial $n$-dimensional bundle, then \begin{equation}\label{eq:trivialsus}B^{\nu} \simeq \Sigma^n B^{\alpha}. \end{equation} In particular, if $\mathbf{0}$ is the zero bundle, then $B^{\mathbf{0}} = \Sigma^{\infty} B_{+}$.
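A quick sanity check of these definitions:
\begin{ex} Let $B = \ast$ be a point and $\nu = \mathbf{n}$. Then $\mathrm{Sph}(E) = S^n$, the section picks out the point at infinity, and so $\ast^{\mathbf{n}} = S^n$. More generally, the Thom space of the trivial bundle $\mathbf{n}$ over any $B$ is $B^{\mathbf{n}} \cong \Sigma^n (B_+)$, consistent with \eqref{eq:trivialsus}. \end{ex}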
The identity \eqref{eq:trivialsus} motivates the definition of Thom spectra for virtual bundles. We give the definition for based spaces $B$ which are CW-complexes with finitely many cells in each dimension. Recall that a virtual bundle $\nu$ over $B$ is the formal difference $\nu = \alpha-\beta$ of vector bundles $\alpha$ and $\beta$ over $B$. If $\alpha$ is an $n$-dimensional bundle and $\beta$ is an $m$-dimensional bundle, we say that $\nu$ has dimension $n-m$.
If $B$ is compact, we can choose a bundle $\beta^{\perp}$ and an integer $k$ so that $\beta \oplus \beta^{\perp} \cong \mathbf{k}$. In this case, we define \[ B^{\nu} := \Sigma^{-k}B^{\alpha \oplus \beta^{\perp}} .\] This is independent of the choice of complement $\beta^{\perp}$. Now, let $B_q$ be the $q$-skeleton of $B$. By our assumption on $B$, the space $B_q$ is compact. The bundle $\nu$ pulls back to virtual bundles $\nu_q$ over $B_q$ for each $q$. There are induced maps of Thom spectra $B_{q}^{\nu_q} \to B_{q+1}^{\nu_{q+1}}$, and \[B^{\nu} := \colim_{q} B_{q}^{\nu_q} .\]
\begin{ex} Let $O_n$ be the $n$th orthogonal group and $BO_n$ its classifying space. A model for $BO_n$ is given by the Grassmannian $G_n= \varinjlim_{k}\mathrm{Gr}_n(\mathbb{R}^{k})$, where $\mathrm{Gr}_n(\mathbb{R}^{k})$ is the space of $n$-dimensional subspaces of $\mathbb{R}^{k}$ and the maps in the colimit are induced by the inclusions $\mathbb{R}^{k}\subseteq \mathbb{R}^{k+1}$ into the first $k$ coordinates. This has the homotopy type of a CW-complex with finitely many cells in each dimension. Consider the subspace of $G_n \times \mathbb{R}^{\infty}$ given by \[ E_n= \{ (P,v) \in G_n \times \mathbb{R}^{\infty} : P \in G_n, v\in P \}. \] The map \[\gamma_n \colon E_n \to G_n\] which sends $(P,v)$ to $P$ is an $n$-dimensional vector bundle. This is often called the \emph{universal bundle} over $BO_n$. The associated Thom space is denoted by $MO_n$, which is also used to denote the associated Thom spectrum.
If $H_n \to O_n$ is a group homomorphism, then the universal bundle $\gamma_n$ pulls back to a bundle over $BH_n$ that we will also denote by $\gamma_n$. The associated Thom space/spectrum is denoted by $MH_n$.
Finally, in these examples, the Thom spectrum of the virtual bundle $-\gamma_n$ is denoted by $MTH_n$ and is called the \emph{Madsen--Tillmann spectrum}. \end{ex}
\begin{rem}Thom spectra are related to Spanier--Whitehead duality via the \emph{Atiyah duality} isomorphism. Let $M$ be a closed $n$-manifold and $TM$ be the tangent bundle of $M$; then Atiyah duality is the equivalence $M^{-TM} \simeq D(\Sigma^{\infty} M_+)$. \end{rem}
The cohomology of the Thom space is related to the cohomology of the base space. We treat the case $H^*(-;{{\mathbb{Z}}}/2 )$ as it comes free of orientability conditions. Given any virtual $n$-bundle $\nu$, there is an isomorphism \[ \mathrm{Th} \colon H^*(B;{{\mathbb{Z}}}/2) \cong \widetilde{H}^*(B^{\mathbf{0}};{{\mathbb{Z}}}/2) \to \widetilde{H}^{*+n}(B^{\nu};{{\mathbb{Z}}}/2), \] called the \emph{Thom isomorphism}. The isomorphism is given by an external cup product with a class \[U = U(\nu) \in \widetilde{H}^n(B^{\nu} ; {{\mathbb{Z}}}/2) \] called the \emph{Thom class}.
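The following classical example illustrates the Thom isomorphism.
\begin{ex} For the universal line bundle $\gamma_1$ over $BO_1 \simeq \mathbb{R}P^{\infty}$, the Thom space is identified as $MO_1 \simeq \mathbb{R}P^{\infty}$: the Thom space of the tautological line bundle over $\mathbb{R}P^{n}$ is $\mathbb{R}P^{n+1}$, and one passes to the colimit. Under this identification, $H^*(BO_1;{{\mathbb{Z}}}/2) \cong {{\mathbb{Z}}}/2[w_1]$, the Thom class is $U(\gamma_1) = w_1 \in \widetilde{H}^1(\mathbb{R}P^{\infty};{{\mathbb{Z}}}/2)$, and the Thom isomorphism sends $w_1^k \mapsto w_1^{k+1}$. \end{ex}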
\section{The Steenrod algebra}\label{sec:steenrod}
In this section, we review some basic facts about the Steenrod algebra $\mathcal{A}$ at the prime $p=2$. A very good reference for this material is Mosher--Tangora \cite{MosherTangora} and the interested reader should consult it for a more thorough presentation.
We focus on the prime $p=2$, although much of this story has an analogue at odd primes. We will let \[H^*(X) = \widetilde{H}^*(X ;{{\mathbb{Z}}}/2)\] denote the reduced mod $2$ cohomology of $X$ if it is a space, or simply the mod $2$ cohomology of $X$ if it is a (pre)spectrum. If $X \in \mathrm{Top}$ and we want to refer to the unreduced cohomology, we will use the notation $H^*(X;{{\mathbb{Z}}}/2)$.
\subsection{Cohomology operations and the Steenrod algebra}\label{sec:steenroddesc} Let $ \Vect({{\mathbb{Z}}}/2 )$ denote the category of ${{\mathbb{Z}}}$-graded ${{\mathbb{Z}}}/2 $ vector spaces, so that mod $2$ cohomology is a functor \[H^*(-;{{\mathbb{Z}}}/2) \colon \mathrm{Top} \to \Vect({{\mathbb{Z}}}/2 ).\] A cohomology operation of degree $k$ is a natural transformation \[ \gamma \colon H^*(-;{{\mathbb{Z}}}/2) \to H^{*+k}(-;{{\mathbb{Z}}}/2).\] The operation $\gamma$ is said to be \emph{stable} if it commutes with the suspension isomorphism \[\Sigma \colon H^*(-) \xrightarrow{\cong} {H}^{*+1}(\Sigma(-)).\]
\begin{ex} The short exact sequence \[ 0 \to {{\mathbb{Z}}}/2 \to {{\mathbb{Z}}}/4 \to {{\mathbb{Z}}}/2 \to 0\] induces a long exact sequence on cohomology \[ \xymatrix{ \ldots \ar[r] & H^*(-; {{\mathbb{Z}}}/4) \ar[r] & H^*(-; {{\mathbb{Z}}}/2) \ar[r] & H^{*+1}(-; {{\mathbb{Z}}}/2) \ar[r] & \ldots }\] The connecting homomorphism $H^*(-; {{\mathbb{Z}}}/2) \to H^{*+1}(-; {{\mathbb{Z}}}/2) $ is natural and commutes with the suspension isomorphism, so it is a stable cohomology operation of degree one. We call this operation $Sq^1$; it is also known as the Bockstein homomorphism. \end{ex}
\begin{ex} Consider the real projective plane $ \mathbb{R} P^2$. Then \[H^*( \mathbb{R} P^2;{{\mathbb{Z}}}/2) \cong {{\mathbb{Z}}}/2[w_1]/w_1^3\] for a class $w_1$ in degree $1$. (The name $w_1$ will reappear in \fullref{sec:SWclasses} and is used consistently here.) Then $Sq^1(w_1) =w_1^2$. In fact, $\mathbb{R} P^2$ can be constructed from the circle $S^1$ via the following pushout diagram: \[ \xymatrix{S^1 \ar[r] \ar[d]_{2} & D^2\ar[d] \\ S^1 \ar[r] & \mathbb{R} P^2} \] The element $w_1$ is dual to the homology class represented by the $1$-cell and the element $w_1^2$ is dual to that represented by the $2$-cell. The cohomology operation $Sq^1(w_1) = w_1^2$ is recording the fact that the $2$-cell of $\mathbb{R} P^2$ is attached to the $1$-cell via the multiplication by $2$ map. See \fullref{fig:RP2CP2RPinf}. \end{ex}
\begin{defn} The Steenrod algebra $\mathcal{A}$ is the graded non-commutative ${{\mathbb{Z}}}/2$-algebra generated in degree $k$ by the stable cohomology operations of that degree, with multiplication given by composition of operations. \end{defn}
\begin{rem} Let $ {H}\mathbb{Z}/2 $ be the mod-$2$ Eilenberg--MacLane spectrum whose $n$th space is given by $K({{\mathbb{Z}}}/2,n)$. Since \[H^t(-;{{\mathbb{Z}}}/2) \cong [(-)_+, K({{\mathbb{Z}}}/2,t)] \cong [ \Sigma^{\infty}(-)_+, \Sigma^{t} {H}\mathbb{Z}/2 ] \] it follows from the Yoneda Lemma that stable cohomology operations of degree $t$ are in one-to-one correspondence with elements of $[ {H}\mathbb{Z}/2 , \Sigma^t {H}\mathbb{Z}/2 ]$. Therefore, \[\mathcal{A} \cong {H}\mathbb{Z}/2 ^*( {H}\mathbb{Z}/2 ).\] \end{rem}
Constructing all cohomology operations is rather difficult and a good reference is given by \cite{MosherTangora}. However, $\mathcal{A}$ can be described axiomatically and this is the approach we take here. \begin{thm}\label{thm:squnstable} For each $k \geq 0$, there exists a stable cohomology operation of degree $k$ \[Sq^k \colon H^*(-;{{\mathbb{Z}}}/2 ) \to H^{*+k}(-;{{\mathbb{Z}}}/2 ) \] called the \emph{$k$th Steenrod square}. For $X$ a topological space, the Steenrod squares satisfy the following properties: \begin{enumerate}[(a)] \item $Sq^0 =1$ \item For $x \in H^k(X;{{\mathbb{Z}}}/2)$, $Sq^k(x) = x^2$. \item If $x\in H^i(X;{{\mathbb{Z}}}/2)$ and $i<k$, then $Sq^k(x) =0$. \item (Cartan Formula) $Sq^k(xy) = \sum_{i+j = k } Sq^i(x)Sq^j(y)$, where the multiplication on $H^*(X;{{\mathbb{Z}}}/2)$ is given by the cup product. \end{enumerate} \end{thm}
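Properties (a)--(c) give $Sq(w_1) = w_1 + w_1^2$ for the total square $Sq = \sum_i Sq^i$ on $H^*(\mathbb{R}P^{\infty};{{\mathbb{Z}}}/2) \cong {{\mathbb{Z}}}/2[w_1]$, so the Cartan formula yields $Sq(w_1^n) = (w_1 + w_1^2)^n$ and hence $Sq^i(w_1^n) = \binom{n}{i}w_1^{n+i}$. As an illustrative aside (not part of the original text; the helper name \texttt{sq} is ours), these coefficients mod $2$ can be tabulated directly:

```python
from math import comb

def sq(i, n):
    """Coefficient mod 2 of w1^(n+i) in Sq^i(w1^n), in H^*(RP^infty; Z/2) = Z/2[w1].

    By the Cartan formula, Sq(w1^n) = (w1 + w1^2)^n, so the
    coefficient of w1^(n+i) is binom(n, i) mod 2.
    """
    return comb(n, i) % 2 if 0 <= i <= n else 0

# Axiom checks: Sq^0 = id, the top square Sq^n(w1^n) = w1^(2n),
# and vanishing of Sq^i on classes of degree less than i.
assert sq(0, 4) == 1 and sq(4, 4) == 1 and sq(5, 4) == 0
# Sq^1(w1^2) = 2*w1^3 = 0 mod 2:
assert sq(1, 2) == 0
```

By Lucas' theorem, $\binom{n}{i}$ is odd exactly when the binary digits of $i$ are dominated by those of $n$, which makes these patterns easy to read off.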
\begin{rem}\label{rem:twoforms} In \fullref{thm:squnstable}, the Cartan Formula is only expressed for the cup product of elements in $H^*(X;{{\mathbb{Z}}}/2 )$. However, it also holds for the cross product. That is, if $x \in H^*(X;{{\mathbb{Z}}}/2)$ and $y \in H^*(Y;{{\mathbb{Z}}}/2)$, then for \[x \otimes y \in H^*(X \times Y;{{\mathbb{Z}}}/2) \cong H^*(X;{{\mathbb{Z}}}/2)\otimes_{{{\mathbb{Z}}}/2} H^*(Y;{{\mathbb{Z}}}/2),\] then \[ Sq^k(x \otimes y) = \sum_{i+j = k } Sq^i(x) \otimes Sq^j(y).\] If one is working with the reduced cohomology groups, then the same formula holds for $H^*(X \wedge Y) \cong H^*(X) \otimes_{{{\mathbb{Z}}}/2} H^*(Y)$.
Finally, if there is a continuous map $Y \to X \times Y $, so that $H^*(Y;{{\mathbb{Z}}}/2)$ becomes a module over $H^*(X;{{\mathbb{Z}}}/2)$, then the Cartan Formula implies that \[ Sq^k(x \cdot y) = \sum_{i+j = k } Sq^i(x) \cdot Sq^j(y)\] where $\cdot$ denotes the action of $H^*(X;{{\mathbb{Z}}}/2)$ on $H^*(Y;{{\mathbb{Z}}}/2)$. \end{rem}
\begin{thm} The Steenrod algebra $\mathcal{A}$ is the tensor algebra over ${{\mathbb{Z}}}/2$ generated by the $Sq^i$ subject to the following relations: \begin{enumerate}[(1)] \item $Sq^0 =1$ \item The Adem relations: For $0< a <2b$, \[Sq^aSq^b = \sum_{c=0}^{[a/2]} \binom{b-c-1}{a-2c} Sq^{a+b-c} Sq^c.\] \end{enumerate} \end{thm}
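The right-hand sides of the Adem relations are straightforward to tabulate from the formula, taking the binomial coefficients mod $2$. The following snippet is an illustrative aside (the helper name \texttt{adem} is ours); a pair $(i,j)$ stands for the composite $Sq^iSq^j$, with $Sq^0 = 1$:

```python
from math import comb

def adem(a, b):
    """Terms Sq^{a+b-c} Sq^c with nonzero mod-2 coefficient in the Adem
    relation for Sq^a Sq^b, valid in the range 0 < a < 2b."""
    assert 0 < a < 2 * b
    return [(a + b - c, c)
            for c in range(a // 2 + 1)
            if comb(b - c - 1, a - 2 * c) % 2 == 1]

# Familiar low-degree consequences:
assert adem(1, 1) == []                  # Sq^1 Sq^1 = 0
assert adem(1, 2) == [(3, 0)]            # Sq^1 Sq^2 = Sq^3
assert adem(2, 2) == [(3, 1)]            # Sq^2 Sq^2 = Sq^3 Sq^1
assert adem(2, 3) == [(5, 0), (4, 1)]    # Sq^2 Sq^3 = Sq^5 + Sq^4 Sq^1
```

(Python's `math.comb(n, k)` returns $0$ when $k > n$, matching the usual convention for out-of-range binomial coefficients.)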
\begin{rem}\label{rem:hopfalgebra} The Steenrod algebra $\mathcal{A}$ is a graded, non-commutative, augmented algebra. In fact, it is a cocommutative Hopf algebra over ${{\mathbb{Z}}}/2 $\footnote{The authors have heard the following anecdote from Doug Ravenel: During a lecture of Milnor on Hopf algebras at Princeton many years ago, Steenrod asked if there were any interesting examples.} whose coproduct $\psi \colon \mathcal{A} \to \mathcal{A} \otimes \mathcal{A}$ is determined by \[\psi(Sq^k) = \sum_{i+j =k} Sq^i \otimes Sq^j.\] The antipode $\chi \colon \mathcal{A} \to \mathcal{A}$ is defined inductively by the identities \begin{align*}
\chi(Sq^0)&=Sq^0, & \sum_{i=0}^k Sq^i \chi(Sq^{k-i})&=0, \ k>0.
\end{align*} We note that $\mathcal{A}_0 ={{\mathbb{Z}}}/2 $ and let $I(\mathcal{A})$ be the kernel of the augmentation $\varepsilon \colon \mathcal{A} \to {{\mathbb{Z}}}/2 $. \end{rem}
\begin{rem}\label{rem:modA} We let $\Mod_{\mathcal{A}}$ be the category of graded left modules over $\mathcal{A}$. These are ${{\mathbb{Z}}}$-graded ${{\mathbb{Z}}}/2 $-vector spaces together with a left action of $\mathcal{A}$. Given $M$ and $N$ in $\Mod_{\mathcal{A}}$, we let $M\otimes_{{{\mathbb{Z}}}/2 }N$ be the module whose structure is given by $a(m\otimes n) = \sum a_im\otimes a_jn$, where $a\in \mathcal{A}$ and $ \psi(a) = \sum a_i\otimes a_j$.
Modules which satisfy condition (c) of \fullref{thm:squnstable}, that is, on which $Sq^k$ vanishes on classes of degree less than $k$, are called unstable modules. The cohomology of a spectrum need not be an unstable module in general. \end{rem}
To specify an $\mathcal{A}$-module structure on a graded ${{\mathbb{Z}}}/2$-vector space $M$, one must describe the action of the Steenrod squares on $M$. We record this information in a picture we call a \emph{cell diagram}. See \fullref{fig:cellexplanation}. The following result implies that specifying the action of $Sq^{2^n}$ for $n\geq 0$ is enough to describe an $\mathcal{A}$-module. \begin{thm} $\mathcal{A}$ is generated as an algebra by $Sq^{2^n}$ for $n\geq 0$. \end{thm}
\begin{ex} Consider the complex projective plane $ \mathbb{C} P^2$. Then \[H^*( \mathbb{C} P^2;{{\mathbb{Z}}}/2) \cong {{\mathbb{Z}}}/2[w_2]/w_2^3\] for a class $w_2$ in degree $2$. (The name $w_2$ reappears in \fullref{sec:SWclasses} and is used consistently here.) It follows from the properties of the squares that $Sq^2(w_2) =w_2^2$. In fact, $\mathbb{C} P^2$ can be constructed from the sphere $S^2$ via the following pushout diagram: \[ \xymatrix{S^3 \ar[r] \ar[d]_{\eta} & D^4\ar[d] \\ S^2 \ar[r] & \mathbb{C} P^2} \] where $\eta \colon S^3 \to S^2$ is the Hopf fibration. The element $w_2$ is dual to the homology class represented by the $2$-cell and the element $w_2^2$ is dual to that represented by the $4$-cell. The cohomology operation $Sq^2(w_2)=w_2^2$ is recording the fact that the $4$-cell of $\mathbb{C} P^2$ is attached to the $2$-cell via the map $\eta$. See \fullref{fig:RP2CP2RPinf}. \end{ex}
\begin{ex} The Steenrod operations for the cohomology of $\mathbb{R} P^{\infty} \simeq BO_1$ are completely explicit. Writing $H^*(\mathbb{R} P^{\infty};{{\mathbb{Z}}}/2) \cong {{\mathbb{Z}}}/2[w_1]$ for $w_1$ in degree $1$, we have \[ Sq^n(w_1^m) = \binom{m}{n}w_1^{m+n}.\] Using the naturality of the squares, this example often comes in handy in computing operations in the cohomology of other spaces. See \fullref{fig:MO1MU1}. \end{ex}
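Since $\binom{m}{n}$ mod $2$ is governed by Lucas' theorem, this action can be checked bitwise. The short Python sketch below (the helper name \texttt{sq\_rp} is ours) verifies that $Sq^n(w_1^m)\neq 0$ exactly when every binary digit of $n$ also occurs in $m$.

```python
from math import comb

def sq_rp(n, m):
    """Coefficient of w1^(m+n) in Sq^n(w1^m) in H^*(RP^infinity; Z/2)."""
    return comb(m, n) % 2

# Lucas' theorem: comb(m, n) is odd exactly when n & m == n.
assert all(sq_rp(n, m) == (1 if n & m == n else 0)
           for m in range(32) for n in range(32))
```

In particular $Sq^1(w_1) = w_1^2$ while $Sq^1(w_1^2) = 0$, as the instability and Cartan properties predict.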
\begin{figure}\label{fig:cellexplanation}
\end{figure}
\begin{figure}
\caption{The structure of $H^*(\mathbb{R} P^2)$ (left), $H^*(\mathbb{C} P^2)$ (right) as modules over $\mathcal{A}$. The class $w_1$ is in $H^1(\mathbb{R} P^{2})$. The class $w_2$ is in $H^2(\mathbb{C} P^2)$.}
\label{fig:RP2CP2RPinf}
\end{figure}
\subsection{The Subalgebras $\mathcal{A}_n $}\label{sec:ans}
The Steenrod algebra is an infinitely generated non-commutative algebra. However, it is finitely generated in each degree. In fact, it is filtered by the finite sub-Hopf algebras generated by $Sq^1, \ldots, Sq^{2^n}$, which are denoted $\mathcal{A}_n $. Further, each algebra $\mathcal{A}_n $ contains a commutative subalgebra generated by elements $Q_0, \ldots, Q_n$ which are defined inductively by \begin{align*} Q_0 &= Sq^1, \\ Q_i &= Sq^{2^i}Q_{i-1} + Q_{i-1} Sq^{2^i}. \end{align*} In fact, the $Q_i$'s generate an exterior algebra and we let $\mathcal{E}_n= E(Q_{0}, \ldots, Q_n)$.
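The $Q_i$ act as derivations on the cohomology of spaces. On $H^*(\mathbb{R} P^{\infty};{{\mathbb{Z}}}/2)$ one has $Q_1(w_1) = w_1^4$, so the derivation property forces $Q_1(w_1^m) = m\,w_1^{m+3}$. Assuming only the binomial formula from the example above, the Python sketch below (function names are ours) confirms this directly from $Q_1 = Sq^2Sq^1 + Sq^1Sq^2$.

```python
from math import comb

def sq(n, m):
    # coefficient of w^(m+n) in Sq^n(w^m) in H^*(RP^infinity; Z/2)
    return comb(m, n) % 2

def q1(m):
    """Coefficient of w^(m+3) in Q_1(w^m) = (Sq^2 Sq^1 + Sq^1 Sq^2)(w^m)."""
    return (sq(1, m) * sq(2, m + 1) + sq(2, m) * sq(1, m + 2)) % 2

# Q_1 is a derivation with Q_1(w) = w^4, hence Q_1(w^m) = m * w^(m+3):
assert all(q1(m) == m % 2 for m in range(64))
```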
For example, the algebra $\mathcal{A}_1 $ is the subalgebra of $\mathcal{A}$ generated by $Sq^1$ and $Sq^2$. As a module over itself, $\mathcal{A}_1 $ admits the cell diagram depicted in \fullref{fig:A1E1}.
\begin{figure}
\caption{$\mathcal{A}_1$ (left) and its subalgebra $\mathcal{E}_1$ (right). The dashed lines represent the action of $Q_1 = Sq^1Sq^2 + Sq^2 Sq^1$.}
\label{fig:A1E1}
\end{figure}
\begin{defn}\label{defn:AmmB} Let $\mathcal{B}$ be a subalgebra of a ${{\mathbb{Z}}}/2 $-algebra $\mathcal{C}$. Then \[\mathcal{C} /\!\!/ \mathcal{B} := \mathcal{C}\otimes_{\mathcal{B}} {{\mathbb{Z}}}/2 \] where ${{\mathbb{Z}}}/2 $ denotes the trivial $\mathcal{B}$-module concentrated in degree zero. \end{defn}
\begin{rem}The subalgebras $\mathcal{A}_1 $ and $\mathcal{E}_1 $ appear naturally in classical computations as they are related to $K$-theory. Let $\ku$ be the connective $K$-theory spectrum and $\ko$ its real version. See \fullref{sec:connective}. Then there are isomorphisms of $\mathcal{A}$-modules \begin{align*} H^*(\ku) &\cong \mathcal{A} /\!\!/ \mathcal{E}_1 , & H^*(\ko) &\cong \mathcal{A} /\!\!/ \mathcal{A}_1 . \end{align*} Similarly, if $\tmf$ is the connective spectrum of topological modular forms and $BP\langle 2\rangle$ is a spectrum obtained from the Brown-Peterson spectrum $BP$ by killing a choice of generators $v_k$ for $k\geq 3$, then \begin{align*} H^*(BP\langle 2\rangle ) &\cong \mathcal{A} /\!\!/ \mathcal{E}_2, & H^*(\tmf) &\cong \mathcal{A} /\!\!/ \mathcal{A}_2 . \end{align*} These spectra are the chromatic height $2$ analogues of $\ku$ and $\ko$ respectively.\end{rem}
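As a small numerical consistency check on \fullref{defn:AmmB} (a sketch using only graded dimensions, which are standard: $\mathcal{A}_1$ has total dimension $8$ with Poincar\'e series $1+x+x^2+2x^3+x^4+x^5+x^6$, and it is free as a module over $\mathcal{E}_1$), the series of $\mathcal{A}_1$ should factor as the series of $\mathcal{A}_1 /\!\!/ \mathcal{E}_1$, namely $1+x^2$, times that of $\mathcal{E}_1$:

```python
def series_mul(p, q):
    """Multiply two polynomials given as coefficient lists."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

A1 = [1, 1, 1, 2, 1, 1, 1]              # graded dimensions of A_1, degrees 0..6
E1 = series_mul([1, 1], [1, 0, 0, 1])   # (1+x)(1+x^3) for E_1 = E(Q_0, Q_1)

# A_1 // E_1 is one-dimensional in degrees 0 and 2, so:
assert series_mul([1, 0, 1], E1) == A1
```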
\subsection{Thom Spectra and Stiefel--Whitney Classes}\label{sec:SWclasses}
Given an $n$-dimensional vector bundle $\nu \colon E \to B$, we recalled the definition of the Thom space $B^{\nu}$ of $\nu$ in \fullref{sec:thomspectra}. Further, we recalled the Thom isomorphism \[ \mathrm{Th} \colon H^*(B;{{\mathbb{Z}}}/2) \to \widetilde{H}^{*+n}(B^{\nu};{{\mathbb{Z}}}/2)\] which was given by the cup product with a Thom class $U \in \widetilde{H}^n(B^{\nu} ; {{\mathbb{Z}}}/2)$. We note that $\mathrm{Th}(1)=U$ and write \[\mathrm{Th}(x) = xU.\]
\begin{warn} The Steenrod operations do not commute with the Thom isomorphism. This fact is crucial for \fullref{defn:SW} below. \end{warn}
The Thom isomorphism is used to define classical invariants of a bundle $\nu$ called the \emph{Stiefel--Whitney classes}. \begin{defn}\label{defn:SW} The \emph{$i$th Stiefel--Whitney class} $w_i=w_i(\nu)$ of a vector bundle $\nu$ is defined by \[ w_i = \mathrm{Th}^{-1}( Sq^i(U)) \in H^i(B;{{\mathbb{Z}}}/2).\] In particular, they satisfy the identity \[w_i U = Sq^i(U).\] The \emph{total Stiefel--Whitney class} is the formal sum \[w = w(\nu) = 1+w_1 + w_2 +\ldots \] \end{defn}
\begin{rem}\label{rem:totalSW} If $\nu$ is a trivial bundle, the Stiefel--Whitney classes are trivial except for $w_0 =1$. Given two vector bundles $\nu$ and $\eta$, one can show that \[ w(\nu \oplus \eta) = w(\nu) w(\eta).\] It follows that \[w(\nu \oplus \nu^{\perp}) =1 \] for any orthogonal complement of an embedding of $\nu$ into a trivial bundle $\mathbf{m}$. This identity allows us to determine the Stiefel--Whitney classes of $\nu^{\perp}$ given those of $\nu$. It also allows us to define the Stiefel--Whitney classes of a virtual bundle. In particular, \[w(-\nu) = w(\nu)^{-1}. \] \end{rem}
The effect of the Steenrod squares on the Stiefel--Whitney classes is given by the Wu formula. \begin{thm}[Wu Formula] Let $\nu \colon E \to B$ be a vector bundle over $B$. Then \[Sq^i(w_j) = \sum_{k=0}^{i} \binom{(j-i)+(k-1)}{k}w_{i-k}w_{j+k}.\] \end{thm}
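The Wu formula is again mechanical. The Python sketch below (function names are ours) uses the standard convention that $\binom{n}{k}$ extends to negative $n$ via the polynomial formula $n(n-1)\cdots(n-k+1)/k!$; classes $w_s$ with $s>n$ must then be set to zero when working in $H^*(BO_n)$.

```python
from math import factorial
from collections import Counter

def binom2(n, k):
    """Binomial coefficient mod 2, extended to negative n by the
    polynomial formula n(n-1)...(n-k+1)/k!."""
    if k < 0:
        return 0
    num = 1
    for j in range(k):
        num *= n - j
    return (num // factorial(k)) % 2

def wu(i, j):
    """Sq^i(w_j) as a set of monomials, each a sorted tuple of
    subscripts (the factor w_0 = 1 is dropped)."""
    terms = Counter(
        tuple(sorted(s for s in (i - k, j + k) if s > 0))
        for k in range(i + 1) if binom2(j - i + k - 1, k))
    return {m for m, c in terms.items() if c % 2}
```

This recovers $Sq^1(w_2)=w_1w_2+w_3$ and the full formula $Sq^2(w_3) = w_2w_3 + w_1w_4 + w_5$, which truncates to $w_2w_3$ in $H^*(BO_3)$ where $w_4 = w_5 = 0$; it also returns $Sq^2(w_1)=0$, as instability requires.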
\begin{rem} Applying $\mathrm{Th}(-)$ to both sides of the display in \fullref{defn:SW}, one deduces that \[Sq^i(U) = w_iU \in H^{i+n}(B^{\nu}). \] Further, the Thom diagonal gives $H^*(B^{\nu})$ the structure of an $H^*(B; {{\mathbb{Z}}}/2)$-module. So using \fullref{rem:twoforms}, for $x \in H^*(B;{{\mathbb{Z}}}/2)$ we have \[Sq^k(x U) = \sum_{i+j=k} Sq^i(x)Sq^j(U) = \sum_{i+j=k} Sq^i(x)w_j U. \] This determines the structure of $H^*(B^{\nu})$ as an $\mathcal{A}$-module based on that of $H^*(B)$. \end{rem}
\subsection{Examples of computations of Steenrod operations}\label{sec:compA1} In this section, we go through a few selected computations of Steenrod operations. Most of the examples play a role in Section 10 of \cite{FH}. Further, the computations illustrate many of the concepts and techniques mentioned above. We do not do all the computations in detail but try to give enough information for the reader to learn the techniques and be able to reproduce them on their own. \begin{ex}\label{ex:bogen} The classifying space $BO_n$ carries the universal $n$-plane bundle $\gamma_n$, and its Thom space is denoted $MO_n$. The cohomology of $BO_n$ is \begin{align*}H^*(BO_n;{{\mathbb{Z}}}/2) &= {{\mathbb{Z}}}/2 [w_1, \ldots, w_n], & H^*(MO_n) &= {{\mathbb{Z}}}/2 [w_1, \ldots, w_n]\{U\}.\end{align*} Similarly, $BSO_n$ carries the universal oriented $n$-plane bundle and its Thom space is denoted by $MSO_n$. A bundle is oriented if and only if $w_1=0$, and so \begin{align*} H^*(BSO_n;{{\mathbb{Z}}}/2) &= {{\mathbb{Z}}}/2 [w_2, \ldots, w_n], & H^*(MSO_n) &= {{\mathbb{Z}}}/2 [w_2, \ldots, w_n]\{U\}.\end{align*} \end{ex}
\begin{ex}\label{ex:mo1mu1} As special cases of \fullref{ex:bogen} we have \begin{align*} H^*(\mathbb{R} P^{\infty};{{\mathbb{Z}}}/2) &\cong H^*(BO_1;{{\mathbb{Z}}}/2) \cong {{\mathbb{Z}}}/2 [w_1], & H^*(MO_1) \cong {{\mathbb{Z}}}/2 [w_1]\{U\}. \end{align*} Further, $Sq^1(w_1^{k}U) = w_1^{k+1}U$ if $k$ is even and zero if $k$ is odd. Using the Cartan Formula as in \fullref{rem:twoforms}, one deduces that $Sq^2(w_1^{k}U) = \binom{k+1}{2} w_1^{k+2}U$. In fact, $MO_1 \simeq \mathbb{R} P^{\infty}$.
Similarly, \begin{align*} H^*(\mathbb{C} P^{\infty};{{\mathbb{Z}}}/2) &\cong H^*(BSO_2;{{\mathbb{Z}}}/2) \cong {{\mathbb{Z}}}/2 [w_2], & H^*(MU_1) \cong {{\mathbb{Z}}}/2 [w_2]\{U\}, \end{align*} and $Sq^2(w_2^{k}U) = w_2^{k+1}U$ if $k$ is even and zero if $k$ is odd. All of the $Sq^1$s are zero. In fact, $MU_1 \simeq \mathbb{C} P^{\infty}$. \begin{figure}
\caption{From the left, the structures of $H^*(BO_1)$, $H^*(MO_1)$, $H^*(BU_1)$ and $H^*(MU_1)$ as $\mathcal{A}_1$-modules.}
\label{fig:MO1MU1}
\end{figure} \end{ex}
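The coefficient in $Sq^2(w_1^kU)$ can be cross-checked in two ways: by the Cartan formula with $Sq^1(U) = w_1U$ and $Sq^2(U) = w_2U = 0$, and through the identification $MO_1 \simeq \mathbb{R} P^{\infty}$, under which $w_1^kU$ corresponds to $w_1^{k+1}$. Pascal's rule $\binom{k}{2}+\binom{k}{1} = \binom{k+1}{2}$ makes the two agree; the Python sketch below (function names are ours) confirms this numerically.

```python
from math import comb

def sq2_cartan(k):
    """Coefficient of w1^(k+2) U in Sq^2(w1^k U) in H^*(MO_1), via the
    Cartan formula with Sq^1(U) = w1 U and Sq^2(U) = 0."""
    return (comb(k, 2) + comb(k, 1)) % 2

def sq2_via_rp(k):
    """Same coefficient computed through MO_1 ~ RP^infinity, under
    which w1^k U corresponds to w1^(k+1)."""
    return comb(k + 1, 2) % 2

assert all(sq2_cartan(k) == sq2_via_rp(k) for k in range(64))
```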
\begin{ex} As an exercise that will be relevant in \fullref{sec:examples}, we consider the structure of $H^*(MU_1 \wedge MO_1)$ as a module over $\mathcal{A}_1$. By the K\"unneth isomorphism, we have \[ H^*(MU_1 \wedge MO_1) \cong H^*(MU_1) \otimes_{{{\mathbb{Z}}}/2 } H^*(MO_1). \] We use the Cartan formula as discussed in \fullref{rem:twoforms}. Since all of the $Sq^1$s vanish in $H^*(MU_1)$, we deduce from the Cartan formula that for any $a \in H^*(MU_1)$ and $b \in H^*(MO_1)$, \begin{align*} Sq^1(a \otimes b) &= Sq^1(a) \otimes b + a \otimes Sq^1(b) = a \otimes Sq^1(b) , \\
Sq^2(a \otimes b) &=Sq^2(a) \otimes b + Sq^1(a) \otimes Sq^1(b) + a \otimes Sq^2(b) = Sq^2(a) \otimes b +a \otimes Sq^2(b) . \end{align*} The $\mathcal{A}_1$-module structure is illustrated in a small range in \fullref{fig:MU1smshMO1}.
\begin{figure}
\caption{The $\mathcal{A}_1$-submodule of $H^*(MU_1 \wedge MO_1)$ generated by $U \otimes U$, $U\otimes w_1^2U + w_2U \otimes U $, $w_2^2U \otimes U $ and $w_2^2U \otimes w_1^2U + w_2^3U \otimes U$. The class $U\otimes U \in H^3( MU_1 \wedge MO_1)$. All classes of degree $* \leq 5$ in $H^*(\Sigma^{-3} MU_1 \wedge MO_1)$ are contained in this submodule. }
\label{fig:MU1smshMO1}
\end{figure} \end{ex}
\begin{ex} In this example, we compute part of the structure of $H^*(BO_3)$ as a module over $\mathcal{A}_1$. We recall from \fullref{sec:SWclasses} that \[H^*(BO_3;{{\mathbb{Z}}}/2) \cong {{\mathbb{Z}}}/2 [w_1,w_2,w_3]\] and using the Wu formula, we compute that \begin{align*} Sq^1(w_1)&= w_1^2 & Sq^1(w_2) &= w_1w_2+ w_3 & Sq^1(w_3) &= w_1w_3 & \\ Sq^2(w_1) &=0 & Sq^2(w_2) &= w_2^2 &Sq^2(w_3) &= w_2w_3. \end{align*} With the Cartan formula, this determines all of the operations for $\mathcal{A}_1$ on $H^*(BO_3)$. For example, \begin{align*} Sq^2(Sq^1(w_2)) &= Sq^2(w_1w_2)+ Sq^2(w_3) \\ & = Sq^2(w_1)Sq^0(w_2)+Sq^1(w_1)Sq^1(w_2) + Sq^0(w_1)Sq^2(w_2) +Sq^2(w_3) \\ &= w_1^2Sq^1(w_2) + w_1w_2^2+w_2w_3 \\ &= (w_1^2+w_2)Sq^1(w_2). \end{align*} We let \begin{align}\label{eq:namex} x&= Sq^1(w_2) =w_1w_2+ w_3, & \overline{w}_2&= w_1^2+w_2. \end{align} A part of the cell diagram for $H^*(BO_3)$ is depicted in \fullref{fig:HBO3}. \end{ex}
\begin{ex}\label{ex:MO3} To compute the structure of $H^*(MO_3)$ as a module over $\mathcal{A}_1$, we use the Thom isomorphism and \fullref{rem:twoforms}. The former gives the identification \[H^*(MO_3) \cong {{\mathbb{Z}}}/2 [w_1, w_2,w_3]\{U\}\] where the Thom class $U$ is in $H^3(MO_3)$. \fullref{rem:twoforms} allows us to compute the action of $\mathcal{A}_1$ on $H^*(MO_3)$ and the result is illustrated in \fullref{fig:HMO3}. For example, \begin{align*} Sq^2(w_2U) &= Sq^2(w_2)U + Sq^1(w_2)Sq^1(U) + w_2 Sq^2(U) \\ &= w_2^2U+xw_1U+w_2^2U =w_1xU. \end{align*} A few other relations are given by \begin{align*} Sq^1(U) &=w_1U & Sq^1(w_1U)&=0 & Sq^1(w_2U)&=w_3 U & Sq^1(w_3U)&=0 \\ Sq^2( U)&=w_2U & Sq^2(w_1U) &= w_1\overline{w}_2U & Sq^2(w_2U) &= w_1xU & Sq^2(w_3U) &= w_1^2w_3 U . \end{align*} \end{ex}
\begin{figure}
\caption{The $\mathcal{A}_1$-submodule of $H^*(BO_3)$ generated by $w_1$, $w_2$, $w_3$ and $w_1^2w_2$. The class $w_1 \in H^1(BO_3)$ and $w_2 \in H^2(BO_3)$.}
\label{fig:HBO3}
\end{figure}
\begin{figure}
\caption{The $\mathcal{A}_1$-submodule of $H^*(MO_3)$ generated by the classes $U$, $w_1^2U$, $w_2^2U$ and $w_2w_3U$. The class $U \in H^3(MO_3)$. This submodule contains all cohomology classes in $H^*(\Sigma^{-3}MO_3)$ of degree $*\leq 5$.}
\label{fig:HMO3}
\end{figure}
\begin{ex}\label{ex:MTO3} We turn to the computation of part of the structure of the cohomology of $MTO_3$ as a module over $\mathcal{A}_1$. Recall that $MTO_3$ is the Thom space for the virtual bundle $-\gamma_3$ over $BO_3$. Again, we have a Thom isomorphism \[H^*(MTO_3) \cong {{\mathbb{Z}}}/2 [w_1, w_2,w_3]\{\overline{U}\} \] where the Thom class $\overline{U}= U(-\gamma_3)$ is in degree $-3$. However, here $w_i = w_i(\gamma_3)$, the Stiefel--Whitney classes of the universal bundle. Let $\overline{w}_i = w_i(-\gamma_3)$. To compute the Steenrod operations using the formula \[Sq^i(\overline{U}) = \overline{w}_i \overline{U},\] of \fullref{defn:SW}, we need a formula for the $\overline{w}_i$s in terms of the $w_i$s. Letting $w=w(\gamma_3)$ and $\overline{w} = w(-\gamma_3)$ be the total Stiefel--Whitney classes, \fullref{rem:totalSW} gives an identity \begin{align*} \overline{w} &=w^{-1}= \frac{1}{1+w_1+w_2+w_3} = \sum_{i\geq 0} (w_1+w_2+w_3)^i . \end{align*} Collecting the terms of the same degree, we get that \begin{align*} \overline{w}_1 &= w_1 \\ \overline{w}_2 &= w_1^2+w_2 . \end{align*} Therefore, a few relations are given by \begin{align*} Sq^1(\overline{U}) &=w_1\overline{U}, &Sq^1(w_1\overline{U})&= 0 & Sq^1(w_2\overline{U})&= w_3 \overline{U} & Sq^1(w_3\overline{U})&= 0 \\ Sq^2( \overline{U})&=(w_1^2+w_2)\overline{U} & Sq^2(w_1\overline{U}) &= w_1w_2 \overline{U} & Sq^2(w_2\overline{U}) &= w_1w_3 \overline{U} & Sq^2(w_3\overline{U}) &= 0. \end{align*} A part of the cell diagram for $H^*(MTO_3)$ is depicted in \fullref{fig:HMTO3}. \begin{figure}
\caption{The $\mathcal{A}_1$-submodule of $H^*(MTO_3)$ generated by the classes $\overline{U}$, $w_2\overline{U}$, $w_2^2\overline{U}$, $w_1^4\overline{U}$, $w_2^3 \overline{U} $ and $w_2w_3\overline{U}$. The class $\overline{U} \in H^{-3}(MTO_3)$. This submodule contains all cohomology classes in $H^*(\Sigma^{3} MTO_3)$ of degree $*\leq 5$.}
\label{fig:HMTO3}
\end{figure} \end{ex}
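The computation of the $\overline{w}_i$ by expanding the geometric series can also be automated. In the Python sketch below (the representation and names are ours), a ${{\mathbb{Z}}}/2$-polynomial in $w_1,w_2,w_3$ is a set of exponent triples, and $\overline{w}_d$ is solved degree by degree from the relation $(w\cdot\overline{w})_d = 0$ for $d\geq 1$:

```python
def pmul(p, q):
    """Product of two Z/2 polynomials in w1, w2, w3, each represented
    as a set of exponent triples."""
    out = set()
    for a in p:
        for b in q:
            m = tuple(x + y for x, y in zip(a, b))
            out ^= {m}   # symmetric difference = addition mod 2
    return out

# Total class w = 1 + w1 + w2 + w3 of gamma_3, listed by degree:
w = [{(0, 0, 0)}, {(1, 0, 0)}, {(0, 1, 0)}, {(0, 0, 1)}]

def inverse_sw(total, n):
    """Components wbar_0, ..., wbar_n of total^(-1): since the degree-d
    part of total * wbar vanishes for d >= 1, we may solve
    wbar_d = sum_{i>=1} total_i * wbar_(d-i) mod 2."""
    wbar = [total[0]]
    for d in range(1, n + 1):
        comp = set()
        for i in range(1, min(d, len(total) - 1) + 1):
            comp ^= pmul(total[i], wbar[d - i])
        wbar.append(comp)
    return wbar

wbar = inverse_sw(w, 3)
```

This recovers $\overline{w}_1 = w_1$ and $\overline{w}_2 = w_1^2+w_2$ as above, and in the next degree $\overline{w}_3 = w_1^3 + w_3$.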
\begin{exc} Use the formulas of \fullref{ex:MO3} and \fullref{rem:twoforms} to compute that the $\mathcal{A}_1$-submodule of $H^*(MO_3)$ generated by $U$, $w_1^2U$ and $w_2^2U$ has the structure depicted in \fullref{fig:HMO3}. Do the same thing for \fullref{fig:HMTO3} using the results of \fullref{ex:MTO3}. \end{exc}
\begin{ex} In this example, we compute the structure of $H^*(MSO_3)$ as a module over $\mathcal{A}_1$. Let $\iota \colon MSO_3 \to MO_3$ be the map of Thom spectra induced by the inclusion of $SO_3$ into $O_3$. The induced map $ \iota^* \colon H^*(MO_3) \to H^*(MSO_3)$ is given by modding out by $w_1$. The Thom class of $\gamma_3$ maps to that of the universal bundle on $BSO_3$. We get an isomorphism \[H^*(MSO_3) \cong {{\mathbb{Z}}}/2 [w_2,w_3]\{U\} .\] Further, the Steenrod operations are natural with respect to maps of spaces or spectra, so $Sq^k\iota^* =\iota^* Sq^k$.
We use that $x \equiv w_3 \mod (w_1)$ for $x$ as in \eqref{eq:namex}. We get the following formulas from \fullref{ex:MO3}. First, in the cohomology of $BSO_3$, we have \begin{align*} Sq^1(w_2)&= w_3 & Sq^1(w_3) &= 0 & Sq^2(w_2)&= w_2^2 & Sq^2(w_3) &=w_2w_3 . \end{align*} So, in the cohomology of $MSO_3$, we have \begin{align*}
Sq^1(U) &=0 & Sq^1(w_2U)&=w_3 U & Sq^1(w_3U)&=0 \\
Sq^2( U)&=w_2U & Sq^2(w_2U) &= 0 & Sq^2(w_3U) &= 0 . \end{align*}
\begin{figure}
\caption{The $\mathcal{A}_1$-submodule of $H^*(MSO_3)$ generated by the classes $U$, $w_2^2U$, $w_2w_3U$ and $w_2^4U$. The class $U \in H^3(MSO_3)$. This submodule contains all cohomology classes in $H^*(\Sigma^{-3}MSO_3)$ of degree $*\leq 5$.}
\label{fig:HMSO3}
\end{figure}
\end{ex}
\section{The Adams spectral sequence}\label{sec:ass} One of the most effective methods for computing stable homotopy groups is the Adams spectral sequence. The idea is roughly as follows. Take a space or a spectrum $X$ and resolve it into pieces whose homotopy we understand. The Eilenberg--MacLane spectra are good candidates --- they are constructed to have homotopy in a single degree. Then, reconstruct the stable homotopy groups of $X$ from algebraic data associated to this resolution.
We will make this more precise and give a sketch of the construction of the Adams spectral sequence. In the cases of interest, it has the form \begin{equation}\label{eq:ASSdisplay} E_2^{s,t} = \mathrm{Ext}_{\mathcal{A}}^{s,t}(H^*(X), {{\mathbb{Z}}}/2 ) \Longrightarrow (\pi_{t-s}X)_2^{\wedge}.\end{equation} We will explain the terms in \eqref{eq:ASSdisplay} throughout this section. We begin by defining $\mathrm{Ext}_{\mathcal{A}}$ and giving tools to compute it.
\subsection{Computing $\mathrm{Ext}$ over the Steenrod algebra}\label{sec:compext} Let $\mathcal{B}$ be a graded ring. For any $\mathcal{B}$-module $M$ and $r \in {{\mathbb{Z}}}$, let $\Sigma^r M = M[r]$ be the graded $\mathcal{B}$-module given in degree $t$ by \[(\Sigma^rM)^t = (M[r])^t =M^{t-r}. \] Let $\Hom_{\mathcal{B}}^*(M, N)$ be the graded abelian group given in degree $t$ by \[\Hom_{\mathcal{B}}^t(M, N) = \Hom_{\mathcal{B}}(M, \Sigma^t N).\] The contravariant functor \[\Hom_{\mathcal{B}}^*(-, N) \colon \mathcal{B}\text{-}\Mod \to \Ab \] is left exact and has right derived functors $\mathrm{Ext}^s_{\mathcal{B}}(-,N)$. We let \[\mathrm{Ext}^{s,t}_{\mathcal{B}}(-,N) = (\mathrm{Ext}^s_{\mathcal{B}}(-,N))^t \] and treat $\mathrm{Ext}_{\mathcal{B}}^{*,*}(-,N)$ as a functor with values in bi-graded abelian groups. As always, the value of these functors on a $\mathcal{B}$-module $M$ can be computed by choosing a resolution $P_{\bullet}$ of $M$ by projective $\mathcal{B}$-modules and forming the cochain complex $\Hom_{\mathcal{B}}^*(P_\bullet, N)$. Then \[ \mathrm{Ext}^{s,t}_{\mathcal{B}}(M,N) = H^s(\Hom_{\mathcal{B}}^t(P_\bullet, N) ).\]
A useful tool is the interpretation of elements in $\mathrm{Ext}^{s,t}_{\mathcal{B}}(M,N)$ as equivalence classes of extensions when $s\geq 1$. That is, an element of $\mathrm{Ext}^{s,t}_{\mathcal{B}}(M,N)$ is an exact complex, or \emph{extension}, \[ \xymatrix{ 0 \ar[r] & \Sigma^t N \ar[r] & P_1 \ar[r] & \ldots \ar[r] & P_s \ar[r] & M \ar[r] & 0 } \] where two extensions are equivalent if there exists a commutative diagram \[ \xymatrix{ 0 \ar[r] &\Sigma^{t} N \ar[d]^-{\id_N} \ar[r] & P_1 \ar[d]^-{\cong} \ar[r] & \ldots \ar[r] & P_s \ar[d]^-{\cong} \ar[r] & M \ar[r] \ar[d]^-{\id_M} & 0 \\ 0 \ar[r] & \Sigma^{t} N \ar[r] & P_1' \ar[r] & \ldots \ar[r] & P_s' \ar[r] & M \ar[r] & 0 . } \]
\begin{ex}\label{ex:h0h1def} The class in $\mathrm{Ext}_{\mathcal{A}}^{1,1}({{\mathbb{Z}}}/2 ,{{\mathbb{Z}}}/2 )$ represented by the extension \[ 0 \to \Sigma {{\mathbb{Z}}}/2 \to \Sigma^{-1} H^*(\mathbb{R} P^2) \to {{\mathbb{Z}}}/2 \to 0,\] which is depicted in \fullref{fig:h0}, is called $h_0$. The class in $\mathrm{Ext}_{\mathcal{A}}^{1,2}({{\mathbb{Z}}}/2 ,{{\mathbb{Z}}}/2 )$ represented by the extension \[ 0 \to \Sigma^{2}{{\mathbb{Z}}}/2 \to \Sigma^{-2} H^*(\mathbb{C} P^2) \to {{\mathbb{Z}}}/2 \to 0,\] which is depicted in \fullref{fig:h1}, is called $h_1$.
\begin{figure}\label{fig:h0}
\label{fig:h1}
\end{figure} \end{ex}
\subsection{Module Structure on $\mathrm{Ext}$}\label{sec:multiplicative} Let $\mathcal{B}$ be a sub-Hopf algebra of the Steenrod algebra $\mathcal{A}$. Then for any $\mathcal{B}$-module $M$, there is a map \[ \mathrm{Ext}_{\mathcal{B}}^{s,t}(M,{{\mathbb{Z}}}/2 ) \otimes_{{{\mathbb{Z}}}/2} \mathrm{Ext}_{\mathcal{B}}^{s',t'}({{\mathbb{Z}}}/2 ,{{\mathbb{Z}}}/2 ) \to \mathrm{Ext}_{\mathcal{B}}^{s+s',t+t'}(M,{{\mathbb{Z}}}/2 ) .\] This is called the \emph{Yoneda product}. It is straightforward to describe the product in terms of extensions. Suppose that $s,s' \geq 1$. Given two extensions \begin{equation} \label{eq:elementM}
\xymatrix{ 0 \ar[r] & \Sigma^t {{\mathbb{Z}}}/2 \ar[r] & P_1 \ar[r] & \ldots \ar[r] & P_s \ar[r] & M \ar[r] & 0 }
\end{equation} and \begin{equation} \label{eq:element}\xymatrix{ 0 \ar[r] & \Sigma^{t'} {{\mathbb{Z}}}/2 \ar[r] & Q_1 \ar[r] & \ldots \ar[r]^-{\varphi_{s'}} & Q_{s'} \ar[r] & {{\mathbb{Z}}}/2 \ar[r] & 0 ,} \end{equation} where \eqref{eq:elementM} represents an element of $\mathrm{Ext}_{\mathcal{B}}^{s,t}(M,{{\mathbb{Z}}}/2 ) $ and \eqref{eq:element} an element of $\mathrm{Ext}_{\mathcal{B}}^{s',t'}({{\mathbb{Z}}}/2 ,{{\mathbb{Z}}}/2 ) $, we can splice the complexes to obtain an extension of length $s+s'$: \[ \xymatrix@-1pc{ 0 \ar[r] & \Sigma^{t'+t} {{\mathbb{Z}}}/2 \ar[r] & \Sigma^t Q_1 \ar[r] & \ldots \ar[r] & \Sigma^t Q_{s'} \ar[rr] \ar[dr] & & P_1 \ar[r] & \ldots \ar[r] & P_s \ar[r] & M \ar[r] & 0 \\
& & & & & \Sigma^t {{\mathbb{Z}}}/2 \ar[ur] & & } \]
which represents the product in $ \mathrm{Ext}_{\mathcal{B}}^{s+s',t+t'}(M,{{\mathbb{Z}}}/2 )$. This defines the module structure for elements of degree $s\geq 1$ in $\mathrm{Ext}_{\mathcal{B}}^{s,t}(M,{{\mathbb{Z}}}/2 )$. If $s=0$, then given a homomorphism $ M \to \Sigma^t {{\mathbb{Z}}}/2 $ in \[\mathrm{Ext}_{\mathcal{B}}^{0,t}(M, {{\mathbb{Z}}}/2 ) \cong \Hom_{\mathcal{B}}(M, \Sigma^t {{\mathbb{Z}}}/2)\] and an element of $\mathrm{Ext}_{\mathcal{B}}^{s',t'}({{\mathbb{Z}}}/2 ,{{\mathbb{Z}}}/2 )$ represented by \eqref{eq:element}, we obtain an element in $\mathrm{Ext}_{\mathcal{B}}^{s',t+t'}(M, {{\mathbb{Z}}}/2 )$ represented by \[\xymatrix@-0.5pc{0 \ar[r] & \Sigma^{t+t'} {{\mathbb{Z}}}/2 \ar[r] & \Sigma^t Q_1 \ar[r] & \ldots \ar[r] & \Sigma^t Q_{s'-1} \ar[r] & \Sigma^tQ_{s'} \times_{ \Sigma^{t}{{\mathbb{Z}}}/2 } M \ar[r] & M \ar[r] & 0 } \] where $\Sigma^t Q_{s'} \times_{ \Sigma^{t}{{\mathbb{Z}}}/2 } M $ is the pull-back of $\mathcal{B}$-modules. There is a commutative diagram of exact sequences \[\xymatrix{ 0 \ar[r] & \ker( \Sigma^t\varphi_{s'}) \ar[r] \ar[d] & \Sigma^t Q_{s'} \times_{ \Sigma^{t}{{\mathbb{Z}}}/2 } M \ar[d] \ar[r] & M \ar[d] \ar[r] & 0 \\
0 \ar[r] & \ker( \Sigma^t\varphi_{s'}) \ar[r] & \Sigma^t Q_{s'} \ar[r] &\Sigma^t {{\mathbb{Z}}}/2 \ar[r] & 0 }\]
so that we really do get an exact complex. An example when $\mathcal{B} =\mathcal{A}_1$ is given in \fullref{fig:extmultforhom}.
\begin{figure}\label{fig:extmultforhom}
\end{figure}
\subsection{Adams Charts}\label{sec:achart} For $\mathcal{B}$ a sub-Hopf algebra of $\mathcal{A}$, we depict the information contained in $ \mathrm{Ext}_{\mathcal{B}}^{s,t}(M, {{\mathbb{Z}}}/2 )$ in a picture which we call an \emph{Adams chart}. See \fullref{fig:achart}. An Adams chart is an illustration of $ \mathrm{Ext}_{\mathcal{B}}^{s,t}(M, {{\mathbb{Z}}}/2 )$ in the $(t-s,s)$-plane. A generator for a copy of ${{\mathbb{Z}}}/2 $ in $ \mathrm{Ext}_{\mathcal{B}}^{s,t}(M, {{\mathbb{Z}}}/2 )$ is denoted by a $\bullet$. Multiplication by $h_0$ is recorded by drawing a vertical line between two classes and multiplication by $h_1$ by a line of slope $1$. An infinite string of classes connected by multiplications by $h_0$ is called an \emph{$h_0$-tower}. Note that the Adams chart for $\Sigma^rM$ is the same as that of $M$, but horizontally shifted to the right by $r$. \begin{figure}\label{fig:achart}
\end{figure}
\subsection{Minimal Resolutions}\label{sec:MinRes} Let $\mathcal{B}$ be a sub-Hopf algebra of $\mathcal{A}$. Recall that $\mathcal{A}$ is an augmented algebra with $\mathcal{A}_0 = {{\mathbb{Z}}}/2 $. So this holds for any of its subalgebras. We let $I(\mathcal{B})$ be the kernel of the augmentation of $\mathcal{B}$. Note that for any $\mathcal{B}$-module $P$ and ${{\mathbb{Z}}}/2 $ the trivial $\mathcal{B}$-module, the map \[ \Hom_{\mathcal{B}}^*(P, {{\mathbb{Z}}}/2 ) \to \Hom_{\mathcal{B}}^*(I(\mathcal{B})P, {{\mathbb{Z}}}/2 ) \] induced by the inclusion $I(\mathcal{B})P \hookrightarrow P$ is zero. So, if $P_{\bullet}$, \[\ldots \to P_s \xrightarrow{f_s} P_{s-1} \to \ldots \to P_0 \to M\] is a projective resolution of $M$ which satisfies \[f_s(P_s) \subseteq I(\mathcal{B})P_{s-1},\] then the maps in the cochain complex $\Hom_{\mathcal{B}}^*(P_{\bullet},{{\mathbb{Z}}}/2 )$ are trivial and it follows that \[ \mathrm{Ext}_{\mathcal{B}}^{s,t}(M, {{\mathbb{Z}}}/2 ) \cong \Hom_{\mathcal{B}}^t(P_s, {{\mathbb{Z}}}/2 ).\] Such a resolution is called a \emph{minimal resolution}.
If $M$ is a $\mathcal{B}$-module which is bounded below, then $M$ has a minimal resolution by free $\mathcal{B}$-modules. In such a resolution $P_{\bullet} \to M$, the $P_s$ are direct sums of suspensions of $\mathcal{B}$ and $\mathrm{Ext}_{\mathcal{B}}^{s,t}(M, {{\mathbb{Z}}}/2)$ is a product of ${{\mathbb{Z}}}/2$s indexed over the summands $\Sigma^t \mathcal{B} \subseteq P_s$. If there are finitely many of these, the product is isomorphic to a direct sum and each summand corresponds to a generator in $\mathrm{Ext}_{\mathcal{B}}^{s,t}(M, {{\mathbb{Z}}}/2)$.
If $\mathcal{B}$ and $M$ are small, these are straightforward to construct and we do a few examples here in the case when $\mathcal{B}=\mathcal{A}_1$.
\begin{rem}\label{rem:range} Let $\mathcal{B}$ be a sub-Hopf algebra of $\mathcal{A}$ and $M$ a graded $\mathcal{B}$-module of finite type which is zero in degrees $t<n$. Using a free minimal resolution of $M$ to compute $\mathrm{Ext}_{\mathcal{B}}^{s,t}(M, {{\mathbb{Z}}}/2 )$, one deduces that $\mathrm{Ext}_{\mathcal{B}}^{s,t}(M, {{\mathbb{Z}}}/2 ) =0$ for $t-s<n$. \end{rem}
\begin{ex}\label{ex:M1} We begin by constructing a resolution of the $\mathcal{A}_1$-module $M_0= \mathcal{A}_1 /\!\!/ \mathcal{E}_0$, where $\mathcal{E}_0$ is the subalgebra generated by $Sq^1$. It is depicted below. This example is also treated by a different method in \fullref{ex:corM1}. The module $M_0$ has a periodic minimal resolution of the form \begin{equation}\label{eq:resM} \xymatrix{ M_0 & \mathcal{A}_1 \ar[l] & \Sigma \mathcal{A}_1 \ar[l] & \Sigma^2 \mathcal{A}_1 \ar[l] & \ldots \ar[l] } \end{equation} See \fullref{fig:minM0}. The horizontal (blue) arrows indicate the maps in \eqref{eq:resM}. The circled (in red) classes are in the kernel. We have redrawn the kernels to the right (in red) to make the next map easier to visualize. The duals of the boxed classes (in blue) will form a basis of $\mathrm{Ext}_{\mathcal{A}_1}^{*,*}(M_0, {{\mathbb{Z}}}/2 )$. Recall that $h_0$ was defined in \fullref{ex:h0h1def}. See also \fullref{fig:h0}. The class in $\mathrm{Ext}^{1,1}_{\mathcal{A}_1}(M_0, {{\mathbb{Z}}}/2 )$ is the $h_0$ multiple of the class in $\mathrm{Ext}^{0,0}_{\mathcal{A}_1}(M_0, {{\mathbb{Z}}}/2 )$. This is read off of the part of \fullref{fig:minM0} that has been framed (in gray).
The $\mathcal{A}_1$-module $M_0$ is not the restriction of any $\mathcal{A}$-module, but it has such a nice projective resolution that it is often used as a tool to compute resolutions for other modules. This will be explained below. There are larger versions of the module $M_0$ that we will denote by $M_n$ obtained by stringing together copies of $M_0$, including the case $n = \infty$. For example, $M_1$ is drawn in \fullref{fig:M1}. These all have periodic minimal resolutions. For example, \begin{equation*} \xymatrix{ M_1 & \mathcal{A}_1 \oplus \Sigma^4\mathcal{A}_1 \ar[l] & \Sigma( \mathcal{A}_1 \oplus \Sigma^4\mathcal{A}_1 ) \ar[l] & \Sigma^2 ( \mathcal{A}_1 \oplus \Sigma^4\mathcal{A}_1 ) \ar[l] & \ldots \ar[l] } \end{equation*} The Adams chart of $M_n$ has $h_0$-towers starting in $(4k,0)$ for $0 \leq k \leq n$. For example, the Adams chart for $M_1$ is depicted in \fullref{fig:M1achart}.
\begin{figure}
\caption{The $\mathcal{A}_1$-module $M_0$.}
\label{fig:M0}
\label{fig:M0achart}
\end{figure}
\begin{figure}\label{fig:minM0}
\end{figure}
\begin{figure}
\caption{The $\mathcal{A}_1$-module $M_1$.}
\label{fig:M1}
\label{fig:M1achart}
\end{figure}
\end{ex}
\begin{ex} The module ${{\mathbb{Z}}}/2 $ has a rather complicated minimal resolution. It is an excellent, albeit annoying, exercise to work it out. We have illustrated the first two terms of such a resolution in \fullref{fig:minF2}. We will give a different approach in \fullref{ex:periodicF2} to computing the Adams chart for $\mathrm{Ext}_{\mathcal{A}_1}({{\mathbb{Z}}}/2 ,{{\mathbb{Z}}}/2 )$ but we include it here in \fullref{fig:F2}. \begin{figure}\label{fig:minF2}
\label{fig:F2}
\end{figure} \end{ex}
\subsection{Change-of-Rings}\label{sec:COR} Let $\mathcal{B}$ be a subalgebra of $\mathcal{A}$. We defined $\mathcal{A} /\!\!/ \mathcal{B}$ in \fullref{defn:AmmB}. \begin{lem}[Shearing Isomorphism]\label{lem:shearing} Let $\mathcal{B}$ be a sub-Hopf algebra of $\mathcal{A}$. Let $M$ be an $\mathcal{A}$-module. Then there is an isomorphism of $\mathcal{A}$-modules \[ \mathcal{A} \otimes_{\mathcal{B}} M \cong \mathcal{A} /\!\!/ \mathcal{B} \otimes_{{{\mathbb{Z}}}/2 } M \] where the action of $\mathcal{A}$ on $\mathcal{A} \otimes_{\mathcal{B}} M$ is via the left action of $\mathcal{A}$ on itself and the action of $\mathcal{A}$ on $ \mathcal{A} /\!\!/ \mathcal{B} \otimes_{{{\mathbb{Z}}}/2 } M$ is the one described in \fullref{rem:modA}. \end{lem} \begin{rem} If $\mathcal{B}={{\mathbb{Z}}}/2 $, the isomorphism of \fullref{lem:shearing} is induced by the composite \[\xymatrix{ \mathcal{A}\otimes M \ar[r]^-{\psi\otimes M} & \mathcal{A} \otimes \mathcal{A} \otimes M \ar[r]^-{\mathcal{A} \otimes f} & \mathcal{A}\otimes M} \] where $f \colon \mathcal{A} \otimes M \to M$ is the structure map of the $\mathcal{A}$-module $M$. The maps $\psi$ and $\chi$ below are as in \fullref{rem:hopfalgebra}. The inverse is induced by the composite \[\xymatrix{ \mathcal{A}\otimes M \ar[r]^-{\psi\otimes M} &\mathcal{A} \otimes \mathcal{A} \otimes M \ar[rr]^-{\mathcal{A} \otimes \chi \otimes M} & & \mathcal{A} \otimes \mathcal{A} \otimes M \ar[r]^-{\mathcal{A} \otimes f} & \mathcal{A}\otimes M.} \] One verifies that these maps descend to the quotients for more general $\mathcal{B}$. \end{rem}
From the shearing isomorphism and the adjunction \[\Hom_{\mathcal{B}}(M, N) \cong \Hom_{\mathcal{A}}(\mathcal{A}\otimes_{\mathcal{B}} M, N) \] one can prove that \[ \mathrm{Ext}_{\mathcal{A}}^{*,*}( \mathcal{A} /\!\!/ \mathcal{B} \otimes_{{{\mathbb{Z}}}/2 } M, N ) \cong \mathrm{Ext}_{ \mathcal{B} }^{*,*}( M, N ) \] for any $\mathcal{A}$-modules $M$ and $N$. Therefore, in the case of extended modules, computations over $\mathcal{A}$ can be reduced to potentially easier computations over smaller sub-Hopf algebras $\mathcal{B}$. Some common examples are described below.
\begin{ex} Many of the modules relevant in the computations of \cite{FH} are of the form $\mathcal{A} /\!\!/ \mathcal{A}_1 \otimes_{{{\mathbb{Z}}}/2 } M_0$ in some range. By the adjunction \[\mathrm{Ext}_{\mathcal{A}}^{*,*}(\mathcal{A} /\!\!/ \mathcal{A}_1 \otimes_{{{\mathbb{Z}}}/2 } M_0, {{\mathbb{Z}}}/2 ) \cong \mathrm{Ext}_{\mathcal{A}_1}^{*,*}(M_0, {{\mathbb{Z}}}/2 ), \] we only need to keep track of the $\mathcal{A}_1$-module structure. \end{ex} \begin{rem} Let $R$ be a graded exterior algebra on $n$ generators over ${{\mathbb{Z}}}/2 $ \[R = E(x_1, \ldots, x_n) = {{\mathbb{Z}}}/2 [x_1, \ldots, x_n]/(x_1^2, \ldots, x_n^2),\] where $x_i$ is in degree $t_i$. Then $\mathrm{Ext}_{R}^{*,*}({{\mathbb{Z}}}/2 ,{{\mathbb{Z}}}/2 )$ is a polynomial algebra on $n$ generators \[ \mathrm{Ext}_{R}^{*,*}({{\mathbb{Z}}}/2 ,{{\mathbb{Z}}}/2 ) \cong {{\mathbb{Z}}}/2 [y_1, \ldots, y_n]\] for $y_i \in \mathrm{Ext}_R^{1,t_i}({{\mathbb{Z}}}/2 ,{{\mathbb{Z}}}/2 )$. This is an example of a phenomenon called Koszul duality. \end{rem}
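Under this Koszul duality the bigraded dimensions of $\mathrm{Ext}_R$ reduce to counting monomials in the $y_i$. The Python sketch below (the helper name \texttt{ext\_dim} is ours) counts exponent vectors in a given bidegree.

```python
from itertools import product

def ext_dim(degrees, s, t):
    """dim over Z/2 of Ext^{s,t} for E(x_1, ..., x_n), |x_i| = degrees[i]:
    Ext is polynomial on classes y_i in bidegree (1, degrees[i]), so we
    count exponent vectors a with sum(a) = s and sum(a_i*degrees[i]) = t."""
    return sum(1 for a in product(range(s + 1), repeat=len(degrees))
               if sum(a) == s
               and sum(e * d for e, d in zip(a, degrees)) == t)
```

For $E(Q_0)$ with $|Q_0|=1$ this gives a single class in each bidegree $(s,s)$, matching ${{\mathbb{Z}}}/2[h_0]$; for $E(Q_0,Q_1)$ with degrees $1$ and $3$, the single class in bidegree $(2,4)$ is the product of the two polynomial generators.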
\begin{ex}\label{ex:corM1} The module $M_0$ of \fullref{ex:M1} is isomorphic to $\mathcal{A}_1 /\!\!/ \mathcal{E}_0 $, where $\mathcal{E}_0$ is the algebra generated by $Sq^1$. The algebra $\mathcal{E}_0$ is an exterior algebra on one generator in degree $1$, so that \[\mathrm{Ext}_{\mathcal{A}_1}^{*,*}(M_0,{{\mathbb{Z}}}/2 ) \cong \mathrm{Ext}_{\mathcal{E}_0}^{*,*}({{\mathbb{Z}}}/2 ,{{\mathbb{Z}}}/2 ) \cong {{\mathbb{Z}}}/2 [h_0] \] where $h_0 \in \mathrm{Ext}^{1,1}_{\mathcal{E}_0}({{\mathbb{Z}}}/2 , {{\mathbb{Z}}}/2 )$. The Adams chart for $M_0$ contains one $h_0$-tower starting in degree $(0,0)$. \end{ex}
\begin{ex}\label{ex:coneeta} The $\mathcal{A}_1$-module $\mathcal{A}_1 /\!\!/ \mathcal{E}_1$ is the cohomology of $\Sigma^{-2}H^*(\mathbb{C} P^2)$, illustrated in \fullref{fig:E1}. By the change-of-rings isomorphism, \[\mathrm{Ext}_{\mathcal{A}_1}^{*,*}(\mathcal{A}_1 /\!\!/ \mathcal{E}_1, {{\mathbb{Z}}}/2 ) \cong \mathrm{Ext}_{\mathcal{E}_1}^{*,*}({{\mathbb{Z}}}/2 , {{\mathbb{Z}}}/2 ). \] Since $\mathcal{E}_1 = E(Q_0, Q_1)$, it follows that $\mathrm{Ext}_{\mathcal{A}_1}^{*,*}(\mathcal{A}_1 /\!\!/ \mathcal{E}_1, {{\mathbb{Z}}}/2 )$ is a polynomial algebra on two generators. It is common to call the generator corresponding to $Q_0 = Sq^1$ by $h_0 \in \mathrm{Ext}_{\mathcal{A}_1}^{1,1}(\mathcal{A}_1 /\!\!/ \mathcal{E}_1, {{\mathbb{Z}}}/2 )$. The generator corresponding to $Q_1$ is often called $v_1 \in \mathrm{Ext}_{\mathcal{A}_1}^{1,3}(\mathcal{A}_1 /\!\!/ \mathcal{E}_1, {{\mathbb{Z}}}/2 )$, so that \[ \mathrm{Ext}_{\mathcal{A}_1}^{*,*}(\mathcal{A}_1 /\!\!/ \mathcal{E}_1, {{\mathbb{Z}}}/2 ) \cong {{\mathbb{Z}}}/2 [h_0, v_1].\] The Adams chart is depicted in \fullref{fig:coneeta}. \begin{figure}
\caption{$\mathcal{A}_1 /\!\!/ \mathcal{E}_1$}
\label{fig:E1}
\label{fig:coneeta}
\end{figure}
\end{ex}
\subsection{Long Exact Sequences}\label{sec:LES}
For some of the computations below we will need to use the long exact sequence induced on $\mathrm{Ext}$ from a short exact sequence of modules.
\begin{prop}\label{prop:LES}
Let $0 \to M \to N \to P \to 0$ be an exact sequence of $\mathcal{B}$-modules. Then there is a long exact sequence
\[
\xymatrix{ \ldots \ar[r] & \mathrm{Ext}^{s, t}_{\mathcal{B}} (P, {{\mathbb{Z}}}/2 ) \ar[r] & \mathrm{Ext}^{s, t}_{\mathcal{B}} (N, {{\mathbb{Z}}}/2 ) \ar[r] & \mathrm{Ext}^{s, t}_{\mathcal{B}} (M, {{\mathbb{Z}}}/2 ) \ar[dll]_{\delta}\\
& \mathrm{Ext}^{s+1, t}_{\mathcal{B}} (P, {{\mathbb{Z}}}/2 ) \ar[r] & \mathrm{Ext}^{s+1,t}_{\mathcal{B}} (N, {{\mathbb{Z}}}/2 ) \ar[r] & \ldots
}
\] \end{prop}
The map $\delta$ can be identified using the description of $\mathrm{Ext}$ in terms of extensions given in \fullref{sec:compext}. Given an extension \begin{equation}\label{eq:orig} \xymatrix{ 0 \ar[r] & \Sigma^t {{\mathbb{Z}}}/2 \ar[r] & P_1 \ar[r] & \ldots \ar[r] & P_s \ar[r] & M \ar[r] & 0 } \end{equation} we let $P_{s+1} = N$ and get an extension of length $s+1$ \[ \xymatrix@-0.5pc{ 0 \ar[r] & \Sigma^t {{\mathbb{Z}}}/2 \ar[r] & P_1 \ar[r] & \ldots \ar[r] & P_s \ar[rr] \ar[dr] & & N=P_{s+1} \ar[r] & P \ar[r] & 0 \\
& & & & & M \ar[ur] & } \] which corresponds to the boundary of the element of $ \mathrm{Ext}^{s, t}_{\mathcal{B}} (M, {{\mathbb{Z}}}/2 ) $ represented by \eqref{eq:orig} in $ \mathrm{Ext}^{s+1, t}_{\mathcal{B}} (P, {{\mathbb{Z}}}/2 ) $. See, e.g. \cite[9.6]{mccleary} for more details.
Computations using \fullref{prop:LES} can be done with the help of an Adams chart. The trick is to draw both $ \mathrm{Ext}^{s, t}_{\mathcal{B}} (P, {{\mathbb{Z}}}/2 )$ and $ \mathrm{Ext}^{s, t}_{\mathcal{B}} (M, {{\mathbb{Z}}}/2 )$ in the same chart and to treat the boundary map $\delta$ as a differential of slope $(-1,1)$. We illustrate this by an example.
\begin{ex}\label{ex:R0} We compute $\mathrm{Ext}_{\mathcal{A}_1}^{s,t}(R_0, {{\mathbb{Z}}}/2 )$ for $R_0$ as depicted in \fullref{fig:R0}. The module $R_0$ sits in an exact sequence \begin{equation}\label{eq:extR0} 0 \to \Sigma {{\mathbb{Z}}}/2 \to R_0 \to M_{\infty} \to 0, \end{equation} so we use the long exact sequence of \fullref{prop:LES} to compute $\mathrm{Ext}_{\mathcal{A}_1}^{s,t}(R_0, {{\mathbb{Z}}}/2 )$:
\[
\xymatrix{
\ldots \ar[r] & \mathrm{Ext}^{s, t}_{\mathcal{A}_1} (M_{\infty}, {{\mathbb{Z}}}/2 ) \ar[r] & \mathrm{Ext}^{s, t}_{\mathcal{A}_1} (R_0, {{\mathbb{Z}}}/2 ) \ar[r] & \mathrm{Ext}^{s, t}_{\mathcal{A}_1} (\Sigma {{\mathbb{Z}}}/2 , {{\mathbb{Z}}}/2 ) \ar[dll]_-{\delta}^-{h_0}\\
& \mathrm{Ext}^{s+1, t}_{\mathcal{A}_1} (M_{\infty}, {{\mathbb{Z}}}/2 ) \ar[r] & \mathrm{Ext}^{s+1,t}_{\mathcal{A}_1} (R_0, {{\mathbb{Z}}}/2 ) \ar[r] & \ldots
}
\] The boundary is given by multiplication by $h_0$ since \eqref{eq:extR0} is a representative extension for the element $h_0 \cdot 1 \in \mathrm{Ext}_{\mathcal{A}_1}^{1,1}(M_{\infty}, {{\mathbb{Z}}}/2)$.
In \fullref{fig:R0achart}, the classes of $\mathrm{Ext}_{\mathcal{A}_1}^{s,t}(\Sigma {{\mathbb{Z}}}/2 , {{\mathbb{Z}}}/2 )$ (blue), which is illustrated in \fullref{fig:F2}, support boundaries (red) to the classes of $\mathrm{Ext}_{\mathcal{A}_1}^{s+1,t}(M_{\infty}, {{\mathbb{Z}}}/2 )$ (green). The circled classes are the elements of $\mathrm{Ext}_{\mathcal{A}_1}^{s,t}(R_0, {{\mathbb{Z}}}/2 )$ in this range. The dashed line indicates a multiplication by $h_1$ between a class coming from $\mathrm{Ext}_{\mathcal{A}_1}^{s,t}(\Sigma {{\mathbb{Z}}}/2 , {{\mathbb{Z}}}/2 )$ and a class coming from $\mathrm{Ext}_{\mathcal{A}_1}^{s+1,t}(M_{\infty}, {{\mathbb{Z}}}/2 )$, which we have not justified. One way to do this is to compute a minimal resolution for $R_0$ and use the fact that multiplication by $h_1$ corresponds to the extension depicted in \fullref{fig:h1}.
\begin{figure}
\caption{An extension exhibiting an $\mathcal{A}_1$-module we call $R_0$.}
\label{fig:R0}
\label{fig:R0achart}
\end{figure} \end{ex}
\begin{ex} Consider the $\mathcal{A}_1$-module depicted in \fullref{fig:R1}. Using \fullref{prop:LES}, we get the Adams chart depicted in \fullref{fig:R1achart}. \begin{figure}
\caption{An exact sequence of $\mathcal{A}_1$-modules depicting $R_1$.}
\label{fig:R1}
\label{fig:R1achart}
\end{figure} \end{ex}
\begin{rem}\label{rem:sstrick} We present one last trick which is a variation on \fullref{prop:LES}. It uses the fact that, although the module $\mathcal{A}_1 /\!\!/ \mathcal{E}_0$ is not projective, it has a nice periodic resolution as an $\mathcal{A}_1$-module. Given a module $M$, suppose that there is an exact complex \begin{equation} \xymatrix{0 & M \ar[l] & P_0 \ar[l]_-{f_0} & P_1 \ar[l]_-{f_1} & P_2 \ar[l]_-{f_2} & \ldots \ar[l] } \end{equation} where the $P_s$ are direct sums of suspensions of copies of $\mathcal{A}_1$ and $\mathcal{A}_1 /\!\!/ \mathcal{E}_0$ and with the property that \[f_s(P_s) \subseteq I(\mathcal{A}_1)P_{s-1},\] so that $P_{\bullet} \to M$ is a ``minimal resolution'', but not by projective modules. We call this a ``modified'' minimal resolution. For each summand $\Sigma^{t} \mathcal{A}_1$ in $P_s$, there will be a generator of ${{\mathbb{Z}}}/2 \in \mathrm{Ext}^{s,t}_{\mathcal{A}_1}(M,{{\mathbb{Z}}}/2 )$ and for each summand
$\Sigma^{t} \mathcal{A}_1 /\!\!/ \mathcal{E}_0$ in $P_s$, there will be an $h_0$-tower whose generator is in $\mathrm{Ext}^{s,t}_{\mathcal{A}_1}(M,{{\mathbb{Z}}}/2 )$.
The proof of this fact uses the collapsing of the spectral sequence of a double complex built from minimal resolutions.
\end{rem}
\begin{ex}\label{ex:periodicF2}
We give a modified minimal resolution of ${{\mathbb{Z}}}/2 $ over $\mathcal{A}_1$, which is periodic, in \fullref{fig:periodicF2}. More precisely, the figure depicts the top row of \eqref{eq:per}. The periodic resolution is obtained by splicing copies of this complex together and is the bottom row of \eqref{eq:per}.
\begin{equation}\label{eq:per}
\xymatrix@-1.2pc{ 0 & {{\mathbb{Z}}}/2 \ar[l] \ar@{=}[d] & \mathcal{A}_1 \ar@{=}[d] \ar[l] & \Sigma^2 \mathcal{A}_1 \oplus \Sigma \mathcal{A}_1 /\!\!/ \mathcal{E}_0 \ar@{=}[d] \ar[l] & \Sigma^{4} \mathcal{A}_1 \ar@{=}[d] \ar[l] & \Sigma^7 \mathcal{A}_1 /\!\!/ \mathcal{E}_0 \ar@{=}[d] \ar[l] & \Sigma^{12} {{\mathbb{Z}}}/2 \ar[l] & 0 \ar[l] \\
0 & {{\mathbb{Z}}}/2 \ar[l] & P_0 \ar[l] & P_1 \ar[l] & P_2 \ar[l] &P_3 \ar[l] & \Sigma^{12} P_0 \ar[l] \ar[u] & \Sigma^{12} P_1 \ar[l] & \ldots \ar[l] }
\end{equation}
\begin{figure}\label{fig:periodicF2}
\end{figure} \end{ex}
\begin{ex} Consider the module $R_2$ depicted in \fullref{fig:R2JQ}. Using the resolution constructed in \fullref{ex:periodicF2}, we have a modified minimal resolution \[ \xymatrix{ 0 & R_2 \ar[l] & \Sigma^{-1}P_1 \ar[l] & \Sigma^{-1}P_2 \ar[l] & \ldots \ar[l]. }\] So the Adams chart for $\mathrm{Ext}^{s,t}_{\mathcal{A}_1}(R_2, {{\mathbb{Z}}}/2 )$ is a truncated version of that for $\mathrm{Ext}^{s,t}_{\mathcal{A}_1}({{\mathbb{Z}}}/2 , {{\mathbb{Z}}}/2 )$ and is given in \fullref{fig:R2JQachart}. Similarly, the $\mathcal{A}_1$-modules $J$, called the \emph{joker}, and $Q$, called the \emph{``upside down'' question mark complex}, also have Adams charts which are truncated versions of that for ${{\mathbb{Z}}}/2 $. These are depicted in \fullref{fig:R2JQachart}.
\begin{figure}
\caption{An $\mathcal{A}_1$-module we call $R_2$ (left), the joker $J$ (center) and the ``upside down'' question mark complex $Q$ (right).}
\label{fig:R2JQ}
\end{figure}
\begin{figure}\label{fig:R2JQachart}
\end{figure} \end{ex}
\subsection{The Adams spectral sequence}\label{sec:assconstruction} We turn to the construction of the spectral sequence. In this section, we make the following assumption:
\begin{assumption}\label{ass:assX} Let $X$ be the suspension spectrum of a CW-complex that has finitely many cells in each dimension. \end{assumption}
For example, the Thom spectra we are considering have this property since Grassmannians have cell structures with finitely many $n$-cells for each $n$. Some of this can be done in more generality, but all of our examples will have models of this form, so we limit ourselves to this case. A friendly reference to spectral sequences is Hatcher's online notes \cite{hatcherss}. Other great references are McCleary \cite{mccleary}, Boardman \cite{boardman} and Miller \cite{millerrelations}.
\begin{defn} The \emph{Hurewicz homomorphism} \begin{align*} h \colon \pi_tX = [S^t, X] \to \Hom_{\mathcal{A}} (H^*(X), H^*(S^t))\cong \Hom_{\mathcal{A}} (H^*(X), \Sigma^t {{\mathbb{Z}}}/2 ) \end{align*} is defined by sending a map $f \colon S^t \to X$ to the induced map on cohomology, $f^* \colon H^*(X) \to H^*(S^t)$. \end{defn}
If $h$ were an isomorphism, computing the homotopy groups of $X$ would be as easy as understanding its cohomology. In certain cases, this does happen.
\begin{defn}\label{defn:genEM} A spectrum $Z$ is a generalized Eilenberg--MacLane spectrum of finite type if \[Z \simeq HV \simeq \bigvee_{i \in I} \Sigma^iH{{\mathbb{Z}}}/2 \] where $V$ is a graded ${{\mathbb{Z}}}/2 $ vector space which is finite in each degree. \end{defn}
The finiteness assumption in \fullref{defn:genEM} gives an isomorphism \[\bigvee_{i \in I} \Sigma^iH{{\mathbb{Z}}}/2 \simeq \prod_{i \in I} \Sigma^iH{{\mathbb{Z}}}/2 .\] See \eqref{eq:prodcoprod}.
\begin{ex}\label{rem:freeup} Let $X$ be a spectrum that satisfies \fullref{ass:assX}. There is an isomorphism \[ H^*({H}\mathbb{Z}/2 \wedge X ) \cong \mathcal{A} \otimes_{{{\mathbb{Z}}}/2 } H^*(X) \]
and a class $1 \otimes x \in H^{|x|}({H}\mathbb{Z}/2 \wedge X )$ corresponds to a map
${H}\mathbb{Z}/2 \wedge X \rightarrow \Sigma^{|x|}{H}\mathbb{Z}/2$. By \fullref{ass:assX}, the cohomology of $X$ is finite in each degree, so
\[ \prod_{x \in H^*(X) } \Sigma^{|x|} {H}\mathbb{Z}/2 \simeq \bigvee_{x \in H^*(X) } \Sigma^{|x|} {H}\mathbb{Z}/2 \] and the product of these maps is a weak equivalence:
\[ \xymatrix{{H}\mathbb{Z}/2 \wedge X \ar[r] & \bigvee_{x \in H^*(X) } \Sigma^{|x|} {H}\mathbb{Z}/2 }\] So any spectrum of the form ${H}\mathbb{Z}/2 \wedge X$ for $X$ satisfying \fullref{ass:assX} is a generalized Eilenberg--MacLane spectrum of finite type. \end{ex}
If $Z$ is a generalized Eilenberg--MacLane spectrum of finite type, then the Hurewicz homomorphism is an isomorphism. So, the idea is to resolve $X$ by generalized Eilenberg--MacLane spectra. \begin{defn}
Let $X$ be a spectrum that satisfies \fullref{ass:assX}. An \textbf{Adams resolution} is a sequence of spectra
\begin{equation}\label{eq:adamsres}
\xymatrix{
X = X_0 \ar[d]^-{j_0} & X_1 \ar[d]^-{j_1} \ar[l]_-{i_0} & X_2 \ar[d]^-{j_2} \ar[l]_-{i_1} & X_3 \ar[l]_-{i_2} \ar[d]^-{j_3} & \ldots \ar[l] & \\ K_0 \ar@{.>}[ur]_-{\delta_0} & K_1 \ar@{.>}[ur]_-{\delta_1} & K_2 \ar@{.>}[ur]_-{\delta_2} & K_3 \ar@{.>}[ur]_-{\delta_3} & } \end{equation} where \[ \xymatrix{ X_{s+1} \ar[r]^-{i_s} & X_s \ar[r]^-{j_s} & K_s \ar[r]^-{\delta_s} & \Sigma X_{s+1}} \] are cofiber sequences (i.e. exact triangles) and such that
\begin{enumerate}[(a)]
\item $K_i \simeq \bigvee \Sigma^j {H}\mathbb{Z}/2$, and
\item $H^\ast (K_i) \to H^\ast (X_i)$ is surjective.
\end{enumerate} \end{defn}
\begin{rem} From an Adams resolution, we obtain a sequence \[ \xymatrix{X = X_0 \ar[r] & K_0 \ar[r] & \Sigma K_1 \ar[r] & \Sigma^2 K_2 \ar[r] & \ldots} \] where $ \Sigma^s K_s \to \Sigma^{s+1} K_{s+1} $ is the composite $j_{s+1} \circ \delta_s$. Further, the resolution is constructed so that $H^*(\Sigma^{\bullet} K_{\bullet}) \to H^*(X)$ is a projective resolution of $H^*(X)$ as an $\mathcal{A}$-module. \end{rem}
\begin{rem} Let $\overline{{H}}\mathbb{Z}/2 $ be defined by the fiber sequence $\overline{{H}}\mathbb{Z}/2 \to S \to {H}\mathbb{Z}/2$. From \fullref{rem:freeup}, it follows that \[ \xymatrix{
X \ar[d]^-{j_0} & \overline{{H}}\mathbb{Z}/2 \wedge X \ar[d]^-{j_1} \ar[l]_-{i_0} & \overline{{H}}\mathbb{Z}/2^{\wedge 2} \wedge X \ar[d]^-{j_2} \ar[l]_-{i_1} & \ldots \ar[l]_-{i_2} & \\
{H}\mathbb{Z}/2 \wedge X \ar@{.>}[ur]_-{\delta_0} & {H}\mathbb{Z}/2 \wedge \overline{{H}}\mathbb{Z}/2 \wedge X \ar@{.>}[ur]_-{\delta_1} & {H}\mathbb{Z}/2 \wedge \overline{{H}}\mathbb{Z}/2^{\wedge 2} \wedge X \ar@{.>}[ur]_-{\delta_2} & } \] is an Adams resolution. So Adams resolutions always exist. \end{rem}
\begin{defn} Let \[F^s = \text{im}(\pi_*X_s \rightarrow \pi_* X ).\] Then $\alpha \in \pi_*X$ has \emph{Adams filtration} $s$ if $\alpha \in F^s\backslash F^{s+1}$. \end{defn}
The Adams filtration of an element is independent of the choice of Adams resolution.
\begin{lem} An element $f\in \pi_t X$ has Adams filtration $\geq s$ if and only if $f$ factors as \[f : S^t =U_s\rightarrow U_{s-1} \rightarrow U_{s-2} \rightarrow \ldots \rightarrow U_1 \rightarrow U_0=X \] where the maps $U_{i} \rightarrow U_{i-1}$ induce the zero maps on mod-$2$ cohomology. \end{lem}
\begin{ex} An element of $\pi_*X$ has Adams filtration $0$ if and only if its image under the Hurewicz homomorphism is non-zero. Indeed, the image of $\pi_tX_1 \rightarrow \pi_t X$ is the kernel of the map $(j_0)_*$. But $j_0^*$ is surjective on cohomology, so \[ \Hom_{\mathcal{A}}(H^*(X), \Sigma^t {{\mathbb{Z}}}/2) \to \Hom_{\mathcal{A}}(H^*(K_0), \Sigma^t {{\mathbb{Z}}}/2) \] is injective. In particular, $i_0^*$ must be zero and the image of $i_0$ consists of elements of filtration $s\geq 1$.
Examples of elements of Adams filtration one are the Hopf maps \begin{align*} \eta \colon S^3 &\rightarrow S^2, & \nu \colon S^7 &\rightarrow S^4, & \sigma \colon S^{15} &\rightarrow S^8. \end{align*} \end{ex}
We now turn to the construction of the Adams spectral sequence. Fix an Adams resolution of $X$ as in \eqref{eq:adamsres}. Applying $\pi_*(-)$, we get an unravelled exact couple \[\xymatrix{ \pi_*X \ar@{=}[r] &\pi_*X_0 \ar[d]^{j_0} & \pi_*X_1 \ar[l]_{i_0} \ar[d]^{j_1} & \pi_*X_2 \ar[l]_{i_1} \ar[d]^{j_2} & \ar[l]_{i_2} \pi_*X_3 \ar[d]^{j_3} & \ar[l] \ldots \\ & \pi_* K_0 \ar@{.>}[ur]_{\delta_0} & \pi_*K_1 \ar@{.>}[ur]_{\delta_1} & \pi_*K_2 \ar@{.>}[ur]_{\delta_2} & \pi_*K_3 \ar@{.>}[ur] & } \] from which we obtain a spectral sequence. More precisely, we let \begin{enumerate}[(a)] \item $E_1^{s, t} = \pi_{t-s}K_s \cong \pi_t \Sigma^s K_s$, and \item $d_1 \colon E_1^{s, t} \to E_1^{s+1, t} $ be given by $d_1 = \Sigma j_{s+1} \circ \delta_s $. \end{enumerate} In general, $E_r^{*,*} = \ker(d_{r-1})/\im(d_{r-1})$ and $d_r \colon E_r^{s, t} \to E_r^{s+r, t +r-1}$ is given by $d_r(x) = j(y)$ for any $y$ such that $i^{r-1}(y) = \delta(x)$. Here, $i^{n} =i \circ \ldots \circ i$ iterated $n$-times and we have left out indices and suspensions.
\begin{prop} Let $X$ satisfy \fullref{ass:assX}. There is an isomorphism \[E_2^{s,t} \cong \mathrm{Ext}_{\mathcal{A}}^{s,t}(H^*(X), {{\mathbb{Z}}}/2 ).\] \end{prop} \begin{proof} In degree $t$, the $E_2$ term is the cohomology of \begin{equation}\label{eq:respiK} \xymatrix{0 \ar[r] & \pi_tK_0 \ar[r] & \pi_t\Sigma K_1 \ar[r] & \pi_t\Sigma^2 K_2 \ar[r] & \ldots } \end{equation} However, the $K_s$ are generalized Eilenberg--MacLane spectra, so \[ \pi_{t}\Sigma^s K_s \cong \Hom_{\mathcal{A}}(H^*(\Sigma^s K_s), {{\mathbb{Z}}}/2 ).\] The Adams resolutions are built so that $H^*(\Sigma^{\bullet} K_{\bullet}) \to H^*(X)$ is a projective resolution as $\mathcal{A}$-modules, so the homology of \eqref{eq:respiK} is $ \mathrm{Ext}_{\mathcal{A}}^{s,t}(H^*(X), {{\mathbb{Z}}}/2 )$. \end{proof}
In general, the Adams spectral sequence does not exactly compute the homotopy groups of the spectrum $X$. However, under \fullref{ass:assX}, it does compute their \emph{$2$-completion}, a construction we review here.
\begin{defn}\label{defn:abcomp} Let $G$ be an abelian group. For each $s\in \mathbb{N}$, let \[p_{s+1} \colon G/2^{s+1} \to G/2^{s}\] be the map induced by reduction modulo $2^{s}$. The \emph{$2$-completion} of $G$, denoted by $ G_2^{\wedge}$, is the inverse limit of $G/2^{s}$ along the maps $p_{s}$. That is, \[ G_2^{\wedge} = \varprojlim_s G/2^s . \] \end{defn}
\begin{rem} Note that in the category of abelian groups, $\varprojlim_s G/2^s$ is isomorphic to the kernel of the map \[ p \colon \prod_{s} G/2^s \to \prod_{s} G/2^s \] where $p$ is the difference of the identity and the map to the product induced by the composites $ \prod_{s } G/2^s \to G/2^{k+1} \xrightarrow{p_{k+1}} G/2^k $. \end{rem}
\begin{ex}\label{ex:2adic} If $G={{\mathbb{Z}}}$, then \[{{\mathbb{Z}}}_2 := ({{\mathbb{Z}}})^{\wedge}_2 = \varprojlim_s {{\mathbb{Z}}}/2^s \] are the $2$-adic integers. In general, if $G$ is a finitely generated abelian group, \[G_2^{\wedge}\cong G\otimes {{\mathbb{Z}}}_2.\] In particular, if $G$ is a finite abelian $2$-group, then $G_2^{\wedge} \cong G$. \end{ex}
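As a concrete check of the last claim, take $G = {{\mathbb{Z}}}/12 \cong {{\mathbb{Z}}}/4 \oplus {{\mathbb{Z}}}/3$. Since $G/2^s \cong {{\mathbb{Z}}}/\gcd(12, 2^s)$, the tower of quotients is
\[ \ldots \to {{\mathbb{Z}}}/4 \to {{\mathbb{Z}}}/4 \to {{\mathbb{Z}}}/2 , \]
which stabilizes, so
\[ ({{\mathbb{Z}}}/12)^{\wedge}_2 = \varprojlim_s {{\mathbb{Z}}}/\gcd(12, 2^s) \cong {{\mathbb{Z}}}/4 \cong ({{\mathbb{Z}}}/12) \otimes {{\mathbb{Z}}}_2 . \]
That is, $2$-completion retains the $2$-primary part and discards the odd torsion, here using that $3$ is a unit in ${{\mathbb{Z}}}_2$.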
\begin{thm}\label{thm:assconv} Let $X$ satisfy \fullref{ass:assX}. Then the Adams spectral sequence for $X$ computes the $2$-completion of the homotopy groups of $X$. That is, the spectral sequence converges to $(\pi_{*}X)^{\wedge}_{2}$: \begin{align*} \mathrm{Ext}_{\mathcal{A}}^{s,t}(H^*(X) , {{\mathbb{Z}}}/2 ) \Longrightarrow (\pi_{t-s}X)^{\wedge}_{2} \end{align*} \end{thm}
\begin{rem}\label{rem:compsepctra} In fact, the Adams spectral sequence for $X$ satisfying \fullref{ass:assX} computes the homotopy groups of a spectrum $X_2^{\wedge}$ that can be obtained using a construction analogous to completion for abelian groups, and which has the property that $\pi_*(X_2^{\wedge}) \cong (\pi_*X)^{\wedge}_2$. In broad strokes, we define $X/2^{s} \in \mathrm{hSp} $ via the exact triangle: \[\xymatrix{ X \ar[r]^-{2^{s}} & X \ar[r] & X/2^{s} \ar[r] & \Sigma X.}\] There are induced maps $p_{s+1} \colon X/2^{s+1} \to X/2^s$ and we define $\varprojlim_s X/2^s \in \mathrm{hSp}$ by the exact triangle \[ \xymatrix{ \varprojlim_s X/2^s \ar[r] & \prod_{s} X/2^s \ar[r]^-{p} & \prod_{s} X/2^s \ar[r] & \Sigma \varprojlim_s X/2^s} \] where $p$ is the difference of the identity and the map to the product induced by the composites $ \prod_{s } X/2^s \to X/2^{k+1} \xrightarrow{p_{k+1}} X/2^k $. This is called the \emph{homotopy inverse limit}. For a spectrum $X$ that satisfies \fullref{ass:assX}, we have \[ X_2^{\wedge} = \varprojlim_s X/2^s. \] We refer the reader to Bousfield \cite[Section 2]{bousfield_lochom} and Ravenel \cite[II.2.1]{ravgreen} for more details on this and related topics. \end{rem}
\subsection{Using the Adams spectral sequence}\label{sec:usingASS} In this section, we continue to assume that $X$ satisfies \fullref{ass:assX}, so that the Adams spectral sequence for $X$ computes the $2$-completion of the homotopy groups of $X$.
Computing Adams differentials is ``an art not a science''. There is no algorithm for determining them in general and it usually is a theorem when one computes a new differential in a spectral sequence of interest. However, there are rules to the game and the goal of this section is to share some of the tricks of the trade.
First, the Adams spectral sequence is depicted in an Adams chart as in \fullref{fig:ASSexample}. In this grading, a $d_r$ differential increases $s$ by $r$ and decreases $t-s$ by $1$. If $d_r(x)=y$, we say that $x$ hits, or \emph{kills}, $y$. The class $x$ is the \emph{source} and $y$ the \emph{target} of the differential. A class which is in the kernel of $d_r$ for every $r$ is called a \emph{permanent cycle}. A class which is hit by a differential is called a \emph{boundary}. We say that $x$ \emph{survives} if it is a permanent cycle, but not a boundary.
The Adams spectral sequence for $X$ is a module over the Adams spectral sequence for $S^0$. From this, it follows in particular that the differentials are $h_0$ and $h_1$-linear. That is, $d_r(h_ix)=h_i d_r(x)$ for $i=0,1$.
We draw each page $E_r^{*,*}$ of the spectral sequence in subsequent Adams charts, erasing pairs of classes $x$ and $y$ that are connected by a differential $d_r(x)=y$ as we ``turn the pages''. Letting the process go to infinity, or stopping when there are no possible differentials left, we get the last page, called $E_{\infty}^{*,*}$. The last page of the spectral sequence contains the information for $(\pi_*X)^{\wedge}_{2}$ in the form of an \emph{associated graded}. That is, there is a filtration \[ (\pi_{t}X)^{\wedge}_{2} = F_{\infty}^{0,t} \supseteq F_{\infty}^{1,t+1} \supseteq F_{\infty}^{2,t+2} \supseteq \ldots \] related to $E_{\infty}^{*,*}$ by exact sequences \begin{equation}\label{eq:extASS} 0 \to F_{\infty}^{s+1,t+s+1} \to F_{\infty}^{s,t+s} \to E_{\infty}^{s,t+s} \to 0 . \end{equation} So, each box in the $t-s$ column of the Adams chart at $E_{\infty}^{*,*}$ is a subquotient of the answer $ (\pi_{t}X)^{\wedge}_{2}$ and the last problem is to reassemble them together. This is called \emph{solving the extensions}, where the word ``extension'' refers to \eqref{eq:extASS}. We solve for $F_{\infty}^{0,t}/F^{s,t+s}$ inductively, starting with \[0 \to E_{\infty}^{1,t+1} \to F_{\infty}^{0,t}/F_{\infty}^{2,t+2} \to E_{\infty}^{0,t} \to 0 \] and continuing on to \[ 0 \to E_{\infty}^{s,t+s} \to F_{\infty}^{0,t}/F_{\infty}^{s+1,t+s+1} \to F_{\infty}^{0,t}/F_{\infty}^{s,t+s} \to 0. \] We take the inverse limit once all of the terms $F_{\infty}^{0,t}/F^{s,t+s}$ have been determined.
An element $a \in E_{\infty}^{s,t+s}$ only represents a class $\alpha \in (\pi_{t}X)^{\wedge}_{2}$ modulo elements of \emph{higher filtration}. The language used is that $a$ \emph{detects} the element $\alpha$. Note that if $a$ detects $\alpha$, then it detects any class $\alpha + \beta$ where $\beta \in F_{\infty}^{s+1,t+s+1}$, so a class $a$ may detect multiple elements.
For the Adams spectral sequence, if $a \in E_{\infty}^{s,t+s}$ and $b \in E_{\infty}^{s+1, t+s+1}$ are such that $h_0a=b$, then $a$ detects an element $\alpha$ and $b$ detects an element $\beta$ such that $2\alpha = \beta$. So multiplication by $h_0$ records multiplication by $2$ and corresponds to a non-trivial, but easy to detect, extension as it comes from the module structure of the $E_2$-page. However, there can be non-trivial extensions coming from multiplications that do not come from the algebraic structure of the $E_2$-page. These are called \emph{exotic extensions}. For example, suppose that $2\zeta = \omega$ for $\zeta \in F_{\infty}^{s,t+s}$ and $\omega \in F_{\infty}^{s+\epsilon,t+s+\epsilon}$ where $\epsilon >1$. Then $\zeta$ will be detected by some $z \in E_{\infty}^{s,t+s}$ and $\omega$ will be detected by some $w \in E_{\infty}^{s+\epsilon,t+s+\epsilon}$ so that these two classes are too far apart to be connected by an $h_0$. These situations are illustrated in \fullref{fig:ASSexample}.
If there are no non-trivial differentials, we say that the spectral sequence \emph{collapses}. We say that it \emph{collapses at $E_r$} if $E_r^{*,*}=E_{\infty}^{*,*}$. Often, there will be no possibilities for non-trivial differentials as the target of any possible differential will be zero. In this case, we say that the spectral sequence \emph{collapses for degree reasons}, or \emph{is too sparse for differentials}. Finally, if there are no possibilities for exotic extensions because no two classes on the $E_{\infty}$-page are aligned in a way that would allow for one to exist, we again say that there are \emph{no exotic extensions for degree reasons} or that the spectral sequence is \emph{too sparse for exotic extensions}. These are the best of all possible scenarios since differentials are hard to compute and exotic extensions are hard to solve. We will be in this situation in all of the examples in \fullref{sec:examples}.
\begin{ex} A typical example of solving extensions is when a column consists of a single $h_0$-tower, say starting in $E_{\infty}^{0,t}$. Then \[\xymatrix{ 0 \ar[r] & E_{\infty}^{s,t+s} \ar[r] \ar@{=}[d] & F_{\infty}^{0,t}/F_{\infty}^{s+1,t+s+1} \ar[r] \ar@{=}[d] & F_{\infty}^{0,t}/F_{\infty}^{s,t+s} \ar[r]\ar@{=}[d] & 0 \\
0 \ar[r] & {{\mathbb{Z}}}/2 \ar[r] & {{\mathbb{Z}}}/2^{s+1} \ar[r] & {{\mathbb{Z}}}/2^{s} \ar[r] & 0 }\]
and $ F_{\infty}^{0,t}/F_{\infty}^{s,t+s} \cong {{\mathbb{Z}}}/2^s$ for all $s$. So
\[ (\pi_{t}X)^{\wedge}_{2} \cong \varprojlim_{s} F_{\infty}^{0,t}/F_{\infty}^{s,t+s} \cong \varprojlim_{s} {{\mathbb{Z}}}/2^s \cong {{\mathbb{Z}}}_2 \]
where ${{\mathbb{Z}}}_2$ are the $2$-adic integers defined in \fullref{ex:2adic}.
\end{ex}
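It may help to see what an element of this inverse limit looks like. An element of ${{\mathbb{Z}}}_2 \cong \varprojlim_s {{\mathbb{Z}}}/2^s$ is a sequence of residues compatible under the reduction maps. For example,
\[ -1 = (1, 3, 7, 15, 31, \ldots) , \]
since $-1 \equiv 2^s - 1 \pmod{2^s}$ and $2^{s+1} - 1$ reduces to $2^s - 1$ modulo $2^s$; in binary this is the $2$-adic expansion $\ldots 1111$.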
\begin{figure}
\caption{Some phenomena in an Adams spectral sequence. The left chart is an example of an $E_3$-page and the right is the corresponding $E_4$-page, which in this case would be the $E_{\infty}$-page as there are no possibilities for further differentials. (The class $e$ cannot support a $d_r$ differential to the $h_0$-tower since this would violate the $h_0$-linearity of the differentials.) }
\label{fig:ASSexample}
\end{figure}
\section{Examples from the classification problems}\label{sec:examples}
In this section, we work out examples to illustrate the methodology. First, some notation. In \cite{FH}, Freed and Hopkins give a uniform classification of fermionic symmetry groups (\cite[9.2]{FH}) in spacetime dimension $n$. There are two complex symmetry groups, denoted $H^c_n(s)$, and labelled by $s = 0, 1$ and eight real symmetry groups, denoted $H_n(s)$, and labelled by $s = 0, \pm 1, \pm 2, \pm 3, 4$. They also show \cite[2.12]{FH} that in each case there are maps $H_n (s) \hookrightarrow H_{n+1}(s)$ stabilizing the groups, so that it makes sense to speak of $H(s)$ and $H^c(s)$ (this is precisely analogous to how $O(n)$ stabilizes to $O$). The Madsen--Tillmann spectra (see \fullref{sec:thomspectra}) $MTH(s)$ are the cobordism theory of manifolds with stable tangential $H(s)$-structure. It is this cobordism theory that features in the Freed--Hopkins classification. This section will be devoted to computing the low-dimensional homotopy groups of these cobordism spectra.
In \cite{FH}, Freed and Hopkins produce the tables of \fullref{fig:tablesum}. The explanations in \cite{FH} are brief and some steps are left as exercises. In \cite{campbell}, one of the authors gave a detailed explanation of the computation for $MT\Pin^-$, $MT\Pin^{+}$, $MT\Pin^{\widetilde{c}-}$, $MT\Pin^{\widetilde{c}+}$ and $MTG^{+}$. For this reason, we choose to apply the methods to explain the computations for $MTG^0$, $MTG^{-}$, $MT\Spin^c$ and $MT\Pin^c$, although we start by reproducing the computation for $MTG^{+}$ as a warm-up.
\begin{figure}
\caption{The various real (top) and complex (bottom) symmetry groups studied in \cite{FH}.}
\label{fig:tablesum}
\end{figure}
\subsection{Reducing to computations over $\mathcal{A}_1$} Computations over $\mathcal{A}$ are in general difficult to perform without computer assistance. However, if one can reduce the computation to one over $\mathcal{A}_1 $, constructing minimal resolutions becomes rather straightforward and computations can be done by hand, at least in some range.
The key to making the shift from computations over $\mathcal{A}$ to computations over $\mathcal{A}_1 $ is the fact that the spectra $MTH$ defined above satisfy \[MTH \simeq \MSpin \wedge X(H) \] where $X(H)$ are the Thom spectra of certain familiar vector bundles \cite[10.7]{FH}. The values of $X(H)$ for the groups $H$ studied in \cite{FH} are given in \fullref{fig:tablesum}.
Since our cohomology is with field coefficients, namely ${{{\mathbb{Z}}}}/2$, the K\"unneth formula gives an isomorphism \[H^*(MTH) \cong H^*(\MSpin) \otimes_{{{\mathbb{Z}}}/2} H^*(X(H)).\] The key step in reducing computations to an $\mathcal{A}_1 $-module problem is the following theorem. \begin{thm}[Anderson, Brown, Peterson]\label{thm:mspinko} There is an isomorphism \[H^*(\MSpin) \cong \mathcal{A} \otimes_{\mathcal{A}_1 }({{\mathbb{Z}}}/2 \oplus M)\] where $M$ is a graded $\mathcal{A}_1 $-module which is zero in degrees $t<8$. \end{thm} As a consequence of \fullref{thm:mspinko} and \fullref{rem:range}, we have: \begin{cor} There is an isomorphism \[\mathrm{Ext}_{\mathcal{A}}^{s,t}(H^*(\MSpin \wedge X(H)), {{\mathbb{Z}}}/2 ) \cong \mathrm{Ext}_{\mathcal{A}_1 }^{s,t}(H^*(X(H)),{{\mathbb{Z}}}/2 ) \] if $t-s < 8$. \end{cor} So low-dimensional computations can be done over $\mathcal{A}_1$. We go through the following steps to compute $\pi_tMTH$ for $0\leq t\leq 4$: \begin{enumerate}[(1)] \item Compute $H^*(X(H))$ as a module over $\mathcal{A}_1$. See \fullref{sec:compA1}. \item Compute $\mathrm{Ext}_{\mathcal{A}_1}^{s,t}(H^*(X(H)), {{\mathbb{Z}}}/2 )$ in the range $t-s \leq 5$. See \fullref{sec:MinRes}, \fullref{sec:COR} and \fullref{sec:LES}. \item Compute the differentials and extensions. See \fullref{sec:usingASS}. In all of our examples, the spectral sequences are too sparse for differentials and exotic extensions and this step is trivial. \item Read off $\pi_*MTH$. \end{enumerate}
We will do this one example at a time.
\subsection{The case $s=3$} This is the case of $H=G^{+}= \Pin^{+} \times_{\{\pm 1\}} SU_2$ and in this case, \[MTG^{+} \simeq \MSpin \wedge \Sigma^{-3} MO_3 .\]
This example was stated in \cite{FH} and explicitly computed in \cite{campbell}. The cohomology $H^*(\Sigma^{-3}MO_3)$ is illustrated in \fullref{fig:HMO3}. Let $R_3$ be the $\mathcal{A}_1$-module depicted in \fullref{fig:R3}, so that $R_3$ sits in an exact sequence \[ 0 \to \Sigma Q \to R_3 \to M_{\infty} \to 0.\] From \fullref{fig:HMO3}, we have that \[ H^*(\Sigma^{-3}MO_3) \approx R_3 \oplus \Sigma^{2}\mathcal{A}_1 \oplus \Sigma^{4}\mathcal{A}_1 \oplus \Sigma^5 \mathcal{A}_1 \] where we will use $\approx$ to denote that there is an isomorphism in the range necessary for computations of homotopy groups in degrees less than or equal to $4$. We include the column $t-s=5$ to preclude the possibility of incoming differentials into the column $t-s=4$.
To compute the $E_2$-page of the spectral sequence \[\mathrm{Ext}_{\mathcal{A}}^{s,t}(H^*(MTG^{+}), {{\mathbb{Z}}}/2 ) \approx \mathrm{Ext}_{\mathcal{A}_1}^{s,t}(H^*(\Sigma^{-3}MO_3), {{\mathbb{Z}}}/2 ) \Rightarrow \pi_{t-s}MTG^{+} , \] we have to compute $\mathrm{Ext}_{\mathcal{A}_1}^{*,*}(R_3, {{\mathbb{Z}}}/2 )$. \fullref{fig:R3} and \fullref{fig:R3achart} illustrate this computation.
The Adams spectral sequence computing $\pi_*MTG^{+}$ is depicted in \fullref{fig:ASSMO3}. The spectral sequence is too sparse for differentials and exotic extensions, so the homotopy groups are \begin{align*}
\pi_0 MTG^+ &= {{\mathbb{Z}}}/2
\\ \pi_1 MTG^+ &= 0 \\
\pi_2 MTG^+ &= {{\mathbb{Z}}}/2 \\
\pi_3 MTG^+ &= 0 \\
\pi_4 MTG^+ &= {{\mathbb{Z}}}/2 \times {{\mathbb{Z}}}/4. \end{align*}
\begin{figure}
\caption{An $\mathcal{A}_1$-module we call $R_3$.}
\label{fig:R3}
\label{fig:R3achart}
\label{fig:ASSMO3}
\end{figure}
\subsection{The case $s=-3$} This is the case of $H=G^{-}= \Pin^{-} \times_{\{\pm 1\}} SU_2$ and in this case, \[MTG^{-} \simeq \MSpin \wedge \Sigma^{3} MTO_3. \]
In the degrees relevant for us, the $\mathcal{A}_1$-module structure of $H^\ast (\Sigma^3 MTO_3)$ is given in \fullref{fig:HMTO3}. We have that \[ H^\ast (\Sigma^3 MTO_3) \approx \mathcal{A}_1 \oplus \Sigma^2 R_0 \oplus \Sigma^4 \mathcal{A}_1 \oplus \Sigma^4 \mathcal{A}_1 \oplus \Sigma^{5}R_5 \] where $R_0$ is the module depicted in \fullref{fig:R0} and $R_5$ the module depicted in \fullref{fig:R5}. The module $R_5$ sits in a short exact sequence of $\mathcal{A}_1$-modules (pictured in \fullref{fig:R5}): \[ 0 \to J \to R_5 \to \Sigma M_\infty \to 0 . \] \fullref{fig:R0achart} gives $\mathrm{Ext}^{\ast, \ast}_{\mathcal{A}_1}(R_0, {{\mathbb{Z}}}/2 )$ and \fullref{fig:R5achart} gives $\mathrm{Ext}_{\mathcal{A}_1}^{\ast, \ast}(R_5, {{\mathbb{Z}}}/2 )$. The Adams chart for $\mathrm{Ext}_{\mathcal{A}_1}^{s,t}(H^*(\Sigma^{3}MTO_3), {{\mathbb{Z}}}/2 )$ is depicted in \fullref{fig:ASSMTO3} in the range of interest. The spectral sequence is too sparse for differentials and exotic extensions and the homotopy groups of $MTG^{-}$ are
\begin{align*}
\pi_0 MTG^{-} &= {{\mathbb{Z}}}/2 \\
\pi_1 MTG^{-} &= 0 \\
\pi_2 MTG^{-} &= {{\mathbb{Z}}}/2 \\
\pi_3 MTG^{-} &= 0 \\
\pi_4 MTG^{-} &= ({{\mathbb{Z}}}/2)^3.
\end{align*}
\begin{figure}
\caption{An $\mathcal{A}_1$-module we call $R_5$.}
\label{fig:R5}
\label{fig:R5achart}
\label{fig:ASSMTO3}
\end{figure}
\subsection{The case $s=4$} This is the case of $H=G^{0}= \Spin \times_{\{\pm 1\}} SU_2$ and in this case, \[MTG^{0} \simeq \MSpin \wedge \Sigma^{-3} MSO_3.\] The $\mathcal{A}_1 $-structure of $H^\ast (MSO_3)$ is depicted in \fullref{fig:HMSO3}, and \[ H^\ast (\Sigma^{-3} MSO_3) \approx Q \oplus \Sigma^4 R_2 . \] The Adams charts for the modules $Q$ and $R_2$ are depicted in \fullref{fig:R2JQachart}, and the Adams chart for $\mathrm{Ext}_{\mathcal{A}_1}^{s,t}(H^\ast (\Sigma^{-3} MSO_3), {{\mathbb{Z}}}/2 )$ is in \fullref{fig:ASSMSO3}. The spectral sequence is too sparse for differentials and exotic extensions and the homotopy groups of $MTG^0$ are
\begin{align*}
\pi_0 MTG^0 &= {{\mathbb{Z}}}\\
\pi_1 MTG^0 &= 0\\
\pi_2 MTG^0 &= 0\\
\pi_3 MTG^0 &= 0\\
\pi_4 MTG^0 &= {{\mathbb{Z}}}^2 .
\end{align*}
\begin{figure}\label{fig:ASSMSO3}
\end{figure}
\subsection{The complex case $s=0$} This is the case of $H^c=\Spin^c$ and in this case, \[MTH^c(0) \simeq \MSpin \wedge \Sigma^{-2} MU_1 . \] The structure of $H^\ast (MU_1)$ as an $\mathcal{A}_1$-module is depicted in \fullref{fig:MO1MU1}. It is given by shifted sums of $\mathcal{A}_1 /\!\!/ \mathcal{E}_1$, so \[ H^\ast ( \Sigma^{-2} MU_1) \approx \mathcal{A}_1 /\!\!/ \mathcal{E}_1 \oplus \Sigma^4 \mathcal{A}_1 /\!\!/ \mathcal{E}_1 \] In \fullref{ex:coneeta}, we calculated that \[ \mathrm{Ext}_{\mathcal{A}_1}^{*,*} (\mathcal{A}_1 /\!\!/ \mathcal{E}_1, {{\mathbb{Z}}}/2 ) \cong \mathrm{Ext}_{\mathcal{E}_1}^{*,*} ({{\mathbb{Z}}}/2 , {{\mathbb{Z}}}/2 ) \cong {{\mathbb{Z}}}/2 [h_0, v_1] \] for $v_1$ in degree $(s,t)=(1, 3)$. The $E_2$-page of the Adams spectral sequence for $\pi_\ast MTH^c(0)$ is depicted in \fullref{fig:ASSMU1}. The spectral sequence is too sparse for differentials and exotic extensions and the homotopy groups of $MTH^c(0)$ are \begin{align*}
\pi_0 MTH^c(0) &= {{\mathbb{Z}}} \\
\pi_1 MTH^c(0) &= 0 \\
\pi_2 MTH^c (0) &= {{\mathbb{Z}}} \\
\pi_3 MTH^c (0) &= 0\\
\pi_4 MTH^c (0) &= {{\mathbb{Z}}}^2 . \end{align*}
\begin{figure}\label{fig:ASSMU1}
\end{figure}
\subsection{The complex case $s=1$} This is the case of $H^c=\Pin^c$ and in this case, \[MTH^c(1) \simeq \MSpin \wedge \Sigma^{-3} MU_1\wedge MO_1 . \] The structure of $H^*(MU_1 \wedge MO_1)$ is depicted in \fullref{fig:MU1smshMO1}. We have \[ H^*(\Sigma^{-3}MU_1 \wedge MO_1) \approx R_6 \oplus \Sigma^4 R_6 \] for the module $R_6$ depicted in \fullref{fig:R6}. In order to compute the $E_2$-page of the Adams spectral sequence for $\pi_\ast MTH^c(1)$ we need to compute $\mathrm{Ext}^{\ast, \ast}_{\mathcal{A}_1} (R_6, {{\mathbb{Z}}}/2 )$. The module $R_6$ sits in a short exact sequence of $\mathcal{A}_1$-modules (pictured in \fullref{fig:R6}): \[ 0 \to \Sigma R_1 \to R_6 \to M_\infty \to 0 \] and $\mathrm{Ext}^{\ast, \ast}_{\mathcal{A}_1} (R_6, {{\mathbb{Z}}}/2 )$ is computed in \fullref{fig:R6achart}. The $E_2$-page of the Adams spectral sequence for $\pi_\ast MTH^c(1)$ is depicted in \fullref{fig:ASSMU1MO1}. The spectral sequence is too sparse for differentials and exotic extensions and the homotopy groups of $MTH^c(1)$ are \begin{align*}
\pi_0 MTH^c(1) &= {{\mathbb{Z}}}/2 \\
\pi_1 MTH^c(1) &= 0 \\
\pi_2 MTH^c(1) &= {{\mathbb{Z}}}/4 \\
\pi_3 MTH^c(1) &= 0 \\
\pi_4 MTH^c(1) &= {{\mathbb{Z}}}/2 \times {{\mathbb{Z}}}/8 .
\end{align*}
\begin{figure}
\caption{The exact sequence for $R_6$.}
\label{fig:R6}
\label{fig:R6achart}
\label{fig:ASSMU1MO1}
\end{figure}
\end{document}
\begin{document}
\title{Noncausal FIR Zames-Falb Multiplier Search for Exponential Convergence Rate} \thispagestyle{empty} \pagestyle{empty}
\begin{abstract}
In the existing literature, there are two approaches to estimate tighter bounds on the exponential convergence rate of stable Lur'e systems. On the one hand, the classical integral quadratic constraint (IQC) framework can be applied under a loop transformation, so that the stability of the new loop implies the convergence of the original loop. On the other hand, it is possible to modify the IQC framework into the so-called $\rho$-IQC framework, in such a way that the convergence rate is obtained directly over the original loop.
In this technical note, we extend the literature results from the search for a causal finite impulse response (FIR) Zames-Falb multiplier to the noncausal case. We show that the multipliers obtained by the two approaches are equivalent under a change of variable. However, the factorisation of the Zames-Falb $\rho$-IQC is restricted compared to that of the Zames-Falb IQC, so a unified factorisation is proposed.
Finally, numerical examples illustrate that noncausal multipliers lead to less-conservative results.
\end{abstract}
\begin{IEEEkeywords} Exponential convergence rate; Zames-Falb multipliers; integral quadratic constraint. \end{IEEEkeywords}
\section{Introduction}
A classical topic in control theory is the Lur'e problem\cite{Lurie:1944}, which concerns the stability of a feedback interconnection between a linear time-invariant (LTI) system $G$ and any nonlinearity or uncertainty $\Delta$ within some classes (see Fig.~\ref{fig:lure}\footnote{The Lur'e system or Lur'e problem is originally defined for unforced systems, but it is relaxed to forced systems \cite{Altshuller:2013}.}). The stability of Lur'e systems is mainly studied with two different techniques: Lyapunov stability and input-output stability. Lyapunov stability is based on internal state variables of the unforced system, while input-output stability studies the input-output mapping of the forced system. These two methods are closely related, and sometimes equivalent \cite{Vidyasagar:2002, Hassan:2002}.
\begin{figure}
\caption{The Lur'e system}
\label{fig:lure}
\end{figure}
The input-output approach splits the stability problem into two steps. Firstly, we need to find a class of LTI systems, referred to as multipliers, that preserves some properties of the set of nonlinearities $\Delta$. Secondly, we search within the developed class for a suitable multiplier for the LTI system $G$. As a result, the stability of the nonlinear system is translated into an LTI design problem. When the nonlinearity $\Delta$ is slope-restricted, the class of Zames-Falb multipliers $\mathcal{M}$, defined in both continuous-time~\cite{Zames:1968} and discrete-time~\cite{Jan:1971} (see \cite{Joaquin:2016} for a tutorial), is the widest known class of LTI multipliers preserving the positivity of the nonlinearity. Note that some other multipliers are phase equivalent to corresponding Zames-Falb multipliers \cite{Joaquin:2013}. The absolute stability problem is then reduced to a search for $M\in\mathcal{M}$ such that \begin{equation}\label{eq:re_condition}
Re\{M(z)\left(1+KG(z)\right)\}<0 \quad \forall |z|=1, \end{equation} where $K$ is the maximum slope of the nonlinearity. This condition can be expressed equivalently in the IQC framework \cite{Alexandre:1997}, where the frequency domain condition can be converted to computable linear matrix inequalities (LMIs) by the Kalman-Yakubovich-Popov (KYP) lemma \cite{Rantzer:1996}. In discrete-time, the search over FIR Zames-Falb multipliers proposed in~\cite{Shuai:2014,Joaquin:2018} provides the least conservative absolute stability results in the literature.
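As a sanity check, the frequency-domain condition (\ref{eq:re_condition}) can be evaluated numerically by sampling the unit circle. The sketch below uses a hypothetical first-order plant $G(z)=0.5/(z-0.5)$, the trivial multiplier $M(z)=1$ and slope $K=1$; none of these values come from this note, and the sketch is not part of the proposed multiplier search.

```python
import numpy as np

def zames_falb_condition(M, G, K, n=400):
    """Sample Re{M(z)(1 + K G(z))} on |z| = 1 and return its maximum.

    The frequency-domain condition of the note requires this maximum to
    be negative.  M and G are callables mapping complex z to complex.
    """
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    z = np.exp(1j * theta)
    vals = np.real(M(z) * (1.0 + K * G(z)))
    return vals.max()

# Hypothetical toy data: first-order plant and the trivial multiplier M = 1.
G = lambda z: 0.5 / (z - 0.5)
M = lambda z: np.ones_like(z)
max_re = zames_falb_condition(M, G, K=1.0)
```

For this toy plant the sampled maximum is attained at $z=1$ and equals $2$, so the condition fails with the trivial multiplier; a genuine search would tune the FIR coefficients of $M$.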
Recently, the analysis of the convergence rates of the Lur'e system has attracted much attention. First-order optimisation algorithms, such as the gradient descent method and Nesterov's method, can be written as Lur'e systems \cite{Lessard:2016}. In particular, under first-order optimisation algorithms the iterates for a strongly convex function converge exponentially, with some rate $\rho$ ($0<\rho<1$), to the optimal point, which can be considered as the equilibrium point of the corresponding Lur'e system. Less-conservative results have been obtained in optimisation \cite{Scoy:2018, Cyrus:2018, Mahyar:2017} and control \cite{Zachary:2017}.
\begin{figure}
\caption{The scaled system in \cite{Bin:2016}}
\label{fig:lure_scaled}
\end{figure} The convergence analysis of Lur'e systems has been presented in two different but equivalent frameworks: \begin{itemize}
\item On the one hand, in \cite{Bin:2016}, the (time domain) IQCs are constructed for the scaled uncertainty $\Delta_{\rho}$, and the corresponding exponential stability condition is to search for a multiplier that belongs to a suitable subset of $\mathcal{M}$, such that
\begin{equation}\label{eq:re_p_condition1}
Re\{M(z)(1+KG_{\rho}(z))\}<0 \quad \forall |z|=1,
\end{equation}
where the multiplier has the same form as the original FIR Zames-Falb multipliers, while the $\ell_1$-norm condition is penalised with $\rho$.
\item On the other hand, in \cite{Boczar:2015, Boczar:2017, Freeman:2018}, the (frequency domain) $\rho$-IQCs are constructed for the original uncertainty $\Delta$, and the exponential stability condition is to search for a multiplier that also belongs to a subset of $\mathcal{M}$, such that
\begin{equation}\label{eq:re_p_condition}
Re\{M(\rho z)(1+KG(\rho z))\}<0 \quad \forall |z|=1,
\end{equation}
where the multiplier is constructed from the original FIR Zames-Falb multiplier $M(z)$ by replacing $z$ by $\rho z$, and the $\ell_1$-norm condition is also penalised with $\rho$. \end{itemize}
In both approaches, sound analysis of the causal FIR Zames-Falb multipliers is provided in the literature above. In contrast, the noncausal FIR Zames-Falb multiplier of the form $M(\rho z)$ is studied in \cite{Freeman:2018}, where its modified $\ell_1$-norm condition is proved, but the details needed to obtain the stability LMI are not given.
In this technical note, we are concerned with the technique of applying the noncausal FIR Zames-Falb multipliers of \cite{Shuai:2014,Joaquin:2018} to estimate exponential convergence rates, with a particular focus on the factorisations. The main contribution is the development of suitable factorisations for both approaches when noncausal Zames-Falb multipliers are used. In Section \ref{sec:factorisations_iqc}, the time-domain Zames-Falb IQC with causal multipliers in \cite{Bin:2016} for continuous-time systems is extended to the frequency domain with noncausal multipliers for discrete-time systems. Meanwhile, in Section \ref{sec:factorisations_piqc}, we provide a factorisation by lifting \cite{Hosoe:2013} as a unified structure for causal, anticausal and noncausal FIR Zames-Falb multipliers in both the IQC and $\rho$-IQC frameworks. Then, the validity of the different factorisations is discussed, which completes the results in \cite{Boczar:2015,Boczar:2017,Freeman:2018}. Furthermore, we show that the multipliers $M(z)$ in (\ref{eq:re_p_condition1}) and $M(\rho z)$ in (\ref{eq:re_p_condition}) are equivalent under a change of variable, and lead to similar results in the numerical examples in Section \ref{sec:results}.
\section{Notations and preliminary results}\label{sec:notations} Some of the notations and definitions are summarised from \cite{Lessard:2016, Boczar:2015, Boczar:2017, Bin:2016}, which are repeated here for completeness.
\subsection{Notations and Lur'e systems}
Let $\mathbb{Z}$ and $\mathbb{Z}^+$ be the sets of integers and nonnegative integers, respectively. The notations $\mathbb{R}$ and $\mathbb{R}^+$ are defined in the same way for real numbers, and $\mathbb{C}$ denotes the set of complex numbers. Let $\ell$ be the space of all real-valued sequences $h: \mathbb{Z}^+\mapsto\mathbb{R}$, and let $\ell_2$ be the space of real-valued square-summable sequences $h:\mathbb{Z}^+\mapsto\mathbb{R}$. For any $\rho\in[0,1]$, we will say that $h\in\ell_2^\rho$ if the sequence $\{\rho^{-k}h_k\}_{k=0}^\infty$ belongs to $\ell_2$. Finally, for absolutely summable sequences $h: \mathbb{Z}\mapsto\mathbb{R}$, we define $\|h\|_1=\sum_{k=-\infty}^{\infty}|h_k|$.
Let $\mathbf{RH}_{\infty}$ and $\mathbf{RL}_{\infty}$ be the spaces of proper real rational transfer functions $G$: $G\in \mathbf{RH}_{\infty}$ has all its poles inside the open unit disk of the complex plane, while $G\in \mathbf{RL}_{\infty}$ has no poles on the unit circle. With a minimal state-space realisation, the transfer function is $G(z) = C(zI -A)^{-1}B+D$, or $G \sim $$\begin{bmatrix} A\; B;\; C\; D \end{bmatrix}$ in short.
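The shorthand realisation can be unpacked numerically: the sketch below evaluates $G(z)=C(zI-A)^{-1}B+D$ directly. The first-order realisation used here is an illustrative choice, not taken from the note; its transfer function $1/(z-0.5)$ is known in closed form, and its single pole $0.5$ lies inside the open unit disk, so $G\in\mathbf{RH}_\infty$.

```python
import numpy as np

def tf_eval(A, B, C, D, z):
    """Evaluate G(z) = C (zI - A)^{-1} B + D for a state-space realisation."""
    n = A.shape[0]
    return C @ np.linalg.solve(z * np.eye(n) - A, B) + D

# Hypothetical example: A = [[0.5]], B = [[1]], C = [[1]], D = [[0]]
# gives G(z) = 1/(z - 0.5).
A = np.array([[0.5]]); B = np.array([[1.0]])
C = np.array([[1.0]]); D = np.array([[0.0]])
z0 = np.exp(1j * 0.3)          # a point on the unit circle
val = tf_eval(A, B, C, D, z0)[0, 0]
```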
The expression $G^{*}(z)$ denotes the complex conjugate transpose of $G(z)$ at $|z|=1$, i.e. $G^{*}(z)={G}^T\left(\frac{1}{z}\right)$, where the superscript $T$ indicates the transpose. Moreover, if a parameter $\rho$ is involved in the variable, the complex conjugate transpose can be expressed as $G^*(\rho, z)=G^T\left(\rho, \frac{1}{z}\right)$.
A nonlinear operator $\Delta: \ell(\mathbb{Z}^+) \mapsto \ell(\mathbb{Z}^+)$ is said to be memoryless if there exists a map $N: \mathbb{R} \to \mathbb{R} $ such that $(\Delta\upsilon)_k=N(\upsilon_k)$, $\forall k \in \mathbb{Z}^+$. Assume that $\Delta(0)=0$. The memoryless uncertainty $\Delta$ is said to be (sector) bounded, denoted by $\Delta \in [\underline{k},\overline{k}]$ ($0\le \underline{k}<\overline{k}<\infty$), if $\underline{k} x \le N(x) \le \overline{k} x, \forall x \in \mathbb{R}$. The uncertainty $\Delta$ is said to be slope-restricted, denoted by $\Delta\in S[\underline{k},\overline{k}]$, if $\underline{k}(x_1-x_2) \le N(x_1)-N(x_2) \le \overline{k}(x_1-x_2)$ for all $x_1, x_2 \in \mathbb{R}$ with $x_1 \ne x_2$. A slope-restricted uncertainty is also sector bounded, but the converse does not hold. Finally, the uncertainty $\Delta$ is said to be odd if $\Delta(-x)=-\Delta(x)$, $\forall x\in \mathbb{R}$.
Consider the Lur'e system in Figure \ref{fig:lure}. It is expressed as \begin{equation*} v=f+Gw, \quad w=g+\Delta v. \end{equation*}
The feedback interconnection is well-posed if the map $(v,w) \mapsto (g,f)$ has a causal inverse on $\ell$.
\begin{definition} The feedback interconnection in Fig. \ref{fig:lure} is $\ell_2$-stable if it is well-posed, and the signals $(v,w)\in\ell_2$ for any $(g,f)\in\ell_2$. \end{definition}
\begin{definition} The feedback interconnection in Fig. \ref{fig:lure} is globally exponentially stable with convergence rate $\rho$ if there exists some $\rho\in (0,1)$ and $c>0$ such that when $g=0$ and $f=0$, \begin{equation}\label{eq:exponential_stable}
\|x_k\|\le c\rho^k \|x_0\| \quad \forall k\ge 0,\;\forall x_0\in \mathbb{R}^n . \end{equation} \end{definition}
Henceforth, the infimum convergence rate of the feedback interconnection in Fig. \ref{fig:lure} is referred to as $\rho_{\{G,\Delta\}}$.
\begin{remark}\label{rm:exponential_convergence} Condition (\ref{eq:exponential_stable}) is equivalent to the fact that the state $x_k$ of $G$ converges to zero exponentially with the rate $\rho$, i.e. $x_k \rho^{-k}\to 0$ as $k\to \infty$. \end{remark}
As mentioned in the introduction, $\ell_2$-stability is an input-output relation, while exponential stability is an internal relation. Therefore, it is not trivial to restate exponential stability in an input-output manner.
\begin{definition}\label{df:l2p_stability} The feedback interconnection in Fig. \ref{fig:lure} is $\ell_2^{\rho}$-stable if it is well-posed, and the signals $(v,w)\in\ell_2^{\rho}$ for any $(g,f)\in\ell_2^{\rho}$. \end{definition}
\begin{theorem}\label{th:stability_relations} For the Lur'e system in Fig. \ref{fig:lure}, assume $G$ is controllable and observable, and $\Delta$ is memoryless and slope-restricted. The unforced system is globally exponentially stable with rate $\rho$ if and only if the forced system is $\ell_2^{\rho}$-stable. \end{theorem}
\begin{IEEEproof} The sufficiency can be proved in a similar way to Proposition 5 in \cite{Boczar:2015,Boczar:2017}, and an outline of the proof of necessity is given in Appendix \ref{appendix_proof}. \end{IEEEproof}
\subsection{Kalman conjecture for convergence analysis}\label{sc:kalman_conjecture} The Kalman conjecture, stated below, provides a necessary and sufficient condition for stability whenever it holds.
\begin{definition}[Nyquist value, $K_N$] \label{df:Nyquist}
The Nyquist value of a stable transfer function $G(z)$ is
\begin{equation*}\label{Nyquist}
K_N=\sup_{K}\{ K>0: (1-\tau KG(z))^{-1} \ \textrm{is stable} \ \forall \tau\in[0,1] \}.
\end{equation*} \end{definition}
\begin{conjecture}[Kalman conjecture~\cite{Kalman:1957}] Let $\Delta$ be memoryless, and $\Delta \in S[0, K]$. The feedback interconnection between $G$ and $\Delta$ is asymptotically stable if and only if $K < K_N$. \end{conjecture}
We can translate the above definition and conjecture into the convergence analysis. Let us define the absolute convergence rate of the class of systems defined by the Lur'e system as follows \begin{equation}\label{eq:rho_star} \rho^*_{\{G,K\}}=\sup_{\Delta\in S[0, K]}\{\rho_{\{G,\Delta\}}\} \end{equation} where $K<K_N$. A lower bound of the $\rho^*_{\{G,K\}}$ is given by \begin{equation}\label{eq:rho_linear}
\underline{\rho^*_{\{G,K\}}}=\max_{\tau\in[0,1]}\left\{\left|\text{eig}\left(\frac{G}{1-\tau KG}\right)\right|\right\}. \end{equation} In some instances, this lower bound is referred to as the theoretical value. The Kalman conjecture can be restated using the convergence rates defined in (\ref{eq:rho_star}) and (\ref{eq:rho_linear}) as follows. \begin{conjecture}[Kalman conjecture for convergence analysis]\label{cj:kc_convergence}
For any stable $G$, let $\Delta \in S[0, K]$ with $K<K_N$, then \begin{equation}\label{eq:1} \rho^*_{\{G,K\}}=\underline{\rho^*_{\{G,K\}}}. \end{equation} \end{conjecture}
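The lower bound (\ref{eq:rho_linear}) can be sketched by gridding $\tau\in[0,1]$ and taking the largest closed-loop spectral radius. Assuming $D=0$ and the positive-feedback convention of Fig. \ref{fig:lure}, the state matrix of $G/(1-\tau KG)$ is $A+\tau K BC$. The scalar data below are hypothetical, chosen only so that the answer is easy to check by hand.

```python
import numpy as np

def rho_lower_bound(A, B, C, K, n_tau=101):
    """Lower bound on the absolute convergence rate via the linear loops.

    For D = 0, the closed loop G/(1 - tau*K*G) has state matrix
    A + tau*K*B*C; the bound is the largest spectral radius over a grid
    of tau in [0, 1].
    """
    taus = np.linspace(0.0, 1.0, n_tau)
    radii = [np.abs(np.linalg.eigvals(A + t * K * B @ C)).max()
             for t in taus]
    return max(radii)

# Hypothetical scalar example: G(z) = 0.2/(z - 0.5) with K = 1.
# The closed-loop pole is 0.5 + 0.2*tau, largest at tau = 1, giving 0.7.
A = np.array([[0.5]]); B = np.array([[0.2]]); C = np.array([[1.0]])
bound = rho_lower_bound(A, B, C, K=1.0)
```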
\subsection{Estimation of upper bound of $\rho^*_{\{G,K\}}$}\label{sc:estimation} In recent years, Zames-Falb multipliers have been used to estimate an upper bound of $\rho^*_{\{G,K\}}$, denoted by $\overline{\rho^*_{\{G,K\}}}$. As mentioned in the introduction, there are two approaches based on the relation below.
\begin{theorem}[\cite{Boczar:2015,Bin:2016,Boczar:2017}]\label{th:stability_original_sclaed}
The system in Fig. \ref{fig:lure} is well-posed if and only if the scaled system in Fig. \ref{fig:lure_scaled} is well-posed. Furthermore, the system in Fig. \ref{fig:lure} is $\ell_2^{\rho}$-stable if and only if the scaled system in Fig. \ref{fig:lure_scaled} is $\ell_2$-stable. \end{theorem}
The two approaches to estimate $\overline{\rho^*_{\{G,K\}}}$ are reviewed in the following parts.
\subsubsection{Analysis in IQC framework}
In this approach, $\ell_2^{\rho}$ stability of the system in Fig. \ref{fig:lure} is studied by $\ell_2$ stability of the scaled system in Fig. \ref{fig:lure_scaled}, where an IQC is constructed for the scaled uncertainty $\Delta_{\rho}$ at first.
\begin{definition}[IQC \cite{Alexandre:1997}]\label{df:IQC}
Let $\Pi(z)$ be a Hermitian (self-adjoint) bounded measurable operator. Then, a bounded and causal operator $\Delta_{\rho}: {\ell} \mapsto {\ell}$ is said to satisfy the IQC defined by $\Pi$ if, for all $v\in {\ell}_2$,
\begin{gather}\label{eq:iqc}
\int_{|z|=1}
\begin{bmatrix}
\hat{v}(z) \\ \widehat{\Delta_{\rho}v}(z)
\end{bmatrix}^*
\Pi(z)
\begin{bmatrix}
\hat{v}(z) \\ \widehat{\Delta_{\rho}v}(z)
\end{bmatrix}
dz \ge 0,
\end{gather} where $\hat{v}$ and $\widehat{\Delta_{\rho}v}$ denote the z-transform of $v$ and $\Delta_{\rho}v$ respectively. \end{definition}
\begin{theorem}[\cite{Alexandre:1997}] \label{th:IQC_discrete}
For the system in Fig. \ref{fig:lure_scaled}, let $G_{\rho}(z) \equiv G(\rho z) \in \mathbf{RH}_{\infty}$, and $\Delta_{\rho}$ be a causal bounded operator. Assume that $\forall \tau \in [0,1]$,
\begin{enumerate}
\item the feedback interconnection between $G_{\rho}$ and $\tau \Delta_{\rho}$ is well-posed;
\item the operator $\tau \Delta_{\rho}$ satisfies the IQC defined by $\Pi$;
\item there exists $\epsilon>0$, such that
\begin{gather}\label{eq:iqc_stability}
\begin{bmatrix}
G_{\rho}(z) \\
I \\
\end{bmatrix}^*
\Pi(z)
\begin{bmatrix}
G_{\rho}(z) \\
I \\
\end{bmatrix}
\le -\epsilon I, \quad \forall |z|=1.
\end{gather}
\end{enumerate}
Then, the system in Fig.~\ref{fig:lure_scaled} is $\ell_2$-stable, and thus the system in Fig.~\ref{fig:lure} is $\ell_2^{\rho}$-stable. \end{theorem}
In order to make the frequency domain inequality (FDI) (\ref{eq:iqc_stability}) computable, the Kalman-Yakubovich-Popov (KYP) lemma should be applied.
\begin{lemma}[KYP lemma \cite{Rantzer:1996}]\label{Le:kyp}
Given $A \in \mathbb{R}^{n \times n}$, $B \in \mathbb{R}^{n \times m}$, $K_p=K_p^{T} \in \mathbb{R}^{(n+m) \times (n+m)}$, with $\det(z I -A)\ne 0$ for all $|z|=1$ and the pair $(A, B)$ controllable, the following statements are equivalent:
\begin{enumerate}
\item For all $|z|=1$,
\begin{gather*}
\begin{bmatrix}
(zI -A)^{-1}B \\
I \\
\end{bmatrix}^*
K_p
\begin{bmatrix}
(zI -A)^{-1}B \\
I \\
\end{bmatrix}
\leq 0.
\end{gather*}
\item There is a symmetric matrix $P \in \mathbb{R}^{n \times n}$ such that
\begin{gather*}
\begin{bmatrix}
A^{T}PA-P & A^TPB \\
B^{T}PA & B^TPB \\
\end{bmatrix}+K_p
\leq 0.
\end{gather*}
\end{enumerate} \end{lemma}
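The equivalence in the KYP lemma rests on the identity $A\xi+B=z\xi$ for $\xi=(zI-A)^{-1}B$: on $|z|=1$ the quadratic form of the first block matrix in statement (2) evaluated at $[\xi;\,I]$ equals $(|z|^2-1)\,\xi^*P\xi=0$, so adding $K_p$ and imposing the LMI forces the frequency-domain inequality in statement (1). The sketch below checks this identity numerically; the matrices are arbitrary illustrative choices, not taken from the note.

```python
import numpy as np

# Arbitrary illustrative data: stable A, input matrix B, symmetric P.
A = np.array([[0.3, 0.1],
              [0.0, -0.4]])
B = np.array([[1.0], [0.5]])
P = np.array([[2.0, 0.3],
              [0.3, 1.0]])   # any symmetric P works for the identity

z = np.exp(1j * 0.7)                       # a point on the unit circle
xi = np.linalg.solve(z * np.eye(2) - A, B) # xi = (zI - A)^{-1} B
top = np.vstack([xi, np.eye(1)])           # the vector [xi; I]
Mp = np.block([[A.T @ P @ A - P, A.T @ P @ B],
               [B.T @ P @ A,     B.T @ P @ B]])
# Should vanish: (A xi + B)^* P (A xi + B) - xi^* P xi = (|z|^2 - 1) xi^* P xi
form = (top.conj().T @ Mp @ top)[0, 0]
```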
Generally, the IQC multiplier $\Pi$ is dynamic, and can be factorised as below. \begin{definition}[\cite{Scherer:2011}]
Any $\Pi(z)\in\mathbf{RL}_\infty$ has nonunique factorisations $(\Psi,K_p)$ in the form
\begin{equation}\label{eq:factorisation}
\Pi(z)=\Psi^*(z)K_p\Psi(z),
\end{equation}
where $K_p=K_p^T$ is constant, and $\Psi$ is a stable LTI system with the state-space representation
\begin{gather}\label{eq:psi_state_space}
\Psi(z) \sim \begin{bmatrix} A_\Psi & B_{\Psi_1} & B_{\Psi_2}\\ C_\Psi & D_{\Psi_1} & D_{\Psi_2} \end{bmatrix}.
\end{gather} \end{definition}
Notice that when $G(z) \sim $ $\begin{bmatrix} A\; B;\; C\; D \end{bmatrix}$, $G_{\rho}(z)\equiv $ $G(\rho z) $ $\sim \begin{bmatrix} \rho^{-1}A\;\; \rho^{-1}B;\;\; C\;\; D \end{bmatrix}$. Substituting (\ref{eq:factorisation}) into (\ref{eq:iqc_stability}) and applying the KYP lemma yields the well-known stability LMI for the scaled system.
\begin{corollary}
The FDI (\ref{eq:iqc_stability}) is equivalent to the existence of $P=P^T$ such that
\begin{equation}\label{eq:iqc_lmi}
\begin{bmatrix} \hat{A}^TP\hat{A}-P &\hat{A}^TP\hat{B} \\ \hat{B}^TP\hat{A} & \hat{B}^TP\hat{B}\end{bmatrix}+\begin{bmatrix} \hat{C}^T \\\hat{D}^T\end{bmatrix} K_p \begin{bmatrix} \hat{C} &\hat{D}\end{bmatrix}<0,
\end{equation}
where $\Psi\begin{bmatrix} G_{\rho} \\I\end{bmatrix} \sim \begin{bmatrix} \hat{A} & \hat{B}\\ \hat{C} & \hat{D} \end{bmatrix}$, and
$\hat{A}=\begin{bmatrix}\rho^{-1}A & 0 \\ B_{\Psi_1}C & A_{\Psi} \end{bmatrix}$,
$\hat{B}=\begin{bmatrix}\rho^{-1}B \\ B_{\Psi_2}+B_{\Psi_1}D \end{bmatrix}$,
$\hat{C}=\begin{bmatrix}D_{\Psi_1}C & C_{\Psi}\end{bmatrix}$, $ \hat{D}=D_{\Psi_2}+D_{\Psi_1}D$. \end{corollary}
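The hat matrices of the corollary can be assembled and spot-checked against the direct frequency response $\Psi(z)\begin{bmatrix} G(\rho z) & 1\end{bmatrix}^T$. All numerical values below are hypothetical scalar data chosen only for illustration.

```python
import numpy as np

def series_realisation(A, B, C, D, APsi, BPsi1, BPsi2, CPsi, DPsi1, DPsi2, rho):
    """State-space of Psi(z) [G_rho(z); I] as in the corollary."""
    Ahat = np.block([[A / rho, np.zeros((A.shape[0], APsi.shape[0]))],
                     [BPsi1 @ C, APsi]])
    Bhat = np.vstack([B / rho, BPsi2 + BPsi1 @ D])
    Chat = np.hstack([DPsi1 @ C, CPsi])
    Dhat = DPsi2 + DPsi1 @ D
    return Ahat, Bhat, Chat, Dhat

def fr(A, B, C, D, z):
    """Frequency response C (zI - A)^{-1} B + D."""
    return C @ np.linalg.solve(z * np.eye(A.shape[0]) - A, B) + D

# Hypothetical scalar data for G and Psi.
rho = 0.9
A = np.array([[0.5]]); B = np.array([[1.0]])
C = np.array([[1.0]]); D = np.array([[0.0]])
APsi = np.array([[0.2]]); BPsi1 = np.array([[1.0]]); BPsi2 = np.array([[0.5]])
CPsi = np.array([[1.0]]); DPsi1 = np.array([[0.3]]); DPsi2 = np.array([[0.7]])

Ahat, Bhat, Chat, Dhat = series_realisation(
    A, B, C, D, APsi, BPsi1, BPsi2, CPsi, DPsi1, DPsi2, rho)
z0 = np.exp(1j * 0.4)
lhs = fr(Ahat, Bhat, Chat, Dhat, z0)[0, 0]

# Direct computation: Psi(z0) applied to [G(rho z0); 1].
Grho = fr(A, B, C, D, rho * z0)[0, 0]
Psi = fr(APsi, np.hstack([BPsi1, BPsi2]), CPsi, np.hstack([DPsi1, DPsi2]), z0)
rhs = (Psi @ np.array([[Grho], [1.0]]))[0, 0]
```

The two frequency responses agree, confirming that the hat realisation represents the cascade $\Psi\begin{bmatrix} G_{\rho} \\ I\end{bmatrix}$.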
\subsubsection{Analysis in $\rho$-IQC framework}
In this approach, $\ell_2^{\rho}$ stability of the system in Fig. \ref{fig:lure} is studied by scaling signals, where a $\rho$-IQC is defined for the original uncertainty $\Delta$ at first.
\begin{definition}[$\rho$-IQC \cite{Boczar:2015,Boczar:2017}] \label{df:rho_IQC}
Let $\Pi(\rho, z)$ be a Hermitian (self-adjoint, i.e. $\Pi(\rho, z)=\Pi^*(\rho, z)=\Pi^T(\rho, z^{-1})$) bounded measurable operator. Then, a bounded and causal operator $\Delta: {\ell} \mapsto {\ell}$ is said to satisfy the $\rho$-IQC defined by $\Pi$ if, for all $y_2\in {\ell}_2^{\rho}$,
\begin{gather}\label{eq:rho_iqc}
\int_{|z|=1}
\begin{bmatrix}
\widehat{y_2}(\rho, z)\\ \widehat{\Delta y_2}(\rho, z)
\end{bmatrix}^*
\Pi(\rho, z)
\begin{bmatrix}
\widehat{y_2}(\rho, z)\\ \widehat{\Delta y_2}(\rho, z)
\end{bmatrix}
dz \ge 0,
\end{gather}
where $\widehat{y_2}(\rho, z) \equiv \widehat{y_2}(\rho z)$ and $\widehat{\Delta y_2}(\rho, z)\equiv \widehat{\Delta y_2}(\rho z)$. \end{definition}
\begin{remark} According to the signal relations in the proof of Theorem \ref{th:stability_original_sclaed}, (\ref{eq:iqc}) holds if and only if (\ref{eq:rho_iqc}) holds, which implies that $\Pi(z)$ and $\Pi(\rho,z)$ are equivalent. This will be illustrated with the Zames-Falb IQC and $\rho$-IQC in Sections \ref{sec:factorisations_iqc} and \ref{sec:factorisations_piqc}. \end{remark}
\begin{theorem} [\cite{Boczar:2015,Boczar:2017}] \label{th:rho_IQC}
Fix $\rho\in(0,1)$. For the Lur'e system in Fig. \ref{fig:lure}, let $G(\rho,z) \equiv G(\rho z) \in \mathbf{RH}_{\infty}$, and $\rho_-\circ(\Delta\circ \rho_+) \equiv \Delta_{\rho}$ be a causal bounded operator. Assume that $\forall \tau \in [0,1]$,
\begin{enumerate}
\item the feedback interconnection between $G$ and $\tau \Delta$ is well-posed;
\item the operator $\tau \Delta$ satisfies the $\rho$-IQC defined by $\Pi$;
\item there exists $\epsilon>0$, such that
\begin{gather}\label{eq:rho_iqc_stability}
\begin{bmatrix}
G(\rho, z) \\
I \\
\end{bmatrix}^*
\Pi(\rho, z)
\begin{bmatrix}
G(\rho, z) \\
I \\
\end{bmatrix}
\le -\epsilon I, \forall |z|=1.
\end{gather}
\end{enumerate}
Then, the system in Fig.~\ref{fig:lure} is $\ell_2^{\rho}$-stable. \end{theorem}
Similarly, (\ref{eq:rho_iqc_stability}) can be converted to the stability LMI after the factorisation of $\Pi(\rho,z)$ as defined below.
\begin{definition}
Any $\Pi(\rho,z) \in\mathbf{RL}_\infty$ has nonunique factorisations $(\Psi,K_p)$ in the form
\begin{equation}\label{eq:factorisation_rho}
\Pi(\rho,z)=\Psi^*(\rho,z)K_p\Psi(\rho,z),
\end{equation}
where $K_p=K_p^T$ is constant, and $\Psi(\rho,z)$ is a stable LTI system with the variable $z$ and the parameter $\rho$. \end{definition}
Particularly, when the multiplier used in the $\rho$-IQC is causal, $\Psi(\rho,z)=\Psi(\rho z)$ is valid. Additionally, when (\ref{eq:psi_state_space}) holds, \begin{equation}\label{eq:psi_pz_causal} \Psi(\rho, z) \sim \begin{bmatrix} \rho^{-1}A_\Psi & \rho^{-1}B_{\Psi_1} & \rho^{-1}B_{\Psi_2}\\ C_\Psi & D_{\Psi_1} & D_{\Psi_2} \end{bmatrix}. \end{equation}
Therefore, $\Psi(\rho, z)\begin{bmatrix} G(\rho, z) \\I\end{bmatrix}$ $\sim$ $\begin{bmatrix} \rho^{-1}\hat{A} & \rho^{-1}\hat{B}\\ \hat{C} & \hat{D} \end{bmatrix}$, and the LMI can be further simplified to the form in \cite{Boczar:2015, Lessard:2016, Boczar:2017}: $\exists P=P^T$, such that \begin{equation}\label{eq:causal_LMI} \begin{bmatrix} \hat{A}^TP\hat{A}-\rho^2 P &\hat{A}^TP\hat{B} \\ \hat{B}^TP\hat{A} & \hat{B}^TP\hat{B}\end{bmatrix}+\begin{bmatrix} \hat{C}^T \\\hat{D}^T\end{bmatrix} K_{p} \begin{bmatrix} \hat{C} &\hat{D}\end{bmatrix} <0. \end{equation}
\begin{remark}
In the conventional IQC framework, the only variable is $z$, and the analysis is conducted on the circle $|z|=1$ in the complex plane. In the $\rho$-IQC framework, however, the multiplier depends on both $\rho$ and $z$, through terms such as $\rho z$ and $\frac{z}{\rho}$. Then, when the complex conjugate is used, the analysis is conducted on both circles $|z|=\rho$ and $|z|=\frac{1}{\rho}$, as shown in Fig. \ref{fig:circles}.
\end{remark}
\begin{figure}
\caption{Analysis on the circles with radii $\rho$ and $1/\rho$ }
\label{fig:circles}
\end{figure}
\subsection{Zames-Falb IQC with FIR multipliers} \label{sc:ZF-IQC} In this part, the structure of the Zames-Falb IQC for the class of slope-restricted uncertainties is introduced. Then, three factorisations with FIR multipliers are provided.
\begin{theorem}[Zames-Falb IQC \cite{Heath:2005}] Assume the uncertainty $\Delta$ is static and $\Delta\in S[0, K]$. It satisfies the Zames-Falb IQC defined by $\Pi$ as \begin{equation}\label{eq:zf_iqc}
\Pi(z)=
\begin{bmatrix} 0 & K M^*(z) \\ K M(z) & -(M(z)+M^*(z)) \end{bmatrix}.
\end{equation} \end{theorem}
Here, we focus on the noncausal FIR Zames-Falb multiplier proposed in \cite{Shuai:2014}, \begin{equation}\label{eq:zf_multiplier} M(z)=-h_{-n_b}z^{-n_b}-\cdots-h_{-1}z^{-1}+h_0-h_{1}z^{1}-\cdots-h_{n_f}z^{n_f}, \end{equation} where the causal part uses the backward-shift operators $z^{-i_b}$ ($i_b=1,2, \cdots, n_b$) and the anticausal part uses the forward-shift operators $z^{i_f}$ ($i_f=1,2, \cdots, n_f$). In addition, either $h_{i_f}>0$ and $h_{-i_{b}}>0$, or $\Delta$ is odd. The $\ell_1$-norm condition of $M(z)$ is \begin{equation}\label{eq:FIR_L1}
\sum_{i_b=1}^{n_b}|h_{-i_b}| + \sum_{i_f=1}^{n_f}|h_{i_f}|< h_0, \end{equation} where we can set $h_0=1$ without loss of generality.
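A multiplier of the form (\ref{eq:zf_multiplier}) and its $\ell_1$-norm condition (\ref{eq:FIR_L1}) can be sketched as follows. The coefficients are hypothetical, with $h_0$ normalised to $1$ as in the text.

```python
import numpy as np

def fir_multiplier(h_causal, h0, h_anti):
    """Build M(z) = -sum_b h_{-ib} z^{-ib} + h0 - sum_f h_{if} z^{if}.

    h_causal = [h_{-1}, h_{-2}, ...], h_anti = [h_1, h_2, ...].
    """
    def M(z):
        acc = h0 * np.ones_like(z)
        for i, h in enumerate(h_causal, start=1):
            acc = acc - h * z ** (-i)      # causal taps
        for i, h in enumerate(h_anti, start=1):
            acc = acc - h * z ** i         # anticausal taps
        return acc
    return M

def l1_condition(h_causal, h0, h_anti):
    """Check the condition sum |h_{-ib}| + sum |h_{if}| < h0."""
    return sum(abs(h) for h in h_causal) + sum(abs(h) for h in h_anti) < h0

# Hypothetical coefficients: two causal taps, one anticausal tap.
h_causal, h0, h_anti = [0.3, 0.1], 1.0, [0.2]
M = fir_multiplier(h_causal, h0, h_anti)
ok = l1_condition(h_causal, h0, h_anti)    # 0.3 + 0.1 + 0.2 < 1
```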
A standard factorisation of (\ref{eq:zf_iqc}) used in previous literature, such as \cite{Boczar:2015, Lessard:2016}, is (\ref{eq:factorisation}) with
\begin{equation}\label{eq:zf_factorisation1}
\Psi_1(z)=\begin{bmatrix} K M(z) & -M(z) \\0 & 1 \end{bmatrix}, \quad K_{p,1}=\begin{bmatrix} 0 & 1 \\1 & 0 \end{bmatrix},
\end{equation} where $M(z)$ must be causal to keep $\Psi_1(z)$ stable, so it needs further factorisation for noncausal multipliers. The state-space representation of $\Psi_1(z)$ with causal Zames-Falb multipliers is given in \cite{Lessard:2016}.
Moreover, a factorisation method called ``lifting factorisation'' is available, which can be treated as the discrete-time counterpart of the factorisation for general continuous-time multipliers in \cite{Veenman:2016}. One possible lifting factorisation is with
{\small{\begin{equation}\label{eq:zf_factorisation2}
\Psi_2(z)=\left[\begin{array}{c|c}
1 & 0\\ \pmb{Z}^{-i} & \pmb{0}\\ \hline 0 & 1\\\pmb{0} & \pmb{Z}^{-i} \end{array}\right],
K_{p,2}=\arraycolsep1pt\left[\begin{array}{cc|cc} 0 & \pmb{0} & Kh_0 & K\pmb{h}^T_{i} \\
\pmb{0} & \pmb{0} & K \pmb{h}_{-i} & \pmb{0}\\ \hline
Kh_0 & K\pmb{h}^T_{-i} & -2h_0 & -\pmb{h}^T_{-i}-\pmb{h}^T_{i}\\
K\pmb{h}_{i} & \pmb{0} & -\pmb{h}_{-i}-\pmb{h}_{i} & \pmb{0}
\end{array}\right],
\end{equation}}} where $\Psi_2(z)$ is called the ``lifting matrix'', whose state-space representation is provided in Appendix \ref{appendix_ss}. In particular, with this factorisation the causal and the anticausal parts in (\ref{eq:zf_multiplier}) must have the same order, i.e. $n_b=n_f=n_z$. Additionally, $\pmb{Z}^{-i}=[z^{-1} \; z^{-2} \;\cdots \; z^{-n_z}]^T$; $\pmb{h}_{i}=[h_1\; h_2\; \cdots \; h_{n_z}]^T$, $\pmb{h}_{-i}=[h_{-1}\; h_{-2}\; \cdots \; h_{-n_z}]^T$. Moreover, causal multipliers are obtained with $\pmb{h}_{i}=\pmb{0}$, and anticausal multipliers with $\pmb{h}_{-i}=\pmb{0}$. Notice that the causal part and anticausal part share the same base $\begin{bmatrix} 1\\ \pmb{Z}^{-i} \end{bmatrix}$ in $\Psi_2(z)$, so we say they are coupled.
Henceforth, we use the notation $\pmb{0}$ for zero matrices of appropriate dimensions, and $\pmb{I}_{(n)}$ for the $n\times n$ identity matrix.
Finally, another lifting factorisation is defined in (\ref{eq:zf_factorisation3}) on the next page. \begin{figure*}\label{eq:zf_factorisation3}
\end{figure*} In this factorisation, the definitions of the matrices are similar to those in (\ref{eq:zf_factorisation2}), but with possibly different values of $n_b$ and $n_f$. Similarly, $\pmb{h}_{i_f}=\pmb{0}$ gives causal multipliers and $\pmb{h}_{-i_b}=\pmb{0}$ gives anticausal multipliers. Notably, as illustrated in the lifting matrix $\Psi_3(z)$, the bases of the causal and anticausal parts are separated, which brings the flexibility to construct asymmetric noncausal multipliers. We say the causal and anticausal parts are decoupled.
In the following Sections \ref{sec:factorisations_iqc} and \ref{sec:factorisations_piqc}, the validity of these three factorisations is discussed in the IQC and $\rho$-IQC frameworks, respectively. In addition, the multipliers $M(z)$ in (\ref{eq:re_p_condition1}) and $M(\rho z)$ in (\ref{eq:re_p_condition}) are shown to be equivalent.
\section{Zames-Falb multipliers for convergence analysis in IQC framework}\label{sec:factorisations_iqc} In the convergence analysis, the FIR Zames-Falb multipliers belong to a subset of the class of Zames-Falb multipliers ($M\in\mathcal{M}_{\rho}\subset \mathcal{M}$) as their $\ell_1$ norm conditions are penalised with the convergence rate $\rho$.
Consider the noncausal multiplier in the IQC for the scaled uncertainty $\Delta_{\rho}$, \begin{equation}\label{eq:FIR_noncausal1} M(z)=-\tilde{h}_{-n_b}z^{-n_b}-\cdots -\tilde{h}_{-1}z^{-1} +h_0-\tilde{h}_{1}z-\cdots-\tilde{h}_{n_f}z^{n_f}, \end{equation} where $\tilde{h}_{-i_b}>0$ and $\tilde{h}_{i_f}>0$, or $\Delta_{\rho}$ is odd. Its $\ell_1$ norm condition is \begin{equation}\label{eq:FIR_noncausal_L11}
\sum_{i_b=1}^{n_b}|\tilde{h}_{-i_b}|\rho^{-i_b}+\sum_{i_f=1}^{n_f}|\tilde{h}_{i_f}|\rho^{-i_f}<h_0, \end{equation} where the proof for the causal part is given in \cite{Bin:2016}, while the proof for the anticausal part follows from the connection with the anticausal multipliers in the next section.
As mentioned, all the factorisations in Section \ref{sc:ZF-IQC} are valid for causal multipliers, while for anticausal and noncausal multipliers only (\ref{eq:zf_factorisation2}) and (\ref{eq:zf_factorisation3}) are valid. Moreover, the analysis is in the IQC framework, so the LMI keeps the same form as in (\ref{eq:iqc_lmi}).
In short, with this approach everything remains as in the conventional IQC analysis, except that the $\ell_1$ norm conditions of the FIR Zames-Falb multipliers are penalised symmetrically on the causal and anticausal parts.
\section{Zames-Falb multipliers for convergence analysis in $\rho$-IQC framework}\label{sec:factorisations_piqc}
In contrast to the IQC analysis of the previous section, the parameter $\rho$ enters the $\rho$-IQC as a variable. As a result, the factorisation is restricted differently in the causal, anticausal and noncausal cases.
Firstly, the lifting factorisation (\ref{eq:zf_factorisation3}) can be extended to (\ref{eq:factorisation_rho}) with $K_{p,3}$ being the same, and $\Psi_3$ being modified to
\begin{equation}\label{eq:phi_pz}
\small{\Psi_3(\rho, z) =\left[ \begin{array}{c|c}
1 & 0\\ \rho^{-i_b}\pmb{Z}^{-i_b} & \pmb{0}\\0 & 1\\\pmb{0} & \rho^{-i_b}\pmb{Z}^{-i_b} \\ \hline 1 & 0\\ \rho^{i_f}\pmb{Z}^{-i_f} & \pmb{0}\\0 & 1\\\pmb{0} & \rho^{i_f}\pmb{Z}^{-i_f}
\end{array} \right]}, \end{equation} where $\pmb{Z}^{-i_b}$ and $\pmb{Z}^{-i_f}$ are multiplied by $\rho^{-i_b}$ and $\rho^{i_f}$, respectively. The state-space representation of $\Psi_3(\rho, z)$ is given in Appendix \ref{appendix_ss}.
In the following, it is straightforward to show that the factorisation (\ref{eq:factorisation_rho}) with $\left(\Psi_3(\rho,z), K_{p,3}\right)$ is a unified structure in the $\rho$-IQC framework for causal, anticausal and noncausal FIR Zames-Falb multipliers.
With the noncausal multiplier in (\ref{eq:FIR_noncausal1}), setting $h_{-i_b}=\tilde{h}_{-i_b}/\rho^{-i_b}$ and $h_{i_f}=\tilde{h}_{i_f}/\rho^{i_f}$, the noncausal multiplier $M(\rho,z)$ in the $\rho$-IQC for the original uncertainty $\Delta$ defined in \cite{Boczar:2015,Boczar:2017,Freeman:2018} is obtained: \begin{equation}\label{eq:FIR_noncausal} \begin{split} M(\rho, z) =& -h_{-n_b}\rho^{-n_b}z^{-n_b} - \cdots - h_{-1}\rho^{-1}z^{-1}\\ & +h_0 - h_{1}\rho z - \cdots - h_{n_f}\rho^{n_f}z^{n_f}, \end{split} \end{equation} where $h_{-i_b}>0$ and $h_{i_f}>0$, or $\Delta$ is odd. Its $\ell_1$ norm condition is proved in the literature above as \begin{equation}\label{eq:FIR_noncausal_L1}
\sum_{i_b=1}^{n_b}|h_{-i_b}|\rho^{-2i_b} + \sum_{i_f=1}^{n_f}|h_{i_f}|< h_0, \end{equation} which in turn proves (\ref{eq:FIR_noncausal_L11}).
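The equivalence of the two $\ell_1$ norm conditions under this change of variables can be checked numerically. The following minimal Python sketch illustrates the identity; the coefficient values are arbitrary illustrative choices, not taken from any example in this note:

```python
# Check that the l1 norm conditions (eq:FIR_noncausal_L11) and
# (eq:FIR_noncausal_L1) coincide under the change of variables
# h_{-i} = h~_{-i} / rho^{-i},  h_{i} = h~_{i} / rho^{i}.
# Coefficient values below are arbitrary illustrative choices.
rho = 0.9
h_tilde_minus = [0.10, 0.05]   # h~_{-1}, h~_{-2}  (anticausal part)
h_tilde_plus = [0.08, 0.02]    # h~_{1},  h~_{2}   (causal part)

# Change of variables (indices i = 1, 2, ...).
h_minus = [ht / rho**(-(i + 1)) for i, ht in enumerate(h_tilde_minus)]
h_plus = [ht / rho**(i + 1) for i, ht in enumerate(h_tilde_plus)]

# Left-hand side of (eq:FIR_noncausal_L11) in the h~ coefficients.
lhs_tilde = sum(abs(ht) * rho**(-(i + 1))
                for i, ht in enumerate(h_tilde_minus)) \
          + sum(abs(ht) * rho**(-(i + 1))
                for i, ht in enumerate(h_tilde_plus))

# Left-hand side of (eq:FIR_noncausal_L1) in the h coefficients.
lhs = sum(abs(h) * rho**(-2 * (i + 1)) for i, h in enumerate(h_minus)) \
    + sum(abs(h) for h in h_plus)

assert abs(lhs - lhs_tilde) < 1e-12   # the two conditions are identical
```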
Since the change of variables is one-to-one, the multipliers $M(\rho,z)$ in (\ref{eq:FIR_noncausal}) and $M(z)$ in (\ref{eq:FIR_noncausal1}) are equivalent and belong to the same subset $\mathcal{M}_{\rho}$. Nevertheless, the main issue with the noncausal multiplier $M(\rho,z)$ is its factorisation.
First, for causal multipliers, the factorisations (\ref{eq:zf_factorisation1}), (\ref{eq:zf_factorisation2}) and (\ref{eq:zf_factorisation3}) are all valid with $z$ replaced by $\rho z$, and the LMI (\ref{eq:causal_LMI}) is then obtained via the KYP lemma. In addition, when the factorisation (\ref{eq:phi_pz}) is used, the LMI takes the form (\ref{eq:iqc_lmi}).
Second, for anticausal multipliers, the factorisation (\ref{eq:zf_factorisation1}) is invalid; the two lifting factorisations are discussed below.
The factorisations (\ref{eq:zf_factorisation2}) and (\ref{eq:zf_factorisation3}) lead to (\ref{eq:factorisation_rho}) with anticausal multipliers by setting $\pmb{h}_{-i}=\pmb{0}$ and $\pmb{h}_{-i_b}=\pmb{0}$, respectively, and replacing $z$ by $\frac{z}{\rho}$ in $\Psi_{2,3}(z)$. The corresponding state-space representation becomes \begin{gather*} \Psi_{2,3}(\rho,z) \sim \begin{bmatrix} \rho A_\Psi & \rho B_{\Psi_1} & \rho B_{\Psi_2}\\ C_\Psi & D_{\Psi_1} & D_{\Psi_2} \end{bmatrix}. \end{gather*}
Notice that $\Psi_{3}(\rho,z)$ here is valid only for anticausal multipliers; the version for noncausal multipliers is (\ref{eq:phi_pz}).
With this factorisation, the replacements $A_{\Psi} \to \rho A_\Psi$ and $B_{\Psi} \to \rho B_\Psi$ are made in (\ref{eq:iqc_lmi}) to obtain the LMI in the anticausal case.
The factorisation (\ref{eq:phi_pz}) leads to (\ref{eq:factorisation_rho}) with anticausal multipliers by setting $\pmb{h}_{-i_b}=\pmb{0}$. As in the causal case, the LMI (\ref{eq:iqc_lmi}) is obtained.
Finally, for noncausal multipliers, the factorisation is more restricted, because it is inconsistent between the causal and anticausal parts. Taking the factorisation (\ref{eq:zf_factorisation2}) as an example, it is impossible to replace the variable $z$ in $\Psi_2$ by $\rho z$ for the causal part and by $\frac{z}{\rho}$ for the anticausal part simultaneously. Therefore, decoupling the causal and anticausal parts is essential, and only the factorisation (\ref{eq:phi_pz}) is valid. The resulting LMI is (\ref{eq:iqc_lmi}).
In short, the factorisation is more restricted with this approach. The factorisation (\ref{eq:factorisation_rho}) with $\left(\Psi_3(\rho,z), K_{p,3}\right)$ provides a unified structure in the $\rho$-IQC framework.
\section{Numerical results}\label{sec:results}
In this section, we compare the results obtained by causal, anticausal and noncausal multipliers with different forms and factorisations. The examples are listed in Table \ref{tb:example_plants} together with other related information. Here, we set the causal and anticausal steps to be equal in noncausal multipliers ($n_b=n_f=n_z$). According to a preliminary study, higher-order multipliers may not lead to less conservative results, so the step in each example was tuned in advance to reduce the conservatism.
\begin{table}[ht]
\centering
\begin{tabular}{ |c|c|c|c| }
\hline
Ex & $G(z)$ &$K$ &$n_z$ \\ \hline
1 & $-\frac{1}{z-0.4}$& $1$ & $1$ \\ \hline
2 & $\frac{2z -1}{20z^2-10z+10}$& $9$ & $20$\\ \hline
3 & $-\frac{10z^2+19z+9}{100z^3 -80z^2+17z-1}$&$3$ & $30$\\ \hline
4 & $-\frac{0.1z}{z^2-1.8z+0.81}$& $12$ & $20$\\ \hline
\end{tabular}
\caption{Examples}
\label{tb:example_plants} \end{table}
The best estimates of $\overline{\rho^*_{\{G,K\}}}$ are obtained by bisection search from the initial range $(\max(|\mathrm{eig}(A)|), 1)$, where the lower bound ensures the stability of the scaled plant $G(\rho z)$. The software package CVX with the solver sdpt3 \cite{cvx:2014, cvx:2008} is used to solve the LMI. The results are reported in Table \ref{tb:convergence_results}, where the best result for each example is in bold.
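The bisection search can be sketched as follows. This is a hedged illustration, not our actual implementation (which runs in MATLAB with CVX/sdpt3): the feasibility oracle `is_feasible` is a placeholder standing in for the LMI solve, and the threshold $0.75$ in the mock oracle is an arbitrary stand-in, not a computed value.

```python
import numpy as np

def bisection_rho(is_feasible, rho_lo, rho_hi, tol=1e-6):
    # Bisection for the smallest rho in (rho_lo, rho_hi) at which the
    # feasibility test passes; is_feasible is assumed monotone in rho
    # (feasible for all rho above some threshold rho*).
    while rho_hi - rho_lo > tol:
        rho = 0.5 * (rho_lo + rho_hi)
        if is_feasible(rho):
            rho_hi = rho   # feasible: tighten the upper estimate
        else:
            rho_lo = rho   # infeasible: the threshold lies above
    return rho_hi

# Lower end of the initial range: max(|eig(A)|) ensures G(rho z) is stable.
A = np.array([[0.4]])                 # A-matrix of Ex. 1, G(z) = -1/(z - 0.4)
rho_lo = np.max(np.abs(np.linalg.eigvals(A)))

# Mock oracle standing in for the LMI solve; 0.75 is an arbitrary threshold.
estimate = bisection_rho(lambda r: r >= 0.75, rho_lo, 1.0)
assert abs(estimate - 0.75) < 1e-5
```

In the actual procedure, `is_feasible(rho)` would assemble the scaled plant $G(\rho z)$ and call the SDP solver on the LMI; everything else is unchanged.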
\begin{table}[ht]
\centering
\begin{tabular}{ |c|c|c|c|c|c| }
\hline
& & \multicolumn{4}{|c|}{$\overline{\rho^*_{\{G,K\}}}$} \\ \cline{3-6}
Ex & $\underline{\rho^*_{\{G,K\}}}$ & C. (\ref{eq:FIR_noncausal}) & AC. (\ref{eq:FIR_noncausal}) & NC. (\ref{eq:FIR_noncausal1}) & NC. (\ref{eq:FIR_noncausal})\\
& & with (\ref{eq:zf_factorisation2}) & with (\ref{eq:zf_factorisation2}) & with (\ref{eq:zf_factorisation2}) & with (\ref{eq:phi_pz}) \\ \hline
1 & $0.6$ & $0.600037$ &\pmb{$0.600024$} &\pmb{$0.600024$}&\pmb{$0.600024$}\\ \hline
1a & $0.6$ & $0.600281$ &\pmb{$0.600024$} &\pmb{$0.600024$}&\pmb{$0.600024$}\\ \hline
2 & $0.974679$& $0.978485$ & $0.975044$ &\pmb{$0.974758$}&$0.974794$\\ \hline
2a & $0.974679$& $0.998474$ & $0.975044$ &\pmb{$0.974758$}&$0.974794$\\ \hline
3 & $0.975367$& $0.976437$ & \pmb{$0.975525$}&$0.975815$ &$0.975891$\\ \hline
3a & $0.975367$& $0.976501$ & \pmb{$0.975769$}&$0.975891$ &$0.975891$\\ \hline
4 & $0.9$& $0.991760$ & invalid &\pmb{$0.990723$}&\pmb{$0.990723$}\\ \hline
4a & $0.9$& $0.992794$ & invalid &\pmb{$0.992529$}&\pmb{$0.992529$}\\ \hline
\end{tabular}
\caption{Best estimates of $\overline{\rho^*_{\{G,K\}}}$ by causal (C.), anticausal (AC.) and noncausal (NC.) multipliers with different factorisations compared with $\underline{\rho^*_{\{G,K\}}}$}
\label{tb:convergence_results} \end{table}
In the table, Examples 1--4 use odd uncertainties, while Examples 1a--4a use non-odd uncertainties. As the table shows, all the results are valid, as they are larger than the theoretical bound $\rho^*$. For systems that break the Kalman conjecture, such as Example 4(4a), $\underline{\rho^*_{\{G,K\}}}$ is meaningless; otherwise, the best results are close to the theoretical value.
First, regarding the different types of uncertainties, the optimal convergence rates with odd uncertainties are no greater than those with general uncertainties, which is a natural result reflected by the stricter $\ell_1$ norm conditions on multipliers for general uncertainties.
Then, regarding the different multipliers, the introduction of anticausal steps is effective in reducing the conservatism (also see \cite[Fig. 2]{Freeman:2018}). Meanwhile, noncausal multipliers should in general be less conservative than causal and anticausal multipliers. Nevertheless, due to possible numerical problems with noncausal multipliers in MATLAB (e.g. large computational load, complex matrices), anticausal multipliers provide the optimal results in Examples 1(1a) and 3(3a). However, anticausal multipliers are conservative when searching for the maximum slope $K$ for $\ell_2$-stability (also see \cite[Table \rom{2}, \rom{3}]{Shuai:2014}). For example, in Example 4(4a), the anticausal multiplier is not sufficient for stability when $K=12$, while causal and noncausal multipliers are; in this case, noncausal multipliers are less conservative.
Moreover, the results obtained with the two structures of noncausal multipliers $M(z)$ and $M(\rho, z)$ are almost the same, as they are equivalent. Nevertheless, the computational load with $M(\rho, z)$ is larger due to the large matrices in its factorisation.
Similar conclusions are also reflected in the convergence rates obtained with uncertainties of different maximum slopes in Figs. \ref{fig:p_ex2} and \ref{fig:p_ex4}.
\begin{figure}\label{fig:p_ex2}
\end{figure} \begin{figure}\label{fig:p_ex4}
\end{figure}
As illustrated in the figures, anticausal multipliers achieve tight bounds on the convergence rates when they are sufficient for exponential stability, and noncausal multipliers are effective in achieving less conservative results when the closed-loop system is close to instability. Causal multipliers, however, are conservative in general.
In summary, for a general plant whose properties are unknown, it is more reliable to use noncausal multipliers to obtain less conservative bounds on the convergence rates. Meanwhile, the choice of a specific structure of noncausal multipliers is not crucial for the conservatism of the result. On the contrary, if the given plant verifies the Kalman conjecture and the feedback uncertainty is odd, Conjecture \ref{cj:kc_convergence} appears to hold, and the multiplier techniques are not necessary.
\section{Conclusion}\label{sec:conclusion} In this technical note, we reviewed the stability concepts of Lur'e systems, where exponential stability with convergence rate $\rho$ is linked to an extension of $\ell_2$-stability, defined as $\ell_2^{\rho}$-stability. We then extended the literature results on causal FIR Zames-Falb multipliers to the anticausal and noncausal cases. The Zames-Falb IQC and $\rho$-IQC are shown to be equivalent, with the multipliers converted by a change of variables. However, the factorisation of the Zames-Falb $\rho$-IQC is restricted, especially with noncausal multipliers; hence, a unified factorisation is provided for both the Zames-Falb IQC and $\rho$-IQC. Furthermore, the numerical examples indicate that noncausal multipliers are effective in achieving less conservative estimates of the upper bounds on the convergence rate in the general case. However, when the system verifies the Kalman conjecture and the feedback uncertainty is odd, it is reasonable to use the theoretical bound from the linear analysis directly, without any multiplier technique.
\appendix \label{Sec:appendix} \subsection{Proof of Theorem \ref{th:stability_relations} (Outline)}\label{appendix_proof} The proof follows \cite[Theorem 6.3.46]{Vidyasagar:2002} and the theorems therein, which link internal stability to input-output $\ell$-stability in continuous time. Here we relate these results to the convergence rate in discrete time. The continuous-time argument carries over to discrete time straightforwardly, so it is not detailed here.
To remain consistent with the notation in \cite{Vidyasagar:2002}, we consider the system $-G \sim \begin{bmatrix} A\; B;\; -C\; 0 \end{bmatrix}$. The minus signs indicate the negative feedback structure in \cite[Fig. 6.4]{Vidyasagar:2002}, and the $D$ matrix can be set to $0$ without loss of generality. Moreover, because $G$ is linear and stable, the disturbance signal $f$ in Fig. \ref{fig:lure} is assumed to be zero without loss of generality. Then, the feedback interconnection in Fig. \ref{fig:lure} can be expressed as \begin{equation}\label{eq:system_proof} x_{k+1}=Ax_k+Bg_k+B\Delta(v_k),\quad y_k=-Cx_k. \end{equation}
\textit{Necessity}: The expression (\ref{eq:system_proof}) satisfies the general form (6.3.7) in \cite{Vidyasagar:2002}. In addition, the conditions (6.3.16, 17) are satisfied, because the state-space matrices of $G$ are finite and $\Delta$ is slope-restricted.
Following the proof and notation in \cite[Theorem 6.3.15]{Vidyasagar:2002}, (6.3.19) implies the exponential stability condition (\ref{eq:exponential_stable}) of the unforced system with $c=\frac{\beta}{\alpha}$ and $\rho=\frac{1}{2\beta^2}$ ($0<\alpha<\beta$ are constants). Then, in (6.3.32), $\|x_k\|\le \frac{W_k}{\alpha}$, where $W_k\le h_k$, and $h_k$ is the output of a first-order system with input $\|g_k\|$ and pole $\frac{-1}{2\beta^2}$. In other words, the exponential rate limited by this transfer function is $\rho=\frac{1}{2\beta^2}$. Hence, if the input to this transfer function satisfies $g\in \ell_2^{\rho}$, then the solution satisfies $h\in \ell_2^{\rho}$, and thus $W\in \ell_2^{\rho}$ and $x\in \ell_2^{\rho}$. Moreover, according to (6.3.33), $v\equiv y\in \ell_2^{\rho}$. Next, because $\Delta$ is memoryless and slope-restricted, and $g\in \ell_2^{\rho}$, the signal $w\in \ell_2^{\rho}$. This proves that the forced system is $\ell_2^{\rho}$-stable.
\subsection{State-space representation of $\Psi_2(z)$ and $\Psi_3(\rho,z)$} \label{appendix_ss}
The state-space representation of $\Psi_2(z)$ in (\ref{eq:zf_factorisation2}) is given below. \begin{equation*}
\begin{split}
A_{2\Psi}=\begin{bmatrix}
A_S & \pmb{0}\\ \pmb{0} & A_S
\end{bmatrix}_{(2n\times 2n)},&\quad
B_{2\Psi}=\begin{bmatrix}
B_S & \pmb{0}\\ \pmb{0} & B_S
\end{bmatrix}_{(2n \times 2)}, \\
C_{2\Psi}=\begin{bmatrix}
C_{S} & \pmb{0}\\ \pmb{0} & C_{S}
\end{bmatrix}_{((2n+2) \times 2n)},& \quad
D_{2\Psi}=\begin{bmatrix}
D_{S} & \pmb{0}\\ \pmb{0} & D_{S}
\end{bmatrix}_{((2n+2) \times 2)},
\end{split}
\end{equation*}
where $A_S$, $B_S$, $C_{S}$ and $D_{S}$ are
\begin{equation*}
\begin{split}
A_S=\begin{bmatrix}
\pmb{0} & 0\\ \pmb{I}_{(n-1)} & \pmb{0}
\end{bmatrix}_{(n \times n)},&\quad
B_S=\begin{bmatrix}
1 \\ \pmb{0}_{(n-1) \times 1}
\end{bmatrix}_{(n \times 1)}, \\
C_{S}=\begin{bmatrix}
\pmb{0} \\ \pmb{I}_{(n)}
\end{bmatrix}_{((n+1) \times n)},&\quad
D_{S}=\begin{bmatrix}
1 \\ \pmb{0}_{n \times 1}
\end{bmatrix}_{((n+1) \times 1)}.
\end{split}
\end{equation*}
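The shift-register structure of $(A_S, B_S, C_S, D_S)$ can be checked numerically: the transfer function $C_S(zI-A_S)^{-1}B_S + D_S$ should reproduce the basis vector $[1\; z^{-1}\; \cdots\; z^{-n}]^T$. A short Python/NumPy sketch of this sanity check (illustrative only, not part of the implementation used for the experiments):

```python
import numpy as np

def shift_register_ss(n):
    # State-space matrices (A_S, B_S, C_S, D_S) of the basis filter
    # [1, z^{-1}, ..., z^{-n}]^T used in the lifting factorisation.
    A = np.zeros((n, n))
    A[1:, :-1] = np.eye(n - 1)              # delay chain: x_i holds u_{k-i}
    B = np.zeros((n, 1)); B[0, 0] = 1.0     # input feeds the first delay
    C = np.vstack([np.zeros((1, n)), np.eye(n)])
    D = np.zeros((n + 1, 1)); D[0, 0] = 1.0 # direct feedthrough for the '1'
    return A, B, C, D

# Sanity check: evaluate C (zI - A)^{-1} B + D at a sample point z.
n, z = 3, 2.0
A, B, C, D = shift_register_ss(n)
tf = C @ np.linalg.solve(z * np.eye(n) - A, B) + D
expected = np.array([[z**(-i)] for i in range(n + 1)])
assert np.allclose(tf, expected)            # equals [1, z^{-1}, ..., z^{-n}]^T
```

The block-diagonal matrices $A_{2\Psi}$, $B_{2\Psi}$, $C_{2\Psi}$, $D_{2\Psi}$ above are then obtained by stacking two copies of this realisation.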
Then, we consider $\Psi_3(\rho,z)$ in (\ref{eq:phi_pz}). Allow $n_b \ne n_f$, and let $\hat{n}=\max\{n_b,n_f\}$. The state-space representation is given below. \begin{equation*} \begin{split} A_{3\Psi}=\frac{1}{\rho}A_{2\Psi (2\hat{n} \times 2\hat{n})},&\quad B_{3\Psi}=B_{2\Psi (2\hat{n} \times 2)}, \\ C_{3\Psi}=\begin{bmatrix} C_{S1} & \pmb{0}\\ \pmb{0} & C_{S1}\\C_{S2} & \pmb{0}\\ \pmb{0} & C_{S2} \end{bmatrix},& \; D_{3\Psi}=\begin{bmatrix} D_{S ((n_b+1) \times 1)} & \pmb{0}\\ \pmb{0} & D_{S ((n_b+1) \times 1)} \\D_{S ((n_f+1) \times 1)} & \pmb{0}\\ \pmb{0} & D_{S ((n_f+1) \times 1)} \end{bmatrix}. \end{split} \end{equation*}
The matrices $C_{S1}$ and $C_{S2}$ are
{\small{\begin{equation*}
C_{S1}=\arraycolsep1pt \begin{bmatrix}
\pmb{0} & \pmb{0}^{\square}\\ \frac{1}{\rho}\pmb{I}_{(n_b)} & \pmb{0}^{\square}
\end{bmatrix}_{((n_b+1)\times\hat{n})},
C_{S2}=\begin{bmatrix}
\pmb{0} & \pmb{0}^{\triangle}\\ \mathrm{diag}\left(\rho^{2i_f-1}\right) & \pmb{0}^{\triangle}
\end{bmatrix}_{((n_f+1)\times\hat{n})}.
\end{equation*}}}
The matrices $\pmb{0}^{\square}$ are removed when $n_b>n_f$; the matrices $\pmb{0}^{\triangle}$ are removed when $n_b<n_f$; both of the matrices $\pmb{0}^{\square}$ and $\pmb{0}^{\triangle}$ are removed when $n_b=n_f$.
\end{document} |
\begin{document}
\newcommand{\need}[1]{{\tiny *** #1}} \newcommand{\mar}[1]{\marginpar{\raggedright\tiny FIXME: #1 }}
\title{Minimal Modularity Lifting For non-regular symplectic representations}
\author{Frank Calegari \and David Geraghty} \thanks{The first author was supported in part by NSF Career Grant DMS-0846285 and NSF Grant
DMS-1404620 and NSF Grant DMS-1701703. The second author was
supported in part by NSF Grants DMS-1200304 and DMS-1128155.} \subjclass[2010]{11F33, 11F80.}
{\footnotesize \maketitle \tableofcontents }
\section{Introduction}
In this paper, we prove a minimal modularity lifting theorem for Galois representations (conjecturally) associated to
Siegel modular forms $\pi$ for the group $\mathrm{GSp}(4)/\mathbf{Q}$ such that
$\pi_{\infty}$ is a holomorphic limit of discrete series. An example of what we can prove with these methods is the following:
\begin{theorem} \label{theorem:char0} Let $r: G_{\mathbf{Q}} \rightarrow \mathrm{GSp}_4(\overline{\Q}_p)$ be a continuous irreducible representation satisfying the following conditions: \begin{enumerate}
\item $r|G_{\mathbf{Q}_p}$ is ordinary with Hodge--Tate weights $[0,0,j-1,j-1]$ for some integer $j$ satisfying $p-1 > j \ge 4$. \item If $\alpha$ and $\beta$ are the unit root eigenvalues of Frobenius on $D_{\mathrm{cris}}(V)$, then $$(\alpha^2 -1)(\beta^2 - 1)(\alpha - \beta)(\alpha^2 \beta^2 -1) \not\equiv 0 \mod p.$$ \item
The image of $\overline{r}|G_{\mathbf{Q}(\zeta_p)}$ contains~$\mathrm{Sp}_4(\mathbf{F}_p)$. \item For a prime~$x \ne p$, the image of inertia at~$x$ is unipotent, and the image of any generator of tame inertia has the same number of Jordan blocks mod~$p$ as it does in characteristic zero. \item $\overline{r}$ is modular of level $N(\overline{r})$ and weight~$(j,2)$.
\end{enumerate} Then $r$ is modular, that is, there exists a cuspidal Siegel modular Hecke eigenform $F$ of weight $(j,2)$ such that $$L(r,s) = L(F,s),$$
where $L(F,s)$ is the spinor $L$-function of $F$. \end{theorem}
We deduce Theorem~\ref{theorem:char0} from our main result, which we now state.
(We shall refer to \S~\ref{section:deformations} and
\S~\ref{sec:gal-rep-low} for precise details concerning ramification
behaviour, level subgroups and the exact definition of minimal deformations.) Let $\epsilon$ denote the $p$-adic cyclotomic character. Let $\mathcal{O}$ be the ring of integers of a finite extension $K$ of $\mathbf{Q}_p$. Let $\displaystyle{\overline{r}:G_{\mathbf{Q}} \rightarrow \mathrm{GSp}_4(k)}$ be a continuous irreducible representation whose similitude character $\nu(\overline{r})$ on inertia at $p$ is the mod-$p$ reduction of $\epsilon^{1-j}$. Suppose that
$\overline{r} | G_{\mathbf{Q}_p}$ contains an unramified subspace of dimension two on which $\mathrm{Frob}_p$ acts by the scalars $\alpha$ and $\beta$ respectively, where $$(\alpha^2 - 1)(\beta^2 - 1)(\alpha - \beta)(\alpha^2 \beta^2 - 1) \ne 0.$$ Suppose further that $\overline{r}$ has big image (explicitly, satisfies Assumption~\ref{assumption:bigimage})
and that $\overline{r}|G_{\mathbf{Q}_x}$ for a prime $x \ne p$ is either unramified or is one of the types listed in Assumption~\ref{assumption:ramification}. Let $Y_1(N)$ denote the (open) Siegel modular variety of level $N = N(\overline{r})$ over $\mathrm{Spec}(\mathcal{O})$, where $N$ is determined by $\overline{r}$ as in \S~\ref{section:SMF}, and let $\omega(j,2)$ denote the coherent sheaf on $Y_1(N)$ whose complex sections define Siegel modular forms of weight $(j,2)$ for some integer $p - 1 > j \ge 4$. Let $\mathbf{T}$ denote the subring of endomorphisms of $$e H^0(Y_1(N),\omega(j,2) \otimes K/\mathcal{O}) \simeq \lim_{\rightarrow} e H^0(Y_1(N),\omega(j,2) \otimes \mathcal{O}/\varpi^n)$$ (where~$e = e_{\alpha,\beta}$ is a certain ordinary projection, see section~\ref{section:ordinaryprojection})
generated by Hecke operators at primes not dividing $Np$. Let $R^{\mathrm{min}}$ denote the universal minimal deformation ring of $\overline{r}$ (see Definition~\ref{defn:minimal} for more details).
\begin{theorem} \label{theorem:realmain} Suppose that there exists a maximal ideal $\mathfrak{m}$ of $\mathbf{T}$ and a corresponding representation $\overline{r}_{\mathfrak{m}}:G_{\mathbf{Q}} \rightarrow \mathrm{GSp}_4(k)$ which is isomorphic to $\overline{r}$. Let $R^{\mathrm{min}}$ denote the universal minimal ordinary deformation ring of $\overline{r}$. Suppose that $p-1 > j \ge 4$. Then there is an isomorphism $R^{\mathrm{min}} \stackrel{\sim}{\rightarrow} \mathbf{T}_{\mathfrak{m}}$, and moreover, the $\mathbf{T}_{\mathfrak{m}}$ module $e H^0(Y_1(N),\omega(j,2) \otimes K/\mathcal{O})_{\mathfrak{m}}^{\vee}$ is free as a $\mathbf{T}_{\mathfrak{m}}$-module. \end{theorem}
The proof follows the strategy of~\cite{CG}. The main ingredients are showing that there exists a map from $R^{\mathrm{min}}$ to $\mathbf{T}_{\mathfrak{m}}$ (see Theorem~\ref{theorem:localglobal}) and proving that the cohomology of the subcanonical extension $\omega(j,2)^{\mathrm{sub}}$ of $\omega(j,2)$ to a smooth toroidal compactification $X_1(N)$ of $Y_1(N)$ vanishes outside degrees $0$ and $1$ (see Theorem~\ref{thm:lan-suh} ---
in the case of classical modular curves this step was trivial).
\subsection{Comparison with previous methods} The first modularity theorems which applied to non-regular motives were the results of
Buzzard--Taylor and Buzzard~\cite{BuzzT,BuzzWild} on
two dimensional odd Artin representations $V$. The idea of these papers can roughly be described as follows. Using known cases of Serre's conjecture, one deduces that $\overline{\rho}$ is modular, where $\rho$ is the representation associated to some $p$-adic realization of $V$ for some $p$. Using modularity theorems in \emph{regular} weight, one then proves that a big Hecke algebra is modular. Specializing to weight one, one deduces the existence of an \emph{overconvergent} eigenform $f$ corresponding to $V$. Under a non-degeneracy assumption on $V$ ($\overline{\rho}$ is $p$-distinguished), one constructs (using companion forms) a second
Hida family which specializes to a \emph{second} eigenform $g$. Using the geometric properties of $U$, one shows that $f$ and $g$ converge deeply into the supersingular locus. The Fourier coefficients $a_n$ of $f$ and $g$ for $(n,p) = 1$ are determined by $V$. One then constructs a suitable linear combination $h = (\alpha f - \beta g)/(\alpha - \beta)$ which converges over the entire modular curve, and is thus classical by rigid GAGA. The formal rigid geometry employed by these papers has been generalized by various authors, in particular by Kassaei~\cite{Kassaeiold}. One may well ask whether this approach can be applied to Siegel modular forms of weight $(2,2)$ --- work of Tilouine and his collaborators has made great progress in this direction. The modularity lifting result for (regular weight) Hida families has been established in many cases by Genestier--Tilouine~\cite{TG} (see also Pilloni~\cite{Pilloni}). Significant progress has also been made in the theory of canonical subgroups and the geometry of Siegel modular varieties. One difficulty, however, is that the Fourier expansions of Siegel modular forms are \emph{not} determined by the Hecke eigenvalues. This is a difficulty which must be overcome in such an approach. (Various classicality results for overconvergent forms can be established without using $q$-expansions, see for example~\cite{PayGlue,PillStroh}, but these results only apply to forms of sufficiently non-critical slope.) The difficulty of dealing with $q$-expansions manifests itself in our approach as well --- we are forced to prove (``by hand'') various properties of Fourier expansions of Hecke eigenforms in \S~\ref{section:explicit}.
\subsection{Abelian Surfaces} \label{spec} It would be desirable to weaken the assumption $j \ge 4$ in the main theorem to $j \ge 2$, since the case $j = 2$ includes the representations associated to the Tate modules of abelian surfaces. The only point in our arguments in which we use the fact that $j \ge 4$ is to deduce that $H^2(X_1(N),\omega(j,2)^{\mathrm{sub}}) = 0$ for the subcanonical extension $\omega(j,2)^{\mathrm{sub}}$ of $\omega(j,2)$ to a smooth toroidal compactification $X_1(N)$ of $Y_1(N)$.
If this vanishing holds for $j = 2$ then our theorem would also apply to these cases. On the other hand, one does \emph{not} expect vanishing here, because one expects that singular Siegel modular forms should contribute cohomology in these degrees. However,
we need only the weaker result that the image of $H^2(X_1(N),\omega(j,2)^{\mathrm{can}})$ in $H^2(X_1(N),\omega(j,2)^{\mathrm{sub}})$ is zero after localization at a sufficiently non-Eisenstein maximal ideal~$\mathfrak{m}$. We expect this to always be true for~$j = 2$, although we were not able to prove it. On the other hand, using the ideas of Khare--Thorne~\cite{KT}, one can dispense with proving this under the very strong supplementary hypothesis that there exists a~\emph{characteristic zero} form of level~$N = N(\overline{r})$ which gives rise to~$\overline{r}$. In particular, by using the arguments of the proof of Theorem~6.29 of~\emph{ibid}, one should be able to prove the analogue of Theorem~\ref{theorem:char0} in weight~$(2,2)$ assuming the existence of an auxiliary Siegel modular form~$G$ of the same level also of weight~$(2,2)$ with~$\overline{r}_{G} = \overline{r}$.
\subsection{Recent Developments} ({\bf Added: January, 2019\rm}) \label{january} Very recently, there have been a number of developments related to the main theme of this paper, in particular, in the preprints~\cite{pilloniHidacomplexes} and~\cite{BCGP}, the latter of which establishes the potential modularity of abelian surfaces over totally real fields. The introduction to~\cite{BCGP} explains a number of innovations which made those results possible, so we shall confine ourselves here to only a few salient remarks. The first is that the vanishing conjecture for~$H^2$ localized at~$\mathfrak{m}$ mentioned in~\S\ref{spec} remains unresolved, and the methods of~\cite{BCGP} blend the techniques of this paper (and~\cite{CG}) with arguments from~\cite{pilloniHidacomplexes}. A second point is that the paper~\cite{pilloniHidacomplexes} develops a conceptual method to define (normalized) Hecke operators at~$p$, and in particular establishes the action of these operators on higher cohomology (which is essential for the main results of~\cite{pilloniHidacomplexes} and~\cite{BCGP}). In this paper, it suffices to construct the action of these Hecke operators on~$H^0$ which is significantly easier. The methods we use in~\S\ref{section:hecke} to do this are admittedly disagreeable, relying as they do on arguments using~$q$-expansions. Thus the reader is encouraged to consult~\cite[\S7]{pilloniHidacomplexes} and~\cite[\S4.5]{BCGP} for a more geometric construction of these operators. An analysis of the normalization factors for Hecke operators required in~\cite{pilloniHidacomplexes} also sheds some light on another phenomenological feature of this paper which readers may find surprising: On the Galois side, there is essentially no difference (in the ordinary setting) between working in the (irregular) weight~$(j,2)$ for~$j > 2$ and working in the (irregular) weight~$(2,2)$.
On the other hand, the Hecke operators at~$p$ (particularly~$T_{p,2}$) behave quite differently in weight~$(2,2)$. In our context, this arises most noticeably in~\S\ref{relationship} (via Lemma~\ref{lemma:verify}), which one should compare to~\cite[\S11.1]{pilloniHidacomplexes} (warning: the convention of that paper is that Pilloni's~$T_{p,1}$ is our~$T_{p,2}$ and \emph{vice versa}, and the spherical version of the operator~$T$ in~\cite{pilloniHidacomplexes} is equal in weight~$(2,2)$ up to translation by a multiple of~$T_{p,0}$ to the operator we call~$Q_2$). Finally, the paper~\cite{BCGP} develops a geometric version of the doubling argument (see~\S5 of~\emph{ibid}.) This provides a much more robust explanation (in a slightly different setting)
for what in this paper occupies most of~\S\ref{section:qstuff} and consists of a tricky and not entirely intuitive series of manipulations with~$q$-expansions. (Note that the geometric doubling argument of~\cite{BCGP} is only written for weight~$(2,2)$ but the method applies in principle to the weights~$(j,2)$ which we consider in this paper.) Finally, the very observant reader will notice that the doubling argument of~\cite{BCGP} applies in weight~$(2,2)$ to the space of ordinary forms at Klingen level, whereas in this paper we essentially prove (in the same weight) a tripling result at spherical level.
Neither of these results immediately imply the other. The ``extra'' copy of the space of forms
can be interpreted as giving rise to a space of \emph{non-ordinary} forms of weight~$(p+1,p+1)$. See Remark~\ref{trip} for further discussion on this point, which we also discuss in a different context below.
It is natural to ask whether one should expect any genuine difficulties in modifying the geometric doubling argument of~\cite{BCGP} to the setting of this paper. We now offer some speculative remarks to address this point (using notation from~\cite{BCGP}). Let~$\pi_p$ be a smooth admissible irreducible unramified representation of~$\mathrm{GL}_2(\mathbf{Q}_p)$ (over~$\mathbf{C}$) which is not trivial. (For example, $\pi_p$ could be the local constituent of an automorphic representation~$\pi$ associated to a classical modular form.) Let~$\mathrm{Sph} = \mathrm{GL}_2(\mathbf{Z}_p)$ and let~$\mathrm{Iw}$ denote the Iwahori subgroup of~$\mathrm{Sph}$. The classical theory of oldforms is a reflection of the fact that $$\dim \pi^{\mathrm{Iw}}_p = 2 = 2 \cdot \dim \pi^{\mathrm{Sph}}_p$$ and the characteristic zero version of doubling is the statement that the span of the spherical vector~$v$ under the operator~$U_p$ is all of~$\pi^{\mathrm{Iw}}_p$. The integral version of this statement is false in general. For example, given a classical ordinary modular eigenform~$f$ of weight~$k \ge 2$, the span of~$f \mod p$ under~$U_p$ is simply~$f$, because~$T_p = U_p \mod p$ in these weights. However, some version of this result \emph{does} hold in weight~$k = 1$, and it is this property which is leveraged to prove local-global compatibility results in~\cite{CG}. Let us now replace~$\mathrm{GL}_2(\mathbf{Q}_p)$ by~$\mathrm{GSp}_4(\mathbf{Q}_p)$, and let~$\mathrm{Kli}$ and~$\mathrm{Iw}$ denote the Klingen and Iwahori subgroups respectively of~$\mathrm{Sph} = \mathrm{GSp}_4(\mathbf{Z}_p)$ (denoted elsewhere in this paper by~$\Pi$ and~$I$ respectively.) Now (for the~$\Pi_p$ of interest) we will have $$\dim \Pi^{\mathrm{Iw}}_p = 8 = 2 \cdot 4 = 2 \dim \Pi^{\mathrm{Kli}}_p = 8 \dim \Pi^{\mathrm{Sph}}_p.$$ The factor~$8$ here may be interpreted as the order of the Weyl group of~$\mathrm{GSp}(4)$. 
More prosaically, the oldforms in~$\Pi^{\mathrm{Iw}}_p$ correspond to a choice of eigenvalues~$\alpha$ and~$\alpha \beta$ for the Hecke operators~$U_{\mathrm{Iw}(p),1}$ and~$U_{\mathrm{Iw}(p),2}$ respectively, whereas the oldforms in~$\Pi^{\mathrm{Kli}}_p$ correspond to a choice of eigenvalues~$\alpha + \beta$ and~$\alpha \beta$ for the Hecke operators~$U_{\mathrm{Kli}(p),1}$ and~$U_{\mathrm{Kli}(p),2} = U_{\mathrm{Iw}(p),2}$. When one passes from~$\pi^{\mathrm{Sph}}_p$ to~$\pi^{\mathrm{Iw}}_p$ for weight one modular forms or from~$\Pi^{\mathrm{Kli}}_p$ to~$\Pi^{\mathrm{Iw}}_p$ for weight~$(2,2)$ Siegel modular forms, the property of being ordinary turns out to be automatically preserved on the corresponding space of oldforms. However, this is not \emph{a priori} true when passing from~$\Pi^{\mathrm{Sph}}_p$ to~$\Pi^{\mathrm{Iw}}_p$, and so one would have to see in any geometric version of this argument a way of dealing with the non-ordinary forms.
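For orientation, we recall where the pair~$(\alpha,\beta)$ comes from in the $\mathrm{GL}_2$ setting. If~$f$ is an eigenform of weight~$k$, nebentypus~$\chi$ and level prime to~$p$ with $T_p f = a_p f$, then~$U_p$ preserves the span of the two oldforms $f(\tau)$ and $f(p\tau)$, on which it satisfies \[ U_p^2 - a_p U_p + p^{k-1}\chi(p) = 0, \] so its eigenvalues there are the roots~$\alpha,\beta$ of $X^2 - a_p X + p^{k-1}\chi(p)$. Since $p^{k-1} \equiv 0 \bmod p$ for $k \geq 2$, this also recovers the relation $T_p = U_p \bmod p$ in these weights.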
\subsection{Results of Arthur} In Section~\ref{sec:balanced-property}, we make use of the results of \cite{arthur-gsp4}, which sketches how the results of \cite{Arthur} on orthogonal and symplectic groups can be extended to the general symplectic group $\mathrm{GSp}_4$. At the time of the initial submission of this paper,
these results of Arthur were conditional on the stabilization of the twisted trace formula. (We direct the reader to~\cite{GT} for the most up-to-date status of these results for~$\mathrm{GSp}_4$.)
\section{Notation} \label{sec:notation}
We fix a prime $p$ and let $\mathcal{O}$ be the ring of integers of a finite extension $K$ of $\mathbf{Q}_p$ with residue field $k$. We let $\mathcal{C}_{\mathcal{O}}$ denote the category of complete local Noetherian $\mathcal{O}$-algebras $R$ with residue field isomorphic to $k$ (via the structural homomorphism $\mathcal{O} \to R$).
We let \[ \epsilon : G_{\mathbf{Q}} \to \mathbf{Z}_p^{\times} \] denote the cyclotomic character. The Hodge--Tate weight of
$\epsilon|G_{\mathbf{Q}_p}$ is $-1$.
If $L$ is a finite extension of $\mathbf{Q}_l$ for some prime $l$, we let $\mathrm{Art}_L : L^\times \to W_L^{\mathrm{ab}}$ denote the Artin map, normalized so that uniformizers correspond to geometric Frobenius elements. If $\gamma$ is an element of some ring $R$, then we define the character \[ \lambda(\gamma) : G_L \longrightarrow R^\times \] to be the unramified character which takes the geometric Frobenius element $\mathrm{Frob}_L$ to $\gamma$, when this character is well defined.
\subsubsection{The group $\mathrm{GSp}_4$} \label{sec:group-gsp_4}
Let $G = \mathrm{GSp}_4 = \{M \in \mathrm{GL}_4: M^{t} J M = \nu \cdot J \text{\ for some\ } \nu\in\mathrm{GL}_1\}$, where $$\displaystyle{J:=\left(\begin{matrix} 0 &0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & -1 & 0 & 0 \\ -1 & 0 & 0 & 0 \end{matrix} \right)}.$$ The group $\mathrm{Sp}_4$ is the subgroup consisting of elements with $\nu = 1$. We let $B \subset G$ be the Borel subgroup consisting of upper triangular matrices. The Lie algebras of $G$ and $B$ are denoted $\mathfrak{g}$ and $\mathfrak{b}$ while those of $\mathrm{Sp}_4$ and $B\cap \mathrm{Sp}_4$ are denoted $\mathfrak{g}^0$ and $\mathfrak{b}^0$. Let $P\subset G$ denote the Siegel parabolic, that is, the stabilizer of the plane spanned by the first two standard basis vectors. Let $\Pi \subset G$ denote the Klingen parabolic, which is the stabilizer of the line spanned by the first standard basis vector. We denote the Levi subgroup of $P$ (resp.\ $\Pi$) by $M=M_P$ (resp.\ $M_\Pi$). We have $M \cong \mathrm{GL}_2\times\mathrm{GL}_1$.
Let $T$ denote the diagonal torus in $\mathrm{GSp}_4$ and $X^*(T)$ its character group. We identify $X^*(T)$ with the lattice $\mathbf{Z}^3$ by associating to $(a,b;c)$ the character \[ \diag(t_1,t_2,\nu t_2^{-1},\nu t_1^{-1}) \mapsto t_1^a t_2^b
\nu^{c}.\] We identify the cocharacter group $X_*(T)$ with $\mathbf{Z}^3$ by associating the triple $(\alpha,\beta;\gamma)$ with the cocharacter: \[ t \mapsto \diag(t^\alpha, t^\beta , t^{\gamma-\beta} ,
t^{\gamma-\alpha}).\] The natural pairing on $X^*(T)\times X_*(T)$ is then given by $\langle (a,b;c),(\alpha,\beta;\gamma)\rangle = a\alpha+b\beta+c\gamma$.
The positive roots of $G$ with respect to the Borel $B$ are given by $\alpha_1 :=(1,-1;0)$, $\alpha_2:=(0,2;-1)$, $\alpha_3=(1,1;-1)$ and $\alpha_4=(2,0;-1)$. Of these, $\alpha_1$ and $\alpha_2$ are the simple roots. We let $\rho = (2,1;-3/2)$ denote the half-sum of the positive roots. The coroots are: $\alpha_1^{\vee} = (1,-1;0)$, $\alpha_2^{\vee} = (0,1;0)$, $\alpha_3^{\vee} = (1,1;0)$ and $\alpha_4^{\vee} = (1,0;0)$. The intersection $B\cap M$ is a Borel subgroup of $M$. The corresponding positive root is $\alpha_1$.
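One checks directly from the pairing $\langle (a,b;c),(\alpha,\beta;\gamma)\rangle = a\alpha + b\beta + c\gamma$ that these roots and coroots recover the Cartan matrix of type~$C_2$, \[ \langle \alpha_1,\alpha_1^{\vee}\rangle = 2, \quad \langle \alpha_1,\alpha_2^{\vee}\rangle = -1, \quad \langle \alpha_2,\alpha_1^{\vee}\rangle = -2, \quad \langle \alpha_2,\alpha_2^{\vee}\rangle = 2, \] and that $\rho$ is indeed the half-sum of the positive roots: \[ \tfrac{1}{2}\big( (1,-1;0) + (0,2;-1) + (1,1;-1) + (2,0;-1) \big) = \tfrac{1}{2}(4,2;-3) = (2,1;-3/2). \]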
\begin{df}
\label{defn:dominant-weights} We define the set $X^*(T)^+_{G}$ to be the set $\{ \lambda\in X^*(T) : \langle \lambda, \alpha_i^{\vee}\rangle \geq 0 \text{\ } \forall i\}$ of weights which are dominant with respect to $B$. Explicitly, \[ X^*(T)^+_{G} = \{(a,b;c)\in X^*(T) :a \geq b \geq 0\}.\] Similarly, we define the set $X^*(T)^+_{M}:= \{\lambda\in X^*(T):\langle \lambda,\alpha_1^{\vee}\rangle \geq 0\}$ of weights which are dominant with respect to $B\cap M$. Explicitly, this is: \[ X^*(T)^+_{M} = \{(a,b;c)\in X^*(T) :a \geq b\}.\] \end{df}
Note that the natural action of $M$ on the plane spanned by the first two (resp.\ the last two) standard basis vectors is the irreducible representation of highest weight $(1,0;0)$ (resp.\ $(0,-1;1)$).
We let $W_{G}=N_{G}(T)/T$ denote the Weyl group of $G$ and we define $W_{M}$ and $W_{M_{\Pi}}$ similarly. Let $s_0,s_1$ denote the generators for the Weyl group $W_{G}$ given in~\cite[\S 2]{HerzigTilouine}. We fix a set of Kostant representatives $W^M= \{ \widetilde{w}_0,\widetilde{w}_1,\widetilde{w}_2,\widetilde{w}_3\} $ for $ W_{M}\backslash W_{G}$ by setting $\widetilde{w}_0=1$, $\widetilde{w}_1=s_1$, $\widetilde{w}_2=s_1s_0$ and $\widetilde{w}_3=s_1s_0s_1$. Note that each $\widetilde{w}_i$ has length $i$. We let $w\in W_{G}$ act on $X^*(T)$ by $(w\lambda)(t)=\lambda(w^{-1}tw)$. Then we have: \begin{align*}
\widetilde{w}_1(a,b;c) &= (a,-b;b+c)\\
\widetilde{w}_2(a,b;c) & = (b,-a;a+c) \\
\widetilde{w}_3(a,b;c) & = (-b,-a;a+b+c). \end{align*} The longest element of $W_{G}$ which we denote by $w_0$ acts via $w_0(a,b;c)=(b,a;c)$.
Note that the collection of representatives $W^M$ is precisely the set of $w\in W_{G}$ such that $w(X^*(T)^+_{G})\subset X^*(T)^+_M$. We let $C_0 \subset X^*(T)_{\mathbf{R}}:= X^*(T)\otimes_{\mathbf{Z}}\mathbf{R}$ denote the closed dominant Weyl chamber. In other words, $C_0 = \{ (a,b;c)\in \mathbf{R}^3 : a \geq b \geq 0\}$. For $i=1,2,3$, we define the chambers $C_i:= \widetilde{w}_i(C_0)$.
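To verify the first assertion for the representatives above, let $(a,b;c) \in X^*(T)^+_{G}$, so that $a \geq b \geq 0$. The formulas above give first entries $a, b, -b$ and second entries $-b, -a, -a$ for $\widetilde{w}_1, \widetilde{w}_2, \widetilde{w}_3$ respectively, and in each case the first entry is at least the second (since $a \geq -b$, $b \geq -a$ and $-b \geq -a$), so each $\widetilde{w}_i(a,b;c)$ lies in $X^*(T)^+_{M}$. As $\# W_{M}\backslash W_{G} = 8/2 = 4$, the elements $\widetilde{w}_0,\dots,\widetilde{w}_3$ exhaust the set of $w\in W_G$ with $w(X^*(T)^+_{G})\subset X^*(T)^+_M$.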
\subsubsection{The group $\mathrm{GSp}_4(\mathbf{R})$} \label{sec:group-gsp_4r}
Let \[ h : \mathrm{Res}_{\mathbf{C}/\mathbf{R}}(\mathbf G_m)(\mathbf{R})= \mathbf{C}^{\times} \to G(\mathbf{R})= \mathrm{GSp}_4(\mathbf{R}) \] be the homomorphism sending $x+iy$ to the matrix \[ \begin{pmatrix}
xI_2 & yS \\
-yS & xI_2 \end{pmatrix} \] where \[S:= \begin{pmatrix}
0 & 1\\1&0 \end{pmatrix}.\] Let $K^h$ denote the centralizer of $h$ in $G(\mathbf{R})$ (acting by conjugation). Then since $h(i) = J$, we see that $K^h = \mathbf{R}^\times K_\infty$ where $K_\infty$ is the maximal compact subgroup of $G(\mathbf{R})$ given by the fixed points of the Cartan involution $g \mapsto (g^{t})^{-1}$. The similitude character restricts to a surjective map $\nu : K_\infty \to \{\pm 1\}$ whose kernel $K_{\infty,1}$ is the connected component of the identity. Explicitly, we have \[ K_{\infty,1} = \left\{
\begin{pmatrix}
SAS & SB \\
-BS & A
\end{pmatrix} \in G(\mathbf{R}) : A^t A + B^t B = I_2, A^t B = B^t A \right\}.\]
The map: \begin{align*}
K_{\infty,1} & \longrightarrow \mathrm{GL}_2(\mathbf{C}) \\
\begin{pmatrix}
SAS & SB \\
-BS & A
\end{pmatrix} &\longmapsto A +iB \end{align*}
induces an isomorphism between $K_{\infty,1}$ and $U(2)$. We let
$H_1 \subset K_{\infty,1}$ denote the preimage of the diagonal
compact torus in $U(2)$ and let $H:= \mathbf{R}^{\times}_{>0} H_1 \subset
K^h$. Let $\mathfrak{h} = \Lie H$, $\mathfrak{k}^h = \Lie K^h$ and so on. Then we have \[ \mathfrak{h} =\left\{ h(t_1,t_2;z):= \begin{pmatrix}
z & 0 & 0 & t_1 \\ 0 & z & t_2 & 0 \\ 0 & -t_2 & z & 0\\
-t_1 & 0 & 0 & z \end{pmatrix}: t_1,t_2,z \in \mathbf{R} \right\}.\] We use subscripts to denote complexifications of Lie algebras and Lie groups; thus $H_{\mathbf{C}}$ and $\mathfrak{h}_{\mathbf{C}}$ denote the complexifications of $H$ and $\mathfrak{h}$. Then $\mathfrak{h}_{\mathbf{C}} = \Lie H_{\mathbf{C}} = \{ h(t_1,t_2;z) : t_1,t_2,z \in \mathbf{C}\}$ and the surjective map $\exp : \mathfrak{h}_{\mathbf{C}} \to H_{\mathbf{C}}$ sends $h(t_1,t_2;z)$ to \[ \exp(z) \begin{pmatrix}
\cos t_1 & 0 & 0 & \sin t_1 \\
0 & \cos t_2 & \sin t_2 & 0 \\
0 & -\sin t_2 & \cos t_2 & 0 \\
-\sin t_1 & 0 & 0 & \cos t_1 \end{pmatrix}.\] Thus its kernel is the lattice generated by $h(2\pi,0;0)$, $h(0,2\pi;0)$, $h(0,0;2\pi i)$ and $h(\pi,\pi;\pi i)$. We define the lattice $X^*(H_{\mathbf{C}}) \subset \mathfrak{h}_{\mathbf{C}}^*$ to be the subspace consisting of differentials of (complex analytic) characters of $H_{\mathbf{C}}$. Equivalently, $X^*(H_{\mathbf{C}})$ is the set $\{ \lambda \in \mathfrak{h}_{\mathbf{C}}^* : \lambda (\ker( \exp:\mathfrak{h}_{\mathbf{C}}\to H_{\mathbf{C}})) \subset 2\pi i \mathbf{Z} \}$; these are the differentials of characters of $\mathbf{C}^\times\times H_{1,\mathbf{C}}$ which factor through the multiplication map $\mathbf{C}^\times\times H_{1,\mathbf{C}} \to H_{\mathbf{C}}$. We fix an isomorphism \[\{ (a,b;c) \in\mathbf{Z}^3:a+b\equiv c \bmod 2\} \stackrel{\sim}{\rightarrow} X^*(H_{\mathbf{C}})\] by letting $(a,b;c)$ correspond to the linear form \[ h(t_1,t_2;z) \mapsto at_1 i + b t_2 i + cz \] on $\mathfrak{h}_{\mathbf{C}}$. This extends by linearity to an isomorphism $\mathbf{C}^3 \to \mathfrak{h}_{\mathbf{C}}^*$.
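The formula for $\exp$ follows from the block structure of $\mathfrak{h}$: writing $h(t_1,t_2;z) = zI_4 + N(t_1,t_2)$ with $N(t_1,t_2) := h(t_1,t_2;0)$, one computes $N(t_1,t_2)^2 = -\diag(t_1^2,t_2^2,t_2^2,t_1^2)$. On the plane spanned by the first and fourth standard basis vectors, $N$ acts as the rotation generator $\left(\begin{smallmatrix} 0 & t_1 \\ -t_1 & 0 \end{smallmatrix}\right)$, whose exponential is $\left(\begin{smallmatrix} \cos t_1 & \sin t_1 \\ -\sin t_1 & \cos t_1 \end{smallmatrix}\right)$, and similarly for $t_2$ on the plane spanned by the second and third.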
Let $V^{\pm} \subset \mathbf{C}^4$ be the subspace where $h(i)$ acts via $\pm i$. Then each $V^{\pm}$ is isotropic and we have an orthogonal direct sum $\mathbf{C}^4 = V^- \oplus V^+$. Let $Q^- \subset G(\mathbf{C})$ denote the stabilizer of $V^-$.
Consider the Hodge decomposition \[ \mathfrak{g}_{\mathbf{C}} = \mathfrak{g}^{0,0}\oplus \mathfrak{g}^{-1,1} \oplus \mathfrak{g}^{1,-1} \] where $\mathfrak{g}^{p,q}$ is the subspace on which $h(z)$ acts via $z^{-p}\overline z^{-q}$. Then we have $\mathfrak{g}^{0,0} = \mathfrak{k}_{\mathbf{C}}^h$ and we let $\mathfrak{p}^+=\mathfrak{g}^{-1,1}$, $\mathfrak{p}^- = \mathfrak{g}^{1,-1}$. We also let $P^{\pm}$ denote the subgroup of $G(\mathbf{C})$ generated by $\exp(\mathfrak{p}^{\pm})$.
Then we have \[ Q^- = K^h_{\mathbf{C}} P^{-} \text{\ and\ } \Lie Q^- = \mathfrak{k}_{\mathbf{C}}^h \oplus \mathfrak{p}^{-}.\] Moreover, $K^h_{\mathbf{C}}$ is the Levi component of $Q^-$ and $P^-$ is its unipotent radical. Let \[ f_1 = \begin{pmatrix}
1 \\ 0 \\ 0 \\ -i \end{pmatrix}, f_2 = \begin{pmatrix}
0 \\ 1 \\ -i \\ 0 \end{pmatrix}, f_3 = \begin{pmatrix}
0 \\ -i \\ 1 \\ 0 \end{pmatrix}, f_4 = \begin{pmatrix}
-i \\ 0 \\ 0 \\ 1 \end{pmatrix}\in \mathbf{C}^4.\] Then $f_1,f_2$ are a basis of $V^-$ and $f_3,f_4$ are a basis of $V^+$. With respect to the basis $f_1,\dots,f_4$ of $\mathbf{C}^4$, an element \[ k = \begin{pmatrix}
SAS & SB \\
-BS & A
\end{pmatrix} \in K_{\infty,1} \] acts via \[ C^{-1} k C
= \begin{pmatrix}
SAS-iSBS & 0 \\
0 & A+iB \end{pmatrix} \text{\ where } C := \begin{pmatrix}
I_2 & -iS \\
-iS & I_2 \end{pmatrix} . \] Note that the Cayley transform $C$ conjugates the Siegel parabolic $P(\mathbf{C})$ to $Q^-$. Let $\Phi \subset X^*(H_{\mathbf{C}})$ denote the root system defined by the adjoint action of $H_{\mathbf{C}}$ on $\mathfrak{g}_{\mathbf{C}}$. The compact roots $\Phi_c$ are those appearing in $\mathfrak{k}_{\mathbf{C}}^h$, while the non-compact roots $\Phi_n$ are those appearing in $\mathfrak{p}^+\oplus \mathfrak{p}^-$. We choose a system of positive roots $\Phi^+$ in such a way that the set of positive non-compact roots $\Phi^+_n = \Phi^+ \cap \Phi_n$ coincides with the roots in $\mathfrak{p}^+$. (We do this in order to be consistent with the conventions of \cite[\S 2.4]{blasius-harris-ramak}.) We are then forced to take $\Phi^+$ to be the set of roots appearing in $C(\Lie \overline{B})C^{-1}$ where $\overline{B} \subset G$ is the Borel subgroup of \emph{lower} triangular matrices. With respect to the identification of $X^*(H_{\mathbf{C}})$ as a subset of $\mathbf{Z}^3$ given above, we then have: \begin{align*}
\Phi^+_c &= \{ (1,-1;0) \} \\
\Phi^+_n &= \{ (0,2;0), (1,1;0) , (2,0;0) \}. \end{align*} This can be seen easily from the fact that $C^{-1}h(t_1,t_2;0)C = \diag(-it_1,-it_2,it_2,it_1)$. \begin{df}
\label{defn:dominant-weights-H} We let $X^*(H_{\mathbf{C}})^+_{K^h_{\mathbf{C}}}$ denote the set of weights which are dominant with respect to the system of positive roots $\Phi^+_c$. In other words, $X^*(H_{\mathbf{C}})^+_{K^h_{\mathbf{C}}} = \{ (a,b;c)\in X^*(H_{\mathbf{C}}) : a\geq b\}$.
This set parameterizes the irreducible complex analytic representations of $K_{\mathbf{C}}^h$. For $\mu \in X^*(H_{\mathbf{C}})^+_{K^h_\mathbf{C}}$, we let $V_\mu$ denote the corresponding irreducible representation of highest weight $\mu$.
\end{df}
We note that the natural representation of $K^h_{\mathbf{C}}$ on $V^-$ (resp.\ $V^+$) is the irreducible representation of highest weight $(0,-1;1)$ (resp.\ $(1,0;1)$). Note also that the similitude character $\nu : H_{\mathbf{C}} \to \mathbf{C}^\times$ has weight $(0,0;2)$.
\section{Some Commutative Algebra}
We recall here some formalism from~\cite{CG} for proving modularity lifting results in contexts where the Hecke algebra has ``co-dimension $1$'' over the ring of diamond operators. The notion of ``balanced'' below plays the role of ``codimension one'' for the non-regular group rings $S_N:=\mathcal{O}[(\mathbf{Z}/p^N\mathbf{Z})^q]$.
\subsection{Balanced Modules}
Let $S$ be a Noetherian local ring with residue field $k$ and let $M$ be a finitely generated $S$-module.
\begin{df}
\label{defn:defect} We define the \emph{defect} $d_S(M)$ of $M$ to be \[ d_S(M)= \dim_k \mathrm{Tor}^0_S(M,k)-\dim_k \mathrm{Tor}^1_S(M,k)=\dim_k M/\mathfrak{m}_S M - \dim_k \mathrm{Tor}^1_S(M,k).\] \end{df}
Let \[ \dots \rightarrow P_i \rightarrow \dots \rightarrow P_1 \rightarrow P_0 \rightarrow M \rightarrow 0\] be a (possibly infinite) resolution of $M$ by finite free $S$-modules. Assume that the image of $P_i$ in $P_{i-1}$ is contained in $\mathfrak{m}_S P_{i-1}$ for each $i\geq 1$. (Such resolutions always exist and are often called `minimal'.) Let $r_i$ denote the rank of $P_i$. Tensoring the resolution over $S$ with $k$ we see that $P_i/\mathfrak{m}_S P_i \cong \mathrm{Tor}^i_S(M,k)$ and hence that $r_i = \dim_k \mathrm{Tor}^i_S(M,k)$.
\begin{df}
\label{defn:balanced} We say that $M$ is \emph{balanced} if $d_S(M)\geq 0$. \end{df}
If $M$ is balanced, then we see that it admits a presentation \[ S^d \rightarrow S^d \rightarrow M \rightarrow 0 \] with $d = \dim_k M/\mathfrak{m}_S M$.
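For example, take $S = k[x]/(x^{p^N})$, which is the special fibre $S_N \otimes_{\mathcal{O}} k$ of $S_N = \mathcal{O}[\mathbf{Z}/p^N\mathbf{Z}]$ (the case $q=1$), and $M = k$. A minimal free resolution of $k$ is the $2$-periodic complex \[ \cdots \longrightarrow S \stackrel{x}{\longrightarrow} S \stackrel{x^{p^N-1}}{\longrightarrow} S \stackrel{x}{\longrightarrow} S \longrightarrow k \longrightarrow 0, \] so that $r_i = 1$ for all $i$ and $d_S(k) = 1 - 1 = 0$. Thus $k$ is balanced over $S$, with presentation $S \stackrel{x}{\rightarrow} S \rightarrow k \rightarrow 0$ and $d = \dim_k k/\mathfrak{m}_S k = 1$.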
\subsection{Patching} \label{sec:patching}
We recall the abstract Taylor--Wiles style patching result from~\cite{CG}.
\begin{prop} \label{prop:patching} Suppose that \begin{enumerate} \item $R$ is an object of $\mathcal{C}_\mathcal{O}$ and $H$ is a finite $R$-module which is also finite over~$\mathcal{O}$; \item $q \geq 1$ is an integer,
and for each integer $N\geq 1$, $S_N:=\mathcal{O}[(\mathbf{Z}/p^N\mathbf{Z})^q]$; \item $R_\infty:= \mathcal{O}[[x_1,\dots,x_{q-1}]]$; \item for each $N\geq 1$, $\phi_N: R_\infty \twoheadrightarrow R$ is a surjection
in $\mathcal{C}_\mathcal{O}$ and $H_N$ is an $R_\infty\otimes_\mathcal{O}
S_N$-module \end{enumerate} and that for each $N\geq 1$ the following conditions are satisfied \begin{enumerate}[label=(\alph*)]
\item\label{cond-image} the image of $S_N$ in $\mathrm{End}_\mathcal{O}(H_N)$ is contained in the image
of $R_\infty$, and moreover, the image of the augmentation ideal of
$S_N$ in $\mathrm{End}_{\mathcal{O}}(H_N)$ is contained in the image of $\ker(\phi_N)$;
\item\label{cond-coninvts} there is an isomorphism $\psi_N: (H_N)_{\Delta_N} \stackrel{\sim}{\rightarrow} H$ of
$R_\infty$-modules, where $R_\infty$ acts on $H$ via $\phi_N$ and
$\Delta_N = (\mathbf{Z}/p^N\mathbf{Z})^q$;
\item\label{cond-balanced} $H_N$ is finite and balanced over $S_N$ (see Definition \ref{defn:balanced}). \end{enumerate} Then $H$ is a free $R$-module. \end{prop}
\begin{proof} This is Prop.~2.3 of~\cite{CG}. \end{proof}
\section{Deformations of Galois representations} \label{section:deformations}
Let $$\overline{r}: G_{\mathbf{Q}} \rightarrow \mathrm{GSp}_4(k)$$ be a continuous, odd, absolutely irreducible Galois representation with similitude character of the form $\nu(\overline{r}) = \overline{\epsilon}^{-(a-1)}$ where $a \geq 2$. Let us suppose that there exist $\alpha$ and $\beta$ in $k$ such that
$$\overline{r} | G_p \sim \left( \begin{matrix} \lambda(\alpha) & 0 & * & * \\ 0 & \lambda(\beta) & * & * \\ 0 & 0 & \nu(\overline{r}) \cdot \lambda( \beta^{-1}) & 0 \\ 0 & 0 & 0 & \nu(\overline{r}) \cdot \lambda(\alpha^{-1}) \end{matrix} \ \ \right),$$ and moreover $(\alpha^2 - 1)(\beta^2 - 1)(\alpha^2 \beta^2 - 1)(\alpha - \beta) \ne 0$. Let $S(\overline{r})$ denote the set of primes of $\mathbf{Q}$ away from $p$ at which $\overline{r}$ is ramified.
The group $\mathrm{GSp}_4$ admits an $11$-dimensional adjoint representation on its Lie algebra $\mathfrak{g}$. Let $\mathrm{ad}(\overline{r})$ denote the composition of $\overline{r}$ with this representation. For~$p > 2$, the representation~$\mathrm{ad}(\overline{r})$ admits a decomposition~$\mathrm{ad}(\overline{r}) = \mathrm{ad}^0(\overline{r}) \oplus \nu$, where~$\nu$ is the similitude character of~$\overline{r}$.
We make the following further assumptions on $\overline{r}$:
\begin{assumption}[Big Image] \label{assumption:bigimage} The restriction of $\overline{r}$ to $G_{\mathbf{Q}(\zeta_p)}$ satisfies the following conditions, cf.~\S 5.7 of~\cite{Pilloni}: \begin{enumerate} \item[H1:] The field~$\mathbf{Q}(\mathrm{ad}^0(\overline{r}))$ does not contain~$\zeta_p$. \item[H2:] For any~$m$, there exists an element~$\sigma \in G_{\mathbf{Q}(\zeta_{p^m})}$ such that~$\overline{r}(\sigma)$ has four distinct eigenvalues and such that the action of~$\sigma$ on each irreducible subrepresentation of~$\mathrm{ad}^0(\overline{r})|_{G_{\mathbf{Q}(\zeta_{p^m})}}$ has~$1$ as an eigenvalue. \item[H3:] Neither the image~$\Gamma$ of~$\mathrm{ad}^0(\overline{r})$ nor the image of~$\mathrm{ad}^0(\overline{r})(1)$ admits a quotient of degree~$p$. \end{enumerate} \end{assumption}
If this assumption holds, we say that~$\overline{r}$ has \emph{big image}, although condition~(H1)
depends on more than the group-theoretic image of~$\overline{r}$ or even~$\overline{r}|_{G_{\mathbf{Q}(\zeta_p)}}$.
\begin{assumption}[Neatness] \label{assumption:neatness} There exists a~$\sigma \in G_{\mathbf{Q}}$ with~$\epsilon(\sigma) = q \not\equiv 1 \mod p$ such that the ratio of any two eigenvalues of~$\overline{r}(\sigma)$ is not equal to~$q \mod p$. \end{assumption}
This condition is imposed to avoid dealing with stacks. If~$p \ge 5$, any surjective representation~$\overline{r}: G_{\mathbf{Q}} \rightarrow \mathrm{GSp}_4(\mathbf{F}_p)$ whose similitude character is a power of~$\overline{\epsilon}$ will be neat. By assumption, the image contains an element~$\overline{r}(\sigma)$ which is scalar with eigenvalue~$\lambda \ne \pm 1$. If~$q = \epsilon(\sigma) \equiv 1 \mod p$, then the similitude character would also equal~$1$. But the similitude character of the scalar matrix~$\lambda$ is~$\lambda^2 \not\equiv 1 \mod p$.
\begin{assumption}[Ramification] \label{assumption:ramification}
If $x \in S(\overline{r})$, then $\overline{r} | G_x$ is one of the following types: \begin{enumerate}
\item{\bf U3 \rm}: $\overline{r} | I_x$ has unipotent image, and $\overline{r} |
I_x$ is conjugate to the group generated by $\exp(N_3)$, where $$ N_3 = \left( \begin{matrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & -1 \\ 0 & 0 & 0 & 0 \end{matrix} \right).$$
\item{\bf U2 \rm}: $\overline{r} | I_x$ has unipotent image, and $\overline{r} |
I_x$ is conjugate to the group generated by $\exp(N_2)$, where $$ N_2 = \left( \begin{matrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & -1 \\ 0 & 0 & 0 & 0 \end{matrix} \right).$$
\item{\bf U1 \rm}: $\overline{r} | I_x$ has unipotent image, and $\overline{r} |
I_x$ is conjugate to the group generated by $\exp(N_1)$, where
$$ N_1 = \left( \begin{matrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{matrix} \right).$$
\item{\bf P\rm}: $\overline{r} | G_x$ is a direct sum of characters, and $\overline{r} | I_x$ has the form $$\left( \begin{matrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & \chi_x & 0 \\ 0 & 0 & 0 & \chi_x \end{matrix} \right)$$
for some non-trivial character $\chi_x$ of $I_x$. Both the plane of invariants under~$I_x$ and the plane on which~$I_x$ acts by~$\chi_x$
are isotropic.
Moreover $x -1$ is prime to $p$.
\item{\bf H\rm}: $\overline{r} | I_x$ is absolutely irreducible and $x^4 - 1$ is prime to $p$. \end{enumerate} \end{assumption}
\begin{remark} \emph{Since we are assuming that the similitude character of~$\overline{r}$ is a power of the cyclotomic character, it turns out that~$\overline{r}|I_x$ can never be of type~{\bf P\rm}. We expect that our arguments can also be adapted to deal with representations~$\overline{r}$ with more general (odd) similitude characters, but we made this assumption to simplify some of the arguments involving~$q$-expansions (in particular, to avoid various Nebentypus characters). } \end{remark}
Note that non-trivial unipotent representations are not direct sums, so a prime $x \in S(\overline{r})$ is either of type {\bf U\rm}, {\bf P\rm}, or {\bf H\rm}, but never simultaneously any two of these types. Moreover, $x$ is of type {\bf U2\rm} or {\bf
U3\rm} if and only if $\overline{r}(I_x)$ is generated by an element $\exp(N)$ where $N$ is nilpotent of rank $2$, or $3$ respectively.
Let $Q$ denote a finite set of primes of $\mathbf{Q}$ disjoint from $S(\overline{r})\cup\{p\}$. We assume that for each $x\in Q$ the following hold: \begin{itemize} \item $x\equiv 1 \mod p$,
\item $\overline{r} | G_x$ is a direct sum of four pairwise distinct
characters. Label these characters as
$\lambda(\alpha_x),\lambda(\beta_x),\lambda(\gamma_x)$, and $\lambda(\delta_x)$
such that the planes $\lambda(\alpha_x)\oplus\lambda(\beta_x)$ and
$\lambda(\gamma_x)\oplus\lambda(\delta_x)$ are isotropic and
$\alpha_x\delta_x = \beta_x\gamma_x = \nu(\overline{r})(\mathrm{Frob}_x)$. \end{itemize}
(By abuse of notation, we sometimes use $Q$ to denote the product of primes in $Q$.) For objects $R$ in $\mathcal{C}_\mathcal{O}$, a \emph{deformation} of $\overline{r}$ to $R$ is a $\ker(\mathrm{GSp}_4(R)\to \mathrm{GSp}_4(k))$-conjugacy class of continuous lifts $r : G_\mathbf{Q}\to\mathrm{GSp}_4(R)$ of $\overline{r}$. We will often refer to the deformation containing a lift $r$ simply by $r$.
\begin{remark} \emph{When deforming Galois representations over~$\mathbf{Q}$, we could work either with fixed or varying similitude character --- both give rise to deformation problems with~$l_0 = 1$. We make the (somewhat arbitrary) choice to work with deformations with fixed similitude character in this paper, because it is the ``correct'' approach for general totally real fields --- for totally real fields other than~$\mathbf{Q}$, the invariant~$l_0$ increases (by~$[F:\mathbf{Q}] - 1$) when deforming the similitude character. } \end{remark}
\begin{df}
\label{defn:minimal} We say that a deformation $r:G_\mathbf{Q}\to \mathrm{GSp}_4(R)$ of $\overline{r}$ is \emph{minimal
outside $Q$} if it satisfies the following properties: \begin{enumerate} \item\label{det} The similitude character $\nu(r)$ is equal to
$\epsilon^{-(a-1)}$.
\item\label{outside-NQ} If $x\not\in Q\cup S(\overline{r}) \cup \{p\}$ is a prime of $\mathbf{Q}$, then $r|G_x$ is unramified. \item\label{at-special} If $x \in S(\overline{r})$ is of type {\bf U1}, {\bf U2\rm
\ or \bf U3\rm}, then $r|I_x$ has unipotent image and its image is
topologically generated by an element $\exp(N)$ where $N$ is
nilpotent of rank $1$, $2$ or $3$ respectively. \item\label{at-principle} If $x \in S(\overline{r})$ is of type {\bf P\rm}, then $r(I_x) \stackrel{\sim}{\rightarrow} \overline{r}(I_x)$.
\item\label{at-Q} If $x\in Q$, then $r|G_x \cong V_1 \oplus V_2$ where
each $V_i$ is an isotropic plane in $R^4$ and $V_1$ lifts
$\lambda(\alpha_x)\oplus \lambda(\beta_x)$ while $V_2$ lifts
$\lambda(\gamma_x)\oplus \lambda(\delta_x)$. Moreover, $I_x$ acts by
scalars (via some character) on~$V_1$ and by scalars via the inverse of
this character on~$V_2$.
\item\label{at-p} The representation $r$ has the following shape at $p$:
$$r | G_p \sim \left( \begin{matrix}
\chi_{\alpha} \psi^{-1} & 0 & * & * \\
0 & \chi_{\beta} \psi^{-1} & * & * \\
0 & 0 & \epsilon^{-(a-1)} \chi_{\beta}^{-1} \psi & 0 \\
0 & 0 & 0 & \ \epsilon^{-(a-1)} \chi_{\alpha}^{-1} \psi
\ \end{matrix} \ \ \right),$$ where $\chi_{\alpha}$ and $\chi_{\beta}$ are unramified characters lifting $\lambda(\alpha)$ and $\lambda(\beta)$ respectively, and~$\psi$ is an unramified character which is trivial modulo the maximal ideal. \end{enumerate} If $Q$ is empty, we will refer to such deformations simply as being \emph{minimal}. If $r$ satisfies conditions \eqref{outside-NQ}--\eqref{at-principle}, then we say $r$ is \emph{weakly minimal} outside $Q$. \end{df}
\begin{remark} \emph{The local condition at $p$ is equivalent to
asking that $r$ is ordinary (of fixed weight). When~$a = 2$ it is also equivalent to being finite flat. This is because, for unramified characters ${\psi_1}$ and $\psi_2$, the group $\mathrm{Ext}^1({\psi_1},\psi_2)$ in this category is trivial, and the group
$\mathrm{Ext}^1(\epsilon {\psi_1},\psi_2)$ is the same whether it is computed in the category of finite flat group schemes or as $G_p$-modules, as long as~${\psi_1} {\psi_2}^{-1} \not\equiv 1 \mod p$. The latter condition follows (for all the relevant extensions) from the assumption $(\alpha \beta - 1)(\alpha^2 - 1)(\beta^2 - 1)(\alpha - \beta) \ne 0$.} \end{remark}
The functor that associates to each object $R$ of $\mathcal{C}_\mathcal{O}$ the set of deformations of $\overline{r}$ to $R$ which are minimal outside $Q$ is represented by a complete Noetherian local $\mathcal{O}$-algebra $R_Q$. This follows from the proof of Theorem 2.41 of~\cite{DDT}. If $Q=\emptyset$, we will sometimes denote $R_Q$ by $R^{\min}$.
Let $H^1_{Q}(\mathbf{Q},\mathrm{ad}^0(\overline{r}))$ denote the Selmer group defined as the kernel of the map \[ H^1(\mathbf{Q},\mathrm{ad}^0(\overline{r})) \longrightarrow \bigoplus_{x} H^1(\mathbf{Q}_x,\mathrm{ad}^0(\overline{r}))/L_{Q,x}\] where $x$ runs over all primes of $\mathbf{Q}$ and: \begin{itemize} \item If $x \notin Q \cup \{p\}$, then
$L_{Q,x} = H^1(G_x/I_x,(\mathrm{ad}^0(\overline{r}))^{I_x})$.
\item If $x \in Q$, then $H^1(G_x,\mathrm{ad}^0(\overline{r}))$ is isomorphic to the
subspace of \[ H^1(G_x,\mathrm{ad} \ \lambda(\alpha_x)) \oplus H^1(G_x,\mathrm{ad} \lambda(\beta_x)) \oplus H^1(G_x,\mathrm{ad} \ \lambda(\beta_x)^{-1}) \oplus H^1(G_x,\mathrm{ad} \ \lambda(\alpha_x)^{-1}) \] consisting of elements $(c_1,c_2,d_2,d_1)$ with $c_1+d_1 = c_2+d_2$. (Note that each summand is a copy of $\mathrm{Hom}_{\cts}(G_x,k)$.) We let $L_{Q,x}$ denote the subspace corresponding to elements $(c_1,c_2,d_2,d_1)$ with $c_1-c_2$ and $d_1 - d_2$ and~$c_1+d_1$ (equivalently, $c_2 + d_2$) unramified. \item If $x = p$, then we define $L_{Q,p}=L_p$ as follows: let $\mathfrak{u}\subset \mathfrak{b}^0$ be the subspace of matrices whose non-zero entries appear in the upper right $2\times 2$ block. We define $L'_p = \ker (H^1(G_p,\mathfrak{b}^0) \to H^1(I_p,\mathfrak{b}^0/\mathfrak{u}))$ and $L_p=L_{Q,p}=\im(L'_p \to H^1(G_p,\mathfrak{g}^0))$. \end{itemize} Let $H^1_{Q}(\mathbf{Q},\mathrm{ad}^0(\overline{r}(1)))$ denote the corresponding dual Selmer group.
\begin{lemma}
\label{lem:FL} We have $\dim_k L_p - \dim_k H^0(G_p,\mathrm{ad}^0(\overline{r})) = 3$. \end{lemma}
\begin{proof} The subspace $L'_p\subset H^1(G_p,\mathfrak{b}^0)$ is precisely the set of elements mapping to the subspace $H^1(G_p/I_p,(\mathfrak{b}^0/\mathfrak{u})^{I_p})\subset H^1(G_p,\mathfrak{b}^0/\mathfrak{u})$. We have $\mathfrak{b}^0/\mathfrak{u} \cong 1\oplus 1\oplus
\lambda(\beta)\lambda(\alpha)^{-1}$ as a $k[G_p]$-module and hence
$H^1(G_p/I_p,(\mathfrak{b}^0/\mathfrak{u})^{I_p})$ is 2-dimensional since $\alpha\neq
\beta$. The condition $(\alpha^2 - 1)(\beta^2 - 1)(\alpha^2 \beta^2 - 1)(\alpha - \beta) \ne 0$ implies that $h^2(G_p,\mathfrak{u})=0$ and hence $H^1(G_p,\mathfrak{b}^0) \twoheadrightarrow H^1(G_p,\mathfrak{b}^0/\mathfrak{u})$. It follows that
$\dim_k L'_p = 2 + h^1(G_p,\mathfrak{b}^0) - h^1(G_p,\mathfrak{b}^0/\mathfrak{u})$.
Thus \[ \dim_k L_p'- h^0(G_p,\mathfrak{b}^0) = 2 + h^1(G_p,\mathfrak{u}) - h^0(G_p,\mathfrak{b}^0/\mathfrak{u}) - h^0(G_p,\mathfrak{u}).\] We have $h^0(G_p,\mathfrak{u})=0$ and $h^0(G_p,\mathfrak{b}^0/\mathfrak{u})=2$. The Euler characteristic formula implies that $h^1(G_p,\mathfrak{u})=3$. Thus \[ \dim_k L_p'- h^0(G_p,\mathfrak{b}^0) = 2 + 3 - 2- 0 = 3.\] Finally, the condition on $\alpha$ and $\beta$ implies that $h^0(G_p,\mathfrak{g}^0/\mathfrak{b}^0)=0$. It follows that $h^0(G_p,\mathfrak{b}^0)=h^0(G_p,\mathfrak{g}^0)$ and $L'_p \stackrel{\sim}{\rightarrow} L_p$. This concludes the proof. \end{proof}
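\begin{remark} \emph{The Euler characteristic step in the proof above is the local Euler characteristic formula over $\mathbf{Q}_p$: for a finite $k[G_p]$-module $M$ one has \[ h^0(G_p,M) - h^1(G_p,M) + h^2(G_p,M) = -\dim_k M. \] Applied to $M = \mathfrak{u}$, where $\dim_k \mathfrak{u} = 3$ and $h^0(G_p,\mathfrak{u}) = h^2(G_p,\mathfrak{u}) = 0$, it gives $h^1(G_p,\mathfrak{u}) = 3$.} \end{remark}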
\begin{prop} \label{prop:tangent-space-w1}
The reduced tangent space $\mathrm{Hom}(R_Q/\mathfrak{m}_\mathcal{O},k[\epsilon]/\epsilon^2)$ of $R_{Q}$ has
dimension $$\dim_k H^1_{Q}(\mathbf{Q},\mathrm{ad}^0(\overline{r}(1))) - 1 + \# Q. $$ \end{prop}
\begin{proof} The argument is very similar to that of Corollary~2.43 of~\cite{DDT}. The reduced tangent space has dimension $\dim_k H^1_Q(\mathbf{Q},\mathrm{ad}^0(\overline{r}))$. By Theorem 2.18 of \emph{loc.\ cit.}\ this is equal to \begin{align*} \dim_k H^1_Q(\mathbf{Q},\mathrm{ad}^0(\overline{r}(1))) + \dim_k
& H^0(\mathbf{Q},\mathrm{ad}^0(\overline{r}))-\dim_k H^0(\mathbf{Q},\mathrm{ad}^0(\overline{r}(1))) \\ + \sum_{x} (\dim_k L_{Q,x}-\dim_k & H^0(\mathbf{Q}_x,\mathrm{ad}^0(\overline{r}))) -\dim_k H^0(G_\infty,\mathrm{ad}^0(\overline{r})), \end{align*} where $x$ runs over all finite places of $\mathbf{Q}$. The second term is equal to~$0$ and the third term vanishes (by the absolute irreducibility of $\overline{r}$ and the fact that $\overline{r}\not \cong \overline{r}\otimes \epsilon$). Now, we have: \begin{itemize} \item $\dim_k L_{Q,x} - \dim_k H^0(\mathbf{Q}_x,\mathrm{ad}^0(\overline{r}))=0$ for $x\not \in Q\cup \{ p\}$; \item $\dim_k L_{Q,x} - \dim_k H^0(\mathbf{Q}_x,\mathrm{ad}^0(\overline{r}))= 3$ for $x=p$; \item $\dim_k L_{Q,x} - \dim_k H^0(\mathbf{Q}_x,\mathrm{ad}^0(\overline{r}))= 1$ for $x\in Q$ (by \cite[Prop.\ 10.4.1]{TG}); and \item $\dim_k H^0(G_\infty,\mathrm{ad}^0(\overline{r}))=4$. \end{itemize} This concludes the proof. \end{proof}
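\begin{remark} \emph{Explicitly, substituting the four local contributions into the formula above yields \[ \dim_k H^1_Q(\mathbf{Q},\mathrm{ad}^0(\overline{r})) = \dim_k H^1_Q(\mathbf{Q},\mathrm{ad}^0(\overline{r}(1))) + 0 - 0 + \left(3 + \# Q\right) - 4, \] which is the dimension claimed in the statement.} \end{remark}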
The next result (on the existence of Taylor--Wiles primes) follows from the previous proposition and the proof of \cite[Prop.\ 5.6]{Pilloni}.
\begin{prop} \label{prop:tw-primes-w1} Let $q =\dim_k H^1_{\emptyset}(\mathbf{Q},\mathrm{ad}^0(\overline{r}(1)))$ and recall that we are supposing $\overline{r}$ satisfies Assumption \ref{assumption:bigimage}. Then $q\geq 1$ and for any integer $N\geq 1$ we can find a set $Q_N$
of primes of $\mathbf{Q}$ such that \begin{enumerate} \item $\# Q_N =q$. \item $x \equiv 1 \mod p^N$ for each $x\in Q_N$. \item For each $x\in Q_N$, $\overline{r}$ is unramified at $x$ and
$\overline{r}(\mathrm{Frob}_x)$ has four pairwise distinct eigenvalues. \item $H^1_{Q_N}(\mathbf{Q},\mathrm{ad}^0(\overline{r}(1)))=(0)$. \end{enumerate}
In particular, the reduced tangent space of $R_{Q_N}$ has dimension
$q-1$ and $R_{Q_N}$ is a quotient of a power series
ring over $\mathcal{O}$ in $q-1$ variables. \end{prop}
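The dimension count in the final assertion can be tallied explicitly; the following sketch combines the dimension formula from the preceding proof (applied with $Q = Q_N$) with conditions (1) and (4):

```latex
% Sketch of the tally: each brace names the source of the corresponding term
% in the dimension formula from the preceding proof.
\[
\underbrace{\dim_k H^1_{Q_N}(\mathbf{Q},\mathrm{ad}^0(\overline{r}(1)))}_{=\,0 \text{ by (4)}}
\;+\;\underbrace{3}_{x = p}
\;+\;\underbrace{\#Q_N}_{=\,q \text{ by (1)}}
\;-\;\underbrace{4}_{G_\infty}
\;=\; q-1.
\]
```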
\begin{example}[Examples of representations with big image] \label{inducedexample} Suppose that~$p \ge 5$. \begin{enumerate} \item Let $K/\mathbf{Q}$ be an imaginary quadratic field not contained in~$\mathbf{Q}(\zeta_p)$. Let $$\overline{\rho}: G_{K} \rightarrow \mathrm{GL}_2(\mathbf{F}_p)$$ be a representation with determinant~$\epsilon^{1-k}$ for some integer~$k$ such that the images of~$\overline{\rho}$ and~$\overline{\rho}^{c}$ for any complex conjugation~$c \in \mathrm{Gal}(\overline{\Q}/\mathbf{Q})$ both contain~$\mathrm{SL}_2(\mathbf{F}_p)$
and have totally disjoint fixed fields over~$K(\zeta_p)$.
Then the representation $$\overline{r} = \mathrm{Ind}^{\mathbf{Q}}_{K} \overline{\rho}$$ preserves a symplectic form and has big image. \item Suppose the image of~$\overline{r}$ is~$\mathrm{GSp}_4(\mathbf{F}_p)$. Then~$\overline{r}$ has big image. \end{enumerate} \end{example}
\begin{proof} The second claim follows immediately for~$p \ge 5$ from \cite[Prop.\ 5.8]{Pilloni}. For the first claim, it is an easy consequence of the fact that~$\mathrm{SL}_2(\mathbf{F}_p)$ is perfect for~$p \ge 5$ that~$H3$ holds, and similarly, assuming that~$K \not\subset \mathbf{Q}(\zeta_p)$, that~$H1$ holds. Hence it suffices to find an element in the image with distinct eigenvalues and with~$1$ as an eigenvalue for every irreducible constituent of~$\mathrm{ad}^0(\overline{r})$. We first compute the representation~$\mathrm{ad}^0(\overline{r})$. Note that the duals of~$\overline{\rho}$ and~$\overline{\rho}^c$ can be identified with~$\overline{\rho} \otimes \epsilon^{k-1}$ and~$\overline{\rho}^c \otimes \epsilon^{k-1}$ respectively. Over~$K$, we have an identification
$$\mathrm{ad}^0(\overline{r}) |_{G_K} = (\overline{\rho} \otimes \overline{\rho}^c) \otimes \epsilon^{k-1} \oplus \mathrm{ad}^0(\overline{\rho}) \oplus \mathrm{ad}^0(\overline{\rho}^c),$$ and over~$\mathbf{Q}$, we have an identification $$\mathrm{ad}^0(\overline{r}) = \mathrm{As}(\overline{\rho}) \otimes \epsilon^{k-1} \oplus \mathrm{Ind}^{\mathbf{Q}}_{K} \mathrm{ad}^0(\overline{\rho}),$$ where~$\mathrm{As}$ is the Asai representation. Over~$\mathbf{Q}(\zeta_{p^m})$ for any~$m$, the character~$\epsilon^{k-1}$ is trivial, and hence
the image of~$\overline{r} |_{G_{\mathbf{Q}(\zeta_{p^m})}}$ under our assumptions is the group~$\mathrm{SL}_2(\mathbf{F}_p)^2 \rtimes \mathbf{Z}/2\mathbf{Z}$. Since~$1$ and~$-1$ are always eigenvalues of any element acting on~$\mathrm{Ind}^{\mathbf{Q}}_{K} \mathrm{ad}^0(\overline{\rho})$, it suffices to find an element~$\sigma \in \mathrm{SL}_2(\mathbf{F}_p)^2 \rtimes \mathbf{Z}/2\mathbf{Z}$ which has distinct eigenvalues under~$\overline{r}$ and has an eigenvalue~$1$ in~$\mathrm{As}(\overline{\rho})$. To be more precise, since we haven't been careful about distinguishing the Asai representation from its quadratic twist, we shall find an element with eigenvalues both~$1$ and~$-1$. One can explicitly realize the Asai representation as follows. Let~$V$ be the standard representation of~$\mathrm{SL}_2(\mathbf{F}_p)$ over~$\mathbf{F}_p$, and let~$V \otimes V$ be the exterior tensor product representation of~$\mathrm{SL}_2(\mathbf{F}_p) \times \mathrm{SL}_2(\mathbf{F}_p)$. The element~$(g,h)$ acts on~$v \otimes w$ via~$(g,h)(v \otimes w) = (gv \otimes hw)$. The Asai representation is determined uniquely by the action of a fixed lift of complex conjugation~$c \in \mathrm{Gal}(\overline{\Q}/\mathbf{Q})$, which acts on~$V \otimes V$ by the formula~$c(v \otimes w) = w \otimes v$.
Consider the elements~$g,h \in \mathrm{SL}_2(\mathbf{F}_p)$ such that, with respect to some chosen basis~$V = \{u,v\}$, $$g = \left( \begin{matrix} x & 0 \\ 0 & x^{-1} \end{matrix} \right), \qquad h = \left( \begin{matrix} y & 0 \\ 0 & y^{-1} \end{matrix} \right).$$ Then~$c \cdot (g,h)$ acts on~$\overline{r}$ via the matrix $$\left( \begin{matrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{matrix} \right)\left( \begin{matrix} x & 0 & 0 & 0 \\ 0 & x^{-1} & 0 & 0 \\ 0 & 0 & y & 0 \\ 0 & 0 & 0 & y^{-1} \end{matrix} \right) $$ with eigenvalues~$\pm (xy)^{1/2}$ and~$\pm (xy)^{-1/2}$. On the other hand, the action of this element via the Asai representation (and basis~$u \otimes u$, $v \otimes v$, $u \otimes v$, $v \otimes u$) is $$\left( \begin{matrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{matrix} \right)\left( \begin{matrix} xy & 0 & 0 & 0 \\ 0 & (xy)^{-1} & 0 & 0 \\ 0 & 0 & (x/y) & 0 \\ 0 & 0 & 0 & (x/y)^{-1} \end{matrix} \right) $$ with eigenvalues~$xy$, $(xy)^{-1}$, and~$\pm 1$. The four eigenvalues are distinct as long as~$\pm (xy)^{1/2} \ne \pm (xy)^{-1/2}$, or equivalently if~$(xy)^2 \ne 1$. One can now choose~$x = 2$ and~$y = 1$ in~$\mathbf{F}^{\times}_p$. \end{proof}
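The eigenvalue claims above can be checked mechanically. The following sketch (illustrative, not code from the paper) uses the sample values $x = 2$, $y = 1$ chosen in the proof: the matrix $M$ of $c\cdot(g,h)$ satisfies $M^2 = \mathrm{diag}(xy,(xy)^{-1},xy,(xy)^{-1})$ with $\mathrm{tr}(M)=0$, forcing eigenvalues $\pm(xy)^{1/2}$, $\pm(xy)^{-1/2}$, while the Asai matrix has eigenvalues $xy$, $(xy)^{-1}$, $\pm 1$.

```python
# Illustrative check (not code from the paper) of the eigenvalue claims for
# the element c.(g, h), using the sample values x = 2, y = 1 from the proof.
from fractions import Fraction

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

x, y = Fraction(2), Fraction(1)   # (xy)^2 != 1, as required

# Action on r-bar: block-swap permutation times diag(x, 1/x, y, 1/y).
P = [[0, 0, 1, 0], [0, 0, 0, 1], [1, 0, 0, 0], [0, 1, 0, 0]]
D = [[x, 0, 0, 0], [0, 1/x, 0, 0], [0, 0, y, 0], [0, 0, 0, 1/y]]
M = matmul(P, D)

# M^2 = diag(xy, 1/xy, xy, 1/xy) and tr(M) = 0, so the eigenvalues of M are
# +-sqrt(xy) and +-1/sqrt(xy), as asserted in the text.
assert matmul(M, M) == [[x*y, 0, 0, 0], [0, 1/(x*y), 0, 0],
                        [0, 0, x*y, 0], [0, 0, 0, 1/(x*y)]]
assert sum(M[i][i] for i in range(4)) == 0

# Asai action in the basis (u@u, v@v, u@v, v@u).
Pa = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]]
Da = [[x*y, 0, 0, 0], [0, 1/(x*y), 0, 0], [0, 0, x/y, 0], [0, 0, 0, y/x]]
A = matmul(Pa, Da)

# Eigenvalues xy and 1/(xy) appear on the diagonal; the remaining 2x2 block
# squares to the identity with trace 0, giving eigenvalues +1 and -1.
assert A[0][0] == x*y and A[1][1] == 1/(x*y)
B = [[A[2][2], A[2][3]], [A[3][2], A[3][3]]]
assert matmul(B, B) == [[1, 0], [0, 1]] and B[0][0] + B[1][1] == 0
```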
\begin{remark} \emph{Suppose that~$K$ is an imaginary quadratic field, and suppose that $E/K$ is an elliptic curve which neither has~CM nor is isogenous (over~$\overline{K}$) to its Galois conjugate~$E^{c}/K$. We claim that Example~\ref{inducedexample} applies to the mod~$p$ representations~$\overline{\rho}: G_K \rightarrow \mathrm{GL}_2(\mathbf{F}_p)$ associated to the dual of~$E[p]$ for sufficiently large~$p$. The representations~$\overline{r}$ in this case are the duals of the representations~$A[p]$ associated to the abelian surface~$A = \mathrm{Res}^{\mathbf{Q}}_{K}(E)$. By~\cite{MR0387283}, the Galois representations~$\overline{\rho}_p, \overline{\rho}^c_p: G_K \rightarrow \mathrm{GL}_2(\mathbf{F}_p)$ associated to the duals of~$E[p]$ and~$E^c[p]$
have images~$\mathrm{GL}_2(\mathbf{F}_p)$ and determinants~$\epsilon^{1-2}$ (i.e.\ $\epsilon^{1-k}$ with~$k=2$) for all sufficiently large~$p \ge 5$. Let~$F/K$ and~$F^c$ denote the corresponding extensions, so~$\mathrm{Gal}(F/K)$ and~$\mathrm{Gal}(F^c/K)$ are both isomorphic to~$\mathrm{GL}_2(\mathbf{F}_p)$, and~$\mathrm{Gal}(F/K(\zeta_p))$ and~$\mathrm{Gal}(F^c/K(\zeta_p))$ are both isomorphic to~$\mathrm{SL}_2(\mathbf{F}_p)$. By the simplicity of~$\mathrm{PSL}_2(\mathbf{F}_p)$ for~$p \ge 5$, the only non-trivial quotients of~$\mathrm{SL}_2(\mathbf{F}_p)$ are~$\mathrm{PSL}_2(\mathbf{F}_p)$ and~$\mathrm{SL}_2(\mathbf{F}_p)$. This implies that if~$H:= F \cap F^c \supseteq K(\zeta_p)$ is strictly larger than~$K(\zeta_p)$, then
either~$\mathrm{Gal}(H/K) = \mathrm{GL}_2(\mathbf{F}_p)$, or~$\mathrm{Gal}(H/K) = \mathrm{GL}_2(\mathbf{F}_p)/\pm I$.
In either case, the projective representations associated to~$\overline{\rho}_p$ and~$\overline{\rho}^c_p$
both factor through~$\mathrm{Gal}(H/K)$.
Since all automorphisms of~$\mathrm{PGL}_2(\mathbf{F}_p)$ are inner,
this implies that the projective representations of~$\overline{\rho}_p$ and~$\overline{\rho}^c_p$ are isomorphic, and hence $\overline{\rho}_p \simeq \overline{\rho}^c_p \otimes \chi_p$ for some character~$\chi_p$ which (by comparing determinants) is at most quadratic. Assume~$p$ is sufficiently large so that~$E$
has good reduction at all primes above~$p$ and moreover that~$p$ is unramified in~$K$. Then~$\overline{\rho}_p$ and~$\overline{\rho}^c_p$ are both finite flat at~$v|p$, which forces~$\chi_p$ to be unramified at all primes above~$p$. But this implies that~$\chi_p$ is unramified outside primes dividing the conductors~$N$ and~$N^c$ of~$E$ and~$E^{c}$ respectively. There are only finitely many such quadratic characters by class field theory. Hence, if there are infinitely many primes~$p$ for which the assumptions of~Example~\ref{inducedexample} do not hold, then there exists a \emph{fixed} character~$\chi$ with~$\chi^2 = 1$ and isomorphisms $\overline{\rho}_p \simeq \overline{\rho}^c_p \otimes \chi$ for infinitely many~$p$. Such an isomorphism (for a single~$p$) implies that~$a_{v} = \chi(v) a_{v^c} \mod p$ for all pairs of conjugate primes~$v$ and~$v^c$ of good reduction for~$E$, and hence, given infinitely many such~$p$, one deduces the equality~$a_v = \chi(v) a_{v^c}$. If~$L/K$ is the (at most quadratic) extension cut out by~$\chi$, this implies (by Cebotarev) that the Tate modules (for any fixed prime) of~$E$ and~$E^c$ are isomorphic as $G_L$-representations, and hence (by Faltings~\cite{MR718935}) that~$E$ and~$E^c$ are isogenous over~$L$. } \end{remark}
\section{Siegel threefolds} \label{section:SMF}
\subsection{Level Structure} \label{sec:level-str} Recall that there are two conjugacy classes of maximal parabolic subgroups of $\mathrm{GSp}(4)$ represented by the \emph{Siegel parabolic} $P$ which is block upper triangular with Levi $$M = M_P := \left\{ \left( \begin{matrix} A & \\ & \lambda B\end{matrix} \right): \lambda \in \mathrm{GL}_1, A \in \mathrm{GL}_2, B = S {}^{t}A^{-1} S, S = \left( \begin{matrix} & 1 \\ 1 & \end{matrix} \right) \right\},$$ and the \emph{Klingen parabolic} $\Pi$ which is block upper triangular with Levi $$M_{\Pi}:= \left\{ \left( \begin{matrix} \lambda & & \\ & A & \\ & & \lambda^{-1} \det(A) \end{matrix} \right): \lambda \in \mathrm{GL}_1, A \in \mathrm{GL}_2 \right\}.$$ These both contain the Borel subgroup~$B$. For each prime $x$, these give rise to parahoric subgroups $P(x)$, $\Pi(x)$, and~$I(x)$ of $\mathrm{GSp}_4(\mathbf{Z}_x)$, namely, the inverse images of $P$, $\Pi$, and~$B$ over $\mathbf{F}_x$. (The group~$I(x)$ is called the Iwahori subgroup.) The Klingen parahoric subgroup contains a normal subgroup $\Pi(x)^{+}$ with $\Pi(x)/\Pi(x)^{+} \simeq (\mathbf{Z}/x\mathbf{Z})^{\times}$ (via projection onto $\lambda \bmod x$). For each prime~$x$, we also have the~\emph{paramodular group}~$K(x)$, which is the stabilizer in~$\mathrm{GSp}_4(\mathbf{Q}_x)$ of~$\mathbf{Z}_x \oplus \mathbf{Z}_x \oplus \mathbf{Z}_x \oplus x \mathbf{Z}_x$, and is the intersection $$\left( \begin{matrix} * & * & * & */x \\ * & * & * & */x \\ * & * & * & */x \\ x* & x* & x* & * \end{matrix} \right) \cap \mathrm{GSp}_4(\mathbf{Q}_x)$$ for values~$* \in \mathbf{Z}_x$.
\subsection{Cohomology of Siegel \texorpdfstring{$3$}{3}-folds} \label{sec:cohom} Let $S$ and $Q$ be finite sets of primes of $\mathbf{Q}$ which are disjoint from each other and do not contain $p$. By a slight abuse of notation, we will sometimes denote the product of the primes in $Q$ by the same symbol $Q$.
For each $x \in S$, let $K_x \subset \mathrm{GSp}_4(\mathbf{Z}_x)$ equal one of $P(x)$, $\Pi(x)$, $K(x)$, $\Pi(x)^+$, $I(x)$ or the full congruence subgroup of level $x$. For $x \not \in S$, we let $K_x = \mathrm{GSp}_4(\mathbf{Z}_x)$ and we define $K :=\prod_x K_x \subset \mathrm{GSp}_4(\mathbb{A}^\infty)$. For $x\in Q$, we let $K_{x,0}=\Pi(x)$ and $K_{x,1}= \Pi(x)^+$. Let $K_i(Q)=\prod_{x\not\in Q}K_x \times\prod_{x\in
Q}K_{x,i}$ for $i=0,1$.
We assume that the subgroup $K$ is neat. (This will be the case if $S$ contains a prime $x \geq 3$ where $K_x$ is the full congruence subgroup of level $x$.) We let $Y_K\to \mathrm{Spec}(\mathcal{O})$ (resp.\ $Y_{K_i(Q)}\to\mathrm{Spec}(\mathcal{O})$) denote the Siegel moduli space of level $K$ (resp.\ $K_i(Q)$). This scheme classifies principally polarized abelian varieties together with a $K$-level structure (resp.\ $K_i(Q)$-level structure). (See \cite[\S 4.1]{Pilloni-hida}.) In each case we denote the universal abelian variety by $\mathcal{A}$.
If $Y$ denotes one of the above spaces, we can choose a toroidal compactification $X \to \mathrm{Spec}(\mathcal{O})$ of $Y$. The abelian scheme $\mathcal{A}$ then extends to a semi-abelian scheme $\pi: \mathcal{A} \to X$ and the sheaf $\mathcal{E} := \pi_*\Omega^1_{\mathcal{A}/X}$ is a locally free $\mathcal{O}_{X}$-module of rank 2. For integers $a \ge b$, we let $\omega(a,b):=\mathrm{Sym}^{a-b}\mathcal{E} \otimes \det^{b}\mathcal{E}$. We also~denote~$\det \mathcal{E}$ by~$\omega$, so, for example, $\omega(a,a) = \omega^a$ is a line bundle. If $M$ is an $\mathcal{O}$-module, we will let $\omega(a,b)_M$ denote the sheaf $\omega(a,b)\otimes_{\mathcal{O}}M$. The coherent cohomology groups $H^i(X,\omega(a,b)_M)$ are independent of the choice of toroidal compactification $X$ (see \cite[Lemma 7.1.1.4]{lan} and the proof of \cite[Lemma 7.1.1.5]{lan}). The Koecher principle states that there is an isomorphism $$H^0(Y,\omega(a,b)_{M}) \simeq H^0(X,\omega(a,b)_{M}).$$ We may therefore pass freely between the open variety $Y$ and any smooth projective toroidal compactification $X$ without comment when dealing with $H^0$.
We choose toroidal compactifications $X_K$ and $X_{K_0(Q)}$ so that the natural map $Y_{K_0(Q)} \to Y_K$ extends to a map $X_{K_0(Q)} \to X_K$. As explained in \S~4.1.2 of~\cite{Pilloni-hida}, the universal subgroup $H \subset \mathcal{A}[Q]$ over $Y_{K_0(Q)}$ extends to $X_{K_0(Q)}$. We then define the toroidal compactification $X_{K_1(Q)} = \mathrm{Isom}_{X_{K_0(Q)}}(\mathbf{Z}/Q, H)$. The resulting map $X_{K_1(Q)} \to X_{K_0(Q)}$ is then finite \'etale with Galois group $\Delta_Q := (\mathbf{Z}/Q)^\times$.
\subsection{Vanishing results} \label{sec:vanishing-results}
Let $X$ denote one of the toroidal compactifications defined in the previous section. We first record some consequences of a vanishing theorem of Lan and Suh.
\begin{theorem}
\label{thm:lan-suh}
\hspace{2em}
\begin{enumerate} \item Suppose that $a\geq 3$ and $2\leq a-b \leq p-2$. Then \[ H^i(X,\omega(a,b)(-\infty)_k)=0\] for $i>2$. \item Suppose that $a+b \geq 6$ and $2\leq a-b \leq p-2$. Then \[ H^i(X,\omega(a,b)(-\infty)_k)=0\] for $i>1$.
\item Suppose that $b \geq 4$ and $0 \leq a-b \leq p-4$. Then \[ H^i(X,\omega(a,b)(-\infty)_k) =0 \] for $i>0$.
\end{enumerate} \end{theorem}
\begin{proof}
This follows from \cite[Cor.\ 7.24]{LanSuh} after unwinding
definitions. We take the group scheme $\mathrm{G}_1/R_1$ (in the notation of~\cite{LanSuh}) to be our
$G/\mathcal{O}$. The groups $\mathrm{M}_1\subset \mathrm{P}_1 \subset \mathrm{G}_1$ correspond to
the Siegel Levi and parabolic: $M \subset P \subset
G$. The set of dominant weights $X_{\mathrm{G}_1}^+$ (resp.\
$X_{\mathrm{M}_1}^+$) is our $X^*(T)^+_{G}$ (resp.\ $X^*(T)^+_M$)
from Definition~\ref{defn:dominant-weights}.
In this paragraph, we show that the subset $X_{\mathrm{G}_1}^{+,<_{\mathrm{re}}p}\subset
X_{\mathrm{G}_1}^+$ as defined in~\cite[Defn.\ 6.3]{LanSuh-compact}
corresponds to the set of those $\mu=(a,b;c)\in
X^*(T)_{G}^+$ such that $a + b < p - 3$.
As an intermediate step, we first show that
$X_{\mathrm{G}_1}^{+,<_{\mathrm{re}}p}$ corresponds to those $\mu=(a,b;c)\in
X^*(T)_{G}^+$ such that:
\begin{itemize}
\item $\langle \mu+\rho , \pm\alpha_i^{\vee} \rangle \leq p $ for $i=1,\dots,4$;
\item $a+b + 3 < p$.
\end{itemize}
To see this, we note the following: to lie in
$X_{\mathrm{G}_1}^{+,<_{\mathrm{re}}p}$, by definition, the element
$\mu$ must satisfy $|\mu|_L + d < p$ and must also lie in
$X_{\mathrm{G}_1}^{+,<_{\mathrm{W}}p}$. The definition of $|\mu|_L$
in Definition
3.2 of \cite{LanSuh-compact} boils down to $|\mu|_L = a+b$ (the set
$\Upsilon$ in our case consists of the single embedding $\mathbf{Z} \hookrightarrow \mathcal{O}$
and the norm $|\mu| = a+b$ is defined near the beginning of \S 2.5). The dimension $d$ is defined in Definition 3.9 of
\cite{LanSuh-compact} to be $\dim_{\mathcal{O}}(X)$ which is 3 in our case.
Next, the set
$X_{\mathrm{G}_1}^{+,<_{\mathrm{W}}p}$ is defined in Definition
3.2 to consist of those $\mu \in
X_{\mathrm{G}_1}^{+,<p}$ for which $|\mu|_L < p$. Finally, the set
$X_{\mathrm{G}_1}^{+,<p}$ is defined in Definition 2.29 to consist of all dominant $\mu \in
X^+_{\mathrm{G}_1}$ which satisfy the first condition above. This
establishes the intermediate step.
Now, if $\mu \in X^*(T)^+_{G}$, then the largest of the
$\langle \mu + \rho , \pm \alpha_i^{\vee}\rangle$ is $\langle \mu + \rho,
\alpha_3^{\vee}\rangle = a+b +3$. Thus, we see that $\mu \in
X_{\mathrm{G}_1}^{+,<_{\mathrm{re}}p}$ if
and only if $a\geq b\geq0$ and $a+b < p-3$.
The set $X_{\mathrm{M}_1}^{+,<p}$, by Definition 2.29 of \cite{LanSuh-compact},
is $\{ \mu \in X^*(T)^{+}_M : \langle \mu + \rho,
\alpha_1^{\vee}\rangle \leq p\} = \{ (a,b;c)\in X^*(T)^+_M :
(a+2)-(b+1) \leq p\}$. By Lemma 7.2, Definition 7.14 (which is
vacuous in our case) and Proposition 7.15 of \cite{LanSuh-compact}, a weight
$\mu = (a,b;c)$ lies in $
X_{\mathrm{M}_1}^{+,<p}$ and is \emph{positive parallel} if and only if
$a=b>0$.
If $\mu = (a,b;c) \in X^*(T)^+_M$, then a pair of vector bundles
${\mathcal{W}}_\mu^*$, for $*\in \{\mathrm{can},\mathrm{sub}\}$ is defined in
\cite{LanSuh}. Indeed $\mu$ determines an algebraic representation
of $M\cong \mathrm{GL}_2\times \mathrm{GL}_1$ over $\mathcal{O}$ with highest weight
$(a,b;c)$ (namely $(\mathrm{Sym}^{a-b}S_2 \otimes \det^b S_2 )\otimes
S_1^{\otimes c}$ where $S_i$ is the standard representation of
$\mathrm{GL}_i$) and the corresponding bundles are then defined by
\cite[Defn.\ 4.12]{LanSuh}. We claim that
\[ {\mathcal{W}}_{\mu}^{\mathrm{can}} = \omega(a,b).\] (We note that the parameter
$c$ does not change the underlying vector bundle, but does change
the Hecke action on cohomology by a power of the similitude
character.) Let $\mu = (0,-1;1)$, let $L$ denote the standard representation of $G$
and $L_0^{\vee}(1)\subset L$ the subspace spanned by the first two
standard basis vectors. Then $L_0^{\vee}(1)$ is the standard
representation of the $\mathrm{GL}_2$-factor of $M$ and is the
representation of $M$ corresponding to $(1,0;0)$. The representation
$L_0 = (L_0^{\vee}(1))^{\vee}(1)$ thus corresponds to $\mu =
(0,-1;1)$. By~\cite[Example 1.22]{LanSuh-compact}, we have (in the notation of
that paper)
$\mathcal{E}_{\mathrm{M}_1}(L_0) =
\Lie_{\mathcal{A}/Y}$. However, $\mathcal{W}_{\mu} = \mathcal{E}_{\mathrm{M}_1}(L_0)$ by
definition, and we have $\Lie_{\mathcal{A}/Y} = \mathcal{E}^{\vee} = \omega(0,-1)$. It follows that
$\mathcal{W}_{(0,-1;1)}^{\mathrm{can}} \cong \omega(0,-1)$. We
deduce that $\omega(a,b) = (\mathrm{Sym}^{a-b}\otimes\det{}^b)(\omega(1,0))
= \mathcal{W}_{(a,b;-a-b)}$, as required.
With these preliminaries out of the way, we now apply \cite[Cor.\
7.24]{LanSuh}. We take $\mu = (\alpha,\beta;\gamma) \in
X_{\mathrm{G}_1}^{+,<_{\mathrm{re}}p}$. (The condition that
$\max(2,r_{\tau})<p$ when $\tau = \tau\circ c$ boils down to $2<p$
in our case.) We take $\nu = (t,t;0)$ a positive parallel weight. We
therefore have $t>0$, $\alpha\geq \beta \geq 0$ and $\alpha+\beta < p-3$.
We
now apply part 2 of \cite[Cor.\
7.24]{LanSuh} successively with $w \in W^{M_1}$ taken to equal each
of the elements $\widetilde{w}_1,\widetilde{w}_2,\widetilde{w}_3$ from
Section~\ref{sec:notation}. Note that each $\widetilde{w}_i$ has length
$i$. If we take $w = \widetilde{w}_1$, then (ignoring the third component):
\[ \widetilde{w}_1 \cdot \mu - \nu = \widetilde{w}_1(\alpha+2,\beta+1) - (2,1) - (t,t) = (\alpha-t,
-\beta-2-t). \]
Thus $({\mathcal{W}}^{\vee}_{\widetilde{w}_1\cdot\mu - \nu})^{\mathrm{sub}} =
\omega(\beta+2+t, -\alpha +t)(-\infty)$.
Then \cite[Cor.\
7.24]{LanSuh} implies
\[ H^i(X, \omega(\beta+2+t,-\alpha+t)(-\infty)_k) = 0 \]
for each $i>2$. Taking $a = \beta+2+t$ and $b =-\alpha+t$ gives the
first part of the theorem.
Similarly, if $w = \widetilde{w}_2$, then: \[ \widetilde{w}_2 \cdot \mu - \nu = \widetilde{w}_2(\alpha+2,\beta+1) - (2,1) - (t,t) =
(\beta-1-t, -\alpha-3-t). \]
Hence \[ H^i(X, \omega(\alpha+3+t,1 - \beta+t)(-\infty)_k) = 0 \]
for $i>1$. This gives the second part of the theorem.
Finally, we take $w = \widetilde{w}_3$, then:
\[ \widetilde{w}_3 \cdot \mu - \nu = \widetilde{w}_3(\alpha+2,\beta+1) - (2,1) - (t,t) =
(-\beta-3-t, -\alpha-3-t). \]
Hence \[ H^i(X, \omega(\alpha+3+t, \beta+3+t)(-\infty)_k) = 0 \]
for $i>0$. This gives the last part of the theorem. \end{proof}
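The three dot-action computations in the proof can be verified symbolically. The sketch below (illustrative, not code from the paper) encodes linear expressions in $\alpha,\beta,t$ as coefficient tuples; the actions of $\widetilde{w}_1,\widetilde{w}_2,\widetilde{w}_3$ on the first two coordinates are taken to be $(x,y)\mapsto(x,-y)$, $(y,-x)$, $(-y,-x)$ respectively, an assumption inferred from (and consistent with) the three displayed formulas.

```python
# Symbolic check (illustrative, not from the paper) of the dot-action
# computations w_i . mu - nu = w_i(mu + rho) - rho - nu with rho = (2, 1).
# Linear expressions in (alpha, beta, t) are coefficient tuples
# (c_alpha, c_beta, c_t, constant).

def add(u, v):
    return tuple(a + b for a, b in zip(u, v))

def neg(u):
    return tuple(-a for a in u)

ALPHA, BETA, T = (1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0)
RHO = ((0, 0, 0, 2), (0, 0, 0, 1))   # rho = (2, 1) on the first two coordinates

def dot_minus_nu(w, mu, nu):
    # w . mu - nu, where the dot action is w . mu = w(mu + rho) - rho
    x, y = add(mu[0], RHO[0]), add(mu[1], RHO[1])
    wx, wy = w(x, y)
    return (add(add(wx, neg(RHO[0])), neg(nu[0])),
            add(add(wy, neg(RHO[1])), neg(nu[1])))

# ASSUMED actions of the three Weyl elements on the first two coordinates:
w1 = lambda x, y: (x, neg(y))        # (x, y) -> (x, -y)
w2 = lambda x, y: (y, neg(x))        # (x, y) -> (y, -x)
w3 = lambda x, y: (neg(y), neg(x))   # (x, y) -> (-y, -x)

mu, nu = (ALPHA, BETA), (T, T)
assert dot_minus_nu(w1, mu, nu) == ((1, 0, -1, 0), (0, -1, -1, -2))    # (alpha-t, -beta-2-t)
assert dot_minus_nu(w2, mu, nu) == ((0, 1, -1, -1), (-1, 0, -1, -3))   # (beta-1-t, -alpha-3-t)
assert dot_minus_nu(w3, mu, nu) == ((0, -1, -1, -3), (-1, 0, -1, -3))  # (-beta-3-t, -alpha-3-t)
```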
It is interesting to compare the above vanishing result in characteristic $p$ with the following characteristic 0 vanishing results due to Blasius--Harris--Ramakrishnan, Mirkovi\'c, Williams and Schmid. We have an identification \[ Y(\mathbf{C}) = G(\mathbf{Q}) \backslash (G(\mathbf{R})/K^h \times G(\mathbb{A}^\infty)/U) \] where $U\subset G(\mathbb{A}^\infty)$ is the open compact subgroup used to define $Y$ and $K^h$ is the compact-mod-center subgroup defined in Section~\ref{sec:group-gsp_4r}. To any finite dimensional $\mathbf{C}$-representation $(\sigma,V_{\sigma})$ of $K^h_{\mathbf{C}}$, there is an associated vector bundle $\mathcal{V}_{\sigma}$ on $Y(\mathbf{C})$ which is defined in \cite[Defn.\ 1.3.2]{blasius-harris-ramak}. This bundle has extensions
$\mathcal{V}_{\sigma}^{\mathrm{sub}}\subset \mathcal{V}_{\sigma}^{\mathrm{can}}$ to $X(\mathbf{C})$. In \cite{blasius-harris-ramak}, the bundle $\mathcal{V}_{\sigma}^{\mathrm{can}}$ is denoted $\widetilde{\mathcal{V}}_{\sigma}$. We have $\mathcal{V}^{\mathrm{sub}}_{\sigma}=\mathcal{V}_{\sigma}^{\mathrm{can}}(-\infty)$. For each $i\geq 0$, we define: \[ \overline{H}^i(X(\mathbf{C}),\mathcal{V}_{\sigma}) := \im (H^i(X(\mathbf{C}),\mathcal{V}_\sigma^\mathrm{sub})\to H^i(X(\mathbf{C}),\mathcal{V}_\sigma^\mathrm{can})).\] Let~$\widetilde{H}^i(\mathcal{V}_\sigma^\mathrm{sub})$ and~$\widetilde{H}^i(\mathcal{V}_\sigma^\mathrm{can})$ denote the direct limits of~$H^i(X(\mathbf{C}),\mathcal{V}_\sigma^\mathrm{sub})$ and~$H^i(X(\mathbf{C}),\mathcal{V}_\sigma^\mathrm{can})$ respectively over all levels~$K$. We also write~$\overline{H}^i(X(\mathbf{C}),\mathcal{V}_{\sigma})$ for the corresponding direct limit of the groups~$\overline{H}^i(X(\mathbf{C}),\mathcal{V}_{\sigma})$ (including both an overline and a tilde in the notation would be too cumbersome; we hope no confusion will result).
Let $\mathcal{A}_{(2)}(G)$ denote the space of automorphic forms on $G(\mathbf{Q})\backslash G(\mathbb{A})$ which are square integrable modulo the centre $Z_{G}(\mathbb{A})$. Let $\mathcal{A}_0(G)\subset \mathcal{A}_{(2)}(G)$ denote the space of cusp forms. For $(\sigma,V_\sigma)$ a representation of $K^h_{\mathbf{C}}$ as above and $i\geq 0$, we define: \begin{align*}
\mathcal{H}^i_{(2),\sigma} &= H^i(\Lie Q^-,
K^h;\mathcal{A}_{(2)}(G)\otimes
V_{\sigma}) \\
\mathcal{H}^i_{\mathrm{cusp},\sigma} &= H^i(\Lie Q^-,
K^h;\mathcal{A}_{0}(G)\otimes
V_{\sigma}). \end{align*} Then we have the following result of Harris:
\begin{theorem}
\label{thm:coherent-cohom-lie-alg-cohom} There are canonical maps, forming a commutative diagram: \[ \begin{tikzcd}
\mathcal{H}^i_{\mathrm{cusp},\sigma} \arrow{r}\arrow{d} & \mathcal{H}^i_{(2),\sigma}
\arrow{d} \\ \widetilde{H}^i(\mathcal{V}_\sigma^\mathrm{sub}) \arrow{r} & \widetilde{H}^i(\mathcal{V}_\sigma^\mathrm{can}) \end{tikzcd}. \] Moreover: \begin{enumerate} \item The composition $\mathcal{H}^i_{\mathrm{cusp},\sigma} \to
\overline{H}^i(\mathcal{V}_\sigma)$ is injective for all $i$, and
is an isomorphism for $i=0,3$. \item The image of $\mathcal{H}^i_{(2),\sigma}$ in
$\widetilde{H}^i(\mathcal{V}_\sigma^{\mathrm{can}})$ contains $\overline{H}^i(\mathcal{V}_{\sigma})$. \end{enumerate} \end{theorem}
\begin{proof}
This follows from \cite[Theorem 2.7 \& Prop.\ 3.2.2]{harris-ann-arb}. \end{proof}
For $*\in \{ \mathrm{cusp}, (2)\}$, we then define $\widetilde{H}^i(\mathcal{V}_{\sigma}^{\mathrm{can}})_{*}$ to be the image of the space $\mathcal{H}^i_{*,\sigma}$ in $\widetilde{H}^i(\mathcal{V}_\sigma^{\mathrm{can}})$. Thus we have \[ \mathcal{H}^i_{\mathrm{cusp},\sigma}\cong \widetilde{H}^i(\mathcal{V}_{\sigma}^{\mathrm{can}})_{\mathrm{cusp}} \subset \overline{H}^i(\mathcal{V}_{\sigma}) \subset \widetilde{H}^i(\mathcal{V}_{\sigma}^{\mathrm{can}})_{(2)}.\]
For $*\in \{ \mathrm{cusp}, (2)\}$, the space $\mathcal{A}_{*}(G)$ is semisimple as a $G(\mathbb{A})$-representation and we decompose: \[ \mathcal{A}_{*}(G) = \bigoplus_{\pi}m_*(\pi) \pi^{\infty}\otimes \pi_{\infty}. \] We let $\mathcal{A}_{*}(G)_{\mathrm{temp}}$ denote the subspace \[ \bigoplus m_*(\pi) \pi^{\infty}\otimes \pi_{\infty} \] where the sum is over all those $\pi$ such that $\pi_\infty$ is essentially tempered. We define $\mathcal{H}^i_{*,\sigma,\mathrm{temp}} \subset \mathcal{H}^i_{*,\sigma}$ by replacing $\mathcal{A}_{*}(G)$ with $\mathcal{A}_{*}(G)_{\mathrm{temp}}$ in the definition of $\mathcal{H}^i_{*,\sigma}$. We then define \[ \widetilde{H}^i(\mathcal{V}_{\sigma}^{\mathrm{can}})_{*,\mathrm{temp}} \subset \widetilde{H}^i(\mathcal{V}_\sigma^\mathrm{can})_* \] to be the image of $\mathcal{H}^i_{*,\sigma,\mathrm{temp}} \to \widetilde{H}^i(\mathcal{V}_\sigma^{\mathrm{can}})$. We may also define analogous spaces \[ H^i(X(\mathbf{C}),\mathcal{V}_{\sigma}^{\mathrm{can}})_{*,\mathrm{temp}} \subset H^i(X(\mathbf{C}),\mathcal{V}_\sigma^\mathrm{can})_* \] by applying~$K$-invariants to the constructions above, where~$K$ is the level of~$X(\mathbf{C})$.
Suppose now that $(\sigma,V_\sigma)$ is the irreducible representation of $K^h_{\mathbf{C}}$ of highest weight $\mu = (a,b;c) \in X^*(H_\mathbf{C})$, with respect to the system of positive weights fixed in \S~\ref{sec:notation}. We first of all observe that the bundle $\mathcal{V}_\sigma$ does not depend on $c$. Indeed, let $(\tau,V_\tau)$ be the irreducible representation of highest weight $(a,b;c+2)$. Consider the $G(\mathbf{Q})$-equivariant bundles $\mathcal{V}_{\sigma}^{\vee} = G(\mathbf{C})\times_{Q^-} V_{\sigma}$ and $\mathcal{V}_{\tau}^{\vee}= G(\mathbf{C})\times_{Q^-}V_{\tau}$ on $G(\mathbf{C})/Q^-$ defined in \cite[\S 1.3]{blasius-harris-ramak}. (The superscripted $^{\vee}$'s do not refer to dual bundles here.) Then by the definition of $\mathcal{V}_\sigma$, it suffices to show that $\mathcal{V}_\sigma^\vee \stackrel{\sim}{\rightarrow} \mathcal{V}_\tau^\vee$ as $G(\mathbf{Q})$-equivariant bundles.
We have that $\tau = \sigma \otimes \nu$, where $\nu$ denotes the similitude character, so we may take the
underlying space of $\tau$ to be $V_\tau = V_{\sigma}$ and the action to be $\tau(g) = \nu(g)\sigma(g)\in \mathrm{End}(V_{\sigma})$ for all $g \in K^h_{\mathbf{C}}$. Then the map \begin{align*} G(\mathbf{C})\times_{Q^-} V_{\sigma} & \longrightarrow G(\mathbf{C})\times_{Q^-} V_{\tau} \\ (g,w) & \longmapsto (g,\nu(g)^{-1}w) \end{align*} gives the required isomorphism $\mathcal{V}_\sigma^{\vee} \stackrel{\sim}{\rightarrow} \mathcal{V}_\tau^\vee$. (Note however that the Hecke action on the cohomology of $\mathcal{V}_{\sigma}$ will depend on $c$ -- changing the value of $c$ introduces a corresponding twist by a power of the similitude character in the Hecke action.)
For $\mu \in X^*(H_\mathbf{C})^+_{K^h_{\mathbf{C}}}$ a dominant weight, we let $\mathcal{V}_{\mu}$ denote the vector bundle associated to the irreducible $K^h_{\mathbf{C}}$-representation $W_\mu$. We would like to compare these bundles to the bundles introduced in the proof of Theorem~\ref{thm:lan-suh}.
\begin{df}
\label{df:lan-suh-bundle-notn} Let $\mu = (a,b;c)\in X^*(T)^+_M$. We let $\mathcal{W}_{\mu}$ denote the canonical extension $\mathcal{W}_{\mu}^{\mathrm{can}}$ in the notation of the proof of Theorem~\ref{thm:lan-suh}, and we let $\mathcal{W}_{\mu}^{\mathrm{sub}} = \mathcal{W}_{\mu}(-\infty)$. \end{df} We saw above that, as vector bundles over $X$, we have: \[ \mathcal{W}_\mu \cong \omega(a,b), \] though the Hecke action on the cohomology of $\mathcal{W}_{\mu}$ will depend on $c$.
\begin{lemma}
\label{lem:identifying-Vsigma} Let $\mu = (a,b;c)\in X^*(T)^+_M$. Then, over $X(\mathbf{C})$, we have: \[ \mathcal{W}_{(a,b;c)} \cong \mathcal{V}_{(-b,-a;a+b+2c)},\] compatibly with Hecke actions on cohomology. \end{lemma}
\begin{proof} It suffices to prove the isomorphism over $Y$. Consider the short exact sequence: \[ 0 \longrightarrow \Lie_{\mathcal{A}^{\vee}/Y}^{\vee} \longrightarrow \underline{H}_1^{\mathrm{dR}}(\mathcal{A}/Y) \longrightarrow \Lie_{\mathcal{A}/Y} \longrightarrow 0 \] and the Poincar\'{e} duality pairing \[ \langle\ ,\ \rangle : \underline{H}_1^{\mathrm{dR}}(\mathcal{A}/Y) \otimes \underline{H}_1^{\mathrm{dR}}(\mathcal{A}/Y) \longrightarrow \mathcal{O}_Y(1). \] (See \cite[\S 1.2]{LanSuh-compact}).
Expressed in terms of the functor $\mathcal{W}_{\mu}$ of Lan--Suh, the short exact sequence becomes: \[ 0 \longrightarrow \mathcal{W}_{(1,0;0)} \longrightarrow \underline{H}_1^{\mathrm{dR}}(\mathcal{A}/Y) \longrightarrow \mathcal{W}_{(0,-1;1)} \longrightarrow 0 \] and the bundle $\mathcal{O}_Y(1)$ becomes $\mathcal{W}_{(0,0;1)}$. (See \cite[Example 1.22]{LanSuh-compact}.)
Similarly, over $Y(\mathbf{C})$ the short exact sequence becomes \[ 0 \longrightarrow \mathcal{V}_{(0,-1;1)} \longrightarrow \underline{H}_1^{\mathrm{dR}}(\mathcal{A}/Y) \longrightarrow \mathcal{V}_{(1,0;1)} \longrightarrow 0 \] and $\mathcal{O}_Y(1)$ is identified with $\mathcal{V}_{(0,0;2)}$. This follows from \cite[Example III.2.4]{milne-ann-arb}: if we take the point $o\in \check{X}$ to be $h(i) = J$ in the notation of Section~\ref{sec:group-gsp_4r}, then the isotropic subspace corresponds to $V^-$ and $V/W$ corresponds to $V^+$. As remarked at the end of Section~\ref{sec:group-gsp_4r}, we have $V^- = W_{(0,-1;1)}$, $V^+ = W_{(1,0;1)}$ and the similitude character corresponds to $W_{(0,0;2)}$. Note also that the notation $\mathcal{H}_{\mathrm{dR}}(\mathcal{A})$ of \cite{milne-ann-arb} refers to de Rham homology (see \S I.3).
It follows that, over $Y(\mathbf{C})$, we have $\mathcal{W}_{(0,0;1)} = \mathcal{V}_{(0,0;2)}$ and $\mathcal{W}_{(1,0;0)} = \mathcal{V}_{(0,-1;1)}$. Thus, \begin{align*}
\mathcal{W}_{(a,b;c)} & = (\mathrm{Sym}^{a-b}\otimes \det{}^b)(\mathcal{W}_{(1,0;0)}) \otimes
\mathcal{W}_{(0,0;c)} \\
& = (\mathrm{Sym}^{a-b}\otimes \det{}^b)(\mathcal{V}_{(0,-1;1)}) \otimes
\mathcal{V}_{(0,0;2c)} \\
& = \mathcal{V}_{(-b,-a;a+b+2c)}. \end{align*} This is compatible with the Hecke action on cohomology since all of the isomorphisms respect the equivariant constructions. \end{proof}
The Weyl chambers $C_0,\dots,C_4 \subset X^*(T)\otimes_{\mathbf{Z}}\mathbf{R} \cong \mathbf{R}^3$ are defined in Section~\ref{sec:group-gsp_4}. We have \begin{eqnarray*}
C_0 &=& \{ (a,b;c) \in \mathbf{R}^3 : a \geq b \geq 0\}\\
C_1 &=& \{ (a,b;c) \in \mathbf{R}^3 : a \geq -b \geq 0\}\\
C_2 &=& \{ (a,b;c) \in \mathbf{R}^3 : -b \geq a \geq 0\}\\
C_3 &=& \{ (a,b;c) \in \mathbf{R}^3 : -b \geq -a \geq 0\}. \end{eqnarray*}
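Since the defining inequalities involve only the first two coordinates, chamber membership is easy to test mechanically. The helper below (an illustrative sketch, not from the paper) returns the indices $i$ with $(a,b;c)\in C_i$; a regular point lies in a single chamber, while a wall point lies in two or more, which is the distinction underlying the vanishing criteria and the discrete-series terminology below.

```python
def chambers(a, b):
    """Indices i such that (a, b; c) lies in the (closed) Weyl chamber C_i.

    The defining inequalities involve only the first two coordinates, so the
    value of c is irrelevant.  Points interior to a unique chamber satisfy
    exactly one condition; wall points satisfy several at once.
    """
    conditions = [
        a >= b >= 0,     # C_0
        a >= -b >= 0,    # C_1
        -b >= a >= 0,    # C_2
        -b >= -a >= 0,   # C_3
    ]
    return [i for i, holds in enumerate(conditions) if holds]

# Interior points lie in a single chamber; wall points lie in several.
assert chambers(3, 1) == [0]
assert chambers(2, -1) == [1]
assert chambers(1, 0) == [0, 1]   # on the wall between C_0 and C_1
```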
\begin{theorem}
\label{prop:char-0-vanish} Let $\mu = (a,b;c) \in X^*(T)^+_M$. Then: \[ H^i(X(\mathbf{C}),\mathcal{W}_{\mu})_{(2),\mathrm{temp}} = 0\] for all $0\leq i \leq 3$ such that \[ (a-1,b-2;c) \not \in C_i. \] \end{theorem}
\begin{proof} In Section~\ref{sec:group-gsp_4r}, we identified $X^*(H_{\mathbf{C}})$ with $\mathbf{Z}^3$. Under the resulting identification of $X^*(H_{\mathbf{C}})\otimes_{\mathbf{Z}}\mathbf{R}$ with $\mathbf{R}^3$, the chambers $C_i$ for $X^*(T)^+_M\otimes_{\mathbf{Z}}\mathbf{R}$ above correspond to Weyl chambers in $X^*(H_{\mathbf{C}})\otimes_{\mathbf{Z}}\mathbf{R}$. Let $\sigma = (-b,-a;a+b+2c)$, regarded as an element of $X^*(H_{\mathbf{C}})$. Then we've seen above that \[ \mathcal{V}_{\sigma} \cong \mathcal{W}_{\mu}. \] Suppose that \[ H^i(X(\mathbf{C}),\mathcal{W}_{\mu})_{(2),\mathrm{temp}} = H^i(X(\mathbf{C}),\mathcal{V}_{\sigma})_{(2),\mathrm{temp}} \neq 0.\] Then by Theorem \ref{thm:coherent-cohom-lie-alg-cohom}, there is some $\pi = \pi^{\infty}\otimes \pi_\infty$ in $\mathcal{A}_{(2)}(G)_{\mathrm{temp}}$ such that \[ H^i(\Lie P^{-}, K^h; \pi_\infty \otimes V_{\sigma}) \neq 0.\] By a theorem of Mirkovi\'{c} \cite[Theorem 3.5]{harris-ann-arb}, $\pi_\infty$ is a discrete series or limit of discrete series. Hence, using the Harish-Chandra parameterization, we may write $\pi_{\infty} = \pi(\lambda,C)^* = \pi(-w_0(\lambda),-w_0(C))$ for some Weyl chamber $C \in \{ C_0,\dots,C_3\}$ and a weight $\lambda \in C \cap \left(X^*(H_{\mathbf{C}})+\rho\right)$.
By \cite[Theorem 3.2.1]{blasius-harris-ramak}, it follows that:
\[ \lambda = ((\sigma + \rho)|_{\mathrm{Sp}_4(\mathbf{R})}; -a-b-2c) = (2-b,1-a;-a-b-2c), \] and \[ i = \# \left(\Phi(C)^+\cap \Phi_n^+ \right),\] where $\Phi(C)^+$ is the system of positive roots determined by the chamber $C$. For $j=0,\dots,3$, we have $\# \left(\Phi(C_j)^+\cap \Phi_n^+ \right) = 3-j$. Hence we must have $C = C_{3-i}$ and $\lambda \in C_{3-i}$. However, $C_{3-i} = -w_0(C_i)$, so $-w_0(\lambda) \in C_i$. We have: \begin{align*}
-w_0(\lambda) & = -w_0 (-b+2,-a+1;-(a+b+2c)) \\
& = (a-1, b-2; a+b+2c). \end{align*} Thus, we deduce that $-w_0(\lambda) = (a-1,b-2;a+b+2c)$ lies in $C_i$. This is equivalent to the condition in the statement of the theorem. \end{proof}
We also record the following: \begin{theorem}
\label{thm:contrib-to-char-0-cohom} Let $\mu = (a,b;c) \in X^*(T)^+_M$, let $ w = -(a+b+2c)$, and let $\sigma = (-b,-a;a+b+2c) = (-b,-a;-w)$, regarded as an element of $X^*(H_{\mathbf{C}})$. Suppose that $\pi = \pi^{\infty}\otimes \pi_\infty$ in $\mathcal{A}_{(2)}(G)$ contributes to $H^i(X(\mathbf{C}),\mathcal{W}_{\mu})_{(2)} \cong H^i(X(\mathbf{C}),\mathcal{V}_{\sigma})_{(2)}$. \begin{enumerate} \item The infinitesimal character of $\pi_{\infty}$ is given under the Harish-Chandra isomorphism by:
\[ \chi_{((- \sigma - \rho)|_{\mathrm{Sp}_4(\mathbf{R})};-w)} = \chi_{(a-1,b-2;-w)} .\] \item Let $\widetilde{\pi}_{\infty}$ denote the transfer of
$\pi_{\infty}$ to $\mathrm{GL}_4(\mathbf{R})$. Then the infinitesimal character of
$\widetilde{\pi}_{\infty}$ is given under the Harish-Chandra
isomorphism by $\chi_{\tau}$ where: \[ \tau = \left(\frac{a+b-3-w}{2}, \frac{a-b+1-w}{2}, \frac{-a+b-1-w}{2}, \frac{-a-b+3-w}{2}\right). \] \item\label{Mirkovic} If furthermore, $\pi_{\infty}$ is tempered, then $\pi_{\infty}$ is a discrete series or limit of discrete series representation, and is given under the Harish-Chandra parameterization by: \[ \pi_{\infty} \cong \pi((a-1,b-2;-w), C_i). \] \end{enumerate} \end{theorem}
\begin{proof}
For the first part, we have that \[ H^i(\Lie P^{-}, K^h; \pi_{\infty}\otimes V_{\sigma}) \neq 0.\] It follows from \cite[Theorem 3.2.1]{blasius-harris-ramak} that the infinitesimal character of $\pi_{\infty}$ is equal
to~$\chi_{((-\sigma-\rho)|_{\mathrm{Sp}_4(\mathbf{R})};-w)}$. The second part can be inferred from \cite[\S 2.1.2]{Sor}. The last part is due to Mirkovi\'c and was established in the proof of Theorem~\ref{prop:char-0-vanish}. \end{proof}
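Note that the entries of $\tau$ in the second part pair off symmetrically: one checks directly that \[ \frac{a+b-3-w}{2} + \frac{-a-b+3-w}{2} = \frac{a-b+1-w}{2} + \frac{-a+b-1-w}{2} = -w, \] as one expects, since the transfer $\widetilde{\pi}_{\infty}$ to $\mathrm{GL}_4(\mathbf{R})$ is essentially self-dual.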
\begin{df}
\label{defn:discrete-series-weight} A weight $\mu = (a,b;c) \in X^*(T)^+_M$ such that $(a-1,b-2;c)$ lies in the interior of a unique Weyl chamber $C_i$ is said to be a \emph{discrete series
weight} or a \emph{regular weight}. If $\mu - w_0(\rho)$ lies in the intersection of exactly two of the Weyl chambers $C_i$, we say it is a \emph{limit of
discrete series weight} or a \emph{non-regular weight}. \end{df} From the explicit description of the Weyl chambers $C_i$ above, we see that the limit of discrete series weights come in three families: \begin{align*}
\mu &= (a,2;c) \text{ with $a\in \mathbf{Z}_{\geq 2}$} \\
\mu &= (a,3-a;c) \text{ with $a\in \mathbf{Z}_{\geq 2}$} \\
\mu &= (1,b;c) \text{ with $b\in \mathbf{Z}_{\leq 1}$}. \end{align*} Note that for the corresponding families of vector bundles $\mathcal{W}_{\mu}=\omega(a,b)$, the first and third are interchanged under the Serre duality map $\omega(a,b) \mapsto \omega(a,b)^{{\vee}}\otimes \det \Omega^1_{X}\cong\omega(3-b,3-a)(-\infty)$ while the second family is stable under this operation. (Up to interchanging the canonical and subcanonical extensions, of course.) The preceding theorem implies that for all $a \geq 2$, we have: \begin{eqnarray*}
H^i(X(\mathbf{C}),\omega(a,2))_{(2),\mathrm{temp}} &=& 0 \mbox{\;\; for $i=2,3$}\\
H^i(X(\mathbf{C}),\omega(a,3-a))_{(2),\mathrm{temp}} &=& 0 \mbox{\;\; for $i=0,3$}. \end{eqnarray*} (Technically, we should normalize the Hecke action on the cohomology of $\omega(a,b)$ before we adjoin the subscripts $(2)$ or $\mathrm{temp}$. See Section~\ref{sec:hecke-operators} below.) From the result of Lan and Suh, we deduce the following characteristic $p$ analogue of these vanishing results for limit of discrete series weights.
\begin{corr}
\label{cor:ls-vanish-lds}
\begin{enumerate}
\item For $4\leq a \leq p$, we have \[ H^i(X,\omega(a,2)(-\infty)_k)=0\] for $i=2,3$. \item For $3 \leq a \leq (p+1)/2$, we have \[ H^0(X,\omega(a,3-a)_k)= H^3(X,\omega(a,3-a)(-\infty)_k)=0.\]
\end{enumerate} \end{corr}
\begin{proof} The vanishing results for the subcanonical extensions $\omega(*,*)(-\infty)$ follow directly from Theorem~\ref{thm:lan-suh}. The fact that \[ H^0(X,\omega(a,3-a)_k) =0 \] in the second part then follows from Serre duality since: \[ \omega(a,3-a)^{\vee}\otimes \det \Omega^1_{X/\mathcal{O}} = \omega(a-3,-a)\otimes \omega(3,3)(-\infty) = \omega(a,3-a)(-\infty).\] \end{proof}
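The assertion above that Serre duality interchanges the first and third families of limit of discrete series weights and preserves the second can be verified directly from the map $\omega(a,b) \mapsto \omega(3-b,3-a)(-\infty)$: \begin{align*} \omega(a,2) &\mapsto \omega(1,3-a)(-\infty), \\ \omega(a,3-a) &\mapsto \omega(a,3-a)(-\infty), \\ \omega(1,b) &\mapsto \omega(3-b,2)(-\infty), \end{align*} where $a \in \mathbf{Z}_{\geq 2}$ in the first family corresponds to $b = 3-a \in \mathbf{Z}_{\leq 1}$ in the third.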
\subsection{Torsion Classes} It seems natural to ask whether one can (explicitly or otherwise) construct classes in $H^0(X,\omega(2,2))$ which do not lift to characteristic zero. Let us recall what happens for classical modular forms of weight one.
Suppose that $X_1(N)$ denotes (for this paragraph) the classical modular curve. A non-Eisenstein Hecke eigenclass in $H^0(X_1(N),\omega_k)$ gives rise to an irreducible Galois representation $\overline{r}:G_{\mathbf{Q}} \rightarrow \mathrm{GL}_2(k)$. Suppose that
the image of $\overline{r}$ contains $\mathrm{SL}_2(k')$ for some $k'$ with $\# k' > 5$.
Such a representation cannot be the mod-$p$ reduction of a representation with image isomorphic to some subgroup of $\mathrm{GL}_2(\mathbf{C})$, and thus by~\cite{DeligneSerre}, the corresponding mod-$p$ class does not lift to characteristic zero. (Explicit examples were first found by Mestre for $\#k = 8$ and $N = 1429$.) A slightly different example can be given as follows. Suppose that $\Gamma = \Gamma_1(N) \cap \Gamma_0(x)$. Consider a non-Eisenstein Hecke eigenclass in $H^1(X(\Gamma),\omega_k)$ which is new of level $x$. Then the restriction of $\overline{r}$ to $I_x$ is rank two unipotent. Such a class cannot lift to characteristic zero at minimal level, because otherwise (by~\cite{DeligneSerre} again) the corresponding representation
$r$ would simultaneously have finite image and yet $r|I_x$ would be non-trivially unipotent, hence of infinite order. Note that (unlike in the first example) it may well be possible to lift $\overline{r}$ to characteristic zero at some non-minimal level. Examples of the second kind have a natural analogue in the Siegel context.
Suppose that
$\overline{r}$ has type {\bf U3\rm} at $x$. If $r$ is any minimal lift of $\overline{r}$, the image of $I_x$ under $r$ will be rank three unipotent. This will also be true for the restriction of $r$ to any finite extension of $\mathbf{Q}_x$. Yet, by a theorem of Grothendieck (\cite{SGA7}, Exp.~9), the image of inertia acting on the Tate module of a semistable abelian variety is rank two unipotent, i.e., satisfies $(\sigma-1)^2 = 0$. It follows that $r$ cannot contribute to a motive associated to an abelian variety. Conjecturally, Siegel modular eigenforms of weight $(2,2)$ should be associated to abelian varieties $M/\mathbf{Q}$ of dimension $2n$ equipped with an injection $E \rightarrow \mathrm{End}_{\mathbf{Q}}(M) \otimes \mathbf{Q}$ for some totally real field $E$ of degree $n$. This suggests that such representations $\overline{r}$ \emph{do not} admit minimal lifts to characteristic zero when the weight is $(2,2)$.
It would be interesting to produce an explicit example of such a modular representation. Recall that there is an exceptional isomorphism $S_6 \simeq \mathrm{GSp}_4(\mathbf{F}_2)$ coming from identifying the Galois group of $\mathcal{A}_2[2]$ over $\mathcal{A}_2$ with either the symmetries of the $2$-torsion points on the universal abelian surface or the action of $S_6$ on the (generically) $6$ Weierstrass points~\cite{Faber}. The unipotent element $\sigma \in \mathrm{GSp}_4(\mathbf{F}_2)$ such that $(\sigma-1)^2 \ne 0$ has conjugacy class $(1,2,3,4)(5,6) \in S_6$ (this class is preserved by the exotic automorphism of $S_6$). In particular, if $K/\mathbf{Q}$ is a sextic field with Galois closure $G \subset S_6$ containing $(1,2,3,4)(5,6)$ and acting irreducibly on $\mathbf{F}^4_2$, and $p$ is an odd prime such that $p = \mathfrak{p}^4 \mathfrak{q}^2$, then $\overline{r}:\mathrm{Gal}(K/\mathbf{Q}) \simeq \mathrm{GSp}_4(\mathbf{F}_2)$ should give rise to such a representation. Here is an explicit example coming from a slight variation of this argument. Suppose that $A$ is the abelian surface corresponding to the Jacobian of the curve: $$y^2 =x^5 - 2x^4 + 6x^3 - 8x^2 + 4x - 4,$$ which has good reduction outside~$3 \cdot 5 \cdot 19$. The representation $\overline{r}:G_{\mathbf{Q}} \rightarrow \mathrm{GSp}_4(\mathbf{F}_2)$ has image~$S_5 \subset S_6$, and the image of inertia at~$5$ is conjugate to~$(1,2,3,4)(5,6)$. Hence~$\overline{r}$ should give rise to a mod-$2$ torsion class with trivial level structure outside $3 \cdot 5 \cdot 19$, and the following level structure at these primes: \begin{enumerate} \item Iwahori level structure at $p = 5$, \item Paramodular level structure at $p = 3$ and $19$. \end{enumerate} Note that this conjectural torsion class \emph{does} conjecturally lift to characteristic zero at some level since one expects that~$A$ is modular. (The conductor of~$A$ is~$3 \cdot 5^3 \cdot 19$.)
Common to both examples is the non-existence of automorphic representations $\pi$ (associated to either classical modular forms of weight $1$
or Siegel modular forms of weight $(2,2)$) such that $\pi_x$ is the Steinberg representation.
For classical modular forms, the non-existence of such $\pi$ follows from a consideration
of the corresponding Galois representations,
an argument which does not obviously generalize to the Siegel case (since one does
not know how to attach an abelian variety to such a form). However,
the following argument (due to Kevin Buzzard) generalizes nicely:
\begin{theorem} If $\pi$ is a cuspidal automorphic representation associated to a Siegel modular form
of weight $(2,2)$, then $\pi_x$ is not the Steinberg representation for any prime $x$.
\end{theorem}
\begin{proof}
In weights $(j,k)$ with $j \ge k \ge 2$, the corresponding Frobenius
eigenvalues of the Weil--Deligne representation
associated to a Steinberg representation~$\pi_x$ are
$$\{x^{(w+3)/2}, x^{(w+1)/2}, x^{(w-1)/2}, x^{(w-3)/2}\},$$
where $w = j + k-3$. Moreover, the
corresponding eigenvalue of $U_{x,1}$ is $x^{(w-3)/2}$.
In particular, if $j=k=2$,
then $w = 1$ and the corresponding eigenvalue of $U_{x,1}$ is $x^{-1}$,
contradicting the integrality of Hecke eigenvalues (which is
a consequence of the integrality of the $q$-expansion).
\end{proof}
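For comparison, in weight $(3,3)$ we have $w = 3$, and the corresponding eigenvalue of $U_{x,1}$ is $x^{(w-3)/2} = 1$, which is integral; the exponent $(w-3)/2$ is negative only for the very smallest weights, which is why the argument singles out weight $(2,2)$.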
\subsection{Hecke operators} \label{sec:hecke-operators}
For simplicity, we denote the schemes $X_K$ and $X_{K_i(Q)}$ of \S~\ref{sec:cohom} by $X$ and $X_i(Q)$ respectively. Let $M$ denote an $\mathcal{O}$-module.
Let $x$ be a rational prime. We define matrices \[ \beta_{x,0} = \begin{pmatrix} x &
0&0&0\\0&x&0&0\\0&0&x&0\\0&0&0&x\end{pmatrix} \quad \beta_{x,1} = \begin{pmatrix} 1 &
0&0&0\\0&1&0&0\\0&0&x&0\\0&0&0&x \end{pmatrix} \quad \beta_{x,2} = \begin{pmatrix} 1 & 0&0&0\\0&x&0&0\\0&0&x&0\\0&0&0&x^2\end{pmatrix} \] and regard them as elements of $\mathrm{GSp}_4(\mathbf{Q}_x)$. If $x\not \in S$ (resp.\ $x\not\in S\cup Q$) we will consider the Hecke operators $T_{x,i}=[K\beta_{x,i}K]$ (resp.\ $T_{x,i}=[K_i(Q)\beta_{x,i}K_i(Q)]$) acting on each of the spaces \[ H^n(X,\omega(a,b)_M) \text{ (resp.\ $H^n(X_i(Q),\omega(a,b)_M)$)}
\]
as in \cite[\S 1.1.6]{skinner-urban-p-adic} or \cite[\S
8]{tilouine-bgg}. We also denote $T_{x,0}$ by $S_x$. The definition
of Hecke operators given in \cite{skinner-urban-p-adic} or
\cite{tilouine-bgg} applies when $x \neq p$ or when $p$ is
invertible on $M$. The remaining case $x = p$ requires more
care. In Lemma~\ref{lemma:Gross} below we show that $T_{p,1}$ and
$Q_{p,2} := (pT_{p,2} + (p + p^3)S_p)p^{2-b}$ exist as operators in
cohomological degree $n=0$ over $M = K/\mathcal{O}$.
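Observe that each $\beta_{x,i}$ does lie in $\mathrm{GSp}_4(\mathbf{Q}_x)$: with respect to the antidiagonal symplectic form, a diagonal matrix $\beta = \diag(d_1,d_2,d_3,d_4)$ is a similitude if and only if $d_1d_4 = d_2d_3$, in which case $\nu(\beta) = d_1d_4$; thus \[ \nu(\beta_{x,0}) = x^2, \qquad \nu(\beta_{x,1}) = x, \qquad \nu(\beta_{x,2}) = x^2. \]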
Similarly, if $x\in Q$, we have operators $U_{x,i}=[K_i(Q)\beta_{x,i}K_i(Q)]$ on $H^n(X_i(Q),\omega(a,b)_M)$. As in \S~\ref{sec:cohom}, the map $X_1(Q)\to X_0(Q)$ is Galois with Galois group $\Delta_Q:=\prod_{x\in Q}(\mathbf{Z}/x)^\times$. This gives rise to an action of $\Delta_Q$ on $H^n(X_1(Q),\omega(a,b)_M)$. For each $u\in \Delta_Q$, we denote the corresponding operator on $H^n(X_1(Q),\omega(a,b)_M)$ by $\langle u \rangle$.
Finally, we shall also exploit Hecke operators of a slightly different flavour, which we denote by~$U_{p,1}$ and~$U_{p,2}$ respectively. In the context of this paper, they may be considered formal operators on~$q$-expansions. (They can also be interpreted more classically as Hecke operators with level structure at~$p$.) Their key property is that the operators~$T_{p,1}$ and~$T_{p,2}/p^{k+j-6}$ act by~$U_{p,1}$ and~$U_{p,2}$ for large enough weights, including~$(j,k)$ plus any non-trivial multiple of~$(p-1,p-1)$ for~$j \ge k \ge 2$. Their explicit definition is given in Lemmas~\ref{eightthree} and \ref{eightfour}.
\begin{remark}
\label{rem:normalization}
We note that our definition of the Hecke action is the `natural' one
twisted by $\nu^{-3}$ (see \cite[1.1.6a]{skinner-urban-p-adic}). We
saw in the proof of Theorem~\ref{thm:lan-suh}, that for the natural
action, there is an isomorphism $\omega(a,b) \cong \mathcal{W}_{(a,b;-a-b)}$,
and hence over $\mathbf{C}$, an isomorphism $\omega(a,b) \cong
\mathcal{V}_{(-b,-a;-a-b)}$. Under our normalization of the Hecke action on
$\omega(a,b)$, we therefore have $\omega(a,b) \cong \mathcal{W}_{\mu}$ and,
over $\mathbf{C}$, $\omega(a,b) \cong \mathcal{V}_{\sigma}$ where we take:
\[ \mu = (a,b;3-a-b) \quad \mbox{and}\quad \sigma =
(-b,-a;6-a-b). \] \end{remark}
\begin{remark}
\label{rem:convention}
In view of the previous remark, we will identify the set $\mathbf{Z}^{2,+}:=\{ (a,b)\in \mathbf{Z}^2: a\geq b\}$
with the subset of $X^*(T)^+_M$ consisting of the weights $(a,b;3-a-b)$. Thus it makes sense
to speak of $\mu =(a,b) \in X^*(T)^+_M$. \end{remark}
\begin{remark}
\label{rem:central-chars}
Let $\mu = (a,b) \in X^*(T)^+_M$ and let $w = a+b-6$. For $x \in
\mathbf{Z}$, we can similarly define a Hecke operator associated to
$[K\diag(x,x,x,x)K]$ on the cohomology of $\omega(a,b)$: this
operator acts as $x^w = x^{a+b-6}$. Now, suppose that $\pi =
\pi^{\infty}\otimes \pi_\infty$ in $\mathcal{A}_{(2)}(G)$ contributes to
\[ H^i(X(\mathbf{C}),\omega(a,b))_{(2)}
\cong H^i( X(\mathbf{C}),\mathcal{W}_{\mu})_{(2)}
\cong H^i( X(\mathbf{C}),\mathcal{V}_{\sigma})_{(2)},\]
where $\sigma = (-b,-a;-w)$. It follows that the central character of
$\pi_{\infty}$ is given by:
\[ x \mapsto x^{-w}.\]
Furthermore, by Theorem~\ref{thm:contrib-to-char-0-cohom}, the
transfer of $\pi_{\infty}$ to $\mathrm{GL}_4(\mathbf{R})$ has infinitesimal
character $\chi_{\tau}$ where
\[ \tau = \left(0, -(b-2), -(a-1), -(a+b-3)\right)
+3/2(1,1,1,1). \] \end{remark}
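This expression for $\tau$ agrees with the formula of Theorem~\ref{thm:contrib-to-char-0-cohom} upon substituting $w = a+b-6$: for instance, \[ \frac{a+b-3-w}{2} = \frac{3}{2} \quad\text{and}\quad \frac{a-b+1-w}{2} = \frac{7-2b}{2} = -(b-2)+\frac{3}{2}, \] and the remaining two entries are checked in the same way.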
We now introduce some Hecke algebras. We note that in the following definition, we work over $K/\mathcal{O}$ rather than $\mathcal{O}$.
\begin{df}
\label{defn:hecke-alg} Let~$\mu = (a,b)\in X^*(T)^+_M$ with $a \ge b \ge 2$.
\begin{enumerate}
\item The anaemic Hecke algebra \[ \T^{\mathrm{an}}_{\mu}(Q) \subset \mathrm{End}_{\mathcal{O}}(H^0(X_1(Q),\omega(a,b)(-\infty)_{K/\mathcal{O}})) \] is the $\mathcal{O}$-algebra generated by the operators $T_{x,i}$ for $x \not\in S\cup Q\cup \{p\}$. \item Similarly, we let $\mathbf{T}_{\mu}(Q)$ be the algebra generated over $\T^{\mathrm{an}}_{\mu}(Q)$ by the operators $U_{x,i}$ for $x\in Q$ and $\langle u \rangle$ for $u\in \Delta_Q$. When $Q=\emptyset$, we have $ \T^{\mathrm{an}}_{\mu}(\emptyset) = \mathbf{T}_{\mu}(\emptyset)$ and we denote this algebra by $\mathbf{T}_{\mu}$. \item Finally, $\widetilde{\T}_{\mu}(Q)$ denotes the
$\mathbf{T}_{\mu}(Q)$-algebra generated by the operators~$T_{p,1}$ and~$Q_{p,2} = (p T_{p,2}
+ (p + p^3) S_p) p^{2-b}$. (The existence of these operators is
established in Lemma~\ref{lemma:Gross}.)
If $Q =\emptyset$, then we denote $\widetilde{\T}_{\mu}(\emptyset)$ by $\widetilde{\T}_{\mu}$. \end{enumerate} \end{df}
Note that the algebras $\T^{\mathrm{an}}_{\mu}(Q) \subset \mathbf{T}_{\mu}(Q) \subset \widetilde{\T}_{\mu}(Q)$ preserve the subspace \[ H^0(X_0(Q),\omega(a,b)(-\infty)_{K/\mathcal{O}}) \subset
H^0(X_1(Q),\omega(a,b)(-\infty)_{K/\mathcal{O}}). \]
We will also need to consider ordinary Hecke algebras.
Let $e = \varinjlim_n (T_{p,1}Q_{p,2})^{n!}$ denote the ordinary
idempotent associated to the Hecke operators $T_{p,1}$ and
$Q_{p,2}$. (We will only consider this operator in contexts where
the direct limit makes sense.) We define: \[ H^0(X_0(Q),\omega(a,b)(-\infty)_M)^{\mathrm{ord}} = e H^0(X_0(Q),\omega(a,b)(-\infty)_M) \] for $M = \mathcal{O}, \mathcal{O}/\varpi^m$ or $M = K/\mathcal{O}$. We thus have: {\small \[ H^0(X_0(Q),\omega(a,b)(-\infty)_M) = H^0(X_0(Q),\omega(a,b)(-\infty)_M)^{\mathrm{ord}} \bigoplus (1-e)H^0(X_0(Q),\omega(a,b)(-\infty)_M)\] } for such $M$.
\begin{df}
\label{df:ordinary-hecke-alg}
Let~$\mu = (a,b)$ with $a \ge b \ge 2$. We define the ordinary Hecke algebras
$\T^{\mathrm{an}}_{\mu}(Q)^{\mathrm{ord}}$ (resp.\ $\mathbf{T}_{\mu}(Q)^{\mathrm{ord}}$,
$\widetilde{\mathbf{T}}_{\mu}(Q)^{\mathrm{ord}}$) to be the image of $\T^{\mathrm{an}}_{\mu}(Q)$ (resp.\ $\mathbf{T}_{\mu}(Q)$,
$\widetilde{\mathbf{T}}_{\mu}(Q)$) in \[ \mathrm{End}_{\mathcal{O}}(H^0(X_0(Q),\omega(a,b)(-\infty)_{K/\mathcal{O}})^{\mathrm{ord}}). \] \end{df}
\section{Galois representations associated to modular forms}
As in Section~\ref{sec:cohom}, let $S$ and $Q$ be finite sets of primes of $\mathbf{Q}$ which are disjoint and do not contain $p$. We allow the possibility that $Q=\emptyset$. We let $K$ and $K_{i}(Q)$ be open compact subgroups of $\mathrm{GSp}_4(\mathbb{A}^{\infty})$ as in Section~\ref{sec:cohom}, and we let $X = X_K$ and $X_i(Q) = X_{K_i(Q)}$ be the corresponding Siegel threefolds, defined over $\mathcal{O}$.
\subsection{The Hasse invariant} \label{sec:hasse-invariant}
We begin with a definition.
\begin{df}
\label{defn:hasse-invts} Let $h \in H^0(X, \omega^{p-1}_k)$ be the Hasse invariant and let $A \in H^0(X, \omega^{r(p-1)})$ be a lift of $h^r$, for some $r>0$ which we fix for the rest of this section. \end{df}
The existence of such a lift $A$ follows from the Koecher principle and the ampleness of $\omega$ on the minimal compactification of $X$.
\begin{lemma}
\label{lem:hasse-mod-p} Let $\mu = (a,b)\in X^*(T)_M^+$ with $a\geq b \geq 2$. Then: \begin{enumerate} \item Multiplication by $h$ defines an injection: \[ h: H^0(X_1(Q),\omega(a,b)_k) \hookrightarrow H^0(X_1(Q),\omega(a+(p-1),b+(p-1))_k) \] which is equivariant for the Hecke operators $T_{x,i}$ for each $x \not\in S\cup Q \cup \{p\}$ and the operators $U_{x,i}$ for $x \in Q$. \item\label{ops-at-p} If $b\geq 3$, then this map is also equivariant for the operators $T_{p,1}$ and $Q_{p,2}$. \end{enumerate} \end{lemma}
\begin{proof}
It is well-known that multiplication by $h$ is injective and commutes with Hecke operators away from $p$. We may thus assume that $b \geq 3$. It is shown in \cite[\S A.3]{Pilloni-hida} and \cite[Lemme
8.7]{tilouine-bgg} that multiplication by $h$ commutes with the operators $U_{p,1}$ and $U_{p,2}$. Since $b \geq 3$, \cite[Lemme 8.5]{tilouine-bgg} implies that $T_{p,1}\equiv U_{p,1} \mod p$ and $p^{3-b}T_{p,2} \equiv U_{p,2} \mod p$. It follows that $T_{p,1}$ and $Q_{p,2} = p^{3-b}T_{p,2} + (1+p^2)p^{3-b}S_p$ also commute with $h$. \end{proof}
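The two normalizations of $Q_{p,2}$ used above and in Definition~\ref{defn:hecke-alg} agree, since \[ \left(pT_{p,2} + (p+p^3)S_p\right)p^{2-b} = p^{3-b}T_{p,2} + (1+p^2)p^{3-b}S_p. \]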
Suppose that $\mu = (a,b) \in X^*(T)_M^+$ with $a\geq b \geq 2$. By the proof of \cite[Th\'eor\`eme 6.2]{Pilloni-hida}, there exists an integer $N(\mu)$ as in the following definition.
\begin{df}
\label{df:suff-large-wt}
Let $N(\mu)$ be an integer such that for all $t \geq N(\mu)$,
$i>0$, and $Z \in \{ X, X_0(Q), X_1(Q)\}$, the cohomology group
\[ H^i(Z, \omega(a+t, b+t)(-\infty)_k)\] vanishes.
\end{df} Note that for such $t \geq N(\mu)$, the maps \begin{eqnarray*}
H^0(X,\omega(a+t, b+t)(-\infty)) &\to& H^0(X, \omega(a+t, b+t)(-\infty)_k) \\
H^0(X,\omega(a+t, b+t)(-\infty)_K) &\to& H^0(X, \omega(a+t, b+t)(-\infty)_{K/\mathcal{O}}) \end{eqnarray*} are both surjective. The same is true over $X_0(Q)$ and $X_1(Q)$.
\begin{lemma} \label{lem:hasse-mod-p^m} Let $\mu = (a,b)\in X^*(T)_M^+$ with $a\geq b \geq 2$ and let $m>0$. There exists an integer $s>0$ such that, if we set $t = rs(p-1)$, then: \begin{enumerate}
\item $t \geq N(\mu)$, and
\item multiplication by $A^s$ defines an injection \[ H^0(X_1(Q),{\omega(a,b)}_{\mathcal{O}/\varpi^m}) \hookrightarrow H^0(X_1(Q),{\omega(a+t,b+t)}_{\mathcal{O}/\varpi^m}) \] which is equivariant for the Hecke operators $T_{x,i}$ for each $x \not\in S\cup Q \cup \{p\}$ and the operators $U_{x,i}$ for each $x \in Q$.
\end{enumerate} \end{lemma}
\begin{proof}
The second property holds as long as $p^{m-1} | s$ (see \cite[Theorem
6.2.1]{goldring}), so it suffices to take $s$ equal to any integer
greater than $N(\mu)/r(p-1)$ and divisible by $p^{m-1}$. \end{proof}
Let $\mu = (a,b)\in X^*(T)_M^+$ with $a\geq b \geq 2$. Recall that the Hecke algebras \[ \T^{\mathrm{an}}_{\mu}(Q) \subset \mathbf{T}_{\mu}(Q) \subset \widetilde{\T}_{\mu}(Q) \subset \mathrm{End}_{\mathcal{O}}(H^0(X_1(Q),\omega(a,b)(-\infty)_{K/\mathcal{O}})) \] were defined in Definition~\ref{defn:hecke-alg}.
\begin{remark}
\label{rem:hasse-hecke} For $\mu = (a,b)$ with~$a \ge b \ge 2$ and each $m >0$, we have $$H^0(X_1(Q),{\omega(a,b)(-\infty)}_{\mathcal{O}/\varpi^m})\cong H^0(X_1(Q),{\omega(a,b)(-\infty)}_{K/\mathcal{O}})[\varpi^m].$$ Let $I_{\mu,m}$ (resp.\ $\widetilde{I}_{\mu,m}$) denote the annihilator of the former space in $\mathbf{T}_{\mu}(Q)$ (resp.\ $\widetilde{\mathbf{T}}_{\mu}(Q)$). If $s$ and $t$ are as in Lemma~\ref{lem:hasse-mod-p^m}, then multiplication by $A^s$ induces a surjective map: \[ \mathbf{T}_{\mu'}(Q) \twoheadrightarrow \mathbf{T}_{\mu}(Q)/I_{\mu,m}, \] where $\mu' = \mu + (t,t)$. In particular, any maximal ideal $\mathfrak{m}$ of $\mathbf{T}_{\mu}(Q)$ pulls back under this map to a maximal ideal of $\mathbf{T}_{\mu'}(Q)$ which we will also denote by $\mathfrak{m}$.
Similarly, Lemma~\ref{lem:hasse-mod-p} induces a map \[ \mathbf{T}_{\mu'}(Q) \twoheadrightarrow \mathbf{T}_{\mu}(Q)/I_{\mu,1} \] where $\mu' = \mu + (p-1,p-1)$ and, if $b\geq 3$, this extends to a map \[ \widetilde{\mathbf{T}}_{\mu'}(Q) \twoheadrightarrow \widetilde{\mathbf{T}}_{\mu}(Q)/\widetilde{I}_{\mu,1}. \] \end{remark}
\subsection{Preliminaries on Galois representations} \label{sec:prel-galo-repr}
We now turn our attention to Galois representations.
\begin{prop}
\label{prop:similitude} Let $\mu = (a,b) \in X^*(T)_M^+$ and let $w = a+b-6$. There is a continuous character \[ \chi_{\mu} : G_{\mathbf{Q}} \to \T^{\mathrm{an}}_{\mu}(Q)^\times \] such that: \begin{enumerate}
\item $\chi_{\mu}|G_{\mathbf{Q}_p}$ is crystalline with Hodge--Tate weight $w$; \item for all $x\not\in S\cup Q\cup\{p\}$, $\chi_{\mu}$ is unramified
at $x$ and $\chi_{\mu}(\mathrm{Frob}_x) = S_x$. \end{enumerate} In particular, \[ \chi_{\mu} = \chi_{\mu,0} \epsilon^{-w} \] for some finite order character $\chi_{\mu,0}: G_{\mathbf{Q}} \to \widetilde{\mathbf{T}}_{\mu}(Q)^\times$. \end{prop}
\begin{proof}
This follows from the proof of \cite[Proposition 4]{TaylorDuke},
noting that we have twisted the Hecke action by $\nu^{-3}$ (see Remark~\ref{rem:central-chars}). \end{proof}
\begin{df}
\label{df:Hecke-polynomial} For a prime $x$, we introduce the Hecke polynomial: \[ Q_x(X) = X^4 - T_{x,1} X^3 + (x T_{x,2} + (x^3 + x) S_x) X^2 -
x^3 S_xT_{x,1} X + x^6 S_x^2. \] \end{df}
If a modular form $f$ is an eigenform for a collection of Hecke operators $T$, we denote by $\lambda_f$ the map such that $T f = \lambda_f(T)f$ for each $T$. In particular, if $f$ is an eigenform for the operators $T_{x,i}$ at $x$, then we can specialize the polynomial $Q_x(X)$ at $f$ (by applying $\lambda_f$ to its coefficients) to get $\lambda_f(Q_x(X))$.
\begin{prop}
\label{prop:weiss} Let $\mu = (a,b) \in X^*(T)_M^+$ with $a \geq b \geq 3$. Let $w = a+b-6$ and $\mathbf{w} = w+3 = a+b-3$. Let \[ f \in H^0(X_1(Q), \omega(a,b)(-\infty)) \] be a cuspidal eigenform for the operators $T_{x,i}$ for all $x \not \in Q \cup S $ and $i = 0,1,2$. Then there is a continuous semisimple representation \[ r_f : G_{\mathbf{Q}} \to \mathrm{GSp}_4(K') \] defined over a finite extension $K'/K$ such that: \begin{enumerate} \item\label{W-simil} The similitude character $\nu \circ r_f$ is given
by \[ \nu \circ r_f = \lambda_f \circ \chi_{\mu}\epsilon^{-3} = \lambda_f \circ \chi_{\mu,0}\epsilon^{-\mathbf{w}}.\] \item\label{W-unram} $r_f$ is unramified at primes $x \not\in Q \cup S \cup \{p\}$,
and at such primes, the characteristic polynomial of $r_f(\mathrm{Frob}_x)$
is given by: \[ \det(X - r_f(\mathrm{Frob}_x)) = \lambda_f(Q_x(X)).\]
\item\label{W-at-p} The restriction $r_f | G_{\mathbf{Q}_p}$ is crystalline with Hodge--Tate
weights $\mathbf{w},(a-1),(b-2),0$. If, in addition, $f$ is an eigenform for
the Hecke operators at $p$, then the characteristic polynomial of
$\Phi$ on $D_{\mathrm{cris}}(r_{f}|G_{\mathbf{Q}_p})$ is $\lambda_f(Q_p(X))$. \item\label{W-ord} Suppose $f$ is ordinary in the sense that it is an
eigenform for $T_{p,1}$ and
$Q_{p,2}$ whose eigenvalues are $p$-adic units. Then $\lambda_f(Q_p(X))$ has distinct roots
$\alpha_p, \beta_p, \gamma_p, \delta_p$ with $p$-adic valuations $0,
b-2, a-1, \mathbf{w}$, respectively. Furthermore, $r_f|G_{\mathbf{Q}_p}$ is
conjugate in $\mathrm{GSp}_4(K')$ to a representation of the form \[
\left( \begin{matrix} \lambda(\alpha_p) & * & * & * \\ 0 & \epsilon^{-(b-2)} \cdot \lambda(p^{-(b-2)}\beta_p) & * & * \\ 0 & 0 & \epsilon^{-(a-1)} \cdot \lambda(p^{-(a-1)}\gamma_p) & * \\ 0 & 0 & 0 & \epsilon^{-\mathbf{w}} \cdot \lambda(p^{-\mathbf{w}}\delta_p) \end{matrix} \right) \] \item\label{W-loc-glob} If $r_f$ is absolutely irreducible, then it
satisfies local-global compatibility at all primes. \end{enumerate} \end{prop}
\begin{proof}
The existence of $r_f$ follows from the work of Taylor, Laumon and
Weissauer. Some of the finer properties are due to Urban,
Genestier--Tilouine, Gan--Takeda, Sorensen and Mok.
Fix an embedding $\imath:K\hookrightarrow \mathbf{C}$ and let $\pi$ be a cuspidal automorphic representation of
$\mathrm{GSp}_4(\mathbb{A}_{\mathbf{Q}})$ which contributes to the $f$-part of $H^0(X_1(Q),
\omega(a,b)(-\infty)_{\mathbf{C}})$ under the isomorphism of the first part of
Theorem~\ref{thm:coherent-cohom-lie-alg-cohom} (with $\sigma =
(-b,-a;6-a-b)$, as in Remark~\ref{rem:normalization}).
We take $r_f : G_{\mathbf{Q}} \to \mathrm{GL}_4(\overline{K})$ to be the representation
$R_p$ of \cite[Theorem 3.5]{Mok} associated to $\pi$. When $\pi$ is
simple, generic in the terminology of \cite{Mok}, the representation
can be conjugated to take values in $\mathrm{GSp}_4(\overline{K})$, by the main
theorem of \cite{bellaiche-chenevier-families}. In the
remaining cases, the representation $R_p$ is reducible and can
easily be seen to be symplectic. The usual
Baire category argument implies that $r_f$ can be defined over a
finite extension of $K$. Thus in all cases, we may take $r_f
: G_{\mathbf{Q}} \to \mathrm{GSp}_4(K')$.
Parts~\eqref{W-simil}--\eqref{W-loc-glob} follow from the statement of \cite[Theorem 3.5]{Mok}.
\end{proof}
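We note the following consistency check on part~\eqref{W-ord}: the $p$-adic valuations of the roots of $\lambda_f(Q_p(X))$ sum to \[ 0 + (b-2) + (a-1) + \mathbf{w} = 2(a+b-3) = 2\mathbf{w}, \] which is precisely the valuation of the constant term $\lambda_f(p^6S_p^2) = p^{6+2w} = p^{2\mathbf{w}}$, since $S_p = [K\beta_{p,0}K]$ acts as $p^w$ by Remark~\ref{rem:central-chars}.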
\begin{lemma} \label{lem:galois-mod-m}
Let $\mu = (a,b)\in X^*(T)_M^+$ with $a\geq b \geq 2$ and let $\mathfrak{m}$
be a maximal ideal of $\T^{\mathrm{an}}_{\mu}(Q)$. Then there is a continuous
semisimple representation \[ \overline{r}_{\mathfrak{m}} : G_{\mathbf{Q}} \to \mathrm{GL}_4(\T^{\mathrm{an}}_{\mu}(Q)/\mathfrak{m}) \] such that for each $x \not\in S\cup Q \cup \{p\}$, the restriction
$\overline{r}_{\mathfrak{m}}|G_{\mathbf{Q}_x}$ is unramified and $\overline{r}_{\mathfrak{m}}(\mathrm{Frob}_x)$ has characteristic polynomial $Q_x(X)$.
If $\overline{r}_{\mathfrak{m}}$ is absolutely irreducible, then the representation $\overline{r}_{\mathfrak{m}}$ preserves a symplectic pairing and hence, after conjugation, we have a representation: \[ \overline{r}_{\mathfrak{m}} : G_{\mathbf{Q}} \to \mathrm{GSp}_4(\T^{\mathrm{an}}_{\mu}(Q)/\mathfrak{m}) \] \end{lemma}
\begin{proof}
Choose an integer $s$ as in Lemma~\ref{lem:hasse-mod-p^m} with $m$
taken to equal $1$ and let $t = rs(p-1)$. Let $f \in
H^0(X_1(Q),\omega(a+t,b+t)(-\infty))\otimes \overline{K}$ be an
eigenform for $\widetilde{\T}_{\mu'}(Q)_{\mathfrak{m}}$. Let $r_f$ be the Galois
representation associated to $f$ by Proposition~\ref{prop:weiss} and take
$\overline{r}_{\mathfrak{m}}$ to be the semisimplification of a reduction of $r_f$ to
characteristic $p$. The resulting representation is defined over the
algebraic closure of $\T^{\mathrm{an}}_{\mu}(Q)/\mathfrak{m}$, but by the argument of
\cite[Prop.\ 3.4.2]{CHT}, we see that after conjugation, it may be
defined over $\T^{\mathrm{an}}_{\mu}(Q)/\mathfrak{m}$.
For the last part: let $\pi$ be the transfer to $\mathrm{GL}_4$ (given by
\cite{arthur-gsp4}) of the automorphic representation generated by
$f$. Then $\pi$ descends to an automorphic representation $\Pi$ of a
unitary group over $\mathbf{Q}$. The family of $\ell$-adic Galois
representations associated to $\Pi$ is the same as that associated
to $f$. Thus, \cite[Theorem 1.2]{bell-chen} and the fact that
$\overline{r}_{\mathfrak{m}}$ is absolutely irreducible imply that $r_f$ is
symplectic. The same is then true of $\overline{r}_{\mathfrak{m}}$ (by absolute
irreducibility). \end{proof}
\begin{remark}
By the same argument, the previous result holds if we replace
$\T^{\mathrm{an}}_{\mu}(Q)$ by $\mathbf{T}_{\mu}(Q)$ or $\widetilde{\mathbf{T}}_{\mu}(Q)$. \end{remark}
\begin{df}
\label{defn:eisenstein} We say that $\mathfrak{m}$ is \emph{non-Eisenstein} if the representation $\overline{r}_{\mathfrak{m}}$ is absolutely irreducible. \end{df}
\subsection{Galois representations in cohomological weights} \label{sec:gal-rep-cohom}
Let $\overline{r} : G_{\mathbf{Q}} \to \mathrm{GSp}_4(k)$ be a representation as in Section \ref{section:deformations}. By Assumption~\ref{assumption:neatness} and by Cebotarev, there exist infinitely many primes~$q$ such that no pair of eigenvalues of~$\overline{r}(\mathrm{Frob}_q)$ has ratio~$q \bmod p$ and such that~$q \not\equiv 1 \bmod p$. Choose any such~$q$ distinct from~$p$ and from all primes of bad reduction of~$\overline{r}$. We take $S = S(\overline{r}) \cup \{q\}$ and $Q$ a possibly empty set of primes disjoint from $S\cup \{ p\}$. We define a compact open subgroup $K = \prod_x K_x$ of $\mathrm{GSp}_4(\mathbb{A}^{\infty})$ as follows: \begin{enumerate} \item If $x = p$, or if $\overline{r}$ is unramified at $x$ and~$x \ne q$, then $K_x = \mathrm{GSp}_4(\mathbf{Z}_x)$. \item If $x$ is of type {\bf U3\rm}, then $K_x = I(x)$, where $I(x)$ is the Iwahori subgroup. \item If $x$ is of type {\bf U2\rm}, then $K_x = \Pi(x)$, where $\Pi(x)$ is the Klingen parahoric.
\item If $x$ is of type {\bf U1\rm}, then $K_x = K(x)$, where $K(x)$ is the paramodular group at $x$. \item If $x$ is of type {\bf P\rm}, then $K_x = \Pi(x)^{+}$ (and $x-1$ is prime to $p$). \item If $x$ is of type {\bf H\rm}, then $K_x$ is the full congruence subgroup of level $x$. \item If~$x = q$, then $K_x$ is the full congruence subgroup of level $x$. \end{enumerate} We then let $X = X_K$ and $X_{i}(Q) = X_{K_{i}(Q)}$ as in Section~\ref{sec:cohom}.
Let $\mu = (a,b)\in X^*(T)_M^+$ with $a\geq b \geq 3$ be a \emph{regular} weight and let $\mathfrak{m}_{\emptyset}$ be a maximal ideal of $\mathbf{T}_{\mu}^{\mathrm{ord}}$ (the ordinary Hecke algebra with $Q = \emptyset$) with residue field $k$. Then $\mathfrak{m}_{\emptyset}$ pulls back to an ideal of $\T^{\mathrm{an}}_\mu(Q)^{\mathrm{ord}}$ which in turn pushes forward to an ideal of $\mathbf{T}_{\mu}(Q)^{\mathrm{ord}}$. We denote both of these ideals by $\mathfrak{m}_{\emptyset}$, in a slight abuse of notation. The ideal $\mathfrak{m}_{\emptyset} \subset \T^{\mathrm{an}}_\mu(Q)^{\mathrm{ord}}$ is maximal but $\mathfrak{m}_{\emptyset}\subset \mathbf{T}_\mu(Q)^{\mathrm{ord}}$ need not be maximal -- there may be multiple maximal ideals $\mathfrak{m}$ of $\mathbf{T}_{\mu}(Q)^{\mathrm{ord}}$ that contain it. We make the following assumption:
\begin{assumption}
\label{assumption:hecke-galois-rep-regular}
Let $\overline{r}$, $\mu$ and $\mathfrak{m}_{\emptyset}$ be as above. Then: \begin{enumerate} \item We have $\overline{r}_{\mathfrak{m}_{\emptyset}} \cong \overline{r}$. In particular,
since $\overline{r}$ is absolutely irreducible, $\mathfrak{m}_{\emptyset}$ is
non-Eisenstein.
\item\label{ass:TW-at-Q} For each $x \in Q$, $x \equiv 1 \mod p$ and $\overline{r}|G_x$ is a
direct sum of four pairwise distinct characters with Frobenius
eigenvalues $\alpha_x, \beta_x, \gamma_x, \delta_x$. We assume the
eigenvalues have been labeled so that the plane
$\lambda(\alpha_x)\oplus \lambda(\beta_x)$ is isotropic, and hence $\alpha_x\delta_x = \beta_x\gamma_x$. \end{enumerate}
\end{assumption} We let $\mathfrak{m} \subset \mathbf{T}_{\mu}(Q)^{\mathrm{ord}}$ be any maximal ideal which contains $\mathfrak{m}_{\emptyset}$. The representations $\overline{r}_{\mathfrak{m}}$, $\overline{r}_{\mathfrak{m}_\emptyset}$ and $\overline{r}$ are all isomorphic.
We now turn to the prime $p$. Let $\alpha,\beta\in k^\times$ be the elements associated to $\overline{r}|G_{\mathbf{Q}_p}$ at the beginning of Section~\ref{section:deformations}. For $M = \mathcal{O}, \mathcal{O}/\varpi^m$ or $K/\mathcal{O}$, we define: \begin{itemize} \item $H^0(X_1(Q),\omega(a,b)(-\infty)_M)^{\beta}$ to be the subspace
of $H^0(X_1(Q),\omega(a,b)(-\infty)_M)$ given by the image of the idempotent
$e_{\beta} = \varinjlim_n ((T_{p,1} -
\tilde\beta)(Q_{p,2}-\tilde\alpha\tilde\beta))^{n!}$, where
$\tilde{\alpha}$ and $\tilde{\beta}$ are lifts of $\alpha$ and
$\beta$ to $\mathcal{O}$. \item $\T^{\mathrm{an}}_{\mu}(Q)^{\beta}$ (resp.\ $\mathbf{T}_\mu(Q)^\beta$,
$\widetilde{\mathbf{T}}_\mu(Q)^\beta$) to be the image of $\T^{\mathrm{an}}_{\mu}(Q)$ (resp.\ $\mathbf{T}_\mu(Q)$,
$\widetilde{\mathbf{T}}_\mu(Q)$) in \[ \mathrm{End}_{\mathcal{O}}(H^0(X_1(Q),\omega(a,b)(-\infty)_M)^{\beta}). \] \end{itemize} We also make the analogous definitions with $\alpha$ and $\beta$ swapping roles.
\begin{theorem} \label{theorem:highergood} Let $\mu = (a,b)$, $\mathfrak{m}_{\emptyset}$ and $\mathfrak{m}$ be as above, and suppose that Assumption~\ref{assumption:hecke-galois-rep-regular} holds. Let $w = a+b-6$ and $\mathbf{w} = w+3 = a+b-3$. Then there exists a continuous representation $$r=r_{\mu,\mathfrak{m}}^{\beta}: G_{\mathbf{Q}} \rightarrow \mathrm{GSp}_4(\mathbf{T}_{\mu}(Q)_{\mathfrak{m}}^\beta)$$ lifting $\overline{r}_{\mathfrak{m}}=\overline{r}$ and such that: \begin{enumerate}
\item\label{reg-simil} The similitude character $\nu\circ r$ is given by: \[ \nu \circ r = \chi_{\mu} \epsilon^{-3} = \chi_{\mu,0} \epsilon^{-\mathbf{w}},\] where~$\chi_{\mu,0}$ is a finite order character unramified at~$p$ which is trivial modulo~$\mathfrak{m}$.
\item\label{reg-unram} For each prime $x\not\in S\cup Q\cup \{p\}$, $r$ is unramified at $x$ and $r(\mathrm{Frob}_x)$ has characteristic polynomial $Q_x(X)$. \item\label{reg-ord} There are units $d_{p,1},\dots,d_{p,4} \in \mathbf{T}_{\mu}(Q)_\mathfrak{m}^\beta$ satisfying $$Q_p(X) = (X - d_{p,1})(X - p^{b-2} d_{p,2})(X - p^{a-1} d_{p,3})(X - p^{\mathbf{w}} d_{p,4}) \in \mathbf{T}_{\mu}(Q)_\mathfrak{m}^\beta[X],$$ and such that: \begin{enumerate} \item We have $d_{p,1} \mod \mathfrak{m} = \beta$ and $d_{p,2} \mod \mathfrak{m} =
\alpha$;
\item $r|G_{\mathbf{Q}_p}$ is conjugate in $\mathrm{GSp}_4$ to a representation of the form: \[ \left( \begin{matrix} \lambda(d_{p,1}) & * & * & * \\ 0 & \epsilon^{-(b-2)} \cdot \lambda(d_{p,2}) & * & * \\ 0 & 0 & \epsilon^{-(a-1)} \cdot \lambda(d_{p,3}) & * \\ 0 & 0 & 0 & \epsilon^{-\mathbf{w}}\cdot \lambda(d_{p,4}) \end{matrix} \ \ \right) \] \end{enumerate} \item\label{reg-deformation} After twisting by the unique square-root of~$\chi_{\mu,0}$ which is trivial modulo~$\mathfrak{m}$, the deformation $r$ of $\overline{r}$ satisfies properties
\eqref{outside-NQ}--~\eqref{at-Q} of Definition~\ref{defn:minimal}. \end{enumerate} \end{theorem}
\begin{remark} \emph{We expect that, under the given assumptions, the Hecke rings in question are torsion free. However, we avoid having to prove this by passing to sufficiently high weight.} \end{remark}
\begin{proof} As in Remark~\ref{rem:hasse-hecke}, $I_{\mu,m}$ denotes the annihilator of $H^0(X_1(Q),{\omega(a,b)(-\infty)}_{\mathcal{O}/\varpi^m})$ in $\mathbf{T}_{\mu}(Q)$. Since $\mathbf{T}_{\mu}(Q)_{\mathfrak{m}}=\varprojlim_m \mathbf{T}_{\mu}(Q)_{\mathfrak{m}}/I_{\mu,m}$, it suffices to construct, for each $m>0$, a representation $r_{m} : G_\mathbf{Q} \to \mathrm{GSp}_4(\mathbf{T}_{\mu}(Q)_{\mathfrak{m}}^\beta/I_{\mu,m})$ satisfying the conditions of the theorem. We thus fix an $m>0$. Choose an integer $s>0$ as in Lemma~\ref{lem:hasse-mod-p^m} and let $t = rs(p-1)$. By Lemma~\ref{lem:hasse-mod-p^m} and Lemma~\ref{lem:hasse-mod-p}~\eqref{ops-at-p}, multiplication by $A^s$ restricts to a map: \[ H^0(X_1(Q),\omega(a,b)(-\infty)_{\mathcal{O}/\varpi^m})^{\beta}_{\mathfrak{m}} \hookrightarrow H^0(X_1(Q),\omega(a+t,b+t)(-\infty)_{\mathcal{O}/\varpi^m})^{\beta}_{\mathfrak{m}}. \] This in turn gives rise to a surjective map $\mathbf{T}_{\mu'}(Q)_{\mathfrak{m}}^\beta \twoheadrightarrow \mathbf{T}_{\mu}(Q)_{\mathfrak{m}}^{\beta}/I_{\mu,m}$. Thus it suffices to prove the result in weight $\mu' := (a',b') := (a+t,b+t)$.
Since $t \geq N(\mu)$, we have that \[ H^0(X_1(Q),\omega(a',b')(-\infty)_{K/\mathcal{O}}) \cong H^0(X_1(Q),\omega(a',b')(-\infty))\otimes K/\mathcal{O} \] and hence we may regard $\mathbf{T}_{\mu'}(Q)$ as acting faithfully on both \[ H^0(X_1(Q),\omega(a',b')(-\infty)) \mbox{\ \ and\ \ } H^0(X_1(Q),\omega(a',b')(-\infty)_K). \] Thus we have \[ \mathbf{T}_{\mu'}(Q)_{\mathfrak{m}}^{\beta} \hookrightarrow \prod_i \mathcal{O}_{K_i} \] where the $K_i$ are a finite collection of finite extensions of $K$, one for each minimal prime $\wp_i$ of $\mathbf{T}_{\mu'}(Q)_{\mathfrak{m}}^{\beta}$. Each such minimal prime corresponds to an eigenform $f_i$ for $\mathbf{T}_{\mu'}(Q)_{\mathfrak{m}}^{\beta}$. The eigenform $f_i$ has an associated Galois representation $r_{f_i} : G_{\mathbf{Q}} \to \mathrm{GSp}_4(\mathcal{O}_{K'_i})$ for some finite extension $K'_i/K_i$, by Proposition~\ref{prop:weiss}. After conjugation, we may assume that each $r_{f_i}$ reduces to $\overline{r}$.
By the argument of the proof of \cite[3.4.4]{CHT}, using \cite[Lemma 7.1.1]{gee-ger} in place of \cite[2.1.12]{CHT}, we see that the representation $\prod_i r_{f_i}$ descends to a representation $r : G_{\mathbf{Q}} \to \mathrm{GSp}_4(\mathbf{T}_{\mu'}(Q)_{\mathfrak{m}}^{\beta})$. It follows from Proposition~\ref{prop:weiss} that $r$ satisfies properties \eqref{reg-simil}--\eqref{reg-ord} of the theorem. For part \eqref{reg-ord}, note that $Q_p(X) \in \mathbf{T}_{\mu'}(Q)_{\mathfrak{m}}^{\beta}[X]$ factors as \[ (X-d_{p,1})(X-p^{b-2}d_{p,2})(X-p^{a-1}d_{p,3})(X-p^{\mathbf{w}}d_{p,4}) \] for units $d_{p,i} \in \mathbf{T}_{\mu'}(Q)_{\mathfrak{m}}^{\beta}$. We also have $T_{p,1} \equiv \beta \mod \mathfrak{m}$ and $Q_{p,2} \equiv \alpha\beta \mod \mathfrak{m}$ in $\mathbf{T}_{\mu'}(Q)_{\mathfrak{m}}^{\beta}$ (by definition of the idempotent $e_\beta$). Since $Q_p(X) = X^4 - T_{p,1} X^3 + p^{b-2}Q_{p,2}X^2 - \dots$, we deduce that $d_{p,1} \equiv \beta \mod \mathfrak{m}$ and $d_{p,2} \equiv \alpha \mod \mathfrak{m}$.
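The coefficient comparison implicit in the last sentence can be spelled out as follows (a sketch; recall that $a \geq b \geq 3$ and that $\mathbf{T}_{\mu'}(Q)_{\mathfrak{m}}^{\beta}$ embeds in $\prod_i \mathcal{O}_{K_i}$, hence is $p$-torsion free). Expanding the factorization of $Q_p(X)$ gives:

```latex
% X^3- and X^2-coefficients of Q_p(X), expanded from the factorization:
\begin{align*}
T_{p,1} &= d_{p,1} + p^{b-2} d_{p,2} + p^{a-1} d_{p,3} + p^{\mathbf{w}} d_{p,4},\\
p^{b-2} Q_{p,2} &= p^{b-2}\bigl( d_{p,1} d_{p,2}
  + p\cdot(\text{element of } \mathbf{T}_{\mu'}(Q)_{\mathfrak{m}}^{\beta}) \bigr).
\end{align*}
```

Reducing the first identity modulo $\mathfrak{m}$ gives $d_{p,1} \equiv T_{p,1} \equiv \beta$ (since $b \geq 3$, the remaining terms are divisible by $p$); cancelling $p^{b-2}$ in the second identity (using $p$-torsion freeness) and reducing gives $d_{p,1} d_{p,2} \equiv Q_{p,2} \equiv \alpha\beta$, whence $d_{p,2} \equiv \alpha$.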
To show that $r$ satisfies properties \eqref{outside-NQ}--~\eqref{at-Q} of Definition~\ref{defn:minimal}, it suffices to show that each $r_{f_i}$ does so. In fact, property \eqref{outside-NQ} has already been established with the exception of the prime~$x = q$. If~$x = q$, then (by our assumptions)~$\mathrm{ad}^0(\overline{r})(1)$ as a~$G_{\mathbf{Q}_q}$-module contains no subquotient isomorphic to~$k$, and so~$H^2(\mathbf{Q}_q,\mathrm{ad}^0(\overline{r})) \simeq H^0(\mathbf{Q}_q,\mathrm{ad}^0(\overline{r})(1))^{*} = 0$. Since~$q \ne p$, it follows that~$H^1(\mathbf{Q}_q,\mathrm{ad}^0(\overline{r}))$ consists entirely of unramified classes. In particular, all lifts of~$\overline{r}$ are automatically unramified at~$q$.
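The dimension count behind the last assertion can be made explicit (a sketch; we combine the local Euler characteristic formula at the prime $q \neq p$ with the standard identity $\dim_k H^1_{\mathrm{ur}}(\mathbf{Q}_q, M) = \dim_k H^0(\mathbf{Q}_q, M)$):

```latex
% Local Euler characteristic at q (q \neq p) together with H^2 = 0:
\[
\dim_k H^1(\mathbf{Q}_q,\mathrm{ad}^0(\overline{r}))
= \dim_k H^0(\mathbf{Q}_q,\mathrm{ad}^0(\overline{r}))
+ \dim_k H^2(\mathbf{Q}_q,\mathrm{ad}^0(\overline{r}))
= \dim_k H^1_{\mathrm{ur}}(\mathbf{Q}_q,\mathrm{ad}^0(\overline{r})),
\]
```

so the unramified classes exhaust all of $H^1(\mathbf{Q}_q,\mathrm{ad}^0(\overline{r}))$.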
Since $\mathfrak{m}$ is non-Eisenstein, it follows from Proposition~\ref{prop:weiss}\eqref{W-loc-glob} that $r_{f_i}$ satisfies local-global compatibility at all primes. Thus we may apply the results of \cite[\S 4.5]{Sor}. We now turn to property \eqref{at-special} of Definition~\ref{defn:minimal}. If $x \in S(\overline{r})$ is of type {\bf U3}, then $\overline{r}(I_x)$ is unipotent and generated by a conjugate of $\exp(N_3)$. Since $K_x = I(x)$, \cite[Corollary 1]{Sor} implies that $r_{f_i}(I_x)$ is topologically generated by a conjugate of $\exp(N_3)$, $\exp(N_2)$ or $\exp(N_1)$. The latter two cases are incompatible with the residual representation being of nilpotent rank 3. Similarly, if $x \in S(\overline{r})$ is of type {\bf U2}, then $K_x = \Pi(x)$ and \cite[Corollary 1]{Sor} implies that $r_{f_i}(I_x)$ is topologically generated by a conjugate of $\exp(N_2)$ or $\exp(N_1)$. The latter case is incompatible with the residual representation being of nilpotent rank 2. Finally, if~$x \in S(\overline{r})$ is of type {\bf U1}, then~$K_x = K(x)$. It then suffices to note, following~\cite[\S 4.5]{Sor}, that the corresponding representation~$\pi_x$ is \emph{para-spherical}, that is, has a non-zero vector fixed by a non-special maximal compact subgroup, namely~$K(x)$ itself. This establishes property \eqref{at-special}. For property \eqref{at-principle}, suppose that $x \in S(\overline{r})$ is of type {\bf P}. Then $K_x = \Pi(x)^+$. It follows from \cite[Corollary 1]{Sor} that $\Pi(x)$ has no invariants on the automorphic representation generated by $f_i$ (as otherwise
$r_{f_i}|I_x$ would be unipotent, contradicting the assumption on $\overline{r}$ at $x$). Thus $\Pi(x)/\Pi(x)^+$ acts through a non-trivial character on the space of $\Pi(x)^+$ invariants. By \cite[Corollary 3]{Sor} all such characters have to lift the character
$\nu \circ \overline{r}|I_{x}$. However, since $x -1$ is prime to $p$, there is a unique such character, and the result follows from \cite[Corollary 3]{Sor}.
Finally, we turn to property \eqref{at-Q} of Definition~\ref{defn:minimal}. Let $x \in Q$, and recall that $K_x = \Pi(x)^+$. Let $\pi$ be the automorphic representation generated by $f_i$. Consider first the case where $\pi_x$ has non-trivial $\Pi(x)$-invariants. Then $\pi_x$ is a subquotient of an unramified principal series. By part~\eqref{ass:TW-at-Q} of Assumption~\ref{assumption:hecke-galois-rep-regular} and \cite[Prop.\ 3.2.3]{TG}, we see that $\pi_x$ is unramified. In this case, property \eqref{at-Q} of Definition~\ref{defn:minimal} certainly holds for $r_{f_i}$. In the remaining case, where $\pi_x$ has no non-trivial $\Pi(x)$-invariants, we see that $\Pi(x)/\Pi(x)^+$ acts through a non-trivial character on $\pi_x^{\Pi(x)^+}$, and the required property holds by~\cite[Corollary 3]{Sor}. \end{proof}
\subsection{Galois representations in low weights} \label{sec:gal-rep-low}
We let $\overline{r} : G_{\mathbf{Q}} \to \mathrm{GSp}_4(k)$, $S = S(\overline{r})$, $Q$ and $K \subset \mathrm{GSp}_4(\mathbb{A}^{\infty})$ be as in the previous section. Recall that in Section~\ref{section:deformations}, we fixed two units
$\alpha,\beta \in k^\times$ associated to $\overline{r}|G_{\mathbf{Q}_p}$. We now let $\sigma = (a,2) \in X^*(T)_M^{+}$ with $a\geq 2$ denote a non-regular weight.
\begin{df}
\label{df:katz-modular} We say that $\overline{r}$ is \emph{Katz modular of weight $\sigma$} if there exists a maximal ideal $\mathfrak{m}_{\emptyset}$ of $\mathbf{T}_{\sigma}$ such that: \begin{enumerate} \item We have $\overline{r}_{\mathfrak{m}_{\emptyset}} \cong \overline{r}$, and \item There exists a form $\eta \in
H^0(X,\omega(a,2)_{K/\mathcal{O}})[\mathfrak{m}_{\emptyset}]$ such that
\begin{align*}
T_{p,1} (\eta) &= (\alpha+\beta)\eta \\
Q_{p,2} (\eta) &= (\alpha\beta)\eta.
\end{align*} \end{enumerate} \end{df}
We now make the following assumption:
\begin{assumption}[Residual Modularity] We assume:
\label{assumption:hecke-galois-rep-low} \begin{enumerate} \item\label{ass:katz-mod} $\overline{r}$ is Katz modular of weight $\sigma$ with associated maximal ideal
$\mathfrak{m}_{\emptyset}$ and eigenform $\eta$,
\item\label{ass:TW-at-Q-low} For each $x \in Q$, $x \equiv 1 \mod p$ and $\overline{r}|G_x$ is a
direct sum of four pairwise distinct characters with Frobenius
eigenvalues $\alpha_x, \beta_x, \gamma_x, \delta_x$. We assume the
eigenvalues have been labeled so that the plane
$\lambda(\alpha_x)\oplus \lambda(\beta_x)$ is isotropic, and hence $\alpha_x\delta_x = \beta_x\gamma_x$. \end{enumerate} \end{assumption} We let $\mathfrak{m}$ be any maximal ideal of $\mathbf{T}_{\sigma}(Q)$ containing $\mathfrak{m}_{\emptyset}$.
\label{section:ordinaryprojection} Let $e_{\alpha,\beta}$ be the idempotent \[ \varinjlim_{n}((T_{p,1}-\tilde\alpha-\tilde\beta)(Q_{p,2}-\tilde\alpha\tilde\beta))^{n!},\] where $\tilde{\alpha}$ and $\tilde{\beta}$ are lifts of $\alpha$ and $\beta$ to $\mathcal{O}$, and define: \[ H^0(X_1(Q),\omega(a,2)(-\infty)_{K/\mathcal{O}})^{\alpha,\beta} = e_{\alpha,\beta} H^0(X_1(Q),\omega(a,2)(-\infty)_{K/\mathcal{O}}).\] The assumption that $\overline{r}$ is Katz modular implies that this space is non-zero after localization at $\mathfrak{m}$. We let $\mathbf{T}_{\sigma}(Q)^{\alpha,\beta}$ denote the image of $\mathbf{T}_{\sigma}(Q)$ in \[ \mathrm{End}_{\mathcal{O}}(H^0(X_1(Q),\omega(a,2)(-\infty)_{K/\mathcal{O}})^{\alpha,\beta}).\]
Our main result in this section is the following.
\begin{theorem}
\label{theorem:localglobal} Let $\overline{r}$, $\sigma = (a,2)$ with $p-1> a$ and $\mathfrak{m}$ be as above and suppose that Assumption \ref{assumption:hecke-galois-rep-low} holds. In addition, suppose that: $$(\alpha^2 - 1)(\beta^2 - 1)(\alpha - \beta)(\alpha^2 \beta^2 - 1) \neq 0.$$ Then there exists a representation $$r_{Q}: G_{\mathbf{Q}} \rightarrow \mathrm{GSp}_4(\mathbf{T}_{\sigma}(Q)_{\mathfrak{m}}^{\alpha,\beta})$$ which is a minimal deformation of $\overline{r}$ outside $Q$. \end{theorem}
\begin{proof} As in the proof of Theorem~\ref{theorem:highergood}, it suffices to prove the existence of an appropriate representation $r_m : G_{\mathbf{Q}} \to \mathrm{GSp}_4(\mathbf{T}_{\sigma}(Q)_{\mathfrak{m}}^{\alpha,\beta}/I_{\sigma,m})$ for each $m>0$. We thus fix an $m>0$. By Theorem~\ref{theorem:qexp} below, there exists a power $A^s$ of $A$ such that we have injections: \begin{align*}
H^0(X_1(Q),\omega(a,2)(-\infty)_{\mathcal{O}/\varpi^m})^{\alpha,\beta}_{\mathfrak{m}}
&\stackrel{e_{\beta}\circ A^s}{\hookrightarrow}
H^0(X_1(Q),\omega(a+t,2+t)(-\infty)_{\mathcal{O}/\varpi^m})^{\beta}_{\mathfrak{m}} \\
H^0(X_1(Q),\omega(a,2)(-\infty)_{\mathcal{O}/\varpi^m})^{\alpha,\beta}_{\mathfrak{m}}
&\stackrel{e_{\alpha}\circ A^s}{\hookrightarrow}
H^0(X_1(Q),\omega(a+t,2+t)(-\infty)_{\mathcal{O}/\varpi^m})^{\alpha}_{\mathfrak{m}} \end{align*} where $t = rp^{m-1}(p-1)$. These in turn give rise to surjections: \begin{align*} \mathbf{T}_{\mu'}(Q)_{\mathfrak{m}}^\beta &\twoheadrightarrow \mathbf{T}_{\sigma}(Q)_{\mathfrak{m}}^{\alpha,\beta}/I_{\sigma,m} \\ \mathbf{T}_{\mu'}(Q)_{\mathfrak{m}}^\alpha &\twoheadrightarrow \mathbf{T}_{\sigma}(Q)_{\mathfrak{m}}^{\alpha,\beta}/I_{\sigma,m}, \end{align*}
where $\mu' = \sigma + (t,t)$. The first of these surjections together
with Theorem~\ref{theorem:highergood} implies the existence of a
representation $r'_m$ satisfying all of the required properties,
except for conditions~\eqref{det} and~\eqref{at-p} of
Definition~\ref{defn:minimal}. However, we deduce from the existence
of both surjections that the representation $r'_m|G_{\mathbf{Q}_p}$ contains two
distinct rank-1 unramified submodules (spanned by basis vectors),
one of which has Frobenius eigenvalue lifting $\alpha$ while the
other has Frobenius eigenvalue lifting $\beta$. By Nakayama's
Lemma, we deduce that $r_m'$ contains an unramified rank-2 submodule
of the form required by condition \eqref{at-p} of
Definition~\ref{defn:minimal}. In order to obtain a representation
that also satisfies condition~\eqref{det} of
Definition~\ref{defn:minimal}, we note that $\nu(r'_m) =
\chi\epsilon^{-(a-1)}\chi_Q$ where $\chi_Q$ is a finite order
character of $p$-power order which is unramified outside
$Q$. Since $p$ is odd, we can find a square root of $\chi_Q$ and
twist $r'_m$ by the inverse of this square root. The resulting
representation $r_m$ now satisfies all required properties. \end{proof}
\section{Properties of cohomology groups} \label{sec:prop-cohom}
As in Section~\ref{sec:cohom}, let $S$ and $Q$ be finite sets of primes of $\mathbf{Q}$ which are disjoint and do not contain $p$. We allow the possibility that $Q=\emptyset$. We let $K$ and $K_{i}(Q)$ be open compact subgroups of $\mathrm{GSp}_4(\mathbb{A}^{\infty})$ as in Section~\ref{sec:cohom}, and we let $X = X_K$ and $X_i(Q) = X_{K_i(Q)}$ be the corresponding Siegel threefolds. The goal of this section is to prove Theorems \ref{thm:no-newforms} and \ref{thm:balanced} below.
\subsection{Taylor--Wiles primes} \label{sec:prop-cohom-groups}
Fix $\mu = (a,b) \in X^*(T)_M^+$ with $a\geq b \geq 2$. Let $\mathfrak{m}_{\emptyset}$ be a non-Eisenstein maximal ideal of~$\mathbf{T}_{\mu}$. The ideal $\mathfrak{m}_{\emptyset}$ gives rise to ideals of $\T^{\mathrm{an}}_{\mu}(Q)$ and $\mathbf{T}_{\mu}(Q)$ which we also denote by $\mathfrak{m}_{\emptyset}$ (see Section~\ref{sec:gal-rep-cohom}). We will need the following assumption (c.f.\ Assumptions~\ref{assumption:hecke-galois-rep-regular} and \ref{assumption:hecke-galois-rep-low}):
\begin{assumption}
\label{assumption:tw-assump}
For each $x \in Q$, we have $x \equiv 1 \mod p$, and $\overline{r}_{\mathfrak{m}_{\emptyset}}|G_x$ is a
direct sum of four pairwise distinct characters with Frobenius
eigenvalues $\alpha_x, \beta_x, \gamma_x, \delta_x$. We assume the
eigenvalues have been labeled so that the plane
$\lambda(\alpha_x)\oplus \lambda(\beta_x)$ is isotropic, and hence $\alpha_x\delta_x = \beta_x\gamma_x$. \end{assumption}
For $x \in Q$, we let $\alpha_x', \beta_x', \gamma_x', \delta_x' \in \mathcal{O}^\times$ be elements lifting $\alpha_x$, $\beta_x$, $\gamma_x, \delta_x \in k^\times$. The point of the above assumption is to rule out the possibility of newforms at level $K_0(Q)$:
\begin{theorem}
\label{thm:no-newforms} Let $\mu$ and $\mathfrak{m}_{\emptyset}$ be as above, and suppose that Assumption~\ref{assumption:tw-assump} holds. Let $\mathfrak{m}$ denote the ideal of $\mathbf{T}_\mu(Q)$ containing $\mathfrak{m}_{\emptyset}$ together with the elements $x U_{x,2} - \alpha'_x\beta'_x$ and $U_{x,1}-\alpha'_x -\beta'_x$ for each $x\in Q$. Then $\mathfrak{m}$ is maximal and there is an isomorphism \[ \mathrm{pr}_Q \circ i: H^0(X,\omega(a,b)(-\infty)_{K/\mathcal{O}})_{\mathfrak{m}_{\emptyset}} \stackrel{\sim}{\longrightarrow} H^0(X_0(Q),\omega(a,b)(-\infty)_{K/\mathcal{O}})_{\mathfrak{m}} \] which is equivariant for the operators $T_{x,i}$ for each $x\not \in S\cup Q \cup \{p\}$ as well as for the operators $T_{p,1}$ and $Q_{p,2}$.
Here $i$ is the natural inclusion and $\mathrm{pr}_Q$ is defined as follows. For $x\in Q$, let $R_x$ denote the Hecke operator \[ R_x = (xU_{x,2} - \alpha'_x\gamma'_x)(xU_{x,2} - \beta_x'\delta_x')(xU_{x,2} - \gamma'_x \delta'_x)\in \mathbf{T}_{\mu}(Q) \] and let $\mathrm{pr}_x$ denote the idempotent \[ \mathrm{pr}_x = \lim_{n \rightarrow \infty} R_x^{n!}. \] Then the $\mathrm{pr}_x$'s commute with one another and $\mathrm{pr}_Q$ denotes their product. \end{theorem}
For compactness, we will make use of the alternative notation $\mathcal{W}_{\mu} = \omega(a,b)$, and $\mathcal{W}_{\mu}^{\mathrm{sub}} = \omega(a,b)(-\infty)$. In sufficiently high weight, Theorem~\ref{thm:no-newforms} is due to Genestier and Tilouine:
\begin{theorem}
\label{thm:genestier-tilouine}
Suppose $\mu = (a,b)$ is such that $H^i(X, \mathcal{W}^{\mathrm{sub}}_{\mu,k})$ and
$H^i(X_0(Q), \mathcal{W}^{\mathrm{sub}}_{\mu,k})$ are 0 for all $i>0$. Then the map \[ \mathrm{pr}_Q \circ i : H^0(X,\mathcal{W}^{\mathrm{sub}}_{\mu,K/\mathcal{O}})_{\mathfrak{m}_{\emptyset}} \stackrel{\sim}{\longrightarrow} H^0(X_0(Q),\mathcal{W}^{\mathrm{sub}}_{\mu,K/\mathcal{O}})_{\mathfrak{m}} \] is an isomorphism. An explicit inverse is given by the composition \[ H^0(X_0(Q),\mathcal{W}^{\mathrm{sub}}_{\mu,K/\mathcal{O}})_{\mathfrak{m}} \hookrightarrow H^0(X_0(Q),\mathcal{W}^{\mathrm{sub}}_{\mu,K/\mathcal{O}})_{\mathfrak{m}_{\emptyset}} \stackrel{d_Q^{-1}\mathrm{tr}}{\to} H^0(X,\mathcal{W}^{\mathrm{sub}}_{\mu,K/\mathcal{O}})_{\mathfrak{m}_{\emptyset}}\] where $d_Q = \prod_{x\in Q}[\mathrm{GSp}_4(\mathbf{Z}_x):\Pi(x)]$ (which is prime to $p$) and $\mathrm{tr}$ is the trace map associated to $X_0(Q) \to X$. \end{theorem}
\begin{proof} By the assumption of cohomology vanishing, it suffices to prove both statements with $K/\mathcal{O}$ replaced by $K$. Indeed, if the map over $K$ is surjective, then so too is the map over $K/\mathcal{O}$. Furthermore, if
$d_Q^{-1}\mathrm{tr}$ is an inverse over $K$, then the fact that it is
defined over $\mathcal{O}$ implies immediately that it also gives an inverse over
$K/\mathcal{O}$. The proof of the corresponding result over $K$ follows exactly as in the proof of \cite[Proposition 11.1.2]{TG}. \end{proof}
Using this result and the Hasse invariant $h \in H^0(X, \omega^{p-1}_k)$, we can now establish Theorem~\ref{thm:no-newforms} at the level of $\varpi$-torsion. (Recall that cohomology in degree 0 over $k$ can be identified with $\varpi$-torsion in degree 0 cohomology over $K/\mathcal{O}$.) Note that, for any weight $\mu$, the cohomology vanishing assumption of the previous theorem holds in weight $\mu + (t,t)$ as long as $t \geq N(\mu)$ (where $N(\mu)$ is defined in Definition~\ref{df:suff-large-wt}).
\begin{lemma}
\label{lem:cartesian-hasse} Let $\mu = (a,b)$ with $a \geq b \geq 2$. Choose an integer
$t$ such that $(p-1)t \geq N(\mu)$. Let $\mu' = (a',b') = (a+t(p-1),b+t(p-1))$. Then the following diagrams are co-cartesian: \[ \begin{tikzpicture}
\matrix (m) [matrix of math nodes,row sep=3em,column sep=4em,minimum width=2em]
{
H^0(X_0(Q),\mathcal{W}^{\mathrm{sub}}_{\mu,k})_{\mathfrak{m}} & H^0(X_0(Q),\mathcal{W}^{\mathrm{sub}}_{\mu',k})_{\mathfrak{m}} \\
H^0(X,\mathcal{W}^{\mathrm{sub}}_{\mu,k})_{\mathfrak{m}_{\emptyset}} & H^0(X,\mathcal{W}^{\mathrm{sub}}_{\mu',k})_{\mathfrak{m}_{\emptyset}} \\};
\path[-stealth]
(m-2-1) edge node [left] {$\mathrm{pr}_Q\circ i$} (m-1-1)
edge node [above] {$h^t$} (m-2-2)
(m-1-1) edge node [above] {$h^t$} (m-1-2)
(m-2-2) edge node [left] {$\mathrm{pr}_Q\circ i$} node [right] {$\cong$} (m-1-2);
\end{tikzpicture} \] \[ \begin{tikzpicture}
\matrix (m) [matrix of math nodes,row sep=3em,column sep=4em,minimum width=2em]
{
H^0(X_0(Q),\mathcal{W}^{\mathrm{sub}}_{\mu,k})_{\mathfrak{m}} & H^0(X_0(Q),\mathcal{W}^{\mathrm{sub}}_{\mu',k})_{\mathfrak{m}} \\
H^0(X,\mathcal{W}^{\mathrm{sub}}_{\mu,k})_{\mathfrak{m}_{\emptyset}} & H^0(X,\mathcal{W}^{\mathrm{sub}}_{\mu',k})_{\mathfrak{m}_{\emptyset}} \\};
\path[-stealth]
(m-1-1) edge node [left] {$d_Q^{-1}\mathrm{tr}$} (m-2-1)
edge node [above] {$h^t$} (m-1-2)
(m-2-1) edge node [above] {$h^t$} (m-2-2)
(m-1-2) edge node [left] {$d_Q^{-1}\mathrm{tr}$} node [right] {$\cong$} (m-2-2);
\end{tikzpicture} \] In particular, the left hand vertical maps are mutually inverse isomorphisms. \end{lemma}
\begin{proof}
Note that the right hand vertical maps are mutually inverse isomorphisms by
Theorem~\ref{thm:genestier-tilouine} and the choice of $t$. The diagrams
are commutative because $h$ commutes with all Hecke operators at the
primes in $Q$ (Lemma~\ref{lem:hasse-mod-p}). Now, let $f \in H^0(X,\mathcal{W}^{\mathrm{sub}}_{\mu',k})_{\mathfrak{m}_{\emptyset}}$ and
let $F = \mathrm{pr}_Q(f) \in H^0(X_0(Q),\mathcal{W}^{\mathrm{sub}}_{\mu',k})_{\mathfrak{m}}$. Note
that $f$ can be recovered from $F$ via the formula
$f = d_Q^{-1}\mathrm{tr}(F)$. We need to show that $f$ is divisible
by $h^t$ if and only if $F$ is divisible by $h^t$.
But this follows immediately from the commutativity of the diagrams
above: if~$f = h^t g$, then~$F = h^t \mathrm{pr}_Q(g)$,
and if~$F = h^t g$, then~$f = h^t d_Q^{-1}\mathrm{tr}(g)$. (Note
that since~$X_0(Q)$ and~$X$ are smooth (and in particular irreducible) over~$k$,
multiplication by~$h$ is injective on~$H^0$.) \end{proof}
We will need the analogous result for forms on the non-ordinary locus: let $S$ (resp.\ $S_0(Q)$) denote the non-ordinary locus of $X_k$ (resp.\ $X_0(Q)_k$).
\begin{lemma}
\label{lem:no-new-forms-non-ord}
Let $\mu = (a,b)$ with $a \geq b \geq 2$. Then the map \[ \mathrm{pr}_Q \circ i : H^0(S,\mathcal{W}^{\mathrm{sub}}_{\mu,k})_{\mathfrak{m}_{\emptyset}} \stackrel{\sim}{\longrightarrow} H^0(S_0(Q),\mathcal{W}^{\mathrm{sub}}_{\mu,k})_{\mathfrak{m}} \] is an isomorphism with inverse $d_Q^{-1}\mathrm{tr}$. \end{lemma}
\begin{proof}
We first show that the result is true in sufficiently high
weight. More precisely: let $t \geq N(\mu)+1$. We let
$\mu' = (a+(t-1)(p-1),b+(t-1)(p-1))$ and
$\mu'' = (a+t(p-1),b+t(p-1))$.
We have a
commutative diagram: \[\begin{tikzpicture} \tikzstyle{every node}=[font=\tiny]
\matrix (m) [matrix of math nodes,row sep=3em,column sep=4em,minimum width=2em]
{
H^0(X_0(Q),\mathcal{W}^{\mathrm{sub}}_{\mu',k})_{\mathfrak{m}} & H^0(X_0(Q),\mathcal{W}^{\mathrm{sub}}_{\mu'',k})_{\mathfrak{m}}
& H^0(S_0(Q),\mathcal{W}^{\mathrm{sub}}_{\mu'',k})_{\mathfrak{m}} & 0 \\
H^0(X,\mathcal{W}^{\mathrm{sub}}_{\mu',k})_{\mathfrak{m}_{\emptyset}} & H^0(X,\mathcal{W}^{\mathrm{sub}}_{\mu'',k})_{\mathfrak{m}_{\emptyset}} &
H^0(S,\mathcal{W}^{\mathrm{sub}}_{\mu'',k})_{\mathfrak{m}_{\emptyset}} & 0\\ };
\path[-stealth]
(m-2-1) edge node [left] {$\mathrm{pr}_Q$} node [right] {$\cong$} (m-1-1)
edge node [above] {$h$} (m-2-2)
(m-1-1) edge node [above] {$h$} (m-1-2)
(m-2-2) edge node [left] {$\mathrm{pr}_Q$} node [right] {$\cong$} (m-1-2)
(m-2-3) edge node [left] {$\mathrm{pr}_Q$} (m-1-3)
(m-1-2) edge (m-1-3)
(m-1-3) edge (m-1-4)
(m-2-2) edge (m-2-3)
(m-2-3) edge (m-2-4)
;
\end{tikzpicture}\] The choice of $t$ guarantees that the rows are short exact sequences. From the previous lemma, we deduce that the right hand vertical map is an isomorphism with inverse $d_Q^{-1}\mathrm{tr}$.
Now we imitate the proof of the previous lemma to deduce the result in smaller weights. For this we use the existence of the Hasse invariant \[ \tilde{h} \in H^0(S, \omega(p^2-1,p^2-1)_k). \] Such a form was constructed in unpublished work of the second author with Goldring, but is also constructed in greater generality in \cite{Boxer} and \cite{Wushi}. In \cite[Theorem~B.2]{Boxer} (see also~\cite[Theorem~6.2.3]{Boxer}), it is shown that $\tilde{h}$ extends to the boundary (by the normality of the~$p$-rank 1 locus), and that multiplication by $\tilde{h}$ is Hecke equivariant away from $p$ (see \cite[Theorem~4.5.4(3)]{Boxer}).
(It is also true, but not relevant here, that~$\tilde{h}$ vanishes on the 1-dimensional Ekedahl--Oort stratum of $S$ to order exactly~$2$: see the references in the proof of Theorem~\ref{theorem:boxer} below for more discussion on this point.)
We choose an integer $s$ such that $ t:= s(p+1) \geq N(\mu)+1$. Let $\mu'' = \mu + s(p^2 -1, p^2 -1) = \mu + t(p-1,p-1)$. Then we have a commutative diagram: \[ \begin{tikzpicture}
\matrix (m) [matrix of math nodes,row sep=3em,column sep=4em,minimum width=2em]
{
H^0(S_0(Q),\mathcal{W}^{\mathrm{sub}}_{\mu,k})_{\mathfrak{m}} & H^0(S_0(Q),\mathcal{W}^{\mathrm{sub}}_{\mu'',k})_{\mathfrak{m}} \\
H^0(S,\mathcal{W}^{\mathrm{sub}}_{\mu,k})_{\mathfrak{m}_{\emptyset}} & H^0(S,\mathcal{W}^{\mathrm{sub}}_{\mu'',k})_{\mathfrak{m}_{\emptyset}} \\};
\path[-stealth]
(m-2-1) edge node [left] {$\mathrm{pr}_Q\circ i$} (m-1-1)
edge node [above] {$\tilde{h}^s$} (m-2-2)
(m-1-1) edge node [above] {$\tilde{h}^s$} (m-1-2)
(m-2-2) edge node [left] {$\mathrm{pr}_Q\circ i$} node [right] {$\cong$} (m-1-2);
\end{tikzpicture} \] The right hand vertical map is an isomorphism with inverse $d_Q^{-1}\mathrm{tr}$ by the first paragraph. The lemma now follows by the same argument as the previous lemma. \end{proof}
We will also need the analogous result for first degree cohomology over $k$:
\begin{lemma}
\label{lem:no-new-forms-H1-mod-p} Suppose $\mu = (a,b)$ where $a \geq b \geq 2$. Then the map \[ \mathrm{pr}_Q \circ i : H^1(X,\mathcal{W}^{\mathrm{sub}}_{\mu,k})_{\mathfrak{m}_{\emptyset}} \stackrel{\sim}{\longrightarrow} H^1(X_0(Q),\mathcal{W}^{\mathrm{sub}}_{\mu,k})_{\mathfrak{m}} \] is an isomorphism with inverse $d_Q^{-1}\mathrm{tr}$. \end{lemma}
\begin{proof}
If $N(\mu) = 0$, then both sides of the map are zero, so we may assume
that $N(\mu)>0$. Let $t \geq N(\mu)$, and let
$\mu' = (a+(t-1)(p-1),b+(t-1)(p-1))$ and
$\mu'' = (a+t(p-1),b+t(p-1))$. Consider the
diagram with exact rows: \[\begin{tikzpicture} \tikzstyle{every node}=[font=\tiny]
\matrix (m) [matrix of math nodes,row sep=3em,column sep=4em,minimum width=2em]
{
H^0(X_0(Q),\mathcal{W}^{\mathrm{sub}}_{\mu',k})_{\mathfrak{m}} & H^0(X_0(Q),\mathcal{W}^{\mathrm{sub}}_{\mu'',k})_{\mathfrak{m}}
& H^0(S_0(Q),\mathcal{W}^{\mathrm{sub}}_{\mu'',k})_{\mathfrak{m}} & H^1(X_0(Q),\mathcal{W}^{\mathrm{sub}}_{\mu',k})_{\mathfrak{m}} & 0 \\
H^0(X,\mathcal{W}^{\mathrm{sub}}_{\mu',k})_{\mathfrak{m}_{\emptyset}} & H^0(X,\mathcal{W}^{\mathrm{sub}}_{\mu'',k})_{\mathfrak{m}_{\emptyset}} &
H^0(S,\mathcal{W}^{\mathrm{sub}}_{\mu'',k})_{\mathfrak{m}_{\emptyset}} & H^1(X,\mathcal{W}^{\mathrm{sub}}_{\mu',k})_{\mathfrak{m}_{\emptyset}} & 0\\ };
\path[-stealth]
(m-2-1) edge node [left] {$\mathrm{pr}_Q$} node [right] {$\cong$} (m-1-1)
edge node [above] {$h$} (m-2-2)
(m-1-1) edge node [above] {$h$} (m-1-2)
(m-2-2) edge node [left] {$\mathrm{pr}_Q$} node [right] {$\cong$} (m-1-2)
(m-2-3) edge node [left] {$\mathrm{pr}_Q$} node [right] {$\cong$} (m-1-3)
(m-2-4) edge node [left] {$\mathrm{pr}_Q$} (m-1-4)
(m-1-2) edge (m-1-3)
(m-1-3) edge (m-1-4)
(m-1-4) edge (m-1-5)
(m-2-2) edge (m-2-3)
(m-2-3) edge (m-2-4)
(m-2-4) edge (m-2-5)
;
\end{tikzpicture}\] The first three vertical maps are isomorphisms with inverse $d_Q^{-1}\mathrm{tr}$ by the previous two lemmas. We deduce that the rightmost vertical map above is an isomorphism with inverse $d_Q^{-1}\mathrm{tr}$. This proves the lemma in weight $\mu'$. The general case then follows by a similar argument using a reverse induction on $t$. \end{proof}
We are finally in a position to prove Theorem~\ref{thm:no-newforms} in the general case.
\begin{proof}[Proof of Theorem~\ref{thm:no-newforms}] For each $n\geq 1$, let $\mathcal{O}_n := \mathcal{O}/\varpi^n$. We have a commutative diagram: \[\begin{tikzpicture} \tikzstyle{every node}=[font=\tiny]
\matrix (m) [matrix of math nodes,row sep=3em,column sep=4em,minimum width=2em]
{
H^0(X_0(Q),\mathcal{W}_{\mu,k}^{\mathrm{sub}})_{\mathfrak{m}} & H^0(X_0(Q),\mathcal{W}_{\mu,\mathcal{O}_n}^{\mathrm{sub}})_{\mathfrak{m}}
& H^0(X_0(Q),\mathcal{W}_{\mu,\mathcal{O}_{n-1}}^{\mathrm{sub}})_{\mathfrak{m}} & H^1(X_0(Q),\mathcal{W}_{\mu,k}^{\mathrm{sub}})_{\mathfrak{m}} \\
H^0(X,\mathcal{W}_{\mu,k}^{\mathrm{sub}})_{\mathfrak{m}_{\emptyset}} & H^0(X,\mathcal{W}_{\mu,\mathcal{O}_n}^{\mathrm{sub}})_{\mathfrak{m}_{\emptyset}} &
H^0(X,\mathcal{W}_{\mu,\mathcal{O}_{n-1}}^{\mathrm{sub}})_{\mathfrak{m}_{\emptyset}}& H^1(X,\mathcal{W}_{\mu,k}^{\mathrm{sub}})_{\mathfrak{m}_{\emptyset}}\\ };
\path[-stealth]
(m-2-1) edge node [left] {$\mathrm{pr}_Q$} node [right] {$\cong$} (m-1-1)
(m-2-2) edge node [left] {$\mathrm{pr}_Q$} (m-1-2)
(m-2-3) edge node [left] {$\mathrm{pr}_Q$} (m-1-3)
(m-2-4) edge node [left] {$\mathrm{pr}_Q$} node [right] {$\cong$} (m-1-4)
(m-1-1) edge (m-1-2)
(m-2-1) edge (m-2-2)
(m-1-2) edge node [above] {$\varpi$} (m-1-3)
(m-1-3) edge (m-1-4)
(m-2-2) edge node [above] {$\varpi$} (m-2-3)
(m-2-3) edge (m-2-4)
;
\end{tikzpicture}\] The vertical maps on the ends are isomorphisms by Lemma~\ref{lem:cartesian-hasse} and Lemma~\ref{lem:no-new-forms-H1-mod-p}.
By induction on $n$ and the Five Lemma we deduce that the map \[ \mathrm{pr}_{Q}\circ i : H^0(X,\mathcal{W}_{\mu,\mathcal{O}_n}^{\mathrm{sub}})_{\mathfrak{m}_{\emptyset}} \to
H^0(X_0(Q),\mathcal{W}_{\mu,\mathcal{O}_n}^{\mathrm{sub}})_{\mathfrak{m}} \] is an isomorphism for all $n$. This shows that the map of Theorem~\ref{thm:no-newforms} is an isomorphism after passing to $\varpi^n$-torsion, for any $n$. The result follows. \end{proof}
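The final passage from $\varpi^n$-torsion to the statement over $K/\mathcal{O}$ can be displayed explicitly (a sketch, using that $K/\mathcal{O} = \varinjlim_n \mathcal{O}_n$ and that $H^0$ commutes with this direct limit of sheaves):

```latex
% Passing to the limit over n in the torsion-level isomorphisms:
\[
H^0(X,\mathcal{W}_{\mu,K/\mathcal{O}}^{\mathrm{sub}})_{\mathfrak{m}_{\emptyset}}
= \varinjlim_n H^0(X,\mathcal{W}_{\mu,\mathcal{O}_n}^{\mathrm{sub}})_{\mathfrak{m}_{\emptyset}}
\stackrel{\sim}{\longrightarrow}
\varinjlim_n H^0(X_0(Q),\mathcal{W}_{\mu,\mathcal{O}_n}^{\mathrm{sub}})_{\mathfrak{m}}
= H^0(X_0(Q),\mathcal{W}_{\mu,K/\mathcal{O}}^{\mathrm{sub}})_{\mathfrak{m}}.
\]
```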
\subsection{The balanced property} \label{sec:balanced-property}
In this section we assume that $\mu = (a,2)$ is a limit of discrete series weight, where~$p > a - 2$. Let $\Delta$ be a quotient of $\Delta_Q:=\prod_{x\in Q}(\mathbf{Z}/x)^\times$ and let $X_{\Delta}(Q)\to X_0(Q)$ denote the corresponding sub-cover of $X_1(Q)\to X_0(Q)$. If $\mathcal{L}$ is a vector bundle on $X_\Delta(Q)$, we
define \[ H_i(X_\Delta(Q),\mathcal L):= H^i(X_\Delta(Q),(\omega^3 \otimes\mathcal{L}^{{\vee}}(-\infty))_{K/\mathcal{O}})^{\vee} \] for all $i$. Note that $\omega^3(-\infty)$ is the dualizing sheaf on $X_{\Delta}(Q)$.
We now take $\mathcal{L} = \omega(1,3-a)$, so that $\omega^3 \otimes \mathcal{L}^{\vee}(-\infty) \cong \omega(a,2)(-\infty)$. Here we use our bound~$p > a - 2$ to deduce that there is an equality~$(\mathrm{Sym}^{a-2})^{\vee} \simeq \mathrm{Sym}^{a-2} \otimes \det^{2-a}$ as~$\mathcal{O}$-modules.
Thus, $\mathbf{T}_{\mu}(Q)$ acts on $H_0(X_{\Delta}(Q),\omega(1,3-a))$. We fix a non-Eisenstein maximal ideal $\mathfrak{m}$ of $\mathbf{T}_{\mu}(Q)$. We will need the following assumption:
\begin{assumption}
\label{ass:cohom-vanishing} The space $H^2(X_{\Delta}(Q), \omega(a,2)(-\infty)_k)_{\mathfrak{m}}$ is trivial. \end{assumption}
There is a slight abuse of notation here in that $\mathbf{T}_{\mu}(Q)$ does not act on $H^2(X_{\Delta}(Q), \omega(a,2)_k)$. The localization at $\mathfrak{m}$ refers to the localization at the corresponding maximal ideal of the polynomial ring over $\mathcal{O}$ generated by the Hecke operators.
\begin{remark}
\label{rmk:lan-suh} We note that if $p \geq a \geq 4$, then the assumption above holds, even before localization at $\mathfrak{m}$, by Theorem~\ref{thm:lan-suh}. \end{remark}
\begin{lemma} \label{lem:cohom-torsion-free}
Suppose Assumption~\ref{ass:cohom-vanishing} holds.
Then
$H_1(X_\Delta(Q),\omega(1,3-a))_{\mathfrak{m}}$
is $p$-torsion free. \end{lemma}
\begin{proof} The claim is equivalent to the $\varpi$-divisibility of $H^1(X_{\Delta}(Q), \omega(a,2)(-\infty)_{K/\mathcal{O}})_{\mathfrak{m}}$.
Since $X_{\Delta}(Q)$ is flat over $\mathcal{O}$, there is an exact sequence $$0 \rightarrow {{\omega(a,2)(-\infty)}}_k \rightarrow {{\omega(a,2)(-\infty)}}_{K/\mathcal{O}} \stackrel{\varpi}{\rightarrow} {{\omega(a,2)(-\infty)}}_{K/\mathcal{O}} \rightarrow 0.$$ Taking cohomology, this reduces to the claim that $H^2(X_\Delta(Q),{{\omega(a,2)(-\infty)}}_{k})_{\mathfrak{m}}$ vanishes. \end{proof}
The following lemma uses only the assumption that $\mathfrak{m}$ is non-Eisenstein: it holds in all weights and at all levels prime to $p$. We state it only in the case we need:
\begin{lemma}
\label{lem:harris-zucker} The map \[ H^i(X_{0}(Q), \omega(a,2)(-\infty))_{\mathfrak{m}} \otimes K \longrightarrow H^i(X_{0}(Q), \omega(a,2))_{\mathfrak{m}} \otimes K \] is an isomorphism for all $i$. \end{lemma}
\begin{proof} Let $\partial X$ denote the boundary of $X_{0}(Q)$. It suffices to show that the boundary cohomology \[ H^i(\partial X, \omega(a,2))_{\mathfrak{m}} \otimes K \] vanishes for all $i$. However, over $\mathbf{C}$ the cohomology of the boundary is computed by the nerve spectral sequence: \[ E_1^{r,s} = \bigoplus_{r(R) = r+1} E_1^{r,s}(R) \implies H^{r+s}(\partial X, \omega(a,2)_{\mathbf{C}}).\] See \cite[(3.2.4)]{HZIII}. Here $R$ is a $\mathbf{Q}$-parabolic of $G$ and $r(R)$ is its parabolic rank. By~\cite[Corollary 3.2.9]{HZIII}, and freely using the notation of that paper, the space $E_1^{r,s}(R)$ is the space of $K$-invariants in: \[ \mathrm{Ind}^{G(\mathbb{A}^{\infty})}_{R(\mathbb{A}^{\infty})}\bigoplus_{i\geq 0, w \in
W^{R,p}}I^{R}\left( \widetilde{H}^{s-i-\ell(w)}(X(G_{h, R}), \mathcal{V}_{\lambda(h,w)}) \otimes H^i(X(G_{\ell, R}), \widetilde{\mathbf{V}}_{\lambda(\ell,w)})\right).\] If $R = \Pi$ is the Klingen parabolic, then $G_{h, R} = \mathrm{GSp}_2 = \mathrm{GL}_2$ and $G_{\ell, R} = \mathrm{GL}_1$. If $R$ is the Siegel parabolic or the Borel subgroup, then $G_{h,R}$ is trivial and $G_{\ell, R} = L_R$ is the Levi component of $R$ (and hence is either $\mathrm{GL}_2\times \mathrm{GL}_1$ or $\mathrm{GL}_1^{3}$). In all cases, $\mathcal{V}_{\lambda(h,w)}$ is the canonical extension of an automorphic vector bundle on the Shimura variety $X(G_h)$ and $\widetilde{\mathbf{V}}_{\lambda(\ell, w)}$ is a local system on $X(G_{\ell})$ associated to an algebraic representation of $G_{\ell}$. See \cite[(3.6.1)]{HZI} for the highest weight formulas. The functor $I_R$ is an intermediate induction defined in \cite[(3.2.8)]{HZIII}.
Since each of the groups $G_h$ and $G_{\ell}$ is a product of copies of $\mathrm{GL}_2$ and $\mathrm{GL}_1$, we see that to any Hecke eigenclass in any $H^i(\partial X, \omega(a,2)_{\mathbf{C}})$, we can associate a compatible system of reducible $\mathrm{GSp}_4$-valued $l$-adic representations of $G_{\mathbf{Q}}$. Since the ideal $\mathfrak{m}$ is non-Eisenstein, it follows that $H^i(\partial X, \omega(a,2))_{\mathfrak{m}}\otimes K = 0$, as required. \end{proof}
We come to the main result of this section:
\begin{theorem}
\label{thm:balanced}
Let $\Delta$ be a quotient of $\Delta_Q$ which is of $p$-power
order. As above, let $\mu = (a,2)$ with~$p - 2 > a$ and let $\mathfrak{m}$ be a non-Eisenstein
ideal of $\mathbf{T}_{\mu}(Q)$. Suppose that
Assumption~\ref{ass:cohom-vanishing} holds.
Then the $\mathcal{O}[\Delta]$-module
\[ H_0(X_{\Delta}(Q), \omega(1,3-a))_{\mathfrak{m}} = H^0(X_{\Delta}(Q), \omega(a,2)(-\infty)_{K/\mathcal{O}})^{\vee}_\mathfrak{m} \] is balanced in the sense
of Definition \ref{defn:balanced}. \end{theorem}
\begin{proof} The argument proceeds exactly as in the proof of Prop.~3.8 of~\cite{CG}. If we let $M = H_0(X_{\Delta}(Q), \omega(1,3-a))_{\mathfrak{m}}$ and $S = \mathcal{O}[\Delta]$, then the defect $d_{S}(M)$ is given by: \[ d_{S}(M) = r - \dim_k \mathrm{Tor}_1^S(M,\mathcal{O})/\varpi \] where $r$ is the $\mathcal{O}$-rank of $M_{\Delta}$. Thus we need to show that $r \ge \dim_k \mathrm{Tor}_1^S(M,\mathcal{O})/\varpi$.
Let $\mathcal{L} = \omega(1,3-a)$. Applying Pontryagin duality to the Hochschild--Serre spectral sequence, we get a spectral sequence: \[ \mathrm{Tor}_i^S\left(H_j(X_{\Delta}(Q), \mathcal{L})_{\mathfrak{m}}, \mathcal{O}\right) \implies H_{i+j}(X_{0}(Q),\mathcal{L})_{\mathfrak{m}}. \] This spectral sequence tells us that: \begin{enumerate} \item $M_{\Delta} \stackrel{\sim}{\rightarrow} H_0(X_0(Q), \mathcal{L})_{\mathfrak{m}}$, and \item we have an exact sequence \[ \left(H_1(X_{\Delta}(Q), \mathcal{L})_{\mathfrak{m}}\right)_{\Delta} \longrightarrow H_1(X_0(Q), \mathcal{L})_{\mathfrak{m}} \longrightarrow \mathrm{Tor}_1^S(M, \mathcal{O}) \longrightarrow 0.\] \end{enumerate} To prove that $d_S(M) \geq 0$, it follows from the second point that it is sufficient to show that $H_1(X_0(Q),\mathcal{L})_{\mathfrak{m}}$ is free of rank at most $r$ over $\mathcal{O}$. Lemma~\ref{lem:cohom-torsion-free} tells us that this space is $p$-torsion free. Passing to characteristic 0 and using the first point, we are therefore reduced to establishing the inequality: $$\dim_K H_1(X_0(Q),\mathcal{L})_\mathfrak{m} \otimes K \le \dim_K H_0(X_0(Q),\mathcal{L})_\mathfrak{m}\otimes K.$$ In other words, we need to show: $$\dim_K H^1(X_0(Q),\omega(a,2)(-\infty))_\mathfrak{m} \otimes K \le \dim_K H^0(X_0(Q),\omega(a,2)(-\infty))_\mathfrak{m}\otimes K.$$ By Lemma~\ref{lem:harris-zucker}, we are reduced to showing that \[ \dim_K \overline{H}^1(X_0(Q),\omega(a,2))_\mathfrak{m} \otimes K \le \dim_K \overline{H}^0(X_0(Q),\omega(a,2))_\mathfrak{m}\otimes K \] where $\overline{H}^i$ denotes the interior cohomology (the image of $H^i(\omega(a,b)(-\infty))$ in $H^i(\omega(a,b))$).
As recalled in Theorem~\ref{thm:coherent-cohom-lie-alg-cohom}, the interior cohomology can be computed in terms of square integrable automorphic forms on $G$. By Remark~\ref{rem:normalization}, the cohomology of $\omega(a,b)$ agrees with that of $\mathcal{W}_{\mu}\cong \mathcal{V}_{\sigma}$ where $\mu = (a,b;3-a-b) = (a,2;1-a)$ and $\sigma = (-2,-a;4-a)$. Theorem~\ref{thm:coherent-cohom-lie-alg-cohom} then implies that: \[ \overline{H}^i(X_{0}(Q), \omega(a,2)_{\mathbf{C}}) \subset \bigoplus_{\pi
\in \mathcal{A}_{(2)}(G)} \left(\left(\pi^{\infty}\right)^{K_0(Q)} \otimes
H^i(\Lie P^{-}, K^h; \pi_{\infty}\otimes V_{\sigma}) \right)^{\oplus
m_{(2)}(\pi)}\] where $m_{(2)}(\pi)$ denotes the multiplicity of $\pi$ in $\mathcal{A}_{(2)}(G)$. Fix a degree $i \in \{0,1\}$ and let $\pi \in \mathcal{A}_{(2)}(G)$ be such that $\pi$ contributes to $H^i(X_{0}(Q),\omega(a,2))_{\mathfrak{m}}\otimes \mathbf{C}$ under the above inclusion (for some embedding $K \hookrightarrow \mathbf{C}$). Let $\widetilde{\pi}$ denote the transfer of $\pi$ to $\mathrm{GL}_4(\mathbb{A})$ under the Classification Theorem of \cite{arthur-gsp4}. Then, by Remark~\ref{rem:central-chars}, the infinitesimal character of $\widetilde{\pi}_{\infty}$ is $\chi_{(0,0,-(a-1),-(a-1))+3/2(1,1,1,1)}$. Let $\chi_{\pi}$ denote the central character of $\pi$.
The representation $\widetilde{\pi}$ falls into one of 6 classes (a)--(f) given in Section 5 of \cite{arthur-gsp4}. We show now that we can rule out all classes other than class (a). In cases (e) and (f), $\widetilde{\pi}$ is an isobaric sum of idele class characters. In case (d), $\widetilde{\pi}$
is of the form $\lambda|\cdot|^{1/2} \boxplus \lambda |\cdot|^{-1/2} \boxplus \mu$ where $\lambda$ is an idele class character and $\mu$ is a cuspidal automorphic representation of $\mathrm{GL}_2(\mathbb{A})$ such that its central character $\chi_{\mu}$ satisfies $\chi_{\mu} = \lambda^2 = \chi_{\pi}$. Considering the infinitesimal character of $\widetilde{\pi}_{\infty}$, we see that we must have $a = 2$ and $\mu$ must correspond to a classical modular eigenform of weight 2. In case (c), there is a cuspidal automorphic representation $\mu$ of orthogonal type of $\mathrm{GL}_2(\mathbb{A})$
such that $\widetilde{\pi} = \mu|\cdot|^{1/2} \boxplus \mu |\cdot |^{-1/2}$. Being of orthogonal type means that $\mu$ is induced from a quadratic extension of $\mathbf{Q}$. In case (b), $\widetilde{\pi} = \mu_1 \boxplus \mu_2$ where the $\mu_i$ are distinct cuspidal automorphic representations of $\mathrm{GL}_2(\mathbb{A})$ with $\chi_{\mu_1} = \chi_{\mu_2} = \chi_{\pi}$. Considering the infinitesimal character of $\widetilde{\pi}_{\infty}$ and the fact that the $\mu_i$ have the same central character, it follows that the $\mu_i$ are both associated to classical modular eigenforms of weight $a$. Thus, in all cases (b) -- (f), we can associate a compatible family of reducible $l$-adic Galois representations to $\widetilde{\pi}$. This contradicts the fact that $\mathfrak{m}$ is non-Eisenstein.
The only remaining case is case (a) where $\widetilde{\pi}$ is a cuspidal automorphic representation of $\mathrm{GL}_4(\mathbb{A})$ that is $\chi_{\pi}$-self dual. By Clozel's Purity Lemma \cite[Lemme 4.9]{clozel-ann-arb}, $\widetilde{\pi}_{\infty}$ is essentially tempered. (We thank
Olivier Ta\"{\i}bi for pointing this out to us.) It follows that $\pi_{\infty}$ is also essentially tempered, since its $L$-parameter is essentially bounded.
Then by
Theorem~\ref{thm:contrib-to-char-0-cohom}\eqref{Mirkovic}, $\pi_{\infty}$ is a limit of
discrete series representation $\pi(\lambda,C_i)$ where $\lambda = (a-1,0;4-a)$. Furthermore, by a theorem of Wallach \cite[Theorem 2.3]{Mok}, it follows that $\pi$ is cuspidal.
By the first part of Theorem~\ref{thm:coherent-cohom-lie-alg-cohom}, the cuspidal cohomology $\mathcal{H}^i_{\mathrm{cusp},\sigma}$ maps injectively to the interior cohomology: \[ \overline{H}^i(X_{0}(Q), \omega(a,2)_{\mathbf{C}})_{\mathrm{cusp}} \cong \bigoplus_{\pi
\in \mathcal{A}_{0}(G)} \left(\left(\pi^{\infty}\right)^{K_0(Q)} \otimes
H^i(\Lie P^{-}, K^h; \pi_{\infty}\otimes V_{\sigma}) \right)^{\oplus
m_0(\pi)}\] where $m_0(\pi)$ is the multiplicity of $\pi$ in $\mathcal{A}_0(G)$.
Thus, at this point, we can prove that the dimensions \[ \dim_K \overline{H}^j(X_0(Q),\omega(a,2))_\mathfrak{m} \otimes K \] are equal for $j=0,1$ if we can establish: \begin{enumerate} \item The spaces $H^j(\Lie P^{-}, K^h; \pi(\lambda, C_j)\otimes
V_{\sigma})$ have the same dimension for $j = 0, 1$. \item The representation $\pi' = \pi^{\infty}\otimes \pi(\lambda,
C_{1-i})$ also lies in $\mathcal{A}_{(2)}(G)$; \item The multiplicities $m_0(\pi)$, $m_{(2)}(\pi)$, $m_0(\pi')$ and
$m_{(2)}(\pi')$ are all equal. \end{enumerate} The first point follows from \cite[Theorem 3.4]{harris-ann-arb} which says that both spaces are one dimensional. The second point follows from \cite{arthur-gsp4}. Indeed, since $\pi(\lambda, C_i)$ is essentially tempered, the local packet $\Pi_{\psi_{\infty}}$ (where $\psi = \widetilde{\pi} \boxplus 1$, in the notation of \cite{arthur-gsp4}) is in fact an L-packet by \cite[Theorem 2.1]{Mok}. Furthermore, it consists of the pair of representations $\{ \pi(\lambda, C_0), \pi(\lambda, C_1) \}$ (see \cite[\S 3.1]{Mok}). Since the group $\mathcal{S}_{\psi}$ is trivial in Case (a) of \cite{arthur-gsp4}, it then follows from part (ii) of the Classification Theorem that $\pi'$ is also automorphic. Finally, for the third point, the theorem of Wallach quoted above implies that $\pi$ and $\pi'$ are both cuspidal. Part (iii) of the Classification Theorem then implies that each of the multiplicities in point (3) is 1. We have thus shown that $$\dim_K H^1(X_0(Q),\omega(a,2)(-\infty))_\mathfrak{m} \otimes K = \dim_K H^0(X_0(Q),\omega(a,2)(-\infty))_\mathfrak{m}\otimes K,$$ as required. \end{proof}
\section{\texorpdfstring{$q$}{q}-expansions of Siegel modular forms} \label{section:qstuff}
As in Section~\ref{sec:cohom}, let $S$ and $Q$ be finite sets of primes of $\mathbf{Q}$ which are disjoint and do not contain $p$. We allow the possibility that $Q=\emptyset$. We let $K$ and $K_{i}(Q)$ be open compact subgroups of $\mathrm{GSp}_4(\mathbb{A}^{\infty})$ as in Section~\ref{sec:cohom}, and we let $X = X_K$ and $X_i(Q) = X_{K_i(Q)}$ be the corresponding Siegel threefolds, with open subspaces $Y$ and $Y_i(Q)$, all defined over $\mathcal{O}$.
\subsection{\texorpdfstring{$q$}{q}-expansions of Siegel modular forms} \label{section:qexpone}
(For more background and details on the results quoted in this section, see~\S~3.1 of~\cite{TilouineDocumenta}.) Recall that $Y_1(Q)$ has good reduction at $p$. Let $R$ be an $\mathcal{O}$-module (we will exclusively be interested in the case when either $R = \mathcal{O}/\varpi^n$ for some $n$, or when $R = K/\mathcal{O}$). Let $\sigma=(j,k) \in X^*(T)_M^+$ be a weight and associate to $\sigma$ the representation \[ U = \left(\mathrm{Sym}^{j-k}(\mathcal{O}^2) \otimes_{\mathcal{O}} \det(\mathcal{O}^2)^{\otimes k}\right) \otimes_{\mathcal{O}}R \] of $\mathrm{GL}_2$ over $R$. Associated to $\sigma$, we also have the vector bundle $\mathcal{W}_{\sigma} = \omega(j,k)$. There is a $q$-expansion map: $$H^0(Y_1(Q),\omega(j,k)_{R}) \rightarrow R[[q,q',\zeta]][\zeta^{-1}] \otimes_R U.$$
\begin{theorem} The $q$-expansion map is injective. \end{theorem}
\begin{proof} This is a standard fact (see, for example, Prop.~3.2 of~\cite{TilouineDocumenta}). \end{proof}
\subsection{Explicit Formulae} \label{section:explicit}
Let $L$ be the product of the primes in $S$ and $Q$, so that~$X_1(Q)$ has good reduction outside~$L$. Let~$R$ be a~$\mathbf{Z}_p$-module and thus a~$\mathbf{Z}[1/L]$-algebra. Any $F \in H^0(X_1(Q),\mathcal{W}_{\sigma,R})$ has a ``$q$-expansion'': $$F = \sum_{Q \in \mathcal{X}} a(F,Q) q^Q,$$ where $\mathcal{X}$ denotes the $2 \times 2$ positive semi-definite symmetric matrices which take $\mathbf{Z}[1/L]$-integral values on integral vectors, or equivalently, positive semi-definite matrices of the form $$\left( \begin{matrix} m & \frac{1}{2} r \\ \frac{1}{2}r & n \end{matrix} \right), \qquad m,n,r \in \mathbf{Z}[1/L].$$ The set~$\mathcal{X}$ is naturally a subset of~$M_2(\mathbf{Q})$. The group~$\mathrm{GL}_2(\mathbf{Q})$ acts on~$M_2(\mathbf{Q})$ by the following formula: \begin{equation*} M.Q := (\det M)^{-1} M Q M^{T} \end{equation*} where the right hand side is matrix multiplication. We may naturally extend the definition of~$a(F,Q)$ for~$Q \in M_2(\mathbf{Q})$ by setting~$a(F,Q) = 0$ for all~$Q$ not in~$\mathcal{X}$. In any~$q$-expansion, the coefficients~$a(F,Q)$ will also vanish unless the denominators occurring in~$Q$ are bounded by some fixed power of~$L$ which depends only on the level structure. (Since our arguments in this section are all~$p$-adic, there is little harm in imagining that~$L = 1$.) Let~$V = \mathcal{O}^2$ be the standard representation of~$\mathrm{SL}_2(\mathbf{Z})$ over $\mathcal{O}$. The coefficients $a(F,Q)$ are elements of the representation $U$, where, if $\sigma$ has weight $(j,k)$, then $$U = \mathrm{Sym}^{j-k}(V) \otimes R.$$ Let $\rho: \mathrm{SL}_2(\mathbf{Z}) \rightarrow \mathrm{GL}(U)$ denote the corresponding representation. The representation~$\rho$ extends to a homomorphism from~$M_2(\mathbf{Z})$ to~$\mathrm{End}(U)$ over~$R$ which we denote by~$\rho$, where once more~$\rho$ only depends on~$j-k$ and (more relevantly) preserves integrality.
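As a quick sanity check on the normalization $M.Q = (\det M)^{-1} M Q M^{T}$, note that the action preserves the discriminant of the associated quadratic form, and that diagonal matrices of determinant~$p$ rescale the diagonal entries, which is the kind of index shift that enters the explicit Hecke formulas in the next subsections. A minimal sketch in Python (the helper names \texttt{act} and \texttt{disc} are ours, not from the paper):

```python
from fractions import Fraction as Fr

def mul(A, B):
    # 2x2 matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def act(M, Q):
    # the GL_2(Q) action M.Q = det(M)^{-1} * M * Q * M^T on symmetric matrices
    d = Fr(M[0][0] * M[1][1] - M[0][1] * M[1][0])
    MT = [[M[0][0], M[1][0]], [M[0][1], M[1][1]]]
    return [[Fr(x) / d for x in row] for row in mul(mul(M, Q), MT)]

def disc(Q):
    # discriminant r^2 - 4mn of the form [[m, r/2], [r/2, n]]
    return (2 * Q[0][1]) ** 2 - 4 * Q[0][0] * Q[1][1]

p = 5
Q = [[Fr(3), Fr(1, 2)], [Fr(1, 2), Fr(2)]]      # m = 3, r = 1, n = 2

# SL_2(Z) preserves the discriminant, since det(M.Q) = det(Q)
M = [[2, 1], [1, 1]]
assert disc(act(M, Q)) == disc(Q)

# diag(1, p) sends [[m, r/2], [r/2, n]] to [[m/p, r/2], [r/2, p*n]]
assert act([[1, 0], [0, p]], Q) == [[Fr(3, 5), Fr(1, 2)], [Fr(1, 2), Fr(10)]]
```

The second assertion exhibits the rescaling of the diagonal entries by~$p^{\pm 1}$ with the middle coefficient fixed, the shape of the coset-representative shifts appearing below.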
We may write the~$q$-expansion of a form~$F$ as
$$F = \sum_{\substack{n, m \ge 0 \\ r^2 - 4mn \le 0 }} a_F(n,r,m) q^n \zeta^r {q'}^{m}$$ where $a_F(n,r,m) = a(F,Q)$ satisfies, for~$M \in \overline{\Gamma} \subset \mathrm{SL}_2(\mathbf{Z})$, the equality $$a(F,M.Q) = \rho(M) a(F,Q).$$ Here~$\overline{\Gamma}$ is the congruence subgroup of~$\mathrm{SL}_2(\mathbf{Z})$ defined on p.807 of~\cite{TilouineDocumenta}; since we are working at spherical level at~$p$ the group~$\overline{\Gamma}$ has level prime to~$p$. (It will do the reader little harm to pretend
that~$\overline{\Gamma}$ is just~$\mathrm{SL}_2(\mathbf{Z})$.)
\begin{remark} \label{remark:parity}
\emph{We shall assume that either~$j \ge 4$ or~$j = k = 2$. Since we are most interested in representations with similitude character~$\nu$ equal to~$\epsilon^{j+k-3}$, the oddness condition forces the congruence~$j \equiv k \mod 2$, and so if~$j > k \ge 2$ then~$j \ge 4$. In cases (coming from Taylor--Wiles primes) where there is a non-trivial Nebentypus character at the auxiliary primes~$q|Q$, we may twist (at the cost of increasing the level at~$Q$) to force the Nebentypus character to be trivial. The only change this has is to make the~$q$-expansions below less unpleasant --- the addition of a Nebentypus character only introduces a notational difficulty. We note, however, that with non-trivial Nebentypus character the case of weight~$(j,k) = (3,2)$ is possible, but our arguments would not cover this case.} \end{remark}
\subsection{Hecke Operators at~\texorpdfstring{$p$}{p}} Since we will exclusively be interested in Hecke operators at~$p$, we drop the subscript~$p$ from the notation. Similarly, we drop the subscript~$1$, and so~$T_{p,1}$ and~$U_{p,1}$ are denoted~$T$ and~$U$, whereas~$T_{p,2}$ and~$U_{p,2}$ are denoted~$T_2$ and~$U_2$ respectively. One has the following explicit description of the Hecke operator~$T$: \begin{lemma} \label{eightthree} \label{lemma:T} In weight~$\sigma = (j,k)$ there is an identity of formal operators $T = U + p^{k-2} Z + p^{k+j-3} V$, where~$U$, $Z$, and~$V$ preserve formal integral~$q$-expansions, and such that the following identities hold: $$a(UF,Q) = a(F,pQ),$$ $$a(ZF,Q) = \sum_{M \in \mathcal{S}} \rho(M) a(F,M^{-1}.Q).$$ Here $\mathcal{S}$ denotes (any) set of representatives in~$M_2(\mathbf{Z})$ for the left coset decomposition of $$\overline{\Gamma} \left( \begin{matrix} p & 0 \\ 0 & 1 \end{matrix} \right) \overline{\Gamma}.$$ Moreover, $a(F,M^{-1}.Q) = 0$ unless $M^{-1}.Q$ is a $p$-integral binary quadratic form. \end{lemma}
Note that the coset decomposition of~$\overline{\Gamma} \left( \begin{matrix} p & 0 \\ 0 & 1 \end{matrix} \right) \overline{\Gamma}$ for a congruence subgroup~$\overline{\Gamma}$ prime to~$p$ is essentially the same as the coset decomposition of $\mathrm{SL}_2(\mathbf{Z}) \left( \begin{matrix} p & 0 \\ 0 & 1 \end{matrix} \right) \mathrm{SL}_2(\mathbf{Z})$. These formulae are well known. See, for example, Prop.~10.2 of~\cite{geer}. To compare our formula with \emph{ibid}, note that we have normalized the matrices in~$\mathcal{S}$ to be integral of determinant~$p$, and absorbed the action of the determinant into the coefficient (since we are concerned here with issues of~$p$-integrality). We have a similar description of~$T_2$ which can be obtained by a laborious computation (following the arguments of~\S3.2 and~\S3.3 of~\cite{Andrianov}):
\begin{lemma} \label{eightfour} In weight~$\sigma = (j,k)$ there is an identity of formal operators $T_2 = p^{k+j-6} U_2 + p^{k-3} Z_2 + p^{2k+j-6} V_2,$ where~$U_2$, $Z_2$, and $V_2$ preserve formal integral~$q$-expansions, and the following identities hold: $$a(Z_2 F,Q) = \sum_{M \in \mathcal{S}} \rho(M) a(F, M^{-1}.pQ),$$ where~$\mathcal{S}$ is as in the description of~$Z$ in Lemma~\ref{lemma:T}. If~$Q \not\equiv 0 \mod p$, then $$a(U_2 F,Q) = \left(-1 + p \left(\frac{\det(Q)}{p} \right) \right) a(F,Q) = \left( -1 + p \left( \frac{r^2 - 4 m n}{p} \right) \right) a(F,Q).$$ If~$Q \equiv 0 \mod p$, then $$a(U_2 F,Q) = (-1 + p^2 ) a(F,Q).$$ \end{lemma}
For those wanting a more explicit description, note that in weight~$(k,k)$ we have the possibly more familiar identities: $$a(ZF,n,r,m) = a(F,pn,r,m/p) + \sum_{0 \le \alpha < p} a(F,(n + \alpha r + \alpha^2 m)/p, r + 2 m \alpha, p m),$$ $$a(Z_2 F,n,r,m) = a(F,p^2 n,p r,m) + \sum_{0 \le \alpha < p} a(F,(n + \alpha r + \alpha^2 m), p(r + 2 m \alpha), p^2 m).$$ Note also that there is a formal identity~$Z_2 = U Z$.
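The formal identity~$Z_2 = UZ$ can be checked mechanically from these explicit formulas. Below is a minimal sketch in Python (ours; the dictionary encoding of a $q$-expansion and the function names are illustrative), written in parallel weight, where the coefficients are scalars and $\rho$ may be ignored; coefficients at non-integral indices are taken to vanish:

```python
import random

p = 5

def a(F, n, r, m):
    """Coefficient a_F(n, r, m) of q^n zeta^r q'^m, extended by zero."""
    return F.get((n, r, m), 0)

def Z(F, n, r, m):
    # a(ZF, n, r, m) = a(F, pn, r, m/p)
    #                + sum_{0 <= al < p} a(F, (n + al*r + al^2*m)/p, r + 2*m*al, p*m)
    s = a(F, p * n, r, m // p) if m % p == 0 else 0
    for al in range(p):
        num = n + al * r + al * al * m
        if num % p == 0:
            s += a(F, num // p, r + 2 * m * al, p * m)
    return s

def UZ(F, n, r, m):
    # a(UF, Q) = a(F, pQ), i.e. (n, r, m) -> (pn, pr, pm), applied after Z
    return Z(F, p * n, p * r, p * m)

def Z2(F, n, r, m):
    # a(Z_2 F, n, r, m) = a(F, p^2 n, p r, m)
    #                   + sum_{0 <= al < p} a(F, n + al*r + al^2*m, p(r + 2*m*al), p^2 m)
    s = a(F, p * p * n, p * r, m)
    for al in range(p):
        s += a(F, n + al * r + al * al * m, p * (r + 2 * m * al), p * p * m)
    return s

# a random "q-expansion", supported on a small box of indices
random.seed(0)
F = {(n, r, m): random.randrange(100)
     for n in range(50) for r in range(-12, 13) for m in range(50)}

assert all(Z2(F, n, r, m) == UZ(F, n, r, m)
           for n in range(8) for r in range(-3, 4) for m in range(8))
```

The equality holds identically in the indices, independent of the chosen expansion, exactly as the formal identity asserts.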
\begin{df} Let~$X_2$ denote the formal operator on~$q$-expansions such that $$U_2 = -1 + p \cdot X_2.$$ Explicitly, if~$Q \not\equiv 0 \mod p$, then~$a(X_2 F,Q) = a(F,Q)$ times~$(D/p)$, where~$D$ is the determinant of the quadratic form associated to~$Q$, and~$(D/p)$ is the Legendre symbol. If~$Q \equiv 0 \mod p$, then~$a(X_2 F,Q) = p a(F,Q)$. In all cases, we see that~$a(X_2 F,Q) = (D/p) a(F,Q) \mod p$. \end{df}
\begin{lemma}\label{lemma:iszero} Over~$k = \mathcal{O}/\varpi$, we have~$Z_2 X_2 = 0$. \end{lemma}
\begin{proof} Over~$k$, we have~$a(X_2 F,Q) = 0$ whenever~$\det(Q) \equiv 0 \mod p$, while~$a(Z_2 G,Q)$ is a sum of terms of the form~$a(G,R)$ with~$\det(R) \equiv 0 \mod p$. Taking~$G = X_2 F$ gives~$Z_2 X_2 F = 0$. \end{proof}
\begin{df} \emph{A binary quadratic form $Q$ is \emph{$p$-primitive} if it is not of the form $pR$ for a $p$-integral form $R$.} \end{df}
\subsection{Hecke Operators on forms in characteristic~\texorpdfstring{$p$}{p}}
\label{section:hecke} Let~$Q_2 = (p \cdot T_2 + (p + p^3) S) p^{2-k}$.
\begin{lemma} \label{lemma:Gross} There is an action of $T$ and~$Q_2$ on $H^0(X_1(Q),{\omega(j,k)}_{K/\mathcal{O}})$ which commutes with the other Hecke operators and acts on~$q$-expansions via the above formula. \end{lemma}
\begin{proof} The argument is very similar to Prop.~4.1 of~\cite{Gross}. It suffices to prove the result with coefficients in~$\mathcal{O}/\varpi^m$. The natural approach to defining these operators is using correspondences, as for modular curves. There are two issues which arise. The first is that the projection maps from the Siegel modular varieties with appropriate parahoric level structures are not finite over~$X$. The second is that the definition involving correspondences is some power of~$p$ times the actual Hecke operator of interest. A general approach to resolving these questions has been recently found by Pilloni~\cite{Pilloni}, who constructs all the operators used in this paper. More importantly, his method also allows one to give an action of these operators on higher coherent cohomology as well. We use a more pedestrian approach. We can resolve the normalization issue by using the~$q$-expansion principle. The first issue is more subtle. The geometric maps involved are certainly proper; the failure of finiteness is thus a failure of quasi-finiteness. The failure of quasi-finiteness arises from the fact that the kernel of Frobenius of an abelian surface~$A$ could (for example) equal~$\alpha_p \times \alpha_p$, which contains ``too many'' subgroup schemes of type~$\alpha_p$.
On the other hand, this issue does not arise over the ordinary locus, nor over the larger almost ordinary locus consisting of abelian surfaces with~$p$-rank~$\ge 1$, where subgroup schemes such as~$\alpha_p \times \alpha_p$ cannot occur. This suggests resolving the issue by the following ad hoc method: by Hartogs' Lemma, it suffices to construct~$T$ over the global sections of a subvariety~$X' \subset X$ whose complement has codimension~$\ge 2$. In particular, we may replace~$X$ by the moduli space of almost ordinary abelian surfaces, for which the corresponding maps are indeed finite. Implicit in this argument is a verification that the formulas above (in Lemmas~\ref{eightthree} and~\ref{eightfour}) preserve integrality --- for~$Q_2$ this is verified in Lemma~\ref{lemma:verify} below.
\end{proof}
Note that this argument is not sufficient to construct these operators on $$H^1(X_1(Q),\omega(j,k)_{K/\mathcal{O}});$$ however, we have no need to consider the action of Hecke operators at~$p$ on these spaces.
We shall also need to use various properties of theta operators. We begin by recalling their basic properties:
\begin{prop} Let~$p > 3$, let~$j-2 \ge k \ge 2$, and let~$p-2 > j-k$. \begin{enumerate} \item There is a map $$\Theta: H^0(X_1(Q),\omega(k,k)_{\mathcal{O}/\varpi^m}) \rightarrow H^0(X_1(Q),\omega(k+p+1,k+p+1)_{\mathcal{O}/\varpi^m})$$ whose action on~$q$-expansions is given by $$\Theta \sum a_Q q^Q = \sum \det(Q) a_Q q^Q.$$ \item There is a map $$\theta_1: H^0(X_1(Q),\omega(j,k)_{\mathcal{O}/\varpi^m}) \rightarrow H^0(X_1(Q),\omega(j+p-1,k+p+1)_{\mathcal{O}/\varpi^m})$$ whose action on~$q$-expansions is given by $$\theta_1 \sum a_Q q^Q = \sum \det(Q) \mathrm{con}(a_Q \otimes Q) q^Q,$$ where~$\mathrm{con}: \mathrm{Sym}^{j - k} \otimes \mathrm{Sym}^2 \rightarrow \mathrm{Sym}^{j-k-2}$ is the natural~$\mathrm{SL}_2(\mathbf{Z})$-equivariant projection. \end{enumerate} \end{prop}
\begin{proof} The operator~$\Theta$ is defined in~\cite[Prop~3.9]{Yamauchi}, and the operator~$\theta_1$ is defined in~\cite[Prop~3.12]{Yamauchi}. \end{proof}
(Some of these maps were also considered in previous unpublished work of Ghitza~\cite{Ghitza}). The main results we need concerning these operators are given by the next two theorems.
\begin{theorem} \label{theorem:boxer} Let~$p > 3$ and~$p+1 \ge k$, and assume~$p \nmid k(2k-1)$ --- so in particular $k = 2$ and~$k = p + 1$ are admissible values of~$k$. Then the map $$\Theta: H^0(X_1(Q),\omega(k,k)_{\mathcal{O}/\varpi^m}) \rightarrow H^0(X_1(Q),\omega(k+p+1,k+p+1)_{\mathcal{O}/\varpi^m})$$ is injective. In particular, if~$\Theta F = 0$,
we must have~$F = 0$. \end{theorem}
\begin{proof} We may immediately reduce to the case~$m = 1$ and~$\mathcal{O}/\varpi = k$. Suppose that~$F$ lies in the kernel, so~$\Theta F = 0$. After possibly replacing~$(k,k)$ by~$(k-(p-1),k-(p-1))$, we may assume that~$F$ is not divisible by the Hasse invariant. Following Theorem~4.7 of~\cite{Yamauchi}, it suffices to show that~$F$ is not zero on the superspecial locus. Since~$F$ is not divisible by the Hasse invariant, it has non-trivial specialization to the~$p$-rank~$1$ stratum. The supersingular locus in this stratum is a Cartier divisor cut out by a section of~$\omega^{(p^2-1)/2}$ for~$p > 2$, so since~$2k < p^2 - 1$ (for~$p > 3$), the restriction of~$F$ is non-zero on the supersingular locus. (That the supersingular locus is a Cartier divisor inside the~$p$-rank~$1$ locus when~$p > 2$ was proved by Koblitz, see~p.193 of~\cite{Koblitz}. The exact order of vanishing can also be found in~\cite{vanderGeer}, Theorem~2.4.) Finally, each irreducible component of the supersingular locus is a copy of~$\mathbf{P}^1$ with~$p^2 + 1$ superspecial points on it. Moreover, the line bundle~$\omega$ restricts to~$\mathcal{O}(p-1)$ on each of these~$\mathbf{P}^1$s. Hence the restriction to the superspecial points is injective as long as~$k(p-1) \le p^2 + 1$, which holds for~$k \le p+1$. \end{proof}
We also require a related result for non-parallel weight.
\begin{theorem} \label{theorem:theta} Let~$p -1 > j \ge 4$. The map: $$\theta_1: H^0(X_1(Q),\omega(j,2)_{\mathcal{O}/\varpi^m}) \rightarrow H^0(X_1(Q),\omega (j + p-1,p + 3)_{\mathcal{O}/\varpi^m})$$ is injective. \end{theorem}
\begin{proof} It suffices to work over~$k = \mathcal{O}/\varpi.$ Suppose that~$\theta_1 F = 0$, and that~$F$ is non-zero after restriction to the
superspecial locus. Then the result follows directly from Theorem~3.20 of~\cite{Yamauchi}. As stated, the result does not apply in weight~$(6,2)$, although the same argument works in this weight providing that one may assume (in the notation of \emph{ibid}.) that~$F_2|_X \ne 0$, which
can be achieved under the action of~$\overline{\Gamma} \subset \mathrm{SL}_2(\mathbf{Z})$ for~$j < p-1$, since the level of~$\overline{\Gamma}$ is prime to~$p$
and so surjects onto~$\mathrm{SL}_2(\mathbf{F}_p)$. The corresponding representation of~$\mathrm{SL}_2(\mathbf{F}_p)$ is irreducible,
and thus there exists an element which, applied to~$F$, yields~$F_i|_{X} \ne 0$ for any fixed choice of~$i$. Hence it remains to show that the restriction of~$F$ to the superspecial locus is non-zero. Let~$X=X_1(Q)$, and denote the $p$-rank one stratum, the supersingular locus, and the superspecial locus by~$Y$, $Z$, and~$S$ respectively. We are assuming that the restriction of~$F$ to~$Y$ is nonzero. Suppose the restriction of~$F$ to~$Z$ is zero. There is an exact sequence: $$0 \rightarrow H^0(Y,\omega(j,2)_k \otimes \omega^{-m}) \rightarrow H^0(Y,\omega(j,2)_k) \rightarrow H^0(Z,\omega(j,2)_k),$$ where~$m = (p^2 - 1)/2$. If~$F$ restricts to zero, we obtain a non-zero class in the first group. Yet there is also a sequence: $$H^0(X,\omega(j,2)_k \otimes \omega^{-m}) \rightarrow H^0(Y,\omega(j,2)_k \otimes \omega^{-m}) \rightarrow H^1(X,\omega(j,2)_k \otimes \omega^{-m-(p-1)}).$$ The first term vanishes. To see that the final term vanishes, we use the fact that Serre duality shows that the last term is dual to $$H^2(X,\omega(m + p,m + p + 2 - j)_k(-\infty)),$$ which vanishes by Theorem~\ref{thm:lan-suh}. We now have to establish non-vanishing from~$Z$ to~$S$. The restriction of the Hodge bundle to any~$\mathbf{P}^1$ on~$Z$ is~$\mathcal{O}(-1) \oplus \mathcal{O}(p)$. Hence we need to show that no class in $$H^0(\mathbf{P}^1,\mathrm{Sym}^{j-2} (\mathcal{O}(-1) \oplus \mathcal{O}(p)) \otimes \mathcal{O}(2(p-1)))$$ can vanish at~$p^2+1$ points. This is valid as long as $$jp - 2 = (j-2)p + 2(p-1) \le p^2 + 1,$$ which holds provided~$j \le p$.
\end{proof}
\subsection{Relationship between Hecke eigenvalues and crystalline Frobenius} \label{relationship} Suppose that~$F$ is a cuspidal eigenform of weight~$\sigma = (j,k)$ of level prime to~$p$, and let~$r: G_{\mathbf{Q}} \rightarrow \mathrm{GSp}_4(\overline{\Q}_p)$ be the associated Galois representation. One expects (and knows in regular weights, see Theorem~\ref{theorem:highergood}) that~$r$ is crystalline at~$p$ and that crystalline Frobenius has eigenvalues which are the roots of the following polynomial: $$X^4 - \lambda X^3 + (p \mu + (p^3 + p) p^{k+j- 6}) X^2 - \lambda p^{k+j-3}X + p^{2k+2j - 6},$$ where~$\lambda$ is the eigenvalue of~$T$ and~$\mu$ is the eigenvalue of~$T_2$. We may write the roots of this polynomial as follows: $$\alpha , \beta p^{k-2} , \beta^{-1} p^{j-1} , \alpha^{-1} p^{k+j-3}, $$ where~$\alpha$ and~$\beta$ have non-negative~$p$-adic valuation. That means that crystalline Frobenius should have characteristic polynomial: $$X^4 - (\alpha + \ldots) X^3 + (\alpha \beta p^{k-2} + O(p^{k-1})) X^2 + \ldots $$ On the other hand, we know that the coefficient of~$X^2$ should be: $$p^{k-2} Q_2:=p \cdot T_2 + (p + p^3) S,$$ where the operator~$Q_2$ is defined by this formula. In particular, the eigenvalues of this operator ($Q_2$) should all be integral.
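As a consistency check on these normalizations, the four roots pair off under~$x \mapsto p^{k+j-3}/x$: \[ \alpha \cdot \alpha^{-1} p^{k+j-3} = \beta p^{k-2} \cdot \beta^{-1} p^{j-1} = p^{k+j-3}, \] so their product is~$p^{2k+2j-6}$, matching the constant term of the polynomial above, while the sum of their pairwise products is \[ \alpha \beta p^{k-2} + \alpha \beta^{-1} p^{j-1} + 2 p^{k+j-3} + \alpha^{-1} \beta p^{2k+j-5} + \alpha^{-1} \beta^{-1} p^{k+2j-4}. \] When~$\alpha$ and~$\beta$ are $p$-adic units (the ordinary case relevant below) and~$j \ge k \ge 2$, every term after the first lies in~$p^{k-1} \mathcal{O}$, recovering the expression~$\alpha \beta p^{k-2} + O(p^{k-1})$ for the coefficient of~$X^2$.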
\begin{lemma} \label{lemma:verify} Let~$\sigma = (j,k)$ with~$j \ge k \ge 2$. If~$(j,k) \ne (2,2)$, there is a congruence of operators on formal~$q$-expansions: $$Q_2 = (p \cdot T_2 + (p + p^3) S) p^{2-k} \equiv Z_2 \mod p.$$ In particular, if~$F$ is an ordinary form of regular weight~$\sigma$ with crystalline eigenvalues as above, the eigenvalue of~$Z_2$ is~$\alpha \beta \mod p$. If~$\sigma = (2,2)$, there is a congruence $$Q_2 = (p \cdot T_2 + (p + p^3) S) p^{2-k} \equiv Z_2 + X_2 \mod p.$$ \end{lemma}
\begin{proof} The operator~$S$ acts by a scalar which is equal to~$p^{j+k-6}$. Note that $$p^3 \cdot p^{j + k- 6} \cdot p^{2-k} \equiv 0 \mod p.$$
Thus we can ignore the~$p^3 S$ term above.
We have $$ \begin{aligned} (p \cdot T_2 + (p + p^3) S) p^{2-k} = & p^{3-k}( p^{j+k-6} U_2 + p^{k-3} Z_2 + p^{j + 2k-6} V_2) + p^{j-3} \mod p \\
= & \ p^{j-3} U_2 + Z_2 + p^{j + k-3} V_2 + p^{j-3} \mod p \\ = & \ -p^{j-3} + p^{j-2} X_2 + Z_2 + p^{j+k-3} V_2 + p^{j-3} \mod p \\ = & \ p^{j-2} X_2 + Z_2 \mod p \end{aligned}$$ and we are done. \end{proof}
\subsection{The Main Theorem on~\texorpdfstring{$q$}{q}-expansions} Our main theorem is as follows (we use the notation of~\S\ref{sec:gal-rep-low}).
\begin{theorem} \label{theorem:qexp} Let~$\sigma = (j,2)$ for some~$p-1 > j \ge 2$. Assume that~$\overline{r}$ is as in Assumption~\ref{assumption:hecke-galois-rep-low}. Assume, moreover, that $$\alpha \beta (\alpha^2 - 1)(\beta^2 - 1)(\alpha - \beta)(\alpha^2 \beta^2 - 1) \neq 0.$$ Let~$\mathfrak{m}$ denote the corresponding ideal of the Hecke algebra away from~$p$. Let~$A$ denote a non-trivial power of the Hasse invariant of weight~$k$. Then the composite map: $$\begin{diagram} H^0(X_1(Q),\omega(j,2)_{\mathcal{O}/\varpi^m})^{\alpha,\beta}_{\mathfrak{m}} & \rTo^{A} & H^0(X_1(Q),\omega(j + k,2 + k)_{\mathcal{O}/\varpi^m})_{\mathfrak{m}} \\ & & \dTo^{\pi_{\beta}} \\ & & H^0(X_1(Q),\omega(j + k,2+k)_{\mathcal{O}/\varpi^m})^{\beta}_{\mathfrak{m}}, \end{diagram} $$ is injective, where~$\pi_{\beta}$ denotes the projection onto the summand where~$U - \beta$ and~$Q_2 - \alpha \beta$ (equivalently $Z_2 - \alpha \beta$) are nilpotent. \end{theorem}
Note that, by symmetry, the same result holds with~$\beta$ replaced by~$\alpha$. Before beginning the proof of this theorem, we first prove a much easier analogue for~$\mathrm{GL}(2)$:
\begin{lemma} \label{lemma:easy} Let~$X_1(N)$ denote the modular curve, and let~$\overline{\rho}: G_{\mathbf{Q}} \rightarrow \mathrm{GL}_2(\overline{\F}_p)$ be a modular representation of level~$N$ and weight one over~$\mathbf{F}_p$ such that~$\overline{\rho}(\mathrm{Frob}_p)$ has eigenvalues~$\alpha$ and~$\beta$. Let~$\mathfrak{m}$ denote the corresponding ideal of the Hecke algebra away from~$p$. Assume that $$\alpha - \beta \ne 0.$$ If~$A$ denotes a suitable power of the Hasse invariant of weight~$k$, then the composite map: $$\begin{diagram} H^0(X_1(N),\omega_{\mathcal{O}/\varpi^m})_{\mathfrak{m}} & \rTo^{A} & H^0(X_1(N),\omega^{k+1}_{\mathcal{O}/\varpi^m})_{\mathfrak{m}} \\ & & \dTo^{\pi_{\beta}} \\ & & H^0(X_1(N),\omega^{k+1}_{\mathcal{O}/\varpi^m})^{\beta}_{\mathfrak{m}}, \end{diagram} $$ is injective, where~$\pi_{\beta}$ denotes the projection onto the quotient of homology where~$U - \beta$ is nilpotent. \end{lemma}
In both results, all of the corresponding maps are equivariant with respect to Hecke operators away from~$p$. It suffices to show that the image of the~$\mathbf{T}$-socle maps injectively, and hence we may work with coefficients over a finite field~$k = \mathcal{O}/\varpi$ of characteristic~$p$.
\begin{proof}[Proof of Lemma~\ref{lemma:easy}] Let~$M = H^0(X_1(N),\omega_{\mathcal{O}/\varpi^m})_{\mathfrak{m}}$ and~$N = H^0(X_1(N),\omega^{k+1}_{\mathcal{O}/\varpi^m})_{\mathfrak{m}}$. The map~$M \rightarrow N$ is certainly injective, as can be seen by the~$q$-expansion principle (the map is the identity on~$q$-expansions). Let~$U$ denote the action of~$T$ on~$N$. Then~$U$ satisfies the polynomial~$U^2 - T U + \langle p \rangle = 0$ on the image of~$M$, and so~$M$ lies inside the ordinary subspace of~$N$, and so inside~$N_{\alpha} \oplus N_{\beta}$, where~$N_{\gamma}$ is the factor of~$N$ on which~$(U - \gamma)$ is nilpotent. We have operators~$U$ and~$V$ defined by the formulae $$U \left( \sum a_n q^n \right) = \sum a_{np} q^n, \qquad V \left( \sum a_n q^n\right) = \sum a_n q^{np},$$ and~$T = U + \langle p \rangle V$ in weight~$1$, whereas~$T = U$ in higher weight. The projection operator: $$\pi_{\beta}: N_{\alpha} \oplus N_{\beta} \rightarrow N_{\beta}$$ is given by~$\pi_{\beta} = (U - \alpha)^m$ for some integer~$m$. Suppose that~$F \in M$ satisfies~$\pi_{\beta}(F) = 0$. We have the identity~$U V F = F$, and we may reduce to the case that~$\langle p \rangle F = \alpha \beta F$. We are assuming that~$F = F_{\alpha} \in N_{\alpha}$. Let us write $$(U - \alpha) F_{\alpha} = G_{\alpha} \quad \Longrightarrow \quad U F_{\alpha} = \alpha F_{\alpha} + G_{\alpha} \quad \Longrightarrow \quad \alpha U^{-1} F_{\alpha} = F_{\alpha} - U^{-1} G_{\alpha}.$$ Note that~$U$ is invertible on~$N_{\alpha}$. Since~$T F$ also lies in~$N_{\alpha} \oplus N_{\beta}$, we deduce that~$VF$ lies in~$N_{\alpha} \oplus N_{\beta}$. Yet~$UVF = F \in N_{\alpha}$, and so~$VF \in N_{\alpha}$, and moreover $\langle p \rangle VF = \alpha \beta U^{-1} F_{\alpha}$. 
It follows that $$\begin{aligned} (T - \alpha - \beta) F = & \ (U - \alpha) F_{\alpha} + (\langle p \rangle V - \beta) F_{\alpha} = (U - \alpha) F_{\alpha} + \alpha \beta U^{-1} F_{\alpha} - \beta F_{\alpha} \\ = & \ G_{\alpha} + \beta F_{\alpha} - \beta U^{-1} G_{\alpha} - \beta F_{\alpha} = G_{\alpha} - \beta U^{-1} G_{\alpha}.
\end{aligned}$$
If~$G_{\alpha} \ne 0$, then the latter expression is non-zero, since applying~$U$ gives~$U G_{\alpha} - \beta G_{\alpha}$ and~$\beta \ne \alpha$. On the other hand,~$G_{\alpha}$ is deeper in the filtration of~$N_{\alpha}$ given by $$N_{\alpha} \supset (U - \alpha) N_{\alpha} \supset (U - \alpha)^2 N_{\alpha} \ldots $$ and hence,
replacing~$F$ by~$(T - \alpha - \beta)F$ sufficiently many times, we may assume that~$G_{\alpha} = 0$, that~$U F_{\alpha} = \alpha F_{\alpha}$, and that
$(T - \alpha - \beta) F_{\alpha} = 0$. We are thus left with a form~$F$ such that:
$$T F = (\alpha + \beta) F, \qquad U F = \alpha F, \qquad V F = \beta F.$$ We may now achieve a contradiction based purely on a computation with formal~$q$-expansions. For example, the identity~$V F = \beta F$ is impossible as soon as either~$\beta \ne 1$ or~$F$ is a cusp form, simply by considering the smallest exponent occurring with non-zero coefficient. Alternatively, a non-formal argument using properties of modular forms would be to
note that~$\theta V F = 0$, and then use the fact that~$\theta$ has no kernel in low weight (by~\cite{Katz}).
\end{proof}
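The formal manipulations with~$U$ and~$V$ above are easy to model on truncated~$q$-expansions. The following toy script (ours, purely illustrative; the expansion and the choice~$p = 5$ are arbitrary) checks the identity~$UV = 1$, and the observation that~$V$ multiplies the smallest exponent by~$p$, which is what rules out~$VF = \beta F$ for a nonzero cusp form.

```python
# q-expansions are modeled as {exponent: coefficient} dictionaries.
p = 5

def U(f):
    # U(sum a_n q^n) = sum a_{np} q^n
    return {m // p: a for m, a in f.items() if m % p == 0}

def V(f):
    # V(sum a_n q^n) = sum a_n q^{np}
    return {p * m: a for m, a in f.items()}

f = {1: 3, 2: 7, 5: 1, 10: 4}
assert U(V(f)) == f              # the identity U V = 1

# V scales the smallest exponent by p, so V F = beta F would force the smallest
# exponent n_0 of a nonzero F to satisfy n_0 = p n_0, impossible for n_0 > 0:
assert min(V(f)) == p * min(f)
```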
A different proof of this lemma is given in~\cite{CG}; the point is that the proof given here avoids any geometry.
The proof below is somewhat in this spirit --- using some elementary reductions, we arrive, given an element of~$\ker(\pi_{\beta})$, at a form~$F$ which is simultaneously acted upon by a collection of formal operators in a very constrained way. The identities we get are not quite enough to deduce that~$F = 0$ as a formal~$q$-expansion; they are, however, enough to produce forms of low weight inside the kernel of various theta operators, which will be enough to produce a contradiction by Theorems~\ref{theorem:theta} and~\ref{theorem:boxer}.
No doubt (see~\S\ref{january}) there will be better geometric replacements for this argument, so we apologize in advance for the somewhat messy approach that we present here.
As in the proof above, let us write: $$M = H^0(X_1(Q),\omega(j,2)_k)_{\mathfrak{m}}, \qquad N = H^0(X_1(Q),\omega(j+k,2+k)_{k})_{\mathfrak{m}}.$$ The map~$M \rightarrow N$ is certainly injective, as can be seen by the~$q$-expansion principle (the map is the identity on~$q$-expansions). By abuse of notation, we view~$M \subset N$ under this map. Since~$\alpha \beta \ne 0$, the operator~$Q_2$ acts invertibly on~$M$. Depending on the weight~$\sigma$, the operator~$Q_2$ acts on~$M$ either as~$Z_2$ or as~$Z_2 + X_2$.
\begin{lemma} \label{lemma:injective} Assume that~$\alpha$ and~$\beta$ are as in Theorem~\ref{theorem:qexp}. Suppose that~$\sigma = (j,2)$ with~$j > 2$. Then~$M = Q_2 M = Z_2 M$, and~$M$ is a subspace of the submodule of~$N$ on which~$U$ is invertible. If~$\sigma = (2,2)$, then~$Z_2$ acts on~$N$, the map~$M \rightarrow Z_2 M$ is injective, and~$Z_2 M \subset N$ is a subspace of the submodule of~$N$ on which~$U$ is invertible. \end{lemma}
\begin{proof} In the first case, by assumption we know that~$Q_2 - \alpha \beta$ is nilpotent, and so~$Q_2$ induces an isomorphism of~$M$. On the other hand, the operator~$Q_2$ acts via the formal operator~$Z_2$. In weight~$\tau = (j+k,2+k)$, the corresponding operator~$Q_2$ also acts via~$Z_2$, and so we deduce that~$Q_2 - \alpha \beta$ acts on~$M \subset N$ and acts nilpotently. Yet~$Q_2$ only acts invertibly on the ordinary part of~$N$, as can be seen by lifting to characteristic zero. Now let us consider the case of weight~$\sigma = (2,2)$. We have $$M = Q_2 M = (Z_2 + X_2)M.$$ Now~$Q_2$ acts on~$N$ by~$Z_2$, so certainly~$Z_2 M \subset N$. Since~$Q_2$ acts by~$Z_2 + X_2$ on $M$, there is a commutative diagram as follows: $$\begin{diagram} M & \rTo & Z_2 M \\ \dTo & & \dTo \\ Q_2 M & \rTo & Z_2 (Z_2 + X_2) M = Z^2_2 M \\ \end{diagram} $$ where (by Lemma~\ref{lemma:iszero}) we use the fact that~$Z_2 X_2 = 0$. Since the left-hand vertical map is an isomorphism, it follows that~$Z^2_2 M = Z_2 M$, and hence that~$Z_2$ acts invertibly on~$Z_2 M$, and as in the previous argument it follows that~$Z_2$ and hence~$U$ is invertible on this space.
Hence it suffices to show that~$Z_2 F \ne 0$ for any~$F \in M$. Suppose that~$Z_2 F = 0$. Then~$Q_2 F = Z_2 F + X_2 F = X_2 F$. Since~$Q_2 F \in M$, we have~$X_2 F \in M$. Yet then (again by Lemma~\ref{lemma:iszero}) we have $Q^2_2 F = (Z_2 + X_2) X_2 F = X^2_2 F$, and then~$Q^3_2 F = X^3_2 F = X_2 F$, and so~$Q_2 F = X_2 F \ne 0$ is an eigenvector of~$Q_2$ with eigenvalue~$\lambda$ satisfying $\lambda^2 = 1$. Yet the only generalized eigenvalue of~$Q_2$ is~$\alpha \beta$, and by assumption~$(\alpha \beta)^2 \ne 1$. \end{proof}
(Note that this is the point in this paper which uses the assumption~$(\alpha \beta)^2 \ne 1$
rather than the weaker claim~$\alpha \beta \ne 1$ which is sufficient for arguments
on the Galois side.)
\begin{lemma} The operator~$U(U - \alpha)(U - \beta)$ acts nilpotently on~$N$. \end{lemma}
\begin{proof} This follows by lifting to characteristic zero and noting that the only possible unit crystalline eigenvalues of Frobenius of a lift of~$\overline{r}$ are~$\alpha$ or~$\beta$ modulo~$\mathfrak{m}$. \end{proof}
\begin{lemma} \label{lemma:explicitstuff} Suppose that the composite~$\pi_{\beta} : Z_2 M \rightarrow N_{\beta}$ is not injective. \begin{enumerate} \item If~$\sigma = (j,2)$ with~$j > 2$, there exists a nonzero form~$F = F_{\alpha} \in M \cap N_{\alpha}$ such that $$U F = \alpha F, \qquad T F = (\alpha + \beta) F, \qquad Z F = \beta F.$$ \item If~$\sigma = (2,2)$, there exists a nonzero form~$F = F_{\alpha} + F_0$ with~$F_{\alpha} \in N_{\alpha}$ and~$F_0 \in N_{0}$ such that: $$U F_{\alpha} = \alpha F_{\alpha}, \qquad TF = (\alpha + \beta) F, \qquad X_2 F = \alpha \beta F_0.$$ \end{enumerate} \end{lemma}
\begin{proof} First note that~$T F = (U + Z) F \in M$, and that~$U F \in N$, so~$Z F \in N$. Assume that~$\sigma = (j,2)$ with~$j > 2$. Note that~$Z_2$ commutes with~$U$. Hence, after replacing~$F \in \ker(\pi_{\beta})$ by~$(Z_2 - \alpha \beta)^m F = (Q_2 - \alpha \beta)^m F$ for sufficiently large~$m$, we may assume that~$Z_2 F = \alpha \beta F$. The assumption~$\pi_{\beta}(F) = 0$ implies that~$F = F_{\alpha} \in N_{\alpha}$. Clearly~$U F \in N_{\alpha}$ also, and so~$ZF = TF - UF \in N_{\alpha} \oplus N_{\beta}$. Yet~$Z_2 = U Z$, so we have $$U Z F = \alpha \beta F_{\alpha} \Rightarrow Z F = \alpha \beta U^{-1} F_{\alpha} \in N_{\alpha}$$ (There can be no component in~$N_{\beta}$ because~$U$ is invertible on that space.) Write~$(U - \alpha) F_{\alpha} = G_{\alpha}$, so~$U F_{\alpha} - G_{\alpha} = \alpha F_{\alpha}$, or $$\alpha U^{-1} F_{\alpha} = F_{\alpha} - U^{-1} G_{\alpha}.$$ We infer that $$\begin{aligned} (T - \alpha - \beta) F = & \ (U + Z - \alpha - \beta) F_{\alpha} = (U - \alpha) F_{\alpha} + (Z - \beta) F_{\alpha} \\ = & \ G_{\alpha} + \beta F_{\alpha} - \beta U^{-1} G_{\alpha} - \beta F_{\alpha} \\ = & \ G_{\alpha} - \beta U^{-1} G_{\alpha}. \end{aligned}$$ We claim that if~$G_{\alpha} \ne 0$, then the last expression is non-zero. This is because~$U$ acts invertibly on~$N_{\alpha}$, and applying~$U$ we get $$U (G_{\alpha} - \beta U^{-1} G_{\alpha}) = (U - \alpha) G_{\alpha} + (\alpha - \beta) G_{\alpha},$$ and~$(U - \alpha) G_{\alpha}$ has a smaller nilpotence level than~$G_{\alpha}$, and~$(\alpha - \beta) \ne 0$. In particular, replacing~$F$ by~$(T - \alpha - \beta) F$, we may find more elements in~$M$ which also lie in the kernel of~$\pi_{\beta}$, and reduce to the case where~$U F_{\alpha} = \alpha F_{\alpha}$ and~$Z_2 F_{\alpha} = UZ F_{\alpha} = \alpha \beta F_{\alpha}$. However, in this case, we also see that~$Z F_{\alpha} = \beta F_{\alpha}$, and the required equalities follow.
Now suppose that~$\sigma = (2,2)$. Let us write~$\pi_{\beta}: Z_2 M \subset N_{\alpha} \oplus N_{\beta} \rightarrow N_{\beta}$ as $(U - \alpha)^m$, and so~$(U - \alpha)^m Z_2 F = 0$ for some~$F \ne 0$. Since~$Z_2$ formally commutes with~$U$, we also get $$(U - \alpha)^m (Z^2_2) F = Z_2 (U - \alpha)^m Z_2 F = 0,$$ so~$Z_2$ preserves the property of~$Z_2 F$ lying in the kernel of~$\pi_{\beta}$. But $$Z_2 (Z_2 + X_2) F = Z^2_2 F,$$ because~$Z_2 X_2 = 0$. Hence, if~$Z_2 F$ lies in the kernel of~$\pi_{\beta}$, then so does $$Z_2 Q_2 F = Z_2 (Z_2 + X_2)F.$$ Hence we may repeatedly replace~$F$ by~$(Q_2 - \alpha \beta)F = (Z_2 + X_2 - \alpha \beta)F$, and thus replace~$F$ by a form such that~$Q_2 F = \alpha \beta F$ and~$Z_2 F \in N_{\alpha}$. Now, as above, we may write $$F = F_{\alpha} + F_0 = (F_{\alpha},0,F_0) \in N_{\alpha} \oplus N_{\beta} \oplus N_0.$$ We are assuming that~$Q_2 F = \alpha \beta F$, and so $$Q_2 F = (\alpha \beta F_{\alpha}, 0,\alpha \beta F_{0}).$$ Thus we deduce that $X_2 F = (0,0,\alpha \beta F_{0})$ and $Z_2 F = (\alpha \beta F_{\alpha}, 0, 0)$. We once more would like to use that~$T = U + Z$ implies that~$Z F \in N$. However, we no longer know (or expect) that~$ZF$ is ordinary. Nevertheless, since~$Z_2 = U Z$ and~$ZF \in N$, we certainly deduce that $$Z F = (U^{-1} \alpha \beta F_{\alpha}, 0, G_{0}),$$ for some~$G_0$ in the kernel of~$U$. Our arguments are similar to those used above. We write~$(U - \alpha) F_{\alpha} = G_{\alpha}$, so $U F_{\alpha} - G_{\alpha} = \alpha F_{\alpha}$, or $$\alpha U^{-1} F_{\alpha} = F_{\alpha} - U^{-1} G_{\alpha}.$$ This implies that $$\begin{aligned}
G:=(T - \alpha - \beta) F = & \ (U + Z - \alpha - \beta) (F_{\alpha},0,F_{0}) \\
= & \ (\alpha F_{\alpha} + G_{\alpha},0,0) + (\beta F_{\alpha} - \beta U^{-1} G_{\alpha},0,G_0)
- (\alpha+ \beta) F \\
= & \ (G_{\alpha} - \beta U^{-1} G_{\alpha},0,G_0 - (\alpha + \beta) F_{0}) \end{aligned} $$ The first term lies in a space where~$(U - \alpha)$ is nilpotent, but it has a smaller nilpotence level than~$F_{\alpha}$ by construction. Moreover, if it is equal to zero, then $$0 = U (G_{\alpha} - \beta U^{-1} G_{\alpha}) = \alpha G_{\alpha} + H_{\alpha} - \beta G_{\alpha},$$ where~$(U - \alpha) G_{\alpha} = H_{\alpha}$ has a yet smaller nilpotence level. In particular, this can equal zero only if either~$\alpha = \beta$ or~$G_{\alpha} = 0$. Since we are explicitly forbidding the former, we may assume, by induction, that~$F_{\alpha} \ne 0$ is a~$U$-eigenvector, and so $$(T - \alpha - \beta) F = (0,0,G_0 - (\alpha + \beta) F_{0}).$$ This implies that~$Z_2 (T - \alpha - \beta) F = 0$, and thus (from the injectivity of~$Z_2$ in Lemma~\ref{lemma:injective}) that~$(T - \alpha - \beta) F = 0$, or that~$F$ is a~$T$-eigenform. The required identities follow immediately upon writing~$F = F_{\alpha} + F_0$ where~$F$ is a~$T$-eigenform, $U F_{\alpha} = \alpha F_{\alpha}$, and~$X_2 F = \alpha \beta F_0$. \end{proof}
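Since all of the operators appearing in these manipulations commute on~$N_{\alpha}$ and~$U$ acts invertibly there, the key algebraic step is a rational-function identity. The following check (illustrative only; the symbol~$u$ stands for~$U$) verifies it symbolically.

```python
import sympy as sp

u, a, b = sp.symbols('u alpha beta', nonzero=True)

# With Z acting as alpha*beta*U^{-1} on N_alpha, the step
#   (U + Z - alpha - beta) = (1 - beta U^{-1})(U - alpha)
# used above reduces to a commutative identity:
lhs = u + a*b/u - a - b
rhs = (1 - b/u) * (u - a)
assert sp.expand(lhs - rhs) == 0
```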
At this point, to prove Theorem~\ref{theorem:qexp}, it suffices to show that there are no Siegel modular forms which satisfy the above identities. For example, in weights~$\sigma = (j,2)$ with~$j > 2$, we would like to show that there is no form~$F$ which is an eigenform for both~$T$ and~$U$. We now examine what constraints these identities place on the Fourier coefficients of~$F$.
\begin{remark}[Tripling] \label{trip} \emph{A theme of~\cite{CG}, following previous work of Wiese~\cite{Wiese}, was to prove that certain Galois representations were ordinary in two different ways by {\bf doubling}, that is, mapping the form of low weight to forms of high weight in two different ways. This is also our argument in weights~$(j,2)$ for~$j \ge 4$. However, in weight~$(2,2)$, we see some new phenomena. When we pass to weight~$(p+1,p+1)$, we see that the space of low weight forms has been not merely doubled but tripled, with the third copy (the image under the map~$X_2$) lying in the kernel of~$Z_2$. What this must mean is that any ordinary Galois representation coming from weight~$(2,2)$ should have a non-ordinary lift in weight~$(p+1,p+1)$. This phenomenon does not occur for~$\mathrm{GL}(2)$, since forms of weight~$p$ which are ordinary modulo~$p$ are ordinary in characteristic zero by (boundary cases of) Fontaine--Laffaille theory. For~$\mathrm{GSp}(4)$, however, the Hodge--Tate weights in weight~$(p+1,p+1)$ are~$[0,p-1,p,2p-1]$, which are well beyond the Fontaine--Laffaille range. One can also ask what is the exact relationship between the tripling argument here in weight~$(2,2)$ and the doubling version of~\cite{BCGP} at Klingen level. For our purposes, this would require proving that there exists a (Hecke equivariant away from~$p$) injection from our space of forms~$M$ at spherical level to a space of ordinary forms (with respect to the operator denoted~$U_{\mathrm{Kli},2}$ in~\cite{BCGP}) at Klingen level also in weight~$(2,2)$. While this should certainly be true, we have not attempted to prove it. } \end{remark}
\subsection{Binary quadratic forms}
\begin{df} \emph{We define a set with multiplicities $\mathcal{F}(Q)$ of equivalence classes of $p$-integral binary quadratic forms as follows. For each $M \in \mathcal{S}$ (with~$\mathcal{S}$ as defined in Lemma~\ref{eightthree}), we add $[P]$ to $\mathcal{F}(Q)$ if and only if there exists a $P \in [P]$ such that $Q = M.P$. In particular, $M$ contributes a class $[P]$ if and only if $[M^{-1}.Q]$ is $p$-integral.} \end{df}
An easy lemma shows that $\mathcal{F}(Q)$ only depends on $[Q]$. A binary quadratic form defines a section of $\mathcal{O}(2)$ on $\mathbf{P}^1(\mathbf{F}_p)$, the latter of which is in natural bijection with $\mathcal{S}$ (recall that~$\mathcal{S}$ is the coset space of~$\diag(1,p)$ in~$\overline{\Gamma} \subset \mathrm{SL}_2(\mathbf{Z})$). We see that $M^{-1}.Q$ is $p$-integral if and only if the corresponding quadratic form has a zero at the corresponding point in $\mathbf{P}^1(\mathbf{F}_p)$. In particular, $\mathcal{F}(Q)$ is empty if $Q$ does not represent zero. Moreover, the cardinality of $\mathcal{F}(Q)$ is given by the number of zeros of $Q$, and is thus equal to $0$, $1$, or $2$ if $Q$ is $p$-primitive. (If $Q$ is not $p$-primitive, then $Q \equiv 0 \mod p$ and $\mathcal{F}(Q)$ has cardinality $p+1$).
The definition of $\mathcal{F}(Q)$ is motivated by the following observation: There is an identity $$a(ZF,Q) = \sum_{[P] \in \mathcal{F}(Q)} \rho(M_P) a(F,P),$$ where $P \in [P]$ is some (any) element in $[P]$ such that $M_P.P = Q$ for $M_P \in \mathcal{S}$.
\begin{lemma} \label{lemma:symmetric} If $[P] \in \mathcal{F}([Q])$, then $[Q] \in \mathcal{F}([P])$. \end{lemma}
\begin{proof} Replacing $Q$ by $g.Q$ for some $g \in \overline{\Gamma} \subset \mathrm{SL}_2(\mathbf{Z})$, we may assume that $Q = M.P$ where $$M = \left( \begin{matrix} 1 & 0 \\ 0 & p \end{matrix} \right).$$ Yet then $p M^{-1}.Q = M^{-1}.Q = P$, and $p M^{-1} \in \mathcal{S}$. \end{proof}
Let $d(Q)$ denote the discriminant of $Q$.
\begin{lemma} Suppose that $Q$ is $p$-primitive. Let $D = d(Q)$. Then either: \begin{enumerate} \item $(D/p) = -1$, and $\mathcal{F}([Q])$ is empty. \item $(D/p) = 0$, and $\mathcal{F}([Q])$ has exactly one element. \item $(D/p) = +1$, and $\mathcal{F}([Q])$ has exactly two elements. \end{enumerate} \label{lemma:zeroonetwo} \end{lemma}
\begin{proof} This follows from the fact that a $p$-primitive form $Q$ has exactly $0$, $1$, or $2$ solutions in $\mathbf{P}(\mathbf{F}_p)$, depending on whether $(D/p)$ is $-1$, $0$, or $1$ respectively. Note that (in the final case) $\mathcal{F}([Q])$ may consist of the same class with multiplicity two. This happens, for example, if $(D/p) = 1$ and the class number of $D$ is one. \end{proof}
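This count is easy to confirm by brute force. The following script (our illustration; the choice~$p = 13$ is arbitrary) checks, for every~$p$-primitive form~$Q = m x^2 + r xy + n y^2$ over~$\mathbf{F}_p$, that the number of zeros of~$Q$ in~$\mathbf{P}^1(\mathbf{F}_p)$ is~$0$, $1$, or~$2$ according as~$(D/p)$ is~$-1$, $0$, or~$+1$, with~$D = r^2 - 4mn$.

```python
def legendre(a, p):
    # Legendre symbol (a/p) via Euler's criterion, for an odd prime p.
    a %= p
    if a == 0:
        return 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

def zeros_in_P1(m, r, n, p):
    # Zeros of Q(x, y) = m x^2 + r x y + n y^2 in P^1(F_p):
    # the points [1 : y] for y in F_p, plus [0 : 1] when n = 0.
    count = sum(1 for y in range(p) if (m + r*y + n*y*y) % p == 0)
    if n % p == 0:
        count += 1
    return count

p = 13
for m in range(p):
    for r in range(p):
        for n in range(p):
            if (m, r, n) == (0, 0, 0):
                continue  # not p-primitive
            D = r*r - 4*m*n
            assert zeros_in_P1(m, r, n, p) == {-1: 0, 0: 1, 1: 2}[legendre(D, p)]
```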
In light of Lemma~\ref{lemma:explicitstuff}, to prove Theorem~\ref{theorem:qexp}, it suffices to prove the following.
\begin{theorem} \label{theorem:nodice} Suppose that $F = \sum a(F,Q) q^Q$ is a Siegel modular $q$-expansion of weight~$\sigma = (j,2)$ in characteristic~$p$, where~$p-1 > j$. \begin{enumerate} \item Suppose that~$\sigma = (j,2)$ with~$j \ge 4$, and that~$UF = \alpha F$ and~$ZF = \beta F$ for some~$\alpha, \beta$ with~$\alpha \beta (\beta^2 - 1) \ne 0$. Then~$F = 0$. \item Suppose that~$\sigma = (2,2)$, and that~$F = F_{\alpha} + F_0$, where $U F_{\alpha} = \alpha F_{\alpha}$, $X_2 F = \alpha \beta F_0$, and $Z F = \beta F + \alpha F_0$ for some~$\alpha, \beta$ with~$\alpha \beta (\beta^2 - 1)(\alpha^2 \beta^2 - 1) \ne 0$. Then~$F = 0$. \end{enumerate} \label{theorem:Zbegone} \end{theorem}
\begin{proof} We first prove that there exists a~$Q$ with~$\det(Q) \not\equiv 0 \mod p$. In particular, in weight~$(2,2)$,
we may also assume that~$F_0 = (\alpha \beta)^{-1} X_2 F = 0$,
and thus have the
equalities: $$U F = \alpha F, \quad ZF = \beta F.$$ In fact, we may assume these equalities hold in both cases, since we are assuming such an equality holds in the case of non-parallel weight. If $a(F,pP) \ne 0$, then, since $a(F,pP) = a(UF,P) = \alpha \cdot a(F,P)$, we have $a(F,P) \ne 0$. Hence, if $F \ne 0$,
there exists a $p$-primitive form $Q$ with $a(F,Q) \ne 0$.
Without loss of generality, assume that $Q$ is a
$p$-primitive form of minimal discriminant with $a(F,Q) \ne 0$. If~$d(Q) \not\equiv 0 \mod p$ we are done, so we may suppose that~$(d(Q)/p) = 0$.
By Lemma~\ref{lemma:zeroonetwo}, $\mathcal{F}(Q)$ then consists of a single class $[P]$.
It follows that
$$\rho(M_P) a(F,P) = a(ZF,Q) = \beta \cdot a(F,Q) \ne 0,$$
and in particular~$a(F,P) \ne 0$.
If $P$ is \emph{not} $p$-primitive, then $P = pR$ for some $R$, and then
$a(F,R) \ne 0$, contradicting the minimality of
$Q$ (note that $P$ and $Q$ have the same discriminant). Hence $P$ is also $p$-primitive.
Yet then $\mathcal{F}(P)$ consists of a single element, which must be $[Q]$ by
Lemma~\ref{lemma:symmetric}.
Yet then it follows that
$$\beta^2 a(F,Q) = a(Z^2 F,Q) = \rho(M_P) a(ZF,P) = \rho(M_Q) \rho(M_P) a(F,Q) =
\begin{cases} 0, & j > 2 \\ a(F,Q), & j = 2 \end{cases}$$ Here we use that $P = M_Q.Q = M_Q.M_P.P$, and thus
$\rho(M_Q.M_P) = \rho(p \cdot I)$ is the identity in weight~$(2,2)$ and zero in higher weight.
If~$j > 2$ we are done, and if~$\sigma = (2,2)$, we are done since~$\beta^2 -1 \ne 0$.
\begin{remark} \emph{ As an alternative to this argument, one could use an analogue of Theorem~\ref{theorem:boxer} to show that the kernel of~$\Theta$ is trivial in low weight (but this would require formulating and then proving such a theorem for non-parallel weight). } \end{remark}
We may therefore assume that~$a(F,Q) \ne 0$ for some~$Q$ of discriminant~$D$ prime to~$p$.
\subsection{The case~\texorpdfstring{$\sigma = (2,2)$}{sigma=(2,2)}.} Let us now assume that~$\sigma = (2,2)$. The coefficient~$a(X_2 F,Q)$ is equal to~$(D/p) a(F,Q)$, where~$D = D_Q$ is the discriminant of~$Q$. Hence, since~$ZF = \beta F + \beta^{-1} X_2 F$, we deduce that if~$(D/p) = -1$, then $$0 = a(ZF,Q) = \beta a(F,Q) - \beta^{-1} a(F,Q) = (\beta - \beta^{-1}) a(F,Q).$$ Assuming that~$\beta^2 \ne 1$, we deduce that~$a(F,Q) = 0$. It follows that the only~$Q$ with~$a(F,Q) \ne 0$ have~$D = d(Q)$ satisfying~$(D/p) = 0,1$. In particular, the form $$F - X_2 F \in M$$ lies in the kernel of~$\Theta$. Yet this implies that~$F - X_2 F$ is trivial by Theorem~\ref{theorem:boxer}. But this implies that~$Z_2 F = Z_2 X_2 F = 0$, and this contradicts the injectivity of~$Z_2: M \rightarrow N$ in Lemma~\ref{lemma:injective}.
\subsection{The case~\texorpdfstring{$\sigma = (j,2)$}{sigma=(j,2)} with~\texorpdfstring{$j \ge 4$}{j ge 4}}
We may assume that~$a(F,Q) \ne 0$, where~$Q$ is~$p$-primitive
and~$D = d(Q)$ is non-zero. If~$(D/p) = -1$, then~$a(ZF,Q) = 0$,
contradicting the non-vanishing of~$a(F,Q)$ and the identity~$ZF = \beta F$.
Hence we may assume that~$(D/p) = 1$. The action of~$\overline{\Gamma} \subset \mathrm{SL}_2(\mathbf{Z})$ on binary quadratic forms of discriminant~$D$ has a finite orbit which may be identified with a ray class group. The assumption on~$D$ implies that~$Q$ has exactly two zeros in~$\mathbf{P}^1(\mathbf{F}_p)$. For either of the zeros (say~$\xi$), we may consider the corresponding quadratic form $$P = M.Q : = M Q M^{T} \cdot \det(M)^{-1},$$ where~$M$ is a representative of an element in~$\mathcal{S}$ corresponding to~$\xi$. The class of~$P$ in the class group does not depend on the choice of representative of~$M$. The quadratic form~$P$ also has two roots. We claim that, for one of those roots, there is a choice of representative~$N$ for the element in~$\mathcal{S}$ such that $$N.P:= N P N^{T} \cdot \det(N)^{-1} = Q, \qquad MN = \left( \begin{matrix} p & 0 \\ 0 & p \end{matrix} \right).$$ Indeed, if~$N = p M^{-1}$, then the corresponding identity is trivially satisfied. We may view the process of applying~$Z$ dynamically as follows: the coefficient of~$ZF$ corresponding to a quadratic form~$Q$ of discriminant~$D$ with~$(D/p) = +1$ is given by a sum~$\rho(M_P) a(F,P) + \rho(M_R) a(F,R)$ for a pair of quadratic forms~$P$ and~$R$ also of the same discriminant. The ray class group corresponding to~$Q$ is partitioned by this process~$Q \rightarrow \{P,R\}$ into a finite number of cyclic orbits, on which this operation takes a binary quadratic form to its two nearest neighbours (if the orbit has fewer than two elements, this pair of neighbours may have multiplicity). Let us now consider the coefficient~$a(Z^2 F,Q)$. This consists of two pairs of two terms coming from the neighbouring quadratic forms~$P$ and~$R$ respectively. From the above, for each neighbour~$P$, there will be a term of the form $$\rho(M) \rho(N) a(F,Q) = \rho(MN) a(F,Q) = 0,$$ where the identity~$\rho(MN) = 0$ requires the assumption that~$j > k$. 
Hence~$a(Z^2 F,Q)$ will also be a sum of two terms coming from the quadratic forms of distance~$2$ away from~$Q$ inside its cyclic orbit. Let us consider one orbit of size~$s$. Then, we also see, modifying~$M_s$ by an element of~$\overline{\Gamma}$ if necessary, that $$ Q = M_s M_{s-1} \ldots M_1.Q = A Q A^{t} \cdot \det(A)^{-1},$$ where~$A = M_{s} M_{s-1} \ldots M_1 \in M_2(\mathbf{Z})$ has~$\det(A) = p^s$. Cycling the other way, we deduce the following:
\begin{lemma} \label{eighttwentythree} Suppose that~$F$ is a formal Siegel modular form of weight~$(j,2)$ which is an eigenform of~$Z$ with eigenvalue~$\beta$. Suppose that~$Q$ has discriminant~$D$~with~$(D/p) = 1$. Then there exists an integer~$s > 0$ such that $$\begin{aligned} \beta^s a(F,Q) = & \ \rho(A) a(F,Q) + \rho(B) a(F,Q),\\ = & \ \mathrm{Sym}^{j-2}[A] a(F,Q) + \mathrm{Sym}^{j-2}[B] a(F,Q), \end{aligned} $$ where $$A Q A^{t} = p^s Q, \qquad B Q B^{t} = p^s Q.$$ \end{lemma}
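A concrete instance of the identity in Lemma~\ref{eighttwentythree} (our illustration; we normalize the transpose so that it reads~$A^{T} Q A = p^s Q$) is given by~$Q(x,y) = x^2 - 2y^2$ of discriminant~$D = 8$ and~$p = 7$, where~$(8/7) = +1$: multiplication by~$3 + \sqrt{2}$, an element of norm~$7$ in~$\mathbf{Z}[\sqrt{2}]$, yields an integral matrix~$A$ with~$\det(A) = 7$ and~$A^{T} Q A = 7 Q$, so here~$s = 1$.

```python
# Q(x, y) = x^2 - 2 y^2 as a symmetric matrix, and A = matrix of
# multiplication by 3 + sqrt(2) on Z[sqrt(2)] in the basis (1, sqrt(2)).
Q = [[1, 0], [0, -2]]
A = [[3, 2], [1, 3]]
p = 7

def matmul(X, Y):
    return [[sum(X[i][t]*Y[t][j] for t in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(X):
    return [[X[j][i] for j in range(2)] for i in range(2)]

assert A[0][0]*A[1][1] - A[0][1]*A[1][0] == p            # det(A) = p
assert matmul(transpose(A), matmul(Q, A)) == [[p*q for q in row] for row in Q]
```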
We now make a small recap: at the beginning of the proof of Theorem~\ref{theorem:nodice}, we proved that we could assume that~$F$ had a non-zero coefficient~$a(F,Q)$ where~$Q$ has non-zero discriminant modulo~$p$. If~$(D/p) = -1$, then~$a(ZF,Q) = 0$, which (with~$ZF = \beta F$) would imply that~$F = 0$. Hence we may assume there is a non-zero coefficient with~$(D/p) = +1$ (which we exploit below) and use the following proposition to reach the final contradiction.
\begin{prop} \label{prop:justnow} Suppose that~$F$ is a formal Siegel modular form of weight~$(j,2)$ modulo~$p$ which is an eigenform of~$Z$ with eigenvalue~$\beta$ such that $\beta \ne 0$, and suppose that~$p > j - 2$. Suppose that~$F$ has a non-zero coefficient~$a(F,Q)$ where~$(D/p) = 1$. Then~$\theta_1 F = 0$. \end{prop}
\begin{proof} The map~$\theta_1$ is induced from the contraction map $$\mathrm{con}: \mathrm{Sym}^{j-2} \otimes \mathrm{Sym}^2 \rightarrow \mathrm{Sym}^{j-4} \otimes \det$$ (this is well defined integrally as long as~$p > j - 2$). In particular, we have the identity $$a(\theta_1 Z^s F,Q) = \mathrm{con}(\mathrm{Sym}^{j-2}[A] a(F,Q) \otimes Q^{\vee}) + \mathrm{con}(\mathrm{Sym}^{j-2}[B] a(F,Q) \otimes Q^{\vee}),$$ where~$\mathrm{con}$ denotes the contraction map. We claim that~$\mathrm{con}(\mathrm{Sym}^{j-2}[A] x \otimes Q^{\vee}) = 0$ for any~$x \in \mathrm{Sym}^{j-2} V$, where~$V = k^2$. Once we have this, we deduce that~$\beta^s a(\theta_1 F,Q) = a(\theta_1 Z^s F, Q) = 0$, and since~$\beta \ne 0$, we have~$a(\theta_1 F,Q) = 0$ and~$\theta_1 F = 0$.
While there is probably an easy coordinate free way to prove the required claim, it is also simple enough to do the computation explicitly by writing everything out in terms of bases. Let us write down a standard basis~$\{f_1,f_2\}$ for~$V$ and a standard basis~$\{e_1,e_2\}$ for~$V^{\vee}$. To be explicit, we choose bases such that a form $$Q = \left( \begin{matrix} m & \frac{1}{2} r \\ \frac{1}{2}r & n \end{matrix} \right)$$ gives rise to the element~$m f^2_1 + r f_1 f_2 + n f^2_2$, and~$Q^{\vee}$ gives rise to~$m e^2_1 + r e_1 e_2 + n e^2_2$. With respect to this choice, the contraction map on~$\mathrm{Sym}^2 \otimes \mathrm{Sym}^2$ (up to scalar) corresponds to
sending~$e^2_1 f^2_2$ and~$e^2_2 f^2_1$ to~$-2$ and~$e_1 f_1 e_2 f_2$ to~$1$, and sending all other monomials to zero. As a consistency check, note that $$\mathrm{con}(Q \otimes Q^{\vee}) = r^2 - 4 m n = -4 \det(Q).$$ Similarly, the contraction mapping on~$\mathrm{Sym}^{j-2} \otimes \mathrm{Sym}^2$ for~$p > j-2$ satisfies $$\mathrm{con}(f^{i}_1 f^{j-2-i}_2 e^k_1 e^{2-k}_2) = 0 \ \text{unless} \ 2 \le i + k \le j - 2.$$
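The contraction conventions just fixed can be checked mechanically. The following sketch (illustrative only; the encoding of~$\mathrm{con}$ on monomials is ours) verifies the consistency identity~$\mathrm{con}(Q \otimes Q^{\vee}) = r^2 - 4mn$ in the~$\mathrm{Sym}^2 \otimes \mathrm{Sym}^2$ case.

```python
import sympy as sp

m, r, n = sp.symbols('m r n')
f1, f2, e1, e2 = sp.symbols('f1 f2 e1 e2')

Q  = m*f1**2 + r*f1*f2 + n*f2**2       # the form in Sym^2 V
Qd = m*e1**2 + r*e1*e2 + n*e2**2       # Q^vee in Sym^2 V^vee

def con2(expr):
    # Contraction on Sym^2 (x) Sym^2 with the stated convention:
    # e1^2 f2^2 -> -2, e2^2 f1^2 -> -2, e1 e2 f1 f2 -> 1, all else -> 0.
    P = sp.Poly(sp.expand(expr), f1, f2, e1, e2)
    return (-2*P.coeff_monomial(f2**2*e1**2)
            - 2*P.coeff_monomial(f1**2*e2**2)
            + P.coeff_monomial(f1*f2*e1*e2))

assert sp.expand(con2(Q*Qd) - (r**2 - 4*m*n)) == 0
```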
The formula~$Q = A Q A^{t} \det(A)^{-1}$ continues to hold if we replace~$Q$ by~$M.Q = MQM^{t}$ and~$A$ by~$MAM^{-1}$ for some invertible~$M$. In particular, we may replace~$A$ by any integral conjugate. We consider two cases. \begin{enumerate} \item $A$ has a non-zero eigenvalue mod~$p$. In this case (by Hensel's Lemma), the matrix~$A$ has an eigenvalue over~$\mathbf{Z}_p$, and a second eigenvalue which has valuation~$s$. In particular, after a change of basis, we may write
$$A = \left(\begin{matrix} u & 0 \\ 0 & 0 \end{matrix} \right) \mod p^s,
\quad Q = \left( \begin{matrix} m & \frac{1}{2} r \\ \frac{1}{2}r & n \end{matrix} \right).$$
The conditions~$AQA^{t} = \det(A) Q $ and~$\det(A) = p^s$ imply that~$n \equiv 0 \mod p^s$
(multiply out and consider the bottom right entry),
and thus that~$Q^{\vee} = m e^2_1 + r e_1 e_2 \mod p$. But now the image of~$A$ on~$V = k^2$ is generated by~$f_1$, and so
$\mathrm{Sym}^{j-2}[A]\, x$ is a multiple of~$f^{j-2}_1$.
But this forces the contraction after tensoring with~$Q^{\vee}$ to be zero over~$k$,
because the only monomial with which~$f^{j-2}_1$ contracts non-trivially is~$e^2_2$.
\item $A$ is nilpotent modulo~$p$. If~$A$ is trivial modulo~$p$ there is nothing to prove. On the other hand,
if
$$A \equiv \left(\begin{matrix} 0 & 1 \\ 0 & 0 \end{matrix} \right) \mod p,$$
then once again the image of~$A$ is generated by~$f_1$, and the conditions~$A Q A^{t} = \det(A) Q$ and~$\det(A) = p^s$
imply once more that~$n \equiv 0 \mod p$ (multiply out as above but now consider the top left entry), and the proof proceeds as in
the previous case.
\end{enumerate} This completes the proof of the proposition. \end{proof}
Combining Prop.~\ref{prop:justnow} with Lemma~\ref{lemma:explicitstuff} and Theorem~\ref{theorem:theta}, we obtain a contradiction, and this completes the proof of Theorem~\ref{theorem:qexp}.
\end{proof}
\section{Modularity Lifting}
The following theorem is the main result of this paper.
\begin{theorem}
\label{thm:main-thm} Let $\overline{r} : G_{\mathbf{Q}} \to \mathrm{GSp}_4(k)$ be a continuous, odd, absolutely irreducible Galois representation. Suppose that $\nu(\overline{r}) = \epsilon^{-(a-1)}$ where $p-1 > a \geq 2$.
Suppose that the following hold: \begin{enumerate} \item There exist units $\alpha$ and $\beta$ in $k$ such that
$$\overline{r} | G_{p} \sim \left( \begin{matrix} \lambda(\alpha) & 0 & * & * \\ 0 & \lambda(\beta) & * & * \\ 0 & 0 & \nu(\overline{r}) \cdot \lambda( \beta^{-1}) & 0 \\ 0 & 0 & 0 & \nu(\overline{r}) \cdot \lambda(\alpha^{-1}) \end{matrix} \ \ \right),$$ and moreover $(\alpha^2 - 1)(\beta^2 - 1)(\alpha^2 \beta^2 - 1)(\alpha - \beta) \ne 0$.
\item Let $S(\overline{r})$ denote the set of primes of $\mathbf{Q}$ away from $p$ at which $\overline{r}$ is ramified. Then for each $x\in S(\overline{r})$, the restriction $\overline{r}|G_{x}$ falls into one of the cases of Assumption~\ref{assumption:ramification}.
\item $($Big Image$)$ The restriction $\overline{r}|G_{\mathbf{Q}(\zeta_p)}$ has big image in the
sense of Assumption~\ref{assumption:bigimage}. \item The representation $\overline{r}$ is Katz modular of weight $\sigma :=
(a,2) \in X^*(T)^+_M$ in the sense of
Definition~\ref{df:katz-modular}.
\item $($Neatness$)$ $\overline{r}$ satisfies Assumption~\ref{assumption:neatness}. \end{enumerate} We now introduce some notation: let $K \subset \mathrm{GSp}_4(\mathbb{A}^{\infty})$ be the compact open subgroup
defined as in the beginning of Section~\ref{sec:gal-rep-cohom}. Let
$X = X_K$, and for any set of primes $Q$ disjoint from $S(\overline{r})\cup
\{p\}$, let $X_i(Q) = X_{K_i(Q)}$. Let the Hecke algebras
$\mathbf{T}_{\sigma}$ and $\mathbf{T}^{\mathrm{an}}_{\sigma}(Q)$ be as in
Definition~\ref{defn:hecke-alg}. The assumption that $\overline{r}$ is Katz
modular implies that there is a maximal ideal $\mathfrak{m}_{\emptyset}$ of
$\mathbf{T}_{\sigma}$ associated to $\overline{r}$. The pullback of
$\mathfrak{m}_{\emptyset}$ to $\mathbf{T}^{\mathrm{an}}_{\sigma}(Q)$ is also denoted
$\mathfrak{m}_{\emptyset}$. We further assume:
\begin{enumerate}[resume]
\item \label{requiredvanishing} If $Q$ satisfies
Assumption~\ref{assumption:hecke-galois-rep-regular}~\eqref{ass:TW-at-Q},
then \[ H^2(X_i(Q),\omega(a,2)(-\infty)_k)_{\mathfrak{m}_{\emptyset}} = \{ 0 \}.\]
\end{enumerate}
Let $R^{\min}$ be the universal deformation ring classifying minimal deformations of $\overline{r}$ in the sense of Definition~\ref{defn:minimal} (with $Q$ taken to be empty). Then the map \[ R^{\min} \to \mathbf{T}_{\sigma,\mathfrak{m}_{\emptyset}}^{\alpha,\beta}, \] which classifies the minimal deformation of Theorem~\ref{theorem:localglobal} (with $Q$ taken to be empty), is an isomorphism. Furthermore, the space \[ H^0(X, \omega(a,2)(-\infty)_{K/\mathcal{O}})^{\alpha,\beta,\vee}_{\mathfrak{m}_{\emptyset}} \] is a free $\mathbf{T}_{\sigma,\mathfrak{m}_{\emptyset}}^{\alpha,\beta}$ module. \end{theorem}
Note that, for~$p \ge a \ge 4$, the hypothesis~\ref{requiredvanishing} holds by Theorem~\ref{thm:lan-suh}.
\begin{proof}
To prove the theorem, we apply Proposition~\ref{prop:patching}, as
follows:
\begin{enumerate}
\item Take $R = R^{\min}$ and $H = H^0(X, \omega(a,2)(-\infty)_{K/\mathcal{O}})^{\alpha,\beta,\vee}_{\mathfrak{m}_{\emptyset}}$.
\item Let $q$ and the sets $Q_N$ be as in
Proposition~\ref{prop:tw-primes-w1}.
\item The ring $R_{\infty}$ is the power series ring
$\mathcal{O}[[x_1,\dots,x_{q-1}]]$.
\item For each $N \geq 1$, we define a surjection $R_{\infty} \twoheadrightarrow
R$ as follows: Let $R_{Q_N}$ denote the universal deformation ring
classifying deformations of $\overline{r}$ which are minimal outside $Q$, in the sense
of Definition~\ref{defn:minimal}. Choose any surjection
$R_{\infty} \twoheadrightarrow R_{Q_N}$ (possible by
Proposition~\ref{prop:tw-primes-w1}) and let $R_{\infty}\twoheadrightarrow R$
be the composite of this surjection with the natural map $R_{Q_N}
\twoheadrightarrow R^{\min}$.
We define the module $H_N$ as follows: let $\Delta$ be the unique
quotient of $\Delta_{Q_N} = \prod_{x\in Q_N}(\mathbf{Z}/x)^{\times}$ which is
isomorphic to $(\mathbf{Z}/p^N\mathbf{Z})^{q}$, and let $X_{\Delta}(Q_N) \to
X_0(Q_N)$ be as in Section~\ref{sec:balanced-property}.
Let $\mathfrak{m}_N$
be the ideal $\mathfrak{m} \subset \mathbf{T}_{\sigma}(Q_N)$ of Theorem~\ref{thm:no-newforms} when $Q$ is
taken to be $Q_N$. We then take
\[ H_N := H^0(X_{\Delta}(Q_N), \omega(a,2)(-\infty)_{K/\mathcal{O}})^{\alpha,\beta,\vee}_{\mathfrak{m}_N} \]
and we regard it as an $R_{\infty}$-module via the
surjection $R_{\infty} \twoheadrightarrow R_{Q_N}$ chosen above, and the
classifying map $R_{Q_N} \twoheadrightarrow \mathbf{T}_{\sigma}(Q)_{\mathfrak{m}_N}^{\alpha,\beta}$ associated to
the deformation $r_{Q_N}$ of
Theorem~\ref{theorem:localglobal}. The $S_N$-module structure on
$H_N$ is given by choosing an identification $\Delta \cong
(\mathbf{Z}/p^N\mathbf{Z})^{q}$.
\end{enumerate} We need to check that, given these definitions, the conditions of Proposition~\ref{prop:patching} hold. \begin{enumerate}[label=(\alph*)] \item The image of $S_N$ in $\mathrm{End}_{\mathcal{O}}(H_N)$ is contained in the
image of $R_{\infty}$ because under the Galois representation
$r_{Q_N}$ of Theorem~\ref{theorem:localglobal}, the image of an
element $\sigma \in I_x$, for $x$ a prime in $Q_N$, is conjugate to a
matrix of the form $\diag(1,1,\langle u \rangle, \langle u \rangle)$
where $\mathrm{Art}_x(u) = \sigma$. This follows from~\cite[Corollary 3]{Sor}. \item We have
\begin{eqnarray*}
(H_N)_{\Delta_N} &=& \left( \left(H^0(X_{\Delta}(Q_N), \omega(a,2)(-\infty)_{K/\mathcal{O}})^{\alpha,\beta}_{\mathfrak{m}_N}\right)^{\Delta_N}\right)^{\vee} \\ & = & H^0(X_{0}(Q_N), \omega(a,2)(-\infty)_{K/\mathcal{O}})^{\alpha,\beta,\vee}_{\mathfrak{m}_N}.
\end{eqnarray*} Combining this with the isomorphism of Theorem~\ref{thm:no-newforms}, we obtain an isomorphism: \[ \psi_N : (H_N)_{\Delta_N} \stackrel{\sim}{\longrightarrow} H. \] \item Finally, $H_N$ is finite and balanced over $S_N$ by
Theorem~\ref{thm:balanced}.
\end{enumerate} We can thus apply Proposition~\ref{prop:patching}, and we deduce that $H$ is a finite free $R$-module. Since the action of $R$ on $H$ factors through $\mathbf{T}_{\sigma,\mathfrak{m}_{\emptyset}}^{\alpha,\beta}$, the conclusions of Theorem~\ref{thm:main-thm} follow immediately. \end{proof}
\end{document}
\begin{document}
\ifarxiv\else
\markboth{Rue et al.}{Bayesian computing with INLA} \fi
\title{Bayesian Computing with INLA: A Review}
\author{H{\aa}vard Rue$^1$, Andrea Riebler$^1$, Sigrunn H.\
S{\o}rbye$^2$,\ifarxiv\\ \fi Janine B.\ Illian$^3$, Daniel P.\
Simpson$^4$ and Finn K.\ Lindgren$^5$
\affil{$^1$ Department of Mathematical Sciences, Norwegian
University of Science and Technology, N-7491 Trondheim,
Norway; email: hrue@math.ntnu.no}
\affil{$^2$ Department of Mathematics and Statistics, UiT The
Arctic University of Norway, 9037 Troms{\o}, Norway}
\affil{$^3$ Centre for Research into Ecological and Environmental
Modelling, School of Mathematics and Statistics, University of
St Andrews, St Andrews, Fife KY16 9LZ, United Kingdom}
\affil{$^4$ Department of Mathematical Sciences, University of
Bath, Claverton Down, Bath, BA2 7AY, United Kingdom}
\affil{$^5$ School of Mathematics, The
University of Edinburgh, James Clerk Maxwell Building,
The King's Buildings, Peter Guthrie Tait Road, Edinburgh, EH9 3FD,
United Kingdom}
}
\ifarxiv \date{\today} \maketitle\fi
\begin{abstract}
The key operation in Bayesian inference is to compute
high-dimensional integrals. An old approximate technique is the
Laplace method or approximation, which dates back to Pierre-Simon
Laplace (1774). This simple idea approximates the integrand with a
second order Taylor expansion around the mode and computes the
integral analytically. By developing a nested version of this
classical idea, combined with modern numerical techniques for
sparse matrices, we obtain the approach of \emph{Integrated Nested
Laplace Approximations} (INLA) to do approximate Bayesian
inference for \emph{latent Gaussian models} (LGMs). LGMs represent
an important model-abstraction for Bayesian inference and include
a large proportion of the statistical models used today. In this
review, we will discuss the reasons for the success of the
INLA-approach, the \texttt{R-INLA}\xspace package, why it is so accurate, why the
approximations are very quick to compute and why LGMs make such a
useful concept for Bayesian computing.
\end{abstract}
\begin{keywords}
Gaussian Markov random fields, Laplace approximations, approximate
Bayesian inference, latent Gaussian models, numerical integration,
sparse matrices \end{keywords}
\ifarxiv\else\maketitle\tableofcontents\fi
\section{INTRODUCTION}
A key obstacle in Bayesian statistics is to actually \emph{do} the Bayesian inference. From a mathematical point of view, the inference step is easy, transparent and defined by first principles: We simply update prior beliefs about the unknown parameters with available information in observed data, and obtain the posterior distribution for the parameters. Based on the posterior, we can compute relevant statistics for the parameters of interest, including marginal distributions, means, variances, quantiles, credible intervals, etc. In practice, this is much easier said than done.
The introduction of simulation based inference, through the idea of Markov chain Monte Carlo \citep{book44}, hit the statistical community in the early 1990's and represented a major break-through in Bayesian inference. MCMC provided a general recipe to generate samples from posteriors by constructing a Markov chain with the target posterior as the stationary distribution. This made it possible (in theory) to extract and compute whatever one could wish for. Additional major developments have paved the way for popular user-friendly MCMC-tools, like \texttt{WinBUGS} \citep{tech23}, \texttt{JAGS} \citep{man2}, and the new initiative \texttt{Stan} \citep{man3}, which uses Hamiltonian Monte Carlo. Armed with these and similar tools, Bayesian statistics has quickly grown in popularity and Bayesian statistics is now well-represented in all the major research journals in all branches of statistics.
In our opinion, however, from the point of view of applied users, the impact of the Bayesian revolution has been less apparent. This is not a statement about how Bayesian statistics itself is viewed by that community, but about its rather ``cumbersome'' inference, which still requires a lot of CPU time -- and hence human time -- as well as tweaking of simulation and model parameters to get it right. Re-running a lot of alternative models gets even more cumbersome, making the iterative process of model building in statistical analysis impossible \citep[Sec.~1.1.4]{book77}. For this reason, simulation based inference (and hence in most cases also Bayesian statistics) has too often been avoided as being practically infeasible.
In this paper, we review a different take on doing Bayesian inference that has recently facilitated the uptake of Bayesian modelling within the community of applied users. The approach is restricted to the specific class of latent Gaussian models (LGMs) which, as will become clear soon, includes a wide variety of commonly applied statistical models, making this restriction less limiting than it might appear at first sight. The crucial point is that we can derive the \emph{integrated nested Laplace approximation} (INLA) methodology for LGMs, a deterministic approach to approximate Bayesian inference. INLA performs inference within a reasonable time-frame and is, in most cases, both faster and more accurate than MCMC alternatives. To readers used to trading accuracy for computational speed, this might seem like a contradiction. The corresponding R-package (\texttt{R-INLA}\xspace, see \url{www.r-inla.org}) has turned out to be very popular in applied sciences and applied statistics, and has become a versatile tool for quick and reliable Bayesian inference.
Recent examples of applications using the \texttt{R-INLA}\xspace package for statistical analysis, include
disease mapping \citep{schroedle-held-2011,schroedle-held-2011b,ugarte-etal-2014,ugarte-etal-2016,papoila-etal-2014,art592,art585},
age-period-cohort models \citep{riebler-held-2016},
evolution of the Ebola virus \citep{art593},
studies of relationship between access to housing, health and well-being in cities \citep{art594},
study of the prevalence and correlates of intimate partner violence against men in Africa \citep{art595},
search for evidence of gene expression heterosis \citep{art596},
analysis of traffic pollution and hospital admissions in London \citep{art597},
early transcriptome changes in maize primary root tissues in response to moderate water deficit conditions by RNA-Sequencing \citep{art598},
performance of inbred and hybrid genotypes in plant breeding and genetics \citep{art599},
a study of Norwegian emergency wards \citep{art600},
effects of measurement errors \citep{art601,art561, muff-keller-2015},
network meta-analysis \citep{art602},
time-series analysis of genotyped human campylobacteriosis cases from the Manawatu region of New Zealand \citep{art603},
modeling of parrotfish habitats \citep{art604},
Bayesian outbreak detection \citep{art605},
studies of long-term trends in the number of Monarch butterflies \citep{art606},
long-term effects on hospital admission and mortality of road traffic noise \citep{art607},
spatio-temporal dynamics of brain tumours \citep{art608},
ovarian cancer mortality \citep{art609},
the effect of preferential sampling on phylodynamic inference \citep{art610},
analysis of the impact of climate change on abundance trends in central Europe \citep{art611},
investigation of drinking patterns in US Counties from 2002 to 2012 \citep{art612},
resistance and resilience of terrestrial birds in drying climates \citep{art613},
cluster analysis of population amyotrophic lateral sclerosis risk \citep{art614},
malaria infection in Africa \citep{art615},
effects of fragmentation on infectious disease dynamics \citep{art616},
soil-transmitted helminth infection in sub-Saharan Africa \citep{art617},
analysis of the effect of malaria control on Plasmodium falciparum in Africa between 2000 and 2015 \citep{art618},
adaptive prior weighting in generalized regression \citep{held-sauter-2016},
analysis of hand, foot, and mouth disease surveillance data in China \citep{art582},
estimate the biomass of anchovies in the coast of Per\'u \citep{art549},
and many others.
We review the key components that make up INLA in \Sec{sec:background} and in \Sec{sec:INLA} we combine these to outline why -- and in which situations -- INLA works. In \Sec{sec:examples} we show some examples of the use of \texttt{R-INLA}\xspace, and discuss some special features that expand the class of models that \texttt{R-INLA}\xspace can be applied to. In \Sec{sec:priors}, we discuss a specific challenge in Bayesian methodology, and, in particular, reason why it is important to provide better suggestions for default priors. We conclude with a general discussion and outlook in \Sec{sec:discussion}.
\section{BACKGROUND ON THE KEY COMPONENTS} \label{sec:background}
In this section, we review the key components of the INLA-approach to approximate Bayesian inference. We introduce these concepts using a top-down approach, starting with \emph{latent Gaussian models} (LGMs), and what type of statistical models may be viewed as LGMs. We also discuss the types of Gaussians/Gaussian-processes that are computationally efficient within this formulation, and illustrate Laplace approximation to perform integration -- a method that has been around for a very long time yet proves to be a key ingredient in the methodology we review here.
Due to the top-down structure of this text we occasionally have to mention specific concepts before properly introducing and/or defining them -- we ask the reader to bear with us in these cases.
\subsection{Latent Gaussian Models (LGMs)}
The concept of latent Gaussian models represents a very useful abstraction subsuming a large class of statistical models, in the sense that the task of statistical inference can be unified for the entire class~\citep{art451}. This is obtained using a three-stage hierarchical model formulation, in which observations $\mm{y}$ can be assumed to be conditionally independent, given a latent Gaussian random field $\mm{x}$ and hyperparameters $\mm{\theta_1}$, \begin{equation*}\label{eq0a}
\mm{y} \mid\mm{x},\mm{\theta}_1 \sim
\prod_{i\in {\mathcal I}} \pi(y_i\mid x_i, \mm{\theta}_1). \end{equation*} The versatility of the model class relates to the specification of the latent Gaussian field: \begin{equation*}\label{eq0b}
\mm{x} \mid\mm{\theta}_2 \sim
\mathcal{N}\left(\mm{\mu}(\mm{\theta}_2),
\mm{Q}^{-1}(\mm{\theta_2})\right) \end{equation*} which includes all random terms in a statistical model, describing the underlying dependence structure of the data. The hyperparameters $\mm{\theta}=(\mm{\theta}_1,\mm{\theta}_2)$, control the Gaussian latent field and/or the likelihood for the data, and the posterior reads \begin{equation}\label{eq1}
\pi(\mm{x}, \mm{\theta} | \mm{y}) \propto
\pi(\mm{\theta}) \;
\pi(\mm{x} | \mm{\theta}) \;
\prod_{i\in {\mathcal I}} \pi(y_i | x_i, \mm{\theta}).
\end{equation} We make the following critical assumptions: \begin{enumerate}
\item The number of hyperparameters $|\mm{\theta}|$ is \emph{small},
typically $2$ to $5$, but not exceeding $20$.
\item The distribution of the latent field $\mm{x}|\mm{\theta}$ is
  Gaussian and is required to be a Gaussian Markov random field (GMRF)
  (or to be close to one) when the dimension $n$ is high ($10^{3}$
  to $10^{5}$). \item The data \mm{y} are mutually conditionally independent given
  $\mm{x}$ and $\mm{\theta}$, with each observation $y_i$
  depending on only one component of the latent field, e.g.\ $x_i$.
Most components of \mm{x} will not be observed. \end{enumerate} These assumptions are required both for computational reasons and to ensure, with a high degree of certainty, that the approximations we describe below are accurate.
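To make the three-stage hierarchy concrete, here is a minimal simulation sketch in Python; the AR(1) latent field, the Poisson likelihood and all parameter values are illustrative choices of ours, not prescribed by the text above:

```python
import numpy as np

rng = np.random.default_rng(42)

# Stage 3: hyperparameters (deliberately few, cf. assumption 1).
theta = {"phi": 0.9}                 # AR(1) dependence parameter

# Stage 2: latent Gaussian field x | theta, here a stationary AR(1) process.
m = 100
phi = theta["phi"]
x = np.empty(m)
x[0] = rng.normal(scale=1.0 / np.sqrt(1.0 - phi**2))
for t in range(1, m):
    x[t] = phi * x[t - 1] + rng.normal()

# Stage 1: conditionally independent data, y_i | x_i ~ Poisson(exp(x_i)),
# so each observation depends on exactly one component of the latent field.
y = rng.poisson(np.exp(x))
print(y[:10])
```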
\subsection{Additive Models}
Now, how do LGMs relate to other better-known statistical models? Broadly speaking, they are an umbrella class generalising the large number of related variants of ``additive'' and/or ``generalized'' (linear) models. For instance, interpreting the likelihood
$\pi(y_i|x_i, \mm{\theta})$, so that ``$y_i$ only depends on its linear predictor $x_i$'', yields the generalized linear model setup. We can interpret $\{x_i, i \in {\mathcal I}\}$ as $\eta_i$ (the linear predictor), which itself is additive with respect to other effects, \begin{equation}\label{eq3}
\eta_i = \mu + \sum_j \beta_j z_{ij} + \sum_k f_{k,j_k(i)}. \end{equation} Here, $\mu$ is the overall intercept and $\mm{z}$ are fixed covariates with linear effects $\{\beta_j\}$. The difference between this formulation and an ordinary generalized linear model lies in the terms $\{f_{k}\}$, which are used to represent \emph{specific} Gaussian processes. We label each $f_k$ as a \emph{model component}, in which element $j_k(i)$ contributes to the $i$th linear predictor. Examples of model components $f_k$ include auto-regressive time-series models, stochastic spline models and models for smoothing, measurement error models, random effects models with different types of correlations, spatial models, etc. We assume that the model components are a-priori independent, the fixed effects ($\mu, \mm{\beta})$ have a joint Gaussian prior and that the fixed effects are a-priori independent of the model components.
The key is now that the model formulation in \eref{eq3} and LGMs relate to the same class of models when we assume Gaussian priors for the intercept and the parameters of the fixed effects. The joint distribution of \begin{equation}\label{eq4}
\mm{x} = (\mm{\eta}, {\mu}, \mm{\beta}, \mm{f}_1, \mm{f}_2, \ldots) \end{equation} is then Gaussian, and also non-singular if we add a tiny noise term in~\eref{eq3}. This yields the latent field \mm{x} in the hierarchical LGM formulation. Clearly, $\text{dim}(\mm{x}) = n$ can easily get large, as it equals the number of observations, plus the intercept(s) and fixed effects, plus the sum of the dimension of all the model components.
The hyperparameters $\mm{\theta}$ comprise the parameters of the likelihood and the model components. The likelihood family and each model component typically have between zero and two hyperparameters each, often some kind of variance, scale or correlation parameter. Conveniently, the number of hyperparameters is typically small and, further, depends neither on the dimension $n$ of the latent field nor on the number of observations. This is crucial for computational efficiency: even with a big dataset, the number of hyperparameters remains constant and assumption 1 still holds.
\subsection{Gaussian Markov Random Fields (GMRFs)}
In practice, the latent field should not only be Gaussian, but should also be a (sparse) Gaussian Markov random field (GMRF); see \cite{book80,col27,col26} for an introduction to GMRFs. A GMRF $\mm{x}$ is simply a Gaussian with additional conditional independence properties, meaning that $x_i$ and $x_j$ are conditionally independent given the remaining elements $\mm{x}_{-ij}$, for quite a few $\{i,j\}$'s. The simplest non-trivial example is the first-order auto-regressive model, $x_t = \phi x_{t-1} + \epsilon_t$,
$t=1, 2, \ldots, m$, having Gaussian innovations $\mm{\epsilon}$. For this model, the correlation between $x_t$ and $x_s$ is $\phi^{|s-t|}$ and the resulting $m\times m$ covariance matrix is dense. However,
$x_s$ and $x_t$ are conditionally independent given $\mm{x}_{-st}$, for all $|s-t|>1$. In the Gaussian case, a very useful consequence of conditional independence is that this results in zeros for pairs of conditionally independent values in the precision matrix (the inverse of the covariance matrix). Considering GMRFs provides a huge computational benefit, as calculations involving a dense $m \times m$ matrix are much more costly than when a sparse matrix is used. In the auto-regressive example, the precision matrix is tridiagonal and can be factorized in ${\mathcal O}(m)$ time, whereas we need ${\mathcal O}(m^{3})$ in the general dense case. Memory requirement is also reduced, ${\mathcal O}(m)$ compared to ${\mathcal O}(m^{2})$, which makes it much easier to run larger models. For models with a spatial structure, the cost is ${\mathcal O}(m^{3/2})$ paired with a ${\mathcal O}(m\log(m))$ memory requirement. In general, the computational cost depends on the actual sparsity pattern in the precision matrix, hence it is hard to provide precise estimates.
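The auto-regressive example can be checked directly. The following sketch (with unit innovation variance, an illustrative choice) builds the tridiagonal precision matrix of the stationary AR(1) model and verifies that its inverse carries the correlation $\phi^{|s-t|}$, while the precision itself has only ${\mathcal O}(m)$ non-zero entries:

```python
import numpy as np

def ar1_precision(m, phi):
    """Precision matrix of a stationary AR(1) model with unit innovation
    variance: tridiagonal, hence a GMRF with a sparse structure."""
    Q = np.diag(np.full(m, 1.0 + phi**2))
    Q[0, 0] = Q[-1, -1] = 1.0            # boundary corrections
    i = np.arange(m - 1)
    Q[i, i + 1] = Q[i + 1, i] = -phi
    return Q

m, phi = 100, 0.8
Q = ar1_precision(m, phi)
Sigma = np.linalg.inv(Q)                 # dense covariance matrix

# Correlation between x_s and x_t is phi**|s-t| ...
corr = Sigma[10, 15] / np.sqrt(Sigma[10, 10] * Sigma[15, 15])
print(corr, phi**5)

# ... yet only 3m - 2 entries of the precision matrix are non-zero.
print(np.count_nonzero(Q), 3 * m - 2)
```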
\subsection{Additive Models and GMRFs} In the construction of additive models including GMRFs the following fact provides some of the ``magic'' that is exploited in INLA: \begin{quote}
\emph{The joint distribution for $\mm{x}$ in~\eref{eq4} is also a
GMRF and its precision matrix consists of sums of the
precision matrices of the fixed effects and the other model
components.} \end{quote} We will see below that we need to form the joint distribution of the latent field many times, as it depends on the hyperparameters $\mm{\theta}$. Hence, it is essential that this can be done efficiently, avoiding computationally costly matrix operations. Being able to simply treat the joint distribution as a GMRF with a precision matrix that is easy to compute is one of the key reasons why the INLA-approach is so efficient. Also, the sparse structure of the precision matrix boosts computational efficiency, compared with operations on dense matrices.
To illustrate more clearly what happens, let us consider the following simple example, \begin{equation}\label{eq5}
\eta_i = \mu + \beta z_i + f_{1j_1(i)} + f_{2j_2(i)} +
\epsilon_i, \quad i=1, \ldots, n, \end{equation} where we have added a small amount of noise $\epsilon_i$. The two model components $f_{1j_1(i)}$ and $f_{2j_2(i)}$ have sparse precision matrices $\mm{Q}_1(\mm{\theta})$ and $\mm{Q}_2(\mm{\theta})$, of dimension $m_1\times m_1$ and $m_2 \times m_2$, respectively. Let $\tau_\mu$ and $\tau_\beta$ be the (fixed) prior precisions for $\mu$ and $\beta$. We can express~\eref{eq5} using matrices, \begin{displaymath}
\mm{\eta} = \mu\mm{1} + \beta \mm{z} + \mm{A}_1 \mm{f}_1 +
\mm{A}_2 \mm{f}_2 + \mm{\epsilon}. \end{displaymath} Here, $\mm{A}_1$, and similarly for $\mm{A}_2$, is an $n\times m_1$ sparse matrix, which is zero except for exactly one $1$ in each row. The joint precision matrix of $(\mm{\eta}, \mm{f}_1, \mm{f}_2, \beta, \mu)$ is straightforward to obtain by rewriting \begin{eqnarray}\label{eq6}
&\exp\left(\right. -\frac{\tau_\epsilon}{2}
\left(\mm{\eta} -\left(\mu\mm{1} + \beta \mm{z} + \mm{A}_1 \mm{f}_1 +
\mm{A}_2 \mm{f}_2\right)\right)^{T}
\left(\mm{\eta} -\left(\mu\mm{1} + \beta \mm{z} + \mm{A}_1 \mm{f}_1 +
\mm{A}_2 \mm{f}_2\right)\right)
\nonumber\\
& \left.-\frac{\tau_\mu}{2} \mu^{2}
-\frac{\tau_\beta}{2} \beta^{2}
-\frac{1}{2} \mm{f}_1^{T} \mm{Q}_1(\mm{\theta}) \mm{f}_1
-\frac{1}{2} \mm{f}_2^{T} \mm{Q}_2(\mm{\theta}) \mm{f}_2
\right) \nonumber \end{eqnarray} into \begin{displaymath}
\exp\left( -\frac{1}{2} (\mm{\eta}, \mm{f}_1, \mm{f}_2, \beta, \mu)^{T}
\mm{Q}_{\text{joint}}(\mm{\theta})
(\mm{\eta}, \mm{f}_1, \mm{f}_2, \beta, \mu) \right) \end{displaymath} where \begin{displaymath}
\mm{Q}_{\text{joint}}(\mm{\theta}) =
\begin{bmatrix}
    \tau_{\epsilon} \mm{I} & -\tau_{\epsilon} \mm{A}_1 &
    -\tau_{\epsilon} \mm{A}_2 & -\tau_{\epsilon} \mm{z} &
    -\tau_{\epsilon} \mm{1} \\
    & \mm{Q}_1(\mm{\theta}) + \tau_{\epsilon} \mm{A}_1^{T}
    \mm{A}_1 & \tau_{\epsilon} \mm{A}_1^{T} \mm{A}_2 &
    \tau_{\epsilon} \mm{A}_1^{T} \mm{z} & \tau_{\epsilon} \mm{A}_1^{T}
    \mm{1}\\
    && \mm{Q}_2(\mm{\theta}) + \tau_{\epsilon} \mm{A}_2^{T}
    \mm{A}_2 & \tau_{\epsilon} \mm{A}_2^{T} \mm{z} &
    \tau_{\epsilon} \mm{A}_2^{T}\mm{1} \\
    &\text{sym.}&& \tau_\beta + \tau_{\epsilon} \mm{z}^{T}\mm{z} &
    \tau_{\epsilon} \mm{z}^{T}\mm{1}\\
    &&&& \tau_\mu + \tau_{\epsilon} \mm{1}^{T}\mm{1}
\end{bmatrix}. \end{displaymath} The dimension is $n + m_1 + m_2 + 2$. Concretely, the above-mentioned ``magic'' implies that the only matrices that need to be multiplied are the \mm{A}-matrices, which are extremely sparse and contain only one non-zero element in each row. These matrix products do not depend on $\mm{\theta}$ and hence they only need to be computed once. The joint precision matrix only depends on \mm{\theta} through $\mm{Q}_1(\mm{\theta})$ and $\mm{Q}_2(\mm{\theta})$ and as $\mm{\theta}$ changes, the computational cost of re-computing $\mm{Q}_\text{joint}(\mm{\theta})$ is negligible.
The sparsity of $\mm{Q}_{\text{joint}}(\mm{\theta})$ illustrates how the additive structure of the model facilitates computational efficiency. For simplicity, assume $n = m_1 = m_2$, and denote by $e_1$ and $e_2$ the average number of non-zero elements in a row of $\mm{Q}_1(\mm{\theta})$ and $\mm{Q}_2(\mm{\theta})$, respectively. An upper bound for the number of non-zero terms in $\mm{Q}_{\text{joint}}(\mm{\theta})$ is $n(19 + e_1 + e_2) +4$. Approximately, this gives on average only $(19 + e_1 + e_2)/3$ non-zero elements for a row in $\mm{Q}_{\text{joint}}(\mm{\theta})$, which is very sparse.
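The block structure can be assembled mechanically from the quadratic form. The sketch below (random incidence matrices and identity precisions for the two model components, chosen purely for illustration) builds the joint precision of $(\mm{\eta}, \mm{f}_1, \mm{f}_2, \beta, \mu)$ and checks that it is a valid precision matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m1, m2 = 50, 10, 8
tau_eps, tau_mu, tau_beta = 1.0, 0.01, 0.01

# Incidence matrices A1, A2: exactly one 1 per row, mapping each linear
# predictor to one element of the corresponding model component.
A1 = np.zeros((n, m1)); A1[np.arange(n), rng.integers(0, m1, n)] = 1.0
A2 = np.zeros((n, m2)); A2[np.arange(n), rng.integers(0, m2, n)] = 1.0
z = rng.standard_normal(n)
one = np.ones(n)

# Illustrative component precisions (identity; any sparse SPD matrix works).
Q1, Q2 = np.eye(m1), np.eye(m2)

# Quadratic form: tau_eps*|eta - B u|^2 + u^T D u,  u = (f1, f2, beta, mu).
B = np.hstack([A1, A2, z[:, None], one[:, None]])
D = np.zeros((m1 + m2 + 2, m1 + m2 + 2))
D[:m1, :m1] = Q1
D[m1:m1 + m2, m1:m1 + m2] = Q2
D[-2, -2], D[-1, -1] = tau_beta, tau_mu

Q_joint = np.block([
    [tau_eps * np.eye(n), -tau_eps * B],
    [-tau_eps * B.T, D + tau_eps * B.T @ B],
])

# A valid joint precision: symmetric, positive definite, of the stated size.
assert Q_joint.shape == (n + m1 + m2 + 2, n + m1 + m2 + 2)
assert np.allclose(Q_joint, Q_joint.T)
print("smallest eigenvalue:", np.linalg.eigvalsh(Q_joint).min())
```

Note that only the products involving the $\mm{A}$-matrices and $\mm{z}$, $\mm{1}$ need to be formed, and they do not change with $\mm{\theta}$.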
\subsection{Laplace Approximations} \label{sec:laplace}
The Laplace approximation, or method, is an old technique for the approximation of integrals; see \cite[Ch.~3.3]{book123} for a general introduction. The setting is as follows. The aim is to approximate the integral \begin{displaymath}
I_n = \int_x \exp(n f(x))\,dx \end{displaymath} as $n\rightarrow\infty$. Let $x_0$ be the point at which $f(x)$ attains its maximum; then \begin{eqnarray}\label{eq7}
I_n &\approx& \int_x \exp\left(n\left(f(x_0) + \frac{1}{2}(x-x_0)^{2}
f''(x_0)\right)\right)\, dx \\
&=& {\exp(nf(x_0))}{\sqrt{\frac{2\pi}{-nf''(x_0)}}}
= \widetilde{I}_n. \end{eqnarray} The idea is simple but powerful: Approximate the target with a Gaussian, matching the mode and the curvature at the mode. By interpreting $nf(x)$ as the sum of log-likelihoods and $x$ as the unknown parameter, the Gaussian approximation will be exact as $n\rightarrow\infty$, if the central limit theorem holds. The extension to higher dimensional integrals, is immediate and the error turns out to be \begin{displaymath}
I_n = \widetilde{I}_n \left(1 + {\mathcal O}(n^{-1})\right). \end{displaymath} This is a good result for two reasons: the error is \emph{relative}, and it decays at rate $n^{-1}$, as opposed to the \emph{additive} error with rate $n^{-1/2}$ that is common in simulation-based inference.
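The rate can be observed numerically. In the sketch below, the choice $f(x) = x - e^{x}$ (mode $x_0 = 0$, $f''(x_0) = -1$) is ours for illustration; for this $f$ the exact integral relates to the Gamma function, and Stirling's series gives a leading relative error of $1/(12n)$, consistent with the ${\mathcal O}(n^{-1})$ statement:

```python
import numpy as np

# Laplace approximation of I_n = int exp(n f(x)) dx:
#   I_n ~ exp(n f(x0)) * sqrt(2*pi / (-n f''(x0))),  x0 the mode of f.
def laplace(f_x0, d2f_x0, n):
    return np.exp(n * f_x0) * np.sqrt(2.0 * np.pi / (-n * d2f_x0))

f = lambda x: x - np.exp(x)     # concave; mode x0 = 0, f(0) = -1, f''(0) = -1
n, x0 = 50, 0.0

# Reference value by fine-grid trapezoidal quadrature, with the peak
# factored out so that the integrand's maximum is exactly 1.
x = np.linspace(-8.0, 4.0, 400001)
g = np.exp(n * (f(x) - f(x0)))
dx = x[1] - x[0]
I_exact = np.exp(n * f(x0)) * (g.sum() - 0.5 * (g[0] + g[-1])) * dx

rel_err = abs(laplace(f(x0), -1.0, n) - I_exact) / I_exact
print(rel_err)      # close to 1/(12*n) for this particular f
```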
The Laplace approximation used to be a key tool for doing high-dimensional integration in pre-MCMC times, but quickly went out of fashion when MCMC entered the stage. But how does it relate to what we endeavour to do here? Let's assume that we would like to compute a marginal distribution $\pi(\gamma_1)$ from a joint distribution $\pi(\mm{\gamma})$ \begin{eqnarray}\label{eq8}
\pi(\gamma_1) &=& \frac{\pi(\mm{\gamma})}{\pi(\mm{\gamma}_{-1}|
\gamma_1)} \nonumber\\
&\approx& \frac{\pi(\mm{\gamma})}{
\pi_G(\mm{\gamma}_{-1};
\; \mm{\mu}(\gamma_1), \mm{Q}(\gamma_1))}
\Big|_{\mm{\gamma}_{-1} = \mm{\mu}(\gamma_1)}, \end{eqnarray} where we have exploited the fact that we approximate
$\pi(\mm{\gamma}_{-1}|\gamma_1)$ with a Gaussian. In the context of the LGMs we have $\mm{\gamma} = (\mm{x}, \mm{\theta})$. \cite{art367} show that if $\pi(\mm{\gamma}) \propto \exp(nf_n(\mm{\gamma}))$, i.e.\ if $f_n(\mm{\gamma})$ is the average log likelihood, the relative error of the \emph{normalized} approximation~\eref{eq8}, within a ${\mathcal O}(n^{-1/2})$ neighbourhood of the mode, is ${\mathcal O}(n^{-3/2})$. In other words, if we have $n$ replicated data from the same parameters, $\mm{\gamma}$, we can compute posterior marginals with a \emph{relative} error of ${\mathcal O}(n^{-3/2})$, assuming the numerical error to be negligible. This is an extremely positive result, but unfortunately the underlying assumptions usually do not hold. \begin{enumerate} \item Instead of replicated data from the same model, we may have one
replicate from one model (as is common in spatial statistics), or
several observations from similar models. \item The implicit assumption in the above result is also that
$|\mm{\gamma}|$ is fixed as $n\rightarrow\infty$. However, there
is only one realisation for each observation/location in the
random effect(s) in the model, implying that $|\mm{\gamma}|$ grows with
$n$. \end{enumerate} Is it still possible to gain insight into when the Laplace approximation would give good results, even if these assumptions do not hold? First, let's replace \emph{replicated observations from the
same model} with several observations from \emph{similar} models -- where we deliberately use the term ``similar'' in a loose sense. We can borrow strength across variables that we \emph{a-priori} assume to be similar, for example in smoothing over time or over space. In this case, the resulting linear predictors for two observations could differ in only one realisation of the random effect. In addition, borrowing strength and smoothing can reduce the effect of the model dimension growing with $n$, since the \emph{effective} dimension can then grow much more slowly with $n$.
Another way to interpret the accuracy of posterior marginals computed with Laplace approximations is to look not at the error rate but at the implicit constant in front of it. If the posterior is close to a Gaussian density, the results will be more accurate than for a density that is very different from a Gaussian. This resembles convergence in the central limit theorem, which is faster when properties such as uni-modality, symmetry and light tails are present; see for example \cite{art619}. Similarly, in the present context uni-modality is necessary, since we approximate the integrand with a Gaussian. Symmetry helps, since the Gaussian distribution is symmetric, while heavier tails will be missed by the Gaussian. For example, assume \begin{displaymath}
\exp(n f_n(\mm{\gamma})) = \prod_i \text{Poisson}(y_i; \lambda =
\exp(\gamma_1 + \gamma_2 z_i)) \end{displaymath} with centred covariates $\mm{z}$. We then expect better accuracy for $\pi(\gamma_1)$ with high counts than with low counts. With high counts, the Poisson distribution is approximately Gaussian and almost symmetric. Low counts are more challenging, since the likelihood for $y_i=0$ and $z_i=0$ is proportional to $\exp(-\exp(\gamma_1))$, which attains its supremum as $\gamma_1\rightarrow-\infty$. The situation is similar for binomial data of size $m$, where low values of $m$ are more challenging than high values of $m$. Theoretical results for the current rather ``vague'' context are difficult to obtain and constitute a largely unsolved problem; see for example \cite{art408,art620,art621}.
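This behaviour is easy to check numerically. The Python sketch below (an illustration, not part of the original analysis; it assumes a single Poisson count $y$ with a vague Gaussian prior on the log rate) compares the Laplace approximation of the log normalising constant against brute-force quadrature: the error is much smaller for $y=100$ than for $y=1$.

```python
import numpy as np

def log_post(g, y, tau=0.01):
    # unnormalised log posterior: Poisson(y; exp(g)) times a vague N(0, 1/tau) prior
    return y * g - np.exp(g) - 0.5 * tau * g**2

def laplace_log_const(y, tau=0.01):
    # Newton iteration for the mode, then the standard Laplace formula
    g = np.log(max(y, 0.5))
    for _ in range(100):
        g -= (y - np.exp(g) - tau * g) / (-np.exp(g) - tau)
    return log_post(g, y, tau) + 0.5 * np.log(2 * np.pi / (np.exp(g) + tau))

def quad_log_const(y, tau=0.01):
    # brute-force quadrature of the normalising constant
    g, dg = np.linspace(-30, 10, 400001, retstep=True)
    lp = log_post(g, y, tau)
    m = lp.max()
    return m + np.log(np.sum(np.exp(lp - m)) * dg)

err = lambda y: abs(laplace_log_const(y) - quad_log_const(y))
print(err(1), err(100))  # the high-count error is much smaller
```

For a flat prior the quadrature reduces to $\log\Gamma(y)$ and the Laplace approximation to Stirling's formula, so the error decays like $1/(12y)$.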
Let us now discuss a simplistic, but realistic, model in two dimensions $\mm{x} = (x_1, x_2)^{T}$, where \begin{equation}\label{eq16}
\pi(\mm{x}) \propto \exp\left(-\frac{1}{2} \mm{x}^{T}
\begin{bmatrix}
1 & \rho \\ \rho & 1
\end{bmatrix}
\mm{x}\right)
\prod_{i=1}^{2} \frac{\exp(cx_i)}{1+\exp(c x_i)} \end{equation} for a constant $c>0$ and $\rho\ge0$. This is the same functional form as we get from two Bernoulli successes, using a logit-link. Using the constant $c$ is an alternative to scaling the Gaussian part, and the case where $\rho < 0$ is similar. The task now is to approximate
$\pi(x_1) = \pi(x_1, x_2)/\pi(x_2|x_1)$, using \eref{eq8}. Here, the Gaussian approximation is indexed by $x_1$, and we use one Laplace approximation for each value of $x_1$. The likelihood term has a mode at $(\infty, \infty)$, hence the posterior is a compromise between this and the Gaussian prior centred at $(0,0)$.
We first demonstrate that even if the Gaussian approximation matching the mode of $\pi(\mm{x})$ is not so good, the Laplace approximation which uses a sequence of Gaussian approximations, can do much better. Let $\rho=1/2$ and $c=10$ (which is an extreme value). The resulting marginal for $x_1$ (solid), the Laplace approximation of it (dashed) and Gaussian approximation (dot-dashed), are shown in \Fig{fig1}. \begin{figure}
\caption{The true marginal (solid line), the Laplace approximation
(dashed line) and the Gaussian approximation (dot-dashed
line).}
\label{fig1}
\end{figure} The Gaussian approximation fails both to locate the marginal correctly and, of course, it also fails to capture the skewness that is present. In spite of this, the \emph{sequence} of Gaussian approximations used in the Laplace approximation performs much better and only seems to run into slight trouble where the curvature of the likelihood changes abruptly.
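To make the comparison concrete, the following Python sketch (an illustration, not the code behind \Fig{fig1}) computes the true marginal of $x_1$ in \eref{eq16} by brute-force quadrature, and the Laplace approximation by fitting one Gaussian to $\pi(x_2|x_1)$ per value of $x_1$; here $\rho=1/2$ and $c=10$ as in the text.

```python
import numpy as np

RHO, C = 0.5, 10.0

def log_joint(x1, x2):
    # log of the unnormalised density in eq. (16)
    quad = -0.5 * (x1**2 + 2 * RHO * x1 * x2 + x2**2)
    return quad - np.log1p(np.exp(-C * x1)) - np.log1p(np.exp(-C * x2))

x1, d1 = np.linspace(-2, 4, 601, retstep=True)
x2 = np.linspace(-6, 8, 2801)
lj = log_joint(x1[:, None], x2[None, :])

# "true" marginal of x1 by quadrature over x2
true = np.exp(lj - lj.max()).sum(axis=1)
true /= true.sum() * d1

# Laplace approximation: one Gaussian fit of pi(x2 | x1) per x1 value;
# the conditional is log-concave in x2, so a grid argmax locates the mode
lap = np.empty_like(x1)
for i in range(len(x1)):
    m = x2[np.argmax(lj[i])]
    s = 1.0 / (1.0 + np.exp(-C * m))   # logistic function at C * mode
    h = 1.0 + C**2 * s * (1.0 - s)     # minus the 2nd derivative in x2
    lap[i] = log_joint(x1[i], m) + 0.5 * np.log(2 * np.pi / h)
lap = np.exp(lap - lap.max())
lap /= lap.sum() * d1

print(np.max(np.abs(lap - true)) / true.max())  # small sup-norm error
```

The sequence of conditional Gaussians tracks the skewed marginal closely, in line with the figure.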
An important feature of \eref{eq8} is its behaviour in the limiting cases $\rho\rightarrow 0$ and $\rho\rightarrow 1$. When $\rho=0$,
$x_1$ and $x_2$ become independent and $\pi(x_2|x_1)$ does not depend on $x_1$. Hence, \eref{eq8} is exact up to a numerical approximation of the normalising constant. In the other limiting case,
$\rho\rightarrow 1$, $\pi(x_2|x_1)$ is the point-mass at $x_2=x_1$, and \eref{eq8} is again exact up to numerical error. This illustrates a good property of \eref{eq8}: it is exact in the two limiting cases of weak and strong dependence, respectively. This indicates that the approximation should not fail too badly for intermediate dependence. \Fig{fig2} illustrates the Laplace approximation and the true marginals, using $\rho=0.05, 0.4, 0.8$ and $0.95$, and $c=10$. For $\rho=0.05$ (\Fig{fig2}a) and $\rho=0.95$ (\Fig{fig2}d), the approximation is almost perfect, whereas the error is largest for intermediate dependence where $\rho=0.4$ (\Fig{fig2}b) and $\rho=0.8$ (\Fig{fig2}c). \begin{figure}
\caption{The true marginal (solid line) and the Laplace
approximation (dashed line), for $\rho = 0.05$ (a), $0.4$ (b),
$0.8$ (c) and $0.95$ (d).}
\label{fig2}
\end{figure}
\section{Putting It All Together: INLA} \label{sec:INLA}
With all the key components at hand, we can now put them together and illustrate how they combine to form INLA. The main aim of Bayesian inference is to approximate the posterior marginals \begin{equation}\label{eq2}
\pi(\theta_j | \mm{y}), \quad j=1, \ldots, |\mm{\theta}|,
\qquad
\pi(x_i | \mm{y}), \quad i=1, \ldots, n. \end{equation} Our approach is tailored to the structure of LGMs, where
$|\mm{\theta}|$ is low-dimensional, $\mm{x}|\mm{\theta}$ is a GMRF and the likelihood is conditionally independent in the sense that $y_i$ only depends on one $x_i$ and $\mm{\theta}$. From the discussion in \Sec{sec:laplace}, we know that we should aim to apply Laplace approximation only to near-Gaussian densities. For LGMs, it turns out that we can reformulate our problem as a series of subproblems to which Laplace approximations can be applied. To illustrate the general principle, consider an artificial model \begin{displaymath}
\eta_i = g(\beta) u_{j(i)}, \end{displaymath}
where $y_i | \eta_i \sim\text{Poisson}(\exp(\eta_i))$, $i=1, \ldots, n$, $\beta\;\sim\;{\mathcal N}(0,1)$, $g(\cdot)$ is some well-behaved monotone function, and $\mm{u}\sim{\mathcal N}(\mm{0}, \mm{Q}^{-1})$. The index mapping $j(i)$ is chosen such that the dimension of $\mm{u}$ is fixed and does not depend on $n$, and all $u_j$s are observed roughly the same number of times. Computation of the posterior marginals for $\beta$ and all $u_j$ is problematic, since we have a product of a Gaussian and a non-Gaussian term (which is rather far from Gaussian). Our strategy is to break down the approximation into smaller subproblems and only apply the Laplace approximation where the densities are almost Gaussian. The key idea is to use conditioning, here on $\beta$. Then \begin{equation}\label{eq9}
\pi(\beta|\mm{y}) \propto
\pi(\beta) \int \prod_{i=1}^{n} \pi\left(y_i | \lambda_i
= \exp\left(g(\beta)
u_{j(i)}\right)\right) \times \pi(\mm{u}) \;d\mm{u}. \end{equation} The integral we need to approximate should be close to Gaussian, since the integrand is a Poisson-count correction of a Gaussian prior. The marginals for each $u_j$, can be expressed as \begin{equation}\label{eq10}
\pi(u_j | \mm{y}) = \int
\pi(u_j | \beta, \mm{y}) \times \pi(\beta
| \mm{y}) \; d\beta. \end{equation} Note that we can compute the integral directly, since $\beta$ is one-dimensional. Similar to \eref{eq9}, we have that \begin{equation}\label{eq11}
\pi(\mm{u} | \beta, \mm{y}) \propto
\prod_{i=1}^{n} \pi\left(y_i | \lambda_i = \exp\left(
g(\beta) u_{j(i)}\right)\right) \times \pi(\mm{u}), \end{equation} which should be close to a Gaussian. Approximating
$\pi(u_j | \beta, \mm{y})$ involves integrating this density over the remaining components, i.e.\ a Laplace approximation in one dimension less, since $u_j$ is held fixed. Again, the integrand is close to Gaussian.
The key lesson learnt is that we can break the problem down into three sub-problems. \begin{enumerate}
\item Approximate $\pi(\beta|\mm{y})$ using \eref{eq9}.
\item Approximate $\pi(u_j | \beta, \mm{y})$, for all $j$ and for all
required values of $\beta$, from \eref{eq11}.
\item Compute $\pi(u_j|\mm{y})$ for all $j$ using the results from the
first two steps, combined with numerical integration \eref{eq10}. \end{enumerate} The price we pay for this approach is increased complexity; for example, step 2 must be carried out for every required value of $\beta$. We also need to integrate out $\beta$ in \eref{eq10} numerically. If we remain undeterred by the increased complexity, the benefit of this procedure is clear: we only apply Laplace approximations to densities that are near-Gaussian, replacing complex dependencies with conditioning and numerical integration.
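The three sub-problems can be prototyped in a few lines. The Python sketch below is illustrative only: it assumes $g(\beta)=\exp(\beta)$, a two-dimensional $\mm{u}$ with identity precision, and simulated data, then carries out steps 1--3 (a Laplace approximation over $\mm{u}$ for each grid value of $\beta$, Gaussian conditionals for $u_j$, and numerical integration over $\beta$).

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 2, 60
u_true = 0.5 * rng.normal(size=m)
j = rng.integers(0, m, size=n)                  # index mapping j(i)
y = rng.poisson(np.exp(np.exp(0.3) * u_true[j]))

def laplace_over_u(beta):
    """Step-1 ingredient: Laplace-approximate the log integral over u
    in eq. (9), with g(beta) = exp(beta) and u ~ N(0, I)."""
    a = np.exp(beta)
    u = np.zeros(m)
    for _ in range(100):                        # Newton iteration for the mode
        lam = np.exp(a * u[j])
        grad = np.bincount(j, a * (y - lam), m) - u
        hdiag = np.bincount(j, a**2 * lam, m) + 1.0   # diagonal of -Hessian
        step = grad / hdiag
        u += step
        if np.max(np.abs(step)) < 1e-10:
            break
    h = np.sum(y * a * u[j] - np.exp(a * u[j])) - 0.5 * u @ u
    log_int = h + 0.5 * m * np.log(2 * np.pi) - 0.5 * np.sum(np.log(hdiag))
    return log_int, u, 1.0 / hdiag

# step 1: pi(beta | y) on a grid, with prior beta ~ N(0, 1)
bg, db = np.linspace(-2, 2, 201, retstep=True)
lp = np.array([laplace_over_u(b)[0] - 0.5 * b**2 for b in bg])
w = np.exp(lp - lp.max())
w /= w.sum()

# steps 2 and 3: Gaussian pi(u_0 | beta, y), then integrate beta out numerically
x, dx = np.linspace(-4, 4, 801, retstep=True)
marg = np.zeros_like(x)
for b, wb in zip(bg, w):
    _, mu, var = laplace_over_u(b)
    marg += wb * np.exp(-0.5 * (x - mu[0])**2 / var[0]) / np.sqrt(2 * np.pi * var[0])
print(marg.sum() * dx)  # ~ 1
```

Note that the Hessian is diagonal here because each observation touches exactly one $u_j$; in a general LGM the sparsity pattern of the GMRF takes this role.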
The big question is whether we can pursue the same principle for LGMs, and whether we can make it computationally efficient by accepting appropriate trade-offs that allow us to still be sufficiently exact. The answer is \emph{Yes} in both cases. The strategy outlined above can be applied to LGMs by replacing $\beta$ with $\mm{\theta}$, and $\mm{u}$ with $\mm{x}$, and then deriving approximations to the Laplace approximations and the numerical integration. The resulting approximation is fast to compute, with little loss of accuracy. We will now discuss the main ideas for each step -- skipping some practical and computational details that are somewhat involved but still relatively straightforward using ``every trick in the book'' for GMRFs.
\subsection{Approximating the Posterior Marginals for the
Hyperparameters} \label{sec:pmh}
Since the aim is to compute a posterior for each $\theta_j$, it is tempting to use the Laplace approximation directly, which involves approximating the distribution of
$(\mm{\theta}_{-j}, \mm{x})|(\mm{y}, \theta_j)$ with a Gaussian. Such an approach will not be very successful, since the target density is not, and will not be, close to Gaussian; it typically involves triplets like $\tau x_i x_j$. Instead we can construct an approximation to \begin{equation}\label{eq12}
\pi(\mm{\theta}|\mm{y}) \propto \frac{\pi(\mm{\theta}) \pi(\mm{x} |
\mm{\theta}) \pi(\mm{y}|\mm{x}, \mm{\theta})}{
\pi(\mm{x} | \mm{\theta}, \mm{y})}, \end{equation} in which the Laplace approximation requires a Gaussian approximation of the denominator \begin{eqnarray}\label{eq14}
\pi(\mm{x}|\mm{y}, \mm{\theta})
&\propto&
\exp\left(
-\frac{1}{2} \mm{x}^{T}\mm{Q}(\mm{\theta}) \mm{x}
+ \sum_i \log\pi(y_i|x_i, \mm{\theta})
\right) \\
&=&
\label{eq15}
(2\pi)^{-n/2} |\mm{P}(\mm{\theta})|^{1/2}
\exp\left(
-\frac{1}{2} (\mm{x} -\mm{\mu}(\mm{\theta}))^{T}
\mm{P}(\mm{\theta})
(\mm{x} -\mm{\mu}(\mm{\theta}))
\right). \end{eqnarray} Here, $\mm{P}(\mm{\theta}) = \mm{Q}(\mm{\theta}) + \text{diag}(\mm{c}(\mm{\theta}))$, while $\mm{\mu}(\mm{\theta})$ is the location of the mode. The vector $\mm{c}(\mm{\theta})$ contains the negative second derivatives of the log-likelihood at the mode, with respect to $x_i$. There are two important aspects of \eref{eq15}. \begin{enumerate} \item It is a GMRF with respect to the same graph as from a model
without observations $\mm{y}$, so computationally it does not cost
anything to account for the observations since their impact is a
shift in the mean and the diagonal of the precision matrix. \item The approximation is likely to be quite accurate since the
impact of conditioning on the observations, is only on the
``diagonal''; it shifts the mean, reduces the variance and might
introduce some skewness into the marginals etc. Importantly, the
observations do not change the Gaussian dependency structure
through the terms $x_ix_j Q_{ij}(\mm{\theta})$, as these are
untouched. \end{enumerate}
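For a concrete Poisson likelihood, the Gaussian approximation \eref{eq15} can be computed by iterating the quadratic expansion, as in the Python sketch below (a dense-matrix illustration with an assumed random-walk-plus-ridge prior precision; a real implementation would exploit sparsity).

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100
# an assumed proper GMRF prior precision: first-order differences plus a ridge
Q = (np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1) + 0.1 * np.eye(n))
x_true = np.linalg.solve(np.linalg.cholesky(Q).T, rng.normal(size=n))
y = rng.poisson(np.exp(x_true))

# iterate the quadratic expansion of the Poisson log likelihood:
# precision P = Q + diag(c), with c_i = exp(mu_i) (the negative second
# derivative) and canonical mean b_i = y_i - exp(mu_i) + mu_i exp(mu_i)
mu = np.zeros(n)
for it in range(100):
    c = np.exp(mu)
    b = y - c + mu * c
    mu_new = np.linalg.solve(Q + np.diag(c), b)
    done = np.max(np.abs(mu_new - mu)) < 1e-10
    mu = mu_new
    if done:
        break
print(it)  # typically converges in a handful of iterations
```

At the fixed point, $\mm{Q}\mm{\mu} = \mm{y} - \exp(\mm{\mu})$, i.e.\ the gradient of the log posterior vanishes, and only the mean and the diagonal of the precision have been touched by the data.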
Since $|\mm{\theta}|$ is of low dimension, we can derive marginals for
$\theta_j|\mm{y}$ directly from the approximation to
$\mm{\theta}|\mm{y}$. Thinking traditionally, this might be costly since every new $\mm{\theta}$ would require an evaluation of \eref{eq15} and the cost of numerical integration would still be exponential in the dimension. Luckily, the problem is somewhat more well-behaved, since the latent field $\mm{x}$ introduces quite some uncertainty and more ``smooth'' behaviour on the $\mm{\theta}$ marginals.
In situations where the central limit theorem starts to kick in,
$\pi(\mm{\theta}|\mm{y})$ will be close to a Gaussian. We can improve this approximation using variance-stabilising transformations of $\mm{\theta}$, like using $\log(\text{precisions})$ instead of precisions, the Fisher transform of correlations etc. Additionally, we can use the Hessian at the mode to construct almost independent linear combinations (or transformations) of $\mm{\theta}$. These transformations really simplify the problem, as they tend to diminish long tails and reduce skewness, which gives much simpler and better-behaved posterior densities.
The task of finding a quick and reliable approach to deriving all the marginal distributions from an approximation to the posterior density \eref{eq12}, while keeping the number of evaluation points low, was a serious challenge. We did not succeed on this until several years after \cite{art451}, and after several failed attempts. It was hard to beat the simplicity and stability of using the (Gaussian) marginals derived from a Gaussian approximation at the mode. However, we needed to do better as these Gaussian marginals were not sufficiently accurate. The default approach used now is outlined in \citet[Sec.~3.2]{art522}, and involves correction of local skewness (in terms of difference in \emph{scale}) and an integration-free method to approximate marginals from a skewness-corrected Gaussian. How this is technically achieved is somewhat involved and we refer to \cite{art522} for details. In our experience we now balance accuracy and computational speed well, with an improvement over Gaussian marginals while still being exact in the Gaussian limit.
In some situations, our approximation to \eref{eq12} can be a bit off. This typically happens in cases with little smoothing and/or no replications, for example when $\eta_i = \mu + \beta_z z_i + u_i$, for a random-effect \mm{u}, and a binary likelihood \citep{sauter-held-2016}. With vague priors, models like this verge on being improper. \cite{art587} discuss these cases and derive a correction term which clearly improves the approximation to
$\pi(\mm{\theta}|\mm{y})$.
\subsection{Approximating the Posterior Marginals for the Latent
Field} \label{sec:pml}
We will now discuss how to approximate the posterior marginals for the latent field. For linear predictors with no attached observations, the posterior marginals are also the basis to derive the predictive densities, as the linear predictor itself is a component of the latent field. Similar to \eref{eq10}, we can express the posterior marginals as \begin{equation}\label{eq17}
\pi(x_i|\mm{y}) = \int \pi(x_i| \mm{\theta}, \mm{y}) \;
\pi(\mm{\theta}| \mm{y}) \; d\mm{\theta}, \end{equation} hence we are faced with two more challenges. \begin{enumerate}
\item We need to integrate over $\pi(\mm{\theta}|\mm{y})$, but the
computational cost of standard numerical integration is
exponential in the dimension of $\mm{\theta}$. We have already
ruled out such an approach in \Sec{sec:pmh}, since it was too
costly computationally, except when the dimension is low.
\item We need to approximate $\pi(x_i|\mm{\theta}, \mm{y})$ for a
subset of all $i=1, \ldots, n$, where $n$ can be (very) large,
like in the range of $10^{3}$ to $10^{5}$. A standard application
of the Laplace approximation, which involves location of the mode
and factorisation of a $(n-1)\times(n-1)$ matrix many times for
each $i$, will simply be too demanding. \end{enumerate} The key to success is to come up with efficient approximate solutions for each of these problems.
Classical numerical integration is only feasible in lower dimensions. If we want to use $5$ integration points in each dimension, the cost would be $5^{k}$ to cover all combinations in $k$ dimensions, which is $125$ ($k=3$) and $625$ ($k=4$). Using only $3$ integration points in each dimension, we get $81$ ($k=4$) and $729$ ($k=6$). This is close to the practical limits. Beyond these limits we cannot aim for accurate integration, but should rather aim for something better than avoiding the integration step altogether, as an empirical Bayes approach that just uses the mode would do. In dimensions $>2$, we borrow ideas from central composite design \citep{art400} and use integration points on a sphere around the centre; see \Fig{fig3} which illustrates the procedure in dimension $2$ (even though we do not suggest using this approach in dimensions $1$ and $2$). \begin{figure}
\caption{The contours of a posterior marginal for
$(\theta_1, \theta_2)$ and the associated integration points
(black dots).}
\label{fig3}
\end{figure} The integrand is approximately spherical (after rotation and scaling), and the integration points will approximately be located on an appropriate level set for the joint posterior of $\mm{\theta}$. We can weight the spherical integration points equally, and determine their weight relative to the central point by requiring the correct expectation of $\mm{\theta}^{T}\mm{\theta}$ when the posterior is standard Gaussian \citep[Sec.~6.5]{art451}. It is our experience that this approach balances computational costs and accuracy well, and it is applied as the default integration scheme. More complex integration schemes could be used with increased computational costs.
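The weight construction can be written down directly. The Python sketch below is a simplification of the full design: it uses only the $2k$ axial points on the sphere and an assumed radius scaling $f_0 = 1.2$, with equal spherical weights and the central weight determined by matching the expectation of $\mm{\theta}^{T}\mm{\theta}$ under a standard Gaussian; the resulting rule then integrates any quadratic form exactly.

```python
import numpy as np

k, f0 = 3, 1.2                        # dimension and an assumed radius scaling
r = f0 * np.sqrt(k)
pts = np.vstack([np.zeros(k)] +
                [s * r * e for e in np.eye(k) for s in (1.0, -1.0)])
ns = 2 * k
ws = k / (ns * r**2)                  # equal weights on the sphere, chosen so
w0 = 1.0 - ns * ws                    # that E[theta' theta] = k is matched
wts = np.concatenate([[w0], np.full(ns, ws)])

# the rule integrates any quadratic form exactly against N(0, I)
A = np.array([[2.0, 0.3, 0.0], [0.3, 1.0, -0.2], [0.0, -0.2, 0.5]])
approx = sum(w * p @ A @ p for w, p in zip(wts, pts))
print(approx, np.trace(A))            # both equal trace(A)
```

With $f_0>1$ the central weight stays positive, and cross terms $\theta_i\theta_j$ vanish at the axial points, matching their zero Gaussian expectation.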
For the second challenge, we need to balance the need for improved approximations beyond the Gaussian for $\pi(x_i|\mm{\theta}, \mm{y})$, with the fact that we (potentially) need to do this $n$ times. Since $n$ can be large, we cannot afford overly heavy computations for each $i$ to improve on the Gaussian approximations. The default approach is to compute a Taylor expansion around the mode of the Laplace approximation, which provides a linear and a cubic correction term to the (standardised) Gaussian approximation, \begin{equation}\label{eq13}
\log\pi(x_i | \mm{\theta}, \mm{y}) \approx -\frac{1}{2} x_i^{2} +
b_i(\mm{\theta}) x_i + \frac{1}{6} c_i(\mm{\theta})x_i^{3}. \end{equation} We match a skew-Normal distribution~\citep{art414} to \eref{eq13}, such that the linear term provides a correction term for the mean, while the cubic term provides a correction for skewness. This means that we approximate \eref{eq17} with a mixture of skew-Normal distributions. This approach, termed simplified Laplace approximation, gives a very good trade-off between accuracy and computational speed.
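A moment-matching version of this idea can be sketched as follows (illustrative values of $b_i$ and $c_i$; \texttt{R-INLA}'s actual matching differs in its details). We evaluate the cubic expansion \eref{eq13} on a window around the mode, where the expansion is trusted, compute its first three moments, and solve for the skew-Normal parameters.

```python
import numpy as np
from scipy import optimize, stats

b, c = 0.4, 0.25                      # illustrative expansion coefficients
x, dx = np.linspace(-6.0, 6.0, 20001, retstep=True)
d = np.exp(-0.5 * x**2 + b * x + c * x**3 / 6.0)
d /= d.sum() * dx                     # normalise on the window

mean = np.sum(x * d) * dx
var = np.sum((x - mean)**2 * d) * dx
g1 = np.sum((x - mean)**3 * d) * dx / var**1.5

# solve for the skew-Normal shape from the skewness, then scale and location
def sn_skew(delta):
    t = delta * np.sqrt(2.0 / np.pi)
    return 0.5 * (4.0 - np.pi) * t**3 / (1.0 - t**2) ** 1.5

delta = optimize.brentq(lambda u: sn_skew(u) - g1, 1e-9, 0.999)
alpha = delta / np.sqrt(1.0 - delta**2)
omega = np.sqrt(var / (1.0 - 2.0 * delta**2 / np.pi))
xi = mean - omega * delta * np.sqrt(2.0 / np.pi)
fit = stats.skewnorm(alpha, loc=xi, scale=omega)
print(fit.stats(moments="mvs"))       # matches (mean, var, skewness)
```

The fitted skew-Normal reproduces the mean shift from the linear term and the skewness from the cubic term, which is exactly the role the two correction terms play in the simplified Laplace approximation.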
In addition to posterior marginals, we can also provide estimates of the deviance information criterion (DIC) \citep{art413}, Watanabe-Akaike information criterion (WAIC) \citep{art626,art627}, marginal likelihood and conditional predictive ordinates (CPO) \citep{col28}. Other predictive criteria such as the ranked probability score (RPS) or the Dawid-Sebastiani-Score (DSS) \citep{art427} can also be derived in certain settings \citep{art492,art498}. \cite{art531} discuss how the INLA-framework can be extended to a class of near-Gaussian latent models.
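For intuition, $\text{CPO}_i = \pi(y_i \mid \mm{y}_{-i})$, and under conditional independence it satisfies the identity $1/\text{CPO}_i = \mathrm{E}\left[1/\pi(y_i \mid \cdot) \mid \mm{y}\right]$, so no model refits are needed. The Python sketch below checks this identity on an assumed conjugate Poisson--Gamma toy model by Monte Carlo (an illustration only; INLA's computation is deterministic).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
a, b = 1.0, 1.0                       # Gamma(a, rate b) prior on the rate
y = rng.poisson(3.0, size=20)
n = len(y)

# posterior is Gamma(a + sum(y), rate b + n); draw Monte Carlo samples
lam = rng.gamma(a + y.sum(), 1.0 / (b + n), size=100000)

# harmonic-mean identity: 1 / CPO_i = E_posterior[ 1 / pi(y_i | lam) ]
lik = stats.poisson.pmf(y[:, None], lam[None, :])
cpo_mc = 1.0 / np.mean(1.0 / lik, axis=1)

# exact leave-one-out predictive: negative binomial under Gamma(a_i, b_i)
ai, bi = a + y.sum() - y, b + n - 1.0
cpo_exact = stats.nbinom.pmf(y, ai, bi / (bi + 1.0))
print(np.max(np.abs(cpo_mc / cpo_exact - 1.0)))  # small Monte Carlo error
```

The agreement with the closed-form leave-one-out predictive confirms that all $n$ CPO values can be read off one fit of the full model.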
\section{THE R-INLA PACKAGE: EXAMPLES} \label{sec:examples}
The \texttt{R-INLA}\xspace package (see \texttt{www.r-inla.org}) provides an implementation of the INLA-approach, including standard and non-standard tools to define models based on the \texttt{formula} concept in \texttt{R}. In this section, we present some examples of basic usage and some special features of \texttt{R-INLA}\xspace.
\subsection{A Simple Example} \label{sec:simple}
We first show the usage of the package through a simple simulated example, \begin{displaymath}
\mm{y}|\mm{\eta} \;\sim\;\text{Poisson}(\exp(\mm{\eta})) \end{displaymath} where $\eta_i = \mu + \beta w_i + u_{j(i)}$, $i=1, \ldots, n$, $w$ are covariates, $\mm{u} \;\sim\; {\mathcal N}_m(\mm{0}, \tau^{-1} \mm{I})$, and $j(i)$ is a known mapping from $1:n$ to $1:m$. We generate data as follows \begin{verbatim}
set.seed(123456L)
n = 50; m = 10
w = rnorm(n, sd = 1/3)
u = rnorm(m, sd = 1/4)
intercept = 0; beta = 1
idx = sample(1:m, n, replace = TRUE)
y = rpois(n, lambda = exp(intercept + beta * w + u[idx]))
\end{verbatim} giving \begin{verbatim}
> table(y, dnn=NULL)
0 1 2 3 5
17 18 9 5 1
\end{verbatim} We use \texttt{R-INLA}\xspace to do the inference for this model, by \begin{verbatim}
library(INLA)
my.data = data.frame(y, w, idx)
formula = y ~ 1 + w + f(idx, model="iid")
r = inla(formula, data = my.data, family = "poisson")
\end{verbatim} The \texttt{formula} defines how the response depends on covariates, as usual, but the term \texttt{f(idx, model="iid")} is new. It corresponds to the function $f$ that we have met above in \eref{eq3}, one of many implemented GMRF model components. The \texttt{iid} term refers to the ${\mathcal N}(\mm{0}, \tau^{-1}\mm{I})$ model, and \texttt{idx} is an index that specifies which elements of the model component go into the linear predictor.
\Fig{fig4}a shows three estimates of the posterior marginal of $u_1$. The solid line is the default estimate, the simplified Laplace approximation, as outlined in \Sec{sec:INLA} (and with the \texttt{R}-commands given above). The dashed line is the simpler Gaussian approximation which avoids integration over $\mm{\theta}$, \begin{verbatim} r.ga = inla(formula, data = my.data, family = "poisson",
control.inla = list(strategy = "gaussian", int.strategy = "eb")) \end{verbatim} The dotted line represents the (almost) true Laplace approximations and accurate integration over $\mm{\theta}$, and is the best approximation we can provide with the current software, \begin{verbatim} r.la = inla(formula, data = my.data, family = "poisson",
control.inla = list(strategy = "laplace",
int.strategy = "grid", dz=0.1, diff.logdens=20)) \end{verbatim} It is hard to see, as it is almost entirely covered by the solid line, meaning that our mixture of skew-Normals is very close to being exact in this example. We also note that by integrating out $\mm{\theta}$, the uncertainty increases, as it should. To compare the approximations with a simulation-based approach, \Fig{fig4}b shows the corresponding histogram for $10^{5}$ samples using \texttt{JAGS}, together with the default estimate from \Fig{fig4}a. The fit is quite accurate. The CPU time used by \texttt{R-INLA}\xspace with default options was about $0.16$ seconds on a standard laptop, where $2/3$ of this time was used for administration. \begin{figure}
\caption{Panel (a) shows the default estimate (simplified Laplace
approximation) of the posterior marginal for $u_1$ (solid), a
simplified estimate, i.e. the Gaussian approximation, (dashed)
and the best possible Laplace approximation (dotted). Panel
(b) shows the histogram of $u_1$ using $10^{5}$ samples
produced using \texttt{JAGS}, together with the simplified
Laplace approximation from (a).}
\label{fig4}
\end{figure}
\subsection{A Less Simple Example Including Measurement Error} \label{sec:less.simple}
We continue with a measurement error extension of the previous example, assuming that the covariate $\mm{w}$ is only observed indirectly through $\mm{z}$, where \begin{displaymath}
z_i|\ldots \;\sim\; \texttt{Binomial}\left(m,
\text{prob} = \frac{1}{1+\exp(-(\gamma + w_i))}\right), \quad i=1,
\ldots, n, \end{displaymath} with intercept $\gamma$. In this case, the model needs to be specified using two likelihoods and also a special feature called \texttt{copy}. Each observation can have its own type of likelihood (i.e.\ family), which is coded using a matrix (or list) of observations, where each ``column'' represents one family. A linear predictor can only be associated with one observation. The \texttt{copy} feature allows us to have additional identical copies of the \emph{same} model component in the formula, and we have the option to scale it as well. An index \texttt{NA} is used to indicate if there is no contribution to the linear predictor and this is used to zero-out contributions from model components. This is done in the code below: \begin{verbatim}
## generate the indirect observations 'z' of the covariate 'w'
m = 2
z = rbinom(n, size = m, prob = 1/(1+exp(-(0 + w))))
## create the response. since we have two families, poisson and
## binomial, we use a matrix, one column for each family
Y = matrix(NA, 2*n, 2)
Y[1:n , 1] = y
Y[n + 1:n, 2] = z
## we need one intercept for each family. this is an easy way to achieve that
Intercept = as.factor(rep(1:2, each=n))
## say that we have 'beta*w' only for 'y' and 'w' only for 'z'. the formula
## defines the joint model for both the observations, 'y' and 'z'
NAs = rep(NA, n)
idx = c(NAs, 1:n)
idxx = c(1:n, NAs)
formula2 = Y ~ -1 + Intercept + f(idx, model="iid") +
f(idxx, copy="idx", hyper = list(beta = list(fixed = FALSE)))
## need to use a 'list' since 'Y' is a matrix
my.data2 = list(Y=Y, Intercept = Intercept, idx = idx, idxx = idxx)
## we need to define two families and give the 'size' for the binomial
r2 = inla(formula2, data = my.data2, family = c("poisson", "binomial"),
Ntrials = c(NAs, rep(m, n))) \end{verbatim} We refer to \cite{art561} for more details on measurement error models using INLA, and to the specific latent Gaussian models termed {\tt
mec} and {\tt meb} that are available in {\tt R-INLA} to facilitate the implementation of classical error models and Berkson error models, respectively.
\subsection{A Spatial Example} \label{sec:spatial}
The \texttt{R-INLA}\xspace package has extensive support for spatial Gaussian models, including intrinsic GMRF models on regions (often called ``CAR'' models, \citep[Ch.~5.2]{book124}), and a subclass of continuously indexed Gaussian field models. Of particular interest are Gaussian fields derived from stochastic partial differential equations (SPDEs). The simplest cases are Mat\'ern fields in dimension $d$, which can be described as the solution to \begin{equation}\label{eq18}
(\kappa^{2} - \Delta)^{\alpha/2} (\tau {x}(\mm{s})) =
{\mathcal W}(\mm{s}), \end{equation} where $\Delta$ is the Laplacian, $\kappa>0$ is the spatial scale parameter, $\alpha$ controls the smoothness, $\tau$ controls the variance, and ${\mathcal W}(\mm{s})$ is a Gaussian spatial white noise process. \cite{art246,art455} show that the solution is a Gaussian field with a Mat\'ern covariance function of smoothness $\nu = \alpha-d/2$. The smoothness is usually kept fixed based on prior knowledge of the underlying process. A formulation of Mat\'ern fields as solutions to \eref{eq18} might seem unnecessarily complicated, since we already know the solution. However, \cite{art500} showed that by using a finite basis-function representation of the continuously indexed solution, one can derive (in analogy to the well-known \emph{Finite Element Method}) a local representation with Markov properties. This means that the joint distribution for the weights in the basis-function expansion is a GMRF, and the distribution follows directly from the basis functions and the triangulation of space. The main implication of this result is that it allows us to continue to think about and interpret the model using marginal properties like covariances, but at the same time we can do fast computations since the Markov properties make the precision matrix very sparse. It also allows us to add this component in the \texttt{R-INLA}\xspace framework, like any other GMRF model-component.
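The idea can be caricatured in one dimension. The Python sketch below (an assumed crude finite-difference discretisation with $\alpha=2$, ignoring boundary corrections and the proper FEM mass matrix) builds a banded precision matrix whose implied correlations decay with distance like a Mat\'ern.

```python
import numpy as np

def matern_precision_1d(n, kappa, h=1.0):
    # discretise (kappa^2 - Laplacian) on a regular grid as a banded L;
    # Q = L' L then corresponds to alpha = 2 in eq. (18), and L is symmetric
    main = np.full(n, kappa**2 + 2.0 / h**2)
    off = np.full(n - 1, -1.0 / h**2)
    L = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    return L @ L

n = 300
Q = matern_precision_1d(n, kappa=0.3)
S = np.linalg.inv(Q)                   # dense inverse, fine at this size
i = n // 2
corr = S[i] / np.sqrt(S[i, i] * np.diag(S))
print(corr[i + 5], corr[i + 50])       # correlation decays with distance
```

This is the computational point of the SPDE link: the precision matrix is pentadiagonal (sparse), yet the implied covariance is of Mat\'ern type, so one can model with covariances and compute with Markov properties.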
The dual interpretation of Mat\'ern fields, through both their covariances and their Markov properties, is very convenient from both a computational and a statistical modelling point of view \citep{art512,art508,art527}. The same ideas also apply to non-stationary Gaussian fields using non-homogeneous versions of an appropriate SPDE \citep{art500,art529,art581,art532}, Gaussian fields that treat land as a barrier to spatial correlation \citep{tech125}, multivariate random fields \citep{art591}, log-Gaussian Cox processes \citep{art583}, and in the near future also to non-separable space-time models.
We end this section with a simple example of spatial survival analysis taken from \cite{art458}, studying spatial variation in leukaemia survival data in north-west England in the period 1982--1998. The focus of the example is to see how, and how easily, the spatial model integrates into the model definition \citep{art494}. We therefore omit further details about the dataset and refer to the original article.
First, we need to load the data and create the mesh, i.e.\ a triangulation of the area of interest to represent the finite dimensional approximation to \eref{eq18}. \begin{verbatim}
library(INLA)
data(Leuk)
loc <- cbind(Leuk$xcoord, Leuk$ycoord)
bnd1 <- inla.nonconvex.hull(loc, convex=0.05)
bnd2 <- inla.nonconvex.hull(loc, convex=0.25)
mesh <- inla.mesh.2d(loc, boundary=list(bnd1, bnd2),
max.edge=c(0.05, 0.2), cutoff=0.005) \end{verbatim} \Fig{fig5}a displays the study area and the locations of the events, while \Fig{fig5}b shows the associated mesh with respect to which we define the SPDE model. We use an additional rougher mesh to reduce boundary effects. \begin{figure}
\caption{Panel (a) shows the area of north-west England for the
leukaemia study, where the (post-code) locations of the events
are shown as dots. Panel (b) overlays the mesh used for the
SPDE model.}
\label{fig5}
\end{figure} The next step is to create a mapping matrix from the mesh onto the locations where the data are observed. Then we define the SPDE model and the statistical model, including covariates like sex, age, white blood-cell counts (wbc) and the Townsend deprivation index (tpi), and call a book-keeping function which keeps the indices in correct order. Finally, we call \texttt{inla()} to do the analysis, assuming a Weibull likelihood. Note that application of a Cox proportional hazard model will give similar results.
\begin{verbatim}
A <- inla.spde.make.A(mesh, loc)
spde <- inla.spde2.matern(mesh, alpha=2) ## alpha=2 is the default choice
formula <- inla.surv(time, cens) ~ 0 + a0 + sex + age + wbc + tpi +
    f(spatial, model=spde)
stk <- inla.stack(data=list(time=Leuk$time, cens=Leuk$cens), A=list(A, 1),
effect=list(list(spatial=1:spde$n.spde),
            data.frame(a0=1, Leuk[,-c(1:4)])))
r <- inla(formula, family="weibull", data=inla.stack.data(stk),
control.predictor=list(A=inla.stack.A(stk))) \end{verbatim}
\Fig{fig6} shows the estimated spatial effect, with the posterior mean (left) and posterior standard deviation (right). \begin{figure}
\caption{The spatial effect in the model (left: mean, right:
standard deviation).}
\label{fig6}
\end{figure}
\subsection{Special Features}
In addition to standard analyses, the \texttt{R-INLA}\xspace package also contains non-standard features that really boost the complexity of models that can be specified and analysed. Here, we give a short summary of these; for more details, see \cite{art522}. \begin{description} \item[replicate] Each model component given as a \texttt{f()}-term can
be replicated, creating \texttt{nrep} iid replications with shared
hyperparameters. For example, \begin{verbatim} f(time, model="ar1", replicate=person) \end{verbatim}
defines one AR(1) model for each person sharing the same
hyperparameters. \item[group] Each model component given as a \texttt{f()}-term, can be
grouped, creating \texttt{ngroup} dependent replications with a
separable correlation structure. To create a separable space-time
model, with an AR(1) dependency in time, we can specify \begin{verbatim} f(space, model=spde, group=time, control.group = list(model = "ar1")) \end{verbatim}
\cite{art492} used grouped smoothing priors in \texttt{R-INLA}\xspace to impute
missing mortality rates for a specific country by taking advantage
of similar countries where these data are available. The authors
provide the corresponding R-code in the supplementary material. We
can both group and replicate model components. \item[A-matrix] We can create a second layer of linear predictors
where \mm{\eta} is defined by the \texttt{formula}, but where
$\mm{\eta}^{*} = \mm{A}\mm{\eta}$ is connected to the
observations. Here, \mm{A} is a constant (sparse) matrix; see the
above spatial example. \item[Linear combinations] We can also compute posterior marginals of
$\mm{v} = \mm{B}\mm{x}$ where \mm{x} is the latent field and
$\mm{B}$ is a fixed matrix. This could for example be
$\beta_1 - \beta_2$ for two fixed effects, or any other linear
combinations. Here is an example computing the posterior for the
difference between two linear effects, $\beta_u - \beta_{v}$ \begin{verbatim} lc = inla.make.lincomb(u=1, v=-1) r = inla(y ~ u + v, data = d, lincomb = lc) \end{verbatim} \item[Remote server] It is easy to set up a remote MacOSX/Linux server
to host the computations while doing the \texttt{R}-work at your
local laptop. The job can be submitted and the results can be
retrieved later, or we can use it interactively. This is a very
useful feature for larger models. It also ensures that
computational servers will in fact be used, since we can work in a
local \texttt{R}-session but use a remote server for the
computations. Here is an example running the computations on a
remote server \begin{verbatim} r = inla(formula, family, data = data, inla.call = "remote") \end{verbatim}
To submit a job we specify \begin{verbatim} r = inla(formula, family, data = data, inla.call = "submit") \end{verbatim}
and we can check the status and retrieve the results when the
computations are done, by \begin{verbatim} inla.qstat(r) r = inla.qget(r) \end{verbatim} \item[R-support] Although the core inla-program is written in
\texttt{C}, it is possible to pass a user-defined latent model
component written in \texttt{R}, and use it like any other latent
model component. The \texttt{R}-code will be evaluated within the
\texttt{C}-program. This is very useful for more specialised model
components or re-parameterisations of existing ones, even though
it will run slower than a proper implementation in \texttt{C}. As
a simple example, the code below implements the model component
\texttt{iid}, which is just independent Gaussian random effects
${\mathcal N}_n(\mm{0}, (\tau\mm{I})^{-1})$. The skeleton of the
function is predefined, and must return the graph, the
\mm{Q}-matrix, initial values, the mean, the log normalising
constant and the log prior for the hyperparameter. \begin{verbatim} iid.model = function(cmd = c("graph", "Q", "mu", "initial",
"log.norm.const", "log.prior", "quit"),
theta = NULL, args = NULL) {
interpret.theta = function(n, theta)
return (list(prec = exp(theta[1L])))
graph = function(n, theta)
return (Diagonal(n, x= rep(1, n)))
Q = function(n, theta) {
prec = interpret.theta(n, theta)$prec
return (Diagonal(n, x= rep(prec, n))) }
mu = function(n, theta) return (numeric(0))
log.norm.const = function(n, theta) {
prec = interpret.theta(n, theta)$prec
return (sum(dnorm(rep(0, n),
sd = 1/sqrt(prec), log=TRUE))) }
log.prior = function(n, theta) {
prec = interpret.theta(n, theta)$prec
return (dgamma(prec, shape = 1, rate = 5e-05, log=TRUE)
+ theta[1L]) }
initial = function(n, theta) return (4.0)
quit = function(n, theta) return (invisible())
val = do.call(match.arg(cmd),
args = list(n = as.integer(args$n), theta = theta))
return (val) }

n = 50 ## the dimension
my.iid = inla.rgeneric.define(iid.model, n=n)
\end{verbatim}
Hence, we can replace \texttt{f(idx,model="iid")} with our own
\texttt{R}-implementation, using \texttt{f(idx, model=my.iid)}.
For details on the format, see \texttt{inla.doc("rgeneric")} and
\texttt{demo(rgeneric)}. \end{description}
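As a cross-check on the \texttt{log.norm.const} function in the skeleton above, the following standalone sketch (plain Python, not \texttt{R-INLA} code; the function name is ours) evaluates the same quantity, the sum of $n$ terms $\log\mathcal{N}(0;0,\tau^{-1})$, and compares it with the closed form $\tfrac{n}{2}(\log\tau-\log 2\pi)$:

```python
import math

def log_norm_const(n, log_prec):
    # Mirrors sum(dnorm(rep(0, n), sd = 1/sqrt(prec), log = TRUE))
    # from the rgeneric skeleton: n iid Gaussians with precision prec.
    prec = math.exp(log_prec)        # theta[1] is the log precision
    sd = 1.0 / math.sqrt(prec)
    per_term = -0.5 * math.log(2 * math.pi) - math.log(sd)
    return n * per_term

# Closed form: (n/2) * (log(prec) - log(2*pi))
n, log_prec = 50, 4.0
closed = 0.5 * n * (log_prec - math.log(2 * math.pi))
print(abs(log_norm_const(n, log_prec) - closed) < 1e-9)  # True
```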
\section{A CHALLENGE FOR THE FUTURE: PRIORS} \label{sec:priors}
Although the \texttt{R-INLA}\xspace project has been highly successful, it has also revealed some ``weak points'' in general Bayesian methodology from a practical point of view. In particular, our main concern is how we think about and specify priors in LGMs. We will now discuss this issue and our current plan to provide good sensible ``default'' priors.
Bayesian statistical models require prior distributions for all the random elements of the model. Working within the class of LGMs, this involves choosing \emph{priors} for all the hyperparameters \mm{\theta} in the model, since the latent field is by definition Gaussian. We deliberately wrote prior\emph{s} since it is common practice to define independent priors for each $\theta_j$, while what we really should aim for is a joint prior for all \mm{\theta}, when appropriate.
The ability to incorporate prior knowledge in Bayesian statistics is a great tool and potentially very useful. However, except for cases where we do have ``real/experimental'' prior knowledge, for example through results from previous experiments, it is often conceptually difficult to encode prior knowledge through probability distributions for all model parameters. Examples include priors for precision and overdispersion parameters, or the amount of t-ness in the Student-t distribution. \cite{art631} discuss these aspects in great detail.
In \texttt{R-INLA}\xspace we have chosen to provide default prior distributions for all parameters. We admit that currently these have been chosen partly based on the priors that are commonly used in the literature and partly out of the blue. It might be argued that this is not a good strategy, and that we should force the user to provide the complete model including the joint prior. This is a valid point, but all priors in \texttt{R-INLA}\xspace can easily be changed, allowing the user to define any arbitrary prior distribution. So the whole argument boils down to a question of convenience.
Do we have a ``Houston, we have a problem''-situation with priors? Looking at current practice within the Bayesian community, we have come to the conclusion that we do. We will argue for this through a simple example, showing what can go wrong, how we can think about the problem and how we can fix it. We only discuss proper priors.
Consider the problem of replacing a linear effect of the Townsend deprivation index \texttt{tpi} with a smooth effect of \texttt{tpi} in the Leukaemia example in \Sec{sec:spatial}. This is easily implemented by replacing \verb|tpi| with \verb|f(tpi, model="rw2")|. Here,
\verb|rw2| is a stochastic spline, simply saying that the second derivative is independent Gaussian noise \citep{book80,art435}. By default, we constrain the smooth effect to also sum to zero, so that these two model formulations are the same in the limit as the precision parameter $\tau$ tends to infinity, and a vague Gaussian prior is used for the linear effect. The question is which prior should be used for $\tau$. The overwhelming majority of cases in the literature use some kind of Gamma$(a,b)$ prior for $\tau$, implying that $\pi(\tau)\propto \tau^{a-1}\exp(-b\tau)$, for some $a,b>0$. This prior is flexible, conjugate with the Gaussian, and seems like a convenient choice. Since almost everyone else is using it, how wrong can it be?
If we rewind to the point where we replaced the linear effect with a smooth effect, we realise that we do this because we want a more flexible model than the linear effect, i.e.\ we also want to capture \emph{deviations} from the linear effect. Implicitly, \emph{if} there is a linear effect, we do want to retrieve that with enough data. Measuring the distance between the straight line and the stochastic spline using the Kullback-Leibler divergence, we find that $\text{KLD} \propto 1/\tau$ meaning that the (unidirectional) distance is $d \propto\sqrt{1/\tau}$. For simplicity, choose $a=b=1$ in the Gamma-prior, then the derived prior for the distance $d$ is \begin{equation}\label{eq19}
\pi(d) \propto \exp(-1/d^{2})/d^{3}. \end{equation} \Fig{fig7}a displays this prior on the distance scale, revealing two surprising features. First, the mode is around $d\approx0.82$, and second, the prior appears to be zero for a range of positive distances. The second feature is serious, as it simply \emph{prevents} the spline from getting too close to the linear effect. It is clear from \eref{eq19} that the effect is severe: in practice, $\pi(d) \approx 0$ for a whole range of positive $d$. This is an example of what \cite{art631} call prior \emph{overfitting}; the prior prevents the simpler model from being recovered, even when it is the true model. \begin{figure}
\caption{Panel (a) shows the Gamma$(1,1)$ prior on the distance
scale. Panel (b) shows the smoothed effect of covariate
\texttt{tpi} using the exponential prior on the distance scale
$\lambda\exp(-\lambda )$.}
\label{fig7}
\end{figure} Choosing different parameters in the Gamma-prior does not change the overfitting issue. For all $a,b>0$, the corresponding prior for the distance tends to $0$ as $d\rightarrow 0$. For a (well-behaved) prior to have $\pi(d=0) > 0$, we need $E(\tau)=\infty$.
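The shape in \Fig{fig7}a can be verified by a direct change of variables. Writing $d=\tau^{-1/2}$ (absorbing the proportionality constant in $d\propto\sqrt{1/\tau}$) and $\pi(\tau)=e^{-\tau}$ for the Gamma$(1,1)$ prior, we get \begin{equation*} \pi(d)=\pi(\tau(d))\left|\frac{\mathrm{d}\tau}{\mathrm{d}d}\right| = e^{-1/d^{2}}\cdot\frac{2}{d^{3}} \propto \frac{e^{-1/d^{2}}}{d^{3}}, \end{equation*} in agreement with \eref{eq19}. Setting $\frac{\mathrm{d}}{\mathrm{d}d}\log\pi(d)=2/d^{3}-3/d$ to zero gives the mode $d=\sqrt{2/3}\approx 0.82$, while the factor $e^{-1/d^{2}}$ drives $\pi(d)$ to zero faster than any polynomial as $d\to 0$.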
If we are concerned about the behaviour of the distance between the more flexible and the simpler model component, we should define the prior directly on the distance, as proposed in \cite{art631}. A prior for the distance should decay with its mode at distance zero. This makes the simpler model central and the point of attraction. The exponential prior is recommended as a generic choice since it has a constant rate penalisation, $\pi(d) = \lambda \exp(-\lambda d)$. The value of $\lambda$ could be chosen by calibrating some property of the model component under consideration. Note that this way of defining the prior is invariant to reparameterisations, as it is defined on the distance and not for a particular parameterisation.
Let us return to the stochastic spline example, assigning the exponential prior to the distance. The parameter $\lambda$ can be calibrated by imposing the knowledge that the effect of \texttt{tpi} is not likely to be above $1$ on the linear predictor scale, \begin{verbatim} ..+ f(tpi, model="rw2", scale.model = TRUE,
hyper = list(prec = list(prior="pc.prec", param=c(1, 0.01)))) \end{verbatim} Here, \texttt{scale.model} is required to ensure that the parameter $\tau$ represents the precision, not just \emph{a} precision parameter \citep{art521}. The estimated results are given in \Fig{fig7}b, illustrating the point-wise posterior mean, median and the $2.5\%$ and $97.5\%$ credibility intervals, for the effect of \texttt{tpi} on the mean survival time.
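The \texttt{pc.prec} parameters \texttt{param=c(1, 0.01)} encode the tail condition $\operatorname{Prob}(\sigma > u)=\alpha$ with $u=1$ and $\alpha=0.01$ for the standard deviation $\sigma=1/\sqrt{\tau}$, which fixes the rate $\lambda$ of the exponential prior on the distance scale. A quick numerical check of the implied rate (plain Python, independent of \texttt{R-INLA}):

```python
import math

u, alpha = 1.0, 0.01            # param = c(1, 0.01) in the f() call above
lam = -math.log(alpha) / u      # rate of the exponential prior on sigma
# Under pi(sigma) = lam * exp(-lam * sigma), the tail probability is
# P(sigma > u) = exp(-lam * u), which recovers alpha by construction.
print(round(lam, 3))                            # 4.605
print(abs(math.exp(-lam * u) - alpha) < 1e-12)  # True
```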
Here, we have only briefly addressed the important topic of constructing well-working priors, and we are currently focusing a lot of activity on this issue to take the development further. Among other things, we plan to integrate automatic tests for prior sensitivity, following the work of \citet{roos-held-2011,art573}. The final goal is to use the above ideas to construct a joint default prior for LGMs, which can be easily understood and interpreted. A main issue is how to decompose and control the variance of the linear predictor, which we have not discussed here. For further information, see \cite{art631}, the original report introducing the class of penalised complexity (PC) priors. Examples of applications of these priors include disease mapping \citep{art585}, bivariate meta-analysis \citep{art586,guo-riebler-2015}, age-period-cohort models \citep{riebler-held-2016}, Bayesian P-splines \citep{art630}, structured additive distributional regression \citep{klein-kneib-2016}, Gaussian fields in spatial statistics \citep{art590}, modelling monthly maxima of instantaneous flow \citep{ferkingstad-etal-2016} and autoregressive processes \citep{tech126}.
Interestingly, the framework and ideas of PC priors are also useful for sensitivity analysis of model assumptions and for developing robust models, but it is too early to report on this here. Stay tuned!
\section{DISCUSSION} \label{sec:discussion}
We hope we have convinced the reader that the INLA approach to approximate Bayesian inference for LGMs is a useful addition to the applied statistician's toolbox; the key components just play so nicely together, providing highly accurate approximations while reducing computational costs substantially. The key benefit of the INLA approach is that it is central to our long-term goal of making LGMs a class of models that we (as a community) can \emph{use and understand}.
Developing, writing and maintaining the code-base for such a large open-source project is a huge job. Nearly all the \texttt{R/C/C++} code is written and maintained by F.\ Lindgren (20\%) and H.\ Rue (80\%), and is the result of a substantial amount of work over many years. Many more have contributed indirectly by challenging the current practice and implementation. The current version of this project is a result of the cumulative effort of the many users, and their willingness to share, challenge and question essentially everything. Documentation is something we could and should improve upon, but the recent book by \cite{book125} does a really good job.
The current status of the package is good, but we have to account for the fact that the software has been developed over many years and is basically the version we used while developing the methods. Hence, while the software works well, it is less streamlined and less easy to maintain than it ought to be. We are now at a stage where we know what we want the package to do and the software to be, hence a proper rewrite by skilled people would really be a useful project for the community. If this were to happen, we would be more than happy to contribute all our knowledge to such a ``version 2.0'' project!
Another use of \texttt{R-INLA}\xspace is purely as a computational back-end. The generality of \texttt{R-INLA}\xspace comes with a price of complexity for the user, hence a simplified interface for a restricted set of models can be useful to improve accessibility for a specific target audience or to provide additional tools that are mainly relevant for these models. Examples of such projects are \texttt{AnimalINLA} \citep{art622}, \texttt{ShrinkBayes} \citep{art624,art514,art623,riebler-etal-2014}, \texttt{meta4diag} \citep{guo-riebler-2015}, \texttt{BAPC} \citep{riebler-held-2016}, \texttt{diseasemapping} and \texttt{geostatsp} \citep{art625}, and \cite{art528}. Similarly, the \texttt{excursions} package for calculating joint exceedance probabilities in GMRFs \citep{bolin-lindgren-2015,bolin-lindgren-2016} includes an interface to analyse LGMs estimated by \texttt{R-INLA}. Recent work on methodology for filtered spatial point patterns in the context of distance sampling \citep{art589} has initiated the construction of wrapper software for fitting other complex spatial models, such as those resulting from plot sampling data or point process models, within \texttt{R-INLA}\xspace. There is also an interesting line of research using \texttt{R-INLA}\xspace to do approximate inference on a sub-model within a larger model; see \cite{art407} for a theoretical justification and \cite{art503} for an early application of this idea. One particular application here is how to handle missing data in cases where the joint model is not an LGM.
Please visit us at \texttt{www.r-inla.org}!
\end{document}
\begin{document}
\begin{abstract} Let $(X, C)$ be a germ of a threefold $X$ with terminal singularities along an irreducible reduced complete curve $C$ with a contraction $f: (X, C)\to (Z, o)$ such that $C=f^{-1}(o)_{\operatorname{red}}$ and $-K_X$ is ample.
This paper continues our study of such germs containing a point of type \type{(IIA)} started in \cite{Mori-Prokhorov-IIA-1}. \end{abstract} \title{Threefold extremal contractions of type \type{(IIA)}}
\section{Introduction} Let $(X,C)$ be a germ of a threefold with terminal singularities along a reduced complete curve. We say that $(X,C)$ is an \textit{extremal curve germ} if there is a contraction $f: (X,C)\to (Z,o)$ such that $C=f^{-1}(o)_{\operatorname{red}}$ and $-K_X$ is $f$-ample. Furthermore, $f$ is called \textit{flipping} if its exceptional locus coincides with $C$ and \textit{divisorial} if its exceptional locus is two-dimensional. If $f$ is not birational, then $\dim Z=2$ and $(X,C)$ is said to be a \textit{$\mathbb{Q}$-conic bundle germ} \cite{Mori-Prokhorov-2008}.
In this paper we consider only extremal curve germs with \textit{irreducible} central fiber $C$. All the possibilities for the local behavior of $(X,C)$ are classified into types \type{(IA)}, \type{(IC)}, \type{(IIA)}, \type{(IIB)}, \type{(IA^\vee)}, \type{(II^\vee)}, \type{(ID^\vee)}, \type{(IE^\vee)}, and \type{(III)}, for whose definitions we refer the reader to \cite{Mori-1988} and \cite{Mori-Prokhorov-2008}.
In this paper we complete the classification of extremal curve germs containing points of type \type{(IIA)} started in \cite{Mori-Prokhorov-IIA-1}. As in \cite{Kollar-Mori-1992}, \cite{Mori-Prokhorov-IA}, and \cite{Mori-Prokhorov-IC-IIB}
the classification is done in terms of a general hyperplane section, that is, a general divisor $H$ of $|\mathscr{O}_X|_C$, the linear subsystem of $|\mathscr{O}_X|$ consisting of sections containing $C$. The case where $H$ is normal was treated in \cite{Mori-Prokhorov-IIA-1}. In this paper we consider the case of non-normal $H$. Our main result is the following. \begin{theorem}\label{main}
Let $(X,C)$ be an extremal curve germ and let $f: (X, C)\to (Z,o)$ be the corresponding contraction. Assume that $(X,C)$ has a point $P$ of type \type{(IIA)}. Furthermore, assume that the general member $H\in |\mathscr{O}_X|_C$ is not normal. Then the following are the only possibilities for the dual graph of $(H,C)$, and all the possibilities do occur.
\begin{enumerate} \renewcommand\labelenumi{{\rm (\arabic{section}.\arabic{subsection}.\arabic{enumi})}\refstepcounter{equation}} \renewcommand\theenumi{(\arabic{section}.\arabic{subsection}.\arabic{enumi})}
\item \label{main-theorem-divisorial} $f$ is divisorial\footnote{This case was erroneously omitted in \cite[Th. 3.6 and Cor. 3.8]{Tziolas2005}.}, $f(H)\ni o$ is of type \type{D_{5}}, \begin{equation*} \xy \xymatrix@R=7pt@C=17pt{ &\circ\ar@{-}[d] \\ \underset {} \bullet \ar@{-}[r] &\underset 3\circ\ar@{-}[r]&\circ\ar@{-}[r]&\circ \\ &\circ\ar@{-}[u] } \endxy \end{equation*}
\item \label{main-theorem-conic-bundle} $f$ is a $\mathbb{Q}$-conic bundle over a smooth surface, \begin{equation*} \vcenter{ \xy \xymatrix@R=7pt@C=11pt{ &\circ\ar@{-}[r]&\overset {3}\circ\ar@{-}[d]\ar@{-}[r]&\circ \\ \bullet \ar@{-}[r] &\underset {}\circ\ar@{-}[r]&\circ\ar@{-}[r]&\circ } \endxy} \end{equation*} \end{enumerate} In both cases $X$ can have at most one extra point of type \type{(III)}.
\end{theorem}
\begin{remark} If $(X,C)$ is an extremal curve germ of type \type{(IIA)}, then according to \cite[Corollary 2.6]{Mori-Prokhorov-IIA-1}
the general member $H\in |\mathscr{O}_X|_C$ is not normal if and only if $\operatorname{H}^0(\operatorname{gr}_C^1\mathscr{O})=0$. \end{remark}
Note that the description of a member $H\in |\mathscr{O}_X|_C$ is just a part of our results. We also describe the infinitesimal structure of the corresponding extremal curve germs. Refer to \eqref{equation-possibilities-lP=3+III-b} and \ref{scorollary-9-6-8} for the case \ref{main-theorem-divisorial} and to \eqref{equation-lP=4-gr-2-C-O} and \ref{case-conic-bundle} for the case \ref{main-theorem-conic-bundle}. We also provide many examples (see \ref{example-divisorial-lP=3}, \ref{example-divisorial-lP=7}, \ref{example-conic-bundle-lP=4+III}, \ref{example-conic-bundle-lP=8}).
The proof of the main theorem splits into cases according to the invariant $\ell(P)$ which, in our case, can take values $\ell(P)\in\{ 3,\, 4,\, 7,\, 8\}$ (see \ref{equation-iP} and Proposition \ref{proposition-cases-lP}). Cases of odd and even $\ell(P)$ will be considered in Sections \ref{section-lP=3-III} and \ref{section-lP=4}, respectively.
\section{Preliminaries}\label{section-Preliminaries} \begin{setup} \label{Set-up} Let $(X,C)$ be an extremal curve germ and let $f: (X, C)\to (Z,o)$ be the corresponding contraction. We denote the ideal sheaf of $C$ in $X$ by $I_C$, or simply by $I$. Assume that $(X,C)$ has a point $P$ of type \type{(IIA)}. Then by \cite[6.7, 9.4]{Mori-1988} and \cite[8.6, 9.1, 10.7]{Mori-Prokhorov-2008} $P$ is the only non-Gorenstein point of $X$ and $(X,C)$ has at most one Gorenstein singular point $R$ \cite[6.2]{Mori-1988}, \cite[9.3]{Mori-Prokhorov-2008}. If $\operatorname{H}^0(\operatorname{gr}_C^1\mathscr{O})=0$, then $(X,C)$ is not flipping \cite[ch. 7]{Kollar-Mori-1992}. \end{setup}
\begin{scase}\label{sde} Thus, in the case $\operatorname{H}^0(\operatorname{gr}_C^1\mathscr{O})=0$, we have two possibilities: \begin{itemize} \item $f$ is a $\mathbb{Q}$-conic bundle and $(Z,o)$ is smooth \cite[Th. 1.2]{Mori-Prokhorov-2008}; \item $f$ is a divisorial contraction and $(Z,o)$ is a cDV point (or smooth) \cite[Th. 3.1]{Mori-Prokhorov-IA}. \end{itemize} \end{scase}
\begin{case}\label{equation-iP} Everywhere in this paper $(X,P)$ denotes a terminal singularity of type \type{cAx/4} and $(X^\sharp, P^\sharp)\to (X,P)$ denotes its index-one cover.
Let \begin{equation*} \ell(P):=\operatorname{len}_P I^{\sharp (2)}/I^{\sharp 2}, \end{equation*} where $I^\sharp$ is the ideal defining $C^\sharp$ in $X^\sharp$. Recall (see \cite[(2.16)]{Mori-1988}) that in our case \begin{equation*} i_P(1)=\lfloor(\ell(P)+6)/4\rfloor. \end{equation*}
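For the values of $\ell(P)$ that occur in this paper this formula gives \begin{equation*} i_P(1)=\lfloor(\ell(P)+6)/4\rfloor= \begin{cases} 2 & \text{if } \ell(P)=3,\ 4,\\ 3 & \text{if } \ell(P)=7,\ 8, \end{cases} \end{equation*} consistent with Proposition \ref{proposition-cases-lP}.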
\end{case}
\begin{case} \label{equation-IIA-point} According to \cite[A.3]{Mori-1988} we can express the \type{(IIA)} point as \begin{equation} \label{equation-XC} \begin{split} (X, P)&= \{\alpha=0\}/{\boldsymbol{\mu}}_4(1, 1, 3, 2)\subset\mathbb{C}^4_{y_1,\dots, y_4}/{\boldsymbol{\mu}}_4(1, 1, 3, 2), \\ C&=\{y_1\text{-axis}\}/{\boldsymbol{\mu}}_4, \end{split} \end{equation} where $\alpha=\alpha(y_1,\dots, y_4)$ is a semi-invariant such that \begin{equation}\label{equation-alpha} \operatorname{wt}\alpha\equiv 2\mod 4,\qquad \alpha\equiv y_1^{\ell(P)}y_j\mod (y_2, y_3, y_4)^2, \end{equation} where $j= 2$ (resp. $3$, $4$) if $\ell(P)\equiv 1$ (resp. $3$, $0$) $\mod 4$ \cite[(2.16)]{Mori-1988} and $(I^\sharp)^{(2)}=(y_j)+(I^\sharp)^{2}$. Moreover, $y_2^2,\, y_3^2\in \alpha$ (because $(X,P)$ is of type \type{cAx/4}). \end{case}
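The rule for the index $j$ can be checked directly from the weights: since $\operatorname{wt}(y_1,\dots,y_4)=(1,1,3,2)$ and $\operatorname{wt}\alpha\equiv 2\mod 4$, the monomial $y_1^{\ell(P)}y_j$ in \eqref{equation-alpha} satisfies $\ell(P)+\operatorname{wt} y_j\equiv 2\mod 4$, that is, \begin{equation*} j=2 \iff \ell(P)\equiv 1,\qquad j=3 \iff \ell(P)\equiv 3,\qquad j=4 \iff \ell(P)\equiv 0 \pmod 4. \end{equation*}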
\begin{case}\label{ge}
Recall that in our case the general member $D\in |-K_X|$ does not contain $C$ \cite[Th. 7.3]{Mori-1988}, \cite[Prop. 1.3.7]{Mori-Prokhorov-2008}. Hence $D\cap C=\{P\}$, $D\simeq f(D)$, and $D$ has at $P$ a singularity of type \type{D_{2n+1}} \cite[6.4B]{Reid-YPG1987}. In the coordinates $y_1,\dots,y_4$, the divisor $D$ is given by \begin{equation*} D=\{y_1= \xi\}/{\boldsymbol{\mu}}_4,\qquad \xi\in (y_2,\, y_3,\, y_4). \end{equation*} \end{case}
\begin{scase}\label{sde1}
Let $H$ be a general member of $|\mathscr{O}_X|_C$ through $C$ and let $\beta\in \operatorname{H}^0(I_C)$ be a non-zero section defining $H$. Let $H_Z=f(H)$ and let $\psi: H^{\operatorname{n}}\to H$ be the normalization. The composition map $H^{\operatorname{n}}\to H_Z$ has connected fibers. Moreover, it is a rational curve fibration if $\dim Z=2$, and if $f$ is divisorial it is a birational contraction to a point $(H_Z, o)$ which is either a smooth point or a Du Val point of type \type{A} or \type{D} (see \ref{ge}).
In both cases $H^{\operatorname{n}}$ has only rational singularities. \end{scase}
For the convenience of the reader we formulate the following lemma, which follows from the standard exact sequence \begin{equation*} 0\xrightarrow{\hspace*{20pt}} I^{(n+1)} \xrightarrow{\hspace*{20pt}} I^{(n)} \xrightarrow{\hspace*{20pt}} \operatorname{gr}_C^n\mathscr{O}\xrightarrow{\hspace*{20pt}} 0. \end{equation*}
\begin{lemma}\label{lemma-grC} Let $(X,C)$ be an extremal curve germ. Then the following assertions hold. \begin{enumerate} \item \label{lemma-grC-1} If $\operatorname{H}^1(\operatorname{gr}_C^n\mathscr{O})=0$ and the map $\operatorname{H}^0(I^{(n)})\to \operatorname{H}^0(\operatorname{gr}_C^n\mathscr{O})$ is surjective, then $\operatorname{H}^1(I^{(n+1)})\simeq \operatorname{H}^1(I^{(n)})$.
\item \label{lemma-grC-2} If for all $i<n$ one has $\operatorname{H}^1(\operatorname{gr}_C^i\mathscr{O})=0$ and the map $\operatorname{H}^0(I^{(i)})\to \operatorname{H}^0(\operatorname{gr}_C^i\mathscr{O})$ is surjective, then $\operatorname{H}^1(I^{(n)})\simeq \operatorname{H}^1(\operatorname{gr}_C^n\mathscr{O})=0$. \item \label{lemma-grC-3} In particular, $\operatorname{H}^1(I)= \operatorname{H}^1(\operatorname{gr}_C^1\mathscr{O})=0$ and if $\operatorname{H}^0(\operatorname{gr}_C^1\mathscr{O})=0$, then $\operatorname{H}^1(I^{(2)})= \operatorname{H}^1(\operatorname{gr}_C^2\mathscr{O})=0$. \end{enumerate} \end{lemma}
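Indeed, the short exact sequence above induces the cohomology exact sequence \begin{equation*} \operatorname{H}^0(I^{(n)}) \longrightarrow \operatorname{H}^0(\operatorname{gr}_C^n\mathscr{O}) \longrightarrow \operatorname{H}^1(I^{(n+1)}) \longrightarrow \operatorname{H}^1(I^{(n)}) \longrightarrow \operatorname{H}^1(\operatorname{gr}_C^n\mathscr{O}), \end{equation*} so surjectivity of the first map and the vanishing of $\operatorname{H}^1(\operatorname{gr}_C^n\mathscr{O})$ yield $\operatorname{H}^1(I^{(n+1)})\simeq \operatorname{H}^1(I^{(n)})$, which is assertion \ref{lemma-grC-1}; the remaining assertions follow by iterating.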
The following auxiliary result can be proved by induction on $n$. \begin{proposition}\label{proposition-lP=4-XC} Let $(X,P)\subset \mathbb{C}^4_{x_1,\dots,x_4}$ be a hypersurface containing $C:=\{\text{$x_1$-axis}\}$ with defining equation $h\in \mathbb{C}\{x_1,\dots,x_4\}$ such that \begin{equation*} h=x_1^mx_4+h_2(x_2,x_3)+h_3(x_1,\dots,x_4), \end{equation*} where $h_2$ is a quadratic form in $x_2$ and $x_3$, $h_3\in (x_2,x_3,x_4)^3$, and $m\ge 1$. Let $I=(x_2,x_3,x_4)$ be the ideal of $C$. Let \begin{equation*} \operatorname{gr}_C^{\bullet}:= \bigoplus_{n\ge 0} \operatorname{gr}_C^n\mathscr{O} \end{equation*} be the graded $\mathscr{O}_C$-algebra with the degree $n$ part $\operatorname{gr}_C^n\mathscr{O}$. Then the following assertions hold. \begin{enumerate} \item \label{proposition-lP=4-XC-1} If $h_2=0$, then \begin{equation*} \operatorname{gr}_C^2\mathscr{O}= S^2\operatorname{gr}_C^1\mathscr{O}. \end{equation*}
\item \label{proposition-lP=4-XC-2} If $h_2\neq 0$, then \begin{equation*} \operatorname{gr}_C^{\bullet}\mathscr{O}\simeq\mathscr{O}_C[x_2,x_3,x_4]/(x_1^mx_4+h_2), \end{equation*} where $x_2,x_3,x_4$ have degree $1$, $1$, $2$, respectively.
\item \label{proposition-lP=4-XC-4} If $x_3^2\in h_2$, then \begin{equation*} \operatorname{gr}_C^{\bullet}\mathscr{O} =\mathscr{O}_C[x_2,x_4]\oplus x_3\mathscr{O}_C[x_2,x_4]. \end{equation*} \item \label{proposition-lP=4-XC-5} If $h_2=x_2x_3$, then \begin{equation*} \operatorname{gr}_C^{\bullet}\mathscr{O} =\mathscr{O}_C[x_4]\oplus x_2\mathscr{O}_C[x_2,x_4]\oplus x_3\mathscr{O}_C[x_3,x_4]. \end{equation*} \end{enumerate} \end{proposition}
\begin{proposition}\label{proposition-cases-lP} Assume that $\operatorname{H}^0(\operatorname{gr}_C^1\mathscr{O})=0$. Then \begin{equation} \label{equation-gr1CO} \operatorname{gr}_C^1\mathscr{O}\simeq \mathscr{O}(-1)\oplus \mathscr{O}(-1) \end{equation} \textup(as an abstract sheaf\textup) and one of the following possibilities holds: \begin{enumerate} \item $\operatorname{Sing}(X)=\{P\}$, $i_P(1)=3$, and $\ell(P)=7$ or $8$, \item $\operatorname{Sing}(X)=\{P,\, R\}$, where $R$ is a type \type{(III)} point, $i_P(1)=2$, $i_R(1)=1$, and $\ell(P)=3$ or $4$. \end{enumerate} \end{proposition}
\begin{proof} Write $\operatorname{gr}_C^1\mathscr{O}\simeq \mathscr{O}(a_1)\oplus \mathscr{O}(a_2)$ for some $a_1$, $a_2$. Since $\operatorname{H}^0(\operatorname{gr}_C^1\mathscr{O})=0$, we have $a_1, a_2<0$. On the other hand, $\operatorname{H}^1(\operatorname{gr}_C^1\mathscr{O})=0$ (see Lemma \ref{lemma-grC}\ref{lemma-grC-3}). Hence, $a_1=a_2=-1$. Recall that $\ell(P)\not\equiv 2\mod 4$.
Consider the case where $P$ is the only singular point of $X$. Then $i_P(1)=3$ by \cite[(2.3.2)]{Mori-1988} and \cite[(3.1.2), (4.4.3)]{Mori-Prokhorov-2008}. According to \cite[2.16]{Mori-1988} we have $7\le \ell(P)\le 9$. Assume that $\ell(P)=9$. Then using a deformation of the form $\alpha_t=\alpha+t y_1y_2$ (see \eqref{equation-alpha}), we get a germ $(X_t,C_t)$ having a point $P_t$ of type \type{(IIA)} with $\ell(P_t)=1$ and two type \type{(III)} points. This is impossible by \cite[7.4.1]{Kollar-Mori-1992} and \cite[9.1]{Mori-Prokhorov-2008}.
Suppose $\operatorname{Sing}(X)\neq \{P\}$. Then by \cite[6.7]{Mori-1988} and \cite[8.6, 9.1]{Mori-Prokhorov-2008} we have $\operatorname{Sing}(X)=\{P,\, R\}$, where $R$ is a type \type{(III)} point. If $i_R(1)>1$, then by using a deformation at $R$ we obtain an extremal curve germ with one point of type \type{(IIA)} and at least two points of type \type{(III)}. This is impossible again by \cite[6.7]{Mori-1988} and \cite[9.1]{Mori-Prokhorov-2008}. Therefore, $i_R(1)=1$ and so $i_P(1)=2$. By \cite[2.16]{Mori-1988} we have $3\le \ell(P)\le 5$. Assume that $\ell(P)=5$. Using a deformation of the form $\alpha_t=\alpha+t y_1y_2$, we obtain a germ $(X_t,C_t)$ having a point $P_t$ with $\ell(P_t)=1$ and two type \type{(III)} points. This is impossible by \cite[7.4.1]{Kollar-Mori-1992} and \cite[9.1]{Mori-Prokhorov-2008}.
\begin{slemma} \label{lemma-lP=3+IIIa} If $\operatorname{H}^0(\operatorname{gr}_C^1\mathscr{O})=0$, then \begin{equation*}
\operatorname{gr}_C^2\mathscr{O} \simeq \mathscr{O}(a_1)\oplus\mathscr{O}(a_2)\oplus\mathscr{O}(a_3), \end{equation*} \textup(as an abstract sheaf\textup) with $a_i\ge -1$ and $\max \{a_1,\, a_2,\, a_3\}\ge 0$. \end{slemma} \begin{proof}
If $\operatorname{H}^0(\operatorname{gr}_C^1\mathscr{O})=0$, then the general member $H\in |\mathscr{O}_X|_C$ is singular along $C$.
According to \cite[Lemma 3.1.1]{Mori-Prokhorov-IIA-1} there exists a section $\beta\in \operatorname{H}^0(I)$ containing $y_4^2$ and $y_2y_3$ at $P^\sharp$. Therefore, $\beta\in \operatorname{H}^0(I^{(2)})$ and the image $\bar\beta$ of $\beta$ in $\operatorname{H}^0(\operatorname{gr}_C^2\mathscr{O})$ is non-zero. In particular, $\operatorname{H}^0(\operatorname{gr}_C^2\mathscr{O})\neq 0$. By Lemma \ref{lemma-grC}\ref{lemma-grC-3} we have $\operatorname{H}^1(\operatorname{gr}_C^2\mathscr{O})=0$ and the assertion follows.
\end{proof}
\section{Cases $\ell(P)=3$ and $7$} \label{section-lP=3-III} In this section we assume that $\ell(P)\in \{3,\, 7\}$. It will be shown that Computation \ref{computation-lP=3a-III} is applicable here and the possibility \ref{main-theorem-divisorial} occurs.
\begin{case}\label{notation-lP=3+III} By Proposition \ref{proposition-cases-lP} in the case $\ell(P)=3$ the variety $X$ has a type \type{(III)} point $R$ with $i_R(1)=1$ and $X$ is smooth outside $P$ in the case $\ell(P)=7$. According to \ref{equation-IIA-point} the equation of $X$ at $P$ has the form \begin{equation}\label{equation-alpha-lP=3-and-7} \alpha=y_1^{\ell(P)}y_3+y_2^2+y_3^2+\delta y_4^{2k+1}+c y_1^2y_4^2+\epsilon y_1y_3y_4+\xi y_1^3y_2y_4 +\cdots=0. \end{equation} Thus \begin{equation}\label{equation-alpha-lP=3-and-7-mod} \alpha\equiv y_1^{\ell(P)}y_3+y_2^2 \mod (y_2y_4,\, y_4^2)+ I^{(3)},\qquad y_3\in I^{(2)}. \end{equation} \end{case}
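Each term in \eqref{equation-alpha-lP=3-and-7} is indeed a semi-invariant of weight $\equiv 2\mod 4$: since $\ell(P)\equiv 3\mod 4$ here, \begin{equation*} \operatorname{wt}(y_1^{\ell(P)}y_3)\equiv 3+3\equiv 2,\quad \operatorname{wt}(y_2^2)=2,\quad \operatorname{wt}(y_3^2)=6\equiv 2,\quad \operatorname{wt}(y_4^{2k+1})=4k+2\equiv 2, \end{equation*} and likewise $\operatorname{wt}(y_1^2y_4^2)=\operatorname{wt}(y_1y_3y_4)=\operatorname{wt}(y_1^3y_2y_4)=6\equiv 2\pmod 4$.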
\begin{scase} In the case $\ell(P)=3$ by \cite[Lemma 2.16]{Mori-1988}, since $i_R(1)=1$, the equation of $X$ at $R$ has the form \begin{equation}\label{equation-beta-lP=3+III-1} \gamma=z_1z_3+\gamma_2(z_2,z_4)+\gamma_3(z_1,\dots,z_4), \end{equation} where $\gamma_2$ is a quadratic form, $\gamma_3\in (z_2,z_4)^3+(z_2,z_4)z_3+(z_3)^2$, and $C$ is the $z_1$-axis. \end{scase} \begin{scase} According to \eqref{equation-gr1CO}, since $y_4$ and $y_2$ form an $\ell$-free $\ell$-basis of $\operatorname{gr}^1_C\mathscr{O}$ at $P$, we have the following $\ell$-isomorphism \begin{equation} \label{equation-(7.4.1.1)-lP=3+III} \vcenter{ \xymatrix@R=6pt@C=-3pt{ \operatorname{gr}_C^1\mathscr{O}= &(-1+3P^\sharp)\ar@{=}[d]&\mathbin{\tilde\oplus}& (-1+2P^\sharp).\ar@{=}[d] \\ & \mathscr{A} && \mathscr{B} }} \end{equation} We choose the coordinates $y_1,\dots, y_4$ at $P$ keeping $y_1$ and $y_3$ the same so that $y_2$ is an $\ell$-basis of $\mathscr{A}$ and $y_4$ is an $\ell$-basis of $\mathscr{B}$. \end{scase}
\begin{sremark}\label{remarkProposition-lP=3+IIIa} By \eqref{equation-alpha-lP=3-and-7-mod} the semi-invariants $y_4^2$, $y_2y_4$, $y_3$ form an $\ell$-basis of $\operatorname{gr}_C^2\mathscr{O}$. \end{sremark}
\begin{lemma}\label{lemma-possibilities-lP=3+III} {}For $\operatorname{gr}_C^2\mathscr{O}$, one of the following possibilities holds \begin{numcases}{\operatorname{gr}_C^2\mathscr{O}=} (a)\mathbin{\tilde\oplus} (-1+P^\sharp)^{\mathbin{\tilde\oplus} 2},\quad a=0,\ 1 \label{equation-possibilities-lP=3+III-a} \\ (P^\sharp)\mathbin{\tilde\oplus} (0)\mathbin{\tilde\oplus} (-1+P^\sharp), \label{equation-possibilities-lP=3+III-b} \\ \mathscr{V}\mathbin{\tilde\oplus} (-1), \label{equation-possibilities-lP=3+III-c} \end{numcases} where $\mathscr{V}$ is some $\ell$-sheaf. \end{lemma}
\begin{proof} Consider the natural map \begin{equation} \label{equation-varphi} \varphi: \tilde S^2 \operatorname{gr}_C^1\mathscr{O}= \mathscr{A}^{\mathbin{\tilde\otimes} 2} \mathbin{\tilde\oplus} (\mathscr{A}\mathbin{\tilde\otimes}\mathscr{B})\mathbin{\tilde\oplus} \mathscr{B}^{\mathbin{\tilde\otimes} 2} \xrightarrow{\hspace*{25pt}}\operatorname{gr}_C^2\mathscr{O}, \end{equation} where \begin{equation*} \mathscr{A}^{\mathbin{\tilde\otimes} 2}= (-1+2P^\sharp),\quad \mathscr{A}\mathbin{\tilde\otimes}\mathscr{B}= (-1+P^\sharp),\quad \mathscr{B}^{\mathbin{\tilde\otimes} 2}= (-1). \end{equation*} $\ell$-bases of these $\ell$-sheaves at $P$ are $y_2^2$,\, $y_2y_4$, and $y_4^2$, respectively. By Remark \ref{remarkProposition-lP=3+IIIa} we see that an $\ell$-basis of $\operatorname{gr}_C^2\mathscr{O}$ can be taken as $y_4^2,\, y_2y_4,\, y_3$.
According to \eqref{equation-alpha-lP=3-and-7-mod} we have $y_1^2y_2^2\equiv (\operatorname{unit})\cdot y_1^{\ell(P)+1}\cdot y_1y_3$. Hence, \begin{equation} \label{equation-lP=3-7-Coker-P} \operatorname{coker}_P \varphi=\mathscr{O}_C \cdot y_1y_3/\mathscr{O}_C\cdot y_1^2y_2^2= \mathscr{O}_C/\bigl(y_1^{\ell(P)+1}\bigr)\cdot y_1y_3 \end{equation} and $\operatorname{coker}_P \varphi^{\sharp}= \mathscr{O}_{C^{\sharp}}/\bigl(y_1^{\ell(P)}\bigr)\cdot y_3$. In particular, $\operatorname{len} _P\operatorname{coker}_P \varphi=(\ell(P)+1)/2$. If $\ell(P)=3$, then \begin{equation*} \operatorname{coker}_R \varphi = \begin{cases} \mathscr{O}_C/(z_1)\cdot z_3 & \text{if $\gamma_2\neq 0$,} \\ 0 & \text{if $\gamma_2= 0$,} \end{cases} \end{equation*} In particular, $\operatorname{len}_R \operatorname{coker}_R \varphi\le 1$. By Lemma \ref{lemma-lP=3+IIIa} one of the following holds \begin{equation*} \operatorname{gr}_C^2\mathscr{O}\simeq \mathscr{O}(-1)^{\oplus 2}\oplus\mathscr{O}(1),\quad \mathscr{O}(-1)^{\oplus 2}\oplus\mathscr{O},\quad\text{or}\quad \mathscr{O}^{\oplus 2}\oplus\mathscr{O}(-1). \end{equation*} By Remark \ref{remarkProposition-lP=3+IIIa} we get the only possibilities listed in Lemma \xref{lemma-possibilities-lP=3+III}. \end{proof}
\begin{lemma}\label{treating-equation-possibilities-lP=3+III-c} The case \eqref{equation-possibilities-lP=3+III-c} does not occur. \end{lemma}
\begin{proof} Indeed, from the exact sequence \begin{equation*} 0\xrightarrow{\hspace*{20pt}} \operatorname{gr}_C^1\omega\xrightarrow{\hspace*{20pt}} \omega /F^{2}\omega \xrightarrow{\hspace*{20pt}} \omega/ F^1 \omega\xrightarrow{\hspace*{20pt}} 0, \end{equation*} we obtain $\chi(\omega /F^{2}\omega)=0$. Then we apply \cite[Lemma 3.7(ii)]{Mori-Prokhorov-IIA-1} with ${\mathscr{K}}=I^{(2)}$. \end{proof}
\begin{lemma} \label{lemma-equation-possibilities-lP=3+IIIa} The case \eqref{equation-possibilities-lP=3+III-a} does not occur. \end{lemma}
\begin{proof} The deformation of the form \begin{equation} \label{equation-lP=3-7-deformations} \alpha'=\alpha+\delta' y_4^{3}+\epsilon' y_1y_3y_4 \end{equation} does not change the case division of Lemma \xref{lemma-possibilities-lP=3+III} because $y_4^3,\, y_1y_3y_4\in I^{(3)}$. Since it suffices to derive a contradiction for a small deformation of $X$, we may assume that in \eqref{equation-alpha-lP=3-and-7} the coefficients $\delta$ and $\epsilon$ are general and $k=1$.
Let us analyze the map $\varphi$ (see \eqref{equation-varphi}) in our case.
Since the map $\mathscr{A}^{\mathbin{\tilde\otimes} 2}\to (-1+P^\sharp)$ is zero (for degree reasons), the image of $\mathscr{A}^{\mathbin{\tilde\otimes} 2}=(-1+2P^\sharp) \hookrightarrow \operatorname{gr}_C^2\mathscr{O}$ must be contained in the first summand $(a)\subset \operatorname{gr}_C^2\mathscr{O}$. Since $(-1+P^\sharp)^{\mathbin{\tilde\oplus} 2}$ has no global sections, $\beta$ must be a global section of $(a)$. The map $\varphi$ is given by the following matrix: \[ \arraycolsep=1.4pt
\begin{blockarray}{cc@{\qquad}ccc} &&\scriptstyle{(-1+2P^\sharp)}&\scriptstyle{(-1+P^\sharp)}&\scriptstyle{(-1)} \\[7pt] \begin{block}{rc@{\qquad}(ccc)} \scriptstyle{(a)}&\scriptstyle{v_1}&y_1^2h(y_1^4)&\star&\star\star \\ \scriptstyle{(-1+P^\sharp)}&\scriptstyle{v_2}&0&b_1&b_3y_1 \\ \scriptstyle{(-1+P^\sharp)}&\scriptstyle{v_3}&0&b_2&b_4y_1 \\ \end{block} \end{blockarray} \] where $b_1,\dots, b_4$ are constants and $h$ is a polynomial of degree $\le a$. Since the matrix is non-degenerate, $(b_1b_4-b_2b_3)h\neq 0$. Applying elementary transformations of rows and switching the second and the third rows (which correspond to automorphisms of $\operatorname{gr}_C^2\mathscr{O}$), one can reduce the matrix to the form \begin{equation} \label{equation-lP=3-7-matrix} \begin{pmatrix} y_1^2h(y_1^4)&0&b_5 \\ 0&1&0 \\ 0&0&y_1 \end{pmatrix} \end{equation} where $b_5$ is a constant. If $b_5=0$, then \[ (\operatorname{coker}_{P} \varphi)^\sharp \simeq \mathscr{O}_C^\sharp /(y_1)\oplus \mathscr{O}_C^\sharp /(y_1^2h). \] This contradicts \eqref{equation-lP=3-7-Coker-P}. Hence, we may assume that $b_5=1$. From the matrix \eqref{equation-lP=3-7-matrix} we see \begin{eqnarray*} y_2^2&=& y_1^2hv_1, \\ y_4^2&=& v_1+y_1v_3. \end{eqnarray*} Eliminating $v_1$ we obtain the following relations in $\operatorname{gr}_C^2\mathscr{O}$: \begin{equation} \label{equation-lP=3-7-vv} v_1=y_4^2-y_1v_3,\qquad y_1^3h v_3+y_2^2-y_1^2hy_4^2=0. \end{equation} The last one must be a multiple of $\alpha$. \begin{scase} \label{scase-lP=3-7-new-treatment} If $h$ is a unit, then comparing with \eqref{equation-alpha-lP=3-and-7} we see that $\ell(P)=3$, $c=h(0)\neq 0$ and $v_3\ni y_3$. If $h$ is linear, then $\ell(P)=7$, $c=h(0)= 0$ and again $v_3\ni y_3$. Since $\beta$ is a section of $(a)\subset \operatorname{gr}_C^2\mathscr{O}$, it must be proportional to $v_1$. Therefore, $y_1y_3\in \beta$. 
Moreover, \eqref{equation-lP=3-7-vv} shows that in the case $\ell(P)=3$ the term $y_1y_3$ appears in $\beta$ with coefficient $1/c$. Note that the coefficients of $y_1^2y_4^2\in \alpha$ and $y_4^2,\, y_1y_3\in \beta$ are preserved under deformations \eqref{equation-lP=3-7-deformations}. So we may assume that the condition $\epsilon c\neq \delta$ of \ref{computation-lP=3+III-part2} is satisfied. Thus in the case $\ell(P)=3$ we may apply Computation \ref{computation-lP=3+III-part2}. In the case $\ell(P)=7$ we may also apply \ref{computation-lP=3+III-part2} to $\alpha^o=\beta=0$, where $\alpha^o$ is a linear combination of $\alpha$ and $y_1^2\beta$ (and so $y_1^2y_4^2\in \alpha^o$). Then in both cases we obtain a contradiction by Lemma \ref{slemma-lP=3+III-generalityH} below. \end{scase}
\begin{slemma}\label{lemma-computation-lP=3+III-part2} Assume that $\Delta(H,C)$ at $P$ is as in \eqref{graphs-computation-lP=3+III-part2}. Then the contraction $f$ is birational and $\Delta(H,C)$ has one of the following forms: \begin{equation*} \xy \xymatrix@R=1pt@C=10pt{ \mathrm{a)}&&\hbox to 5pt{\hss {$\scriptstyle{C}$ $\overset{3}{\scriptstyle \odot}$}}\ar@{-}[d]&\circ\ar@{-}[d] \\ &\underset C\bullet\ar@{-}[r] &\circ\ar@{-}[r] &\underset{3}\circ\ar@{-}[r]&\circ \ar@{-}[r]&\circ } \endxy \hspace{17pt} \xy \xymatrix"M"@C=10pt@R=1pt{ \mathrm {b_n)}&&&\circ\ar@{-}[d]\ar@{-}[r]& \cdots\ar@{-}[r]&\circ \\ &\underset {C}{\bullet}\ar@{-}[r] &\circ\ar@{-}[r] &\underset{3}{\circ}\ar@{-}[r]&\circ\ar@{-}[r]&\circ } \POS"M1,4"."M1,6"!C*\frm{^\}},+U*++!D\txt{$\scriptstyle{n\ge 1}$} \endxy \end{equation*} \begin{equation*} \xy \xymatrix@R1pt@C13pt{ \mathrm {c)}&\overset{4}{\diamond}\ar@{-}[d]&&\circ\ar@{-}[d] \\ &\underset{C}\bullet\ar@{-}[r]&\circ\ar@{-}[r] &\underset{3}\circ\ar@{-}[r] &\circ\ar@{-}[r] &\circ } \endxy \end{equation*} where $\bullet$, as usual, corresponds to a component of the proper transform of $C$ that is a $(-1)$-curve, $\scriptstyle \odot$ corresponds to a component that is not a $(-1)$-curve, and $\diamond$ corresponds to an exceptional divisor over a point on $C\setminus \{P\}$. \end{slemma}
\begin{proof} Let $H^{\operatorname{n}}\to \tilde H$ be the normalization, let $\hat H\to H^{\operatorname{n}}$ be the minimal resolution,
and let $\hat C\subset \hat H$ be the proper transform of $C$. Assume that $\hat C$ has two components $\hat C_1$ and $\hat C_2$ (the case \eqref{graphs-computation-lP=3+III-part2}a)). Then $\Delta(H,C)$ has the form \begin{equation*} \xymatrix@R1pt { \ovalh{\phantom{PP}$\Gamma_2$\phantom{PP}} &&\ar@{-}[ll]\scriptstyle{\hat C_2}\ar@{-}[d]& \ovalh{\phantom{P}$\Gamma$\phantom{P}} \\ \ovalh{\phantom{PP}$\Gamma_1$\phantom{PP}}\ar@{-}[r] &\scriptstyle{\hat C_1}\ar@{-}[r]& \circ\ar@{-}[r]&\underset3\circ\ar@{-}[r]\ar@{-}[u]&\circ\ar@{-}[r]&\circ } \end{equation*} where subgraphs $\Gamma_1$ and $\Gamma_2$ correspond to singularities of $H^{\operatorname{n}}$ outside $P$ and $\Gamma$ is a Du Val subgraph corresponding to $O'\in \tilde H$ (see \ref{claim-new-11-3}). Since the whole configuration $\Delta(H,C)$ is contractible to a Du Val point or corresponds to a fiber of a rational curve fibration (see \ref{sde1}), it contains a $(-1)$-curve. Thus we may assume by symmetry that $\hat C_1^2=-1$. Then contracting $\hat C_1$ we obtain \begin{equation*} \xymatrix@R1pt { \ovalh{\phantom{PP}$\Gamma_2$\phantom{PP}}&&\ar@{-}[ll]\scriptstyle{\hat C_2}\ar@{-}[d] &\ovalh{\phantom{P}$\Gamma$\phantom{P}} \\ \ovalh{\phantom{PP}$\Gamma_1'$\phantom{PP}}\ar@{-}[rr]&&\bullet\ar@{-}[r]& \underset3\circ\ar@{-}[r]\ar@{-}[u]&\circ\ar@{-}[r]&\circ } \end{equation*} Then $\Gamma_1'$ must be empty. Contracting the black vertex we obtain \begin{equation*} \xymatrix@R1pt { \ovalh{\phantom{PP}$\Gamma_2$\phantom{PP}}&&\ar@{-}[ll]\scriptstyle{\hat C_2'}\ar@{-}@/_3pt/[dr] &\ovalh{\phantom{P}$\Gamma$\phantom{P}} \\ &&&\circ\ar@{-}[r]\ar@{-}[u]&\circ\ar@{-}[r]&\circ } \end{equation*} Recall that $\Gamma\neq \varnothing$.
It is easy to see that the configuration $\Delta(H,C)$ does not correspond to a fiber of a rational curve fibration. Hence $f$ is birational. Since $y_4^3\in \alpha$, the general member $D\in |-K_X|$ is of type \type{D_5} (see \ref{ge}). Hence $f(H)$ is either of type \type{D_5} or ``better''. This implies that $\Gamma_2=\varnothing$, $\hat C_2'^2=-1$ and so $\hat C_2^2=-2$. Moreover, $\Gamma$ consists of a single vertex. Thus we obtain the case a). The case where $\hat C$ is irreducible is treated in a similar way. \end{proof}
\begin{slemma}\label{slemma-lP=3+III-generalityH} Assume that $(H,C)$ is of type \type{a)}, \type{b_n)} or \type{c)} of Lemma \xref{lemma-computation-lP=3+III-part2}. Then the chosen element $H$
is not general in $|\mathscr{O}_X|_C$. \end{slemma} \begin{proof} Take a divisor $\Theta$ on the minimal resolution whose coefficients for \type{a)} and \type{b_n)} are as follows: \begin{equation*} \xy \xymatrix@R=0pt@C=10pt{ \mathrm{a)}&\overset{1}{\scriptstyle \odot}\ar@{-}[d]&\overset{1}\circ\ar@{-}[d]&&\overset{1}\vartriangle\ar@{-}[d] \\ \underset 3\bullet\ar@{-}[r] &\underset 3\circ\ar@{-}[r] & \underset{2}\circ\ar@{-}[r]&\underset 2\circ \ar@{-}[r]&\underset 2\circ&\underset 1\vartriangle\ar@{-}[l] } \endxy \hspace{25pt} \xy \xymatrix@C=17pt@R=1pt{ \mathrm {b_n)}&&\overset{1}\circ\ar@{-}[d]\ar@{-}[r]& \cdots\ar@{-}[r]&\overset{1}\circ&\overset{1}\vartriangle\ar@{-}[l] \\ \underset {1}{\bullet}\ar@{-}[r] &\underset{1}\circ\ar@{-}[r] &\underset{1}{\circ}\ar@{-}[r]&\underset{1}\circ\ar@{-}[r]&\underset{1}\circ&\underset{1}\vartriangle\ar@{-}[l] } \endxy \end{equation*}
where $\vartriangle$ corresponds to an arbitrary smooth analytic curve meeting the corresponding component transversely. It is easy to verify that $\Theta$ is numerically trivial, so $\Theta$ is the pull-back of a Cartier divisor $\Theta_Z$ on $H_Z$. Clearly, $\Theta_Z$ extends to a Cartier divisor $G_Z$ on $Z$. Let $G:=f^*G_Z$. Then $\Theta$ is the pull-back of $G|_H$.
In the case \type{a)} the surface $H$ is locally reducible at a general point of $C$: $H=H_1+H_2$. The diagram \type{a)} shows that $H_i\cap G$ is a reduced divisor for some $i\in \{1,\, 2\}$. Hence, $G\in |\mathscr{O}_X|_C$ is normal, which contradicts our assumptions. In the cases \type{b_n)} and \type{c)} the normalization of $H$ is a bijection by Corollary \xref{scorollary-lP=3+III-sing}. In the case \type{b_n)} it is easy to see that the multiplicity of the intersection $H\cap G$ at a general point of $C$
is $\le 2$. This shows that the divisor $G\in |\mathscr{O}_X|_C$ is normal, a contradiction.
Similar arguments show that in the case \type{c)} the multiplicity of the intersection $H\cap G$ at a general point of $C$ equals $4$. By Corollary \ref{scorollary-lP=3+III-sing}
$H$ has a cuspidal singularity at a general point of $C$. Let $D\subset X$ be a disk that intersects $C$ transversely at a general point. Then the curves $H|_D$ and $G|_D$ are cuspidal. Since $H|_D \cdot G|_D=H\cdot G\cdot D=4$, these cusps are in general position, that is, the quadratic parts of the corresponding equations are not proportional. But then the general member of the pencil generated by $H|_D$ and $G|_D$
has an ordinary double point at the origin. Hence the chosen element $H\in |\mathscr{O}_X|_C$ is not general, a contradiction. \end{proof}
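Let us spell out the last step of the proof above; this is an elementary remark about plane curve germs. Write $H|_D=\{\lambda_1^2+(\text{higher order terms})=0\}$ and $G|_D=\{\lambda_2^2+(\text{higher order terms})=0\}$, where $\lambda_1$ and $\lambda_2$ are the non-proportional linear forms defining the cuspidal tangents. A member of the pencil then has quadratic part
\begin{equation*}
s\lambda_1^2+t\lambda_2^2, \qquad (s:t)\in \mathbb{P}^1,
\end{equation*}
which is a non-degenerate quadratic form for general $(s:t)$ because $\lambda_1$ and $\lambda_2$ are linearly independent. A curve germ with non-degenerate quadratic part has an ordinary double point at the origin, regardless of the higher order terms.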
Thus the case \eqref{equation-possibilities-lP=3+III-a} does not occur. Lemma \ref{lemma-equation-possibilities-lP=3+IIIa} is proved. \end{proof}
\begin{case}{\bf Case \eqref{equation-possibilities-lP=3+III-b}.} \label{Subcase-equation-possibilities-lP=3+III-c} We will show that Computation \ref{computation-lP=3a-III} is applicable in this case and the possibility \ref{main-theorem-divisorial} occurs. We have \begin{equation*}
\vcenter{ \xymatrix@R=6pt@C=-3pt{ \operatorname{gr}_C^2\mathscr{O}= &(P^\sharp)\ar@{=}[d]&\mathbin{\tilde\oplus}& (0)\ar@{=}[d]&\mathbin{\tilde\oplus}& (-1+P^\sharp).\ar@{=}[d] \\ & \mathscr{D} && {\mathscr{E}} && \mathscr{G} }} \end{equation*}
We apply arguments similar to those used in the proof of Lemma \ref{lemma-equation-possibilities-lP=3+IIIa}. In our case the map $\varphi$ is given by the following matrix: \[ \begin{blockarray}{cc@{\qquad}ccc} &&\scriptstyle{(-1+2P^\sharp)}&\scriptstyle{(-1+P^\sharp)}&\scriptstyle{(-1)} \\[7pt] \begin{block}{rc@{\qquad}(ccc)} \scriptstyle{(P^\sharp)}&\scriptstyle{w_1} &b_1y_1^3&h(y_1^4)&\star \\ \scriptstyle{(0)} &\scriptstyle{w_2} &b_2y_1^2&b_3y_1^3&\star\star \\ \scriptstyle{(-1+P^\sharp)}& \scriptstyle{w_3} &0&b_4&b_5y_1 \\ \end{block} \end{blockarray} \] where $b_1,\dots,b_5$ are constants, $h$ is a polynomial of degree $\le 1$, and $\star$ is divisible by $y_1$. Consider the map \begin{equation} \label{equation-lP=3-7-definition-pi} \pi: (-1+P^\sharp)=\mathscr{A}\mathbin{\tilde\otimes}\mathscr{B} \xrightarrow{\hspace*{10pt}} \operatorname{gr}^2_C\mathscr{O} \xrightarrow{\makebox[20pt]{ $\scriptstyle\operatorname{pr}$}} \mathscr{G}=(-1+P^\sharp), \end{equation} which is uniquely determined by $\mathscr{A}$ and $\mathscr{B}$. We may regard $\pi$ as the multiplication by $b_4$.
\begin{slemma} $b_4\neq 0$. \end{slemma} \begin{proof} Assume that $b_4=0$. Since the matrix is non-degenerate, $b_5\neq0$. Applying elementary transformations of rows, as in the proof of Lemma \ref{lemma-equation-possibilities-lP=3+IIIa}, one can reduce the matrix to the form \begin{equation} \label{equation-lP=3-7-matrix-2} \begin{pmatrix} b_1y_1^3&h(y_1^4)&0 \\ b_2y_1^2&b_3y_1^3&b_6 \\ 0&0&y_1 \end{pmatrix} \end{equation} where $b_6$ is a constant. If $b_6=0$, then \[ (\operatorname{coker}_{P} \varphi)^\sharp \simeq \mathscr{O}_C^\sharp /(y_1)\oplus \text{(non-zero $\mathscr{O}_C^\sharp$-module)}. \] This contradicts \eqref{equation-lP=3-7-Coker-P}. Hence, we may assume that $b_6=1$. Assume that $b_2=0$.
Applying elementary row transformations we can reduce \eqref{equation-lP=3-7-matrix-2} to the form \[
\begin{pmatrix} y_1^3&h(y_1^4)&0 \\ 0&b_3y_1^3&1 \\ 0&0&y_1 \end{pmatrix} \] which gives us \begin{eqnarray*} y_2^2&=& y_1^3 w_1, \\ y_2y_4&=& h(y_1^4) w_1+b_3y_1^3 w_2, \\ y_4^2&=& w_2+y_1w_3. \end{eqnarray*} If $h(0)=0$, then one can see that $(\operatorname{coker}_{P} \varphi)^\sharp$ cannot be a cyclic $\mathscr{O}_C^\sharp$-module. Thus, $h$ is a unit and we can eliminate $w_1$ and $w_2$: \begin{eqnarray*} y_2^2&=& \textstyle{\frac 1h y_1^3y_2y_4-\frac {b_3}h y_1^6 y_4^2+\frac {b_3}h y_1^7w_3}, \\
w_1&=& \textstyle{\frac 1h y_2y_4-\frac {b_3}h y_1^3 y_4^2+\frac {b_3}h y_1^4w_3}, \\
w_2&=&y_4^2-y_1w_3. \end{eqnarray*} Comparing the first equation with \eqref{equation-alpha-lP=3-and-7} we see that $\ell(P)=7$ and $w_3\ni y_3$. Then from the second one we see $w_1\not\ni y_3$. Clearly, $\beta$ is a linear combination of $y_1w_1$ and $w_2$ (with constant coefficients). Hence, $\beta\ni y_1y_3$. As in the proof of Lemma \ref{lemma-equation-possibilities-lP=3+IIIa} a deformation of the form \eqref{equation-lP=3-7-deformations} is trivial modulo $I^{(3)}$, and so it preserves the case division of Lemma \ref{lemma-possibilities-lP=3+III} as well as the vanishing of $b_4$. Then we can argue as in \ref{scase-lP=3-7-new-treatment} and get a contradiction.
Hence $b_2\neq 0$. Then we may assume that $b_1=0$ and $b_2=1$. The relations in $(\operatorname{coker}_{P} \varphi)^\sharp$ are $y_1^2w_2=w_2+y_1w_3=0$, $hw_1+b_3y_1^3w_2=0$. Eliminating $w_2$ one can see \[
(\operatorname{coker}_{P} \varphi)^\sharp\simeq \mathscr{O}_C^\sharp /(y_1^3)\oplus \mathscr{O}_C^\sharp /(h). \] By \eqref{equation-lP=3-7-Coker-P} we have $h(0)\neq 0$ and $\ell(P)=3$. From the matrix \eqref{equation-lP=3-7-matrix-2} we see \begin{eqnarray*} y_4^2&=& w_2+y_1w_3, \\ y_2^2&=& y_1^2w_2, \\ y_2y_4&=& h(y_1^4)w_1+ b_3y_1w_2. \end{eqnarray*} Eliminating $w_2$ we obtain the following relations in $\operatorname{gr}_C^2\mathscr{O}$: \begin{equation} \label{equation-lP=3-7-vv-2} w_2=y_4^2-y_1w_3,\qquad y_2^2- y_1^2y_4^2+y_1^3w_3=0.
\end{equation} The last one must be congruent to $\alpha \mod I^{(3)}$. Comparing with \eqref{equation-alpha-lP=3-and-7} we see that $w_3=\frac 1c y_3$ in $\operatorname{gr}_C^2\mathscr{O}$. Since $\beta$ is a section of $(0)\subset \operatorname{gr}_C^2\mathscr{O}$, it must be proportional to $w_2$. Therefore, $y_1y_3\in \beta$. Moreover, \eqref{equation-lP=3-7-vv-2} shows that $y_1y_3$ appears in $\beta$ with coefficient $1/c$. Now we apply Computation \ref{computation-lP=3+III-part2}, Lemma \ref{lemma-computation-lP=3+III-part2}, and Lemma \ref{slemma-lP=3+III-generalityH} and get a contradiction. \end{proof}
\begin{scase} \label{treating-6-5-5} From now on we assume that $b_4\neq 0$. In other words, the map $\pi$ is non-zero. The induced map \[ \mathscr{B}^{\mathbin{\tilde\otimes} 2}=(-1)\longrightarrow \mathscr{G}=(-1+P^\sharp) \] can be regarded as the multiplication by $sy_1$ for some $s$. For $\mu\in \mathbb{C}$, take a subsheaf $\mathscr{B}'\subset \mathscr{A}\mathbin{\tilde\oplus}\mathscr{B}$ so that $y_4':=y_4+\mu y_1y_2$ is an $\ell$-basis of $\mathscr{B}'$. Clearly, $\operatorname{gr}^1_C\mathscr{O} =\mathscr{A}\mathbin{\tilde\oplus}\mathscr{B}'$. Regard $y_1$ as a map $\mathscr{B}\to \mathscr{A}$. Then $(\mu y_1,1)(\mathscr{B})\subset \mathscr{A}\mathbin{\tilde\oplus} \mathscr{B}$ and we have the following diagram \begin{equation*} \xymatrix@R10pt{ ((\mu y_1,1)(\mathscr{B}))^{\mathbin{\tilde\otimes} 2}\ar@{=}[d]\ar@{^{(}->}[r]& \tilde S^2\operatorname{gr}_C^1\mathscr{O}\ar[r]^-{\operatorname{pr}}&\mathscr{G} \\ (\mu^2 y_1^2,2\mu y_1,1)(\mathscr{B}^{\mathbin{\tilde\otimes} 2})\ar@/_15pt/[urr]_-{\cdot(2\mu y_1b_4+sy_1)} } \end{equation*} Set $\mu:=-s/(2b_4)$. With this choice of $\mu$, the map ${\mathscr{B}'}^{\mathbin{\tilde\otimes} 2}\to \mathscr{G}$ is zero. Thus $\mathscr{A}^{\mathbin{\tilde\otimes} 2}\mathbin{\tilde\oplus}\mathscr{B}'^{\mathbin{\tilde\otimes} 2}\subset \mathscr{D} \mathbin{\tilde\oplus}{\mathscr{E}}$. Let ${\mathscr{K}}$ be the ideal such that $I^{(2)}\supset {\mathscr{K}}\supset I^{(3)}$ and ${\mathscr{K}}/ I^{(3)}= \mathscr{D}\mathbin{\tilde\oplus} {\mathscr{E}}$.
Since $\mathscr{A}^{\mathbin{\tilde\otimes} 2}\to \mathscr{G}$ is zero, perturbing $\mathscr{B}$ with $\mu$ has no effect on $\pi: \mathscr{A} \mathbin{\tilde\otimes} \mathscr{B} \to \mathscr{G}$, and we use the same notation $\pi: \mathscr{A} \mathbin{\tilde\otimes} \mathscr{B}' \to \mathscr{G}$. \end{scase}
\begin{slemma}\label{lemma-lP=3-7-ci} $I{\mathscr{K}}=I^{(3)}$ outside $P$ and $I^{\sharp}{\mathscr{K}}^{\sharp}=(I^{(3)})^{\sharp}$ at $P$. \end{slemma}
\begin{proof} Consider the following diagram with $\ell$-exact rows and injective vertical arrows: \begin{equation}\label{big-diagram} \vcenter{ \xy \xymatrix@R=14pt@C=20pt{ 0\ar[r] &\mathscr{A}^{\mathbin{\tilde\otimes} 2}\mathbin{\tilde\oplus} \mathscr{B}'^{\mathbin{\tilde\otimes} 2}\ar[r]\ar@{^{(}->}[d]^{\upsilon} &\tilde S^2\operatorname{gr}^1_C\mathscr{O}\ar[r]\ar@{^{(}->}[d]^{\varphi}& \mathscr{A}\mathbin{\tilde\otimes} \mathscr{B}'\ar[r]\ar[d]_{{\simeq}}^{b_4}& 0 \\ 0\ar[r]& \mathscr{D}\mathbin{\tilde\oplus} {\mathscr{E}}\ar[r] &\operatorname{gr}_C^2\mathscr{O}\ar[r]& \mathscr{G}\ar[r] &0 } \endxy } \end{equation} At a point $Q\in C$ which is a smooth point of $X$, we can choose coordinates $u_1,u_2,u_3$ for $(X,Q)$ so that $Q$ is the origin, $C$ is the $u_1$-axis, and $u_2$ (resp. $u_3$) generates $\mathscr{A}$ (resp. $\mathscr{B}'$) at $Q$. Then from \eqref{big-diagram} we see \begin{equation*} I^{(3)}=I^3=(u_2,u_3)^3, \qquad {\mathscr{K}}=(u_2^2,u_3^2)+(u_2,u_3)^3, \end{equation*} from which it follows that $I^{(3)}={\mathscr{K}} I$. At $P$, again from \eqref{big-diagram} we have \begin{equation*} \operatorname{coker}_{P^\sharp}\upsilon^\sharp \simeq \operatorname{coker}_{P^\sharp} \varphi^\sharp \simeq \left(\mathscr{O}_{C^\sharp}/ (y_1^3)\right) y_3. \end{equation*} Thus, $(\mathscr{D}\mathbin{\tilde\oplus} {\mathscr{E}})^\sharp$ is generated by $y_3$ and $\varrho$, where $\varrho:=y_2^2$ or $y_4'^2$. Therefore, \begin{eqnarray*} y_2^2,\ y_4'^2\in {\mathscr{K}}^\sharp&=&(y_3,\varrho)+(y_2, y_4)^3, \\ {\mathscr{K}}^\sharp I^\sharp &=& y_3 I^\sharp+(y_2, y_4)^3. \end{eqnarray*} Whence, \begin{equation*} \mathscr{O}_{C^\sharp}\cdot y_3 \oplus \mathscr{O}_{C^\sharp}\cdot \varrho \twoheadrightarrow {\mathscr{K}}^\sharp /{\mathscr{K}}^\sharp I^\sharp. 
\end{equation*} Since \begin{equation*} {\mathscr{K}}^\sharp /{\mathscr{K}}^\sharp I^\sharp \twoheadrightarrow {\mathscr{K}}^\sharp/ {I^{(3)}}^\sharp \simeq \mathscr{O}_{C^\sharp}\oplus \mathscr{O}_{C^\sharp}, \end{equation*} the arrow above is an isomorphism and $I^{\sharp}{\mathscr{K}}^{\sharp}=(I^{(3)})^{\sharp}$ at $P^{\sharp}$.
If $\ell(P)=3$, then at $R$, changing coordinates $z_1,\dots, z_4$ keeping $z_1$ and $z_3$ the same, we may assume that $z_2$ and $z_4$ are bases at $R$ of $\mathscr{A}$ and $\mathscr{B}'$, respectively. Then in view of \eqref{big-diagram} and $\operatorname{coker}_R \varphi =\mathbb{C}_R$, we see that $\mathscr{D}\mathbin{\tilde\oplus} {\mathscr{E}}$ is generated by $z_3$ and $z_i^2$ for some $i=2,\, 4$. Therefore, \begin{eqnarray*} z_2^2,\, z_4^2\in {\mathscr{K}} &=&(z_3,\, z_i^2)+ (z_2,\, z_4)^3, \\ {\mathscr{K}} I &=& z_3 I+(z_2, z_4)^3. \end{eqnarray*} Whence, \begin{equation*} \mathscr{O}_{C} \cdot z_3 \oplus \mathscr{O}_{C}\cdot z_i^2 \twoheadrightarrow {\mathscr{K}} /{\mathscr{K}} I. \end{equation*} Since \begin{equation*} {\mathscr{K}} /{\mathscr{K}} I \twoheadrightarrow {\mathscr{K}}/ I^{(3)} \simeq \mathscr{O}_C\oplus \mathscr{O}_C, \end{equation*} we have $I{\mathscr{K}}=I^{(3)}$ at $R$. This proves Lemma \ref{lemma-lP=3-7-ci}. \end{proof}
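The equality $I^{(3)}={\mathscr{K}} I$ at the smooth point $Q$ used in the proof above can also be checked by a direct computation with monomials: since ${\mathscr{K}}\subset I^2$, we have ${\mathscr{K}} I\subset I^3$, while conversely
\begin{equation*}
{\mathscr{K}} I=\bigl((u_2^2,u_3^2)+(u_2,u_3)^3\bigr)\cdot (u_2,u_3)\supset (u_2^3,\ u_2^2u_3,\ u_2u_3^2,\ u_3^3)=(u_2,u_3)^3=I^{(3)}.
\end{equation*}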
\begin{scorollary}\label{corollary-lP=3-7-ci} ${\mathscr{K}}\mathbin{\tilde\otimes} \mathscr{O}_C\simeq (P^\sharp)\mathbin{\tilde\oplus} (0)$ and so ${\mathscr{K}}$ is an l.c.i. ideal of codimension $2$ outside $P$ and ${\mathscr{K}}^\sharp$ is l.c.i. at $P^\sharp$. \end{scorollary}
\begin{scase} Thus, \begin{eqnarray*} {\mathscr{K}}/ ({\mathscr{K}}\mathbin{\tilde\otimes} I)&=& (P^\sharp)\mathbin{\tilde\oplus} (0), \\ (\omega_X\mathbin{\tilde\otimes} {\mathscr{K}})/ (\omega_X\mathbin{\tilde\otimes}{\mathscr{K}}\mathbin{\tilde\otimes} I)&=& (0)\mathbin{\tilde\oplus} (-P^\sharp). \end{eqnarray*} Our goal is to extend a non-zero section $\bar \xi $ of $(0)\subset \omega_X\mathbin{\tilde\otimes} {\mathscr{K}}/ \omega_X\mathbin{\tilde\otimes}{\mathscr{K}}\mathbin{\tilde\otimes} I$ to a section $\xi \in \operatorname{H}^0(\omega_X\mathbin{\tilde\otimes} {\mathscr{K}})$. By the Formal Function Theorem \begin{equation*} \lim_{\longleftarrow} \operatorname{H}^0\left(\frac{\omega_X\mathbin{\tilde\otimes} {\mathscr{K}}}{\omega_X\mathbin{\tilde\otimes} {\mathscr{K}}^{(n)}} \right) \simeq \lim_{\longleftarrow}\ \frac{ f_*(\omega_X\mathbin{\tilde\otimes} {\mathscr{K}})}{ {\mathfrak{m}}^n_{o,Z}f_*(\omega_X\mathbin{\tilde\otimes} {\mathscr{K}})}. \end{equation*} Thus, for lifting $\bar \xi$, it is sufficient to show that the map \begin{equation*} \Phi_n: \operatorname{H}^0(\omega_X\mathbin{\tilde\otimes} {\mathscr{K}}/ \omega_X\mathbin{\tilde\otimes}{\mathscr{K}}^{(n)}) \xrightarrow{\hspace*{20pt}} \operatorname{H}^0(\omega_X\mathbin{\tilde\otimes} {\mathscr{K}}/ \omega_X\mathbin{\tilde\otimes}{\mathscr{K}}\mathbin{\tilde\otimes} I) \end{equation*} is surjective for all $n>0$, or equivalently $\Phi_2$ and \begin{equation*} \Psi_n: \operatorname{H}^0(\omega_X\mathbin{\tilde\otimes} {\mathscr{K}}/ \omega_X\mathbin{\tilde\otimes}{\mathscr{K}}^{(n)}) \xrightarrow{\hspace*{20pt}} \operatorname{H}^0(\omega_X\mathbin{\tilde\otimes} {\mathscr{K}}/ \omega_X\mathbin{\tilde\otimes}{\mathscr{K}}^{(n-1)}) \end{equation*} are surjective for all $n>0$. 
We have \begin{equation*} 0 \to \omega_X\mathbin{\tilde\otimes} \left( \frac{{\mathscr{K}}^{(n-1)}}{{\mathscr{K}}^{(n)}}\right) \xrightarrow{\hspace*{20pt}} \frac{\omega_X\mathbin{\tilde\otimes} {\mathscr{K}}}{ \omega_X\mathbin{\tilde\otimes}{\mathscr{K}}^{(n)}} \xrightarrow{\makebox[35pt]{ $\scriptstyle\psi_n$}} \frac{\omega_X\mathbin{\tilde\otimes} {\mathscr{K}}}{ \omega_X\mathbin{\tilde\otimes}{\mathscr{K}}^{(n-1)}} \to 0. \end{equation*} Note that the sheaves $\omega_X\mathbin{\tilde\otimes}\bigl(\operatorname{im}({\mathscr{K}}\mathbin{\tilde\otimes} I \to {\mathscr{K}})/{\mathscr{K}}^{(2)}\bigr)$ and \begin{equation*} \omega_X\mathbin{\tilde\otimes} {\mathscr{K}}^{(n-1)} /\omega_X\mathbin{\tilde\otimes}{\mathscr{K}}^{(n)} \simeq \tilde S^{n-1}\left(\omega_X\mathbin{\tilde\otimes}{\mathscr{K}}/\omega_X\mathbin{\tilde\otimes}{\mathscr{K}}^{(2)}\right) \end{equation*} have filtrations with successive subquotients \begin{equation*} (-P^\sharp)\mathbin{\tilde\otimes} \tilde S^{n-1}\left((-P^\sharp)\mathbin{\tilde\oplus} (0)\right) \mathbin{\tilde\otimes} \begin{cases} (0) \\ (-1+2P^\sharp) \\ (-1+3P^\sharp) \\ (-1+P^\sharp) \end{cases} \end{equation*} which are all $\ge (-1)$ and hence have vanishing $\operatorname{H}^1$. Thus $\Psi_n=\operatorname{H}^0(\psi_n)$ and $\Phi_2$ are onto and so is $\Phi_n=\Phi_2\circ \Psi_3\circ\cdots \circ \Psi_n$. \end{scase}
\begin{scase}\label{sde-iP=3+III-sections} Thus a non-zero section $\bar \xi $ of $(0)\subset \omega_X\mathbin{\tilde\otimes} {\mathscr{K}}/ \omega_X\mathbin{\tilde\otimes}{\mathscr{K}}\mathbin{\tilde\otimes} I$ induces a section $\xi \in \operatorname{H}^0(\omega_X\mathbin{\tilde\otimes} {\mathscr{K}})$ which in turn induces a generator of $(P^\sharp)$. Let $G:=\{\xi =0\}$. Then $G\supset 4C$ and $\mathscr{O}_H{\mathscr{K}}=\mathscr{O}_H(-G)$. Hence, ${\mathscr{K}}$ is generated by $\xi$ and $\beta$:
\begin{scorollary}\label{scorollary-9-6-8} The ideal ${\mathscr{K}}$ is a global complete intersection. More precisely, ${\mathscr{K}}=(\beta,\xi)$. \end{scorollary} Moreover, $\xi$ can be locally written as $\xi=y_3+(\text{higher degree terms})$. Thus we may assume that there exists a global section of $\mathscr{O}_X$ which is locally written as
$y_1y_3$, i.e. $y_1y_3\in \beta$. On the other hand by \ref{ge} the general member $D\in |-K_X|$ is given by $y_1+\xi'=0$ for some $\xi'\in (y_2,y_3,y_4)$. Then replacing $\beta$ with a linear combination of $\beta$ and $(y_1+\xi')\xi$ we may assume that $y_1y_3$ appears in $\beta$ with arbitrary coefficient $\lambda$ and $y_4^2$ appears in $\beta$ with coefficient $1$. In particular, there is a specific section $\beta^\circ$ which does not contain $y_1y_3$ (and contains $y_4^2$). Then $H$ can be given by the equations $\alpha^\circ=\beta=0$, where $\alpha^\circ:= \alpha+ y_1^2\beta^\circ$ contains $y_1^2y_4^2$. \end{scase}
Now applying Computation \ref{computation-lP=3a-III} with $l=3$ or $7$, we obtain the diagram \ref{main-theorem-divisorial}. The following examples show that this case does occur. \end{case}
\begin{example}\label{example-divisorial-lP=3} Let $Z \subset {\mathbb{C}}^5_{z_1,\ldots,z_5}$ be defined by \begin{eqnarray*} 0&=& z_2^2+z_3+z_4z_5^k-z_1^3,\qquad k\ge 1,\\ 0&=& z_1^2z_2^2+z_4^2-z_3z_5. \end{eqnarray*} Then $(Z,0)$ is a threefold singularity of type \type{cD_{5}}. Let $B \subset Z$ be the $z_5$-axis and let $f : X \to Z$ be the weighted $(1,1,4,2,0)$-blowup. The origin of the $z_3$-chart is a type \type{(IIA)} point $P$ with $\ell(P)=3$: \begin{equation*} \{-y_1^3y_3+y_2^2+y_3^2+y_4(y_1^2y_2^2+y_4^2)^k=0\}/{\boldsymbol{\mu}}_{4}(1,1,3,2), \end{equation*} where $(C,P)$ is the $y_1$-axis. In the $z_1$-chart we have a type \type{(III)} point. \end{example}
\begin{example} \label{example-divisorial-lP=7} As in \ref{example-divisorial-lP=3}, let $Z \subset \mathbb{C}^5_{z_1,\ldots,z_5}$ be defined by \begin{eqnarray*} 0 &=& z_2^2+z_1^2z_5+ z_3+z_4z_5^k, \qquad k\ge 1, \\ 0 &=& z_3z_5+z_1^5+z_4^2. \end{eqnarray*} Then the point $(Z,0)$ is of type \type{cD_{5}}. Let $B \subset Z$ be the $z_5$-axis and let $f : X \to Z$ be the weighted $(1,1,4,2,0)$-blowup. In the $z_1$-chart $X$ is smooth and the origin of the $z_3$-chart is a \type{(IIA)} point $P$ with $\ell(P)=7$: \begin{equation*} \{-y_1^7y_3+y_2^2+y_3^2-y_1^2y_4^2+y_4(y_1^5y_3+y_4^2)^k=0\}/{\boldsymbol{\mu}}_{4}(1,1,3,2), \end{equation*} where $(C,P)$ is the $y_1$-axis. \end{example}
\section{Cases $\ell(P)=4$ and $8$}\label{section-lP=4} In this section we assume that $\ell(P)\in \{4,\, 8\}$. We will show that Computation \ref{computation-lP=4+III} is applicable here and the possibility \ref{main-theorem-conic-bundle} occurs. \begin{case} According to \ref{equation-IIA-point} we may write \begin{equation}\label{equation-lP=4-alpha} \alpha=y_1^{\ell(P)}y_4+y_2^2+y_3^2+\delta y_4^3+c y_1^2y_4^2+\epsilon y_1y_3y_4+ \zeta y_1^2y_2y_3+\cdots, \end{equation} with $\delta,\, c,\, \epsilon,\, \zeta\in \mathbb{C}\{y_1^4\}$. It is easy to see that $y_4\in I^{\sharp (2)}$. Hence, \begin{equation}\label{equation-y14y4} -y_1^{\ell(P)}y_4\equiv y_2^2+y_3^2+ \zeta y_1^2y_2y_3 \mod I^{\sharp (3)}. \end{equation} By Proposition \ref{proposition-cases-lP} in the case $\ell(P)=4$ the variety $X$ has a type \type{(III)} point $R$ with $i_R(1)=1$ and $X$ is smooth outside $P$ in the case $\ell(P)=8$. \end{case}
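The congruence \eqref{equation-y14y4} follows from \eqref{equation-lP=4-alpha} by keeping track of orders with respect to $I^\sharp$: since $y_2,\, y_3\in I^\sharp$ and $y_4\in I^{\sharp (2)}$, we have
\begin{equation*}
\delta y_4^3\in I^{\sharp (6)},\qquad c y_1^2y_4^2\in I^{\sharp (4)},\qquad \epsilon y_1y_3y_4\in I^{\sharp (3)},
\end{equation*}
the remaining terms indicated by dots being of still higher order, so that modulo $I^{\sharp (3)}$ the relation $\alpha=0$ leaves only the four terms displayed in \eqref{equation-y14y4}.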
\begin{case} Taking Proposition
\ref{proposition-lP=4-XC} into account for any $n\ge 1$ we can write \begin{equation*} (\operatorname{gr}_C^n \mathscr{O})^\sharp =\bigoplus_{\substack{a+b+2c= n\\ b=0,\ 1}} \mathscr{O}_{C^\sharp}\cdot y_2^ay_3^by_4^c, \end{equation*} where $a,\, b,\, c\ge 0$, and \begin{equation} \label{equation-lP=4-gr1O} \vcenter{ \xymatrix@R=6pt@C=-3pt{ \operatorname{gr}_C^1\mathscr{O}= &(-1+3P^\sharp)\ar@{=}[d]&\mathbin{\tilde\oplus}& (-1+P^\sharp),\ar@{=}[d] \\ & \mathscr{A} && \mathscr{B} }} \end{equation} where $y_2$ (resp. $y_3$) is an $\ell$-basis of $\mathscr{A}$ (resp. $\mathscr{B}$) at $P$. \end{case} \begin{case} In the case $\ell(P)=4$ by \cite[Lemma 2.16]{Mori-1988}, since $i_R(1)=1$, the equation of $X$ at $R$ can be written as follows \begin{equation}\label{equation-lP=4-gamma} \gamma(z)=z_1z_4+q_2(z_2, z_3)+q_3(z_1, \dots,z_4),\quad q_3\in (z_2,z_3,z_4)^3, \end{equation} where $C$ is the $z_1$-axis and $q_2\in \mathbb{C}\cdot z_2^2+\mathbb{C}\cdot z_2z_3+\mathbb{C}\cdot z_3^2$. Hence, $z_4\in I^{(2)}$. \end{case}
\begin{case} Consider the map $\varphi: \tilde S^2\operatorname{gr}_C^1\mathscr{O} \hookrightarrow \operatorname{gr}_C^2\mathscr{O}$. Clearly, it is an isomorphism outside $\{P,\, R\}$ (resp. $\{P\}$) in the case $\ell(P)=4$ (resp. $\ell(P)=8$). The equality \eqref{equation-lP=4-gr1O} implies \begin{eqnarray*} \tilde S^2\operatorname{gr}_C^1\mathscr{O}&=& (-1+2P^\sharp)\mathbin{\tilde\oplus} (-1)\mathbin{\tilde\oplus} (-2+2P^\sharp), \\ \deg \operatorname{gr}_C^2\mathscr{O} &=& -4+\operatorname{len} \operatorname{coker} \varphi\ge -2. \end{eqnarray*} Furthermore, \begin{equation} \label{equation-lP=4-cokerP-cokerP} \operatorname{coker}_P \varphi =\mathbb{C}_{(\ell(P)/4)P}\cdot \overline{(y_1^2y_4)}. \end{equation} Hence, in the case $\ell(P)=4$, $\operatorname{coker}_R \varphi\neq 0$. Taking Proposition \ref{proposition-lP=4-XC}\ref{proposition-lP=4-XC-1} into account in this case we obtain $q_2\neq 0$ (see \eqref{equation-lP=4-gamma}) and \begin{equation} \label{equation-lP=4-cokerR-cokerR} \operatorname{coker}_R \varphi =\mathbb{C}_R\cdot \bar z_4\simeq \mathbb{C}. \end{equation} Thus in both cases $\ell(P)=4$ and $\ell(P)=8$ we have $\deg \operatorname{gr}_C^2\mathscr{O}=-2$. By Lemma \ref{lemma-lP=3+IIIa}
\begin{equation} \label{equation-lP=4-gr-2-C-O-1} \operatorname{gr}_C^2\mathscr{O} \simeq \mathscr{O}\oplus \mathscr{O}(-1)^{\oplus 2}. \end{equation} Furthermore, $\operatorname{gr}_C^2\mathscr{O}$ has an $\ell$-basis $y_2y_3$, $y_2^2$, $y_4$ at $P^\sharp$. Thus, \begin{equation} \label{equation-lP=4-gr-2-C-O} \operatorname{gr}_C^2\mathscr{O}= (0)\mathbin{\tilde\oplus}(-1+2P^\sharp)\mathbin{\tilde\oplus} (-1+2P^\sharp), \end{equation} since $\operatorname{H}^1 (\operatorname{gr}_C^2\omega)=0$ (cf. Lemma \ref{treating-equation-possibilities-lP=3+III-c}). \end{case}
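As a consistency check, the $\ell$-splitting of $\tilde S^2\operatorname{gr}_C^1\mathscr{O}$ used above can be recovered directly from \eqref{equation-lP=4-gr1O}, using that (as the splittings in this section reflect) a multiple $4P^\sharp$ is absorbed as $+1$ into the integer part:
\begin{eqnarray*}
\mathscr{A}^{\mathbin{\tilde\otimes} 2}&=&(-2+6P^\sharp)=(-1+2P^\sharp), \\
\mathscr{A}\mathbin{\tilde\otimes}\mathscr{B}&=&(-2+4P^\sharp)=(-1), \\
\mathscr{B}^{\mathbin{\tilde\otimes} 2}&=&(-2+2P^\sharp),
\end{eqnarray*}
which gives $\deg \tilde S^2\operatorname{gr}_C^1\mathscr{O}=-4$, in agreement with $\deg \operatorname{gr}_C^2\mathscr{O}=-4+\operatorname{len}\operatorname{coker}\varphi$.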
\begin{case} According to \eqref {equation-lP=4-cokerP-cokerP} and \eqref {equation-lP=4-cokerR-cokerR} \begin{equation} \label{equation-lP=4-quotient} \operatorname{gr}_C^2\mathscr{O}/\tilde S^2\operatorname{gr}_C^1\mathscr{O}\simeq \begin{cases} \mathbb{C}_P\oplus\mathbb{C}_R& \text{in the case $\ell(P)=4$,}\\ \mathbb{C}_{2P}& \text{in the case $\ell(P)=8$.} \end{cases} \end{equation} Let $\mathscr{F}$ be the sheaf with an $\ell$-structure defined by the conditions: \begin{eqnarray*} &&\tilde S^2\operatorname{gr}_C^1\mathscr{O} \subset \mathscr{F}\subset \operatorname{gr}_C^2\mathscr{O}, \\ &&\operatorname{gr}_C^2\mathscr{O}/\mathscr{F}=\mathbb{C}_P, \\ &&\operatorname{gr}_C^2\mathscr{O}^\sharp/\mathscr{F}^\sharp=\mathscr{O}^\sharp/(y_1^4)\cdot y_4^2. \end{eqnarray*} {}From \eqref{equation-lP=4-gr-2-C-O-1} one can see that there are two possibilities: \begin{numcases}{\mathscr{F}\simeq} \mathscr{O}(-1)^{\oplus 3},\label{lP=4-F-case-1} \\ \mathscr{O}\oplus\mathscr{O}(-1)\oplus \mathscr{O}(-2).\label{lP=4-F-case-2} \end{numcases} \end{case}
\begin{case}{\bf Case \eqref{lP=4-F-case-2}.} Since $\mathscr{F}\subset \operatorname{gr}_C^2\mathscr{O}$, by \eqref {equation-lP=4-gr-2-C-O} \begin{equation*} \mathscr{F}=(0)\mathbin{\tilde\oplus} (-1+2P^\sharp) \mathbin{\tilde\oplus} (-2+2P^\sharp). \end{equation*} Now we treat the cases $\ell(P)=4$ and $\ell(P)=8$ separately. \end{case}
\begin{slemma}\label{lemma-lP=4-case-does-not-occur} The case \eqref{lP=4-F-case-2} with $\ell(P)=4$ does not occur. \end{slemma}
\begin{proof} Consider the embedding \begin{equation*} z_1\cdot(0)\subset \mathscr{O}_C(-R)\cdot\mathscr{F} \subset \tilde S^2\operatorname{gr}_C^1\mathscr{O} = (-1+2P^\sharp) \mathbin{\tilde\oplus}(-1)\mathbin{\tilde\oplus} (-2+2P^\sharp). \end{equation*} Clearly, the image in the third summand is zero and the projection to the second summand is multiplication by a constant. Moreover, if this constant is zero, then the image of $z_1\cdot(0)$ is contained in $(-1+2P^\sharp)$. In other words, the summand $(0)\subset \mathscr{F}\subset \operatorname{gr}_C^2\mathscr{O}$ is contained in $(2P^\sharp)$ which is impossible by \eqref{equation-lP=4-gr-2-C-O}.
By changing the $\ell$-splitting as follows \begin{equation*} z_3 \longmapsto z_3+(\operatorname{const} ) z_2,\quad y_3 \longmapsto y_3+(\operatorname{const}) y_1^2y_2, \end{equation*} one can assume that $q_2\in \mathbb{C}^*\cdot z_2z_3$ and so $(0)\ni \overline{z_1z_4}=\overline{z_2z_3}$. Furthermore, $\mathscr{F}\supset (0)=\mathscr{O}_C\cdot z_4$ at $R$ by changing coordinates as $z_4 \mapsto z_4+\cdots$. Since $\mathscr{F}\subset \operatorname{gr}_C^2\mathscr{O}$, $(0)$ is sent isomorphically to $(0)\subset \operatorname{gr}_C^2\mathscr{O}$. We have the inclusion $\operatorname{gr}_C^2\mathscr{O}\supset \mathscr{O}_C\cdot \bar\beta=\mathscr{A}\mathbin{\tilde\otimes} \mathscr{B}(R)$ (see \eqref{equation-lP=4-gr1O}). Hence, $\bar\beta=\nu y_2y_3$ at $P^\sharp$, where $\nu$ is a unit. \begin{sclaim} $\bar\beta \operatorname{gr}_C^1\mathscr{O}$ is an $\ell$-subbundle of $\operatorname{gr}_C^3\mathscr{O}$ and the natural map $\mathscr{A}^{\mathbin{\tilde\otimes} 3}\to \operatorname{gr}_C^3\mathscr{O}/\bar\beta \operatorname{gr}_C^1\mathscr{O}$ induces the following $\ell$-exact sequence \begin{equation} \label{iP=4-equation-exact-sequence-l} \vcenter{ \xymatrix@R=6pt@C=20pt{ 0\ar[r]&\mathscr{A}^{\mathbin{\tilde\otimes} 3}(4P^\sharp) \ar[r]\ar@{=}[d]&\operatorname{gr}_C^3\mathscr{O}/\bar\beta \operatorname{gr}_C^1\mathscr{O} \ar[r]& \mathscr{B}^{\mathbin{\tilde\otimes} 3}(4P^\sharp)\ar@{=}[d] \ar[r]& 0, \\ &(P^\sharp)&&(-2+3P^\sharp) } } \end{equation} where $y_2y_4$ \textup(resp. $y_3y_4$\textup) is an $\ell$-basis of $\mathscr{A}^{\mathbin{\tilde\otimes} 3}(4P^\sharp)$ \textup(resp. $\mathscr{B}^{\mathbin{\tilde\otimes} 3}(4P^\sharp)$\textup). \end{sclaim}
\begin{proof} To check the assertion at $R$ we apply Proposition \ref{proposition-lP=4-XC}\ref{proposition-lP=4-XC-5} with $m=1$ and $\bar\beta=z_4$, and note that $\operatorname{gr}_C^3\mathscr{O}/\bar\beta \operatorname{gr}_C^1\mathscr{O}=\mathscr{O}_Cz_2^3\oplus \mathscr{O}_C z_3^3$. At $P^\sharp$, we note that $\bar\beta=\nu y_2y_3$ and use Proposition \ref{proposition-lP=4-XC}\ref{proposition-lP=4-XC-4} with $h=\alpha$ to show that $\operatorname{gr}_C^3\mathscr{O}$ has $\ell$-basis $y_2^3$, $y_2^2y_3$, $y_2y_4$, $y_3y_4$. By \eqref{equation-y14y4} \begin{equation*} y_1^{4}y_4+y_2^2+y_3^2+\zeta y_1^2\bar\beta=0. \end{equation*} Then $\operatorname{gr}_C^3\mathscr{O}/\bar\beta\operatorname{gr}_C^1\mathscr{O}$ has an $\ell$-free $\ell$-basis $y_2y_4$, $y_3y_4$ because $y_3^2y_2\equiv -y_2^3-y_1^4y_2y_4$, and we have $y_2^3\equiv -y_1^4y_2y_4\mod (\bar\beta)$ and $y_2y_4\equiv -y_2^3/y_1^4 \mod (\bar\beta)$. This shows the exactness because $y_3^3\equiv -y_1^4y_3y_4\mod (\bar\beta)$. \end{proof} To complete the proof of Lemma \ref{lemma-lP=4-case-does-not-occur} we note that the sequence \eqref{iP=4-equation-exact-sequence-l} implies that $\operatorname{H}^1(\operatorname{gr}_C^3\mathscr{O}/\bar\beta \operatorname{gr}_C^1\mathscr{O} )\neq 0$. This contradicts Lemma \ref{lemma-grC}. Thus the case \eqref{lP=4-F-case-2} with $\ell(P)=4$ does not occur. \end{proof}
\begin{slemma}\label{lemma-lP=8-case-does-not-occur} The case \eqref{lP=4-F-case-2} with $\ell(P)=8$ does not occur. \end{slemma}
\begin{proof} We have $0\neq \bar\beta \in \operatorname{H}^0((0))\subset \operatorname{H}^0(\mathscr{F})$. Since $\bar\beta \notin \operatorname{H}^0(\tilde S^2\operatorname{gr}_C^1\mathscr{O})$ and $\mathscr{F}/\tilde S^2\operatorname{gr}_C^1\mathscr{O}=\mathbb{C}\cdot \overline{y_1^6y_4}$, we have \begin{equation} \label{equation-lP=4-barbeta} \bar\beta=(\cdots )y_2^2+(\cdots )y_2y_3+(\operatorname{unit})y_1^6y_4. \end{equation} {}From the following relation \begin{equation*} \bar\beta\cdot (-1) \subset \mathscr{F}(-4P^\sharp)\subset \tilde S^2\operatorname{gr}_C^1\mathscr{O} = (-1+2P^\sharp) \mathbin{\tilde\oplus}(-1)\mathbin{\tilde\oplus} (-2+2P^\sharp) \end{equation*} we see that the image of $y_1^4\bar\beta$ in the third summand is zero and the projection to the second summand is multiplication by a constant. Moreover, if this constant is zero, then the image of $y_1^4\cdot(0)$ is contained in $(-1+2P^\sharp)$. In other words, the summand $(0)\subset \mathscr{F}\subset \operatorname{gr}_C^2\mathscr{O}$ is contained in $(2P^\sharp)$ which is impossible by \eqref{equation-lP=4-gr-2-C-O}. Therefore, \begin{equation*} y_1^4\bar\beta =(\cdots)y_2^2+(\operatorname{unit}) y_2y_3. \end{equation*} Then \eqref{equation-lP=4-barbeta} implies \begin{equation*} y_1^{10}y_4\equiv (\cdots)y_2^2+(\operatorname{unit}) y_2y_3 \mod I^{(3)}. \end{equation*} On the other hand, $y_4$, $y_2^2$, $y_2y_3$ form an $\ell$-basis of $\operatorname{gr}_C^2\mathscr{O}$, a contradiction. This proves Lemma \ref{lemma-lP=8-case-does-not-occur}. \end{proof}
\begin{case}{\bf Case \eqref{lP=4-F-case-1}.}\label{case-conic-bundle} If the coefficient of $y_1^2y_4$ in $\bar\beta$ is zero, then $\bar\beta \in \operatorname{H}^0(\mathscr{F})$. But in our case $\operatorname{H}^0(\mathscr{F})=0$ which gives us a contradiction.
Thus for a general choice of $\beta\in \operatorname{H}^0(\mathscr{O}_X)$ at $P$ we can write $\bar\beta=\nu y_2y_3+\eta y_1^2y_4+\cdots$ and so \begin{equation*} \beta=\theta y_4^2 +\nu y_2y_3+\eta y_1^2y_4+\cdots, \end{equation*} where $\theta,\, \nu,\, \eta$ are units. This means that $y_1^2y_4\in \beta$. Since $\operatorname{h}^0(\operatorname{gr}_C^2\mathscr{O})=1$, the ratio of the coefficients $\nu$ and $\eta$ is fixed. On the other hand, the ratio of the coefficients $\nu$ and $\theta$ is general \cite[Lemma 3.1.1]{Mori-Prokhorov-IIA-1}. Hence the ratio of the coefficients $\theta$ and $\eta$ can be chosen general. Then we apply Computation \ref{computation-lP=4+III}. One can see that the graph \eqref{graph-diagram-non-normal-lP=4+III} corresponds to a conic bundle. We obtain the diagram \ref{main-theorem-conic-bundle}. Examples \ref{example-conic-bundle-lP=4+III} and \ref{example-conic-bundle-lP=8} below show that both possibilities $\ell(P)=4$ and $8$ do occur. \end{case}
\begin{example}\label{example-conic-bundle-lP=4+III} Let $X$ be the hypersurface of weighted degree $10$ in the weighted projective space $\mathbb{P}(1,1,3,2,4)_{x_1,x_2, x_3, x_4, w}$
given by the equation
\begin{equation*}w\phi_6 -x_1^6\phi_4=0,\quad \text{where}\quad \begin{array}{lll} \phi_6&:=&x_1^4x_4+x_3^2+x_2^2w+\delta x_4^3, \\ \phi_4&:=&x_4^2+\nu x_2x_3+\eta x_1^2x_4+\mu x_1^3x_2 \end{array} \end{equation*}
(for simplicity we assume that the coefficients $\delta$, $\nu$, $\eta$ are general). Regard $X$ as a small analytic neighborhood of $C$. In the affine chart $U_w:=\{w\neq 0\}\simeq \mathbb{C}^4/{\boldsymbol{\mu}}_{4}(1,1,3,2)$ the variety $X$ is given by \begin{equation*} \phi_6(y_1,y_2,y_3,y_4, 1) - y_1^6\phi_4(y_1,y_2,y_3,y_4, 1)=0 \end{equation*} and $C$ is the $y_1$-axis. Clearly, it has the form \eqref{equation-lP=4-alpha}. So, the origin $P\in (X,C)$ is a type \type{(IIA)} point with $\ell(P)=4$.
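Written out in the coordinates of the chart $U_w$, the equation of $X$ reads
\begin{equation*}
y_1^4y_4+y_3^2+y_2^2+\delta y_4^3-y_1^6\bigl(y_4^2+\nu y_2y_3+\eta y_1^2y_4+\mu y_1^3y_2\bigr)=0.
\end{equation*}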
In the affine chart $U_1:=\{x_1\neq 0\}\simeq \mathbb{C}^4$ the variety $X$ is defined by \begin{equation*} w\phi_6(1,z_2,z_3,z_4, w) - \phi_4(1,z_2,z_3,z_4, w)=0. \end{equation*} If $\mu\neq 0$, then $X$ is smooth outside $P$, i.e. $(X,C)$ is as in the case \cite[(1.1.4)]{Mori-Prokhorov-IIA-1}. If $\mu=0$, then $(X,C)$ has a type \type{(III)} point at $(0,0,0,\eta)$.
Consider the surface $H=\{\phi_6=\phi_4=0\}\subset X$. Let $\psi : H^{\operatorname{n}}\to H$ be the normalization (we put $H^{\operatorname{n}}=H$ if $H$ is normal) and let $C^{\operatorname{n}}:=\psi^{-1}(C)$. Near $P$ the surface $H$ has the form \cite[9.3]{Mori-Prokhorov-IIA-1} (resp. \ref{computation-lP=4+III}) if $\mu \neq 0$ (resp. $\mu =0$). In particular, the singularities of $H^{\operatorname{n}}$ are rational. Note that $H$ is a fiber of the fibration $\pi: X\to D$ over a small disk around the origin given by the rational function $ \phi_4/w =\phi_6/x_1^6$ which is regular in a neighborhood of $C$. By the adjunction formula $\mathscr{O}_{X}(K_X)=\mathscr{O}_{X}(-1)$. Hence, \begin{equation*} -K_H\cdot C=-K_X\cdot C=\mathscr{O}_{\mathbb{P}}(1)\cdot C=\textstyle \frac14. \end{equation*}
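Here the adjunction computation is standard: $X\subset\mathbb{P}:=\mathbb{P}(1,1,3,2,4)$ is a hypersurface of weighted degree $10$ and the weights sum to $11$, so
\begin{equation*}
\mathscr{O}_{X}(K_X)=\mathscr{O}_{\mathbb{P}}(K_{\mathbb{P}}+X)\bigr|_X=\mathscr{O}_{\mathbb{P}}(-11+10)\bigr|_X=\mathscr{O}_{X}(-1),
\end{equation*}
while $\mathscr{O}_{\mathbb{P}}(1)\cdot C=\frac14$ because $C$ is the $(x_1,w)$-coordinate line $\simeq \mathbb{P}(1,4)$.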
\begin{sclaim} \begin{enumerate}[leftmargin=20pt] \item If $\mu \neq 0$, then $H$ is smooth outside $P$.
\item Assume that $\mu = 0$. Let $P_1\in C$ be the point $\{4\eta^2w=\nu^2 x_1^4\}$. Then $H$ is singular along $C$, the curve $C^{\operatorname{n}}$ is irreducible and rational, and $\psi_C:= C^{\operatorname{n}}\to C$ is a double cover branched over $\{P,\, P_1\}$. Moreover, $\psi^{-1}(P)$ is the only singular point of $H^{\operatorname{n}}$. \end{enumerate} \end{sclaim} \begin{proof} Direct computations show that $P_1\in H$ is a pinch point \textup(see \xref{definition-pinch-point}\textup) and any $Q\in C\setminus \{P,\, P_1\}$ is a double normal crossing point of $H$. \end{proof}
\begin{sclaim} \label{claim-conic-bundle-C-Q-Cartier} If $\mu=0$ \textup(resp. $\mu\neq 0$\textup), then $4C^{\operatorname{n}}$ \textup(resp. $8C^{\operatorname{n}}$\textup) is a Cartier divisor on $H^{\operatorname{n}}$. Moreover, $(C^{\operatorname{n}})^2=0$. \end{sclaim} \begin{proof} We consider only the case where $H$ is not normal, i.e. $\mu=0$. The case $\mu\neq 0$ is easier and left to the reader. Let $V\subset \mathbb{P}(1,1,3,2,4)$ be the weighted hypersurface given by $x_4=0$ and let $M:= H\cap V$. We have $M=\{x_3^2+w x_2^2=x_2x_3=x_4=0\}$. Let $\Gamma$ be the line $\{x_3=x_4=w=0\}$ and let $\Gamma^{\operatorname{n}}$ be its preimage on $H^{\operatorname{n}}$. Then $\psi^* 2M= 4 C^{\operatorname{n}}+ 2\Gamma^{\operatorname{n}}$. Since $2M$ is Cartier near $C$ and $\Gamma^{\operatorname{n}}$ is contained in the smooth locus of $H^{\operatorname{n}}$, the divisor $4 C^{\operatorname{n}}$ is Cartier on $H^{\operatorname{n}}$. Further, by the projection formula \begin{equation*} \psi^* 2M\cdot C^{\operatorname{n}} =4 V\cdot C =2. \end{equation*} Since $\Gamma$ is smooth and $C^{\operatorname{n}}\to C$ is \'etale over the point $\Gamma\cap C$, the curves $\Gamma^{\operatorname{n}}$ and $C^{\operatorname{n}}$ meet each other transversely at one point which is a smooth point of $H^{\operatorname{n}}$. Hence, $\Gamma^{\operatorname{n}}\cdot C^{\operatorname{n}}=1$ and so \begin{equation*} \label{equation-conic-bundle-H-c2} 4 (C^{\operatorname{n}})^2= \psi^* 2M \cdot C^{\operatorname{n}} - 2\Gamma^{\operatorname{n}}\cdot C^{\operatorname{n}}= 2-2=0.\qedhere \end{equation*} \end{proof}
\begin{sclaim}\label{claim-conic-bundle-surface-H} There exists a rational curve fibration $f_H: H\to B$, where $B\subset \mathbb{C}$ is a small disk around the origin, such that $C=f_H^{-1}(0)_{\operatorname{red}}$. \end{sclaim}
\begin{proof} Using the explicit description of the minimal resolution (see \cite[9.3]{Mori-Prokhorov-IIA-1}, \eqref{graph-diagram-non-normal-lP=4+III}) and Claim \ref{claim-conic-bundle-C-Q-Cartier}, one can see that the contraction exists on $H^{\operatorname{n}}$. Then, clearly, it descends to $H$. \end{proof}
\begin{sclaim} One has $\operatorname{H}^1(\hat X,\mathscr{O}_{\hat X})=0$, where $\hat X$ denotes the completion of $X$ along $C$. \end{sclaim} \begin{proof} Consider the case $\mu=0$ (the case $\mu\neq 0$ is similar and easier). By Claim \ref{claim-conic-bundle-surface-H} \ $4C^{\operatorname{n}}=\operatorname{div} (\varphi)$
for some regular function $\varphi\in \operatorname{H}^0(\mathscr{O}_{H^{\operatorname{n}}})$. Since $\varphi|_{C^{\operatorname{n}}}=0$, this function descends to $H$ and defines a Cartier divisor $\mathscr{C}$ on $H$ such that $\psi^* \mathscr{C}=4C^{\operatorname{n}}$. Consider the standard injection $\theta: \mathscr{O}_H\to \psi_* \mathscr{O}_{H^{\operatorname{n}}}$. Then there is the following commutative diagram \begin{equation*} \label{equation-conic-bundle-diagram} \begin{xy} \xymatrix@C=39pt{ & I_C\ar@{^{(}->}[d] & I_{C^{\operatorname{n}}}\ar@{^{(}->}[d] \\ 0\ar[r]&\mathscr{O}_H\ar@{->>}[d] \ar[r]^{\theta}& \psi_* \mathscr{O}_{H^{\operatorname{n}}}\ar@{->>}[d]\ar[r] &\operatorname{coker}(\theta)\ar[d]^{\simeq}\ar[r]&0 \\ 0\ar[r]&\mathscr{O}_C \ar[r]^{\theta}& \psi_* \mathscr{O}_{C^{\operatorname{n}}}\ar[r] &\psi_* \mathscr{O}_{C^{\operatorname{n}}}^{\langle\iota=-1\rangle}\ar[r]&0 } \end{xy} \end{equation*} where $\mathscr{O}_{C^{\operatorname{n}}}^{\langle\iota=-1\rangle}$ is the anti-invariant part with respect to the Galois involution $\iota: C^{\operatorname{n}}\to C^{\operatorname{n}}$. Since the last row in this diagram splits and $\operatorname{H}^1(\mathscr{O}_{C^{\operatorname{n}}})=0$, we have $\operatorname{H}^1(\operatorname{coker}(\theta))=0$. 
Using the snake lemma we see that the multiplication by $\varphi$ induces the following diagram \begin{equation*} \begin{xy} \xymatrix@C=33pt{ &0\ar[r]&\mathscr{O}_H\ar@{^{(}->}[d]^{\cdot \varphi} \ar[r]^{\theta}& \psi_* \mathscr{O}_{H^{\operatorname{n}}}\ar@{^{(}->}[d]^{\cdot \varphi} \ar[r] &\operatorname{coker}(\theta)\ar[d]^{\cdot \varphi=0}\ar[r]&0 \\ & 0\ar[r]&\mathscr{O}_H\ar@{->>}[d] \ar[r]^{\theta}& \psi_* \mathscr{O}_{H^{\operatorname{n}}}\ar@{->>}[d]\ar[r] &\operatorname{coker}(\theta)\ar[d]^{\simeq}\ar[r]&0 \\ 0\ar[r]& \operatorname{coker}(\theta)\ar[r]&\mathscr{O}_{\mathscr{C}}\ar[r]& \psi_* \mathscr{O}_{4C^{\operatorname{n}}}\ar[r] &\operatorname{coker}(\theta)\ar[r]&0 } \end{xy} \end{equation*} Since $\operatorname{H}^1(\operatorname{coker}(\theta))=0$, from the last row we see $\operatorname{H}^1(\mathscr{O}_{\mathscr{C}})\simeq \operatorname{H}^1(\mathscr{O}_{4C^{\operatorname{n}}})$. On the other hand, $4C^{\operatorname{n}}$ is a fiber of a rational curve fibration. Hence, $\operatorname{H}^1(\mathscr{O}_{\mathscr{C}})\simeq \operatorname{H}^1(\mathscr{O}_{4C^{\operatorname{n}}})=0$. Similar arguments show that $\operatorname{H}^1(\mathscr{O}_{m\mathscr{C}})=0$ for any $m>0$. Then by the Formal Function Theorem $\operatorname{H}^1(\hat H, \mathscr{O}_{\hat H})=0$, where $\hat H$ is the completion of $H$ along $C$. Applying the Formal Function Theorem again we obtain $\operatorname{H}^1(\hat X, \mathscr{O}_{\hat X})=0$. \end{proof} \begin{sclaim} The contraction $f_H: H\to B$ extends to a contraction $\hat f: \hat X\to \hat Z$. 
\end{sclaim} \begin{proof} Since $\operatorname{H}^1(\mathscr{O}_{\hat X})=0$, from the exact sequence \begin{equation*} 0 \xrightarrow{\hspace*{20pt}} \mathscr{O}_{\hat X} \xrightarrow{\hspace*{20pt}} \mathscr{O}_{\hat X} (\hat H) \xrightarrow{\hspace*{20pt}} \mathscr{O}_{\hat H} (\hat H)\xrightarrow{\hspace*{20pt}} 0 \end{equation*} we see that the map $\operatorname{H}^0(\mathscr{O}_{\hat X} (\hat H))\to \operatorname{H}^0(\mathscr{O}_{\hat H} (\hat H))$
is surjective. Hence there exists a divisor $\hat H_1\in |\mathscr{O}_{\hat X}|_{\hat C}$ such that $\hat H_1|_{\hat H}=\hat \mathscr{C}$. Then the divisors $\hat H$ and $\hat H_1$ define a contraction $\hat f: \hat X\to \hat Z$. \end{proof}
\begin{sclaim}\label{claim-contraction-exists} There exists a contraction $f:X\to Z$ that approximates $\hat f: \hat X\to \hat Z$. \end{sclaim} \begin{proof} Let $F$ be the scheme fiber of $f_H: H\to B$ over the origin. The above arguments show that the deformations of $F$ are unobstructed. Therefore the corresponding component of the Douady space is smooth and two-dimensional. This allows us to produce a contraction $f: X\to Z$. \end{proof} \end{example}
\begin{example} \label{example-conic-bundle-lP=8} Similarly to Example \ref{example-conic-bundle-lP=4+III}, let $X\subset \mathbb{P}(1,1,3,2,4)$ be a small analytic neighborhood of $C= \{\text{$(x_1,w)$-line}\}$ given by the equation $x_1^6\phi_4-w \phi_6 =0$, where \begin{eqnarray*} \phi_6&:=&x_3^2+x_2^2w+\delta x_4^3+cx_1^2x_4^2, \\ \phi_4&:=&x_4^2+\nu x_2x_3+\eta x_1^2x_4. \end{eqnarray*} It is easy to check that $P:=(0:0:0:0:1)$ is the only singular point of $X$ on $C$ and it is a type \type{(IIA)} point with $\ell(P)=8$. The rational function $\phi_4/w=\phi_6/x_1^6$ near $C$ defines a fibration whose central fiber $H$ is given by $\phi_4=\phi_6 =0$. Existence of a contraction $f: X\to Z$ can be shown similarly to Claim \ref{claim-contraction-exists}. Near $P$ the surface $H$ has the following form which can be reduced to \ref{computation-lP=4+III}: \begin{equation*} -c\eta y_1^4y_4+ y_3^2+y_2^2+\delta y_4^3-c\nu y_1^2 y_2y_3=\phi_4=0. \end{equation*} \end{example}
\begin{subexample-remark} \label{example-conic-bundle-normal-H} In a similar way we can construct an example of a $\mathbb{Q}$-conic bundle with $\ell(P)=5$ and normal $H$ \cite[(1.1.4)]{Mori-Prokhorov-IIA-1}. Consider $X\subset \mathbb{P}(1,1,3,2,4)$ given by $w\phi_6-x_1^6\phi_4=0$, where \begin{eqnarray*} \phi_6&:=&x_1^5 x_2+x_2^2w+x_3^2+\delta x_4^3+cx_1^2 x_4^2 \end{eqnarray*} and $\phi_4$ is as in \ref{example-conic-bundle-lP=4+III}. In the affine chart $U_w\simeq \mathbb{C}^4/{\boldsymbol{\mu}}_{4}(1,1,3,2)$ the origin $P\in (X,C)$ is a type \type{(IIA)} point with $\ell(P)=5$. It is easy to see that $X$ is smooth outside $P$. The rational function $\phi_4/w=\phi_6/x_1^6$ defines a fibration on $X$ near $C$ with central fiber $H=\{\phi_4=\phi_6=0\}$. \end{subexample-remark}
\section{Appendix} In this section we collect computations of resolutions of (non-normal) surface singularities appearing as general members
$H\in |\mathscr{O}_X|$. The technique is very similar to that used in \cite[\S 9]{Mori-Prokhorov-IIA-1}. \begin{assumption} \label{notation-blowup-1} Let $W:= \mathbb{C}^4_{y_1,\dots,y_4}/{\boldsymbol{\mu}}_4(1,1,3,2)$ and let $\sigma$ be the weight $\frac14(1,1,3,2)$. Let $P\in X$ be a three-dimensional terminal singularity of type \type{cAx/4} given in $W$ by the equation $\alpha=0$ with \begin{equation}\label{equation-alpha-computations} \alpha=y_1^ly_j+y_2^2+y_3^2+\delta y_4^{2k+1}+c y_1^2y_4^2+\epsilon y_1y_3y_4 +y_2\alpha'+\alpha'', \end{equation} where $j=3$ or $4$,\ $l\in \mathbb{Z}_{>0}$,\ $c, \epsilon\in \mathbb{C}$, \ $\delta\in \mathbb{C}^*$, \ $\alpha'\in (y_2,\, y_3,\, y_4)$,\ $\alpha''\in (y_2,\, y_3,\, y_4)^2$,\ $\sigma\mbox{-}\ord (\alpha')= 5/4$,\ $\sigma\mbox{-}\ord (\alpha'')> 3/2$,\ $k\ge 1$, and $2k+1$ is the smallest exponent of $y_4$ appearing in $\alpha$. We usually assume that all the summands in \eqref{equation-alpha-computations} have no common terms. \end{assumption}
\begin{sconstruction}\label{construction-w-blowup-X} Consider the weighted $\sigma$-blowup $\Phi: \tilde W\to W$. Let $\tilde X$ be the proper transform of $X$ on $\tilde W$ and $\Pi\subset \tilde W$ be the $\Phi$-exceptional divisor. Then $\Pi\simeq \mathbb{P}(1,1,3,2)$ and $\mathscr{O}_{\Pi}(\Pi)\simeq \mathscr{O}_{\mathbb{P}}(-4)$. Put \begin{equation}\label{equation-Lambda-computation-2} \begin{aligned} O&:=(1:0:0:0),\quad Q:=(0:0:1:0)\in \Pi, \\ \Lambda&:=\{y_2=\alpha_{\sigma=6/4}=0\}\subset \Pi. \end{aligned} \end{equation} \end{sconstruction}
\begin{sclaim}\label{claim-4-construction-blowup} $\operatorname{Sing}(\tilde X)$ consists of the curve $\Lambda$, the point $Q$, and the point $Q_1:=(0:0:0:1)$ \textup($Q_1\notin \Lambda$ only if $k=1$\textup). \end{sclaim}
\begin{sclaim}\label{claim-singularities-Lambda} $\tilde X$ has singularity of type \type{cA_1} at a general point of $\Lambda$. \end{sclaim}
\begin{proof}
Let $D\in |-K_X|$ be a general member and let $F$ be a general hyperplane section of $X$ passing through $0$. We may assume that $D$ is given by $y_1+y_2+\cdots=0$ (see \ref{ge}) and $F$ is given by $y_1y_3+\cdots=0$. It is easy to compute \begin{equation*} \Phi^*\left( K_X+D+\textstyle\frac 12 F\right)= K_{\tilde X}+\tilde D+\textstyle\frac 12 \tilde F+E\mathbin{\sim_{\scriptscriptstyle{\mathbb{Q}}}} 0, \end{equation*}
where $E=\left(\Pi|_{\tilde X}\right)_{\operatorname{red}}=\{y_2=0\}\subset \Pi$, so $E\simeq \mathbb{P}(1,3,2)$ with natural coordinates $y_1$, $y_3$, $y_4$. By the adjunction formula \cite[Th. 16.5]{Utah} \begin{equation}\label{equation-adjunction} \textstyle \left.
\left(K_{\tilde X}+\tilde D+\frac 12 \tilde F+E\right)\right|_E=K_{E}+\tilde D|_E+\frac 12 \tilde F|_E+\operatorname{Diff}_E(0)\mathbin{\sim_{\scriptscriptstyle{\mathbb{Q}}}} 0, \end{equation}
where $\operatorname{Diff}_E(0)$ is an effective divisor supported on $\Lambda$. Let $G:=\{y_1=0\}\subset E$. Then $G$ is a generator of $\operatorname{Cl}(E)\simeq \mathbb{Z}$. It is easy to see that $\tilde D|_E\sim G$, $\tilde F|_E\sim 4G$, and $\Lambda\sim 6G$.
By \eqref{equation-adjunction} we have $\operatorname{Diff}_E(0) \mathbin{\sim_{\scriptscriptstyle{\mathbb{Q}}}} 3 G$, i.e. $\operatorname{Diff}_E(0)=\frac 12 \Lambda$. By the inversion of adjunction $K_{\tilde X}+E$ is plt at a general point of $\Lambda$ \cite[Th. 17.6]{Utah}. Then by \cite[Th. 16.6]{Utah} the variety $\tilde X$ has singularity of type \type{cA_1} at a general point of $\Lambda$. \end{proof}
\begin{assumption}\label{notation-blowup} In the notation of \xref{notation-blowup-1} consider a non-normal surface singularity $H\ni 0$ given in $W$ by two $\sigma$-semi-invariant equations $\alpha=\beta=0$. We assume that the following conditions are satisfied: \begin{itemize}[leftmargin=20pt] \item $H$ is singular along $C:=\{\text{$y_1$-axis}\}/{\boldsymbol{\mu}}_4$ and smooth outside $C$, \item $\alpha$ satisfies the assumptions of \xref{notation-blowup-1}, \item $\operatorname{wt} \beta\equiv 0\mod 4$, \item $y_4^2$ appears in $\beta$ with coefficient $1$, \item $y_2y_3$ appears in $\beta$ with coefficient $\nu$ which can be taken general, \item the normalization of $H$ has only rational singularities and, for any resolution, the total transform of $C$ has only normal crossings. \end{itemize}
\begin{scase}\label{computations-notation} We can write the equations of $H$ in the following form \begin{eqnarray*} \alpha&=&y_1^ly_j+y_2^2+y_3^2+\delta y_4^{2k+1}+c y_1^2y_4^2+\epsilon y_1y_3y_4 +y_2\alpha'+\alpha'', \\ \beta&= &y_4^2+\nu y_2y_3+\lambda y_1y_3+\eta y_1^2y_4+y_2\beta'+\beta'', \end{eqnarray*} where $\alpha$ is as in \ref{notation-blowup-1},\quad $\eta, \nu, \lambda\in \mathbb{C}$, \ $\beta',\, \beta''\in (y_2,\, y_3,\, y_4)$,\ $\sigma\mbox{-}\ord (\beta')= 3/4$,\ and $\sigma\mbox{-}\ord (\beta'')> 1$. We usually assume that all the summands in $\beta$ have no common terms. Then $\beta'\in (y_1y_4,\, y_1y_2,\, y_2^2, y_2y_4)$. \end{scase} \end{assumption}
\begin{sremark}\label{remark-computation-normality} Since $H$ is singular along $C$, we have $y_1^sy_r\notin \beta$ for any $r\neq j$ and any $s$. Hence $\lambda\eta=0$. Moreover, if $\lambda\neq 0$, then $j=3$ and if $\eta\neq 0$, then $j=4$. We also may assume that $\beta''\in (y_2,\, y_3,\, y_4)^2$. \end{sremark}
\begin{sconstruction}\label{notation-computations--sing} As in \xref{construction-w-blowup-X} consider the weighted $\sigma$-blowup $\Phi: \tilde W\to W$. Let $\tilde H\subset \tilde W$ (resp. $\tilde C\subset \tilde W$) be the proper transform of $H$ (resp. $C$). Clearly, $\tilde C\cap \Pi=\{ O\}$. Denote (scheme-theoretically) \begin{equation*} \Xi:=\tilde H\cap \Pi = \{y_2^2=\beta_{\sigma=1}=0\} \subset \Pi. \end{equation*} The surface $\tilde H$ is smooth outside $\tilde C\cup \operatorname{Supp}(\Xi)$ and the set $\tilde C\cup \operatorname{Supp}(\Xi)$ is covered by two affine charts in $\tilde W$ \begin{equation*} U_1=\{y_1\neq 0\}\simeq \mathbb{C}^4,\qquad U_3=\{y_3\neq 0\}\simeq \mathbb{C}^4/{\boldsymbol{\mu}}_3(1,1,2,2). \end{equation*} Let $\varphi: \hat H\overset{\tau}{\longrightarrow} \tilde H^{\operatorname{n}}\overset{\tilde \psi}{\longrightarrow} \tilde H$ be the composition of the normalization and the minimal resolution and let $\hat \Xi_i\subset \hat H$ be the proper transform of $\Xi_i$. Let $\tilde C^{\operatorname{n}}=\tilde \psi^{-1}(\tilde C)_{\operatorname{red}}$ and let $\hat C\subset \hat H$ be the proper transform of $\tilde C^{\operatorname{n}}$. \end{sconstruction}
\begin{sclaim}[{\cite[9.1.4]{Mori-Prokhorov-IIA-1}}]\label{claim-1-construction-blowup} Any irreducible component $\Xi_i$ of $\Xi$ is a smooth rational curve passing through $Q$. Moreover, $\Xi=2\Xi_1$ \textup(resp. $\Xi=2\Xi_1+2\Xi_2$,\ $\Xi=4\Xi_1$\textup) if and only if $\lambda\neq 0$ \textup(resp. $\lambda=0$ and $\eta\neq 0$, $(\lambda, \eta)= (0,0)$\textup). \end{sclaim}
\begin{sclaim}[{\cite[9.1.5]{Mori-Prokhorov-IIA-1}}] \label{claim-2-construction-blowup} The point $Q\in \tilde H$ is Du Val of type \type{A_2}. In particular, $\tilde H$ is normal outside $\tilde C$. \end{sclaim}
\begin{sclaim}\label{claim-3-construction-blowup-a} If at least one of the constants $\lambda$ or $\eta$ is non-zero, then the singular locus of $\tilde H$ coincides with $\bigl(\operatorname{Supp}(\Xi)\cap \Lambda\bigr)\cup \{Q\}\cup \tilde C$. \end{sclaim} \begin{proof} Direct computations. \end{proof}
\begin{sremark} Let $\psi: H^{\operatorname{n}}\to H$ be the normalization and let $C^{\operatorname{n}}:=\psi^{-1}(C)_{\operatorname{red}}$. Since $H$ has double singularities at a general point of $C$, the map $\psi_C: C^{\operatorname{n}}\to C$ is either birational or a double cover. In particular, $C^{\operatorname{n}}$ has at most two components. \end{sremark}
\begin{sdefinition}\label{definition-pinch-point} A surface singularity $0\in S$ is called a \emph{pinch point} if it is analytically isomorphic to \begin{equation*} 0\in \{z_2^2+z_1z_3^2=0\}\subset \mathbb{C}^3. \end{equation*} \end{sdefinition}
\begin{sremark} The singular locus of a surface $S$ near a pinch point $0$ is a smooth curve $C$, the normalization $\psi: S^{\operatorname{n}}\to S$ of $S$ is smooth, and $\psi_C: \psi^{-1}(C)\to C$ is a double cover ramified over $0$. \end{sremark}
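This can be checked by a direct computation: substituting $z_2=uz_3$ into $z_2^2+z_1z_3^2$ gives $z_3^2(u^2+z_1)$, so the normalization is the smooth surface
\begin{equation*}
\psi:\ \mathbb{C}^2_{u,z_3}\longrightarrow S,\qquad (u,z_3)\longmapsto (z_1,z_2,z_3)=(-u^2,\,uz_3,\,z_3),
\end{equation*}
and $\psi^{-1}(C)=\{z_3=0\}\to C$, $u\mapsto z_1=-u^2$, is a double cover ramified over $0$.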
\begin{sclaim}\label{claim-construction-blowup-Du-Val} The singularities of $\tilde H^{\operatorname{n}}$ are Du Val outside the preimage of $\tilde C$. If moreover $\beta$ contains either $y_1y_3$ or $y_1^2y_4$, then the singularities of $\tilde H^{\operatorname{n}}$ are Du Val everywhere. \end{sclaim} \begin{proof} By Claim \ref{claim-2-construction-blowup} \ $\tilde H$ has a Du Val singularity at $Q$. Note that near $O$ the surface $\tilde H$ is a hypersurface singularity of the form $x_2^2=\phi(x_1,x_3)$, where $\tilde C$ is the $x_1$-axis. The normalization $\tilde \psi: \tilde H^{\operatorname{n}}\to \tilde H$ can be obtained as a sequence of successive blowups over $\tilde C$. In particular, $\tilde H^{\operatorname{n}}$ has only hypersurface singularities. Finally we note that a two-dimensional rational Gorenstein singularity must be Du Val. \end{proof}
\begin{sclaim}[{\cite[9.1.9]{Mori-Prokhorov-IIA-1}}] \label{claim-5-equation-notation-blowup} $K_{\tilde H}=\Phi^*K_H-\frac 34 \Xi$. \end{sclaim}
\begin{sclaim}\label{claim-computation-Xi-n} Assume that the singularities of $\tilde H^{\operatorname{n}}$ are Du Val \textup(cf. Claim \xref{claim-construction-blowup-Du-Val}\textup). Write $K_{\tilde H^{\operatorname{n}}}=\tilde \psi^* K_{\tilde H}-\Upsilon$, where $\Upsilon$ is the effective divisor defined by the conductor ideal. \begin{itemize} \item If $\Xi=2\Xi_1$, then $\hat \Xi_1^2=-4+\tau^*\Upsilon \cdot \hat \Xi_1$. \item If $\Xi=2\Xi_1+2\Xi_2$, then $\hat \Xi_i^2=-3+\tau^*\Upsilon \cdot\hat \Xi_i$. \end{itemize} \end{sclaim}
\begin{proof} Consider, for example, the first case $\Xi=2\Xi_1$. As in \cite[Claim 9.1.10]{Mori-Prokhorov-IIA-1}, $K_{\tilde H}\cdot \Xi_1=2$. Since $\tilde H$ has only Du Val singularities, we have \begin{equation*} K_{\hat H}=\varphi^*K_{\tilde H} - \tau^*\Upsilon,\qquad K_{\hat H}\cdot \hat \Xi_1= K_{\tilde H}\cdot \Xi_1 -\hat \Xi_1\cdot\tau^*\Upsilon. \end{equation*} Therefore, $\hat \Xi_1^2=-2- K_{\hat H}\cdot \hat \Xi_1= -4+\hat \Xi_1\cdot\tau^*\Upsilon$. \end{proof}
\begin{computation}\label{computation-lP=3+III-part2} In the notation of \xref{notation-blowup}, let \begin{eqnarray*} \alpha&=&y_1^3y_3+y_2^2+y_3^2+\delta y_4^3+c y_1^2y_4^2+\epsilon y_1y_3y_4 +y_2\alpha'+\alpha'', \\ \beta&= &y_4^2+\nu y_2y_3+\textstyle{\frac 1c} y_1y_3+y_2\beta'+\beta'', \end{eqnarray*} where $c$, $\nu$, $\delta$, $\epsilon$ are constants such that $c\neq 0$ and $\epsilon c\neq \delta$. We assume that the hypotheses of \xref{computations-notation} are satisfied. Then the graph $\Delta(H,C)$ has one of the following forms: \begin{equation}\label{graphs-computation-lP=3+III-part2} \vcenter{ \xy \xymatrix@R=5pt@C=10pt{ \mathrm{a)}&\overset{C}\bullet\ar@{-}[d]&\ovalh{\phantom{P}}\ar@{-}[d] \\ \underset{C}\bullet\ar@{-}[r] &\circ\ar@{-}[r] &\underset{3}\circ\ar@{-}[r]&\circ \ar@{-}[r]&\circ } \endxy } \hspace{30pt} \vcenter{ \xy \xymatrix@R=5pt@C=13pt{ \mathrm{b)}&&\ovalh{\phantom{P}}\ar@{-}[d] \\ \underset{C}{\bullet}\ar@{-}[r] &\circ\ar@{-}[r] &\underset{3}\circ\ar@{-}[r]&\circ \ar@{-}[r]&\circ } \endxy } \end{equation} where $\ovalh{\phantom{P}}$ is a non-empty connected Du Val subgraph. In the second case the normalization of $H$ is a bijection.
\begin{proof} We use the notation of \xref{notation-blowup}. By Remark \ref{remark-computation-normality}, \ $y_1^jy_2\notin \beta$ for any $j$. By \ref{claim-1-construction-blowup} we have $\Xi=2\Xi_1$, where $\Xi_1:=\{y_2=y_4^2+\frac 1c y_1y_3=0\}$. The first equation modulo the second one can be rewritten in the form \begin{equation*} \alpha=y_2^2+y_3^2+\delta y_4^3+\epsilon y_1y_3y_4 +y_2\alpha'+\alpha''. \end{equation*}
\begin{sclaim} The point $O\in \tilde H$ is analytically isomorphic to a hypersurface singularity of the form \begin{equation*} \{y_2^2+y_1y_4^3+ \theta y_1^ry_4^2=0\}\subset \mathbb{C}^3, \end{equation*} where again $\tilde C$ is the $y_1$-axis, $\theta\in \mathbb{C}$, and $r\ge 2$. \end{sclaim}
\begin{proof} In the affine chart $U_1$ the equations of $\tilde H$ have the following form \begin{eqnarray*} \alpha_{U_1}&=&y_2^2+y_1y_3^2+\delta y_1y_4^3+\epsilon y_1y_3y_4 +y_1y_2\alpha_\bullet+y_1^2\alpha_{\blacktriangle}, \\ \beta_{U_1}&=&y_4^2+\nu y_2y_3+\textstyle{\frac 1c} y_3+y_2\beta_\bullet+y_1\beta_{\blacktriangle}, \end{eqnarray*} where $\alpha_\bullet\in (y_2,y_3,y_4)$, $\alpha_{\blacktriangle}$, $\beta_{\blacktriangle}\in (y_2,y_3,y_4)^2$, $\beta_{\bullet}\in (y_2,y_4)$. {}From the second equation we obtain \begin{equation*} y_3= -cu (y_4^2+y_2\beta_\circ+y_1\beta_{\scriptscriptstyle \triangle}), \end{equation*} where $\beta_\circ\in (y_2, y_4)$, $\beta_{\scriptscriptstyle \triangle}\in (y_2, y_4)^2$, and $u$ is a unit such that $u(0)=1$. Consider the ideal \begin{equation*} \mathfrak I:=\left(y_1^2y_4^2,\, y_2^3,\, y_1y_2^2,\, y_1y_2y_4,\, y_1y_4^4\right). \end{equation*} Then we can eliminate $y_3$ in the first equation modulo $\mathfrak I$: \begin{equation*} \alpha_{U_1}\equiv y_2^2+(\delta -c\epsilon u) y_1y_4^3 \mod \mathfrak I. \end{equation*} Thus, for some $v_i\in \mathbb{C}\{y_1,y_2,y_4\}$, we can write \begin{equation*} \alpha_{U_1}=y_2^2+(\operatorname{unit}) y_1y_4^3+v_1y_1^2y_4^2+v_2y_2^3+v_3y_1y_2^2+v_4y_1y_2y_4+v_5y_1y_4^4. \end{equation*} Clearly, the last equation is analytically equivalent to the desired form. \end{proof}
\begin{scorollary}\label{scorollary-lP=3+III-sing} Let $\tilde \psi: \tilde H^{\operatorname{n}}\to \tilde H$ be the blowup of $\tilde C$. Then $\tilde H^{\operatorname{n}}$ coincides with the normalization and has exactly one singular point which is of type \type{A_1}. Moreover, if $r=2$ and $\theta\neq 0$, then the preimage $\tilde C^{\operatorname{n}}:=\tilde \psi^{-1}(\tilde C)_{\operatorname{red}}$ has two components and $\tilde C^{\operatorname{n}}\to \tilde C$ is a double cover. If $\theta=0$, then $\tilde C^{\operatorname{n}}$ is irreducible and $\tilde C^{\operatorname{n}}\to \tilde C$ is a bijection (near $O$). If $r>2$ and $\theta\neq 0$, then the total transform of $\tilde C^{\operatorname{n}}$ on the minimal resolution is not a normal crossing divisor. \end{scorollary}
\begin{sclaim}\label{claim-new-11-3} The intersection $\Xi_1\cap \operatorname{Sing}(\tilde H)$ consists of three points: $O$, $Q$, and the point $O'\in \Xi_1\cap \Lambda\setminus\{O\}= \{(0:0: -(\delta -c\epsilon)c : \delta -c\epsilon)\}$. \end{sclaim}
Now, to finish the proof of \ref{computation-lP=3+III-part2}, we notice that by Claim \ref{claim-computation-Xi-n} we have $\hat \Xi_1^2=-3$ because $\tau^*\Upsilon$ meets $\hat \Xi_1$ transversely. This completes the proof. \end{proof}
\begin{computation}\label{computation-lP=3a-III} In the notation of \xref{notation-blowup}, let \begin{eqnarray*} \alpha&=&y_1^ly_3+y_2^2+y_3^2+\delta y_4^{2k+1}+c y_1^2y_4^2+\epsilon y_1y_3y_4 +y_2\alpha'+\alpha'', \\ \beta&=&y_4^2+\nu y_2y_3+\lambda y_1y_3+y_2\beta'+\beta'', \end{eqnarray*} where $l\equiv 3\mod 4$,\ \ $k\ge 1$. We assume that the hypotheses of \xref{computations-notation} are satisfied, $\lambda$ is general with respect to $\delta$ and $c$, and if $l>3$, then $c\neq 0$. Then the preimage of $C$ on the normalization is irreducible and the graph $\Delta(H,C)$ has the following form: \begin{equation}\label{graph-diagram-non-normal} \vcenter{\hbox{ \xy \xymatrix@R=10pt@C=19pt{ &\circ\ar@{-}[d] \\ \underset C \bullet \ar@{-}[r] &\underset 3\circ\ar@{-}[r]&\circ\ar@{-}[r]&\circ \\ &\circ\ar@{-}[u] } \endxy } } \end{equation} \end{computation}
\begin{proof} We use the notation of \xref{notation-blowup}. By Remark \ref{remark-computation-normality}, \ $y_1^jy_2\notin \beta$ for any $j$. We may also assume that $\alpha''$ does not contain any term of the form $y_4^r$. By \ref{claim-1-construction-blowup} we have $\Xi=2\Xi_1$, where $\Xi_1:=\{y_2=y_4^2+\lambda y_1y_3=0\}$. Since $\lambda\neq 0$, by Claim \ref{claim-3-construction-blowup-a} the set $\operatorname{Sing}(\tilde H)$ is contained in $\tilde C\cup \{Q\}\cup \Lambda$.
\begin{sclaim} The intersection $\tilde H\cap \Lambda$ consists of $O$ and two more distinct points $P_1$ and $P_2$. Moreover, $\tilde H$ meets $\Lambda$ transversely at $P_1$ and $P_2$ and has singularities of type \type{A_1} at these points. \end{sclaim} \begin{proof}
Consider the hypersurface $V\subset W$ defined by $\beta=0$. Let $\tilde V\subset \tilde W$ be its proper transform. So, $\tilde H=\tilde X\cap\tilde V$. We have $(\tilde V|_\Pi\cdot \Lambda)_{\Pi}=4$ and the local intersection number at $O$
equals $2$. Since the base locus of the linear system on $\Pi$ generated by $\tilde V|_{\Pi}$ meets $\Lambda$ only at $O$, the last assertion follows by Bertini's theorem and Claim \ref{claim-singularities-Lambda}. \end{proof}
\begin{sclaim} $\tilde H\ni O$ is a pinch point. \end{sclaim} \begin{proof} In the affine chart $U_1$ the equations of $\tilde H$ have the form \begin{equation*} \begin{aligned} 0&=y_1^{(l+1)/4}y_3+y_2^2+y_1 (y_3^2+\delta y_1^{k-1}y_4^{2k+1}+c y_4^2+\epsilon y_3y_4 +y_2\alpha_{\bullet}+y_1\alpha_{\blacktriangle}), \\ 0&=y_4^2+\nu y_2y_3+\lambda y_3+y_2\beta_\bullet+y_1\beta_\blacktriangle, \end{aligned} \end{equation*} where $\beta_\bullet\in (y_2,\, y_3,\, y_4)$, $\beta_\blacktriangle\in (y_2,\, y_3,\, y_4)^2$. {}From the second equation we have \begin{equation*} y_3= u(y_4^2+y_2\beta_\circ+y_1\beta_{\scriptscriptstyle \triangle}), \end{equation*} where $u$ is a unit such that $u(0)=-1/\lambda$ and $\beta_\circ,\, \beta_{\scriptscriptstyle \triangle}\in (y_2,\, y_4)$. Eliminating $y_3$ we obtain \begin{multline*} uy_1^{(l+1)/4} (y_4^2+y_2\beta_\circ+y_1\beta_{\scriptscriptstyle \triangle})+y_2^2+ u^2y_1(y_4^2+y_2\beta_\circ+y_1\beta_{\scriptscriptstyle \triangle})^2+ \\ \delta y_1^ky_4^{2k+1}+c y_1y_4^2+\epsilon u y_1y_4(y_4^2+y_2\beta_\circ+y_1\beta_{\scriptscriptstyle \triangle}) +y_1y_2\alpha_{\bullet}+y_1^2\alpha_{\blacktriangle}=0, \end{multline*} {}From this we see that the equation of $\tilde H$ at $O$ can be written in the form $y_2^2+y_1y_4^2+\cdots=0$, i.e. $\tilde H\ni O$ is a pinch point. \end{proof} Now to finish the proof of \ref{computation-lP=3a-III} we notice that by Claim \ref{claim-computation-Xi-n} we have $\hat \Xi_1^2=-3$ \ because $\tau^*\Upsilon$ is reduced and meets $\hat \Xi_1$ transversely. \end{proof}
\begin{computation}\label{computation-lP=4+III} In the notation of \xref{notation-blowup}, let \begin{eqnarray*} \alpha&=&y_1^{4l}y_4+y_2^2+y_3^2+\delta y_4^{2k+1}+c y_1^2y_4^2+\epsilon y_1y_3y_4+y_2\alpha'+\alpha'', \\ \beta&=&y_4^2+\nu y_2y_3+\eta y_1^2y_4+ y_2\beta'+\beta'', \end{eqnarray*} where $l,\, k\ge 1$,\ $c,\, \epsilon \in \mathbb{C}$, \ $\delta,\, \eta\in \mathbb{C}^*$, and $\eta$ is general with respect to $\alpha$. We assume that the hypotheses of \xref{computations-notation} are satisfied. Then the graph $\Delta(H,C)$ has one of the following forms: \begin{equation}\label{graph-diagram-non-normal-lP=4+III} \vcenter{ \xy \xymatrix@R=7pt@C=11pt{ &\circ\ar@{-}[r]&\overset {3}\circ\ar@{-}[d]\ar@{-}[r]&\circ \\ \bullet \ar@{-}[r] &\underset {}\circ\ar@{-}[r]&\circ\ar@{-}[r]&\circ } \endxy} \end{equation} \end{computation}
\begin{proof} We use the notation of \xref{notation-blowup}. In our case $\Xi=2\Xi_1+2\Xi_2$, where $\Xi_1:=\{y_2=y_4=0\}$, \ $\Xi_2:=\{y_2=\eta y_1^2+ y_4=0\}$, and $\Xi_1\cap \Xi_2=\{Q\}$.
\begin{sclaim}\label{claim-construction-blowup-intersection-Lambda} \begin{enumerate}[leftmargin=20pt] \item $\operatorname{Sing}(\tilde H)\cap \Xi_1=\{O,\, Q\}$. \item $\operatorname{Sing}(\tilde H)\cap \Xi_2=\Xi_2\cap\Lambda \cup \{Q\}$. \end{enumerate} \end{sclaim}
\begin{proof} By Claim \ref{claim-3-construction-blowup-a} we have $\operatorname{Sing}(\tilde H)\subset \Lambda\cup \tilde C\cup \{Q\}$. On the other hand, $Q\notin \Lambda$ and $\Xi_1\cap \Lambda=\{O\}$. \end{proof}
\begin{sclaim} $O\in \tilde H$ is a pinch point. \end{sclaim}
\begin{proof} In the affine chart $U_1$ the equations of $\tilde H$ have the form \begin{eqnarray*} \alpha_{U_1}&=&y_2^2+y_1(y_1^{l-1}y_4+y_3^2+\delta y_4^{2k+1}+c y_4^2+\epsilon y_3y_4+y_2\alpha_\bullet +y_1\alpha_\blacktriangle), \\ \beta_{U_1}&=&y_4^2+\nu y_2y_3+\eta y_4+ y_2\beta_\bullet+y_1\beta_\blacktriangle, \end{eqnarray*} where $\alpha_\blacktriangle\in (y_2,\, y_3,\, y_4)^2$, $\beta_\bullet\in (y_2,\, y_4)$, $\alpha_\bullet\in (y_2,\, y_3,\, y_4)$, and $\beta_\blacktriangle\in (y_2,\, y_3,\, y_4)^2$ by Remark \ref{remark-computation-normality}. {}From $\beta_{U_1}$ we have \begin{equation*} y_4= u y_2y_3+ y_2\beta_1+y_1\beta_2,\quad \beta_1\in (y_2),\ \beta_2\in (y_2,y_3)^2, \ u=\operatorname{unit}. \end{equation*} Then we can eliminate $y_4$ from $\alpha_{U_1}$: \begin{equation*} y_2^2+y_1y_3^2+ \gamma_1y_1y_2+\gamma_2y_1 +\gamma_3 y_1^2=0, \end{equation*} where $\gamma_1\in (y_2,y_3)$, $\gamma_2\in (y_3)^4$, $\gamma_3\in (y_3)^2$. By completing the square we can put the equation of $\tilde H$ at $O$ to the following form \begin{equation*} y_2^2+(\operatorname{unit} )\cdot y_1y_3^2=0. \qedhere \end{equation*} \end{proof}
Recall that by Claim \ref{claim-construction-blowup-Du-Val} the surface $\tilde H^{\operatorname{n}}$ has only Du Val singularities. As in \cite[9.1.6]{Mori-Prokhorov-IIA-1} we see that the pair $(\tilde H, \Xi_1+\Xi_2)$ is not lc at $Q$ and lc outside $Q$ and $\tilde C$. Thus the dual graph $\Delta(H, C)$ has the form \begin{equation}\label{graph-diagram-non-normal-lP=4+IIIa} \vcenter{ \xy \xymatrix@R=1pt@C=23pt{ &&{\ovalv{\phantom{P}$\scriptstyle P$\phantom{P}}}\ar@{-}[r]&\overset {\Xi_2}\circ \ar@{-}[d] \\ \underset C \bullet\ar@{-}[rr] & &\underset {\Xi_1}\circ\ar@{-}[r]&\circ\ar@{-}[r]&\circ } \endxy } \end{equation} where \ovalh{$\scriptstyle P$} is a Du Val subgraph which is not empty (but possibly disconnected). By Claim \ref{claim-computation-Xi-n} we have $\hat \Xi_2^2=-3$ and $\hat \Xi_1^2= -2$. Further, \begin{equation*} \Xi_2\cdot (\Xi_1+\Xi_2)=\textstyle\frac 12 \Xi_2\cdot \Pi= -\frac 23,\quad \Xi_1\cdot \Xi_2=\frac 23, \quad \Xi_2^2=-\frac 43. \end{equation*} Then as in the proof of \cite[Lemma 3.8]{Mori-Prokhorov-IIA-1} we have $\deg \operatorname{Diff}_{\Xi_2}(0)=5/3$. There are two possibilities: $\operatorname{Diff}_{\Xi_2}(0)=\frac 23 Q+\frac 12 P_1+ \frac 12 P_2$ and $\operatorname{Diff}_{\Xi_2}(0)=\frac 23 Q+P_1$. Hence the singularities of $\tilde H$ on $\Xi_2\setminus \{Q\}$ are either two points which are of type \type {A_1} or one point which is of type \type{D_n} or \type{A_3}. The second possibility occurs only for some specific choice of $\eta$ (when two intersection points $\Lambda\cap \Xi_2$ coincide). We obtain \eqref{graph-diagram-non-normal-lP=4+III}. \end{proof}
\par
\noindent {\bf Acknowledgments.} The paper was written during the second author's visits to RIMS, Kyoto University. The author is very grateful to the institute for the invitation, support, and hospitality.
\end{document}
\begin{document}
\title[{Some summation theorems for truncated Clausen series}]{Some summation theorems for truncated Clausen series and applications}
\author[M.I. Qureshi, Saima Jabee$^{*}$ and Dilshad Ahamad]{M.I. Qureshi, Saima Jabee$^{*}$ and Dilshad Ahamad}
\address{M.I. Qureshi: Department of Applied Sciences and Humanities,
Faculty of Engineering and Technology,
Jamia Millia Islamia (A Central University),
New Delhi 110025, India.} \email{miqureshi\_delhi@yahoo.co.in}
\address{Saima Jabee: Department of Applied Sciences and Humanities, Faculty of Engineering and Technology, Jamia Millia Islamia (A Central University), New Delhi 110025, India. } \email{saimajabee007@gmail.com}
\address{Dilshad Ahamad: Department of Applied Sciences and Humanities,
Faculty of Engineering and Technology,
Jamia Millia Islamia (A Central University),
New Delhi 110025, India.} \email{dlshdhmd4@gmail.com}
\keywords{Watson summation theorem; Whipple summation theorem; Dixon summation theorem; Saalsch\"{u}tz summation theorem; Truncated series; Hypergeometric summation theorems; Mellin transforms.}
\subjclass[2010]{33C05, 33C20, 44A10.}
\thanks{*Corresponding author}
\begin{abstract} The main aim of this paper is to derive some new summation theorems for terminating and truncated Clausen's hypergeometric series with unit argument, when one numerator parameter and one denominator parameter are negative integers. Further, using our truncated summation theorems, we obtain the Mellin transforms of the product of the exponential function and Goursat's truncated hypergeometric function. \end{abstract} \begin{center} \today \end{center} \maketitle { \section{Introduction} In our investigations, we shall use the following standard notations:\\ $\mathbb{N}:=\{1,2,3,\dots\}$; $\mathbb{N}_0:=\mathbb{N}\cup\{0\}$; $\mathbb{Z}_0^-:=\mathbb{Z}^-\cup\{0\}=\{0,-1,-2,-3,\dots\}$.\\ The symbols $\mathbb{C}$, $\mathbb{R}$, $\mathbb{N}$, $\mathbb{Z}$, $\mathbb{R}^+$ and $\mathbb{R}^-$ denote the sets of complex numbers, real numbers, natural numbers, integers, positive and negative real numbers respectively.\\ The Pochhammer symbol $(\alpha)_{p}$ ~$(\alpha, p \in\mathbb{C})$ (\cite[p.22 eq(1), p.32 Q.N.(8) and Q.N.(9)]{Rainville}, see also \cite[p.23, eq(22) and eq(23)]{Srivastava3}), is defined by \begin{equation}\label{f.eq(g1)} (\alpha)_{p}:=\frac{\Gamma(\alpha+p)}{\Gamma(\alpha)}= \begin{cases} 1 & ;(p=0; \alpha \in \mathbb {C}\setminus \{0\})\\ \alpha (\alpha+1)\ldots (\alpha+n-1) & ;(p=n \in \mathbb {N}; \alpha \in \mathbb {C}\setminus {\mathbb{Z}_0^-})\\ \frac{(-1)^{n}k!}{(k-n)!} & ;(\alpha=-k; p=n; n,k \in \mathbb{N}_0; {0}\leq{n}\leq{k})\\ 0 & ;(\alpha=-k; p=n; n,k \in \mathbb{N}_0;{n}>{k})\\ \frac{(-1)^{n}}{(1-\alpha)_n} & ;(p=-n; n\in \mathbb{N}; \alpha \in \mathbb{C}\setminus \mathbb{Z}), \end{cases} \end{equation} it being understood conventionally that $(0)_0=1$ and assumed tacitly that the Gamma quotient exists.\\ The generalized hypergeometric function ${_p}F_q$ (\cite[Art.44, pp.73-74]{Rainville}, see also \cite{Bailey}), is defined by \begin{eqnarray}\label{f.eq(g2)} {_p}F_{q}\left[\begin{array}{r} \alpha_1, \alpha_2, 
\dots, \alpha_p;\\ ~\\ \beta_1, \beta_2, \dots, \beta_q;\end{array}\ z\right]={_p}F_{q}\left[\begin{array}{r} (\alpha_p);\\ ~\\ (\beta_q);\end{array}\ z\right]=\sum_{n=0}^{\infty}\frac{\displaystyle\prod_{j=1}^{p}(\alpha_j)_n}{\displaystyle\prod_{j=1}^{q}(\beta_j)_n}\frac{z^n}{n!}. \end{eqnarray} By convention, a product over the empty set is unity.\\
$\big(p, q \in \mathbb{N}_0;~ p\leqq{q+1}~;~ p\leqq{q}~ \text{and}~ |z|<\infty ;\big.$
$~\big. p=q+1 ~\text{and}~ |z|<1;~ p=q+1, |z|=1~\text{and}~\Re(\omega)>0;~p=q+1, |z|=1, z\neq 1~\text{and}~ -1< \Re(\omega) \leq0\big)$,\\ where \[\omega:=\sum_{j=1}^{q}{\beta}_j-\sum_{j=1}^{p}{\alpha}_j, \] \[\big(\alpha_j\in \mathbb{C}~(j=1, 2,\dots,p ); \beta_j\in\mathbb{C}\setminus\mathbb{Z}_0^-(j=1, 2, \dots, q) \big),\] where $\Re$ denotes the real part of complex number throughout the paper.\\ A finite series identity (reversal of the order of terms in finite summation) is given by \begin{eqnarray}\label{eq(g14)} \sum_{n=0}^{m}\Phi(n)=\sum_{n=0}^{m}\Phi(m-n);\quad{m\in\mathbb{N}_0}. \end{eqnarray} The truncated hypergeometric series is given by: \begin{eqnarray}\label{f.eq(g16)} &&\text{The sum of the first (m+1)-terms of infinite series } {_{p}}F_{q}\left[\begin{array}{r} (\alpha_p);\\ ~\\ (\beta_q);\end{array}\ z\right]\nonumber\\ &&\qquad\qquad={_{p}}F_{q}\left[\begin{array}{r} (\alpha_p);\\ ~\\ (\beta_q);\end{array}\ z\right]_{m}=\sum_{n=0}^{m}\frac{\displaystyle\prod_{j=1}^{p}(\alpha_j)_n}{\displaystyle\prod_{j=1}^{q}(\beta_j)_n}\frac{z^n}{n!}\nonumber\\ &&\qquad\qquad=\frac{[(\alpha_p)]_{m}z^{m}}{[(\beta_q)]_{m}m!}{_{q+2}}F_{p}\left[\begin{array}{r} -m, 1-(\beta_{q})-m,1;\\ ~\\ 1-(\alpha_p)-m;\end{array}\ \frac{(-1)^{p+q+1}}{z}\right], \end{eqnarray} where $(\alpha_{p}), (\beta_{q}), 1-(\alpha_{p})-m, 1-(\beta_{q})-m\in\mathbb{C}\setminus\mathbb{Z}_0^-$; $m\in\mathbb{N}_0$, and \begin{eqnarray} [(\alpha_p)]_{m}=(\alpha_1)_{m}(\alpha_2)_{m}\dots(\alpha_p)_{m}=\prod_{i=1}^{p}(\alpha_i)_{m}=\prod_{i=1}^{p}\frac{\Gamma(\alpha_i+m)}{\Gamma(\alpha_i)}, \end{eqnarray} with similar interpretation for others.\\ The terminating hypergeometric series (the hypergeometric polynomial) is given by \begin{eqnarray}\label{eq(g15)} {_{p+1}}F_{q}\left[\begin{array}{r} -m, (\alpha_p);\\ ~\\ (\beta_q);\end{array}\ z\right]&=&\frac{[(\alpha_p)]_{m}(-z)^{m}}{[(\beta_q)]_{m}}{_{q+1}}F_{p}\left[\begin{array}{r} -m, 1-(\beta_{q})-m;\\ ~\\ 1-(\alpha_p)-m;\end{array}\ 
\frac{(-1)^{p+q}}{z}\right],\nonumber\\ \end{eqnarray} where $(\alpha_{p}), (\beta_{q}), 1-(\alpha_{p})-m, 1-(\beta_{q})-m\in\mathbb{C}\setminus\mathbb{Z}_0^-$ and $m\in\mathbb{N}_0$.\\
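The definitions above can be checked in exact rational arithmetic. The following Python sketch (the helper names \texttt{poch} and \texttt{truncated\_pfq} are ours, not from the paper) implements the Pochhammer symbol \eqref{f.eq(g1)} and the truncated series, and verifies the negative-integer cases $(-k)_n=(-1)^nk!/(k-n)!$ for $0\le n\le k$ and $(-k)_n=0$ for $n>k$:

```python
from fractions import Fraction
from math import factorial

def poch(a, n):
    # Pochhammer symbol (a)_n = a(a+1)...(a+n-1), with (a)_0 = 1.
    out = Fraction(1)
    for j in range(n):
        out *= Fraction(a) + j
    return out

def truncated_pfq(tops, bottoms, z, m):
    # Sum of the first m+1 terms of pFq((tops); (bottoms); z).
    total = Fraction(0)
    for n in range(m + 1):
        term = Fraction(z) ** n / factorial(n)
        for a in tops:
            term *= poch(a, n)
        for b in bottoms:
            term /= poch(b, n)
        total += term
    return total

# (-k)_n = (-1)^n k!/(k-n)! for 0 <= n <= k, and (-k)_n = 0 for n > k.
assert poch(-5, 3) == Fraction((-1) ** 3 * factorial(5), factorial(2))
assert poch(-5, 7) == 0
```

Working with `Fraction` rather than floats makes such identity checks exact, which matters when denominator parameters are close to negative integers.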
If $\ell> m$; $m,\ell\in\mathbb{N}; \alpha,\beta,\gamma\in\mathbb{C}\setminus\mathbb{Z}_0^{-}$, then the series ${_3}F_2\left[\begin{array}{r} -m, \alpha,\beta;\\ ~\\ -\ell,\gamma;\end{array}\ z\right]$ does not terminate, and it admits the following series representation (see for example \cite[p.41, eq.(3.1.26); p.42, eq.(3.2.6)]{Luke} and \cite[p.438, eq.(7.2.3.5)]{Prudnikov2}) \begin{eqnarray} {_3}F_2\left[\begin{array}{r} -m, \alpha,\beta;\\ ~\\ -\ell,\gamma;\end{array}\ z\right]&=&\sum_{r=0}^{m}\frac{(-m)_r(\alpha)_r(\beta)_rz^r}{(-\ell)_r(\gamma)_rr!}+\sum_{r=\ell+1}^{\infty}\frac{(-m)_r(\alpha)_r(\beta)_rz^r}{(-\ell)_r(\gamma)_rr!}\nonumber\\ &&={_3}F_2\left[\begin{array}{r} -m, \alpha,\beta;\\ ~\\ -\ell,\gamma;\end{array}\ z\right]_m+\sum_{r=\ell+1}^{\infty}\frac{(-m)_r(\alpha)_r(\beta)_rz^r}{(-\ell)_r(\gamma)_rr!}. \end{eqnarray}
In its original notation, the higher-order Goursat hypergeometric function is represented by a double integral \cite[p. 286]{Goursat}. So we have \begin{eqnarray}\label{eq(1)} G\left(\begin{array}{r} \alpha,\beta;\\ \gamma, \delta;\end{array}z\right)&=&\frac{\Gamma{(\gamma)}\Gamma{(\delta)}}{\Gamma{(\alpha)}\Gamma{(\beta)}\Gamma{(\gamma-\alpha)}\Gamma{(\delta-\beta)}}\times\nonumber\\ &&\times\int_{0}^{1}\int_{0}^{1}u^{\alpha-1}v^{\beta-1}(1-u)^{\gamma-\alpha-1}(1-v)^{\delta-\beta-1}e^{zuv}dudv, \end{eqnarray} where $\Re{(\gamma)}>\Re{(\alpha)}>0$, $\Re{(\delta)}>\Re{(\beta)}>0$,\\ and \begin{eqnarray*} G\left(\begin{array}{r} \alpha,\beta;\\ \gamma, \delta;\end{array}z\right)&=&1+\sum_{n=1}^{\infty}\frac{(\alpha)_n(\beta)_n z^n}{(\gamma)_n(\delta)_n n!}\\ &=&{_2}F_{2}\left[\begin{array}{r} \alpha,\beta;\\ \gamma, \delta;\end{array}z\right], \end{eqnarray*}
where $\gamma,\delta\in\mathbb{C}\setminus\mathbb{Z}_0^{-}$ and $|z|<\infty$.\\ It is also well known that, under certain conditions, Goursat's function \cite[p. 286]{Goursat} ${_2}F_2(\alpha, \beta; \gamma, \delta; z)$ is defined by \begin{eqnarray}\label{eq(2)} {_2F_2} \left[\begin{array}{r} \alpha,\beta;\\ \gamma, \delta;\end{array}z\right]=\frac{\Gamma{(\delta)}}{\Gamma{(\alpha)}\Gamma{(\delta-\alpha)}}\int_{0}^{1}v^{\alpha-1}(1-v)^{\delta-\alpha-1}{_1F_1} \left[\begin{array}{r} \beta;\\ \gamma;\end{array}zv\right]dv, \end{eqnarray} where $\Re{(\delta)}>\Re{(\alpha)}>0$ and ${_1}F_{1}(\cdot)$ is Kummer's confluent hypergeometric function.\\ An integral transform that may be considered as the multiplicative analogue of the two-sided Laplace transform is known as the Mellin transform, which is closely related to the Fourier transform, the Laplace transform and other transforms. The Mellin transform is defined by \begin{eqnarray}\label{eq(1.9)} \mathcal{M}\{f(t);s\}=\int_{0}^{\infty}t^{s-1}f(t)dt=g(s), \end{eqnarray} where $s$ is a complex variable and the above integral exists under suitable convergence conditions.\\ Until 1990, only a few classical summation theorems for ${_2}F_1$ and ${_3}F_2$ were known. Subsequently, some progress has been made in generalizing these classical summation theorems (see \cite{Kim, Lavoie1, Lavoie2, Lavoie3, Miller, Rakha1, Rakha2}).\\ \section{Summation theorems for non-terminating, terminating and truncated Clausen series } In this section we verify the following terminating and truncated Clausen summation theorems by taking suitable values of the parameters. 
Since the series involved either terminate or are truncated, the convergence conditions can be relaxed in some cases without any loss of validity.\\ The classical Watson summation theorem for the non-terminating Clausen hypergeometric series of unit argument \cite[p.16, section 3.3(1)]{Bailey} takes the form \begin{eqnarray}\label{eq(2.1)} {_3F_2} \left[\begin{array}{r} \alpha,\beta, \gamma;\\ \frac{1+\alpha+\beta}{2}, 2\gamma;\end{array}1\right] &=& \frac{\Gamma{\left(\frac{1}{2}\right)}\Gamma{\left(\gamma+\frac{1}{2}\right)}\Gamma{\left(\frac{1+\alpha+\beta}{2}\right)}\Gamma{\left(\gamma+\frac{1-\alpha-\beta}{2}\right)}}{\Gamma{\left(\frac{1+\alpha}{2}\right)}\Gamma{\left(\frac{1+\beta}{2}\right)}\Gamma{\left(\gamma+\frac{1-\alpha}{2}\right)}\Gamma{\left(\gamma+\frac{1-\beta}{2}\right)}}, \end{eqnarray} provided $\Re(\gamma+\frac{1-\alpha-\beta}{2})>0; \frac{1+\alpha+\beta}{2}, \gamma, 2\gamma\in\mathbb{C}\setminus\mathbb{Z}_0^{-}$ and the parameters are adjusted in such a way that the series on the left-hand side is well defined.\\ When $\alpha=-2m$ in equation \eqref{eq(2.1)}, we get a Watson summation theorem for terminating hypergeometric series (containing $2m+1$ terms) \begin{eqnarray}\label{eq(2.2)} {_3F_2} \left[\begin{array}{r} -2m,\beta, \gamma;\\ \frac{1-2m+\beta}{2}, 2\gamma;\end{array}1\right] &=& \frac{\left(\frac{1}{2}\right)_m\left(\gamma+\frac{1-\beta}{2}\right)_m}{\left(\gamma+\frac{1}{2}\right)_m\left(\frac{1-\beta}{2}\right)_m}, \end{eqnarray} where $\beta,\gamma, 2\gamma, \frac{1+\beta}{2}-m\in\mathbb{C}\setminus\mathbb{Z}_0^{-}$; $m\in\mathbb{N}$.\\ When $\alpha=-2m-1$ in equation \eqref{eq(2.1)}, we get another Watson summation theorem for terminating hypergeometric series (containing $2m+2$ terms) \begin{eqnarray}\label{eq(2.3)} {_3F_2} \left[\begin{array}{r} -2m-1,\beta, \gamma;\\ \frac{\beta-2m}{2}, 2\gamma;\end{array}1\right] &=& 0, \end{eqnarray} where $\beta, \gamma, 2\gamma, \frac{\beta-2m}{2}\in\mathbb{C}\setminus\mathbb{Z}_0^{-}$; $m\in\mathbb{N}$.\\
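As a numerical sanity check (not part of the original derivation), the terminating Watson theorem \eqref{eq(2.2)} can be verified in exact rational arithmetic for small $m$. In the Python sketch below the helper names are our own choice:

```python
from fractions import Fraction as F
from math import factorial

def poch(a, n):
    # Pochhammer symbol (a)_n computed with exact rational arithmetic.
    out = F(1)
    for j in range(n):
        out *= F(a) + j
    return out

def watson_terminating(m, b, g):
    # Left side: 3F2(-2m, b, g; (1 - 2m + b)/2, 2g; 1), a sum of 2m+1 terms.
    lhs = sum(
        poch(-2 * m, r) * poch(b, r) * poch(g, r)
        / (poch((1 - 2 * m + F(b)) / 2, r) * poch(2 * F(g), r) * factorial(r))
        for r in range(2 * m + 1)
    )
    # Right side of the terminating Watson summation theorem.
    rhs = (poch(F(1, 2), m) * poch(F(g) + (1 - F(b)) / 2, m)
           / (poch(F(g) + F(1, 2), m) * poch((1 - F(b)) / 2, m)))
    return lhs, rhs

# Both sides agree exactly for several small m with generic rational b, g.
for m in (1, 2, 3):
    lhs, rhs = watson_terminating(m, F(1, 3), F(1, 4))
    assert lhs == rhs
```

For instance, with $m=1$, $\beta=1/3$, $\gamma=1/4$ both sides equal $7/6$.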
We recall a Watson summation theorem for the truncated Clausen series (containing $m+1$ terms) \cite[p.238, eq(2.2)]{Bailey1} \begin{eqnarray}\label{eq(2.4)} {_3F_2} \left[\begin{array}{r} -m,\alpha,\beta;\\ -2m,\frac{1+\alpha+\beta}{2};\end{array}1\right]_m &=& \frac{\left(\frac{1+\alpha}{2}\right)_m\left(\frac{1+\beta}{2}\right)_m}{\left(\frac{1}{2}\right)_m\left(\frac{1+\alpha+\beta}{2}\right)_m}, \end{eqnarray} where $\alpha,\beta,\frac{1+\alpha+\beta}{2}\in\mathbb{C}\setminus\mathbb{Z}_0^{-}; m\in\mathbb{N}$.\\ On setting $\gamma=-m-k-\frac{1}{2}$ in equation \eqref{eq(2.2)}, we obtain the following Watson summation theorem for the truncated Clausen series (containing $2m+1$ terms): \begin{eqnarray}\label{eq(2.5)} {_3F_2} \left[\begin{array}{r} -2m,\beta,-m-k-\frac{1}{2};\\ -2m-2k-1, \frac{1+\beta}{2}-m;\end{array}1\right]_{2m}&=& \frac{\left(\frac{1}{2}\right)_m\left(\frac{2+\beta+2k}{2}\right)_m}{\left(\frac{1-\beta}{2}\right)_m\left(1+k\right)_m}, \end{eqnarray} where $\beta,\frac{1+\beta}{2}-m\in\mathbb{C}\setminus\mathbb{Z}_0^{-}; m,k\in\mathbb{N}$.\\ On setting $\gamma=-m-k-\frac{1}{2}$ in equation \eqref{eq(2.3)}, we obtain another Watson summation theorem for the truncated Clausen series (containing $2m+2$ terms): \begin{eqnarray}\label{eq(2.6)} {_3F_2} \left[\begin{array}{r} -2m-1,\beta,-m-k-\frac{1}{2};\\ -2m-2k-1, \frac{\beta}{2}-m;\end{array}1\right]_{2m+1}&=&0, \end{eqnarray} where $\beta,\frac{\beta}{2}-m\in\mathbb{C}\setminus\mathbb{Z}_0^{-}; m,k\in\mathbb{N}$.\\ The following summation theorem for Clausen's non-terminating series, due to Saalsch\"{u}tz (\cite[p.21,section 3.8(2)]{Bailey}, \cite[p.534, Entry 12]{Prudnikov2}, see also \cite[p.73(2.4.4.4) and p.246(III.31)]{Slater}), is given by \begin{eqnarray}\label{eq(2.7)} &&{_3F_2} \left[\begin{array}{r} \alpha,\beta, \gamma+\delta-\alpha- \beta-1;\\ \gamma, \delta;\end{array}1\right]\nonumber\\
&&=\frac{\Gamma{(\gamma)}\Gamma{(\delta)}\Gamma{(\gamma-\alpha-\beta)}\Gamma{(\delta-\alpha-\beta)}}{\Gamma{(\gamma-\alpha)}\Gamma{(\gamma-\beta)}\Gamma{(\delta-\alpha)}\Gamma{(\delta-\beta)}}+\frac{1}{(\alpha+\beta-\gamma)}\frac{\Gamma{(\gamma)}\Gamma{(\delta)}}{\Gamma{(\alpha)}\Gamma{(\beta)}\Gamma{(\gamma+\delta-\alpha-\beta)}}\times\nonumber\\ &&\times{_3F_2} \left[\begin{array}{r} \gamma-\alpha,\gamma-\beta, 1;\\ \gamma-\alpha-\beta+1, \gamma+\delta-\alpha-\beta;\end{array}1\right], \end{eqnarray} where $\Re{(\delta-\alpha-\beta)}>0$ and $\Re{(\gamma-\alpha-\beta)}>0$.\\ If we set $\delta=-m+1-\gamma+\alpha+\beta$, $m$ being positive integer, in the right-hand side of equation \eqref{eq(2.7)}, we obtain Saalsch\"{u}tz's summation theorem for Clausen's terminating series (\cite[p.9, section 2.2(1)]{Bailey}, see also \cite[p.87, Th 29]{Rainville}) \begin{eqnarray}\label{eq(2.8)} {_3F_2} \left[\begin{array}{r} \alpha,\beta, -m;\\ \gamma, 1+\alpha+\beta-\gamma-m;\end{array}1\right] &=& \frac{\left(\gamma-\alpha\right)_m\left(\gamma-\beta\right)_m}{\left(\gamma\right)_m\left(\gamma-\alpha-\beta\right)_m}, \end{eqnarray} where $ \alpha,\beta,\gamma, 1+\alpha+\beta-\gamma-m\in\mathbb{C}\setminus\mathbb{Z}_0^{-}; m\in\mathbb{N}$.\\ On setting $\gamma=-m-k$ in equation \eqref{eq(2.8)}, we get Saalsch\"{u}tz's summation theorem for truncated series \begin{eqnarray}\label{eq(2.8c)} {_3F_2} \left[\begin{array}{r} -m,\alpha,\beta;\\ -m-k, 1+\alpha+\beta+k;\end{array}1\right]_m &=& \frac{\left(1+\alpha+k\right)_m\left(1+\beta+k\right)_m}{\left(1+k\right)_m\left(1+\alpha+\beta+k\right)_m}, \end{eqnarray} where $\alpha,\beta,1+\alpha+\beta+k\in\mathbb{C}\setminus\mathbb{Z}_0^{-}; m,k\in\mathbb{N}$.\\ Next we recall another Saalsch\"{u}tz's summation theorem for Clausen's terminating series (\cite[p.24]{Bailey1}, see also \cite[p.87, Theorem 30]{Rainville}) \begin{eqnarray}\label{eq(2.8a)} {_3F_2} \left[\begin{array}{r} -m, \alpha+m, 1+\alpha-\beta-\gamma;\\ 1+\alpha-\beta, 
1+\alpha-\gamma;\end{array}1\right] &=& \frac{\left(\beta\right)_m\left(\gamma\right)_m}{\left(1+\alpha-\beta\right)_m\left(1+\alpha-\gamma\right)_m}, \end{eqnarray} where $\alpha+m, 1+\alpha-\beta-\gamma,1+\alpha-\beta,1+\alpha-\gamma\in\mathbb{C}\setminus\mathbb{Z}_0^{-}; m\in\mathbb{N}$.\\ If we set $1+\alpha-\beta=-m-k$ in equation \eqref{eq(2.8a)}, we obtain the following Saalsch\"{u}tz's summation theorem for truncated series \begin{eqnarray}\label{eq(2.8b)} {_3F_2} \left[\begin{array}{r} -m, \beta-k-1, -m-k-\gamma;\\ -m-k, \beta-\gamma-m-k;\end{array}1\right]_m &=& \frac{\left(\beta\right)_m\left(\gamma\right)_m}{\left(1+k\right)_m\left(1+k+\gamma-\beta\right)_m}, \end{eqnarray} where $\beta-k-1,-m-k-\gamma,\beta-\gamma-m-k\in\mathbb{C}\setminus\mathbb{Z}_0; m,k\in\mathbb{N}$.\\ Next we recall Whipple's summation theorem for non-terminating Clausen's series \cite[p.16, section 3.4(1)]{Bailey} \begin{eqnarray}\label{eq(2.9)} &&{_3F_2} \left[\begin{array}{r} \alpha,1- \alpha, \beta;\\ \gamma, 2\beta-\gamma+1;\end{array}1\right]\nonumber\\ &&=\frac{\pi \Gamma{(\gamma)}\Gamma{(2\beta-\gamma+1)}} {2^{2\beta-1}\Gamma{\left(\frac{\alpha+2\beta-\gamma+1}{2}\right)}\Gamma{\left(\frac{\alpha+\gamma}{2}\right)}\Gamma{\left(\frac{2-\alpha+2\beta-\gamma}{2}\right)}\Gamma{\left(\frac{1-\alpha+\gamma}{2}\right)}}, \end{eqnarray} where $\Re{(\beta)}>0,\gamma,2\beta-\gamma+1\in\mathbb{C}\setminus\mathbb{Z}_0^{-}$.\\ On setting $\alpha=-2m$ in equation \eqref{eq(2.9)}, we get Whipple's summation theorem for terminating series \begin{eqnarray}\label{eq(2.10)} {_3F_2} \left[\begin{array}{r} -2m,1+2m, \beta;\\ \gamma, 1+2\beta-\gamma;\end{array}1\right]=\frac{\left(\frac{2-\gamma}{2}\right)_m\left(\frac{1-2\beta+\gamma}{2}\right)_m}{\left(\frac{1+\gamma}{2}\right)_m\left(\frac{2+2\beta-\gamma}{2}\right)_m}, \end{eqnarray} where $\beta,\gamma,1+2\beta-\gamma\in\mathbb{C}\setminus\mathbb{Z}_0^{-}$; $m\in\mathbb{N}$.\\ On setting $\alpha=-2m-1$ in equation \eqref{eq(2.9)}, we get 
another Whipple's summation theorem for terminating series \begin{eqnarray}\label{eq(2.11)} {_3F_2} \left[\begin{array}{r} -2m-1,2+2m, \beta;\\ \gamma, 1+2\beta-\gamma;\end{array}1\right]=\frac{(\gamma-1)(2\beta-\gamma)\left(\frac{3-\gamma}{2}\right)_m\left(\frac{2-2\beta+\gamma}{2}\right)_m}{(\gamma)(1+2\beta-\gamma)\left(\frac{2+\gamma}{2}\right)_m\left(\frac{3+2\beta-\gamma}{2}\right)_m}, \end{eqnarray} where $\beta,\gamma,1+2\beta-\gamma\in\mathbb{C}\setminus\mathbb{Z}_0^{-}$; $m\in\mathbb{N}$.\\ Another Whipple's summation theorem for Clausen's terminating series is given by (\cite[p. 157, eq(3.1)]{Qureshi4}, see also \cite[p. 190, eq(2)]{Dzhrbashyan} and \cite[p. 238, eq(3.1)]{Bailey1}) \begin{eqnarray}\label{eq(2.13)} &&{_3F_2} \left[\begin{array}{r} -m,\alpha, 1-\alpha;\\ \gamma, 1-\gamma-2m;\end{array}1\right]=\frac{\left(\frac{\gamma+\alpha}{2}\right)_{m} \left(\frac{\gamma-\alpha+1}{2}\right)_{m}}{\left(\frac{\gamma}{2}\right)_{m} \left(\frac{\gamma+1}{2}\right)_{m}}, \end{eqnarray} where $\alpha,1-\alpha, \gamma, 1-\gamma-2m, \frac{\alpha+\gamma+2m}{2},\frac{1-\alpha+\gamma+2m}{2}\in\,\mathbb{C}\setminus\mathbb{Z}_0^-$; $m\in\mathbb{N}$.\\ If we set $\gamma=-2m-k$ in equation \eqref{eq(2.13)}, we get Whipple summation theorem for truncated series containing (m+1)-terms \begin{eqnarray}\label{eq(2.14a)} {_3F_2} \left[\begin{array}{r} -m,\alpha, 1-\alpha;\\ -2m-k, 1+k;\end{array}1\right]_m=\frac{\left(\frac{2-\alpha+k}{2}\right)_{m} \left(\frac{1+\alpha+k}{2}\right)_{m}}{\left(\frac{2+k}{2}\right)_{m} \left(\frac{1+k}{2}\right)_{m}}, \end{eqnarray} where $\alpha,1-\alpha\in\mathbb{C}\setminus\mathbb{Z}_0^{-}; m,k\in\mathbb{N}$.\\ On setting $\gamma=-2m-2k$ in equation \eqref{eq(2.10)}, we get Whipple summation theorem for truncated series containing (2m+1)-terms \begin{eqnarray}\label{eq(2.15)} &&{_3F_2} \left[\begin{array}{r} -2m,1+2m,\beta;\\ -2m-2k, 
2\beta+2m+2k+1;\end{array}1\right]_{2m}=\frac{(1+2\beta+2k)_{2m}\left(1+k\right)_{2m}}{(1+2k)_{2m}\left(1+\beta+k\right)_{2m}}, \end{eqnarray} where $\beta,2\beta+2m+2k+1\in\mathbb{C}\setminus\mathbb{Z}_0^{-}; m,k\in\mathbb{N}$.\\ On setting $\gamma=-2m-2k-1$ in equation \eqref{eq(2.10)}, we get Whipple summation theorem for truncated series containing (2m+1)-terms \begin{eqnarray}\label{eq(2.15a)} &&{_3F_2} \left[\begin{array}{r} -2m,1+2m,\beta;\\ -2m-2k-1, 2\beta+2+2m+2k;\end{array}1\right]_{2m}=\frac{(2+2\beta+2k)_{2m}\left(\frac{3+2k}{2}\right)_{2m}}{(2+2k)_{2m}\left(\frac{3+2\beta+2k}{2}\right)_{2m}},\nonumber\\ \end{eqnarray} where $\beta,2\beta+2m+2k+2\in\mathbb{C}\setminus\mathbb{Z}_0^{-}; m,k\in\mathbb{N}$.\\ If we set $\gamma=-2m-2k-1$ in equation \eqref{eq(2.11)}, we get Whipple summation theorem for truncated series containing (2m+2)-terms \begin{eqnarray}\label{eq(2.16)} &&{_3F_2} \left[\begin{array}{r} -2m-1,2+2m,\beta;\\ -2m-2k-1, 2\beta+2m+2k+2;\end{array}1\right]_{2m+1}\nonumber\\ &&=\frac{(k+1)(2\beta+2m+2k+1)(2\beta+2k+1)_{2m}\left(2+k\right)_{2m}}{(2m+2k+1)(\beta+k+1)(2k+1)_{2m}\left(2+\beta+k\right)_{2m}}, \end{eqnarray} where $\beta,2\beta+2m+2k+2\in\mathbb{C}\setminus\mathbb{Z}_0^{-}; m,k\in\mathbb{N}$.\\ If we set $\gamma=-2m-2k-2$ in equation \eqref{eq(2.11)}, we get Whipple summation theorem for truncated series containing (2m+2)-terms \begin{eqnarray}\label{eq(2.16a)} &&{_3F_2} \left[\begin{array}{r} -2m-1,2+2m,\beta;\\ -2m-2k-2, 2\beta+2m+2k+3;\end{array}1\right]_{2m+1}\nonumber\\ &&=\frac{(2k+3)(\beta+m+k+1)(2\beta+2k+2)_{2m}\left(\frac{5+2k}{2}\right)_{2m}}{(m+k+1)(2\beta+2k+3)(2k+2)_{2m}\left(\frac{5+2\beta+2k}{2}\right)_{2m}}, \end{eqnarray} where $\beta,2\beta+2m+2k+3\in\mathbb{C}\setminus\mathbb{Z}_0^{-}; m,k\in\mathbb{N}$.\\ The classical Dixon's summation theorem for Clausen's non-terminating series \cite[p.13, section 3.1(1)]{Bailey} is given by \begin{eqnarray}\label{eq(2.18)} &&{_3F_2} \left[\begin{array}{r} \alpha,\beta, 
\gamma;\\ 1+\alpha-\beta, 1+\alpha-\gamma;\end{array}1\right]\nonumber\\ &&=\frac{\Gamma{\left(1+\frac{\alpha}{2}\right)}\Gamma{(1+\alpha-\beta)}\Gamma{(1+\alpha-\gamma)}\Gamma{\left(1+\frac{\alpha}{2}-\beta-\gamma\right)}}{\Gamma{\left(1+\alpha\right)}\Gamma{\left(1+\frac{\alpha}{2}-\beta\right)}\Gamma{\left(1+\frac{\alpha}{2}-\gamma\right)}\Gamma{\left(1+\alpha-\beta-\gamma\right)}}, \end{eqnarray} where $\Re{(\alpha-2\beta-2\gamma)}>-2; \alpha,\beta,\gamma\in\mathbb{C};1+\alpha-\beta,1+\alpha-\gamma,1+\frac{\alpha}{2},1+\frac{\alpha}{2}-\beta-\gamma\in\mathbb{C}\setminus\mathbb{Z}_0^-$.\\ Equation \eqref{eq(2.18)} can be written as \begin{eqnarray}\label{eq(2.19)} &&{_3F_2} \left[\begin{array}{r} \alpha,\beta, \gamma;\\ 1+\alpha-\beta, 1+\alpha-\gamma;\end{array}1\right]\nonumber\\ &&=\frac{\cos\left(\frac{\pi \alpha}{2}\right)\Gamma{(1-\alpha)}\Gamma{(1+\alpha-\beta)}\Gamma{(1+\alpha-\gamma)}\Gamma{\left(1+\frac{\alpha}{2}-\beta-\gamma\right)}}{\Gamma{\left(1-\frac{\alpha}{2}\right)}\Gamma{\left(1+\frac{\alpha}{2}-\beta\right)}\Gamma{\left(1+\frac{\alpha}{2}-\gamma\right)}\Gamma{\left(1+\alpha-\beta-\gamma\right)}}\\ &&=\frac{\cos\left(\frac{\pi \alpha}{2}\right)\Gamma{\left(\beta-\frac{\alpha}{2}\right)}\Gamma{(\gamma-\frac{\alpha}{2})}\Gamma{(1-\alpha)}\Gamma{\left(\beta+\gamma-\alpha\right)}}{\Gamma{\left(\beta-\alpha\right)}\Gamma{(\gamma-\alpha)}\Gamma{\left(1-\frac{\alpha}{2}\right)}\Gamma{\left(\beta+\gamma-\frac{\alpha}{2}\right)}}\times\nonumber\\ &&\times\frac{\sin\{\pi \left(\beta-\frac{\alpha}{2}\right)\}\sin\{\pi \left(\gamma-\frac{\alpha}{2}\right)\}\sin\{\pi \left(\beta+\gamma-\alpha\right)\}}{\sin\{\pi \left(\beta-\alpha\right)\}\sin\{\pi \left(\gamma-\alpha\right)\}\sin\{\pi \left(\beta+\gamma-\frac{\alpha}{2}\right)\}}. 
\end{eqnarray} On setting $\alpha=-2m$ in equation \eqref{eq(2.18)}, we obtain Dixon's summation theorem for terminating series \begin{eqnarray}\label{eq(2.20)} {_3F_2} \left[\begin{array}{r} -2m,\beta, \gamma;\\ 1-2m-\beta, 1-2m-\gamma;\end{array}1\right]=\frac{(\beta)_{m}(\gamma)_{m}2^{2m}\left(\frac{1}{2}\right)_m(\beta+\gamma)_{2m}}{(\beta)_{2m}(\gamma)_{2m}(\beta+\gamma)_{m}}, \end{eqnarray} where $\beta, \gamma\in \mathbb{C}\setminus\mathbb{Z}; m\in\mathbb{N}$.\\ On setting $\alpha=-2m-1$ in equation \eqref{eq(2.18)}, we obtain another Dixon's summation theorem for terminating series \begin{eqnarray}\label{eq(2.21)} &&{_3F_2} \left[\begin{array}{r} -2m-1,\beta, \gamma;\\ -2m-\beta, -2m-\gamma;\end{array}1\right]=0, \end{eqnarray} where $\beta, \gamma\in \mathbb{C}\setminus\mathbb{Z}; m\in\mathbb{N}$.\\ On setting $\beta=1+k$ in equation \eqref{eq(2.20)}, we obtain Dixon's summation theorem for truncated series \begin{eqnarray}\label{eq(2.22)} {_3F_2} \left[\begin{array}{r} -2m,1+k, \gamma;\\ -2m-k, 1-2m-\gamma;\end{array}1\right]_{2m}=\frac{(1+k)_{m}(\gamma)_{m}2^{2m}\left(\frac{1}{2}\right)_m(1+k+\gamma)_{2m}}{(1+k)_{2m}(\gamma)_{2m}(1+k+\gamma)_{m}}, \end{eqnarray} where $\gamma,1-2m-\gamma\in\mathbb{C}\setminus\mathbb{Z}_0^{-}; m,k\in\mathbb{N}$.\\ On setting $\gamma=1+k$ in equation \eqref{eq(2.22)}, we obtain another Dixon's summation theorem for truncated series \begin{eqnarray}\label{eq(2.23)} {_3F_2} \left[\begin{array}{r} -2m,1+k, 1+k;\\ -2m-k, -2m-k;\end{array}1\right]_{2m}=\frac{(1+k)_{m}(1+k)_{m}2^{2m}\left(\frac{1}{2}\right)_m(2+2k)_{2m}}{(1+k)_{2m}(1+k)_{2m}(2+2k)_{m}}, \end{eqnarray} where $m,k\in\mathbb{N}$.\\ On setting $\beta=1+k$ in equation \eqref{eq(2.21)}, we obtain Dixon's summation theorem for truncated series \begin{eqnarray}\label{eq(2.24)} &&{_3F_2} \left[\begin{array}{r} -2m-1,1+k, \gamma;\\ -2m-1-k, -2m-\gamma;\end{array}1\right]_{2m+1}=0, \end{eqnarray} where $\gamma,-2m-\gamma\in\mathbb{C}\setminus\mathbb{Z}_0^{-}; 
m,k\in\mathbb{N}$.\\ On setting $\gamma=1+k$ in equation \eqref{eq(2.24)}, we obtain another Dixon's summation theorem for truncated series \begin{eqnarray}\label{eq(2.25)} &&{_3F_2} \left[\begin{array}{r} -2m-1,1+k, 1+k;\\ -2m-1-k, -2m-1-k;\end{array}1\right]_{2m+1}=0, \end{eqnarray} where $m,k\in\mathbb{N}$.\\ On setting $\beta=-2m$ in equation \eqref{eq(2.18)}, we obtain Dixon's summation theorem for terminating series \begin{eqnarray}\label{eq(2.20a)} {_3F_2} \left[\begin{array}{r} -2m,\alpha, \gamma;\\ 1+\alpha+2m, 1+\alpha-\gamma;\end{array}1\right]=\frac{(1+\alpha)_{2m}\left(1+\frac{\alpha}{2}-\gamma\right)_{2m}}{\left(1+\frac{\alpha}{2}\right)_{2m}(1+\alpha-\gamma)_{2m}}, \end{eqnarray} where $\alpha, \gamma\in \mathbb{C}\setminus\mathbb{Z}; m\in\mathbb{N}$.\\ On setting $\beta=-2m-1$ in equation \eqref{eq(2.18)}, we obtain another Dixon's summation theorem for terminating series \begin{eqnarray}\label{eq(2.20b)} {_3F_2} \left[\begin{array}{r} -2m-1,\alpha, \gamma;\\ 2+\alpha+2m, 1+\alpha-\gamma;\end{array}1\right]=\frac{(1+\alpha)(2+\alpha-2\gamma)(2+\alpha)_{2m}\left(2+\frac{\alpha}{2}-\gamma\right)_{2m}}{(2+\alpha)(1+\alpha-\gamma)\left(2+\frac{\alpha}{2}\right)_{2m}(2+\alpha-\gamma)_{2m}}, \end{eqnarray} where $\alpha, \gamma\in \mathbb{C}\setminus\mathbb{Z}; m\in\mathbb{N}$.\\ On setting $\gamma=1+\alpha+2m+k$ in equation \eqref{eq(2.20a)}, we obtain Dixon's summation theorem for truncated series \begin{eqnarray}\label{eq(2.20c)} {_3F_2} \left[\begin{array}{r} -2m,\alpha, 1+\alpha+2m+k;\\ -2m-k, 1+\alpha+2m;\end{array}1\right]_{2m}=\frac{(1+\alpha)_{2m}\left(1+\frac{\alpha}{2}+k\right)_{2m}}{\left(1+\frac{\alpha}{2}\right)_{2m}(1+k)_{2m}}, \end{eqnarray} where $\alpha,1+\alpha+2m+k,1+\alpha+2m\in\mathbb{C}\setminus\mathbb{Z}_0^{-}; m,k\in\mathbb{N}$.\\
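As an illustration (ours, not part of the derivation), Dixon's terminating theorem \eqref{eq(2.20)} and its truncated companion \eqref{eq(2.22)} can be verified in exact rational arithmetic. The Python sketch below encodes the convention that the truncation $[\cdot]_N$ keeps the terms $n=0,\dots,N$; the sample value $\gamma=1/3$ is an arbitrary choice satisfying the stated parameter restrictions.

```python
from fractions import Fraction

def poch(a, n):
    """Pochhammer symbol (a)_n = a(a+1)...(a+n-1)."""
    r = Fraction(1)
    for i in range(n):
        r *= a + i
    return r

def trunc_3f2(a1, a2, a3, b1, b2, N):
    """Truncated Clausen series [3F2(a1,a2,a3;b1,b2;1)]_N: terms n = 0..N."""
    return sum(poch(a1, n) * poch(a2, n) * poch(a3, n)
               / (poch(b1, n) * poch(b2, n) * poch(Fraction(1), n))
               for n in range(N + 1))

b, g = Fraction(1, 2), Fraction(1, 3)
for m in range(1, 6):
    # eq. (2.20): Dixon's theorem for terminating series
    lhs = trunc_3f2(-2*m, b, g, 1 - 2*m - b, 1 - 2*m - g, 2*m)
    rhs = (poch(b, m) * poch(g, m) * 4**m * poch(Fraction(1, 2), m)
           * poch(b + g, 2*m) / (poch(b, 2*m) * poch(g, 2*m) * poch(b + g, m)))
    assert lhs == rhs
    # eq. (2.22): the truncated version, with beta = 1 + k a positive integer
    for k in range(0, 4):
        lhs = trunc_3f2(-2*m, 1 + k, g, -2*m - k, 1 - 2*m - g, 2*m)
        rhs = (poch(1 + k, m) * poch(g, m) * 4**m * poch(Fraction(1, 2), m)
               * poch(1 + k + g, 2*m)
               / (poch(1 + k, 2*m) * poch(g, 2*m) * poch(1 + k + g, m)))
        assert lhs == rhs
```

All comparisons are exact, so a passing run confirms the identities for the sampled parameters with no floating-point tolerance involved.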
On setting $\gamma=2+\alpha+2m+k$ in equation \eqref{eq(2.20b)}, we obtain another Dixon's summation theorem for truncated series \begin{eqnarray}\label{eq(2.20d)} &&{_3F_2} \left[\begin{array}{r} -2m-1,\alpha, 2+\alpha+2m+k;\\ -2m-k-1, 2+\alpha+2m;\end{array}1\right]_{2m+1}\nonumber\\ &&=\frac{(1+\alpha)(2+2k+\alpha+4m)(2+\alpha)_{2m}\left(1+\frac{\alpha}{2}+k\right)_{2m}}{(2+\alpha)(1+2m+k)\left(2+\frac{\alpha}{2}\right)_{2m}(1+k)_{2m}}, \end{eqnarray} where $\alpha,2+\alpha+2m+k,2+\alpha+2m\in\mathbb{C}\setminus\mathbb{Z}_0^{-}; m,k\in\mathbb{N}$.\\
Also, on setting $\gamma=-m$ in equation \eqref{eq(2.19)}, we obtain Dixon's theorem for Clausen's terminating series \begin{eqnarray}\label{eq(2.26)} &&{_3F_2} \left[\begin{array}{r} \alpha, \beta, -m;\\ 1+\alpha-\beta, 1+\alpha+m;\end{array}1\right]\nonumber\\ &&=\frac{\cos\left(\frac{\pi \alpha}{2}\right)\Gamma{(1-\alpha)}\Gamma{(1+\alpha-\beta)}\Gamma{(1+\alpha+m)}\Gamma{\left(1+\frac{\alpha}{2}-\beta+m\right)}}{\Gamma{\left(1-\frac{\alpha}{2}\right)}\Gamma{\left(1+\frac{\alpha}{2}-\beta\right)}\Gamma{\left(1+\frac{\alpha}{2}+m\right)}\Gamma{\left(1+\alpha-\beta+m\right)}}, \end{eqnarray} where $\alpha,\beta,1+\alpha-\beta,1+\alpha+m\in\mathbb{C}\setminus\mathbb{Z}_0^-; m\in \mathbb{N}$.\\ On setting $\beta=-m$ in equation \eqref{eq(2.18)}, we get Dixon's summation theorem for Clausen's terminating series \begin{eqnarray}\label{eq(2.27)} {_3F_2} \left[\begin{array}{r} -m, \alpha, \gamma;\\ 1+\alpha+m, 1+\alpha-\gamma;\end{array}1\right]=\frac{(1+\alpha)_m\left(1+\frac{\alpha}{2}-\gamma\right)_m}{\left(1+\frac{\alpha}{2}\right)_m(1+\alpha-\gamma)_m}, \end{eqnarray} where $\alpha,\gamma,1+\alpha+m,1+\alpha-\gamma\in\mathbb{C}\setminus\mathbb{Z}_0^-; m\in \mathbb{N}$.\\ In section 3, we discuss the applications of some summation theorems for truncated Clausen hypergeometric series in Mellin transforms of the product of exponential function and truncated Goursat hypergeometric function. 
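The terminating case \eqref{eq(2.27)} is convenient for a direct machine check, since both sides are rational functions of the parameters. The following sketch (ours) verifies it for the sample rational values $\alpha=1/2$, $\gamma=1/3$:

```python
from fractions import Fraction

def poch(a, n):
    """Pochhammer symbol (a)_n."""
    r = Fraction(1)
    for i in range(n):
        r *= a + i
    return r

def dixon_lhs(m, al, g):
    """Terminating 3F2(-m, al, g; 1+al+m, 1+al-g; 1) of eq. (2.27)."""
    return sum(poch(-m, n) * poch(al, n) * poch(g, n)
               / (poch(1 + al + m, n) * poch(1 + al - g, n) * poch(Fraction(1), n))
               for n in range(m + 1))

def dixon_rhs(m, al, g):
    """Right-hand side of eq. (2.27) in Pochhammer form."""
    return (poch(1 + al, m) * poch(1 + al/2 - g, m)
            / (poch(1 + al/2, m) * poch(1 + al - g, m)))

al, g = Fraction(1, 2), Fraction(1, 3)
for m in range(1, 10):
    assert dixon_lhs(m, al, g) == dixon_rhs(m, al, g)
```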
\section{Applications in Mellin transforms} In this section, we obtain Mellin transforms of the product of exponential function and truncated Goursat's function ${_2F_2}(\cdot)$ (when one numerator and one denominator parameters are negative integers), \begin{eqnarray}\label{eq(7.1)} \mathcal{M}\left\{e^{-\mu t}{_2F_2} \left[\begin{array}{r} -m, a;\\ -m-\ell, b;\end{array}\lambda t\right]_m;s\right\}&=&\int_{0}^{\infty}t^{s-1}e^{-\mu t}{_2F_2} \left[\begin{array}{r} -m, a;\\ -m-\ell, b;\end{array}\lambda t\right]_mdt\nonumber\\ &&=\frac{\Gamma(s)}{\mu^s}{_3F_2} \left[\begin{array}{r} -m, a,s;\\ -m-\ell, b;\end{array}\frac{\lambda}{\mu}\right]_m, \end{eqnarray} where $ \Re{(s)}>0$; $\Re{(\mu)}>0$ and $m,\ell\in\mathbb{N}$.\\ We derive some new results for Mellin transform as applications of summation theorems discussed in previous section.\\ \textbf{Case I.} On setting $\ell=m, a=\alpha, b=\frac{1+\alpha+\beta}{2}, \lambda=\mu$ and $s=\beta$ in equation \eqref{eq(7.1)} and using Watson's truncated summation theorem \eqref{eq(2.4)}, we obtain \begin{eqnarray}\label{eq(7.2)} \mathcal{M}\left\{e^{-\mu t}{_2F_2} \left[\begin{array}{r} -m,\alpha;\\ -2m,\frac{1+\alpha+\beta}{2};\end{array}\mu t\right]_m;\beta\right\}=\frac{\Gamma(\beta)}{\mu^{\beta}}\frac{\left(\frac{1+\alpha}{2}\right)_m\left(\frac{1+\beta}{2}\right)_m}{\left(\frac{1}{2}\right)_m\left(\frac{1+\alpha+\beta}{2}\right)_m}, \end{eqnarray} where $\alpha,\frac{1+\alpha+\beta}{2}\in\mathbb{C}\setminus\mathbb{Z}_0^{-}$; $m\in\mathbb{N}$; $\Re(\beta)>0,\Re(\mu)>0$.\\
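Since the truncated ${}_2F_2$ in \eqref{eq(7.1)} is a polynomial of degree $m$ in $t$, formula \eqref{eq(7.1)} follows term by term from $\int_0^\infty t^{s+n-1}e^{-\mu t}\,dt=\Gamma(s+n)/\mu^{s+n}$. The sketch below (ours, with arbitrarily chosen sample parameter values) confirms the reduction numerically by composite Simpson quadrature:

```python
import math

def poch(a, n):
    """Pochhammer symbol (a)_n, in floating point."""
    r = 1.0
    for i in range(n):
        r *= a + i
    return r

def trunc_2f2(m, l, a, b, x):
    """Truncated Goursat series [2F2(-m, a; -m-l, b; x)]_m: a degree-m polynomial."""
    return sum(poch(-m, n) * poch(a, n) * x**n
               / (poch(-m - l, n) * poch(b, n) * math.factorial(n))
               for n in range(m + 1))

def simpson(f, lo, hi, steps):
    """Composite Simpson rule with an even number of steps."""
    h = (hi - lo) / steps
    s = f(lo) + f(hi) + sum((4 if i % 2 else 2) * f(lo + i * h) for i in range(1, steps))
    return s * h / 3

# illustrative parameters: any with Re(s) > 0 and Re(mu) > 0 would do
m, l, a, b, lam, mu, s = 3, 2, 0.7, 1.9, 0.5, 1.3, 2
integrand = lambda t: t**(s - 1) * math.exp(-mu * t) * trunc_2f2(m, l, a, b, lam * t)
lhs = simpson(integrand, 0.0, 60.0, 60000)  # the tail beyond t = 60 is negligible
rhs = math.gamma(s) / mu**s * sum(
    poch(-m, n) * poch(a, n) * poch(s, n) * (lam / mu)**n
    / (poch(-m - l, n) * poch(b, n) * math.factorial(n)) for n in range(m + 1))
assert abs(lhs - rhs) < 1e-6
```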
\textbf{Case II.} Replacing $m$ by $2m$ and after that setting $\ell=2k+1,a=-m-k-\frac{1}{2}, b=\frac{1+\beta}{2}-m, \lambda=\mu$ and $s=\beta$ in equation \eqref{eq(7.1)} and using Watson's truncated summation theorem \eqref{eq(2.5)}, we obtain \begin{eqnarray}\label{eq(7.14)} \mathcal{M}\left\{e^{-\mu t}{_2F_2} \left[\begin{array}{r} -2m,-m-k-\frac{1}{2};\\ -2m-2k-1, \frac{1+\beta}{2}-m;\end{array}\mu t\right]_{2m};\beta\right\}=\frac{\Gamma(\beta)}{\mu^{\beta}}\frac{\left(\frac{1}{2}\right)_m\left(\frac{2+\beta+2k}{2}\right)_m}{\left(\frac{1-\beta}{2}\right)_m\left(1+k\right)_m}, \end{eqnarray} where $\left(\frac{1+\beta}{2}\right)-m\in\mathbb{C}\setminus\mathbb{Z}_0^{-}; m,k\in\mathbb{N};$ $\Re(\beta)>0, \Re(\mu)>0$.\\
\textbf{Case III.} Replacing $m$ by $2m+1$ and after that setting $\ell=2k,a=-m-k-\frac{1}{2}, b=\frac{\beta}{2}-m, \lambda=\mu$ and $s=\beta$ in equation \eqref{eq(7.1)} and using Watson's truncated summation theorem \eqref{eq(2.6)}, we obtain \begin{eqnarray}\label{eq(7.15)} \mathcal{M}\left\{e^{-\mu t}{_2F_2} \left[\begin{array}{r} -2m-1,-m-k-\frac{1}{2};\\ -2m-2k-1, \frac{\beta}{2}-m;\end{array}\mu t\right]_{2m+1};\beta\right\}=0, \end{eqnarray} where $\frac{\beta}{2}-m\in\mathbb{C}\setminus\mathbb{Z}_0^{-}; m,k\in\mathbb{N};$ $\Re(\beta)>0, \Re(\mu)>0$.\\
\textbf{Case IV.} On setting $\ell=k,a=\alpha, b=1+\alpha+\beta+k, \lambda=\mu$ and $s=\beta$ in equation \eqref{eq(7.1)} and using Saalsch\"{u}tz's truncated summation theorem \eqref{eq(2.8c)}, we obtain \begin{eqnarray}\label{eq(7.3)} \mathcal{M}\left\{e^{-\mu t}{_2F_2} \left[\begin{array}{r} -m,\alpha;\\ -m-k, 1+\alpha+\beta+k;\end{array}\mu t\right]_m;\beta\right\}= \frac{\Gamma(\beta)}{\mu^{\beta}}\frac{\left(1+\alpha+k\right)_m\left(1+\beta+k\right)_m}{\left(1+k\right)_m\left(1+\alpha+\beta+k\right)_m},\nonumber\\ \end{eqnarray} where $\alpha,1+\alpha+\beta+k\in\mathbb{C}\setminus\mathbb{Z}_0^{-}; m,k\in\mathbb{N};$ $\Re(\beta)>0, \Re(\mu)>0$.\\
\textbf{Case V.} On setting $\ell=k,a=\beta-k-1, b=\beta-\gamma-m-k, \lambda=\mu$ and $s=-m-k-\gamma$ in equation \eqref{eq(7.1)} and using Saalsch\"{u}tz's truncated summation theorem \eqref{eq(2.8b)}, we obtain \begin{eqnarray}\label{eq(g7.4)} &&\mathcal{M}\left\{e^{-\mu t}{_2F_2} \left[\begin{array}{r} -m,\beta-k-1;\\ -m-k, \beta-\gamma-m-k;\end{array}\mu t\right]_m;-m-k-\gamma\right\}\nonumber\\ &&\qquad=\frac{\Gamma(-m-k-\gamma)}{\mu^{-m-k-\gamma}}\frac{\left(\beta\right)_m\left(\gamma\right)_m}{\left(1+k\right)_m\left(1+k+\gamma-\beta\right)_m}, \end{eqnarray} where $\beta-k-1,\beta-\gamma-m-k\in\mathbb{C}\setminus\mathbb{Z}_0^{-}; m,k\in\mathbb{N};$ $\Re(-m-k-\gamma)>0, \Re(\mu)>0$.\\
\textbf{Case VI.} On setting $\ell=m+k,a=1-\alpha, b=1+k, \lambda=\mu$ and $s=\alpha$ in equation \eqref{eq(7.1)} and using Whipple's truncated summation theorem \eqref{eq(2.14a)}, we obtain \begin{eqnarray}\label{eq(7.5)} &&\mathcal{M}\left\{e^{-\mu t}{_2F_2} \left[\begin{array}{r} -m,1-\alpha;\\ -2m-k, 1+k;\end{array}\mu t\right]_m;\alpha\right\}=\frac{\Gamma(\alpha)}{\mu^{\alpha}}\frac{\left(\frac{2-\alpha+k}{2}\right)_{m} \left(\frac{1+\alpha+k}{2}\right)_{m}}{\left(\frac{2+k}{2}\right)_{m} \left(\frac{1+k}{2}\right)_{m}}, \end{eqnarray} where $1-\alpha\in\mathbb{C}\setminus\mathbb{Z}_0^{-}; m,k\in\mathbb{N}$ and $\Re(\alpha)>0,\Re(\mu)>0$.\\
\textbf{Case VII.} Replacing $m$ by $2m$ and after that setting $\ell=2k,a=1+2m, b=1+2m+2k+2\beta, \lambda=\mu$ and $s=\beta$ in equation \eqref{eq(7.1)} and using Whipple's truncated summation theorem \eqref{eq(2.15)}, we obtain \begin{eqnarray}\label{eq(7.6)} &&\mathcal{M}\left\{e^{-\mu t}{_2F_2} \left[\begin{array}{r} -2m,1+2m;\\ -2m-2k, 2\beta+1+2m+2k;\end{array}\mu t\right]_{2m};\beta\right\}\nonumber\\ &&\qquad=\frac{\Gamma(\beta)}{\mu^{\beta}}\frac{(1+2\beta+2k)_{2m}\left(1+k\right)_{2m}}{(1+2k)_{2m}\left(1+\beta+k\right)_{2m}}, \end{eqnarray} where $1+2m+2k+2\beta\in\mathbb{C}\setminus\mathbb{Z}_0^{-}; m,k\in\mathbb{N}$ and $\Re(\beta)>0,\Re(\mu)>0$.\\
\textbf{Case VIII.} Replacing $m$ by $2m$ and after that setting $\ell=2k+1,a=1+2m, b=2+2m+2k+2\beta, \lambda=\mu$ and $s=\beta$ in equation \eqref{eq(7.1)} and using Whipple's truncated summation theorem \eqref{eq(2.15a)}, we obtain \begin{eqnarray}\label{eq(7.7)} &&\mathcal{M}\left\{e^{-\mu t}{_2F_2} \left[\begin{array}{r} -2m,1+2m;\\ -2m-2k-1, 2\beta+2+2m+2k;\end{array}\mu t\right]_{2m};\beta\right\}\nonumber\\ &&\qquad=\frac{\Gamma(\beta)}{\mu^{\beta}}\frac{(2+2\beta+2k)_{2m}\left(\frac{3+2k}{2}\right)_{2m}}{(2+2k)_{2m}\left(\frac{3+2\beta+2k}{2}\right)_{2m}}, \end{eqnarray} where $2+2m+2k+2\beta\in\mathbb{C}\setminus\mathbb{Z}_0^{-}; m,k\in\mathbb{N}$ and $\Re(\beta)>0,\Re(\mu)>0$.\\
\textbf{Case IX.} Replacing $m$ by $2m+1$ and after that setting $\ell=2k,a=2+2m, b=2+2m+2k+2\beta, \lambda=\mu$ and $s=\beta$ in equation \eqref{eq(7.1)} and using Whipple's truncated summation theorem \eqref{eq(2.16)}, we obtain \begin{eqnarray}\label{eq(7.8)} &&\mathcal{M}\left\{e^{-\mu t}{_2F_2} \left[\begin{array}{r} -2m-1,2+2m;\\ -2m-2k-1, 2\beta+2m+2k+2;\end{array}\mu t\right]_{2m+1};\beta\right\}\nonumber\\ &&=\frac{\Gamma(\beta)}{\mu^{\beta}}\frac{(k+1)(2\beta+2m+2k+1)(2\beta+2k+1)_{2m}\left(2+k\right)_{2m}}{(2m+2k+1)(\beta+k+1)(2k+1)_{2m}\left(2+\beta+k\right)_{2m}}, \end{eqnarray} where $2+2m+2k+2\beta\in\mathbb{C}\setminus\mathbb{Z}_0^{-}; m,k\in\mathbb{N}$ and $\Re(\beta)>0,\Re(\mu)>0$.\\
\textbf{Case X.} Replacing $m$ by $2m+1$ and after that setting $\ell=2k+1,a=2+2m, b=3+2m+2k+2\beta, \lambda=\mu$ and $s=\beta$ in equation \eqref{eq(7.1)} and using Whipple's truncated summation theorem \eqref{eq(2.16a)}, we obtain \begin{eqnarray}\label{eq(7.9)} &&\mathcal{M}\left\{e^{-\mu t}{_2F_2} \left[\begin{array}{r} -2m-1,2+2m;\\ -2m-2k-2, 2\beta+2m+2k+3;\end{array}\mu t\right]_{2m+1};\beta\right\}\nonumber\\ &&=\frac{\Gamma(\beta)}{\mu^{\beta}}\frac{(2k+3)(\beta+m+k+1)(2\beta+2k+2)_{2m}\left(\frac{5+2k}{2}\right)_{2m}}{(m+k+1)(2\beta+2k+3)(2k+2)_{2m}\left(\frac{5+2\beta+2k}{2}\right)_{2m}}, \end{eqnarray} where $3+2m+2k+2\beta\in\mathbb{C}\setminus\mathbb{Z}_0^{-}; m,k\in\mathbb{N}$ and $\Re(\beta)>0,\Re(\mu)>0$.\\
\textbf{Case XI.} Replacing $m$ by $2m$ and after that setting $\ell=k,a=1+k, b=1-2m-\gamma, \lambda=\mu$ and $s=\gamma$ in equation \eqref{eq(7.1)} and using Dixon's truncated summation theorem \eqref{eq(2.22)}, we obtain \begin{eqnarray}\label{eq(7.10)} &&\mathcal{M}\left\{e^{-\mu t}{_2F_2} \left[\begin{array}{r} -2m,1+k;\\ -2m-k, 1-2m-\gamma;\end{array}\mu t\right]_{2m};\gamma\right\}\nonumber\\ &&=\frac{\Gamma(\gamma)}{\mu^{\gamma}}\frac{(1+k)_{m}(\gamma)_{m}2^{2m}\left(\frac{1}{2}\right)_m(1+k+\gamma)_{2m}}{(1+k)_{2m}(\gamma)_{2m}(1+k+\gamma)_{m}}, \end{eqnarray} where $1-2m-\gamma\in\mathbb{C}\setminus\mathbb{Z}_0^{-}; m,k\in\mathbb{N}$ and $\Re(\gamma)>0,\Re(\mu)>0$.\\
\textbf{Case XII.} Replacing $m$ by $2m$ and after that setting $\ell=k,a=1+k, b=-2m-k, \lambda=\mu$ and $s=1+k$ in equation \eqref{eq(7.1)} and using Dixon's truncated summation theorem \eqref{eq(2.23)}, we obtain \begin{eqnarray}\label{eq(7.11)} &&\mathcal{M}\left\{e^{-\mu t}{_2F_2} \left[\begin{array}{r} -2m,1+k;\\ -2m-k, -2m-k;\end{array}\mu t\right]_{2m};1+k\right\}\nonumber\\ &&=\frac{\Gamma(1+k)}{\mu^{1+k}}\frac{(1+k)_{m}(1+k)_{m}2^{2m}\left(\frac{1}{2}\right)_m(2+2k)_{2m}}{(1+k)_{2m}(1+k)_{2m}(2+2k)_{m}}, \end{eqnarray} where $m,k\in\mathbb{N}$ and $\Re(\mu)>0$.\\
\textbf{Case XIII.} Replacing $m$ by $2m+1$ and after that setting $\ell=k,a=\gamma, b=-2m-\gamma, \lambda=\mu$ and $s=1+k$ in equation \eqref{eq(7.1)} and using Dixon's truncated summation theorem \eqref{eq(2.24)}, we obtain \begin{eqnarray}\label{eq(7.16)} \mathcal{M}\left\{e^{-\mu t}{_2F_2} \left[\begin{array}{r} -2m-1, \gamma;\\ -2m-1-k, -2m-\gamma;\end{array}\mu t\right]_{2m+1};1+k\right\}=0, \end{eqnarray} where $\gamma,-2m-\gamma\in\mathbb{C}\setminus\mathbb{Z}_0^{-}; m,k\in\mathbb{N}$ and $\Re(\mu)>0$.\\
\textbf{Case XIV.} Replacing $m$ by $2m+1$ and after that setting $\ell=k,a=1+k, b=-2m-k-1, \lambda=\mu$ and $s=1+k$ in equation \eqref{eq(7.1)} and using Dixon's truncated summation theorem \eqref{eq(2.25)}, we obtain \begin{eqnarray}\label{eq(7.17)} \mathcal{M}\left\{e^{-\mu t}{_2F_2} \left[\begin{array}{r} -2m-1,1+k;\\ -2m-1-k, -2m-1-k;\end{array}\mu t\right]_{2m+1};1+k\right\}=0, \end{eqnarray} where $m,k\in\mathbb{N}$ and $\Re(\mu)>0$.\\
\textbf{Case XV.} Replacing $m$ by $2m$ and after that setting $\ell=k,a=\alpha, b=1+\alpha+2m, \lambda=\mu$ and $s=1+\alpha+2m+k$ in equation \eqref{eq(7.1)} and using Dixon's truncated summation theorem \eqref{eq(2.20c)}, we obtain \begin{eqnarray}\label{eq(7.12)} &&\mathcal{M}\left\{e^{-\mu t}{_2F_2} \left[\begin{array}{r} -2m,\alpha;\\ -2m-k, 1+\alpha+2m;\end{array}\mu t\right]_{2m};1+\alpha+2m+k\right\}\nonumber\\ &&=\frac{\Gamma(1+\alpha+k+2m)}{\mu^{1+\alpha+k+2m}}\frac{(1+\alpha)_{2m}\left(1+\frac{\alpha}{2}+k\right)_{2m}}{\left(1+\frac{\alpha}{2}\right)_{2m}(1+k)_{2m}}, \end{eqnarray} where $\alpha,1+\alpha+2m\in\mathbb{C}\setminus\mathbb{Z}_0^{-}; m,k\in\mathbb{N}$ and $\Re(1+\alpha+2m+k)>0,\Re(\mu)>0$.\\
\textbf{Case XVI.} Replacing $m$ by $2m+1$ and after that setting $\ell=k,a=\alpha, b=2+\alpha+2m, \lambda=\mu$ and $s=2+\alpha+2m+k$ in equation \eqref{eq(7.1)} and using Dixon's truncated summation theorem \eqref{eq(2.20d)}, we obtain \begin{eqnarray}\label{eq(7.13)} &&\mathcal{M}\left\{e^{-\mu t}{_2F_2} \left[\begin{array}{r} -2m-1,\alpha;\\ -2m-k-1, 2+\alpha+2m;\end{array}\mu t\right]_{2m+1};2+\alpha+2m+k\right\}\nonumber\\ &&=\frac{\Gamma(2+\alpha+2m+k)}{\mu^{2+\alpha+2m+k}}\frac{(1+\alpha)(2+2k+\alpha+4m)(2+\alpha)_{2m}\left(1+\frac{\alpha}{2}+k\right)_{2m}}{(2+\alpha)(1+2m+k)\left(2+\frac{\alpha}{2}\right)_{2m}(1+k)_{2m}},\nonumber\\ \end{eqnarray} where $\alpha,2+\alpha+2m\in\mathbb{C}\setminus\mathbb{Z}_0^{-}; m,k\in\mathbb{N}$ and $\Re(2+\alpha+2m+k)>0,\Re(\mu)>0$.\\
\textbf{Remark.} In the next communication \cite{Qureshi2}, we shall obtain the Mellin transform of the product of exponential function and infinite Goursat series ${_2F_2} \left[\begin{array}{r} -m,\alpha;\\ -m-\ell, \beta;\end{array}\lambda t\right]$.
\section*{Concluding remarks} In the previous sections, we have derived some summation theorems for Clausen's terminating and truncated hypergeometric series ${_3F_2}$ when one numerator and one denominator parameter are negative integers. In sequels to this paper, we derive summation formulae for the Gauss hypergeometric series ${_2F_1}$ and the Clausen hypergeometric series ${_3F_2}$ and discuss their applications (see, for example, \cite{Qureshi2,Qureshi3}). It is expected that these summation formulae will be of wide interest and will help to advance research in the field of special functions.\\ We conclude our present investigation by observing that several further hypergeometric summation theorems can be derived from known summation theorems in an analogous manner.}
\end{document}
\begin{document}
\title{Identities for the number of standard Young
tableaux in some $(k,\ell)$ hooks} \centerline{A. Regev}
{\bf Abstract}: Closed formulas are known for $S(k,0;n)$, the number of standard Young tableaux of size $n$ and with at most $k$ parts, where $1\le k\le 5$. Here we study the analogous problem for $S(k,\ell;n)$, the number of standard Young tableaux of size $n$ which are contained in the $(k,\ell)$ hook. We deduce some formulas for the cases $k+\ell\le 4$.
2010 Mathematics Subject Classification 05C30
\section{Introduction} Given a partition $\lambda$ of $n$, $\lambda\vdash n$, let $\chi^\lambda$ denote the corresponding irreducible $S_n$ character. Its degree is denoted by $\deg \chi^\lambda=f^\lambda$ and is equal to the number of standard Young tableaux (SYT) of shape $\lambda$~\cite{kerber},~\cite{macdonald},~\cite{sagan},~\cite{stanley}. The number $f^\lambda$ can be calculated for example by the hook formula~\cite[Theorem 2.3.21]{kerber},~\cite[Section 3.10]{sagan},~\cite[Corollary 7.21.6]{stanley}. We consider the number of SYT in the $(k,\ell)$ hook. More precisely, given integers $k,\ell,n\ge 0$ we denote \[ H(k,\ell;n)=\{\lambda=(\lambda_1,\lambda_2,\ldots)\mid \lambda\vdash n~\mbox{and}~ \lambda_{k+1}\le \ell\}\qquad\mbox{and}\qquad S(k,\ell;n)=\sum_{\lambda\in H(k,\ell;n)}f^\lambda. \] \subsection{The cases where $S(k,\ell;n)$ are known}\label{s1} For the ``strip'' sums $S(k,0;n)$ it is known~\cite{regev1}~\cite{stanley} that \[ S(2,0;n)={n\choose\lfloor\frac{n}{2}\rfloor}\quad\mbox{and}\quad S(3,0;n)=\sum_{j\ge 0}\frac{1}{j+1}{n\choose 2j}{2j\choose j}. \] Let $C_j=\frac{1}{j+1}{2j\choose j}$ be the Catalan numbers, then Gouyon-Beauchamps~\cite{gouyon}~\cite{stanley} proved that \[ S(4,0;n)=C_{\lfloor\frac{n+1}{2}\rfloor}\cdot C_{\lceil\frac{n+1}{2}\rceil}\quad\mbox{and}\quad S(5,0;n)=6\sum_{j=0}^{\lfloor\frac{n}{2}\rfloor}{n\choose 2j}\cdot C_j\cdot\frac{(2j+2)!}{(j+2)!(j+3)!}. \]
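These closed formulas are easy to test directly. The following Python sketch (ours, not part of the paper) enumerates the partitions in each strip, computes $f^\lambda$ by the hook length formula, and compares the strip sums $S(k,0;n)$ against the stated binomial, Motzkin, and Catalan expressions:

```python
from math import comb, factorial

def partitions(n, max_parts, max_part=None):
    """Partitions of n with at most max_parts parts, parts bounded by max_part."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    if max_parts == 0:
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, max_parts - 1, first):
            yield (first,) + rest

def f_syt(lam):
    """Number of SYT of shape lam, via the hook length formula."""
    n = sum(lam)
    conj = [sum(1 for p in lam if p > j) for j in range(lam[0])]
    hooks = 1
    for i, p in enumerate(lam):
        for j in range(p):
            hooks *= (p - j) + (conj[j] - i) - 1  # hook length of cell (i, j)
    return factorial(n) // hooks

def S_strip(k, n):
    """The strip sum S(k,0;n): shapes with at most k parts."""
    return sum(f_syt(lam) for lam in partitions(n, k))

catalan = lambda j: comb(2 * j, j) // (j + 1)
motzkin = [1, 1, 2, 4, 9, 21, 51, 127, 323]
for n in range(1, 9):
    assert S_strip(2, n) == comb(n, n // 2)
    assert S_strip(3, n) == motzkin[n]
    assert S_strip(4, n) == catalan((n + 1) // 2) * catalan((n + 2) // 2)
```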
As for the ``hook'' sums, until recently only $S(1,1;n)$ and $S(2,1;n)=S(1,2;n)$ have been calculated:
1. ~It easily follows that $S(1,1;n)=2^{n-1}$.
2. ~The following identity was proved in~\cite[Theorem 8.1]{regev2}: \begin{eqnarray}\label{motzkin.path.3} S(2,1;n)=~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \end{eqnarray} \[ ~~~~~~=\frac{1}{4}\left(\sum_{r=0}^{n-1}{n-r\choose{\lfloor\frac{n-r}{2}\rfloor}} {n\choose r} +\sum_{k=1}^{\lfloor\frac{n}{2}\rfloor-1}\frac{n!}{k!\cdot (k+1)!\cdot (n-2k-2)!\cdot (n-k-1)\cdot(n-k)}\right)+1. \]
\subsection{The main results} In Section~\ref{s2} we prove Equation~\eqref{rewrite8}, which gives (sort of) a closed formula for $S(3,1;n)$ in terms of the Motzkin-sums function. For the Motzkin-sums function see~\cite[sequence A005043]{sloane}. Equation~\eqref{rewrite8} in fact is a ``degree'' consequence of a formula of $S_n$ characters, of interest on its own, see Equation~\eqref{rewrite3}.
In Section~\ref{s3} we find some intriguing relations between the sums $S(4,0;n)$ and the ``rectangular'' sub-sums $S^*(2,2;n)$, see below identities~\eqref{b3} and~\eqref{b4}.
Finally, in Section~\ref{s4} we review some cases where the hook-sums $S(k,\ell;n)$ are related, in some rather mysterious ways, to humps calculations on Dyck and on Motzkin paths, see~\eqref{eq1},~\eqref{eq2}, and Theorem~\ref{motzkin.humps.1}.
As usual, in some of the above identities it is of interest to find bijective proofs, which might explain these identities.
{\bf Acknowledgement.} We thank D. Zeilberger for verifying some of the identities here by the WZ method.
\section{The sums $S(3,1;n)$ and the characters $\chi (3,1;n)$}\label{s2}
Define the $S_n$ character \begin{eqnarray}\label{rewrite5} \chi(k,\ell;n)=\sum_{\lambda\in H(k,\ell;n)} \chi^\lambda\qquad\mbox{so}\qquad\deg(\chi(k,\ell;n))=S(k,\ell;n). \end{eqnarray} \subsection{The Motzkin-sums function}
Define the $S_n$ character \begin{eqnarray}\label{rewrite6} \Psi(n)=\sum_{k=0}^{\lfloor n/2\rfloor}\chi^{(k,k,1^{n-2k})}\qquad\mbox{and denote}\qquad\deg\Psi(n)=a(n). \end{eqnarray} We call $\Psi(n)$ {\it the Motzkin-sums} character. Note that \[ \deg\chi^{(k,k,1^{n-2k})}=f^{(k,k,1^{n-2k})}=\frac{n!}{(k-1)!\cdot k!\cdot (n-2k)!\cdot (n-k)\cdot (n-k+1)}, \] hence \begin{eqnarray}\label{a2} a(n)=\sum_{k=1}^{\lfloor{n}/{2}\rfloor}\frac{n!}{(k-1)!\cdot k!\cdot (n-2k)!\cdot (n-k)\cdot (n-k+1)}. \end{eqnarray} By~\cite[sequence A005043]{sloane} it follows that $a(n)$ is the Motzkin-sums function. The reader is referred to~\cite{sloane} for various properties of $a(n)$. For example, $a(n)+a(n+1)=M_n$, where $M_n$ are the Motzkin numbers. Also $a(1)=0,~a(2)=1$ and $a(n)$ satisfies the recurrence: \begin{eqnarray} \mbox{for $n\ge 3$}\qquad \label{a1} a(n)=\frac{n-1}{n+1}\cdot(2\cdot a(n-1)+3\cdot a(n-2)). \end{eqnarray}
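The closed form \eqref{a2}, the relation $a(n)+a(n+1)=M_n$, and the recurrence \eqref{a1} are easy to cross-check by machine; the following sketch is ours:

```python
from fractions import Fraction
from math import comb, factorial

def a(n):
    """The Motzkin-sums function via the closed form (a2), for n >= 1."""
    return sum(factorial(n) // (factorial(k - 1) * factorial(k) * factorial(n - 2*k)
                                * (n - k) * (n - k + 1))
               for k in range(1, n // 2 + 1))

def motzkin(n):
    """Motzkin numbers M_n = sum_j C(n,2j) * Catalan(j)."""
    return sum(comb(n, 2*j) * comb(2*j, j) // (j + 1) for j in range(n // 2 + 1))

assert [a(n) for n in range(1, 8)] == [0, 1, 1, 3, 6, 15, 36]
for n in range(1, 15):
    assert a(n) + a(n + 1) == motzkin(n)          # a(n) + a(n+1) = M_n
for n in range(3, 15):
    # the recurrence (a1), checked in exact rational arithmetic
    assert a(n) == Fraction(n - 1, n + 1) * (2 * a(n - 1) + 3 * a(n - 2))
```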
Note also that for $n\ge 2$ Equation~\eqref{motzkin.path.3} can be written as \begin{eqnarray}\label{a3} S(2,1;n) =\frac{1}{4}\left(\sum_{r=0}^{n-1}{n-r\choose{\lfloor\frac{n-r}{2}\rfloor}} {n\choose r}+a(n)-1\right)+1. \end{eqnarray}
The asymptotic behavior of $a(n)$ can be deduced from that of $M_n$. We deduce it here, even though it is not needed in the sequel.
\begin{remark} As $n$ goes to infinity, \[ a(n)\simeq \frac{\sqrt 3}{8\cdot\sqrt{2\pi}}\cdot\frac{1}{n\sqrt n}\cdot 3^n\qquad\mbox{and}\qquad a(n)\simeq\frac{1}{4} \cdot M_n. \] \begin{proof} By standard techniques it can be shown that $a(n)$ has asymptotic behavior $$a(n)\simeq c\cdot \left(\frac{1}{n}\right)^g\cdot \alpha^n$$ for some constants $c,g$ and $\alpha$ -- which we now determine. By~\cite{regev1} \[ M_n\simeq
\frac{\sqrt 3}{2\sqrt{2\pi}} \cdot\left(\frac{1}{n}\right)^{3/2}\cdot 3^n. \] With \[
M_n=a(n)+a(n+1)\simeq c\cdot (1+\alpha)\cdot\left(\frac{1}{n}\right)^g\cdot\alpha^n \] this implies that $\alpha=3$, that $g=3/2$ and that $c=\frac{\sqrt 3}{8\cdot\sqrt{2\pi}}$.
\end{proof} \end{remark}
\subsection{The outer product of $S_m$ and $S_n$ characters}
Given an $S_m$ character $\chi_m$ and an $S_n$ character $\chi_n$, we can form their {\it outer} product $\chi_m\hat\otimes \chi_n$. The exact decomposition of $\chi_m\hat\otimes \chi_n$ is given by the Littlewood-Richardson rule~\cite{kerber},~\cite{macdonald},~\cite{sagan},~\cite{stanley}. In the special case that $\chi_n=\chi^{(n)}$, this decomposition is given, below, by Young's rule. Also \begin{eqnarray}\label{rewrite7} \deg (\chi_m\hat\otimes \chi^{(n)})=\deg (\chi_m)\cdot{n+m\choose n}. \end{eqnarray}
{\bf Young's Rule}~\cite{macdonald}: Let $\lambda=(\lambda_1,\lambda_2,\ldots)\vdash m$ and denote by $\lambda^{+n}$ the following set of partitions of $m+n$: \[ \lambda^{+n}=\{\mu\vdash n+m\mid \mu_1\ge \lambda_1\ge \mu_2\ge \lambda_2\ge\cdots\}. \] Then \[ \chi^\lambda\hat\otimes \chi^{(n)}=\sum_{\mu\in\lambda^{+n}}\chi^\mu. \] \begin{example}\label{rewrite4}~\cite{regev1},~\cite{stanley} Given $n$, it follows that \begin{eqnarray}\label{rewrite1} \chi^{(\lfloor n/2\rfloor)}\hat\otimes \chi^{(\lceil n/2\rceil)}=\chi(2,0;n),\quad\mbox{and by taking degrees,}\quad S(2,0;n)={n\choose \lfloor n/2 \rfloor}. \end{eqnarray} \end{example}
\subsection{A character formula for $\chi(3,1;n)$}
\begin{prop}\label{rewrite2} With the notations of~\eqref{rewrite5} and~\eqref{rewrite6}, \begin{eqnarray}\label{rewrite3} \chi(3,1;n)=\frac{1}{2}\cdot\left[\chi(2,0;n)+\sum_{j=0}^n \Psi(j)\hat\otimes \chi^{(n-j)} \right]. \end{eqnarray} By taking degrees, Example~\ref{rewrite4} together with~\eqref{rewrite6} and~\eqref{rewrite7} imply that
\begin{eqnarray}\label{rewrite8} S(3,1;n)=\frac{1}{2}\cdot \left[{n\choose\lfloor\frac{n}{2}\rfloor}+\sum_{j=0}^n a(j)\cdot{n\choose j} \right]. \end{eqnarray}
\end{prop} \begin{proof} Denote \[ \Omega(n)=\sum_{j=0}^n \Psi(j)\hat\otimes \chi^{(n-j)} \] and analyze this $S_n$ character. Young's rule implies the following:
Let $\mu\vdash n$, then by Young's rule $\chi^\mu$ has a positive coefficient in $\Omega(n)$ if and only if $\mu\in H(3,1;n)$. Moreover, all these coefficients are either $1$ or $2$, and such a coefficient equals $1$ if and only if $\mu=(\mu_1,\mu_2)$ is a partition with at most two rows. It follows that \begin{eqnarray}\label{p3} \chi(2,0;n)+\Omega(n)=2\cdot\sum_{\lambda\in H(3,1;n)}\chi^\lambda. \end{eqnarray} This implies~\eqref{rewrite3} and completes the proof of Proposition~\ref{rewrite2}. \end{proof}
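Formula \eqref{rewrite8} can also be tested against the definition of $S(3,1;n)$ as a sum of $f^\lambda$ over $H(3,1;n)$. The sketch below (ours, not part of the proof) uses the hook length formula and the convention $a(0)=1$, the degree of $\Psi(0)=\chi^\emptyset$:

```python
from math import comb, factorial

def partitions(n, max_part=None):
    """All partitions of n as nonincreasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def f_syt(lam):
    """Hook length formula for the number of SYT of shape lam."""
    n = sum(lam)
    conj = [sum(1 for p in lam if p > j) for j in range(lam[0])]
    hooks = 1
    for i, p in enumerate(lam):
        for j in range(p):
            hooks *= (p - j) + (conj[j] - i) - 1
    return factorial(n) // hooks

def a(n):
    """Motzkin-sums function (a2), with a(0) = 1 for the empty partition."""
    if n == 0:
        return 1
    return sum(factorial(n) // (factorial(k - 1) * factorial(k) * factorial(n - 2*k)
                                * (n - k) * (n - k + 1))
               for k in range(1, n // 2 + 1))

def S_hook(n):
    """Direct hook sum S(3,1;n): shapes with lambda_4 <= 1."""
    return sum(f_syt(l) for l in partitions(n) if len(l) < 4 or l[3] <= 1)

for n in range(1, 10):
    rhs = (comb(n, n // 2) + sum(a(j) * comb(n, j) for j in range(n + 1))) // 2
    assert S_hook(n) == rhs
```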
\section{The sums $S(4,0;n)$ and $S^*(2,2;n)$}\label{s3} \begin{defn} \begin{enumerate} \item Let $n=2m$, $m\ge 2$ and let $H^*(2,2;2m)\subset H(2,2;2m)$ denote the set of partitions $H^*(2,2;2m)=\{(k+2,k+2,2^{m-2-k})\vdash 2m\mid k=0,\ldots,m-2\}$ (the partitions in the $(2,2)$ hook with both arm and leg being rectangular), then denote \[ S^*(2,2;2m)=\sum _{\lambda\in H^*(2,2;2m)} f^\lambda. \] \item Let $n=2m+1$, $m\ge 2$ and let $H^*(2,2;2m+1)\subset H(2,2;2m+1)$ denote the set of partitions $H^*(2,2;2m+1)=\{(k+3,k+2,2^{m-2-k})\vdash 2m+1\mid k=0,\ldots,m-2\}$ (the partitions in the $(2,2)$ hook with arm nearly rectangular and leg rectangular), then denote \[ S^*(2,2;2m+1)=\sum _{\lambda\in H^*(2,2;2m+1)} f^\lambda. \] \end{enumerate} \end{defn}
Recall from Section~\ref{s1} that $S(4,0;2m-1)=C_m^2$ and $S(4,0;2m)=C_m\cdot C_{m+1}$. We have the following intriguing identities. \begin{prop}\label{b1} \begin{enumerate} \item Let $n=2m$ then \[S(4,0;2m-2)=C_{m-1}\cdot C_{m}=S^*(2,2;2m).\] Explicitly, we have the following identity: \begin{eqnarray}\label{b3} C_{m-1}\cdot C_m=\frac{1}{m\cdot (m+1)}\cdot{2m-2\choose m-1}\cdot{2m\choose m}=~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\end{eqnarray} \[\label{b2}~~~~~~~~~~=\sum_{k=0}^{m-2}\frac{(2m)!}{k!\cdot (k+1)!\cdot(m-k-2)!\cdot (m-k-1)! \cdot (m-1)\cdot m^2\cdot (m+1)}. \]
\item Let $n=2m+1$ then \[\frac{2m+1}{m+2}\cdot S(4,0;2m-1)=\frac{2m+1}{m+2}\cdot C_{m}^2=S^*(2,2;2m+1).\]
Explicitly, we have the following identity: \begin{eqnarray}\label{b4} \frac{2m+1}{ m+2}\cdot C_{m}^2=
\frac{1}{(m+1)\cdot(m+2)}\cdot{2m\choose m}{2m+1\choose m} =~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \end{eqnarray} \[=\sum_{k=0}^{m-2}\frac{(2m+1)!\cdot 2}{k!\cdot (k+2)!\cdot(m-k-2)!\cdot (m-k-1)! \cdot (m-1)\cdot m\cdot (m+1)\cdot (m+2)}. \] \end{enumerate} \end{prop}
\begin{proof} Equation~\eqref{b3} is the specialization of Gauss's summation theorem for ${}_2F_1(a,b;c;1)$ with $a=2-m$, $b=1-m$, $c=2$~\cite{askey}, and~\eqref{b4} is similar.
Alternatively, the identities~\eqref{b3} and~\eqref{b4} can be verified by the WZ method~\cite{doron3},~\cite{doron2}. \end{proof}
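Both identities can also be confirmed in exact rational arithmetic for small $m$; the following sketch (ours) checks \eqref{b3} and \eqref{b4} for $2\le m\le 14$:

```python
from fractions import Fraction
from math import comb, factorial

catalan = lambda j: comb(2 * j, j) // (j + 1)

for m in range(2, 15):
    # identity (b3): C_{m-1} C_m as a sum over the shapes (k+2, k+2, 2^{m-2-k})
    rhs_even = sum(
        Fraction(factorial(2 * m),
                 factorial(k) * factorial(k + 1) * factorial(m - k - 2)
                 * factorial(m - k - 1) * (m - 1) * m * m * (m + 1))
        for k in range(m - 1))
    assert rhs_even == catalan(m - 1) * catalan(m)

    # identity (b4): the odd case, with the prefactor (2m+1)/(m+2)
    rhs_odd = sum(
        Fraction(2 * factorial(2 * m + 1),
                 factorial(k) * factorial(k + 2) * factorial(m - k - 2)
                 * factorial(m - k - 1) * (m - 1) * m * (m + 1) * (m + 2))
        for k in range(m - 1))
    assert rhs_odd == Fraction(2 * m + 1, m + 2) * catalan(m) ** 2
```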
\section{Hook-sums and humps for paths}\label{s4}
A Dyck path of length $2n$ is a lattice path, in $\mathbb{Z}\times \mathbb{Z}$, from $(0,0)$ to $(2n,0)$, using up-steps $(1,1)$ and down-steps $(1,-1)$ and never going below the $x$-axis. A {\it hump} in a Dyck path is an up-step followed by a down-step.
A Motzkin path of length $n$ is a lattice path from $(0,0)$ to $(n,0)$, using flat-steps $(1,0)$, up-steps $(1,1)$ and down-steps $(1,-1)$, and never going below the $x$-axis. A hump in a Motzkin path is an up-step followed by zero or more flat-steps followed by a down-step.
We now count {\it humps} for Dyck and for Motzkin paths and observe the following intriguing phenomenon: the hump count in the Dyck case relates the $2\times n$ rectangular shape $\lambda=(n,n)$ to the $(1,1)$ hook shape $\mu=(n,1^n)$, while in the Motzkin case, as we show below, it relates the $(3,0)$ strip shape partitions $H(3,0;n)$ to the $(2,1)$ hook shape partitions $H(2,1;n)$.
\subsection{The Dyck case}
The Catalan number \[C_n=\frac{(2n)!}{n!(n+1)!}\] is the cardinality of a variety of sets~\cite{stanley}; here we are interested in two such sets. First, $C_n=f^{(n,n)}$, the number of SYT of shape $(n,n)$. Second, $C_n$ is the number of Dyck paths of length $2n$.
Let ${\cal H} D_n$ denote the total number of humps in all the Dyck paths of length $2n$, then \[ {\cal H} D_n={2n-1\choose n},\] see~\cite{dershowitz1},~\cite{dershowitz2},~\cite{deutsch}. Since ${2n-1\choose n}=f^{(n,1^n)}$, we have \[ C_n=f^{(n,n)}\qquad\mbox{and}\qquad{\cal H} D_n=f^{(n,1^n)}, \] which we denote by \begin{eqnarray}\label{eq1} {\cal H}: (n,n)\longrightarrow (n,1^n). \end{eqnarray}
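Both counts can be reproduced by brute force for small $n$; the sketch below (ours) generates all Dyck paths and counts occurrences of an up-step immediately followed by a down-step:

```python
from math import comb

def dyck_paths(n):
    """All Dyck paths of length 2n, as strings over 'U' (up) and 'D' (down)."""
    def rec(path, up, down):
        if up == n and down == n:
            yield path
            return
        if up < n:
            yield from rec(path + "U", up + 1, down)
        if down < up:
            yield from rec(path + "D", up, down + 1)
    yield from rec("", 0, 0)

for n in range(1, 8):
    paths = list(dyck_paths(n))
    assert len(paths) == comb(2 * n, n) // (n + 1)    # C_n = f^{(n,n)} paths
    total_humps = sum(p.count("UD") for p in paths)   # a hump is an 'UD' pair
    assert total_humps == comb(2 * n - 1, n)          # HD_n = f^{(n,1^n)}
```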
\subsection{The Motzkin case}
Like the Catalan numbers, the Motzkin numbers $M_n$ are the cardinality of a variety of sets; for example $M_n=S(3,0;n)$,~\cite{regev1},~\cite{stanley},~\cite[sequence A001006]{sloane}, which gives the Motzkin numbers a SYT interpretation. Also, $M_n$ is the number of Motzkin paths of length $n$. Let ${\cal H} M_n$ denote the total number of humps in all the Motzkin paths of length $n$, then by~\cite[sequence A097861]{sloane} \begin{eqnarray}\label{motzkin.path.2} {\cal H}M_n=\frac{1}{2}\sum_{j\ge 1}{n\choose j}{n-j\choose j}. \end{eqnarray} We show below that this implies the intriguing identity ${\cal H}M_n=S(2,1;n)-1,$ which gives a SYT interpretation of the numbers ${\cal H}M_n$. Thus the hump count in the Motzkin case relates the $(3,0)$ strip shape partitions $H(3,0;n)$ to the $(2,1)$ hook shape partitions $H(2,1;n)$. We denote this by \begin{eqnarray}\label{eq2} {\cal H}: H(3,0;n)\longrightarrow H(2,1;n). \end{eqnarray}
\begin{thm}\label{motzkin.humps.1} The number of humps for the Motzkin paths of length $n$ satisfies \[ {\cal H}M_n=S(2,1;n)-1. \] \end{thm}
\begin{proof} Combining Equations~\eqref{motzkin.path.3} and~\eqref{motzkin.path.2}, the proof of Theorem~\ref{motzkin.humps.1} will follow once the following binomial identity -- of interest on its own -- is proved. \begin{lem}\label{motzkin.humps.11} For $n\ge 2$ \begin{eqnarray}\label{motzkin.path.222} 2\sum_{j=1}^{\lfloor n/2\rfloor}{n\choose j}{n-j\choose j}=\sum_{r=0}^{n-1}{n-r\choose{\lfloor\frac{n-r}{2}\rfloor}} {n\choose r}+a(n)-1=~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \end{eqnarray} \[ ~~~~~~~~~~~~=\sum_{r=0}^{n-1}{n-r\choose{\lfloor\frac{n-r}{2}\rfloor}} {n\choose r} +\sum_{k=1}^{\lfloor\frac{n}{2}\rfloor-1}\frac{n!}{k!\cdot (k+1)!\cdot (n-2k-2)!\cdot (n-k-1)\cdot(n-k)}. \] \end{lem}
Equation~\eqref{motzkin.path.222} was verified by the WZ method. About this method, see~\cite{doron3},~\cite{doron2}. We remark that it would be interesting to find an elementary proof of this identity.
This completes the proof of Theorem~\ref{motzkin.humps.1}. \end{proof}
A. Regev, Math. Dept. The Weizmann Institute, Rehovot 76100, Israel.
{\it Email address:} amitai.regev at weizmann.ac.il
\end{document}
\begin{document}
\title{\TheTitle}
\begin{abstract} We study some optimal control problems on networks with junctions, approximate the junctions by a switching rule of delay-relay type and study the passage to the limit when $\varepsilon$, the parameter of the approximation, goes to zero. First, for a twofold junction problem we characterize the limit value function as viscosity solution and maximal subsolution of a suitable Hamilton-Jacobi problem. Then, for a threefold junction problem we consider two different approximations, recovering in both cases some uniqueness results in the sense of maximal subsolution. \end{abstract}
\begin{keywords} optimal control, networks, discontinuous dynamics, hybrid systems, delayed thermostat, Hamilton-Jacobi equations, viscosity solutions \end{keywords}
\begin{AMS}
34H05, 35R02, 47J40, 49L25, 35F21 \end{AMS}
\section{Introduction} In this paper, we are interested in optimal control problems with dynamics inside a network. Each arc $E_i$ of the network has its own controlled dynamics $f_i$ and cost $\ell_i$. When passing from one arc to another through a node, the system experiences a drastic discontinuity. We refer to this situation as a ``junction''. Recently there has been increasing interest in dynamical systems and differential equations on networks, for example in connection with problems of data transmission and traffic flow (e.g. Garavello-Piccoli \cite{GaPi}, Engel et al. \cite{EnKrNa}).\\ For dynamic programming and the Hamilton-Jacobi-Bellman (HJB) equation in optimal control, the presence of junctions is a difficulty because, due to the resulting discontinuity of the HJB equation, uniqueness of its solution is not in general guaranteed. In particular, we cannot characterize the value function of an optimal control problem as the unique solution of the HJB equation. Some authors have recently studied optimal control and HJB equations on networks, as well as discontinuous HJB equations not necessarily arising from an optimal control problem; see for instance Achdou-Camilli-Cutr\`\i-Tchou \cite{AcCaCuTc}, Camilli-Marchi \cite{CaMa}, Camilli-Marchi-Schieborn \cite{CaMaSc}, Camilli-Schieborn \cite{CaSc}, Imbert-Monneau-Zidani \cite{ImMoZi}, Achdou-Oudet-Tchou \cite{AcOuTc}, Achdou-Tchou \cite{AcTc}, and the recent Lions-Souganidis \cite{LiSo}. The optimal control problem on networks is related to $n$-dimensional optimal control problems on multi-domains, where the dynamics and costs undergo discontinuities when crossing some fixed hypersurfaces. 
These problems, initiated by Bressan-Hong \cite{BrHo}, \cite{BrHo1}, have been studied, in connection with HJB, in Barles-Briani-Chasseigne \cite{BaBrCh}, Barnard-Wolenski \cite{BaWo}, Rao-Zidani \cite{RaZi}, Barles-Briani-Chasseigne \cite{BaBrCh2}, Rao-Siconolfi-Zidani \cite{RaSiZi}, Barles-Chasseigne \cite{BaCh}, Achdou-Oudet-Tchou \cite{achoudtch}, Barles-Briani-Chasseigne-Imbert \cite{BaBrChIm}, Imbert-Monneau \cite{ImMo}.\\ In this paper we study a possible approximation of an optimal control problem on a network with a junction. Some preliminary and partial results were presented in Bagagiolo \cite{Bae}. The main critical point is a uniqueness result for the viscosity solution of the HJB equation, which turns out to be discontinuous in the (one-dimensional) space variable (we refer the reader to, for example, Bardi-Capuzzo Dolcetta \cite{BaCaDo} for a comprehensive account of viscosity solutions for Hamilton-Jacobi equations). Indeed, when using the classical double-variable technique for proving comparison results between sub- and supersolutions, we cannot in general conclude in the standard way, because the points of minimum and of maximum, even if very close, may belong to different arcs whose dynamics and costs are not comparable at all (the junction, indeed). Consider the situation where two half-lines (the edges) are separated by one point (the junction). Note that the possible angle between the lines is irrelevant: the only relevant fact is the discontinuity of dynamics and costs through the junction. Our approach is to replace the junction, which represents a single threshold for passing both from one edge to the other and vice versa, by a so-called delayed thermostat, consisting of two different thresholds for passing separately from one edge to the other and vice versa (see \cref{fig:therm}). 
The problem is then transformed into a so-called hybrid problem (continuous/discrete evolution, see for example Goebel-Sanfelice-Teel \cite{Goebel}) for which the discontinuity of the HJB equation is replaced by suitable mutually exchanged boundary conditions at the extreme points of the two branches. This allows us to obtain a uniqueness result for the HJB equation of this kind of thermostatic problem. It is not unusual in the engineering literature on control problems to overcome discontinuities in the dynamics by inserting some regularizing effect such as hysteresis, switching and/or continuous. The thermostat is the fundamental building block of switching hysteresis models. See for instance Kokotovich \cite{TaoKoko} (pp. 17-23), Seidman \cite{Seidman88}, Hante-Leugering-Seidman \cite{HanteSeidman}. In Ceragioli-De Persis-Frasca \cite{Ceragioli} and Seidman \cite{Seidman2013} the thermostat is applied to solve several control problems arising from different contexts and discontinuities. The approximation of sliding mode behaviour through a thermostatic switching rule, and the consequent passage to the limit of the switching threshold, is discussed in Liberzon \cite{Liberzon} (pp. 14-15), Utkin \cite{Utkin} (pp. 30-31) and Alexander-Seidman \cite{AlexanderSeidman}.
The study of this kind of switching hysteresis in connection with dynamic programming and HJB equations for optimal control is not very advanced, and moreover the limit problem as the switching threshold goes to zero was, to our knowledge, never studied before. Hence our results may shed light on new formulations of the junction problem, as indeed happens for the threefold problem that we study below. We start from the results in Bagagiolo \cite{Ba} (see also Bagagiolo-Danieli \cite{BaDa}), where the author studies the dynamic programming method and the corresponding HJB problem for optimal control problems whose dynamics has a thermostatic behavior. This means that the dynamics $f$ (as well as the cost $\ell$), besides the control, depends on the state variable $x \in {\NZQ R}$, which evolves continuously via the equation $x'=f$, and also on a discrete variable $i\in \left\{-1, 1\right\}$, whose evolution is governed by a delayed thermostatic rule driven by the evolution of the state $x$. \begin{figure}
\caption{ The two-fold junction and its thermostatic approximation.}
\label{fig:therm}
\end{figure} In \cref{fig:therm} the behavior of such a rule is illustrated for a fixed threshold parameter $\varepsilon>0$. The output $i\in\{-1,1\}$ can (and must) jump from $1$ to $-1$ only when the input $x$, coming from the right (i.e. from values larger than or equal to $-\varepsilon$), goes below the threshold $-\varepsilon$; it can (and must) jump from $-1$ to $1$ only when $x$, coming from the left (i.e. from values smaller than or equal to $\varepsilon$), goes above the threshold $\varepsilon$. In all other situations it remains constant. In particular, when $x>\varepsilon$ then $i$ is equal to $1$, and when $x<-\varepsilon$ then $i$ is equal to $-1$. We refer to Visintin \cite{Vi}, page 102, for a formal definition of such a switching rule. The controlled evolution is then given by \[ \begin{cases} x'(t) = f(x(t), i(t), \alpha(t)), \\ i(t)= h_{\varepsilon}\left[x\right](t) \\ x(0)= x_0, \ \ \ i(0)= i_0 \end{cases} \] where $\alpha: \left[0, +\infty\right[ \rightarrow A$ is a measurable control, and $h_{\varepsilon}\left[\cdot\right]$ represents the thermostatic delayed relationship between the input $x$ and the output $i$. The initial value $i_0 \in \left\{-1, 1\right\}$ must be coherent with the thermostatic relation: $i_0 = 1$ (resp. $i_0 = -1$) whenever $x_0 > \varepsilon$ (resp. $x_0 < - \varepsilon$). The infinite horizon optimal control problem is then, given a running cost $\ell$ and a discount factor $\lambda > 0$, the minimization over all measurable controls of the cost \begin{equation}\label{eq:[4]} V_{\varepsilon}(x_0, i_0)= \inf_{\alpha \in{\cal A}} \int_{0}^{\infty} e^{-\lambda t} \ell(x(t), i(t), \alpha(t))dt \end{equation} where ${\cal A}$ is the set of measurable controls. In \cite{Ba}, the problem is written as a coupling of two exit-time optimal control problems which mutually exchange their exit costs. 
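For a sampled input signal, the switching rule described above can be sketched in a few lines of code. This is our own minimal reading of the rule (a formal definition is in Visintin \cite{Vi}); the function name and the sinusoidal test input are our own choices.

```python
import math

def delayed_relay(xs, i0, eps):
    """Delayed thermostat on a sampled input: jump 1 -> -1 when x drops
    below -eps, jump -1 -> 1 when x rises above eps; otherwise keep i."""
    i, out = i0, []
    for x in xs:
        if i == 1 and x < -eps:
            i = -1
        elif i == -1 and x > eps:
            i = 1
        out.append(i)
    return out

# Example: over one period of sin(t), the input crosses the lower
# threshold -eps exactly once, so the output switches exactly once.
ts = [k * 0.001 for k in range(6284)]
xs = [math.sin(t) for t in ts]
out = delayed_relay(xs, 1, 0.5)
switches = sum(1 for a, b in zip(out, out[1:]) if a != b)
assert switches == 1 and out[-1] == -1
# Coherence: i = 1 whenever x > eps and i = -1 whenever x < -eps.
assert all(i == 1 for x, i in zip(xs, out) if x > 0.5)
assert all(i == -1 for x, i in zip(xs, out) if x < -0.5)
```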
In particular, using the notations \begin{equation} \label{eq:omega} \Omega_1^{\varepsilon}= \left\{x > -\varepsilon \right\},\ \ \overline\Omega_1^{\varepsilon}= \left\{x \geq -\varepsilon \right\},\ \ \Omega_{-1}^{\varepsilon}= \left\{x < \varepsilon \right\},\ \ \overline\Omega_{-1}^{\varepsilon}= \left\{x \leq \varepsilon \right\}, \end{equation} in $\overline\Omega_1^{\varepsilon} \times \left\{1\right\}$ (resp. $\overline\Omega_{-1}^{\varepsilon} \times \left\{-1\right\}$), the function $x\mapsto V_{\varepsilon}(x,1)$ (resp. $x\mapsto V_\varepsilon(x,-1)$) coincides with the value function of the exit-time optimal control problem on $\overline\Omega_1^{\varepsilon}$ \ (resp. $\overline\Omega_{-1}^{\varepsilon}$ ), where the exit cost at $-\varepsilon$\ (resp. at $\varepsilon$) is given by $V_\varepsilon(-\varepsilon,-1)$ \ (resp. $V_\varepsilon(\varepsilon,1)$). Under standard hypotheses, the following theorem is proved in \cite{Ba}. \begin{theorem}\label{th:sistemapprox} The value function $V_{\varepsilon}$ in \cref{eq:[4]} is the unique bounded, continuous viscosity solution on $\overline\Omega_1^{\varepsilon} \times \left\{1\right\} \cup \overline\Omega_{-1}^{\varepsilon} \times \left\{-1\right\}$ of the following coupled Dirichlet problem, where the boundary conditions (the two exit costs) are also meant in the viscosity sense ($V'$ stands for the derivative with respect to $x$): \begin{equation}\label{s1} \begin{cases} \lambda V_{\varepsilon}(x, 1) + \sup_{a \in A}\left\{-f(x, 1, a)V'_{\varepsilon}(x, 1) - \ell(x, 1,a)\right\} = 0 \ \text{in}\ \Omega_1^{\varepsilon} \times \left\{1\right\} \\ V_{\varepsilon}(- \varepsilon, 1)= V_{\varepsilon}(- \varepsilon, -1) \\ \lambda V_{\varepsilon}(x, -1) + \sup_{a \in A}\left\{-f(x, -1, a)V'_{\varepsilon}(x, -1) - \ell(x, -1, a)\right\} = 0 \ \text{in}\ \Omega_{-1}^{\varepsilon} \times \left\{-1\right\} \\ V_{\varepsilon}(\varepsilon, -1)= V_{\varepsilon}(\varepsilon, 1) \end{cases} \end{equation} \end{theorem}
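The coupled Dirichlet structure of \cref{s1} can be illustrated numerically. The sketch below is not from \cite{Ba}: it is a minimal semi-Lagrangian value iteration for assumed toy data ($f_i(x,a)=a$ with $A=\{-1,1\}$, $\ell_1\equiv 1$, $\ell_{-1}\equiv 0$, $\lambda=1$), in which a characteristic crossing a threshold is evaluated on the other branch, which is exactly the exchange of exit costs $V_\varepsilon(-\varepsilon,1)=V_\varepsilon(-\varepsilon,-1)$, $V_\varepsilon(\varepsilon,-1)=V_\varepsilon(\varepsilon,1)$.

```python
import numpy as np

# Toy data (our assumption, not from the paper): f_i(x,a) = a, A = {-1, 1},
# running costs l_1 = 1 on branch 1 and l_{-1} = 0 on branch -1, lambda = 1.
eps, L, lam = 0.05, 2.0, 1.0
N = 411                                   # grid chosen so that dt = dx
x1 = np.linspace(-eps, L, N)              # branch  1 lives on [-eps, L]
x2 = np.linspace(-L, eps, N)              # branch -1 lives on [-L, eps]
dx = x1[1] - x1[0]
dt = dx                                   # characteristics land on grid nodes
disc = np.exp(-lam * dt)

V1 = np.zeros(N)
V2 = np.zeros(N)
for _ in range(4000):                     # value iteration (contraction: disc < 1)
    def val(branch, y):
        # Crossing a threshold switches branch: this encodes the coupled
        # boundary conditions of the thermostatic problem.
        if branch == 1:
            return np.where(y < -eps, np.interp(y, x2, V2), np.interp(y, x1, V1))
        return np.where(y > eps, np.interp(y, x1, V1), np.interp(y, x2, V2))
    V1 = np.minimum(dt * 1.0 + disc * val(1, x1 - dt),
                    dt * 1.0 + disc * val(1, x1 + dt))
    V2 = np.minimum(dt * 0.0 + disc * val(-1, x2 - dt),
                    dt * 0.0 + disc * val(-1, x2 + dt))

# On branch -1 the cost is zero, so V2 ~ 0; from x on branch 1 the optimal
# strategy drives left at speed 1, paying int_0^{x+eps} e^{-t} dt, then switches.
assert np.max(np.abs(V2)) < 1e-6
assert abs(np.interp(1.0, x1, V1) - (1 - np.exp(-1.05))) < 0.02
```

The choice $dt=dx$ makes the interpolation exact along characteristics, so the only error left is the time discretization of the discounted integral.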
In the present paper, we approximate some junction problems by suitable combinations of delayed thermostats. Every thermostat is characterized by its two thresholds ($\varepsilon$ and $-\varepsilon$ in the preceding description). We study the limit of the value functions $V_\varepsilon$ and of the HJB problem as the threshold distance $\varepsilon$ tends to zero, thus recovering the junction situation.
In Barles-Briani-Chasseigne \cite{BaBrCh}, among others, a one-dimensional two-fold junction problem is studied and some possible approximations are given. Here we introduce a different kind of approximation (thermostatic) and recover, by a different proof, similar results: we characterize the limit problem and we get that the limit of $V_\varepsilon$ is the corresponding maximal viscosity subsolution. This corresponds to the value function of the junction optimal control problem where, on the junction point, some further dynamics (besides the already given ones) are considered: the ones given by a suitable convexification of ``non-inward pointing" dynamics (``regular" dynamics in \cite{BaBrCh}) and somehow corresponding to stable equilibria on the junction point (stable equilibria of dynamics interpreted as forces). In \cite{BaBrCh} the case where on the junction one can also use the so-called ``singular" dynamics (i.e., a suitable convexification of ``inward pointing" dynamics; somehow corresponding to unstable equilibria)
is also treated. In particular, in this case all possible dynamics on the junction can be used: singular, regular and the already given ones of the original control problem. All such possible dynamics at the interface are also used in Rao-Siconolfi-Zidani \cite{RaSiZi} to prove uniqueness of the solution for HJB. In our paper, the use of the thermostatic approximation leads us to consider only the ``regular problem'', and hence to obtain a characterization as maximal subsolution. Hence the problem in \cite{RaSiZi} and our limit problem are substantially different. Indeed, as said before, the concept of thermostat is based on the concept of switching, which occurs when suitable thresholds are reached. For the switching to occur it is necessary that the threshold is reached with a velocity of a suitable sign, and such a sign is exactly the one required by the construction of the regular dynamics.
Moreover, it is important to note that there exist several ways to define a junction optimal control problem, and each of them has a different HJ representation, with different possible approximations by more regular problems. The case of a ``threefold'' junction (see \cref{threejunction}) is not treated in \cite{BaBrCh}, and indeed the convexification of dynamics seems to be no longer applicable (the physical interpretation as an equilibrium of forces also fails). However, inspired by the previous thermostatic approximation, we introduce a special kind of ``convexification parameters'' that somehow correspond to the lengths of the time intervals that the dynamics spends on every single branch of a ``threefold'' thermostatic approximation. Here we have more than one way of passing to the limit, and we recover uniqueness results for the limit problems in the sense of maximal subsolution. The approach can be extended to $n$-fold junctions (\cref{7foldjunction}), at the price of heavier notation.
The paper is organized as follows: basic assumptions are given in \cref{sect:2}. In \cref{sec:3}, we introduce the thermostatic approximation of a twofold junction problem and study the passage to the limit as $\varepsilon \rightarrow 0$ in the problem studied in \cite{Ba}. In \cref{sec:4} we study the threefold junction problem, both in the case of uniform switching thresholds and in that of non-uniform switching thresholds. This corresponds to two different limit optimal control problems with different admissible behaviors on the junction.
\section{Basic assumptions on the junction problem and the delayed thermostat} \label{sect:2} Let the junction be given by a finite number of co-planar half-lines $R_i$, $i=1,\dots,n$, originating from the same point $O$; we consider the half-lines as closed, that is, the point $O\in R_i$ for every $i$. On each branch $R_i$ we consider a one-dimensional coordinate $x\ge0$ such that $x(O)=0$. The state position may then be encoded by the pair $(x,i)$. In \cref{sec:4}, for convenience of notation, we will sometimes change the sign of $x$. \begin{figure}
\caption{ A star-shaped network (7-fold junction).}
\label{7foldjunction}
\end{figure} We consider a controlled evolution on such a star-shaped network, given by the following dynamics. On $R_i$ the system is driven by a continuous and bounded dynamics $f_i:{\NZQ R}\times A\to{\NZQ R}$, with $A$ compact; moreover, each $f_i$ is Lipschitz continuous and a controllability condition holds at the junction:
\begin{equation}\label{eq:Lip} \exists L>0\ \mbox{such that } \forall\ x, y \in {\NZQ R},\ \forall\ a \in A\ \mbox{it is } \vert f_i(x, a)- f_i(y, a) \vert \leq L \vert x-y \vert. \end{equation}
\begin{equation}\label{eq:Controllability} \forall\ i\ \exists \ a_{i}^{-}, a_{i}^{+} \ \in A\ \ \text{s.t.}\ \ \ f_{i}(0, a_{i}^{-}) < 0 < f_{i}(0, a_{i}^{+}) \end{equation} The controlled system on the network is then, for an initial state $(x,i)$ with $x\in R_i$, \begin{equation} \label{eq:systemjunction} \left\{ \begin{array}{ll} \displaystyle y'(t)=f_j(y(t),\alpha(t))&\mbox{for } t>0\ \mbox{and } y(t)\in R_j\\ y(0)=x\\ x\in R_i \end{array} \right. \end{equation}
\noindent where $\alpha:[0,+\infty[\to A$ belongs to $\cal A$, the set of measurable controls, and $j=j(t)$ is the switching variable that switches to $j'$ when $y(t)$ enters the new half-line $R_{j'}$.
To this controlled system we associate an infinite horizon optimal control problem. For every branch $R_i$ we consider a running cost $\ell_i:{\NZQ R}\times A\to [0,+\infty[$, and the problem is given by the minimization, over all controls $\alpha\in{\cal A}$, of the cost functional \begin{equation} \label{eq:costfunctionaljunction} J(x,i,\alpha)=\int_0^{+\infty}e^{-\lambda t}\ell_j(y(t),\alpha(t))dt. \end{equation} \noindent In \cref{eq:costfunctionaljunction}, $\lambda>0$ is a fixed discount factor, the trajectory $y(\cdot)$ is the solution of \cref{eq:systemjunction}, and the index $j$ switches as explained above. Moreover, for every $i$, the function $\ell_i:{\NZQ R} \times A\rightarrow {\NZQ R}$ is continuous and bounded, and there exists a modulus of continuity $\omega_{\ell}:[0,\infty[\to[0,+\infty[$ (i.e. continuous, increasing and $\omega_\ell(0)=0$) such that, for any $x, y \in {\NZQ R}$, $a \in A$ and any $i$, \begin{equation}\label{eq:LLip} \vert \ell_i(x, a)- \ell_i(y, a) \vert \leq \omega_{\ell}\left( \vert x-y \vert\right). \end{equation}
We finally consider the value function \begin{equation*} V(x,i)=\inf_{\alpha\in{\cal A}}J(x,i,\alpha). \end{equation*}
Of course, the concept of solution (or trajectory) for the system \cref{eq:systemjunction} and the definition of the cost \cref{eq:costfunctionaljunction} are not well posed. Indeed, at the junction point $O$ we can choose whichever index $i$ we prefer, but the existence of the trajectory is not guaranteed, due to possible fast oscillations of the index $i$ (as for a generic ordinary differential equation whose dynamics is discontinuous in the space variable). The main goal of the present paper is precisely an approximation, and the corresponding passage to the limit, of such possibly oscillating behavior in the context of optimal control. To this end, we are going to use the delayed thermostat operator; in addition to what was already said in the Introduction, we point out here that, once the thresholds $-\varepsilon,\varepsilon$ are fixed, for each continuous scalar input $t\mapsto u(t)$ and each initial output $i_0\in\{-1,1\}$ coherent with $u(0)$, there exists a unique output $t\mapsto i(t)=:h_\varepsilon[u](t)\in\{-1,1\}$ satisfying $i(0)=i_0$. For a regular scalar dynamics $g$ and a coherent initial state $(x,i_0)$, there exists a unique solution of the thermostatic system
\[ \left\{ \begin{array}{ll} y'=g(y,i,t)\\ y(0)=x\\ i(t)=h_\varepsilon[y](t),\ \ i(0)=i_0 \end{array} \right. \]
\noindent The main reason for this is the ``splitting'' of the thresholds, which prevents fast oscillations of the switching variable $i$ and allows one to construct the solution by a suitable gluing of pieces of solutions with constant $i$ (see \cite{Ba}).
\section{A twofold junction problem}\label{sec:3} We consider a one-dimensional optimal control problem for which the controlled dynamics and the cost, $f,\ell$, suddenly change when passing from one half-line to the other one: $f(x, \cdot) = f_1(x, \cdot)\ (\text{resp.}\ f(x, \cdot) = f_{-1}(x, \cdot))$ if $x > 0 \ (\text{resp. if}\ x < 0)$, where $f_1 : [0, + \infty[ \times A \rightarrow {\NZQ R}$, $f_{-1} : ] -\infty, 0] \times A \rightarrow {\NZQ R}$. The point $x=0$ may represent a ``junction'', a node on a network with two entering edges (see \cref{fig:therm}). For $\varepsilon > 0$ we approximate the junction problem by a delayed thermostatic problem. Still denoting by $f_1, f_{-1}$ two extensions by constancy in the space variable $x$ of the dynamics to $[-\varepsilon, +\infty[ \times A$ and to $]-\infty, \varepsilon] \times A$ respectively, we may consider the controlled system \begin{equation}\label{s2} \begin{cases} x'(t) = f_{i(t)}(x(t), \alpha(t)), \\ i(t)= h_{\varepsilon}\left[x\right](t) \\ x(0)= x_0, \ \ \ i(0)= i_0 \end{cases} \end{equation} Similarly to $f_1,f_{-1}$, we extend the running costs $\ell_1, \ell_{-1}$.
Let $V_{\varepsilon}$ be the value function of the thermostatic optimal control problem with dynamics given by \cref{s2} and corresponding costs. We define the function \begin{flalign*} \tilde{V}_{\varepsilon} : {\NZQ R} \setminus \left\{0\right\} \rightarrow {\NZQ R}, & \ \ \tilde{V}_{\varepsilon} (x)= \begin{cases} V_{\varepsilon}(x, 1) & x > 0\\ V_{\varepsilon}(x, -1) & x < 0. \end{cases} \end{flalign*} In general, $V_{\varepsilon}(0, -1) \neq V_{\varepsilon}(0, 1)$. \begin{theorem} \label{th:2junction} As $\varepsilon \rightarrow 0^{+}$, $\tilde{V}_{\varepsilon}$ uniformly converges on ${\NZQ R}\setminus \left\{0\right\}$ to a continuous function which, if \cref{eq:Lip,eq:LLip} hold, continuously extends to a function $\tilde V$ on the whole ${\NZQ R}$. If \cref{eq:Controllability} holds, $\tilde V$ coincides with the (already known as unique) viscosity solution of \begin{equation}\label{eq:Sistema2junction} \begin{cases} \lambda V + H_{1}(x, V') = 0 \ \ \ \text{for} \ x>0\\ \lambda V + H_{-1}(x, V') = 0 \ \ \text{for} \ x<0 \\ V(0) = \min \left\{u_{0}(0), V_{sc(-1)}(0), V_{sc(1)}(0) \right\} \end{cases} \end{equation} where $H_1,H_{-1}$ are the Hamiltonians in \cref{s1}, $u_0(0)$ is the convexification \begin{equation} \label{eq:date} u_0(0)= \frac{1}{\lambda}\min_{A_0}\left\{\mu \ell_{-1}(0, a_{-1}) + (1 - \mu)\ell_{1}(0, a_{1})\right\} \end{equation} \begin{multline} \label{eq:A0} A_0 = \bigl\{(\mu, a_{-1}, a_{1} ) \in [0,1] \times A \times A :\\
\mu f_{-1}(0, a_{-1}) + (1 - \mu)f_{1}(0, a_{1})=0, f_1(0, a_1) \leq 0, f_{-1}(0, a_{-1})\geq 0 \bigr\} \end{multline} and $V_{sc(i)}(0)$ is the value function at $x=0$ of the state-constraint optimal control problem on the branch $i$. \end{theorem} \begin{proof} We are going to use the notations in \cref{eq:omega}. We also recall that the state-constraint problem in branch $i$ is the optimal control problem restricted to the branch $i$ and such that, at the point $x=0$, we can only use controls that keep the trajectory inside the branch. We first prove that $V_\varepsilon$ uniformly converges to a continuous function on $\mathbb{R}\setminus\{0\}$. There are several cases, and we illustrate some of them.
i) $f_{-1}(0,a)\le0$ for all $a\in A$. Hence, when starting from a point of $\overline\Omega_{-1}^{\varepsilon} \times \left\{-1\right\}$, it is impossible to switch to the other branch $\overline\Omega_1^{\varepsilon} \times \left\{1\right\}$. Hence, $x\mapsto V_\varepsilon(x,-1)$ is the value function of the optimal control problem with dynamics $f_{-1}$ and cost $\ell_{-1}$ and state constraint in $\overline\Omega_{-1}^{\varepsilon} \times \left\{-1\right\}$, which uniformly converges on $]-\infty,0]$ to the value function with the same dynamics and cost and with state constraint in $]-\infty,0]$ (dynamics and costs are bounded), that is, to $V_{sc(-1)}$. In the other branch, since $V_{\varepsilon}(-\varepsilon,-1)$ converges to $V_{sc(-1)}(0)$, we also get the uniform convergence of $V_\varepsilon(\cdot,1)$ to the unique solution of the first line of \cref{eq:Sistema2junction} with viscosity boundary datum $V_{sc(-1)}(0)$. Indeed, they are respectively the value functions of the exit-time problems in $[-\varepsilon,+\infty[$ and $[0,+\infty[$ with the same dynamics, the same cost and with convergent exit costs.
ii) $\exists \ a_{-1},a_1\in A$ such that $f_1(0,a_1)<0<f_{-1}(0,a_{-1})$. In this case, when $\varepsilon$ is sufficiently small, starting from $(\varepsilon,1)$ (resp. from $(-\varepsilon,-1)$) we can always switch to the other branch, and we can reach $(-\varepsilon,-1)$ (resp. $(\varepsilon,1)$) in a time interval whose length is of order $\varepsilon$. It is then easy to check that the difference $|V_\varepsilon(\varepsilon,1)-V_\varepsilon(-\varepsilon,-1)|$ (as well as $|V_\varepsilon(0,1)-V_\varepsilon(0,-1)|$) is also infinitesimal as $\varepsilon\to0$.
Moreover, for every pair $(\varepsilon_1,\varepsilon_2)$ with $\varepsilon_1,\varepsilon_2>0$, $\|V_{\varepsilon_1}-V_{\varepsilon_2}\|$ is infinitesimal as $\max\{\varepsilon_1,\varepsilon_2\}\to0$. As before, $V_\varepsilon$ uniformly converges on $\mathbb{R}\setminus\{0\}$ to a solution of the first two lines of \cref{eq:Sistema2junction}, which also continuously extends to $x=0$. We denote by $\tilde V$ such an extended limit function and, assuming \cref{eq:Controllability} (which of course implies the conditions in ii)), we prove that it satisfies the third equation of \cref{eq:Sistema2junction}, from which the conclusion of the proof follows. Again, we proceed by illustrating some cases.
a) $V_{sc(-1)}(0)$ strictly realizes the minimum in \cref{eq:Sistema2junction}. Then, there exists a measurable control $\alpha$ such that the corresponding trajectory starting from $x=0$ with dynamics $f_{-1}$ does not exit from $]-\infty,0]$, and the corresponding cost, with running cost $\ell_{-1}$, is strictly less than $u_{0}(0)$ and $V_{sc(1)}(0)$. The control $\alpha$ has exactly the same cost for the thermostatic problem with initial point $(0,-1)$ (no switchings occur).\\ Now, we observe that, for every $(\mu,a_{-1},a_1)\in A_0$ with $f_{-1}(0,a_{-1}),f_1(0,a_1)\neq0$, alternating the constant controls $a_{-1},a_1$ at every switching gives a cost for the thermostatic problem in $(0,-1)$, as well as in $(0,1)$, which, when $\varepsilon$ goes to zero, converges to $\left(\mu\ell_{-1}(0,a_{-1})+(1-\mu)\ell_1(0,a_1)\right)/\lambda$. Indeed, the condition $\mu f_{-1}(0,a_{-1})+(1-\mu)f_1(0,a_1)=0$ implies that $f_{-1}(0,a_{-1}),f_1(0,a_1)$ are in the same (inverse) proportion as $\mu$ and $1-\mu$, and the corresponding time durations for covering the distance $2\varepsilon$ (from one threshold to the other one) are in the same (direct) proportion as $\mu$ and $1-\mu$. Hence, the required convergence holds.\\ From this we get that $\lim_{\varepsilon\to0}V_\varepsilon(0,-1)=V_{sc(-1)}(0)$ and so $\tilde V(0)= V_{sc(-1)}(0)$.
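The convergence of the alternating-control cost to the convexified value $\left(\mu\ell_{-1}+(1-\mu)\ell_1\right)/\lambda$ can be checked in closed form: for constant speeds and costs, the discounted costs at the two thresholds solve a two-by-two linear system obtained from one alternation over the band $[-\varepsilon,\varepsilon]$. The script below is our own illustration with hypothetical numbers, not an excerpt from the paper.

```python
from math import exp

def switching_cost(eps, v1, vm1, c1, cm1, lam):
    """Discounted costs at the thresholds for the trajectory that forever
    alternates: branch 1 at speed -v1 with cost c1, branch -1 at speed
    +vm1 with cost cm1, switching at the thresholds -eps and +eps."""
    t1, tm1 = 2 * eps / v1, 2 * eps / vm1      # band-crossing times
    e1, em1 = exp(-lam * t1), exp(-lam * tm1)
    i1, im1 = c1 * (1 - e1) / lam, cm1 * (1 - em1) / lam
    # a = cost starting at +eps on branch 1, b = cost starting at -eps on
    # branch -1:  a = i1 + e1 * b,  b = im1 + em1 * a.
    a = (i1 + e1 * im1) / (1 - e1 * em1)
    b = im1 + em1 * a
    return a, b

# Hypothetical data: f_1 = -1, f_{-1} = 2, l_1 = 3, l_{-1} = 1, lambda = 1.
v1, vm1, c1, cm1, lam = 1.0, 2.0, 3.0, 1.0, 1.0
mu = v1 / (v1 + vm1)                           # solves mu*f_{-1} + (1-mu)*f_1 = 0
u0 = (mu * cm1 + (1 - mu) * c1) / lam          # the convexification u_0(0)
a, b = switching_cost(1e-4, v1, vm1, c1, cm1, lam)
assert abs(a - u0) < 1e-3 and abs(b - u0) < 1e-3
```

As $\varepsilon\to0$ the weights of the two costs are the fractions of time spent on each branch, which reproduces the (inverse) proportion argument above.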
b) $u_{0}(0)$ strictly realizes the minimum. Then, let $(\mu,a_{-1},a_1)\in A_0$ be such that $\mu\ell_{-1}(0,a_{-1})+(1-\mu)\ell_{1}(0,a_{1})$ is the minimum in the definition of $u_{0}(0)$. Again, as in the previous point, we get that a switching trajectory using controls $a_{-1}$ and $a_1$ is nearly optimal for $V_\varepsilon$, whence the conclusion. \end{proof} \newsiamremark{rem}{Remark} \begin{rem} In this one-dimensional case, \cref{th:2junction} also proves that $\tilde V=U^+$, where $U^+$ is the value function of the so-called regular problem in \cite{BaBrCh}. In the sequel we also give a different proof of such an equality where, using the thermostatic approximation, we show that $\tilde V$ is the maximal subsolution of a suitable Hamilton-Jacobi problem as in \cite{BaBrCh}, namely problem \cref{eq:HJBproblem} below. \end{rem} \begin{theorem} \label{thm:ishii} Assume \cref{eq:Lip,eq:Controllability,eq:LLip}. The function $\tilde{V}$ is a viscosity solution of the Hamilton-Jacobi-Bellman problem \begin{equation}\label{eq:HJBproblem} \begin{cases} \lambda V + H_{1}(x, V') = 0 \ \ \text{in} \ \{x>0\}=:\Omega_1 \\ \lambda V + H_{-1}(x, V') = 0 \ \ \text{in} \ \{x<0\}=:\Omega_{-1} \\ \min \left\{\lambda V + H_1, \lambda V + H_{-1}\right\} \leq 0 \ \text{on} \ x=0 \\ \max \left\{\lambda V + H_1, \lambda V + H_{-1}\right\} \geq 0 \ \text{on} \ x=0. \end{cases} \end{equation} \noindent Here we mean that $\tilde V$ is a subsolution of the first three equations and a supersolution of the first two together with the fourth one. \end{theorem} \begin{proof} From \cref{th:2junction}, $\tilde V$ is a viscosity solution of the first two lines of \cref{eq:HJBproblem}.
We now prove the third equation in \cref{eq:HJBproblem}. Let $\varphi \in C^1({\NZQ R})$ be a test function such that $\tilde V-\varphi$ has a strict relative maximum at $x=0$. By uniform convergence, there exists a sequence $x_\varepsilon\in\overline\Omega_1^\varepsilon$ of points of relative maximum for $V_\varepsilon(\cdot,1)-\varphi$ which converges to $x=0$. We may have two mutually exclusive cases: 1) at least for a subsequence, at $x_\varepsilon$ the HJB equation satisfied by $V_\varepsilon(\cdot,1)$ has the right sign ``$\le$'' (if $x_\varepsilon$ is an interior point, then we always have the right sign, the equation being satisfied); 2) eventually, the boundary point $x_\varepsilon=-\varepsilon$ is a strict maximum point and the HJB equation has the wrong sign ``$>$''. Also note that the boundary of $\overline\Omega_1^\varepsilon$, i.e. $x=-\varepsilon$, also converges to $x=0$.
Case 1). Letting $\varepsilon\to0$, we get $\lambda \tilde{V}+H_1 \leq 0$ at $x=0$, and hence the third equation in \cref{eq:HJBproblem}.
Case 2). Since the boundary conditions in \cref{s1} are in the viscosity sense and by virtue of the controllability condition \cref{eq:Controllability}, we have \begin{equation} \label{eq:CondBordosup} V_{\varepsilon}(- \varepsilon, 1) = V_{\varepsilon}(- \varepsilon, -1) \end{equation}
The same arguments and cases also hold for the branch $\overline\Omega_{-1}^\varepsilon$. If the corresponding case 1) holds, then we get the conclusion as before. Otherwise we have \begin{equation} \label{eq:CondBordoinf} V_{\varepsilon}(\varepsilon, -1) = V_{\varepsilon}(\varepsilon, 1) \end{equation}
We prove that case 2) cannot hold simultaneously in both branches. Indeed, observe that $(- \varepsilon, -1)$ is in the interior of $\overline{\Omega}_{-1}^{\varepsilon}$ and $(\varepsilon,1)$ is in the interior of $\overline\Omega_1^\varepsilon$; therefore, using \cref{eq:CondBordoinf,eq:CondBordosup}, we get the following contradiction and conclude: \begin{equation*} \begin{array}{ll} \displaystyle V_{\varepsilon}(- \varepsilon, -1) - \varphi (- \varepsilon) < V_{\varepsilon}(\varepsilon, -1) - \varphi (\varepsilon )= V_{\varepsilon}(\varepsilon, 1) - \varphi (\varepsilon )\\ \displaystyle < V_{\varepsilon}(- \varepsilon, 1) - \varphi (-\varepsilon) = V_{\varepsilon}(- \varepsilon, -1) - \varphi (- \varepsilon) \end{array} \end{equation*} To prove the fourth equation in \cref{eq:HJBproblem}, we argue in the same way. \end{proof} We now prove that $\tilde{V}$ is the maximal subsolution of \cref{eq:HJBproblem}, using the following lemma.
\begin{lemma} \label{lem:vbarra} Assume that, for every $\varepsilon >0$ small enough, the optimal strategy for the $\varepsilon$-approximating problem, starting from any $(x, 1), (x, -1)$ with $x \in [-\varepsilon, \varepsilon]$, is to switch infinitely many times between the two branches (i.e. no state-constraint behavior is optimal). Let $(\bar{\mu}, \bar{a}_{-1}, \bar{a}_1) \in A_0$ be such that $f_{-1}(0, \bar{a}_{-1})>0, f_1(0, \bar{a}_1)<0$, and that \begin{equation}\label{eq:Optimality} \tilde V(0)=u_0(0)= \frac{1}{\lambda}\lbrace \bar{\mu}\ell_{-1}(0, \bar{a}_{-1})+(1-\bar{\mu})\ell_1(0, \bar{a}_1) \rbrace. \end{equation} For every $x \in [-\varepsilon, \varepsilon]$, we consider the two switching trajectories (compare with \cref{s2}) \[ \begin{cases} y'(t) = f_{i(t)}(0, \bar{a}_{i(t)}), \\ i(t)= h_{\varepsilon}\left[y\right](t), \\ y(0)= x, \ \ \ i(0)= 1, \end{cases} \ \ \ \begin{cases} y'(t) = f_{i(t)}(0, \bar{a}_{i(t)}), \\ i(t)= h_{\varepsilon}\left[y\right](t), \\ y(0)= x, \ \ \ i(0)= -1. \end{cases} \] \noindent On the branches, they have constant velocities ($f_1(0, \bar{a}_1)$ and $f_{-1}(0, \bar{a}_{-1})$, towards the left and the right respectively), and they switch infinitely many times. We consider the functions \begin{equation}\label{vbarrainfinito} \begin{aligned} \bar{V}_{\varepsilon}(x, 1) & = \int_{0}^{\infty} e^{-\lambda t}\ell_{i(t)}(0, \bar{a}_{i(t)})dt \quad \text{with}\quad i(0)=1,\\ \bar{V}_{\varepsilon}(x, -1) & = \int_{0}^{\infty} e^{-\lambda t}\ell_{i(t)}(0, \bar{a}_{i(t)})dt \quad \text{with}\quad i(0)=-1. \end{aligned} \end{equation} Then $\bar{V}_{\varepsilon}(\cdot, 1)$ and $\bar{V}_{\varepsilon}(\cdot, -1)$ are differentiable in $[-\varepsilon, \varepsilon]$ and \begin{equation}\label{DifferenzaNulla} \sup_{x \in [-\varepsilon, \varepsilon]}\vert \bar{V}_{\varepsilon}^{'}(x, 1)- \bar{V}_{\varepsilon}^{'}(x, -1) \vert \rightarrow 0\ \ \text{for} \ \ \varepsilon\rightarrow 0. 
\end{equation} \end{lemma} \begin{proof} The differentiability follows from the fact that dynamics and costs are constant. We can rewrite the two functions in \cref{vbarrainfinito} as \begin{equation} \label{eq:vbarra} \begin{aligned} \bar{V}_{\varepsilon}(x, 1) & = \int_{0}^{\frac{x+\varepsilon}{\vert f_1(0, \bar{a}_1)\vert}} e^{-\lambda t}\ell_1(0, \bar{a}_1)dt + e^{\frac{-\lambda(x+\varepsilon)}{\vert f_1(0, \bar{a}_1)\vert}}\bar{V}_{\varepsilon}(-\varepsilon, 1),\\ \bar{V}_{\varepsilon}(x, -1) & = \int_{0}^{\frac{\varepsilon - x}{f_{-1}(0, \bar{a}_{-1})}} e^{-\lambda t}\ell_{-1}(0, \bar{a}_{-1})dt + e^{\frac{-\lambda(\varepsilon - x)}{ f_{-1}(0, \bar{a}_{-1})}}\bar{V}_{\varepsilon}(\varepsilon, -1), \end{aligned} \end{equation} where the upper limit of integration is the time needed to reach the threshold from the corresponding initial branch. Then we have \begin{equation}\label{uguaglianzavbarra} \bar{V}_\varepsilon(-\varepsilon, 1)= \bar{V}_\varepsilon(-\varepsilon, -1)\quad \text{and} \quad \bar{V}_\varepsilon(\varepsilon, -1)= \bar{V}_\varepsilon(\varepsilon, 1), \end{equation} and by \cref{eq:Optimality}, for any $i$, $\lim_{\varepsilon\to 0}\bar{V}_\varepsilon(x,i)= \tilde{V}(0)= u_0(0)$ uniformly in $x \in [-\varepsilon, \varepsilon]$. A direct calculation gives \begin{align*} \bar{V}_{\varepsilon}^{'}(x, 1)& = \frac{1}{\vert f_1(0, \bar{a}_1)\vert}e^{\frac{-\lambda(x+\varepsilon)}{\vert f_1(0, \bar{a}_1)\vert}}\ell_1(0, \bar{a}_1)-\frac{\lambda e^{\frac{-\lambda(x+\varepsilon)}{\vert f_1(0, \bar{a}_1)\vert}}}{\vert f_1(0, \bar{a}_1)\vert}\bar{V}_{\varepsilon}(-\varepsilon, 1),\\ \bar{V}_{\varepsilon}^{'}(x, -1)& = - \frac{1}{f_{-1}(0, \bar{a}_{-1})}e^{\frac{-\lambda(\varepsilon-x)}{ f_{-1}(0, \bar{a}_{-1})}}\ell_{-1}(0, \bar{a}_{-1})+\frac{\lambda e^{\frac{-\lambda(\varepsilon-x)}{ f_{-1}(0, \bar{a}_{-1})}}}{f_{-1}(0, \bar{a}_{-1})}\bar{V}_{\varepsilon}(\varepsilon, -1). 
\end{align*} and hence, as $\varepsilon \to 0$, \begin{align*} & \bar{V}_{\varepsilon}^{'}(x, 1) \longrightarrow \frac{\bar{\mu}\big(\ell_1(0, \bar{a}_1)- \ell_{-1}(0, \bar{a}_{-1})\big)}{\vert f_1(0, \bar{a}_1)\vert}, \\ & \bar{V}_{\varepsilon}^{'}(x, -1) \longrightarrow \frac{(\bar{\mu}-1)\big(\ell_{-1}(0, \bar{a}_{-1})- \ell_{1}(0, \bar{a}_{1})\big)}{ f_{-1}(0, \bar{a}_{-1})}. \end{align*} \noindent Recalling the definition \cref{eq:A0} of $A_0$ and computing $\bar{\mu}$, we get \cref{DifferenzaNulla} from \begin{equation*}
\bar{V}_{\varepsilon}^{'}(x, 1) \longrightarrow \frac{\ell_{1}(0, \bar{a}_{1})- \ell_{-1}(0, \bar{a}_{-1})}{f_{-1}(0, \bar{a}_{-1})- f_1(0, \bar{a}_1) }, \quad
\bar{V}_{\varepsilon}^{'}(x, -1) \longrightarrow \frac{\ell_{1}(0, \bar{a}_{1})- \ell_{-1}(0, \bar{a}_{-1})}{f_{-1}(0, \bar{a}_{-1})- f_1(0, \bar{a}_1)}. \end{equation*} \end{proof} \begin{theorem}\label{Confronto} For every bounded, continuous subsolution $u$ of \cref{eq:HJBproblem}, we have $u \leq \tilde{V}$ in ${\NZQ R}$. \end{theorem} \begin{proof} We may assume we are in the situation of \cref{lem:vbarra}. Indeed, otherwise in at least one branch $\tilde V$ coincides with the corresponding state-constraint value function which, see for example Soner \cite{So}, is greater than any subsolution (note that, in general, the state-constraint value functions do not satisfy the third line of \cref{eq:HJBproblem}). We then also get $u\leq \tilde{V}$ on the other branch. \\ We assume by contradiction that $\sup_{x \in {\NZQ R}}(u-\tilde{V})(x)>\delta>0$. If \begin{equation*} \exists r>0 \vert \forall \delta' >0\ \exists\ \overline{x} \in ]r, +\infty[: \sup_{x\in {\NZQ R}} \big((u-\tilde{V})(x)-(u-\tilde{V})(\overline{x})\big)\leq \delta', \end{equation*} then, by \cref{thm:ishii} and standard comparison techniques, we get a contradiction because, in $ ]r, +\infty[$, $\tilde V$ is a supersolution and $u$ is a subsolution of the same HJB equation. The same holds for the opposite case $]-\infty, -r[$. Hence we may restrict to the case where $u -\tilde{V}$ attains its maximum over $[-r, r]$ at $x=0$. Since $\bar{V}_{\varepsilon}(x, i)$ converges to $\tilde{V}(0)$, with $\bar V_\varepsilon$ defined in \cref{eq:vbarra}, then for small $\varepsilon$, \begin{equation}\label{eq:ConditionMassimo} u(z^i)-\bar{V}_{\varepsilon}(z^i, i) =\max_{[-\varepsilon, \varepsilon]}(u(\cdot)-\bar{V}_{\varepsilon}(\cdot, i))> \frac{\delta}{2} > 0, \end{equation} with $z^i \in [-\varepsilon, \varepsilon]$. 
If for example $\max(u(\cdot)-\bar{V}_\varepsilon (\cdot, 1))$ is attained at $x= -\varepsilon$ and $\max(u(\cdot)-\bar{V}_\varepsilon (\cdot, -1))$ is attained only at $\varepsilon$, then using \cref{uguaglianzavbarra} we get the contradiction \begin{equation} \begin{array}{ll}\label{staccodalbordo} \displaystyle u(-\varepsilon)- \bar{V}_{\varepsilon}(- \varepsilon, 1) = u(-\varepsilon)- \bar{V}_{\varepsilon}(- \varepsilon, -1)\\ \displaystyle < u(\varepsilon)- \bar{V}_{\varepsilon}(\varepsilon, -1) = u(\varepsilon)- \bar{V}_{\varepsilon}(\varepsilon, 1). \end{array} \end{equation} This implies that in at least one branch we can assume that $z^i$ is not equal to the corresponding switching threshold. We may then assume $z^{-1} \in [-\varepsilon, \varepsilon[$.
We now compare $u$ and $\overline V_\varepsilon$. By \cref{eq:vbarra}, we have for every $i=-1, 1$ \begin{equation*} \lambda \bar{V}_{\varepsilon}(x, i)-f_{i}(x, \bar{a}_i)\bar{V}_{\varepsilon}^{'}(x, i)-\ell_i(x, \bar{a}_i)\geq - O(\varepsilon), \end{equation*} \noindent in $x\in[-\varepsilon,\varepsilon[$ or in $]-\varepsilon,\varepsilon]$ respectively, where, here and in the sequel, $O(\varepsilon)$ is a suitable positive infinitesimal quantity as $\varepsilon \to 0$. Recalling that $\bar{V}_{\varepsilon}(\cdot, i)$ is differentiable in $[-\varepsilon, \varepsilon]$, and recalling the sign of $f_i(0, \bar{a}_i)$, we then get for every $i=-1, 1$ \begin{equation}\label{Hamiltonianatrovata} \lambda \bar{V}_{\varepsilon}(x, i)+ H_i(x, p)\geq - O(\varepsilon), \end{equation} for every $x \in [-\varepsilon, \varepsilon[$ and every $p$ subgradient at $x$ with respect to $[-\varepsilon, \varepsilon]$ of $\bar{V}_{\varepsilon}(\cdot, -1)$ (respectively, for any $ x \in ]-\varepsilon, \varepsilon]$ and $p$ subgradient of $\bar{V}_{\varepsilon}(\cdot, 1)$).
Let $\eta:[-\varepsilon, \varepsilon]\to {\NZQ R}$ be continuous and $c>0$ such that (see \cite{So}, condition (A1) in the case of an interval) \begin{equation} \label{funzioneeta}
]x+ \xi\eta(x)-\xi c, x+ \xi\eta(x)+\xi c [ \ \subseteq\ ]-\varepsilon, \varepsilon[ \ \forall x \in [-\varepsilon, \varepsilon], 0< \xi\leq c. \end{equation} For any $0<\xi\leq c$, we define the following function on $[-\varepsilon, \varepsilon] \times [-\varepsilon, \varepsilon]$: \begin{equation*} \Phi_{\xi}(x, y) = u(x)- \bar{V}_{\varepsilon}(y, -1)- \left\vert\frac{x-y}{\xi}-\eta(z^{-1})\right\vert^2-\left\vert y-z^{-1}\right\vert^2. \end{equation*} Let $(x_{\xi}^{-1}, y_{\xi}^{-1})$ be a maximum point of $\Phi_\xi$ in $[-\varepsilon, \varepsilon]\times [-\varepsilon, \varepsilon]$. Recalling $z^{-1} \in [-\varepsilon, \varepsilon[$, by standard estimates (see Soner \cite{So} or Bardi--Capuzzo Dolcetta \cite{BaCaDo}, p. 271) for small $\xi$ we get $x_\xi^{-1} \in ]-\varepsilon, \varepsilon[$, $y_\xi^{-1} \in [-\varepsilon, \varepsilon[$ and \begin{equation}\label{stima2} \frac{x_{\xi}^{-1} - y_{\xi}^{-1}}{\xi} \rightarrow \eta(z^{-1}) \quad \text{and}\quad x_{\xi}^{-1}, y_{\xi}^{-1}\rightarrow z^{-1} \quad\text{as}\quad \xi\rightarrow 0. \end{equation} \noindent We have the following possible cases, for a subsequence $\xi\to 0$:\\ (i) $(x_{\xi}^{-1}, y_{\xi}^{-1}) \in \ ]-\varepsilon,0[ \times [-\varepsilon, \varepsilon[$; (ii) $x_{\xi}^{-1}=0$ and $y_{\xi}^{-1}\in ]-\varepsilon, \varepsilon[$; (iii) $(x_{\xi}^{-1}, y_{\xi}^{-1}) \in \ ]0, \varepsilon[ \times ]-\varepsilon, \varepsilon[$.\\
Case (i). We get for any small $\xi$ \begin{equation}\label{sottosoluzine} \lambda u(x_{\xi}^{-1})+ H_{-1} \bigg(x_{\xi}^{-1}, \frac{2}{\xi}\bigg(\frac{x_{\xi}^{-1} - y_{\xi}^{-1}}{\xi}-\eta(z^{-1})\bigg)\bigg)\leq 0, \end{equation} \begin{equation}\label{soprasoluzione} \lambda \bar{V}_{\varepsilon}(y_{\xi}^{-1}, -1)+ H_{-1}\bigg(y_{\xi}^{-1}, \frac{2}{\xi}\bigg(\frac{x_{\xi}^{-1} - y_{\xi}^{-1}}{\xi}-\eta(z^{-1})\bigg)+2(z^{-1}-y_{\xi}^{-1})\bigg) \geq -O(\varepsilon), \end{equation} and we conclude in the standard way, obtaining a contradiction with \cref{eq:ConditionMassimo} by first sending $\xi \to 0$ and then $\varepsilon \to 0$.\\ Case (ii). By $x_{\xi}^{-1}=0$ we have that \begin{equation}\label{eq:CondMinimo} \begin{aligned} \min\biggl\{\lambda u(0)+ H_1\bigg(0,\frac{2}{\xi}\bigg(\frac{- y_{\xi}^{-1}}{\xi}-& \eta(z^{-1})\bigg)\bigg),\\ &\lambda u(0)+ H_{-1}\bigg(0,\frac{2}{\xi}\bigg(\frac{- y_{\xi}^{-1}}{\xi}-\eta(z^{-1})\bigg)\bigg)\biggr\}\leq0. \end{aligned} \end{equation} If $ \lambda u(0)+H_{-1}\bigg(0, \frac{2}{\xi}\bigg(\frac{- y_{\xi}^{-1}}{\xi}-\eta(z^{-1})\bigg)\bigg)\leq 0$ for a subsequence $\xi \to 0$, we conclude as in Case (i). Otherwise, we have \begin{equation}\label{eq:CondSott} \lambda u(0)+ H_1\bigg(0, \frac{2}{\xi}\bigg(\frac{- y_{\xi}^{-1}}{\xi}-\eta(z^{-1})\bigg)\bigg)\leq 0. \end{equation} The inequalities \cref{soprasoluzione,eq:CondSott} cannot be compared directly because they involve different Hamiltonians. However, noting that $y_{\xi}^{-1} \in ]-\varepsilon, \varepsilon[$, we have \begin{equation*} \left(\bar{V}_\varepsilon(y_{\xi}^{-1}, -1)\right)^{'}=\frac{2}{\xi}\bigg(\frac{x_{\xi}^{-1} - y_{\xi}^{-1}}{\xi}-\eta(z^{-1})\bigg)+2(z^{-1}-y_{\xi}^{-1}). \end{equation*}
By \cref{DifferenzaNulla}, we have \begin{equation*} \bar{V}_\varepsilon(y_{\xi}^{-1}, 1) = \bar{V}_\varepsilon(y_{\xi}^{-1}, -1)+ {O}(\varepsilon),\quad
\left(\bar{V}_\varepsilon(y_{\xi}^{-1}, 1)\right)' = \left(\bar{V}_\varepsilon(y_{\xi}^{-1}, -1)\right)^{'}+ {O}(\varepsilon), \end{equation*} \noindent and using \cref{Hamiltonianatrovata} at $y_{\xi}^{-1}$ for $i=1$, we get \begin{equation}\label{hamiltonianacontraria2} \lambda \bar{V}_\varepsilon(y_{\xi}^{-1}, 1)+ H_1\bigg(y_{\xi}^{-1}, \frac{2}{\xi}\bigg(\frac{ - y_{\xi}^{-1}}{\xi}-\eta(z^{-1})\bigg)+2(z^{-1}-y_{\xi}^{-1}) \bigg)\geq - {O}(\varepsilon). \end{equation} By \cref{eq:CondSott,hamiltonianacontraria2} we obtain a contradiction as in Case (i).\\
Case (iii). For $x_\xi^{-1} \in ]0, \varepsilon[$ we have \begin{equation}\label{sottosoluzionemaggiorezero} \lambda u(x_{\xi}^{-1})+ H_{1} \bigg(x_{\xi}^{-1}, \frac{2}{\xi}\bigg(\frac{x_{\xi}^{-1}- y_{\xi}^{-1}}{\xi}-\eta(z^{-1})\bigg)\bigg)\leq 0, \end{equation} which cannot be compared with \cref{soprasoluzione}. Since also $y_{\xi}^{-1} \in [-\varepsilon, \varepsilon[$, we conclude as before. \end{proof}
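As a quick numerical sanity check of the algebra closing the proof of \cref{lem:vbarra} (illustrative only: the dynamics and costs below are sample values, and $\bar\mu$ is computed assuming, as in the definition \cref{eq:A0} of $A_0$, the balance condition $\bar\mu f_{-1}(0,\bar a_{-1})+(1-\bar\mu)f_1(0,\bar a_1)=0$), one can verify that both limit slopes equal $(\ell_1-\ell_{-1})/(f_{-1}-f_1)$:

```python
# Numerical sketch, not part of the proof: with mu chosen by the balance
# condition mu*f_{-1} + (1-mu)*f_1 = 0, the two limit slopes of V_eps coincide.
f1, fm1 = -3.0, 2.0          # sample dynamics: f_1(0,a_1) < 0, f_{-1}(0,a_{-1}) > 0
l1, lm1 = 5.0, 1.0           # sample running costs ell_1, ell_{-1}
mu = -f1 / (fm1 - f1)        # solves mu*fm1 + (1-mu)*f1 = 0

slope_plus = mu * (l1 - lm1) / abs(f1)       # limit of V_eps'(x, 1)
slope_minus = (mu - 1) * (lm1 - l1) / fm1    # limit of V_eps'(x, -1)
common = (l1 - lm1) / (fm1 - f1)             # claimed common value

assert abs(mu * fm1 + (1 - mu) * f1) < 1e-12  # mu is the balance parameter
assert abs(slope_plus - common) < 1e-12
assert abs(slope_minus - common) < 1e-12
```

The check is purely algebraic: any choice with $f_1<0<f_{-1}$ gives the same conclusion.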
\section{A threefold junction problem}\label{sec:4} Here, we consider a junction given by three half-lines entering the same point (see \cref{threejunction}). In this case we have three labels $\left\{1, 2, 3\right\}$, one for every half-line $R_1, R_2, R_3$, which we identify with the labelled half-line $R_i= [0, +\infty[\times \lbrace i \rbrace$. We also consider the controlled dynamics $f_i:R_i\times A \to {\NZQ R} $ and the running costs $\ell_i: R_i\times A \to [0, +\infty[$. We approximate this triple discontinuity by a thermostatic-type combination in the following way. We extend $f_i$ and $\ell_i$ to $[-\varepsilon_i, +\infty[\times \{i \} \times A$, where the $\varepsilon_i>0$ are not necessarily the same for every $i$. \begin{figure}
\caption{The threefold junction and its thermostatic-type approximation.}
\label{threejunction}
\end{figure} The thermostatic controlled dynamics is given by \begin{equation}\label{s4} \begin{cases} x'(t) = f_{i(t)}(x(t), \alpha(t)),\\ i(t) = \tilde{h}[x](t), \\ i(0) = i_0 \in \left\{1, 2, 3\right\},\ x(0)=x_0 \in \ [-\varepsilon_{i_0}, +\infty[ , \end{cases} \end{equation}
\noindent where $\tilde{h}[x](t)$ is the delayed thermostatic rule shown in \cref{threejunction}. In this thermostatic representation, denoting by $R_{\varepsilon_i}:=[-\varepsilon_i, +\infty[ \times \lbrace i \rbrace $ (and by $int(R_{\varepsilon_i})= ]-\varepsilon_i, +\infty[ \times \{i\}$), we can only switch from $R_{\varepsilon_1}$ to $R_{\varepsilon_2}$, from $R_{\varepsilon_2}$ to $R_{\varepsilon_3}$ and from $R_{\varepsilon_3}$ to $R_{\varepsilon_1}$. This is an arbitrary choice, because in the limit problem, at the junction point, a switch to any of the other branches is possible. However, we will recover this kind of behavior in the limit procedure because the transition times become smaller and smaller and, indeed, the limit equation \cref{eq:HJBproblem3} is independent of that choice. Moreover, in the switching rule given by $\tilde{h}$, the variable $x$ is also subject to a discontinuity at the switching instant, unlike in the twofold case of the previous section (see \cref{threejunction}, and note also that, in the thermostat, the branch $R_{\varepsilon_1}$ is oriented in the opposite way with respect to the standard one). For every $i_0 \in \left\{1, 2, 3\right\}$ and every $x_0 \in \ [-\varepsilon_{i_0}, +\infty[$ we consider the value function \begin{equation} V_{\varepsilon_1, \varepsilon_2, \varepsilon_3}(x_0, i_0)=\inf_{\alpha \in {\cal A}}\int_{0}^{\infty}e^{-\lambda t}\ell_{i(t)}(x(t), \alpha(t))dt, \end{equation} and for every $i=1,2,3$ the Hamiltonians \begin{equation}\label{s5} H_i(x, p)= \sup_{a \in A}\left\{-f_i(x,a)\cdot p - \ell_i(x, a)\right\}, \end{equation} where we drop the index $i$ in the entries of $f_i$, $\ell_i$ and hence in $H_i$. 
We will sometimes use this simplification of the notation in the sequel too, without recalling it.\\ As in \cref{th:sistemapprox} we have the following proposition. \begin{proposition}\label{Approx3} For any choice of $\varepsilon_1, \varepsilon_2, \varepsilon_3>0$, the value function $V_{\varepsilon_1, \varepsilon_2, \varepsilon_3}$ of the switching three-thermostatic optimal control problem is the unique bounded and continuous function on $R_{\varepsilon_1} \cup R_{\varepsilon_2} \cup R_{\varepsilon_3}$ which satisfies, in the viscosity sense, \begin{equation}\label{RO} \begin{cases} \lambda V_{\varepsilon_1, \varepsilon_2, \varepsilon_3}(x, 1) + H_1 \bigg(x, V'_{\varepsilon_1, \varepsilon_2, \varepsilon_3}(x, 1)\bigg) = 0 & \text{in} \ \textit{int}(R_{\varepsilon_1}), \\ V_{\varepsilon_1, \varepsilon_2, \varepsilon_3}(-\varepsilon_1, 1) = V_{\varepsilon_1, \varepsilon_2, \varepsilon_3}(\varepsilon_2, 2); \\ \lambda V_{\varepsilon_1, \varepsilon_2, \varepsilon_3}(x, 2) + H_2 \bigg(x, V'_{\varepsilon_1, \varepsilon_2, \varepsilon_3}(x, 2)\bigg) = 0 & \text{in} \ \textit{int}(R_{\varepsilon_2}), \\ V_{\varepsilon_1, \varepsilon_2, \varepsilon_3}(-\varepsilon_2, 2) = V_{\varepsilon_1, \varepsilon_2, \varepsilon_3}(\varepsilon_3, 3); \\ \lambda V_{\varepsilon_1, \varepsilon_2, \varepsilon_3}(x, 3) + H_3 \bigg(x, V'_{\varepsilon_1, \varepsilon_2, \varepsilon_3}(x, 3)\bigg) = 0 & \text{in} \ \textit{int}(R_{\varepsilon_3}),\\ V_{\varepsilon_1, \varepsilon_2, \varepsilon_3}(-\varepsilon_3, 3) = V_{\varepsilon_1, \varepsilon_2, \varepsilon_3}(\varepsilon_1, 1). \end{cases} \end{equation} \end{proposition} The proof is essentially an adaptation of that of \cref{th:sistemapprox} in \cite{Ba}, to which the reader is referred. However, a very short sketch of the proof is given in the Appendix.
In the following subsections we consider two different limit junction problems. In \cref{sectUni} the limit problem is such that, at the junction point, the admissible dynamics and costs are given by a suitably interpreted balance of the behaviours on all three branches. In \cref{subsec:nonuniform}, instead, a balance among only two branches is also admitted. The main difference between the two limiting procedures is that in the first case the three thresholds go to zero with the same velocity, whereas in the second case different velocities are admitted. Such differences lead to two limit HJB problems which differ in the definition of the admissible test functions (see the comments after \cref{def:visco}) and hence require distinct strategies for proving a comparison result.\\
We finally note that a similar control problem with $n$ branches is studied in \cite{AcOuTc}. However, in that work no convexification or balance of the dynamics and costs is taken into account at the junction; the optimal control problem considered there is thus different from ours. \subsection{Uniform switching thresholds}\label{sectUni} We assume $(\varepsilon_1, \varepsilon_2, \varepsilon_3)= (\varepsilon, \varepsilon, \varepsilon)$. Looking at the twofold junction, it is easy to see that the convexification parameters $\mu, 1-\mu$ are given by the ratio between the time spent using $f_i (0,a_i)$ to go from one threshold to the other (namely $2\varepsilon/f_i (0,a_i) $) and the total time needed to perform a complete switching. Coherently, when $f_1, f_2, f_3 < 0$ (dropping the entries in the dynamics), namely when we perform the whole cycle, the right convex parameters to be considered are \begin{equation}\label{musogliefisse} \mu_1=\frac{f_{2}f_3}{f_{2}f_3 + f_{1}f_3 + f_{1}f_2}, \ \mu_2=\frac{f_1f_3}{f_{2}f_3 + f_{1}f_3 + f_{1}f_2}, \ \mu_3=\frac{f_1f_2}{f_{2}f_3 + f_{1}f_3 + f_{1}f_2}. \end{equation} Moreover $(\mu_1, \mu_2, \mu_3) \in [0, 1]^3$ and $\sum_{i=1}^{3} \mu_i = 1$.
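The time-fraction interpretation of \cref{musogliefisse} can be checked numerically. The following sketch (sample negative dynamics only, chosen for illustration) verifies that $\mu_i$ coincides with the fraction of cycle time spent on branch $i$ and that the $\mu_i$ sum to one:

```python
# Illustrative check of the convex parameters mu_i in the uniform-threshold case.
f = [-1.0, -2.0, -4.0]       # sample dynamics f_i(0, a_i) < 0

# Formula from the text: mu_i is proportional to the product of the other two f_j.
den = f[1] * f[2] + f[0] * f[2] + f[0] * f[1]
mu = [f[1] * f[2] / den, f[0] * f[2] / den, f[0] * f[1] / den]

# Time to cross a branch of length 2*eps at speed |f_i| is 2*eps/|f_i|;
# the factor 2*eps cancels in the ratio, so we can normalize it away.
t = [1.0 / abs(fi) for fi in f]
frac = [ti / sum(t) for ti in t]

assert abs(sum(mu) - 1.0) < 1e-12
assert all(abs(m - fr) < 1e-12 for m, fr in zip(mu, frac))
```

Since $f_i<0$, each product $f_jf_k$ is positive, so $(\mu_1,\mu_2,\mu_3)\in[0,1]^3$ automatically.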
Observe that we no longer have the interpretation as a balance of forces: indeed, in general $ \sum_{i=1}^{3} \mu_i f_i(0,a_i) \neq 0$, regardless of our choice of the signs of the branches $R_i$ and dynamics $f_i$. Also note that \cref{musogliefisse} is meaningful, with the same interpretation, when at most one $f_i$ is null, in which case we definitely remain in the corresponding branch. To identify the right limit optimal control problem when $\varepsilon\to 0$ we define its controlled dynamics. In particular, calling $TR = R_1 \cup R_2 \cup R_3$, if $(x, i) \in TR$ with $x\neq 0$, then the dynamics is the usual $f_i(x, a_i)$ with $a_i\in A$. If instead $x=0$, being $(0, i)=(0,j)$ for $i, j \in \{1, 2, 3\}, i\neq j$, we can either choose any dynamics that makes us stay inside a single branch $R_i$, or we may rest at zero ``formally" using any combination $ \sum_{i=1}^{3} \mu_i f_i(0,a_i)$ with $f_i(0, a_i)$ and $\mu_i$ as before (where $\mu_i$ plays a role in the definition of the corresponding cost, see below). The set of controls at the junction point is then \begin{equation*} A(0)=A_{0}\cup \widetilde{A} \end{equation*} with (note that in $\widetilde{A}$ the index $i$ is also at our disposal) \begin{align*} A_{0} &= \lbrace (a_1, a_2, a_3) \in A^3 \vert \ f_i(0, a_i)\leq 0\ \text{with at most one equal to} \ 0\rbrace, \\ \widetilde{A} &= \left\{(a, i) \in A \times \{1, 2,3\} \vert\ f_i(0, a)\geq 0\ \right\}. \end{align*} Then, calling $\hat a$ the generic element of $A(0)$, we define \begin{equation*} f_0(0, \hat a)= \begin{cases} f_i(0, a) & \text{if} \ \hat{a} \in \widetilde{A}, \\ 0 & \text{if}\ \hat{a} \in A_{0}. 
\end{cases} \end{equation*} With the same arguments, if $(x, i) \in TR$ and $x\neq 0$ then the running cost is $\ell_i(x, a_i)$ with $a_i\in A$, otherwise we define \begin{equation*} \ell_0(0, \hat a)= \begin{cases} \ell_i(0, a) & \text{if} \ \hat{a} \in \widetilde{A}, \\ \mu_1 \ell_1(0, a_1)+\mu_2\ell_2(0, a_2)+ \mu_3\ell_3(0, a_3) & \text{if}\ \hat{a} \in A_{0}. \end{cases} \end{equation*} The quadruples $f = (f_1, f_2, f_3, f_0)$ and $\ell = (\ell_1, \ell_2, \ell_3, \ell_0)$ then define the threefold junction optimal control problem. In particular given an initial state $(x_0, i_0) \in TR$ and a measurable control $\alpha(t) \in A\cup A(0)$ we consider a possible admissible trajectory in $TR$ whose evolution, denoted by $(x(t), i(t))$, is such that $i(t)$ remains constant whenever $x(t)>0$ and $x(t)$ evolves with dynamics described above. Given an initial state, the set of measurable controls for which there exists a unique admissible trajectory is not empty and we denote it by ${\cal A}_{(x_0, i_0)}$. We then consider an infinite horizon problem with a discount factor $\lambda >0$ given by \begin{equation*} J(x_0, i_0, \alpha) = \int_{0}^{+\infty} e^{-\lambda t}\ell(x(t), i(t), \alpha(t)) dt, \end{equation*} where $\ell$ is the running cost described above and the corresponding
value function is \begin{equation}\label{funzionevaloresogliefisse} V(x_0, i_0)= \inf_{\alpha \in {\cal A}_{(x_0, i_0)}}J(x_0, i_0, \alpha). \end{equation}
In the sequel when $x=0$ we will drop the index $i$. If we remain at $x=0$ for all times using controls in $A_0$, then the best cost is given by \begin{equation} \label{s15} u_{1,2,3}(0) = \frac{1}{\lambda}\inf_{A_0}\left\{\mu_1\ell_1(0, a_1) + \mu_2\ell_2(0, a_2) + \mu_3\ell_3(0, a_3)\right\}. \end{equation} \begin{rem}\label{Remarkcontrolli} In general $A_0$ is not compact. However, if $(a_1^{k}, a_2^{k}, a_3^{k}) \in A_0$ is a minimizing sequence for $u_{1,2,3}(0)$ converging to $(\bar{a}_1, \bar{a}_2, \bar{a}_3) \notin {A_0}$, the quantity inside the bracket in \cref{s15} loses meaning, but we still have the inequality \begin{equation*} \lim_{k\rightarrow \infty}\left\{\mu_1^{k}\ell_1(0, a_1^{k}) + \mu_2^{k}\ell_2(0, a_2^k) + \mu_3^k \ell_3(0, a_3^k)\right\} \geq \min\{\ell_i(0, \bar{a}_i) \vert f_i(0,\bar{a}_i)=0\}, \end{equation*} and we can still obtain an optimal behavior among those that make us stay at $x=0$. \end{rem} \begin{theorem} \label{T1} Assume \cref{eq:Lip,eq:Controllability,eq:LLip}. Then, $V$ is continuous on $TR$. Moreover, when $x=0$, \begin{equation} \label{s6} V(0) = \min \left\{u_{1, 2, 3}(0), V_{sc(1)}(0), V_{sc(2)}(0), V_{sc(3)}(0) \right\}, \end{equation} where $V_{sc(i)}(0)$ is the value function at $x=0$ of the state-constraint optimal control problem on $R_i$. Therefore\\ i) if $V(0) = u_{1, 2, 3}(0)$, then $V$ is the unique bounded and continuous solution of the three problems (one for every $i \in \left\{1, 2, 3\right\}$) \begin{equation} \begin{cases} \label{s7} \lambda u + H_{i}(x, u^{\prime}) = 0 \ \ \text{in} \ int(R_i) \\ u(0) = u_{1, 2, 3}(0) \end{cases} \end{equation} ii) if $V(0) = V_{sc(i)}(0)$, for some $i = 1, 2, 3$, then $V$ satisfies: $V = V_{sc(i)}$ in $R_i$, and uniquely solves (for every $j \in \left\{1, 2, 3\right\} \setminus \left\{i\right\}$) \begin{equation} \begin{cases} \label{s8} \lambda u + H_{j}(x, u^{\prime}) = 0 \ \ \text{in} \ int(R_j) \\ u(0) = V_{sc(i)}(0). \end{cases} \end{equation}
\end{theorem} \begin{proof} The continuity of $V$ comes from controllability \cref{eq:Controllability} and regularity \cref{eq:Lip,eq:LLip} in a standard way. Moreover, \cref{s6} comes from \cref{funzionevaloresogliefisse} because the four terms in the minimum are exactly the only allowed values (see also Remark \ref{Remarkcontrolli}). Finally \cref{s7,s8} follow from standard properties of Dirichlet problems in the viscosity sense. \end{proof}
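The role of $u_{1,2,3}(0)$ in \cref{s15} can also be illustrated numerically: for fixed constant controls with $f_i(0,a_i)<0$, the exact discounted cost of the $\varepsilon$-switching cycle approaches $\frac{1}{\lambda}\sum_i\mu_i\ell_i(0,a_i)$ as $\varepsilon\to 0$. The following sketch (all dynamics and costs are illustrative sample values, not taken from the paper) computes both quantities:

```python
import math

# Illustrative check: discounted cost of the eps-thermostatic cycle vs. the
# stay-at-zero cost (1/lambda) * sum_i mu_i * ell_i, for constant controls.
lam = 1.0
f = [-1.0, -2.0, -4.0]       # sample dynamics f_i(0, a_i) < 0
ell = [1.0, 2.0, 3.0]        # sample running costs ell_i(0, a_i)

den = f[1]*f[2] + f[0]*f[2] + f[0]*f[1]
mu = [f[1]*f[2]/den, f[0]*f[2]/den, f[0]*f[1]/den]
u0 = sum(m * l for m, l in zip(mu, ell)) / lam

def cycle_cost(eps):
    """Exact discounted cost of the periodic switching trajectory: on branch i
    we travel a distance 2*eps at speed |f_i|, paying running cost ell_i."""
    t = [2 * eps / abs(fi) for fi in f]          # time spent on each branch
    T = sum(t)                                   # cycle period
    s = [0.0, t[0], t[0] + t[1]]                 # entry time of each branch
    one_cycle = sum(l * (math.exp(-lam*si) - math.exp(-lam*(si+ti))) / lam
                    for l, si, ti in zip(ell, s, t))
    return one_cycle / (1.0 - math.exp(-lam*T))  # geometric sum over cycles

assert abs(cycle_cost(1e-4) - u0) < 1e-2
assert abs(cycle_cost(1e-5) - u0) < abs(cycle_cost(1e-2) - u0)
```

The error behaves like $O(\varepsilon)$, matching the infinitesimal corrections appearing in the proofs of this section.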
\begin{theorem}\label{limitsogliefisse} Assume \cref{eq:Lip,eq:Controllability,eq:LLip}. The value function $V$ \cref{funzionevaloresogliefisse} (also characterized by \cref{T1}) satisfies \begin{equation} \label{s16} V(x, i) = \lim_{\varepsilon \rightarrow 0}V_{\varepsilon, \varepsilon, \varepsilon}(x, i) \ \ \forall \ (x, i) \in R_i, \ i= 1, 2, 3, \end{equation} where $V_{\varepsilon, \varepsilon, \varepsilon}$ is the value function of the approximating thermostatic problem \cref{RO} with uniform thresholds $(\varepsilon, \varepsilon, \varepsilon)$, and the convergence is uniform. Moreover, when $x=0$ the limit is independent of $i = 1, 2, 3$. \end{theorem} \begin{proof} We first prove that \cref{s16} holds for $x=0$ (the junction point). The fact that the limit in \cref{s16}, whenever it exists, is independent of $i$ when $x=0$ comes from the controllability hypothesis \cref{eq:Controllability}, because $\vert V_{\varepsilon, \varepsilon, \varepsilon}(0, i) - V_{\varepsilon, \varepsilon, \varepsilon} (0, j) \vert$ is an infinitesimal of the same order as $\varepsilon$. In the sequel, we drop the symbol $i$ in the expression $V_{\varepsilon, \varepsilon, \varepsilon}(0, i)$.\\ We prove \cref{s16} at $x=0$ for a convergent subsequence, still denoted by $(\varepsilon, \varepsilon, \varepsilon)$, which exists because the $ V_{\varepsilon, \varepsilon, \varepsilon}$ are equi-bounded. The uniqueness of the limit will then give the whole \cref{s16}. By contradiction, suppose that $V(0) < \lim V_{\varepsilon, \varepsilon, \varepsilon}(0)$. By \cref{eq:Controllability}, for every $\varepsilon> 0$ we have $V_{\varepsilon, \varepsilon, \varepsilon}(0) \leq V_{sc(i)}(0)$ for every $i=1, 2, 3$. Hence, the contradiction hypothesis implies $V(0) = u_{1, 2, 3}(0)$ by \cref{s6}. Suppose that $(a_1,a_2, a_3) \in A_0$ realizes the minimum in the definition of $u_{1, 2, 3}(0)$. We analyze some possible cases; the others are similar.
1) $f_1(0,a_1), f_2(0, a_2), f_3(0, a_3) < 0$. Hence, using a suitable switching control between those constant controls, we get that $V_{\varepsilon, \varepsilon, \varepsilon}(0) $ is not larger than $ u_{1, 2, 3}(0)$ plus an infinitesimal quantity as $\varepsilon\to 0$, which is a contradiction.
2) $f_1(0,a_1) = 0$, $f_2(0, a_2), f_3(0, a_3) < 0$. In this case we reach $R_1$ and stop at $x=0$ with dynamics $f_1(0,a_1)$. Hence, $u_{1, 2, 3}(0)= \frac{1}{\lambda} \ell_1(0,a_1)$ cannot be lower than $V_{sc(1)}(0)$, which is a contradiction.
If there is no minimizing $(a_1,a_2,a_3)$ (see Remark \ref{Remarkcontrolli}), then the cost $u_{1, 2, 3}(0)$ cannot be better than the state-constraint value $V_{sc(i)}(0)$, and as before we again have a contradiction.
Now assume $\lim V_{\varepsilon, \varepsilon, \varepsilon}(0) < V(0)$. Let $\delta > 0$ be such that, for $\varepsilon$ small enough, $V_{\varepsilon, \varepsilon, \varepsilon}(0) + \delta < V(0)$. A measurable control $\alpha$ which almost realizes the optimum (up to $\beta>0$) for $V_{\varepsilon, \varepsilon, \varepsilon}(0)$ must be such that there are infinitely many switches between all branches $R_i^{\varepsilon}$ (i.e., for every $i$, $f_i(x, \alpha_i)<0\ \forall\ x$).
Indeed, if this is not the case, then for at least one branch $R_i^{\varepsilon}$ the trajectory definitely remains inside it. Hence, for small $\varepsilon$, $V_{\varepsilon, \varepsilon, \varepsilon}(0)$ is almost equal to $V_{sc(i)}(0)$, which is a contradiction. We can restrict ourselves to piecewise constant controls, again denoted by $\alpha$, since $V_{\varepsilon, \varepsilon, \varepsilon}$, defined either by measurable controls or by piecewise constant controls, satisfies the same problem \cref{RO}, which admits a unique solution (see e.g. \cite{BaCaDo}, Remark 2.15, page 109). Then, to obtain the optimum, on each branch $R_i^{\varepsilon}$ let $x_1^i, \dots, x_{n^i}^i$ be the points corresponding to the discontinuity instants $t_1^i, \dots, t_{n^i}^i$ of the control $\alpha$ and let $a_j^i$ be the constant controls, for every $i=1, 2, 3$ and $j=1, \dots, n^i-1$. Under the assumption that $f_i(0, a_j^i)<0$ for all $i, j$, we consider the dynamics $f_i(0, a_j^i)$ and the running cost $\ell_i(0, a_j^i)$ on every spatial interval $[x_j^i, x_{j+1}^i]$. Now, for every $i$ we consider \begin{equation}\label{infdelrapporto} \inf_{a \in A}\left\{ \frac{\ell_i(0, a)}{\vert f_i(0, a)\vert}\ \vert f_i(0, a)<0\right\}. \end{equation} If \cref{infdelrapporto} is a minimum for every $i$, attained at $(\bar{a}_1, \bar{a}_2, \bar{a}_3)$, then in each $R_i^{\varepsilon}$ we use the constant dynamics $f_i(0, \bar{a}_i)$ and the constant running cost $\ell_i(0, \bar{a}_i)$. \\Therefore $\left\vert J(\cdot, i, \alpha)- J(\cdot, i, \bar{a}_i)\right\vert \leq O(\varepsilon)$ and we get \begin{equation}\label{primacontraddizione} \begin{array}{ll} \displaystyle V_{\varepsilon, \varepsilon, \varepsilon}(0)\geq J(\cdot, i, \alpha)-\beta\geq J(\cdot, i, \bar{a}_i)-O(\varepsilon)-\beta\\ \displaystyle \geq u_{1, 2, 3}(0)-O(\varepsilon)-\beta \geq V(0)-O(\varepsilon)-\beta, \end{array} \end{equation} which is a contradiction. 
If, for some $i$, the infimum in \cref{infdelrapporto} is not attained, then we can consider a minimizing sequence $a_i^k$ that realizes the infimum up to an error $O(\frac{1}{k})$. In particular $a_i^k \to \tilde{a}_i \in A$ as $k\to +\infty$ and $f_i(0, a_i^k)\to f_i(0, \tilde{a}_i)=0$, since $f_i(0, a_i^k)<0$. However, since the optimal strategy is to switch among the branches, we cannot stop in the branch $R_i^{\varepsilon}$ with dynamics $f_i(0, \tilde{a}_i)$ paying the cost $\ell_i(0, \tilde{a}_i)$. Then, again taking into account that $f_i(0, a_j^i)<0$, we have \begin{equation}\label{secondacontradd} \begin{array}{ll} \displaystyle V_{\varepsilon, \varepsilon, \varepsilon}(0)\geq J(\cdot, i, \alpha)-\beta\geq J(\cdot, i,a_i^k)-O\left(\frac{1}{k}\right)-O(\varepsilon)-\beta\\ \displaystyle \geq u_{1, 2, 3}(0)-O\left(\frac{1}{k}\right)-O(\varepsilon)-\beta \geq V(0)-O\left(\frac{1}{k}\right)-O(\varepsilon)-\beta, \end{array} \end{equation} which is again a contradiction.\\
Therefore, in the end, $V_{\varepsilon, \varepsilon, \varepsilon}(0)$ cannot be less than $V(0) - \delta$, by the definition of $V(0)$. This is a contradiction, and hence $\lim V_{\varepsilon, \varepsilon, \varepsilon}(0) = V(0)$. The equations solved by $V_{\varepsilon,\varepsilon, \varepsilon}$ and by $V$ (\cref{RO,s7,s8} respectively) are the same for all $(x, i) \in int(R_i)$, and the boundary datum converges to $V(0)$. Hence, representing the solutions as the value functions of the corresponding optimal control problems, we get \cref{s16} and the uniform convergence. \end{proof} To show that $V$ in \cref{funzionevaloresogliefisse} is a viscosity solution of problem \cref{eq:HJBproblem3} below, we introduce the test functions for the differential equations on the branches and give the definition of viscosity subsolution and supersolution of \cref{eq:HJBproblem3}.
\begin{definition}\label{def_giromondo} Let $\varphi: TR\to {\NZQ R}$ be a function such that \begin{equation}\label{test_function} \begin{aligned}
&\varphi|_{R_i}:=\varphi_i : R_i \longrightarrow {\NZQ R} \\ & (x, i) \longmapsto \varphi_i(x, i) \ \ \ \text{if}\ x \neq 0, \forall i \in \lbrace 1, 2, 3\rbrace \\ & (0, i) \longmapsto \varphi_i(0, i)=\varphi_j(0, j)\ \forall j \in \lbrace 1, 2, 3\rbrace \setminus \lbrace i \rbrace, \end{aligned} \end{equation} with $\varphi \in C^0(TR)$ and $\varphi_i \in C^1(R_i)$. \end{definition} \begin{definition} \label{defsubsup} A continuous function $u:TR\to {\NZQ R}$ is a viscosity subsolution of \cref{eq:HJBproblem3} if for any $(x, i) \in TR$, any $\varphi$ as in \cref{test_function} such that $u-\varphi$ has a local maximum at $(x, i)$ with respect to $TR$, then \begin{equation} \begin{aligned} &\lambda u(x, i)+H_i(x, \varphi'_i(x, i))\leq 0 & & x \in int(R_i), \\ & \min\left\{\lambda u(0, i)+H_i(0, \varphi'_i(0, i)), \ i=1, 2, 3 \right\}\leq 0 & & x=0. \end{aligned} \end{equation} A continuous function $u:TR\to {\NZQ R}$ is a viscosity supersolution of \cref{eq:HJBproblem3} if for any $(x, i) \in TR$, any $\varphi$ as in \cref{test_function} such that $u-\varphi$ has a local minimum at $(x, i)$ with respect to $TR$, then \begin{equation} \begin{aligned} &\lambda u(x, i)+H_i(x, \varphi'_i(x, i))\geq 0 & & x \in int(R_i), \\ & \max\left\{\lambda u(0, i)+H_i(0, \varphi'_i(0, i)),\ i=1, 2, 3 \right\}\geq 0 & & x=0. \end{aligned} \end{equation} In particular note that if $x=0$ then the local maximum/minimum is with respect to all the three branches and $\varphi'_i(0, i)$ is the right derivative on the branch $i$, $(\varphi'_i)^+$. Since $(0, i) = (0, j)$ for $i,\ j \in \{1, 2, 3\} , \ i\neq j$, we drop the index $i$ in the pair $(0, i)$. \end{definition} We will prove the following theorem using the thermostatic approximation, namely considering the approximating value function $V_{\varepsilon, \varepsilon, \varepsilon}$. 
Differently from the twofold junction problem, in which the index identifying the branch is encoded in the sign of $x$ and the test function satisfies $\varphi \in C^1({\NZQ R})$, here we need to extend the test function $\varphi_i$ in \cref{test_function} from $R_i$ to $R_i^{\varepsilon}$. To do so, we distinguish the case in which $V-\varphi$ has a local maximum point at $x=0$ from that in which $x=0$ is a local minimum point, both with respect to all three branches.\\ If $V-\varphi$ has a local maximum point at $x=0$, then we suppose that \begin{equation}\label{assumptionextendedtestfunction} \varphi_1'(0)^+\leq\varphi_2'(0)^+\leq\varphi_3'(0)^+. \end{equation} Note that our switching sequence is $1\to 2\to 3\to 1$, which is coherent with such an order. If the order is different, then we consider a different switching sequence in the approximating thermostatic $\varepsilon$-problem, still coherent with the order. This is always possible because the limit function $V$ is independent of the switching order of the chosen approximating problem. Then we define \begin{equation}\label{extendedtestfunction} \tilde{\varphi}_i: [-\varepsilon, +\infty[\times \lbrace i\rbrace \longrightarrow {\NZQ R},\quad \tilde{\varphi}_i = \begin{cases} \varphi_i(x, i) & x\geq 0 \\ \varphi_{i_s}(-x, i_s) & x < 0 \end{cases} \end{equation} for $i=1, 2$, with $i_s$ the branch reached by the next transition from $i$. If $i=3$ we construct $\tilde{\varphi}_3$ in two different ways, corresponding to the cases: \begin{equation}\label{estensionevarphiramo3_1}
\text{if} \quad \varphi_1'(0)^+=\varphi_3'(0)^+ \quad \text{then} \quad \tilde{\varphi}_3= \begin{cases} \varphi_3(x, 3) & x\geq 0, \\ \varphi_{1}(-x, 1) & x < 0. \end{cases} \end{equation} \begin{equation}\label{estensionevarphiramo3_2} \text{if} \quad \varphi_1'(0)^+<\varphi_3'(0)^+ \quad \text{then}\quad \tilde{\varphi}_3= \begin{cases} \varphi_3(x, 3) & x\geq 0, \\ \varphi_{3}(-x, 3) & x < 0. \end{cases} \end{equation} By the assumption \cref{assumptionextendedtestfunction}, in the first case we have $\varphi_1'(0)^+=\varphi_2'(0)^+=\varphi_3'(0)^+$, while in the second case, at least for small $\varepsilon$, $\tilde\varphi_1(\varepsilon, 1)=\varphi_1(\varepsilon, 1)\le\varphi_3(\varepsilon, 3)=\tilde\varphi_3(-\varepsilon, 3)$. In both cases we then have $\tilde\varphi_1(\varepsilon, 1)\le\tilde\varphi_3(-\varepsilon, 3)$.\\ If instead $V-\varphi$ has a local minimum point at $x=0$ then we suppose that \begin{equation}\label{assumptionextendedtestfunction2} \varphi_1'(0)^+\geq\varphi_2'(0)^+\geq\varphi_3'(0)^+, \end{equation} and that the switching order is the coherent one, as above. In this case we construct $\tilde{\varphi}_3$ as in \cref{extendedtestfunction,estensionevarphiramo3_1,estensionevarphiramo3_2} (with the only difference that the case $\varphi_1'(0)^+<\varphi_3'(0)^+$ is replaced by $\varphi_1'(0)^+>\varphi_3'(0)^+$). In this case, at least for small $\varepsilon$, we have $\tilde\varphi_1(\varepsilon, 1)\ge\tilde\varphi_3(-\varepsilon, 3)$.\\ The function $\tilde{\varphi}_i$ is not differentiable at $x=0$, hence we cannot write a unique HJB equation for the function $V_{\varepsilon, \varepsilon, \varepsilon}(\cdot, i)$ in the branch $R_i^{\varepsilon}$. To overcome the discontinuity of $\tilde{\varphi}_i'$ at $x=0$ we interpret the behaviour of the dynamics $f_i(x, a_i)<0$ for $x \in ]-\varepsilon, 0[$ as entering the next branch of the switching rule.
More precisely, considering for example the branches $R_1^{\varepsilon}$ and $R_2^{\varepsilon}$, we define the function ${V}_{\varepsilon, \varepsilon, \varepsilon}(x, 1)=: \widetilde{V}_{\varepsilon, \varepsilon, \varepsilon}(-x, 2)$, the dynamics $-{f}_1(x, a)=:\tilde{f}_2(-x, a)$ and the corresponding running cost ${\ell}_1(x, a)=:\tilde{\ell}_2(-x, a)$ for $x\in ]-\varepsilon, 0[$. In this way, for any $x \in ]-\varepsilon, 0[$ which is a local maximum point of $V_{\varepsilon, \varepsilon, \varepsilon}(\cdot, 1)- \tilde{\varphi}_1(\cdot, 1)$, we get that $V_{\varepsilon, \varepsilon, \varepsilon}(\cdot, 1)$ satisfies
\begin{equation*}
\lambda \widetilde{V}_{\varepsilon, \varepsilon, \varepsilon}(-x, 2)+\sup_{a \in A}\left\{-\tilde{f}_2(-x, a)\varphi_2'(-x, 2)-\widetilde \ell_2(-x, a)\right\}\leq 0,
\end{equation*}
which, by the considerations above, is equivalent to
\begin{equation*}
\lambda V_{\varepsilon, \varepsilon, \varepsilon}(x, 1)+\sup_{a \in A}\left\{-f_1(x, a)\tilde{\varphi}_1'(x, 1)-\ell_1(x, a)\right\}\leq 0.
\end{equation*}
The same ideas can be applied to the other pairs of branches $(R_2^{\varepsilon}, R_3^{\varepsilon})$ and $(R_3^{\varepsilon}, R_1^{\varepsilon})$. \begin{theorem} \label{thm:ishiitre} Assume \cref{eq:Lip,eq:Controllability,eq:LLip}. The value function $V$ in \cref{funzionevaloresogliefisse} is a viscosity solution of the Hamilton-Jacobi-Bellman problem \begin{equation}\label{eq:HJBproblem3} \begin{cases} \lambda V + H_{1}(x, V') = 0 \ \ \text{in} \ int(R_1) \\ \lambda V + H_{2}(x, V') = 0 \ \ \text{in} \ int(R_2) \\ \lambda V + H_{3}(x, V') = 0 \ \ \text{in} \ int(R_3) \\ \min \left\{\lambda V + H_1, \lambda V + H_{2}, \lambda V + H_{3}\right\} \leq 0 \ \text{on} \ x=0 \\ \max \left\{\lambda V + H_1, \lambda V + H_{2}, \lambda V + H_{3}\right\} \geq 0 \ \text{on} \ x=0 \end{cases} \end{equation} \end{theorem} \begin{proof} From \cref{Approx3,limitsogliefisse} and by classical convergence results, we get the first three equations in \cref{eq:HJBproblem3}.\\
We now prove the fourth equation in \cref{eq:HJBproblem3}. Let $\varphi$ be as given in \cref{test_function} such that $V-\varphi$ has a strict relative maximum at $x=0$ with respect to all three branches, and assume \cref{assumptionextendedtestfunction}. For every $i$ we have \begin{equation}\label{condizionesottsoludynentranti} \lambda V(0)+\sup_{a\in A, f_i(0,a)\ge0}\{-f_i(0,a)\varphi_i'(0)^+-\ell_i(0,a)\}\le 0. \end{equation} \noindent Indeed, for every $\varepsilon>0$ and for every $t>0$, since $V_{\varepsilon,\varepsilon,\varepsilon}$ satisfies the dynamic programming principle (here ``p.c.'' stands for piecewise constant control), we have \begin{equation*} V_{\varepsilon,\varepsilon,\varepsilon}(0,i)\le\inf_{\alpha\ p.c., f_i(0,\alpha)\ge0}\left( \int_0^te^{-\lambda s}\ell_i(x(s),\alpha(s))ds+e^{-\lambda t}V_{\varepsilon,\varepsilon,\varepsilon}(x(t),i) \right) \end{equation*} \noindent and hence, passing to the limit $\varepsilon\to0^+$, \begin{equation*} V(0)\le\inf_{\alpha\ p.c., f_i(0,\alpha)\ge0}\left( \int_0^te^{-\lambda s}\ell_i(x(s),\alpha(s))ds+e^{-\lambda t}V(x(t),i) \right) \end{equation*} \noindent and finally we get the desired inequality \cref{condizionesottsoludynentranti}, since $x=0$ is a local maximum point for $V-\varphi_i$ with respect to $R_i$.
Hence, we only need to prove that, under our hypotheses, for at least one $i$ we get \begin{equation}\label{disequazionedaprovare} \lambda V(0)+\sup_{a\in A, f_i(0,a)\le0}\{-f_i(0,a)\varphi_i'(0)^+-\ell_i(0,a)\}\leq 0. \end{equation} For each $i$ let $(x_\varepsilon^i, i)$ be a sequence of local maximum points for $V_{\varepsilon,\varepsilon,\varepsilon}-\tilde\varphi_i$ with respect to $R_i^\varepsilon$ convergent to $(0, i)$, with $\tilde\varphi_i$ as in \cref{extendedtestfunction}. For each $\varepsilon$, we may assume $x_\varepsilon^i\neq-\varepsilon$ for at least one branch $i$. Indeed, if this is not the case, recalling that controllability implies $V_{\varepsilon,\varepsilon,\varepsilon}(-\varepsilon,i)\le V_{\varepsilon,\varepsilon,\varepsilon}(\varepsilon,i_s)$, we get the contradiction \begin{equation*} \begin{array}{ll} V_{\varepsilon,\varepsilon,\varepsilon}(\varepsilon,1)-\tilde\varphi_1(\varepsilon, 1)<V_{\varepsilon,\varepsilon,\varepsilon}(-\varepsilon,1)-\tilde\varphi_1(-\varepsilon, 1)\le V_{\varepsilon,\varepsilon,\varepsilon}(\varepsilon,2)-\tilde\varphi_2(\varepsilon, 2)< \\ V_{\varepsilon,\varepsilon,\varepsilon}(-\varepsilon,2)-\tilde\varphi_2(-\varepsilon, 2)\le V_{\varepsilon,\varepsilon,\varepsilon}(\varepsilon,3)-\tilde\varphi_3(\varepsilon, 3)<V_{\varepsilon,\varepsilon,\varepsilon}(-\varepsilon,3)-\tilde\varphi_3(-\varepsilon, 3)\le\\ V_{\varepsilon,\varepsilon,\varepsilon}(\varepsilon,1)-\tilde\varphi_1(\varepsilon, 1). \end{array} \end{equation*}
Now, let $i$ be such that $x_\varepsilon^i\neq-\varepsilon$ for every $\varepsilon$ (or at least for a subsequence). If $x_\varepsilon^i>0$ for all $\varepsilon$, in the limit we get \begin{equation*} \lambda V(0)+\sup_{a\in A}\left\{-f_i(0,a)\varphi_i'(0)^+-\ell_i(0,a)\right\}\leq 0, \end{equation*} \noindent and we get the conclusion.
\noindent If $x_\varepsilon^i\in]-\varepsilon,0[$, in the limit we get
\begin{equation*} \lambda V(0)+\sup_{a\in A}\left\{-f_i(0,a)\tilde\varphi_i'(0)^--\ell_i(0,a)\right\}\leq 0, \end{equation*}
\noindent and in particular \begin{equation*} \lambda V(0)+\sup_{a\in A,f_i(0,a)\le0}\left\{-f_i(0,a)\tilde\varphi_i'(0)^--\ell_i(0,a)\right\}\leq 0. \end{equation*}
\noindent Here $\tilde\varphi_i'(0)^-$ denotes the left derivative of $\tilde\varphi_i$ at $x=0$. Now, if we are in the first case (all the right derivatives coincide) then we have $\tilde\varphi_i'(0)^-=\varphi_{i_s}'(0)^+=\varphi_i'(0)^+$, and hence we get \cref{disequazionedaprovare}. If instead $\varphi_1'(0)^+<\varphi_3'(0)^+$, then for $i=1$ we have $i_s=2$ and hence, by our hypotheses, in the inequality above \begin{equation*} -f_i(0,a)\tilde\varphi_i'(0)^-=-f_i(0,a)\varphi_{i_s}'(0)^+\ge-f_i(0,a)\varphi_i'(0)^+, \end{equation*} \noindent and we conclude. The same argument applies if $i=2$ and $i_s=3$. If instead $i=3$, then $\tilde\varphi_3'(0)^-=\varphi_3'(0)^+$ and we conclude.
Finally, if $x_\varepsilon^i=0$, then we still get \begin{equation*} \lambda V(0)+\sup_{a\in A,f_i(0,a)\le0}\left\{-f_i(0,a)\tilde\varphi_i'(0)^--\ell_i(0,a)\right\}\le0, \end{equation*} \noindent and we conclude as before, i.e. studying the two cases as above.\\ Now we suppose that $V-\varphi$ has a local minimum with respect to $TR$ at $(0, i)$ and assume \cref{assumptionextendedtestfunction2}. We have to prove that, for at least one $i$, we have \begin{equation}\label{disequazionedaprovare2} \lambda V(0)+\sup_{a\in A}\{-f_i(0,a)\varphi_i'(0)^+-\ell_i(0,a)\}\ge0. \end{equation}
If for some $i$, as $\varepsilon\to0^+$, $V_{\varepsilon, \varepsilon, \varepsilon}(\cdot,i)$ coincides with the state-constraint value function on $R_i^\varepsilon$, then $V_{\varepsilon, \varepsilon, \varepsilon}(\cdot,i)$ and $V(\cdot, i)$ coincide on $R_i$ and hence $V$ satisfies the same HJB equation as $V_{\varepsilon, \varepsilon, \varepsilon}$, which is \cref{disequazionedaprovare2}. \noindent Hence we suppose that no $V_{\varepsilon, \varepsilon, \varepsilon}(\cdot,i)$ coincides with the corresponding state-constraint value function.
For each $i$ let $(x_\varepsilon^i, i)$ be a sequence of local minimum points for $V_{\varepsilon, \varepsilon, \varepsilon}-\tilde\varphi_i$ with respect to $R_i^\varepsilon$, converging to $(0, i)$. In this case we may assume that, for a fixed $i$, the sequence is such that either $x_\varepsilon^i\neq-\varepsilon$ or $x_\varepsilon^i=-\varepsilon$ but the HJB equation satisfied by $V_{\varepsilon, \varepsilon, \varepsilon}$ has the right sign ($\geq 0$). Indeed, if this is not the case (i.e. $x_\varepsilon^i=-\varepsilon$ and the HJB equation has the wrong sign), we must have $V_{\varepsilon, \varepsilon, \varepsilon}(-\varepsilon,i)=V_{\varepsilon, \varepsilon, \varepsilon}(\varepsilon,i_s)$, and hence we get the following contradiction \[ \begin{array}{ll} V_{\varepsilon, \varepsilon, \varepsilon}(\varepsilon, 1)-\tilde\varphi_1(\varepsilon, 1)>V_{\varepsilon, \varepsilon, \varepsilon}(-\varepsilon,1)-\tilde\varphi_1(-\varepsilon, 1)= V_{\varepsilon, \varepsilon, \varepsilon}(\varepsilon,2)-\tilde\varphi_2(\varepsilon, 2)>\\ V_{\varepsilon, \varepsilon, \varepsilon}(-\varepsilon, 2)-\tilde\varphi_2(-\varepsilon, 2)= V_{\varepsilon, \varepsilon, \varepsilon}(\varepsilon, 3)-\tilde\varphi_3(\varepsilon, 3)>V_{\varepsilon, \varepsilon, \varepsilon}(-\varepsilon, 3)-\tilde\varphi_3(-\varepsilon, 3)\ge\\ V_{\varepsilon, \varepsilon, \varepsilon}(\varepsilon, 1)-\tilde\varphi_1(\varepsilon, 1). \end{array} \] If $x_\varepsilon^i>0$, in the limit we exactly get \[ \lambda V(0)+\sup_{a\in A}\{-f_i(0,a)\varphi_i'(0)^+-\ell_i(0,a)\}\ge0. \] If $x_\varepsilon^i\in[-\varepsilon,0[$, in the limit we get \[ \lambda V(0)+\sup_{a\in A}\{-f_i(0,a)\tilde\varphi_i'(0)^--\ell_i(0,a)\}\ge0. \] If all the right derivatives of the $\varphi_i$ at $x=0$ coincide, then we conclude because $\tilde\varphi_i'(0)^-=\varphi_{i_s}'(0)^+=\varphi_i'(0)^+$.
Otherwise, if $i=3$ then we have $\tilde\varphi_3'(0)^-=\varphi_3'(0)^+$ and we conclude; if $i=1$ or $i=2$, since by hypothesis $V_{\varepsilon, \varepsilon, \varepsilon}(\cdot,i)$ does not coincide with the state-constraint value function, we get that the supremum above is approximated by controls such that $f_i(0,a)\le0$, which means \begin{equation*} -f_i(0,a)\tilde\varphi_i'(0)^-=-f_i(0,a)\varphi_{i_s}'(0)^+\le-f_i(0,a)\varphi_i'(0)^+ \end{equation*} \noindent and we conclude.
If $x_\varepsilon^i=0$, then in the limit we get (still recalling that $V_{\varepsilon, \varepsilon, \varepsilon}(\cdot,i)$ is not the state-constraint value function) \begin{equation*} \lambda V(0)+\sup_{a\in A,f_i(0,a)\le0}\{-f_i(0,a)\tilde\varphi_i'(0)^--\ell_i(0,a)\}\ge0 \end{equation*} \noindent and we conclude as before. \end{proof} Now we want to prove that the value function $V$ in \cref{funzionevaloresogliefisse} is the maximal subsolution of \cref{eq:HJBproblem3}.
Assume that for every $\varepsilon >0$ small enough, the optimal strategy for the approximating $\varepsilon$-problem, starting from any $(x, i)$ with $x \in [-\varepsilon, \varepsilon]$, is to run through infinitely many switches between the three branches (i.e. no state-constraint behaviour is optimal). Let $\mu_1, \mu_2, \mu_3$ be as in \cref{musogliefisse} and let $(a_1, a_2, a_3) \in A_0$ realize the minimum in \cref{s15}, so that \begin{equation}\label{eq:Optimalitycaso3} V(0)=u_{1,2,3}(0)= \frac{1}{\lambda}\lbrace \mu_1 \ell_1(0, a_1) + \mu_2 \ell_2(0, a_2)+ \mu_3 \ell_3(0, a_3) \rbrace. \end{equation}
For every $x \in [0, \varepsilon]$ we define the functions \begin{equation}\label{vbarracasotre} \bar{V}^{\varepsilon}(x, i) = \int_{0}^{\frac{x}{\vert f_i(0, a_i)\vert}} e^{-\lambda t}\ell_i(0, a_i)dt + e^{\frac{-\lambda x}{\vert f_i(0, a_i)\vert}} u_{1, 2, 3}(0), \end{equation} where the upper limit of integration is the time needed to reach the point $0$ in the corresponding branch starting from $x \in [0, \varepsilon]$. For $x \in [0, \varepsilon]$, $V_{\varepsilon, \varepsilon, \varepsilon}(x, i)$ is not larger than $\bar{V}^{\varepsilon}(x, i)$ plus an infinitesimal quantity as $\varepsilon\to 0$. The functions in \cref{vbarracasotre} are differentiable on $[0, \varepsilon]$. A direct computation gives \begin{equation*} \bar{V}^{\varepsilon}(x, i)' = \frac{\ell_i(0, a_i)}{\vert f_i(0, a_i)\vert}e^{\frac{-\lambda x}{\vert f_i(0, a_i)\vert}}-\frac{\lambda e^{\frac{-\lambda x}{\vert f_i(0, a_i)\vert}}}{\vert f_i(0, a_i)\vert}{u}_{1, 2, 3}(0),
\end{equation*} and then for $\varepsilon\to 0$ \begin{align*} \bar{V}^{\varepsilon}(x, 1)' & \longrightarrow \frac{(1-\mu_1)\ell_1(0, a_1)-\mu_2\ell_2(0, a_2)-\mu_3\ell_3(0, a_3)}{\vert f_1(0, a_1)\vert}, \\ \bar{V}^{\varepsilon}(x, 2)' & \longrightarrow \frac{-\mu_1\ell_1(0, a_1)+(1-\mu_2)\ell_2(0, a_2)-\mu_3\ell_3(0, a_3)}{\vert f_2(0, a_2)\vert}, \\ \bar{V}^{\varepsilon}(x, 3)' & \longrightarrow \frac{-\mu_1\ell_1(0, a_1)-\mu_2\ell_2(0, a_2)+(1-\mu_3)\ell_3(0, a_3)}{\vert f_3(0, a_3)\vert}. \end{align*} Moreover, by \cref{vbarracasotre}, we have for every $i=1, 2, 3$ \begin{equation}\label{condsoprasoluzionevbarra} \lambda \bar{V}^{\varepsilon}(x, i)-f_i(x, a_i)\bar{V}^{\varepsilon}(x, i)'-\ell_i(x, a_i)\geq -O(\varepsilon), \end{equation} for $x \in [0, \varepsilon]$. In \cref{condsoprasoluzionevbarra}, when $x=0$ we use the right derivative of $\bar{V}^{\varepsilon}(x, i)$, and $\bar{V}^{\varepsilon}(0, i)= {u}_{1, 2, 3}(0)$ for every $i$. Furthermore, by the differentiability of $\bar{V}^{\varepsilon}(x, i)$ and recalling the sign of $f_i(0, a_i)$, we then get for every $i$ \begin{equation*} \lambda \bar{V}^{\varepsilon}(x, i)+H_i(x, q)\geq -O(\varepsilon), \end{equation*} for every $x \in [0, \varepsilon]$ and for every subgradient $q$ of $\bar{V}^{\varepsilon}(\cdot, i)$ at $x$.\\ We now define on $TR\cap \left(\cup_{i=1}^3 [0, \varepsilon] \times \{i\} \right)$ the function \begin{equation}\label{vbarracasotrederivabile} \bar{V}(x)= \begin{cases} \bar{V}^{\varepsilon}(x, i) & \text{if} \ x \in int(R_i),\\ u_{1, 2, 3}(x) & \text{if} \ x =0, \end{cases} \end{equation} which is of class $C^1([0, \varepsilon])$ and which we extend to the whole $TR$ preserving its differentiability. \begin{theorem} For each bounded continuous subsolution $u$ of \cref{eq:HJBproblem3}, we have $u\leq V$ in $TR$. \end{theorem} \begin{proof} We can assume to be in the setting above, for which \cref{eq:Optimalitycaso3} holds.
Indeed, otherwise on at least one branch $V$ coincides with the corresponding state-constraint value function, which is greater than any subsolution (see Soner \cite{So}); we then also have $u\leq V$ on the other branches. By contradiction, suppose $\sup_{(x, i) \in TR} (u-V)(x, i) > \delta >0$. If \begin{equation*} \exists r>0 \vert \forall \delta' >0\ \exists\ (\overline{x}, i) \in ]r, +\infty[\times \{i\}: \sup_{(x, i)\in TR} \big((u-{V})(x, i)-(u-V)(\overline{x}, i)\big)\leq \delta', \end{equation*} then, by \cref{thm:ishiitre} and standard comparison techniques, we get a contradiction because, in $ ]r, +\infty[\times\{i \}$, $V$ is a supersolution and $u$ is a subsolution of the same HJB equation. Hence we may restrict to the case where $u -V$ attains its maximum at $x=0$. Since $\bar{V}^{\varepsilon}(x, i)$ converges to $V(0)$, with $\bar{V}^{\varepsilon}$ defined in \cref{vbarracasotre}, for small $\varepsilon$ there exists $z^i\in[0, \varepsilon]$ such that \begin{equation}\label{condmassimo3} u(z^i, i)-\bar{V}^{\varepsilon}(z^i, i)=\max_{[0, \varepsilon]\times\{i \}}(u(\cdot, i)-\bar{V}^{\varepsilon}(\cdot, i))>\frac{\delta}{2}>0. \end{equation} Since $u(x, i)$ is a continuous subsolution of \cref{eq:HJBproblem3}, it satisfies \begin{equation} \lambda u(x, i)-f_i(x, a_i)\cdot p-\ell_i(x, a_i)\leq 0 \quad \forall p \in D^{+}u(x, i)\neq \emptyset, \end{equation} where $D^{+}u(x, i)$ is the superdifferential of $u$ at the point $(x, i)$. Now, taking into account \cref{condsoprasoluzionevbarra,condmassimo3}, we have that \begin{equation} p-\bar{V}^{\varepsilon}(x, i)' \leq \frac{-\lambda \delta}{2\vert f_i(x, a_i)\vert} +O(\varepsilon), \end{equation} whence, for $\varepsilon < \frac{1}{2}\left\vert \frac{\lambda \delta}{2\vert f_i(x, a_i)\vert}\right\vert$, we get $p-\bar{V}^{\varepsilon}(x, i)'\leq -\bar{\delta}$ for a suitable $\bar{\delta}>0$ independent of $x$. Hence $u(x, i)-\bar{V}^{\varepsilon}(x, i)$ is decreasing and, taking $\varepsilon$ as above, has its maximum point at $x=0$.
By the previous considerations, $\bar{V}(x)$ in \cref{vbarracasotrederivabile} is an admissible test function and $u-\bar{V}$ has a local maximum point at $x=0$ for suitably small $\varepsilon>0$.
Hence, $u$ being a subsolution, there exists $\bar{i} \in \{1, 2, 3\}$ such that \begin{equation}\label{sottosoluzioneibarra} \lambda u(0)+H_{\bar{i}}\left(0, \left(\bar{V}^{\varepsilon}(0, \bar{i})'\right)\right)\leq 0. \end{equation} Moreover, by \cref{vbarracasotre}, we have \begin{equation}\label{soprasoluzioneibarra} \lambda \bar{V}^{\varepsilon}(0, \bar{i})+H_{\bar{i}}\left(0, \left(\bar{V}^{\varepsilon}(0, \bar{i})'\right)\right)\geq -O(\varepsilon). \end{equation} Subtracting \cref{soprasoluzioneibarra} from \cref{sottosoluzioneibarra} we contradict \cref{condmassimo3}, and then, for $\varepsilon\to 0$, $u\leq V$ in $TR$. \end{proof} \subsection{Non-uniform switching thresholds} \label{subsec:nonuniform} In this section we suppose that the thresholds of the three-thermostatic optimal control problem are not the same for all $R_{\varepsilon_{i}}$. This implies that the time spent in a single branch $R_{\varepsilon_{i}}$ to reach the corresponding threshold depends on the value of $\varepsilon_i$. Accordingly, the convexification parameters $\bar{\mu}_1, \bar{\mu}_2, \bar{\mu}_3$ are such that if, in the limit $(\varepsilon_1, \varepsilon_2, \varepsilon_3)\to (0^+, 0^+, 0^+)$, the optimal behaviour is to switch only between two branches, $R_i$ and $R_j$ with $i, j\in \{1, 2, 3\}$, $i\neq j$, then $\bar{\mu}_i+\bar{\mu}_j=1$. If instead the optimal behaviour is to switch among all three branches $R_i$, then $\bar{\mu}_i=\mu_i$ as in \cref{musogliefisse}. To identify the limit optimal control problem when $(\varepsilon_1, \varepsilon_2, \varepsilon_3) \to (0^+, 0^+, 0^+)$ we define the controlled dynamics. Using the same notation as in the previous section, if $(x, i) \in TR$ with $x\neq 0$ then the dynamics is the usual $f_i(x, a_i)$ with $a_i \in A$.
If instead $x=0$, being $(0, i)=(0,j)$ for $i, j \in \{1, 2, 3\}, i\neq j$, we can either choose any dynamics that keeps us inside a single branch $R_i$, or we may rest at zero using any combination $ \sum_{i=1}^{3} \bar{\mu}_i f_i(0,a_i)$ with $f_i(0, a_i)$ and $\bar{\mu}_i$ as before. In detail, the set of controls at the junction is $A(0)=\overline{A}\cup \widetilde{A}$ with \begin{align*} \overline{A} & = \{(a_1, a_2, a_3, \sigma, \bar{\mu}_1, \bar{\mu}_2, \bar{\mu}_3 ) \in A^3 \times \{12, 13, 23, 123 \} \times [0, 1]^3 \vert \\ & \sigma=ij \Rightarrow \bar{\mu}_i+\bar{\mu}_j=1, f_i(0, a_i)\leq 0;\\ & \sigma=123 \Rightarrow \bar{\mu}_i=\mu_i, f_i(0, a_i)\leq 0 \ \text{with at most one equal to} \ 0 \}, \\ \widetilde{A} &= \left\{(a, i) \in A \times \{1, 2,3\} \vert\ f_i(0, a)\geq 0\right\}. \end{align*} In $\widetilde{A}$ the index $i$ is at our disposal, while in $\overline{A}$ the notation $ij$ means that the switching is only between $R_i$ and $R_j$ (while $123$ means that the switching is performed among all three branches).\\ Then, as in the previous section, calling $\hat{a}$ the generic element of $A(0)$, we define \begin{equation*} f_0(0, \hat a)= \begin{cases} f_i(0, a) & \text{if} \ \hat{a} \in \widetilde{A}, \\ 0 & \text{if}\ \hat{a} \in \overline{A}.
\end{cases} \end{equation*} With the same arguments, if $(x, i) \in TR$ and $x\neq 0$ then the running cost is $\ell_i(x, a_i)$ with $a_i\in A$, otherwise we define \begin{equation*} \ell_0(0, \hat a)= \begin{cases} \ell_i(0, a) & \text{if} \ \hat{a} \in \widetilde{A}, \\ \bar{\mu}_1 \ell_1(0, a_1) + \bar{\mu}_2 \ell_2(0, a_2) & \text{if} \ \sigma=12 \quad \text{and}\ \hat{a} \in \overline{A},\\ \bar{\mu}_1 \ell_1(0, a_1) + \bar{\mu}_3 \ell_3(0, a_3) & \text{if} \ \sigma=13 \quad \text{and}\ \hat{a} \in \overline{A}, \\ \bar{\mu}_2 \ell_2(0, a_2) + \bar{\mu}_3 \ell_3(0, a_3) & \text{if} \ \sigma=23 \quad \text{and}\ \hat{a} \in \overline{A} \\ \mu_1 \ell_1(0, a_1)+\mu_2\ell_2(0, a_2)+ \mu_3\ell_3(0, a_3) & \text{if} \ \sigma=123 \quad \text{and}\ \hat{a} \in \overline{A}. \end{cases} \end{equation*}
The quadruples $f = (f_1, f_2, f_3, f_0)$ and $\ell = (\ell_1, \ell_2, \ell_3, \ell_0)$ then define a threefold junction optimal control problem, different from the one of the previous section. We still denote by ${\cal A}_{(x_0, i_0)}$ the nonempty set of measurable controls for which there exists a unique admissible trajectory, and consider the cost functional ($\lambda >0$) \begin{equation*} J(x_0, i_0, \alpha) = \int_{0}^{+\infty} e^{-\lambda t} \ell(x(t), i(t), \alpha(t)) dt \end{equation*} where $\ell$ is the running cost described above. The corresponding value function is \begin{equation}\label{funzionevaloresogliemobili} V^*(x_0, i_0)= \inf_{\alpha \in {\cal A}_{(x_0, i_0)}}J(x_0, i_0, \alpha). \end{equation} Observe that if we stay at $x=0$ for all times using controls in $\overline{A}$, the cost is \begin{equation*} u_{0}(0) = \frac{1}{\lambda}\ \min_{\overline A}\ \sum_{i=1}^{3} \bar{\mu}_i \ell_{i}(0, a_i) = \min\left\{u_{1, 2}(0), u_{1, 3}(0), u_{2, 3}(0), u_{1, 2, 3}(0)\right\} \end{equation*} where $u_{1, 2}(0)$ is the minimum over ${\overline A}$ of $\frac{1}{\lambda}\ell_0$ when $\sigma=12$, $u_{1, 3}(0)$ is the minimum over ${\overline A}$ of $\frac{1}{\lambda}\ell_0$ when $\sigma=13$, and similarly for the others.
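As a purely illustrative arithmetic check (the numerical values below are invented and not derived from the data of the problem), suppose the four candidate junction costs have been computed as $u_{1, 2}(0)=3$, $u_{1, 3}(0)=5/2$, $u_{2, 3}(0)=4$ and $u_{1, 2, 3}(0)=14/5$. Then \begin{equation*} u_0(0)=\min\left\{3, \tfrac{5}{2}, 4, \tfrac{14}{5}\right\}=\tfrac{5}{2}, \end{equation*} i.e. the cheapest way to rest at the junction is to switch between the branches $R_1$ and $R_3$ only ($\sigma=13$).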
\begin{theorem}\label{limitesogliemobili} Assume \cref{eq:Lip,eq:Controllability,eq:LLip}. The value function $V^*$ in \cref{funzionevaloresogliemobili}, characterized by \cref{T1} but with $u_0(0)$ in place of $u_{1, 2, 3}(0)$, namely \begin{equation} \label{defVstarinzero} V^*(0) = \min \left\{u_{0}(0), V_{sc(1)}(0), V_{sc(2)}(0), V_{sc(3)}(0) \right\}, \end{equation} satisfies \begin{equation} \label{s11} V^{*}(x, i) = \liminf_{(\varepsilon_1, \varepsilon_2, \varepsilon_3) \rightarrow (0^{+}, 0^{+}, 0^{+})} V_{\varepsilon_1, \varepsilon_2, \varepsilon_3}(x, i) \ \forall \ (x, i) \in R_i, \ i= 1, 2, 3, \end{equation} where $V_{\varepsilon_1, \varepsilon_2, \varepsilon_3}$ is the value function of the approximating thermostatic problem \cref{RO} with non-uniform thresholds $(\varepsilon_1, \varepsilon_2, \varepsilon_3)$, and the convergence is uniform. Moreover, when $x=0$, the limit is independent of $i=1, 2, 3$. \end{theorem} \begin{proof}
We prove \cref{s11} at $x=0$. The independence from $i$ in \cref{s11} comes from the controllability assumption \cref{eq:Controllability}: $|V_{\varepsilon_1, \varepsilon_2, \varepsilon_3}(0, i) - V_{\varepsilon_1, \varepsilon_2, \varepsilon_3}(0, j)|$\ is an infinitesimal of the same order as $\max \left\{\varepsilon_1, \varepsilon_2, \varepsilon_3\right\}$. In the sequel we omit the index $i$ in the expression $V_{\varepsilon_1, \varepsilon_2, \varepsilon_3}(0, i)$.\\ By contradiction, suppose $V^{*}(0) < \liminf V_{\varepsilon_1, \varepsilon_2, \varepsilon_3}(0)$. By \cref{eq:Controllability}, for every $\varepsilon_1, \varepsilon_2, \varepsilon_3 > 0$ we have $V_{\varepsilon_1, \varepsilon_2, \varepsilon_3}(0) \leq V_{sc(i)}(0)$ for every $i=1, 2, 3$. Hence $V^{*}(0) = u_{0}(0)$. Let $(a_1, a_2, a_3, \sigma, \bar{\mu}_1, \bar{\mu}_2, \bar{\mu}_3) \in \overline{A}$ realize the minimum in the definition of $u_{0}(0)$. We analyze some of the possible cases, the others being similar.
1) $f_1(0,a_1), f_2(0, a_2), f_3(0, a_3) < 0$ and $\sigma=123$. Taking $(\varepsilon_1, \varepsilon_2, \varepsilon_3) = (\varepsilon, \varepsilon, \varepsilon)$ and using a suitable switching control between those constant controls, $ V_{\varepsilon, \varepsilon, \varepsilon}(0)$ is not larger than $u_{1, 2, 3}(0)$ plus an infinitesimal quantity as $\varepsilon\to 0$, which is a contradiction.
2) $f_1(0,a_1), f_2(0, a_2), f_3(0, a_3) < 0$ and $\sigma=23$. Here, taking the triple \\ $(\varepsilon_1, \varepsilon_2, \varepsilon_3) = (\varepsilon^2, \varepsilon, \varepsilon)$, $V_{\varepsilon^2, \varepsilon, \varepsilon}(0)$ is not larger than $u_{2, 3}(0)$ plus an infinitesimal quantity as $\varepsilon\to 0$, which is a contradiction.
3) $f_1(0,a_1) = 0, f_2(0, a_2), f_3(0, a_3) < 0$. In this setting we can study two sub-cases according to the value of $\sigma$.
3.1) If $\sigma=123$, taking $(\varepsilon_1, \varepsilon_2, \varepsilon_3) = (\varepsilon, \varepsilon, \varepsilon)$, we arrive in $R_1$ and stop there. Therefore $u_{1, 2, 3}(0)=\frac{1}{\lambda}\ell_1(0, a_1)$ cannot be lower than $V_{sc(1)}(0)$, which is a contradiction.
3.2) If $\sigma=23$ we take the triple $(\varepsilon_1, \varepsilon_2, \varepsilon_3) = (\varepsilon^2, \varepsilon, \varepsilon)$ and argue as in case 2).\\ We remark that if $\sigma=12$ or $13$ then considering $ (\varepsilon, \varepsilon, \varepsilon^2)$ and $ (\varepsilon, \varepsilon^2, \varepsilon)$ respectively we can conclude as in 3.1).
4) $f_1(0,a_1)=f_2(0, a_2)=0$, $f_3(0, a_3)<0$. Also in this case we have different sub-cases according to the value of $\sigma$.
4.1) If $\sigma=123$ we take the triple $(\varepsilon, \varepsilon, \varepsilon)$ and conclude using Remark \ref{Remarkcontrolli}, since $u_{1, 2, 3}(0)$ cannot be lower than a state-constraint value function. As before, this gives a contradiction.
4.2) If $\sigma=23$, taking the triple $(\varepsilon^2, \varepsilon, \varepsilon)$ we get $u_{2, 3}(0)= \frac{1}{\lambda}\ell_2(0, a_2)$, which is not lower than $V_{sc(2)}(0)$, a contradiction.
Now we assume $\liminf V_{\varepsilon_1, \varepsilon_2, \varepsilon_3}(0) < V^{*}(0)$. Let $\delta > 0$ be such that, for arbitrarily small, suitably chosen $(\varepsilon_1, \varepsilon_2, \varepsilon_3)$, we have $V_{\varepsilon_1, \varepsilon_2, \varepsilon_3}(0) + \delta < V^{*}(0)$. A measurable control $\alpha$ which almost realizes the optimum (within some $\beta>0$) for $V_{\varepsilon_1, \varepsilon_2, \varepsilon_3}(0)$ must be such that there are infinitely many switches between the branches $R_{\varepsilon_1}, R_{\varepsilon_2}, R_{\varepsilon_3}$. Indeed, if this is not the case then, for at least one branch, say $R_{\varepsilon_i}$, the trajectory eventually remains inside it. Hence, for small $(\varepsilon_1, \varepsilon_2, \varepsilon_3)$, $V_{\varepsilon_1, \varepsilon_2, \varepsilon_3}(0)$ is almost equal to $V_{sc(i)}(0)$, which is a contradiction. As in \cref{limitsogliefisse}, we can restrict to piecewise constant controls, which we call again $\alpha$. To prove that $V_{\varepsilon_1, \varepsilon_2, \varepsilon_3}(0)$ cannot be less than $V^{*}(0) - \delta$ we proceed as in \cref{limitsogliefisse}, considering $O(\max\{\varepsilon_1, \varepsilon_2, \varepsilon_3\})$ and $u_0(0)$ instead of $O(\varepsilon)$ and $u_{1, 2, 3}(0)$ respectively.\\ In conclusion we have $\liminf V_{\varepsilon_1, \varepsilon_2, \varepsilon_3}(0) = V^{*}(0)$. The equations solved by $V_{\varepsilon_1, \varepsilon_2, \varepsilon_3}$ and by $V^{*}$ (\cref{RO,s7,s8} suitably modified) are the same in the interior of $R_i$, and the boundary datum converges to $V^{*}(0)$. Then, representing the solutions as the value functions of the corresponding optimal control problems, we get \cref{s11} and the uniform convergence. \end{proof} \begin{rem} As we show in the proof of \cref{limitesogliemobili}, we can restrict ourselves to the thresholds $
(\varepsilon, \varepsilon, \varepsilon ), (\varepsilon, \varepsilon, \varepsilon^2 ), (\varepsilon, \varepsilon^2, \varepsilon ), (\varepsilon^2, \varepsilon, \varepsilon )$.
Hence, given dynamics $f_1, f_2, f_3$ and running costs $\ell_1, \ell_2, \ell_3$ satisfying the controllability assumptions, there exists a unique choice among the previous triples such that \begin{equation*} V^*(x, i)=\liminf_{(\varepsilon_1, \varepsilon_2, \varepsilon_3)\rightarrow (0, 0, 0)} V_{\varepsilon_1, \varepsilon_2, \varepsilon_3}(x, i)= \lim_{(\cdot, \cdot, \cdot)\rightarrow (0, 0, 0)}V_{(\cdot, \cdot, \cdot)}(x, i) \ \forall\ (x, i) \in R_i. \end{equation*} We do not consider triples of the kind $(c_1\varepsilon, c_2\varepsilon, c_3\varepsilon)$, $c_1, c_2, c_3 \in {\NZQ R}$, because they do not bring new possible optimal behaviours. Similarly, we do not consider triples such as $(\varepsilon^2, \varepsilon^2, \varepsilon)$ because in the limit this would mean staying at $x = 0$ without using the balance of the dynamics, which is physically meaningless. \end{rem} \begin{rem} When the optimal strategy is to switch among all branches we have $\ell_i(0, a_i) = \ell_j(0, a_j)\ \forall i, j \in \{1, 2, 3\}, i\neq j$, and $V^*(x, i)=V(x, i)$, where $V$ is the value function \cref{funzionevaloresogliefisse} of the threefold junction problem with uniform thresholds. \end{rem} We introduce test functions $\psi: TR\to {\NZQ R}$ such that $\psi \in C^1(TR)$ and on each branch $\psi_i: R_i\to {\NZQ R}$ satisfies $\psi_i(x, i)=\psi_j(x, j)$ for every $i,j \in \{1, 2, 3\}, i\neq j$ when $x=0$.
\begin{definition} \label{def:visco} A continuous function $u: TR\to {\NZQ R}$ is a viscosity subsolution of \cref{eq:HJBproblem3'} if for any $(x, i) \in TR$, for any test function $\psi$ as above such that $u-\psi$ has a local maximum at $(x, i)$, then \begin{equation} \begin{aligned} & \lambda u(x, i) + H_i(x, \psi'_i(x, i))\leq 0 & & (x, i)\in int(R_i), \\ & \min\left\{\lambda u(0, i)+H_i(0, \psi'_i(0, i)),\ i=1, 2, 3 \right\}\leq 0 & & x=0; \end{aligned} \end{equation} A continuous function $u: TR\to {\NZQ R}$ is a viscosity supersolution of \cref{eq:HJBproblem3'} if for any $(x, i) \in TR$, any test function $\psi$ as above such that $u-\psi$ has a local minimum at $(x, i)$, then \begin{equation} \begin{aligned} & \lambda u(x, i) + H_i(x, \psi'_i(x, i))\geq 0 & & x\in int(R_i), \\ & \max\left\{\lambda u(0, i)+H_i(0, \psi'_i(0, i)),\ i=1, 2, 3 \right\}\geq 0 & & x=0. \end{aligned} \end{equation} In particular, if $x=0$ then the local maximum/minimum may be considered with respect to two of the three branches only. \end{definition} Note the difference with \cref{defsubsup} where, for $x = 0$, the max/min is with respect to all three branches. \begin{theorem} Assume \cref{eq:Lip,eq:Controllability,eq:LLip}. The function $V^{*}$ is a viscosity solution and the maximal subsolution, in the sense of \cref{def:visco}, of the HJB problem \begin{equation}\label{eq:HJBproblem3'} \begin{cases} \lambda V + H_{1}(x, V') = 0 \ \ \text{in} \ int(R_1), \\ \lambda V + H_{2}(x, V') = 0 \ \ \text{in} \ int(R_2), \\ \lambda V + H_{3}(x, V') = 0 \ \ \text{in} \ int(R_3), \\ \min \left\{\lambda V + H_1, \lambda V + H_{2}, \lambda V + H_{3}\right\} \leq 0 \ \text{on} \ x=0, \\ \max \left\{\lambda V + H_1, \lambda V + H_{2}, \lambda V + H_{3}\right\} \geq 0 \ \text{on} \ x=0. \end{cases} \end{equation}
\end{theorem} \begin{proof} By \cref{Approx3,limitesogliemobili}, $V^*$ satisfies the first three equations of \cref{eq:HJBproblem3'}. For the remaining equations suppose, for example, that the optimal strategy is to switch only between the two branches $R_1$ and $R_2$. \\ Then $V^*= \lim_{(\varepsilon, \varepsilon, \varepsilon^2) \to (0, 0, 0)}V_{\varepsilon, \varepsilon, \varepsilon^2}=V_{1,2}$. If $V^*-\psi$ attains its maximum or minimum at $x=0$ with respect to $R_1\cup R_2$, then, by the twofold junction problem, $V^*$ is a viscosity solution and the maximal subsolution of \begin{equation*} \begin{cases} \lambda V + H_{1}(x, V') = 0 \ \ \text{in} \ \ int(R_1),\\ \lambda V + H_{2}(x, V') = 0 \ \ \text{in} \ \ int(R_2), \\ \min \left\{\lambda V + H_1, \lambda V + H_{2}\right\} \leq 0 \ \text{on} \ x=0, \\ \max \left\{\lambda V + H_1, \lambda V + H_{2}\right\} \geq 0 \ \text{on} \ x=0. \end{cases} \end{equation*} If instead $V^*-\psi$ has a maximum point at $x=0$ with respect to $R_1\cup R_3$, we prove that $\min \left\{\lambda V + H_1, \lambda V + H_{2}\right\}$ is still less than or equal to zero. We consider two cases:
1) If the optimal behavior consists in reaching $R_2$ and staying there, namely $V^*(x, 2)= V_{sc(2)}(x)$, and supposing that the cost to pay in $R_1$ to reach the junction is lower than the one in $R_3$, then $V^*=V_{1,2}$ on $R_1\cup R_2$. Now, since (by assumption) $V^*-\psi$ has a maximum point at $x=0$ locally with respect to the branch $R_3$, we have $\psi_3(x, 3)\geq V^*(x, 3)$ for $x$ near zero. The optimality of $V_{sc(2)}$ implies that $V^*(\cdot, 3)\geq V_{sc(2)}(\cdot)=V^*(\cdot, 2)$ and hence $\psi_3(\cdot, 3)\geq V^*(\cdot, 2)$. Then, gluing $\psi_3$ over $R_2$, we obtain that $V^*-\psi_3$ has a maximum point at $x=0$ locally with respect to $R_2$. Hence, $\min \left\{\lambda V + H_1, \lambda V + H_{2}\right\}\leq 0$.
2) If the optimal strategy is to switch between $R_1$ and $R_2$ and the maximum point at $x=0$ is still with respect to $R_1\cup R_3$, we conclude since $\psi_3(\cdot, 3)\geq V^*(\cdot, 2)$.
If $V^*=V_{1,2}$ and $V^*-\psi$ has a maximum point at $x=0$ with respect to $R_2\cup R_3$, a similar argument as before shows that $\min \left\{\lambda V + H_1, \lambda V + H_{2}\right\} \leq 0$.\\ In conclusion, we have shown that the following condition holds: there exists a pair of indices $(\bar{i}, \bar{j})$, fixed a priori, such that $V^*=V_{\bar{i}, \bar{j}}$ on $R_{\bar{i}}\cup R_{\bar{j}}$ and such that, for all $\psi \in C^1(TR)$ for which $V^*-\psi$ has a maximum point at $x=0$ with respect to any pair of edges, $\min\lbrace \lambda V + H_{\bar{i}}, \lambda V + H_{\bar{j}} \rbrace \leq 0$. From the latter condition it follows that $\min\lbrace \lambda V + H_1, \lambda V + H_{2}, \lambda V + H_{3} \rbrace \leq 0$. Proceeding as before for the fifth equation of \cref{eq:HJBproblem3'}, we conclude that $V^*$ is a viscosity solution of \cref{eq:HJBproblem3'}.
Now, let $u$ be a continuous subsolution of \cref{eq:HJBproblem3'} satisfying the above condition with the same pair of indices $(\bar{i}, \bar{j})$, which we suppose to be $(1, 2)$. Then \begin{equation}\label{cond su u} V^*\geq u \ \ \text{on} \ \ R_1\cup R_2\ \Longrightarrow\ V^*(0)\geq u(0). \end{equation} Furthermore, $V^*$ is a supersolution of the third equation of \cref{eq:HJBproblem3'} and $u$ is a subsolution of the same equation; hence, by \cref{cond su u} and comparison, $V^*\geq u$ on $R_3$. We conclude that $V^*\geq u$ on $TR$, and hence $V^*$ is the maximal subsolution of \cref{eq:HJBproblem3'}. \end{proof} Similarly, $V^*= \lim_{(\varepsilon^2, \varepsilon, \varepsilon) \to (0, 0, 0)}V_{\varepsilon^2, \varepsilon, \varepsilon}=V_{2, 3}$ and $V^*= \lim_{(\varepsilon, \varepsilon^2, \varepsilon) \to (0, 0, 0)}V_{\varepsilon, \varepsilon^2, \varepsilon}=V_{1, 3}$. \section*{Appendix}
{\it Proof of Proposition 4.1.}\\
As in \cref{th:sistemapprox} in \cite{Ba}, by virtue of the total controllability and Dynamic Programming (at least for small $\varepsilon$), we have
\begin{multline}\label{exittimevaluefunction}
V_{\varepsilon_1, \varepsilon_2, \varepsilon_3}(x_0, i_0)= \inf_{\alpha \in {\cal A}}\Biggl\{ \int_{0}^{t_{(x_0, i_0)}(\alpha)}e^{-\lambda s}\ell_{i_0}(x(s), \alpha(s))ds \\ + e^{-\lambda t_{(x_0, i_0)}(\alpha)}V_{\varepsilon_1, \varepsilon_2, \varepsilon_3}(-\varepsilon_{i_{0^+}}, i_{0^+}) \Biggr\}.
\end{multline}
Namely, in each connected component $R_{\varepsilon_i}$, $V_{\varepsilon_1, \varepsilon_2, \varepsilon_3}$
is the value function of the exit time problem from $R_{\varepsilon_i}$ with exit cost $V_{\varepsilon_1, \varepsilon_2, \varepsilon_3}(-\varepsilon_{i_{0^+}}, i_{0^+})$. Here $t_{(x_0, i_0)}(\alpha)$ is the first switching time, $i_{0^+}$ is the next value of the index $i_0$ and $\varepsilon_{i_{0^+}}$ the corresponding threshold. We get that the value function $V_{\varepsilon_1, \varepsilon_2, \varepsilon_3}$ is bounded and uniformly continuous on each of the three connected components of $ {\cal O}=R_{\varepsilon_1} \cup R_{\varepsilon_2} \cup R_{\varepsilon_3}$. Putting all these considerations together, by a classical result on boundary conditions in the viscosity sense it follows that
$V_{\varepsilon_1, \varepsilon_2, \varepsilon_3}$ is a viscosity solution of the system \eqref{RO} on each branch, with the boundary conditions understood in the viscosity sense. Regarding uniqueness, we prove that every solution of \eqref{RO} is a fixed point of a contraction map ${\cal G}: BC({\cal O})\to BC({\cal O})$, where $BC({\cal O})= BC(R_{\varepsilon_1})\times BC(R_{\varepsilon_2})\times BC(R_{\varepsilon_3})$ is the space of bounded and continuous functions on ${\cal O}$. Hence, uniqueness follows by a completeness argument. We sketch how the contraction map ${\cal G}$ is constructed.
For every $c\geq 0$ and every $i_0 \in \lbrace 1, 2, 3\rbrace$, let $z_c^{(i_0)}$ be the solution of the corresponding Hamilton-Jacobi equation \eqref{RO}$_{i_0}$ with boundary datum $c$. Hence, for each $(\xi, \eta, \sigma)\in BC({\cal O})$ we define
\begin{equation*}
{\cal G}(\xi, \eta, \sigma):= \biggl(z_{\bigl( z_{\xi(-\varepsilon_2)}^{(2)}(\varepsilon_2) \bigr)}^{(1)}(\cdot), z_{\bigl( z_{\eta(-\varepsilon_3)}^{(3)}(\varepsilon_3) \bigr)}^{(2)}(\cdot), z_{\bigl( z_{\sigma(-\varepsilon_1)}^{(1)}(\varepsilon_1) \bigr)}^{(3)}(\cdot)\biggr).
\end{equation*}
This means that, for instance, the first component of ${\cal G}(\xi, \eta, \sigma)$ is the solution on the branch $R_{\varepsilon_1}$ with boundary datum equal to the value at $\varepsilon_2$ of the solution on the branch $R_{\varepsilon_2}$ with boundary datum equal to $\xi(-\varepsilon_2)$. Then, for every $(\xi, \eta, \sigma), (\widehat{\xi}, \widehat{\eta}, \widehat{\sigma}) \in BC({\cal O})$, the first component of ${\cal G}$ satisfies
\begin{align*}
& \Vert ({\cal G}(\xi, \eta, \sigma))_1 - ({\cal G}(\widehat{\xi}, \widehat{\eta}, \widehat{\sigma}))_1\Vert_{\infty} \leq \vert z_{\xi(-\varepsilon_2)}^{(2)}(\varepsilon_2) - z_{\widehat{\xi}(-\varepsilon_2)}^{(2)}(\varepsilon_2)\vert \\
& \leq e^{\frac{-\lambda (2\varepsilon_2)}{M}}\vert \xi(-\varepsilon_2)- \widehat{\xi}(-\varepsilon_2)\vert \leq e^{\frac{-\lambda (2\varepsilon_2)}{M}} \Vert \xi - \widehat{\xi}\Vert_{\infty},
\end{align*}
where $M$ is the bound on the dynamics $f_i$. A similar inequality holds for the other components of ${\cal G}$. Since $\lambda>0$ we get the conclusion.
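The origin of the contraction factor can be seen through a short heuristic computation (a sketch, not part of the proof, assuming the comparison principle for the exit-time problem on a single branch): since the dynamics are bounded by $M$, every admissible trajectory needs a time of at least $2\varepsilon_2/M$ to traverse the branch from $-\varepsilon_2$ to the exit at $\varepsilon_2$, so the exit cost is discounted at least by $e^{-\lambda(2\varepsilon_2)/M}$:

```latex
\begin{equation*}
\bigl| z_{c}^{(2)}(\varepsilon_2) - z_{\widehat{c}}^{(2)}(\varepsilon_2) \bigr|
\;\leq\; \sup_{\alpha \in {\cal A}} e^{-\lambda\, t(\alpha)}\, \vert c - \widehat{c} \vert
\;\leq\; e^{-\frac{\lambda (2\varepsilon_2)}{M}}\, \vert c - \widehat{c} \vert,
\end{equation*}
```

where $t(\alpha)\geq 2\varepsilon_2/M$ denotes the exit time of the trajectory driven by the control $\alpha$.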
\end{document}
\begin{document}
\title{Two theorems on point-flat incidences.} \author{Ben Lund \footnote{Princeton University. Work on this project was supported by NSF grants DMS-1802787 and DMS-1344994.}}
\maketitle
\begin{abstract} We improve the theorem of Beck giving a lower bound on the number of $k$-flats spanned by a set of points in real space, and improve the bound of Elekes and T\'oth on the number of incidences between points and $k$-flats in real space. \end{abstract}
\section{Introduction}
Let $P$ be a set of $n$ points in $d$-dimensional real affine space. For any integers $0 \leq k \leq d$ and $1 \leq r \leq n$, a $k$-flat ($k$-dimensional affine subspace) $\Gamma$ is $r$-rich if it contains at least $r$ points of $P$. A $k$-flat $\Gamma$ is spanned by $P$ if $\Gamma$ contains $k+1$ points of $P$ that are not contained in a $(k-1)$-flat. This paper gives new results on two well-studied questions: \begin{enumerate} \item How many $r$-rich $k$-flats can be determined by $P$? \item How few $k$-flats can be spanned by $P$? \end{enumerate}
A fundamental result in combinatorial geometry is the Szemer\'edi-Trotter theorem \cite{szemeredi1983extremal}, which gives an upper bound on the number of $r$-rich lines determined by a set of points.
Throughout this paper, $n$ is used for a positive integer, and $P$ is used for a set of $n$ points in real affine space. None of the theorems depend on the dimension of the space.
\begin{theorem}[Szemer\'edi, Trotter]\label{th:sz-t} For any integer $r > 1$, the number of $r$-rich lines determined by $P$ is bounded above by $O(n^2 r^{-3} + n r^{-1})$. \end{theorem}
An alternate, and perhaps better known, formulation of Theorem \ref{th:sz-t} is that the number of incidences between sets of $n$ points and $m$ lines in real space is bounded above by $O(n^{2/3}m^{2/3} + m + n)$. These formulations are equivalent.
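One direction of this equivalence is a short computation (sketched here with unoptimized constants). Writing $m_r$ for the number of $r$-rich lines, each such line carries at least $r$ incidences with $P$, so the incidence bound gives

```latex
\begin{equation*}
r\, m_r \;\leq\; C\bigl( n^{2/3} m_r^{2/3} + m_r + n \bigr).
\end{equation*}
```

For $r \geq 2C$ the term $C m_r$ can be absorbed into the left-hand side, and comparing $\tfrac12 r m_r$ with each of the two remaining terms yields $m_r = O(n^2 r^{-3} + n r^{-1})$; for $1 < r < 2C$ the trivial bound $m_r \leq \binom{n}{2}$ is already $O(n^2 r^{-3})$.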
\begin{comment} Note that there is an equivalent formulation of Theorem \ref{th:sz-t} in terms of the maximum possible number of incidences between a fixed set of points and lines, where an incidence is a pair of a point and a line such that the point is contained in the line, and bounds of the this type are often called incidence bounds. \end{comment}
\begin{comment} Szemer\'edi-Trotter type incidence bounds play a major role in combinatorial geometry, and have numerous applications in other areas of mathematics and computer science. The surveys and books of Dvir \cite{dvir2012incidence}, Guth \cite{guth2016polynomial}, Tao \cite{tao2013algebraic}, Tao and Vu \cite{tao2006additive}, Elekes \cite{elekes2001sums}, Pach and Sharir \cite{pach2004geometric}, and Matou\v{s}ek \cite{matousek2002lectures}, are all good resources on Szemer\'edi-Trotter type bounds and their applications. \end{comment}
The first result of this paper is an upper bound on $r$-rich $k$-flats, for $k>1$. In order to prove a nontrivial bound in this context, we need to place some restriction on the points or the flats. To illustrate this point, let $L$ be a set of planes that each contain a fixed line, and $P$ a set of $n$ points contained in the same line. Then, each plane of $L$ is $r$-rich for all $r \leq n$.
Several point-flat incidence bounds have been proved, using a variety of nondegeneracy conditions. Initial work on point-plane incidences was by Edelsbrunner, Guibas, and Sharir \cite{edelsbrunner1990complexity}, who considered point sets with no three collinear points, and also incidences between planes and vertices of their arrangement. Agarwal and Aronov \cite{agarwal1992counting} gave an asymptotically tight bound on the number of incidences between vertices and flats of an arrangement of $k$-flats; this bound was generalized by the author, Purdy, and Smith \cite{lund2011bichromatic} to incidences between vertices of an arrangement of flats and a subset of the flats of the arrangement. Brass and Knauer \cite{brass2003counting}, as well as Apfelbaum and Sharir \cite{apfelbaum2007large}, showed that a point-flat incidence graph with many edges must contain a large complete bipartite graph. Sharir and Solomon \cite{sharir2016incidences} obtain a stronger bound than that of Apfelbaum and Sharir for point-plane incidences by adding the condition that the points are contained in an algebraic variety of bounded degree.
In this paper, we use the nondegeneracy assumption introduced by Elekes and T\'oth \cite{elekes2005incidences}. For any real $\alpha \in (0,1)$, a $k$-flat $\Gamma$ is {\em $\alpha$-nondegenerate} if at most $\alpha |P \cap \Gamma|$ points of $P$ lie on any $(k-1)$-flat contained in $\Gamma$. Using this definition, Elekes and T\'oth proved the following Szemer\'edi-Trotter type theorem for points and planes. \begin{theorem}[Elekes, T\'oth]\label{th:et-2} For any real $\alpha$ with $0 < \alpha < 1$ and integer $r > 2$, the number of $\alpha$-nondegenerate, $r$-rich planes determined by $P$ is bounded above by $O_\alpha(n^3r^{-4} + n^2 r^{-2})$. \end{theorem} The subscript in the $O$-notation indicates that the implied constant depends on those parameters listed in the subscript.
Elekes and T\'oth generalized Theorem \ref{th:et-2} to higher dimensions in the following, weaker form. \begin{theorem}[Elekes, T\'oth]\label{th:et-k} For each $k > 2$ there is a positive real constant $\beta_k$ such that, for any real $\alpha$ with $0 < \alpha < \beta_k$ and integer $r > k$, the number of $\alpha$-nondegenerate, $r$-rich $k$-flats determined by $P$ is bounded above by $O_{\alpha, k}(n^{k+1} r^{-k-2} + n^k r^{-k})$. \end{theorem}
Elekes and T\'oth remarked that their argument cannot be improved to replace the constants $\beta_k$ with $1$ for $k>2$ in Theorem \ref{th:et-k}. Afshani, Berglin, van Duijn, and Nielsen \cite{afshani2016applications} used Theorem \ref{th:et-2} for an algorithm to determine the minimum number of $2$-dimensional planes needed to cover a set of points in $\mathbb{R}^3$, with a running time that depends on the required number of planes. They mention that one of the obstacles to generalizing their algorithm to higher dimensions is the lack of a full generalization of Theorem \ref{th:et-2}.
The contribution of this paper is the following strong generalization of Theorem \ref{th:et-2}, which removes this limitation of Theorem \ref{th:et-k}.
\begin{theorem}\label{th:et-gen} For each integer $k \geq 1$, real $\alpha$ with $0 < \alpha < 1$, and integer $r > k$, the number of $\alpha$-nondegenerate, $r$-rich $k$-flats is bounded above by $O_{\alpha, k}(n^{k+1} r^{-k-2} + n^k r^{-k})$. \end{theorem} The case $k=1$ of this theorem is Theorem \ref{th:sz-t}, and the case $k=2$ is Theorem \ref{th:et-2}. For $k>2$, it is new.
One well-known application of an incidence bound between points and lines is Beck's theorem \cite{beck1983lattice}. Proving a conjecture of Erd\H{o}s \cite{erdos1975some}, Beck used a slightly weaker incidence bound than Theorem \ref{th:sz-t} to show that, if $P$ is a set of $n$ points such that no more than $s$ points of $P$ lie on any single line, then the number of lines spanned by $P$ is $\Omega(n(n-s))$. In the same paper, Beck gave the following bound for flats of higher dimensions. \begin{theorem}[Beck]\label{th:beck} For each $k \geq 1$, there are constants $c_k$ and $c_k'$ such that either $c_k n$ points of $P$ are contained in a single $k$-flat, or $P$ spans $c'_k n^{k+1}$ $k$-flats. \end{theorem}
How large can the constant $c_k$ be in Theorem \ref{th:beck}? For $k=1$, Beck showed that Theorem \ref{th:beck} holds for any $c_1 < 1$. For $k=2$, if $P$ is a set of $n$ points of which $n/2$ lie on each of two skew lines, then $P$ spans $n$ planes, but no plane contains more than $n/2 + 1$ points of $P$. Hence, Theorem \ref{th:beck} does not hold for $c_2 = 1/2$. Beck's proof yields a constant $c_k$ of the form $c_k = e^{-ck}$ for some real $c>0$. Do \cite{do2016extending} improved this by showing that Theorem \ref{th:beck} holds for any $c_k < 1/k$.
The second result of this paper is the following improvement to Theorem \ref{th:beck}.
\begin{theorem}\label{th:improvedBeck}
For each integer $k \geq 1$, at least one of the following holds:
\begin{enumerate}
\item at least $(1-o(1))n$ points of $P$ are contained in a single $k$-flat, or
\item at least $(\frac{1}{2}-o(1))n$ points of $P$ are contained in a single $(k-1)$-flat, or
\item $k$ is odd and $(1-o(1))n $ points of $P$ are contained in the union of $k$ lines, or
\item $P$ spans $\Omega_k(n^{k+1})$ $k$-flats.
\end{enumerate} \end{theorem}
An immediate corollary of Theorem \ref{th:improvedBeck} is that, for any $c_k < 1/2$, there is a constant $c_k'$ such that Theorem \ref{th:beck} holds for these choices of $c_k$ and $c_k'$. Indeed, for odd $k$, any set of $(k+1)/2$ lines is contained in some $k$-flat. Hence, if the third alternative in Theorem \ref{th:improvedBeck} holds, a simple averaging argument shows that some $k$-flat contains at least $(1-o(1)) ((k+1)/2) (n/k) > (1-o(1)) n/2$ points of $P$.
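The averaging argument above rests on the elementary inequality $\frac{k+1}{2}\cdot\frac{1}{k} > \frac{1}{2}$ for every odd $k$. The following small script (an illustrative sanity check with exact rational arithmetic, not part of any proof) verifies it for a range of odd $k$:

```python
from fractions import Fraction

# For odd k, some (k+1)/2 of the k lines together contain, on average,
# a ((k+1)/2) * (1/k) fraction of the points lying on the k lines.
# This fraction always exceeds 1/2, since k + 1 > k.
for k in range(1, 200, 2):  # odd values of k
    share = Fraction(k + 1, 2) * Fraction(1, k)
    assert share > Fraction(1, 2), (k, share)

print("share > 1/2 verified for all odd k < 200")
```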
As noted above, we cannot take $c_2 = 1/2$ in Theorem \ref{th:beck}. In fact, we cannot take $c_k = 1/2$ for any $k>1$. To see this, suppose that $P$ is contained in the union of a $(k-1)$-flat $\Gamma$ and a line $\ell$, with each of $\Gamma$ and $\ell$ containing $n/2$ points. In this case, any $k$-flat spanned by $P$ contains either $\Gamma$ or $\ell$, and so $P$ spans at most $n/2 + \binom{n/2}{k-1} = O(n^{k-1})$ $k$-flats.
We remark that all of the new results in this paper hold for point sets in complex space, using the generalization of the Szemer\'edi-Trotter bound to complex space proved by T\'oth \cite{toth2015szemeredi} and Zahl \cite{zahl2012szemeredi}.
\section{Projective geometry and essential dimension}
In this section, we fix notation and review some basic facts of projective geometry, as well as results and definitions we need from \cite{lund2016essential}. For convenience, we work in the $d$-dimensional real projective space $\mathbb{P}^d(\mathbb{R})$ instead of real affine space. This does not affect any of the results.
The span of a set $X $ is the smallest flat that contains $X$, and is denoted $\overline{X}$. We denote by $\overline{\Lambda,\Gamma}$ the span of $\Lambda \cup \Gamma$. It is a basic fact of projective geometry that, for any flats $\Lambda, \Gamma$, \begin{equation}\label{eqn:dimSpan} \dim(\overline{\Lambda,\Gamma}) + \dim(\Lambda \cap \Gamma) = \dim(\Lambda) + \dim(\Gamma).\end{equation} More generally, using the fact that $\dim(\Lambda \cap \Gamma) \geq -1$, we have for any set $\mathcal{H}$ of flats that
\begin{equation}\label{eqn:dimSpanManyFlats}\dim \overline{\mathcal{H}} \leq |\mathcal{H}| - 1 +\sum_{\Lambda \in \mathcal{H}} \dim(\Lambda).\end{equation}
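For a concrete instance of (\ref{eqn:dimSpan}), take two skew lines $\Lambda, \Gamma$ in $\mathbb{P}^3(\mathbb{R})$: they are disjoint, so $\dim(\Lambda \cap \Gamma) = -1$, and their span is the whole space, so

```latex
\begin{equation*}
\dim(\overline{\Lambda,\Gamma}) + \dim(\Lambda \cap \Gamma) \;=\; 3 + (-1) \;=\; 1 + 1 \;=\; \dim(\Lambda) + \dim(\Gamma).
\end{equation*}
```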
For any $k$-flat $\Lambda$ in $\mathbb{P}^d(\mathbb{R})$, the $(k+1)$-flats that contain $\Lambda$ correspond to the points of $\mathbb{P}^{d-k-1}(\mathbb{R})$. The {\em projection from $\Lambda$} is the map $\pi_\Lambda: \mathbb{P}^d(\mathbb{R}) \setminus \Lambda \rightarrow \mathbb{P}^{d-k-1}(\mathbb{R})$ that sends a point $p$ to the $(k+1)$-flat $\overline{p,\Lambda}$.
Defined in \cite{lund2016essential}, the {\em essential dimension} $K=K(P)$ of $P$ is the minimum $t$ such that there exists a set $\mathcal{G}$ of flats such that \begin{enumerate} \item $P$ is contained in the union of the flats of $\mathcal{G}$, \item each flat $\Lambda \in \mathcal{G}$ has dimension $\dim(\Lambda) \geq 1$, and \item $\sum_{\Lambda \in \mathcal{G}} \dim(\Lambda) = t$. \end{enumerate}
Denote by $f_k$ the number of $k$-flats spanned by $P$. For each non-negative integer $i$, let $g_i$ be the largest number of points of $P$ contained in a set of essential dimension $i$. For example, $g_1$ is the largest number of points contained in any single line, and $g_2$ is the largest number of points contained in any single plane, or in the union of any pair of lines. Note that $1=g_0 \leq g_1 \leq \ldots \leq g_K = n$. A classical theorem of Beck \cite{beck1983lattice} is that $f_1 = \Theta(n(n-g_1))$. This was generalized to all dimensions in \cite{lund2016essential}, and this generalization is the main tool used here. \begin{theorem}\label{thm:essentialDimBound} For each $k$, there is a constant $c_k$ such that, if $n-g_k > c_k$, then $$f_k = \Theta_k \left(\prod_{i=0}^k (n-g_i) \right).$$ If $n-g_k = 0$ (i.e., $k \geq K$), then $$f_k = O_K\left(\prod_{i=0}^{2(K-1) - k} (n-g_i)\right),$$ and either $f_{k-1} = f_k = 0$ or $f_{k-1} > f_k$. \end{theorem}
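As an illustration of the case $k=1$ (a brute-force sanity check on tiny examples, not part of any proof), one can count the lines spanned by small integer point sets and compare with $n(n-g_1)$:

```python
from itertools import combinations
from math import gcd, comb

def spanned_lines(points):
    """Distinct lines through pairs of integer points, each stored as a
    normalized integer triple (a, b, c) with ax + by + c = 0."""
    lines = set()
    for (x1, y1), (x2, y2) in combinations(points, 2):
        a, b = y2 - y1, x1 - x2
        c = -(a * x1 + b * y1)
        g = gcd(gcd(abs(a), abs(b)), abs(c))
        a, b, c = a // g, b // g, c // g
        # Fix a canonical sign so that each line has a unique representative.
        if a < 0 or (a == 0 and (b < 0 or (b == 0 and c < 0))):
            a, b, c = -a, -b, -c
        lines.add((a, b, c))
    return lines

# General position: points on a parabola have no 3 collinear, so
# n = 6 points span C(6, 2) = 15 lines; here g_1 = 2 and n(n - g_1) = 24.
parabola = [(i, i * i) for i in range(6)]
assert len(spanned_lines(parabola)) == comb(6, 2)

# Degenerate: 4 collinear points plus 2 more span only 10 lines;
# here g_1 = 4 and n(n - g_1) = 12, again within a constant factor.
pts = [(0, 0), (1, 0), (2, 0), (3, 0), (0, 1), (1, 1)]
assert len(spanned_lines(pts)) == 10
```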
\section{Proof of Theorem \ref{th:et-gen}}
Recall from the introduction that a $k$-flat $\Lambda$ is {\em $\alpha$-nondegenerate} if at most $\alpha |P \cap \Lambda|$ points of $P$ lie on any $(k-1)$-flat contained in $\Lambda$. We further say that $\Lambda$ is {\em essentially-$\alpha$-nondegenerate} if for each $P' \subset P \cap \Lambda$ such that the essential dimension of $P'$ is at most $k-1$, we have $|P'| \leq \alpha |P \cap \Lambda|$. Note that an essentially-$\alpha$-nondegenerate flat is also $\alpha$-nondegenerate, but not necessarily the other way around.
The following bound on essentially-$\alpha$-nondegenerate flats was proved by Do \cite{do2016extending}. \begin{theorem}[Do]\label{th:do} For any integer $k \geq 1$, any real $\alpha$ with $0 < \alpha < 1$, and any integer $r > k$, the number of essentially-$\alpha$-nondegenerate, $r$-rich $k$-flats is bounded above by $O_{\alpha,k}(n^{k+1}r^{-k-2} + n^kr^{-k})$. \end{theorem}
Theorem \ref{th:do} is also an immediate consequence of Theorem \ref{thm:essentialDimBound} together with the following theorem of Elekes and T\'oth \cite{elekes2005incidences}. A $k$-flat $\Lambda$ is {\em $\gamma$-saturated} if $\Lambda \cap P$ spans at least $\gamma |\Lambda \cap P|^k$ different $(k-1)$-flats. \begin{theorem}[Elekes, T\'oth]\label{th:elktot} For any positive real $\gamma$, any integer $k \geq 1$, and any integer $r > k$, the number of $r$-rich $\gamma$-saturated $k$-flats is at most $O_{\gamma,k}(n^{k+1}r^{-k-2} + n^k r^{-k})$. \end{theorem}
To prove Theorem \ref{th:do} from Theorem \ref{thm:essentialDimBound} and Theorem \ref{th:elktot}, observe that Theorem \ref{thm:essentialDimBound} implies that essentially-$\alpha$-nondegenerate $k$-flats are $\gamma$-saturated, for an appropriate choice of $\gamma$ depending on $\alpha$ and $k$.
In the remainder of this section, we deduce Theorem \ref{th:et-gen} from Theorem \ref{th:do}.
\subsection{The case $k=3$} The case $k=3$ admits a simpler proof than the general theorem, which we give first. The proof for arbitrary $k$ does not depend on this special case, but is built around a similar idea.
\begin{theorem}\label{th:et-3} For any real $\alpha$ with $0 < \alpha < 1$ and integer $r > 3$, the number of $\alpha$-nondegenerate, $r$-rich $3$-flats is bounded above by $O_\alpha(n^{4}r^{-5} + n^3 r^{-3})$. \end{theorem} \begin{proof}
By Theorem \ref{th:do}, the number of essentially-$\alpha^{1/2}$-nondegenerate $r$-rich $3$-flats is bounded above by $O_\alpha(n^{4}r^{-5} + n^3 r^{-3})$. If an $r$-rich $3$-flat $\Lambda$ is $\alpha$-nondegenerate but not essentially-$\alpha^{1/2}$-nondegenerate, then at least $\alpha^{1/2} |P \cap \Lambda| \geq \alpha^{1/2}r$ points of $P$ are contained in the union of two skew lines (a single plane, or two coplanar lines, would contradict the $\alpha$-nondegeneracy of $\Lambda$, since $\alpha^{1/2} > \alpha$), neither of which contains more than $\alpha |P \cap \Lambda|$ points of $P$; hence, each of these lines contains at least $(\alpha^{1/2} - \alpha)r$ points. By the Szemer\'edi-Trotter theorem (Theorem \ref{th:sz-t}), the maximum number of pairs of $((\alpha^{1/2} - \alpha)r)$-rich lines is bounded above by $O_\alpha(n^4 r^{-6} + n^2 r^{-2})$, which implies the conclusion of the theorem. \end{proof}
\subsection{The general case}
The proof of Theorem \ref{th:et-3} given above does not generalize to higher dimensions, but the basic approach of bounding the number of $r$-rich $\alpha$-nondegenerate flats that are not also essentially-$\alpha'$-nondegenerate (for a suitable choice of $\alpha'$) does still work in higher dimensions.
It turns out that a distinguishing property of rich flats that are $\alpha$-nondegenerate but not essentially-$\alpha'$-nondegenerate is that they are \textit{special}, according to the following definition. A $k$-flat $\Lambda$ is $(r,\alpha)$-\textit{special} if $\Lambda$ contains a set $\mathcal{G}$ of flats so that \begin{enumerate}
\item $\overline{\mathcal{G}} = \Lambda$,
\item for each $\mathcal{G}' \subseteq \mathcal{G}$ with $|\mathcal{G}'| > 1$, we have $\sum_{\Gamma \in \mathcal{G'}} \dim(\Gamma) < \dim(\overline{\mathcal{G'}})$, and
\item each flat of $\mathcal{G}$ is $r$-rich and $\alpha$-nondegenerate. \end{enumerate}
We first show that each rich flat that is $\alpha$-nondegenerate but not essentially-$\alpha'$-nondegenerate is special.
\begin{lemma}\label{th:specialFlats}
Let $0 < \alpha < 1$, and let $r$ and $k$ be positive integers.
If $\Lambda$ is an $r$-rich, $\alpha$-nondegenerate $k$-flat that is not also essentially-$\alpha'$-nondegenerate, then it is $(r',\alpha')$-special, for $\alpha' = (k + \alpha)(k + 1)^{-1}$ and $r'=(1-\alpha')|P \cap \Lambda|$. \end{lemma} \begin{proof}
From the definition of essentially-$\alpha'$-nondegenerate, there is a collection $\mathcal{G}'$ of sub-flats of $\Lambda$ with $\sum_{\Gamma \in \mathcal{G}'} \dim(\Gamma) < k$ such that $|\bigcup_{\Gamma \in \mathcal{G}'} \Gamma \cap P| > \alpha' |P \cap \Lambda|$.
We modify $\mathcal{G}'$ to obtain a set $\mathcal{G}$ satisfying the three conditions needed to show that $\Lambda$ is special, as follows.
If $\Gamma \in \mathcal{G}'$ is not $\alpha'$-nondegenerate, then replace $\Gamma$ with the smallest flat $\Gamma' \subset \Gamma$ that contains at least $(\alpha')^{\dim(\Gamma) - \dim(\Gamma')}|P \cap \Gamma|$ points.
Note that, since any flat $\Gamma'' \subset \Gamma'$ with $\dim(\Gamma'') = \dim(\Gamma') - 1$ contains fewer than $(\alpha')^{\dim(\Gamma) - \dim(\Gamma') + 1}|P \cap \Gamma| \leq \alpha'|P \cap \Gamma'|$ points of $P$, it follows that $\Gamma'$ is $\alpha'$-nondegenerate.
Furthermore, the number of points in $\Gamma$ that are not also in $\Gamma'$ is at most $(1-(\alpha')^{\dim(\Gamma) - \dim(\Gamma')}) |P \cap \Gamma|$.
Since $0 < \alpha' < 1$, we have for any integer $j \geq 1$ that
\[1-(\alpha')^j = \alpha'(1 - (\alpha')^{j-1}) + 1 - \alpha' \leq 1 - (\alpha')^{j-1} + 1 - \alpha'.\]
Hence by induction, $1 - (\alpha')^j \leq j(1-\alpha')$.
Consequently, the number of points in $\Gamma$ that are not also in $\Gamma'$ is at most $ (\dim(\Gamma) - \dim(\Gamma'))(1-\alpha')|P \cap \Lambda|$.
For any subset $\mathcal{G}'' \subset \mathcal{G}'$ such that $\sum_{\Gamma \in \mathcal{G''}} \dim(\Gamma) = \dim(\overline{\mathcal{G''}})$, replace $\mathcal{G}''$ with $\overline{\mathcal{G}''}$.
Note that this does not increase $\sum_{\Gamma \in \mathcal{G'}} \dim(\Gamma)$.
Repeat this procedure until the second condition in the definition of special holds.
Remove from $\mathcal{G}'$ any flat that contains fewer than $(1-\alpha')|P \cap \Lambda|$ points to obtain the final set $\mathcal{G}$.
Each remaining flat in $\mathcal{G}$ is $\alpha'$-nondegenerate and $(1-\alpha')|P \cap \Lambda|$-rich.
The number of points that are contained in flats of $\mathcal{G}'$ but not in flats of $\mathcal{G}$ is at most $\sum_{\Gamma \in \mathcal{G}'} \dim(\Gamma) (1-\alpha')|P \cap \Lambda| < k (1-\alpha') |P \cap \Lambda|= (\alpha'-\alpha)|P \cap \Lambda|$.
Hence, $|\bigcup_{\Gamma \in \mathcal{G}} \Gamma \cap P| > \alpha |P \cap \Lambda|$.
If $\dim(\overline{\mathcal{G}}) < k$, then $\Lambda$ is not $\alpha$-nondegenerate, contrary to our assumption.
Hence, $\dim(\overline{\mathcal{G}}) = k$, and $\Lambda$ is $(r',\alpha')$-special. \end{proof}
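The arithmetic used in the proof above can be checked mechanically. The following script (an illustrative sanity check with exact rational arithmetic, not part of the proof) verifies the identity $k(1-\alpha') = \alpha' - \alpha$ for $\alpha' = (k+\alpha)(k+1)^{-1}$, and the inequality $1 - (\alpha')^j \leq j(1-\alpha')$:

```python
from fractions import Fraction

# alpha' = (k + alpha) / (k + 1), as in Lemma specialFlats.
for k in range(1, 12):
    for alpha in (Fraction(1, 10), Fraction(1, 3), Fraction(1, 2), Fraction(9, 10)):
        ap = (k + alpha) / (k + 1)
        assert 0 < alpha < ap < 1          # alpha' lies strictly between alpha and 1
        assert k * (1 - ap) == ap - alpha  # exact identity behind the point count
        for j in range(1, 10):
            assert 1 - ap ** j <= j * (1 - ap)  # 1 - x^j <= j(1 - x) on (0, 1)

print("arithmetic checks for the lemma passed")
```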
Next, we show that the number of special flats is asymptotically smaller than the upper bound on the number of essentially-$\alpha$-nondegenerate flats coming from Theorem \ref{th:do}.
\begin{lemma}\label{th:boundingSpecial}
For any $0 < \alpha < 1$ and positive integers $r,k$, the number of $(r,\alpha)$-special $k$-flats for $P$ is $O_{\alpha,k}(n^{k+1}r^{-k-2} + n^k r^{-k})$. \end{lemma} \begin{proof}
We proceed by induction on $k$.
The base case of $k=1$ is handled by Theorem \ref{th:sz-t}.
Let $k>1$, and suppose that Theorem \ref{th:et-gen} has been shown to hold for all $k' < k$.
Let $\mathcal{F}$ be the set of $(r,\alpha)$-special $k$-flats.
We partition $\mathcal{F}$ into subsets $\mathcal{F}_b$ for each integer $1 \leq b < k$, and separately bound the size of each $\mathcal{F}_b$, as follows.
First we assign each flat to one of the parts.
For each $\Lambda \in \mathcal{F}$, let $\mathcal{G}_\Lambda$ be a minimal set of flats that shows that $\Lambda$ is special.
Since $\mathcal{G}_\Lambda$ is minimal, we have that $\overline{\mathcal{G}_\Lambda} = \Lambda$ but $\overline{\mathcal{G}_\Lambda \setminus \Gamma} \subsetneq \Lambda$ for each $\Gamma \in \mathcal{G}_\Lambda$.
Let $\Gamma_\Lambda$ be an arbitrary flat in $\mathcal{G}_\Lambda$, let $b_\Lambda = \dim(\overline{\mathcal{G}_\Lambda \setminus \Gamma_{\Lambda}})$, and let $\mathcal{G}'_\Lambda = \mathcal{G}_\Lambda \setminus \Gamma_\Lambda$.
Note that $\overline{\mathcal{G}'_\Lambda}$ is $(r,\alpha)$-special.
Assign $\Lambda$ to $\mathcal{F}_{b_\Lambda}$.
Now we fix $b$, and bound $|\mathcal{F}_b|$.
The inductive hypothesis implies that
$$|\{\overline{\mathcal{G}_\Lambda'} : \Lambda \in \mathcal{F}_b\}| = O_{\alpha,b}(n^{b+1}r^{-b-2} + n^b r^{-b}).$$
Hence, it will suffice to bound the number of flats $\Lambda \in \mathcal{F}_b$ that share any fixed span $\overline{\mathcal{G}_\Lambda'}$ by $O_{\alpha,k}(n^{k-b}r^{-k+b})$.
Let $\mathcal{R} \in \{\overline{\mathcal{G}_\Lambda'} : \Lambda \in \mathcal{F}_b\}$.
Recall that $\pi_\mathcal{R}$ denotes the projection from $\mathcal{R}$.
Let $M$ be a multiset of points in $\mathbb{P}^{d-1-b}(\mathbb{R})$, with the multiplicity of a point $q$ equal to the number of points $p \in P$ so that $\pi_{\mathcal{R}}(p) = q$.
For each $\Lambda \in \mathcal{F}_b$ such that $\overline{\mathcal{G}_\Lambda'} = \mathcal{R}$, there is an $\alpha$-nondegenerate, $r$-rich flat $\Gamma$ such that $\overline{\Gamma, \mathcal{R}} = \Lambda$.
Since $\overline{\Gamma,\mathcal{R}} = \Lambda$, the image $\pi_{\mathcal{R}}(\Gamma)$ has dimension $\dim(\Lambda) - \dim(\mathcal{R}) - 1$, so $\dim \pi_{\mathcal{R}}(\Gamma) = k-1-b$.
We claim that $\pi_{\mathcal{R}}(\Gamma)$ is $(1-\alpha)r$-rich and $\alpha$-nondegenerate relative to $M$.
First, note that $|\pi_\mathcal{R}(\Gamma) \cap M| = |\Gamma \cap P| - |\Gamma \cap \mathcal{R} \cap P|$.
Since $\Gamma$ is $\alpha$-nondegenerate, $|\Gamma \cap \mathcal{R} \cap P| < \alpha |\Gamma \cap P|$, so $\pi_{\mathcal{R}}(\Gamma)$ is $(1-\alpha)r$-rich.
Let $\Gamma'$ be a subflat of $\pi_{\mathcal{R}}(\Gamma)$, and let $\pi_{\mathcal{R}}^{-1}(\Gamma') \subset \Gamma$ be the preimage of $\Gamma'$ in $\Gamma$.
Note that $\dim \overline{\pi_{\mathcal{R}}^{-1}(\Gamma'), \mathcal{R} \cap \Gamma} < \dim \Gamma$, hence $|\Gamma' \cap M| \leq \alpha |\Gamma \cap P| - |\Gamma \cap \mathcal{R} \cap P| \leq \alpha|\pi_\mathcal{R}(\Gamma) \cap M|$.
Hence, $\pi_\mathcal{R}(\Gamma)$ is $\alpha$-nondegenerate.
To complete the proof, we will use the following lemma, proved below.
\begin{lemma}\label{th:mult-bound}
Let $M$ be a multiset of points with total multiplicity $n$.
The number of $r$-rich, $\alpha$-nondegenerate $k$-flats spanned by $M$ is bounded above by $(1-\alpha)^{-k}n^{k+1}r^{-k-1}$.
\end{lemma}
From Lemma \ref{th:mult-bound}, we get the required bound of $O_{\alpha,k}(n^{k-b}r^{-k+b})$ on the number of $(1-\alpha)r$-rich, $\alpha$-nondegenerate $(k-1-b)$-flats spanned by $M$, and this completes the proof of Lemma \ref{th:boundingSpecial}. \end{proof}
Here is the proof of Lemma \ref{th:mult-bound}.
\begin{proof}[Proof of Lemma \ref{th:mult-bound}] There are $n^{k+1}$ ordered lists of $k+1$ points in $M$ (with repetitions allowed). We show below that for any $r$-rich, $\alpha$-nondegenerate $k$-flat $\Lambda$, there are at least $(1-\alpha)^k r^{k+1}$ distinct lists of $k+1$ affinely independent points contained in $\Lambda$. Since $k+1$ affinely independent points are contained in exactly one $k$-flat, this implies the conclusion of the lemma.
Let $\Lambda$ be an $r$-rich, $\alpha$-nondegenerate $k$-flat. We will show that, for each $0 \leq k' \leq k$, $\Lambda$ contains $(1-\alpha)^{k'} r^{k'+1}$ distinct ordered lists of $k'+1$ affinely independent points of $M$. We proceed by induction on $k'$. The base case of $k'=0$ is immediate from the fact that $\Lambda$ is $r$-rich. Let $k' > 0$, and suppose that $\Lambda$ contains $(1-\alpha)^{k' - 1} r^{k'}$ distinct ordered lists of $k'$ affinely independent points of $M$.
Let $V$ be the set of pairs $(\mathbf{v},p)$, where $\mathbf{v}$ is an ordered list of $k'$ affinely independent points of $M$ contained in $\Lambda$, and $p$ is a point of $M$ contained in $\Lambda \setminus \overline{\mathbf{v}}$. By the inductive hypothesis, we know that there are $(1-\alpha)^{k'-1}r^{k'}$ choices for $\mathbf{v}$. Since $\dim{\overline{\mathbf{v}}} = k'-1 \leq k-1$, the hypothesis that $\Lambda$ is $\alpha$-nondegenerate implies that $|\overline{\mathbf{v}} \cap M| \leq \alpha |\Lambda \cap M|$. Consequently, for a fixed choice of $\mathbf{v}$, there are at least $(1 - \alpha) r$ choices for $p$. Hence $|V| \geq (1-\alpha)^{k'}r^{k' + 1}$, as claimed.
\end{proof}
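The counting in Lemma \ref{th:mult-bound} can be sanity-checked numerically in the smallest nontrivial case $k=1$, where "affinely independent" simply means "distinct". The following sketch (with hypothetical example data) takes a multiset of collinear points, so that the line through them is $r$-rich with $r$ the total multiplicity, and verifies that it carries at least $(1-\alpha)r^2$ ordered pairs of distinct points, counted with multiplicity.

```python
from itertools import product
from fractions import Fraction

# Hypothetical multiset M of collinear points in R^2 (point -> multiplicity),
# so the spanned flat is a line (k = 1) and affine independence = distinctness.
M = {(0, 0): 3, (1, 1): 2, (2, 2): 2, (3, 3): 1}
r = sum(M.values())                       # the line is r-rich
alpha = Fraction(max(M.values()), r)      # every 0-flat holds <= alpha*r mass

# Ordered lists of 2 affinely independent points, counted with multiplicity.
pairs = sum(m1 * m2 for (p1, m1), (p2, m2) in
            product(M.items(), repeat=2) if p1 != p2)

assert pairs >= (1 - alpha) * r ** 2      # the bound proved in the lemma
print(pairs)
```

Here $r=8$ and $\alpha=3/8$, so the lemma guarantees at least $40$ ordered pairs; the actual count is $46$.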
This completes the proof of Theorem \ref{th:et-gen}: Theorem \ref{th:do} gives the required bound on the number of essentially-$\alpha$-nondegenerate flats, Lemma \ref{th:specialFlats} shows that the flats that are $\alpha$-nondegenerate but not essentially-$\alpha$-nondegenerate are special, and Lemma \ref{th:boundingSpecial} gives the required bound on special flats.
\section{Proof of Theorem \ref{th:improvedBeck}}
In this section, we show that Theorem \ref{th:improvedBeck} follows from Theorem \ref{thm:essentialDimBound}.
\begin{proof}
Let $c_k$ be the constant in the lower bound of Theorem \ref{thm:essentialDimBound}.
If $f_k \geq c_k n^{k+1}$, then the third alternative of Theorem \ref{th:improvedBeck} holds and we are done.
Suppose that $f_k < c_k n^{k+1}$. Theorem \ref{thm:essentialDimBound} implies that there is a set $\mathcal{G}$ of flats such that $\sum_{\Gamma \in \mathcal{G}} \dim(\Gamma) \leq k$, at least $(1-o(1))n$ points of $P$ lie in some flat of $\mathcal{G}$, and each flat of $\mathcal{G}$ has dimension at least $1$. If $\mathcal{G}$ consists of a single flat, then the first alternative holds and we're done. Suppose that $|\mathcal{G}| > 1$. We show below that, unless $k$ is odd and $\mathcal{G}$ is the union of $k$ lines, we can partition $\mathcal{G}$ into $\mathcal{G}_1, \mathcal{G}_2$ such that $\dim \overline{\mathcal{G}_1}, \dim \overline{\mathcal{G}_2} \leq k-1$. Since either $|P \cap (\cup_{\Gamma \in \mathcal{G}_1} \Gamma)| \geq (1/2)|P \cap (\cup_{\Gamma \in \mathcal{G}} \Gamma)|$ or $|P \cap (\cup_{\Gamma \in \mathcal{G}_2} \Gamma)| \geq (1/2) |P \cap (\cup_{\Gamma \in \mathcal{G}} \Gamma)|$, this is enough to prove the theorem.
Let $\mathcal{G} = \{\Gamma_1, \Gamma_2, \ldots, \Gamma_m\}$, with $\dim(\Gamma_1) \leq \dim(\Gamma_2) \leq \ldots \leq \dim(\Gamma_m)$. Let $\mathcal{G}_1 = \{\Gamma_1, \Gamma_2, \ldots, \Gamma_{m_1}\}$, with $m_1$ chosen as large as possible under the constraint $\dim \overline{\mathcal{G}_1} \leq k-1$.
Let $s = \dim \Gamma_{m_1 + 1}$. Note that $\dim \overline{\mathcal{G}_1} + s \geq k-1$, since otherwise $\Gamma_{m_1 + 1}$ would have been included in $\mathcal{G}_1$, contradicting the maximality of $m_1$. Since each flat in $\mathcal{G}_1$ has dimension at most $s$, by (\ref{eqn:dimSpanManyFlats}) we have \begin{align*}
\sum_{\Gamma \in \mathcal{G}_1} \dim \Gamma &\geq \dim \overline{\mathcal {G}_1} + 1 - |\mathcal{G}_1|, \\ &\geq k - s - \frac{1}{s}\sum_{\Gamma \in \mathcal{G}_1} \dim \Gamma. \end{align*} Hence, $$k - \sum_{\Gamma \in \mathcal{G}_2} \dim \Gamma \geq \sum_{\Gamma \in \mathcal{G}_1} \dim \Gamma \geq \frac{k-s}{1 + s^{-1}},$$ and so $$\sum_{\Gamma \in \mathcal{G}_2} \dim \Gamma \leq \frac{s + ks^{-1}}{1 + s^{-1}}.$$
Since each flat in $\mathcal{G}_2$ has dimension at least $s$, we have $|\mathcal{G}_2| \leq \frac{1}{s} \sum_{\Gamma \in \mathcal{G}_2} \dim \Gamma$. Applying (\ref{eqn:dimSpanManyFlats}), we have \begin{align*}
\dim \overline{\mathcal{G}_2} &\leq |\mathcal{G}_2| - 1 + \sum_{\Gamma \in \mathcal{G}_2} \dim \Gamma, \\ &\leq (1 + s^{-1}) \sum_{\Gamma \in \mathcal{G}_2} \dim \Gamma - 1,\\ &\leq s + ks^{-1} - 1,\\ &\leq k, \end{align*}
with equality only if $s=1$ and $|\mathcal{G}_2| = \frac{1}{s}\sum_{\Gamma \in \mathcal{G}_2} \dim \Gamma$; this occurs only if $\mathcal{G}$ is a set of lines. Note that the case $s = k$ is eliminated by the assumption that $|\mathcal{G}| > 1$ together with the fact that $\sum_{\Gamma \in \mathcal{G}} \dim(\Gamma) \leq k$.
If $\mathcal{G}$ is a set of lines and $k$ is even, then $\mathcal{G}_2$ consists of at most $k/2$ lines, which span at most a $(k-1)$-flat. Hence, if $\dim \overline{\mathcal{G}_2} = k$, then $\mathcal{G}$ must be a set of lines, and $k$ must be odd. Since $\dim \overline{\mathcal{G}_1} \leq k-1$ by construction, this completes the proof.
\end{proof}
\end{document}
\begin{document}
\title{Schoenberg Representations and Gramian Matrices of Mat\'ern Functions}
\begin{itemize} \item[{}] {\bf Abstract.} We represent Mat\'ern functions in terms of Schoenberg's integrals, which ensure positive definiteness, and prove that systems of translates of Mat\'ern functions form Riesz sequences in $L^2(\mathbb{R}^n)$ or Sobolev spaces. Our approach is based on a new class of integral transforms that generalize Fourier transforms for radial functions. We also consider inverse multi-quadrics and obtain similar results. \end{itemize}
{\small \begin{itemize} \item[{}]{\bf Keywords.} Bessel function, Fourier transform, Gramian matrix, Hankel-Schoenberg transform, inverse multi-quadrics, Mat\'ern function, positive definite, Riesz sequence, Schoenberg matrix, Sobolev space.
\item[{}] 2010 Mathematics Subject Classification: 33C10, 41A05, 42B10, 60E10. \end{itemize}}
\section{Introduction} In many areas of Mathematics, functions of the type \begin{equation} M_\alpha(z) = K_\alpha(z) z^\alpha\qquad(\alpha\in\mathbb{R},\,z>0) \end{equation} arise frequently; they are referred to as the Mat\'ern functions, where $K_\alpha(z)$ stands for the modified Bessel function of the second kind of order $\alpha$.
Intimately connected is the family of functions of type \begin{equation} \phi_\beta(r) = (1+r^2)^{-\beta}\qquad(\beta>0, \,r\ge 0) \end{equation} whose radial extensions to the Euclidean spaces are referred to as the inverse multi-quadrics in the theory of interpolation and in spatial statistics.
In a fixed Euclidean space, both classes of functions, when radially extended with suitably adjusted $\,\alpha, \beta,\,$ provide essential ingredients of Sobolev spaces. In their pioneering work \cite{AK}, N. Aronszajn and K. T. Smith introduced the Sobolev space $H^\alpha(\mathbb{R}^n),\,\alpha>0,\,$ as the space of Bessel potentials, that is, the convolutions $\,(G_{\alpha/2}\ast u)(\mathbf{x}),\,u\in L^2(\mathbb{R}^n),\,$ where $G_{\alpha/2}$ denotes the radial extension of a special kind of Mat\'ern functions defined as follows.
\begin{definition} For a positive integer $n$ and $\,\alpha>0,$ \begin{equation}\label{G1} G_\alpha(z) = \frac{1}{2^{\alpha-1 + \frac n2}\,\pi^{\frac n2}\, \Gamma(\alpha)}\,K_{\alpha-\frac n2}(z) z^{\alpha - \frac n2}\qquad(z>0). \end{equation} For its radial extension to the Euclidean space $\mathbb{R}^n$, we write
$$G_\alpha(\mathbf{x}) = G_\alpha(|\mathbf{x}|), \quad |\mathbf{x}| = \sqrt{\mathbf{x}\cdot\mathbf{x}}\qquad(\mathbf{x}\in\mathbb{R}^n).$$ \end{definition}
A characteristic feature of the kernel $G_\alpha$ is the Fourier transform \begin{align*}
\widehat{G_\alpha}(\xi) =\int_{\mathbb{R}^n} e^{-i \xi\cdot\mathbf{x}}\,G_\alpha(\mathbf{x}) d\mathbf{x}= \left( 1+|\xi|^2\right)^{-\alpha}\,, \end{align*} which, together with the intrinsic properties of $K_{\alpha-n/2}$, enabled the authors to obtain a comprehensive list of functional properties. Let us state only a few items from their list that are relevant to the present work (see also \cite{C}).
\begin{itemize} \item[(a)] The Sobolev space $H^\alpha(\mathbb{R}^n)$ is identified with \begin{equation*}
H^\alpha(\mathbb{R}^n) = \left\{ u\in L^2(\mathbb{R}^n) : \int_{\mathbb{R}^n} \left( 1+|\xi|^2\right)^{\alpha}|\widehat{u}(\xi)|^2 d\xi <\infty \right\}. \end{equation*} In particular, $\,G_\beta\in H^\alpha(\mathbb{R}^n)\,$ if and only if $\,\beta>(2\alpha +n)/4.$ \item[(b)] $\,\left(G_\alpha\ast G_\beta\right)(\mathbf{x}) = G_{\alpha +\beta}(\mathbf{x})\,$ for $\,\alpha>0,\,\beta>0.$ \item[(c)] In the case $\,\alpha>n/2,\,$ $G_{\alpha}$ is positive definite on $\mathbb{R}^n$. The symmetric kernel $G_{\alpha}(\mathbf{x} - \mathbf{y})$ is in fact a reproducing kernel for the Hilbert space $H^\alpha(\mathbb{R}^n)$ under the inner product $$\big(u, v\big)_{H^\alpha(\mathbb{R}^n)} =
(2\pi)^{-n}\int_{\mathbb{R}^n} \widehat{u}(\xi)\,\overline{\widehat{v}(\xi)}\,(1+|\xi|^2)^\alpha\,d\xi.$$ \end{itemize}
Our primary purpose in the present work is to obtain a set of invariants for both classes of functions, that is, those properties valid in any Euclidean space, related to positive definiteness and Fourier transforms.
We recall that a univariate function $\phi$ defined on the interval $[0, \infty)$ is said to be {\it positive semi-definite on $\mathbb{R}^n$} if it satisfies \begin{align}\label{G3}
\sum_{j=1}^N\sum_{k=1}^N \,\phi\left(\left|\mathbf{x}_j - \mathbf{x}_k\right|\right)\alpha_j \overline{\alpha_k}\,\ge\, 0 \end{align} for any choice of $\,\alpha_1, \cdots, \alpha_N\in\mathbb{C}\,$ and distinct points $\,\mathbf{x}_1, \cdots, \mathbf{x}_N\in\mathbb{R}^n,\,$ where $N$ is arbitrary. If equality in \eqref{G3} holds only if $\,\alpha_1=\cdots=\alpha_N=0,\,$ then it is said to be {\it positive definite on $\mathbb{R}^n$}.
A univariate function which is positive semi-definite or positive definite on every $\mathbb{R}^n$ is characterized as follows:
\begin{itemize} \item[{}]{\bf Criterion I} (I. J. Schoenberg \cite{Sc2}). {\it A continuous function $\phi$ on $[0, \infty)$ is positive semi-definite on every $\mathbb{R}^n$ if and only if \begin{equation}\label{G4} \phi(r) = \int_0^\infty e^{-r^2 t}\,d\nu(t) \end{equation} for a finite positive Borel measure $\nu$ on $[0, \infty).$ Moreover, if $\nu$ is not concentrated at zero, then $\phi$ is positive definite on every $\mathbb{R}^n$. } \end{itemize}
Due to the representation formula \begin{equation}\label{G5}
\phi_\beta(r) = \frac{1}{\Gamma(\beta)}\int_0^\infty e^{-r^2 t}\,e^{-t} t^{\beta -1} dt\qquad(\beta>0), \end{equation} it is well known that each $\phi_\beta$ is positive definite on every $\mathbb{R}^n$ (see e.g. \cite{We}).
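Positive definiteness of $\phi_\beta$ can also be observed numerically: for any finite set of distinct points, the matrix $\big[\phi_\beta(|\mathbf{x}_j-\mathbf{x}_k|)\big]$ must admit a Cholesky factorization with strictly positive diagonal. The following minimal sketch (the value of $\beta$ and the points in $\mathbb{R}^2$ are arbitrary choices for illustration) checks this with a hand-rolled Cholesky routine.

```python
import math

def cholesky(A):
    # Plain Cholesky factorization; it succeeds with a strictly positive
    # diagonal iff A is (numerically) positive definite.
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = math.sqrt(A[i][i] - s) if i == j else (A[i][j] - s) / L[j][j]
    return L

beta = 1.25
phi = lambda r: (1.0 + r * r) ** (-beta)          # inverse multi-quadric
X = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (2.0, 1.5), (3.0, 0.25)]
A = [[phi(math.dist(p, q)) for q in X] for p in X]
L = cholesky(A)
assert all(L[i][i] > 0.0 for i in range(len(X)))  # positive definite
```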
Our preliminary observation is the following.
\begin{theorem}\label{theorem1.1} For $\,\alpha>0,\,$ we have \begin{equation*} \frac{2^{1-\alpha}}{\Gamma(\alpha)}\,K_\alpha(z) z^\alpha = \int_0^\infty e^{-z^2 t}\,f_\alpha(t) dt\qquad(z\ge 0), \end{equation*} where $f_\alpha$ denotes the probability density defined by $$ f_\alpha(t) = \frac{1}{2^{2\alpha}\Gamma(\alpha)}\,\exp\left(-\frac{1}{4t}\right) t^{-\alpha-1}.$$ As a consequence, $M_\alpha$ is positive definite on every $\mathbb{R}^n$. \end{theorem}
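That $f_\alpha$ is indeed a probability density can be verified directly: the substitution $\,u = 1/(4t),\,$ for which $\,dt = -\,du/(4u^2)\,$ and $\,t^{-\alpha-1} = (4u)^{\alpha+1},\,$ yields
$$\int_0^\infty \exp\left(-\frac{1}{4t}\right) t^{-\alpha-1}\,dt = \int_0^\infty e^{-u}\,(4u)^{\alpha+1}\,\frac{du}{4u^2} = 4^\alpha\int_0^\infty e^{-u} u^{\alpha-1}\,du = 2^{2\alpha}\,\Gamma(\alpha),$$
so that $\,\int_0^\infty f_\alpha(t)\,dt = 1.$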
In order to find direct relationships between the functions $M_\alpha$ and $\phi_\beta$, without recourse to their Euclidean extensions, we shall introduce a new class of integral transforms that incorporates, in a certain sense, Fourier transforms for radial measures and Hankel transforms.
\begin{definition}\label{def1} For $\,\lambda>-1,\,$ let $J_\lambda$ denote the Bessel function of the first kind of order $\lambda$ and define $\,\Omega_\lambda : \mathbb{R}\to \mathbb{R}\,$ by \begin{align*} \Omega_\lambda(t) &= \Gamma(\lambda+1)\left(\frac t2\right)^{-\lambda} J_\lambda(t)\\ &=\Gamma(\lambda+1)\sum_{k=0}^\infty\frac{(-1)^k}{k!\,\Gamma(\lambda +k +1)}\,\left(\frac t 2\right)^{2k}. \end{align*} \end{definition}
In the special case $\,\lambda = (n-2)/2\,,$ with $n$ a positive integer, $\Omega_\lambda$
arises on consideration of the Fourier transforms for radial functions on $\mathbb{R}^n$. To be specific, if $F$ is integrable with $\,F(\mathbf{x})= f(|\mathbf{x}|)\,$ for some univariate function $f$ on $[0, \infty)$, then it is well known (see e.g. \cite{Stw}) that \begin{align}\label{G6}
\widehat{F}(\xi) &= (2\pi)^{n/2} |\xi|^{-\frac{n-2}{2}}\int_0^\infty J_{\frac{n-2}{2}}
(|\xi|t) f(t) t^{n/2}dt\nonumber\\
&=\frac{2\pi^{n/2}}{\Gamma(n/2)} \int_0^\infty \Omega_{\frac{n-2}{2}}(|\xi|t) f(t) t^{n-1} dt. \end{align}
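Formula \eqref{G6} is easy to test numerically. For the Gaussian $f(t)=e^{-t^2/2}$ in dimension $n=3$, where $\Omega_{1/2}(x)=\sin x/x$ and $\widehat F(\xi) = (2\pi)^{3/2}e^{-|\xi|^2/2}$, a crude quadrature sketch (evaluation points chosen arbitrarily) gives:

```python
import math

def rhs(rho, T=12.0, n=100000):
    # (2 pi^{3/2} / Gamma(3/2)) * ∫_0^∞ Ω_{1/2}(rho t) f(t) t^2 dt  for n = 3,
    # with f(t) = exp(-t^2/2) and Ω_{1/2}(x) = sin(x)/x
    h = T / n
    s = sum(math.sin(rho * i * h) / (rho * i * h)
            * math.exp(-(i * h) ** 2 / 2) * (i * h) ** 2
            for i in range(1, n + 1))
    return 2 * math.pi ** 1.5 / math.gamma(1.5) * h * s

for rho in (0.5, 1.0, 2.2):
    # compare against the known Fourier transform of the Gaussian
    assert abs(rhs(rho) - (2 * math.pi) ** 1.5 * math.exp(-rho * rho / 2)) < 1e-6
```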
More extensively, I. J. Schoenberg noticed that the Fourier transform of any radial measure on $\mathbb{R}^n$ is also representable in the above form and set up the following characterization (see also H. Wendland \cite{We}).
\begin{itemize} \item[{}]{\bf Criterion II} (I. J. Schoenberg \cite{Sc1}, \cite{Sc2}). {\it A continuous function $\phi$ on $[0, \infty)$ is positive semi-definite on $\mathbb{R}^n$ if and only if \begin{equation}\label{G7} \phi(r) = \int_0^\infty \Omega_{\frac{n-2}{2}}(rt) d\nu(t) \end{equation} for a finite positive Borel measure $\nu$ on $[0, \infty)$. Moreover, in the case when $\,d\nu(t) = f(t) t^{n-1} dt\,$ with continuous $f$, $\phi$ is positive definite on $\mathbb{R}^n$ if and only if $\phi$ is nonnegative and non-vanishing. } \end{itemize}
Our generalization of Schoenberg's integrals or Fourier transforms for radial measures takes the following form.
\begin{definition} The Hankel-Schoenberg transform of order $\,\lambda>-1\,$ of a finite positive Borel measure $\nu$ on $[0, \infty)$ is defined by \begin{equation*} \phi(r) = \int_0^\infty\Omega_\lambda(rt)\,d\nu(t)\qquad(0\le r<\infty). \end{equation*} \end{definition}
For those Borel measures on $[0, \infty)$ which are absolutely continuous with respect to Lebesgue measure, it is simple to express the Hankel-Schoenberg transforms in terms of the classical Hankel transforms for which analogues of the Fourier inversion theorem and Parseval's relations are available.
Our evaluations will be of the form \begin{align} \left(1+r^{2}\right)^{-\alpha-\lambda -1} = c(\alpha, \lambda)\int_{0}^{\infty}\Omega_{\lambda}(rt)\big[K_{\alpha}(t)t^{\alpha}\big]t^{2\lambda+1}dt \end{align} for $\,\alpha+\lambda+1>0\,$ with an explicit positive constant $c(\alpha, \lambda)$. By inversions and order-changing transforms, we shall obtain a number of representation formulas for the Mat\'ern functions $M_\alpha$ in terms of $\phi_\beta$'s and vice versa, which suit Schoenberg's criteria and make it possible to find the Fourier transforms of their radial extensions to any Euclidean space.
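This evaluation can be sanity-checked in the special case $\alpha=\lambda=1/2$, where $\Omega_{1/2}(x)=\sin x/x$ and $K_{1/2}(t)t^{1/2}=\sqrt{\pi/2}\,e^{-t}$; since $\int_0^\infty e^{-t}t\sin(rt)\,dt = 2r/(1+r^2)^2$, the constant works out to $c(1/2,1/2)=1/\sqrt{2\pi}$ in this case. A minimal numerical sketch (evaluation points arbitrary):

```python
import math

def rhs(r, T=40.0, n=200000):
    # ∫_0^∞ Ω_{1/2}(rt) [K_{1/2}(t) t^{1/2}] t^{2λ+1} dt  with λ = 1/2,
    # using Ω_{1/2}(x) = sin(x)/x and K_{1/2}(t) t^{1/2} = sqrt(pi/2) e^{-t}
    h = T / n
    return h * sum(math.sin(r * i * h) / (r * i * h)
                   * math.sqrt(math.pi / 2) * math.exp(-i * h) * (i * h) ** 2
                   for i in range(1, n + 1))

for r in (0.3, 0.9, 1.7):
    c = (1 + r * r) ** (-2) / rhs(r)      # the ratio must not depend on r
    assert abs(c - 1 / math.sqrt(2 * math.pi)) < 1e-4
```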
In accordance with the notation of \cite{GMO}, we introduce
\begin{definition} For a univariate function $\phi$ on $[0, \infty)$ and a set of distinct points $\,X= \{\mathbf{x}_j\}_{j\in\mathbb{N}}\subset\mathbb{R}^n,\,$ the Schoenberg matrix is defined to be \begin{equation}
\mathbf{S}_X (\phi) = \Big[ \phi\big(\left|\mathbf{x}_j - \mathbf{x}_k\right|\big)\Big]_{j, \,k\in\mathbb{N}}. \end{equation} \end{definition}
The notion of Schoenberg matrix arises naturally in the attempt to construct an interpolating functional that matches the values of any function at each point of $X$. Briefly, if $\mathbf{S}_X (\phi)$ defines a bounded invertible operator on the space $\ell^2(\mathbb{N})$, then it is possible to construct a Lagrange-type radial basis sequence $\,\left\{u_j^*\right\}_{j\in\mathbb{N}}\,$ by setting
$$u_j^*(\mathbf{x}) = \sum_{k=1}^\infty c_{j, k}\,\phi(|\mathbf{x}- \mathbf{x}_k|),\quad j=1,2, \cdots, $$ and solving the infinite system $\,u_j^*(\mathbf{x}_k) = \delta_{j, k},\,$ which has a unique solution $\,\mathbf{c}_j = (c_{j, 1}, c_{j, 2},\cdots)\in\ell^2(\mathbb{N})\,$ for each $j$. The functional $$A_X(f)(\mathbf{x}) = \sum_{j=1}^\infty f(\mathbf{x}_j)\,u_j^*(\mathbf{x}),$$ definable on any class of functions, obviously interpolates $f$ at $X.$
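A finite-section illustration of this Lagrange-type construction: for a finite set $X$ the infinite system reduces to solving $\mathbf{S}_X(\phi)\,\mathbf{c}_j = \mathbf{e}_j$. The sketch below (the function $\phi$ and the points are hypothetical choices; in the actual setting $X$ is infinite and invertibility on $\ell^2(\mathbb{N})$ is the real issue) builds the cardinal functions and checks $u_j^*(\mathbf{x}_k)=\delta_{j,k}$.

```python
import math

def solve(A, b):
    # Gauss-Jordan elimination with partial pivoting
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        p = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[p] = M[p], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

phi = lambda r: (1 + r * r) ** (-1.5)     # an inverse multi-quadric
X = [0.0, 1.0, 2.5, 4.0, 6.0]             # finite set of distinct points
S = [[phi(abs(x - y)) for y in X] for x in X]   # finite Schoenberg matrix
for j in range(len(X)):
    e = [1.0 if i == j else 0.0 for i in range(len(X))]
    c = solve(S, e)                        # coefficients of u_j^*
    u = lambda x: sum(ck * phi(abs(x - xk)) for ck, xk in zip(c, X))
    for k, xk in enumerate(X):
        assert abs(u(xk) - (1.0 if k == j else 0.0)) < 1e-8
```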
The Schoenberg matrices arise under various guises in other fields of Mathematics. In Functional Analysis, for example, it is common that $\mathbf{S}_X (\phi)$ coincides with the Gramian matrix of a sequence obtained by translating another function $\psi$ by $X$ in an appropriate Hilbert space $H$, that is,
$$\mathbf{S}_X (\phi) = \Big[\big(\psi(|\cdot - \mathbf{x}_j|),\,\psi(|\cdot - \mathbf{x}_k|)\big)_H\Big]_{j, k\in\mathbb{N}}.$$
In such a circumstance, $\,\left\{\psi(|\mathbf{x}- \mathbf{x}_j|)\right\}_{j\in\mathbb{N}}\,$ is a Riesz sequence in $H$ if and only if $\mathbf{S}_X (\phi)$ defines a bounded invertible operator on $\ell^2(\mathbb{N})$.
Our secondary purpose is to study the Schoenberg or Gramian matrices associated with the Mat\'ern functions $M_\alpha$ as well as the functions $\phi_\beta$ with our focuses on their boundedness and invertibility on $\ell^2(\mathbb{N})$.
Our approaches are substantially based on the recent developments \cite{GMO}, \cite{MS} of L. Golinskii {\it et al.} in which a list of criteria for boundedness and invertibility is established from several perspectives. To illustrate, the authors devoted a considerable portion of their work to studying the $L^2$-based Gramian matrices associated to the Mat\'ern functions $M_\alpha$ and obtained their boundedness and invertibility on $\ell^2(\mathbb{N})$ in the range $\,-n/4<\alpha\le 0.$
As we shall show below, we improve their results by extending the range to $\,\alpha>-n/4\,$ and by extending the boundedness and invertibility results to the aforementioned Sobolev space-based Gramian matrices. In applications, we shall prove that the system of type
$\,\big\{M_\alpha(|\mathbf{x}-\mathbf{x}_j|)\big\}_{j\in\mathbb{N}},\,$ where $\,(\mathbf{x}_j)\,$ is an arbitrary set of distinct points of $\mathbb{R}^n$, is a Riesz sequence in $L^2(\mathbb{R}^n)$ or the Sobolev space of certain specified order.
In the same manner, the system of type $\,\big\{\phi_\beta(|\mathbf{x}-\mathbf{x}_j|)\big\}_{j\in\mathbb{N}}\,$ will be shown to be a Riesz sequence in the Hilbert space of functions on $\mathbb{R}^n$ for which $\,\phi_\beta(|\mathbf{x}-\mathbf{y}|)\,$ is a reproducing kernel.
\section{Bessel functions $K_\alpha$} In this section we collect some of the basic properties of $K_\alpha$ relevant to the present work, most of which can be found in \cite{AS}, \cite{E} and \cite{Wa}.
For $\,\alpha\in\mathbb{C},\,$ the modified Bessel function $K_\alpha$ is defined by \begin{align} K_\alpha(z) &= \frac{\pi}{2}\left[\frac{\,I_{-\alpha}(z)-I_{\alpha}(z)\,}{\sin\left(\alpha\pi\right)}\right],\quad\text{where}\\ I_{\alpha}(z) &= \sum_{k=0}^{\infty}\frac{1}{k!\,\Gamma\left(k+\alpha+1\right)}\left(\frac{z}{2}\right)^{2k +\alpha}.\label{K1} \end{align} In the case when $\alpha$ happens to be an integer, $\,\alpha=n,\,$ this formula should be interpreted as $\,K_n(z) = \lim_{\alpha\to\, n} K_\alpha(z).\,$ The Bessel functions $\,I_\alpha, \,K_\alpha\,$ form a fundamental system of solutions to the differential equation \begin{equation}\label{K2} z^2\frac{d^2 u}{dz^2} + z \frac{du}{dz} - (z^2 + \alpha^2) u = 0. \end{equation} Hereafter, we shall be concerned only with $\,\alpha\in\mathbb{R}\,$ and $\,z>0.$
\begin{itemize} \item[(K1)] By definition, it is evident $\,K_{-\alpha}(z) = K_\alpha(z).\,$ For each integer $n$, a series expansion formula for $K_n(z)$ is also available. In particular, \begin{equation}\label{K5} K_0(z) = -\log (z/2) I_0(z) + \sum_{k=0}^\infty \frac{\psi(k+1)}{(k!)^2} \left(\frac z2\right)^{2k}, \end{equation} where $\psi$ denotes the digamma function so that $$\psi(1) = -\gamma, \quad \psi(k+1) = -\gamma + \sum_{j=1}^k\frac 1j\,,$$ with $\gamma$ being the Euler-Mascheroni constant.
\item[(K2)] For $\,\alpha>-1/2\,$ and $\,z>0,$ Schl\"afli's integrals state \begin{align}\label{K3} K_{\alpha}(z) &= \frac{\sqrt{\pi}}{\Gamma(\alpha + 1/2)}\,\left(\frac{z}{2}\right)^{\alpha} \int_{1}^{\infty} e^{-zt}\left(t^{2}-1\right)^{\alpha-\frac{1}{2}}\,dt\nonumber\\ &=\sqrt{\frac{\pi}{2}\,} \frac{e^{-z} z^\alpha}{\Gamma(\alpha+1/2)}\, \int_{0}^{\infty} e^{- zt} \left[ t\left( 1 + \frac t2\right)\right]^{\alpha - \frac 12}\,dt \end{align} in which the latter follows from the former by suitable substitutions. Another form of Schl\"afli's integral reads \begin{equation}\label{K4} K_\alpha(z) = \frac 12 \int_{-\infty}^\infty e^{-z \cosh t - \alpha t}\, dt, \end{equation} which holds for any real $\alpha$ and $\,z>0.$ As a consequence, $K_\alpha(z)$ is positive on the interval $(0, \infty)$.
\item[(K3)] From the differential equation \eqref{K2}, it follows plainly $$\frac{d}{dz} \big[K_\alpha(z) z^\alpha\big] = - K_{\alpha -1}(z) z^\alpha.$$ By (K2), hence, the Mat\'ern function $\,M_\alpha(z) = K_\alpha(z) z^\alpha\,$ is positive and strictly decreasing on the interval $(0, \infty)$.
\item[(K4)] Of great significance is the asymptotic behavior of $K_\alpha$ for $\,\alpha\ge 0.$ \begin{itemize} \item[(i)] As $\,z\to 0,\,$ the series expansions \eqref{K1} and \eqref{K5} yield\footnote{To be more precise, \eqref{K1} shows $$K_\alpha(z) = 2^{\alpha-1}\Gamma(\alpha) z^{-\alpha}\big[ 1 + O\left( z^{\alpha_*}\right)\big],$$ where $\,\alpha_* = \min (2\alpha, \,2)\,$ and \eqref{K5} shows $$K_0(z) = -\log z + \log 2 -\gamma + \big[1-\log(z/2)\big] O\left(z^2\right).$$} \begin{equation*} K_\alpha(z) \,\sim\,\left\{\begin{aligned} &{2^{\alpha-1}\Gamma(\alpha) z^{-\alpha}} &{\quad\text{for}\quad \alpha>0},\\ &{-\log z} &{\quad\text{for}\quad \alpha =0}.\end{aligned}\right. \end{equation*} \item[(ii)] As $\,z\to\infty,$ a version of Hankel's asymptotic formula states \begin{align*} K_{\alpha}(z) = \sqrt{\frac{\pi}{2 z}\,}\,e^{-z}\left[ 1 + \frac{4\alpha^{2}-1}{8z} + O\left(\frac{1}{z^2}\right)\right]. \end{align*} \end{itemize}
\item[(K5)] In the special case $\,\alpha = n + 1/2\,$ with $n$ an integer, it is simple to express $K_\alpha$, and hence the Mat\'ern function $M_\alpha$, in closed forms on evaluation of Schl\"afli's integral \eqref{K3}. To state $M_\alpha$ explicitly, \begin{align}\label{K6} M_{n+ \frac 12}(z) &= \sqrt{\frac{\pi}{2}\,}\,e^{-z} z^{n}\sum_{k=0}^n \frac{(n+k)!}{k! (n-k)!}\,(2z)^{-k},\nonumber\\ M_{-n-\frac 12} (z) &= \sqrt{\frac{\pi}{2}\,}\,e^{-z} z^{-n-1}\sum_{k=0}^n \frac{(n+k)!}{k! (n-k)!}\,(2z)^{-k}, \end{align} where $n$ is a nonnegative integer. A list of positive orders reads \begin{align} M_{\frac 12}(z)&= \sqrt{\frac{\pi}{2}\,}\, e^{-z}\,,\nonumber\\ M_{\frac 32}(z)&= \sqrt{\frac{\pi}{2}\,}\,(1+z)e^{-z}\,,\nonumber\\ M_{\frac 52}(z)&= \sqrt{\frac{\pi}{2}\,}\,\left(3 + 3z+ z^{2}\right)e^{-z} \end{align} which are of considerable interest in spatial statistics (see \cite{G1}, \cite{G2}). A list of negative orders reads \begin{align} M_{-\frac 12}(z)&= \sqrt{\frac{\pi}{2}\,} \,\frac{e^{-z}}{z}\,,\nonumber\\ M_{-\frac 32}(z)&= \sqrt{\frac{\pi}{2}\,}\,\left(\frac{1}{z^2} + \frac{1}{z^3}\right)e^{-z}\,,\nonumber\\ M_{-\frac 52}(z)&= \sqrt{\frac{\pi}{2}\,}\,\left(\frac{1}{z^3} + \frac{3}{z^4} +\frac{3}{z^5}\right)e^{-z}\,. \end{align} \end{itemize}
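The closed forms in (K5) can be checked against Schl\"afli's integral \eqref{K4}. The sketch below evaluates $K_\alpha(z)=\tfrac12\int_{-\infty}^{\infty}e^{-z\cosh t-\alpha t}\,dt$ by simple quadrature and compares $M_\alpha(z)=K_\alpha(z)z^\alpha$ with the listed expressions for $\alpha = 1/2, 3/2, 5/2$ (the evaluation point $z$ is an arbitrary choice).

```python
import math

def K(a, z, T=25.0, n=100000):
    # Schlafli's integral (K4): K_a(z) = (1/2) ∫ exp(-z cosh t - a t) dt;
    # the tails beyond |t| = T are negligible for z of order one.
    h = 2 * T / n
    return 0.5 * h * sum(math.exp(-z * math.cosh(-T + i * h) - a * (-T + i * h))
                         for i in range(n + 1))

z = 1.7
c = math.sqrt(math.pi / 2)
closed = {0.5: c * math.exp(-z),                        # M_{1/2}
          1.5: c * (1 + z) * math.exp(-z),              # M_{3/2}
          2.5: c * (3 + 3 * z + z * z) * math.exp(-z)}  # M_{5/2}
for a, want in closed.items():
    assert abs(K(a, z) * z ** a - want) < 1e-9
```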
\section{Hankel-Schoenberg transforms} The purpose of this section is to establish basic properties of the Hankel-Schoenberg transforms which will be used subsequently.
To begin with, we list the following properties on the kernels $\Omega_\lambda$ which are deducible from those on the Bessel functions $J_\lambda$ (\cite{E}, \cite{Wa}).
\begin{itemize} \item[(J1)] Each $\Omega_\lambda$ is of class $C^\infty(\mathbb{R})$, even and uniformly bounded by $\,1=\Omega_\lambda(0).\,$ A theorem of Bessel-Lommel states that it is an oscillatory function with an infinity of positive simple zeros. A modification of Hankel's asymptotic formula for $J_\lambda$ shows that as $\,t\to\infty,$ \begin{equation*} \Omega_\lambda(t) = \frac{\Gamma(\lambda+1)}{\sqrt{\pi}} \left(\frac t2\right)^{-\lambda -1/2} \left[\cos\left(t - \frac{(2\lambda+1)\pi}{4}\right) + O\left( t^{-1}\right)\right]. \end{equation*}
\item[(J2)] For $\,\lambda>-1/2,\,$ Poisson's integral reads \begin{align*} \Omega_\lambda(t) = \frac{2}{B\left(\lambda + 1/2\,,\,1/2\right)}\, \int_0^1 \cos(t s)\, (1-s^2)^{\lambda -\frac 12}\,ds, \end{align*} where $B$ stands for the Euler beta function defined by $$B(a, \,b) = \int_0^1 t^{a-1} (1-t)^{b-1} dt\qquad(a>0, \,b>0).$$
\item[(J3)] By Liouville's theorem, $\Omega_\lambda$ is expressible in finite terms by algebraic and trigonometric functions if and only if $2\lambda$ is an odd integer. Indeed, the Lommel-type recurrence formula \begin{align*} \Omega_\lambda(t) -\Omega_{\lambda-1}(t)
= \frac{t^2}{\,4\lambda(\lambda+1)\,}\,\Omega_{\lambda+1}(t) \qquad(\lambda>0) \end{align*} may be used to evaluate $\Omega_{n + 1/2}$ for any integer $n$ together with $$\Omega_{-\frac 12}(t) = \cos t,\quad \Omega_{\frac 12}(t) = \frac{\sin t}{t}\,.$$ \end{itemize}
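Both the special values in (J3) and the Lommel-type recurrence $\Omega_\lambda(t)-\Omega_{\lambda-1}(t)=\frac{t^2}{4\lambda(\lambda+1)}\,\Omega_{\lambda+1}(t)$ are easy to confirm numerically from the defining series of $\Omega_\lambda$. A minimal sketch (the evaluation point and order are arbitrary):

```python
import math

def omega(lam, t, terms=80):
    # Ω_λ(t) = Γ(λ+1) Σ_k (-1)^k / (k! Γ(λ+k+1)) (t/2)^{2k}
    g = math.gamma(lam + 1)
    return sum((-1) ** k * g / (math.factorial(k) * math.gamma(lam + k + 1))
               * (t / 2) ** (2 * k) for k in range(terms))

t = 3.0
# special values: Ω_{-1/2}(t) = cos t, Ω_{1/2}(t) = sin t / t
assert abs(omega(-0.5, t) - math.cos(t)) < 1e-12
assert abs(omega(0.5, t) - math.sin(t) / t) < 1e-12
# recurrence: Ω_λ - Ω_{λ-1} = t^2 / (4 λ (λ+1)) Ω_{λ+1}
lam = 1.5
lhs = omega(lam, t) - omega(lam - 1, t)
rhs = t * t / (4 * lam * (lam + 1)) * omega(lam + 1, t)
assert abs(lhs - rhs) < 1e-12
```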
The Hankel transforms of a function $f$ refer to the integrals $$\int_0^\infty J_\lambda(rt) f(t) t dt\qquad(\lambda\in\mathbb{C}).$$ It follows by definition that the Hankel-Schoenberg transforms can be written in terms of the Hankel transforms whenever $\nu$ admits an integrable density $f$, that is, $\,d\nu(t) = f(t) dt.\,$ The Hankel-Watson inversion theorem (\cite{Wa}) states that if $\,\lambda\ge -1/2\,$ and $\,f(t)\sqrt t\,$ is integrable on $[0, \infty)$, then \begin{equation*} \int_0^\infty J_\lambda(rt)\left[\int_0^\infty J_\lambda(ru) f(u)u du\right] rdr = \frac{\,f(t+0) + f(t-0)\,}{2} \end{equation*} at every $\,t>0\,$ such that $f$ is of bounded variation in a neighborhood of $t$.
An obvious modification yields the following inversion formula which may serve as an alternative to the Fourier inversion theorem for radial functions.
\begin{theorem}\label{inversion} {\rm (Inversion)} For $\,\lambda\ge -1/2,\,$ assume that \begin{equation}\label{invc1}
\int_0^\infty |f(t)| t^{-\lambda-1/2}\,dt <\infty. \end{equation}
Then the following holds for every $\,t>0\,$ at which $f$ is continuous: \begin{align*} \left\{\aligned &{\phi(r) = \int_0^\infty \Omega_\lambda(rt) f(t) dt\quad \text{implies}}\\
&{f(t) = \frac{t^{2\lambda+1}}{4^\lambda\left[\Gamma(\lambda+1)\right]^2}\,
\int_0^\infty \Omega_\lambda(rt)\, \phi(r) r^{2\lambda+1}dr.}\endaligned\right. \end{align*} \end{theorem}
A version of Parseval's theorem is deducible from its equivalent for the Hankel transforms in a trivial manner.
\begin{theorem}\label{Parseval} {\rm (Parseval's relation)} For $\,\lambda>-1\,,$ let \begin{align*} \phi_j(r) = \int_0^\infty\Omega_\lambda(rt) f_j(t) t^{2\lambda +1} dt,\quad j=1, 2. \end{align*} If both integrals are absolutely convergent, then \begin{equation*} \int_0^\infty f_1(t) f_2(t) t^{2\lambda+1} dt = \frac{1}{4^\lambda\left[\Gamma(\lambda+1)\right]^2}\, \int_0^\infty \phi_1(r) \phi_2(r) r^{2\lambda+1} dr. \end{equation*} \end{theorem}
\begin{lemma}\label{basic} For $\,\lambda>\rho>-1\,$ and $\,r\ge 0,$ \begin{equation*} \Omega_\lambda(r) = \frac{2}{B(\rho +1, \,\lambda-\rho)} \int_0^\infty\Omega_\rho(rt) (1-t^2)_+^{\lambda-\rho-1}t^{2\rho+1} dt. \footnote{As usual, we write $\,x_+ = \max\,(x, 0)\,$ for $\,x\in\mathbb{R}.\,$}\end{equation*} \end{lemma}
\begin{proof} If $\nu$ denotes the probability measure $$d\nu(t) = \frac{2}{B(\rho +1, \,\lambda-\rho)} \,(1-t^2)_+^{\lambda-\rho-1}t^{2\rho+1} dt,$$ then it has finite moments of all orders with $$\int_0^\infty t^{2k} d\nu(t) = \frac{\Gamma(k+\rho+1)}{\Gamma(\rho+1)}\cdot \frac{\Gamma(\lambda+1)}{\Gamma(k+\lambda+1)},\quad k=0,1,\cdots.$$ It follows from integrating termwise, readily justified, that \begin{align*} \int_0^\infty\Omega_\rho(rt) d\nu(t) &= \Gamma(\rho+1) \sum_{k=0}^\infty\frac{(-1)^k}{k!\,\Gamma(k+\rho+1)}\left(\frac r2\right)^{2k} \int_0^\infty t^{2k} d\nu(t)\\ &= \Gamma(\lambda+1)\sum_{k=0}^\infty\frac{(-1)^k}{k!\,\Gamma(k+\lambda +1)}\left(\frac r2\right)^{2k}\\ &=\Omega_\lambda(r). \end{align*} \end{proof}
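Lemma \ref{basic} can be checked numerically in the special case $\,\rho=-1/2,\,\lambda=3/2,\,$ where $\Omega_{-1/2}(x)=\cos x$, $B(1/2,2)=4/3$, and $\Omega_{3/2}$ has the closed form $\,\Omega_{3/2}(r)=3\left(\sin r/r-\cos r\right)/r^2\,$ obtainable from (J3). A minimal quadrature sketch (evaluation points arbitrary):

```python
import math

def omega32(r):
    # closed form of Ω_{3/2}, obtained from the recurrence in (J3)
    return 3.0 * (math.sin(r) / r - math.cos(r)) / (r * r)

def lemma_rhs(r, n=200000):
    # (2 / B(1/2, 2)) ∫_0^1 cos(rt) (1 - t^2) dt  with B(1/2, 2) = 4/3,
    # computed by the midpoint rule
    h = 1.0 / n
    return 1.5 * h * sum(math.cos(r * (i + 0.5) * h)
                         * (1 - ((i + 0.5) * h) ** 2) for i in range(n))

for r in (0.7, 2.0, 5.3):
    assert abs(omega32(r) - lemma_rhs(r)) < 1e-8
```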
The Hankel-Schoenberg transforms may be regarded as a generalization of the radial Fourier transforms or Schoenberg's integrals due to the following order-changing interrelations.
\begin{theorem}\label{orderwalk} Let $\,\lambda>\frac{n-2}{2}\,$ with $n$ a positive integer. For any finite positive Borel measure $\nu$ on $[0, \infty)$ which is not concentrated at zero, its Hankel-Schoenberg transform of order $\lambda$ can be represented as \begin{align*} & \int_0^\infty \,\Omega_{\lambda}(rt) d\nu(t) = \int_0^\infty \,\Omega_{\frac{n-2}{2}}(rt)\, W_\lambda(\nu)(t) t^{n-1} dt,\quad\text{where}\\ & W_\lambda(\nu)(t) = \frac{2}{B\left(\frac n2, \,\lambda +1 -\frac n2\right)} \int_0^\infty\left( 1- \frac{t^2}{s^2}\right)^{\lambda - \frac n2}_+ s^{-n} d\nu(s). \end{align*}
\noindent Moreover, $\,d\mu(t) = W_\lambda(\nu)(t) t^{n-1} dt\,$ defines a finite positive Borel measure on $[0, \infty)$ with the total mass $\mu\left([0, \infty)\right) = \nu\left([0, \infty)\right).$ \end{theorem}
\begin{proof} As a special case of Lemma \ref{basic}, the choice $\,\rho=\frac{n-2}{2}\,$ gives \begin{equation}\label{O1} \Omega_\lambda(r) = \frac{2}{B\left(\frac n2, \,\lambda+1-\frac n2\right)} \int_0^\infty\Omega_{\frac{n-2}{2}}(rs) (1-s^2)_+^{\lambda-\frac n2}s^{n-1} ds, \end{equation} whence the result follows by interchanging the order of integrations.
Since \begin{align*} \int_0^\infty\left( 1- \frac{t^2}{s^2}\right)^{\lambda - \frac n2}_+ t^{n-1} dt = \frac{s^n}{2}\int_0^1 (1-u)^{\lambda-\frac n2} u^{\frac n2 -1} du \end{align*} for each $\,s>0,$ it is straightforward to find \begin{align*} \mu([0, \infty)) &= \int_0^\infty W_\lambda(\nu)(t) t^{n-1} dt\\ &= \frac{2}{B\left(\frac n2, \,\lambda +1 -\frac n2\right)} \int_0^\infty\int_0^\infty\left( 1- \frac{t^2}{s^2}\right)^{\lambda - \frac n2}_+ s^{-n} d\nu(s) t^{n-1} dt\\ &= \frac{2}{B\left(\frac n2, \,\lambda +1 -\frac n2\right)} \int_0^\infty \int_0^\infty\left( 1- \frac{t^2}{s^2}\right)^{\lambda - \frac n2}_+ t^{n-1} dt s^{-n} d\nu(s)\\ &=\nu([0, \infty)). \end{align*}
\end{proof}
\begin{remark} A positive Borel measure $\nu$ on $[0, \infty)$ is concentrated at zero if it is a constant multiple of Dirac mass at zero, that is, $\,\nu = c\,\delta_0\,$ with $\,c>0.$ For such a Borel measure $\nu$, its Hankel-Schoenberg transform is simply $$ \int_0^\infty\Omega_{\lambda}(rt) d\nu(t) = c\,\Omega_\lambda(0) = c.$$ \end{remark}
\section{Schoenberg representations} Our aim in this section is to set up Schoenberg's representations for Mat\'ern functions which ensure their positive definiteness.
\begin{lemma}\label{lemmaS0} For $\,\alpha\in\mathbb{R}\,$ and $\,z>0,\,$ we have \begin{equation}\label{S1} K_\alpha(z) z^\alpha = 2^{-\alpha -1}\int_0^\infty \exp\left(-z^2 t - \frac {1}{4t}\right) t^{-\alpha -1} dt. \end{equation} \end{lemma}
\begin{proof} For any real $\alpha$ and $\,z>0,$ if we make substitution $\,z e^{-t} = 2s\,$ in the second form of Schl\"afli's integral \eqref{K4}, then \begin{align*} K_\alpha(z) &= \frac 12\int_{-\infty}^\infty \exp\left(-z\cosh t-\alpha t\right) dt\\ &= 2^{\alpha-1}z^{-\alpha}\int_0^\infty \exp\left(-s - \frac{z^2}{4s}\right) s^{\alpha -1} ds \end{align*} from which \eqref{S1} follows on making another substitution $\,s= 1/4t.$ \end{proof}
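Identity \eqref{S1} lends itself to a direct numerical check: the left-hand side can be evaluated via Schl\"afli's integral \eqref{K4}, and on the right the substitution $t=e^u$ turns the integral into one over the whole line with doubly exponential decay. A sketch (orders and evaluation point chosen arbitrarily):

```python
import math

def lhs(a, z, T=25.0, n=100000):
    # K_a(z) z^a, with K_a computed from Schlafli's integral (K4)
    h = 2 * T / n
    K = 0.5 * h * sum(math.exp(-z * math.cosh(-T + i * h) - a * (-T + i * h))
                      for i in range(n + 1))
    return K * z ** a

def rhs(a, z, U=30.0, n=100000):
    # 2^{-a-1} ∫_0^∞ exp(-z^2 t - 1/(4t)) t^{-a-1} dt; with t = e^u the
    # integrand becomes exp(-z^2 e^u - e^{-u}/4 - a u) on the whole line
    h = 2 * U / n
    return 2.0 ** (-a - 1) * h * sum(
        math.exp(-z * z * math.exp(-U + i * h)
                 - 0.25 * math.exp(U - i * h) - a * (-U + i * h))
        for i in range(n + 1))

for a in (-0.7, 0.5, 2.3):
    assert abs(lhs(a, 1.3) - rhs(a, 1.3)) < 1e-8
```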
In the case $\,\alpha>0,\,$ it follows from the asymptotic behavior of $K_\alpha$ near zero, as stated in (K4), that the Mat\'ern function $M_\alpha$ is well defined as a continuous function on $[0, \infty)$ with the limiting value $\,M_\alpha(0) = 2^{\alpha -1}\,\Gamma(\alpha).$ For this reason, it will be convenient to consider the following types of Mat\'ern functions which are frequently used in many fields (see e.g. \cite{G1}).
\begin{definition} For $\,\alpha>0,\,$ put \begin{equation}\label{M} \mathcal{M}_\alpha(z) = \frac{2^{1-\alpha}}{\Gamma(\alpha)}\,K_\alpha(z) z^\alpha\qquad(z\ge 0). \end{equation} \end{definition}
We recall that a function $\phi$ is said to be {\it completely monotone on $[0, \infty)$} if it is continuous on $[0, \infty)$ and satisfies the condition $\,(-1)^m\phi^{(m)}(z)\ge 0\,$ for all nonnegative integers $m$ and $\,z>0\,$ (see e.g. \cite{We}).
\begin{theorem}\label{corollaryS1}{\rm (Theorem \ref{theorem1.1})} For $\,\alpha>0,\,$ we have \begin{equation*} \mathcal{M}_\alpha(z) = \int_0^\infty e^{-z^2 t} f_\alpha(t) dt\quad\,\,(z\ge0), \end{equation*} where $f_\alpha$ is the continuous probability density on $[0, \infty)$ defined by \begin{equation*} f_\alpha(t) = \left\{\begin{aligned} &{\frac{1}{4^\alpha\Gamma(\alpha)}\, \exp\left(-\frac{1}{4t}\right) t^{-\alpha-1}} &{\text{for}\quad t>0},\\ &{\qquad\qquad\,\, 0} &{\text{for}\quad t=0}.\end{aligned}\right. \end{equation*} As a consequence, $\mathcal{M}_\alpha$ is continuous and positive definite on every $\mathbb{R}^n$. In addition, the function $\mathcal{M}_\alpha\left(\sqrt{z}\,\right)$ is also positive definite on every $\mathbb{R}^n$ and completely monotone on $[0, \infty)$. \end{theorem}
\begin{proof} Since it is elementary to verify that $f_\alpha$ is a continuous probability density on $[0, \infty)$, the statements on $\mathcal{M}_\alpha(z)$ are immediate consequences of Lemma \ref{lemmaS0} and Schoenberg's Criterion I on positive definiteness.
In the special case $\,\alpha=1/2,\,$ we have \begin{equation}\label{S2} e^{-z} = \int_0^\infty e^{-z^2 t} f_{1/2}(t) dt\quad\,\,(z\ge0), \end{equation} whence it is straightforward to deduce the integral representations \begin{align} \mathcal{M}_\alpha\left(\sqrt z\,\right) &= \int_0^\infty e^{-z t} f_\alpha(t) dt\label{S3}\\ &=\int_0^\infty e^{-z^2 u} g_\alpha(u) du\label{S4}, \end{align} where $g_\alpha$ stands for the function defined by $\,g_\alpha(0) =0\,$ and \begin{equation*} g_\alpha(u) = \frac{u^{-3/2}}{2^{2\alpha+1}\sqrt\pi\,\Gamma(\alpha)}\int_0^\infty \exp\left(-\frac{1}{4t} -\frac{t^2}{4u}\right) t^{-\alpha} dt \end{equation*} for $\,u>0.\,$ As readily verified, $g_\alpha$ is a continuous probability density on $[0, \infty)$ and hence it follows from \eqref{S4} and Schoenberg's Criterion I that the function $\mathcal{M}_\alpha\left(\sqrt{z}\,\right)$ is positive definite on every $\mathbb{R}^n$.
That it is completely continuous on $[0, \infty)$ is a consequence of \eqref{S3}.\footnote{It may be proved either by differentiating under the integral sign or by applying the well-known theorem of Bernstein-Hausdorff-Widder which states that a function $f$ is completely continuous on $[0, \infty)$ if and only if $$ f(r) = \int_0^\infty e^{-rt} d\mu(t)\qquad(r\ge 0)$$ for some finite positive Borel measure $\mu$ on $[0, \infty)$ (see e.g. \cite{We}).} \end{proof}
We are now concerned with the second form of Schoenberg's integrals. To facilitate computations as well as inversion, it is advantageous to consider the Hankel-Schoenberg transforms.
As it is conventional, we shall use the notation of Pochhammer and Barnes for the generalized hypergeometric functions \begin{equation*} {}_pF_q\left(a_1, \cdots, a_p;\,b_1, \cdots, b_q;\,x\right) =\sum_{k=0}^\infty\frac{\left(a_1\right)_k\cdots\left(a_p\right)_k}{k! \left(b_1\right)_k\cdots\left(b_q\right)_k}\,x^k \end{equation*} in which the symbol $(a)_k$ for a non-zero real number $a$ stands for \begin{equation*} (a)_k = \left\{\begin{aligned} &{a(a+1)\cdots (a+k-1)} &{\text{for} \quad k\ge 1}, \\ &{\qquad 1} &{\text{for} \quad k = 0}.\end{aligned}\right. \end{equation*}
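For reference, the Pochhammer symbol and the Gauss series are straightforward to evaluate directly. The sketch below is our own (names are ad hoc); it builds the partial sums from the term-ratio recurrence $(a)_{k+1}/(a)_k = a+k$, which avoids overflowing intermediate factorials.

```python
def pochhammer(a, k):
    # (a)_k = a (a+1) ... (a+k-1), with (a)_0 = 1
    out = 1.0
    for i in range(k):
        out *= a + i
    return out

def hyp2f1(a, b, c, x, terms=120):
    # partial sum of the Gauss series 2F1(a, b; c; x) for |x| < 1,
    # generated via term ratios instead of explicit Pochhammer products
    total, term = 0.0, 1.0
    for k in range(terms):
        total += term
        term *= (a + k) * (b + k) * x / ((c + k) * (k + 1.0))
    return total
```

A quick self-test uses the cancellation ${}_2F_1(a, b;\,b;\,x) = (1-x)^{-a}$.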
The following is easily obtainable from Schl\"afli's integrals. As it is well known, we shall omit the proof (see \cite{AS}, \cite{E}, \cite{Wa}).
\begin{lemma}\label{lemmaS1}
For $\,\alpha\in\mathbb{R}\,$ and $\,\beta>|\alpha|,\,$ we have \begin{equation}\label{S5} \int_0^\infty K_{\alpha}(t)t^{\beta-1}dt = 2^{\beta-2} \Gamma\left(\frac{\beta+\alpha}{2}\right)\Gamma\left(\frac{\beta-\alpha}{2}\right). \end{equation} \end{lemma}
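If one substitutes Schl\"afli's integral for $K_\alpha$ and integrates in $t$ first, the left side of \eqref{S5} collapses to $\Gamma(\beta)\int_0^\infty \cosh(\alpha u)\cosh(u)^{-\beta}\,du$, which makes the lemma easy to test numerically. This reduction and the code below are our own sketch, not part of the text; the names are ad hoc.

```python
import math

def simpson(f, a, b, n):
    # composite Simpson rule on [a, b] with n (even) subintervals
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

def mellin_lhs(alpha, beta):
    # Gamma(beta) * int_0^inf cosh(alpha u) / cosh(u)^beta du,
    # obtained by integrating Schlafli's representation in t first
    g = lambda u: math.cosh(alpha * u) / math.cosh(u) ** beta
    return math.gamma(beta) * simpson(g, 0.0, 40.0, 8000)

def mellin_rhs(alpha, beta):
    # 2^{beta-2} Gamma((beta+alpha)/2) Gamma((beta-alpha)/2)
    return (2.0 ** (beta - 2) * math.gamma((beta + alpha) / 2.0)
            * math.gamma((beta - alpha) / 2.0))
```

The integrand decays like $e^{(|\alpha|-\beta)u}$, so truncation at $u = 40$ is harmless whenever $\beta - |\alpha|$ is bounded away from $0$.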
\begin{lemma}\label{lemmaS2}
Let $\,\alpha\in\mathbb{R}\,$ and $\,\beta>|\alpha|.\,$ For the probability measure $$d\nu(t) =\frac{1}{2^{\beta-2} \Gamma\left(\frac{\beta+\alpha}{2}\right)\Gamma\left(\frac{\beta-\alpha}{2}\right)}\, K_{\alpha}(t)t^{\beta-1}dt,$$ the Hankel-Schoenberg transform of order $\lambda>-1$ is given by \begin{equation}\label{S6} \int_{0}^{\infty}\Omega_{\lambda}(rt)d\nu(t) ={}_2F_{1}\left(\frac{\beta-\alpha}{2},\,\frac{\beta+\alpha}{2};\,\lambda+1;\,-r^2\right)\,. \end{equation} \end{lemma}
\begin{proof} A simple modification of \eqref{S5} yields
\begin{align*}
\int_0^\infty t^{2k}d\nu(t)=2^{2k} \left(\frac{\beta+\alpha}{2}\right)_{k}
\left(\frac{\beta-\alpha}{2}\right)_{k},\quad k=0,1,2,\cdots.
\end{align*}
Integrating termwise, we deduce
\begin{align*}
\int_{0}^{\infty}\Omega_{\lambda}(rt)d\nu(t)&=
\sum_{k=0}^\infty \frac{\left(-1\right)^k}{k!\left(\lambda+1\right)_k} \left(\frac{r}{2}\right)^{2k}
\int_0^\infty t^{2k} d\nu(t)\\
&=\sum_{k=0}^\infty
\frac{\left(\frac{\beta+\alpha}{2}\right)_k\left(\frac{\beta-\alpha}{2}\right)_k}{k!\left(\lambda+1\right)_k}
\left(-r^2\right)^k,
\end{align*}
which is precisely the stated formula \eqref{S6}. \end{proof}
By obvious cancellations, the hypergeometric function in \eqref{S6} reduces to a binomial series in the case $\,\beta=\alpha + 2(\lambda+1)\,$ or $\,\beta=-\alpha + 2(\lambda+1).\,$ To be precise, we have the following general results which include Schoenberg's representations for Mat\'ern functions.
\begin{theorem}\label{theoremS1} Let $\,\lambda>-1\,$ and $\,\alpha+\lambda+1>0.$ For each $\,r\ge 0,$ we have \begin{align}\label{S7} (1+r^{2})^{-\alpha-\lambda-1} &=\frac{1}{2^{\alpha+2\lambda}\Gamma(\lambda+1)\Gamma(\alpha+\lambda+1)} \nonumber\\ &\qquad\times\quad\int_{0}^{\infty}\Omega_{\lambda}(rt) \big[K_{\alpha}(t) t^{\alpha}\big] t^{2\lambda+1} dt. \end{align} Moreover, if $\, 2\alpha +\lambda +3/2>0\,$ in addition, then for each $\,z>0,$ \begin{align}\label{S8} K_\alpha(z) z^\alpha =\frac{2^\alpha\Gamma(\alpha+\lambda+1)}{\Gamma(\lambda+1)} \int_0^\infty \Omega_\lambda(zt) (1+ t^2)^{-\alpha-\lambda-1} t^{2\lambda+1} dt. \end{align} \end{theorem}
\begin{proof} Formula \eqref{S7} follows from the special case $\,\beta=\alpha+2\lambda+2\,$ of formula \eqref{S6} in Lemma \ref{lemmaS2} together with Newton's binomial theorem
\begin{align*}
\sum_{k=0}^{\infty}\frac{(\alpha+\lambda+1)_{k}}{k!}\,(-r^2)^{k} =(1+r^2)^{-\alpha-\lambda-1}.
\end{align*}
As the function $\,f(t) = K_{\alpha}(t) t^{\alpha +2\lambda+1} \,$ is continuous on $(0, \infty)$ and \begin{align*}
\int_0^\infty |f(t)| t^{-\lambda-1/2} dt &= \int_0^\infty K_{\alpha}(t) t^{\alpha +\lambda+1/2} dt\\ &= 2^{\alpha + \lambda -1/2} \Gamma\left(\alpha + \frac{2\lambda +3}{4}\right)\Gamma\left(\frac{2\lambda+3}{4}\right)<\infty \end{align*} by Lemma \ref{lemmaS1}, applicable due to the condition $\,2\alpha +\lambda +3/2>0,\,$ \eqref{S8} follows from inverting \eqref{S7} in accordance with Theorem \ref{inversion}. \end{proof}
Choosing $\,\alpha, \lambda\,$ suitably or regarding them as variable parameters, one may exploit these formulas from several perspectives. If we are concerned with the Fourier transforms in a fixed Euclidean space $\mathbb{R}^n$, for example, the first formula may be applied to yield the following.
\begin{itemize} \item[(a)] For $\,\alpha>0,\,$ if we recall \eqref{G1} \begin{equation*} G_\alpha(z) = \frac{1}{2^{\alpha-1 + \frac n2}\,\pi^{\frac n2}\, \Gamma(\alpha)}\,K_{\alpha-\frac n2}(z) z^{\alpha - \frac n2}, \end{equation*} the special case $\,\lambda = (n-2)/2\,$ of \eqref{S7} yields \begin{equation*} (1+r^2)^{-\alpha} = \frac{2\pi^{n/2}}{\Gamma\left(n/2\right)} \int_0^\infty\Omega_{\frac{n-2}{2}} (rt) G_\alpha(t) t^{n-1} dt \end{equation*} so that we obtain the Fourier transform formula \begin{equation}\label{S9}
\widehat{G_\alpha}(\xi) = (1 +|\xi|^2)^{-\alpha}. \end{equation}
\item[(b)] As $\alpha$ varies over $\,\alpha> 0,\,$ \eqref{S9} expresses the inverse multi-quadrics of any positive order as Fourier transforms of the functions $G_\alpha(\mathbf{x})$. By contrast, the Hankel-Schoenberg transform formula \eqref{S7} enables us to obtain such Fourier representations by varying $\lambda$ with a fixed $\alpha$.
To be specific, let us fix $\,\alpha>-n/2\,$ and set \begin{align} F_{\alpha, \lambda}(z) &= \frac{1}{2^{\alpha + 2\lambda} \pi^{\frac n2}\Gamma\left(\lambda +1 -\frac n2\right)\Gamma(\alpha + \lambda +1)} \nonumber\\ &\qquad\qquad\times\quad \int_z^\infty (s^2 - z^2)^{\lambda -\frac n2} \big[K_\alpha(s) s^\alpha\big] s ds \end{align} for $\,\lambda>(n-2)/2.$ By Theorem \ref{orderwalk}, we may put \eqref{S7} in the form \begin{equation*} (1+r^2)^{-\alpha - \lambda-1} = \frac{2\pi^{n/2}}{\Gamma\left(n/2\right)} \int_0^\infty\Omega_{\frac{n-2}{2}} (rt) F_{\alpha, \lambda}(t) t^{n-1} dt. \end{equation*}
If we write $\, F_{\alpha, \lambda}(\mathbf{x}) = F_{\alpha, \lambda}(|\mathbf{x}|),\,\mathbf{x}\in\mathbb{R}^n,\,$ then \begin{equation}\label{S10}
\widehat{ F_{\alpha, \lambda}}(\xi) = (1 +|\xi|^2)^{-\alpha -\lambda-1}. \end{equation}
As $\lambda$ varies in the range $\,\lambda>(n-2)/2,$ this Fourier transform formula represents the inverse multi-quadrics of order greater than $\,\alpha + n/2.$ \end{itemize}
A noteworthy feature of Mat\'ern functions is the following invariance which follows immediately from \eqref{S8} by reformulation.
\begin{corollary}\label{corollaryS1} If $\,\alpha>0,\,$ then for any $\,\lambda>-1,$ \begin{align}\label{S11} \mathcal{M}_\alpha(z) = \int_0^{\infty}\Omega_\lambda (zt)\, d\nu_{\alpha, \lambda}(t) \qquad(z\ge 0), \end{align} where $\nu_{\alpha, \lambda}$ denotes the probability measure on $[0, \infty)$ defined by \begin{align*} d\nu_{\alpha, \lambda}(t)= \frac{2}{B(\alpha, \,\lambda+1)}(1+t^2)^{-\alpha-\lambda-1} t^{2\lambda+1} dt. \end{align*} \end{corollary}
\begin{remark} In view of Schoenberg's Criterion II, this integral formula with $\,\lambda = (n-2)/2\,$ provides another proof of the positive definiteness of the Mat\'ern functions. In particular, the choice of $\,n=1\,$ gives \begin{align*} \mathcal{M}_\alpha(z) = \frac{2}{B\left(\alpha,\,1/2\right)} \int_{0}^{\infty}\frac{\cos (zt)\,dt}{\,\left(1 + t^{2}\right)^{\alpha+ 1/2}\,}, \end{align*} the formula obtained by Basset, Malmst\'en and Poisson (see \cite{Wa}). \end{remark}
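The Basset-type formula is easy to test numerically at $\alpha = 3/2$, where $\mathcal{M}_{3/2}(z) = e^{-z}(1+z)$ and $B(3/2, 1/2) = \pi/2$. The sketch below is our own check with ad hoc names; the slowly oscillating integral is simply truncated at $t = 400$, where the tail of $(1+t^2)^{-2}$ is negligible.

```python
import math

def simpson(f, a, b, n):
    # composite Simpson rule on [a, b] with n (even) subintervals
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

def matern32_via_cos(z):
    # (2 / B(3/2, 1/2)) * int_0^inf cos(z t) (1 + t^2)^{-2} dt = (4/pi) * integral
    g = lambda t: math.cos(z * t) / (1.0 + t * t) ** 2
    return (4.0 / math.pi) * simpson(g, 0.0, 400.0, 40000)
```

The step size $h = 0.01$ resolves the oscillation of $\cos(zt)$ comfortably for moderate $z$.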
\section{Schoenberg matrices on $\ell^2(\mathbb{N})$} In this section we shall investigate whether Schoenberg matrices of Mat\'ern functions or inverse multi-quadrics, in a fixed Euclidean space $\mathbb{R}^n$, give rise to bounded invertible operators on the Hilbert space $\ell^{2}(\mathbb{N})$.
As is common in the theory of scattered data approximation, we shall deal with arbitrary sets of the type $\,X = \left\{ \mathbf{x}_{j}\in \mathbb{R}^n : j\in\mathbb{N}\right\}\,$ satisfying \begin{equation}\label{M1}
\delta(X) = \inf_{j\neq k} \left|\mathbf{x}_{j}-\mathbf{x}_{k}\right|>0,\quad \dim\left[ {\rm span}(X)\right] = d \end{equation} for some $\,1\le d\le n.$ Our analysis will be based on the following.
\begin{proposition}\label{propM} {\rm (\cite{GMO})} Let $f$ be a nonnegative function defined on $[0, \infty)$.
\begin{itemize} \item[\rm(i)] Suppose $f$ is monotone decreasing, $\,f(0) =1\,$ and $\,f(t) t^{d-1}\,$ is integrable on $[0, \infty).$ Then the Schoenberg matrix $\,\mathbf{S}_{X}(f)$ defines a bounded self-adjoint operator on $\ell^{2}(\mathbb{N})$ with \begin{equation*}
\left\|\mathbf{S}_{X}(f)\right\|\le 1+\frac{ d(5^d-1)}{[\delta(X)]^d}\int_{0}^{\infty}f(t)t^{d-1}dt. \end{equation*} Moreover, if $X$ satisfies the additional separation assumption \begin{equation*} \delta(X) > \left[d(5^d-1)\int_{0}^{\infty}f(t)t^{d-1}dt\right]^{1/d}, \end{equation*} then $\,\mathbf{S}_{X}(f)$ defines a bounded invertible operator on $\ell^{2}(\mathbb{N})$.
\item[\rm(ii)] Suppose $\,n\ge 2\,$ and $f$ admits an integral representation $$f(r) = \int_0^\infty e^{-r^2 t}\, d\nu(t)\quad(r\ge 0)$$ for a finite positive Borel measure $\nu$ such that it is equivalent to Lebesgue measure on $[0, \infty)$ and satisfies the moment condition $$\int_0^\infty t^{-d/2}\, d\nu(t)<\infty.$$ Then $\,\mathbf{S}_{X}(f)$ defines a bounded invertible operator on $\ell^{2}(\mathbb{N})$. \end{itemize} \end{proposition}
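For a concrete feel for part (i), the sketch below (our own illustration; the point configuration is hypothetical) builds a finite section of the Schoenberg matrix for $f(t) = e^{-t}$ on the one-dimensional lattice $x_j = j\delta$ and checks that every row sum stays below the bound $1 + d(5^d-1)\delta^{-d}\int_0^\infty f(t)t^{d-1}dt$.

```python
import math

# hypothetical configuration: d = 1, f(t) = e^{-t}, points x_j = j * delta
delta = 2.0
xs = [j * delta for j in range(200)]

# row sums of the truncated Schoenberg matrix S_X(f) = [f(|x_j - x_k|)]
row_sums = [sum(math.exp(-abs(x - y)) for y in xs) for x in xs]

# the bound of part (i): 1 + d(5^d - 1)/delta^d * int_0^inf e^{-t} dt = 1 + 4/delta
bound = 1.0 + 4.0 / delta
```

Since the matrix is symmetric with nonnegative entries, its maximum row sum dominates the $\ell^2$ operator norm by Schur's test, so the finite sections are uniformly bounded.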
\begin{remark}
A positive Borel measure $\nu$ on $[0, \infty)$ is equivalent to Lebesgue measure $|\cdot|$ if both are absolutely continuous with respect to each other. By the Radon-Nikodym theorem, a necessary and sufficient condition for $\nu$ to be equivalent to Lebesgue measure is that $\,d\nu(t) = p(t) dt\,$ for a nonnegative density $p$ such that $\,{\rm supp} (p) = [0, \infty)\,$ and
$$\int_I p(t) dt =0 \,\Longleftrightarrow\, |I|=0$$ for any Borel set $\,I\subset [0, \infty).$
\end{remark}
As the operator norm bound and the invertibility condition of part (i) are slightly different from the original ones presented in \cite{GMO}, we review their proof of part (i) in the appendix.
Now that Schoenberg's representations are available for Mat\'ern functions of type \eqref{M}, it is a simple matter to prove the following.
\begin{theorem}\label{theoremM1} Let $X$ be an arbitrary set of points of $\mathbb{R}^n$ satisfying \eqref{M1}. For $\,\alpha>0,\,$ consider the Schoenberg matrix of $\mathcal{M}_\alpha$, $$\mathbf{S}_X\left(\mathcal{M}_\alpha\right) = \Big[ \mathcal{M}_\alpha\left(\mathbf{x}_j - \mathbf{x}_k\right)\Big]_{j, \,k\in\mathbb{N}}.$$
\begin{itemize} \item[\rm(i)] $\,\mathbf{S}_X\left(\mathcal{M}_\alpha\right)$ defines a bounded self-adjoint operator on $\ell^{2}(\mathbb{N})$ with
$$\left\| \mathbf{S}_X\left(\mathcal{M}_\alpha\right)\right\| \le 1 + \frac{d\, 2^{d-1} (5^d -1) \Gamma\left(\alpha + \frac d2\right)\Gamma\left(\frac d2\right)} {\left[\delta(X)\right]^d\,\Gamma(\alpha)}\,.$$
\item[\rm(ii)] For $\,n\ge 2,\,$ $\,\mathbf{S}_X\left(\mathcal{M}_\alpha\right)$ defines a bounded invertible operator on $\ell^{2}(\mathbb{N})$. In the case $\,n=d=1,\,$ if $X$ satisfies the additional assumption $$\delta(X) > \frac{4\,\Gamma\left(\alpha + \frac 12\right)\Gamma\left(\frac 12\right)} {\Gamma(\alpha)}\,,$$ then it defines a bounded invertible operator on $\ell^{2}(\mathbb{N})$. \end{itemize} \end{theorem}
\begin{proof} An application of Lemma \ref{lemmaS1} gives \begin{align*} \int_0^\infty \mathcal{M}_\alpha(t) t^{d-1} dt &= \frac{2^{1-\alpha}}{\Gamma(\alpha)}\int_0^\infty K_\alpha(t) t^{\alpha + d-1} dt\\ &= \frac{2^{d-1}\Gamma\left(\alpha + \frac d2\right)\Gamma\left(\frac d2\right)} {\Gamma(\alpha)}\,. \end{align*}
Since $\,\mathcal{M}_\alpha(0) = 1\,$ and $\mathcal{M}_\alpha$ is strictly decreasing on the interval $[0, \infty)$ as noted in (K3), the criterion in the first part of Proposition \ref{propM} is applicable and part (i) follows with the stated operator norm bound.
Concerning part (ii), we invoke Theorem \ref{theorem1.1} to represent $$\mathcal{M}_\alpha(z) = \int_0^\infty e^{-z^2 t} f_\alpha(t) dt \qquad(z\ge 0)$$ in which $f_\alpha$ stands for the probability density \begin{equation*} f_\alpha(t) = \left\{\begin{aligned} &{\frac{1}{4^\alpha\Gamma(\alpha)}\, \exp\left(-\frac{1}{4t}\right) t^{-\alpha-1}} &{\text{for}\quad t>0},\\ &{\qquad\qquad\,\, 0} &{\text{for}\quad t=0}.\end{aligned}\right. \end{equation*}
Since the measure determined by $\,f_\alpha(t) dt\,$ is obviously equivalent to Lebesgue measure on $[0, \infty)$ and it is elementary to compute $$\int_0^\infty t^{-d/2} f_\alpha(t) dt = \frac{2^d\Gamma\left(\alpha + \frac d2\right)}{\Gamma(\alpha)} <\infty,$$ the criterion in the second part of Proposition \ref{propM} implies the invertibility of $\,\mathbf{S}_X\left(\mathcal{M}_\alpha\right)$ in the case $\,n\ge 2.$ The last statement on the invertibility when $\,n=d=1\,$ follows by the first criterion of Proposition \ref{propM}. \end{proof}
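For $\alpha = 1/2$ the theorem is transparent, since $\mathcal{M}_{1/2}(z) = e^{-z}$ and the separation threshold in part (ii) becomes exactly $4\,\Gamma(1)\Gamma(1/2)/\Gamma(1/2) = 4$. The sketch below (ours; the point set is hypothetical) confirms that beyond this threshold the truncated Schoenberg matrix is strictly diagonally dominant, so that $\|I - \mathbf{S}_X(\mathcal{M}_{1/2})\| < 1$ and invertibility follows.

```python
import math

# alpha = 1/2: M_alpha(z) = e^{-z}; threshold = 4 Gamma(1) Gamma(1/2) / Gamma(1/2)
threshold = 4.0 * math.gamma(1.0) * math.gamma(0.5) / math.gamma(0.5)

delta = 4.5          # separation strictly larger than the threshold
xs = [j * delta for j in range(200)]

# largest off-diagonal row sum of the truncated matrix [e^{-|x_j - x_k|}]
off_diag = max(sum(math.exp(-abs(x - y)) for y in xs if y != x) for x in xs)
```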
\begin{theorem}\label{theoremM2} For $\,\beta> n/2,\,$ put \begin{equation} \phi_\beta(r) = (1+ r^2)^{-\beta} \qquad(r\ge 0). \end{equation} Let $X$ be an arbitrary set of points of $\mathbb{R}^n$ satisfying \eqref{M1} and $$\mathbf{S}_X\left(\phi_\beta\right) = \Big[ \phi_\beta\left(\mathbf{x}_j - \mathbf{x}_k\right)\Big]_{j, \,k\in\mathbb{N}}.$$
\begin{itemize} \item[\rm(i)] $\,\mathbf{S}_X\left(\phi_\beta\right)$ defines a bounded self-adjoint operator on $\ell^{2}(\mathbb{N})$ with
$$\left\| \mathbf{S}_X\left(\phi_\beta\right)\right\| \le 1 + \frac{d(5^d -1) B\left(\beta-\frac d2,\,\frac d2\right)} {2 \left[\delta(X)\right]^d}\,.$$
\item[\rm(ii)] For $\,n\ge 2,\,$ $\,\mathbf{S}_X\left(\phi_\beta\right)$ defines a bounded invertible operator on $\ell^{2}(\mathbb{N})$. In the case $\,n=d=1,\,$ if $X$ satisfies the additional assumption $$\delta(X) > 2 B\left(\beta- \frac 12\,,\, \frac 12\right),$$ then it defines a bounded invertible operator on $\ell^{2}(\mathbb{N})$. \end{itemize} \end{theorem}
\begin{proof} By using the aforementioned integral representation $$\phi_\beta(r) = \frac{1}{\Gamma(\beta)}\int_0^\infty e^{-r^2 t} e^{-t} t^{\beta-1} dt\qquad(r\ge 0),$$ the proof follows along the same scheme as above. \end{proof}
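The Gamma-mixture representation used here is again easy to confirm with standard-library quadrature; the check and its names are our own.

```python
import math

def simpson(f, a, b, n):
    # composite Simpson rule on [a, b] with n (even) subintervals
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

def phi_via_gamma_mixture(beta, r):
    # (1/Gamma(beta)) int_0^inf e^{-(1+r^2)t} t^{beta-1} dt, after t = e^v
    g = lambda v: math.exp(-(1.0 + r * r) * math.exp(v) + beta * v)
    return simpson(g, -30.0, 10.0, 8000) / math.gamma(beta)
```

At $r = 0$ the mixture integrates to $\Gamma(\beta)/\Gamma(\beta) = 1$, matching $\phi_\beta(0) = 1$.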
\begin{remark} In connection with the problem of interpolating functions at an arbitrary set of distinct points $X$, it is an immediate consequence of Theorems \ref{theoremM1}, \ref{theoremM2} that $\,\mathcal{M}_\alpha,\, \phi_\beta,\,$ with $\,\alpha>0,\,\beta>n/2,\,$ may be used to construct Lagrange-type radial basis sequences $\,\left\{u_j^*\right\}_{j\in\mathbb{N}},\,$ by the same process pointed out in the introduction, so that the interpolating functional $$A_X(f)(\mathbf{x}) = \sum_{j=1}^\infty f(\mathbf{x}_j)\,u_j^*(\mathbf{x})$$ is well defined.
\end{remark}
\section{Gramian matrices and Riesz sequences} Now that Schoenberg matrices of Mat\'ern functions are shown to induce bounded and invertible operators on $\ell^2(\mathbb{N})$, it is natural to ask if they generate Riesz sequences or bases in appropriate Hilbert spaces.
We recall that a system $\,\{f_j\}_{j\in\mathbb{N}}\,$ of vectors in a Hilbert space $H$ is said to be a Riesz sequence if its moment space is equal to $\ell^2(\mathbb{N})$, that is, $$\left\{ \mathbf{m}_f = \big\{(f, f_j)_H\big\}_{j\in\mathbb{N}} : f\in H\right\} = \ell^2(\mathbb{N}).$$ If $\,\{f_j\}_{j\in\mathbb{N}}\,$ is complete in addition, it is called a Riesz basis (see \cite{Y}). A classical theorem of Bari states that a necessary and sufficient condition for the system $\,\{f_j\}_{j\in\mathbb{N}}\,$ to be a Riesz sequence is that the Gramian matrix \begin{equation} {\rm Gram}\Big(\{f_j\}_{j\in\mathbb{N}}\,;\,H\Big) = \big[\left( f_j,\,f_k\right)_H\big]_{j, \,k\in \mathbb{N}} \end{equation} defines a bounded and invertible operator on $\ell^2(\mathbb{N})$.
As for the sequences constructed from translating Mat\'ern functions by distinct points, their Gramian matrices in $L^2(\mathbb{R}^n)$ or Sobolev spaces turn out to be easily identifiable in terms of Schoenberg matrices.
In order not to entangle with parameters, it is convenient to work with the Bessel potential kernels of \eqref{G1} \begin{equation*} G_\alpha(\mathbf{x}) = \frac{1}{2^{\alpha-1 + \frac n2}\,\pi^{\frac n2}\,
\Gamma(\alpha)}\,K_{\alpha-\frac n2}(|\mathbf{x}|) |\mathbf{x}|^{\alpha - \frac n2}. \end{equation*}
\subsection{Results on $L^2(\mathbb{R}^n)$ space} Concerning the square integrability, we have the following.
\begin{lemma}\label{lemmaGR1} For $\,\lambda>-1,\,$ if $\,2\alpha + \lambda +1>0,\,$ then \begin{equation*} \int_0^\infty \big[K_\alpha(t) t^\alpha\big]^2 t^{2\lambda +1} dt = \frac{\sqrt{\pi}\,\,\Gamma(\alpha +\lambda +1)\Gamma(2\alpha +\lambda +1) \Gamma(\lambda+1)}{4\,\Gamma\left(\alpha + \lambda + \frac 32\right)}\,. \end{equation*}
In particular, if $\, \alpha + n/4>0\,$ with $n$ a positive integer, then \begin{equation*} \int_0^\infty \big[K_\alpha(t) t^\alpha\big]^2 t^{n-1} dt = \frac{\sqrt{\pi}\,\,\Gamma\left(\alpha +\frac n2\right) \Gamma\left(2\alpha +\frac n2\right)\Gamma\left(\frac n2\right)}{4\,\Gamma\left(\alpha + \frac{n+1}{2}\right)}\,. \end{equation*} \end{lemma}
\begin{proof} An application of Parseval's relation, Theorem \ref{Parseval}, for the Hankel-Schoenberg transforms to formula \eqref{S7} of Theorem \ref{theoremS1} gives \begin{align*} \int_0^\infty \big[K_\alpha(t) t^\alpha\big]^2 t^{2\lambda +1} dt = \big[2^{\alpha+\lambda}\,\Gamma(\alpha +\lambda+1)\big]^2 \int_0^\infty \frac{r^{2\lambda+1}\,dr}{(1+ r^2)^{2\alpha+ 2\lambda+ 2}}. \end{align*} By making substitution $\, u = 1/(1+r^2),\,$ we compute \begin{align*} \int_0^\infty \frac{r^{2\lambda+1}\,dr}{(1+ r^2)^{2\alpha+ 2\lambda+ 2}} &= \frac 12 \int_0^1 u^{2\alpha + \lambda} (1-u)^\lambda du\\ &= \frac 12\,B(2\alpha +\lambda+1,\,\lambda+1) \end{align*} and the stated formula follows on simplifying constants by using Legendre's duplication formula for the Gamma function. The second stated formula corresponds to a special case of the first one with $\,\lambda = n/2 -1.$ \end{proof}
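As a spot check of the lemma (ours, not from the text), take $\,\alpha = 1,\,\lambda = 0$: the closed form gives $\int_0^\infty K_1(t)^2 t^3\,dt = \sqrt{\pi}\,\Gamma(2)\Gamma(3)\Gamma(1)/\bigl(4\Gamma(5/2)\bigr) = 2/3$, which the quadrature below reproduces. The helper names are ad hoc.

```python
import math

def simpson(f, a, b, n):
    # composite Simpson rule on [a, b] with n (even) subintervals
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

def bessel_k(alpha, z):
    # Schlafli: K_alpha(z) = int_0^inf e^{-z cosh t} cosh(alpha t) dt (z > 0)
    return simpson(lambda t: math.exp(-z * math.cosh(t)) * math.cosh(alpha * t),
                   0.0, 12.0, 1200)

def k1_squared_moment():
    # int_0^inf [K_1(t) t]^2 t dt, after the substitution t = e^v;
    # the weight e^{2v} makes the truncation to v in [-10, 3] harmless
    g = lambda v: (bessel_k(1.0, math.exp(v)) * math.exp(v)) ** 2 * math.exp(2.0 * v)
    return simpson(g, -10.0, 3.0, 520)
```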
\begin{theorem}\label{theoremGR1} If $\,\alpha>n/4,\,$ then for any $\,\mathbf{x}, \,\mathbf{y}\in\mathbb{R}^n,\,$ \begin{align}\label{GR1} \big( G_{\alpha}(\cdot-\mathbf{x}),\, G_\alpha(\cdot-\mathbf{y})\big)_{L^2(\mathbb{R}^n)} = G_{2\alpha} (\mathbf{x}-\mathbf{y}). \end{align} As a consequence, for any sequence of distinct points $\,(\mathbf{x}_j)_{j\in\mathbb{N}}\subset \mathbb{R}^n,\,$ the Gramian matrix of the system $\,\big\{ G_\alpha(\mathbf{x}- \mathbf{x}_j) \big\}_{j\in\mathbb{N}}\subset L^2(\mathbb{R}^n)\,$ coincides with the Schoenberg matrix of $G_{2\alpha}$, that is, \begin{align*} {\rm Gram}\Big(\big\{ G_\alpha(\mathbf{x}- \mathbf{x}_j)\big\}_{j\in\mathbb{N}}\,;\,L^2(\mathbb{R}^n)\Big) = \Big[G_{2\alpha} \left(\mathbf{x}_j- \mathbf{x}_k\right)\Big]_{j, \,k\in\mathbb{N}}\,. \end{align*} \end{theorem}
\begin{proof} By Lemma \ref{lemmaGR1}, $\,G_\alpha\in L^2(\mathbb{R}^n).\,$ Due to radial symmetry, \begin{align*} \big( G_\alpha(\cdot-\mathbf{x}),\, G_\alpha(\cdot-\mathbf{y})\big)_{L^2(\mathbb{R}^n)} &=\int_{\mathbb{R}^n} G_\alpha(\mathbf{u}-\mathbf{x}) G_\alpha(\mathbf{u} -\mathbf{y}) d\mathbf{u}\\ &= \int_{\mathbb{R}^n} G_\alpha(\mathbf{x} - \mathbf{y} -\mathbf{w}) G_\alpha(\mathbf{w}) d\mathbf{w}\\ &= \left(G_\alpha\ast G_\alpha\right)(\mathbf{x}-\mathbf{y}). \end{align*} On the Fourier transform side, formula \eqref{S9} gives \begin{align*} \widehat{G_\alpha\ast G_\alpha}(\xi)
= (1+|\xi|^2)^{-2\alpha} = \widehat{G_{2\alpha}}(\xi), \end{align*} whence $\,G_\alpha \ast G_\alpha = G_{2\alpha}\,$ and the result follows. \end{proof}
\begin{remark} This result extends the work of L. Golinskii {\it et al.} \cite{GMO} in which the authors dealt only with the range $\,n/4<\alpha\le n/2.$
\begin{itemize} \item[(a)] In the case when both $\,\alpha-n/2\,$ and $\,2\alpha - n/2\,$ are halves of odd integers,
it is possible to write $L^2$ inner products explicitly with the aid of (K5).
As illustrations in $\mathbb{R}^3$, we take $\,\alpha = 1, \,2\,$ to obtain \begin{align*}
&\qquad\qquad\int_{\mathbb{R}^3} \frac{e^{-|\mathbf{u} -\mathbf{x}| - |\mathbf{u}-\mathbf{y}|}}
{|\mathbf{u}-\mathbf{x}|\,|\mathbf{u}-\mathbf{y}|}\,d\mathbf{u}
= 2\pi\,e^{-|\mathbf{x}-\mathbf{y}|}\,,\\
&\int_{\mathbb{R}^3} e^{-|\mathbf{u} -\mathbf{x}| - |\mathbf{u}-\mathbf{y}|}\,d\mathbf{u}
= \pi\,e^{-|\mathbf{x}-\mathbf{y}|}\left( 1+ |\mathbf{x}-\mathbf{y}| + \frac{|\mathbf{x}-\mathbf{y}|^2}{3}\right) \end{align*} of which the first formula is of considerable interest in the spectral analysis of Schr\"odinger equations (see \cite{MS}).
\item[(b)] To reformulate \eqref{GR1} in a more direct fashion, put \begin{equation}\label{GR2}
F_\alpha(\mathbf{x}) = \frac{1}{2^{\alpha +n -1} \pi^{\frac n2}\Gamma\left(\alpha + \frac n2\right)}\, K_\alpha(|\mathbf{x}|) |\mathbf{x}|^\alpha\,. \end{equation} As an alternative to \eqref{GR1}, if $\,\alpha>-n/4,\,$ then \begin{align} \big( F_\alpha(\cdot-\mathbf{x}),\, F_\alpha(\cdot-\mathbf{y})\big)_{L^2(\mathbb{R}^n)} = F_{2\alpha + \frac n2} (\mathbf{x}-\mathbf{y}). \end{align} \end{itemize} \end{remark}
As it is shown in Theorem \ref{theoremM1} that the Schoenberg matrices of $$G_{2\alpha}(z) = \frac{\Gamma(2\alpha-n/2)}{(4\pi)^{n/2}\,\Gamma(2\alpha)}\, \mathcal{M}_{2\alpha -n/2}(z)\qquad(z>0)$$ define bounded and invertible operators on $\ell^2(\mathbb{N})$ as long as $\,\alpha>n/4,$ we obtain the following from Bari's theorem and Theorem \ref{theoremGR1}.
\begin{corollary} Let $\,\alpha>n/4\,$ and $\,X =\left\{\mathbf{x}_j\in\mathbb{R}^n : j\in\mathbb{N}\right\}\,$ be arbitrary with
$$\delta(X) = \inf_{j\ne k}\,\left|\mathbf{x}_j - \mathbf{x}_k\right|\,>0.$$
\begin{itemize} \item[\rm(i)] If $\,n\ge 2,\,$ then $\,\big\{ G_\alpha(\mathbf{x}- \mathbf{x}_j) \big\}_{j\in\mathbb{N}}\,$ forms a Riesz sequence in $L^2(\mathbb{R}^n).$ \item[\rm(ii)] In the case $\,n=1,\,$ if $X$ is separated with $$\delta(X)> \frac{4\,\Gamma(2\alpha) \Gamma(1/2)}{\Gamma(2\alpha-1/2)},$$ then $\,\big\{ G_\alpha(x - x_j) \big\}_{j\in\mathbb{N}}\,$ forms a Riesz sequence in $L^2(\mathbb{R}).$ \end{itemize} \end{corollary}
\subsection{Results on Sobolev spaces} An important feature of the Sobolev space $H^\alpha(\mathbb{R}^n)$ with $\,\alpha>n/2\,$ is that it is a reproducing kernel Hilbert space with the kernel $G_{\alpha}(\mathbf{x}-\mathbf{y})$ so that it may be viewed as the space of functions of type $$f(\mathbf{x})= \sum_{j=1}^\infty a_j \,G_{\alpha}(\mathbf{x} - \mathbf{x}_j),$$ where $\,(a_j)\in\ell^2(\mathbb{N})\,$ and $\,(\mathbf{x}_j)\subset\mathbb{R}^n\,$ are arbitrary (see \cite{A}). Thus it is reasonable to expect that the system $\, \left\{G_{\alpha}(\mathbf{x} - \mathbf{x}_j)\right\}_{j\in\mathbb{N}}\subset H^\alpha(\mathbb{R}^n)\,$ may serve as a Riesz sequence or a Riesz basis in its closed linear span once the translation points $(\mathbf{x}_j)$ are suitably scattered in $\mathbb{R}^n$.
As a matter of fact, the reproducing property implies \begin{equation}\label{GR3} \big( G_{\alpha}(\cdot-\mathbf{x}),\, G_{\alpha}(\cdot-\mathbf{y})\big)_{H^\alpha(\mathbb{R}^n)} = G_{\alpha} (\mathbf{x}-\mathbf{y}) \end{equation} for all $\,\mathbf{x}, \,\mathbf{y}\in\mathbb{R}^n\,$ and our foregoing analysis yields the following.
\begin{theorem} Let $\,\alpha>n/2\,$ and $\,X =\left\{\mathbf{x}_j\in\mathbb{R}^n : j\in\mathbb{N}\right\}\,$ be arbitrary with
$$\delta(X) = \inf_{j\ne k}\,\left|\mathbf{x}_j - \mathbf{x}_k\right|>0.$$
\begin{itemize} \item[\rm(i)] If $\,n\ge 2,\,$ then $\,\big\{ G_{\alpha}(\mathbf{x}- \mathbf{x}_j) \big\}_{j\in\mathbb{N}}\,$ forms a Riesz sequence in $H^\alpha(\mathbb{R}^n).$ \item[\rm(ii)] In the case $\,n=1,\,$ if $X$ is separated with $$\delta(X)> \frac{4\,\Gamma(\alpha) \Gamma(1/2)}{\Gamma(\alpha- 1/2)},$$ then $\,\big\{ G_{\alpha}(x - x_j) \big\}_{j\in\mathbb{N}}\,$ forms a Riesz sequence in $H^\alpha(\mathbb{R}).$ \end{itemize} \end{theorem}
Regarding the problem of determining if the sequences of translates by inverse multi-quadrics give rise to Riesz sequences, we introduce a class of function spaces defined in terms of Fourier transforms as follows.
\begin{definition} For $\,\alpha>0,$ \begin{align*} \mathcal{K}_\alpha(\mathbb{R}^n) = \left\{ f\in C(\mathbb{R}^n)\cap L^2(\mathbb{R}^n) :
\int_{\mathbb{R}^n} \frac{\big|\widehat f(\mathbf{\xi})\big|^2 d\mathbf{\xi}}
{ K_{\alpha} (|\xi|) |\xi|^{\alpha}} <\infty\right\}. \end{align*} \end{definition}
A theorem of R. Schaback \cite{S} and H. Wendland (\cite{We}, Theorem 10.27) states that if $\,\Phi\in C(\mathbb{R}^n)\cap L^1(\mathbb{R}^n)\,$ is real-valued and positive definite, then the Hilbert space of functions on $\mathbb{R}^n$ with the reproducing kernel $\Phi(\mathbf{x}-\mathbf{y})$ coincides with \begin{align*}
\mathcal{H}(\mathbb{R}^n) = \left\{ f\in C(\mathbb{R}^n)\cap L^2(\mathbb{R}^n) : \int_{\mathbb{R}^n} \frac{\big|\widehat f(\mathbf{\xi})\big|^2 d\mathbf{\xi}} {\widehat{\Phi}(\mathbf{\xi})} <\infty\right\} \end{align*} for which the inner product is defined by \begin{align*} \bigl(f,\,g\bigr)_{\mathcal{H}(\mathbb{R}^n)} = (2\pi)^{-n}\int_{\mathbb{R}^n} \frac{\widehat{f}(\mathbf{\xi}) \overline{\,\widehat{g}(\mathbf{\xi})} \,d\mathbf{\xi}}{\widehat{\Phi}(\mathbf{\xi})}. \end{align*}
As a consequence, it is simple to find that the space $\mathcal{K}_\alpha(\mathbb{R}^n)$ arises as a reproducing kernel Hilbert space with an appropriate inverse multi-quadric as its reproducing kernel. To be precise, we have the following result.
\begin{theorem} For $\,\beta>n/2,\,$ consider the inverse multi-quadrics
$$\phi_\beta(\mathbf{x}) = (1+|\mathbf{x}|^2)^{-\beta}.$$
\begin{itemize} \item[\rm(i)] The Hilbert space of functions on $\mathbb{R}^n$ with the reproducing kernel $\phi_\beta$ coincides with $\,\mathcal{K}_{\beta-n/2}(\mathbb{R}^n)\,$ for which the inner product is defined by \begin{align}\label{GR4} \bigl(f,\,g\bigr)_{\mathcal{K}_{\beta-n/2}(\mathbb{R}^n)} = (2\pi)^{-2n}\int_{\mathbb{R}^n} \frac{\widehat{f}(\mathbf{\xi}) \overline{\,\widehat{g}(\mathbf{\xi})} \,d\mathbf{\xi}}{G_\beta(\mathbf{\xi})}. \end{align}
\item[\rm(ii)] Let $\,X =\left\{\mathbf{x}_j\in\mathbb{R}^n : j\in\mathbb{N}\right\}\,$ be arbitrary with
$$\delta(X) = \inf_{j\ne k}\,\left|\mathbf{x}_j - \mathbf{x}_k\right|>0.$$ Then the system $\,\big\{ \phi_\beta(\mathbf{x}- \mathbf{x}_j) \big\}_{j\in\mathbb{N}}\,$ forms a Riesz sequence in $\mathcal{K}_{\beta-n/2}(\mathbb{R}^n)\,$ for any $\,n\ge 2\,$ and for $\,n=1\,$ under the additional assumption $$\delta(X)> 2 B(\beta-1/2,\,1/2).$$ \end{itemize} \end{theorem}
\begin{proof} Obviously, $\phi_\beta$ is continuous, integrable and positive definite. By an application of the Hankel-Schoenberg transform formula for $\phi_\beta$ as stated in Corollary \ref{corollaryS1}, we have $\,\,\widehat{\phi_\beta}(\xi) = (2\pi)^n G_\beta(\xi)\,$ and hence part (i) follows by the aforementioned theorem of Schaback and Wendland.
By the reproducing property, the Gramian matrix is given by \begin{align*} {\rm Gram}\Big(\big\{\phi_\beta(\mathbf{x}- \mathbf{x}_j)\big\}_{j\in\mathbb{N}}\,;\,\mathcal{K}_{\beta-n/2}(\mathbb{R}^n)\Big) = \Big[\phi_{\beta} \left(\mathbf{x}_j- \mathbf{x}_k\right)\Big]_{j, \,k\in\mathbb{N}}\, \end{align*} and part (ii) follows immediately from Theorem \ref{theoremM2}. \end{proof}
\begin{remark} As the Mat\'ern functions of positive order are bounded smooth functions with exponential decay, it is evident that $\,\mathcal{K}_\alpha(\mathbb{R}^n)\subset H^\infty(\mathbb{R}^n)\,$ for any $\,\alpha>0.$ In the special case $\,\beta= (n+1)/2,\,$ we note $$\mathcal{K}_{1/2}(\mathbb{R}^n) = \left\{f\in C(\mathbb{R}^n)\cap L^2(\mathbb{R}^n) :
\int_{\mathbb{R}^n} e^{\,|\xi|} \,\big|\widehat f(\mathbf{\xi})\big|^2\, d\mathbf{\xi} <\infty\right\}, $$ which is the reproducing kernel Hilbert space with the Poisson kernel
$$\phi_{\frac{n+1}{2}}(\mathbf{x}) = (1+|\mathbf{x}|^2)^{-\frac{n+1}{2}}.$$ \end{remark}
\section{Appendix: $\ell^2(\mathbb{N})$-Boundedness}
For the sake of completeness, we reproduce the proof of L. Golinskii {\it et al.} \cite{GMO} for part (i) of Proposition \ref{propM} which states
\begin{itemize}{\it \item[{}] Suppose that $f$ is a nonnegative monotone decreasing function on $[0, \infty)$ such that $\,f(0) =1\,$ and the function $\,f(t) t^{d-1}\,$ is integrable on $[0, \infty).$ For any $\,X\subset \mathbb{R}^n\,$ satisfying the condition \eqref{M1}, the Schoenberg matrix $\,\mathbf{S}_{X}(f)$ defines a bounded self-adjoint operator on $\ell^{2}(\mathbb{N})$ with \begin{equation}\label{A}
\left\|\mathbf{S}_{X}(f)\right\|\le 1+\frac{ d(5^d-1)}{[\delta(X)]^d}\int_{0}^{\infty}f(t)t^{d-1}dt. \end{equation}} \end{itemize}
\paragraph{Proof.} Let us write $\,\delta = \delta(X)\,$ and assume $\,{\rm span}(X)\simeq\mathbb{R}^{d}\,$ for simplicity. We fix $j$ and estimate the infinite sum \begin{align*}
A_j &\equiv \sum_{k=1}^{\infty}f(|\mathbf{x}_k- \mathbf{x}_j|)=1+\sum_{m=1}^{\infty}\sum_{\mathbf{x}_{k}\in X_{m}}f(|\mathbf{x}_k- \mathbf{x}_j|)\,,\quad\text{where}\\
X_{m} &=\big\{\mathbf{x}_{k}\in X : m\delta \le |\mathbf{x}_k- \mathbf{x}_j|< (m+1)\delta\big\}\,. \end{align*} In terms of the open balls $\,B(\mathbf{x}_k,\delta/2)\subset \mathbb{R}^n\,,$ a geometric inspection reveals \begin{align*}
\#(X_{m}) &\le \frac{\,\mathrm{vol}\Big(\Big\{\mathbf{y}\in \mathbb{R}^d : \left(m-\frac{1}{2}\right)\delta\le |\mathbf{y} - \mathbf{x}_j|< \left(m+\frac{3}{2}\right)\delta\Big\}\Big)\,} {\text{vol}\big(\,B(\mathbf{x}_k,\delta/2)\cap \mathbb{R}^d\,\big)}\\ &=(2m+3)^d-(2m-1)^d\\ &\le (5^d-1)\,m^{d-1}, \end{align*} which implies \begin{equation*} A_j \le 1+\sum_{m=1}^{\infty}(5^d-1)m^{d-1}f(m\delta)\,. \end{equation*}
As $f$ is monotone decreasing on $[0,\infty)$, \begin{align*} \int_{0}^{\infty}f(t\delta)t^{d-1}dt &=\sum_{m=1}^{\infty}\int_{m-1}^{m}f(t\delta)t^{d-1}dt\\ &\ge \sum_{m=1}^{\infty} f(m\delta)\left[\frac{m^{d}-(m-1)^d}{d}\right]\\ &\ge\sum_{m=1}^{\infty} f(m\delta)\frac{m^{d-1}}{d}, \end{align*} which yields $$ \sum_{m=1}^{\infty}f(m\delta)m^{d-1}\le \frac{d}{\delta^{d}}\int_{0}^{\infty}f(t)t^{d-1}dt.$$ Inserting this estimate into the above sum, we are led to \begin{align*} A_j \le 1+\frac{d(5^d-1)}{\delta^d}\int_{0}^{\infty}f(t)t^{d-1}dt\,. \end{align*}
Since this estimate is independent of $j$, the result follows by Schur's test. \qed
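The only non-obvious ingredient in the proof above is the counting bound $\#(X_m)\le (5^d-1)m^{d-1}$. A brute-force check of the underlying integer inequality (our own sanity test, not part of the paper) is immediate:

```python
def annulus_bound_holds(d, m):
    # checks the integer inequality (2m+3)^d - (2m-1)^d <= (5^d - 1) m^{d-1}
    return (2 * m + 3) ** d - (2 * m - 1) ** d <= (5 ** d - 1) * m ** (d - 1)
```

Equality holds at $m = 1$, where both sides equal $5^d - 1$; for larger $m$ the left side, a polynomial of degree $d-1$ with nonnegative coefficients summing to $5^d-1$, only falls further below the right side.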
\begin{remark} Applying Schur's test to $\,\mathbf{S}_{X}(f)-I\,$, the proof of (\ref{A}) shows \begin{equation*}
\left\|I-\mathbf{S}_X(f)\right\|\le \frac{d(5^d-1)}{[\delta(X)]^d}\int_{0}^{\infty}f(t)t^{d-1}dt \end{equation*} and the right side is strictly less than $1$ if \begin{equation*} \delta(X)>\left[d(5^d-1)\int_{0}^{\infty}f(t)t^{d-1}dt\right]^{1/d}\,. \end{equation*} For such a set $X$, $\mathbf{S}_X(f)$ defines a bounded invertible operator on $\ell^{2}(\mathbb{N})$. \end{remark}
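As a quick numerical illustration of the bound \eqref{A} (our own sketch, not part of any proof, with the arbitrary choices $d=2$, $f(t)=e^{-t}$ and $X$ a finite block of $\mathbb{Z}^2$, so that $\delta(X)=1$ and $\int_0^\infty f(t)\,t\,dt=\Gamma(2)=1$):

```python
import math

def schoenberg_norm_bound_check(n_side=10):
    """Compare the operator norm of a finite Schoenberg matrix with the
    bound 1 + d(5^d - 1) delta^{-d} * int_0^inf f(t) t^{d-1} dt, for the
    illustrative choices d = 2, f(t) = exp(-t), X = n_side x n_side block
    of Z^2 (so delta(X) = 1 and the integral equals Gamma(2) = 1)."""
    d = 2
    integral = 1.0
    bound = 1 + d * (5**d - 1) * integral          # delta = 1, so = 49

    pts = [(i, j) for i in range(n_side) for j in range(n_side)]
    S = [[math.exp(-math.dist(p, q)) for q in pts] for p in pts]

    # Power iteration: S is symmetric with positive entries, so its largest
    # eigenvalue equals its operator norm, and ||S v|| <= ||S|| at every step.
    v = [1.0] * len(pts)
    nrm = 1.0
    for _ in range(100):
        w = [sum(S[i][j] * v[j] for j in range(len(v))) for i in range(len(v))]
        nrm = math.sqrt(sum(x * x for x in w))
        v = [x / nrm for x in w]
    return nrm, bound
```

For this configuration the computed norm is far below the bound, as expected: the estimate \eqref{A} is not sharp, only dimension-independent of the truncation.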
\noindent {\bf Acknowledgements.} Yong-Kum Cho is supported by a National Research Foundation of Korea Grant funded by the Korean Government (\# 20150301). Hera Yun is supported by the Chung-Ang University Research Scholarship Grants in 2014.
\noindent Yong-Kum Cho
\noindent Department of Mathematics, Chung-Ang University, 84 Heukseok-Ro, Dongjak-Gu, Seoul 156-756, Korea (e-mail: ykcho@cau.ac.kr)
\noindent Dohie Kim
\noindent Department of Mathematics, Chung-Ang University, 84 Heukseok-Ro, Dongjak-Gu, Seoul 156-756, Korea (e-mail: hanna927@hanmail.net)
\noindent Kyungwon Park
\noindent Department of Computer Engineering, Korea Polytechnic University, 237 Sangidaehak-Ro, Siheung-Si 429-793, Korea (e-mail: chrisndanny@kpu.ac.kr)
\noindent Hera Yun
\noindent Department of Mathematics, Chung-Ang University, 84 Heukseok-Ro, Dongjak-Gu, Seoul 156-756, Korea (e-mail: herayun06@gmail.com)
\end{document}
\begin{document}
\title{Monotonicity and regularity of the speed for excited
random walks in higher dimensions} \author {Cong-Dan Pham\\Aix Marseille Université, CNRS, Centrale Marseille\\ LATP, UMR 7353, 13453 Marseille, France\\cong-dan.pham@univ-amu.fr} \maketitle \begin{abstract} We introduce a method for studying monotonicity of the speed of excited random walks in high dimensions, based on a formula for the speed obtained via cut times and Girsanov's transform. While the method gives rise to results similar to those that have been, or can be, obtained via the expansion method of van der Hofstad and Holmes, it may be more palatable to a general probabilistic audience. We also revisit the law of large numbers for stationary cookie environments. In particular, we introduce the new notion of an $e_1$-exchangeable cookie environment and prove the law of large numbers in this case. \end{abstract} \section{Introduction}
\subsection{Excited random walk with random cookies (ERWRC)}
Excited random walks (ERW) were introduced in \cite{BW03} by I. Benjamini and D. Wilson. Subsequently, M. Zerner generalized ERW by introducing in \cite{Zer05}, \cite{Zer06} cookie random walks, which are also called multi-excited random walks. In our paper, we consider a model of random walk called the excited random walk with $m$ random cookies, which we denote by ERWRC, or $m$-ERWRC when more precision is needed. This is a generalisation of the multi-excited random walk and also a particular case of the excited random walk in random environment introduced and studied in \cite{MPRV12} and \cite{KZ14}.
\iffalse{Firstly, we recall a model that is a generalisation of the random walk in random environment and of the cookie random walk, which is known as excited random walk in random cookie environments (ERWRCE). This model was considered in \cite{KZ14} and \cite{MPRV12}:
Let $\E:=\{\pm e_j|j\in\{1,2,...,d\}\}$ be the set of unit coordinate vectors in ${\mathbb Z}^d$ and denote by ${\mathcal M}_{\E}$ the set of probability measures on $\E$, i.e. vectors with $2d$ non-negative entries which sum up to 1. Such vectors are called $cookies.$ The set of cookie environments is denoted by $$ \Omega:={\mathcal M}_{\E}^{{\mathbb Z}^d\times {\mathbb N}^*}. $$ (Here ${\mathbb N}^*=\{1,2,...\}$.) The elements of $\Omega$ are written as $\omega=(\omega(z,e,i))_{z\in{\mathbb Z}^d, e\in\E, i\in{\mathbb N}^*}$ with $(\omega(z,e,i))_{e\in\E}$ being the $i-$th cookie at $z.$ For fixed $\omega\in\Omega$ and $y \in{\mathbb Z}^d$, an ERW starting at $y$ in the cookie environment $\omega$ is a process $Y:=(Y_n)_{n\geqslant 0}$ on a suitable probability space $(\Omega',\F',{\mathbb P}_{y,\omega})$ which satisfies \begin{align} &{\mathbb P}_{y,\omega}(Y_0=y)=1 \text{ and }\\ \notag
{\mathbb P}_{y,\omega}(Y_{n+1}=Y_n+e|&(Y_i)_{0\leqslant i\leqslant n})=\omega(Y_n,e,\#\{i\in\{0,1,...,n\}|Y_i=Y_n\}) \end{align} for all $n\in{\mathbb N}$ and $e\in\E.$ Here $\#A$ denotes the cardinality of the set $A.$ The cookie environment $\omega$ may be chosen at random itself according to a probability measure ${\mathbb Q}$ on $(\Omega,\F)$, where $\F$ is the canonical product Borel $\sigma-$algebra. Averaging the so-called $quenched$ measure ${\mathbb P}_{y,\omega}$ over the environment $\omega$ we obtain the averaged (often also called annealed) measure $P_{y}(\cdot)={\mathbb E}({\mathbb P}_{y,\omega}(\cdot))$ on $\Omega\times\Omega'.$ When the cookie environment $\omega=(\omega(z,e,i))_{z\in{\mathbb Z}^d, e\in\E, i\in{\mathbb N}}$ does not depend on $i$ i.e. $\omega(z,e,i)$ $=\omega(z,e)$ for all $i\in{\mathbb N}^*$, then we get the random walk in random environment (see \cite{SZ99}, \cite{HoSu12},...). Some other particular cases of ERWRCE were studied in \cite{Hol12}, \cite{HoSu12}, \cite{Bau13}.}\fi
Let us describe $m-$ERWRC. Let $m$ be a positive integer or $m=+\infty$. We place $m$ cookies on every site of the lattice ${\mathbb Z}^{d}.$ Moreover, $m$ random variables $(\beta_k(y))_{1\leqslant k\leqslant m}$ with values in $[-1,1]$ are attached to each site $y$ of ${\mathbb Z}^{d}$. The process $ \beta:=\{(\beta_k(y))_{1\leqslant k\leqslant m}\}_{y\in{\mathbb Z}^{d}}$ serves as a random environment whose law is denoted by ${\mathbb Q}$. Let ${\mathbb B}:=([-1,1]^{m})^{{\mathbb Z}^{d}}$ be the set of random environments. The excited random walk with $m$ cookies $ \beta=\{(\beta_k(y))_{1\leqslant k\leqslant m}\}_{y\in{\mathbb Z}^{d}}$ is a discrete time nearest neighbor random walk $(Y_n)_{n\geqslant 0}$ on the lattice ${\mathbb Z}^d$ obeying the following rule: when the walk visits $y$ for the $k$-th time, $1 \leqslant k \leqslant m$, it eats one cookie and jumps with probability $(1+\beta_{k}(y))/(2d)$ to the right, with probability $(1-\beta_{k}(y))/(2d)$ to the left, and with probability $1/(2d)$ to each of the other nearest neighbor sites. On the other hand, when the walk is at a site $y$ where there are no more cookies, it jumps uniformly at random, with probability $1/(2d)$, to one of the $2d$ neighboring sites. When $m=1$ and the environment $\beta$ is constant, we recover the excited random walk.
Throughout this paper, we denote by $\{Y_n\notin^k\}$ the event that $Y_n$ has been visited fewer than $k$ times before time $n$ and denote by $\{Y_n\in^k\}$ the complement of $\{Y_n\notin^k\}.$ When $k=1$ we also use the notations $\{Y_n\notin\}:=\{Y_n\notin^1\}$ and $\{Y_n\in\}:=\{Y_n\in^1\}.$ Moreover, the event that $Y_n$ has been visited exactly $k-1$ times before time $n$ is denoted by $\{Y_n\notin_k\}$ and its complement is denoted by $\{Y_n\in_k\}$.
From the description of $m-$ERWRC, when $\beta$ is fixed, the ``quenched'' law ${\mathbb P}_{\beta}$ of the excited random walk with $m$ random cookies $ \beta$ is the probability on the path space $({\mathbb Z}^d)^{\mathbb N}$ defined by: \begin{itemize} \item ${\mathbb P}_{\beta}(Y_0=0)=1$,
\item ${\mathbb P}_{\beta}[Y_{n+1}-Y_n=\pm e_i|Y_0,..., Y_n]=\frac{1}{2d}$ for $2\leqslant i\leqslant d$, \item if $Y_n$ has been visited exactly $k-1$ times before time $n$, i.e. on the event $\{Y_n\notin_k\}$ $$
{\mathbb P}_{\beta}[Y_{n+1}-Y_n=\pm e_1|Y_0,..., Y_n]= \left\{ \begin{array}{ll} \frac{1\pm\beta_{k}(Y_n)}{2d} & \mbox{ for } 1 \leqslant k \leqslant m, \\
\frac{1}{2d} & \mbox{ for } k > m. \end{array} \right. $$ \end{itemize} The ``annealed'' law $P$ is then defined as the semi-direct product on ${\mathbb B}\times({\mathbb Z}^d)^{\mathbb N}$: $P={\mathbb Q}\otimes{\mathbb P}_{\beta}.$ We say that the cookies are ``identical'' if \begin{gather*}\tag{IDEN}\label{IDEN} \forall k \mbox{ such that } 1\leqslant k\leqslant m \, , \, \, \forall y \in {\mathbb Z}^{d} \, , \, \, \beta_k(y) = \beta(y) \,. \end{gather*}
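The quenched transition rule above can be sketched in code. The following is a minimal illustration only, not the construction used later in the paper; the callable \texttt{beta(y, k)}, standing for $\beta_k(y)$, is a hypothetical placeholder:

```python
import random

def erwrc_step(pos, visits, beta, m, d, rng):
    """One step of the m-ERWRC quenched transition rule on Z^d.
    visits[pos] counts previous visits of pos; beta(pos, k) plays the
    role of beta_k(pos) in [-1, 1]."""
    k = visits.get(pos, 0) + 1                 # this is the k-th visit to pos
    visits[pos] = k
    b = beta(pos, k) if k <= m else 0.0        # no cookie left: symmetric step
    u = rng.random()
    if u < (1 + b) / (2 * d):                  # +e_1 with prob (1+b)/(2d)
        step = (1,) + (0,) * (d - 1)
    elif u < 1.0 / d:                          # -e_1 with prob (1-b)/(2d)
        step = (-1,) + (0,) * (d - 1)
    else:                                      # each other neighbor: 1/(2d)
        axis = rng.randrange(1, d)
        sign = rng.choice((-1, 1))
        step = tuple(sign if i == axis else 0 for i in range(d))
    return tuple(p + s for p, s in zip(pos, step))
```

Iterating \texttt{erwrc\_step} with a shared \texttt{visits} dictionary and a fixed environment produces a trajectory under ${\mathbb P}_{\beta}$.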
In this model, the random cookie environment $\beta=\{\beta(y)\}_{y\in{\mathbb Z}^{d}}$ is assumed to be: \begin{itemize} \item stationary: $\beta(y+\cdot)\overset{law}{=}\beta$ for any $y$ in ${{\mathbb Z}^{d}}$, \item $e_1$-exchangeable: to define this notion, we consider a family $\Delta=\{\delta_z\}_{z\in{\mathbb Z}^{d-1}}$ of bijective mappings from ${\mathbb Z}$ to ${\mathbb Z}$. The mapping $\sigma_{\Delta}: {\mathbb Z}^d \to {\mathbb Z}^d$ defined by $\sigma_{\Delta}(x,z)=(\delta_z(x),z)$ for all $x\in{\mathbb Z}, z\in{\mathbb Z}^{d-1}$, is then a bijection from ${\mathbb Z}^d$ to ${\mathbb Z}^d$, acting on the set ${\mathbb B}$ of environments by $\sigma_{\Delta}(\beta)(y)=\beta(\sigma_{\Delta}(y))$. The environment is said to be $e_1$-exchangeable if and only if $\sigma_{\Delta}(\beta) \overset{law}{=}\beta$ for any family $\Delta$. In other words, an environment is $e_1$-exchangeable if its law does not change when performing permutations of the environment on each horizontal line. \end{itemize}
An i.i.d. cookie environment is of course stationary and $e_1$-exchangeable. Another simple example is provided by a stationary environment not depending on the horizontal component: for all $y=(x,z) \in {\mathbb Z} \times {\mathbb Z}^{d-1}$, $\beta(y)=\beta(z)$, where $(\beta(z))_{z \in {\mathbb Z}^{d-1}}$ is stationary.
To describe our main result about this model, we introduce a partial ordering on the laws of environments. Generally speaking, let $Q_1, Q_{2}$ be two probability measures on a partially ordered set $(E,\leqslant)$. We say that a probability measure $Q$ on $E \times E$ is a monotone coupling of $Q_1$ and $Q_2$ if, denoting by $l_1$ and $l_2$ the coordinate maps from $E \times E$ to $E$: $$ \mbox{ for } i=1,2 \, , \, \text{ for all events } B \mbox{ of } E \, , \, \, Q(l_i \in B)=Q_i(B) \, \mbox{ and } Q(l_1 \leqslant l_2) =1 \, . $$ When such a monotone coupling exists, we say that $Q_1 \prec Q_2$.
The set ${\mathbb B}$ of environments is endowed with the partial ordering: $$ \beta_1 \leqslant \beta_2 \mbox{ if and only if } \beta_{1,k}(y) \leqslant \beta_{2,k}(y) \, , \, \, 1 \leqslant k \leqslant m
\, , \, \, y \in {\mathbb Z}^d \, . $$
Let $(Z_n)_{n\geqslant 0}$ (resp. $(X_n)_{n\geqslant 0}$) be the vertical (resp. horizontal) component of $m-$ERWRC $(Y_n)_{n\geqslant 0}$:
$$Z_n:=(Y_n\cdot e_2,...,Y_n\cdot e_d) \, , \, \, X_n:=Y_n\cdot e_1 \, .
$$ Then $(Z_n)_{n \geqslant 0}$ is a simple random walk on ${\mathbb Z}^{d-1}$. We can extend this simple random walk to all integer times to obtain the two-sided simple random walk $(Z_n)_{n \in{\mathbb Z}}$ (see \eqref{lienZ-tildeZ} in Section \ref{contructES}). For $d-1\geqslant 5$, E. Bolthausen, A.-S. Sznitman and O. Zeitouni \cite{BSZ03} proved the existence of cut times, i.e. times splitting the trajectory into two non-intersecting paths. Moreover, these cut times are integrable for $d-1\geqslant 5$. Let $\D$ be the
set of cut times, write $\D=\{...<T_{-2}<T_{-1}<T_0\leqslant 0<T_1<T_2<...\}$. We denote $T:=T_1$ and $\hat{P}=P(\cdot|0\in\D)$.
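On a finite trajectory, cut times admit a simple characterization: $t$ is a cut time iff no site visited by time $t$ is revisited strictly after $t$. The sketch below (our own illustration, not code from \cite{BSZ03}) computes them from last-occurrence times; by construction the final index is always reported, a finite-horizon artifact:

```python
def cut_times(path):
    """Finite-horizon cut times of a lattice path: indices t such that
    {path[0..t]} and {path[t+1..]} are disjoint.  Uses last-occurrence
    times: t is a cut time iff no site seen by time t reappears after t.
    (The final index is always reported: a boundary artifact.)"""
    last = {}
    for i, site in enumerate(path):
        last[site] = i                     # last time each site is visited
    cuts, running_max = [], -1
    for t, site in enumerate(path):
        running_max = max(running_max, last[site])
        if running_max == t:               # nothing seen so far recurs later
            cuts.append(t)
    return cuts
```

The last-occurrence trick makes the scan linear in the path length, which matters when estimating quantities such as $\hat{E}(X_T)/\hat{E}(T)$ by simulation.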
Our main result then reads as follows: \begin{thm} \label{Therwrc} Let $Y=(Y_n)$ be an $m-$ERWRC, and assume that the random cookie environment is stationary and $e_1$-exchangeable. We denote by $X_n=Y_n\cdot e_1$ the projection of the random walk on the first coordinate. \begin{itemize}
\item Law of large numbers:
For $d\geqslant 6$, $\frac{X_n}{n}$ converges $P-$a.s. to a random variable $V$, whose expectation under $\hat{P}$, denoted by $v({\mathbb Q})$, satisfies $v({\mathbb Q})=\hat{E}[V]=\frac{\hat{E}(X_T)}{\hat{E}(T)}$. In the particular case where the cookie environment is i.i.d., $V$ is constant and $V=v({\mathbb Q}).$ \item Monotonicity:
\begin{enumerate} \item If the cookies are identical, i.e. $\forall y \in {\mathbb Z}^d \, , \, \, \beta_1(y)=\beta_2(y)=...=\beta_m(y)=\beta(y) \, ,$ then there exists $d_0\in{\mathbb N}^{*}$ such that $v({\mathbb Q})$ is increasing w.r.t. ${\mathbb Q}$ for $d\geqslant d_0$ (w.r.t. the partial ordering $\prec$). \item If the cookies are identical, there exists $\sigma_0\in(0,1)$ such that for any $d \geqslant 10$, $v({\mathbb Q})$ is increasing w.r.t. ${\mathbb Q}$ on the set
$\{{\mathbb Q} \mbox{ such that } {\mathbb Q}(0 \leqslant |\beta(y)| \leqslant \sigma_0, \forall y \in {\mathbb Z}^d)=1 \}$. \end{enumerate} \end{itemize}
\end{thm}
About the law of large numbers (LLN):
To prove the law of large numbers, we use the technique of cut times as in \cite{BSZ03}, \cite{HoSu12}, \cite{Hol12}. Our contribution is to use it for $e_1$-exchangeable stationary environments. In the i.i.d. setting, the LLN of Theorem $\ref{Therwrc}$ is a consequence of Theorem $1.1 $ of \cite{HoSu12}. However, the proof of the LLN in the i.i.d. setting given in our paper is not quite the same as in \cite{BSZ03}, \cite{HoSu12} (see Section \ref{seciid}). The formula $v({\mathbb Q})=\frac{\hat{E}(X_T)}{\hat{E}(T)}$ obtained in our proof is different from the formula for the speed in $\cite{BSZ03}$. To use cut times, the dimension $d$ is required to be at least $6$; this guarantees the existence of cut times for the projection $Z$ of the random walk $Y$ on the $d_1$ last coordinates ($d_1=d-1\geqslant 5$). In \cite{KZ14}, the LLN of Theorems $4.6$ and $4.8$ is proved for all dimensions $d\geqslant 1$ using a renewal structure. However, to use this technique, uniform ellipticity of the cookie environment and transience of the random walk in some direction $l\in {\mathbb R}^d $ are needed.
About the monotonicity of the speed:
Our contribution is to prove monotonicity in the case of $e_1$-exchangeable stationary cookie environments. For the i.i.d. setting, M. Holmes and R. Sun \cite{HoSu12} considered random walks in a partially random environment, which are similar to random walks with an infinite number of identical random cookies, $m=+\infty$. In this model the probability of stepping in the $d_1$ last coordinates is random (i.e. it depends on the cookie environment) and $d_0=d-d_1\geqslant 1$. The question of monotonicity is considered under the assumption that there is an explicit coupling of the two laws of the random environments. In this case, the cookies are allowed to take two values, say $\nu_1$ and $\nu_2$, with probabilities $\beta$ and $1-\beta$ (where $\beta$ is a constant in $[0,1]$). They proved the monotonicity of the speed with respect to $\beta.$
In the present paper, we prove the monotonicity for all $1\leqslant m \leqslant +\infty$. In fact, our model intersects the model of \cite{HoSu12} when the projected random walk on ${\mathbb Z}^{d_1}$ is a simple random walk, $m=+\infty$ and $d_0=1$. In this case, the monotonicity can be easily proved by a coupling argument for stochastic domination (see \cite{HoSu12}, page $5$).
Regarding the methodology, in the case where the probability of stepping in the $d_1$ last coordinates is not random, our method of cut times and Girsanov's transform does not require the explicit coupling of the laws of the random environments used in \cite{HoSu12}. We note that the lace expansion method can also be applied to prove the monotonicity of the speed in Theorem \ref{Therwrc}. More precisely, using the stationary coupling $\beta_t=(1-t)\beta_1+t\beta_2$, one can prove the existence of the speed by the law of large numbers. Together with boundedness and convergence of the lace expansion series, the lace expansion formula for the expectation of the speed then follows. These techniques can be found in \cite{HoSu12}.
In \cite{Hol12}, M. Holmes asked about monotonicity of the speed with respect to stochastic domination. He considered the model with $1\leqslant m\leqslant +\infty$, $d_0=1$, in which the probability of stepping in the $d_1$ last coordinates is not random. The author proved the following result:
\begin{thm*}[Theorem 2.3, \cite{Hol12}]\label{ThHol} Set $\delta_i:={\mathbb E}[\beta_i(0)]$. Let $A\subset {\mathbb N}$ be a finite set of integers. If $\beta_i(o)$ is independent of $(\beta_j(o))_{j\ne i}$ for each $i\in A$, then for each fixed joint distribution of $\beta_{A^c}(o) = (\beta_i(o))_{i\notin A}$, the annealed speed $v$ in dimension $d$ is a continuous function of $(\delta_i)_{i\in A}$ when $d\geqslant 6$ and is differentiable in $\delta_i$ for each $i\in A$ when $d\geqslant 8$. If $1\in A$, then $v$ is strictly increasing in $\delta_1$ when $d\geqslant 12$. \end{thm*} Under the conditions of this theorem, for $i\in A\text{ and }i>0$, the speed depends on the $\beta_i$ only via the mean $\delta_i={\mathbb E}[\beta_i(x)]$, $x\in{\mathbb Z}^d$. This means that the speed of the random walk does not change when we replace $\beta_i, i\in A,$ by the constant $\delta_i$. Here the speed is monotone in the first drift $\delta_1$ when the $i$-th cookie is independent of the others for $i\in A$ and $1\in A$. This is a special case of stochastic domination. The model in our paper is quite similar to the model in \cite{Hol12} except for the conditions on the random cookie environment. We prove the monotonicity of the speed with respect to the law of the random cookie environment ${\mathbb Q}$ in the special case of $m$ identical random cookies.
\subsection{Excited random walk with $m$ identical deterministic cookies ($m$-ERW)} This model is a special case of $m-$ERWRC in which the cookie environment is deterministic and identical, i.e. the cookies are the same at every site: $$ \forall k \mbox{ such that } 1\leqslant k\leqslant m \, , \, \, \forall y \in {\mathbb Z}^{d} \, , \, \, \beta_k(y) = \beta \, , $$ for some real number $\beta\in[0,1]$. The $m-$ERW is also a special case of the multi-excited random walk introduced in \cite{Zer05}. Let ${\mathbb P}_{m,\beta}$ denote the law of the $m$-ERW. As $m$ grows, the $m$-ERW behaves more and more like a simple random walk with bias $\beta$. Let $v(m,\beta)$ be the speed of the $m$-ERW, whose existence is proven for $d \geqslant 2$ in \cite{BR07}, \cite{MPRV12}, \cite{KZ14}.
We prove in Section \ref{Sec4} the following result: \begin{thm}\label{md} For $d\geqslant 8$, the speed $v(m,\beta)$ is differentiable w.r.t. $\beta$ in $[0,1)$. Moreover, the derivative converges to $\frac{1}{d}$, uniformly in $\beta$ on compact subsets of $[0,1)$: for any $\beta_0 \in [0,1)$,
$$ \lim_{m\to\infty} \sup_{\beta \in [0,\beta_0]} \left| \frac{\partial}{\partial\beta}v(m,\beta) - \frac{1}{d} \right| =0 \, .$$ Hence, there exists $m(\beta_0)$ such that for $m\geqslant m(\beta_0)$ the speed of the $m$-ERW is increasing in $\beta$ on $[0,\beta_0]$.
\end{thm} The differentiability of the speed was proved in \cite{Hol12}, Theorem 2.3. The rest could also be obtained by a minor modification of the proof of Theorem $2.3$ of \cite{Hol12}. \subsection{Excited random walk}
\iffalse{\hspace*{0.5cm}An excited random walk (ERW) with bias parameter $\beta\in(0,1]$ is a discrete time nearest neighbor random walk $(Y_n)_{n\geqslant 0}$ on the lattice ${\mathbb Z}^d$ obeying the following rule: when at time $n$ the walk is at a site it has already visited before time $n$, it jumps uniformly at random to one of the $2d$ neighboring sites. On the other hand, when the walk is at a site it has not visited before time $n$, it jumps with probability $(1+\beta)/2d$ to the right, probability $(1-\beta)/2d$ to the left, and probability 1/(2d) to the other nearest neighbor sites. Excited random walk was introduced in 2003 by I. Benjamini and D.B. Wilson \cite{BW03}. By using Theorem $21$ of \cite{BoS02}, they obtained a local limit theorem and proved that for every value of $\beta\in(0,1]$ and $d\geqslant 2$, excited random walks are transient. Furthermore, they proved that for $d\geqslant 4$, $$\liminf_{n\to\infty}Y_n.e_1 > 0\,\, a.s.,$$ where $(e_i:1\leqslant i\leqslant d)$ denotes the canonical generators of the group ${\mathbb Z}^{d}$ . This result was extended for dimensions $d=2, 3$ by G. Kozma (see \cite{Koz03}, \cite{Koz05}). In 2007, J. B$\acute{\text{e}}$rard and A. Ram$\acute{\text{\i}}$rez \cite{BR07} and in 2012, with the different approach, M. Menshikov, S. Popov, A. Ram$\acute{\text{\i}}$rez and M. Vachkovskaia \cite{MPRV12} proved a law of large numbers and a central limit theorem for the excited random walk for $d\geqslant 2$, namely: \begin{itemize} \item (Law of large numbers). There exists $\ v=v(\beta,d), 0<v<+\infty$ such that a.s.
$$\lim_{n\to\infty}n^{-1}Y_n\cdot e_1=v. $$
\item (Central limit theorem). There exists $\sigma=\sigma(\beta,d),0<\sigma<+\infty$ such that $(n^{-1/2}(Y_{\lfloor nt\rfloor}\cdot e_1-v\lfloor nt \rfloor), t \geqslant 0)$ converges in law as $n\to+\infty$ to a Brownian motion with variance $\sigma^2$. \end{itemize}
R. van der Hofstad and M. Holmes \cite{vdHH10} proved that the speed $v$ is strictly increasing in $\beta$ for $d\geqslant 9$ relying on the lace expansion technique. In this paper, we prove that the speed of an excited random walk is differentiable in $\beta$ for $d\geqslant 8$ and monotone in high dimension using cut times and martingale transforms. This technique is the new approach that is different from \cite{vdHH10}. About differentiability of the speed, we are interested in the derivative at the critical point $\beta=0$. When the derivative at $0$ is positive and is continuous in a neighbourhood of $0$ then the speed is monotonic in that neighbourhood. The existence of the derivative at $0$ of the speed of a random process plays an important role in mathematical physics. This problem is known as ``Einstein relation for random process", (see for instance the work of N. Gantert, P. Mathieu and A. Piatnitski \cite{GMP12}, see also \cite{BHOZ13}, \cite{KOa05}, \cite{KOb05}, \cite{LR94}).}\fi
The excited random walk was introduced in \cite{BW03}; it is the special case of $m-$ERW with $m=1$. Our main result for the excited random walk is the following:
\begin{thm}Let $v(\beta)$ be the speed of ERW with bias $\beta.$ \label{ThERW} \begin{enumerate} \item $v(\beta)$ is differentiable in $\beta\in [0,1)$ for $d\geqslant 8$. For $d\geqslant 6$,
the derivative at the critical point $0$ exists, is positive
and satisfies:
$$\lim_{\beta\to 0}\frac{v(\beta)}{\beta}=\frac{1}{d}R(0) \, , $$ where $R(0):=\lim_{n\to\infty}(R_n/n)$ and $R_n$ is the number of points visited by time $n$ by the symmetric simple random walk on ${\mathbb Z}^{d}.$ \item There exist $d_0\in \mathbb{N}^{*},\,\beta_0\in(0,1)$ such that the speed of the excited random walk is strictly increasing in $\beta \in\left[0,1\right]$ for $d\geqslant d_0$ and strictly increasing in $\beta\in[0,\beta_0)$ for $d\geqslant 8$. \end{enumerate} \end{thm} For the monotonicity of the speed in a neighborhood of $0$, we need $d\geqslant 10$ in Theorem \ref{Therwrc}, but in Theorem \ref{ThERW} we need only $d\geqslant 8.$ In point $1$ of Theorem \ref{ThERW}, the differentiability of $v(\beta)$ on $[0,1)$ for $d\geqslant 8$ is contained in Theorem $2.3$ of \cite{HoSu12}. However, we add the differentiability at the critical point $0$ for $d\geqslant 6$. Point $2$ is proved in \cite{vdHH10} for $d_0=9$ by the lace expansion method.
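The constant $R(0)=\lim_n R_n/n$ has a classical probabilistic meaning: by the range theorem, for $d\geqslant 3$ it equals the non-return probability of the SRW, so it is easy to estimate numerically. A Monte Carlo sketch, for illustration only (the sample sizes are arbitrary choices of ours):

```python
import random

def srw_range_fraction(d, n_steps, n_walks, rng):
    """Monte Carlo estimate of E[R_n/n] for the symmetric SRW on Z^d,
    where R_n is the number of distinct sites visited by time n.
    For d >= 3 the limit R(0) equals the walk's non-return probability."""
    total = 0.0
    for _ in range(n_walks):
        pos = (0,) * d
        seen = {pos}
        for _ in range(n_steps):
            axis = rng.randrange(d)          # pick a coordinate direction
            sign = rng.choice((-1, 1))
            pos = tuple(p + (sign if i == axis else 0)
                        for i, p in enumerate(pos))
            seen.add(pos)
        total += len(seen) / (n_steps + 1)
    return total / n_walks
```

For $d=6$ the estimate is close to $0.9$, consistent with the known Pólya return probability of roughly $0.105$ in dimension $6$; the finite-horizon estimate is slightly biased upward.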
\iffalse{\subsection{Relations with previous works}
The idea of using cut times for laws of large numbers is not new, (see \cite{BSZ03}, \cite{HoSu12}, \cite{Hol12}). Our contribution is to use it for $\Delta-$exchangeable environment.
All previous results due to M. Holmes and al, (see \cite{Hol12},
\cite{HoSu12}, \cite{vdHH10}), were proved using lace expansions. Here, to prove the monotonicity of the speed for excited random walk, we do not use lace expansions, we use directly Formula \eqref{1}. This formula has the advantage that the denominator ${\mathbb E}_{\beta}(T|0\in \D)$ does not depend on $\beta$. Girsanov's transform gives an expression of the derivative $\partial_{\beta}v(\beta)$ ( see the formula \eqref{DerNum}). Then, using the fact that ${\mathbb E}(T)<\infty$ for $d\geqslant 8$, we estimate \eqref{DerNum} to obtain the monotonicity of the speed. Thus Point $2$ of Theorem \ref{6} is not new but the method of the proof is different from previous works.
Theorem \ref{md} is new. In Theorem \ref{23}, the notion of $\Delta-$exchangeable environments is new. Now we can compare Theorem \ref{23} with results of M. Holmes and al for random i.i.d. cookie environments.}\fi
In our paper, we prove the results using cut times and Girsanov's transform. Our proof is based on two ingredients: \begin{itemize} \item Using stationarity, it is possible to express the expectation of the speed in the direction $e_1$ as follows: \begin{equation}\label{F1}
v({\mathbb Q})=\frac{{\mathbb Q}{\mathbb E}_{\beta}(X_T|0\in\D)}{{\mathbb E}_{\beta}(T|0\in \D)}, \end{equation} where ${\mathbb E}_{\beta}$ is the expectation under the ``quenched'' law ${\mathbb P}_{\beta}$ of ERWRC, $\D$ is the set of cut times, and $X_T=Y_T\cdot e_1$. In the case of i.i.d. cookies, the speed is deterministic and equals $v({\mathbb Q})$. \item Starting from $\eqref{F1}$, we consider two random cookie environments $\beta_1$ and $\beta_2$, and a stationary coupling $\beta_t=(1-t)\beta_1+t\beta_2, t\in[0,1].$ We get the expectation of the speed for the random cookies $\beta_t$ (see \eqref{vitesst}) as follows: \begin{align} \label{F2}
f(t):=\frac{{\mathbb Q}{\mathbb E}_{\beta_t}(X_T \, 1_{0\in\D})}{E(T \, 1_{0\in\D})}. \end{align} Here $E$ denotes the expectation w.r.t. $P$. We use Girsanov's transform to make the dependence of $f(t)$ on $t$ more explicit. This enables us to compute the derivative of $f(t)$ when $d\geqslant 8$ and to prove that this derivative is positive when $d$ is high enough or when the random cookies are close enough to $0$. \end{itemize}
All results on differentiability and monotonicity in \cite{vdHH10}, \cite{Hol12}, \cite{HoSu12} were proved using the lace expansion. We do not use this method in this paper. In \cite{HoSu12}, the authors used cut times to prove the law of large numbers. The existence of the speed and the convergence of the lace expansion series allow one to express the speed by the lace expansion formula. This formula was used to calculate the derivative and to show that it is positive. In this paper, to prove the law of large numbers we also use cut times. However, with different arguments, we obtain formulas for the speed (see \eqref{F1} and \eqref{F2}) which are more explicit than those of the previous works.
In order to prove the monotonicity of the speed, we do not use the lace expansion formula; we directly use the formulas \eqref{F1} and \eqref{F2} for the speed via the cut time $T$. These formulas have the advantage that the denominator ${\mathbb E}_{\beta}(T|0\in \D)$ does not depend on the random cookie environment $\beta$. Girsanov's transform gives an expression for the derivative $\frac{\partial f}{\partial t}(t)$ via the cut time $T$ (see \eqref{dht}). Using this formula, we estimate the derivative of the speed and obtain that it is positive when $d$ is large enough, depending on the moments of $T$. Here, the condition $d\geqslant 6$ is needed for the existence of cut times, and we have $\sup_{d\geqslant 6}\hat{E}T=\sup_{d\geqslant 6}\frac{1}{P(0\in\D)}<+\infty$.
In the proof of the monotonicity in Theorem \ref{Therwrc}, the third moment of the cut time $T$ appears in the estimation of the derivative (see \eqref{T3}). Therefore, we need $d\geqslant 10$ to get $\sup_{d\geqslant 10}\hat{E}(T^3)<+\infty$. For the particular case of ERW, the second moment of the cut time $T$ appears (see \eqref{T2}). Hence, we need $d\geqslant 8$ to have $\sup_{d\geqslant 8}\hat{E}(T^2)<+\infty.$
Notice that the constant $d_0$ in our method depends on the moments of $T$, whereas the lace expansion method of M. Holmes and co-authors gives an explicit integer $d_0$: for example, $d_0=9$ in \cite{vdHH10}, and $d_0=12$ in Theorem $2.3$ of \cite{Hol12}.
The paper is organized as follows: in Section \ref{Sec2}, we prove Theorem \ref{Therwrc}. First we give a construction of the $m-$ERWRC. We then prove the law of large numbers and obtain an expression for the speed by using cut times for stationary and $e_1$-exchangeable cookies. In the particular case of i.i.d. cookies, we prove that the speed is deterministic. Using Girsanov's transform, we compute the derivative of the speed and estimate it to obtain the differentiability and monotonicity of the speed. Section \ref{Sec3} is devoted to the proof of Theorem \ref{ThERW}, based on that of Theorem \ref{Therwrc}. In Section \ref{Sec4}, we prove Theorem \ref{md}. The key to the proof is Lemma \ref{bd2}, which we use to show that the derivative of the speed tends, uniformly in the drift $\beta$, to a positive constant as the number of cookies tends to infinity.
\section{Proof of Theorem \ref{Therwrc}}\label{Sec2} \subsection{A construction of $m-$ERWRC} \label{contructES} We begin this section by constructing the $m-$ERWRC from independent sequences of random variables. This plays an important role in proving the monotonicity. Fix $\beta(y)=(\beta_1(y),\beta_2(y),...,\beta_m(y)), y\in{\mathbb Z}^d.$ First, we consider a simple random walk (SRW) $\{ \tilde{Z}_n \}_{n\in{\mathbb Z}}$ on $ {\mathbb Z}^{d-1}$ with $\tilde{Z}_0:=0$. Consider three sequences of random variables and random vectors $\{\eta_i\}_{i\geqslant 0}$, $\{\xi_i\}_{i\geqslant 0}$ and $\{\zeta_1(y),...,\zeta_m(y)\}_{y\in{\mathbb Z}^d}$ whose entries are mutually independent,
independent of $\tilde Z$, and distributed as $$\eta_i\sim Ber\left(\frac{1}{d}\right),\quad \xi_i\sim Ber\left(\frac{1}{2}\right), \quad \zeta_k(y)\sim Ber\left((\beta_k(y)+1)/2\right)\text{ where } 1\leqslant k\leqslant m.$$ $\{\tilde{Z}_n\}_{n\geqslant 0}$ will give the sequence of vertical moves of the excited random walk, and ${\eta_i=1}$ will mean that at time $i$ the excited random walk performs a horizontal move. The direction of this move is given by $\xi_i$ when the $m-$ERWRC is at a site that has been visited more than $m-1$ times before time $i$, and by $\zeta_k(y), k\in\{1,2,...,m\}, y\in{\mathbb Z}^d$ otherwise. More precisely, set $A_i^n:=\{\sum_{j=0}^{n-1}(1-\eta_j)=i\}$, $(0\leqslant i\leqslant n)$ for $n>0$ and $A_0^0:=\Omega$. Then for every $n\geqslant 0$, we have $\bigcup_{i=0}^{n}A_i ^n=\Omega$ and $A_i ^n\bigcap A_j^n=\emptyset$ for $i\neq j$. We define the vertical component $Z$ of $Y$ by: \begin{equation} \label{lienZ-tildeZ}\forall n\in{\mathbb Z} , Z_n= \begin{cases}\tilde{Z}_{0} & \mbox{ if } n = 0 \, , \\
\tilde{Z}_{\sum_{i=0}^{n-1}(1-\eta_i)} & \mbox{ if } n > 0 \, , \\
\tilde{Z}_{-\sum_{i=n}^{-1}(1-\eta_i)} & \mbox{ if } n < 0 \, .
\end{cases} \end{equation}
We now construct the horizontal component $X$ of $Y$. Set $Y_0:=0$ and assume that $(Y_j, 0 \leqslant j\leqslant i)$ are constructed. Let us define $Y_{i+1}$. On the event $Y_i\notin_k$, i.e. $Y_i$ has been visited exactly $k-1$ times before time $i$, set $$ \E_i:=\begin{cases} (2 \zeta_k(Y_i) -1) \ind_{\eta_i=1}&\text{ if }1\leqslant k\leqslant m\, ,\\ (2 \xi_i -1) \ind_{\eta_i=1}&\text{ if } k>m\,. \end{cases} $$ We then set $X_{i+1}:=X_i + \E_i$, and $Y_{i+1}:=(X_{i+1},Z_{i+1})$. With this construction, we obtain: \begin{lem}\label{3}$Y$ is an $m-$ERWRC with quenched law ${\mathbb P}_{\beta}$. \end{lem} \begin{proof} For the proof of Lemma $\ref{3}$, we need the following lemma: \begin{lem} \label{4}Let $\F$ and ${\mathscr G}$ be two sigma-algebras and $C\in \F\cap{\mathscr G}$
such that $\F|_C:=\{A\cap C \mbox{ with }A\in\F\}\subset {\mathscr G}$. For any integrable random variable $V$, we get
$${\mathbb E}(V1_C|\F)={\mathbb E}\left[{\mathbb E}(V1_C|{\mathscr G})|\F\right].$$ \end{lem} The proof of Lemma $\ref{4}$ is easy. Now, we return to the proof of Lemma $\ref{3}$. Set \begin{align*} &\F_n^{Y}:=\sigma(Y_j, 0 \leqslant j\leqslant n)\\ &\F_n:=\sigma(Z_j, 0 \leqslant j\leqslant n,\eta_j,\xi_j,\zeta_k(y),\, 0 \leqslant j\leqslant n-1, 1\leqslant k\leqslant m, y\in{\mathbb Z}^d)\\ &{\mathscr G}_{ni}:=\sigma (\tilde Z_j, 0 \leqslant j\leqslant i,\xi_j,\eta_j,\zeta_k(y),\, 0 \leqslant j\leqslant n-1, 1\leqslant k\leqslant m, y\in{\mathbb Z}^d) \end{align*} It is clear that $\F_n^{Y}\subset \F_n \mbox{ and } A_i^n\in \F_n\cap {\mathscr G}_{ni}$.
Moreover, $\F_n|_{A_i^n}\subset {\mathscr G}_{ni}$. Now, using Lemma $\ref{4}$, we have for $j\geqslant 2$, \allowdisplaybreaks{ \begin{align*}
&{\mathbb P}(Y_{n+1}-Y_n=\pm e_j|\F_n^{Y})\\
&=\sum_{i=0}^{n}{\mathbb P}\left(\tilde Z_{i+1}-\tilde Z_i=\pm e_j,A_i^n,\eta_n=0|\F_n^{Y}\right)\\
&=\sum_{i=0}^{n}{\mathbb E}\left[{\mathbb P}\left(\tilde Z_{i+1}-\tilde Z_i=\pm e_j,A_i^n,\eta_n=0|\F_n\right)|\F_n^{Y}\right]\\
&=\sum_{i=0}^{n}{\mathbb E}\left[{\mathbb P}\left(\tilde Z_{i+1}-\tilde Z_i=\pm e_j,A_i^n,\eta_n=0|{\mathscr G}_{ni}\right)|\F_n^{Y}\right]\\
&=\sum_{i=0}^{n}{\mathbb P}(\tilde Z_{i+1}-\tilde Z_i=\pm e_j){\mathbb P}(\eta_n=0){\mathbb P}(A_i^n|\F_n^{Y})\\ &={\mathbb P}(\tilde Z_{1}-\tilde Z_0=\pm e_j){\mathbb P}(\eta_n=0)=\frac{1}{2(d-1)}\cdot\left(1-\frac{1}{d}\right)=\frac{1}{2d}, \end{align*}} since $\sum_{i=0}^{n}{\mathbb P}(A_i^n|\F_n^{Y})=1$. For the case $e_j=e_1$, on the event $\{Y_n\notin_k\}$ with $k\leqslant m$,
\begin{align*}
&{\mathbb P}(Y_{n+1}-Y_n=+ e_1|\F_n^{Y})={\mathbb P}(\eta_n=1,\E_n=1|\F_n^{Y})\\
=&{\mathbb P}(\eta_n=1,\zeta_k(Y_n)=1|\F_n^{Y})={\mathbb P}(\eta_n=1)\cdot{\mathbb P}(\zeta_k(Y_n)=1|\F_n^{Y})\\ =&\frac{1}{d}\cdot\frac{1+\beta_k(Y_n)}{2}=\frac{1+\beta_k(Y_n)}{2d}. \end{align*}
The cases $e_j=-e_1$ and $k>m$ are treated similarly. Lemma $\ref{3}$ is now proved. \end{proof}
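The proof above is constructive, and it is straightforward to turn it into a simulation. The sketch below is our own (not from the text): `beta(k, y)` is a hypothetical accessor for $\beta_k(y)$, returning $0$ for $k>m$ so that the fair coin $\xi_i$ is recovered after $m$ visits; vertical steps are drawn uniformly among the $2(d-1)$ directions of ${\mathbb Z}^{d-1}$.

```python
import random

def simulate_erwrc(n_steps, d, beta, rng):
    """One trajectory of the m-ERWRC on Z^d, following the construction:
    eta ~ Ber(1/d) picks a horizontal move; on the k-th visit to a site,
    the horizontal direction is +1 with probability (1 + beta(k, y)) / 2."""
    Y = (0,) * d
    visits = {Y: 1}                       # Y_0 has been visited once at time 0
    path = [Y]
    for _ in range(n_steps):
        if rng.random() < 1.0 / d:        # eta_i = 1: horizontal move
            k = visits[Y]                 # Y has been visited exactly k times
            b = beta(k, Y)                # cookie if k <= m, else 0 (fair coin)
            step = 1 if rng.random() < (1 + b) / 2 else -1
            Y = (Y[0] + step,) + Y[1:]
        else:                             # eta_i = 0: uniform step in Z^{d-1}
            j = rng.randrange(1, d)
            s = rng.choice((-1, 1))
            Y = Y[:j] + (Y[j] + s,) + Y[j + 1:]
        visits[Y] = visits.get(Y, 0) + 1
        path.append(Y)
    return path

rng = random.Random(0)
# Two fully biased cookies (m = 2) at every site, as an illustration:
path = simulate_erwrc(200, 6, lambda k, y: 1.0 if k <= 2 else 0.0, rng)
```

Every increment of the simulated path is a unit step of ${\mathbb Z}^d$, horizontal with probability $1/d$, exactly as in the quenched law ${\mathbb P}_{\beta}$.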
Now, set $\D:=\{n\in {\mathbb Z} \mbox{ such that }Z_{(-\infty,n)}\cap Z_{[n,+\infty)}=\emptyset\}$ to be the set of cut times of $Z$, where $Z_I:=\{Z_i,\, i\in I\}$ denotes the range of $Z$ over the time set $I$, and similarly let $\tilde\D$ be the set of cut times of $\tilde{Z}$. The sequence of cut times of $Z$ is then defined by induction:
\begin{align*} T_1&:=\inf\{n>0\mbox{ such that } n\in\D\},\\ T_{i+1}&:=\inf\{n>T_i\mbox{ such that } n\in\D\}\,\, \text{, for } i\geqslant 1,\\ T_{i-1}&:=\sup\{n<T_i\mbox{ such that } n\in\D\} \text{, for } i\leqslant 1. \end{align*} By construction, $T_0\leqslant 0< T_1$ and we set $T:=T_1.$ We define similarly $\tilde{T}_i$ and $\tilde{T}$ for $\tilde{Z}$. Observe that the laws of $T$ and $\tilde{T}$ do not depend on the environment $\beta$, since
they depend only on $Z$ and $\tilde{Z}$. Moreover, it follows from (\ref{lienZ-tildeZ}) that
\begin{equation}\label{T} \tilde{T}=\sum_{i=0}^{T-1}(1-\eta_i) \, , \, \, \text{ and } \{T>k\}=\left\{\tilde{T}>\sum_{i=0}^{k-1}(1-\eta_i)\right\} \, . \end{equation}
We consider $W:=\{\omega\in\Omega:\forall j\,,T_j(\omega)<\infty\}$. E. Bolthausen, A-S. Sznitman and O. Zeitouni \cite{BSZ03} proved that ${\mathbb P}(W)=1$ and ${\mathbb P}(0\in\D)>0$ for $d-1\geqslant 5$, that is, for $d\geqslant 6.$
Let $\hat{{\mathbb P}}:={\mathbb P}(.|0\in\D)$ be the Palm measure.
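On a finite stretch of trajectory, the cut-point condition $Z_{(-\infty,n)}\cap Z_{[n,+\infty)}=\emptyset$ can only be checked in its one-sided form. The following small sketch (ours, purely to fix the definition) extracts the cut points of a finite path:

```python
def cut_points(Z):
    """Indices n >= 1 such that Z[:n] and Z[n:] have disjoint ranges
    (finite-path, one-sided version of the set D of cut times)."""
    return [n for n in range(1, len(Z))
            if not set(Z[:n]) & set(Z[n:])]

# A nearest-neighbour path on Z: it backtracks once between times 1 and 4.
print(cut_points([0, 1, 2, 1, 2, 3, 4]))
# [1, 5, 6]
```

The backtracking step destroys the cut points at times $2,3,4$: the past range $\{0,1,2\}$ still meets the future range there. Lemma 1.1 of \cite{BSZ03} is precisely what guarantees that, for the two-sided walk in high dimension, such cut points occur with positive density.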
Exactly in the same way, we can prove (see Lemma 1.1 of \cite{BSZ03})
the following lemma: \begin{lem}\label{EsT} Let $f$ be a non-negative measurable function. For $d\geqslant 6$ we have \begin{equation} \label{P-hatP} \int fd{\mathbb P}=\frac{\int \sum_{k=0}^{T-1} f\circ\theta_k \, d\hat{{\mathbb P}}}{\int Td\hat{{\mathbb P}}} \, \end{equation} with the convention that if one side equals $+\infty$, then so does the other. A simple instance of this formula is obtained by taking $f=\ind_{0 \in \D}$, so that $\sum_{k=0}^{T-1} f\circ\theta_k =1$, leading to \begin{equation} \label{Ecutime}{\mathbb P}(0 \in \D) = (\hat{{\mathbb E}}{T})^{-1} \text{ and }{\mathbb E}[T1_{0\in\D}]=1. \end{equation} \end{lem} \begin{proof} Indeed, by Lemma 1.1 of \cite{BSZ03}, \eqref{P-hatP} holds for $f\,\ind_{f\leqslant c}$ for any positive constant $c$. Letting $c$ tend to $+\infty$, we get \eqref{P-hatP} by monotone convergence. \end{proof}
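Another instructive instance of \eqref{P-hatP}, recorded here only as a consistency check (it is not used in the sequel), is $f=\ind_{T=j}$ for a fixed $j\geqslant 1$: since $T\circ\theta_k=T-k$ for $0\leqslant k\leqslant T-1$, the sum has exactly one nonzero term, at $k=T-j$, provided $j\leqslant T$. Hence

```latex
$$
\sum_{k=0}^{T-1}\ind_{T=j}\circ\theta_k=\ind_{j\leqslant T}\,,
\qquad\text{so that}\qquad
{\mathbb P}(T=j)=\frac{\hat{{\mathbb P}}(T\geqslant j)}{\hat{{\mathbb E}}T}
={\mathbb P}(0\in\D)\,\hat{{\mathbb P}}(T\geqslant j)\,,
$$
```

the last equality by \eqref{Ecutime}. This is the classical inspection-paradox (length-biasing) relation between the law of $T$ and its Palm law; summing over $j\geqslant 1$ gives back $\sum_{j}{\mathbb P}(T=j)=\hat{{\mathbb E}}T/\hat{{\mathbb E}}T=1$, as it should.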
It is proved in \cite{BSZ03} that $\hat{{\mathbb E}}T=1/{\mathbb P}(0\in\D)<\infty$ for $d\geqslant 6$, ${\mathbb E} T<+\infty$ when $d\geqslant 8$ and ${\mathbb E} (T^2)<+\infty$ when $d\geqslant 10$. Hence we can take $f=T$ in \eqref{P-hatP}. Since $T\circ\theta_k=T-k$ for $k\in\{0,1,2,...,T-1\},$ \eqref{P-hatP} reads \begin{align}\label{7} \hat{{\mathbb E}}T \, {\mathbb E} T=\int \left[T+(T-1)+\cdots+1\right]d\hat{{\mathbb P}} = \hat{{\mathbb E}} \left( \frac{T^2 + T}{2} \right) \, . \end{align} Taking now $f=T^2$ and observing that $T^2\circ\theta_k=(T-k)^2$ for $k\in\{0,1,2,...,T-1\},$ \eqref{P-hatP} reads \begin{align}\label{ET3} \hat{{\mathbb E}}T \, {\mathbb E} (T^2)=\int \left[T^2+(T-1)^2+\cdots+1^2\right]d\hat{{\mathbb P}} = \hat{{\mathbb E}} \left[ \frac{T(T+1)(2T+1)}{6} \right] \, . \end{align} Therefore, \begin{equation} \hat{{\mathbb E}}(T^2)<+\infty \text{ for } d\geqslant 8,\quad\hat{{\mathbb E}}(T^3)<+\infty \text{ for } d\geqslant 10. \end{equation} To prove monotonicity of the speed, we need the stronger statement that these moments are bounded uniformly in $d$, as in the following lemma: \begin{lem} \label{TL2} $$c_1:=\sup_{d\geqslant 8}\hat{{\mathbb E}}\left(T^2\right)<+\infty \quad\text{ and }\quad c_2:=\sup_{d\geqslant 10}\hat{{\mathbb E}}{(T^{3})}<+\infty \, .$$ \end{lem}
\begin{proof} From $\eqref{7}$, we have $$\hat{{\mathbb E}}(T^2) =2{\mathbb E} T\hat{{\mathbb E}}T-\hat{{\mathbb E}}T=\frac{2{\mathbb E} T-1}{{\mathbb P}(0\in\D)}.$$ Because $\lim_{d\to+\infty}{\mathbb P}(0\in\tilde{\D})=1$ and ${\mathbb P}(0\in\D)=\frac{d-1}{d}{\mathbb P}(0\in\tilde{\D})$
(see \cite{ET60}, remark 3, page 248), to show that $c_1<+\infty$ (resp. $c_2<+\infty$), it is enough to prove that $\sup_{d\geqslant 8}{\mathbb E} (T)<+\infty$ (resp. $\sup_{d\geqslant 10}{\mathbb E} (T^2)<+\infty$). \\ Choose $\ep$ such that $0<\ep<1$. We consider a (lazy) simple random walk $Z^{\ep}$ on ${\mathbb Z}^{d-1}$ such that: \begin{align}
&{\mathbb P}\left(Z^{\ep}_{n+1}-Z^{\ep}_n=e|\F^{Z^{\ep}}_n\right)=\frac{\ep}{2(d-1)}, \text{ for } e\in\{\pm e_2,\pm e_3,...,\pm e_{d}\}\notag,\\
&{\mathbb P}\left(Z^{\ep}_{n+1}-Z^{\ep}_n=0|\F^{Z^{\ep}}_n\right)=1-\ep. \end{align} Note that, we can construct $Z^{\ep}$ from the sequences $(\tilde{Z}_n)_{ n\in{\mathbb Z}}$, $(\eta^{\ep}_n)_{ n\in{\mathbb Z}}$,
where $\eta^{\ep}_{n}\sim Ber(1-\ep)$, as in the construction of $Z.$ Set $\J:=\{n \text{ such that }Z^{\ep}_n\ne Z^{\ep}_{n-1}\}$ and write $\J=\{...<j_{-1}<j_0\leqslant 0<j_1<...\}.$ Set $\mu_n:=j_n-j_{n-1}\text{ for }n>1$ and $\mu_1:=j_1.$ Then the $(\mu_n)_{n\geqslant 1}$ are i.i.d. Geometric$(\ep)$ random variables. We call $\{T^{\ep}_n\}_{n\in{\mathbb Z}}$ the cut times of $Z^{\ep}$, $T^{\ep}:=T^{\ep}_1$
and $\D^{\ep}$ the corresponding set of cut times. Then ${\mathbb P}(0\in\D^{\ep})=\ep{\mathbb P}(0\in\tilde{\D})$ converges to $\ep$ as $d\to\infty$, and ${\mathbb P}(0\in\D^{\ep})$ is bounded by $\ep.$ We also have $T^{\ep}=\sum_{i=1}^{\tilde{T}}\mu_i$, where $\tilde{T}$ is independent of the $(\mu_i)$. Then \begin{align} {\mathbb E}(T^{\ep})&=\sum_{k\geqslant 1}{\mathbb E}\left(\sum_{i=1}^{k}\mu_i\right){\mathbb P}[\tilde{T}=k]\notag\\ &=\sum_{k\geqslant 1}\frac{k}{\ep}{\mathbb P}[\tilde{T}=k]\notag\\ &=\frac{{\mathbb E} \tilde{T}}{\ep}. \end{align} A similar computation gives \begin{align*} {\mathbb E}[(T^{\ep})^2]=\frac{{\mathbb E}(\tilde{T}^2)+(1-\ep){\mathbb E}(\tilde{T})}{\ep^2}. \end{align*} Since $T$ is $T^\ep$ with $\ep=\frac{d-1}{d}$, we have ${\mathbb E} T=\frac{d}{d-1}{\mathbb E}\tilde{T}$, so that, for any $\ep$, ${\mathbb E} T=\frac{d\ep}{d-1}{\mathbb E}(T^{\ep})$. Therefore, in order to prove that $\sup_{d\geqslant 8}{\mathbb E} T<+\infty$ (resp. $\sup_{d\geqslant 10}{\mathbb E} (T^2)<+\infty$),
it is enough to prove that $\sup_{d\geqslant 8}{\mathbb E} (T^{\ep})<+\infty$ (resp. $\sup_{d\geqslant 10}{\mathbb E} [(T^{\ep})^2]<+\infty$) for some fixed $\ep.$
Now, repeating the proof of (1.12) in \cite{BSZ03}, we obtain, for $k_j=1+Lj$, $j\geqslant 0$ ($L\geqslant 1, J\geqslant 1$ being two fixed integers), \begin{align} {\mathbb P}\left(T^{\ep}>k_{2J}\right)&\leqslant {\mathbb P}(0\in\D^{\ep})^J+(2J+1)\sum_{k\geqslant L}k{\mathbb P}\left(Z^{\ep}_k=0\right)\notag\\ &\leqslant {\ep}^J+(2J+1)\sum_{k\geqslant L}k{\mathbb P}\left(Z^{\ep}_k=0\right). \end{align} Using the fact that ${\mathbb P}\left(Z^{\ep}_n=0\right)$ is non-increasing in $d\geqslant 2$ (we postpone the proof to the end), let $D\geqslant 6$; we have \begin{align} {\mathbb P}[T^{\ep}>k_{2J}]\,(\text{with } d\geqslant D)&\leqslant {\ep}^J+(2J+1)\sum_{k\geqslant L}k{\mathbb P}\left(Z^{\ep}_k=0\right)\text{ (with }d=D)\notag\\ &\leqslant \ep^J+ (2J+1)\,\text{const}\, L^{-\frac{D-5}{2}}. \end{align} Choosing $\gamma$ large enough depending on $\ep$, and setting $J=[\gamma\log n]$, $L=[\frac{n}{3J}]$, we obtain \begin{equation} {\mathbb P}[T^{\ep}>n]\leqslant c(\log n)^{1+\frac{D-5}{2}}n^{-\frac{D-5}{2}},\quad n\geqslant 1,\, d\geqslant D, \end{equation} and \begin{equation} n{\mathbb P}[T^{\ep}>n]\leqslant c(\log n)^{1+\frac{D-5}{2}}n^{-\frac{D-7}{2}},\quad n\geqslant 1,\, d\geqslant D, \end{equation} where $c$ depends only on $D$ and $\ep.$ Choosing $D=8$, this implies $\sup_{d\geqslant 8}{\mathbb E}{(T^{\ep})}<\infty$, and choosing $D=10$, $\sup_{d\geqslant 10}{\mathbb E}{[{(T^{\ep}})^2]}<\infty.$
Now, in order to finish the proof of Lemma \ref{TL2}, we have to prove that ${\mathbb P}[Z^{\ep}_n=0]$ is non-increasing in $d\geqslant 2$.\\ Since the bounds above are needed for some fixed $\ep$ only, we may and do assume that $0<\ep\leqslant 1/2$. Using characteristic functions, we obtain \def \Th {\Theta} \begin{align} {\mathbb P}(Z^{\ep}_n=0)&=\frac{1}{(2\pi)^{d-1}}\int_{-\pi}^{\pi}...\int_{-\pi}^{\pi}\left(\frac{\ep}{d-1}\sum_{i=1}^{d-1}\cos \theta_i+1-\ep\right)^nd\theta_1...d\theta_{d-1}\notag\\ &=\frac{1}{(2\pi)^{d-1}}\int_{-\pi}^{\pi}...\int_{-\pi}^{\pi}\left(\frac{1}{d-1}\sum_{i=1}^{d-1}\left(\ep\cos \theta_i+1-\ep\right)\right)^nd\theta_1...d\theta_{d-1}\notag\\ &={\mathbb E}\left[\left(\frac{1}{d-1}\sum_{i=1}^{d-1}\left(\ep\cos \Th_i+1-\ep\right)\right)^n \right], \end{align} where $\{\Th_i\}_{i\geqslant 1}$ is a sequence of i.i.d. random variables with uniform distribution on $[-\pi,\pi].$ Since $0<\ep\leqslant 1/2$, each term $\ep\cos\Th_i+1-\ep$ is nonnegative. Consider the function $f(x)=x^n$; $f$ is convex on $[0,+\infty)$ for every $n\geqslant 1$, so for all nonnegative $x_1,x_2,...,x_d$, $$f\left(\frac{x_1+x_2+...+x_d}{d}\right)\leqslant\frac{f(x_1)+f(x_2)+...+f(x_d)}{d}.$$ For nonnegative $a_1,a_2,...,a_d$, choose $$x_1=\frac{a_1+a_2+...+a_{d-1}}{d-1},\,x_2=\frac{a_2+a_3+...+a_{d}}{d-1},...,x_d=\frac{a_d+a_1+...+a_{d-2}}{d-1},$$ so that $\frac{x_1+\cdots+x_d}{d}=\frac{a_1+\cdots+a_d}{d}$, and we get \begin{align} &\left(\frac{a_1+a_2+...+a_d}{d}\right)^n\notag\\ &\leqslant\frac{1}{d}\left\{\left(\frac{a_1+a_2+...+a_{d-1}}{d-1}\right)^n+\left(\frac{a_2+a_3+...+a_d}{d-1}\right)^n+...+\left(\frac{a_d+a_1+...+a_{d-2}}{d-1}\right)^n\right\}. \end{align} Now, take $a_i=\ep\cos \Th_i+1-\ep\text{ for }i=1, \cdots, d$ and take expectations. Since the $\Th_i$ are i.i.d., each of the $d$ terms on the right-hand side has the same expectation, and it comes \begin{align} {\mathbb E}\left[\left(\frac{1}{d}\sum_{i=1}^d\left(\ep\cos \Th_i+1-\ep\right)\right)^n\right] \leqslant {\mathbb E}\left[\left(\frac{1}{d-1}\sum_{i=1}^{d-1}\left(\ep\cos \Th_i+1-\ep\right)\right)^n\right]. \end{align} This means that ${\mathbb P}[Z^{\ep}_n=0]$ is non-increasing in $d\geqslant 2.$ \end{proof}
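The cyclic-averaging inequality used in this proof is elementary to check numerically. In the sketch below (our own, not from the text), we verify on a fixed vector of nonnegative reals that $\big(\frac{1}{d}\sum_i a_i\big)^n$ is bounded by the average of the $n$-th powers of the $d$ leave-one-out means:

```python
def cyclic_mean_inequality(a, n):
    """Return (lhs, rhs) of ((a_1+...+a_d)/d)^n <= (1/d) * sum of the
    d cyclic (d-1)-means raised to the n-th power (a_i >= 0)."""
    d = len(a)
    lhs = (sum(a) / d) ** n
    # The cyclic means x_1, ..., x_d are exactly the leave-one-out means.
    cyclic_means = [(sum(a) - a[i]) / (d - 1) for i in range(d)]
    rhs = sum(m ** n for m in cyclic_means) / d
    return lhs, rhs

# Values of the form eps*cos(theta) + 1 - eps with eps <= 1/2 are in [0, 1]:
a = [0.9, 0.3, 0.55, 1.0, 0.1, 0.7]
for n in (1, 2, 3, 7):
    lhs, rhs = cyclic_mean_inequality(a, n)
    assert lhs <= rhs
```

For $n=1$ the two sides coincide (both equal the global mean), which is why the gap in the tail estimate only comes from the higher powers.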
\subsection{Girsanov's transform} \label{Girsanov} This section is devoted to the Girsanov transform connecting ${\mathbb P}_{\beta}$ and ${\mathbb P}_0$, where $\beta=\{(\beta_1,\beta_2,...,\beta_m)(y)\}_{y\in{\mathbb Z}^d}$ is a fixed environment.
We begin by introducing several $\sigma$-algebras. For $n \in {\mathbb Z}$, let $\F_n^Z = \sigma(Z_k, k \leqslant n)$. For $n \geqslant 0$, let $\F_n^Y = \sigma(Y_k, 0 \leqslant k \leqslant n)$, $\F_n = \sigma(\F_n^Z,\F_n^Y)= \sigma(\F_{-1}^Z, \F_n^Y)$, and ${\mathscr G}_n= \sigma(\F_n^Y,\sigma(Z_k, k \in {\mathbb Z}))$. We get $\F_n \subset {\mathscr G}_n$. Moreover, $T$ is not an $(\F_n)$-stopping time, but it is obviously a $({\mathscr G}_n)$-stopping time, so that we can define the $\sigma$-algebra ${\mathscr G}_T$ of the events prior to $T$. Recall that $\E_j=(Y_{j+1}-Y_j)\cdot e_1$ and that $\{Y_j\notin_k\}$ means that $Y_j$ has been visited exactly $k$ times at time $j$. We define, for $n \geqslant 0$ and $\beta \in ([-1,1]^m)^{{\mathbb Z}^d}$: $$ M_n(\beta)=\prod_{j=0}^{n-1}\prod_{k=1}^{m} \left[1+\E_j\beta_k(Y_j) 1_{Y_j\notin_k}\right] \, , $$ with the convention that the empty product equals $1$, so that $M_0(\beta)=1$.
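To make the density $M_n(\beta)$ concrete: along a recorded trajectory it is a running product over the steps, in which only the $k$-th factor with $k$ equal to the current visit count contributes (and only on horizontal steps, since $\E_j=0$ otherwise). A small sketch, our own, with `beta(k, y)` a hypothetical accessor for $\beta_k(y)$:

```python
def girsanov_density(path, m, beta):
    """M_n(beta) = prod_{j<n} prod_{k<=m} (1 + eps_j * beta_k(Y_j) * 1_{Y_j in_k}),
    where eps_j is the e_1-component of the j-th increment.  Only the factor
    with k = (current visit count of Y_j) differs from 1."""
    visits = {}
    M = 1.0
    for j in range(len(path) - 1):
        y = path[j]
        visits[y] = visits.get(y, 0) + 1
        k = visits[y]                      # y has been visited exactly k times
        eps = path[j + 1][0] - y[0]        # horizontal component of the step
        if k <= m:
            M *= 1 + eps * beta(k, y)
    return M

# Three fresh sites, always stepping +e_1, constant first cookie b = 0.5:
# M_3 = (1 + b)^3 = 3.375.
print(girsanov_density([(0, 0), (1, 0), (2, 0), (3, 0)], 1, lambda k, y: 0.5))
# 3.375
```

Vertical steps have $\E_j=0$ and contribute a factor $1$, consistent with the fact that the law of $Z$ does not depend on $\beta$.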
\begin{lem} \label{densite} For any $\beta \in ([-1,1]^m)^{{\mathbb Z}^d}$, $d \geqslant 6$, $n \geqslant 0$,
$$ M_n(\beta) = \frac{d\mathbb{P}_{\beta}|_{\F_n}}{d\mathbb{P}_0|_{\F_n}} \, , \, \,
M_n(\beta) = \frac{d\mathbb{P}_{\beta}|_{{\mathscr G}_n }}{d\mathbb{P}_0|_{{\mathscr G}_n}} \, , \, \,
M_T(\beta) = \frac{d\mathbb{P}_{\beta}|_{{\mathscr G}_T}}{d\mathbb{P}_0|_{{\mathscr G}_T}} \, .
$$
\end{lem}
\begin{proof}
Since $ \F_n \subset {\mathscr G}_n$, $M_n(\beta)$ is $ \F_n$-measurable, and $T$ is a finite $({\mathscr G}_n)$-stopping time, it is enough to prove that $M_n(\beta) = \frac{d\mathbb{P}_{\beta}|_{{\mathscr G}_n }}{d\mathbb{P}_0|_{{\mathscr G}_n}}$. Let $A \in \F_{-1}^Z$, $(y_1,..., y_n) \in ({\mathbb Z}^d)^n$, and $B \in \sigma(Z_{n+k}-Z_n, k \geqslant 0)$ be fixed. Since $(Z_{n+\cdot}-Z_n)$ is independent of $\F_n$, we get: $${\mathbb P}_{\beta}(A, Y_0=0, Y_1=y_1,..., Y_n=y_n, B)
= {\mathbb P}_{\beta}(A, Y_0=0, Y_1=y_1,..., Y_n=y_n) {\mathbb P}_{\beta}(B). $$ Note that the law of $Z$ does not depend on $\beta$, so that ${\mathbb P}_{\beta}(B)={\mathbb P}_{0}(B)$. Now by the definition of $m-$ERWRC, $$
\mathbb{P}_{\beta}[Y_n=y_n \left| A, Y_0=0, Y_1=y_1,..., Y_{n-1}=y_{n-1} \right.]
= \frac{1}{2d}\prod_{k=1}^{m}\left[1+{\ep}_{n-1}\beta_k(y_{n-1})1_{y_{n-1}\notin_k}\right],\\ $$ where $\ep_{n-1}=(y_{n}-y_{n-1}).e_1$.
Then we get by induction that for any $\beta \in ([-1,1]^m)^{{\mathbb Z}^d}$, \begin{align*} \mathbb{P}_{\beta}[A, Y_0=0, Y_1=y_1,..., Y_n=y_n] & = \left(\frac{1}{2d}\right)^n\prod_{j=0}^{n-1}\prod_{k=1}^{m}\left[1+{\ep}_j\beta_k(y_j)1_{y_j\notin_k}\right] \mathbb{P}_{\beta}[A] \\ & = \left(\frac{1}{2d}\right)^n\prod_{j=0}^{n-1}\prod_{k=1}^{m}\left[1+{\ep}_j\beta_k(y_j)1_{y_j\notin_k}\right] \mathbb{P}_{0}[A] \, , \end{align*} where the last equality comes from the fact that $A\in \F_{-1}^Z$ and the law of $Z$ does not depend on $\beta$. Hence, $$\frac{\mathbb{P}_{\beta}[A, Y_0=0, Y_1=y_1,..., Y_n=y_n, B]} {\mathbb{P}_0[A,Y_0=0,Y_1=y_1,...,Y_n=y_n,B]} = \prod_{j=0}^{n-1}\prod_{k=1}^{m}\left[1+{\ep}_j\beta_k(y_j)1_{y_j\notin_k}\right] \, . $$ We have just proved that for all $A \in \F_{-1}^Z$, $(y_1,..., y_n) \in ({\mathbb Z}^d)^n$, and $B \in \sigma(Z_{n+k}-Z_n, k \geqslant 0)$, $$ \mathbb{P}_{\beta}[A,Y_0=0,Y_1=y_1,...,Y_n=y_n,B] = {\mathbb E}_{0}[1_A \, 1_{Y_0=0,Y_1=y_1,...,Y_n=y_n} \, 1_B \, M_n(\beta)]. $$ The result follows since ${\mathscr G}_n = \sigma(\F_{-1}^Z,\F_n^Y,\sigma(Z_{n+k}-Z_n, k \geqslant 0))$. \end{proof} \subsection{Existence of the speed.} \subsubsection{$e_1$-exchangeable and stationary environment} We begin with some notation used throughout the section. For $z \in ({\mathbb Z}^{d-1})^{{\mathbb Z}}$, and $k,l \in {\mathbb Z}, k\leqslant l$, $z_{[k,l]}:=(z_k,z_{k+1},\cdots,z_l)$. The expectation w.r.t. the law ${\mathbb Q}$ of the environment is still denoted by ${\mathbb Q}$. We also use the
notation $\hat{P}(\cdot)=P(\cdot|0\in\D)$, and for $\beta$ fixed, $\hat{{\mathbb P}}_{\beta}(\cdot)={\mathbb P}_{\beta}(\cdot|0\in\D)$. Since ${\mathbb P}_{\beta}(0\in\D)$ does not depend on $\beta$, we get $\hat{P}(\cdot)={\mathbb Q}(\hat{{\mathbb P}}_{\beta}(\cdot))$. Let $A$ be any Borel set of $({\mathbb Z}^{d})^{{\mathbb N}}$, then \begin{align} & \hat{P}(Y_{T+.}-Y_T\in A)={\mathbb Q}[\hat{{\mathbb P}}_{\beta}(Y_{T+.}-Y_T\in A)]\notag\\
&=\sum_{k\geqslant 1}\sum_{z_{[1,k]}}\sum_{x\in{\mathbb Z}} {\mathbb Q}[\hat{{\mathbb P}}_{\beta}(Y_{k+.}-Y_k\in A|
T=k,Z_{[1,k]}=z_{[1,k]},X_k=x) \notag\\ & \hspace*{5cm} \times
\hat{{\mathbb P}}_{\beta}(X_k=x|T=k,Z_{[1,k]}=z_{[1,k]})]
\hat{P}(T=k,Z_{[1,k]}=z_{[1,k]}) \, . \end{align}
Note that by the definition of the cut times, the trajectory of $Y$ between $T_n$ and $T_{n+1}-1$ does not intersect the trajectory of $Y$ before $T_n$. Hence
$\hat{{\mathbb P}}_{\beta}(Y_{k+.}-Y_k\in A|T=k,Z_{[1,k]}=z_{[1,k]},X_k=x)$ depends only on $\{\beta(.,z)\}_{z\notin z_{[1,k]}}$, while
$\hat{{\mathbb P}}_{\beta}(X_k=x|T=k,Z_{[1,k]}=z_{[1,k]})$ depends only on $\{\beta(.,z)\}_{z\in z_{[1,k]}}$. $z_{[1,k]}$ and $x\in{\mathbb Z}$ being given, we consider the mapping $\delta:{\mathbb Z}^{d}\to{\mathbb Z}^{d}$ defined by: $$ \forall (u,v)\in{\mathbb Z}\times {\mathbb Z}^{d-1}, \, \, \delta (u,v)= \left\{ \begin{array}{ll}
(u,v) & \mbox{ if } v \in z_{[1,k]} \, ,
\\
(u-x,v) & \mbox{ if } v\notin z_{[1,k]} \, .
\end{array}
\right .
$$ It follows from the preceding remark that: \begin{align*}
& \hat{{\mathbb P}}_{\delta \beta}(Y_{k+.}-Y_k\in A|T=k,Z_{[1,k]}=z_{[1,k]},X_k=x) \\
& =\hat{{\mathbb P}}_{\theta_{(-x,0)} \beta}(Y_{k+.}-Y_k\in A|T=k,Z_{[1,k]}=z_{[1,k]},X_k=x) \\
& = \hat{{\mathbb P}}_{\theta_{( 0,z_k)} \beta}(Y_{.}\in A|T_{-1}=-k,Z_{[-k,-1]}=\overline{z}_{[-k,-1]}) \, , \end{align*} where $\theta_{(x,z)}\beta(u,v)=\beta(u+x,v+z)$, and $\overline{z}_{[-k,-1]}=(-z_k,z_1-z_k,\cdots,z_{k-1}-z_k)$. Moreover,
$$ \hat{{\mathbb P}}_{\delta \beta}(X_k=x|T=k,Z_{[1,k]}=z_{[1,k]})= \hat{{\mathbb P}}_{\beta}(X_k=x|T=k,Z_{[1,k]}=z_{[1,k]})
\, .
$$
The random environment being $e_1$-exchangeable, $\delta(\beta)$ has the same law as $\beta$. Hence, {\allowdisplaybreaks \begin{align} & \hat{P}(Y_{T+.}-Y_T\in A) \notag\\ &=\sum_{k\geqslant 1}\sum_{z_{[1,k]}}\sum_{x\in{\mathbb Z}}
{\mathbb Q}[\hat{{\mathbb P}}_{\delta\beta}(Y_{k+.}-Y_k\in A|T=k,Z_{[1,k]}=z_{[1,k]},X_k=x) \notag\\
& \hspace*{5cm} \times \hat{{\mathbb P}}_{\delta\beta}(X_k=x|T=k,Z_{[1,k]}=z_{[1,k]})] \hat{P}(T=k,Z_{[1,k]}=z_{[1,k]}) \notag\\ &=\sum_{k\geqslant 1} \sum_{z_{[1,k]}} \sum_{x\in{\mathbb Z}}
{\mathbb Q}[\hat{{\mathbb P}}_{\theta_{(0,z_k)}\beta}(Y_{.}\in A|T_{-1}=-k,Z_{[-k,-1]}=\overline{z}_{[-k,-1]}) \notag\\
& \hspace*{5cm} \times\hat{{\mathbb P}}_{\beta}(X_k=x|T=k,Z_{[1,k]}=z_{[1,k]})] \hat{P}(T=k,Z_{[1,k]}=z_{[1,k]}) \notag\\ &=\sum_{k\geqslant 1} \sum_{z_{[1,k]}}
{\mathbb Q}[\hat{{\mathbb P}}_{\theta_{(0,z_k)}\beta}(Y_{.}\in A|T_{-1}=-k,Z_{[-k,-1]}=\overline{z}_{[-k,-1]})] \hat{P}(T=k,Z_{[1,k]}=z_{[1,k]}). \end{align}} Using the stationarity of the environment, we get then {\allowdisplaybreaks \begin{align} & \hat{P}(Y_{T+.}-Y_T\in A) \notag\\ &=\sum_{k\geqslant 1}\sum_{z_{[1,k]}}
{\mathbb Q}[\hat{{\mathbb P}}_{\beta}(Y_{.}\in A|T_{-1}=-k,Z_{[-k,-1]}=\overline{z}_{[-k,-1]})] \hat{P}(T_{-1}=-k,Z_{[-k,-1]}=\overline{z}_{[-k,-1]}) \notag\\
&=\hat{P}(Y_.\in A). \end{align}} Now, set $H_n=X_{T_{n}}-X_{T_{n-1}}$ for $n\geqslant 1$. We have just seen that the sequence $\{H_n\}_{n\geqslant 1}$ is stationary
under $\hat{P}$. Furthermore, $\hat{E}|H_n|\leqslant \hat{E}T<\infty$ for $d\geqslant 6$. By the ergodic theorem, $\hat{P}$-a.s.,
$$\lim_{n\to\infty}\frac{H_1+H_2+...+H_n}{n}=\hat{E}(H_1|\F_H),$$ where $\F_H$ is the $\sigma$-algebra generated by the invariant sets of the sequence $\{H_n\}.$
Since $T_0=0$ under $\hat{P}$, we have $H_1=X_T$, and therefore $\lim_{n\to\infty}\frac{X_{T_n}}{n}=\hat{E}(X_T|\F_H)$. On the other hand, we also have $\hat{P}$-a.s. $\lim_{n\to\infty}\frac{T_n}{n}=\hat{E}(T)$,
so that $\hat{P}$-a.s., $V:=\lim_{n\to\infty}\frac{X_n}{n}$ exists for $d \geqslant 6$, and
$$V =\frac{\hat{E}(X_T|\F_H)}{\hat{E}(T)}.$$ \subsubsection{i.i.d. random environment}\label{seciid}
We consider now the case of an i.i.d. environment with $m$ cookies. In this situation, we can prove that the speed is deterministic. To this end, we construct an ergodic dynamical system on which the $m$-ERWRC is defined. Let $\mu$ be the law of $\beta = (\beta_1,\beta_2,...,\beta_m)(0) \in [-1,1]^m$.
We consider the probability space $$W:= \Gamma\times ({\mathbb Z}^{d-1})^{{\mathbb Z}}\times\{0,1\}^{{\mathbb Z}}\times \{\{0,1\}^m\}^{{\mathbb Z}},\text{ where }\Gamma=([-1,1]^{m})^{{\mathbb Z}},$$ endowed with the semi-product probability $P_s:={\mathbb Q}_s \times {\mathbb P}_{\gamma}$, meaning $P_s(d\gamma,\cdot)={\mathbb Q}_s(d\gamma)\,{\mathbb P}_{\gamma}(\cdot)$, where ${\mathbb Q}_s=\mu^{\otimes {\mathbb Z}}$ and, for $\gamma \in \Gamma$, $${\mathbb P}_{\gamma}=q^{\otimes{\mathbb Z}} \otimes p_{1}^{\otimes{\mathbb Z}}
\otimes \underset{n\in {\mathbb Z}}{\bigotimes}\underset{1\leqslant k\leqslant m}{\bigotimes}p_{kn}(\gamma) \, ,
$$ where \begin{itemize} \item $q$ is the law of the increments of $Z$, \item $p_1$ is a Bernoulli distribution of parameter 1/2, \item $p_{kn}(\gamma)$ is a Bernoulli distribution with $p_{kn}\{1\}=\frac{1+\gamma_n(k)}{2},\ p_{kn}\{0\}=\frac{1-\gamma_n(k)}{2}.$ \end{itemize} Now, we take $w=(\gamma,u,l,h)\in W$ with $\gamma\in\Gamma$, $u\in{({\mathbb Z}^{d-1})}^{{\mathbb Z}}$, $l\in\{0,1\}^{{\mathbb Z}}$, $h\in \{\{0,1\}^m\}^{{\mathbb Z}}$. For $n\in{\mathbb Z}$, let $(\beta_n, I_n, \zeta_n, \xi_n)$ be the canonical process on $W$: $$ \beta_n(w)=\gamma_n \in [-1,1]^m \, , \, \, I_n(w)=u_n \in {\mathbb Z}^{d-1} \, , \, \, \zeta_n(w)=l_n \in \{0,1\} \, , \,\, \xi_{n,j}(w)=h_{n,j} \in \{0,1\} \, . $$ From $(I_n)_{n \in {\mathbb Z}}$, we define $Z, \tilde{Z}$ as follows: $$Z_k= \begin{cases} I_1+...+I_k &\mbox{ if } k>0 \, , \\ 0 &\mbox{ if }k=0 \, , \\ -(I_{k+1}+...+I_0)&\mbox{ if } k<0 \, , \end{cases}\quad\quad \quad \eta_k:=1_{Z_k=Z_{k+1}}. $$ Set $U(k):=\inf\{n, \sum_{i=0}^{n-1}(1-\eta_i)=k\}$ and $\tilde{Z}_k:=Z_{U(k)}.$ It is clear that this definition of $Z, \tilde{Z}$ satisfies \eqref{lienZ-tildeZ}.
Once $Z$ is defined, we construct the horizontal part's increments $\E_i = X_{i+1}-X_i \in \{-1,0,1\}$, for $i \geqslant 0$, as follows. Set $Y_0=0$ and assume that $(Y_0,...,Y_i)$ have been constructed. Then, \begin{itemize} \item On the event $\{Y_i\notin_k\}$ ($1 \leqslant k \leqslant m$) (i.e. $Y_i$ has been visited exactly $k$ times at time $i$), $$\E_i = (2 \xi_{n_1(Y_0,...,Y_i),k} -1) \, 1_{Z_i=Z_{i+1}} \, , $$ where $n_1(Y_0,...,Y_i) = \inf \{n \leqslant i \mbox{ such that } Y_n=Y_i\}$. \item On the event $\{Y_i \in^m \}$ (i.e. $Y_i$ has been visited more than $m$ times at time $i$), $$\E_i = (2 \zeta_i -1) \, 1_{Z_i=Z_{i+1}} \, . $$ \end{itemize} Arguing as in Section \ref{contructES} on the construction of the $m$-ERWRC, one checks that the process $Y$ constructed above satisfies \begin{align*}
P_{\gamma}(Y_{n+1}-Y_n=\pm e_1|\F^{Y}_n, Y_n\notin_{k})=&\frac{1\pm \gamma_{n_1(Y_0,...,Y_n)}(k)}{2d} \text{ for }k\leqslant m,\\
P_{\gamma}(Y_{n+1}-Y_n=\pm e_i|\F^{Y}_n, Y_n\notin_{k})=&\frac{1}{2d}\text{ for }i>1 \text{ or }k>m. \end{align*}
\begin{lem}\label{bodecungluat} Under $P_s$, the sequence $(Y_n)_{n\geqslant 0}$ is an $m$-ERWRC with i.i.d. environment $\beta=(\beta(y))_{y\in {\mathbb Z}^d}$ of common law $\mu$. \end{lem} \begin{proof} We begin by giving an expression for the law of the $m$-ERWRC with i.i.d. environment $\beta$. Fix $y_0=0, y_1,..., y_n \in {\mathbb Z}^d$ and set $\ep_i=(y_{i+1}-y_i)\cdot e_1 \in\{0,\pm 1\}.$ Then, for an $m$-ERWRC with i.i.d. environment $\beta$, we have \begin{align*} &{\mathbb Q} {\mathbb P}_\beta[Y_0=y_0, Y_1=y_1,...,Y_n=y_n]\\ &={\mathbb Q}\left[\left(\frac{1}{2d}\right)^{n}\prod_{i=0}^{n-1}\prod_{k=1}^{m}(1+\beta_k(y_i)\ep_i \, 1_{y_i\notin_k})\right] \, . \end{align*} We decompose the first product according to the time $n_1$ of the first visit to each site. \begin{align*} & {\mathbb Q} {\mathbb P}_\beta[Y_0=y_0, Y_1=y_1,...,Y_n=y_n]\\ &=\left(\frac{1}{2d}\right)^{n} {\mathbb Q}\left\{\prod_{n_1=0}^{n-1} \prod_{k=1}^{m} \prod_{j=n_1}^{n-1 }\left[1+1_{y_{n_1} \notin} \, \beta_k(y_{n_1}) \, \ep_j \, 1_{y_j=y_{n_1}} \, 1_{y_{j}\notin_k} \right] \right\}\\ &=\left(\frac{1}{2d}\right)^{n}\prod_{n_1=0}^{n-1} {\mathbb Q}\left\{\prod_{k=1}^{m}\prod_{j=n_1}^{n-1} \left[1+1_{y_{n_1} \notin} \, \beta_k(y_{n_1}) \, \ep_j \, 1_{y_j=y_{n_1}} \, 1_{y_{j} \notin_k}\right]\right\}. \end{align*} The last equality comes from the independence of the random vectors $\beta(y_{n_1})$ over the distinct sites $y_{n_1}$ with $y_{n_1}\notin.$ On the other hand, using the construction above, \begin{align*} &P_s[Y_0=y_0, Y_1=y_1,...,Y_n=y_n]={\mathbb Q}_s {\mathbb P}_\gamma[Y_0=y_0, Y_1=y_1,...,Y_n=y_n]\\ &=\left(\frac{1}{2d}\right)^{n} {{\mathbb Q}_s}\left\{\prod_{n_1=0}^{n-1}\prod_{k=1}^{m}\prod_{j=n_1}^{n-1} \left[1+1_{y_{n_1}\notin}\,\gamma_{n_1}(k) \, \ep_j \, 1_{y_j=y_{n_1}} \, 1_{y_{j}\notin_k} \right]\right\}\\ &=\left(\frac{1}{2d}\right)^{n}\prod_{n_1=0}^{n-1} {{\mathbb Q}_s}\left\{\prod_{k=1}^{m}\prod_{j=n_1}^{n-1} \left[1+1_{y_{n_1}\notin} \, \gamma_{n_1}(k) \, \ep_j \, 1_{y_j=y_{n_1}} \, 1_{y_{j}\notin_k}\right]\right\}. 
\end{align*} This finishes the proof of Lemma $\ref{bodecungluat}$, since $\{1_{y_{n_1} \notin} \beta(y_{n_1})\}_{n_1=0,...,n-1}$ and $\{1_{y_{n_1} \notin}\gamma_{n_1}\}_{n_1=0,...,n-1}$ are two sequences of i.i.d. random vectors with common law $\mu$. \end{proof}
Now, we denote by $(\theta_k)_{k\in {\mathbb Z}}$ the canonical shift on $W$, i.e. $\theta_k(w_{\cdot})=(w_{k+\cdot})$. We set $$\hat{W} = \Gamma \times\left[ ({\mathbb Z}^{d-1})^{{\mathbb Z}} \cap\{0\in\D\}\right] \times \{0,1\}^{{\mathbb Z}} \times (\{0,1\}^m)^{{\mathbb Z}} \, . $$
On $\hat{W}$ we define $\hat{\theta}:=\hat{\theta}_1=\theta_T$ and $\hat{P}_s(\cdot)=P_s(\cdot|0\in\D).$
\begin{lem} $(W,\theta,P_s) $ is an ergodic system. As a consequence, $(\hat{W},\hat{\theta},\hat{P}_s) $ is also an ergodic system. \end{lem} \begin{proof} The idea of the proof comes from $\cite{BSZ03}$. Firstly, we prove that $\theta$ is a measure-preserving transformation. Consider a measurable set $A\times B$ of $W$, where $A \subset \Gamma$, and $B \subset ({\mathbb Z}^{d-1})^{{\mathbb Z}} \times \{0,1\}^{{\mathbb Z}} \times (\{0,1\}^m)^{{\mathbb Z}}$. We have that {\allowdisplaybreaks \begin{align*} &P_s(\theta_k^{-1}(A\times B))=P_s(\theta_k^{-1}A\times\theta_k^{-1}B)\\ &=\int_{\theta_k^{-1}A}{\mathbb P}_{\gamma}(\theta_k^{-1}B)d{\mathbb Q}_s=\int_{\theta_k^{-1}A}{\mathbb P}_{\theta_k\gamma}(B)d{\mathbb Q}_s\\ &=\int_{A} \, {\mathbb P}_{\gamma}(B) \, (\theta_k^{-1}{\mathbb Q}_s)(d\gamma) =\int_{A}{\mathbb P}_{\gamma}(B){\mathbb Q}_s(d\gamma) \\ &=P_s(A\times B). \end{align*} } Now, we prove that $\theta$ is ergodic. Let $A$ be a measurable subset of $W$, invariant under $\theta$, and let $\ep>0.$
There exists an integer $m_{\ep}>0$ and a measurable subset $A_{\ep}$ depending only on $(w_m)_{|m|\leqslant m_ {\ep}}$ such that
$$ | E_{P_s}[1_A-1_{A_{\ep}}] | \leqslant\ep.$$ Then, for $L\geqslant 0,$ $$ P_s(A)={\mathbb E}_{P_s}[1_A1_A\circ\theta_L]= {\mathbb E}_{P_s}[1_{A_{\ep}}1_{A_{\ep}}\circ\theta_L]+c_{\ep},$$
with $|c_{\ep}|\leqslant 2\ep.$
Since $p_{kn}(\gamma)$ depends only on $\gamma_n$, the sequence $(\gamma_n, I_n, \zeta_n, \xi_n)_{n\in {\mathbb Z}}$ is a sequence of independent random variables under $P_s.$ Indeed, for $i<j,\,i,j\in{\mathbb Z}$, take two measurable sets $A_i\times B_i$ and $A_j\times B_j$, where $A_i, A_j \subset [0,1]^m$, and $B_i, B_j \subset {\mathbb Z}^{d-1} \times \{0,1\} \times \{0,1\}^m$. We have \begin{align*} &P_s\left\{[(\gamma_i, I_i, \zeta_i, \xi_i)\in A_i\times B_i]\bigcap[(\gamma_j, I_j, \zeta_j, \xi_j)\in A_j\times B_j]\right\}\\ &=\int_{\Gamma}{\mathbb Q}_s(d\gamma)1_{\gamma_i(\gamma)\in A_i, \gamma_j(\gamma)\in A_j}{\mathbb P}_{\gamma}\left\{[(I_i, \zeta_i, \xi_i)\in B_i]\bigcap[(I_j, \zeta_j, \xi_j)\in B_j]\right\}\\ &=\int_{\Gamma}{\mathbb Q}_s(d\gamma)1_{\gamma_i(\gamma)\in A_i, \gamma_j(\gamma)\in A_j}{\mathbb P}_{\gamma}\left\{(I_i, \zeta_i, \xi_i)\in B_i\right\}{\mathbb P}_{\gamma}\left\{(I_j, \zeta_j, \xi_j)\in B_j\right\}\\ &=\int_{\Gamma}{\mathbb Q}_s(d\gamma)1_{\gamma_i(\gamma)\in A_i}{\mathbb P}_{\gamma}\left\{(I_i, \zeta_i, \xi_i)\in B_i\right\}\cdot\int_{\Gamma}{\mathbb Q}_s(d\gamma)1_{\gamma_j(\gamma)\in A_j}{\mathbb P}_{\gamma}\left\{(I_j, \zeta_j, \xi_j)\in B_j\right\}\\ &=P_s\left\{(\gamma_i, I_i, \zeta_i, \xi_i)\in A_i\times B_i\right\}\cdot P_s\left\{(\gamma_j, I_j, \zeta_j, \xi_j)\in A_j\times B_j\right\}. \end{align*} So, for $L>2m_{\ep}$, we get ${\mathbb E}_{P_s}[1_{A_{\ep}}1_{A_{\ep}}\circ\theta_L] = P_s(A_{\ep}) P_s(A_{\ep}\circ\theta_L) = P_s(A_{\ep})^2$.
Therefore $$|P_s(A)-P_s(A)^{2}|\leqslant|P_s(A)-P_s(A_{\ep})^{2}|+2\ep\leqslant 4\ep.$$ Letting $\ep$ tend to $0$, we get that $P_s(A)=0$ or $1$. \end{proof} \begin{lem} Let $Y$ be an $m$-ERWRC with i.i.d. cookie environment, and let $X$ be its horizontal component, $X_n=Y_n\cdot e_1.$ For $d \geqslant 6$, $P$-a.s., $\lim_{n\to\infty}\frac{X_n}{n}=v({\mathbb Q}):=\frac{\hat{E}(X_T)}{\hat{E}(T)}$. \end{lem} \begin{proof} The existence of the limit, the fact that it is deterministic, and the expression of $v({\mathbb Q})$ for $d\geqslant 6$ follow from the ergodicity of $(\hat{W},\hat{\theta},\hat{P}_s)$ and the integrability of $T$ w.r.t. $\hat{P}_s$ when $d \geqslant 6$. \end{proof}
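The law of large numbers above lends itself to a quick numerical illustration. The following Python sketch is purely illustrative and not part of the proofs; the function name `erw_speed` and all default parameters are our own choices. It simulates the walk directly from the transition probabilities and averages $X_n/n$ over independent runs:

```python
import random

def erw_speed(d=6, m=2, beta=0.5, n_steps=2000, n_walks=100, seed=0):
    """Monte Carlo estimate of the speed lim X_n / n of an m-excited random
    walk on Z^d with constant cookie strength beta: on each of the first m
    visits to a site, the horizontal step is biased, P(+e_1) = (1+beta)/(2d)
    and P(-e_1) = (1-beta)/(2d); afterwards the walk steps symmetrically."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_walks):
        pos = (0,) * d
        visits = {}   # number of times each site has been occupied so far
        x = 0         # horizontal component X_n = Y_n . e_1
        for _ in range(n_steps):
            v = visits.get(pos, 0)
            visits[pos] = v + 1
            axis = rng.randrange(d)
            # cookie bias acts only on the e_1 axis while v < m
            p_plus = (1 + beta) / 2 if (axis == 0 and v < m) else 0.5
            step = 1 if rng.random() < p_plus else -1
            if axis == 0:
                x += step
            new = list(pos)
            new[axis] += step
            pos = tuple(new)
        total += x / n_steps
    return total / n_walks
```

For $d\geqslant 6$ and moderate $\beta$, the empirical averages of $X_n/n$ concentrate around a positive constant, in agreement with the lemma.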
\subsection{Monotonicity and differentiability of the speed.} We now prove that the speed $v({\mathbb Q})=\hat{E}[V]=\frac{\hat{E}(X_T)}{\hat{E}(T)}$ is increasing in ${\mathbb Q}$.\\ Consider $\beta_1=\{\beta_1(y)\}_{y\in{\mathbb Z}^{d}},\beta_2=\{\beta_2(y)\}_{y\in{\mathbb Z}^{d}}$ defined on $(\Omega,{\mathcal A},Q)$ with values in ${\mathbb B}=([-1,1]^m)^{{\mathbb Z}^{d}}$, such that $Q(\beta_1\leqslant\beta_2)=1$. It is proved by D. Aldous and R. Lyons \cite{AL07} that if there exists a monotone coupling of ${\mathbb Q}_1$ and ${\mathbb Q}_2$, then there also exists a stationary monotone coupling of ${\mathbb Q}_1$ and ${\mathbb Q}_2$, as soon as ${\mathbb Q}_1$ and ${\mathbb Q}_2$ are stationary.
Therefore we can suppose that $\{(\beta_1,\beta_2)(y)\}_{y\in{\mathbb Z}^{d}}$ is stationary.
Set $\beta_t(y)=(1-t)\beta_1(y)+t\beta_2(y)$ for $t\in[0,1]$. Then $\beta_t=\{\beta_t(y)\}_{y\in{\mathbb Z}^{d}}$ is a stationary
environment. Consider
\begin{align}\label{vitesst}
f(t):=\frac{{\mathbb Q}{\mathbb E}_{\beta_t}(X_T \, 1_{0\in\D})}{E(T \, 1_{0\in\D})}. \end{align}
Note that $\beta_t$ is not necessarily exchangeable, so that we cannot assert that $f(t)$ is the mean
of the speed of the ERW in the random environment $\beta_t$. Nevertheless, $\beta_1$ and $\beta_2$ being
exchangeable, we get
$f(0)=v({\mathbb Q}_{1}),\ f(1)=v({\mathbb Q}_{2})$, so that it is enough to prove that $f(t)$ is increasing in $t$. First of all, we need Girsanov's transform. We have $$ M_n(\beta_t)=\prod_{j=0}^{n-1} [1+\E_j\beta_t(Y_j)1_{Y_j\notin^{m}}] \, , $$ where $Y_j\notin^{m}$ denotes the event that $Y_j$ has been visited at most $m-1$ times before time $j$. As in Section \ref{Girsanov}, we have the Girsanov transforms:
$$ \frac{{dP_{\beta_t}}|_{\F_n}}{{dP_{0}}|_{\F_n}}
=\frac{{dP_{\beta_t}}|_{{\mathscr G}_n}}{{dP_{0}}|_{{\mathscr G}_n}} = M_n(\beta_t) \, , \, \,
\frac{{dP_{\beta_t}}|_{{\mathscr G}_T}}{{dP_{0}}|_{{\mathscr G}_T}} = M_T(\beta_t) \, .$$ \subsubsection{Differentiability of $f(t)$.} \label{ft}
We begin by giving another expression of the numerator in \eqref{vitesst}.
\begin{lem}\label{20} We have
\begin{align}\label{NumV} {\mathbb E}_{\beta_t}(X_T1_{0\in\D})&={\mathbb E}_{\beta_t}\left[\sum_{j=0}^{T-1} \beta_t(Y_j) \, 1_{0\in\D} \, 1_{Y_j\notin^{m}} \, 1_{Z_j=Z_{j+1}}\right]\notag\\&={\mathbb E}_{0}\left[\sum_{j=0}^{T-1} \beta_t(Y_j) \, 1_{0\in\D} \, 1_{Y_j\notin^{m}} \, 1_{Z_j=Z_{j+1}} \, M_T(\beta_t)\right]. \end{align}
\end{lem} \begin{proof}
Observe that \begin{align}\label{a}
{\mathbb P}_{\beta_t}[\E_j=\pm1|{\mathscr G}_j]&=\frac{1\pm\beta_t(Y_j)}{2}1_{Y_j\notin^m}1_{Z_j=Z_{j+1}}+\frac{1}{2}1_{Y_j\in^m}1_{Z_j=Z_{j+1}}\notag\\ &=\left(\frac{1}{2}\pm\frac{\beta_t(Y_j)}{2}1_{Y_j\notin^m}\right)1_{Z_j=Z_{j+1}}. \end{align} Hence, $$ {\mathbb E}_{\beta_t}(X_T1_{0\in\D})={\mathbb E}_{\beta_t}(\sum_{j=0}^{+\infty}\E_j1_{T>j}1_{0\in\D})=\sum_{j=0}^{+\infty}{\mathbb E}_{\beta_t}(\E_j1_{T>j}1_{0\in\D}) \, ,$$ where the last equality follows from the integrability of $T$ w.r.t $\hat{{\mathbb P}}$ for $d\geqslant 6$. Note that $\{0\in\D\}\text{ and }\{T>j\}$ belong to ${\mathscr G}_j$. Therefore, {\allowdisplaybreaks \begin{align*} &{\mathbb E}_{\beta_t}(\E_j1_{T>j}1_{0\in\D}) \\
&={\mathbb E}_{\beta_t}\left[1_{T>j} \, 1_{0\in\D} \, {\mathbb P}_{\beta_t}(\E_j=1|{\mathscr G}_j)\right]
-{\mathbb E}_{\beta_t}\left[1_{T>j} \, 1_{0\in\D} \, {\mathbb P}_{\beta_t}(\E_j=-1|{\mathscr G}_j)\right]\\ &={\mathbb E}_{\beta_t}\left[\frac{1+\beta_t(Y_j)}{2}1_{T>j}1_{0\in\D}1_{Y_j\notin^{m}}1_{Z_j=Z_{j+1}}\right]-{\mathbb E}_{\beta_t}\left[\frac{1-\beta_t(Y_j)}{2}1_{T>j}1_{0\in\D}1_{Y_j\notin^{m}}1_{Z_j=Z_{j+1}}\right]\\ &={\mathbb E}_{\beta_t}\left[ \beta_t(Y_j) \, 1_{T>j} \, 1_{0\in\D} \, 1_{Y_j\notin^{m}} \, 1_{Z_j=Z_{j+1}}\right]. \end{align*}
Thus, \begin{align*} {\mathbb E}_{\beta_t}(X_T1_{0\in\D})&=\sum_{j=0}^{+\infty}{\mathbb E}_{\beta_t}\left[ \beta_t(Y_j) \, 1_{T>j}1_{0\in\D} \, 1_{Y_j\notin^{m}} \, 1_{Z_j=Z_{j+1}}\right]\\ &={\mathbb E}_{\beta_t}\left[\sum_{j=0}^{T-1} \beta_t(Y_j) \, 1_{0\in\D} \, 1_{Y_j\notin^{m}} \, 1_{Z_j=Z_{j+1}}\right]\\ &={\mathbb E}_{0}\left[\sum_{j=0}^{T-1} \beta_t(Y_j) \, 1_{0\in\D} \, 1_{Y_j\notin^{m}} \, 1_{Z_j=Z_{j+1}} \, M_T(\beta_t)\right]. \end{align*}
}
This proves the first equality. The second one follows from the fact that $\sum_{j=0}^{T-1} \beta_t(Y_j) \, $ $1_{0\in\D} \, 1_{Y_j\notin^{m}} \, 1_{Z_j=Z_{j+1}}$ is ${\mathscr G}_T$-measurable, and Lemma \ref{densite}.
\end{proof}
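Girsanov's transform can also be checked numerically: sampling under the simple random walk $P_0$ and multiplying by $M_n(\beta)$ reproduces expectations under ${\mathbb P}_\beta$, in the spirit of the second equality of \eqref{NumV}. The Python sketch below is illustrative only; the helper names `path_and_weight` and `girsanov_check` are ours:

```python
import random

def path_and_weight(d, m, beta, n, rng):
    """One n-step path of the simple random walk on Z^d, returning X_n and
    the Girsanov weight M_n(beta) = prod_j (1 + eps_j * beta), taken over
    the horizontal steps made while the cookie is still active."""
    pos = (0,) * d
    visits = {}
    x, M = 0, 1.0
    for _ in range(n):
        v = visits.get(pos, 0)          # previous visits to current site
        visits[pos] = v + 1
        axis = rng.randrange(d)
        step = 1 if rng.random() < 0.5 else -1   # symmetric step under P_0
        if axis == 0:
            x += step
            if v < m:                   # cookie active: one factor of M_n
                M *= 1 + step * beta
        new = list(pos)
        new[axis] += step
        pos = tuple(new)
    return x, M

def girsanov_check(d=6, m=2, beta=0.2, n=200, n_paths=2000, seed=0):
    rng = random.Random(seed)
    samples = [path_and_weight(d, m, beta, n, rng) for _ in range(n_paths)]
    mean_M = sum(M for _, M in samples) / n_paths       # should be near 1
    mean_XM = sum(x * M for x, M in samples) / n_paths  # approx. E_beta[X_n]
    return mean_M, mean_XM
```

The empirical mean of $M_n$ is close to $1$ (the density has expectation $1$ under $P_0$), while the reweighted mean of $X_n$ is positive, reflecting the drift of the excited walk.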
We now turn to the derivative of $f(t)$. We study the sign of the derivative on the set of bounded environments: from now on we assume that for $i=1,2$, $|\beta_i(y)| \leqslant \sigma <1$ a.s. for every $y\in{\mathbb Z}^d$, where $\sigma$ is a constant in $(0,1)$.
\begin{lem}\label{Dhspeed} For $d \geqslant 8$, the function $t \in [0,1] \to {\mathbb Q}[{\mathbb E}_{\beta_t}(X_T \, 1_{0\in\D})]$ is differentiable and, {\allowdisplaybreaks \begin{align}\label{dht} &E(T1_{0\in\D}) \cdot \frac{\partial f}{\partial t}(t)
=\frac{\partial}{\partial t} {\mathbb Q}[{\mathbb E}_{\beta_t}(X_T1_{0\in\D})] \notag \\ &={\mathbb Q} {\mathbb E}_{\beta_t}\left[\sum_{j=0}^{T-1} (\beta_2-\beta_1)(Y_j) \, 1_{0\in\D} \, 1_{Y_j\notin^{m}} \, 1_{Z_j=Z_{j+1}}\right]\notag\\ & \quad + {\mathbb Q} {\mathbb E}_{\beta_t}\left[\sum_{j=0}^{T-1} \beta_t(Y_j) \, 1_{0\in\D} \, 1_{Y_j\notin^{m}} \, 1_{Z_j=Z_{j+1}}
\sum_{i=1}^{T-1}\frac{(\beta_2-\beta_1)(Y_i)\E_i1_{Y_i\notin^m}}{1+\beta_t(Y_i)\E_i}
\, 1_{Z_i=Z_{i+1}} \right]. \end{align} } \end{lem} \begin{proof}
We have $M_T(\beta_t)=\prod_{j=0}^{T-1} \left[1+\E_j\beta_t(Y_j) 1_{Y_j\notin^m}\right]$, so that \begin{align*} \frac{\partial}{\partial t}M_T(\beta_t)&=\left(\sum_{j=0}^{T-1}\frac{(\beta_2-\beta_1)(Y_j)\E_j}{1+\beta_t(Y_j)\E_j}1_{Y_j\notin^m}\right)M_T(\beta_t)\\ &=\left(\sum_{j=0}^{T-1}\frac{(\beta_2-\beta_1)(Y_j)\E_j}{1+\beta_t(Y_j)\E_j}1_{Y_j\notin^m}1_{Z_j=Z_{j+1}}\right)M_T(\beta_t) \, , \end{align*} where the last equality follows from the fact that $Z_j=Z_{j+1}$ when $\E_j\neq 0.$ Set \begin{align*} N_T(t):=\sum_{j=0}^{T-1} \beta_t(Y_j) \, 1_{0\in\D} \, 1_{Y_j\notin^{m}} \, 1_{Z_j=Z_{j+1}}. \end{align*} Then \begin{align*} \frac{\partial}{\partial t}N_T(t)=\sum_{j=0}^{T-1} (\beta_2-\beta_1)(Y_j) \, 1_{0\in\D} \, 1_{Y_j\notin^{m}} \, 1_{Z_j=Z_{j+1}} \, , \end{align*} and \begin{align}\label{btU} \quad\quad\quad U_T(\beta_t):&=\frac{\partial}{\partial t}\left[N_T(t)M_T(\beta_t)\right]=\frac{\partial}{\partial t}N_T(t)M_T(\beta_t)+N_T(t)\frac{\partial}{\partial t}M_T(\beta_t)\notag\\ &=\sum_{j=0}^{T-1} (\beta_2-\beta_1)(Y_j) \, 1_{0\in\D} \, 1_{Y_j\notin^{m}} \, 1_{Z_j=Z_{j+1}}M_T(\beta_t)\notag\\ &\quad\quad\quad+N_T(t)\left(\sum_{j=0}^{T-1}\frac{(\beta_2-\beta_1)(Y_j)\E_j}{1+\beta_t(Y_j)\E_j}1_{Y_j\notin^m}1_{Z_j=Z_{j+1}}\right)M_T(\beta_t). \end{align}
We have \begin{equation}{\mathbb Q}{\mathbb E}_{0}\left[N_T(t)M_T(\beta_t)\right]={\mathbb Q}{\mathbb E}_{0}[N_T(0)M_T(\beta_0)]+{\mathbb Q}{\mathbb E}_{0}\left[\int_{0}^{t}U_T(\beta_x)dx\right]. \end{equation}
Since $N_T(t)\leqslant T1_{0\in\D}, \frac{\partial}{\partial t}N_T(t)\leqslant 2T1_{0\in\D} $ and $\left|\frac{\E_j}{1+x\E_j}\right|\leqslant\frac{1}{1-\sigma},\forall x\leqslant \sigma,$ we get \begin{align*}
&\int_{0}^{t}{\mathbb Q}{\mathbb E}_{0} |U_T(\beta_x)|dx\leqslant 2\int_{0}^{t}{\mathbb Q}{\mathbb E}_{0}(T1_{0\in\D}M_T(\beta_x))dx+\frac{2\sigma}{1-\sigma}\int_{0}^{t}{\mathbb Q}{\mathbb E}_{0}(T^21_{0\in\D}M_T(\beta_x))dx\\ &=2\int_{0}^{t}\hat{{\mathbb E}}_0(T)dx+\frac{2\sigma}{1-\sigma}\int_{0}^{t}{\mathbb Q}{\mathbb E}_{\beta_x}(T^21_{0\in\D})dx=2\int_{0}^{t}\hat{{\mathbb E}}_0(T)dx+\frac{2\sigma}{1-\sigma}\int_{0}^{t}{\mathbb Q}{\mathbb E}_{0}(T^21_{0\in\D})dx,\\ &(\text{since }T\text{ and }\{0\in\D\}\text{ belong to }\sigma (Z)\text{, they do not depend on }x)\\ &=2t\hat{{\mathbb E}}_0 T+\frac{2t\sigma}{1-\sigma} \hat{{\mathbb E}}_0(T^2)<+\infty \text{ when }\hat{{\mathbb E}}_0(T^2)<+\infty. \end{align*} It follows from Lemma \ref{TL2} that $\hat{{\mathbb E}}_0(T^2) < + \infty$ for $d\geqslant 8$. Fubini's theorem then leads to \begin{equation}{\mathbb Q}{\mathbb E}_{0}\left[N_T(t)M_T(\beta_t)\right]={\mathbb Q}{\mathbb E}_{0}[N_T(0)M_T(\beta_0)]+\int_{0}^{t}{\mathbb Q}{\mathbb E}_{0}\left[U_T(\beta_x)\right]dx. \end{equation} Now, we prove that ${\mathbb Q}{\mathbb E}_{0}\left[U_T(\beta_x)\right]$ is continuous in $x\in[0,1]$. To this end, we recall a general result about uniform integrability of positive random variables (see for instance Theorem 5, page 189, in Shiryaev \cite{Shi96}).
\begin{lem}\label{bd1} Let $J$ be an interval of ${\mathbb R}$, and $(X(\beta), \beta\in J) $ be a family of positive integrable random variables. Assume that $\{X(\beta)\}_{\beta\in J}$ is a.s. continuous in $\beta$. Then,
the function $\varphi(\beta)={\mathbb E}[X(\beta)]$ is continuous in $\beta$ if and only if the family
$\{X(\beta)\}_{\beta\in J}$ is uniformly integrable. \end{lem}
Observe from \eqref{btU} that \begin{equation} \label{VUI}
|U_T(\beta_x)|\leqslant 2\sigma T M_T(\beta_x)\, 1_{0\in\D}+ \frac{2\sigma}{1-\sigma} T^2 M_T(\beta_x)\, 1_{0\in\D}\,\leqslant \frac{4}{1-\sigma} T^2 M_T(\beta_x)\, 1_{0\in\D} . \end{equation}
For $x_0\in[0,1]$, we have:
\begin{enumerate} \item $\lim_{x\to x_0} T^2M_T(\beta_x) 1_{0\in\D}= T^2 M_T(\beta_{x_0}) 1_{0\in\D}$ a.s.,
\item $T^2 M_T(\beta_x) 1_{0\in\D} \geqslant 0$,
\item $\forall x$, ${\mathbb Q}{\mathbb E}_0[T^2 \, M_T(\beta_x) \, 1_{0\in\D}]={\mathbb Q}{\mathbb E}_{\beta_x}(T^2 \, 1_{0\in\D})={\mathbb E}_0(T^2 \, 1_{0\in\D})
< + \infty$, since $\hat{{\mathbb E}}_0(T^2) < + \infty$ for $d\geqslant 8$.
\end{enumerate} It then follows from Lemma \ref{bd1} that the family $\{T^2M_T(\beta_x) 1_{0\in\D}\}_{x \in [0,1]}$ is uniformly integrable. By (\ref{VUI}), this is also true for the family $\{U_T(\beta_x)\}_{x}$ for $x$ in a neighborhood of any $x_0 \in [0,1]$. Therefore, we obtain $$\lim_{x\to x_0}{\mathbb Q}{\mathbb E}_0(U_T(\beta_x))={\mathbb Q}{\mathbb E}_0(U_T(\beta_{x_0})), \mbox{ i.e. }{\mathbb Q}{\mathbb E}_0(U_T(\beta_x)) \mbox{ is continuous}.$$ Then, we get $$\frac{\partial}{\partial t}{\mathbb Q}{\mathbb E}_{0}\left[N_T(t)M_T(\beta_t)\right]={\mathbb Q}{\mathbb E}_{0}\left[U_T(\beta_t)\right].$$ This finishes the proof of Lemma \ref{Dhspeed}. \end{proof}
\subsubsection{Monotonicity of the speed} \label{Mono} We remind the reader that $\tilde{Z}$ is defined as the walk $Z$ when it moves, and $\tilde{\D}$ denotes the cut times of $\tilde{Z}$. Since $T \geqslant 1$, the first term in \eqref{dht} is bounded from below by its summand corresponding to $j=0$.
{\allowdisplaybreaks \begin{align} \label{term1}
{\mathbb Q} {\mathbb E}_{\beta_t}\left[\sum_{j=0}^{T-1} (\beta_2-\beta_1)(Y_j) \, 1_{0\in\D} \, 1_{Y_j\notin^{m}} \, 1_{Z_j=Z_{j+1}}\right]
& \geqslant {\mathbb Q} \left[(\beta_2-\beta_1)(0)\right] P(0\in\D, Z_0=Z_1) \notag \\ & = \frac{1}{d} {\mathbb Q} \left[(\beta_2-\beta_1)(0)\right] P(0\in\D). \end{align} } The equality in \eqref{term1} follows since $\D:=\{n\in {\mathbb Z} \mbox{ such that }Z_{(-\infty,n)}\cap Z_{[n,+\infty)}=\emptyset\}$ and, therefore, $\{0\in\D\}=\{Z_{-1}\neq Z_0, 0\in \tilde{\D}\}=\{\eta_{-1}=0, 0\in \tilde{\D}\}.$ So we have $P(0\in\D, Z_0=Z_1)=P(0\in\tilde{\D}, \eta_0=1, \eta_{-1}=0)=P(0\in\tilde{\D}, \eta_{-1}=0)\cdot P(\eta_0=1)=\frac{1}{d}P(0\in\D).$
Now, we focus on the second term. Since ${\mathbb E}_{\beta_t}[\E_k1_{Y_k\notin^m}/(1+\beta_t(Y_k)\E_k)|{\mathscr G}_k]=0$, then {\allowdisplaybreaks \begin{align} & {\mathbb Q}{\mathbb E}_{\beta_t}\left[\sum_{0\leqslant j \leqslant i\leqslant T-1} \beta_t(Y_j) \, 1_{0\in\D} \, 1_{Y_j\notin^{m}} 1_{Z_j=Z_{j+1}} \frac{(\beta_2-\beta_1)(Y_i) \E_i }{1+\beta_t(Y_i)\E_i}1_{Y_i\notin^m} 1_{Z_i=Z_{i+1}} \right] =\notag\\& {\mathbb Q}{\mathbb E}_{\beta_t}\left[\sum_{0\leqslant j \leqslant i\leqslant T-1} \beta_t(Y_j) \, 1_{0\in\D} \, 1_{Y_j\notin^{m}} 1_{Z_j=Z_{j+1}}
(\beta_2-\beta_1)(Y_i)1_{Z_i=Z_{i+1}}{\mathbb E}_{\beta_t}\left(\frac{\E_i1_{Y_i\notin^m} }{1+\beta_t(Y_i)\E_i} |{\mathscr G}_i\right)\right]\notag \\ &=0. \end{align} Then the second term of \eqref{dht} is equal to: {\allowdisplaybreaks \begin{align*} & {\mathbb Q}{\mathbb E}_{\beta_t}\left[\sum_{0\leqslant i < j\leqslant T-1} \beta_t(Y_j) \, 1_{0\in\D} \, 1_{Y_j\notin^{m}} 1_{Z_j=Z_{j+1}} \frac{(\beta_2-\beta_1)(Y_i) \E_i }{1+\beta_t(Y_i)\E_i}1_{Y_i\notin^m} 1_{Z_i=Z_{i+1}} \right] \\ &\geqslant - \sigma{\mathbb Q}{\mathbb E}_{\beta_t}\left[\sum_{0\leqslant i < j\leqslant T-1} \, 1_{0\in\D} \, 1_{Z_j=Z_{j+1}}
(\beta_2-\beta_1)(Y_i){\mathbb E}_{\beta_t}\left(\frac{|\E_i| }{1+\beta_t(Y_i)\E_i}|{{\mathscr G}_i}\right)1_{Y_i\notin^m} 1_{Z_i=Z_{i+1}} \right] \\
& (\mbox{ since } |\beta_t| \leqslant \sigma )\, \\ &\geqslant - \sigma{\mathbb Q}{\mathbb E}_{\beta_t}\left[\sum_{0\leqslant i < j\leqslant T-1} \, 1_{0\in\D} \, 1_{Z_j=Z_{j+1}} (\beta_2-\beta_1)(Y_i)1_{Z_i=Z_{i+1}}1_{Y_i\notin^m}1_{Z_i=Z_{i+1}} \right] \\ &\geqslant - \sigma {\mathbb Q} {\mathbb E}_{\beta_t}\left[\sum_{0\leqslant i < j} (\beta_2-\beta_1)(Y_i) 1_{Z_j=Z_{j+1}}1_{Z_i=Z_{i+1}} 1_{0\in\D} 1_{T>j} \right] \, , \\ & \geqslant - \sigma {\mathbb Q} {\mathbb E}_{\beta_t}\left[\sum_{0\leqslant i < j} (\beta_2-\beta_1)(Y_i) 1_{\eta_j=1}1_{\eta_i=1} 1_{0\in\tilde{D}} 1_{\tilde{T}>\sum_{k=0}^{j-1}(1-\eta_k)} \right]\\ & \geqslant - \sigma {\mathbb Q} {\mathbb E}_{\beta_t}\left[\sum_{0\leqslant i < j} (\beta_2-\beta_1)(Y_i) 1_{\eta_j=1}1_{\eta_i=1} 1_{0\in\tilde{D}} 1_{\tilde{T}>\sum_{k=0,k\neq i}^{j-1}(1-\eta_k)} \right]\\ & = - \frac{\sigma}{d^2} {\mathbb Q} {\mathbb E}_{\beta_t}\left[\sum_{0\leqslant i < j}(\beta_2-\beta_1)(Y_i) 1_{0\in\D} 1_{\tilde{T}>\sum_{k=0,k\neq i}^{j-1}(1-\eta_k)}\right]\\ &\mbox{ since } \eta_j\mbox{ is independent of } \tilde{Z}, \F^Y_i, \eta_1,...,\eta_{j-1} \text{ and }\eta_i\text{ is independent of }\tilde{Z}, \F^Y_i,\{\eta_k\}_{k\neq i}\, , \\ & = - \frac{\sigma}{d^2} {\mathbb Q} {\mathbb E}_{\beta_t}\left[\sum_{0\leqslant i < j}(\beta_2-\beta_1)(Y_i) 1_{0\in\D} 1_{\tilde{T}>\sum_{k=0}^{j-2}(1-\eta_k)}\right]\\ &\text{with the convention that the sum over an empty set equals to }0\\ & = - \frac{\sigma}{d^2} {\mathbb Q} {\mathbb E}_{\beta_t}\left[\sum_{0\leqslant i < j}(\beta_2-\beta_1)(Y_i) 1_{0\in\D} 1_{T>j-1}\right]\label{EQ},\tag{EQ}\\ &= - \frac{\sigma}{d^2} \sum_{i=1}^{+\infty}{\mathbb Q} {\mathbb E}_{\beta_t}\left[(\beta_2-\beta_1)(Y_i) (T-i)1_{T>i} 1_{0\in\D}\right] \, , \\ &= - \frac{\sigma}{d^2} \sum_{i=1}^{+\infty} {\mathbb Q} {\mathbb E}_{\beta_t} \left[\sum_{z\in{\mathbb Z}^{d-1}} \sum_{x\in{\mathbb Z}} \frac{(\beta_2-\beta_1)(y)}{d^{2}}1_{Z_i=z} 1_{X_i=x} (T-i)1_{T>i}1_{0\in\D}\right] \mbox{ with } y=(x,z) \, , \\ 
&= - \frac{\sigma}{d^2} \sum_{i\geqslant 1} {\mathbb Q} \left[(\beta_2-\beta_1)(0) \sum_{z\in{\mathbb Z}^{d-1}} \sum_{x\in{\mathbb Z}} {\mathbb E}_{\theta_y\beta_t}(1_{Y_i=y} (T-i)1_{T>i}1_{0\in\D})\right]\\ &\text{ because } \beta\text{ is stationary,} \\ &\geqslant - \frac{\sigma}{d^2} \sum_{i\geqslant 1} {\mathbb Q} \left\{(\beta_2-\beta_1)(0) \sum_{z\in{\mathbb Z}^{d-1}} {\mathbb E}_{\theta_y\beta_t}[(2i+1)(1_{Z_i=z} (T-i)1_{T>i}1_{0\in\D})]\right\} \\
& \hspace{5cm} \text{ for } X_i=x \Rightarrow |x|\leqslant i \, ,
\\ &\geqslant - \frac{\sigma}{d^2} \sum_{i\geqslant 1}{\mathbb Q}\left\{(\beta_2-\beta_1)(0) \sum_{z\in{\mathbb Z}^{d-1}}{\mathbb E}_{0}[(2T+1) 1_{Z_i=z} (T-i)1_{T>i}1_{0\in\D})]\right\} \, , \\ & \geqslant - \frac{\sigma}{d^2} {\mathbb Q}\left[(\beta_2-\beta_1)(0) \right] {\mathbb E}_0\left[\frac{(2T+1)T(T+1)}{2}1_{0\in\D}\right] \, . \end{align*} Therefore, we get $$ \hat{E}(T) \frac{\partial}{\partial t}f(t)\geqslant \frac{1}{d} {\mathbb Q}[(\beta_2-\beta_1)(0)] \left[1-\frac{1}{d}\sigma\hat{E}\left[\frac{(2T+1)T(T+1)}{2}\right]\right].$$ This implies that $\frac{\partial}{\partial t}f(t)\geqslant 0$ when $d\geqslant\sigma\hat{E}\left[\frac{(2T+1)T(T+1)}{2}\right]$. Lemma \ref{TL2}} asserts that
\begin{align}\label{T3} d_0:=\max\left\{\left\lfloor\sup_{d\geqslant 10}\hat{E}\left[\frac{(2T+1)T(T+1)}{2}\right]\right\rfloor+1,10\right\}<+\infty. \end{align} Then, for $d\geqslant d_0>\sigma d_0$, we have $({\partial}/{\partial t})f(t)\geqslant 0$, which implies that $f(0)\leqslant f(1)$
so that $v({\mathbb Q}_{\beta_1})\leqslant v({\mathbb Q}_{\beta_2}) $ on the set of probability measures on bounded
environments. Choose $\sigma_0=\frac{10}{d_0}$; then we have the monotonicity for environments bounded by $\sigma_0$ for any $d\geqslant 10.$ For $d\geqslant d_0$, we have proved the monotonicity on the set of environments bounded by $\sigma<1$; letting $\sigma$ tend to $1$ finishes the proof.
\section{Proof of Theorem \ref{ThERW}.}\label{Sec3} The proof of Theorem \ref{ThERW} is based on that of Theorem \ref{Therwrc}. \subsection{The differentiability of the speed $v(\beta)$} In the proof of Theorem \ref{Therwrc}, Section \ref{ft}, about the differentiability of $f(t)$ for $d\geqslant 8$, we take $m=1$, $\beta_1(y)=0$, $\beta_2(y)=\beta_2$, $\beta_t=t\beta_2$, $t\in[0,1]$, for all $y\in{\mathbb Z}^d$, where $\beta_2$ is a constant in $(0,1).$ The function $f(t)$ is defined by the couple of environments $\beta_1,\beta_2$, so we denote by $f_c(t)$ the function defined by $\beta_1=0,\beta_2=c$ for some constant $c\in[0,1).$ Then $v(\beta)=f_{\beta_2}(\frac{\beta}{\beta_2})$; moreover, $f(t)$ is differentiable in $t\in [0,1]$, which implies that $v(\beta)$ is differentiable in $\beta\in [0,\beta_2)$ for all $\beta_2<1$, i.e. it is differentiable in $[0,1)$ when $d\geqslant 8.$
We are now interested in proving the existence of, and computing, the derivative at the critical point $0$. By Lemma \ref{20}, with $N_n:=d\sum_{j=0}^{n-1}1_{Y_j\notin}1_{Z_j=Z_{j+1}}$, we get $$\frac{v(\beta)}{\beta}=\frac{1}{d} \frac{{\mathbb E}_0(N_T1_{0\in\D}M_T(\beta))} {{\mathbb E}_0(T1_{0\in\D})}=\frac{1}{d} {\mathbb E}_0(N_T1_{0\in\D}M_T(\beta)). $$ Note that \begin{itemize} \item $T1_{0\in\D}M_T(\beta)\geqslant 0$, \item $\lim_{\beta\to 0}(T1_{0\in\D}M_T(\beta))=T1_{0\in\D}$, \item ${\mathbb E}_0(T1_{0\in\D}M_T(\beta))={\mathbb E}_{\beta}(T1_{0\in\D})={\mathbb E}_0(T1_{0\in\D}) =1$ for $d \geqslant 6$. \end{itemize} Therefore, by Lemma \ref{bd1}, $\{T1_{0\in\D}M_T(\beta)\}_{\beta}$ is uniformly integrable in a neighborhood of $0$.
This is also true for $\{N_T1_{0\in\D}M_T(\beta)\}_{\beta}$ in a neighborhood of $0$, since $N_T \leqslant dT$. Therefore, we get $$\lim_{\beta\to 0}{\mathbb E}_0(N_T1_{0\in\D}M_T(\beta))={\mathbb E}_0(N_T1_{0\in\D}).$$ On the other hand, let $R_n$ be the range of the simple symmetric random walk on ${\mathbb Z}^d$ and write $\{Y_i\notin\}:=\{Y_i\notin\{Y_0,Y_1,...,Y_{i-1}\}\}$. Then \begin{align*} R(0):&=\lim_{n\to\infty}\frac{R_n}{n}=\lim_{n\to\infty}\frac{R_{T_n}}{T_n}=\lim_{n\to\infty}\frac{R_{T_1}+(R_{T_2}-R_{T_1})+...+(R_{T_n}-R_{T_{n-1}})}{T_n}\\ &=\lim_{n\to\infty}\frac{(1_{Y_0\notin}+...+1_{Y_{T_1-1}\notin})+(1_{Y_{T_1}\notin}+...+1_{Y_{T_2-1}\notin})+...+(1_{Y_{T_{n-1}}\notin}+...+1_{Y_{T_n-1}\notin})}{T_n}\\ &=\frac{{\mathbb E}_0(R_T1_{0\in\D})}{{\mathbb E}_0(T1_{0\in\D})}={\mathbb E}_0(R_T1_{0\in\D}) \quad (\text{because }{\mathbb E}_0(T1_{0\in\D})=\hat{{\mathbb E}}_0(T){\mathbb P}_0(0\in\D)=1). \end{align*} Similarly, with $N_n=d\sum_{j=0}^{n-1}1_{Y_j\notin}1_{Z_j=Z_{j+1}}$, we get $$ \lim_{n\to\infty}\frac{N_n}{n}=\frac{{\mathbb E}_0(N_T1_{0\in\D})}{{\mathbb E}_0(T1_{0\in\D})}={\mathbb E}_0(N_T1_{0\in\D}).$$ Note that $$ {\mathbb E}_0(N_n)=d\sum_{j=0}^{n-1}{\mathbb E}_0(1_{Y_j\notin}1_{Z_j=Z_{j+1}})=d\sum_{j=0}^{n-1}{\mathbb E}_0(1_{Y_j\notin}){\mathbb P}_0({Z_j=Z_{j+1}})={\mathbb E}_0\left(\sum_{j=0}^{n-1}1_{Y_j\notin}\right)={\mathbb E}_0(R_n). $$ Therefore \begin{align*} R(0)=\lim_{n\to\infty}\frac{R_n}{n}=\lim_{n\to\infty}{\mathbb E}_0\left(\frac{R_n}{n}\right)=\lim_{n\to\infty}{\mathbb E}_0\left(\frac{N_n}{n}\right)={\mathbb E}_0(N_T1_{0\in\D}). \end{align*} \subsection{Monotonicity of $v(\beta)$} In Section \ref{Mono}, we consider the particular case $m=1$ and $\beta_1(y)=\beta_1, \beta_2(y)=\beta_2$ for all $y\in{\mathbb Z}^d,$ where $\beta_1$ and $\beta_2$ are two constants in $[0,1)$ such that $\beta_1\leqslant \beta_2\leqslant\sigma< 1$. 
By \eqref{term1} and \eqref{EQ} we get that \begin{align}\label{T2} \hat{E}(T) \frac{\partial}{\partial t}f(t)&\geqslant \frac{1}{d}(\beta_2-\beta_1)\left[P(0\in\D)-\frac{\sigma}{d} {\mathbb Q} {\mathbb E}_{\beta_t}\left(\sum_{j\geqslant 1}j1_{0\in\D} 1_{T>j-1}\right)\right]\notag\\ &\geqslant \frac{1}{d}(\beta_2-\beta_1)\left[P(0\in\D)-\frac{\sigma}{d}E\left(\sum_{j\geqslant 1}j1_{0\in\D} 1_{T>j-1}\right)\right]\notag\\ &\geqslant \frac{1}{d}(\beta_2-\beta_1)\left[1-\frac{\sigma}{d}\hat{E}\left(\frac{T^2+T}{2}\right)\right]. \end{align} Set $d_0:=\max\left\{\left\lfloor \sup_{d\geqslant 8}\hat{E}\left(\frac{T^2+T}{2}\right)\right \rfloor+1,8\right\}$. Then $\frac{\partial}{\partial t}f(t)\geqslant 0$, i.e. $f(t)$ is increasing in $t\in[0,1]$ and $v(\beta)$ is increasing in $\beta\in[0,1]$, when $d\geqslant d_0$, or when $\sigma\leqslant \frac{8}{d_0}$ for any $d\geqslant 8.$
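The range constant $R(0)=\lim_n R_n/n$, which appears in the derivative of the speed at $\beta=0$, equals the non-return probability of the simple random walk and is straightforward to estimate by simulation. The Python sketch below is illustrative only; the function name `range_constant` and its parameters are ours:

```python
import random

def range_constant(d=6, n=4000, n_walks=50, seed=0):
    """Monte Carlo estimate of R(0) = lim R_n / n, where R_n is the number
    of distinct sites visited by the simple symmetric random walk on Z^d."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n_walks):
        pos = (0,) * d
        seen = {pos}                  # distinct sites visited so far
        for _ in range(n):
            axis = rng.randrange(d)
            step = 1 if rng.random() < 0.5 else -1
            new = list(pos)
            new[axis] += step
            pos = tuple(new)
            seen.add(pos)
        acc += len(seen) / n
    return acc / n_walks
```

For $d\geqslant 3$ the walk is transient and the estimate stabilizes at a positive constant (close to $0.9$ for $d=6$), while for $d=1,2$ it decays to $0$ as $n$ grows, reflecting recurrence.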
\section{Proof of Theorem \ref{md}}\label{Sec4} For the $m$-ERW, we denote the function $f(t)$ by $f_c(m,t)$ in the case of the couple of environments $\beta_1=0,\beta_2=c$, where $c$ is a constant in $[0,1)$ and $\beta_t=tc$, $t\in [0,1].$
Set $$N_n^{m}=d\sum_{j=0}^{n-1}1_{Y_j\notin^{m}}1_{Z_j=Z_{j+1}} \, .$$ Then, from formula \eqref{NumV} we get $$ {\mathbb E}_{m,\beta}(X_T \, 1_{0\in\D})=\frac{\beta}{d}{\mathbb E}_{m,\beta}(N^{m}_T \, 1_{0\in\D}) .$$ The $m$-ERW is a particular case of the $m$-ERWRC with i.i.d. random cookies, so the law of large numbers gives the following formula for the speed when $d\geqslant 6$: \begin{align} v(m,\beta)&=\frac{{\mathbb E}_{m,\beta}(X_T \, 1_{0\in\D})}{{\mathbb E}_{m,\beta}(T \,1_{0\in\D})} =\frac{\beta}{d}\frac{{\mathbb E}_{0}(N_T^{m}\,M_T^{m}(\beta)\,1_{0\in\D})}{{\mathbb E}_{0}(T\,1_{0\in\D})}. \end{align} We see that $v(m,\beta)=f_c(m,\frac{\beta}{c})$ (where $t=\frac{\beta}{c}$), so that $$ \frac{\partial v}{\partial\beta}(m,\beta)=\frac{\partial f_c}{\partial t}(m,\frac{\beta}{c})\cdot \frac{1}{c}\, , $$ and combining this with formula \eqref{dht} we obtain the derivative of the speed: \begin{align} \frac{\partial v}{\partial\beta}(m,\beta)&=\frac{1}{d}\frac{{\mathbb E}_{0}[N_T^{m}\, M_T^{m}(\beta)\,1_{0\in\D}]}{{\mathbb E}_{0}(T\,1_{0\in\D})} +\frac{\beta}{d}\frac{{\mathbb E}_{0}[N_T^{m}\, M_T^{m}(\beta) \, U_T^{m}(\beta)\, 1_{0\in\D}]}{{\mathbb E}_{0}(T\,1_{0\in\D})}, \text{ for } \beta\in [0,1) \label{21} \end{align} where $$ U_T^{m}(\beta)=\sum_{j=0}^{T-1}\frac{\E_j}{1+ \beta\E_j}1_{Y_j\notin^{m}\{Y_0,...Y_{j-1}\}}1_{Z_j=Z_{j+1}} \, ,
$$
$$M_T^{m}(\beta)=\prod_{j=0}^{T-1}\left[1+\ep_j\beta 1_{Y_j\notin^{m}\{Y_0,...Y_{j-1}\}}\right]
\, .
$$
In order to prove the uniform convergence of $({\partial v}/{\partial\beta})(m,\beta)$ as $m$ goes to $+ \infty$, we use
the following lemma, whose proof is given below:
\begin{lem}\label{bd2} Let $J$ be an interval of ${\mathbb R}$, and $\{X_n(\beta)\}_{\beta \in J, n \geqslant 1}$, $\{X(\beta)\}_{\beta\in J}$ be families of non-negative random variables. Assume that \begin{enumerate} \item for every $n$, $\{ X_n(\beta)\}_{\beta\in J}$ is uniformly integrable, \item $\{X(\beta)\}_{\beta\in J}$ is uniformly integrable, \item $X_n(\beta)$ converges in probability to $X(\beta)$, uniformly in $\beta$: for any $\varepsilon>0$,
$$ \lim_{n \to +\infty} \sup_{\beta \in J} {\mathbb P} ( \left| X_n(\beta) - X(\beta) \right| > \ep) =0 \, . $$ \end{enumerate}
Then, $\lim_{n \rightarrow + \infty} \sup_{\beta \in J} \left| {\mathbb E}(X_n(\beta)) -{\mathbb E}(X(\beta)) \right| = 0$
if and only if $\{X_n(\beta)\}_{n\in{\mathbb N},\beta\in J}$ is uniformly integrable. \end{lem}
Set $$N_T^{\infty}=d \sum_{j=0}^{T-1} 1_{Z_j=Z_{j+1}} \, , \, \, U_T^{\infty}(\beta)=\sum_{j=0}^{T-1}\frac{\E_j}{1+\beta\E_j}1_{Z_j=Z_{j+1}} \, , \, \, \,M_T^{\infty}(\beta)=\prod_{j=0}^{T-1}\left(1+\E_j\beta\right)\, . $$ One can check that the following inequalities hold: for all $m \in {\mathbb N} \cup \{+ \infty \}$ and all $\beta \in [0, \beta_0)$ ($\beta_0 < 1$), $$ N^m_T \leqslant dT \, , \,\, M^m_T(\beta) \leqslant 2^T \, , \,\, \left| U^m_T(\beta) \right| \leqslant \frac{T}{1-\beta_0} \, , $$
$$ \left| N^m_T - N^{\infty}_T \right| \leqslant d (T-m)_+ \, , $$ $$
\sup_{\beta \in [0,1]} \left| M^m_T(\beta) - M^{\infty}_T(\beta) \right| \leqslant 2^T (T-m)_+ \, , $$ $$
\sup_{\beta \in [0,\beta_0]} \left| U^m_T(\beta) - U^{\infty}_T(\beta) \right| \leqslant \frac{1}{1-\beta_0} (T-m)_+ \, . $$ We deduce from these inequalities that
$\sup_{\beta \in [0,1]} \left| N^m_ T M^m_T(\beta) - N^{\infty}_T M^{\infty}_T(\beta) \right|$ converges a.s. to 0 when $m$ tends to $\infty$. The same is true for
$$\sup_{\beta \in [0,\beta_0]} \left| N^m_ T M^m_T(\beta) U^m_T(\beta)
- N^{\infty}_T M^{\infty}_T(\beta) U^{\infty}_T(\beta)\right|.$$
Using Lemma \ref{bd1}, we can also show that for every $m \geqslant 1$ the family $\{T M^m_T(\beta) 1_{0\in \D}\}_{\beta \in [0,1]}$ is uniformly integrable w.r.t. index $\beta$ for $d \geqslant 6$. Indeed, it is a.s. continuous in $\beta$ for every $m\geqslant 1$, and for $d \geqslant 6$, $${\mathbb E}_0(T M^m_T(\beta) \, 1_{0 \in \D}) = {\mathbb E}_{m,\beta}(T\, 1_{0 \in \D})= {\mathbb E}_0(T 1_{0 \in \D}) = 1 .$$ Since $N^m_T \leqslant dT$, for every $m \geqslant 1$ the family $\{N^m_T M^m_T(\beta) 1_{0\in \D}\}_{\beta \in [0,1]}$ is uniformly integrable for $d \geqslant 6$.
In the same way, Lemma \ref{bd1} implies that for every $m \geqslant 1$ the family $\{T^2 M^m_T(\beta) 1_{0\in \D}\}_{\beta \in [0,1]}$ is uniformly integrable for $d \geqslant 8$. Since $N^m_T \leqslant dT$ and $U^m_T(\beta) \leqslant \frac{T}{1-\beta_0}$ for $0 \leqslant \beta \leqslant \beta_0 <1$, for every $m \geqslant 1$ the family $\{N^m_T U^m_T(\beta) M^m_T(\beta) 1_{0\in \D}\}_{\beta \in [0,\beta_0]}$ is also uniformly integrable. To apply Lemma \ref{bd2}, it remains to prove that $\{N^{\infty}_T M^{\infty}_T(\beta) 1_{0\in \D}\}_{\beta \in [0,1]}$ (resp. $\{N^{\infty}_T M^{\infty}_T(\beta) U^{\infty}_T(\beta) 1_{0\in \D}\}_{\beta \in [0,\beta_0]}$) is uniformly integrable. This is true for $d \geqslant 6$ (resp. $d \geqslant 8$), using again Lemma \ref{bd1}.
By Lemma \ref{bd2}, we conclude that for $d \geqslant 8$, and $0 \leqslant \beta_0 < 1$,
$$ \lim_{m \rightarrow + \infty} \sup_{\beta \in [0, \beta_0]} \left| \frac{\partial v}{\partial\beta}(m,\beta) - \frac{\partial v}{\partial\beta}(\infty,\beta) \right| = 0 \, . $$ Note that ${\mathbb P}_{\infty,\beta}$ is the law of simple random walk with drift $\beta$. Therefore, $v(\infty,\beta)=\beta/d$ and $({\partial v}/{\partial\beta})(\infty,\beta) =1/d$, leading to the
statement in Theorem \ref{md}. This in turn implies that for $d\geqslant 8$, for all $\beta_0\in[0,1)$ there exists $m(\beta_0)$ such that for
$m\geqslant m(\beta_0)$ the speed of ERW with $m$ cookies is increasing in $\beta$ on $[0,\beta_0].$
To finish the proof of Theorem \ref{md}, we prove Lemma \ref{bd2}.
\begin{myproof}[Proof of Lemma \ref{bd2}] $(\Leftarrow)$ We prove the sufficiency. Since $\{X_n(\beta)\}_{n,\beta}$ and $\{X(\beta)\}_{\beta}$ are uniformly integrable, for all $\ep>0,$ there exists $c_0$ such that for all $c\geqslant c_0$, we have: $$ \sup_{n,\beta}{\mathbb E}[X_n(\beta)1_{X_n(\beta)\geqslant c}]<\varepsilon \, , \, \, \sup_{\beta}{\mathbb E}[X(\beta)1_{X(\beta)\geqslant c}]<\varepsilon\, . $$ Therefore \begin{align}
&|{\mathbb E} [X_n(\beta)]-{\mathbb E} [X(\beta)]| \notag\\
&\leqslant \ep+{\mathbb E}[|X_n(\beta)|1_{|X_n(\beta)-X(\beta)|> \ep}]+{\mathbb E}[|X(\beta)|1_{|X_n(\beta)-X(\beta)|> \ep}] \notag\\
&\leqslant \ep+{\mathbb E}[X_n(\beta)1_{X_n(\beta) \geqslant c_0}]+{\mathbb E}[X_n(\beta)1_{X_n(\beta)< c_0}1_{|X_n(\beta)-X(\beta)|> \ep}] \notag\\
&\quad\,\,\,\,\,+{\mathbb E}[X(\beta)1_{X(\beta) \geqslant c_0}]+{\mathbb E}[X(\beta)1_{X(\beta)<c_0}1_{|X_n(\beta)-X(\beta)|> \ep}]\notag\\
&\leqslant 3 \ep+2c_0 \sup_{\beta} {\mathbb P}[ |X_n(\beta)-X(\beta)|>\ep]. \label{hoitudeu} \end{align} By assumption 3, we get that for all $\varepsilon>0$,
$$ \limsup_{n \rightarrow + \infty} \sup_{\beta} |{\mathbb E} [X_n(\beta)]-{\mathbb E} [X(\beta)]| \leqslant 3 \varepsilon\, .$$
($\Rightarrow$) We prove now the necessity. For any $C > 0$, \begin{align*} & {\mathbb E}(X_n(\beta) \, 1_{X_n(\beta) \geqslant C}) \\ & = {\mathbb E}(X_n(\beta) -X(\beta)) + {\mathbb E}(X(\beta ) \, 1_{X(\beta) \geqslant C-1}) + {\mathbb E}(X(\beta) \, 1_{X(\beta) < C-1} - X_n(\beta) \, 1_{X_n(\beta) < C}) \, . \end{align*} Using the positivity of $X_n(\beta)$, for any $\varepsilon\in (0,1)$, \begin{align*}
& X(\beta) \, 1_{X(\beta) < C-1} - X_n(\beta) \, 1_{X_n(\beta) < C} \\ & \leqslant [X(\beta)-X_n(\beta)] 1_{X(\beta) < C-1,X_n(\beta) < C}
+ X(\beta) \, 1_{X(\beta) < C-1} 1_{| X_n(\beta) - X(\beta) | \geqslant \ep} \\
& \leqslant \varepsilon+ 2 C 1_{| X_n(\beta) - X(\beta) | \geqslant \ep} \, . \end{align*} Therefore, for any $C > 0$ and any $\varepsilon\in (0,1)$, \begin{align*}
&\sup_{\beta} {\mathbb E}[X_n(\beta) \, 1_{X_n(\beta) \geqslant C}] \\
& \leqslant \sup_{\beta} \left| {\mathbb E}[X_n(\beta) -X(\beta)] \right|
+ \sup_\beta {\mathbb E}[X(\beta ) \, 1_{X(\beta) \geqslant C-1}] + \varepsilon+ 2 C \sup_{\beta} {\mathbb P}( | X_n(\beta) - X(\beta) | \geqslant \ep) \, . \end{align*} Taking the limit $n \to \infty$, then $\varepsilon\to 0$ leads to \begin{equation} \label{UIunif} \limsup_{n \to \infty} \sup_{\beta} {\mathbb E}[X_n(\beta) \, 1_{X_n(\beta) \geqslant C}] \leqslant \sup_{\beta} {\mathbb E}[X(\beta ) \, 1_{X(\beta) \geqslant C-1}] \, . \end{equation} Let $\varepsilon> 0$. Using the uniform integrability of the family $\{X(\beta )\}_{\beta}$, one can find $C_0(\ep)$ such that $\sup_{\beta} {\mathbb E}[X(\beta ) \, 1_{X(\beta) \geqslant C_0(\ep)-1}] \leqslant \ep$. By \eqref{UIunif}, there exists $n_0(\ep)$ such that for all $n \geqslant n_0(\ep)$, $$ \sup_{\beta} {\mathbb E}[X_n(\beta) \, 1_{X_n(\beta) \geqslant C_0(\ep)}] \leqslant 2 \varepsilon\, .$$ For $n < n_0(\ep)$, we use the uniform integrability of the family $\{X_n(\beta )\}_{\beta}$ to get $C_1(\ep)$ such that for any $C \geqslant C_1(\ep)$, $\sup_{n\leqslant n_0(\ep),\beta}{\mathbb E}[X_n(\beta)1_{X_n(\beta)>C}]<\ep.$
Now, choosing $C_2(\ep)=\max\{C_0(\ep),C_1(\ep)\}$, we get $\sup_{n,\beta}{\mathbb E}[X_n(\beta)1_{X_n(\beta)>C}]<2\ep$ for all $C>C_2(\ep)$.
\end{myproof}
\section*{Acknowledgments} I would like to thank my Ph.D. advisors Pierre Mathieu and Fabienne Castell for suggesting this problem. This research was supported by the French ANR project MEMEMO2 2010 BLAN 0125.
\thispagestyle{empty}
\end{document}
\begin{document}
\begin{abstract} We compute the variances of sums in arithmetic progressions of arithmetic functions associated with certain $L$-functions of degree two and higher in $\Fq[t]$, in the limit as $q\to\infty$. This is achieved by establishing appropriate equidistribution results for the associated Frobenius conjugacy classes. The variances are thus related to matrix integrals, which may be evaluated. Our results differ significantly from those that hold in the case of degree-one $L$-functions (i.e. situations considered previously using this approach). They correspond to expressions found recently in the number field setting assuming a generalization of the pair-correlation conjecture. Our calculations apply, for example, to elliptic curves defined over $\Fq[t]$. \end{abstract}
\title[Variance of sums of arithmetic functions] {Variance of sums in arithmetic progressions of arithmetic functions associated with higher degree $L$-functions in $\mathbb{F}_q[t]$}
\author{Chris Hall \address{Department of Mathematics, The University of Western Ontario, London, ON, Canada, N6A 5B7}}
\author{Jonathan P.~Keating \address{School of Mathematics, University of Bristol, Bristol BS8 1TW, UK}}
\author{Edva Roditty-Gershon \address{School of Mathematics, University of Bristol, Bristol BS8 1TW, UK}}
\thanks{ We are pleased to acknowledge support under EPSRC Programme Grant EP/K034383/1 LMF: \textit{L}-Functions and Modular Forms. JPK is also grateful for support through a Royal Society Wolfson Research Merit Award and a Royal Society Leverhulme Senior Research Fellowship. We thank Nick Katz, Emmanuel Kowalski, and Zeev Rudnick for discussion and helpful comments.}
\allowdisplaybreaks \maketitle
\section{Introduction}\label{sec:introduction}
\subsection{Analytic motivation}
Let $\Lambda(n)$ denote the von Mangoldt function, defined by \begin{equation*}
\Lambda(n)
=
\begin{cases}
\log p & \mbox{if }n=p^k\mbox{ for some prime }p\mbox{ and integer }k\ge 1, \\
0 & \mbox{otherwise.}
\end{cases} \end{equation*} The prime number theorem implies that \begin{equation*}
\sum_{n \leq x} \Lambda(n)= x+o(x), \end{equation*} as $x\to\infty$, determining the average of $\Lambda(n)$ over long intervals. In many problems one needs to understand sums over shorter intervals and in arithmetic progressions. This is significantly more difficult, because the fluctuations between different short intervals/arithmetic progressions can be large, and in many important cases we do not have rigorous results.
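For orientation, this long-interval average is easy to observe numerically. The following sketch (our illustration, using naive trial division; the cutoff $x=10^4$ is arbitrary) computes $\psi(x)=\sum_{n\le x}\Lambda(n)$ and checks that $\psi(x)/x$ is close to $1$:

```python
import math

def von_mangoldt(n):
    """Lambda(n) = log p if n = p^k for a prime p, else 0 (naive trial division)."""
    if n < 2:
        return 0.0
    p = next(d for d in range(2, n + 1) if n % d == 0)  # smallest prime factor
    while n % p == 0:
        n //= p
    return math.log(p) if n == 1 else 0.0

x = 10_000
psi = sum(von_mangoldt(n) for n in range(1, x + 1))
print(psi / x)  # approaches 1 as x grows, consistent with psi(x) = x + o(x)
```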
One may seek to characterize the fluctuations in these sums via their variances. These variances are the subject of several long-standing conjectures. For example, in the case of short intervals Goldston and Montgomery \cite{GM} have made the following conjecture \begin{conjecture}[Variance of primes in short intervals]\label{GMcon} For any fixed $\varepsilon>0$, \begin{equation*}
\int_{1}^{X} \Big( \sum_{x\leq n \leq x+h} \Lambda(n)- h\Big)^{2} dx
\sim
hX\big(\log X-\log h\big) \end{equation*} uniformly for $1\leq h\leq X^{1-\varepsilon}$. \end{conjecture}
It is natural to try to compute the variance in Conjecture \ref{GMcon} using the Hardy-Littlewood Conjecture \begin{equation}\label{HL}
\sum_{n\le X}\Lambda(n)\Lambda(n+k)\sim \mathfrak{S}(k)X \end{equation} as $X\rightarrow\infty$, where $\mathfrak{S}(k)$ is the singular series \begin{equation*}
\mathfrak{S}(k) =
\begin{cases}
2\prod_{p>2}\left(1-\frac{1}{(p-1)^2}\right)
\prod_{\substack{p>2\\ p|k}}\frac{p-1}{p-2}
& \mbox{\quad if }k \mbox{ is even, } \\
0 & \mbox{\quad if }k \mbox{ is odd. }
\end{cases} \end{equation*} Montgomery and Soundararajan \cite{MS} proved that \eqref{HL}, together with an assumption concerning the implicit error term, implies a more precise asymptotic for the variance in Conjecture \ref{GMcon} when $\log X\leq h\leq X^{1/2}$, namely that it is equal to \begin{equation*}\label{Zeta variance h LOT}
hX\big(\log X-\log h - \gamma_0-\log 2\pi\big)
+
O_\varepsilon\Big(h^{15/16}X(\log X)^{17/16}+h^2X^{1/2+\varepsilon}\Big), \end{equation*} where $\gamma_0$ is the Euler-Mascheroni constant.
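The singular series $\mathfrak{S}(k)$ above is straightforward to approximate by truncating the two Euler products. The sketch below is our own illustration (naive primality test; the truncation cutoff is arbitrary), not part of \cite{GM} or \cite{MS}:

```python
import math

def is_prime(n):
    """Naive trial-division primality test."""
    return n > 1 and all(n % d for d in range(2, math.isqrt(n) + 1))

def singular_series(k, cutoff=10_000):
    """Hardy-Littlewood singular series S(k), truncating both products at `cutoff`."""
    if k % 2:            # S(k) = 0 for odd k
        return 0.0
    primes = [q for q in range(3, cutoff) if is_prime(q)]
    twin = 1.0           # twin-prime product over all odd primes
    for q in primes:
        twin *= 1 - 1 / (q - 1) ** 2
    corr = 1.0           # correction factor over odd primes dividing k
    for q in primes:
        if k % q == 0:
            corr *= (q - 1) / (q - 2)
    return 2 * twin * corr

print(singular_series(2))  # close to the twin-prime constant 2*C_2 = 1.32032...
```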
An alternative approach to computing this variance follows from \begin{equation*}
\frac{\zeta^\prime(s)}{\zeta(s)}=-\sum_{n=1}^\infty\frac{\Lambda (n)}{n^s}, \end{equation*} which links statistical properties of $\Lambda(n)$ to those of the zeros of the Riemann zeta-function $\zeta(s)$. Taking this line, Goldston and Montgomery \cite{GM} proved that Conjecture \ref{GMcon} is equivalent to the following conjecture, due to Montgomery \cite{M}, concerning the pair correlation of the non-trivial zeros $\frac12 +i\gamma$ of the zeta-function: \begin{conjecture}[Pair Correlation Conjecture]\label{SPC} Let \[
\mathcal{F}(X,T)
=
\sum_{0<\gamma,\gamma'\leq T}X^{i(\gamma-\gamma')}w(\gamma-\gamma'), \] where $w(u)=\frac{4}{4+u^2}$. Then for any fixed $A\geq1$ we have, assuming the Riemann Hypothesis, \[
\mathcal{F}(X,T)\sim\frac{T\log T}{2\pi} \] uniformly for $T\leq X\leq T^{A}$. \end{conjecture} \noindent See also \cite{C} and \cite{LPZ}, where lower order terms are considered in the equivalence.
There is a similar theory in the case of sums in arithmetic progressions. The Prime Number Theorem for arithmetic progressions states that for a fixed modulus $\Q$, \begin{equation}\label{PNT for arith prog}
\sum_{\substack{n\leq X\\ n=A\bmod \Q}} \Lambda(n) \sim \frac{X}{\phi(\Q)},\quad \mbox{ as }X\to \infty
\;, \end{equation} where $\phi(\Q)$ is the Euler totient function, giving the number of reduced residues modulo $\Q$. The variance of sums over different arithmetic progressions is then defined by \begin{equation}
G(X,\Q)=\sum_{\substack{A\bmod \Q\\ \gcd(A,\Q)=1}} \left|\sum_{\substack{n\leq X\\ n=A\bmod \Q}} \Lambda(n)-\frac X{\phi(\Q)}
\right|^2. \end{equation} Asymptotic formulae are known when $G(X, \Q)$ is summed over a long range of values of $\Q$ (c.f.~\cite{Montgomery}, \cite{HooleyI} and \cite{HooleyII}), but much less is known concerning $G(X, \Q)$ itself. In the latter case, Hooley has made the following conjecture \cite{HooleyICM}. \begin{conjecture}[Variance of primes in arithmetic progressions]\label{Hooleycon} \begin{equation*}
G(X, \Q)\sim X\log \Q. \end{equation*} \end{conjecture} \noindent Hooley was not specific about the size of $\Q$ relative to $X$ for which this asymptotic should hold. Friedlander and Goldston \cite{FG} have shown that in the range $\Q>X^{1+o(1)}$, \begin{equation}\label{FG uninteresting}
G(X,\Q)
\sim
X\log X - X - \frac{X^2}{\phi(\Q)} + O\left(\frac X{(\log X)^A}\right) + O((\log \Q)^3) \;. \end{equation} This is a relatively straightforward range because it contains at most one prime. They conjecture that Hooley's asymptotic holds if $X^{1/2+\epsilon}<\Q<X$ and further conjecture that if $X^{1/2+\epsilon}<\Q<X^{1-\epsilon}$ then \begin{equation}\label{FG conj}
G(X,\Q)
\sim
X\log \Q - X\cdot\left(\gamma_0 +\log 2\pi + \sum_{p\mid \Q} \frac{\log p}{p-1}\right) \;. \end{equation} They show that both Conjecture~\ref{Hooleycon} and \eqref{FG conj} hold assuming the Hardy-Littlewood conjecture with small remainders. For $\Q<X^{1/2}$ relatively little seems to be known.
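Although little is known rigorously in this range, the quantity $G(X,\Q)$ itself is easy to evaluate by brute force directly from its definition. The following sketch is our own illustration (the specific values $X=5000$, $\Q=101$ are arbitrary choices with $\Q$ slightly above $X^{1/2}$):

```python
import math

def von_mangoldt(n):
    """Lambda(n) = log p if n = p^k for a prime p, else 0 (naive trial division)."""
    if n < 2:
        return 0.0
    p = next(d for d in range(2, n + 1) if n % d == 0)
    while n % p == 0:
        n //= p
    return math.log(p) if n == 1 else 0.0

def G(X, Q):
    """Brute-force G(X, Q): variance of psi(X; Q, A) over reduced residues A mod Q."""
    lam = [0.0] + [von_mangoldt(n) for n in range(1, X + 1)]
    phi = sum(1 for a in range(1, Q + 1) if math.gcd(a, Q) == 1)
    total = 0.0
    for A in range(1, Q + 1):
        if math.gcd(A, Q) == 1:
            S = sum(lam[n] for n in range(A, X + 1, Q))
            total += (S - X / phi) ** 2
    return total

X, Q = 5000, 101
print(G(X, Q), X * math.log(Q))  # comparable in size for Q near sqrt(X)
```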
Conjectures \ref{GMcon} and \ref{Hooleycon} remain open, but their analogues in the function field setting have been proved in the limit of large field size \cite{KR}.
Let $\Fq$ be a finite field of $q$ elements and $\Fq[t]$ the ring of polynomials with coefficients in $\Fq$. Let $\MM\sub\Fq[t]$ be the subset of monic polynomials and $\MM_n\sub\MM$ be the subset of polynomials of degree $n$. Let $\PP\sub\MM$ be the subset of irreducible polynomials and $\PP_n=\PP\cap\MM_n$. The norm of a non-zero polynomial $f\in\Fq[t]$ is defined to be $|f|=q^{\deg f}$.
The von Mangoldt function is the function on $\MM$ defined as $$
\Lambda(f)
=
\begin{cases}
d & \mbox{if }f=\pi^m\mbox{ with }\pi\in\PP_d \\
0 & \mbox{otherwise}
\end{cases} $$ The Prime Polynomial Theorem in this context is the identity \begin{equation}\label{Explicit formula}
\sum_{f\in \mathcal M_n}\Lambda(f) = q^n \;. \end{equation} The analogue of Conjecture \ref{GMcon} is the following result, proved in \cite{KR}: for $h\le n-5$, \begin{equation}\label{KRint}
\frac{1}{q^n}\sum_{A\in \mathcal M_n}
\left| \sum_{|f-A|\le q^h}\Lambda (f)-q^{h+1}\right|^2
\sim
q^{h+1}(n-h-2) \end{equation}
as $q\rightarrow \infty$; note that $|\{f:|f-A|\le q^h\}|=q^{h+1}$.
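The identity \eqref{Explicit formula} can be checked for small $q$ and $n$ by brute-force enumeration. The sketch below is our own illustration, taking $q=p$ prime and using naive trial division in $\Fq[t]$; it counts monic irreducibles of each degree $d\mid n$ and verifies $\sum_{f\in\MM_n}\Lambda(f)=\sum_{d\mid n} d\cdot\#\PP_d=q^n$:

```python
from itertools import product

p, n = 3, 6  # small example: q = 3, degree n = 6

def poly_mod(b, a):
    """Remainder of b modulo the monic polynomial a in F_p[t].
    Polynomials are coefficient lists, lowest degree first."""
    b = list(b)
    while len(b) >= len(a):
        c = b[-1]
        if c:
            shift = len(b) - len(a)
            for i, x in enumerate(a):
                b[shift + i] = (b[shift + i] - c * x) % p
        b.pop()  # leading coefficient is now zero
    return b

def monics(d):
    """All monic polynomials of degree d over F_p."""
    return [list(tail) + [1] for tail in product(range(p), repeat=d)]

def is_irreducible(f):
    """Trial division by all monic polynomials of degree 1..deg(f)//2."""
    d = len(f) - 1
    return all(any(poly_mod(f, g))
               for e in range(1, d // 2 + 1) for g in monics(e))

# number of monic irreducibles of each degree d dividing n
irr = {d: sum(is_irreducible(f) for f in monics(d))
       for d in range(1, n + 1) if n % d == 0}

# Prime Polynomial Theorem: sum_{f in M_n} Lambda(f) = sum_{d | n} d * #P_d = p^n
assert sum(d * c for d, c in irr.items()) == p ** n
print(irr, p ** n)
```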
In the same vein, the function-field analogue of Conjecture \ref{Hooleycon} was also established in \cite{KR}: fix $n\ge 2$, then, given a sequence of finite fields $\Fq$ and square-free polynomials $\Q\in \Fq[t]$ with $2\le\deg(\Q)\le n+1$, one has \begin{equation}\label{KRap}
\sum_{\substack{A\bmod \Q\\ \gcd(A,\Q)=1}}
\left|
\sum_{\substack{f\in \mathcal M_n\\f=A \bmod \Q}}
\Lambda(f)-\frac{q^n}{\Phi(\Q)}
\right|^2
\sim
q^n({\rm deg} \Q-1) \end{equation} as $q \rightarrow \infty$.
The asymptotic formulae (\ref{KRint}) and (\ref{KRap}) were established in \cite{KR} by expressing the variances as sums over families of $L$-functions. These $L$-functions can be expressed as the characteristic polynomials of matrices representing Frobenius conjugacy classes. In the limit as $q\rightarrow \infty$, these matrices become equidistributed in one of the classical compact groups and the sums become matrix integrals of a kind familiar in Random Matrix Theory. Evaluating these integrals leads to the expressions above.
This approach to computing variances has subsequently been applied to other arithmetic functions defined over function fields, including the M\"obius function \cite{KRII}, the square of the M\"obius function (i.e., the characteristic function of square-free polynomials) \cite{KRII}, square-full polynomials \cite{R-G}, and generalized divisor functions \cite{KRRR}. For overviews see \cite{Rud}, \cite{KR-G}, and \cite{Rod}. The arithmetic functions considered so far have all been associated with degree-one $L$-functions (or simple functions of these). Our main aim in this paper is to extend the theory to arithmetic functions associated with $L$-functions of degree two and higher. For example, our results apply to $L$-functions associated with elliptic curves defined over $\Fq[t]$. This will require us to establish the appropriate equidistribution results for such $L$-functions. We achieve this using the machinery developed by Katz \cite{Katz:CE}.
The main reason for moving to higher-degree $L$-functions is the recent discovery in the number-field setting that one gets qualitatively new behaviour when the degree exceeds one \cite{BKS}.
We now briefly summarize the results in \cite{BKS}. Let $\mathcal{S}$ denote the Selberg class of $L$-functions. For $F\in\mathcal{S}$ primitive, write \[
F(s) = \sum_{n=1}^{\infty}\frac{a_F(n)}{n^s}. \] Then $F(s)$ has an Euler product \begin{equation}\label{Euler}
F(s)
=
\prod_{p}\textrm{exp}\bigg(\sum_{l=1}^{\infty}\frac{b_F(p^l)}{p^{ls}}\bigg) \end{equation} and satisfies the functional equation \[
\Phi(s) = \varepsilon_F\overline{\Phi}(1-s), \] where $\overline{\Phi}(s)=\overline{\Phi(\overline{s})}$ and \[
\Phi(s) = \Q^s\bigg(\prod_{j=1}^{r}\Gamma(\lambda_j s+\mu_j)\bigg) F(s), \]
for some $\Q>0$, $\lambda_j>0$, $\textrm{Re}(\mu_j)\geq0$ and $|\varepsilon_F|=1$.
There are two important invariants of $F(s)$: the degree $d_F$ and the conductor $\mathfrak{q}_F$, given by \[
d_F=2\sum_{j=1}^{r}\lambda_j
,\quad
\mathfrak{q}_F=(2\pi)^{d_F}\Q^2\prod_{j=1}^{r}\lambda_j^{2\lambda_j}, \] respectively. Another is $m_F$, the order of the pole at $s=1$, which equals $1$ for the Riemann zeta function and is expected to be $0$ otherwise.
Let $\Lambda_F$ be the arithmetic function defined by \[
\frac{F'(s)}{F(s)}
=
-\sum_{n=1}^{\infty}\frac{\Lambda_F(n)}{n^s}, \] and let $\psi_F$ be the function defined by \begin{equation*}
\psi_{F}(x) := \sum_{n \leq x} \Lambda_F(n). \end{equation*} The former will be the main focus of our attention.
A generalized prime number theorem of the form \begin{equation*}
\sum_{n \leq x} \Lambda_F(n) = m_{F} x+o(x) \end{equation*} is expected to hold. In analogy with the case of the Riemann zeta function, it is natural to consider the variance \begin{equation*}
\tilde{V}_F(X, h) := \int_{1}^{X}\Big |\psi_F(x+h)-\psi_F(x) - m_Fh\Big|^{2} dx. \end{equation*} For example, when $F$ represents an $L$-function associated with an elliptic curve, $\tilde{V}_F(X, h)$ is the variance of sums over short intervals involving the Fourier coefficients of the associated modular form evaluated at primes and prime powers; and in the case of Ramanujan's $L$-function, it represents the corresponding variance for sums involving the Ramanujan $\tau$-function.
For most $F\in\mathcal{S}$ it is expected that \begin{equation*}
\sum_{n\le X}\Lambda_F(n)\Lambda_F(n+h) = o(X). \end{equation*} This might lead one to expect that $\tilde{V}_F(X, h)$ typically exhibits significantly different asymptotic behaviour than in the case when $F$ is the Riemann zeta-function because in that case \eqref{HL} plays a central role in our understanding of the variance. However, all principal $L$-functions are believed to look essentially the same from the perspective of the statistical distribution of their zeros; that is, it is conjectured that the zeros of all primitive $L$-functions have a limiting distribution which coincides with that of random unitary matrices, as in Montgomery's conjecture (\ref{SPC}). It was proved in \cite{BKS}, assuming the Generalized Riemann Hypothesis (GRH), that an extension of the pair correlation conjecture for the zeros that includes lower or terms (and which itself follows from the ratio conjecture of \cite{CFZ} along the lines of \cite{CS}) is equivalent to the formulae \eqref{2.2} and \eqref{2.3} below for $\tilde{V}_F(X, h)$ which generalize the Montgomery-Soundararajan formula (\ref{Zeta variance h LOT}).
If $0<B_1<B_2\leq B_3<1/d_F$, then \begin{eqnarray}\label{2.2}
\tilde{V}_F(X,h)
& = &
h X\Big(d_F \log\frac{X}{h}+\log\mathfrak{q}_F-(\gamma_0+\log 2\pi)d_F\Big)\nonumber\\
& &
\qquad\qquad+O_\varepsilon\big(hX^{1+\varepsilon}(h/X)^{c/3}\big)+O_\varepsilon\Big(hX^{1+\varepsilon}\big(hX^{-(1-B_1)}\big)^{1/3(1-B_1)}\Big) \end{eqnarray} uniformly for $X^{1-B_3}\ll h\ll X^{1-B_2}$, for some $c>0$.
Otherwise, if $1/d_F<B_1<B_2\leq B_3<1$, \begin{eqnarray}\label{2.3}
\tilde{V}_F(X,h)
& = &
\frac{1}{6}h X\Big(6\log X-\big(3+8\log 2\big)\Big) \\
& &
\qquad\qquad+O_\varepsilon\big(hX^{1+\varepsilon}(h/X)^{c/3}\big)+O_\varepsilon\Big(hX^{1+\varepsilon}\big(hX^{-(1-B_1)}\big)^{1/3(1-B_1)}\Big) \end{eqnarray} uniformly for $X^{1-B_3}\ll h\ll X^{1-B_2}$, for some $c>0$.
If $d_F=1$ there is only one regime of behaviour, governed by (\ref{2.2}). When $\mathfrak{q}_F=1$, this coincides exactly with (\ref{Zeta variance h LOT}); and when $\mathfrak{q}_F\neq 1$, it generalizes (\ref{Zeta variance h LOT}) in a straightforward way.
If $d_F>1$ there are two ranges of behaviour, depending on the size of $h$. In the first range, $\tilde{V}_F(X,h)/h$ is proportional to $\log h$; in the second it is independent of $h$ at leading order. It is this behaviour that we seek to understand better in the case of function fields. In that case we are able to establish unconditional theorems which illustrate the qualitatively new form of the variance when the degree is two or higher.
\subsection{Function-field analogue}
Our results are quite general and to state them requires a good deal of notation and terminology to be developed. For this reason we postpone presenting them until later sections, when the necessary theory has been developed. For reference, our main results are Theorem~\ref{thm:variance-estimate} (see \S\ref{sec:sums-in-arithmetic-progressions}) and Theorem~\ref{thm:application} (see \S\ref{sec:explicit-abelian-varieties}). The former provides the variance estimates we need and the latter provides an application of these estimates to $L$-functions of abelian varieties. Two key ingredients used to prove these theorems are Theorem~\ref{thm:big-monodromy-implies-equidistribution} (see \S\ref{sec:equidistribution}) and Theorem~\ref{thm:is-equidistributed} (see \S\ref{sec:big-monodromy}) which provide requisite equidistribution and big-monodromy results respectively.
To illustrate our results we state now a special case of one of them.
Suppose $q$ is an odd prime power, and let $E/\Fq(t)$ be the Legendre curve, that is, the elliptic curve with affine model $$
y^2 = x(x-1)(x-t). $$ Over the ring $\Fq[t]$, this curve has bad reduction at $t=0,1$ and good reduction everywhere else, so it has conductor $\s=t(t-1)$. It also has additive reduction at $\infty$, so the $L$-function is given by an Euler product $$
L(T,E/\Fq(t))
=
\prod_{\pi\in\mathcal{P}} L(T^{\deg(\pi)},E/\mathbb{F}_\pi)^{-1} $$ where $\mathcal{P}\subset\Fq[t]$ is the subset of monic irreducibles and $\mathbb{F}_\pi$ is the residue field $\Fq[t]/\pi\Fq[t]$.
Each Euler factor of $L(T,E/\Fq(t))$ is the reciprocal of a polynomial in $\mathbb{Q}[T]$ and satisfies $$
T\frac{d}{dT}
\log L(T,E/\mathbb{F}_\pi)^{-1}
=
\sum_{m=1}^\infty a_{\pi,m}T^m
\in\mathbb{Z}[[T]]. $$ Moreover, if we define $\Lambda_{\mathrm{Leg}}$ to be the function on the subset $\mathcal{M}$ of monic polynomials given by $$
\LambdaLeg(f)
=
\begin{cases}
d\cdot a_{\pi,m} & \mbox{if }f=\pi^m\mbox{ with }\pi\in\mathcal{P}\mbox{ and }\deg(\pi)=d \\
0 & \mbox{otherwise},
\end{cases} $$ then the $L$-function satisfies $$
T\frac{d}{dT}
\log(L(T,E/\Fq(t)))
=
\sum_{n=1}^\infty
\left(
\sum_{f\in\mathcal{M}_n}
\LambdaLeg(f)
\right)
T^n. $$
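For a degree-one prime $\pi=t-c$ with $c\neq 0,1$, the coefficient $a_{\pi,1}$ is the trace of Frobenius of the fibre $y^2=x(x-1)(x-c)$ over $\mathbb{F}_\pi=\Fq$, which can be computed as a quadratic character sum. The following sketch is our own illustration, taking $q=p$ prime for simplicity:

```python
p = 101  # an odd prime, playing the role of q

def chi(a):
    """Quadratic character of F_p, via Euler's criterion."""
    a %= p
    if a == 0:
        return 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

def a_pi(c):
    """Trace of Frobenius a_{pi,1} for the degree-one prime pi = t - c:
    #E(F_p) = p + 1 - a_pi for the fibre y^2 = x(x-1)(x-c)."""
    assert c % p not in (0, 1), "t = 0, 1 are the primes of bad reduction"
    return -sum(chi(x * (x - 1) * (x - c)) for x in range(p))

for c in (2, 3, 5):
    a = a_pi(c)
    assert a * a <= 4 * p  # Hasse bound |a_pi| <= 2 sqrt(p)
    print(c, a)
```

Note that every fibre has full rational $2$-torsion (the cubic splits over $\mathbb{F}_p$), so $\#E(\mathbb{F}_p)$ is even and each $a_{\pi,1}$ is even as well.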
Let $\Q\in\Fq[t]$ be monic and square free. For each $n\geq 1$ and each $A$ in $\Gamma(\Q)=(\Fq[t]/\Q\Fq[t])^\times$, consider the sum $$
S_{n,\Q}(A)
\ :=
\sum_{\substack{f\in\mathcal{M}_n\\f\equiv A\bmod\Q}}
\LambdaLeg(f). $$ Let $A$ vary uniformly over $\Gamma(\Q)$, and consider the moments $$
\mathbb{E}_A[S_{n,\Q}(A)]
=
\frac{1}{|\Gamma(\Q)|}
\sum_{A\in\Gamma(\Q)}
S_{n,\Q}(A),
\quad
\mathrm{Var}_A[S_{n,\Q}(A)]
=
\frac{1}{|\Gamma(\Q)|}
\sum_{A\in\Gamma(\Q)}
|S_{n,\Q}(A)-\mathbb{E}_A[S_{n,\Q}(A)]|^2. $$
These moments (and the quantity $|\Gamma(\Q)|$) depend on $q$, so one can ask how they behave when we replace $\Fq$ by a finite extension, that is, let $q\to\infty$. Using the theory we develop in this paper one can prove the following theorem.
\begin{theorem}\label{thm:intro-theorem} If $\gcd(\Q,\s)=t$ and if $\deg(\Q)$ is sufficiently large, then $$
|\Gamma(\Q)|\cdot\mathbb{E}_A[S_{n,\Q}(A)]
=
\sum_{f\in\mathcal{M}_n}\LambdaLeg(f),\quad
\lim_{q\to\infty}\frac{|\Gamma(\Q)|}{q^{2n}}\cdot\mathrm{Var}_A[S_{n,\Q}(A)] = \min\{n,2\deg(\Q)-1\}. $$ \end{theorem}
\noindent See Theorem~\ref{thm:application}. This should be compared to (\ref{KRap}). For definiteness, we could replace ``sufficiently large'' by $\deg(\Q)>900$, but we do not believe this bound to be optimal. We also do not believe the hypothesis on $\gcd(\Q,\s)$ is necessary (cf.~Remark~\ref{rmk:unipotence-hypothesis}).
The fact that the expression for the variance depends on $2\deg(\Q)$ is a direct consequence of the fact that the associated $L$-functions have degree two. (For an $L$-function of degree $r$, one will get a leading term of $r\deg(\Q)$ instead.) This then leads to there being two ranges of behaviour.
The analogues of our main results in the number field setting are formulae for the variance of $\Lambda_F$ when summed over arithmetic progressions (a similar case to when these sums are considered in short intervals, as in \eqref{2.2} and \eqref{2.3}). For example if we take a rational elliptic curve and write the number of points over the field of $p$ elements as $$ N_{p}=p+1-a_{p} $$ and the number of points over an extension field of degree $m$ as $$ N_{p^{m}}=p^{m}+1-a_{p^{m}} $$ then our function field theorems are analogous to considering the fluctuations of the sum of $a_{p^{m}}$, weighted by the logarithm of $p$, over residue classes of $p^{m} \bmod c$.
\subsection{Underlying equidistribution theorem}
The key ingredients we use to prove Theorem~\ref{thm:intro-theorem} and its generalizations are the Mellin transform and Deligne's equidistribution theorem. More precisely, we start with a lisse sheaf $\FF$ on a dense open $T\seq\Aonet[1/\s]$ and twist it by variable Dirichlet characters $\dc$ with square-free conductor $\Q$ to obtain a family of lisse sheaves $\FF_\dc$ on $T[1/\Q]$; this family is a Mellin transform of $\FF$.
One can associate a monodromy $\GG_\arith$ group to this family generated by Frobenius conjugacy classes $\Frob_{\EFq,\dc}$ for variable Dirichlet characters $\dc$ over finite extensions $\EFq/\Fq$. A priori $\GG_\arith$ is reductive and defined over $\Qellbar$, but Deligne's Riemann hypothesis allows us to associate the classes $\Frob_{\EFq,\dc}$ for `good' $\dc$ to well-defined conjugacy classes in a compact form of the `same' reductive group over $\bbC$. Deligne's equidistribution theorem implies these classes are equidistributed.
For our applications, we need equidistribution in a unitary group $U_\R(\bbC)$, and thus we need $\GG_\arith$ to be as big as possible, namely $\GL_{\R,\Qellbar}$. We were only able to prove this is the case under the hypotheses that $\deg(\Q)\gg 1$ and that $\FF$ has a unipotent block of exact multiplicity one about $t=\gcd(\Q,\s)=0$.
On one hand, while we do expect that one may encounter exceptions when $\deg(\Q)$ is small, we do not believe our lower bound on $\deg(\Q)$ is sharp. On the other hand, the hypothesis on the monodromy about the unique prime dividing $\gcd(\Q,\s)$ was made in order to ensure we could exhibit elements of $\GG_\arith$ whose existence helped ensure the group was big. We conjecture one still has big monodromy under the weaker hypothesis that $\gcd(\Q,\s)=1$.
\subsection{Overview}
The structure of this paper is as follows.
We start in \S\ref{sec:framework} by establishing notation and relatively basic facts that we need throughout the rest of the paper.
In \S\ref{sec:l-functions} we define two $L$-functions that one can attach to a Galois representation $\rho$: the complete $L$-function $L(T,\rho)$ and a partial $L$-function $\LC(T,\rho)$. The former may be defined in terms of an Euler product over all places of the function field $\Fq(t)$, and for the latter we exclude the Euler factors indexed by a finite set $\CC$ of places in $\Fq(t)$. If the excluded Euler factors are in fact trivial, then the two $L$-functions will coincide, but otherwise they will not. Either way, after imposing requisite hypotheses on the representation $\rho$, we apply the theory of $L$-functions and also Deligne's theorem to deduce information about their degrees and zeros.
In \S\ref{sec:twisted-l-functions} we consider twists of the representation $\rho$ by Dirichlet characters $\dc$ of square-free conductor $\Q$. The material in this section is mostly a recasting of the results in \S\ref{sec:l-functions} in a manner which is convenient for us. The main objects of interest are the complete $L$-function $L(T,\rhochi)$ and the partial $L$-function $\LC(T,\rhochi)$.
In \S\ref{sec:equidistribution} we recall the notion of a good character $\dc$ for $\rho$: it is a character such that $L(T,\rhochi)$ and $\LC(T,\rhochi)$ are both polynomials and equal to each other. This is precisely the property we need to deduce that they are `pure', that is, that their zeros are Weil numbers, and to produce a unitarized $L$-function $\LCStar(T,\rhochi)$. This allows us to associate to each good character $\dc$ a conjugacy class $\theta_{\rho,\dc}$ in a unitary group $U_\R(\bbC)$ for $R=\deg(\LC(T,\rho))$. We define what it means for the resulting multiset of conjugacy classes $\Theta_{\rho,q}$ to be equidistributed in $U_\R(\bbC)$ as $q\to\infty$. Essentially it means that for any representation $\Lambda\colon U_\R(\bbC)\to\GL_n(\bbC)$, the average of $\Tr(\Lambda(\theta_{\rho,\dc}))$ over the good $\dc$ tends to the value of the matrix integral $\int_{U_\R(\bbC)}\Tr(\Lambda(\theta))\,d\theta$. We then prove a theorem which asserts that one achieves equidistribution when the Mellin transform of $\rho$ has big monodromy.
In \S\ref{sec:sums-in-arithmetic-progressions} we introduce the arithmetic functions of interest to us. More precisely, we define a generalization $\Lambda_\rho$ of the von Mangoldt function and consider sums $\SnAQ$ of its values in an arithmetic progression modulo $\Q$. For each $n$, we consider the expected value and variance of these sums as $A$ varies uniformly over $\BQ$. We show how to evaluate the limit of both quantities as $q\to\infty$ under the hypothesis that the Mellin transform of $\rho$ has big monodromy. As mentioned above, we use this hypothesis to deduce that the conjugacy classes $\Theta_{\rho,q}$ are equidistributed and then to evaluate the variance in terms of an easy-to-evaluate matrix integral.
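When $\rho$ is trivial, $\Lambda_\rho$ should reduce to the usual von Mangoldt function of $\Fq[t]$, with $\Lambda(\pi^k)=\deg(\pi)$ for monic irreducible $\pi$ and $\Lambda(f)=0$ otherwise; the sum over all of $\MM_n$ (with no congruence condition) is then exactly $q^n$ by the prime polynomial theorem. The sketch below is purely illustrative and not part of the paper's argument; it checks this identity numerically via the M\"obius count of irreducibles.

```python
# The prime polynomial theorem for F_q[t]: summing the von Mangoldt
# function Lambda(pi^k) = deg(pi) over all monic f of degree n gives
# exactly q^n.  Grouping prime powers pi^k with k*deg(pi) = n turns the
# sum into sum_{d | n} d * |A_d|, and |A_d| has the closed form
#   |A_d| = (1/d) * sum_{e | d} mu(e) * q^{d/e}.

def mobius(n):
    res, m, p = 1, n, 2
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:
                return 0
            res = -res
        p += 1
    return -res if m > 1 else res

def count_irreducibles(d, q):
    # |A_d|, the number of monic irreducibles of degree d over F_q
    return sum(mobius(e) * q ** (d // e) for e in range(1, d + 1) if d % e == 0) // d

def von_mangoldt_sum(n, q):
    # sum of Lambda(f) over monic f in M_n
    return sum(d * count_irreducibles(d, q) for d in range(1, n + 1) if n % d == 0)
```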
In \S\ref{sec:big-monodromy} we prove a theorem which asserts that the Mellin transform of $\rho$ has big monodromy provided $\rho$ satisfies certain hypotheses. The material in this section rests heavily on the monumental works of Katz, most notably the monograph \cite{Katz:CE}. In order to prove our result, we were forced to impose the condition that the (square-free) conductor $\s$ of $\rho$ and the twisting conductor $\Q$ satisfy $\deg(\gcd(\Q,\s))=1$. We also imposed conditions on the local monodromy of $\rho$ at the zero of $\gcd(\Q,\s)$. We used both of these hypotheses to deduce that the relevant monodromy groups contained an element so special that the group was forced to be big (e.g., for the specific example considered in Theorem~\ref{thm:intro-theorem} one obtains pseudoreflections). While the specific result we proved is new, it borrows heavily from the rich set of tools developed by Katz, and one familiar with his work will easily recognize the intellectual debt we owe him.
In \S\ref{sec:explicit-abelian-varieties} we bring everything together and show how Galois representations arising from (Tate modules of) certain abelian varieties satisfy the requisite properties to apply the theorems of the earlier sections. More precisely, we consider Jacobians of (elliptic and) hyperelliptic curves of arbitrary genus, the Legendre curve being one such example. Because we chose to work with hyperelliptic curves we were forced to assume $q$ is odd. Nonetheless, we expect one can find other suitable examples in characteristic two.
There are two appendices to the paper containing material we needed for the results in Section~\ref{sec:big-monodromy}. In the first appendix we prove the group-theoretic result which asserts that a reductive subgroup of $\GL_\R$ with the sort of special element alluded to above is big. In the second appendix we recall much of the abstract formalism required to define the monodromy groups which we want to show are big. While none of this material is new, it elaborates on some of the facts which we felt were not always easy to give a direct reference for in \cite{Katz:CE}. In particular, our work should not be regarded as a substitute for Katz's original monograph, but we hope some readers will find it an acceptable complement to his masterful presentation.
\section{Framework}\label{sec:framework}
\subsection{Notation}
Let $q$ be a power of an odd prime $p$, let $\Fq$ be the finite field with $q$ elements, and let $K$ be the global field $\Fq(t)$. Let $\PP$ be the set of places of $K$ and $\PP_d\sub\PP$ be the finite subset of places of degree $d$. For each $v\in\PP$, let $\Fv$ be its residue field and $d_v=[\Fv:\Fq]$ be its degree. If $v$ is a finite place, then it corresponds to a monic irreducible $\pi\in\Fq[t]$, and $\Fpi$ is the quotient ring $\Fq[t]/\pi$. On the other hand, the residue field of the unique infinite place $v=\infty$ can be regarded as the quotient ring $\Fq[u]/u$ by taking $u=1/t$.
Let \defi{$\MM\sub\Fq[t]$} be the subset of monic polynomials and \defi{$\MM_d\sub\MM$} be the subset of polynomials of degree $d$. Let \defi{$\AA_d\seq\MM_d$} be the subset of irreducible polynomials and \defi{$v\colon\AA_d\to\PP_d$} be the map which identifies an irreducible $\pi$ with its corresponding finite place $v(\pi)$.
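Nothing in the sequel depends on it, but for small $q$ the sets $\MM_d$ and $\AA_d$ are easy to enumerate by machine, which can serve as a sanity check on the definitions; the sketch below (our own illustration, with $q$ a prime so that $\Fq=\bbZ/q$) sieves out the reducible monics.

```python
# Enumerate M_d (monic polynomials of degree d over F_q, q prime) and
# sieve out A_d, the monic irreducibles, i.e. the finite places of
# degree d.  A monic of degree d >= 2 is reducible exactly when it is a
# product of two monics of positive degree, one of degree <= d/2.
from itertools import product

def poly_mul(f, g, q):
    # multiply coefficient sequences (lowest degree first) over F_q
    out = [0] * (len(f) + len(g) - 1)
    for i, x in enumerate(f):
        for j, y in enumerate(g):
            out[i + j] = (out[i + j] + x * y) % q
    return tuple(out)

def monics(d, q):
    # monic degree-d polynomials as coefficient tuples (c_0, ..., c_{d-1}, 1)
    for lower in product(range(q), repeat=d):
        yield lower + (1,)

def irreducibles(d, q):
    reducible = {poly_mul(f, g, q)
                 for a in range(1, d // 2 + 1)
                 for f in monics(a, q)
                 for g in monics(d - a, q)}
    return [f for f in monics(d, q) if f not in reducible]
```

For instance, over $\bbF_2$ the unique irreducible quadratic is $t^2+t+1$, encoded as the tuple `(1, 1, 1)`.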
Let $\Ksep$ be a separable closure of $K$ and $\Fqbar\sub\Ksep$ be the algebraic closure of $\Fq\sub K$. Let $\GK=\Gal(\Ksep/K)$ and $G_{\Fq}=\Gal(\Fqbar/\Fq)$, and let $\bar{G}_K\seq G_K$ be the stabilizer of $\Fqbar$ so that there is an exact sequence $$
1
\longto \bar{G}_K
\longto G_K
\longto G_{\Fq}
\longto 1 $$ of profinite groups. Given a quotient $\GK\onto Q$ of profinite groups, we write $\bar{Q}\seq Q$ for the image of $\bar{G}_K$ and call it the \defi{geometric subgroup}.
For each $v\in\PP$, we fix a decomposition group $D(v)\seq \GK$, that is, a representative of its conjugacy class; equivalently we fix a place of $\Ksep$ over $v$. Let $I(v)\seq D(v)$ be the inertia subgroup and $P(v)\seq I(v)$ be the wild inertia subgroup (i.e., the $p$-Sylow subgroup). The quotient \defi{$\Gv=D(v)/I(v)$} is the absolute Galois group of $\Fv$, and we write $\Frob_v\in\Gv$ for the Frobenius element $\Frob_q^{d_v}$.
For each subset $S\sub\PP$, let $\KS\seq\Ksep$ be the maximal subextension unramified \emph{away} from $S$ and $\KSt\seq\KS$ be the maximal subextension \emph{tamely} ramified over $S$. Both extensions are Galois over $K$, so we write \defi{$\GKS$} and \defi{$\GKSt$} for their respective Galois groups. There is a commutative diagram $$
\xymatrix{
\GK\ar[rr]\ar[dr] & & \GKS\ar[dl] \\
& \GKSt &
} $$ of quotients.
If $v\not\in S$, then the inertia subgroup $I(v)$ is contained in the kernel of the horizontal map. In particular, every element of the coset $\Frob_v I(v)$ maps to the same element of $\GKS$, which we denote $\Frob_v\in\GKS$. Moreover, the kernel of the horizontal map is generated by the conjugates of the $I(v)$ for $v\not\in S$, and together with the conjugates of the $P(v)$ for $v\in S$, these generate the kernel of the map $\GK\onto\GKSt$.
Given a number field $E$, we write $\bbZ_E$ for the ring of integers. Given a maximal ideal $\l\sub\bbZ_E$, we write $\ell\in\bbZ$ for the rational prime it divides and $E_\l$ for the $\l$-adic completion of $E$. We also write $\bar{E}_\l$ for an algebraic closure of $E_\l$, e.g., $\Qellbar$ is an algebraic closure of $\Qell$.
Given a smooth geometrically connected curve $U$ over $\Fq$, we write $\Ubar$ for the base change curve $U\times_{\Fq}\Fqbar$. We fix (but do not name) a geometric generic point of $U$ and write $\piOne{U}$ and $\piOne{\Ubar}$ for the arithmetic and geometric \'etale fundamental groups of $U$ respectively. Moreover, if $T$ is a second smooth geometrically connected curve over $\Fq$ and if $T\to U$ is a finite \'etale cover, then we implicitly suppose the geometric generic point of $T$ maps to that of $U$ and write $\piOne{T}\to\piOne{U}$ for the induced inclusion of fundamental groups.
Given a sheaf $\FF$ on $U$, we suppose that $\FF$ is constructible, and unless stated otherwise we suppose it has coefficients in $\Qellbar$. We also write $H^i(\Ubar,\FF)$ and $H^i_c(\Ubar,\FF)$ for the \'etale cohomology groups of $\FF$. For each integer $n$, we write $\FF(n)$ for the Tate twisted sheaf $\FF\otimes_{\Qellbar}\Qellbar(n)$ and recall that $$
\det(1-T\,\Frob_q\mid H^i(\Ubar,\FF(n)))
=
\det(1-q^{-n}T\,\Frob_q\mid H^i(\Ubar,\FF)). $$ A similar identity holds for cohomology with compact supports (cf.~\cite[Proof of 6.1.13]{SGA4.5}). In particular, we have identities $$
\dim(H^i(\Ubar,\FF(n)))
=
\dim(H^i(\Ubar,\FF)),
\quad
\dim(H^i_c(\Ubar,\FF(n)))
=
\dim(H^i_c(\Ubar,\FF)) $$ for every $i$ and $n$.
The sheaf $\FF$ is lisse (or locally constant) on $U$ if and only if it corresponds to a continuous representation $\piOne{U}\to\GL(V)$ of the \'etale fundamental group on a finite-dimensional $\Qellbar$-vector space $V$ (cf.~\cite[II.3.16.d]{Milne}). In that case one has identifications \begin{equation}\label{eqn:invariants-and-coinvariants}
H^0(\Ubar,\FF)
=
V^{\piOne{\Ubar}}
\mbox{ and }
H^2_c(\Ubar,\FF(1))
=
V_{\piOne{\Ubar}} \end{equation} with the subspace of $\piOne{\Ubar}$-invariants and quotient space of $\piOne{\Ubar}$-coinvariants (see \cite[Exp.~6, 1.18.d]{SGA4.5}).
\section{$L$-functions}\label{sec:l-functions}
Let $\ell$ be a prime distinct from $p$ and $\Vl$ be a finite-dimensional $\Qellbar$-vector space. Let $\Q\in\Fq[t]$ be monic and square-free, $\CC\sub\PP$ be the subset consisting of $\infty$ and $v(\pi)$ for every prime factor $\pi$ of $\Q$, and $\SS\sub\PP$ be a finite subset of places. Suppose $\rho$ is a homomorphism $$
\rho\colon \GKS\to\GL(\Vl) $$ which is continuous with respect to the profinite topologies and which has \defi{trivial geometric invariants} (i.e., the subspace of $\bar{G}_{K,\SS}$-invariants of $\Vl$ vanishes).
In this section, we define, for each $v\in\PP$, the Euler factor $L(T,\rho_v)\in\Qellbar[T]$ of a local representation $\rho_v\colon G_v\to\GL(\Vl_v)$, as well as $L$-functions $$
L(T,\rho)
=
\prod_{v\in\PP}
L(T^{d_v},\rho_v)^{-1},
\quad
\LC(T,\rho)
=
\prod_{v\not\in\CC}
L(T^{d_v},\rho_v)^{-1} $$ and cohomological factors $$
\PC{i}(T) = \det(1-T\,\Frob_q\mid H^i_c(\AonetBar[1/\Q], \ME{\rho}))
\mbox{ for }i=1,2 $$ (see \S\ref{subsec:l-functions-of-rho}). We also define numerical invariants of $\rho$, including $\dr(\rho)$, $\dropCee{\rho}$, and $\swan(\rho)$, and we show that $\deg(\LC(T,\rho))$ and $\deg(L(T,\rho))$ equal \begin{equation}\label{eq:def-rC}
\rC(\rho)
=
\dr(\rho)
-
\dropCee{\rho}
+
\swan(\rho)
+
(\deg(\Q)-1)\cdot\dim(\Vl) \end{equation} and \begin{equation}\label{e:def-degL}
\degL(\rho)
=
\dr(\rho)
+
\swan(\rho)
-
2\cdot\dim(\Vl) \end{equation} respectively (see \S\ref{subsec:numerical-invariants-of-rho}). Finally, we define what it means for $\rho$ to be punctually $\iota$-pure of weight $w$ and use Deligne's Riemann hypothesis to derive some consequential properties of the $L$-functions (see \S\ref{subsec:purity} and \S\ref{subsec:semisimplicity-and-irreducibility}). Using these definitions we then give the main result of this section, Theorem~\ref{thm:archimedean-bound}, in \S\ref{sec:proof-of-archimedean-bound}.
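Comparing the two degree formulas may help orient the reader; the following remark (ours, immediate from the two definitions, and not used later) makes the comparison explicit.

```latex
% Subtracting \eqref{e:def-degL} from \eqref{eq:def-rC} gives
\[
\rC(\rho)-\degL(\rho)
=
(\deg(\Q)+1)\cdot\dim(\Vl)-\dropCee{\rho}
\geq 0.
\]
```

Since $\dropCee{\rho}$ is a sum over the places of $\CC$, whose total degree is $\deg(\Q)+1$, of local terms bounded by $\dim(\Vl)$, the right-hand side is non-negative, so $\rC(\rho)\geq\degL(\rho)$, with equality precisely when $\ME{\rho}$ is supported on $\Aonet[1/\Q]$ (cf.~Corollary~\ref{cor:chi_c}).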
\subsection{Galois modules versus sheaves}
While most of this paper uses the language of global fields, it is useful to adopt a geometric language. Certain readers will find the latter language more to their taste, and we acknowledge that many of our results may have a more appealing formulation in the language of geometry (and sheaves). However, we felt the language of Galois representations over global (function) fields was accessible to a broader audience, so we tried to do `as much as possible' in that language.
\subsection{Middle extensions}
Let $U\seq X\seq\Ponet$ be dense Zariski open subsets and $j\colon U\to X$ be the inclusion, and let $\FF$ be a sheaf on $X$. Suppose everything is defined over $\Fq$ so that the fiber $\FF_{\etabar}$ of $\FF$ over the geometric point $\etabar=\mathrm{Spec}(\Kbar)$ is a $G_K$-module. If the restriction $j^*\FF$ is lisse on $U$, then the fiber $\FF_\etabar$ is even a module over the \'etale fundamental group $\piOneU$. Conversely, for every continuous homomorphism $\piOneU\to\GL(\Vl)$, there is a lisse $\Qellbar$-sheaf on $U$ whose fiber over $\etabar$ is the $\piOneU$-module $\Vl$.
Given a sheaf $\GG$ on $U$ (e.g., $j^*\FF$), there are two functorial extensions of $\GG$ to a sheaf on all of $X$ we wish to consider: the extension by zero $j_!\GG$ and the direct image $j_*\GG$. (One can also consider hybrid versions such as $j''_!j'_*\GG$ for inclusions $j'\colon U\to U'$ and $j''\colon U''\to X$, but we do not need such versions.) As $\FF$ and $\GG$ vary we have $$
\Hom_{X}(j_!\GG,\FF) = \Hom_U(\GG,j^*\FF)
\quad\mbox{and}\quad
\Hom_{X}(\FF,j_*\GG) = \Hom_U(j^*\FF,\GG), $$ that is, the functors $j_!,j_*$ are adjoints of $j^*$ (cf.~\cite[II.3.14.a]{Milne}). In particular, the adjoints of the identity $j^*\FF\to j^*\FF$ are maps of the form $j_!j^*\FF\to\FF$ and $\FF\to j_*j^*\FF$ which we call \defi{adjunction maps}. We say that $\FF$ is \defi{supported on $U$} iff the first map is an isomorphism, and $\FF$ is a \defi{middle extension} iff the second map is an isomorphism for \emph{every} $j$.
\begin{lemma}\label{lem:lisse-vs-middle}\
\begin{enum} \item\label{part:single-U} If $j^*\FF$ is lisse and $\FF\to j_*j^*\FF$ is an isomorphism, then $\FF$ is a middle extension. \item\label{part:lisse-to-middle} If $\GG$ is lisse, then $j_*\GG$ is a middle extension. \end{enum} \end{lemma}
\begin{proof} Let $U'\seq X$ be a dense Zariski open and $U''=U\cap U'$. Consider the commutative diagram $$
\xymatrix{
U''\ar[r]^{i'}\ar[d]_{i} & U'\ar[d]^{j'} \\
U\ar[r]_j & X \\
} $$ of inclusions and the corresponding commutative diagram \begin{equation}\label{eq:adj-com}
\begin{array}{c}
\xymatrix{
\FF\ar[d]\ar[r] & j_*j^*\FF\ar[d] \\
j'_*j^{\prime *}\FF\ar[r] & (ij)_*(ij)^*\FF = (i'j')_*(i'j')^*\FF
}
\end{array} \end{equation} of adjunction maps.
Suppose $\GG$ is lisse. On one hand, this implies the map $\GG\to i_*i^*\GG$ is an isomorphism, so the right map of \eqref{eq:adj-com} is an isomorphism when $j^*\FF$ is lisse. In particular, if the top map of \eqref{eq:adj-com} is also an isomorphism, then the left map must also be an isomorphism, for \emph{every} $j$, hence \eqref{part:single-U} holds. On the other hand, the direct image map $j_*\GG\to j_*i_*i^*\GG$ is also an isomorphism. It even coincides with the adjunction map $j_*\GG\to j'_*j^{\prime *}j_*\GG$ via the functorial identities $j_*i_*i^*\GG = j'_*i'_*i^*\GG = j'_*j^{\prime *}j_*\GG$, so \eqref{part:lisse-to-middle} holds. \end{proof}
The following proposition shows that there is a canonical middle extension sheaf on $\Ponet$ we can associate to $\rho$. We denote it and its restriction to $X$ by $\ME{\rho}$.
\begin{prop}\label{prop:assoc-me} There is a middle extension $\FF$ with $\FF_\etabar=\Vl$ as $G_K$-modules, and it is unique up to isomorphism. \end{prop}
\begin{proof} There are quotients $G_K\onto\piOneU$ and $G_K\onto\GKS$, so $\FF_\etabar$ and $\Vl$ are $G_K$-modules. Moreover, if $U'\seq U$ is a sufficiently small dense Zariski open, then there exist a quotient $\piOneUpBar\onto\GKS$ and a unique lisse sheaf $\GG$ on $U'$ with $\GG_\etabar=\Vl$ as $\piOneUpBar$-modules. Its direct image $\ME{\rho}$ on $X$ is a middle extension by Lemma~\ref{lem:lisse-vs-middle}.\ref{part:lisse-to-middle}, and $\ME{\rho}_\etabar=\GG_\etabar=\Vl$ as $\GK$-modules by construction.
Let $\FF$ be any middle extension with $\FF_\etabar=\Vl$ as $\GK$-modules; we must show it is isomorphic to $\ME{\rho}$. Up to shrinking $U$, we may suppose that $\ME{\rho}$ and $\FF$ are lisse on $U$ and thus $\ME{\rho}_\etabar,\FF_\etabar$ are $\piOneU$-modules. Then the canonical bijection $\ME{\rho}_\etabar\to\FF_\etabar$ extends uniquely to an isomorphism $j^*\ME{\rho}\to j^*\FF$ of lisse sheaves. Moreover, the direct image $j_*j^*\ME{\rho}\to j_*j^*\FF$ and adjunction maps $\ME{\rho}\to j_*j^*\ME{\rho}$ and $\FF\to j_*j^*\FF$ are all isomorphisms, so there exists an isomorphism $\ME{\rho}\to\FF$ as claimed. \end{proof}
\begin{cor}\label{cor:assoc-me} Let $\SS'\sub\PP$ be a finite subset containing $\SS$ and $\rho'\colon G_{K,\SS'}\to\GL(\Vl)$ be the composition of $\rho$ with the natural quotient $G_{K,\SS'}\onto\GKS$. Then $\ME{\rho}$ and $\ME{\rho'}$ are isomorphic. \end{cor}
\begin{proof} The quotient $\GK\to\GKS$ factors as $\GK\onto G_{K,\SS'}\onto\GKS$, and $\ME{\rho'}_\etabar=\Vl=\ME{\rho}_\etabar$ as $\GK$-modules. Since $\ME{\rho},\ME{\rho'}$ are both middle extensions, Proposition~\ref{prop:assoc-me} implies they are isomorphic. \end{proof}
\subsection{Euler characteristics}
Let $\GG$ be a sheaf on $U$. Then there is an exact sequence $$
0
\longto j_!\GG
\longto j_*\GG
\longto \SS_\GG
\longto 0 $$
\noindent where $\SS_\GG$ is a skyscraper sheaf supported on \defi{$Z=\Ponet\ssm U$}, and the corresponding long exact sequence of (\'etale) cohomology (over $\Fqbar$) can be written \begin{equation}\label{eq:ff-ex-seq}
\cdots
\to H^n(\Zbar,\SS_\GG)
\to H^{n+1}_c(\Ubar,\GG)
\to H^{n+1}(\PonetBar,j_*\GG)
\to \cdots \end{equation} where $n\in\bbZ$.
\begin{lemma} There exist exact sequences \begin{equation}\label{eq:ff-ex-seq-head}
0
\to H^0_c(\Ubar,\GG)
\to H^0(\PonetBar,j_*\GG)
\to H^0(\Zbar,\SS_\GG)
\to H^1_c(\Ubar,\GG)
\to H^1(\PonetBar,j_*\GG)
\to 0 \end{equation} and \begin{equation}\label{eq:ff-ex-seq-piece}
0
\longto H^2_c(\Ubar,\GG)
\longto H^2(\PonetBar,j_*\GG)
\longto 0 \end{equation} and all other cohomology groups in \eqref{eq:ff-ex-seq} vanish. \end{lemma}
\begin{proof} The first term of \eqref{eq:ff-ex-seq} vanishes unless $n=0$ since $\dim(Z)=0$, and the other two terms vanish for $n+1\neq 0,1,2$ since $U$ and $\Ponet$ are curves. Therefore \eqref{eq:ff-ex-seq} breaks into the pieces \eqref{eq:ff-ex-seq-head} and \eqref{eq:ff-ex-seq-piece}, and all other terms vanish. \end{proof}
If $U=\Ponet$, then the middle term of \eqref{eq:ff-ex-seq-head} vanishes, and otherwise the first term vanishes since any curve $U\subsetneq\Ponet$ is affine. Either way, the Euler characteristics $$
\chi(\PonetBar,j_*\GG)
=
\sum_{n=0}^2 (-1)^n\dim(H^n(\PonetBar,j_*\GG)),
\quad
\chi_c(\Ubar,\GG)
=
\sum_{n=0}^2 (-1)^n\dim(H^n_c(\Ubar,\GG)), $$ and $\chi(\Zbar,\SS_\GG)=\dim(H^0(\Zbar,\SS_\GG))$ satisfy \begin{equation}
\chi(\PonetBar,j_*\GG)
-
\chi_c(\Ubar,\GG)
=
\chi(\Zbar,\SS_\GG)
=
\sum_{z\in Z} \deg(z)\cdot\dim(\GG^{I(z)}_\etabar). \end{equation}
\subsection{$L$-functions of $\rho$}\label{subsec:l-functions-of-rho}
The decomposition group $D(v)$ stabilizes the subspace $\Wl=\Wll$, and $I(v)$ acts trivially on it, so there is a representation $
\rholv\colon\Gv\to\GL(\Wl). $ We identify the subspace $V_v\seq V$ and the representation $\rholv$ with a geometric fiber of $\ME{\rho}$ (cf.~\cite[3.1.16]{Milne}). The \defi{Euler factor} of $\rho$ at $v$ is given by $$
\phivl{T} = \det\left(1 - T\rholv(\Frob_v)\mid\Wl\right)\in\Qellbar[T], $$ and its degree equals the dimension of $V_v$.
The \defi{partial} and \defi{complete $L$-functions} of $\rho$ are the formal power series in $\Qellbar[T]$ with respective Euler products \begin{equation}\label{eq:inc-l-fcn}
\LC(T,\rho)
=
\prod_{v\not\in\CC}\phivl{T^{d_v}}^{-1}
\quad\mbox{and}\quad
L(T,\rho)
=
\prod_{v\in\PP}\phivl{T^{d_v}}^{-1}. \end{equation} If $U=\Aonet[1/\Q]$, then they equal the $L$-functions of the sheaves $j_!j^*\ME{\rho}$ and $\ME{\rho}$, and the ratio $$
\MC(T,\rho) = L(T,\rho)/\LC(T,\rho)
=
\prod_{v\in\CC} L(T^{d_v},\rho_v)^{-1} $$ is the $L$-function of the restriction of $\ME{\rho}$ to $Z$ and hence is the reciprocal of a polynomial.
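The simplest instance of the Euler product \eqref{eq:inc-l-fcn}, not the situation of this paper but a useful sanity check, is the trivial rank-one representation: each Euler factor at a finite place is $1-T^{d_v}$, and unique factorization in $\Fq[t]$ gives $\prod_{v\neq\infty}(1-T^{d_v})^{-1}=\sum_{f\in\MM}T^{\deg f}=1/(1-qT)$. The sketch below verifies this coefficient identity to a chosen order.

```python
# Expand prod_{d <= N} (1 - T^d)^{-|A_d|} as a power series up to T^N and
# compare with sum_n q^n T^n = 1/(1 - qT).  |A_d| is computed with the
# Moebius/Gauss formula |A_d| = (1/d) sum_{e|d} mu(e) q^{d/e}.
from math import comb

def mobius(n):
    res, m, p = 1, n, 2
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:
                return 0
            res = -res
        p += 1
    return -res if m > 1 else res

def count_irreducibles(d, q):
    return sum(mobius(e) * q ** (d // e) for e in range(1, d + 1) if d % e == 0) // d

def euler_product(q, N):
    z = [1] + [0] * N
    for d in range(1, N + 1):
        c = count_irreducibles(d, q)
        # (1 - T^d)^{-c} = sum_k binom(c + k - 1, k) T^{d k}
        factor = [0] * (N + 1)
        for k in range(N // d + 1):
            factor[d * k] = comb(c + k - 1, k)
        # multiply the running product by this Euler factor, truncated at T^N
        z = [sum(z[i] * factor[n - i] for i in range(n + 1)) for n in range(N + 1)]
    return z
```

The coefficient of $T^n$ in the output should be $q^n$, the number of monic polynomials of degree $n$.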
The \'etale cohomology groups of these sheaves are finite-dimensional $\Qellbar$-vector spaces, and $\Frob_q$ acts $\Qellbar$-linearly on them. In particular, we have characteristic polynomials $$
\PC{n}(T,\rho) = \det(1-T\,\Frob_q\mid H^n_c(\AonetBar[1/\Q], \ME{\rho})) $$ which are trivial for $n\neq 1,2$ since $\Udee{\Q}$ is an affine curve, and they satisfy \begin{equation}\label{eq:L-fun:frac:partial}
\LC(T,\rho)
=
{\PC{1}(T,\rho)}
/
{\PC{2}(T,\rho)}. \end{equation} Similarly, the characteristic polynomials \begin{equation}\label{eq:def-P_n(T)}
P_n(T,\rho) = \det(1-T\,\Frob_q\mid H^n(\PonetBar,\ME{\rho})) \end{equation} are trivial for $n\neq 0,1,2$ since $\Ponet$ is a complete curve, and they otherwise satisfy \begin{equation}\label{eq:L-fun:frac}
L(T,\rho)
=
\frac{P_1(T,\rho)}{P_0(T,\rho)P_2(T,\rho)}. \end{equation} Moreover, the degrees are related to the respective Euler characteristics via the identities $$
\deg(\LC(T,\rho))=-\chi_c(\Ubar,\ME{\rho})
\mbox{\ \ and\ \ }
\deg(L(T,\rho))=-\chi(\PonetBar,\ME{\rho}). $$
\subsection{Numerical invariants of $\rho$}\label{subsec:numerical-invariants-of-rho}
Let $
\rank_v(\rho) = \deg(L(T,\rho_v)) $ and $
\dr_v(\rho)=\dim(\Vl)-\rank_v(\rho), $ and let $\swan_v(\rho)$ be the Swan conductor of $\Vl$ as a $\Qellbar[I(v)]$-module (see \cite[1.6]{Katz:GKM}). We call these and $$
\dropCee{\rho}
=
\sum_{v\in\CC} d_v\cdot\dr_v(\rho) $$ the \defi{local invariants} of $\rho$, and we call $$
\rank(\rho)
=
\dim(\Vl),
\quad
\dr(\rho)
=
\sum_{v\in\PP} d_v\cdot\dr_v(\rho),
\quad
\swan(\rho) = \sum_{v\in\PP} d_v\cdot\swan_v(\rho) $$ the \defi{global invariants}. The latter remain unchanged if we replace $\Fq$ by a finite extension.
\begin{prop}\label{prop:chi-Ponet} $$
\chi(\PonetBar,\ME{\rho})
=
2\cdot\rank(\rho)
-
(
\dr(\rho)
+
\swan(\rho)
) $$ \end{prop}
\begin{proof} Up to shrinking $U$, we may suppose $\ME{\rho}$ is lisse on $U$. On one hand, the Euler--Poincar\'e formula, as proved by Raynaud \cite[Th.~1]{Raynaud}, asserts \begin{align*}
\chi_c(\Ubar,\ME{\rho})
& =
\rank(\rho)
\cdot
(2-\deg(Z))
-
\swan(\rho). \end{align*} On the other hand, a short calculation shows $$
\chi(\Zbar,\ME{\rho})
=
\deg(Z)\cdot\rank(\rho)
-
\dr(\rho) $$ since $\ME{\rho}$ is also a middle extension, and thus $$
\chi(\PonetBar,\ME{\rho})
=
\chi_c(\Ubar,\ME{\rho})
+
\chi(\Zbar,\ME{\rho})
=
2\cdot\rank(\rho) - \dr(\rho) - \swan(\rho) $$ as claimed. \end{proof}
\begin{cor}\label{cor:chi_c} If $\ME{\rho}$ is supported on $\Aonet[1/\Q]$, then $\chi_c(\AonetBar[1/\Q],\ME{\rho})=\chi(\PonetBar,\ME{\rho})$, and \begin{equation}\label{eq:chi-c-ME}
\chi_c(\AonetBar[1/\Q],\ME{\rho})
=
(1-\deg(\Q))\cdot\rank(\rho)
-
(
\dr(\rho)
-
\dropCee{\rho}
+
\swan(\rho)
) \end{equation} in general. \end{cor}
\begin{proof} If $\ME{\rho}$ is supported on $\Aonet[1/\Q]$, then $\dropCee{\rho}=\deg(\CC)\cdot\rank(\rho)$ and $\deg(\CC)=1+\deg(\Q)$, so it suffices to show \eqref{eq:chi-c-ME} holds in general. There is a canonical bijection $Z=\CC$ when $U=\Aonet[1/\Q]$, so the desired identity follows easily from the identities $$
\chi_c(\AonetBar[1/\Q],\ME{\rho})
=
\chi(\PonetBar,\ME{\rho}) - \chi(\Zbar,\ME{\rho}) $$ and $$
\chi(\Zbar,\ME{\rho})
=
\deg(\CC)\cdot\rank(\rho) - \dropCee{\rho} $$ and from the identity in Proposition~\ref{prop:chi-Ponet}. \end{proof}
\subsection{Purity}\label{subsec:purity}
Let $\iota\colon\Qbar\to\bbC$ and $\Qbar\to\Qellbar$ be field embeddings. A non-zero polynomial $\psi\in\Qellbar[T]$ is \defi{$\iota$-pure of $q$-weight $w$} iff every zero $\alpha\in\Qellbar$ is a $q$-Weil number of weight $w$, that is, lies in $\Qbar$ and satisfies $$
|\iota(\alpha)|^2=(1/q)^w. $$ It is \defi{pure of $q$-weight $w$} iff it is $\iota$-pure of $q$-weight $w$ for every $\iota$, and it is \defi{($\iota$-)mixed of $q$-weights $\leq w$} iff it is a product of ($\iota$-)pure polynomials, each of $q$-weight $\leq w$. Our terminology is unconventional in that we incorporate $q$; however, we need to make $q$ explicit since we have not said where $\psi$ comes from.
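As a concrete illustration of the definition (ours; elliptic curves play no role until \S\ref{sec:explicit-abelian-varieties}), the Euler factor of an elliptic curve over $\Fq$ with Frobenius trace $a$ has the shape $\psi(T)=1-aT+qT^2$ with $a^2\leq 4q$; when $a^2<4q$ its zeros form a complex-conjugate pair with product $1/q$, so $\psi$ is pure of $q$-weight $1$. The sketch below checks this numerically.

```python
# Zeros of psi(T) = 1 - a*T + q*T^2, the reciprocal characteristic
# polynomial of Frobenius for an elliptic curve over F_q with trace a.
# For a^2 < 4q the zeros alpha are complex conjugates with
# alpha * conj(alpha) = 1/q, i.e. |iota(alpha)|^2 = (1/q)^1,
# so psi is pure of q-weight 1 in the sense defined above.
import cmath

def euler_factor_zeros(a, q):
    # roots of q*T^2 - a*T + 1 via the quadratic formula
    disc = cmath.sqrt(a * a - 4 * q)
    return (a + disc) / (2 * q), (a - disc) / (2 * q)
```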
\begin{lemma}\label{lem:trace-bound}
If $M$ is an invertible $d\times d$ matrix with coefficients in $\Qellbar$ and if $\det(1-M\,T)$ is mixed of $q$-weights $\leq w$, then $\Tr(M)\in\Qbar$ and $|\iota(\Tr(M))|^2\leq d^2q^{w}$ for every field embedding $\iota\colon\Qbar\to\bbC$. \end{lemma}
\begin{proof} If $M$ is invertible and $\psi(T)=\det(1-M\,T)$ is mixed of $q$-weights $\leq w$, then the eigenvalues of $M$ are elements $\beta_1,\ldots,\beta_d\in\Qbar^\times$ satisfying $|\iota(\beta_i)|^2\leq q^w$, and $$
\psi(T)
= \prod_{i=1}^d (1 - \beta_i T)
= 1 - \Tr(M)\cdot T + \cdots + (-1)^d\cdot\det(M)\cdot T^d, $$ so $\Tr(M)=\beta_1+\cdots+\beta_d$ lies in $\Qbar$. Therefore, if $\iota\colon\Qbar\to\bbC$ is a field embedding, then the triangle inequality yields $$
|\iota(\Tr(M))|
\leq
\sum_{i=1}^d |\iota(\beta_i)|
\leq dq^{w/2}, $$ and squaring gives the claimed bound. \end{proof}
The representation $\rho$ is \defi{punctually ($\iota$-)pure of weight $w$} iff $\phivl{T}$ is ($\iota$-)pure of $q^{d_v}$-weight $w$ for all $v\in\PP\ssm S$ or, equivalently, iff $\phivl{T^{d_v}}$ is ($\iota$-)pure of $q$-weight $w$ for all $v\not\in S$. The modifier `punctually' should remind the reader that the definition is local.
\begin{theorem}\label{thm:deligne} If $\rho$ is punctually $\iota$-pure of weight $w$, then the cohomological factors $P_{n,\CC}(T,\rho)$ are $\iota$-mixed of $q$-weights $\leq w+n$ and the factors $P_n(T,\rho)$ are $\iota$-pure of $q$-weight $w+n$. \end{theorem}
\begin{proof} See Theorems 1 and 2 of \cite{Deligne:WeilII} for the respective assertions about $P_{n,\CC}(T,\rho)$ and $P_n(T,\rho)$. \end{proof}
Let $\FF$ be a middle-extension sheaf on $\Ponet$. We say that $\FF$ is \defi{punctually ($\iota$-)pure of weight $w$} iff for some dense Zariski open subset $U\seq\Ponet$ on which $\FF$ is lisse, the corresponding representation of $\piOne{U}$ is punctually ($\iota$-)pure of weight $w$.
\begin{lemma}\label{lem:weight-over-Z} Let $j\colon U\to\Ponet$ be the inclusion of a dense Zariski open subset and $Z=\Ponet\ssm U$. If $\FF$ is lisse on $U$ and punctually $\iota$-pure of weight $w$, then $\det(1-T\Frob_q\mid H^0(\Zbar,j_*\FF))$ is $\iota$-mixed of $q$-weights $\leq w$. \end{lemma}
\begin{proof} See \cite[1.8.1]{Deligne:WeilII}. \end{proof}
\subsection{Semisimplicity and irreducibility}\label{subsec:semisimplicity-and-irreducibility}
Consider an exact sequence of $\GKS$-modules \begin{equation}\label{eq:V-es}
0
\longto \Vl_1
\longto \Vl
\longto \Vl_2
\longto 0, \end{equation} and let $\rho_i\colon\GKS\to\GL(\Vl_i)$ be the corresponding structure homomorphism for $i\in\{1,2\}$. A priori, \eqref{eq:V-es} need not split, but we say $\rho$ is \defi{arithmetically semisimple} iff the sequence splits for \emph{every} $\GKS$-invariant subspace $\Vl_1\seq \Vl$. By Clifford's theorem, the condition implies that $\rho$ is \defi{geometrically semisimple} since $\GKSbar$ is normal in $\GKS$ (cf.~\cite[49.2]{CurtisReiner}), that is, every $\GKSbar$-invariant subspace of $\Vl$ has a $\GKSbar$-invariant complement, but the converse need not be true.
We say that $\rho$ is \defi{geometrically simple} iff $\rho$ is irreducible and geometrically semisimple. It is equivalent to assuming $\ME{\rho}$ is \defi{geometrically irreducible}, that is, there are no non-zero proper subsheaves over $\Fqbar$.
\begin{prop}\label{prop:pure-impliessemisimple} If $\rho$ is punctually $\iota$-pure, then it is geometrically semisimple, and in particular, the subspace of $\Vl$ of $\GKSbar$-invariants is trivial if and only if the quotient space of $\Vl$ of $\GKSbar$-coinvariants is trivial. \end{prop}
\begin{proof} One can rephrase semisimplicity for $\rho$ in terms of semisimplicity for $\ME{\rho}$ (cf.~\cite[5.1.7]{BBD}). It follows that both are geometrically semisimple if $\rho$ is punctually $\iota$-pure (see \cite[5.3.8]{BBD}). In particular, $\rho$ has trivial $\GKSbar$-invariants if and only if it has trivial $\GKSbar$-coinvariants, hence $H^0(\PonetBar,\ME{\rho})$ vanishes if and only if $H^2(\PonetBar,\ME{\rho})$ does. \end{proof}
\begin{cor}\label{cor:poly-L-function} If $\rho$ is punctually $\iota$-pure, then the following are equivalent: \begin{enum} \item\label{enum:proper-rational} $L(T,\rho)$ is in $\Qbar(T)$ but not $\Qbar[T]$; \item\label{enum:both-spaces} $V^{\GKSbar}$ and $V_{\GKSbar}$ are non-zero; \item\label{enum:both-polys} $P_0(T,\rho)$ and $P_2(T,\rho)$ are non-trivial polynomials in $\Qbar[T]$; \item\label{enum:one-space} $V_{\GKSbar}$ is non-zero; \item\label{enum:one-poly} $P_2(T,\rho)$ is a non-trivial polynomial in $\Qbar[T]$. \end{enum} \end{cor}
\begin{proof} On one hand, Theorem~\ref{thm:deligne} implies that the cohomological factors $P_n(T,\rho)$ are relatively prime, so \eqref{enum:proper-rational} and \eqref{enum:both-polys} are equivalent. Moreover, \eqref{enum:both-spaces} and \eqref{enum:both-polys} (resp.~\eqref{enum:one-space} and \eqref{enum:one-poly}) are equivalent by \eqref{eqn:invariants-and-coinvariants} and \eqref{eq:def-P_n(T)}. On the other hand, Proposition~\ref{prop:pure-impliessemisimple} implies that $P_0(T,\rho)$ is trivial if and only if $P_2(T,\rho)$ is trivial, so \eqref{enum:both-polys} and \eqref{enum:one-poly} are equivalent. \end{proof}
\begin{cor}\label{cor:pure-trivial-invs} If $\rho$ is punctually $\iota$-pure and has trivial geometric invariants, then $H^i(\PonetBar,\ME{\rho})$ and $H^i_c(\Ubar,\ME{\rho})$ vanish for $i\neq 1$, and there is an exact sequence \begin{equation}\label{eq:ME(rho)-exact-sequence}
0
\longto H^0(\Zbar,\ME{\rho})
\longto H^1_c(\Ubar,\ME{\rho})
\longto H^1(\PonetBar,\ME{\rho})
\longto 0. \end{equation} Therefore $L(T,\rho)=P_1(T,\rho)$ and $\LC(T,\rho)=P_{1,\CC}(T,\rho)$. \end{cor}
\begin{proof} Suppose $\rho$ is punctually $\iota$-pure and has trivial geometric invariants so that Proposition~\ref{prop:pure-impliessemisimple} implies $\rho$ has trivial geometric coinvariants. We claim $H^i(\PonetBar,\ME{\rho})$ vanishes for $i\neq 1$. The Corollary then follows by observing that \eqref{eq:ff-ex-seq-head} simplifies to \eqref{eq:ME(rho)-exact-sequence} and that $H^2_c(\Ubar,\ME{\rho})$ vanishes by \eqref{eq:ff-ex-seq-piece}.
The claim is independent of $U$, so up to shrinking $U$, we suppose $j^*\ME{\rho}$ is lisse. Then $$
H^0(\PonetBar,\ME{\rho})
=
H^0(\Ubar,\ME{\rho})
\mbox{ and }
H^2(\PonetBar,\ME{\rho})
=
H^2_c(\Ubar,\ME{\rho}) $$ are the subspace of $\piOneUbar$-invariants and (a Tate twist of the) quotient space of $\piOneUbar$-coinvariants respectively of $\Vl$ by \eqref{eqn:invariants-and-coinvariants}. The claim is also independent of $\SS$, so up to replacing $\SS$ by a finite superset in $\PP$, we suppose $\rho$ factors through a natural quotient $\GKSbar\onto\piOneUbar$. Then the cohomology spaces in question are the $\GKSbar$-invariants and $\GKSbar$-coinvariants of $\Vl$, which are trivial by hypothesis, so $H^i(\PonetBar,\ME{\rho})$ vanishes for $i\neq 1$ as claimed. \end{proof}
\begin{cor}\label{cor:good-vs} Suppose $\rho$ is punctually $\iota$-pure of weight $w$. Then the following are equivalent: \begin{enum} \item\label{li:a:3} $\MC(T,\rho)=1$, that is, $\ME{\rho}$ is supported on $\Aonet[1/\Q]$; \item\label{li:a:4} $\LC(T,\rho)$ is a polynomial which is $\iota$-pure of $q$-weight $w+1$. \end{enum} \end{cor}
\noindent Note that $\MC(T,\rho)$ is the $L$-function of the restriction of $\ME{\rho}$ to $Z$, so $\MC(T,\rho)$ is trivial if and only if that restriction vanishes.
\begin{proof} If \eqref{li:a:3} holds, then the subspace of $I(\infty)$-invariants of $\Vl$ is trivial, so a fortiori, the subspace of $\GKSbar$-invariants is trivial. Therefore Corollary~\ref{cor:pure-trivial-invs} implies $\LC(T,\rho)$ equals $L(T,\rho)=P_1(T,\rho)$ and hence Theorem~\ref{thm:deligne} implies \eqref{li:a:4} holds.
If \eqref{li:a:4} holds, then $P_{2,\CC}(T,\rho)$ divides $P_{1,\CC}(T,\rho)$ by \eqref{eq:L-fun:frac:partial}. Theorem~\ref{thm:deligne} implies $P_{2,\CC}(T,\rho)=P_2(T,\rho)$ is $\iota$-pure of $q$-weight $w+2$ while $P_{1,\CC}(T,\rho)$ is $\iota$-mixed of $q$-weights $\leq w+1$, so the two are coprime and hence $P_{2,\CC}(T,\rho)$ is trivial. Therefore $H^2(\PonetBar,\ME{\rho})$ vanishes, and hence $H^0(\PonetBar,\ME{\rho})$ also vanishes since $\rho$ is geometrically semisimple. That is, $\rho$ has trivial geometric invariants, and Corollary~\ref{cor:pure-trivial-invs} implies $L(T,\rho)=P_1(T,\rho)$, which is $\iota$-pure of $q$-weight $w+1$. Moreover, $\LC(T,\rho)/L(T,\rho)=1/\MC(T,\rho)$ is a polynomial which is $\iota$-mixed of $q$-weights $\leq w$ by Lemma~\ref{lem:weight-over-Z}, so it shares no zero with $\LC(T,\rho)$ and must be constant, that is, $\MC(T,\rho)=1$ and \eqref{li:a:3} holds. \end{proof}
\subsection{Main Theorem}\label{sec:proof-of-archimedean-bound}
The following theorem is the main result of Section~\ref{sec:l-functions}. The essential ingredient it uses is Deligne's Riemann hypothesis.
\newcommand\thmA{ Suppose $\rho$ is punctually $\iota$-pure of weight $w$. Then $\deg(\LC(T,\rho))=\rC(\rho)$ and $\deg(L(T,\rho))=\degL(\rho)$. Moreover, $\rho$ has trivial geometric invariants if and only if $L(T,\rho)$ is a polynomial if and only if the cohomological factor $P_{2,\CC}(T,\rho)$ is trivial. If these conditions hold, then $\LC(T,\rho)$ is the polynomial $P_{1,\CC}(T,\rho)$ and is $\iota$-mixed of $q$-weights $\leq w+1$, and $L(T,\rho)$ is its largest $\iota$-pure factor of $q$-weight $w+1$. }
\begin{theorem}\label{thm:archimedean-bound} \thmA \end{theorem}
\begin{proof} Suppose $\rho$ is punctually $\iota$-pure. Corollary~\ref{cor:poly-L-function} implies that it has trivial geometric invariants if and only if $L(T,\rho)$ is a polynomial if and only if $P_{2,\CC}(T,\rho)$ is trivial, so suppose these equivalent conditions hold. On one hand, Corollary~\ref{cor:pure-trivial-invs} implies $L(T,\rho)=P_1(T,\rho)$ and $\LC(T,\rho)=P_{1,\CC}(T,\rho)$, so both are polynomials as claimed. Moreover, Proposition~\ref{prop:chi-Ponet} implies $$
\deg(L(T,\rho))
=
\dr(\rho)
+
\swan(\rho)
-
2\cdot\dim(\Vl)
=
\degL(\rho) $$ and Corollary~\ref{cor:chi_c} implies
$$
\deg(\LC(T,\rho))
=
-\chi_c(\Aonet[1/\Q],\ME{\rho})
=
\rC(\rho) $$ as claimed. On the other hand, Theorem~\ref{thm:deligne} implies $L(T,\rho)$ is $\iota$-pure of $q$-weight $w+1$ and $\LC(T,\rho)$ is $\iota$-mixed of $q$-weights $\leq w+1$ since $\rho$ is punctually $\iota$-pure of weight $w$. Moreover, Lemma~\ref{lem:weight-over-Z} implies that $\LC(T,\rho)/L(T,\rho)=1/\MC(T,\rho)$ is a polynomial which is $\iota$-mixed of $q$-weights $\leq w$, so $L(T,\rho)$ is the largest $\iota$-pure factor of $\LC(T,\rho)$ of $q$-weight $w+1$ as claimed. \end{proof}
\section{Twisted $L$-functions}\label{sec:twisted-l-functions}
Recall we have a finite-dimensional $\El$-vector space $V$ and a (continuous) representation $$
\rho\colon \GKS\to\GL(V). $$ We fix a field embedding $\iota\colon\Qbar\to\bbC$ and suppose $\rho$ is punctually $\iota$-pure of weight $w$ so that we can apply the results of the previous section.
Let $\s,\Q\in\Fq[t]$ be monic and square free, and suppose $\SS\sub\PP$ is the finite subset consisting of $\infty$ and $v(\pi)$ for every prime factor $\pi$ of $\s$. Let $\CC\sub\PP$ be defined similarly and \defi{$\RR=\SS\cup\CC$}.
Let \defi{$\BQ$} be the finite group $\BQFq$ and \defi{$\PhiQ$} be the dual group of all Dirichlet characters $$
\dc\colon
\BQ\to\Qlbartimes $$ of conductor dividing $\Q$. For each $\dc$, we define a twisted representation $$
\rhochi\colon \GKR\to\GL(V_\dc) $$ where $V_\dc=V$ as $\El$-vector spaces (see \S\ref{subsec:tensor-products}). We show that $\rhochi$ is also punctually $\iota$-pure of weight $w$ and that the corresponding $L$-functions $$
L(T,\rhochi)
=
\prod_{v\in\PP} L(T^{d_v},(\rhochi)_v)^{-1},
\quad
\LC(T,\rhochi)
=
\prod_{v\not\in\CC} L(T^{d_v},(\rhochi)_v)^{-1} $$ are $\iota$-mixed (see \S\ref{subsec:tensor-products} and \S\ref{subsec:induced-representations}).
\newcommand\thmB{ Suppose $\rho$ is punctually $\iota$-pure of weight $w$ and $\dc\in\PhiQ$. Then $$
\deg(\LC(T,\rhochi))
=
\rC(\rho)
=
\deg(L(T,\rho)) + (\deg(\Q)+1)\dim(V) - \dropCee{\rho}. $$ Moreover, $\rhochi$ has trivial geometric invariants if and only if $L(T,\rhochi)$ is a polynomial if and only if $\PC{2}(T,\rhochi)$ is trivial. If these conditions hold, then $\LC(T,\rhochi)$ is the polynomial $\PC{1}(T,\rhochi)$ and is $\iota$-mixed of $q$-weights $\leq w+1$, and $L(T,\rhochi)$ is its largest $\iota$-pure factor of $q$-weight $w+1$. }
\begin{theorem}\label{thmB} \thmB \end{theorem}
\noindent The proof is in \S\ref{sec:proof-of-thmB}.
\subsection{Dirichlet characters}
By definition, each $\dc\in\PhiQ$ is a homomorphism $\BQ\to\Qlbartimes$. There is also a quotient $G_{K,\CC}\onto\BQ$ from abelian class field theory, and we write $$
\dcc\colon
G_{K,\CC}\to\GL_1(\El) $$ for the composition of these maps and the canonical isomorphism $\Qlbartimes\to\GL_1(\El)$. The corresponding middle-extension sheaf \defi{$\ME{\dc}$} is a so-called Kummer sheaf. It is tamely ramified over $\CC$ since the hypothesis that $\Q$ is square free implies that $\BQ$ has order prime to $p$, and thus $\dcc$ is trivial on the (pro-$p$) wild inertia subgroup $P(v)$ for every $v\in\CC$.
There is a natural quotient $\GKR\onto\GKC$ since $\CC\seq\RR$, and we write \defi{$\dcr$} for the composition of this quotient and $\dcc$.
\subsection{Tensor products}\label{subsec:tensor-products}
The \defi{tensor product} of $\rho$ and $\dc$ is the representation $$
\rho\otimes\dc \colon \GKR\to\GL(V_\dc) $$
\noindent given by $(\rhochi)(g)=\rho(g)\dcr(g)$ where $V_\dc=V$ as $\El$-vector spaces. The corresponding Euler factors are given by $$
L(T,(\rhochi)_v)
=
\det(1 - T\,(\rhochi)_v(\Frob_v)\mid V^{I(v)}_\dc), $$ and in particular, \begin{equation}\label{eq:twisted-euler}
L(T,(\rhochi)_v)
=
L(\dcc(\Frob_v)T,\rho_v) \end{equation} for $v\not\in\CC$.
\begin{lemma}\label{lem:rhochi-pure}\
\begin{enum} \item\label{lem:item:rhochi-pure--simplicity} If $\rho$ is geometrically simple, then so is $\rhochi$. \item\label{lem:item:rhochi-pure--purity} If $\rho$ is punctually $\iota$-pure of weight $w$, then so is $\rhochi$. \end{enum} \end{lemma}
\begin{proof} If $W_\dc\seq V_\dc$ is a $\bar{G}_{K,\RR}$-invariant subspace, then $W=W_\dc\otimes\bar\dc$ is a $\bar{G}_{K,\RR}$-invariant subspace of $V$. Moreover, if $\rho$ is geometrically simple, then $W$ equals $\ZeroSpace$ or $V$, hence $W_\dc$ equals $\ZeroSpace$ or $V_\dc$. Thus \eqref{lem:item:rhochi-pure--simplicity} holds.
Observe that $\zeta=\dcc(\Frob_v)$ is a root of unity since $\BQ$ has finite order, hence $\zeta\in\Qbar$ and $|\iota(\zeta)|^2=1$. If $v\not\in\CC$ and if $\alpha\in\Qbar$ is a zero of $L(T,(\rhochi)_v)$, then \eqref{eq:twisted-euler} implies that $\alpha/\zeta$ is a zero of $L(T,\rho_v)$. In particular, $|\iota(\alpha)|^2=|\iota(\alpha/\zeta)|^2=(1/q^{d_v})^w$, hence $L(T^{d_v},(\rhochi)_v)$ is $\iota$-pure of $q$-weight $w$ for almost all $v$. Thus \eqref{lem:item:rhochi-pure--purity} holds. \end{proof}
Therefore we can apply Theorem~\ref{thm:archimedean-bound} to $\rhochi$.
\begin{lemma} $
\dr(\rhochi)
-
\dr(\rho)
=
\dropCee{\rhochi}
-
\dropCee{\rho} $ and $
\swan(\rhochi)
=
\swan(\rho). $ \end{lemma}
\begin{proof} If $v\in\PP$, then $\swan_v(\rhochi)=\swan_v(\rho)$ since tensoring with a tamely ramified character (e.g., $\dc$) does not change the local Swan conductor. Moreover, if $v\not\in\CC$, then $V$ and $V_\dc$ are isomorphic as $I(v)$-modules, and thus $L(T,\rho_v)$ and $L(T,(\rhochi)_v)$ have the same degree, that is, $\dr_v(\rhochi)=\dr_v(\rho)$. \end{proof}
\begin{cor}\label{cor:rC-independent-of-chi} $
\rC(\rhochi)
=
\rC(\rho). $ \end{cor}
\begin{proof} Combine the lemma and \eqref{eq:def-rC} to deduce \begin{align*}
\rC(\rhochi)
& =
\dr(\rhochi)
-
\dropCee{\rhochi}
+
\swan(\rhochi)
+
(\deg(\Q)-1)\cdot\dim(V) \\
& =
\dr(\rho)
-
\dropCee{\rho}
+
\swan(\rho)
+
(\deg(\Q)-1)\cdot\dim(V)
\ = \rC(\rho) \end{align*} as claimed. \end{proof}
\subsection{Induced representations}\label{subsec:induced-representations}
Let $L=\Fq(u)$ be the subfield of $K$ corresponding to the finite cover $\Q\colon\Ponet\to\Poneu$, and let $\SS$ be a finite set of places in $L$ including those lying below $\RR$ and those which ramify in $L/K$. Then for each $\dc\in\PhiQ$, we have an induced representation $$
\Ind(\rhochi)\colon G_{L,\SS}\to\GL(\Ind(V_\dc)) $$ where $\Ind(V_\dc)$ is a vector space of dimension $n\cdot\dim(V_\dc)$.
\begin{lemma}\label{lem:Ind-rhochi-pure} If $\rho$ is punctually $\iota$-pure of weight $w$, then so is $\Ind(\rhochi)$. \end{lemma}
\begin{proof} \newcommand\w{{\bar{v}}}
Let $\w$ be a place in $L$ not lying in $\SS$, and let $v|\w$ denote any place in $K$ lying over $\w$. Then $$
L(T^{\deg(\w)},\Ind(\rhochi)_\w)
=
\prod_{v\mid \w}
L(T^{\deg(v)},(\rhochi)_v). $$ In particular, Lemma~\ref{lem:rhochi-pure}.\ref{lem:item:rhochi-pure--purity} implies the factors on the right are $\iota$-pure of $q$-weight $w$, so the left side is also $\iota$-pure of $q$-weight $w$. \end{proof}
\subsection{Proof of Theorem~\ref{thmB}}\label{sec:proof-of-thmB}
If $\rho$ is punctually $\iota$-pure of weight $w$, then so is $\rhochi$ by Lemma~\ref{lem:rhochi-pure}.\ref{lem:item:rhochi-pure--purity}, so we can apply Theorem~\ref{thm:archimedean-bound} to $\rhochi$. In particular, when $\rhochi$ has trivial geometric invariants, we deduce that $\LC(T,\rhochi)$ and $L(T,\rhochi)$ are polynomials of respective degrees $\rC(\rhochi)$ and $\degL(\rhochi)$, that $\LC(T,\rhochi)$ is $\iota$-mixed of $q$-weights $\leq w+1$, and that $L(T,\rhochi)$ is the largest factor of $\LC(T,\rhochi)$ which is $\iota$-pure of $q$-weight $w+1$. Finally, we observe that \begin{eqnarray*}
\rC(\rhochi)
& \overset{\mathrm{Cor.~}\ref{cor:rC-independent-of-chi}}{=} &
\rC(\rho) \\
& \overset{\eqref{eq:def-rC}}{=} &
\dr(\rho)
-
\dropCee{\rho}
+
\swan(\rho)
+
(\deg(\Q)-1)\cdot\dim(\Vl) \\
& \overset{\mathrm{Th.~}\ref{thm:archimedean-bound}}{=} &
\deg(L(T,\rho))
+
(\deg(\Q)+1)\cdot\dim(\Vl)
-
\dropCee{\rho} \end{eqnarray*} as claimed.
\section{Statement of Equidistribution}\label{sec:equidistribution}
Recall we have an $\El$-vector space $V$ of finite dimension \defi{$\r$} and a (continuous) representation $$
\rho\colon \GKS\to\GL(V) $$ which is punctually $\iota$-pure of weight $w$. We also have monic square free $\s,\Q\in\Fq[t]$ and corresponding finite subsets $\SS,\CC\sub\PP$ of supporting places.
In this section, we consider the partial $L$-functions $\LC(T,\rhochi)$ as $\chi$ varies over $\PhiQ$ and regard them as a proxy for coefficients in a Mellin transform of $\rho$. One can easily show that there are hardly any characters $\dc\in\PhiQ$ for which $\rhochi$ has \emph{non-}trivial geometric invariants; for the remaining characters, Theorem~\ref{thmB} implies $\LC(T,\rhochi)$ is a polynomial in $\bbQbar[T]$ of degree $\R=\rC(\rho)$. Moreover, the subset \begin{equation}\label{eq:phi-good}
\PhiQGood\rho
=
\left\{\,
\dc\in\PhiQ
:
\LC(T,\rhochi)=L(T,\rhochi)\in\bbQbar[T]
\,\right\} \end{equation}
\noindent is `big' (see Corollary~\ref{cor:bad-bound-for-PhiQ}) and consists of all $\dc$ for which \begin{equation}\label{eqn:LCStar}
\LCStar(T,\rhochi)
=
\LC(T/(\sqrt{q})^{1+w},\rhochi) \end{equation} is $\iota$-pure of $q$-weight zero by Theorem~\ref{thmB}. In particular, for each $\dc\in\PhiQGood\rho$, $\LCStar(T,\rhochi)$ is the characteristic polynomial of a unitary element of $\GL_\R(\bbC)$, so there is a unique conjugacy class $\trhochi$ of $U_\R(\bbC)\seq\GL_\R(\bbC)$ whose elements have the same characteristic polynomial. We would like to know whether or not they are equidistributed.
We say the multiset $
\ThetaRhoq
=
\{\,\trhochi:\dc\in\PhiQGood\rho\,\} $ of conjugacy classes becomes \defi{equidistributed in $U_\R(\bbC)$ as $q\to\infty$} iff, for every continuous central function $f\colon U_\R(\bbC)\to\bbC$, one has \begin{equation}\label{eqn:trace-limit}
\lim_{q\to\infty}
\frac{1}{|\PhiQGood\rho|}
\sum_{\dc\in\PhiQGood\rho}f(\trhochi)
=
\int_{U_\R(\bbC)} f(\theta)d\theta \end{equation} where $d\theta$ is the unique Haar probability measure on $U_\R(\bbC)$. Equivalently, by the Peter--Weyl theorem, one has equidistribution if and only if for every irreducible finite-dimensional representation $\Lambda\colon U_\R(\bbC)\to\GL_{\dim(\Lambda)}(\bbC)$ and for $f=\Tr\circ\Lambda$, the identity in \eqref{eqn:trace-limit} holds.
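As a numerical aside (no part of the argument depends on it), one can illustrate \eqref{eqn:trace-limit} for $f=\Tr$ in the standard representation, whose Haar integral is $0$: sampling Haar-distributed unitaries by the standard QR construction, the empirical mean of the traces should be small. The dimension and sample count below are arbitrary choices.

```python
import numpy as np

def haar_unitary(r, rng):
    # QR decomposition of a complex Ginibre matrix, with the usual phase
    # correction on the diagonal of R, gives a Haar-distributed element of U(r).
    z = (rng.standard_normal((r, r)) + 1j * rng.standard_normal((r, r))) / np.sqrt(2)
    q, R = np.linalg.qr(z)
    d = np.diag(R)
    return q * (d / np.abs(d))

rng = np.random.default_rng(0)
r, samples = 3, 20000
traces = [np.trace(haar_unitary(r, rng)) for _ in range(samples)]
mean_trace = np.mean(traces)
# The Haar integral of Tr over U(r) is 0, so the empirical mean should be
# of order 1/sqrt(samples).
print(abs(mean_trace))
```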
In principle, one could try to exhibit equidistribution for all of $\ThetaRhoq$ at once. Instead we follow Katz and (try to) prove simultaneous and uniform equidistribution for certain one-parameter families of characters. More precisely, we partition $\PhiQ$ into cosets $\dc\PhiUNu$ of a subgroup $\PhiUNu$ (defined in \S\ref{sec:one-parameter-families}) and (try to) prove equidistribution for characters in \begin{equation}
\dc\PhiUNuGood{\rho}=\dc\PhiUNu\cap\PhiQGood\rho. \end{equation} Doing so for a single coset is equivalent to showing that an associated monodromy group we denote $\Ggeom{\dc}{\rhol}$ equals $\GL_{\R,\Qellbar}$. See \S\ref{sec:one-parameter-families}, \S\ref{sec:properties-preserved-by-Q_*}, and \S\ref{subsec:tannakian-monodromy-groups}.
The monodromy group is an algebraic subgroup of $\GL_{\R,\Qellbar}$. We say the former is \defi{big} iff it equals the latter, and we write \begin{equation}\label{eqn:big-defn}
\PhiQBig\rho
=
\{\,
\dc\in\PhiQ
:
\Ggeom{\dc}{\rho}\mbox{ is big}
\,\} \end{equation} for the subset of big characters. We say that the \defi{Mellin transform} of $\rho$ has \defi{big monodromy} in $\GL_{\R,\Qellbar}$ iff \begin{equation}\label{eqn:big-monodromy-bis}
|\PhiQBig\rho|
\sim
|\PhiQGood\rho|
\mbox{ as }
q\to\infty, \end{equation} or equivalently (cf.~Corollary~\ref{cor:bad-bound-for-PhiQ}), \begin{equation}\label{eqn:big-monodromy}
|\PhiQBig\rho|
\sim
|\PhiQ|
\mbox{ as }
q\to\infty. \end{equation}
\begin{theorem}\label{thm:big-monodromy-implies-equidistribution} Suppose $\rho$ is punctually $\iota$-pure and $\dc$ is in $\PhiQBig\rho$. Let $\Lambda\colon U_\R(\bbC)\to\GL_{\dim(\Lambda)}(\bbC)$ be a finite-dimensional representation. If $q$ is sufficiently large, then \begin{equation}\label{eqn:sum-to-integral}
\frac{1}{|\dc\PhiUNuGood{\rho}|}
\sum_{\dc'\in\dc\PhiUNuGood{\rho}}
\Tr\,\Lambda(\trhochip)
=
\int_{U_\R(\bbC)}\Tr\,\Lambda(\theta)\,d\theta
+
o(1)
\mbox{ as }
q\to\infty, \end{equation} and the implicit constant depends only on $\r=\dim(V)$ and $\dim(\Lambda)$. In particular, if the Mellin transform of $\rho$ has big monodromy, then $\ThetaRhoq$ is equidistributed in $U_\R(\bbC)$. \end{theorem}
\noindent The proof is in \S\ref{sec:proof-of-thmD}.
\begin{remark} Observe that the $q$-weight $w$ of $\rho$ plays no role in the statement of the theorem. This is because we factored out the weight in the normalization \eqref{eqn:LCStar}. Another way to achieve the same renormalization is to replace $\rho$ by an appropriate Tate twist so that $w=-1$ and $
\LCStar(T,\rhochi)
=
\LC(T,\rhochi). $ \end{remark}
\subsection{Reduction to $\bbG_m$}
Let $\Ponet$ and $\Poneu$ denote the projective $t$-line and $u$-line respectively, and let $\kPu=\Fq(u)$. The function-field embedding $\kPu\to K$ defined by $u\mapsto\Q$ corresponds to a finite morphism $\Q\colon \Ponet\to\Poneu$. The morphism has generic degree $n=\deg(\Q)$ and is generically \etale\ since $\Q$ is square free of degree $n$, and it fits in a commutative diagram $$
\xymatrix{
\Aonet[1/\Q]\ar[r]\ar[d]_\Q & \Ponet\ar[d]^\Q & \div(\Q)\ar[l]\ar[d]^\Q \\
\bbG_m\ar[r] & \Poneu & \{0,\infty\}\ar[l]
} $$ where the outer vertical maps are finite morphisms. There are canonical identifications of $\div(\Q)$ with $\CC$ and $\{0,\infty\}$ with a set $\CCp$ composed of two places of the function field $\Fq(u)$.
For any sheaf $\FF$ on the $t$-line, one can define the direct image sheaf $\Q_*\FF$.
On one hand, if $\FF=\ME{\rho}$, then the geometric generic fiber of $\Q_*\FF$ is the induced representation $$
\Ind(\rho) \colon G_{K'}\to\GL(\Ind(V)) $$ where $\Ind(V)$ is a vector space of dimension $n\cdot\dim(V)$ (cf.~\cite[II.3.1.e]{Milne}). Moreover, if $\ubar$ is a geometric closed point of $\Poneu$, that is, a closed point of $\Poneu\times_{\Fq}\Fqbar$, and if $\Q^{-1}(\ubar)=\{\tbar_1,\ldots,\tbar_m\}\sub\Ponet\times_{\Fq}\Fqbar$, then the various geometric fibers satisfy \begin{equation}\label{eq:direct-image-fiber}
(\Q_*\FF)_{\ubar}
=
H^0(\ubar,\Q_*\FF)
=
\bigoplus_{i=1}^m H^0(\tbar_i,\FF)
=
\bigoplus_{i=1}^m \FF_{\tbar_i} \end{equation} as $\Qellbar$-vector spaces (cf.~\cite[II.3.5.c]{Milne}). In particular, if $\FF$ is supported on $\Aonet[1/\Q]$, then $\Q_*\FF$ is supported on $\bbG_m$.
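The fiber count $m$ in \eqref{eq:direct-image-fiber} can be visualized over $\bbC$ instead of $\Fqbar$: the toy script below, which is purely illustrative and uses the hypothetical choice $\Q(t)=t^2-t$ (monic and square free), computes the fiber $\Q^{-1}(\ubar)$ as the set of roots of $\Q(t)-\ubar$. It has $n=2$ points away from the branch locus and collapses at a branch point.

```python
import numpy as np

# Q(t) = t^2 - t, monic and square free, gives a degree-2 cover of the u-line.
def fiber(u):
    # the fiber Q^{-1}(u) is the set of roots of t^2 - t - u
    roots = np.roots([1, -1, -u])
    return np.unique(np.round(roots, 8))

print(len(fiber(6.0)))    # generic fiber: 2 points (t = 3 and t = -2)
print(len(fiber(-0.25)))  # branch point u = -1/4: the fiber collapses to t = 1/2
```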
On the other hand, the functorial properties of $\Q_*$ yield canonical isomorphisms \begin{equation}\label{eq:cohomology-comparison}
H^n(\PonetBar,\FF)
=
H^n(\bar\Poneu,\Q_*\FF)
\mbox{\ \ and\ \ }
H^n_c(\AonetBar[1/\Q],\FF)
=
H^n_c(\bar\bbG_m,\Q_*\FF) \end{equation} for each $n$. For example, $\Q_*$ is exact since $\Q$ is a finite map, so the first identity in \eqref{eq:cohomology-comparison} is a consequence of the (trivial) Leray spectral sequence (cf.~\cite[II.3.6 and III.1.18]{Milne}). In particular, the identities \eqref{eq:L-fun:frac:partial}, \eqref{eq:L-fun:frac}, and \eqref{eq:cohomology-comparison} jointly imply that \begin{equation}
L(T,\ME{\rhochi})
=
L(T,\Q_*\ME{\rhochi})
\mbox{\ and\ }
\LC(T,\ME{\rhochi})
=
L_{\CCp}(T,\Q_*\ME{\rhochi}) \end{equation} for $\dc\in\PhiQ$.
\subsection{One-parameter families}\label{sec:one-parameter-families}
Recall $\Q\in\Fq[t]\sub\kPt$ is monic and square free and $\kPu\to\kPt$ is the function-field embedding which sends $u$ to $\Q$. The norm map $\kPt\to\kPu$ is multiplicative and sends $t$ to $(-1)^{n}u$ for $n=\deg(\Q)$. It also induces homomorphisms $$
\nu
\colon
\BQ
\to
\Bu
\mbox{\ \ and\ \ }
\nu^*
\colon
\PhiU
\to
\PhiQ $$ where $\Bu=(\Fq[u]/u\Fq[u])^\times$ and $\PhiU$ is its dual. In particular, $\nu$ is surjective, so its dual $\nu^*$ is injective, and we can identify $\PhiU$ with its image $\PhiUNu$. Moreover, as the following lemma shows, twisting by elements of the coset $\dc\PhiUNu$ is the `same' as twisting by elements of $\PhiU$.
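For concreteness (this example is ours, not drawn from the text), take $q=3$ and $\Q=t^2+1$, which is irreducible over $\bbF_3$; then $\BQ=(\Fq[t]/\Q\,\Fq[t])^\times$ has order $q^2-1=8$, and $\PhiQ$ has the same order by duality. A brute-force check, representing $a+bt$ by the pair $(a,b)$ and using $t^2=-1$:

```python
p = 3

def mul(x, y):
    # multiply a + b*t and c + d*t in F_p[t]/(t^2 + 1), where t^2 = -1
    (a, b), (c, d) = x, y
    return ((a * c - b * d) % p, (a * d + b * c) % p)

elems = [(a, b) for a in range(p) for b in range(p)]
# the units are the residues with a multiplicative inverse mod Q
units = [x for x in elems if any(mul(x, y) == (1, 0) for y in elems)]
# sanity check: the unit group is closed under multiplication
closed = all(mul(x, y) in units for x in units for y in units)
print(len(units), closed)
```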
\begin{lemma}\label{lem:direct-image-of-middle-extension} Let $\dc\in\PhiQ$ and $\alpha\in\PhiU$.
\begin{enum} \item\label{lem:ind-is-me} $\Q_*\ME{\rhochi}$ is isomorphic to $\ME{\Ind(\rhochi)}$. \item\label{lem:ind-projection} $\Q_*\ME{\rhochi\alpha^\nu}$ is isomorphic to $\ME{\Ind(\rhochi)\otimes\alpha}$.
\end{enum} \end{lemma}
\begin{proof} By \cite[3.3.1]{Katz:TLFM}, $\Q_*\ME{\rhochi}$ is a middle extension, and since it is generically equal to the middle extension sheaf $\ME{\Ind(\rhochi)}$, Proposition~\ref{prop:assoc-me} implies part \eqref{lem:ind-is-me} holds.
Up to replacing $\rho$ by $\rhochi$, we suppose without loss of generality that $\dc=\chinot$. Let $T\seq\Ponet$ be a dense Zariski open subset and $U=\Q(T)$. Suppose that $U\seq\Gm$ so that $\Q^*\ME{\alpha}$ is lisse on $T$, that the restriction $\Q\colon T\to U$ is \etale, and that $\ME{\rho}$ is lisse on $T$. Let $i\colon T\to\Ponet$ and $j\colon U\to\Poneu$ be the inclusions. We have $$
\ME{\rho\otimes\alpha^\nu}
\simeq
i_*i^*(\ME{\rho\otimes\alpha^\nu})
\simeq
i_*i^*(\ME{\rho}\otimes\ME{\alpha^\nu})
\simeq
i_*i^*(\ME{\rho}\otimes\Q^*\ME{\alpha}) $$ since each of the sheaves is a middle extension and lisse on $T$. Therefore the projection formula implies $$
\Q_*\ME{\rho\otimes\alpha^\nu}
\simeq
\Q_*(i_*i^*(\ME{\rho}\otimes\Q^*\ME{\alpha}))
\simeq
j_*j^*(\Q_*\ME{\rho}\otimes\ME{\alpha}) $$ since each of the sheaves is lisse on $U$ and a middle extension on $\Poneu$ (by part \eqref{lem:ind-is-me}) and since $\Q\colon T\to U$ is \etale. Finally, \begin{align*}
j_*j^*(\Q_*\ME{\rho}\otimes\ME{\alpha})
\simeq
j_*j^*(\ME{\Ind(\rho)}\otimes\ME{\alpha})
\simeq
\ME{\Ind(\rho)\otimes\alpha} \end{align*} and thus part \eqref{lem:ind-projection} holds. \end{proof}
\subsection{Properties preserved by $\Q_*$}\label{sec:properties-preserved-by-Q_*}
We say a character $\dc\in\PhiQ$ is \defi{good for $\rho$} or simply \defi{good} iff it lies in the subset $\PhiQGood\rho$ defined in \eqref{eq:phi-good}. When $\Q=t$ and thus $\Aonet[1/\Q]=\bbG_m$, then Lemma~\ref{lem:direct-image-of-middle-extension} and the following lemma together show that our notion of good coincides with that of Katz (cf.~\cite[Chapter 3]{Katz:CE}):
\begin{lemma} If $\dc\in\PhiQ$ and $\alpha\in\PhiU$, then the following are equivalent:
\begin{enum} \item\label{li:good-1} $\dc\alpha^\nu$ is good for $\rho$; \item\label{li:good-2} $\ME{\rhochi\alpha^\nu}$ is supported on $\Aonet[1/\Q]$; \item\label{li:good-3} $\ME{\Ind(\rhochi)\otimes\alpha}$ is supported on $\bbG_m$; \item\label{li:good-4} $\alpha\in\PhiU$ is good (\`a la Katz) for $\Q_*\ME{\rhochi}$. \end{enum} \end{lemma}
\begin{proof} Corollary~\ref{cor:good-vs} implies conditions \eqref{li:good-1} and \eqref{li:good-2} are equivalent. Conditions \eqref{li:good-2} and \eqref{li:good-3} are equivalent by the identity in \eqref{eq:direct-image-fiber} for $\ubar\in\CCp$. Finally, taking $\Q=t$ and applying the equivalence of \eqref{li:good-1} and \eqref{li:good-2} yields the equivalence of \eqref{li:good-3} and \eqref{li:good-4}. \end{proof}
Let $\PhiQBad\rho$ be the complement $\PhiQ\ssm\PhiQGood\rho$ and $\dc\PhiUNuBad{\rho}=\PhiQBad\rho\cap\dc\PhiUNu$.
\begin{cor}\label{cor:bad-bound}
$|\dc\PhiUNuBad{\rho}|\leq(1+\deg(\Q))\cdot\rank(\rho)$. \end{cor}
\begin{proof} If $\dc\in\PhiQBad\rho$, then $\dc$ coincides with some tame character of $\rho$ at some $v\in\CC$, and there are at most $(1+\deg(\Q))\cdot\rank(\rho)$ such characters. Compare \cite[pp.~12--13]{Katz:CE}. \end{proof}
\begin{cor}\label{cor:bad-bound-for-PhiQ}
$|\PhiQGood\rho|\sim|\PhiQ|$ as $q\to\infty$. \end{cor}
\begin{proof} Observe that Corollary~\ref{cor:bad-bound} implies $$
|\PhiQ| - |\PhiQGood\rho|
=
|\PhiQBad\rho|
=
\sum_{\dc\PhiUNu\sub\PhiQ}
|\dc\PhiUNuBad\rho|
\leq
O(|\PhiQ|/|\PhiUNu|)
=
o(|\PhiQ|) $$ as $q\to\infty$. \end{proof}
\subsection{Tannakian monodromy groups}\label{subsec:tannakian-monodromy-groups}
Suppose $\Q=t$ and thus $\CCp=\CC=\{0,\infty\}$ and $\PhiU=\PhiQ$. Suppose moreover that $\rho$ is geometrically simple and $\dim(V)>1$ so that no geometric subquotient of $\ME{\rho}$ is a Kummer sheaf.
Let $j\colon\bbG_m\to\Poneu$ be the inclusion, let $j_0\colon\bbG_m\to\Aoneu$ be the inclusion map, and for each $\alpha\in\PhiU$, let $$
\omega_\alpha(\ME{\rho})
=
H^1_c(\AoneuBar,j_{0*}j^*\ME{\rho\otimes\alpha}). $$ It is a $G_{\Fq}$-module, that is, $\Frob_q$ acts functorially, and it corresponds to a well-defined conjugacy class of elements $\Frob_{\Fq,\alpha}\sub\GL(\omega(\ME{\rho}))$ where $\omega(\ME{\rho})=\omega_{\mathbf{1}}(\ME{\rho})$ and $\mathbf{1}\in\PhiU$ is the trivial character. Moreover, if $\alpha$ is good, then $$
\omega_\alpha(\ME{\rho})
=
H^1_c(\bar\bbG_m,\ME{\rho\otimes\alpha}), $$ and in particular $$
L_\CC(T,\rho\otimes\alpha)
=
\det(1-\Frob_\alpha T\mid \omega(\ME{\rho})). $$
In a way we will not make precise here, the $\Frob_\alpha$ `generate' $\ell$-adic reductive subgroups $$
\Ggeom{}{\rho}\seq\Garith{}{\rho}\seq\GL_{\R,\Qellbar} $$ which are well-defined up to conjugacy. They are fundamental groups of certain Tannakian categories, and we call them the \defi{Tannakian monodromy groups of $\rho$}. See Appendix~\ref{sec:tannakian-appendix} for details. We say the Mellin transform of $\rhol$ has \defi{big Tannakian monodromy} iff $\Ggeom{}{\rho}=\GL_{\R,\Qellbar}$.
For general $\Q$ and $\dc\in\PhiQ$, we write $$
\Ggeom\dc\rho \seq \Garith\dc\rho \seq \GL_{\R,\Qellbar} $$ for the Tannakian monodromy groups of $\Ind(\rhochi)$, and we say that the Mellin transform of $\rhochi$ has \defi{big Tannakian monodromy} iff $\Ggeom\dc\rho=\GL_{\R,\Qellbar}$. Now the action of $\Frob_q$ on $\omega_\alpha(\ME{\rhochi})$ corresponds to a well-defined conjugacy class $\Frob_{\Fq,\alpha}\sub\Garith{\dc}{\rho}$.
\subsection{Proof of Theorem~\ref{thm:big-monodromy-implies-equidistribution}}\label{sec:proof-of-thmD}
We may suppose without loss of generality that $\Lambda$ is irreducible since it is semisimple and $\Tr(\Lambda_1\oplus\Lambda_2)=\Tr(\Lambda_1)+\Tr(\Lambda_2)$ for any representations $\Lambda_1,\Lambda_2$. Moreover, one can show that $$
\int_{U_\R(\bbC)}\Tr\,\Lambda(\theta)\,d\theta
=
\begin{cases}
1 & \Lambda\mbox{ is the trivial representation} \\
0 & \mbox{otherwise}
\end{cases} $$ so to prove \eqref{eqn:sum-to-integral} we must show that \begin{equation}\label{eq:to-show}
\frac{1}{|\dc\PhiUNuGood{\rho}|}\sum_{\dc'\in\dc\PhiUNuGood{\rho}}
\Tr\,\Lambda(\trhochip)
=
\begin{cases}
1 & \Lambda\mbox{ is the trivial representation} \\
o(1) & \mbox{otherwise}
\end{cases} \end{equation} when $q$ is large.
If $q$ is sufficiently large, then Corollary~\ref{cor:bad-bound} implies that $$
|\dc\PhiUNuBad{\rho}|
\leq
(1+\deg(\Q))\cdot\rank(\rho)
<
|\dc\PhiUNu| $$
and thus $\dc\PhiUNuGood{\rho}$ is non-empty. In particular, the left side of \eqref{eq:to-show} is defined for large $q$, and it is identically $1$ when $\Lambda$ is the trivial representation. On the other hand, if $\Lambda$ is non-trivial and if $q$ is bigger than $(|\dc\PhiUNuBad{\rho}|+1)^2$, then \cite[7.5]{Katz:CE} implies that \begin{equation}\label{eqn:a-uniform-bound}
\frac{1}{|\dc\PhiUNuGood{\rho}|}
\left|
\sum_{\dc'\in\dc\PhiUNuGood{\rho}}
\!\!\!\!\Tr\,\Lambda(\trhochip)
\right|
\leq
(\dim(V) + \dim(\Lambda))
\left(
\frac{1}{\sqrt{q}}
+
\frac{1}{\sqrt{q}^3}
\right). \end{equation} Thus \eqref{eq:to-show} holds, as claimed, and the implicit constant depends only on $\r$ and $\dim(\Lambda)$.
To complete the proof of the theorem we must show that $\ThetaRhoq$ becomes equidistributed in $U_\R(\bbC)$. We observe that \begin{equation}\label{eqn:uniform-bound-for-Tr}
|\Tr\,\Lambda(\trhochip)|\leq\dim(\Lambda)
\mbox{ for }
\dc'\in\dc\PhiUNuGood{\rho}. \end{equation} Therefore \begin{equation*}\label{eqn:key-trace-estimate}
\sum_{\dc\in\PhiQGood\rho}
\!\!\!\!\Tr\,\Lambda(\trhochi)
=
\sum_{\dc\in\PhiQGoodBig\rho}
\!\!\!\!\Tr\,\Lambda(\trhochi)
+
O(\dim(\Lambda))\cdot|\PhiQGood\rho\ssm\PhiQGoodBig\rho| \end{equation*} where $$
\PhiQGoodBig\rho
=
\PhiQGood\rho\cap\PhiQBig\rho. $$
\noindent In particular, if the Mellin transform of $\rho$ has big monodromy, that is, if \eqref{eqn:big-monodromy-bis} holds, then $$
\frac{|\PhiQGood\rho\ssm\PhiQGoodBig\rho|}{|\PhiQGood\rho|} = o(1)
\mbox{ for }
q\to\infty $$ and thus \begin{eqnarray*}
\frac{1}{|\PhiQGood\rho|}
\sum_{\dc\in\PhiQGood\rho}
\!\!\!\!\Tr\,\Lambda(\trhochi)
& \overset{\eqref{eqn:uniform-bound-for-Tr}}= &
\frac{1}{|\PhiQGood\rho|}
\sum_{\dc\in\PhiQGoodBig\rho}
\!\!\!\!\Tr\,\Lambda(\trhochi)
+
o(1)\cdot O(\dim(\Lambda)) \\
& \overset{\eqref{eqn:sum-to-integral}}= &
\int_{U_\R(\bbC)}\Tr\,\Lambda(\theta)\,d\theta + o(1) \end{eqnarray*} as $q\to\infty$. Therefore $\ThetaRhoq$ becomes equidistributed in $U_\R(\bbC)$ as claimed.
\begin{remark} An examination of the above proof shows that one need not let $q\to\infty$ along the powers $q=p^m$ of a fixed prime $p$. Indeed, the key estimates \eqref{eqn:a-uniform-bound} and \eqref{eqn:uniform-bound-for-Tr} remain valid even if one takes $q=p$ and lets $p\to\infty$ in $\bbZ$. This would allow one to prove `horizontal' variants of Theorem~\ref{thm:big-monodromy-implies-equidistribution}. Because stating a correspondingly general result would be cumbersome and we do not need such results, we leave the details to the interested reader. \end{remark}
\section{Sums in Arithmetic Progressions}\label{sec:sums-in-arithmetic-progressions}
In addition to assuming that our representation $$
\rho\colon \GKS\to\GL(V) $$ is punctually $\iota$-pure of weight $w$, we suppose that $\rho$ is geometrically simple but not geometrically isomorphic to an element of $\PhiQ$, and that the Mellin transform of $\rho$ has big monodromy. The first hypothesis ensures that $\rhochi$ has trivial geometric invariants for every $\dc\in\PhiQ$, while the second allows us to apply Theorem~\ref{thm:big-monodromy-implies-equidistribution}.
In this section, which forms the heart of our paper, we shift gears and analyze the distribution of certain traces indexed by residue classes modulo $\Q$. More precisely, for each monic irreducible $\pi\in\MM$, the traces are coefficients of the Euler factor $\phivl{T}$ of $v=v(\pi)$, and we use them to define a function $\VM\colon\MM\to\El$ satisfying \begin{equation*}
T\frac{d}{dT}
\log(L_{\{\infty\}}(T,\rho))
=
\sum_{n=1}^\infty
\left(
\sum_{f\in\MM_n}\VM(f)
\right)
T^n \end{equation*} (see \S\ref{subsec:von-mangoldt}). In particular, for each $n\geq 1$ and $A\in\BQ$, we consider the sum \begin{equation}\label{eqn:SnAQ}
\SnAQ
=
\sum_{\substack{f\in \MM_n\\ f\equiv A\bmod\Q}}
\VM(f), \end{equation} and then we consider the mean and variance of these sums given by \begin{equation}\label{eqn:E-and-V}
\bbE_A[\SnAQ]
=
\frac{1}{\phi(\Q)}\sum_{A\in\BQ}\SnAQ,
\
\Var_A[\SnAQ]
=
\frac{1}{\phi(\Q)}\sum_{A\in\BQ} \left|\SnAQ - \bbE_A[\SnAQ]\right|^2 \end{equation} respectively.
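The finite-Fourier mechanics behind these two quantities can be sanity-checked in a toy model (our illustration, not part of the argument): replace $\BQ$ by the cyclic group $\bbZ/n\bbZ$ with characters $\chi_j(a)=e^{2\pi i ja/n}$, and take arbitrary complex coefficients $b_j$ standing in for $\bn{\rhochi}$. If $S(A)=\frac{1}{n}\sum_j b_j\overline{\chi_j(A)}$, orthogonality forces the mean to be $b_0/n$ and the variance to be $\frac{1}{n^2}\sum_{j\neq 0}|b_j|^2$.

```python
import numpy as np

n = 12
rng = np.random.default_rng(1)
b = rng.standard_normal(n) + 1j * rng.standard_normal(n)  # stand-ins for the b_n

# chi[j, a] = exp(2 pi i j a / n), the characters of Z/nZ
chi = np.exp(2j * np.pi * np.outer(np.arange(n), np.arange(n)) / n)

# S[A] = (1/n) * sum_j b_j * conj(chi_j(A)), the analogue of S_n(A, Q)
S = chi.conj().T @ b / n
mean = S.mean()
var = np.mean(np.abs(S - mean) ** 2)

# Orthogonality predicts E_A[S] = b_0/n and Var_A[S] = (1/n^2) sum_{j!=0} |b_j|^2.
print(np.isclose(mean, b[0] / n), np.isclose(var, np.sum(np.abs(b[1:]) ** 2) / n ** 2))
```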
Our main result has two parts. On one hand, we can precisely evaluate $\bbE_A[\SnAQ]$ in terms of the coefficients $\bn{\rho}$ coming from the identity $$
T\frac{d}{dT}\log\LC(T,\rho)
=
\sum_{n=1}^\infty
\bn{\rho}T^n $$ satisfied by the partial $L$-function (see \S\ref{subsec:random-sums}). We can also give bounds for the archimedean norm of these coefficients (see \S\ref{subsec:key-estimates}). On the other hand, we can evaluate $\Var_A[\SnAQ]$ using trace formulae (see \S\ref{subsec:random-sums}), and its leading order term is the value of a matrix integral on $U_\R(\bbC)$ by our hypotheses on $\rho$ (see \S\ref{subsec:key-estimates} and \S\ref{sec:proof-of-thmE}). The value of this integral exhibits a dichotomy depending on whether or not $n\leq\R=\rC(\rho)$, and in particular, the interval of small $n$ grows with $\r=\dim(V)$ since $\rC$ does.
After giving some preliminary results we calculate the mean and variance in Theorem~\ref{thm:variance-estimate} of \S\ref{sec:proof-of-thmE}. In our proof we use a classification of the elements of $\PhiQ$ in terms of a trichotomy of good, mixed, and heavy characters (see \S\ref{sec:character-trichotomy}). As we explain, this is a refinement of Katz's dichotomy of good and bad characters.
\subsection{Trace formula}
In this section we define local and cohomological traces of $\rho$ and recall how they are related by a trace formula. For details, see \cite[Exp.~2, \S 3]{SGA4.5}.
On one hand, the \defi{local traces} of $\rho$ are given by $$
\arvm{\rho,v}
=
\Tr\left(\rholv(\Frob_v)^m\mid\Wl\right)
\mbox{ for }v\in\PP\mbox{ and }m\geq 1, $$ and they satisfy \begin{equation*}\label{eqn:euler-factor}
T\frac{d}{dT}\log
L(T,\rho_v)^{-1}
=
\sum_{m=1}^\infty \arvm{\rho,v}T^m
\mbox{ for }v\in\PP. \end{equation*} Combining this identity with \eqref{eq:twisted-euler} yields the more general identity \begin{equation}\label{eqn:twisted-euler-factor}
T\frac{d}{dT}\log
L(T,(\rhochi)_v)^{-1}
=
\sum_{m=1}^\infty \dc(\Frob_v)^m\arvm{\rho,v}T^m
\mbox{ for }v\in\PP\ssm\CC. \end{equation}
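The displayed local identity is the familiar power-sum expansion of a reversed characteristic polynomial. The sketch below (illustrative only, with an arbitrary $2\times 2$ integer matrix standing in for the action of $\Frob_v$) checks that the coefficients of $T\frac{d}{dT}\log\det(1-TA)^{-1}$, computed by Newton's recursion from the characteristic polynomial, agree with the traces $\Tr(A^m)$.

```python
import numpy as np

# An arbitrary 2x2 integer matrix standing in for the Frobenius action.
A = np.array([[2, 1], [1, 1]])
c1 = int(np.trace(A))                    # det(1 - T A) = 1 - c1*T + c2*T^2
c2 = round(float(np.linalg.det(A)))

# Newton's recursion for the power sums s_m = Tr(A^m):
# s_0 = 2, s_1 = c1, and s_m = c1*s_{m-1} - c2*s_{m-2}.
s = [2, c1]
for m in range(2, 8):
    s.append(c1 * s[-1] - c2 * s[-2])

# The recursion must reproduce the local traces Tr(A^m) directly.
traces = [int(np.trace(np.linalg.matrix_power(A, m))) for m in range(8)]
print(s == traces)
```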
On the other hand, the \defi{cohomological traces} of $\rho\otimes\dc$ are given by $$
\bn{\rhochi}
=
\sum_{i=1}^2
(-1)^i\cdot
\Tr\left(\Frob_q^n\mid H^i_c(\AonetBar[1/\Q],\FF\otimes\LL_\dc)\right)
\mbox{ for }n\geq 1, $$ and they satisfy \begin{equation}\label{eqn:cohomological-trace}
T\frac{d}{dT}
\log\LC(T,\rhochi)
=
\sum_{n=1}^\infty
\bn{\rhochi}T^n. \end{equation} Similarly, we define the \defi{normalized cohomological traces} of $\rhochi$ by $$
\bnstar{\rho,\dc}
=
\frac{1}{q^{n(1+w)/2}}
\bn{\rhochi}
=
\frac{1}{(\sqrt{q})^{n(1+w)}}
\sum_{i=1}^2
(-1)^i\cdot\Tr\left(\Frob_q^n\mid H^i_c(\AonetBar[1/\Q],\FF\otimes\LL_\dc)\right) $$ so that \eqref{eqn:LCStar} and \eqref{eqn:cohomological-trace} imply $$
T\frac{d}{dT}\log
\LCStar(T,\rhochi)
=
T\frac{d}{dT}\log
\LC(T/(\sqrt{q})^{1+w},\rhochi)
=
\sum_{n=1}^\infty
\bnstar{\rho,\dc}T^n. $$
Combining \eqref{eqn:twisted-euler-factor} and \eqref{eqn:cohomological-trace} with \eqref{eq:L-fun:frac:partial} yields the identity $$
T\frac{d}{dT}
\log\LC(T,\rhochi)
=
\sum_{n=1}^\infty
\left(
\sum_{md=n}
\sum_{v\in\PP_d\ssm\CC}
d\cdot\dc(\Frob_v)^m\arvm{\rho,v}
\right)
T^n $$ and, in particular, we obtain the Grothendieck--Lefschetz trace formula \begin{equation}\label{eqn:trace-formula}
\sum_{md=n}
\sum_{v\in\PP_d\ssm\CC}
d\cdot\dc(\Frob_v)^m\arvm{\rho,v}
=
\bn{\rhochi}. \end{equation}
\subsection{Von Mangoldt function}\label{subsec:von-mangoldt}
We define the \defi{von Mangoldt function} of $\rho$ to be the map $\VM\colon\MM\to\El$ given by \begin{equation}\label{eqn:von-mangoldt}
\VM(f)
=
\begin{cases}
d\cdot \arvm{\rho,v(\pi)} & f=\pi^m\mbox{ and }\pi\in\AA_d \\
0 & \mbox{otherwise}.
\end{cases} \end{equation} We also define the \defi{extension by zero} of $\dc\in\PhiQ$ to be the map $\chiz\colon\MM\to\El$ given by $$
\chiz(f) =
\begin{cases}
\dc(f+\Q\,\Fq[t]) & \mbox{if }
\gcd(f,\Q)=1 \\
0 & \mbox{otherwise}.
\end{cases} $$ It is multiplicative and satisfies \begin{equation*}\label{eq:chis.vs.chiz}
\chiz(\pi)
=
\begin{cases}
\dc(\Frob_{v(\pi)}) & \mbox{if }\pi\nmid\Q \\
0 & \mbox{otherwise}
\end{cases}
\mbox{ for }\pi\in\AA. \end{equation*} These functions allow us to rewrite \eqref{eqn:trace-formula} as \begin{equation*}\label{eqn:trace-formula-bis}
T\frac{d}{dT}
\log(\LC(T,\rhochi))
=
\sum_{n=1}^\infty
\left(
\sum_{f\in\MM_n}\chiz(f)\VM(f)
\right)
T^n \end{equation*} and, in particular, to deduce the identity \begin{equation}\label{lem:twisted-VM-sum}
\sum_{f\in\MM_n}\chiz(f)\VM(f)
=
\bn{\rhochi}
\mbox{ for }n\geq 1. \end{equation} We observe that in the special case $\dc=\one$ this simplifies to \begin{equation}\label{eqn:formula-for-bn}
\bn{\rho}
=
\sum_{\substack{f\in \MM_n\\ \gcd(f,\Q)=1}}
\VM(f). \end{equation}
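\begin{remark} For orientation, we record an illustrative special case (not needed in the sequel): if $\rho$ is the trivial one-dimensional representation, so that $w=0$ and $\arvm{\rho,v}=1$ for every $v$ and $m$, then \eqref{eqn:von-mangoldt} reduces to the classical von Mangoldt function of $\Fq[t]$, $$
\VM(f)
=
\begin{cases}
\deg(\pi) & f=\pi^m\mbox{ and }\pi\in\AA \\
0 & \mbox{otherwise},
\end{cases} $$ and the sum in \eqref{eqn:formula-for-bn}, extended over all of $\MM_n$, is the sum $\sum_{f\in\MM_n}\VM(f)=q^n$ of the prime polynomial theorem. \end{remark}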
\subsection{Random arithmetic-progression sums}\label{subsec:random-sums}
Regard $A$ as a uniformly random element of $\BQ$, and consider the expected value \begin{equation}\label{eqn:expected-value}
\bbE_A[\SnAQ]
=
\frac{1}{\phi(\Q)}\sum_{A\in\BQ}\SnAQ. \end{equation} Observe that, for each $A_1,A_2\in\BQ$, one has \begin{equation*}\label{eq:orth:A}
\frac{1}{\phi(\Q)}\sum_{\dc\in\PhiQ}\chiz(A_1)\chibarz(A_2)
=
\begin{cases}
1 & \mbox{if }A_1=A_2 \\
0 & \mbox{if }A_1\neq A_2,
\end{cases} \end{equation*} and thus \begin{equation*}\label{eqn:sum-of-bs}
\SnAQ
=
\frac{1}{\phi(\Q)}
\sum_{f\in\MM_n}
\VM(f)
\sum_{\dc\in\PhiQ}
\chiz(f)
\chibarz(A)
=
\frac{1}{\phi(\Q)}\sum_{\dc\in\PhiQ} \bn{\rhochi}\cdot\chibarz(A) \end{equation*} by \eqref{lem:twisted-VM-sum}. Therefore, if we write $\chinot\in\PhiQ$ for the trivial character, then the right side of \eqref{eqn:expected-value} equals $$
\frac{1}{\phi(\Q)^2}
\sum_{\dc\in\PhiQ}
\bn{\rhochi}
\sum_{A\in\BQ}
\chibarz(A)
=
\frac{1}{\phi(\Q)}
\bn{\rho\otimes\chinot} $$ since, for every $\chione,\chitwo\in\PhiQ$, one has \begin{equation}\label{eq:orth:chi}
\frac{1}{\phi(\Q)}\sum_{A\in\BQ}\chionez(A)\chitwobarz(A)
=
\begin{cases}
1 & \mbox{if }\chione=\chitwo \\
0 & \mbox{if }\chione\neq\chitwo.
\end{cases} \end{equation} In particular, we have the identity \begin{equation}\label{eq:s-e}
\SnAQ - \bbE_A[\SnAQ]
=
\frac{1}{\phi(\Q)}\sum_{\substack{\dc\in\PhiQ \\ \dc\neq\chinot}} \bn{\rhochi}\cdot\bar\dc(A). \end{equation}
Now consider the variance $$
\Var_A[\SnAQ]
=
\frac{1}{\phi(\Q)}\sum_{A\in\BQ} \left|\SnAQ - \bbE_A[\SnAQ]\right|^2. $$ If we apply identities \eqref{eq:orth:chi} and \eqref{eq:s-e}, then the right side equals $$
\frac{1}{\phi(\Q)^3}
\sum_{A\in\BQ}
\sum_{\substack{\chionez,\chitwoz\in\PhiQ\\\chionez,\chitwoz\neq\chinot}}
\bn{\rho\otimes\chione}\overline{\bn{\rho\otimes\chitwo}}\cdot
\chionebarz(A)\chitwoz(A)
=
\frac{1}{\phi(\Q)^2}
\sum_{\substack{\dc\in\PhiQ\\\dc\neq\chinot}}
|\bn{\rhochi}|^2. $$
In summary, the function $\SnAQ$ of the random variable $A$ satisfies \begin{equation}\label{eqn:E-and-V-formulae}
\bbE_A[\SnAQ]
= \frac{1}{\phi(\Q)}\bn{\rho\otimes\chinot},
\quad
\Var_A[\SnAQ] =
\frac{1}{\phi(\Q)^2}
\sum_{\substack{\dc\in\PhiQ\\\dc\neq\chinot}}
|\bn{\rhochi}|^2. \end{equation} Observe that $\rho\otimes\chinot=\rho$ and thus $\bnstar{\rho\otimes\chinot}=\bnstar{\rho}$.
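\begin{remark} For comparison with the classical setting (an illustration only): when $\rho$ is the trivial one-dimensional representation, \eqref{eqn:formula-for-bn} and the prime polynomial theorem give $$
\bn{\rho}
=
\sum_{\substack{f\in\MM_n\\ \gcd(f,\Q)=1}}
\VM(f)
=
q^n+O(\deg(\Q)), $$ so the first formula in \eqref{eqn:E-and-V-formulae} recovers the familiar main term $q^n/\phi(\Q)$ for the count of prime powers in an arithmetic progression in $\Fq[t]$. \end{remark}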
\subsection{Trichotomy of characters}\label{sec:character-trichotomy}
On one hand, a character $\dc\in\PhiQ$ is \defi{good for $\rho$} (or \defi{$\rho$-good}) if and only if the $L$-functions $\LC(T,\rhochi)$ and $L(T,\rhochi)$ are polynomials and equal in $\Qbar[T]$; see \eqref{eq:phi-good}. In that case Theorem~\ref{thmB} implies they equal $\PC{1}(T,\rhochi)$ and are $\iota$-pure of $q$-weight $w+1$, and then $\LCStar(T,\rhochi)$ is given by $$
\LCStar(T,\rhochi)
=
\det(1-T\,\Frob_q\mid H^1_c(\AonetBar[1/\Q], \FF)) $$ where $$
\FF
:=
\ME{\rhochi}((1+w)/2)
=
\ME{\rhochi}\otimes\El((1+w)/2) $$ is a so-called Tate twist of $\ME{\rhochi}$. Moreover, $\LCStar(T,\rhochi)$ has degree $\R=\rC(\rhochi)=\rC(\rho)$ and is $\iota$-pure of $q$-weight zero. In particular, it is the characteristic polynomial of a unique conjugacy class $\trhochi\sub U_\R(\bbC)$, and thus \begin{equation}\label{eqn:bnstar-to-trace}
\bnstar{\rhochi}
=
-\Tr\left(\Frob_q^n\mid H^1_c(\AonetBar[1/\Q],\FF)\right)
=
-\Tr\,\std(\trhochi^n) \end{equation} where $\std\colon U_\R(\bbC)\to\GL_\R(\bbC)$ is the inclusion $U_\R(\bbC)\seq\GL_\R(\bbC)$.
On the other hand, there are two ways a character can fail to be good for $\rho$: either $L(T,\rhochi)$ is not a polynomial or $L(T,\rhochi)$ and $\LC(T,\rhochi)$ are polynomials but not equal to each other. Only the first of these possibilities is problematic for us because in that case the denominator of $L(T,\rhochi)$ has zeros of excessive weight. More precisely, if the factor $P_2(T,\rhochi)$ of the denominator of $L(T,\rhochi)$ is non-trivial, then it is $\iota$-mixed of $q$-weights $\leq w+1$ but not $\iota$-mixed of $q$-weights $\leq w$ (cf.~Theorem~\ref{thmB}). Hence we say that $\dc$ is \defi{heavy for $\rho$} (or \defi{$\rho$-heavy}) iff it lies in the subset $$
\PhiQAwful\rho
=
\{\,
\dc\in\PhiQ
:
L(T,\rhochi)\not\in\Qbar[T]
\,\}. $$
\noindent The following lemma can be used to classify $\dc$ which are heavy for $\rho$.
\begin{lemma}\label{lem:heavy-criterion} Suppose $\rho$ is geometrically simple and punctually $\iota$-pure and $\dc\in\PhiQ$. Then $\dc\in\PhiQAwful\rho$ if and only if $\rhochi$ is geometrically isomorphic to the trivial representation. \end{lemma}
\begin{proof} The essential point is that since $\rhochi$ is geometrically simple, the quotient space of geometric coinvariants $(V_\dc)_{\GKSbar}$ either vanishes or equals $V_\dc$. The former occurs if and only if $\rhochi$ is geometrically isomorphic to the trivial representation, so the lemma follows from Corollary~\ref{cor:poly-L-function}. \end{proof}
\begin{cor}\label{cor:classify-awful} Suppose $\rho$ is geometrically simple and punctually $\iota$-pure, and let $r=\dim(V)$. Then $\PhiQAwful\rho\seq\{\chinot\}$ if and only if one of the following holds:
\begin{enum} \item\label{cor:item:r>1} $r>1$; \item\label{cor:item:r=1 and trivial} $r=1$ and $\rho$ is geometrically isomorphic to the trivial representation; \item\label{cor:item:r=1 and non-Kummer} $r=1$ and $\rho$ is not geometrically isomorphic to a Dirichlet character in $\PhiQ$. \end{enum}
\noindent Moreover, $\PhiQAwful\rho=\{\chinot\}$ if and only if \eqref{cor:item:r=1 and trivial} holds. \end{cor}
\begin{proof} Let $\dc\in\PhiQ$. Lemma~\ref{lem:heavy-criterion} implies that $\dc$ is heavy for $\rho$ if and only if $\rhochi$ is geometrically isomorphic to the trivial representation (and hence $r=1$). By the contrapositive, $\dc$ is not heavy for $\rho$ if and only if $r>1$ or $\rho$ is not geometrically isomorphic to $1/\dc$. Therefore \eqref{cor:item:r>1} or \eqref{cor:item:r=1 and non-Kummer} holds if and only if $\PhiQAwful\rho$ is empty, and \eqref{cor:item:r=1 and trivial} holds if and only if $\PhiQAwful\rho=\{\chinot\}$. \end{proof}
We also say that $\dc$ is \defi{mixed for $\rho$} (or \defi{$\rho$-mixed}) iff it lies in the subset $$
\PhiQMixed\rho = \PhiQ \ssm (\PhiQGood\rho \cup \PhiQAwful\rho). $$ Equivalently, $\dc$ is mixed for $\rho$ if and only if $\LC(T,\rhochi)$ is a polynomial which is $\iota$-mixed of $q$-weights $\leq w+1$ but not $\iota$-pure of $q$-weight $w+1$.
In summary, we classify the characters in $\PhiQ$ by a trichotomy: each is either $\rho$-good, $\rho$-mixed, or $\rho$-heavy. This terminology refines Katz's because we divide his bad characters into mixed and heavy characters.
\begin{lemma}\label{lem:bn-bound} Suppose $\rho$ is punctually $\iota$-pure of weight $w$ and $\dc\in\PhiQ$. Then
\begin{enum}
\item If $\dc$ is heavy for $\rho$, then $|\bnstar{\rhochi}|^2=O(q^n)$, and otherwise $|\bnstar{\rhochi}|^2=O(1)$.
\item $|\PhiQMixed\rho\ssm\{\chinot\}|\sim O(|\PhiQGood\rho|/q)$ and $|\PhiQAwful\rho|=O(1)$. \end{enum}
\noindent Moreover, the bounds hold as $q$ tends to infinity, and the implied constants depend only on $\rho$. \end{lemma}
\begin{proof} Regardless of whether $\dc$ is good, mixed, or heavy, we have $$
\bnstar{\rhochi}
=
-\Tr\left(\Frob_q^n\mid H^1_c(\AonetBar[1/\Q],\FF)\right)
+\Tr\left(\Frob_q^n\mid H^2_c(\AonetBar[1/\Q],\FF)\right). $$ On one hand, the second term on the right vanishes unless $\dc$ is heavy. On the other hand, Theorem~\ref{thm:deligne} and Lemma~\ref{lem:trace-bound} imply $$
|\,\Tr\left(\Frob_q^n\mid H^i_c(\AonetBar[1/\Q],\FF)\right)|^2
=
O(q^{i-1}) $$ since $\FF$ is punctually pure of weight $-1$. \end{proof}
Up to replacing $\Q$ by a proper monic divisor $\Qp$, we can apply the same trichotomy to characters in $\PhiQp$.
\begin{lemma}\label{lem:Qp-bn-bound}
Let $\Qp$ be a monic divisor of $\Q$ in $\Fq[t]$. If $\rho$ is punctually $\iota$-pure of weight $w$, then $|\PhiQpGood\rho|\sim|\PhiQp|$ as $q\to\infty$. \end{lemma}
\begin{proof} Apply Lemma~\ref{lem:bn-bound} with $\Qp$ in lieu of $\Q$. \end{proof}
\subsection{Key estimates}\label{subsec:key-estimates}
In this section we provide the exact formula and key asymptotic estimate we need to prove Theorem~\ref{thm:variance-estimate}.
\begin{prop}\label{prop:E-estimate} Suppose $\rho$ is punctually $\iota$-pure of weight $w$ and $\PhiQAwful\rho\seq\{\chinot\}$. Then $$
\phi(\Q)\cdot\bbE_A[\SnAQ] = \bn{\rho}. $$ \end{prop}
\begin{proof} By definition, $$
\phi(\Q)\cdot\bbE_A[\SnAQ]
=
\sum_{A\in\BQ} \SnAQ
=
\sum_{A\in\BQ}
\sum_{\substack{f\in \MM_n\\ f\equiv A\bmod\Q}}
\VM(f)
=
\sum_{\substack{f\in \MM_n\\ \gcd(f,\Q)=1}}
\VM(f), $$ and \eqref{eqn:formula-for-bn} then yields the desired identity. \end{proof}
\begin{remark} While we do not need the result, we point out that Proposition~\ref{prop:E-estimate} and Lemma~\ref{lem:bn-bound} imply $$
\frac{\phi(\Q)}{q^{n(1+w)}}\cdot|\bbE_A[\SnAQ]|^2
=
|\bnstar{\rho}|^2
\sim
O(1)
\mbox{ for }
q\to\infty $$ when $\rho$ is punctually $\iota$-pure of weight $w$ and $\PhiQAwful\rho\seq\{\chinot\}$. \end{remark}
\begin{proof} Combine Proposition~\ref{prop:E-estimate} with Lemma~\ref{lem:bn-bound}. \end{proof}
\begin{prop}\label{prop:var-estimate} Suppose $\rho$ is punctually $\iota$-pure of weight $w$ and $\PhiQAwful\rho\seq\{\chinot\}$. Then $$
\frac{\phi(\Q)}{q^{n(1+w)}}
\cdot
\Var_A[\SnAQ]
=
\frac{1}{|\PhiQGood\rho|}
\sum_{\dc\in\PhiQGood\rho}
|\Tr\,\std(\trhochi^n)|^2
+
O(q^{-1})
\mbox{ as }
q\to\infty $$ where $\std\colon U_\R(\bbC)\to\GL_\R(\bbC)$ is the representation given by the inclusion $U_\R(\bbC)\seq\GL_\R(\bbC)$. \end{prop}
\begin{proof} Lemma~\ref{lem:bn-bound} implies \begin{align*}
\phi(\Q)^2\cdot\Var_A[\SnAQ]\
& -
\sum_{\substack{\dc\in\PhiQGood\rho\\\dc\neq\chinot}}
|\bn{\rhochi}|^2
\ =
\sum_{\substack{\dc\in\PhiQMixed\rho\\\dc\neq\chinot}}
|\bn{\rhochi}|^2
\ +\
\sum_{\substack{\dc\in\PhiQAwful\rho\\\dc\neq\chinot}}
|\bn{\rhochi}|^2 \\[0.1in]
& \sim\
|\PhiQMixed\rho\ssm\{\chinot\}|\cdot O(q^{n(1+w)})
\ +\
|\PhiQAwful\rho\ssm\{\chinot\}|\cdot O(q^{n(2+w)}), \end{align*} and thus Lemma~\ref{lem:bn-bound} implies $$
\Var_A[\SnAQ]
\sim
\frac{q^{n(1+w)}}{\phi(\Q)}
\left(
\frac{1}{|\PhiQGood\rho|}
\sum_{\dc\in\PhiQGood\rho}
|\bnstar{\rhochi}|^2
+
O(q^{-1})
\right) $$ as $q\to\infty$. The proposition now follows from \eqref{eqn:bnstar-to-trace}. \end{proof}
\subsection{Proof of Theorem~\ref{thm:variance-estimate}}\label{sec:proof-of-thmE}
\newcommand\thmE{ Suppose that $\rho$ is punctually $\iota$-pure of weight $w$, that $\PhiQAwful\rho\seq\{\chinot\}$ for all $q$, and that the Mellin transform of $\rho$ has big monodromy. Then, for each $n\geq 1$, $$
\phi(\Q)\cdot\bbE_A[\SnAQ]
=
\bn{\rho}
\mbox{ and }
\lim_{q\to\infty}
\frac{\phi(\Q)}{q^{n(1+w)}}
\cdot
\Var_A[\SnAQ]
=
\min\{n,\rC(\rho)\}. $$ }
The following theorem is the main result of this section.
\begin{theorem}\label{thm:variance-estimate} \thmE \end{theorem}
\noindent See Corollary~\ref{cor:classify-awful} for a classification of $\rho$ satisfying the condition $\PhiQAwful\rho\seq\{\chinot\}$.
\begin{proof} The first part of the theorem is an immediate consequence of Proposition~\ref{prop:E-estimate} since $\PhiQAwful\rho\seq\{\chinot\}$ for all $q$. Let $\R=\rC(\rho)$. Then Theorem~\ref{thm:big-monodromy-implies-equidistribution} implies that $\ThetaRhoq$ is equidistributed in $U_\R(\bbC)$ as $q\to\infty$ since the Mellin transform of $\rho$ has big monodromy. Therefore Proposition~\ref{prop:var-estimate} and \eqref{eqn:trace-limit} imply $$
\frac{\phi(\Q)}{q^{n(1+w)}}
\cdot
\Var_A[\SnAQ]
\ \sim
\int_{U_\R(\bbC)}
|\Tr\,\std(\theta^n)|^2
\,d\theta. $$ The second part of the theorem now follows from the identity $$
\int_{U_\R(\bbC)}
|\Tr\,\std(\theta^n)|^2
\,d\theta
=
\min\{n,\R\}
=
\min\{n,\rC(\rho)\} $$ (see\footnote{NB: The reference \cite[Th.~2]{DS} is sometimes used, but as explained in \cite{DE}, the theorem is incorrectly stated.} \cite[Th.~1]{DE}). \end{proof}
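\begin{remark} As a quick check of the last identity (an illustration only): when $\R=1$, every $\theta\in U_1(\bbC)$ is a complex number of modulus one, so $|\Tr\,\std(\theta^n)|^2=1$ for all $n\geq 1$ and the integral equals $1=\min\{n,1\}$. \end{remark}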
\section{Exhibiting Big Monodromy}\label{sec:big-monodromy}
In this section we present sufficient criteria for the Mellin transform of $\rho$ to have big monodromy and refer the interested reader to \S\ref{sec:explicit-abelian-varieties} for explicit examples of representations meeting these criteria. Before stating the main theorem, we make some hypotheses and introduce pertinent terminology.
Throughout this section, we suppose that $\gcd(\s,\Q)=t-a$ for some $a\in\Fq$. Arguably this is less general than supposing that $\s$ and $\Q$ are relatively prime; however, we do not presently have a way to avoid our hypothesis. For ease of exposition, we also suppose that $a=0$ and observe that, up to performing an additive translation $t\mapsto t+a$, this represents no additional loss of generality.
For $t=0,\infty$, we regard $V_\dc$ as an $I(t)$-module and then denote it $V_\dc(t)$. We write $V_\dc(t)^\unip$ for the maximal subspace of $V_\dc(t)$ on which $I(t)$ acts unipotently. It is a direct summand of $V_\dc(t)$, and each simple $e$-dimensional submodule of it is isomorphic to a common module $\Unip(e)$. We say $V_\dc(t)$ has a \defi{unique unipotent block of exact multiplicity one} iff, for a unique integer $e\geq 1$, some $I(t)$-submodule is isomorphic to $\Unip(e)$ but no submodule is isomorphic to $\Unip(e)\oplus\Unip(e)$.
\newcommand\thmF{ Suppose that $\gcd(\s,\Q)=t$ and that $\deg(\Q)\geq 3$. Suppose moreover that $V(0)$ has a unique unipotent block of exact multiplicity one and that $\rho$ is geometrically simple and punctually pure. If $r:=\dim(V)$ and $\deg(\Q)$ satisfy $$
\deg(\Q)
>
\frac{1}{r}\left(72(r^2+1)^2 - r - \deg(L(T,\rho)) + \dropCee{\rho}\right), $$ then the Mellin transform of $\rho$ has big monodromy. }
\begin{theorem}\label{thm:is-equidistributed} \thmF \end{theorem}
\noindent We prove the theorem in \S\ref{subsec:proof-of-equidistribution-theorem}.
\begin{remark}\label{rmk:unipotence-hypothesis} As the reader will notice, the proof of our theorem has a lot in common with Katz's proof of \cite[Th.~17.1]{Katz:CE}. We both need the hypothesis on $\gcd(\Q,\s)$ and the structure of $V(0)^\unip$ in order to exhibit special elements of the relevant arithmetic monodromy groups. More precisely, the hypothesis that $\gcd(\Q,\s)=t$ helps ensure that, for sufficiently many $\dc$, some induced representation $\Ind(V_\dc)$ has the property that $\Ind(V_\dc)(0)^\unip=V(0)^\unip$ (cf.~Lemma~\ref{lem:induced-unipotent}). The hypothesis on the structure of these coincident modules then leads to the desired element (cf.~Lemma~\ref{lem:nice-element}). We expect one can remove this hypothesis but do not know how to do so. \end{remark}
\begin{remark} The hypothesis $\gcd(\Q,\s)=t$ also plays a minor role in Proposition~\ref{prop:induced-simplicity}. However, one could easily make other hypotheses (e.g.,~$\gcd(\Q,\s)=1$) and still be able to proceed (cf.~\cite[Th.~5.1]{Katz:QKR}). \end{remark}
\subsection{Two norm maps}
This subsection recalls material from \cite[\S 2]{Katz:CE} and borrows heavily from \loccit
Let $B$ be the finite $\Fq$-algebra $\Fq[t]/\Q\,\Fq[t]$. It is a direct product of finite extensions of $\Fq$ and hence \etale{} since $\Q$ is square free. More generally, for each finite extension $\EFq/\Fq$, the $\Fq$-algebra $$
B_{\EFq} = B\otimes_{\Fq}\EFq $$ is \etale{} and has the structure of a free $B$-module of rank $d=[\EFq:\Fq]$.
Let $\bbB$ be the functor on variable $\Fq$-algebras $R$ defined by $$
\bbB(R)
= R[t]/\Q R[t]. $$ It is the functor $R\mapsto B_R=B\otimes_{\Fq}R$ and takes values in the category of $\Fq$-algebras. In fact, $\bbB(R)$ even has the structure of an \etale{} $R$-algebra which is free of rank $\deg(\Q)$. In particular, for each $\Fq$-algebra $R$, there is a norm map $\bbB(R)\to R$ which is part of a transformation $$
\norm_{B/\Fq}\colon\bbB\to\id_{\Fq\mathrm{-algebras}} $$ between $\bbB$ and the identity functor on the category of $\Fq$-algebras.
Let $\bbBt$ be the functor on variable $\Fq$-algebras $R$ defined by $$
\bbBt(R)
= (R[t]/\Q R[t])^\times. $$ It is the composition of $\bbB$ with the functor $A\mapsto A^\times$ of $\Fq$-algebras and takes values in the category of groups. Moreover, the restriction of the norm map $\bbB(R)\to R$ to the group of units yields a homomorphism $$
\nu_R\colon\bbBt(R)\to R^\times, $$ and in particular, $\nu_{\Fq}$ is the map $\nu$ of \S\ref{sec:one-parameter-families}.
For each finite extension $\EFq/\Fq$, let $\bbB_{\EFq}$, $\bbBt_{\EFq}$ be the functors on variable $\Fq$-algebras $R$ defined by $$
\bbB_{\EFq}(R)
= B_{\EFq}\otimes_{\Fq}R,\quad
\bbBt_{\EFq}(R)
= (B_{\EFq}\otimes_{\Fq}R)^\times $$ respectively.
On one hand, $\bbB_{\EFq}$ takes values in the category of $\Fq$-algebras. However, $\bbB_{\EFq}(R)$ also has the structure of an \etale{} $B_R$-algebra which is free of rank $d$ as a $B_R$-module since $$
B_{\EFq}\otimes_{\Fq}R
= B\otimes_{\Fq}\EFq\otimes_{\Fq}R
= B_R\otimes_{\Fq}\EFq $$ and since $B_{\EFq}$ is an \etale{} $B$-algebra which is free of rank $d$ as a $B$-module. In particular, there is a transformation $$
\norm_{\EFq/\Fq}\colon
\bbB_{\EFq}\to\bbB $$ between the functors $\bbB_{\EFq}$ and $\bbB$.
On the other hand, $\bbBt_{\EFq}$ takes values in the category of groups and is even a smooth commutative group scheme. More precisely, $\bbBt$ is a group scheme over $\Fq$ of multiplicative type (i.e., a torus), and $\bbBt_{\EFq}$ is the torus $\Res_{\EFq/\Fq}(\bbBt)$ over $\Fq$ given by extending scalars to $\EFq$ and then taking the Weil restriction of scalars of $\bbBt$ back down to $\Fq$ (cf.~\cite[\S 7.6]{BLR}). Moreover, the transformation $\norm_{\EFq/\Fq}$ induces a transformation $$
\norm_{\EFq/\Fq}\colon\bbBt_{\EFq}\to\bbBt $$ which is even an \etale{} surjective homomorphism of tori. In particular, since $$
\bbBt_{\EFq}(\Fq)
=
\bbBt(\EFq)
=
(\EFq[t]/\Q\EFq[t])^\times $$ one obtains a second norm map $$
\nu_{\EFq}^{\,\prime}
\colon
(\EFq[t]/\Q\EFq[t])^\times
\to
(\Fq[t]/\Q\Fq[t])^\times $$ which is a surjective homomorphism by Lang's theorem.
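\begin{remark} As an illustrative special case (not needed later): if $\Q=t$, then $B=\Fq$ and $\bbBt(\EFq)=\EFq^\times$, and $\nu_{\EFq}^{\,\prime}$ is the usual norm map $\EFq^\times\to\Fq^\times$ of finite fields, whose surjectivity is classical. \end{remark}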
\subsection{Characters of a twisted torus}
Let $\EFq/\Fq$ be a finite extension and $\PhiEOf\EFq\Q$ be the dual group $\Hom(\bbBt(\EFq),\bbC^\times)$ so that $\PhiEOf\Fq\Q=\PhiQ$. Suppose that $\Q$ splits completely over $\EFq$, and let $a_1,\ldots,a_n\in\EFq$ be the zeros of $\Q$ so that $\Q=\prod_{i=1}^n(t-a_i)$ in $\EFq[t]$.
For each $\EFq$-algebra $R$, the Chinese Remainder Theorem implies that there is a unique algebra isomorphism \begin{equation}\label{eqn:crt-isomorphism}
R[t]/\Q R[t]\to\prod_{i=1}^n R[t]/(t-a_i)R[t] \end{equation} which sends the residue class of $t$ to the tuple $(a_1,\ldots,a_n)$ of residue class representatives. Writing it as an isomorphism $\bbB(R)\to R^n$ and restricting to units yields a group isomorphism $\bbBt(R)\to (R^\times)^n$. As $R$ varies over $\EFq$-algebras, the latter isomorphisms in turn yield an isomorphism of tori $\sigma\colon\bbBt\to\Gm^n$ over $\EFq$. In particular, applying Weil restriction of scalars from $\EFq$ to $\Fq$ yields an isomorphism $$
\Res_{\EFq/\Fq}(\sigma)\colon\bbBt_{\EFq}\to\bbG_{m,\EFq}^n $$ of tori over $\Fq$ where $\bbG_{m,\EFq}=\Res_{\EFq/\Fq}(\Gm)$.
There is a unique permutation $\phi\in\Sym([n])$ satisfying $a_{\phi^{-1}(i)}=a_i^q$ since $\Q$ is square free and has coefficients in $\Fq$. While $\sigma$ does not descend to a morphism $\bbBt\to\Gm^n$ in general, we can use $\phi$ to construct a twisted form $\bbT$ of $\Gm^n$ over $\Fq$ such that $\sigma$ is the pullback of a morphism $\bbBt\to\bbT$ over $\Fq$. More precisely, we define the twisted Frobenius $\tau$ on $\bbT=\Gm^n$ as the composition $$
(b_1,\ldots,b_n)
\mapsto (b_1^q,\ldots,b_n^q)
\mapsto (b_{\phi(1)}^q,\ldots,b_{\phi(n)}^q) $$ of the usual Frobenius automorphism and a permutation of the coordinates of $\Gm^n$. One can easily verify that $\tau^d$ is the $d$th power of the usual Frobenius and thus $\bbT$ is indeed a twist of $\Gm^n$. Moreover, one can also show that $(a_1,\ldots,a_n)$ is fixed by $\tau$ and even that $$
\bbT(\Fq)=\bbT^{\tau=1}=\bbBt(\Fq). $$ In particular, by precomposing with $\tau$ we obtain the automorphism $\tau_\EFq^\vee$ on $$
\Hom(\bbT(\EFq),\bbC^\times)
=
\Hom(\Gm^n(\EFq),\bbC^\times)
=
\Hom(\EFq^\times,\bbC^\times)^n $$ given by \begin{equation}\label{eqn:def-tau}
\tau_\EFq^\vee\colon
(\dc_1,\ldots,\dc_n)
\mapsto
(\dc_{\phi^{-1}(1)}^q,\ldots,\dc_{\phi^{-1}(n)}^q). \end{equation}
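\begin{remark} For instance (an illustration only): if $\Q$ is irreducible of degree $n=2$ with zeros $a_1$ and $a_2=a_1^q$, then $\phi$ is the transposition $(1\,2)$ and $\tau(b_1,b_2)=(b_2^q,b_1^q)$. The fixed points of $\tau$ are the pairs $(b,b^q)$ with $b^{q^2}=b$, which form a cyclic group of order $q^2-1$, in agreement with the identification $\bbT^{\tau=1}=\bbBt(\Fq)=(\Fq[t]/\Q\,\Fq[t])^\times$. \end{remark}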
Composition of $\Res_{\EFq/\Fq}(\sigma)$ with the projection $\bbG_{m,\EFq}^n\to\bbG_{m,\EFq}$ onto the $i$th factor yields a surjective homomorphism $$
\pi_i\colon\bbBt_{\EFq}\to\bbG_{m,\EFq} $$ of tori over $\Fq$. In particular, taking duals of the respective groups of $\EFq$-rational points and using the bijections $\bbG_{m,\EFq}(\Fq)=\Gm(\EFq)=\EFq^\times$ yields an isomorphism $$
\sigma_{\EFq}^\vee
\colon
\prod_{i=1}^n\Hom(\EFq^\times,\bbC^\times)
\ni(\dc_1,\ldots,\dc_n)\mapsto\prod_{i=1}^n\dc_i\pi_i\in
\PhiEOf\EFq\Q. $$ We observe that since $\nu_\EFq^\prime$ is surjective its dual $\nu_{\EFq}^{\,\prime\,\vee}$ is a monomorphism $\PhiQ\to\PhiEOf\EFq\Q$ and thus we can identify $\PhiQ$ with a subset of $\Hom(\EFq^\times,\bbC^\times)^n$. More precisely, it is the subgroup of characters fixed by $\tau_\EFq^\vee$ and thus \begin{equation}\label{eqn:identifying-PhiQ}
(\sigma_\EFq^\vee)^{-1}(
\nu_{\EFq}^{\,\prime\,\vee}(\PhiQ)
)
=
\{\,
(\dc_1,\ldots,\dc_n)\in\Hom(\EFq^\times,\bbC^\times)^n
:
\dc_{\phi(i)}=\dc_i^q\mbox{ for }i\in[n]
\,\}. \end{equation}
\subsection{Characters with distinct components}\label{sec:distinct-component-characters}
We say that a character $\dc\in\PhiEOf\EFq\Q$ \defi{has distinct components} iff it lies in the subset $$
\PhiEOfDistinct\EFq\Q
=
\left\{\,
\sigma_{\EFq}^\vee(\dc_1,\ldots,\dc_n)\in\PhiEOf\EFq\Q
:
\dc_i\neq\dc_j
\mbox{ for }
1\leq i<j\leq n
\,\right\}, $$ and we define the corresponding subset of $\PhiQ$ as the intersection $$
\PhiQDistinct
=
\PhiEOfDistinct\EFq\Q\cap\nu_{\EFq}^{\,\prime\,\vee}(\PhiQ) $$ where $\nu_{\EFq}^{\,\prime\,\vee}\colon\PhiQ\to\PhiEOf\EFq\Q$ is the dual of $\nu_\EFq^{\,\prime}$.
\begin{lemma} $\PhiQDistinct$ is well defined, that is, it does not depend upon our choice of $\EFq$. \end{lemma}
\begin{proof} Let $\EFq'/\EFq$ be a finite extension and observe that the norm map $\EFq^{\prime\times}\to\EFq^\times$ is surjective so induces a monomorphism $$
\Hom(\EFq^\times,\bbC^\times)
\to
\Hom(\EFq^{\prime\times},\bbC^\times), $$ and thus $$
\PhiEOfDistinct{\EFq}\Q
=
\PhiEOfDistinct{\EFq'}\Q
\cap
\PhiEOf\EFq\Q. $$ In particular, if $\EFq''/\Fq$ is a second finite extension over which $\Q$ splits completely and if $\EFq'$ contains the compositum $\EFq\EFq''$, then $$
\PhiEOfDistinct{\EFq}\Q
\cap
\nu_{\EFq}^{\,\prime\,\vee}(\PhiQ)
=
\PhiEOfDistinct{\EFq'}\Q
\cap
\nu_{\EFq'}^{\,\prime\,\vee}(\PhiQ)
=
\PhiEOfDistinct{\EFq''}\Q
\cap
\nu_{\EFq''}^{\,\prime\,\vee}(\PhiQ) $$ and $\PhiQDistinct$ is indeed well defined. \end{proof}
Let $\Q=\prod_{j=1}^r\pi_j\in\Fq[t]$ be a factorization into monic irreducibles. The quotient $\EFq_j=\Fq[t]/\pi_j\Fq[t]$ is a finite extension of $\Fq$ of degree $n_j=\deg(\pi_j)$. It is also the splitting field of $\pi_j$ and thus may be embedded in $\EFq$. Moreover, there are bijections \begin{equation}\label{eqn:phiQ-bijections}
\PhiQ
= \prod_{j=1}^r\PhiOf{\pi_j}
= \prod_{j=1}^r\Hom(\EFq_j^\times,\bbC^\times),\ \
\PhiEOf\EFq\Q
= \prod_{j=1}^r\PhiEOf\EFq{\pi_j}
= \prod_{j=1}^r\Hom(\EFq^\times,\bbC^\times)^{n_j} \end{equation} given by applying the Chinese Remainder Theorem.
For each monic factor $\Qp$ of $\Q$ in $\Fq[t]$, let $\PhiQpDistinct$ be the subset of $\PhiQp$ defined similarly as above but with $\Qp$ in lieu of $\Q$. One can easily verify that it does not depend upon the polynomial $\Q$ of which $\Qp$ is a factor.
\begin{lemma}\label{lem:counting-distinct-characters}
$|\PhiOfDistinct{\pi_j}|\sim|\PhiOf{\pi_j}|$, for each $j\in[r]$, as $q\to\infty$. \end{lemma}
\begin{proof} Let $j\in[r]$, and suppose without loss of generality that $a_1,\ldots,a_{n_j}$ are the zeros of $\pi_j$ and $\phi(i)\equiv i+1\bmod{n_j}$ for $i\in[n_j]$. Then by \eqref{eqn:identifying-PhiQ} and \eqref{eqn:phiQ-bijections} there is an identification \begin{eqnarray*}
\PhiOf{\pi_j}
& = &
\{\,
(\dc_1,\ldots,\dc_{n_j})\in\Hom(\EFq_j^\times,\bbC^\times)^{n_j}
:
\dc_{i+1}=\dc_i^q\mbox{ for }i\in[n_j-1]
\,\} \end{eqnarray*} since any $\dc\in\Hom(\EFq^\times,\bbC^\times)$ factors through an inclusion $\EFq_j^\times\to\EFq^\times$ if $\dc^{q^{n_j}}=\dc$.
The groups $\EFq_j^\times$ and $\Hom(\EFq_j^\times,\bbC^\times)$ are cyclic and non-canonically isomorphic, so let $g$ and $\chi$ be respective generators. Then we have the further identifications \begin{eqnarray*}
\PhiOf{\pi_j}
& = &
\{\,
(\chi^{e_1},\ldots,\chi^{e_{n_j}})\in\Hom(\EFq_j^\times,\bbC^\times)^{n_j}
:
e_{i+1}\equiv qe_i\bmod{q^{n_j}-1}
\mbox{ for }
i\in[n_j-1]
\,\} \\
& = &
\{\,
(g^{e_1},\ldots,g^{e_{n_j}})\in(\EFq_j^\times)^{n_j}
:
e_{i+1}\equiv qe_i\bmod{q^{n_j}-1}
\mbox{ for }
i\in[n_j-1]
\,\}. \end{eqnarray*} From this last identification one easily deduces an identification between $\PhiOfDistinct{\pi_j}$ and the set $$
\{\,
(g^{e_1},\ldots,g^{e_{n_j}})\in(\EFq_j^\times)^{n_j}
:
e_{i+1}\equiv qe_i\bmod{q^{n_j}-1}
\mbox{ for }
i\in[n_j-1]
\mbox{ and }
\Fq(g^{e_1})=\EFq_j
\,\}, $$ and thus $$
|\PhiOfDistinct{\pi_j}|
=
|\{\,
g^e\in\EFq_j^\times
:
e\in[q^{n_j}-1]
\mbox{ and }
\EFq_j=\Fq(g^e)
\,\}|. $$ Finally, it is well known that the cardinality of the right-hand set is asymptotic to $q^{n_j}-1$ as $q\to\infty$ (cf.~\cite[2.2]{Rosen}), and thus $$
|\PhiOf{\pi_j}|
=
|\Hom(\EFq_j^\times,\bbC^\times)|
=
|\EFq_j^\times|
=
q^{n_j}-1
\sim
|\PhiOfDistinct{\pi_j}|
\mbox{ for }
q\to\infty $$ as claimed. \end{proof}
\begin{cor}\label{cor:size-of-Qp-distinct}
If $\Qp$ is a monic factor of $\Q$ in $\Fq[t]$, then $|\PhiQpDistinct|\sim|\PhiQp|$ as $q\to\infty$. \end{cor}
\begin{proof} Suppose without loss of generality that $\Qp=\pi_1\cdots\pi_s$ with $s\in[r]$ so that there is a bijection $$
\PhiOf\Qp
=
\prod_{j=1}^s
\PhiOf{\pi_j}. $$ This bijection in turn induces an inclusion $$
\PhiOfDistinct\Qp
\to
\prod_{j=1}^s
\PhiOfDistinct{\pi_j} $$ the complement of whose image has cardinality bounded above by $
\prod_{j=1}^s
(\deg(\Qp)-n_j) $ since an element of the codomain lies in the image if (and only if) its components are pairwise distinct. In particular, $$
|\PhiOfDistinct\Qp|
\sim
\prod_{j=1}^s
|\PhiOfDistinct{\pi_j}|
\overset{\mathrm{Lemma~\ref{lem:counting-distinct-characters}}}\sim
\prod_{j=1}^s
|\PhiOf{\pi_j}|
\mbox{ for }
q\to\infty $$ as claimed. \end{proof}
\subsection{Properties of $H^2_c$}
Let $X$ be a smooth geometrically connected curve over $\Fq$, let
$T\seq X$ be a dense Zariski open subset, and let $\FF$ be a sheaf on $X$.
\begin{lemma}\label{lem:birational-invariance-of-H^2_c} There is a bijection $H^2_c(\Tbar,\FF)\to H^2_c(\bar{X},\FF)$. \end{lemma}
\begin{proof} Let $j\colon T\to X$ be the corresponding inclusion. Then the adjunction map $j_!j^*\FF\to\FF$ is part of an exact sequence of sheaves on $X$ $$
0\to j_!j^*\FF\to \FF\to \QQ\to 0 $$ where $\QQ$ is a skyscraper sheaf supported on $X\ssm T$. The bijection in question is part of the corresponding long exact sequence of cohomology $$
\cdots
\to H^1_c(\bar{X},\QQ)
\to H^2_c(\Tbar,\FF)
\to H^2_c(\bar{X},\FF)
\to H^2_c(\bar{X},\QQ)
\to \cdots $$ where $H^i_c(\bar{X},\QQ)$ vanishes for $i\neq 0$ since $\QQ$ is a skyscraper sheaf. \end{proof}
Let $\GG$ be a sheaf on $X$ and $\GG^\vee$ be its dual. Suppose $\FF$ and $\GG$ are lisse on $T$, and thus so is $\GG^\vee$. Let $\rho\colon\piOneT\to\GL(V)$, $\omega\colon\piOneT\to\GL(W)$, and $\omega^\vee\colon\piOneT\to\GL(W^\vee)$ be the respective corresponding representations.
\begin{lemma}\label{lem:detecting-FG-isomorphisms} Suppose $\FF$ and $\GG$ are lisse and geometrically simple on $T$.
\begin{enum} \item $\dim(H^2_c(\Tbar,\FF\otimes\GG^\vee))=\dim(\Hom_{\piOneT}(W,V))\leq 1$. \item $\dim(H^2_c(\Tbar,\FF\otimes\GG^\vee))=1$ if and only if $\FF$ and $\GG$ are geometrically isomorphic on $T$. \end{enum} \end{lemma}
\begin{proof} Let $G=\piOneTbar$ so that $\rho$ and $\omega^\vee$ are absolutely simple representations of $G$ and $\rho\otimes\omega^\vee$ is the representation on $V\otimes W^\vee$ corresponding to $\FF\otimes\GG^\vee$. Therefore $$
\dim(H^2_c(\Tbar,\FF\otimes\GG^\vee))
\overset{\eqref{eqn:invariants-and-coinvariants}}=
\dim\left((V\otimes W^\vee)_G\right)
=
\dim\left((V\otimes W^\vee)^G\right)
=
\dim\left(\Hom_G(W,V)\right) $$ (cf.~\cite[43.14]{CurtisReiner}). Moreover, the sheaves $\FF,\GG$ are geometrically isomorphic on $T$ if and only if $V$ and $W$ are isomorphic as representations of $G$. If these equivalent conditions hold, then Schur's lemma implies $\dim(\Hom_G(W,V))=1$, and otherwise $\dim(\Hom_G(W,V))=0$ (see \cite[27.3]{CurtisReiner}). \end{proof}
\subsection{Invariant scalars}
Let $\l\in\k^\times$. If we identify $\Gm$ with $\Poneu\ssm\{0,\infty\}$ and regard $\l$ as an element of $\Gm(\k)$, then multiplication by it (i.e., translation) induces an automorphism of $\Poneu$ over $\k$ which we also denote $\l\colon\Poneu\to\Poneu$. We say $\l$ is an \defi{invariant scalar} of $\GG$ iff the direct image $\l_*\GG$ is geometrically isomorphic to $\GG$. For example, $1$ is an invariant scalar for every $\GG$, and every $\l$ is an invariant scalar of the constant sheaf $\Qellbar$.
Let $\alpha\colon\piOne{\Gm}\to\Qellbar^\times$ be a tame character. The corresponding sheaf $\LL_\alpha=\ME{\alpha}$ is a so-called Kummer sheaf.
\begin{lemma}\label{lem:invariant-scalars-for-Kummer} Every $\l\in\k^\times$ is an invariant scalar of $\LL_\alpha$. \end{lemma}
\begin{proof} The tame fundamental group of $\Gm$ is a quotient of $\piOne{\Gm}$ and is topologically generated by the images of the inertia groups $I(0)$ and $I(\infty)$. The character $\alpha$ is completely determined by these images, and translation by $\l$ does not change how $I(0)$ and $I(\infty)$ act since it fixes both $0$ and $\infty$. Therefore $\l_*\LL_\alpha$ and $\LL_\alpha$ are lisse and geometrically isomorphic on $\Gm$, and $\l$ is an invariant scalar of $\LL_\alpha$. \end{proof}
\begin{cor} $\l$ is an invariant scalar of $\GG$ if and only if it is an invariant scalar of $\GG\otimes\LL_\alpha$. \end{cor}
\noindent In particular, the answer to the question of whether or not $\l$ is an invariant scalar of $\Q_*\ME{\rhochi}$ depends only on the coset $\dc\PhiUNu$.
\begin{proof} The sheaves $\l_*\LL_\alpha$ and $\LL_\alpha$ are lisse and geometrically isomorphic on $\Gm$ by Lemma~\ref{lem:invariant-scalars-for-Kummer}. Moreover, $$
\l_*(\GG\otimes\LL_\alpha)\otimes(\GG\otimes\LL_\alpha)^\vee
=
\l_*\GG\otimes(\l_*\LL_\alpha\otimes\LL_\alpha^\vee)\otimes\GG^\vee, $$ so $\l_*\GG\otimes\GG^\vee$ and $\l_*(\GG\otimes\LL_\alpha)\otimes(\GG\otimes\LL_\alpha)^\vee$ are lisse and geometrically isomorphic on $U\ssm\{0,\infty\}$. Thus $\l$ is an invariant scalar of $\GG$ if and only if it is an invariant scalar of $\GG\otimes\LL_\alpha$. \end{proof}
The following lemma gives a cohomological criterion for detecting invariant scalars.
\begin{lemma}\label{lem:detecting-invariant-scalars} Let $\l\in\Fqbar^\times$. Suppose $\l_*\GG$ and $\GG$ are lisse and geometrically simple on $U$. Then the following are equivalent:
\begin{enum} \item $\l$ is an invariant scalar of $\GG$; \item $H^2_c(\Ubar,\l_*\GG\otimes\GG^\vee)\neq\ZeroSpace$; \item $H^2(\PoneuBar,\l_*\GG\otimes\GG^\vee)\neq\ZeroSpace$. \end{enum} \end{lemma}
\begin{proof} Lemma~\ref{lem:detecting-FG-isomorphisms} implies the equivalence of (1) and (2), and Lemma~\ref{lem:birational-invariance-of-H^2_c} implies the equivalence of (2) and (3). \end{proof}
\subsection{Avoiding invariant scalars}
Consider the affine plane curve $$
X_\l
:
\l\Q(x_1)=\Q(x_2), $$ and let $\pi_i\colon X_\l\to\Aonet$ be the map $(x_1,x_2)\mapsto x_i$. They are part of a commutative diagram $$
\xymatrix{
X_\l\ar[r]^{\pi_2}\ar[d]_{\pi_1}\ar@{.>}[dr]^\pi
& \Aonet\ar[d]^{\Q} \\
\Aonet\ar[r]_{\l\Q}
& \Aoneu
} $$ where $\pi=\Q\pi_2=\l\Q\pi_1$. Moreover, the maps $\Q$ and $\l\Q$ are generically \'etale of degree $n=\deg(\Q)$, thus their fiber product $\pi$ is generically \'etale of degree $n^2$.
Let $\EFq/\Fq$ be a finite extension over which $\Q$ splits and $Z=\{a_1,\ldots,a_n\}\seq\EFq$ be the zeros of $\Q$.
\begin{lemma}\label{lem:smoothness-of-X_l} $X_\l$ is smooth at the $n^2$ points of $Z\times_{\Aoneu}Z=Z\times Z$. \end{lemma}
\begin{proof} The subset $Z\sub\Aonet$ is the vanishing locus of $\Q$ and $\l\Q$, hence $Z\times_{\Aoneu}Z=Z\times Z$. Moreover, $$
\frac{\partial}{\partial x_2}(\l\Q(x_1)-\Q(x_2))
=
\Q'(x_2)
=
\sum_{i=1}^n\prod_{j\neq i}(x_2-a_j) $$ does not vanish at any $a_i\in Z$ since $\Q$ is square free, so $X_\l$ is smooth at every $(a_i,a_j)\in Z\times Z$. \end{proof}
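As a purely numerical illustration (not part of the argument, with an arbitrary toy choice of square-free $\Q$), one can verify the identity $\Q'(a_i)=\prod_{j\neq i}(a_i-a_j)$, which is non-zero precisely because the roots are distinct:

```python
# Illustrative check (toy data): for square-free Q with distinct roots a_i,
# the x2-partial of lam*Q(x1) - Q(x2) is -Q'(x2), and
# Q'(a_i) = prod_{j != i} (a_i - a_j) != 0,
# which is the non-vanishing used to prove smoothness of X_lambda at Z x Z.

def poly_from_roots(roots):
    """Coefficients (low degree first) of prod_i (x - a_i)."""
    coeffs = [1]
    for a in roots:
        shifted = [0] + coeffs                   # x * p(x)
        scaled = [-a * c for c in coeffs] + [0]  # -a * p(x)
        coeffs = [s + t for s, t in zip(shifted, scaled)]
    return coeffs

def deriv(coeffs):
    return [i * c for i, c in enumerate(coeffs)][1:]

def evalp(coeffs, x):
    return sum(c * x**i for i, c in enumerate(coeffs))

roots = [0, 1, 3]                                # distinct roots: Q is square free
dQ = deriv(poly_from_roots(roots))
for i, a in enumerate(roots):
    expected = 1
    for j, b in enumerate(roots):
        if j != i:
            expected *= a - b
    assert expected != 0 and evalp(dQ, a) == expected
```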
Consider the external tensor product sheaf $$
\EE_{\rhochi,\l} := \ME{\rhochi}\boxtimes\ME{\rhochi}^\vee $$ on $\Aonet\times\Aonet$ and the tensor product sheaf $$
\TT_{\rhochi,\l}
:=
\l\Q_*\ME{\rhochi}\otimes\Q_*\ME{\rhochi}^\vee $$ on $\Poneu$. They have respective generic ranks $r^2$ and $n^2r^2$ since both $\ME{\rhochi}$ and its dual have generic rank $r$.
Let $T_\l\seq X_\l$ be a smooth dense Zariski open subset and $U_\l=\pi(T_\l)$. Up to shrinking $T_\l$, we suppose that $\EE_{\rhochi,\l}$ is lisse on $T_\l$ and that $\pi$ is \'etale over $U_\l$.
\begin{lemma}\label{lem:focusing-on-E_lambda} The sheaves $\pi_*(\EE_{\rhochi,\l})$ and $\TT_{\rhochi,\l}$ are lisse and isomorphic on $U_\l$. \end{lemma}
\begin{proof}
Let $w$ be a geometric point of $U_\l$, and let $W_1=(\l\Q)^{-1}(w)$ and $W_2=\Q^{-1}(w)$. Then $|W_1|=|W_2|=\deg(\Q)$ and $\pi^{-1}(w)=W_1\times W_2$ since $\pi$ is unramified over $w$, and $$
\pi_*(\EE_{\rhochi,\l})_w
\ =
\bigoplus_{(w_1,w_2)\in W_1\times W_2}
\EE_{\rhochi,\l,(w_1,w_2)}
\ =
\bigoplus_{(w_1,w_2)\in W_1\times W_2}
\left(\ME{\rhochi}_{w_1}\otimes\ME{\rhochi}^\vee_{w_2}\right) $$ whereas $$
\TT_{\rhochi,\l,w}
\ =
\left(\bigoplus_{w_1\in W_1}\ME{\rhochi}_{w_1}\right)
\otimes
\left(\bigoplus_{w_2\in W_2}\ME{\rhochi}^\vee_{w_2}\right).
$$ Therefore both sheaves have the same geometric fibers, and hence they are isomorphic. It remains to show they are lisse on $U_\l$.
On one hand, $\EE_{\rhochi,\l}$ is lisse on $T_\l$, so its geometric fibers all have the same rank $r^2$. Moreover, $\pi$ is \etale{} over $U_\l$ by hypothesis, so the geometric fibers of $\pi_*(\EE_{\rhochi,\l})$ also all have the same rank $\deg(\pi)r^2=n^2r^2$ and hence $\pi_*(\EE_{\rhochi,\l})$ is lisse on $U_\l$ (see \cite[Prop.~11]{Katz:SC}). On the other hand, $\pi_*(\EE_{\rhochi,\l})$ is isomorphic to $\TT_{\rhochi,\l}$ on $U_\l$ which implies the latter is also lisse on $U_\l$. \end{proof}
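The rank bookkeeping in the two displayed fiber decompositions can be illustrated numerically (toy values of $n=\deg(\Q)$ and $r$; purely an illustration):

```python
# Illustrative rank count (toy values): the fiber of pi_*(E) is a direct sum of
# n^2 terms of rank r^2, while the fiber of T is a tensor product of two sums
# of rank n*r each; both give n^2 * r^2, matching the lissity argument above.
n, r = 4, 3                                      # n = deg(Q), r = generic rank
rank_pushforward = sum(r * r for _ in range(n * n))   # sum over W1 x W2
rank_tensor = (n * r) * (n * r)                  # (sum over W1) tensor (sum over W2)
assert rank_pushforward == rank_tensor == n**2 * r**2
```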
The contrapositive of the following corollary gives us a way to show some $\l$ is \emph{not} an invariant scalar.
\begin{cor}\label{cor:detecting-invariant-scalars} Suppose $\rho$ is geometrically simple and $\dc\in\PhiQ$. Then the following are equivalent:
\begin{enum} \item $\l$ is an invariant scalar of $\Q_*\ME{\rhochi}$; \item $H^2_c(\bar U_\lambda,\TT_{\rhochi,\l})\neq\ZeroSpace$. \end{enum}
\noindent They imply
\begin{enum} \setcounter{enumi}{2} \item $H^2_c(\bar T_\l,\EE_{\rhochi,\l})\neq\ZeroSpace$. \end{enum}
\end{cor}
\begin{proof} Lemmas~\ref{lem:detecting-invariant-scalars} and \ref{lem:focusing-on-E_lambda} imply the equivalence of (1) and (2). If $\piOne{U_\lambda}\to\GL(V)$ is the representation corresponding to $\TT_{\rhochi,\l}$, then $
V^{\piOne{U_\l}}
\seq
V^{\piOne{T_\l}} $ so \eqref{eqn:invariants-and-coinvariants} and (2) imply (3). \end{proof}
The following proposition was inspired by \cite[Proof of Th.~5.1.3]{Katz:TLFM}.
\begin{prop}\label{prop:no-nontrivial-invariant-scalars} Suppose $\deg(\Q)\geq 2+\deg(\gcd(\Q,s))$ and $\dc\in\PhiQDistinct$.
\begin{enum} \item If $\rho$ is geometrically irreducible, then so is $\ME{\rhochi}$. \item $\l=1$ is the only invariant scalar of $\Q_*\ME{\rhochi}$. \end{enum} \end{prop}
\begin{proof} Let $\EFq/\Fq$ be a splitting field of $\Q$ and $a_1,a_2\in\EFq$ be zeros of $\Q$ which are distinct from each other and the zeros of $s$. Let $\dc_1,\dc_2\in\Hom(\EFq^\times,\bbC^\times)$ be the corresponding components of $(\sigma_\EFq^\vee)^{-1}(\nu_{\EFq}^{\,\prime\,\vee}(\dc))$ as an element of $(\sigma_\EFq^\vee)^{-1}(\PhiEOf\EFq\Q)$ (compare \eqref{eqn:identifying-PhiQ} and \eqref{eqn:phiQ-bijections}). Then $\dc_1,\dc_2$ are distinct characters, so $\alpha=\dc_1/\dc_2$ is a non-trivial character.
Let $\l\in\k^\times$ be an arbitrary scalar. If $\l\neq 1$, then for each component $T'_\l\seq T_\l$ over $\Fqbar$, there is a smooth point $t'=(t'_1,t'_2)\in T'_\l(\Fqbar)$ satisfying $\{t'_1,t'_2\}=\{a_1,a_2\}$. The map $\pi$ is \'etale over $0$ since $\Q$ is square free, hence we can use $\pi$ to identify $I(t')$ with $I(0)$. We can also identify $I(t'_1)$ and $I(t'_2)$ with $I(0)$.
On one hand, the fiber of $\ME{\rhochi}$ at $t=t'_i$ and the fiber at $t=0$ of $\Qellbar^{r}\otimes\LL_{\dc_i}$ are isomorphic as $I(0)$-modules since $s(a_i)\neq 0$. Moreover, the fiber of $\EE_{\rhochi,\l}$ at $t'$ and the fiber at $u=0$ of $\Qellbar^{r^2}\otimes\LL_{\alpha}$ are isomorphic as $I(0)$-modules. On the other hand, the latter fibers have no $I(0)$-invariants since $\alpha$ is non-trivial, so a fortiori, the geometric generic fiber of $\EE_{\rhochi,\l}$ has no $\piOne{\Tbar_\l}$-invariants. Therefore \eqref{eqn:invariants-and-coinvariants} implies $H^2_c(\bar T_\l,\EE_{\rhochi,\l})$ vanishes for $\l\neq 1$, and hence the contrapositive of Corollary~\ref{cor:detecting-invariant-scalars} implies $\l=1$ is the only invariant scalar of $\Q_*\ME{\rhochi}$. \end{proof}
\subsection{Baby theorem}
In this subsection we prove a simplified version of Theorem~\ref{thm:is-equidistributed}.
Let $U$ be a dense Zariski open subset of $\Gm=\Poneu\ssm\{0,\infty\}$ and $\theta\colon\piOne{U}\to\GL(W)$ be a continuous representation to a finite-dimensional $\Qellbar$-vector space $W$. Let $\PhiU$ be the dual of $\Bu=(\Fq[u]/u\Fq[u])^\times$ (cf.~\S\ref{sec:one-parameter-families}). For $u=0,\infty$, let $W(u)$ denote $W$ regarded as an $I(u)$-module and $W(u)^\unip$ be its maximal submodule where $I(u)$ acts unipotently. If $\theta$ is geometrically simple and punctually pure of weight $w$ and if $\dim(W)>1$, then we can associate to $\theta$ a pair of Tannakian monodromy groups $$
\GG_\geom(\theta,\PhiU)
\seq
\GG_\arith(\theta,\PhiU)
\seq
\GL_{\R,\Qellbar} $$ for $\R=\chi(\GmBar,\ME{\theta})$ (see \S\ref{sec:representation-monodromy-groups} and Theorem~\ref{thm:P-is-neutral-Tannakian}).
\begin{theorem}\label{thm:baby-monodromy-theorem} Suppose that $\theta$ is geometrically simple and punctually pure of weight $w$, that $\dim(W)>1$ or that $\theta$ does not factor through the composed quotient $\piOne{U}\onto\piOne{\Gm}\onto\piOneTame{\Gm}$, and that $\l=1$ is the only invariant scalar of $\ME{\theta}$. Suppose moreover that $W(0)^\unip$ has dimension at most $r$ and a unique unipotent block of exact multiplicity one and that $\R>72(r^2+1)^2$. Finally, suppose $W(\infty)^\unip=\ZeroSpace$. Then $\GG_\geom(\theta,\PhiU)$ equals $\GL_{\R,\Qellbar}$. \end{theorem}
\noindent The proof consists of a few steps and will occupy the remainder of this section.
Let $G=\GG_\arith(\theta,\PhiU)$ and $H=\GG_\geom(\theta,\PhiU)$.
\begin{lemma}\label{lem:G/H-is-abelian} $G$ and $H$ are reductive and there is an exact sequence $$
1\to H\to G\to T\to 1 $$ for some torus $T$ over $\Qellbar$. \end{lemma}
\begin{proof} Observe that $\ME{\theta}$ is geometrically simple yet is not a Kummer sheaf since otherwise one would have $\dim(W)=1$ and $\theta$ would factor through $\piOne{U}\onto\piOneTame{\Gm}$. Moreover, $\theta$ is geometrically simple and punctually pure of weight $w$ by hypothesis. Therefore the lemma follows from Proposition~\ref{prop:middle-extension-monodromy}.\ref{item:prop:mem-reductive-and-normal}. \end{proof}
A priori $G$ or $H$ could be disconnected, so let $G^0$ and $H^0$ be the respective identity components.
\begin{lemma}\label{lem:lie-irreducible} $G^0$ and $H^0$ are (Lie-)irreducible subgroups of $\GL_{\R,\Qellbar}$. \end{lemma}
\begin{proof} This follows from \cite[Th.~8.2 and Cor.~8.3]{Katz:CE} since $\l=1$ is the only invariant scalar of $\ME{\theta}$. \end{proof}
Let $\wm{m}\colon(\Qbar^\times)^m\to\bbZ^m$ be the $m$th weight multiplicity map for $m=\R$ given in Definition~\ref{def:weight-multiplicity-map}.
\begin{lemma}\label{lem:nice-element} There exist an element $g\in G^0$ and an eigenvalue tuple $\gamma\in(\Qellbar^\times)^\R$ of $g$ satisfying the following:
\begin{enum} \item $\gamma=(\gamma_1,\ldots,\gamma_\R)$ lies in $(\Qbar^\times)^\R$ and thus $\det(g)=\gamma_1\cdots\gamma_\R$ lies in $\Qbar^\times$;
\item\label{lem:item:ne-det} $|\iota(\det(g))|^2=(1/q)^w$ for some $w\neq 0$ and every field embedding $\iota\colon\Qbar\to\bbC$; \item $c=\wm{\R}(\gamma)$ satisfies $\len(c)\leq r+1$ and $1=c_{\len(c)}<c_{\len(c)-1}$ and $c_2\leq r$. \end{enum} \end{lemma}
\begin{proof} This follows from Proposition~\ref{prop:middle-extension-monodromy}.\ref{item:prop:mem-nonzero-weights} with $g=f^c$ for any element $f\in\Frob_{\Fq,\one} $ and for $c={[G:G^0]}$. More precisely, if $\alpha=(\alpha_1,\ldots,\alpha_\R)$ is an eigenvalue tuple of $f$, then all the $\alpha_i$ lie in $\Qbar$, all the non-zero weights $w_1,\ldots,w_n$ of the $\alpha_i$ are negative since $W(\infty)^\unip$ vanishes, one has $1\leq n\leq r$ since $1\leq \dim(W(0)^\unip)\leq r$, there is a unique non-zero weight of multiplicity one since $W(0)^\unip$ has a unique unipotent block of exact multiplicity one, and the weight zero has multiplicity $\R-n\geq \R-r>1$. Hence it suffices to take $\gamma\in(\Qbar^\times)^\R$ to be the eigenvalue tuple with $\gamma_i=\alpha_i^c$ for $1\leq i\leq \R$ and $w$ to be $(w_1+\cdots+w_n)c$. \end{proof}
\begin{cor}\label{cor:det-H} $\det(H)$ equals $\Qellbar^\times$. \end{cor}
\begin{proof} Follows from Lemma~\ref{lem:nice-element}.\ref{lem:item:ne-det} and the argument in \cite[Proof of Th.~17.1]{Katz:CE} using the element $g$ in Lemma~\ref{lem:nice-element}. \end{proof}
Let $[G^0,G^0]$ be the derived subgroup of $G^0$.
\begin{lemma}\label{lem:derived-of-G^0} $[G^0,G^0]$ equals $\SL_{\R,\Qellbar}$. \end{lemma}
\begin{proof} Combine Lemmas~\ref{lem:lie-irreducible} and \ref{lem:nice-element} to deduce that the hypotheses of Theorem~\ref{thm:big-monodromy} hold, and thus $G^0$ equals one of $\SL_\R(\Qellbar)$ or $\GL_\R(\Qellbar)$. The derived subgroup of both of these groups equals $\SL_\R(\Qellbar)$. \end{proof}
We may now complete the proof of the theorem. First, we have inclusions $$
[G^0,G^0]
\seq
[G,G]
\seq
[\GL_{\R,\Qellbar},\GL_{\R,\Qellbar}]
=
\SL_{\R,\Qellbar}, $$ and Lemma~\ref{lem:derived-of-G^0} implies the outer terms are equal, so the inclusions are equalities. Moreover, Lemma~\ref{lem:G/H-is-abelian} implies $H$ is normal in $G$ and $G/H$ is abelian, so $H$ contains $[G,G]=\SL_{\R,\Qellbar}$, and hence, by Corollary~\ref{cor:det-H}, $H=\GL_{\R,\Qellbar}$ as claimed.
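The last step is the familiar fact that a subgroup containing $\SL_\R$ and having full determinant image must be all of $\GL_\R$. A finite-field shadow of the underlying order count (illustrative only, with $\GL_2(\bbF_q)$ in place of $\GL_{\R,\Qellbar}$):

```python
# Illustrative order count over a finite field: a subgroup H of GL_2(F_q) that
# contains SL_2(F_q) and has full determinant image already has order
# |GL_2(F_q)|, mirroring the step H >= [G,G] = SL and det(H) full => H = GL.
for q in [3, 5, 7]:
    gl = (q**2 - 1) * (q**2 - q)                 # |GL_2(F_q)|
    sl = gl // (q - 1)                           # |SL_2(F_q)| = |GL_2| / |det image|
    det_image = q - 1                            # hypothesis: det(H) is all of F_q^x
    assert sl * det_image == gl
```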
\subsection{Frobenius reciprocity}
Let $\Q\colon T\to U$ be a finite \'etale map of smooth geometrically connected curves over $\Fq$. Let $\FF$ (resp.~$\GG$) be a lisse sheaf on $T$ (resp.~$U$) and $\piOneT\to\GL(V)$ (resp.~$\piOneU\to\GL(W)$) be the corresponding representation. Let $\FF^\vee$ be the dual of $\FF$ and $\piOneT\to\GL(V^\vee)$ be the corresponding representation.
\begin{lemma}\label{lem:dual-is-unambiguous} $\Q_*(\FF^\vee)$ is isomorphic to the dual of $\Q_*\FF$. \end{lemma}
\begin{proof} See \cite[Lem.~3.1.3]{Katz:TLFM}. \end{proof}
\noindent Therefore we may unambiguously write $\Q_*\FF^\vee$.
\begin{prop}\label{prop:frobenius-reciprocity} $
\dim(H^2_c(\Tbar,\Q^*\GG\otimes\FF^\vee))
=
\dim(H^2_c(\Ubar,\GG\otimes\Q_*\FF^\vee)). $ \end{prop}
\begin{proof} Let $H=\piOneTbar$ and $G=\piOneUbar$. We suppose that $V$ (resp.~$W$) is a left $H$-module (resp.~$G$-module), and define $\Ind_H^G(V)$ to be the (Mackey) induced module $\Hom_H(\Qellbar[G],V)$ and $\Res_H^G(W)$ to be the restricted module $W$ regarded as a left $H$-module. Then Frobenius reciprocity implies that there is a bijection of vector spaces $$
\Hom_H(\Res_H^G(W),V)
\to
\Hom_G(W,\Ind_H^G(V)) $$ given by $\psi\mapsto (w\mapsto (r\mapsto \psi(rw)))$ (cf.~\cite[\S 3.0]{Katz:TLFM}). Moreover, Lemma~\ref{lem:detecting-FG-isomorphisms} implies that $$
\dim(H^2_c(\Tbar,\Q^*\GG\otimes \FF^\vee))
=
\dim(\Hom_H(\Res_H^G(W),V)) $$ and that $$
\dim(H^2_c(\Ubar,\GG\otimes\Q_*\FF^\vee))
=
\dim(\Hom_G(W,\Ind_H^G(V))), $$ so the proposition follows immediately. \end{proof}
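The classical finite-group shadow of this reciprocity can be checked numerically. The following script (an illustration only; it verifies character-theoretic Frobenius reciprocity for $G=S_3$, $H=A_3$, not the sheaf-theoretic statement above) compares $\langle\mathrm{Res}\,\chi_W,\chi_V\rangle_H$ with $\langle\chi_W,\mathrm{Ind}\,\chi_V\rangle_G$:

```python
# Frobenius reciprocity in its finite-group form:
# dim Hom_H(Res W, V) = dim Hom_G(W, Ind V), computed as character inner
# products.  Toy case: G = S_3 (permutations of {0,1,2}), H = A_3.
import cmath
from itertools import permutations

def compose(p, q):                               # (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, j in enumerate(p):
        inv[j] = i
    return tuple(inv)

G = [tuple(p) for p in permutations(range(3))]   # S_3
H = [(0, 1, 2), (1, 2, 0), (2, 0, 1)]            # A_3, cyclic of order 3

w = cmath.exp(2j * cmath.pi / 3)
chi_V = {(0, 1, 2): 1, (1, 2, 0): w, (2, 0, 1): w * w}  # faithful character of A_3

def chi_W(g):                                    # the 2-dimensional irrep of S_3
    fixed = sum(1 for i in range(3) if g[i] == i)
    return fixed - 1                             # permutation character minus trivial

def induced(chi, g):                             # standard induced-character formula
    vals = [chi[c] for x in G
            if (c := compose(inverse(x), compose(g, x))) in chi]
    return sum(vals) / len(H)

lhs = sum(chi_W(h) * chi_V[h].conjugate() for h in H) / len(H)
rhs = sum(chi_W(g) * induced(chi_V, g).conjugate() for g in G) / len(G)
assert abs(lhs - rhs) < 1e-9 and abs(lhs - 1) < 1e-9
```

Both inner products equal $1$: the character of $A_3$ induces the two-dimensional irreducible representation of $S_3$.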
\subsection{Begetting simplicity}
In this section we give a criterion for $\Ind(\rhochi)$ to be geometrically simple. Our argument was inspired by \cite[Proof of Th.~5.1.1]{Katz:QKR}.
\begin{prop}\label{prop:induced-simplicity} Let $\dc\in\PhiQDistinct$. Suppose that $\gcd(\Q,s)=t$, that $\deg(\Q)\geq 2$, and that $\dc(\Gamma(t))=\OneSpace$. If $\rho$ is geometrically simple, then so are $\rhochi$ and $\Ind(\rhochi)$. \end{prop}
\begin{proof} Let $T\seq\Ponet$ be a dense Zariski open subset and $U=\Q(T)$. Up to shrinking $T$, we suppose that $\FF=\ME{\rhochi}$ is lisse over $T$ and that $\Q$ is \'etale over $U$.
Suppose that $\rho$ is geometrically simple and thus so is $\rhochi$. Let $\GG=\Q_*\FF^\vee$ (cf.~Lemma~\ref{lem:dual-is-unambiguous}), and observe that Lemma~\ref{lem:direct-image-of-middle-extension}.\ref{lem:ind-is-me} implies that $\GG$ and $\ME{\Ind(\rhochi)}^\vee$ are isomorphic over $U$. We wish to show that $\dim(H^2_c(\Ubar,\GG\otimes\GG^\vee))=1$ so that Lemma~\ref{lem:detecting-FG-isomorphisms} implies that $\ME{\Ind(\rhochi)}$ is geometrically simple over $U$, that is, that $\Ind(\rhochi)$ is geometrically simple. In fact, Lemma~\ref{lem:birational-invariance-of-H^2_c} and Proposition~\ref{prop:frobenius-reciprocity} imply that $$
\dim(H^2_c(\PoneuBar,\GG\otimes\GG^\vee))
=
\dim(H^2_c(\Ubar,\Q_*\FF\otimes\Q_*\FF^\vee))
=
\dim(H^2_c(\Tbar,\Q^*\Q_*\FF\otimes\FF^\vee)), $$ so it suffices to show the last term equals 1.
The functor $\Q^*$ is left adjoint to the functor $\Q_*$ since $\Q$ is finite (cf.~\cite[II.3.14]{Milne}), so the identity map $\Q_*\FF\to\Q_*\FF$ induces an adjoint map $\Q^*\Q_*\FF\to\FF$. Generically it is the trace map $\Ind(V_\dc)\to V_\dc$ and thus is surjective (cf.~\cite[V.1.12]{Milne}). Let $\KK$ be the kernel so that we have an exact sequence of sheaves \begin{equation}\label{eqn:untensored-sequence}
0\to \KK\to \Q^*\Q_*\FF\to\FF\to 0. \end{equation} These sheaves and $\FF^\vee$ are all lisse over $T$ and free, so the sequence \begin{equation}\label{eqn:tensor-sequence}
0\to \KK\otimes\FF^\vee\to \Q^*\Q_*\FF\otimes\FF^\vee\to\FF\otimes\FF^\vee\to 0 \end{equation} is exact on $T$. In particular, we have a corresponding exact sequence of cohomology $$
H^2_c(\Tbar,\KK\otimes\FF^\vee)
\to H^2_c(\Tbar,\Q^*\Q_*\FF\otimes\FF^\vee)
\to H^2_c(\Tbar,\FF\otimes\FF^\vee)
\to H^3_c(\Tbar,\KK\otimes\FF^\vee) $$ the last term of which vanishes. The hypothesis that $\FF$ is geometrically simple implies the penultimate term has dimension 1 by Lemma~\ref{lem:detecting-FG-isomorphisms}, so it suffices to show that the first term vanishes.
Let $\EFq/\Fq$ be a splitting field of $\Q$, let $a_1,\ldots,a_n\in\EFq$ be the zeros of $\Q$, and let $$
(\dc_1,\ldots,\dc_n)
=
(\sigma_\EFq^\vee)^{-1}(\nu_{\EFq}^{\,\prime\,\vee}(\dc))
\in\Hom(\EFq^\times,\bbC^\times)^n $$ as in \eqref{eqn:identifying-PhiQ}. We suppose without loss of generality that $a_1=0$ and thus $s(a_2)\cdots s(a_n)\neq 0$ since $\gcd(\Q,\s)=t$.
Let $G=\piOneTbar$ and $H=\piOneUbar$, and let $G\to\GL(V_\dc)$ and $H\to\GL(\Ind_H^G(V_\dc))$ be the representations corresponding to $\FF$ and $\Q_*\FF$ respectively. The exact sequences \eqref{eqn:untensored-sequence} and \eqref{eqn:tensor-sequence} correspond to exact sequences of $G$-modules \begin{equation}\label{eqn:untensored-sequence:bis}
0\to K\to R\to V_\dc\to 0 \end{equation} and $$
0\to K\otimes V_\dc^\vee\to R\otimes V_\dc^\vee\to V_\dc\otimes V_\dc^\vee\to 0 $$ where $R=\Res_H^G(\Ind_H^G(V_\dc))$. We claim the first term of the latter sequence has no $I(0)$-coinvariants so a fortiori has no $\piOneTbar$-coinvariants, and hence $H^2_c(\Tbar,\KK\otimes\FF^\vee)$ vanishes as claimed.
The translation map $t\mapsto t+a_i$ induces an isomorphism $I(0)\simeq I(a_i)$ for each $i\in[n]$, so we can regard $V_\dc(a_i)$ as an $I(0)$-module. In fact, we have isomorphisms of $I(0)$-modules $$
R(0)
\simeq
\bigoplus_{i=1}^n V_\dc(a_i)
,\quad
K(0)
\simeq
\bigoplus_{i=2}^n V_\dc(a_i)
,\quad
(K\otimes V_\dc^\vee)(0)
\simeq
\bigoplus_{i=2}^n(\Qellbar^{r-1}\otimes \dc_i^{-1}). $$ More precisely, the first isomorphism corresponds to the fact that the geometric fibers of $\Q^*\Q_*\FF$ and $\FF$ satisfy $(\Q^*\Q_*\FF)_0=\oplus_{\Q(a)=0}\FF_a$ since $\Q$ is \etale{} over $u=0$ (cf.~\cite[II.3.5]{Milne}); the second isomorphism uses \eqref{eqn:untensored-sequence:bis} and the assumption that $a_1=0$ to identify $K(0)$ with $R(0)/V_\dc(0)$; and the last isomorphism uses that $s(a_2)\cdots s(a_n)\neq 0$, that is, $\CC\ssm\{a_1\}$ lies in the locus of lisse reduction of $\ME{\rhochi}^\vee$.
The hypothesis that $\Gamma(t)$ is in the kernel of $\dc$ implies that $V_\dc(0)\simeq V(0)$ as $I(0)$-modules. Moreover, $\dc_2,\ldots,\dc_n$ are all non-trivial since they are distinct from the trivial character $\dc_1$ by hypothesis, so each of the summands $(\Qellbar^{r-1}\otimes \dc_i^{-1})$ has \emph{trivial} $I(0)$-coinvariants. Therefore $K\otimes V_\dc^\vee$ has trivial $\piOneTbar$-coinvariants as claimed. \end{proof}
\subsection{Preserving unipotent blocks}
For each monic divisor $\Qp$ of $\Q$ in $\Fq[t]$, consider the subset $$
\PhiQpGood\rho
=
\{\,
\dc\in\PhiQp
:
\ME{\rhochi}
\mbox{ is supported on }\Aonet[1/\Qp]
\,\}. $$ If $\rho$ is the trivial representation, then $\PhiQpGood\rho$ consists of the odd primitive characters of conductor $\Qp$.
For $t=0,\infty$, let $V_\dc(t)$ denote $V_\dc$ regarded as an $I(t)$-module. Similarly, for $u=0,\infty$, let $\Ind(V_\dc)(u)$ denote $\Ind(V_\dc)$ regarded as an $I(u)$-module, and let $\Ind(V_\dc)(u)^\unip$ be the maximal submodule of $\Ind(V_\dc)(u)$ where $I(u)$ acts unipotently. We say that $\Ind(V_\dc)(0)$ (resp.~$V_\dc(0)$) has a \defi{unipotent block of dimension $e$ and exact multiplicity $m$} iff it has an $I(0)$-submodule isomorphic to $U(e)^{\oplus m}$ but no $I(0)$-submodule isomorphic to $U(e)^{\oplus m+1}$.
\begin{lemma}\label{lem:induced-unipotent} Suppose $\gcd(\Q,s)=t$, and let $\Qp=\Q/t$ and $\dc\in\PhiQDistinct\cap\PhiQpGood\rho$. Then
\begin{enum} \item\label{item:unip-0} $\Ind(V_\dc)(0)$ has a unipotent block of dimension $e$ and exact multiplicity $m$ if and only if $V(0)$ does;
\item\label{item:unip-infty} $\Ind(V_\dc)(\infty)^\unip=\ZeroSpace$.
\end{enum} \end{lemma}
\begin{proof} On one hand, $V_\dc(z)^\unip=\ZeroSpace$ for every $z\in\CC\ssm\{0\}$ since $\dc$ is in $\PhiQpGood\rho$ and $\gcd(\Qp,\s)=1$. Moreover, $V_\dc(0)$ and $V(0)$ are isomorphic as $I(0)$-modules since $\dc(\Bt)=\OneSpace$. Therefore the only unipotent blocks of $\Ind(V_\dc)(0)$ are those coming from $V_\dc(0)$, and all such blocks contribute identical blocks to $V_\dc(0)$, so \eqref{item:unip-0} holds. On the other hand, every unipotent block of $\Ind(V_\dc)(\infty)$ contributes to $V_\dc(\infty)^\unip$, and the latter vanishes since $\dc$ is $\rho$-primitive, so \eqref{item:unip-infty} holds. \end{proof}
\subsection{Proof of Theorem~\ref{thm:is-equidistributed}}\label{subsec:proof-of-equidistribution-theorem}
Recall that $\R$ is given by \begin{equation}\label{eqn:R-recall}
\R
:=
\rC(\rho)
=
(\deg(\Q)+1)r + \deg(L(T,\rho)) - \dropCee{\rho} \end{equation} and it equals $\deg(\LC(T,\rhochi))$ for all $\dc\in\PhiQ$ (see Theorem~\ref{thmB}).
\begin{lemma}\label{lem:R-lower-bound} $\R>72(r^2+1)^2$. \end{lemma}
\begin{proof} Follows from \eqref{eqn:R-recall} and the hypothesis on $\deg(\Q)$ in the statement of the theorem. \end{proof}
Let $\Qp=\Q/t$.
\begin{lemma}\label{lem:induced-representation} Suppose $\dc\in\PhiQDistinct\cap\PhiQpGood\rho$. Then the following hold:
\begin{enum} \item\label{lem:item:ir-simple} $\Ind(\rhochi)$ is geometrically simple; \item\label{lem:item:ir-0-unipotent} $\dim(\Ind(V_\dc)(0)^\unip)=\dim(V_\dc(0)^\unip)$ and $\Ind(V_\dc)(0)$ has a unique unipotent block of exact multiplicity one; \item\label{lem:item:ir-infty-unipotent} $\Ind(V_\dc)(\infty)^\unip=\ZeroSpace$. \end{enum} \end{lemma}
\begin{proof} Part~\eqref{lem:item:ir-simple} follows from Proposition~\ref{prop:induced-simplicity} since $\dc$ is in $\PhiQDistinct\cap\PhiQp$, since $\rho$ is geometrically simple, and since $\deg(\Q)\geq 2$. Parts~\eqref{lem:item:ir-0-unipotent} and \eqref{lem:item:ir-infty-unipotent} follow from Lemma~\ref{lem:induced-unipotent} since $\dc$ is also in $\PhiQpGood\rho$ and since $V(0)$ has a unique unipotent block of exact multiplicity one. \end{proof}
\begin{cor}\label{cor:big-subset} $(\PhiQDistinct\cap\PhiQpGood\rho)\seq\PhiQBig\rho$. \end{cor}
\begin{proof} Let $\dc\in\PhiQDistinct\cap\PhiQpGood\rho$, and let $\theta=\Ind(\rhochi)$ and $W=\Ind(V_\dc)$. Then Lemmas~\ref{lem:induced-representation} and \ref{lem:Ind-rhochi-pure} imply that $\theta=\Ind(\rhochi)$ is geometrically simple and punctually pure of weight $w$ since $\dc\in\PhiQDistinct$. Moreover, $\dim(W)=\deg(\Q)\cdot\dim(V)>1$ since $\deg(\Q)\geq 2$, and Proposition~\ref{prop:no-nontrivial-invariant-scalars} implies that $\lambda=1$ is the only invariant scalar of $\ME{\theta}\simeq\Q_*\ME{\rhochi}$ since $\deg(\Q)\geq 3$ and $\dc\in\PhiQDistinct$. Lemma~\ref{lem:induced-representation} also implies that $W(0)$ has a unique unipotent block of exact multiplicity one, that $\dim(W(0)^\unip)=\dim(V(0)^\unip)\leq\dim(V)=r$, and that $W(\infty)^\unip=\ZeroSpace$. Finally, Lemma~\ref{lem:R-lower-bound} implies $\R>72(r^2+1)^2$. Therefore the hypotheses of Theorem~\ref{thm:baby-monodromy-theorem} hold, and hence $\dc\in\PhiQBig\rho$. \end{proof}
\begin{cor}\label{cor:big-subset-of-big} $(\PhiQDistinct\cap\PhiQpGood\rho)\PhiUNu\seq\PhiQBig\rho$. \end{cor}
\begin{proof} Follows from Corollary~\ref{cor:big-subset} since $\PhiQBig\rho$ is a union of cosets $\dc\PhiUNu$. \end{proof}
Let $\dc\in\PhiQ$ and $\dc\PhiUNu$ be the corresponding coset.
\begin{lemma}\label{lem:unique-alpha}
$|\dc\PhiUNu\cap\PhiQp|=1$. \end{lemma}
\begin{proof} We must show that there is a unique element $\alpha\in\PhiU$ satisfying $\dc\alpha^\nu(\Gamma(t))=\OneSpace$. Since $\gcd(\s,\Q)=t$, we can speak of the component of $\dc$ at $t=0$: it is the character given by restricting $\dc$ to the subgroup $\Gamma(t)\seq\BQ$. There is a unique element of $\PhiUNu$ with the same component at $t=0$, call it $\beta^\nu$. Then $\alpha=1/\beta$ is the desired character. \end{proof}
We need one more estimate to complete the proof of the theorem.
\begin{lemma}\label{lem:counting-big} $
|\PhiQDistinct\cap\PhiQpGood\rho|
\sim
|\PhiQpDistinct|
\sim
|\PhiQp|
\mbox{ as }
q\to\infty. $ \end{lemma}
\begin{proof} We observe that there are natural inclusions $$
\left(\PhiQpDistinct\ssm\cup_{\pi\mid\Qp}\PhiOf{\Qp/\pi}\right)
\seq
(\PhiQDistinct\cap\PhiQp)
\seq
\PhiQpDistinct $$ since an element of $\PhiQpDistinct$ will fail to lie in $\PhiQDistinct$ only if one of its $\deg(\Qp)$ components is trivial, that is, if it lies in $\PhiOf{\Qp/\pi}$ for some prime factor $\pi\mid\Qp$. Intersecting with $\PhiQpGood\rho$ gives further inclusions $$
\left((\PhiQpGood\rho\cap\PhiQpDistinct)\ssm\cup_{\pi\mid\Qp}\PhiOf{\Qp/\pi}\right)
\seq
(\PhiQDistinct\cap\PhiQpGood\rho)
\seq
\PhiQpDistinct. $$ Finally, we know that $$
|\PhiQpGood\rho|
\overset{\mathrm{Lem.~}\ref{lem:Qp-bn-bound}}\sim
|\PhiQp|
\overset{\mathrm{Cor.~}\ref{cor:size-of-Qp-distinct}}\sim
|\PhiQpDistinct|
,\quad
|\cup_{\pi\mid\Qp}\PhiOf{\Qp/\pi}|/|\PhiQp| \ll 1/q = o(1) $$ and hence $$
\left|(\PhiQpGood\rho\cap\PhiQpDistinct)\ssm\cup_{\pi\mid\Qp}\PhiOf{\Qp/\pi}\right|
\sim
|\PhiQp| $$ as $q\to\infty$. \end{proof}
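The union bound used here is transparent in the simplest case. The following check is an illustration only; it assumes $\Qp$ splits into $n$ distinct monic linear factors over $\Fq$, so that $|\PhiOf{M}|=|(\Fq[t]/M)^\times|=(q-1)^{\deg(M)}$ for every monic $M\mid\Qp$:

```python
# Illustrative union bound (toy case): if Qp is a product of n distinct linear
# factors over F_q, the n sets Phi_{Qp/pi} occupy at most an n/(q-1) fraction
# of Phi_{Qp}, which is O(1/q) and hence o(1) as q grows.
from fractions import Fraction

n = 5
for q in [7, 49, 343]:
    phi_Qp = (q - 1)**n                          # |Phi_{Qp}| = (q-1)^deg(Qp)
    union_bound = n * (q - 1)**(n - 1)           # n sets of size (q-1)^(n-1)
    ratio = Fraction(union_bound, phi_Qp)
    assert ratio == Fraction(n, q - 1)           # tends to 0 as q -> infinity
```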
\begin{cor}\label{cor:counting-big} $
|(\PhiQDistinct\cap\PhiQpGood\rho)\PhiUNu|
\sim
|\PhiQ| $ for $q\to\infty$. \end{cor}
\begin{proof} Combine Lemma~\ref{lem:unique-alpha} and Lemma~\ref{lem:counting-big}. \end{proof}
\noindent The theorem now follows by observing that $$
|\PhiQ|
\overset{\mathrm{Cor.~}\ref{cor:counting-big}}
\sim
|(\PhiQDistinct\cap\PhiQpGood\rho)\PhiUNu|
\overset{\mathrm{Cor.~}\ref{cor:big-subset-of-big}}
\leq
|\PhiQBig\rho|
\leq
|\PhiQ| $$ and thus $$
|\PhiQBig\rho|
\sim
|\PhiQ| $$ for $q\to\infty$.
\bs\noindent $\therefore$\ \ The Mellin transform of $\rho$ has big monodromy as claimed and Theorem~\ref{thm:is-equidistributed} holds.
\section{Application to Explicit Abelian Varieties}\label{sec:explicit-abelian-varieties}
In this section we apply the theory developed in the previous sections to representations coming from (the Tate modules of) a general class of abelian varieties. More precisely, we give an explicit family of abelian varieties for which we can show the corresponding representations satisfy the hypotheses of Theorem~\ref{thm:is-equidistributed}. Our principal application, of which Theorem~\ref{thm:intro-theorem} is a special case, is Theorem~\ref{thm:application}.
Throughout this section we suppose that $q$ is an odd prime power so that we can speak of hyperelliptic curves. A reader interested in even characteristic or in $L$-functions whose Euler factors have odd degree is encouraged to consider Kloosterman sheaves (e.g., see \cite[7.3.2]{Katz:GKM}).
\subsection{Some hyperelliptic curves and their Jacobians}\label{Hyperelliptic curves and their Jacobians}
Let $g$ be a positive integer. In this section we construct an explicit family of abelian varieties which give rise to Galois representations we can easily show satisfy the hypotheses of Theorem~\ref{thm:variance-estimate}. One member of this family is an elliptic curve, the Legendre curve, and it has affine model $$
\Xleg : y^2 = x(x-1)(x-t). $$ It is isomorphic to its own Jacobian, and the general abelian varieties in our family will be Jacobians of curves. More precisely, we fix a monic square free $f\in\Fq[x]$ of degree $2g$ and consider the projective plane curve $X/K$ with affine model \begin{equation}\label{eq:X}
X \colon y^2 = f(x)(x-t). \end{equation} For technical reasons we will eventually suppose that $f$ has a zero $a$ in $\Fq$, and up to the change of variables $x\mapsto x+a$, we will suppose that $a=0$. We do not need this hypothesis yet since the discussion in this section does not use it.
The curve $X$ has genus $g$. If $g>1$, it is a so-called hyperelliptic curve, and otherwise it is an elliptic curve. Either way its Jacobian $J$ is a $g$-dimensional principally polarized abelian variety over $K$. See \cite{handbook:ecc} for more information about hyperelliptic curves and their Jacobians.
For each finite place $v=\pi$, one can define a reduction $X/\Fpi$ starting with the reduction of \eqref{eq:X} modulo $\pi$.
\begin{lemma}\label{lem:hyper-reduction} The monic polynomial $\s=f(t)\in\Fq[t]$ satisfies the following: \begin{enum} \item if $\pi\nmid\s$, then $X/\Fpi$ is a smooth projective curve of genus $g$; \item if $\pi\mid\s$, then $X/\Fpi$ is smooth away from a single node and has genus $g-1$. \end{enum} \end{lemma}
\begin{proof} The essential point is that, for any monic polynomial $h(x)$ with coefficients in a field $F$ of characteristic not two, the affine curve $y^2=h(x)$ is smooth iff $h$ is a square free polynomial. More generally, if $h=h_1h_2^2$ where $h_1,h_2\in F[x]$ are square free and relatively prime, then the following hold: \begin{enum} \item the map $(x,y)\mapsto (x,y/h_2(x))$ induces a birational map from $y^2=h_1(x)$ to $y^2=h(x)$; \item the $\deg(h_2)$ points $(x,y)$ satisfying $h_2(x)=y=0$ are so-called nodes of $y^2=h(x)$; \item the map in (1) corresponds to blowing up the nodes in (2); \item the curve $y^2=h_1(x)$ is smooth of genus $\lfloor(\deg(h_1)-1)/2\rfloor$ since $h_1$ is square free; \item both curves have one (resp.~two) points at infinity if $\deg(h)$ is odd (resp.~even). \end{enum}
\noindent (Compare \cite[Ex.~I.5.6]{Hartshorne}.) The proof of the lemma will consist of showing that we are in this general situation.
Let $t_0\in\Fpi$ satisfy $t\equiv t_0\bmod\pi$, and let $h_0(x):=f(x)(x-t_0)\in\Fpi[x]$. The polynomial $f(x)$ is square free by hypothesis, so $h_0(x)$ is square free iff $f(t_0)\neq 0$, or equivalently, $\pi\nmid\s$. In particular, if $\pi\nmid\s$, then $h_0$ is square free and $y^2=h_0(x)$ is smooth of genus $g$. Otherwise, $h_0=h_1h_2^2$ where $h_1=f/(x-t_0)$ and $h_2=x-t_0$ are coprime (since $f$ is square free), and thus $y^2=h_0(x)$ is smooth away from the node $(t_0,0)$ and birational to the curve $y^2=h_1(x)$ which is smooth of genus $g-1$. \end{proof}
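The genus bookkeeping in the proof amounts to the formula $g(y^2=h_1(x))=\lfloor(\deg(h_1)-1)/2\rfloor$; the following toy check with $g=2$ (an illustration only, with an arbitrary choice of roots for $f$) confirms the two cases of the lemma:

```python
# Illustrative genus count for the reduction y^2 = f(x)(x - t0) with f square
# free of degree 2g: smooth of genus g when f(t0) != 0, and one node whose
# blow-up has genus g - 1 when f(t0) = 0, via g(y^2 = h1) = (deg(h1) - 1)//2.

def genus_of_squarefree(deg_h1):
    return (deg_h1 - 1) // 2                     # smooth hyperelliptic genus formula

g = 2
roots_f = [0, 1, 2, 5]                           # distinct roots: f square free, deg 2g

for t0 in range(7):
    if t0 in roots_f:                            # h0 = h1 * h2^2 with h2 = x - t0
        deg_h1, nodes = 2 * g - 1, 1             # h1 = f / (x - t0)
    else:                                        # h0 = f(x)(x - t0) is square free
        deg_h1, nodes = 2 * g + 1, 0
    assert genus_of_squarefree(deg_h1) == g - nodes
```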
\begin{remark}\label{rmk:claim-about-reduction-at-infinity} One can also define a reduction $X/\Finf$ by writing $t=1/u$ and clearing denominators, and one eventually finds that $X/\Finf$ has genus zero. However, the arguments are subtler and beyond the scope of this article, so we omit them. \end{remark}
For example, $\Xleg$ has smooth reduction away from $t=0,1,\infty$; over $t=0,1$ its reduction has a so-called node, and over $t=\infty$ it has a so-called cusp. Since it is isomorphic to its Jacobian, one sometimes refers to these as good, multiplicative, and additive reduction respectively. However, in general, one needs to construct separately reductions $J/\Fpi$, for every $\pi$, and also a reduction $J/\Finf$.
\begin{lemma}\label{lem:J_pi}\ \begin{enum} \item If $\pi\nmid\s$, then $J/\Fpi$ is the Jacobian of $X/\Fpi$ so is a $g$-dimensional abelian variety; \item If $\pi\mid\s$, then $J/\Fpi$ is an extension of an abelian variety by a one-dimensional torus. \end{enum} \end{lemma}
\begin{proof} Both statements are easy consequences of Lemma~\ref{lem:hyper-reduction}. More precisely, if $X/\Fpi$ is projective and smooth away from $n$ nodes, then $J/\Fpi$ is an extension of a $(g-n)$-dimensional abelian variety by an $n$-dimensional torus. See \cite[9.2.8]{BLR} and keep in mind Lemma~\ref{lem:hyper-reduction}. \end{proof}
\begin{remark}\label{rmk:reduction-of-J-at-infty} One can also show that $J/\Finf$ is a $g$-dimensional additive linear algebraic group, but demonstrating it directly is harder and requires a finer statement than the claim in Remark~\ref{rmk:claim-about-reduction-at-infinity}. \end{remark}
One can regard the various reductions of $J$ as the special fibers of the (identity component of the) N\'eron model of $J/K$ over $\Ponet$. However, for our purposes, Lemma~\ref{lem:J_pi} contains all the information we need about the model. More precisely, we only need to know the respective dimensions $g_\pi$, $m_\pi$, and $a_\pi$ of the good, multiplicative, and additive parts of $J/\Fpi$. Thus \begin{equation}\label{eqn:gma-values}
(g_\pi,m_\pi,a_\pi)
=
\begin{cases}
(g,0,0) & \mbox{if }\pi\nmid\s \\
(g-1,1,0) & \mbox{if }\pi\mid\s
\end{cases} \end{equation} by Lemma~\ref{lem:J_pi}. In \S\ref{sec:tate-modules} we will show that $$
(g_\infty,m_\infty,a_\infty) = (0,0,g) $$ as claimed in Remark~\ref{rmk:reduction-of-J-at-infty}.
\subsection{Tate modules}\label{sec:tate-modules}
Let $\ell$ be a prime distinct from the characteristic $p$ of $\Fq$. For each $m\geq 0$, let $J[\ell^m]\seq J(\bar{K})$ be the subgroup of $\ell^m$-torsion; it is isomorphic to $(\bbZ/\ell^m)^{2g}$ and hence is a finite Galois module. Multiplication by $\ell$ induces an epimorphism $J[\ell^{m+1}]\onto J[\ell^m]$, for each $m$, and the $\Zell$-Tate module of $J$ is the projective limit $$
T_\ell(J) := \varprojlim J[\ell^m]. $$ Concretely one can regard $T_\ell(J)$ as the set $$
\{\,
(P_0,P_1,\ldots)
:
P_m\in J[\ell^m]\mbox{ and }\ell P_{m+1}=P_m\mbox{ for }m\geq 0
\,\}. $$ It is even a Galois $\bbZ_\ell$-module (since the action of $G_K$ and multiplication by $\ell$ commute), and it is isomorphic to $\Zell^{2g}$ as a $\Zell$-module (cf.~\cite[\S1]{SerreTate}).
Let $V$ be the vector space $T_\ell(J)\otimes_\Zell\Qellbar$ and $\GK\to\GL(V)$ be the corresponding Galois representation. For each $v\in\PP$, let $V(v)$ denote $V$ as an $I(v)$-module and let $V(v)^\unip$ be the maximal submodule where $I(v)$ acts unipotently.
\begin{prop}\label{prop:tate-module-invariant-dimensions} Let $v\in\PP$, and let $g_v$ and $m_v$ be the respective dimensions of the abelian and multiplicative parts of $J/\bbF_v$. Then $$
V(v)^\unip\simeq U(1)^{\oplus 2g_v}\oplus U(2)^{\oplus m_v}. $$ \end{prop}
\begin{proof} This is a general fact about Tate modules of abelian varieties. See \cite[Exp.~IX, \S2.1]{SGA7}. \end{proof}
Let $\SS=\{\pi\in\PP:\pi\mid\s\}\cup\{\infty\}$ where $\s=f(t)$ as in Lemma~\ref{lem:hyper-reduction}. Then by Proposition~\ref{prop:tate-module-invariant-dimensions}, the action of $G_K$ on $V$ induces a representation $$
\rho\colon\GKS\to\GL(V) $$ since $$
\dim(V^{I(v)})=\dim(V)=2g
\mbox{ for }
v\in\PP\ssm\SS $$ by \eqref{eqn:gma-values}.
\begin{lemma}\label{lem:numerical-invariants-for-J} $\rho$ is geometrically simple and punctually pure of weight one, and it satisfies $$
\dr_v(\rho)
=
\begin{cases}
0 & v\in\PP\ssm\SS \\
1 & v\in\SS\ssm\{\infty\} \\
2g & v=\infty
\end{cases},\quad
\swan(\rho) = 0. $$ \end{lemma}
\begin{proof} The values $\dr_v(\rho)$ for $v\neq\infty$ follow directly from \eqref{eqn:gma-values} since $$
\dr_v(\rho) = \dim(V) - \dim(V^{I(v)}) = 2g - 2g_v - m_v $$ by Proposition~\ref{prop:tate-module-invariant-dimensions}. For the assertions about geometric simplicity and weight and about $\dr_\infty(\rho)$ and $\swan(\rho)$ we refer to \cite[10.1.9 and 10.1.17]{KS} (cf.~\cite[\S 5]{Hall:BM} for a related discussion about $J[\ell]$). \end{proof}
\begin{cor} $L(T,J/K)=1$, that is, it is a polynomial and $\deg(L(T,J/K)) = 0$. \end{cor}
\begin{proof} The representation $\rho$ is geometrically simple and $\dim(V)=2g>0$, so $\rho$ has trivial geometric invariants. Moreover, it is punctually pure of weight $w=1$, so Theorem~\ref{thm:archimedean-bound} implies $L(T,\rho)$ is a polynomial of degree $$
\degL(\rho)
=
\dr(\rho)
+
\swan(\rho)
-
2\cdot\dim(\Vl)
\overset{\mathrm{Lem.~}\ref{lem:numerical-invariants-for-J}}{=}
(\deg(f)\cdot 1+1\cdot 2g)
+
0
-
2\cdot 2g
=
0 $$ as claimed. \end{proof}
Let $\Q\in\Fq[t]$ be monic and square-free and $\CC\sub\PP$ be the finite subset consisting of $\infty$ and $v(\pi)$ for every prime factor $\pi$ of $\Q$ (cf.~\S\ref{sec:twisted-l-functions}).
\begin{lemma}\label{lem:rhochi-properties} For every $\dc\in\PhiQ$, the representation $\rhochi$ is geometrically simple and punctually pure of weight one, and $\dc$ is not heavy. \end{lemma}
\begin{proof} Lemma~\ref{lem:rhochi-pure}.\ref{lem:item:rhochi-pure--simplicity} implies that $\rhochi$ is geometrically simple since $\rho$ is. Moreover, it has trivial geometric invariants since $\dim(V)=2g>1$, so $\dc$ is not heavy. Finally, Lemma~\ref{lem:rhochi-pure}.\ref{lem:item:rhochi-pure--purity} implies that it is punctually pure of weight $w=1$ since $\rho$ is. \end{proof}
\begin{cor} If $\dc\in\PhiQ$, then $\LC(T,\rhochi)$ is a polynomial and $$
\deg(\LC(T,\rhochi))
=
2g\cdot \deg(\Q) - \deg(\gcd(\Q,\s)). $$ \end{cor}
\begin{proof} By Lemma~\ref{lem:rhochi-properties} the hypotheses of Theorem~\ref{thmB} hold, and hence $\LC(T,\rhochi)$ is a polynomial of degree $$
\rC(\rho)
=
\deg(L(T,\rho)) + (\deg(\Q)+1)\dim(V) - \dropCee{\rho}
=
2g\cdot (\deg(\Q)+1) - \dr_{\CC\cap\SS}(\rho). $$ The corollary follows by observing that $$
\dr_{\CC\cap\SS}(\rho)
=
\sum_{v\in\CC\cap\SS} d_v\cdot\dr_v(\rho)
=
\deg(\gcd(\Q,\s))\cdot 1
+
\dr_\infty(\rho) $$ and that $\dr_\infty(\rho)=2g$. \end{proof}
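To illustrate the count (illustration only, with the example of \S\ref{sec:arithmetic-application} in mind), take $g=1$, $\s=t(t-1)$, and $\Q=t$: then $$
\deg(\LC(T,\rhochi))
=
2\cdot 1\cdot\deg(t) - \deg(\gcd(t,\,t(t-1)))
=
2 - 1
=
1. $$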
\subsection{Arithmetic application}\label{sec:arithmetic-application}
In this section we show how to apply our main theorem to the example given above.
The Euler factor at $v=\infty$ of the $L$-function of $J$ is trivial since $\dr_\infty(\rho)=\dim(V)$, and thus the complete $L$-function satisfies $$
L(T,J/K)
=
\prod_{\pi\in\AA}
L(T^{\deg(\pi)},J/\Fpi)^{-1}
=
\prod_{v\in\PP}
L(T^{d_v},\rho_v)^{-1}
=
L_{\{\infty\}}(T,\rho). $$ Similarly, for the partial $L$-function of $\rho$, we have $$
\LC(T,\rho)
=
\prod_{v\in\PP\ssm\CC}
L(T^{d_v},\rho_v)^{-1}
=
\prod_{\substack{\pi\in\AA\\\pi\nmid\Q}}
L(T^{\deg(\pi)},J/\Fpi)^{-1}. $$
For each $\pi\in\AA$, the Euler factor $L(T,J/\Fpi)^{-1}$ is the reciprocal of a polynomial with coefficients in $\bbZ$, and so satisfies $$
T\frac{d}{dT}\log(L(T,J/\Fpi))
=
\sum_{n=1}^\infty a_{\pi,n}T^n $$ for integers $a_{\pi,n}\in\bbZ$.
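For instance, if the Euler factor has the form $L(T,J/\Fpi)=\prod_{i}(1-\alpha_{\pi,i}T)$ for algebraic numbers $\alpha_{\pi,i}$ (e.g., for $\pi\nmid\s$, with the $\alpha_{\pi,i}$ the Frobenius eigenvalues), then expanding the logarithm termwise gives $$
T\frac{d}{dT}\log(L(T,J/\Fpi))
=
-\sum_{n=1}^\infty
\Bigl(\sum_i\alpha_{\pi,i}^n\Bigr)
T^n,
\quad\mbox{that is,}\quad
a_{\pi,n} = -\sum_i\alpha_{\pi,i}^n. $$ The power sums on the right are symmetric functions of the $\alpha_{\pi,i}$, whence $a_{\pi,n}\in\bbZ$.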
The complete $L$-function is also a polynomial with coefficients in $\bbZ$, and it satisfies $$
T\frac{d}{dT}\log(L(T,J/K))
=
T\frac{d}{dT}\log(L_{\{\infty\}}(T,\rho))
=
\sum_{n=1}^\infty
\left(
\sum_{f\in\MM_n}
\VM(f)
\right)
T^n $$ where $\VM\colon\MM\to\bbZ$ is the von Mangoldt function of $\rho$ defined in \eqref{eqn:von-mangoldt} by $$
\VM(f)
=
\begin{cases}
d\cdot a_{\pi,m} & f=\pi^m\mbox{ and }\pi\in\AA_d \\
0 & \mbox{otherwise}.
\end{cases} $$ Similarly, the partial $L$-function of $\rho$ is a polynomial with coefficients in $\bbZ$ and satisfies $$
T\frac{d}{dT}\log(\LC(T,\rho))
=
\sum_{n=1}^\infty
\left(
\sum_{\substack{f\in\MM_n\\\gcd(f,\Q)=1}}
\VM(f)
\right)
T^n. $$
For $A$ in $\BQ=(\Fq[t]/\Q\Fq[t])^\times$ and a positive integer $n$, we defined the sum $\SnAQ$ in \eqref{eqn:SnAQ} by $$
\SnAQ
=
\sum_{\substack{f\in\MM_n\\f\equiv A\bmod\Q}}
\VM(f). $$ We then defined the expected value and variance of this sum as $A$ varies uniformly over $\BQ$ by $$
\bbE_A[\SnAQ]
=
\frac{1}{\phi(\Q)}\sum_{A\in\BQ}\SnAQ,
\
\Var_A[\SnAQ]
=
\frac{1}{\phi(\Q)}\sum_{A\in\BQ} \left|\SnAQ - \bbE_A[\SnAQ]\right|^2 $$
respectively where $\phi(\Q)=|\BQ|$ (see \eqref{eqn:E-and-V}).
\begin{theorem}\label{thm:application} Suppose that $\gcd(\Q,\s)=t$ and that $\deg(\Q)>\frac{1}{2g}(72(4g^2+1)^2+1)$. Then $$
\phi(\Q)\cdot\bbE_A[\SnAQ]
\ =
\sum_{\substack{f\in\MM_n\\\gcd(f,\Q)=1}}
\VM(f)
\mbox{ and }
\lim_{q\to\infty}
\frac{\phi(\Q)}{q^{2n}}
\cdot
\Var_A[\SnAQ]
= \min\{n,2g\cdot\deg(\Q)-1\}. $$ \end{theorem}
\begin{proof} This will follow from Theorem~\ref{thm:variance-estimate} once we show that all the hypotheses of that theorem are met. Lemma~\ref{lem:rhochi-properties} implies that $\rho$ is punctually pure of weight $w=1$ and that $\PhiQAwful\rho$ is empty\footnote{There \emph{are} mixed characters, but as shown in the proof of Proposition~\ref{prop:var-estimate}, they do not contribute to the main term of the variance estimate.}. Moreover, Proposition~\ref{prop:tate-module-invariant-dimensions} implies that $V(0)$ has a unique unipotent block of dimension two and no other unipotent block of multiplicity one (since $2g-2\neq 1$), hence Theorem~\ref{thm:is-equidistributed} implies that the Mellin transform of $\rho$ has big monodromy since $\gcd(\Q,\s)=t$ and since $$
\deg(\Q)
>
\frac{1}{2g}(72((2g)^2+1)^2 - 2g - 0 + (1+2g))
=
\frac{1}{2g}(72(4g^2+1)^2+1). $$ Therefore the hypotheses of Theorem~\ref{thm:variance-estimate} hold as claimed. \end{proof}
Taking $g=1$ and $f=x(x-1)$ yields Theorem~\ref{thm:intro-theorem} from \S\ref{sec:introduction}.
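For the record, in this case the degree hypothesis of Theorem~\ref{thm:application} reads $$
\deg(\Q)
>
\tfrac{1}{2}(72(4+1)^2+1)
=
\tfrac{1801}{2}, $$ that is, $\deg(\Q)\geq 901$.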
\appendix
\section{Detecting a big subgroup of $\GL_R$}
\subsection{Weight multiplicity map}
Let $\iota\colon \bbQbar\to\bbC$ be a field embedding, $m$ be a positive integer, and $[m]=\{1,\ldots,m\}$.
\begin{defn} A \defi{weight partition map} of an element $\alpha=(\alpha_1,\ldots,\alpha_m)$ in $(\bbQbar^\times)^m$ is a map $w_\alpha\colon[m]\to[m]$ satisfying the following for every $i,j\in[m]$: $$
w_\alpha(i) = w_\alpha(j)
\mbox{ iff }
|\iota(\alpha_i)| = |\iota(\alpha_j)|;
\ \
|w_\alpha^{-1}(i)|\geq |w_\alpha^{-1}(j)|
\mbox{ if }
i\leq j. $$ \end{defn}
\noindent In general, $\alpha$ may have multiple weight partition maps, but all will have the same range and yield the same map $[m]\to\bbZ$ given by $i\mapsto |w_\alpha^{-1}(i)|$. In particular, if $w_\alpha$ is a weight partition map of $\alpha$ and if $\sigma\in\Sym(m)$, then the composed map $w_\alpha\sigma$ is also a weight partition map of $\alpha$.
\begin{defn}\label{def:weight-multiplicity-map} The \defi{$m$th weight multiplicity map} is the map $$
\wm{m}\colon (\bbQbar^\times)^m\to\bbZ^m $$
which sends an element $\alpha$ to the tuple $\l=(\l_1,\ldots,\l_m)$ satisfying $\l_i=|w_\alpha^{-1}(i)|$ for some weight partition map $w_\alpha$ and every $i\in[m]$. \end{defn}
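For example (illustration only), if $m=4$ and $\alpha=(2,-2,3,1)$, with $\iota$ restricting to the inclusion on these rational entries, then the fibers of any weight partition map $w_\alpha$ are $\{1,2\}$, $\{3\}$, $\{4\}$ up to relabeling, of sizes $2,1,1$, and hence $$
\wm{4}(\alpha)=(2,1,1,0). $$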
\begin{lemma} Let $\alpha,\beta\in(\bbQbar^\times)^m$, and let $s\in\bbQbar^\times$ and $\sigma\in\Sym(m)$. Suppose $\beta_i=s\alpha_{\sigma(i)}$ for every $i\in[m]$. Then $\wm{m}(\alpha)=\wm{m}(\beta)$. \end{lemma}
\begin{proof} Let $w_\alpha,w_\beta$ be respective weight partition maps of $\alpha,\beta$. Then for every $i,j\in[m]$, one has $$
w_\beta(i) = w_\beta(j)
\iff
|\iota(\beta_i)| = |\iota(\beta_j)|
\iff
|\iota(\alpha_{\sigma(i)})| = |\iota(\alpha_{\sigma(j)})|
\iff
w_\alpha\sigma(i) = w_\alpha\sigma(j). $$ In particular, $w_\alpha\sigma$ is a weight partition map of $\beta$, and its fibers have the same sizes as those of $w_\alpha$, so $\wm{m}(\alpha)=\wm{m}(\beta)$ as claimed. \end{proof}
\begin{defn} For any $\l=\wm{m}(\alpha)$, let $\len(\l)=\max\{1\leq i\leq m:\l_i\neq 0\}$. \end{defn}
\noindent Observe that $[\len(\l)]$ is the range of any weight partition map $w_\alpha$ of $\alpha$ and $(\l_1,\ldots,\l_{\len(\l)})$ is a partition of $m$.
\subsection{Tensor indecomposability}
Let $m,n\geq 2$ be integers, let $\alpha\in (\bbQbar^\times)^m$, $\beta\in (\bbQbar^\times)^n$, and $\gamma\in(\bbQbar^\times)^{mn}$ be elements, and let $a=\wm{m}(\alpha)$, $b=\wm{n}(\beta)$, $c=\wm{mn}(\gamma)$.
Suppose $\tau\colon[m]\times[n]\to [mn]$ is a bijection satisfying $$
\gamma_{\tau(i,j)} = \alpha_i\beta_j
\mbox{ for }
(i,j)\in[m]\times[n], $$ and let $w_\alpha,w_\beta,w_\gamma$ be weight partition maps of $\alpha,\beta,\gamma$ respectively.
\begin{lemma}\label{lem:def-kappa} There exists a unique map $[\len(a)]\times[\len(b)]\to[\len(c)]$ which makes the following diagram commute: $$
\xymatrix{
[m]\times [n]\ar[r]^\tau\ar[d]_{w_\alpha\times w_\beta}
& [mn]\ar[d]^{w_\gamma} \\
[\len(a)]\times[\len(b)]\ar[r]
& [\len(c)].
} $$ \end{lemma}
\begin{proof} To see that such a map exists observe that $w_\gamma\tau$ factors through $w_\alpha\times w_\beta$ since \begin{eqnarray*}
(w_\alpha\times w_\beta)(i_1,j_1)
=
(w_\alpha\times w_\beta)(i_2,j_2)
& \iff &
|\alpha_{i_1}|=|\alpha_{i_2}|\mbox{ and }|\beta_{j_1}|=|\beta_{j_2}| \\
& \Longrightarrow &
|\alpha_{i_1}\beta_{j_1}| = |\alpha_{i_2}\beta_{j_2}| \\
& \iff &
|\gamma_{\tau(i_1,j_1)}| = |\gamma_{\tau(i_2,j_2)}| \\
& \iff &
w_\gamma\tau(i_1,j_1) = w_\gamma\tau(i_2,j_2) \end{eqnarray*} for every $i_1,i_2\in[m]$ and $j_1,j_2\in[n]$. To see that the map is unique, observe that the left vertical map of the diagram is surjective and that the map must satisfy $l\mapsto w_\gamma\tau(i,j)$ for any $(i,j)$ in $(w_\alpha\times w_\beta)^{-1}(l)$. \end{proof}
Let $\kappa\colon [\len(a)]\times[\len(b)]\to[\len(c)]$ be the map of Lemma~\ref{lem:def-kappa}.
\begin{lemma}\label{lem:kappa-is-injective} For each $l\in[\len(a)]$, the restriction of $\kappa$ to $\{l\}\times[\len(b)]$ is injective. \end{lemma}
\begin{proof} Recall that $[\len(a)]$ and $[\len(b)]$ are the respective ranges of $w_\alpha$ and $w_\beta$, so suppose $i\in[m]$ and $j_1,j_2\in[n]$. Then one has \begin{eqnarray*}
\kappa(w_\alpha(i),w_\beta(j_1))=\kappa(w_\alpha(i),w_\beta(j_2))
& \iff &
w_\gamma\tau(i,j_1) = w_\gamma\tau(i,j_2) \\
& \iff &
|\gamma_{\tau(i,j_1)}| = |\gamma_{\tau(i,j_2)}| \\
& \iff &
|\alpha_i\beta_{j_1}| = |\alpha_i\beta_{j_2}| \\
& \iff &
w_\beta(j_1)=w_\beta(j_2), \end{eqnarray*} and thus the restriction of $\kappa$ to $\{w_\alpha(i)\}\times[\len(b)]$ is injective as claimed. \end{proof}
Let $r$ be a positive integer.
\begin{lemma}\label{lem:parts-bounded-by-r}\
\begin{enum} \item\label{item:c-len-to-ab-lens} If $c_{\len(c)}\leq r$, then $a_{\len(a)}\leq r$ and $b_{\len(b)}\leq r$.
\item\label{item:a_1-to-c_len(b)}
If $a_1>r$ (resp.~$b_1>r$), then $c_{\len(b)}>r$ (resp.~$c_{\len(a)}>r$).
\end{enum} \end{lemma}
\begin{proof} For part \eqref{item:c-len-to-ab-lens}, we prove the contrapositive. More precisely, if $k\in[\len(c)]$, then one has $$
c_k
=
\sum_{\kappa(i,j)=k}
a_i b_j
\geq
a_{\len(a)}b_{\len(b)}
\geq
\max\{a_{\len(a)},b_{\len(b)}\}, $$ and thus $c_{\len(c)}>r$ if $a_{\len(a)}>r$ or $b_{\len(b)}>r$. Thus \eqref{item:c-len-to-ab-lens} holds.
For part \eqref{item:a_1-to-c_len(b)}, we suppose, without loss of generality, that $a_1>r$ and show that $c_{\len(b)}>r$. We first observe that Lemma~\ref{lem:kappa-is-injective} implies the integers $\kappa(1,1),\ldots,\kappa(1,\len(b))$ are distinct. Moreover, for each $l\in[\len(b)]$, one has $$
c_{\kappa(1,l)}
\geq
a_1b_l
>
r\cdot 1
=
r. $$ Therefore at least $\len(b)$ terms of the monotone non-increasing sequence $c_1,c_2,\ldots$ exceed $r$, so in particular $c_{\len(b)}>r$, and thus \eqref{item:a_1-to-c_len(b)} holds. \end{proof}
The following proposition is the main result of this subsection. We will use its contrapositive to deduce that a certain representation is tensor indecomposable whenever $mn\gg r$.
\begin{prop}\label{prop:tensor-indecomposable} Suppose $c_{\len(c)}=1$ and $c_2\leq r$. If $\len(c)\leq r+1$, then $m,n\leq r^2+1$ and thus $mn\leq (r^2+1)^2$. \end{prop}
\begin{proof} Lemma~\ref{lem:parts-bounded-by-r}.\ref{item:c-len-to-ab-lens} implies that $a_{\len(a)}=b_{\len(b)}=1$ since $c_{\len(c)}=1$. Therefore $\len(a)\geq 2$ and $\len(b)\geq 2$ since $m\geq 2$ and $n\geq 2$ respectively, and moreover, $c_2\geq c_{\len(a)}$ or $c_2\geq c_{\len(b)}$. Hence the contrapositive of Lemma~\ref{lem:parts-bounded-by-r}.\ref{item:a_1-to-c_len(b)} implies $a_1\leq r$ and $b_1\leq r$ since $c_2\leq r$. In particular, if $\len(c)\leq r+1$, then Lemma~\ref{lem:kappa-is-injective} implies $\len(a),\len(b)\leq r+1$, and thus $$
m = \sum_{i=1}^{\len(a)} a_i\leq ra_1 + a_{\len(a)}\leq r^2+1,\quad
n = \sum_{j=1}^{\len(b)} b_j\leq rb_1 + b_{\len(b)}\leq r^2+1 $$ as claimed. \end{proof}
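A small numerical instance of Lemma~\ref{lem:parts-bounded-by-r} (illustration only): take $r=1$, $\alpha=(4,4,2,1)$, and $\beta=(3,1)$, so $a=(2,1,1,0)$ and $b=(1,1)$. The products $\alpha_i\beta_j$ have absolute values $12,12,6,3,4,4,2,1$, so $c=(2,2,1,1,1,1,0,0)$. Then $c_{\len(c)}=1$ bounds $a_{\len(a)}=b_{\len(b)}=1$ as in part \eqref{item:c-len-to-ab-lens}, while $a_1=2>r$ forces $c_{\len(b)}=c_2=2>r$ as in part \eqref{item:a_1-to-c_len(b)}.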
\subsection{Pairing avoidance}
Let $n$ be a positive integer and $I$ be the $n\times n$ identity matrix. We define the orthogonal and symplectic groups of matrices by $$
\O_n(\bbQbar)
=
\left\{\,M\in\GL_n(\bbQbar) : MM^t = I\,\right\} $$ and $$
\Sp_{2n}(\bbQbar)
=
\left\{\,M\in\GL_{2n}(\bbQbar) : MPM^t = P
\mbox{ for }P=\left(\begin{array}{rr}0&I\\-I&0\end{array}\right)
\,\right\} $$ respectively.
\begin{lemma}\label{lem:eigenvalue-involution} Suppose $m=n$ (resp.~$m=2n$) and $g\in\O_n(\bbQbar)$ (resp.~$g\in\Sp_{2n}(\bbQbar)$). Let $\alpha\in(\bbQbar^\times)^m$ be a tuple of the eigenvalues of $g$ and $a=\wm{m}(\alpha)$. Then some involution $\pi\in\Sym(\len(a))$ satisfies the following:
\begin{enum} \item $a_i=a_{\pi(i)}$ for every $i\in[\len(a)]$; \item $\pi$ has at most one fixed point. \end{enum} \end{lemma}
\begin{proof} The involution $s\mapsto 1/s$ of $\bbQbar^\times$ induces a permutation of the eigenvalues of elements of $\O_n(\bbQbar)$ and $\Sp_{2n}(\bbQbar)$. The latter is an involution $\sigma\in\Sym(m)$ with the property that, for any weight partition map $w_\alpha$ of $\alpha$ and every $i\in[m]$, one has $$
w_\alpha(i) = w_\alpha\sigma(i)
\iff
|\alpha_i| = |\alpha_{\sigma(i)}|
\iff
|\alpha_i| = |1/\alpha_i|
\iff
|\alpha_i| = 1. $$ The involution in question is given by $w_\alpha(i)\mapsto w_\alpha\sigma(i)$ for every $i\in[m]$; recall $w_\alpha$ maps onto $[\len(a)]$. \end{proof}
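As a simple illustration, the diagonal matrix $g=\diag(2,1/2)$ lies in $\Sp_2(\bbQbar)$, and $s\mapsto 1/s$ swaps its eigenvalues $2$ and $1/2$; here $a=(1,1)$, and the involution $\pi\in\Sym(2)$ transposes the two weight classes and has no fixed point.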
The following is the main result of this subsection. We will use its contrapositive to show that some subgroup of $\GL_m(\bbQbar)$ fails to preserve non-degenerate pairings which are either symmetric or alternating.
\begin{prop}\label{prop:pairing-avoidance} Suppose $m=n$ (resp.~$m=2n$) and $g\in\GL_m(\bbQbar)$. Let $\alpha\in(\bbQbar^\times)^m$ be a tuple of the eigenvalues of $g$ and $a=\wm{m}(\alpha)$. If there exist $i,j$ such that $a_i,a_j$ are distinct from each other and from all $a_k$ for $k\neq i,j$, then $g\not\in\O_n(\bbQbar)$ (resp.~$g\not\in\Sp_{2n}(\bbQbar)$). \end{prop}
\begin{proof} We prove the contrapositive. More precisely, if $g\in\O_n(\bbQbar)$ (resp.~$g\in\Sp_{2n}(\bbQbar)$) and if $\pi\in\Sym(\len(a))$ is an involution satisfying the properties of Lemma~\ref{lem:eigenvalue-involution}, then $\pi(i)=i$ for at most one $i$. Therefore, for all but at most one $i$ and for $j=\pi(i)$, one has $i\neq j$ and $a_i=a_j$. In particular, there is at most one $i$ such that $a_i\neq a_j$ for $j\neq i$. \end{proof}
\subsection{Main theorem}
In this section we state and prove the main result of this appendix.
\begin{theorem}\label{thm:big-monodromy} Let $r,R$ be positive integers and $G$ be a connected reductive subgroup of $\GL_R(\Qellbar)$. Let $g\in G$ be an element and $\gamma\in(\Qellbar^\times)^R$ be a tuple of the eigenvalues of $g$. Suppose that $G$ is irreducible, that $\gamma$ lies in $(\bbQbar^\times)^R$, and that $c=\wm{R}(\gamma)$ satisfies $\len(c)\leq r+1$ and $1=c_{\len(c)}<c_{\len(c)-1}$ and $c_2\leq r$. If $R>72(r^2+1)^2$, then either $G=\SL_R(\Qellbar)$ or $G=\GL_R(\Qellbar)$. \end{theorem}
\noindent The proof will occupy the remainder of this subsection.
Since $G$ is algebraic, it contains the semisimplification of $g$, an element whose eigenvalues form the same tuple $\gamma$. Hence we replace $g$ by its semisimplification and suppose without loss of generality that $g$ is semisimple. We also replace $G$ and $g$ by the conjugates $h^{-1}Gh$ and $h^{-1}gh$ by a suitable element $h\in\GL_R(\Qellbar)$ so that we may suppose without loss of generality that $g$ is the diagonal matrix $\mathrm{diag}(\gamma_1,\ldots,\gamma_R)$.
Let $V=\Qellbar^R$ and $f$ be the diagonal matrix $$
f = \mathrm{diag}(|\iota(\gamma_1)|,\ldots,|\iota(\gamma_R)|). $$
We claim we may regard $f$ as an element of $\GL_R(\Qellbar)$. More precisely, it is an element of $\GL_R(\iota(\bbQbar))\sub\GL_R(\bbC)$ since $|\iota(\gamma_i)|^2=\iota(\gamma_i)\overline{\iota(\gamma_i)}$ lies in the algebraically closed subfield $\iota(\bbQbar)\sub\bbC$ and thus so does $|\iota(\gamma_i)|$. Replacing $G$, $g$, $f$ by conjugates by a suitable common permutation matrix, we suppose without loss of generality that $|\iota(\gamma_1)|$ is an eigenvalue of $f$ of multiplicity $c_1$.
\begin{lemma}\label{lem:rank-bound}
$f$ is a semisimple element of $G$ such that $f-|\iota(\gamma_1)|\in\End(V)$ has rank at most $r^2$. \end{lemma}
\begin{proof} For some sequence $e_1,\ldots,e_n$ of tuples $e_i=(e_{i,1},\ldots,e_{i,R})\in\bbZ^R$, the intersection of $G$ with the subgroup of diagonal matrices in $\GL_R(\Qellbar)$ consists of all matrices $\diag(\alpha_1,\ldots,\alpha_R)$ satisfying $$
\prod_{i=1}^R\alpha_i^{e_{1,i}}
=
\prod_{i=1}^R\alpha_i^{e_{2,i}}
=
\cdots
=
\prod_{i=1}^R\alpha_i^{e_{n,i}}
=
1. $$ By hypothesis, $g$ lies in this intersection, and thus $$
|\iota(\prod_{i=1}^R\gamma_i^{e_{1,i}})|
=
|\iota(\prod_{i=1}^R\gamma_i^{e_{2,i}})|
=
\cdots
=
|\iota(\prod_{i=1}^R\gamma_i^{e_{n,i}})|
=
|\iota(1)| $$ or equivalently $$
\prod_{i=1}^R|\iota(\gamma_i)|^{e_{1,i}}
=
\prod_{i=1}^R|\iota(\gamma_i)|^{e_{2,i}}
=
\cdots
=
\prod_{i=1}^R|\iota(\gamma_i)|^{e_{n,i}}
=
1. $$
Therefore $f$ is a diagonal (hence semisimple) element of $G$ as claimed. It remains to show $f-|\iota(\gamma_1)|\in\End(V)$ has rank at most $r^2$. Indeed, exactly $c_1$ of its eigenvalues equal $|\iota(\gamma_1)|$, hence the rank of $f-|\iota(\gamma_1)|$ is $$
R - c_1 \leq \sum_{i=2}^{\len(c)} c_i \leq r\cdot r = r^2 $$ by our hypotheses on $c$. \end{proof}
Let $[G,G]$ be the derived (i.e., commutator) subgroup of $G$. Observe that $G$ acts irreducibly on $V=\Qellbar^R$ by hypothesis, so its center $Z(G)$ consists entirely of scalars and $G$ is an almost product of $[G,G]$ and $Z(G)$. In particular, $[G,G]$ is a connected semisimple group which also acts irreducibly on $V$, and for some $a\in\Qellbar^\times$, the scalar multiple $af$ lies in $[G,G]$.
Let $\mathfrak{g}\seq\gl_R=\End(V)$ be the Lie algebra of $[G,G]$. It is a semisimple irreducible Lie subalgebra of $\gl_R$ since $[G,G]$ is semisimple and acts irreducibly on $V$. It also contains $af$, and Lemma~\ref{lem:rank-bound} implies that $\dim((af-a|\iota(\gamma_1)|)V)\leq r^2$. Finally, the contrapositive of Proposition~\ref{prop:tensor-indecomposable} implies that $\mathfrak{g}$ is simple since otherwise $V$ would be tensor decomposable as a representation of $G$. Therefore, a result of Zarhin \cite[Th.~6]{Zarhin} implies that $\mathfrak{g}$ is one of $\mathfrak{sl}(V)$, $\mathfrak{so}(V)$, or $\mathfrak{sp}(V)$ since $$
R = \dim(V)
> 72(r^2)^2
\geq 72\dim((f-|\iota(\gamma_1)|)V)^2
= 72\dim((af-a|\iota(\gamma_1)|)V)^2 $$ by our hypotheses on $R$.
To complete the proof of the theorem it suffices to rule out $\mathfrak{g}=\mathfrak{so}(V)$ and $\mathfrak{g}=\mathfrak{sp}(V)$, or equivalently to show that $G$ preserves neither an orthogonal nor a symplectic pairing. However, our hypotheses on $c$ together with the contrapositive of Proposition~\ref{prop:pairing-avoidance} imply that $G$ preserves neither type of pairing, so $\mathfrak{g}=\mathfrak{sl}(V)$ as claimed. That is, $[G,G]$ is $\SL(V)$ and $G$ is equal to one of $\SL(V)$ or $\GL(V)$.
\section{Perverse Sheaves and the Tannakian Monodromy Group}\label{sec:tannakian-appendix}
\subsection{Category of perverse sheaves}
Given a smooth curve $X$ over a perfect field $\bbF$, we can speak of the so-called derived category $\Dbc{X}$. Its objects $M$ are complexes of constructible $\Qellbar$-sheaves on $X$ over $\bbF$ whose cohomology complex $$
\cdots
\longto \H{-1}M
\longto \H{0}M
\longto \H{1}M
\longto \cdots $$ is bounded and whose cohomology sheaves $\H{i}M$ are all constructible. There is a well-defined dual object $\Dual{M}$, the Verdier dual of $M$. Moreover, for each $n\in\bbZ$, there is a well-defined shifted complex $M[n]$ which satisfies $\H{i}{M[n]}=\H{i+n}M$.
We say that $M$ is \defi{semi-perverse} iff $\H{0}{M}$ is punctual and $\H{i}{M}$ vanishes for $i>0$, and that $M$ is \defi{perverse} iff $M$ and $\Dual{M}$ are semi-perverse. We write $\Perv{X}$ for the full subcategory of perverse objects in $\Dbc{X}$. It is an abelian category, and thus one can speak of subquotients of its objects as well as kernels and cokernels of its morphisms. It is common to call its objects perverse sheaves despite the fact that they are \emph{complexes} of sheaves.
There is a natural functor from the category of constructible $\El$-sheaves on $X$ over $\bbF$ to $\Dbc{X}$: it sends a sheaf $\FF$ to a complex concentrated at $i=0$ and takes a morphism to the unique extension to a morphism of complexes. The image of this functor is not stable under duality though: if $\FF^\vee$ is the dual of $\FF$, then $\Dual{\FF}$ is isomorphic to $\FF^\vee(1)[2]$. If instead one sends each $\FF$ to $\FF(1/2)[1]$, then self-dual objects are taken to self-dual objects and middle-extension sheaves are taken to perverse sheaves.
\subsection{Purity}
Let $X$ be a smooth curve over $\Fq$. We say an object $M$ in $\Dbc{X}$ is \defi{$\iota$-mixed of weights $\leq w$} iff $\H{i}M$ is punctually $\iota$-mixed of weights $\leq w+i$ for every $i$, and then $M[n]$ is $\iota$-mixed of weights $\leq w+n$. We also say $M$ is \defi{$\iota$-pure of weight $w$} iff $M$ is $\iota$-mixed of weights $\leq w$ and $\Dual{M}$ is $\iota$-mixed of weights $\leq -w$, and then $M[n]$ is $\iota$-pure of weight $w+n$. Finally, we say $M$ is \defi{pure of weight $w$} iff it is $\iota$-pure of weight $w$ for every field embedding $\iota\colon\Qbar\to\bbC$.
\subsection{Subobjects and subquotients}
Let $(\CC,\oplus)$ be an abelian category, let $\zero$ be its zero object, and let $M,N$ be a pair of objects in $\CC$.
We say that $N$ is a \defi{subobject} of $M$ and write $N\seq M$ iff there is a monomorphism $N\into M$ in $\CC$. More generally, we say $N$ is a \defi{subquotient} of $M$ iff there exist an object $S$, a monomorphism $S\into M$, and an epimorphism $S\onto N$ all in $\CC$. Equivalently, $N$ is a subquotient of $M$ iff there exist an object $Q$, an epimorphism $M\onto Q$, and a monomorphism $N\into Q$ all in $\CC$.
\begin{prop}\label{prop:subquotient-of-pure-is-pure} If $M\in\Perv{\Gm}$ is $\iota$-pure of weight $w$, then so is every subquotient $N$. \end{prop}
\begin{proof} See \cite[5.3.1]{BBD}. \end{proof}
Given a pair $N_1,N_2\seq M$ of subobjects, we write $N_1\seq N_2\seq M$ iff $N_1\seq N_2$ and, for the corresponding monomorphisms, $N_1\into M$ equals the composition $N_1\into N_2\into M$. We also write $N_1=N_2\seq M$ iff $N_1\seq N_2\seq M$ and $N_2\seq N_1\seq M$. For example, if $M$ is an object in $\Perv{\Gm}$ and if $\phi$ is the Frobenius automorphism of $\bar{M}$, then the subobjects $N\seq M$ give rise to precisely those subobjects $\bar{N}\seq\bar{M}$ satisfying $\bar{N}=\phi(\bar{N})\seq\bar{M}$.
\subsection{Kummer sheaves}
Let $\Gm=\Poneu\ssm\{0,\infty\}$ over $\Fq$, and let $\piOneTame{\Gm}$ be the tame \'etale fundamental group, that is, the maximal quotient of $\piOne{\Gm}$ whose kernel contains the $p$-Sylow subgroups of $I(0)$ and $I(\infty)$. It lies in an exact sequence $$
1\to \piOneTame{\GmBar}\to \piOneTame{\Gm}\to \Gal(\Fqbar/\Fq)\to 1 $$ where $\piOneTame{\GmBar}$ is the image of $\piOne{\GmBar}$ via the tame quotient $\piOne{\Gm}\onto\piOneTame{\Gm}$.
We say a constructible sheaf on $\PoneBar$ is a \defi{Kummer sheaf} iff it is a middle-extension sheaf which is lisse of rank one on $\GmBar$ and for which the corresponding representation factors through the quotient $\piOne{\GmBar}\onto\piOneTame{\GmBar}$. Equivalently, the Kummer sheaves are the middle-extension sheaves $\LL_\rho$ on $\PoneBar$ associated to a continuous character $\rho\colon\piOneTame{\GmBar}\to\Qellbar^\times$.
\subsection{Middle convolution on $\PP$}\label{sec:cateogory-P}
Let $\pi\colon\Gm\times\Gm\to\Gm$ be the multiplication map on $\Gm$ over $\Fq$. Using it one can define two additive bifunctors on $\Dbc{\GmBar}$ corresponding to two flavors of multiplicative convolution: $$
M\star_! N := R\pi_!(M\boxtimes N),\quad
M\star_* N := R\pi_*(M\boxtimes N). $$ There is a canonical map $M\star_! N\to M\star_* N$, but it need not be an isomorphism in general. However, if both convolution objects lie in $\Perv{\GmBar}$, then one can speak of the image of the map and define $$
M\midstar N := \mathrm{Image}(M\star_! N\to M\star_* N). $$ This observation led Katz to define the full subcategory $\PP$ of $\Perv{\GmBar}$ whose objects are all $M$ for which $N\mapsto M\star_!N$ and $N\mapsto M\star_*N$ take perverse sheaves to perverse sheaves (see \cite[\S2.6]{Katz:RLS} and \cite[Ch.~2]{Katz:CE}). Among other things, it includes perverse sheaves $\FF[1]$ for $\FF$ a simple middle-extension sheaf on $\GmBar$ of generic rank at least two. Moreover, it is an additive category with respect to the usual direct sum of sheaves. Katz called the resulting additive bifunctor on $\PP$ middle convolution.
\subsection{The category $\PP_\arith$}\label{sec:category-P_arith}
Let $\Dbc{\Gm}\to\Dbc{\GmBar}$ be the ``extension of scalars'' functor which sends an object of $M$ over $\Fq$ to the object $\bar{M}=M\times_{\Fq}\Fqbar$. It maps objects of $\Perv{\Gm}$ to objects of $\Perv{\GmBar}$, and we define $\PP_\arith$ to be the full subcategory of $\Perv{\Gm}$ whose objects $M$ are those for which $\bar{M}$ lies in $\PP$. Among other things, $\PP_\arith$ contains perverse sheaves $\FF[1]$ for $\FF$ a geometrically simple middle-extension sheaf on $\Gm$ over $\Fq$ which is of generic rank at least two.
Once again we have the two flavors of multiplicative convolution $$
M\star_! N := R\pi_!(M\boxtimes N),\quad
M\star_* N := R\pi_*(M\boxtimes N) $$ for any pair of objects $M,N$ in $\Perv{\Gm}$. We can also define middle convolution on $\PP_\arith$ as before $$
M\midstar N := \mathrm{Image}(M\star_! N\to M\star_* N) $$ for any pair of objects $M,N$ in $\PP_\arith$.
\begin{prop}\label{prop:convolution-of-pures-is-pure} If $M$ and $N$ are $\iota$-pure of weights $m$ and $n$ respectively, then $M\midstar N$ is $\iota$-pure of weight $m+n$. \end{prop}
\begin{proof} Our argument is essentially that of \cite[Ch.~4]{Katz:CE}. On one hand, $M\boxtimes N$ is $\iota$-pure of weight $m+n$ on $\Gm\times\Gm$, hence \cite[3.3.1]{Deligne:WeilII} and Proposition~\ref{prop:subquotient-of-pure-is-pure} imply $M\star_! N$ and its perverse quotient $M\midstar N$ are $\iota$-mixed of weights $\leq m+n$. On the other hand, $DM$ and $DN$ are $\iota$-pure of weights $-m$ and $-n$ respectively, and \begin{eqnarray*}
D(M\midstar N)
& = &
\mathrm{Image}(D(M\star_*N)\to D(M\star_! N)) \\
& = &
\mathrm{Image}(DM\star_! DN\to DM\star_* DN)
\ \ =\ \
DM\midstar DN \end{eqnarray*} hence $D(M\midstar N)$ is $\iota$-mixed of weights $\leq -(m+n)$ (cf.~\cite[6.2]{Deligne:WeilII}). Thus $M\midstar N$ is $\iota$-pure of weight $m+n$ as claimed. \end{proof}
\subsection{The category $\Tann{\GmBar}$}\label{sec:tann-gmbar}
Gabber and Loeser defined an object $M$ in $\Perv{\GmBar}$ to be \defi{negligible} iff its Euler characteristic $\chi(\GmBar,M)$ vanishes (see \cite[pg.~529]{GL}), or equivalently, it is isomorphic to a successive extension of shifted Kummer sheaves $\LL_\rho[1]$ (cf.~\cite[3.5.3]{GL}). They showed that the full subcategory $\Negl{\GmBar}$ of $\Perv{\GmBar}$ whose objects are the negligible sheaves is a thick subcategory of the abelian category (see \cite[3.5.2]{GL}), and thus one can speak of the quotient category $$
\Tann{\GmBar}:=\Perv{\GmBar}/\Negl{\GmBar}. $$ They then proceeded to show that $\Tann{\GmBar}$ is a neutral Tannakian category (see \cite[3.7.5]{GL} and \cite[II.2.19]{DM}).
\begin{theorem}\label{thm:P-is-neutral-Tannakian} The composite map $\PP\to\Perv{\GmBar}\to\Tann{\GmBar}$ induces an equivalence of categories such that: \begin{enum} \item middle convolution on $\PP$ induces a tensor product $\otimes$ on $\Tann{\GmBar}$; \item the unit object $\one$ corresponds to the skyscraper sheaf $i_*\Qellbar$ for $i\colon\{1\}\to\GmBar$ the inclusion;
\item the dual $M^\vee$ of an object $M$ is the object $[x\mapsto 1/x]^*DM$;
\item the dimension $\dim(M)$ of an object $M$ is $\chi(\GmBar,M)$;
\item\label{thm:item:fiber-functor} a fiber functor is $M\mapsto H^0(\AoneuBar,j_{0!}M)$ for $j_0\colon\Gm\to\Aoneu$ the inclusion.
\end{enum} \end{theorem}
\noindent See \cite[3.7.2]{GL} and \cite[Ch.~2 and Ch.~3]{Katz:CE}.
\subsection{The category $\Tann{\Gm}$}
Let $\Negl{\Gm}$ be the full subcategory of $\Perv{\Gm}$ whose objects $M$ are those for which $\bar{M}$ lies in $\Negl{\GmBar}$, and let $$
\Tann{\Gm} := \Perv{\Gm}/\Negl{\Gm}. $$ Like $\Tann{\GmBar}$, the quotient category is an abelian category and even a neutral Tannakian category with tensor product $\otimes$ given by middle convolution. Moreover, the ``extension of scalars'' functor induces a functor $$
\Tann{\Gm}\to\Tann{\GmBar} $$ which we also call the ``extension of scalars'' functor.
\begin{prop} Suppose $M,N\in\Tann{\Gm}$ are $\iota$-pure of weights $m$ and $n$ respectively. Then $M^\vee$, $N^\vee$, and $M\otimes N$ are $\iota$-pure of weights $-m$, $-n$, and $m+n$ respectively. \end{prop}
\begin{proof} The Verdier duals $DM$ and $DN$ are $\iota$-pure of weights $-m$ and $-n$ respectively, hence so are the Tannakian duals $M^\vee=[x\mapsto 1/x]^*DM$ and $N^\vee=[x\mapsto 1/x]^*DN$. Moreover, Proposition~\ref{prop:convolution-of-pures-is-pure} implies that $M\otimes N=M\midstar N$ is $\iota$-pure of weight $m+n$. \end{proof}
\subsection{Semisimple abelian categories}
We say that $M$ is \defi{simple} iff the only subobjects $N\seq M$ in $\CC$ are isomorphic to $\zero$ or $M$. More generally, we say that $M$ is \defi{semisimple} iff it is isomorphic to a finite direct sum $N_1\oplus\cdots\oplus N_m$ of simple subobjects $N_1,\ldots,N_m\seq M$. We say that $\CC$ is \defi{semisimple} iff each of its objects is semisimple.
\begin{prop}\label{prop:weight-zero-implies-semisimple} If $M\in\Tann{\Gm}$ is $\iota$-pure of weight zero, then $\gp{\bar{M}}$ is semisimple. \end{prop}
\begin{proof} If $N_1,N_2\in\Tann{\Gm}$ are $\iota$-pure of weight zero, then so is $N_1\oplus N_2$. Therefore Proposition~\ref{prop:convolution-of-pures-is-pure} implies that $T^{a,b}(M)$ is pure of weight zero, for every $a,b\geq 0$, and \cite[5.3.8]{BBD} implies that $T^{a,b}(\bar{M})$ is semisimple. \end{proof}
\subsection{Tannakian monodromy group}
Let $k$ be an algebraically closed field of characteristic zero and $\Vec_k$ be the category of finite-dimensional vector spaces over $k$. It is well known that the latter yields a rigid abelian tensor category $(\Vec_k,\otimes)$ with respect to the usual operators $\oplus$ and $\otimes$ of vector spaces and with unit object $\one=k$.
Let $(\CC,\otimes)$ be a neutral Tannakian category over $k$. Thus $(\CC,\otimes)$ is a rigid abelian tensor category whose unit object $\one$ satisfies $k=\End(\one)$ and for which there exists a fiber functor $\omega$, that is, an exact faithful $k$-linear tensor functor $\omega\colon\CC\to\Vec_k$. For example, $\Vec_k$ is a neutral Tannakian category and the identity functor $\Vec_k\to\Vec_k$ is a fiber functor. More generally, given an affine group scheme $G$ over $k$, the category $\Rep_k(G)$ of linear representations of $G$ on finite-dimensional $k$-vector spaces yields a neutral Tannakian category $(\Rep_k(G),\otimes)$, and the forgetful functor $\Rep_k(G)\to\Vec_k$ is a fiber functor.
Given an object $M$ of $\CC$, its dual $M^\vee$, and non-negative integers $a,b$, let $$
T^{a,b}(M) := M^{\otimes a}\oplus (M^\vee)^{\otimes b} $$ and let $\gp{M}$ be the full tensor subcategory of $\CC$ whose objects consist of all subobjects of $T^{a,b}(M)$ for all $a,b\geq 0$. For each automorphism $\gamma\in\Aut_\CC(M)$, let $\gamma^\vee\in\Aut_\CC(M^\vee)$ be the corresponding dual automorphism and $T^{a,b}(\gamma)\in\Aut_\CC(T^{a,b}(M))$ be the induced automorphism.
Let $\Alg_k$ be the category of $k$-algebras and $\Set$ be the category of sets. Given a pair $\omega_1,\omega_2$ of fiber functors $\CC\to\Vec_k$ and an object $M$ in $\CC$, one can define a functor $$
\ulIsom^\otimes(\omega_1|M,\omega_2|M)\colon\Alg_k\to\Set $$ by sending a $k$-algebra $R$ to the set $$
\{\,
\gamma\in\Isom_R(\omega_1(M)_R,\omega_2(M)_R)
:
T^{a,b}(\gamma)(\omega_1(N))\seq\omega_2(N)
\mbox{ for all }
a,b\geq 0
\mbox{ and }
N\seq T^{a,b}(M)
\,\} $$ where $\omega_i(M)_R=\omega_i(M)\otimes_k R$ and $$
\Isom_R(\omega_1(M)_R,\omega_2(M)_R)
=
\{\,
\gamma\in\Hom_R(\omega_1(M)_R,\omega_2(M)_R)
:
\gamma\mbox{ is invertible }
\,\}. $$ Similarly, given a single fiber functor $\omega\colon\CC\to\Vec_k$ and an object $M$ in $\CC$, one can define a functor $$
\ulAut^\otimes(\omega|M)\colon\Alg_k\to\Set $$
as the functor $\ulIsom^\otimes(\omega|M,\omega|M)$.
\begin{theorem}\label{thm:tannakian-monodromy-group} Let $\omega_1,\omega_2$ be fiber functors $\CC\to\Vec_k$ and $M$ be an object of $\CC$.
\begin{enum}
\item $\ulAut^\otimes(\omega_i|M)$ is representable by an algebraic group scheme $G_{\omega_i|M}$ over $k$;
\item\label{thm:item:tmg-reductive} if $\gp{M}$ is semisimple, then $G_{\omega_i|M}$ is reductive;
\item\label{thm:item:tmg-torsor} $\ulIsom^\otimes(\omega_1|M,\omega_2|M)$ is represented by an affine scheme over $k$ which is a $G_{\omega_1|M}$-torsor. \end{enum} \end{theorem}
\noindent See \cite[II.2.11, II.2.20, II.2.28, and II.3.2]{DM}.
We call the group scheme $G_{\omega_i|M}$ in the theorem the \defi{Tannakian monodromy group} of $\gp{M}$ with respect to $\omega_i$.
\begin{theorem}\label{thm:pure-of-weight-zero-begets-reductive}
Let $\omega\colon\Perv{\GmBar}\to\Vec_k$ be a fiber functor over $\Fqbar$ and $M\in\Perv{\Gm}$. If $M$ is pure of weight zero, then $G_{\omega|\bar{M}}$ is reductive. \end{theorem}
\begin{proof} This follows from Proposition~\ref{prop:weight-zero-implies-semisimple} and Theorem~\ref{thm:tannakian-monodromy-group}.\ref{thm:item:tmg-reductive}. \end{proof}
\subsection{Geometric versus arithmetic monodromy}
For every object $M$ in $\Tann{\Gm}$ and all integers $a,b\geq 0$, the ``extension of scalars'' functor sends a subobject $N\seq T^{a,b}(M)$ to a subobject $\bar{N}\seq T^{a,b}(\bar{M})$. Moreover, composing the functor with a fiber functor $\omega$ on $\Tann{\GmBar}$ yields a fiber functor on $\Tann{\Gm}$ which we also denote $\omega$. Thus there is a natural transformation $$
\ulAut^\otimes(\omega|\bar{M})
\to
\ulAut^\otimes(\omega|M) $$ and a corresponding monomorphism of Tannakian monodromy groups $$
G_{\omega|\bar{M}}
\to
G_{\omega|M}. $$
We call $G_{\omega|\bar{M}}$ and $G_{\omega|M}$ the \defi{geometric} and \defi{arithmetic Tannakian monodromy groups} of $M$ with respect to $\omega$ respectively.
\begin{prop}\label{prop:arith-is-reductive} Suppose $M$ is in $\Tann{\Gm/\Fq}$ and is pure of weight zero. \begin{enum}
\item $G_{\omega|\bar{M}}$ is a normal subgroup of $G_{\omega|M}$;
\item If $M$ is arithmetically semisimple, then $G_{\omega|M}/G_{\omega|\bar{M}}$ is a torus, and thus $G_{\omega|M}$ is reductive. \end{enum} \end{prop}
\begin{proof}
Proposition~\ref{prop:weight-zero-implies-semisimple} implies that $\bar{M}$ is semisimple, so part (1) follows from \cite[Th.~6.1]{Katz:CE}. Therefore we can speak of the quotient $G_{\omega|M}/G_{\omega|\bar{M}}$, and \cite[Lem.~7.1]{Katz:CE} implies that it is a torus when $M$ is arithmetically semisimple. Moreover, Theorem~\ref{thm:pure-of-weight-zero-begets-reductive} implies that $G_{\omega|\bar{M}}$ is reductive, so part (2) follows by observing that an extension of a torus by a reductive group is reductive. \end{proof}
\subsection{Frobenius element}
Let $\omega$ be a fiber functor $\Tann{\GmBar}\to\Vec_k$, let $\EFq/\Fq$ be a finite extension, and let $M$ be in $\Tann{\Gm/\EFq}$. The geometric Frobenius element of $\Gal(\Fqbar/\EFq)$ induces a well-defined automorphism $\phi_\EFq$ of $\bar{M}$. By applying $\omega$, one obtains a well-defined $k$-linear automorphism of $\omega(\bar{M})$, that is, an element of $\GL(\omega(\bar{M}))=\GL(\omega(M))$. It is even an element of $G_{\omega|M}$ since, for all $a,b\geq 0$ and every $N\seq T^{a,b}(M)$, one has $$
\bar{N}=T^{a,b}(\phi_\EFq)(\bar{N})\seq T^{a,b}(\bar{M}) $$ and thus $$
\omega(\bar{N})=T^{a,b}(\phi_\EFq)(\omega(\bar{N}))\seq \omega(T^{a,b}(\bar{M}))=T^{a,b}(\omega(M)). $$
We call $\omega(\phi_\EFq)$ the \defi{geometric Frobenius element} of $G_{\omega|M}$.
\subsection{Frobenius conjugacy classes}
Let $\omega_1,\omega_2$ be fiber functors $\Tann{\GmBar}\to\Vec_k$, let $M$ be an element of $\Tann{\Gm}$, and let $\pi$ be an element of $\ulIsom^\otimes(\omega_1|M,\omega_2|M)(k)$. Then Theorem~\ref{thm:tannakian-monodromy-group}.\ref{thm:item:tmg-torsor} implies that the map $g\mapsto\pi g$ induces a bijection $$
G_{\omega_1|M}\to \ulIsom^\otimes(\omega_1|M,\omega_2|M). $$
Moreover, the map $g_2\mapsto g_2^\pi=\pi^{-1}g_2\pi$ induces an isomorphism $G_{\omega_2|M}\to G_{\omega_1|M}$. While the map is not canonical (since $\pi$ is not), the conjugacy class $$
\Frob_{\omega_2|M}
=
\{\,
\omega_2(\phi)^{\pi g_1}
:
g_1\in G_{\omega_1|M}(k)
\,\}
\sub
G_{\omega_1|M}(k) $$
is well defined. We call it the \defi{geometric Frobenius conjugacy class} of $\omega_2|M$ in $G_{\omega_1|M}$.
For each finite extension $\EFq/\Fq$ and each character $\rho\in\PhiEOf\EFq{u}$, let $\LL_\rho$ be the corresponding Kummer sheaf on $\Gm$ over $E$ and $\omega_\rho\colon\Tann{\GmBar}\to\Vec_k$ be the functor given by $$
M\mapsto H^0(\AoneuBar,j_{0!}(M\otimes\LL_\rho)). $$ It is a fiber functor by \cite[3.2]{Katz:CE}, and $\omega_\one$ is the fiber functor of Theorem~\ref{thm:P-is-neutral-Tannakian}.\ref{thm:item:fiber-functor}. We write $$
\Frob_{\EFq,\rho}\sub G_{\omega_\one|M} $$
for the corresponding geometric Frobenius conjugacy class of $\omega_\rho|M_{\EFq}$ where $M_{\EFq}=M\times_{\Fq}\EFq$.
Let $m=\dim(\omega_\rho(M))$ and $n\in\{0,1,\ldots,m\}$. We say that $\omega_\rho(M)$ is \defi{mixed of weights $w_1,\ldots,w_m$} iff there exists a tuple of eigenvalues $\alpha=(\alpha_1,\ldots,\alpha_m)\in(\Qellbar^\times)^m$ of any element of $\Frob_{\EFq,\rho}$ such that $\alpha\in(\Qbar^\times)^m$ and such that $$
|\iota(\alpha_i)|^2 = (1/|\EFq|)^{w_i}\mbox{ for }1\leq i\leq m $$ for every field embedding $\iota\colon\Qbar\to\bbC$. We also say that $\omega_\rho(M)$ is \defi{mixed of non-zero weights $w_1,\ldots,w_n$} iff it is mixed of weights $w_1,\ldots,w_m$ with $w_{n+1}=\cdots=w_m=0$.
\subsection{Monodromy for pure middle-extension sheaves}\label{sec:representation-monodromy-groups}
Let $U\seq\Gm$ be a dense Zariski open subset over $\Fq$. Let $\theta\colon\piOne{U}\to\GL(W)$ be a continuous representation to a finite-dimensional $\Qellbar$-vector space $W$ and $\FF=\ME{\theta}$ be the associated middle-extension sheaf on $\Gm$. Suppose that $\theta$ is punctually pure of weight $w$ so that $M=\FF((1+w)/2)[1]$ is pure of weight zero. Suppose moreover that $\theta$ is geometrically simple and that it does not factor through the composed quotient $\piOne{U}\onto\piOne{\Gm}\onto\piOneTame{\Gm}$ so that $M$ lies in $\PP_\arith$.
Let $\PhiU$ be the dual of $\Bu=(\Fq[u]/u\Fq[u])^\times$ (cf.~\S\ref{sec:one-parameter-families}). We define the \defi{geometric} and \defi{arithmetic Tannakian monodromy groups} of (the Mellin transformation of) $\theta$ to be $$
\GG_\geom(\theta,\PhiU):=G_{\omega_\one|\bar{M}},\quad
\GG_\arith(\theta,\PhiU):=G_{\omega_\one|M}. $$ For $u=0,\infty$, let $W(u)$ denote $W$ regarded as an $I(u)$-module, and let $W(u)^\unip$ be the maximal submodule of $W(u)$ on which $I(u)$ acts unipotently. Moreover, let $e_{u,1},\ldots,e_{u,d_u}$ be positive integers satisfying $$
W(u)^\unip\simeq U(e_{u,1})\oplus\cdots\oplus U(e_{u,d_u}) $$ as $I(u)$-modules where $U(e)$ denotes the irreducible $e$-dimensional $I(u)$-module on which $I(u)$ acts unipotently.
\begin{prop}\label{prop:middle-extension-monodromy}\
\begin{enum} \item\label{item:prop:mem-reductive-and-normal} The groups $\GG_\geom(\theta,\PhiU)$ and $\GG_\arith(\theta,\PhiU)$ are reductive, and there is an exact sequence $$
1
\to \GG_\geom(\theta,\PhiU)
\to \GG_\arith(\theta,\PhiU)
\to T
\to 1 $$ for some torus $T$ over $\Qellbar$.
\item\label{item:prop:mem-nonzero-weights} For each finite extension $\EFq/\Fq$ and each $\rho\in\PhiEOf\EFq{u}$, the fiber $\omega_\rho(M)$ is mixed of non-zero weights $-e_{0,1},\ldots,-e_{0,d_0},e_{\infty,1},\ldots,e_{\infty,d_\infty}$.
\end{enum} \end{prop}
\begin{proof} Part (1) follows from Proposition~\ref{prop:arith-is-reductive}, and part (2) follows from \cite[Th.~16.1]{Katz:CE}. \end{proof}
\end{document} |
\begin{document}
\title{Outlier Explanation via Sum-Product Networks}
\begin{abstract}
Outlier explanation is the task of identifying a set of features that distinguish a sample from normal data, which is important for downstream (human) decision-making. Existing methods are based on beam search in the space of feature subsets. They quickly become computationally expensive, as they require running an outlier detection algorithm from scratch for each feature subset.
To alleviate this problem, we propose a novel outlier explanation algorithm based on Sum-Product Networks (SPNs), a class of probabilistic circuits. Our approach leverages the tractability of marginal inference in SPNs to compute outlier scores in feature subsets. By using SPNs, it becomes feasible to perform backward elimination instead of the usual forward beam search, which is less susceptible to missing relevant features in an explanation, especially when the number of features is large. We empirically show that our approach achieves state-of-the-art results for outlier explanation, outperforming recent search-based as well as deep learning-based explanation methods. \end{abstract} \section{Introduction}
The identification of uncommon or anomalous samples (outliers) in a dataset is an important task in data science. Outliers can, for example, indicate defective products in quality control \cite{stojanovic2016big}, intrusions in networks \cite{garcia2009anomaly}, or potential medical conditions in health records \cite{carrera2019online}. Many outlier detection methods have been proposed, including classical methods based on notions of distance or density \cite{liu2008isolation,scholkopf2001estimating} as well as deep learning-based methods (see \citealt{pang2021deep} for a review).
A less well-investigated but natural question is that of \emph{outlier explanation}: Given a sample classified as an outlier, which properties of the sample are the cause for this classification, i.e., which properties are specifically anomalous? This task has also been called \emph{outlying aspect mining}
\cite{duan2015mining,vinh2016discovering,samariya2020new} or \emph{outlier interpretation} \cite{xu2021beyond,liu2018contextual}.
\begin{figure}
\caption{Two explanations generated for outliers in the Wisconsin Breast Cancer dataset. Contours illustrate the (marginal) density of the SPN. Left: Features that best explain the outlyingness of the green sample. Right: Features that best explain the outlyingness of the red sample.
The explanations indicate that the green sample has unusual texture, while the red sample has unusual shape.}
\label{fig:example}
\end{figure}
\begin{example} \normalfont The Wisconsin Breast Cancer dataset\footnote{Available at \url{odds.cs.stonybrook.edu/wbc}.} contains properties of cell nuclei of fine needle aspirates of breast mass. Outliers correspond to malignant cases. A support system for a physician should not only \emph{detect} potential outliers (malignant cases), but also explain why it considers a sample an outlier, to increase the physician's confidence in the system and support her diagnosis and therapy decision. Figure \ref{fig:example} shows two examples of such explanations. We can see that the green sample has unusual texture and the red sample has unusual shape, which can be relevant for downstream tasks like therapy decisions. \end{example}
Most existing methods for outlier explanation are based on beam search to identify feature subsets in which the outlier score of a sample is maximal \cite{duan2015mining,vinh2016discovering,wells2019new,samariya2020new}. They require running an outlier detection algorithm for each feature subset that is visited during beam search, which can be computationally very costly.
The contribution of this paper is a novel algorithm for outlier explanation based on \emph{Sum-Product Networks} (SPNs) \cite{poon2011sum}. SPNs are probabilistic models in which marginal inference is tractable. In this model, identification of feature subsets with high outlier score is fast, compared to existing methods: An SPN only needs to be trained once, and marginal probabilities in the SPN correspond to outlier scores in the respective feature subsets.
Furthermore, we propose to use a backward elimination search strategy instead of the usual forward beam search to identify feature subsets. The option to use backward elimination is enabled by the use of SPNs as the underlying model: As the runtime of outlier score computation in SPNs does not depend on the feature subset size, it becomes possible to start the search for an explanation with a large feature subset and prune it iteratively. We find that backward elimination is less susceptible to missing relevant features than beam search, especially for datasets with larger dimensionality, overall leading to more accurate explanations for high-dimensional data.
We extensively evaluate our approach on a number of synthetic and real-world outlier explanation tasks. Our approach achieves state-of-the-art results for outlier explanation, outperforming recent search-based as well as deep learning-based explanation methods, while being computationally feasible.
\section{Preliminaries and Related Work}
\subsection{Explainable Outlier Detection} \label{subsec:outlier-detection}
\paragraph{Outlier Detection} Outlier detection is the following unsupervised learning task: Given a dataset $\{\mathbf{x}^{(1)},\dots,\mathbf{x}^{(m)}\}$, classify each sample as either normal or outlier. Here, the samples $\mathbf{x} \in \mathcal{X}$ can be multivariate, e.g., $\mathcal{X} = \mathbb{R}^n$, but we also consider the case where some or all dimensions are categorical.
Outlier detection algorithms usually compute a \emph{scoring function} $f: \mathcal{X} \rightarrow \mathbb{R}$, which can be used for outlier classification by classifying all samples with $f(\mathbf{x}) > t$ as outliers, for a fixed threshold $t$.
Classical methods include, for example, isolation forests \cite{liu2008isolation}, local outlier factor \cite{breunig2000lof} or one-class support vector machines \cite{scholkopf2001estimating}. More recently, deep neural networks have been used for this task \cite{pang2021deep}.
In this paper, we focus on probabilistic outlier detection, by assuming that \emph{normal} data was generated from a distribution $p(\mathbf{x};\theta)$ with parameters $\theta$. An outlier is a sample which is unlikely to be drawn from $p(\mathbf{x};\theta)$. That is, we use the scoring function $f(\mathbf{x}) = -p(\mathbf{x};\theta)$. Parametric as well as non-parametric density estimators have been considered for $p(\mathbf{x};\theta)$, e.g., Gaussian mixtures \cite{pimentel2014review} or Kernel Density Estimation \cite{schubert2014generalized}.
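As a minimal illustration of this scoring rule (a sketch only, not the density estimator used later in the paper), one can fit a univariate Gaussian to the training data and score samples by their negative density:

```python
import math
from statistics import mean, pstdev

def gaussian_score(x, data):
    """Outlier score f(x) = -p(x; theta), with theta = (mu, sigma)
    estimated from the (mostly normal) training data."""
    mu, sigma = mean(data), pstdev(data)
    p = math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))
    return -p

data = [9.8, 10.1, 10.0, 9.9, 10.2]
# A sample far from the bulk of the data is unlikely under p and
# therefore receives a higher (less negative) outlier score.
assert gaussian_score(25.0, data) > gaussian_score(10.0, data)
```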
The parameters $\theta$ can be estimated from purely normal training data, in which case the task is also called \emph{novelty detection} \cite{pimentel2014review}. For the outlier detection task considered here, where the available data consists of normal as well as anomalous data, it is still customary to use the complete data to estimate $\theta$, assuming that outliers are rare and sparse so that they will still be assigned a low probability.
\paragraph{Outlier Explanation} \emph{Outlier explanation} is the task of retrieving a subset of features in which the sample is specifically anomalous \cite{zhang2004hos,duan2015mining,wells2019new,samariya2020new}. More formally, let $D \subseteq \{1,\dots,n\}$ be a set of indices, and let $\mathbf{x}_D$ denote the projection of $\mathbf{x}$ onto the subspace indicated by $D$. Outlier explanation is the task of identifying $D$, such that $f(\mathbf{x}_D)$ is maximized for a given sample $\mathbf{x}$.
The naive approach of computing $f(\mathbf{x}_D)$ individually for each subspace $D \subseteq \{1,\dots,n\}$ quickly becomes infeasible, as the number of subspaces grows exponentially in the number of features $n$.
Therefore, existing methods \cite{zhang2004hos,duan2015mining,wells2019new,samariya2020new} for outlier explanation usually perform a greedy beam search that iteratively adds dimensions to $D$. For example, \citet{duan2015mining} use a kernel density estimator (KDE) to compute outlier scores for each visited subspace. \citet{wells2019new} build on this work, replacing the KDE with a faster, grid-based density estimator. Still, these methods are computationally expensive as they need to run a density estimator individually for each investigated subspace \cite{samariya2020comprehensive}.
To select the best explanation, simply returning the subspace $D$ where $f(\mathbf{x}_D)$ is maximized is usually not appropriate, because scoring functions for different dimensionalities are usually not directly comparable \cite{vinh2016discovering}. For example, the distance between samples generally increases when the number of dimensions increases, favoring high-dimensional subspaces as explanations. Thus, \emph{dimensionality-unbiased} scores like \emph{z-score} normalization \begin{equation} z(\mathbf{x},D,\mathbf{X}) = \frac{f(\mathbf{x}_D) - \mu(\mathbf{X}_D)}{\sigma(\mathbf{X}_D)} \label{eq:zscore} \end{equation}
w.r.t.\ the training dataset $\mathbf{X}$, or a rank transformation, have been proposed \cite{duan2015mining}. They allow comparing scores between different dimensionalities, and thus identifying a subspace $D$ in which $\mathbf{x}_D$ is most outlying relative to other samples. However, they are computationally expensive, as outlier scores need to be computed for all samples $\mathbf{x}' \in \mathbf{X}$ instead of only the query sample $\mathbf{x}$.
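The normalization of Equation~\ref{eq:zscore} can be sketched as follows; \texttt{subspace\_scores} stands for the (hypothetical) outlier scores of all training samples in the same subspace $D$:

```python
from statistics import mean, pstdev

def z_score(score, subspace_scores):
    """z-score of an outlier score w.r.t. the scores of all training
    samples in the same subspace D (Equation 1 in the text)."""
    return (score - mean(subspace_scores)) / pstdev(subspace_scores)

# Hypothetical scores f(x'_D) of the training samples in subspace D:
subspace_scores = [0.9, 1.1, 1.0, 1.2, 0.8]
print(round(z_score(2.0, subspace_scores), 2))  # → 7.07
```

Note that evaluating this normalization requires one score per training sample, which is exactly the cost the text points out.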
In contrast to search-based methods, explanations can also be obtained via algorithm-agnostic, local explainability methods from the area of supervised learning, e.g. LIME \cite{ribeiro2016should} or SHAP \cite{lundberg2017unified}. They explain a prediction by assigning an importance value to each feature.
Feature importance-based explanation methods specifically tailored towards the outlier detection task have been proposed as well \cite{liu2018contextual,xu2021beyond}. For example, ATON \cite{xu2021beyond} consists of an embedding layer and a subsequent self-attention layer, where attention weights represent contribution of each embedding dimension to the outlyingness of an outlier. From these weights, feature importance weights (in the original feature space) can be computed. This method has been shown to produce more accurate explanations than general explainability methods (LIME, SHAP) for the task of outlier explanation.
\subsection{Sum-Product Networks}
\paragraph{Representation} A Sum-Product Network (SPN) \cite{poon2011sum} is a rooted directed acyclic graph representing a probability distribution over a sequence of random variables (RVs) $\mathbf{X} = X_1,\dots,X_n$. Each node represents a distribution $p_N$ over a subset $\mathbf{X}_{\phi(N)} \subseteq \mathbf{X}$, where $\phi(N) \subseteq \{1,\dots,n\}$ is called the scope of the node $N$. In the following, $\text{ch}(N)$ denotes the children of node $N$. An SPN contains three types of nodes: leaf nodes, product nodes and sum nodes. A product node represents a factorized distribution $p_N(\mathbf{X}_{\phi(N)}) = \prod_{C \in \text{ch}(N)} p_C(\mathbf{X}_{\phi(C)})$. A sum node represents a mixture distribution $p_N(\mathbf{X}_{\phi(N)}) = \sum_{C \in \text{ch}(N)} w_C \, p_C(\mathbf{X}_{\phi(C)})$. Finally, a leaf node directly represents a (tractable) univariate or multivariate distribution. \emph{Decomposability} (children of product nodes have pairwise disjoint scopes) and \emph{completeness} (children of sum nodes have identical scope) ensure that an SPN actually represents a valid probability distribution. By definition, the distribution represented by an SPN is the distribution defined by its root node.
Early research on SPNs focused on categorical distributions \cite{poon2011sum} or simple parametric leaf distributions, like Gaussians \cite{dennis2012learning}. More recently, SPNs with piecewise polynomial leaf distributions have been used to model continuous and mixed data \cite{molina2018mixed}.
\paragraph{Inference} The appealing property of SPNs is that any marginal distribution $p(\mathbf{X'}{=}\mathbf{x'})$ for a subset $\mathbf{X'} \subset \mathbf{X}$ can be computed efficiently. Intuitively, this is possible because summation over the marginalized RVs can be ``pushed down'' into the leaf nodes of the SPN \cite{peharz2015theoretical}. Thus, marginal inference reduces to marginalization of the leaves and evaluating the internal nodes of the SPN once. As leaves are usually chosen such that marginal inference in leaf distributions is possible in constant time, marginal inference is linear in the number of nodes of the SPN. Specifically, when the leaf distributions are univariate, the value of marginalized leaves can simply be set to 1.
\paragraph{Learning} A number of different learning algorithms for SPNs have been proposed. Early learning algorithms focused on structure learning \cite{gens2013learning,vergari2015simplifying,peharz2013greedy,molina2018mixed}. Most prominently, LearnSPN \cite{gens2013learning} is a greedy structure learning algorithm, which creates a tree-structured SPN in a top-down fashion. It recursively tests for independence of RVs (in which case it creates a product node and recurses), and otherwise clusters the data into subsets, creates a corresponding sum node and recurses. \citet{molina2018mixed} proposed an extension of LearnSPN which also works for continuous and mixed domains. Recently, \citet{peharz2020random} proposed a learning algorithm which first initializes a random SPN structure and then learns parameters via EM. This way, parameter learning can leverage fast, parallel GPU computations, as shown by \citet{peharz2020einsum}.
\section{Explainable Outlier Detection via SPNs} In this section, we present a novel outlier detection model, and show how outlier explanations can be extracted from this model in a straightforward way.
\subsection{SPNs for Outlier Detection}
Probabilistic outlier detection methods use a scoring function of the form $f(\mathbf{x}) = -p(\textbf{x};\theta)$.
The main idea of this paper is to use an SPN to represent the joint density $p(\textbf{x};\theta)$. This approach has several advantages, compared to existing outlier detection approaches: \begin{itemize} \item SPNs are powerful and expressive density estimators, reaching state-of-the-art performance in several density estimation tasks. Thus, they should be able to accurately learn $p(\textbf{x};\theta)$, leading to good density estimation (and subsequently, outlier detection) performance. \item SPNs can seamlessly handle mixed discrete-continuous domains, which is difficult for distance-based methods as well as methods using parametric distributions. \item SPNs directly lend themselves to outlier explanation due to their tractable inference. In fact, in this paper, we use them solely for this purpose. \end{itemize}
\subsection{SPNs for Outlier Explanation} \label{subsec:spn-explanation}
\begin{figure*}
\caption{Overview of proposed method for outlier detection and explanation. An SPN only needs to be trained once (1) and can subsequently be used to compute outlier scores (2), as well as anomalous feature subsets (i.e., explanations) via marginal inference in the SPN (3).}
\label{fig:abstract}
\end{figure*}
As discussed above, the central challenge in outlier explanation is to efficiently compute $f(\mathbf{x}_D)$ for subspaces $D$. For probabilistic outlier detection methods, this task is equivalent to computing a marginal distribution $f(\mathbf{x}_D) = p(\mathbf{x}_D; \theta)$. Such a marginal is obtained by integrating over all RVs $\mathbf{X} \setminus \mathbf{X}_D$. More formally, let $\bar{D} = \{1,\dots,n\}\setminus D$, and denote $\bar{D} = \{\bar{D}_1,\dots,\bar{D}_k\}$. The outlier score in subspace $D$ is given by \begin{equation} \begin{split} & f(\mathbf{x}_D) = p(\mathbf{x}_D;\theta) \\ = & \int_{x_{\bar{D}_1}} \dots \int_{x_{\bar{D}_k}} p(\mathbf{x};\theta) \, \text{d}x_{\bar{D}_1} \dots \text{d}x_{\bar{D}_k} \end{split} \label{eq:marginal} \end{equation} Explicitly computing such marginals is intractable for many expressive density estimators. Instead, the strategy taken by existing outlier explanation methods \cite{duan2015mining,wells2019new} is to project the training samples to the subspace $D$, and estimate the parameters of the model $p(\mathbf{x}_D;\theta)$ from those samples. This approach is computationally expensive, as outlier scores are computed for many subspaces during beam search for the subspace in which the sample is most outlying.
In SPNs, however, marginal inference is tractable: The time complexity of evaluating a marginal probability in Equation \ref{eq:marginal} is linear in the number of nodes of the SPN \cite{poon2011sum}, independently of the number of RVs that are marginalized---and irrespective of the number of original training samples, in contrast to approaches that perform parameter estimation for each subspace.
Hence, we propose the outlier detection and explanation model shown in Figure \ref{fig:abstract}: A single SPN representing the joint $p(\mathbf{x};\theta)$ is trained once, which can then subsequently be used to compute outlier scores as well as outlier explanations. Specifically, outlier explanations are computed via search in the feature subspaces, to identify the subspace where the outlier score of a sample is maximal. In the following, we discuss the search strategy as well as the strategy for selecting the dimensionality of the explanation in more detail.
\paragraph{Search Strategies} As computing outlier scores for all $2^n-1$ feature subspaces quickly becomes infeasible with an increasing number of features $n$, a search strategy that only explores promising subspaces is required. \textbf{Forward beam search}, which greedily adds features to the explanation, has been used for this task before \cite{vinh2016discovering}. More specifically, the beam search keeps a set of $B$ hypotheses (feature subspaces). In each step and for each hypothesis, it greedily adds the feature that maximizes the outlier score of the sample in the extended feature set. The search is carried out up to a maximum depth $S$. The search algorithm is shown in Algorithm \ref{alg:beam-search}.
\begin{algorithm}[t] \caption{forwardBeamSearch($\mathbf{x}$,$S$,$B$,$\theta$)} \label{alg:beam-search} \begin{algorithmic} \State \textbf{Input:} Outlier $\mathbf{x}$ of dimensionality $n$, maximum explanation size $S$, beam width $B$, distribution parameters $\theta$ (e.g., as an SPN) \State \textbf{Output:} For each $k \in \{1,\dots,S\}$, a subspace $D$ of size $k$ in which $\mathbf{x}_D$ is most anomalous \State{$D_1 \leftarrow \textsc{argLowestDensities}(B,\{\{1\},\dots,\{n\}\},\theta,\mathbf{x})$} \Comment{Store $B$ most outlying dimensions} \State{$D_1^{(\text{best})} \leftarrow \textsc{argLowestDensities}(1,\{\{1\},\dots,\{n\}\},\theta,\mathbf{x})$} \Comment{Overall most outlying dimension, needed as return value later} \For{$k \in \{2,\dots,S\}$} \State{$D_k \leftarrow \{\}$} \For{$D_{k-1}^{(i)} \in D_{k-1}$} \Comment{For all hypotheses, get all candidate subspaces of size $k$}
\State{$D_k \leftarrow D_k \cup \{D_{k-1}^{(i)} \cup \{d\} \,|\, d \in \{1,\dots,n\} \setminus D_{k-1}^{(i)}\}$} \EndFor \State{$D_k \leftarrow \textsc{argLowestDensities}(B,D_k,\theta,\mathbf{x})$} \Comment{Keep only $B$ most outlying subspaces as hypotheses for next iteration} \State{$D_k^{(\text{best})} \leftarrow \textsc{argLowestDensities}(1,D_k,\theta,\mathbf{x})$} \Comment{Store the most outlying subspace of size $k$ for returning it later} \EndFor \State \Return $D_1^{(\text{best})},\dots,D_S^{(\text{best})}$ \Function{argLowestDensities}{$B,D_k,\theta,\mathbf{x}$} \Comment{Get the $B$ subspaces from $D_k$ where $\mathbf{x}$ is least likely}
\State{$L \leftarrow \{ p(\mathbf{x}_d;\theta) \,|\, d \in D_k\}$}
\State \Return $\{d \,|\, d \in D_k, \text{rank}(p(\mathbf{x}_d;\theta),L) \leq B\}$ \EndFunction \end{algorithmic} \end{algorithm}
At depth $k$, each hypothesis consists of $k$ features, and $n-k$ candidate features remain to be explored (where $n$ is the overall number of features). Thus, up to depth $k$, $\sum_{i=1}^k (n-i) < n \, k$ feature subspaces are explored per hypothesis. In SPNs, computing an outlier score (a marginal probability density) amounts to evaluating the SPN (with $N$ nodes) once, resulting in an overall time complexity of beam search-based explanation of $\mathcal{O}(N\, n\, S)$, where $S$ is the maximum search depth.
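Assuming a generic scoring callable \texttt{score(D)} that returns the outlier score of the query sample in subspace $D$ (higher means more outlying, e.g., $-p(\mathbf{x}_D;\theta)$ evaluated by the SPN), the beam search of Algorithm~\ref{alg:beam-search} can be sketched as:

```python
def forward_beam_search(score, n, max_depth, beam_width):
    """Greedy forward beam search over feature subspaces.
    score(D) is the outlier score of the query sample in subspace D;
    higher means more outlying."""
    # Depth 1: keep the beam_width most outlying single features.
    beam = sorted((frozenset([d]) for d in range(n)), key=score, reverse=True)
    beam = beam[:beam_width]
    best = {1: beam[0]}
    for k in range(2, max_depth + 1):
        # Extend every hypothesis by every feature not yet contained in it.
        candidates = {hyp | {d} for hyp in beam for d in range(n) if d not in hyp}
        beam = sorted(candidates, key=score, reverse=True)[:beam_width]
        best[k] = beam[0]  # most outlying subspace of size k
    return best

# Toy score: the sample is anomalous exactly in features {1, 3},
# with a mild penalty on subspace size.
score = lambda D: len(D & {1, 3}) - 0.1 * len(D)
best = forward_beam_search(score, n=5, max_depth=3, beam_width=2)
print(best[2])  # → frozenset({1, 3})
```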
Intuitively, beam search works well when a sample that has high outlier score in a feature set of size $k$ also has a high outlier score in one of the subsets of size $k-1$. When this is not the case, beam search can fail to find reasonable explanations, as pointed out by \citet{xu2021beyond}.
To alleviate this problem, we propose a top-down, \textbf{backward elimination} search strategy to identify explanatory subsets. Instead of greedily adding dimensions, the search algorithm starts with the full feature set and then greedily removes one feature at a time, such that in the resulting feature subspace the outlyingness of the sample is maximal (compared to all other subspaces of that size). The algorithm is shown in Algorithm \ref{alg:explain-backward}.
Intuitively, when a sample is an outlier in a $k$-dimensional subspace, it cannot be a complete inlier in any $(k+1)$-dimensional superset of that subspace. Thus, starting from high dimensionality and only removing features can lead to more accurate results than bottom-up beam search.
At iteration $k$ of backward elimination, the feature subset consists of $n-k$ features. For each of the $n-k$ subsets of size $n-k-1$, an outlier score needs to be computed. The algorithm runs for $n$ iterations, resulting in $\sum_{k=0}^n (n-k) < n^2$ explored subsets. Thus, the overall runtime complexity of backward elimination is $\mathcal{O}(n^2\, N)$, where $N$ is the number of nodes of the SPN.
\begin{algorithm}[t] \caption{backwardElimination($\mathbf{x}$,$\theta$)} \label{alg:explain-backward} \begin{algorithmic} \State \textbf{Input:} Outlier $\mathbf{x}$ of dimensionality $n$, distribution parameters $\theta$ (e.g., as an SPN) \State \textbf{Output:} For each $k \in \{1,\dots,n\}$, a subspace $D_k$ of size $k$ in which $\mathbf{x}_{D_k}$ is most anomalous \State{$D_n \leftarrow \{1,\dots,n\}$} \For{$k \in \{n,\dots,2\}$} \State{$d_k \leftarrow \underset{d \in D_k}{\text{argmin }} p(\mathbf{x}_{D_k \setminus \{d\}};\theta)$} \Comment{Remove the feature whose removal leaves the least likely remainder} \State{$D_{k-1} \leftarrow D_k \setminus \{d_k\}$} \EndFor \State \Return $D_1,\dots,D_n$ \end{algorithmic} \end{algorithm}
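The backward elimination strategy described above can be sketched compactly. This is our own illustration, not the paper's implementation: a fully-factorised Gaussian model stands in for the SPN marginal, and the helper names are hypothetical.

```python
import math

def gauss_logpdf(x, mu, sigma):
    return -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))

def marginal_logdensity(x, D, params):
    # Stand-in for the SPN marginal: a single product node over independent
    # Gaussian leaves. Any model with tractable marginals can be plugged in.
    return sum(gauss_logpdf(x[d], *params[d]) for d in D)

def backward_elimination(x, params):
    """subspaces[k] = the subset of size k in which x is most anomalous.

    At each step, remove the feature whose removal leaves the *least likely*
    remainder, i.e. keeps the sample maximally outlying."""
    n = len(x)
    D = set(range(n))
    subspaces = {n: set(D)}
    for k in range(n, 1, -1):
        d_k = min(D, key=lambda d: marginal_logdensity(x, D - {d}, params))
        D = D - {d_k}
        subspaces[k - 1] = set(D)
    return subspaces
```

On a sample that is anomalous in a single feature, the anomalous feature survives all removals and ends up as the size-1 explanation.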
\paragraph{Dimensionality Selection} Both beam search and backward elimination result in an outlier score for each visited feature subset. As a last step, one of the subsets needs to be selected as the explanation. Simply selecting the subset with the lowest density might not be optimal, because scores for different dimensionalities are usually not directly comparable. Specifically, the densities $p(\mathbf{x}_D;\theta)$ will typically be smaller for larger dimensionality of $\mathbf{x}_D$.
\citet{vinh2016discovering} introduce \emph{dimensionality-unbiasedness} as a desideratum for outlier scores to allow for such comparison. Dimensionality-unbiasedness can be achieved, for example, by z-score transformation of the score in each subspace $D$, w.r.t.\ the scores of all samples in $D$ (see Equation \ref{eq:zscore}). However, these transformations are computationally inefficient as outlier scores need to be computed for all samples instead of only the query sample.
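For reference, a minimal sketch of such a z-score transformation (our reading of the idea; the exact normalisation in Equation \ref{eq:zscore} may differ). It also makes the stated inefficiency visible: the transformation needs the log densities of \emph{all} samples in the subspace, not just the query's.

```python
import math

def zscore_outlier_score(query_logp, all_logp):
    """Z-score of the query's log density w.r.t. the log densities of all
    training samples in the same subspace (population standard deviation)."""
    n = len(all_logp)
    mean = sum(all_logp) / n
    std = math.sqrt(sum((v - mean) ** 2 for v in all_logp) / n)
    return (query_logp - mean) / std
```

Because the score is standardised per subspace, values from subspaces of different dimensionality become comparable, at the cost of scoring the whole training set in every candidate subspace.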
Instead, we propose to use the \emph{elbow} method to select the optimal feature subset size (which has, for example, been used for determining the optimal number of clusters in k-means clustering \cite{aggarwal2015data}): In real datasets, we often observe a large difference between the minimal log density of all examined feature subsets of size $k$ and $k+1$ for a given sample, as shown in Figure \ref{fig:steep-ll}. In this case, we assume the subspace of size $k+1$ to be the explanation for that sample. More concretely, we compute the differences between the lowest log densities of subsequent subset sizes, and then return the lowest-dimensional subspace where the difference is larger than a threshold $\kappa$. When a difference of at least $\kappa$ never occurs, we return the single feature with the lowest univariate density.
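The elbow rule just described can be sketched as follows (a sketch under our reading; the input representation and fallback are assumptions, not the paper's code):

```python
import math

def elbow_dimension(lowest_logdens, kappa=math.e):
    """Pick the explanation size via the elbow heuristic.

    lowest_logdens[i] is the minimal log density over all examined subsets
    of size i+1. Return the smallest size k+1 such that the drop from size k
    to size k+1 exceeds kappa; fall back to size 1 (the single feature with
    the lowest univariate density) if no such drop occurs."""
    for i in range(1, len(lowest_logdens)):
        if lowest_logdens[i - 1] - lowest_logdens[i] > kappa:
            return i + 1
    return 1
```

Unlike the z-score transformation, this only needs the per-size minimal log densities of the query sample itself.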
\begin{figure}
\caption{Two examples of the minimal log probability density of a sample, for different feature subset sizes $d$ in the WBC dataset. In the left example, the true explanatory subspace has $d=4$, and $d=2$ in the right example. This can be easily identified by the drop in LL.}
\label{fig:steep-ll}
\end{figure}
\section{Experimental Evaluation} \label{sec:exp-evaluation}
The goal of the experiments was to evaluate the outlier explanation performance of our proposed method. Specifically, we aimed at answering the following research questions: \begin{itemize} \item[\textbf{Q1}] How accurate are the explanations provided by our SPN-based approach for synthetic and real-world datasets, compared to state-of-the-art methods? \item[\textbf{Q2}] How do the forward beam search and backward elimination search strategies for SPN-based outlier explanation compare, w.r.t.\ explanation performance? \item[\textbf{Q3}] How does the proposed elbow method for dimensionality selection compare with the previously used method based on z-score transformation, w.r.t.\ explanation performance? \item[\textbf{Q4}] How does the runtime of our SPN-based approach scale w.r.t.\ data dimensionality, compared to state-of-the-art methods? \end{itemize}
\subsection{Data Sets}
In our experiments, we used two groups of datasets:
\paragraph{Synthetic Outlier Explanation Datasets} Evaluating outlier \emph{explanations} is not straightforward due to the lack of ground truth explanations. Therefore, we first used 21 synthetic datasets\footnote{Available at \url{www.ipd.kit.edu/mitarbeiter/muellere/HiCS}.} created by \citet{keller2012hics} for the purpose of evaluating outlier ranking algorithms. Each dataset consists of 10, 20, 30, 40, 50, 75 or 100 features (3 datasets per number of features) and contains 1000 samples, 19 to 136 of which are outliers. The datasets were created in such a way that each outlier is easily detectable in a pre-defined, 2- to 5-dimensional feature subset (which varies between outliers), but is an inlier in any lower-dimensional projection of the data. The goal of outlier explanation is to retrieve exactly those feature indices for each outlier. \paragraph{Real-World Outlier Explanation Datasets} Additionally, we evaluated outlier explanation performance on nine real-world datasets\footnote{Available at \url{github.com/xuhongzuo/outlier-interpretation}.} provided by \citet{xu2021beyond}.
To cope with the lack of ground-truth explanations, they created explanation labels for a set of real-world datasets as follows: First, each dataset was reduced to its first ten principal components. Then, for each dataset and each feature subset of that dataset, three outlier detection algorithms (Isolation Forests \cite{liu2008isolation}, COPOD \cite{li2020copod} and HBOS \cite{goldstein2012histogram}) were applied to the subspace. The explanation label of an outlier was defined to be the feature subset where the outlier score is maximal (w.r.t.\ the algorithm). As a result of this procedure, each dataset has three distinct explanation labels per outlier, corresponding to the three outlier detection algorithms. From the available twelve datasets, we selected those nine datasets where at least one of the three outlier detection algorithms could achieve more than 0.5 ROC AUC, to ensure that the notion of outliers (and thus outlier explanations) is sensible.
\subsection{Experiments}
We compared our SPN-based outlier explanation algorithm to the following state-of-the-art outlier explanation algorithms: \begin{itemize} \item \textbf{ATON} \cite{xu2021beyond}, a state-of-the-art neural network model for outlier explanation based on attention. \item \textbf{COIN} \cite{liu2018contextual} is an outlier explanation method which fits a set of classifiers that separate outliers from clusters of nearby normal data, and uses the weights in the classifiers as feature importance values. \item \textbf{SiNNE} \cite{samariya2020new} is the latest contribution in a line of search-based outlier detection algorithms including \cite{vinh2016discovering} and \cite{wells2019new}. Unlike its predecessors, the approach uses a \emph{dimensionality-unbiased} outlier score function that does not require post-hoc normalization. \end{itemize}
We used the implementations of these algorithms provided by \citet{xu2021beyond}\footnote{\url{github.com/xuhongzuo/outlier-interpretation}}. Our SPN-based outlier explanation algorithm was implemented in Python. We used the SPFlow library \cite{Molina2019SPFlow} for fitting and inference in SPNs, and the LearnSPN algorithm \cite{gens2013learning} for SPN structure learning.
All SPN learning hyperparameters were set to fixed values across all experiments and datasets as follows: We used Gaussian leaf distributions for real features and categorical leaf distributions for categorical features. During row splits, the data was partitioned via Expectation Maximization for Gaussian Mixture Models, using 2 mixture components. The Randomized Dependence Coefficient (RDC) \cite{lopez2013randomized} was used as independence test, setting $\alpha = 0.6$. The threshold for fitting leaves was set to $m=200$ samples to prevent overfitting. For beam search, we used a fixed beam width of 10, and set the threshold to $\kappa = \text{exp}(1)$.
This choice of SPN hyperparameters was based on \citet{vergari2015simplifying}, who found these hyperparameters to perform well on a set of 20 benchmark datasets (which are different from the datasets investigated here). Optimization of these hyperparameters on a validation set is possible and could improve SPN performance further. However, hyperparameter tuning was not attempted here as these fixed parameters already achieved good performance.
\section{Results}
\subsection{Outlier Explanation Performance}
\setlength{\tabcolsep}{5pt}
\begin{table}[t] \begin{small} \centering
\begin{tabular}{rrr|rrr}
\toprule D & SPN-fw & SPN-bw & ATON & COIN & SiNNE \\
\midrule 10 & 0.867 (2) & 0.799 (5) & 0.806 (4) & \textbf{0.933 (1)} & 0.86 (3) \\
20 & \textbf{0.668 (1)} & 0.646 (4) & 0.589 (5) & 0.667 (2) & 0.65 (3) \\
30 & 0.562 (2) & \textbf{0.676 (1)} & 0.497 (4) & 0.427 (5) & 0.54 (3) \\
40 & 0.399 (2) & \textbf{0.634 (1)} & 0.348 (3) & 0.261 (4) & - \\
50 & 0.351 (2) & \textbf{0.682 (1)} & 0.3 (3) & 0.227 (4) & - \\
75 & 0.355 (2) & \textbf{0.698 (1)} & 0.205 (3) & 0.158 (4) & - \\
100 & 0.267 (2) & \textbf{0.611 (1)} & 0.154 (3) & 0.118 (4) & - \\
\midrule
Mean & 0.496 (2) & \textbf{0.678 (1)} & 0.414 (3) & 0.399 (4) & - \\
\bottomrule \end{tabular} \end{small} \caption{Outlier explanation performance (F1 score of retrieved relevant dimensions and F1 score rank) for synthetic datasets. SPN-fw and SPN-bw denote SPN-based outlier explanation with forward and backward search strategies, respectively. SiNNE did not finish in less than 5,000 seconds for $D \geq 30$. } \label{tbl:synthetic-results} \end{table}
\paragraph*{Synthetic Data} To assess \textbf{Q1} and \textbf{Q2}, we first evaluated the quality of the explanations (in terms of F1 score of retrieved features) on the synthetic datasets. We evaluated both forward beam search and backward elimination search.
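For clarity on the evaluation metric: the F1 score of retrieved features is the standard set-overlap F1 between the predicted feature subset and the labelled explanatory subset (our sketch of that computation; function and variable names are ours).

```python
def explanation_f1(predicted, ground_truth):
    """F1 overlap between a predicted feature subset and the labelled one."""
    predicted, ground_truth = set(predicted), set(ground_truth)
    tp = len(predicted & ground_truth)  # correctly retrieved features
    if tp == 0:
        return 0.0
    precision = tp / len(predicted)
    recall = tp / len(ground_truth)
    return 2 * precision * recall / (precision + recall)
```

A score of 1 means the explanation matches the ground-truth subset exactly; retrieving extra features lowers precision, missing features lowers recall.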
Table \ref{tbl:synthetic-results} shows F1 scores of the different outlier explanation methods. For each data dimensionality $D$, mean F1 scores of the three datasets of that dimensionality are reported. Regarding \textbf{Q1}, both SPN-based approaches outperformed the state-of-the-art methods (except for $D=10$), with an increasingly large difference in F1 for increasing $D$. SPN-bw (SPN with backward elimination search) is the only method where F1 score did not decrease substantially for larger data dimensionality, achieving good explanation performance even for $D=100$. With regards to the two search strategies (\textbf{Q2}), it can be seen that backward elimination outperformed beam search for higher-dimensional cases. We suspect that this is due to the fact that beam search is susceptible to missing relevant dimensions when their number increases (and the beam width stays constant), whereas backward elimination is more stable w.r.t. dimensionality.
\begin{table}[t!] \centering \begin{scriptsize} \begin{tabular}{llllll}
\toprule dataset & SPN-fw & SPN-bw & ATON & COIN & SiNNE \\
\midrule \multirow{3}{*}{arrhythmia} & \textbf{0.742 (1)} & 0.726 (2) & 0.676 (3) & 0.367 (5) & 0.564 (4) \\
& \textbf{0.635 (1)} & 0.577 (3) & 0.596 (2) & 0.398 (5) & 0.499 (4) \\
& 0.695 (2) & \textbf{0.751 (1)} & 0.557 (3) & 0.273 (5) & 0.473 (4) \\
\midrule
\multirow{3}{*}{ionosphere} & \textbf{0.644 (1)} & 0.488 (4) & 0.622 (3) & 0.629 (2) & 0.482 (5) \\
& 0.59 (2) & 0.452 (5) & \textbf{0.671 (1)} & 0.573 (3) & 0.454 (4) \\
& \textbf{0.658 (1)} & 0.564 (4) & 0.618 (3) & 0.647 (2) & 0.433 (5) \\
\midrule
\multirow{3}{*}{letter} & \textbf{0.701 (1)} & 0.519 (5) & 0.665 (3) & 0.562 (4) & 0.668 (2) \\
& 0.641 (2) & 0.388 (5) & \textbf{0.664 (1)} & 0.554 (4) & 0.614 (3) \\
& \textbf{0.778 (1)} & 0.752 (2) & 0.545 (4) & 0.403 (5) & 0.616 (3) \\
\midrule
\multirow{3}{*}{optdigits} & \textbf{0.754 (1)} & 0.45 (5) & 0.671 (2) & 0.607 (4) & 0.654 (3) \\
& \textbf{0.725 (1)} & 0.472 (5) & 0.672 (2) & 0.593 (4) & 0.622 (3) \\
& \textbf{0.887 (1)} & 0.871 (2) & 0.557 (4) & 0.298 (5) & 0.58 (3) \\
\midrule
\multirow{3}{*}{pima} & 0.589 (2) & 0.538 (5) & \textbf{0.673 (1)} & 0.553 (4) & 0.588 (3) \\
& 0.632 (2) & 0.515 (5) & \textbf{0.65 (1)} & 0.586 (3) & 0.557 (4) \\
& \textbf{0.747 (1)} & 0.656 (2) & 0.531 (3) & 0.415 (5) & 0.441 (4) \\
\midrule
\multirow{3}{*}{satimage} & 0.604 (2) & \textbf{0.612 (1)} & 0.585 (3) & 0.429 (4) & 0.429 (5) \\
& 0.661 (2) & 0.59 (3) & \textbf{0.664 (1)} & 0.539 (4) & 0.41 (5) \\
& 0.746 (2) & \textbf{0.823 (1)} & 0.541 (3) & 0.247 (5) & 0.442 (4) \\
\midrule
\multirow{3}{*}{wbc} & \textbf{0.718 (1)} & 0.63 (2) & 0.604 (3) & 0.56 (5) & 0.57 (4) \\
& 0.552 (2) & 0.447 (5) & \textbf{0.601 (1)} & 0.461 (4) & 0.499 (3) \\
& \textbf{0.679 (1)} & 0.659 (2) & 0.579 (4) & 0.639 (3) & 0.502 (5) \\
\midrule
\multirow{3}{*}{wineRed} & 0.436 (3) & 0.366 (5) & \textbf{0.661 (1)} & 0.429 (4) & 0.505 (2) \\
& 0.432 (4) & 0.367 (5) & \textbf{0.652 (1)} & 0.45 (3) & 0.493 (2) \\
& \textbf{0.491 (1)} & 0.407 (4) & 0.481 (2) & 0.408 (3) & 0.361 (5) \\
\midrule
\multirow{3}{*}{wineWhite} & 0.526 (3) & 0.454 (4) & \textbf{0.619 (1)} & 0.436 (5) & 0.531 (2) \\
& 0.469 (4) & 0.428 (5) & \textbf{0.605 (1)} & 0.497 (3) & 0.528 (2) \\
& \textbf{0.569 (1)} & 0.529 (2) & 0.479 (3) & 0.38 (5) & 0.388 (4) \\
\midrule
Mean & \textbf{0.641 (1)} & 0.557 (3) & 0.609 (2) & 0.479 (5) & 0.515 (4) \\
\bottomrule \end{tabular} \end{scriptsize}
\caption{Outlier explanation performance (F1 score of retrieved relevant dimensions and F1 score rank) for real-world datasets. The three rows for each dataset correspond to the three ground truth explanation labels. SPN-fw and SPN-bw denote SPN-based outlier explanation with forward and backward search strategies, respectively. } \label{tbl:xu-results} \end{table}
\paragraph*{Real Data}
Next, we evaluated outlier explanation performance on the real-world datasets processed by \citet{xu2021beyond}. The results for ATON, COIN and SiNNE were taken directly from the paper introducing ATON \cite{xu2021beyond}. Table \ref{tbl:xu-results} shows the empirical results. For these datasets, our SPN-based approach (with forward search) outperformed the state-of-the-art in 17 out of 27 cases (63\%).
With respect to \textbf{Q2}, forward beam search generally outperformed backward elimination for these datasets, which is consistent with the results for the synthetic data: Keep in mind that the data were preprocessed by \citet{xu2021beyond} such that they were at most 10-dimensional. For such low-dimensional data, forward beam search (with a beam width of 10) was still able to identify explanations correctly.
Overall, the empirical results are encouraging: For the high-dimensional (synthetic) data, our SPN-based approach achieved a new state-of-the-art, and for the (low-dimensional) real-world data, our approach still outperformed state-of-the-art methods in 63\% of the cases.
\subsection{Dimensionality Selection}
\begin{table}[t] \centering \begin{small} \begin{tabular}{rrrrrrr}
\toprule & \multicolumn{3}{l}{Forward Beam Search} & \multicolumn{3}{l}{Backward Elimination}\\
\cmidrule(lr){2-4} \cmidrule(lr){5-7} $D$ & elbow-1 & elbow-e & zscore & elbow-1 & elbow-e & zscore \\
\midrule
10 & \textbf{0.85} & \textbf{0.85} & 0.69 & 0.78 & 0.79 & 0.80 \\
20 & \textbf{0.68} & \textbf{0.68} & 0.30 & 0.65 & 0.65 & 0.65 \\
30 & 0.59 & 0.59 & 0.41 & \textbf{0.68} & 0.67 & 0.69 \\
40 & 0.40 & 0.40 & 0.25 & 0.58 & 0.57 & \textbf{0.60} \\
50 & 0.36 & 0.35 & 0.22 & \textbf{0.72} & 0.70 & 0.71 \\
75 & 0.35 & 0.34 & 0.23 & 0.73 & 0.72 & \textbf{0.75} \\
100 & 0.26 & 0.26 & 0.17 & 0.62 & 0.61 & \textbf{0.64} \\
\bottomrule \end{tabular} \end{small} \caption{Outlier explanation performance (F1 score) for synthetic data with different search strategies and dimensionality selection methods. elbow-1: elbow dimensionality selection with $\kappa=1$; elbow-e: with $\kappa=\text{exp}(1)$; zscore: z-score dimensionality selection. } \label{tbl:dim-selection-res} \end{table}
To assess \textbf{Q3}, we compared the z-score-based method for dimensionality selection with the elbow method, with thresholds $\kappa=1$ and $\kappa=\text{exp}(1)$. The outlier explanation results are shown in Table \ref{tbl:dim-selection-res}.
For the elbow method, the results are insensitive to the value of $\kappa$, in both forward beam search as well as backward elimination. This observation is consistent with the intuition given in Figure \ref{fig:steep-ll}: The difference in log likelihood between the true explanatory subspace and any lower-dimensional projection is often large, such that the actual value of $\kappa$ is less relevant. Furthermore, results of the elbow method are not worse (and sometimes even better for forward beam search) than the z-score method, while being computationally less expensive (the z-score requires to compute outlier scores of all training samples in all investigated subspaces, while the elbow method does not).
Overall, the results show that the elbow method is a viable alternative to the conventional z-score transformation for selecting the dimensionality of the explanation.
\subsection{Runtime}
Finally, we compared the runtime of SPN-bw and SPN-fw (both using the elbow method for selecting the explanation dimensionality) with runtime of the state-of-the-art models.
All models were trained and evaluated on an 8-core laptop CPU (Intel Core i7-10510U). Figure \ref{fig:runtime} shows the runtime of the models for varying data dimensionality.
For a moderate number of dimensions ($D \leq 100$), our SPN-based methods have a substantially lower explanation generation runtime than the other search-based method (SiNNE) and comparable runtime to the deep learning-based method (ATON).
Note that both ATON as well as the SPNs could also be evaluated on a GPU, which could influence relative performance. Specifically, \citet{peharz2020einsum} recently proposed an efficient GPU implementation of SPNs which is sometimes orders of magnitude faster than other implementations.
Overall, the results indicate the computational feasibility of our approach and competitiveness to state-of-the-art methods.
\begin{figure}
\caption{Runtime of the different outlier explanation methods. For each dataset and method, we measured the overall runtime of explaining all outliers of that dataset.}
\label{fig:runtime}
\end{figure}
\section{Discussion and Conclusion}
In this paper, we proposed to use Sum-Product Networks (SPNs) for outlier detection and explanation. SPNs can model high-dimensional, mixed discrete-continuous distributions accurately and efficiently. Due to the tractability of marginal inference in SPNs, identifying explanations (feature subsets in which a sample is specifically anomalous) becomes efficient, allowing the use of backward elimination search. We empirically showed that our approach can generate more accurate explanations than existing methods, clearly outperforming other search-based approaches, and even outperforming deep learning-based methods in the majority of the cases.
Here, we only investigated outlier explanation for tabular data. Applying SPNs to the closely related task of \emph{image anomaly localization} \cite{venkataramanan2020attention} is a possible next step. For this task, SPNs suitable for images (like Deep Convolutional SPNs \cite{butz2019deep}) together with efficient SPN training algorithms and implementations (like the recently proposed Einsum Networks \cite{peharz2020einsum}) are an attractive option.
\end{document} |
\begin{document}
\title{On constructing benchmark quantum circuits with known near-optimal transformation cost}
\author{Sanjiang~Li,
Xiangzhen~Zhou,
Yuan~Feng \IEEEcompsocitemizethanks{\IEEEcompsocthanksitem Sanjiang Li and Yuan Feng are with Centre for Quantum Software and Information (QSI), Faculty of Engineering and Information Technology, University of Technology Sydney, NSW 2007, Australia. Xiangzhen Zhou is with Nanjing Tech University and Tsinghua University. \protect\\
E-mail: $\{$sanjiang.li, yuan.feng$\}$@uts.edu.au}
}
\date{} \maketitle
\begin{abstract} Current quantum devices impose strict connectivity constraints on quantum circuits, making circuit transformation necessary before running logical circuits on real quantum devices. Many quantum circuit transformation (QCT) algorithms have been proposed in the past several years. This paper proposes a novel method for constructing benchmark circuits and uses these benchmark circuits to evaluate state-of-the-art QCT algorithms, including $\text{t}\!\ket{\mathrm{ket}}$\ from Cambridge Quantum Computing, Qiskit from IBM, and three academic algorithms \textsf{SABRE}, \textsf{SAHS}, and \textsf{MCTS}. These benchmarks have known near-optimal transformation costs and thus are called \texttt{QUEKNO}\ (for quantum examples with known near-optimality). Compared with \texttt{QUEKO}\ benchmarks designed by Tan and Cong (2021), which all have zero optimal transformation costs, \texttt{QUEKNO}\ benchmarks are more general and can provide a more faithful evaluation for QCT algorithms (like $\text{t}\!\ket{\mathrm{ket}}$) which use subgraph isomorphism to find the initial mapping. Our evaluation results show that \textsf{SABRE}\ can generate transformations with conspicuously low average costs on the 53-qubit IBM Q Rochester and Google's Sycamore in both gate size and depth objectives. \end{abstract}
\section{Introduction} \label{sec:intro} Current noisy intermediate-scale quantum (NISQ) devices impose strict connectivity constraints on quantum circuits, which specify that a 2-qubit gate (like CNOT or CZ) can only be executed between two neighbouring qubits. This makes circuit transformation necessary before running ideal (logical) circuits on real quantum devices. This critical procedure is known as \emph{quantum circuit transformation} (QCT), qubit mapping, or layout synthesis, and is in a sense analogous to the classical layout synthesis task. \xblue{QCT is an important component of quantum circuit compilation and has attracted broad interest from areas such as quantum computing \cite{ChildsSU19-qct,Cowtan+19-tket,Nannicini+21_bipmapping,Saeedi+11_synthesis,Venturelli+18_Planner}, electronic design automation \cite{Ash-Saki+19_qure,Deng0L20_codar,Itoko+19_commutation,TanC21-gate_absorption,Xie+21_commutativity,ZhouFL20_MCTS_iccad,Zhou+22_MCTS_Todaes,Zhou+20_SAHS,Zulehner+18_Astar}, and computer architecture \cite{Li+19-sabre,Liu+22_not_all_swap,Murali+19,MuraliMMJ20,TannuQ19, Zhang+21-time}.} In this paper, we refer to this procedure, interchangeably, as qubit mapping and (quantum) circuit transformation.
In the past several years, many QCT algorithms have been proposed. Given an ideal quantum circuit, such a QCT algorithm transforms the quantum circuit by first constructing an initial mapping and then repeatedly inserting SWAP gates to schedule 2-qubit gates so that they are all executable on the target quantum device. \xblue{Depending on the optimisation objective, the \emph{cost} of such a transformation is either the number of SWAPs inserted or the depth difference between the circuits before and after transformation.} As the qubit mapping problem is NP-complete \cite{Siraichi+18,TanC21-queko}, exact algorithms can often only handle circuits with up to 10 qubits. Most QCT algorithms are heuristic and thus their optimality is hard to evaluate. Tan and Cong \cite{TanC21-queko} propose to generate and use benchmark circuits with known optimality to evaluate different QCT algorithms. These benchmark circuits are called \texttt{QUEKO}, standing for `quantum examples with known optimality'. By comparing several state-of-the-art QCT algorithms on \texttt{QUEKO}, they show that current (heuristic) QCT algorithms are still far from optimal. This is a brilliant idea: for the first time, one can tell how far such an algorithm is from the optimal solutions. This kind of research will undoubtedly boost the investigation of better circuit transformation algorithms.
A striking feature of the {\texttt{QUEKO}} benchmarks is that, \xblue{with a proper initial mapping, every circuit can be transformed with zero cost, i.e., no SWAP gates are required in the optimal transformation.} This characteristic of {\texttt{QUEKO}} leads to two concerns. The first is that these circuits are perhaps \emph{not representative} as no SWAP gates are inserted in the optimal transformation of these benchmarks. The second is that QCT algorithms that adopt subgraph isomorphism \xblue{(cf. Definition~\ref{dfn:subgraph_isomorphism})}, e.g., the algorithms presented in \cite{siraichi+19_bmt, LiZF21_fidls}, can in principle find the ideal initial mapping and thus transform every {\texttt{QUEKO}} benchmark circuit with zero cost. For example, the \textsf{FiDLS}\ algorithm \cite{LiZF21_fidls} transforms all 20-qubit {\texttt{QUEKO}} benchmarks on IBM Q Tokyo with zero cost and thus \xblue{\texttt{QUEKO}\ cannot provide an effective evaluation for the optimality of \textsf{FiDLS}.} The industry-level router $\text{t}\!\ket{\mathrm{ket}}$\ (from Cambridge Quantum Computing) has a mapping pass, called \texttt{GraphPlacement}, which also uses subgraph isomorphism to help construct the initial mapping. In their evaluation on \texttt{QUEKO}\ benchmarks, Tan and Cong use \texttt{GraphPlacement} as the mapping pass of $\text{t}\!\ket{\mathrm{ket}}$\ and show that $\text{t}\!\ket{\mathrm{ket}}$\ has the best performance among all compared QCT algorithms. It is unclear how much this outstanding performance of $\text{t}\!\ket{\mathrm{ket}}$ is due to \texttt{GraphPlacement} as it can find the correct embeddings for some, though not all, circuits. To provide a fair evaluation of the optimality of these QCT algorithms, we need to construct new benchmark circuits so that their optimal transformations require nonzero transformation costs.
\begin{figure*}\label{fig:opt_sum}
\end{figure*}
To construct a benchmark circuit {such that its optimal transformed circuit has depth $k$,} Tan and Cong \cite{TanC21-queko} propose the following 3-phase strategy: First, construct a \textsf{backbone} which is a sequence of $k$ gates $g_1,g_2,\ldots,g_k$, such that any two consecutive gates share a common qubit and, for every $1\leq i\leq k$, if $g_i$ is a 2-qubit gate acting on qubits $p,q$ then $p,q$ are connected by an edge in the device architecture; second, randomly \textsf{sprinkle} 1- and 2-qubit gates on available positions at each time slot according to predefined gate densities; third, \textsf{scramble} the circuit by permuting its gates with a random permutation $\pi$ on the vertex set of the device architecture. By construction, the permuted circuit is executable with the initial mapping $\pi^{-1}$ (the inverse of $\pi$). This means that the circuit has zero optimal transformation cost.
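The scramble phase and the role of $\pi^{-1}$ can be illustrated with a minimal sketch (the list-of-tuples circuit representation below is ours, purely for illustration): relabelling every qubit by $\pi$ and then mapping the result through $\pi^{-1}$ recovers the original, hardware-executable circuit.

```python
def scramble(circuit, pi):
    """Relabel every qubit q of each gate with pi[q] (the scramble phase)."""
    return [tuple(pi[q] for q in gate) for gate in circuit]

def invert(pi):
    """Inverse permutation: invert(pi)[pi[q]] == q."""
    inv = [0] * len(pi)
    for q, p in enumerate(pi):
        inv[p] = q
    return inv

# A circuit executable on a 4-qubit line 0-1-2-3 (2-qubit gates on neighbours).
circuit = [(0, 1), (1, 2), (2,), (2, 3)]
pi = [2, 0, 3, 1]  # a random relabelling of the 4 device qubits
scrambled = scramble(circuit, pi)
# Mapping the scrambled circuit through pi^{-1} restores executability:
restored = scramble(scrambled, invert(pi))
```

The scrambled circuit looks unstructured, yet choosing $\pi^{-1}$ as the initial mapping makes it executable with zero SWAPs, which is exactly the \texttt{QUEKO}\ property discussed above.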
Inspired by {\texttt{QUEKO}}, this paper aims to construct benchmark circuits that alleviate the above-mentioned shortcomings of {\texttt{QUEKO}} and therefore provide faithful and more representative benchmarks for evaluating QCT algorithms. As the quantum devices targeted by this paper have medium to large size, it is challenging to generate representative circuits with exact (nonzero) optimal transformation costs. We circumvent this obstacle by constructing benchmarks with known \emph{near-optimal} transformation costs.
Our construction is motivated by the observation (see Theorem~\ref{thm:qct}) that any optimal circuit transformation with swap cost $k$ can be obtained by (i) partitioning the input circuit $C$ into $s\leq k+1$ consecutive sections $C_1,\ldots,C_s$ such that each $C_i$ is nonempty and executable with some mapping $\sigma_i$ (which can be regarded as a permutation on device qubits), (ii) inserting between $C_i$ and $C_{i+1}$ a SWAP circuit (i.e., a circuit consisting of SWAP gates) $S_i$ that implements the permutation $\sigma_{i+1}\circ \sigma_i^{-1}$, and (iii) using the permutation $\sigma_1$ as the initial mapping, where the total number of SWAP gates in these $S_i$ is $k$.
In order to generate a circuit with known near-optimal cost, our strategy is simply to reverse the above transformation process. Let $\mathbb{AG}$ be the architecture graph of the target quantum device. A circuit $C$ consisting of 1- and 2-qubit gates is called an $\mathbb{AG}$-circuit if, for every 2-qubit gate $g$ in $C$, the two qubits of $g$ are neighbours in $\mathbb{AG}$. As in {\texttt{QUEKO}}, we follow a 3-phase construction process: First, we generate a \textsf{backbone}, which consists of a sequence of random subgraphs $G_1,\cdots,G_s$ of $\mathbb{AG}$ that are linked by a sequence of permutations $\pi_2,\cdots,\pi_{s}$ such that the union graph of $G_i$ and $\pi_{i+1}(G_{i+1})$\footnote{The graph permuted by $\pi_{i+1}$ from $G_{i+1}$, see Definition~\ref{dfn:pgraph}.} is not embeddable in $\mathbb{AG}$ for each $1\leq i<s$. Second, for each subgraph $G_i$, we construct an $\mathbb{AG}$-circuit $\widetilde{C}_i$ by randomly inserting (i.e., \textsf{sprinkling}) 1- and 2-qubit gates such that every 2-qubit gate acts on two neighbouring qubits in $G_i$ and, for each edge $(p,q)$ in $G_i$, there is at least one 2-qubit gate in $\widetilde{C}_i$ acting on $p$ and $q$. Third, we \textsf{scramble} each circuit $\widetilde{C}_i$ with the permutation $\pi_1\circ\cdots\circ\pi_{i}$ and denote the resultant circuit by $C_i$, where $\pi_1$ is a new arbitrary permutation. The circuit $C\ensuremath{\triangleq} C_1+C_2+\cdots +C_s$ (`+' denotes concatenation) is called a \texttt{QUEKNO}\ (Quantum Examples with Known Near-Optimality) circuit. Our Theorem~\ref{thm:quekno} guarantees that $C$ can be transformed with $k=\sum_{i=2}^{s}\swapnorm{\pi_{i}}$ swaps, where $\swapnorm{\pi}$ is the minimum number of swaps required to implement a permutation $\pi$.
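The quantity $\swapnorm{\pi}$ is easy to compute when swaps between arbitrary qubit pairs are allowed: each cycle of length $L$ in the cycle decomposition of $\pi$ needs exactly $L-1$ transpositions. The sketch below (our illustration) computes this unconstrained count; on a constrained architecture graph, where SWAPs are restricted to edges, the required number can only be larger (this is the token swapping problem).

```python
def min_swaps_unconstrained(pi):
    """Minimum number of transpositions realising permutation pi when any
    pair of qubits may be swapped: sum over cycles of (cycle length - 1)."""
    seen = [False] * len(pi)
    swaps = 0
    for start in range(len(pi)):
        if seen[start] or pi[start] == start:
            continue  # fixed points and already-counted cycles cost nothing
        length, q = 0, start
        while not seen[q]:
            seen[q] = True
            q = pi[q]
            length += 1
        swaps += length - 1
    return swaps
```

For instance, a single transposition costs one swap and a 3-cycle costs two, which is why permutations built from short cycles give benchmarks with small known transformation costs.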
So far, we have mainly focused on benchmark circuits for evaluating gate size optimality. Our approach can also generate benchmark circuits for evaluating depth optimality. For this purpose, we replace the permutations used above (except the first one, which can still be arbitrary) with permutations that can be implemented by a collection of SWAP gates that share no common qubits, i.e., a SWAP circuit of depth 1.\footnote{A SWAP circuit of depth 1 has depth 3 if we implement each SWAP gate as three consecutive CNOT gates.} In this paper, we call these gates \emph{parallel} SWAPs.
We generated \texttt{QUEKNO}\ benchmark circuits for both depth and gate size optimality and for three state-of-the-art quantum devices, viz., IBM Q Tokyo (20 qubits), Rochester (53 qubits), and Google Sycamore (53 qubits). The benchmarks are \emph{near-term feasible} benchmarks in terms of \cite{TanC21-queko}. For example, the benchmark set `53Q\_gate\_Rochester' contains 400 circuits, whose numbers of 2-qubit gates range from 9 to 508 and the known near-optimal gate size ratios (also known as `\textbf{cx ratios}') \begin{equation}\label{eq:rho_gate} \rho_{\text{gate}} = \frac{\text{number of CNOT gates in the output circuit}}{\text{number of CNOT gates in the input circuit}} \end{equation} range from 1 to 1.5172, with an average of 1.1923; and the benchmark set `53Q\_depth\_Rochester' contains 240 circuits, whose numbers of gates range from 74 to 1425, depths range from 8 to 86, and the known near-optimal depth ratios \begin{equation}\label{eq:rho_depth}
\rho_{\text{depth}} = \frac{\text{depth of the output circuit}}{\text{depth of the input circuit}} \end{equation} range from 1 to 1.9286, with an average of 1.3968 and only 36 out of 240 circuits having ratios $>1.5$. For 53-qubit circuits, these cx and depth ratios are quite small. Indeed, as shown in \cite{TanC21-queko}, the best average depth ratio of QCT algorithms on IBM Q Rochester is around 6 for \texttt{QUEKO}\ benchmarks. In terms of $\rho_\text{gate}$ and $\rho_\text{depth}$ above, it is appropriate to regard our benchmarks as having known near-optimal costs.
We compared four state-of-the-art QCT algorithms, $\text{t}\!\ket{\mathrm{ket}}$\ \cite{Cowtan+19-tket}, \textsf{SABRE}\ \cite{Li+19-sabre}, \textsf{SAHS}\ \cite{Zhou+20_SAHS}, and \textsf{MCTS}\ \cite{ZhouFL20_MCTS_iccad,Zhou+22_MCTS_Todaes}, with a baseline Qiskit transpiler on our benchmarks and three sets of \texttt{QUEKO}\ benchmarks --- `54Q\_bntf\_Sycamore', `20Q\_bss\_Tokyo', and `16Q\_bntf\_Aspen-4'. \xblue{ Fig.~\ref{fig:opt_sum} shows that (i) \textsf{SABRE}\ and \textsf{MCTS}\ perform similarly and best on the 20-qubit device IBM Q Tokyo; and (ii) \textsf{SABRE}\ performs best on the 53-qubit devices IBM Q Rochester and Google Sycamore. The superiority of \textsf{SABRE}\ over the other algorithms is often conspicuous and consistent for both the gate size and the depth objectives. Compared with the \texttt{QUEKO}\ benchmarks, evaluation results on \texttt{QUEKNO}\ benchmarks are in general more \textbf{conservative}: they are less optimistic on the 20-qubit device and, when the target is depth optimality, less pessimistic on the 53-qubit devices. Moreover, on the \texttt{QUEKO}\ benchmark set `20Q\_bss\_tokyo', \textsf{SABRE}\ has average gate size/depth ratios of 1.02 and 1.08, which are too close to 1 (the optimal ratio) to be realistic. This suggests that \texttt{QUEKO}\ benchmarks may provide poor optimality evaluations even for QCT algorithms that do not use subgraph isomorphism.}
The remainder of this paper is organised as follows. In Sec.~\ref{sec:prelimaries}, we recall relevant background on quantum circuits and permutations. We introduce QCT and discuss QCT algorithms in Sec.~\ref{sec:qct}. The theoretical description and design of \texttt{QUEKNO}\ benchmarks are presented in Sec.~\ref{sec:quekno-theory} and Sec.~\ref{sec:design}. We evaluate and compare in Sec.~\ref{sec:evaluation} state-of-the-art QCT algorithms on our \texttt{QUEKNO}\ and \texttt{QUEKO}\ \cite{TanC21-queko} benchmark sets. The evaluation section ends with a further discussion, followed by the concluding section.
\section{Preliminaries}\label{sec:prelimaries}
This section recalls some relevant background in quantum computing and graph theory. Readers interested in quantum computing may consult \cite{Nielsen-Chuang02} for a systematic introduction.
\subsection{Quantum Circuits}
Quantum circuits are currently the standard model for describing quantum algorithms. Like classical combinational circuits, a quantum circuit consists of a sequence of quantum gates acting on qubits (quantum bits). A general state of a qubit $q$ has the form $\ket{\psi}=\alpha_0\ket{0}+\alpha_1\ket{1}$, with $\alpha_0,\alpha_1$ complex numbers satisfying $|\alpha_0|^2+|\alpha_1|^2=1$. It can be written as a two-dimensional vector $\begin{pmatrix} \alpha_0 \\ \alpha_1 \end{pmatrix}$, which we also denote as $(\alpha_0,\alpha_1)^T$ for convenience. In general, an $n$-qubit state is represented as a $2^n$-dimensional unit complex vector $(\alpha_0,\alpha_1,\ldots,\alpha_{2^n-1})^T$.
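As a concrete illustration of this vector representation (our own sketch in pure Python, not code from the paper), the snippet below builds $H\ket{0}$, forms a 2-qubit product state via the Kronecker product, and checks normalisation:

```python
import math

# A qubit state |psi> = a0|0> + a1|1> as a length-2 complex vector (a0, a1).
ket0 = (1 + 0j, 0 + 0j)

def apply_gate(gate, state):
    # Ordinary matrix-vector product.
    return tuple(sum(row[j] * state[j] for j in range(len(state))) for row in gate)

# The Hadamard gate H with the 1/sqrt(2) factor folded into the entries.
s = 1 / math.sqrt(2)
H = ((s, s), (s, -s))

plus = apply_gate(H, ket0)      # H|0> = (|0> + |1>)/sqrt(2)

def kron(u, v):
    # An n-qubit product state is the Kronecker product of 1-qubit states.
    return tuple(a * b for a in u for b in v)

state2 = kron(plus, ket0)       # the 4-dimensional vector for (H|0>) (x) |0>

def norm2(state):
    # |a_0|^2 + ... + |a_{2^n-1}|^2, which must equal 1 for a valid state.
    return sum(abs(a) ** 2 for a in state)
```

For entangled states the Kronecker product of 1-qubit states is of course not enough; a general $n$-qubit state needs all $2^n$ amplitudes.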
Quantum gates are unitary transformations. An $n$-qubit gate is represented as a $2^n\times 2^n$ unitary matrix. The following 1-qubit gates are often used: \begin{equation*} \resizebox{0.9\hsize}{!}{ $X = \begin{pmatrix}
0& 1\\ 1 & 0
\end{pmatrix}$,\quad
$H = \frac{1}{\sqrt{2}}\begin{pmatrix}
1& 1\\ 1 & -1
\end{pmatrix}$,\quad
$S = \begin{pmatrix}
1& 0\\ 0 & i
\end{pmatrix}$, \quad
$T = \begin{pmatrix}
1& 0\\ 0 & e^{i\frac{\pi}{4}}
\end{pmatrix}.$
} \end{equation*} CNOT (also called CX) and CZ are two important 2-qubit gates. For any computational basis state $\ket{i}\ket{j}$, CNOT and CZ map $\ket{i}\ket{j}$ to, respectively, $\ket{i}\ket{i\oplus j}$ and $(-1)^{i\cdot j}\ket{i}\ket{j}$, where $\oplus$ denotes exclusive-or (XOR) and $\cdot$ Boolean conjunction.
Single qubit gates and CNOT are sufficient to implement an arbitrary quantum gate. In addition, any quantum gate can be approximated to arbitrary accuracy by only using $H, S, T$ and CNOT gates. In particular, the 2-qubit SWAP gate, which maps $\ket{i}\ket{j}$ to $\ket{j}\ket{i}$, can be implemented by three CNOT gates, i.e., $\swap{p}{q} = \textsc{cnot}(p,q)\textsc{cnot}(q,p)\textsc{cnot}(p,q)$.
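Since CNOT and SWAP merely permute computational basis states, the identity $\swap{p}{q} = \textsc{cnot}(p,q)\textsc{cnot}(q,p)\textsc{cnot}(p,q)$ can be checked directly on the four 2-bit basis states. A minimal sketch (the encoding of basis states as bit pairs is our own convention):

```python
# Gates as mappings on 2-bit computational basis states (i, j):
# CNOT flips the target bit exactly when the control bit is 1.
def cnot_ct(state):          # CNOT with qubit p as control, q as target
    i, j = state
    return (i, i ^ j)

def cnot_tc(state):          # CNOT with qubit q as control, p as target
    i, j = state
    return (i ^ j, j)

def swap(state):             # SWAP(p, q) maps |i>|j> to |j>|i>
    i, j = state
    return (j, i)

def three_cnots(state):      # CNOT(p,q); CNOT(q,p); CNOT(p,q)
    return cnot_ct(cnot_tc(cnot_ct(state)))

basis = [(i, j) for i in (0, 1) for j in (0, 1)]
```

Tracing one state by hand: $(i,j) \mapsto (i, i\oplus j) \mapsto (j, i\oplus j) \mapsto (j, i)$, which is exactly SWAP.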
While different quantum devices may support different universal sets of quantum gates, in practice, the 2-qubit gate in such a universal set is CNOT or CZ. As the actual functionality of a 1-qubit gate plays no role in QCT, in the rest of this paper, we denote a 1-qubit gate simply by the qubit it acts on. For example, an $H$ gate on $q_i$ is simply denoted by $\langle q_i\rangle$. Analogously, we write a CNOT or CZ gate with control qubit $q_i$ and target qubit $q_j$ simply as $\langle q_i,q_j\rangle$.
\begin{figure}
\caption{A logical circuit and its interaction graph. }
\label{fig:logical_circuit}
\end{figure}
A circuit $C$ is usually given as a sequence of gates $g_0,g_1,\ldots,g_{m-1}$, but this does not mean that a gate with a larger index must be executed after gates with smaller indexes. In fact, two gates can be executed in parallel if they do not act on a common qubit. Naturally, we partition $C$ into layers, placing each gate as far to the front as possible (so that it can be executed at the earliest time). The number of layers is called the \emph{depth} of the circuit. For example, the circuit $C$ below can be aligned as illustrated in Fig.~\ref{fig:logical_circuit}. \vspace*{2mm} \resizebox{0.95\hsize}{!}{
$C = \big[\langle 5\rangle, \langle 2, 1\rangle, \langle 1\rangle, \langle 4, 0\rangle, \langle 4\rangle, \langle 1\rangle, \langle 4, 0\rangle, \langle 3\rangle, \langle 5\rangle, \langle 5, 3\rangle, \langle 1\rangle, \langle 0\rangle, \langle 1, 4\rangle$},
\resizebox{0.95\hsize}{!}{
$\quad\quad\quad\quad \langle 1\rangle, \langle 2\rangle, \langle 5\rangle, \langle 4, 2\rangle, \langle 0\rangle, \langle 2, 4\rangle, \langle 2\rangle, \langle 2\rangle, \langle 3, 0\rangle, \langle 3\rangle, \langle 1\rangle, \langle 5, 3\rangle \big]$} \vspace*{2mm}
\noindent The circuit has 9 layers and thus a depth of 9. In particular, its first layer contains four gates, viz. $\langle 5\rangle, \langle 2, 1\rangle,\langle 4, 0\rangle,\langle 3\rangle $.
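The as-early-as-possible layering just described can be sketched in a few lines; the circuit below is the 25-gate example above, with each gate written as the tuple of qubits it acts on (this encoding is our own convention, not the paper's):

```python
def layers(circuit):
    # ASAP layering: each gate goes into the earliest layer after every
    # earlier gate that shares a qubit with it.
    depth = {}                   # per-qubit depth so far
    layered = []                 # layered[k] = list of gates in layer k+1
    for gate in circuit:
        k = max(depth.get(q, 0) for q in gate)   # earliest admissible layer
        if k == len(layered):
            layered.append([])
        layered[k].append(gate)
        for q in gate:
            depth[q] = k + 1
    return layered

# The example circuit C, gates as tuples of qubits.
C = [(5,), (2, 1), (1,), (4, 0), (4,), (1,), (4, 0), (3,), (5,), (5, 3),
     (1,), (0,), (1, 4), (1,), (2,), (5,), (4, 2), (0,), (2, 4), (2,),
     (2,), (3, 0), (3,), (1,), (5, 3)]
```

Running `layers(C)` yields 9 layers, with the first layer containing the four gates $\langle 5\rangle, \langle 2,1\rangle, \langle 4,0\rangle, \langle 3\rangle$, matching the text.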
\subsection{Subgraph Isomorphism}
\xblue{ Quantum circuit transformation is closely related to the well-known subgraph isomorphism problem, which was shown in 1971 to be NP-complete in the seminal work \cite{Cook71}. \begin{definition} [subgraph isomorphism]\label{dfn:subgraph_isomorphism} Given two graphs $G_i=(V_i,E_i)$ $(i=1,2$), we say $G_1$ is a subgraph of $G_2$ if $V_1\subseteq V_2$ and $E_1\subseteq E_2$. We say $G_1$ is \emph{embeddable} into $G_2$ if there is an injective mapping $f:V_1\to V_2$ such that $(f(p),f(q))\in E_2$ for any edge $(p,q)\in E_1$. In this case, we also call $f$ a \emph{subgraph isomorphism} or an \emph{embedding}. \end{definition} Subgraph isomorphisms can be found, or shown not to exist, by algorithms like VF2 \cite{Cordella+04-vf2}. }
\begin{figure}
\caption{The architecture graph of Grid$2\times 3$ (left) and IBM Q Rochester (right).}
\label{fig:ags}
\end{figure}
\vspace*{2mm}
In this paper, we represent a quantum device as an undirected graph $G = (V, E)$, called the \emph{architecture graph} of the quantum device, where the vertices in $V$ are the physical qubits and the edges in $E$ are the allowed 2-qubit interactions. In other words, $(p,q)$ is an edge in $E$ iff a 2-qubit gate can be operated on qubits $p$ and $q$. As $G$ is an undirected graph, we have $(p,q)\in E$ iff $(q,p)\in E$. In what follows, we do not distinguish between the architecture graph and the corresponding quantum device. Fig.~\ref{fig:ags} shows the architecture graphs of Grid2$\times$3 (an artificial device) and IBM Q Rochester.
\xblue{Recall that a quantum circuit is a sequence of 1- or 2-qubit gates. We associate a graph to each quantum circuit.} \begin{definition}[interaction graph] Let $C$ be a quantum circuit on qubit set $Q$. The interaction graph $(Q,E)$ of $C$ is defined as: $\forall p,q\in Q$, $(p,q)\in E$ iff either $\cx{p}{q}$ or $\cx{q}{p}$ is in $C$. \end{definition}
Consider the circuit $C$ in Fig.~\ref{fig:logical_circuit} (left). Its interaction graph has six edges (cf. Fig.~\ref{fig:logical_circuit} (right)). \xblue{Because there is a 3-cycle (viz. the cycle $(2,1,4,2)$) in this interaction graph, it cannot be embedded into Grid2$\times$3, which has no 3-cycles. }
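The non-embeddability claim can be checked by brute force over all injective maps (feasible only for tiny graphs; VF2 is the practical choice). The edge list we use for Grid2$\times$3 below is our assumption about the labelling in Fig.~\ref{fig:ags}; the conclusion does not depend on it, since any $2\times 3$ grid is bipartite and hence triangle-free:

```python
from itertools import permutations

def embeddable(edges1, edges2, n):
    # Brute-force subgraph isomorphism test for graphs on vertices 0..n-1:
    # try every injective mapping (here: every permutation of [n]).
    e2 = {frozenset(e) for e in edges2}
    for f in permutations(range(n)):
        if all(frozenset((f[p], f[q])) in e2 for p, q in edges1):
            return True
    return False

# Interaction graph of the circuit C in Fig. 1: six edges, containing
# the triangle {1, 2, 4}.
interaction = [(2, 1), (4, 0), (5, 3), (1, 4), (4, 2), (3, 0)]
# Assumed edge list of Grid2x3 (bipartite, hence triangle-free).
grid2x3 = [(0, 1), (2, 3), (4, 5), (0, 2), (2, 4), (1, 3), (3, 5)]
```

`embeddable(interaction, grid2x3, 6)` returns `False`, confirming the triangle obstruction.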
\subsection{Permutations and SWAP Circuits} \xblue{Our construction of benchmarks relies heavily on permutations on the architecture graph.}
Let $G=(V,E)$ be an undirected graph. Assume $V= [n]\ensuremath{\triangleq}\set{0,1,\ldots,n-1}$. A permutation $\pi: V \to V$ is a bijection on $V$. For example, the identity mapping $id_V$ is a permutation, and any swap on an edge $(i,j)$ of $G$ induces a permutation $\pi_{i,j}$ which maps $i$ to $j$ and $j$ to $i$ while keeping the other vertices unchanged. Permutations can be composed to form more complicated permutations. Formally, we say a permutation $\pi$ is implementable by a sequence of $c$ swaps $\pi_{p_1,q_1},\ldots,\pi_{p_c,q_c}$ if {$\pi=\pi_{p_c,q_c}\circ\cdots\circ \pi_{p_1,q_1}$}, where $(p_i,q_i)$ is an edge in $G$ for $1\leq i\leq c$. Every permutation $\pi$ on a connected graph $G$ can be implemented by a sequence of swaps on edges of $G$.\footnote{This is a special case of the token swapping problem (cf. \cite{miltzow2016approximation}).} We write $\swapnorm{\pi}$ for the minimum number of swaps required to implement $\pi$ and call this number the \emph{swap cost} of (implementing) $\pi$.
As $V=[n]$, we represent each permutation $\pi$ on $V$ as the vector $(\pi(0),\pi(1),\ldots,\pi(n-1))$. For example, $\pi=(3,0,2,1,4,5)$ denotes the permutation on {$V=[6]$} which maps 0 to 3, 1 to 0, 3 to 1 and leaves the other vertices unchanged. We can implement $\pi$ by first swapping 0 and 1, and then swapping 1 and 3. That is, {$\pi = \pi_{1,3}\circ \pi_{0,1}$.} Note that the composition of permutations is not commutative. For example, ${\pi_{0,1}\circ \pi_{1,3}}=(1,3,2,0,4,5)\not=\pi$.
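Both the composition convention and the swap cost $\swapnorm{\pi}$ can be made concrete in a short sketch. The breadth-first search below is a brute-force illustration that only works for tiny graphs (real implementations solve token swapping heuristically), and the Grid2$\times$3 edge list is our assumed labelling:

```python
from collections import deque

def swap_perm(n, i, j):
    # The permutation pi_{i,j} induced by swapping vertices i and j,
    # represented as the vector (pi(0), ..., pi(n-1)).
    pi = list(range(n))
    pi[i], pi[j] = j, i
    return tuple(pi)

def compose(pi2, pi1):
    # (pi2 o pi1)(v) = pi2(pi1(v)): apply pi1 first, then pi2.
    return tuple(pi2[v] for v in pi1)

def swap_cost(pi, edges, n):
    # Minimum number of swaps on edges of G implementing pi, found by
    # breadth-first search from the identity permutation.
    start, seen = tuple(range(n)), set()
    queue = deque([(start, 0)])
    while queue:
        sigma, cost = queue.popleft()
        if sigma == pi:
            return cost
        if sigma in seen:
            continue
        seen.add(sigma)
        for i, j in edges:
            queue.append((compose(swap_perm(n, i, j), sigma), cost + 1))
    return None          # unreachable only if G is disconnected

GRID2X3 = [(0, 1), (2, 3), (4, 5), (0, 2), (2, 4), (1, 3), (3, 5)]
```

With this encoding, `compose(swap_perm(6, 1, 3), swap_perm(6, 0, 1))` reproduces $\pi=(3,0,2,1,4,5)$, the reversed composition gives $(1,3,2,0,4,5)$, and the swap cost of $\pi$ on the grid is 2.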
\xblue{Regarding the quantum circuit transformation problem, for each edge $(i,j)$ on the architecture graph $G=(V,E)$, the swap $\pi_{i,j}$ corresponds to the SWAP gate $\swap{i}{j}$. We next explain how a permutation can be implemented by a SWAP circuit, i.e., a circuit consisting of SWAP gates. Suppose $C$ is a quantum circuit on qubit set $Q$. For convenience, we assume $Q\subseteq V$. Suppose $\sigma:Q\to V$ is the current mapping. Applying a swap gate $\swap{i}{j}$, we transform $\sigma$ into a new mapping $\sigma'=\pi_{i,j}\circ\sigma$, where $\sigma'(p)=j$ if $\sigma(p)=i$, $\sigma'(p)=i$ if $\sigma(p)=j$, and $\sigma'(p)=\sigma(p)$ otherwise. In general, for a SWAP circuit $S \ensuremath{\triangleq} (\swap{p_1}{q_1}, \ldots,\swap{p_c}{q_c})$, $S$ transforms $\sigma$ into $\sigma'\ensuremath{\triangleq}\pi_{p_c,q_c}\circ\cdots\circ\pi_{p_1,q_1} \circ \sigma$. In this case, we say that $\pi \ensuremath{\triangleq} \pi_{p_c,q_c}\circ\cdots\circ\pi_{p_1,q_1}$ is implemented by $S$. } Therefore, any permutation on $G$ can be implemented by a SWAP circuit. For any $\pi$ on $G$, we denote by $\langle\!\langle \pi\rangle\!\rangle$ a SWAP circuit that implements $\pi$ with the minimal number of SWAP gates.
\begin{lemma}\label{lem:permcomp} Let $\pi_1,\pi_2$ be two permutations on a graph $G=(V,E)$. Suppose $S_i$ is a SWAP circuit that implements $\pi_i$ for $i=1,2$. Then $S_1+S_2$ implements {$\pi_2\circ\pi_1$}. \end{lemma} \begin{proof} \xblue{
Let $S_1 = \big( \swap{p_1}{q_1},\ldots,\swap{p_c}{q_c} \big)$ and $S_2 = \big( \swap{p_{c+1}}{q_{c+1}},\ldots,\swap{p_{c+d}}{q_{c+d}} \big)$, where $c,d\geq 1$ and each $(p_j,q_j)$ is an edge in $G$. By definition, we have {$\pi_1 = \pi_{p_c,q_c}\circ\cdots\circ\pi_{p_1,q_1}$ and $\pi_2 = \pi_{p_{c+d},q_{c+d}}\circ\cdots\circ\pi_{p_{c+1},q_{c+1}}$. It is now clear that $\pi_2\circ \pi_1=\pi_{p_{c+d},q_{c+d}}\circ\cdots\circ\pi_{p_{c+1},q_{c+1}}\circ \pi_{p_{c},q_{c}}\circ\cdots\circ\pi_{p_1,q_1}$} is implemented by $S_1+S_2= \big( \swap{p_1}{q_1},\ \ldots,\ \swap{p_c}{q_c},\ \swap{p_{c+1}}{q_{c+1}},\ \ldots$, $\swap{p_{c+d}}{q_{c+d}} \big)$.
} \end{proof}
In the construction of \texttt{QUEKNO}\ benchmarks, we need to modify subgraphs and circuits by permutation. \xblue{Given a graph $G$, a permuted graph is obtained by relabelling the nodes of $G$.}
\begin{definition}[permuted subgraph]\label{dfn:pgraph} Let $G=(V,E)$ be an undirected graph and $G_1=(V_1,E_1)$ a subgraph of $G$. The permutation of $G_1$ under a permutation $\pi$ on $V$, written $\pi(G_1)$, is the undirected graph $(\pi(V_1),\pi(E_1))$, where $\pi(V_1) \ensuremath{\triangleq} \{\pi(v)\mid v\in V_1\}$ and $\pi(E_1) \ensuremath{\triangleq} \{ (\pi(v),\pi(v')) \mid (v,v')\in E_1\}$. \end{definition}
\xblue{Analogously, a permuted circuit is obtained by relabelling the qubits of a given circuit.} \begin{definition}[permuted circuit]\label{dfn:pcirc} For a circuit $C\ensuremath{\triangleq}(g_1,\cdots,g_n)$ on qubits in $V$, the permutation of $C$ under a permutation $\pi$ on $V$ is defined as $\pi(C) \ensuremath{\triangleq} (\pi(g_1),\cdots,\pi(g_n))$, where $\pi(g)$ is the same gate as $g$ but operates on \begin{itemize}
\item qubit $\pi(q_i)$ if $g$ is a 1-qubit gate on $q_i$,
\item qubits $\pi(q_i)$ and $\pi(q_j)$ if $g$ is a 2-qubit gate on $q_i$ and $q_j$. \end{itemize} \end{definition}
\begin{lemma} For a permutation $\pi$ and circuits $C,C_1,C_2$, we have $\pi^{-1}(\pi(C))=C$ and $\pi(C_1+C_2) = \pi(C_1) + \pi(C_2)$. \end{lemma} \begin{proof}
This follows directly from Definition~\ref{dfn:pcirc}. \end{proof}
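Definition~\ref{dfn:pcirc} and the lemma can be sketched directly, with gates encoded as tuples of the qubits they act on (our convention, not the paper's; the permutation chosen below is arbitrary):

```python
def permute_circuit(pi, circuit):
    # pi(C): each gate keeps its type but acts on the relabelled qubits.
    return [tuple(pi[q] for q in gate) for gate in circuit]

def inverse(pi):
    # The inverse of a permutation given as the vector (pi(0), ..., pi(n-1)).
    inv = [0] * len(pi)
    for v, w in enumerate(pi):
        inv[w] = v
    return tuple(inv)

# An arbitrary illustrative permutation on V = [6] and a small circuit.
pi = (5, 1, 0, 4, 3, 2)
C = [(2, 1), (4, 0), (5,), (1, 4)]
```

The two identities of the lemma, $\pi^{-1}(\pi(C))=C$ and $\pi(C_1+C_2)=\pi(C_1)+\pi(C_2)$, then hold by construction.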
\section{Quantum Circuit Transformation} \label{sec:qct} When designing a quantum algorithm in the quantum circuit model, the designer usually has no particular target quantum device in mind. Not surprisingly, the quantum circuit may then contain 2-qubit gates acting on arbitrary qubit pairs. This makes circuit transformation necessary. In the following, we call such a circuit $C$ a \emph{logical} circuit and call the circuit after transformation a \emph{physical} circuit (as it is executable on the physical quantum device). We also address qubits in a logical (physical) circuit as logical (physical) qubits. Note that the term `logical' used in QCT should not be confused with that used in error correction.
Device-supported 1-qubit gates can be executed directly. A 2-qubit gate (CNOT or CZ) is \emph{directly executable} on a quantum device $\mathbb{AG} \ensuremath{\triangleq} (\mathbb{V}, \mathbb{E})$ if its two qubits are neighbours in $\mathbb{AG}$. We call a circuit $C \ensuremath{\triangleq} (g_1,g_2,\ldots, g_m) $ an $\mathbb{AG}$-circuit if all 2-qubit gates in $C$ are directly executable on $\mathbb{AG}$.
In general, suppose $C$ is not an $\mathbb{AG}$-circuit. Let $Q$ be the qubit set of $C$ and assume w.l.o.g. that $Q\subseteq \mathbb{V}$. In order to execute $C$ on $\mathbb{AG}$, we first construct an initial mapping $\tau_{ini}$ from $Q$ to $\mathbb{V}$. If all gates in $C$ are executable under $\tau_{ini}$, i.e., every 2-qubit gate $\cx{u}{v}$ in $C$ satisfies $(\tau_{ini}(u),\tau_{ini}(v))\in \mathbb{E}$, we say that $C$ is executable on $\mathbb{AG}$ with $\tau_{ini}$. This happens iff $\tau_{ini}(C)$ is an $\mathbb{AG}$-circuit.
Suppose $C$ is not executable under a selected initial mapping $\tau_{ini}$. A general QCT algorithm runs as follows: after removing all or some gates executable under $\tau_{ini}$ from $C$, it repeats the following two procedures alternately until no gates are left: (i) transform the mapping into a new mapping by inserting one or several SWAP gates and (ii) remove all or some executable gates. A natural (but clearly not optimal) way for qubit mapping (see, e.g., \cite{siraichi+19_bmt,LiZF21_fidls}) is to partition $C$ into nonempty sections ${C}_1,\ldots,{C}_s$, such that each ${C}_i$ is executable with some permutation $\sigma_i: \mathbb{V}\to \mathbb{V}$, i.e., $\widetilde{C}_i\ensuremath{\triangleq} \sigma_i(C_i)$ is an $\mathbb{AG}$-circuit. For each $1\leq i\leq s$, we have $C_i = \sigma^{-1}_i(\widetilde{C}_i)$ and \begin{align}\label{eq:circpart} \resizebox{0.9\hsize}{!}{
$C = C_1 + C_2 + \cdots + C_s=\sigma_1^{-1}(\widetilde{C}_1) + \sigma_2^{-1}(\widetilde{C}_2) + \cdots + \sigma_s^{-1}(\widetilde{C}_s).$
} \end{align} From this partition, we transform and execute $C$ as follows: \begin{enumerate}
\item Apply $\sigma_1$ on $C$. This transforms in particular the first section into the $\mathbb{AG}$-circuit $\widetilde{C}_1$. We execute and remove all gates in $C_1$ from $C$ and write $PC_1 = \widetilde{C}_1$ for the current physical (implemented) circuit, and denote the rest of the logical circuit as $LC_1 = C_2+\cdots + C_s$.
\item [] We then append $\langle\!\langle {\sigma_1^{-1}} \rangle\!\rangle$, which implements ${\sigma_1^{-1}}$ with a minimal number of SWAPs, at the end of $PC_{1}$. This procedure does not change $LC_1$ but updates $PC_1$ to $\widetilde{C}_1+\langle\!\langle {\sigma_1^{-1}} \rangle\!\rangle$ and reverts {the current logical to physical mapping, viz. ${\sigma_1}$},
to the identity mapping $id_{\mathbb{V}}$.
\item Suppose currently the mapping is $id_{\mathbb{V}}$ and the logical and physical circuits are $LC_{i-1}=C_i+\cdots + C_s$ and $PC_{i-1}$ for some $1<i<s$. We append a SWAP circuit $\langle\!\langle {\sigma_i} \rangle\!\rangle$ to $PC_{i-1}$, i.e., immediately after $\langle\!\langle {\sigma_{i-1}^{-1}} \rangle\!\rangle$. This changes the current {logical to physical mapping to $\sigma_i$} and transforms the $i$-th section $C_i$ to $\widetilde{C}_i$. We append $\widetilde{C}_i$ to $PC_{i-1}$ and obtain
\[
\resizebox{0.9\hsize}{!}{
$PC_i = {\widetilde{C}_1+\langle\!\langle \sigma_1^{-1}\rangle\!\rangle +\langle\!\langle \sigma_2\rangle\!\rangle + \cdots + \widetilde{C}_{i-1} + \langle\!\langle \sigma_{i-1}^{-1}\rangle\!\rangle + \langle\!\langle\sigma_i\rangle\!\rangle + \widetilde{C}_i.}$
}
\] Accordingly, the logical circuit is updated to $LC_i = C_{i+1}+\cdots+C_s$.
\item [] We next append a SWAP circuit $\langle\!\langle {\sigma_i^{-1}} \rangle\!\rangle$ to $PC_i$. This does not change $LC_i$ but updates $PC_i$ as $PC_i+\langle\!\langle {\sigma_i^{-1}} \rangle\!\rangle$ and reverts the current {logical to physical mapping, viz. $\sigma_{i}$,} to the identity $id_{\mathbb{V}}$.
\item In the final step, we have $i=s-1$, $LC_i = C_s$, and the identity mapping. We append to $PC_{s-1}$ a SWAP circuit $\langle\!\langle {\sigma_s} \rangle\!\rangle$. This changes the {current logical to physical mapping} to ${\sigma_s}$ and transforms the last section $C_s$ to $\widetilde{C}_s$. We remove all gates in $C_s$ from $LC_{s-1}$ and obtain an empty logical circuit. Accordingly, the final physical circuit is
\begin{align*}
\hspace*{-5mm}\resizebox{0.95\hsize}{!}{
{$PC_s = \widetilde{C}_1 + \langle\!\langle \sigma_1^{-1} \rangle\!\rangle +\langle\!\langle \sigma_2 \rangle\!\rangle + \cdots + \widetilde{C}_{s-1} + \langle\!\langle \sigma_{s-1}^{-1} \rangle\!\rangle + \langle\!\langle \sigma_s \rangle\!\rangle+\widetilde{C}_s$.}
}
\end{align*} Note that the final {logical to physical mapping is $\sigma_s$}. \end{enumerate} It can be proved that $PC_s$ is equivalent to the input circuit $C$ up to an initial {logical to physical} mapping $\sigma_1$ and a final {logical to physical} mapping $\sigma_s$. Furthermore, two consecutive SWAP circuits can be replaced with one and we can replace {$\langle\!\langle \sigma_i^{-1} \rangle\!\rangle + \langle\!\langle \sigma_{i+1}\rangle\!\rangle$ with $\langle\!\langle \sigma_{i+1}\circ\sigma_i^{-1}\rangle\!\rangle$} (cf. Lemma~\ref{lem:permcomp}). As the initial and final mappings incur no cost, the transformation cost is the number of SWAPs used in the SWAP circuits $\langle\!\langle {\sigma_{i+1}\circ\sigma_i^{-1} }\rangle\!\rangle$ for $1\leq i<s$. The final physical circuit can be written as \begin{align}\label{eq:physical-circuit-final} \resizebox{0.9\hsize}{!}{
{$PC = \widetilde{C}_1 + \langle\!\langle {\sigma_2\circ \sigma_1^{-1}}\rangle\!\rangle+ \cdots + \widetilde{C}_{s-1} + \langle\!\langle {\sigma_s\circ\sigma_{s-1}^{-1}}\rangle\!\rangle+\widetilde{C}_s$.}
} \end{align} {In summary, the logical circuit $C$ can be transformed into the physical circuit $PC$ by (i) applying the initial mapping $\sigma_1$ on $C$, and (ii) inserting the SWAP circuit $S_i\ensuremath{\triangleq} \langle\!\langle {\sigma_{i+1}\circ\sigma_{i}^{-1}}\rangle\!\rangle$ between $C_i$ and $C_{i+1}$ for $1\leq i\leq s-1$. The transformation cost is the total number of SWAPs contained in $S_1+\cdots +S_{s-1}$.}
\xblue{On the other hand, it is easy to see that any successful transformation of $C$ into a physical circuit $PC$ on $\mathbb{AG}$ can be obtained in the above way. Indeed, let $\sigma_1$ be the initial mapping and $S_1,\ldots,S_{s-1}$ be the sequence of SWAP circuits inserted into $C$. We can partition the circuit $C$ into sections $C_1,\ldots,C_s$ as in Eq.~\ref{eq:circpart} and represent $PC$ as in Eq.~\ref{eq:physical-circuit-final}.}
This actually shows the following result. \begin{theorem}\label{thm:qct} For any circuit $C$ with optimal transformation cost $k$, there exists a partition of $C$ into $1\leq s\leq k+1$ nonempty sections $C_1,\ldots,C_s$ and $s$ permutations $\sigma_i$ $(1\leq i\leq s)$ such that $\sum_{i=2}^s \swapnorm{{\sigma_{i}\circ\sigma_{i-1}^{-1}}} = k$ and, for each $1\leq i\leq s$, $\widetilde{C}_i\ensuremath{\triangleq} \sigma_i(C_i)$ is an $\mathbb{AG}$-circuit. Moreover, the optimal transformed circuit has the form of Eq.~\ref{eq:physical-circuit-final}. \end{theorem} \begin{proof} \xblue{ By assumption, $C$ has an optimal transformation with cost $k$. That is, we can transform $C$ into an executable circuit with an initial (logical to physical) mapping $\sigma_1$ and $k$ SWAP gates $\swap{p_1}{q_1},\ldots,\swap{p_k}{q_k}$. Let $C_1$ be the subcircuit of $C$ which contains all gates that are executable under $\sigma_1$. Due to the optimality, $C_1$ cannot be empty. We remove all gates in $C_1$ and insert the SWAP gates one by one until some gates in the remainder of $C$ become executable under the current mapping. Write $S_1$ for this SWAP circuit and $\pi_2$ for {the inverse of} the permutation implemented by $S_1$. Then $\sigma_2 \ensuremath{\triangleq} \pi_2^{-1}\circ\sigma_1$ is the current (logical to physical) mapping. Continuing the above procedure till all gates in $C$ are executed, we partition $C$ into $1\leq s\leq k+1$ nonempty sections $C_1,\ldots, C_s$ and partition the $k$ SWAP gates into $s-1$ nonempty SWAP circuits $S_1,\ldots,S_{s-1}$ such that $S_i$ is inserted between $C_i$ and $C_{i+1}$. Let $\pi_{i+1}$ be {the inverse of} the permutation implemented by $S_i$ and let $\sigma_{i+1} \ensuremath{\triangleq} \pi_{i+1}^{-1}\circ\sigma_i$. It can be proved inductively that $C_i$ is executable under $\sigma_i$ for $1\leq i\leq s$. Clearly, $C$ has the form of Eq.~\ref{eq:circpart}. 
Since $\pi_{i+1}^{-1}=\sigma_{i+1}\circ\sigma_i^{-1}$ and $S_i=\langle\!\langle {\pi_{i+1}^{-1}} \rangle\!\rangle$ for each $1\leq i\leq s-1$, the transformed circuit has the form of Eq.~\ref{eq:physical-circuit-final}. } \end{proof}
\subsection{Related Works} Ideal quantum circuits designed for implementing quantum algorithms need to be compiled before running on a real physical device. This paper assumes that the physical device supports (a subset of) 1-qubit gates and the 2-qubit CNOT gate or CZ gate. Quantum circuit compilation usually consists of two important tasks. The first task is to decompose gates in the ideal circuit into elementary gates supported by the device. {The} second task is to map or route (logical) qubits in the ideal circuit to physical qubits in the device so that its coupling constraints are all satisfied, i.e., every 2-qubit gate acts on two neighbouring qubits in the physical device.
Gate decomposition is now standard and has been well addressed in many previous works (cf. \cite{Nielsen-Chuang02}). This paper focuses on the circuit transformation task, which also involves two steps: \emph{initial mapping} and \emph{qubit routing}.
With the rapid development in quantum hardware, the qubit mapping problem has attracted fast-growing interest, and many QCT algorithms have been developed. A few of these algorithms aim to find transformations with the exact optimal cost \cite{Saeedi+11_synthesis, Siraichi+18, Venturelli+18_Planner, Booth+18_Planning,Murali+19,Almeida+19_permutation,Nannicini+21_bipmapping}, which are, however, only applicable to circuits with around 10 or fewer qubits. Most algorithms turn to heuristic search techniques \cite{Zulehner+18_Astar,Li+19-sabre,ChildsSU19-qct,Cowtan+19-tket,Zhou+20_SAHS,siraichi+19_bmt,LiZF21_fidls,ZhouFL20_MCTS_iccad}. These {heuristic} algorithms can be further classified according to their optimisation objective. A few algorithms aim to maximise the fidelity or minimise the error rate \cite{Murali+19,Ash-Saki+19_qure,TannuQ19,Deng0L20_codar}; many aim to reduce the number of SWAP gates inserted \cite{Zulehner+18_Astar,Li+19-sabre,LiZF21_fidls,Zhou+20_SAHS,ZhouFL20_MCTS_iccad}; and many others aim to reduce the depth of the output circuit \cite{Cowtan+19-tket,Zhang+21-time,Zhou+22_MCTS_Todaes} so that the highly limited qubit coherence time is respected.
Several QCT algorithms have exploited subgraph isomorphism. The BMT algorithm of \cite{siraichi+19_bmt} combines subgraph isomorphism with token swapping and is thus closely related to the general transformation pattern as specified in Eq.~\ref{eq:circpart}. Based on exhaustive search, the BMT algorithm consumes {much} time and memory and is unsuitable for circuits involving 20 or more qubits. Using the well-known subgraph isomorphism algorithm VF2 \cite{Cordella+04-vf2}, \textsf{FiDLS}\ \cite{LiZF21_fidls} selects an initial mapping that can bring the interaction graph of the input circuit closer to the architecture graph of the physical device. When the interaction graph of the input circuit is embeddable into the architecture graph, \textsf{FiDLS}\ can often find an embedding and transform the circuit without inserting a single SWAP. \xblue{This is particularly true for \texttt{QUEKO}\ benchmarks \cite{TanC21-queko}. Indeed, \textsf{FiDLS}\ can transform all 20-qubit {\texttt{QUEKO}} benchmarks on IBM Q Tokyo with zero cost.}
In the literature, there are also works that exploit commutation rules \cite{Itoko+19_commutation,Xie+21_commutativity}, synthesis \cite{TanC21-gate_absorption}, gate optimisations \cite{Liu+22_not_all_swap}, or remote CNOT \cite{Niemann+21_combine_remote_cnot} for circuit transformation. These techniques can be combined with every QCT algorithm. As this paper aims to construct benchmarks for fairly evaluating the transformation optimality of QCT algorithms, we do not take these extensions into our discussion.
In this paper, we focus on four state-of-the-art QCT algorithms --- $\text{t}\!\ket{\mathrm{ket}}$, \textsf{SABRE}, \textsf{SAHS}, and \textsf{MCTS}, which have been evaluated on extensive realistic quantum circuits and all have demonstrated superiority over several compared algorithms. We will thoroughly compare them with a baseline Qiskit transpiler on \texttt{QUEKNO}\ benchmark circuits. $\text{t}\!\ket{\mathrm{ket}}$\ is a powerful quantum circuit compiler tool proposed by Cambridge Quantum Computing. First described in \cite{Cowtan+19-tket}, the $\text{t}\!\ket{\mathrm{ket}}$\ router tries to select the SWAP operation which can maximally reduce the diameter of the interaction graph of the current layer. \textsf{SABRE}\ \cite{Li+19-sabre} adopts a 3-fold strategy and a heuristic function that models the fitness of a mapping with those 2-qubit gates in the first two layers of the circuit. More precisely, \textsf{SABRE}\ starts from a random initial mapping and transforms the input circuit $C$ by using the heuristic function; it then uses the final mapping as the initial mapping and transforms the reverse of $C$; and, lastly, it uses the final mapping of the second transform as the initial mapping and transforms $C$ again. In this way, information about the whole circuit is taken into consideration. The \textsf{SAHS}\ algorithm \cite{Zhou+20_SAHS} {first selects an initial mapping which best fits the input circuit $C$ by using the simulated annealing method and then, in the routing process,} simulates the search process one step further and selects the SWAP with the best consecutive SWAP to apply. The Monte Carlo Tree Search (MCTS) method for quantum circuit transformation, {denoted as \textsf{MCTS}\ in this paper,} was first introduced in \cite{ZhouFL20_MCTS_iccad} for gate size optimisation and extended in \cite{Zhou+22_MCTS_Todaes} for depth optimisation. The idea is to exploit the search space in a balanced way. 
On average, the MCTS-based QCT algorithms can search deeper and find better solutions.
\subsection{A Circuit Transformation Example}\label{sec:qct-ex}
Let $\mathbb{AG}$ be the Grid2$\times$3 architecture graph (cf. Fig.~\ref{fig:ags} (left)). Consider the logical circuit shown in Fig.~\ref{fig:logical_circuit} and suppose our target is to minimise the gate size. As 1-qubit gates can be directly executed, we remove them from the circuit and the {remaining} logical circuit is
$C = \big[\langle 2, 1\rangle, \langle 4, 0\rangle, \langle 4, 0\rangle, \langle 5, 3\rangle, \langle 1, 4\rangle, \langle 4, 2\rangle, \langle 2, 4\rangle, \langle 3, 0\rangle, \langle 5, 3\rangle\big].$
As the interaction graph of $C$ (see Fig.~\ref{fig:logical_circuit} (right)) has a 3-cycle $(2,1,4,2)$, it cannot be embedded into $\mathbb{AG}$; thus this circuit is not executable under any initial mapping. In the following, we show that it can be transformed into an executable circuit with only one SWAP.
First, we partition $C$ into two parts such that the interaction graph of each part is embeddable in $\mathbb{AG}$. For example, let $C_1 = [\langle 2, 1\rangle, \langle 4, 0\rangle, \langle 4, 0\rangle, \langle 5, 3\rangle, \langle 1, 4\rangle]$, and $C_2 = [ \langle 4, 2\rangle, \langle 2, 4\rangle, \langle 3, 0\rangle, \langle 5, 3\rangle]$. Using a subgraph isomorphism algorithm (e.g., VF2 \cite{Cordella+04-vf2,LiZF21_fidls}), we find a permutation $\sigma_1= (5,1,0,4,3,2)$ (regarded as a logical to physical mapping) which transforms $C_1$ to an $\mathbb{AG}$-circuit. Note that $\sigma_1$ permutes the whole circuit as \begin{align*} \resizebox{0.95\hsize}{!}{ $ \sigma_1(C) = \big[\langle 0, 1\rangle, \langle 3, 5\rangle, \langle 3, 5\rangle, \langle 2, 4\rangle, \langle 1, 3\rangle, \langle 3, 0\rangle, \langle 0, 3\rangle, \langle 4, 5\rangle, \langle 2, 4\rangle \big]$. } \end{align*} As every gate in $\sigma_1(C_1)$ acts on neighbouring physical qubits in $\mathbb{AG}$, we execute and remove them. The rest of the circuit is $\sigma_1(C_2) = [\langle 3, 0\rangle, \langle 0, 3\rangle$, $\langle 4, 5\rangle, \langle 2, 4\rangle]$. As $\langle 2,4\rangle$ and $\langle 4,5\rangle$ are edges in $\mathbb{AG}$, we need only bring the current logical qubit on physical qubit $0$ next to that of physical qubit $3$, or vice versa. This can be done by inserting the SWAP gate $\swap{0}{1}$, which implements the permutation $\pi_{0,1}$. Note that $\pi_{0,1}^{-1}=\pi_{0,1}$. Because $\pi_{0,1}\big(\sigma_1(C_2)\big)= [\langle 3,1\rangle,\langle 1,3\rangle,\langle 4,5\rangle,\langle 2,4\rangle]$ is an $\mathbb{AG}$-circuit, it can be executed directly. In conclusion, to transform $C$, we apply an initial mapping $\sigma_1$ and then insert $\swap{0}{1}$. As the swap cost is 1, the transformation is clearly optimal. 
Corresponding to Theorem~\ref{thm:qct} and letting {$\sigma_2=\pi_{0,1}\circ\sigma_1$, we see that $\sigma_i(C_i)$ is an $\mathbb{AG}$-circuit for $i=1,2$ and that $\sum_{i=2}^2 \swapnorm{\sigma_i\circ\sigma_{i-1}^{-1}} = \swapnorm{\sigma_{2}\circ\sigma_1^{-1}}=\swapnorm{\pi_{0,1}\circ\sigma_1\circ\sigma_1^{-1}} = \swapnorm{\pi_{0,1}} = 1$.} {This shows that $C$ can be transformed into an executable circuit with one SWAP.}
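The bookkeeping in this example can be verified mechanically. The sketch below checks that every gate of $\sigma_1(C_1)$ lies on an edge of Grid2$\times$3, that $\sigma_1(C_2)$ does not, and that it does after the swap $\pi_{0,1}$; the Grid2$\times$3 edge list is our assumption, chosen to match the labelling used in this example:

```python
# Assumed edge set of Grid2x3, consistent with the gates executed above.
GRID_EDGES = {frozenset(e) for e in
              [(0, 1), (2, 3), (4, 5), (0, 2), (2, 4), (1, 3), (3, 5)]}

def is_ag_circuit(circuit):
    # True iff every 2-qubit gate acts on neighbouring physical qubits.
    return all(frozenset(g) in GRID_EDGES for g in circuit if len(g) == 2)

# sigma1(C1) and sigma1(C2) as computed in the text.
sigma1_C1 = [(0, 1), (3, 5), (3, 5), (2, 4), (1, 3)]
sigma1_C2 = [(3, 0), (0, 3), (4, 5), (2, 4)]

def apply_swap01(gate):
    # Relabel qubits by pi_{0,1} (its own inverse).
    pi = (1, 0, 2, 3, 4, 5)
    return tuple(pi[q] for q in gate)

routed_C2 = [apply_swap01(g) for g in sigma1_C2]
```

After the swap, `routed_C2` equals $[\langle 3,1\rangle,\langle 1,3\rangle,\langle 4,5\rangle,\langle 2,4\rangle]$, an $\mathbb{AG}$-circuit, so one SWAP indeed suffices.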
\section{{\texttt{QUEKNO}} Benchmark Circuits} \label{sec:quekno-theory}
Our procedure for constructing {\texttt{QUEKNO}} (quantum examples with known near-optimal cost) benchmarks is the reverse of the circuit transformation procedure described in the last section.
For example, suppose our task is to construct a benchmark circuit with optimal swap cost 1. If the interaction graph of a circuit is not embeddable into $\mathbb{AG}$, then its transformation (swap) cost is at least 1. To make sure that the swap cost is not higher than 1, we need only {find} a transformation with cost 1. The idea for constructing such a \texttt{QUEKNO}\ circuit is to randomly generate subgraphs $G_1,G_2$ of $\mathbb{AG}$, and then permute $G_2$ by the permutation $\pi_{i,j}$ for some randomly selected edge $(i,j)$ of $\mathbb{AG}$. We call such a construction a \emph{subgraph link} or \emph{glink}. {When the {union} graph of $G_1$ and $\pi_{i,j}(G_2)$ is not embeddable into $\mathbb{AG}$, we call the construction a \emph{strong} glink.} It is easy to generate $\mathbb{AG}$-circuits $\widetilde{C}_1$ and $\widetilde{C}_2$ with interaction graphs $G_1$ and $G_2$. By permuting $\widetilde{C}_2$ with $\pi_{i,j}$, we have a new circuit $\widetilde{C}_1+ \pi_{i,j}(\widetilde{C}_2)$, which satisfies our requirement. To make the generated circuit more general, we apply another random permutation $\pi'$ on $\widetilde{C}_1+ \pi_{i,j}(\widetilde{C}_2)$. The resultant {circuit} $\pi'\big(\widetilde{C}_1+ \pi_{i,j}(\widetilde{C}_2)\big)$ is a {\texttt{QUEKNO}} circuit with optimal swap cost {$\leq 1$ and the equality holds when the glink is strong}.
Permutations in glinks need not be single SWAP operations. We also consider permutations that require several swaps to implement, though this sometimes leads to a less precise estimate of the optimal swap cost.
Subgraph links can be connected into a chain (called a \emph{glink chain}). Based on glink chains, we generate more general {\texttt{QUEKNO}} circuits. In the following, we first illustrate the key idea of glinks with a motivating example and then present the detailed construction method of the {\texttt{QUEKNO}} circuits.
\subsection{A Motivating Example}
{Let $V_1 \ensuremath{\triangleq}\{0,1,2,3,4,5\}$, $E_1\ensuremath{\triangleq} \{(0,1), (1,3), (2,4), (3,5)\}$ and $V_2\ensuremath{\triangleq} \{1,2,3,4,5\}$, $E_2\ensuremath{\triangleq} \{ (1,3), (2,4), (4,5)\}$. Then $G_i=(V_i,E_i)$ $(i=1,2)$ are two subgraphs of $\mathbb{AG}=Grid2\times3$}, see Fig.~\ref{fig:glink} for illustration. Let $\pi_2\ensuremath{\triangleq} \pi_{0,1}$ be the permutation induced by swapping 0 and 1 in $\mathbb{AG}$. Note that the inverse of $\pi_2$ is itself, i.e., $\pi_2 = \pi_2^{-1}$.
We permute $G_2$ with $\pi_2$ and {let $G'$ be the union of $G_1$ and $\pi_2(G_2)$. Then $G'$} has edges $(0,1),(1,3), (2,4),(3,5)$ and $(0,3), (4,5)$; see Fig.~\ref{fig:glink}(d). As $G'$ contains the 3-cycle $(0,1,3,0)$ while the grid $\mathbb{AG}$ is bipartite and hence has no odd cycles, $G'$ cannot be embedded into $\mathbb{AG}$.
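For graphs of this size, embeddability into $\mathbb{AG}$ can be checked by brute force over all vertex permutations. The sketch below is our illustration, not the VF2-based routine used in the paper; the edge set of $Grid2\times 3$ is our reading of the numbering $0\,1\,/\,2\,3\,/\,4\,5$ implied by the example:

```python
# Illustrative sketch: brute-force embeddability test for small graphs.
# AG is assumed to be the 2x3 grid numbered 0 1 / 2 3 / 4 5.
from itertools import permutations

AG = {frozenset(e) for e in
      [(0, 1), (2, 3), (4, 5), (0, 2), (2, 4), (1, 3), (3, 5)]}

def embeddable(edges, ag_edges, n=6):
    es = [tuple(e) for e in edges]
    for p in permutations(range(n)):        # try every vertex relabelling
        if all(frozenset((p[u], p[v])) in ag_edges for (u, v) in es):
            return True
    return False

G1 = [(0, 1), (1, 3), (2, 4), (3, 5)]
G_union = G1 + [(0, 3), (4, 5)]             # edges of G' = G1 ∪ pi_2(G_2)
print(embeddable(G1, AG), embeddable(G_union, AG))   # True False
```

As expected, $G_1$ embeds (via the identity) while $G'$, containing a triangle, does not.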
\begin{figure}
\caption{Illustration of the construction of a glink: (a) subgraph $G_1$; (b) subgraph $G_2$; (c) the graph $\pi_2(G_2)$ obtained by permuting $G_2$ with $\pi_2$ (the permutation induced by $\swap{0}{1}$);
(d) the {union} graph $G'$ of $G_1$ and $\pi_2(G_2)$.}
\label{fig:glink}
\end{figure}
Starting from $G_1$ and $G_2$, we `grow' two circuits $\widetilde{C}_1$ and $\widetilde{C}_2$ with interaction graphs $G_1$ and $G_2$. For instance, let $\widetilde{C}_1 = [\langle 2\rangle , \langle 0, 1\rangle , \langle 1\rangle , \langle 3, 5\rangle , \langle 3\rangle , \langle 1\rangle , \langle 3, 5\rangle , \langle 4\rangle , \langle 2\rangle , \langle 2, 4\rangle , \langle 1\rangle , \langle 5\rangle $, $\langle 1, 3\rangle ]$, and $\widetilde{C}_2 = [\langle 0\rangle , \langle 1\rangle , \langle 2\rangle , \langle 3, 1\rangle , \langle 5\rangle , \langle 1, 3\rangle , \langle 1\rangle , \langle 1\rangle , \langle 4, 5\rangle $, $\langle 4\rangle , \langle 0\rangle , \langle 2, 4\rangle ]$. Then we have two $\mathbb{AG}$-circuits $\widetilde{C}_1$ and $\widetilde{C}_2$. We permute $\widetilde{C}_2$ with $\pi_2=\pi_{0,1}$ and get $\pi_2(\widetilde{C}_2) = \big[\langle 1\rangle , \langle 0\rangle , \langle 2\rangle , \langle 3, 0\rangle , \langle 5\rangle , \langle 0, 3\rangle , \langle 0\rangle , \langle 0\rangle , \langle 4, 5\rangle , \langle 4\rangle , \langle 1\rangle , \langle 2, 4\rangle \big].$
Lastly, we generate a random permutation $\pi_1$, and apply $\pi_1$ on $\widetilde{C}_1+\pi_2(\widetilde{C}_2)$. For example, let $\pi_1=(2,1,5,4,3,0)$. We have a {\texttt{QUEKNO}} circuit $C \ensuremath{\triangleq} \pi_1\big(\widetilde{C}_1+\pi_2(\widetilde{C}_2)\big) = \pi_1(\widetilde{C}_1)+({\pi_1\circ\pi_2})(\widetilde{C}_2)$. More precisely, we have $C=C_1+C_2$, where $C_1 \ensuremath{\triangleq} \pi_1(\widetilde{C}_1) = [\langle 5\rangle, \langle 2, 1\rangle, \langle 1\rangle, \langle 4, 0\rangle, \langle 4\rangle, \langle 1\rangle, \langle 4, 0\rangle, \langle 3\rangle, \langle 5\rangle, \langle 5, 3\rangle, \langle 1\rangle, \langle 0\rangle$, $\langle 1, 4\rangle]$ and
$C_2 \ensuremath{\triangleq} ({\pi_1\circ\pi_2})(\widetilde{C}_2) = [\langle 1\rangle, \langle 2\rangle, \langle 5\rangle, \langle 4, 2\rangle, \langle 0\rangle, \langle 2, 4\rangle, \langle 2\rangle, \langle 2\rangle, \langle 3, 0\rangle, \langle 3\rangle, \langle 1\rangle, \langle 5, 3\rangle]$.
The \texttt{QUEKNO}\ circuit $C$ is exactly the logical circuit shown in Fig.~\ref{fig:logical_circuit}. As we have seen in Sec.~\ref{sec:qct-ex}, starting from the initial (logical to physical) mapping $\pi_1^{-1} = (5,1,0,4,3,2)$, all gates in $C_1$ can be executed and removed. We then insert a $\swap{0}{1}$. This updates the logical to physical mapping from $\pi_1^{-1}$ to $({\pi_1\circ\pi_2})^{-1} = {\pi_2^{-1}\circ \pi_1^{-1} = (5,0,1,4,3,2)}$, which transforms {$\langle 4,2\rangle$ to $\langle 3,1\rangle$, $\langle 3,0\rangle$ to $\langle 4,5\rangle$, and $\langle 5,3\rangle$ to $\langle 2,4\rangle$}. Thus all gates in $C_2$ are executable. With cost 1, this transformation is optimal.
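The mapping update performed when inserting $\swap{0}{1}$ can be checked mechanically. The following sketch (illustrative code, with permutations as lists) verifies $(\pi_1\circ\pi_2)^{-1} = \pi_2^{-1}\circ\pi_1^{-1}$ on the example:

```python
# Illustrative sketch: permutations as lists, pi[q] = image of q.
def inverse(pi):
    inv = [0] * len(pi)
    for q, p in enumerate(pi):
        inv[p] = q
    return inv

def compose(f, g):            # (f o g)[q] = f[g[q]]
    return [f[g[q]] for q in range(len(g))]

pi1 = [2, 1, 5, 4, 3, 0]      # the random permutation pi_1
pi2 = [1, 0, 2, 3, 4, 5]      # pi_{0,1}
print(inverse(pi1))                         # [5, 1, 0, 4, 3, 2]
print(compose(inverse(pi2), inverse(pi1)))  # [5, 0, 1, 4, 3, 2]
```

Both outputs match the mappings $\pi_1^{-1}$ and $\pi_2^{-1}\circ\pi_1^{-1}$ stated above.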
In the following, we give a formal introduction to the construction of glinks and glink chains.
\subsection{Subgraph Link Chains}
Let $\mathbb{AG} = (\mathbb{V}, \mathbb{E})$ be an architecture graph. Each \texttt{QUEKNO}\ circuit uses a chain of subgraph links as the backbone.
\begin{definition}[glink] Let $G_i=(V_i,E_i)$ for $i=1,2$ be two subgraphs of $\mathbb{AG}$ and $\pi$ a permutation on $\mathbb{V}$. We construct a new graph $G'=(V',E')$, where \begin{itemize}
\item $V'=V_1\cup \pi(V_2)$, and
\item $E' = E_1\cup \pi(E_2)$, i.e., for $u,v\in V'$, $(u,v)\in E'$ iff either $(u,v)\in E_1$ or $(\pi^{-1}(u),\pi^{-1}(v))$ $\in E_2$. \end{itemize} We call $(G_1,\pi,G_2)$ a \emph{subgraph link} (\emph{glink} for short), denoted by $\glink{G_1}{\pi}{G_2}$. If in addition $G'$ is not embeddable in $\mathbb{AG}$, then we call $(G_1,\pi,G_2)$ a \emph{strong glink}, denoted by $\sglink{G_1}{\pi}{G_2}$. \end{definition}
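The union graph $G'$ of the definition can be computed directly from $G_1$, $G_2$, and $\pi$. The following sketch (our illustration, not part of the paper's toolchain) builds $V'$ and $E'$ and reproduces the graph of Fig.~\ref{fig:glink}(d):

```python
# Illustrative sketch of the glink union graph:
# V' = V1 ∪ pi(V2),  E' = E1 ∪ pi(E2).
def glink_union(G1, G2, pi):
    (V1, E1), (V2, E2) = G1, G2
    Vp = set(V1) | {pi[v] for v in V2}
    Ep = ({frozenset(e) for e in E1} |
          {frozenset((pi[u], pi[v])) for (u, v) in E2})
    return Vp, Ep

pi2 = [1, 0, 2, 3, 4, 5]       # the transposition pi_{0,1}
G1 = ({0, 1, 2, 3, 4, 5}, [(0, 1), (1, 3), (2, 4), (3, 5)])
G2 = ({1, 2, 3, 4, 5},     [(1, 3), (2, 4), (4, 5)])
Vp, Ep = glink_union(G1, G2, pi2)
```

Here $E'$ consists of the six edges $(0,1),(1,3),(2,4),(3,5),(0,3),(4,5)$, as in the motivating example.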
Glinks are the basic components in the construction of {\texttt{QUEKNO}} circuits. Fig.~\ref{fig:glink} shows an example of a strong glink.
\begin{lemma}\label{lem:glink} Let $\glink{G_1}{\pi}{G_2}$ be a glink. Suppose $\widetilde{C}_i$ $(i=1,2)$ is a circuit which has $G_i$ as its interaction graph. Let $C=\widetilde{C}_1 + \pi(\widetilde{C}_2)$. Then $C$ can be transformed into an $\mathbb{AG}$-executable circuit by inserting between $\widetilde{C}_1$ and $\pi(\widetilde{C}_2)$ any SWAP circuit that implements {$\pi^{-1}$}. \end{lemma} \begin{proof}
\xblue{Taking the identity mapping $id_\mathbb{V}$ as the initial mapping, the gates in $\widetilde{C}_1$ can be executed and removed from $C$ directly. Note that the current mapping remains $id_\mathbb{V}$. If we insert a SWAP circuit that implements {$\pi^{-1}$, then the current logical to physical mapping becomes $\pi^{-1}$},
which can execute $\pi(\widetilde{C}_2)$ because $\pi^{-1}(\pi(\widetilde{C}_2))=\widetilde{C}_2$ is an $\mathbb{AG}$-circuit. } \end{proof}
Clearly, if we apply another permutation $\pi'$ on $C=\widetilde{C}_1 + \pi(\widetilde{C}_2)$, then $\pi'(C)$ can be executed on $\mathbb{AG}$ by taking the inverse of $\pi'$ as the initial mapping and inserting between $\widetilde{C}_1$ and $\pi(\widetilde{C}_2)$ any SWAP circuit that implements {$\pi^{-1}$}.
The above construction can be naturally extended to a sequence of glinks, upon which we can `grow' \texttt{QUEKNO}\ benchmark circuits. \begin{definition}[glink chain] A glink chain is a sequence \[G_1,\pi_2,G_2,\pi_3,\cdots, G_{s-1},\pi_{s},G_s\] such that $\glink{G_i}{\pi_{i+1}}{G_{i+1}}$ for each $1\leq i<s$. A glink chain is \emph{strong} if all its glinks are strong. \end{definition}
\subsubsection*{Benchmark construction based on a glink chain} Given a glink chain $\langle G_1,\pi_2,G_2,\cdots, \pi_{s},G_s \rangle$ and an additional arbitrary permutation $\pi_1$, we generate a benchmark circuit $C$ as follows: (1) For each $i$, generate a random circuit $\widetilde{C}_i$ whose interaction graph is $G_i$. (2) Concatenate these circuits backwards with the permutations, i.e.,
\begin{align}\label{eq:quekno}
\resizebox{0.9\hsize}{!}{
\hspace*{-4mm} $C = \pi_1\bigg(\widetilde{C}_1 + \pi_2 \Big(\widetilde{C}_2 + \pi_3 \big( \cdots \pi_{s-1}(\widetilde{C}_{s-1} + \pi_{s}(\widetilde{C}_s))\cdots\big)\Big)\bigg)$.
}
\end{align} Each $\widetilde{C}_i$ constructed above is an $\mathbb{AG}$-circuit. We now give a reasonable estimate of the optimal transformation cost of the constructed circuit $C$.
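Step (2) can be sketched as a right-to-left fold, applying each $\pi_i$ to everything on its right (illustrative Python; permutations are lists, gates are tuples):

```python
# Illustrative sketch of Eq. (quekno): concatenate the section
# circuits backwards, applying each pi_i to all gates to its right.
def permute_circuit(pi, circuit):
    return [tuple(pi[q] for q in g) for g in circuit]

def quekno_circuit(pi_list, tilde_circuits):
    """pi_list = [pi_1, ..., pi_s]; tilde_circuits = [C~_1, ..., C~_s]."""
    C = []
    for pi, Ct in zip(reversed(pi_list), reversed(tilde_circuits)):
        C = permute_circuit(pi, Ct + C)    # pi_i(C~_i + rest)
    return C
```

On the motivating example ($s=2$, $\pi_1=(2,1,5,4,3,0)$, $\pi_2=\pi_{0,1}$, and the 2-qubit gates of $\widetilde{C}_1$ and $\widetilde{C}_2$), this fold reproduces the 2-qubit gates of the circuit $C$ of Sec.~\ref{sec:qct-ex}.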
\begin{theorem}\label{thm:quekno} Suppose $C$ is a \texttt{QUEKNO}\ circuit as in Eq.~\ref{eq:quekno}. Then $C$ can be transformed into an $\mathbb{AG}$-executable circuit by \begin{itemize}
\item taking $\pi_1^{-1}$ as the initial mapping,
\item inserting, from left to right, a SWAP circuit {$S_{i}$} that implements {$\pi_{i+1}^{-1}$} after we execute and remove gates in $C_i\ensuremath{\triangleq} (\pi_1\circ\cdots\circ \pi_i)(\widetilde{C_i})$ for $1\leq i<s$. \end{itemize} The optimal transformation cost of $C$ is at most $\sum_{i=2}^s\swapnorm{\pi_{i}}$. \end{theorem} \begin{proof}
For each $1\leq i\leq s$, write $\sigma_i$ for the permutation $(\pi_1\circ \cdots\circ \pi_i)^{-1}$. Then $C$ can be written in the form of Eq.~\ref{eq:circpart}. Following the analysis given in Sec.~\ref{sec:qct}, or directly by Lemma~\ref{lem:glink}, we can show the correctness of the above transformation. As the cost of this transformation is $k \ensuremath{\triangleq} \sum_{i=2}^s {\swapnorm{\pi_i^{-1}} =\sum_{i=2}^s \swapnorm{\pi_i}}$, we know the optimal cost is not larger than $k$. \end{proof}
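The quantity $\swapnorm{\pi}$ is bounded below by the classical transposition count $|\mathbb{V}| - \#\mathrm{cycles}(\pi)$; on a connectivity-constrained architecture graph the actual swap cost may be larger. A sketch of this lower bound (our illustration, not the paper's definition of $\swapnorm{\cdot}$):

```python
# Illustrative sketch: the minimum number of (unrestricted)
# transpositions implementing pi is n - (#cycles of pi, fixed
# points included); on a real AG this is only a lower bound.
def min_transpositions(pi):
    n, seen, cycles = len(pi), set(), 0
    for i in range(n):
        if i not in seen:           # start a new cycle
            cycles += 1
            j = i
            while j not in seen:
                seen.add(j)
                j = pi[j]
    return n - cycles

print(min_transpositions([1, 0, 2, 3, 4, 5]))   # pi_{0,1}: one swap
```

For $\pi_{0,1}$ this gives 1, matching the single $\swap{0}{1}$ used in the example.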
\section{Benchmark Design}\label{sec:design}
Let $\mathbb{AG}=(\mathbb{V},\mathbb{E})$ be the architecture graph of a quantum device. This section describes the detailed procedure for generating our \texttt{QUEKNO}\ benchmarks for $\mathbb{AG}$. We need the following subroutines; \xblue{the parameters used in them will be explained in detail in Sec.~\ref{sec:quekno-dim}.} \begin{enumerate}
\item Randomly generate a subgraph $G$ of $\mathbb{AG}$ with the specified subgraph size, which can be either `small' or `large'.
\item Given a subgraph $G$ of $\mathbb{AG}$, and a given qubit gate ratio $\rho_\text{qbg}$, which specifies the ratio between the numbers of 1- and 2-qubit gates, randomly generate an $\mathbb{AG}$-circuit $\widetilde{C}$ which respects $\rho_\text{qbg}$ and has $G$ as its interaction graph.
\item Given two subgraphs $G_1, G_2$ of $\mathbb{AG}$, randomly select a permutation $\pi$ with a given permutation type such that $\sglink{G_1}{\pi}{G_2}$, i.e., $\langle G_1,\pi,G_2 \rangle$ is a strong glink. When the optimisation objective is `gate size', we have two permutation types, `opt1' and `opt2', {which will be introduced in Sec.~\ref{sec:quekno-dim}.} When the optimisation objective is `depth', we require that the permutation be implemented by a SWAP circuit of depth 1, which consists of a sequence of SWAP gates acting on pairwise disjoint physical qubits. We refer to this type of permutation as `parallel' (cf.~Sec.~\ref{sec:quekno-dim}). \end{enumerate} With these subroutines, we can generate benchmark circuits of the form of Eq.~\ref{eq:quekno}, which usually have near-optimal transformation costs. The pseudocode is described in Alg.~\ref{alg:quekno}.
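A `parallel' permutation is exactly a product of SWAPs along pairwise disjoint edges of $\mathbb{AG}$, i.e., a matching. Sampling one can be sketched as follows (our illustration, not the paper's sampler):

```python
# Illustrative sketch: sample a `parallel' permutation, i.e. a set of
# SWAPs on pairwise disjoint AG edges (a depth-1 SWAP circuit).
import random

def parallel_permutation(n, ag_edges, seed=0):
    rng = random.Random(seed)
    pi, used = list(range(n)), set()
    for (u, v) in rng.sample(list(ag_edges), len(ag_edges)):
        if u not in used and v not in used and rng.random() < 0.5:
            pi[u], pi[v] = pi[v], pi[u]     # SWAP on AG edge (u, v)
            used |= {u, v}
    return pi

print(parallel_permutation(6, [(0, 1), (2, 3), (4, 5), (0, 2),
                               (2, 4), (1, 3), (3, 5)]))
```

By construction the result is an involution whose transpositions all lie on edges of $\mathbb{AG}$.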
From Theorem~\ref{thm:qct}, any optimal circuit transformation of an input circuit can be presented as in Eq.~\ref{eq:quekno}. This suggests that our benchmarks are general and representative.
\begin{algorithm} \caption{{\texttt{QUEKNO}} Benchmark Circuits Construction} \label{alg:quekno}
\begin{algorithmic}[1] \Require{An architecture graph $\mathbb{AG}$, optimisation type $opt_\text{type}$, target swap cost $c$, permutation type $perm_\text{type}$, subgraph size $graph\_size$, and qubit gate ratio $\rho_\text{qbg}$} \Ensure{A \texttt{QUEKNO}\ circuit $C$} \State Randomly generate a permutation $\pi_1$ \State Randomly generate a subgraph $G_1$ of $\mathbb{AG}$ respecting subgraph size \State Randomly generate an $\mathbb{AG}$-circuit $\widetilde{C}_1$ with interaction graph $G_1$ respecting qubit gate ratio \State $glinkChain \leftarrow G_1$ \State $cost \leftarrow 0$ \State $\ell \leftarrow 1$ \While{$cost < c$}
\State Randomly generate a strong glink $\sglink{G_\ell}{\pi_{\ell+1}}{G_{\ell+1}}$ starting from the last subgraph $G_\ell$ of $glinkChain$ that respects permutation type and subgraph size
\State Extend $glinkChain$ with $\sglink{G_\ell}{\pi_{\ell+1}}{G_{\ell+1}}$
\State Randomly generate an $\mathbb{AG}$-circuit $\widetilde{C}_{\ell+1}$ with interaction graph $G_{\ell+1}$ respecting qubit gate ratio
\If{$opt_\text{type} = \text{`gate size'}$}
\State $cost \leftarrow cost+\swapnorm{\pi_{\ell+1}}$
\ElsIf{$opt_\text{type} = \text{`depth'}$}
\State $cost \leftarrow cost + 1$
\EndIf \State $\ell \leftarrow \ell + 1$ \EndWhile \State \resizebox{0.9\hsize}{!}{ $C\leftarrow \pi_1\bigg(\widetilde{C}_1 + \pi_2 \Big(\widetilde{C}_2 + \pi_3 \big( \cdots \pi_{\ell}(\widetilde{C}_{\ell} + \pi_{\ell+1}(\widetilde{C}_{\ell+1}))\cdots \big)\Big)\bigg)$ } \end{algorithmic} \end{algorithm}
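The control flow of Alg.~\ref{alg:quekno} can be sketched as follows (illustrative Python; the generators for subgraphs, circuits, and strong glinks are passed in as stubs, and $\pi_1$ is supplied by the caller where the algorithm generates it randomly):

```python
# Illustrative sketch of the QUEKNO construction loop.
def build_quekno(pi1, target_cost, random_subgraph, random_circuit,
                 random_strong_glink, swap_norm, opt_type='gate size'):
    G = random_subgraph()                       # first subgraph G_1
    pis, circs = [pi1], [random_circuit(G)]
    cost = 0
    while cost < target_cost:
        pi_next, G = random_strong_glink(G)     # strong glink from last subgraph
        pis.append(pi_next)
        circs.append(random_circuit(G))
        cost += swap_norm(pi_next) if opt_type == 'gate size' else 1
    C = []                                      # concatenate backwards, Eq. (quekno)
    for pi, Ct in zip(reversed(pis), reversed(circs)):
        C = [tuple(pi[q] for q in g) for g in Ct + C]
    return C
```

The fold at the end is the same right-to-left composition as in Eq.~\ref{eq:quekno}.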
When the optimisation objective is `gate size', the above algorithm constructs benchmarks with near-optimal transformation swap cost. When the optimisation objective is the depth of the output circuit, \xblue{the situation is a little different. While each SWAP gate is implemented with three consecutive CNOT gates, the depth increase caused by inserting a SWAP circuit of depth 1 between $C_i$ and $C_{i+1}$ may sometimes be significantly larger than 3. This is because gates in the two sections $C_i$ and $C_{i+1}$ are not aligned in the process of transformation.} To generate benchmark circuits with a small depth ratio, we align each $\widetilde{C}_i$ so that there are no or very few free qubits in its last layer. This can be achieved by, for example, rearranging the gates in $\widetilde{C}_i$ or inserting random 1-qubit gates and some 2-qubit gates that {have} already appeared.
\subsection{Dimensions and Parameters of \texttt{QUEKNO}} \label{sec:quekno-dim} We only consider near-term feasible benchmarks, which correspond to the \texttt{QUEKO}\ benchmarks $\text{B}_{\text{NTF}}$ in \cite{TanC21-queko}. \texttt{QUEKNO}\ benchmarks are ideal for evaluating gate size and depth optimality. Our design has the following dimensions: \begin{enumerate}
\item \textbf{optimisation objective ($opt_\text{type}$)}: the target can be minimising the swap cost or depth cost. The swap cost counts how many swaps are inserted in the transformation, while the depth cost is defined as the difference between the depths of the output and input circuits. Note that in the output circuit each SWAP gate is compiled into three CNOT gates. To minimise the swap cost (depth cost) is equivalent to minimising the gate size (depth) of the output circuit. We call these two optimisations swap cost (or gate size) optimisation and depth cost (or depth) optimisation.
\item \textbf{target transformation cost ($c$)}: if the optimisation objective is the swap cost, we select the swap cost from $\set{0,1,2,3,4,5,10,15,20,25}$; when the objective is the depth cost, we select the cost (swap layers) from $\set{1,2,3,4,5,10}$. Recall that we are interested in near-term feasible circuits, which often have depths less than 50. Because the circuit depth may often be increased by 3 or more when adding a layer or a sequence of SWAPs, it is reasonable to consider swap cost below 25 and swap layers below 10. In addition, as can be seen from, e.g., Fig.~\ref{fig:depth_opt_rochester}, QCT algorithms scale very well with the increasing target swap (depth) cost.
\item \textbf{permutation type ($perm_\text{type})$}: for gate size optimisation, permutations (except the first permutation $\pi_1$, which incurs no transformation cost) are implemented by either a single swap or two consecutive swaps.
If the permutation type is `opt1', then each permutation is implemented by a single swap; if it is `opt2', then we randomly implement each permutation by either a single swap or two consecutive swaps. For the depth optimisation, each permutation (except the first one) is implemented by a set of parallel swaps, i.e., a SWAP circuit of depth 1.
\item \textbf{architecture graph ($\mathbb{AG}$)}: We consider three representative and state-of-the-art quantum devices, viz., IBM Q Tokyo (20 qubits), IBM Q Rochester (53 qubits), and Google's Sycamore (53 qubits), which are also considered in \cite{TanC21-queko,LiZF21_fidls,Zhou+22_MCTS_Todaes}. We note that the benchmarks designed for IBM Q Rochester {are perhaps not} near-optimal for Sycamore, and vice versa. This is because (i) there are subgraphs of Sycamore that are not embeddable in IBM Q Rochester, and (ii) a strong glink for Rochester need not be strong for Sycamore. \xblue{We note that our benchmark construction method can be applied to any near-term superconducting quantum device.}
\item \textbf{subgraph size ($graph\_size$)}: \xblue{Recall that subgraphs of the architecture graph are used for generating glinks. The larger the subgraphs, the more gates contained in the subcircuits (i.e., sections $C_i$) in the benchmark circuit.}
For 20-qubit IBM Q Tokyo, the subgraphs in the generated glinks have five edges on average; for the two 53-qubit devices, we have two choices. The average size of these subgraphs could be small ($\sim$8 edges) or large ($\sim$16 edges).
\item \textbf{qubit gate ratio ($\rho_\text{qbg}$) for evaluating depth optimality}:
\xblue{Recall that the actual functionality of a 1-qubit gate plays no role in the circuit transformation process. Moreover, for gate size optimality, the number of 1-qubit gates in the circuits does not affect the transformation process. This is, however, not true for depth optimality: the number (and distribution) of 1-qubit gates in the input circuit may significantly affect the depth of the output (transformed) circuit. To reflect this,
we introduce in \texttt{QUEKNO}\ construction the important parameter `qubit gate ratio' $\rho_\text{qbg}=M_1/M_2$ between 1- and 2-qubit gates, where $M_1$ and $M_2$ denote the number of 1- and 2-qubit gates respectively.} In particular, following \cite{TanC21-queko}, we consider two special ratios, viz., the `QSE' ratio $\rho_\text{qbg}= 2.55$ based on the random circuit used in Google's quantum supremacy experiment \cite{Arute+19_google_quantum_supremacy} and the `TFL' ratio $\rho_\text{qbg} = 1.5$ based on the Toffoli circuit. \end{enumerate}
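Generating a section circuit that realises a given interaction graph $G$ while respecting $\rho_\text{qbg}$ can be sketched as follows (our illustration; here each edge contributes exactly one 2-qubit gate, whereas actual \texttt{QUEKNO}\ circuits may repeat edges):

```python
# Illustrative sketch: grow a circuit over interaction graph G with
# qubit gate ratio rho_qbg = (#1-qubit gates) / (#2-qubit gates).
import random

def grow_circuit(edges, rho_qbg, seed=1):
    rng = random.Random(seed)
    qubits = sorted({q for e in edges for q in e})
    gates = [tuple(e) for e in edges]          # one 2-qubit gate per edge
    n1 = round(rho_qbg * len(gates))           # 1-qubit gate budget
    gates += [(rng.choice(qubits),) for _ in range(n1)]
    rng.shuffle(gates)
    return gates
```

For the `TFL' ratio $\rho_\text{qbg}=1.5$ and a two-edge graph, this yields three 1-qubit gates interleaved with the two 2-qubit gates.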
\begin{table}[]
\centering
\scalebox{0.8}{
\begin{tabular}{c|c|c|c|c}
benchmark set name & $perm_\text{type}$ & $graph\_size$ & $\rho_\text{qbg}$ & circ. number \\ \hline
20Q\_gate\_tokyo& opt1 or opt2 &large &1.5& 100 $(\times 2)$\\
20Q\_depth\_tokyo & parallel & large & 1.5 or 2.55 & 60 $(\times 2)$\\
53Q\_gate\_Sycamore & opt1 or opt2 &small or large&1.5&100 $(\times 4)$\\
53Q\_depth\_Sycamore & parallel & small or large & 1.5 or 2.55 & 60 $(\times 4)$\\
53Q\_gate\_Rochester & opt1 or opt2 &small or large&1.5&100 $(\times 4)$\\
53Q\_depth\_Rochester & parallel & small or large & 1.5 or 2.55 & 60 $(\times 4)$\\
\hline 20Q\_bss\_tokyo &-&-&2.55& 90\\ 16Q\_bntf\_Aspen-4 &-&-&1.5 & 90\\ 54Q\_bntf\_Sycamore &-&-&2.55& 90\\
\end{tabular}
}
\caption{\texttt{QUEKNO}\ (top) and \texttt{QUEKO}\ benchmark sets (bottom)}
\label{tab:benchmark_set} \end{table}
For each legal combination of these dimensions, we randomly generate 10 circuits. In summary, we have six sets of \texttt{QUEKNO}\ benchmarks as shown at the top of Table~\ref{tab:benchmark_set}. For example, the benchmark set `53Q\_gate\_Rochester' includes 100 circuits for each pair of parameters (permutation type, graph size), which can be used to evaluate the gate size optimality of QCT algorithms on IBM Q Rochester.
\subsection{Benchmark Information} This subsection gives a closer examination of \texttt{QUEKNO}\ benchmarks. Due to space limitation, here we only consider benchmarks for depth optimality. We take Rochester benchmark sets as examples and show how the depth of the benchmark circuit and the known near-optimal depth ratio vary with target depth cost.
\begin{figure}
\caption{Average information about \texttt{QUEKNO}\ `53Q\_depth\_Rochester' benchmarks: the average depth (top) and the average known near-optimal depth ratios (bottom), where the $x$-axis denotes target swap depth cost and each node in a line denotes the average value of ten \texttt{QUEKNO}\ circuits with the same parameters.}
\label{fig:rochester_depth_info}
\end{figure}
In Fig.~\ref{fig:rochester_depth_info}, we compare four sets of \texttt{QUEKNO}\ `53Q\_depth\_Rochester' benchmarks, i.e., those with graph size `large' or `small' and qubit gate ratio `TFL' (i.e., $\rho_\text{qbg}=1.5$) or `QSE' (i.e., $\rho_\text{qbg}=2.55$), and target SWAP depth cost in $\set{1,2,3,4,5}$. For better illustration, we omit the data about benchmarks with target SWAP depth cost 10, which have average depths ranging from 40 to 80. From the top of Fig.~\ref{fig:rochester_depth_info}, we can see that these benchmarks have average depth linear in the target SWAP depth cost, and benchmarks with graph size `large' (qubit gate ratio `QSE') have larger depths than corresponding benchmarks with graph size `small' (qubit gate ratio `TFL'). Fig.~\ref{fig:rochester_depth_info} (bottom) shows that `TFL' benchmarks have a larger known near-optimal depth ratio than corresponding `QSE' benchmarks, partially because the latter have more 1-qubit gates and relatively fewer 2-qubit gates. Similarly, benchmarks with graph size `small' have a larger known near-optimal depth ratio than those with graph size `large', as benchmarks with graph size `large' have more 2-qubit gates but the same number of sections. The figure also shows that the depth ratios of \texttt{QUEKNO}\ benchmarks scale well (at most linearly) with {increasing} target swap depth costs.
\section{Experiments and Evaluation}\label{sec:evaluation} Our algorithm (implemented in Python 3) and benchmarks are publicly available under the MIT license at \url{https://github.com/ebony72/quekno}. All our experiments were run on a laptop with an i7-11800 CPU, 32 GB memory, and an RTX 3060 GPU.
We compare the following state-of-the-art QCT algorithms: {$\text{t}\!\ket{\mathrm{ket}}$} \cite{Cowtan+19-tket} from Cambridge Quantum Computing, \textsf{SABRE}\ \cite{Li+19-sabre}, \textsf{SAHS}\ \cite{Zhou+20_SAHS}, \textsf{MCTS}\ \cite{Zhou+22_MCTS_Todaes}, and the Qiskit transpiler. Based on exhaustive search, the BMT algorithm of \cite{siraichi+19_bmt} takes several hours on our computer to transform a 20-qubit circuit with depth $\leq 10$, so we choose not to evaluate BMT. \textsf{FiDLS}\ \cite{LiZF21_fidls} was designed for gate size optimisation. Experiments on \texttt{QUEKNO}\ gate size benchmarks show that it performs similarly to $\text{t}\!\ket{\mathrm{ket}}$, and its initial mapping module can find the ideal embedding (thus achieving optimality) for all \texttt{QUEKO}\ `16Q\_bntf\_Aspen-4' and `20Q\_bss\_Tokyo' benchmarks and all but one of the `53Q\_bntf\_Rochester' benchmarks. We therefore also omit the discussion of the performance of \textsf{FiDLS}\ in this paper.
\subsubsection*{Details of the QCT Algorithms Compared} Qiskit's transpiler provides multiple choices for both initial mapping and routing procedures.\footnote{The version of Qiskit used in our evaluation is 0.33.0.} To facilitate comparison with \texttt{QUEKO}\ benchmarks, we choose \texttt{DenseLayout} and \texttt{StochasticSWAP} as the Qiskit transpiler configuration to compare. \textsf{SABRE}\ was initially described in \cite{Li+19-sabre} and has recently been integrated into Qiskit. In this paper, we choose this Qiskit implementation of \textsf{SABRE}\ and select the advanced `lookahead' heuristics. In addition, we set the `EXTENDED\_SET\_SIZE' (the size of the lookahead window) to the number of physical qubits of the device. The version of \textsf{SAHS}\ we use is from its authors' GitHub website,\footnote{https://github.com/BensonZhou1991/circuittransform} which is now more efficient and supports both gate size and depth objectives. The original \textsf{MCTS}\ algorithm \cite{ZhouFL20_MCTS_iccad} targeted gate size and was modified in \cite{Zhou+22_MCTS_Todaes} to address depth optimisation.\footnote{https://github.com/BensonZhou1991/MCTS-New} We call these two versions \textsf{MCTS}-size and \textsf{MCTS}-depth, respectively. In our evaluation of gate size optimality, we use \textsf{MCTS}-size; otherwise, we use \textsf{MCTS}-depth. The initial mappings used for \textsf{MCTS}\ are obtained by the Simulated Annealing method in \textsf{SAHS}\ \cite{Zhou+20_SAHS}. When evaluating $\text{t}\!\ket{\mathrm{ket}}$\ on \texttt{QUEKO}, Tan and Cong \cite{TanC21-queko} selected \texttt{GraphPlacement} for initial mapping, which might have led to favourable results for $\text{t}\!\ket{\mathrm{ket}}$, as \texttt{GraphPlacement} uses subgraph isomorphism and can find optimal solutions for some \texttt{QUEKO}\ circuits.
To facilitate comparison with the results presented in \cite{TanC21-queko}, in our evaluation, we also choose to use \texttt{GraphPlacement}.\footnote{The version of $\text{t}\!\ket{\mathrm{ket}}$\ used for our evaluation is 0.17.0.}
In addition, we disable all optimisation passes for a fair comparison. As \textsf{MCTS}, \textsf{SABRE}, and Qiskit are randomised algorithms, for each benchmark circuit we run them five times and record the best value. This could improve the performance by 10\%--20\%.
\subsection{Summary of Evaluation Results} We summarise the evaluation results in Fig.~\ref{fig:opt_sum}. For gate size optimality, we consider three \texttt{QUEKNO}\ benchmark sets, viz. `53Q\_gate\_Sycamore', `53Q\_gate\_Rochester', `20Q\_gate\_Tokyo', and three \texttt{QUEKO}\ sets `54Q\_bntf\_Sycamore', `20Q\_bss\_Tokyo', and `16Q\_bntf\_Aspen-4'. As no `20Q\_bntf\_Tokyo' benchmark set is provided on the GitHub website of \texttt{QUEKO},\footnote{https://github.com/tbcdebug/QUEKO-benchmark} we replace it with `20Q\_bss\_Tokyo', which consists of benchmarks with depths from 100 to 900 for scaling study. We also run the benchmarks in `16Q\_bntf\_Aspen-4' on the 20-qubit IBM Q Tokyo. In addition, when running on benchmarks in `54Q\_bntf\_Sycamore', we use the ideal architecture graph of Sycamore in which the bad node, as well as its connections, is restored. For depth optimality, we evaluate, besides the three \texttt{QUEKO}\ sets, three \texttt{QUEKNO}\ sets `53Q\_depth\_Sycamore', `53Q\_depth\_Rochester', and `20Q\_depth\_Tokyo'.
For gate size optimality, from the top of Fig.~\ref{fig:opt_sum}, we have \begin{enumerate}
\item Qiskit performs clearly the worst across all six benchmark sets.
\item \textsf{SABRE}\ has the best performance on all but one benchmark set, and its gate size optimality on the \texttt{QUEKNO}\ benchmark set `20Q\_gate\_Tokyo' is only slightly inferior to that of \textsf{MCTS}-size (1.74 vs. 1.70). Its performance on the two 53-qubit \texttt{QUEKNO}\ benchmark sets is conspicuously ($>17\%$) better than that of the other algorithms.
\item \textsf{MCTS}-size performs the best on `20Q\_gate\_Tokyo', third on `54Q\_bntf\_Sycamore', and second on the remaining three benchmark sets. Its performance on the benchmark sets with fewer qubits is very close to that of \textsf{SABRE}. \end{enumerate}
\vspace*{2mm}
For depth optimality, from the bottom of Fig.~\ref{fig:opt_sum}, we have \begin{enumerate}
\item Qiskit performs clearly the worst on all benchmark sets except `54Q\_bntf\_Sycamore', on which it is second-worst.
\item \textsf{SABRE}\ has the best performance on all benchmark sets, and its performance on the two 53-qubit \texttt{QUEKNO}\ benchmark sets is at least 10\% better than the other algorithms.
\item \textsf{MCTS}-depth performs second on all but one benchmark set; on `54Q\_bntf\_Sycamore' it performs third. \end{enumerate} \xblue{When comparing the performances on \texttt{QUEKNO}\ benchmarks across different architectures, from Fig.~\ref{fig:opt_sum}, we also observe the following general trend: \textsf{SABRE}\ $\prec$ \textsf{MCTS}\ $\prec$ \textsf{SAHS}\ $\prec$ $\text{t}\!\ket{\mathrm{ket}}$\ $\prec$ Qiskit, where $\prec$ denotes `better than'. Note that \textsf{SABRE}\ $\prec$ \textsf{MCTS}\ is violated only on `20Q\_gate\_Tokyo', where the cx ratios of \textsf{SABRE}\ and \textsf{MCTS}\ are 1.74 and 1.70, respectively. In addition, the cx/depth ratios on Sycamore are in general smaller than those on IBM Q Rochester, reflecting the fact that Sycamore has more qubit connections than Rochester does.}
\subsection{Comparing \texttt{QUEKNO}\ with \texttt{QUEKO}} \xblue{ As pointed out in the introduction, \texttt{QUEKO}\ benchmarks have optimal transformations with zero cost. This implies that \texttt{QUEKO}\ cannot provide a faithful evaluation of QCT algorithms (like \textsf{FiDLS}\ \cite{LiZF21_fidls}) that use subgraph isomorphism for initial mapping. While \textsf{FiDLS}\ can find the optimal transformation for all \texttt{QUEKO}\ `16Q\_bntf\_Aspen-4' and `20Q\_bss\_Tokyo' benchmarks, its performance on \texttt{QUEKNO}\ `20Q\_gate\_Tokyo' is similar to that of $\text{t}\!\ket{\mathrm{ket}}$, and far from optimal.
In addition, from Fig.~\ref{fig:opt_sum} (top), we observe that both \textsf{SABRE}\ and \textsf{MCTS}-size have near-optimal performance on the \texttt{QUEKO}\ benchmark sets `20Q\_bss\_Tokyo' (1.02 vs. 1.06) and `16Q\_bntf\_Aspen-4' (1.14 vs. 1.19). Note that optimality is reached when the ratio is 1. This near-optimal performance is questionable, as neither algorithm uses subgraph isomorphism for initial mapping. It seems that the two \texttt{QUEKO}\ benchmark sets are too optimistic and do not reveal the actual performances of the two QCT algorithms. By contrast, on \texttt{QUEKNO}\ `20Q\_gate\_Tokyo', the cx ratios of \textsf{SABRE}\ and \textsf{MCTS}-size are 1.74 and 1.70. We believe \texttt{QUEKNO}\ provides a more faithful optimality evaluation than \texttt{QUEKO}\ for QCT algorithms on IBM Q Tokyo. A similar {phenomenon} is also observed in Fig.~\ref{fig:opt_sum} (bottom): \textsf{SABRE}\ and \textsf{MCTS}-depth both have near-optimal performances (1.08 vs. 1.22) on \texttt{QUEKO}\ `20Q\_bss\_Tokyo', while their depth ratios on \texttt{QUEKNO}\ `20Q\_depth\_Tokyo' are 2.59 and 2.77.
It is also interesting to note the different performances of $\text{t}\!\ket{\mathrm{ket}}$\ on \texttt{QUEKO}\ and \texttt{QUEKNO}\ benchmarks. From Fig.~\ref{fig:opt_sum} we see that $\text{t}\!\ket{\mathrm{ket}}$\ performs significantly better than \textsf{MCTS}\ and \textsf{SAHS}\ on \texttt{QUEKO}\ `54Q\_bntf\_Sycamore'; in contrast, its performances on the four 53Q \texttt{QUEKNO}\ benchmark sets (as well as the other \texttt{QUEKO}\ and \texttt{QUEKNO}\ benchmark sets) are always inferior to those of \textsf{MCTS}\ and \textsf{SAHS}. This inconsistent performance of $\text{t}\!\ket{\mathrm{ket}}$\ is perhaps due to its use of the \texttt{GraphPlacement} mapping pass and the fact that all \texttt{QUEKO}\ benchmarks have zero optimal transformation cost.
In summary, \texttt{QUEKNO}\ benchmarks overcome the two limitations of \texttt{QUEKO}\ (cf.~Sec.~\ref{sec:intro}) and demonstrate effectiveness in faithfully evaluating QCT algorithms. This is due to their construction method (cf. Alg.~\ref{alg:quekno}) and the properties characterised in Theorems~\ref{thm:qct} and~\ref{thm:quekno}. }
\subsection{Factors That Affect Optimality}
According to the design of \texttt{QUEKNO}\ benchmarks, there are three critical factors that may affect the gate size optimality of a QCT algorithm, viz. the target swap cost, the permutation type, and the subgraph size. Analogously, the target swap depth cost, the subgraph size, and the qubit gate ratio are the three critical factors that may affect depth optimality.
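To make the connection between the target swap cost and gate size optimality concrete, the known near-optimal cx ratio of a benchmark can be estimated from its input CNOT count and its target swap cost, using the standard decomposition of a SWAP into three CNOT gates (a minimal sketch; the function name and the example numbers are ours, not taken from the benchmark data):

```python
def near_optimal_cx_ratio(input_cx: int, target_swap_cost: int) -> float:
    """Output/input CNOT ratio of a transformation that inserts
    `target_swap_cost` SWAP gates, each decomposed into 3 CNOTs."""
    return (input_cx + 3 * target_swap_cost) / input_cx

# e.g. 100 input CNOTs and a target swap cost of 10 give a ratio of 1.3
ratio = near_optimal_cx_ratio(100, 10)
```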
\begin{figure}
\caption{Evaluation results on IBM Q Rochester and \texttt{QUEKNO}\ benchmark set `53Q\_depth\_Rochester', where the $x$- and $y$-axes denote the target swap depth cost and the average output/input depth ratio of individual QCT algorithms, and `opt' denotes the known near-optimal depth ratio.}
\label{fig:depth_opt_rochester}
\end{figure}
\begin{figure}
\caption{Performance comparison among four subsets (`TFL', `QSE', `small', `large') of `53Q\_depth\_Rochester' benchmarks, where the $y$-axis denotes the average output/input depth ratios of individual QCT algorithms and `avg' (`TFL', `QSE', `small', `large', resp.) denotes the average performance across the whole (`TFL', `QSE', `small', `large', resp.) benchmark set.
}
\label{fig:depth_opt_rochester_cmp}
\end{figure}
For large devices like IBM Q Rochester, the subgraphs used in glinks have graph size `small' ($\sim8$ edges) or `large' ($\sim16$ edges). Naturally, benchmarks with `small' subgraphs have relatively fewer gates. By the construction of \texttt{QUEKNO}\ circuits, benchmarks with qubit gate ratio `TFL' have fewer 1-qubit gates but roughly the same number of 2-qubit gates as those with qubit gate ratio `QSE'.
The performance of QCT algorithms on \texttt{QUEKNO}\ benchmarks scales well with the target swap or swap depth cost used in constructing these benchmarks (cf. Fig.~\ref{fig:depth_opt_rochester}). Fig.~\ref{fig:depth_opt_rochester_cmp} shows that (i) the qubit gate ratio has a significant effect (often $\geq 10\%$) on the performance of QCT algorithms; (ii) the subgraph size has an even larger effect: QCT algorithms perform much better (often $\geq 50\%$) on benchmarks with small subgraphs, partially due to their smaller gate size. For example, \textsf{SABRE}\ has an average depth ratio of 3.23 on benchmarks with large subgraphs, which is 1.63x its average depth ratio of 1.98 on benchmarks with small subgraphs.
\subsection{Performances of QCT Algorithms on Real Circuits} \xblue{ To further validate {the effectiveness of optimality evaluation} of \texttt{QUEKNO}\ benchmarks, we compare the gate size optimality of the five QCT algorithms on IBM Q Tokyo and real benchmark circuits extracted from the IBM Qiskit Circuit Library.\footnote{\url{https://qiskit.org/documentation/apidoc/circuit_library.html}} The real benchmark set consists of 37 circuits with qubit numbers ranging from 16 to 20, gate counts from 24 to 8060, and circuit depths from 3 to 1592. These include in particular 5 Quantum Fourier Transform (QFT) circuits, 5 Quantum Volume (QV) circuits, and 5 Instantaneous Quantum Polynomial (IQP) circuits. These circuits are often very deep ($\text{depths} \geq 100$) and not ideal for comparison with our near-term feasible benchmark circuits. The average cx ratios (cf. Eq.~\ref{eq:rho_gate}) are shown below. \begin{table}[h]
\centering
\scalebox{0.9}{
\begin{tabular}{c|c|c|c|c|c} QCT alg. & {\textsf{SABRE}} & \textsf{MCTS}-size & {\textsf{SAHS}} & {$\text{t}\!\ket{\mathrm{ket}}$} & qiskit \\ \hline cx ratio & 1.57 & 1.51 & 1.56 & 1.68 & 2.49
\end{tabular}
} \end{table}
\noindent From Fig.~\ref{fig:opt_sum} (top), we can see that these ratios are more similar to those demonstrated on \texttt{QUEKNO}\ benchmarks `20Q\_gate\_Tokyo' than those demonstrated on \texttt{QUEKO}\ `20Q\_bss\_Tokyo' and `16Q\_bntf\_Aspen-4'. }
\subsection{Further Discussion} \xblue{ \subsubsection*{The Complexity of \texttt{QUEKNO}\ Construction} The proposed \texttt{QUEKNO}\ benchmark construction method runs in linear time in the number of gates. However, it calls a subgraph isomorphism algorithm to check if a glink is strong (line~8 of Alg.~\ref{alg:quekno}), which is not polynomial in the number of qubits. This is not a serious problem for the following reasons: first, benchmark construction is done offline; second, we are only interested in near-term quantum devices with up to 1000 qubits, for which subgraph isomorphism can be checked efficiently by algorithms such as VF2; third, we only need to refute a subgraph isomorphism, which is usually much easier than confirming it; fourth, we could set a time-out and repeat the procedure several times; lastly, dropping the requirement that each glink be strong does not change the `known near-optimal ratio'; it only decreases the likelihood that this ratio is close to the exact optimal ratio.
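As an illustration of why refuting a subgraph isomorphism is often easy, here is a toy brute-force check over injective node maps (pure Python, our own sketch; practical tools use optimised algorithms such as VF2 instead):

```python
from itertools import permutations

def embeds(pattern_edges, host_edges, n_pattern, n_host):
    """Return True iff the pattern graph is isomorphic to a subgraph
    of the host graph (brute force over injective node maps)."""
    host = {frozenset(e) for e in host_edges}
    return any(
        all(frozenset((m[u], m[v])) in host for u, v in pattern_edges)
        for m in permutations(range(n_host), n_pattern)
    )

# a triangle cannot be embedded in a path, so the refutation is immediate
triangle = [(0, 1), (1, 2), (2, 0)]
path = [(0, 1), (1, 2), (2, 3)]
print(embeds(triangle, path, 3, 4))  # -> False
```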
\subsubsection*{Hardware Noise and Fidelity} {In addition to} limited connectivity between qubits, current and near-term quantum devices suffer from limitations such as short qubit coherence times, gate errors, and crosstalk noise. Even worse, these limitations vary spatially across qubits and temporally between daily calibrations. This makes hardware noise-aware qubit mapping challenging. Nevertheless, there are several works addressing this issue \cite{Deng0L20_codar,Murali+19,TannuQ19,Xie+21_commutativity}. To facilitate comparison with \texttt{QUEKO}\ \cite{TanC21-queko} and provide benchmarks for evaluating mainstream QCT algorithms, the design of \texttt{QUEKNO}\ focuses on 2-qubit gate counts and circuit depth. These two objectives are important because: (i) 2-qubit gate error rates are often 10x those of 1-qubit gates, so the number of CNOTs plays a dominant role in circuit fidelity estimation; (ii) the highly limited coherence time of quantum devices is respected by reducing the depth of the output circuit. In fact, the effect of noise is often reduced by CNOT gate count reduction (see, e.g., \cite{PatelYIJT22}). In addition, the effect of crosstalk can also be mitigated through commutativity-based instruction reordering \textbf{after} qubit mapping \cite{Xie+21_commutativity}.
The fidelity of a circuit can be calculated as the product of the fidelities of gates in the circuit. Our benchmarks (especially those for gate size optimality) can be directly used for evaluating the fidelity optimality of QCT algorithms. We need only calculate the fidelity of the output circuit of the known near-optimal transformation for each benchmark. With near-optimal cx ratios, the fidelity ratios between the output and input circuits are also near-optimal.
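The product rule above is straightforward to apply; a minimal sketch (the gate fidelities below are hypothetical illustrative values, not device data):

```python
from math import prod

def circuit_fidelity(gate_fidelities):
    """Estimate circuit fidelity as the product of the fidelities
    of all gates in the circuit."""
    return prod(gate_fidelities)

# hypothetical device: 1-qubit gates at 99.9%, CNOTs at 99%
fid_in = circuit_fidelity([0.999] * 10 + [0.99] * 20)   # input circuit
fid_out = circuit_fidelity([0.999] * 10 + [0.99] * 26)  # 2 extra SWAPs = 6 CNOTs
```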
In real quantum devices, the reliability of qubits and the connectivity strength between qubits vary spatially and temporally. Our construction method can also be adapted to generate benchmark circuits that respect these hardware characteristics. With the current calibration data, in the construction algorithm, we select subgraphs which contain highly reliable nodes with long qubit coherence time and edges with strong connectivity; and, when generating strong glinks, we choose permutations that are implemented with highly reliable edges. This is in a sense the reverse procedure of the variability-aware qubit mapping approach in \cite{Murali+19,TannuQ19}. However, benchmarks for the current calibration are not necessarily useful for the next calibration. For variation-aware QCT algorithms \cite{Deng0L20_codar,Murali+19,TannuQ19}, which have taken hardware variations into consideration, we believe the trends across QCT algorithms do not change over time in general. }
\section{Conclusion} \label{sec:conclusion} We proposed \texttt{QUEKNO}\ benchmark circuits for quantum circuit transformation (QCT), which have known near-optimal transformation costs. Our construction process reflects the general circuit transformation process and can generate more general and representative QCT benchmarks when compared with \texttt{QUEKO}\ benchmarks \cite{TanC21-queko}, which all have zero optimal costs. \texttt{QUEKNO}\ benchmarks can be used to evaluate both gate size and depth optimality of QCT algorithms. Our detailed experiments show that \texttt{QUEKNO}\ benchmarks can provide a fair evaluation of QCT algorithms, including those that use subgraph isomorphism to select initial mappings (e.g., $\text{t}\!\ket{\mathrm{ket}}$). In particular, the evaluation results show that \textsf{SABRE}\ has the best performance on large devices like Google's Sycamore and IBM Q Rochester and is arguably the most promising QCT algorithm for even larger quantum devices.
On the other hand, our evaluation results also show that current state-of-the-art QCT algorithms still exhibit a significant gap from the known near-optimal ratio. For example, on the \texttt{QUEKNO}\ benchmark set `53Q\_depth\_Rochester\_large', the best output/input circuit depth ratio is 3.23 (obtained by \textsf{SABRE}, cf. Fig.~\ref{fig:depth_opt_rochester_cmp}), while the known near-optimal depth ratio is below 1.50 (cf.~Fig.~\ref{fig:rochester_depth_info} (bottom)). This observation suggests that there is still substantial room for improvement in QCT algorithms.
\end{document} |
\begin{document}
\centerline{\bf On the $p$-adic Leopoldt transform of a power series}
\centerline{ Bruno Angl\` es\footnote{Universit\'e de Caen, LMNO CNRS UMR 6139, BP 5186, 14032 Caen Cedex, France. E-mail: angles@math.unicaen.fr} } ${}$\par
Let $p$ be an odd prime number. Let $X$ be the projective limit for the norm maps of the $p$-Sylow subgroups of the ideal class groups of $\mathbb Q(\zeta_{p^{n+1}}),$ $n\geq 0.$ Let $\Delta ={\rm Gal}(\mathbb Q(\zeta_p)/\mathbb Q)$ and let $\theta $ be an even and non-trivial character of $\Delta. $ Then $X$ is a $\mathbb Z_p[[T]]$-module and the characteristic ideal of the isotypic component $X(\omega \theta^{-1})$ is generated by a power series $f(T,\theta)\in \mathbb Z_p[[T]]$ such that (see for example \cite{CS}):
$$\forall n\geq 1,\, n\equiv 0\pmod{p-1},\, f((1+p)^{1-n}-1,\theta)=L(1-n,\theta ),$$
where $L(s,\theta)$ is the usual Dirichlet $L$-series. Therefore, it is natural and interesting to study the properties of the power series $f(T,\theta).$\par
${}$\par
We denote by $\overline{f(T,\theta)}\in \mathbb F_p[[T]]$ the reduction of $f(T,\theta)$ modulo $p.$ Then B. Ferrero and L. Washington have proved (\cite{FW}):
$$\overline{f(T,\theta)}\not =0.$$
Note that, in fact, we have (\cite{ANG}):
$$\overline{f(T,\theta)}\not \in \mathbb F_p[[T^p]].$$ W. Sinnott has proved the following (\cite{SI2}):
$$\overline{f(T,\theta)}\not \in \mathbb F_p(T).$$
But, note that $\forall a\in \mathbb Z_p^*,$ $\mathbb F_p[[T]]=\mathbb F_p[[(1+T)^a -1]].$ Therefore it is natural to introduce the notion of a pseudo-polynomial, which is an element $F(T)$ of $\mathbb F_p[[T]]$ such that there exist an integer $r\geq 1,$ $c_1,\cdots ,c_r\in \mathbb F_p,$ $a_1,\cdots ,a_r\in \mathbb Z_p,$ such that $F(T)=\sum_{i=1}^r c_i (1+T)^{a_i}.$ An element of $\mathbb F_p[[T]]$ will be called a pseudo-rational function if it is the quotient of two pseudo-polynomials. In this paper, we prove that $\overline{f(T,\theta)}$ is not a pseudo-rational function (part 1 of Theorem \ref{Theorem2}). This latter result suggests the following question: is $\overline{f(T,\theta)}$ algebraic over $\mathbb F_p(T)?$ We suspect that this is not the case but we have no evidence for it. Note that, by the result of Ferrero and Washington, we can write:
$$\overline{f(T,\theta)}=T^{\lambda (\theta )}U(T),$$
where $\lambda (\theta) \in \mathbb N $ and $U(T)\in \mathbb F_p[[T]]^*.$ S. Rosenberg has proved that (\cite{ROS}):
$$\lambda (\theta )\leq (4p(p-1))^{\phi (p-1)},$$
where $\phi$ is Euler's totient function. In this paper, we improve Rosenberg's bound (part 2 of Theorem \ref{Theorem2}):
$$\lambda(\theta )<(\frac{p-1}{2})^{\phi (p-1)}.$$
This implies that the lambda invariant of the field $\mathbb Q(\zeta_p)$ is less than $2(\frac{p-1}{2})^{\phi (p-1)+1}$ (see Corollary \ref{Theorem3} for the precise statement for an abelian number field). Note that this bound is certainly far from the truth because, according to a heuristic argument due to Ferrero and Washington (see \cite{LA}) and to Greenberg's conjecture:
$$\lambda (\mathbb Q(\zeta_p))=\sum_{\theta \in \widehat{\Delta},\, \theta \not =1\, {\rm and \, even }}\lambda(\theta )\leq \frac{{\rm Log}(p)}{{\rm Log}({\rm Log}(p)) }.$$
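The improvement over Rosenberg's bound is easy to quantify numerically; the following stand-alone computation (function names are ours) compares the two bounds for small $p$:

```python
from math import gcd

def phi(n):
    """Euler's totient function (naive count)."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

def rosenberg_bound(p):
    # (4p(p-1))^phi(p-1)
    return (4 * p * (p - 1)) ** phi(p - 1)

def new_bound(p):
    # ((p-1)/2)^phi(p-1), an integer since p is odd
    return ((p - 1) // 2) ** phi(p - 1)

# e.g. p = 5: phi(4) = 2, so the bounds are 80^2 = 6400 versus 2^2 = 4
print(rosenberg_bound(5), new_bound(5))  # -> 6400 4
```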
${}$\par
The author is indebted to Warren Sinnott for communicating some of his unpublished works (note that Lemma \ref{Lemma7} is due to Warren Sinnott). The author also thanks Filippo Nuccio for pointing out the work of J. Kraft and L. Washington (\cite{KW}).\par
\section{Notations}\par
${}$\par
Let $p$ be an odd prime number and let $K$ be a finite extension of $\mathbb Q_p.$ Let $O_K$ be the valuation ring of $K$ and let $\pi $ be a prime of $K.$ We set $\mathbb F_q=O_K/\pi O_K;$ it is a finite field with $q$ elements and characteristic $p.$ Let $T$ be an indeterminate over $K,$ we set $\Lambda =O_K[[T]].$ Observe that $\Lambda/\pi \Lambda \simeq \mathbb F_q[[T]].$ Let $F(T)\in \Lambda\setminus \{ 0\},$ then we can write in a unique way (\cite{WAS}, Theorem 7.3):
$$F(T)=\pi^{\mu(F)} P(T) U(T),$$
where $U(T)$ is a unit of $\Lambda ,$ $\mu(F)\in \mathbb N,$ $P(T)\in O_K[T]$ is a monic polynomial such that $P(T)\equiv T^{\lambda (F)}\pmod{\pi }$ for some integer $\lambda (F)\in \mathbb N.$ If $F(T)=0,$ we set $\mu(F)=\lambda(F)=\infty .$ An element $F(T)\in \Lambda$ is called a pseudo-polynomial (see also \cite{ROS}, Definition 2) if there exist an integer $r\geq 1,$ $c_1,\cdots , c_r \in O_K,$ $a_1,\cdots ,a_r\in \mathbb Z_p,$ such that:
$$F(T)=\sum_{i=1}^r c_i (1+T)^{a_i}.$$
We denote the ring of pseudo-polynomials in $\Lambda $ by $A.$ Let $\delta \in \mathbb Z/(p-1)\mathbb Z$ and $F(T)\in \Lambda ,$ we set:
$$\gamma_{\delta } (F(T))= \frac{1}{p-1} \sum_{\eta \in \mu_{p-1}} \eta^{\delta } F((1+T)^{\eta }-1).$$
Then $\gamma_{\delta}:\Lambda \rightarrow \Lambda$ is an $O_K$-linear map and:\par
\noindent - for $\delta ,\delta'\in \mathbb Z/(p-1)\mathbb Z,$ $\gamma_{\delta}\gamma_{\delta'}=0$ if $\delta \not = \delta'$ and $\gamma_{\delta}^2=\gamma_{\delta},$\par
\noindent - $\sum_{\delta \in \mathbb Z/(p-1)\mathbb Z} \gamma_{\delta }={\rm Id}_{\Lambda}.$\par
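\noindent For instance, the second identity can be checked by interchanging the two sums: for $F(T)\in \Lambda ,$
$$\sum_{\delta \in \mathbb Z/(p-1)\mathbb Z} \gamma_{\delta }(F(T))=\frac{1}{p-1}\sum_{\eta \in \mu_{p-1}}\Big( \sum_{\delta \in \mathbb Z/(p-1)\mathbb Z}\eta^{\delta }\Big) F((1+T)^{\eta }-1)=F(T),$$
since $\sum_{\delta \in \mathbb Z/(p-1)\mathbb Z}\eta^{\delta }$ equals $p-1$ if $\eta =1$ and $0$ otherwise.\par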
\noindent For $F(T)\in \Lambda,$ we set:
$$D(F(T))=(1+T) \frac{d}{dT} F(T),$$
$$U(F(T))=F(T)-\frac{1}{p}\sum_{\zeta\in \mu_p} F(\zeta (1+T)-1)\, \in \Lambda .$$
Then $D,U: \Lambda \rightarrow \Lambda $ are $O_K$-linear maps. Observe that:\par
\noindent - $U^2=U,$\par
\noindent - $DU=UD,$\par
\noindent- $\forall \delta \in \mathbb Z/(p-1)\mathbb Z,$ $ \gamma_{\delta }U=U\gamma_{\delta},$\par
\noindent - $\forall \delta \in \mathbb Z/(p-1)\mathbb Z,$ $D\gamma_{\delta }=\gamma_{\delta +1}D.$\par
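\noindent Observe also the effect of $U$ on a single term: if $F(T)=(1+T)^a,$ $a\in \mathbb Z_p,$ then $F(\zeta (1+T)-1)=\zeta^a (1+T)^a$ for $\zeta \in \mu_p,$ so that
$$U((1+T)^a)=\Big( 1-\frac{1}{p}\sum_{\zeta \in \mu_p}\zeta^a\Big) (1+T)^a,$$
which equals $(1+T)^a$ if $a\in \mathbb Z_p^*$ and $0$ if $a\in p\mathbb Z_p.$ Thus $U$ annihilates exactly the terms whose exponent lies in $p\mathbb Z_p,$ which in particular makes the identity $U^2=U$ transparent on pseudo-polynomials.\par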
\noindent If $F(T)\in \Lambda,$ we denote its reduction modulo $\pi$ by $\overline{F(T)}\in \mathbb F_q[[T]].$ If $f:\Lambda \rightarrow \Lambda$ is a $O_K$-linear map, we denote its reduction modulo $\pi$ by $\overline{f}: \mathbb F_q[[T]]\rightarrow \mathbb F_q[[T]].$ For all $n\geq 0,$ we set $\omega_n(T)=(1+T)^{p^n}-1.$\par
Let $B$ be a commutative and unitary ring. We denote the set of invertible elements of $B$ by $B^*.$\par
We fix $\kappa $ a topological generator of $1+p\mathbb Z_p.$ Let $x\in \mathbb Z_p$ and let $n\geq 1,$ we denote the unique integer $k\in \{ 0,\cdots , p^n-1\}$ such that $x\equiv k\pmod{p^n}$ by $[x]_n.$ Let $\omega :\mathbb Z_p^*\rightarrow \mu_{p-1}$ be the Teichm\"uller character, i.e. $\forall a\in \mathbb Z_p^*,$ $\omega (a)\equiv a \pmod{p}.$ Let $x,y\in \mathbb Z_p,$ we write:\par
\noindent - $x\sim y$ if there exists $\eta \in \mu_{p-1}$ such that $y=\eta x,$\par
\noindent - $x\equiv y \pmod {\mathbb Q^*}$ if there exists $z\in \mathbb Q^*$ such that $y=zx.$\par
\noindent The function ${\rm Log}_p$ will denote the usual $p$-adic logarithm. $v_p$ will denote the usual $p$-adic valuation on $\mathbb C_p$ such that $v_p(p)=1. $\par
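As an aside, the Teichm\"uller value $\omega (a)$ can be computed modulo $p^n$ as the stable value of the sequence $a,a^p,a^{p^2},\cdots $ (a standard construction; the short stand-alone sketch below, with names of our choosing, is only an illustration):

```python
def teichmuller(a, p, n):
    """Approximate the Teichmuller lift omega(a) modulo p**n as the
    stable value of the sequence a, a**p, a**(p**2), ... (a coprime to p)."""
    m = p ** n
    x = a % m
    for _ in range(n):
        x = pow(x, p, m)
    return x

w = teichmuller(2, 5, 3)  # omega(2) mod 125
```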
Let $\rho$ be a Dirichlet character of conductor $f_{\rho}.$ Recall that the Bernoulli numbers $B_{n,\rho}$ are defined by the following identity:
$$\sum_{a=1}^{f_{\rho}}\frac{\rho (a) e^{aZ}}{e^{f_{\rho}Z}-1}=\sum_{n\geq 0} \frac{B_{n,\rho }}{n!} Z^{n-1},$$
where $e^Z=\sum_{n\geq 0} Z^n/n!.$ If $\rho =1,$ for $n\geq 2,$ $B_{n,1}$ is the $n$th Bernoulli number.\par
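For the trivial character, the numbers $B_{n,1}$ can be computed from the classical recurrence $\sum_{k=0}^{m}\binom{m+1}{k}B_k=0$ (a stand-alone sketch using the convention $B_1=-1/2;$ the two conventions agree for $n\geq 2$):

```python
from fractions import Fraction
from math import comb

def bernoulli(n):
    """Return [B_0, ..., B_n] via the recurrence
    sum_{k=0}^{m} C(m+1, k) B_k = 0 (convention B_1 = -1/2)."""
    B = [Fraction(1)]
    for m in range(1, n + 1):
        s = sum(comb(m + 1, k) * B[k] for k in range(m))
        B.append(-s / Fraction(m + 1))
    return B

bern = bernoulli(4)  # [1, -1/2, 1/6, 0, -1/30]
```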
Let $x\in \mathbb R.$ We denote the greatest integer less than or equal to $x$ by $[x].$ The function ${\rm Log}$ will denote the usual logarithm.\par
\section{Preliminaries}
${}$\par
Let $\delta \in \mathbb Z/(p-1)\mathbb Z.$ In this section, we will recall the construction of the $p$-adic Leopoldt transform $\Gamma_{\delta }$ (see \cite{LA}, Theorem 6.2) which is a $O_K$-linear map from $\Lambda $ to $\Lambda .$\par
First, observe that the ideals $(\pi^n, \omega_n(T))= \pi^n \Lambda +\omega_n(T)\Lambda ,$ $n\geq 1,$ form a basis of neighbourhoods of zero in $\Lambda :$\par
\newtheorem{Lemma1}{Lemma}[section]
\begin{Lemma1} \label{Lemma1}
${}$\par
\noindent 1) $\forall n\geq 1,$ $(\pi , T)^{2n}\subset (\pi^n, T^n)\subset (\pi ,T)^n.$\par
\noindent 2) $\forall n\geq 1,$ $\omega_n (T)\in (p^{[n/2]}, T^{p^{[n/2]+1}}).$\par
\noindent 3) Let $N\geq 1,$ set $n=[{\rm Log}(N)/{\rm Log}(p)].$ We have:
$$T^N\in (p^{[n/2]}, \omega_{[n/2]+1}(T)).$$
\end{Lemma1}
\noindent{\sl Proof} Note that assertion 1) is obvious. Assertion 2) follows from the fact that:
$$\forall k\in \{ 1,\cdots, p^n\},\, v_p( \frac{p^n!}{k! (p^n-k)!})=n-v_p(k).$$
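This valuation identity is easy to check numerically (a quick stand-alone verification; names are ours):

```python
from math import comb

def v_p(m, p):
    """p-adic valuation of a nonzero integer m."""
    v = 0
    while m % p == 0:
        m //= p
        v += 1
    return v

# check v_p(binom(p^n, k)) = n - v_p(k) for all 1 <= k <= p^n
def identity_holds(p, n):
    return all(v_p(comb(p**n, k), p) == n - v_p(k, p)
               for k in range(1, p**n + 1))

print(identity_holds(3, 2))  # -> True
```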
To prove assertion 3), it is enough to prove the following:\par
\noindent $\forall n\geq 0,$ there exist $\delta_0^{(n)}(T),\cdots , \delta_n^{(n)}(T) \in \mathbb Z [T]$ such that:
$$T^{p^n}=\sum_{i+j=n} \omega_i(T) p^j \delta_j^{(n)}(T).$$
Let's prove this latter fact by induction on $n.$ Note that the result is clear if $n=0.$ Let's assume that it is true for $n$ and let's prove the assertion for $n+1.$ Let $r(T)\in \mathbb Z[T]$ be such that:
$$\frac{\omega_{n+1}(T)}{\omega_n(T)} +pr(T)=T^{p^n(p-1)}.$$
Then:
$$T^{p^{n+1}}=T^{p^n} \frac{\omega_{n+1}(T)}{\omega_n(T)} +pr(T) T^{p^n}.$$
Note that there exists $q(T) \in \mathbb Z[T]$ such that:
$$\frac{\omega_{n+1}(T)}{\omega_n(T)}=\omega_n(T)^{p-1}+pq(T).$$
Thus:
$$T^{p^{n+1}}=\omega_{n+1}(T)\delta_0^{(n)}(T) +\, \sum_{i+j=n,\, j\geq 1}( \omega_n(T)^{p-1}+pq(T)) \omega_i(T) p^j \delta_j^{(n)}(T)\, + \sum_{i+j=n} \omega_i(T) p^{j+1} \delta_j^{(n)}(T) r(T).$$
Thus, there exist $\delta_0^{(n+1)}(T), \cdots , \delta_{n+1}^{(n+1)}(T) \in \mathbb Z[T]$ such that:
$$T^{p^{n+1}}=\sum_{i+j=n+1} \omega_i(T)p^j \delta_j^{(n+1)} (T).\, \diamondsuit $$
The following Lemma will be useful in the sequel (for a similar result see \cite{ROS}, Lemma 5):\par
\newtheorem{Lemma2}[Lemma1]{Lemma}
\begin{Lemma2} \label{Lemma2}
Let $F(T)\in A.$ Write $F(T)=\sum_{i=1}^r \beta_i (1+T)^{\alpha_i},$ $\beta_1,\cdots, \beta_r\in O_K,$ $\alpha_1,\cdots ,\alpha_r \in \mathbb Z_p,$ and $\alpha_i\not = \alpha_j$ for $i\not = j.$ Let $N={\rm Max}\{ v_p(\alpha_i-\alpha_j),i\not =j\}.$ Let $n\geq 1$ be an integer. Then:
$$F(T)\equiv 0\pmod{(\pi^n,\omega_{N+1}(T))} \Leftrightarrow \forall i=1,\cdots r, \, \beta_i\equiv 0\pmod{\pi^n}.$$
\end{Lemma2}
\noindent{\sl Proof} We have:
$$F(T)\equiv \sum_{i=1}^r \beta_i (1+T)^{[\alpha_i]_{N+1}}\pmod{\omega_{N+1}(T)}.$$
Therefore $F(T)\equiv 0\pmod{(\pi^n,\omega_{N+1}(T))}$ if and only if we have:
$$\sum_{i=1}^r\beta_i (1+T)^{[\alpha_i]_{N+1}}\equiv 0\pmod{\pi^n}.$$
But for $i\not =j,$ $[\alpha_i]_{N+1}\not = [\alpha_j]_{N+1}.$ Therefore $\sum_{i=1}^r\beta_i (1+T)^{[\alpha_i]_{N+1}}\equiv 0\pmod{\pi^n}$ if and only if:
$$\forall i=1,\cdots r, \, \beta_i\equiv 0\pmod{\pi^n}.\, \diamondsuit $$
Observe that $U,D, \gamma_{\delta}$ are continuous $O_K$-linear maps by Lemma \ref{Lemma1} and the following Lemma:\par
\newtheorem{Lemma3}[Lemma1]{Lemma}
\begin{Lemma3} \label{Lemma3}
Let $F(T)\in \Lambda$ and let $n\geq 0.$\par
\noindent 1) $F(T)\equiv 0\pmod{\omega_n(T)}\Rightarrow \gamma_{\delta}(F(T))\equiv 0\pmod{\omega_n(T)}.$\par
\noindent 2) $F(T)\equiv 0\pmod{\omega_n(T)}\Rightarrow D(F(T))\equiv 0\pmod{(p^n,\omega_n(T))}.$\par
\noindent 3) If $n\geq 1,$ $F(T)\equiv 0\pmod{\omega_n(T)}\Rightarrow U(F(T))\equiv 0\pmod{\omega_n(T)}.$\par
\end{Lemma3}
\noindent{\sl Proof} The assertions 1) and 2) are obvious. It remains to prove 3). Observe that, by \cite{WAS}, Proposition 7.2, we have:
$$\forall G(T)\in \Lambda,\, G(T)\equiv 0\pmod{\omega_n(T)} \Leftrightarrow \forall \zeta \in \mu_{p^n},\, G(\zeta-1)=0.$$
Now, let $F(T)\in \Lambda ,$ $F(T)\equiv 0\pmod{\omega_n(T)}.$ Since the map: $\mu_{p^n}\rightarrow \mu_{p^n},$ $x\mapsto \zeta x,$ is a bijection for all $\zeta\in \mu_{p^n},$ we get:
$$\forall \zeta\in \mu_{p^n},\, U(F)(\zeta-1)=0.$$
Therefore:
$$U(F(T))\equiv 0\pmod{\omega_n(T)}.\, \diamondsuit $$
Let $s\in \mathbb Z_p.$ For $n\geq 0,$ set:
$$k_n(s,\delta)=[s]_{n+1}+\delta_n p^{n+1}\, \in \mathbb N\setminus\{ 0\},$$
where $\delta_n\in \{ 1, \cdots ,p-1\}$ is such that $[s]_{n+1}+\delta_n \equiv \delta \pmod{p-1}.$ Observe that:\par
\noindent - $\forall n\geq 0,$ $k_n(s,\delta)\equiv \delta \pmod{p-1}$ and $k_n(s,\delta)\equiv s\pmod{p^{n+1}},$\par
\noindent - $\forall n\geq 0,$ $k_{n+1}(s,\delta)>k_n(s,\delta),$\par
\noindent - $s={\rm lim}_nk_n(s,\delta).$\par
\noindent In particular:
$$\forall a\in \mathbb Z_p,\, \forall n\geq 0,\, a^{k_{n+1}(s,\delta)}\equiv a^{k_n(s,\delta)}\pmod{p^{n+1}}.$$
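The integers $k_n(s,\delta )$ are straightforward to compute; a stand-alone sketch (function and variable names are ours):

```python
def k_n(s, delta, n, p):
    """k_n(s, delta) = [s]_{n+1} + d * p^(n+1), where d in {1, ..., p-1}
    is chosen so that [s]_{n+1} + d == delta (mod p-1)."""
    sn = s % p ** (n + 1)          # [s]_{n+1}
    for d in range(1, p):
        if (sn + d) % (p - 1) == delta % (p - 1):
            return sn + d * p ** (n + 1)

# p = 5, s = 3, delta = 0: k_0 = 3 + 1*5 = 8, with 8 = 0 mod 4 and 8 = 3 mod 5
k0 = k_n(3, 0, 0, 5)
k1 = k_n(3, 0, 1, 5)
```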
Now, let $F(T)\in A.$ Write $F(T)=\sum_{i=1}^r \beta_i (1+T)^{\alpha_i},$ $\beta_1,\cdots ,\beta_r\in O_K,$ $\alpha_1,\cdots ,\alpha_r \in \mathbb Z_p.$ We set:
$$\Gamma_{\delta }(F(T))=\sum_{\alpha_i\in \mathbb Z_p^*} \beta_i \omega^{\delta}(\alpha_i) (1+T)^{\frac{{\rm Log}_p(\alpha_i)}{{\rm Log}_p(\kappa)}}.$$
Thus, we have a surjective $O_K$-linear map: $\Gamma_{\delta }: A\rightarrow A.$ Note that:\par
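\noindent For instance, since $\omega (\kappa^t)=1$ for every $t\in \mathbb Z_p,$ we get $\Gamma_{\delta }((1+T)^{\kappa^t})=(1+T)^t,$ which, by $O_K$-linearity, already exhibits the surjectivity of $\Gamma_{\delta }$ on $A.$\par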
\newtheorem{Lemma4}[Lemma1]{Lemma}
\begin{Lemma4} \label{Lemma4}
Let $F(T)\in A.$\par
\noindent 1) Let $s\in \mathbb Z_p.$ Then:
$$\forall n\geq 0, \, \Gamma_{\delta} (F)(\kappa^s-1)\equiv D^{k_n(s,\delta)}(F)(0)\pmod{p^{n+1}}.$$
2) Let $n\geq 1.$ Assume that $F(T)\equiv 0\pmod{\omega_n(T)}.$ Then $\Gamma_{\delta }(F(T))\equiv 0\pmod{\omega_{n-1}(T)}.$\par
\end{Lemma4}
\noindent {\sl Proof} For $a\in \mathbb Z_p^*,$ write $a=\omega (a)\, \langle a\rangle ,$ where $\langle a\rangle \in 1+p\mathbb Z_p.$ Let's write:
$$F(T)=\sum_{i=1}^r\beta_i(1+T)^{\alpha_i},$$
$\beta_1,\cdots ,\beta_r \in O_K,$ $\alpha_1,\cdots ,\alpha_r \in \mathbb Z_p.$ We have:
$$D^{k_n(s,\delta)}(F(T))=\sum_{i=1}^r\beta_i \alpha_i^{k_n(s,\delta)}(1+T)^{\alpha_i}.$$
Thus:
$$D^{k_n(s,\delta)}(F(T))\equiv \sum_{\alpha_i \in \mathbb Z_p^*}\beta_i \omega^{\delta }(\alpha_i)\langle \alpha_i\rangle^s(1+T)^{\alpha_i}\pmod{p^{n+1}}.$$
But recall that:
$$\Gamma_{\delta}(F)(\kappa^s-1)=\sum_{\alpha_i\in \mathbb Z_p^*}\beta_i \omega^{\delta}(\alpha_i)\langle \alpha_i\rangle^s.$$
Assertion 1) follows easily. Now, let's suppose that $F(T)\equiv 0\pmod{\omega_n(T)}$ for some $n\geq 1.$ Then:
$$\forall a\in\{ 0,\cdots ,p^n-1\},\, \sum_{\alpha_i\equiv a\pmod{p^n}}\beta_i =0.$$
This implies that:
$$\forall a\in\{ 0,\cdots, p^{n-1}-1\},\, \sum_{\alpha_i\in \mathbb Z_p^*,\, {\rm Log}_p(\alpha_i)/{\rm Log}_p(\kappa)\equiv a \pmod{p^{n-1}}}\omega^{\delta }(\alpha_i) \beta_i =0.$$
But recall that:
$$\Gamma_{\delta }(F(T))=\sum_{\alpha_i\in \mathbb Z_p^*} \beta_i \omega^{\delta}(\alpha_i) (1+T)^{\frac{{\rm Log}_p(\alpha_i)}{{\rm Log}_p(\kappa)}}.$$
Thus $\Gamma_{\delta }(F(T))\equiv 0\pmod{\omega_{n-1}(T)}.\, \diamondsuit $\par
\newtheorem{Proposition1}[Lemma1]{Proposition}
\begin{Proposition1} \label{Proposition1}
Let $F(T)\in \Lambda.$ There exists a unique power series $\Gamma_{\delta}(F(T))\in \Lambda$ such that:
$$\forall s\in \mathbb Z_p\, \forall n\geq 0,\, \Gamma_{\delta} (F)(\kappa^s-1)\equiv D^{k_n(s,\delta)}(F)(0)\pmod{p^{n+1}}.$$
\end{Proposition1}
\noindent{\sl Proof} Let $(F_N(T))_{N\geq 0}$ be a sequence of elements in $A$ such that:
$$\forall N\geq 0,\, F(T)\equiv F_N(T)\pmod{\omega_N(T)}.$$
Fix $N\geq 1.$ Then:
$$\forall m\geq N, F_m(T)\equiv F_N(T)\pmod{\omega_N(T)}.$$
Therefore, by Lemma \ref{Lemma4}, we have:
$$\forall m\geq N, \Gamma_{\delta}(F_m(T))\equiv \Gamma_{\delta}(F_N(T))\pmod{\omega_{N-1}(T)}.$$
This implies that the sequence $(\Gamma_{\delta}(F_N(T)))_{N\geq 1}$ converges in $\Lambda $ to some power series $G(T)\in \Lambda.$ Observe that, since $\Lambda $ is compact, we have:
$$\forall N\geq 1, G(T)\equiv \Gamma_{\delta}(F_N(T))\pmod{\omega_{N-1}(T)}.$$
In particular:
$$\forall N\geq 1, G(\kappa^s-1)\equiv \Gamma_{\delta }(F_N)(\kappa^s-1)\pmod{p^N}.$$
Thus, applying Lemma \ref{Lemma4}, we get:
$$\forall N\geq 1, G(\kappa^s-1)\equiv D^{k_{N-1}(s,\delta )}(F_N)(0)\pmod{p^N}.$$
But:
$$\forall N\geq 1, D^{k_{N-1}(s,\delta )}(F(T))\equiv D^{k_{N-1}(s,\delta )}(F_N(T))\pmod{(p^N,\omega_N(T))}.$$
Therefore:
$$\forall N\geq 1, G(\kappa^s -1)\equiv D^{k_{N-1}(s,\delta )}(F)(0)\pmod{p^N}.$$
Now, set $\Gamma_{\delta }(F(T))=G(T).$ The Proposition follows easily. $\diamondsuit$\par
\section{Some properties of the $p$-adic Leopoldt transform}
${}$\par
We need the following fundamental result:\par
\newtheorem{Proposition2}{Proposition}[section]
\begin{Proposition2} \label{Proposition2}
Let $\delta \in \mathbb Z/(p-1)\mathbb Z$ and let $F(T)\in \Lambda.$ Let $m,n\in \mathbb N\setminus \{ 0\}.$ Then:
$$\Gamma_{\delta}(F(T))\equiv 0\pmod{(\pi^n,\omega_{m-1}(T))} \Leftrightarrow \gamma_{-\delta}U(F(T)) \equiv 0\pmod{(\pi^n,\omega_{m}(T))}.$$
\end{Proposition2}
\noindent{\sl Proof} A similar result has been obtained by S. Rosenberg (\cite{ROS}, Lemma 8). We begin by proving that $\Gamma_{\delta}$ is a continuous $O_K$-linear map. By Lemma \ref{Lemma1}, this comes from the following fact:\par
\noindent Let $F(T)\in \Lambda.$ Let $n\geq 1$ and assume that $F(T)\equiv 0 \pmod{\omega_n(T)},$ then $\Gamma_{\delta }(F(T))\equiv 0\pmod{\omega_{n-1}(T)}.$\par
\noindent Indeed, let $(F_N(T))_{N\geq 0}$ be a sequence of elements in $A$ such that:
$$\forall N\geq 0, \, F(T)\equiv F_N(T) \pmod{\omega_{N}(T)}.$$
By the proof of Proposition \ref{Proposition1}:
$$\forall N\geq 1,\, \Gamma_{\delta}(F(T))\equiv \Gamma_{\delta}(F_N(T))\pmod{\omega_{N-1}(T)}.$$
Now, by Lemma \ref{Lemma4}:
$$\Gamma_{\delta}(F_n(T))\equiv 0\pmod{\omega_{n-1}(T)}.$$
The assertion follows.\par
\noindent Now, since $\Gamma_{\delta}, \gamma_{-\delta},U$ are continuous $O_K$-linear maps, it suffices to prove the Proposition in the case where $F(T)\in A.$ Write $F(T)=\sum_{i=1}^r \beta_i (1+T)^{\alpha_i},$ $\beta_1,\cdots ,\beta_r \in O_K,$ $\alpha_1,\cdots ,\alpha_r \in \mathbb Z_p.$ Let $I\subset \{ \alpha_1,\cdots ,\alpha_r\}$ be a set of representatives of the classes of $\alpha_1, \cdots ,\alpha_r$ for the relation $\sim .$ For $x\in I,$ $x\not \equiv 0\pmod{p},$ set:
$$\beta_x=\sum_{\alpha_i\sim x}\beta_i \frac{\alpha_i}{x}.$$
We get:
$$(p-1)\gamma_{-\delta}U(F(T))=\sum_{\eta\in \mu_{p-1}}\sum_{x\in I,\, x\in \mathbb Z_p^*}\eta^{-\delta}\beta_x (1+T)^{\eta x}.$$
Now observe that:
$$\Gamma_{\delta}(F(T)) =\Gamma_{\delta} \gamma_{-\delta }U (F(T))=\sum_{x\in I, \, x\in \mathbb Z_p^*}\beta_x \omega^{\delta}(x) (1+T)^{{\rm Log}_p(x)/{\rm Log}_p(\kappa )}.$$
Therefore $\Gamma_{\delta}(F(T))\equiv 0\pmod{(\pi^n,\omega_{m-1}(T))} $ if and only if:
$$\forall a \in \{ 0,\cdots ,p^{m-1}-1\},\, \sum_{x\in I,\, x\in \mathbb Z_p^*,\, {\rm Log}_p(x)/{\rm Log}_p(\kappa )\equiv a\pmod{p^{m-1}}}\beta_x \omega^{\delta} (x) \equiv 0\pmod{\pi^n}.$$
Now, observe that for $a\in \{ 0,\cdots , p^{m}-1\},$ there exists at most one $\eta \in \mu_{p-1}$ such that $[\eta x]_m=a,$ and if such an $\eta $ exists it is equal to $\omega (a) \omega^{-1}(x).$ Therefore $\Gamma_{\delta}(F(T))\equiv 0\pmod{(\pi^n,\omega_{m-1}(T))} $ if and only if:
$$\forall a \in \{ 0,\cdots ,p^m-1\},\, \sum _{x\in I,\, x\in \mathbb Z_p^*,\, \exists \eta_x \in \mu_{p-1},[\eta_x x]_m=a} \beta_x \eta_x^{-\delta}\equiv 0\pmod{\pi ^n}.$$
This latter property is equivalent to $ \gamma_{-\delta}U(F(T)) \equiv 0\pmod{(\pi^n,\omega_{m}(T))}.\, \diamondsuit $\par
Now, we can list the basic properties of $\Gamma_{\delta}:$\par
\newtheorem{Proposition3}[Proposition2]{Proposition}
\begin{Proposition3} \label{Proposition3}
Let $\delta \in \mathbb Z/(p-1)\mathbb Z.$\par
\noindent 1) $\Gamma_{\delta}:\Lambda \rightarrow \Lambda$ is a surjective and continuous $O_K$-linear map.\par
\noindent 2) $\forall F(T)\in \Lambda,$ $\Gamma_{\delta}(F(T))=\Gamma_{\delta }\gamma_{-\delta}U(F(T)).$\par
\noindent 3) $\forall a \in \mathbb Z_p^*,$ $\Gamma_{\delta} (F((1+T)^a-1))= \omega^{\delta }(a) (1+T)^{{\rm Log}_p(a)/{\rm Log}_p (\kappa)}\Gamma_{\delta }(F(T)).$\par
\noindent 4) Let $\kappa'$ be another topological generator of $1+p\mathbb Z_p$ and let $\Gamma_{\delta}'$ be the $p$-adic Leopoldt transform associated to $\kappa'$ and $\delta .$ Then:
$$\forall F(T)\in \Lambda ,\, \Gamma_{\delta}'(F(T)) =\Gamma_{\delta}(F)((1+T)^{{\rm Log}_p(\kappa)/{\rm Log}_p(\kappa')}-1).$$
5) Let $F(T)\in \Lambda.$ Then $\mu(\Gamma_{\delta }(F(T)))=\mu (\gamma_{-\delta}U(F(T)))$ and:
$$\forall N\geq 1,\, \lambda (\Gamma_{\delta}(F(T)))\geq p^{N-1} \Leftrightarrow \lambda (\gamma_{-\delta}U(F(T)))\geq p^N.$$
\end{Proposition3}
\noindent{\sl Proof} Assertions 1), 2), 3) and 4) come from the fact that $\Gamma_{\delta}, \gamma_{-\delta}, U$ are continuous and that these assertions hold for pseudo-polynomials. Assertion 5) is a direct application of Proposition \ref{Proposition2}. $\diamondsuit$\par
Let's recall the following remarkable result due to W. Sinnott:
\newtheorem{Proposition4}[Proposition2]{Proposition}
\begin{Proposition4} \label{Proposition4}
Let $r_1(T),\cdots ,r_s(T)\in \mathbb F_q(T)\cap \mathbb F_q[[T]].$ Let $c_1,\cdots ,c_s\in \mathbb Z_p\setminus \{ 0\}$ and suppose that:
$$\sum_{i=1}^s r_i((1+T)^{c_i}-1)=0.$$
Then:
$$\forall a\in \mathbb Z_p,\, \sum_{c_i \equiv a\pmod{\mathbb Q^*}}r_i((1+T)^{c_i}-1)\, \in \mathbb F_q.$$
\end{Proposition4}
\noindent{\sl Proof} See \cite{SI2}, Proposition 1. $\diamondsuit$\par
Let's give a first application of this latter result:
\newtheorem{Proposition5}[Proposition2]{Proposition}
\begin{Proposition5} \label{Proposition5}
Let $\delta \in \mathbb Z/(p-1)\mathbb Z$ and let $F(T)\in K(T)\cap \Lambda.$\par
\noindent 1) If $\delta$ is odd or if $\delta =0,$ then:
$$\mu (\Gamma_{\delta }(F(T)))=\mu (U(F(T))+(-1)^{\delta }U(F((1+T)^{-1}-1))).$$
2) If $\delta$ is even and $\delta \not =0,$ then:
$$\mu (\Gamma_{\delta }(F(T)))=\mu (U(F(T))+U(F((1+T)^{-1}-1))-2U(F)(0)).$$
\end{Proposition5}
\noindent{\sl Proof} The case $\delta =0$ has already been obtained by Sinnott (\cite{SI1}, Theorem 1). We prove 1); the proof of 2) is quite similar. Now, observe that 1) is a consequence of Proposition \ref{Proposition3} and the following fact:\par
\noindent Let $F(T)\in K(T)\cap \Lambda ,$ then $\mu(\gamma_{-\delta }(F(T)))=\mu (F(T)+(-1)^{\delta }F((1+T)^{-1}-1)).$\par
\noindent Let's prove this fact. Let $r(T)\in \Lambda,$ observe that:
$$\gamma_{-\delta }(r(T))= (-1)^{\delta} \gamma_{-\delta}(r((1+T)^{-1}-1)).$$
We can assume that $F(T)+(-1)^{\delta}F((1+T)^{-1}-1)\not =0.$ Write:
$$F(T)+(-1)^{\delta}F((1+T)^{-1}-1)=\pi^{m}G(T),$$
where $m\in \mathbb N,$ and $G(T)\in \Lambda \setminus \pi \Lambda .$ Note that $G(T)\in K(T).$ We must prove that $\gamma_{-\delta}(G(T))\not \equiv 0\pmod{\pi}.$ Suppose that it is not the case, i.e. $\gamma_{-\delta}(G(T)) \equiv 0\pmod{\pi}.$ Then:
$$G(0)\equiv 0\pmod{\pi}.$$
Furthermore, by Proposition \ref{Proposition4}, there exists $c\in O_K$ such that:
$$G(T)+(-1)^{\delta}G((1+T)^{-1}-1)\equiv c\pmod{\pi}.$$
But, setting $T=0$ and using $G(0)\equiv 0\pmod{\pi},$ we must have $c\equiv 0\pmod{\pi}.$ Observe that:
$$G(T)=(-1)^{\delta}G((1+T)^{-1}-1).$$
Therefore we get $G(T)\equiv 0\pmod{\pi}$ which is a contradiction. $\diamondsuit$\par
\newtheorem{Lemma5}[Proposition2]{Lemma}
\begin{Lemma5} \label{Lemma5}
Let $F(T)\in \mathbb F_q(T)\cap \mathbb F_q[[T]].$ Then $F(T)$ is a pseudo-polynomial if and only if there exists some integer $n\geq 0$ such that $(1+T)^n F(T)\in \mathbb F_q[T].$
\end{Lemma5}
\noindent{\sl Proof} Assume that $F(T)$ is a pseudo-polynomial. We can suppose that $F(T)\not =0.$ Write:
$$F(T)=\sum_{i=1}^r c_i (1+T)^{a_i},$$
where $c_1,\cdots, c_r \in \mathbb F_q^*,$ $a_1,\cdots a_r\in \mathbb Z_p$ and $a_i\not = a_j$ for $i\not =j.$ Since $F(T)\in \mathbb F_q(T)$ there exist $m,n\in \mathbb N\setminus\{0\},$ $m>{\rm Max}\{ v_p(a_i-a_j),\, i\not =j\},$ such that:
$$(T^{q^n}-T)^{q^{m}}F(T)\in \mathbb F_q[T].$$
Thus:
$$\sum_{i=1}^rc_i (1+T)^{a_i+q^{n+m}}-\sum_{i=1}^rc_i (1+T)^{a_i+q^{m}}\, \in \mathbb F_q[T].$$
Observe that:\par
\noindent- $\forall i,j \in \{ 1,\cdots ,r\},$ $a_i+q^{n+m}\not = a_j+q^m,$\par
\noindent - $a_i+q^m=a_j+q^m \Leftrightarrow i=j.$\par
\noindent Thus, by Lemma \ref{Lemma2}, we get:
$$\forall i\in \{ 1,\cdots r\}, \, a_i+q^m \in \mathbb N.$$
Therefore $(1+T)^{q^m}F(T)\in \mathbb F_q[T].$ The Lemma follows. $\diamondsuit$\par
Let's give a second application of Proposition \ref{Proposition4}:
\newtheorem{Proposition6}[Proposition2]{Proposition}
\begin{Proposition6} \label{Proposition6}
Let $\delta \in \mathbb Z/(p-1)\mathbb Z$ and let $F(T)\in \mathbb F_q(T)\cap \mathbb F_q[[T]].$ Suppose that there exist an integer $r\in \{ 0,\cdots, (p-3)/2\},$ $c_1,\cdots c_r \in \mathbb Z_p\setminus\{ 0\},$ $G_1(T),\cdots , G_r(T) \in \mathbb F_q(T)\cap \mathbb F_q[[T]]$ and a pseudo-polynomial $R(T)\in \mathbb F_q[[T]]$ such that:
$$\overline{\gamma_{\delta}}(F(T))=R(T)+\sum_{i=1}^r G_i((1+T)^{c_i}-1).$$
Then, there exists an integer $n\geq 0$ such that:
$$(1+T)^n (F(T)+(-1)^{\delta} F((1+T)^{-1}-1))\, \in \mathbb F_q[T].$$
\end{Proposition6}
\noindent{\sl Proof} Note that if $\eta,\eta'\in \mu_{p-1}:$ $\eta\equiv \eta'\pmod{\mathbb Q^*}\Leftrightarrow \eta=\eta'\, {\rm or}\, \eta =-\eta'.$ Since $r<(p-1)/2,$ by Proposition \ref{Proposition4}, there exists $\eta \in \mu_{p-1}$ such that:
$$\overline{\eta}^{\delta} F((1+T)^{\eta }-1)+\overline{-\eta}^{\delta} F((1+T)^{-\eta }-1)\, {\rm is\, a \, pseudo-polynomial}.$$
Therefore:
$$F(T)+(-1)^{\delta} F((1+T)^{-1}-1)\, {\rm is\, a \, pseudo-polynomial}.$$
It remains to apply Lemma \ref{Lemma5}. $\diamondsuit$\par
Let $F(T)\in \Lambda.$ We say that $F(T)$ is a pseudo-rational function if $F(T)$ is the quotient of two pseudo-polynomials. For example, $\forall a\in \mathbb Z_p,$ $\forall b\in \mathbb Z_p^*,$ $\frac{(1+T)^a -1}{(1+T)^b-1}$ is a pseudo-rational function. We finish this section by giving a generalization of \cite{SI2}, Theorem 1:
\newtheorem{Theorem1}[Proposition2]{Theorem}
\begin{Theorem1} \label{Theorem1}
Let $\delta \in \mathbb Z/(p-1)\mathbb Z$ and let $F(T)\in \mathbb F_q(T)\cap \mathbb F_q[[T]].$ Then $\overline{\Gamma_{\delta}}(F(T))$ is a pseudo-rational function if and only if there exists some integer $n\geq 0$ such that:
$$(1+T)^n(\overline{U}(F(T))+(-1)^{\delta}\overline{U}(F((1+T)^{-1}-1)))\, \in \mathbb F_q[T].$$
\end{Theorem1}
\noindent{\sl Proof} Assume that $\overline{\Gamma_{\delta}}(F(T))$ is a pseudo-rational function. Then, by 3) of Proposition \ref{Proposition3} and Proposition \ref{Proposition2}, there exist $c_1,\cdots, c_r \in \mathbb F_q^*,$ $a_1,\cdots ,a_r \in \mathbb Z_p,$ $a_i\not = a_j$ for $i\not =j,$ such that:
$$\overline{\Gamma_{\delta}}\overline{\gamma_{-\delta}}\overline{U}(\sum_{i=1}^r c_i F((1+T)^{\kappa^{a_i}}-1))\, {\rm is\, a \, pseudo-polynomial}.$$
This implies, again by Proposition \ref{Proposition2}, that:
$$\overline{\gamma_{-\delta}}\overline{U}(\sum_{i=1}^r c_i F((1+T)^{\kappa^{a_i}}-1))\, {\rm is\, a \, pseudo-polynomial}.$$
Set:
$$G(T)=\overline{U}(F(T))+(-1)^{\delta}\overline{U}(F((1+T)^{-1}-1))\, \in \mathbb F_q(T)\cap \mathbb F_q[[T]].$$
Now, by Proposition \ref{Proposition4}, there exist $d_1,\cdots ,d_{\ell}\in \mathbb F_q^*,$ $b_1,\cdots b_{\ell}\in \mathbb Z_p,$ $b_i\not =b_j$ for $i\not =j,$ $\eta_1,\cdots , \eta_{\ell}\in \mu_{p-1},$ with $\forall i,j\in \{1,\cdots ,\ell \},$ $\eta_i\kappa^{b_i}\equiv \eta_j\kappa^{b_j}\pmod{\mathbb Q^*},$ and $\eta_i\kappa^{b_i}\not = \eta_j\kappa^{b_j}$ for $i\not = j,$ such that:
$$\sum_{i=1}^{\ell}d_iG((1+T)^{{\eta_i}\kappa^{b_i}}-1) \, {\rm is\, a \, pseudo-polynomial}.$$
For $i=1,\cdots , \ell,$ write:
$$\eta_i\kappa^{b_i}= \eta_1\kappa^{b_1}x_i,$$
where $x_i\in \mathbb Q^*\cap \mathbb Z_p^*,$ and $x_i\not = x_j$ for $i\not = j.$
Since $G(T)=(-1)^{\delta} G((1+T)^{-1}-1),$ we can assume that $x_1,\cdots, x_{\ell}$ are positive. Now, we get:
$$\sum_{i=1}^{\ell}d_iG((1+T)^{x_i}-1) \, {\rm is\, a \, pseudo-polynomial}.$$
Therefore, there exist $N_1,\cdots ,N_{\ell}\in \mathbb N\setminus\{0\},$ $N_i\not =N_j$ for $i\not =j,$ such that:
$$\sum_{i=1}^{\ell}d_iG((1+T)^{N_i}-1) \, {\rm is\, a \, pseudo-polynomial}.$$
Now, by Lemma \ref{Lemma5}, there exists some integer $N\geq 0$ such that:
$$(1+T)^N(\sum_{i=1}^{\ell}d_iG((1+T)^{N_i}-1)) \, \in \mathbb F_q[T].$$
But, since $G(T)\in \mathbb F_q(T)\cap \mathbb F_q[[T]],$ $d_1,\cdots ,d_{\ell}\in \mathbb F_q^*,$ $N_1,\cdots, N_{\ell }\in \mathbb N\setminus \{ 0\}$ and $N_i\not =N_j$ for $i\not =j,$ this implies that there exists some integer $n\geq 0$ such that $(1+T)^n G(T)\in \mathbb F_q[T].\, \diamondsuit$\par
\section{Application to Kubota-Leopoldt $p$-adic L-functions}
${}$\par
Let $\theta$ be a Dirichlet character of the first kind, $\theta \not =1$ and $\theta$ even. We denote by $f(T,\theta)$ the Iwasawa power series attached to the $p$-adic L-function $L_p(s,\theta)$ (see \cite{WAS}, Theorem 7.10). Write:
$$\theta =\chi \omega^{\delta +1},$$
where $\chi$ is of conductor $d,$ $d\geq 1$ and $d\not \equiv 0\pmod{p},$ and $\delta \in \mathbb Z/(p-1)\mathbb Z.$ Set $\kappa =1+pd$ and $K=\mathbb Q_p(\chi).$ We set:
$$F_{\chi}(T)=\frac{\sum_{a=1}^d\chi (a)(1+T)^a}{1-(1+T)^d}.$$
Let's give the basic properties of $F_{\chi}(T):$
\newtheorem{Lemma6}{Lemma}[section]
\begin{Lemma6} \label{Lemma6}
${}$\par
\noindent 1) If $d\geq 2,$ $F_{\chi}(T)\in \Lambda.$\par
\noindent 2) If $d=1,$ $\forall \alpha \in \mathbb Z/(p-1)\mathbb Z,$ $\alpha \not =1,$ $\gamma_{\alpha}(F_{\chi }(T))\in \Lambda.$\par
\noindent 3) $U(F_{\chi}(T))=F_{\chi}(T)-\chi(p) F_{\chi}((1+T)^p-1).$\par
\noindent 4) If $d\geq 2,$ $F_{\chi}((1+T)^{-1}-1)=\varepsilon F_{\chi}(T),$ where $\varepsilon =1$ if $\chi$ is odd and $\varepsilon=-1$ if $\chi$ is even.\par
\noindent 5) If $d=1,$ $F_{\chi}((1+T)^{-1}-1)=-1-F_{\chi}(T).$\par
\end{Lemma6}
\noindent {\sl Proof} 1), 4) and 5) are obvious.\par
\noindent 2) For $d=1,$ we have:
$$F_{\chi}(T)=-1+\frac{\sum_{a=0}^{p-1}(1+T)^a}{1-(1+T)^p}.$$
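For the reader's convenience, let us verify this expression (for $d=1$ the character $\chi$ is trivial, so by definition $F_{\chi}(T)=\frac{1+T}{1-(1+T)}=-1-\frac{1}{T}$). Since
$$\sum_{a=0}^{p-1}(1+T)^a=\frac{(1+T)^p-1}{T},$$
we get:
$$-1+\frac{\sum_{a=0}^{p-1}(1+T)^a}{1-(1+T)^p}=-1-\frac{1}{T}=F_{\chi}(T).$$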
Set:
$$G(T)=(1-(1+T)^p)\gamma_{\alpha }(F_{\chi}(T)).$$
Note that:
$$\forall \eta \in \mu_{p-1},\, \frac{1-(1+T)^p}{1-(1+T)^{\eta p}}\equiv \eta^{-1} \pmod{\omega_1(T)}.$$
Therefore:
$$(p-1) G(T) \equiv \sum_{\eta\in \mu_{p-1}}\eta^{\alpha -1}\sum_{a=0}^{p-1}(1+T)^{\eta a}\pmod{\omega_1(T)}.$$
Thus:
$$(p-1) G(T) \equiv \sum_{\eta\in \mu_{p-1}}\eta^{\alpha -1}\sum_{b=0}^{p-1}(1+T)^{b}\pmod{\omega_1(T)}.$$
Since $\alpha \not =1,$ we get:
$$G(T)\equiv 0\pmod{\omega_1(T)}.$$
Therefore $\gamma_{\alpha}(F_{\chi}(T))\in \Lambda.$\par
\noindent 3) For $d=1,$ we have:
$$U(F_{\chi}(T))=\frac{\sum_{a=1}^{p-1}(1+T)^a}{1-(1+T)^p}= F_{\chi} (T) -F_{\chi }((1+T)^p-1).$$
Now, let $d\geq 2.$ Set $q_0=\kappa=1+pd.$ Note that:
$$F_{\chi}(T)=\frac{\sum_{a=1}^{q_0}\chi(a) (1+T)^a}{1-(1+T)^{q_0}}.$$
Therefore:
$$U(F_{\chi}(T))=\frac{\sum_{a=1,\, a\not \equiv 0\pmod{p}}^{q_0}\chi(a) (1+T)^a}{1-(1+T)^{q_0}}.$$
But:
$$F_{\chi}(T)-\chi (p)F_{\chi}((1+T)^p-1)= \frac{\sum_{a=1}^{q_0}\chi(a) (1+T)^a}{1-(1+T)^{q_0}}-\chi (p)\frac{\sum_{a=1}^{d}\chi(a) (1+T)^{pa}}{1-(1+T)^{q_0}}.$$
The Lemma follows easily. $\diamondsuit$\par
\newtheorem{Lemma7}[Lemma6]{Lemma}
\begin{Lemma7} \label{Lemma7}
Assume that $d\geq 2.$ The denominator of $F_{\chi}(T)$ is $\phi_{d}(1+T)$ where $\phi_d(X)$ is the $d$th cyclotomic polynomial and the same is true for $\overline{F_{\chi}(T)}.$\par
\end{Lemma7}
\noindent{\sl Proof} Let $\zeta \in \mu_d.$ If $\zeta$ is not a primitive $d$th root of unity, then, by \cite{WAS}, Lemma 4.7, we have:
$$\sum_{a=1}^d \chi (a) \zeta^a=0.$$
If $\zeta $ is a primitive $d$th root of unity, then by \cite{WAS}, Lemma 4.8, we have:
$$\sum_{a=1}^d \chi (a) \zeta ^a \not \equiv 0 \pmod{\widetilde{\pi }},$$
where $\widetilde {\pi }$ is any prime of $K(\mu_d ).$ $\diamondsuit$\par
\newtheorem{Lemma8}[Lemma6]{Lemma}
\begin{Lemma8} \label{Lemma8}
The derivative of $\gamma_{-\delta}(F_{\chi}(T))$ is not a pseudo-polynomial modulo $\pi.$\par
\end{Lemma8}
\noindent{\sl Proof} We first treat the case $d\geq 2.$ By 3) and 4) of Lemma \ref{Lemma6}, Lemma \ref{Lemma7} and Proposition \ref{Proposition6}, $\overline{\gamma_{-\delta}}\overline{U} (\overline{F_{\chi}(T)})$ is not a pseudo-polynomial. But observe that $\overline{U}=\overline{D^{p-1}}. $ Thus $\overline{D}\overline{\gamma_{-\delta}} (\overline{F_{\chi}(T)})$ is not a pseudo-polynomial.\par
\noindent For the case $d=1.$ Set $\widetilde {F_{\chi}(T)}=F_{\chi}(T)-2F_{\chi}((1+T)^2-1)=1-\frac{1}{2+T}.$ Observe that:\par
\noindent - $ \widetilde {F_{\chi}((1+T)^{-1}-1)}=1-\widetilde {F_{\chi}(T)},$\par
\noindent - $U(\widetilde {F_{\chi}(T)})=\widetilde {F_{\chi}(T)}-\widetilde {F_{\chi}((1+T)^p-1)}.$\par
\noindent Therefore, as in the case $d\geq2,$ $ \overline {\gamma_{-\delta}}\overline{U} (\overline{\widetilde{F_{\chi}(T)}})$ is not a pseudo-polynomial. Thus $ \overline{\gamma_{-\delta}}\overline{U} (\overline{F_{\chi}(T)})$ is not a pseudo-polynomial. And one can conclude as in the case $d\geq 2.$ $\diamondsuit$ \par
\newtheorem{Lemma9}[Lemma6]{Lemma}
\begin{Lemma9} \label{Lemma9}
$$\Gamma_{\delta}\gamma_{-\delta}(F_{\chi}(T))=f(\frac{1}{1+T}-1,\theta).$$
\end{Lemma9}
\noindent{\sl Proof} We treat the case $d=1,$ the case $d\geq 2$ is quite similar. Set $T=e^Z-1.$ We get:
$$\gamma_{-\delta}(F_{\chi}(T))=\sum_{n\geq 0,\, n\equiv 1+\delta \pmod{p-1}}\frac{B_n}{n!}Z^{n-1}.$$
Thus, by \cite{WAS}, Theorem 5.11, we get:
$$\forall k\in \mathbb N, k\equiv \delta \pmod{p-1},\, D^k\gamma_{-\delta}U(F_{\chi}) (0)=L_p(-k,\theta).$$
But, by Proposition \ref{Proposition1}, we have for $s\in \mathbb Z_p:$
$$\Gamma_{\delta}\gamma_{-\delta}U(F_{\chi})(\kappa^s-1)={\rm lim}_n D^{k_n(s,\delta)}\gamma_{-\delta}U(F_{\chi})(0)=L_p(-s,\theta)=f(\kappa^{-s}-1,\theta).$$
The Lemma follows. $\diamondsuit$\par
We can now state and prove our main result:
\newtheorem{Theorem2}[Lemma6]{Theorem}
\begin{Theorem2} \label{Theorem2}
${}$\par
\noindent 1) $\overline{f(T,\theta )}$ is not a pseudo-rational function.\par
\noindent 2) $\lambda (f(T,\theta ))< (\frac{p-1}{2}\phi(d))^{\phi (p-1)},$ where $\phi$ is Euler's totient function.\par
\end{Theorem2}
\noindent{\sl Proof}${}$\par
\noindent 1) Assume the contrary, i.e. $\overline{f(T,\theta )}$ is a pseudo-rational function. Then $\overline{f(\frac{1}{1+T}-1,\theta )}$ is also a pseudo-rational function. Thus $\overline{\Gamma_{\delta}}\overline {\gamma_{-\delta}}\overline{U} (\overline {F_{\chi}(T)})$ is a pseudo-rational function.\par
\noindent We first treat the case $d\geq 2.$ By Theorem \ref{Theorem1}, there exists an integer $n\geq 0$ such that $(1+T)^n(\overline{U}(\overline{F_{\chi}(T)})+(-1)^{\delta }\overline{U}(\overline{F_{\chi}((1+T)^{-1}-1)}) \in \mathbb F_q[T].$ This is a contradiction by 3) and 4) of Lemma \ref{Lemma6} and Lemma \ref{Lemma7}.\par
\noindent For the case $d=1.$ We work with $\widetilde {F_{\chi}(T)}=F_{\chi}(T)-2F_{\chi}((1+T)^2-1)=1-\frac{1}{2+T}.$ Then, by Proposition \ref{Proposition3}, $\overline{\Gamma_{\delta}}\overline {\gamma_{-\delta}}\overline{U} (\overline {\widetilde{F_{\chi}(T)}})$ is a pseudo-rational function. We get a contradiction as in the case $d\geq 2.$\par
\noindent 2) Our proof is inspired by a method introduced by S. Rosenberg (\cite{ROS}). We first treat the case $d=1.$ Note that we can assume that $\lambda(f(T,\theta ))\geq 1.$ Now, by Lemma \ref{Lemma8}:
$$\mu(\gamma_{-\delta}(F_{\chi}(T)))=0.$$
Furthermore, we have:
$$\gamma_{-\delta}(F_{\chi})(0)\equiv 0\pmod{\pi }.$$
Therefore, by 3) of Lemma \ref{Lemma6}, we get:
$$\lambda(\gamma_{-\delta}U(F_{\chi}(T)))=\lambda (\gamma_{-\delta}(F_{\chi}(T))).$$
Therefore we have to evaluate $\lambda (\gamma_{-\delta}(F_{\chi}(T))).$ Set $F(T)=\frac{-1}{T}.$ Since $\delta$ is odd, we have:
$$\gamma_{-\delta}(F_{\chi}(T))=\gamma_{-\delta}(F(T)).$$
Observe that $F((1+T)^{-1}-1)=1-F(T).$ Let $S\subset \mu_{p-1}$ be a set of representatives of $\mu_{p-1}/\{ 1,-1\}.$ We have:
$$(p-1)\gamma_{-\delta}(F(T))=2\sum_{\eta \in S} \eta^{-\delta}F((1+T)^{\eta}-1)\, -\sum_{\eta \in S}\eta^{-\delta}.$$
Set:
$$G(T)=(\prod_{\eta\in S}((1+T)^{\eta}-1))\gamma_{-\delta}(F(T)).$$
Then:\par
\noindent- $\mu (G(T))=0,$\par
\noindent- $\lambda (G(T))=\frac{p-1}{2}+\lambda (\gamma_{-\delta }(F(T))).$\par
\noindent For $S'\subset S,$ write $t(S')=\sum_{x\in S'}x.$ We can write:
$$G(T)=\sum_{S'\subset S} a_{S'} (1+T)^{t(S')},$$
where $a_{S'}\in O_K.$ Set:
$$N={\rm Max}\{v_p(t(S')-t(S'')),\, S',S''\subset S,\, t(S')\not = t(S'')\}.$$
It is clear that:
$$p^N<(\frac{p-1}{2})^{\phi (p-1)}.$$
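\noindent (Let us sketch why, for the reader's convenience: a nonzero difference $t(S')-t(S'')$ has the form $\sum_{x\in S}\epsilon_x x$ with $\epsilon_x\in \{ -1,0,1\}$ and $\mid S\mid =\frac{p-1}{2}.$ Under any embedding of $\mathbb Q(\mu_{p-1})$ into $\mathbb C,$ each conjugate of this element is again such a sum, hence has absolute value at most $\frac{p-1}{2}.$ Its norm to $\mathbb Q$ is therefore a nonzero rational integer of absolute value at most $(\frac{p-1}{2})^{\phi (p-1)},$ and this norm is divisible by $p^{v_p(t(S')-t(S''))}.$)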
But, by Lemma \ref{Lemma2}, we have:
$$\lambda(G(T))<p^{N+1}.$$
Thus, by Proposition \ref{Proposition3}, we get:
$$\lambda(f(T,\theta))=\lambda(f(\frac{1}{1+T}-1,\theta))<p^N<(\frac{p-1}{2})^{\phi (p-1)}.$$
Now, we treat the general case, i.e. $d\geq 2.$ Again we can assume that $\lambda(f(T,\theta ))\geq 1.$ Thus as in the case $d=1,$ we get:
$$\lambda(\gamma_{-\delta}U(F_{\chi}(T)))=\lambda (\gamma_{-\delta}(F_{\chi}(T))).$$
Now, by Lemma \ref{Lemma7}, we can write:
$$F_{\chi}(T)=\frac{\sum_{a=0}^{\phi(d)-1}r_a (1+T)^a}{\phi_d(1+T)},$$
where $r_a\in O_K$ for $a\in \{ 0,\cdots, \phi(d)-1\}.$
Let again $S\subset \mu_{p-1}$ be a set of representatives of $\mu_{p-1}/\{ 1,-1\}.$ By Lemma \ref{Lemma6}, we have:
$$(p-1)\gamma_{-\delta}(F_{\chi}(T))=2\sum_{\eta \in S}\eta^{-\delta}F_{\chi}((1+T)^{\eta }-1).$$
Set:
$$G(T)=(\prod_{\eta \in S}\phi_d((1+T)^{\eta}))\gamma_{-\delta}(F_{\chi}(T)).$$
We have:
$$G(T)=\sum_{a=0}^{\phi(d)-1}\sum_{\eta \in S}\sum_{S'\subset S\setminus \{ \eta \}}\, \, \sum_{\underline{d}=(d_{\eta'})_{\eta'\in S'},\, d_{\eta'}\in \{ 0,\cdots, \phi (d)\}}b_{S',\underline{d}}(1+T)^{a\eta +\sum_{\eta'\in S'}d_{\eta'}\eta'},$$
where $b_{S',\underline{d}}\in O_K.$ Note that again $\mu (G(T))=0$ and that $\lambda (G(T))=\lambda (\gamma_{-\delta}(F_{\chi}(T))).$ Now, for $a,b \in \{ 0,\cdots , \phi (d)-1\},$ $\eta_1,\eta_2\in S,$ $S_1\subset S\setminus \{ \eta_1\},$ $S_2\subset S \setminus\{ \eta_2\},$ set:
$$V=a\eta_1+\sum_{\eta \in S_1} d_{\eta}\eta \, -b\eta_2-\sum_{\eta \in S_2} d_{\eta }' \eta,$$
where $\forall \eta \in S_1,$ $d_{\eta}\in \{ 0, \cdots , \phi (d)\},$ and $\forall \eta \in S_2,$ $d_{\eta}'\in \{ 0, \cdots , \phi (d)\}.$\par
\noindent If $\eta_1=\eta_2$ then we can write:
$$V=(a-b)\eta_1+\sum_{\eta \in S'} u_{\eta }\eta,$$
where $\mid u_{\eta}\mid \in \{ 0,\cdots ,\phi(d)\}$ and $\mid S'\mid \leq \frac {p-3}{2}.$\par
\noindent If $\eta_1\not =\eta_2,$ we can write:
$$V=a'\eta_1+b'\eta_2+\sum_{\eta\in S'}u_{\eta} \eta,$$
where $\mid a'\mid, \mid b'\mid, \mid u_{\eta }\mid \in\{ 0,\cdots ,\phi (d)\},$ and $\mid S'\mid\leq \frac{p-5}{2}.$
Therefore, if $V\not = 0,$ we get:
$$p^{v_p(V)}<(\frac{p-1}{2}\phi(d))^{\phi(p-1)}.$$
Now, we can conclude as in the case $d=1.$ $\diamondsuit$\par
Let $E$ be a number field and let $E_{\infty}/E$ be the cyclotomic $\mathbb Z_p$-extension of $E.$ For $n\geq 0,$ let $A_n$ be the $p$-Sylow subgroup of the ideal class group of the $n$th layer in $E_{\infty}/E.$ Then, by \cite{WAS}, Theorem 13.13, there exist $\mu_p(E)\in \mathbb N,$ $\lambda_p(E)\in \mathbb N$ and $\nu_p(E) \in \mathbb Z,$ such that for all sufficiently large $n:$
$$\mid A_n\mid =p^{\mu_p(E)p^n+\lambda_p(E)n+\nu_p(E)}.$$
Recall that it is conjectured that $\mu_p(E)=0$ and if $E$ is an abelian number field it has been proved by B. Ferrero and L. Washington (\cite{FW}).
\newtheorem{Theorem3}[Lemma6]{Corollary}
\begin{Theorem3} \label{Theorem3}
Let $F$ be an abelian number field of conductor $N.$ Write $N=p^m d,$ where $m\in \mathbb N$ and $d\geq 1,$ $d\not \equiv 0\pmod{p}.$ Then:
$$\lambda_p(F)<2(\frac{p-1}{2}\phi(d))^{\phi(p-1)+1}.$$
\end{Theorem3}
\noindent{\sl Proof} Set, for all $n\geq 0,$ $q_n=p^{n+1}d.$ Then $F\subset \mathbb Q(\mu_{q_m}).$ It is not difficult to see that (see the arguments in the proof of Theorem 7.15 in \cite{WAS}):
$$\lambda_p(F)\leq \lambda_p(\mathbb Q(\mu_{q_m})).$$
But, note that:
$$\lambda_p(\mathbb Q(\mu_{q_m}))=\lambda_p(\mathbb Q(\mu_{q_0})).$$
Now, by \cite{WAS}, Proposition 13.32 and Theorem 7.13:
$$\lambda_p(\mathbb Q(\mu_{q_0}))\leq 2\sum_{\theta \, {\rm even},\, \theta \not =1,\, f_{\theta}\mid q_0}\lambda (f(T,\theta)).$$
It remains to apply Theorem \ref{Theorem2}. $\diamondsuit$\par
Note that the bound of this latter Corollary is certainly far from the truth even in the case $p=3$ (see \cite{KW}).\par
\end{document} |
\begin{document}
\title{Existence of minimal models for varieties of log general type II} \date{\today} \author{Christopher D. Hacon} \address{Department of Mathematics \\ University of Utah\\ 155 South 1400 East\\ JWB 233\\ Salt Lake City, UT 84112, USA} \email{hacon@math.utah.edu} \author{James M\textsuperscript{c}Kernan} \address{Department of Mathematics\\ University of California at Santa Barbara\\ Santa Barbara, CA 93106, USA} \email{mckernan@math.ucsb.edu} \address{Department of Mathematics\\ MIT\\ 77 Massachusetts Avenue\\ Cambridge, MA 02139, USA} \email{mckernan@math.mit.edu}
\thanks{The first author was partially supported by NSF research grant no: 0456363 and an
AMS Centennial fellowship and the second author was partially supported by NSA grant no:
H98230-06-1-0059 and NSF grant no: 0701101. We would like to thank F. Ambro, C. Birkar,
P. Cascini, J. A. Chen, A. Corti, O. Fujino, S. Keel, and J. Koll\'ar for valuable
suggestions.}
\begin{abstract} Assuming finite generation in dimension $n-1$, we prove that pl-flips exist in dimension $n$. \end{abstract}
\maketitle
\tableofcontents
\section{Introduction} \label{s_int}
This is the second of two papers whose purpose is to establish: \begin{theorem}\label{t_finite} The canonical ring $$ R(X,K_X)=\bigoplus_{m\in\mathbb{N}}H^0(X,\ring X.(mK_X)), $$ is finitely generated for every smooth projective variety $X$. \end{theorem}
Note that Siu has announced a proof of finite generation for varieties of general type, using analytic methods, see \cite{Siu06}.
Our proof relies on the ideas and techniques of the minimal model program and roughly speaking in this paper we will show that finite generation in dimension $n-1$ implies the existence of flips in dimension $n$. More precisely, assuming the following: \setcounter{theorema}{5} \begin{theorema}\label{t_ezd} Let $\pi\colon\map X.Z.$ be a projective morphism to a normal affine variety. Let $(X,\Delta=A+B)$ be a $\mathbb{Q}$-factorial kawamata log terminal pair of dimension $n$, where $A\geq 0$ is an ample $\mathbb{Q}$-divisor and $B\geq 0$. If $K_X+\Delta$ is pseudo-effective, then \begin{enumerate} \item The pair $(X,\Delta)$ has a log terminal model $\mu\colon\rmap X.Y.$. In particular if $K_X+\Delta$ is $\mathbb{Q}$-Cartier then the log canonical ring $$ R(X,K_X+\Delta)=\bigoplus_{m\in\mathbb{N}}H^0(X,\ring X.(\rdown m(K_X+\Delta).)), $$ is finitely generated.
\item Let $V\subset \operatorname{WDiv} _{\mathbb{R}}(X)$ be the vector space spanned by the components of $\Delta$. Then there is a constant $\delta>0$ such that if $G$ is a prime divisor contained in the stable base locus of $K_X+\Delta$ and $\Xi\in \mathcal L _{A}(V)$ such that $\|\Xi-\Delta\|<\delta$, then $G$ is contained in the stable base locus of $K_X+\Xi$. \item Let $W\subset V$ be the smallest affine subspace of $\operatorname{WDiv} _{\mathbb{R}}(X)$ containing $\Delta$, which is defined over the rationals. Then there is a constant
$\eta>0$ and a positive integer $r>0$ such that if $\Xi\in W$ is any divisor and $k$ is any positive integer such that $\|\Xi-\Delta\|<\eta$ and $k(K_X+\Xi)/r$ is Cartier, then every component of $\operatorname{Fix} (k(K_X+\Xi))$ is a component of the stable base locus of $K_X+\Delta$. \end{enumerate} \end{theorema} we prove the existence of pl-flips: \setcounter{theorema}{0} \begin{theorema}\label{t_existence} Pl-flips exist in dimension $n$. \end{theorema} that is, we prove: \begin{theorem}\label{t_m} Theorem~\ref{t_ezd}$_{n-1}$ implies Theorem~\ref{t_existence}$_n$. \end{theorem}
With the results of \cite{BCHM06}, \eqref{t_m} completes the proof of \eqref{t_finite}.
The main ideas used in this paper have their origins in the work of Shokurov on the existence of flips \cite{Shokurov03} together with the use of the extension theorem of \cite{HM05b} which in turn was inspired by the work of Kawamata, Siu and Tsuji (cf. \cite{Kawamata99}, \cite{Siu98} and \cite{Tsuji99}). For further history about the details of this problem see \cite[\S 2.1]{Corti05}.
In this paper, however, we do not make use of the concept of \lq\lq asymptotic saturation\rq\rq\ introduced by Shokurov, and in fact we prove a more general result which does not require the relative weak log Fano condition (see also \cite{Ambro06}).
Further treatments of the results of this paper may be found in \cite{Ambro06} and \cite{HM07} (which follows Shokurov's approach more explicitly).
We now turn to a more detailed description of the results and techniques used in this paper. Recall the following: \begin{definition}\label{d_pl} Let $(X,\Delta )$ be a purely log terminal pair and $f\colon\map X.Z.$ be a projective morphism of normal varieties. Then $f$ is a \textbf{pl-flipping contraction} if $\Delta $ is a $\mathbb{Q}$-divisor and \begin{enumerate} \item $f$ is small, of relative Picard number one, \item $-(K_X+\Delta)$ is $f$-ample, \item $X$ is $\mathbb{Q}$-factorial, \item $S=\rdown\Delta.$ is irreducible and $-S$ is $f$-ample. \end{enumerate}
The \textbf{flip} of a pl-flipping contraction $f\colon\map X.Z.$ is a small projective morphism $g\colon\map Y.Z.$ of relative Picard number one, such that $K_Y+\Gamma$ is $g$-ample, where $\Gamma$ is the strict transform of $\Delta$. \end{definition}
The flip $g$ is unique, if it exists at all, and it is given by $$
Y=\operatorname{Proj}_Z \mathfrak{R} \qquad \text{where} \qquad \mathfrak{R}=\bigoplus _{m\in \mathbb{N}\,:\,k|m}f_*\ring X.(m(K_X+\Delta)), $$ and $k$ is any positive integer such that $k(K_X+\Delta)$ is integral. Therefore, in order to prove the existence of pl-flips, it suffices to show that $\mathfrak{R}$ is a finitely generated $\ring Z.$-algebra. Since this problem is local over $Z$, we may assume that $Z=\operatorname{Spec} A$ is affine and it suffices to prove that $$
R(X,k(K_X+\Delta))=\bigoplus _{m\in \mathbb{N}\,:\,k|m}H^0(X,\ring X.(m(K_X+\Delta))), $$ is a finitely generated $A$-algebra. It is then natural to consider the restricted algebra $$ R_S(X,k(K_X+\Delta))=\operatorname{Im}\left(\map R(X,k(K_X+\Delta)).R(S,k(K_S+\Omega)).\right), $$ whose graded pieces correspond to the images of the restriction homomorphisms $$ \map {H^0(X,\ring X.(m(K_X+\Delta)))}.{H^0(S,\ring S.(m(K_S+\Omega)))}., $$ where $m=kl$ is divisible by $k$ and $\Omega$ is defined by the adjunction formula $$
(K_X+\Delta)|_S=K_S+\Omega, $$ and $k(K_X+\Delta)$ is Cartier. Shokurov has shown, cf. \eqref{t_restricted}, that the algebra $R(X,k(K_X+\Delta))$ is finitely generated if and only if the restricted algebra is finitely generated.
Now, if the natural inclusion $$ R_S(X,k(K_X+\Delta))\subset R(S,k(K_S+\Omega)), $$ were an isomorphism, then \eqref{t_m} would follow from (1) of Theorem~\ref{t_ezd}$_{n-1}$. In fact the pair $(S,\Omega)$ is kawamata log terminal,
$\operatorname{dim} S=\operatorname{dim} X-1=n-1$ and since $f|_S$ is birational, $\Omega$ is automatically big so that, by a standard argument, (1) of Theorem~\ref{t_ezd}$_{n-1}$ applies and $R(S,k(K_S+\Omega))$ is finitely generated. \eqref{t_restricted} also implies $\mathfrak{R}$ is finitely generated.
Unfortunately, this is too much to hope for. However, it does suggest that one should concentrate on the problem of lifting sections and the main focus of this paper is to prove the extension result \eqref{t_lift}. In fact \eqref{t_m} is a straightforward consequence of \eqref{t_lift}.
To fix ideas, let us start with an example where we cannot lift sections. Let $X$ be the blow up of $\pr 2.$ at a point $o$, with exceptional divisor $E$. Let $S$ be the strict transform of a line through $o$, let $L_1$, $L_2$ and $L_3$ be the strict transforms of general lines in $\pr 2.$, let $p=E\cap S$ and let $p_i=L_i\cap S$. Then the pair $$ (X,\Delta =S+(2/3)(E+L_1+L_2+L_3)), $$ is purely log terminal but the homomorphism \[ \map {H^0(\ring X.(3l(K_X+\Delta )))}.{H^0(\ring S.(3l(K_S+\Omega )))}.\simeq H^0 (\ring {\pr 1.}.(2l)), \]
is never surjective, where $\Omega=(\Delta-S)|_S=(2/3)(p+p_1+p_2+p_3)$ and $l$ is a positive integer. The problem is that the stable base locus of $K_X+\Delta$ contains $E$ and yet
$|3(K_S+\Omega)|$ is base point free. Notice, however, that $$
|3l(K_X+\Delta )|_S=|3l(K_S+\Theta )|+3l(\Omega-\Theta), $$ where $\Theta=(2/3)(p_1+p_2+p_3)$ is obtained from $\Omega$ by throwing away $p$. In other words, $\Theta$ is obtained from $\Omega$ by removing some part of each component contained in the stable base locus of $K_X+\Delta$.
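A quick degree count, included here for the reader's convenience, makes this explicit: $S\simeq \pr 1.$ and $\operatorname{deg} \Omega =4\cdot \frac 23=\frac 83,$ so that $\operatorname{deg} (K_S+\Omega )=\frac 23$ and $3l(K_S+\Omega )$ has degree $2l,$ which is the identification with $H^0(\ring {\pr 1.}.(2l))$ used above. On the other hand $\operatorname{deg} (K_S+\Theta )=-2+3\cdot \frac 23=0,$ so $|3l(K_S+\Theta )|$ consists of the zero divisor alone and the right hand side of the displayed formula is the single divisor $3l(\Omega -\Theta )=2l\cdot p.$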
Returning to the general setting, one may then hope that the restricted algebra $R_S(X,l(K_X+\Delta))$ is given by an algebra of the form $R(S,l(K_S+\Theta))$ for some kawamata log terminal pair $(S,\Theta)$ where $0\leq\Theta\leq\Omega$ is a $\mathbb{Q}$-divisor obtained from $\Omega$ by subtracting components of $\Omega$ contained in the stable base locus of $K_X+\Delta$. We will now explain how this may be achieved. The tricky thing is to determine exactly how much of the stable base locus to throw away.
It is not hard to reduce to the following situation: $\pi\colon\map X.Z.$ is a projective morphism to a normal affine variety $Z$, where $(X,\Delta=S+A+B)$ is a purely log terminal pair of dimension $n$, $S=\rdown\Delta.$ is irreducible, $X$ and $S$ are smooth, $A\geq 0$
is an ample $\mathbb{Q}$-divisor, $B\geq 0$, $(S,\Omega=(\Delta-S)|_S)$ is canonical and the stable base locus of $K_X+\Delta$ does not contain $S$.
Let $$
\Theta _m=\Omega -\Omega \wedge F_m\qquad \text{where}\qquad F_m=\operatorname{Fix} (|m(K_X+\Delta)|_S)/m, $$
and $m(K_X+\Delta)$ is Cartier. Then $m(\Omega-\Theta_m)$ is the biggest divisor contained in $\operatorname{Fix}(|m(K_X+\Delta)|_S)$ such that $0\leq\Theta_m\leq\Omega$. It follows that \[
\label{e_sup} |m(K_S+\Theta _m)|+m(\Omega-\Theta_m)\supset |m(K_X+\Delta)|_S. \tag{$\supset$} \] A simple consequence of the main lifting result \eqref{t_lift} of this paper implies that this tautological inclusion \eqref{e_sup} is actually an equality, \[
\label{e_eq} |m(K_S+\Theta_m)|+m(\Omega-\Theta_m)=|m(K_X+\Delta)|_S. \tag{$=$} \] A technical, but significant, improvement on the proof of the existence of flips which appears in \cite{HM07} is that the statement of \eqref{e_eq} and of \eqref{t_lift} involves only linear systems and divisors on $X$, even though the proof of \eqref{t_lift} involves passing to a higher model. The key point is that since $(S,\Omega)$ is canonical, it suffices to keep track only of the fixed divisor on $S$ and not of the whole base locus.
To prove \eqref{e_eq} we use the method of multiplier ideal sheaves. In fact the main point is to establish an inclusion of multiplier ideal sheaves, \eqref{t_multiplier}. A proof of \eqref{t_multiplier} appeared originally in \cite{HM05b}. We chose to include a proof of this result for the convenience of the reader and we decided to use notation closer to the well established notation used in \cite{Lazarsfeld04b}. Note however that the multiplier ideal sheaves we use, see \eqref{d_variant}, must take into account the divisor $\Delta$ (for example consider the case worked out above) and the fact that $(S,\Omega)$ is canonical.
In fact \eqref{e_eq} follows from the MMP. Indeed, if one runs the $(K_X+\Delta)$-MMP $f\colon\rmap X.Y.$, then almost by definition this will not change the linear systems
$|m(K_X+\Delta)|$. Since $K_Y+\Gamma=K_X+f_*\Delta$ is nef, one can lift sections on $Y$
from the strict transform $T$ of $S$, by an easy application of Kawamata-Viehweg vanishing. In general, however, the linear systems $|m(K_T+g_*\Theta)|$ are bigger than the linear systems $|m(K_S+\Theta)|$, since the induced birational map $g\colon\rmap S.T.$ might extract some divisors. However any such divisor must have log discrepancy at most one, so this cannot happen, almost by definition, if $K_S+\Theta$ is canonical.
In order to establish that $R_S(X,k(K_X+\Delta))$ is finitely generated, cf. \eqref{t_rational}, and thereby to finish the proof of \eqref{t_m}, it is necessary and sufficient to show that $\Theta=\lim \Theta _{m!}$ is rational (the seemingly strange use of factorials is so that we can use limits rather than limsups). At this point we play off two facts. The first is that since we are assuming that Theorem~\ref{t_ezd} holds on
$S$, if $m>0$ is sufficiently divisible and $\Phi$ is an appropriately chosen $\mathbb Q$-divisor sufficiently close to $\Theta$, then the base locus of $|m(K_S+\Phi)|$ and the stable base locus of $K_S+\Theta$ are essentially the same (basically because $K_S+\Theta$ and $K_S+\Phi$ share a log terminal model $\mu\colon\rmap S.S'.$ and these two sets of divisors are precisely the divisors contracted by $\mu$). The second is that using \eqref{t_squeeze}, \eqref{t_lift} is slightly stronger than \eqref{e_eq}; one is allowed to overshoot $\Theta_m$ by an amount $\epsilon/m$, where $\epsilon>0$ is fixed. (It seems worth pointing out that \eqref{t_squeeze} seems to us a little mysterious. In particular, unlike \eqref{t_lift}, we were unable to show that this result follows from the MMP.)
More precisely, since the base locus of $|m(K_S+\Theta_m)|$ contains no components of $\Theta_m$, by (2) of Theorem~\ref{t_ezd} it follows that the stable base locus of $K_S+\Theta$ contains no components of $\Theta$. If $\Theta$ is not rational, then by Diophantine approximation there is a $\mathbb{Q}$-divisor $0\leq \Phi\leq \Omega$ very close to $\Theta$ and an integer $k>0$ such that $k\Phi$ is integral and $\operatorname{mult}_G\Phi>\operatorname{mult}_G\Theta$, for some prime divisor $G$. By \eqref{t_lift}, it actually follows that \[
|k(K_S+\Phi)|+k(\Omega-\Phi)=|k(K_X+\Delta )|_S. \] The condition $\operatorname{mult}_G\Phi>\operatorname{mult}_G\Theta$ ensures that $G$ is a component of $\operatorname{Fix} (k(K_S+\Phi))$, and hence of the stable base locus of $K_S+\Phi$. But then $G$ is a component of $\Theta$ and of the stable base locus of $K_S+\Theta$. This is the required contradiction.
\section{Notation and conventions} \label{s_notation}
We work over the field of complex numbers $\mathbb{C}$. Let $X$ be a normal variety. A $$ \left. \begin{array}{r} \text{\textit{(integral) divisor}} \\ \text{\textit{$\mathbb{Q}$-divisor}} \\ \text{\textit{$\mathbb{R}$-divisor}} \\ \end{array} \right\} \quad \text{is a} \quad \left \{ \begin{array}{l} \text{$\mathbb{Z}$-linear}\\ \text{$\mathbb{Q}$-linear}\\ \text{$\mathbb{R}$-linear,}\\ \end{array} \right. $$ combination of prime divisors. Given an integral Weil divisor $D$, we let $$ R(X,D)=\bigoplus _{m\in\mathbb{N}} H^0(X,\ring X.(mD)). $$ Set \begin{align*} \operatorname{WDiv}_{\mathbb{Q}}(X)&=\ten \operatorname{WDiv}(X).\mathbb{Z}.\mathbb{Q}. \\ \operatorname{WDiv}_{\mathbb{R}}(X)&=\ten \operatorname{WDiv} (X).\mathbb{Z}.\mathbb{R}., \end{align*} where $\operatorname{WDiv} (X)$ is the group of Weil divisors on $X$. The definitions below for $\mathbb{R}$-divisors reduce to the usual definitions for $\mathbb{Q}$-divisors and integral divisors, see \cite{BCHM06}. Note that the group of $\mathbb{R}$-divisors forms a vector space, with a canonical basis given by the prime divisors. If $C=\sum c_iB_i$ and $D=\sum d_iB_i$, where $B_i$ are distinct prime divisors, then we write $D\geq 0$ if $d_i\geq 0$ and we will denote by \begin{align*}
\|C\| &= \max_i |c_i| & C\wedge D &= \sum_i\min\{c_i,d_i\}B_i \\ \rdown C. &= \sum _i \rdown c_i. B_i & \{C\} &= C-\rdown C. . \end{align*}
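To fix ideas, here is a small worked example of these operations, with two hypothetical prime divisors $B_1$ and $B_2$: if $C=\frac 32 B_1-\frac 12 B_2$ and $D=B_1+B_2$, then, computing coefficientwise, $$ \rdown C.=B_1-B_2, \qquad \{C\}=\frac 12 B_1+\frac 12 B_2, \qquad C\wedge D=B_1-\frac 12 B_2, $$ since $\rdown 3/2.=1$ and $\rdown -1/2.=-1$, while $\|C\|=\frac 32$.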
Two $\mathbb{R}$-divisors $C$ and $D$ are $$ \left. \begin{array}{rl} \text{linearly equivalent,} & C\sim D\\ \text{$\mathbb{Q}$-linearly equivalent,} & C\sim_{\mathbb{Q}}D\\ \text{$\mathbb{R}$-linearly equivalent,} & C\sim_{\mathbb{R}}D\\ \end{array} \right\} \quad \text{if $C-D$ is a} \quad \left \{ \begin{array}{l} \text{$\mathbb{Z}$-linear}\\ \text{$\mathbb{Q}$-linear}\\ \text{$\mathbb{R}$-linear,}\\ \end{array} \right. $$ combination of principal divisors. Note that if $C\sim_{\mathbb{Q}}D$ then $mC\sim mD$ for some positive integer $m$, but this fails in general for $\mathbb{R}$-linear equivalence. Note also that if two $\mathbb{Q}$-divisors are $\mathbb{R}$-linearly equivalent then they are in fact $\mathbb{Q}$-linearly equivalent, but that two integral divisors might be $\mathbb{Q}$-linearly equivalent without being linearly equivalent. Let \begin{align*}
|D|&=\{\,C\in \operatorname{WDiv} (X)\,|\, C\geq 0\, ,\, C \sim D \,\} \\
|D|_{\mathbb{Q}}&=\{\,C\in \operatorname{WDiv} _{\mathbb{Q}}(X)\,|\, C\geq 0\, ,\, C \sim_{\mathbb{Q}} D \,\} \\
|D|_{\mathbb{R}}&=\{\,C\in \operatorname{WDiv}_{\mathbb{R}} (X)\,|\, C\geq 0\, ,\, C \sim_{\mathbb{R}} D \,\}. \end{align*}
If $T$ is a subvariety of $X$, not contained in the base locus of $|D|$, then $|D|_T$
denotes the image of the linear system $|D|$ under restriction to $T$. If $D$ is an integral divisor, $\operatorname{Fix}(D)$ denotes the fixed divisor of $D$ so that
$|D|=|D-\operatorname{Fix}(D)|+\operatorname{Fix}(D)$ where the base locus of $|D-\operatorname{Fix} (D)|$ contains no divisors. More generally $\operatorname{Fix}(V)$ denotes the fixed divisor of the linear system $V$.
The \textit{stable base locus} of $D$, denoted by $\mathbf{B}(D)$, is the intersection of the support of the elements of $|D|_{\mathbb{R}}$ (if $|D|_{\mathbb{R}}$ is empty then by convention the stable base locus is the whole of $X$). The \textit{stable fixed divisor} is the divisorial support of the stable base locus. The \textit{augmented stable base locus} of $D$, denoted by $\mathbf{B}_+(D)$, is given by the stable base locus of $D-\epsilon A$ for some ample divisor $A$ and any rational number $0<\epsilon \ll 1$. The \textit{diminished
stable base locus} is defined by $$ \mathbf{B}_-(D)=\bigcup_{\epsilon>0} \mathbf{B}(D+\epsilon A). $$ In particular we have $$ \mathbf{B}_-(D)\subset \mathbf{B}(D)\subset \mathbf{B}_+(D). $$
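The second inclusion can be strict. For a standard example, let $X$ be the blow-up of $\mathbb{P}^2$ at a point, let $E$ be the exceptional curve and let $H$ be the pullback of a line. Then $H$ is nef and big but not ample and $|H|$ is base point free, so that $$ \mathbf{B}_-(H)=\mathbf{B}(H)=\emptyset \qquad \text{while} \qquad \mathbf{B}_+(H)=E, $$ as by Nakamaye's theorem $\mathbf{B}_+(H)$ is the union of the curves on which $H$ has degree zero. On the other hand $2E$ is the only effective divisor in its class, so that $\operatorname{Fix}(2E)=2E$ and $\mathbf{B}(2E)=E$.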
An \textit{$\mathbb{R}$-Cartier divisor} $D$ is an $\mathbb{R}$-linear combination of Cartier divisors. An $\mathbb{R}$-Cartier divisor $D$ is \textit{nef} if $D\cdot \Sigma\geq 0$ for any curve $\Sigma\subset X$. An $\mathbb{R}$-Cartier divisor $D$ is \textit{ample} if it is $\mathbb{R}$-linearly equivalent to a positive linear combination of ample divisors (in the usual sense). An $\mathbb{R}$-Cartier divisor $D$ is \textit{big} if $D \sim_{\mathbb{R}} A+B$, where $A$ is ample and $B\geq 0$. A
$\mathbb{Q}$-Cartier divisor $D$ is a \textit{general ample $\mathbb{Q}$-divisor} if there is an integer $m>0$ such that $mD$ is very ample and $mD\in |mD|$ is very general.
A \textit{log pair} $(X,\Delta)$ is a normal variety $X$ together with an $\mathbb{R}$-Weil divisor $\Delta\geq 0$ such that $K_X+\Delta$ is $\mathbb{R}$-Cartier. We say that a log pair $(X,\Delta)$ is \textit{log smooth} if $X$ is smooth and the support of $\Delta$ is a divisor with global normal crossings. A projective birational morphism $g\colon\map Y.X.$ is a \textit{log resolution} of the pair $(X,\Delta )$ if $Y$ is smooth and the inverse image of $\Delta$ union the exceptional locus is a divisor with global normal crossings. Note that in the definition of log resolution we place no requirement that $g$ is an isomorphism over the locus where the pair $(X,\Delta)$ is log smooth. If $V$ is a linear system on $X$, a \textit{log resolution of $V$ and
$(X,\Delta)$} is a log resolution of the pair $(X,\Delta)$ such that if $|M|+F$ is the decomposition of $g^*V$ into its mobile and fixed parts, then $|M|$ is base point free and $F$ union the exceptional locus union the strict transform of $\Delta$ is a divisor with simple normal crossings support. If $g$ is a log resolution, then we may write $$ K_Y+\Gamma=g^*(K_X+\Delta)+E, $$ where $\Gamma\geq 0$ and $E\geq 0$ have no common components, $g_*\Gamma =\Delta$ and $E$ is $g$-exceptional. Note that this decomposition is unique. The \textit{log discrepancy} of a divisor $F$ over $X$ is $$ a(X,\Delta,F)=1+\operatorname{mult}_F(E-\Gamma). $$ Note that with this definition, a component $F$ of $\Delta$ with coefficient $b$ has log discrepancy $1-b$. The log discrepancy does not depend on the choice of model $Y$, so that the log discrepancy is also a function defined on valuations. A \textit{log
canonical place} is any valuation of log discrepancy at most zero and the centre of a log canonical place is called a \textit{log canonical centre}. Note that a divisorial log canonical centre is simply a component of $\Delta$ of coefficient one, so the only interesting log canonical centres are of codimension at least two.
The pair $(X,\Delta)$ is \textit{kawamata log terminal} if there are no log canonical centres. We say that the pair $(X,\Delta)$ is \textit{purely log terminal} (respectively \textit{canonical} or \textit{terminal}) if the log discrepancy of any exceptional divisor is greater than zero (respectively at least one or greater than one). We say that the pair is \textit{divisorially log terminal} if there is a log resolution $g\colon\map Y.X.$ such that all exceptional divisors $E\subset Y$ have log discrepancy greater than zero.
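For a simple example illustrating these definitions, let $X$ be a smooth surface, let $C$ be a smooth curve on $X$, let $0\leq c\leq 1$ and let $g\colon\map Y.X.$ be the blow-up of a point $p\in C$ with exceptional divisor $E$. Then $K_Y=g^*K_X+E$ and $g^*C=\widetilde C+E$, where $\widetilde C$ is the strict transform of $C$, so that $$ K_Y+c\widetilde C=g^*(K_X+cC)+(1-c)E \qquad \text{and} \qquad a(X,cC,E)=1+(1-c)=2-c. $$ In particular $E$ has log discrepancy greater than one if $c<1$ and log discrepancy one if $c=1$.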
\section{Preliminary results} \label{s_preliminary}
In this section we recall several results about finitely generated algebras and in particular we will give a proof of Shokurov's result that the pl-flip exists if and only if the restricted algebra is finitely generated.
\begin{definition} Let $X$ be a normal variety, $S$ be a prime divisor and $B$ an integral Weil divisor which is $\mathbb Q$-Cartier and whose support does not contain $S$. The
\textbf{restricted algebra} $R_S(X,B)$ is the image of the homomorphism $\map R(X,B).R(S,B|_S).$. \end{definition}
We remark that as $B$ is $\mathbb Q$-Cartier then $B|_{S}$ is a well defined $\mathbb{Q}$-Cartier divisor on $S$.
\begin{theorem}\label{t_restricted} Let $f\colon\map X.Z.$ be a pl-flipping contraction with respect to $(X,\Delta)$. Pick an integer $k$ such that $k(K_X+\Delta)$ is Cartier.
Then \begin{enumerate} \item The flip of $f$ exists if and only if the flip of $f$ exists locally over $Z$. \item If $Z=\operatorname{Spec} A$ is affine then the flip $f^{+}\colon\map X^+.Z.$ exists if and only if the restricted algebra $R_S(X,k(K_X+\Delta))$ is a finitely generated $A$-algebra. \end{enumerate} \end{theorem}
We start with the following well known result: \begin{lemma}\label{l_truncation} Let $R$ be a graded algebra which is an integral domain, let $d$ be a positive integer and let $R_{(d)}=\bigoplus_{m\in\mathbb{N}}R_{md}$ denote the $d$th truncation of $R$.
Then $R$ is finitely generated if and only if $R_{(d)}$ is finitely generated. \end{lemma} \begin{proof} Suppose that $R$ is finitely generated. It is easy to write down an action of the cyclic group $\mathbb{Z}_d$ on $R$ so that the invariant ring is $R_{(d)}$. Thus $R_{(d)}$ is finitely generated by the Theorem of E. Noether which states that the ring of invariants of a finitely generated ring under the action of a finite group is finitely generated.
Suppose now that $R_{(d)}$ is finitely generated. Let $f\in R_i$. Then $f$ is a root of the monic polynomial $x^d-f^d\in R_{(d)}[x]$. It follows that $R$ is integral over $R_{(d)}$ and the result follows by another Theorem of E. Noether on finiteness of integral closures. \end{proof}
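Both steps of the proof can be seen in the simplest possible example: take $R=\mathbb{C}[x]$, graded by degree, and $d=2$, so that $R_{(2)}=\mathbb{C}[x^2]$. The generator of $\mathbb{Z}_2$ acts by $x\mapsto -x$, and the invariant ring is exactly $\mathbb{C}[x^2]$; conversely, $x$ is a root of the monic polynomial $t^2-x^2\in R_{(2)}[t]$, so that $R$ is integral over $R_{(2)}$.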
\begin{lemma}\label{l_restricted} Let $S$ be a normal prime divisor on $X$ and let $B$ be an integral Weil divisor which is $\mathbb Q$-Cartier and whose support does not contain $S$. \begin{itemize} \item If $R(X,B)$ is finitely generated then $R_S(X,B)$ is finitely generated. \item If $S\sim B$ and $R_S(X,B)$ is finitely generated then $R(X,B)$ is finitely generated. \end{itemize} \end{lemma} \begin{proof} Since there is a surjective homomorphism $\phi\colon\map R(X,B).R_S(X,B).$, it is clear that if $R(X,B)$ is finitely generated then $R_S(X,B)$ is finitely generated.
Suppose now that $R_S(X,B)$ is finitely generated and $S\sim B$. Then there is a rational function $g_1$ such that $(g_1)=S-B$. If we consider the elements of $R(X,B)_m$ as rational functions, then a rational function $g$ belongs to $R(X,B)_m$ if and only if $(g)+mB\geq 0$. But if $g$ is in the kernel of $\phi$, then there is a divisor $S'\geq 0$ such that $(g)+mB=S+S'$. It follows that $(g/g_1)+(m-1)B=S'$ so that $g/g_1=h\in R(X,B)_{m-1}$. But then the kernel of $\phi$ is the principal ideal generated by $g_1$. \end{proof}
\begin{proof}[Proof of \eqref{t_restricted}] It is well known that the flip $f^{+}\colon\map X^+.Z.$ exists if and only if the sheaf of graded $\ring Z.$-algebras $$
\bigoplus_{m\in\mathbb{N}\,:\,k|m}f_*\ring X.(m(K_X+\Delta)), $$ is finitely generated, cf.~\cite[6.4]{KM98}. Since this can be checked locally, this gives (1).
If $Z=\operatorname{Spec} A$ is affine it suffices to check that $R(X,k(K_X+\Delta))$ is a finitely generated $A$-algebra. Since the relative Picard number is one, there are real numbers $a$ and $b$ such that $a(K_X+\Delta)$ and $bS$ are numerically equivalent over $Z$. As both $-(K_X+\Delta)$ and $-S$ are ample $\mathbb{Q}$-divisors we may assume that $a$ and $b$ are both positive integers. Moreover, as $a(K_X+\Delta)-bS$ is numerically trivial over $Z$, it is semiample over $Z$ by the base point free theorem. In particular, we may replace numerical equivalence by linear equivalence, $$ a(K_X+\Delta) \sim_Z bS. $$ But then there is a rational function $g$ and a divisor $D$ on $Z$ such that $$ a(K_X+\Delta)=bS+f^*D+(g). $$ As any line bundle on a quasi-projective variety is locally trivial, possibly passing to an open subset of $Z$, and using (1), we may assume that $D \sim 0$, so that $$ a(K_X+\Delta) \sim bS. $$ By \eqref{l_truncation} it follows that $R(X,k(K_X+\Delta))$ is finitely generated if and only if $R(X,S)$ is finitely generated. Since $Z$ is affine and $f$ is small, $S$ is mobile so that $S\sim S'$ where $S'\geq 0$ is a divisor whose support does not contain $S$. By \eqref{l_restricted}, $R(X,S)$ is finitely generated if and only if $R_S(X,S')$
is finitely generated. Since $a(K_X+\Delta)|_S\sim bS'|_S$ the result follows by \eqref{l_truncation}. \end{proof}
\section{Multiplier ideal sheaves} \label{s_squeeze}
The main result of this section is: \begin{theorem}\label{t_squeeze} Let $\pi\colon\map X.Z.$ be a projective morphism to a normal affine variety $Z$, where $(X,\Delta=S+A+B)$ is a log pair, $S=\rdown\Delta.$ is irreducible, $(X,S)$ is log smooth, and both $A\geq 0$ and $B\geq 0$ are $\mathbb{Q}$-divisors. Let $k$ be any positive integer and $0\leq \Phi\leq
\Omega=(\Delta-S)|_S$ be any divisor such that both $k(K_S+\Phi)$ and $k(K_X+\Delta)$ are Cartier. Let $C=A/k$.
If there is an integer $l>1$ and an integral divisor $P\geq 0$ such that $lA$ is Cartier, $C-\frac{(k-1)}mP$ is ample, $(X,\Delta+\frac{k-1}mP)$ is purely log terminal and $$
l|k(K_S+\Phi)|+m(\Omega-\Phi)+(mC+P)|_S\subset |m(K_X+\Delta+C)+P|_S, $$ where $m=kl$, then $$
|k(K_S+\Phi)|+k(\Omega-\Phi)\subset |k(K_X+\Delta)|_S. $$ \end{theorem}
To prove \eqref{t_squeeze}, we need a variant of multiplier ideal sheaves: \begin{definition-lemma}\label{d_variant} Let $(X,\Delta)$ be a log smooth pair where $\Delta$ is a reduced divisor and let $V$ be a linear system whose base locus contains no log canonical centres of $(X,\Delta )$. Let $\mu\colon\map Y.X.$ be a log resolution of $V$ and $(X,\Delta )$ and let $F$ be the fixed divisor of the linear system $\mu^*V$. Let $K_Y+\Gamma =\mu ^*(K_X+\Delta)+ E$ where $\Gamma=\sum P_i$ is the sum of the divisors on $Y$ of log discrepancy zero.
Then for any real number $c\geq 0$, define the \textbf{multiplier ideal sheaf} $$ \mathcal{J}_{\Delta,c\cdot V}:=\mu _*\ring Y.( E-\rdown cF.). $$ If $\Delta=0$ we will write $\mathcal{J}_{c\cdot V}$ and if $D=cG$, where $G>0$ is a Cartier divisor, we define $$ \mathcal{J}_{\Delta,D}:=\mathcal{J}_{\Delta,c\cdot V}, $$ where $V=\{G\}$. \end{definition-lemma} \begin{proof} We have to show that the definition of the multiplier ideal sheaf is independent of the choice of log resolution. Let $\mu\colon\map Y.X.$ and $\mu'\colon\map Y'.X.$ be two log resolutions of $(X,\Delta)$ and $V$. We may assume that $\mu'$ factors through $\mu$ via a morphism $\nu\colon\map Y'.Y.$. Then $F'=\nu ^* F$ as $\mu ^*V-F$ is free, and \begin{align*} E'-cF' &= K_{Y'}+\Gamma'-\mu'^*(K_X+\Delta)-cF' \\
&= K_{Y'}+\Gamma'-\nu^*(K_Y+\Gamma-E+cF) \\
&= \nu^*( E-\rdown cF.)+K_{Y'}+\Gamma'-\nu^*(K_Y+\Gamma+\{cF\}) \\
&= \nu^*(E -\rdown cF.)+G. \end{align*} Since $(Y,\Gamma+E+F)$ is log smooth, it follows that $(Y,\Gamma+\{cF\})$ is log canonical and has the same log canonical places as $(Y,\Gamma)$ and hence as $(X,\Delta )$. Thus $\rup G.\geq 0$ and since $\nu _*(K_{Y'}+\Gamma ')=K_Y+\Gamma$, $\rup G .$ is $\nu$-exceptional. Then \begin{align*} \mu'_*\ring Y'.(E'-\rdown cF'.)&=\mu_*(\nu_*\ring Y'.(E'-\rdown cF'.)) \\
&=\mu_*(\nu_*\ring Y'.(\nu^*(E-\rdown cF.)+\rup G.)) \\
&=\mu_*\ring Y.(E-\rdown cF.). \qedhere \end{align*} \end{proof}
We need to develop a little of the theory of multiplier ideal sheaves. \begin{lemma}\label{l_theory} Let $(X,\Delta)$ be a log smooth pair where $\Delta $ is reduced, let $V$ be a linear system whose base locus contains no log canonical centres of $(X,\Delta )$ and let $G\geq 0$ and $D\geq 0$ be $\mathbb{Q}$-Cartier divisors whose supports contain no log canonical centres of $(X,\Delta )$.
Then \begin{enumerate} \item $\mathcal{J}_{\Delta,D}=\ring X.$ if and only if $(X,\Delta+D)$ is divisorially log terminal and $\rdown D.=0$. \item If $0\leq \Delta'\leq \Delta$ then $\mathcal{J}_{\Delta,c\cdot V}\subset \mathcal{J}_{\Delta',c\cdot V}$. In particular, $\mathcal{J}_{\Delta,c\cdot V}\subset \mathcal{J}_{c\cdot V} \subset \ring X.$. \item If $\Sigma\geq 0$ is a Cartier divisor, $D-\Sigma\leq G$ and $\mathcal{J}_{\Delta,G}=\ring X.$ then $\mathcal{I}_{\Sigma}\subset \mathcal{J}_{\Delta,D}$. \end{enumerate} \end{lemma} \begin{proof} (1) follows easily from the definitions.
(2) follows from the fact that $a(X,\Delta',P)\geq a(X,\Delta,P)$ for all divisors $P$ over $X$.
To see (3), notice that as $\Sigma$ is Cartier and $\mathcal{J}_{\Delta,G}=\ring X.$, we have $$\mathcal{J}_{\Delta,G}(-\Sigma)=\ring X.(-\Sigma)=\mathcal I _{\Sigma}.$$ But since $D \leq G+\Sigma$, we also have \[ \mathcal{J}_{\Delta,G}(-\Sigma)=\mathcal{J}_{\Delta,G+\Sigma} \subset \mathcal{J}_{\Delta,D}.\qedhere \] \end{proof}
We have the following extension of (9.5.1) of \cite{Lazarsfeld04a} or (2.4.2) of \cite{Takayama06}: \begin{lemma}\label{l_ses} Let $\pi\colon\map X.Z.$ be a projective morphism to a normal affine variety $Z$. Let $(X,\Delta)$ be a log smooth pair where $\Delta $ is reduced, let $S$ be a component of $\Delta$, let $D\geq 0$ be a $\mathbb{Q}$-Cartier divisor whose support does not contain any log canonical centres of $(X,\Delta)$ and let
$\Theta=(\Delta-S)|_S$. Let $N$ be a Cartier divisor. \begin{enumerate} \item There is a short exact sequence $$
\ses \mathcal{J}_{\Delta-S,D+S}.\mathcal{J}_{\Delta,D}.\mathcal{J}_{\Theta,D|_S}.. $$ \item(Nadel Vanishing) If $N-D$ is ample then $$ H^i(X,\mathcal{J}_{\Delta,D}(K_X+\Delta+N))=0, $$ for $i>0$. \item If $N-D$ is ample then $$ \map
{H^0(X,\mathcal{J}_{\Delta,D}(K_X+\Delta+N))}.{H^0(S,\mathcal{J}_{\Theta,D|_S}(K_X+\Delta+N))}., $$ is surjective. \end{enumerate} \end{lemma} \begin{proof} By the resolution lemma of \cite{Szabo94}, we may find a log resolution $\mu\colon\map Y.X.$ of $(X,\Delta+D)$ which is an isomorphism over the generic point of each log canonical centre of $(X,\Delta)$. If $T$ is the strict transform of $S$ then we have a short exact sequence $$ \ses {\ring Y.(E-\rdown\mu^*D.-T)}.{\ring Y.(E-\rdown\mu^*D.)}.{\ring T.(E-\rdown\mu^*D.)}.. $$ Now $\mu_*\ring Y.(E-\rdown\mu^*D.)=\mathcal{J}_{\Delta,D}$. If $\Gamma$ is the sum of the divisors of log discrepancy zero then $$ E-\mu^*D=(K_Y+\Gamma)-\mu^*(K_X+\Delta+D). $$ But then $$ E-\mu^*D-T=(K_Y+\Gamma-T)-\mu^*(K_X+\Delta-S+(D+S)), $$ so that $$ \mu_*\ring Y.(E-\rdown\mu^*D.-T)=\mathcal{J}_{\Delta-S,D+S}, $$ and $$
(E-\mu^*D)|_T=K_T+(\Gamma-T)|_T-\mu^*(K_S+\Theta+D|_S), $$ so that $$
\mu_*\ring T.(E-\rdown\mu^*D.)=\mathcal{J}_{\Theta,D|_S}. $$ Since $(Y,\Gamma+\mu^*D)$ is log smooth and $\Gamma$ and $\mu^*D$ have no common components, $(Y,\Gamma+\{\mu^*D\})$ is divisorially log terminal. Therefore we may pick an exceptional divisor $F\geq 0$ such that $K_Y+\Gamma+\{\mu^*D\}+F$ is divisorially log terminal and $-F$ is $\mu$-ample. As $$ E-\rdown\mu^*D.-T-(K_Y+\Gamma-T+\{\mu ^*D\}+F)=-\mu^*(K_X+\Delta+D)-F, $$ is $\mu$-ample, Kawamata-Viehweg vanishing implies that $$ R^1\mu_*\ring Y.(E-\rdown\mu^*D.-T)=0, $$ and this gives (1).
Similarly, Kawamata-Viehweg vanishing implies that $$ R^i\mu_*\ring Y.(\mu^*(K_X+\Delta+N)+E-\rdown\mu^*D.)=0, $$ for $i>0$. As $N-D$ is ample then, possibly replacing $F$ by a small multiple, we may assume that $\mu^*(N-D)-F$ is ample. As \[ \mu^*(K_X+\Delta+N)+E-\rdown\mu^*D.-(K_Y+\Gamma+\{\mu^*D\}+F)=\mu^*(N-D)-F, \] is ample, Kawamata-Viehweg vanishing implies that $$ H^i(Y,\ring Y.(\mu^*(K_X+\Delta+N)+E-\rdown\mu^*D.))=0, $$ for $i>0$. Since the Leray-Serre spectral sequence degenerates, this gives (2), and (3) follows from (2). \end{proof}
\begin{proof}[Proof of \eqref{t_squeeze}] Since $(X,\Delta +\frac {k-1}mP)$ is purely log terminal, $(S, \Omega +\frac{k-1}mP|_S)$ is kawamata log terminal and $S$ is contained in the support of neither $A$ nor $P$. If $\Sigma \in |k(K_S+\Phi)|$ then we may pick a divisor $$
G\in |m(K_X+\Delta+C)+P|\quad \text{such that}\quad G|_S=l\Sigma+m(\Omega-\Phi+C|_S)+P|_S. $$ Let $$ \Lambda =\frac {k-1}mG+B\qquad \text{and}\qquad N=k(K_X+\Delta)-K_X-S. $$ As the support of the $\mathbb{Q}$-divisor $\Lambda\geq 0$ does not contain $S$ and by assumption $$ N-\Lambda \sim _{\mathbb Q} C-\frac{k-1}mP, $$ is ample, \eqref{l_ses} implies that sections of
$H^0(S,\mathcal{J}_{\Lambda|_S}(k(K_S+\Omega)))$ extend to sections of $H^0(X,\ring X.(k(K_X+\Delta)))$. Now \begin{align*}
&\phantom{\leq } \Lambda |_S -(\Sigma +k(\Omega-\Phi))\\
&=\frac {k-1}m(l\Sigma+m(\Omega-\Phi+C|_S)+P|_S)+B|_S-(\Sigma+k(\Omega-\Phi)) \\
&\leq \Omega+\frac {k-1}m P|_S. \end{align*}
As $(S, \Omega +\frac{k-1}mP|_S)$ is kawamata log terminal, $\mathcal J_{\Omega
+\frac{k-1}mP|_S}=\ring S.$ and we are done by (3) of \eqref{l_theory}. \end{proof}
\section{Asymptotic multiplier ideal sheaves} \label{s_variations}
\begin{definition}\label{d_additive} Let $X$ be a normal variety and let $D$ be a divisor. An \textbf{additive sequence of linear systems associated to $D$} is a sequence $V_{\bullet}$, such that $V_m\subset \proj {H^0(X,\ring X.(mD))}.$ and $$ V_i+V_j\subset V_{i+j}. $$ \end{definition}
\begin{definition-lemma}\label{d_asymptotic} Suppose that $(X,\Delta)$ is log smooth, where $\Delta$ is reduced and let $V_{\bullet}$ be an additive sequence of linear systems associated to a divisor $D$. Assume that there is an integer $k>0$ such that no log canonical centre of $(X,\Delta )$ is contained in the base locus of $V_k$.
If $c$ is a positive real number and $p$ and $q$ are positive integers divisible by $k$, where $p$ divides $q$, then $$ \mathcal{J}_{\Delta,\frac cp\cdot V_p}\subset \mathcal{J}_{\Delta,\frac cq\cdot V_q}. $$ In particular the \textbf{asymptotic multiplier ideal sheaf} of $V_{\bullet}$ $$ \mathcal{J}_{\Delta,c\cdot V_{\bullet}}=\bigcup _{p>0}\mathcal{J}_{\Delta,\frac cp\cdot V_p}, $$
is given by $\mathcal{J}_{\Delta,c\cdot V_{\bullet}}=\mathcal{J}_{\Delta,\frac cp\cdot V_p} $, for $p$ sufficiently large and divisible. If we take $V_m=|mD|$ the complete linear system, then define $$
\mathcal{J}_{\Delta,c\cdot \|D\|}=\mathcal{J}_{\Delta,c\cdot V_{\bullet}}, $$
and if $S$ is a component of $\Delta$ and we take $W_m=|mD|_S$, then define $$
\mathcal{J}_{\Theta,c\cdot \|D\|_S}=\mathcal{J}_{\Theta,c\cdot W_{\bullet}}, $$
where $\Theta=(\Delta -S)|_S$. \end{definition-lemma} \begin{proof} If $p$ divides $q$ then pick a common log resolution $\mu\colon\map Y.X.$ of $V_p$, $V_q$ and $(X,\Delta )$ and note that $$ \frac 1qF_q\leq \frac 1p F_p, $$ where $F_p$ is the fixed locus of $\mu^*V_p$ and $F_q$ is the fixed locus of $\mu^*V_q$. Therefore $\mathcal{J}_{\Delta,\frac cp\cdot V_p}\subset \mathcal{J}_{\Delta,\frac cq\cdot V_q}$. The equality $\mathcal{J}_{\Delta,c\cdot
V_{\bullet}}=\mathcal{J}_{\Delta,\frac cp\cdot V_p}, $ now follows as $X$ is Noetherian. \end{proof}
We are now ready to state the main result of this section: \begin{theorem}\label{t_multiplier} Let $\pi\colon\map X.Z.$ be a projective morphism to a normal affine variety $Z$. Suppose that $(X,\Delta=S+B)$ is log smooth and purely log terminal of dimension $n$, where $S=\rdown \Delta.$ is irreducible and let $k$ be a positive integer such that $D=k(K_X+\Delta)$ is integral. Let $A$ be any ample $\mathbb Q$-divisor on $X$. Let $q$ and $r$ be any positive integers such that $Q=qA$ is very ample, $rA$ is Cartier and $(j-1)K_X+\Xi+rA$ is ample for every Cartier divisor $0\leq \Xi\leq j\rup\Delta.$ and every integer $1\leq j \leq k+1$.
If the stable base locus of $D$ does not contain any log canonical centre of $(X,\rup \Delta.)$, then $$
\mathcal{J}_{\|mD|_S\|}\subset \mathcal{J}_{\Theta,\|mD+P\|_S} \qquad \text{for all} \qquad m\in\mathbb{N}, $$
where $\Theta=\rup B.|_S$, $p=qn+r$ and $P=pA$.
Moreover, we have $$\pi _* \mathcal J_{\|mD|_S\|}(mD+P)\subset \operatorname{Im}\left( \pi _* \ring X.(mD+P)\to \pi _* \ring S.(mD+P)\right)$$ for all $m\in \mathbb N$. \end{theorem}
We will need some results about the sheaves $\mathcal{J}_{\Delta,c\cdot V_{\bullet}}$, most of which are easy generalisations of the corresponding facts for the usual asymptotic multiplier ideal sheaves. \begin{lemma}\label{l_id} Let $\pi\colon\map X.Z.$ be a projective morphism to a normal affine variety $Z$ and let $D$ be a $\mathbb{Q}$-Cartier divisor. Suppose that $(X,\Delta)$ is log smooth, $\Delta $ is reduced and the stable base locus of $D$ contains no log canonical centre of $(X,\Delta)$. Then \begin{enumerate} \item for any real numbers $0<c_1\leq c_2$ there is a natural inclusion $$
\mathcal{J}_{\Delta, c_2 \cdot \|D\|}\subset \mathcal{J}_{\Delta, c_1 \cdot \|D\|}, $$ and \item if $D$ is Cartier and $S$ is a component of $\Delta$, then the image of the map $$ \map {\pi_*\ring X.(D)}.{\pi _*\ring S.(D)}., $$
is contained in $\pi_*\mathcal{J}_{\Theta,\|D\|_S}(D)$ where $\Theta =(\Delta -S)|_S$. \end{enumerate} \end{lemma} \begin{proof} (1) is immediate from the definitions.
Suppose that $D$ is Cartier. Pick an integer $p$ such that $$
\mathcal{J}_{\Theta, \|D\|_S}=\mathcal{J}_{\Theta, \frac 1p \cdot |pD|_S}, $$
and a log resolution $\mu\colon\map Y.X.$ of $|D|$, $|pD|$ and $(X,\Delta )$. Let $T$ be the strict transform of $S$, let $F_1$ be the fixed locus of $\mu^*|D|$ and let $F_p$ be the fixed locus of $\mu^*|pD|$. We have $$ (\pi\circ\mu)_*\ring Y.(\mu^*D-F_1)=\pi_*\ring X.(D)=(\pi\circ\mu)_*\ring Y.(E+\mu^*D). $$ The first equality follows by definition of $F_1$ and the second follows as $E\geq 0$ is exceptional. As there are inequalities $$ \mu ^*D-F_1\leq \mu ^* D-\rdown F_p/p.\leq E+\mu ^* D-\rdown F_p/p. \leq E+\mu ^* D, $$ the image of $\pi_*\ring X.(D)$ is equal to the image of $$ (\pi\circ\mu)_*\ring Y.(E+\mu ^*D-\rdown F_p/p.). $$ Thus the image of $\pi_*\ring X.(D)$ is contained in \[
(\pi\circ\mu)_* \ring T.(E+\mu^*D-\rdown F_p/p.)=\pi _*\mathcal{J}_{\Theta, \|D\|_S}(D).\qedhere \] \end{proof}
\begin{lemma}\label{l_image} Let $\pi\colon\map X.Z.$ be a projective morphism to a normal affine variety $Z$ and let $D$ be a Cartier divisor. Suppose that $(X,\Delta)$ is log smooth and $\Delta$ is reduced. Let $S$ be a component of $\Delta$ and
$\Theta=(\Delta -S)|_S$.
If $\mathbf{B}_+(D)$ contains no log canonical centres of $(X,\Delta)$ then the image of the map $$ \map {\pi_*\ring X.(K_X+\Delta+D)}.{\pi_*\ring S.(K_S+\Theta+D)}., $$ contains $$
\pi _* \mathcal{J}_{\Theta,\|D\|_S}(K_S+\Theta+D). $$ \end{lemma} \begin{proof} Pick an integer $p>1$ such that $$
\mathcal{J}_{\Theta,\|D\|_S}=\mathcal{J}_{\Theta, \frac 1 p \cdot |pD|_S}, $$
and there is a divisor $A+B\in |pD|$ where $A\geq 0$ is a general very ample divisor and $B\geq 0$ contains no log canonical centres of $(X,\Delta)$. By the resolution lemma of
\cite{Szabo94}, we may find a log resolution $\mu\colon\map Y.X.$ of $|pD|$ and of $(X,\Delta)$ which is an isomorphism over every log canonical centre of $(X,\Delta)$. Let
$F_p$ be the fixed divisor of $\mu^*|pD|$, $M_p=p\mu ^*D-F_p$ and let $\Gamma$ and $T$ be the strict transforms of $\Delta$ and $S$. We have a short exact sequence $$ \ses {\ring Y.(G-T)}.{\ring Y.(G)}.{\ring T.(G)}., $$ where $G=K_Y+\Gamma+\mu ^*D-\rdown F_p/p.$. As $\mu^*A$ is base point free and
$\mu^*(A+B)\in \mu^*|pD|$, the divisor $C:=\mu^*B-F_p$ is effective. Note that $M_p-C \sim \mu^*A$. As no component of $C$ is a component of $\Gamma$, we may pick $0<\delta\leq 1/p$ and an exceptional $\mathbb{Q}$-divisor $F\geq 0$ such that
$(Y,\Gamma-T+\{F_p/p\}+\delta (C+F))$ is divisorially log terminal and $\mu^*A-F$ is ample. As $|M_p|$ is free, $M_p/p$ is nef and so \begin{align*} G-T-(K_Y+\Gamma-T+\{\frac 1pF_p\}+\delta(C+F))&=\frac 1pM_p-\delta(C+F) \\
&\sim_{\mathbb{Q}}(\frac 1p-\delta)M_p+\delta(\mu^*A-F), \end{align*} is ample. In particular Kawamata-Viehweg vanishing implies that $R^1\phi_*\ring Y.(G-T)=0$ where $\phi=\pi\circ\mu$. Therefore the homomorphism \[
\pi_*\ring X.(K_X+\Delta+D)\supset\phi_*\ring Y.(G)\longrightarrow\phi_*\ring T.(G)=\pi _*\mathcal{J}_{\Theta,\|D\|_S}(K_S+\Theta+D), \] is surjective. \end{proof}
\begin{theorem}\label{t_generation} Let $\pi\colon\map X.Z.$ be a projective morphism, where $Z$ is affine and $X$ is a smooth variety of dimension $n$.
If $D$ is a Cartier divisor whose stable base locus is a proper subset of $X$, $A$ is an ample Cartier divisor and $H$ is a very ample divisor then
$\mathcal{J}_{\|D\|}(D+K_X+A+nH)$ is globally generated. \end{theorem}
\begin{proof} Pick an integer $p>0$ such that if $pB\in |pD|$ is a general element, then $$
\mathcal{J}_{\|D\|}=\mathcal{J}_{\frac 1p \cdot |pD|}=\mathcal{J}_{B}. $$
Then by (2) of \eqref{l_ses}, $H^i(X,\mathcal{J}_{\|D\|}(D+K_X+A+mH))=0$ for all $i>0$ and $m\geq 0$ and we may apply \eqref{l_generation}. \end{proof}
\begin{lemma}\label{l_generation} Let $\pi\colon\map X.Z.$ be a projective morphism where $X$ is smooth of dimension $n$, $Z$ is affine and let $H$ be a very ample divisor.
If $\mathcal{F}$ is any coherent sheaf such that $H^i(X,\mathcal{F}(mH))=0$, for $i>0$ and for all $m\geq -n$ then $\mathcal{F}$ is globally generated. \end{lemma} \begin{proof} Pick $x\in X$. Let $\mathcal{T}\subset\mathcal{F}$ be the torsion subsheaf supported at $x$, and let $\mathcal{G}=\mathcal{F}/\mathcal{T}$. Then $H^i(X,\mathcal{G}(mH))=0$ for $i>0$ and for all $m\geq -n$ and $\mathcal{F}$ is globally generated if and only if $\mathcal{G}$ is globally generated. Replacing $\mathcal{F}$ by $\mathcal{G}$ we may therefore assume that $\mathcal{T}=0$.
Pick a general element $Y\in |H|$ containing $x$. As $\mathcal{T}=0$ there is an exact sequence $$ \ses \mathcal{F}(-Y).\mathcal{F}.\mathcal{G}., $$ where $\mathcal{G}=\mathcal{F}\otimes \ring Y.$. As $H^i(Y,\mathcal{G}(mH))=0$, for $i>0$ and for all $m\geq -(n-1)$, $\mathcal{G}$ is globally generated by induction on the dimension. As $H^1(X,\mathcal{F}(-Y))=0$ it follows that $\mathcal{F}$ is globally generated. \end{proof}
\begin{proof}[Proof of \eqref{t_multiplier}] We follow the argument of \cite{HM05b} which in turn is based on the ideas of \cite{Kawamata99}, \cite{Siu98} and \cite{Tsuji99}.
We proceed by induction on $m$. The statement is clear for $m=0$, and so it suffices to show that $$
\mathcal{J}_{\|(m+1)D|_S\|}\subset \mathcal{J}_{\Theta,\|(m+1)D+P\|_S}, $$ assuming that $$
\mathcal{J}_{\|tD|_S\|}\subset \mathcal{J}_{\Theta,\|tD+P\|_S} \qquad \text{for all} \qquad t\leq m. $$ If $\Delta=\sum \delta _i\Delta _i$, where each $\Delta_i$ is a prime divisor, then for any $1\leq s\leq k$, put $$ \Delta^s=\sum _{i\,:\,\delta _i>(k-s)/k}\Delta _i . $$ We have \begin{itemize} \item each $\Delta^s$ is integral, \item $\displaystyle{S=\Delta^1\leq \Delta^2 \leq \cdots \leq \Delta^k=\rup \Delta.}$, and \item $\displaystyle{\Delta=\frac 1k\sum_{s=1}^k \Delta^s}$, \end{itemize} and these properties uniquely determine the divisors $\Delta^s$. We let $\Delta^{k+1}=\rup \Delta.$. We recursively define integral divisors $D_{\leq s}$ by the rule $$ D_{\leq s} = \begin{cases}
0 & \text{if $s=0$} \\
K_X+\Delta^s+D_{\leq s-1} & 1\leq s\leq k. \end{cases} $$ Note that $D_{\leq k}=D$. By (1) of \eqref{l_id} there is an inclusion $$
\mathcal{J}_{\|(m+1)D|_S\|}\subset \mathcal{J}_{\|mD|_S\|}, $$ and so it suffices to prove that there are inclusions \[
\label{e_star}\mathcal{J}_{\|mD|_S\|}\subset \mathcal{J}_{\Theta^{s+1},\|mD+D_{\leq s}+P\|_S}, \tag{$\star$} \]
for $0\leq s\leq k$, where $\Theta^i=(\Delta^i-S)|_S$ for $1\leq i\leq k+1$. Thus $\Theta^{k}=\Theta^{k+1}=\Theta$ and $\Theta^1=0$.
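For a hypothetical illustration of the decomposition above, suppose that $k=3$ and $\Delta=S+\frac 23 B_1+\frac 13 B_2$ for prime divisors $B_1$ and $B_2$. Then $$ \Delta^1=S, \qquad \Delta^2=S+B_1, \qquad \Delta^3=S+B_1+B_2=\rup \Delta., $$ so that indeed $\Delta=\frac 13(\Delta^1+\Delta^2+\Delta^3)$, and $$ D_{\leq 1}=K_X+S, \qquad D_{\leq 2}=2K_X+2S+B_1, \qquad D_{\leq 3}=3K_X+3S+2B_1+B_2=D. $$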
We proceed by induction on $s$. Now $$
\mathcal{J}_{\|mD|_S\|}\subset \mathcal{J}_{\Theta,\|mD+P\|_S} \subset \mathcal{J}_{\Theta^1,\|mD+P\|_S}. $$ The first inclusion holds by assumption and since $\Theta^1\leq \Theta$, (2) of \eqref{l_theory} implies the second inclusion. Thus \eqref{e_star} holds when $s=0$.
Now suppose that \eqref{e_star} holds for $s\leq t-1$. Note that \begin{align*} \label{e_dagger} mD+D_{\leq t}+P&=K_X+\Delta^t+(D_{\leq t-1}+P)+mD\\
&=mD+K_X+(\Delta^t+D_{\leq t-1}+rA)+nQ,\tag{$\dagger$} \end{align*} where, by assumption, both $D_{\leq t-1}+P$ and $\Delta^t+D_{\leq t-1}+rA$ are ample for any $1\leq t\leq k+1 $. In particular $\mathbf{B}_{+}(mD+D_{\leq t-1}+P)$ contains no log canonical centres of $(X,\rup \Delta.)$. Then \begin{align*}
\pi_*\mathcal{J}_{\|mD|_S\|}(mD+D_{\leq t}+P)
&\subset \pi _*\mathcal{J}_{\Theta^t,\|mD+D_{\leq t-1}+P\|_S}(mD+D_{\leq t}+P)\\ &\subset \operatorname{Im} \left( \pi _* \ring X.(mD+D_{\leq t}+P)\longrightarrow \pi _*\ring S.(mD+D_{\leq t}+P)\right)\\
&\subset \pi _* \mathcal{J}_{\Theta^{t+1},\|mD+D_{\leq t}+P\|_S}(mD+D_{\leq t}+P). \end{align*} The first inclusion holds as we are assuming \eqref{e_star} for $s=t-1$, the second inclusion holds by \eqref{e_dagger} and \eqref{l_image} and the last inclusion follows from (2) of \eqref{l_id}. But \eqref{e_dagger} and \eqref{t_generation} imply that $$
\mathcal{J}_{\|mD|_S\|}(mD+D_{\leq t}+P), $$ is generated by global sections and so \[
\mathcal{J}_{\|mD|_S\|}\subset \mathcal{J}_{\Theta^{t+1}, \|mD+D_{\leq t}+P\|_S}. \] The inclusion $$
\pi _* \mathcal J_{\|mD|_S\|}(mD+P)\subset \operatorname{Im}\left(\pi _* \ring X.(mD+P)\to \pi _* \ring S.(mD+P)\right), $$ is part of the inclusions proved above when $s=k$. \end{proof}
\section{Lifting sections} \label{s_lift}
\begin{lemma}\label{l_lim} Let $D\geq 0$ be a Cartier divisor on a normal variety $X$, and let $Z\subset X$ be an irreducible subvariety.
Then $$
\liminf \frac{\operatorname{mult}_Z(|mD|)}m=\lim \frac{\operatorname{mult}_Z(|m!D|)}{m!}. $$ \end{lemma} \begin{proof} Note that if $a$ divides $b$ then $$
\frac{\operatorname{mult}_Z(|aD|)}a\geq \frac{\operatorname{mult}_Z(|bD|)}b. $$ In particular the sequence $\operatorname{mult}_Z(|m!D|)/m!$ is non-increasing, so the limit on the right hand side exists, and since $m$ divides $N!$ for every $N\geq m$, each term $\operatorname{mult}_Z(|mD|)/m$ is at least this limit, whence the result. \end{proof} \begin{lemma}\label{l_achieve} Let $D$ be a divisor on a smooth variety $X$ and let $Z$ be a closed subvariety.
If $\lim\operatorname{mult}_Z(|m!D|)/m!=0$ then $Z$ is not contained in $\mathbf{B}_-(D)$. \end{lemma} \begin{proof} Let $A$ be any ample divisor. Pick $l>0$ such that $lA-K_X$ is ample. If
$m>l$ is sufficiently divisible then $\mathcal{J}_{\|mD\|}(m(D+A))$ is globally generated by \eqref{t_generation}. But if $p>0$ is sufficiently large and divisible and $D_{mp}\in
|mpD|$ is general, then $\operatorname{mult}_Z D_{mp}=\operatorname{mult}_Z|mpD|<p$ and $$
\mathcal J_{\|mD\|}=\mathcal J_{(1/p)D_{mp}}. $$ But since $\operatorname{mult}_Z D_{mp}/p<1$ it follows that $(X,D_{mp}/p)$ is
kawamata log terminal, in a neighbourhood of the generic point of $Z$. Thus $Z$ is not contained in the co-support of $\mathcal J_{\|mD\|}$ and so $Z$ is not contained in the base locus of $m(D+A)$. \end{proof}
\begin{theorem}\label{t_lift} Let $\pi\colon\map X.Z.$ be a projective morphism to a normal affine variety $Z$, where $(X,\Delta=S+A+B)$ is a purely log terminal pair, $S=\rdown\Delta.$ is irreducible, $(X,S)$ is log smooth, $A\geq 0$ is a general ample
$\mathbb{Q}$-divisor, $B\geq 0$ is a $\mathbb{Q}$-divisor and $(S,\Omega+A|_S)$ is canonical, where $\Omega=(\Delta-S)|_S$. Assume that the stable base locus of $K_X+\Delta$ does not contain $S$. Let $F=\lim F_{l!}$, where, for any positive and sufficiently divisible integer $m$, we let \[
F_m=\operatorname{Fix} (|m(K_X+\Delta)|_S)/m . \] If $\epsilon>0$ is any rational number such that $\epsilon(K_X+\Delta)+A$ is ample and if $\Phi$ is any $\mathbb{Q}$-divisor on $S$ and $k>0$ is any integer such that \begin{enumerate} \item both $k\Delta$ and $k\Phi$ are Cartier, and \item $\Omega\wedge \lambda F \leq \Phi\leq \Omega$, where $\lambda=1-\epsilon/k$, \end{enumerate} then $$
|k(K_S+\Omega-\Phi)|+k\Phi\subset |k(K_X+\Delta)|_S. $$ \end{theorem} \begin{proof} By assumption $A=H/m$, where $H$ is very ample and a very
general element of $|H|$ and $m\geq 2$ is an integer. If $C=A/k$, then $$ A+(k-1)C=\frac{2k-1}{km}H, $$ and so $$ (X,\Delta+(k-1)C=S+\frac{2k-1}{km}H+B) $$ is purely log terminal, as $$ \frac{2k-1}{km}<1. $$ On the other hand, $$
(S,\Omega+C|_S), $$
is canonical as we are even assuming that $(S,\Omega+A|_S)$ is canonical. Pick $\eta>\epsilon/k$ rational so that $\eta(K_X+\Delta)+C$ is ample and let $\mu=1-\eta<\lambda=1-\epsilon /k$. If $l>0$ is any sufficiently divisible integer so that $O=l(\eta(K_X+\Delta)+C)$ is very ample, then \begin{align*}
G_l &=\operatorname{Fix}( |l(K_X+\Delta+C)|_S)/l \\
&=\operatorname{Fix}(|l\mu (K_X+\Delta)+O|_S)/l \\
&\leq \operatorname{Fix}(|l\mu (K_X+\Delta)|_S)/l \\
&=\mu F_{\mu l}. \end{align*} Thus $$ \lim G_{l!}\leq\mu\lim F_{l!}=\mu F. $$
On the other hand \eqref{l_achieve} implies that there is a positive integer $l$ such that every prime divisor on $S$ which does not belong to the support of $F$ does not belong to the base locus of $|l(K_X+\Delta+C)|$. Thus we may pick a positive integer $l$ such that \begin{itemize} \item $k$ divides $l$, \item $lC$ is Cartier, and \item $G_l\leq \lambda F$. \end{itemize}
Let $f\colon\map Y.X.$ be a log resolution of the linear system $|l(K_X+\Delta+C)|$ and of $(X,\Delta+C)$. We may write $$ K_Y+\Gamma=f^*(K_X+\Delta+C)+E, $$ where $\Gamma\geq 0$ and $E\geq 0$ have no common components, $f_*\Gamma =\Delta+C$ and $f_*E=0$. Then $$ H_l=\operatorname{Fix}(l(K_Y+\Gamma))/l=\operatorname{Fix}(lf^*(K_X+\Delta+C))/l+E. $$
If $\Xi=\Gamma-\Gamma\wedge H_l$ then $l(K_Y+\Xi)$ is Cartier and $\operatorname{Fix}(l(K_Y+\Xi))$ and
$\Xi$ share no common components. Since the mobile part of $|l(K_Y+\Xi)|$ is free and the support of $\operatorname{Fix}(l(K_Y+\Xi))+\Xi$ has normal crossings it follows that the stable base locus of $K_Y+\Xi$ contains no log canonical centres of $(Y,\rup \Xi.)$ (which are nothing but the strata of $\rup\Xi.$).
Let $H\geq 0$ be any ample divisor on $Y$. Pick positive integers $m$ and $q$ such that $l$ divides $m$ and $Q=qH$ is very ample. Let $T$ be the strict transform of $S$, let
$\Gamma _T=(\Gamma-T)|_T$ and let $\Xi_T=(\Xi-T)|_T$. If $$
\tau\in H^0(T,\ring T.(m(K_T+\Xi _T)))=H^0(T,\mathcal{J}_{\|m(K_T+\Xi_T)\|}(m(K_T+\Xi _T))), $$ and $\sigma\in H^0(T,\ring T.(Q))$ then $$
\sigma\cdot \tau \in H^0(T,\mathcal{J}_{\|m(K_T+\Xi _T)\|}(m(K_T+\Xi _T)+Q)). $$ On the other hand, if $q$ is sufficiently large and divisible then by \eqref{t_multiplier}
$H^0(T,\mathcal{J}_{\|m(K_T+\Xi_T)\|}(m(K_T+\Xi _T)+Q))$ is contained in the image of \[ \map {H^0(Y,\ring Y.(m(K_Y+\Xi)+Q))}.{H^0(T,\ring T.(m(K_T+\Xi _T)+Q))}.. \] Hence there is a fixed $q$ such that whenever $l$ divides $m$, we have \[
|m(K_T+\Xi _T)|+m(\Gamma _T-\Xi _T)+|Q|_T\subset |m(K_Y+\Gamma)+Q|_T. \]
If $g=f|_T\colon \map T.S.$ then $g_*\Gamma _T=\Omega+C|_S$ and since $g_*\Xi _T\leq
\Omega+C|_S$ and $(S, \Omega+C|_S)$ is canonical, we have $|m(K_S+g_*\Xi_T)|=g_*|m(K_T+\Xi _T)|$. Therefore, applying $g_*$, we obtain \[
|m(K_S+g_*\Xi _T)|+m(\Omega+C|_S-g_*\Xi _T)+P|_S\subset |m(K_X+\Delta +C)+P|_S, \] where $P=f_*Q$.
Since for every prime divisor $L$ on $S$ we have $$
\operatorname{mult}_L G_l=\operatorname{mult}_{L'}\operatorname{Fix}(|l(K_Y+\Gamma)|_T)/l=\operatorname{mult}_{L'} H_l|_T, $$ where $L'$ is the strict transform of $L$ on $T$, it follows that $$
g_*\Xi_T-C|_S=\Omega-\Omega \wedge G_l\geq \Omega-\Omega \wedge \lambda F \geq\Omega-\Phi\geq 0. $$ Therefore \[
|m(K_S+\Omega-\Phi)|+m\Phi+(mC+P)|_S \subset |m(K_X+\Delta+C)+P|_S, \] for any $m$ divisible by $l$. In particular if we pick $m$ so that $C-\frac{k-1}mP$ is ample and $(X,\Delta+\frac{k-1}mP)$ is purely log terminal then the result follows by \eqref{t_squeeze}. \end{proof}
\section{Rationality of the restricted algebra} \label{s_rationality}
In this section we will prove: \begin{theorem}\label{t_rational} Assume Theorem~\ref{t_ezd}$_{n-1}$.
Let $\pi\colon\map X.Z.$ be a projective morphism to a normal affine variety $Z$, where $(X,\Delta=S+A+B)$ is a purely log terminal pair of dimension $n$, $S=\rdown\Delta.$ is irreducible, $(X,S)$ is log smooth, $A\geq 0$ is a general ample $\mathbb{Q}$-divisor,
$B\geq 0$ is a $\mathbb{Q}$-divisor and $(S,\Omega+A|_S)$ is canonical, where
$\Omega=(\Delta-S)|_S$. Assume that the stable base locus of $K_X+\Delta$ does not contain $S$. Let $F=\lim F_{l!}$ where, for any positive and sufficiently divisible integer $m$, we let \[
F_m=\operatorname{Fix} (|m(K_X+\Delta)|_S)/m. \]
Then $\Theta=\Omega-\Omega\wedge F$ is rational. In particular if both $k\Delta$ and $k\Theta$ are Cartier then $$
|k(K_S+\Theta)|+k(\Omega-\Theta)=|k(K_X+\Delta)|_S, $$ and $$ R_S(X,k(K_X+\Delta))\simeq R(S,k(K_S+\Theta)). $$ \end{theorem} \begin{proof} Suppose that $\Theta$ is not rational. Let $V\subset\operatorname{WDiv}_{\mathbb{R}}(S)$ be the vector space spanned by the components of $\Theta$. Then there is a constant
$\delta>0$ such that if $\Phi\in V$ and $\|\Phi-\Theta\| <\delta$ then $\Phi\geq 0$ has the same support as $\Theta$ and moreover, by (2) of Theorem~\ref{t_ezd}$_{n-1}$, if $G$ is a prime divisor contained in the stable base locus of $K_S+\Theta$ then it is also contained in the stable base locus of $K_S+\Phi$.
If $l(K_X+\Delta)$ is Cartier and $\Theta_l=\Omega-\Omega\wedge F_l$ then $$
|l(K_X+\Delta)|_S\subset |l(K_S+\Theta_l)|+l(\Omega\wedge F_l). $$ Hence $\operatorname{Fix} (l(K_S+\Theta_l))$ does not contain any components of $\Theta_l$. In particular the stable base locus of $K_S+\Theta_l$ does not contain any components of $\Theta_l$. But we may pick $l>0$ so that $\Theta_l\in V$ and
$\|\Theta_l-\Theta\|<\delta$. It follows that no component of $\Theta$ is in the stable base locus of $K_S+\Theta$.
Let $W\subset V$ be the smallest rational affine space which contains $\Theta$. (3) of Theorem~\ref{t_ezd}$_{n-1}$ implies that there is a positive integer $r>0$ and a positive constant $\eta>0$ such that if $\Phi\in W$, $k\Phi/r$ is Cartier and
$\|\Phi-\Theta\|<\eta$ then every component of $\operatorname{Fix} (k(K_S+\Phi))$ is in fact a component of the stable base locus of $K_S+\Theta$.
Pick a rational number $\epsilon>0$ such that $\epsilon(K_X+\Delta)+A$ is ample. By Diophantine approximation, we may find a positive integer $k$, a divisor $\Phi$ on $S$ and a prime divisor $G$ (necessarily a component of $\Theta$ whose coefficient is irrational) such that \begin{enumerate} \item $0\leq \Phi\in W$, \item both $k\Phi/r$ and $k\Delta/r$ are Cartier,
\item $\|\Phi-\Theta\|<\min(\delta,\eta,f\epsilon/k)$ where $f$ is the smallest non-zero coefficient of $F\neq 0$, and \item $\operatorname{mult}_G\Phi>\operatorname{mult}_G \Theta$. \end{enumerate}
\begin{claim}\label{c_component} $\Omega\wedge \lambda F\leq \Omega-\Phi \leq \Omega$, where $\lambda=1-\epsilon/k$. \end{claim} \begin{proof}[Proof of \eqref{c_component}] Let $P$ be a prime divisor on $S$ and let $\omega$, $f$, $\phi$ and $\theta$ be the multiplicities of $\Omega$, $F$, $\Phi$ and $\Theta$ along $P$. We just need to check that \[ \label{e_component}\min(\omega,\lambda f)\leq \omega-\phi. \tag{$*$} \]
There are two cases. If $\omega \leq f$, then $\theta=0$ so that $\phi =0$ and \eqref{e_component} holds. If $\omega \geq f$, then $\theta =\omega -f$ and since
$\|\Phi-\Theta\|<f\epsilon/k$, \[ \min (\omega ,\lambda f)=\left(1-\frac \epsilon k\right)f \leq f-(\phi -\theta )=\omega -\phi. \qedhere \] \end{proof}
\eqref{c_component}, (2) and \eqref{t_lift} imply that $$
|k(K_S+\Phi)|+k(\Omega-\Phi)\subset |k(K_X+\Delta)|_S. $$ (4) implies that $G$ is a component of $\operatorname{Fix}(k(K_S+\Phi))$. (2) and
$\|\Phi-\Theta\|<\eta$ imply that $G$ is a component of the stable base locus of $K_S+\Theta$, a contradiction.
Thus $\Theta$ is rational. Hence $\Omega\wedge F$ is rational, and we are done by \eqref{t_lift}. \end{proof}
\section{Proof of \eqref{t_m}} \label{s_pl}
\begin{theorem}\label{t_model} Assume Theorem~\ref{t_ezd}$_{n-1}$.
Let $\pi\colon\map X.Z.$ be a projective morphism to a normal affine variety $Z$. Suppose that $(X,\Delta=S+A+B)$ is a purely log terminal pair of dimension $n$, $S=\rdown \Delta.$ is irreducible and not contained in the stable base locus of $K_X+\Delta$, $A\geq 0$ is a general ample $\mathbb{Q}$-divisor and $B\geq 0$ is a $\mathbb{Q}$-divisor.
Then there is a birational morphism $g\colon\map T.S.$, a positive integer $l$ and a kawamata log terminal pair $(T,\Theta)$ such that $K_T+\Theta$ is $\mathbb{Q}$-Cartier and $$ R_S(X,l(K_X+\Delta))\cong R(T,l(K_T+\Theta)). $$ \end{theorem} \begin{proof} If $f\colon\map Y.X.$ is a log resolution of $(X,\Delta)$ then we may write $$ K_Y+\Gamma'=f^*(K_X+\Delta)+E, $$ where $\Gamma'\geq 0$ and $E\geq 0$ have no common components, $f_*\Gamma'=\Delta$ and $f_*E=0$. If $T$ is the strict transform of $S$ then we may choose $f$ so that
$(T,\Psi'=(\Gamma'-T)|_T)$ is terminal. Note that $T$ is not contained in the stable base locus of $K_Y+\Gamma'$ as $S$ is not contained in the stable base locus of $K_X+\Delta$.
Pick a $\mathbb{Q}$-divisor $F$ such that $f^*A-F$ is ample and $(Y,\Gamma'+F)$ is purely log terminal. Pick $m>1$ so that $m(f^*A-F)$ is very ample and pick $mC\in |m(f^*A-F)|$ very general. Then $$ (Y,\Gamma=\Gamma'-f^*A+F+C \sim_{\mathbb{Q}}\Gamma'), $$
is purely log terminal and if $m$ is sufficiently large $(T,\Psi+C|_T)$ is terminal, where
$\Psi=(\Gamma-T)|_T$.
On the other hand \begin{align*} R(X,k(K_X+\Delta))&\cong R(Y,k(K_Y+\Gamma)) \qquad \text{and}\\ R_S(X,k(K_X+\Delta))&\cong R_T(Y,k(K_Y+\Gamma)), \end{align*} for any $k$ sufficiently divisible. Now apply \eqref{t_rational} to $(Y,\Gamma)$. \end{proof}
\begin{proof}[Proof of \eqref{t_m}] By \eqref{t_restricted} we may assume that $Z$ is affine and by \eqref{l_restricted}, it suffices to prove that the restricted algebra is finitely generated. As $Z$ is affine, $S$ is mobile and as $f$ is birational, the divisor $\Delta-S$ is big. But then $$ \Delta-S\sim_{\mathbb{Q}} A+B, $$ where $A$ is a general ample $\mathbb{Q}$-divisor and $B\geq 0$. As $S$ is mobile, we may assume that the support of $B$ does not contain $S$. Now $$ K_X+\Delta'=K_X+S+(1-\epsilon)(\Delta-S)+ \epsilon A+\epsilon B\sim_{\mathbb Q} K_X+\Delta, $$ is purely log terminal, where $\epsilon$ is any sufficiently small positive rational number. By \eqref{l_truncation}, we may replace $\Delta$ by $\Delta'$. We may therefore assume that $\Delta=S+A+B$, where $A$ is a general ample $\mathbb{Q}$-divisor and $B\geq 0$. Since we are assuming Theorem~\ref{t_ezd}$_{n-1}$, \eqref{t_model} implies that the restricted algebra is finitely generated.\end{proof}
\end{document}
\begin{document}
\title{Universal steering inequalities } \author{Huangjun Zhu} \email{Corresponding Author: hzhu@pitp.ca} \affiliation{Perimeter Institute for Theoretical Physics, Waterloo, Ontario N2L 2Y5, Canada} \affiliation{Institute for Theoretical Physics, University of Cologne, Cologne
50937, Germany}
\author{Masahito Hayashi} \email{masahito@math.nagoya-u.ac.jp} \affiliation{Graduate School of Mathematics, Nagoya University, Nagoya 464-0814, Japan} \affiliation{Centre for Quantum Technologies, National University of Singapore, Singapore 117543, Singapore}
\author{Lin Chen} \email{linchen@buaa.edu.cn} \affiliation{School of Mathematics and Systems Science, Beihang University, Beijing 100191, China} \affiliation{International Research Institute for Multidisciplinary Science, Beihang University, Beijing 100191, China}
\pacs{03.67.-a, 03.65.Ud, 03.65.Ta}
\begin{abstract} We propose a general framework for constructing universal steering criteria that are applicable to arbitrary bipartite states and measurement settings of the steering party. The same framework is also useful for studying the joint measurement problem. Based on the data-processing inequality for an extended R\'enyi relative entropy, we then introduce a family of universal steering inequalities, which detect steering much more efficiently than those inequalities known before. As illustrations, we show unbounded violation of a steering inequality for assemblages constructed from mutually unbiased bases and establish an interesting connection between maximally steerable assemblages and complete sets of mutually unbiased bases. We also provide a single steering inequality that can detect all bipartite pure states of full Schmidt rank. In the course of study, we generalize a number of results intimately connected to data-processing inequalities, which are of independent interest. \end{abstract}
\date{\today} \maketitle
\emph{Steering} is a nonclassical phenomenon that formalizes what Einstein called ``spooky action at a distance'' \cite{EinsPR35, Schr35D}. For a long time, it was studied under the name of the Einstein-Podolsky-Rosen (EPR) paradox \cite{Wern89, Reid89, ReidDBC09}. Recently, it was realized that steering is a form of nonlocality that sits between entanglement and Bell nonlocality \cite{WiseJD07,JoneWD07,SaunJWP10, QuinVCA15} and that is intrinsically asymmetric \cite{MidgFO10,BowlVQB14}. Interestingly, steering can be characterized by a simple quantum information processing task, namely, entanglement verification with an untrusted party \cite{WiseJD07,JoneWD07}.
In addition, steering has been found useful in a number of applications, such as subchannel discrimination \cite{PianW15} and one-sided device-independent quantum key distribution \cite{BranCWS12}.
Recently, detection and characterization of steering have attracted increasing attention \cite{Reid89,WiseJD07,JoneWD07, ReidDBC09,CavaJWR09, WalbSGT11, SchnBWC13, Puse13, PramKM14, KogiSCA15, SkrzNC14,PianW15, KogiLRA15}. Various steering criteria and inequalities have been derived, such as linear steering inequalities \cite{CavaJWR09, Puse13}; inequalities based on multiplicative variances \cite{Reid89, ReidDBC09,CavaJWR09}, entropic uncertainty relations \cite{WalbSGT11, SchnBWC13}, and fine-grained uncertainty relations \cite{PramKM14}; and a hierarchy of steering criteria based on moments \cite{KogiSCA15}. However, most of these results are tailored to specific scenarios; the majority of criteria are applicable only to given numbers of measurement settings and outcomes. In addition, many criteria (including most linear criteria) rely heavily on numerical optimization and lack a clear physical meaning and simple interpretation.
In this paper, we propose a general framework for constructing universal steering criteria that are applicable to arbitrary measurement settings of the steering party. In particular, we introduce nonlinear steering inequalities based on the data-processing inequality for an extended R\'enyi relative entropy \cite{Reny61,Haya06book}, which detect steering more systematically and efficiently than criteria in the literature. The same framework is also useful for studying the joint measurement problem \cite{BuscHL07, HeinW10, QuinVB14, UolaMG14, UolaBGP15, Zhu15IC}. In addition, our inequalities have a clear information-theoretic meaning and simple interpretation. As illustrations of the general framework, we show unbounded violation of a steering inequality by virtue of \emph{mutually unbiased bases} (MUB) \cite{WootF89, DurtEBZ10} and establish an interesting connection between maximally steerable assemblages and complete sets of MUB. We also provide a single steering inequality that can detect all bipartite pure states of full Schmidt rank.
\begin{figure}
\caption{Steering scenario. Alice can affect Bob's state via her choice of the measurement according to the relation $\rho_{a|x}= \tr_A[(A_{a|x}\otimes 1) \rho]$. Entanglement is necessary but not sufficient for steering. }
\label{fig:steering}
\end{figure}
Suppose Alice and Bob share a bipartite state $\rho$ with reduced states $\rho_A$ and $\rho_B$. Alice can perform local measurements described by a collection of positive-operator-valued measures (POVMs) $\{A_{a|x}\}$, which is known as a \emph{measurement assemblage}. If Alice obtains outcome $a$ for measurement $x$, then the unnormalized reduced state of Bob is $\rho_{a|x}=\tr_A [(A_{a|x}\otimes 1)\rho]$. In the following, we discuss steering of Bob's system by Alice's measurements in terms of Bob's states~$\rho_{a|x}$. The set of states $\rho_{a|x}$ for a given $x$ is called an \emph{ensemble} for $\rho_B$, and the whole collection of ensembles is called a \emph{state assemblage}~\cite{Puse13}; see \fref{fig:steering}. To distinguish them, we express the ensemble by $\{\rho_{a|x}\}_a$ and the state assemblage by $\{\rho_{a|x}\}$. The assemblage $\{\rho_{a|x}\}$ is steerable if it does not admit a local hidden state model \cite{WiseJD07,JoneWD07} of the form $\rho_{a|x}=\sum_\lambda p(a|x,\lambda)\sigma_\lambda$ for all $a,x$, where $\{\sigma_\lambda\}$ is an ensemble for $\rho_B$ and $p(a|x,\lambda)$ is a collection of stochastic maps with $p(a|x,\lambda)\geq 0$ and $\sum_a p(a|x,\lambda)=1$. The state $\rho$ is steerable from Alice to Bob if there exists a measurement assemblage for Alice such that the resulting state assemblage for Bob is steerable. In this paper we shall focus on steerability of assemblages, no matter how they are constructed.
The steering problem is closely related to the joint measurement problem of POVMs \cite{QuinVB14, UolaMG14, UolaBGP15, Zhu15IC, Puse15}. Up to a scaling, a POVM may be seen as an ensemble for the completely mixed state. A measurement assemblage is \emph{compatible} or \emph{jointly measurable} if the corresponding state assemblage (for the completely mixed state) is unsteerable. In view of this connection, many results on steering can be turned into corresponding results on POVMs, and vice versa. We shall make use of this connection without further comments whenever convenient.
To set up the stage, we need to introduce suitable order relations on ensembles and assemblages.
Given two ensembles $\{\rho_a\}$ and $\{\sigma_b\}$ for $\rho_B$, which may represent two preparation procedures, the ensemble $\{\rho_a\}$ is a \emph{coarse graining} of $\{\sigma_b\}$, denoted by $\{\rho_a\}\preceq \{\sigma_b\}$ or $\{\sigma_b\}\succeq \{\rho_a\}$, if the former can be derived from the latter by data processing, that is, $\rho_a=\sum_b p(a|b) \sigma_b$, where the stochastic map $p(a|b)$ characterizes the data-processing procedure. In that case, $\{\sigma_b\}$ is a \emph{refinement} of $\{\rho_a\}$. Intuitively, coarse graining usually leads to a less informative ensemble. Two ensembles are \emph{equivalent} if they are coarse grainings (refinements) of each other. The relation of coarse graining (refinement) forms a partial order on equivalence classes of ensembles for a given state.
The order relation on ensembles can be generalized to assemblages in a natural way. Given two assemblages $\{\rho_{a|x} \}$ and $\{\sigma_{b|y} \}$ for $\rho_B$, the assemblage $\{\rho_{a|x} \}$ is a coarse graining of $\{\sigma_{b|y} \}$, denoted by $\{\rho_{a|x} \}\preceq \{\sigma_{b|y} \}$ or $\{\sigma_{b|y} \}\succeq\{\rho_{a|x} \}$, if each ensemble in $\{\rho_{a|x} \}$ is a coarse graining of an ensemble in $\{\sigma_{b|y} \}$. In that case, $\{\sigma_{b|y} \}$ is called a refinement of $\{\rho_{a|x} \}$. An assemblage is unsteerable if and only if it has a refinement that contains only one ensemble, that is, all its ensembles possess a common refinement. By definition, any coarse graining of an unsteerable assemblage is unsteerable. Conversely, any refinement of a steerable assemblage is steerable.
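As a concrete numerical illustration of these order relations, the following sketch (assuming only NumPy; all variable names are ours) builds an assemblage by coarse graining a common refinement $\{\sigma_\lambda\}$ through stochastic maps, so the result is unsteerable by construction, and checks that every ensemble sums to the same reduced state $\rho_B$:

```python
import numpy as np

rng = np.random.default_rng(1)
d, nlam = 3, 4

# common refinement {sigma_lam}: unnormalized states whose traces sum to 1
kets = [rng.standard_normal(d) + 1j * rng.standard_normal(d) for _ in range(nlam)]
w = rng.random(nlam)
w /= w.sum()
sigma = [w[i] * np.outer(k, k.conj()) / np.vdot(k, k).real for i, k in enumerate(kets)]
rho_B = sum(sigma)

def coarse_grain(P):
    # rho_a = sum_lam P[a, lam] * sigma_lam, with P column stochastic
    return [sum(P[a, l] * sigma[l] for l in range(nlam)) for a in range(P.shape[0])]

P1 = rng.random((2, nlam)); P1 /= P1.sum(axis=0)
P2 = rng.random((3, nlam)); P2 /= P2.sum(axis=0)
assemblage = {1: coarse_grain(P1), 2: coarse_grain(P2)}  # unsteerable by construction

# every ensemble of the assemblage sums to the same reduced state rho_B
for ens in assemblage.values():
    assert np.allclose(sum(ens), rho_B)
```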
A function $f$ on ensembles is \emph{order monotonic} (or order preserving) if $f(\{\rho_a\})\preceq f(\{\sigma_b\})$ whenever $\{\rho_a\}\preceq \{\sigma_b\}$. Order-monotonic functions on assemblages can be defined in a similar manner.
Here the image of $f$ could be any space with a partial order, although we use the same notation for the partial order as that on ensembles.
The image of all ensembles for a given state under an order-monotonic function $f$ is called the \emph{complementarity chamber} and denoted by $\mathcal{C}_f$. In the cases of interest to us, the chambers are usually finite-dimensional compact convex sets, and their shapes reflect the information tradeoff among different ensembles, hence the name. For any unsteerable assemblage $\{\rho_{a|x}\}$ with a common refinement, say, $\{\sigma_\lambda\}$, we have $f(\{\rho_{a|x}\}_a)\preceq f(\{\sigma_\lambda\})\in \mathcal{C}_f$ for all $x$. Hence the images $f(\{\rho_{a|x}\}_a)$ have a common upper bound in $\mathcal{C}_f$. Violation of this condition is a signature of steerability; see \fref{fig:USI} for an illustration.
\begin{figure}\label{fig:USI}
\end{figure}
To unleash the potential of the idea spelled out above, it is essential to construct order-monotonic functions that are easy to characterize. Inspired by the data-processing inequality for a R\'enyi relative entropy \cite{Reny61,Haya06book}, here we introduce two such functions from ensembles to superoperators. Let $Q$ be a positive operator of full rank; define \begin{equation}\label{eq:GQ} \mathcal{G}_Q (\{\rho_a\}):=\sum_a \frac{\douter{\rho_a}{\rho_a} }{\tr(Q\rho_a)},\quad \barcal{G}_Q(\{\rho_a\}):=\sum_a \frac{\douter{\bar{\rho}_a}{\bar{\rho}_a} }{\tr(Q \rho_a)}, \end{equation} where $\bar{\rho}_a=\rho_a-\tr(\rho_a)/d$ and $d$ is the dimension of the Hilbert space. Here, we consider the Hilbert space of operators on the physical space, i.e., the Hilbert-Schmidt space. The kets in this space are denoted by the double-ket notation to distinguish them from ordinary kets. Superoperators, such as the outer product $\douter{A}{B}$, act on the operator space just as ordinary operators act on the usual Hilbert space; for example $(\douter{A}{B})\dket{C}=\dket{A}\tr(B^\dag C)$ (cf.~\cite{Zhu12the,Zhu15IC}).
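To make the double-ket formalism concrete, one may identify $\dket{A}$ with the flattened matrix of $A$, so that the Hilbert-Schmidt inner product $\tr(A^\dag B)$ becomes an ordinary vector inner product; a minimal sketch assuming NumPy (helper names are ours):

```python
import numpy as np

rng = np.random.default_rng(3)
d = 3

dket = lambda X: X.reshape(-1)              # |X>> as a flattened matrix
A, B, C = (rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
           for _ in range(3))

# <<A|B>> = tr(A^dag B) becomes the ordinary inner product of flattened matrices
assert np.isclose(np.vdot(dket(A), dket(B)), np.trace(A.conj().T @ B))

# the outer product |A>><<B| acts as (|A>><<B|)|C>> = |A>> tr(B^dag C)
S = np.outer(dket(A), dket(B).conj())
assert np.allclose(S @ dket(C), dket(A) * np.trace(B.conj().T @ C))
```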
Now, for a positive real vector $\vec{p}$ and a real vector $\vec{v}$ of the same length,
we introduce the extended R\'enyi relative entropy of order~2 as $D_2(\vec{v}\| \vec{p}):=\log \sum_k \frac{v_k^2}{p_k}$, which reduces to the conventional R\'enyi relative entropy of order~2 when $\vec{p}$ and $\vec{v}$ represent probability distributions \cite{Reny61,Haya06book}. As shown in the supplementary material, the extended R\'enyi relative entropy obeys the data-processing inequality, from which we deduce the following theorem. \begin{theorem}\label{thm:GQOM} The functions $\mathcal{G}_Q(\cdot)$ and $\barcal{G}_Q(\cdot)$ are order monotonic for any positive operator $Q$ of full rank. \end{theorem}
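The data-processing inequality behind this theorem can be sanity checked numerically; the sketch below (assuming NumPy; names are ours) verifies $D_2(S\vec{v}\,\|\,S\vec{p})\leq D_2(\vec{v}\|\vec{p})$ for random column-stochastic maps $S$, i.e., $S_{jk}\geq 0$ and $\sum_j S_{jk}=1$:

```python
import numpy as np

rng = np.random.default_rng(0)

def d2(v, p):
    # extended Renyi relative entropy of order 2: log sum_k v_k^2 / p_k
    return np.log(np.sum(v ** 2 / p))

for _ in range(100):
    p = rng.random(6) + 0.1            # positive vector (not necessarily normalized)
    v = rng.standard_normal(6)         # arbitrary real vector
    S = rng.random((3, 6))
    S /= S.sum(axis=0)                 # column stochastic: a data-processing map
    assert d2(S @ v, S @ p) <= d2(v, p) + 1e-12
```

The inequality follows term by term from the Cauchy-Schwarz inequality, which is why it holds for arbitrary real $\vec{v}$, not only for probability vectors.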
When $Q$ is the identity, \eref{eq:GQ} reduces to \begin{equation}\label{eq:G} \mathcal{G}(\{\rho_a\}):=\sum_a \frac{\douter{\rho_a}{\rho_a} }{\tr(\rho_a)},\quad \barcal{G}(\{\rho_a\}):=\sum_a \frac{\douter{\bar{\rho}_a}{\bar{\rho}_a} }{\tr(\rho_a)}. \end{equation} We have \begin{equation}\label{eq:Gbound} \begin{aligned} \Tr(\mathcal{G}(\{\rho_a\}))&=\sum_a\frac{\tr(\rho_a^2)}{\tr(\rho_a)}\leq \sum_a\tr(\rho_a)=1,\\ \Tr(\barcal{G}(\{\rho_a\}))&=\Tr(\mathcal{G}(\{\rho_a\}))-\frac{1}{d}\leq 1-\frac{1}{d}, \end{aligned} \end{equation} where ``$\Tr$'' denotes the trace of superoperators. Here the upper bounds are saturated if and only if the ensemble is rank 1, that is, all the $\rho_a$ have rank 1. Define \begin{equation}\label{eq:tau} \begin{aligned}
\tau(\{\rho_{a|x}\})&=\min\{\Tr(\mathcal{F})| \mathcal{F}\geq \mathcal{G}(\{\rho_{a|x}\}_a)\; \forall x\},\\
\bar{\tau}(\{\rho_{a|x}\})&=\min\{\Tr(\mathcal{F})| \mathcal{F}\geq \barcal{G}(\{\rho_{a|x}\}_a)\; \forall x\}. \end{aligned} \end{equation} \begin{theorem}\label{thm:USI}
The functions $\tau(\cdot)$ and $\bar{\tau}(\cdot)$ are order-monotonic on assemblages. Any unsteerable assemblage $\{\rho_{a|x}\}$ satisfies $\tau(\{\rho_{a|x}\})\leq 1$ and $\bar{\tau}(\{\rho_{a|x}\})\leq 1-1/d$. Any compatible measurement assemblage $\{M_{a|x}\}$ satisfies $\tau(\{M_{a|x}\})\leq d$
and $\bar{\tau}(\{M_{a|x}\})\leq d-1$. \end{theorem}
\begin{proof}
Suppose $\{\rho_{a|x}\}\preceq \{\sigma_{b|y}\}$. Then for any ensemble $x$ in $\{\rho_{a|x}\}$ there exists an ensemble $y$ in
$\{\sigma_{b|y}\}$ such that $\mathcal{G}(\{\rho_{a|x}\}_a)\leq \mathcal{G}(\{\sigma_{b|y}\}_b)$ according to \thref{thm:GQOM}. Therefore, $\mathcal{F} \geq \mathcal{G}(\{\rho_{a|x}\}_a)$ for all $x$ whenever $\mathcal{F} \geq\mathcal{G}(\{\sigma_{b|y}\}_b)$ for all $y$. It follows that $\tau(\{\rho_{a|x}\})\leq \tau(\{\sigma_{b|y}\})$ and $\tau(\cdot)$ is order-monotonic. By the same reasoning, so is $\bar{\tau}(\cdot)$.
The ensembles in $\{\rho_{a|x}\}$ possess a common refinement, say $\{\sigma_\lambda\}$, so that $\mathcal{G}(\{\sigma_\lambda\})\geq \mathcal{G}(\{\rho_{a|x}\}_a)$ for all $x$. On the other hand, $\Tr(\mathcal{G}(\{\sigma_\lambda\}))\leq 1$ according to \eref{eq:Gbound}. It follows that $\tau(\{\rho_{a|x}\})\leq 1$. The other three inequalities in \thref{thm:USI} follow from the same reasoning. \end{proof} \begin{remark}
According to \pref{pro:taudif} in the supplementary material, $\tau(\{\rho_{a|x}\})=\bar{\tau}(\{\rho_{a|x}\})+1/d$, so the inequalities $\tau(\{\rho_{a|x}\})\leq 1$ and $\bar{\tau}(\{\rho_{a|x}\})\leq 1-1/d$ are equivalent; so are $\tau(\{M_{a|x}\})\leq d$ and $\bar{\tau}(\{M_{a|x}\})\leq d-1$. In practice, one form may be easier to analyze than the other.
\end{remark}
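The mechanism of the proof can be illustrated numerically: for an assemblage obtained by coarse graining a rank-1 common refinement, $\barcal{G}$ of the refinement dominates $\barcal{G}$ of each ensemble in the positive-semidefinite order, and its trace saturates $1-1/d$. A sketch assuming NumPy (names are ours; superoperators are represented as $d^2\times d^2$ matrices acting on flattened operators):

```python
import numpy as np

rng = np.random.default_rng(2)
d, nlam = 3, 5

def Gbar(ens):
    # \bar{G}({rho_a}) as a d^2 x d^2 matrix acting on flattened operators
    out = np.zeros((d * d, d * d), dtype=complex)
    for rho in ens:
        rb = (rho - np.trace(rho) * np.eye(d) / d).reshape(-1)
        out += np.outer(rb, rb.conj()) / np.trace(rho).real
    return out

# rank-1 common refinement: its Tr Gbar saturates the bound 1 - 1/d
kets = [rng.standard_normal(d) + 1j * rng.standard_normal(d) for _ in range(nlam)]
w = rng.random(nlam); w /= w.sum()
sigma = [w[i] * np.outer(k, k.conj()) / np.vdot(k, k).real for i, k in enumerate(kets)]
assert abs(np.trace(Gbar(sigma)).real - (1 - 1 / d)) < 1e-10

# coarse graining via a column-stochastic map
P = rng.random((3, nlam)); P /= P.sum(axis=0)
rho = [sum(P[a, l] * sigma[l] for l in range(nlam)) for a in range(3)]

# order monotonicity: Gbar(sigma) - Gbar(rho) is positive semidefinite,
# so F = Gbar(sigma) is feasible and bar-tau <= Tr Gbar(sigma) = 1 - 1/d
diff = Gbar(sigma) - Gbar(rho)
assert np.min(np.linalg.eigvalsh(diff)) > -1e-8
```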
\Fref{fig:USI} depicts the simple idea behind the steering inequalities in \thref{thm:USI}. These inequalities mean that unsteerable (compatible)\ assemblages cannot be too informative (cf. \rcite{Zhu15IC}). Compared with steering inequalities in the literature \cite{CavaJWR09, Puse13}, what is remarkable is that they are applicable to arbitrary assemblages and that the bounds can be derived without numerical optimization. The values of $\tau(\{\rho_{a|x}\})$ and $\bar{\tau}(\{\rho_{a|x}\})$ can be computed efficiently with semidefinite programming (SDP), whose size increases only linearly with the number of ensembles. Although the steerability of an assemblage can be determined by
SDP \cite{Puse13, SkrzNC14}, the size of such an SDP increases exponentially with the number of ensembles. Our approach is attractive from both conceptual and practical perspectives.
To illustrate the application of our steering inequalities, we need to introduce several concepts. Two ensembles $\{\rho_a\}$ and $\{\sigma_b\}$ are \emph{mutually orthogonal} if $\tr(\bar{\rho}_a\bar{\sigma}_b)=0 $ for all $a,b$ or, equivalently, if $\barcal{G}(\{\rho_{a}\})$ and $\barcal{G}(\{\sigma_{b}\})$ have mutually orthogonal supports. The same definition applies to POVMs. For POVMs corresponding to rank-1 projective measurements, orthogonality is equivalent to mutual unbiasedness. Recall that two bases $\{|\psi_j\rangle \}$ and $\{|\varphi_k\rangle\}$ in dimension $d$ are mutually unbiased if $|\langle\psi_j|\varphi_k\rangle|^2=1/d$ for all $j,k$ \cite{WootF89, DurtEBZ10}. The following two propositions are proved in the supplementary material. \begin{proposition}\label{pro:MSA}
Any measurement assemblage $\{M_{a|x}\}$ satisfies $\tau(\{M_{a|x}\})\leq d^2$ and $\bar{\tau}(\{M_{a|x}\})\leq d^2-1$. Any state assemblage $\{\rho_{a|x}\}$ satisfies $\tau(\{\rho_{a|x}\})\leq d$ and $\bar{\tau}(\{\rho_{a|x}\})\leq d-1/d$. \end{proposition} \begin{proposition}\label{pro:MSAm}
Any measurement assemblage $\{M_{a|x}\}$ with $m$ POVMs satisfies $\bar{\tau}(\{M_{a|x}\})\leq m(d-1)$. Any state assemblage $\{\rho_{a|x}\}$ with $m$ ensembles satisfies $\bar{\tau}(\{\rho_{a|x}\})\leq m(1-1/d)$. The upper bound is saturated if and only if the POVMs (ensembles) are rank 1 and mutually orthogonal. \end{proposition}
In view of \pref{pro:MSA}, measurement assemblages saturating the upper bound $\bar{\tau}(\{M_{a|x}\})\leq d^2-1$ (or $\tau(\{M_{a|x}\})\leq d^2$) are called maximally incompatible; state assemblages saturating $\bar{\tau}(\{\rho_{a|x}\})\leq d-1/d$ (or $\tau(\{\rho_{a|x}\})\leq d$) are called maximally steerable.
When a measurement assemblage $\{M_{a|x}\}$ is composed of $m$ projective measurements, the upper bound $\bar{\tau}(\{M_{a|x}\})\leq m(d-1)$ is saturated if and only if the bases are mutually unbiased. So measurement assemblages composed of MUB are maximally incompatible for given $m$; accordingly, state assemblages constructed from MUB are maximally steerable.
The inequality $\bar{\tau}(\{M_{a|x}\})\leq d^2-1$ in \pref{pro:MSA} means that each set of MUB can contain at most $d+1$ bases, in agreement with the well-known bound \cite{WootF89, DurtEBZ10}. When $d$ is a prime power, a complete set of MUB can be constructed \cite{WootF89, DurtEBZ10}, so the compatibility inequality $\bar{\tau}(\{M_{a|x}\})\leq d-1$ and the steering inequality $\bar{\tau}(\{\rho_{a|x}\})\leq 1-1/d$ can be violated by a factor of $d+1$, which is unbounded as $d$ grows. In contrast with the unbounded violation of a linear steering inequality shown in \rcite{MarcRYH15}, our result follows from a universal recipe, and the degree of violation can be determined precisely.
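These claims can be checked concretely for the standard construction of $d+1$ MUB in an odd prime dimension (here $d=3$): the bases are pairwise unbiased, the associated $\barcal{G}$ superoperators have mutually orthogonal supports, and their traces sum to $d^2-1$, which underlies the maximal incompatibility of a complete set. A sketch assuming NumPy (names are ours):

```python
import numpy as np

d = 3
omega = np.exp(2j * np.pi / d)

# standard d+1 MUB in odd prime dimension d: the computational basis plus,
# for k = 0..d-1, the basis with vectors v_j[m] = omega^(k m^2 + j m)/sqrt(d)
bases = [np.eye(d, dtype=complex)]
for k in range(d):
    B = np.array([[omega ** (k * m * m + j * m) for j in range(d)]
                  for m in range(d)]) / np.sqrt(d)
    bases.append(B)

# mutual unbiasedness: |<psi_j|phi_k>|^2 = 1/d across distinct bases
for x in range(len(bases)):
    for y in range(x + 1, len(bases)):
        assert np.allclose(np.abs(bases[x].conj().T @ bases[y]) ** 2, 1 / d)

def Gbar(B):
    # \bar{G} of the rank-1 projective measurement onto the columns of B
    out = np.zeros((d * d, d * d), dtype=complex)
    for j in range(d):
        M = (np.outer(B[:, j], B[:, j].conj()) - np.eye(d) / d).reshape(-1)
        out += np.outer(M, M.conj())
    return out

Gs = [Gbar(B) for B in bases]
for G in Gs:
    assert abs(np.trace(G).real - (d - 1)) < 1e-10     # each basis contributes d - 1
for x in range(len(Gs)):
    for y in range(x + 1, len(Gs)):
        assert np.allclose(Gs[x] @ Gs[y], 0)           # mutually orthogonal supports

# hence F = sum_x Gbar_x is feasible, with trace (d+1)(d-1) = d^2 - 1
assert abs(sum(np.trace(G).real for G in Gs) - (d * d - 1)) < 1e-10
```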
An intriguing problem left open is how many bases are needed to construct a maximally incompatible (steerable) assemblage when complete sets of MUB cannot be found, say in dimension~6.
Next, we generalize \thref{thm:USI} by virtue of the order-monotonic functions $\mathcal{G}_Q$ and $\barcal{G}_Q$. Define the superoperators $\mathcal{R}_Q$~\cite{BrauC94,PetzS96,Zhu12the,Zhu15IC} and $\barcal{R}_Q$ by \begin{equation}\label{eq:LRmultiplication} \begin{aligned} \mathcal{R}_Q\dket{S}=\frac{1}{2}\dket{SQ+ QS},\quad \barcal{R}_Q=\mathcal{R}_Q-\frac{\douter{Q}{Q}}{\tr(Q)}. \end{aligned} \end{equation} Note that both $\mathcal{R}_Q$ and $\barcal{R}_Q$ are positive superoperators, that is, $\dbra{S}\mathcal{R}_Q\dket{S}\geq 0$ and $\dbra{S}\barcal{R}_Q\dket{S}\geq 0$ for any operator $S$; so $\mathcal{R}_Q^{1/2}$ and $\barcal{R}_Q^{1/2}$ are well defined. The analogue of \eref{eq:Gbound} reads \begin{equation}\label{eq:GQbound} \begin{aligned} \Tr(\mathcal{R}_Q \mathcal{G}_Q(\{\rho_a\}))&=\sum_a\frac{\tr (Q\rho_a^2)}{\tr(Q\rho_a)}\leq \sum_a\tr(\rho_a)=1,\\ \Tr(\barcal{R}_Q \barcal{G}_Q(\{\rho_a\}))&=\Tr(\mathcal{R}_Q \mathcal{G}_Q(\{\rho_a\}))-\frac{\tr(Q\rho_B)}{\tr (Q)} \\ &\leq 1-\frac{\tr(Q\rho_B)}{\tr(Q)} . \end{aligned} \end{equation} Again, the upper bounds are saturated if and only if the ensemble is rank 1. The operator $Q$ serves as a probe. If $Q=1$, then $\mathcal{R}_Q=\mathbf{I}$ and $\barcal{R}_Q=\bar{\mathbf{I}}$, where $\mathbf{I}$ is the identity superoperator and $\bar{\mathbf{I}}$ is the projector onto the space of traceless operators, so \eref{eq:GQbound} reduces to \eref{eq:Gbound}. Define \begin{equation}\label{eq:tauQ} \begin{aligned}
\tau_Q(\{\rho_{a|x}\})&=\min\{\Tr(\mathcal{F})| \mathcal{F}\geq \tilde{\mathcal{G}}_Q(\{\rho_{a|x}\}_a)\; \forall x\},\\
\bar{\tau}_Q(\{\rho_{a|x}\})&=\min\{\Tr(\mathcal{F})| \mathcal{F}\geq \tilde{\barcal{G}}_Q(\{\rho_{a|x}\}_a)\; \forall x\}, \end{aligned} \end{equation} where $\tilde{\mathcal{G}}_Q=\mathcal{R}_Q^{1/2}\mathcal{G}_Q\mathcal{R}_Q^{1/2}$ and $\tilde{\barcal{G}}_Q=\barcal{R}_Q^{1/2}\barcal{G}_Q\barcal{R}_Q^{1/2}$. By the same reasoning as in the proof of \thref{thm:USI}, we have \begin{theorem}\label{thm:USI2}
The functions $\tau_Q(\cdot)$ and $\bar{\tau}_Q(\cdot)$ are order-monotonic on assemblages for any invertible positive operator $Q$. Any unsteerable state assemblage $\{\rho_{a|x}\}$ satisfies
$\tau_Q(\{\rho_{a|x}\})\leq 1$ and $\bar{\tau}_Q(\{\rho_{a|x}\})\leq 1-\tr(Q\rho_B)/\tr(Q)$. Any compatible measurement assemblage $\{M_{a|x}\}$ satisfies
$\tau_Q(\{M_{a|x}\})\leq d$ and $\bar{\tau}_Q(\{M_{a|x}\})\leq d-1$. \end{theorem}
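The bound in \eref{eq:GQbound} underlying \thref{thm:USI2} is straightforward to verify numerically. The following Python sketch (our illustration, not part of the original text; the probe and the ensemble are randomly generated with a fixed seed) evaluates $\sum_a \tr(Q\rho_a^2)/\tr(Q\rho_a)$ and confirms that a rank-1 ensemble saturates the bound.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3

def rand_pos(dim):
    # random positive semidefinite operator
    A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    return A @ A.conj().T

# invertible positive probe Q and a random ensemble {rho_a} with sum_a tr(rho_a) = 1
Q = rand_pos(d) + np.eye(d)
ops = [rand_pos(d) for _ in range(4)]
total = sum(np.trace(r).real for r in ops)
ensemble = [r / total for r in ops]

def probe_overlap(ensemble, Q):
    # Tr(R_Q G_Q({rho_a})) = sum_a tr(Q rho_a^2) / tr(Q rho_a)
    return sum((np.trace(Q @ r @ r) / np.trace(Q @ r)).real for r in ensemble)

assert probe_overlap(ensemble, Q) <= 1 + 1e-12   # eq. (GQbound)

# a rank-1 ensemble saturates the bound: each term equals tr(rho_a)
rank1 = []
for p in (0.4, 0.6):
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    v /= np.linalg.norm(v)
    rank1.append(p * np.outer(v, v.conj()))
assert abs(probe_overlap(rank1, Q) - 1) < 1e-10
```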
To illustrate the power of \thref{thm:USI2}, let us consider a measurement assemblage $\{M_{a|x}\}$ composed of two different symmetric informationally complete measurements (SICs) \cite{Zaun11, ReneBSC04}. Since SICs form 2-designs and tight informationally complete measurements \cite{Scot06, ZhuE11, Zhu12the,ApplFZ15G}, we have
$\mathcal{G}(\{M_{a|1}\}_a)=\mathcal{G}(\{M_{a|2}\}_a)=(\mathbf{I}+\douter{1}{1})/(d+1)$ and $\tau(\{M_{a|x}\})= d$. If $Q$ is a generic quantum state, then $\mathcal{G}_Q(\{M_{a|1}\}_a)\neq\mathcal{G}_Q(\{M_{a|2}\}_a)$, so that $\tau_Q(\{M_{a|x}\})>d$.
In this case, the inequalities in \thref{thm:USI2} can detect incompatibility (steering) that cannot be detected by \thref{thm:USI}, thanks to the freedom in choosing the probe $Q$.
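The identity $\mathcal{G}(\{M_{a|x}\}_a)=(\mathbf{I}+\douter{1}{1})/(d+1)$ for a SIC, and hence $\tau(\{M_{a|x}\})=d$, can be checked directly. The sketch below (our illustration; it uses the standard qubit ``tetrahedron'' SIC, vectorizes operators, and represents superoperators as $d^2\times d^2$ matrices):

```python
import numpy as np

d = 2
w = np.exp(2j * np.pi / 3)
# a qubit SIC: |0> together with three "tetrahedron" states (1, sqrt(2) w^k)/sqrt(3)
kets = [np.array([1.0, 0.0], dtype=complex)]
kets += [np.array([1.0, np.sqrt(2) * w**k]) / np.sqrt(3) for k in range(3)]
M = [np.outer(v, v.conj()) / d for v in kets]      # SIC-POVM effects M_a

assert np.allclose(sum(M), np.eye(d))              # POVM condition
for a in range(4):                                 # equiangularity |<a|b>|^2 = 1/(d+1)
    for b in range(a + 1, 4):
        assert abs(abs(kets[a].conj() @ kets[b])**2 - 1 / (d + 1)) < 1e-12

vec = lambda X: X.reshape(-1)                      # |X>> as a flat vector
G = sum(np.outer(vec(Ma), vec(Ma).conj()) / np.trace(Ma).real for Ma in M)
one = vec(np.eye(d))
assert np.allclose(G, (np.eye(d**2) + np.outer(one, one.conj())) / (d + 1))
assert abs(np.trace(G).real - d) < 1e-12           # hence tau({M_{a|x}}) = d
```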
More general steering inequalities can be derived by considering the effect of filtering. The following proposition is an easy generalization of a result in \rcite{UolaBGP15}. \begin{proposition}\label{pro:FilterSteer}
The two assemblages $\{V\rho_{a|x}V^\dag\}$ and $\{V\rho_{a|x}^\mathrm{T} V^\dag\}$ (unnormalized) are both unsteerable for any operator $V$ if $\{\rho_{a|x}\}$ is unsteerable. When $V$ is invertible, $\{V\rho_{a|x}V^\dag\}$, $\{V\rho_{a|x}^\mathrm{T} V^\dag\}$, and $\{\rho_{a|x}\}$ are simultaneously steerable or not. \end{proposition} When Bob's state $\rho_B$ is invertible, \thref{thm:USI2} and \pref{pro:FilterSteer} imply that
any unsteerable assemblage $\{\rho_{a|x}\}$ satisfies \begin{equation}\label{eq:SteeringIneqGQrhoB}
\tau_Q(\{\rho_B^{-\frac{1}{2}} \rho_{a|x}\rho_B^{-\frac{1}{2}}\})\leq d, \quad
\bar{\tau}_Q(\{\rho_B^{-\frac{1}{2}}\rho_{a|x}\rho_B^{-\frac{1}{2}}\})\leq d-1. \end{equation}
So far, we have discussed steering in terms of Bob's state assemblage $\{\rho_{a|x}\}$. At this point, it is instructive to consider steering of Bob's state by Alice's measurements, as described by the assemblage
$\{A_{a|x}\}$, which is the physical situation illustrated in \fref{fig:steering}. Suppose they share a pure bipartite state $\rho$ of full Schmidt rank, which has the Schmidt decomposition $\rho=\sum_{j,k}\lambda_j\lambda_k |jj\rangle\langle kk|$. Then the reduced states of Alice and Bob have the same form with respect to the Schmidt basis
$\rho_{A}=\rho_{B}=\sum_j \lambda_j^2 |j\rangle\langle j|$, and the state assemblage $\{\rho_{a|x}\}$ for Bob takes on the form
$\rho_{a|x}=\rho_B^{1/2}A_{a|x}^\mathrm{T}\rho_B^{1/2}$ \cite{Haya06book,UolaBGP15}. Therefore, \begin{equation}\label{eq:H} \begin{aligned}
{\tau}_Q(\{\rho_B^{-1/2} \rho_{a|x}\rho_B^{-1/2}\})
={\tau}_Q(\{A_{a|x}^\mathrm{T}\}), \\
\bar{\tau}_Q(\{\rho_B^{-1/2} \rho_{a|x}\rho_B^{-1/2}\})=\bar{\tau}_Q(\{A_{a|x}^\mathrm{T}\}).
\end{aligned} \end{equation} As a consequence of \esref{eq:SteeringIneqGQrhoB} and \eqref{eq:H},
$\tau(\{A_{a|x}\}) \le d$ and $\bar{\tau}(\{A_{a|x}\}) \le d-1$ if Alice cannot steer Bob's system; note that ${\tau}(\{A_{a|x}^\mathrm{T}\})
={\tau}(\{A_{a|x}\})$ and $\bar{\tau}(\{A_{a|x}^\mathrm{T}\})
=\bar{\tau}(\{A_{a|x}\})$ according to \lref{lem:tauTranspose}
in the supplementary material. If the assemblage $\{A_{a|x}\}$ is composed of two MUB, then $\bar{\tau}(\{
A_{a|x}\})=2(d-1)$, which violates the second inequality by a factor of 2. Remarkably, the single steering inequality
$\bar{\tau}(\{\rho_B^{-\frac{1}{2}}\rho_{a|x}\rho_B^{-\frac{1}{2}}\}) \le d-1$ with two measurement settings can detect the steerability of all bipartite pure states of full Schmidt rank, whereas
infinitely many inequalities linear in $\rho_{a|x}$
(note that our inequalities are not linear in $\rho_{a|x}$) are needed to achieve the same task \cite{CavaJWR09, Puse13}. Moreover, no general recipe is known for constructing linear steering inequalities without numerical optimization, so our approach provides a dramatic improvement over the alternatives known in the literature. An additional example, on isotropic states, is presented in the supplementary material.
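As a concrete check of the two-MUB violation $\bar{\tau}(\{A_{a|x}\})=2(d-1)$ quoted above, the following sketch (our illustration, for $d=2$ with the Pauli $Z$ and $X$ bases) verifies the two saturation conditions of \pref{pro:MSAm}: the superoperators $\barcal{G}(\{A_{a|x}\}_a)$ for the two settings are mutually orthogonal and each has trace $d-1$, so $\bar{\tau}=\sum_x\Tr(\barcal{G})=2(d-1)$.

```python
import numpy as np

d = 2
Z = [np.array([1, 0], dtype=complex), np.array([0, 1], dtype=complex)]
X = [np.array([1, 1], dtype=complex) / np.sqrt(2),
     np.array([1, -1], dtype=complex) / np.sqrt(2)]
mub = [[np.outer(v, v.conj()) for v in basis] for basis in (Z, X)]

def Gbar(povm):
    # \bar{G}({M_a}): built from the traceless parts of the effects
    dim = povm[0].shape[0]
    out = 0
    for M in povm:
        Mb = M - np.trace(M) / dim * np.eye(dim)
        out = out + np.outer(Mb.reshape(-1), Mb.reshape(-1).conj()) / np.trace(M).real
    return out

G1, G2 = (Gbar(p) for p in mub)
assert np.allclose(G1 @ G2, 0)                      # mutually orthogonal
for G in (G1, G2):
    assert abs(np.trace(G).real - (d - 1)) < 1e-12  # rank-1 POVM: Tr = d-1
# orthogonality + rank 1 saturate Prop. (MSAm): tau_bar = m(d-1) = 2(d-1)
assert abs(np.trace(G1 + G2).real - 2 * (d - 1)) < 1e-12
```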
In summary, we have proposed a general framework for detecting and characterizing steering based on simple information-theoretic ideas. Based on the data-processing inequality for the extended R\'enyi relative entropy of order~2, we then introduced a family of universal steering inequalities that are applicable to arbitrary assemblages and have a simple interpretation. As illustrations, we showed unbounded violation of a steering inequality for assemblages constructed from MUB and provided a single steering inequality that can detect all bipartite pure states of full Schmidt rank. Our work established intriguing connections among a number of fascinating subjects, including information theory, quantum foundations, and geometry of quantum state space, which are of interest to researchers from diverse fields. In addition, our work has an intimate connection to quantum estimation theory. Indeed, our \thref{thm:GQOM} can be applied to prove the data-processing inequality for Fisher information, and vice versa (cf. the supplementary material). Also, our study allows us to derive and generalize many results in quantum estimation theory, which will be presented in another paper.
\section*{Acknowledgments} HZ is grateful to Matthew Pusey and Joshua Combes for comments and discussions. This work is supported by Perimeter Institute for Theoretical Physics. Research at Perimeter Institute is supported by the Government of Canada through Industry Canada and by the Province of Ontario through the Ministry of Research and Innovation. HZ also acknowledges financial support from the Excellence Initiative of the German Federal and State Governments (ZUK 81) and the DFG. MH is partially supported by a MEXT Grant-in-Aid for Scientific Research (A) No. 23246071 and the National Institute of Information and Communication Technology (NICT), Japan. The Centre for Quantum Technologies is funded by the Singapore Ministry of Education and the National Research Foundation as part of the Research Centres of Excellence programme. LC was supported by the NSF of China (Grant No. 11501024), and the Fundamental Research Funds for the Central Universities (Grant Nos. 30426401 and 30458601).
\setcounter{equation}{0} \setcounter{figure}{0} \setcounter{table}{0} \setcounter{theorem}{0} \setcounter{lemma}{0} \setcounter{remark}{0}
\makeatletter \renewcommand{\theequation}{S\arabic{equation}} \renewcommand{\thefigure}{S\arabic{figure}} \renewcommand{\thetable}{S\arabic{table}} \renewcommand{\thetheorem}{S\arabic{theorem}} \renewcommand{\thelemma}{S\arabic{lemma}} \renewcommand{\theremark}{S\arabic{remark}} \makeatother
\onecolumngrid
\begin{center}
\textbf{\large Supplementary material: Universal steering inequalities}
\end{center} In this supplementary material we prove \thref{thm:GQOM} in the main text
by proving the data-processing inequality for the extended R\'enyi relative entropy of order~2. We also explain the connection between \thref{thm:GQOM} and generalized data-processing inequalities for generalized relative entropies as well as the data-processing inequality for Fisher information.
Quite surprisingly, \thref{thm:GQOM} embodies a generalization of two distinct data-processing inequalities. This observation reveals an intriguing connection between the R\'enyi relative entropy and Fisher information, which deserves further study. In addition, we derive an equality between $\tau(\{\rho_{a|x}\})$ and $\bar{\tau}(\{\rho_{a|x}\})$, and prove \psref{pro:MSA} and \ref{pro:MSAm} in the main text, which are needed for determining maximally steerable assemblages. We also show that the functions $\tau(\{\rho_{a|x}\})$ and $\bar{\tau}(\{\rho_{a|x}\})$
introduced in the main text are invariant under transposition of $\rho_{a|x}$. Finally, we consider the steerability of isotropic states as another illustration of our approach.
\section{Proof of \thref{thm:GQOM}} \begin{proof} Let $A=\{A_k\}$ and $B=\{B_j\}$ be two sets of positive operators that satisfy
$B_j=\sum_k\Lambda_{jk}A_k$ for an arbitrary stochastic matrix~$\Lambda$. Let $C$ be an arbitrary Hermitian operator; then \begin{equation} \dbra{C}\mathcal{G}_Q(A)\dket{C}=\sum_k \frac{v_k^2}{p_k}, \quad \dbra{C}\mathcal{G}_Q(B) \dket{C}=\sum_j \frac{u_j^2}{q_j}, \end{equation} where $p_k=\tr(Q A_k)$, $q_j=\tr(Q B_j)$, $v_k=\tr(A_k C)$, and $u_j=\tr(B_j C)$ satisfy $q_j=\sum_k\Lambda_{jk} p_k$ and $u_j=\sum_k\Lambda_{jk} v_k$. According to \lref{lem:DPIRenyi2} below, $\dbra{C}\mathcal{G}_Q(B)\dket{C}\leq \dbra{C}\mathcal{G}_Q(A)\dket{C}$, which implies that $\mathcal{G}_Q(B)\leq \mathcal{G}_Q(A)$ and $\mathcal{G}_Q(\cdot)$ is order monotonic. By the same token, so is $\barcal{G}_Q(\cdot)$. Alternatively, the latter conclusion follows from the observation that $\barcal{G}_Q(\cdot)=\bar{\mathbf{I}}\mathcal{G}_Q(\cdot)\bar{\mathbf{I}}$.\end{proof}
\begin{remark} $\{A_k\}$ and $\{B_j\}$ in the above proof are not necessarily normalized assemblages as long as they are connected by a stochastic matrix. Therefore, \thref{thm:GQOM} is applicable to both normalized and unnormalized assemblages. \end{remark}
Let $\vec{p}$ and $\vec{v}$ be two real vectors of the same length and $p_k>0$ for all $k$. Following the main text, the extended R\'enyi relative entropy of order~2 between the two vectors is defined as
$D_2(\vec{v}\| \vec{p}):=\log \sum_k \frac{v_k^2}{p_k}$.
\begin{lemma}\label{lem:DPIRenyi2} Given two real vectors $\vec{p}$ and $\vec{v}$ as above, let $\vec{q}=\Lambda
\vec{p}$ and $\vec{u}=\Lambda \vec{v}$, where $\Lambda$ is a stochastic matrix. Then $D_2(\vec{u}\| \vec{q})\leq D_2(\vec{v}\| \vec{p})$, that is, \begin{equation}\label{eq:DPIRenyi2} \sum_j \frac{u_j^2}{q_j}\leq \sum_k \frac{v_k^2}{p_k}. \end{equation} \end{lemma} \begin{proof} The proof follows the same idea as in the proof of data-processing inequalities for generalized relative entropies~\cite{Csis67,Haya06book}. It relies on the convexity of the quadratic function: \begin{align} \sum_j \frac{u_j^2}{q_j}&=\sum_j q_j\biggl(\sum_k\frac{\Lambda_{jk}p_k}{q_j} \frac{v_k}{p_k}\biggr)^2\leq \sum_j q_j\sum_k\frac{\Lambda_{jk}p_k}{q_j}\Bigl( \frac{v_k}{p_k}\Bigr)^2=\sum_k\frac{v_k^2}{p_k}, \end{align} where we used the fact that $\{\frac{\Lambda_{jk}p_k}{q_j}\}_k$ forms a probability distribution. \end{proof} \begin{remark} Here we assume that no row of $\Lambda$ is identically zero, so that $q_j>0$ for all $j$. \end{remark}
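\Lref{lem:DPIRenyi2} is easy to test numerically. In the sketch below (our illustration; we take $\Lambda$ column-stochastic, i.e. nonnegative with unit column sums, so that it preserves the total mass of a vector) the inequality holds even though $\vec{v}$ has negative entries.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 6, 3
# column-stochastic Lambda: nonnegative entries, each column summing to 1
Lam = rng.random((m, n))
Lam /= Lam.sum(axis=0)

p = rng.random(n) + 0.1          # strictly positive vector
v = rng.normal(size=n)           # arbitrary real vector (negative entries allowed)
q, u = Lam @ p, Lam @ v

D2exp = lambda v, p: np.sum(v**2 / p)   # exp of the extended Renyi-2 relative entropy
assert D2exp(u, q) <= D2exp(v, p) + 1e-12
```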
When $\vec{v}$ and $\vec{p}$ are probability distributions,
$D_2(\vec{v}\| \vec{p})$ is the R\'enyi relative entropy of order~2, which obeys a well-known data-processing inequality \cite{Reny61,Haya06book}. In \lref{lem:DPIRenyi2}, the four vectors $\vec{q},\vec{p}, \vec{u},\vec{v}$ need not be probability distributions as long as the components of $\vec{p}$ and $\vec{q}$ are positive. Therefore, \eref{eq:DPIRenyi2} may be understood as a generalization of the data-processing inequality for the R\'enyi relative entropy of order~2. Although this relative entropy and its variants play important roles in information theory and cryptography \cite{BennBCM95,HastILL99}, we are not aware of any extension of the quantity $\log \sum_k \frac{v_k^2}{p_k}$ and of \eref{eq:DPIRenyi2} to general vectors in the literature. Our work may stimulate further progress in this direction.
\section{Generalized data-processing inequalities} In this section, we explore potential extension of our approach based on generalized data-processing inequalities. Let $f$ be a convex function defined on real numbers; let $\vec{p}$ and $\vec{v}$ be two real vectors of the same length such that $p_k>0$ for all $k$. Define the extended $f$-relative entropy as \begin{equation}
D_f(\vec{v}\| \vec{p})=\sum_k p_k f\Bigl(\frac{v_k}{p_k}\Bigr). \end{equation}
\begin{theorem}
$D_f(\vec{v}\|\vec{p})$ is monotonic under data processing for any convex function $f$, that is, \begin{equation}\label{eq:DPIRE}
D_f(\Lambda\vec{v}\| \Lambda\vec{p})\leq D_f(\vec{v}\|\vec{p}) \end{equation} for any stochastic matrix $\Lambda$. \end{theorem} \begin{remark} This theorem can be proved using Jensen's inequality for a convex function according to the same reasoning as in the proof of \lref{lem:DPIRenyi2} (cf. \rscite{Csis67,Haya06book}). \end{remark}
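The generalized data-processing inequality \eref{eq:DPIRE} can likewise be spot-checked for several convex functions that accept negative arguments, including the quadratic and quartic functions (our illustration, with a fixed random seed):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 6, 4
Lam = rng.random((m, n))
Lam /= Lam.sum(axis=0)           # column-stochastic matrix

p = rng.random(n) + 0.1          # strictly positive
v = rng.normal(size=n)           # may have negative entries
q, u = Lam @ p, Lam @ v

def Df(v, p, f):
    # extended f-relative entropy: sum_k p_k f(v_k / p_k)
    return np.sum(p * f(v / p))

# check monotonicity for convex f defined on all of R
for f in (lambda x: x**2, lambda x: x**4, np.abs):
    assert Df(u, q, f) <= Df(v, p, f) + 1e-12
```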
When $\vec{p}$ and $\vec{v}$ are probability distributions, $D_f(\vec{v}\|\vec{p})$ is known as the $f$-relative entropy ($f$-divergence) between $\vec{p}$ and $\vec{v}$. It quantifies the closeness between the two probability distributions, a notion intimately connected to mutual information. In that case, \eref{eq:DPIRE} reduces to the data-processing inequality (also known as the information-processing inequality) for the $f$-relative entropy; cf. \rscite{Csis67, Haya06book}. In addition, we can relax the requirement on $f$ to
be a convex function defined over positive numbers. When $f(x)=x\log x$, $D_f(\vec{v}\| \vec{p})$ reduces to the usual relative entropy, and \eref{eq:DPIRE} represents the well known data-processing inequality for the relative entropy.
When $f(x)=x^2$, $\log D_f(\vec{v} \| \vec{p})$ coincides with the R\'enyi relative entropy of order~2, and \eref{eq:DPIRE} takes on the same form as \eref{eq:DPIRenyi2}.
In our application, $v_k$ may take on negative values (this is the case for $v_k=\tr(A_k C)$ in the proof of \thref{thm:GQOM}), so it is desirable to choose the function $f$ that can accept a negative entry such as $f(x)=x^2$. This is the reason we introduce the extended $f$-relative entropy. Other potential choices include the quartic function $f(x)=x^4$. We leave it open whether such functions can be used to construct steering inequalities.
\section{Connection with the data-processing inequality for Fisher information} In this section, we explain the connection between the order-monotonic functions $\mathcal{G}_Q(\cdot)$ and $\barcal{G}_Q(\cdot)$ in Theorem 1 and
Fisher information with regard to the data-processing inequality \cite{Zami98}.
Consider a random variable $A$ as the outcome of a measurement or observation. Suppose $A$ takes on values over positive integers and the probability of obtaining outcome $j$ is $p_A(j|\theta)$, where $\theta$ is the unknown parameter. The \emph{Fisher information} associated with $A$ is given by \begin{align}\label{sym:FisherInf}
I_A(\theta)&= \sum_j p_A(j|\theta)\Bigl(\frac{\partial \ln p_A(j|\theta)}{\partial
\theta}\Bigr)^2=\sum_j\frac{1}{p_A(j|\theta)}\Bigl(\frac{\partial p_A(j|\theta)}{\partial \theta}\Bigr)^2. \end{align}
\begin{lemma}\label{lem:chainRule}
Suppose $A, B$ are two random variables with joint distribution $p(j,k|\theta)$. Then the total Fisher information provided by the two random variables is given by\begin{equation}
I_{A,B}(\theta)=I_A(\theta)+ I_{B|A}(\theta), \end{equation} where \begin{equation}
I_{B|A}(\theta)=\sum_{j,k}p(j,k|\theta) \biggl(\frac{\partial \ln p_{B|A}(k|j;\theta)}{\partial \theta}\biggr)^2 \end{equation}
is the conditional Fisher information, and $p_{B|A}(k|j;\theta)$ is the conditional probability distribution. \end{lemma} \begin{remark} This chain rule for Fisher information and the data-processing inequality in \thref{thm:FIDPI} below were derived in \rcite{Zami98} (in a slightly different form). \end{remark}
\begin{proof} \begin{align} &I_{A,B}(\theta)=\sum_{j,k}
p(j,k|\theta)\Bigl(\frac{\partial \ln p(j,k|\theta)}{\partial
\theta}\Bigr)^2=\sum_{j,k} p(j,k|\theta) \biggl(\frac{\partial \ln p_A(j|\theta)}{\partial
\theta}+\frac{\partial \ln p_{B|A}(k|j;\theta)}{\partial \theta}\biggr)^2\nonumber\\
&=\sum_{j,k} p(j,k|\theta) \biggl[\biggl(\frac{\partial \ln p_A(j|\theta)}{\partial
\theta}\biggr)^2+ \biggl(\frac{\partial \ln p_{B|A}(k|j;\theta)}{\partial
\theta}\biggr)^2\biggr]+ 2\sum_{j,k} p(j,k|\theta)\frac{\partial \ln p_A(j|\theta)}{\partial
\theta}\frac{\partial \ln p_{B|A}(k|j;\theta)}{\partial \theta} \nonumber \\
&=I_A(\theta) +I_{B|A}(\theta), \end{align} where in deriving the last equality, we have employed the following equation \begin{align}
&\sum_{j,k} p(j,k|\theta)\frac{\partial \ln p_A(j|\theta)}{\partial \theta}\frac{\partial
\ln p_{B|A}(k|j;\theta)}{\partial \theta}\nonumber \\
&=\sum_j\biggl( p_A(j|\theta)\frac{\partial \ln p_A(j|\theta)}{\partial
\theta} \sum_k p_{B|A}(k|j;\theta)\frac{\partial \ln p_{B|A}(k|j;\theta)}{\partial \theta}\biggr)\nonumber \\
&=\sum_j \biggl( p_A(j|\theta)\frac{\partial \ln p_A(j|\theta)}{\partial
\theta}\sum_k\frac{\partial p_{B|A}(k|j;\theta)}{\partial \theta}\biggr)=0, \end{align}
which follows from $\sum_k p_{B|A}(k|j;\theta)=1$. \end{proof}
\begin{theorem}\label{thm:FIDPI} Suppose $A, B$ are two random variables whose marginal distributions satisfy
$p_B(j|\theta)=\sum_k\Lambda_{jk}p_A(k|\theta)$, where $\Lambda$ is a stochastic matrix independent of the parameter $\theta$. Then $I_B(\theta)\leq I_A(\theta)$. \end{theorem} This theorem is an immediate consequence of \lref{lem:chainRule}, given that
$I_{B|A}(\theta)=0$ since the conditional probability distribution $p_{B|A}$ is independent of $\theta$.
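\Thref{thm:FIDPI} can be illustrated with a simple parametric model. In the sketch below (our illustration; the $\theta$-derivative of $p_A(\cdot|\theta)$ is entered in closed form, and $\Lambda$ is column-stochastic and independent of $\theta$), the Fisher information can only decrease under the stochastic map:

```python
import numpy as np

rng = np.random.default_rng(3)

def fisher(p, dp):
    # I(theta) = sum_j (dp_j/dtheta)^2 / p_j
    return np.sum(dp**2 / p)

theta = 0.3
pA  = np.array([theta / 2, theta / 2, 1 - theta])   # distribution of A
dpA = np.array([0.5, 0.5, -1.0])                    # its exact theta-derivative

Lam = rng.random((4, 3))
Lam /= Lam.sum(axis=0)            # column-stochastic, independent of theta
pB, dpB = Lam @ pA, Lam @ dpA     # p_B = Lambda p_A implies dp_B = Lambda dp_A

assert fisher(pB, dpB) <= fisher(pA, dpA) + 1e-12   # I_B <= I_A
```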
In the rest of this section we show that the superoperators $\mathcal{G}_Q(A)$ and $\barcal{G}_Q(A)$ introduced in the main text may be understood as Fisher information matrices in superoperator form when $Q$ is a quantum state and $A=\{A_k\}$ is a POVM. Note that $\tr(Q A_k)$ is the probability of obtaining outcome $A_k$ upon measuring $A$ on $Q$ and that \begin{equation} \mathcal{G}_Q(A)=\sum_k \frac{\douter{A_k}{A_k} }{\tr(Q A_k)},\quad \barcal{G}_Q(A)=\sum_k \frac{\douter{\bar{A}_k}{\bar{A}_k} }{\tr(Q A_k)}. \end{equation} Suppose $Q$ is a quantum state that depends on the parameter $\theta$. Then the Fisher information concerning $\theta$ provided by the measurement $A$ is \begin{equation} I_A(\theta)=\sum_k \frac{1}{\tr(Q A_k)}\tr\Bigl(\frac{\partial Q}{\partial \theta}A_k\Bigr)^2=\Dbra{\frac{\partial Q}{\partial \theta}}\mathcal{G}_Q(A)\Dket{\frac{\partial Q}{\partial \theta}}=\Dbra{\frac{\partial Q}{\partial \theta}}\barcal{G}_Q(A)\Dket{\frac{\partial Q}{\partial \theta}}, \end{equation} where the third equality follows from the fact that $\partial Q/\partial \theta$ is traceless. Therefore, $\mathcal{G}_Q(A)$ and $\barcal{G}_Q(A)$ are Fisher information matrices in disguise, and \thref{thm:GQOM} in the main text embodies generalized data-processing inequalities for Fisher information. As an implication of our result, the Fisher information data-processing inequality is also valid for incomplete observations or measurements. These results are of interest beyond the focus of this paper, such as in quantum metrology \cite{CombFJC14,Ferr14}.
\section{Connection between $\tau(\{\rho_{a|x}\})$ and $\bar{\tau}(\{\rho_{a|x}\})$}
\begin{proposition}\label{pro:taudif}Any state assemblage $\{\rho_{a|x}\}$ satisfies
$\tau(\{\rho_{a|x}\})=\bar{\tau}(\{\rho_{a|x}\})+1/d$. Any measurement assemblage $\{M_{a|x}\}$ satisfies $\tau(\{M_{a|x}\})=\bar{\tau}(\{M_{a|x}\})+1$. \end{proposition}
\begin{proof}Let $\Delta_x=\mathcal{G}(\{\rho_{a|x}\}_a)-\barcal{G}(\{\rho_{a|x}\}_a)$. Calculation shows that $\Delta_x$ is independent of $x$, \begin{equation} \Delta_x=\Delta:=\frac{\douter{1}{\rho_B}+\douter{\rho_B}{1}}{d}-\frac{\tr(\rho_B)}{d^2}(\douter{1}{1}),\quad \Tr(\Delta_x)=\Tr(\Delta)=\frac{\tr(\rho_B)}{d}=\frac{1}{d}. \end{equation}
If $\mathcal{F}\geq \mathcal{G}(\{\rho_{a|x}\}_a)$ for all $x$, then $\mathcal{F}-\Delta\geq \barcal{G}(\{\rho_{a|x}\}_a)$ for all $x$. So $\bar{\tau}(\{\rho_{a|x}\})\leq \tau(\{\rho_{a|x}\})-\Tr(\Delta)=\tau(\{\rho_{a|x}\})-1/d$. Similarly, we have the inequality $\tau(\{\rho_{a|x}\})\leq \bar{\tau}(\{\rho_{a|x}\})+1/d$. It follows that $\tau(\{\rho_{a|x}\})=\bar{\tau}(\{\rho_{a|x}\})+1/d$. The equality $\tau(\{M_{a|x}\})=\bar{\tau}(\{M_{a|x}\})+1$ follows from the same reasoning. \end{proof}
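Both claims in the proof of \pref{pro:taudif}, that $\Delta_x$ depends only on $\rho_B$ and that $\Tr(\Delta)=1/d$, can be confirmed numerically. The sketch below (our illustration) compares the spectral ensemble of a random $\rho_B$ with the trivial split $\{\rho_B/3,\,2\rho_B/3\}$:

```python
import numpy as np

rng = np.random.default_rng(4)
d = 3
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rhoB = A @ A.conj().T
rhoB /= np.trace(rhoB).real                 # random density matrix

vec = lambda X: X.reshape(-1)
def G(ensemble, traceless=False):
    # frame superoperator G (or G-bar if built from traceless parts)
    out = 0
    for r in ensemble:
        rb = r - np.trace(r) / d * np.eye(d) if traceless else r
        out = out + np.outer(vec(rb), vec(rb).conj()) / np.trace(r).real
    return out

# two different ensembles of the same rho_B
w, U = np.linalg.eigh(rhoB)
spectral = [w[i] * np.outer(U[:, i], U[:, i].conj()) for i in range(d)]
split = [rhoB / 3, 2 * rhoB / 3]

deltas = [G(e) - G(e, traceless=True) for e in (spectral, split)]
assert np.allclose(deltas[0], deltas[1])            # Delta_x independent of x
assert abs(np.trace(deltas[0]).real - 1 / d) < 1e-12
```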
\section{Proofs of \psref{pro:MSA} and \ref{pro:MSAm}} In this section we prove \psref{pro:MSA} and \ref{pro:MSAm} in the main text, which are needed for determining maximally steerable assemblages.
\begin{proof}[Proof of \pref{pro:MSA}]
According to \lref{lem:SLDbound} below, $\tau(\{M_{a|x}\})\leq \Tr(\mathbf{I})
= d^2$ and $\bar{\tau}(\{M_{a|x}\})\leq \Tr(\bar{\mathbf{I}}) =d^2-1$. Here $\mathbf{I}$ is the identity superoperator and $\bar{\mathbf{I}}$ is the projector onto the space of traceless operators. By the same token, \begin{equation}
\tau(\{\rho_{a|x}\})\leq \Tr(\mathcal{R}_{\rho_B}) = d\tr(\rho_B)=d,\quad
\bar{\tau}(\{\rho_{a|x}\})\leq \Tr(\bar{\mathbf{I}}\mathcal{R}_{\rho_B}\bar{\mathbf{I}}) =\frac{(d^2-1)\tr(\rho_B)}{d}=d-\frac{1}{d}, \end{equation}
where $\mathcal{R}_{\rho_B}$ is defined as in \eref{eq:LRmultiplication} in the main text. However, $\bar{\mathbf{I}}\mathcal{R}_{\rho_B}\bar{\mathbf{I}}$ is in general not equal to $\barcal{R}_{\rho_B}$ in \eref{eq:LRmultiplication}. \end{proof} \begin{remark}
According to \pref{pro:taudif}, the upper bound in $\tau(\{M_{a|x}\})\leq d^2$ is saturated if and only if the upper bound in $\bar{\tau}(\{M_{a|x}\})\leq d^2-1$ is saturated. Similarly, the two upper bounds in $\tau(\{\rho_{a|x}\})\leq d$ and $\bar{\tau}(\{\rho_{a|x}\})\leq d-1/d$ are simultaneously saturated or not. \end{remark}
\begin{proof}[Proof of \pref{pro:MSAm}] According to \lref{lem:SumBound} below and \eref{eq:Gbound} in the main text,
$\bar{\tau}(\{M_{a|x}\})\leq
\sum_x \Tr(\barcal{G}(\{M_{a|x}\}_a))\leq m(d-1)$. The first inequality is saturated if and only if the POVMs in the assemblage are mutually orthogonal, and the second is saturated if and only if all the POVMs have rank 1. The same proof applies to state assemblages. \end{proof}
In the rest of this section we prove the two auxiliary lemmas needed in the proofs of \psref{pro:MSA} and \ref{pro:MSAm}. \begin{lemma}\label{lem:SLDbound} Any POVM $\{M_a\}$ satisfies $\mathcal{G}(\{M_a\})\leq \mathbf{I}$ and
$\barcal{G}(\{M_a\})\leq \bar{\mathbf{I}}$. Any ensemble $\{\rho_a\}$ for the state~$\rho_B$ satisfies $\mathcal{G}(\{\rho_a\})\leq \mathcal{R}_{\rho_B}$ and $\barcal{G}(\{\rho_a\})\leq \bar{\mathbf{I}}\mathcal{R}_{\rho_B}\bar{\mathbf{I}}$. \end{lemma} \begin{remark} The bounds in the lemma are closely related to the quantum Cram\'er-Rao bound based on the symmetric logarithmic derivative \cite{Zhu12the,Zhu15IC}. \end{remark} \begin{proof} Let $C$ be an arbitrary Hermitian operator. Then \begin{align} \dbra{C}\mathcal{G}(\{M_a\})\dket{C}=\sum_a\frac{\dinner{C}{M_a}\dinner{M_a}{C}}{\tr(M_a)} =\sum_a\frac{\bigl[\tr\bigl(CM_a^{1/2} M_a^{1/2}\bigr)\bigr]^2}{\tr(M_a)}\leq \sum_a \tr(C^2M_a)=\tr(C^2), \end{align} which implies that $\mathcal{G}(\{M_a\})\leq \mathbf{I}$. Conjugation by $\bar{\mathbf{I}}$ yields $\barcal{G}(\{M_a\})\leq \bar{\mathbf{I}}$. By the same token, we have \begin{align} \dbra{C}\mathcal{G}(\{\rho_a\})\dket{C}\leq\tr(C^2\rho_B)=\dbra{C}\mathcal{R}_{\rho_B}\dket{C}, \end{align} which implies that $\mathcal{G}(\{\rho_a\})\leq \mathcal{R}_{\rho_B}$ and $\barcal{G}(\{\rho_a\})\leq \bar{\mathbf{I}}\mathcal{R}_{\rho_B}\bar{\mathbf{I}}$. \end{proof}
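\Lref{lem:SLDbound} can be checked for a randomly generated POVM. The sketch below (our illustration; the POVM is obtained by symmetrizing random positive operators with $S^{-1/2}(\cdot)S^{-1/2}$, where $S$ is their sum) verifies that all eigenvalues of $\mathbf{I}-\mathcal{G}(\{M_a\})$ are nonnegative:

```python
import numpy as np

rng = np.random.default_rng(5)
d = 3

def rand_pos(dim):
    A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    return A @ A.conj().T

# random POVM: M_k = S^{-1/2} A_k S^{-1/2} with S = sum_k A_k
A = [rand_pos(d) for _ in range(5)]
S = sum(A)
w, U = np.linalg.eigh(S)
Sinvh = U @ np.diag(w**-0.5) @ U.conj().T
M = [Sinvh @ Ak @ Sinvh for Ak in A]
assert np.allclose(sum(M), np.eye(d))               # POVM condition

vec = lambda X: X.reshape(-1)
G = sum(np.outer(vec(Mk), vec(Mk).conj()) / np.trace(Mk).real for Mk in M)
# G({M_a}) <= identity superoperator: spectrum of I - G is nonnegative
assert np.linalg.eigvalsh(np.eye(d**2) - G).min() > -1e-10
```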
\begin{lemma}\label{lem:SumBound}
Any assemblage $\{\rho_{a|x}\}$ satisfies \begin{equation}
\bar{\tau}(\{\rho_{a|x}\})\leq \sum_x \Tr(\barcal{G}(\{\rho_{a|x}\}_a))=\sum_{a,x}\frac{\tr(\bar{\rho}_{a|x}^2)}{\tr(\rho_{a|x})}. \end{equation} The upper bound is saturated if and only if the ensembles in the assemblage
are mutually orthogonal. \end{lemma}
\Lref{lem:SumBound} is an immediate consequence of the following lemma. \begin{lemma} Suppose $A_j$ are positive operators in dimension $d$ and \begin{equation}\label{eq:tfun}
t(\{A_j\}):=\min\{\tr(F)| F\geq A_j\;\forall j\}. \end{equation} Then $t(\{A_j\})\leq \sum _j \tr(A_j)$ and the upper bound is saturated if and only if the $A_j$ have mutually orthogonal support. In that case, the operator that attains the minimum in \eref{eq:tfun} is unique and is equal to $\sum_j A_j$. \end{lemma} \begin{proof} Let $C=\sum_j A_j$; then $C\geq A_j$ for all $j$. So \begin{equation}\label{aeq:tupperbound} t(\{A_j\})\leq \tr(C)= \sum _j \tr(A_j). \end{equation}
Let $E$ be an operator that attains the minimum in \eref{eq:tfun}, $P_j$ the projector onto the support of $A_j$, and $P^\bot$ the projector onto the kernel of $\sum_j A_j$. If the $A_j$ are mutually orthogonal, then the $P_j$ are mutually orthogonal and orthogonal to $P^\bot$. In addition, $P^\bot+\sum_j P_j=1$ and $P_j E P_j\geq P_jA_jP_j=A_j$, so that \begin{align} t(\{A_j\})&=\tr(E)= \tr(P^\bot E P^\bot) +\sum_j \tr(P_j E P_j)\geq \sum_j \tr(P_j E P_j)\geq\sum_j\tr(A_j), \end{align} which implies that $t(\{A_j\})= \sum _j \tr(A_j)$ given \eref{aeq:tupperbound}. Furthermore, saturating these inequalities implies that $P^\bot E P^\bot=0$ and $P_j E P_j=A_j$ for all $j$. Given that $E-A_j$ is positive semidefinite for all $j$, we conclude that $P^\bot E P_j=P_j E P^\bot =0$ for all $j$ and $P_j E P_k=0$ for all $j\neq k$. Consequently, $E=\sum_j P_j E P_j=\sum_j A_j$.
If two of the $A_j$, say $A_1$ and $A_2$, are not orthogonal, then there exists a (nonzero) positive operator $B$ that satisfies $B\leq A_1$ and $B\leq A_2$. Let $C=\sum_j A_j-B$; then $C\geq A_j$ for all $j$. So \begin{equation} t(\{A_j\})\leq \tr(C)= \sum _j \tr(A_j)-\tr(B)< \sum _j \tr(A_j). \end{equation} \end{proof}
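For commuting (diagonal) positive operators the function $t(\{A_j\})$ in \eref{eq:tfun} is computable in closed form: any feasible $F$ satisfies $\langle k|F|k\rangle\geq (A_j)_{kk}$, so the minimum of $\tr(F)$ is attained at the entrywise maximum. The sketch below (our illustration) exhibits both the saturated and the strict case of the lemma:

```python
import numpy as np

def t_diag(diags):
    # for diagonal positive operators, min{tr(F) : F >= A_j for all j}
    # is the sum of the entrywise maxima
    return np.maximum.reduce(diags).sum()

A1 = np.array([1.0, 2.0, 0.0, 0.0])
A2 = np.array([0.0, 0.0, 3.0, 1.0])     # support orthogonal to A1
assert t_diag([A1, A2]) == A1.sum() + A2.sum()       # saturated: 7 = 7

A2p = np.array([0.0, 1.0, 3.0, 1.0])    # overlaps A1 on the second slot
assert t_diag([A1, A2p]) < A1.sum() + A2p.sum()      # strict: 7 < 8
```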
\section{Steering measures under transposition}
Here we show that $\tau(\{\rho_{a|x}\})$ and $\bar{\tau}(\{\rho_{a|x}\})$
are invariant under transposition of $\rho_{a|x}$. \begin{lemma}\label{lem:tauTranspose} \begin{equation}
\tau(\{\rho_{a|x}^\mathrm{T}\})=\tau(\{\rho_{a|x}\}), \quad \bar{\tau}(\{\rho_{a|x}^\mathrm{T}\})=\bar{\tau}(\{\rho_{a|x}\}). \end{equation} \end{lemma} \begin{proof}
Let $\mathcal{F}$ be a superoperator that satisfies $\mathcal{F}\geq \mathcal{G}(\{\rho_{a|x}\})$
for all $x$ and that $\Tr(\mathcal{F})=\tau(\{\rho_{a|x}\})$. Decompose $\mathcal{F}$ as $\mathcal{F}=\sum_j r_j (\douter{R_j}{R_j})$ and let $\mathcal{F}_\mathrm{T}=\sum r_j (\douter{R_j^\mathrm{T}}{R_j^\mathrm{T}})$. Then we have \begin{equation}
\sum_j r_j( \douter{R_j}{R_j})\geq \sum_a \frac{\douter{\rho_{a|x}}{\rho_{a|x}
}}{\tr(\rho_{a|x})}\quad \forall x, \end{equation} which implies that \begin{equation}
\mathcal{F}_\mathrm{T}\geq \sum_a \frac{\douter{\rho_{a|x}^\mathrm{T}}{\rho_{a|x}^\mathrm{T}
}}{\tr(\rho_{a|x})}=\mathcal{G}(\{\rho_{a|x}^\mathrm{T}\}_a) \quad \forall x. \end{equation}
Therefore, $\tau(\{\rho_{a|x}^\mathrm{T}\})\leq \Tr(\mathcal{F}_\mathrm{T})=\Tr(\mathcal{F})=\tau(\{\rho_{a|x}\})$. By symmetry we also have $\tau(\{\rho_{a|x}\})\leq \tau(\{\rho_{a|x}^\mathrm{T}\})$. It follows that $\tau(\{\rho_{a|x}^\mathrm{T}\})=\tau(\{\rho_{a|x}\})$. The equality
$\bar{\tau}(\{\rho_{a|x}^\mathrm{T}\})=\bar{\tau}(\{\rho_{a|x}\})$ follows from the same reasoning. \end{proof}
\section{Steerability of isotropic states} Consider the family of isotropic states in dimension $d\times d$, parametrized by $\alpha$ with $0\leq \alpha\leq 1$, \begin{equation}
\rho(\alpha)=(1-\alpha)\frac{1}{d^2}+\alpha |\Phi\rangle\langle \Phi|, \end{equation}
where $|\Phi\rangle=(\sum_j |jj\rangle )/\sqrt{d}$. Suppose Alice has the measurement assemblage $\{A_{a|x}\}$; then Bob has the state assemblage $\{\rho_{a|x}\}$ with \begin{equation}
\rho_{a|x}=\tr_A[(A_{a|x}\otimes 1)\rho(\alpha)]=\frac{1}{d}\Bigl[\alpha A_{a|x}^\mathrm{T}+\frac{1-\alpha}{d}\tr(A_{a|x})\Bigr], \end{equation}
which may be seen as a coarse graining of the assemblage $\{A_{a|x}^\mathrm{T}/d\}$. Calculation shows that \begin{equation}
\bar{\tau}(\{\rho_{a|x}\})=\frac{\alpha^2}{d}\bar{\tau}(\{A_{a|x}^\mathrm{T}\})=\frac{\alpha^2}{d}\bar{\tau}(\{A_{a|x}\}), \end{equation}
where the second equality follows from \lref{lem:tauTranspose}. The isotropic state is steerable with respect to $\{A_{a|x}\}$ when $\alpha^2\bar{\tau}(\{A_{a|x}\})> (d-1)$. If the measurement assemblage for Alice is composed of $m$ MUB, then
$\bar{\tau}(\{A_{a|x}\})=m(d-1)$, so the isotropic state is steerable if $m\alpha^2>1$. In the case of two qubits, this condition turns out to be both sufficient and necessary \cite{CavaJWR09,KogiSCA15}. Note that a two-qubit isotropic state is equivalent to a Werner state under a local unitary transformation.
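The scaling relation behind the displayed formula, $\barcal{G}(\{\rho_{a|x}\}_a)=(\alpha^2/d)\,\barcal{G}(\{A_{a|x}^\mathrm{T}\}_a)$, from which $\bar{\tau}(\{\rho_{a|x}\})=(\alpha^2/d)\bar{\tau}(\{A_{a|x}^\mathrm{T}\})$ follows by positive homogeneity, can be verified numerically. The sketch below (our illustration, for $d=2$ and the three Pauli MUB) checks it setting by setting:

```python
import numpy as np

d, alpha = 2, 0.8
Z = [np.diag([1.0, 0.0]).astype(complex), np.diag([0.0, 1.0]).astype(complex)]
X = [np.full((2, 2), 0.5, dtype=complex),
     np.array([[0.5, -0.5], [-0.5, 0.5]], dtype=complex)]
sy = np.array([[0, -1j], [1j, 0]])
Y = [(np.eye(2) + s * sy) / 2 for s in (1, -1)]
mub = [Z, X, Y]                               # Alice measures three MUB

def Gbar(ops):
    # \bar{G} built from the traceless parts of the operators
    out = 0
    for M in ops:
        Mb = M - np.trace(M) / d * np.eye(d)
        out = out + np.outer(Mb.reshape(-1), Mb.reshape(-1).conj()) / np.trace(M).real
    return out

def bob_assemblage(A):
    # rho_{a|x} for the isotropic state rho(alpha)
    return [(alpha * M.T + (1 - alpha) * np.trace(M) / d * np.eye(d)) / d for M in A]

for A in mub:
    assert np.allclose(Gbar(bob_assemblage(A)), alpha**2 / d * Gbar([M.T for M in A]))
```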
\end{document}
\begin{document}
\def \\ { \cr } \def{\math R}{{\fam\mathfam R}} \def{\math N}{{\fam\mathfam N}} \def{\math E}{{\fam\mathfam E}} \def{\math P}{{\fam\mathfam P}} \def{\math Z}{{\fam\mathfam Z}} \def{\math Q}{{\fam\mathfam Q}} \def{\math C}{{\fam\mathfam C}} \def \e{{\rm e}} \def \f{{\cal F}} \def \g{{\cal G}} \def \h{{\cal H}} \def \d{{\tt d}} \def \k{{\tt k}} \def \i{{\tt i}} \def \B{{\bf B}} \def \L{{\cal L}} \newcommand{\ed}{\mbox{$ \ \stackrel{d}{=}$ }} \newtheorem{theorem}{Theorem} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{definition}[theorem]{Definition} \centerline{\Large \bf {Some Connections Between (Sub)Critical}} \vskip 2mm \centerline{\Large \bf Branching Mechanisms and Bernstein Functions}
\vskip 1cm \centerline{\Large \bf Jean Bertoin$^{(1)}$, Bernard Roynette$^{(2)}$, and Marc Yor$^{(1)}$}
\vskip 1cm \noindent (1) {\sl Laboratoire de Probabilit\'es et Mod\`eles Al\'eatoires and Institut universitaire de France,
Universit\'e Pierre et Marie Curie, 175, rue du Chevaleret,
F-75013 Paris, France.} \vskip 2mm \noindent (2) {\sl {Institut Elie Cartan, Campus Scientifique, BP 239, Vandoeuvre-l\`es-Nancy Cedex F-54056, France }}
\begin{abstract} We describe some connections, via composition, between two functional spaces: the space of (sub)critical branching mechanisms and the space of Bernstein functions. The functions ${\bf e}_\alpha: x\to x^{\alpha}$ where $x\geq0$ and $0<\alpha\leq 1/2$, and in particular the critical parameter $\alpha=1/2$, play a distinguished role. \end{abstract}
\section{Introduction} This note is a prolongation of \cite{RY} where the following remarkable property of the function ${\bf e}_\alpha: x\to x^{\alpha}$ was pointed out for $\alpha=1/2$: if $\Psi$ is a (sub)critical branching mechanism, then $\Psi\circ {\bf e}_{1/2} $ is a Bernstein function (see the next section for the definition of these notions). In the present work, we first show that this property extends to every $\alpha\in]0,1/2]$. Then we characterize the class of so-called internal functions, i.e. that of Bernstein functions $\Phi$ such that the compound function $\Psi\circ \Phi$ is again a Bernstein function for every (sub)critical branching mechanism $\Psi$. In the final section, we gather classical results on transformations of completely monotone functions, Bernstein functions and (sub)critical branching mechanisms which are used in our analysis.
\section{Some functional spaces}
\subsection{Completely monotone functions}
To every Radon measure $\mu$ on $[0,\infty[$, we associate the function $\L_{\mu}: ]0,\infty[\to[0,\infty]$ defined by \begin{equation}\label{eq0} \L_{\mu}(q):=\int_{]0,\infty[}\e^{-qx}\mu(dx)\,, \end{equation} i.e. $\L_{\mu}$ is the Laplace transform of $\mu$. We denote by \begin{equation}\label{eq1} {\bf CM}:=\left\{\L_\mu: \L_\mu(q)<\infty\hbox{ for all }q>0\right\}\,, \end{equation} which is an algebraic convex cone (i.e. a convex cone which is further stable under pointwise multiplication). The celebrated theorem of Bernstein (see for instance Theorem 3.8.13 in \cite{Jacob}) identifies ${\bf CM}$ with the space of completely monotone functions, i.e. functions $f: ]0,\infty[ \to[0,\infty[$ of class ${\cal C}^{\infty}$ such that for every integer $n\geq1$, the $n$-th derivative $f^{(n)}$ of $f$ has the same sign as $(-1)^n$. Recall from monotone convergence that $\L_\mu$ has a (possibly infinite) limit at $0+$ which coincides with the total mass of $\mu$.
We shall focus on two natural sub-cones of ${\bf CM}$: \begin{equation}\label{eq2} \B_1:=\left\{\L_{\mu}: \int_{]0,\infty[}(1\wedge x^{-1})\mu(dx)<\infty\right\}\,. \end{equation}
We further denote by $\B_1^{\downarrow}$ the sub-space of functions in $\B_1$ which are the Laplace transforms of absolutely continuous measures with a decreasing density: \begin{equation}\label{eq4} \B_1^{\downarrow}:=\left\{\L_\mu: \mu(dx)=g(x)dx, g \hbox{ decreasing and } \int_{0}^{\infty}(1\wedge x^{-1})g(x)dx<\infty\right\}. \end{equation} Note that the density $g$ then has limit $0$ at infinity.
\subsection{Bernstein functions} For every triple $(a,b,\Lambda)$ with $a,b\geq0$ and $\Lambda$ a positive measure on $]0,\infty[$ such that \begin{equation}\label{eq5} \int_{]0,\infty[}(x\wedge 1) \Lambda(dx)<\infty\,, \end{equation} we associate the function $\Phi_{a,b,\Lambda}: ]0,\infty[\to [0,\infty[$ defined by \begin{equation}\label{eq6} \Phi_{a,b,\Lambda}(q):=a +bq+\int_{]0,\infty[}( 1-\e^{-q x})\Lambda(dx)\,, \end{equation} and call $\Phi_{a,b,\Lambda}$ the Bernstein function with characteristics $(a,b,\Lambda)$. We denote the convex cone of Bernstein functions by \begin{equation}\label{eq7} \B_2:=\left\{\Phi_{a,b,\Lambda}: a,b\geq0\hbox{ and $\Lambda$ positive measure fulfilling (\ref{eq5})}\right\}. \end{equation}
It is well-known that $\B_2$ can be identified with the space of real-valued ${\cal C}^{\infty}$ functions $f: ]0,\infty[\to[0,\infty[$ such that for every integer $n\geq1$, the $n$-th derivative $f^{(n)}$ of $f$ has the same sign as $(-1)^{n-1}$. See Definition 3.9.1 and Theorem 3.9.4 in \cite{Jacob}.
Bernstein functions appear as Laplace exponents of subordinators,
see e.g. Chapter 1 in \cite{Besf}, Chapter 6 in \cite{Sato}, or Section 3.9 in \cite{Jacob}. This means that $\Phi\in\B_2$ if and only if there exists an increasing process $\sigma=(\sigma_t, t\geq0)$ with values in $[0,\infty]$ ($\infty$ serves as absorbing state) with independent and stationary increments as long as $\sigma_t<\infty$, such that for every $t\geq0$ $${\math E}(\exp(-q\sigma_t))\,=\,\exp(-t\Phi(q))\,,\qquad q>0.$$ In this setting, $a$ is known as the killing rate, $b$ as the drift coefficient, and $\Lambda$ as the L\'evy measure.
We shall further denote by $\B_2^{\downarrow}$ the subspace of Bernstein functions for which the L\'evy measure is absolutely continuous with a monotone decreasing density, viz. $$\B_2^{\downarrow}:=\left\{\Phi_{a,b,\Lambda}: a,b\geq0 \hbox{ and }\Lambda(dx)=g(x)dx, g\geq0 \hbox{ decreasing and } \int_{0}^{\infty}(x\wedge 1)g(x)dx<\infty\right\}.$$
\subsection{ (Sub)critical branching mechanisms} For every triple $(a,b,\Pi)$ with $a,b\geq0$ and $\Pi$ positive measure on $]0,\infty[$ such that \begin{equation}\label{eq8} \int_{]0,\infty[}(x\wedge x^2) \Pi(dx)<\infty \end{equation} we associate the function $\Psi_{a,b,\Pi}: ]0,\infty[\to[0,\infty[$ defined by \begin{equation}\label{eq9} \Psi_{a,b,\Pi}(q):=a q +bq^2+\int_{]0,\infty[}( \e^{-q x}-1 +q x)\Pi(dx)\,, \end{equation} and denote the convex cone of such functions by \begin{equation}\label{eq10} \B_3:=\left\{\Psi_{a,b,\Pi}: a,b\geq0\hbox{ and $\Pi$ a positive measure such that (\ref{eq8}) holds}\right\} \end{equation}
Functions in $\B_3$ are convex increasing functions of class ${\cal C}^{\infty}$ that vanish at $0$; they coincide with the class of branching mechanisms for (sub)critical continuous state branching processes, where (sub)critical means critical or sub-critical. See Le Gall \cite{LG} on page 132.
Alternatively, functions in the space $\B_3$ can also be viewed as Laplace exponents
of L\'evy processes with no positive jumps that do not drift to $-\infty$ (or, equivalently, with nonnegative mean). In this setting, $a$ is the drift coefficient, $2b$ the Gaussian coefficient, and $\Pi$ the image of the L\'evy measure by the map $x\to-x$.
See e.g. Chapter VII in \cite{Belp}.
\section{Composition with ${\bf e}_{\alpha}$} Stable subordinators correspond to a remarkable one-parameter family of Bernstein functions denoted here by $({\bf e}_{\alpha}, 0<\alpha<1)$, where $$ {\bf e}_{\alpha}(q):=q^{\alpha}\,=\, {\alpha\over \Gamma(1-\alpha)} \int_{0}^{\infty} (1-\e^{-q x}) x^{-1-\alpha}dx\,, \qquad q>0\,. $$
\begin{theorem}\label{T1} The following assertions are equivalent:
\noindent {\rm (i)} $\alpha\in]0,1/2]$.
\noindent {\rm (ii)} For every $\Psi\in \B_3$, $\Psi\circ {\bf e}_{\alpha}\in\B_2$. \end{theorem}
The implication (ii) $\Rightarrow$ (i) is immediate. Indeed, $\Psi_{0,1,0}: q\to q^2$ belongs to $\B_3$, but ${\bf e}_{2\alpha}=\Psi_{0,1,0}\circ {\bf e}_{\alpha}$ is in $\B_2$ if and only if $2\alpha\leq 1$. However, the converse (i) $\Rightarrow$ (ii) is not straightforward and relies on the following technical lemma, which appears as Lemma VI.1.2 in \cite{RY}. Here, for the sake of completeness, we provide a proof. \begin{lemma}\label{L1} For $\alpha\in ]0,1/2]$, let $\sigma^{(\alpha)}=(\sigma^{(\alpha)}_x, x\geq0)$ be a stable subordinator with index $\alpha$ with Laplace transform $${\math E}\left(\exp\left(-{q} \sigma^{(\alpha)}_x\right)\right) =\exp(-x q^{\alpha})\,,\qquad x,q>0\,.$$ Denote by $p^{(\alpha)}(x,t)$ the density of the law of $\sigma^{(\alpha)}_x$. Then for every $x,t>0$, we have $$p^{(\alpha)}(x,t)\leq {\alpha\over \Gamma(1-\alpha)} x t^{-(1+\alpha)}\,.$$ \end{lemma}
\noindent{\bf Remark :} The bound in Lemma \ref{L1} is sharp, as it is well-known that for any $0<\alpha<1$ and each fixed $t>0$ $$p^{(\alpha)}(x,t) \,\sim\,{\alpha\over \Gamma(1-\alpha)} x t^{-(1+\alpha)}\,,\qquad x\to\infty.$$ More precisely, there is a series representation of $p^{(\alpha)}(x,t)$, see Formula (2.4.7) on page 90 in Zolotarev \cite{Zol}:
$$p^{(\alpha)}(x,1)={1\over \pi}\sum_{n=1}^{\infty}(-1)^{n-1}{\Gamma(n\alpha+1)\over \Gamma(n+1)}\sin(\pi n \alpha) x^{-n\alpha -1}\,.$$
Using the identity
$$\Gamma(\alpha)\Gamma(1-\alpha)={\pi \over\sin (\alpha \pi)}\,,$$ this agrees of course with the above estimate. It is interesting to note that the second leading term in the expansion, $$-{\Gamma(2\alpha+1)\over 2\pi}\sin(2\pi \alpha) x^{-2\alpha -1},$$ is negative for $\alpha<1/2$, but positive for $\alpha >1/2$. So the bound in Lemma \ref{L1} would fail for $\alpha>1/2$.
\noindent{\bf Proof:}\hskip10pt In the case $\alpha=1/2$, there is an explicit expression for the density $$p^{(1/2)}(x,t)\,=\,{x\over 2\sqrt{\pi t^3}}\exp\left(-{x^2\over 4t}\right)\,,$$ from which the claim is obvious (recall that $\Gamma(1/2)=\sqrt \pi$).
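For $\alpha=1/2$ the explicit density also makes the bound of Lemma \ref{L1} easy to probe numerically: the ratio of density to bound equals $\exp(-x^2/4t)\leq1$, so the bound is approached as $t\to\infty$ with $x$ fixed. A small illustrative sketch (function names are ours):

```python
import math

def p_half(x, t):
    # explicit density of the 1/2-stable subordinator at "time" x
    return x / (2.0 * math.sqrt(math.pi * t ** 3)) * math.exp(-x * x / (4.0 * t))

def lemma_bound(x, t, alpha=0.5):
    # the bound of Lemma 1: alpha / Gamma(1 - alpha) * x * t^{-(1 + alpha)}
    return alpha / math.gamma(1.0 - alpha) * x * t ** (-1.0 - alpha)

for x in (0.1, 1.0, 5.0):
    for t in (0.05, 1.0, 20.0):
        assert p_half(x, t) <= lemma_bound(x, t)

# sharpness: the ratio exp(-x^2 / 4t) tends to 1 as t -> infinity
assert p_half(1.0, 1e6) / lemma_bound(1.0, 1e6) > 0.999
```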
In the case $\alpha<1/2$, we start from the identity $$\exp(-xq^{\alpha}) \,=\,\int_{0}^{\infty}\e^{-qt} p^{(\alpha)}(x,t)dt \,, $$ and take the derivative in the variable $q$ to get $$\alpha q^{\alpha-1}\exp(-xq^{\alpha}) \,=\,\int_{0}^{\infty}\e^{-qt} {t\over x} p^{(\alpha)}(x,t)dt\,,$$ and then $$\alpha q^{\alpha-1}\left(1-\exp(-xq^{\alpha})\right) \,=\,\int_{0}^{\infty}\e^{-qt}\left({\alpha\over \Gamma(1-\alpha)} t^{-\alpha}- {t\over x} p^{(\alpha)}(x,t)\right)dt\,.$$
Denote the left hand-side by $g(x,q)$, and take the derivative in the variable $x$. We obtain $${\partial g(x,q)\over \partial x} \,=\,\alpha q^{2\alpha-1}\e^{-xq^{\alpha}} \,=\,\alpha q^{2\alpha-1}\int_{0}^{\infty}\e^{-qt} p^{(\alpha)}(x,t)dt\,.$$ On the other hand, since $1-2\alpha>0$, $$q^{2\alpha-1}\,=\,{1\over \Gamma(1-2\alpha)}\int_{0}^{\infty}\e^{-qs}s^{-2\alpha}ds\,,$$ and hence $${\partial g(x,q)\over \partial x} \,=\,{\alpha\over \Gamma(1-2\alpha)}\int_{0}^{\infty}{ds\over s^{2\alpha}}\int_{0}^{\infty}dt \e^{-q(s+t)}
p^{(\alpha)}(x,t)\,.$$ The change of variables $u=t+s$ yields $${\partial g(x,q)\over \partial x} \,=\,{\alpha\over \Gamma(1-2\alpha)}\int_{0}^{\infty}du\int_{0}^{u}{ds\over s^{2\alpha}} \e^{-qu}
p^{(\alpha)}(x,u-s)\,;$$ and since $g(0,q)=0$, we finally obtain the identity \begin{eqnarray*} & &\int_{0}^{\infty}\e^{-qt}\left({\alpha\over \Gamma(1-\alpha)} t^{-\alpha}- {t\over x} p^{(\alpha)}(x,t)\right)dt\\ &=& {\alpha\over \Gamma(1-2\alpha)}\int_{0}^{x}dy\int_{0}^{\infty}du\int_{0}^{u}{ds\over s^{2\alpha}} \e^{-qu}
p^{(\alpha)}(y,u-s)\,. \end{eqnarray*} Inverting the Laplace transform, we conclude that $${\alpha\over \Gamma(1-\alpha)} t^{-\alpha}- {t\over x} p^{(\alpha)}(x,t) \,=\, {\alpha\over \Gamma(1-2\alpha)}\int_{0}^{x}dy\int_{0}^{t}{ds\over s^{2\alpha}}
p^{(\alpha)}(y,t-s)\,,$$ which, since the right-hand side is nonnegative, entails our claim.
\vrule height 1.5ex width 1.4ex depth -.1ex \vskip20pt
We are now able to prove Theorem \ref{T1}.
\noindent{\bf Proof:}\hskip10pt Let $\Psi_{a,b,\Pi}\in\B_3$. Since both $a{\bf e}_{\alpha}$ and $b{\bf e}_{2\alpha}$ are Bernstein functions, there is no loss of generality in assuming that $a=b=0$. Set for $t>0$ $$\nu_{\alpha}(t):= {\alpha\over \Gamma(1-\alpha) t^{1+\alpha}}\int_{0}^{\infty}\Pi(dx) x \left(1-{\Gamma(1-\alpha) t^{1+\alpha}\over \alpha x}
p^{(\alpha)}(x,t)\right)\,.$$ It follows from Lemma \ref{L1} that $\nu_{\alpha}(t)\geq0$. We have for every $q>0$ \begin{eqnarray*} & &\int_{0}^{\infty}(1-\e^{-qt})\nu_{\alpha}(t)dt\\ &=&\int_{0}^{\infty}\Pi(dx) x \int_{0}^{\infty}dt \left({\alpha (1-\e^{-qt})\over \Gamma(1-\alpha) t^{1+\alpha}} -{p^{(\alpha)}(x,t)\over x}+ \e^{-qt} {p^{(\alpha)}(x,t)\over x}\right)\\ &=&\,\int_{0}^{\infty}\Pi(dx) x\left(q^{\alpha}-{1\over x}+{\e^{-q^{\alpha}x}\over x}\right)\\ &=& \Psi_{0,0,\Pi}({\bf e}_{\alpha}(q))\,. \end{eqnarray*} As this quantity is finite for every $q>0$, this shows that $\Psi_{0,0,\Pi}\circ{\bf e}_{\alpha}\in\B_2$.
\vrule height 1.5ex width 1.4ex depth -.1ex \vskip20pt
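The statement of Theorem \ref{T1} can also be probed numerically. Taking for instance the branching mechanism $\Psi(q)=\e^{-q}-1+q$ (so $a=b=0$ and $\Pi=\delta_1$), the composition $\Psi\circ{\bf e}_{1/2}$ should be increasing and concave, whereas $\Psi_{0,1,0}\circ{\bf e}_{\alpha}={\bf e}_{2\alpha}$ fails to be concave for $\alpha>1/2$. A finite-difference sketch, illustrative only:

```python
import math

def d2(f, q, h=1e-4):
    # central second-difference approximation to f''(q)
    return (f(q + h) - 2.0 * f(q) + f(q - h)) / (h * h)

# Psi(q) = e^{-q} - 1 + q lies in B_3 (a = b = 0, Pi = delta_1); by
# Theorem 1 its composition with e_{1/2} must be increasing and concave.
f = lambda q: math.exp(-math.sqrt(q)) - 1.0 + math.sqrt(q)

for q in (0.5, 1.0, 2.0, 10.0):
    assert f(q + 1e-4) > f(q)   # increasing
    assert d2(f, q) < 0.0       # concave

# In contrast, Psi_{0,1,0} o e_alpha = e_{2 alpha} is convex for alpha > 1/2,
# hence not a Bernstein function.
g = lambda q: q ** 1.2
assert d2(g, 1.0) > 0.0
```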
\noindent {\bf Remark :} The proof gives a stronger result than that stated in Theorem \ref{T1}. Indeed, we specified the L\'evy measure $\nu_{\alpha}$ of $\Psi_{0,0,\Pi}\circ{\bf e}_{\alpha}$. Furthermore, in the case $\alpha=1/2$, this expression shows that $\Psi_{0,0,\Pi}\circ{\bf
e}_{1/2}\in\B_2^{\downarrow}$. It is interesting to combine this observation with the forthcoming Proposition \ref{P2}: for every $\Psi\in\B_3$, $\Psi\circ{\bf
e}_{1/2}\in\B_2^{\downarrow}$, thus ${\rm Id}\times (\Psi\circ{\bf
e}_{1/2}): q\to q\Psi(\sqrt q)$ is again in $\B_3$, and in turn ${\bf e}_{1/2}\times (\Psi\circ{\bf
e}_{1/4})\in\B_2^{\downarrow}$. More generally, we have by iteration that for every integer $n$ $${\bf e}_{2-2^{1-n}}\times (\Psi\circ{\bf
e}_{2^{-n}})\in\B_3\,,$$ and $${\bf e}_{1-2^{-n}}\times (\Psi\circ{\bf
e}_{2^{-n-1}})\in\B_2^{\downarrow}\,.$$
\section{Internal functions} It is well-known that the cone ${\bf CM}$ of completely monotone functions and the cone $\B_2$ of Bernstein functions are both stable by right composition with a Bernstein function; see Proposition \ref{P3} below. Theorem \ref{T1} incites us to consider also compositions of (sub)critical branching mechanisms and Bernstein functions; we make the following definition :
\begin{definition} A Bernstein function $\Phi\in\B_2$ is said to be {\rm internal} if $\Psi\circ \Phi\in\B_2$ for every $\Psi\in\B_3$. \end{definition}
Theorem \ref{T1} shows that the functions ${\bf e}_{\alpha}$ are internal if and only if $\alpha\in]0,1/2]$. The critical parameter $\alpha=1/2$ plays a distinguished role. Indeed, we could also prove Theorem \ref{T1} using the following alternative route. First, we check that ${\bf e}_{1/2}$ is internal (see \cite{RY}), and then we deduce by subordination that for every $\alpha<1/2$, $\Psi\circ {\bf e}_{\alpha}= \Psi\circ {\bf e}_{1/2}\circ {\bf e}_{2\alpha}$ is again a Bernstein function for every $\Psi\in\B_3$. Developing this argument, we easily arrive at the following characterization of internal functions:
\begin{theorem}\label{T2} Let $\Phi=\Phi_{a,b,\Lambda}\in\B_2$ be a Bernstein function. The following assertions are then equivalent:
\noindent{ \rm (i)} $\Phi$ is internal,
\noindent{ \rm (ii)} $\Phi^2\in\B_2$,
\noindent{ \rm (iii)} $b=0$ and there exist a constant $c>0$ and a subordinator $\sigma= (\sigma_t, t\geq0)$ such that $$\Lambda(dx)\,=\,c\int_{0}^{\infty} t^{-3/2} {\math P}(\sigma_t\in dx)dt\,.$$
\end{theorem}
\noindent{\bf Proof:}\hskip10pt (i) $\Rightarrow$ (ii) is obvious as $\Psi_{0,1,0}\circ \Phi =\Phi^2$.
(ii) $\Rightarrow$ (i). We know from Theorem \ref{T1} or \cite{RY} that for every $\Psi\in\B_3$, $\Psi\circ {\bf e}_{1/2}\in\B_2$. It follows by subordination that for every Bernstein function $\kappa\in\B_2$, $\Psi\circ {\bf e}_{1/2}\circ \kappa\in\B_2$. Take $\kappa=\Phi^2$, so $ {\bf e}_{1/2}\circ \kappa=\Phi$, and hence $\Phi$ is internal.
(iii) $\Rightarrow$ (ii) Let $\kappa$ denote the Bernstein function of $\sigma$. We have \begin{eqnarray*} \Phi(q)\,&=&\,a +\int_{]0,\infty[}(1-\e^{-qx})\Lambda(dx)\\ &=&\,a + c \int_{]0,\infty[}\int_{0}^{\infty}dt (1-\e^{-qx}) t^{-3/2} {\math P}(\sigma_t\in dx)\\ &=&\,a + c \int_{0}^{\infty}dt (1-\e^{-t\kappa(q)}) t^{-3/2}\,. \end{eqnarray*} The change of variables $t\kappa(q)=u$ yields $$\Phi(q)\,=\,a+c'\sqrt{\kappa(q)}$$ and hence $$\Phi^2(q)\,=\,a^2 + 2ac'\sqrt{\kappa(q)} +c'^2\kappa(q)\,.$$ Since $\kappa^{1/2}={\bf e}_{1/2}\circ \kappa$ is again a Bernstein function, we thus see that $\Phi^2\in\B_2$.
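The change of variables above uses the identity $\int_{0}^{\infty}(1-\e^{-u})u^{-3/2}du=2\Gamma(1/2)=2\sqrt\pi$, so that $c'=2\sqrt\pi\, c$ (consistent with $c=1/(2\sqrt\pi)$ and $c'=1$ for ${\bf e}_{1/2}$ itself). A quick numerical check of the constant (plain Python, trapezoidal rule after $u=\e^v$):

```python
import math

def trap(f, lo, hi, n):
    # composite trapezoidal rule
    h = (hi - lo) / n
    s = 0.5 * (f(lo) + f(hi))
    for i in range(1, n):
        s += f(lo + i * h)
    return s * h

# integral of (1 - e^{-u}) u^{-3/2} over (0, infinity); the substitution
# u = e^v turns it into an integral with exponentially decaying tails
integrand = lambda v: (1.0 - math.exp(-math.exp(v))) * math.exp(-0.5 * v)
val = trap(integrand, -40.0, 40.0, 8000)

assert abs(val - 2.0 * math.sqrt(math.pi)) < 1e-6   # = 2 Gamma(1/2)
```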
(ii) $\Rightarrow$ (iii) Recall that the drift coefficient $b$ of $\Phi_{a,b,\Lambda}$ is given by $$\lim_{q\to\infty} \Phi_{a,b,\Lambda}(q)/q\,=\,b\,;$$ see e.g. page 7 in \cite{Besf}. It follows immediately that $b=0$ whenever $\kappa:=\Phi_{a,b,\Lambda}^2\in\B_2$. Recall from Sato \cite{Sato} on page 197-8 that if $\tau^{(1)}$ and $\tau^{(2)}$ are two independent subordinators with respective Bernstein functions $\Phi^{(1)}$ and $\Phi^{(2)}$, then the compound process $\tau^{(1)}\circ \tau^{(2)}:=\tau^{(3)}$ is again a subordinator with Bernstein function $\Phi^{(3)}:=\Phi^{(2)}\circ \Phi^{(1)}$; moreover its L\'evy measure $\Lambda^{(3)}$ is given by $$\Lambda^{(3)}(dx)=\int_{0}^{\infty}{\math P}(\tau^{(1)}_t\in dx)\Lambda^{(2)}(dt)\,,$$ where $\Lambda^{(2)}$ denotes the L\'evy measure of $\tau^{(2)}$. As $\Phi_{a,b,\Lambda}={\bf e}_{1/2}\circ \kappa$, and the L\'evy measure of ${\bf e}_{1/2}$ is $ct^{-3/2}dt$ with $c=1/(2\sqrt \pi)$, we deduce that $$\Lambda(dx)\,=\,c\int_{0}^{\infty} {\math P}(\sigma_t\in dx) t^{-3/2}dt\,.$$
The proof of Theorem \ref{T2} is now complete.
\vrule height 1.5ex width 1.4ex depth -.1ex \vskip20pt
It is noteworthy that if $\Phi_{a,b,\Lambda}$ is internal and $\Lambda\not\equiv 0$, then $$\int_{]0,\infty[}x \Lambda(dx)\,=\,\infty\,.$$ Indeed, $$\int_{]0,\infty[}x \Lambda(dx) \,=\,c\int_{0}^{\infty}\int_{]0,\infty[} x {\math P}(\sigma_t\in dx) t^{-3/2}dt \,=\,c\int_{0}^{\infty}{\math E}(\sigma_1) t^{-1/2}dt\,=\,\infty\,.$$ For instance, the Bernstein function $q\to \log(1+q)$ of the gamma subordinator is not internal.
\begin{corollary}\label{C1} For every $\Psi\in\B_3$ with $\Psi\not\equiv 0$, we write $\Phi$ for the inverse function of $\Psi$ and then $\Phi'$ for its derivative. Then $1/\Phi'$ is internal. \end{corollary}
\noindent{\bf Proof:}\hskip10pt It is known (see Corollary \ref{C2} below) that $1/\Phi'$ is a Bernstein function; let us check that its square is also a Bernstein function.
We know that $\Psi''\in \B_1$ (Proposition \ref{P1} below)
and $\Phi\in \B_2$ (Proposition \ref{P4} below); we deduce from Proposition \ref{P3} that $\Psi''\circ \Phi\in\B_1$. If we write $I(f): x\to\int_{0}^{x}f(y)dy$ for every locally integrable function $f$, then again by Proposition \ref{P1}, we get that $I(\Psi''\circ \Phi)$ is a Bernstein function.
Now $$\Psi''\,=\,-{\Phi''\circ \Psi\over (\Phi'\circ \Psi)^3}\,,$$ so $$\Psi''\circ \Phi\,=\,-{\Phi''\over (\Phi')^3}\,,$$ and we conclude that $${1\over 2(\Phi')^2}\,=\,I(\Psi''\circ \Phi)+{\Psi'(0+)^2\over 2}\in \B_2\,,$$ since adding a nonnegative constant to a Bernstein function yields again a Bernstein function.
\vrule height 1.5ex width 1.4ex depth -.1ex \vskip20pt
\section{Some classical results and their consequences} For convenience, this section gathers some classical transformations involving $\B_j$, $j\in\{1,2,3\}$ and related subspaces, which have been used in the preceding section. We start by considering derivatives and indefinite integrals. The following statement is immediate.
\begin{proposition}\label{P1} Let $j=2,3$ and $f:]0,\infty[\to[0,\infty[$ be a ${\cal C}^{\infty}$-function with derivative $f'$. For $j=3$, we further suppose that $\lim_{q\to0}f(q)=0$. There is the equivalence
$$f\in\B_j \ \Longleftrightarrow \ f'\in \B_{j-1}\,.$$ \end{proposition}
The next statement is easily checked using integration by parts.
\begin{proposition}\label{P2} Let $j=2,3$ and consider two functions $f,g:]0,\infty[\to[0,\infty[$ which are related by the identity $f(q)=qg(q)$. Then there is the equivalence
$$f\in\B_j \hbox{ and } \lim_{q\to0} f(q)=0 \ \Longleftrightarrow \ g\in \B_{j-1}^{\downarrow} \,.$$ \end{proposition} Proposition \ref{P2} has well-known probabilistic interpretations. First, let $\sigma$ be a subordinator with Bernstein function $f\in \B_2$ with unit mean, viz. ${\math E}(\sigma_1)=1$, which is equivalent to $f'(0+)=1$. Then the completely monotone function $g(q):=f(q)/q$
is the Laplace transform of a probability measure on ${\math R}_+$. The latter appears in the renewal theorem for subordinators (see e.g. \cite{BvHS}); in particular it describes the weak limit of the so-called age process $A(t)=t-g_t$ as $t\to\infty$, where $g_t:=\sup\left\{\sigma_s: \sigma_s<t\right\}.$ Second, let $X$ be a L\'evy process with no positive jumps and Laplace exponent $f\in \B_3$. The L\'evy process reflected at its infimum, $X_t-\inf_{0\leq s \leq t} X_s$, is Markovian; and if $\tau$ denotes its inverse local time at $0$, then $\sigma=-X\circ \tau$ is a subordinator called the descending ladder-height process. The Bernstein function of the latter is then given by
$g(q)=f(q)/q$; see e.g. Theorem VII.4(ii) in \cite{Belp}.
We next turn our attention to composition of functions; here are some classical properties. \begin{proposition}\label{P3} Consider two functions $f,g:]0,\infty[\to[0,\infty[$. Then we have the implications $$f,g\in\B_2 \ \Longrightarrow \ f\circ g\in \B_2\,,$$ $$f\in{\bf CM} \hbox{ and }g\in \B_2 \ \Longrightarrow \ f\circ g\in {\bf CM}\,,$$ $$f\in\B_1 \hbox{ and }g\in \B_2 \ \Longrightarrow \ f\circ g\in \B_1\,.$$ \end{proposition} The first statement in Proposition \ref{P3} is related to the celebrated subordination of Bochner (see, e.g. Section 3.9 in \cite{Jacob} or Chapter 6 in \cite{Sato}); more precisely, if $\sigma$ and $\tau$ are two independent subordinators with respective Bernstein functions $f_{\sigma}$ and $f_{\tau}$, then $\sigma\circ \tau$ is again a subordinator whose Bernstein function is $f_{\tau}\circ f_{\sigma}$. The second statement is a classical result which can be found as Criterion 2 on page 441 in Feller \cite{Feller}; it is also related to Bochner's subordination.
Finally we turn our attention to inverses.
\begin{proposition}\label{P4} Consider a function $f:]0,\infty[\to]0,\infty[$. Then $$f\in\B_2\cup \B_3 \ \Longrightarrow \ 1/f\in {\bf CM}\,.$$
Further, if $f^{-1}$ denotes the inverse of $f$ when the latter is a bijection, then $$f\in\B_3\,,f\not\equiv 0 \ \Longrightarrow \ f^{-1}\in \B_2\,.$$
\end{proposition}
We mention that if $f\in\B_3$, the completely monotone function $1/f$ is the Laplace transform of the so-called scale function of the L\'evy process $X$ with no positive jumps which has Laplace exponent $f$. See Theorem VII.8 in \cite{Belp}. On the other hand, $f^{-1}$ is the Bernstein function of the subordinator of first-passage times $T_t:=\inf\left\{s\geq0: X_s>t\right\}$; see e.g. Theorem VII.1 in \cite{Belp}. Finally, in the case when $f\in\B_2$ is a Bernstein function, the completely monotone function $1/f$ is the Laplace transform of the renewal measure $U(dx)=\int_{0}^{\infty} {\math P}(\sigma_t\in dx)dt$, where $\sigma$ is a subordinator with Bernstein function $f$.
\begin{corollary}\label{C2} Let $\Psi\not\equiv 0$ be a function in $\B_3$, and denote by $\Phi=\Psi^{-1}\in\B_2$ its inverse bijection. Then $q\to1/\Phi'(q)$ and ${\rm Id}/\Phi: q\to q/\Phi(q)$ are Bernstein functions. Furthermore ${1\over \Phi \Phi'}: q\to1/(\Phi(q) \Phi'(q))$ is completely monotone. \end{corollary}
\noindent{\bf Proof:}\hskip10pt We know from Propositions \ref{P1} and \ref{P4} that both $\Phi$ and $\Psi'$ are Bernstein functions. We conclude from Proposition \ref{P3} that $1/\Phi'=\Psi'\circ \Phi$ is again in $\B_2$.
Similarly, we know from Proposition \ref{P2} that $q\to \Psi(q)/q$ is a Bernstein function, and composition on the right with the Bernstein function $\Phi$ yields ${\rm Id}/\Phi$, which is again in $\B_2$.
Finally, we can write $1/(\Phi \Phi')=f\circ \Phi$ where $f(q)=\Psi'(q)/q$. We know from Proposition \ref{P1} that $\Psi'\in\B_2$, so $f\in{\bf CM}$ by Proposition \ref{P2}. Since $\Phi\in\B_2$, we conclude from Proposition \ref{P3} that $f\circ \Phi\in{\bf CM}$.
\vrule height 1.5ex width 1.4ex depth -.1ex \vskip20pt
If $\Phi=\Psi^{-1}$ is the Bernstein function given by the inverse of a function $\Psi\in\B_3$, the Bernstein function $1/\Phi'$ is the exponent of the subordinator $L^{-1}$ defined as the inverse of the local time at $0$ of the L\'evy process with no positive jumps and Laplace exponent $\Psi$. See e.g. Exercise VII.2 in \cite{Belp}. On the other hand, ${\rm Id}/\Phi$ is then the Bernstein function of the decreasing ladder times, see Theorem VII.4(ii) in \cite{Belp}. The interested reader is also referred to \cite{Be1} for further
factorizations for Bernstein functions which arise naturally for L\'evy processes with no positive jumps, and their probabilistic interpretations.
Next, recall that a function $f:]0,\infty[\to{\math R}_+$ is called a Stieltjes transform if it can be expressed in the form $$f(q)\,=\,b+\int_{[0,\infty[}{\nu(dt)\over t+q}\,,\qquad q>0\,,$$ where $b\geq0$ and $\nu$ is a Radon measure on ${\math R}_+$ such that $\int_{[0,\infty[}(1\wedge t^{-1})\nu(dt)<\infty$. Equivalently, a Stieltjes transform is the Laplace transform of a Radon measure $\mu$ on ${\math R}_+$ of the type $\mu(dx)=b\delta_0(dx)+h(x)dx$, where $b\geq0$ and $h$ is a completely monotone function which belongs to $L^1(\e^{-qx}dx)$ for every $q>0$; see e.g. Section 3.8 in \cite{Jacob}.
\begin{corollary}\label{C3} Let $f\in \B_2$ be a Bernstein function such that its derivative $f'$ is a Stieltjes transform. Then for every Bernstein function $g\in\B_2$, the function $f\circ {1\over g}$ is completely monotone. \end{corollary}
\noindent{\bf Proof:}\hskip10pt We can write $$f(q)\,=\,a+ bq + \int_{0}^{q}dr\int_{0}^{\infty}dx\e^{-rx}h(x)\,,\qquad q>0\,,$$ where $a,b\geq0$ and $h\in\B_1$. Thus $$f(q)\,=\,a+ bq + \int_{0}^{\infty}dx(1-\e^{-qx}){h(x)\over x}\,,\qquad q>0\,,$$ and then $$f\circ {1\over g}(q)\,=\,a+{b\over g(q)}+\int_{0}^{\infty}dx(1-\e^{-x/g(q)}){h(x)\over x}\,.$$ We already know from Proposition \ref{P4} that $a+b/g \in {\bf CM}$. The change of variable $y=x/g(q)$ yields $$\int_{0}^{\infty}dx(1-\e^{-x/g(q)}){h(x)\over x} \,=\,\int_{0}^{\infty}(1-\e^{-y})h(yg(q)){dy\over y}\,.$$ For each fixed $y>0$, $yg$ is a Bernstein function, so by Proposition \ref{P3}, the function $q\to h(yg(q))$ is completely monotone.
We conclude that for every integer $n\geq 0$, $$(-1)^n{\partial^n\over \partial q^n}\left(f\circ {1\over g}\right)(q) \,=\,\int_{0}^{\infty}(-1)^n{\partial^n\over \partial q^n}\left(h(yg(\cdot))\right)(q)\, (1-\e^{-y}){dy\over y}\,\geq\,0\,,$$ which establishes our claim.
\vrule height 1.5ex width 1.4ex depth -.1ex \vskip20pt
\end{document} |
\begin{document}
\title{Phase-transition-like Behavior of Quantum Games}
\author{Jiangfeng Du\footnote[1]{To whom correspondence should be addressed. E-mail: djf@ustc.edu.cn}} \address{Department of Modern Physics, University of Science and Technology of China, Hefei, 230027, People's Republic of China} \address{Department of Physics, National University of Singapore, Lower Kent Ridge, Singapore 119260, Singapore} \address{Centre for Quantum Computation, Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Wilberforce Road, Cambridge CB3 0WA, United Kingdom}
\author{Hui Li\footnote[2]{E-mail: lhuy@mail.ustc.edu.cn}} \address{Department of Modern Physics, University of Science and Technology of China, Hefei, 230027, People's Republic of China}
\author{Xiaodong Xu} \address{Harrison M. Randall Laboratory of Physics, The University of Michigan, Ann Arbor, Michigan 48109-1120}
\author{Xianyi Zhou and Rongdian Han} \address{Department of Modern Physics, University of Science and Technology of China, Hefei, 230027, People's Republic of China}
\begin{abstract} The discontinuous dependence of the properties of a quantum game on its entanglement has been shown to be very much like phase transitions viewed in the entanglement-payoff diagram [J. Du \textit{et al.}, Phys. Rev. Lett. \textbf{88}, 137902 (2002)]. In this paper we investigate such phase-transition-like behavior of quantum games, suggesting a method that helps to illuminate the origin of this kind of behavior. For the particular case of the generalized Prisoners' Dilemma, we find that, for different settings of the numerical values in the payoff table, even though the classical game behaves the same, the quantum game exhibits different and interesting phase-transition-like behavior. \end{abstract}
\pacs{03.67.-a, 02.50.Le}
\submitto{\JPA}
\maketitle
\section{Introduction}
The theory of quantum games is a newborn field which combines classical game theory and quantum information theory, opening a new range of potential applications. Recent research has shown that quantum games can outperform their classical counterparts \cite{1,2,3,4,5,6,11,7,8,15,10}. J. Eisert \textit{et al}. investigated the quantization of the famous game of Prisoners' Dilemma \cite{4}. Their result exhibits the surprising superiority of quantum strategies over classical ones, and the players can escape the dilemma when they both resort to quantum strategies. L. Marinatto and T. Weber studied the quantum version of the Battle of the Sexes game and found that the game can have a unique solution with entangled strategies \cite{5}. Besides two-player quantum games, works on multiplayer games have also been presented \cite{6,11}. In a recent paper, S.C. Benjamin and P.M. Hayden showed that multiplayer quantum games can exhibit certain forms of pure quantum equilibrium that have no analogue in classical games, or even in two-player quantum games \cite{6}. Although most works focus on maximally entangled quantum games, games of varying entanglement have also been investigated \cite{8,15}. For the particular case of the two-player quantum Prisoners' Dilemma, two thresholds for the game's entanglement are found, and phenomena which are very much like phase transitions are also revealed. Even though quantum games are played mostly on paper, the first experimental realization of a quantum game has been implemented on an NMR quantum computer \cite{10}.
In this paper, we investigate the phase-transition-like behavior of quantum games, using a proposed method which helps to illuminate the origin of such behavior. For the generalized version of the Prisoners' Dilemma, we find that, with different settings of the numerical values in the payoff table, even though the classical game behaves the same, the quantum game behaves quite differently and exhibits interesting phase-transition-like behavior in the entanglement-payoff diagram. We find thresholds for the amount of entanglement that separate different regions for the game. The properties of the game change discontinuously when its entanglement goes across these thresholds, creating the phase-transition-like behavior. We present investigations both for the case where the strategic space is restricted as in Ref. \cite{4} and for the case where the players are allowed to adopt any unitary operations as their strategies. In the former case, the phase-transition-like behavior exhibits interesting variation with respect to changes of the numerical values in the payoff table, and so does the property of the game. In the latter case, the game has a boundary, a function of the numerical values in the payoff table, for its entanglement. The quantum game has an infinite number of Nash equilibria if its entanglement is below the boundary, whereas no pure-strategy Nash equilibrium can be found when its entanglement exceeds the boundary.
The proposed method helps to illuminate the origin of this kind of phase-transition-like behavior. In this method, the strategies of the players correspond to unit vectors in some real space, and the search for Nash equilibria includes a procedure of finding the eigenvector of some matrix that corresponds to the maximal eigenvalue. In the particular case presented in this paper, the eigenvalues are functions of the amount of entanglement, and thus there can be an eigenvalue crossing. Passing an eigenvalue-crossing point makes the eigenvector that corresponds to the maximal eigenvalue change discontinuously, indicating the discontinuous change of the properties of the quantum game, i.e. the phase-transition-like behavior.
\section{\label{sec2}Quantization of The Generalized Prisoners' Dilemma}
\begin{table}[b] \caption{The general form of the Prisoners' Dilemma. The first entry in parentheses denotes the payoff of Alice and the second entry the payoff of Bob. The entries in this table should satisfy the condition $t>r>p>s$ (see Reference \protect\cite{9}). The meanings of the symbols in the table are as follows. $C$: Cooperate; $D$: Defect; $r$: reward; $p$: punishment; $t$: temptation; $s$: sucker's payoff.} \label{Table1} \begin{indented} \item[]\begin{tabular}{ccc} \br & Bob: $C$ & Bob: $D$ \\ \mr Alice: $C$ & $\left( r ,r \right) $ & $\left( s ,t \right) $ \\ Alice: $D$ & $\left( t ,s \right) $ & $\left( p ,p \right) $ \\ \br \end{tabular} \end{indented} \end{table}
The classical Prisoners' Dilemma is the most widely studied and used paradigm of a non-zero-sum game that has a unique equilibrium outcome which fails to be Pareto optimal. The importance of this game lies in the fact that many familiar social phenomena seem to have the Prisoners' Dilemma at their core. The general form of the Prisoners' Dilemma \cite{9} is shown in Table \ref{Table1}, with suggestive names for the strategies and payoffs. The condition $t>r>p>s$ guarantees that strategy $D$ dominates strategy $C$ for both players, and that the unique equilibrium at $\left( D,D\right) $ is Pareto inferior to $\left( C,C\right) $.
\begin{figure}
\caption{The physical model for two player quantum Prisoners' Dilemma.}
\label{fig1}
\end{figure}
The physical model of the quantum Prisoners' Dilemma was originally proposed by J. Eisert \textit{et al.}, as shown in Fig. \ref{fig1}. Together with the payoff table for the general Prisoners' Dilemma, the scheme can represent the generalized quantum Prisoners' Dilemma. In this scheme the game has two qubits, one for each player. The possible outcomes of the classical strategies $D$ and $C$ are assigned to two basis vectors $\left\vert D\right\rangle $ and $\left\vert C\right\rangle $ in the Hilbert space of a qubit. Hence the state of the game at each instance is described by a vector in the tensor product space which is spanned by the classical game basis $\left\vert CC\right\rangle $, $\left\vert CD\right\rangle $, $\left\vert DC\right\rangle $ and $\left\vert DD\right\rangle $, where the first and second entries refer to Alice's and Bob's qubits respectively. The initial state of the game is given by \begin{equation} \left\vert \psi_{i}\right\rangle =\hat{J}\left\vert CC\right\rangle ,\label{eq 1} \end{equation} where $\hat{J}$\ is a unitary operator which is known to both players. Strategic moves of Alice and Bob are associated with unitary operators $\hat{U}_{A}$ and $\hat{U}_{B}$ respectively, which are chosen from a strategic space $S$. At the final stage, the state of the game is \begin{equation} \left\vert \psi_{f}\right\rangle =\hat{J}^{\dag}\left( \hat{U}_{A}\otimes \hat{U}_{B}\right) \hat{J}\left\vert CC\right\rangle .\label{eq 2} \end{equation} The subsequent measurement yields a particular result, and the expected payoffs of the players are given by \begin{equation} \left\{ \begin{array} [c]{c} \$_{A}=rP_{CC}+pP_{DD}+tP_{DC}+sP_{CD}\\ \$_{B}=rP_{CC}+pP_{DD}+sP_{DC}+tP_{CD} \end{array} \right. ,\label{eq 3} \end{equation} where $P_{\sigma\tau}=\left\vert \left\langle \sigma\tau\right\vert \left. 
\psi_{f}\right\rangle \right\vert ^{2}$ $\left( \sigma,\tau\in\left\{ C,D\right\} \right) $ is the probability that $\left\vert \psi _{f}\right\rangle $\ collapses into basis $\left\vert \sigma\tau\right\rangle $.
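As a concrete illustration, the scheme of Eqs. (\ref{eq 1})--(\ref{eq 3}) can be simulated directly. The sketch below is a minimal implementation of my own (the helper names and the payoff entries $(r,p,t,s)=(3,1,5,0)$ are illustrative choices; the form of $\hat{J}$ is the entangling gate specified later in Eq. (\ref{eq 7})); it checks that at $\gamma=0$ the quantum game reproduces the classical payoff table.

```python
import numpy as np

r, p, t, s = 3, 1, 5, 0                          # example payoff entries, t > r > p > s

C = np.eye(2, dtype=complex)                     # C-hat = U(0, 0) = identity
D = np.array([[0, 1], [-1, 0]], dtype=complex)   # D-hat = U(pi, 0)

def payoff_A(UA, UB, gamma):
    """Alice's expected payoff for psi_f = J^dag (UA x UB) J |CC>."""
    J = np.cos(gamma / 2) * np.eye(4) + 1j * np.sin(gamma / 2) * np.kron(D, D)
    psi = J.conj().T @ np.kron(UA, UB) @ J[:, 0]  # J|CC> is J's first column
    pcc, pcd, pdc, pdd = np.abs(psi) ** 2         # basis order: CC, CD, DC, DD
    return r * pcc + p * pdd + t * pdc + s * pcd

# gamma = 0: no entanglement, the classical payoff table is recovered
pay_CC = payoff_A(C, C, 0.0)   # mutual cooperation -> r
pay_DD = payoff_A(D, D, 0.0)   # mutual defection   -> p
pay_DC = payoff_A(D, C, 0.0)   # Alice defects alone -> t
pay_CD = payoff_A(C, D, 0.0)   # Alice cooperates alone -> s
```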
In the general case, the strategies of the players could be any unitary operations. However, since the overall phase factor of $\left\vert \psi_{f}\right\rangle $ does not affect the final results of the game, we can safely set the strategic space $S=SU\left( 2\right) $ as in Refs. \cite{4} and \cite{6}, without loss of generality.
As is well known, an operator $\hat{U}\in SU\left( 2\right) $ can be written as \begin{equation} \hat{U}=w\cdot\hat{I}_{2}+x\cdot i\hat{\sigma}_{x}+y\cdot i\hat{\sigma} _{y}+z\cdot i\hat{\sigma}_{z},\label{eq 5} \end{equation} with $w,x,y,z\in\left[ -1,1\right] $ and $w^{2}+x^{2}+y^{2}+z^{2}=1$. This enables us to represent $\hat{U}$ directly by a four-dimensional real vector \begin{equation} u=\left( w,x,y,z\right) \in\mathbb{R}^{4}, \end{equation} with $u\cdot u^{T}=w^{2}+x^{2}+y^{2}+z^{2}=1$ (superscript $T$ denotes \textit{transpose}); its components are denoted by $u^{1}=w,u^{2} =x,u^{3}=y,u^{4}=z$.
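The correspondence of Eq. (\ref{eq 5}) between unit 4-vectors and $SU\left( 2\right) $ matrices is easy to check numerically. The snippet below (an illustrative check, not part of the paper) verifies that any unit vector $u$ yields a unitary matrix whose determinant equals $w^{2}+x^{2}+y^{2}+z^{2}=1$.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def U(u):
    """Eq. (5): map a unit 4-vector u = (w, x, y, z) to an SU(2) matrix."""
    w, x, y, z = u
    return w * I2 + 1j * (x * sx + y * sy + z * sz)

rng = np.random.default_rng(0)
u = rng.normal(size=4)
u /= np.linalg.norm(u)               # enforce u . u^T = 1

Uhat = U(u)
unitary_ok = np.allclose(Uhat.conj().T @ Uhat, I2)
special_ok = np.isclose(np.linalg.det(Uhat), 1.0)   # det = w^2 + x^2 + y^2 + z^2
```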
Denoting Alice's strategy by $u_{A}$ and Bob's by $u_{B}$, the payoffs in Eq. (\ref{eq 3}) can be written as \begin{equation} \left\{ \begin{array} [c]{c} \$_{A}=\$_{A}\left( u_{A},u_{B}\right) =\sum_{ij,kl}\$_{ij,kl}^{A}\cdot u_{A}^{i}u_{A}^{j}u_{B}^{k}u_{B}^{l}\\ \$_{B}=\$_{B}\left( u_{A},u_{B}\right) =\sum_{ij,kl}\$_{ij,kl}^{B}\cdot u_{B}^{i}u_{B}^{j}u_{A}^{k}u_{A}^{l} \end{array} \right. ,\label{eq 14} \end{equation} where $i,j,k,l$ run from $1$ to $4$, and $\left( \$_{ij,kl}^{A}\right) $ and $\left( \$_{ij,kl}^{B}\right) $ are certain tensors. The tensors $\left( \$_{ij,kl}^{A}\right) $ and $\left( \$_{ij,kl}^{B}\right) $ in Eq. (\ref{eq 14}) are not uniquely determined. However, if they are restricted to be symmetric, \textit{i.e.} $\$_{ij,kl}^{A}=\$_{ji,kl}^{A}=\$_{ij,lk}^{A}$ and $\$_{ij,kl}^{B}=\$_{ji,kl}^{B}=\$_{ij,lk}^{B}$ (this can always be done), they are both uniquely determined. The calculations of $\left( \$_{ij,kl} ^{A}\right) $ and $\left( \$_{ij,kl}^{B}\right) $ can be found in \ref{app1}. Eqs. (\ref{eq 14}) are actually very general formulations for any static quantum game expressed as in Table \ref{Table1} and Fig. \ref{fig1} (the gate $\hat{J}^{\dag}$ prior to measurement can even be replaced by another unitary transformation, not necessarily the inverse of $\hat{J}$). All the structural information of the game, including the classical payoff table and the physical model, is encoded in the tensors $\left( \$_{ij,kl}^{A}\right) $ and $\left( \$_{ij,kl}^{B}\right) $. In the Prisoners' Dilemma, we have $\$_{ij,kl}^{A}\equiv\$_{ij,kl}^{B}$ due to the symmetric structure of the game. In an asymmetric game, $\left( \$_{ij,kl}^{A}\right) $ does not necessarily equal $\left( \$_{ij,kl} ^{B}\right) $.
Defining $\$_{ij,kl}\equiv\$_{ij,kl}^{A}\equiv\$_{ij,kl}^{B}$, Eq. (\ref{eq 14}) can be re-expressed as
\begin{equation} \left\{ \begin{array} [c]{c} \$_{A}\left( u_{A},u_{B}\right) =\sum_{ij}\left( \sum_{kl}\$_{ij,kl}\cdot u_{B}^{k}u_{B}^{l}\right) u_{A}^{i}u_{A}^{j}=u_{A}\cdot P\left( u_{B}\right) \cdot u_{A}^{T}\\ \$_{B}\left( u_{A},u_{B}\right) =\sum_{ij}\left( \sum_{kl}\$_{ij,kl}\cdot u_{A}^{k}u_{A}^{l}\right) u_{B}^{i}u_{B}^{j}=u_{B}\cdot P\left( u_{A}\right) \cdot u_{B}^{T} \end{array} \right. ,\label{eq 15} \end{equation} where $P\left( u\right) $ is a symmetric matrix as a function of $u$, whose $i,j$-th element satisfies \begin{equation} \left( P\left( u\right) \right) _{ij}=\sum_{kl}\$_{ij,kl}\cdot u^{k} u^{l}.\label{eq 16} \end{equation}
Let $\left( u_{A}^{\ast},u_{B}^{\ast}\right) $ be a Nash equilibrium of the game. From Eq. (\ref{eq 15}) we see that $u_{A}\cdot P\left( u_{B}^{\ast}\right) \cdot u_{A}^{T}$ reaches its maximum at $u_{A} =u_{A}^{\ast}$ and, simultaneously, $u_{B}\cdot P\left( u_{A}^{\ast}\right) \cdot u_{B}^{T}$ reaches its maximum at $u_{B}=u_{B}^{\ast}$. In terms of game theory, we say that $u_{A}^{\ast}$ dominates $u_{B}^{\ast}$ and $u_{B}^{\ast}$ dominates $u_{A}^{\ast}$. Together with $u_{A}^{\ast}\cdot\left( u_{A}^{\ast }\right) ^{T}=u_{B}^{\ast}\cdot\left( u_{B}^{\ast}\right) ^{T}=1$, we conclude that $u_{A}^{\ast}$ ($u_{B}^{\ast}$) must be the eigenvector of $P\left( u_{B}^{\ast}\right) $ [$P\left( u_{A}^{\ast}\right) $] that corresponds to the maximal eigenvalue, and this eigenvalue is exactly the payoff for Alice (Bob) at this Nash equilibrium. This analysis also shows that the dominant strategy against a given strategy $u$ must be the eigenvector of $P\left( u\right) $ that corresponds to the maximal eigenvalue.
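This eigenvector characterization can be tested numerically. In the sketch below (my own construction, with the illustrative values $(r,p,t,s)=(3,1,5,0)$), the matrix $P\left( u\right) $ is recovered from simulated payoffs by polarization of the quadratic form, and the best reply against $\hat{D}$ is confirmed to be the top eigenvector of $P\left( \hat{D}\right) $, with payoff equal to the maximal eigenvalue.

```python
import numpy as np

r, p, t, s = 3, 1, 5, 0
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
Dhat = np.array([[0, 1], [-1, 0]], dtype=complex)   # D-hat = i * sigma_y

def U(u):
    w, x, y, z = u
    return w * I2 + 1j * (x * sx + y * sy + z * sz)

def payoff_A(uA, uB, gamma):
    J = np.cos(gamma / 2) * np.eye(4) + 1j * np.sin(gamma / 2) * np.kron(Dhat, Dhat)
    psi = J.conj().T @ np.kron(U(uA), U(uB)) @ J[:, 0]
    pcc, pcd, pdc, pdd = np.abs(psi) ** 2
    return r * pcc + p * pdd + t * pdc + s * pcd

def P_of(u, gamma):
    """Recover the symmetric matrix P(u) from simulated payoffs by polarization."""
    E, P = np.eye(4), np.zeros((4, 4))
    for i in range(4):
        P[i, i] = payoff_A(E[i], u, gamma)
    for i in range(4):
        for j in range(i + 1, 4):
            v = (E[i] + E[j]) / np.sqrt(2)
            P[i, j] = P[j, i] = payoff_A(v, u, gamma) - (P[i, i] + P[j, j]) / 2
    return P

gamma, uD = 0.3, (0, 0, 1, 0)                # D-hat as a 4-vector
lam, vec = np.linalg.eigh(P_of(uD, gamma))   # eigenvalues in ascending order
best = vec[:, -1]                            # eigenvector of the maximal eigenvalue
best_payoff = payoff_A(best, uD, gamma)      # equals lam[-1]
```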
In the following, we first investigate the general Prisoners' Dilemma in the case that the strategic space is restricted to the 2-parameter subset of $SU\left( 2\right) $ given in Ref. \cite{4}. Then we investigate the game when the players are allowed to adopt any unitary strategic operations. We should note that some authors \cite{17} have argued that the restriction on the strategic space given in Ref. \cite{4} has no physical basis and does restrict generality. Nevertheless, it is still an interesting case and a good instance to show how the phase-transition-like behavior originates, although the particular results obtained hold only for this very specific set of strategies.
\section{\label{sec3}Two-Parameter Set of Strategies}
In the case of two-parameter set of strategies, the strategic space $S$ is restricted to the two-parameter subset of $SU\left( 2\right) $ as follows \cite{4}, \begin{equation} \hat{U}\left( \theta,\varphi\right) =\left( \begin{array} [c]{cc} e^{i\varphi}\cos\theta/2 & \sin\theta/2\\ -\sin\theta/2 & e^{-i\varphi}\cos\theta/2 \end{array} \right) ,\label{eq 4} \end{equation} with $\theta\in\left[ 0,\pi\right] $ and $\varphi\in\left[ 0,\pi/2\right] $.
As illustrated in detail by J. Eisert \textit{et al. }\cite{4}, in order to guarantee that the classical Prisoners' Dilemma is faithfully represented, the form of $\hat{J}$ should be \begin{equation} \hat{J}=e^{i\gamma\hat{D}\otimes\hat{D}/2}=\cos\frac{\gamma}{2}\hat{C} \otimes\hat{C}+i\sin\frac{\gamma}{2}\hat{D}\otimes\hat{D},\label{eq 7} \end{equation} where $\hat{C}=$ $\hat{U}\left( 0,0\right) $, $\hat{D}=\hat{U}\left( \pi,0\right) $, and $\gamma\in\left[ 0,\pi/2\right] $ is in fact a measure of the game's entanglement.
Eq. (\ref{eq 4}) can be rewritten as \begin{eqnarray} \hat{U}\left( \theta,\varphi\right) & =\cos\frac{\theta}{2}\cos\varphi \cdot\hat{I}_{2}+\sin\frac{\theta}{2}\cdot i\hat{\sigma}_{y}+\cos\frac{\theta }{2}\sin\varphi\cdot i\hat{\sigma}_{z}\nonumber\\ & =w\cdot\hat{I}_{2}+y\cdot i\hat{\sigma}_{y}+z\cdot i\hat{\sigma} _{z},\label{eq 12} \end{eqnarray} where $w=\cos\frac{\theta}{2}\cos\varphi,y=\sin\frac{\theta}{2},z=\cos \frac{\theta}{2}\sin\varphi$. Obviously $w,y,z\in\left[ 0,1\right] $, and $\hat{U}\left( \theta,\varphi\right) \in SU\left( 2\right) $ implies that $w^{2}+y^{2}+z^{2}=1$. Since $\hat{U}\left( \theta,\varphi\right) $ and $-\hat{U}\left( \theta,\varphi\right) $ represent the same strategy, we may equivalently allow $w,y,z\in\left[ -1,1\right] $. Therefore, in the case of the two-parameter set of strategies, $\hat{U}\left( \theta ,\varphi\right) $ can be represented by a three-dimensional real vector \begin{equation} u=\left( w,y,z\right) \in\mathbb{R}^{3}, \end{equation} with $u\cdot u^{T}=w^{2}+y^{2}+z^{2}=1$. Eqs. (\ref{eq 14}, \ref{eq 15}, \ref{eq 16}) retain their form, except that all indices run only from $1$ to $3$, rather than from $1$ to $4$. Obviously we have $\hat{C} \sim\left( 1,0,0\right) ,\hat{D}\sim\left( 0,1,0\right) ,\hat{Q} \sim\left( 0,0,1\right) $, in which \textquotedblleft$\sim$ \textquotedblright\ means \textquotedblleft represented by\textquotedblright. In the remainder of this paper, we do not distinguish between a unitary operator and the corresponding vector (3-dimensional or 4-dimensional), as long as there is no ambiguity.
In Ref. \cite{8}, we investigated this game in the case $\left( r,p,t,s\right) =\left( 3,1,5,0\right) $ and observed phenomena that are very much like phase transitions. In the generalized quantum Prisoners' Dilemma, such phase-transition-like behavior still exists. In fact, there exist two thresholds for the game's entanglement, $\gamma_{th1}=\arcsin \sqrt{\left( p-s\right) /\left( t-s\right) }$ and $\gamma_{th2} =\arcsin\sqrt{\left( t-r\right) /\left( t-s\right) }$. We prove below that, for $0\leqslant\gamma<\gamma_{th1}$, the strategic profile $\hat {D}\otimes\hat{D}$ is the Nash equilibrium, with payoffs $\$_{A}=\$_{B}=p$. For $\gamma_{th2}<\gamma\leqslant\pi/2$, the strategic profile $\hat{Q}\otimes \hat{Q}$ is the Nash equilibrium, with payoffs $\$_{A}=\$_{B}=r$. If $\gamma_{th1}<\gamma_{th2}$ and $\gamma_{th1}\leqslant\gamma\leqslant \gamma_{th2}$, the game has two Nash equilibria, $\hat{D}\otimes\hat{Q}$ and $\hat{Q}\otimes\hat{D}$. The payoff for the player who adopts $\hat{D}$ is $s+\left( t-s\right) \cos^{2}\gamma$, while for the player who adopts $\hat{Q}$ it is $s+\left( t-s\right) \sin{}^{2}\gamma$. If instead $\gamma _{th2}<\gamma_{th1}$ and $\gamma_{th2}\leqslant\gamma\leqslant\gamma_{th1}$, both $\hat{D}\otimes\hat{D}$ and $\hat{Q}\otimes\hat{Q}$ are Nash equilibria of the game. We obtain these conclusions through the following steps:
Assume one player adopts strategy $\hat{D}$; the payoff for the other, as a function of his/her strategy $u$, is \begin{equation} u\cdot P\left( \hat{D}\right) \cdot u^{T}, \end{equation} where the explicit expression of $P\left( \hat{D}\right) $ is (the calculation can be found in \ref{app2}) \begin{equation} P\left( \hat{D}\right) =\left( \begin{array} [c]{ccc} s & 0 & 0\\ 0 & p & 0\\ 0 & 0 & s+\left( t-s\right) \sin{}^{2}\gamma \end{array} \right) .\label{eq 8} \end{equation} If $0\leqslant\gamma<\gamma_{th1}=\arcsin\sqrt{\left( p-s\right) /\left( t-s\right) }$, the maximal eigenvalue of $P\left( \hat{D}\right) $ is $p$, and the corresponding eigenvector is $\left( 0,1,0\right) \sim\hat{D}$. If $\gamma_{th1}<\gamma\leqslant\pi/2$, the maximal eigenvalue of $P\left( \hat{D}\right) $ is $s+\left( t-s\right) \sin{}^{2}\gamma$, and the corresponding eigenvector is $\left( 0,0,1\right) \sim\hat{Q}$. Therefore $\hat{D}$ dominates $\hat{D}$ for $0\leqslant\gamma<\gamma_{th1}$, while $\hat{Q}$ dominates $\hat{D}$ for $\gamma_{th1}<\gamma\leqslant\pi/2$. At the same time we have $\$_{A}\left( \hat{D},\hat{D}\right) =\$_{B}\left( \hat{D},\hat{D}\right) =p$ and $\$_{A}\left( \hat{Q},\hat{D}\right) =\$_{B}\left( \hat{D},\hat{Q}\right) =s+\left( t-s\right) \sin{}^{2} \gamma$.
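The diagonal entries of $P\left( \hat{D}\right) $ in Eq. (\ref{eq 8}) can be cross-checked by direct simulation of the game. The snippet below is an illustrative check of my own with $(r,p,t,s)=(3,1,5,0)$, playing $\hat{C}$, $\hat{D}$ and $\hat{Q}$ against $\hat{D}$.

```python
import numpy as np

r, p, t, s = 3, 1, 5, 0
C = np.eye(2, dtype=complex)                     # C-hat
D = np.array([[0, 1], [-1, 0]], dtype=complex)   # D-hat = i * sigma_y
Q = np.array([[1j, 0], [0, -1j]])                # Q-hat = i * sigma_z

def payoff_A(UA, UB, gamma):
    J = np.cos(gamma / 2) * np.eye(4) + 1j * np.sin(gamma / 2) * np.kron(D, D)
    psi = J.conj().T @ np.kron(UA, UB) @ J[:, 0]
    pcc, pcd, pdc, pdd = np.abs(psi) ** 2
    return r * pcc + p * pdd + t * pdc + s * pcd

gamma = 0.4
pay_CD = payoff_A(C, D, gamma)   # (P(D))_11 = s
pay_DD = payoff_A(D, D, gamma)   # (P(D))_22 = p
pay_QD = payoff_A(Q, D, gamma)   # (P(D))_33 = s + (t - s) sin^2(gamma)
```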
Now assume one player adopts strategy $\hat{Q}$; the payoff for the other, as a function of his/her strategy $u$, is \begin{equation} u\cdot P\left( \hat{Q}\right) \cdot u^{T}, \end{equation} where the explicit expression of $P\left( \hat{Q}\right) $ is (the calculation can be found in \ref{app2}) \begin{equation} P\left( \hat{Q}\right) =\left( \begin{array} [c]{ccc} r-\left( r-p\right) \sin{}^{2}\gamma & 0 & 0\\ 0 & t-\left( t-s\right) \sin{}^{2}\gamma & 0\\ 0 & 0 & r \end{array} \right) .\label{eq 9} \end{equation} If $0\leqslant\gamma<\gamma_{th2}=\arcsin\sqrt{\left( t-r\right) /\left( t-s\right) }$, the maximal eigenvalue of $P\left( \hat{Q}\right) $ is $t-\left( t-s\right) \sin{}^{2}\gamma$, and the corresponding eigenvector is $\left( 0,1,0\right) \sim\hat{D}$. If $\gamma_{th2}<\gamma\leqslant\pi/2$, the maximal eigenvalue of $P\left( \hat{Q}\right) $ is $r$, and the corresponding eigenvector is $\left( 0,0,1\right) \sim\hat{Q}$. Therefore $\hat{D}$ dominates $\hat{Q}$ for $0\leqslant\gamma<\gamma_{th2}$, while $\hat{Q}$ dominates $\hat{Q}$ for $\gamma_{th2}<\gamma\leqslant\pi/2$. At the same time we have $\$_{A}\left( \hat{Q},\hat{Q}\right) =\$_{B}\left( \hat{Q},\hat{Q}\right) =r$ and $\$_{A}\left( \hat{D},\hat{Q}\right) =\$_{B}\left( \hat{Q},\hat{D}\right) =t-\left( t-s\right) \sin{}^{2} \gamma=s+\left( t-s\right) \cos^{2}\gamma$.
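The diagonal entries of $P\left( \hat{Q}\right) $ in Eq. (\ref{eq 9}) admit the same kind of numerical cross-check (illustrative code of my own, $(r,p,t,s)=(3,1,5,0)$), playing $\hat{C}$, $\hat{D}$ and $\hat{Q}$ against $\hat{Q}$.

```python
import numpy as np

r, p, t, s = 3, 1, 5, 0
C = np.eye(2, dtype=complex)
D = np.array([[0, 1], [-1, 0]], dtype=complex)
Q = np.array([[1j, 0], [0, -1j]])

def payoff_A(UA, UB, gamma):
    J = np.cos(gamma / 2) * np.eye(4) + 1j * np.sin(gamma / 2) * np.kron(D, D)
    psi = J.conj().T @ np.kron(UA, UB) @ J[:, 0]
    pcc, pcd, pdc, pdd = np.abs(psi) ** 2
    return r * pcc + p * pdd + t * pdc + s * pcd

gamma = 0.4
pay_CQ = payoff_A(C, Q, gamma)   # (P(Q))_11 = r - (r - p) sin^2(gamma)
pay_DQ = payoff_A(D, Q, gamma)   # (P(Q))_22 = t - (t - s) sin^2(gamma)
pay_QQ = payoff_A(Q, Q, gamma)   # (P(Q))_33 = r
```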
From the above analysis, we can see that when $0\leqslant\gamma<\gamma_{th1}$, $\hat{D}\otimes\hat{D}$ is a Nash equilibrium of the game, and when $\gamma_{th2}<\gamma\leqslant\pi/2$, $\hat{Q}\otimes\hat{Q}$ is a Nash equilibrium of the game. If $\gamma_{th1}<\gamma_{th2}$ and $\gamma _{th1}\leqslant\gamma\leqslant\gamma_{th2}$, $\hat{D}$ dominates $\hat{Q} $ and $\hat{Q}$ dominates $\hat{D}$, hence both $\hat{D}\otimes\hat{Q}$ and $\hat{Q}\otimes\hat{D}$ are Nash equilibria of the game. If instead $\gamma_{th2}<\gamma_{th1}$ and $\gamma_{th2}\leqslant\gamma\leqslant \gamma_{th1}$, both $\hat{D}\otimes\hat{D}$ and $\hat{Q}\otimes\hat{Q}$ are Nash equilibria of the game. The corresponding payoffs are also obtained.
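The two thresholds and the equilibrium payoffs in each region can be checked numerically. The sketch below (my own illustrative code, $(r,p,t,s)=(3,1,5,0)$, so $r+p<t+s$) evaluates the payoffs at a $\gamma$ inside the transitional region; note that $\$_{A}\left( \hat{D},\hat{D}\right) =p$ and $\$_{A}\left( \hat{Q},\hat{Q}\right) =r$ hold at every $\gamma$.

```python
import numpy as np

r, p, t, s = 3, 1, 5, 0
g_th1 = np.arcsin(np.sqrt((p - s) / (t - s)))   # ~ 0.464
g_th2 = np.arcsin(np.sqrt((t - r) / (t - s)))   # ~ 0.685

C = np.eye(2, dtype=complex)
D = np.array([[0, 1], [-1, 0]], dtype=complex)
Q = np.array([[1j, 0], [0, -1j]])

def payoff_A(UA, UB, gamma):
    J = np.cos(gamma / 2) * np.eye(4) + 1j * np.sin(gamma / 2) * np.kron(D, D)
    psi = J.conj().T @ np.kron(UA, UB) @ J[:, 0]
    pcc, pcd, pdc, pdd = np.abs(psi) ** 2
    return r * pcc + p * pdd + t * pdc + s * pcd

g_mid = (g_th1 + g_th2) / 2        # inside the transitional region
pay_DD = payoff_A(D, D, g_mid)     # = p at any gamma
pay_QQ = payoff_A(Q, Q, g_mid)     # = r at any gamma
pay_DQ = payoff_A(D, Q, g_mid)     # = s + (t - s) cos^2(gamma)
```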
\begin{figure}
\caption{The payoff function of Alice with respect to the amount of the entanglement in the case of two-parameter strategies. The numerical values in the payoff matrix are set as $(r=3,p=1,t=5,s=0)$ such that $r+p<t+s$. The region between two thresholds are the transitional region from classical to quantum, in which the game has two asymmetric Nash equilibria although the game is symmetric with respect to the interchange of the players.}
\label{fig2}
\end{figure}
In the case that the entries in the payoff table are taken as $\left( r=3,p=1,t=5,s=0\right) $, which was investigated in Ref. \cite{8}, the game has two thresholds for the amount of the game's entanglement. Due to the two thresholds, the game is divided into three regions: the classical region, the quantum region, and the transitional region from classical to quantum. In the general quantum Prisoners' Dilemma, there still exist two thresholds and the phase-transition-like behavior shows up again. However, the situation may be more complicated because the two thresholds have no deterministic relation in magnitude. In fact, the case $\left( r=3,p=1,t=5,s=0\right) $ is just an instance of the more general case $r+p<t+s$. For the game under this condition, it is obvious that $\gamma_{th1}<\gamma_{th2}$ and the game behaves similarly to the one with $\left( r=3,p=1,t=5,s=0\right) $. Fig. \ref{fig2} depicts the payoff of Alice as a function of $\gamma$ when both players resort to a Nash equilibrium in the case $r+p<t+s$. In the transitional region, the two Nash equilibria are fully equivalent. Since there is no communication between the two players, one player has no idea which equilibrium strategy the other chooses, so a strategy mismatch will probably occur. A more severe problem is that, since strategy $\hat{D}$ leads to a better payoff, both players will be tempted to choose $\hat{D}$, and the final payoff for both of them will then be $p$, which is precisely the catch of the dilemma in the classical game.
An interesting situation arises when $\gamma_{th1}=\gamma_{th2}$: the transitional region disappears. The condition $\gamma_{th1} =\gamma_{th2}$ implies that \begin{equation} r+p=t+s.\label{eq 10} \end{equation} We should keep in mind that the basic condition $t>r>p>s$ must still be satisfied to maintain the properties of the classical game. Under the condition in Eq. (\ref{eq 10}) the game has only one threshold for its entanglement, $\gamma_{th}=\gamma_{th1}=\gamma_{th2}$. Hence the game exhibits only two regions, one classical and the other quantum. The transitional region, in which the game has two asymmetric Nash equilibria, disappears. Under the conditions $r+p=t+s$ and $t>r>p>s$, we plot the payoff of Alice as a function of $\gamma$ in Fig. \ref{fig3} when both players resort to a Nash equilibrium.
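The coincidence of the two thresholds under Eq. (\ref{eq 10}) is immediate to verify. A small check with the illustrative values $(r,p,t,s)=(3,2,5,0)$ used in Fig. \ref{fig3}:

```python
import numpy as np

def thresholds(r, p, t, s):
    """Both entanglement thresholds; requires the basic condition t > r > p > s."""
    assert t > r > p > s
    g1 = np.arcsin(np.sqrt((p - s) / (t - s)))
    g2 = np.arcsin(np.sqrt((t - r) / (t - s)))
    return g1, g2

g1, g2 = thresholds(3, 2, 5, 0)   # r + p = t + s, so g1 == g2 (single threshold)
```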
\begin{figure}
\caption{The payoff function of Alice with respect to the amount of the entanglement in the case of two-parameter strategies. The numerical values in the payoff matrix are set as $(r=3,p=2,t=5,s=0)$ such that $r+p=t+s$. The two thresholds converge to be a unique one $\gamma_{th}$ and the transitional region no longer exists.}
\label{fig3}
\end{figure}
Now we consider what happens in the game with $r+p>t+s$. In this case, we have $\gamma_{th1}>\gamma_{th2}$. Therefore the game has no transitional region, and neither $\hat{D}\otimes\hat{Q}$ nor $\hat{Q}\otimes\hat{D}$ is a Nash equilibrium of the game. However, both $\hat{D}\otimes\hat{D}$ and $\hat{Q}\otimes\hat{Q}$ are still Nash equilibria in the region $\gamma _{th2}\leqslant\gamma\leqslant\gamma_{th1}$. So for $\gamma_{th2} \leqslant\gamma\leqslant\gamma_{th1}$, a new region --- the coexistent region --- arises, with two Nash equilibria. These two Nash equilibria are both symmetric with respect to the interchange of the two players. In this case, we illustrate the payoff of Alice as a function of $\gamma$ in Fig. \ref{fig4}. We should also note that in this case the multiple Nash equilibria bring a situation different from that in the transitional region with $r+p<t+s$. The two Nash equilibria are not equivalent: $\hat{Q}\otimes\hat{Q}$ gives higher payoffs to both players than does $\hat{D}\otimes\hat{D}$. Therefore it is quite reasonable to assume that the players are most likely to resort to the equilibrium $\hat{Q}\otimes\hat{Q}$ rather than $\hat{D}\otimes\hat{D}$, since they are both trying to maximize their individual payoffs. One still cannot claim that the players will definitely resort to the equilibrium that gives higher payoffs. But if they do, the final results of the game will be the same as in the quantum region with $\gamma>\gamma_{th1}$, and the dilemma will be resolved.
An interesting question is whether the game can behave fully quantum-mechanically no matter how much it is entangled, for some particular numerical values of $\left( r,p,t,s\right) $, \textit{i.e.} have only the quantum region (without the presence of the classical, transitional or coexistent regions). If it could, we would immediately deduce that $\gamma_{th2}=0$. This means $t=r$, which contradicts the basic condition $t>r>p>s$. Hence the game cannot have $\hat{Q}\otimes\hat{Q}$ as its equilibrium on the whole domain of $\gamma$ from $0$ to $\pi/2$, as long as the game remains a \textquotedblleft Prisoners' Dilemma\textquotedblright. In fact, as long as the condition $t>r>p>s$ holds, neither $\gamma_{th1}$ nor $\gamma_{th2}$ can reach $0$ or $\pi/2$, so neither the classical region nor the quantum region will disappear.
\begin{figure}
\caption{The payoff function of Alice with respect to the amount of the entanglement in the case of two-parameter strategies. The numerical values in the payoff matrix are set as $(r=3,p=2,t=4,s=0)$ such that $r+p>t+s$. The coexistent region emerges, in which both $\hat{D}\otimes\hat{D}$ and $\hat {Q}\otimes\hat{Q}$ are Nash equilibria.}
\label{fig4}
\end{figure}
\section{\label{sec4}General Unitary Operations}
In this section, we investigate the generalized quantum Prisoners' Dilemma when both players have access to any unitary operations as their strategies, rather than to the restricted subset in Eq. (\ref{eq 4}). The method of analysis is described in Section \ref{sec2}. The result is that there exists a boundary $\gamma_{B}=\arcsin\sqrt{\left( p-s\right) /\left( p+t-r-s\right) }$ for the game's entanglement. If $\gamma<\gamma_{B}$, there are infinitely many Nash equilibria: any strategic profile $\left\{ \left( 0,\alpha,\beta,0\right) ,\left( 0,\beta,\alpha,0\right) \right\} $ ($\alpha^{2}+\beta^{2}=1$) is a Nash equilibrium, and each of them results in the same payoffs $\$_{A}=\$_{B}=p+\left( r-p\right) \sin^{2}\gamma$. As long as $\gamma>\gamma_{B}$, however, the game has no pure strategic Nash equilibrium. We prove these results as follows.
For the strategy $u_{1} = \left( 0,\alpha,\beta,0\right) $ ($\alpha^{2}+\beta^{2} =1$), we have (the calculation could be found in \ref{app1}), with $\epsilon\equiv\sin^{2}\gamma$, \begin{equation} \fl P\left( u_{1} \right) =\left( \begin{array} [c]{cccc} s+\left( t-s\right) \alpha^{2}\epsilon & 0 & 0 & \left( s-t\right) \alpha\beta\epsilon\\ 0 & p+\left( r-p\right) \beta^{2}\epsilon & \left( r-p\right) \alpha \beta\epsilon & 0\\ 0 & \left( r-p\right) \alpha\beta\epsilon & p+\left( r-p\right) \alpha ^{2}\epsilon & 0\\ \left( s-t\right) \alpha\beta\epsilon & 0 & 0 & s+\left( t-s\right) \beta^{2}\epsilon \end{array} \right) .\label{eq 17} \end{equation} The eigenvalues and corresponding eigenvectors of $P\left( \left( 0,\alpha,\beta,0\right) \right) $ in Eq. (\ref{eq 17}) are
\begin{equation} \left\{ \begin{array} [c]{ll} p & \quad\left( 0,\alpha,-\beta,0\right) \\ s & \quad\left( \beta,0,0,\alpha\right) \\ p+\left( r-p\right) \sin^{2}\gamma & \quad\left( 0,\beta,\alpha,0\right) \\ s+\left( t-s\right) \sin^{2}\gamma & \quad\left( \alpha,0,0,-\beta\right) \end{array} \right. .\label{eq 20} \end{equation} If $\gamma<\gamma_{B}$, the maximal eigenvalue is $p+\left( r-p\right) \sin^{2}\gamma$ and the corresponding eigenvector is $\left( 0,\beta ,\alpha,0\right) $. Therefore $\left( 0,\beta,\alpha,0\right) $ dominates $\left( 0,\alpha,\beta,0\right) $, and vice versa (by exchanging $\alpha$ and $\beta$ in Eqs. (\ref{eq 17}, \ref{eq 20})). Hence any strategic profile $\left\{ \left( 0,\alpha,\beta,0\right) ,\left( 0,\beta,\alpha,0\right) \right\} $ ($\alpha^{2}+\beta^{2}=1$) is a Nash equilibrium.
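The spectrum in Eq. (\ref{eq 20}) can be reproduced numerically by reconstructing $P\left( \left( 0,\alpha,\beta,0\right) \right) $ from simulated payoffs via polarization of the quadratic form. The code below is an illustrative check of my own with $(r,p,t,s)=(3,1,5,0)$, $\alpha=0.6$, $\beta=0.8$ and $\gamma=0.5<\gamma_{B}$.

```python
import numpy as np

r, p, t, s = 3, 1, 5, 0
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
Dhat = np.array([[0, 1], [-1, 0]], dtype=complex)

def U(u):
    w, x, y, z = u
    return w * I2 + 1j * (x * sx + y * sy + z * sz)

def payoff_A(uA, uB, gamma):
    J = np.cos(gamma / 2) * np.eye(4) + 1j * np.sin(gamma / 2) * np.kron(Dhat, Dhat)
    psi = J.conj().T @ np.kron(U(uA), U(uB)) @ J[:, 0]
    pcc, pcd, pdc, pdd = np.abs(psi) ** 2
    return r * pcc + p * pdd + t * pdc + s * pcd

def P_of(u, gamma):
    """Recover the symmetric matrix P(u) from simulated payoffs by polarization."""
    E, P = np.eye(4), np.zeros((4, 4))
    for i in range(4):
        P[i, i] = payoff_A(E[i], u, gamma)
    for i in range(4):
        for j in range(i + 1, 4):
            v = (E[i] + E[j]) / np.sqrt(2)
            P[i, j] = P[j, i] = payoff_A(v, u, gamma) - (P[i, i] + P[j, j]) / 2
    return P

gamma, alpha, beta = 0.5, 0.6, 0.8
eps = np.sin(gamma) ** 2
lam, vec = np.linalg.eigh(P_of((0, alpha, beta, 0), gamma))
expected = sorted([p, s, p + (r - p) * eps, s + (t - s) * eps])  # Eq. (20)
spectrum_ok = np.allclose(lam, expected)
# gamma < gamma_B here, so the top eigenvector is (0, beta, alpha, 0) up to sign
top_ok = np.allclose(np.abs(vec[:, -1]), [0, beta, alpha, 0])
```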
If $\gamma>\gamma_{B}$, the dominant strategy against $\left( 0,\alpha,\beta,0\right) $ turns out to be $\left( \alpha,0,0,-\beta\right) $. For the strategy $ u_{2} = \left( \alpha,0,0,-\beta\right) $, we have (the calculation can be found in \ref{app1}), with $\epsilon\equiv \sin^{2}\gamma$, \begin{equation} \fl P\left( u_{2} \right) =\left( \begin{array} [c]{cccc} r+\left( p-r\right) \beta^{2}\epsilon & 0 & 0 & \left( r-p\right) \alpha\beta\epsilon\\ 0 & t+\left( s-t\right) \alpha^{2}\epsilon & \left( s-t\right) \alpha \beta\epsilon & 0\\ 0 & \left( s-t\right) \alpha\beta\epsilon & t+\left( s-t\right) \beta ^{2}\epsilon & 0\\ \left( r-p\right) \alpha\beta\epsilon & 0 & 0 & r+\left( p-r\right) \alpha^{2}\epsilon \end{array} \right) .\label{eq 18} \end{equation} The eigenvalues and corresponding eigenvectors of $P\left( \left( \alpha,0,0,-\beta\right) \right) $ in Eq. (\ref{eq 18}) are
\begin{equation} \left\{ \begin{array} [c]{ll} r & \quad\left( \alpha,0,0,\beta\right) \\ t & \quad\left( 0,\beta,-\alpha,0\right) \\ r+\left( p-r\right) \sin^{2}\gamma & \quad\left( \beta,0,0,-\alpha\right) \\ t+\left( s-t\right) \sin^{2}\gamma & \quad\left( 0,\alpha,\beta,0\right) \end{array} \right. .\label{eq 21} \end{equation} In Eq. (\ref{eq 21}), $\left( 0,\beta,-\alpha,0\right) $ always corresponds to the maximal eigenvalue $t$. Therefore, no matter what the amount of entanglement is, $\left( 0,\beta,-\alpha,0\right) $ always dominates $\left( \alpha,0,0,-\beta\right) $. With further analysis combining Eq. (\ref{eq 20}) and Eq. (\ref{eq 21}), we find that when $\gamma>\gamma_{B}$, $\left( \alpha,0,0,-\beta\right) $ dominates $\left( 0,\alpha ,\beta,0\right) $, $\left( 0,\beta,-\alpha,0\right) $ dominates $\left( \alpha,0,0,-\beta\right) $, $\left( \beta,0,0,\alpha\right) $ dominates $\left( 0,\beta,-\alpha,0\right) $, and finally $\left( 0,\alpha ,\beta,0\right) $ dominates $\left( \beta,0,0,\alpha\right) $. No pair of them can form a Nash equilibrium. In fact, it can be proved that no pair of strategies in the region $\gamma>\gamma_{B}$ can form a pure Nash equilibrium of the game. However, the game still has mixed Nash equilibria \cite{16}.
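One link of this dominance chain is easy to verify directly: by Eq. (\ref{eq 21}), the strategy $\left( 0,\beta,-\alpha,0\right) $ earns the payoff $t$ against $\left( \alpha,0,0,-\beta\right) $ at every $\gamma$. A numerical check (my own illustrative code, $(r,p,t,s)=(3,1,5,0)$):

```python
import numpy as np

r, p, t, s = 3, 1, 5, 0
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
Dhat = np.array([[0, 1], [-1, 0]], dtype=complex)

def U(u):
    w, x, y, z = u
    return w * I2 + 1j * (x * sx + y * sy + z * sz)

def payoff_A(uA, uB, gamma):
    J = np.cos(gamma / 2) * np.eye(4) + 1j * np.sin(gamma / 2) * np.kron(Dhat, Dhat)
    psi = J.conj().T @ np.kron(U(uA), U(uB)) @ J[:, 0]
    pcc, pcd, pdc, pdd = np.abs(psi) ** 2
    return r * pcc + p * pdd + t * pdc + s * pcd

alpha, beta = 0.6, 0.8
u2 = (alpha, 0, 0, -beta)
reply = (0, beta, -alpha, 0)
pays = [payoff_A(reply, u2, g) for g in (0.2, 0.9, 1.4)]  # always equals t
```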
We depict the payoff of Alice as a function of the amount of entanglement, when both players resort to a Nash equilibrium (if there is one), in Fig. \ref{fig5}. This figure also exhibits the phase-transition-like behavior of the game. The boundary of entanglement divides the game into two regions: in one the game has infinitely many Nash equilibria, while in the other the game has no pure strategic Nash equilibrium.
\begin{figure}
\caption{The payoff function of Alice with respect to the amount of the entanglement in the case that both players are allowed to adopt any unitary operator as his/her strategy.}
\label{fig5}
\end{figure}
\section{Discussion and Conclusion}
In this paper, we investigate the discontinuous dependence of the Nash equilibria and payoffs on the game's entanglement for the general quantum Prisoners' Dilemma. This discontinuity can be viewed as phase-transition-like behavior in the payoff-entanglement diagram. We first investigate the generalized quantum Prisoners' Dilemma when the strategic space is restricted to a two-parameter subset of $SU\left( 2\right) $ as in Ref. \cite{4}. Under the condition $r+p<t+s$, the game exhibits the classical, quantum and transitional regions in its payoff-entanglement diagram. The original Prisoners' Dilemma with $\left( r=3,p=1,t=5,s=0\right) $ is just an instance of the general game with $r+p<t+s$. In the classical region $\hat{D}\otimes\hat{D}$ is the unique Nash equilibrium, and in the quantum region the unique Nash equilibrium is $\hat{Q}\otimes\hat{Q}$. In the transitional region, two asymmetric Nash equilibria, $\hat{D}\otimes\hat{Q}$ and $\hat{Q}\otimes\hat{D}$, emerge, each leading to an asymmetric result of the game in spite of the symmetry of the game itself. If the entries in the payoff table satisfy $r+p=t+s$, the transitional region disappears. The game then has only one threshold for the amount of its entanglement, at which it transits from classical to quantum discontinuously. In the case $r+p>t+s$, a new region --- the coexistent region --- emerges, replacing the transitional region. This new region is in fact where the classical region and the quantum region overlap. In the coexistent region, the game has both $\hat{D}\otimes\hat{D}$ and $\hat{Q}\otimes\hat{Q}$ as its Nash equilibria. Since $\hat{Q}\otimes\hat{Q}$ is superior to $\hat{D}\otimes \hat{D}$, one may expect both players to be most likely to choose $\hat{Q}$ as their strategy, and the dilemma will be resolved if they do so.
We also explored the phase-transition-like behavior of the quantum game in the case where both players are allowed to adopt any unitary transformation as their strategy. The game has a boundary for its entanglement, a function of the numerical values in the payoff table, below which the game has infinitely many Nash equilibria, and above which the game has no pure strategic Nash equilibrium.
The phase-transition-like behavior presented in this paper resembles phase transitions in real physical systems \cite{14}, not only phenomenologically but also mathematically. For a physical system whose Hamiltonian depends on some parameter, a special case is that the eigenfunctions of the Hamiltonian are independent of the parameter even though the eigenvalues vary with it. Then there can be a level-crossing where an excited level becomes the ground state, creating a point of non-analyticity of the ground state energy as a function of the parameter, as well as a discontinuous dependence of the ground state on the parameter. A quantum phase transition is hence viewed as any point of non-analyticity in the ground state energy of the system concerned. In the generalized quantum Prisoners' Dilemma, the dominant strategy against a given strategy $u$ is the eigenvector that corresponds to the maximal eigenvalue of the matrix $P\left( u\right) $ (see Section \ref{sec2}). Since $P\left( u\right) $ is a function of the amount of entanglement $\gamma$, the eigenvalues may cross. This eigenvalue-crossing makes the eigenvector that corresponds to the maximal eigenvalue change discontinuously. It also creates a non-analyticity of the payoff (the maximal eigenvalue) as a function of $\gamma$, and the game exhibits phase-transition-like behavior. The method proposed in this paper helps to illuminate the origin of the phase-transition-like behavior of quantum games, and we hope it will help investigate quantum games more intensively, so that more profound results may be derived.
\ack This work was supported by the National Natural Science Foundation of China (Grants No. 10075041 and No. 10075044), the National Fundamental Research Program (Grant No. 2001CB309300) and the ASTAR Grant No. 012-104-0040.
\appendix
\section{\label{app1}Calculations For General Unitary Operations}
Denote Alice's strategy by $u_{A}=\left( u_{A}^{1},u_{A}^{2},u_{A}^{3} ,u_{A}^{4}\right) $ and Bob's by $u_{B}=\left( u_{B}^{1},u_{B}^{2},u_{B} ^{3},u_{B}^{4}\right) $. Substituting Eq. (\ref{eq 5}) into Eq. (\ref{eq 2}), we have \begin{eqnarray} \fl \left\vert \psi_{f}\right\rangle =&[\left( u_{A}^{1}u_{B}^{1}-u_{A} ^{4}u_{B}^{4}\right) +i\left( u_{A}^{4}u_{B}^{1}+u_{A}^{1}u_{B}^{4}\right) \cos\gamma-\left( u_{A}^{3}u_{B}^{2}+u_{A}^{2}u_{B}^{3}\right) \sin \gamma]\left\vert CC\right\rangle +\nonumber\\ \fl & [-\left( u_{A}^{1}u_{B}^{3}+u_{A}^{4}u_{B}^{2}\right) +i\left( u_{A} ^{1}u_{B}^{2}-u_{A}^{4}u_{B}^{3}\right) \cos\gamma+\left( u_{A}^{3}u_{B} ^{4}-u_{A}^{2}u_{B}^{1}\right) \sin\gamma]\left\vert CD\right\rangle +\nonumber\\ \fl & [-\left( u_{A}^{3}u_{B}^{1}+u_{A}^{2}u_{B}^{4}\right) +i\left( u_{A} ^{2}u_{B}^{1}-u_{A}^{3}u_{B}^{4}\right) \cos\gamma+\left( u_{A}^{4}u_{B} ^{3}-u_{A}^{1}u_{B}^{2}\right) \sin\gamma]\left\vert DC\right\rangle +\nonumber\\ \fl & [\left( u_{A}^{3}u_{B}^{3}-u_{A}^{2}u_{B}^{2}\right) -i\left( u_{A} ^{3}u_{B}^{2}+u_{A}^{2}u_{B}^{3}\right) \cos\gamma+\left( u_{A}^{4}u_{B} ^{1}+u_{A}^{1}u_{B}^{4}\right) \sin\gamma]\left\vert DD\right\rangle . \fl \label{eq app1} \end{eqnarray} Since the game is symmetric with respect to the interchange of the players, we have \begin{equation} \$_{A}\left( u_{A},u_{B}\right) \equiv\$_{B}\left( u_{B},u_{A}\right) ,\forall u_{A},u_{B}\in SU\left( 2\right) ,\label{eq app2} \end{equation} and we can immediately see from Eq. (\ref{eq 14}) that \begin{equation} \$_{ij,kl}^{A}\equiv\$_{ij,kl}^{B}, i,j=1,2,3,4.\label{eq app3} \end{equation} Moreover, $P\left( u\right) $ (in Eq. (\ref{eq 16})) is symmetric too. Therefore we can define $\$_{ij,kl}\equiv\$_{ij,kl}^{A}\equiv\$_{ij,kl}^{B}$ for convenience. Substituting Eq. (\ref{eq app1}) into Eqs. 
(\ref{eq 3}, \ref{eq 14}), we can find the non-zero elements of $\left( \$_{ij,kl}\right) $ are (with $\$_{ij,kl}=\$_{ji,kl}=\$_{ij,lk}=\$_{ji,lk}$) \begin{eqnarray} \$_{11,11} & =\$_{44,44}=r,\$_{11,33}=\$_{44,22}=s,\nonumber\\ \$_{22,22} & =\$_{33,33}=p,\$_{22,44}=\$_{33,11}=t,\nonumber\\ \$_{11,22} & =\$_{44,33}=s+\left( t-s\right) \sin^{2}\gamma,\$_{11,44} =\$_{44,11}=r+\left( p-r\right) \sin^{2}\gamma,\nonumber\\ \$_{22,11} & =\$_{33,44}=t+\left( s-t\right) \sin^{2}\gamma,\$_{22,33} =\$_{33,22}=p+\left( r-p\right) \sin^{2}\gamma,\nonumber\\ \$_{12,13} & =-\$_{34,24}=\frac{1}{2}\left( s-r\right) \sin\gamma ,\$_{12,24}=-\$_{34,13}=\frac{1}{2}\left( t-p\right) \sin\gamma,\nonumber\\ \$_{13,12} & =-\$_{24,34}=\frac{1}{2}\left( t-r\right) \sin\gamma ,\$_{13,34}=-\$_{24,12}=\frac{1}{2}\left( p-s\right) \sin\gamma,\nonumber\\ \$_{14,14} & =-\$_{23,23}=\frac{1}{2}\left( p-r\right) \sin^{2} \gamma,\$_{14,23}=-\$_{23,14}=\frac{1}{2}\left( s-t\right) \sin^{2} \gamma.\label{eq app4} \end{eqnarray}
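Because $\$_{A}\left( e_{i},e_{k}\right) =\$_{ii,kk}$ when the players' strategies are basis 4-vectors $e_{i}$, several entries of Eq. (\ref{eq app4}) can be confirmed by direct simulation. The snippet below is an illustrative cross-check of my own with $(r,p,t,s)=(3,1,5,0)$.

```python
import numpy as np

r, p, t, s = 3, 1, 5, 0
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
Dhat = np.array([[0, 1], [-1, 0]], dtype=complex)

def U(u):
    w, x, y, z = u
    return w * I2 + 1j * (x * sx + y * sy + z * sz)

def payoff_A(uA, uB, gamma):
    J = np.cos(gamma / 2) * np.eye(4) + 1j * np.sin(gamma / 2) * np.kron(Dhat, Dhat)
    psi = J.conj().T @ np.kron(U(uA), U(uB)) @ J[:, 0]
    pcc, pcd, pdc, pdd = np.abs(psi) ** 2
    return r * pcc + p * pdd + t * pdc + s * pcd

gamma = 0.7
e = np.eye(4)
v_11_11 = payoff_A(e[0], e[0], gamma)   # $_{11,11} = r
v_11_22 = payoff_A(e[0], e[1], gamma)   # $_{11,22} = s + (t - s) sin^2(gamma)
v_22_11 = payoff_A(e[1], e[0], gamma)   # $_{22,11} = t + (s - t) sin^2(gamma)
v_22_33 = payoff_A(e[1], e[2], gamma)   # $_{22,33} = p + (r - p) sin^2(gamma)
```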
For strategy $\left( 0,\alpha,\beta,0\right) $, we see that \begin{eqnarray} \left( P\left( \left( 0,\alpha,\beta,0\right) \right) \right) _{ij} & =\left( P\left( \left( 0,\alpha,\beta,0\right) \right) \right) _{ji}\nonumber\\ & =\alpha^{2}\$_{ij,22}+\beta^{2}\$_{ij,33}+\alpha\beta\left( \$_{ij,23} +\$_{ij,32}\right) \nonumber\\ & =\alpha^{2}\$_{ij,22}+\beta^{2}\$_{ij,33}+2\alpha\beta\$_{ij,23} \label{eq app5} \end{eqnarray} and for $\left( \alpha,0,0,-\beta\right) $ we have \begin{eqnarray} \left( P\left( \left( \alpha,0,0,-\beta\right) \right) \right) _{ij} & =\left( P\left( \left( \alpha,0,0,-\beta\right) \right) \right) _{ji}\nonumber\\ & =\alpha^{2}\$_{ij,11}+\beta^{2}\$_{ij,44}-\alpha\beta\left( \$_{ij,14} +\$_{ij,41}\right) \nonumber\\ & =\alpha^{2}\$_{ij,11}+\beta^{2}\$_{ij,44}-2\alpha\beta\$_{ij,14} .\label{eq app6} \end{eqnarray} Therefore Eq. (\ref{eq 17}) and (\ref{eq 18}) are obtained.
\section{\label{app2}Calculations For Two-Parameter Strategic Space}
The two-parameter strategic space can be obtained by restricting $u^{2}=x\equiv0$ in the general case ($u^{2}=x$ is the second component of $u$, not its squared length). The expressions for $\$_{ij,kl}$ can therefore be obtained from Eqs. (\ref{eq app4}) by excluding all elements containing the index $2$, and then replacing index $3$ by $2$ and index $4$ by $3$. Thus, for the two-parameter strategic space, the non-zero elements (with $\$_{ij,kl}=\$_{ji,kl}=\$_{ij,lk}=\$_{ji,lk}$) are as follows. \begin{eqnarray} \$_{11,11} & =\$_{33,33}=r,\quad\$_{11,22}=s,\quad\$_{22,22}=p,\quad\$_{22,11}=t,\nonumber\\ \$_{33,22} & =s+\left( t-s\right) \sin^{2}\gamma,\quad\$_{11,33}=\$_{33,11}=r+\left( p-r\right) \sin^{2}\gamma,\nonumber\\ \$_{22,33} & =t+\left( s-t\right) \sin^{2}\gamma,\quad\$_{13,13}=\frac{1}{2}\left( p-r\right) \sin^{2}\gamma,\nonumber\\ \$_{23,12} & =\frac{1}{2}\left( p-t\right) \sin\gamma,\quad\$_{12,23}=\frac{1}{2}\left( p-s\right) \sin\gamma. \end{eqnarray} Since $\hat{D}\sim\left( 0,1,0\right) $ and $\hat{Q}\sim\left( 0,0,1\right) $, it is straightforward to see that \begin{eqnarray} \left( P\left( \hat{D}\right) \right) _{ij} & =\left( P\left( \hat {D}\right) \right) _{ji}=\$_{ij,22},\\ \left( P\left( \hat{Q}\right) \right) _{ij} & =\left( P\left( \hat {Q}\right) \right) _{ji}=\$_{ij,33}, \end{eqnarray} with $i,j=1,2,3$. The expressions in Eqs. (\ref{eq 8}) and (\ref{eq 9}) are hence obtained.
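The exclusion-and-relabelling rule just described can also be checked mechanically. The sketch below (an illustration only; $r,s,t,p,\gamma$ are arbitrary test values) rebuilds the two-parameter table from Eq. (\ref{eq app4}) and compares it with the table stated above:

```python
import math

# Arbitrary test values for the payoffs and the entanglement angle.
r, s, t, p = 3.0, 0.0, 5.0, 1.0
gamma = 0.7
sg, s2 = math.sin(gamma), math.sin(gamma) ** 2

# Representatives of the non-zero elements of Eq. (app4).
base4 = {
    (1,1,1,1): r, (4,4,4,4): r, (1,1,3,3): s, (4,4,2,2): s,
    (2,2,2,2): p, (3,3,3,3): p, (2,2,4,4): t, (3,3,1,1): t,
    (1,1,2,2): s + (t-s)*s2, (4,4,3,3): s + (t-s)*s2,
    (1,1,4,4): r + (p-r)*s2, (4,4,1,1): r + (p-r)*s2,
    (2,2,1,1): t + (s-t)*s2, (3,3,4,4): t + (s-t)*s2,
    (2,2,3,3): p + (r-p)*s2, (3,3,2,2): p + (r-p)*s2,
    (1,2,1,3): 0.5*(s-r)*sg, (3,4,2,4): -0.5*(s-r)*sg,
    (1,2,2,4): 0.5*(t-p)*sg, (3,4,1,3): -0.5*(t-p)*sg,
    (1,3,1,2): 0.5*(t-r)*sg, (2,4,3,4): -0.5*(t-r)*sg,
    (1,3,3,4): 0.5*(p-s)*sg, (2,4,1,2): -0.5*(p-s)*sg,
    (1,4,1,4): 0.5*(p-r)*s2, (2,3,2,3): -0.5*(p-r)*s2,
    (1,4,2,3): 0.5*(s-t)*s2, (2,3,1,4): -0.5*(s-t)*s2,
}

# Representatives of the two-parameter table quoted above.
base2 = {
    (1,1,1,1): r, (3,3,3,3): r, (1,1,2,2): s, (2,2,2,2): p, (2,2,1,1): t,
    (3,3,2,2): s + (t-s)*s2,
    (1,1,3,3): r + (p-r)*s2, (3,3,1,1): r + (p-r)*s2,
    (2,2,3,3): t + (s-t)*s2, (1,3,1,3): 0.5*(p-r)*s2,
    (2,3,1,2): 0.5*(p-t)*sg, (1,2,2,3): 0.5*(p-s)*sg,
}

def symmetrize(base):
    """Impose $_{ij,kl} = $_{ji,kl} = $_{ij,lk} = $_{ji,lk}."""
    full = {}
    for (i, j, k, l), v in base.items():
        for a, b in ((i, j), (j, i)):
            for c, d in ((k, l), (l, k)):
                full[(a, b, c, d)] = v
    return full

relabel = {1: 1, 3: 2, 4: 3}  # delete index 2, then 3 -> 2 and 4 -> 3
reduced = {tuple(relabel[x] for x in idx): v
           for idx, v in symmetrize(base4).items() if 2 not in idx}
two_param = symmetrize(base2)

assert set(reduced) == set(two_param)
assert all(math.isclose(reduced[k], two_param[k]) for k in reduced)
```

Every surviving orbit of Eq. (\ref{eq app4}) lands on exactly one entry of the two-parameter table, confirming the reduction rule.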
\section*{References}
\end{document}
\begin{document}
\title[Volume invariant and maximal representation] {Volume invariant and maximal representations of discrete subgroups of Lie groups}
\author{Sungwoon Kim} \address{School of Mathematics, KIAS, Hoegiro 85, Dongdaemun-gu, Seoul, 130-722, Republic of Korea} \email{sungwoon@kias.re.kr}
\author{Inkang Kim} \address{School of Mathematics KIAS, Hoegiro 85, Dongdaemun-gu, Seoul, 130-722, Republic of Korea} \email{inkang@kias.re.kr}
\footnotetext[1]{2000 {\sl{Mathematics Subject Classification.}} 22E46, 57R20, 53C35} \footnotetext[2]{{\sl{Key words and phrases.}} volume invariant, lattice, representation variety, semisimple Lie group, Toledo invariant, maximal representation.} \footnotetext[3]{The second
author gratefully acknowledges the partial support of KRF grant (0409-20060066).}
\begin{abstract} Let $\Gamma$ be a lattice in a connected semisimple Lie group $G$ with trivial center and no compact factors. We introduce a volume invariant for representations of $\Gamma$ into $G$, which generalizes the volume invariant for representations of uniform lattices introduced by Goldman. Then, we show that the maximality of this volume invariant exactly characterizes discrete, faithful representations of $\Gamma$ into $G$. \end{abstract}
\maketitle
\section{Introduction}
A volume invariant is defined to characterize discrete, faithful representations of a discrete group $\Gamma$ into a connected semisimple Lie group $G$. For a uniform lattice $\Gamma$, Goldman \cite{Go92} introduced a volume invariant $\upsilon(\rho)$ of a representation $\rho \colon\thinspace \Gamma \rightarrow G$ as follows: Let $X$ be the associated symmetric space of dimension $n$ and $M=\Gamma\backslash X$. To every representation $\rho \colon\thinspace \Gamma \rightarrow G$, a bundle $E_\rho$ over $M$ with fibre $X$ and structure group $G$ is associated. One can obtain a closed $n$--form $\omega_\rho$ on $E_\rho$ by spreading the $G$--invariant volume form $\omega$ on $X$ over the fibres of $E_\rho$. Then, the volume invariant $\upsilon(\rho)$ of $\rho$ is defined by $$ \upsilon (\rho) = \int_M f^*\omega_\rho,$$ where $f$ is a section of $E_\rho$.
The definition of the volume invariant $\upsilon(\rho)$ is independent of the choice of a section since $X$ is contractible. It can be easily seen that the volume invariant $\upsilon(\rho)$ satisfies an inequality
\begin{eqnarray}\label{eqn:1.1} |\upsilon(\rho)| \leq \mathrm{Vol}(M),\end{eqnarray} which recovers the Milnor-Wood inequality for $G=\mathrm{PSL}_2(\mathbb{R})$. Note that the volume invariant $\upsilon(\rho)$ is available only for representations of uniform lattices. Goldman \cite{Go92} conjectured the following and gave a positive answer for all connected semisimple Lie groups except for $\mathrm{SU}(n,1), \mathrm{Sp}(n,1), \mathrm{F}_4^{-20}$.
\begin{conj}\label{con:1.1} Equality holds in (\ref{eqn:1.1}) if and only if $\rho$ is a discrete, faithful representation of $\Gamma$ into $G$. \end{conj}
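For orientation, here is how inequality (\ref{eqn:1.1}) specializes in the classical surface case. The identification of $\upsilon(\rho)$ with the Euler number is standard background recalled here, not a result of this paper:

```latex
% Surface case of (1.1), with the standard normalization
% \upsilon(\rho) = 2\pi e(\rho), where e(\rho) is the Euler number.
\begin{align*}
G &= \mathrm{PSL}_2(\mathbb{R}), \qquad X = \mathbb{H}^2, \qquad
M = \Sigma_g \text{ closed of genus } g \geq 2,\\
\mathrm{Vol}(M) &= 2\pi(2g-2) \quad \text{(Gauss--Bonnet)},\\
|\upsilon(\rho)| \leq \mathrm{Vol}(M)
&\ \Longleftrightarrow\ |e(\rho)| \leq 2g-2 .
\end{align*}
```

The right-hand side is the Milnor--Wood inequality, and equality singles out the holonomies of hyperbolic structures, consistent with Conjecture \ref{con:1.1}.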
Numerical invariants such as the volume invariant have been used to study the representation variety $\mathrm{Hom}(\Gamma,G)$ consisting of homomorphisms $\rho \colon\thinspace \Gamma \rightarrow G$. For example, Goldman \cite{Go88} characterized the $4g-3$ connected components of the representation variety $\mathrm{Hom}(\pi_1(S),\mathrm{PSL}_2 (\mathbb{R}))$ for a closed surface $S$ of genus $g$ via the Toledo invariant. Moreover, he verified that the connected component of $\mathrm{Hom}(\pi_1(S),\mathrm{PSL}_2 (\mathbb{R}))$ with maximal Toledo invariant is exactly the embedding of the Teichm\"{u}ller space of $S$ into $\mathrm{Hom}(\pi_1(S),\mathrm{PSL}_2 (\mathbb{R}))$ \cite{Go80}. Burger, Iozzi and Wienhard \cite{BIW10} generalized this theory from closed surface representation varieties in $\mathrm{PSL}_2 (\mathbb{R})$ to other Lie groups such as split simple Lie groups and Lie groups of Hermitian type.
In comparison with uniform lattices, numerical invariants for representations of nonuniform lattices have rarely been defined. The main reason for this is that an open manifold has no fundamental class: its top-dimensional singular homology vanishes. Recently, Burger, Iozzi and Wienhard \cite{BIW11} defined the Toledo invariant for representations of a compact surface with boundary by using its relative fundamental class. Then, they showed that this Toledo invariant exactly detects hyperbolic structures on the surface.
The aim of this paper is to introduce a new invariant for representations of arbitrary lattices $\Gamma$ in $G$ which detects discrete, faithful representations in the representation variety $\mathrm{Hom}(\Gamma,G)$. One advantage of the new invariant is that it provides a tool for studying the representation varieties of nonuniform lattices in semisimple Lie groups. In addition, we explore the relation between the new invariant and $\upsilon(\rho)$. Then, we give a proof of Conjecture \ref{con:1.1}.
Let $\Gamma$ be a lattice in $G$. Every representation $\rho \colon\thinspace \Gamma \rightarrow G$ induces a canonical pullback map $\rho^*_b \colon\thinspace H^\bullet_{c,b}(G,\mathbb{R})\rightarrow H^\bullet_b(\Gamma,\mathbb{R})$ in continuous bounded cohomology. Let $c \colon\thinspace H^\bullet_{c,b}(G,\mathbb{R})\rightarrow H^\bullet_c(G,\mathbb{R})$ be the comparison map induced from the inclusion of the continuous bounded cochain complex of $G$ into the continuous cochain complex of $G$. The Van Est isomorphism gives an isomorphism $H^n_c(G,\mathbb{R})\cong \mathbb{R}\cdot \omega$, where $\omega$ is the $G$--invariant volume form on the associated symmetric space $X$. Then, we define a new invariant $\mathrm{Vol}(\rho)$ by
$$\mathrm{Vol}(\rho) = \inf \{ |\langle \rho^*_b(\omega_b),\alpha \rangle| \text{ }|\text{ }c(\omega_b)=\omega \text{ and } \alpha \in [M]^{\ell^1}_\mathrm{Lip} \}, $$ where $[M]^{\ell^1}_\mathrm{Lip}$ is the set of all $\ell^1$--homology classes in $H^{\ell^1}_n(M,\mathbb{R})$ that are represented by at least one locally finite fundamental cycle with finite Lipschitz constant. Note that $\rho^*_b (\omega_b)$ is regarded as a bounded cohomology class in $H^n_b(M,\mathbb{R})$ by the canonical isomorphism between $H^n_b(\Gamma,\mathbb{R})$ and $H^n_b(M,\mathbb{R})$. Thus, $\rho^*_b(\omega_b)$ can be evaluated on $\ell^1$--homology classes in $H^{\ell^1}_n(M,\mathbb{R})$ and hence, the definition of $\mathrm{Vol}(\rho)$ makes sense. For more details on the definition and properties of the volume invariant $\mathrm{Vol}(\rho)$, see Section \ref{sec:3}.
An essential ingredient in defining the volume invariant $\mathrm{Vol}(\rho)$ is the geometric simplicial volume of $M$, introduced by Gromov \cite{Gr82}. Indeed, Gromov defined two kinds of simplicial volumes for open Riemannian manifolds. One is defined as the $\ell^1$--seminorm of the locally finite fundamental class of $M$. This is a topological invariant. The other is defined by the infimum over all $\ell^1$--norms of locally finite fundamental cycles of $M$ with finite Lipschitz constant. The latter is called the geometric simplicial volume of $M$ because the Riemannian structure on $M$ is involved in its definition. Note that this is not a topological invariant anymore.
One can notice that the volume invariant $\mathrm{Vol}(\rho)$ can be defined via locally finite fundamental cycles of $M$ instead of locally finite fundamental cycles with finite Lipschitz constant. However, it turns out that if the volume invariant $\mathrm{Vol}(\rho)$ is defined via locally finite fundamental cycles, then this invariant does not always detect discrete, faithful representations. For further discussion of this, see Section \ref{sec:3.1}.
\begin{thm}\label{thm:1.2} Let $\Gamma$ be an irreducible lattice in a connected semisimple Lie group $G$ with trivial center and no compact factors. Let $\rho \colon\thinspace \Gamma \rightarrow G$ be a representation. Then, the volume invariant $\mathrm{Vol}(\rho)$ satisfies an inequality $$ \mathrm{Vol}(\rho) \leq \mathrm{Vol}(M),$$ where $X$ is the associated symmetric space and $M=\Gamma\backslash X$. Moreover, equality holds if and only if $\rho$ is a discrete, faithful representation. \end{thm}
Theorem \ref{thm:1.2} implies that the volume invariant $\mathrm{Vol}(\rho)$ exactly characterizes discrete, faithful representations in the representation variety $\mathrm{Hom}(\Gamma,G)$. In particular, when $\Gamma$ is a uniform lattice, we verify that
\begin{eqnarray}\label{eqn:1.2} \mathrm{Vol}(\rho)=|\upsilon(\rho)|. \end{eqnarray} In view of Equation (\ref{eqn:1.2}), the volume invariant $\mathrm{Vol}(\rho)$ can be regarded as an invariant for representations of arbitrary lattices, extending the volume invariant $\upsilon(\rho)$, which is defined only for representations of uniform lattices. Note that Theorem \ref{thm:1.2} covers the remaining cases $\mathrm{SU}(n,1), \mathrm{Sp}(n,1), \mathrm{F}_4^{-20}$ that Goldman's proof in \cite{Go92} did not cover. In fact, one can easily see that Conjecture \ref{con:1.1} can also be proved by using the Besson-Courtois-Gallot technique in \cite{BCG99}.
In a similar way, we define a volume invariant $\mathrm{Vol}(\rho)$ for representations $\rho \colon\thinspace \Gamma \rightarrow \mathrm{SO}(m,1)$ of lattices $\Gamma$ in $\mathrm{SO}(n,1)$. A representation $\rho \colon\thinspace \Gamma \rightarrow \mathrm{SO}(m,1)$ is said to be a \emph{totally geodesic representation} if there is a totally geodesic $\mathbb{H}^n \subset \mathbb{H}^m$ so that the image of the representation lies in the subgroup $G \subset \mathrm{SO}(m,1)$ that preserves this $\mathbb{H}^n$ and that the $\rho$--equivariant map $F \colon\thinspace \mathbb{H}^n \rightarrow \mathbb{H}^m$ is a totally geodesic isometric embedding. Then, we show that this volume invariant characterizes totally geodesic representations.
\begin{thm} Let $\Gamma$ be a lattice in $\mathrm{SO}(n,1)$ and $M=\Gamma \backslash \mathbb{H}^n$. The volume invariant $\mathrm{Vol}(\rho)$ of a representation $\rho \colon\thinspace \Gamma \rightarrow \mathrm{SO}(m,1)$ for $m\geq n \geq 3$ satisfies an inequality $$\mathrm{Vol}(\rho) \leq \mathrm{Vol}(M).$$ Moreover, equality holds if and only if $\rho$ is a totally geodesic representation. \end{thm}
Finally, using bounded cohomology theory and the volume invariant, we can formulate the local rigidity phenomenon of uniform complex hyperbolic lattices. Specifically, we prove: \begin{thm}Let $\Gamma\subset \mathrm{SU}(n,1)$ be a uniform lattice and $\rho:\Gamma\rightarrow \mathrm{SU}(m,1)$, $m\geq n \geq 2$, a representation. Then $\rho$ is a maximal volume representation if and only if it is a totally geodesic representation. The natural inclusion $\Gamma\subset \mathrm{SU} (n,1)\subset \mathrm{SU}(m,1)\subset \mathrm{Sp}(m,1)$ is locally rigid, in the sense that the nearby representations stabilize a copy of ${\mathbb H}^n_{\mathbb C}$ inside ${\mathbb H}^m_{\mathbb H}$. \end{thm} This paper is organized as follows: We review the simplicial volume, $\ell^1$--homology and continuous (bounded) cohomology in order to define the new invariant $\mathrm{Vol}(\rho)$ in Section \ref{sec:2}. We describe the basic properties of the volume invariant $\mathrm{Vol}(\rho)$ in Section \ref{sec:3}. Then, we devote ourselves to proving Theorem \ref{thm:1.2} in the case that $G$ is a semisimple Lie group of higher rank in Section \ref{sec:4}, $G$ is a simple Lie group of rank $1$ other than $\mathrm{SO}(2,1)$ in Section \ref{sec:5}, and $G$ is $\mathrm{SO}(2,1)$ in Section \ref{sec:6}. We deal with a volume invariant for representations $\rho \colon\thinspace \Gamma \rightarrow \mathrm{SO}(m,1)$ of lattices $\Gamma$ in $\mathrm{SO}(n,1)$ in Section \ref{sec:7}. Lastly, we reformulate the rigidity phenomenon of uniform lattices of $\mathrm{SU}(n,1)$ in $\mathrm{SU}(m,1)$ or $\mathrm{Sp}(m,1)$ via the volume invariant in Section \ref{sec:8}.
\section{Preliminaries}\label{sec:2}
\subsection{Simplicial volume}
Let $M$ be an $n$--dimensional manifold. The simplicial $\ell^1$--norm $\| \cdot \|_1$ on the singular chain complex $C_\bullet(M,\mathbb{R})$ is defined as the $\ell^1$--norm with respect to the basis given by all singular simplices. The simplicial $\ell^1$--norm induces an $\ell^1$--seminorm on $H_\bullet(M,\mathbb{R})$ as follows:
$$\| \alpha \|_1 =\inf \| c \|_1$$ where $c$ runs over all singular cycles representing $\alpha \in H_\bullet(M,\mathbb{R})$.
For an oriented, connected, closed $n$--manifold $M$, the simplicial volume $\| M \|$ of $M$ is defined as the $\ell^1$--seminorm of the fundamental class $[M]$ in $H_n(M,\mathbb{R})$. If $M$ is an oriented, connected, open $n$--manifold, then $M$ has a fundamental class $[M]$ in the locally finite homology $H^\mathrm{lf}_n(M,\mathbb{R})$. The locally finite homology of $M$ is defined as the homology of the locally finite chain complex $C_\bullet^\mathrm{lf}(M,\mathbb{R})$. More precisely, let $S_k(M)$ be the set of singular $k$--simplices of $M$ and $S^\mathrm{lf}_k(M)$ denote the set of all locally finite subsets of $S_k(M)$, that is, if $A \in S^\mathrm{lf}_k(M)$, any compact subset of $M$ intersects the image of only finitely many elements of $A$. Then, the locally finite chain complex $C_\bullet^\mathrm{lf}(M,\mathbb{R})$ is defined by
$$C_\bullet^\mathrm{lf}(M,\mathbb{R})= \left\{\sum_{\sigma \in A} a_\sigma \sigma \ \bigg| \ A \in S^\mathrm{lf}_\bullet(M) \text{ and }a_\sigma \in \mathbb{R} \right\}.$$
An $\ell^1$--seminorm on $H^\mathrm{lf}_\bullet(M,\mathbb{R})$ is induced from the simplicial $\ell^1$--norm on the locally finite chain complex $C^\mathrm{lf}_\bullet(M,\mathbb{R})$ with respect to the basis given by all singular simplices. The simplicial volume $\| M \|$ of $M$ is defined as the $\ell^1$--seminorm of the locally finite fundamental class $[M]$ of $M$.
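As a basic illustration of these seminorms (classical background, not a claim of this paper), functoriality under continuous maps already forces vanishing in the simplest closed examples:

```latex
% For any continuous map f: N -> M and any class \alpha in H_k(N,R),
% the l^1-seminorm satisfies
\| f_* \alpha \|_1 \leq \| \alpha \|_1 .
% If f: S^1 -> S^1 has degree d, then f_*[S^1] = d\,[S^1], so
|d| \cdot \| S^1 \| = \| f_*[S^1] \|_1 \leq \| [S^1] \|_1 = \| S^1 \| ,
% and taking |d| >= 2 forces \| S^1 \| = 0; the same argument yields
% \| T^n \| = 0 for every torus.
```

This is why nonvanishing of the simplicial volume is a genuinely nontrivial phenomenon, reserved for manifolds with enough negative curvature, as discussed below.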
In addition, Gromov introduces the geometric simplicial volume of oriented, connected, open Riemannian manifolds. Fixing a metric on the standard $k$--simplex $\Delta^k$ by the Euclidean metric, the Lipschitz constant $\mathrm{Lip}(\sigma)$ of a singular simplex $\sigma \colon\thinspace \Delta^k \rightarrow M$ is defined. Subsequently, for a locally finite chain $c \in C^\mathrm{lf}_\bullet(M,\mathbb{R})$, define the Lipschitz constant $\mathrm{Lip}(c)$ of $c$ by the supremum over all Lipschitz constants of the simplices occurring in $c$.
The subcomplex $C^\mathrm{lf,Lip}_\bullet(M,\mathbb{R})$ of $C^\mathrm{lf}_\bullet(M,\mathbb{R})$ consisting of all chains with finite Lipschitz constant induces the homology with Lipschitz locally finite support, denoted by $H^\mathrm{lf,Lip}_\bullet(M,\mathbb{R})$. Indeed, $H^\mathrm{lf,Lip}_\bullet(M,\mathbb{R})$ is isomorphic to $H^\mathrm{lf}_\bullet(M,\mathbb{R})$ \cite[Theorem 3.3]{LS09}. Hence, it has a distinguished generator $[M]_\mathrm{Lip}$ in $H^\mathrm{lf,Lip}_\bullet(M,\mathbb{R})$ corresponding to the locally finite fundamental class $[M]$ in $H^\mathrm{lf}_\bullet(M,\mathbb{R})$. The geometric simplicial volume of $M$ is defined as the $\ell^1$--seminorm of $[M]_\mathrm{Lip}$, denoted by $\| M \|_\mathrm{Lip}$. Gromov \cite{Gr82} proves the proportionality principle for the geometric simplicial volume as follows.
\begin{thm}[Gromov]\label{thm:2.1} Let $M$ be a closed Riemannian manifold and $N$ be a complete Riemannian manifold of finite volume. If the universal covers of $M$ and $N$ are isometric, then
$$\frac{\|M\|_\mathrm{Lip}}{\mathrm{Vol}(M)}=\frac{\|N\|_\mathrm{Lip}}{\mathrm{Vol}(N)}.$$ \end{thm}
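A concrete instance of Theorem \ref{thm:2.1}, recalled as standard background: in the hyperbolic case the proportionality constant is known explicitly.

```latex
% Gromov--Thurston: for a closed hyperbolic n-manifold M,
\| M \| = \frac{\mathrm{Vol}(M)}{v_n},
% where v_n is the supremum of the volumes of geodesic n-simplices
% in H^n, e.g. v_2 = \pi and v_3 = 1.0149... (the volume of the
% regular ideal tetrahedron).  Hence, by the proportionality
% principle, every complete finite-volume hyperbolic n-manifold N
% satisfies
\| N \|_{\mathrm{Lip}} = \frac{\mathrm{Vol}(N)}{v_n}.
```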
The simplicial volume of a smooth manifold gives a lower bound of its minimal volume. Hence, the question was naturally raised as to which manifolds have nonzero simplicial volume. Gromov \cite{Gr82} and Thurston \cite{Th78} first show that the simplicial volume of complete Riemannian manifolds of finite volume with pinched negative sectional curvature is nonzero. Moreover, it is shown that closed locally symmetric spaces of noncompact type have positive simplicial volume \cite{LS06}. In contrast, the simplicial volume of open, complete locally symmetric spaces of noncompact type with finite volume may vanish. For instance, the simplicial volume of locally symmetric spaces of noncompact type with $\mathbb{Q}$--rank at least $3$ vanishes \cite{LS09}. On the other hand, it turns out that the simplicial volume of $\mathbb{Q}$--rank $1$ locally symmetric spaces covered by a product of $\mathbb{R}$--rank $1$ symmetric spaces is positive \cite{KK} and moreover, it is equal to their geometric simplicial volume in the case of amenable boundary groups \cite{BKK}. The $\mathbb{Q}$--rank $2$ cases remain open.
\subsection{$\ell^1$--homology}
Let $M$ be an oriented, connected $n$--manifold. The $\ell^1$--chain complex of $M$ is the $\ell^1$--completion $C_\bullet^{\ell^1}(M,\mathbb{R})$ of the normed chain complex $C_\bullet(M,\mathbb{R})$ with respect to the simplicial $\ell^1$--norm $\| \cdot \|_1$. Then, the $\ell^1$--homology $H^{\ell^1}_\bullet(M,\mathbb{R})$ of $M$ is defined as the homology of $\ell^1$--chain complex of $M$, $$H^{\ell^1}_\bullet(M,\mathbb{R}) = H_\bullet ( C_\bullet^{\ell^1}(M,\mathbb{R})).$$
The natural inclusion $C_\bullet(M,\mathbb{R}) \hookrightarrow C_\bullet^{\ell^1}(M,\mathbb{R})$ induces a comparison map $H_\bullet(M,\mathbb{R}) \rightarrow H_\bullet^{\ell^1}(M,\mathbb{R})$. Note that this map is an isometric inclusion because $C_\bullet(M,\mathbb{R})$ is a dense subcomplex of $C_\bullet^{\ell^1}(M,\mathbb{R})$ \cite[Proposition 2.4]{Lo08}.
Similarly, inclusions $C_\bullet(M,\mathbb{R}) \subset C_\bullet^\mathrm{lf}(M,\mathbb{R}) \cap C_\bullet^{\ell^1}(M,\mathbb{R}) \subset C_\bullet^{\ell^1}(M,\mathbb{R})$ imply that the middle complex is dense in $C_\bullet^{\ell^1}(M,\mathbb{R})$. Hence, the induced map $H_\bullet (C_\bullet^\mathrm{lf}(M,\mathbb{R}) \cap C_\bullet^{\ell^1}(M,\mathbb{R})) \rightarrow H^{\ell^1}_\bullet(M,\mathbb{R})$ is an isometric inclusion. From this point of view, the simplicial volume of $M$ can be computed in terms of the $\ell^1$--homology of $M$ as follows:
$$ \| M \| = \inf \{ \| \alpha \|_1 \text{ }|\text{ }\alpha \in [M]^{\ell^1} \subset H^{\ell^1}_n(M,\mathbb{R}) \},$$ where $[M]^{\ell^1}$ is the set of all $\ell^1$--homology classes that are represented by at least one locally finite fundamental cycle.
In a similar way, the geometric simplicial volume of $M$ is computed by
$$ \| M \|_\mathrm{Lip} = \inf \{ \| \alpha \|_1 \text{ }|\text{ }\alpha \in [M]^{\ell^1}_\mathrm{Lip} \subset H^{\ell^1}_n(M,\mathbb{R}) \},$$ where $[M]^{\ell^1}_\mathrm{Lip}$ is the set of all $\ell^1$--homology classes that are represented by at least one locally finite fundamental cycle with finite Lipschitz constant. We refer the reader to \cite[Section 6]{Lo08} for more detailed explanations.
\subsection{Continuous bounded cohomology}
Let $G$ be a topological group. Consider the continuous cocomplex $C^\bullet_c(G,\mathbb{R})$ with the homogeneous coboundary operator, where $$C^k_c(G,\mathbb{R})=\{ f \colon\thinspace G^{k+1} \rightarrow \mathbb{R}\text{ }|\text{ }f \text{ is continuous} \}.$$ The action of $G$ on $C^k_c(G,\mathbb{R})$ is given by $$(g\cdot f)(g_0,\ldots,g_k)=f(g^{-1}g_0,\ldots,g^{-1}g_k).$$ The continuous cohomology $H^\bullet_c(G,\mathbb{R})$ of $G$ with trivial coefficients is defined as the cohomology of the $G$--invariant continuous cocomplex $C^\bullet_c(G,\mathbb{R})^G$.
For a cochain $f\colon\thinspace G^{k+1} \rightarrow \mathbb{R}$, define its sup norm by
$$\|f\|_\infty = \sup \{ |f(g_0,\ldots,g_k)|\text{ }|\text{ } (g_0,\ldots,g_k)\in G^{k+1}\}.$$ The sup norm turns $C^\bullet_c(G,\mathbb{R})$ into a cocomplex of normed real vector spaces. The continuous bounded cohomology $H^\bullet_{c,b}(G,\mathbb{R})$ of $G$ is defined as the cohomology of the subcocomplex $C^\bullet_{c,b}(G,\mathbb{R})^G$ of
$G$--invariant continuous bounded cochains in $C^\bullet_c(G,\mathbb{R})^G$. The inclusion of $C^\bullet_{c,b}(G,\mathbb{R})^G \subset C^\bullet_c(G,\mathbb{R})^G$ induces a comparison map $c \colon\thinspace H^\bullet_{c,b}(G,\mathbb{R}) \rightarrow H^\bullet_c(G,\mathbb{R})$. The sup norm induces seminorms on both $H^\bullet_c(G,\mathbb{R})$ and $H^\bullet_{c,b}(G,\mathbb{R})$, denoted by $\| \cdot \|_\infty$. Note that for $\beta \in H^k_c(G,\mathbb{R})$,
$$ \|\beta \|_\infty = \inf \{ \| \beta_b \|_\infty \text{ }|\text{ } \beta_b \in H^k_{c,b}(G,\mathbb{R}) \text{ and } c(\beta_b)=\beta \}.$$
For a connected semisimple Lie group $G$ with trivial center and no compact factors, the continuous cohomology $H^\bullet_c(G,\mathbb{R})$ is isomorphic to the set of $G$--invariant differential forms on the associated symmetric space $X$ according to the Van Est isomorphism. In particular, the continuous cohomology of $G$ in the top degree is generated by the $G$--invariant volume form $\omega$ on $X$.
Let $\Gamma_0$ be a uniform lattice in $G$ and $M=\Gamma_0 \backslash X$. Bucher-Karlsson \cite{Bu08} reformulates a proof of Gromov's proportionality principle in the language of continuous bounded cohomology and moreover, shows that $$\frac{\| M \|}{\mathrm{Vol}(M)}=\frac{1}{\| \omega \|_\infty}.$$
It is easy to see that $\|M\|_\mathrm{Lip}=\|M\|$ because $M$ is closed. Let $\Gamma$ be an arbitrary lattice in $G$ and $N=\Gamma \backslash X$. It follows from Gromov's proportionality principle that \begin{eqnarray}\label{eqn:2.1}
\frac{\| N \|_\mathrm{Lip}}{\mathrm{Vol}(N)}=\frac{\| M \|_\mathrm{Lip}}{\mathrm{Vol}(M)}=\frac{\| M \|}{\mathrm{Vol}(M)}=\frac{1}{\| \omega \|_\infty}. \end{eqnarray} Note that the proportionality principle fails in general for the ordinary simplicial volume.
\section{Volume invariant}\label{sec:3}
In this section, we define a new invariant $\mathrm{Vol}(\rho)$ and explore its properties. Throughout the paper, $G$ denotes a connected semisimple Lie group with trivial center and no compact factors, and $\Gamma$ denotes a lattice in $G$. As usual, $X$ denotes the associated symmetric $n$--space and $M$ denotes the locally symmetric space $\Gamma\backslash X$. The symbol $\omega$ denotes the $G$--invariant volume form on $X$.
\subsection{Volume invariant}\label{sec:3.1}
Let $\rho \colon\thinspace \Gamma \rightarrow G$ be a representation. Then, $\rho$ induces a canonical pullback map $\rho^*_c \colon\thinspace H^\bullet_c(G,\mathbb{R}) \rightarrow H^\bullet(\Gamma,\mathbb{R})$ in continuous cohomology. This canonical pullback map is realized on the level of cocomplexes as follows: For a continuous map $f \colon\thinspace G^{k+1}\rightarrow \mathbb{R}$, define a map $\rho^*(f) \colon\thinspace \Gamma^{k+1} \rightarrow \mathbb{R}$ by $$\rho^*(f)(\gamma_0,\ldots,\gamma_k)=f(\rho(\gamma_0),\ldots,\rho(\gamma_k)),$$ for $(\gamma_0,\ldots,\gamma_k) \in \Gamma^{k+1}$. This defines a chain map $\rho^* \colon\thinspace C^\bullet_c(G,\mathbb{R}) \rightarrow C^\bullet(\Gamma,\mathbb{R})$. Moreover, $\rho^*$ maps $G$--invariant cochains to $\Gamma$--invariant cochains and hence, it induces a homomorphism $\rho^*_c \colon\thinspace H^\bullet_c(G,\mathbb{R}) \rightarrow H^\bullet(\Gamma,\mathbb{R})$ in continuous cohomology. In the same manner, $\rho$ induces a homomorphism $\rho^*_b \colon\thinspace H^\bullet_{c,b}(G,\mathbb{R}) \rightarrow H^\bullet_b(\Gamma,\mathbb{R})$ in continuous bounded cohomology.
For a connected semisimple Lie group $G$ with trivial center and no compact factors, it is well known that the $G$--invariant volume form $\omega \in H^n_c(G,\mathbb{R})$ is bounded. In other words, there exists a continuous bounded cohomology class $\omega_b \in H^n_{c,b}(G,\mathbb{R})$ such that $c(\omega_b)=\omega$ for the comparison map $c \colon\thinspace H^n_{c,b}(G,\mathbb{R}) \rightarrow H^n_c(G,\mathbb{R})$. By pulling back $\omega_b$ by $\rho$, we obtain a bounded cohomology class $\rho^*_b(\omega_b)\in H^n_b(\Gamma,\mathbb{R})$. Subsequently, we identify the bounded cohomology class $\rho^*_b(\omega_b)$ in $H^n_b(\Gamma,\mathbb{R})$ with a bounded cohomology class in $H^n_b(M,\mathbb{R})$ via the canonical isomorphism between $H^\bullet_b(\Gamma,\mathbb{R})$ and $H^\bullet_b(M,\mathbb{R})$ \cite{Gr82}. Then, the bounded cohomology class $\rho^*_b(\omega_b)$ can be evaluated on $\ell^1$--homology classes in $H^{\ell^1}_n(M,\mathbb{R})$ by the Kronecker products $$ \langle\cdot ,\cdot \rangle \colon\thinspace H^\bullet_b(M,\mathbb{R}) \otimes H^{\ell^1}_\bullet(M,\mathbb{R}) \rightarrow \mathbb{R}.$$
Now, we define a \emph{volume invariant $\mathrm{Vol}(\rho)$ of $\rho$} by
$$\mathrm{Vol}(\rho) = \inf \{ |\langle \rho^*_b(\omega_b),\alpha \rangle| \text{ }|\text{ }c(\omega_b)=\omega \text{ and } \alpha \in [M]^{\ell^1}_\mathrm{Lip} \}.$$
It is easy to see that the volume invariant $\mathrm{Vol}(\rho)$ is finite since $\omega$ is bounded and the geometric simplicial volume of $M$ is strictly positive. Furthermore, an upper bound on the volume invariant $\mathrm{Vol}(\rho)$ can be obtained immediately from its definition as follows.
\begin{prop}\label{pro:3.1} Let $\rho \colon\thinspace \Gamma \rightarrow G$ be a representation. Then, the volume invariant $\mathrm{Vol}(\rho)$ of $\rho$ satisfies an inequality $$ \mathrm{Vol}(\rho) \leq \mathrm{Vol}(M).$$ \end{prop}
\begin{proof} For a continuous cohomology class $\beta \in H^n_c(G,\mathbb{R})$,
$$ \| \beta \|_\infty = \inf \{ \| \beta_b \|_\infty \ | \ c(\beta_b)=\beta \},$$ where $c \colon\thinspace H^n_{c,b}(G,\mathbb{R}) \rightarrow H^n_c(G,\mathbb{R})$ is the comparison map. From the definition of the volume invariant $\mathrm{Vol}(\rho)$, we have {\setlength\arraycolsep{2pt} \begin{eqnarray*}
\mathrm{Vol}(\rho) &=& \inf \{ |\langle \rho^*_b(\omega_b),\alpha \rangle| \ | \ c(\omega_b)=\omega \text{ and } \alpha \in [M]^{\ell^1}_\mathrm{Lip} \} \\
&\leq& \inf \{ \| \rho^*_b(\omega_b) \|_\infty \cdot \| \alpha \|_1 \ | \ c(\omega_b)=\omega \text{ and } \alpha \in [M]^{\ell^1}_\mathrm{Lip} \} \\
&\leq& \inf \{ \|\omega_b\|_\infty \ | \ c(\omega_b)=\omega \} \cdot \inf \{ \|\alpha \|_1 \ | \ \alpha\in [M]^{\ell^1}_\mathrm{Lip} \} \\
&=& \|\omega\|_\infty \cdot \| M \|_\mathrm{Lip} \\ &=& \mathrm{Vol}(M). \end{eqnarray*}} The last equation comes from Equation (\ref{eqn:2.1}). \end{proof}
\begin{rem} If we define the volume invariant $\mathrm{Vol}(\rho)$ via $[M]^{\ell^1}$ instead of $[M]^{\ell^1}_\mathrm{Lip}$, we obtain the following inequality in the same way as above:
$$\mathrm{Vol}(\rho) \leq \|\omega\|_\infty \cdot \| M \|.$$
If $\Gamma$ is a lattice of $\mathbb{Q}$--rank at least $3$, it is known that $\| M \|=0$ \cite{LS09}. This implies that $\mathrm{Vol}(\rho)=0$ for all representations $\rho \colon\thinspace \Gamma \rightarrow G$. Then, this volume invariant cannot detect discrete, faithful representations. This is the reason why we use the notion of the geometric simplicial volume of $M$ to define the volume invariant $\mathrm{Vol}(\rho)$ instead of the ordinary simplicial volume of $M$. \end{rem}
\subsection{Volume invariant and $\rho$--equivariant map}
Goldman \cite{Go92} defined the volume invariant $\upsilon(\rho)$ by using a section $s \colon\thinspace M \rightarrow E_\rho$. Indeed, a section $s \colon\thinspace M \rightarrow E_\rho$ corresponds to a $\rho$--equivariant map $s \colon\thinspace X \rightarrow X$. In a similar way, the volume invariant $\mathrm{Vol}(\rho)$ can be reformulated in terms of a $\rho$--equivariant map. In this section, we devote ourselves to explaining this and verifying $\mathrm{Vol}(\rho)=|\upsilon(\rho)|$ for representations $\rho \colon\thinspace \Gamma \rightarrow G$ of uniform lattices $\Gamma$.
First, we describe other useful cocomplexes computing both the continuous and the continuous bounded cohomology of $G$. For a nonnegative integer $k$, define
$$C^k_c(X,\mathbb{R}) = \{ f \colon\thinspace X^{k+1} \rightarrow \mathbb{R} \ | \ f \text{ is continuous} \}.$$
Consider the sup norm $\| \cdot \|_\infty$ on $C^k_c(X,\mathbb{R})$ defined by
$$\| f \|_\infty = \sup \{ |f(x_0,\ldots,x_k)| \ | \ (x_0,\ldots,x_k)\in X^{k+1} \}.$$ Let $C^k_{c,b}(X,\mathbb{R})$ be the subspace consisting of continuous bounded $k$--cochains. Then, $C^\bullet_c(X,\mathbb{R})$ with the homogeneous coboundary operator becomes a cochain complex. Moreover, the homogeneous coboundary operator on $C^\bullet_c(X,\mathbb{R})$ restricts to $C^\bullet_{c,b}(X,\mathbb{R})$. The $G$--action on $C^\bullet_c(X,\mathbb{R})$ is defined analogously to the one on $C^\bullet_c(G,\mathbb{R})$.
It is a standard fact that the continuous cohomology $H^\bullet_c(G,\mathbb{R})$ of $G$ is isometrically isomorphic to the cohomology of the cocomplex $C^\bullet_c(X,\mathbb{R})^G$. For a proof, see \cite[Chapter 3]{Gu80}. The continuous bounded cohomology $H^\bullet_{c,b}(G,\mathbb{R})$ of $G$ is isometrically isomorphic to the cohomology of the subcocomplex $C^\bullet_{c,b}(X,\mathbb{R})^G$ of $C^\bullet_c(X,\mathbb{R})^G$. The comparison map $c \colon\thinspace H^\bullet_{c,b}(G,\mathbb{R})\rightarrow H^\bullet_c(G,\mathbb{R})$ is induced by the natural inclusion $C^\bullet_{c,b}(X,\mathbb{R})^G \subset C^\bullet_c(X,\mathbb{R})^G$. Furthermore, both $H^\bullet(\Gamma,\mathbb{R})$ and $H^\bullet_b(\Gamma,\mathbb{R})$ are isometrically isomorphic to the cohomologies of cocomplexes $C^\bullet_c(X,\mathbb{R})^\Gamma$ and $C^\bullet_{c,b}(X,\mathbb{R})^\Gamma$ respectively. See \cite[Corollary 7.4.10]{Mo01} for a detailed proof.
We describe here an explicit map on the level of cocomplexes which induces an isometric isomorphism between $H^\bullet(C^\bullet_c(X,\mathbb{R})^G)$ and $H^\bullet_c(G,\mathbb{R})$. Let us fix a base point $o\in X$. Define a map $\phi_o \colon\thinspace C^k_c(X,\mathbb{R}) \rightarrow C^k_c(G,\mathbb{R})$ by $$\phi_o(f)(g_0,\ldots,g_k)=f(g_0\cdot o,\ldots, g_k\cdot o).$$ The map $\phi_o$ is a $G$-morphism between two cocomplexes and restricts to the subcocomplexes of continuous bounded cochains. Then, $\phi_o$ induces an isometric isomorphism $\phi^G_c \colon\thinspace H^\bullet(C^\bullet_c(X,\mathbb{R})^G) \rightarrow H^\bullet_c(G,\mathbb{R})$ in continuous cohomology. Note that $\phi^G_c$ is independent of the choice of the base point $o\in X$ even though $\phi_o$ depends on $o\in X$. Hence, we denote the induced map in continuous cohomology by $\phi^G_c$ without the subscript ``$o$''. In a similar way, the map $\phi_o$ induces isometric isomorphisms, $\phi^G_b \colon\thinspace H^\bullet(C^\bullet_{c,b}(X,\mathbb{R})^G) \rightarrow H^\bullet_{c,b}(G,\mathbb{R})$ and $\phi^\Gamma_b \colon\thinspace H^\bullet(C^\bullet_{c,b}(X,\mathbb{R})^\Gamma) \rightarrow H^\bullet_b(\Gamma,\mathbb{R})$.
Let $s \colon\thinspace X \rightarrow X$ be a $\rho$--equivariant continuous map for a representation $\rho \colon\thinspace \Gamma \rightarrow G$. Then, $s$ induces a map $s^* \colon\thinspace C^k_c(X,\mathbb{R}) \rightarrow C^k_c(X,\mathbb{R})$ defined by $$s^*(f)(x_0,\ldots,x_k)=f(s(x_0),\ldots,s(x_k)),$$ for a cochain $f$ in $C^k_c(X,\mathbb{R})$. Due to the $\rho$--equivariance and continuity of $s \colon\thinspace X\rightarrow X$, it follows that $s^*$ maps $G$--invariant continuous (bounded) cochains to $\Gamma$--invariant continuous (bounded) cochains. Hence, $s^*$ induces homomorphisms $s^*_c \colon\thinspace H^\bullet( C^\bullet_c (X,\mathbb{R})^G) \rightarrow H^\bullet( C^\bullet_c (X,\mathbb{R})^\Gamma)$ in continuous cohomology and $s^*_b \colon\thinspace H^\bullet( C^\bullet_{c,b} (X,\mathbb{R})^G) \rightarrow H^\bullet( C^\bullet_{c,b} (X,\mathbb{R})^\Gamma)$ in continuous bounded cohomology. Now, consider the following diagram: $$ \xymatrixcolsep{4pc}\xymatrix{ C^\bullet_c(X,\mathbb{R})^G \ar[r]^-{\phi_o} & C^\bullet_c(G,\mathbb{R})^G \\ C^\bullet_{c,b}(X,\mathbb{R})^G \ar[r]^-{\phi_o} \ar[d]_-{s^*} \ar[u]^-{i} & C^\bullet_{c,b}(G,\mathbb{R})^G \ar[d]^-{\rho^*} \ar[u]_-{i} \\ C^\bullet_{c,b}(X,\mathbb{R})^\Gamma \ar[r]^-{\phi_o} & C^\bullet_b(\Gamma,\mathbb{R})^\Gamma. }$$
In this diagram, the upper square clearly commutes. The lower square does not commute on the nose; however, it commutes in cohomology, as follows. Let $f\in C^k_{c,b}(X,\mathbb{R})^G$ be a $G$--invariant continuous bounded cocycle. Define $b \in C^{k-1}_b(\Gamma,\mathbb{R})$ by $$b(\gamma_0,\ldots,\gamma_{k-1})=\sum_{i=0}^{k-1} (-1)^i f(\rho(\gamma_0) \cdot o, \ldots, \rho(\gamma_i) \cdot o, \rho(\gamma_i) \cdot s(o),\ldots,\rho(\gamma_{k-1})\cdot s(o)).$$ Then, $b$ is a $\Gamma$--invariant bounded cochain since $f$ is a $G$--invariant continuous bounded cocycle. Moreover, a straightforward computation shows that $$(\rho^* \circ \phi_o - \phi_o \circ s^*)(f)(\gamma_0,\ldots,\gamma_k) = \delta b (\gamma_0,\ldots,\gamma_k).$$ This implies that the lower square commutes at the level of cohomology and hence, we have the following commutative diagram: $$ \xymatrixcolsep{4pc}\xymatrix{ H^\bullet(C^\bullet_c(X,\mathbb{R})^G) \ar[r]^-{\phi^G_c}_-{\cong} & H^\bullet_c(G,\mathbb{R}) \\ H^\bullet(C^\bullet_{c,b}(X,\mathbb{R})^G) \ar[r]^-{\phi^G_b}_-{\cong} \ar[d]_-{s^*_b} \ar[u]^-{c} & H^\bullet_{c,b}(G,\mathbb{R}) \ar[d]^-{\rho^*_b} \ar[u]_-{c} \\ H^\bullet(C^\bullet_{c,b}(X,\mathbb{R})^\Gamma) \ar[r]^-{\phi^\Gamma_b}_-{\cong} & H^\bullet_b(\Gamma,\mathbb{R}) }$$
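To make the coboundary computation concrete, we record the verification in the lowest nontrivial degree $k=1$; this illustration is not part of the original argument, and the overall sign depends on the convention used for $\delta$. Write $a_i=\rho(\gamma_i)\cdot o$ and $b_i=\rho(\gamma_i)\cdot s(o)$. By the $\rho$--equivariance of $s$, $$(\rho^*\circ\phi_o)(f)(\gamma_0,\gamma_1)=f(a_0,a_1) \quad \text{and} \quad (\phi_o\circ s^*)(f)(\gamma_0,\gamma_1)=f(s(\gamma_0\cdot o),s(\gamma_1\cdot o))=f(b_0,b_1).$$ Applying the cocycle identity $\delta f=0$ to the triples $(a_0,a_1,b_1)$ and $(a_0,b_0,b_1)$ yields $$f(a_0,a_1)=f(a_0,b_1)-f(a_1,b_1) \quad \text{and} \quad f(b_0,b_1)=f(a_0,b_1)-f(a_0,b_0),$$ so that $f(a_0,a_1)-f(b_0,b_1)=f(a_0,b_0)-f(a_1,b_1)$, which is, up to sign, the coboundary of the bounded $0$--cochain $\gamma\mapsto f(\rho(\gamma)\cdot o,\rho(\gamma)\cdot s(o))$.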
Each cohomology class in $H^\bullet_c(G,{\mathbb R})$, $H^\bullet_{c,b}(G,{\mathbb R})$ and $H^\bullet_b(\Gamma,{\mathbb R})$ is canonically identified with a cohomology class in $H^\bullet(C^\bullet_c(X,\mathbb{R})^G)$, $H^\bullet(C^\bullet_{c,b}(X,\mathbb{R})^G)$ and $H^\bullet(C^\bullet_{c,b}(X,\mathbb{R})^\Gamma)$ via the isomorphisms induced by $\phi_o$ respectively.
Let $\omega_b$ be a continuous bounded cohomology class in $H^n_{c,b}(G,{\mathbb R})$ representing the $G$--invariant volume form $\omega \in H^n_c(G,{\mathbb R})$. We use the same notations $\omega$ and $\omega_b$ for the cohomology class in $H^\bullet(C^\bullet_c(X,\mathbb{R})^G)$ and $H^\bullet(C^\bullet_{c,b}(X,\mathbb{R})^G)$ identified with $\omega \in H^n_c(G,\mathbb{R})$ and $\omega_b \in H^n_{c,b}(G,\mathbb{R})$ via $\phi^G_c$ and $\phi^G_b$, respectively.
Noting that the cohomologies $H^\bullet(C^\bullet_{c,b}(X,\mathbb{R})^\Gamma)$ and $H^\bullet_b(\Gamma,\mathbb{R})$ are canonically identified with the bounded cohomology $H^\bullet_b(M,\mathbb{R})$, one can conclude that $s^*_b(\omega_b) = \rho^*_b(\omega_b)$ in $H^n_b(M,{\mathbb R})$ via the canonical isomorphisms. Hence,
$$\{ s^*_b(\omega_b) \in H^n_b(M,{\mathbb R}) \ | \ c(\omega_b) =\omega \} =\{ \rho^*_b(\omega_b) \in H^n_b(M,{\mathbb R}) \ | \ c(\omega_b) =\omega \}.$$
Therefore, the volume invariant $\mathrm{Vol}(\rho)$ can be reformulated in terms of a $\rho$--equivariant map as follows:
$$\mathrm{Vol}(\rho)= \inf \{|\langle s^*_b(\omega_b),\alpha \rangle| \ | \ c(\omega_b)=\omega \text{ and } \alpha \in [M]^{\ell^1}_\mathrm{Lip} \}.$$ Note that this reformulation of the volume invariant $\mathrm{Vol}(\rho)$ is independent of the choice of $\rho$--equivariant map $s \colon\thinspace X \rightarrow X$, as observed above.
To define the volume invariant $\upsilon(\rho)$, Goldman \cite{Go92} uses a smooth section of the associated bundle. The reformulation of the volume invariant $\mathrm{Vol}(\rho)$ in terms of a $\rho$--equivariant map makes it possible to relate the two invariants $\upsilon(\rho)$ and $\mathrm{Vol}(\rho)$.
\begin{lemma}\label{lem:3.3} Let $\Gamma$ be a uniform lattice in $G$ and $\rho \colon\thinspace \Gamma \rightarrow G$ be a representation. Then,
$$\mathrm{Vol}(\rho) = |\upsilon (\rho)| = \left| \int_M s^* \omega \right|,$$ where $s \colon\thinspace M \rightarrow E_\rho$ is a smooth section of the associated bundle $E_\rho$. \end{lemma}
\begin{proof} A section $s \colon\thinspace M \rightarrow E_\rho$ corresponds to a $\rho$--equivariant map $X\rightarrow X$, denoted by $s \colon\thinspace X \rightarrow X$. Since $M=\Gamma\backslash X$ is a closed manifold, the set $[M]^{\ell^1}_\mathrm{Lip}$ contains exactly one element, namely, the class $i_*[M]$, where $[M]$ is the fundamental class of $M$, and $i_* \colon\thinspace H_n(M,\mathbb{R}) \rightarrow H^{\ell^1}_n(M,\mathbb{R})$ is the map induced by the inclusion $C_\bullet(M,\mathbb{R}) \subset C_\bullet^{\ell^1}(M,\mathbb{R})$. Hence, the volume invariant $\mathrm{Vol}(\rho)$ is computed by {\setlength\arraycolsep{2pt} \begin{eqnarray*}
\mathrm{Vol}(\rho) &=& \inf \{ |\langle s^*_b(\omega_b),\alpha \rangle | \ | \ c(\omega_b)=\omega \text{ and } \alpha \in [M]^{\ell^1}_\mathrm{Lip} \} \\
&=& \inf \{ |\langle s^*_b(\omega_b),i_*[M] \rangle | \ | \ c(\omega_b)=\omega \}. \end{eqnarray*}}
Considering the following commutative diagram, $$ \xymatrixcolsep{4pc}\xymatrix{ H^n_{c,b}(G,\mathbb{R}) \ar[r]^-{c} \ar[d]_-{s^*_b} & H^n_c(G,\mathbb{R}) \ar[d]^-{s^*_c} \\ H^n_b(\Gamma,\mathbb{R}) \ar[r]^-{c} & H^n(\Gamma,\mathbb{R}) }$$ we have $c(s^*_b(\omega_b))=s^*_c(c(\omega_b))=s^*_c\omega$. Note that $s^*_c\omega$ is represented by a $\Gamma$--invariant cocycle $s^*f$ where $f \colon\thinspace X^{n+1} \rightarrow \mathbb{R}$ is the $G$--invariant cocycle representing $\omega$, which is defined by $$ f(x_0,\ldots,x_n) = \int_{[x_0,\ldots,x_n]} \omega.$$
Also, one can consider another $\Gamma$--invariant cocycle $h \colon\thinspace X^{n+1} \rightarrow \mathbb{R}$ defined by $$h(x_0,\ldots,x_n) = \int_{[x_0,\ldots,x_n]} s^*\omega.$$ Here, $s^*\omega$ is the pull-back of the $G$--invariant volume form $\omega$ by $s \colon\thinspace X \rightarrow X$. It is easy to see that $h$ also represents the continuous cohomology class $s^*_c \omega$ because the geodesic straightening map is chain homotopic to the identity.
Let $c$ be a fundamental cycle representing $[M]$. Since $h$ represents the cohomology class $s^*_c\omega$ in $H^n(\Gamma,\mathbb{R}) \cong H^n(M,\mathbb{R})$, we have $$ \langle s^*_b(\omega_b), i_*[M] \rangle =\langle s^*_c \omega, [M] \rangle = \langle h, c \rangle=\int_M s^*\omega$$ for any $\omega_b \in c^{-1}(\omega)$. The last equality follows from the de Rham theorem. This completes the proof. \end{proof}
Goldman proves that $\upsilon (\rho)$ exactly characterizes discrete, faithful representations of $\Gamma$ into $G$ in the case that $G$ is either a connected semisimple Lie group of higher rank or $\text{SO}(n,1)$. By Lemma \ref{lem:3.3}, the same characterization holds for $\mathrm{Vol}(\rho)$.
\section{Semisimple Lie groups of higher rank}\label{sec:4}
In this section, we prove Theorem \ref{thm:1.2} for the case that $G$ is a semisimple Lie group of higher rank. Recall the restriction maps $$res_c \colon\thinspace H^\bullet_c(G,\mathbb{R}) \rightarrow H^\bullet(\Gamma,\mathbb{R}) \text{ and } res_b \colon\thinspace H^\bullet_{c,b}(G,\mathbb{R}) \rightarrow H^\bullet_b(\Gamma,\mathbb{R}),$$ induced from the inclusions $C^\bullet_c(X,\mathbb{R})^G \subset C^\bullet_c(X,\mathbb{R})^\Gamma$ and $C^\bullet_{c,b}(X,\mathbb{R})^G \subset C^\bullet_{c,b}(X,\mathbb{R})^\Gamma$ respectively. Note that $res_b$ is an isometric embedding because $\Gamma$ is a lattice in $G$. We first observe that $$\langle res_b(\omega_b), \alpha \rangle = \mathrm{Vol}(M)$$ for all $\omega_b \in c^{-1}(\omega)$ and all $\alpha \in [M]^{\ell^1}_\mathrm{Lip}$. To verify this, we need to prove the existence of the geodesic straightening map on the locally finite chain complex with finite Lipschitz constant.
The geodesic straightening map on the singular chain complex of a nonpositively curved manifold was introduced by Thurston \cite[Section 6.1]{Th78}. Let $X$ be a simply connected, complete Riemannian manifold with nonpositive sectional curvature. A geodesic simplex is defined inductively as follows: Let $x_0,\ldots,x_k \in X$. First, the geodesic $0$--simplex $[x_0]$ is the point $x_0 \in X$ and the geodesic $1$--simplex $[x_0,x_1]$ is the unique geodesic from $x_1$ to $x_0$. In general, the geodesic $k$--simplex $[x_0,\ldots,x_k]$ is the geodesic cone over $[x_0,\ldots,x_{k-1}]$ with the top point $x_k$.
Let $M$ be a connected, complete Riemannian manifold with nonpositive sectional curvature. Then, \emph{the geodesic straightening map} $str \colon\thinspace C_\bullet (M,\mathbb{R}) \rightarrow C_\bullet (M,\mathbb{R})$ is defined by $$ str(\sigma) = \pi_M \circ [\tilde{\sigma}(e_0),\ldots,\tilde{\sigma}(e_k)],$$ for a singular $k$--simplex $\sigma \colon\thinspace \Delta^k \rightarrow M$ where $\pi_M \colon\thinspace \widetilde{M} \rightarrow M$ is the universal covering map, $e_0,\ldots,e_k$ are the vertices of the standard $k$--simplex $\Delta^k$, and $\tilde{\sigma}$ is a lift of $\sigma$ to the universal cover $\widetilde{M}$.
\begin{prop}\label{pro:4.1} Let $M$ be a connected, complete, locally symmetric space of noncompact type. Then, the geodesic straightening map is well-defined on $C^\mathrm{lf,Lip}_\bullet(M,\mathbb{R})$ and moreover, it is chain homotopic to the identity. \end{prop}
\begin{proof} Let $A \in S^\mathrm{lf,Lip}_k(M)$ for a nonnegative integer $k$. This means that any compact subset of $M$ intersects the image of only finitely many elements of $A$ and there exists a constant $C_A>0$ such that $\mathrm{Lip}(\sigma)<C_A$ for all $\sigma\in A$.
Let $str \colon\thinspace C_\bullet(M,\mathbb{R}) \rightarrow C_\bullet(M,\mathbb{R})$ denote the geodesic straightening map. Define $str(A)$ by
$$str(A)= \{ str(\sigma) \text{ }|\text{ } \sigma \in A \}.$$ To show that the geodesic straightening map is well-defined on $C^\mathrm{lf,Lip}_\bullet(M,\mathbb{R})$, it is sufficient to show that $str(A) \in S^\mathrm{lf,Lip}_k(M)$.
We first claim that $str(A)$ has finite Lipschitz constant. Let $\mathrm{Diam}(\sigma)$ denote the diameter of $\sigma(\Delta^k)$ for a singular simplex $\sigma \colon\thinspace \Delta^k \rightarrow M$. For all $\sigma \in A$, $$\mathrm{Diam}(\sigma) \leq C_A \cdot \mathrm{Diam}(\Delta^k)$$ since $\sigma \colon\thinspace \Delta^k \rightarrow M$ has Lipschitz constant at most $C_A$. Hence, the image of each $\sigma \in A$ is contained in a closed ball of diameter $D_A= C_A \cdot \mathrm{Diam}(\Delta^k)$ in $M$. Because every closed ball in $X$ is geodesically convex, both $\sigma$ and $str(\sigma)$ are contained in the same closed ball of diameter $D_A$ for every $\sigma \in A$. This implies that $\mathrm{Diam}(str(\sigma))\leq D_A$ for all $\sigma \in A$.
For every $D>0$ and $k\in \mathbb{N}$, there is $L>0$ such that every geodesic $k$--simplex $\tau$ of diameter less than $D$ satisfies $\| T_x\tau \|<L$ for every $x \in \Delta^k$ \cite[Proposition 2.4]{LS09}. Hence, there exists $L_A>0$ such that $\mathrm{Lip}(str(\sigma)) < L_A $ for all $\sigma \in A$, that is, $str(A)$ has finite Lipschitz constant $L_A$.
Next, to verify that $str(A)$ has locally finite support, we need to show that every compact subset of $M$ intersects the image of only finitely many elements of $str(A)$. Let $K$ be a compact subset of $M$ and $\mathcal{N}_{D_A}(K)$ be the $D_A$--neighborhood of $K$. Suppose $\overline{\mathcal{N}_{D_A}(K)}\cap \sigma= \emptyset$ for some $\sigma \in A$. As observed above, both $\sigma$ and $str(\sigma)$ are contained in a closed ball $B_\sigma$ of diameter $D_A$. Since $\sigma \subset B_\sigma$ and $\sigma$ avoids $\overline{\mathcal{N}_{D_A}(K)}$, the ball $B_\sigma$ meets the complement $M-\overline{\mathcal{N}_{D_A}(K)}$; as $B_\sigma$ has diameter $D_A$, it cannot touch $K$, and hence $str(\sigma) \cap K = \emptyset$. Thus, $K$ can intersect the image of $str(\sigma)$ only for $\sigma \in A$ with $\overline{\mathcal{N}_{D_A}(K)}\cap \sigma \neq \emptyset$. There are only finitely many such elements of $A$ since $\overline{\mathcal{N}_{D_A}(K)}$ is a compact subset of $M$ and $A$ has locally finite support. Finally, we can conclude that $str(A)$ is a locally finite subset of $S_k(M)$ with finite Lipschitz constant, that is, $str(A) \in S^\mathrm{lf,Lip}_k(M)$.
From the above observation, we have a well-defined map $$str^\mathrm{lf} \colon\thinspace C^\mathrm{lf,Lip}_\bullet(M,\mathbb{R})\rightarrow C^\mathrm{lf,Lip}_\bullet(M,\mathbb{R})$$ extending the geodesic straightening map $str \colon\thinspace C_\bullet (M,\mathbb{R}) \rightarrow C_\bullet (M,\mathbb{R})$. It is obvious that $str^\mathrm{lf}$ is a chain map.
Now, to construct a chain homotopy from $str^\mathrm{lf}$ to the identity, recall the chain homotopy $H_\bullet \colon\thinspace C_\bullet(M,\mathbb{R}) \rightarrow C_{\bullet+1}(M,\mathbb{R})$ from the geodesic straightening map $str$ to the identity. Let $G_\sigma \colon\thinspace \Delta^k \times [0,1] \rightarrow M$ be the canonical straight line homotopy from $\sigma$ to $str(\sigma)$ for a singular $k$--simplex $\sigma$ in $M$. Let $\{e_0,\ldots,e_k \}$ denote the set of vertices in $\Delta^k$ for each $k$. The chain homotopy $H_k \colon\thinspace C_k(M,\mathbb{R})\rightarrow C_{k+1}(M,\mathbb{R})$ is defined by $$H_k(\sigma) = \sum_{i=0}^k (-1)^i G_\sigma \circ \eta_i,$$ where $\eta_i \colon\thinspace \Delta^{k+1} \rightarrow \Delta^k \times [0,1]$ is the affine map that maps $e_0,\ldots,e_{k+1}$ to $(e_0,0),\ldots,(e_i,0),(e_i,1),\ldots,(e_k,1)$ for $i=0,\ldots,k$.
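As a sanity check, which is not needed for the argument, one can verify the homotopy identity directly in degree $1$. For a singular $1$--simplex $\sigma$, the affine maps $\eta_0$ and $\eta_1$ cut the prism $\Delta^1\times[0,1]$ into two triangles sharing the diagonal $d$ from $(e_0,0)$ to $(e_1,1)$, and $$\partial(G_\sigma\circ\eta_0)=str(\sigma)-G_\sigma\circ d+G_\sigma(e_0,\cdot), \qquad \partial(G_\sigma\circ\eta_1)=G_\sigma(e_1,\cdot)-G_\sigma\circ d+\sigma.$$ Hence the diagonal terms cancel in $\partial H_1(\sigma)=\partial(G_\sigma\circ\eta_0)-\partial(G_\sigma\circ\eta_1)$, and since the straightening fixes the vertices of $\sigma$, the vertical edges $G_\sigma(e_i,\cdot)$ are exactly the paths occurring in $H_0(\partial\sigma)$, so that $$\partial H_1(\sigma)+H_0(\partial\sigma)=str(\sigma)-\sigma.$$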
Let $c=\sum_{\sigma \in A} a_\sigma \sigma$ be a $k$--chain in $C_k^\mathrm{lf,Lip}(M,\mathbb{R})$ for $A\in S_k^\mathrm{lf,Lip}(M)$. Then, as we observed previously, $\mathrm{Lip}(\sigma) < C_A$ and $\mathrm{Lip}(str(\sigma)) < L_A$ for all $\sigma \in A$. Moreover, the canonical straight line homotopy $G_\sigma$ from $\sigma$ to $str(\sigma)$ has finite Lipschitz constant depending only on $C_A$ and $L_A$ by \cite[Proposition 2.1]{LS09}. Noting that the Lipschitz constant of $\eta_i$ is also uniformly bounded from above for all $i=0,\ldots,k$, it follows that the Lipschitz constant of $H_k(\sigma)$ is uniformly bounded from above by a constant depending only on $C_A$ and $L_A$ for all $\sigma \in A$. This means that the Lipschitz constant $\mathrm{Lip}(H_k(c))$ of $H_k(c)$ is finite.
To see that $H_k(c)$ has locally finite support, note that if $\sigma$ is contained in a closed ball, then the images of both $str(\sigma)$ and $H_k(\sigma)$ are contained in the same closed ball because every closed ball in $X$ is geodesically convex. As in the proof that $str(A)$ has locally finite support, any compact subset $K$ of $M$ can intersect the image of singular $(k+1)$--simplices occurring in $H_k(\sigma)$ only for $\sigma \in A$ with $\overline{\mathcal{N}_{D_A}(K)}\cap \sigma \neq \emptyset$. The set of such elements of $A$ is finite due to $A\in S^\mathrm{lf,Lip}_k(M)$. Moreover, since $H_k(\sigma)$ is a finite sum of $(k+1)$--simplices, $K$ intersects the image of finitely many $(k+1)$--simplices occurring in $H_k(c)$. This implies that $H_k(c)$ has locally finite support. Now, we have a well-defined map, $$H_\bullet^\mathrm{lf} \colon\thinspace C^\mathrm{lf,Lip}_\bullet(M,\mathbb{R}) \rightarrow C^\mathrm{lf,Lip}_{\bullet+1}(M,\mathbb{R}).$$
Since $H_\bullet^\mathrm{lf} \colon\thinspace C^\mathrm{lf,Lip}_\bullet(M,\mathbb{R}) \rightarrow C^\mathrm{lf,Lip}_{\bullet+1}(M,\mathbb{R})$ is the map extending the chain homotopy $H_\bullet \colon\thinspace C_\bullet (M,\mathbb{R}) \rightarrow C_{\bullet +1}(M,\mathbb{R})$ between the geodesic straightening map $str$ and the identity, it clearly satisfies $$\partial \circ H_k^\mathrm{lf} + H_{k-1}^\mathrm{lf} \circ \partial =str^\mathrm{lf} -id.$$ Hence, $H^\mathrm{lf}_\bullet$ is a chain homotopy from $str^\mathrm{lf}$ to the identity. Therefore, we can conclude that $str^\mathrm{lf} \colon\thinspace C^\mathrm{lf,Lip}_\bullet(M,\mathbb{R}) \rightarrow C^\mathrm{lf,Lip}_\bullet(M,\mathbb{R})$ is chain homotopic to the identity. \end{proof}
The existence of the geodesic straightening map on $C^\mathrm{lf,Lip}_\bullet(M,\mathbb{R})$ allows us to replace an arbitrary cycle by a straight one without changing its homology class. By using the straightening map $str^\mathrm{lf} \colon\thinspace C^\mathrm{lf,Lip}_\bullet(M,\mathbb{R}) \rightarrow C^\mathrm{lf,Lip}_\bullet(M,\mathbb{R})$, we can prove the following lemma.
\begin{lemma}\label{lem:4.2} Let $G$ be a connected semisimple Lie group with trivial center and no compact factors. Let $\Gamma$ be a lattice in $G$. Then, $$\langle res_b(\omega_b) , \alpha \rangle = \mathrm{Vol}(M)$$ for all $\omega_b \in c^{-1}(\omega)$ and all $\alpha \in [M]^{\ell^1}_\mathrm{Lip}$. \end{lemma}
\begin{proof} On the continuous cochain complex $C^\bullet_c(X,\mathbb{R})$, the $G$--invariant volume form $\omega$ is represented by a cocycle $f \colon\thinspace X^{n+1} \rightarrow \mathbb{R}$ defined by $$f(x_0,\ldots,x_n)=\int_{[x_0,\ldots,x_n]} \omega.$$ Let $f_b \colon\thinspace X^{n+1} \rightarrow \mathbb{R}$ be a cocycle representing $\omega_b \in c^{-1}(\omega)$. Both $f$ and $f_b$ represent the same cohomology class $\omega$ in the continuous cohomology of $G$. Hence, there exists a $G$--invariant continuous cochain $b$ in $C^{n-1}_c(X,\mathbb{R})^G$ such that $$f_b =f +\delta b.$$
Let $c=\sum_{i=1}^\infty a_i \sigma_i$ be a locally finite fundamental $\ell^1$--cycle with finite Lipschitz constant representing $\alpha$. Due to Proposition \ref{pro:4.1}, $str^\mathrm{lf}(c)$ is also a locally finite fundamental $\ell^1$--cycle with finite Lipschitz constant and represents $\alpha$. By \cite[Proposition 4.4]{LS09}, we have $$ \langle f, str^\mathrm{lf}(c) \rangle = \mathrm{Vol}(M).$$
Now, we claim that $\langle \delta b, str^\mathrm{lf}(c) \rangle =0$. Let $\sigma_i^j$ denote the $j$-th face of $\sigma_i$ for $j=0,\ldots,n$. Then, $\partial \sigma_i = \sum_{j=0}^n (-1)^j \cdot \sigma_i^j$ and \begin{eqnarray}\label{eqn:4.1} \langle \delta b, str^\mathrm{lf}(c) \rangle = \sum_{i=1}^\infty \sum_{j=0}^n (-1)^j a_i \cdot \langle b, str(\sigma_i^j) \rangle. \end{eqnarray}
Since the Lipschitz constant of $str^\mathrm{lf}(c)$ is finite, there exists $R>0$ such that each $str(\sigma_i)$ is contained in a closed ball of radius $R$ for all $i \in \mathbb{N}$. Fix a closed ball $B$ of radius $R$ in $X$. Then, for each $\sigma_i$ there exists $g_i \in G$ such that $g_i\cdot str(\sigma_i) \subset B$ since $G$ acts transitively on $X$. Due to the $G$--invariance of $b$, we have $\langle b, str(\sigma_i^j) \rangle = \langle b, g_i \cdot str(\sigma_i^j) \rangle$ for all $i\in \mathbb{N}$ and $j=0,\ldots,n$. This implies that $$\langle b, str(\sigma_i^j) \rangle = b(x_0, \ldots,x_{n-1})$$ for some $(x_0,\ldots,x_{n-1})\in B^n$. Since $b$ is continuous and $B$ is a compact subset of $X$, there exists an upper bound $C>0$ on $|\langle b, str(\sigma_i^j) \rangle|$ for all $i\in \mathbb{N}$ and $j=0,\ldots,n$. Furthermore, $c$ is an $\ell^1$--cycle and hence,
$$\sum_{i=1}^\infty \sum_{j=0}^n | (-1)^j a_i \cdot \langle b, str(\sigma_i^j) \rangle | \leq (n+1) C \cdot \sum_{i=1}^\infty |a_i| < \infty.$$
In other words, the series in Equation (\ref{eqn:4.1}) converges absolutely, so its value is unchanged under rearrangement. Since $str^\mathrm{lf}(c)$ is a cycle, $$\partial\, str^\mathrm{lf}(c) = \sum_{i=1}^\infty \sum_{j=0}^n (-1)^{j} a_{i} \cdot str(\sigma_{i}^{j})=0,$$ that is, the coefficients of each singular simplex occurring in this sum cancel. Evaluating $b$ term by term and using the absolute convergence, we conclude that $\langle \delta b, str^\mathrm{lf}(c) \rangle = 0$. Finally, we have {\setlength\arraycolsep{2pt} \begin{eqnarray*} \langle res_b(\omega_b), \alpha \rangle &=& \langle f+\delta b, str^\mathrm{lf}(c) \rangle \\ &=& \langle f, str^\mathrm{lf}(c) \rangle + \langle \delta b, str^\mathrm{lf}(c) \rangle \\ &=& \mathrm{Vol}(M). \end{eqnarray*}} The second equality holds because all the series involved converge absolutely. \end{proof}
\begin{defi} A representation $\rho \colon\thinspace \Gamma \rightarrow G$ is \emph{maximal} if $$\mathrm{Vol}(\rho)=\mathrm{Vol}(M).$$ \end{defi} For the reader's convenience, we recall Margulis's normal subgroup theorem \cite{Margulis}. \begin{thm}Let $G$ be a connected semisimple Lie group with finite center and ${\mathbb R}$--rank $\geq 2$, and let $\Gamma\subset G$ be an irreducible lattice. If $N\subset \Gamma$ is a normal subgroup of $\Gamma$, then either $N$ lies in the center of $G$ or the quotient $\Gamma/N$ is finite. \end{thm} \begin{thm}\label{thm:4.4} Let $G$ be a connected semisimple Lie group of higher rank with trivial center and no compact factors. Let $\Gamma$ be an irreducible lattice in $G$. Then, a representation $\rho \colon\thinspace \Gamma \rightarrow G$ is maximal if and only if $\rho$ is a discrete, faithful representation. \end{thm}
\begin{proof}
First, suppose that $\rho$ is discrete and faithful. The Margulis superrigidity theorem implies that $\rho$ extends to an automorphism $\tilde{\rho} \colon\thinspace G \rightarrow G$. Then, $\rho$ can be written as the composition $\rho = \tilde{\rho} \circ i$ where $i \colon\thinspace \Gamma \rightarrow G$ is the natural inclusion of $\Gamma$ into $G$. The canonical pullback map $\rho^*_b \colon\thinspace H^\bullet_{c,b}(G,\mathbb{R}) \rightarrow H^\bullet_b(\Gamma,\mathbb{R})$ in continuous bounded cohomology is realized as a composition $\rho^*_b=res_b \circ \tilde{\rho}^*_b$, $$ \xymatrixcolsep{2pc}\xymatrix{ H^\bullet_{c,b}(G,\mathbb{R}) \ar[r]^{\tilde{\rho}^*_b} & H^\bullet_{c,b}(G,\mathbb{R}) \ar[r]^{res_b} & H^\bullet_b(\Gamma,\mathbb{R}). }$$
Since $\tilde{\rho}$ is an automorphism of $G$, it induces an automorphism of the continuous (bounded) cohomology of $G$. In particular, it is easy to see that $\tilde{\rho}^*_c(\omega) = \pm \omega$ in $H^n_c(G,\mathbb{R})$. Considering the commutative diagram $$ \xymatrixcolsep{4pc}\xymatrix{ H^n_{c,b}(G,\mathbb{R}) \ar[r]^-{c} \ar[d]_-{\tilde{\rho}^*_b} & H^n_c(G,\mathbb{R}) \ar[d]^-{\tilde{\rho}^*_c} \\ H^n_{c,b}(G,\mathbb{R}) \ar[r]^-{c} & H^n_c(G,\mathbb{R}) }$$ the automorphism $\tilde{\rho}^*_b \colon\thinspace H^n_{c,b}(G,\mathbb{R}) \rightarrow H^n_{c,b}(G,\mathbb{R})$ permutes the set $c^{-1}(\omega)$ up to sign. Hence, {\setlength\arraycolsep{2pt} \begin{eqnarray*}
\mathrm{Vol}(\rho) &=& \inf \{ |\langle \rho^*_b (\omega_b),\alpha \rangle| \ | \ c(\omega_b)=\omega \text{ and } \alpha \in [M]^{\ell^1}_\mathrm{Lip} \} \\
&=& \inf \{|\langle res_b( \tilde{\rho}^*_b (\omega_b)),\alpha \rangle | \ | \ c(\omega_b)=\omega \text{ and } \alpha \in [M]^{\ell^1}_\mathrm{Lip} \} \\
&=& \inf \{ |\langle res_b( \omega_b),\alpha \rangle | \ | \ c(\omega_b)=\omega \text{ and } \alpha \in [M]^{\ell^1}_\mathrm{Lip} \}. \end{eqnarray*}} According to Lemma \ref{lem:4.2}, $\langle res_b(\omega_b),\alpha \rangle =\mathrm{Vol}(M)$ for all $\omega_b \in c^{-1}(\omega)$ and all $\alpha \in [M]^{\ell^1}_\mathrm{Lip}$. Therefore, $\mathrm{Vol}(\rho)= \mathrm{Vol}(M)$.
Conversely, suppose that $\rho \colon\thinspace \Gamma \rightarrow G$ is not a discrete, faithful representation. If $\rho$ has nontrivial kernel, then $\rho(\Gamma)$ is a finite group by Margulis's normal subgroup theorem. If $\rho$ is a nondiscrete, faithful representation, then $\rho(\Gamma)$ is precompact by the Margulis superrigidity theorem. In either case, $\rho(\Gamma)$ is an amenable subgroup of $G$. Regarding $\rho$ as a composition $\rho = i \circ \rho$, $$ \xymatrixcolsep{2pc}\xymatrix{ \Gamma \ar[r]^-{\rho} & \rho(\Gamma) \ar[r]^-{i} & G }$$ one can realize $\rho^*_b \colon\thinspace H^\bullet_{c,b}(G,\mathbb{R})\rightarrow H^\bullet_b(\Gamma,\mathbb{R})$ as a composition $\rho^*_b \circ res_b$, $$ \xymatrixcolsep{2pc}\xymatrix{ H^\bullet_{c,b}(G,\mathbb{R}) \ar[r]^-{res_b} & H^\bullet_{c,b}(\rho(\Gamma),\mathbb{R}) \ar[r]^-{\rho^*_b} & H^\bullet_b(\Gamma,\mathbb{R}). }$$ The continuous bounded cohomology $H^\bullet_{c,b}(\rho(\Gamma),\mathbb{R})$ is trivial because $\rho(\Gamma)$ is amenable. This implies that $\rho^*_b(\omega_b) = \rho^*_b(res_b(\omega_b))=0$ for all $\omega_b \in c^{-1}(\omega)$. Hence, $\mathrm{Vol}(\rho)=0$. This completes the proof. \end{proof}
\section{Simple Lie groups of rank $1$}\label{sec:5}
In this section, we give a proof of Theorem \ref{thm:1.2} for the case that $G$ is a simple Lie group of rank $1$ except for $\text{SO}(2,1)$. The Besson-Courtois-Gallot technique is a central ingredient here.
\begin{defi}
Let $F \colon\thinspace X \rightarrow Y$ be a smooth map between Riemannian manifolds $X$ and $Y$. The $p$--Jacobian $\mathrm{Jac}_pF$ of $F$ is defined by $$\mathrm{Jac}_pF(x) =\sup \| d_xF(u_1) \wedge \cdots \wedge d_xF(u_p) \|,$$ where $\{u_1,\ldots,u_p\}$ varies on the set of orthonormal $p$--frames at $x\in X$. \end{defi}
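For illustration (this example is not in the original text), if $F$ is a linear map between Euclidean spaces with singular values $s_1\geq s_2\geq\cdots\geq 0$, then $$\mathrm{Jac}_pF=s_1 s_2\cdots s_p,$$ since the supremum is attained when $\{u_1,\ldots,u_p\}$ consists of the right singular vectors corresponding to the $p$ largest singular values. In particular, $\mathrm{Jac}_nF=|\det F|$ when $\mathrm{dim}\,X=\mathrm{dim}\,Y=n$, so $\mathrm{Jac}_pF$ interpolates between the operator norm ($p=1$) and the Jacobian determinant.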
Let $X$ and $Y$ be complete, simply connected, Riemannian manifolds. Suppose that the sectional curvature $K_Y$ on $Y$ satisfies $K_Y \leq -1$. Let $\Gamma$ and $\Gamma'$ be discrete subgroups of $\text{Isom}(X)$ and $\text{Isom}(Y)$ respectively. For any representation $\rho \colon\thinspace \Gamma \rightarrow \Gamma'=\rho(\Gamma)$, Besson, Courtois and Gallot show that for all $\epsilon >0$ and $p\geq 3$, there exists a $\rho$--equivariant map $F_\epsilon \colon\thinspace X \rightarrow Y$ such that $$\mathrm{Jac}_p F_\epsilon (x) \leq \left( \frac{\delta(\Gamma)}{p-1}(1+\epsilon) \right)^p,$$ for all $x \in X$, where $\delta(\Gamma)$ is the critical exponent of $\Gamma$. Furthermore, they show that if $X$ has strictly negative sectional curvature, $\Gamma$ and $\Gamma'$ are convex cocompact and $\rho$ is injective, then there exists a natural map $F \colon\thinspace X\rightarrow Y$ \cite[Theorem 1.10]{BCG99}. Note that one can make use of the Besson-Courtois-Gallot method whenever there exists a $\rho$--equivariant measurable map from the visual boundary $\partial X$ of $X$ to $\partial Y$ for a representation $\rho \colon\thinspace \Gamma \rightarrow \mathrm{Isom}(Y)$.
\begin{prop}\label{pro:5.2} Let $G$ and $H$ be connected simple Lie groups of rank $1$ with trivial center and no compact factors. Let $X$ and $Y$ be the symmetric spaces associated with $G$ and $H$ respectively. Assume that the symmetric metrics on $X$ and $Y$ are normalized so that their curvatures lie between $-4$ and $-1$. Let $\Gamma$ be a lattice in $G$ and $\rho \colon\thinspace \Gamma \rightarrow H$ be a representation whose image is nonelementary. Then, there exists a map $F \colon\thinspace X\rightarrow Y$ such that \begin{itemize} \item[(1)] $F$ is smooth. \item[(2)] $F$ is $\rho$--equivariant. \item[(3)] For all $k\geq 3$, $\mathrm{Jac}_kF(x) \leq (\delta(\Gamma)/(k-1))^k$. \item[(4)] If $\mathrm{dim}(X) \geq \mathrm{dim}(Y) \geq 3$, then $\mathrm{Jac}_nF(x) \leq (\delta(\Gamma)/(n+d-2))^n$ where $d$ is the real dimension of the field or the ring under consideration for $G$. Moreover, equality holds for some $x\in X$ if and only if $D_xF$ is a homothety from $T_xX$ to $T_{F(x)}Y$. \end{itemize} \end{prop}
\begin{proof} By the assumption on the sectional curvatures of $X$ and $Y$, the associated symmetric spaces $X$ and $Y$ are $\mathrm{CAT}(-1)$--spaces. Since any lattice in $G$ is a discrete divergence subgroup of $G$, it follows from \cite[Theorem 0.2]{BM96} that there exists a unique $\rho$--equivariant measurable map $\varphi \colon\thinspace \partial X \rightarrow \partial Y$, and it takes almost all its values in the limit set of $\rho(\Gamma)$.
Let $\{ \nu_x \}_{x\in X}$ denote the family of Patterson-Sullivan measures on $\partial X$ for $\Gamma$. Let $\mu_x $ be the pushforward of $\nu_x$ by $\varphi$, that is, $\mu_x = \varphi_* \nu_x$. It can be easily seen that $\{\mu_x \}_{x\in X}$ is $\rho$--equivariant and moreover, the measures $\mu_x$ and $\mu_y$ are in the same measure class for all $x,y \in X$.
We claim that the barycenter of $\mu_x$ is well defined for all $x\in X$. Recall that if $\mu_x$ is not concentrated on two points, then the barycenter of $\mu_x$ is well defined. Assume that $\mu_x$ is concentrated on two points. Let $p$ be one of them. Then, $\mu_x$ must have positive weight on every point of the $\rho(\Gamma)$--orbit of $p$ because $\mu_x$ and $\mu_{\gamma x}=\rho(\gamma)_* \mu_x$ are in the same measure class for all $\gamma \in \Gamma$. However, the $\rho(\Gamma)$--orbit of $p$ contains more than two points because $\rho(\Gamma)$ is nonelementary. This contradicts the assumption that $\mu_x$ is concentrated on only two points. Therefore, the claim holds.
Following the construction of the natural map by Besson, Courtois and Gallot in \cite{BCG99}, define a map $F \colon\thinspace X\rightarrow Y$ as the composition $bar \circ \varphi_* \circ \mu$ of the maps $$ \xymatrixcolsep{2pc}\xymatrix{ X \ar[r]^-{\mu} & \mathcal{M}^+ (\partial X) \ar[r]^-{\varphi_*} & \mathcal{M}^+ (\partial Y) \ar[r]^-{bar} & Y }$$ where $\mathcal{M}^+ (\partial X)$ denotes the set of positive Borel measures on $\partial X$. Then, this map $F$ is $\rho$--equivariant. Furthermore, properties (1)--(4) of the natural map $F \colon\thinspace X \rightarrow Y$ can be proved by the same argument as in \cite[Section 2]{BCG99}. \end{proof}
The map $F \colon\thinspace X\rightarrow Y$ as above is called the \emph{natural map} for a representation $\rho \colon\thinspace\Gamma \rightarrow H$.
\begin{thm}\label{thm:5.3} Let $G$ be a connected simple Lie group of rank $1$ with trivial center and no compact factors, except for $\mathrm{SO}(2,1)$. Let $\Gamma$ be a lattice in $G$. Then, a representation $\rho \colon\thinspace \Gamma \rightarrow G$ is maximal if and only if $\rho$ is a discrete, faithful representation. \end{thm}
\begin{proof} Suppose that $\rho \colon\thinspace \Gamma \rightarrow G$ is a discrete, faithful representation.
Let $X$ be the associated symmetric space of dimension $n$ and $M=\Gamma\backslash X$. Then, $\rho$ extends to an automorphism $\tilde{\rho} \colon\thinspace G \rightarrow G$ by Mostow's rigidity theorem. By the same argument as in the proof of Theorem \ref{thm:4.4}, we have $\mathrm{Vol}(\rho)=\mathrm{Vol}(M).$
Conversely, we now suppose that $\mathrm{Vol}(\rho)=\mathrm{Vol}(M)$. If $\rho(\Gamma)$ is elementary, then $\rho^*_b(\omega_b)=0$ for all $\omega_b \in c^{-1}(\omega)$ and thus, $\mathrm{Vol}(\rho)=0$. Hence, we can assume that $\rho(\Gamma)$ is nonelementary. Assume that the sectional curvature on $X$ lies between $-4$ and $-1$. Then, there exists the natural map $F \colon\thinspace X \rightarrow X$ by Proposition \ref{pro:5.2}. Since the critical exponent satisfies $\delta(\Gamma)=n+d-2$ for any lattice $\Gamma$ in $G$, where $d$ is the real dimension of the field or the ring under consideration for $G$, we have $$\mathrm{Jac}_nF(x) \leq 1.$$
Define a continuous function $f \colon\thinspace X^{n+1} \rightarrow \mathbb{R}$ by $$f(x_0,\ldots,x_n) = \int_{[x_0,\ldots,x_n]}\omega.$$ It can be easily seen that $f \colon\thinspace X^{n+1} \rightarrow \mathbb{R}$ is a $G$--invariant continuous bounded cocycle representing the $G$--invariant volume form $\omega \in H^n_c(G,\mathbb{R})$ on $X$. Hence, $f$ determines a continuous bounded cohomology class $\omega_b \in c^{-1}(\omega)$. Recall that the $\Gamma$--invariant bounded cocycle $F^* f \colon\thinspace X^{n+1} \rightarrow \mathbb{R}$ is defined by $$F^* f(x_0,\ldots,x_n)=f(F(x_0),\ldots,F(x_n))=\int_{[F(x_0),\ldots,F(x_n)]}\omega.$$
Considering the pullback $F^*\omega$ of the $G$--invariant volume form $\omega$ on $X$ by the natural map $F$, one can define another $\Gamma$--invariant continuous bounded cocycle $h \colon\thinspace X^{n+1}\rightarrow \mathbb{R}$ by $$h(x_0,\ldots,x_n)=\int_{[x_0,\ldots,x_n]}F^*\omega.$$ The change of variables formula implies $$h(x_0,\ldots,x_n)=\int_{[x_0,\ldots,x_n]}F^*\omega=\int_{F([x_0,\ldots,x_n])} \omega.$$
It is clear that $[F(x_0),\ldots,F(x_n)]= \mathrm{str}(F([x_0,\ldots,x_n]))$. From the canonical straight line homotopy $H_\bullet \colon\thinspace C_\bullet(X,\mathbb{R})\rightarrow C_{\bullet+1}(X,\mathbb{R})$ between the geodesic straightening map and the identity, we have $$ [F(x_0),\ldots,F(x_n)]-F([x_0,\ldots,x_n])=(\partial \circ H_n + H_{n-1} \circ \partial) (F([x_0,\ldots,x_n])).$$
A straightforward computation shows that $ h - F^* f = \delta \eta$, where $$\eta(x_0,\ldots,x_{n-1})=\int_{H_{n-1}\circ F([x_0,\ldots,x_{n-1}])} \omega .$$ \begin{lemma} If $G$ is not $\mathrm{SO}(3,1)$, then $\eta$ is a $\Gamma$-invariant continuous bounded cochain, which implies that $h$ and $F^* f$ represent the same bounded cohomology class $F^*_b(\omega_b)$ in $H^n_b(\Gamma,\mathbb{R})$. \end{lemma} \begin{proof} If $G$ is not $\mathrm{SO}(3,1)$, the associated symmetric space $X$ has dimension at least $4$. Then, property (3) in Proposition \ref{pro:5.2} shows $$\mathrm{Jac}_{n-1}F(x) \leq \left( \frac{n+d-2}{n-2} \right)^{n-1},$$ for all $x \in X$. Hence, the volume of $F([x_0,\ldots,x_{n-1}])$ has a uniform upper bound. The volume of the straight line homotopy between $F([x_0,\ldots,x_{n-1}])$ and $[F(x_0),\ldots,F(x_{n-1})]$ is also uniformly bounded from above, since the volumes of both simplices are uniformly bounded from above and the sectional curvature on $X$ is bounded from above by $-1$. More precisely, one can approximate the straight line homotopy by the union of small cones $C_i$ whose bases are on $[F(x_0),\ldots,F(x_{n-1})]$ and whose apexes are on $F([x_0,\ldots,x_{n-1}])$, and small cones $C_j$ whose bases are on $F([x_0,\ldots,x_{n-1}])$ and whose apexes are on $[F(x_0),\ldots,F(x_{n-1})]$. On the other hand, it can be shown (see, for example, \cite[page 19]{Gr82}) that $$\mathrm{Vol}(\mathrm{Cone})\leq (n-1)^{-1}\,\mathrm{Vol}(\mathrm{Base}).$$ This shows that the volume of the straight line homotopy is bounded uniformly by the sum of the volumes of $F([x_0,\ldots,x_{n-1}])$ and $[F(x_0),\ldots,F(x_{n-1})]$. Thus, $\eta$ is a $\Gamma$-invariant continuous bounded cochain, which implies that $h$ and $F^* f$ represent the same bounded cohomology class $F^*_b(\omega_b)$ in $H^n_b(\Gamma,\mathbb{R})$.
\end{proof} Let $\alpha \in [M]^{\ell^1}_\text{Lip}$ and let $c$ be a locally finite fundamental $\ell^1$--cycle with finite Lipschitz constant representing $\alpha$. We now assume that $G$ is not $\mathrm{SO}(3,1)$. The maximality condition $\mathrm{Vol}(\rho)=\mathrm{Vol}(M)$ yields the inequality \begin{eqnarray}\label{naturalvol}
|\langle F^*_b (\omega_b), \alpha \rangle | = | \langle F^* f, c \rangle | = |\langle h, c \rangle| = \left| \int_M F^*\omega \right|\geq \mathrm{Vol}(M). \end{eqnarray} Since $\text{Jac}_nF(x)\leq 1$ almost everywhere, inequality (\ref{naturalvol}) actually implies that
$$ \left| \int_M F^*\omega \right| = \mathrm{Vol}(M),$$ and hence $\text{Jac}_nF(x)=1$ almost everywhere. Then, it follows from property $(4)$ of the natural map in Proposition \ref{pro:5.2} that $F$ is an isometry. Therefore, $\rho \colon\thinspace \Gamma \rightarrow G$ is a discrete, faithful representation.
The theorem for the case $G=\mathrm{SO}(3,1)$ is covered by the result of Bucher, Burger and Iozzi \cite{BBI}. In their paper \cite{BBI}, an invariant for representations of lattices in $\mathrm{SO}(n,1)$ is defined in the same manner as the invariant for representations of lattices in $\mathrm{SO}(2,1)$ in \cite{BIW10}. Moreover, they show that the invariant detects discrete, faithful representations for $n \geq 3$. In fact, it is easy to see that the absolute value of the invariant for representations $\rho$ of hyperbolic lattices is equal to the volume invariant $\mathrm{Vol}(\rho)$; this follows from the same argument as in the proof of Proposition \ref{prop:6.2}. Hence, the theorem holds for the case $G=\mathrm{SO}(3,1)$. This completes the proof. \end{proof}
From Lemma \ref{lem:3.3}, it is easy to see that Theorem \ref{thm:5.3} covers the remaining cases $\mathrm{SU}(n,1)$, $\mathrm{Sp}(n,1)$ and $\mathrm{F}_4^{-20}$ that Goldman's proof in \cite{Go92} did not cover. This completes the proof of Conjecture \ref{con:1.1}.
\section{$\text{SO}(2,1)$}\label{sec:6}
In this section, we deal with $\text{PU}(1,1)$ instead of $\text{SO}(2,1)$ for convenience. Let $\Gamma$ be a lattice in $\text{PU}(1,1)$ and $\rho \colon\thinspace \Gamma \rightarrow \text{PU}(1,1)$ be a representation. The unit ball $\mathbb{D}$ in the complex plane $\mathbb{C}$ is the associated symmetric space and $S=\Gamma\backslash \mathbb{D}$ is a surface of finite topological type with negative Euler number. If $\Gamma$ is a uniform lattice, then the volume invariant $\mathrm{Vol}(\rho)$ is equal to
$|\upsilon(\rho)|$, as we saw in Lemma \ref{lem:3.3}. Hence, Theorem \ref{thm:1.2} for uniform lattices in $\text{PU}(1,1)$ follows from Goldman's proof. We refer the reader to \cite{Go81} for a detailed proof of this.
From now on, we assume that $\Gamma$ is a nonuniform lattice in $\text{PU}(1,1)$. In this case, Burger, Iozzi and Wienhard define the Toledo invariant as follows. Let $\Sigma$ be a connected, oriented, compact surface with boundary $\partial \Sigma$ whose interior is homeomorphic to $S$. Let $\rho \colon\thinspace \pi_1(\Sigma) \rightarrow \text{PU}(1,1)$ be a representation. The second continuous cohomology $H^2_c(\text{PU}(1,1),\mathbb{R})$ of $\text{PU}(1,1)$ is generated by the K\"{a}hler form $\kappa$ on $\mathbb{D}$. There is a unique continuous bounded K\"{a}hler class $\kappa_b \in H^2_{c,b}(\text{PU}(1,1),\mathbb{R})$ since the comparison map $c \colon\thinspace H^\bullet_{c,b}(\mathrm{PU}(1,1),\mathbb{R}) \rightarrow H^\bullet_c(\mathrm{PU}(1,1),\mathbb{R})$ is an isomorphism in degree $2$. By pulling back the bounded K\"{a}hler class $\kappa_b$ via $\rho$, one obtains a bounded cohomology class $$\rho^*_b(\kappa_b) \in H^2_b(\pi_1(\Sigma),\mathbb{R}) \cong H^2_b(\Sigma,\mathbb{R}).$$
The canonical map $C^\bullet_b(\Sigma, \partial \Sigma, \mathbb{R}) \rightarrow C^\bullet_b(\Sigma,\mathbb{R})$ induces an isomorphism $j \colon\thinspace H^2_b(\Sigma,\partial \Sigma,\mathbb{R}) \rightarrow H^2_b(\Sigma,\mathbb{R})$ in bounded cohomology. The Toledo invariant $\mathrm{T}(\Sigma, \rho)$ of $\rho$ is defined by $$\text{T}(\Sigma, \rho)= \langle j^{-1}(\rho^*_b(\kappa_b)),[\Sigma,\partial \Sigma]\rangle,$$ where $j^{-1}(\rho^*_b(\kappa_b))$ is considered as an ordinary relative cohomology class and $[\Sigma,\partial \Sigma]$ is the relative fundamental class. Burger, Iozzi and Wienhard obtain a Milnor--Wood type inequality
$$ | \mathrm{T}(\Sigma,\rho) | \leq |\chi(\Sigma)|,$$ where $\chi(\Sigma)$ is the Euler number of $\Sigma$. Moreover, they generalize Goldman's characterization of maximal representations for closed surfaces to the case of surfaces with boundary.
\begin{thm}[Burger, Iozzi and Wienhard] Let $\Sigma$ be a connected oriented surface with negative Euler number. A representation $\rho \colon\thinspace \pi_1(\Sigma) \rightarrow \mathrm{PU}(1,1)$ is maximal if and only if it is the holonomy representation of a complete hyperbolic metric on the interior of $\Sigma$. \end{thm}
In fact, a similar argument holds for a representation of $\pi_1(\Sigma)$ into a Lie group of Hermitian type. We refer the reader to \cite{BIW10} for more details.
\begin{prop}\label{prop:6.2} Let $\Gamma$ be a nonuniform lattice in $\mathrm{PU}(1,1)$. Then
$$\mathrm{Vol}(\rho)=2\pi |\mathrm{T}(\Sigma,\rho) |.$$ \end{prop}
\begin{proof}
Let $S=\Gamma\backslash \mathbb{D}$ and $\Sigma$ be the compact surface with boundary whose interior is homeomorphic to $S$. We think of $S$ as the interior of $\Sigma$. Let $\omega$ be the $\text{PU}(1,1)$--invariant volume form on $\mathbb{D}$. Then, $\omega= 2\pi \kappa$ for the K\"{a}hler form $\kappa$ on $\mathbb{D}$. Hence, {\setlength\arraycolsep{2pt} \begin{eqnarray*}
\mathrm{Vol}(\rho)&=& \inf \{ |\langle \rho^*_b (\omega_b), \alpha \rangle| \ | \ c(\omega_b)=\omega \text{ and }\alpha \in [S]^{\ell^1}_\mathrm{Lip} \} \\
&=& 2\pi \cdot \inf \{| \langle \rho^*_b (\kappa_b), \alpha \rangle | \ | \ \alpha \in [S]^{\ell^1}_\mathrm{Lip} \}. \end{eqnarray*}}
We claim that $\langle \rho^*_b (\kappa_b), \alpha \rangle = \text{T}(\Sigma,\rho)$ for all $\alpha \in [S]^{\ell^1}_\mathrm{Lip}$. Consider a collar neighborhood of $\partial \Sigma$ in $\Sigma$ that is homeomorphic to $\partial \Sigma \times [0,1)$. Let $K$ be the complement of the collar neighborhood of $\partial \Sigma$. Note that $K$ is a compact subsurface with boundary that is a deformation retract of $\Sigma$. Consider the following commutative diagram, $$ \xymatrixcolsep{2pc}\xymatrix{ C^\bullet_b(S,\mathbb{R}) & C^\bullet_b(\Sigma,\mathbb{R}) \ar[l]_-{i_1} & C^\bullet_b(\Sigma,\partial \Sigma,\mathbb{R}) \ar[l]_-{j} \\ C^\bullet_b(S,S-K,\mathbb{R})\ar[u]^-{p_1} & C^\bullet_b(\Sigma, \Sigma-K, \mathbb{R}) \ar[l]_-{i_2} \ar[u]^-{p_2} \ar[ru]_-{p_3} & }$$ where every map in the above diagram is the map induced from the canonical inclusion. Every map in the diagram induces an isomorphism in bounded cohomology in degree $2$. Thus, there exists a cocycle $z\in C^2_b(\Sigma,\Sigma-K,\mathbb{R})$ such that $p_2(z)$ represents $\rho_b^*(\kappa_b)$ in $H^2_b(\Sigma,\mathbb{R})$ and $i_1 (p_2(z))$ represents $\rho_b^*(\kappa_b)$ in $H^2_b(S,\mathbb{R})$ and $p_3(z)$ represents $j^{-1}(\rho_b^*(\kappa_b))$ in $H^2_b(\Sigma,\partial \Sigma, \mathbb{R})$. Here, we use the same notation $\rho^*_b(\kappa_b)$ for the bounded cohomology classes in $H^2_b(\Sigma,\mathbb{R})$ and $H^2_b(S,\mathbb{R})$ identified with $\rho^*_b(\kappa_b) \in H^2_b(\Gamma,\mathbb{R})$ via the canonical isomorphisms $H^2_b(\Sigma,\mathbb{R}) \cong H^2_b(\Gamma,\mathbb{R})$ and $H^2_b(S,\mathbb{R}) \cong H^2_b(\Gamma,\mathbb{R})$ respectively.
Let $c=\sum_{i=1}^\infty a_i \sigma_i$ be a locally finite fundamental $\ell^1$--cycle with finite Lipschitz constant representing $\alpha \in [S]^{\ell^1}_\mathrm{Lip}$. Then, we have
$$\langle \rho^*_b(\kappa_b),\alpha \rangle = \langle i_1 (p_2(z)), c \rangle = \langle z, c|_K \rangle,$$
where $c|_K=\sum_{\mathrm{im}\sigma_i \cap K \neq \emptyset} a_i \sigma_i$. It is a standard fact that $c|_K$ represents the relative fundamental class $[S,S-K]$ in $H_2(S,S-K,\mathbb{R})$. Since the fundamental cycle representing $[S,S-K]$ is also a representative of the fundamental class $[\Sigma,\Sigma-K]$ by the canonical inclusion, $c|_K$ represents the fundamental class $[\Sigma,\Sigma -K]$ in $H_2(\Sigma,\Sigma-K,\mathbb{R})$. Let $[z]$ denote the cohomology class in $H^2(\Sigma,\Sigma-K,\mathbb{R})$ determined by $z$. From the viewpoint of the Kronecker product $\langle \cdot, \cdot \rangle \colon\thinspace H^2(\Sigma, \Sigma-K, {\mathbb R}) \otimes H_2(\Sigma,\Sigma-K,{\mathbb R}) \rightarrow {\mathbb R}$, we have
$$\langle z, c|_K \rangle = \langle [z], [\Sigma, \Sigma-K] \rangle.$$
Let $d \in C_2(\Sigma,\partial \Sigma)$ be a cycle representing the fundamental cycle $[\Sigma,\partial \Sigma]$ in $H_2(\Sigma,\partial \Sigma,\mathbb{R})$. Since $p_3(z)$ represents $j^{-1}( \rho^*_b(\kappa_b))$,
$$ \langle j^{-1}(\rho_b^*(\kappa_b)), [\Sigma,\partial \Sigma] \rangle = \langle p_3(z), d \rangle = \langle z, d|_K \rangle.$$
For any relative fundamental cycle $d$ in $C_2(\Sigma,\partial \Sigma,\mathbb{R})$, $d|_K$ represents the fundamental class $[\Sigma, \Sigma-K]$ in $H_2(\Sigma,\Sigma-K,\mathbb{R})$. Hence, $$\langle z,d|_K \rangle = \langle [z], [\Sigma,\Sigma-K] \rangle.$$ Therefore, we can finally conclude that $$\langle \rho^*_b(\kappa_b),\alpha \rangle = \langle j^{-1}(\rho^*_b(\kappa_b)), [\Sigma,\partial \Sigma] \rangle = \langle [z], [\Sigma,\Sigma-K] \rangle,$$ which implies this proposition. \end{proof}
The equation $\mathrm{Vol}(\rho)=2 \pi |\mathrm{T}(\Sigma,\rho)|$ implies that the structure theorem for maximal representations of surfaces into $\text{PU}(1,1)$ with respect to the Toledo invariant $\mathrm{T}(\Sigma,\rho)$ also holds for the volume invariant $\mathrm{Vol}(\rho)$.
\begin{thm}\label{thm:6.3} Let $\Gamma$ be a lattice in $\mathrm{PU}(1,1)$. Then, a representation $\rho \colon\thinspace \Gamma \rightarrow \mathrm{PU}(1,1)$ is maximal if and only if $\rho$ is a discrete, faithful representation. \end{thm}
Theorem \ref{thm:1.2} follows from Proposition \ref{pro:3.1} and Theorems \ref{thm:4.4}, \ref{thm:5.3} and \ref{thm:6.3}.
\begin{thm}\label{thm:6.4} Let $\Gamma$ be an irreducible lattice in a connected semisimple Lie group $G$ with trivial center and no compact factors. Let $\rho \colon\thinspace \Gamma \rightarrow G$ be a representation. Then, the volume invariant $\mathrm{Vol}(\rho)$ satisfies an inequality $$ \mathrm{Vol}(\rho) \leq \mathrm{Vol}(M),$$ where $X$ is the associated symmetric space and $M=\Gamma\backslash X$. Moreover, equality holds if and only if $\rho$ is a discrete, faithful representation. \end{thm}
\section{Representations of lattices in $\mathrm{SO}(n,1)$ into $\mathrm{SO}(m,1)$}\label{sec:7}
In this section, we introduce a volume invariant $\mathrm{Vol}(\rho)$ for representations $\rho \colon\thinspace \Gamma \rightarrow \mathrm{SO}(m,1)$ of lattices $\Gamma$ in $\mathrm{SO}(n,1)$ for $m\geq n$. Let $\mathbb{H}^k$ denote the hyperbolic $k$--space for each $k\in \mathbb{N}$. Define a map $f_n^m \colon\thinspace (\mathbb{H}^m)^{n+1} \rightarrow \mathbb{R}$ by $$f_n^m(x_0,\ldots,x_n)= \mathrm{Vol}_n^m([x_0,\ldots,x_n]),$$ where $\mathrm{Vol}_n^m([x_0,\ldots,x_n])$ is the $n$--dimensional volume of the geodesic $n$--simplex $[x_0,\ldots,x_n]$ in $\mathbb{H}^m$. Clearly, $f_n^m$ is an $\mathrm{SO}(m,1)$--invariant continuous (bounded) cochain in $C^n_c(\mathbb{H}^m,\mathbb{R})$. Observing that the geodesic $n$--simplex $[x_0,\ldots,x_n]$ is contained in a copy of $\mathbb{H}^n$ in $\mathbb{H}^m$, it is easy to see that $f_n^m$ is a continuous (bounded) cocycle and, moreover,
$$\| \omega_n^m \|_\infty = v_n$$ where $\omega_n^m \in H^n_c(\mathrm{SO}(m,1),\mathbb{R})$ is the continuous cohomology class determined by the cocycle $f_n^m$ and $v_n$ is the volume of a regular ideal geodesic simplex in $\mathbb{H}^n$.
According to the Van Est isomorphism, the continuous cohomology class $\omega_n^m$ corresponds to an $\mathrm{SO}(m,1)$--invariant differential $n$--form $\omega_n^m$ on $\mathbb{H}^m$. The restriction of the differential form $\omega_n^m$ to any totally geodesic $\mathbb{H}^n$ in $\mathbb{H}^m$ is the Riemannian volume form of that totally geodesic $\mathbb{H}^n$.
Let $\Gamma$ be a lattice in $\mathrm{SO}(n,1)$ and $\rho \colon\thinspace \Gamma \rightarrow \mathrm{SO}(m,1)$ be a representation for $m\geq n$. Let $c \colon\thinspace H^*_{c,b}(\mathrm{SO}(m,1),\mathbb{R}) \rightarrow H^*_c(\mathrm{SO}(m,1),\mathbb{R})$ be the comparison map and $M=\Gamma \backslash \mathbb{H}^n$. Then, we define a volume invariant $\mathrm{Vol}(\rho)$ of $\rho$ by
$$ \mathrm{Vol}(\rho) = \inf \{ | \langle \rho^*_b (\omega_{n,b}^m), \alpha \rangle | \ | \ c(\omega_{n,b}^m)=\omega_n^m \text{ and } \alpha \in [M]^{\ell^1}_\mathrm{Lip}\}.$$
It satisfies an inequality
$$ \mathrm{Vol}(\rho) \leq \| \omega_n^m \|_\infty \cdot \|M\|_\mathrm{Lip} = v_n \cdot \frac{\mathrm{Vol}(M)}{v_n}=\mathrm{Vol}(M).$$
Recall that a representation $\rho \colon\thinspace \Gamma \rightarrow \mathrm{SO}(m,1)$ is said to be a \emph{totally geodesic representation} if there is a totally geodesic $\mathbb{H}^n \subset \mathbb{H}^m$ such that the image of the representation lies in the subgroup $G \subset \mathrm{SO}(m,1)$ preserving this $\mathbb{H}^n$, and the $\rho$--equivariant map $F \colon\thinspace \mathbb{H}^n \rightarrow \mathbb{H}^m$ is a totally geodesic isometric embedding. Note that the subgroup $G$ of $\mathrm{SO}(m,1)$ is of the form $H\times K$ where $H$ is isomorphic to $\mathrm{SO}(n,1)$ and $K$ is isomorphic to the compact group $\mathrm{SO}(m-n)$. A totally geodesic representation $\rho \colon\thinspace \Gamma \rightarrow \mathrm{SO}(m,1)$ splits as $\rho = \rho_1 \times \rho_2$, where $\rho_1$ is conjugate to the inclusion $\Gamma \hookrightarrow \mathrm{SO}(n,1)$ by the Mostow rigidity theorem.
\begin{thm} Let $\Gamma$ be a lattice in $\mathrm{SO}(n,1)$ and $M=\Gamma \backslash \mathbb{H}^n$. The volume invariant $\mathrm{Vol}(\rho)$ of a representation $\rho \colon\thinspace \Gamma \rightarrow \mathrm{SO}(m,1)$ for $m\geq n \geq 3$ satisfies an inequality $$\mathrm{Vol}(\rho) \leq \mathrm{Vol}(M).$$ Moreover, equality holds if and only if $\rho$ is a totally geodesic representation. \end{thm}
\begin{proof} We only need to show the second statement. In fact, a proof of the theorem is given by Bucher, Burger and Iozzi in \cite{BBI}. We give here an independent proof of the theorem for $m \geq n >3$. Suppose that a representation $\rho \colon\thinspace \Gamma \rightarrow \mathrm{SO}(m,1)$ is a totally geodesic representation. Then, there exists a $\rho$--equivariant totally geodesic isometric embedding $F \colon\thinspace \mathbb{H}^n \rightarrow \mathbb{H}^m$. The $\rho$--equivariant map $F$ induces homomorphisms $F^*_c \colon\thinspace H^\bullet_c(\mathrm{SO}(m,1),\mathbb{R})\rightarrow H^\bullet (\Gamma,\mathbb{R})$ and $F^*_b \colon\thinspace H^\bullet_{c,b}(\mathrm{SO}(m,1),\mathbb{R})\rightarrow H^\bullet_b(\Gamma,\mathbb{R})$. The volume invariant $\mathrm{Vol}(\rho)$ of $\rho$ can be computed by
$$\mathrm{Vol}(\rho) = \inf \{ | \langle F^*_b (\omega_{n,b}^m), \alpha \rangle | \ | \ c(\omega_{n,b}^m)=\omega_n^m \text{ and } \alpha \in [M]^{\ell^1}_\mathrm{Lip}\}.$$
Since $F$ is an isometric embedding, we have {\setlength\arraycolsep{2pt} \begin{eqnarray*} F^* f^m_n(y_0,\ldots,y_n) &=& f^m_n(F(y_0),\ldots,F(y_n)) \\ &=& \mathrm{Vol}_n^m([F(y_0),\ldots,F(y_n)]) \\ &=& \mathrm{sign}(F) \cdot \mathrm{Vol}^n_n([y_0,\ldots,y_n]), \end{eqnarray*}} where $\mathrm{sign}(F)=1$ if $F$ is orientation-preserving and $\mathrm{sign}(F)=-1$ if $F$ is orientation-reversing. This implies that $F^*_c(\omega^m_n)=\mathrm{sign}(F) \cdot \mathrm{res}_c(\omega_n)$ where $\omega_n$ is the $\mathrm{SO}(n,1)$--invariant volume form on $\mathbb{H}^n$. Hence, it immediately follows that $F^*_b(\omega^m_{n,b})= \mathrm{sign}(F) \cdot \mathrm{res}_b(\omega_{n,b})$ for some $\omega_{n,b} \in c^{-1}(\omega_n)$. By Lemma \ref{lem:4.2}, we have $$ \langle F^*_b(\omega^m_{n,b}), \alpha \rangle = \langle \mathrm{sign}(F) \cdot \mathrm{res}_b(\omega_{n,b}), \alpha \rangle = \mathrm{sign}(F) \cdot \mathrm{Vol}(M),$$
for all $\alpha \in [M]^{\ell^1}_\mathrm{Lip}$. Thus, we can conclude that $$ \mathrm{Vol}(\rho) = \inf \{ | \langle F^*_b (\omega_{n,b}^m), \alpha \rangle | \ | \ c(\omega_{n,b}^m)=\omega_n^m \text{ and } \alpha \in [M]^{\ell^1}_\mathrm{Lip}\} = \mathrm{Vol}(M).$$
Conversely, we suppose that $\mathrm{Vol}(\rho)=\mathrm{Vol}(M)$. Recall that the natural map $F \colon\thinspace \mathbb{H}^n \rightarrow \mathbb{H}^m$ satisfies: \begin{itemize} \item $F$ is smooth. \item $F$ is $\rho$--equivariant. \item For all $k\geq 3$, $\text{Jac}_kF(x) \leq (\delta(\Gamma)/(k-1))^k$.
\item If $\| D_xF(u_1)\wedge \cdots \wedge D_xF(u_k)\|= (\delta(\Gamma)/(k-1))^k$ for an orthonormal $k$-frame $u_1,\ldots,u_k$ at $x \in \mathbb{H}^n$, then the restriction of $D_xF$ to the subspace generated by $u_1,\ldots,u_k$ is a homothety. \end{itemize}
Since $\delta(\Gamma)=n-1$ for a lattice $\Gamma$ in $\mathrm{SO}(n,1)$, the third property above with $k=n$ gives $\text{Jac}_nF(x)\leq \left((n-1)/(n-1)\right)^n=1$. By an argument similar to the one used in the proof of Theorem \ref{thm:5.3}, we can conclude that
$$\left| \int_M F^*\omega_n^m \right| = \mathrm{Vol}(M).$$ Hence, $\mathrm{Jac}_n F(x) =1$ almost everywhere, after possibly reversing the orientation of $\mathbb{H}^n$. Then, $F$ is a totally geodesic isometric embedding of $\mathbb{H}^n$ into $\mathbb{H}^m$. For a detailed proof of this, we refer to \cite{FK06}. Therefore, $\rho$ is a totally geodesic representation. \end{proof}
\section{Toledo invariant of complex hyperbolic representations}\label{sec:8}
In this section we consider only uniform lattices $\Gamma\subset \mathrm{SU}(n,1)$, $n\geq 2$. \subsection{On complex hyperbolic space} Let $\Gamma\subset \mathrm{SU}(n,1)$ be a uniform lattice, let $M=\Gamma\backslash {\mathbb H}^n_{\mathbb C}$, and let $\rho \colon\thinspace \Gamma\rightarrow G=\mathrm{SU}(m,1)$, $m\geq n$, be a representation. Let $\omega$ be a K\"ahler form on ${\mathbb H}^m_{\mathbb C}$. Then $\frac{1}{n!}\omega^n$ is an $\mathrm{SU}(m,1)$--invariant form, and it defines an element $\omega_c\in H^{2n}_{c}(G,{\mathbb R})$ via the Van Est isomorphism. Let $\omega_b\in H^{2n}_{b,c}(G,{\mathbb R})$ denote a bounded class such that $c(\omega_b)=\omega_c$ under the comparison map. Define the volume of the representation $\rho$ by
$$ \mathrm{Vol}(\rho) = \inf \{ |\langle \rho^*_b(\omega_b),\alpha \rangle| \ | \ c(\omega_b)=\omega_c \text{ and } \alpha \in [M]^{\ell^1}_\mathrm{Lip} \}.$$ Then, it satisfies the usual inequality
$$\mathrm{Vol}(\rho) \leq \|\omega_c\|_\infty \cdot \| M \|_\mathrm{Lip}.$$ Since $\frac{1}{n!}\omega^n$ is the volume form on ${\mathbb H}^n_{\mathbb C}$, this gives $\mathrm{Vol}(\rho) \leq \mathrm{Vol}(M)$.
Suppose $\mathrm{Vol}(\rho)=\mathrm{Vol}(M)$. If $\rho$ is not reductive, its image is contained in a parabolic subgroup, and the volume is zero. Hence, assume that $\rho$ is reductive. Let $F \colon\thinspace {\mathbb H}^n_{\mathbb C}\rightarrow {\mathbb H}^m_{\mathbb C}$ be a $\rho$--equivariant smooth harmonic map. Then some class $\rho^*_b(\omega_b)$ is represented by $F^*(\frac{1}{n!}\omega^n)$ and the pairing satisfies
$$|\langle \rho^*_b(\omega_b),\alpha \rangle|=\left| \int_M F^* \left( \frac{1}{n!}\omega^n \right) \right| \geq\mathrm{Vol}(M).$$ This implies that the rank of $dF$ is maximal at some point $x\in {\mathbb H}^n_{\mathbb C}$. By Siu's argument \cite{Siu}, $F$ is holomorphic. It is shown in \cite{BCG99} that $\mathrm{Jac}_{2n}F\leq 1$ for a holomorphic map $F$. Consequently,
$$\left| \int_M F^* \left( \frac{1}{n!}\omega^n \right) \right| = \mathrm{Vol}(M)$$ and $F$ is an isometric embedding.
Hence, using the same proof as in Section \ref{sec:7} together with the above argument, we obtain the following. \begin{thm}Let $\Gamma\subset \mathrm{SU}(n,1)$ be a uniform lattice and $\rho \colon\thinspace \Gamma\rightarrow \mathrm{SU}(m,1)$, $ m\geq n$, be a representation. Then $\rho \colon\thinspace \Gamma\rightarrow \mathrm{SU}(m,1)$ is a maximal volume representation if and only if $\rho$ is a totally geodesic representation. \end{thm} This is a reformulation of Corlette's result in \cite{Corlette} in terms of bounded cohomology theory. See also \cite{KM} and \cite{BI} for different formulations. Note that this theorem implies both the Goldman--Millson and Corlette results. \begin{cor}\label{cor:8.2} Let $\Gamma\subset \mathrm{SU}(n,1)\subset \mathrm{SU}(m,1)$ be a uniform lattice. Then it is locally rigid up to a compact group. \end{cor}
\begin{proof}Suppose $\rho_t \colon\thinspace \Gamma\rightarrow \mathrm{SU}(m,1)$ is a one-parameter family of representations near $\rho_0=i$, the canonical inclusion. Note that $\mathrm{Vol}(\rho_t)=\left| \int_M f_t^* \left( \frac{1}{n!}\omega^n \right) \right| $ by Lemma \ref{lem:3.3}, where $f_t \colon\thinspace {\mathbb H}^n_{\mathbb C}\rightarrow {\mathbb H}^m_{\mathbb C}$ is a $\rho_t$--equivariant map.
Note that $f_0^*(\frac{1}{n!}\omega^n)\in H^{2n}(M,{\mathbb Z})$ is a Chern class and hence $[f_t^*(\frac{1}{n!}\omega^n)]=[f_0^*(\frac{1}{n!}\omega^n)]\in H^{2n}(M,{\mathbb Z})$. This implies that $\mathrm{Vol}(\rho_t)=\mathrm{Vol}(\rho_0)$. Since $\rho_0$ is a maximal volume representation, $\rho_t$ is also maximal, hence
they are all conjugate to each other up to a compact group. \end{proof}
\subsection{On quaternionic hyperbolic space} We can also formulate the rigidity phenomenon of uniform lattices of $\mathrm{SU}(n,1)$ in $\mathrm{Sp}(m,1)$ as follows. The homogeneous space $D=\mathrm{Sp}(m,1)/\mathrm{Sp}(m)\times\mathrm{U}(1)$ over ${\mathbb H}^m_{\mathbb H}$ with fiber ${\mathbb C} P^1$ is called the twistor space. The vertical bundle $\cal V$ tangent to the fibers is a smooth subbundle of $TD$, and the unique $\mathrm{Sp}(m,1)$--invariant complement to this vertical subbundle is called the holomorphic horizontal subbundle $\cal H$. The twistor space $D$ possesses a pseudo-K\"ahler metric $g$ which is negative definite on $\cal V$ and positive definite on $\cal H$, whose associated $(1,1)$--form $\omega_D$ is closed. The quaternionic hyperbolic space ${\mathbb H}^m_{\mathbb H}$ has a one-dimensional space of $\mathrm{Sp}(m,1)$--invariant $4$--forms. We pick a $4$--form $\alpha$ so that its restriction to a totally geodesic complex hyperbolic subspace ${\mathbb H}^m_{\mathbb C}$ is $\omega^2_{{\mathbb H}^m_{\mathbb C}}$, where $\omega_{{\mathbb H}^m_{\mathbb C}}$ is a K\"ahler form on complex hyperbolic space. The relation between $\alpha$ and $\omega_D$ is as follows (see \cite[Lemma 1]{DG}):
$$\pi^*\alpha=\omega_D^2 + d\beta$$ for some $\mathrm{Sp}(m,1)$--invariant $3$--form $\beta$. Note that $\omega_D^n\in H^{2n}_{c}(\mathrm{Sp}(m,1),{\mathbb R})$. One can define the volume of a representation $\rho \colon\thinspace \Gamma\rightarrow \mathrm{Sp}(m,1)$ by $$ \mathrm{Vol}(\rho) = \inf \{ |\langle \rho^*_b(\omega_b),\sigma \rangle| \ | \ c(\omega_b)=\omega^n_D \text{ and } \sigma \in [M]^{\ell^1}_\mathrm{Lip} \}.$$
Then $\mathrm{Vol}(\rho)\leq \| \omega^n_D \|_\infty \cdot \| M \|_{\mathrm{Lip}}$. Suppose $\mathrm{Vol}(\rho)=\mathrm{Vol}(M)$. Note that this value is attained when $\rho$ is a totally geodesic embedding, since the restriction of $\omega_D$ to ${\mathbb H}^n_{\mathbb C}$ is a K\"ahler form. Let $f\colon\thinspace {\mathbb H}^n_{\mathbb C}\rightarrow {\mathbb H}^m_{\mathbb H}$ be a $\rho$--equivariant harmonic map and let $F$ be a lift of $f$ to the twistor space $D$ so that $\pi\circ F=f$. If $f^*\alpha=0$, then $F^*\omega_D^2=-dF^*\beta$ on $M$ and thus, we have $$\int_M F^*\omega^n_D =-\int_M F^*(\omega^{n-2}_D\wedge d\beta)=-\int_M dF^*(\omega^{n-2}_D\wedge \beta)=0.$$
Hence we may assume that $f^*\alpha\neq 0$; then the rank of $f$ is at least four at some point. By \cite{CD}, one can choose $F$ to be a holomorphic horizontal lift.
Then $F^* \omega_D^n$ represents some class $\rho^*_b(\omega_b)$ with $c(\omega_b)=\omega^n_D$, and $$\mathrm{Vol}(\rho)=\mathrm{Vol}(M)\leq \int_M F^*\omega_D^n. $$ Since $F$ is holomorphic, $F^*\omega_D\leq \omega_M$, hence $$\int_M F^*\omega_D^n\leq \int_M \omega_M^n=\mathrm{Vol}(M).$$ This forces $F$ to be a totally geodesic embedding. Hence the local rigidity of $\Gamma\subset \mathrm{SU}(n,1)\subset \mathrm{Sp}(m,1)$ also follows as in Corollary \ref{cor:8.2}, which is part of a result in \cite{KKP}.
\end{document} |
\begin{document}
\title{On the $N$-Extended Euler System I.\\
Generalized Jacobi Elliptic Functions}
\begin{abstract} We study the integrable system of first order differential equations $\omega_i(v)'=\alpha_i\,\prod_{j\neq i}\omega_j(v)$, $(1\!\leq i, j\leq \! N)$, as an initial value problem with real coefficients $\alpha_i$ and initial conditions $\omega_i(0)$. The analysis is based on its quadratic first integrals. For each dimension $N$, the system defines a family of functions, generically hyperelliptic functions. When $N=3$, this system generalizes the classic Euler system for the reduced flow of the free rigid body problem, and thus we call it the $N$-extended Euler system ($N$-EES). In this Part I the cases $N=4$ and $N=5$ are studied, generalizing the Jacobi elliptic functions, which are defined by a 3-EES. Taking into account the nested structure of the $N$-EES, we propose reparametrizations of the type ${\rm d}v^*=g(\omega_i)\,{\rm d}v$ that separate geometry from dynamics. Some of those parametrizations turn out to be generalizations of the {\sl Jacobi amplitude}. In Part II we consider geometric properties of the $N$-system and the numerical computation of the functions involved; it will be published elsewhere.
{\bf keywords:} Integrable systems \and Generalized Euler system \and Jacobi and Weierstrass elliptic functions \and third Legendre elliptic integral
\end{abstract}
\section{Introduction} \label{intro}
We are interested in the real functions $\omega_i(v)$ which are solutions of the integrable system of differential equations \begin{equation}\label{eq:EESn} \frac{{\rm d}\omega_i}{{\rm d}v}=\alpha_i\,\prod_{j\neq i}\omega_j,\qquad (1\!\leq i ,j\leq \! N), \end{equation} with coefficients and initial conditions $\alpha_i, \omega_i(0)\!\in\!\mathbb{R}$. Our study is based on the quadratic expressions
\begin{equation}\label{eq:integralescuadraticas} C_{ij}(v) = \alpha_i\,\omega_j(v)^2 - \alpha_j\,\omega_i(v)^2 \end{equation} which are integrals of the system (\ref{eq:EESn}).
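That the quantities (\ref{eq:integralescuadraticas}) are indeed first integrals follows from a direct computation along the flow of (\ref{eq:EESn}):
$$\frac{{\rm d}}{{\rm d}v}C_{ij} = 2\alpha_i\,\omega_j\,\omega_j' - 2\alpha_j\,\omega_i\,\omega_i' = 2\alpha_i\alpha_j\,\omega_j\prod_{k\neq j}\omega_k - 2\alpha_j\alpha_i\,\omega_i\prod_{k\neq i}\omega_k = 2\alpha_i\alpha_j\prod_{k=1}^{N}\omega_k - 2\alpha_i\alpha_j\prod_{k=1}^{N}\omega_k = 0.$$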
Initial conditions (IC) will be denoted $\omega^0\equiv \omega(0)=(\omega_1(0),\ldots,\omega_N(0))$. To simplify expressions we will write $\omega_i\equiv\omega_i(v)$ and $\omega_i'\equiv{\rm d}\omega_i/{\rm d}v$.
From the geometric point of view, the integrals (\ref{eq:integralescuadraticas}) tell us that the flow defined by (\ref{eq:EESn}) results from the intersection of quadrics in dimension $N$; more precisely, elliptic and hyperbolic cylinders. Thus, the $N$-EES family belongs to a larger family that also includes the paraboloids, as well as the degenerate cases defined by hyperplanes. Its Poisson structure is defined by a determinant built on the gradients of the independent integrals, {\it i.e.} the Casimirs. When $N=3$, this determinant is precisely the classic mixed product: one of the integrals is the Casimir and the other the Hamiltonian; details will be given elsewhere \cite{Crespo2015}.
One of the features of the system (\ref{eq:EESn}) is that, from a dynamical systems point of view, it allows dealing with a large family of functions in the real domain in a unified way. It ranges from trigonometric functions (harmonic oscillator) to elliptic functions (pendulum and free rigid body), also including rational functions (for unbounded trajectories), etc. We will learn that different systems allow us to introduce the same functions; for instance, the hyperbolic functions may be introduced with $N=2$, but they also appear when $N=3$ and two of the coefficients are equal. The interest of studying the generic system $N>4$ (the case $N=4$ is special, as we show below) lies in the fact that we then face hyperelliptic integrals and their inverses, a well-established theory of special functions of a complex variable developed in the nineteenth century which, nowadays, is enjoying a revival in several branches of science, particularly in mechanics. Although the theory is `at hand', its application turns out to be a nontrivial task because of the number of parameters involved in the definition of the functions, solutions of an IVP.
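The hyperelliptic character can be made explicit by a reduction to quadratures. Assuming $\alpha_i\neq 0$, the integrals (\ref{eq:integralescuadraticas}) give $\omega_j^2=(C_{ij}^0+\alpha_j\,\omega_i^2)/\alpha_i$ for $j\neq i$, with $C_{ij}^0=\alpha_i\,\omega_j(0)^2-\alpha_j\,\omega_i(0)^2$; squaring the $i$-th equation of (\ref{eq:EESn}) then yields
$$\left(\frac{{\rm d}\omega_i}{{\rm d}v}\right)^2=\alpha_i^2\prod_{j\neq i}\omega_j^2=\alpha_i^{\,3-N}\prod_{j\neq i}\bigl(C_{ij}^0+\alpha_j\,\omega_i^2\bigr),$$
so $v$ is obtained by inverting an integral $\int {\rm d}\omega_i/\sqrt{P(\omega_i)}$ with $\deg P=2(N-1)$: degree $4$ (elliptic) for $N=3$, and higher degree (hyperelliptic) for $N\geq 4$.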
\subsection{On Euler system, Jacobi functions and 3-EES} \label{sec:Jacobi}
In this paper, our program is to generalize {\sl Jacobi elliptic functions}. Thus, within the dynamical system point of view we have adopted, let us remember how all this started. The history of the $N$-EES begins with the well known Euler system of nonlinear differential equations in three dimensions \cite{MarsdenRatiu}, giving the reduced dynamics of the free rigid body problem (the dynamics of the angular momentum vector ${\bf \Pi}$ in the moving frame) \begin{equation}\label{Euler} \Pi_1'= \alpha_1\,\Pi_2\Pi_3, \,\,\Pi_2'= \alpha_2\,\Pi_1\Pi_3, \, \Pi_3'= \alpha_3\,\Pi_1\Pi_2, \end{equation} such that $\sum\alpha_i=0$, where $\alpha_i$ are functions of the principal moments of inertia.
Associated with (\ref{Euler}), the second fundamental system, known as the {\sl Jacobi system}, is given by \begin{equation}\label{Jacobi} \omega_1'= \omega_2\,\omega_3, \quad\omega_2'= -\omega_1\,\omega_3, \quad \omega_3'= -m\,\omega_1\,\omega_2, \end{equation} with $\omega(0)=(0,1,1)$. The functions solving (\ref{Jacobi}), denoted $\omega_1\equiv {\rm sn}, \,\omega_2\equiv {\rm cn}$ and $\omega_3\equiv {\rm dn}$, are called {\sl Jacobi elliptic functions}. The solutions of (\ref{Euler}) are then given in terms of those functions, using the method of undetermined coefficients. Some readers may find it useful to consult our paper \cite{CrespoFerrer}, where we have studied the {\sl extended Euler system} \begin{equation}\label{sistema3} \omega_1'= \alpha_1\,\omega_2\omega_3, \quad\omega_2'= \alpha_2\,\omega_1\omega_3, \quad \omega_3'= \alpha_3\,\omega_1\omega_2, \end{equation} {\it i.e.} the system (\ref{eq:EESn}) for $N=3$, considering generic values for the coefficients $\alpha_i$ and the initial conditions defining the system.
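For the Jacobi system (\ref{Jacobi}), where $(\alpha_1,\alpha_2,\alpha_3)=(1,-1,-m)$ and $\omega(0)=(0,1,1)$, the first integrals (\ref{eq:integralescuadraticas}) recover the familiar identities among the Jacobi elliptic functions:
$$C_{12}=\omega_1^2+\omega_2^2={\rm sn}^2+{\rm cn}^2=1, \qquad C_{13}=m\,\omega_1^2+\omega_3^2=m\,{\rm sn}^2+{\rm dn}^2=1.$$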
Relying on the work of Tricomi \cite{Tricomi}, Hille \cite{Hille} and Meyer \cite{Meyer} dedicated to system (\ref{Jacobi}), we have shown in a straightforward manner how Jacobi and Weierstrass elliptic functions in the real domain are connected with this system \cite{CrespoFerrer}, although the tradition is to treat them separately because of their intrinsic differences in the complex domain (see for instance Whittaker and Watson \cite{Whittaker} and Lawden \cite{Lawden}). Here we apply the same approach to the system in $N$ dimensions. More precisely, we present the generalization of both types of functions, where the $N$-Weierstrass function is related to the norm of the vector defined by the functions $\omega_i$.
\subsection{Integrals, functions and regularization } \label{sec:regularization}
Moreover, as an alternative to confronting hyperelliptic functions directly, we propose {\sl to experiment} with repa\-ra\-metrizations starting from low dimensions. More precisely, we extend the regularization ${\rm d}v^*=\omega_3{\rm d}v$, already studied for the case $N=3$ by Molero {\it et al.} \cite{Molero}. This way of proceeding seems to be an open line of work; the fact that elliptic and hyperelliptic functions are `naturally' introduced within the context of complex functions may explain why we have not found references. It is the consideration of those functions in a dynamical systems context, in the real domain, that brings the regularization onto the scene. More precisely, we focus on `regularizations' of the type ${\rm d}v^*=g(\omega_i){\rm d}v$, a technique well known in classical fields such as Celestial Mechanics (where it is used for studies ranging from collisions to efficient numerical integration schemes). We will see that the new variable is a generalization of the Jacobi amplitude. This procedure, based on the symmetry of the system, alleviates the manipulation of the hyperelliptic functions involved, which are relegated to only one quadrature (the {\sl regularization equation}), separating it from the geometry (determining how generic this procedure is remains part of our research).
This research has two parts. Part I, which constitutes the present paper, works out in detail the cases $N=4,5$. The key aspect of these cases is that for each IVP we deal with two or three parameters. In Section~\ref{sec:generic} we briefly refer to the equilibria as well as to particular solutions such as the rectilinear ones. After that we fix the dimension, considering the 4-EES case. In Section~\ref{sec:N4ratios} we present a basic feature related to the ratios of the functions. In Section~\ref{sec:Mahler} we focus on a biparametric system, which we dub the {\sl Mahler system}. In Section~\ref{sec:regularizacion} we apply the regularization technique to our system, identifying the new variable as a `generalized amplitude'. In Sect.~\ref{sec:additionformulas} we provide the addition formulas associated with the Mahler system; using them, and extending the work of Bulirsch and Fukushima, we introduce some formulas related to the numerical evaluation of a 4-EES. In Section~\ref{sec:N5} we approach the system for $N=5$, focusing on one of the particular cases and showing its connection with the previous dimension. Finally, as an application, we briefly consider in Section~\ref{sec:FRB} the free rigid body formulated in Andoyer variables.
For the benefit of the reader we include two Appendices which contain properties of the $\theta_i$ and of the Jacobi elliptic functions. There is a Part II, devoted to generic features of (\ref{eq:EESn}) from the geometric point of view, and to the numerical evaluation of the Mahler system, following the steps of Bulirsch and Fukushima. It will be published elsewhere.
We close the Introduction by pointing out that this paper does not contain a complete analysis of the relative role of the parameters involved in the defined functions. Some transformations related to the range of those parameters are required, similar to the well-known transformations for the elliptic modulus of the Jacobi functions. That analysis is still in progress.
\section{Some basic features of $N$-EES} \label{sec:generic}
We have mentioned in the Introduction that our interest in this paper focuses on the study of some systems (\ref{eq:EESn}) of low dimension. Nevertheless, since common features are present in every dimension, it is worth referring briefly to some of them.
\subsection{On particular solutions: equilibria and straight lines through the origin} Before we start our analysis of the IVP, a first question is to identify the equilibria of the system (\ref{eq:EESn}). Denoting by $P=(p_1,p_2,\ldots,p_n)$ an equilibrium point, we easily check that the system has the following set of equilibria: \begin{itemize} \item the origin $P=0\in \mathbb{R}^n$; \item for $n\geq 3$, the points $P_i=(0,\ldots, p_i,\ldots,0)$, \quad $1\leq {i}\leq n$, functions of the initial conditions; \item for $n\geq 4$, the planes $\Pi_{i_1,i_2}=(0,\ldots, p_{i_1},\ldots,p_{i_2},\ldots, 0)$, \quad $1\leq {i_1}<{i_2}\leq n$, functions of the initial conditions; \item for $n\geq 5$, the hyperplanes $$\Pi_{i_1,i_2,\ldots,i_{n-2}}=(0,\ldots, p_{i_1},\ldots,p_{i_2},\ldots, p_{i_{n-2}},\ldots, 0),$$ $1\leq {i_1}<{i_2}<\cdots<{i_{n-2}}\leq n$. \end{itemize} Thus, associated with these equilibrium hyperplanes, we have the study of their invariant manifolds and their connections, generalizing the heteroclinic trajectories in three dimensions; this is beyond the scope of the present paper.
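The common mechanism behind these equilibria is that at any point with at least two vanishing coordinates every product $\prod_{j\neq i}\omega_j$ contains a zero factor. A small numerical sketch (the coefficients below are arbitrary, chosen only for illustration):

```python
# Illustrative check (arbitrary coefficients): points with at least two zero
# coordinates annihilate every product prod_{j != i} w_j of the N-EES.
import numpy as np

def ees_rhs(w, alpha):
    """Right-hand side of the N-EES: w_i' = alpha_i * prod_{j != i} w_j."""
    w = np.asarray(w, dtype=float)
    return np.array([alpha[i] * np.prod(np.delete(w, i)) for i in range(len(w))])

alpha = [1.0, -1.0, 2.0, -0.5]
assert np.allclose(ees_rhs([0, 0, 3, 4], alpha), 0)      # a plane point (n >= 4)
assert np.allclose(ees_rhs([0, 2, 0, 0], alpha), 0)      # an axis point P_2
assert not np.allclose(ees_rhs([0, 1, 1, 1], alpha), 0)  # a single zero is not enough
```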
\par\noindent {\sl Straight lines through the origin}. While in the generic study of the quadratures associated with our system (see Sect.~\ref{sec:quadrature}) an assumption is commonly made, namely that the roots of the polynomials involved are distinct, when considering an IVP we may face a scenario with multiple roots. This is precisely the case with {\sl straight lines through the origin}. Then, instead of requiring the use of special functions, the solutions are expressed by means of {\sl elementary functions}, different for each dimension.
\subsection{Reduction to quadratures: Generalized Weierstrass function} \label{sec:quadrature} Taking into account the integrals (\ref{eq:integralescuadraticas}), and proceeding as in the classic case $N=3$, we may reduce the system to a fundamental differential equation in two forms. The first one, after choosing one of the functions, say $\omega_i$, leads to the differential equation \begin{equation} \big(\frac{{\rm d}\omega_i}{{\rm d} v}\big)^2 = \alpha_i^{3-N}\,\big[\prod_{j\neq i}^N(\alpha_j\omega_i^2+C_i^j)\big], \end{equation} or, by separation, to the corresponding quadrature \begin{equation} \alpha_i^{(3-N)/2}\,v=\int\frac{{\rm d}\omega_i}{[\prod_{j\neq i}^N(\alpha_j\omega_i^2+C_i^j)]^{1/2}}. \end{equation} As an alternative, if we introduce the {\sl square of the norm} \begin{equation}\label{squarenorm} \Omega_N(v)\equiv\omega(v)^2=\sum_{i=1}^N\omega_i(v)^2, \end{equation} after some straightforward computations we obtain \begin{equation}\label{weierstrassquadrature0} \Big(\frac{{\rm d}\Omega_N}{{\rm d} v}\Big)^2 = 4\,\prod_{i=1}^N (\Omega_N-b_i), \quad \sum_{i=1}^N b_i=0, \end{equation} a differential equation whose solution $\Omega_N(v)$ may be seen as a generalized Weierstrass function $\wp(v)$. Following either way we generically confront hyperelliptic integrals.
\subsection{On the normalized $N$-EES} \label{sec:normalized}
Associated with a generic $N$-EES (\ref{eq:EESn}), {\it i.e.} assuming that $\sum\alpha_i\neq 0$, we consider the {\sl square norm} function (\ref{squarenorm}), which satisfies \begin{equation} \frac{{\rm d}\omega}{{\rm d}v}=\Big(\sum_{j=1}^N\alpha_j\Big)\frac{1}{\omega}\prod_{j=1}^N \omega_j. \end{equation} Thus, introducing the functions $$\tilde\omega_i=\frac{\omega_i}{\omega},$$ we have \begin{equation} \frac{{\rm d}\phantom{-}}{{\rm d}v}\Big(\frac{\omega_i}{\omega}\Big)=\Big[\alpha_i\omega^2-\Big(\sum_{j=1}^N\alpha_j\Big)\omega_i^2\Big]\frac{1}{\omega^3}\prod_{j\neq i}^N \omega_j, \end{equation} which may also be written as \begin{equation} \frac{{\rm d}\tilde\omega_i}{{\rm d}v}=c_i\,\prod_{j\neq i}^N \tilde\omega_j\,\omega^{N-4}, \end{equation} where the coefficients \begin{equation} c_i=\alpha_i\omega^2-\Big(\sum_{j=1}^N\alpha_j\Big)\omega_i^2 \end{equation} are integrals of the flow, whose values are determined for each IVP by the initial conditions. In other words, carrying out the reparametrization $v\rightarrow v^*$ given by \begin{equation}\label{newquadrature} {\rm d}v^*=\omega^{N-4}\,{\rm d}v, \end{equation} associated to (\ref{eq:EESn}) we have the {\sl normalized system} \begin{equation}\label{sistemaNnormal} \frac{{\rm d}\tilde\omega_i}{{\rm d}v^*}=c_i\prod_{j\neq i}^N \tilde\omega_j, \end{equation} with initial conditions \begin{equation} \tilde\omega_i(0)=\omega_i(0)/\omega(0), \quad \omega(0)^2=\sum\omega_i(0)^2, \end{equation} \emph{i.e.} the flow (\ref{sistemaNnormal}) lives in $\mathbb{S}^{N-1}$ and, like the differential system satisfied by the angular momentum in 3-D, we have $\sum c_i=0$. Note that dealing with the system (\ref{sistemaNnormal}) rather than with (\ref{eq:EESn}) brings advantages, at least from the numerical point of view.
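The two facts just used, that the $c_i$ are first integrals and that $\sum c_i=0$, can be verified numerically along a trajectory (a sketch; the coefficients, initial conditions and integration interval below are our arbitrary choices):

```python
# Along any solution of the 4-EES, c_i = alpha_i*|w|^2 - (sum alpha_j)*w_i^2
# should stay constant and add up to zero. Coefficients/IC are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

alpha = np.array([1.0, -1.0, 2.0, -0.5])
S = alpha.sum()

def rhs(v, w):
    return [alpha[i] * np.prod(np.delete(np.asarray(w), i)) for i in range(4)]

w0 = np.array([0.2, 1.0, 1.0, 1.0])
sol = solve_ivp(rhs, (0.0, 2.0), w0, dense_output=True, rtol=1e-10, atol=1e-12)

def c(w):
    return alpha * (w @ w) - S * w**2

c0 = c(w0)
assert abs(c0.sum()) < 1e-10                          # sum c_i = 0
for v in np.linspace(0.0, 2.0, 11):
    assert np.allclose(c(sol.sol(v)), c0, atol=1e-6)  # each c_i is a first integral
```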
With (\ref{sistemaNnormal}) integrated we have $\tilde\omega_i=\tilde\omega_i(v^*)$. Then, we still have to carry out the quadrature associated with the regularization (\ref{newquadrature}) in order to recover the relation with the original variable. For instance, from the first integral $c_1=\alpha_1\omega^2-(\sum_j\alpha_j)\,\omega_1^2$ we get $\omega^2=c_1/\big(\alpha_1-(\sum_j\alpha_j)\,\tilde\omega_1^2\big)$, hence \begin{equation} {\rm d}v= \omega^{4-N}\,{\rm d}v^*= \Big(\frac{c_1}{\alpha_1-(\sum_j\alpha_j)\,\tilde\omega_1(v^*)^2}\Big)^{\frac{4-N}{2}}\,{\rm d}v^* \end{equation} whose quadrature gives the parametrization relation, solved generically by numerical methods. Note that the case $N=4$ is special: since $\omega^{N-4}=1$, no regularization is needed.
We will not pursue here the study of the normalized system (\ref{sistemaNnormal}); for details on this analysis we refer to \cite{Crespo2015}.
Let us close this Section by pointing out another basic feature of this system, which we refer to as the {\sl scaling factor}. If the functions $\omega_i(v)$, $(i=1,\ldots, N)$, are a set of solutions, then, taking a constant $c$, the functions $u_i(v)=c\,\omega_i(c^{N-2}v)$ satisfy the same system with the corresponding IC given by $u_i(0)=c\,\omega_i(0)$. We will make use of this property throughout the paper.
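The scaling factor can also be illustrated numerically (a sketch; the coefficients, the constant $c$ and the sample point are our arbitrary choices):

```python
# Scaling property for N = 4: if w solves the system, then
# u(v) = c * w(c^{N-2} v) solves it too, with u(0) = c * w(0).
import numpy as np
from scipy.integrate import solve_ivp

alpha = np.array([1.0, -1.0, 2.0, -0.5])
N = 4

def rhs(v, w):
    return [alpha[i] * np.prod(np.delete(np.asarray(w), i)) for i in range(N)]

c, v1 = 1.3, 0.7
w0 = np.array([0.2, 1.0, 1.0, 1.0])
sol_w = solve_ivp(rhs, (0.0, c**(N - 2) * v1), w0, rtol=1e-11, atol=1e-13)
sol_u = solve_ivp(rhs, (0.0, v1), c * w0, rtol=1e-11, atol=1e-13)
assert np.allclose(sol_u.y[:, -1], c * sol_w.y[:, -1], atol=1e-6)
```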
\section{The case $N=4$. Relying on Jacobi elliptic functions?} \label{sec:N4ratios}
We focus now on the 4-EES case. For each IVP, with some abuse of notation, we refer generically to the solution functions as $\omega_i$. Later, when referring to some specific systems, we will introduce new notations.
At this point, perhaps some readers would like to know the original motivation of our interest in the 4-EES case. The reason is connected with an observation about the classical way in which the study of rigid body dynamics is developed, based on Jacobi elliptic functions. While those functions depend on one parameter (the elliptic modulus), and appear naturally tied to problems like the pendulum or the measure of an arc of ellipse, when we apply them to the rigid body problem we need to consider a second parameter (the {\sl characteristic}, a function of the principal moments of inertia). In other words, the first and third Legendre elliptic integrals are involved. Since Jacobi, the way to proceed has been: (i) to introduce the complementary functions $Z$ and $\Theta$; (ii) to make use of the addition formulas of elliptic functions, dealing with the second parameter as an amplitude, etc. Here we search for an alternative to such an approach, considering a generalization of the Jacobi elliptic functions with two parameters.
Thus, we start with the 4-EES \begin{equation}\label{sistema40} \begin{array}{l} \omega_1'= \alpha_1\,\omega_2\,\omega_3\,\omega_4, \\[1ex] \omega_2'= \alpha_2\,\omega_1\,\omega_3\,\omega_4,\\[1ex] \omega_3'= \alpha_3\,\omega_1\,\omega_2\,\omega_4,\\[1ex] \omega_4'= \alpha_4\,\omega_1\,\omega_2\,\omega_3, \end{array} \end{equation} with given initial conditions $\omega^0$, and the corresponding six quadratic first integrals (\ref{eq:integralescuadraticas}), of which three are independent (Fig.~\ref{fig:GenericSolution} shows a graph of the solution of the system (\ref{sistema40})). Although by scaling and a change of variables we could get rid of two of the coefficients $\alpha_i$, for our purpose it is convenient here to maintain all of them.
\begin{figure}
\caption{\small Graphical solution of the system (\ref{sistema40}) for $\alpha_1=1,\,\alpha_2=-1,\,\alpha_3=2,\,\alpha_4=-0.5$.}
\label{fig:GenericSolution}
\end{figure}
To our surprise, the only reference to (\ref{sistema40}) we have found so far is E. Hille \cite{Hille}, where the case $N=4$ is considered in Chapter 2 (exercises 7, 8 and 9) under the suggestion of K. Mahler. More precisely, he considers the IVP $\omega(0)=(0,1,1,1)$ and coefficients $\alpha_i=(1,-1,-\alpha^2,-\beta^2)$, with both coefficients less than one. He says ``the solutions are hyperelliptic functions of genus 2'', a statement with which we disagree. Finally he mentions that ``the example can be generalized in an obvious manner.''
Thus, our plan is: (i) to study (\ref{sistema40}) as an extension of the case $N=3$, where the Jacobi elliptic functions were defined (note that this represents a drastic reduction in the number of parameters, coefficients and IC, to discuss); (ii) to introduce again regularizations. In order to approach both aspects, apart from its own interest, we think the case $N=4$ is critical in the search for methodologies to follow when dealing with systems of higher dimension, {\it i.e.} in the realm of hyperelliptic integrals.
\subsection{Nested structure and integration by Jacobi elliptic functions}
Extending what we know for the case $N=3$, a basic feature of the $N$-EES is its relation with the system satisfied by the ratios. Referring to this we say the 4-EES has a `nested structure', and we call it the `Glashier Ratios Property'. Moreover, the case $N=4$ calls for a particular study devoted to it: as we will see, for other dimensions a regularization is needed.
\begin{proposition} {\bf (Glashier Ratios Property)} \label{propo:glashier4} Given the functions $\omega_i(v)$ verifying a 4-EES, the functions $\omega_i(v)/\omega_j(v)$ defined by their ratios, $(i,j,k,l)\in {\rm Per}(1,2,3,4)$, satisfy a 3-EES given by \begin{equation}\label{glashier4} \begin{array}{l} \displaystyle{\frac{{\rm d}\phantom{-}}{{\rm d}v}\Big(\frac{\omega_i}{\omega_l}\Big)=C_i^l\,\frac{\omega_j}{\omega_l}\frac{\omega_k}{\omega_l}},\\[1.5ex] \displaystyle{\frac{{\rm d}\phantom{-}}{{\rm d}v}\Big(\frac{\omega_j}{\omega_l}\Big)=C_j^l\,\frac{\omega_i}{\omega_l}\frac{\omega_k}{\omega_l}},\\[1.5ex] \displaystyle{\frac{{\rm d}\phantom{-}}{{\rm d}v}\Big(\frac{\omega_k}{\omega_l}\Big)=C_k^l\,\frac{\omega_i}{\omega_l}\frac{\omega_j}{\omega_l}}, \end{array} \end{equation} with initial conditions $\omega_i(0)/\omega_l(0)$ and coefficients given by the integrals $C_i^l=\alpha_i\omega_l^2-\alpha_l\omega_i^2$. \end{proposition} \begin{proof} It is straightforward, making use of the definition of the 4-EES. \end{proof} \begin{remark} From the previous Proposition \ref{propo:glashier4}, readers familiar with the expressions of the Jacobi elliptic functions, and with their computation by means of the Jacobi theta functions $\theta_i(x)$, may wonder what the relation between those functions and the $\omega_i(v)$ might be. We have gathered some of those systems in an Appendix. In fact, the reader will find in Lawden (Chp.~1) a number of properties of the $\theta_i$ functions which are also satisfied by the $\omega_i$. Perhaps the simple fact that $\theta_1'(0)=\theta_2(0)\theta_3(0)\theta_4(0)$ is satisfied for the 4-EES when we take $\alpha_1=1$ is one of the most surprising. We will come back to this below.
\end{remark} \begin{remark} Note that there is the possibility of taking a slightly different version of the ratios, namely working with $u_j^i=c_j^i\,\omega_i/\omega_j$, with coefficients $c_j^i$ still to be determined, in order to simplify some expressions, adjust constants in applications, etc. We do not follow this alternative in this paper. \end{remark} \begin{proposition} \label{propo:sistema40} For suitable IC, the 4-EES (\ref{sistema40}) has as solution the bounded functions $\omega_i(v)\equiv \omega_i(v;\alpha_i,\omega_i(0))$ given by \begin{eqnarray}
&&\omega_1(v)=\tilde C_1^4\frac{{\rm sn}(a v|m_1)}{\sqrt{1-n_1\,{\rm sn}^2(a v|m_1)}},\label{omega41}\\
&&\omega_2(v)=\tilde C_2^4\frac{{\rm cn}(a v|m_1)}{\sqrt{1-n_1\,{\rm sn}^2(a v|m_1)}},\label{omega42}\\
&&\omega_3(v)=\tilde C_3^4\frac{{\rm dn}(a v|m_1)}{\sqrt{1-n_1\,{\rm sn}^2(a v|m_1)}},\label{omega43}\\
&&\omega_4(v)=\tilde C_4^4\frac{1}{\sqrt{1-n_1\,{\rm sn}^2(a v|m_1)}},\label{omega44} \end{eqnarray}
where ${\rm sn}(a v|m_1)$, etc.\ are the Jacobi elliptic functions, and the constants $\tilde C_i^4$, $a$, $m_1$ and $n_1$ are functions of $\alpha_i$ and $\omega_i(0)$. \end{proposition} \begin{proof} Let us assume IC $\omega^0=(\omega_1^0,\ldots, \omega_4^0)$ such that $\omega_j\neq 0$ in its domain of definition. According to the previous Proposition, we consider the ratios and the reciprocals $1/\omega_j$, which we denote \begin{equation}\label{ratios4} u_i^j=\frac{\omega_i}{\omega_j}, \quad i\neq j, \qquad u_j^j=\frac{1}{\omega_j}, \end{equation} in the domain where $\omega_j$ is defined. Without loss of generality we refer to the case $j=4$, with IC such that $\omega_4>0$. Moreover, we simplify the notation further, writing $u_i^4=u_i$.
Then, according to Proposition \ref{propo:glashier4}, the functions $u_i$, $i=1,2,3$, satisfy the following system \begin{equation}\label{sistema430} \begin{array}{l} u_1'= C_1^4\,u_2\,u_3, \\[1ex] u_2'= C_2^4\,u_3\,u_1,\\[1ex] u_3'= C_3^4\,u_1\,u_2, \end{array} \end{equation} with IC $u_i(0)=u_i^0=\omega_i^0/\omega_4^0$. Moreover, from the first integral \begin{equation} \alpha_1\omega_4^2-\alpha_4\omega_1^2=C_1^4 \end{equation} we may write \begin{equation}\label{u4u1} u_4^2=\frac{1}{C_1^4}(\alpha_1-\alpha_4 u_1^2). \end{equation} Because the functions $u_i$, $i=1,2,3$, satisfy (\ref{sistema430}), they belong to the set of functions defined by the `Jacobi elliptic functions' ${\rm sn}, {\rm cn}, {\rm dn}$ and their ratios. Then, following Crespo and Ferrer \cite{CrespoFerrer}, we know our system corresponds to one of the four possible cases (Glashier systems), depending on the sign of the integrals. Here, to continue our reasoning on the system (\ref{sistema40}), we focus on the case where the sign of $C_1^4$ is different from that of $C_2^4$ and $C_3^4$ (the other cases are treated likewise). This means that the $u_i$, $i=1,2,3$, are of the form \begin{equation}\label{indeterminados} \begin{array}{l} u_1(v)=\delta_1\,{\rm sn}(a v,m_1), \\ u_2(v)=\delta_2\,{\rm cn}(a v,m_1),\\ u_3(v)=\delta_3\,{\rm dn}(a v,m_1). \end{array} \end{equation} Proceeding by the method of undetermined coefficients, replacing (\ref{indeterminados}) in (\ref{sistema430}), we find that the constants $\delta_i$, $a$ and $m_1$ satisfy a system of algebraic equations whose solution is \begin{eqnarray*} &&\delta_2=u_2^0, \quad \delta_3=u_3^0,\quad\delta_1=\sqrt{-\alpha_1/\alpha_2}\delta_2, \\ &&a=\alpha_1\delta_2\delta_3/\delta_1, \quad m_1= \alpha_3\delta_2^2/(\alpha_2\delta_3^2) \end{eqnarray*}
(for details see for instance Lawden \cite{Lawden}, p. 132).
Summarizing, according to (\ref{ratios4}) and (\ref{sistema430}) we have $\omega_i=u_i/u_4$, where $u_i$ (i=1,2,3) are the Jacobi elliptic functions and $u_4$ is given by (\ref{u4u1}). From those expressions, we obtain the functions (\ref{omega41})-(\ref{omega44}), where \begin{equation} \tilde C_4^4=\sqrt{C_1^4/\alpha_1},\quad \tilde C_i^4=\delta_i/\tilde C_4^4,\quad n_1=\alpha_4\delta_1^2/\alpha_1 \end{equation} and, as stated in the Proposition, initial conditions still have to be chosen such that $n_1<1$. \end{proof} Before we continue it is convenient to formulate the previous Proposition in a `complementary form', where we make more transparent the role played by coefficients and initial conditions. \begin{proposition} \label{pro:Jacobisimilar} The functions $\omega_i(v)$, $i=1,\ldots 4$ , given by \begin{equation}\label{soluciones} \begin{array}{l} \displaystyle{\omega_1(v)=\frac{\omega_2(0)\,\omega_3(0)\,\omega_4(0)}{a}\,
\frac{{\rm sn}(av|m_1)}{\sqrt{1+n_1\,{\rm sn}^2 (av| m_1)}}},\\
\displaystyle{\omega_2(v) = \omega_2(0)\,\frac{{\rm cn}(av|m_1)}{\sqrt{1+n_1\,{\rm sn}^2(av|m_1)}}}, \\
\displaystyle{ \omega_3(v)=\omega_3(0)\,\frac{{\rm dn}(av|m_1)}{\sqrt{1+n_1\,{\rm sn}^2(av|m_1)}}},\\
\displaystyle{ \omega_4(v)= \omega_4(0)\,\frac{1}{\sqrt{1+n_1\,{\rm sn}^2(av|m_1)}}} \end{array} \end{equation} satisfy a differential system of the type (\ref{sistema40}) given by \begin{equation}\label{Jacobisimilar0} \begin{array}{l} \displaystyle{\omega_1'= \phantom{-\,}\omega_2\,\omega_3\,\omega_4}, \\ \displaystyle{\omega_2'= - (1+n_1)\frac{a^2}{\omega_3^2(0)\omega_4^2(0)}\,\omega_1\,\omega_3\,\omega_4},\\ \displaystyle{\omega_3'= -(m_1+n_1)\frac{a^2}{\omega_2^2(0)\,\omega_4^2(0)}\,\omega_1\,\omega_2\,\omega_4},\\ \displaystyle{\omega_4'= -n_1\frac{a^2}{\omega_2^2(0)\omega_3^2(0)}\,\omega_1\,\omega_2\,\omega_3}, \end{array} \end{equation} with initial conditions $\omega(0)= (0,\omega_2(0), \omega_3(0),\omega_4(0))$. \end{proposition} \begin{proof} It is a straightforward exercise, computing deri\-va\-ti\-ves. \end{proof}
\begin{remark} In particular, choosing $\omega_i(0)=1$ $(i=2,3,4)$ and $a=1$, together with $n_1=n$ and $m_1=m-n$ in Proposition \ref{pro:Jacobisimilar}, we have the Jacobi elliptic functions \begin{equation*} {\rm sn}(v)=\frac{\omega_1(v)}{\omega_4(v)},\quad {\rm cn}(v)=\frac{\omega_2(v)}{\omega_4(v)},\quad{\rm dn}(v)=\frac{\omega_3(v)}{\omega_4(v)} \end{equation*} with elliptic modulus $m_1=m-n$, where $\omega_i(v;m,n)$ satisfy the system \begin{equation}\label{ratiosJacobi} \begin{array}{l} \displaystyle{\omega_1'= \phantom{-\,}\omega_2\,\omega_3\,\omega_4}, \\ \displaystyle{\omega_2'= -(1+n)\,\omega_1\,\omega_3\,\omega_4},\\ \displaystyle{\omega_3'= -m\,\omega_1\,\omega_2\,\omega_4},\\ \displaystyle{\omega_4'= -n\,\omega_1\,\omega_2\,\omega_3}, \end{array} \end{equation} with integrals \begin{equation} \begin{array}{l} \displaystyle{\omega_2^2+(1+n)\,\omega_1^2=1}, \\[1.1ex] \displaystyle{\omega_3^2+m\,\omega_1^2=1},\\[1.1ex] \displaystyle{\omega_4^2+n\,\omega_1^2=1}. \end{array} \end{equation} If $0<n<m<1$, we have $-1/\sqrt{1+n}\leq \omega_1 \leq 1/\sqrt{1+n}$, $-1\leq \omega_2 \leq 1$, $\sqrt{1-m/(1+n)}\leq \omega_3 \leq 1$ and $\sqrt{1-n/(1+n)}\leq \omega_4 \leq 1$.
We will give no further details on the system (\ref{ratiosJacobi}) in the rest of this paper. \end{remark}
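The claim of the preceding Remark can be checked numerically (a sketch; the values of $m$, $n$ and the sample point are our choices, with $0<n<m<1$):

```python
# Integrating (ratiosJacobi) with coefficients (1, -(1+n), -m, -n) and
# IC (0,1,1,1), the ratios w1/w4, w2/w4, w3/w4 should reproduce
# sn, cn, dn with parameter m1 = m - n.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import ellipj

m, n = 0.7, 0.2

def rhs(v, w):
    w1, w2, w3, w4 = w
    return [w2*w3*w4, -(1 + n)*w1*w3*w4, -m*w1*w2*w4, -n*w1*w2*w3]

v_end = 1.1
sol = solve_ivp(rhs, (0.0, v_end), [0.0, 1.0, 1.0, 1.0],
                rtol=1e-10, atol=1e-12)
w1, w2, w3, w4 = sol.y[:, -1]
sn, cn, dn, _ = ellipj(v_end, m - n)   # elliptic parameter m1 = m - n
assert np.allclose([w1/w4, w2/w4, w3/w4], [sn, cn, dn], atol=1e-6)
```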
\section{Studying two 4-EES systems}
Looking for the generalization of Jacobi elliptic functions, we now focus on two cases of (\ref{sistema40}): \begin{itemize} \item One-parameter ($\theta_i$ similar) family in Sec.~\ref{sec:thetas} and; \item Two-parameter family (Mahler system) in Sec.~\ref{sec:Mahler}. \end{itemize} It is worth noting that the first two equations in both systems (see (\ref{Jacobisimilar1}) and (\ref{sistema4mn})) are equal, with the consequence that one of the integrals is $\omega_1^2+\omega_2^2=1$, which is not the case for the previous system (\ref{ratiosJacobi}).
In relation to both, a comment on notation is due before we continue. In what follows, it is convenient to redefine some of the constants appearing in the previous expressions. More precisely, in Sec.~\ref{sec:thetas} we write $m_1\equiv k^2$, and we will find that $a$ and $n_1$ are functions of $k$. Likewise, in Sec.~\ref{sec:Mahler} we fix all initial conditions and coefficients except two of them, denoted by $-m$ and $-n$. \subsection{One-parameter $\omega_i(v)$ functions, `similar' to Jacobi $\theta_i$ functions} \label{sec:thetas} We look here for functions $\omega_i$, solutions of our differential system (\ref{sistema40}), similar to the Jacobi $\theta_i$ functions. What we mean by that should be made more precise: (i) the coefficients and IC of the 4-EES have to depend on only one parameter: $\alpha_i=\alpha_i(k)$, $\omega_i^0=\omega_i^0(k)$; (ii) moreover, those functions $\omega_i(v;k)$ ought to be found by imposing that they verify properties defining the Jacobi $\theta_i$.
Such a search does not appear straightforward because, we recall, the $\theta_i$ functions are defined as 1-para\-meter Fourier series solving the heat equation. Our way of proceeding will be to take into account those properties of the $\theta_i$ which can be imposed on the differential system: both the ratios and the identities satisfied by the $\theta_i(0)$ are essential for us.
\begin{proposition} {\rm\bf ($\omega_i$: `similar Jacobi $\theta_i$ functions')}
Choosing initial conditions as functions of the elliptic modulus \begin{equation}\label{values1} \omega_1(0)=0,\,\omega_2(0)=\sqrt{a\,k}, \, \omega_3(0)=\sqrt{a},\, \omega_4(0)=\sqrt{a\,k'} \end{equation} together with \begin{equation}\label{values2} a=\frac{2K}{\pi}, \qquad n_1=k'-1, \qquad m_1=k^2 \end{equation} where $k'=\sqrt{1-k^2}$, we may write \begin{equation} \begin{array}{l} v_1(\omega_3^2(0)z) =\displaystyle{\frac{\omega_3(0)}{\omega_2(0)}\, \frac{\omega_1(z)}{\omega_4(z)}},\\[2ex] v_2(\omega_3^2(0)z) =\displaystyle{\frac{\omega_4(0)}{\omega_2(0)}\, \frac{\omega_2(z)}{\omega_4(z)}},\\[2ex] v_3(\omega_3^2(0)z) =\displaystyle{\frac{\omega_4(0)}{\omega_3(0)}\, \frac{\omega_3(z)}{\omega_4(z)}} \end{array} \end{equation} in other words, we express the Jacobi elliptic functions as ratios of the $\omega_i(v)$, in a similar way as Jacobi gave them with respect to the $\theta_i$ functions. \end{proposition} \begin{proof} It is a straightforward exercise replacing the previous values (\ref{values1}) and (\ref{values2}) in Proposition \ref{pro:Jacobisimilar}. The result is that the functions are \begin{equation}\label{funcionesthetasimilar} \begin{array}{l} \displaystyle{\omega_1(z,k) =\sqrt{a\,kk'}\,\frac{{\rm sn}(u)}{\sqrt{1-(1-k')\,{\rm sn}^2(u)}}},\\ \displaystyle{\omega_2(z,k) =\sqrt{a\,k}\,\frac{{\rm cn}(u)}{\sqrt{1-(1-k')\,{\rm sn}^2(u)}}},\\ \displaystyle{\omega_3(z,k) =\sqrt{a}\,\frac{{\rm dn}(u)}{\sqrt{1-(1-k')\,{\rm sn}^2(u)}}},\\ \displaystyle{\omega_4(z,k) =\sqrt{a\,k'}\,\frac{1}{\sqrt{1-(1-k')\,{\rm sn}^2(u)}}}, \end{array} \end{equation} together with $u=az$.
Thus the system (\ref{Jacobisimilar0}) given by \begin{equation}\label{Jacobisimilar1} \begin{array}{l} \displaystyle{\omega_1'= \omega_2\,\omega_3\,\omega_4,} \\ [1.2ex] \displaystyle{\omega_2'= -\omega_3\,\omega_4\,\omega_1},\\[1.2ex] \displaystyle{\omega_3'= -\frac{1-k'}{k}\,\omega_4\,\omega_1\,\omega_2},\\ [1.2ex] \displaystyle{\omega_4'= \frac{1-k'}{k}\,\omega_1\,\omega_2\,\omega_3,} \end{array} \end{equation} with initial conditions (\ref{values1}), is the IVP we were looking for. Fig.~\ref{fig:ThetaSimilarm095} shows an example of a graph of this set of functions. \end{proof}
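A numerical sanity check of this IVP (a sketch; the modulus value, integrator and sample point are our choices): integrating (\ref{Jacobisimilar1}) with the initial conditions (\ref{values1}) should reproduce the closed-form functions (\ref{funcionesthetasimilar}).

```python
# theta_i-similar system: integrate (Jacobisimilar1) with IC (values1) and
# compare with the closed form (funcionesthetasimilar). m = k^2 = 0.95.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import ellipj, ellipk

k = np.sqrt(0.95)               # elliptic modulus
kp = np.sqrt(1.0 - k**2)        # complementary modulus k'
a = 2.0 * ellipk(k**2) / np.pi  # a = 2K/pi (ellipk takes the parameter m = k^2)
q = (1.0 - kp) / k

def rhs(z, w):
    w1, w2, w3, w4 = w
    return [w2*w3*w4, -w1*w3*w4, -q*w1*w2*w4, q*w1*w2*w3]

w0 = [0.0, np.sqrt(a*k), np.sqrt(a), np.sqrt(a*kp)]
z_end = 0.8
sol = solve_ivp(rhs, (0.0, z_end), w0, rtol=1e-10, atol=1e-12)
sn, cn, dn, _ = ellipj(a * z_end, k**2)
den = np.sqrt(1.0 - (1.0 - kp) * sn**2)
closed = [np.sqrt(a*k*kp)*sn/den, np.sqrt(a*k)*cn/den,
          np.sqrt(a)*dn/den, np.sqrt(a*kp)/den]
assert np.allclose(sol.y[:, -1], closed, atol=1e-6)
```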
\begin{figure}
\caption{\small Graph of the $\theta_i$-similar functions for $m=0.95$.}
\label{fig:ThetaSimilarm095}
\end{figure}
It is an exercise to check that the functions (\ref{funcionesthetasimilar}) verify relations identical to the linear combinations satisfied by the squares of the Jacobi $\theta_i$ functions (see Lawden, formulae (1.4.49)--(1.4.52), p.~11).
\subsection{Mahler system. A biparametric 4-EES} \label{sec:Mahler}
As a second distinguished 4-EES we now consider a `biparametric' case we call the {\sl Mahler system}. It is an IVP which defines the functions $\omega_i(v;m,n)$, solutions of (\ref{sistema40}) depending on two parameters, such that \begin{itemize} \item the coefficients are $\alpha=(1,-1,-m,-n)$; \item the initial conditions are $\omega^0=(0,1,1,1)$. \end{itemize} When $n=0$, the $\omega_i(v)$ are the Jacobi elliptic functions and $\omega_4(v)\equiv 1$.
Note that this represents some abuse of notation, because $n$ has already been used to denote the last component of an $N$-dimensional system. Nevertheless, we think it will be clear from the context when $n$ is a coefficient ($n\in \mathbb{R}$) and when, on some occasions, it is used as a counter (an ordinal number: $n\in \mathbb{N}$).
\begin{proposition} {\rm\bf (Mahler system)}\\ \label{propo:sistema4mn} The 4-EES given by \begin{equation}\label{sistema4mn} \begin{array}{l} \omega_1'= \,\omega_2\,\omega_3\,\omega_4, \\ \omega_2'= -\,\omega_1\,\omega_3\,\omega_4,\\ \omega_3'= -m\,\omega_1\,\omega_2\,\omega_4,\\ \omega_4'= -n\,\,\omega_1\,\omega_2\,\omega_3, \end{array} \end{equation} where $n< m<1$, with IC $\omega(0)=(0,1,1,1)$, has the functions \begin{equation}\label{sistema4mnsolution} \begin{array}{l}
\displaystyle{\omega_1=A\,\frac{{\rm sn}(av|m_1)}{\sqrt{1-n_1\,{\rm sn}^2(av|m_1)}}},\\
\displaystyle{\omega_2=\frac{{\rm cn}(av|m_1)}{\sqrt{1-n_1\,{\rm sn}^2(av|m_1)}}},\\
\displaystyle{\omega_3=\frac{{\rm dn}(av|m_1)}{\sqrt{1-n_1\,{\rm sn}^2(av|m_1)}}},\\
\displaystyle{\omega_4=\frac{1}{\sqrt{1-n_1\,{\rm sn}^2(av|m_1)}}}, \end{array} \end{equation} as solution, with values $a,A,m_1,n_1$ given by \begin{equation}\label{valores} \begin{array}{l} a=\sqrt{1-n}, \quad \qquad A= 1/\sqrt{1-n}, \\[1.2ex] \displaystyle{ n_1= \frac{n}{n-1}}, \quad\qquad \displaystyle{m_1= \frac{n-m}{n-1}}. \end{array} \end{equation}
\end{proposition} \begin{proof} Let us consider the system (\ref{sistema4mn}) as an IVP with $\omega(0)= (0, \omega_2(0),\omega_3(0),\omega_4(0))$, $(\omega_i(0)\neq 0, \, i=2,3,4)$, dependent on the two parameters $(m,n)$. It admits as solution the functions \begin{eqnarray*}
&&\tilde\omega_1(v) =A\,\frac{{\rm sn}(av|m_1)}{\sqrt{1-n_1\,{\rm sn}^2(av|m_1)}},\\
&&\tilde\omega_2(v) =\omega_2(0)\,\frac{{\rm cn}(av|m_1)}{\sqrt{1-n_1\,{\rm sn}^2(av|m_1)}},\\
&&\tilde\omega_3(v) =\omega_3(0)\,\frac{{\rm dn}(av|m_1)}{\sqrt{1-n_1\,{\rm sn}^2(av|m_1)}},\\
&&\tilde\omega_4(v) =\omega_4(0)\,\frac{1}{\sqrt{1-n_1\,{\rm sn}^2(av|m_1)}}, \end{eqnarray*} where $a,A,m_1,n_1$ are given by \begin{equation}\label{parametersIC} \begin{array}{l} \displaystyle{a=\omega_3(0)\sqrt{\omega_4^2(0)-n\,\omega_2^2(0)}}, \\ \displaystyle{ A=\frac{\omega_2(0)\,\omega_4(0)}{\sqrt{\omega_4^2(0)-n\,\omega_2^2(0)}}}, \\ \displaystyle{ n_1=\frac{n\,\omega_2^2(0)}{n\,\omega_2^2(0)-\omega_4^2(0)}}, \\ \displaystyle{ m_1=\frac{\omega_2^2(0)\,(n\,\omega_3^2(0) - m\,\omega_4^2(0))}
{\omega_3^2(0)\,(n\,\omega_2^2(0) - \omega_4^2(0))}} \end{array} \end{equation} and the derivatives at the origin satisfy \begin{equation} \tilde\omega_1'(0)= \,\omega_2(0)\,\omega_3(0)\,\omega_4(0),\quad \tilde\omega_i'(0)=0, \end{equation}
where $i=2,3,4$. Then, choosing as IC the quantities $\omega(0)=(0,1,1,1)$ and replacing them in (\ref{parametersIC}), we readily obtain the values (\ref{valores}) for those parameters. \end{proof} \begin{remark} In particular, the case $n=0$ leads to $a=1, \, A= 1, \, n_1= 0$ and $m_1= m$, {\it i.e.}, the Jacobi elliptic functions. We have another special case when $m=0$. As we have assumed $n<m$, in this case $n<0$ and the differential system (\ref{sistema4mn}) corresponds again to a Jacobi system, but now with negative parameter (there is a transformation reducing it to the {\sl normal case}; see Appendix B, Sect.~\ref{sec:Appendices}). For more on particular cases see Section~\ref{sec:particularcases}. We leave it to the reader to work out the other particular cases defined by special values of the pair $(m,n)$. \end{remark}
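A numerical sketch of Proposition \ref{propo:sistema4mn} (the values of $m$, $n$ and the sample point are our choices, with $n<m<1$): integrating the Mahler system and evaluating the closed form (\ref{sistema4mnsolution}) with the constants (\ref{valores}) should give the same result.

```python
# Mahler system vs. its closed-form solution in Jacobi elliptic functions.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import ellipj

m, n = 0.5, 0.2   # illustrative parameters, n < m < 1

def mahler_rhs(v, w):
    w1, w2, w3, w4 = w
    return [w2*w3*w4, -w1*w3*w4, -m*w1*w2*w4, -n*w1*w2*w3]

a  = np.sqrt(1.0 - n)
A  = 1.0 / np.sqrt(1.0 - n)
n1 = n / (n - 1.0)
m1 = (n - m) / (n - 1.0)

v_end = 0.9
sol = solve_ivp(mahler_rhs, (0.0, v_end), [0.0, 1.0, 1.0, 1.0],
                rtol=1e-10, atol=1e-12)
sn, cn, dn, _ = ellipj(a * v_end, m1)
den = np.sqrt(1.0 - n1 * sn**2)
closed = [A*sn/den, cn/den, dn/den, 1.0/den]
assert np.allclose(sol.y[:, -1], closed, atol=1e-6)
```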
\section{Regularization and `generalized amplitudes' for the Mahler system} \label{sec:regularizacion}
We have just solved the system for $N=4$ in the standard way: making use of known functions (the Jacobi elliptic functions). In what follows we proceed making use of the {\sl regularization}. To do that, we start by recalling in Section \ref{sec:omega3} the recent proposal of the authors for $N=3$ (see Molero {\it et al.} \cite{Molero}), which is intrinsically connected with the Jacobi {\sl amplitude}. After that we develop the same approach for the $N=4$ case. That proposal entails studying at least two possible regularizations $v\rightarrow v^*$ given by \begin{itemize} \item ${\rm d}v^*/{\rm d}v=\omega_4$, \item ${\rm d}v^*/{\rm d}v=\omega_3\,\omega_4$, \end{itemize} which we gather in Sections \ref{sec:omega4} and \ref{sec:omega34}, proceeding one by one. But first, we recall in Sect.~\ref{sec:omega3} how this has been done for the 3-EES.
\subsection{Preliminaries: 3-EES and regularization} \label{sec:omega3}
Let us consider the 3-EES (\ref{sistema3}) with initial conditions $\omega^0\equiv \omega(0)=(\omega_1(0),\omega_2(0),\omega_3(0))$, whose values we choose below. This system has the integrals \begin{equation}\label{integrales3} \alpha_1\omega_2^2-\alpha_2\,\omega_1^2=C_1^2, \qquad \alpha_1\omega_3^2-\alpha_3\,\omega_1^2=C_1^3. \end{equation} Let us assume $\alpha_i$ and IC such that $\omega_3(v)> 0$. Then, making use of the parametrization \begin{equation}\label{regularizacion} \frac{{\rm d}v^*}{{\rm d}v}=\omega_3, \end{equation} the system (\ref{sistema3}) reduces to \begin{equation}\label{sistema2} \frac{{\rm d}\omega_1}{{\rm d}v^*}= \alpha_1\,\omega_2, \qquad \frac{{\rm d}\omega_2}{{\rm d}v^*}= \alpha_2\,\omega_1, \end{equation} together with the quadrature defined by (\ref{regularizacion}). Choosing the coefficients $\alpha_1=1$, $\alpha_2=-1$ and IC $(\omega_1(0),\omega_2(0))=(0,1)$, the system (\ref{sistema2}) defines the trigonometric (circular) functions \begin{equation}\label{circulares} \sin(v^*), \qquad \cos(v^*) \end{equation} (with other conditions, a change of variables reduces the system to this case). Then, keeping in mind (\ref{integrales3}), the regularization (\ref{regularizacion}) takes the form \begin{equation}\label{regularizacion1} \frac{{\rm d}v^*}{{\rm d}v}=\sqrt{C_1^3+\alpha_3\,\omega_1^2}. \end{equation} Motivated by the dynamical system defining the {\sl simple pendulum}\footnote{This leads us to an interpretation of the regularization: $v\equiv t$ and $v^*\equiv \phi$, in other words `time' and `angle'. The angle is taken in the 1-2 plane; the arc is associated with the integral $\omega_1^2+\omega_2^2=1$, a circle, which is the projection of the integral surface, a cylinder.}, we choose $\omega_3(0)=1$ together with $\alpha_3=-k^2$, where $k^2<1$. 
Thus, replacing in (\ref{regularizacion1}) we have \begin{equation} {\rm d}v = \frac{{\rm d}v^*}{\sqrt{1-k^2\sin^2v^*}}, \end{equation} whose quadrature and inversion lead us to the Jacobi ``${\rm am}$'' function: \begin{equation} v^*= {\rm am}(v,k). \end{equation} Finally, replacing in (\ref{circulares}) we have the Jacobi functions \begin{equation}\label{elipticasJacobi} \begin{array}{l} \sin(v^*(v))=\sin({\rm am}(v,k)), \\ \cos(v^*(v))=\cos({\rm am}(v,k)), \end{array} \end{equation} which today, following Gudermann, are denoted in the form \begin{equation*}\label{elipticasJacobi1} \begin{array}{l} {\rm sn}(v ;\, k)\equiv\sin({\rm am}(v,k)), \\ {\rm cn}(v ;\, k)\equiv\cos({\rm am}(v,k)). \end{array} \end{equation*} Completing our set of functions, $\omega_3$ is given by \begin{equation} \omega_3(v)\equiv {\rm dn}(v ;\, k)=\sqrt{1-k^2{\rm sn}^2(v ;\, k)}. \end{equation} Summarizing, using the previous notation, the integrals
(\ref{integrales3}) lead us to the well known expressions relating these functions \begin{equation} {\rm sn}^2+{\rm cn}^2 =1, \qquad {\rm dn}^2+ k^2{\rm sn}^2 = 1. \end{equation} Finally, replacing in (\ref{sistema3}) we write what some authors refer to as the ``{\sl derivation rules}'' of the Jacobi functions: \begin{equation}\label{derivadas} {\rm sn}'= {\rm cn}\,{\rm dn}, \quad {\rm cn}'= -{\rm sn}\,{\rm dn},\quad {\rm dn}'= -k^2{\rm sn}\,{\rm cn}. \end{equation}
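These derivation rules can be checked numerically. The sketch below (Python, standard library only; the hand-rolled RK4 integrator and the step size are our own illustrative choices, not part of the text) integrates the system (\ref{derivadas}) from the IC $(0,1,1)$ and verifies both integrals, as well as the circular limit $k=0$.

```python
import math

def jacobi_rk4(v, k2, h=1e-3):
    """Integrate sn' = cn dn, cn' = -sn dn, dn' = -k2 sn cn
    (the derivation rules above) with IC (0, 1, 1) by classical RK4."""
    def f(w):
        s, c, d = w
        return (c * d, -s * d, -k2 * s * c)
    steps = max(1, round(abs(v) / h))
    h = v / steps
    w = (0.0, 1.0, 1.0)
    for _ in range(steps):
        a = f(w)
        b = f(tuple(w[i] + 0.5 * h * a[i] for i in range(3)))
        c_ = f(tuple(w[i] + 0.5 * h * b[i] for i in range(3)))
        d_ = f(tuple(w[i] + h * c_[i] for i in range(3)))
        w = tuple(w[i] + h / 6.0 * (a[i] + 2 * b[i] + 2 * c_[i] + d_[i])
                  for i in range(3))
    return w

sn, cn, dn = jacobi_rk4(1.2, k2=0.5)
assert abs(sn**2 + cn**2 - 1) < 1e-9          # sn^2 + cn^2 = 1
assert abs(dn**2 + 0.5 * sn**2 - 1) < 1e-9    # dn^2 + k^2 sn^2 = 1
s0, c0, _ = jacobi_rk4(1.2, k2=0.0)           # k = 0: circular limit
assert abs(s0 - math.sin(1.2)) < 1e-9
```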
\subsection{The ${\rm d}v^*/{\rm d}v=\omega_4$ regularization.} \label{sec:omega4} Proceeding as in the previous Section, we treat now the case $N=4$ by means of the regularization \begin{equation}\label{regularizacion4} \frac{{\rm d}v^*}{{\rm d}v}=\omega_4. \end{equation} \begin{remark} Recall the earlier comment on notation: although there is some abuse in reusing $v^*$ to denote the new independent parameter, the context distinguishes it from the one studied in the previous Section. \end{remark} As a consequence the system (\ref{sistema40}) is reduced to \begin{equation*} \frac{{\rm d}\omega_1}{{\rm d}v^*}= \alpha_1\,\omega_2\,\omega_3, \quad \frac{{\rm d}\omega_2}{{\rm d}v^*}= \alpha_2\,\omega_1\,\omega_3,\quad \frac{{\rm d}\omega_3}{{\rm d}v^*}= \alpha_3\,\omega_1\,\omega_2, \end{equation*} while $\omega_4(v^*)$ will be obtained from one of the integrals once the previous system has been solved.
We focus on the case $\alpha_1=1$, $\alpha_2=-1$ and $\alpha_3=-m$ because, as we said before, we plan to generalize the Jacobi elliptic functions. Thus, we have \begin{equation} \omega_1={\rm sn}(v^*;m_1), \, \omega_2={\rm cn}(v^*;m_1),\, \omega_3={\rm dn}(v^*;m_1) \end{equation} and for the differential relation (\ref{regularizacion4}), using the integral $n\,\omega_1^2 + \omega_4^2=C_1^4$ and the initial conditions, we may write \begin{equation}\label{cambioeliptico0} v=\int\frac{{\rm d}v^*}{\sqrt{1 - n_1\,{\rm sn}^2(v^*;m_1)}}. \end{equation}
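The quadrature (\ref{cambioeliptico0}) can be evaluated numerically alongside the Jacobi system itself, by carrying the integrand as an extra state of the RK4 integration. A minimal Python sketch (the function name and the step count are our own illustrative choices):

```python
import math

def v_of_vstar(vstar, m1, n1, steps=2000):
    """Quadrature v = int_0^{v*} dtau / sqrt(1 - n1 sn(tau; m1)^2),
    accumulated while the Jacobi system is integrated by RK4."""
    def f(w):
        s, c, d, _ = w
        g = 1.0 / math.sqrt(1.0 - n1 * s * s)
        return (c * d, -s * d, -m1 * s * c, g)
    w = (0.0, 1.0, 1.0, 0.0)
    h = vstar / steps
    for _ in range(steps):
        a = f(w)
        b = f(tuple(w[i] + 0.5 * h * a[i] for i in range(4)))
        c_ = f(tuple(w[i] + 0.5 * h * b[i] for i in range(4)))
        d_ = f(tuple(w[i] + h * c_[i] for i in range(4)))
        w = tuple(w[i] + h / 6.0 * (a[i] + 2 * b[i] + 2 * c_[i] + d_[i])
                  for i in range(4))
    return w[3]

# With n1 = 0 the integrand is identically 1, so v = v*:
assert abs(v_of_vstar(1.0, m1=0.4, n1=0.0) - 1.0) < 1e-9
# With n1 > 0 the integrand exceeds 1, so v > v*:
assert v_of_vstar(1.0, m1=0.4, n1=0.3) > 1.0
```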
\subsection{$N=4$. The regularization ${\rm d}v^*/{\rm d}v=\omega_3\,\omega_4$.} \label{sec:omega34}
Proceeding in the same way as for $N=3$, we treat now the case $N=4$ by means of the regularization \begin{equation}\label{regularizacion34} \frac{{\rm d}v^*}{{\rm d}v}=\omega_3\,\omega_4. \end{equation} As a consequence the system (\ref{sistema40}) reduces to \begin{equation}\label{generic2} \frac{{\rm d}\omega_1}{{\rm d}v^*}= \alpha_1\,\omega_2, \quad \frac{{\rm d}\omega_2}{{\rm d}v^*}= \alpha_2\,\omega_1, \end{equation} and two quadratures associated to $\omega_3$ and $\omega_4$. In fact, they are not needed because the integrals give us $$\omega_i^2=C_1^i-\alpha_i\omega_1^2,\quad (i=3,4).$$ Note that $C_1^i$ are constants which depend on the initial conditions.
Without loss of generality we will assume our system is made of bounded functions. Then, by a change of variables, our system (\ref{generic2}) reduces to the case $\alpha_1=1,\, \alpha_2=-1$, so we obtain \begin{equation}\label{trig4} \omega_1(v^*)=\sin v^*, \qquad \omega_2(v^*)=\cos v^*. \end{equation} Considering the previous integrals we may write (\ref{regularizacion34}) as follows \begin{equation} {\rm d}v = \frac{{\rm d}v^*}{\sqrt{\prod_{i=3}^4 (C_1^i-\alpha_i\sin^2 v^*)}} \end{equation} or in a slightly different form \begin{equation}\label{cuadratura2} \lambda{\rm d}v = \frac{{\rm d}v^*}{\sqrt{(1-\beta_1\sin^2 v^*)(1-\beta_2\sin^2 v^*)}} \end{equation} where $\beta_i$ and $\lambda$ are functions of $C_1^i$ and $\alpha_i$.
In what follows, with the Mahler system in mind as the basic 4-EES, it is convenient to adopt the associated notation: $$\beta_1 \equiv n,\qquad \beta_2 \equiv m,\qquad \lambda\equiv 1.$$ In other words, the differential relation (\ref{cuadratura2}) reads \begin{equation}\label{cuadraturadiferencial34} {\rm d}v = \frac{{\rm d}v^*}{\sqrt{(1-n\sin^2 v^*)(1-m\sin^2 v^*)}}. \end{equation} The quadrature takes the form \begin{equation} v=G(v^*,n,m)=\int_0^{v^*}\!\!\!\frac{d\vartheta}{\sqrt{(1-n\sin^2\vartheta)(1-m\sin^2\vartheta)}}. \end{equation} Thus, we define the period as the two-parameter function \begin{equation} G(\pi/2,n,m)=\int_0^{\pi/2}\frac{d\vartheta}{\sqrt{(1-n\sin^2\vartheta)(1-m\sin^2\vartheta)}}. \end{equation} In particular, when $(n,m)=(0,0)$ we have $G(0,0)=\pi/2$, and when $(n,m)=(1,1)$ we have $G(1,1)=\infty$.
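As a sanity check of the limiting values just quoted, the period can be evaluated by a composite Simpson rule. A short Python sketch (the function name `G` and the node count are illustrative choices of ours):

```python
import math

def G(vstar, n, m, N=2000):
    """Simpson evaluation of
    G(v*, n, m) = int_0^{v*} dtheta / sqrt((1 - n sin^2)(1 - m sin^2))."""
    def f(t):
        s2 = math.sin(t) ** 2
        return 1.0 / math.sqrt((1 - n * s2) * (1 - m * s2))
    h = vstar / N                      # N must be even for Simpson
    total = f(0.0) + f(vstar)
    for i in range(1, N):
        total += (4 if i % 2 else 2) * f(i * h)
    return total * h / 3

# Circular limit: G(pi/2, 0, 0) = pi/2 ...
assert abs(G(math.pi / 2, 0, 0) - math.pi / 2) < 1e-10
# ... and the period grows with either parameter:
assert G(math.pi / 2, 0.3, 0.6) > G(math.pi / 2, 0.0, 0.6) > math.pi / 2
```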
When $m$ and $n$ are small, carrying out the Taylor expansion of the integrand and evaluating the resulting quadratures, the period $G(n,m)\equiv G(\pi/2,n,m)$ may be approximated in the form \begin{eqnarray*} && G(n,m)=\\ &&\hspace{0.2cm}\frac{\pi}{2}\Big[1 +\frac{m}{4} + \frac{9 m^2}{64} + \frac{25 m^3}{256} + \frac{1225 m^4}{16384} \\ &&\hspace{0.7cm}+\frac{n}{4}\Big(1 +\frac{3m}{8} + \frac{15 m^2}{64} + \frac{175 m^3}{1024} + \frac{2205 m^4}{16384}\Big) \\ &&\hspace{0.7cm}+\frac{9n^2}{64}\Big(1 +\frac{5m}{12} + \frac{35 m^2}{128} + \frac{105 m^3}{512} + \frac{2695 m^4}{16384}\Big) \\ &&\hspace{0.7cm}+ \frac{25n^3}{256} \Big(1 + \frac{7 m }{16} +\frac{189 m^2 }{640} + \frac{231 m^3 }{1024} + \frac{3003 m^4 }{16384}\Big) \\
&&\hspace{0.7cm}+\frac{1225n^4}{16384}\Big(1+ \frac{9 m}{20} + \frac{99 m^2}{320}+ \frac{429 m^3}{1792} + \frac{6435 m^4}{32768}\Big)\Big]\\
&&\hspace{0.7cm}+\,\rm{h.o.t.} \end{eqnarray*} The previous expression may be rewritten in a form that makes its symmetric character with respect to $m$ and $n$ more explicit.
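A quick way to validate the truncated expansion, and its symmetry in $m$ and $n$, is to compare it against a direct numerical quadrature of the period for small parameters. A Python sketch (tolerances are illustrative and sized to the truncation error):

```python
import math

def G_series(n, m):
    """Truncated double expansion above (terms through degree 4 in m, n)."""
    return (math.pi / 2) * (
        1 + m/4 + 9*m**2/64 + 25*m**3/256 + 1225*m**4/16384
        + (n/4) * (1 + 3*m/8 + 15*m**2/64 + 175*m**3/1024 + 2205*m**4/16384)
        + (9*n**2/64) * (1 + 5*m/12 + 35*m**2/128 + 105*m**3/512
                         + 2695*m**4/16384)
        + (25*n**3/256) * (1 + 7*m/16 + 189*m**2/640 + 231*m**3/1024
                           + 3003*m**4/16384)
        + (1225*n**4/16384) * (1 + 9*m/20 + 99*m**2/320 + 429*m**3/1792
                               + 6435*m**4/32768))

def G_quad(n, m, N=2000):
    """Simpson evaluation of the period G(pi/2, n, m)."""
    def f(t):
        s2 = math.sin(t) ** 2
        return 1.0 / math.sqrt((1 - n * s2) * (1 - m * s2))
    h = (math.pi / 2) / N
    total = f(0.0) + f(math.pi / 2)
    for i in range(1, N):
        total += (4 if i % 2 else 2) * f(i * h)
    return total * h / 3

assert abs(G_series(0.05, 0.05) - G_quad(0.05, 0.05)) < 1e-6
# the truncated series is symmetric in (m, n):
assert abs(G_series(0.03, 0.08) - G_series(0.08, 0.03)) < 1e-12
```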
Now we define the {\sl generalized amplitude} {\rm amg} as the inverse function \begin{equation} v^*= {\rm amg}(v; n,m). \end{equation} Thus, considering the expressions (\ref{trig4}), we have \begin{equation} \sin\, v^*= \sin\,{\rm amg}(v,n,m)\equiv {\rm sng}(v,n,m) \end{equation} and \begin{equation} \cos\, v^*= \cos\,{\rm amg}(v,n,m)\equiv {\rm cng}(v,n,m). \end{equation} \par\noindent $\bullet$ There is an alternative way of proceeding. The change of variable $\sin \vartheta=x$ allows us to follow the steps of Jacobi for the case $N=3$. Then, the differential relation (\ref{cuadraturadiferencial34}) takes the form \begin{equation}\label{cuadratura22} {\rm d}v = \frac{{\rm d}x}{ \sqrt{(1-x^2)(1-n\,x^2) (1-m\,x^2) }} \end{equation} or, inverting the expression, \begin{equation}\label{ed1}
\frac{{\rm d}x}{{\rm d}v}=\sqrt{(1-x^2)(1-n\,x^2)(1-m\,x^2)}. \end{equation} In other words, we define the function ${\rm sng}$ \begin{equation}\label{sng} x=x(v;n,m)={\rm sng}(v;n,m) \end{equation} as the two-parameter function (whose range is made more precise below), solution of the differential equation \begin{equation}
\Big(\frac{{\rm d}x}{{\rm d}v}\Big)^2 = (1-x^2)(1-n\,x^2)(1-m\,x^2). \end{equation} In this paper we restrict ourselves to the range $ n\leq m\leq 1$.
\par\noindent $\bullet$ Then, associated with ${\rm sng}$ we propose the following functions \begin{eqnarray} &&{\rm cng}(v;n,m)=\pm\sqrt{1-{\rm sng}^2(v;n,m)},\label{cng}\\[1.2ex] &&{\rm dng}(v;n,m)=\sqrt{1-m\,{\rm sng}^2(v;n,m)},\label{dng}\\[1.2ex] &&\,{\rm fng}(v;n,m)=\sqrt{1-n\,{\rm sng}^2(v;n,m)}.\label{fng} \end{eqnarray} To simplify notation we will write ${\rm sng}(v;n,m)\equiv {\rm sng}$, etc. Examples of the graph of these new functions can be seen in Figs.~\ref{fig:Mahlern01m08} and \ref{fig:Mahlernmenos2m05}.
\begin{figure}
\caption{\small Mahler functions for $n=0.1$, $m=0.8$.}
\label{fig:SeccionesEenergia}
\label{fig:Mahlern01m08}
\end{figure}
\begin{figure}
\caption{\small Mahler functions for $n=-2$, $m=0.5$.}
\label{fig:Mahlernmenos2m05}
\end{figure}
Due to the process we have followed, we immediately check that these functions ${\rm sng}$, etc verify the following IVP \begin{equation}\label{sistemasng} \begin{array}{l}
\displaystyle{\frac{{\rm d}\,{\rm sng}}{{\rm d}v}= {\rm cng}\,{\rm dng}\,{\rm fng}}, \\[1.5ex]
\displaystyle{\frac{{\rm d}\,{\rm cng}}{{\rm d}v}= -{\rm sng}\,{\rm dng}\,{\rm fng}},\\[1.5ex]
\displaystyle{\frac{{\rm d}\,{\rm dng}}{{\rm d}v}= -m\,{\rm sng}\,{\rm cng}\,{\rm fng}},\\[1.5ex]
\displaystyle{\frac{{\rm d}\,{\rm fng}}{{\rm d}v}= -n\,{\rm sng}\,{\rm cng}\,{\rm dng}}, \end{array} \end{equation} with initial conditions $(0,1,1,1)$. The integrals, as we have mentioned before, lead to the following expressions \begin{equation} {\rm cng}^2+{\rm sng}^2=1, \quad {\rm dng}^2+m\,{\rm sng}^2=1, \quad {\rm fng}^2+n\,{\rm sng}^2=1. \end{equation}
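The IVP above and its integrals can be verified numerically. The Python sketch below integrates the system with a hand-rolled RK4 from the IC $(0,1,1,1)$, taking the parameter values of Fig.~\ref{fig:Mahlern01m08}; step counts and tolerances are our own illustrative choices.

```python
import math

def mahler4(v, n, m, steps=4000):
    """RK4 integration of the 4-Mahler IVP (sng, cng, dng, fng)
    with initial conditions (0, 1, 1, 1)."""
    def f(w):
        s, c, d, g = w
        return (c*d*g, -s*d*g, -m*s*c*g, -n*s*c*d)
    w = (0.0, 1.0, 1.0, 1.0)
    h = v / steps
    for _ in range(steps):
        a = f(w)
        b = f(tuple(w[i] + 0.5*h*a[i] for i in range(4)))
        c_ = f(tuple(w[i] + 0.5*h*b[i] for i in range(4)))
        d_ = f(tuple(w[i] + h*c_[i] for i in range(4)))
        w = tuple(w[i] + h/6.0*(a[i] + 2*b[i] + 2*c_[i] + d_[i])
                  for i in range(4))
    return w

s, c, d, g = mahler4(1.5, n=0.1, m=0.8)
assert abs(s*s + c*c - 1) < 1e-9        # cng^2 + sng^2 = 1
assert abs(d*d + 0.8*s*s - 1) < 1e-9    # dng^2 + m sng^2 = 1
assert abs(g*g + 0.1*s*s - 1) < 1e-9    # fng^2 + n sng^2 = 1
```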
Thus, from the functions solution of the Mahler system, the Jacobi functions are given by \begin{equation}\label{JacobiMahler} \begin{array}{l} \displaystyle{{\rm sn}(av;m_1)=\frac{1}{A}\,\frac{{\rm sng}(v;m,n)}{{\rm fng}(v;m,n)}}, \\[2ex] \displaystyle{{\rm cn}(av;m_1)= \frac{{\rm cng}(v;m,n)}{{\rm fng}(v;m,n)},} \\[2ex] \displaystyle{{\rm dn}(av;m_1)=\frac{{\rm dng}(v;m,n)}{{\rm fng}(v;m,n)}, } \end{array} \end{equation} \noindent $\bullet$ {\sl Taylor expansions of ${\rm sng},\, {\rm cng},\, {\rm dng}$ and ${\rm fng}$ near the origin. }
\par\noindent As a direct application of the definition of those functions by the differential system (\ref{sistemasng}), we may easily compute to any order the Taylor expansion of the previous functions: \begin{equation}\label{algoritmo2} \begin{array}{l} \displaystyle{{\rm sng}(v)= v - \frac{1 + m + n}{6}\,v^3}\\[1.5ex] \displaystyle{\hspace{0.5cm}+\frac{1 + 14(m + n + m n)+ m^2+ n^2}{120}\,v^5 +\ldots}\\[1.5ex] \displaystyle{{\rm cng}(v)= 1 - \frac{1}{2}\,v^2+\frac{1 + 4 m + 4 n}{24}\,v^4}\\[1.5ex] \displaystyle{\hspace{0.5cm}-\frac{1 + 44 (m +n)+ 16 m^2 + 104 m n + 16n^2}{720}\,v^6 +\ldots}\\[1.5ex] \displaystyle{{\rm dng}(v)= 1 - \frac{m}{2}\,v^2+\frac{m(4 + m + 4 n)}{24}\,v^4}\\ \displaystyle{\hspace{0.5cm}-\frac{m(16 + 44 m + m^2 + 104 n + 44 m n + 16 n^2)}{720}\,v^6+\ldots}\\[1.5ex] \displaystyle{{\rm fng}(v)= 1 - \frac{n}{2}\,v^2+\frac{n(4 + n + 4 m )}{24}\,v^4}\\[1.5ex] \displaystyle{\hspace{0.5cm}-\frac{n(16 + 44 n + n^2+ 104 m + 44 m n + 16 m^2)}{720}\,v^6+\ldots} \end{array} \end{equation} \begin{remark} The interest of these expansions is connected with the computation of these functions. By extension of the process followed by Bulirsch and Fukushima computing Jacobi elliptic functions (see Appendix). Nevertheless, there is still work to be done comparing that scheme with the possible advantages of using regularization. \end{remark}
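The expansions above can be checked against a direct numerical integration of the defining system for small $v$. A Python sketch (RK4; the tolerance is sized to the truncation error of the series):

```python
import math

def mahler4(v, n, m, steps=2000):
    """RK4 integration of the 4-Mahler IVP with IC (0, 1, 1, 1)."""
    def f(w):
        s, c, d, g = w
        return (c*d*g, -s*d*g, -m*s*c*g, -n*s*c*d)
    w = (0.0, 1.0, 1.0, 1.0)
    h = v / steps
    for _ in range(steps):
        a = f(w)
        b = f(tuple(w[i] + 0.5*h*a[i] for i in range(4)))
        c_ = f(tuple(w[i] + 0.5*h*b[i] for i in range(4)))
        d_ = f(tuple(w[i] + h*c_[i] for i in range(4)))
        w = tuple(w[i] + h/6.0*(a[i] + 2*b[i] + 2*c_[i] + d_[i])
                  for i in range(4))
    return w

n, m, v = 0.2, 0.7, 0.1
# Taylor polynomials quoted above, truncated at the orders shown:
sng_taylor = v - (1 + m + n)*v**3/6 \
    + (1 + 14*(m + n + m*n) + m*m + n*n)*v**5/120
cng_taylor = 1 - v*v/2 + (1 + 4*m + 4*n)*v**4/24
s, c, d, g = mahler4(v, n, m)
assert abs(s - sng_taylor) < 5e-7
assert abs(c - cng_taylor) < 5e-7
```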
\subsection{Particular cases} \label{sec:particularcases} \begin{description} \item $\bullet$ $n=0$. In this case, due to the choice of the initial conditions, we have ${\rm fng}(v)\equiv 1$. Moreover we have ${\rm sng}(v;0,m)= {\rm sn}(v,m)$, etc., {\it i.e.} the Jacobi elliptic functions with elliptic modulus $m$. \item $\bullet$ $m=0$. Here, based on the initial conditions, we have ${\rm dng}(v)\equiv 1$. Moreover ${\rm sng}(v;n,0)= {\rm sn}(v,n)$, etc., {\it i.e.} the Jacobi elliptic functions with elliptic modulus $n$ (which is negative, so a transformation is still needed; see (\ref{valores}) leading to $m_1$).
\item $\bullet$ $m=1$. In this case the differential equation is
\begin{equation}
\frac{{\rm d}x}{{\rm d}v} = (1-x^2)\sqrt{1-n\,x^2}. \end{equation} For this quadrature we obtain {\small \begin{equation} v=\frac{1}{2\sqrt{1-n}}{\rm ln}\frac{(1+x)}{(1-x)}\frac{(1-n x + \sqrt{(1-n)(1-n x^2)})}{(1+ n x + \sqrt{(1-n)(1-n x^2)})}, \end{equation}} whose inversion is possible because the map is injective.
\item $\bullet$ $m=n$. Now the differential equation is \begin{equation} \frac{{\rm d}x}{{\rm d}v} = (1-m x^2)\sqrt{1-x^2}. \end{equation} We obtain \begin{equation} v=\frac{1}{\sqrt{1-m}} \arctan\Big(\sqrt{1-m}\,\frac{x}{\sqrt{1-x^2}}\Big). \end{equation} Again, the inversion is possible because the map is injective:
\begin{equation} \tan(\sqrt{1-m}\,v) =\sqrt{1-m}\frac{x}{\sqrt{1-x^2}}. \end{equation} More precisely, we have \begin{equation} x=\frac{\tan(\sqrt{1-m}\,v)}{\sqrt{1-m+\tan^2(\sqrt{1-m}\,v)}}. \end{equation}
Graphical examples for $n=m$ can be seen in Figs.~\ref{fig:Mahlernm05} and \ref{fig:Mahlernm095}.
\begin{figure}
\caption{\small Mahler functions for $m=n=0.5$.}
\label{fig:Mahlernm05}
\end{figure}
\begin{figure}
\caption{\small Mahler functions for $m=n=0.95$.}
\label{fig:Mahlernm095}
\end{figure}
\item $\bullet$ $m=n=1$. In this case \begin{equation} v=\frac{x}{\sqrt{1-x^2}} \end{equation} and finally, after inversion, we obtain \begin{equation} x=\frac{v}{\sqrt{1+v^2}}. \end{equation}
\item $\bullet$ $m=n=0$. In this case we recover the circular functions.
\item There are other particular cases related to unbounded trajectories, such as straight lines, which are expressed by elementary functions. This requires the signs of the coefficients to be the same, something we have excluded when choosing our system. \end{description}
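The closed forms obtained for the particular cases $m=n$ and $m=n=1$ can be compared with a direct RK4 integration of the defining 4-Mahler system. A Python sketch (step counts and tolerances are illustrative):

```python
import math

def mahler4(v, n, m, steps=4000):
    """RK4 integration of the 4-Mahler IVP with IC (0, 1, 1, 1)."""
    def f(w):
        s, c, d, g = w
        return (c*d*g, -s*d*g, -m*s*c*g, -n*s*c*d)
    w = (0.0, 1.0, 1.0, 1.0)
    h = v / steps
    for _ in range(steps):
        a = f(w)
        b = f(tuple(w[i] + 0.5*h*a[i] for i in range(4)))
        c_ = f(tuple(w[i] + 0.5*h*b[i] for i in range(4)))
        d_ = f(tuple(w[i] + h*c_[i] for i in range(4)))
        w = tuple(w[i] + h/6.0*(a[i] + 2*b[i] + 2*c_[i] + d_[i])
                  for i in range(4))
    return w

# m = n = 0.5: closed form x = tan(sqrt(1-m) v)/sqrt(1 - m + tan^2(...))
v, m = 1.0, 0.5
t = math.tan(math.sqrt(1 - m) * v)
closed = t / math.sqrt(1 - m + t * t)
assert abs(mahler4(v, m, m)[0] - closed) < 1e-8

# m = n = 1: closed form x = v / sqrt(1 + v^2)
assert abs(mahler4(1.0, 1.0, 1.0)[0] - 1.0 / math.sqrt(2.0)) < 1e-8
```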
\section{Addition formulas} \label{sec:additionformulas} To lighten the notation, we introduce the following convention: \begin{equation*} \begin{array}{l} \rm{sng}(a\,x;m,n)=\rm{s}_{ax},\quad\rm{cng}(a\,x;m,n)=\rm{c}_{ax},\\ \rm{dng}(a\,x;m,n)=\rm{d}_{ax},\quad\rm{fng}(a\,x;m,n)=\rm{f}_{ax}. \end{array} \end{equation*} \begin{theorem}[Addition-Subtraction formulae for the 4-Mahler functions] The addition and subtraction formulae for the 4-Mahler functions are given next.
\begin{eqnarray} &&\rm{sng}(x\pm y;m,n)=\label{eq:AdditionSubtraction4Mahler}\\ &&\dfrac{A\,(\rm{s}_{ax}\rm{c}_{ay}\rm{d}_{ay}\rm{f}_{ax}\pm \rm{s}_{ay}\rm{c}_{ax}\rm{d}_{ax}\rm{f}_{ay})}{\sqrt{(\rm{f}^2_{ax}\rm{f}^2_{ay}-m_1\rm{s}^2_{ax}\rm{s}^2_{ay})^2-n_1(\rm{s}_{ax}\rm{c}_{ay}\rm{d}_{ay}\rm{f}_{ax}\pm \rm{s}_{ay}\rm{c}_{ax}\rm{d}_{ax}\rm{f}_{ay})^2}}\nonumber\\[2ex] &&\rm{cng}(x\pm y;m,n)=\nonumber\\ &&\dfrac{\rm{c}_{ax}\rm{c}_{ay}\rm{f}_{ax}\rm{f}_{ay}\mp \rm{s}_{ax}\rm{s}_{ay}\rm{d}_{ax}\rm{d}_{ay}}{\sqrt{(\rm{f}^2_{ax}\rm{f}^2_{ay}-m_1\rm{s}^2_{ax}\rm{s}^2_{ay})^2-n_1(\rm{s}_{ax}\rm{c}_{ay}\rm{d}_{ay}\rm{f}_{ax}\pm \rm{s}_{ay}\rm{c}_{ax}\rm{d}_{ax}\rm{f}_{ay})^2}}\nonumber\\[2ex] &&\rm{dng}(x\pm y;m,n)=\nonumber\\ &&\dfrac{\rm{d}_{ax}\rm{d}_{ay}\rm{f}_{ax}\rm{f}_{ay}\mp \rm{s}_{ax}\rm{s}_{ay}\rm{c}_{ax}\rm{c}_{ay}}{\sqrt{(\rm{f}^2_{ax}\rm{f}^2_{ay}-m_1\rm{s}^2_{ax}\rm{s}^2_{ay})^2-n_1(\rm{s}_{ax}\rm{c}_{ay}\rm{d}_{ay}\rm{f}_{ax}\pm \rm{s}_{ay}\rm{c}_{ax}\rm{d}_{ax}\rm{f}_{ay})^2}}\nonumber\\[2ex] &&\rm{fng}(x\pm y;m,n)=\nonumber\\ &&\dfrac{\rm{f}^2_{ax}\rm{f}^2_{ay}-m_1\rm{s}^2_{ax}\rm{s}^2_{ay}}{\sqrt{(\rm{f}^2_{ax}\rm{f}^2_{ay}-m_1\rm{s}^2_{ax}\rm{s}^2_{ay})^2-n_1(\rm{s}_{ax}\rm{c}_{ay}\rm{d}_{ay}\rm{f}_{ax}\pm \rm{s}_{ay}\rm{c}_{ax}\rm{d}_{ax}\rm{f}_{ay})^2}}\nonumber \end{eqnarray}
where $A$, $a$, $m_1$ and $n_1$ are given in formula (43) (in Proposition 5). \end{theorem} \begin{proof} Let us prove the formula corresponding to $\rm{sng}(x\pm y;m,n)$; the remaining ones are analogous. By Proposition~5 we have that $$\rm{sng}(x\pm y;m,n)=A\dfrac{\rm{sn}(ax+ay\,;m_1)}{\sqrt{1-n_1\rm{sn}^2(ax+ay\,;m_1)}}.$$ Thus, using the addition and subtraction formulae for the Jacobi elliptic sine (see Appendix B) and adopting the convention $$\rm{sn}(a\,x;m_1)=\rm{s}_{x},\quad\rm{cn}(a\,x;m_1)=\rm{c}_{x},\quad\rm{dn}(a\,x;m_1)=\rm{d}_{x},$$ we obtain \begin{equation*} \begin{array}{l} \rm{sng}(x\pm y;m,n)=\\ \hspace{1cm}A\dfrac{\dfrac{\rm{s}_{x}\rm{c}_{y}\rm{d}_{y}\pm\rm{s}_{y}\rm{c}_{x}\rm{d}_{x}}{1-m_1\rm{s}_{x}^2\rm{s}^2_y}}{\sqrt{\dfrac{(1-m_1\rm{s}_{x}^2\rm{s}_{y}^2)^2-n_1(\rm{s}_x\rm{c}_y\rm{d}_y\pm\rm{s}_y\rm{c}_x\rm{d}_x)^2}{(1-m_1\rm{s}_{x}^2\rm{s}^2_y)^2}}}, \end{array} \end{equation*} and simplifying denominators \begin{equation} \label{eq:AuxDoubleSine} \begin{array}{l} \rm{sng}(x\pm y;m,n)=\\ \hspace{0.8cm}A\dfrac{\rm{s}_{x}\rm{c}_{y}\rm{d}_{y}\pm\rm{s}_{y}\rm{c}_{x}\rm{d}_{x}}{\sqrt{(1-m_1\rm{s}_{x}^2\rm{s}_{y}^2)^2-n_1(\rm{s}_x\rm{c}_y\rm{d}_y\pm\rm{s}_y\rm{c}_x\rm{d}_x)^2}}. \end{array} \end{equation} Finally, recalling that \begin{eqnarray*} &&\rm{s}_{x}=\frac{1}{A}\dfrac{\rm{sng}(ax;m,n)}{\rm{fng}(ax;m,n)}\\ &&\rm{c}_{x}=\dfrac{\rm{cng}(ax;m,n)}{\rm{fng}(ax;m,n)}\\ &&\rm{d}_{x}=\dfrac{\rm{dng}(ax;m,n)}{\rm{fng}(ax;m,n)}, \end{eqnarray*} and likewise for $\rm{s}_{y}, \rm{c}_{y},\rm{d}_{y}$, if we multiply numerator and denominator in (\ref{eq:AuxDoubleSine}) by $\rm{fng}^2(ax;m,n)$ and $\rm{fng}^2(ay;m,n)$ we obtain (\ref{eq:AdditionSubtraction4Mahler}) after algebraic simplifications. \end{proof}
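In the particular case $n=0$ one has $a=1$, $A=1$, $n_1=0$, $m_1=m$ and ${\rm fng}\equiv 1$ (see the remark after Proposition 5), so the first formula collapses to the classical addition theorem for the Jacobi elliptic sine. That special case is easy to verify numerically; a Python sketch (RK4 for the Jacobi system, with illustrative step counts):

```python
import math

def jacobi_rk4(v, m, steps=4000):
    """RK4 for sn' = cn dn, cn' = -sn dn, dn' = -m sn cn, IC (0, 1, 1)."""
    def f(w):
        s, c, d = w
        return (c*d, -s*d, -m*s*c)
    w = (0.0, 1.0, 1.0)
    h = v / steps
    for _ in range(steps):
        a = f(w)
        b = f(tuple(w[i] + 0.5*h*a[i] for i in range(3)))
        c_ = f(tuple(w[i] + 0.5*h*b[i] for i in range(3)))
        d_ = f(tuple(w[i] + h*c_[i] for i in range(3)))
        w = tuple(w[i] + h/6.0*(a[i] + 2*b[i] + 2*c_[i] + d_[i])
                  for i in range(3))
    return w

# sn(x + y) = (s_x c_y d_y + s_y c_x d_x) / (1 - m s_x^2 s_y^2):
m, x, y = 0.6, 0.4, 0.7
sx, cx, dx = jacobi_rk4(x, m)
sy, cy, dy = jacobi_rk4(y, m)
lhs = jacobi_rk4(x + y, m)[0]
rhs = (sx*cy*dy + sy*cx*dx) / (1 - m * sx*sx * sy*sy)
assert abs(lhs - rhs) < 1e-8
```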
\begin{corollary} The formulae for the double angle of the 4-Mahler functions are given by \begin{gather} \begin{aligned} \label{eq:Double4Mahler} \rm{sng}(2x;m,n)&=&\dfrac{2A\,\rm{s}_{ax}\rm{c}_{ax}\rm{d}_{ax}\rm{f}_{ax}}{\sqrt{(\rm{f}_{ax}^4-m_1\,\rm{s}_{ax}^4)^2-n_1\,\big(2\,\rm{s}_{ax}\rm{c}_{ax}\rm{d}_{ax}\rm{f}_{ax}\big)^2}}\\ \rm{cng}(2x;m,n)&=&\dfrac{\rm{c}^2_{ax}\rm{f}^2_{ax}- \rm{s}^2_{ax}\rm{d}^2_{ax}}{\sqrt{(\rm{f}_{ax}^4-m_1\,\rm{s}_{ax}^4)^2-n_1\,\big(2\,\rm{s}_{ax}\rm{c}_{ax}\rm{d}_{ax}\rm{f}_{ax}\big)^2}}\\ \rm{dng}(2x;m,n)&=&\dfrac{\rm{d}_{ax}^2\rm{f}_{ax}^2- \rm{s}_{ax}^2\rm{c}_{ax}^2}{\sqrt{(\rm{f}_{ax}^4-m_1\,\rm{s}_{ax}^4)^2-n_1\,\big(2\,\rm{s}_{ax}\rm{c}_{ax}\rm{d}_{ax}\rm{f}_{ax}\big)^2}}\\ \rm{fng}(2x;m,n)&=&\dfrac{\rm{f}^4_{ax}-m_1\rm{s}^4_{ax}}{\sqrt{(\rm{f}_{ax}^4-m_1\,\rm{s}_{ax}^4)^2-n_1\,\big(2\,\rm{s}_{ax}\rm{c}_{ax}\rm{d}_{ax}\rm{f}_{ax}\big)^2}} \end{aligned} \end{gather} \end{corollary}
\begin{corollary} The formulae for the half angle of the 4-Mahler functions are given by \begin{gather} \begin{aligned} \label{eq:Half4Mahler} \rm{sng}(\frac{x}{2};m,n)=\;&A\,\sqrt{\dfrac{\rm{f}_{ax}-\rm{c}_{ax}}{\rm{f}_{ax}+\rm{d}_{ax}-n_1(\rm{f}_{ax}-\rm{c}_{ax})}}\\ \rm{cng}(\frac{x}{2};m,n)=\;&\sqrt{\dfrac{\rm{d}_{ax}+\rm{c}_{ax}}{\rm{f}_{ax}+\rm{d}_{ax}-n_1(\rm{f}_{ax}-\rm{c}_{ax})}}\\ \rm{dng}(\frac{x}{2};m,n)=\;&\sqrt{\dfrac{(\rm{c}_{ax}+\rm{d}_{ax})(\rm{f}_{ax}+\rm{d}_{ax})}{(\rm{f}_{ax}+\rm{c}_{ax})(\rm{f}_{ax}+\rm{d}_{ax})-n_1(\rm{f}_{ax}^2-\rm{c}_{ax}^2)}}\\ \rm{fng}(\frac{x}{2};m,n)=\;&\sqrt{\dfrac{\rm{f}_{ax}+\rm{d}_{ax}}{\rm{f}_{ax}+\rm{d}_{ax}-n_1(\rm{f}_{ax}-\rm{c}_{ax})}} \end{aligned} \end{gather} \end{corollary}
\subsection{On the numerical computation of $\omega_i$ functions by extending Bulirsch-Fukushima method} \label{sec:computing}
As is well known, the Jacobi elliptic functions can be defined by certain ratios of the Jacobi $\theta_i$ functions. This way of handling the Jacobi elliptic functions is convenient due to the fast convergence of those series. Nevertheless, at present, fast numerical codes compete with this classical analytic approach. More precisely, in the implementation of those codes the {\sl addition formulas} for the Jacobi elliptic functions are basic expressions of the process (see Fukushima \cite{Fukushima2013,Fukushima2014}).
We can extend those expressions to the $\omega_i$ functions. Thus, as Fukushima explains, the algorithm is made of three steps:\\ (i) the forward transformation, defined by the half-argument formulas (\ref{eq:Half4Mahler}) of Corollary 2, reducing the argument of the $\omega_i$ by a number of iterations;\\ (ii) the evaluation of the Maclaurin series expansions given by (\ref{algoritmo2}); and\\ (iii) the backward transformation (Corollary 1: the double-argument formulas (\ref{eq:Double4Mahler})), applied as many times as the forward transformation.
Details of the implementation of this process will be given in \cite{Crespo2015}.
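For the classical case $n=0$ (ordinary Jacobi functions) the three-step scheme can be sketched in a few lines. The halving threshold, the series truncation order (the $n=0$ instance of the Maclaurin expansions (\ref{algoritmo2})) and the standard Jacobi double-argument formulas used below are our own illustrative choices, not the tuned constants of \cite{Fukushima2013,Fukushima2014}.

```python
import math

def sncndn(u, m):
    """Three-step scheme for the Jacobi case n = 0:
    (i) halve the argument until it is small,
    (ii) evaluate the Maclaurin series,
    (iii) apply the double-argument formulas the same number of times."""
    k = 0
    while abs(u) > 1e-2:                       # (i) forward transformation
        u *= 0.5
        k += 1
    # (ii) Maclaurin core (n = 0 instance of the expansions above):
    s = u - (1 + m)*u**3/6 + (1 + 14*m + m*m)*u**5/120
    c = 1 - u*u/2 + (1 + 4*m)*u**4/24
    d = 1 - m*u*u/2 + m*(4 + m)*u**4/24
    # (iii) backward (double-argument) transformation, k times:
    for _ in range(k):
        den = 1 - m * s**4
        s, c, d = (2*s*c*d/den,
                   (c*c - s*s*d*d)/den,
                   (d*d - m*s*s*c*c)/den)
    return s, c, d

s, c, d = sncndn(1.0, 0.0)          # m = 0: circular limit
assert abs(s - math.sin(1.0)) < 1e-9
assert abs(c - math.cos(1.0)) < 1e-9
s, c, d = sncndn(0.8, 1.0)          # m = 1: hyperbolic limit, sn = tanh
assert abs(s - math.tanh(0.8)) < 1e-9
```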
\section{On the case $N=5$} \label{sec:N5} As we pointed out in the Introduction, hyperelliptic integrals appear in (\ref{eq:EESn}) when $N\geq 5$. Thus it is convenient to examine in some detail the case $N=5$, the lowest-dimensional system in this category.
As before, we keep the notation used in lower dimensions:
\begin{equation}\label{sistema5}
\begin{array}{l}
\displaystyle{\omega_1'= \alpha_1\,\omega_2\,\omega_3\,\omega_4\,\omega_5}, \\\displaystyle{ \omega_2'= \alpha_2\,\omega_1\,\omega_3\,\omega_4\,\omega_5},\\\displaystyle{\omega_3'= \alpha_3\,\omega_1\,\omega_2\omega_4\,\omega_5},\\\displaystyle{ \omega_4'= \alpha_4\,\omega_1\,\omega_2\,\omega_3\,\omega_5},\\\displaystyle{ \omega_5'= \alpha_5\,\omega_1\,\omega_2\,\omega_3\,\omega_4},
\end{array}
\end{equation} with given initial conditions $\omega(0)$. As examples, in Figs. \ref{fig:MahlerP02N04M07} and \ref{fig:MahlerPmenos2Nmenos1M04} we present two sets of functions of the 5-EES family.
\begin{figure}
\caption{\small 5-Mahler system graphs for $p=0.2, n=0.4, m=0.7$.}
\label{fig:MahlerP02N04M07}
\end{figure}
\begin{figure}
\caption{\small 5-Mahler system graphs for $p=-2, n=-1, m=0.4$.}
\label{fig:MahlerPmenos2Nmenos1M04}
\end{figure}
We will proceed as in the lower dimensions $N=3,4$, considering alternative procedures to the classic solution based on the direct reduction to hyperelliptic integrals. In other words:
\par\noindent
(i) we introduce the functions $u_i^j(v)$ (keeping the earlier notation), ratios of the $\omega_i$, \begin{equation}\label{ratios5} u_i^j=\frac{\omega_i}{\omega_j}, \quad i\neq j, \qquad u_j^j=\frac{1}{\omega_j}, \end{equation} in the domain of definition of $\omega_j$.
\par\noindent (ii) In the rest of the section we will study the effect of introducing some possible regularizations, namely two of them: \begin{itemize} \item $\displaystyle{{\rm d}v^* = \omega_5\,{\rm d}v.}$ \item $\displaystyle{{\rm d}v^*=\omega_3\,\omega_4\,\omega_5\,{\rm d}v.}$ \end{itemize} Again, we have to keep in mind that, with the notation used in the above regularizations, the new variable $v^*$ is different from one case to the other. \subsection{The ${\rm d}v^*/{\rm d}v=\omega_5$ regularization.} \label{sec:omega5} Then, associated with the ratios $u_i$, if we carry out the regularization \begin{equation}\label{regularization5} {\rm d}v =u_5\,{\rm d}v^*, \end{equation}
we have the following regularized differential system \begin{equation}\label{sistema43} \begin{array}{l} \displaystyle{\frac{{\rm d}u_1}{{\rm d}v^*}= C_1^5\,u_2u_3u_4}, \\[1.5ex] \displaystyle{\frac{{\rm d}u_2}{{\rm d}v^*}= C_2^5\,u_1u_3u_4},\\[1.5ex] \displaystyle{\frac{{\rm d}u_3}{{\rm d}v^*}= C_3^5\,u_1u_2u_4},\\[1.5ex] \displaystyle{\frac{{\rm d}u_4}{{\rm d}v^*}= C_4^5\,u_1u_2u_3}, \end{array} \end{equation} with IC $u_i(0)=u_i^0=\omega_i^0/\omega_5^0$, $i=1,\ldots, 4$.
Thus, dividing the integral $\alpha_1\omega_5^2-\alpha_5\omega_1^2=C_1^5$ by $\omega_5^2$ we write $u_5^2=(\alpha_1-\alpha_5\,u_1^2)/C_1^5$. Then, we obtain \begin{equation}\label{regularizacion5} v= \sqrt{\frac{\alpha_1}{C_1^5}}\int\sqrt{1-n_2\,[u_1(v^*)]^2}\,{\rm d}v^*, \end{equation} where $n_2=\alpha_5/\alpha_1$ and $u_1(v^*)$ is a solution of the system (\ref{sistema43}); this quadrature will be solved numerically. We know that the solution of (\ref{sistema43}) can be obtained by undetermined coefficients, making use of the $4$-{\sl Mahler functions} defined by the system (\ref{sistemasng}), but in the variable $v^*$. In other words, the previous form of the solution represents an alternative to the use of hyperelliptic integrals for solving (\ref{eq:EESn}) for $N=5$. Or, more precisely, we have separated geometry from dynamics: the trajectory is expressed by Jacobi or Mahler functions, while the quadrature of the parametrization (\ref{regularizacion5}) will lead generically to a hyperelliptic integral.
\subsection{The ${\rm d}v^*/{\rm d}v=\omega_3\omega_4\omega_5$ regularization.} \label{sec:omega345}
Let us consider again the 5-EES system (\ref{sistema5}). Now we try the regularization \begin{equation}\label{regularization345} \frac{{\rm d}v^*}{{\rm d}v}=\omega_3\,\omega_4\,\omega_5 \end{equation} in a domain where $\omega_3\,\omega_4\,\omega_5\neq 0$. This means that the system reduces to \begin{equation}\label{generic22} \frac{{\rm d}\omega_1}{{\rm d}v^*}= \alpha_1\,\omega_2, \quad \frac{{\rm d}\omega_2}{{\rm d}v^*}= \alpha_2\,\omega_1, \end{equation} and three quadratures associated to $\omega_3$, $\omega_4$ and $\omega_5$. In fact, they are not needed because the integrals allow us to write $\omega_i^2=C_1^i-\alpha_i\omega_1^2$, $(i=3,4,5)$. Remember that $C_1^i$ are constants, functions of the initial conditions.
Assuming the bounded case we can always arrange, by scaling and transformation of the functions, that $\alpha_1=1,\, \alpha_2=-1$. In other words we have \begin{equation} \omega_1(v^*)=\sin v^*, \qquad \omega_2(v^*)=\cos v^*. \end{equation} Then, taking into account the previously mentioned integrals, the quadrature associated with (\ref{regularization345}) becomes \begin{equation}\label{cuadratura3} \lambda v = \int\frac{{\rm d}v^*}{\sqrt{\prod_{i=3}^5 (1-\beta_i\sin^2 v^*)}} \end{equation} where $\beta_i$ and $\lambda$ are functions of $C_1^i$ and $\alpha_i$. This leads us, in the generic case, to a hyperelliptic quadrature.
\paragraph{Dealing with the $5$-Mahler System.} In what follows we choose as the basic system in $N=5$ a Mahler type system \begin{equation}\label{Mahler5} \begin{array}{l} \displaystyle{\omega_1'= \omega_2\,\omega_3\,\omega_4\,\omega_5}, \\ \displaystyle{ \omega_2'= -\omega_1\,\omega_3\,\omega_4\,\omega_5},\\ \displaystyle{\omega_3'= -m\,\,\omega_1\omega_2\,\omega_4\,\omega_5},\\ \displaystyle{ \omega_4'= -n\,\omega_1\,\omega_2\,\omega_3\,\omega_5},\\ \displaystyle{ \omega_5'= -p\,\omega_1\,\omega_2\,\omega_3\,\omega_4}, \end{array} \end{equation} with initial conditions $(0,1,1,1,1)$.
Moreover, apart from adjusting coefficients, an alternative form of dealing with (\ref{cuadratura3}) is to make a change of variable $\sin v^* =x$. Then, the corresponding new expression for the regularization
is given by \begin{equation}\label{cuadratura33} \lambda\,{\rm d}v = \frac{{\rm d}x}{\sqrt{(1-x^2) (1-m\,x^2) (1-n\,x^2) (1-p\,x^2)}}. \end{equation} Denoting $$w=\lambda\,v$$ we define by {\rm Amg} (generalized amplitude) the inverse function \begin{equation}\label{amplitudgeneralizada}
v^*= {\rm Amg}(w;p,n,m). \end{equation} Then, by analogy with the notation introduced in lower dimensions, we propose to write \begin{equation} \sin\, v^*= \sin\,{\rm Amg}(w;p,n,m)\equiv {\rm Sng}(w;p,n,m). \end{equation} In other words, we define ${\rm Sng}$ \begin{equation}\label{sng5} x=x(w;p,n,m)={\rm Sng}(w;p,n,m) \end{equation} as the three-parameter function solution of the differential equation \begin{equation}\label{ed5}
\Big(\frac{{\rm d}x}{{\rm d}w}\Big)^2=(1-x^2)(1-p\,x^2)(1-n\,x^2)(1-m\,x^2). \end{equation} In the rest of this paper we restrict ourselves to the domain of parameters $\Delta=\{(p,n,m)\in [0,1]\times[0,1]\times[0,1]\}$. \par Then, associated with ${\rm Sng}$ we introduce the following functions \begin{equation}\label{CngDngFngHng} \begin{array}{l} \displaystyle{{\rm Cng}(w;p,n,m)=\pm\sqrt{1-{\rm Sng}^2(w;p,n,m)},}\\[1.2ex] \displaystyle{{\rm Dng}(w;p,n,m)=\sqrt{1-m\,{\rm Sng}^2(w;p,n,m)},}\\[1.2ex] \displaystyle{{\rm Fng}(w;p,n,m)=\sqrt{1-n\,{\rm Sng}^2(w;p,n,m)},}\\[1.2ex]\,\displaystyle{{\rm Hng}(w;p,n,m)=\sqrt{1-p\,{\rm Sng}^2(w;p,n,m)}}. \end{array} \end{equation} To simplify the notation, we will write in some expressions \begin{eqnarray*} &&{\rm Sng}(w;p,n,m)\equiv {\rm Sng}, \quad {\rm Cng}(w;p,n,m)\equiv {\rm Cng}, \\ &&{\rm Dng}(w;p,n,m)\equiv {\rm Dng}, \quad {\rm Fng}(w;p,n,m)\equiv {\rm Fng},\\ &&{\rm Hng}(w;p,n,m)\equiv {\rm Hng}. \end{eqnarray*} Then, we write again (\ref{Mahler5}) as the following IVP \begin{equation}\label{Mahler51} \begin{array}{l} \displaystyle{\frac{{\rm d}\,{\rm Sng}}{{\rm d}w}= {\rm Cng}\,{\rm Dng}\,{\rm Fng}\,{\rm Hng}}, \\ [1.8ex] \displaystyle{\frac{{\rm d}\,{\rm Cng}}{{\rm d}w}= -{\rm Sng}\,{\rm Dng}\,{\rm Fng}\,{\rm Hng}}\\ [1.8ex] \displaystyle{\frac{{\rm d}\,{\rm Dng}}{{\rm d}w}= -m\,{\rm Sng}\,{\rm Cng}\,{\rm Fng}\,{\rm Hng}}\\ [1.8ex] \displaystyle{\frac{{\rm d}\,{\rm Fng}}{{\rm d}w}= -n\,{\rm Sng}\,{\rm Cng}\,{\rm Dng}\,{\rm Hng}},\\[1.8ex] \displaystyle{\frac{{\rm d}\,{\rm Hng}}{{\rm d}w}= -p\,{\rm Sng}\,{\rm Cng}\,{\rm Dng}\,{\rm Fng}} \end{array} \end{equation} with initial conditions (0,1,1,1,1). Note that in agreement with (\ref{CngDngFngHng}), the integrals take the following form \begin{equation} \begin{array}{l} {\rm Cng}^2+{\rm Sng}^2=1, \qquad {\rm Dng}^2+m\,{\rm Sng}^2=1, \\[1.2ex] {\rm Fng}^2+n\,{\rm Sng}^2=1, \qquad {\rm Hng}^2+p\,{\rm Sng}^2=1. 
\end{array} \end{equation} We are not going to deal with the generic study of our system (\ref{Mahler51}); it is beyond the scope of this paper. In the last Section we restrict ourselves to analyzing some particular cases.
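Although the generic study is left aside, the system (\ref{Mahler51}) and its four integrals are straightforward to check numerically. A Python sketch (RK4 from the IC $(0,1,1,1,1)$, with the parameters of Fig.~\ref{fig:MahlerP02N04M07}; step counts are illustrative):

```python
import math

def mahler5(v, p, n, m, steps=4000):
    """RK4 for the 5-Mahler IVP (Sng, Cng, Dng, Fng, Hng), IC (0,1,1,1,1)."""
    def f(w):
        s, c, d, g, q = w
        return (c*d*g*q, -s*d*g*q, -m*s*c*g*q, -n*s*c*d*q, -p*s*c*d*g)
    w = (0.0, 1.0, 1.0, 1.0, 1.0)
    h = v / steps
    for _ in range(steps):
        a = f(w)
        b = f(tuple(w[i] + 0.5*h*a[i] for i in range(5)))
        c_ = f(tuple(w[i] + 0.5*h*b[i] for i in range(5)))
        d_ = f(tuple(w[i] + h*c_[i] for i in range(5)))
        w = tuple(w[i] + h/6.0*(a[i] + 2*b[i] + 2*c_[i] + d_[i])
                  for i in range(5))
    return w

s, c, d, g, q = mahler5(1.2, p=0.2, n=0.4, m=0.7)
assert abs(s*s + c*c - 1) < 1e-9        # Cng^2 + Sng^2 = 1
assert abs(d*d + 0.7*s*s - 1) < 1e-9    # Dng^2 + m Sng^2 = 1
assert abs(g*g + 0.4*s*s - 1) < 1e-9    # Fng^2 + n Sng^2 = 1
assert abs(q*q + 0.2*s*s - 1) < 1e-9    # Hng^2 + p Sng^2 = 1
```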
\section{$N=5$: Some particular cases} \label{sec:appendixpp}
Like in previous dimensions, we consider two particular cases. \subsection{The case $p=0$.} Now, according to (\ref{CngDngFngHng}), we have ${\rm Hng}\equiv 1$. This corresponds to the previously studied case: the 4-Mahler system. \subsection{The case $p=n$.} As a particular case of (\ref{cuadratura3}), we now consider two of the $\beta_i$ equal. According to the notation introduced, we write \begin{equation}\label{cuadratura&Pi} \lambda\,v=\int_0^{\tilde v^*}\frac{d\vartheta}{(1-n\sin^2\vartheta)\sqrt{1-m\sin^2\vartheta}}. \end{equation} \begin{remark} In relation to the quadrature (\ref{cuadratura&Pi}), the reader will recognize that this is precisely the Legendre elliptic integral of the third kind\footnote{Dealing with the search for fast numerical algorithms for the computation of the third elliptic integral, Fukushima \cite{Fukushima2013,Fukushima2014} points out that by a number of transformations the domain of $n$ and $m$ may be reduced to $$ 0 < m <1, \qquad -\sqrt{m} <n < \frac{m}{1+\sqrt{1-m}}. $$ This fact has to be kept in mind when studying the $\omega_i$ with applications to those integrals in view.} $\Pi(\tilde v^*;m,n)$. Thus, for the particular cases $n=0$ and $n=m$, we encounter the other Legendre elliptic integrals: \begin{eqnarray*} &&F(\varphi,m)=\int_0^{\varphi}\frac{d\vartheta}{\sqrt{1-m\sin^2\vartheta}}= \Pi(\varphi,0,m)\\ &&E(\varphi,m)=\int_0^{\varphi}\sqrt{1-m\sin^2\vartheta}\,d\vartheta\\ &&\hspace{1.2cm}=(1-m)\,\Pi(\varphi,m,m) + m \frac{\sin(2\varphi)}{2\sqrt{1-m\sin^2\varphi}}. \end{eqnarray*} \end{remark} Denoting $$w=\lambda\,v,$$ we define {\rm Amg} (the generalized amplitude) as the inverse function \begin{equation} \tilde v^*= {\rm Amg}(w;n,n,m). 
\end{equation} Then, by analogy with the notation introduced in lower dimensions, we propose to write \begin{equation} \sin\, \tilde v^*= \sin\,{\rm Amg}(w;n,n,m)\equiv {\rm Sng}(w;n,m). \end{equation} For later use, we also include here the expression for our particular case of (\ref{cuadratura33}) \begin{equation}\label{ed1} {\rm d}w = \frac{{\rm d}x}{(1-n\,x^2) \sqrt{(1-x^2) (1-m\,x^2) }}. \end{equation} From our initial conditions we have ${\rm Hng}\equiv {\rm Fng}$. Then, from (\ref{Mahler5}) we immediately obtain that $\omega_4\equiv \omega_5$, and that these functions satisfy the following IVP \begin{equation}\label{regularizeSng00} \begin{array}{l} \displaystyle{\frac{{\rm d}\,{\rm Sng}}{{\rm d}w}= {\rm Cng}\,{\rm Dng}\,{\rm Fng}^2}, \\ [1.8ex] \displaystyle{\frac{{\rm d}\,{\rm Cng}}{{\rm d}w}= -{\rm Sng}\,{\rm Dng}\,{\rm Fng}^2}\\ [1.8ex] \displaystyle{\frac{{\rm d}\,{\rm Dng}}{{\rm d}w}= -m\,{\rm Sng}\,{\rm Cng}\,{\rm Fng}^2}\\ [1.8ex] \displaystyle{\frac{{\rm d}\,{\rm Fng}}{{\rm d}w}= -n\,{\rm Sng}\,{\rm Cng}\,{\rm Dng}\,{\rm Fng}}, \end{array} \end{equation} with initial conditions $(0,1,1,1)$.
Again, a regularization $w\rightarrow \tilde v$ given by \begin{equation}\label{cuadraturapi0} \frac{{\rm d}\tilde v}{{\rm d}w} = {\rm Fng} \end{equation} transforms (\ref{regularizeSng00}) into a regularized system
which is a 4-Mahler system in the new variable.
After we have solved the regularized system, we still need to compute the quadrature associated to the differential relation (\ref{cuadraturapi0}). Explicitly, since ${\rm Fng}^2=1-n\,{\rm Sng}^2$ is a first integral of (\ref{regularizeSng00}), we have \begin{equation} w=\int\frac{{\rm d}\tilde v}{\sqrt{1-n \,{\rm Sng}^2(w(\tilde v))}}. \end{equation} We will give details of this process, from both the analytical and numerical points of view, in a forthcoming paper.
\section{On the application to the free rigid body} \label{sec:FRB}
We will apply what we have presented in previous sections to the description of the solution of the free rigid body. We will do so by formulating the system in symplectic Andoyer variables.
\subsection{The solution in Andoyer variables} \label{sec:classical} Let us consider the Hamiltonian of the free rigid body expressed in Andoyer's variables $(\lambda,\mu,\nu,\Lambda,M,N)$, which takes the form \begin{equation} \mathcal{H}=\frac{1}{2}(a_1\sin^2\nu + a_2\cos^2\nu)(M^2-N^2) + \frac{a_3}{2}N^2, \end{equation} where $(a_1, a_2, a_3)=(1/A, 1/B,1/C)$ with $(A,B,C)$ the principal moments of inertia. Note that in applications we will study the influence of $B$, taken as a physical parameter, with $A\leq B\leq C$ together with $C<A+B$. The differential system is given by three equations \begin{eqnarray} &&\frac{\mathrm{d}\nu}{\mathrm{d}t}=\phantom{-}\frac{\partial\mathcal{H}}{\partial N}=N(a_3-a_1\sin^2\nu-a_2\cos^2\nu),\label{nuPunto}\\[0.7ex] &&\frac{\mathrm{d}N}{\mathrm{d}t}=-\frac{\partial\mathcal{H}}{\partial\nu}=(a_2-a_1)(M^2-N^2)\sin\nu\cos\nu,\label{NPunto}\\ &&\frac{\mathrm{d}\mu}{\mathrm{d}t}=\phantom{-}\frac{\partial\mathcal{H}}{\partial M}=M(a_1\sin^2\nu+a_2\cos^2\nu),\label{muPunto} \end{eqnarray} while the other three variables $(\lambda,\Lambda,M)$ are integrals. Usually one first integrates the system defined by $N$ and $\nu$; more precisely, one solves the Euler system associated with those variables. The solution functions $N(t)$ and $\nu(t)$ are then given in terms of the Jacobi elliptic functions \begin{equation}\label{solutionnu} \begin{array}{l} \displaystyle{\sin\nu(t) = \frac{{\rm cn}(s\,t;\,m)}{\sqrt{1+n^*{\rm sn}^2(s\,t;\,m)}}}, \\[2.5ex] \displaystyle{\cos\nu(t) = \sqrt{1+n^*}\frac{{\rm sn}(s\,t;\,m)}{\sqrt{1+n^*{\rm sn}^2(s\,t;\,m)}}}, \\[2.5ex] \displaystyle{\hspace{0.4cm}N(t)= R\, {\rm dn}(s\,t;\,m),} \end{array} \end{equation} where \begin{eqnarray*} &&R^2 = M^2\frac{C(1-dA)}{C-A}, \quad n^*=-n=\frac{C(B-A)}{A(C-B)},\\ &&m= \frac{(B-A)(dC-1)}{(C-B)(1-dA)}, \quad s^2 = M^2\frac{(C-B)(1-dA)}{ABC}, \end{eqnarray*} with $d=2h/M^2$.
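The closed-form solution above can be checked numerically with SciPy's Jacobi elliptic functions. The sketch below is our own illustration (the moments of inertia, $M$ and $d$ are arbitrary admissible choices, not values from the paper); it verifies that $\sin^2\nu+\cos^2\nu=1$ and that the Hamiltonian is conserved along (\ref{solutionnu}).

```python
import numpy as np
from scipy.special import ellipj

# Illustrative values (our own choice): principal moments with
# A <= B <= C and C < A + B.
A, B, C = 2.0, 3.0, 4.0
a1, a2, a3 = 1.0 / A, 1.0 / B, 1.0 / C
M = 1.0
d = 0.3                     # d = 2h/M^2 must satisfy 1/C < d < 1/A
h = 0.5 * d * M**2          # energy level

# Constants of (solutionnu)
R = np.sqrt(M**2 * C * (1 - d * A) / (C - A))
nstar = C * (B - A) / (A * (C - B))
m = (B - A) * (d * C - 1) / ((C - B) * (1 - d * A))
s = np.sqrt(M**2 * (C - B) * (1 - d * A) / (A * B * C))

t = np.linspace(0.0, 50.0, 2001)
sn, cn, dn, _ = ellipj(s * t, m)      # Jacobi sn, cn, dn with parameter m

denom = np.sqrt(1 + nstar * sn**2)
sin_nu = cn / denom
cos_nu = np.sqrt(1 + nstar) * sn / denom
N = R * dn

# Hamiltonian along the solution; it should stay equal to h
H = 0.5 * (a1 * sin_nu**2 + a2 * cos_nu**2) * (M**2 - N**2) + 0.5 * a3 * N**2
print(np.max(np.abs(sin_nu**2 + cos_nu**2 - 1)), np.max(np.abs(H - h)))
```

Both printed deviations are at roundoff level, confirming the trigonometric identity and energy conservation along the orbit.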
\begin{remark} The reader will notice that $\sin\nu(t)$ and $\cos\nu(t)$ are Mahler functions. Moreover, from (\ref{JacobiMahler}) we know that $N(t)$ is a ratio of Mahler functions. \end{remark}
Finally, we obtain $\mu(t)$ by means of a quadrature: \begin{equation}\label{cuadraturamu} \mu=M\int (a_1\sin^2\nu(t)+a_2\cos^2\nu(t))\,\mathrm{d}t \end{equation} which is finally expressed by means of a linear function of time and the Legendre third elliptic integral, whose solution Jacobi gave making use of his elliptic and related functions.
\subsection{On alternative approaches}
We proceed here in a different way from the previous Section \ref{sec:classical}. Making use of the Hamiltonian function, we may separate variables in the system defined by (\ref{nuPunto})-(\ref{NPunto}). More precisely, we denote \begin{equation}
n_1=\frac{a_1-a_2}{d-a_2}, \quad m_1= \frac{a_1-a_2}{a_3-a_2} \end{equation} and $\Omega= (d-a_2)(a_3-a_2)$, where we assume $a_3\neq a_2$ and $d\neq a_2$ (the case of equality has to be treated separately). Then equation (\ref{nuPunto}) may be written in the form \begin{equation} M\Omega\,\mathrm{d}t=\frac{\mathrm{d}\nu}{\sqrt{(1-n_1\sin^2\nu)(1-m_1\sin^2\nu)}} \end{equation} where $n_1=n_1(d,a_2)$ and $m_1=m_1(d,a_2)$; that is to say, we may study the system under the influence of the intermediate moment of inertia and the value of the Hamiltonian, keeping the other parameters fixed. Again, we have to distinguish circulation and libration patterns, but we do not need to go into the details of that procedure here.
\par\noindent $\bullet$ In a more detailed form, we see that from (\ref{sistema4mnsolution}) and (\ref{solutionnu}) we may write \begin{eqnarray*} &&\sin\nu(t) = A_1\,{\rm cng}(w;n,m), \\ &&\cos\nu(t) = A_2\, {\rm sng}(w;n,m), \\ &&N=A_3\,{\rm dng}(w;n,m)/{\rm fng}(w;n,m), \end{eqnarray*} where $A_i$ are quantities depending on the previous constants. \par
The quadrature (\ref{cuadraturamu}) of the Andoyer angle variable $\mu$ now takes the form: \begin{eqnarray} &&\mu = M\int (a_1\sin^2\nu+a_2\cos^2\nu)\,\mathrm{d}t\nonumber \\ &&\hspace{0.3cm}= M\int (\tilde a_1 {\rm sng}^2t + \tilde a_2 {\rm cng}^2t) \,\mathrm{d}t\\ &&\hspace{0.3cm}= M\tilde a_2\,t + a_1^* \int {\rm sng}^2t \,\mathrm{d}t\nonumber\end{eqnarray} \par\noindent $\bullet$ Finally, from what we have seen in Sect. \ref{sec:appendixpp} we find that $$ \sin\mu = {\rm Sng}(w;n,m), \qquad \cos\mu = {\rm Cng}(w;n,m).$$ In other words, depending on the use of ${\rm sng}$, etc. or ${\rm Sng}$, etc., we reach the third Legendre elliptic integral in two different forms. A comparison of the pros and cons of their use, versus the classical approach based on the Jacobi functions ${\rm sn}$, etc., is in progress. \section{Appendices} \label{sec:Appendices} {\bf Appendix A: On the ratios of Jacobi $\theta_i$ functions as solutions of 3-EES.}
From Lawden \cite{Lawden} (Chp.~1) we borrow the following 3-EES differential systems satisfied by the ratios of the Jacobi $\theta_i$ functions \begin{eqnarray} &&\label{ratio41}\frac{{\rm d}\phantom{-}}{{\rm d}v}\Big(\frac{\theta_1}{\theta_4}\Big)=\theta_4^2(0)\frac{\theta_2}{\theta_4}\frac{\theta_3}{\theta_4},\\ &&\frac{{\rm d}\phantom{-}}{{\rm d}v}\Big(\frac{\theta_2}{\theta_4}\Big)=-\theta_3^2(0)\frac{\theta_1}{\theta_4}\frac{\theta_3}{\theta_4},\\ &&\frac{{\rm d}\phantom{-}}{{\rm d}v}\Big(\frac{\theta_3}{\theta_4}\Big)=-\theta_2^2(0)\frac{\theta_1}{\theta_4}\frac{\theta_2}{\theta_4}, \end{eqnarray} etc. We find it convenient to introduce the notation $x_{ij}=\theta_j/\theta_i$ and the reparametrization $v\rightarrow \tau$ given by ${\rm d}\tau=\sqrt{2{\rm K}/\pi}\,{\rm d}v$, with $x_{ij}'={\rm d}x_{ij}/{\rm d}\tau$. Thus, taking into account the values of $\theta_i(0)$, where $k^2=m$, $k^2+{k'}^2=1$ and ${\rm K}(m)$ is the complete Legendre elliptic integral of the first kind, we write those IVP systems as follows. Note that, as was pointed out in Crespo and Ferrer \cite{Crespo2015}, considering the sign of the coefficients, we may distinguish
\par\noindent $\bullet$ Two bounded systems: \begin{eqnarray*} &&x_{41}'=k'\,x_{42}\,x_{43},\\ &&x_{42}'=-\,x_{41}\,x_{43},\\ &&x_{43}'=-k\,x_{41}\,x_{42}, \qquad (0,\sqrt{k/k'},1/\sqrt{k'}) \end{eqnarray*} and \begin{eqnarray*} &&x_{31}'=x_{32}\,x_{34},\\ &&x_{32}'=-k'\,x_{31}\,x_{34},\\ &&x_{34}'=k\,x_{31}\,x_{32}, \qquad (0,\sqrt{k},\sqrt{k'}) \end{eqnarray*} \par\noindent $\bullet$ Two unbounded systems: \begin{eqnarray*} &&x_{21}'=k\,x_{23}\,x_{24},\\ &&x_{23}'=k'\,x_{21}\,x_{24},\\ &&x_{24}'=x_{21}\,x_{23}, \qquad (0,1/\sqrt{k},\sqrt{k'/k}) \end{eqnarray*} and \begin{eqnarray*} &&x_{12}'=-k\,x_{13}\,x_{14},\\ &&x_{13}'=-x_{12}\,x_{14},\\ &&x_{14}'=-k'\,x_{12}\,x_{13}, \quad (1,\sqrt{(k'+1)/k},\sqrt{(k'+1)/k}). \end{eqnarray*}
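These quadratic systems are easy to integrate numerically, and the first integrals they possess provide a convenient correctness check. A minimal sketch for the first bounded system follows (the value $k=0.6$ is our own arbitrary choice); the conserved quantities $x_{41}^2/k'+x_{42}^2$ and $x_{43}^2/k-x_{42}^2$ follow directly from the equations.

```python
import numpy as np
from scipy.integrate import solve_ivp

k = 0.6                       # modulus (our own test value)
kp = np.sqrt(1 - k**2)        # complementary modulus k'

def rhs(tau, x):
    # First bounded system: x41' = k' x42 x43, x42' = -x41 x43, x43' = -k x41 x42
    x41, x42, x43 = x
    return [kp * x42 * x43, -x41 * x43, -k * x41 * x42]

x0 = [0.0, np.sqrt(k / kp), 1.0 / np.sqrt(kp)]   # initial conditions above
sol = solve_ivp(rhs, (0.0, 20.0), x0, rtol=1e-10, atol=1e-12)
x41, x42, x43 = sol.y

# First integrals, constant along every solution
I1 = x41**2 / kp + x42**2     # equals k/k' at tau = 0
I2 = x43**2 / k - x42**2      # equals k'/k at tau = 0
print(np.max(np.abs(I1 - k / kp)), np.max(np.abs(I2 - kp / k)))
```

The analogous checks for the other three systems only require changing the signs of the coefficients and the initial conditions.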
Then, we may express those ratios as functions of the Jacobi elliptic functions and their Glaisher ratios.
\par\noindent {\bf Appendix B: Transformations and addition formulas for Jacobi elliptic functions.}
For the benefit of the reader we collect here some well-known transformations involving the {\sl elliptic modulus}. They may be found in any handbook of elliptic functions (recall that, depending on the author, two notations are used, the `modulus' or the `parameter', related by $k^2\equiv m$, together with their complementaries). These formulas should be used for the reduction to the normal case of some of the particular cases mentioned throughout the paper.
\par\noindent $\bullet$ {\it Negative parameter} \\ Let $m$ be a positive number and write \begin{equation}\label{eq:changemnegative} \mu=\frac{m}{1+m}, \qquad \mu_1=\frac{1}{1+m}, \qquad v=\frac{u}{\sqrt{\mu_1}}. \end{equation} Then, \begin{eqnarray*} &&{\rm sn}(u\,;-m)=\sqrt{\mu_1}\,\frac{{\rm sn}(v\,;\,\mu)}{{\rm dn}(v\,;\,\mu)},\\ &&{\rm cn}(u\,;-m)=\frac{{\rm cn}(v\,;\,\mu)}{{\rm dn}(v\,;\,\mu)},\\ &&{\rm dn}(u\,;-m)=\frac{1}{{\rm dn}(v\,;\,\mu)}. \end{eqnarray*} Thus elliptic functions with negative parameter may be expressed by elliptic functions with a positive parameter. Note that $0<\mu<1$.\par
A final comment related to the complete elliptic integral of the first kind is in order here. Unlike {\sl Maple}, the software {\sl Mathematica} yields the following result
\begin{equation} \int_0^{\pi/2}\frac{\mathrm{d}\phi}{\sqrt{1-m\sin^2\phi}}=\frac{1}{\sqrt{1-m}}\,\mathrm{K}\left(\frac{m}{m-1}\right) \end{equation}
for all $m\leq 1$, instead of the expected result $\mathrm{K}(m)$. By applying the previous change (\ref{eq:changemnegative}), we have, for $m$ a positive number,
\begin{equation} \mathrm{K}(-m)=\frac{1}{\sqrt{1+m}}\,\mathrm{K}\left(\frac{m}{1+m}\right)=\sqrt{\mu_1}\,\mathrm{K}(\mu) \end{equation}
which is exactly the same result given by {\sl Mathematica} for $m<0$.
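This identity is straightforward to confirm numerically; a minimal SciPy sketch (the value $m=2$ is an arbitrary choice of ours):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import ellipk

m = 2.0                  # an arbitrary positive number; the parameter is -m
mu = m / (1 + m)
mu1 = 1.0 / (1 + m)

# K(-m) computed directly from its integral definition
lhs, _ = quad(lambda phi: 1.0 / np.sqrt(1 + m * np.sin(phi)**2), 0.0, np.pi / 2)

# Right-hand side of the identity K(-m) = sqrt(mu_1) K(mu)
rhs = np.sqrt(mu1) * ellipk(mu)
print(lhs, rhs)          # the two values agree
```

Note that `scipy.special.ellipk` uses the parameter convention ($m$, not the modulus $k$), matching the convention of this appendix.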
\par\noindent $\bullet$ {\it Reciprocal parameter} \\ Denoting now $v=\sqrt{m}u$, we have \begin{eqnarray*} &&{\rm sn}(u\,;m)=\frac{1}{\sqrt{m}}\,{\rm sn}(v\,;m^{-1}),\\ &&{\rm cn}(u\,;m)={\rm dn}(v\,;m^{-1}),\\ &&{\rm dn}(u\,;m)={\rm cn}(v\,;m^{-1}). \end{eqnarray*} This is Jacobi's {\it real transformation}. If $m>1$, then $m^{-1}<1$; thus elliptic functions whose parameter is greater than $1$ are related to the ones whose parameter is less than $1$. In short, there is no loss of generality in assuming $0\leq m\leq 1$.
\par\noindent $\bullet$ {\it Decrease of parameter} \\ With $m_1=1-m$ the complementary parameter, set \begin{equation} \mu=\Big(\frac{1-\sqrt{m_1}}{1+\sqrt{m_1}}\Big)^2, \qquad v=\frac{u}{1+\sqrt{\mu}}. \end{equation} \begin{eqnarray*} &&{\rm sn}(u\,;m)=\frac{(1+\sqrt{\mu}){\rm sn}(v\,;\,\mu)}{1+\sqrt{\mu}\,{\rm sn}^2(v\,;\,\mu)},\\[1ex] &&{\rm cn}(u\,;m)=\frac{{\rm cn}(v\,;\,\mu)\,{\rm dn}(v\,;\,\mu)}{1+\sqrt{\mu}\,{\rm sn}^2(v\,;\,\mu)},\\[1ex] &&{\rm dn}(u\,;m)=\frac{1-\sqrt{\mu}\,{\rm sn}^2(v\,;\,\mu)}{1+\sqrt{\mu}\,{\rm sn}^2(v\,;\,\mu)}. \end{eqnarray*} This is the Gauss transformation, or the {\it descending} Landen transformation, which expresses elliptic functions in terms of functions with a smaller parameter.
Note that, making use of the double-angle formulas, we may also write \begin{equation} {\rm dn}(u\,;m)=\frac{\sqrt{\mu}\,{\rm cn}(2v\,;\,\mu) + {\rm dn}(2v\,;\,\mu)}{1+\sqrt{\mu}}. \end{equation} There are analogous expressions for the increase of the parameter. For a recent study where generalized formulae are given, see \cite{Khare}.
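Since both sides of the Gauss/Landen transformation involve only parameters in $(0,1)$, they can be evaluated directly with SciPy. In the sketch below (our own check; $m=0.7$ is an arbitrary test value, and we read $m_1$ as the complementary parameter $1-m$) all three identities hold to roundoff.

```python
import numpy as np
from scipy.special import ellipj

m = 0.7                 # arbitrary test parameter in (0, 1)
m1 = 1.0 - m            # complementary parameter (our reading of m_1)
mu = ((1 - np.sqrt(m1)) / (1 + np.sqrt(m1)))**2

u = np.linspace(0.0, 3.0, 301)
v = u / (1 + np.sqrt(mu))

sn_m, cn_m, dn_m, _ = ellipj(u, m)       # left-hand sides
sn_mu, cn_mu, dn_mu, _ = ellipj(v, mu)   # building blocks of the right-hand sides

den = 1 + np.sqrt(mu) * sn_mu**2
sn_rhs = (1 + np.sqrt(mu)) * sn_mu / den
cn_rhs = cn_mu * dn_mu / den
dn_rhs = (1 - np.sqrt(mu) * sn_mu**2) / den
print(np.max(np.abs(sn_m - sn_rhs)))     # errors are at roundoff level
```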
\par\noindent $\bullet$ {\it Addition formulae} \\
Complementing previous transformations, we collect also here the {\sl addition formulae} \begin{eqnarray*} &&{\rm sn}(\alpha+\beta) =\frac{{\rm sn}\,\alpha \, {\rm cn}\,\beta\,{\rm dn}\,\beta + {\rm sn}\,\beta \, {\rm cn}\,\alpha\, {\rm dn}\,\alpha}{1-m\,{\rm sn}^2\alpha \, {\rm sn}^2\beta},\\ &&{\rm cn}(\alpha+\beta) =\frac{{\rm cn}\,\alpha \, {\rm cn}\,\beta - {\rm sn}\,\alpha\,
{\rm sn}\,\beta\,{\rm dn}\alpha\, {\rm dn}\,\beta}{1-m\,{\rm sn}^2\alpha \, {\rm sn}^2\beta},\\ &&{\rm dn}(\alpha+\beta) =\frac{{\rm dn}\,\alpha \, {\rm dn}\beta - m\, {\rm sn}\,\alpha\, {\rm sn}\,\beta\,{\rm cn}\,\alpha \,{\rm cn}\,\beta}{1-m\,{\rm sn}^2\alpha \, {\rm sn}^2\beta}, \end{eqnarray*} which we have generalized for the new functions; more precisely this has been done for the 4-EES Mahler system.
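The classical addition formulae above are likewise easy to verify numerically with SciPy (the parameter $m=0.5$ and the grid of arguments are arbitrary choices of ours):

```python
import numpy as np
from scipy.special import ellipj

m = 0.5                               # arbitrary parameter
alpha = np.linspace(0.1, 1.5, 29)     # arbitrary grid of arguments
beta = 0.7

sa, ca, da, _ = ellipj(alpha, m)
sb, cb, db, _ = ellipj(beta, m)
ss, cs, ds, _ = ellipj(alpha + beta, m)

den = 1 - m * sa**2 * sb**2
err_sn = ss - (sa * cb * db + sb * ca * da) / den
err_cn = cs - (ca * cb - sa * sb * da * db) / den
err_dn = ds - (da * db - m * sa * sb * ca * cb) / den
print(np.max(np.abs(err_sn)), np.max(np.abs(err_cn)), np.max(np.abs(err_dn)))
```

All three residuals are at roundoff level. The analogous checks for the generalized 4-EES Mahler functions would require an ODE integration of the system in place of `ellipj`.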
\end{document} |
\begin{document}
\title{The Nuisance Principle in Infinite Settings }
\paragraph{Disclaimer} This is the pre-peer-reviewed version of the following article: \begin{quote} Sean C. Ebels-Duggan. The Nuisance Principle in Infinite Settings, \emph{Thought: A Journal of Philosophy}~4(4), December~2015, pp.~263--268. \end{quote} which has been published in final form at \begin{quote} http://onlinelibrary.wiley.com/doi/10.1002/tht3.186/abstract \end{quote}
\paragraph{Note} The final version has additional remarks and corollaries; namely these: First corollary: SOL + Nuisance Principle (NP) prove that there is no pairing injection of the universe. Second corollary: SOL + HP + NP prove that the universe is uncountable. Remark: Even if NP does not imply the finitude of the universe, it is still deductively non-conservative.
\section*{Submitted Article}
\begin{abstract}
Neo-Fregeans have been troubled by the Nuisance Principle (NP), an abstraction principle that is consistent but not jointly (second-order) satisfiable with the favored abstraction principle HP. We show that logically this situation persists if one looks at joint (second-order) consistency rather than satisfiability: under a modest assumption about infinite concepts, NP is also inconsistent with HP. \end{abstract}
The so-called Nuisance Principle (NP) is the paradigm example of an abstraction principle that is individually satisfiable in second-order logic (with full comprehension), but is not jointly satisfiable with the neo-logicist's favored abstraction principle, HP. This is thought to cause trouble for neo-logicists. Some abstraction principles, when added to second-order logic, allow the recovery of certain mathematical content. For example, HP allows one to recover second-order Peano Arithmetic. If abstraction principles have epistemic status near enough to logic, then so does the recovered mathematics. The principle NP is troublesome because initially it seems to have epistemic status like HP, but it is hard to see how near-logical principles could be so incompatible.
But are NP and HP jointly \emph{consistent}? The further question arises because satisfiability (having a \emph{standard} model) and consistency (not proving a contradiction) are not the same in second-order logic. The question was partially answered in \cite[21--22]{WalshED2015}; the present note moves us further, but not fully, towards a complete answer.
The principle HP, attributed loosely to Hume by Frege \cite[$\S$ 63]{Frege1980}, states that the Number of $F$s (denoted $\#F$) is identical to the Number of $G$s ($\#G $) just if there is a bijection from the $F$s to the $G$s---that is, a function associating all of the objects falling under $F$ with all of the objects falling under $G$, such that no two objects falling under $F$ are associated with the same object falling under $G$. In second-order logic the existence of such a bijection can be represented, and is demonstrably an equivalence relation. Thus HP can be represented by: \[ (\forall F)(\forall G)(\#F = \#G \leftrightarrow F \approx G)\] where `$\approx$' is shorthand for the second-order formula asserting the existence of a bijection.
The Nuisance Principle is a simplification due to Crispin Wright \cite{Wright1997aa} of a principle introduced by George Boolos \cite{Boolos1990}. One can express in second-order logic the following equivalence relation: \begin{quote} $N(F,G)$ \emph{iff} there are finitely many objects falling under $F$ but not $G$, and finitely many falling under $G$ but not $F$ \end{quote} The Nuisance principle is then the claim that the \emph{Nuisance} of $F$ (denoted $\ddag F$) is identical to the Nuisance of $G$ ($\ddag G$) if and only if $N$ holds of $F$ and $G$. Using our abbreviation $N(F, G)$ we can represent this in second-order logic by \[ (\forall F)(\forall G)(\ddag F = \ddag G \leftrightarrow N(F,G) )\] Notice that NP and HP are both abstraction principles in virtue of having the same form: equality between objects on the left, an equivalence relation between concepts on the right.
That NP and HP are jointly unsatisfiable can be seen by deploying features of cardinal numbers in set theory to show that the former is satisfiable only if there are finitely many objects.\footnote{For such a proof that NP is unsatisfiable, see \cite{Antonelli2010aa}.} Since HP proves there are infinitely many objects, the two are not jointly satisfiable. But one cannot adapt this proof to a deductive setting. The complicating factor is that outside of standard models, being ``infinite'' can mean many things. Typically, concepts are ``infinite'' if they are \emph{Dedekind infinite}: there is a function from \emph{all} the objects falling under the concept to \emph{not all} of the objects falling under the concept, such that no two objects are sent to the same object by that function. (That is to say, there is an injection from the concept to a proper subconcept of itself.) In standard models of second-order logic, Dedekind infinite concepts behave like infinite sets behave in set theory. But this isn't guaranteed in non-standard models (and this is why in this note we use ``concepts'' rather than ``sets'' to indicate the semantic correlate to second-order variables).
In this note we show that NP is inconsistent with the Dedekind infinity of the universe in the presence of a natural and relatively modest \emph{strengthening} of the assertion that the universe is Dedekind infinite. Such a strengthening is a conditional describing the behavior of Dedekind infinite concepts.
This is a significant improvement over what was shown in \cite{WalshED2015}. The proof in that paper used two versions of the Axiom of Choice: a global well-ordering {\tt GC} to get Dedekind infinite concepts to behave like infinite sets, and a uniform means of selecting representatives for each equivalence class, {\tt AC}. So what was shown in that paper is that,
if one's second-order logic includes these versions of the Axiom of Choice, then NP is not consistent with the universe being Dedekind infinite. Thus HP and NP are jointly inconsistent, as in the proof that they are unsatisfiable.
Our improvement is that we can obtain this result by appeal to an ostensibly weaker principle. The principle in question is the following strengthening of infinity: \begin{description} \item[(Pairing)] If the universe is Dedekind infinite, then there is a binary function $f$ defined on all pairs of objects such that for any $z$ and any $x,y,x',$ and $y'$, if $z=f(x,y)$ and $z=f(x', y')$, then $x=x'$ and $y= y'$.
\end{description}
In other words, if the universe is Dedekind infinite, then there is an injection from pairs of objects into the universe.\footnote{It is worth reiterating the remark of \cite{WalshED2015} that Pairing is a consequence of {\tt GC}. It is also worth the separate remark that in {\tt ZF} set theory, a version of Pairing implies the (set theoretic) Axiom of Choice (see \cite[Theorem 11.7]{Jech1973aa}). Because equivalence in {\tt ZF} is not the same as equivalence in second-order logic, we here treat these principles as distinct.} In effect, this strengthening says that universe-sized concepts can be broken up into universe-many disjoint subconcepts, each of universe-size.
We now sketch a deductive argument showing that NP and Pairing are inconsistent with the assertion that the universe is Dedekind infinite. For if the universe is Dedekind infinite and Pairing holds,
we can associate a Dedekind infinite concept with each concept, \emph{whether the latter is finite or infinite}. For given a concept $F$, let $U[F]$ be defined by \begin{multline*} z \; \textrm{falls under}\; U[F] \leftrightarrow \\ \textrm{there is an $x$ falling under $F$, and a $y$ such that}\; f(x,y)=z \end{multline*} In other words, $U[F]$ is the image of $F$ when projected (on the right) with the universal set $V$ (on the left): $U[F] = f(F,V)$.
Now we show that if concepts $F$ and $G$ are extensionally distinct (if something falls under one that doesn't fall under the other), then the equivalence relation $N$, described above, does not hold between $U[F]$ and $U[G]$. For if $a$ falls under $F$ but not $G$, then by fixing $a$ we obtain a one-to-one map $f(a, y)$ from the universal set into $U[F]-U[G]$, the part of $U[F]$ that does not overlap $U[G]$. Since the universe is Dedekind infinite, by Pairing, so is $U[F]-U[G]$. An identical argument can be made for any element falling under $G$ but not $F$. Thus $N$ does not obtain between $U[F]$ and $U[G]$. Of course, if $F$ and $G$ are not extensionally distinct, then $N(U[F], U[G])$, since $U[F]$ and $U[G]$ will not be extensionally distinct either. In other words, \[ (\forall F)(\forall G)(N(U[F], U[G]) \leftrightarrow (\forall x)(Fx \leftrightarrow Gx)) \]
Suppose now that NP obtains; we then have
\[ (\forall F)(\forall G)(\ddag U[F] = \ddag U[G] \leftrightarrow (\forall x)(Fx \leftrightarrow Gx)) \] One can then rehearse the argument of Russell's paradox: where $R$ is the concept defined by \begin{multline*} y \; \textrm{falls under}\; R \leftrightarrow \textrm{there is a concept $Y$ such that $y = \ddag U[Y]$} \\ \textrm{and $y$ does not fall under $Y$},\end{multline*} the usual argument shows that $\ddag U[R]$ falls under $R$ if and only if it doesn't. So Pairing and NP imply that the universe is not Dedekind infinite. Thus in the presence of Pairing, HP and NP are not jointly consistent.\footnote{Acknowledgments removed for blind review. }
\end{document}
\begin{document}
\title{Emergence
and non-typicality of the finiteness of the attractors in many topologies}
\author{Pierre Berger\footnote{CNRS-LAGA, Universit\'e Paris 13, USPC.}}
\date{
In memoriam of Anosov's 80th anniversary. }
\maketitle \abstract{ We will introduce the notion of Emergence for a dynamical system, and we will conjecture the local typicality of super complex ones. Then, as part of this program, we will provide sufficient conditions for an open set of $C^{d}$-families of $C^r$-dynamics to contain a Baire generic set formed by families displaying infinitely many sinks at every parameter, for all $\infty \ge r\ge d\ge 1$ with $d<\infty$, and for two different topologies on families. In particular, the case $d=r=1$ is new. \section{Introduction} \subsection{Attempts to describe typical dynamics}
Under the dual leadership of Anosov-Sinai in the USSR and Smale in the USA, the hyperbolic theory of differentiable dynamical systems grew up. We shall recall some elements of this theory.
Let $M$ be a manifold and let $f$ be a $C^1$-diffeomorphism of $M$.
A compact set $K\subset M$ is hyperbolic if the tangent bundle $TM| K$ splits into two vector sub-bundles $E^s$ and $E^u$, which are both $Df$-invariant and respectively contracted and expanded by the dynamics:
\[TM| K = E^s\oplus E^u\quad
Df(E^s)= E^s\quad Df(E^u)= E^u\quad \lim_{+\infty} \|Df^n|E^s\|= 0\quad \lim_{+\infty} \|Df^{-n}|E^u\|= 0\]
There are many examples of hyperbolic sets, such as the Anosov maps (when $K=M$), the Smale horseshoes, the derived-from-Anosov maps, and the Plykin attractors for diffeomorphisms; see \cite{Sm} for more details.
An important property of the hyperbolic sets is their structural stability: \begin{thm}[Anosov \cite{An67}] For every $C^1$-perturbation $f'$ of $f$, there is a unique hyperbolic set $K'$ for $f'$, which is homeomorphic to $K$ via a map $h\colon K\to K'$ $C^0$-close to the canonical inclusion $K\hookrightarrow M$ and which conjugates the dynamics:
\[h\circ f|K= f'\circ h|K\; .\] \end{thm}
The hyperbolic set $K$ is a \emph{basic set} if it is \emph{transitive} (there is a dense forward orbit in $K$) and \emph{locally maximal}: there is a neighborhood $N$ of $K$ such that $K= \cap_{n\in \mathbb Z} f^n(N)$.
Smale defined the diffeomorphisms satisfying \emph{Axiom A} as those whose non-wandering set\footnote{The set of points $z\in M$ such that any neighborhood $V$ of $z$ intersects one of its iterates: $\exists n\not= 0: f^n (V)\cap V\not= \varnothing$.} is the finite disjoint union of basic sets.
A basic set $K$ is a \emph{hyperbolic attractor} if $K= \cap_{n\ge 0} f^n(N)$. Another important result of this theory is: \begin{thm}[Sinai-Bowen-Ruelle] Given a hyperbolic attractor, there exists a unique invariant probability measure $\nu$ on $N$ so that for Lebesgue almost every $z\in N$, for every continuous function $\phi$: \[\lim_{n\to \infty} \frac1n \sum_{i=0}^{n-1} \phi({f^i(z)})= \int \phi \,d\nu\; .\] \end{thm} This result was much appreciated by physicists, since it enables one to extract from a deterministic system robust statistical properties, somehow building a conceptual bridge between Classical Mechanics and Statistical Mechanics.
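The convergence of time averages to a space average can be illustrated in a simpler one-dimensional setting (our own illustration, not an example from the paper): the logistic map $x\mapsto 4x(1-x)$ admits the invariant density $1/(\pi\sqrt{x(1-x)})$, and Birkhoff averages of Lebesgue-almost every initial point converge to the corresponding integral, which for $\phi(x)=x$ equals $1/2$.

```python
# Birkhoff average for the logistic map x -> 4x(1-x).  Its absolutely
# continuous invariant measure has density 1/(pi*sqrt(x(1-x))), so for
# phi(x) = x the space average is 1/2, and Lebesgue-a.e. orbit's time
# average converges to it.
n = 1_000_000
x = 0.2345            # an arbitrary initial point (our choice)
total = 0.0
for _ in range(n):
    total += x
    x = 4.0 * x * (1.0 - x)
time_average = total / n
print(time_average)   # close to 1/2
```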
Also Smale made the following conjecture: \begin{conj}[Smale 1965] A Baire generic\footnote{A set is Baire generic if it contains a countable intersection of open dense sets.} diffeomorphism of a compact manifold satisfies Axiom A. \end{conj} This conjecture (and others by Smale and by Thom) appeared at the beginning of an optimistic mathematical movement aiming to describe a typical dynamical system.
However, Smale and Abraham-Smale (1966) soon found counterexamples to this conjecture.
In 1974, Newhouse, a student of Smale, discovered an extremely complicated new phenomenon, occurring in a locally Baire generic set of dynamics. \begin{thm}[\cite{Newhouse}]
For every $r\ge 2$, for every manifold $M$ of dimension $\ge 2$, there exist a non-empty open set $U\subset Diff^r(M)$ and a generic set $\mathcal R\subset U$ so that for every $f\in \mathcal R$, the dynamics $f$ has infinitely many sinks, each of which has very different statistical properties. \end{thm} Clearly these dynamics do not satisfy Axiom A (Axiom A systems have only finitely many attractors). Even today, we do not know how to describe a single example of these dynamics, in the sense that we do not even know whether Lebesgue almost every point belongs to the basin of an \emph{ergodic attractor}.
Following \cite{PS89}, an \emph{ergodic attractor} $(\Lambda,\mu)$ is a compact transitive set $\Lambda$ supporting an invariant probability measure $\mu$ such that for a set $B$ of positive Lebesgue measure (called the \emph{basin}) it holds: \[\lim_{n\to \infty} \frac1n \sum_{i=0}^{n-1} \phi({f^i(z)})= \int \phi \, d\mu,\quad \forall \phi\in C^0(M,\mathbb R)\quad \forall z\in B.\]
In the meantime, the simulations of the atmospheric physicist Lorenz revealed a new chaotic attractor for an ODE \cite{Lo63}. Later H\'enon modeled the first-return map of this flow, to get a simple paradigmatic example of a chaotic surface map: the H\'enon map $(x,y)\mapsto (x^2+a+y,-b x)$ parametrized by $a,b\in \mathbb R$. He conjectured that for $b=0.3$ and a certain parameter $a$ this map has a chaotic attractor \cite{He76}. Through a series of papers, this conjecture has been shown to be true for $b$ sufficiently small. \begin{thm}[Benedicks-Carleson \cite{BC2}, Mora-Viana \cite{Mora-viana}, Benedicks-Young \cite{BY}, Wang-Young \cite{YW}, Takahasi \cite{T11}, Berger \cite{berhen}, Yoccoz (1990-Today)] For $b$ sufficiently small, for a set of positive Lebesgue measure of parameters $a$, the map $(x,y)\mapsto (x^2+a+y,-bx)$ has a unique ergodic attractor $(\Lambda,\mu)$, which is not supported by an attracting periodic orbit. \end{thm}
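The H\'enon family is elementary to simulate. In the parametrization above, taking $(a,b)=(-1.4,-0.3)$ (our own choice for illustration) makes the map affinely conjugate to the classical H\'enon map $(x,y)\mapsto(1-1.4x^2+y,\,0.3x)$ via $(x,y)\mapsto(-x/1.4,-y/1.4)$, so the orbit of the origin settles onto a bounded chaotic attractor.

```python
import numpy as np

# Paper's parametrization (x, y) -> (x^2 + a + y, -b*x); with
# (a, b) = (-1.4, -0.3) it is conjugate to the classical Henon map.
a, b = -1.4, -0.3
x, y = 0.0, 0.0            # the origin lies in the basin of the attractor
xs = np.empty(100_000)
for i in range(xs.size):
    x, y = x * x + a + y, -b * x
    xs[i] = x
print(xs.min(), xs.max())  # the orbit remains in a bounded region
```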
The conjecture of H\'enon was a posteriori disturbing, since an arbitrarily small neighborhood $N$ of the attractor is a topological disk, and so the attractor cannot be a hyperbolic attractor (otherwise there would be a line field on the topological disk $N$). Also, the simplicity of the model, its physical meaning and the concept of abundance involved make this phenomenon unavoidable.
That is why the next conjectures have been formulated using the concept of typicality sketched by Kolmogorov during his plenary talk at the 1954 ICM. Here is a version of typicality which appears in many conjectures: \begin{defi}[Arnold-Kolmogorov typicality] A property $\mathcal P$ on dynamics of a manifold $M$ is typical if there exists a Baire generic set of $C^d$-families $(f_a)_{a\in \mathbb R^k}$ of $C^r$-dynamics so that $\mathcal P$ is satisfied by Lebesgue almost every small parameter $a$. \end{defi} Hence this definition of typicality involves the integers $k,d,r$. We will discuss the topological spaces of families in the next section.
To take into account the aforementioned examples and counterexamples, several conjectures claiming the typicality of the finiteness of attractors were formulated; let us recall the following\footnote{Similar conjectures had been formulated by Tedeschini-Lalli \& Yorke, Palis \& Takens, and Palis himself, some of them for low-dimensional dynamical systems.}:
\begin{conj}[Pugh-Shub \cite{PS95}]\label{ConjPS} Typically (in the sense of Arnold-Kolmogorov) a diffeomorphism of a compact manifold has a finite number of topological attractors (and so of sinks). \end{conj}
These conjectures aimed to model typical dynamics by means of finitely many attractors. The general strategy conceived to prove them was to study the unfolding of stable and unstable manifolds (in analogy with the Thom-Mather works in singularity theory).
Recently, in \cite{BE15}, a mechanism was found that stops this unfolding for an open set of families of dynamics. This mechanism is given by the \emph{parablender}, a generalization of the Bonatti-Diaz blender to parameter families. This enabled the proof of:
\begin{thm}[\cite{BE15,BE152}] For every manifold of dimension at least $3$, for every $k\ge 0$, for every $r> d\ge 1$, there exists an open set $\hat{\mathcal U}$ of $C^d$-families $(f_a)_{a}$ of $C^r$-diffeomorphisms of $M$, so that for a generic $(f_a)_a\in \hat{\mathcal U}$, \emph{for every} parameter $a\in [-1,1]^k$, the map $f_a$ has infinitely many sinks.
\end{thm} The same statement also holds for surface local diffeomorphisms\footnote{ A local diffeomorphism is a differentiable map of a manifold whose derivative at every point is a linear bijection.}.
The \emph{main result} of this paper is devoted to the case $r\ge d\ge 1$, $r\le \infty$ and $d<\infty$. Hence the cases $d=r$ and $r=1$ are new. It will be stated in Section \ref{Statement of the main theorem}.
It will be proved using a variation of the previous proof: we put a source in the covered domain of the parablender. This yields a revised proof, which is shorter and covers the new case $d=r\ge 1$.
Also, the statement of the main result relies on general hypotheses satisfied by an open set of families. This is motivated by a work in progress with S. Crovisier and E. Pujals, showing the Kolmogorov-Arnold $C^r$-typicality of dynamics displaying infinitely many sinks.
Such results are very disturbing, since the general trend was to use bifurcation theory to show the finiteness of attractors. Here bifurcation theory makes it possible to stop the bifurcations, and it shows the non-typicality of the finiteness of attractors.
\subsection{Emergence}
One of my personal motivations is the following problem:
\begin{problem}\label{Problemstat}
Show the existence of an open set of deterministic dynamical systems which typically cannot be described by means of statistics.
\end{problem}
This problem stands in opposition to the aforementioned optimistic movement, as well as to the massive (and naive) use of statistics in many branches of science (economics, ecology, physics, ...).
The aim is not to prove that statistics never apply (they do for many systems!), but that they do not apply to many typical systems, even among finite dimensional, deterministic, differentiable dynamical systems. We shall formalize this problem. To this end, we are going to define the Emergence of a dynamical system. This concept evaluates the complexity of approximating a system by statistics.
In statistics it is standard to use the Wasserstein distance $W_1$ on the space of probability measures $\mathbb P(M)$ of a compact manifold $M$: \[W_1(\nu,\mu) = \sup_{\phi\in Lip^1(M,[-1,1])}\int_M \phi(x)\, d(\mu-\nu)(x)\;,\quad \forall \nu,\mu\in \mathbb P(M)\] where $Lip^1(M,[-1,1])$ is the space of $1$-Lipschitz functions with values in $[-1,1]$.
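As a side illustration (ours, not part of the paper): for empirical measures on the line with equally many unit-weight atoms, the optimal-transport formulation of $W_1$ reduces to the mean absolute difference of the sorted samples. On a space of diameter at most $2$ the constraint $\phi\in[-1,1]$ in the definition above is harmless, so the two formulations agree. A minimal sketch (function name `w1_empirical` is ours):

```python
import numpy as np

def w1_empirical(u, v):
    """W_1 between two empirical measures on R with the same number of
    unit-weight atoms: by the 1-D optimal-transport formula, it equals
    the mean absolute difference of the order statistics."""
    u = np.sort(np.asarray(u, dtype=float))
    v = np.sort(np.asarray(v, dtype=float))
    if len(u) != len(v):
        raise ValueError("equal numbers of atoms expected")
    return float(np.mean(np.abs(u - v)))

# Two half-weight atoms each transported over distance 1:
print(w1_empirical([0.0, 0.0], [1.0, 1.0]))  # 1.0
```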
Given a differentiable map $f$ of $M$, $x\in M$ and $n\ge 1$, we denote by $\delta_{n\, x}$ the empirical probability measure which associates to an observable $\phi\in C^0(M,\mathbb R)$ the Birkhoff mean $\frac1n \sum_{k=0}^{n-1}\phi({f^k(x)})$.
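To make the objects $\delta_{n\,x}$ concrete (an illustration of ours, under standard facts): for a uniquely ergodic system such as an irrational circle rotation, the empirical measures $\delta_{n\,x}$ converge in $W_1$ to the unique invariant measure, here Lebesgue measure on $\mathbb R/\mathbb Z$. The sketch below compares the sorted orbit atoms to the quantiles of Lebesgue measure, a discrete proxy for $W_1(\delta_{n\,x},\mathrm{Leb})$:

```python
import numpy as np

def birkhoff_atoms(f, x, n):
    """Atoms f^0(x), ..., f^{n-1}(x) of the empirical measure
    delta_{n,x} = (1/n) * sum_{k<n} delta_{f^k(x)}."""
    orbit = np.empty(n)
    for k in range(n):
        orbit[k] = x
        x = f(x)
    return orbit

# Golden-mean rotation on the circle R/Z: uniquely ergodic, so
# delta_{n,x} converges weakly (hence in W_1) to Lebesgue measure.
alpha = (np.sqrt(5.0) - 1.0) / 2.0
n = 4000
atoms = np.sort(birkhoff_atoms(lambda t: (t + alpha) % 1.0, 0.1, n))
quantiles = (np.arange(n) + 0.5) / n            # discretized Lebesgue on [0,1]
w1 = float(np.mean(np.abs(atoms - quantiles)))  # proxy for W_1(delta_n, Leb)
print(w1 < 0.01)  # True: the orbit equidistributes
```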
\begin{prop} Given a probability measure $\mu$, the following functions are continuous: \[x\in M\mapsto d_{W^1}(\delta_{n\, x},\mu)\in \mathbb R\; ,\quad \forall n\] \end{prop} \begin{proof} We notice that it suffices to show that for every $\delta>0$, there exists $\eta>0$ such that if $x$ and $x'$ are at distance at most $\eta$, then \[d_{W^1}(\delta_{n\, x'},\mu)\ge d_{W^1}(\delta_{n\, x},\mu)-\delta\; .\] We recall that $Lip^1(M,[-1,1])$ endowed with the $C^0$-uniform norm is compact, by the Arzel\`a-Ascoli Theorem. Hence, there exists $\phi \in Lip^1(M,[-1,1])$ such that: $$d_{W^1}( \delta_{n\, x},\mu)= \frac1n \sum_{k=0}^{n-1}\phi({f^k(x)})- \int_M \phi d\mu\; .$$ As $\phi$ and $(f^k)_{k\le n}$ are Lipschitz, there exists $\eta>0$ so that for $x'$ $\eta$-close to $x$, it holds:
\[ \frac1n \sum_{k=0}^{n-1}\phi({f^k(x')})\ge
\frac1n \sum_{k=0}^{n-1}\phi({f^k(x)})-\delta\Rightarrow d_{W^1}(\delta_{n\, x'},\mu)\ge d_{W^1}(\delta_{n\, x},\mu)-\delta\; .\]
\end{proof}
We recall that the space of probability measures on a compact manifold, endowed with the metric $d_{W^1}$, is compact.
Hence, given a differentiable map $f$ of $M$, we can define the \emph{Emergence $\mathcal E(f,\epsilon)$ of $f$ at scale $\epsilon>0$} as the minimal number $N$ of probability measures $\{\mu_i\}_{1\le i\le N}$ so that \[\limsup_{n\to \infty} \int_{x\in M} \min_{1\le i\le N} d_{W^1}(\frac1n \sum_{k=0}^{n-1}\delta_{f^k(x)},\mu_i) \, d\text{Leb}(x)\le \epsilon\; .\]
\begin{defi}[Emergence] The Emergence is \emph{\bf F} if $\mathcal E(f,\epsilon) = O(1)$ when $\epsilon\to 0$.
The Emergence is at most \emph{\bf P} if there exists $k> 1$ so that $\mathcal E(f,\epsilon) = O(\epsilon ^{-k})$.
The Emergence is \emph{\bf Sup-P} if $\limsup_{\epsilon\to 0} \frac{\log \mathcal E(f,\epsilon)}{-\log\epsilon}=+\infty$. \end{defi} We notice that the Emergence is a lower bound on the complexity (in space\footnote{The number of data to store.} and in time) of approximating numerically a dynamical system by statistics with precision $\epsilon$. Following the celebrated Cobham thesis, an algorithm of super-polynomial complexity is, in practice, not feasible \cite{Co65}.
Note that the Emergence is invariant by differentiable conjugacy.
\noindent{\bf Examples with {\bf F}-Emergence.} If a dynamical system $f$ admits finitely many ergodic attractors $(\Lambda_i,\mu_i)_{1\le i\le N}$ whose basins $(B_i)_i$ cover Lebesgue almost all of the manifold, then the Emergence is bounded by $N$ (and so it is of type {F}). \begin{proof} By the dominated convergence theorem, it suffices to show that for every $i\le N$ and every $x\in B_i$, $d_{W^1}( \delta_{n\, x},\mu_i)\to 0$. By compactness of $Lip^1(M,[-1,1])$, for every $n$, there exists $\phi_n \in Lip^1(M,[-1,1])$ so that: \[\Delta_{ n}:= d_{W^1}(\delta_{n\, x},\mu_i)= \int_M \phi_n d \delta_{n\, x}- \int_{M} \phi_n d\mu_i = \frac1{n} \sum_{k=0}^{n-1}\phi_n(f^k(x))- \int_{M} \phi_n d\mu_i
\; .\] Let $\phi\in Lip^1(M,[-1,1])$ be a cluster value of $(\phi_n)_n$ and let $(n_j)_{j\ge 0}$ be an increasing sequence so that $\phi_{n_j}\to \phi$. Then
\[\Delta_{ n_j}\le 2\|\phi_{n_j}-\phi\|_{C^0}+
\frac1{n_j} \sum_{k=0}^{n_j-1}\phi(f^k(x))- \int_M \phi d\mu_i\to 0\; .\] Thus every cluster value of $(\Delta_{ n})_n$ is zero, and so this sequence converges to zero. \end{proof} \begin{rema} We recall that a diffeomorphism satisfying Axiom A, an irrational rotation, or a H\'enon map for Benedicks-Carleson parameters has finitely many ergodic attractors whose basins cover Lebesgue almost all of the phase space $M$. Hence their Emergences are finite. \end{rema}
\noindent{\bf Example with { P}-Emergence.} Let $f$ be the identity. Observe that $\mathcal E(f,\epsilon) = O(\epsilon^{-n})$ with $n$ the dimension of $M$. Hence its Emergence is polynomial. Also the Emergence of an irrational rotation on a cylinder, which is the product of systems with Emergences 1 and $O(\epsilon^{-1})$, is $O(\epsilon^{-1})$.
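The $O(\epsilon^{-n})$ bound for the identity is a covering estimate: here $\delta_{n\,x}=\delta_x$ for all $n$ and $W_1(\delta_x,\delta_c)=|x-c|$, so the definition of $\mathcal E(f,\epsilon)$ reduces to covering the manifold by $N$ Dirac centers. A numerical sketch of ours in dimension $1$: with $N$ evenly spaced centers on $[0,1]$, the average distance to the nearest center is $1/(4N)$, so precision $\epsilon$ forces $N$ of order $\epsilon^{-1}$.

```python
import numpy as np

def mean_min_dist(N, samples=10_000):
    """For the identity on [0,1]: average over x of the W_1 distance
    |x - c_i| to the nearest of N evenly spaced Dirac centers c_i.
    Each cell of width 1/N contributes a mean distance of 1/(4N)."""
    centers = (np.arange(N) + 0.5) / N
    x = (np.arange(samples) + 0.5) / samples       # deterministic grid of x's
    d = np.min(np.abs(x[:, None] - centers[None, :]), axis=1)
    return float(d.mean())

# Precision eps thus needs N of order 1/(4*eps): Emergence ~ eps^{-1}.
for N in (5, 50, 200):
    print(N, mean_min_dist(N), 1.0 / (4 * N))  # the two values agree
```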
It seems also possible to prove that the Emergence of the so-called Bowen eyes dynamics is $O(\epsilon^{-1})$.
Hence it seems that all the well understood dynamical systems have an Emergence at most {P}. However, the main conjecture of this work states that those of Sup-P Emergence should not be neglected: \begin{conjecture}\label{mainconj} There exists an open set $U\subset Diff(M)$ so that a typical $f\in U$ has Emergence { Sup-P}. \end{conjecture} Let us explain why a proof of this conjecture would solve Problem \ref{Problemstat} from the computational viewpoint. Given a typical $f\in U$, to describe by means of statistics, with precision $\epsilon$, all of its orbits except a set of Lebesgue proportion $\epsilon$, we would need at least a super-polynomial number of invariant probability measures w.r.t. $\frac1\epsilon$. To find them by means of statistics, we need \emph{at least} one datum for each of them, and so a super-polynomial number of operations. By Cobham's thesis this is not feasible by a computer.
Also we notice that when the Emergence is Sup-P, the Hausdorff dimension of the set of probabilities which would model our system is infinite.
Hence to find these invariant probabilities, we could not use (finite dimensional) parametric statistics, but only non-parametric ones, whose computational cost is higher (and much higher than the cost of one operation per measure used in the above lower bound).
Furthermore let us notice that even if we quotient the phase space by a symmetry group of finite dimension (as for the case of a rotation on the disk or the identity on a manifold), the Emergence of the system will remain Sup-P.
Note that it is not even easy to find a locally typical non-conservative system with infinite Emergence (that is, not of type {F}); on the other hand, KAM theory provides examples (of at least {P}-Emergence) in the conservative setting.
\noindent{\bf Candidates for Sup-P-Emergence.} It is perhaps possible to construct a unimodal map with Sup-P Emergence from \cite{HK90}, or a locally $C^r$-dense set of surface diffeomorphisms with Sup-P Emergence from \cite{Ki15}. It would be very challenging to derive from these systems one which is moreover locally typical.
\begin{Claim} Dynamics with infinitely many sinks do not have finite Emergence.\end{Claim} \begin{proof} Indeed, for every $N\ge 1$, there exist $N+1$ different attracting cycles which define measures $\{\nu_i\}_{1\le i\le N+1}$. For $\epsilon>0$ small enough, the measure of the basin of each $\nu_i$ is at least $\sqrt \epsilon$. For a possibly smaller $\epsilon>0$, given any $N$-tuple of probability measures $\{\mu_i\}_{1\le i\le N}$, there exists $1\le j\le N+1$ so that $W_1(\nu_j, \{\mu_i\}_{1\le i\le N})\ge \sqrt \epsilon$. Consequently the Emergence $\mathcal E(\epsilon)$ at scale $\epsilon$ is greater than $N$. Hence the limit of $\mathcal E(\epsilon)$ as $\epsilon\to 0 $ is infinite. \end{proof}
It is perhaps possible to make a variation of Newhouse's construction to produce a generic set of dynamics with {Sup-P} Emergence. This issue will be discussed in a forthcoming work. This is why the main Theorem \ref{main} is part of this program.
Let us also mention the concept of universal dynamics of Bonatti-Diaz \cite{BD02} and Turaev \cite{Tu15}, which might produce locally Baire generic sets of diffeomorphisms with high Emergence.
It would be interesting to study Conjecture \ref{mainconj} w.r.t. different notions of typicality \cite{HK10} and smoothness. Also it might be interesting to investigate the concept of Emergence for other metrics than $W_1$ on the space of invariant probability measures.
Also it would be interesting to provide numerical evidences for such a program (from big data?). The following problem remains open. \begin{problem}
Show numerical simulations depicting a (typical) dynamical system which displays infinitely many sinks. \end{problem} Let us point out that, by definition, a Sup-P Emergent dynamical system is very complex to describe, and so the non-existence of such pictures is consistent with their conjectured local typicality.
}
\thanks{I am grateful to V. Baladi, J. Bochi, C. Bonatti, F. Ledrappier, M. Lyubich, M. Shub, C. Tresser, D. Turaev and especially to S. Crovisier and E. Pujals for many inspiring and motivating discussions. I thank the anonymous referee for all the valuable suggestions and corrections.
\begin{center} \emph{With all my thought to my master Jean-Christophe Yoccoz.} \end{center} }
\section{Statement of the main Theorem}
\subsection{Topological spaces of families of maps} \paragraph{Space of families} For $r\ge d\ge 0$ and $k\ge 0$, there are at least two ways to define a space of $C^d$-families, parametrized by $\mathbb I^k:=[-1,1]^k$, of $C^r$-maps from a manifold $M$ into a manifold $N$.
The first is attributed to Arnold by Y. Iliachenko (in the case $d=r$) \cite{IL99}. It is the space \[C^{d,r}_A(\mathbb I^k , M,N):= \{(f_a)_a: \partial_a^i\partial_z^j f_a (z)\text{ exists continuously } \forall i\le d\text{ , }i+j\le r \text{ and } (a,z)\in \mathbb I^k\times M\}\] It has the advantage to be invariant by composition: for every $(f_a)_a, (g_a)_a\in C^{d,r}_A(\mathbb I^k , M,M)$, the composed family $(f_a\circ g_a )_a$ is in $C^{d,r}_A(\mathbb I^k , M,M)$.
Another way was presented in \cite{PS95} to state Conjecture \ref{ConjPS}. It is the space: \[C^{d,r}_{PS}(\mathbb I^k , M,N):= \{(f_a)_a: \partial_a^i\partial_z^j f_a (z)\text{ exists continuously } \forall j\le r\text{ , }i\le d \text{ and } (a,z)\in \mathbb I^k\times M\}\] It has the inconvenience of \emph{not being} invariant under composition when $d>0$ and $r<\infty$. But it has the advantage of having a geometric meaning: a family $(f_a)_a\in C^{d,r}_{PS}(\mathbb I^k , M,N)$ is actually a $C^d$-map from $\mathbb I^k $ into the Fr\'echet manifold $C^r(M,N)$.
We remark: \[C^{d,d+r}_{A}(\mathbb I^k, M,N) \subset C^{d,r}_{PS}(\mathbb I^k, M,N) \subset C^{d,r}_{A}(\mathbb I^k, M,N) \]
Hence in the important case $r=\infty$ (or $d=0$) the spaces $C^{d,r}_{PS}(\mathbb I^k, M,N)$ and $C^{d,r}_{A}(\mathbb I^k, M,N)$ are equal, and so they are denoted by $C^{d,\infty }(\mathbb I^k, M,N)$ (resp. $C^{0,r }(\mathbb I^k, M,N)$).
\paragraph{Topologies on families} Any Riemannian metrics on $M$ and $N$, together with the Euclidean norm on $\mathbb R^k$, define a Riemannian metric on $N$, $TM^*\otimes TN$, ... ,
$(\mathbb R^{*k})^{\otimes i} \otimes (TM^*)^{\otimes j} \otimes TN$. The topology of $C_A^{d,r}(\mathbb I^k ,M,N)$ is defined thanks to the following base of neighborhoods:
\[V(f,K,\epsilon, r')= \{f'\in C_A^{d,r}(\mathbb I^k ,M,N) : d(\partial_a^i\partial_z^j f_a (z) ,\partial_a^i\partial_z^j f'_a (z) ) <\epsilon, \; \forall (a,z)\in K,\; i+j\le r', i \le d\}\] for any finite $r'\le r$, any $\epsilon>0$ and any compact subset $K$ of $\mathbb I^k\times M$. The topology on $C_{PS}^{d,r}(\mathbb I^k ,M,N)$ is defined similarly ($i+j\le r'$ is replaced by $j\le r'$). Both topologies coincide for $r=\infty$ or $d=0$.
We remark that for $d=r$ the space $C^{d,\infty }_A(\mathbb I^k, M,N)$ is canonically homeomorphic to the space $C^{d}(\mathbb I^k\times M,N)$ endowed with the $C^d$-compact-open topology. Also for $d=r=\infty$ the space $C^{\infty,\infty }(\mathbb I^k, M,N)$ is canonically homeomorphic to the space $C^{\infty}(\mathbb I^k\times M,N)$ endowed with the compact-open, weak Whitney topology. A family in $C^{\infty,\infty }(\mathbb I^k, M,N)$ is called \emph{smooth}.
\subsection{Hyperbolic sets involved}
Most of the proofs involve surface local diffeomorphisms. We recall that a map $f\in C^r(M,M)$ is a local diffeomorphism if $r\ge 1$ and there is an open covering $(U_i)_i$ of $M$ so that $f|U_i$ is a diffeomorphism onto its image for every $i$.
Let us recall some elements of the hyperbolic theory for local diffeomorphisms.
An invariant compact set $K$ for $f$ is \emph{hyperbolic} if there
is a vector bundle $E^s \subset TM|K$ which is invariant by $Df|K$, contracted by $Df$ and so that the quotient $TM|K/E^s$ is expanded by the action induced by $Df$.
Then for every $z\in K$, the following set, called the \emph{stable manifold} of $z$, is a $\dim\, E^s$-manifold, injectively $C^r$-immersed into $M$: \[W^s(z; f):= \{z'\in M: \; \lim_{+\infty} d(f^n(z), f^n(z'))=0\}\]
The notion of unstable manifold needs to consider the \emph{space of preorbits} $\overleftarrow K:=\{(z_i)_{i\le 0}\in K^{\mathbb Z^-}: z_{i+1}= f(z_i)\; \forall i<0\}$ of $K$. Given a preorbit $\overleftarrow z = (z_i)_{i\le 0}\in \overleftarrow K$, we can define the \emph{unstable manifold} of $\overleftarrow z$, which is a $\text{codim}\, E^s$-manifold $C^r$-immersed into $M$: \[W^u(\overleftarrow z; f):= \{z'\in M: \; \exists (z'_i)_{i\le 0} \text{ with } z'_0=z',\; f(z'_i)=z'_{i+1}\;\forall i<0 \text{ and } \lim_{n\to +\infty} d(z_{-n}, z'_{-n})=0\}\] In general this manifold is \emph{not} immersed \emph{injectively}.
When $z\in K$ is periodic, the unstable manifold $W^u(z;f)$ denotes the one associated to the unique preorbit of $z$ which is periodic.
A local stable manifold $W^s_{loc} (z; f)$ of $z$ is an embedded, connected submanifold equal to a neighborhood of $z$ in $W^s(z; f)$. Local unstable manifolds are defined similarly. We can choose them so that they depend continuously on $z$ and $\overleftarrow z$ respectively.
We endow $\overleftarrow K$ with the topology induced by the product topology of $K^{\mathbb Z^-}$. Hence $\overleftarrow K$ is compact. Note that when $f|K$ is bijective, $\overleftarrow K$ is homeomorphic to $K$.
\paragraph{Blender} A hyperbolic set $K$ of a surface local diffeomorphism $f$ is a \emph{blender} if $\dim E^u\not=2$ and a continuous union of local unstable manifolds $\cup_{\overleftarrow z\in \overleftarrow K} W^u_{loc} (\overleftarrow z; f)$ robustly contains a nonempty open set $O$ of $M$: \[\cup_{\overleftarrow z\in \overleftarrow K} W^u_{loc} (\overleftarrow z; f')\supset O \quad ,\quad \forall f' \; \text{$C^1$-close to } f\; .\] The set $O$ is called a \emph{covered domain} of the blender $K$.
\begin{figure}
\caption{A blender of a surface.}
\end{figure}
Blenders were discovered in \cite{BD96}, and then used in \cite{BD99,DNP} to produce a locally generic set of diffeomorphisms displaying infinitely many sinks. In \cite{BE15}, the notion of blender was adapted to surface local diffeomorphisms to produce a locally generic set of surface local diffeomorphisms displaying infinitely many sinks, following an argument similar to \cite{DNP}.
\paragraph{Area contracting saddle point} A surface local diffeomorphism has an \emph{area contracting saddle fixed point} $P$ if the product of the stable and unstable eigenvalues of $P$ has modulus less than $1$.
\paragraph{Projectively hyperbolic source} A fixed point $S$ of a surface diffeomorphism $f$ is a \emph{projectively hyperbolic source} if $D_Sf$ has two eigenvalues
$\sigma_{uu}, \sigma_{u}$ with different moduli $1< |\sigma_{u}|<|\sigma_{uu}|$. The eigenspace associated to $\sigma_u$ is called the \emph{weak unstable direction}, whereas the eigenspace $E^{uu}(S)$ associated to $\sigma_{uu}$ is called the \emph{strong unstable direction}.
A \emph{basin} of the source $S$ is an open neighborhood $B$ of $S$ on which an inverse branch $g$ of $f$ fixing $S$ is well defined and whose points $z\in B$ satisfy $g^n(z)\to S$. Then $E^{uu}(S)$ extends continuously to a line field on $B$, denoted also by $E^{uu}(S)$, so that for every $z\in B$,
\[ \lim_{n\to +\infty}\frac1n \log (\|D_zg^n|E^{uu}(S)\|\cdot |\sigma_{uu}|^n) = 0\; .\] The line field is uniquely defined once $g$ is fixed, and there is a unique inverse branch $g\colon B\to B$ of $f$ which fixes $S$. Hence $E^{uu}(S)$ is uniquely defined once $S$ and $B$ are fixed.
If $S$ is a source of period $p$, then the above definitions and notations are canonically generalized by considering $f^p$ instead of $f$.
Moreover it is well known that the line field $E^{uu}(S)$ is the tangent space of a unique $C^0$-foliation $\mathcal F^{uu}(S)$ on $B$, whose leaves are as regular as the dynamics \cite{Yoccozintro}. Note that $\mathcal F^{uu}(S)$ is uniquely defined once $S$ and $B$ are fixed.
A $C^1$-embedded curve $\Gamma$ in $B$ has a \emph{robust tangency} with $\mathcal F^{uu}$ if any $C^1$-perturbation of $\Gamma$ has a tangency with one leaf of $\mathcal F^{uu}$. \paragraph{Hyperbolic sets for families of dynamics} Let us fix $k\ge 0$, $1\le d\le r\le \infty$, $X\in \{A,PS\}$ and a family of local diffeomorphisms $(f_a)_a\in C^{d,r}_X(\mathbb I^k,M,M)$, with $\mathbb I=[-1,1]$.
It is well known that if $f_0$ has a hyperbolic fixed point $P_0$, then it persists for every $a$ small as a hyperbolic fixed point $P_a$, and the map $a\mapsto P_a$ is of class $C^d$.
More generally, if $K$ is a hyperbolic set for $f_0$, it persists for every $a$ small, but if the map $f_0|K$ is not bijective, we need to consider the space of preorbits $\overleftarrow K=\{(z_i)_{i\le 0}: f_0(z_i)=z_{i+1} \forall i<0\}$ of $K$. Let $\overleftarrow f_0$ be the shift map on $\overleftarrow K$.
\begin{thm}[Prop 1.6 \cite{BE15, BE152}] For every $a$ in a neighborhood $V$ of $0$, there exists a map $h_a \in C^0(\overleftarrow K; M)$ so that: \begin{itemize} \item $h_0$ is the zero-coordinate projection $ (z_i)_i\mapsto z_0$. \item $f_a \circ h_a= h_a\circ \overleftarrow {f_0}$ for every $a\in V$. \item For every $\overleftarrow z\in \overleftarrow K$, the map $a\in V\mapsto h_a(\overleftarrow z)$ is of class $C^d$. \end{itemize} \end{thm}
The point $h_a(\overleftarrow z)$ is called the \emph{hyperbolic continuation} of $\overleftarrow z$ for $f_a$. We denote $z_a\in M$ the zero-coordinate of $h_a(\overleftarrow z)$.
The family of sets $(K_a)_a$, with $K_a:= \{z_a : \overleftarrow z \in \overleftarrow K\}$, is called the hyperbolic continuation of $K$.
The local stable and unstable manifolds $W^s_{loc} (z; f_a) $ and $W^u_{loc} (\overleftarrow z; f_a) $ are canonically chosen so that they depend continuously on $a$, $z$ and $\overleftarrow z$.
They are called the \emph{hyperbolic continuations} of $W^s_{loc} (z; f_0) $ and $W^u_{loc} (\overleftarrow z; f_0) $ for $f_a$. Let us recall: \begin{prop}[Prop 1.6 \cite{BE15,BE152}]\label{Wupara} For every $z\in K$, the family $(W^s_{loc} ( z; f_a))_{a\in V}$ is of class $C^{d,r}_A$. For every $\overleftarrow z\in \overleftarrow K$, the family $(W^u_{loc} (\overleftarrow z; f_a))_{a\in V}$ is of class $C^{d,r}_A$. Both vary continuously with $z\in K$ and $\overleftarrow z\in \overleftarrow K$.
\end{prop}
In order to study the bifurcations between stable, unstable and strong unstable manifolds for a family $(f_a)_a$ of dynamics $f_a$, it is natural to study the action of $(f_a)_a$ on \emph{$C^d$-jets}. \label{notationJdM} Given a $C^d$-family of points $(z_a)_{a\in \mathbb R^k }$, its \emph{$C^d$-jet at $a_0\in \mathbb R^k$} is $J^d_{a_0} (z_a)_a= \sum_{j=0}^d \frac{\partial^j_a z_a|_{a=a_0}}{j!} (a-a_0)^{\otimes j}$. Let $J^d_{a_0}(\mathbb R^k,M)$ be the space of $C^d$-jets of $C^d$-families of points in $M$ at $a=a_0$.
We notice that any $C^{d,r}_A$-family $(f_a)_a$ of $C^r$-maps $f_a$ of $M$ acts canonically on $J^d_{a_0}(\mathbb R^k,M)$ as the map: \[J^d_{a_0} (f_a)_a\colon J^d_{a_0}(z_a)_a\in J^d_{a_0}(\mathbb R^k,M)\mapsto J^d_{a_0} (f_a(z_a))_a\in J^d_{a_0}(\mathbb R^k, M)\]
\begin{rema}Suppose that $M$ is a surface. If $f_{a_0}$ has a hyperbolic fixed point $P$ with stable and unstable eigenvalues $\lambda_s,\lambda_u$ then $J^d_{a_0} (P_a)_a$ is the unique hyperbolic fixed point of $J^d_{a_0} (f_a)_a$. Moreover the stable and unstable directions of $D_{J^d_{a_0} (P_a)_a} J^d_{a_0} (f_a)_a$ have the same dimension. The restriction of $D_{J^d_{a_0} (P_a)_a} J^d_{a_0} (f_a)_a$ to each of these spaces is the composition of $\lambda_s\, id$ (resp. $\lambda_u\, id$) with a unipotent map. We observe that $W^s(J^d_{a_0}(P_a)_a)$ consists of the $C^d$-jets of families $(Q_a)_a$ so that $Q_a$ is in $W^s(P_a; f_a)$ for every $a$. \end{rema}
More generally, given a hyperbolic set $K$ for $f_{a_0}$, the set $J^d_{a_0} (K_a)_a:= \{J^d_{a_0}(h_a(\overleftarrow z))_a: \overleftarrow z\in \overleftarrow K\}$ is a compact hyperbolic set for $J^d_{a_0}(f_a)_a$.
The first example of a parablender was given in \cite{BE15}; a new example was given in \cite{BCP16}, where the definition of a parablender was formulated as follows:
\begin{defi}[$C^d$-Parablender] A family $(K_a)_a$ of blenders for $(f_a)_a$ is a \emph{$C^d$-parablender} at $a={a_0}$ if the following condition is satisfied. There exists a non-empty open set $O$ of $C^d(\mathbb R^k, M)$ so that for every $(f'_a)_a$ $C^{d,d}_A$-close to $(f_a)_a$, for every $\gamma\in O$, there exist $\overleftarrow z\in \overleftarrow K$ and a $C^d$-family $(Q_a)_a$ of points in a continuous family $(W^u_{loc}(\overleftarrow z; f'_a))_a$ of local unstable manifolds satisfying:
\[d(\gamma(a), Q_a)= o(\|a-a_0\|^d)\; .\]
The open set $O$ is called a \emph{covered domain} for the $C^d$-parablender $(K_a)_a$. \end{defi} \begin{rema} We notice that if $J^d(K_a)_a$ is a blender for $J^d_{a_0}(f_a)_a$ then $(K_a)_a$ is a $C^d$-parablender at $a_0$ for $(f_a)_a$. We do not know if it is a necessary condition. \end{rema} \begin{exam}[$C^d$-Parablender]\label{expparablender} We propose here a small variation of Example 2.2 \cite{BE15}. Let $\Delta:= \{-1,1\}^E$ with $E:= \{i=(i_1,\dots, i_k)\in \{0,\dots, d\}^k: i_1+\cdots +i_k\le d\}$. Each $\delta \in \Delta$ is seen as a function $\delta\colon i\in E\mapsto \delta(i)\in \{-1,1\}$.
Consider ${Card\, \Delta}$ disjoint segments $I_\delta\subset [-1/2,1/2]\setminus \{0\}$, $\delta\in\Delta$, and set $D:= \sqcup _{ \delta \in \Delta} I_\delta $.
Let $Q\colon \sqcup _{\delta\in \Delta} I_\delta \to [-1,1]$ be a locally affine, orientation preserving map which sends each $I_\delta$ onto $[-1,1]$. Let $(\mathring f_a)_a$ be the $k$-parameter family defined by: \[\mathring f_a \colon(x,y)\in D\times [-3,3]\longmapsto (Q(x), \tfrac 23 y + \sum_{i\in E } \delta(i)\cdot a_{1}^{i_1}\cdots a_{k}^{i_k}) \quad \text{if } x\in I_\delta \; . \] In the above definition we use the convention $0^0=1$. Then $(\mathring f_a)_a$ is a $C^\infty$-family of $C^\infty$-local diffeomorphisms.
We notice that the maximal invariant set of $\mathring f_0$ is a blender $K$.
Let us define the following subsets of $J^d_{0} \mathbb R^2$, with $a^i= \prod_{j=1}^k a_j^{i_j}$ for every $a\in \mathbb R^k$ and $i\in E$: \[\hat O:= \{\sum_{i\in E} (x_i,y_i) \cdot a^i
: |x_i|< 1,|y_i|< 2\}\quad,\quad \hat O':= \{\sum_{i\in E} (x_i,y_i) \cdot a^i
: |x_i|\le 1/2,|y_i|\le 1\}\] \[ \text{and}\quad \hat O_\delta:= \{\sum_{i\in E} (x_i,y_i) \cdot a^i
: |x_i|< 1, 0\le \delta(i)\cdot y_i< 2\}\, .\] We observe that $\hat O=\cup_{\delta\in \Delta} \hat O_\delta \Supset \hat O'$.
For every $\delta \in \Delta$, let $g_{a\; \delta}$ be the inverse of $\mathring f_a| I_\delta\times [-3,3]$. Both maps are product dynamics of intervals.
Let us show that $J^d_0 (g_{a\; \delta})_a$ sends $\hat O_\delta$ into $\hat O'$. The map $J^d_0 (\mathring f_a)_a$ is the composition of a hyperbolic map with the translation by the $C^d$-jet $\sum_{i\in E} \delta(i) a_1^{i_1}\cdots a_k^{i_k}$. Thus $J^d_0 (g_{a\; \delta})_a$ is a composition of: \begin{itemize} \item a translation which sends $\hat O_\delta$ to \[\{\sum_{i\in E} (x_i,y_i) \cdot a^i
: |x_i|< 2, -1\le \delta(i)\cdot y_i< 1\}\, .\] \item a hyperbolic transformation which sends the latter into $\hat O'$. \end{itemize}
For every $\u \delta=(\delta_{i})_{i\le -1}\in \Delta^{\mathbb Z^-}$ and every $(f'_a)_a$ $C^{d,d}_A$-close to $(\mathring f_a)_a$, we define $W^u_{loc}(\u \delta; f'_a):= \cap_{n\ge 1} f_a'^n(I_{\delta_{-n}} \times [-3,3])$. We notice that $(W^u_{loc}(\u \delta; f'_a))_{\u \delta \in \Delta^{\mathbb Z^-}}$ is a continuous family of local unstable manifolds of the hyperbolic continuation $K'_a$ of $K$.
Let $O$ be the set of families $\gamma$ of points so that $J^d_0 \gamma\in \hat O$.
We notice that there exists $\alpha>0$, so that given $(f_a)_a$ $C^{d,d}_A$-close to $(\mathring f_a)_a$, for all $\|a_0\|\le \sqrt[k]{ 2}\alpha$ and $\gamma\in O$, the following property holds true:
There exist a sequence of preimages $(\gamma^{i})_{i\le 0}$ of $\gamma$ and symbols $\u \delta =(\delta_i)_{i\le -1}$ so that $\gamma^0=\gamma$, $J^d_{a_0} \gamma^{i+1}$ is in $\hat O_{\delta_i}$, $\gamma^i(a)= (f'_a| I_{\delta_i} \times [-3,3])^{-1}(\gamma^{i+1}(a))$, and $J^d_{a_0} \gamma^i$ is in $\hat O'$ for every $i\le -1$.
By proceeding as in Theorem B of \cite{BCP16}, this implies the existence of a $C^d$-family $(Q_a)_a$ of points in $(W^u_{loc}(\u \delta; f'_a))_a$ so that $J^d_{a_0} (Q_a)_a = J^d_{a_0} (\gamma(a))_a$.
As the family $(\mathring f_{a+a_0})_a$ is close to $(\mathring f_{a})_a$ for every $a_0$ small, there exists $\alpha>0$ small such that $(K_a)_a$ is a $C^d$-parablender at every $\|a_0\|\le \sqrt[k]{ 2}\alpha$.
Consequently the family $(f_a)_a:=(\mathring f_{\alpha \cdot a})_a$ displays the $C^d$-parablender $(K_a)_a=(\mathring K_{\alpha \cdot a})_a$ at every $a_0\in \mathbb I^k$, with covered domain containing every constant family of points in $[-1/2,1/2]\times [-1,1]$.
\end{exam}
\subsection{Statement of the main Theorem} \label{Statement of the main theorem}
Let $\mathcal U$ be the open set of $C^1$-local surface diffeomorphisms of a surface $M$ which have a blender $K$, an area contracting saddle fixed point $P$ and a projectively hyperbolic source $S$ with a strong unstable foliation $(\mathcal F^{uu},B)$ so that : \begin{enumerate}[$(i)$] \item[$(H_0)$] $S$ is in a domain robustly covered by a continuous family $(W^u_{loc}(\overleftarrow z; f))_{\overleftarrow z\in \overleftarrow K}$ of local unstable manifolds of the blender $K$.
\item[$(H_1)$] $K$ is included in $B$ and the stable direction of $K$ is not tangent to $\mathcal F^{uu}$. \item[$(H_2)$] A segment of $W^s(P; f)$ has a robust tangency with $\mathcal F^{uu} $ and $W^u(P;f)$ has a transverse intersection with $W^s(K;f)$. \end{enumerate} It will be clear after reading the sketch of proof of the main theorem that a $C^r$-generic diffeomorphism in $\mathcal U$ displays infinitely many sinks for every $\infty\ge r\ge 1$. See also \cite{New80,Asa08} for a parameter free argument. The parametric version of this result involves more hypotheses.
Let $(f_a)_a$ be a $C^{d,d}$-family of maps $f_a$ in $\mathcal U$ and denote by $K_a$, $P_a$ and $S_a$ the hyperbolic continuations of respectively the blender, the saddle point and the source with which $f_a$ satisfies $(H_0-H_1-H_2)$ for every $a\in \mathbb I^k=[-1,1]^k$. Assume that $(K_a)_a$ is a $C^d$-parablender at every $a_0\in \mathbb I^k$, and its covered domain contains $J^d_{a_0} (S_a)_a$. More specifically there exists a continuous family of unstable manifolds $((W^u_{loc}(\overleftarrow k; f_a))_a)_{\overleftarrow k\in \overleftarrow K}$ so that for every $C^{d,d}$-perturbation $(f'_a)_a$ of $(f_a)_a$, with $(S'_a)_a$ the hyperbolic continuation of $(S_a)_a$ it holds:
\begin{itemize} \item[$(H_3)$] For every $a_0\in \mathbb I^k$, there exists a $C^d$-family $(Q_a)_a$ of points $Q_a$ in $W^u_{loc} (\overleftarrow k; f_a')$ so that
$d(Q_a, S'_a)= o(\| a-a_0\|^d)$. In particular $W^u_{loc}(\overleftarrow k;f'_{a_0})$ contains $S'_{a_0}$.
\item[$(H_4)$] $W^u_{loc}(\overleftarrow k ;f'_{a_0})$ is not tangent to the weak unstable direction of $S'_{a_0}$. \end{itemize}
Note that $(H_3)$ \& $(H_4)$ imply $(H_0)$.
Let $\mathcal U^{d,d}$ be the open set of $C^{d,d}_A$-families of maps satisfying $(H_0-H_1-\cdots-H_4)$. We observe that for every $X\in \{A,PS\}$, and $1\le d\le r\le \infty$, the set $ {\mathcal U}_{X}^{d,r}:= {\mathcal U}_A^{d,d}\cap C^{d,r}_X(\mathbb I^k,M,M)$ is open for the $ C^{d,r}_X$-topology. We recall that $\mathbb I^k:=[-1,1]^k$.
\begin{theo}[Main theorem]\label{main} For every $1\le k<\infty$, any topology $X\in \{A,PS\}$, $1\le d\le r\le \infty$ with $d< \infty$, there exists a $C^{d,r}_X$-Baire generic set $\mathcal R$ in $ {\mathcal U}_{X}^{d,r}$ so that for every $(f_a)_a\in \mathcal R$ and every $a\in \mathbb I^k$, the map $f_a$ displays infinitely many sinks. \end{theo}
\begin{exam}\label{examfonda} Here is a variation of \textsection 4 of \cite{BE15} in which we add a source.
Let $(f_a)_{a\in \mathbb I^k} \colon D\times [-3,3]\to [-1,1]\times [-4,4]$ be the map of Example \ref{expparablender}, exhibiting a $C^d$-parablender $(K_a)_a$ at every $a\in \mathbb I^k$ and with the constant family $(0)_{a\in \mathbb I ^k}$ in its covered domain.
We recall that $D$ is formed by ${Card\, \Delta}$ intervals of $(-1,1)\setminus \{0\}$. Let $I_S, I_P,I_{P'}\subset (-1,1) \setminus D$ be disjoint segments, so that $I_S$ is centered at $0$. We extend $Q$ to $D\sqcup I_S\sqcup I_P\sqcup I_{P'}$ so that $Q$ remains locally affine and orientation preserving, and sends each of $I_S$, $ I_P$, $I_{P'}$ onto $[-1,1]$ as well.
Let $x_S\in I_S$ and $x_P\in I_P$ be fixed points of $Q$. Let $x_{P'}$ be the preimage by $Q|I_{P'}$ of $x_P$. Let: \[f_a(x,y) \colon(x,y)\in (D\sqcup I_S\sqcup I_P\sqcup I_{P'}) \times [-3,3]\mapsto \left\{\begin{array}{cc} f_a(x,y) & \text{if } x\in D\\
(Q(x), \frac{y}{\sqrt{|I_S|}}) & \text{if } x\in I_S, \\
(Q(x), |I_P|^2 y) & \text{if } x\in I_P, \\ (y -(x-x_{P'})^2+x_P,x_{P'}-x) & \text{if } x\in I_{P'}, \\ \end{array}\right. \] With $\hat{\mathbb R}$ the one point compactification of $\mathbb R$, since $Q$ is orientation preserving, it is easy to extend $f$ to a local diffeomorphism of the torus $\hat{\mathbb R}^2$ of degree $Card\, \Delta+3$.
We notice that $S=(0,0)$ is a projectively hyperbolic source with vertical weak unstable direction, hence transverse to the local unstable manifolds of $(K_a)_a$ (which are of the form $([-1,1]\times \{y_a\})_a$).
Note also that the hyperbolic continuation of $S$ is the constant family $(0)_a$ and so belongs to the covered domain of the $C^d$-parablender $(K_a)_a$ at every $a_0\in \mathbb I^k$.
Also $P=(x_P,0)$ is an area contracting saddle fixed point, with vertical local stable manifold. The preimage of this local stable manifold in $I_{P'}\times [-1,1]$ is the graph of the function $x\mapsto (x-x_{P'})^2$. It has a robust tangency with $\mathcal F^{uu}(S)$ whose leaves are all horizontal.
Consequently $(f_a)_{a\in \mathbb I^k}$ belongs to $\mathcal U^{d,d}_A\cap C^{\infty,\infty}(\mathbb I^k, M,M)$.
Hence by Theorem \ref{main}, for every $\infty\ge r\ge d$ and $X\in \{A,PS\}$, a $C^{d,r}_X$-Baire generic perturbation of $(f_a)_a$ displays infinitely many sinks at every parameter $a\in \mathbb I^k$. \end{exam}
A corollary of the above example and of the proof of the Main theorem is: \begin{coro}\label{theo2} For every compact manifold of dimension $\ge 3$, for all $\infty > r\ge d\ge 1$, $\infty >k\ge 0$, and $X\in \{PS,A\}$, there exists an open set $\hat U$ in $C^{d,r}_X(\mathbb I^k, M,M)$ of families $(\hat f_a)_a$ of \emph{diffeomorphisms} $\hat f_a\in Diff^r(M)$ and a Baire residual set $\mathcal R$ in $\hat U$ so that for every $(f_a)_a\in \mathcal R$, for every $a\in \mathbb I^k $, the map $f_a$ displays infinitely many sinks. \end{coro} The proof will be given in section \ref{diffcase}. We will extend the argument of the corresponding corollary of \cite{BE15}, in order to cover the new classes of regularity.
\section{Sketch of proof} Let $k\ge 1$, $r\ge d\ge 1$ with $d<\infty$ and $X\in \{A,PS\}$. To avoid technical difficulties, we will work only with $C^\infty$-families in ${\mathcal U}^{d,r}_X$, which are families in ${\mathcal U}^\infty $, where: \[ {\mathcal U}^\infty := {\mathcal U}^{d,d}_A\cap C^{\infty,\infty}(\mathbb I^k,M,M)= {\mathcal U}^{d,r}_X\cap C^{\infty,\infty}(\mathbb I^k,M,M),\quad \mathbb I^k=[-1,1]^k\]
We observe that ${\mathcal U}^\infty$ is a dense set in ${\mathcal U}^{d,r}_X$ for the $C^{d,r}_X$-topology. It is also dense in the following space: \[ {\mathcal U}^{d,\infty} := {\mathcal U}^{d,d}_A\cap C^{d,\infty}(\mathbb I^k,M,M)= {\mathcal U}^{d,r}_X\cap C^{d,\infty}(\mathbb I^k,M,M)\]
The following lemma enables us to work only with the spaces ${\mathcal U}^{d,\infty}$ and ${\mathcal U}^{\infty}$.
\begin{lemm}\label{lemmfonda} The main Theorem holds true if for every $N>0$, there is a $C^{d,\infty}$-dense set in ${\mathcal U}^{d,\infty}$ of families $(f_a)_a\in \mathcal U^{\infty}$, so that for every $a\in \mathbb I^k$, the map $f_a$ displays a sink of period at least $N$. \end{lemm} \begin{proof} For every $N\ge 1$, the set $\mathcal V_{N}$ of families $(f_a)_a\in \mathcal U^{d,r}_X$ such that $f_a$ has a sink of period at least $N$ for every $a\in \mathbb I^k$ is open and dense in $\mathcal U^{d,r}_X$: it is open since sinks persist under small perturbations, and dense by the assumption combined with the density of ${\mathcal U}^{d,\infty}$ in ${\mathcal U}^{d,r}_X$. Hence the following set is Baire residual in $ {\mathcal U}^{d,r}_X$: $\mathcal R := \bigcap_{N\in \mathbb N} \mathcal V_{N}$. We observe that for every $(f_a)_a\in \mathcal R$, for every $a\in \mathbb I^k$, the map $f_a$ has a sink of arbitrarily large period. Hence $f_a$ has infinitely many sinks. \end{proof}
We recall that by $(H_3)$, for any $(f_a)_a\in {\mathcal U}^\infty $ and any $a_0\in \mathbb I^k $, the source $(S_a)_a$ has its $C^d$-jet at $a=a_0$ in the covered domain of the $C^d$-parablender $(K_a)_a$. This means that there exist $\overleftarrow Q\in \overleftarrow K$ and a $C^\infty$-family $(Q'_a)_a$ of points in the local unstable manifold $(W^u_{loc}(\overleftarrow Q;f_a))_a$ so that $d(Q'_a,S_a)= o(\|a-a_0\|^d)$. Hence a $C^{d,\infty}$-perturbation of $(f_a)_a$ puts $S_a$ at $Q'_a$ for all $a$ in a compact neighborhood $V$ of $a_0$. We observe that the radius of the maximal ball centered at $a_0$ and contained in $V$ is bounded from below only in terms of the size of the allowed perturbation. We will keep $V$ unchanged in all the next steps.
By the second part of $(H_2)$, the unstable manifold of $P_a$ has a transverse intersection with a stable manifold of $K_a$. Hence by the following parametrized inclination lemma, $(W^u(P_a; f_a))_a$ accumulates on $(W^u_{loc}(\overleftarrow Q; f_a))_a$.
\begin{lemm}[Parametrized inclination lemma, Lemma 1.7 of \cite{BE15}]\label{inclilemma} Let $(f_a)_a$ be a smooth family of local diffeomorphisms leaving invariant a hyperbolic compact set $(K_a)_a$ with unstable direction of dimension $d_u$. Let $(C_a)_a$ be a smooth family of submanifolds of dimension $d_u$ which intersect transversally a local stable manifold of $K_a$.
Then, for any $\overleftarrow Q\in \overleftarrow K$, for any local unstable manifold $(W^u_{loc}(\overleftarrow Q; f_a))_a$, there exists $C'_a\subset C_a$ so that the family $(f_a^n(C'_a))_a$ is $C^\infty$-close to $(W^u_{loc}(\overleftarrow Q; f_a))_a$ when $n$ is large. \end{lemm}
Hence by the second part of $(H_2)$ and the parametrized inclination lemma, we can assume that, after an arbitrarily small $C^{d, \infty}$-perturbation, the family $(f_a)_a$ is of class $C^\infty$ and there exists a segment $\Gamma_a^u$ of $W^u(P_a; f_a)$ which contains $S_a$ for every $a\in V$.
By the first part of $(H_2)$, we can perturb $(f_a)_a$ so that a segment of $W^s(P_a;f_{a})$ has a quadratic tangency with a leaf of $\mathcal F^{uu}(S_{a})$ for every $a\in U$, where $U$ is equal to $V$ minus a (possibly empty) codimension-$1$ submanifold. We recall that two curves of a surface have a \emph{quadratic tangency} if they are tangent and their curvatures differ at the tangency point.
We are going to construct a perturbation of $(f_a)_a$ which displays a persistent homoclinic tangency:
\begin{defi}[Persistent homoclinic tangency] Let $(f_a)_{a\in \mathbb I^k}$ be a smooth family of surface local-diffeomorphisms and $(P_a)_{a\in \mathbb I^k}$ a saddle periodic point.
The saddle point $P_a$ has a \emph{homoclinic tangency} if $W^u(P_a; f_a)$ is tangent to $W^s(P_a; f_a)$ at one point $H_a$.
The homoclinic tangency is \emph{persistent} for $a$ in an open subset $U\subset \mathbb I^k$ if there exist a smooth family $(\Gamma^u_a)_{a\in U}$ of embedded segments in $(W^u(P_a;f_a))_a$ and a smooth family of points $(H_a)_{a\in U}\in (\Gamma_a^u)_{a\in U}$ so that $W^s(P_a;f_a)$ is tangent to $\Gamma_a^u$ at $H_a$ for every $a\in U$.
\end{defi}
For the sake of simplicity, let us assume that $V=U$ (the tangency of $W^u_{loc}(P_a; f_a)$ with $\mathcal F^{uu}$ is robustly quadratic) and that $V$ is compact. This extra hypothesis can be assumed when $d\ge 2$.
The following proposition implies that for every $a_0\in \mathbb I^k$, there exists a dense set in ${\mathcal U}^{d,\infty}$ of families $(f_a)_a\in {\mathcal U}^\infty$ so that $P_a$ has a persistent homoclinic tangency for $a$ in the compact neighborhood $V$ of $a_0$.
\begin{prop}\label{tangencycreation} Let $V\subset \mathbb I^k$ be a compact subset. Let $(f_a)_{a\in \mathbb I^k} $ be a $C^\infty$-family of diffeomorphisms, which has a projectively hyperbolic source $(S_a)_a$. Let $(C_a)_a$ be a smooth family of embedded curves $C_a$, so that for every $a\in V$, $C_a$ has a quadratic tangency with $\mathcal F^{uu}(S_a)$ at a point $H_a$ depending continuously on $a\in V$. Let $(W_a)_{a}$ be a smooth family of embedded curves so that
for every $a\in V$, $W_a$ contains $S_a$ in its interior and $T_{S_a} W_a$ is not equal to the weak unstable direction of $S_a$.
Then there exists a smooth perturbation $(W'_a)_a$ of $(W_a)_a$ and $n\ge 0$ so that $f^{n}_a(W'_a)$ has a quadratic tangency with $C_a$ which persists for $a\in V$. \end{prop} This proposition will be proved in section \ref{tangencycreationsec}.
Indeed by $(H_4)$ we can apply Proposition \ref{tangencycreation} with $W_a$ equal to the segment of the unstable manifold $\Gamma_a^u$ containing $S_a$ and $C_a$ the segment of stable manifold of $P_a$ which is robustly tangent to $\mathcal F^{uu}(S_a)$, for every $a$ in the compact neighborhood $V$ of $a_0$.
This implies that up to a $C^{d,\infty}$-perturbation, we can assume that the saddle point $P_a$ displays a robust homoclinic tangency for every $a\in V$.
Then the following proposition implies that for every $N\ge 0$, up to a $C^{d,\infty}$-perturbation, the dynamics $f_a$ displays a sink of period at least $N$ for every $a$ in $V$.
\begin{prop}\label{sinkcreation} Let $V\subset \mathbb I^k$ be a compact subset. Let $(f_a)_{a\in \mathbb I^k} $ be a $C^\infty$-family of local diffeomorphisms. We suppose that for every $a\in V$, the map $f_a$ has an area contracting saddle point $P_a$ which displays a persistent homoclinic tangency at $H_a$ for $a\in V$. Then for every $N\ge 1$ and $\eta>0$, there exists a smooth perturbation $(f'_a)_a$ such that for every $a\in V$: \begin{itemize} \item for every $z$ which is not in the $\eta$-ball $B(H_a,\eta)$, it holds $f_a(z)=f_a'(z)$, \item the map $f'_a$ has a sink of period at least $N$.
\end{itemize} \end{prop} This proposition will be proved in section \ref{proofofsinkcreation}.
By Lemma \ref{lemmfonda}, to prove the main theorem, it suffices to construct for every $N\ge 0$ a dense set in ${\mathcal U}^{d,\infty}$ of smooth families which have a sink of period $\ge N$ for every $a\in \mathbb I^k$. This is stronger than what was sketched above: there the sink was produced only for $a\in V$, where the size of $V$ was determined by the size of the perturbation allowed when $(H_3)$ was used.
To overcome this difficulty, our strategy is to replicate the source $S$ into other sources which also satisfy $(H_2)$ and $(H_3)$. The replication process uses $(H_1)$ and $(H_4)$. It leads us to the following proposition, proved in section \ref{proofpropfonda3}.
\begin{prop}\label{propfonda3} There is a dense set in ${\mathcal U}^{d,\infty}$ formed by families $(f_a)_a\in {\mathcal U}^\infty$ which satisfy the following property:
There exists a finite open covering $(U_i)_i$ of $\mathbb I^k$ so that for every $i$ there exist a projectively hyperbolic source $(S_{ia})_{a\in U_i}$ and a continuous family of segments $(\Gamma^{u}_{ia})_a$ of $(W^u(P_a; f_a))_a$ such that: \begin{enumerate}[$(i)$] \item $S_{ia}$ is in $\Gamma_{ia}^{u}\subset W^u(P_a; f_a)$, for every $a\in U_i$. \item $T_{S_{ia}} \Gamma_{ia}^{u}$ is not the weak unstable direction of $S_{ia}$, for every $a\in U_i$. \item There exists a (smooth) family $(H_{ia})_{a\in U_i}$ of points in $(W^s(P_a; f_a))_{a\in U_i}$ at which $W^s(P_a; f_a)$ has a quadratic tangency with the strong unstable foliation of $S_{ia}$, for every $a\in U_i$. \item For every $a\in U_i\cap U_j$ with $i\not= j$, the sets $(f^k_a(H_{ia}))_{k\ge 0}$ and $(f^k_a(H_{ja}))_{k\ge 0}$ are disjoint and the orbits $(f^k_a(S_{ia}))_{k\ge 0}$ and $(f^k_a(S_{ja}))_{k\ge 0}$ are disjoint. \end{enumerate} \end{prop}
In section \ref{propfonda3main}, a development of the above sketched argument (where $(S_a)_a$ is replaced by $(S_{ia})_{a\in \mathcal U_i}$) will prove that Propositions \ref{tangencycreation}, \ref{sinkcreation} and \ref{propfonda3} imply Lemma \ref{lemmfonda}. We recall that Lemma \ref{lemmfonda} implies the main theorem.
\section{Tangency creation (Proof of Prop. \ref{tangencycreation})} \label{tangencycreationsec} In this section we consider a $C^\infty$-family $(f_a)_a$ of diffeomorphisms of $\mathbb R^2$ which displays, for every $a$, a projectively hyperbolic source $S_a$ with strong unstable direction $E^{uu}_a$.
It is useful to consider the following smooth bundle automorphism over $f_a$: \[T f_a\colon \mathbb R^2\times \mathbb P\mathbb R^1\to \mathbb R^2\times \mathbb P\mathbb R^1\] \[(z,L)\mapsto (f_a(z), D_zf_a(L))\]
We notice that $(S_a, E^{uu}_a)$ is a hyperbolic fixed point for $T f_a$, with unstable direction $\mathbb R^2$ and stable direction the tangent space $\mathbb R$ of the fiber $\mathbb P\mathbb R^1$.
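To illustrate this hyperbolicity in a simplified, linearized setting (which we assume only for illustration), suppose coordinates are chosen so that $D_{S_a}f_a=\mathrm{diag}(\sigma_u,\sigma_{uu})$ with $1<|\sigma_u|<|\sigma_{uu}|$, the second axis spanning $E^{uu}_a$. Parametrizing the directions close to $E^{uu}_a$ by the slope $s$ of the vector $(s,1)$, the fiber action of $Tf_a$ reads
\[ s\longmapsto \frac{\sigma_u}{\sigma_{uu}}\, s\; ,\]
a contraction since $|\sigma_u/\sigma_{uu}|<1$, while the base action $D_{S_a}f_a$ is expanding; this is the announced hyperbolicity of $(S_a,E^{uu}_a)$.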
The following well-known proposition is important: \begin{prop}\label{Fuusmooth} The strong unstable foliation $\mathcal F^{uu}(S_a)$ on a
neighborhood of $S_a$ is of class $C^\infty$ and depends $C^\infty$-smoothly on $a\in \mathbb I^k$. \end{prop} \begin{proof} Observe that $(S_a,E^{uu}(S_a))$ is a hyperbolic point of $Tf_a$ and that the tangent space of $\mathcal F^{uu}(S_a)$ is its local unstable manifold. Indeed, every vector $u$ tangent to $\mathcal F^{uu}(S_a)$ at a point $z$ has a backward orbit by $Tf_a$ which converges to $(S_a,E^{uu}(S_a))$.
As $T f_a$ is of class $C^\infty$, its local unstable manifold and so the foliation $\mathcal F^{uu}(S_a)$ are of class $C^\infty$. As the local unstable manifold of a hyperbolic point of a smooth family of diffeomorphisms depends smoothly on the parameter by Prop. \ref{Wupara}, the foliation $\mathcal F^{uu}(S_a)$ depends smoothly on $a$.
\end{proof}
We are now ready to prove Proposition \ref{tangencycreation}. We recall that $(C_a)_a$ denotes a $C^\infty$-family of embedded curves $C_a$, and $V$ a compact set of $\mathbb I^k$ so that for every $a\in V$, the curve $C_a$ has a quadratic tangency with $\mathcal F^{uu}(S_a)$ at a point $H_a$ depending continuously on $a$. Also $(W_a)_{a}$ denotes a $C^\infty$-family of embedded curves $W_a$ which contain $S_a$ for every $a\in V$ and so that $T_{S_a} W_a$ is not equal to the weak unstable direction of $S_a$. We are going to show the existence of a $C^\infty$-perturbation $(W'_a)_a$ of $(W_a)_a$ and of $n\ge 0$ so that $f^{n}_a(W'_a)$ displays a quadratic tangency with $C_a$ for every $a\in V$.
\begin{proof}[Proof of Proposition \ref{tangencycreation}] We continue with the above notation by denoting $T f_a\colon \mathbb R^2\times \mathbb P\mathbb R^1\to \mathbb R^2\times \mathbb P\mathbb R^1$ the action of $(f_a)_a$ on the line bundle of $\mathbb R^2$. Let $E^{uu}_a\in \{S_a\}\times \mathbb P\mathbb R^1$ be the strong unstable direction of $S_a$.
We observe that the tangent bundle $TC_a$ of $C_a$ is an embedded curve $T C_a$ in $\mathbb R^2\times \mathbb P\mathbb R^1$. As $C_a$ is tangent at one point to $\mathcal F^{uu}(S_{a})$, the curve $T C_a$ intersects $W^u_{loc}(E^{uu}_a; T f_a)$. As the tangency is quadratic, this intersection is transverse.
Hence by the inclination lemma, the preimages $(TC^n_a)_{n\le -1}$ of $T C_a$ by $T f_a$ accumulate on $W^s_{loc}(E^{uu}_a; T f_a)$, for the $C^\infty$-topology. By the parametric inclination lemma \ref{inclilemma}, the preimages $((T C^n_a)_a)_{n\le -1}$ of $T C_a$ by $(T f_a)_a$ accumulate on $(W^s_{loc}(E^{uu}_a; T f_a))_a$, for the $C^\infty$-topology.
Remark that $W^s(E^{uu}_a,Tf_a)$ is $\{S_a\}\times \mathbb P\mathbb R^1$ without the weak unstable direction of $S_a$.
Note also that the tangent bundle $TW_a$ of $W_a$ is an embedded curve of $\mathbb R^2\times \mathbb P\mathbb R^1$. The curve $W_a$ contains $S_a$ and is not tangent to the weak unstable direction of $S_a$, hence $TW_a$ intersects $W^s_{loc}(E^{uu}_a;Tf_a)$ at $T_{S_a}W_a=:(S_a,v_a)$. We observe that $a\mapsto v_a\in \mathbb P\mathbb R^1$ is of class $C^\infty$.
As $ (T C^n_a)_a $ contains a segment close to $W^s_{loc}(E^{uu}_a;Tf_a)$, there exists a point $(x^n_a,y^n_a)\in\mathbb R^2$ so that $(x^n_a,y^n_a,v_a)$ belongs to $ T C^n_a $ and the family $((x^n_a,y^n_a,v_a))_a$ is $C^\infty$-close to $((S_a,v_a))_a$.
Consequently, there exists $(z^n_a)_a$ $C^\infty$-small so that $T C^n_a$ intersects $TW_a+(z^n_a,0)= T(W_a+z^n_a)$. Let $\rho$ be a compactly supported bump function equal to $1$ on $V$. We notice that the proposition is proved with the perturbation $W_a'= W_a+\rho(a)\cdot z^n_a$, which is small for $-n$ large. \end{proof} \begin{rema}\label{tangencycreation2} In view of the above proof, the source $(S_a)_a$ of Proposition \ref{tangencycreation} does not need to be fixed: it can be a periodic source. Then the same conclusion holds true. \end{rema}
\begin{rema}\label{rematechnique} Moreover, the tangency point $\tilde H^{n}_a$ of $C_a^{n}$ with $W'_a$ satisfies that $(f^k_a ( \tilde H^{n}_a))_{k=n}^0$ is close to $(f_a^{k}(H_a))_{k=n}^0$, where $n\le 0$ is defined in the proof.
Indeed, the curve $TC_a^{n}$ being close to vertical, the point $\tilde H^{n}_a$ is close to $f_a^{n}(H_a)$. Also the curve
$TC_a^{n}$ is contracted by $Tf^k _a$ for every $a$ and for every $n\le k\le 0$, since $TC_a^{n}$ is close to the local stable manifold $W^s_{loc}(E^{uu}_a;Tf_a)$.
\end{rema}
\section{Sinks Creation (proof of Prop. \ref{sinkcreation})} \label{proofofsinkcreation} The following section is devoted to the proof of Proposition \ref{sinkcreation2} below, which implies Proposition \ref{sinkcreation}.
Let $V\subset \mathbb R^k$ be a compact subset. For every $\eta>0$ small, let $V_\eta$ be the $\eta$-neighborhood of $V$.
Let $(f_a)_{a\in \mathbb R^k} $ be a $C^\infty$-family of local diffeomorphisms. We suppose that for every $a\in V$, the map $f_a$ has an area contracting saddle point $P_a$ which displays a persistent homoclinic tangency at $H_a$ for $a\in V$.
In other words, the $C^\infty$-family $(H_a)_a$ is formed by the tangency points of the local stable manifolds $(W^s_{loc}(P_a; f_a))_a$ with a smooth family $(\Gamma_a^u)_a$ of embedded segments in $(W^u(P_a;f_a))_a$. Let $(H^{-j}_a)_{j\ge 0}$ be the sequence of preimages of $H_a$ defined using the inverse branches defining $\Gamma_a^u$. We observe that $H^0_a=H_a$ and that this presequence converges to $P_a$. We define:
\[\mathcal O(H_a)= \{H^{-j}_a : {j\ge 0}\}\cup\{f_a^j(H_a):j\ge 0\}.\]
\begin{prop}\label{sinkcreation2}
For every $M\ge 1$ and $\eta>0$, there exists a smooth perturbation $(f'_a)_a$ such that: \begin{itemize} \item for every $a\notin V_\eta$ or $z\notin B(H_a,\eta)$, it holds $f_a(z)=f_a'(z)$, \item for every $a\in V$, the map $f'_a$ has a sink $A_a$ of period at least $M$ and with orbit in the $\eta$-neighborhood of $\mathcal O(H_a)$, \item the sink $A_a$ depends smoothly on $a\in V$, and the family $(A_a)_{a\in V}$ of points is $C^\infty$-close to $(H_a)_{a\in V}$. \end{itemize} \end{prop} \begin{proof}[Proof of Proposition \ref{sinkcreation2}]
The set $\mathcal O(H_a)$ is discrete and has a unique accumulation point at $P_a$. Hence $\frac{1}{2}\, d(H_a, \mathcal O(H_a)\setminus \{H_a\})$ is positive, and up to shrinking $\eta$ we can assume that $\eta\le \frac{1}{2}\, d(H_a, \mathcal O(H_a)\setminus \{H_a\})$.
Let $\lambda_a$ and $\sigma_a$ be respectively the stable and unstable eigenvalues of $P_a$. As $P_a$ is area contracting it holds:
\[|\lambda_a\sigma_a|<1\; .\]
Via a smooth family of charts, for every $a\in V$, we identify a neighborhood $N_a$ of $H_a$ with $[-1,1]^2$ so that $H_a$ is identified to $0$, a segment of $\Gamma^u_a\subset W^u(P_a; f_a)$ to $[-1,1]\times \{0\}$ and a segment of $W^s(P_a; f_a)$ to the graph of $x\in [-1,1]\mapsto \rho_a(x)$ with $\rho_a(0)=0$ and $D_0\rho_a=0$ for every $a\in V$.
We notice that for every $a\in V$, $H_a^{-k}$ is $C^\infty$-close to $P_a$ for $k\ge 1$ large.
\begin{figure}
\caption{Preimages of $\Delta_a$.}
\label{preimagedeltaa}
\end{figure} As the map $f_a$ is in general not bijective, the stable manifold $W^s(P_a; f_a)$ is in general not connected.
Let us first assume that $H_a$ belongs to the component of $W^s(P_a; f_a)$ containing $P_a$. Let $\Delta_a^{-\infty}$ be the segment of $W^s(P_a;f_a)$ with endpoints $P_a$ and $H_a$ (by assumption they belong to the same component of $W^s(P_a;f_a)$ ).
Let $\Delta_a:= \{0\}\times [-\delta,\delta]$, with $\delta>0$ small. By the inclination lemma, for $M$ large enough and $\delta>0$ small enough, for every $n\ge M$, the preimage of $\Delta_a$ by $f_a^n$ contains a segment $\Delta_a^{-n}$ bounded by $H_a^{-n}$ and $\Delta_a$, which is close to $\Delta_a^{-\infty}$ (see figure \ref{preimagedeltaa}). Actually, by the parametric inclination lemma, the family of curves $(\Delta_a^{-n})_{a\in V}$ is $C^\infty$-close to $(\Delta_a^{-\infty})_{a\in V}$.
Let us fix $n\ge M$ large with respect to $M$. We observe that $\Delta^{-n}_a\cap [-1,1]^2$ is the graph of a function $x\in [0,1]\mapsto \hat \rho_a(x)$, with $(\hat \rho_a)_{a\in V}$ $C^\infty$-close to $(\rho_a)_{a\in V}$.
Let $Q_a$ be the unique endpoint of $\Delta_a^{-n}$ in $\Delta_a$. We notice that $(Q_a= (0,\hat \rho_a(0)))_{a\in V}$ is close to the constant family $(H_a=0= (0,\rho_a(0)))_{a\in V}$.
Let $Q'_a\in \Delta_a$ be the image of $Q_a$ by $f_a^n$. Note that $d(Q_a,Q'_a)<\delta\ll \eta$. \begin{lemm} For $n$ large, the family $(Q'_a)_{a\in V}$ is $C^\infty$-close to $ (H_a)_{a\in V}$. \end{lemm} \begin{proof}
We observe that $f_a|\Delta_a^{-\infty}$ is contracting with $P_a$ as unique fixed point.
By identifying $\Delta^{-k}_a$ to $\Delta_a^{-\infty}$ for $k\ge M$ large (and so $H_a^{-k}$ to $P_a$), the map $f_a|\Delta_a^{-k}$ is contracting with $P_a$ as unique fixed point.
As $(\Delta^{-k}_a)_{a\in V}$ is $C^\infty$-close to $(\Delta^{-\infty}_a)_{a\in V}$ we have uniform bounds which enable us to prove that $(f^{n-M}_a(Q_a))_{a\in V}$ is $C^{\infty}$-close to $(H_a^{-M})_{a\in V}$ for $n$ large.
By taking $n$ large compared to $M$, it follows that $(Q'_a)_{a\in V}= (f^{n}_a(Q_a))_{a\in V}$ is $C^{\infty}$-close to $(H_a)_{a\in V}$.\end{proof}
\begin{lemm}\label{LemmaQ} The first $n$ iterates of $Q_a$ are in the $\eta$-neighborhood of $\mathcal O( H_a)$. \end{lemm} \begin{proof} With the notation of the previous proof, since $\delta>0$ is small with respect to $\eta$, the first $M$ iterates of $Q_a$ are $\eta$-close to the first $M$ iterates of $H_a$. In the previous proof we saw that $\{f^k_a(Q_a): \; {n\ge k\ge M}\}$ is even closer to $\{ H^{-k}_a : \; 0\le k\le n-M\}$. \end{proof}
\begin{figure}
\caption{Notations involved.}
\label{preuvemain}
\end{figure}
We recall that $f_a^n(Q_a)= Q'_a$ and $Df_a^n(T_{Q_a} \Delta^{-n}_a)= Q'_a+\{0\}\times \mathbb R$. Let $L_a \in \mathbb P \mathbb R^1$ be defined by $Q'_a+L_a:= Df_a^n(Q_a+\{0\}\times \mathbb R)$. By the inclination lemma, the family of lines $(L_a)_{a\in V}$ is $C^\infty$-close to $(\mathbb R\times \{0\})_{a\in V}$. As $(Q_a)_{a\in V}$ is close to $(0)_{a\in V}$ and $(\Delta^{-n}_a)_{a\in V}$ is close to $(\Delta^{-\infty}_a)_{a\in V}$, the family $(T_{Q_a} \Delta^{-n}_a)_{a\in V}$ is close to $(T_{H_a} \Delta^{-\infty}_a)_a= (\mathbb R\times \{0\})_a$ and so to $(L_a)_{a\in V}$.
Consequently, there exists a family $(f'_a)_a$ which is $C^\infty$-close to $(f_a)_a$ so that: \begin{itemize} \item $f'_a(Q_a')= f_a(Q_a)$ for every $a\in V$, \item $Df'_a(Q_a'+L_a)= Df_a(T_{Q_a} \Delta^{-n}_a)$ for every $a\in V$, \item $Df'_a(Q_a'+\{0\}\times \mathbb R)= Df_a(Q_a+\{0\}\times \mathbb R)$ for every $a\in V$. \item $f_a'(z)=f_a(z)$ if $a\notin V_\eta$ or $z\notin B(H_a,\eta)$. \end{itemize}
By Lemma \ref{LemmaQ}, the first $n$ $f_a$-iterates of $Q_a$ are included in the $\eta$-neighborhood of $\mathcal O(H_a)$. Thus the points $\{f^k_a(Q_a): 1\le k\le n-1\}$ are $\eta$-distant from $H_a$. Thus it holds for every $a\in V$: \[ D_{f'_a(Q'_a)} f'^{n-1}_a = D_{f_a(Q_a)} f_a^{n-1}\] Consequently the point $Q'_a$ is $n$-periodic and $D_{Q'_a}f_a'^n$ sends $L_a$ to $\{0\}\times \mathbb R$ with a contraction factor of the order of $\lambda_a^n$ and sends $\{0\}\times \mathbb R$ to $L_a$ with an expansion factor of the order of $\sigma_a^n$. As $|\lambda_a\sigma_a|<1$, $D _{Q_a'}f_a'^{2n}$ is a contracting homothety of factor of the order of $ (\lambda_a\sigma_a)^{n}$. In particular $Q'_a$ is a sink of period $n\ge M$.
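Schematically, for the perturbed map and in a basis adapted to $(L_a, \{0\}\times \mathbb R)$ at $Q'_a$ (neglecting uniformly bounded distortion factors, which we assume controlled for this sketch), the two properties above give
\[ D_{Q'_a}f_a'^{\,n}\sim\begin{pmatrix}0&\sigma_a^{n}\\ \lambda_a^{n}&0\end{pmatrix}, \qquad\text{hence}\qquad D_{Q'_a}f_a'^{\,2n}\sim (\lambda_a\sigma_a)^{n}\begin{pmatrix}1&0\\0&1\end{pmatrix}\; ,\]
which makes the contraction at the periodic point $Q'_a$ transparent.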
If $H_a$ does not belong to the component of $W^s(P_a; f_a)$ containing $P_a$, then let $k\ge 0$ be minimal such that $f^k_a(H_a)$ belongs to the connected component of $W^s(P_a; f_a)$ containing $P_a$. By $f_a$-invariance of $W^u(P_a; f_a)$, this immersed submanifold is still tangent to $W^s(P_a; f_a)$ at $f^k_a(H_a)$. Hence by the above argument, a small perturbation of $W^u(P_a; f_a)$ around $f^k_a(H_a)$ yields a persistent homoclinic tangency. The map $f_a^k$ being a diffeomorphism on a neighborhood of $H_a$, we can make this perturbation supported in a small neighborhood of $H_a$.
\end{proof}
\section{Proof that Proposition \ref{propfonda3} implies main Theorem \ref{main}} \label{propfonda3main}
Let us assume, as in Proposition \ref{propfonda3}, that there is a dense set in ${\mathcal U}^{d,\infty}$ formed by families $(f_a)_a\in {\mathcal U}^\infty$ which satisfy the following property:
There exists a finite open covering $(U_i)_i$ of $\mathbb I^k$ so that for every $i$ there exist a periodic, projectively hyperbolic source $(S_{ia})_{a\in U_i}$ and a continuous family of embedded segments $(\Gamma^{u}_{ia})_a$ of $(W^u(P_a;f_a))_a$ such that: \begin{enumerate}[$(i)$] \item $S_{ia}$ is in $\Gamma_{ia}^{u}\subset W^u(P_a; f_a)$, for every $a\in U_i$. \item $T_{S_{ia}} \Gamma_{ia}^{u}$ is not the weak unstable direction of $S_{ia}$, for every $a\in U_i$. \item There exists a (smooth) family $(H_{ia})_{a\in U_i}$ of points in $(W^s(P_a; f_a))_{a\in U_i}$ at which $W^s(P_a; f_a)$ has a quadratic tangency with the strong unstable foliation of $S_{ia}$, for every $a\in U_i$. \item For every $a\in U_i\cap U_j$ with $i\not= j$, the sets $(f^k_a(H_{ia}))_{k\ge 0}$ and $(f^k_a(H_{ja}))_{k\ge 0}$ are disjoint and the finite orbits $(f^k_a(S_{ia}))_{k\ge 0}$ and $(f^k_a(S_{ja}))_{k\ge 0}$ are disjoint. \end{enumerate}
We want to show that, under these assumptions, for every $M>0$ each of these families can be perturbed -- following the algorithm described in the sketch of proof -- to one which displays a sink of period at least $M$ for every $a\in \mathbb I^k$.
In order to do so, we need to handle different perturbations independently, and hence to find disjoint regions of the phase space in which to perform them.
For every $i$ and $a\in U_i$, we define the following sets of forward and backward orbits.
Put $\mathcal O^+(H_{ia}):= \{f^k_a(H_{ia}): k\ge 0\}$ and $\mathcal O^+(S_{ia}):= \{f^k_a(S_{ia}): k\ge 0\}$.
Let $\mathcal O^{-}(H_{ia})$ be the preorbit of $H_{ia}$ defined thanks to the inverse branches defining the strong unstable foliation of $S_{ia}$. We notice that $\mathcal O^{-}(H_{ia})$ accumulates on $\mathcal O^+(S_{ia})$.
We put: $$\mathcal O(H_{ia}):= \mathcal O^{+}(H_{ia})\cup \mathcal O^{-}(H_{ia}).$$
Also the segment $\Gamma_{ia}^{u}$ of $W^u(P_a; f_a)$ is defined by a sequence of inverse branches of $f_a$. Let $\mathcal O^{-}(S_{ia}):= (S^{(k)}_{ia})_{k\le -1}$ be the preorbit of $S_{ia}$ associated to this sequence of inverse branches. We notice that $\mathcal O^{-}(S_{ia})$ is discrete with $P_a$ as unique accumulation point. We put: $$\mathcal O(S_{ia}):= \mathcal O^{+}(S_{ia})\cup \mathcal O^{-}(S_{ia}).$$
\begin{fact}We can assume that $\mathcal O^{+}(S_{ia})$ is disjoint from $\mathcal O^{-}(S_{ia})$. \end{fact} \begin{proof}Indeed, $\mathcal O^{+}(S_{ia})$ is finite since $S_{ia}$ is periodic, thus $\mathcal O^{-}(S_{ia})$ is not contained in $\mathcal O^{+}(S_{ia})$. If the preimage $S_{ia}^{(-1)}$ of $S_{ia}$ is in $\mathcal O^{+}(S_{ia})$, then properties $(i)$--$(iv)$ are still satisfied by $S_{ia}^{(-1)}$ and the preimage $\Gamma_{ia}'^{u}$ of $\Gamma_{ia}^{u}$. Consequently, by shifting the orbit, we can assume that $S_{ia}^{(-1)}$ is not in $\mathcal O^{+}(S_{ia})$, and so $\mathcal O^{+}(S_{ia})$ is disjoint from $\mathcal O^{-}(S_{ia})$.\end{proof}
By $(iv)$, for every $i\not= j$ such that $a\in U_i\cap U_j$, the sets $\mathcal O^+(S_{ia})$ and $\mathcal O^+(S_{ja})$ are disjoint. This implies that the sets $\mathcal O(S_{ia})$ and $\mathcal O(S_{ja})$ are disjoint.
We notice also that $\mathcal O(S_{ia})$ is disjoint from $\mathcal O(H_{ja})$, for every $i,j$ such that $a\in U_i\cap U_j$ (otherwise the source would converge to $P_a$). Consequently: \begin{fact} For every $a$ and all $i\not= j$ such that $a\in U_i\cap U_j$, the point $S^{(-1)}_{ia}$ is disjoint from the set $\mathcal O(S_{ja})\cup \mathcal O (H_{ja})\cup \mathcal O (H_{ia})\cup \{P_a\}$. \end{fact}
Note also that $\mathcal O(S_{ia})$ is discrete with a unique accumulation point at $P_{a}$ and $\mathcal O(H_{ia})$ is discrete with accumulation points $P_{a}$ and the finite set $\mathcal O^+(S_{ia})$. Hence the set $\mathcal O(S_{ja})\cup \mathcal O(S_{ia})\cup \mathcal O (H_{ja})\cup \mathcal O (H_{ia})\cup \{P_a\}$ is compact and $S^{(-1)}_{ia}$ is isolated therein.
As these compact sets depend continuously on $a$, by shrinking slightly the covering $(U_i)_i$, we obtain: \begin{lemm} There exists $\delta>0$ so that for every $i\not = j$ and $a\in U_i\cap U_j$, the point $S^{(-1)}_{ia}$ is at distance at least $3\delta>0 $ from $\mathcal O(S_{ja})\cup (\mathcal O(S_{ia})\setminus \{S^{(-1)}_{ia}\})\cup \mathcal O (H_{ja})\cup \mathcal O (H_{ia})$. \end{lemm}
Let $(U'_i)_i$ be an open, relatively compact covering of $\mathbb I^k$ such that $cl(U'_i)\subset U_i,$ $\forall i$.
By shrinking $\Gamma^{u}_{ia}$, we can assume that its preimage $\Gamma^{u(-1)}_{ia}$ is included in the $\delta/2$-neighborhood of $S^{(-1)}_{ia}$ for every $i$ and $a\in U_i$. By Proposition \ref{tangencycreation}, we can perturb $\Gamma^{u}_{ia}$ to a curve $\tilde \Gamma^{u}_{ia}$ which is tangent to $W^s(P_a; f_a)$ at a point $\tilde H_{ia}$.
By Remark \ref{rematechnique}, the forward orbit $\mathcal O^+(\tilde H_{ia}):=\{f^k_a(\tilde H_{ia}): k\ge 0\}$ is included in the $\delta$-neighborhood of $\mathcal O(H_{ia})$.
As $S^{(-1)}_{ia}$ is $3\delta$-distant from $\mathcal O(H_{ia})\cup (\mathcal O(S_{ia})\setminus \{S^{(-1)}_{ia}\})$, we can arrange that:
\begin{itemize}
\item $f_{ia}(z) = f_a(z)$ if $a\notin U_i$ or $z\notin B(S^{(-1)}_{ia},\delta)$
\item for every $a\in U'_i$, the curve $\tilde \Gamma_{ia}^{u}$ is the hyperbolic continuation of $\Gamma_{ia}^{u}$ for $f_{ia}$. \end{itemize} This proves:
\begin{lemm} For every $i$ there exists a smooth perturbation $(f_{ia})_{a\in U_i}$ of $(f_a)_{a\in U_i}$ so that: \begin{itemize} \item the hyperbolic continuation of $\tilde\Gamma_{ia}^{u}$ of $\Gamma_{ia}^{u}$ has a persistent homoclinic quadratic tangency with $W^s(P_a;f_{ia})$ at a point $\tilde H_{ia}\in \tilde \Gamma_{ia}^{u}\cap B(S_{ia}, \delta)$ and with forward orbit $\mathcal O^+(\tilde H_{ia})$ in the $\delta$-neighborhood of $\mathcal O(H_{ia})$, for every $a\in U_i'$. \item $f_{ia}(z)=f_a(z)$ for every $z\notin B(S^{(-1)}_{ia},\delta)$ and $a\in U_i$. \end{itemize} \end{lemm}
We define $\mathcal O^-(\tilde H_{ia})$ by branches different from those defining $\mathcal O^-(H_{ia})$: we consider the inverse branches of the dynamics defining $\tilde \Gamma^{u}_{ia}$. This defines a sequence of preimages $(\tilde H_{ia}^{(k)})_{k\le 0}$ so that $\tilde H_{ia}^{(k)}$ is close to $S_{ia}^{(k)}$ for every $k\le -1$.
We put $\mathcal O(\tilde H_{ia}):= \mathcal O^-(\tilde H_{ia})\cup \mathcal O^+(\tilde H_{ia})$. It is a discrete set with a unique accumulation point $P_a$. Also we notice that for every $i\not =j$ and $a\in U'_i\cap U'_j$: \begin{itemize} \item the $\delta$-neighborhood of $\mathcal O(\tilde H_{ja})$ is disjoint from $B(S^{(-1)}_{ia},\delta)$, \item $B(S^{(-1)}_{ia},\delta)$ contains $ \tilde H^{(-1)}_{ia}$ and is disjoint from the $\delta$-neighborhood of $\mathcal O(\tilde H_{ia})\setminus \{\tilde H^{(-1)}_{ia}\}$. \end{itemize}
Then Proposition \ref{sinkcreation2} implies:
\begin{lemm} For every $M\ge 0$, for every $i$ there exists a smooth perturbation $(f_{ia})_{a\in U_i}$ of $(f_a)_{a\in U_i}$ so that: \begin{itemize} \item $(f_{ia})_a$ has a sink $A_{ia}$ of period $\ge M$ with orbit in $B(\mathcal O(H_{ia}),\delta)$, for every $a\in U_i'$. \item $f_{ia}(z)=f_a(z)$ for every $z\notin B(S^{(-1)}_{ia},\delta)$ and $a\in U_i$. \end{itemize} \end{lemm}
Note that by the first item, for every $a\in U_i\cap U_j'$ with $i\not= j$, the orbit of $A_{ja}$ does not intersect $B(S^{(-1)}_{ia},\delta)$. Hence for every perturbation $(f'_a)_a$ of $(f_a)_a$ so that: \begin{itemize} \item $f_a'(z)= f_a(z)$ for every $z\notin B(S^{(-1)}_{ia},\delta)$ and every $i$ such that $a\in U_i$, \item $f'_a(z)= f_{ia}(z)$ for every $a\in U'_i$ and every $z\in B(S^{(-1)}_{ia},\delta)$, \end{itemize} the point $A_{ia}$ is still an attracting cycle of period $\ge M$ for $f'_a$, for every $a\in U'_i$. As $(U'_i)_i$ is a covering of $\mathbb I^k$, the family $(f_a')_a$ displays a sink of period $\ge M$ for every $a \in \mathbb I^k$.
We notice that the perturbation $(f_a')_a$ exists since the sets $( \{(a,z)\in U_i\times M: z\in B(S^{(-1)}_{ia},\delta)\})_i$ are disjoint.
\section{Replications of the source $S$ (Proof of Prop. \ref{propfonda3})} \label{proofpropfonda3} Let $(f_a)_a$ be a smooth family in $\mathcal U^\infty\subset \mathcal U^{d,\infty}$. Let $\mathcal F^{uu}_a(S_{a})$ be the strong unstable foliation associated to the projectively hyperbolic fixed source $S_a$ of $f_a$.
\begin{prop}\label{propprefinal} For every $\epsilon>0$, for every $a_0\in \mathbb I^k$, there exist $(k+1)3^k$ sources $(S_{i a_0})_i$ with disjoint periodic orbits so that: \begin{enumerate}[$(1)$] \item the source $S_{i a_0}$ is projectively hyperbolic and is $\epsilon$-close to $S_{a_0}$ and the $C^d$-jet $J^d_{a_0} (S_{ia})_a$ of $(S_{ia})_a$ at $a=a_0$ is $\epsilon$-close to $J^d_{a_0}(S_a)_a$. \item the set $B$ is included in the basin of $S_{ia_0}$ and the leaves of the strong unstable foliation $\mathcal F^{uu}(S_{i a_0})$ are $\epsilon$-$C^2$-close to those of $\mathcal F^{uu}(S_{a_0})$, over $B$. \end{enumerate} \end{prop}
\begin{proof} For the sake of simplicity, we prove this proposition for: \[a_0=0.\]
Let $\overleftarrow Q\in \overleftarrow K$ be such that $S_0$ is in $W^u_{loc}(\overleftarrow Q; f_{0})$. The preorbit $\overleftarrow Q$ being not necessarily periodic, it does not need to have Lyapunov exponents well defined. Let $(\lambda_n)_n$ and $(\sigma_n)_n$ be defined by:
\[\lambda_n = \|D_Qf^{-n}_0|E^s\|^{-1}<1 \quad \text{and}\quad \sigma_n =\|D_Qf_0^{-n}|E^u\|^{-1}>1\; .\] Let $\sigma_u$ and $\sigma_{uu}$ be the weak and strong unstable eigenvalues of $S_0$.
Let $\phi_a$ be the inverse branch of $f_{a}$ which contracts $B$ to $S_{a}$. Then $\phi_{0}^n$ contracts $B$ to a small neighborhood $B_n$ of $S_0$, with a contraction of the order of $\sigma_{u}^{-n}$. Let $(\psi^m_a)_{m\ge 0}$ be the inverse branches of $(f_a^m)_{m\ge 0}$ which define $W^u(\overleftarrow Q;f_a)$. For every $m$ large, $\psi^m_0$ is defined from a small neighborhood of $S_0$ with image in a small neighborhood of the zero coordinate of $\overleftarrow f^{-m}(\overleftarrow Q)$. Its Lipschitz constant is of the order of $\lambda_{m}^{-1}$. The following lemma is obvious: \begin{lemm}\label{choixnm} For every $m$ large, for every $n\ge 0$ large enough, it holds: \begin{enumerate}[$(a)$] \item $\lambda_{m}\cdot \sigma_{u}^{n}$ is large, \item $\lambda_{m }\cdot \sigma_{u}^{n}$ is small compared to $\sigma_{m}\cdot \sigma_{uu}^{n}$. \end{enumerate} \end{lemm} In particular, for such a choice of $n,m$ with $n$ large enough, the map $\phi_0^n\circ \psi^m_0\circ \phi^n_0$ is well defined on $B$ and is strongly contracting. We put $B_{n,m} = \psi^m_0(B_n)= \psi^m_0\circ \phi^n_0(B)$ and $B_{n,m,n} = \phi^n_0(B_{n,m})= \phi^n_0\circ\psi^m_0\circ \phi^n_0(B)$.
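The contraction of the composed map can be quantified heuristically from the orders of magnitude just stated (the symbol $\lesssim$ hides uniform constants):

```latex
\[
\mathrm{Lip}\,\bigl(\phi_0^n\circ \psi^m_0\circ \phi^n_0\bigr)
\;\lesssim\; \sigma_u^{-n}\cdot \lambda_m^{-1}\cdot \sigma_u^{-n}
\;=\; \bigl(\lambda_m\cdot \sigma_u^{\,n}\bigr)^{-1}\cdot \sigma_u^{-n}\; ,
\]
```

which is small by item $(a)$ of the above lemma together with $\sigma_u>1$.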
By $(H_1)$, $B_{n,m,n}$ is included in the small neighborhood $B_n\subset B$ of $S_0$, and so in $B$. Consequently $\phi^n_0\circ\psi^m_0\circ \phi^n_0$ has a fixed point $S_{i0}$ in $B_{n,m,n}$. As $B_{n}$ is a small neighborhood of $S_{0}$, the $f_0$-periodic source $S_{i0}$ is close to $S_{0}$. We notice that $B$ is in the basin of $S_{i0}$. Let us show that the continuation $(S_{ia})_a$ of $S_{i0}$ satisfies $(1)-(2)$ for $n,m$ sufficiently large.
\paragraph{(1)} Let us come back to the notations introduced in \textsection \ref{notationJdM} on the space of $d$-jets $J^d_0 (\mathbb I^k ,M)$. We remark that the map $J^d_{0} (\phi_a^n)_a$ is as contracting as $\phi_0^n$, with a constant of the order of $\sigma_{u}^{-n}$, whereas $J^d_{0} (\psi_a^m)_a$ is Lipschitz with a constant of the order of $\lambda_{m}^{-1}$. Hence, by $(a)$, the composed map $J^d_{0} (\phi_a^n\circ \psi_a^m\circ \phi_a^n)_a$ is strongly contracting with a unique fixed point $J^d_{0} (S_{ia})_a$ close to $J^d_{0} (S_{a})_a$.
By hyperbolicity and condition $(iv)$, every line $L$ close to the line field $E^{uu}(S_0)$ and pointed at a point in $B_{n,m,n}$ is sent by $Df^n$ to a line $L'$ even closer to the line field $E^{uu}(S_0)$ and pointed at a point in $B_{n,m}$. Then it is sent by $Df^m$ to a line $L''$ close to $TW^u(\overleftarrow Q; f_{0})$ (and pointed at a point in $B_n$). Finally, it is sent by $Df^n$ to a line very close to $E^{uu}(S_0)$ and pointed at a point in $B$. Hence there is an invariant cone field, and so $S_{i0}$ is a projectively hyperbolic periodic source.
\paragraph{(2)} We proved that the strong unstable direction $E^{uu}_i(S_0)$ associated to $S_{i0}$ is $C^0$-close to $E^{uu}(S_0)$. Let us show that the leaves integrating $E^{uu}_i(S_0)$ are $C^2$-close to the leaves of $\mathcal F^{uu}(S_0)$.
Let $\gamma$ be a curve in $B_{n,m,n}$ which is $C^2$-close to a plaque of $\mathcal F^{uu}(S_0)$. By projective hyperbolicity, the image by $f_0^n$ of $\gamma$ is a curve $C^2$-close to a plaque of $\mathcal F^{uu}(S_0)$ in $B_{n,m}$. Then by the inclination lemma and $(H_4)$, its image by $f^m_0$ is $C^2$-close to a segment of $W^u_{loc}(\overleftarrow Q;f_0)$ in $B_n$. Again by projective hyperbolicity, its image by $f^n_0$ is then $C^2$-close to a plaque of $\mathcal F^{uu}(S_0)$. This shows that a curve in $B_{n,m,n}$ which is $C^2$-close to a plaque in $\mathcal F^{uu}(S_0)$ is sent by $f^{2n+m}_0$ to a curve which is even closer to a plaque of $\mathcal F^{uu}(S_0)$. This proves that the leaves of $\mathcal F^{uu}_i(S_0)$ are $C^2$-close to those of $\mathcal F^{uu}(S_0)$.
Consequently for every $m$ large, for every $n$ large depending on $m$, there is a $(2n+m)$-periodic source $(S_{ia})_a$ which satisfies $(1-2)$ at $a=0$. By taking different values for $m$, we get $(k+1)3^k$ periodic sources $(S_{ia})_a$ with disjoint orbits as claimed in the proposition. \end{proof}
For every $a\in \mathbb I^k$, all the estimates involved in the algorithm given in the above proposition are bounded from above. Hence by compactness of $\mathbb I^k$, the integers $n, m$ are bounded by a constant $N$ independent of $a\in \mathbb I^k$.
This implies the following: \begin{cor}\label{prefinal} There exist $\eta>0$ and $N>0$ so that for every $a_0\in \mathbb I^k$, there exists a finite set $\hat J(a_0)$ of symbols and a family $(S_{ia_0})_{i\in \hat J(a_0)}$ of periodic sources $S_{ia_0}$ so that: \begin{enumerate}[$(a)$] \item The cardinality of $\hat J(a_0)$ is $(k+1) 3^k$ and for every $i\not = j\in \hat J(a_0)$, the sources $S_{ia_0}$ and $S_{ja_0}$ have disjoint orbits, \item For every $i\in \hat J(a_0)$, the source $S_{ia_0}$ persists for every $a_1\in a_0+[-\eta,\eta]^k$, and its continuations $(S_{ia})_a$ satisfy $(1)$ \& $(2)$ at $a_1$. \item The period of $S_{ia_0}$ is bounded by $3N$ and the expansion of $S_{ia}$ is bounded from below by $1000$ for every $a\in a_0+[-\eta,\eta]^k$. \end{enumerate} \end{cor}
For every $a_0\in \mathbb I^k$, for every $i\in \hat J(a_0)$, for every $a\in a_0+[-\eta,\eta]^k$, we define the finite set: \[\mathcal O^+ (S_{ia}):= \{f^k_a(S_{ia}): \; k\ge 0\}.\]
By $(c)$, there exists $\delta>0$,
s.t. for all $a_0\in \mathbb I^k$ and $i\in \hat J(a_0)$, the map $f^{(3N)!}_{a_0}|B(S_{i a_0}, \delta)$ is expanding and so $S_{i a_0}$ is its unique fixed point. Note that $(3N)!$ is divisible by the periods of all the sources $S_{ia_0}$ among $i\in \hat J(a_0)$.
Hence, for all $a_0,a_1\in \mathbb I^k$ so that $(a_0+[-\eta,\eta]^k)\cap (a_1+[-\eta,\eta]^k)$ contains a parameter $a$, for all $i \in \hat J(a_0)$ and $j \in \hat J(a_1)$, it holds: \[\text{either }\mathcal O^+ (S_{ia})= \mathcal O^+ (S_{ja})\quad \text{or}\quad d(\mathcal O^+ (S_{ia}), \mathcal O^+ (S_{ja}))>\delta\; .\]
For every $\eta'<\eta$, let $\mathbb Z_{\eta'}:= \mathbb I^k \cap \eta' \mathbb Z^k$. \begin{lemm} For every $\eta'<\eta$, there exists a subset $\sqcup_{a_0\in \mathbb Z_{\eta'}} J(a_0)$ of the disjoint union $\sqcup_{a_0\in \mathbb Z_{\eta'}} \hat J(a_0)$ so that: \begin{itemize} \item for all $a_0\in \mathbb Z_{\eta'}$, the set $J(a_0)$ has cardinality $(k+1)$, and for every $a\in a_0+[-\eta',\eta']^k$ and all $i\not=j\in J(a_0)$, the orbits $\mathcal O(S_{ia})$ and $\mathcal O(S_{ja})$ are $\delta$-distant. \item for all $a_1\not= a_2\in \mathbb Z_{\eta'}$, for every $i\in J(a_1)$ and $j\in J(a_2)$, for every $a\in (a_1+[-\eta',\eta']^k)\cap (a_2+[-\eta',\eta']^k)$, the orbits $\mathcal O(S_{ia})$ and $\mathcal O(S_{ja})$ are $\delta$-distant. \end{itemize}
\end{lemm}
\begin{proof} Let us index $\mathbb Z_{\eta'} =:\{a_i: 1\le i\le q\}$. Let $J(a_1)\subset \hat J(a_1)$ be any subset of cardinality $k+1$. By Corollary \ref{prefinal} (a), the orbits of $S_{ia}$ and $S_{ja}$ are disjoint for every $i\not =j\in J(a_1)$ and $a\in (a_1+[-\eta',\eta']^k)$.
Let $2\le q'\le q$ and assume by induction $J(a_i)$ constructed for every $i< q'$. We notice that the cardinality of $\mathbb Z_{\eta'}(q'):= \{ a_i \in a_{q'} +[-\eta',\eta']^k: i<q'\}\cap \mathbb Z_{\eta'}$ is at most $3^k-1$. By induction, the cardinality of $\sqcup_{ a_i \in \mathbb Z_{\eta'}(q')} J(a_i)$ is at most $ (k+1) (3^k-1)$ .
Hence we have to remove at most $ (k+1) (3^k-1)$ periodic sources of $\hat J(a_{q'})$ so that the remaining sources have continuations with disjoint orbits to those indexed by $\sqcup_{ a_i \in \mathbb Z_{\eta'} (q')} J(a_i) $. We choose any set $J(a_{q'})$ of cardinality $k+1$ in the remaining set formed by at least $(k+1) 3^k- (k+1) (3^k-1)= k+1$ different sources.
\end{proof}
We recall that there exists a continuous family of local unstable manifolds $(W^u_{loc}(\overleftarrow z; f))_{\overleftarrow z\in \overleftarrow K}$ so that by $(H_3)$ and Proposition \ref{propprefinal}.2, for every $i\in \sqcup_{a_0\in \mathbb Z_{\eta'}} J(a_0)$, there exists $\overleftarrow z_i\in \overleftarrow K$ satisfying: \begin{itemize} \item $W^u_{loc}(\overleftarrow z_i; f_{a_0})$ contains $S_{i,a_0}$, \item there exists a $C^d$-family of points $(Q_a)_a\in (W^u_{loc}(\overleftarrow z_i; f_{a}))_a$ and a continuous function $\epsilon_i$ vanishing at $0$ s.t.:
\[d(Q_a, S_{i,a})\le \epsilon_i( \|a-a_0\|)\cdot \|a-a_0\|^d.\] \end{itemize}
We recall that the family $(W^u_{loc}(\overleftarrow z; f_a)_{a\in \mathbb I^k})_{\overleftarrow z\in \overleftarrow K}$ is continuous in the $C^{\infty}$ topology.
Hence there exists a family of smooth charts $(\phi_{ia})_{a\in \mathbb I^k} $ from a neighborhood of $W^u_{loc}(\overleftarrow z_i; f_{a})$ onto an open subset of $\mathbb R^2$, which send $W^u_{loc}(\overleftarrow z_i; f_{a})$ to the constant segment $[-1,1]\times \{0\}$ for every $a\in \mathbb I^k$ and which have bounded $C^{r,r}$-norm independently of $a_0\in \mathbb Z_{\eta'}$ and $i\in J(a_0)$ for every $r\ge 0$.
We remark that $S_{ia}$ belongs to the domain of this chart for $a$ sufficiently close to $a_0$. Let $(x_{i}(a),y_{i}(a)):=\phi_{ia}(S_{ia})$ and remark that: $$\partial_a^s y_{i}(a_0)=0\quad \forall s\le d\; .$$
Moreover, by Corollary \ref{prefinal}.$(c)$, for every $a_0\in \mathbb Z_{\eta'}$ and $i\in J(a_0)$, the period $p$ of $S_{ia}$ is at most $3N$ and the expansion of $D_{S_{ia}}f^p_a$ is at least $1000$. Thus the following derivative: \[a\in a_0+ [-\eta,\eta]^k\mapsto \partial_a S_{ia} = - (D_{z} f_a^p(S_{ia}) )^{-1}(\partial_a f_a^p)(S_{ia})\] displays a $C^{d}$-norm bounded independently of $\eta'$, $i$, $a_0$, and $a$. In particular, there exists $C_{d+1}>0$ so that for every $a\in [-\eta,\eta]^k$, it holds:
$$|\partial_a^{d+1} y_{i}(a_0+a)|\le C_{d+1}\; , \text{ where } (x_{i}(a),y_{i}(a)):=\phi_{ia}(S_{ia}) \quad \text{and}\quad \partial_a^s y_{i}(a_0)=0, \; \forall s\le d\; .$$
A crucial point is that $C_{d+1}$ and $\delta>0$ depend neither on $\eta'$ nor on $a_0$ and $i$.
Hence for $\eta'\in (0,\eta)$ sufficiently small, we can $C^{d,\infty}$-perturb $(f_a)_a$ to a smooth family $(f'_a)_a$ so that $S_{ia}$ persists as $S_{ia}':= \phi_{ia}^{-1}(x_{i}(a),0)$ for every $a\in a_0+[-2\eta'/3,2\eta'/3]^k$, the perturbation being supported by $(a,z)\in (a_0+[-\eta',\eta']^k)\times B(S_{ia},\delta/2)$.
Note that the perturbation is $C^{d,\infty}$-small when $\eta'$ is small. This proves: \begin{prop} For every $\eta'<\eta$ small, there exists a $C^{d,\infty}$-perturbation $(f'_a)_a\in \mathcal U^\infty$ of $(f_a)_a$ and families of sources $(S'_{ia_0})_{ a_0\in \mathbb Z_{\eta'}, i\in J(a_0) }$ so that \begin{itemize} \item for every $a_0\in \mathbb Z_{\eta'}$, the cardinality of the set $J(a_0)$ is $(k+1)$, \item for all $a_1,a_2\in \mathbb Z_{\eta'}$, for all $i \not = j \in J(a_1) \sqcup J(a_2)$, for every $a\in (a_1+[-\eta',\eta']^k)\cap (a_2+[-\eta',\eta']^k)$, the orbits $\mathcal O(S'_{ia})$ and $\mathcal O(S'_{ja})$ are $\delta/2$-distant. \item for every $a_0\in \mathbb Z_{\eta'}$, for all $i\in J(a_0)$, there exists $\overleftarrow z_i\in \overleftarrow K$ so that $S'_{ia}$ belongs to $W^u_{loc}(\overleftarrow z_i; f'_a)$ for every $a\in a_0+[-2\eta'/3,2\eta'/3]^k$. \end{itemize}
\end{prop}
By taking $\eta'$ such that $1/\eta'\in \mathbb N$, it holds that $\cup_{a_0\in \mathbb Z_{\eta'}} (a_0+[-2\eta'/3, 2\eta'/3]^k)\supset \mathbb I^k$.
Note that by $(H_2)$ a segment $W^s_{loc}(P, f_a')$
has a robust tangency with $\mathcal F^{uu}(S_a)$ for every $a\in \mathbb I^k$. Also if $a\in a_0+(-\eta,\eta)^k$
and $a_0\in \mathbb Z_{\eta'}$, and $i\in J(a_0)$, the leaves of
the foliation $\mathcal F^{uu}(S_{ia})$ are $C^2$-close to $\mathcal F^{uu}(S_a)$. We recall that $\mathcal F^{uu}(S_{ia})$ is actually a $C^\infty$-foliation depending smoothly on the parameter by Prop. \ref{Fuusmooth}.
We recall that $\mathcal U$, which is defined by $(H_0-H_1-H_2)$, is a $C^1$-open set which allows the robust tangency between $\mathcal F^{uu}(S)$ and $W^s_{loc} (P)$ to be non-quadratic.
However, by performing a small perturbation of the dynamics in the $\delta/2$-neighborhood of $S_{ia}'$, we can keep $S_{ia}'$ and $W^s_{loc}(P, f_a')$ at the same places and make any small smooth perturbation for $(\mathcal F^{uu}(S_{ia}))_a$.
By Thom's transversality theorem, for a typical perturbation of $(\mathcal F^{uu}(S_{ia}))_a$, the set of parameters for which $W^s_{loc}(P, f_a')$ does not display a quadratic tangency with $\mathcal F^{uu}(S_{ia})$ is a (possibly empty) submanifold of codimension $1$ in $a_0+(-2\eta'/3,2\eta'/3)^k$. Let $U_i$ be the complement of this manifold in $a_0+(-2\eta'/3,2\eta'/3)^k$.
Moreover these $k+1$ one-codimensional manifolds are multi-transverse for $i\in J(a_0)$. As the parameter space $\mathbb I^k$ has dimension $k$, for a typical perturbation of $((\mathcal F^{uu}(S_{ia}))_a)_{i\in J(a_0)}$, the union $\cup_{i\in J(a_0)}U_i$ contains $a_0+(-2\eta'/3,2\eta'/3)^k$.
Thus a $C^{d,\infty}$-perturbation $(f''_a)_a\in \mathcal U^\infty$ of $(f_a)_a$ satisfies Proposition \ref{propfonda3} with sources $(S_{ia})_{a\in U_i}$ among $i\in \sqcup_{a_0\in \mathbb Z_{\eta'} }J(a_0)$. \section{Proof of Corollary \ref{theo2}} \label{diffcase} \begin{proof} Let $X\in \{A,PS\}$ and $1\le d\le r<\infty$. Let us come back to Example \ref{examfonda}. Let $N$ be a small neighborhood of $(D\sqcup I_S\sqcup I_P\sqcup I_{P'})\times [-3,3]$.
We recall that for every $M\ge 0$, we have constructed a $\mathcal U^{d,r}_X$-dense set $D$ of $C^{d,\infty}$-families $(f_a)_a$ so that for every $a\in \mathbb I^k$, the map $f_a$ displays a sink of period at least $M$ with orbit in $N$.
Let $\hat {\mathbb R}$ be the one-point compactification of $\mathbb R$, and let $\mathbb A= \hat {\mathbb R}\times [-4,4]$. Let $I$ be a compact neighborhood of infinity, so that $I\times [-4,4]$ does not intersect $N$.
As the map $Q\colon \sqcup_{\delta \in \Delta} I_\delta\sqcup I_S\sqcup I_P\to [-1,1]$ is orientation preserving and $f_a| I_{P'}\times [-3,3]$ as well, it is possible to extend each smooth family $(f_a)_a\in D$ to a family of local diffeomorphisms of $\mathbb A$ of degree $Card \, \Delta+3$ so that: \begin{enumerate}[$(i)$] \item for every $(f_a)_a\in D$, for every $a\in \mathbb I^k$, the map $f_a$ has a sink of period $\ge M$ outside of $I\times [-4,4]$.
\item for every $\theta\in \hat {\mathbb R}$, the map $f_a|\{\theta\}\times [-4,4]$ is injective. \end{enumerate} Let $ \tilde {\mathbb A} := \mathbb A \times (-1,1)^{n-2}$. Let $M$ be a manifold of dimension $n$.
By section 5, \textsection [inv. in dim. $\ge 4$ and $=3$] \cite{BE15}, we can find a small function $\rho \in C^\infty(\mathbb A,(-1,1)^{n-2})$ so that for $\lambda>0$ small compared to $\|\rho\|_{C^0}$, for every $(f_a)_a\in \mathcal U^{d,r}_X$, the following is a restriction of a $C^\infty$-family of diffeomorphisms of $M$: \[\tilde f_a \colon (z,h)\in \tilde {\mathbb A}\mapsto (f_a(z), \lambda h + \rho(z))\in \tilde{ \mathbb A} \] We notice that if $\pi$ denotes the projection $\pi(z,h)=z$, then it holds $\pi \circ \tilde f_a= f_a \circ \pi$. In fact, $\pi$ is the holonomy along strong local stable manifolds of the form $W^{ss}_{loc} ((z,h); \tilde f_a)= \{z\}\times (-1,1)^{n-2}$.
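The semiconjugacy $\pi \circ \tilde f_a= f_a \circ \pi$ is immediate from the skew-product form of $\tilde f_a$:

```latex
\[
\pi\circ \tilde f_a(z,h)\;=\;\pi\bigl(f_a(z),\,\lambda h+\rho(z)\bigr)\;=\;f_a(z)\;=\;f_a\circ\pi(z,h)\; .
\]
```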
We recall that $\mathcal U^{d,r}_X$ is the intersection of $\mathcal U^{d,d}_A$ with $C^{d,r}_X(\mathbb I^k, \mathbb A, \mathbb A)$ where $\mathcal U^{d,d}_A$ is the set of $C^{d,r}_A$-families which satisfy $(H_0\cdots H_4)$, \textsection 2.4.
Let us fix an infinitely smooth family $(f_a)_a\in \mathcal U^{d,d}_A\cap C^{\infty}(\mathbb I^k\times \mathbb A,\mathbb A)$, and let $\tilde {\mathcal U}^{d,d}_A$ be a small $C^{d,d}_A$-neighborhood of $(\tilde f_a)_a$. Put $\tilde {\mathcal U}^{d,r}_X = \tilde {\mathcal U}^{d,d}_A\cap C^{r}(\mathbb I^k\times \tilde{ \mathbb A} , \tilde{ \mathbb A} )$ , given a topology $X\in \{A,PS\}$ and $r\ge d$.
\begin{lemm}[Lemma 5.1 \cite{BE15}]\label{5.1} For every $\infty >r\ge d\ge 1$, for $\tilde {\mathcal U}^{d,d}_A$ small enough, for every $\lambda>0$ small enough, for every $(\tilde f_a')_a \in \tilde {\mathcal U}^{d,d}_A\cap C^{d+r+1}(\mathbb I^k\times \tilde{ \mathbb A}, \tilde{ \mathbb A})$, the continuations $(W^{ss}_{loc} (z; \tilde f'_a))_{z\in \mathbb A}$ of the strong stable manifolds form a fibration which is of class $C^{r+d}$
for every $a\in \mathbb I^k$. Moreover, the family $(\cup_{a\in \mathbb I^k} \{a\}\times W^{ss}_{loc} (z; \tilde f'_a))_{z\in \mathbb A}$ is of class $C^{d+r}$
and its family of tangent space is $C^{d-1,d-1}_A$-close to the family $(TW^{ss}_{loc} (z; \tilde f_a))_{z\in \mathbb A}$. \end{lemm}
Hence given $(\tilde f_a')_a \in \tilde {\mathcal U}^{d,d}_A\cap C^{d+r+1}(\mathbb I^k\times \tilde{ \mathbb A},\tilde{ \mathbb A})$, the holonomy along the strong stable foliation to the transverse section $\{h=0\}$ defines a $C^{d+r,d+r}_A$-family $(\pi'_a)_a$ of projections $\pi_a'\colon \tilde{ \mathbb A}\to \mathbb A$ which is $C^{d-1,d-1}_A$-close to $(\pi)_{a\in \mathbb I^k}$. Let $(f_a')_a\in C^{d+r,d+r}_A(\mathbb I^k,\mathbb A,\mathbb A)$ be defined by: \[ \pi_a'\circ \tilde f'_a= f_a'\circ \pi_a'\; .\] Unfortunately, $(f_a')_a$ is in general only $C^{d-1,d-1}_A$-close to $(f_a)_a$. Nevertheless we are going to show that: \begin{fact}\label{dernierfact} The $C^{d+r,d+r}_A$ family $(f_a')_a$ satisfies $(H_0, H_1, H_2,H_3, H_4)$. \end{fact} Consequently, by the main theorem, for every $N\ge 0$, there exists a $C^{d+r,d+r}_A$-perturbation $(f_a'')_a$ of $(f'_a)_a$ (which is also a $C^{d,r}_X$-perturbation) so that the map $f''_a$ displays a sink of period $\ge N$ for every $a\in \mathbb I^k$.
Let $\tilde f''_a$ be the map which sends $(z,h)\in \tilde {\mathbb A }$ to the point in $\pi'^{-1}_a(\{f''_a(z)\})$ with last $n-2$ coordinates equal to those of $\tilde f'_a(z,h)$. We observe that: \[\pi'_a \circ \tilde f''_a = f''_a \circ \pi'_a \; .\] Furthermore, by smoothness of the strong stable foliation, $(\tilde f''_a)_a$ is a $C^{d+r,d+r}_A$-perturbation of $(\tilde f'_a)_a$. In particular $(\tilde f''_a)_a$ is a $C^{d,r}_X$-perturbation of $(\tilde f'_a)_a$.
Also, since the fibers are contracted, for every $a\in \mathbb I^k$,
$\tilde f''_a$ has a sink of period $\ge N$. This proves the existence of a $C^{d,r}_X$-dense set of families of diffeomorphisms which display a sink of period $\ge N$ for every parameter $a\in \mathbb I^k$. Note that this set is necessarily open. The intersection of these open and dense sets among $N\ge 0$ is the residual set $\mathcal R$: a family therein displays a sink of arbitrarily large period and so infinitely many sinks, at every parameter $a\in \mathbb I^k$.
\end{proof} \begin{proof}[Proof of Fact \ref{dernierfact}]
We recall that $(f_a)_a$ satisfies $(H_0\cdots H_4)$ with a family of projectively hyperbolic sources $(S_a)_a$, an area contracting fixed point $(P_a)_a$, a $C^d$-parablender $(K_a)_a$ and a family of local unstable manifolds $(W^u_{loc} (\overleftarrow z; f_a))_{\overleftarrow z\in \overleftarrow K\; a\in \mathbb I^k}$. These sets are canonically embedded in $\{h=0\}$ as hyperbolic sets for the product dynamics of $(f_a)_a$ with $0$. Hence for $\lambda>0$ and $\rho$ small, they persist for $(\tilde f_a)_a$ as hyperbolic sets. Let $(\tilde S_a)_a$, $(\tilde P_a)$, $(\tilde K_a)_a$ and $(W^u(\overleftarrow z; \tilde f_a))_{\overleftarrow z\; a}$ be the hyperbolic continuations of them for $(\tilde f_a)_a$. Let $(\tilde S'_a)_a$, $(\tilde P'_a)$, $(\tilde K'_a)_a$ and $(W^u(\overleftarrow z; \tilde f'_a))_{\overleftarrow z\; a}$ be the hyperbolic continuations of them for the $C^{d+r+1}$-family $(\tilde f'_a)_a$. Let $a\in \mathbb I^k$. Note that the local strong stable manifolds of $\tilde S'_a$ and $\tilde P'_a$ intersect $\{ h=0\}$ at respectively a projectively hyperbolic source $S_a'$ and an area contracting saddle point $P'_a$ for $f'_a$.
Let us prove $(H_1)$ and $(H_2)$.
The dynamics of $\tilde f_a'$ restricted to a local unstable manifold $W^u_{loc} (\tilde S_a'; \tilde f_a')$ is $C^d$-close to $\tilde f_a| W^u_{loc} (\tilde S_a; \tilde f_a)$. By $(H_1)$, the latter can be chosen so that it projects diffeomorphically onto $B$ by $\pi$. Hence $W^u_{loc} (\tilde S_a'; \tilde f_a')$ projects diffeomorphically by $\pi_a'$ onto a domain $B'$ which is $C^0$-close to $B$: a basin of $S_a'$. By hyperbolic continuation, the image $K'_a=\pi_a'(\tilde K'_a)$ is close to $K_a$ (for the Hausdorff topology on compact subsets). Hence, it is included in $B'$ by $(H_1)$ for $f_a$. This shows the first part of $(H_1)$ for $f'_a$, for every $a\in \mathbb I^k$.
Since $\pi_a'| W^u_{loc} (\tilde S_a'; \tilde f_a')$ is a diffeomorphism, it suffices to verify the second part of $(H_1)$ and the first part of $(H_2)$ at $W^u_{loc} (\tilde S_a'; \tilde f_a')$.
To this end, observe that $W^u_{loc} (\tilde S_a'; \tilde f_a')$ is foliated by strong unstable manifolds $\tilde{\mathcal F}^{uu}$, and this foliation is $C^d$-close to the one of $W^u_{loc} (\tilde S_a; \tilde f_a)$. By $(H_2)$ for $f_a$, the latter foliation displays a robust tangency with $W^s_{loc}(P_a; f_a)\times (-1,1)^{n-2}$, which is $C^d$-close to a local stable manifold $W^s_{loc}(\tilde P'_a; \tilde f'_a)$ of $\tilde P'_a$. Hence the strong unstable foliation of $W^u_{loc} (\tilde S_a'; \tilde f_a')$ displays a robust tangency with $W^s_{loc}(\tilde P'_a; \tilde f'_a)$.
Looking at the image by the diffeomorphism $\pi_a'| W^u_{loc} (\tilde S_a'; \tilde f_a')$, it follows that $W^s(P_a'; f'_a)$ displays a robust tangency with the strong unstable foliation of $S'_a$: the first part of condition $(H_2)$ for $f_a'$.
Likewise, for every $z\in K_a$, a small local stable manifold $W^s_{loc} (z; f_a)$ for $f_a$ defines a local stable manifold $W^s_{loc} ( z; f_a)\times (-1,1)^{n-2}$ for $\tilde f_a$. The latter is transverse to $\tilde{\mathcal F}^{uu}$ by $(H_1)$. Hence by hyperbolic continuation, the final part of $(H_1)$ is also satisfied for $f_a'$.
Furthermore, by $(H_2)$ for $f_a$, the unstable manifold $W^u(\tilde P_a ;\tilde f_a)$ displays a transverse intersection with $W^s_{loc} ( z; f_a)\times (-1,1)^{n-2}$ for a certain $z\in K_a$. By hyperbolic continuation, $W^u(\tilde P'_a ;\tilde f'_a)$ displays a transverse intersection with $W^s_{loc} ( \tilde z; \tilde f'_a)$ for $\tilde z$ in $\tilde K_a'$. This transverse intersection projects by $\pi'_a$ to a transverse intersection between $W^u(P'_a ; f'_a)$ and $W^s_{loc} ( \pi_a'(\tilde z); f'_a)$. This shows the final part of $(H_2)$ for $f_a'$.
Therefore $f'_a$ satisfies $(H_1)$ and $(H_2)$ for every $a\in \mathbb I^k$.
Let us show $(H_4)$. We recall that a local strong stable manifold of $\tilde S_a$ is a vertical segment. It persists as a local strong stable manifold $W^{ss}_{loc} (\tilde S_a'; \tilde f'_a)$ of $\tilde S'_a$ which is close to a vertical segment.
Let $E^c_a$ (resp. $E'^c_a$) be the sum of the strong stable direction and the weak unstable direction of $\tilde S_a$ (resp. $\tilde S'_a$). The one-codimensional plane $E'^c_a$ extends to a unique plane field $E'^c_a$ over $W^{ss}(\tilde S'_a; \tilde f'_a)$ which is at most weakly expanded. This is easily shown by using a cone field argument. The latter shows moreover that $E'^c_a$ is $C^0$-close to the constant plane field equal to $E^c_a$. A vector in $D\pi'_a( E'^c_a)$ is at most weakly expanded by $D_{S'_a} f_a'^n$ since $Df_a'^n\circ D\pi_a' = D\pi_a' \circ D\tilde f_a'^n$. Hence $D\pi'_a( E'^c_a)$ is equal to the weak unstable direction of $S'_a$.
On the other hand, let us consider a local unstable manifold $W^u_{loc} (\overleftarrow z; f_a)$ of the family satisfying $(H_3-H_4)$ for $(f_a)_a$. It persists as a local unstable manifold $W^u_{loc} (\overleftarrow z; \tilde f'_a)$ which is $C^1$-close to $W^u_{loc} (\overleftarrow z; f_a)\times \{0\}$. The latter cannot be tangent to $E^c_a$ by $(H_4)$ for $(f_a)_a$. Hence $W^u_{loc} (\overleftarrow z; \tilde f'_a)$ cannot be tangent to $E'^c_a$. Note that $W^u_{loc} (\overleftarrow z; \tilde f'_a)$ is sent by $\pi_a'$ to $W^u_{loc} (\overleftarrow z; f'_a)$. Thus $W^u_{loc} (\overleftarrow z; f'_a)$ cannot be tangent to the weak unstable direction of $S_a'$. This completes the proof of $(H_4)$.
Let us show $(H_3)$. Let us call a vertical $\text{codim }2$-$C^d$-submanifold any product of a $C^d$-curve in $\mathbb I^k\times \mathbb A$ with $(-1,1)^{n-2}$. By strong contraction, we notice that there is an open set $\mathcal V$ of $\text{codim }2$-$C^d$-submanifolds which are $C^d$-close to vertical ones, and which is left invariant by $(a,z)\mapsto (a,\tilde f_a(z))$ and $(a,z)\mapsto (a,\tilde f'_a(z))$.
As $\cup_{a} \{(a,S_a)\} \times (-1,1)^{n-2}=\cup_{a} \{a\}\times W^{ss}_{loc} (\tilde S_a; \tilde f_a)$ is in $\mathcal V$, the hyperbolic continuation $\cup_{a} \{a\}\times W^{ss}_{loc} (\tilde S'_a; \tilde f'_a)$ is in $\mathcal V$ as well. Hence its intersection $(S'_a)_a$ with $\{h=0\}$ is $C^d$-close to $(S_a)_a$.
To complete the proof of $(H_3)$, it suffices to show that $(K'_a)_a$ is a $C^d$-parablender with $\hat O$ as covered domain (see Example \ref{expparablender} for the explicit definition of $\hat O$).
Let $a_0\in \mathbb I^k$ and let $(z_a)_a$ be a $C^d$-curve with $J^d_{a_0}(z_a)_a\in \hat O$. Let $W^{ss}(\tilde z_a; \tilde f_a')$ be the strong stable manifold of $\tilde z_a= (z_a,0)\in \tilde {\mathbb A}$. We notice that $\cup_{a\in \mathbb I^k} \{a\}\times W^{ss}(\tilde z_a; \tilde f_a')$ is in $\mathcal V$ and so $(W^{ss}(\tilde z_a; \tilde f_a'))_a$ is $C^{d,d}_A$-close to the family $(\{z_a\}\times (-1,1)^{n-2})_a$. Hence, by the covering property of Example \ref{expparablender}, there exists $\delta_{-1}\in \Delta$ so that the intersection point $\{(z_a(h),h)\}=W^{ss}_{loc}(\tilde z_a; \tilde f'_a)\cap (\mathbb A\times \{h\})$ satisfies that $J^d_{a_0} (z_a(h))_a$ is close to $\hat O_{\delta_{-1}}$, for every $h\in (-1,1)^{n-2}$.
Let $W^{ss}_{loc}(\tilde z^{-1}_a; \tilde f'_a)$ be the component of $\tilde f_a'^{-1}(W^{ss}_{loc}(\tilde z_a;\tilde f'_a))\cap \tilde {\mathbb A}$ associated to the symbol $\delta_{-1}$. By the covering property, the intersection point $z_a^{-1}$ of $W^{ss}_{loc}(\tilde z^{-1}_a; \tilde f'_a)$ with $\{h=0\}$ displays a $C^d$-jet at $a=a_0$ in $\hat O$. By definition, $z_a^{-1}$ is the preimage by $f'_a$ of $z_a$ associated to $\delta_{-1}$.
Again, we notice that $(W^{ss}_{loc}(\tilde z^{-1}_a; \tilde f'_a))_a$ is $C^{d,d}_A$-close to the family $(\{z^{-1}_a\}\times (-1,1)^{n-2})_a$, and so there exists $\delta_{-2}\in \Delta$ so that the points of $W^{ss}_{loc}(\tilde z^{-1}_a; \tilde f'_a)$ display $C^d$-jets at $a=a_0$ close to $\hat O_{\delta_{-2}}$.
Continuing in this way, we define a sequence of preimages $((z^{-i}_a)_{i\ge 1})_a$ of $(z_a)_a$ by $(f'_a)_a$ associated to a preorbit of symbols $\underline \delta\in \Delta^{\mathbb Z^-}$, and such that $J^d_{a_0}(z^{-i}_a)_a$ is in $\hat O$ for every $i$. Let $W^u_{loc}(\underline \delta ; f'_a)$
be the local unstable manifolds associated to the preorbit $\underline \delta$.
As in Example \ref{expparablender}, this implies the existence of a $C^d$-family of points $(Q_a)_a\in (W^u_{loc} (\underline \delta ; f'_a))_a$ satisfying $J^d_{a_0}(Q_a)_a = J^d_{a_0} (z_a)_a$.
This proves that $\hat O$ is a covered domain of the $C^d$-parablender $(K'_a)_a$. \end{proof}
\end{document} |
\begin{document}
\title{Rational singular loci of nilpotent varieties} \author{William M. McGovern} \subjclass{22E47,57S25} \keywords{rational smoothness, intersection cohomology, nilpotent variety} \begin{abstract} We present two methods for computing the rational singular locus of the closure of a nilpotent orbit in a complex semisimple Lie algebra and give a number of interesting examples. \end{abstract} \maketitle
\section{Introduction} Let $G$ be a complex reductive group with Lie algebra $\frak g$. Let $\mathcal N$ be the nilcone of $\frak g$ and let $X=\overline{G\cdot e}$ be a nilpotent variety in $\mathcal N$ (the closure of a nilpotent orbit $\mathcal O=G\cdot e$). Recall that a point $x$ of $X$ is said to be rationally smooth if for all $y$ in a neighborhood of $x$ in the complex topology we have $H_y^m(X) = 0$ for all $m\ne 2\dim X$ and $H_y^{2\dim X}(X) = \mathbb Q$, where $H_y$ denotes cohomology with support in $\{y\}$; here we can use either singular or intersection cohomology \cite{GM83}. The rational singular locus Rat Sing~$X$ consists of all points of $X$ at which $X$ is rationally singular (not smooth); this is a closed subset of $X$. Rational smoothness has played a major role in representation theory ever since the seminal work of Kazhdan and Lusztig in the seventies \cite{KL79}; it also provided one of the first applications of intersection cohomology. We will describe two methods of computing Rat Sing~$X$ and give some examples.
\section{First method--Brion}
Our first method applies to very general varieties $X$; they need not admit dense orbits under the action of any reductive group, but they must carry an action of a torus $T$ of dimension at least two. This method uses techniques of Brion first developed in \cite{Br99}.
\newtheorem*{theorem}{Theorem} \begin{theorem} Suppose that the variety $X$ admits an action of a torus $T$ such that $x\in X$ is an attractive fixed point of the $T$ action (so that all weights of $T$ on the tangent space $T_x X$ lie in an open half-space). Then $X$ is rationally smooth at $x$ if and only if
\begin{itemize} \item Some punctured neighborhood of $x$ in $X$ is rationally smooth.
\item $X^{T'}$ is rationally smooth at $x$ for every subtorus $T'$ of codimension one in $T$.
\item $\dim_x X = \sum_{T'} \dim_x X^{T'}$ (sum over all subtori $T'$ of codimension one in $T$).
\end{itemize} In addition, for any $T$-fixed point $x$ (attractive or not), the second condition is necessary for rational smoothness at $x$, and the third one becomes a necessary condition if $\dim_x X^T$ is subtracted from the left side and from every term on the right side. \end{theorem}
For the proof see \cite[1.1,1.4]{Br99}. Here the first of these conditions is typically hard to verify, but essential: the second and third (in the absence of the first) might hold for one torus $T$ but fail for another, and in addition the first condition is needed to check that the cohomology vanishes at every point in a neighborhood of the one in question. We can apply these conditions in particular to a nilpotent variety $\overline{G\cdot e}$ and a point $x$ on it, provided that a torus $T$ of dimension at least two fixes $x$: this will occur whenever $x$ lies in the derived subalgebra of a Levi subalgebra of $\frak g$ of corank at least two. The case $x=0$ is especially easy; here we may take the torus to be the direct product of a maximal torus $T$ of $G$ and a 1-torus $\mathbb C^*$, the latter acting on $\frak g$ by scalar multiplication. Then $0$ is always an attractive fixed point for the action of this larger torus.
\newtheorem*{corollary}{Corollary} \begin{corollary} The full nilcone $\mathcal N$ is rationally smooth. Every nilpotent variety $\bar{\mathcal O}$ different from $\mathcal N$ and 0 is rationally singular at 0, with two exceptions, namely the closures of the minimal orbit in types $C_n$ and $G_2$. \end{corollary}
The rational singularity result is a straightforward consequence of the last of Brion's criteria for rational smoothness: unless $\mathcal O$ is the minimal orbit, the subtori $T'$ contributing to the right side are exactly the centralizers of the positive root spaces in $\frak g$ and every nonzero term on the right side equals 2, whence the right side equals the dimension of $\mathcal N$ rather than $\bar{\mathcal O}$. Even if $\mathcal O$ is minimal, the right side winds up being too large, except in types $C_n$ and $G_2$. In these cases all three of Brion's criteria hold, the last two by direct calculation and the first one since there is only one singular point. Thus these varieties are rationally smooth. That $\mathcal N$ is rationally smooth is an old result of Borho and MacPherson \cite{BM83}; we will sketch their argument in the next section. This result can also be proved using Brion's techniques, by constructing a slice of $\mathcal N$ in the sense of \cite[2.1]{Br99}. This can be done inductively, starting with the full nilcone $\mathcal N'$ for $\frak{sl}_2$, which is well known to be rationally smooth.
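The count behind the rational singularity statement can be sketched as follows: at $x=0$, with $\Phi^+$ the set of positive roots, each contributing subtorus $T'$ is the centralizer of a positive root space and contributes a term equal to $2$, so that

```latex
\[
\sum_{T'} \dim_0 \bar{\mathcal O}^{T'}
\;=\; \sum_{\alpha\in\Phi^+} 2
\;=\; 2\,|\Phi^+|
\;=\; \dim \mathcal N
\;>\; \dim \bar{\mathcal O}\, ,
\]
```

violating the third criterion whenever $\mathcal O$ is a nonminimal orbit with $\bar{\mathcal O}\ne\mathcal N$.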
It seems difficult in general to prove rational smoothness of nilpotent varieties using these techniques (because of the difficulty of verifying the first of Brion's conditions), but one can for example show that the rational singular locus of any spherical nilpotent variety in type $C$ lies one step below its boundary (in the chain of spherical varieties, ordered by inclusion; recall that a nilpotent orbit or its variety is called spherical if it admits a dense suborbit under the action of a Borel subgroup). The same result holds for the largest spherical variety in type $A$ in odd rank; for other spherical varieties in this type, the rational singular locus coincides with the boundary.
\section{Second method--Borho-MacPherson}
We turn now to the second method, which is due to Borho and MacPherson and applies specifically to nilpotent varieties \cite{BM83}. It uses the heavier machinery of intersection homology, but is able to compute the dimensions of the cohomology groups (rather than just determining whether they vanish outside the top degree). We begin by invoking the Springer correspondence: given a nilpotent element $e$, the Springer fiber $\mathcal B_e$ of Borel subalgebras containing $e$ has (singular) cohomology groups that carry commuting actions of the component group $A(G\cdot e)$ of the centralizer of $e$ in $G$ and the Weyl group $W$. The representation $\sigma_e$ of $W$ on the $A(G\cdot e)$-fixed vectors in the top cohomology group is then irreducible \cite{S78}. Given now another nilpotent element $x$, the Borho-MacPherson Decomposition Theorem \cite{BM83} then implies that $$ \dim IH^i_x(\bar{\mathcal O},\mathbb Q) = \dim\hom (\sigma_e,H^i(\mathcal B_x,\mathbb Q)) $$ \noindent where as usual $\bar{\mathcal O}$ denotes the closure of the orbit through $e$. Here the left hand side denotes the intersection homology groups of $\bar{\mathcal O}$, which are naturally isomorphic to its cohomology groups in the complementary degree. Hence $\bar{\mathcal O}$ is rationally smooth at $x$ if and only if $IH^i_x(\bar{\mathcal O},\mathbb Q)$ vanishes in positive degree. If $\bar{\mathcal O} =\mathcal N$, then $\sigma_e$ is trivial and an old computation of Lusztig shows that $\sigma_e$ occurs once in $H^i(\mathcal B_x,\mathbb Q)$ if $i=0$ and not at all if $i>0$ (for any $x$), so $\mathcal N$ is rationally smooth. In general it is not easy to compute the module structure of $H^*(\mathcal B_x,\mathbb Q)$, but explicit tables are available in the exceptional cases \cite{BS84} while in the classical cases one has algorithms due to Shoji and Lusztig \cite{L81,FMM13,Sh83}.
An easy special case occurs when $x$ is regular nilpotent in some proper Levi subalgebra $\frak l$ of $\frak g$: if $W_L\subset W$ is the Weyl group of $\frak l$, then $H^*(\mathcal B_x,\mathbb Q)$ is just the permutation representation of $W$ on $W/W_L$ \cite{AL82}. This permutation representation can be computed by the Littlewood-Richardson rule in the classical cases and by the tables of Alvis in the exceptional ones.
To determine rational smoothness at $x$ it remains to compute $IH^*_y$ for all $y$ in a neighborhood of $x$; by $G$-equivariance it suffices to compute this group for each of the finitely many orbits $G\cdot y$ whose closures lie between $\overline{G\cdot x}$ and $\bar{\mathcal O}$. One could ask whether cohomology vanishing at $x$ implies cohomology vanishing in a neighborhood (as it does for Schubert varieties). The answer is no, already in type $C_3$: there one computes that the cohomology of $\bar{\mathcal O}_{3,3}$ vanishes at points of $\mathcal O_{2,1^4}$, but not at points of $\mathcal O_{2^2,1^2}$, where $\mathcal O_{\lambda}$ denotes the orbit with partition $\lambda$ and exponents in partitions as usual denote repeated parts. It also fails in type $D_4$: the cohomology of $\bar{\mathcal O}_{5,3}$ vanishes at points of $\mathcal O_{3,2^2,1}$ but not at points of $\mathcal O_{3^2,1^2}$. But it holds in type $A$, as follows from the combinatorics of Kostka numbers and the fact that every nilpotent element in that type is regular in some Levi subalgebra.
We conclude with an example of a rational singular locus of codimension 2 (this cannot happen for Schubert varieties, or for closures of $K$-orbits in the flag variety $G/B$, where $K$ is a symmetric subgroup of $G$). Take $\mathcal O$ to be the orbit with Bala-Carter label $A_4$ in type $E_6$; this has dimension 60. Applying the second method and using \cite{BS84} we compute that the rational singular locus of $\bar{\mathcal O}$ coincides with its boundary and has dimension 58; this is the closure of the orbit $\mathcal O'$ with Bala-Carter label $D_4(a_1)$. In this case $\sigma_e$ occurs with multiplicity 3 in $H^*(\mathcal B_x)$ (where $e\in\mathcal O,x\in\mathcal O'$). In fact $\sigma_e $ is paired with the two-dimensional reflection representation of the component group $A(\mathcal O')$ in the Springer correspondence. The corresponding phenomenon also occurs for one of the 42-dimensional orbits in type $F_4$ and the (unique) 40-dimensional orbit contained in its closure; there the component group of the smaller orbit is the symmetric group $S_4$ and so once again the multiplicity of the relevant Springer representation is larger than one.
\end{document}
\begin{document}
\baselineskip=17pt
\subjclass[2020]{11B68, 11D41} \keywords{Diophantine equations, exponential equations, Bernoulli polynomials}
\title[On equal values of products and power sums...]{On equal values of products and power sums of consecutive elements in an arithmetic progression}
\author[A. Bazs\'o, D. Kreso, F. Luca and \'A. Pint\'er]{A. Bazs\'o, D. Kreso, F. Luca, \'A. Pint\'er, and Cs. Rakaczki}
\address{A. Bazs\'o \newline \indent Institute of Mathematics \newline \indent University of Debrecen \newline \indent P.O. Box 400, H-4002 Debrecen, Hungary \newline \indent and \newline \indent ELKH-DE Equations, Functions, Curves and their Applications Research Group} \email{bazsoa@science.unideb.hu}
\address{D. Kreso \newline \indent Institut f\"ur Mathematik \newline \indent Technische Universit\"at Graz \newline \indent Steyrergasse 30, 8010 Graz, Austria} \email{kreso@math.tugraz.at}
\address{F. Luca \newline \indent School of Mathematics \newline \indent Wits University \newline \indent 1 Jan Smuts, Braamfontein, 2000 Johannesburg, South Africa, \newline
\indent Research Group in Algebraic Structures and Applications \newline \indent King Abdulaziz University \newline \indent Abdulah Sulayman, Jeddah 22254, Saudi Arabia, \newline \indent and \newline
\indent Mathematical Institute \newline \indent UNAM Ap. Postal 61-3 (Xangari) \newline \indent CP 58 089. Morelia, Michoac\'an, Mexico} \email{florian.luca@wits.ac.za}
\address{\'A. Pint\'er \newline \indent Institute of Mathematics \newline \indent University of Debrecen \newline \indent P.O. Box 400, H-4002 Debrecen, Hungary} \email{apinter@science.unideb.hu}
\address{Cs. Rakaczki\newline \indent Institute of Mathematics\newline \indent University of Miskolc \newline \indent H-3515 Miskolc Campus, Hungary} \email{matrcs@uni-miskolc.hu}
\thanks{}
\date{}
\begin{abstract} In this paper we study the Diophantine equation \begin{align*} b^k + \left(a+b\right)^k + &\left(2a+b\right)^k + \ldots + \left(a\left(x-1\right) + b\right)^k = \\ &y\left(y+c\right) \left(y+2c\right) \ldots \left(y+ \left(\ell-1\right)c\right), \end{align*} where $a,b,c,k,\ell$ are given integers satisfying natural conditions. We prove some effective results for special values of $c,k$ and $\ell$, and obtain a general ineffective result based on the Bilu--Tichy method. \end{abstract}
\maketitle
\section{Introduction}
The polynomials \begin{equation} S_{a,b}^k \left(x\right) = b^k + \left(a+b\right)^k + \left(2a+b\right)^k + \ldots + \left(a\left(x-1\right) + b\right)^k \label{pol:skabx} \end{equation} and \begin{equation} R_c^{\ell} \left(x\right) = x\left(x+c\right) \left(x+2c\right) \ldots \left(x+ \left(\ell-1\right)c\right), \end{equation} are natural generalizations of the widely studied polynomials $S_k (x) = S_{1,0}^k (x)$ and $R_{\ell} (x) =R_1 ^{\ell} (x)$, respectively. Various Diophantine equations concerning $R_{\ell} (x)$ and $S_k (x)$ have been extensively investigated. See e.g. \cite{BBKPT} and the references given there.
Using Bernoulli polynomials, it is easy to see that the polynomial defined in \eqref{pol:skabx} can be rewritten as \begin{equation} \label{eq:mainI} S_{a,b}^k \left(x\right) = \frac{a^k}{k+1} \left(B_{k+1} \left(x+ \frac{b}{a}\right) - B_{k+1} \left(\frac{b}{a}\right)\right). \end{equation}
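As a quick sanity check of the identity \eqref{eq:mainI} (an illustrative Python sketch, not part of the paper; it uses only the standard library, with Bernoulli numbers generated by the usual defining recurrence), one can compare the power sum with the Bernoulli-polynomial expression in exact rational arithmetic:

```python
from fractions import Fraction
from math import comb

def bernoulli_numbers(n):
    # B_0, ..., B_n via the defining recurrence
    # sum_{j=0}^{m} C(m+1, j) B_j = 0 for m >= 1 (so B_1 = -1/2)
    B = [Fraction(1)]
    for m in range(1, n + 1):
        B.append(-sum(comb(m + 1, j) * B[j] for j in range(m)) / (m + 1))
    return B

def bernoulli_poly(n, x):
    # B_n(x) = sum_j C(n, j) B_j x^{n-j}
    B = bernoulli_numbers(n)
    return sum(comb(n, j) * B[j] * x ** (n - j) for j in range(n + 1))

def power_sum(a, b, k, x):
    # S^k_{a,b}(x) = b^k + (a+b)^k + ... + (a(x-1)+b)^k, computed directly
    return sum((a * i + b) ** k for i in range(x))

def power_sum_bernoulli(a, b, k, x):
    # the right-hand side of the displayed identity
    r = Fraction(b, a)
    return Fraction(a ** k, k + 1) * (bernoulli_poly(k + 1, x + r)
                                      - bernoulli_poly(k + 1, r))
```

For instance, both functions give $1^3+3^3+5^3+7^3=496$ for $(a,b,k,x)=(2,1,3,4)$.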
In \cite{BBKPT}, Bilu, Brindza, Kirschenhofer, Pint\'er and Tichy proved that for $k \geq 1, \ell \geq 2$, and $(k,\ell) \neq (1,2)$, the equation $S_k (x) = R_{\ell} (y)$ has at most finitely many integer solutions. They also proved a similar result for the equation $S_k (x) = S_{\ell} (y)$. Both of these results were ineffective, since their proofs were mainly based on the general finiteness criterion of Bilu and Tichy \cite{BiluTichy} for Diophantine equations of the form $f(x) = g(y)$. In certain special cases they also proved effective finiteness results for the corresponding equations.
In our earlier paper \cite{BKLP}, using a slightly modified approach, we generalized the above result of Bilu, Brindza, Kirschenhofer, Pint\'er and Tichy \cite{BBKPT} concerning the equation $S_k (x) = S_{\ell} (y)$ by proving that the more general equation $S_{a,b}^k (x) = S_{c,d}^{\ell} (y)$ has at most finitely many solutions in rational integers $x,y$. This theorem is also ineffective, but it is made effective in some special cases.
The purpose of this paper is to study the equation \begin{equation} \label{eq:RS} S_{a,b}^k (x) = R_{c}^{\ell} (y) \end{equation} in integers $x,y$.
As a first result, we prove a generalization of the original Sch\"affer problem on power values of power sums.
\begin{theorem} \label{thm:eff1} Let $a,b$ be rational integers with $a>0$ and suppose that $c=0$. We consider equation \begin{equation}\label{eq:c=0} S_{a,b}^k (x) =y^{\ell} \end{equation} in integers $x,y>1$ and $\ell\geq 2$. If $(k,a,b)\neq (1,2,1)$ then $\ell<C_1$ where $C_1$ is an effectively computable constant depending only on $k,a$ and $b$. Further, apart from the cases $$(k,\ell,a,b)\in\{ (1,2,a,0),(3,2,a,0),(3,2,2,1),(3,4,a,0), (5,2,a,0) \}$$
equation (\ref{eq:c=0}) implies $\max(|x|,|y|)<C_2$ where $C_2$ is an effectively computable constant depending on $k,\ell,a$ and $b$. \end{theorem}
In the degenerate case $(k,a,b)=(1,2,1)$ we have $x^2=y^{\ell}$, thus we cannot give an upper bound for $\ell$. Of the five exceptional cases, four are the well-known examples of Sch\"affer ($b=0$), while if $(k,\ell,a,b)=(3,2,2,1)$ we get the equation $x^2(2x^2-1)=y^2$, and the theory of Pell equations yields infinitely many integer solutions $x,y$. We remark that Bazs\'o \cite{B} considered a more general case for shifted power values of power sums.
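In the exceptional case $(k,\ell,a,b)=(3,2,2,1)$ the equation $x^2(2x^2-1)=y^2$ forces $2x^2-1=z^2$, a Pell-type equation, with $y=xz$. A small brute-force search (an illustrative Python sketch, not part of the paper) exhibits the first few solutions:

```python
from math import isqrt

def solutions(bound):
    # x^2 (2x^2 - 1) = y^2 with x > 0 forces 2x^2 - 1 = z^2, and then y = x*z
    sols = []
    for x in range(1, bound):
        z2 = 2 * x * x - 1
        z = isqrt(z2)
        if z * z == z2:
            sols.append((x, x * z))
    return sols
```

The $x$-values found ($1, 5, 29, 169, 985, 5741, \dots$) satisfy the Pell recurrence $x_{n+1}=6x_n-x_{n-1}$, which is why there are infinitely many solutions.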
One can treat the cases when the parameters $k$ or $\ell$ are small. Set $I_1=\{(1,2),(1,4),(3,2),(3,4) \}$ and $I_2=I_1\cup \{(5,2),(5,4)\}$. We obtain
\begin{theorem} \label{thm:eff2}
Let $\ell\geq 2$ be a rational integer and $k\in\{1,3\}$ with $(k,\ell)\notin I_1$, or let $k\geq 1$ be a rational integer and $\ell\in \{2,4\}$ with $(k,\ell)\notin I_2$. Then equation (\ref{eq:RS}) has only finitely many solutions in integers $x$ and $y$, and $\max(|x|,|y|)$ is bounded by an effectively computable constant depending on $a,b,c$ and $\ell$ or $k$, respectively. \end{theorem}
When $(k,\ell)=(1,2)$, equation (\ref{eq:RS}) leads to the Pellian equation $$ (2ax+2b-a)^2-2a(2y+c)^2=(2b-a)^2-2ac^2. $$ Now, there are infinitely many solutions in positive integers $a,b,c$ of the equation $(2b-a)^2-2c^2=1$, and similarly, for a fixed triplet $(a,b,c)$ there exist infinitely many solutions $x,y$ of the previous Pellian equation.
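The completion of squares behind this Pellian equation can be verified mechanically. The following illustrative Python sketch (not part of the paper, standard library only) checks the polynomial identity $(2ax+2b-a)^2-2a(2y+c)^2-\big((2b-a)^2-2ac^2\big)=8a\big(S_{a,b}^1(x)-y(y+c)\big)$ over a grid of parameters:

```python
from itertools import product

def S1(a, b, x):
    # S^1_{a,b}(x) = b + (a+b) + ... + (a(x-1)+b) = x(ax + 2b - a)/2;
    # the product x(ax + 2b - a) is always even, so // is exact
    return x * (a * x + 2 * b - a) // 2

def pell_identity_gap(a, b, c, x, y):
    # should be identically zero: it equals 8a * (S^1_{a,b}(x) - y(y+c))
    # minus the corresponding difference of the two sides of the Pellian equation
    lhs = (2 * a * x + 2 * b - a) ** 2 - 2 * a * (2 * y + c) ** 2
    rhs = (2 * b - a) ** 2 - 2 * a * c ** 2
    return lhs - rhs - 8 * a * (S1(a, b, x) - y * (y + c))
```

Since the gap vanishes identically, $S_{a,b}^1(x)=y(y+c)$ holds exactly when the Pellian equation does.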
Focusing on the exceptional cases we have
\begin{theorem} \label{thm:eff3}
Apart from the cases \begin{itemize}
\item $(k,\ell,a,b,c)=(1,4,2,b,c)$ with $b=\pm 2c^2+1$, \item $(k,\ell,a,b,c)=(3,2,a,b,c)$ with $\frac{c^2}{a^3}-B_4\left( \frac{b}{a} \right)=\frac{1}{30}$ or $-\frac{7}{240}$, \item $(k,\ell,a,b,c)=(3,4,1,b,c)$ with $b(b-1)=2c^2$, \item $(k,\ell,a,b,c)=(5,2,a,b,c)$ with $\frac{3c^2}{2a^5}-B_6\left( \frac{b}{a} \right)=-\frac{1}{42}$ or $-\frac{1}{189}$,
and \item $ (k,\ell,a,b,c)=(5,4,a,b,c)$ with $\frac{6c^4}{a^5}-B_6\left( \frac{b}{a} \right)=-\frac{1}{42}$ or $-\frac{1}{189},$ \end{itemize} the equations $$S_{a,b}^1 (x) = R_{c}^{4} (y), S_{a,b}^3 (x) = R_{c}^{2} (y),S_{a,b}^3 (x) = R_{c}^{4} (y),S_{a,b}^5 (x) = R_{c}^{2} (y),$$
and $S_{a,b}^5 (x) = R_{c}^{4} (y)$, respectively, in integers $x,y$ imply $\max(|x|,|y|)<C_3,$ where $C_3$ is an effectively computable constant depending on $a,b$ and $c$.
\end{theorem}
Our main result is the following general analogue of Theorem 1.1 in \cite{BBKPT}. \begin{theorem} \label{thm:main} Let $k,\ell$ be rational integers with $k\geq 2, k\notin\{3,5\}$ and $\ell=3$ or $\ell\geq 5$. Then for all nonzero integers $a,b,c$ with $\gcd(a,b)=1$ equation (\ref{eq:RS}) has only finitely many solutions $(x,y)$. \end{theorem}
\section{Auxiliary results}
We denote by $\mathbb{C}[x]$ the ring of polynomials in the variable $x$ with complex coefficients. A decomposition of a polynomial $F(x) \in \mathbb{C}[x]$ is an equality of the following form $$ F(x) = G_1 (G_2 (x)) \ \ \ (G_1 (x), G_2 (x) \in \mathbb{C}[x]), $$ which is nontrivial if $$ \deg G_1 (x) > 1 \ \ \ \text{and} \ \ \ \deg G_2 (x) > 1. $$ Two decompositions $F(x) = G_1 (G_2 (x))$ and $F(x) = H_1 (H_2 (x))$ are said to be equivalent if there exists a linear polynomial $L (x) \in \mathbb{C}[x]$ such that $G_1 (x) = H_1 (L (x))$ and $H_2 (x) = L (G_2 (x))$. The polynomial $F(x)$ is called decomposable if it has at least one nontrivial decomposition; otherwise it is said to be indecomposable.
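For instance, $F(x)=(x^2+1)^2$ admits the two equivalent decompositions $G_1(u)=u^2$, $G_2(x)=x^2+1$ and $H_1(u)=(u+1)^2$, $H_2(x)=x^2$, linked by the linear polynomial $L(u)=u-1$ via $G_1=H_1\circ L$ and $H_2=L\circ G_2$. A pointwise check (an illustrative Python sketch, not part of the paper; agreement of degree-4 polynomials at 11 points already implies equality as polynomials):

```python
# F has two decompositions ...
F = lambda x: (x * x + 1) ** 2
G1, G2 = (lambda u: u * u), (lambda x: x * x + 1)    # F = G1(G2(x))
H1, H2 = (lambda u: (u + 1) ** 2), (lambda x: x * x) # F = H1(H2(x))
# ... made equivalent by the linear polynomial L:
L = lambda u: u - 1                                  # G1 = H1(L(.)), H2 = L(G2(.))
```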
Bazs\'o, Pint\'er and Srivastava \cite{BPS} recently proved the following theorem about the decomposition of the polynomial $S_{a,b}^k \left(x\right)$.
\begin{lemma} \label{thm:BPS} The polynomial $S_{a,b}^k \left(x\right)$ is indecomposable for even $k$. If $k=2v-1$ is odd, then any nontrivial decomposition of $S_{a,b}^k \left(x\right)$ is equivalent to the following decomposition: \begin{equation} S_{a,b}^k \left(x\right) = \widehat{S}_v \left(\left(x+\frac{b}{a} - \frac{1}{2}\right)^2\right), \end{equation} where $\widehat{S}_v(x)$ is a (uniquely determined) polynomial of degree $v$. \end{lemma}
\begin{proof}[Proof of Lemma \ref{thm:BPS}] This is Theorem 2 of \cite{BPS}. \end{proof}
For classifying the decompositions of the polynomial $R_c^{\ell} (x)$ we need the following lemma.
\begin{lemma} \label{lemma:Rlx} The polynomial $R_{\ell} (x) =R_1 ^{\ell} (x)$ is indecomposable if $\ell$ is odd. If $\ell=2m$ is even then any nontrivial decomposition of $R_{\ell}(x)$ is equivalent to \begin{equation} R_{\ell}(x)=\widehat{R}_m((x+(\ell-1)/2)^2), \end{equation} where $$\widehat{R}_m(x)=\left(x-\frac{1}{4}\right)\left(x-\frac{9}{4}\right)\cdots \left(x-\frac{(2m-1)^2}{4}\right).$$ In particular, the polynomial $\widehat{R}_m(x)$ is indecomposable for any $m$. \end{lemma}
\begin{proof}[Proof of Lemma \ref{lemma:Rlx}] See Theorem 4.3 in \cite{BBKPT}. \end{proof}
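The even case of the lemma is easy to check numerically: the following illustrative Python sketch (not part of the paper) verifies $R_{\ell}(x)=\widehat{R}_{m}\big((x+(\ell-1)/2)^2\big)$ for $\ell=4,6,8$ in exact rational arithmetic; agreement at more than $\ell$ points pins down the polynomial identity.

```python
from fractions import Fraction

def R(l, x):
    # R_l(x) = x (x+1) ... (x + l - 1)
    p = Fraction(1)
    for j in range(l):
        p *= x + j
    return p

def R_hat(m, u):
    # (u - 1/4)(u - 9/4) ... (u - (2m-1)^2/4)
    p = Fraction(1)
    for j in range(1, m + 1):
        p *= u - Fraction((2 * j - 1) ** 2, 4)
    return p
```

For example, with $\ell=4$ the shift $t=x+3/2$ gives $(t^2-\frac14)(t^2-\frac94)=(x+1)(x+2)\cdot x(x+3)=R_4(x)$.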
The proof of the general case is based on the previous lemma and on the easy observation
\begin{equation} \label{eq:obser} R_c^{\ell} (x)=c^{\ell} R_{\ell} \left(\frac{x}{c}\right). \end{equation} \begin{lemma} \label{lemma:Rclx} The polynomial $R_c^{\ell} (x)$ is indecomposable if $\ell$ is odd. If $\ell=2m$ is even, then any nontrivial decomposition of $R_c^{\ell} (x)$ is equivalent to $$ R_c^{\ell} (x) = \widehat{R}_c^m \left(\left(x + \frac{(\ell-1)c}{2}\right)^2\right), $$ where $$ \widehat{R}_c^m (x) = \left(x - \frac{c^2}{4}\right) \left(x - \frac{9c^2}{4}\right) \ldots \left(x - \frac{((2m-1)c)^2}{4}\right), $$ and the polynomial $\widehat{R}_c^m (x)$ is indecomposable for any $m$. \end{lemma}
\begin{proof}[Proof of Lemma \ref{lemma:Rclx}] Let $\ell$ be an odd integer with $\ell\geq 1$. Supposing the contrary, we obtain $$R_c^{\ell} (x)=f_1(f_2(x)),$$ where $\deg f_1>1$ and $\deg f_2>1$. Using (\ref{eq:obser}) we have $$c^{\ell} R_{\ell} \left(\frac{x}{c}\right)=f_1(f_2(x))$$ and $$R_{\ell}(x)=\frac{1}{c^{\ell}}f_1(f_2(cx)),$$ which is a nontrivial decomposition of $R_{\ell}(x)$ and thus contradicts Lemma \ref{lemma:Rlx}. In the even case, from Lemma \ref{lemma:Rlx} and (\ref{eq:obser}) we get, up to equivalence, $$f_2(x)=\left(\frac{x}{c}+\frac{\ell-1}{2}\right)^2,$$ and our lemma is proved. \end{proof}
Our next lemma provides information on the structure of the zeros of Bernoulli polynomials.
\begin{lemma} \label{eff:1} (i) For every $d\in \mathbb Q$ and rational integer $k\geq 3$ the polynomial $B_k(x)+d$ has at least three simple zeros apart from the cases $(k,d)\in\{(4,\frac{1}{30}),(4,-\frac{7}{240}),(6,-\frac{1}{42}),(6,-\frac{1}{189})\}$.
(ii) For every $d\in \mathbb Q$ and rational integer $k\geq 7$, the polynomial $B_k(x)+d$ has at least one complex nonreal zero.
(iii) The zeros of $B_k(x)$ are all simple.
\end{lemma}
\begin{proof}[Proof of Lemma \ref{eff:1}] For $d=0$ and odd values of $k\geq 3$ Part (i) is a consequence of a theorem by Brillhart \cite[Corollary of Theorem 6]{Bril}. For non-zero rational $d$ and odd $k$ with $k\geq 3$ and for even values of $k\geq 4$ our lemma follows from \cite[Theorem]{pr}, and \cite[Theorem 2.3]{raka} and the subsequent remarks, respectively.
For (ii) assume that all the zeros of $B_k(x)+d$ are real. Then also all the zeros of its derivative $$(B_k(x)+d)'=kB_{k-1}(x)$$ are real. By induction, all the roots of $B_{k-1}(x), B_{k-2}(x),\ldots $ are real. Since $k\geq 7$ and $B_6(x)$ has a complex nonreal root, we obtain a contradiction. Part (iii) was proved in \cite{dilcher}.
\end{proof}
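The exceptional pairs in part (i) really do fail to have three simple zeros: for instance $B_4(x)+\frac{1}{30}=x^2(x-1)^2$ has no simple zero at all, while $B_4(x)-\frac{7}{240}=\big(x^2-x-\frac14\big)\big(x-\frac12\big)^2$ has only two simple zeros. A short exact-arithmetic check of these two factorizations (an illustrative Python sketch, not part of the paper; 21 sample points suffice for degree-4 identities):

```python
from fractions import Fraction

def B4(x):
    # B_4(x) = x^4 - 2x^3 + x^2 - 1/30
    return x ** 4 - 2 * x ** 3 + x ** 2 - Fraction(1, 30)

def f1(x):
    # claimed factorization of B_4(x) + 1/30
    return x * x * (x - 1) ** 2

def f2(x):
    # claimed factorization of B_4(x) - 7/240
    return (x * x - x - Fraction(1, 4)) * (x - Fraction(1, 2)) ** 2
```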
Let $q$ be a rational number and put $$f_{\ell,q}(x)=x(x+1)\cdots (x+\ell-1)+q.$$
\begin{lemma} \label{Rl+q} Suppose that $\ell\geq 3$. Then $f_{\ell,q}(x)$ has at least three simple zeros apart from the cases $f_{4,1}(x)=x(x+1)(x+2)(x+3)+1$ and $f_{4,-\frac{9}{16}}(x)=x(x+1)(x+2)(x+3)-\frac{9}{16}$. \end{lemma}
\begin{proof}[Proof of Lemma \ref{Rl+q}] This is a reformulation of Theorem 2 in \cite{yuan}. \end{proof}
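The two exceptional polynomials in Lemma \ref{Rl+q} factor with repeated roots, which is exactly why they fail: writing $u=x^2+3x$ one gets $f_{4,1}(x)=(u+1)^2=(x^2+3x+1)^2$ and $f_{4,-\frac{9}{16}}(x)=\big(x^2+3x-\frac14\big)\big(x+\frac32\big)^2$. An exact-arithmetic check (illustrative Python sketch, not part of the paper):

```python
from fractions import Fraction

def f(l, q, x):
    # f_{l,q}(x) = x (x+1) ... (x + l - 1) + q
    p = Fraction(1)
    for j in range(l):
        p *= x + j
    return p + q

def g1(x):
    # claimed factorization of f_{4,1}
    return (x * x + 3 * x + 1) ** 2

def g2(x):
    # claimed factorization of f_{4,-9/16}
    return (x * x + 3 * x - Fraction(1, 4)) * (x + Fraction(3, 2)) ** 2
```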
Our next auxiliary result is an easy consequence of an effective theorem concerning the $S$-integer solutions of so-called hyperelliptic equations.
\begin{lemma}\label{lem:hyper} Let $f(x)$ be a polynomial with rational coefficients and with at least two distinct zeros, and let $u,v$ be fixed positive rational numbers. Then the equation $$f\left(\frac{x}{u}\right)=vy^z$$ in integers $x, y>1$ and $z>1$ implies $z<C_3$. Further, if the polynomial $f$ has at least two simple zeros, then all the solutions $x$ and $y$ of the equation $$f\left(\frac{x}{u}\right)=vy^m, \quad m\geq 3$$
satisfy $\max(|x|,y)<C_4$, and if $f$ possesses at least three simple zeros then all the solutions $x,y$ of the equation $$f\left(\frac{x}{u}\right)=vy^2$$ are bounded by $C_5$. Here $C_3,C_4$ and $C_5$ are effectively computable constants depending on the parameters of $f$, $u$ and $v$. \end{lemma}
\begin{proof}[Proof of Lemma \ref{lem:hyper}] This lemma is an easy consequence of a classical theorem of Schinzel and Tijdeman \cite{ST} and the main result of \cite{brindza}. \end{proof}
The ineffective statement of this paper is mainly based on the following lemma, which is analogous to Theorem 4.4 in \cite{BBKPT}.
\begin{lemma} \label{lemma:mainineff} Let $k\geq 2$ be a rational integer with $k\notin\{3,5\}$. There exist no polynomial $p(x)$ and constants $\alpha, \beta, \gamma, \delta\in \mathbb C$ such that $$S_{a,b}^{k}(x)=R_{c}^{\ell}(p(x)\sqrt{\alpha x^2+\beta x+\gamma}+\delta).$$ \end{lemma}
To prove this lemma we need the next result.
\begin{lemma} \label{lemma:aux} Assume that $f(x), g(x)\in \mathbb Q[x]$ and that $f(x)=g(\lambda x+\nu)$. Further, suppose that all the zeros of $g(x)$ are rational and that $f(x)$ vanishes at $\beta\in \mathbb Q$ but it is not of the form $h((x-\beta)^d)$, where $h(x)\in \mathbb Q[x]$ and $d>1$. Then $\lambda, \nu\in \mathbb Q$. \end{lemma}
\begin{proof} This is Lemma 4.5 in \cite{BBKPT}. \end{proof}
\begin{proof}[Proof of Lemma \ref{lemma:mainineff}] We have $$R_{c}^{\ell}(p(x)\sqrt{\alpha x^2+\beta x+\gamma}+\delta)=c^{\ell}R_{\ell}\left((p(x)/c)\sqrt{\alpha x^2+\beta x+\gamma}+\delta/c\right),$$ so up to replacing $p(x)$ and $\delta$ by $p(x)/c$ and $\delta/c$, we may work with the polynomial $c^{\ell}R_{\ell}(x)$ instead of the polynomial $R_{c}^{\ell}(x)$. We follow the proof of Theorem 4.4 in \cite{BBKPT}. We start with the particular case: for $k, \ell \geq 2$ there exists no polynomial $p(x)$ such that $$S_{a,b}^{k}(x)=c^{\ell} R_{\ell}(p(x)).$$ Assume the contrary. Lemma \ref{thm:BPS} implies that $\deg p(x)\leq 2$. Suppose first that $\deg p(x)=1$. Then $p(x)=\lambda x + \nu$ and $\ell=k+1$. Assume further that $\frac{b}{a}=\frac{1}{2}$ and that $k$ is odd. Then $b=1, a=2$ and $$S_{2,1}^{k}(x)=\frac{2^k}{k+1}\left(B_{k+1}\left(x+\frac{1}{2}\right)-B_{k+1}\left(\frac{1}{2}\right)\right)=$$ $$\frac{2^k}{k+1}\left(x^{k+1}-\frac{(k+1)k}{24}x^{k-1}+\frac{7(k+1)k(k-1)(k-2)}{5760}x^{k-3}+\ldots \right)$$ for all $k\geq 5$. The zeros of $R_{\ell}(\lambda x+\nu)$ are $$-\frac{j+\nu}{\lambda}$$ for $j=0, \ldots, \ell-1$, and their sum must be $0$ because $x^k$ appears with coefficient equal to zero in $S_{2,1}^{k}(x)$, therefore $$0=-\frac{1}{\lambda}\sum_{j=0}^{k}(j+\nu),$$ so $$\nu=-\frac{k}{2}=-\frac{\ell-1}{2}.$$ Thus we get that $$S_{2,1}^{k}(x)=c^{k+1}R_{k+1}\left(\lambda x-\frac{k}{2}\right)=c^{k+1}\widehat{R}_{(k+1)/2}((\lambda x)^2)=$$ $$=c^{k+1}\left((\lambda x)^2-\frac{1}{4}\right)\left((\lambda x)^2-\frac{9}{4}\right)\cdots \left((\lambda x)^2-\frac{k^2}{4}\right)=$$ $$=c^{k+1}\left( (\lambda x)^{k+1}-\frac{k(k+1)(k+2)}{24}(\lambda x)^{k-1}+\right.$$ $$\left. +\frac{k(k^2-1)(k^2-4)(5k+12)}{5760}(\lambda x)^{k-3}+\ldots \right).$$ Identifying the first three nonzero coefficients above, we get $$\frac{2^k}{k+1}=(c\lambda)^{k+1},$$ $$\frac{2^k k}{24}=\frac{c^{k+1}\lambda^{k-1}k(k+1)(k+2)}{24},$$ $$\frac{7\cdot 2^k k(k-1)(k-2)}{5760}=\frac{c^{k+1}\lambda^{k-3}k(k^2-1)(k^2-4)(5k+12)}{5760}.$$ Dividing the first equation by the second one we have $$\lambda^2=k+2$$ and dividing the second equation by the third one we obtain $$\lambda^2=\frac{5k+12}{7},$$ giving $k=-1$, a contradiction.
From now on, we assume that either $b/a\neq 1/2$ or $b/a=1/2$ but $k$ is even. Then the argument in \cite{BBKPT} applies. Namely, $S_{a,b}^{k}(x)$ has a zero at $x=0$ and it is not of the form $h(x^d)$ for some $d>1$ and polynomial $h(x)$ by Lemma \ref{thm:BPS}, so $\lambda, \nu \in \mathbb Q$ by Lemma \ref{lemma:aux}. In particular, all the zeros of the polynomial $R_{k+1}(\lambda x+\nu)$ are real. By Lemma \ref{eff:1}, we deduce that $k\leq 5$. So, we have to check the impossibility of the identity $$S_{a,b}^k(x)=c^{k+1}R_{k+1}(\lambda x+\nu)$$ for some $a,b,c\in \mathbb N$ with $\gcd(a,b)=1$, $\lambda, \nu\in \mathbb Q$ and $k\in \{2,3,4,5\}$. We give the details only for $k=2$; the calculations in the other cases are very similar and we leave them to the reader. For $k=2$ we have $$S_{a,b}^{2}(x)=\frac{a^2}{3}x^3+\frac{a(2b-a)}{2}x^2+\left(\frac{a^2}{6}-ab+b^2\right)x$$ and $$c^3R_3(\lambda x+\nu)=c^3\lambda^3 x^3+3c^3\lambda^2(\nu+1)x^2+c^3(2\lambda+6\lambda\nu+3\lambda \nu^2)x+c^3(\nu^3+3\nu^2+2\nu).$$
On comparing the corresponding coefficients we get \begin{equation} \label{comp:1} \frac{a^2}{3}=c^3\lambda^3 \end{equation}
\begin{equation} \label{comp:2} \frac{a(2b-a)}{2}=3c^3\lambda^2 (\nu+1) \end{equation}
\begin{equation} \label{comp:3} \frac{a^2}{6}-ab+b^2=c^3\lambda (3\nu^2+6\nu+2) \end{equation} and
\begin{equation} \label{comp:4} 0=c^3 \nu (\nu+1)(\nu+2). \end{equation}
From (\ref{comp:4}) we obtain $\nu \in \{0,-1,-2\}$. Suppose first that $\nu=0$. Then dividing (\ref{comp:1}) by (\ref{comp:2}) and dividing (\ref{comp:2}) by (\ref{comp:3}) we get $$\lambda=\frac{2a}{2b-a}$$ and $$\lambda=\frac{a(2b-a)}{\frac{a^2}{2}-3ab+3b^2}.$$ These relations yield $$a^2-6ab+6b^2=(2b-a)^2$$ and $$2b(b-a)=0,$$ thus $a=b$ or $b=0$. Since $b\neq 0$ and $\gcd(a,b)=1$, the only remaining possibility is $a=b=1$; but then $\lambda=2$ and (\ref{comp:1}) gives $c^3=\frac{1}{24}$, which is impossible for an integer $c$. Now assume that $\nu=-1$. Then (\ref{comp:2}) yields $a(2b-a)=0$, so $a=2b$ and hence $(a,b)=(2,1)$; dividing (\ref{comp:1}) by (\ref{comp:3}) now gives $\lambda^2=4$, and then (\ref{comp:3}) forces $c^3=\pm\frac{1}{6}$, a contradiction again. Finally, if $\nu=-2$, the two ratios used in the case $\nu=0$ only change sign, so we obtain $2b(b-a)=0$ once more and arrive at a contradiction as before.
We now assume that $\deg p(x)=2$, in which case $k+1=2\ell$. By Lemma \ref{thm:BPS}, the decomposition $S_{a,b}^k (x)=c^{\ell}R_{\ell}(p(x))$ is equivalent to $$S_{a,b}^{k} (x)= \widehat{S}_{(k+1)/2} \left(\left(x+\frac{b}{a} - \frac{1}{2}\right)^2\right),$$ which means that $$p(x)=\lambda\left(x+\frac{b}{a} - \frac{1}{2}\right)^2 + \nu \quad \text{and} \quad \widehat{S}_{(k+1)/2}(x)=c^{\ell} R_{\ell}(\lambda x+\nu).$$ If $\ell=2$, we get $k=3$, which is excluded by our assumptions. Thus we may assume that $\ell\geq 3$. The polynomial $\widehat{S}_{\ell}(x)$ vanishes at $x_0=(1/2-b/a)^2$, because $$\widehat{S}_{\ell}\left(\left(\frac{1}{2}-\frac{b}{a}\right)^2\right)=S_{a,b}^{2\ell-1}\left(1-\frac{2b}{a}\right)=$$ $$=\frac{a^{2\ell-1}}{2\ell}\left(B_{2\ell}\left(1-\frac{2b}{a}+\frac{b}{a}\right)-B_{2\ell}\left(\frac{b}{a}\right)\right)=0,$$ where we used the fact that $B_{2\ell}(1-y)=B_{2\ell}(y)$ with $y=b/a$. This polynomial is not of the form $h((x-x_0)^d)$ for some $d>1$ by the argument from the footnote of page 181 of \cite{BBKPT}. Indeed, if it were, by the indecomposability of $\widehat{S}_{\ell}(x)$ (see Lemma \ref{thm:BPS}), we would get that $$\widehat{S}_{\ell}(x)=\frac{a^{2\ell-1}}{2\ell}\left(x-x_0\right)^{\ell},$$ so $$\frac{a^{2\ell-1}}{2\ell}\left(B_{2\ell}\left(x+\frac{b}{a}\right)-B_{2\ell}\left(\frac{b}{a}\right)\right)=\frac{a^{2\ell-1}}{2\ell}\left(\left(x+\frac{b}{a}-\frac{1}{2}\right)^2-x_0\right)^{\ell},$$ so $$B_{2\ell}\left(x+\frac{1}{2}\right)=(x^2-x_0)^{\ell}+C,$$ where $$C=B_{2\ell}\left(\frac{b}{a}\right).$$ Taking the derivative in the above formula and using the fact that $\ell\geq 3$, we conclude that $\pm \sqrt{x_0}$ are multiple roots of $B_{2\ell}'(x+1/2)=2\ell B_{2\ell-1}(x+1/2)$, which is impossible by Lemma \ref{eff:1}, part (iii). Hence, $\lambda, \nu \in \mathbb Q$ by Lemma \ref{lemma:aux}. It remains to identify coefficients.
It is easy to see that the polynomial $\widehat{S}_{\ell}(x)$ and the polynomial $\widetilde{B_{\ell}}(x)$ of \cite{BBKPT} are related via the formula $$\widehat{S}_{\ell}(x)=\frac{a^{2\ell-1}}{2\ell}\widetilde{B_{\ell}}(x)+D,$$ with $$D=\frac{a^{2\ell-1}}{2\ell}\left(B_{2\ell}-B_{2\ell}\left(\frac{b}{a}\right)\right).$$ Thus, on expanding $B_{2\ell}\left(x+\frac{1}{2}\right)$, we get $$\widehat{S}_{\ell}(x)=\frac{a^{2\ell-1}}{2\ell}\left(x^\ell-\frac{2\ell(2\ell-1)}{24}x^{\ell-1}+\right.$$ $$\left.+\frac{7\cdot 2\ell(2\ell-1)(2\ell-2)(2\ell-3)}{5760}x^{\ell-2}+\ldots \right).$$ Writing $$c^{\ell}R_{\ell}(\lambda x+\nu)=c^{\ell}\left((\lambda x+\nu)^{\ell}+\frac{\ell(\ell-1)}{2}(\lambda x+\nu)^{\ell-1}+\right.$$ $$\left. +\frac{\ell(\ell-1)(\ell-2)(3\ell-1)}{24}(\lambda x+\nu)^{\ell-2}+\ldots \right),$$ and identifying the corresponding coefficients, we get $$\frac{a^{2\ell-1}}{2\ell}=c^\ell \lambda^{\ell};$$ $$-\frac{a^{2\ell-1}(2\ell-1)}{24}=c^{\ell} \lambda^{\ell-1}\ell\left(\nu+\frac{\ell-1}{2}\right);$$ and $$\frac{7a^{2\ell-1}(2\ell-1)(2\ell-2)(2\ell-3)}{5760}=$$ $$\frac{c^{\ell}\lambda^{\ell-2}\ell(\ell-1)}{2}\left(\nu^2+(\ell-1)\nu+\frac{(\ell-2)(3\ell-1)}{12} \right).$$ Taking ratios of the first two equations and then of the next two equations, we get $$\frac{\lambda}{\nu+(\ell-1)/2}=-\frac{12}{2\ell-1};$$ $$\frac{\lambda(\nu+(\ell-1)/2)}{\nu^2+(\ell-1)\nu+(\ell-2)(3\ell-1)/12}=-\frac{60}{7(2\ell-3)}.$$ Dividing the second equation above by the first, we get $$\frac{(\nu+(\ell-1)/2)^2}{(\nu+(\ell-1)/2)^2-(\ell+1)/12}=\frac{5(2\ell-1)}{7(2\ell-3)}.$$ This gives $$\frac{x^2}{x^2-(\ell+1)/12}=\frac{5(2\ell-1)}{7(2\ell-3)}$$ with $x=\nu+(\ell-1)/2$, so $$(\ell-4)x^2=-\frac{5(2\ell-1)(\ell+1)}{48}.$$ This is false for $\ell=4$, has no rational solution $x$ for $\ell\in\{2,3\}$, and for $\ell\geq 5$ the left-hand side is nonnegative while the right-hand side is negative. This shows that indeed it is not possible that $S_{a,b}^k(x)=c^{\ell}R_{\ell}(p(x))$ for some polynomial $p(x)$.
Now we can prove the lemma in its full generality. Assume that $$r(x)=\alpha x^2+\beta x+\gamma$$ is not a complete square, since otherwise $p(x)\sqrt{r(x)}+\delta$ is a polynomial, a case which has already been treated. The argument from \cite{BBKPT} shows that $$c^{\ell}R_{\ell}(p(x)\sqrt{r(x)}+\delta)=c^{\ell} r(x)^{\ell/2}p(x)^{\ell}+$$ $$+c^{\ell} r(x)^{(\ell-1)/2}p(x)^{\ell-1}\left(\ell\delta+\frac{\ell(\ell-1)}{2}\right)+\ldots $$ is a polynomial, so $\ell$ must be even. Furthermore, $$\ell\delta+\frac{\ell(\ell-1)}{2}=0,$$ that is, $\delta=-\frac{\ell-1}{2}$. But then $$R_{\ell}(p(x)\sqrt{r(x)}+\delta)=R_{\ell}\left(p(x)\sqrt{r(x)}-\frac{\ell-1}{2}\right)=\widehat{R}_{\ell/2}(r(x)p(x)^2).$$ Thus, $S_{a,b}^k(x)=c^{\ell}\widehat{R}_{\ell/2}(\widetilde{p}(x))$, where $\widetilde{p}(x)=r(x)p(x)^2$. The case $\ell=2$ leads to $$S_{a,b}^k(x)=c^2r(x)p(x)^2-\frac{c^2}{4},$$ so $$\frac{a^k}{k+1}\left( B_{k+1}\left(x+\frac{b}{a}\right)-B_{k+1}\left(\frac{b}{a}\right) \right)=c^2r(x)p(x)^2-\frac{c^2}{4},$$ so $$B_{k+1}(x)=\frac{c^2(k+1)}{a^k}r\left(x-\frac{b}{a}\right)p\left(x-\frac{b}{a}\right)^2+\left(B_{k+1}\left(\frac{b}{a}\right) -\frac{c^2(k+1)}{4a^k}\right).$$ The right-hand side has at most two simple zeros, so by Lemma \ref{eff:1} we get $k\in \{3,5\}$; however, these cases are excluded by the condition of our lemma.
So, it must be the case that $\ell\geq 4$. We have $S_{a,b}^k(x)=c^{\ell}\widehat R_{\ell/2}(\widetilde{p}(x))$. By Lemma \ref{thm:BPS} and the fact that $r(x)$ is not a complete square, it follows that in fact $r(x)$ is a linear polynomial. Say $r(x)=\lambda x+\nu$. Assume first that $b/a=1/2$ and $k$ is odd. We then get $$c^{\ell}\widehat{R}_{\ell/2}(r(x))=S_{a,b}^k(x)=\widehat{S}_{(k+1)/2}\left(\left(x+\frac{b}{a}-\frac{1}{2}\right)^2\right),$$ with a linear polynomial $r(x)$, contradicting the indecomposability of $\widehat{R}_{\ell/2}(x)$, see Lemma \ref{lemma:Rlx}. So either $b/a\neq 1/2$ or $b/a=1/2$ but $k$ is even. Then $S_{a,b}^{k}(x)$ has $x=0$ as a zero but it is not of the form $h(x^d)$ for any $d>1$ by Lemma \ref{thm:BPS}, and $\widehat{R}_{\ell/2}(x)$ has rational zeros. So, from Lemma \ref{lemma:aux}, $\lambda$ and $\nu$ are rational. In particular, all zeros of $\widehat{R}_{\ell/2}(r(x))$ are real. Thus, $S_{a,b}^{k}(x)$ has only real roots, showing that $k\in \{2,3,4,5\}$. Considering these small cases, on comparing the corresponding coefficients we obtain a contradiction. \end{proof}
We will introduce some notation to recall the finiteness criterion by Bilu and Tichy. In what follows $\alpha$ and $\beta$ are nonzero rational numbers, $\mu,\nu$ and $q$ are positive integers, $p$ is a nonnegative integer and $\nu(X)\in \mathbb Q[X]$ is a nonzero polynomial (which may be constant).
A standard pair of the first kind is $(X^q,\alpha X^{p}\nu(X)^q)$ or switched, $(\alpha X^{p}\nu(X)^q, X^q)$, where $0\leq p<q, (p,q)=1$ and $p+\deg \nu(X)>0$.
A standard pair of the second kind is $(x^2,(\alpha x^2+\beta)\nu(x)^2)$ (or switched).
Denote by $D_{\mu}(x,\delta)$ the $\mu$th Dickson polynomial, defined by the functional equation $D_{\mu}(z+\delta/z,\delta)=z^{\mu}+(\delta/z)^{\mu}$ or by the explicit formula $$D_{\mu}(x,\delta)=\sum_{i=0}^{[\mu/2]}d_{\mu,i}x^{\mu-2i},$$ with $$d_{\mu,i}=\frac{\mu}{\mu-i}{\binom{\mu-i}{i}}(-\delta)^i.$$ A standard pair of the third kind is $\left(D_{\mu}(x,\alpha^{\nu}),D_{\nu}(x,\alpha^{\mu})\right)$, where $\gcd(\mu,\nu)=1$.
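The functional equation can be cross-checked against the classical three-term recurrence $D_0=2$, $D_1=x$, $D_{\mu}(x,\delta)=xD_{\mu-1}(x,\delta)-\delta D_{\mu-2}(x,\delta)$ satisfied by the Dickson polynomials (an illustrative Python sketch, not part of the paper, using exact rationals):

```python
from fractions import Fraction

def dickson(mu, x, d):
    # D_0 = 2, D_1 = x, D_mu = x * D_{mu-1} - d * D_{mu-2}
    if mu == 0:
        return 2
    prev, cur = 2, x
    for _ in range(mu - 1):
        prev, cur = cur, x * cur - d * prev
    return cur
```

For example, the recurrence gives $D_2(x,\delta)=x^2-2\delta$, and indeed $(z+\delta/z)^2-2\delta=z^2+(\delta/z)^2$.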
A standard pair of the fourth kind is $\left(\alpha^{-\mu/2} D_{\mu}(x,\alpha), -\beta^{-\nu/2}D_{\nu}(x,\beta)\right),$ where $\gcd(\mu,\nu)=2$.
A standard pair of the fifth kind is $((\alpha x^2-1)^3, 3x^4-4x^3)$ (or switched).
\begin{lemma} \label{lemma:BT} Let $R(x),S(x)$ be nonconstant polynomials such that the equation $R(x)=S(y)$ has infinitely many solutions in rational integers $x,y$. Then $R(x)=\phi(f(\kappa(x)))$ and $S(x)=\phi(g(\lambda(x)))$, where $\kappa(x),\lambda(x)\in \mathbb Q[x]$ are linear polynomials, $\phi(x)\in \mathbb Q[x]$, and $(f(x),g(x))$ is a standard pair. \end{lemma}
We need the analogs of Lemmata 5.2 and 5.3 in \cite{BBKPT}. Both of them follow immediately from the analogous results in \cite{BBKPT}, so their proofs are omitted.
\begin{lemma} \label{lemma:npower} Neither of the polynomials $S_{a,b}^n(a_1x+a_0)$ and $c^mR_m(b_1x+b_0)$ is of the form $e_1x^q+e_0$ with $q\geq 3$. \end{lemma}
\begin{lemma} \label{lemma:ndickson} The polynomial $S_{a,b}^{n}(a_1 x+a_0)$ is not of the form $e_1D_{t}(x,\delta)+e_0$, where $D_t(x,\delta)$ is the Dickson polynomial with $t>4$ and $\delta\in \mathbb Q\setminus \{0\}$. \end{lemma}
\section{Proofs of the Theorems}
\begin{proof}[Proof of Theorem \ref{thm:eff1}]
If $b=0$ we essentially obtain the original Sch\"affer equation, so in the sequel we assume $b\neq 0$.
For $k\in \{2,4\}$ or $k\geq 6$, our theorem is an easy consequence of (\ref{eq:mainI}), Lemmata \ref{eff:1} and \ref{lem:hyper}. Suppose that $k=1$ and consider the equation $$S_{a,b}^1 (x) =\frac{1}{2}x(ax+2b-a)=y^{\ell}.$$
Since $(a,b)=1$, one can see that the quadratic polynomial on the left-hand side has two simple zeros, except in the case $a=2,b=1$. For $k=3$ and $k=5$ the discriminant of $S_{a,b}^k (x)$ is $$\frac{1}{256}a^6b^4(a-b)^4(a-2b)^2(a^2+4ab-4b^2)$$ and $$\frac{1}{967458816}a^{20}b^4(a-b)^4(a-2b)^2(a^2+3ab-3b^2)^4(a^2-6ab+6b^2)^2\times$$ $$\times (a^2+4ab-4b^2)(a^2+2ab-2b^2)^2(3a^4+12a^3b+4a^2b^2-32ab^3+16b^4), $$ respectively.
We have two critical cases (i.e., cases in which $\frac{a}{b}$ is a rational zero of the discriminants), namely $a=b=1$ and $a=2,b=1$. In the first case $$S_{1,1}^{k}(x)=S_{k}(x+1),$$ where $S_k(x)$ denotes the usual Sch\"affer sum of $k$th powers. If $a=2, b=1$ then we get $$S_{2,1}^{3}(x)=2x^4-x^2\,\,\mbox{and}\,\, S_{2,1}^{5}(x)=\frac{1}{3}x^2(16x^4-20x^2+7),$$ and these polynomials have two and four simple zeros, respectively.
\end{proof}
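The closed forms $S_{2,1}^{3}(x)=2x^4-x^2$ and $S_{2,1}^{5}(x)=\frac{1}{3}x^2(16x^4-20x^2+7)$ from the proof above, as well as the formula for $S_{a,b}^{1}(x)$, are easy to test numerically. The sketch below assumes the power-sum definition $S_{a,b}^k(x)=\sum_{j=0}^{x-1}(aj+b)^k$ used earlier in the paper (the helper name `S` is ours):

```python
from fractions import Fraction

def S(a, b, k, x):
    # Assumed definition from earlier in the paper:
    # S_{a,b}^k(x) = sum_{j=0}^{x-1} (a*j + b)^k.
    return sum((a * j + b) ** k for j in range(x))

# S_{a,b}^1(x) = x(ax + 2b - a)/2, as in the proof of Theorem eff1.
for a in range(1, 5):
    for b in range(1, 5):
        for x in range(8):
            assert S(a, b, 1, x) == Fraction(x * (a * x + 2 * b - a), 2)

# Closed forms in the critical case (a, b) = (2, 1):
for x in range(8):
    assert S(2, 1, 3, x) == 2 * x**4 - x**2
    assert 3 * S(2, 1, 5, x) == x**2 * (16 * x**4 - 20 * x**2 + 7)
print("closed forms verified")
```

Since both sides of each identity are polynomials of degree at most $6$ in $x$, agreement at eight integer points already proves the identity for the fixed pair $(a,b)$.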
\begin{proof}[Proof of Theorem \ref{thm:eff2}] First we consider our equation $$ S_{a,b}^{k}(x)=R_{c}^{\ell}(y) $$ in integers $x$ and $y$, where $\ell\geq 2$ and $k\in\{1,3\}$. Formulas (\ref{eq:mainI}) and (\ref{eq:obser}) give $$ R_{c}^{\ell}(y)=c^{\ell}R_{\ell}\left(\frac{y}{c}\right)=\frac{1}{2}x(ax+2b-a),$$ $$8ac^{\ell}R_{\ell}\left(\frac{y}{c}\right)+(2b-a)^2=(2ax+2b-a)^2,$$ and $$ R_{c}^{\ell}(y)=c^{\ell}R_{\ell}\left(\frac{y}{c}\right)=\frac{1}{4}x(ax+2b-a)\times$$ $$\times (a^2x^2+(2ba-a^2)x+2b^2-2ab),$$ $$4ac^{\ell}R_{\ell}\left(\frac{y}{c}\right)=X(X+2b^2-2ab)=(X+(b^2-ab))^2-(b^2-ab)^2,$$ where $X=a^2x^2+(2ab-a^2)x$, respectively, and Lemmas \ref{Rl+q} and \ref{lem:hyper} complete the proof for $(k,\ell)\notin \{(1,2),(3,2),(1,4),(3,4)\}$.
Now, if $\ell\in\{2,4\}$ we have $S_{a,b}^{k}(x)=y(y+c), 4S_{a,b}^{k}(x)+c^2=(2y+c)^2$ and $$S_{a,b}^{k}(x)=y(y+c)(y+2c)(y+3c)=(y^2+3cy+c^2)^2-c^4,$$ respectively, and our result is proved by (\ref{eq:mainI}) and Lemmas \ref{eff:1} and \ref{lem:hyper} for $$(k,\ell)\notin \{(1,2),(3,2),(1,4),(3,4),(5,2),(5,4)\}.$$ \end{proof}
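The two completion-of-the-square identities used here, $4y(y+c)+c^2=(2y+c)^2$ and $y(y+c)(y+2c)(y+3c)=(y^2+3cy+c^2)^2-c^4$, are polynomial identities in $y$ and $c$, so checking them exactly on an integer grid with more points per variable than the degree confirms them. A minimal script:

```python
# Degree-2 identity: 4*y*(y+c) + c^2 == (2y + c)^2.
# Degree-4 identity: y(y+c)(y+2c)(y+3c) == (y^2 + 3cy + c^2)^2 - c^4.
# A 13x13 integer grid more than determines polynomials of these degrees.
for y in range(-6, 7):
    for c in range(-6, 7):
        assert 4 * y * (y + c) + c * c == (2 * y + c) ** 2
        assert y * (y + c) * (y + 2 * c) * (y + 3 * c) \
            == (y * y + 3 * c * y + c * c) ** 2 - c ** 4
print("square-completion identities verified")
```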
\begin{proof}[Proof of Theorem \ref{thm:eff3}] For $(k,\ell)=(1,4)$ we get $$\frac{1}{2}x(ax+2b-a)=y(y+c)(y+2c)(y+3c),$$ and $$8ac^4\frac{y}{c}\left(\frac{y}{c}+1 \right)\left(\frac{y}{c} +2 \right)\left( \frac{y}{c}+3 \right)=(2ax+2b-a)^2-(2b-a)^2.$$ Since $\frac{(2b-a)^2}{8ac^4}$ is non-negative, we cannot guarantee three simple zeros when this fraction is 1 (cf. Lemma \ref{Rl+q}). If
$(2b-a)^2=8ac^4$, then $a=2$. Indeed, by parity $a\neq 1$. Let $p$ be an arbitrary prime divisor of $a$; then $p\mid (2b-a)^2$, hence $p\mid 2b$, and since $(a,b)=1$ this forces $p=2$. Now, if $a=2^{\alpha}$ with $\alpha\geq 2$, then $\mbox{ord}_2(2b-a)=1$, which is a contradiction; so $a=2$ and $(b-1)^2=4c^4$. For $(k,\ell)=(3,4)$ we can apply a very similar argument, and here $$\frac{(b^2-ab)^2}{4ac^4}=1,$$ which yields $a=1,\left(b(b-1)\right)^2=4c^4$.
For $(k,\ell)=(3,2),(5,2)$ and $(5,4)$ we follow the same idea, and give the details only for the case $(k,\ell)=(3,2)$. Consider the equation $$S_{a,b}^{3}(x)=\frac{a^3}{4}\left(B_4\left(x+\frac{b}{a}\right)-B_4\left(\frac{b}{a}\right)\right)=R_{c}^{2}(y)=y(y+c),$$ and $$a^3\left(B_4\left(x+\frac{b}{a}\right)-B_4\left(\frac{b}{a}\right)+\frac{c^2}{a^3}\right)=(2y+c)^2.$$ Finally, Lemma \ref{eff:1} completes the proof. \end{proof}
\begin{proof}[Proof of Theorem \ref{thm:main}] We follow Section 5.3 of \cite{BBKPT}. In view of the small cases treated and the fact that we have proved the analog of Theorem 4.4 in \cite{BBKPT}, the argument from page 184 shows that we may assume that $(f(x), g(x))$ do not form a pair of the second or fifth kind. If it is of the first kind, we get the same contradiction based on Lemma \ref{lemma:npower}, and if it is of the fourth kind, we again get a contradiction based on Lemma \ref{lemma:ndickson}. So, we only need to revisit the argument in \cite{BBKPT} for the pairs of the third kind. For this, we just notice that, with the notations from there, all coefficients $s_i$ get multiplied by $a^n = a_2$ (except for the last one, which is also shifted, but we shall not need it), and all the coefficients $r_j$ get multiplied by $c^m$. So, the analogs of (26)-(29) in \cite{BBKPT} become $$s_3=\frac{b_{1}^{3}a^2}{3}=e_1,$$ $$s_1=-\frac{b_1a^2}{24}=-3e_1\alpha^m,$$ $$r_m=a_{1}^{m}c^m=e_1,$$ $$r_{m-2}=-a_{1}^{m-2}c^m\frac{m(m-1)(m+1)}{24}=-e_1m\alpha^3.$$ So we get $$\alpha^m=\frac{b_{1}^{-2}}{24}, \alpha^3=a_{1}^{-2}\frac{m^2-1}{24}, c^ma_{1}^{m}=\frac{a^2b_{1}^{3}}{3}.$$ Hence, $$\frac{b_{1}^{-6}}{24^3}=a_{1}^{-2m}\left(\frac{m^2-1}{24}\right)^m=(a^2c^{-m})^{-2}9b_{1}^{-6}\left(\frac{m^2-1}{24}\right)^m,$$ giving $$\frac{1}{2^9 3^5}=\left(\frac{c^m}{a^2}\right)^2\left(\frac{m^2-1}{24}\right)^m.$$ If $m$ is even, the number on the right above is a square of a rational number, whereas the number on the left is not. So, $m$ is odd. Now, we get $$\frac{1}{2^9 3^5}=\left(\frac{c^m}{a^2}\right)^2\left(\frac{m^2-1}{24}\right)^{m-1}\left(\frac{m^2-1}{24}\right),$$ or $$\frac{1}{m^2-1}=2^6 3^4\left(\frac{c^m}{a^2}\right)^2\left(\frac{m^2-1}{24}\right)^{m-1},$$ and the right-hand side above is a square of a rational number, therefore so is the left-hand side, so $m^2-1$ is a square, which is impossible for $m>1$. The theorem is proved. \end{proof}
\end{document}
\begin{document}
\baselineskip 16pt
\title{Some new characterizations of $PST$-groups}
\author{Xiaolan Yi \\ {\small Department of Mathematics, Zhejiang Sci-Tech University,}\\
{\small Hangzhou 310018, P.R.China}\\ {\small E-mail: yixiaolan2005@126.com}\\ \\ { Alexander N. Skiba }\\ {\small Department of Mathematics, Francisk Skorina Gomel State University,}\\ {\small Gomel 246019, Belarus}\\ {\small E-mail: alexander.skiba49@gmail.com}}
\date{} \maketitle
\begin{abstract} Let $H$ and $B$ be subgroups of a finite group
$G$ such that $G=N_{G}(H)B$.
Then we say that $H$ is \emph{quasipermutable} (respectively \emph{$S$-quasipermutable}) in $G$
provided $H$ permutes with $B$ and with every subgroup (respectively with every Sylow subgroup)
$A$ of $B$ such that $(|H|, |A|)=1$.
In this paper we analyze the influence of $S$-quasipermutable and quasipermutable
subgroups on the structure of $G$. As an application, we give new characterizations of soluble $PST$-groups.
\end{abstract}
\let\thefootnoteorig\thefootnote \renewcommand{\thefootnote}{}
\footnotetext{Keywords: finite group, quasipermutable subgroup, $PST$-group, Hall subgroup, supersoluble group,
Gasch\"utz subgroup, Carter subgroup, saturated formation.}
\footnotetext{Mathematics Subject Classification (2010): 20D10, 20D15, 20D20} \let\thefootnote\thefootnoteorig
\section{Introduction}
Throughout this paper, all groups are finite and $G$ always denotes a finite group. Moreover $p$ is always supposed to be a prime and $\pi$ is a subset of the set $\Bbb{P}$ of all primes; $\pi (G)$ denotes the
set of all primes dividing $|G|$.
A subgroup $H$ of $G$ is said to be \emph{quasinormal} or \emph{permutable} in $G$ if $H$ permutes with every subgroup $A$ of $G$, that is, $HA=AH$;
$H$ is said to be \emph{$S$-permutable}
in $G$ if $H$ permutes with every Sylow subgroup of $G$.
A group $G$ is called a \emph{$PT$-group} if
permutability is a transitive relation on $G$, that is, every permutable subgroup of
a permutable subgroup
of $G$ is permutable in $G$. A group $G$ is called a \emph{$PST$-group} if
$S$-permutability is a transitive relation on $G$.
As well as $T$-groups, $PT$-groups and $PST$-groups possess many interesting properties (see Chapter 2 in \cite{prod}).
The general description of $PT$-groups and $PST$-groups was first obtained by Zacher \cite{G.Zacher} and Agrawal \cite{Agr}
for the soluble case, and
by Robinson in \cite{217} for the general case. Nevertheless, in subsequent publications (see, for example, the recent papers \cite{78}--\cite{khaledII}), the authors have found and described many other interesting characterizations of soluble $PT$- and $PST$-groups.
In this paper we give new ``Hall''-characterizations of soluble $PST$-groups on the basis of the following
{\bf Definition 1.1.} Let $H$ and $B$ be subgroups of $G$ such that $G=N_{G}(H)B$. Then we say that
$H$ is \emph{quasipermutable} (respectively \emph{$S$-quasipermutable}) in $G$
provided $H$ permutes with $B$ and with every subgroup (respectively with every Sylow subgroup)
$A$ of $B$ such that $(|H|, |A|)=1$.
Examples and some applications of quasipermutable
subgroups were discussed in our
papers \cite{Bull} and \cite{proble} (see also remarks in Section 5 below). In this paper,
we give the following result, which we consider as one more
motivation for introducing the concept of quasipermutability.
{\bf Theorem A.} {\sl Let $D=G^{\cal N} $ and $\pi =\pi (D)$.
Then the following statements are equivalent:}
(i) {\sl $D$ is a Hall subgroup of $G$ and every Hall subgroup of $G$ is quasipermutable in $G$.}
(ii) {\sl $G$ is a soluble $PST$-group.}
(iii) {\sl Every subgroup of $G$ is quasipermutable in $G$.}
(iv) {\sl Every $\pi$-subgroup of $G$
and some minimal supplement of $D$ in $G$ are quasipermutable in $G$.}
In the proof of Theorem A we use the following three results.
A subgroup $S$ of $G$ is called a \emph{Gasch\"utz subgroup} of $G$ (L.A. Shemetkov \cite[IV, 15.3]{26}) if $S$ is supersoluble
and for any subgroups $K \leq H$ of $G$ with $S\leq K$, the number $|H:K|$ is not a prime.
{\bf Theorem B.} {\sl The following statements are equivalent:}
(I) {\sl $G$ is soluble, and if $S$ is a Gasch\"utz
subgroup of $G$, then every Hall subgroup $H$ of $G$ satisfying $\pi (H)\subseteq \pi (S)$
is quasipermutable in $G$.}
(II) {\sl $G$ is supersoluble and the following hold: }
(a) {\sl $G=DC$, where $D=G^{\cal N}$
is an abelian complemented subgroup of $G$ and $C$ is a Carter subgroup of $G$;}
(b) {\sl $D\cap C$ is normal in $G$ and
$(p, |D/D\cap C|)=1$ for all prime divisors $p$ of $|G|$ satisfying $(p-1, |G|)=1$.}
(c) {\sl For any non-empty set $\pi $ of primes, every $\pi $-element of any
Carter subgroup of $G$ induces a power automorphism on the
Hall $\pi'$-subgroup of $D$.}
(III) {\sl Every Hall subgroup of $G$ is quasipermutable in $G$.}
Let $\cal F$ be a class of groups. If $1\in {\cal F}$, then we write $G^{\cal F}$ to denote the intersection of all normal subgroups $N$ of $G$ with $G/N\in {\cal F}$. The class $\cal F$ is said to be a \emph{formation} if either ${\cal F}= \varnothing $ or $1\in {\cal F}$ and every homomorphic image of $G/G^{\cal F}$ belongs to $ {\cal F}$ for any group $G$.
The formation ${\cal F}$ is said to be \emph{saturated} if $G\in {\cal F}$ whenever $G/\Phi (G) \in {\cal F}$. A subgroup $H$ of $G$ is said to be an \emph{$\cal F$-projector} of $G$ provided $H\in {\cal F}$ and $E=E^{\cal F}H$ for any subgroup $E$ of $G$ containing $H$. By Gasch\"utz's theorem \cite[VI, 9.5.4 and 9.5.6]{hupp}, for any saturated formation $\cal F$, every soluble group $G$ has an $\cal F$-projector and any two
$\cal F$-projectors of $G$ are conjugate.
{\bf Theorem C.} {\sl Let $\cal F$ be a saturated formation containing all nilpotent groups. Suppose that $G$ is soluble and let $\pi =\pi (C) \cap \pi( G^{\cal F})$, where $C$ is an $\cal F$-projector of $G$. If every maximal subgroup of every Sylow $p$-subgroup of $G$ is
$S$-quasipermutable in $G$ for all $p\in \pi $, then $G^{\cal F}$ is a Hall subgroup of $G$.}
{\bf Theorem D.} {\sl Let $\cal F$ be a saturated formation containing all supersoluble groups and $\pi =\pi (F^{*}(G^{\cal F}))$. If $G^{\cal F}\ne 1$, then for some $p\in \pi $ some maximal subgroup of a Sylow
$p$-subgroup of $G$ is not $S$-quasipermutable in $G$.}
In this theorem $F^{*}(G^{\cal F})$ denotes the generalized Fitting subgroup of $G^{\cal F}$, that is, the product of all normal quasinilpotent subgroups of
$G^{\cal F}$.
The main tool in the proofs of Theorems C and D is the following our result.
{\bf Proposition.} {\sl Let $E$ be a normal subgroup of $G$ and
$P$ a Sylow $p$-subgroup of $E$ such that $|P| > p$. }
(i) {\sl If every member $V$ of some fixed ${\cal M}_{\phi}(P)$ is $S$-quasipermutable in $G$, then $E$ is $p$-supersoluble. }
(ii) {\sl If
every maximal subgroup of $P$ is $S$-quasipermutable in $G$, then every chief factor of $G$ between $E$ and $O_{p'}(E)$
is cyclic. }
(iii) {\sl If every maximal subgroup of every Sylow subgroup of $E$ is $S$-quasipermutable in $G$, then every chief factor of $G$ below $E$ is cyclic. }
In this proposition we write ${\cal M}_{\phi}(G)$, by analogy with \cite{shirong},
to denote a set of maximal subgroups of $G$ such that ${\Phi}(G)$ coincides with the intersection of all subgroups in ${\cal M}_{\phi}(G)$.
Note that the Proposition may be of independent interest, since it
unifies and generalizes
many known results, in particular Theorems 1.1--1.5 in
\cite{shirong} (see Section 5). In Section 5 we also discuss some
further applications of our results.
All unexplained notation and terminology are standard. The reader is referred to \cite{26}, \cite{DH}, or \cite{Bal-Ez} if necessary.
\section{Basic Propositions}
Let $H$ be a subgroup of $G$.
Then we say, following \cite{Bull}, that $H$ is \emph{propermutable}
(respectively \emph{$S$-propermutable})
in $G$ provided there is a subgroup $B$ of $G$ such that $G=N_{G}(H)B$ and $H$ permutes with all subgroups (respectively with all
Sylow subgroups) of $B$.
{\bf Proposition 2.1.} {\sl Let $H\leq G$ and $N$ a normal subgroup of $G$. Suppose that $H$ is quasipermutable ($S$-quasipermutable) in $G$.}
(1) {\sl If either $H$ is a Hall subgroup of $G$ or
for every prime $p$ dividing $|H|$ and for every Sylow $p$-subgroup $H_{p}$ of $H$ we have $H_{p}\nleq N$, then $HN/N$ is quasipermutable
($S$-quasipermutable, respectively) in $G/N$. }
(2) {\sl If $\pi =\pi (H)$ and $G$ is $\pi$-soluble, then $H$
permutes with some Hall $\pi'$-subgroup of $G$. }
(3) {\sl $H$ permutes with some Sylow $p$-subgroup
of $G$ for every prime $p$ dividing $|G|$ such that $(p, |H| )=1$.}
(4) {\sl $|G:N_{G}(H\cap N)|$ is a $\pi$-number, where $\pi = \pi (N)\cup \pi (H)$.}
(5) {\sl If $H$ is propermutable ($S$-propermutable) in $G$, then
$HN/N$ is propermutable ($S$-propermutable, respectively) in $G/N$. }
(6) {\sl If $H$ is $S$-propermutable in $G$, then $H$ permutes with some Sylow $p$-subgroup
of $G$ for any prime $p$ dividing $|G|$. }
(7) {\sl Suppose that $G$ is $\pi$-soluble. If $H$ is a Hall $\pi$-subgroup
of $G$, then $H$ is propermutable ($S$-propermutable, respectively) in $G$. }
{\bf Proof.} By hypothesis, there is a subgroup $B$ of $G$ such that $G=N_{G}(H)B$
and $H$ permutes with $B$ and with all
subgroups (with all Sylow subgroups, respectively) $A$ of $B$
such that $(|H|, |A|)=1$.
(1) It is clear that
$$G/N=(N_{G}(H)N/N)(BN/N)=N_{G/N}(HN/N)(BN/N).$$
Let $K/N$ be any subgroup (any Sylow subgroup, respectively) of
$BN/N$ such that $(|HN/N|, |K/N|)=1$.
Then $K=(K\cap B)N$. Let $B_{0}$ be a minimal supplement of $K\cap B\cap N$ in $K\cap B$. Then $K/N=(K\cap B)N/N =B_{0}(K\cap B\cap N)N/N=B_{0}N/N$ and $K\cap B\cap N\cap B_{0}=N\cap B_{0}\leq \Phi (B_{0})$. Therefore $\pi (K/N)=\pi (K\cap B/K\cap B\cap N)=\pi (B_{0})$, so
$(|HN/N|, |B_{0}|)=1$. Suppose that some prime $p\in \pi (B_{0})$
divides $|H|$, and let $H_{p}$ be
a Sylow $p$-subgroup of $H$. We shall show that
$H_{p}\nleq N$. In fact, we may suppose that
$H$ is a Hall subgroup of $G$. But in this case,
$H_{p}$ is
a Sylow $p$-subgroup of $G$. Therefore, since $p\in \pi (B_{0})\subseteq \pi (G/N)$, $H_{p}\nleq N$. Hence $p$
divides $|HN/N|$, a contradiction. Thus $(|H|, |B_{0}|)=1$, so in the case when $H$ is quasipermutable in $G$, we have $HB_{0}=B_{0}H$ and hence $HN/N$ permutes with $K/N=B_{0}N/N$. Thus $HN/N$ is quasipermutable in $G/N$.
Finally, suppose that $H$ is $S$-quasipermutable in $G$. In this case,
$B_{0}$ is a $p$-subgroup of $B$, so for some Sylow $p$-subgroup $B_{p}$ of $B$ we have $B_{0}\leq B_{p}$ and $(|H|, p)=1$. Hence $K/N=B_{0}N/N\leq B_{p}N/N$, which implies that $K/N= B_{p}N/N$.
But $H$ permutes with $B_{p}$ by hypothesis, so $HN/N$ permutes with $K/N$.
Therefore $HN/N$ is $S$-quasipermutable in $G/N$.
(2) By \cite[VI, 4.6]{hupp}, there are
Hall $\pi'$-subgroups $E_{1}$, $E_{2}$ and $E$ of $N_{G}(H)$, $B$ and $G$, respectively, such that $E=E_{1}E_{2}$.
Then $H$ permutes with all Sylow subgroups of $E_{2}$ by hypothesis, so
$$HE=H(E_{1}E_{2})=(HE_{1})E_{2}=(E_{1}H)E_{2}=$$ $$
E_{1}(HE_{2})=E_{1}(E_{2}H)=(E_{1}E_{2})H=EH$$ by \cite[A, 1.6]{DH}.
(3) See the proof of (2).
(4) Let $p$ be a prime such that $p\not \in \pi $. Then by (3), there is a Sylow $p$-subgroup $P$ of $G$ such that $HP=PH$ is a subgroup of $G$. Hence $HP\cap N=H\cap N$ is a normal subgroup of
$HP$. Thus $p$ does not divide $|G:N_{G}(H\cap N)|$.
(5) See the proof of (1).
(6) See the proof of (2).
(7) Since $G$ is $\pi$-soluble, $B$ is $\pi$-soluble. Hence by \cite[VI, 1.7]{hupp},
$B=B_{\pi}B_{\pi'}$ where $B_{\pi}$ is a Hall $\pi$-subgroup of $B$ and $B_{\pi'}$ is a Hall $\pi'$-subgroup of $B$. By \cite[VI, 4.6]{hupp}, there are
Hall $\pi$-subgroups $N_{\pi}$, $B_{\pi}$ and $G_{\pi}$ of $N_{G}(H)$, $B$ and $G$, respectively, such that $G_{\pi}=N_{\pi}B_{\pi}$. But since $H\leq N_{\pi}$,
$N_{\pi}$ is a Hall $\pi$-subgroup of $G$. Therefore $G_{\pi}=N_{\pi}B_{\pi}=N_{\pi}$, so $B_{\pi}\leq N_{\pi}$. Hence
$G=N_{G}(H)B=N_{G}(H)B_{\pi}B_{\pi'}=N_{G}(H)B_{\pi'}$, so $H$ is propermutable ($S$-propermutable, respectively) in $G$.
A group $G$ is said to be a \emph{$C_{\pi }$-group} provided $G$ has a Hall ${\pi }$-subgroup and any two Hall ${\pi }$-subgroups of $G$ are conjugate.
On the basis of Proposition 2.1 the following two results are proved.
{\bf Proposition 2.2.} {\sl Let $H$ be
a Hall $S$-quasipermutable subgroup of $G$. If $\pi = \pi (|G:H|)$, then $G$
is a $C_{\pi }$-group.}
{\bf Proposition 2.3.} {\sl Let $E$ be a normal subgroup of $G$ and $H$
a Hall $\pi$-subgroup of $E$.
If $H$ is nilpotent and $S$-quasipermutable in $G$, then $E$ is $\pi$-soluble.}
\section{Groups with a Hall quasipermutable subgroup }
A group $G$ is said to be \emph{$\pi$-separable} if every chief factor of $G$ is either a $\pi$-group or a $\pi'$-group. Every $\pi$-separable group $G$ has a series $$1=P_{0}(G)\leq M_{0}(G) < P_{1}(G) < M_{1}(G) < \ldots < P_{t}(G)\leq M_{t}(G)=G $$ such that
$$M_{i}(G)/P_{i}(G) =O_{\pi'}(G/P_{i}(G))$$ ($i=0, 1, \ldots , t$) and $$P_{i+1}(G)/M_{i}(G)= O_{\pi}(G/M_{i}(G))$$ ($i=0, 1, \ldots , t-1$).
The number $t$ is called the \emph{$\pi$-length} of $G$ and denoted by $l_{\pi}(G)$ (see \cite[p. 249]{rob}).
One more result, which we use in the proof of our main results, is the following
{\bf Theorem 3.1.} {\sl Let $H$ be a
Hall subgroup of $G$ and $\pi =\pi (H)$. Suppose that $H$ is quasipermutable in $G$. }
(I) {\sl If $ p > q $ for all primes $p$ and $q$ such that $p\in \pi $ and
$q$ divides $|G:N_{G}(H)|$, then $H$ is normal in $G$.}
(II) {\sl If $H$ is supersoluble, then $G$ is $\pi$-soluble.}
(III) {\sl If $G$ is $\pi$-separable, then the following hold:}
(i) {\sl $H'\leq O_{\pi}(G)$. If, in addition, $N_{G}(H)$ is nilpotent, then $G'\cap H \leq O_{\pi}(G)$.}
(ii) {\sl $l_{\pi}(G) \leq 2$ and $l_{\pi'}(G) \leq 2$. }
(iii) {\sl If for some prime $p\in \pi'$ a Hall $\pi'$-subgroup $E$
of $G$
is $p$-supersoluble, then $G$ is $p$-supersoluble. }
Let $\cal M$ and $\cal H$ be non-empty formations. Then the \emph{product} ${\cal M} {\cal H}$ of these formations is the class of all groups $G$ such that $G^{\cal H}\in {\cal M}$. It is well-known that such an operation
on the set of all non-empty formations is associative (Gasch\"utz). The symbol
${\cal M}^{t}$ denotes the product of $t$ copies of ${\cal M}$.
We shall need the following well-known theorem of
Gasch\"utz and Shemetkov \cite[Corollary 7.13]{100}.
{\bf Lemma 3.2}. {\sl The product of any two non-empty
saturated formations is also a saturated formation.}
In the proof of Theorem 3.1 we use the following
{\bf Lemma 3.3.} {\sl The class $\cal F$ of all $\pi$-separable groups $G$ with $l_{\pi}(G) \leq t$ is a saturated formation.}
{\bf Proof.} It is not difficult to show that for any non-empty
set $\omega \subseteq \Bbb{P}$ the class ${\cal G}_{\omega}$ of all $\omega$-groups is a saturated formation and that ${\cal F}=({\cal G}_{\pi'}{\cal G}_{\pi})^{t}{\cal G}_{\pi'}$.
Hence ${\cal F}$ is a saturated formation by Lemma 3.2.
{\bf Lemma 3.4.} {\sl Suppose that $G$ is $\pi$-separable. If the Hall $\pi$-subgroups
of $G$ are abelian, then $l_{\pi}(G) \leq 1$.}
{\bf Proof.} Suppose that this lemma is false and let $G$ be a counterexample of minimal order. Let $N$ be a minimal normal subgroup of $G$.
Since $G$ is $\pi$-separable, $N$ is a $\pi$-group or a $\pi'$-group. It is clear that the hypothesis holds for $G/N$, so $l_{\pi}(G/N) \leq 1$ by the choice of $G$. By Lemma 3.3, the class of all $\pi$-separable groups with $l_{\pi}(G) \leq 1$ is a saturated formation. Therefore $N$ is the unique minimal normal subgroup of $G$, $N\nleq \Phi (G)$ and $N$ is not a $\pi'$-group. Hence $N$ is a $\pi$-group and $N=C_{G}(N)$ by \cite[A, 15.2]{DH}. Therefore $N\leq H$, where $H$ is a Hall $\pi$-subgroup of $G$. But since $H$ is abelian, $N=H$ is a Hall $\pi$-subgroup of $G$. Hence $l_{\pi}(G) \leq 1$, contrary to the choice of $G$.
A group $G$ is called \emph{$\pi$-closed} provided $G$ has a normal Hall $\pi$-subgroup.
{\bf Lemma 3.5.} {\sl Let $H$ be a Hall $\pi$-subgroup of $G$. If $G$ is $\pi$-separable and $H\leq Z(N_{G}(H))$, then $G$ is $\pi'$-closed. }
{\bf Proof.} Suppose that this lemma is false and let $G$ be a counterexample of minimal order. Then $G\ne H$.
The class $\cal F$ of all $\pi'$-closed groups coincides with the product ${\cal G}_{\pi'}{\cal G}_{\pi}$. Hence $\cal F$ is a saturated formation by
Lemma 3.2.
Let $N$ be a minimal normal subgroup of $G$. Since $G$ is $\pi$-separable, $N$ is a $\pi$-group or a $\pi'$-group. Moreover, $G$ is a $C_{\pi}$-group by \cite[9.1.6]{rob}, so the hypothesis holds for $G/N$.
Hence $G/N$ is $\pi'$-closed by the choice of $G$.
Therefore $N$ is the only minimal normal subgroup of $G$, $N\nleq \Phi (G) $ and $N$ is a $\pi$-group. Therefore $N\leq H$ and $N=C_{G}(N)$ by \cite[A, 15.2]{DH}. Since $H\leq Z(N_{G}(H))$ and $H$ is a Hall $\pi$-subgroup of $G$, $N=H$. Therefore $N\leq Z(G)$, which implies that $N=H=G$. This contradiction completes the proof of the lemma.
\section{Proof of Theorem A}
\
Recall that $G$ is a $PST$-group if and only if $G=D\rtimes M$, where $D=G^{\cal N }$
is an abelian Hall subgroup of $G$ and every element $x\in M$ induces a power automorphism on $D$ \cite{Agr}. Therefore the implication
(i) $\Rightarrow$ (ii) is a direct corollary of Theorem B.
Now suppose that $G$ is a soluble $PST$-group and write $G=D\rtimes M$, where $D=G^{\cal N }$. Let $H$ be any subgroup of $G$ and $S$ a Hall $\pi '$-subgroup of $H$. Since $G$ is soluble, we may assume without loss of generality that $S\leq M$. Hence $H=(D\cap H)(M\cap H)=(D\cap H)S$ and $D\cap H$ is normal in $G$.
Let $\pi _{1}= \pi (S)$. Let $A$ be a Hall $\pi _{1}$-subgroup
of $M$ and $E$ a complement to $A$ in $M$. Then $E\leq C_{G}(S)$. Therefore $G= DM=DAE=N_{G}(H)(DA)$ and every subgroup $L$ of $DA$ satisfying
$(|H|, |L|)=1 $ is contained in $D$. Thus $H$
is quasipermutable in $G$. Thus (ii) $\Rightarrow$ (iii).
(iv) $\Rightarrow$ (ii) By Theorems C and D, $G$ is supersoluble and $D$
is a Hall subgroup of $G$. Therefore $G=D\rtimes W$, where $W$ is a Hall $\pi'$-subgroup of $G$. By hypothesis, $W$ is quasipermutable in $G$. Now arguing similarly as in the proof of Theorem B one can show that $D$ is abelian and every subgroup of $D$ is normal in $G$. Therefore $G$ is a $PST$-group.
\section{Final remarks }
\
1. The subgroup $S_{3}$ is quasipermutable, $S$-propermutable and not propermutable in $S_{4}$. If $H$ is the subgroup of order 3 in $S_{3}$, then $H$ is $S$-quasipermutable and not quasipermutable in $S_{4}$.
2. Arguing similarly to the proof of Theorem A one can prove the following fact.
{\bf Theorem 5.1.} {\sl Suppose that $G$ is soluble and let
$\pi =\pi (G^{\cal N})$. Then $G$ is a $PST$-group if and only if
every subnormal $\pi$-subgroup and a Hall $\pi'$-subgroup of $G$
are propermutable in $G$. }
3. If $G$ is metanilpotent, that is $G/F(G)$ is nilpotent, then for every Hall subgroup $E$ of $G$ we have $G=N_{G}(E)F(G)$. Therefore, in this case, every characteristic
subgroup of every Hall subgroup of $G$ is $S$-propermutable in $G$. In particular, every Hall subgroup of every supersoluble group is $S$-propermutable. This observation makes the following question natural: {\sl What is the structure of $G$ under the hypothesis that every Hall subgroup of $G$ is propermutable in $G$?} Theorem B
gives an answer to this question.
4. Every maximal subgroup of a supersoluble group is quasipermutable. Therefore, in fact,
Theorem A shows that the class of all soluble groups in which quasipermutability is a transitive relation coincides with the class of all soluble $PST$-groups.
5. We say that $G$ is an \emph{$SQT$-group} if $S$-quasipermutability is a transitive relation in $G$. Arguing similarly to the proof of Theorem A one can prove the following fact.
{\bf Theorem 5.2.} {\sl A soluble group $G$ is an $SQT$-group if and only if
$G=D\rtimes M $ is supersoluble,
where $D$ and $M$
are Hall nilpotent subgroups of $G$ and the index
$|G:DN_{G}(H\cap D)|$ is a $\pi (H)$-number for every subgroup $H$ of $G$. }
6. A subgroup $H$ of $G$ is called \emph{$SS$-quasinormal} \cite{shirong}
(\emph{semi-normal} \cite{Su}) in $G$ provided $G$ has a subgroup $B$ such that $HB=G$ and $H$ permutes with all Sylow subgroups ($H$ permutes with all subgroups, respectively) of $B$.
It is clear that every $SS$-quasinormal subgroup is $S$-propermutable and every semi-normal subgroup is propermutable. Moreover, there are simple examples (consider, for example, the group
$C_{7}\rtimes \text{Aut} (C_{7})$, where $C_{7}$ is a group of order 7) which show that,
in general, the class of all $S$-propermutable subgroups of $G$ is wider than the class of all its $SS$-quasinormal subgroups, and the class of all propermutable subgroups of $G$ is wider than the class of all its semi-normal subgroups. Therefore the Proposition covers the main results (Theorems 1.1--1.5) in \cite{shirong}.
7. Theorem 3.1 is used in the proof of Theorem B.
From this result we also get
{\bf Corollary 5.3} (See \cite[Theorem 5.4]{8}). {\sl Let $H$ be
a Hall semi-normal subgroup of $G$. If $p
> q $ for all primes $p$ and $q$ such that $p$ divides $|H|$ and $q$
divides $|G:H|$, then $H$ is normal in $G$. }
{\bf Corollary 5.4} (See \cite[Theorem]{GuoS}). {\sl Let $P$ be a
Sylow $p$-subgroup of $G$. If $P$ is semi-normal in $G$, then the following statements hold: }
(i) {\sl $G$ is $p$-soluble and $P'\leq O_{p}(G)$.}
(ii) {\sl $l_{p}(G) \leq 2$. }
(iii) {\sl If for some prime $q\neq p$ a Hall $p'$-subgroup
of $G$
is $q$-supersoluble, then $G$ is $q$-supersoluble. }
{\bf Corollary 5.5} (See \cite[Theorem 3]{podg}). {\sl If a Sylow $p$-subgroup $P$ of
$G$, where $p$ is the largest prime dividing $|G|$, is semi-normal in $G$, then $P$ is normal in $G$.}
\end{document}
\begin{document}
\begin{abstract} Let $X$ be a topological space. Let $X_0 \subseteq X$ be a second countable subspace. Also, assume that $X$ is first countable at any point of $X_0$. Then we provide some conditions under which we ensure that $X_0$ is not Baire.
\end{abstract} \title{ A criterion to specify the absence of Baire property } Subject Classification: 37B55, 54E52. \\ \title{ A criterion to specify the absence of Baire property } Keywords: Nonautonomous systems, topological transitivity, Baire space, Birkhoff theorem. \title{ A criterion to specify the absence of Baire property }
\section{Introduction } A space $X$ is called Baire if the intersection of any sequence of dense open subsets of $X$ is dense in $X$. Alternatively, this notion can be formulated in terms of second category sets. Baire category theory has numerous applications in Analysis and Topology; among these applications are, for instance, the open mapping theorem, the closed graph theorem and the Banach-Steinhaus theorem in Functional Analysis \cite{Aarts, Haworth}.
The aim of this paper is to introduce a technique that establishes the absence of the Baire property for some topological spaces using dynamical tools. Before stating the main result, we fix some notation.
Let $(X,~\tau)$ be a topological space, let $X_n$ be subspaces of $X$, and let $$ x_{n+1}=f_n(x_n), ~ n \in \mathbb{N} \cup \{0\}, $$ where $ f_n: {X_n} \to { X_{n+1}}$ are continuous maps. The family $\{f_n\}_{n=0}^{\infty}$ is called a nonautonomous discrete system \cite{Shi, Shi-Chen}. For given $x_{0}\in X_{0}$, the orbit of $x_0$ is defined as $$ orb(x_{0}):=\big\{ x_{0}, ~ f_{0}(x_{0}), ~ f_{1}\circ f_{0}(x_{0}), ~\cdots , ~f_{n} \circ f_{n-1} \circ \cdots \circ f_{0}(x_{0}),~\cdots \big\},$$ and we say that this orbit starts from the point $x_0$. The topological structure of the orbit starting from $x_0$ may be complex. Here, we study the points of $X_{0}$ whose orbit closure in $X$ contains $X_{0}$. They are formulated as follows:
$$O:= \big\{x \in X_0 ~\big|~ {\overline{orb(x)}}^{{X}}\cap X_0=X_0 \big\}.$$ The system $\{f_n\}_{n=0}^{\infty}$ is called topologically transitive on $ X_{0}$ if for any two non-empty open sets $U_{0}$ and $ V_{0}$ in $ X_{0}, $ there exists $ n \in \mathbb{N}$ such that $U_{n}\cap V_{0}\neq \emptyset $, where $ U_{i+1}=f_{i}(U_i)$ for $ 0 \leq i \leq n-1$; in other words, $(f_{n-1} \circ f_{n-2} \circ \cdots \circ f_{1} \circ f_{0})(U_{0}) \cap V_{0}\neq \emptyset$ \cite{Shi-Chen}.
Our main theorem is as follows: \begin{theorem} \label{Theorem }
Let $X$ be a topological space. Let $X_0$ be a second countable subspace of $ X$ and let $X$ be first countable at any point of $X_0$. Also, suppose that
the system $\{f_n\}_{n=0}^{\infty}$ is topologically transitive on $X_{0}$ and $ \overline{O}\neq X_0$. Then $X_0$ cannot be a Baire subspace. \end{theorem} Note that if $X$ is a metric space, $X_n=X$, and $f_n=f $ for each $n$, then Theorem \ref{Theorem } is obtained as a direct consequence of the Birkhoff transitivity theorem. This fact was our motivation for writing the paper. \section{Proof }
Since $X_{0}$ is a second countable subspace and $X$ is first countable at every point of $X_0$, it is easy to show that there exists a collection $\{U_m\}_{m \in \mathbb{N}}$ of open sets in $X$ such that \begin{itemize}
\item [$i$)] $ U_m \cap X_0 \neq\emptyset$,
\item [$ii$)] the family $\{U_m \cap X_0\}_{m \in \mathbb{N}}$ is a basis for $X_0$,
\item [$iii$)] for each $x_0\in X_0$, the family $\{U_m\}_{m \in {\mathbb{N}}}$ is a local basis for $x_0$ in $X$. \end{itemize} We claim that $$O=\bigcap_{m=1}^{\infty}\bigcup_{n=1}^{\infty}(f_{n-1}
\circ f_{n-2} \circ \cdots \circ f_{1} \circ f_{0})^{-1}(U_{m}). \eqno{(2.1)}$$ To prove the claim, put $O^*:=\bigcap_{m=1}^{\infty}\bigcup_{n=1}^{\infty}(f_{n-1}\circ f_{n-2} \circ \cdots \circ f_{1} \circ f_{0})^{-1}(U_{m}).$ Firstly, we show that $O \subseteq O^*$. Suppose otherwise; then there is $ x \in O $ such that $x \notin O^*$. Since $x \notin O^*$, there exists $m \in \mathbb{N}$ such that for each $n \in \mathbb{N}$ we have $$ {(f_{n-1}\circ f_{n-2}\circ \cdots \circ f_{1} \circ f_{0})}(x) \notin {U_{m}}.$$ Hence, $orb(x) \cap U_m=\emptyset$. Since $U_m\cap X_0 \neq \emptyset$, there exists an element $z\in U_m\cap X_0$ such that $z\notin {\overline{orb(x)}}^X.$ But $ z \in X_0$, and so $ {\overline{orb(x)}}^X\cap X_0 \neq X_0$. It follows that $x \notin O$, which contradicts the choice of $x$. Now, we show that $O^* \subseteq O$. Let $x\in O^*$ but $x \notin O$. Since $x\in O^*$, for each $m \in \mathbb{N}$ there exists $n \in \mathbb{N}$ such that $ {(f_{n-1}\circ f_{n-2} \circ \cdots \circ f_{1} \circ f_{0})}(x) \in U_m.$ Thus, $orb(x) \cap U_m \neq \emptyset$ for each $m\in\mathbb{N}$. Moreover, the relation $x \notin O$ indicates that there exists $ z\in X_0$ such that $z \notin \overline{orb(x)}^X.$ Consequently, there exists $U_k$ containing $z$ such that $U_k \cap orb(x)= \emptyset$, which contradicts the previous conclusion. \\ By continuity of $ f_n: {X_n} \rightarrow { X_{n+1}},$ each set $\bigcup_{n=1}^{\infty}{(f_{n-1} \circ f_{n-2} \circ \cdots \circ f_{1} \circ f_{0})}^{-1}(U_{m})$ is open, and because of transitivity, these open sets are dense in $X_{0}$. If $X_{0}$ were a Baire space, then (2.1) would imply that $O$ is a dense $ G_{\delta}$-set. This contradicts $ \overline{O}\neq X_0$. Thus $X_0$ is not a Baire subspace, and the proof of Theorem \ref{Theorem } is complete.
\section{Example } \begin{example}
Consider $X=\mathbb H(\mathbb C)
=\big\{f:\mathbb C \rightarrow \mathbb C \mid f \mbox{ is holomorphic} \big\}$ endowed with the metric $d(f,g)=\displaystyle\sum_ {n=1}^{\infty}\frac {1}{2^n} \min \big(1,\,p_n(f-g)\big),$
with $p_n(h) =\sup_{|z| \leq n } |h(z)|$. Then $X$ is a separable Fr\'echet space and the differentiation operator $D:\mathbb H(\mathbb C) \rightarrow \mathbb H(\mathbb C)$ with $ D(f)=f^\prime$ is continuous \cite {Grosse-Erdmann}. Moreover, the space $\mathbb H(\mathbb C)$ is Baire, and if we consider the dynamical system $ D:\mathbb H(\mathbb C ) \rightarrow \mathbb H(\mathbb C),$ then Birkhoff's theorem guarantees the existence of functions whose orbit is dense in $\mathbb{ H} (\mathbb{ C})$.
Now, assume that
$$ X_0=\big\{\sum_{i=0}^{N} a_iz^i+\alpha g(z)\big |~ a_i , \alpha \in \mathbb{C} \big\}.$$ Then the subspace $ X_0$ is not Baire. To see this, take an increasing sequence $\{\alpha_n\}_{n=0}^{\infty}$
with $\alpha_0=0$ such that $D^{\alpha_n}(g)$ is convergent. We consider the nonautonomous discrete system $\{f_{n}\}_{n=0}^{\infty}$ with $f_n=D^{\alpha_{n+1}-\alpha_n}$, where
$ X_{n}=\big\{\sum_{i=0}^{N} a_iz^i+\alpha g^{(\alpha_n)}(z)\big | a_i , \alpha \in \mathbb{C} \big\}$. By arguments similar to those employed in the proof of Example 2.21 in \cite{Grosse-Erdmann}, we observe that the system $\{f_n\}_{n=0}^{\infty}$ is topologically transitive. The assertion now follows from Theorem \ref{Theorem } since the set $ O$ is empty.
\end{example}
\end{document}
\begin{document}
\title{AN ANALOGUE OF THE L\'EVY-CRAM\'ER THEOREM FOR RAYLEIGH DISTRIBUTIONS } \author{Thu Van Nguyen} \address{Department of Mathematics;
International University, HCM City;
No.6 Linh Trung ward, Thu Duc District, HCM City;
Email: nvthu@hcmiu.edu.vn} \date{June 30, 2009}
\begin{abstract} In the present paper we prove that every $k$-dimensional Cartesian product of Kingman convolutions can be embedded into a $k$-dimensional symmetric convolution $(k=1, 2, \ldots)$ and obtain an analogue of the L\'evy-Cram\'er theorem for multi-dimensional Rayleigh distributions.
\end{abstract}
\maketitle
\noindent Keywords and phrases: Cartesian products of Kingman convolutions; Rayleigh distributions; radial characteristic functions.\\
\noindent AMS 2000 subject classification: 60B07, 60B11, 60B15, 60K99.
\section{Introduction, Notations and Preliminaries}\label{S:intro}
In probability theory and statistics, the {\bf Rayleigh distribution} is a continuous probability distribution widely used to model phenomena arising in fields such as medicine and the social and natural sciences. A multivariate Rayleigh distribution is the probability distribution of a vector of norms of random Gaussian vectors.
The purpose of this paper is to introduce and study multivariate Rayleigh distributions with fractional indices via the Cartesian product of Kingman convolutions and, in particular, to prove an analogue of the L\'evy-Cram\'er theorem for multivariate Rayleigh distributions.
Let $\mathcal P:=\mathcal P(\mathbb R^+)$ denote the set of all probability measures (p.m.'s) on the positive half-line $\mathbb R^+$. Put, for each continuous bounded function $f$ on $\mathbb R^{+}$,
\begin{multline}\label{astKi} \int_{0}^{\infty}f(x)\mu\ast_{1,\delta}\nu(dx)=\frac{\Gamma(s+1)}{\sqrt{\pi}\Gamma(s+\frac{1}{2})}\\ \int_{0}^{\infty}\int_{0}^{\infty}\int_{-1}^{1}f((x^2+2uxy+y^2)^{1/2})(1-u^2)^{s-1/2}\mu(dx)\nu(dy)du, \end{multline}
where $\mu\mbox{ and }\nu\in\mathcal P\mbox{ and }\delta=2(s+1)\geq1$ (cf. Kingman \cite{Ki} and Urbanik \cite{U1}). The convolution algebra $(\mathcal{P},\ast_{1,\delta})$ is
the most important example of an Urbanik convolution algebra (cf. Urbanik \cite{U1}). In the language of
Urbanik convolution algebras, the {\it characteristic measure}, say $\sigma_s$, of the Kingman convolution
has the Rayleigh density
\begin{equation}\label{Ray}
d\sigma_s(y)= \frac{2{(s+1)^{s+1}}}{\Gamma(s+1)}y^{2s+1}\exp{(-(s+1)y^2)}dy
\end{equation} with the characteristic exponent $\varkappa=2$ and the kernel $\Lambda_s$ \begin{equation}\label{eq:Lam}
\Lambda_s(x)= \Gamma(s+1) J_{s}(x)/(x/2)^{s}, \end{equation} where $J_s(x)$ denotes the Bessel function of the first kind, \begin{equation}\label{eq:Bessel} J_s(x):= \sum_{k=0}^{\infty} \frac{(-1)^k (x/2)^{s+2k}}{k!\,\Gamma(s+k+1)}. \end{equation}
It is known (cf. Kingman \cite{Ki}, Theorem 1), that the kernel $\Lambda_s$ itself is an ordinary characteristic function (ch.f.) of a symmetric p.m., say $F_s$, defined on the interval [-1,1]. Thus, if $\theta_s$ denotes a random variable (r.v.) with distribution $F_s$ then for each $t\in \mathbb R^+$, \begin{equation}\label{eq:LamThe} \Lambda_s(t)= E\exp{(it\theta_s)}=\int_{-1}^1\cos{(tx)}dF_s(x).\end{equation}
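As a quick numerical sanity check (ours, not part of the paper), the kernel $\Lambda_s$ of (\ref{eq:Lam}) can be evaluated directly from the series (\ref{eq:Bessel}): the factors $(x/2)^s$ cancel term by term, and for $s=1/2$ the kernel reduces to the classical Kingman case $\sin x/x$. A minimal Python sketch, with helper names of our choosing:

```python
from math import gamma, sin

def Lambda(s, x, terms=60):
    """Kingman kernel Lambda_s(x) = Gamma(s+1) J_s(x) / (x/2)^s, computed
    from the Bessel series; the (x/2)^s factors cancel, leaving
    Lambda_s(x) = Gamma(s+1) * sum_k (-1)^k (x/2)^(2k) / (k! Gamma(s+k+1))."""
    total, fact = 0.0, 1.0
    for k in range(terms):
        if k > 0:
            fact *= k  # running k!
        total += (-1) ** k * (x / 2) ** (2 * k) / (fact * gamma(s + k + 1))
    return gamma(s + 1) * total

# Lambda_s is a characteristic function, so Lambda_s(0) = 1.
print(Lambda(0.5, 0.0))  # 1.0
# For s = 1/2 the kernel is the classical sinc kernel sin(x)/x.
print(abs(Lambda(0.5, 1.3) - sin(1.3) / 1.3) < 1e-12)  # True
```

The check $\Lambda_s(0)=1$ reflects the fact that $\Lambda_s$ is the ch.f. of the p.m. $F_s$.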
Suppose that $X$ is a nonnegative r.v. with distribution $\mu\in\mathcal{P}$
and $X$ is independent of $\theta_s$. The {\it radial characteristic function}
(rad.ch.f.) of $\mu$, denoted by $\hat\mu(t),$ is defined by
\begin{equation}\label{ra.ch.f.} \hat\mu(t) = E\exp{(itX\theta_s)} = \int_0^{\infty} \Lambda_s(tx)\mu(dx), \end{equation}
for every $t\in \mathbb R^{+}$.
The characteristic measure $\sigma_s$ of the Kingman convolution $\ast_{1, \delta}$ has the Maxwell density function \begin{equation}\label{Maxwell density} \frac{d\sigma_s(x)}{dx}=\frac{2(s+1)^{s+1}}{\Gamma(s+1)}x^{2s+1}\exp\{-(s+1)x^2\}, \quad(0<x<\infty), \end{equation} and the rad.ch.f. \begin{equation} \hat\sigma_s(t)=\exp\{-t^2/(4(s+1))\}. \end{equation}
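The closed form $\hat\sigma_s(t)=\exp\{-t^2/(4(s+1))\}$ can be verified numerically against the defining integral (\ref{ra.ch.f.}) with $\mu=\sigma_s$. The Python sketch below is illustrative only (our helper names; plain trapezoid quadrature over a truncated half-line):

```python
from math import gamma, exp

def Lambda(s, x, terms=60):
    # Kingman kernel via the Bessel series (the (x/2)^s factors cancel).
    total, fact = 0.0, 1.0
    for k in range(terms):
        if k > 0:
            fact *= k
        total += (-1) ** k * (x / 2) ** (2 * k) / (fact * gamma(s + k + 1))
    return gamma(s + 1) * total

def sigma_density(s, x):
    # Maxwell (Rayleigh) density of the characteristic measure sigma_s.
    return 2 * (s + 1) ** (s + 1) / gamma(s + 1) * x ** (2 * s + 1) * exp(-(s + 1) * x * x)

def rad_chf_sigma(s, t, n=8000, upper=8.0):
    # hat(sigma_s)(t) = int_0^inf Lambda_s(t x) sigma_s(dx), trapezoid rule;
    # the tail beyond `upper` is negligible for the Gaussian-type density.
    h = upper / n
    ys = [Lambda(s, t * i * h) * sigma_density(s, i * h) for i in range(n + 1)]
    return h * (sum(ys) - 0.5 * (ys[0] + ys[-1]))

s, t = 0.5, 1.7
print(abs(rad_chf_sigma(s, t) - exp(-t * t / (4 * (s + 1)))) < 1e-4)  # True
```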
Let $\tilde{\mathcal P}:=\tilde{\mathcal P}(\mathbb R)$ denote the class of symmetric p.m.'s on $\mathbb R.$ Putting, for every $G\in \mathcal P$, \begin{equation*}
F_s(G)=\int_0^{\infty}T_c F_s\, G(dc),
\end{equation*}
where $T_cF_s$ denotes the distribution of $c\theta_s$,
we get a homeomorphism from the Kingman convolution algebra $(\mathcal{P},\ast_{1,\delta})$ onto the ordinary convolution algebra $(\tilde{\mathcal P}, \ast)$ such that \begin{eqnarray}\label{homeomorphism1}
F_s\{G_1\ast_{1, \delta}G_2\}&=&(F_sG_1)\ast(F_sG_2) \qquad G_1, G_2\in \mathcal P\\
F_s\sigma_s&=&N(0, 2s+1)
\end{eqnarray}
which shows that every Kingman convolution can be embedded into the ordinary convolution $\ast$.
\section{Cartesian product of Kingman convolutions}
Denote by $ \mathbb {R}^{+k}, k=1,2,...$ the k-dimensional nonnegative cone
of $ \mathbb {R}^{k}$ and $\mathcal{P}(\mathbb {\mathbb R}^{+k})$ the class of all p.m.'s on $\mathbb {\mathbb R}^{+k}$ equipped with the weak convergence. In the sequel, we will denote the multidimensional vectors and random vectors (r.vec.'s) and their distributions by bold face letters.
For each point $z$ of a set $A$, let $\delta_z$ denote the Dirac measure (the unit mass) at $z$. In particular, if $\mathbf x=(x_1, x_2,\cdots,x_k)\in
\mathbb R^{+k}$, then \begin{equation}\label{proddelta} \delta_{\mathbf {x}} = \delta_{x_1}\times\delta_{x_2}\times \ldots\times\delta_{x_k},\quad (k\; times), \end{equation} where the sign ``$\times$'' denotes the Cartesian product of measures.
We put, for $\mathbf {x} = (x_1,\cdots, x_k)\mbox{ and }\mathbf {y} = (y_1,y_2,\cdots, y_k)\in \mathbb R^{+k},$
\begin{equation}\label{convdeltas}\delta_{\mathbf x}\bigcirc_{s, k} \delta_{\mathbf {y}} = \{\delta_{x_1}\circ _s \delta_{y_1}\} \times\{\delta_{x_2} \circ _s\delta_{y_2}\} \times\cdots\
\times \{\delta_{x_k} \circ_s \delta_{y_k}\},\quad (k\; times), \end{equation} here and below, for the sake of simplicity, we denote the Kingman convolution operation $\ast_{1,\delta}$, $\delta=2(s+1)\ge 1$, simply by $\circ_{s}$, $s\ge -\frac{1}{2}.$
Since convex combinations of p.m.'s of the form (\ref{proddelta}) are dense in $\mathcal P(\mathbb R^{+k})$ the relation (\ref{convdeltas}) can be extended to arbitrary p.m.'s $ \mathbf{G}_1 \mbox{ and } \mathbf{G}_2\in\mathcal{P}( \mathbb R^{+k})$. Namely, we put \begin{equation}\label{convF} \mathbf {G}_1 \bigcirc_{s, k} \mathbf {G}_2 = \iint\limits_{ \mathbb R^{+k}}
\delta_{\mathbf {x}} \bigcirc_{s, k} \delta_{\mathbf {y}} {\mathbf G}_1(d\mathbf {x}) {\mathbf G}_2(d\mathbf {y})
\end{equation} which means that for each continuous bounded function $\phi$ defined on $\mathbb R^{+k}$
\begin{equation}\label{convof} \int\limits_{\mathbb R^{+k}} \phi({\mathbf z}) {\mathbf G}_1 \bigcirc_{s, k} {\mathbf G}_2 (d{\mathbf z})= \iint\limits_{ \mathbb R^{+k}}\big\{\int\limits_{\mathbb R^{+k}} \phi({\mathbf z}) \delta_{{\mathbf x}} \bigcirc_{s, k} \delta_{{\mathbf y}}(d{\mathbf z})\big\}{ \mathbf G}_1(d{\mathbf x}) {\mathbf G}_2(d{\mathbf y}). \end{equation}
In the sequel, the binary operation $\bigcirc_{s, k}$ will be called {\it the k-times Cartesian product of Kingman convolutions} and the pair $(\mathcal P( \mathbb R^{+k}), \bigcirc_{s, k})$ will be called {\it the k-dimensional Kingman convolution algebra}. It is easy to show that the binary operation $\bigcirc_{s, k}$ is continuous in the weak topology which together with (\ref{astKi}) and (\ref{convF}) implies the following theorem.
\begin{theorem}\label{Theo:Kingmanalgebra} The pair $(\mathcal P{( \mathbb R^{+k})} ,\bigcirc_{s, k})$
is a commutative topological semigroup with $\delta_{\mathbf 0}$ as the unit element. Moreover, the operation $\bigcirc_{s, k}$ is distributive w.r.t. convex combinations of p.m.'s in $\mathcal P( \mathbb R^{+k})$. \end{theorem}
For every ${\mathbf G}\in\mathcal P( \mathbb R^{+k})$ the $k$-dimensional rad.ch.f. $\hat{{\mathbf G}}({\mathbf t}), {\mathbf t}=(t_1, t_2, \cdots t_k)\in \mathbb R^{+k},$ is defined by \begin{equation}\label{k-ra.ch.f.} \hat{\mathbf G}(\mathbf {t})=\int\limits_{\mathbb R^{+k}}
\prod_{j=1}^{k}\Lambda_s(t_jx_j){ \mathbf G}(\mathbf {dx}),
\end{equation}
where $\mathbf {x}=(x_1, x_2, \cdots x_k)\in \mathbb R^{+k}.$
Let $\mathbf{\Theta_s} = \{\theta_{s, 1},\theta_{s, 2}, \cdots ,\theta_{s, k}\}$, where $\theta_{s, j}$ are independent r.v.'s with the same distribution
$F_s $.
Next, assume that $ {\mathbf X}=\{X_1, X_2,..., X_k\}$ is a k-dimensional r.vec. with distribution $\mathbf{G}$ and $\mathbf{X}$ is independent of $\mathbf{\Theta}_s$. We put
\begin{equation}\label{[Theta,X]} [{\mathbf\Theta}_s,{\mathbf X}]=\{{\theta_{s,1} X_1, \theta_{s, 2} X_2,...,\theta_{s, k}X_k}\}. \end{equation}
Then, the following formula is equivalent to (\ref{k-ra.ch.f.}) (cf. \cite{Ng4})
\begin{equation}\label{multiradchf}
\widehat{\mathbf G}({\mathbf t})=Ee^{i<{\mathbf t},[{\mathbf\Theta_s, \mathbf X}]>},\qquad {\mathbf t}\in \mathbb R^{+k}.
\end{equation} The reader is referred to Corollary 2.1 and Theorems 2.3 and 2.4 of \cite{Ng4} for the principal properties of the above rad.ch.f.
Given $s\ge -1/2$ define a map $F_{s, k}: \mathcal P(\mathbb R^{+k}) \rightarrow \mathcal P(\mathbb R^k)$ by
\begin{equation}\label{k-map}
F_{s, k}({\mathbf G})=\int\limits_{\mathbb R^{+k}} (T_{c_1}F_s)\times(T_{c_2}F_s)\times \ldots\times(T_{c_k}F_s) {\mathbf G}(d{\mathbf c}),
\end{equation}
here and in the sequel, for a distribution $\mathbf G$ of a r.vec. $\mathbf X$ and a real number c we denote by $T_c{\mathbf G}$ the distribution of $c{\mathbf X}$.
Let us denote by $\tilde{ \mathcal P}_{s, k}(\mathbb{R}^{+k})$ the sub-class of $\mathcal P(\mathbb R^k)$ consisting of all p.m.'s defined by the right-hand side of (\ref{k-map}).
By virtue of (\ref{k-ra.ch.f.})-(\ref{k-map}) it is easy to prove the following theorem.
\begin{theorem}\label{symmconvo} The set $\tilde{ \mathcal P}_{s, k}(\mathbb{R}^{+k})$ is closed w.r.t. the weak convergence and the ordinary convolution $\big.\ast$ and the following equation holds \begin{equation}\label{Fourier=rad.ch.f.} \hat{\mathbf G}({\mathbf t})=\mathcal F(F_{s, k}({\mathbf G}))({\mathbf t}),\qquad {\mathbf t}\in {\mathbb R^{+k}}, \end{equation} where $\mathcal F({\mathbf K})$ denotes the ordinary characteristic function (Fourier transform) of a p.m. ${\mathbf K}$. Therefore, for any ${\mathbf G}_1\mbox{ and } {\mathbf G}_2\in \mathcal P(\mathbb R^{+k})$ \begin{equation}\label{convolequality} F_{s, k}({\mathbf G}_1)\big.\ast F_{s, k}({\mathbf G}_2)=F_{s, k}({\mathbf G}_1\bigcirc_{s, k}{\mathbf G}_2) \end{equation} and the map $F_{s, k}$ commutes with convex combinations of distributions and scale changes $T_c, c>0$. Moreover, \begin{equation}\label{Gaussian-Rayleigh} F_{s, k}({\Sigma_{s, k}})=N({\mathbf 0}, 2(s+1){\mathbf I}), \end{equation} where $\Sigma_{s, k}$ denotes the k-dimensional Rayleigh distribution and $N({\mathbf 0}, 2(s+1){\mathbf I}) $ is the symmetric normal distribution on $\mathbb R^k \mbox{ with variance operator } R= 2(s+1) {\mathbf I}, {\mathbf I}$ being the identity operator. Consequently, every Kingman convolution algebra $\big( \mathcal P(\mathbb R^{+k}), \bigcirc_{s, k}\big)$ is embedded in the ordinary convolution algebra $\big( \tilde{\mathcal P}_{s, k}(\mathbb{R}^{+k}), \big.\ast \big)$ and the map $F_{s, k}$ is a homeomorphism. \end{theorem} \begin{proof}
First we prove equation (\ref{Fourier=rad.ch.f.}) by taking the Fourier transform of the right-hand side of (\ref{k-map}). We have, for ${\mathbf t}\in \mathbb R^k,$ \begin{eqnarray}\label{Fourier-r.ch.f} \mathcal F(F_{s, k}({\mathbf G}))({\mathbf t})&=&\notag \int\limits_{\mathbb R^k}\prod_{j=1}^k \cos(t_jx_j)\,F_{s, k}({\mathbf G})(d{\mathbf x})\\ &=&\int\limits_{\mathbb R^{+k}}\int\limits_{\mathbb R^k} \prod_{j=1}^k\cos(t_jx_j)\,(T_{c_1}F_s)\times\cdots\times(T_{c_k}F_s)(d{\mathbf x})\,{\mathbf G}(d{\mathbf c})\\ &= &\int\limits_{\mathbb R^{+k}} \prod_{j=1}^{k}\Lambda_s(t_jc_j) {\mathbf G}(d{\mathbf c})\notag\\ &=& \hat{\mathbf G}({\mathbf t})\notag \end{eqnarray} which implies that the set $\tilde{ \mathcal P}_{s, k}(\mathbb{R}^{+k})$ is closed w.r.t. the weak convergence and the ordinary convolution $\big.\ast$ and, moreover, that equations (\ref{convolequality}) and (\ref{Gaussian-Rayleigh}) hold. \end{proof}
\begin{definition}\label{k-ID}
A p.m. ${\mathbf F} \in \mathcal P(\mathbb R^{+k})$ is called $\bigcirc_{s, k}-$infinitely divisible
($\bigcirc_{s, k}$-ID), if for every $m=1, 2, \ldots$ there exists $\mathbf F_m\in \mathcal P(\mathbb R^{+k})$ such that
\begin{equation}\label{kID} { \mathbf F}={\mathbf F}_m\bigcirc_{s, k} {\mathbf F}_m\bigcirc_{s, k}\ldots \bigcirc_{s, k}{\mathbf F}_m\quad (m\;times).
\end{equation}
\end{definition}
\begin{definition}\label{stability}
A p.m. $\mathbf F$ is called stable if for any positive
numbers $a$ and $b$ there exists a positive number $c$ such that
\begin{equation}\label{k-stability}
T_a{\mathbf F}\;{\bigcirc_{s, k}}\;T_b{\mathbf F}=T_c{\mathbf F}
\end{equation} \end{definition}
By virtue of Theorem \ref{symmconvo}, the following theorem holds.
\begin{theorem}\label{equivdef}
A p.m. $\mathbf G$ is $\bigcirc_{s, k}$-ID, resp. stable, if and only if
$F_{s, k}({\mathbf G})$ is ID, resp. stable, in the usual sense.
\end{theorem}
The following lemma will be used in the representation of $\bigcirc_{s, k}$-ID distributions, $k\ge 2.$
\begin{lemma}\label{Bessellimittheorem}
(i) For every $t\ge 0$
\begin{equation}\label{Bessellimittheorem 1}
\lim_{x\rightarrow 0}\frac{1-\Lambda_s(tx)}{x^2}=
\lim_{x\rightarrow
0}\frac{1-Ee^{itx\theta_s}}{x^2}=\frac{t^2}{2}.
\end{equation}
(ii) For any ${\mathbf x}=(x_0, x_1,\cdots ,x_k)\mbox{ and }{\mathbf t}=(t_0, t_1, \cdots, t_k)\in\mathbb R^{k+1}, k=1,2, ...$
\begin{equation}\label{bessellimittheorem2}
\lim_{\rho\rightarrow 0}\frac{1-\prod_{r=0}^k \Lambda_s (t_rx_r)}{\rho^2}=\frac{1}{2}\sum_{r=0}^k \lambda^2_r( Arg({\mathbf x}))t_r^2,
\end{equation}
where $\rho=||\mathbf x||, Arg({\mathbf x})=\frac{\mathbf x}{||\mathbf x||},\mbox{ and } \lambda_r( Arg({\mathbf x})), r=0,1, ...,k$ are given by
\begin{equation}\label{polarization}
\lambda_r( Arg({\mathbf x}))=
\begin{cases}\cos\phi & r=0,\\
\sin\phi\sin\phi_1\cdots
\sin\phi_{r-1}\cos\phi_{r} &1\le r\le k-2,\\
\sin\phi\sin\phi_1...\sin\phi_{k-2}\cos\psi & r={k-1},\\
\sin\phi\sin\phi_1
...\sin\phi_{k-2}\sin\psi & r=k,
\end{cases}
\end{equation}
where $0\le \psi, \phi, \phi_r\le\pi/2, r=1,2,...,k-2$ are angles
of $\mathbf{x}$ appearing in its polar form.
\end{lemma}
The following theorem gives a representation of rad.ch.f.'s of $\bigcirc_{s, k}-$ID distributions
(see \cite{Ng4}, Theorem 2.6, for the proof).
\begin{theorem}\label{LevyID} A p.m. $\mu$ belongs to $ID(\bigcirc_{s, k})$ if and only if there exists a $\sigma$-finite measure ${\mathbf M}$ (a L\'evy measure) on
$ \mathbb R^{+k}$ with the properties that ${\mathbf M}(\{{\mathbf 0}\})=0$, ${\mathbf M}$ is finite outside every neighborhood of ${\mathbf 0}$, and \begin{equation}\label{integrable w. r. t. weight function}
\int_{\mathbb R^{+k}}\frac {\|{\mathbf x}\|^2} {1+\|{\mathbf x}\|^2} {\mathbf M}(d{\mathbf x}) < \infty \end{equation}
and for each ${\mathbf t}=(t_1,...,t_k)\in \mathbb R^{+k}$ \begin{equation}\label{Levy-Kintchine for k-dim.rad. ch. f.}
-\log{\hat{\mu}({\mathbf t})}=\int_{\mathbb R^{+k}}\{1-\prod_{j=1}^{k}\Lambda_s(t_jx_j)\} \frac
{1+\|{\mathbf x}\|^2} {\|{\mathbf x}\|^2} {\mathbf M}(d{\mathbf x}), \end{equation} where, at the origin $\mathbf{0}$, the integrand on the right-hand side of (\ref{Levy-Kintchine for k-dim.rad. ch. f.}) is taken to be \begin{equation}\label{limiting integrand}
\frac{1}{2}\sum_{j=1}^k \lambda^2_j t_j^2 = \lim_{\|\mathbf
{x}\|\rightarrow 0 }\{1-\prod_{j=1}^k
\Lambda_s(t_jx_j)\} \frac {1+\|\mathbf x\|^2}
{\|\mathbf {x}\|^2} \end{equation} for nonnegative $\lambda_j, j=1, 2,...,k,$ given by equations (\ref{polarization}) in Lemma \ref{Bessellimittheorem}.
In particular, if ${\mathbf M}=0$, then $\mu$ becomes a Rayleighian distribution (see Definition \ref{Rayleigh}) with the rad.ch.f. \begin{equation}\label{kRayleighian rad. ch. f.} -\log{\hat{\mu}({\mathbf t})}=\frac{1}{2}\sum_{j=1}^k \lambda^2_j t_j^2,\quad {\mathbf t}\in \mathbb R^{+k}, \end{equation}
for some nonnegative $\lambda_j, j=1,...,k.$
Moreover, the representation (\ref{Levy-Kintchine for k-dim.rad. ch.
f.}) is unique.
\end{theorem}
An immediate consequence of the above theorem is the following: \begin{corollary}\label{Cor:Pair} Each distribution $\mu\in ID(\bigcirc_{s, k})$ is uniquely determined by the pair $[\mathbf{M}, \pmb {\lambda}]$, where $\mathbf{M}$ is a L\'evy measure on $\mathbb R^{+k}$ such that $\mathbf{M}(\{\mathbf{0}\})=0,$ $\mathbf{M}$ is finite outside every neighborhood of $\mathbf{0}$ and the condition (\ref{integrable w. r. t. weight function})
is satisfied, and $\pmb{\lambda}:=\{\lambda_1, \lambda_2,\cdots \lambda_k\}\in \mathbb R^{+k}$ is a vector of nonnegative numbers appearing in (\ref{kRayleighian rad. ch. f.}). Consequently, one can write $\mu\equiv[\mathbf{M}, \pmb {\lambda}].$\\ \indent In particular, if $\mathbf{M}$ is the zero measure, then $\mu=[\pmb{\lambda}]$ becomes a Rayleighian p.m. on $\mathbb R^{+k}$ as defined below: \end{corollary}
\begin{definition}\label{Rayleigh}
A k-dimensional distribution, say $\pmb{\mathbf \Sigma}_{s, k}$, is called a {\it Rayleigh distribution}, if \begin{equation}\label{k-dimension Rayleigh} \pmb{\mathbf \Sigma}_{s, k}=\sigma_s\times\sigma_s\times\cdots\times\sigma_s \quad
(k\;times).
\end{equation}
Further, a distribution ${\mathbf F}\in \mathcal P(\mathbb R^{+k})$ is called a {\it Rayleighian distribution} if there exist nonnegative numbers $\lambda_r,
r=1,2 \cdots k $ such that \begin{equation}\label{k-dimensional rayleighian} { \mathbf F}=\{T_{\lambda_1}\sigma_s\}\times \{T_{\lambda_2}\sigma_s\}
\times\ldots \times\{T_{\lambda_k}\sigma_s\}.
\end{equation}
\end{definition}
\indent
It is evident that every Rayleigh distribution is a Rayleighian distribution. Moreover, every Rayleighian distribution is $\bigcirc_{s, k}$-ID. By virtue of (\ref{Maxwell density}) and (\ref{k-dimension Rayleigh}) it follows that the k-dimensional Rayleigh density is given by
\begin{equation}\label{density k-dimension Rayleigh}
g({\mathbf x})=\frac{2^k(s+1)^{k(s+1)}}{\Gamma^k(s+1)}\Big(\prod_{j=1}^k x_j^{2s+1}\Big)\exp\{-(s+1)\|{\mathbf x}\|^2\},
\end{equation}
\end{equation}
where ${\mathbf x}=(x_1, x_2,\ldots, x_k)\in \mathbb R^{+k}$ and the corresponding rad.ch.f. is given by
\begin{equation}
\hat\Sigma_{s, k}({\mathbf t})=\exp(-\|{\mathbf t}\|^2/(4(s+1))),\quad {\mathbf t}\in \mathbb R^{+k}.
\end{equation}
Finally, the rad.ch.f. of a Rayleighian distribution $\mathbf F\mbox{ on } \mathbb R^{+k}$ is given by
\begin{equation}\label{rad.ch.rayleighian}
\hat{\mathbf F}({\mathbf t})=\exp\Big(-\frac{1}{2}\sum_{j=1}^k\lambda_j^2t_j^2\Big),
\end{equation}
where $\lambda_j, j=1, 2, \ldots, k$ are some nonnegative numbers. \section{An analogue of the L\'evy-Cram\'er Theorem in multi-dimensional Kingman convolution algebras}
We say that a distribution ${\mathbf F}$ on $\mathbb R^k$ has dimension $m$, $1\le m \le k$,
if $m$ is the dimension of the smallest hyper-plane containing the support of $\mathbf F.$
The following theorem can be regarded as a version of the L\'evy-Cram\'er Theorem
for multi-dimensional Kingman convolutions. The case $k=1$ was proved by Urbanik \cite{U2}.
\begin{theorem}\label{Levy-Cramer}
Suppose that $\mathbf G_i \in \mathcal P(\mathbb R^{+k}), i=1, 2 $ and
\begin{equation}\label{decomposi} \Sigma_{s, k}={\mathbf G}_1 \bigcirc_{s, k} {\mathbf G}_2.
\end{equation}
Then ${\mathbf G}_i, i=1, 2,$ are both Rayleighian distributions fulfilling the condition that there exist nonnegative numbers $\lambda_{i, r}, i=1, 2\mbox{ and } r=1, 2,\ldots, k,$
such that for each $i=1, 2$ the number of non-zero coefficients $\lambda_{i, r}$ among $\lambda_{i, 1}, \lambda_{i, 2},\ldots, \lambda_{i, k}$ equals the dimension of ${\mathbf G}_i$. Moreover,
\begin{equation}
\lambda_{1, r}^2+\lambda_{2, r}^2=1,\qquad r=1, 2, ..., k
\end{equation}
and
\begin{equation}\label{form i} { \mathbf G}_i =T_{\lambda_{i, 1}}\sigma_s\times T_{\lambda_{i, 2}}\sigma_s\times \ldots\times T_{\lambda_{i, k}} \sigma_s
\end{equation}
\end{theorem}
\begin{proof}
Suppose that the equation (\ref{decomposi}) holds. Using the map $F_{s, k}$ we have \begin{equation*} F_{s, k}(\Sigma_{s, k})=F_{s, k}({\mathbf G}_1)\big.\ast F_{s, k}({\mathbf G}_2) \end{equation*} which, by virtue of (\ref{Gaussian-Rayleigh}), implies that \begin{equation*} N({\mathbf 0}, 2(s+1){\mathbf I})=F_{s, k}({\mathbf G}_1)\big.\ast F_{s, k}({\mathbf G}_2). \end{equation*}
By the well-known L\'evy-Cram\'er Theorem on $\mathbb R^k$ (cf. Linnik and Ostrovskii \cite{LiOst}), it follows that $F_{s, k}({\mathbf G}_1)$ and $F_{s, k}({\mathbf G}_2)$ are both symmetric Gaussian distributions on $\mathbb R^k.$ Consequently, ${\mathbf G}_1$ and ${\mathbf G}_2$ must be of the form
(\ref{form i}) and the coefficients $\lambda_{i, r}$ satisfy
the conditions stated above.
\end{proof}
\end{document}
\begin{document}
\mbox{}
\title{The Chain Group of a Forest}
\author{Felix Gotti}
\address{Mathematics Department\\UC Berkeley\\Berkeley, CA 94720}
\email{felixgotti@berkeley.edu}
\author{Marly Gotti}
\address{Mathematics Department\\University of Florida\\Gainesville, FL 32611}
\email{marlycormar@ufl.edu}
\date{\today}
\begin{abstract}
For every labeled forest $\ff$ with set of vertices $[n]$ we can consider the subgroup $G$ of the symmetric group $S_n$ that is generated by all the cycles determined by all maximal paths of $\ff$. We say that $G$ is the chain group of the forest $\ff$. In this paper we study the relation between a forest and its chain group. In particular, we find the chain groups of the members of several families of forests. Finally, we prove that no copy of the dihedral group of cardinality $2n$ inside $S_n$ can be achieved as the chain group of any forest.
\end{abstract}
\maketitle
\section{Introduction} \label{sec:intro}
It is typical in Mathematics to use intrinsic information of discrete objects such as graphs, trees, and finite posets, to carry out algebraic and geometric constructions. For instance, such constructions include the fundamental group of a graph \cite[Chapter~11]{aH01}, the incidence algebra of a finite poset \cite[Chapter~3]{rS11}, and the forest polytope of a graph \cite[Chapter~50]{aS03}. In this paper we use the maximal paths of a forest $\ff$ to construct a finite group, which we call the \emph{chain group} of $\ff$. The method we use to produce the chain group of a given forest is motivated in part by the way Stanley defines a chain polytope from a locally finite poset (see \cite{rS86}).
Given a finite poset $P = \{x_1, \dots, x_n\}$, its corresponding \emph{chain polytope} $\mathcal{C}(P)$ is defined to be the set of points $(y_1, \dots, y_n) \in \rr_{\ge 0}^n$ satisfying the condition
\begin{equation} \label{eq:chain polytope condition}
y_{i_1} + \dots + y_{i_k} \le 1 \ \text{ whenever } \ x_{i_1} <_P \dots <_P x_{i_k} \ \text{ is a maximal chain of } P.
\end{equation}
In other words, the chain polytope $\mathcal{C}(P)$ is the intersection of the half-spaces determined by the maximal chains of $P$ as indicated in \eqref{eq:chain polytope condition}.
Let us see how the same method that Stanley applies to build the chain polytope of a poset can be reused to naturally associate a finite group $\mathcal{G}(\ff)$ to each forest $\ff$. Instead of taking $\rr^n$ as the universe containing the half-spaces utilized in \eqref{eq:chain polytope condition} to produce $\mathcal{C}(P)$, we consider $S_n$ as the universe containing the generators of a group $\mathcal{G}(\ff)$, which is defined by
\[
\mathcal{G}(\ff) = \big\langle (i_1 \ \dots \ i_k) \in S_n \ : \ i_1, \dots, i_k \ \text{ is a maximal path in} \ \ff \big\rangle.
\]
We call $\mathcal{G}(\ff)$ the \emph{chain group} of the forest $\ff$.
Chain polytopes, as introduced in \cite{rS86} by Stanley, have nice features. For example, if $\mathcal{C}(P)$ is the chain polytope of the poset $P$, then the number of vertices of $\mathcal{C}(P)$ equals the number of antichains of $P$; see \cite[Theorem~2.2]{rS86}. In addition, the volume of $\mathcal{C}(P)$ is determined by the combinatorial structure of $P$; see \cite[Corollary~4.2]{rS86}. We will see in Section~\ref{sec:general observations} that the chain group of a forest has a nice behavior; for example, disjoint unions of forests become direct sums of groups (see Proposition~\ref{prop:chain group of disjoint graphs}). On the other hand, if we relabel a given forest $\ff$, the resulting forest has chain group conjugate to that of $\ff$ (see Proposition~\ref{prop:order groups of isomorphic and dual posets}).
There are a few natural questions we might ask about this assignment. How are the chain groups of two distinct labelings of the same forest related? If $G$ is the chain group of the forest $\ff$, can we determine whether $G$ satisfies certain properties only by studying $\ff$? It is our intention to answer such questions here.
In addition, we might wonder, for a fixed $n$, which subgroups of $S_n$ arise as the chain group of some forest labeled by $[n]$. Given that every finite group is a subgroup of $S_n$ for $n$ large enough, this is not a question that we expect to answer in its full generality. However, we might hope to decide whether relatively simple families of subgroups of $S_n$ can be realized as chain groups of some $n$-forest. For example, is the alternating group $A_n$ a chain group of an $n$-forest for every $n \in \nn$? This question, along with other similar ones, will be answered in the sequel.
This paper is structured as follows. In Section~\ref{sec:Background and Notation} we review the definitions from graph theory we will be using later. Then, in Section~\ref{sec:general observations}, we prove that passing from a forest to its chain group behaves well with respect to relabeling and turns disjoint unions into direct sums. In Section~\ref{sec:abelian case} we study the abelian chain groups. In Section~\ref{sec:the chain group of a tree} we compute the chain groups of members of several families of trees. We also provide some results useful for finding the chain groups of some forests. Finally, in Section~\ref{sec:missing chain groups}, we show that no copy of the dihedral group $D_{2n}$ inside $S_n$ can be achieved as the chain group of any forest.
\section{Background and Notation} \label{sec:Background and Notation}
In this section, we fix notation and briefly recall the definitions of the main objects related to those being studied here. We also state the relevant properties of such objects necessary to follow the present paper. For background material in group theory, symmetric groups, and graph theory we refer the reader to Rotman \cite{jR94}, Sagan \cite{bS01}, and Bondy and Murty \cite{BM08}, respectively.
The double-struck symbols $\mathbb{N}$ and $\mathbb{N}_0$ denote the sets of positive integers and non-negative integers, respectively. For $n \in \nn$, we denote the set $\{1,\dots, n\}$ just by $[n]$. Following the standard notation of group theory, we let $S_n$ and $A_n$ denote the symmetric and the alternating group on $n$ letters, respectively. In addition, the dihedral group of order $2n$ is denoted by $D_{2n}$.
To settle down our nomenclature, let us recall some basic definitions concerning graphs. A \emph{graph} is a pair $\mathsf{G} = (V,E)$, where $V$ is a finite set and $E$ is a collection of $2$-element subsets of $V$. The elements of $V$ are called \emph{vertices} of $\mathsf{G}$ while the elements of $E$ are called \emph{edges} of $\mathsf{G}$. It is often convenient to denote the set of vertices and the set of edges of $\mathsf{G}$ by $V(\mathsf{G})$ and $E(\mathsf{G})$, respectively. The \emph{degree} of a vertex $v$, denoted by $\deg(v)$, is the number of edges containing it. We say that a vertex is a \emph{leaf} if it has degree one. An edge $\{v,w\}$ is also denoted by $vw$. Distinct vertices $v$ and $w$ of $V$ are called \emph{adjacent} if $vw \in E$. In the context of this paper, a \emph{walk} $\omega$ in $\mathsf{G}$ is a sequence of vertices, say $v_0, \dots, v_\ell$ such that $v_{i-1}$ is adjacent to $v_i$ for each $i = 1, \dots, \ell$. If $v_\ell = v_0$, then the walk $\omega$ is said to be \emph{closed}. If, in addition, $v_i = v_j$ implies that $i = j$ or $\{i,j\} = \{0,\ell\}$ then $\omega$ is called a \emph{path}; in this case we say that the \emph{length} of $\omega$ is $\ell$. A path of $\mathsf{G}$ is \emph{maximal} if it is not strictly contained in another path. A closed path of length at least three is called a \emph{cycle}. A graph is said to be \emph{connected} if any two distinct vertices can be connected by a path. Every graph $G$ is the finite disjoint union of connected graphs, which are called \emph{connected components} of $G$. On the other hand, a graph is called \emph{acyclic} provided it does not contain any cycle.
\begin{definition}
An acyclic connected graph is called a \emph{tree}. A finite disjoint union of trees is said to be a \emph{forest}.
\end{definition}
\begin{example}
The next figure illustrates a graph $\mathsf{G}$ having four connected components. The leftmost component is a \emph{chain}, the second component is a cycle, the third component is a star, and the fourth component is a tree. Notice that $\mathsf{G}$ is not a forest.
\begin{figure}
\caption{A graph with four connected components.}
\label{fig:a graph with four connected components}
\end{figure}
\end{example}
A \emph{labeled forest} is a forest $\mathsf{F}$ whose vertices are labeled by the set $\{1,\dots, |V(\mathsf{F})|\}$. All the forests we will be interested in throughout this paper are labeled.
\section{General Observations} \label{sec:general observations}
In this section we formally define the chain group of a forest and explore some general facts connecting them. We also present some examples to illustrate the connection.
\begin{definition}
For $n \in \nn$ let $\ff$ be a labeled forest with $n$ vertices. The \emph{chain group} of $\ff$, which we denote by $G_{\ff}$, is the subgroup of $S_n$ generated by all cycles $(i_1 \ \dots \ i_m)$ such that $i_1, \dots, i_m$ is a maximal path in $\ff$.
\end{definition}
\begin{example}
Figure~\ref{fig:chain groups of three forests} shows three forests $\ff_1$, $\ff_2$, and $\ff_3$.
\begin{figure}
\caption{Three labeled forests with their respective chain groups.}
\label{fig:chain groups of three forests}
\end{figure}
The forest $\ff_1$ consists of only two disjoint maximal paths, namely $(1,2,3)$ and $(4,5)$; therefore $G_{\ff_1} = \langle (1 \ 2 \ 3), (4 \ 5) \rangle \cong \zz_3 \times \zz_2$ (cf. Proposition~\ref{prop:chain group of disjoint graphs} below).
On the other hand, $\mathsf{F}_2$ has exactly three maximal paths, which are $(1,2,3,4)$, $(1,2,3,5)$, and $(4,3,5)$. As $(1 \ 2 \ 3 \ 4) \circ (1 \ 2 \ 3 \ 5) = (1 \ 3 \ 5 \ 2 \ 4)$, the chain group $G_{\mathsf{F}_2}$ contains a $5$-cycle. Moreover, as $(3 \ 4 \ 5) = (4 \ 3 \ 5)^{-1} \in G_{\mathsf{F}_2}$ and $(3 \ 4 \ 5) \circ (1 \ 2 \ 3 \ 4) = (1 \ 2 \ 4) (3 \ 5)$, it follows that $G_{\mathsf{F}_2}$ contains the element $(1 \ 2 \ 4)(3 \ 5)$ and, therefore, its cube, the transposition $(3 \ 5)$. Hence $S_5 = \langle (1 \ 3 \ 5 \ 2 \ 4), (3 \ 5) \rangle \le G_{\mathsf{F}_2}$, and so $G_{\mathsf{F}_2} = S_5$.
Finally, $\ff_3$ has $\binom{5}{2} = 10$ maximal paths, one for each pair of distinct leaves. The chain group of $\ff_3$ is generated by the $3$-cycles $(a \ 1 \ b)$ for all $a,b \in \{2,3,4,5,6\}$ with $a \neq b$. These $3$-cycles are enough to generate the whole alternating group (see the proof of Theorem~\ref{thm:chain group of the star graph} for more details). Hence $G_{\mathsf{F}_3} = A_6$.
\end{example}
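The computations in the example above can also be verified mechanically. The following snippet is an illustration (not part of the proofs); it uses sympy with the labels of the figure shifted to be $0$-based.

```python
# Computational check of the three chain groups above (0-based labels).
# Caution: sympy composes permutations left to right, (p*q)(i) = q(p(i)),
# so the paper's sigma o tau (apply tau first) corresponds to tau*sigma here.
from sympy.combinatorics import Permutation, PermutationGroup
from sympy.combinatorics.named_groups import AlternatingGroup

# F_1: maximal paths (1,2,3) and (4,5)
G1 = PermutationGroup(Permutation(0, 1, 2, size=5), Permutation(3, 4, size=5))

# F_2: maximal paths (1,2,3,4), (1,2,3,5), and (4,3,5)
p1 = Permutation(0, 1, 2, 3, size=5)
p2 = Permutation(0, 1, 2, 4, size=5)
p3 = Permutation(3, 2, 4, size=5)
G2 = PermutationGroup(p1, p2, p3)
five_cycle = p2 * p1        # the paper's (1 2 3 4) o (1 2 3 5)

# F_3: star with center 1 and leaves 2,...,6; one maximal path per leaf pair
leaves = range(1, 6)
gens = [Permutation(a, 0, b, size=6) for a in leaves for b in leaves if a < b]
G3 = PermutationGroup(*gens)
```

Here `G1`, `G2`, and `G3` have orders $6$, $120$, and $360$, matching $\zz_3 \times \zz_2$, $S_5$, and $A_6$.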
Note that $S_n$ acts on the set of labeled forests having exactly $n$ vertices by relabeling their vertices. We show now that this action conjugates the chain groups.
\begin{proposition} \label{prop:order groups of isomorphic and dual posets}
If $\ff$ and $\ff'$ are forests with $n$ vertices that are relabelings of each other, then their chain groups are conjugate in $S_n$.
\end{proposition}
\begin{proof}
Let $G$ and $G'$ denote the chain groups of $\ff$ and $\ff'$, respectively. Let $\pi \colon \ff \to \ff'$ be a graph isomorphism. In particular, we can interpret $\pi$ as an element in $S_n$. Consider the map $\varphi \colon G \to G'$ defined by $\varphi(\sigma) = \pi \sigma \pi^{-1}$. For every maximal path $i_1, \dots, i_m$ in $\ff$, we have that $\pi(i_1), \dots, \pi(i_m)$ is a maximal path in $\ff'$ and, therefore,
\[
\varphi((i_1 \ \dots \ i_m)) = \pi (i_1 \ \dots \ i_m) \pi^{-1} = (\pi(i_1) \ \dots \ \pi(i_m))
\]
is one of the defining generators of $G'$. So the map $\varphi$ is well defined. It follows immediately that $\varphi$ is a group homomorphism. Now we can define $\psi \colon G' \to G$ by $\psi(\sigma) = \pi^{-1} \sigma \pi$, and similarly verify that it is a well-defined homomorphism of groups. Since $\varphi$ and $\psi$ are inverses of each other, $\varphi$ is an isomorphism.
\end{proof}
Proposition~\ref{prop:order groups of isomorphic and dual posets} gives us the freedom to talk about the chain group of a not-necessarily-labeled forest as long as we are interested not in the specific subgroup of the symmetric group we are dealing with but only in its isomorphism class.
Let us now verify that the chain group of a forest is the direct product of the chain groups of the trees composing the given forest.
\begin{proposition} \label{prop:chain group of disjoint graphs}
If $\ff$ is a forest which is the disjoint union of the trees $\sft_1, \dots, \sft_m$, then $G_{\ff} \cong G_{\sft_1} \times \dots \times G_{\sft_m}$.
\end{proposition}
\begin{proof}
Let $n = |V(\ff)|$. It suffices to consider the case $m=2$, as the general case follows by induction. Let $\sigma_1, \dots, \sigma_r$ be the generating cycles induced by the maximal paths of $\sft_1$, and let $\rho_1, \dots, \rho_s$ be the generating cycles induced by the maximal paths of $\sft_2$. As $\sigma_i$ and $\rho_j$ are disjoint cycles in $S_n$ for each pair $(i,j) \in [r] \times [s]$, we can write every element of $G_{\ff}$ as $\sigma \rho$ for some $\sigma \in G_{\sft_1}$ and $\rho \in G_{\sft_2}$. Now it immediately follows that the assignment $(\sigma, \rho) \mapsto \sigma \rho$ is, indeed, an isomorphism from $G_{\sft_1} \times G_{\sft_2}$ to $G_{\ff}$.
\end{proof}
\section{Abelian Chain Groups associated to $n$-Forests} \label{sec:abelian case}
In this section we characterize the forests whose chain groups are abelian. In addition, we determine those abelian groups that show up as chain groups of some forest.
\begin{example} \label{ex:the cyclic group of order n is always a chain group}
Let $\sft$ be an $n$-tree with at most two leaves. Then there is only one maximal path, namely $\sigma(1), \dots, \sigma(n)$ for some bijection $\sigma \colon [n] \to [n]$. Thus, the chain group associated to $\sft$ is $G = \langle (\sigma(1) \ \dots \ \sigma(n)) \rangle \cong \zz_n$.
\end{example}
More generally, we have the following result.
\begin{proposition}
Let $n$ be a natural number, and let $\ff$ be an $n$-forest. Then the associated chain group of $\ff$ is abelian if and only if $\ff$ is the disjoint union of paths.
\end{proposition}
\begin{proof}
Example~\ref{ex:the cyclic group of order n is always a chain group}, along with Proposition~\ref{prop:chain group of disjoint graphs} in the previous section, immediately implies that if $\ff$ is the disjoint union of $k$ chains of lengths $n_1, \dots, n_k$, then $G_{\ff} \cong \zz_{n_1} \times \dots \times \zz_{n_k}$. In particular, $G_{\ff}$ is abelian. To prove the direct implication, suppose by way of contradiction that $\ff$ is not a disjoint union of paths. Then there are two distinct maximal paths $i_1, \dots, i_r$ and $j_1, \dots, j_s$ that are not disjoint. Set $\sigma = (i_1 \ \dots \ i_r)$ and $\tau = (j_1 \ \dots \ j_s)$. Since the two paths are distinct and maximal, $\{i_1, \dots, i_r\} \not\subseteq \{j_1, \dots, j_s\}$, so there exists an index $p$ (indices taken cyclically) such that $i_p \in \{j_1, \dots, j_s\}$ and $i_{p+1} \notin \{j_1, \dots, j_s\}$; say $i_p = j_q$. Then $(\tau \circ \sigma)(i_p) = \tau(i_{p+1}) = i_{p+1}$, whereas $(\sigma \circ \tau)(i_p) = \sigma(j_{q+1})$. If $j_{q+1} \notin \{i_1, \dots, i_r\}$, then $\sigma(j_{q+1}) = j_{q+1} \neq i_{p+1}$; otherwise $j_{q+1} = i_t$ for some $t \neq p$, whence $\sigma(j_{q+1}) = i_{t+1} \neq i_{p+1}$. In either case $\sigma$ and $\tau$ do not commute, contradicting the fact that $G_{\ff}$ is abelian.
\end{proof}
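The dichotomy in the proposition can be spot-checked computationally; the snippet below (an illustration with hypothetical $0$-based labels, using sympy) contrasts a disjoint union of paths with the star $K_{1,3}$.

```python
# The chain group of a disjoint union of paths (here a 4-path plus a 2-path)
# is abelian, while that of the star K_{1,3}, which has a vertex of degree
# three, is not.  Labels are 0-based.
from sympy.combinatorics import Permutation, PermutationGroup

paths_forest = PermutationGroup(Permutation(0, 1, 2, 3, size=6),
                                Permutation(4, 5, size=6))
star = PermutationGroup(Permutation(1, 0, 2, size=4),
                        Permutation(1, 0, 3, size=4),
                        Permutation(2, 0, 3, size=4))
```

The first group is $\zz_4 \times \zz_2$ (order $8$, abelian); the second is $A_4$ (order $12$, nonabelian).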
For $n \in \nn$, we study which abelian groups are chain groups associated to $n$-forests.
\begin{proposition} \label{prop:elementary abelian groups that are chain groups}
The elementary abelian group $(\zz/p\zz)^r$, where $p$ is prime and $r$ is a natural number, is the chain group associated to an $n$-forest if and only if $rp \le n$.
\end{proposition}
\begin{proof}
For the direct implication, suppose that $(\zz/p\zz)^r$ is the chain group of an $n$-forest. This implies that $S_n$ contains a copy $G$ of $(\zz/p\zz)^r$. Since every nontrivial element of $G$ has order $p$, its disjoint-cycle decomposition consists of $p$-cycles; let $S$ be the set of all $p$-cycles arising in this way. Take a maximal subset $\{\sigma_1, \dots, \sigma_s\}$ of $S$ such that no element is a power of another one. As $G$ is abelian, the $\sigma_i$'s are pairwise disjoint. In addition, $G' = \langle \sigma_1, \dots, \sigma_s \rangle$ is isomorphic to $(\zz/p\zz)^s$ and contains $G$, which yields that $r \le s$. As the $s$ $p$-cycles are disjoint, one finds that $rp \le sp \le n$.
Suppose, on the other hand, that $rp \le n$. Consider the forest $\ff$ having $r + n - rp$ connected components, $r$ of them being path graphs on $p$ vertices and $n - rp$ of them being $1$-vertex trees. The chain group $G$ of $\ff$ is generated then by $r$ disjoint $p$-cycles. Hence $G$ is a subgroup of $S_n$ isomorphic to the elementary abelian group $(\zz/p\zz)^r$, which completes the proof.
\end{proof}
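The forest built in the second half of the proof can be checked directly for a small instance; the sketch below (using sympy, with $0$-based labels) takes $p = 3$, $r = 2$, and $n = 7$.

```python
# The forest from the proof for p = 3, r = 2, n = 7: two 3-vertex paths on
# {0,1,2} and {3,4,5}, plus the isolated vertex 6.  Its chain group is
# generated by two disjoint 3-cycles, hence isomorphic to (Z/3Z)^2.
from sympy.combinatorics import Permutation, PermutationGroup

p, r, n = 3, 2, 7
gens = [Permutation([list(range(i * p, (i + 1) * p))], size=n)
        for i in range(r)]
G = PermutationGroup(*gens)
```

As expected, `G` is abelian of order $p^r = 9$.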
Not every abelian subgroup of $S_n$ arises as the chain group of an $n$-forest. However, the abelian subgroups of maximal order are always achieved, up to isomorphism, as we shall prove in Theorem~\ref{thm:abelian group of maximum order that are chain groups}. The following theorem describes the abelian subgroups of $S_n$ of maximum order.
\begin{theorem}\cite[Theorem 1]{BG89} \label{thm:abelian subgroups of maximum order}
Let $G$ be an abelian subgroup of maximal order of the symmetric
group $S_n$. Then
\begin{enumerate}
\item $G \cong (\zz/3\zz)^k$ if $n = 3k$;
\item $G \cong \zz/2\zz \times (\zz/3\zz)^k$ if $n = 3k+2$;
\item either $G \cong \zz/4\zz \times (\zz/3\zz)^{k-1}$ or $G \cong (\zz/2\zz)^2 \times (\zz/3\zz)^{k-1}$ if $n = 3k+1$.
\end{enumerate}
\end{theorem}
We can use this theorem to argue the following result.
\begin{theorem} \label{thm:abelian group of maximum order that are chain groups}
Every abelian subgroup of $S_n$ of maximum order is isomorphic to the chain group of an $n$-forest.
\end{theorem}
\begin{proof}
Suppose first that $n=3k$. Theorem~\ref{thm:abelian subgroups of maximum order} guarantees that any maximum order abelian subgroup $G$ of $S_n$ is a copy of $(\zz/3\zz)^k$. It follows from Proposition~\ref{prop:elementary abelian groups that are chain groups} that $G$ is the chain group of an $n$-forest.
Assume now that $n=3k+2$. Consider the $n$-forest $\ff$ consisting of the following $k+1$ connected components: $k$ three-vertex paths and one two-vertex path. The chain group associated to $\ff$ is isomorphic to $\zz/2\zz \times (\zz/3\zz)^k$, which is a maximum order abelian subgroup of $S_n$ by Theorem~\ref{thm:abelian subgroups of maximum order}.
Lastly, assume that $n = 3k+1$. Consider the $n$-forest $\ff_1$ whose connected components are $k-1$ three-vertex paths and one four-vertex path, and the $n$-forest $\ff_2$ whose connected components are $k-1$ three-vertex paths and two two-vertex paths. Notice that the chain groups of $\ff_1$ and $\ff_2$ are isomorphic to $\zz/4\zz \times (\zz/3\zz)^{k-1}$ and $(\zz/2\zz)^2 \times (\zz/3\zz)^{k-1}$, respectively. As before, both groups have maximum order by Theorem~\ref{thm:abelian subgroups of maximum order}, and the result follows.
\end{proof}
\section{Chain Groups of some Trees} \label{sec:the chain group of a tree}
In this section, we will only consider trees. The simplest family of trees consists of \emph{chains} (i.e., trees containing exactly one maximal path), and chain groups of chains are cyclic. Another very simple family consists of the trees in which every vertex except one has degree $1$, namely the star graphs (see the forest $\ff_3$ in Figure~\ref{fig:chain groups of three forests}). It turns out that the chain group of a star graph with $n$ vertices is always $A_n$, as the proof of the next theorem shows.
\begin{theorem} \label{thm:chain group of the star graph}
For every $n \in \nn_{\ge 3}$, there exists a labeled tree with $n$ vertices whose chain group is $A_n$.
\end{theorem}
\begin{proof}
When $n = 3$, the alternating group $A_n$ is isomorphic to $\zz_3$, and it is enough to take $\sft$ to be the only tree on $3$ vertices. Assume that $n \ge 4$. From the fact that every $3$-cycle $(i \ j \ k)$ in $S_n$ not containing $1$ satisfies $(i \ j \ k) = (1 \ i \ j)(1 \ j \ k)$ and the fact that every $3$-cycle $(1 \ i \ j)$ in $S_n$ not containing $2$ satisfies $(1 \ i \ j) = (1 \ 2 \ j)^2(1 \ 2 \ i)(1 \ 2 \ j)$, we can immediately deduce that $A_n$ is generated by the $3$-cycles of the form $(1 \ 2 \ i)$ for $i \in [n] \setminus \{1,2\}$. Now we just need to take $\mathsf{T}$ to be the star graph $K_{1,n-1}$ with its center labeled $1$ to have that $G_{\sft} = A_n$ (see, for an illustration, the forest $\ff_3$ in Figure~\ref{fig:chain groups of three forests}).
\end{proof}
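As a small computational illustration of the proof (not part of it; sympy, $0$-based labels with the center labeled $0$), one can verify that the star's generators are all even and generate the full alternating group for $n = 5$.

```python
# Chain group of the star K_{1,n-1} with center 0 for n = 5: the generators
# are the 3-cycles through the center, which are even and generate A_5.
from sympy.combinatorics import Permutation, PermutationGroup
from sympy.combinatorics.named_groups import AlternatingGroup

n = 5
leaves = range(1, n)
gens = [Permutation(a, 0, b, size=n) for a in leaves for b in leaves if a < b]
G = PermutationGroup(*gens)
```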
Now we turn to finding a large family of trees $\sft$ whose chain group is $S_n$, where $n = |V(\sft)|$. First, let us introduce the following definition.
\begin{definition}
We say that a tree is an \emph{antenna} if it has exactly one vertex of degree three and exactly one maximal path of length two.
\end{definition}
It is not hard to verify that if a tree is an antenna, then it must look like $\ff_2$ in Figure~\ref{fig:chain groups of three forests} with, perhaps, a longer vertical path. In particular, an antenna has exactly three maximal paths.
\begin{proposition} \label{prop:chain group of an antenna}
Let $\sft$ be a labeled antenna with an odd number $n$ of vertices. Then the chain group of $\sft$ is $S_n$.
\end{proposition}
\begin{proof}
By Proposition~\ref{prop:order groups of isomorphic and dual posets}, we can relabel $\sft$ if necessary so that its labeling looks like the one in Figure~\ref{fig:horizontal antenna}. Let $G_{\sft}$ be the chain group of $\sft$.
\begin{figure}
\caption{A labeled antenna.}
\label{fig:horizontal antenna}
\end{figure}
The maximal paths of $\sft$ are $1,3,2$, with corresponding generator $\sigma = (1 \ 3 \ 2)$; the path $1,3,4, \dots, n$, with corresponding generator $\sigma_1 = (1 \ 3 \ 4 \ \dots \ n)$; and the path $2,3, \dots, n$, with corresponding generator $\sigma_2 = (2 \ 3 \ \dots \ n)$. Since $n$ is odd, $\sigma_1 \circ \sigma_2$ is a cycle of length $n$. In addition, the disjoint cycle decomposition of $\sigma \circ \sigma_2^{-1}$ consists exactly of the transposition $(1 \ 3)$ and a cycle of length $n-2$; as $n-2$ is odd, $(\sigma \circ \sigma_2^{-1})^{n-2} = (1 \ 3)$ is a transposition. As $G_{\sft}$ contains a full cycle and a transposition whose two entries are consecutive in that cycle, it must be $S_n$.
\end{proof}
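The role of the parity of $n$ in this argument can be observed computationally; the sketch below (an illustration using sympy, with the proof's labeling shifted to be $0$-based) compares the antennas on $5$ and $6$ vertices.

```python
# Chain group of the antenna on n vertices, labeled as in the proof:
# maximal paths 1,3,2 and 1,3,4,...,n and 2,3,...,n (here 0-based).
from sympy.combinatorics import Permutation, PermutationGroup

def antenna_chain_group(n):
    s  = Permutation(0, 2, 1, size=n)
    s1 = Permutation([[0, 2] + list(range(3, n))], size=n)
    s2 = Permutation([list(range(1, n))], size=n)
    return PermutationGroup(s, s1, s2)

G5 = antenna_chain_group(5)   # n odd: the proposition predicts S_5
G6 = antenna_chain_group(6)   # n even: all generators are even permutations
```

For $n = 6$ every generator is even, so the chain group sits inside $A_6$ and, in particular, is a proper subgroup of $S_6$.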
Proposition~\ref{prop:chain group of an antenna} says in particular that for every odd $n \ge 5$ there is a tree whose chain group is $S_n$. In addition, we can use this proposition to find the chain groups of more complex forests. Before explaining how to do this, let us introduce the following definition.
\begin{definition}
Let $G$ be a graph, and let $G'$ be a subgraph of $G$. We say that $G'$ is an \emph{extended subgraph} of $G$ if every leaf of $G'$ is also a leaf of $G$.
\end{definition}
\emph{Extended subtrees} and \emph{extended subforests} are defined in a similar fashion. Notice that a connected component of a graph is always an extended subgraph. The next figure depicts a tree and two of its subgraphs (which happen to be forests), only one of which is extended.
\begin{figure}
\caption{A tree and two of its subforests; only the subforest in the center is extended.}
\label{fig:extended subforest}
\end{figure}
It follows immediately that if $\ff$ is a forest and $\ff'$ is an extended subforest of $\ff$, then the chain group of $\ff'$ is a subgroup of the chain group of $\ff$: every maximal path of $\ff'$ has both endpoints at leaves of $\ff$ and is, therefore, a maximal path of $\ff$. Using this observation and Proposition~\ref{prop:chain group of an antenna}, it is not hard to argue the following result.
\begin{proposition} \label{prop:trees with full chain group}
Let $\sft$ be a tree with $n$ vertices and a maximal path $\alpha$ of length two. If the distance from either of the two leaves in $\alpha$ to any leaf that is not in $\alpha$ is odd, then the chain group of $\sft$ is $S_n$.
\end{proposition}
\begin{proof}
Left to the reader.
\end{proof}
Proposition~\ref{prop:trees with full chain group}, along with Proposition~\ref{prop:chain group of disjoint graphs}, allows us to easily determine the chain groups of relatively complex forests. For example, the chain group of the forest illustrated in Figure~\ref{fig:final forest} is $S_{12} \times S_{12} \times S_{17}$.
\begin{figure}
\caption{A forest with 41 vertices.}
\label{fig:final forest}
\end{figure}
We close this section by providing a sufficient condition for the chain group of certain antenna-like trees to contain a full cycle.
\begin{proposition}
Let $\sft$ be a labeled tree with $n$ vertices having exactly one vertex $v$ of degree three and all of its other vertices of degree at most two. If $\mathsf{d}(v,w)$ is even for some leaf $w$, then $G_{\sft}$ contains a cycle of length $n$.
\end{proposition}
\begin{proof}
Note first that $\sft$ has exactly three leaves, say $w_1, w_2,$ and $w_3$. Suppose, without loss of generality, that $\mathsf{d}(v,w_1)$ is even, say $\mathsf{d}(v,w_1) = 2t$. Let $w_1 = v_1, v_2, \dots, v_{2t+1} = v$ be the path from $w_1$ to $v$. Also, let $v = v_{2t+1}, v_{2t+2}, \dots, v_r = w_2$ be the path from $v$ to $w_2$, and let $v = v_{2t+1}, v'_1, \dots, v'_s = w_3$ be the path from $v$ to $w_3$. Then notice that
\[
(v_1 \ \dots \ v_r) \circ (v_1 \ \dots \ v_{2t+1} \ v'_1 \ \dots \ v'_s) = (v_1 \ v_3 \ \dots \ v_{2t+1} \ v'_1 \ \dots \ v'_s \ v_2 \ v_4 \ \dots \ v_{2t} \ v_{2t+2} \ \dots \ v_r)
\]
is a cycle of length $n$.
\end{proof}
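The displayed product can be sanity-checked for a small instance; the snippet below (an illustration using sympy) takes $t = 1$, $r = 5$, $s = 2$, so $n = 7$, with $v_1, \dots, v_5$ labeled $0, \dots, 4$ and $v'_1, v'_2$ labeled $5, 6$.

```python
# Check of the displayed product for t = 1, r = 5, s = 2 (n = 7).
# sympy composes left to right, so the paper's alpha o beta (apply beta
# first) corresponds to beta*alpha here.
from sympy.combinatorics import Permutation

alpha = Permutation(0, 1, 2, 3, 4, size=7)   # (v_1 ... v_r)
beta = Permutation(0, 1, 2, 5, 6, size=7)    # (v_1 ... v_{2t+1} v'_1 v'_2)
prod = beta * alpha
```

The product is the $7$-cycle $(v_1 \ v_3 \ v'_1 \ v'_2 \ v_2 \ v_4 \ v_5)$, as predicted.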
\section{The Dihedral is Missing} \label{sec:missing chain groups}
The symmetric group $S_n$ contains many copies of the dihedral group $D_{2n}$. However, none of these copies is the chain group of any labeled forest with $n$ vertices.
\begin{lemma} \label{lem:two vertices greater than two}
Let $\sft$ be a tree having at least two vertices of degree strictly greater than $2$. Then $\sft$ contains a maximal path $C$ such that $|\mathsf{T} \! \setminus \! C| \ge 3$.
\end{lemma}
\begin{proof}
Let $v$ and $w$ be two distinct vertices of $\sft$ with degree strictly greater than $2$. Let $\rho$ be the unique path in $\sft$ from $v$ to $w$. As $\deg(v) \ge 3$, there exist two paths $\nu_1$ and $\nu_2$, maximal among those starting at $v$, such that $\nu_1 \cap \nu_2 = \nu_1 \cap \rho = \nu_2 \cap \rho = \{v\}$. Similarly, there are two paths $\omega_1$ and $\omega_2$, maximal among those starting at $w$, satisfying $\omega_1 \cap \omega_2 = \omega_1 \cap \rho = \omega_2 \cap \rho = \{w\}$. Let $v_1,v_2,w_1,w_2$ be the leaves contained in $\nu_1, \nu_2, \omega_1, \omega_2$, respectively. Now take $C$ to be the unique path from $v_1$ to $v_2$; it is maximal because its endpoints are leaves. Because $C$ does not contain any vertex in $\{w,w_1,w_2\}$, the lemma follows.
\end{proof}
\begin{theorem}
For every $n \in \nn$, the dihedral group $D_{2n}$ is not a chain group of any labeled forest with $n$ vertices.
\end{theorem}
\begin{proof}
The dihedral group $D_2 \cong \zz_2$ cannot be the chain group of the $1$-forest because the chain group of the latter is trivial. In addition, the possible chain groups of a $2$-forest are isomorphic to either the trivial group or $\zz_2$, while $D_4 \cong V_4$ has order four; therefore the theorem is also true in the case $n=2$. Let $n \ge 3$ and assume, by way of contradiction, that $\ff$ is an $n$-forest whose chain group is the dihedral group $D_{2n}$.
First, let us consider the case in which $\ff$ is disconnected. Since no point of $[n]$ is fixed by every element of $D_{2n}$, the forest $\ff$ cannot have trivial connected components (i.e., isolated vertices). If $\ff$ had a connected component $C$ with at least three vertices, then any element of $D_{2n}$ associated to a maximal path of a component $C' \neq C$ would fix at least three elements of $[n]$, namely the vertices of $C$, which is impossible because every nontrivial element of $D_{2n}$ fixes at most two elements of $[n]$. Therefore every connected component of $\ff$ contains exactly two vertices. But then $\ff$ is a disjoint union of paths, so its chain group is abelian, contradicting the fact that $D_{2n}$ is not abelian for $n \ge 3$. Hence $\ff$ cannot be disconnected.
Now let $\ff$ be a tree with $n$ vertices whose associated chain group is $D_{2n}$. Since $D_{2n}$ is not abelian, $\ff$ is not a path graph. If $\ff$ contains two vertices of degree strictly greater than $2$, then Lemma~\ref{lem:two vertices greater than two} guarantees the existence of a maximal path $C$ such that $\ff \! \setminus \! C$ contains at least three vertices. Thus, the generator of $D_{2n}$ associated to the path $C$ would fix at least three elements of $[n]$, which is impossible. Hence $\ff$ must contain exactly one vertex $v$ such that $\deg(v) > 2$.
Suppose first that $\deg(v) \ge 4$. If $\deg(v) \ge 5$, then it is not hard to see that for every maximal path $C$ of $\ff$ containing $v$ one has $|\ff \setminus C| \ge 3$, which would imply that the generator of $D_{2n}$ associated to $C$ fixes at least $3$ elements of $[n]$. Thus, assume that $\deg(v) = 4$. If $|\ff| > 5$, then it follows as before that there are at least $3$ vertices in the complement of any maximal path of $\ff$ having minimum size among those containing $v$. On the other hand, $|\ff| = 5$ implies that $\ff$ is isomorphic to $K_{1,4}$. By Theorem~\ref{thm:chain group of the star graph}, the chain group of $K_{1,4}$ is $A_5$, which is not isomorphic to $D_{10}$ (for instance, $|A_5| > |D_{10}|$), a contradiction.
Finally, suppose that $\deg(v) = 3$. To argue this case, let $C$ be a maximal path of $\ff$ with maximum cardinality among those containing $v$, and let $\rho$ be the generator of the copy of $D_{2n}$ in $S_n$ induced by $C$. The element $\rho$ is a cycle of length $|C| < n$, so its order is $|C|$; moreover, the maximality of $|C|$ implies that $|C| > n/2$. Since every element of $D_{2n}$ has order either $2$ or a divisor of $n$, and no integer strictly between $n/2$ and $n$ divides $n$, the element $\rho$ cannot belong to $D_{2n}$, a contradiction. The theorem now follows.
\end{proof}
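For small $n$ the theorem can be confirmed by brute force; the snippet below (an illustration using sympy, labels $0$-based) lists the chain groups of all forests on four vertices by hand and checks that none has the order of $D_8$.

```python
# All forests on 4 vertices, with the generators of their chain groups.
from sympy.combinatorics import Permutation, PermutationGroup
from sympy.combinatorics.named_groups import DihedralGroup

chain_groups = {
    "path 0-1-2-3":        [Permutation(0, 1, 2, 3, size=4)],
    "star K_{1,3}":        [Permutation(1, 0, 2, size=4),
                            Permutation(1, 0, 3, size=4),
                            Permutation(2, 0, 3, size=4)],
    "path 0-1-2 + vertex": [Permutation(0, 1, 2, size=4)],
    "edge + edge":         [Permutation(0, 1, size=4),
                            Permutation(2, 3, size=4)],
    "edge + 2 vertices":   [Permutation(0, 1, size=4)],
    "4 isolated vertices": [Permutation(3)],   # identity on {0,1,2,3}
}
orders = {name: PermutationGroup(*gens).order()
          for name, gens in chain_groups.items()}
```

The resulting orders are $4, 12, 3, 4, 2, 1$; in particular $8 = |D_8|$ never occurs.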
\end{document} |
\begin{document}
\title[Classification of spin and multipolar squeezing]{Classification of spin and multipolar squeezing}
\author{Emi Yukawa and Kae Nemoto}
\address{ National Institute of Informatics, 2-1-2, Hitotsubashi, Chiyoda-ku, Tokyo 101-8430, Japan} \ead{yukawa@nii.ac.jp}
\begin{indented} \item[]May 2015 \end{indented}
\begin{abstract} We investigate various types of squeezing in a collective su($2J+1$) system consisting of spin-$J$ particles ($J>1/2$). We show that the squeezing in the collective su($2J+1$) system can be classified into unitary equivalence classes, each of which is characterized by a set of squeezed and anti-squeezed observables forming an su($2$) subalgebra in the su($2J+1$) algebra. The dimensionality of the unitary equivalence class is fundamentally related to its squeezing limit. We also demonstrate the classification of the squeezing among the spin and multipolar observables in a collective su($4$) system. \end{abstract}
\pacs{03.65.Fd, 05.30.Ch, 42.50.Lc}
\noindent{\it Keywords}: spin squeezing, collective spin systems, su($N$) algebra
\submitto{\JPA}
\section{Introduction} Many quantum information protocols involve nonclassical states to achieve their quantum advantages. For instance, quantum high precision measurements achieve sensitivities beyond the standard quantum limit by utilizing nonclassical states. The standard quantum limit is given by a coherent state, which satisfies the minimum uncertainty relation where quantum fluctuations are equally shared by any two quadrature amplitudes. One way to break this limit is to squeeze a coherent state~\cite{Yuen}. A squeezed state exhibits quantum fluctuations below the standard quantum limit in one quadrature at the sacrifice of larger quantum fluctuations in the other, which is directly applicable to high precision measurements. To apply squeezed states to high precision measurements, it is important that squeezing can be achieved relatively easily. Fortunately, squeezing can be achieved via a quadratic Hamiltonian, and hence it does not require a higher-order optical nonlinearity such as the Kerr effect~\cite{Slusher,Wu}. Both squeezing and its quantum advantages in high precision measurements have been demonstrated in experiments~\cite{Aasi, Taylor}.
The idea of squeezing has been extended to spin systems~\cite{Kitagawa,Wineland1,Wineland2}. An ensemble of spins can be considered as a collective spin when it satisfies the symmetry under particle permutations. An ensemble of spin-$1/2$ systems can be treated as an su($2$) system whose representation dimension depends on the number of spins in the ensemble. The coherent states of the su($2$) system can be defined as the orbit of the SU($2$) group action on a reference state~\cite{Arecchi}. Usually, we take the lowest weight state as the reference state, analogous to the vacuum state in the optical coherent states. Squeezing can then be introduced on the coherent state of the collective su($2$) system. The spin squeezed states show quantum fluctuations below the standard quantum limit in one degree of freedom, similar to the optical squeezed states. In the spin-$1$ case~\cite{Mustecaplioglu,Yukawa,Colangelo}, the Hilbert space of a spin-$1$ particle can be spanned by three orthonormal states, and we can consider eight independent observables on this Hilbert space. They correspond to the eight generators of the su($3$) algebra, and hence the collective system inherits the su($3$) structure. The dimensionality of the su($3$) collective system can be determined by the number of the spin-$1$ particles. Similarly, if the ensemble consists of spin-$J$ systems, the collective spin can be treated as an su($2J+1$) system, whose dimension is determined by the number of particles in the ensemble. This extension is relevant to current experiments of squeezing on spin ensembles; for instance, squeezing has been observed in a spin-$7/2$ atomic gas~\cite{Auccaise} and in spin-$1$ Bose-Einstein condensates~\cite{Hald,Sewell,Hamley}. In view of current and near future experimental developments, it is important to characterize the rich structure of squeezing in the collective su($2J+1$) systems and to systematically classify it based on unitary equivalence classes.
Among the collective su($2J+1$) systems, the su($2$) collective system is simple enough that squeezing can be understood in comparison with optical squeezing. The representation space based on the SU($2$) coherent states is a sphere, i.e. the Bloch sphere. Squeezing can be tracked on this two-dimensional space. Though the Bloch sphere is compact, its dimensionality is the same as that of the phase space based on the optical coherent states, so there are similarities between the SU($2$) squeezed states and the optical squeezed states. When we extend the former to an ensemble of spin-$J$ systems ($J>1/2$), the structure of squeezing is no longer so simple. As the collective su($2J+1$) system has $(2J+1)^2-1$ independent observables, there are a number of possible realizations of squeezed states. In the case of $J=1$, there are eight independent observables, which can be represented by three spin-vector components and five quadrupolar-tensor components, and squeezing can be implemented in terms of an su($2$) subalgebra among these eight observables. Then, squeezing can be classified into two classes with different squeezing limits.
In this paper, we generalize the classification to collective su($2J+1$) systems, where the squeezing can be characterized by $(2J+1)^2-1$ linearly independent observables. Following the classification in Ref.~\cite{Yukawa}, we classify squeezing based on the unitary equivalence classes, whose definition is given in Sec. 2.2. We also derive the structure constant of the su($2$) generators to characterize each class and obtain the squeezing limits via the one-axis twisting interaction.
This paper is organized as follows. In Sec. II, we generalize the classification to the collective su($2J+1$) systems to show that the squeezing can be classified into the unitary equivalence classes of ($2J+1$)-dimensional representations of the su($2$) subalgebras. In Sec. III, we derive quantum fluctuations for squeezed states of collective su($2J+1$) systems with one-axis twisting and show their squeezing limits. In Sec. IV, we apply our classification to a collective su($4$) system to illustrate the unitary equivalence classes of the squeezing and their squeezing limits, and we summarize the main results in Sec. V. Throughout this paper, a scalar, a vector, and a matrix are respectively represented by a normal letter, a bold letter, and a normal letter with a tilde, as in $A$, $\bi{A}$, and $\tilde{A}$. Operators are denoted by letters with a caret, as in $\hat{A}$.
\section{Classification of squeezing in collective su(2J+1) systems} \subsection{Observables of the su(2J+1) systems} Let us identify the linearly independent observables whose quantum fluctuations can be controlled via squeezing. Suppose there is a collective su($2J+1$) system consisting of $N$ spin-$J$ particles. The particles can be fermions as well as bosons when the spatial degrees of freedom of each fermion are frozen and the spin degrees of freedom are separable from the spatial degrees of freedom as in ultracold fermions trapped in an optical lattice~\cite{Martin} or magnetic impurities in a crystal~\cite{Tyryshkin}. We consider a squeezed spin state (SSS) which is generated from a coherent spin state (CSS) via a nonlinear interaction such as the one-axis twisting or the two-axis counter twisting~\cite{Kitagawa}.
In a CSS, all particles are in the same single-spin state~\cite{Kitagawa}. A single-spin state can be expanded in terms of the rank-$d$ multipoles ($d\in \mathbb{N}$) and it can be described by the spherical harmonics of degree $d$. In the case of a spin-$1/2$ particle, the three components of the dipole, i.e., the spin vector, are linearly independent and generate the su($2$) algebra. In the case of a spin-$J$ particle, the $2d+1$ components of the rank-$d$ multipoles ($1 \leq d \leq 2J$) are linearly independent of each other, while the multipoles of the rank higher than $2J$ can be expressed in terms of the lower-rank multipoles and the identity. Thus, the spin and multipoles up to the rank of $2J$, which are comprised of $(2J+1)^2-1=4J(J+1)$ observables in total, completely characterize a single spin-$J$ state; hence they can be chosen as the generators of the su($2J+1$) algebra. We define the second-quantized forms of the single spin and multipolar observables as \begin{equation}
{\hat{\lambda}}_{{\bi n}_j;J,k} = \sum_{m,n=1}^{2J+1} ({\tilde{\lambda}}_{J,k})_{mn} {\hat{c}}^{\dagger}_{{\bi n}_j;J,m} {\hat{c}}_{{\bi n}_j;J,n},
\label{eq:single-gen} \end{equation} where $({\tilde{\lambda}}_{J,k})_{mn}$ represents the $mn$-entry of the $k$-th spin or multipolar matrix ${\tilde{\lambda}}_{J,k}$ of a single spin-$J$ particle, and ${\hat{c}}_{{\bi n}_j;J,m}$ ($ {\hat{c}}_{{\bi n}_j;J,m}^{\dagger}$) denotes the spin-$J$ bosonic or fermionic annihilation (creation) operator of the spatial mode ${\bi n}_j$ and the magnetic sublevel $m_z=J+1-m$. Here, we define ${\hat{\lambda}}_{{\bi n}_j;J,k}$ in Eq.~(\ref{eq:single-gen}) so that the first three observables are given by the Cartesian components of the spin vector, the next five are given by the five independent components of the quadrupolar tensor, the next seven are the seven independent components of the octupolar tensor~\cite{Shiina}, and so on. We also note that the matrices ${\tilde{\lambda}}_{J,k}$ are normalized so that their trace norms satisfy \begin{equation}
||{\tilde{\lambda}}_{J,k}||_{\mathrm{trace}}^2 = \sum_{m_z=-J}^J m_z^2 = \frac{1}{3} J(J+1)(2J+1). \label{eq:norm} \end{equation}
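The normalization in Eq.~(\ref{eq:norm}) is precisely the one satisfied by the standard spin matrices, and it can be verified numerically. The sketch below (an illustration, not from the paper; numpy, $\hbar = 1$, here for $J = 3/2$) builds the spin matrices from the ladder operator and checks both the trace norm and the su($2$) commutation relation.

```python
import numpy as np

def spin_matrices(J):
    """Standard spin matrices S_x, S_y, S_z for a single spin J."""
    dim = int(round(2 * J + 1))
    m = J - np.arange(dim)                 # m_z = J, J-1, ..., -J
    Sp = np.zeros((dim, dim))              # raising operator S_+
    for k in range(1, dim):
        Sp[k - 1, k] = np.sqrt(J * (J + 1) - m[k] * (m[k] + 1))
    Sx = (Sp + Sp.T) / 2
    Sy = (Sp - Sp.T) / 2j
    Sz = np.diag(m)
    return Sx, Sy, Sz

J = 1.5
Sx, Sy, Sz = spin_matrices(J)
target = J * (J + 1) * (2 * J + 1) / 3     # right-hand side of the norm condition
```

For $J = 3/2$ the common trace norm squared is $5$, in agreement with $J(J+1)(2J+1)/3$.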
A CSS can be completely described by the collective observables of the single spin and multipolar observables given in Eq.~(\ref{eq:single-gen}). Squeezing can redistribute quantum fluctuations in these collective observables. The second-quantized forms of the collective observables ${\hat{\Lambda}}_{J,k}$ can be expressed as \begin{equation}
{\hat{\Lambda}}_{J,k} = \sum_{j=1}^N {\hat{\lambda}}_{{\bi n}_j;J,k}. \label{eq:coll-gen} \end{equation} The observables ${\hat{\Lambda}}_{J,k}$ in Eq.~(\ref{eq:coll-gen}) satisfy the same commutation relations as ${\tilde{\lambda}}_{J,k}$ in Eq.~(\ref{eq:single-gen}). This implies that they also generate the su($2J+1$) algebra and the matrices $\{ {\tilde{\lambda}}_{J,k} \}$ can be regarded as the irreducible representation
of $\{ {\hat{\Lambda}}_{J,k} \}$ in the basis of $\{ | J, m_z \rangle \}$, which represents the basis of the single-spin magnetic sublevels with respect to the quantization axis along the $z$ axis. Thus, a collective observable ${\hat{O}}_J$ of the collective su($2J+1$) system can be expressed by a $(2J+1)$-dimensional matrix representation ${\tilde{O}}_J$
in the representation space of $V(\{ |J, m_z \rangle \})$ as follows: \begin{equation}
{\tilde{O}}_J = \sum_{k=1}^{4J(J+1)} v_{J,k} {\tilde{\lambda}}_{J,k}, \label{eq:coll-obs} \end{equation} where the real coefficients $v_{J,k}$ satisfy $\sum_{k=1}^{4J(J+1)} v_{J,k}^2 = 1$.
\subsection{Classification based on unitary equivalence classes} We consider squeezing among three observables $\{ {\hat{O}}_{J,k} \}$ ($k=1,2,3$) of the collective su($2J+1$) system, which form an su($2$) subalgebra of the su($2J+1$) algebra and satisfy the commutation relations given by \begin{equation}
[{\hat{O}}_{J,3}, {\hat{O}}_{J,\pm}] = \pm f {\hat{O}}_{J,\pm}, \label{eq:sub-su2} \end{equation} where ${\hat{O}}_{J,\pm}\equiv {\hat{O}}_{J,1} \pm i {\hat{O}}_{J,2}$ and $f>0$ represents the magnitude of the structure constant. Note that $f$ in Eq.~(\ref{eq:sub-su2}) does not always equal $1$, since $\pm f$ play the role of structure constants inherited from the su($2J+1$) algebra.
The squeezing among an su($2$) subalgebra $\{ {\hat{O}}_{J,k} \}$ can be classified based on the unitary equivalence class. The unitary equivalence class of the squeezing among $\{ {\hat{O}}_{J,k} \}$ can be determined by the $(2J+1)$-dimensional matrix representation of
$\{ {\hat{O}}_{J,k} \}$ in the space of $V(\{ |J,m_z \rangle \})$ spanned by the basis $\{ |J,m_z \rangle \}$. The unitary equivalence class is defined as follows: Suppose $\{ {\tilde{X}}_k \}$ and $\{ {\tilde{X}}^{\prime}_k \}$ are $n$-dimensional matrix representations of a semi-simple Lie algebra. Then, the representations $\{ {\tilde{X}}_k \}$ and $\{ {\tilde{X}}^{\prime}_k \}$ belong to the same unitary equivalence class if there exists an SU($n$) transformation matrix $\tilde{U}$ such that $\tilde{U} {\tilde{X}}_k {\tilde{U}}^{\dagger} = {\tilde{X}}^{\prime}_k$ for all $k$.
In our case, $\{ {\tilde{O}}_{J,k} \}$ is the $(2J+1)$-dimensional matrix representation of the generators of the su($2$) algebra, which is semi-simple; hence $\{ {\tilde{O}}_{J,k} \}$ is completely reducible.
The matrix representation $\{ {\tilde{O}}_{J,k} \}$ and its representation space $V(\{ |J,m_z \rangle \})$ can be decomposed into the direct sum of the lower dimensional irreducible representations of the su($2$) generators and their representation spaces, respectively. Suppose the dimension of the $l$-th irreducible representation is $2J_l+1$.
Then, there exists an orthonormal basis set $\{ |J_l, m_l {\rangle}_l \}$ ($m_l = -J_l,\cdots ,J_l$) such that the $l$-th irreducible representation of the su($2$) algebra is given by the spin matrices $\{ {\tilde{\lambda}}_{J_l,1}, {\tilde{\lambda}}_{J_l,2}, {\tilde{\lambda}}_{J_l,3} \}$ for a spin-$J_l$ particle (cf. Eq.~(\ref{eq:single-gen})).
The state $|J_l, m_l {\rangle}_l$ can be expressed as a linear combination of $|J, m_z \rangle$ ($m_z = -J, \cdots, J$), and $\{ |J_l, m_l{\rangle}_l \}$
and $\{ |J_{l^{\prime}}, m_{l^{\prime}} {\rangle}_{l^{\prime}} \}$ ($l \neq l^{\prime}$) are orthogonal to each other. Then, the completely reducible representation of $\{ {\tilde{O}}_{J,k} \}$ can be expressed as \begin{equation}
{\tilde{O}}_{J,k} = f \bigoplus_{l=1}^{r} {\tilde{\lambda}}_{J_l,k}, \
V(\{ |J,m_z \rangle \}) = \bigoplus_{l=1}^{r} V(\{ |J_l,m_l {\rangle}_l \}). \label{eq:direct-sum} \end{equation} In Eq.~(\ref{eq:direct-sum}), $r$ expresses the number of the irreducible representations and the ``subspins'' $J_l$ satisfy $\sum_{l=1}^{r} (2J_l+1) = 2J+1$. The structure constant $f$ of $\{ {\tilde{O}}_{J,k} \}$ defined in Eq.~(\ref{eq:sub-su2}) is given by \begin{equation}
f = \sqrt{\frac{J(J+1)(2J+1)}{\sum_{l=1}^r J_l (J_l+1) (2J_l+1)}}, \label{eq:f} \end{equation} which can be derived from the irreducibility of $\{ {\tilde{\lambda}}_{J_l,k} \}$ and the normalization condition in Eq.~(\ref{eq:norm}).
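As a quick numerical illustration of Eq.~(\ref{eq:f}) (a sketch added here, not part of the derivation; the function name is ours), the structure constant can be evaluated for the subspin partitions of a spin-$3/2$ system, which appear later as the four classes of the collective su($4$) system:

```python
from math import isclose, sqrt

def structure_constant(J, subspins):
    """Eq. (f): f = sqrt( J(J+1)(2J+1) / sum_l J_l (J_l+1)(2J_l+1) )."""
    num = J * (J + 1) * (2 * J + 1)
    den = sum(Jl * (Jl + 1) * (2 * Jl + 1) for Jl in subspins)
    return sqrt(num / den)

# Subspin partitions of J = 3/2 and their structure constants
print(structure_constant(1.5, [1.5]))            # 1.0 (the full spin)
print(structure_constant(1.5, [1.0, 0.0]))       # sqrt(5/2)
print(structure_constant(1.5, [0.5, 0.5]))       # sqrt(5)
print(structure_constant(1.5, [0.5, 0.0, 0.0]))  # sqrt(10)
```

These values match the constants $f$ quoted below for the four types of squeezing in collective su($4$) systems.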
Here, we note that $\{ {\tilde{\lambda}}_{J_l,k} \}$ and $V(\{ |J_l,m_l {\rangle}_l \})$ are arranged so that $J_l$ satisfies \begin{equation}
0 \leq J_r \leq \cdots \leq J_2 \leq J_1 \leq J, \label{eq:order} \end{equation} and we define $\{ {\tilde{\lambda}}_{J_l=0,k} \} = \{ 0, 0, 0 \}$.
If two sets of the generators of the su($2$) subalgebras, $\{ {\hat{O}}_{J,k} \}$ and $\{ {\hat{O}}^{\prime}_{J,k} \}$, belong to the same unitary
equivalence class, $\{ {\hat{O}}^{\prime}_{J,k} \}$ and the representation space $V(\{ |J, m_z \rangle \})$ can be decomposed into \begin{equation}
{\tilde{O}}^{\prime}_{J,k} = f^{\prime} \bigoplus_{l=1}^{r^{\prime}} {\tilde{\lambda}}_{J_l^{\prime},k}, \
V(\{ |J, m_z \rangle \}) = \bigoplus_{l=1}^{r^{\prime}} V(\{ |J_l^{\prime}, m_l^{\prime} {\rangle}_l \}), \end{equation} where $f^{\prime} = [J(J+1)(2J+1)/\sum_{l=1}^{r^{\prime}}J_l^{\prime} (J_l^{\prime}+1)(2J_l^{\prime}+1)]^{1/2}$, $m_l^{\prime} =-J_l^{\prime},\cdots ,J_l^{\prime}$, and \begin{equation}
r=r^{\prime} \ \land \ \forall l, J_l = J_l^{\prime}.
\label{eq:eqn-class} \end{equation} Equation~(\ref{eq:eqn-class}) implies that the structure constants $f$ and $f^{\prime}$ are equal. If two sets of the generators of the su($2$) subalgebras, $\{ {\tilde{O}}_{J,k} \}$ and $\{ {\tilde{O}}^{\prime}_{J,k} \}$, do not belong to the same unitary equivalence class, Eq.~(\ref{eq:eqn-class}) does not hold, since a unitary matrix transforms the basis but it cannot change $r$ and $J_l$. The unitary equivalence classes of the su($2J+1$) algebra can be systematically found via the Dynkin diagram of the su($2J+1$) algebra as explained in \ref{subsec:Dynkin}.
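The invariance of $r$ and $\{ J_l \}$ under SU($n$) conjugation can also be checked numerically through the spectrum of the quadratic Casimir operator $\sum_k {\tilde{O}}_{J,k}^2$, whose eigenvalue on the $l$-th block is $f^2 J_l (J_l+1)$. The following sketch (the helper `spin_matrices` and the block construction are ours) builds the decomposition $\{ J_1 = 1, J_2 = 0 \}$ of a four-dimensional representation and rotates it by a random unitary:

```python
import numpy as np

def spin_matrices(j):
    """Spin matrices (Sx, Sy, Sz) for spin j in the |j, m> basis (m = j, ..., -j)."""
    dim = int(round(2 * j + 1))
    m = j - np.arange(dim)
    sp = np.zeros((dim, dim))
    for k in range(1, dim):
        sp[k - 1, k] = np.sqrt(j * (j + 1) - m[k] * (m[k] + 1))
    return (sp + sp.T) / 2, (sp - sp.T) / (2 * 1j), np.diag(m).astype(complex)

# Block-diagonal generators for the decomposition {J_1 = 1, J_2 = 0}, f = sqrt(5/2)
f = np.sqrt(5 / 2)
O = [f * np.block([[s, np.zeros((3, 1))], [np.zeros((1, 3)), np.zeros((1, 1))]])
     for s in spin_matrices(1.0)]
# Quadratic Casimir sum_k O_k^2: eigenvalue f^2 J_l (J_l + 1) on the l-th block
casimir = sum(o @ o for o in O)
# Conjugating every generator by a random SU(4) matrix leaves the spectrum intact,
# so r and {J_l} are invariants of the unitary equivalence class
rng = np.random.default_rng(0)
q, _ = np.linalg.qr(rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))
casimir_rot = sum(o @ o for o in (q @ o @ q.conj().T for o in O))
```

Both `casimir` and `casimir_rot` have the spectrum $\{ f^2 \cdot 2, f^2 \cdot 2, f^2 \cdot 2, 0 \} = \{5,5,5,0\}$, so the rotated generators still decompose into a spin-$1$ and a spin-$0$ block.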
\subsection{\label{subsec:Dynkin}Dynkin diagram and unitary equivalence class} \begin{figure}
\caption{(Color Online) (a) Dynkin diagram of the su($2J+1$) algebra. The simple root ${\balpha}_k$ expresses the transition from $m_z=J-k$ to $m_z=J-k+1$. (b) Correspondences between the connected and disconnected simple roots and the lower dimensional irreducible representations of the su($2$) generators. The filled circles and the gray open circles represent the simple roots that are chosen and not chosen, respectively. (i) If the chosen simple roots from ${\balpha}_1$ to ${\balpha}_l$ are connected, then they are substituted by the $(l+1)$-dimensional irreducible representation in the
representation space of $V(\{ |J, m_z \rangle \})$ ($m_z = J, \cdots , J-l$). (ii) If a magnetic sublevel $J-l^{\prime}$ is isolated from the connected simple roots, then it is substituted by the one-dimensional element, i.e., $0$.}
\label{fig:dynkin-sun}
\end{figure} The decomposition of the generators $\{ {\tilde{O}}_{J,k} \}$ of the su($2$) subalgebra in Eq.~(\ref{eq:direct-sum}) can be derived from the Dynkin diagram of the su($2J+1$) algebra. In the Dynkin diagram of the su($2J+1$) algebra, the $2J$ simple roots are connected as shown in Fig.~\ref{fig:dynkin-sun} (a). Here, the $k$-th vertex represents the $k$-th simple root ${\balpha}_k$ that corresponds to the raising matrix ${\tilde{A}}_{J,k}$ from the $k$-th sublevel to the $(k+1)$-th sublevel with respect to the quantization axis determined by the Cartan subalgebra. For the generators of the Cartan subalgebra, we choose the $z$ component of the spin vector ${\tilde{\lambda}}_{J,3}$ and the other $2J-1$ diagonal matrices. Then the quantization axis is given by the $z$ axis, which implies that ${\tilde{A}}_{J,k}$ raises the sublevel from $m_z = J-k$ to $m_z=J-k+1$ as follows: \begin{equation}
({\tilde{A}}_{J,k})_{mn} \equiv \sqrt{\frac{1}{3}J(J+1)(2J+1)} \ {\delta}_{J-k+1,m} {\delta}_{J-k,n}. \label{eq:simple-root-m} \end{equation} The matrix products of ${\tilde{A}}_{J,k}$ and their linear combinations reproduce the spin and multipolar observables ${\tilde{\lambda}}_{J,k}$.
We can construct a completely reducible representation by choosing $1\leq n\leq 2J$ vertices from the $2J$ vertices and substituting each run of $l$ connected roots ${\balpha}_{k}$, ${\balpha}_{k+1}$, $\cdots$, ${\balpha}_{k+l-1}$ ($l=1,\cdots ,2J$) by the $(l+1)$-dimensional irreducible representation $\{ {\tilde{\lambda}}_{J_l,k} \}$
($k=1,2,3$) of the su($2$) generators in the representation space of $V(\{ | J,m_z \rangle \})$ ($m_z = J-k+1, J-k, \cdots , J-k-l+1$) as shown in Fig.~\ref{fig:dynkin-sun} (b)-(i). If a magnetic sublevel $m_z$ is not involved in any of the connected simple roots, then it is substituted by the one-dimensional element $0$.
This procedure is equivalent to the decomposition of $V(\{ | J,m_z \rangle \})$ into the subspaces in Eq.~(\ref{eq:direct-sum}): arranging the irreducible representations so that their dimensions satisfy Eq.~(\ref{eq:order}), we obtain the decomposition in Eq.~(\ref{eq:direct-sum}). Since the Dynkin diagram does not depend on the choice of the basis, any $(2J+1)$-dimensional matrix representation can be obtained by rotating one of the representations derived from the Dynkin diagram via an SU($2J+1$) unitary matrix.
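The vertex-choosing procedure can be made concrete with a short script (an illustrative sketch added here; the function name is ours) that maps each choice of simple roots to its subspin multiset and enumerates the distinct classes for su($4$):

```python
from itertools import combinations

def subspins(n_roots, chosen):
    """Map a choice of simple-root vertices of the su(n_roots+1) Dynkin diagram
    to the sorted subspins {J_l}: a maximal run of l connected chosen roots
    gives a spin l/2 (an (l+1)-dim irrep); each untouched sublevel gives J_l = 0."""
    runs, run, prev = [], 0, None
    for v in sorted(chosen):
        if prev is not None and v == prev + 1:
            run += 1
        else:
            if run:
                runs.append(run)
            run = 1
        prev = v
    if run:
        runs.append(run)
    isolated = (n_roots + 1) - sum(l + 1 for l in runs)
    return tuple(sorted([l / 2 for l in runs] + [0.0] * isolated, reverse=True))

# su(4): three simple roots; all non-empty vertex choices
classes = {subspins(3, c) for r in range(1, 4) for c in combinations((1, 2, 3), r)}
# four distinct classes: {3/2}, {1, 0}, {1/2, 1/2}, {1/2, 0, 0}
```

The four multisets reproduced here are exactly the subspin contents of the four types of squeezing discussed below for collective su($4$) systems.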
\section{Properties of squeezing determined by unitary equivalence classes} \subsection{Squeezing parameters} The properties of the squeezing reflect the structure of the unitary equivalence class, i.e., the subspins and the initial coherent state. To confirm this, let us consider squeezing among an su($2$) subalgebra $\{ {\hat{O}}_{J,k} \}$, which can be decomposed into Eq.~(\ref{eq:direct-sum}) with the subspins $\{ J_l \}$. A CSS~\cite{Perelomov,Gilmore,Nemoto,Mathur} can be expressed in terms of two parameters $\theta \in [0,\pi]$ and $\phi \in [0,2\pi )$ as \begin{eqnarray}
&|\theta ,\phi {\rangle}_{\mathrm{tot}}
\equiv {\left [ \bigoplus_{l=1}^r {\zeta}_l |\theta ,\phi {\rangle}_l \right ]}^{\otimes N}
\nonumber \\
&= \sum_{n_1=0}^N \sum_{n_2=0}^{N-n_1} \cdots \sum_{n_{r-1}=0}^{N-n_1-\cdots - n_{r-2}} \sqrt{_NC_{n_1} \ _{N-n_1}C_{n_2} \
\cdots \ _{N-n_1-\cdots - n_{r-2}}C_{n_{r-1}}} \nonumber \\
&\times {\zeta}_1^{n_1} {\zeta}_2^{n_2} \cdots {\zeta}_{r-1}^{n_{r-1}} {\zeta}_r^{N-n_1-\cdots - n_{r-1}} \nonumber \\
&\times \Bigl [ |\theta ,\phi {\rangle}_1^{\otimes n_1} \oplus |\theta ,\phi {\rangle}_2^{\otimes n_2}
\oplus \cdots \oplus |\theta ,\phi {\rangle}_{r-1}^{\otimes n_{r-1}} \oplus |\theta ,\phi {\rangle}_r^{\otimes N - n_1 - \cdots - n_{r-1}} \Bigr ],
\label{eq:CSS-J} \end{eqnarray}
where $\sum_{l=1}^r |{\zeta}_l|^2 = 1$ and the single particle states $|\theta , \phi {\rangle}_l$ in Eq.~(\ref{eq:CSS-J}) for $J_l \neq 0$ and $J_l = 0$ are
defined in terms of the basis $\{ |J_l, m_l {\rangle}_l \}$ as \begin{equation}
\forall J_l \neq 0, \ |\theta , \phi {\rangle}_l \equiv \exp {\left [-\frac{\theta}{2} ( e^{-i\phi } {\tilde{\lambda}}_{J_l,+} - e^{i\phi } {\tilde{\lambda}}_{J_l,-}) \right ]}
|J_l, J_l {\rangle}_l, \label{eq:one-J} \end{equation} with ${\tilde{\lambda}}_{J_l,\pm} \equiv {\tilde{\lambda}}_{J_l,1} \pm i {\tilde{\lambda}}_{J_l,2}$,
and $|\theta ,\phi {\rangle}_l \equiv |J_l = 0, m_l = 0 {\rangle}_l$ ($J_l = 0$), respectively.
The CSS $|\theta ,\phi {\rangle}_{\mathrm{tot}}$ in Eq.~(\ref{eq:CSS-J}) satisfies the minimum uncertainty relation \begin{equation}
\forall \nu \in [0,2\pi ), \ \langle ( \Delta O_{J,\nu} )^2 \rangle \langle ( \Delta O_{J,\nu +\frac{\pi}{2}} )^2 \rangle
= \frac{f^2}{4} \langle {\hat{O}}_{J, \perp} {\rangle}^2, \label{eq:MUR} \end{equation} where $\langle \hat{X} \rangle$ represents the expectation value of an observable $\hat{X}$, the quantum fluctuation in $\hat{X}$ is defined as $\langle (\Delta X )^2 \rangle = \langle {\hat{X}}^2 \rangle - \langle \hat{X} {\rangle}^2$, and ${\hat{O}}_{J,\nu}$ and ${\hat{O}}_{J, \perp}$ are given by \begin{eqnarray}
& {\hat{O}}_{J,\perp} \equiv {\hat{O}}_{J,1} \cos {\phi} \sin {\theta} + {\hat{O}}_{J,2} \sin {\phi} \sin {\theta} + {\hat{O}}_{J,3} \cos {\theta},
\label{eq:O3} \\
& {\hat{O}}_{J,\nu} \equiv {\hat{O}}_{J,1} ( \cos {\phi} \cos {\theta} \cos {\nu} - \sin {\phi} \sin {\nu} ) \nonumber \\
&+ {\hat{O}}_{J,2} ( \sin {\phi} \cos {\theta} \cos {\nu} + \cos {\phi} \sin {\nu} )
- {\hat{O}}_{J,3} \sin {\theta} \cos {\nu}, \label{eq:O1} \end{eqnarray} respectively. The expectation values in Eq.~(\ref{eq:MUR}) can be obtained via the Schwinger-boson approach described in \ref{a:0} as \begin{eqnarray}
& \langle {\hat{O}}_{J, \perp} \rangle = fN \sum_{l,J_l\neq 0} J_l |{\zeta}_l|^2, \label{eq:exp-CSS} \\
& \forall \nu \in [0, 2\pi ), \ \langle ( \Delta O_{J,\nu} )^2 \rangle = \frac{f^2N}{2} \sum_{l,J_l\neq 0} J_l |{\zeta}_l|^2. \label{eq:fluct-CSS} \end{eqnarray} Equations~(\ref{eq:MUR}), (\ref{eq:exp-CSS}) and (\ref{eq:fluct-CSS}) imply that the squeezing among $\{ {\hat{O}}_{J,k} \}$ can suppress $\langle ( \Delta O_{J,\nu} )^2 \rangle$ below the
coherent-spin-state value of $\frac{f}{2} |\langle {\hat{O}}_{J, \perp} \rangle |$ at the expense of $\langle ( \Delta O_{J,\nu+\frac{\pi}{2}} )^2 \rangle$
being enhanced above $\frac{f}{2} |\langle {\hat{O}}_{J, \perp} \rangle |$; hence, the squeezing can be characterized by the squeezing parameter $\xi$ defined as \begin{equation}
{\xi}^2 = \left ( 2N \sum_{l,J_l\neq 0} J_l |{\zeta}_l|^2 \right ) \times \frac{\min_{\nu}
{\langle (\Delta O_{J, \nu} )^2 \rangle} }{\langle {\hat{O}}_{J, \perp} {\rangle}^2},
\label{eq:sqparam} \end{equation} where $\min_{\nu} {\langle (\Delta O_{J, \nu} )^2 \rangle}$ is the quantum fluctuation of ${\hat{O}}_{J,\nu}$ in Eq.~(\ref{eq:O1}), which lies in the plane perpendicular to ${\hat{O}}_{J, \perp}$, minimized with respect to the angle $\nu$. Equation~(\ref{eq:sqparam}) equals $1$ for the CSS in Eq.~(\ref{eq:CSS-J}), which implies that a state giving ${\xi}^2 < 1$ is squeezed.
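Indeed, substituting Eqs.~(\ref{eq:exp-CSS}) and (\ref{eq:fluct-CSS}) into Eq.~(\ref{eq:sqparam}) and writing $S \equiv \sum_{l,J_l\neq 0} J_l |{\zeta}_l|^2$ gives, for the CSS,

```latex
\begin{equation*}
{\xi}^2 = \left( 2NS \right) \times \frac{\frac{f^2 N}{2} S}{{\left( fNS \right)}^2}
        = \frac{f^2 N^2 S^2}{f^2 N^2 S^2} = 1 ,
\end{equation*}
```

so any state with ${\xi}^2 < 1$ is squeezed relative to the CSS.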
We note that Eq.~(\ref{eq:sqparam}) is equivalent to Wineland's squeezing parameter~\cite{Wineland1} when the coefficients $\{ |{\zeta}_l |^2 \}$ of the
initial CSS in Eq.~(\ref{eq:CSS-J}) are given by $|{\zeta}_l|^2 = {\delta}_{l,l_0}$ with $l_0$ such that $J_{l_0} \neq 0$. The squeezing parameter $\xi$ in Eq.~(\ref{eq:sqparam}) is characterized by the subspins and the initial CSS, both of which reflect the structure of the unitary equivalence class of the spin and multipolar observables $\{ {\hat{O}}_{J,k} \}$ generating the su($2$) subalgebra.
\subsection{Squeezed and anti-squeezed quantum fluctuations for one-axis twisting interactions} Let us calculate the squeezing parameter $\xi$ in Eq.~(\ref{eq:sqparam}) for an SSS generated via the one-axis twisting interaction~\cite{Kitagawa}. We consider the one-axis twisting interaction \begin{equation}
{\hat{H}}_{\mathrm{OAT}} = \hbar \chi {\hat{O}}_{J,3}^2 \label{eq:OAT} \end{equation} with the interaction strength $\chi$, which redistributes the quantum fluctuations in the $O_{J,2}$-$O_{J,3}$ plane. A CSS of the $N$ spin-$J$ particles is given by
$| \theta = \frac{\pi}{2}, \phi = 0 {\rangle}_{\mathrm{tot}}$ in Eq.~(\ref{eq:CSS-J}). Defining the rescaled evolution time $\mu \equiv 2 \chi f^2 t$, we can express the one-axis-twisted SSS
$| {\Psi}_{\mathrm{OAT}} (J,N;\mu ) {\rangle}_{\mathrm{tot}}$ at $\mu $ as \begin{equation}
| {\Psi}_{\mathrm{OAT}} (J,N;\mu ) {\rangle}_{\mathrm{tot}} = \exp {\left [- \frac{i}{2f^2} {\hat{O}}_{J,3}^2 \mu\right ]} \
|\theta = \frac{\pi}{2}, \phi = 0 {\rangle}_{\mathrm{tot}}. \label{eq:SSS-J} \end{equation} In this case, the observable ${\hat{O}}_{J,\perp}$ is given by ${\hat{O}}_{J,1}$ and its expectation value at time $\mu$ can be obtained in a manner similar to Eqs.~(\ref{eq:exp-CSS}) and (\ref{eq:fluct-CSS}) as \begin{eqnarray}
\langle {\hat{O}}_{J, 1} \rangle (\mu ) &\equiv \langle {\Psi}_{\mathrm{OAT}} (J,N;\mu ) | {\hat{O}}_{J,1} | {\Psi}_{\mathrm{OAT}} (J,N;\mu ) {\rangle}_{\mathrm{tot}} \nonumber \\
&= f N \sum_{l:J_l\neq 0} J_l |{\zeta}_l|^2 {\cos}^{2J_l-1} \frac{\mu}{2} \ {\left [1 - |{\zeta}_l|^2 \left ( 1 - {\cos}^{2J_l} \frac{\mu}{2} \right ) \right ] }^{N-1},
\label{eq:perp} \end{eqnarray} as detailed in \ref{a:0}. The quantum fluctuations in the plane perpendicular to ${\hat{O}}_{J,\perp}$ can be simplified as a function of $\nu$ as \begin{equation}
\langle (\Delta O_{J, \nu} )^2 \rangle (\mu ) = \frac{f^2N}{2}
\sum_{l:J_l\neq 0} J_l|{\zeta}_l|^2 \left [ 1 + A_l (1+ \cos {2\nu} ) - B_l \sin {2\nu} \right ].
\label{eq:fluct-3} \end{equation} Here, $A_l$ and $B_l$ are defined as \begin{eqnarray}
A_l & \equiv \frac{J_l }{2} (N-1) |{\zeta}_l|^2 \left \{ 1 - {\cos}^{2(2J_l-1)} \mu \ [1-|{\zeta}_l|^2 (1-{\cos}^{2J_l} \mu) ]^{N-2}
\right \} \nonumber \\
&+ \frac{1}{2} \left ( J_l - \frac{1}{2} \right ) \{ 1 - {\cos}^{2(J_l-1)} \mu \ [1-|{\zeta}_l|^2 (1-{\cos}^{2J_l} \mu) ]^{N-1} \}, \\
B_l & \equiv 2 \Biggl \{ J_l (N-1) |{\zeta}_l|^2 {\cos}^{2J_l} \frac{\mu}{2}
+ \left ( J_l - \frac{1}{2} \right ) \left [1 - |{\zeta}_l|^2 \left (1 - {\cos}^{2J_l} \frac{\mu}{2} \right ) \right ] \Biggr \} \nonumber \\
& \times \sin {\frac{\mu}{2}} \ {\cos}^{2(J_l-1)} \frac{\mu}{2} \ {\left [1 - |{\zeta}_l|^2 \left (1 - {\cos}^{2J_l} \frac{\mu}{2} \right ) \right ]}^{N-2}. \label{eq:fluct-4} \end{eqnarray} Equation~(\ref{eq:fluct-3}) is periodic with respect to $\nu$, so it attains a minimum and a maximum, i.e., the squeezed and anti-squeezed quantum fluctuations, respectively. The squeezing parameter satisfies ${\xi}^2 (\mu =0) = 1$ for the initial CSS in Eq.~(\ref{eq:CSS-J}), and the spins are said to be squeezed when ${\xi}^2 (\mu )<1$.
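As a consistency check of Eq.~(\ref{eq:perp}) (a numerical sketch added here, not part of the original derivation): for a single subspin $J_{l_0} = 1/2$ with $|{\zeta}_l|^2 = {\delta}_{l,l_0}$ and $f = 1$, Eq.~(\ref{eq:perp}) reduces to the familiar one-axis-twisting result~\cite{Kitagawa} $\langle {\hat{O}}_{J,1} \rangle (\mu) = \frac{N}{2} {\cos}^{N-1} \frac{\mu}{2}$, which can be confirmed by exact evolution in the Dicke basis:

```python
import numpy as np
from math import comb

def oat_mean_sx(N, mu):
    """Exact <S_x>(mu) for N spin-1/2 particles evolved with exp(-i mu S_z^2 / 2),
    starting from the coherent state polarized along +x (Dicke basis, S = N/2)."""
    S = N / 2
    m = S - np.arange(N + 1)                              # m = S, S-1, ..., -S
    psi = np.sqrt([comb(N, k) for k in range(N + 1)]) / 2 ** S
    psi = psi * np.exp(-1j * mu * m ** 2 / 2)             # one-axis-twisting phases
    splus = np.sqrt(S * (S + 1) - m[1:] * (m[1:] + 1))    # <m+1|S+|m> elements
    # <S_x> = Re <S+> for a Hermitian combination (S+ + S-)/2
    return float(np.real(np.sum(np.conj(psi[:-1]) * splus * psi[1:])))

N = 12
for mu in (0.0, 0.2, 0.7):
    exact = oat_mean_sx(N, mu)
    formula = (N / 2) * np.cos(mu / 2) ** (N - 1)  # Eq. (perp) with J_l0 = 1/2, f = 1
```

The exact expectation value and the closed-form expression agree to machine precision for all $\mu$.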
The squeezing limit in Eq.~(\ref{eq:sqparam}) can be obtained analytically in the limit of $\mu \ll 1$ and $N \gg 1$ when the coefficients $\{ |{\zeta}_l|^2 \}$ of the initial coherent state in Eq.~(\ref{eq:CSS-J}) satisfy $|{\zeta}_l|^2 = {\delta}_{l,l_0}$ for a single subspin $J_{l_0} \neq 0$ in Eq.~(\ref{eq:direct-sum}). The quantum fluctuations in the $O_{J,2}$-$O_{J,3}$ plane in Eq.~(\ref{eq:fluct-3}) can be simplified as \begin{eqnarray}
\langle (\Delta O_{J_{l_0}, \nu} )^2 \rangle (\mu )
&= \frac{f^2J_{l_0}N}{2} \Biggl \{ 1 + \frac{1}{2} \left (J_{l_0}N - \frac{1}{2} \right ) \nonumber \\
& \times \Biggl [ ( 1 - {\cos}^{2(J_{l_0}N-1)} \mu ) (1 + \cos {2\nu} ) \nonumber \\
&- 4 \sin {\frac{\mu}{2}} \ {\cos}^{2(J_{l_0}N-1)} \frac{\mu}{2} \ \sin {2\nu} \Biggr ] \Biggr \}, \label{eq:fluc-r1} \end{eqnarray} and the expectation value perpendicular to the $O_{J,2}$-$O_{J,3}$ plane in Eq.~(\ref{eq:perp}) is \begin{equation}
\langle {\hat{O}}_{J, 1} \rangle (\mu ) = fJ_{l_0}N {\cos}^{2JN-1} \frac{\mu}{2}. \label{eq:prep-r1} \end{equation} Here, we assume that $\mu$ and $N$ satisfy $\alpha \equiv \frac{1}{2} J_{l_0}N \mu \gg 1$ and $\beta \equiv \frac{1}{4} J_{l_0}N {\mu}^2 \ll 1$. Then, substituting Eqs.~(\ref{eq:fluc-r1}) and (\ref{eq:prep-r1}) into Eq.~(\ref{eq:sqparam}), we obtain the squeezing parameter for $r=1$ up to the second order in $\beta$ as: \begin{equation}
{\xi}^2 (\mu ) \simeq \frac{1}{4{\alpha}^2} + \frac{2}{3} {\beta}^2 + \frac{\beta}{2{\alpha}^2} + \mathcal{O} (\max {\{ \frac{{\beta}^2}{\alpha}, {\beta}^3 \} }),
\label{eq:sqpram-r1} \end{equation} where $\nu \simeq - \frac{1}{2} \arctan {\frac{1}{\alpha}} + \frac{\pi}{2}$. The minimum of Eq.~(\ref{eq:sqpram-r1}), i.e., the squeezing limit, is achieved at $\mu = {\mu}_{\mathrm{min}} = (12)^{1/6} (J_{l_0}N)^{-2/3}$ and is given by \begin{equation}
{\xi}_{\mathrm{min}}^2 \equiv {\xi}^2 ({\mu}_{\mathrm{min}}) \simeq \frac{1}{2} {\left ( \frac{3}{2J_{l_0}N} \right )}^{2/3} + \frac{1}{2J_{l_0}N} \propto {(J_{l_0}N)^{-2/3}},
\label{eq:minsqparam-r1} \end{equation} which implies that the squeezing limit monotonically decreases with increasing $J_{l_0}$.
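The stationary point in Eq.~(\ref{eq:minsqparam-r1}) can be cross-checked numerically by minimizing the leading terms of Eq.~(\ref{eq:sqpram-r1}) over $\mu$ (a sketch added here, using a simple grid search instead of the analytic derivative):

```python
import numpy as np

def xi2_leading(mu, s):
    """Leading terms of Eq. (sqpram-r1): 1/(4 alpha^2) + (2/3) beta^2,
    with alpha = s mu / 2, beta = s mu^2 / 4 and s = J_{l0} N."""
    alpha = s * mu / 2
    beta = s * mu ** 2 / 4
    return 1 / (4 * alpha ** 2) + (2 / 3) * beta ** 2

s = 1.5e5                                   # e.g. J_{l0} = 3/2, N = 10^5
mu_pred = 12 ** (1 / 6) * s ** (-2 / 3)     # predicted mu_min = 12^{1/6} (J_{l0} N)^{-2/3}
grid = np.linspace(0.2 * mu_pred, 5 * mu_pred, 200001)
mu_num = grid[np.argmin(xi2_leading(grid, s))]
xi2_pred = 0.5 * (3 / (2 * s)) ** (2 / 3)   # leading term of Eq. (minsqparam-r1)
```

The grid minimum reproduces both the predicted ${\mu}_{\mathrm{min}}$ and the leading ${(J_{l_0}N)}^{-2/3}$ value of the squeezing limit.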
\section{Application to collective su($4$) systems} \subsection{Complete set of collective spin and multipolar observables}
To examine the squeezing parameter in Eq.~(\ref{eq:sqparam}) for $r>1$, especially the $\{ |{\zeta}_l|^2 \}$-dependence of the squeezing limit, let us consider a collective su($4$) system consisting of $N$ spin-3/2 particles as an example. In this case, the observables that can completely characterize collective spin states are the spin vector, the quadrupolar tensor, and the octupolar tensor. The Cartesian components of the collective spin vector ${\hat{\Lambda}}_{\frac{3}{2},k}$ ($k = 1,2,3$) can be given by \begin{equation}
{\hat{\Lambda}}_{\frac{3}{2},k} = \sum_{j=1}^N \sum_{m,n=1}^4 ({\tilde{\lambda}}_{\frac{3}{2},k})_{mn} {\hat{c}}^{\dagger}_{{\bi n}_j;\frac{3}{2},m}
{\hat{c}}_{{\bi n}_j;\frac{3}{2},n}, \label{eq:single-j} \end{equation} where ${\tilde{\lambda}}_{\frac{3}{2},k}$ represent the spin-$3/2$ matrices ${\tilde{J}}_{\mu}$ ($\mu = x,y,z$) given by Eq.~(\ref{eq:single-mj}). The matrix representations of the five independent components of the quadrupolar tensor and the seven independent components of the octupolar tensor~\cite{Shiina} can be respectively expressed in terms of ${\tilde{J}}_{\mu }$ as \begin{eqnarray}
({\tilde{Q}}_{\mu \nu})_{mn}= \frac{\sqrt{15}}{6} ({\tilde{J}}_{\mu} {\tilde{J}}_{\nu} + {\tilde{J}}_{\nu} {\tilde{J}}_{\mu} )_{mn}, \label{eq:single-q} \\
({\tilde{D}}_{xy})_{mn} = \frac{\sqrt{15}}{6} ({\tilde{J}}_{x}^2 - {\tilde{J}}_{y}^2 )_{mn}, \label{eq:single-d} \\
(\tilde{Y})_{mn} = \frac{\sqrt{5}}{6} (-{\tilde{J}}_{x}^2 - {\tilde{J}}_{y}^2 + 2{\tilde{J}}_{z}^2 )_{mn}, \label{eq:single-y} \end{eqnarray} where $(\mu ,\nu) = (x,y), (y,z), (z,x)$ in Eq.~(\ref{eq:single-q}), and \begin{eqnarray}
({\tilde{T}}^{\alpha}_{\mu})_{mn} = \frac{1}{3} (2 {\tilde{J}}_{\mu}^3 - \overline{{\tilde{J}}_{\mu} {\tilde{J}}_{\nu}^2}
- \overline{{\tilde{J}}_{\eta}^2 {\tilde{J}}_{\mu}})_{mn}, \label{eq:single-ta} \\
({\tilde{T}}^{\beta}_{\mu})_{mn} = \frac{\sqrt{15}}{9} (\overline{{\tilde{J}}_{\mu} {\tilde{J}}_{\nu}^2}
- \overline{{\tilde{J}}_{\eta}^2 {\tilde{J}}_{\mu}})_{mn}, \\
({\tilde{T}}_{xyz})_{mn} = \frac{\sqrt{15}}{9} (\overline{{\tilde{J}}_{x}{\tilde{J}}_{y}{\tilde{J}}_{z}})_{mn}, \label{eq:single-txyz} \end{eqnarray} where $(\mu ,\nu ,\eta) = (x,y,z)$, $(y,z,x)$, and $(z,x,y)$ and the overbars above the matrix products are defined as $\overline{\tilde{A} {\tilde{B}}^2} = \tilde{A} {\tilde{B}}^2 + \tilde{B} \tilde{A} \tilde{B} + {\tilde{B}}^2\tilde{A}$ and $\overline{\tilde{A} \tilde{B} \tilde{C}} = \tilde{A} \tilde{B} \tilde{C} + \tilde{B} \tilde{C} \tilde{A} + \tilde{C} \tilde{A} \tilde{B} + \tilde{B} \tilde{A} \tilde{C} + \tilde{C} \tilde{B} \tilde{A} + \tilde{A} \tilde{C} \tilde{B}$ with respect to the matrices $\tilde{A}$, $\tilde{B}$, and $\tilde{C}$. Here we note that the matrix representations of the spin and multipolar observables in Eqs.~(\ref{eq:single-j})-(\ref{eq:single-txyz}) are normalized so that they satisfy the condition in Eq.~(\ref{eq:norm}). These fifteen spin and multipolar observables in Eqs.~(\ref{eq:single-j})-(\ref{eq:single-txyz}) together form the su($4$) Lie algebra. Then, the irreducible representations of the collective spin observables describing the symmetric spin state can respectively be given by the matrix representations of their single-spin counterparts in Eqs.~(\ref{eq:single-j})-(\ref{eq:single-txyz}), whose explicit expressions are given in Eqs.~(\ref{eq:single-mj})-(\ref{eq:single-mt}). We define the matrices $\{ {\tilde{\lambda}}_{\frac{3}{2},k} \} \equiv \{ {\tilde{J}}_{\mu}, {\tilde{Q}}_{\mu \nu}, {\tilde{D}}_{xy}, \tilde{Y}, {\tilde{T}}^{\alpha}_{\mu}, {\tilde{T}}^{\beta}_{\mu}, {\tilde{T}}_{xyz} \}$ ($k=1,\cdots ,15$) in the order of Eqs.~(\ref{eq:single-j})-(\ref{eq:single-txyz}). Then, the matrix representation of any observable can be expressed in terms of $\{ {\tilde{\lambda}}_{\frac{3}{2},k} \}$ ($k=1,\cdots ,15$) as in Eq.~(\ref{eq:coll-obs}).
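The normalization of the fifteen generators can be verified numerically (a sketch added here; it builds the matrices of Eqs.~(\ref{eq:single-j})-(\ref{eq:single-txyz}) from the standard spin-$3/2$ matrices and checks $\mathrm{Tr}({\tilde{\lambda}}_{\frac{3}{2},a}{\tilde{\lambda}}_{\frac{3}{2},b}) = \frac{1}{3}J(J+1)(2J+1)\,{\delta}_{ab} = 5{\delta}_{ab}$, our reading of the condition in Eq.~(\ref{eq:norm}), consistent with the prefactor in Eq.~(\ref{eq:simple-root-m})):

```python
import numpy as np
from itertools import permutations

def spin_matrices(j):
    """Standard spin matrices (Jx, Jy, Jz) for spin j in the |j, m> basis."""
    dim = int(round(2 * j + 1))
    m = j - np.arange(dim)
    jp = np.zeros((dim, dim))
    for k in range(1, dim):
        jp[k - 1, k] = np.sqrt(j * (j + 1) - m[k] * (m[k] + 1))
    return (jp + jp.T) / 2, (jp - jp.T) / (2 * 1j), np.diag(m).astype(complex)

def bar(a, b):   # \overline{A B^2} = A B^2 + B A B + B^2 A
    return a @ b @ b + b @ a @ b + b @ b @ a

jx, jy, jz = spin_matrices(1.5)
axes = {'x': jx, 'y': jy, 'z': jz}
lam = [jx, jy, jz]                                                   # spin vector
for mu, nu in (('x', 'y'), ('y', 'z'), ('z', 'x')):                  # Q_xy, Q_yz, Q_zx
    lam.append(np.sqrt(15) / 6 * (axes[mu] @ axes[nu] + axes[nu] @ axes[mu]))
lam.append(np.sqrt(15) / 6 * (jx @ jx - jy @ jy))                    # D_xy
lam.append(np.sqrt(5) / 6 * (2 * jz @ jz - jx @ jx - jy @ jy))       # Y
for mu, nu, eta in (('x', 'y', 'z'), ('y', 'z', 'x'), ('z', 'x', 'y')):
    a, b, c = axes[mu], axes[nu], axes[eta]
    lam.append((2 * a @ a @ a - bar(a, b) - bar(a, c)) / 3)          # T^alpha_mu
    lam.append(np.sqrt(15) / 9 * (bar(a, b) - bar(a, c)))            # T^beta_mu
lam.append(np.sqrt(15) / 9 *                                         # T_xyz
           sum(p @ q @ r for p, q, r in permutations((jx, jy, jz))))
# Orthonormality: Tr(lam_a lam_b) = 5 delta_ab for J = 3/2
gram = np.array([[np.trace(p @ q).real for q in lam] for p in lam])
```

All fifteen matrices come out traceless with $\mathrm{Tr}({\tilde{\lambda}}_a {\tilde{\lambda}}_b) = 5 {\delta}_{ab}$, confirming the prefactors $\frac{\sqrt{15}}{6}$, $\frac{\sqrt{5}}{6}$, $\frac{1}{3}$, and $\frac{\sqrt{15}}{9}$.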
\subsection{Four types of squeezing} There exist four unitary equivalence classes of the su($2$) subalgebras in the su($4$) algebra, which can be found as explained in Sec.~\ref{subsec:Dynkin}. First, let us construct the Dynkin diagram and consider the relation between the simple roots and the spin and multipolar observables in Eqs.~(\ref{eq:single-j})-(\ref{eq:single-txyz}). In collective su($4$) systems, the Dynkin diagram has three simple roots ${\balpha}_1$, ${\balpha}_4$, ${\balpha}_6$ as shown in Fig.~\ref{fig:su4-roots} (b). Choosing the diagonal matrices ${\tilde{J}}_z = {\tilde{\lambda}}_{\frac{3}{2},3}$, $\tilde{Y} = {\tilde{\lambda}}_{\frac{3}{2},8}$, and ${\tilde{T}}^{\alpha}_z = {\tilde{\lambda}}_{\frac{3}{2},11}$ as the generators of the Cartan subalgebra, we can express the matrices ${\tilde{A}}_{\frac{3}{2},1}$, ${\tilde{A}}_{\frac{3}{2},4}$, and ${\tilde{A}}_{\frac{3}{2},6}$ corresponding to the simple roots as \begin{eqnarray}
{\tilde{A}}_{\frac{3}{2},1}
= \frac{\sqrt{15}}{10} {\tilde{J}}_+ + \frac{1}{2} {\tilde{Q}}_+ - \frac{\sqrt{15}}{20} {\tilde{T}}^{\alpha}_+ - \frac{1}{4} {\tilde{T}}^{\beta}_-, \label{eq:a1} \\
{\tilde{A}}_{\frac{3}{2},4}
= \frac{1}{\sqrt{5}} {\tilde{J}}_+ + \frac{3}{4\sqrt{5}} {\tilde{T}}^{\alpha}_+ + \frac{\sqrt{3}}{4} {\tilde{T}}^{\beta}_-, \label{eq:a4} \\
{\tilde{A}}_{\frac{3}{2},6}
= \frac{\sqrt{15}}{10} {\tilde{J}}_+ - \frac{1}{2} {\tilde{Q}}_- - \frac{\sqrt{15}}{20} {\tilde{T}}^{\alpha}_+ - \frac{1}{4} {\tilde{T}}^{\beta}_-, \label{eq:a6} \end{eqnarray} where we define ${\tilde{J}}_{\pm} \equiv {\tilde{J}}_x \pm i{\tilde{J}}_y$, ${\tilde{Q}}_{\pm} \equiv {\tilde{Q}}_{zx} \pm i{\tilde{Q}}_{yz}$, ${\tilde{T}}^{\alpha}_{\pm} = {\tilde{T}}^{\alpha}_x \pm i{\tilde{T}}^{\alpha}_y$, and ${\tilde{T}}^{\beta}_{\pm} = {\tilde{T}}^{\beta}_x \pm i{\tilde{T}}^{\beta}_y$, respectively. The derivation of Eqs.~(\ref{eq:a1})-(\ref{eq:a6}) are detailed in \ref{a:2}. \begin{figure}
\caption{(Color Online) (a) The root diagram of the su($4$) algebra, (b) the Dynkin diagram of the su($4$) algebra, and (c) the four types of the unitary equivalence classes of the matrix representations of the su($2$) subalgebras. In (c), the chosen simple roots and the omitted simple roots are indicated by the filled black circles and the open grey circles, respectively. }
\label{fig:su4-roots}
\end{figure}
Then, the four unitary equivalence classes of the su($2$) subalgebras can be found, namely the types (i)-(iv) illustrated in Fig.~\ref{fig:su4-roots} (c). The su($2$) subalgebras $\{ {\tilde{O}}_{\frac{3}{2},k} \}$ ($k=1,2,3$) of these four classes satisfy $[{\tilde{O}}_{\frac{3}{2},3}, {\tilde{O}}_{\frac{3}{2},\pm}]=\pm f{\tilde{O}}_{\frac{3}{2},\pm}$, where ${\tilde{O}}_{\frac{3}{2},\pm} = {\tilde{O}}_{\frac{3}{2},1} \pm i{\tilde{O}}_{\frac{3}{2},2}$. Suppose the matrices $\{ {\tilde{O}}_{\frac{3}{2},k} \}$ have the block-diagonalized forms as in Eq.~(\ref{eq:direct-sum}); then the ladder operator ${\tilde{O}}_{\frac{3}{2},+}$ and the observable ${\tilde{O}}_{\frac{3}{2},3}$ should be expressed in terms of the linear combinations of ${\tilde{A}}_{\frac{3}{2},k}$ ($k=1, 4, 6$) and ${\tilde{\lambda}}_{\frac{3}{2},k}$ ($k=3,8,11$), respectively, as \begin{equation}
{\tilde{O}}_{\frac{3}{2},+} = \sum_{k=1,4,6} c_k {\tilde{A}}_{\frac{3}{2},k}, \label{eq:rise} \end{equation} and \begin{equation}
{\tilde{O}}_{\frac{3}{2},3} = d_3 {\tilde{\lambda}}_{\frac{3}{2},3} + d_8 {\tilde{\lambda}}_{\frac{3}{2},8} + d_{11} {\tilde{\lambda}}_{\frac{3}{2},11}, \label{eq:z} \end{equation} where $c_k$ and $d_k$ are the solutions of $[{\tilde{O}}_{\frac{3}{2},3}, {\tilde{O}}_{\frac{3}{2},\pm}]=\pm f{\tilde{O}}_{\frac{3}{2},\pm}$. The solutions, the number of subspaces $r$, the subspins $\{J_l \}$ in Eq.~(\ref{eq:direct-sum}), and the structure constant $f$ are respectively given by \begin{eqnarray}
&\mathrm{(i)} \ & {\tilde{O}}_{\frac{3}{2},+}
= \sqrt{\frac{3}{10}} {\tilde{A}}_{\frac{3}{2},1} + \sqrt{\frac{2}{5}} {\tilde{A}}_{\frac{3}{2},4} + \sqrt{\frac{3}{10}} {\tilde{A}}_{\frac{3}{2},6}, \
{\tilde{O}}_{\frac{3}{2},3} = {\tilde{\lambda}}_{\frac{3}{2},3}, \nonumber \\ & \ &r = 1, \ \{ J_1 = \frac{3}{2} \}, \ f = 1 \label{eq:type1} \\
&\mathrm{(ii)} \ & {\tilde{O}}_{\frac{3}{2},+} = \frac{1}{\sqrt{2}} ({\tilde{A}}_{\frac{3}{2},1} \pm {\tilde{A}}_{\frac{3}{2},4}), \\ \nonumber & \ &
{\tilde{O}}_{\frac{3}{2},3} = \frac{2}{\sqrt{10}} {\tilde{\lambda}}_{\frac{3}{2},3} + \frac{1}{\sqrt{2}} {\tilde{\lambda}}_{\frac{3}{2},8}
- \frac{1}{\sqrt{10}} {\tilde{\lambda}}_{\frac{3}{2},11}, \nonumber \\ & \ &r = 2, \ \{ J_1 = 1, J_2 = 0 \}, \ f = \sqrt{\frac{5}{2}},
\label{eq:type2} \\
&\mathrm{(iii)} \ & {\tilde{O}}_{\frac{3}{2},+} = \frac{1}{\sqrt{2}} ({\tilde{A}}_{\frac{3}{2},1} \pm {\tilde{A}}_{\frac{3}{2},6}), \
{\tilde{O}}_{\frac{3}{2},3} = \frac{1}{\sqrt{5}} {\tilde{\lambda}}_{\frac{3}{2},3} + \frac{2}{\sqrt{5}} {\tilde{\lambda}}_{\frac{3}{2},11}, \nonumber \\ & \ &r = 2, \
\{ J_1 = J_2 = \frac{1}{2} \}, \ f = \sqrt{5}, \label{eq:type3} \\
&\mathrm{(iv)} \ & {\tilde{O}}_{\frac{3}{2},+} = {\tilde{A}}_{\frac{3}{2},1}, \
{\tilde{O}}_{\frac{3}{2},3} = \frac{1}{\sqrt{10}} {\tilde{\lambda}}_{\frac{3}{2},3} + \frac{1}{\sqrt{2}} {\tilde{\lambda}}_{\frac{3}{2},8}
+ \sqrt{\frac{2}{5}} \ {\tilde{\lambda}}_{\frac{3}{2},11}, \nonumber \\ & \ &r = 3, \ \{ J_1 = \frac{1}{2}, J_2=J_3 = 0 \}, \ f = \sqrt{10}. \label{eq:type4} \end{eqnarray} The type (i) squeezing in Eq.~(\ref{eq:type1}) is equivalent to the spin squeezing among $\{ {\hat{J}}_x, {\hat{J}}_y, {\hat{J}}_z \}$ and the type (iii) squeezing in Eq.~(\ref{eq:type3}) is equivalent to the quadrupole-octupole squeezing among $\{{\hat{T}}_z^{\beta}, {\hat{T}}_{xyz}, \hat{Y} \}$ and the quadrupole squeezing among $\{{\hat{Q}}_{zx}, {\hat{Q}}_{yz}, \hat{Y} \}$.
\subsection{Squeezing limits for four types of squeezing} In the case of the type (i) in Eq.~(\ref{eq:type1}), $r=1$ and the squeezing limit for the one-axis twisting is given by Eq.~(\ref{eq:minsqparam-r1}) as \begin{equation}
{\xi}_{\mathrm{min}}^2 \simeq \frac{1}{2} {\left ( \frac{1}{N} \right )}^{2/3} + \frac{1}{3N}, \label{eq:limit-type1} \end{equation} which is achieved at the evolution time of ${\mu}_{\mathrm{min}}= \frac{2}{\sqrt{3}} \times N^{-2/3}$ corresponding to $t_{\mathrm{min}} = \frac{1}{\sqrt{3} \chi} \times N^{-2/3}$.
\begin{figure}\label{fig:sq(ii)and(iv)}
\end{figure} In the case of the types (ii)-(iv) in Eq.~(\ref{eq:type2})-(\ref{eq:type4}), the squeezing limits depend on the initial coherent state in Eq.~(\ref{eq:CSS-J}) in general; however, the squeezing limits for the types (ii) in Eq.~(\ref{eq:type2}) and (iv) in Eq.~(\ref{eq:type4}) can
be calculated in the same manner as the type (i), when $|{\zeta}_l|^2 = {\delta}_{l1}$ in the initial state in Eq.~(\ref{eq:CSS-J}). They are given by \begin{eqnarray}
&\mathrm{(ii)} \ & {\xi}_{\mathrm{min}}^2 \simeq \frac{1}{2} {\left ( \frac{3}{2N} \right )}^{2/3} + \frac{1}{2N}, \label{eq:limit-type2} \\
&\mathrm{(iv)} \ & {\xi}_{\mathrm{min}}^2 \simeq \frac{1}{2} {\left ( \frac{3}{N} \right )}^{2/3} + \frac{1}{N}, \label{eq:limit-type4} \end{eqnarray} respectively. The minimum squeezing limits in Eqs.~(\ref{eq:limit-type2}) and (\ref{eq:limit-type4}) are achieved at ${\mu}_{\mathrm{min}} = 12^{1/6} \times N^{-2/3}$ ($t_{\mathrm{min}} = \frac{12^{1/6}}{5\chi} \times N^{-2/3}$) and ${\mu}_{\mathrm{min}} = 2 \times 3^{1/6} \times N^{-2/3}$ ($t_{\mathrm{min}} = \frac{3^{1/6}}{10\chi} \times N^{-2/3}$), respectively.
If $|{\zeta}_l|^2 \neq 0$ for some $l>1$, the squeezing limits for the types (ii) and (iv) cannot be obtained from the expression in Eq.~(\ref{eq:minsqparam-r1}).
We numerically calculate the $|{\zeta}_1|^2$-dependences of the squeezing limits and their corresponding evolution times and illustrate them in Figs.~\ref{fig:sq(ii)and(iv)} (a) and (b). In Figs.~\ref{fig:sq(ii)and(iv)}, we plot the squeezing limit ${\xi}_{\mathrm{min}}^2$ and the
evolution time ${\mu}_{\mathrm{min}}$ with respect to $1-|{\zeta}_1|^2$.
The squeezing limits for the types (ii) and (iv) monotonically decrease with increasing $|{\zeta}_1|^2$.
For $|{\zeta}_1|^2 \simeq 1$, the squeezing limits are almost equal to Eqs.~(\ref{eq:limit-type2}) and (\ref{eq:limit-type4}), respectively; however,
for $|{\zeta}_1|^2 < 0.2$, the minimum squeezing limits sharply increase due to the decrease in the number of Schwinger bosons that interact nonlinearly via the one-axis twisting interaction in Eq.~(\ref{eq:OAT}).
In the case of the type (iii) in Eq.~(\ref{eq:type3}), where $r=2$ and $J_1 = J_2 = 1/2$, the $|{\zeta}_1|^2$-dependence of the minimum squeezing limit is periodic because of the symmetry between the two subspaces. To see this, let us derive the expression for the squeezing parameter for the type (iii): \begin{equation}
{\xi}^2 (\mu ) = \frac{1 + \frac{1}{4} (N-1) \sum_{l=1}^2 {\Delta}_l(\mu )}{
\sum_{l=1}^2 {|{\zeta}_l|}^2 {(1 - 2 {|{\zeta}_l|}^2 {\sin}^2 \frac{\mu}{4})}^{N-1}} , \label{eq:sq-iii} \end{equation} where ${\Delta}_l$'s ($l=1,2$) are defined as \begin{eqnarray}
& {\Delta}_l(\mu ) = \left [ 1 - {\left (1 - 2 {|{\zeta}_l|}^2 {\sin}^2 \frac{\mu}{2} \right )}^{N-2} \right ] \nonumber \\
& \times \left \{ 1 - \sqrt{1 + {\left [ \frac{4 {|{\zeta}_l|}^2 \sin {\frac{\mu}{2}} \ {(1 - 2 {|{\zeta}_l|}^2 {\sin}^2 \frac{\mu}{4})}^{N-2}}{
1 - {(1 - 2 {|{\zeta}_l|}^2 {\sin}^2 \frac{\mu}{2})}^{N-2} } \right ]}^2} \right \}. \label{eq:f-iii} \end{eqnarray}
When the initial state is given by $|{\zeta}_l|^2 = {\delta}_{l1}$, the squeezing limit is given by Eq.~(\ref{eq:limit-type4}) at
${\mu}_{\mathrm{min}} = 2 \times 3^{1/6} \times N^{-2/3}$, the same as for the type (iv) with the initial state of $|{\zeta}_l|^2 = {\delta}_{l1}$,
while the evolution time $t_{\mathrm{min}} = \frac{3^{1/6}}{5\chi} \times N^{-2/3}$ is twice as large as that for the type (iv) with the same initial state.
When the initial state is the equal superposition of the two subspaces, i.e., ${|{\zeta}_l|}^2 = \frac{1}{2}$, the squeezing limit can be obtained by assuming $\alpha \gg 1$ and $\beta \ll 1$ to be \begin{equation}
{\xi}_{\mathrm{min}}^2 \simeq \frac{1}{2} {\left ( \frac{6}{N} \right )}^{2/3} + \frac{3}{N} \simeq \frac{1}{2} {\left ( \frac{6}{N} \right )}^{2/3}, \label{eq:limit-type3} \end{equation} at the evolution time of ${\mu}_{\mathrm{min}} = 2 \times 3^{1/6} \times (N/2)^{-2/3}$ ($t_{\mathrm{min}} = \frac{48^{1/6}}{5\chi} \times N^{-2/3}$). Equation~(\ref{eq:limit-type3}) is $6^{2/3} \simeq 3.3$ times larger than that for the type (i) in Eq.~(\ref{eq:limit-type1}), $4^{2/3} \simeq 2.5$ times larger than that for the type (ii) in Eq.~(\ref{eq:limit-type2}) with the initial state of $|{\zeta}_l|^2 = {\delta}_{l1}$, and $2^{2/3} \simeq 1.6$ times larger than those for the types (iii) and (iv) with the initial state of $|{\zeta}_l|^2 = {\delta}_{l1}$ in Eq.~(\ref{eq:limit-type4}).
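The numerical factors in this comparison are easy to check; the following snippet is a plain arithmetic sanity check (not part of the derivation), and also verifies that the prefactor $48^{1/6}$ in $t_{\mathrm{min}}$ equals $2^{2/3} \times 3^{1/6}$, consistent with ${\mu}_{\mathrm{min}} = 2 \times 3^{1/6} \times (N/2)^{-2/3}$:

```python
# Sanity check of the numerical ratios quoted for the squeezing limits.
r_i, r_ii, r_iv = 6 ** (2 / 3), 4 ** (2 / 3), 2 ** (2 / 3)
assert abs(r_i - 3.30) < 0.01   # ratio to type (i)
assert abs(r_ii - 2.52) < 0.01  # ratio to type (ii)
assert abs(r_iv - 1.59) < 0.01  # ratio to types (iii)/(iv) with delta initial state

# 48 = 2^4 * 3, hence 48^(1/6) = 2^(2/3) * 3^(1/6)
assert abs(48 ** (1 / 6) - 2 ** (2 / 3) * 3 ** (1 / 6)) < 1e-12
```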
The $|{\zeta}_1|^2$-dependence of the squeezing limit and the corresponding evolution time for $N=10^5$ are illustrated in Fig.~\ref{fig:sq(iii)} (a).
The squeezing limit reaches the maximum at $|{\zeta}_1|^2 \simeq 1-\frac{\pi}{4}$ and $\frac{\pi}{4}$.
The dependence of the squeezing limit on the number of spins for $|{\zeta}_1|^2 \simeq 1-\frac{\pi}{4}$ is shown in Fig.~\ref{fig:sq(iii)} (b), which can be well fitted to \begin{equation}
{\xi}^2_{\mathrm{min}} \simeq 0.11 \pm 0.00 + \frac{0.57 \pm 0.00}{N^{0.50 \pm 0.00}} + \frac{3.8 \pm 0.0}{N} \label{eq:approx-xi} \end{equation} by the least-squares method.
Equation~(\ref{eq:approx-xi}) implies that the scaling of the squeezing limit with respect to $N$ is $0$ for ${|{\zeta}_1|}^2 = \frac{\pi}{4}$ and $1 - \frac{\pi}{4}$, although the squeezing limit is still below the standard quantum limit of ${\xi}^2 = 1$.
The evolution time corresponding to the squeezing limit for ${|{\zeta}_1|}^2 = 1 - \frac{\pi}{4}$ can be well fitted to \begin{equation}
{\mu}_{\mathrm{min}} \simeq (3.9 \pm 0.0) \times N^{-0.73 \pm 0.00}, \label{eq:approx-t} \end{equation} with respect to the number of spins $N$ by the least-squares method. \begin{figure}
\caption{(a) The $|{\zeta}_1|^2$-dependence of the squeezing limit and the corresponding evolution time for $N=10^5$. (b) The $N$-dependence of the squeezing limit for $|{\zeta}_1|^2 \simeq 1-\frac{\pi}{4}$.}
\label{fig:sq(iii)}
\end{figure}
\section{Conclusion} In this paper, we consider the collective su($2J+1$) systems and classify the squeezing among the spin and multipolar observables generating the su($2$) subalgebra of the su($2J+1$) algebra, based on the unitary equivalence class of the $(2J+1)$-dimensional matrix representations of the observables. The matrix representations of the observables and their representation spaces can be decomposed into the direct sums of the lower-dimensional irreducible representations of the su($2$) generators in Eq.~(\ref{eq:direct-sum}). This implies that if two sets of observables belong to the same unitary equivalence class, they can be decomposed into the same matrix representation in Eq.~(\ref{eq:direct-sum}), whose bases can be transformed to each other via an SU($2J+1$) transformation; hence they are characterized by the same subspins $\{J_l \}$ in Eq.~(\ref{eq:direct-sum}) giving the structure factor $f$ in Eq.~(\ref{eq:f}). The unitary equivalence class of the su($2$) subalgebra in the su($2J+1$) algebra can be found by choosing vertices in the Dynkin diagram of the su($2J+1$) algebra, as shown in Fig.~\ref{fig:dynkin-sun}.
The squeezing limits are determined by the dimensionality of the unitary equivalence class of the observables and the initial CSS involved in the squeezing. Taking the one-axis-twisted SSS as an example, we calculate the squeezing limit ${\xi}^2_{\mathrm{min}}$, which is given by the function in Eq.~(\ref{eq:sqparam})
in terms of the subspins $\{ J_l \}$ in the irreducible representations in Eq.~(\ref{eq:direct-sum}) and the coefficients $\{ |{\zeta}_l|^2 \}$ of the initial CSS in Eq.~(\ref{eq:CSS-J}).
When $|{\zeta}_l|^2 = {\delta}_{l1}$ in Eq.~(\ref{eq:CSS-J}), the squeezing limit ${\xi}^2_{\mathrm{min}}$ in Eq.~(\ref{eq:minsqparam-r1}) for the one-axis-twisted SSS is proportional to $(J_{l_0}N)^{-2/3}$ at the evolution time of $\mu \equiv 2 \chi f^2 t \propto (J_{l_0}N)^{-2/3}$ in the limit of $J_{l_0}N \chi f^2 t \gg 1$ and $J_{l_0}N (\chi f^2 t)^2 \ll 1$, which implies that the squeezing among observables whose matrix representations are irreducible gives the minimum squeezing limit of the collective su($2J+1$) system consisting of $N$ spin-$J$ particles.
In the case of $|{\zeta}_{l_0}|^2 < 1$ and $|{\zeta}_{l}|^2 \neq 0$ for some $l \neq l_0$, the analytical expressions of the squeezing limits in Eq.~(\ref{eq:sqparam}) cannot be easily obtained due to the interference between the representation spaces in Eq.~(\ref{eq:direct-sum}).
Finally, we apply our classification to the squeezing in the collective su($4$) systems and obtain the squeezing limits analytically or numerically. The squeezing can be classified into one of four unitary equivalence classes as shown in Fig.~\ref{fig:su4-roots}.
Their squeezing limits depend on the coefficients $\{ |{\zeta}_l|^2 \}$ in the initial coherent states in Eq.~(\ref{eq:CSS-J}) as well as the subspins $\{J_l \}$, whose behaviors were numerically calculated as shown in Figs.~\ref{fig:sq(ii)and(iv)}(a) and \ref{fig:sq(iii)}(a). Since the subspins and the initial coherent state reflect the structure of the unitary equivalence class of the spin and multipolar observables, the unitary equivalence class of the observables can be considered as a systematic way to classify and quantify the squeezing.
\ack E. Y. thanks Prof. Mark Everitt, Prof. Todd Tilma, Dr. Shane Dooley, Mr. Itsik Cohen, and Ms. Marvellous Onuma-Kalu for fruitful discussions. This work is supported by MEXT Grant-in-Aid for Scientific Research(S) No. 25220601.
\appendix \section{\label{a:0}Schwinger-boson approach to calculate expectation values for Eqs.~(\ref{eq:CSS-J}) and (\ref{eq:SSS-J})} The expectation values for the initial CSS in Eq.~(\ref{eq:CSS-J}) and for the one-axis-twisted SSS in Eq.~(\ref{eq:SSS-J}) can be simplified by the Schwinger boson approach.
The observables $\{ {\hat{O}}_{J,k} \}$ can be decomposed as in Eq.~(\ref{eq:direct-sum}), i.e., their matrix representations are the direct sums of the spin matrices $\{ {\tilde{\lambda}}_{J_l,k} \}$ for the spins $J_l$. For each of the $r$ subspaces, we can define the Schwinger-boson annihilation (creation) operator ${\hat{a}}_{l\pm}$ (${\hat{a}}_{l\pm}^{\dagger}$), which annihilates (creates) a boson in the mode `$l\pm$.' These operators satisfy \begin{equation}
[ {\hat{a}}_{ls}, {\hat{a}}_{l^{\prime}s^{\prime}} ] = 0, \
[ {\hat{a}}_{ls }, {\hat{a}}_{l^{\prime}s^{\prime}}^{\dagger} ] = {\delta}_{ll^{\prime}} {\delta}_{ss^{\prime}} \
(s,s^{\prime} = \pm) , \end{equation}
since the $r$ subspaces $V(\{|J_l, m_l {\rangle}_l \})$ ($l=1,\cdots ,r$) are orthogonal to each other.
The $l$-th symmetric state $|\theta , \phi {\rangle}_l^{\otimes n_l}$ in Eq.~(\ref{eq:CSS-J}) can be regarded as a CSS of the $2J_ln_l$ spin-$1/2$ Schwinger bosons of the mode $l$ whose polar and azimuthal angles are given by $\theta$ and $\phi$, respectively: \begin{eqnarray}
|\theta , \phi {\rangle}_l^{\otimes n_l}
& = \sum_{m=0}^{N_l} \sqrt{_{N_l}C_m} \ {\cos}^{N_l-m} \frac{\theta}{2} \ {\sin}^m \frac{\theta}{2} \ e^{-im\phi} \nonumber \\
&\times |n_{l+} = N_l - m, n_{l-} = m {\rangle}_{\mathrm{Sb}},
\label{eq:coh-Sb} \end{eqnarray}
where $N_l \equiv 2J_ln_l$ represents the number of the $l$-th Schwinger bosons, and $|n_{l+}, n_{l-} {\rangle}_{\mathrm{Sb}}$ is the symmetric state of the $n_{l+}$ Schwinger bosons in the `$l+$' state and the $n_{l-}$ Schwinger bosons in the `$l-$' state. The matrix representations ${\tilde{\lambda}}_{J_l,k}$ for the $l$-th subspace with $J_l \neq 0$ can be mapped to the collective spin operators ${\hat{\Lambda}}_{J_l,k}$: \begin{eqnarray}
&{\tilde{\lambda}}_{J_l,1} \to {\hat{\Lambda}}_{J_l,1} = \frac{1}{2} ({\hat{a}}_{l+}^{\dagger} {\hat{a}}_{l-} + {\hat{a}}_{l-}^{\dagger} {\hat{a}}_{l+} ), \label{eq:L1}\\
&{\tilde{\lambda}}_{J_l,2} \to {\hat{\Lambda}}_{J_l,2} = \frac{i}{2} (- {\hat{a}}_{l+}^{\dagger} {\hat{a}}_{l-} + {\hat{a}}_{l-}^{\dagger} {\hat{a}}_{l+} ), \label{eq:L2} \\
&{\tilde{\lambda}}_{J_l,3} \to {\hat{\Lambda}}_{J_l,3} = \frac{1}{2} ({\hat{a}}_{l+}^{\dagger} {\hat{a}}_{l+} - {\hat{a}}_{l-}^{\dagger} {\hat{a}}_{l-} ), \label{eq:L3} \end{eqnarray} with the constraint ${\hat{\Lambda}}_{J_l,1}^2 + {\hat{\Lambda}}_{J_l,2}^2 +{\hat{\Lambda}}_{J_l,3}^2 = J_ln_l (J_ln_l+1)$. For $J_l = 0$, we define ${\hat{\Lambda}}_{J_l,1} = {\hat{\Lambda}}_{J_l,2} = {\hat{\Lambda}}_{J_l,3} = 0$. The observables in Eqs.~(\ref{eq:O3}) and (\ref{eq:O1}) can be expressed in terms of the Schwinger-boson representations in Eqs.~(\ref{eq:L1})-(\ref{eq:L3}) as \begin{eqnarray}
& {\hat{O}}_{J,\perp} = f \bigoplus_{l=1}^r \Bigl [ {\hat{\Lambda}}_{J_l,1} \cos {\phi} \sin {\theta} + {\hat{\Lambda}}_{J_l,2} \sin {\phi} \sin {\theta} + {\hat{\Lambda}}_{J_l,3} \cos {\theta} \Bigr ], \\
& {\hat{O}}_{J,\nu} = f \bigoplus_{l=1}^r \Bigl [ {\hat{\Lambda}}_{J_l,1} (\cos {\phi} \cos {\theta} \cos {\nu} - \sin {\phi} \sin {\nu}) \nonumber \\
&+ {\hat{\Lambda}}_{J_l,2} (\sin {\phi} \cos {\theta} \cos {\nu} + \cos {\phi} \sin {\nu}) - {\hat{\Lambda}}_{J_l,3} \sin {\theta} \cos {\nu} \Bigr ]. \end{eqnarray} Thus, the expectation values in Eqs.~(\ref{eq:O3}) and (\ref{eq:O1}) for the CSS of Eq.~(\ref{eq:CSS-J}) can be obtained as Eqs.~(\ref{eq:exp-CSS}) and (\ref{eq:fluct-CSS}), respectively.
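The mapping in Eqs.~(\ref{eq:L1})-(\ref{eq:L3}) can be verified numerically on a truncated Fock space. The sketch below (with an arbitrary truncation dimension, restricted to the boson-number sectors on which the truncation is exact) checks the su($2$) commutation relation $[{\hat{\Lambda}}_{J_l,1}, {\hat{\Lambda}}_{J_l,2}] = i{\hat{\Lambda}}_{J_l,3}$ and the Casimir constraint on a fixed-number sector:

```python
import numpy as np

d = 8  # Fock-space truncation per mode (arbitrary illustrative choice)
a = np.diag(np.sqrt(np.arange(1.0, d)), k=1)  # bosonic annihilation operator
I = np.eye(d)
ap, am = np.kron(a, I), np.kron(I, a)  # modes 'l+' and 'l-'
L1 = 0.5 * (ap.conj().T @ am + am.conj().T @ ap)
L2 = 0.5j * (-ap.conj().T @ am + am.conj().T @ ap)
L3 = 0.5 * (ap.conj().T @ ap - am.conj().T @ am)

# total boson number of each basis state |n+, n->  (index = n+ * d + n-)
nums = np.add.outer(np.arange(d), np.arange(d)).ravel()
idx = np.where(nums <= d - 2)[0]  # sectors unaffected by the truncation

comm = L1 @ L2 - L2 @ L1
assert np.allclose(comm[np.ix_(idx, idx)], (1j * L3)[np.ix_(idx, idx)])

# Casimir constraint on the M-boson sector: L1^2 + L2^2 + L3^2 = (M/2)(M/2 + 1)
M = 3
sec = np.where(nums == M)[0]
cas = L1 @ L1 + L2 @ L2 + L3 @ L3
assert np.allclose(cas[np.ix_(sec, sec)], (M / 2) * (M / 2 + 1) * np.eye(len(sec)))
```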
Next, let us simplify the expectation values for the one-axis-twisted SSSs in Eq.~(\ref{eq:SSS-J}) in a manner similar to the case of the CSS.
The one-axis twisting in Eq.~(\ref{eq:OAT}) and $|\frac{\pi}{2} , 0 {\rangle}_l^{\otimes n_l}$ in the initial CSS can be respectively expressed as \begin{equation}
{\hat{H}}_{\mathrm{OAT}} = \hbar \chi f^2 { \left [ \bigoplus_{l=1}^r {\hat{\Lambda}}_{J_l,3} \right ]}^2
= \hbar \chi f^2 \bigoplus_{l=1}^r {\hat{\Lambda}}_{J_l,3}^2, \label{eq:OAT-Sb} \end{equation} and \begin{equation}
|\frac{\pi}{2} , 0 {\rangle}_l^{\otimes n_l}
= \frac{1}{2^{N_l/2}} \sum_{m=0}^{N_l} \sqrt{_{N_l}C_m} \ |n_{l+} = N_l - m, n_{l-} = m {\rangle}_{\mathrm{Sb}}.
\label{eq:kthcoh-Sb} \end{equation} The $l$-th one-axis twisting interaction $\hbar \chi f^2 {\hat{\Lambda}}_{J_l,3}^2$ in Eq.~(\ref{eq:OAT-Sb}) squeezes the $l$-th CSS in Eq.~(\ref{eq:kthcoh-Sb}). The one-axis-twisted SSS of the $l$-th Schwinger bosons at $\mu$ is given by \begin{eqnarray}
|{\psi}_{\mathrm{OAT}} (\frac{1}{2},N_l; \mu) {\rangle}_l &\equiv \frac{1}{2^{N_l/2}}
\sum_{m=0}^{N_l} \sqrt{_{N_l}C_m} \ e^{-im\phi} e^{-\frac{i}{8} ({\hat{a}}_{l+}^{\dagger} {\hat{a}}_{l+} - {\hat{a}}_{l-}^{\dagger} {\hat{a}}_{l-} )^2 \mu} \nonumber \\
&\times |n_{l+} = N_l - m, n_{l-} = m {\rangle}_{\mathrm{Sb}}. \label{eq:lthSSS} \end{eqnarray}
Here, we note that for an observable ${\hat{X}}_{J_l}$, two SSSs $|{\psi}_{\mathrm{OAT}} (\frac{1}{2},2J_ln_l; \mu ) {\rangle}_l$
and $|{\psi}_{\mathrm{OAT}} (\frac{1}{2},2J_ln_l^{\prime}; \mu ) {\rangle}_l$ in the $l$-th subspace satisfy \begin{equation}
\langle {\psi}_{\mathrm{OAT}} (\frac{1}{2},2J_ln_l^{\prime} ; \mu ) | {\hat{X}}_{J_l} |{\psi}_{\mathrm{OAT}} (\frac{1}{2},2J_ln_l; \mu ) {\rangle}_l
\propto {\delta}_{n_ln_l^{\prime}}, \label{eq:note} \end{equation} since the expectation value vanishes when the numbers of the Schwinger bosons in the two states are not equal, i.e., $n_l \neq n_l^{\prime}$. Then, the expectation value of ${\hat{O}}_{J,1}$ can be calculated to give Eq.~(\ref{eq:perp}). The one-axis twisting redistributes the quantum fluctuations in the $O_{J,2}$-$O_{J,3}$ plane as follows: \begin{equation}
{\hat{O}}_{J,\nu} = {\hat{O}}_{J,2} \cos {\nu} - {\hat{O}}_{J,3} \sin {\nu}
= f \bigoplus_{l=1}^r ( {\hat{\Lambda}}_{J_l,2} \cos {\nu} - {\hat{\Lambda}}_{J_l,3} \sin {\nu} ). \label{eq:obs} \end{equation}
The quantum fluctuation in ${\hat{O}}_{J,\nu}$ with respect to the state $| {\Psi}_{\mathrm{OAT}} (J,N;\mu ) {\rangle}_{\mathrm{tot}}$ in Eq.~(\ref{eq:SSS-J}) is obtained by \begin{eqnarray}
\langle (\Delta O_{J, \nu} )^2 \rangle (\mu ) &= \langle {\Psi}_{\mathrm{OAT}} (J,N;\mu ) | {\hat{O}}_{J,\nu}^2 | {\Psi}_{\mathrm{OAT}} (J,N;\mu )
{\rangle}_{\mathrm{tot}} \nonumber \\
&- \langle {\Psi}_{\mathrm{OAT}} (J,N;\mu ) | {\hat{O}}_{J,\nu} | {\Psi}_{\mathrm{OAT}} (J,N;\mu ) {\rangle}_{\mathrm{tot}}^2. \label{eq:fluct} \end{eqnarray} Here, the first term of the right-hand side of Eq.~(\ref{eq:fluct}) is given by \begin{eqnarray}
&\langle {\Psi}_{\mathrm{OAT}} (J,N;\mu ) | {\hat{O}}_{J,\nu}^2 | {\Psi}_{\mathrm{OAT}} (J,N;\mu ) {\rangle}_{\mathrm{tot}} \nonumber \\
&= f^2 \sum_{n_1=0}^N \sum_{n_2=0}^{N-n_1} \cdots \sum_{n_{r-1}=0}^{N-n_1-\cdots - n_{r-2}} \ _NC_{n_1} \ _{N-n_1}C_{n_2} \
\cdots \ _{N-n_1-\cdots - n_{r-2}}C_{n_{r-1}} \nonumber \\
&\times |{\zeta}_1|^{2n_1} |{\zeta}_2|^{2n_2} \cdots |{\zeta}_{r-1}|^{2n_{r-1}} |{\zeta}_r|^{2(N-n_1-\cdots - n_{r-1})} \nonumber \\
&\times \sum_{l:J_l\neq 0} \langle {\psi}_{\mathrm{OAT}} (\frac{1}{2},2J_ln_l;\mu ) | ({\hat{\Lambda}}_{J,2}\cos \nu - {\hat{\Lambda}}_{J,3}\sin \nu)^2
| {\psi}_{\mathrm{OAT}} (\frac{1}{2},2J_ln_l;\mu ) {\rangle}_l \nonumber \\
&= f^2 \sum_{l:J_l\neq 0} \sum_{n_l=0}^N \ _NC_{n_l} |{\zeta}_l|^{2n_l} (1-|{\zeta}_l|^2)^{N-n_l} \nonumber \\
&\times \langle {\psi}_{\mathrm{OAT}} (\frac{1}{2},2J_ln_l;\mu ) | ({\hat{\Lambda}}_{J,2}\cos \nu - {\hat{\Lambda}}_{J,3}\sin \nu)^2
| {\psi}_{\mathrm{OAT}} (\frac{1}{2},2J_ln_l;\mu ) {\rangle}_l, \label{eq:fluct-1} \end{eqnarray} where the first equality is derived from Eq.~(\ref{eq:note}) and the second equality is obtained by the symmetry with respect to the subspace index, $l$. Similarly to Eq.~(\ref{eq:fluct-1}), the second term in Eq.~(\ref{eq:fluct}) can be calculated as \begin{eqnarray}
&\langle {\Psi}_{\mathrm{OAT}} (J,N;\mu ) | {\hat{O}}_{J,\nu} | {\Psi}_{\mathrm{OAT}} (J,N;\mu ) {\rangle}_{\mathrm{tot}} \nonumber \\
&= f \sum_{l:J_l\neq 0} \sum_{n_l=0}^N \ _NC_{n_l} |{\zeta}_l|^{2n_l} (1-|{\zeta}_l|^2)^{N-n_l} \nonumber \\
&\times \langle {\psi}_{\mathrm{OAT}} (\frac{1}{2},2J_ln_l;\mu ) | ({\hat{\Lambda}}_{J,2}\cos \nu - {\hat{\Lambda}}_{J,3}\sin \nu)
| {\psi}_{\mathrm{OAT}} (\frac{1}{2},2J_ln_l;\mu ) {\rangle}_l \nonumber \\
&= 0, \label{eq:fluct-2} \end{eqnarray}
since $\langle {\psi}_{\mathrm{OAT}} (\frac{1}{2},2J_ln_l;\mu ) | {\hat{\Lambda}}_{J,k} | {\psi}_{\mathrm{OAT}} (\frac{1}{2},2J_ln_l;\mu ) {\rangle}_l = 0$ for $k=2,3$. Substituting Eq.~(\ref{eq:lthSSS}) into Eq.~(\ref{eq:fluct-1}), we can simplify $\langle (\Delta O_{J, \nu} )^2 \rangle (\mu )$ in Eq.~(\ref{eq:fluct}) as Eqs.~(\ref{eq:fluct-3})-(\ref{eq:fluct-4}).
\section{\label{a:1}Matrix representations of the single spin-3/2 operators} The matrix representations of the spin-vector components ${\tilde{J}}_{\mu}$ in Eq.~(\ref{eq:single-j}), the five independent components of the quadrupolar tensor, ${\tilde{Q}}_{\mu \nu}$, ${\tilde{D}}_{xy}$, and $\tilde{Y}$ in Eqs.~(\ref{eq:single-q})-(\ref{eq:single-y}), and the seven independent components of the octupolar tensor, ${\tilde{T}}^{\alpha}_{\mu}$, ${\tilde{T}}^{\beta}_{\mu}$, and ${\tilde{T}}_{xyz}$ in Eqs.~(\ref{eq:single-ta})-(\ref{eq:single-txyz}), are given by \begin{equation} \eqalign{
{\tilde{J}}_{x} = \frac{1}{2} \left ( \begin{array}{cccc} 0 & \sqrt{3} & 0 & 0 \\ \sqrt{3} & 0 & 2 & 0 \\
0 & 2 & 0 & \sqrt{3} \\ 0 & 0 & \sqrt{3} & 0 \end{array} \right ), \\
{\tilde{J}}_{y} = \frac{i}{2} \left ( \begin{array}{cccc} 0 & -\sqrt{3} & 0 & 0 \\ \sqrt{3} & 0 & -2 & 0 \\
0 & 2 & 0 & -\sqrt{3} \\ 0 & 0 & \sqrt{3} & 0 \end{array} \right ), \
{\tilde{J}}_{z} = \frac{1}{2} \left ( \begin{array}{cccc} 3 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\
0 & 0 & -1 & 0 \\ 0 & 0 &0 & -3 \end{array} \right ), \ } \label{eq:single-mj} \end{equation} \begin{equation} \eqalign{
{\tilde{Q}}_{xy} = \frac{i\sqrt{5}}{2} \left ( \begin{array}{cccc} 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -1 \\
1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{array} \right ) \
{\tilde{Q}}_{yz} = \frac{i\sqrt{5}}{2} \left ( \begin{array}{cccc} 0 & -1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 \\ 0 & 0 & -1 & 0 \end{array} \right ) \\
{\tilde{Q}}_{zx} = \frac{\sqrt{5}}{2} \left ( \begin{array}{cccc} 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\
0 & 0 & 0 & -1 \\ 0 & 0 & -1 & 0 \end{array} \right ) \
{\tilde{D}}_{xy} = \frac{\sqrt{5}}{2} \left ( \begin{array}{cccc} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\
1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{array} \right ) \\
\tilde{Y} = \frac{\sqrt{5}}{2} \left ( \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \\
0 & 0 & -1 & 0 \\ 0 & 0 & 0 & 1 \end{array} \right ) } \label{eq:single-mq} \end{equation} and \begin{equation} \eqalign{
{\tilde{T}}^{\alpha}_{x} = \frac{1}{4} \left ( \begin{array}{cccc} 0 & -\sqrt{3} & 0 & 5 \\
-\sqrt{3} & 0 & 3 & 0 \\ 0 & 3 & 0 & -\sqrt{3} \\ 5 & 0 & -\sqrt{3} & 0 \end{array} \right ), \\
{\tilde{T}}^{\alpha}_{y} = \frac{i}{4} \left ( \begin{array}{cccc} 0 & \sqrt{3} & 0 & 5 \\
-\sqrt{3} & 0 & -3 & 0 \\ 0 & 3 & 0 & \sqrt{3} \\ -5 & 0 & -\sqrt{3} & 0 \end{array} \right ), \
{\tilde{T}}^{\alpha}_{z} = \frac{1}{2} \left ( \begin{array}{cccc} 1 & 0 & 0 & 0 \\
0 & -3 & 0 & 0 \\ 0 & 0 & 3 & 0 \\ 0 & 0 & 0 & -1 \end{array} \right ), \\
{\tilde{T}}^{\beta}_{x} = \frac{\sqrt{5}}{4} \left ( \begin{array}{cccc} 0 & -1 & 0 & -\sqrt{3} \\
-1 & 0 & \sqrt{3} & 0 \\ 0 & \sqrt{3} & 0 & -1 \\ -\sqrt{3} & 0 & -1 & 0 \end{array} \right ), \\
{\tilde{T}}^{\beta}_{y} = \frac{i\sqrt{5}}{4} \left ( \begin{array}{cccc} 0 & -1 & 0 & \sqrt{3} \\
1 & 0 & \sqrt{3} & 0 \\ 0 & -\sqrt{3} & 0 & -1 \\ -\sqrt{3} & 0 & 1 & 0 \end{array} \right ), \\
{\tilde{T}}^{\beta}_{z} = \frac{\sqrt{5}}{2} \left ( \begin{array}{cccc} 0 &0 & 1 & 0 \\
0 & 0 & 0 & -1 \\ 1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \end{array} \right ), \
{\tilde{T}}_{xyz} = \frac{i\sqrt{5}}{2} \left ( \begin{array}{cccc} 0 & 0 & -1 & 0 \\
0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \end{array} \right ). \ } \label{eq:single-mt} \end{equation}
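These matrices can be checked mechanically. The following short numerical verification (a sanity check, not part of the derivation) confirms the su($2$) commutation relations and the Casimir value $J(J+1)=15/4$ for the spin components:

```python
import numpy as np

s3 = np.sqrt(3.0)
# spin-3/2 matrices from Eq. (single-mj)
Jx = 0.5 * np.array([[0, s3, 0, 0], [s3, 0, 2, 0], [0, 2, 0, s3], [0, 0, s3, 0]])
Jy = 0.5j * np.array([[0, -s3, 0, 0], [s3, 0, -2, 0], [0, 2, 0, -s3], [0, 0, s3, 0]])
Jz = 0.5 * np.diag([3.0, 1.0, -1.0, -3.0])

# su(2) commutation relations [Jx, Jy] = i Jz (and cyclic permutations)
assert np.allclose(Jx @ Jy - Jy @ Jx, 1j * Jz)
assert np.allclose(Jy @ Jz - Jz @ Jy, 1j * Jx)
assert np.allclose(Jz @ Jx - Jx @ Jz, 1j * Jy)

# Casimir: Jx^2 + Jy^2 + Jz^2 = J(J+1) I with J = 3/2
assert np.allclose(Jx @ Jx + Jy @ Jy + Jz @ Jz, (15 / 4) * np.eye(4))
```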
\section{\label{a:2}Root diagram and simple roots of the su($4$) algebra} First, we chose ${\tilde{\lambda}}_{\frac{3}{2},3}$, ${\tilde{\lambda}}_{\frac{3}{2},8}$, and ${\tilde{\lambda}}_{\frac{3}{2},11}$ as the Cartan subalgebra and obtain their adjoint representations $(\mathrm{ad} [{\tilde{\lambda}}_{\frac{3}{2},k_{\mathrm{C}}}])_{mn} \equiv f_{k_{\mathrm{C}}m}^n$ ($k_{\mathrm{C}}=3,8,11$ and $m,n\neq 3,8,11$), where the structure constant $f_{k_{\mathrm{C}}m}^n$ is defined by $[{\tilde{\lambda}}_{k_{\mathrm{C}}}, {\tilde{\lambda}}_m] = i\sum_n f_{k_{\mathrm{C}}m}^n {\tilde{\lambda}}_n$. Here, the adjoint representations of ${\tilde{\lambda}}_{\frac{3}{2},k_{\mathrm{C}}}$ can be simultaneously diagonalized; hence they have the same eigenvectors ${\tilde{A}}_{\frac{3}{2},k} = \sum_{k\neq 3,8,11} c_k {\tilde{\lambda}}_{\frac{3}{2},k}$ satisfying $[{\tilde{\lambda}}_{\frac{3}{2},k_{\mathrm{C}}}, {\tilde{A}}_{\frac{3}{2},k}] = {\mu}_{k_{\mathrm{C}}k} {\tilde{A}}_{\frac{3}{2},k}$, where ${\mu}_{k_{\mathrm{C}}k}$ are the eigenvalues of $\mathrm{ad} [{\tilde{\lambda}}_{\frac{3}{2},k_{\mathrm{C}}}]$ corresponding to the eigenvectors ${\tilde{A}}_{\frac{3}{2},k}$. Then, we obtain twelve sets of eigenvalues ${\alpha}_k \equiv ({\mu}_{3k}, {\mu}_{8k}, {\mu}_{11k})$, i.e., the roots, and the eigenvectors ${\tilde{A}}_{\frac{3}{2},k}$ ($k=1,\cdots 12$) corresponding to the roots. Plotting these roots in the Cartesian coordinate, we obtain the root diagram of the su($4$) algebra in Fig.~\ref{fig:su4-roots} (a). Here, the roots and their corresponding operators are given by \begin{eqnarray*}
{\balpha}_1 = \left ( \begin{array}{c} 1 \\ \sqrt{5} \\ 2 \end{array} \right ), \ {\tilde{A}}_{\frac{3}{2},1}
= \frac{\sqrt{15}}{10} {\tilde{J}}_+ + \frac{1}{2} {\tilde{Q}}_+ - \frac{\sqrt{15}}{20} {\tilde{T}}^{\alpha}_+ - \frac{1}{4} {\tilde{T}}^{\beta}_-
= \sqrt{5} E_{12}, \\
{\balpha}_2 = \left ( \begin{array}{c} 2 \\ \sqrt{5} \\ -1 \end{array} \right ), \ {\tilde{A}}_{\frac{3}{2},2}
= \frac{1}{2} {\tilde{D}}_+ + \frac{1}{2} {\tilde{F}}_+ = \sqrt{5} E_{13}, \\
{\balpha}_3 = \left ( \begin{array}{c} 3 \\ 0 \\ 1 \end{array} \right ), \ {\tilde{A}}_{\frac{3}{2},3}
= \frac{\sqrt{5}}{4} {\tilde{T}}^{\alpha}_- - \frac{\sqrt{3}}{4} {\tilde{T}}^{\beta}_+ = \sqrt{5} E_{14}, \\
{\balpha}_4 = \left ( \begin{array}{c} 1 \\ 0 \\ -3 \end{array} \right ), \ {\tilde{A}}_{\frac{3}{2},4}
= \frac{1}{\sqrt{5}} {\tilde{J}}_+ + \frac{3}{4\sqrt{5}} {\tilde{T}}^{\alpha}_+ + \frac{\sqrt{3}}{4} {\tilde{T}}^{\beta}_-
= \sqrt{5} E_{23}, \\
{\balpha}_5 = \left ( \begin{array}{c} 2 \\ -\sqrt{5} \\ -1 \end{array} \right ), \ {\tilde{A}}_{\frac{3}{2},5}
= \frac{1}{2} {\tilde{D}}_+ - \frac{1}{2} {\tilde{F}}_+ = \sqrt{5} E_{24}, \\
{\balpha}_6 = \left ( \begin{array}{c} 1 \\ -\sqrt{5} \\ 2 \end{array} \right ), \ {\tilde{A}}_{\frac{3}{2},6}
= \frac{\sqrt{15}}{10} {\tilde{J}}_+ - \frac{1}{2} {\tilde{Q}}_- - \frac{\sqrt{15}}{20} {\tilde{T}}^{\alpha}_+ - \frac{1}{4} {\tilde{T}}^{\beta}_-
= \sqrt{5} E_{34}, \\
{\balpha}_{6+k} = -{\balpha}_k, \ {\tilde{A}}_{\frac{3}{2},6+k} = {\tilde{A}}_{\frac{3}{2},k}^{\dagger} \ (k=1,\cdots ,6), \end{eqnarray*} where $E_{mn}$ denotes the matrix with $1$ in the $(m,n)$ entry and $0$s elsewhere, and the ladder operators are defined by ${\tilde{J}}_{\pm} \equiv {\tilde{J}}_x \pm i{\tilde{J}}_y$, ${\tilde{Q}}_{\pm} \equiv {\tilde{Q}}_{zx} \pm i{\tilde{Q}}_{yz}$, ${\tilde{D}}_{\pm} \equiv {\tilde{D}}_{xy} \pm i {\tilde{Q}}_{xy}$, ${\tilde{T}}^{\alpha}_{\pm} \equiv {\tilde{T}}^{\alpha}_x \pm i{\tilde{T}}^{\alpha}_y$, ${\tilde{T}}^{\beta}_{\pm} \equiv {\tilde{T}}^{\beta}_x \pm i{\tilde{T}}^{\beta}_y$, and ${\tilde{F}}_{\pm} \equiv {\tilde{T}}^{\beta}_z \pm i{\tilde{T}}_{xyz}$.
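As a consistency check between this appendix and Appendix B, one can verify numerically that, e.g., ${\tilde{A}}_{\frac{3}{2},3} = \frac{\sqrt{5}}{4} {\tilde{T}}^{\alpha}_- - \frac{\sqrt{3}}{4} {\tilde{T}}^{\beta}_+$ indeed reduces to $\sqrt{5} E_{14}$ when the explicit octupolar matrices are inserted; a sketch:

```python
import numpy as np

s3, s5 = np.sqrt(3.0), np.sqrt(5.0)
# octupolar matrices from Appendix B
Tax = 0.25 * np.array([[0, -s3, 0, 5], [-s3, 0, 3, 0], [0, 3, 0, -s3], [5, 0, -s3, 0]])
Tay = 0.25j * np.array([[0, s3, 0, 5], [-s3, 0, -3, 0], [0, 3, 0, s3], [-5, 0, -s3, 0]])
Tbx = (s5 / 4) * np.array([[0, -1, 0, -s3], [-1, 0, s3, 0], [0, s3, 0, -1], [-s3, 0, -1, 0]])
Tby = (1j * s5 / 4) * np.array([[0, -1, 0, s3], [1, 0, s3, 0], [0, -s3, 0, -1], [-s3, 0, 1, 0]])

# ladder combinations and the root operator A_{3/2,3}
Ta_minus = Tax - 1j * Tay
Tb_plus = Tbx + 1j * Tby
A3 = (s5 / 4) * Ta_minus - (s3 / 4) * Tb_plus

E14 = np.zeros((4, 4))
E14[0, 3] = 1.0
assert np.allclose(A3, s5 * E14)  # A_{3/2,3} = sqrt(5) E_14
```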
\section*{References}
\end{document}
\begin{document}
\vfuzz2pt \hfuzz2pt
\newtheorem{thm}{Theorem}[section] \newtheorem{model}{Model} \newtheorem{pro}{Problem}[section]
\newtheorem{cor}[thm]{Corollary} \newtheorem{lem}[thm]{Lemma} \newtheorem{prop}[thm]{Proposition} \theoremstyle{definition} \newtheorem{defn}[thm]{Definition} \theoremstyle{remark} \newtheorem{rem}[thm]{Remark} \numberwithin{equation}{section} \newtheorem{col}{Conclusion}
\baselineskip 17pt
\title[]{Self-similar solutions of the spherically symmetric Euler equations for general equations of state} \author[]{Jianjun Chen$^\dag$ and Geng Lai$^\ddag$} \address{} \email{}
\thanks{$^\ddag$Corresponding author. E-mail: mathchenjianjun@163.com(Chen), laigeng@shu.edu.cn(Lai)} \subjclass{} \keywords{}
\dedicatory{$^{\dag}$Department of Mathematics, Zhejiang University of Science and Technology, Hangzhou, 310023, P.R. China\\ $^{\ddag}$Department of Mathematics, Shanghai University, Shanghai, 200444, P.R. China}
\begin{abstract} The study of spherically symmetric motion is important for the theory of explosion waves. In this paper, we rigorously construct self-similar solutions to the Riemann problem of the spherically symmetric Euler equations for general equations of state. We use the assumption of self-similarity to reduce the spherically symmetric Euler equations to a system of nonlinear ordinary differential equations, from which we obtain detailed structures of solutions in addition to their existence.
\ \vskip 0pt \noindent {\sc Keywords.} Compressible Euler equations, van der Waals gas, spherical symmetry, self-similar solution. \ \vskip 4pt \noindent {\sc 2010 AMS subject classification.} Primary: 35L65; Secondary: 35L60, 35L67. \end{abstract}
\maketitle
\section{\bf Introduction}
The 3D isentropic Euler equations have the form \begin{equation}\label{3d} \left\{
\begin{aligned}
&\rho_t+(\rho u_1)_{x_1}+(\rho u_2)_{x_2}+(\rho u_3)_{x_3}=0, \\
&(\rho u_1)_t+(\rho u_1^2+p)_{x_1}+(\rho u_1u_2)_{x_2}+(\rho u_1 u_3)_{x_3}=0, \\
&(\rho u_2)_t+(\rho u_1 u_2)_{x_1}+(\rho u_2^2+p)_{x_2}+(\rho u_2 u_3)_{x_3}=0,\\ &(\rho u_3)_t+(\rho u_1 u_3)_{x_1}+(\rho u_2 u_3)_{x_2}+(\rho u_3^2+p)_{x_3}=0,
\end{aligned} \right. \end{equation} where $\rho$ is the density, $(u_1, u_2, u_3)$ is the velocity, and $p=p(\rho)$ is the pressure.
The global existence of solutions to the Cauchy problem for system (\ref{3d}) is still a difficult open problem. Thus it is worthwhile to consider some special problems. In this paper, we consider system (\ref{3d}) with the Riemann initial data \begin{equation}\label{3db} \big(\rho, u_1, u_2, u_3\big)(0, x_1, x_2, x_3)~=~\big(\rho_0, u_0\sin\varphi\cos\theta, u_0\sin\varphi\sin\theta, u_0\cos\varphi\big), \end{equation} where $(x_1, x_2, x_3)=(r\sin\varphi\cos\theta, r\sin\varphi\sin\theta, r\cos\varphi)$, $r>0$ is the radial variable, $\varphi\in [0, \pi]$, $\theta\in [0, 2\pi)$, and $\rho_0$ and $u_0$ are two constants.
The problem (\ref{3d}), (\ref{3db}) allows us to look for spherically symmetric solutions, i.e., $$\rho=\rho(t, r),\quad u_1=u(r, t)\sin\varphi\cos\theta, \quad u_2=u(r, t)\sin\varphi\sin\theta,\quad u_3=u(r, t)\cos\varphi.$$ Writing $x=r$ for the radial variable, we can then reduce system (\ref{3d}) to \begin{equation} \left\{
\begin{aligned}
&\rho_t+(\rho u)_x+\frac{2\rho u}{x}=0,\\ &(\rho u)_t +(\rho u^2+p)_x+\frac{2\rho u^2}{x} =0.
\end{aligned}
\right. \label{AE} \end{equation} Then the problem (\ref{3d}), (\ref{3db}) can be reduced to a Riemann initial-boundary value problem for (\ref{AE}) with the initial and boundary conditions \begin{equation}\label{IBV} (u, \rho)(x, 0)=(u_0, \rho_0), \quad (\rho u)(0, t)=0. \end{equation} The problem (\ref{AE}), (\ref{IBV}) allows us to look for self-similar solutions that depend only on the self-similar variable $\xi=x/t$.
The self-similar solutions of (\ref{AE}) were first studied by Guderley, Taylor, et al.; see \cite{CF} and the survey paper \cite{Jenssen}. Taylor \cite{Ta} used the assumption of self-similarity to reduce the spherically symmetric Euler equations for polytropic gases to a system of nonlinear autonomous ordinary differential equations and solved the ``spherical piston" problem. Zhang and Zheng \cite{Zheng} constructed several 2D self-similar radially symmetric solutions with swirl for polytropic gases. Hu \cite{Hu} constructed 2D self-similar axisymmetric solutions of the Euler equations for a two-constant equation of state. For more general existence results on weak solutions of (\ref{AE}), we refer the reader to \cite{CJ, ChenG, CGM, DL, LW, MMU1, MMU2}.
\begin{figure}
\caption{\footnotesize Equations of state.}
\label{Fignew1}
\end{figure}
In this paper, we study the problem (\ref{AE}), (\ref{IBV}) for the following three types of equations of state: \begin{description}
\item[I] $p'(\tau)<0$ and $p''(\tau)>0$ for $\tau>0$; see Figure \ref{Fignew1}(I).
\item[II] $p'(\tau)<0$ for $\tau>0$; $p''(\tau)>0$ for $\tau\in (0, \tau_1^i)\cup (\tau_2^i, +\infty)$; $p''(\tau)<0$ for $\tau\in (\tau_1^i, \tau_2^i)$; see Figure \ref{Fignew1}(II).
\item[III] $p'(\tau)<0$ and $p''(\tau)>0$ for $\tau\in (0, \tilde{\tau}_1)\cup (\tilde{\tau}_2, +\infty)$; $p(\tau)$ is constant for $\tau\in[\tilde{\tau}_1, \tilde{\tau}_2]$; see Figure \ref{Fignew1}(III). \end{description} Here, $\tau=1/\rho$ is the specific volume. All three types can be realized, for instance, by the van der Waals equation of state $ p=\frac{A}{(\tau-1)^{\gamma}}-\frac{1}{\tau^2}, $ where $A$ is a constant corresponding to the entropy and $\gamma$ is a constant between $1$ and $5/3$. The third type of equation of state may be seen as a van der Waals equation of state complemented with Maxwell's equal areas law and may be used as a simple model of phase transition; see \cite{GN, MP} and the references cited therein.
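For concreteness, the classification can be checked numerically for a given choice of constants. The sketch below uses the illustrative values $A=1$, $\gamma=1.4$ (assumed values, not tied to any particular fluid) and verifies that this choice yields a type I pressure, i.e., $p'(\tau)<0$ and $p''(\tau)>0$ on a sampled grid:

```python
import numpy as np

# van der Waals-type pressure p(tau) = A/(tau-1)**gamma - 1/tau**2
# with illustrative constants (assumed values, not from the paper)
A, gamma = 1.0, 1.4
tau = np.linspace(1.2, 50.0, 4000)
dp = -A * gamma / (tau - 1.0) ** (gamma + 1.0) + 2.0 / tau**3
d2p = A * gamma * (gamma + 1.0) / (tau - 1.0) ** (gamma + 2.0) - 6.0 / tau**4

# for these constants the pressure is of type I on the sampled range
assert np.all(dp < 0.0) and np.all(d2p > 0.0)
```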
\begin{rem} For equation of state II, there exist $\hat{\tau}_1<\tau_1^i<\tau_2^i<\hat{\tau}_2$ such that $$ \frac{p(\hat{\tau}_{1})-p(\hat{\tau}_2)}{\hat{\tau}_{1}-\hat{\tau}_2}=p'(\hat{\tau}_{1})=p'(\hat{\tau}_2). $$ \end{rem}
We make the following assumptions about these equations of state: \begin{description}
\item[(A1)] There exists a $\nu>0$ such that $\lim\limits_{\rho\rightarrow 0}\frac{p'(\rho)}{\rho^{\nu}}=0$.
\item[(A2)] For equation of state III, we assume $\lim\limits_{\tau\rightarrow \tilde{\tau}_1^{-}}p'(\tau)<p'(\tau_c)$, where $\tau_c>\tilde{\tau}_2$ is determined by
$\frac{p(\tau_{c})-p(\tilde{\tau}_1)}{\tau_{c}-\tilde{\tau}_1}=p'(\tau_{c})$. \end{description}
The main result of the paper can be stated as follows. \begin{thm} For equations of state I--III, the Riemann initial-boundary value problem (\ref{AE}), (\ref{IBV}) has a solution for any data $(u_0, \rho_0)$. \end{thm}
We use the assumption of self-similarity to reduce the spherically symmetric Euler equations (\ref{AE}) to a system of nonlinear ordinary differential equations, from which we obtain detailed structures of solutions of (\ref{AE}), (\ref{IBV}) in addition to their existence. There are many differences between our results and the previous results for polytropic gases. First, for general equations of state, system (\ref{AE}) cannot be reduced by the self-similar transformation to an autonomous system of ordinary differential equations, so the methods in \cite{CF,Zheng1} cannot be used here. Second, the solution of (\ref{AE}), (\ref{IBV}) for polytropic gases is continuous for $u_0>0$, whereas the solution for nonconvex equations of state may be discontinuous for $u_0>0$. Third, the solution of (\ref{AE}), (\ref{IBV}) for polytropic gases contains only one shock for $u_0<0$, whereas the solution for nonconvex equations of state may contain two or even more shocks for $u_0<0$.
\section{\bf Preliminaries}
\subsection{Ordinary equations}
By self-similar transformation, system (\ref{AE}) can be written as $$ \left\{
\begin{aligned} &-\xi\frac{{\rm d} \rho}{{\rm d} \xi}+\frac{{\rm d} (\rho u) }{{\rm d} \xi}+\frac{2\rho u}{\xi}=0,\\
&-\xi\frac{{\rm d} u}{{\rm d} \xi}+ u\frac{{\rm d} u}{{\rm d} \xi}+\frac{1}{\rho}\frac{{\rm d} p}{{\rm d} \xi}=0.
\end{aligned} \right. $$ Hence, \begin{equation}\label{ODE1} \left\{
\begin{aligned}
&\frac{{\rm d} u}{{\rm d} \xi}=-\frac{2p'(\rho) u}{\xi
\big[p'(\rho)-(u-\xi)^2\big]}, \\
&\frac{{\rm d} \rho}{{\rm d} \xi}=\frac{2\rho u(u-\xi)}{\xi
\big[p'(\rho)-(u-\xi)^2\big]}, \\
\end{aligned} \right. \end{equation}
Let $s=1/\xi$. Then, system (\ref{ODE1}) can be changed into \begin{equation}\label{ODE2} \left\{
\begin{aligned}
&\frac{{\rm d} u}{{\rm d} s}=\frac{2p'(\rho) us}{s^2 p'(\rho)-(1-us)^2}, \\
&\frac{{\rm d} \rho}{{\rm d} s}=\frac{2\rho u(1-us)}{s^2 p'(\rho)-(1-us)^2}.
\end{aligned} \right. \end{equation} The initial condition $(u, \rho)(x, 0)=(u_0, \rho_0)$ can be changed into \begin{equation}\label{ID1} (u, \rho)\mid_{s=0}~=~(u_0, \rho_0). \end{equation} The initial value problem (\ref{ODE2}), (\ref{ID1}) is a classically well-posed problem which has a unique local solution for any $(u_0, \rho_0)$. Throughout the paper, we denote by $(u_1, \rho_1)(s)$ the (local) classical solution of the initial value problem (\ref{ODE2}), (\ref{ID1}).
In view of the denominators of the right parts of (\ref{ODE2}), we define \begin{equation}\label{h} h(\rho_1(s), s):=~\frac{1}{s}-\sqrt{p'\big(\rho_1(s)\big)}. \end{equation} Then we have the following properties: \begin{itemize}
\item if $u_1(s)<h(\rho_1(s), s)$ then $s^2 p'(\rho_1)-(1-u_1s)^2<0$;
\item if $u_1(s)=h(\rho_1(s), s)$ then $s^2 p'(\rho_1)-(1-u_1s)^2=0$;
\item if $h(\rho_1(s), s)<u_1(s)<\frac{1}{s}+\sqrt{p'\big(\rho_1(s)\big)}$ then $s^2 p'(\rho_1)-(1-u_1s)^2>0$. \end{itemize}
\subsection{Shock waves} It is known that a weak solution $(u, \rho)$ to (\ref{AE}) satisfies the Rankine-Hugoniot condition across any discontinuity at $(x, t)$: \begin{equation} \frac{\rho_1 u_1-\rho_2 u_2}{\rho_1-\rho_2}~=~\frac{\rho_1 u_1^2+p_1-\rho_2 u_2^2-p_2}{\rho_1 u_1-\rho_2 u_2}~=~\sigma, \end{equation} where $(u_1, \rho_1)=(u, \rho)(x+0, t)$, $(u_2, \rho_2)=(u, \rho)(x-0, t)$, and $\sigma$ is the speed of the discontinuity. For any $(u_*, \rho_*)$, we let the shock set through $(u_*, \rho_*)$ be the set of points $(u, \rho)$ satisfying the Rankine-Hugoniot condition $$ \frac{\rho_* u_*-\rho u}{\rho_*-\rho}~=~\frac{\rho_* u_*^2+p_*-\rho u^2-p}{\rho_* u_*-\rho u}~=~\sigma(u_*, \rho_*; u, \rho). $$ We need to use the entropy condition (E) given by Liu \cite{Liu}. \begin{defn} A discontinuity between two states $(u_1, \rho_1)$ and $(u_2, \rho_2)$ satisfies the entropy condition (E) if \begin{equation}\label{EEntropy} \sigma(u_1, \rho_1; u_2, \rho_2)\geq \sigma(u_1, \rho_1; u, \rho) \end{equation} for all $(u, \rho)$ on the shock set through $(u_1, \rho_1)$ between $(u_1, \rho_1)$ and $(u_2, \rho_2)$. A shock which satisfies the entropy condition (E) will be called an admissible shock. \end{defn}
In this paper, we are only concerned with forward shock waves. So, we give a geometric interpretation of entropy condition (E) for forward shock waves. \begin{lem}\label{lem21} A forward shock between two states $(u_1, \tau_1)$ and $(u_2, \tau_2)$ satisfies the entropy condition (E) if and only if \begin{equation} \sqrt{-\frac{p_2-p_1}{\tau_2-\tau_1}}~\geq~ \sqrt{-\frac{p-p_1}{\tau-\tau_1}} \end{equation} for all $\tau\in\big(\min\{\tau_1, \tau_2\}, \max\{\tau_1, \tau_2\}\big)$. Here, ``1" denotes the fluid in front of the shock, ``2" denotes the fluid behind the shock.
\end{lem} \begin{proof} From the Rankine-Hugoniot conditions for forward shock waves we have \begin{equation}\label{RH} \left\{
\begin{array}{ll}
\rho_1(u_1-\sigma)=\rho_2(u_2-\sigma)<0, \\[4pt]
\rho_1(u_1-\sigma)^2+p_1=\rho_2(u_2-\sigma)^2+p_2.
\end{array} \right. \end{equation}
From (\ref{RH}) we get $$ \frac{\rho_2^2(u_2-\sigma)^2}{\rho_1}+p_1=\rho_2(u_2-\sigma)^2+p_2, $$ and consequently $$ (u_2-\sigma)^2=\frac{p_2-p_1}{\rho_2-\rho_1}\cdot\frac{\rho_1}{\rho_2}=-\tau_2^2\frac{p_2-p_1}{\tau_2-\tau_1}. $$ Thus, we have \begin{equation}\label{1} \sigma(u_1, \rho_1; u_2, \rho_2)=u_2+\tau_2\sqrt{-\frac{p_2-p_1}{\tau_2-\tau_1}}. \end{equation}
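The specific-volume form in the last step is just the substitution $\rho_i=1/\tau_i$:

```latex
\[
\frac{p_2-p_1}{\rho_2-\rho_1}\cdot\frac{\rho_1}{\rho_2}
  =\frac{p_2-p_1}{\frac{1}{\tau_2}-\frac{1}{\tau_1}}\cdot\frac{\tau_2}{\tau_1}
  =\frac{(p_2-p_1)\,\tau_1\tau_2}{\tau_1-\tau_2}\cdot\frac{\tau_2}{\tau_1}
  =-\tau_2^{2}\,\frac{p_2-p_1}{\tau_2-\tau_1}.
\]
```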
Similarly, we have \begin{equation}\label{2} \sigma(u_1, \rho_1; u_2, \rho_2)=u_1+\tau_1\sqrt{-\frac{p_2-p_1}{\tau_2-\tau_1}}. \end{equation} Thus, for all $(u, \rho)$ on the forward shock set through $(u_1, \rho_1)$ we have \begin{equation}\label{102503} \sigma(u_1, \rho_1; u, \rho)=u_1+\tau_1\sqrt{-\frac{p-p_1}{\tau-\tau_1}}. \end{equation} Then by (\ref{EEntropy}) we get this lemma. \end{proof}
We define $$ \phi(\tau; u_1, \tau_1):=u_1+(\tau_1-\tau)\sqrt{-\frac{p-p_1}{\tau-\tau_1}}. $$ Then by (\ref{1}), (\ref{2}), and Lemma \ref{lem21}, we have the following corollaries about forward admissible shocks. \begin{cor}\label{cor1} For equation of state I, the set $\mathcal{S}_{c}$ of the states which can be connected to $(u_1, \tau_1)$ by a forward admissible compression shock on the left is given by: $$ \mathcal{S}_{c}=\{(u, \tau)\mid u=\phi(\tau; u_1, \tau_1), \tau<\tau_1\}. $$ \end{cor}
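As a quick numerical sanity check of the parametrization (\ref{1})--(\ref{2}) and of $\phi$ (with an illustrative pressure law $p(\tau)=\tau^{-2}$ and states chosen by hand; none of these numbers come from the text), the following sketch verifies the Rankine--Hugoniot conditions (\ref{RH}) across the resulting discontinuity:

```python
import math

# Hypothetical pressure law used only for this check: p(tau) = tau^{-2}.
p = lambda tau: tau ** -2.0

u1, tau1 = 0.0, 1.0        # front state, chosen by hand
tau2 = 0.5                 # compressed back state, tau2 < tau1

# Chord slope K = -(p2 - p1)/(tau2 - tau1) > 0 for a forward shock.
K = -(p(tau2) - p(tau1)) / (tau2 - tau1)
u2 = u1 + (tau1 - tau2) * math.sqrt(K)   # u2 = phi(tau2; u1, tau1)
sigma = u1 + tau1 * math.sqrt(K)         # shock speed, formula (2)

rho1, rho2 = 1.0 / tau1, 1.0 / tau2
mass1, mass2 = rho1 * (u1 - sigma), rho2 * (u2 - sigma)
mom1 = rho1 * (u1 - sigma) ** 2 + p(tau1)
mom2 = rho2 * (u2 - sigma) ** 2 + p(tau2)

assert abs(mass1 - mass2) < 1e-12 and mass1 < 0   # mass flux matches, forward
assert abs(mom1 - mom2) < 1e-12                   # momentum balance matches
assert abs(sigma - (u2 + tau2 * math.sqrt(K))) < 1e-12   # agrees with (1)
```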
\begin{figure}
\caption{\footnotesize Admissible shocks for equation of state II.}
\label{Figure2}
\end{figure}
\begin{cor}\label{cor2} For equation of state II, the set $\mathcal{S}_{c}$ of the states which can be connected to $(u_1, \tau_1)$ by a forward admissible compression shock on the left is given by: \begin{itemize}
\item If $\tau_1\in (0, \tau_1^i]\cup (\tau_3, +\infty)$, then $\mathcal{S}_{c}=\{(u, \tau)\mid u=\phi(\tau; u_1, \tau_1), \tau<\tau_1\}$, where $\tau_3$ is determined by $$\frac{p(\tau_{3})-p(\tau_{1}^{i})}{\tau_{3}-\tau_{1}^{i}}=p'(\tau_1^{i});$$ see Figure \ref{Figure2}(1--2).
\item If $\tau_1\in (\tau_1^i, \tau_2^i]$, then $\mathcal{S}_{c}=\{(u, \tau)\mid u=\phi(\tau, u_1, \tau_1), \tau<\tau_{1a}\}$, where $\tau_{1a}$ is determined by $$\frac{p(\tau_{1a})-p(\tau_1)}{\tau_{1a}-\tau_1}=p'(\tau_1)\quad \mbox{and}\quad \tau_{1a}<\tau_1;$$ see Figure \ref{Figure2}(3).
\item If $\tau_1\in (\tau_2^i, \tau_3)$, then $\mathcal{S}_{c}=\{(u, \tau)\mid u=\phi(\tau; u_1, \tau_1), \tau<\tau_{1b}~\mbox{and}~\tau_{1c}<\tau<\tau_1\}$, where $\tau_{1b}$ and $\tau_{1c}$ are determined by $$\frac{p(\tau_{1b})-p(\tau_1)}{\tau_{1b}-\tau_1}=\frac{p(\tau_{1c})-p(\tau_1)}{\tau_{1c}-\tau_1}=p'(\tau_{1c})\quad \mbox{and}\quad \tau_{1b}<\tau_1^{i}<\tau_{1c}<\tau_2^{i};$$ see Figure \ref{Figure2}(4). \end{itemize} \end{cor}
\begin{cor}\label{cor3} For equation of state II, the set $\mathcal{S}_{r}$ of the states which can be connected to $(u_1, \tau_1)$ by a forward admissible rarefaction shock on the left is given by: \begin{itemize}
\item If $\tau_1\in (\hat{\tau}_1, \tau_1^i]$, then $\mathcal{S}_{r}=\{(u, \tau)\mid u=\phi(\tau; u_1, \tau_1), \tau_{1d}<\tau<\tau_{1f}\}$, where $\tau_{1d}$ and $\tau_{1f}$ are determined by $$\frac{p(\tau_{1d})-p(\tau_1)}{\tau_{1d}-\tau_1}=p'(\tau_{1}), \quad \frac{p(\tau_{1f})-p(\tau_1)}{\tau_{1f}-\tau_1}=p'(\tau_{1f}), \quad \mbox{and}\quad \tau_1^{i}<\tau_{1d}<\tau_{1f};$$ see Figure \ref{Figure2}(5).
\item If $\tau_1\in (\tau_1^i, \tau_2^i)$, then $\mathcal{S}_{r}=\{(u, \tau)\mid u=\phi(\tau; u_1, \tau_1), \tau_{1}<\tau<\tau_{1g}\}$, where $\tau_{1g}$ is determined by $$\frac{p(\tau_{1g})-p(\tau_1)}{\tau_{1g}-\tau_1}=p'(\tau_{1g})\quad \mbox{and}\quad \tau_{1g}>\tau_{2}^{i};$$ see Figure \ref{Figure2}(6). \end{itemize} \end{cor}
\begin{cor}\label{cor4} For equation of state III, the set $\mathcal{S}_{c}$ of the states which can be connected to $(u_1, \tau_1)$ by a forward admissible compression shock on the left is given by: \begin{itemize}
\item If $\tau_1\in (0, \tilde{\tau}_2]$, then $\mathcal{S}_{c}=\{(u, \tau)\mid u=\phi(\tau; u_1, \tau_1), \tau<\min\{\tilde{\tau}_1, \tau_{1}\}\}$; see Figure \ref{fignew}(1).
\item If $\tau_1\in (\tilde{\tau}_2, +\infty)$, then $\mathcal{S}_{c}=\{(u, \tau)\mid u=\phi(\tau; u_1, \tau_1), 0<\tau<\tau_{1h}~\mbox{and}~\tilde{\tau}_{2}<\tau<\tau_{1}\}$, where $\tau_{1h}$ is determined by $$\frac{p(\tau_{1h})-p(\tau_1)}{\tau_{1h}-\tau_1}=\frac{p(\tilde{\tau}_{2})-p(\tau_1)}{\tilde{\tau}_{2}-\tau_1};$$ see Figure \ref{fignew}(2). \end{itemize} \end{cor}
\begin{cor}\label{cor5} For equation of state III, if $\tau_1\in (\tilde{\tau}_1, \tilde{\tau}_2)$, then the set $\mathcal{S}_{r}$ of the states which can be connected to $(u_1, \tau_1)$ by a forward admissible rarefaction shock on the left is given by
$$\mathcal{S}_{r}=\{(u, \tau)\mid u=\phi(\tau; u_1, \tau_1), \tilde{\tau}_2<\tau<\tau_{1i}\}$$ where $\tau_{1i}$ is determined by $$\frac{p(\tau_{1i})-p(\tau_1)}{\tau_{1i}-\tau_1}=p'(\tau_{1i});$$ see Figure \ref{fignew}(3). \end{cor}
\begin{figure}
\caption{\footnotesize Admissible shocks for equation of state III.}
\label{fignew}
\end{figure}
Corollaries \ref{cor1}--\ref{cor5} are obvious, so we omit their proofs. The reader can also see
\cite{GN,LeFloch} for more details.
\section{\bf Self-similar solutions for $u_0>0$}
In this section, we construct the self-similar solutions of the problem (\ref{AE}), (\ref{IBV}) for $u_0>0$.
\subsection{Equation of state I} \begin{lem}\label{lem3.1} For any fixed $S>0$, if the initial value problem (\ref{ODE2}), (\ref{ID1}) has a solution $(u_1, \rho_1)(s)$ in $(0, S)$ and $s^2p'(\rho_1)-(1-u_1s)^2<0$ as $0<s<S$, then we have $$ u_1(s)>0,\quad \rho_1(s)>0, \quad \frac{{\rm d} u_1(s)}{{\rm d} s}<0,\quad\mbox{and}\quad \frac{{\rm d} \rho_1(s)}{{\rm d} s}<0\quad\mbox{as}\quad 0<s<S. $$ \end{lem} \begin{proof} Assume there exists a $0<s_*<S$ such that $u_1(s)>0$ as $0<s<s_*$ and $u_1(s_*)=0$. Then by (\ref{ODE2}) we have $$ \int_{u_0}^{0}\frac{1}{u} ~{\rm d}u=\int_{0}^{s_*}\frac{2p'(\rho_1(s)) s}{s^2 p'(\rho_1(s))-(1-u_1(s)s)^2}~{\rm d}s, $$ which leads to a contradiction, since the left-hand side diverges while the right-hand side is finite. Thus, we have $u_1(s)>0$ and $\frac{{\rm d} u_1(s)}{{\rm d} s}<0$ as $0<s<S$. Similarly, we can prove $\rho_1(s)>0$ as $0<s<S$. From $s^2p'(\rho_1)-(1-u_1s)^2<0$ and $u_1(s)>0$, we have $u_1(s)s<1$, and consequently $\frac{{\rm d} \rho_1(s)}{{\rm d} s}<0$ as $0<s<S$. We then complete the proof of this lemma. \end{proof}
\begin{lem}\label{lem1}
For any fixed $S>0$, if the initial value problem (\ref{ODE2}), (\ref{ID1}) has a solution $(u_1, \rho_1)(s)$ in $(0, S)$ and $\rho_1(s)>0$ and $h(\rho_1(s), s)>0$ as $0<s<S$, then $0< u_1(s)<h(\rho_1(s), s)$ as $0<s<S$. \end{lem} \begin{proof} It is obvious that $0<u_1(s)<h(\rho_1(s), s)$ as $s$ is sufficiently small. By a direct computation, we have \begin{equation}\label{100504} \frac{{\rm d} (u_1-h)}{{\rm d}s} ~=~\frac{2p'(\rho_1) u_1s}{s^2 p'(\rho_1)-(1-u_1s)^2}+\frac{1}{s^2}+\frac{2p''(\rho_1) \rho_1 u_1(1-u_1s)}{2\sqrt{p'(\rho_1)}\big(s^2 p'(\rho_1)-(1-u_1s)^2\big)}. \end{equation}
Suppose that $s_0\in(0, S)$ is the ``first" point such that $\rho_1(s_0)>0$, $h(\rho_1(s_0), s_0)>0$ and $u_1(s_0)=h(\rho_1(s_0), s_0)$. Then we have \begin{equation}\label{100505} \begin{aligned} &~p'(\rho_1) u_1s+\frac{p''(\rho_1) \rho_1 u_1(1-u_1s)}{2\sqrt{p'(\rho_1)}}\\=&~ \frac{u_1}{2\sqrt{-p'(\tau_1)}}\Big(-2\tau_1^2p'(\tau_1)s\sqrt{-p'(\tau_1)}+\tau_1^2p''(\tau_1)(1-u_1s)+2\tau_1 p'(\tau_1)(1-u_1s)\Big)\\=&~ \frac{u_1\tau_1^2p''(\tau_1)s\sqrt{p'(\rho_1)}}{2\sqrt{-p'(\tau_1)}}~>~0 \end{aligned} \end{equation} as $s=s_0$, where $\tau_1=1/\rho_1$. Here, we use $p'(\rho_1)=-\tau_1^2p'(\tau_1)$ and $p''(\rho_1)=2\tau_1^3p'(\tau_1)+\tau_1^4p''(\tau_1)$. From $0<u_1(s)<h(\rho_1(s), s)$ as $s<s_0$, we get $s^2 p'(\rho_1)-(1-u_1s)^2<0$ as $s<s_0$. Hence, we have $\lim\limits_{s\rightarrow s_0^{-}} \frac{{\rm d} (u_1-h)}{{\rm d}s}=-\infty$ which leads to a contradiction. We then complete the proof of the lemma. \end{proof}
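The two change-of-variable identities invoked at the end of the proof, which are used repeatedly below, are simply the chain rule applied to $\tau=1/\rho$:

```latex
\[
p'(\rho)=\frac{{\rm d} p}{{\rm d}\tau}\,\frac{{\rm d}\tau}{{\rm d}\rho}
        =-\frac{p'(\tau)}{\rho^{2}}=-\tau^{2}p'(\tau),
\qquad
p''(\rho)=\frac{{\rm d}}{{\rm d}\rho}\big(-\tau^{2}p'(\tau)\big)
        =\big(2\tau p'(\tau)+\tau^{2}p''(\tau)\big)\,\tau^{2}
        =2\tau^{3}p'(\tau)+\tau^{4}p''(\tau).
\]
```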
In what follows, we are going to show that there are only the following three cases for the local solution $(u_1, \rho_1)(s)$: \begin{itemize}
\item There exists a $s_*>0$ such that $0<u_1(s)<h(\rho_1(s), s)$ as $s<s_*$ and $h(\rho_1(s_*), s_*)=u_1(s_*)=1/s_{*}$; see Figure \ref{fig1}(left).
\item There exists a $s_{*}>0$ such that $0<u_1(s)<h(\rho_1(s), s)$ as $s<s_*$ and $u_1(s_*)=h(\rho_1(s_*), s_*)=0$; see Figure \ref{fig2}(left).
\item $0<u_1(s)<h(\rho_1(s), s)$ for all $s>0$; see Figure \ref{fig3}(left). \end{itemize}
\begin{lem} If $u_0>0$ is sufficiently large, then there exists a $s_*>0$ such that $0<u_1(s)<h(\rho_1(s), s)$ as $s<s_*$ and $h(\rho_1(s_*), s_*)=u_1(s_*)=1/s_{*}$. \end{lem} \begin{proof} If $u_1(s)<h(\rho_1(s), s)$ then we have $$ u_1(s)s<1\quad \mbox{and}\quad s^2p'(\rho_1(s))-(1-u_1(s)s)^2<0. $$ Consequently by (\ref{ODE2}) we have $$\frac{{\rm d} \rho_1}{{\rm d} u_1}~=~\frac{\rho_1 (1-u_1s)}{p'(\rho_1)s}~>~\frac{\rho_1}{\sqrt{p'(\rho_1)}}.$$ Integrating this, we get \begin{equation}\label{91201} \int_{0}^{\rho_0}\frac{\sqrt{p'(\rho_1)}}{\rho_1}~{\rm d}\rho_1~\geq~\int_{\rho_1(s)}^{\rho_0}\frac{\sqrt{p'(\rho_1)}}{\rho_1}~{\rm d}\rho_1~>~\int_{u_1(s)}^{u_0}~{\rm d}u_1. \end{equation} Combining this with assumption (A1) and Lemmas \ref{lem3.1} and \ref{lem1}, we know that when $u_0$ is sufficiently large, e.g. $u_0>\int_{0}^{\rho_0}\frac{\sqrt{p'(\rho)}}{\rho}~{\rm d}\rho$,
there exists a $s_*>0$ such that $\rho_1(s_*)=0$ and $u_1(s_*)=h(\rho_1(s_*), s_*)=1/s_*$.
We then have this lemma. \end{proof}
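For orientation, if one takes the prototypical convex pressure law $p(\rho)=\rho^{\gamma}$ with $\gamma>1$ (used here purely as an illustration; the equations of state considered in this paper are more general), the largeness threshold in the proof can be computed explicitly:

```latex
\[
p(\rho)=\rho^{\gamma},\ \gamma>1:\qquad
\int_{0}^{\rho_0}\frac{\sqrt{p'(\rho)}}{\rho}\,{\rm d}\rho
=\sqrt{\gamma}\int_{0}^{\rho_0}\rho^{\frac{\gamma-3}{2}}\,{\rm d}\rho
=\frac{2\sqrt{\gamma}}{\gamma-1}\,\rho_0^{\frac{\gamma-1}{2}}
=\frac{2}{\gamma-1}\sqrt{p'(\rho_0)},
\]
```

so in this illustrative case the vacuum appears once $u_0>\frac{2}{\gamma-1}\sqrt{p'(\rho_0)}$, the classical escape-velocity bound.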
\begin{figure}
\caption{\footnotesize Continuous solution with a vacuum, where $\xi=x/t$.}
\label{fig1}
\end{figure}
Therefore, the self-similar solution of the problem (\ref{AE}), (\ref{IBV}) for this case has the form $$ (u, \rho)(x, t)=\left\{
\begin{array}{ll}
(u_1, \rho_1)(s), & \hbox{$s<s_*$,} \\[4pt]
\big(\xi_*, 0\big), & \hbox{$s>s_*$;}
\end{array}
\right. $$ where $s=t/x$ and $\xi_*=1/s_*$. This is a continuous solution with a growing vacuum region; see Figure \ref{fig1}(right).
\begin{lem} If $u_0>0$ is sufficiently small then there exists a $s_{*}>0$ such that $0<u_1(s)<h(\rho_1(s), s)$ as $s<s_*$ and $u_1(s_*)=h(\rho_1(s_*), s_*)=0$. \end{lem} \begin{proof} Let $\varepsilon\in (0, \rho_0/2)$ be given such that \begin{equation}\label{91202} \max\limits_{\rho\in [\rho_0-\varepsilon, \rho_0]}\sqrt{p'(\rho)}~<~\frac{3}{2}\sqrt{p'(\rho_0)}. \end{equation} Let $s_0=\frac{1}{2\sqrt{p'(\rho_0)}}$. It follows from (\ref{ODE2}) that if $u_0>0$ is sufficiently small then $\rho_1(s_0)>\rho_0-\varepsilon$. Consequently, by (\ref{91202}) we have $h(\rho_1(s_0), s_0)>0$.
From (\ref{ODE2}), we have $$\frac{{\rm d} \rho_1}{{\rm d} u_1}~=~\frac{\rho_1 (1-u_1s)}{p'(\rho_1)s}~<~\frac{\rho_1}{s_0p'(\rho_1)}\quad \mbox{as}\quad s>s_0.$$ Hence, we have \begin{equation}\label{91204} \int_{\rho_1(s)}^{\rho_0-\varepsilon}\frac{p'(\rho_1)}{\rho_1}~{\rm d}\rho_1~<~\int_{\rho_1(s)}^{\rho_1(s_0)}\frac{p'(\rho_1)}{\rho_1}~{\rm d}\rho_1~<~\frac{1}{s_0}\int_{u_1(s)}^{u_1(s_0)}{\rm d}u_1~<~\frac{u_0}{s_0}. \end{equation} Thus, when $u_0$ is sufficiently small there exists a $\rho_m>0$ such that $\rho_1(s)>\rho_m$. Therefore, there must exist a $s_*>0$ such that $h(\rho_1(s_*), s_*)=0$. By Lemma \ref{lem1} we also have $u_1(s_*)=0$. We then complete the proof of this lemma. \end{proof}
Therefore, the self-similar solution of the problem (\ref{AE}), (\ref{IBV}) for this case has the form $$ (u, \rho)(x, t)=\left\{
\begin{array}{ll}
(u_1, \rho_1)(s), & \hbox{$s<s_*$,} \\[4pt]
\big(0, \rho_1(s_*)\big), & \hbox{$s>s_*$;}
\end{array}
\right. $$ where $s=t/x$. This is a continuous solution with a quiet constant state; see Figure \ref{fig2}(right).
\begin{figure}
\caption{\footnotesize Continuous solution with a quiet constant state.}
\label{fig2}
\end{figure}
\begin{lem}\label{lemcase31} If the first case happens as $(u, \rho)(0)=(u^0, \rho^0)$, then there exists a sufficiently small $\varepsilon>0$ such that the first case will happen as $(u, \rho)(0)\in({u}^0-\varepsilon, {u}^0+\varepsilon)\times\{\rho^0\}$. \end{lem} \begin{proof} Denote by $(\bar{u}, \bar{\rho})(s)$ the solution of system (\ref{ODE2}) with the initial data $(u, \rho)(0)=(u^0, \rho^0)$. Then there exists a $\bar{s}_*>0$ such that $\bar{\rho}(\bar{s}_*)=0$ and $\bar{u}(\bar{s}_{*})=h(\bar{\rho}(\bar{s}_{*}), \bar{s}_{*})=1/\bar{s}_{*}$.
From assumption (A1), we can find a sufficiently small $\delta\in (0, 1/\bar{s}_{*})$ such that \begin{equation}\label{91203} \int_{0}^{\delta}\frac{\sqrt{p'(\rho)}}{\rho}~{\rm d}\rho~<~\frac{1}{2\bar{s}_{*}}. \end{equation}
Since $\bar{\rho}(s)$ is continuous on $[0, \bar{s}_{*}]$, there exists a sufficiently small $\eta>0$ such that \begin{equation} \bar{\rho}(\bar{s}_{*}-\eta)<\frac{\delta}{2}. \end{equation} When $\varepsilon>0$ is sufficiently small the solution $(u, \rho)(s)$ of system (\ref{ODE2}) with the initial data
$(u, \rho)(0)~\in~ ({u}^0-\varepsilon, {u}^0+\varepsilon)\times\{\rho^0\}$ satisfies \begin{equation}\label{8506}
|\rho(\bar{s}_{*}-\eta)-\bar{\rho}(\bar{s}_{*}-\eta)|<\frac{\delta}{4}\quad\mbox{and}\quad
|u(\bar{s}_{*}-\eta)-\bar{u}(\bar{s}_{*}-\eta)|<\frac{\delta}{4}. \end{equation}
Similarly to (\ref{91201}), we have $$ \int_{\rho(s)}^{\rho(\bar{s}_{*}-\eta)}\frac{\sqrt{p'(\rho)}}{\rho}~{\rm d}\rho~>~\int_{u(s)}^{u(\bar{s}_{*}-\eta)}~{\rm d}u=u(\bar{s}_{*}-\eta)-u(s) $$ as $s>\bar{s}_{*}-\eta$. Combining this with (\ref{8506}), we get $$ \begin{aligned} \int_{\rho(s)}^{\delta}\frac{\sqrt{p'(\rho)}}{\rho}~{\rm d}\rho~&>~\bar{u}(\bar{s}_{*}-\eta)-\frac{\delta}{4}-u(s)>~\bar{u}(\bar{s}_{*})-\frac{\delta}{4}-u(s)\\&~=~\frac{1}{\bar{s}_{*}}-\frac{\delta}{4}-u(s)~>~ \frac{3}{4\bar{s}_{*}}-u(s) \end{aligned} $$ as $s>\bar{s}_{*}-\eta$. Thus, by (\ref{91203}) and Lemmas \ref{lem3.1} and \ref{lem1} we know that
there exists a $s_{*}$ such that $u(s)<h(\rho(s), s)$ as $0<s<s_{*}$ and $u(s_*)=h(\rho(s_*), s_*)=1/s_*$. We then have this lemma. \end{proof}
\begin{lem}\label{lemcase32}
If the second case happens as $(u, \rho)(0)=(u^0, \rho^0)$, then there exists a sufficiently small $\varepsilon>0$ such that the second case will happen as $(u, \rho)(0)\in({u}^0-\varepsilon, {u}^0+\varepsilon)\times\{\rho^0\}$. \end{lem} \begin{proof}
Denote by $(\bar{u}, \bar{\rho})(s)$ the solution of system (\ref{ODE2}) with the initial data $(u, \rho)(0)=(u^0, \rho^0)$. Then there exists a $\bar{s}_*>0$ such that $\bar{u}(\bar{s}_*)=0$ and $\bar{\rho}(\bar{s}_{*})=\rho_{*}>0$.
Let $$\mathcal{N}=\int_{0}^{\frac{\rho_*}{2}}\frac{p'(\rho)}{\rho}~{\rm d} \rho.$$ There exists a sufficiently small $\eta<\frac{\bar{s}_*}{2}$ such that \begin{equation}\label{100404} 0<\bar{u}(\bar{s}_{*}-\eta)<\frac{\mathcal{N}\bar{s}_*}{4}. \end{equation}
It is easy to see that if $\varepsilon>0$ is sufficiently small, then the solution $(u, \rho)(s)$ of system (\ref{ODE2}) with the initial data
$(u, \rho)(0)~\in~ ({u}^0-\varepsilon, {u}^0+\varepsilon)\times\{\rho^0\}$ satisfies \begin{equation}\label{100403}
|\rho(\bar{s}_{*}-\eta)-\bar{\rho}(\bar{s}_{*}-\eta)|<\frac{\rho_*}{4}\quad\mbox{and}\quad
|u(\bar{s}_{*}-\eta)-\bar{u}(\bar{s}_{*}-\eta)|<\frac{\mathcal{N}\bar{s}_*}{4}. \end{equation}
Similarly to (\ref{91204}), we have \begin{equation} \int_{\rho(s)}^{\rho(\bar{s}_{*}-\eta)}\frac{p'(\rho)}{\rho}~{\rm d}\rho~<~\frac{1}{\bar{s}_{*}-\eta}\int_{u(s)}^{u(\bar{s}_{*}-\eta)}{\rm d}u\quad \mbox{as}\quad s>\bar{s}_{*}-\eta. \end{equation} Combining this with (\ref{100403}) we get $$ \int_{\rho(s)}^{\frac{3\rho_*}{4}}\frac{p'(\rho)}{\rho}~{\rm d}\rho~<~\frac{1}{\bar{s}_{*}-\eta}\int_{u(s)}^{\frac{\mathcal{N}\bar{s}_*}{2}}{\rm d}u~<~\mathcal{N}, $$ since $\rho'(s)<0$. Thus, by (\ref{100404}) we know that there exists a $\rho_m>0$ such that $\rho(s)>\rho_m$ as $s>\bar{s}_{*}-\eta$. Consequently, there exists a $s_{*}>\bar{s}_{*}-\eta$ such that $h(\rho(s_*), s_{*})=0$. We then have this lemma. \end{proof}
Using Lemmas \ref{lemcase31} and \ref{lemcase32} and a continuity argument, we know that for any given $\rho_0>0$ there exists a $u_0>0$ such that the solution $(u_1, \rho_1)(s)$ of the initial value problem (\ref{ODE2}), (\ref{ID1}) satisfies $0<u_1(s)<h(\rho_1(s), s)<1/s$ as $s>0$; see Figure \ref{fig3}(left). That is to say, the initial value problem (\ref{ODE2}), (\ref{ID1}) has a global classical solution. Moreover, this solution satisfies $\lim\limits_{s\rightarrow +\infty}u_1(s)=\lim\limits_{s\rightarrow +\infty}\rho_1(s)=0$. In this case, the initial-boundary value problem (\ref{AE}), (\ref{IBV}) has a self-similar smooth solution $(u, \rho)(x, t)=(u_1, \rho_1)(t/x)$; see Figure \ref{fig3}(right).
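The smooth regime can also be observed numerically. A minimal sketch follows; it is illustrative only: the convex pressure law $p(\rho)=\rho^{2}$, the starting data, and the integration range are chosen by hand and do not come from the text, and the right-hand sides are the form of system (\ref{ODE2}) reconstructed from the proofs above.

```python
# Numerical illustration (not part of the paper's argument): we integrate
#   du/ds   = 2 p'(rho) u s / D,
#   drho/ds = 2 rho u (1 - u s) / D,   D = s^2 p'(rho) - (1 - u s)^2,
# for the hand-picked convex pressure law p(rho) = rho^2 (equation of state I).

def rhs(s, u, rho):
    dp = 2.0 * rho                          # p'(rho) for p(rho) = rho^2
    D = s * s * dp - (1.0 - u * s) ** 2     # stays negative while u < h
    return 2.0 * dp * u * s / D, 2.0 * rho * u * (1.0 - u * s) / D

def integrate(u0, rho0, s_end, n=3000):
    """Classical RK4 march from s ~ 0 up to s_end."""
    s, u, rho = 1e-6, u0, rho0
    h = (s_end - s) / n
    for _ in range(n):
        k1 = rhs(s, u, rho)
        k2 = rhs(s + h / 2, u + h / 2 * k1[0], rho + h / 2 * k1[1])
        k3 = rhs(s + h / 2, u + h / 2 * k2[0], rho + h / 2 * k2[1])
        k4 = rhs(s + h, u + h * k3[0], rho + h * k3[1])
        u += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        rho += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        s += h
    return u, rho

u, rho = integrate(0.1, 1.0, 0.3)
# Lemma 3.1: u and rho stay positive and strictly decrease
# while s^2 p'(rho) - (1 - u s)^2 < 0.
assert 0.0 < u < 0.1 and 0.0 < rho < 1.0
```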
\begin{figure}
\caption{\footnotesize A global smooth self-similar solution.}
\label{fig3}
\end{figure}
\subsection{Equation of state II}
\subsubsection{$\tau_0\geq\tau_2^i$} The discussion is similar to that of section 3.1, since $\tau_1'(s)>0$ as $s>0$ and $p''(\tau)>0$ as $\tau<\tau_0$.
\subsubsection{$\tau_1^i\leq \tau_0<\tau_2^i$} There are the following four cases: \begin{itemize}
\item There exists a $s_*>0$ such that $0<u_1(s)<h(\rho_1(s), s)$ as $0<s<s_*$ and $h(\rho_1(s_*), s_*)=u_1(s_*)=\frac{1}{s_{*}}$.
\item There exists a $s_{*}>0$ such that $0<u_1(s)<h(\rho_1(s), s)$ as $0<s<s_*$ and $u_1(s_*)=h(\rho_1(s_*), s_*)=0$.
\item $0<u_1(s)<h(\rho_1(s), s)$ for all $s>0$. \item There exists a $s_*>0$ such that $0<u_1(s)<h(\rho_1(s), s)$ as $0<s<s_*$ and $0<u_1(s_*)=h(\rho_1(s_*), s_*)<\frac{1}{s_{*}}$. (By Lemma \ref{lem1}, $\tau_1(s_*)\in (\tau_0, \tau_2^i)$ in this case.) \end{itemize} It is easy to see that the first three cases can happen when $\tau_0$ is sufficiently close to $\tau_2^i$. The discussion of these three cases is similar to that of section 3.1.
In what follows, we are going to discuss the fourth case. We first show that the fourth case can indeed happen in some cases. To confirm this, we consider the initial value problem \begin{equation}\label{ODE3} \left\{
\begin{aligned}
&\frac{{\rm d} s}{{\rm d} u}=\frac{s^2 p'(\rho)-(1-us)^2}{2p'(\rho) us}, \\
&\frac{{\rm d} \rho}{{\rm d} u}=\frac{\rho (1-us)}{p'(\rho)s},
\end{aligned} \right. \end{equation} \begin{equation}\label{ID21} (s, \rho)\mid_{u=u_*}~=~(s_*, \rho_*), \end{equation} where $u_*>0$, $\rho_*>0$, and $s_*>0$ satisfy $s_*^2p'(\rho_*)-(1-u_* s_*)^2=0$, $u_*s_*<1$ and $\tau_1^i<1/\rho_*<\tau_2^i$.
\begin{lem}\label{12101} When $\delta>0$ is sufficiently small, the initial value problem (\ref{ODE3}), (\ref{ID21}) has a solution $(\hat{s}, \hat{\rho})(u)$ on $(u_*, u_*+\delta)$. Moreover, this solution satisfies $\frac{{\rm d} \hat{s}}{{\rm d} u}<0$, $\frac{{\rm d} \hat{\rho}}{{\rm d} u}<0$, and $\frac{{\rm d}[\hat{s}^2 p'(\hat{\rho})-(1-u\hat{s})^2]}{{\rm d} u}<0$ as $u\in(u_*, u_*+\delta)$. \end{lem} \begin{proof} It is easy to see that this initial value problem is a classically well-posed problem which has a unique local solution $(\hat{s}, \hat{\rho})(u)$.
By computation, we have $$ \begin{aligned}
&\frac{{\rm d}}{{\rm d} u}\big(\hat{s}^2 p'(\hat{\rho})-(1-u\hat{s})^2\big)\\=~&\big(2\hat{s} p'(\hat{\rho})+2u(1-u\hat{s})\big)\cdot\Big(\frac{\hat{s}^2 p'(\hat{\rho})-(1-u\hat{s})^2}{2p'(\hat{\rho}) u\hat{s}}\Big)+\hat{s}^2p''(\hat{\rho})\frac{{\rm d} \hat{\rho}}{{\rm d}u}+2\hat{s}(1-u\hat{s})\\=~&
\big(2\hat{s} p'(\hat{\rho})+2u(1-u\hat{s})\big)\cdot\Big(\frac{\hat{s}^2 p'(\hat{\rho})-(1-u\hat{s})^2}{2p'(\hat{\rho}) u\hat{s}}\Big)+\frac{\hat{s}\hat{\tau}^3p''(\hat{\tau})}{p'(\hat{\rho})}(1-u\hat{s})~<~0\quad \mbox{as}\quad u=u_*. \end{aligned} $$
Hence, we have $\hat{s}^2 p'(\hat{\rho})-(1-u\hat{s})^2<0$ and $\frac{{\rm d}[\hat{s}^2 p'(\hat{\rho})-(1-u\hat{s})^2]}{{\rm d} u}<0$ as $u\in(u_*, u_*+\delta)$.
Moreover, by $s_*^2p'(\rho_*)-(1-u_* s_*)^2=0$ and $u_*s_*<1$ we have $1-u\hat{s}>0$ as $u\in(u_*, u_*+\delta)$. Consequently, we have $\frac{{\rm d} \hat{s}}{{\rm d} u}<0$ and $\frac{{\rm d} \hat{\rho}}{{\rm d} u}<0$ as $u\in(u_*, u_*+\delta)$. \end{proof}
When $p''(\tau)<0$ and $p'(\tau)<0$ we have $p''(\rho)=2\tau^3p'(\tau)+\tau^4p''(\tau)<0$. Hence, in view of Lemma \ref{12101}, there may exist a $u^*>u_*$ such that $\hat{s}(u^*)=0$ and $\tau_1^i<\frac{1}{\hat{\rho}(u^*)}<\tau_2^i$, at least for some equations of state. Therefore, if we take $u_0=u^*$ and $\rho_0=\hat{\rho}(u^*)$ then the fourth case will happen.
\vskip 4pt
We next construct the solution for the fourth case. From $s_*^2p'(\rho_1(s_*))-(1-u_1(s_*) s_*)^2=0$,
we have $\lim\limits_{s\rightarrow s_*^{-}}\frac{{\rm d} u_1}{{\rm d}s}=-\infty$ and $\lim\limits_{s\rightarrow s_*^{-}}\frac{{\rm d} \rho_1}{{\rm d}s}=-\infty$; see Figure \ref{fig4}(left). This implies that the problem (\ref{AE}), (\ref{IBV}) does not have a global continuous solution. So, we need to look for a discontinuous solution.
Since $s^2p'(\rho_1)-(1-u_1 s)^2<0$ and $0<u_1s<1$ as $0<s<s_*$, we have \begin{equation}\label{102401} \frac{1}{s}~>~u_1(s)+\sqrt{p'(\rho_1(s))}~=~u_1(s)+\tau_1(s)\sqrt{-p'(\tau_1(s))}\quad \mbox{as}\quad0<s<s_*. \end{equation}
We first consider the possibility of finding a compression shock wave solution. By (\ref{102401}) and Corollary \ref{cor2} we know that for any $0<s<s_*$, there exists an admissible forward compression shock with the speed $1/s$ and the front side state $(u_1, \rho_1)(s)$. Moreover, the backside state $(u_2, \tau_2)(s)$ can be uniquely determined by \begin{equation}\label{backside} \left\{
\begin{aligned}
&\frac{1}{s}=u_1(s)+\tau_1(s)\sqrt{-\frac{p(\tau_2(s))-p(\tau_1(s))}{\tau_2(s)-\tau_1(s)}},\\
&u_2(s)=u_1(s)+(\tau_1(s)-\tau_2(s))\sqrt{-\frac{p(\tau_2(s))-p(\tau_1(s))}{\tau_2(s)-\tau_1(s)}}.
\end{aligned} \right. \end{equation} Moreover, we have $u_2(s)>u_1(s)>0$ since $\tau_2(s)<\tau_1(s)$. By the entropy condition, we have $$ u_2(s)-\sqrt{p'(\rho_2(s))}~<~\frac{1}{s}~<~u_2(s)+\sqrt{p'(\rho_2(s))}, $$ and consequently \begin{equation}\label{122501} \frac{1}{s}-\sqrt{p'(\rho_2(s))}<u_2(s)<\frac{1}{s}+\sqrt{p'(\rho_2(s))}. \end{equation}
We now assume there exists an admissible forward compression shock with the speed $1/s_1$ and the front side state $(u_1, \rho_1)(s_1)$, where $s_1\in (0, s_*)$.
Then we consider system (\ref{ODE2}) with the data \begin{equation}\label{100501} (u, \rho)\mid_{s=s_1}~ =~ (u_2, \rho_2)(s_1). \end{equation} We have the following lemma: \begin{lem}\label{100502} There exists a $s^{*}>s_1$ such that the solution $(u_3, \rho_3)(s)$ of the initial value problem (\ref{ODE2}), (\ref{100501}) satisfies \begin{equation}\label{100506} \frac{1}{s}-\sqrt{p'(\rho_3(s))}<u_3(s)<\frac{1}{s}+\sqrt{p'(\rho_3(s))}\quad \mbox{as}\quad s_1<s<s^{*} \end{equation} and $u_3(s^{*})=\frac{1}{s^{*}}+\sqrt{p'(\rho_3(s^{*}))}>\frac{1}{s_*}$. \end{lem} \begin{proof} It is easy to see that if $\frac{1}{s}-\sqrt{p'(\rho_3(s))}<u_3(s)<\frac{1}{s}+\sqrt{p'(\rho_3(s))}$ then $s^2p'(\rho_3)-(1-u_3s)^2>0$. There are two situations: $u_2(s_1)s_1\geq1$ and $u_2(s_1)s_1<1$.
If $u_2(s_1)s_1\geq1$, then we have ${\rm d} u_3/{\rm d} s>0$ and ${\rm d} \rho_3/{\rm d} s<0$ as $s>s_1$.
If $u_2(s_1)s_1<1$, then we have $\rho_3'(s_1)>0$. Using (\ref{100504}), (\ref{100505}), and the fact that $\rho_3'(s)>0$ as $u_3s<1$, we get $u_3(s)>1/s-\sqrt{p'(\rho_3(s))}$. Thus there exists a $s_2> s_1$ such that $u_3(s_2)s_2=1$. Moreover, we have ${\rm d} u_3/{\rm d} s>0$ and ${\rm d} \rho_3/{\rm d} s<0$ as $s>s_2$.
If the curves $u=u_3(s)$ and $u=1/s+\sqrt{p'(\rho_3(s))}$ do not intersect with each other, then we must have $$\lim\limits_{s\rightarrow +\infty}\rho_3(s)=\rho_{\infty}>0\quad\mbox{and}\quad \lim\limits_{s\rightarrow +\infty}u_3(s)=u_{\infty}>0.$$ Thus, we have $$ \frac{{\rm d} u_3}{{\rm d} s}=\frac{2p'(\rho_3) u_3s}{s^2 p'(\rho_3)-(1-u_3s)^2}>\frac{2u_3(s)}{s}>\frac{2u_2(s_1)}{s} $$ which leads to a contradiction. We then complete the proof of this lemma. \end{proof} Lemma \ref{100502} implies that the initial value problem (\ref{ODE2}), (\ref{100501}) does not have a solution on $(s_1, +\infty)$. It follows from (\ref{100506}) that $(u_3, \rho_3)(s)$ ($s_1<s<s^{*}$) cannot be the front side state of any admissible forward shock with the speed $1/s$. Therefore, the problem (\ref{AE}), (\ref{IBV}) does not permit a compression shock wave solution. In what follows, we are going to look for a rarefaction shock wave solution.
For $\tau_1^i<\tau_1(s)<\tau_2^i$ we let $f(\tau_1(s))$ be defined such that $$ \frac{p(\tau_1)-p(f(\tau_1))}{\tau_1-f(\tau_1)}=p'(f(\tau_1)) \quad \mbox{and}\quad f(\tau_1)>\tau_2^i. $$ It can be seen that \begin{equation}\label{102501} -p'(\tau_1(s))<-p'(f(\tau_1(s)))\quad \mbox{as}~0<s<s_*. \end{equation}
\begin{lem}\label{lem1701} There exists a $s_{**}\in (0, s_*)$ such that for any $s\in [s_{**}, s_{*}]$, there exists an admissible forward rarefaction shock with the speed $1/s$ and the front side state $(u_1, \rho_1)(s)$. \end{lem} \begin{proof} According to Corollary \ref{cor3}, in order that $(u_1, \rho_1)(s)$ can be the front side state of an admissible forward rarefaction shock with the speed $1/s$, it must hold that $$ u_1(s)+\tau_1(s)\sqrt{-p'(\tau_1(s))}~\leq~ \frac{1}{s}~\leq~ u_1(s)+\tau_1(s)\sqrt{-p'(f(\tau_1(s)))}. $$
Since $u_1(s)<h(\rho_1(s), s)$ as $0<s<s_*$, we have $$ \frac{1}{s}~>~u_1(s)+\tau_1(s)\sqrt{-p'(\tau_1(s))}\quad \mbox{as}\quad 0<s<s_*. $$
From $u_1(s_*)=h(\rho_1(s_*), s_*)>0$, $\tau_0<\tau_1(s_*)<\tau_2^i$, and (\ref{102501}), we have \begin{equation}\label{81703} \frac{1}{s_*}~=~u_1(s_*)+\tau_1(s_*)\sqrt{-p'(\tau_1(s_*))}~<~u_1(s_*)+\tau_1(s_*)\sqrt{-p'(f(\tau_1(s_*)))}. \end{equation} Thus, there exists a $s_{**}\in (0, s_*)$ such that $1/s<u_1(s)+\tau_1(s)\sqrt{-p'(f(\tau_1(s)))}$ as $s_{**}<s<s_*$ and \begin{equation}\label{102502} \frac{1}{s_{**}}=u_1(s_{**})+\tau_1(s_{**})\sqrt{-p'(f(\tau_1(s_{**})))}. \end{equation}
We then complete the proof of this lemma. \end{proof}
\begin{figure}
\caption{\footnotesize Discontinuous solution with a single rarefaction shock.}
\label{fig4}
\end{figure}
Let $(u_2, \tau_2)(s)$ ($s_{**}\leq s<s_*$) be determined by (\ref{backside}) and $\tau_2(s)>\tau_1(s)$. It is easy to see that $u_2(s_*)=u_1(s_*)>0$ and \begin{equation}\label{81701} \tau_2(s_{**})=f(\tau_1(s_{**}))>\tau_2^i. \end{equation}
\vskip 8pt If $u_2(s_{**})\leq 0$ then there exists a $s_{s}\in [s_{**}, s_{*})$ such that $u_2(s_{s})=0$; see Figure \ref{fig4}(left). In this case, the self-similar solution of the problem (\ref{AE}), (\ref{IBV}) has the form: $$ (u, \rho)(x, t)=\left\{
\begin{array}{ll}
(u_1, \rho_1)(s), & \hbox{$s<s_{s}$,} \\[4pt]
(0, \rho_2(s_{s})), & \hbox{$s>s_{s}$,}
\end{array}
\right. $$ where $s=t/x$; see Figure \ref{fig4}(right).
\vskip 8pt
If $u_2(s)>0$ for all $s\in [s_{**}, s_{*}]$, then we consider system (\ref{ODE3}) with the initial data \begin{equation}\label{ID3} (s, \rho)\mid_{u=u_2(s_{**})}~=~(s_{**}, \rho_2(s_{**})). \end{equation}
\begin{lem}\label{100503} When $\delta>0$ is sufficiently small, the initial value problem (\ref{ODE3}), (\ref{ID3}) has a solution $(\bar{s}, \bar{\rho})(u)$ on $(u_2(s_{**})-\delta, u_2(s_{**}))$. Moreover, this solution satisfies ${\rm d} \bar{s}/{\rm d} u<0$ and $\bar{s}^2 p'(\bar{\rho})-(1-u\bar{s})^2<0$ as $u\in\big(u_2(s_{**})-\delta, u_2(s_{**})\big)$. \end{lem} \begin{proof} It is easy to see that the initial value problem is a classically well-posed problem which has a unique local solution. From (\ref{1}), (\ref{2}), and (\ref{102502}) we have \begin{equation}\label{122503} \frac{1}{s_{**}}=u_2(s_{**})+\tau_2(s_{**})\sqrt{-p'(\tau_2(s_{**}))}. \end{equation}
Hence, we have $\bar{s}^2 p'(\bar{\rho})-(1-u\bar{s})^2=0$ as $u=u_2(s_{**})$.
From (\ref{81701}) and (\ref{122503}), we have $$ \begin{aligned}
&\frac{{\rm d}}{{\rm d} u}\big(\bar{s}^2 p'(\bar{\rho})-(1-u\bar{s})^2\big)\\=&
\big(2\bar{s} p'(\bar{\rho})+2u(1-u\bar{s})\big)\cdot\Big(\frac{\bar{s}^2 p'(\bar{\rho})-(1-u\bar{s})^2}{2p'(\bar{\rho}) u\bar{s}}\Big)+\frac{\bar{s}\bar{\tau}^3p''(\bar{\tau})}{p'(\bar{\rho})}(1-u\bar{s})>0 \end{aligned} $$ as $u=u_2(s_{**})$.
Thus, when $\delta>0$ is sufficiently small we have $$\bar{s}^2 p'(\bar{\rho})-(1-u\bar{s})^2<0\quad \mbox{as}~~ u\in \big(u_2(s_{**})-\delta, u_2(s_{**})\big).$$ We then complete the proof of this lemma. \end{proof}
Let $u=\bar{u}_1(s)$ be the inverse function of $s=\bar{s}(u)$ and $\bar{\rho}_1(s)=\bar{\rho}(\bar{u}_1(s))$. It is obvious that $(\bar{u}_1, \bar{\rho}_1)(s)$ satisfies the system (\ref{ODE2}) in $\big(s_{**}, \bar{s}(u_2(s_{**})-\delta)\big)$. Moreover, by Lemma \ref{100503} we also have $$ 0~<~\bar{u}_1(s)~<~h(\bar{\rho}_1(s), s)\quad \mbox{and}\quad \bar{\tau}_1(s)>\tau_2^i $$ as $s\in \big(s_{**}, \bar{s}(u_2(s_{**})-\delta)\big)$; see Figure \ref{fig12}(left). Thus, when $s>\bar{s}(u_2(s_{**})-\delta)$ the discussion is similar to that of section 3.1. The structure of the solution is illustrated in Figure \ref{fig12}.
\begin{figure}
\caption{\footnotesize Discontinuous solutions with a single rarefaction shock.}
\label{fig12}
\end{figure}
\subsubsection{$\tau_0<\tau_1^i$} As in the case $\tau_1^i\leq \tau_0<\tau_2^i$, there are four cases. We only discuss the fourth case, i.e., there exists a $s_*>0$ such that $0<u_1(s)<h(\rho_1(s), s)$ as $0<s<s_*$ and $0<u_1(s_*)=h(\rho_1(s_*), s_*)<\frac{1}{s_{*}}$.
If $\tau_0\geq\hat{\tau}_1$, then Lemmas \ref{100502} and \ref{lem1701} still hold, since $f(\tau_1(s))$ can also be defined for $s\in [\tau_0, s_*]$. Then the discussion will be similar to that of section 3.2.2. We omit the details.
If $\tau_0<\hat{\tau}_1$, then we let $\hat{s}$ be the point such that $\tau_1(\hat{s})=\hat{\tau}_1$. Since $u_1(s)<h(\rho_1(s), s)$ as $0<s<s_*$, we have $$\frac{1}{\hat{s}}>u_1(\hat{s})+\hat{\tau}_1\sqrt{-p'(\hat{\tau}_1)}=u_1(\hat{s})+\hat{\tau}_1\sqrt{-p'(\hat{\tau}_2)} =u_1(\hat{s})+\hat{\tau}_1\sqrt{-p'(f(\hat{\tau}_1))},$$ since $f(\hat{\tau}_1)=\hat{\tau}_2$. Then there exists a $s_{**}\in (\hat{s}, s_*)$ such that $$u_1(s)+\tau_1(s)\sqrt{-p'(\tau_1(s))}<\frac{1}{s}<u_1(s)+\tau_1(s)\sqrt{-p'(f(\tau_1(s)))}$$ as $s\in (s_{**}, s_*)$ and $1/s_{**}=u_1(s_{**})+\tau_1(s_{**})\sqrt{-p'(f(\tau_1(s_{**})))}$. Then the discussion will be similar to that of section 3.2.2. We omit the details.
\subsection{Equation of state III} We first define \begin{equation}\label{b2} b_1:=\lim\limits_{\rho\rightarrow\tilde{\rho}_1^{+}}\sqrt{p'(\rho)}, \quad b_2:=\lim\limits_{\rho\rightarrow\tilde{\rho}_2^{-}}\sqrt{p'(\rho)}, \end{equation} where $\tilde{\rho}_i=\frac{1}{\tilde{\tau}_i}$ ($i=1, 2$).
\subsubsection{$\tau_0\geq\tilde{\tau}_2$} The discussion is similar to that of section 3.1, since $\tau_1'(s)>0$ as $s>0$ and $p''(\tau)>0$ as $\tau<\tau_0$. \subsubsection{$\tilde{\tau}_1<\tau_0<\tilde{\tau}_2$} Let $s_{*}$ be defined so that $$ \int_{\rho_0}^{\tilde{\rho}_2}\frac{1}{\rho}{\rm d}\rho~=~\int_{0}^{s_{*}}\frac{2u_0}{u_0 t-1} {\rm d}t. $$ Hence, we have $$ u_1(s)=u_0, \quad \rho_1(s)=\rho_0\exp\Big(\int_{0}^{s}\frac{2u_0}{u_0 t-1} {\rm d}t\Big), \quad 0<s<s_*. $$
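In fact the exponential can be evaluated in closed form. Since $u_0 s<1$ on the range of interest,

```latex
\[
\int_{0}^{s}\frac{2u_0}{u_0 t-1}\,{\rm d}t=2\ln(1-u_0 s)
\quad\Longrightarrow\quad
\rho_1(s)=\rho_0\,(1-u_0 s)^{2},
\qquad
s_{*}=\frac{1}{u_0}\Big(1-\sqrt{\tilde{\rho}_2/\rho_0}\,\Big),
\]
```

where the formula for $s_*$ follows from $\rho_1(s_*)=\tilde{\rho}_2$ together with $\tilde{\rho}_2<\rho_0$.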
If $u_0\leq 1/s_*-b_2$ then the discussion for $s\geq s_*$ is similar to that of section 3.1, i.e., the problem (\ref{AE}), (\ref{IBV}) has a continuous solution.
In what follows, we are going to discuss the case of $u_0> 1/s_*-b_2$. As in the fourth case of section 3.2.2, the problem does not have a compression shock wave solution. So, we look for a rarefaction shock wave solution.
For $\tilde{\tau}_1\leq\tau_1\leq\tilde{\tau}_2$, we let $g(\tau_1)$ be defined such that \begin{equation}\label{g1} \frac{p(\tau_1)-p(g(\tau_1))}{\tau_1-g(\tau_1)}=p'(g(\tau_1)) \quad \mbox{and}\quad g(\tau_1)>\tilde{\tau}_2. \end{equation}
\begin{lem} There exists a $s_{**}\in (0, s_*)$ such that for any $s\in [s_{**}, s_{*}]$, there exists an admissible forward rarefaction shock with the speed $1/s$ and the front side state $(u_1, \rho_1)(s)$. \end{lem} \begin{proof} According to Corollary \ref{cor5}, in order that $(u_1, \rho_1)(s)$ can be the front side state of an admissible forward rarefaction shock with the speed $1/s$, it must hold that $$ 0<\frac{1}{s}~\leq~ u_1(s)+\tau_1(s)\sqrt{-p'(g(\tau_1(s)))}. $$
It follows from $u_0>1/s_*-b_2$ that $$ \frac{1}{s_*}~<~u_0+b_2=u_0+\tau_1(s_*)\sqrt{-p'(g(\tau_1(s_*)))}, $$ since $g(\tau_1(s_*))=\tau_1(s_*)=\tilde{\tau}_2$. Therefore, there exists a $s_{**}\in (0, s_*)$ such that $1/s<u_0+\tau_1(s)\sqrt{-p'(g(\tau_1(s)))}$ as $s\in (s_{**}, s_*)$ and $ 1/s_{**}=u_0+\tau_1(s_{**})\sqrt{-p'(g(\tau_1(s_{**})))}. $
This completes the proof of the lemma. \end{proof}
Let $(u_2, \tau_2)(s)$ ($s_{**}\leq s<s_*$) be determined by (\ref{backside}) and $\tau_2(s)>\tau_1(s)$. It is easy to see that $u_2(s_*)=u_0>0$. Then, the discussion will be similar to the fourth case of section 3.2.2. We omit the details.
\subsubsection{$\tau_0<\tilde{\tau}_1$} We have the following two cases: \begin{itemize}
\item There exists a $s_*>0$ such that $u_1(s)<h(\rho_1(s), s)$ as $s<s_*$ and $u_1(s_*)=h(\rho_1(s_*), s_*)=0$ and $\rho_1(s_*)<\tilde{\rho}_1$.
\item There exists a $s_1>0$ such that $0<u_1(s)<h(\rho_1(s), s)<1/s$ as $0<s<s_1$ and $\rho_1(s_1)=\tilde{\rho}_1$. \end{itemize} The structure of the solution for the first case is illustrated in Figure \ref{fig2}. We only need to discuss the second case.
Let $s_{*}$ be defined so that $$ \int_{\tilde{\rho}_1}^{\tilde{\rho}_2}\frac{1}{\rho}{\rm d}\rho~=~\int_{s_1}^{s_{*}}\frac{2u_1(s_1)}{u_1(s_1) s-1} {\rm d}s. $$ Hence, we have $$ u_1(s)= u_1(s_1), \quad \rho_1(s)=\tilde{\rho}_1\exp\Big(\int_{s_1}^{s}\frac{2u_1(s_1)}{u_1(s_1) s-1} {\rm d}s\Big), \quad u_1(s)<\frac{1}{s}=h(\rho_1(s), s)\quad\mbox{as}\quad s_1<s<s_*. $$
\begin{figure}
\caption{\footnotesize Discontinuous solutions with a single rarefaction shock.}
\label{fig5}
\end{figure}
If $u_1(s_1)\leq 1/s_*-b_2$ then the discussion for $s>s_*$ is similar to that of section 3.1. In what follows, we are going to discuss the case of $u_1(s_1)> 1/s_*-b_2$. We look for a rarefaction shock wave solution.
\begin{lem} There exists a $s_{**}\in (s_1, s_*)$ such that for any $s\in [s_{**}, s_{*}]$, there exists an admissible forward rarefaction shock with the speed $1/s$ and the front side state $(u_1, \rho_1)(s)$. \end{lem} \begin{proof} Since $u_1(s)<h(\rho_1(s), s)$ as $0<s<s_*$, we have $$ \frac{1}{s}>u_1(s)+\tau_1(s)\sqrt{-p'(\tau_1(s))}\quad \mbox{as}\quad 0<s<s_*. $$ From assumption (A2) we also have \begin{equation}\label{122601} \frac{1}{s_1}>u_1(s_1)+b_1>u_1(s_1)+\tau_1(s_1)\sqrt{-p'(g(\tau_1(s_1)))}. \end{equation} It follows from $u_1(s_1)> 1/s_*-b_2$ that \begin{equation}\label{12201} \frac{1}{s_*}~<~u_1(s_1)+b_2=u_1(s_1)+\tau_1(s_*)\sqrt{-p'(g(\tau_1(s_*)))}. \end{equation} Combining (\ref{122601}) and (\ref{12201}), there exists a $s_{**}\in (s_1, s_*)$ such that $1/s<u_1(s)+\tau_1(s)\sqrt{-p'(g(\tau_1(s)))}$ as $s\in (s_{**}, s_*)$ and $1/s_{**}=u_1(s_{**})+\tau_1(s_{**})\sqrt{-p'(g(\tau_1(s_{**})))}$. Thus the lemma follows from Corollary \ref{cor5}. \end{proof} Hence, the discussion for this case will be similar to the fourth case of section 3.2.2. The wave structures of the solution are illustrated in Figure \ref{fig5}.
\section{\bf Self-similar solutions for $u_0<0$} In this section, we will construct the self-similar solutions of the problem (\ref{AE}), (\ref{IBV}) for $u_0<0$.
\subsection{Equation of state I}
From $u_0<0$ we have \begin{equation}\label{102601} \frac{{\rm d} u_1}{{\rm d} s}>0\quad\mbox{and}\quad \frac{{\rm d} \rho_1}{{\rm d} s}>0. \end{equation} \begin{lem}\label{lem4.1} For any $u_0<0$ there exists a $s_{*}>0$ such that $u_1(s)<h(\rho_1(s), s)$ as $0<s<s_*$ and $u_1(s_*)=h(\rho_1(s_*), s_*)<0$; see Figure \ref{fig6}(left). \end{lem} \begin{proof} The proof of this lemma proceeds in two steps.
{\bf Step 1.} We first claim that the integral curves $u=u_1(s)$ and $u=h(\rho_1(s), s)$ cannot intersect on the $s$-axis.
We argue by contradiction. Suppose there is a $s_1>0$ such that $u_1(s)<0<h(\rho_1(s), s)$ as $s<s_1$ and $u_1(s_1)=h(\rho_1(s_1), s_1)=0$. Then we have $s_1\sqrt{p'(\rho_1(s_1))}=1$.
We consider the following system of ordinary differential equations \begin{equation}\label{100601} \left\{
\begin{aligned}
&\frac{{\rm d} u}{{\rm d} r}=2p'(\rho) us,\\
&\frac{{\rm d} s}{{\rm d} r}= s^2 p'(\rho)-(1-us)^2,\\ & \frac{{\rm d} \rho}{{\rm d} r}=2\rho u (1-us)
\end{aligned} \right. \end{equation} At the point $(u, s, \rho)=(0, s_1, \rho_1(s_1))$, we find the linear part of the right-hand side of (\ref{100601}) is given by $M(u, s-s_1, \rho-\rho_1(s_1))^{T}$ where $$ M=\left(
\begin{array}{ccc}
\displaystyle 2\sqrt{p'(\rho_1(s_1))} & 0 & 0\\[6pt]
\displaystyle \frac{2}{\sqrt{p'(\rho_1(s_1))}} & 2\sqrt{p'(\rho_1(s_1))} & \displaystyle\frac{p''(\rho_1(s_1))}{p'(\rho_1(s_1))} \\[6pt] 2\rho_1(s_1) &0 &0
\end{array}
\right). $$ Since $$ \begin{aligned} &\frac{p''(\rho_1(s_1))}{p'(\rho_1(s_1))}\cdot\frac{\rho_1(s_1)}{\sqrt{p'(\rho_1(s_1))} }+\frac{2}{\sqrt{p'(\rho_1(s_1))}}\\ &\qquad =~\frac{1}{(p'(\rho_1(s_1)))^{3/2}}\Big(\rho_1(s_1) p''(\rho_1(s_1))+2p'(\rho_1(s_1))\Big)~=~\frac{\tau_1^3(s_1)p''(\tau_1(s_1))}{(p'(\rho_1(s_1)))^{3/2}}~>~0, \end{aligned} $$ we find that, along the integral curves of (\ref{100601}), $ \frac{{\rm d} s}{{\rm d} u}\rightarrow -\infty$ as $(u, s, \rho)\rightarrow (0, s_1, \rho_1(s_1))$; see Figure \ref{fig13}. This leads to a contradiction. Thus the integral curves $u=u_1(s)$ and $u=h(\rho_1(s), s)$ cannot intersect on the $s$-axis.
\begin{figure}
\caption{\footnotesize Integration curves of (\ref{100601}).}
\label{fig13}
\end{figure}
{\bf Step 2.} Let $m=\inf\limits_{\rho\in[\rho_0, +\infty)}\sqrt{p'(\rho)}$. Suppose that the curves $u=u_1(s)$ and $u=h(\rho_1(s), s)$ do not intersect with each other. Then by (\ref{102601}) we have $$\lim\limits_{s\rightarrow +\infty}u_1(s)=u_{\infty}<-m.$$ By (\ref{ODE2}), we have $$ \frac{{\rm d} u_1}{{\rm d} s}=\frac{p'(\rho_1) u_1s}{s^2 p'(\rho_1)-(1-u_1s)^2}>\frac{m^3s}{(1-u_{0}s)^2}. $$ Hence, $$ u_{\infty}-u_0~>~u_1(s)-u_0 ~>~ \int_{0}^{s} \frac{m^3 r}{(1-u_{0}r)^2} {\rm d}r \quad \mbox{as}\quad s>0, $$ which leads to a contradiction, since the right-hand side is unbounded as $s\rightarrow+\infty$ while the left-hand side is finite. This proves the lemma. \end{proof}
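The contradiction in Step 2 rests on the fact that the lower bound grows without bound; a direct computation gives, since $u_0<0$ implies $1-u_0 r>0$ for all $r>0$, $$ \int_{0}^{s} \frac{m^3 r}{(1-u_{0}r)^2}\, {\rm d}r~=~\frac{m^3}{u_0^{2}}\Big(\ln(1-u_0 s)+\frac{1}{1-u_0 s}-1\Big)~\longrightarrow~+\infty \quad\mbox{as}\quad s\rightarrow+\infty. $$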
Lemma \ref{lem4.1} implies that if $u_0<0$ then the problem (\ref{AE}), (\ref{IBV}) does not have a global continuous solution. So, we need to look for a shock wave solution. \begin{lem} For any $s\in (0, s_*)$ there exists an admissible compression shock with the speed $1/s$ and the front side state $(u_1, \rho_1)(s)$. \end{lem} \begin{proof} This lemma can be proved by the fact that $ 1/s~>~u_1(s)+\sqrt{p'(\rho_1(s))} $ as $0<s<s_*$. \end{proof} Let the back side state of the shock $(u_2, \rho_2)(s)$ ($0<s\leq s_*$) be determined by (\ref{backside}). It is easy to see that $$ u_2(s_{*})=u_1(s_*)<0\quad \mbox{and} \quad \lim\limits_{s\rightarrow 0}u_2(s)=+\infty. $$ Therefore, there exists a $s_{s}\in (0, s_{*})$ such that $u_2(s_{s})=0$. Hence, the self-similar solution of the problem (\ref{AE}), (\ref{IBV}) has the form $$ (u, \rho)(x, t)=\left\{
\begin{array}{ll}
(u_1, \rho_1)(s), & \hbox{$s<s_{s}$,} \\[4pt]
(0, \rho_2(s_{s})), & \hbox{$s>s_{s}$,}
\end{array}
\right. $$ where $s=t/x$; see Figure \ref{fig6}(right).
\begin{figure}
\caption{\footnotesize Discontinuous solution with a single compression shock.}
\label{fig6}
\end{figure}
\subsection{Equation of state II} \subsubsection{$\tau_0\leq \tau_1^{i}$} The discussion is similar to that of section 4.1, since $\tau_1'(s)<0$ as $s>0$ and $p''(\tau)>0$ as $\tau<\tau_0$.
\subsubsection{$\tau_1^{i}<\tau_0< \tau_2^{i}$} There are the following two cases: \begin{itemize}
\item There exists a $s_*>0$ such that $u_1(s)<h(\rho_1(s), s)$ as $0<s<s_*$ and $u_1(s_*)=h(\rho_1(s_*), s_*)<0$.
\item There exists a $s_*>0$ such that $u_1(s)<h(\rho_1(s), s)$ as $0<s<s_*$ and $u_1(s_*)=h(\rho_1(s_*), s_*)=0$. (By Lemma \ref{lem4.1}, we have $\tau_1(s_*)\in (\tau_1^{i}, \tau_2^i)$ in this case.) \end{itemize}
The structure of the solution for the first case is similar to that of section 4.1, since $ 1/s~>~u_1(s)+\sqrt{p'(\rho_1(s))} $ as $0<s<s_*$.
The solution for the second case has the form $$ (u, \rho)(x, t)=\left\{
\begin{array}{ll}
(u_1, \rho_1)(s), & \hbox{$s<s_{*}$,} \\[4pt]
(0, \rho_1(s_{*})), & \hbox{$s>s_{*}$.}
\end{array}
\right. $$
\subsubsection{$\tau_0> \tau_2^{i}$} There are the following two cases: \begin{itemize}
\item There exists a $s_{*}>0$ such that $u_1(s)<0<h_1(\rho_1(s), s)$ as $s<s_*$ and $u_1(s_*)=h(\rho_1(s_*), s_*)=0$ and $\tau_1(s_*)\in (\tau_1^{i}, \tau_2^i)$. \item There exists a $s_{*}>0$ such that $u_1(s)<h_1(\rho_1(s), s)$ as $s<s_*$ and $u_1(s_*)=h(\rho_1(s_*), s_*)<0$ and $\tau_1(s_*)\in[\tau_2^{i}, \tau_0)\cup(0, \tau_1^{i}]$.
\end{itemize} \begin{rem} By (\ref{100504}) and (\ref{100505}), it is impossible to have a $s_*>0$ such that $u_1(s)<h_1(\rho_1(s), s)$ as $s<s_*$ and $u_1(s_*)=h(\rho_1(s_*), s_*)<0$ and $\tau_1(s_*)\in (\tau_1^{i}, \tau_2^i)$. \end{rem}
In what follows, we are going to discuss the second case. For $\tau_2^i<\tau_1<\tau_3$, we let $\psi(\tau_1)$ be defined such that \begin{equation} \frac{p(\tau_1)-p(\psi(\tau_1))}{\tau_1-\psi(\tau_1)}=p'(\psi(\tau_1)) \quad \mbox{and}\quad\tau_1^i< \psi(\tau_1)<\tau_2^i. \end{equation} Here, $\tau_3$ is defined in Corollary \ref{cor2}.
\vskip 4pt We first consider the case of $\tau_2^{i}<\tau_1(s_*)<\tau_0<\tau_3$. Let \begin{equation}\label{100201} \hat{\xi}(s)=u_1(s)+\tau_1(s)\sqrt{-p'(\psi(\tau_1(s)))}\quad \mbox{and}\quad F(s)=\frac{1}{s}-\hat{\xi}(s). \end{equation} Then we have \begin{equation}\label{1006021} \lim\limits_{s\rightarrow0}F(s)=+\infty \end{equation} and \begin{equation}\label{1006022} \begin{aligned} F(s_*)&=\frac{1}{s_*}-u_1(s_*)-\tau_1(s_*)\sqrt{-p'(\psi(\tau_1(s_*)))}\\ &=\frac{1}{s_*}-u_1(s_*)-\tau_1(s_*)\sqrt{-p'(\tau_1(s_*))}+\tau_1(s_*)\Big(\sqrt{-p'(\tau_1(s_*))}-\sqrt{-p'(\psi(\tau_1(s_*)))}\Big) \\ &=\tau_1(s_*)\Big(\sqrt{-p'(\tau_1(s_*))}-\sqrt{-p'(\psi(\tau_1(s_*)))}\Big)<0. \end{aligned} \end{equation}
Since $ 1/s~>~u_1(s)+\sqrt{p'(\rho_1(s))} $ and $\tau_2^i<\tau_1(s)<\tau_0$ as $0<s<s_*$, for any $s\in (0, s_*)$ there exists an admissible compression shock with the speed $1/s$ and the front side state $(u_1, \rho_1)(s)$. Let the back side state of the shock $(u_2, \rho_2)(s)$ ($0<s\leq s_*$) be determined by (\ref{backside}). We have \begin{equation}\label{100603} u_2(s_{*})=u_1(s_*)<0\quad \mbox{and}\quad \lim\limits_{s\rightarrow 0}u_2(s)=+\infty. \end{equation}
From Corollary \ref{cor2} we know that
if $F(s)=0$ then (\ref{backside}) has two solutions $(u_{2}^{+}, \tau_{2}^{+})(s)$ and $(u_{2}^{-}, \tau_{2}^{-})(s)$, where $u_{2}^{+}(s)>u_{2}^{-}(s)$ and $\tau_{2}^{+}(s)<\tau_1^i<\tau_{2}^{-}(s)<\tau_2^i$. So, by (\ref{1006021}) and (\ref{1006022}) we see that $u_2(s)$ is piecewise continuous on $(0, s_*)$. Hence, we cannot determine whether or not $u_2(s)$ has a zero point in $(0, s_*)$.
If there exists a $s_{s}\in (0, s_*)$ such that $u_2(s_s)=0$ then the problem (\ref{AE}), (\ref{IBV}) admits a discontinuous solution with a single shock with the speed $1/s_{s}$; see Figure \ref{fig6}(right).
\begin{figure}
\caption{\footnotesize Discontinuous solution with two compression shocks. }
\label{Figu2}
\end{figure}
If $u_2(s)\neq0$ for all $s\in(0, s_*)$, then by (\ref{100603}) there must exist a $s^{*}\in (0, s_*)$ such that $$ F(s^{*})=0, \quad u_2^{+}(s^{*})>0, \quad u_2^{-}(s^{*})< 0, \quad \mbox{and}\quad \tau_1^i<\tau_2^{-}(s^{*})<\tau_2^i. $$ Then we consider system (\ref{ODE3}) with the data \begin{equation}\label{100302} (s, \rho)\mid_{u=u_2^{-}(s^{*})}~=~(s^{*}, \rho_2^{-}(s^{*})). \end{equation} \begin{lem} When $\delta>0$ is sufficiently small, the initial value problem (\ref{ODE3}), (\ref{100302}) has a solution $(\bar{s}, \bar{\rho})(u)$ on $(u_2^{-}(s^{*}), u_2^{-}(s^{*})+\delta)$. Moreover, this solution satisfies $\frac{{\rm d} s}{{\rm d} u}>0$ and $s^2 p'(\rho)-(1-us)^2<0$ in $(u_2^{-}(s^{*}), u_2^{-}(s^{*})+\delta)$. \end{lem} \begin{proof} The proof is similar to that of Lemma \ref{100503}; we omit the details.
\end{proof} Let $u=\bar{u}_1(s)$ be the inverse function of $s=\bar{s}(u)$ and $\bar{\rho}_1(s)=\bar{\rho}(\bar{u}_1(s))$. Clearly, $(\bar{u}_1, \bar{\rho}_1)(s)$ satisfies (\ref{ODE2}) in $\big(s^{*}, \bar{s}(u_2^{-}(s^{*})+\delta)\big)$. Moreover, by Lemma \ref{100503} we also have $$ 0~<~\bar{u}_1(s)~<~h(\bar{\rho}_1(s), s)\quad \mbox{and}\quad \bar{\tau}_1(s)<\tau_2^i $$ as $s\in\big(s^{*}, \bar{s}(u_2^{-}(s^{*})+\delta)\big)$. When $s>s^{*}$ there are two cases: (a)
there exists a $s_{**}>s^{*}$ such that $\bar{u}_1(s)<h(\bar{\rho}_1(s), s)$ as $s^*<s<s_{**}$ and $\bar{u}_1(s_{**})=h(\bar{\rho}_1(s_{**}), s_{**})=0$; (b)
there exists a $s_{**}>s^{*}$ such that $\bar{u}_1(s)<h(\bar{\rho}_1(s), s)$ as $s^*<s<s_{**}$ and $\bar{u}_1(s_{**})=h(\bar{\rho}_1(s_{**}), s_{**})<0$.
The solution for case (a) has the form $$ (u, \rho)(x, t)=\left\{
\begin{array}{ll}
(u_1, \rho_1)(s), & \hbox{$s<s^{*}$,} \\[4pt]
(\bar{u}_1, \bar{\rho}_1)(s), & \hbox{$s^{*}<s<s_{**}$,}\\[4pt] (0, \bar{\rho}_1(s_{**})), & \hbox{$s>s_{**}$.}
\end{array}
\right. $$ For case (b), since $ 1/s~>~\bar{u}_1(s)+\sqrt{p'(\bar{\rho}_1(s))} $ and $\bar{\tau}_1(s)<\tau_2^i$ as $s^*<s<s_{**}$, for any $s\in (s^{*}, s_{**})$ there exists an admissible compression shock with the speed $1/s$ and the front side state $(\bar{u}_1, \bar{\rho}_1)(s)$. Let the back side state of the shock $(\bar{u}_2, \bar{\rho}_2)(s)$ ($s^{*}<s\leq s_{**}$) be determined by (\ref{backside}). Then we have $ \bar{u}_2(s_{**})=\bar{u}_1(s_{**})<0$. Thus by $\bar{u}_2(s^{*})=u_2^{+}(s^{*})>0$ we know that there exists a $s_{s}\in (s^{*}, s_{**})$ such that $\bar{u}_2(s_{s})=0$. Hence, the problem (\ref{AE}), (\ref{IBV}) admits a discontinuous solution with two compression shocks. The solution has the form $$ (u, \rho)(x, t)=\left\{
\begin{array}{ll}
(u_1, \rho_1)(s), & \hbox{$s<s^{*}$,} \\[4pt]
(\bar{u}_1, \bar{\rho}_1)(s), & \hbox{$s^{*}<s<s_{s}$,}\\[4pt] (0, \bar{\rho}_2(s_{s})), & \hbox{$s>s_{s}$;}
\end{array}
\right. $$ see Figure \ref{Figu2}.
\vskip 4pt
We next discuss the case for $\tau_1(s_*)\in \{\tau_2^i\}\cup(0, \tau_1^i]$ or $\tau_0>\tau_3$. If $F(s)\geq 0$ as $\tau_2^i<\tau_1(s)\leq\tau_3$, then $${u}_2(s):=\left\{
\begin{array}{ll}
u_2^{+}(s), & \hbox{$F(s)=0$;} \\[4pt] u_2(s), & \hbox{otherwise}
\end{array}
\right. $$ is a continuous function on $(0, s_*]$. Moreover, ${u}_2(s_*)=u_1(s_*)<0$ and $\lim\limits_{s\rightarrow 0}{u}_2(s)=+\infty$. Hence, there exists a $s_{s}\in (0, s_{*})$ such that ${u}_2(s_{s})=0$, and consequently the problem (\ref{AE}), (\ref{IBV}) admits a discontinuous solution with a single compression shock; see Figure \ref{fig6}(right). If $F(s)$ is not nonnegative as $\tau_2^i<\tau_1(s)\leq\tau_3$, then the discussion will be similar to the previous case.
\subsection{Equation of state III} \subsubsection{$\tau_0\leq \tilde{\tau}_1$} The discussion is similar to that of section 4.1, since $\tau_1'(s)<0$ as $s>0$ and $p''(\tau)>0$ as $\tau<\tau_0$. \subsubsection{$\tilde{\tau}_1<\tau_0\leq \tilde{\tau}_2$} Let $s_{*}$ be determined by $$ \int_{\rho_0}^{\tilde{\rho}_1}\frac{1}{\rho}{\rm d}\rho~=~\int_{0}^{s_{*}}\frac{2u_0}{u_0 s-1} {\rm d}s. $$ Hence, we have $$ u_1(s)=u_0, \quad \rho_1(s)=\rho_0\exp\Big(\int_{0}^{s}\frac{2u_0}{u_0 s-1} {\rm d}s\Big), \quad 0<s<s_*. $$ We then have the following two cases: (1) $u_0< 1/s_{*}-b_1$; (2) $u_0\geq 1/s_{*}-b_1$.
\begin{figure}
\caption{\footnotesize Solutions with a single compression shock. Left: $u_0< 1/s_{*}-b_1$; right: $u_0\geq 1/s_{*}-b_1$.}
\label{fig7}
\end{figure}
If $u_0< 1/s_{*}-b_1$ then we consider the system (\ref{ODE2}) with the initial data \begin{equation}\label{4.2} (u, \rho)(s_*)~=~(u_0, \tilde{\rho}_1). \end{equation} As in Lemma \ref{lem4.1}, there exists a $s^{*}>s_*$ such that the solution $(u_1, \rho_1)(s)$ of the problem (\ref{ODE2}), (\ref{4.2}) satisfies $u_1(s)<h(\rho_1(s), s)$ as $s_*<s<s^*$ and $u_1(s^*)=h(\rho_1(s^*), s^{*})<0$. Moreover, for any $s\in (0, s^{*})$ there exists an admissible forward compression shock with the speed $1/s$ and the front side state $(u_1, \rho_1)(s)$. The back side state of the shock $(u_2, \rho_2)(s)$ can be determined by (\ref{backside}). It is easy to see that $u_2(s^{*})=u_1(s^*)<0$ and $\lim\limits_{s\rightarrow 0}u_2(s)=+\infty$. Hence, there exists a $s_{s}\in (0, s^{*})$ such that $u_2(s_{s})=0$, and consequently the problem (\ref{AE}), (\ref{IBV}) admits a discontinuous solution with a single compression shock; see Figure \ref{fig7}(left).
Next, we discuss the case of $u_0\geq 1/s_{*}-b_1$. By Corollary \ref{cor5}, we know that for any $s\in (0, s_{*})$, there exists an admissible forward compression shock with the speed $1/s$ and the front side state $(u_1, \rho_1)(s)$. The back side state of the shock $(u_2, \rho_2)(s)$ can be determined by (\ref{backside}). Clearly, $\lim\limits_{s\rightarrow 0}u_2(s)= +\infty$. Using $u_0> 1/s_{*}-b_1$, we also have $\lim\limits_{s\rightarrow s_*}\tau_2(s)=\tilde{\tau}_1$ and $\lim\limits_{s\rightarrow s_*}u_2(s)=u_0<0$. Thus, there must exist a $s_{s}\in (0, s_*)$ such that $u_2(s_s)=0$. The solution for this case is illustrated in Figure \ref{fig7}(right).
\subsubsection{$\tau_0> \tilde{\tau}_2$} We have the following three cases: \begin{itemize}
\item There exists a $s_{*}>0$ such that $u_1(s)<h(\rho_1(s), s)$ as $s<s_*$ and $u_1(s_*)=h(\rho_1(s_*), s_*)<0$ and $\tau_1(s_*)\in [\tilde{\tau}_2, \tau_0)$.
\item There exists a $s_{*}>0$ such that $u_1(s)<h(\rho_1(s), s)$ as $s<s_*$ and $\tau_1(s_*)=\tilde{\tau}_1$ and $u_1(s_*)\geq\frac{1}{s_*}-b_1$.
\item There exists a $s_{*}>0$ such that $u_1(s)<h(\rho_1(s), s)$ as $s<s_*$ and $u_1(s_*)=h(\rho_1(s_*), s_*)<0$ and $\tau_1(s_*)<\tilde{\tau}_1$. (Remark: $u_1(s_*)<\frac{1}{s_*}-b_1$ in this case.) \end{itemize}
We now discuss the first case. For $\tau_1>\tilde{\tau}_2$, we let $\kappa(\tau_1)$ be defined such that \begin{equation} \kappa(\tau_1)=\frac{p(\tilde{\tau}_{2})-p(\tau_1)}{\tilde{\tau}_{2}-\tau_1}. \end{equation} Then we have $-\kappa(\tau_1)>-p'(\tau_1)$, since $\tau_1>\tilde{\tau}_2$.
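The last inequality can also be seen from the mean value theorem: assuming, as for equation of state III, that $p''(\tau)>0$ for $\tau>\tilde{\tau}_2$, there is a $\theta\in(\tilde{\tau}_{2}, \tau_1)$ such that $$ \kappa(\tau_1)~=~\frac{p(\tilde{\tau}_{2})-p(\tau_1)}{\tilde{\tau}_{2}-\tau_1}~=~p'(\theta)~<~p'(\tau_1), $$ because $p'$ is increasing on $(\tilde{\tau}_{2}, \tau_1)$; hence $-\kappa(\tau_1)>-p'(\tau_1)$.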
Let $$\hat{\xi}(s)=u_1(s)+\tau_1(s)\sqrt{-\kappa(\tau_1(s))}, \quad F(s)=\frac{1}{s}-\hat{\xi}(s).$$ Then we have \begin{equation}\label{12202} \lim\limits_{s\rightarrow 0}F(s)=+\infty \end{equation}
and \begin{equation}\label{11501} \begin{aligned} F(s_*)&=\frac{1}{s_*}-u_1(s_*)-\tau_1(s_*)\sqrt{-\kappa(\tau_1(s_*))}\\&= \frac{1}{s_*}-u_1(s_*)-\tau_1(s_*)\sqrt{-p'(\tau_1(s_*))}+\tau_1(s_*)\Big(\sqrt{-p'(\tau_1(s_*))}-\sqrt{-\kappa(\tau_1(s_*))}\Big) \\&=\tau_1(s_*)\Big(\sqrt{-p'(\tau_1(s_*))}-\sqrt{-\kappa(\tau_1(s_*))}\Big)<0. \end{aligned} \end{equation} Since $ 1/s~>~u_1(s)+\sqrt{p'(\rho_1(s))} $ and $\tilde{\tau}_2<\tau_1(s)<\tau_0$ as $0<s<s_*$, for any $s\in (0, s_*)$, there exists an admissible compression shock with the speed $1/s$ and the front side state $(u_1, \rho_1)(s)$. The back side state of the shock $(u_2, \rho_2)(s)$ ($0<s\leq s_*$) can be determined by (\ref{backside}). Moreover, we have $u_2(s_{*})=u_1(s_*)<0$ and $\lim\limits_{s\rightarrow 0}u_2(s)=+\infty$. However, by (\ref{12202}) and (\ref{11501}) we know that $u_2(s)$ is not continuous in $(0, s_*)$: indeed, if $F(s)=0$ then (\ref{backside}) has two solutions $(u_{2}^{+}, \rho_{2}^{+})(s)$ and $(u_{2}^{-}, \rho_{2}^{-})(s)$, where $u_{2}^{+}(s)>u_{2}^{-}(s)$ and $\rho_{2}^{+}(s)>\rho_{2}^{-}(s)$.
If there exists a $s_{s}\in (0, s_*)$ such that $u_2(s_s)=0$, then the problem (\ref{AE}), (\ref{IBV}) admits a discontinuous solution with a single shock.
If $u_2(s)\neq0$ for all $s\in(0, s_*)$, then there must exist a $s^{*}\in (0, s_*)$ such that $u_2^{+}(s^{*})>0$, $u_2^{-}(s^{*})<0$, and $\tau_2^{-}(s^{*})=\tilde{\tau}_2$.
Then the discussion for $s>s^{*}$ is similar to that of section 4.3.2. The problem has a discontinuous solution with two compression shocks; see Figure \ref{fig10}.
\begin{figure}
\caption{\footnotesize Solutions with two compression shocks for $u_0<0$ and $\tau_0>\tilde{\tau}_2$.}
\label{fig10}
\end{figure}
We next discuss the second case. Similarly, for any $s\in (0, s_*)$, there exists an admissible compression shock with the speed $1/s$ and the front side state $(u_1, \rho_1)(s)$. The back side state of the shock $(u_2, \rho_2)(s)$ ($0<s\leq s_*$) can be determined by (\ref{backside}). Moreover, by $u_1(s_*)\geq\frac{1}{s_*}-b_1$ we have $\lim\limits_{s\rightarrow s_*}u_2(s)=u_1(s_*)<0$. Let $s_1$ be the point such that $\tau_1(s_1)=\tilde{\tau}_2$. Then the discussion can be divided into the following two cases: (1) $F(s)\geq 0$ as $s\in (0, s_1)$; (2) $F(s)$ is not nonnegative in $(0, s_1)$.
If $F(s)\geq 0$ as $s\in (0, s_1)$, we redefine $u_2(s)=\left\{
\begin{array}{ll}
u_2(s), & \hbox{$F(s)>0$;} \\
u_2^{+}(s), & \hbox{$F(s)=0$}
\end{array}
\right.$ as $0<s<s_1$. Then $u_2(s)$ is a continuous function on $(0, s_*)$. Thus, there exists a $s_{s}\in (0, s_{*})$ such that $u_2(s_{s})=0$, and consequently the problem (\ref{AE}), (\ref{IBV}) admits a discontinuous solution with a single compression shock. If $F(s)$ is not nonnegative on $(0, s_1)$ then the discussion is similar to the first case.
Actually, the discussion for the third case is similar to that of the second case. We omit the details.
\vskip 32pt \centerline{\sc \bf Acknowledgements} \vskip 8pt The first author is partially supported by the Natural Science Foundation of Zhejiang Province (LQ19A010003). The second author is partially supported by the grant of ``Shanghai Altitude Discipline''.
\vskip 32pt
\small
\end{document}
\begin{document}
\title{Adjoint cyclotomic multiple zeta values and cyclotomic multiple harmonic values}
\begin{abstract} We introduce adjoint cyclotomic multiple zeta values and cyclotomic multiple harmonic values. They are two variants of cyclotomic multiple zeta values, closely related to each other. They arise as key tools for the study of $p$-adic cyclotomic multiple zeta values. Moreover, cyclotomic multiple harmonic values provide an adelic lift to a cyclotomic generalization of finite multiple zeta values. We establish certain standard properties of these two objects. We consider two types of properties : some related to double shuffle relations, and some related to associator and Kashiwara-Vergne relations.
This is Part II-1 of \emph{$p$-adic cyclotomic multiple zeta values and $p$-adic pro-unipotent harmonic actions}. \end{abstract}
\tableofcontents
\section{Introduction}
\subsection{The algebraic theory of cyclotomic multiple zeta values}
Let $d$ be a positive integer, $(n_{i})_{d}=(n_{1},\ldots,n_{d})$ be a sequence of positive integers and $(\xi_{i})_{d}=(\xi_{1},\ldots,\xi_{d})$ be a sequence of roots of unity in $\mathbb{C}$, with $(\xi_{d},n_{d}) \not= (1,1)$. The following complex number is called a cyclotomic multiple zeta value (abbreviated as MZV$\mu_{N}$, where $N$ is a positive integer such that $\xi_{i}^{N}=1$ for all $i$) : \begin{equation} \label{eq:multizetas} \zeta \big( (n_{i})_{d};(\xi_{i})_{d} \big) = \sum_{0<m_{1}<\ldots<m_{d}} \frac{\big( \frac{\xi_{2}}{\xi_{1}}\big)^{m_{1}} \ldots \big( \frac{1}{\xi_{d}}\big)^{m_{d}}}{m_{1}^{n_{1}} \ldots m_{d}^{n_{d}}} . \end{equation} The adjective cyclotomic and the notation $\mu_{N}$ are omitted if $N=1$. One says that $n=n_{1}+\ldots+n_{d}$ is the weight of $\big( (n_{i})_{d};(\xi_{i})_{d} \big)$ and that $d$ is its depth. Denoting by $(\epsilon_{n},\ldots,\epsilon_{1})=(\underbrace{0,\ldots,0}_{n_{d}-1},\xi_{d},\ldots,\underbrace{0,\ldots,0}_{n_{1}-1}, \xi_{1})$, we have \begin{equation} \label{eq:multizetas integral} \zeta \big( (n_{i})_{d};(\xi_{i})_{d} \big) = (-1)^{d}\int_{0}^{1} \frac{dt_{n}}{t_{n}-\epsilon_{n}} \int_{0}^{t_{n-1}} \ldots \int_{0}^{t_{2}} \frac{dt_{1}}{t_{1}-\epsilon_{1}}. \end{equation} \indent This shows that MZV$\mu_{N}$'s are Betti - de Rham periods of the pro-unipotent fundamental groupoid of $\mathbb{P}^{1} - \{0,\mu_{N},\infty\}$ (\cite{Deligne Goncharov}, \S5.16). \newline\indent Let $p$ be a prime number which does not divide $N$. $p$-adic cyclotomic multiple zeta values ($p$MZV$\mu_{N}$'s) are defined as $p$-adic analogues of the integrals in (\ref{eq:multizetas integral}) \cite{Deligne Goncharov} \cite{Furusho 1} \cite{Furusho 2} \cite{Yamashita} \cite{Unver MZV} \cite{U2}, \cite{I-1}, \cite{I-3}. 
They are elements $\zeta_{p,\alpha}\big((n_{i})_{d};(\xi_{i})_{d}\big)$ of the field $K_{p}$ generated by $\mathbb{Q}_{p}$ and a primitive $N$-th root of unity, where $\alpha \in \mathbb{Z} \cup \{\pm \infty\} - \{0\}$ is the number of iterations of the Frobenius of the crystalline pro-unipotent fundamental groupoid of $\mathbb{P}^{1} - \{0,\mu_{N},\infty\}$. They are reductions of $p$-adic periods by \cite{Yamashita}. \newline\indent In the framework of $\pi_{1}^{\un}(\mathbb{P}^{1} - \{0,\mu_{N},\infty\})$, MZV$\mu_{N}$'s resp. $p$MZV$\mu_{N}$'s often appear via their non-commutative generating series $\Phi_{\KZ}$ resp. $\Phi_{p,\alpha}$, which is an element of the $\mathbb{C}$-algebra, resp. $K_{p}$-algebra of non-commutative power series over the formal variables $e_{x}$ where $x$ is $0$ or a $N$-th root of unity. The coefficient of $e_{0}^{n_{d}-1}e_{\xi_{d}}\ldots e_{0}^{n_{1}-1}e_{\xi_{1}}$ in $\Phi_{\KZ}$, resp. in $\Phi_{p,\alpha}$ is $(-1)^{d}\zeta\big((n_{i})_{d};(\xi_{i})_{d}\big)$, resp. $(-1)^{d}\zeta_{p,\alpha}\big((n_{i})_{d};(\xi_{i})_{d}\big)$. \newline\indent According to the philosophy of periods, one wants to study the algebra generated by MZV$\mu_{N}$'s resp. $p$MZV$\mu_{N}$'s over the $N$-th cyclotomic field $k_{N}$, using the motivic nature of $\pi_{1}^{\un}(\mathbb{P}^{1} - \{0,\mu_{N},\infty\})$ established in \cite{Deligne Goncharov}. There are many known examples of families of polynomial equations in that algebra. Three among them are particularly meaningful and well-known : these are the regularized double shuffle equations, the associator equations, and the Kashiwara-Vergne equations. We will refer to them as the standard equations. At least in the $N=1$ case, it is conjectured that each of them generates all polynomial equations satisfied by MZV$\mu_{N}$'s, resp. $p$MZV$\mu_{N}$'s. 
Moreover, one can show that the sets of solutions to these equations carry torsor structures for the Ihara action, which is a byproduct of the motivic Galois action on $\pi_{1}^{\un}(\mathbb{P}^{1} - \{0,\mu_{N},\infty\})$ (\cite{Deligne Goncharov}, \S5.12).
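To fix ideas, in the simplest case $N=1$ the definition (\ref{eq:multizetas}) already encodes classical relations of this type; for instance, Euler's identity in weight $3$ reads, in the present notation, $$ \zeta\big((1,2);(1,1)\big)~=~\sum_{0<m_{1}<m_{2}}\frac{1}{m_{1}\,m_{2}^{2}}~=~\zeta(3)~=~\zeta\big((3);(1)\big). $$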
\subsection{A computation of $p$-adic cyclotomic multiple zeta values which keeps track of the motivic Galois action}
In \cite{I-1} \cite{I-2} \cite{I-3}, we have found formulas for $p$-adic cyclotomic multiple zeta values, which are analogues of the expression of sums of series (\ref{eq:multizetas}). \newline\indent In these formulas, $\Phi_{p,\alpha}$ is involved via $\Ad_{\Phi_{p,\alpha}}(e_{1}) = \Phi_{p,\alpha}^{-1}e_{1}\Phi_{p,\alpha}$ and its images $\Ad_{\Phi_{p,\alpha}^{(\xi)}}(e_{\xi})$, $\xi \in \mu_{N}(K_{p})$, by the automorphisms $(z \mapsto \xi z)_{\ast}$ of $\pi_{1}^{\un,\dR}(\mathbb{P}^{1} - \{0,\mu_{N},\infty\})$. The correspondence between the coefficients of $\{\Phi_{p,\alpha}^{(\xi)}\}^{-1}e_{\xi}\Phi^{(\xi)}_{p,\alpha}$ and the coefficients of $\Phi_{p,\alpha}$ will be discussed in a subsequent paper. \newline\indent Concretely, computing $p$MZV$\mu_{N}$'s means computing the Frobenius structure of the KZ differential equation (equation (\ref{eq: nabla KZ})) associated with $\pi_{1}^{\un,\dR}(\mathbb{P}^{1} - \{0,\mu_{N},\infty\})$, whose solutions are called multiple polylogarithms (equation (\ref{eq:MPL})) and admit the following power series expansion at $0$ : \begin{equation} \label{eq:Li 0} \Li\big((n_{i})_{d};(\xi_{i})_{d} \big)(z) = \sum_{0<m_{1}<\ldots<m_{d}} \frac{\big( \frac{\xi_{2}}{\xi_{1}} \big)^{m_{1}} \ldots \big( \frac{z}{\xi_{d}} \big)^{m_{d}}}{m_{1}^{n_{1}} \ldots m_{d}^{n_{d}}}. \end{equation} \noindent The computation of $p$MZV$\mu_{N}$'s arises as a characterization of the coefficients of each $\Ad_{\Phi_{p,\alpha}^{(\xi)}}(e_{\xi})$ in terms of weighted cyclotomic multiple harmonic sums, which are essentially the coefficients above (below, $m$ is any positive integer and the other indices are as above) : \begin{equation} \label{eq:mult har sums} \text{har}_{m} \big((n_{i})_{d} ;(\xi_{i})_{d+1} \big) = m^{n_{d}+\ldots+n_{1}} \sum_{0<m_{1}<\ldots<m_{d}<m} \frac{\big( \frac{\xi_{2}}{\xi_{1}} \big)^{m_{1}} \ldots \big(\frac{\xi_{d+1}}{\xi_{d}}\big)^{m_{d}} \big(\frac{1}{\xi_{d+1}}\big)^{m}}{m_{1}^{n_{1}}\ldots m_{d}^{n_{d}}}. 
\end{equation} \indent The computation gives a formula for the coefficients of $\Ad_{\Phi_{p,\alpha}^{(\xi)}}(e_{\xi})$, as sums of series whose terms are linear combinations over $k_{N}$ of the numbers $\text{har}_{p^{\alpha}} \big((n_{i})_{d} ;(\xi_{i})_{d+1} \big)$ which we call the prime weighted cyclotomic multiple harmonic sums (\cite{I-1}, Definition B.0.1). It also gives a converse of that formula, which is the following ; below, the notation $f[w]$ means the coefficient of a word $w$ in a non-commutative formal power series $f$, and the sum of series converges in $K_{p}$ :
\begin{multline} \label{eq:formula for n=1} \har_{p^{\alpha}} \big( (n_{i})_{d};(\xi_{i})_{d+1} \big) = (-1)^{d} \sum_{\xi \in \mu_{N}(K)} \sum_{l=0}^{\infty} \xi^{-p^{\alpha}} \Ad_{\Phi_{p,\alpha}^{(\xi)}}(e_{\xi}) \Big[ e_{0}^{l} e_{\xi_{d+1}}e_{0}^{n_{d}-1}e_{\xi_{d}}\ldots e_{0}^{n_{1}-1}e_{\xi_{1}} \Big]. \end{multline}
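As a simple illustration, in the case $N=1$, $d=1$, $n_{1}=1$ (all roots of unity equal to $1$), the weighted sum (\ref{eq:mult har sums}) reduces to a classical harmonic number: $$ \har_{m}\big((1);(1,1)\big)~=~m\sum_{0<m_{1}<m}\frac{1}{m_{1}}~=~m\,H_{m-1}, \qquad\text{where } H_{m-1}=\sum_{k=1}^{m-1}\frac{1}{k}. $$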
\indent We have a total of three ways of viewing prime weighted multiple harmonic sums : (\ref{eq:Li 0}), (\ref{eq:mult har sums}) and (\ref{eq:formula for n=1}) ; this will make three frameworks of computation, which we represent respectively by the symbols $\int$, $\Sigma$ and $\int_{1,0}$. One of the central ideas in \cite{I-2} and \cite{I-3} was to express certain byproducts of the Frobenius both in the framework of integrals and in the framework of series, and to use the fact that the two expressions are equal ; this gave formulas for $p$MZV$\mu_{N}$'s in terms of series. \newline\indent Our computation of $p$MZV$\mu_{N}$'s keeps track of the motivic nature of $\pi_{1}^{\un}(\mathbb{P}^{1} - \{0,\mu_{N},\infty\})$ : the main formulas are expressed by means of new group actions which we call pro-unipotent harmonic actions, which are certain $p$-adic byproducts of the motivic Galois action and of the Ihara action mentioned above.
\subsection{Relating the explicit formulas for $p$-adic cyclotomic multiple zeta values and their algebraic theory}
The purpose of this paper and of the two subsequent ones \cite{II-2} and \cite{II-3} is to relate the explicit formulas for $p$MZV$\mu_{N}$'s obtained in \cite{I-1}, \cite{I-2} and \cite{I-3} to the algebraic theory of $p$MZV$\mu_{N}$'s. \newline\indent The main reason for doing it is the fact that our formulas for $p$MZV$\mu_{N}$'s keep track of the motivic Galois action (via the pro-unipotent harmonic actions), and that they are explicit. Another reason for doing it is Kaneko-Zagier's notion of finite multiple zeta values \cite{Kaneko Zagier}, which are in $(\underset{p\text{ prime}}{\prod} \mathbb{Z}/p\mathbb{Z}) \big/ (\underset{p\text{ prime}}{\bigoplus} \mathbb{Z}/p\mathbb{Z})$ \cite{Kaneko Zagier}, and Rosen's lift of finite multiple zeta values, called truncated multiple zeta values, which are elements of the complete topological ring $\varprojlim_{n} (\underset{p\text{ prime}}{\prod} \mathbb{Z}/p^{n}\mathbb{Z}) \big/ (\underset{p\text{ prime}}{\bigoplus} \mathbb{Z}/p^{n}\mathbb{Z})$ \cite{Rosen}. In this paper, certain of our results will recover certain known results on these two notions, and interpret them in terms of $\pi_{1}^{\un}(\mathbb{P}^{1} - \{0,1,\infty\})$. \newline\indent We are going to consider intrinsically the numbers which appear in the right-hand side of (\ref{eq:formula for n=1}) (as well as their complex analogues) and the numbers $\har_{p^{\alpha}}(w)$ involved in the formulas of \cite{I-1}, \cite{I-2}, \cite{I-3}, and turn them into notions called \emph{adjoint $p$-adic cyclotomic multiple zeta values} (Ad$p$MZV$\mu_{N}$'s) (Definition \ref{def adjoint}), and \emph{cyclotomic multiple harmonic values} (MHV$\mu_{N}$'s) (Definition \ref{def harmonic}). We are going to view them as two types of ``periods'' (in a generalized sense for the second case, as will be explained in \cite{II-3}). 
Since each $\har_{p^{\alpha}}(w)$ is only an algebraic number, in order to turn it into an object which can be regarded as an interesting ``period'' comparable to MZV's, we are going to consider all $p$'s at the same time (as in the work of Kaneko-Zagier and Rosen), or all $\alpha$'s at the same time, or both ; and our cyclotomic multiple harmonic values will lift both Kaneko-Zagier's finite multiple zeta values and Rosen's truncated multiple zeta values. We will establish in \S2.2 the setting for computations on these objects, in the three frameworks $\int_{1,0}$, $\int$ and $\Sigma$. \newline\indent We adopt the following principle, for this paper and all the subsequent ones : for each question on $p$MZV$\mu_{N}$'s which we want to study by means of explicit formulas, we find its analogue for the adjoint variant of $p$MZV$\mu_{N}$'s, we solve that adjoint variant, and it remains to pass from Ad$p$MZV$\mu_{N}$'s to $p$MZV$\mu_{N}$'s ; this last step is delayed to other works. In the present sequence of papers, we only consider the Ad$p$MZV$\mu_{N}$'s which are the natural version of $p$MZV$\mu_{N}$'s adapted to dealing with explicit formulas. \newline\indent With the above ideas, the first step of our explicit version of the algebraic theory of $p$MZV$\mu_{N}$'s is thus to develop intrinsically the properties of adjoint $p$-adic cyclotomic multiple zeta values and cyclotomic multiple harmonic values, and this is the goal of this paper. In the end, our explicit version of the algebraic theory of $p$MZV$\mu_{N}$'s will be formulated as a ``comparison'' between the properties of Ad$p$MZV$\mu_{N}$'s and those of MHV$\mu_{N}$'s, where MHV$\mu_{N}$'s are defined by explicit formulas.
\subsection{Summary of the paper}
We are going to find ``adjoint'' and ``harmonic'' analogues of the standard properties of cyclotomic multiple zeta values.
The analogues of algebraic relations for cyclotomic multiple harmonic values which we find involve infinite sums of prime weighted cyclotomic multiple harmonic sums, which will be convergent both for the weight-adic topology and for the $p$-adic topology, uniformly with respect to $(p,\alpha)$, and with certain bounds on the $p$-adic norms of the rational coefficients. At first sight, the presence of infinite sums makes us leave the framework of algebraic geometry and periods. However, we are going to prove a close relation between the ``adjoint'' and the ``harmonic'' variants of the algebraic properties of MZV$\mu_{N}$'s, namely, a torsor structure using the pro-unipotent harmonic action defined in \cite{I-3}.
The paper is organized as follows.
In \S2 we review prerequisites and we define formally the main objects that we are going to study : adjoint cyclotomic multiple zeta values, cyclotomic multiple harmonic values, and related objects. We discuss the meaning of the definitions and we establish the setting for making computations with these objects.
In \S3 we focus on double shuffle relations and we establish some natural adjoint and harmonic variants of them. Below, $\DS_{\mu}$ is the double shuffle scheme defined by Racinet in \cite{Racinet}, $\circ^{\smallint_{1,0}}$ is the Ihara action, $\circ^{\smallint_{1,0}}_{\Ad}$ is the adjoint Ihara action introduced in \cite{I-2}, Definition 1.1.3, $\circ_{\har}^{\smallint_{1,0}}$ is the pro-unipotent harmonic action introduced in \cite{I-3}, Definition 2.1.2 ; $\mathcal{O}^{\mathcyr{sh}}$ is the shuffle Hopf algebra over the alphabet $e_{0\cup \mu_{N}}$, graded by the number of letters of words called their weight. The map $\comp^{\har,\Ad}$ is defined in \S2.2.1. \newline\newline
\textbf{Theorem 1.} \emph{ \newline\indent (i) (adjoint double shuffle) For any $\mu$, there exists an explicit affine subscheme $\DS_{\mu,\Ad}$ of $\Spec(\mathcal{O}^{\mathcyr{sh}})$. One has a canonical morphism $\Ad(e_{1}) : \DS_{\mu} \rightarrow \DS_{\mu,\Ad}$, which is an isomorphism onto its image, and sends Racinet's torsor to a torsor for the adjoint Ihara product. \newline\indent In particular, the non-commutative generating series of Ad$p$MZV$\mu_{N}$'s, resp. AdMZV$\mu_{N}$'s, is in $\DS_{0,\Ad}(K_{p})$, resp. $\DS_{2i\pi,\Ad}(\mathbb{C})$. \newline\indent (ii) (harmonic double shuffle) There exists an explicit affine ind-scheme $\DS_{\har}$, a sub-ind-scheme of $\Spf(\mathcal{O}^{\mathcyr{sh}})$, which can be obtained in each of the three frameworks : ``$\int_{1,0}$'', ``$\int$'' and ``$\Sigma$''. \newline\indent The non-commutative generating series of MHV$\mu_{N}$'s is in $\DS_{\har}((\prod_{p}K_{p})^{\mathbb{N}})$. \newline\indent (iii) (relation between adjoint and harmonic) The map $\comp^{\har,\Ad}$ sends $\DS_{0,\Ad}$ to $\DS_{\har}$ and its image is a torsor under the pro-unipotent harmonic action $\circ_{\har}^{\smallint_{1,0}}$ of the group $(\Ad_{\DS_{0}}(e_{1}),\circ^{\smallint_{1,0}}_{\Ad})$. \newline\indent (iv) More generally, adjoint double shuffle equations are satisfied by adjoint multiple polylogarithms and harmonic double shuffle equations are satisfied by harmonic multiple polylogarithms.} \newline\newline We note that in (ii) above, the frameworks of computation give harmonic double shuffle equations which look different at first sight, but we can prove that they are equivalent.
In \S4 we focus on associator and Kashiwara-Vergne equations. We consider an analogy between the passage from associator equations to Kashiwara-Vergne equations constructed in \cite{AT} and \cite{AET} and the passage from double shuffle relations to adjoint double shuffle relations constructed in \S3. The main result is the following. \newline \newline \textbf{Theorem 2.} (rough version) \newline\indent \emph{(i) Kashiwara-Vergne equations arise naturally as a property of ($p$-adic, complex) adjoint MZV's rather than as a property of ($p$-adic, complex) MZV's and naturally amount to certain polynomial equations on adjoint ($p$-adic, complex) MZV's. \newline \indent (ii) There are equations satisfied by harmonic multiple polylogarithms which have the same source as associator equations and Kashiwara-Vergne equations (functoriality of the KZ equation).} \newline \newline In \S5, as a corollary, we explain that this paper recasts the study of finite and symmetric (or symmetrized) multiple zeta values (see \cite{NoteCRAS}) as a particular case and a byproduct of the study of adjoint MZV's and MHV's, which is the theme arising naturally from the study of $p$MZV$\mu_{N}$'s via explicit formulas. Indeed, finite multiple zeta values are reductions of our multiple harmonic values modulo large primes, and symmetric multiple zeta values are a particular case of our adjoint multiple zeta values ; thus they satisfy a particular case of the equations of Theorem 1 and Theorem 2. We introduce finite cyclotomic multiple zeta values and finite multiple polylogarithms, to which we will more generally relate our study of $p$MZV$\mu_{N}$'s via explicit formulas.
In \cite{II-2} we will explain how to recover certain properties of Ad$p$MZV$\mu_{N}$'s from the explicit formulas and the properties of MHV$\mu_{N}$'s, which will mostly answer a question of Deligne and Goncharov. In \cite{II-3} we will formalize the fact that MHV$\mu_{N}$'s can be regarded as periods in a generalized sense, using motivic multiple zeta values. In \cite{III-1} we will study adjoint and harmonic distribution relations.
An open question is whether $\Ad(e_{1}) : \DS_{\mu} \rightarrow \DS_{\mu,\Ad}$ is an isomorphism. We see this question as analogous to the conjecture made in \cite{AT} of an isomorphism relating associators and solutions to the Kashiwara-Vergne problem.
\emph{Acknowledgments.} I thank Masanobu Kaneko, Ivan Marin and Pierre Cartier for useful discussions, and an anonymous referee whose remarks and suggestions enabled me to improve this paper. This work has been done at Universit\'{e} Paris Diderot with support of ERC grant 257638, Universit\'{e} de Strasbourg with support of Labex IRMIA, and Universit\'{e} de Gen\`{e}ve with support of NCCR SwissMAP.
\section{Definitions and setting for computations\label{review}}
We review some material on pro-unipotent fundamental groupoids (\S2.1), we define adjoint cyclotomic multiple zeta values, in the $p$-adic case (\S2.2) and in the complex case (\S2.3), we define cyclotomic multiple harmonic values (\S2.4) and their ``overconvergent'' variants (\S2.5). At the same time we establish the setting for making computations with all these objects and we show that replacing cyclotomic multiple zeta values by their adjoint variants does not change the algebra that they generate. Finally we define more generally adjoint and harmonic analogues of multiple polylogarithms (\S2.6), which admit respectively adjoint cyclotomic multiple zeta values and cyclotomic multiple harmonic values as special values.
Throughout this text, we denote by $\mathbb{N}$, resp. $\mathbb{N}^{\ast}$, the set of nonnegative, resp. positive, integers ; $d$ and $n_{i}$ ($1 \leqslant i \leqslant d$) denote positive integers, and $\xi_{i}$ ($1 \leqslant i \leqslant d$ or $1 \leqslant i \leqslant d+1$ depending on the context) are $N$-th roots of unity.
\numberwithin{equation}{subsection}
\subsection{Review on pro-unipotent fundamental groupoids}
\subsubsection{Generalities on the Betti and de Rham realizations of pro-unipotent fundamental groupoids}
Let $X$ be a smooth algebraic variety over a field $K$ of characteristic zero, with $X= \overline{X} - D$ where $\overline{X}$ is proper and smooth and $D$ is a normal crossings divisor. \newline\indent The de Rham pro-unipotent fundamental groupoid $\pi_{1}^{\un,\dR}\big( X\big)$ (\cite{Deligne}, \S10.27, \S10.30 (ii)) is a groupoid over $X$ in the category of affine schemes over $K$, whose base-points are the points of $X$, and the points of the punctured tangent spaces at points of $D$ (\cite{Deligne}, \S15). Assuming that $H^{1}(\overline{X},\mathcal{O}_{\overline{X}})=0$, which holds in the examples of this paper, one also has a canonical base-point $\omega_{\dR}$ (\cite{Deligne}, \S12.4) with, for any couple of base-points $(x,y)$, an isomorphism of schemes $\pi_{1}^{\un,\dR}(X,y,x)\simeq \pi_{1}^{\un,\dR}(X,\omega_{\dR})$, and these isomorphisms are compatible with the groupoid structure. The bundle $\pi_{1}^{\un,\dR}(X,\omega_{\dR}) \times X$ carries the universal unipotent integrable connection on $X$. \newline\indent Let us now assume that we have an embedding $K \hookrightarrow \mathbb{C}$. Then we also have the Betti pro-unipotent fundamental groupoid $\pi_{1}^{\un,\B}(X \times_{\Spec(K)} \Spec(\mathbb{C}))$, which is another groupoid in the category of affine schemes over $X \times_{\Spec(K)} \Spec(\mathbb{C})$, defined as the Malcev completion of the topological fundamental groupoid of $X(\mathbb{C})$ \cite{Deligne}. \newline\indent Chen's theorem \cite{Chen} or, equivalently, the Riemann-Hilbert correspondence \cite{Deligne equa diff} restricted to unipotent objects, gives a natural isomorphism \begin{equation} \label{eq:isomorphism} \pi_{1}^{\un,\B}(X) \times_{\Spec(K)} \Spec(\mathbb{C}) \buildrel \sim \over \longrightarrow \pi_{1}^{\un,\dR}(X) \times_{\Spec(K)} \Spec(\mathbb{C}). \end{equation} Its coefficients are iterated path integrals on $X$ in Chen's sense \cite{Chen}. 
If $X$ is defined over a number field, they are periods, called the Betti-de Rham periods of $\pi_{1}^{\un}(X)$.
\subsubsection{The de Rham pro-unipotent fundamental groupoid of $\mathbb{P}^{1} - \{0,\mu_{N},\infty\}$}
Let $X=(\mathbb{P}^{1} - \{0,\mu_{N},\infty\}) / K$ where $K$ contains a primitive $N$-th root of unity.
\newline\indent By \cite{Deligne}, \S12, the affine scheme $\pi_{1}^{\un,\dR}(X,\omega_{\dR})$ is canonically isomorphic to $\Spec(\mathcal{O}^{\mathcyr{sh}})\times_{\Spec(\mathbb{Q})} \Spec(K)$, where $\mathcal{O}^{\mathcyr{sh}}$ is the shuffle Hopf algebra on the alphabet $e_{0\cup \mu_{N}} = \{e_{x}\text{ }|\text{ }x \in \{0\} \cup \mu_{N}(K)\}$. By definition, $\mathcal{O}^{\mathcyr{sh}}$ is the $\mathbb{Q}$-vector space $\mathbb{Q}\langle e_{0\cup \mu_{N}} \rangle$ which admits as a basis the set of words on $e_{0 \cup \mu_{N}}$ including the empty word, graded by the number of letters of words called their weight, endowed with the following operations : the shuffle product $\mathcyr{sh}$ defined by $(e_{i_{l+l'}}\ldots e_{i_{l+1}})\text{ }\mathcyr{sh}\text{ }(e_{i_{l}} \ldots e_{i_{1}}) = \sum\limits_{\sigma} e_{i_{\sigma^{-1}(l+l')}} \ldots e_{i_{\sigma^{-1}(1)}}$ where the sum is over permutations $\sigma$ of $\{1,\ldots,l+l'\}$ such that $\sigma(1)<\ldots<\sigma(l)$ and $\sigma(l+1)<\ldots<\sigma(l+l')$ ; the deconcatenation coproduct $\Delta_{\dec}$ defined by $\Delta_{\dec}(e_{i_{l}}\ldots e_{i_{1}}) = \sum\limits_{l'=0}^{l} e_{i_{l}}\ldots e_{i_{l'+1}} \otimes e_{i_{l'}} \ldots e_{i_{1}}$ ; the counit $\epsilon$ equal to the augmentation map ; the antipode $S$ defined by $S(e_{i_{l}}\ldots e_{i_{1}}) = (-1)^{l} e_{i_{1}}\ldots e_{i_{l}}$.
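\newline\indent For instance, on words of one and two letters, these operations read as follows (a direct unwinding of the definitions, with $a,b \in \{0\} \cup \mu_{N}(K)$) : \begin{equation*} e_{a}\text{ }\mathcyr{sh}\text{ }e_{b} = e_{a}e_{b} + e_{b}e_{a}, \qquad \Delta_{\dec}(e_{b}e_{a}) = e_{b}e_{a} \otimes 1 + e_{b} \otimes e_{a} + 1 \otimes e_{b}e_{a}, \qquad S(e_{b}e_{a}) = e_{a}e_{b}. \end{equation*} In particular, a functional $f$ satisfying the shuffle equations must satisfy $f[e_{a}e_{b}] + f[e_{b}e_{a}] = f[e_{a}]f[e_{b}]$.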
\begin{Notation} Let $K\langle\langle e_{0 \cup \mu_{N}} \rangle\rangle$ be the $K$-algebra of non-commutative formal power series over the variables $e_{x}$, $x \in \{0\} \cup \mu_{N}(K)$ and, for $f\in K\langle\langle e_{0 \cup \mu_{N}} \rangle\rangle$ and $w$ a word over the alphabet
$\{e_{x}\text{ }|\text{ }x \in \{0\} \cup \mu_{N}(K)\}$, let $f[w] \in K$ be the coefficient of $w$ in $f$. \end{Notation}
The group scheme $\Spec(\mathcal{O}^{\mathcyr{sh}})$ is pro-unipotent, and the completed dual of the Hopf algebra $\mathcal{O}^{\mathcyr{sh}}$ is $K \langle \langle e_{0\cup \mu_{N}}\rangle \rangle$ viewed as the Hopf algebra obtained as the completion of the universal enveloping algebra of the free Lie algebra in the variables $e_{x}$, $x \in \{0\} \cup \mu_{N}(K)$. We will denote by $\Delta_{\mathcyr{sh}}$ its coproduct. We have
\begin{equation} \label{eq:shuffle equation} \begin{array}{ll} \Spec(\mathcal{O}^{\mathcyr{sh}})(K) & = \{ f \in K\langle\langle e_{0 \cup \mu_{N}} \rangle\rangle \text{ }|\text{ }\forall w,w' \text{ words on }e_{0 \cup \mu_{N}}, f[w\text{ }\mathcyr{sh}\text{ }w']=f[w]f[w'],\text{ and }f[\emptyset] = 1 \}
\\ & = \{ f \in K\langle\langle e_{0 \cup \mu_{N}} \rangle\rangle \text{ }|\text{ }\Delta_{\mathcyr{sh}}(f) = f \otimes f \}, \end{array} \end{equation} \begin{equation} \label{eq:shuffle equation modulo products} \begin{array}{ll}
\Lie(\Spec(\mathcal{O}^{\mathcyr{sh}})(K)) & = \{ f \in K \langle\langle e_{0 \cup \mu_{N}} \rangle\rangle \text{ }|\text{ }\forall w,w' \text{ words on }e_{0 \cup \mu_{N}},\text{ }f[w\text{ }\mathcyr{sh}\text{ }w']=0\}
\\ & = \{ f \in K \langle\langle e_{0 \cup \mu_{N}} \rangle\rangle \text{ }|\text{ }\Delta_{\mathcyr{sh}}(f) = f \otimes 1 + 1 \otimes f \}. \end{array} \end{equation} \newline\indent The canonical connection on $\pi_{1}^{\un,\dR}(X,\omega_{\dR})\times X$ in the sense of \cite{Deligne}, \S12 is the Knizhnik-Zamolodchikov (KZ) connection : \begin{equation} \label{eq: nabla KZ} \nabla_{\KZ} : f \mapsto df - \bigg( e_{0} f \frac{dz}{z} + \sum_{\xi \in \mu_{N}(K)} e_{\xi} f \frac{dz}{z-\xi} \bigg). \end{equation} \indent \begin{Notation} \label{la premiere notation} (\cite{Deligne Goncharov}, \S5), let $\Pi = \pi_{1}^{\un,\dR}(X,\omega_{\dR})$ and $\Pi_{1,0} = \pi_{1}^{\un,\dR}(X,-\vec{1}_{1},\vec{1}_{0})$ where $\vec{v}_{x}$ means the tangent vector $\vec{v}$ at $x$. \end{Notation}
\subsubsection{Comparison between the Betti and de Rham pro-unipotent fundamental groupoid of $\mathbb{P}^{1} - \{0,\mu_{N},\infty\}$}
Following \cite{Deligne Goncharov}, \S5.16, let $dch \in \pi_{1}^{\un,\B}(X,\vec{1}_{0},\vec{1}_{1})(\mathbb{C})$ be the image, by the Malcev completion map $ \pi_{1}(X,\vec{1}_{0},\vec{1}_{1}) \rightarrow \pi_{1}^{\un,\B}(X,\vec{1}_{0},\vec{1}_{1})(\mathbb{C})$, of the homotopy class of $\gamma : t \in [0,1] \mapsto t \in [0,1]$ ; let \begin{equation} \label{eq:Phi KZ} \Phi_{\KZ} = \comp_{\B,\dR}(dch) \in \Pi_{1,0}(\mathbb{C}), \end{equation} which appeared first in \cite{Drinfeld}, \S2 in the $N=1$ case ; $\Phi_{\KZ}$ is the non-commutative generating series of MZV$\mu_{N}$'s : indeed, given the above definition, the formula for cyclotomic multiple zeta values as iterated integrals (\ref{eq:multizetas}) amounts to : \begin{equation} \label{eq:coefficient} \zeta \big( (n_{i})_{d};(\xi_{i})_{d} \big) = (-1)^{d}\Phi_{\KZ}[e_{0}^{n_{d}-1}e_{\xi_{d}}\ldots e_{0}^{n_{1}-1}e_{\xi_{1}}]. \end{equation} \indent Multiple polylogarithms, abbreviated as MPL's \cite{Goncharov}, are multivalued holomorphic functions on $X(\mathbb{C})$, defined as iterated integrals in the sense of Chen \cite{Chen}, such that the non-commutative generating series $\Li = 1 + \sum\limits_{\substack{n \in \mathbb{N}^{\ast} \\ (\epsilon_{1},\ldots,\epsilon_{n}) \in (\{0\} \cup \mu_{N}(K))^{n}}} \Li\big( e_{\epsilon_{n}} \cdots e_{\epsilon_{1}} \big)e_{\epsilon_{n}} \cdots e_{\epsilon_{1}}$ defines a solution to the $\KZ$ equation (\ref{eq: nabla KZ}). For $\gamma$ a differentiable topological path on $\mathbb{P}^{1}(\mathbb{C})$ such that $\gamma\big((0,1)\big) \subset (\mathbb{P}^{1} - \{0,\mu_{N},\infty\})(\mathbb{C})$, $\gamma'(0)\not= 0$ and $\gamma'(1)\not= 0$, \begin{equation} \label{eq:MPL} \displaystyle\Li\big( e_{\epsilon_{n}} \cdots e_{\epsilon_{1}} \big)(\gamma) = \int_{t_{n}=0}^{1} \gamma^{\ast}\bigg(\frac{dz}{z-\epsilon_{n}}\bigg)(t_{n}) \int_{t_{n-1}=0}^{t_{n}} \ldots \gamma^{\ast}\bigg(\frac{dz}{z-\epsilon_{2}}\bigg)(t_{2}) \int_{t_{1}=0}^{t_{2}} \gamma^{\ast}\bigg( \frac{dz}{z-\epsilon_{1}}\bigg)(t_{1}). 
\end{equation} When that integral diverges, (\ref{eq:MPL}) means the regularized iterated integral defined by considering the similar integral on $\gamma([\epsilon,1-\epsilon'])$, its asymptotic expansion when $\epsilon,\epsilon' \rightarrow 0$, which lies in $\mathbb{C}[[\epsilon,\epsilon']][\log(\epsilon),\log(\epsilon')]$, and finally the constant term of this asymptotic expansion. It depends only on the homotopy class of $\gamma$, in the extended sense which includes tangential base-points, i.e. it depends on $\gamma'(0)$ and $\gamma'(1)$.
\newline\indent If we take $\gamma(0)=0$, the formal power series expansion of the iterated integrals (\ref{eq:MPL}) at $z=0$ converges for $|z|<1$ : \begin{equation} \label{eq:multiple polylogarithms power series expansion} \Li[e_{0}^{n_{d}-1}e_{\xi_{d}}\ldots e_{0}^{n_{1}-1}e_{\xi_{1}}](z) = (-1)^{d} \sum_{0<m_{1}<\ldots <m_{d}} \frac{\big( \frac{\xi_{2}}{\xi_{1}} \big)^{m_{1}} \ldots \big(\frac{z}{\xi_{d}}\big)^{m_{d}}}{m_{1}^{n_{1}}\ldots m_{d}^{n_{d}}}. \end{equation}
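\newline\indent For instance, for $N=1$ and $d=1$, (\ref{eq:MPL}) gives the classical polylogarithm and (\ref{eq:coefficient}) the usual formula for simple zeta values : \begin{equation*} \Li[e_{0}^{n-1}e_{1}](z) = - \sum_{0<m} \frac{z^{m}}{m^{n}} = -\mathrm{Li}_{n}(z), \qquad \zeta(n) = -\Phi_{\KZ}[e_{0}^{n-1}e_{1}] \quad (n \geqslant 2). \end{equation*}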
\subsubsection{The crystalline Frobenius of the de Rham pro-unipotent fundamental groupoid of $\mathbb{P}^{1} - \{0,\mu_{N},\infty\}$ over $K_{p}$}
The differential equation $\nabla_{\KZ}$ (\ref{eq: nabla KZ}) has a crystalline Frobenius structure over $K_{p}$ (\cite{Deligne}, \S11).
The theory of Coleman integration, which relies on this Frobenius structure, makes it possible to define $p$-adic analogues of MPL's and MZV$\mu_{N}$'s \cite{Furusho 1} \cite{Furusho 2} \cite{Yamashita}. In particular, one has $\Li_{p,X_{K}}^{\KZ}$, resp. $\Li_{p,X_{K}^{(p^{\alpha})}}^{\KZ}$, the generating series of $p$-adic multiple polylogarithms : these are Coleman functions characterized as the solutions to $\nabla_{\KZ}$, resp. to its pull-back $\nabla_{\KZ}^{(p^{\alpha})}$ by the Frobenius $\sigma$ of $K$ iterated $\alpha$ times, which are equivalent to $e^{\log_{p}(z)e_{0}}$ at $\vec{1}_{0}$ (defined in \cite{Furusho 1}, \cite{Furusho 2} for $N=1$ and \cite{Yamashita} for any $N$).
We consider the Frobenius iterated $\alpha$ times $(\alpha \in \mathbb{N}^{\ast})$ in the sense of \cite{I-1}, \S1. Then, another type of $p$-adic analogue of MPL's and MZV$\mu_{N}$'s can be defined in a more ad hoc way, using canonical de Rham paths \cite{Deligne Goncharov} \cite{Unver MZV} \cite{U2} \cite{I-1} \cite{I-3}. In the end, with \cite{I-1} and \cite{I-3}, we have for each $\alpha \in \mathbb{Z}\cup \{\pm \infty\} - \{0\}$, an element $\Phi_{p,\alpha} \in \Pi_{1,0}(K)$, which characterizes the Frobenius at base-points $(\vec{1}_{1},\vec{1}_{0})$ iterated $\alpha$ times, where $\alpha=-\infty$ corresponds to Coleman integration. $p$-adic cyclotomic multiple zeta values are defined as its coefficients, namely : $$ \zeta_{p,\alpha}\big((n_{i})_{d};(\xi_{i})_{d}\big) = (-1)^{d} \Phi_{p,\alpha}[e_{0}^{n_{d}-1}e_{\xi_{d}} \ldots e_{0}^{n_{1}-1}e_{\xi_{1}}]. $$
One also has $\Li_{p,\alpha}^{\dagger}$, the non-commutative generating series of overconvergent $p$-adic multiple polylogarithms (\cite{I-1}, \S1), which are overconvergent analytic functions on the affinoid analytic space $U_{0\infty}^{\an}=\mathbb{P}^{1,\an} - \underset{\xi \in \mu_{N}(K)}{\cup} \B(\xi,1)$, where $\B(x,r)$ means the open ball of center $x$ and radius $r$. It is characterized by $\Li_{p,\alpha}^{\dagger}(0)=1$ and the following differential equation (\cite{I-1}, Proposition 2.1) : \begin{equation} \label{eq:horizontality equation} d\Li_{p,\alpha}^{\dagger} = \bigg( p^{\alpha}e_{0}\omega_{0}(z) + \sum_{\xi \in \mu_{N}(K)} p^{\alpha} \omega_{\xi}(z) e_{\xi} \bigg) \Li_{p,\alpha}^{\dagger} - \Li_{p,\alpha}^{\dagger} \bigg( \omega_{0}(z^{p^{\alpha}})e_{0} + \sum_{\xi \in \mu_{N}(K)} \omega_{\xi}(z^{p^{\alpha}}) \Ad_{\Phi^{(\xi)}_{p,\alpha}} (e_{\xi}) \bigg) \end{equation} \noindent where $\Phi_{p,\alpha}^{(\xi)} = (x \mapsto \xi x)_{\ast}(\Phi_{p,\alpha})$. Equivalently, it is characterized by \begin{multline} \label{eq:horizontality1} \Li_{p,\alpha}^{\dagger}(z)(e_{0},(e_{\xi})_{\xi \in \mu_{N}(K)}) \times\Li_{p,X_{K}^{(p^{\alpha})}}^{\KZ}(z^{p^{\alpha}})\big(e_{0},(\Ad_{\Phi^{(\xi)}_{p,\alpha}}(e_{\xi}))_{\xi \in \mu_{N}(K)} \big) \\ = \Li_{p,X_{K}}^{\KZ}(z) \big(p^{\alpha}e_{0},(p^{\alpha}e_{\xi})_{\xi \in \mu_{N}(K)} \big) . \end{multline} We note that $\Li_{p,X_{K}}^{\KZ}$ and $\Li_{p,X_{K}^{(p^{\alpha})}}^{\KZ}$ depend on the choice of a branch of the $p$-adic logarithm, whereas $\Li_{p,\alpha}^{\dagger}$ and $p$MZV$\mu_{N}$'s do not.
\subsubsection{The motivic pro-unipotent fundamental groupoid of $\mathbb{P}^{1} - \{0,\mu_{N},\infty\}$}
The motivic pro-unipotent fundamental groupoid $\pi_{1}^{\un,\text{mot}}(\mathbb{P}^{1} - \{0,\mu_{N},\infty\})$ is defined and studied in \cite{Deligne Goncharov}. Let $G_{\omega}$ be the fundamental group associated with the Tannakian category of mixed Tate motives over $k_{N}$ which are unramified at primes $p$ prime to $N$, and the canonical fiber functor $\omega$. It is a semi-direct product $G_{\omega}= \mathbb{G}_{m} \ltimes U_{\omega}$ where $U_{\omega}$ is pro-unipotent. It acts on $\Pi_{1,0}$, and this action encodes the algebraic theory of MZV$\mu_{N}$'s and $p$MZV$\mu_{N}$'s according to the conjecture of periods (\cite{Deligne Goncharov}, \S5). The action of $\mathbb{G}_{m}$ encodes the weight grading and is \begin{equation} \label{eq:tau} \tau : \begin{array}{cc} \mathbb{G}_{m} \times \Pi_{1,0} \rightarrow \Pi_{1,0} \\ \big( \lambda,f(e_{0},(e_{\xi})_{\xi \in \mu_{N}(K)})\big) \mapsto f(\lambda e_{0},(\lambda e_{\xi})_{\xi \in \mu_{N}(K)}) \end{array}. \end{equation} The action of $U_{\omega}$ has been computed by Goncharov \cite{Goncharov}. The image of this action by a certain morphism is isomorphic to the Ihara product (our notation below is not standard) \begin{equation} \label{eq:Ihara} \circ^{\smallint_{1,0}} : \begin{array}{cc} \Pi_{1,0} \times \Pi_{1,0} \rightarrow \Pi_{1,0} \\ (g,f) \mapsto g \circ^{\smallint_{1,0}} f = g(e_{0},(e_{\xi})_{\xi \in \mu_{N}(K)}) \times f\big(e_{0},(\Ad_{g^{(\xi)}}(e_{\xi}))_{\xi \in \mu_{N}(K)}\big) \end{array} . \end{equation}
\subsubsection{The Betti and de Rham pro-unipotent fundamental groupoid of a more general $\mathbb{P}^{1} - D$}
By \cite{Deligne}, the description of the Betti and de Rham realizations of $\pi_{1}^{\un}(\mathbb{P}^{1} - \{0,\mu_{N},\infty\})$ of \S2.1.2 remains true in the case of an arbitrary punctured projective line $\mathbb{P}^{1} - D$ over a subfield of $\mathbb{C}$, provided that we replace the alphabet $e_{0 \cup \mu_{N}}$ by the alphabet $\{e_{x}\text{ }|\text{ }x \in D - \{\infty\}\}$. It can also be deduced from the description of $\pi_{1}^{\un}(\mathcal{M}_{0,n})$ of \S2.1.3. We write the words on that alphabet in the form $e_{0}^{n_{d}-1}e_{z_{d}}\ldots e_{0}^{n_{1}-1}e_{z_{1}}e_{0}^{n_{0}-1}$ with $d$ and $n_{i}$ positive integers and $z_{i} \in D - \{0,\infty\}$. For most computations, we can restrict to words such that $n_{d}\geqslant 2$ and $n_{0}=1$. The word above is then also denoted by $((n_{i})_{d},(z_{i})_{d})$. \newline\indent The multiple polylogarithms on $\mathcal{M}_{0,n}$ (\S2.1.3) induce multiple polylogarithms on $\mathbb{P}^{1} - \{0,x_{1},\ldots,x_{r},\infty\}$, whose power series expansions are given by (\ref{eq:Li series bis}) as functions of $z$, and which are solutions to the KZ equation, a generalization of (\ref{eq: nabla KZ}) : \begin{equation} \label{eq: nabla KZ prime} \nabla_{\KZ} : f \mapsto df - \bigg( e_{0} f \frac{dz}{z} + \sum_{i=1}^{r} e_{x_{i}} f \frac{dz}{z-x_{i}} \bigg) . \end{equation} The weighted multiple harmonic sums, obtained from the coefficients of the power series expansion (\ref{eq:Li series bis}), are now the following numbers (where the $z_{i}$'s are in $D - \{0,\infty\}$) : $$ \har_{m}((n_{i})_{d},(z_{i})_{d+1}) = \sum_{0<m_{1} <\ldots < m_{d}<m} \frac{\big( \frac{z_{2}}{z_{1}} \big)^{m_{1}} \ldots \big(\frac{z_{d+1}}{z_{d}}\big)^{m_{d}} \big(\frac{1}{z_{d+1}}\big)^{m}}{m_{1}^{n_{1}} \ldots m_{d}^{n_{d}}} . $$
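\newline\indent For instance, in depth $d=1$, the formula above reads $\har_{m}(n,(z_{1},z_{2})) = \big(\tfrac{1}{z_{2}}\big)^{m}\sum\limits_{0<m_{1}<m} \frac{(z_{2}/z_{1})^{m_{1}}}{m_{1}^{n}}$ ; specializing $z_{1}=z_{2}=1$ gives the truncated sums $\har_{m}(n) = \sum\limits_{0<m_{1}<m} \frac{1}{m_{1}^{n}}$, i.e., for $n \geqslant 2$, the partial sums of $\zeta(n)$.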
\subsubsection{The Betti and de Rham pro-unipotent fundamental groupoid of the moduli spaces $\mathcal{M}_{0,n}$}
For $n \in \mathbb{N}^{\ast}$, let
$\mathcal{M}_{0,n+3} = \{ (x_{1},\ldots,x_{n+3}) \in (\mathbb{P}^{1})^{n+3} \text{ } |\text{ } x_{i}\not= x_{j} \text{ for all } i \not= j \} / \PGL_{2}$ over $\mathbb{Q}$, and let $\overline{\mathcal{M}}_{0,n+3}$ be its Deligne-Mumford compactification, which is a smooth projective variety such that $\overline{\mathcal{M}}_{0,n+3} - \mathcal{M}_{0,n+3}$ is a normal crossings divisor. The $x_{i}$'s are called the canonical coordinates. The homography of $\mathbb{P}^{1}$ which sends $(x_{n+1},x_{n+2},x_{n+3})$ to $(0,1,\infty)$ induces an isomorphism between $\mathcal{M}_{0,n+3}$ and the affine variety
$\{(y_{1},y_{2},\ldots,y_{n}) \in (\mathbb{P}^{1} - \{0,1,\infty\})^{n} \text{ }|\text{ } y_{i} \not= y_{j} \text{ for all } i \not= j \}$, defined by $\displaystyle y_{i} = \frac{x_{i} - x_{n+1}}{x_{i}-x_{n+3}} \frac{x_{n+2} - x_{n+3}}{x_{n+2}-x_{n+1}}$, $(1 \leqslant i \leqslant n)$ ; the $y_{i}$'s are called simplicial coordinates. In particular, this gives $\mathcal{M}_{0,4} \simeq \mathbb{P}^{1} - \{0,1,\infty\}$, $\overline{\mathcal{M}}_{0,4} =\mathbb{P}^{1}$ ; $\overline{\mathcal{M}}_{0,5}$ is obtained by blowing up $(\mathbb{P}^{1})^{2} \supset \mathcal{M}_{0,5}$ at the three points where $(\mathbb{P}^{1})^{2} - \mathcal{M}_{0,5}$ is not normal crossings, namely, in simplicial coordinates, $(0,0)$, $(1,1)$ and $(\infty,\infty)$. \newline\indent By \cite{Deligne} \S12, $\Lie \big(\pi_{1}^{\un,\dR}(\mathcal{M}_{0,n+3},\omega_{\dR}) \big)$ is the pro-nilpotent Lie algebra with generators $e_{ij}$, $1 \leqslant i \not= j \leqslant n+3$, and relations $e_{ij} = e_{ji}$ for all $i,j$, $\sum\limits_{\substack{1 \leqslant j \leqslant n+3 \\ j \not= i}} e_{ij} = 0$ for all $i$, and $[e_{ij},e_{kl}] = 0$ for all $i,j,k,l$ pairwise distinct. This determines the pro-unipotent affine group scheme $\pi_{1}^{\un,\dR}(\mathcal{M}_{0,n},\omega_{\dR})$. 
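\newline\indent For instance, for $n=1$, the formula for the simplicial coordinate specializes to a cross-ratio : \begin{equation*} y_{1} = \frac{x_{1} - x_{2}}{x_{1}-x_{4}}\text{ }\frac{x_{3} - x_{4}}{x_{3}-x_{2}}, \end{equation*} i.e. the image of $x_{1}$ under the homography sending $(x_{2},x_{3},x_{4})$ to $(0,1,\infty)$, which realizes the isomorphism $\mathcal{M}_{0,4} \simeq \mathbb{P}^{1} - \{0,1,\infty\}$.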
\newline\indent The canonical connection on $\pi_{1}^{\un,\dR}(\mathcal{M}_{0,n},\omega_{\dR}) \times \mathcal{M}_{0,n}$ in the sense of \cite{Deligne}, \S12, is the KZ connection in several variables : \begin{equation} \label{eq:nablaKZ M0,n} \nabla_{\KZ} : f \mapsto df - \sum_{1 \leqslant i<j \leqslant n+3} e_{ij} d\log(x_{i}-x_{j}) f , \end{equation} i.e., in the cubic coordinates $c_{i}$ defined by $y_{i} = c_{i} \ldots c_{n}$, $(1 \leqslant i \leqslant n)$, \begin{equation} \nabla_{\KZ} : f \mapsto df - \bigg( \sum_{u=1}^{r} \frac{dc_{u}}{c_{u}} \sum_{u\leqslant i<j \leqslant n} e_{ij} - \sum_{\substack{1\leqslant v \leqslant v' \leqslant n \\ 2 \leqslant i \leqslant j }} \frac{d(c_{v} \ldots c_{v'})}{c_{v} \ldots c_{v'} -1} e_{v-1,v'} - \sum_{\substack{1\leqslant v \leqslant v' \leqslant n \\ 1 \leqslant i \leqslant j}} \frac{d(c_{v} \ldots c_{v'})}{c_{v} \ldots c_{v'} -1} e_{v',n-1} \bigg) f. \end{equation} \indent The groupoid $\pi_{1}^{\un,\B}(\mathcal{M}_{0,n})$ can be computed as follows by induction on $n$ \cite{FR}. Each forgetful map $\mathcal{M}_{0,n+1} \rightarrow \mathcal{M}_{0,n}$ is a fibration and induces a long exact sequence in homotopy ; given that the $\mathcal{M}_{0,n}$'s are $K(\pi,1)$ spaces, that long exact sequence amounts to a short exact sequence : $1 \rightarrow \pi_{1}^{\text{top}}\big(F(\mathbb{C}) \big) \rightarrow \pi_{1}^{\text{top}}(\mathcal{M}_{0,n+1}(\mathbb{C})) \rightarrow \pi_{1}^{\text{top}}(\mathcal{M}_{0,n}(\mathbb{C})) \rightarrow 1$ where $F$ is the fiber of the forgetful map ; moreover, that short exact sequence is split. By the exactness of the functor of Malcev completion, it gives rise to a short split exact sequence $1 \rightarrow \pi_{1}^{\un,\B}\big(F(\mathbb{C}) \big) \rightarrow \pi_{1}^{\un,\B}(\mathcal{M}_{0,n+1}(\mathbb{C})) \rightarrow \pi_{1}^{\un,\B}(\mathcal{M}_{0,n}(\mathbb{C})) \rightarrow 1$. \newline\indent Multiple polylogarithms in several variables are defined as follows \cite{Goncharov}. 
One considers first the following family of iterated integrals : for $a_{0},\ldots,a_{n+1} \in \mathbb{C}$, and $\gamma$ a path in $\mathbb{C} - \{a_{1},\ldots,a_{n}\}$ such that $\gamma(0)=a_{0}$ and $\gamma(1)=a_{n+1}$ : \begin{equation} \label{eq:Li} I\big( a_{n+1} ; a_{n},\ldots,a_{1} ; a_{0} \big)(\gamma) = \int_{t_{n}=0}^{1}\gamma^{\ast}\bigg(\frac{dz}{z-a_{n}}\bigg)(t_{n}) \int_{t_{n-1}=0}^{t_{n}} \ldots \gamma^{\ast}\bigg(\frac{dz}{z-a_{2}}\bigg)(t_{2}) \int_{t_{1}=0}^{t_{2}} \gamma^{\ast}\bigg( \frac{dz}{z-a_{1}}\bigg)(t_{1}) . \end{equation} One usually writes $(a_{n},\ldots,a_{1})$ as $(\overbrace{0,\ldots,0}^{n_{d}-1},z_{d},\ldots,\overbrace{0,\ldots,0}^{n_{1}-1},z_{1},\overbrace{0,\ldots,0}^{n_{0}-1})$ and $a_{n+1}=z$, and an affine change of variable allows one to assume that $a_{0}=0$. Finally one writes $(x_{1},\ldots,x_{d})= (\frac{z_{2}}{z_{1}},\ldots,\frac{z}{z_{d}})$. The functions obtained after these transformations are multiple polylogarithms in several variables. They define a solution to the KZ equation (\ref{eq:nablaKZ M0,n}). If $n_{0}=1$, one writes $(\underbrace{0,\ldots,0}_{n_{d}-1},z_{d},\ldots,\underbrace{0,\ldots,0}_{n_{1}-1},z_{1})=\big( (n_{i})_{d}; (z_{i})_{d} \big)$. MPL's have the following power series expansion : \begin{equation} \label{eq:Li series bis} \Li\big( (n_{i})_{d};(z_{i})_{d} \big)(z) = \sum_{0<m_{1} <\ldots < m_{d}} \frac{\big( \frac{z_{2}}{z_{1}} \big)^{m_{1}} \ldots \big(\frac{z}{z_{d}}\big)^{m_{d}}}{m_{1}^{n_{1}}\ldots m_{d}^{n_{d}}} . \end{equation}
\subsection{Adjoint $p$-adic cyclotomic multiple zeta values}
For each prime number $p$ which does not divide $N$, $K_{p}$ is the extension of $\mathbb{Q}_{p}$ generated by the $N$-th roots of unity in $\overline{\mathbb{Q}_{p}}$. \newline\indent Let $\mathcal{O}^{\ast}$ be the $\mathbb{Q}$-vector space generated by the empty word and the words of the form $\big((n_{i})_{d};(\xi_{i})_{d} \big)$, identified with elements of $\mathcal{O}^{\mathcyr{sh}}$ (defined in \S2.1.2) by $\big((n_{i})_{d};(\xi_{i})_{d} \big) = e_{0}^{n_{d}-1}e_{\xi_{d}} \ldots e_{0}^{n_{1}-1}e_{\xi_{1}}$.
\subsubsection{Definition}
In addition to the notations above, $b$ denotes a non-negative integer, and $\Lambda$ is a formal variable.
\begin{Definition} \label{def adjoint}(i) Let adjoint $p$-adic cyclotomic multiple zeta values (Ad$p$MZV$\mu_{N}$'s) be the numbers \begin{multline} \label{eq:zeta adjoint} \zeta^{\Ad}_{p,\alpha} \big( (n_{i})_{d};b;(\xi_{i})_{d+1}\big) = (-1)^{d} \sum_{\xi \in \mu_{N}(K)} \xi^{-p^{\alpha}} \Ad_{\Phi^{(\xi)}_{p,\alpha}}(e_{\xi}) \big[e_{0}^{b}e_{\xi_{d+1}}e_{0}^{n_{d}-1}e_{\xi_{d}}\ldots e_{0}^{n_{1}-1}e_{\xi_{1}} \big] \\ = (-1)^{d} \sum_{d'=0}^{d} \bigg( \prod_{i=d'+1}^{d} {-n_{i} \choose l_{i}} \bigg)\text{ }\xi_{d'}^{-p^{\alpha}}\text{ } \zeta_{p,\alpha}^{(\xi_{d'})}\big( (n_{d-i}+l_{d-i});(\xi_{d+1-i}) \big)_{0 \leq i \leq d-d'}\text{ } \zeta_{p,\alpha}^{(\xi_{d'})}\big((n_{i});(\xi_{i})\big)_{1 \leq i \leq d'-1} . \end{multline} \noindent (ii) Let the $\Lambda$-adjoint $p$-adic cyclotomic multiple zeta values ($\Lambda$Ad$p$MZV$\mu_{N}$'s) be the following power series : \begin{multline} \zeta^{\Lambda \Ad}_{p,\alpha} \big( (n_{i})_{d};(\xi_{i})_{d+1} \big) = \Lambda^{n_{1}+\ldots+n_{d}} \sum_{b=0}^{\infty} \Lambda^{b} \zeta_{p,\alpha}^{\Ad} \big((n_{i})_{d};b;(\xi_{i})_{d+1} \big) \\ = (-1)^{d} \sum_{\xi \in \mu_{N}(K)} \xi^{-p^{\alpha}} \Ad_{\Phi^{(\xi)}_{p,\alpha}}(e_{\xi}) \bigg[ \frac{\Lambda^{n_{1}+\ldots+n_{d}}}{1-\Lambda e_{0}} e_{\xi_{d+1}}e_{0}^{n_{d}-1}e_{\xi_{d}} \ldots e_{0}^{n_{1}-1}e_{\xi_{1}} \bigg] . \end{multline} \end{Definition}
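\newline\indent The second expression in (ii) follows from the first by expanding the geometric series \begin{equation*} \frac{1}{1-\Lambda e_{0}} = \sum_{b=0}^{\infty} \Lambda^{b} e_{0}^{b} \end{equation*} in $K_{p}\langle\langle e_{0\cup\mu_{N}}\rangle\rangle[[\Lambda]]$ : the coefficient of $\Lambda^{n_{1}+\ldots+n_{d}+b}$ in $\zeta^{\Lambda \Ad}_{p,\alpha}\big( (n_{i})_{d};(\xi_{i})_{d+1} \big)$ is $\zeta_{p,\alpha}^{\Ad}\big((n_{i})_{d};b;(\xi_{i})_{d+1}\big)$.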
In the case of $\mathbb{P}^{1} - \{0,1,\infty\}$, adjoint $p$-adic multiple zeta values are the numbers $\zeta^{\Ad}_{p,\alpha}((n_{i})_{d};b) = (\Phi_{p,\alpha}^{-1}e_{1}\Phi_{p,\alpha})[e_{0}^{b}e_{1}e_{0}^{n_{d}-1}e_{1}\ldots e_{0}^{n_{1}-1}e_{1}] \in \mathbb{Q}_{p}$. \newline\indent Let $\mathcal{O}^{\ast}_{\Ad}$ be the $\mathbb{Q}$-vector space generated by the words of the form $\big((n_{i})_{d};b;(\xi_{i})_{d+1} \big)$ : $b$ is written separately because it will play a particular role. Moreover, we will see that the $b=0$ and $b>0$ cases will sometimes have different properties (the former corresponds to finite cyclotomic multiple zeta values, see \S6), as already suggested by the formulas for $p$MZV$\mu_{N}$'s in \cite{I-2} in which this distinction exists.
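As a concrete illustration of the coefficients $(\Phi^{-1}e_{1}\Phi)[w]$ (purely numerical, with a placeholder series $\Phi$ whose coefficients are arbitrary rationals rather than actual $p$-adic multiple zeta values), one can compute in truncated non-commutative power series, represented as dictionaries word $\mapsto$ coefficient with the strings ``0'' and ``1'' standing for $e_{0}$ and $e_{1}$:

```python
from fractions import Fraction

MAXLEN = 4  # truncate words longer than this

def mul(f, g):
    """Concatenation product of truncated noncommutative series (dict: word -> coefficient)."""
    h = {}
    for w1, c1 in f.items():
        for w2, c2 in g.items():
            if len(w1) + len(w2) <= MAXLEN:
                h[w1 + w2] = h.get(w1 + w2, Fraction(0)) + c1 * c2
    return {w: c for w, c in h.items() if c != 0}

def inverse(f):
    """Inverse of a series with constant term 1, via the geometric series in (1 - f)."""
    assert f.get("", Fraction(0)) == 1
    u = {w: -c for w, c in f.items() if w != ""}   # u = 1 - f
    inv, power = {"": Fraction(1)}, {"": Fraction(1)}
    for _ in range(MAXLEN):
        power = mul(power, u)
        for w, c in power.items():
            inv[w] = inv.get(w, Fraction(0)) + c
    return inv

def adjoint(phi, letter="1"):
    """Ad_phi(e_letter) = phi^{-1} * e_letter * phi, truncated at MAXLEN."""
    return mul(mul(inverse(phi), {letter: Fraction(1)}), phi)

# Placeholder phi = 1 - (1/2) e_0 e_1 (the coefficient is NOT a real zeta value).
phi = {"": Fraction(1), "01": Fraction(-1, 2)}
ad = adjoint(phi)
print(ad["011"], ad["101"])   # → 1/2 -1/2  (coefficients of e_0 e_1 e_1 and e_1 e_0 e_1)
```

These two coefficients are exactly the weight-3 terms of $\Phi^{-1}e_{1}\Phi = e_{1} + \frac{1}{2}e_{0}e_{1}e_{1} - \frac{1}{2}e_{1}e_{0}e_{1} + \ldots$ for this toy $\Phi$.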
\begin{Definition} Let $K\langle\langle Y_{N}^{\Ad} \rangle\rangle$ be the set of non-commutative formal power series of the following type : $f=\sum\limits_{w=\big( (n_{i})_{d};(\xi_{i})_{d+1}\big)}\sum\limits_{b\in \mathbb{N}} f[w;b](w;b)$ with $f[w;b] \in K$. \end{Definition}
\subsubsection{Relation with $p$-adic cyclotomic multiple zeta values}
We now show that $p$MZV$\mu_{N}$'s and Ad$p$MZV$\mu_{N}$'s generate the same $k_{N}$-algebra, compatibly with the weight and depth.
Let $K$ be a field of characteristic $0$. Let $\Theta(K)$ be the group of characters of $\mu_{N}$ with values in $K^{\ast}$, i.e. the set of multiplicative morphisms $\mu_{N}(K) \rightarrow (K^{\ast},\ast)$. Let $\mathcal{O}^{\mathcyr{sh}}_{n,\leq d}$ be the $k_{N}$-vector space generated by the elements $\mathcyr{sh}_{i=1}^{r}w_{i}$ where $r\geq 1$ and $w_{i}$ are words such that $\sum\limits_{i=1}^{r}\weight(w_{i})=n$ and $\sum\limits_{i=1}^{r}\depth(w_{i})\leq d$. Below we implicitly view $\Phi$ and $\Phi_{\Ad}$ as functions $\mathcal{O}^{\mathcyr{sh}} \rightarrow K$.
\begin{Definition} (i) For any $\chi \in \Theta$, let $\Moy_{\chi} : K \langle\langle e_{0\cup\mu_{N}} \rangle\rangle \rightarrow K \langle\langle e_{0\cup\mu_{N}} \rangle\rangle$ be the map which sends $f \mapsto \sum\limits_{\xi \in \mu_{N}(K)} \chi(\xi) f^{(\xi)}$.
(ii) For any $\Phi \in \tilde{\Pi}_{1,0}(K)$,
let $\Phi_{\Ad,\chi} = \Moy_{\chi} \Ad_{\Phi}(e_{1}) = \sum\limits_{\xi \in \mu_{N}(K)} \chi(\xi) {\Phi^{(\xi)}}^{-1}e_{\xi}\Phi^{(\xi)}$. \end{Definition}
\begin{Proposition} The maps $\tilde{\Pi}_{1,0}(K) \rightarrow K \langle \langle e_{0 \cup \mu_{N}} \rangle\rangle$, $\Phi \mapsto \Ad_{\Phi}(e_{1})$ and $\tilde{\Pi}_{1,0}(K) \times \Theta(K) \rightarrow K \langle \langle e_{0 \cup \mu_{N}} \rangle\rangle$, $(\Phi,\chi) \mapsto \Phi_{\Ad,\chi}$, are injective. Moreover, for all $n \geq d\geq 0$, we have $\Phi(\mathcal{O}^{\mathcyr{sh}}_{n,\leq d}) = \Phi_{\Ad,\chi}(\mathcal{O}^{\mathcyr{sh}}_{n+1,\leq d+1})$. \end{Proposition}
\begin{proof} The injectivity of $\Phi \mapsto \Phi^{-1}e_{1}\Phi$ follows from $K \langle \langle e_{1}\rangle\rangle \cap \tilde{\Pi}_{1,0}(K)= \{1\}$ and the implication $f e_{1} = e_{1}f \Rightarrow f \in K \langle \langle e_{1}\rangle\rangle$. That implication is proved as follows : if $w$ is a word which contains at least one letter different from $e_{1}$, one shows that $f[w]=0$, by writing $w=e_{1}^{n}e_{x}z$ with $x \not=1$ and by an induction on $n$.
Let us prove the rest of the statement. We start with a few preliminary properties : \noindent\newline\indent (a) for all $x_{1},\ldots,x_{n} \in \{0\} \cup \mu_{N}$, $\Phi^{(\xi)}[e_{x_{n}} \ldots e_{x_{1}}] = \Phi [e_{\xi^{-1}x_{n}} \ldots e_{\xi^{-1}x_{1}}] $. Thus, for all $n\geq d \geq 1$, the $k_{N}$-vector space generated by the $\Phi^{(\xi)}[w]$'s with $w$ a word of weight $n$ and depth $d$ is independent of $\xi$. \newline\indent (b) For $f \in \tilde{\Pi}_{1,0}(K)$, writing $f^{-1}f=1$ we get by induction that, for all $n\geq d \geq 1$, we have $f(\mathcal{O}^{\mathcyr{sh}}_{n,\leq d}) = f^{-1}(\mathcal{O}^{\mathcyr{sh}}_{n,\leq d})$ and we also have $f^{-1}[w] \equiv - f[w] \mod f(\mathcal{O}^{\mathcyr{sh}}_{n,\leq d-1})$ for all words $w$. \newline\indent (c) For $f$ a solution to the shuffle equation or to the shuffle equation modulo products and such that $f[e_{0}] = 0$, any $f[w]$ with $w$ a word of weight $n$ and depth $d$ is a $\mathbb{Z}$-linear combination of the $f[w']$ with $w'$ a word of weight $n$ and depth $d$ of the form $w'=e_{0}w''e_{\xi}$ with $\xi \in \mu_{N}(K)$. \newline\indent (d) By (a) and the shuffle equation, for $\Phi \in \tilde{\Pi}_{1,0}(K)$, we have $\Phi^{(\xi)}[e_{0}^{l}] = \Phi^{(\xi)}[e_{\xi}] = 0$ for all $\xi \in \mu_{N}(K)$ and $l\geq 1$. \newline\indent The inclusion $\Phi(\mathcal{O}^{\mathcyr{sh}}_{n,\leq d}) \supset \Phi_{\Ad}(\mathcal{O}^{\mathcyr{sh}}_{n+1,\leq d+1})$ is clear by the shuffle equation for $\Phi$, and we want to prove the converse inclusion and the injectivity, by an induction on $d$. \newline\indent The only non-zero coefficient in $\Phi$ in depth $0$ is the coefficient of the empty word (weight 0), equal to 1. The only non-zero coefficients in $\Phi_{\Ad,\chi}$ in depth $\leq 1$ are the coefficients $\Phi_{\Ad,\chi}[e_{\xi}]$ (weight 1), equal to $\chi(\xi)$. They are in $k_{N} \subset K$. This determines $\chi$ in terms of $\Phi_{\Ad,\chi}$. 
We have $\Phi(\mathcal{O}^{\mathcyr{sh}}_{n,\leq 0}) = \Phi_{\Ad,\chi}(\mathcal{O}^{\mathcyr{sh}}_{n+1,\leq 1})$ for all $n$. This proves the result for $d=0$. \newline\indent Now for any $d>0$, we consider a word of the form $w=e_{0}^{l}e_{\xi_{d+1}}e_{0}^{n_{d}-1} \ldots e_{0}^{n_{2}-1}e_{\xi_{2}}e_{\xi_{1}}$ with $l>0$, and the $\xi_{i}$'s in $\mu_{N}(K)$ (i.e. the special case $n_{1}=1$ and $l>0$ in usual words). We have by (d) $$ \Phi_{\Ad,\chi}[e_{0}^{l}e_{\xi_{d+1}}e_{0}^{n_{d}-1} \ldots e_{0}^{n_{2}-1}e_{\xi_{2}}e_{\xi_{1}}] \equiv \chi(\xi_{1}) {\Phi^{(\xi_{1})}}^{-1}[e_{0}^{l}e_{\xi_{d+1}}e_{0}^{n_{d}-1} \ldots e_{0}^{n_{2}-1}e_{\xi_{2}}] \mod \Phi(\mathcal{O}^{\mathcyr{sh}}_{n,\leq d-1}) $$ By induction on $d$ and (a), (b), (c) above, this determines $\Phi^{(\xi)}$, thus $\Phi$, in depth $\leq d$, in terms of $\Phi_{\Ad}$, thus it proves the injectivity in depth $\leq d$, and this also proves the inclusion $\Phi(\mathcal{O}^{\mathcyr{sh}}_{n,\leq d}) \subset \Phi_{\Ad,\chi}(\mathcal{O}^{\mathcyr{sh}}_{n+1,\leq d+1})$. \end{proof}
\begin{Corollary} The $p$MZV$\mu_{N}$'s and Ad$p$MZV$\mu_{N}$'s generate the same $k_{N}$-algebra.
More precisely, for any $n \geq d\geq 1$, the two following vector spaces are equal : the $k_{N}$-vector space generated by products $\prod_{i=1}^{r}\zeta_{p,\alpha}(w_{i})$ with $r \geq 1$, $\sum\limits_{i=1}^{r} \weight(w_{i}) = n$ and $\sum\limits_{i=1}^{r} \depth(w_{i}) \leq d$, and the $k_{N}$-vector space generated by products $\prod_{i=1}^{r}\zeta^{\Ad}_{p,\alpha}(w_{i})$ with $r \geq 1$, $\sum\limits_{i=1}^{r} (\weight(w_{i})-1) = n$ and $\sum\limits_{i=1}^{r} (\depth(w_{i})-1) \leq d$. \end{Corollary}
\begin{proof} We apply Proposition 2.2.4 to $\Phi = \Phi_{p,\alpha}$ and $\chi : \xi \mapsto \xi^{-p^{\alpha}}$. \end{proof}
\subsection{Adjoint complex cyclotomic multiple zeta values}
\subsubsection{Definition}
The most direct complex analogue of Ad$p$MZV$\mu_{N}$'s (Definition \ref{def adjoint}) is the following.
Let $\chi : \mu_{N}(\mathbb{C}) \rightarrow \mathbb{C}^{\ast}$ be a character and let $\Phi_{\KZ,\Ad,\chi} = \sum\limits_{\xi \in \mu_{N}(\mathbb{C})} \chi(\xi) {\Phi_{\KZ}^{(\xi)}}^{-1}e_{\xi}\Phi_{\KZ}^{(\xi)}$.
\begin{Definition} \label{def adjoint complex}(i) Let the adjoint cyclotomic multiple zeta values be the numbers :
\begin{equation} \label{eq:complex01} \zeta^{\Ad}\big( (n_{i})_{d};(\xi_{i})_{d+1};l;\chi \big) = \Phi_{\KZ,\Ad,\chi} [e_{0}^{l}e_{\xi_{d+1}}e_{0}^{n_{d}-1}e_{\xi_{d}} \ldots e_{0}^{n_{1}-1}e_{\xi_{1}}] .
\end{equation}
(ii) Let the $\Lambda$-adjoint cyclotomic multiple zeta values be the numbers :
\begin{equation} \label{eq:complex02} \zeta^{\Lambda \Ad}((n_{i})_{d};(\xi_{i})_{d+1};\chi) = \Phi_{\KZ,\Ad,\chi} \big[ \frac{1}{1-\Lambda e_{0}}e_{\xi_{d+1}}e_{0}^{n_{d}-1}e_{\xi_{d}} \ldots e_{0}^{n_{1}-1}e_{\xi_{1}} \big] .
\end{equation} \end{Definition}
Whereas in the crystalline setting, one has the Frobenius automorphism, whose restriction to $\Lie(\Pi_{0,0})$ sends $(e_{0},e_{1}) \mapsto (\frac{e_{0}}{p},\Phi_{p,1}^{-1}\frac{e_{1}}{p}\Phi_{p,1})$, in the Betti-de Rham setting, one has the automorphism of $\Pi_{0,0}$ which expresses the monodromy of the KZ connection (\ref{eq: nabla KZ}), which sends $(e^{e_{0}},e^{e_{1}}) \mapsto (e^{2i\pi e_{0}}, \Phi_{\KZ}^{-1}e^{2i\pi e_{1}}\Phi_{\KZ})$. Let $\comp^{\B,\dR}$ be the Betti-de Rham comparison isomorphism of $\Pi_{0,0}$ ; let $\gamma$ be the straight path $[0,1] \rightarrow \mathbb{C}$, $t\mapsto t$ ; let $c_{0}$, resp. $c_{1}$ be a simple loop around $0$, resp. $1$ positively oriented ; we have $$ (e^{2i\pi e_{0}},\Phi_{\KZ}^{-1}e^{2i\pi e_{1}}\Phi_{\KZ}) = \big(\comp^{\B,\dR}(c_{0}), \comp^{\B,\dR} (\gamma^{-1}c_{1}\gamma) \big) . $$ It is thus also natural to consider the following :
\begin{Definition} Let \begin{equation} \label{eq:complex01 bis} \zeta_{\exp}^{\Ad}\big( (n_{i})_{d};(\xi_{i})_{d+1};l;\chi \big) = \sum\limits_{\xi \in \mu_{N}(\mathbb{C})} \chi(\xi) ({\Phi_{\KZ}^{(\xi)}}^{-1}e^{2i\pi e_{\xi}}\Phi^{(\xi)}_{\KZ}) [e_{0}^{l}e_{\xi_{d+1}}e_{0}^{n_{d}-1}e_{\xi_{d}} \ldots e_{0}^{n_{1}-1}e_{\xi_{1}}] . \end{equation} \end{Definition}
We define similarly $\zeta_{\exp}^{\Lambda,\Ad}$. Finally, since unlike in the $p$-adic case there is no privileged choice of $\chi$, it is also natural to consider the following :
\begin{Definition} Let
\begin{equation} \label{eq:complex01 ter} \zeta_{\exp,1}^{\Ad}\big( (n_{i})_{d};(\xi_{i})_{d+1};l \big) = ({\Phi_{\KZ}^{(\xi)}}^{-1}e^{2i\pi e_{\xi}}\Phi^{(\xi)}_{\KZ}) [e_{0}^{l}e_{\xi_{d+1}}e_{0}^{n_{d}-1}e_{\xi_{d}} \ldots e_{0}^{n_{1}-1}e_{\xi_{1}}] .
\end{equation} \end{Definition}
We define similarly $\zeta_{\exp,1}^{\Lambda,\Ad}$.
We have ${\Phi^{(\xi)}_{\KZ}}^{-1}e^{2i\pi e_{\xi}}\Phi^{(\xi)}_{\KZ} = 1 + 2i\pi\, {\Phi^{(\xi)}_{\KZ}}^{-1}e_{\xi}\Phi^{(\xi)}_{\KZ} - 24\,\zeta(2)\, {\Phi_{\KZ}^{(\xi)}}^{-1}\bigg(\sum\limits_{n\geq 2} \frac{(2i\pi)^{n-2}e_{\xi}^{n}}{n!}\bigg) \Phi_{\KZ}^{(\xi)}$, since $(2i\pi)^{2} = -24\,\zeta(2)$. Thus, the numbers (\ref{eq:complex01 bis}) divided by $2 \pi i$ are congruent to the numbers (\ref{eq:complex01}) modulo $\zeta(2)$ and they satisfy the results of \S3 and \S4.
One has a variant of cyclotomic multiple zeta values defined as follows. Let $\phi_{\infty}$ be the Frobenius at infinity of $\pi_{1}^{\un,\dR}(\mathbb{P}^{1} - \{0,\mu_{N},\infty\})$ ; it induces the automorphism of $\Lie(\Pi_{0,0})$ which sends $(e_{0},e_{1}) \mapsto (-e_{0},-\Phi_{-}^{-1}e_{1}\Phi_{-})$, where $\Phi_{-} = \phi_{\infty}({}_{\vec{1}_{1}} 1 _{\vec{1}_{0}}) \in \Pi_{1,0}(\mathbb{R})$. $\Phi_{-}$ is $h_{\KZ}$ in \cite{Enriquez}, \S11.1, whereas the $N=1$ case of $\Phi_{-}$ is $g_{\KZ}$ in \cite{Enriquez}, \S11.1. The coefficients of $\Phi_{-}$ are denoted by $\zeta_{-}\big((n_{i})_{d};(\xi_{i})_{d}\big) = \Phi_{-}[e_{0}^{n_{d}-1}e_{\xi_{d}} \ldots e_{0}^{n_{1}-1}e_{\xi_{1}}]$. They generate a $\mathbb{Q}$-vector subspace of the $\mathbb{Q}$-vector space of MZV$\mu_{N}$'s. (For details in the $N=1$ case see \cite{Furusho 2}, \S2.2).
\begin{Definition} Let the adjoint variants of the numbers $\zeta_{-}$ be : \begin{equation} \zeta_{-}^{\Ad} \big( (n_{i})_{d};(\xi_{i})_{d+1};b \big) = (-1)^{d} \sum\limits_{\xi \in \mu_{N}(\mathbb{C})} \xi ({\Phi^{(\xi)}_{-}}^{-1}e_{\xi}\Phi^{(\xi)}_{-}) \big[ e_{0}^{b} e_{\xi_{d+1}}e_{0}^{n_{d}-1}e_{\xi_{d}} \ldots e_{0}^{n_{1}-1}e_{\xi_{1}} \big] . \end{equation} \end{Definition}
In the $N=1$ case, $\Phi_{-}$ is in $\GRT_{1}(\mathbb{R})$ and in $\DS_{0}(\mathbb{R})$, \cite{Furusho 2} (in particular $\Phi_{-}[e_{0}e_{1}]=0$). In the general case, we have a similar fact by \cite{Enriquez}, \S11.1 and \cite{Racinet}.
\subsubsection{Relation with cyclotomic multiple zeta values}
\begin{Proposition} (i) The respective $k_{N}[2\pi i]$-algebras generated by the numbers $\zeta^{\Ad}(.)$, $\zeta_{\exp}^{\Ad}(.)$, $\zeta_{\exp,1}^{\Ad}(.)$ and $\zeta(.)$ are equal. \newline (ii) The respective $k_{N}$-algebras generated by the numbers $\zeta_{-}^{\Ad}(.)$ and $\zeta_{-}(.)$ are equal. \end{Proposition}
\begin{proof} The proof is similar to that of Proposition 2.2.4, combined with the inversion of a Vandermonde linear system.
By inverting a Vandermonde linear system, the data of (\ref{eq:complex01}) resp. (\ref{eq:complex01 bis}) is equivalent respectively to the data of the sequences $\displaystyle \bigg( ({\Phi_{\KZ}^{(\xi)}}^{-1} e_{\xi}\Phi^{(\xi)}_{\KZ}) [e_{0}^{l}e_{\xi_{d+1}}e_{0}^{n_{d}-1}e_{\xi_{d}} \ldots e_{0}^{n_{1}-1}e_{\xi_{1}}] \bigg)_{\xi \in \mu_{N}(K)}$, and \newline $\bigg( ({\Phi_{\KZ}^{(\xi)}}^{-1}e^{2i\pi e_{\xi}}\Phi^{(\xi)}_{\KZ}) [e_{0}^{l}e_{\xi_{d+1}}e_{0}^{n_{d}-1}e_{\xi_{d}} \ldots e_{0}^{n_{1}-1}e_{\xi_{1}}]\bigg)_{\xi \in \mu_{N}(K)}$. Moreover, we have $\exp \big( 2i\pi\, {\Phi_{\KZ}^{(\xi)}}^{-1} e_{\xi}\Phi^{(\xi)}_{\KZ} \big) = {\Phi_{\KZ}^{(\xi)}}^{-1} \exp(2i\pi e_{\xi}) \Phi^{(\xi)}_{\KZ}$. We also note that in the two above sequences, all the terms can be obtained one from another by applying an automorphism $(z \mapsto \xi z)_{\ast}$ of $\pi_{1}^{\un,\dR}(\mathbb{P}^{1} - \{0,\mu_{N},\infty\})$. \end{proof}
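The Vandermonde system in question has the $N$-th roots of unity as nodes, so its inversion is Fourier inversion on $\mu_{N}$: from the sums $S_{\chi} = \sum_{\xi} \chi(\xi)\, a_{\xi}$ over all characters $\chi$ one recovers each $a_{\xi}$. A numerical sketch in Python (floating-point roots of unity and arbitrary sample data, purely illustrative):

```python
import cmath

N = 6
mu = [cmath.exp(2j * cmath.pi * k / N) for k in range(N)]   # mu_N(C)

def char_sums(a):
    """S_j = sum_xi chi_j(xi) a_xi with chi_j(xi) = xi^j: the analogue of Moy_chi."""
    return [sum(xi ** j * a_xi for xi, a_xi in zip(mu, a)) for j in range(N)]

def invert(S):
    """a_xi = (1/N) sum_j chi_j(xi)^{-1} S_j: inverse of the Vandermonde system."""
    return [sum(xi ** (-j) * S[j] for j in range(N)) / N for xi in mu]

a = [complex(k + 1, -k) for k in range(N)]    # arbitrary sample data indexed by mu_N
b = invert(char_sums(a))
assert max(abs(x - y) for x, y in zip(a, b)) < 1e-9
```

The matrix $(\xi_{k}^{j})_{j,k}$ is exactly the Vandermonde matrix in the nodes $\xi_{k}$, and orthogonality of characters gives its inverse in closed form.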
Note that, as in the $p$-adic case, the proof gives more precise information on the fact that the above equalities are compatible with depth filtration. They are also of course compatible with the weight.
In the end, in the complex case there are several natural objects which can be called adjoint variants of cyclotomic multiple zeta values, and they can be considered as equivalent. Depending on the property under consideration, one version or another of this object may be more convenient.
\subsection{Cyclotomic multiple harmonic values}
\subsubsection{Definition\label{mhv definition}}
\begin{Definition} \label{def harmonic}For any index $w=((n_{i})_{d};(\xi_{i})_{d+1})$, let \newline\indent (i) for each $p \in \mathcal{P}_{N}$, $\har_{p^{\mathbb{N}}}(w) = \big( \har_{p^{\alpha}} (w)\big)_{\alpha \in \mathbb{N}} \in K_{p}^{\mathbb{N}}$, which we call a $p$-adic cyclotomic multiple harmonic value. \newline\indent (ii) for each $\alpha \in \mathbb{N}^{\ast}$, $\har_{\mathcal{P}^{\alpha}}(w) = \big( \har_{p^{\alpha}} (w)\big)_{p \in \mathcal{P}_{N}} \in \prod_{p\in\mathcal{P}_{N}} K_{p}$, which we call an adelic cyclotomic multiple harmonic value. \newline\indent (iii) $\har_{\mathcal{P}_{N}^{\mathbb{N}}}(w) = \big( \har_{p^{\alpha}} (w)\big)_{(p,\alpha) \in \mathcal{P}_{N} \times \mathbb{N}} \in \big( \prod_{p\in\mathcal{P}_{N}} K_{p} \big)^{\mathbb{N}}$, which we call a ($p$-adic $\times$ adelic) cyclotomic multiple harmonic value. \end{Definition}
We will view (i) as the natural explicit $p$-adic substitute for $p$MZV$\mu_{N}$'s ; (ii) as the natural lift of the cyclotomic finite MZV's, defined in \S6 ; (iii) as the natural way to formulate the algebraic properties of (i) and (ii) in a unified way. We will refer most of the time to (iii) and omit the adjective ``$p$-adic $\times$ adelic''. We will abbreviate cyclotomic multiple harmonic values as MHV$\mu_{N}$'s. \newline\indent Let $\mathcal{O}^{\ast}_{\har}$ be the $\mathbb{Q}$-vector space generated by the empty word and the words of the form $\big((n_{i})_{d};(\xi_{i})_{d+1} \big)$.
\subsubsection{Setting for computations, integrals at (1,0)\label{setting infinite sums}}
In the framework of integrals at $(1,0)$, represented by the symbol $\int_{1,0}$, we will transfer algebraic properties from MZV$\mu_{N}$'s to $\Lambda$-adic AdMZV$\mu_{N}$'s, and the $\Lambda=1$ case will give properties of MHV$\mu_{N}$'s, via equation (\ref{eq:formula for n=1}). By that equation, the index $\big((n_{i})_{d};(\xi_{i})_{d+1} \big)$ of MHV$\mu_{N}$'s is identified with $\frac{1}{1-e_{0}}e_{\xi_{d+1}}e_{0}^{n_{d}-1}e_{\xi_{d}}\ldots e_{0}^{n_{1}-1}e_{\xi_{1}}$. The map $\tau(\Lambda) : \mathcal{O}^{\mathcyr{sh}} \rightarrow \mathcal{O}^{\mathcyr{sh}}[\Lambda] ; w \mapsto \Lambda^{\weight(w)}w$ sends an element to its orbit under the motivic Galois action $\tau$ of $\mathbb{G}_{m}$ (\ref{eq:tau}).
\begin{Notation} \label{definition sigma inv DR} (i) Let $\comp^{\text{har},\Ad} : \mathcal{O}^{\mathcyr{sh}} \rightarrow \widehat{\mathcal{O}^{\mathcyr{sh}}}$, $w \mapsto \frac{1}{1-e_{0}}w$. \newline (ii) Let $\comp^{\Lambda \Ad,\Ad} : \mathcal{O}^{\mathcyr{sh}} \rightarrow \widehat{\tau(\Lambda)\mathcal{O}^{\mathcyr{sh}}}$, $w \mapsto \frac{1}{1-\Lambda e_{0}} w$. \newline (iii) Let $\comp_{\Lambda}^{\Lambda \Ad,\har} : \widehat{\mathcal{O}^{\mathcyr{sh}}} \rightarrow \widehat{\tau(\Lambda)\mathcal{O}^{\mathcyr{sh}}}$, $\frac{1}{1-e_{0}}w \mapsto \frac{1}{1-\Lambda e_{0}}w$. \end{Notation}
We also denote in the same way the duals of these maps. The map $\comp^{\text{har},\Ad}$ was called $\comp^{\Sigma \smallint}$, ``comparison from integrals to series'', in \cite{I-2}. In this paper, it seems more relevant to call it the ``comparison from adjoint (analogues of MZV$\mu_{N}$'s) to harmonic (analogues of MZV$\mu_{N}$'s)''. We note that $\comp^{\Lambda \Ad, \Ad} = \widehat{\tau(\Lambda)}^{-1} \circ \big(\frac{1}{\Lambda}\comp^{\har,\Ad}\big) \circ \widehat{\tau(\Lambda)}$. \newline\indent The strategy of transferring algebraic properties stated above is motivated by the notions of adjoint Ihara action $\circ_{\Ad}^{\smallint_{1,0}}$ (\cite{I-2} Definition 1.1.3) and pro-unipotent harmonic action $\circ_{\har}^{\smallint_{1,0}}$ (\cite{I-3} Definition 2.1.2), which are pushforwards of the usual Ihara action (\ref{eq:Ihara}) by $\Ad(e_{1})$ and by $\comp^{\text{har},\Ad}$, the latter appearing implicitly in equation (\ref{eq:formula for n=1}), and by the fact that the Ihara action is compatible with the algebraic relations of MZV$\mu_{N}$'s.
\newline\indent $\Ad(e_{1})$ restricted to $ \tilde{\Pi}_{1,0}(K) = \{f \in \Pi_{1,0}(K) \text{ }|\text{ }f[e_{0}] = f[e_{1}] = 0\}$ is injective. The map $\comp^{\Sigma \smallint}$, viewed here as $\comp^{\text{har},\Ad}$, is not injective, but taking into account all the relations of iteration of the Frobenius makes it possible to replace it by an injective map (\cite{I-3}, \S4.2). Thus we consider that applying the push-forward by $\Ad(e_{1})$ and $\comp^{\text{har},\Ad}$ should not lose information.
\subsubsection{Setting for computations, power series expansions of integrals at $0$}
In that framework, we will view prime weighted cyclotomic multiple harmonic sums via equation (\ref{eq:Li 0}). Let us write more precisely how they are connected to multiple polylogarithms.
\begin{Notation} \label{notation coefficient of power series}For $S = \sum\limits_{(n_{1},\ldots,n_{r}) \in \mathbb{N}^{r}} c_{n_{1},\ldots,n_{r}} x_{1}^{n_{1}} \ldots x_{r}^{n_{r}} \in R[[x_{1},\ldots,x_{r}]]$ a formal power series, where $R$ is a ring, for all $(n_{1},\ldots,n_{r}) \in \mathbb{N}^{r}$, we write $c_{n_{1},\ldots,n_{r}}= S[x_{1}^{n_{1}} \ldots x_{r}^{n_{r}}]$. \end{Notation}
By the power series expansion of multiple polylogarithms (\ref{eq:Li series bis}), we have two slightly different formulas : \begin{equation} \har_{m}\big( (n_{i})_{d};(\xi_{i})_{d+1} \big) = m^{n_{1}+\ldots+n_{d}+l}\Li[e_{0}^{l-1}e_{\xi_{d+1}}e_{0}^{n_{d}-1}e_{\xi_{d}} \ldots e_{0}^{n_{1}-1}e_{\xi_{1}}][z^{m}]; \end{equation} \begin{equation} \label{eq:har et Li 2} \har_{m}((n_{i})_{d};(\xi_{i})_{d+1}) = m^{n_{1}+\ldots+n_{d}} \sum_{0<m'<m} \xi_{d+1}^{m-m'} \Li[ e_{\xi_{d+1}}e_{0}^{n_{d}-1}e_{\xi_{d}} \ldots e_{0}^{n_{1}-1}e_{\xi_{1}}][z^{m'}]. \end{equation} \indent We consider multiple polylogarithms restricted to $(\mathbb{P}^{1} - Z)^{2}$ minus the diagonal, where $Z$ is a finite subset of $\mathbb{P}^{1}$, and we choose $O = (\vec{1}_{0},\vec{1}_{0})$ in cubic coordinates as the origin of paths of integration. \newline\indent We use the notation $\omega_{z}=\frac{dy}{y-z}$ for $z \in Z- \{0,\infty\}$. By (\ref{eq:Li series bis}), for $(y_{1},y_{2})$ on a neighbourhood of the chosen reference base-point $O$, for $\gamma$ the straight path from $O$ to $(y_{1},y_{2})$, for any two words $w = \big( (n_{i})_{d};(z_{i})_{d} \big)$, $\tilde{w} = \big( (\tilde{n}_{i})_{d'};(\tilde{z}_{i})_{d'} \big)$, we have : \begin{multline} \int_{\gamma} \big( \omega_{0}^{\tilde{n}_{d'}-1}\omega_{\tilde{z}_{d'}} \ldots \omega_{0}^{\tilde{n}_{1}-1}\omega_{\tilde{z}_{1}}\big)(y_{2}) \big(\omega_{0}^{n_{d}-1}\omega_{z_{d}} \ldots \omega_{0}^{n_{1}-1}\omega_{z_{1}}\big)(y_{1}y_{2}) \\ = \sum_{0<m_{1}<\ldots<m_{d}<m'_{1}<\ldots<m'_{d'}} \frac{\big( \frac{z_{2}}{z_{1}} \big)^{m_{1}} \ldots \big(\frac{y_{1}}{z_{d}}\big)^{m_{d}} \big( \frac{\tilde{z}_{2}}{\tilde{z}_{1}} \big)^{m'_{1}} \ldots \big(\frac{y_{2}}{\tilde{z}_{d'}}\big)^{m'_{d'}}} {m_{1}^{n_{1}}\ldots m_{d}^{n_{d}} \text{ } {m'_{1}}^{\tilde{n}_{1}}\ldots {m'_{d'}}^{\tilde{n}_{d'}}}. \end{multline}
\subsubsection{Setting for computations, series}
In this framework, we will view prime weighted multiple harmonic sums via equation (\ref{eq:mult har sums}). We consider them as functions of the upper bound $m$ of their domain of summation, $\mathbb{N} \rightarrow \mathbb{C}$, resp. $\mathbb{N} \rightarrow K_{p}$. We will study them by using only the topological field structure of $\mathbb{C}$ and $\mathbb{C}_{p}$, both at the source and at the target. \newline\indent If $c : \mathbb{N} \rightarrow \mathbb{C}$ is a function such that $c(0)=0$, the Newton series of $c$ is the function : $\displaystyle z \mapsto \sum\limits_{m \in\mathbb{N}} (\nabla c)_{m} {z \choose m}$, where $\nabla c = \Big(\sum\limits_{m'=0}^{m} (-1)^{m'}{m \choose m'} c(m')\Big)_{m \in \mathbb{N}}$ and $\displaystyle {z \choose m} = \frac{z(z-1)\ldots(z-m+1)}{m!}$. It is usually defined on a half-plane of the type $\{ \text{Re}(z) > \rho \}$ of $\mathbb{C}$. This notion appears in certain proofs of results on multiple harmonic sums by several other authors, which we will generalize and interpret in terms of cyclotomic multiple harmonic values.
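As a concrete illustration of the Newton series defined above (with exactly this sign convention for $\nabla c$; the example sequence is an assumption chosen for the test, not one from the paper), a short Python sketch: for a sequence that is polynomial in $m$, the differences $(\nabla c)_{m}$ vanish beyond the degree, so the Newton series has only finitely many terms.

```python
from fractions import Fraction
from math import comb

def nabla(c, M):
    """(nabla c)_m = sum_{m'=0}^{m} (-1)^{m'} C(m, m') c(m'), for m = 0..M."""
    return [sum((-1) ** k * comb(m, k) * c(k) for k in range(m + 1))
            for m in range(M + 1)]

def newton_series(c, M, z):
    """Partial sum  sum_{m=0}^{M} (nabla c)_m * C(z, m)  of the Newton series at z."""
    total = Fraction(0)
    for m, dm in enumerate(nabla(c, M)):
        binom = Fraction(1)
        for i in range(m):
            binom *= Fraction(z - i, i + 1)   # builds z(z-1)...(z-m+1)/m!
        total += dm * binom
    return total

c = lambda m: m * m                 # polynomial sequence of degree 2, with c(0) = 0
print(nabla(c, 6))                  # → [0, -1, 2, 0, 0, 0, 0]: the differences vanish
                                    #    beyond the degree, so the series terminates
```

For this sequence the Newton series is the polynomial $-\binom{z}{1} + 2\binom{z}{2} = z^{2} - 2z$.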
\subsection{``Overconvergent'' variants of cyclotomic multiple harmonic values}
\subsubsection{Definition}
For any word $w$ on $e_{0 \cup \mu_{N}}$, the power series expansion of the functions $\Li_{p,X_{K}}^{\KZ}[w]$ at $0$ in $k_{N}[[z]][\log(z)]$ is identical to that of the complex multiple polylogarithm $\Li[w]$ of (\ref{eq:multiple polylogarithms power series expansion}). Thus, the coefficients of the power series expansions of $\Li_{p,\alpha}^{\dagger}$ can be written in terms of multiple harmonic sums and cyclotomic $p$-adic multiple zeta values. In the next statement, we use Notation \ref{notation coefficient of power series}.
\begin{Definition}
\noindent (i) Let the overconvergent variants of cyclotomic multiple harmonic sums be the numbers $\frak{h}_{m}^{\dagger_{p,\alpha}}(w) = \Li_{p,\alpha}^{\dagger}[w][z^{m}] \in K$, where $w$ is a word on $e_{0 \cup \mu_{N}}$ and $m \in \mathbb{N}$.
\newline (ii) Let the overconvergent variants of weighted cyclotomic multiple harmonic sums be the numbers $\har_{m}^{\dagger_{p,\alpha}}(w) = m^{\weight(w)}\Li_{p,\alpha}^{\dagger}[w][z^{m}]$, where $w$ is a word on $e_{0 \cup \mu_{N}}$ and $m \in \mathbb{N}$.
\newline (iii) Let overconvergent prime weighted cyclotomic multiple harmonic sums be the numbers
$\har_{p^{\alpha}}^{\dagger_{p,\alpha}}(w)$, where $w$ is a word on $e_{0 \cup \mu_{N}}$. \end{Definition}
The following is a variant of Definition \ref{def harmonic}, whose notations we keep. We now consider all $p$'s, resp. all $\alpha$'s, at the same time.
\begin{Definition} \label{variant harmonic}For any index $w= \big( (n_{i})_{d};(\xi_{i})_{d+1} \big)$, we call respectively
\newline (i) for each $p \in \mathcal{P}_{N}$, $\har^{\dagger}_{p^{\mathbb{N}}}(w)
= \big( \har^{\dagger_{p,\alpha}}_{p^{\alpha}} (w)\big)_{\alpha \in \mathbb{N}} \in K_{p}^{\mathbb{N}}$, overconvergent variants of $p$-adic cyclotomic multiple harmonic values.
\newline (ii) for each $\alpha \in \mathbb{N}^{\ast}$, $\har^{\dagger}_{\mathcal{P}^{\alpha}}(w) = \big( \har^{\dagger_{p,\alpha}}_{p^{\alpha}} (w)\big)_{p \in \mathcal{P}_{N}} \in \underset{p\in\mathcal{P}_{N}}{\prod} K_{p}$, overconvergent variants of adelic cyclotomic multiple harmonic values.
\newline (iii) $\har^{\dagger}_{\mathcal{P}_{N}^{\mathbb{N}}}(w)=\big( \har^{\dagger_{p,\alpha}}_{p^{\alpha}} (w)\big)_{(p,\alpha) \in \mathcal{P}_{N} \times \mathbb{N}} \in \big( \underset{p\in\mathcal{P}_{N}}{\prod} K_{p} \big)^{\mathbb{N}}$, overconvergent variants of ($p$-adic $\times$ adelic) cyclotomic multiple harmonic values. \end{Definition}
As in the previous sections, we will usually refer to (iii) and omit the adjective ``$p$-adic $\times$ adelic''. We will use the abbreviation MHV$\mu_{N}^{\dagger}$.
\subsubsection{Setting for computations}
In the previous parts, we have used three settings for computations (\S2.4.2, \S2.4.3, \S2.4.4) for studying prime weighted multiple harmonic values, corresponding to the frameworks $\int_{1,0}$, $\int$ and $\Sigma$ respectively. We now complement them with the following :
\begin{Proposition} \label{prop formula dagger} (i) ($\int_{1,0}$) We have :
\begin{equation} \label{eq:overconv expression}
\har_{p^{\alpha}}^{\dagger_{p,\alpha}}[e_{0}^{l}e_{\xi_{d+1}}e_{0}^{n_{d}-1}e_{\xi_{d}}\ldots e_{0}^{n_{1}-1}e_{\xi_{1}}]
= \sum_{b=l}^{\infty} \sum_{\xi \in \mu_{N}(K)} \xi^{-p^{\alpha}}
\Ad_{{\Phi_{p,\alpha}^{(\xi)}}}(e_{\xi}) [e_{0}^{b}e_{\xi_{d+1}}e_{0}^{n_{d}-1}e_{\xi_{d}}\ldots e_{0}^{n_{1}-1}e_{\xi_{1}}] .
\end{equation}
\noindent (ii) ($\int$) \label{lemma power series expansion Li dagger} We have $ \Li_{p,\alpha}^{\dagger}[w][z^{p^{\alpha}}] = \frak{h}^{\dagger_{p,\alpha}}_{p^{\alpha}}(w)$ and, for $m \in \{1,\ldots,p^{\alpha}-1\}$, $\Li_{p,\alpha}^{\dagger}[w][z^{m}] = (p^{\alpha})^{\weight(w)}\Li[w][z^{m}] = \frac{1}{m^{l}}\frak{h}_{m} \big((n_{i})_{d};(\xi_{i})_{d+1} \big)$.
\newline\noindent (iii) ($\Sigma$) We can write a formula as sums of series for each $\har_{p^{\alpha}}^{\dagger_{p,\alpha}}(w)$ by composing (\ref{eq:overconv expression}) with the formula for $\Ad_{\Phi_{p,\alpha}^{(\xi)}}(e_{\xi})$ as sums of series obtained in the main theorem of \cite{I-2}. \end{Proposition}
\begin{proof} (i) Let $w=e_{0}^{l}e_{\xi_{d+1}}e_{0}^{n_{d}-1}e_{\xi_{d}}\ldots e_{0}^{n_{1}-1}e_{\xi_{1}}$ be any word and let us consider the coefficient $[w][z^{p^{\alpha}}]$ of equation (\ref{eq:horizontality1}) ; we obtain :
\begin{multline} \label{eq:remainder} \har_{p^{\alpha}}^{\dagger_{p,\alpha}}
(e_{0}^{l}e_{\xi_{d+1}}e_{0}^{n_{d}-1}e_{\xi_{d}}\ldots e_{0}^{n_{1}-1}e_{\xi_{1}})
\\ = \har_{p^{\alpha}} \big( (n_{i})_{d};(\xi_{i})_{d+1} \big) -
\sum_{b=0}^{l-1} \sum_{\xi \in \mu_{N}(K)} \xi^{-p^{\alpha}}
\Ad_{{\Phi_{p,\alpha}^{(\xi)}}}(e_{\xi}) [e_{0}^{b}e_{\xi_{d+1}}e_{0}^{n_{d}-1}e_{\xi_{d}}\ldots e_{0}^{n_{1}-1}e_{\xi_{1}}] .
\end{multline}
If we combine this and equation (\ref{eq:formula for n=1}) proven in \cite{I-2}, we obtain the result. (In \cite{I-2}, we actually obtained equation (\ref{eq:formula for n=1}) by proving that $\har_{p^{\alpha}}^{\dagger_{p,\alpha}}(e_{0}^{l}e_{\xi_{d+1}}e_{0}^{n_{d}-1}e_{\xi_{d}}\ldots e_{0}^{n_{1}-1}e_{\xi_{1}}) \displaystyle \underset{l \rightarrow \infty}{\rightarrow} 0$, which was an immediate consequence of the main result of \cite{I-1}.)
\newline (ii) Follows from equation (\ref{eq:horizontality1}) and from the fact that $\Li(z^{p^{\alpha}})[z^{m}]=0$ for all $m \in \{1,\ldots,p^{\alpha}-1\}$.
\newline (iii) Immediate. \end{proof}
In particular, (\ref{eq:overconv expression}) means that the numbers $\har_{p^{\alpha}}^{\dagger_{p,\alpha}}(w)$ are the remainders of the sums of series of (\ref{eq:formula for n=1}) which express $\har_{p^{\alpha}}$ in terms of cyclotomic $p$-adic multiple zeta values. This leads us to a variant of Definition \ref{def adjoint} (ii) :
\begin{Definition} \label{def over adjoint}
For all words, let the overconvergent variants of $\Lambda$-adjoint $p$-adic cyclotomic multiple zeta values ($\Lambda$Ad$p$MZV$\mu_{N}^{\dagger}$'s) be
\begin{multline} \zeta^{\Lambda,\Ad,\dagger}_{p^{\alpha}} (e_{0}^{l}e_{\xi_{d+1}}e_{0}^{n_{d}-1}e_{\xi_{d}}\ldots e_{0}^{n_{1}-1}e_{\xi_{1}})
\\ = \sum_{b \geqslant l} \sum_{\xi \in \mu_{N}(K)} \xi^{-p^{\alpha}} \Lambda^{b+n_{d}+\ldots+n_{1}}
\Ad_{\Phi_{p,\alpha}^{(\xi)}}(e_{\xi})
[e_{0}^{b}e_{\xi_{d+1}}e_{0}^{n_{d}-1}e_{\xi_{d}}\ldots e_{0}^{n_{1}-1}e_{\xi_{1}}] .
\end{multline} \end{Definition}
In particular, the $l=0$ case of $\Lambda$Ad$p$MZV$\mu_{N}^{\dagger}$'s are just $\Lambda$Ad$p$MZV$\mu_{N}$'s. For $l\geqslant 1$, they are remainders of $\Lambda$Ad$p$MZV$\mu_{N}$'s viewed as power series in $\Lambda$.
\subsection{Adjoint multiple polylogarithms and harmonic multiple polylogarithms}
Adjoint cyclotomic multiple zeta values and cyclotomic multiple harmonic values are particular values of functions which retain many of their properties, and which we introduce now.
\subsubsection{Adjoint multiple polylogarithms}
In this context, the generalization of cyclotomic multiple zeta values is given by multiple polylogarithms, with the only restriction that we assume them to be iterated integrals on the straight path from $0$ to $1$. \newline\indent This suggests the following generalization of Definition \ref{def adjoint} :
\begin{Definition} Let adjoint multiple polylogarithms be
\begin{multline*} \Li^{\Ad}\big( (n_{i})_{d},l;(z_{i})_{d+1} \big) = (-1)^{d} \sum_{z \in Z} z^{-1} ({\Li^{(z)}}^{-1}e_{z}\Li^{(z)})\big[e_{0}^{l}e_{z_{d+1}}e_{0}^{n_{d}-1}e_{z_{d}}\ldots e_{0}^{n_{1}-1}e_{z_{1}} \big] \\ = \sum_{d'=1}^{d+1} \prod_{i=d'}^{d} {-n_{i} \choose l_{i}}
z_{d'}^{-1} (-1)^{n_{d'}+\ldots+n_{d}}
\Li^{(z_{d'})} \big( (n_{d-i}+l_{d-i})_{d-d'},(z_{d-i})_{d-d'} \big)
\Li^{(z_{d'})} \big( (n_{i})_{d'-1},(z_{i})_{d'-1} \big)
\end{multline*}
where $\Li^{(z)}$ denotes an iterated integral on a straight path from $0$ to $z$. \end{Definition}
Of course, as in \S7.1 we also have a variant where $e_{z}$ above is replaced by $e^{2i\pi e_{z}}$. We note that we have the following adjoint variant of the KZ equation (\ref{eq: nabla KZ}) : for all $u \in \Lie(\Pi_{0,0})$, $$ d\Ad_{\Li}(u) = \Ad_{\Li_{p,X_{K}}^{\KZ}} \bigg( \ad_{u} \Big( e_{0} \omega_{0} + \sum_{z_{0} \in D- \{0,\infty\}} e_{z_{0}}\omega_{z_{0}} \Big) \bigg) . $$ Moreover, in the case of $\mathbb{P}^{1} - \{0,\mu_{N},\infty\}$, we have an adjoint variant of the differential equation satisfied by the overconvergent $p$-adic multiple polylogarithms (\ref{eq:horizontality equation}) : for all $u \in \Lie(\Pi_{0,0})$, \begin{multline*} d\Ad_{\Li_{p,\alpha}^{\dagger}}(u) = \\ \Ad_{\Li_{p,\alpha}^{\dagger}} \bigg( \ad_{u} \Big( \omega_{0}(z^{p^{\alpha}})e_{0} + \sum_{\xi \in \mu_{N}(K)} \omega_{\xi^{p^{\alpha}}}(z^{p^{\alpha}})e_{\xi^{p^{\alpha}}} \Big) \bigg) - \ad_{\Ad_{\Li_{p,\alpha}^{\dagger}}(u)} \Big( \omega_{0}(z) e_{0} + \sum_{\xi \in \mu_{N}(K)} \omega_{\xi}(z)\Ad_{\Phi^{(\xi^{p^{\alpha}})}_{p,\alpha}}(e_{\xi^{p^{\alpha}}}) \Big) . \end{multline*}
\subsubsection{A generalization of cyclotomic multiple harmonic values and finite cyclotomic multiple zeta values}
We now assume that $\mathbb{P}^{1} - D$ is defined over a number field, which is embedded in $\mathbb{C}_{p}$ for all primes $p$.
\begin{Definition} Let multiple harmonic polylogarithms be
$$ \har_{\mathcal{P}^{\mathbb{N}}}\big( (n_{i})_{d};(z_{i})_{d+1} \big) =
\bigg( \sum_{0<m_{1} <\ldots < m_{d}<p^{\alpha}}
\frac{\big( \frac{z_{2}}{z_{1}} \big)^{m_{1}} \ldots \big(\frac{z_{d+1}}{z_{d}}\big)^{m_{d}}\big(\frac{1}{z_{d+1}}\big)^{p^{\alpha}}}{m_{1}^{n_{1}}\ldots m_{d}^{n_{d}}} \bigg)_{(p,\alpha) \in \mathcal{P} \times \mathbb{N}^{\ast}} \in \prod_{p \in \mathcal{P}} \mathbb{C}_{p}^{\mathbb{N}^{\ast}} . $$ \end{Definition}
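The truncated sums appearing in this definition are directly computable; here is an illustrative Python sketch (complex floating-point arithmetic, with $p^{\alpha}$ replaced by a generic bound $M$ so that the entry is a plain finite sum):

```python
from itertools import combinations

def har_trunc(ns, zs, M):
    """Sum over 0 < m_1 < ... < m_d < M of
       (z_2/z_1)^{m_1} ... (z_{d+1}/z_d)^{m_d} (1/z_{d+1})^M / (m_1^{n_1} ... m_d^{n_d}),
       i.e. the generic entry of the definition with p^alpha replaced by the bound M."""
    d = len(ns)
    assert len(zs) == d + 1
    total = 0.0
    for ms in combinations(range(1, M), d):        # strictly increasing tuples m_1 < ... < m_d
        term = (1 / zs[d]) ** M                    # zs[d] is z_{d+1} (0-based indexing)
        for i in range(d):
            term *= (zs[i + 1] / zs[i]) ** ms[i] / ms[i] ** ns[i]
        total += term
    return total

# Depth 1, all z_i = 1: recovers the harmonic number H_{M-1}.
print(har_trunc((1,), (1, 1), 6))   # → 2.2833... (= 1 + 1/2 + 1/3 + 1/4 + 1/5)
```

The enumeration of strictly increasing tuples matches the domain of summation $0<m_{1}<\ldots<m_{d}<M$ exactly.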
Unlike in \S2.4 and \S2.5, we do not claim a crystalline meaning for this notion.
\section{Around double shuffle equations\label{double shuffle}}
We review the regularized double shuffle equations (\S3.1), we construct adjoint double shuffle equations (\S3.2) and harmonic double shuffle relations (\S3.3), in the three frameworks $\int_{1,0}$, $\int$, $\Sigma$, which proves the theorem stated in \S1.3. We discuss a particular consequence of the double shuffle equation, which we call the ``reversal'' equation, and we find its adjoint and harmonic counterparts (\S3.4).
\subsection{Review on the double shuffle equations}
Let $K$ be a field of characteristic zero which contains a primitive $N$-th root of unity. Let $Y_{N} =\{ y_{n}^{(\xi)}\text{ }|\text{ }n\geqslant 1, \xi \in \mu_{N}(K)\}$ be the alphabet. A word on $e_{0\cup \mu_{N}}$ of the form $e_{0}^{n_{d}-1}e_{\xi_{d}} \ldots e_{0}^{n_{1}-1}e_{\xi_{1}}$ can be viewed as the word $y_{n_{1}}^{(\xi_{1})} \ldots y_{n_{d}}^{(\xi_{d})}$ on $Y_{N}$. \newline\indent Racinet \cite{Racinet} has defined, for each $\lambda\in K$, the set $\DS_{\lambda}(K)$ : this is the set of couples $(\psi_{\mathcyr{sh}},\psi_{\ast}) \in K \langle\langle e_{0\cup \mu_{N}} \rangle\rangle \times K \langle\langle Y_{N} \rangle\rangle$ such that $\psi_{\mathcyr{sh}}$ satisfies the shuffle equation (equation (\ref{eq:shuffle eq}) below), $\psi_{\ast}$ satisfies the quasi-shuffle equation (equation (\ref{eq:stuffle eq}) below), $\psi_{\mathcyr{sh}}$ and $\psi_{\ast}$ are related by a certain equation (equation (\ref{eq:rel between reg}) below), and we have $\psi_{\mathcyr{sh}}[e_{0}] = \psi_{\mathcyr{sh}}[e_{1}]=0$ and $\psi_{\mathcyr{sh}}[e_{0}e_{1}]=-\lambda^{2}/24$. This defines $\DS_{\lambda}$ as a subscheme of $\Pi_{1,0}$. Racinet proves that the Ihara product (\ref{eq:Ihara}) restricts to a group law on $\DS_{0}$ and to an action of $\DS_{0}$ on $\DS_{\lambda}$ which makes the latter a torsor. \newline\indent We have $\Phi_{\KZ} \in \DS_{2\pi i}(\mathbb{C})$ and $\Phi_{p,\alpha} \in \DS_{0}(K_{p})$ : for $\alpha=1$ and $\alpha=-\infty$, this follows from \cite{Besser Furusho} ($N=1$) and \cite{Yamashita} (any $N$) ; by the relations of iteration of the Frobenius (\cite{I-3}, equations (1.11), (1.12), (1.13) and Proposition 1.5.2), and by Racinet's theorem, it follows that this holds for any $\alpha \in \mathbb{Z} \cup \{\pm \infty\} - \{0\}$.
\subsubsection{The shuffle relation of iterated integrals}
The shuffle equation appears in equation (\ref{eq:shuffle equation}) : an element $f \in k \langle \langle e_{0 \cup \mu_{N}} \rangle\rangle$ satisfies the shuffle equation if : \begin{equation} \label{eq:shuffle eq} \forall w,w' \text{ words on }e_{0\cup \mu_{N}}, \text{ we have } f[w]f[w'] = f[w\text{ }\mathcyr{sh}\text{ }w'] \end{equation} where $\mathcyr{sh}$ is the shuffle product reviewed in \S2.1.1, characterized by induction on the weight by $w\text{ }\mathcyr{sh}\text{ }1 = 1\text{ }\mathcyr{sh}\text{ }w = w$ ($1$ is the empty word) and $e_{x}w \text{ } \mathcyr{sh} \text{ } e_{x'}w' = e_{x}(w \text{ }\mathcyr{sh}\text{ } e_{x'}w') + e_{x'}(e_{x}w \text{ } \mathcyr{sh}\text{ } w')$, or equivalently $w e_{x} \text{ } \mathcyr{sh}\text{ } w'e_{x'} = (w\text{ } \mathcyr{sh}\text{ } w'e_{x'})e_{x} + (w e_{x}\text{ } \mathcyr{sh}\text{ } w')e_{x'}$, for all $x,x' \in \{0\} \cup \mu_{N}(K)$. The shuffle equation amounts to \begin{equation} \label{eq:shuffle eq bis} \hat{\Delta}_{\mathcyr{sh}}(f) = f \hat{\otimes} f \end{equation} where $\hat{\Delta}_{\mathcyr{sh}}$ is the coproduct in $K \langle \langle e_{0 \cup \mu_{N}} \rangle\rangle$ viewed as the completed dual of the Hopf algebra $\mathcal{O}^{\mathcyr{sh}}$. \newline\indent The fact that the generating series $\Phi_{\KZ}$, resp. $\Phi_{p,\alpha}$, satisfy the shuffle equation follows directly from their definition as points of $\Pi_{1,0} \simeq \Spec(\mathcal{O}^{\mathcyr{sh},e_{0\cup \mu_{N}}})$ and from equation (\ref{eq:shuffle equation}), and amounts to a family of relations on MZV$\mu_{N}$'s, resp. $p$MZV$\mu_{N}$'s.
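As an illustration of the recursion above (not part of the mathematical text), here is a minimal Python sketch which expands a shuffle product into the list of its terms ; the encoding of words as tuples of letter labels is ours.

```python
def shuffle(w, wp):
    """Shuffle product of two words, given as tuples of letters.
    Implements the recursion
      e_x w sh e_x' w' = e_x (w sh e_x' w') + e_x' (e_x w sh w').
    Returns the list of resulting words, with multiplicity."""
    if not w:
        return [wp]
    if not wp:
        return [w]
    return ([(w[0],) + u for u in shuffle(w[1:], wp)]
            + [(wp[0],) + u for u in shuffle(w, wp[1:])])
```

For instance $e_{0}\ \mathcyr{sh}\ e_{1} = e_{0}e_{1} + e_{1}e_{0}$, and shuffling a word of length $n$ with a word of length $n'$ produces $\binom{n+n'}{n}$ terms, in agreement with the count of permutations in (\ref{eq:eq shuffle source}).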
The coefficient of $\Phi_{\KZ}$ at a word $e_{0}^{n_{d}-1}e_{\xi_{d}} \ldots e_{0}^{n_{1}-1}e_{\xi_{1}}e_{0}^{n_{0}-1}=e_{x_{n}}\ldots e_{x_{1}}$ which does not necessarily satisfy the hypothesis $n_{d}\geqslant 2$ and $n_{0}=1$, is the regularized iterated integral $\displaystyle \underset{\epsilon,\epsilon' \rightarrow 0}{\Reg\lim} \int_{\epsilon}^{1-\epsilon'} \frac{dt_{n}}{t_{n}-x_{n}} \ldots \frac{dt_{1}}{t_{1}-x_{1}}$ defined as the constant term in the asymptotic expansion of $\int_{\epsilon}^{1-\epsilon'} \frac{dt_{n}}{t_{n}-x_{n}} \ldots \frac{dt_{1}}{t_{1}-x_{1}}$, which is in $\mathbb{C}[[\epsilon,\epsilon']][\log(\epsilon),\log(\epsilon')]$. That coefficient is denoted by $\zeta_{\mathcyr{sh}}(e_{x_{n}}\ldots e_{x_{1}})$ and is called a (shuffle-)regularized MZV$\mu_{N}$. Similarly, the coefficient of $\Phi_{p,\alpha}$ at such a word is denoted by $(\zeta_{p,\alpha})_{\mathcyr{sh}}(e_{x_{n}}\ldots e_{x_{1}})$ and called a (shuffle-)regularized $p$MZV$\mu_{N}$. The shuffle equation for $\Phi_{\KZ}$ also follows from the identity \begin{equation} \label{eq:eq shuffle source} \displaystyle \int_{\epsilon<t_{1}< \ldots < t_{n}<1-\epsilon'} \times \int_{\epsilon<t_{n+1}<\ldots< t_{n+n'}<1-\epsilon'} = \sum_{\substack{\sigma \text{ permutation of }\{1,\ldots,n+n'\} \\\text{s.t. } \sigma(1)<\ldots<\sigma(n)\\ \text{and } \sigma(n+1)<\ldots<\sigma(n+n')}} \int_{\epsilon<t_{\sigma^{-1}(1)} < \ldots<t_{\sigma^{-1}(n+n')}<1-\epsilon'}, \end{equation} which follows from an equality between domains of integration. The definition of multiple polylogarithms as coefficients of a point of $\pi_{1}^{\un,\dR}(\mathbb{P}^{1} - \{0,\mu_{N},\infty\})$ and equation (\ref{eq:shuffle equation}) imply that they satisfy the shuffle equation ; this also follows from their definition as iterated integrals (\ref{eq:MPL}) and equation (\ref{eq:eq shuffle source}).
\subsubsection{The quasi-shuffle relation of iterated series}
An element $f \in K \langle\langle Y_{N} \rangle\rangle$ satisfies the quasi-shuffle equation if : \begin{equation} \label{eq:stuffle eq} \forall w,w' \text{ words on }Y_{N}, \text{ we have } f[w]f[w'] = f[w \ast w'] \end{equation} where $\ast$, the quasi-shuffle product on $\mathbb{Q} \langle Y_{N} \rangle$, is defined as follows, by induction on the depth : $1 \ast w = w \ast 1 = w$ ($1$ is the empty word) and $w y^{(\xi)}_{n} \ast w'y^{(\xi')}_{n'} = (w \ast w'y^{(\xi')}_{n'})y^{(\xi)}_{n} + (wy^{(\xi)}_{n} \ast w') y^{(\xi')}_{n'} + (w \ast w')y^{(\xi\xi')}_{n+n'}$. The quasi-shuffle product makes $K \langle\langle Y_{N} \rangle\rangle$ into a commutative algebra. The quasi-shuffle equation amounts to \begin{equation} \label{eq:stuffle eq bis} \hat{\Delta}_{\ast}(f) = f \hat{\otimes} f \end{equation} where $\hat{\Delta}_{\ast}$ is the coproduct in $K \langle \langle Y_{N} \rangle\rangle$ viewed as the completed dual coalgebra of $\mathbb{Q} \langle Y_{N} \rangle$. One has in fact a natural structure of Hopf algebra $\mathcal{O}^{\ast,e_{0\cup \mu_{N}}}$ on $\mathbb{Q} \langle Y_{N} \rangle$ in which the product is $\ast$ (\cite{Hoffman} for $N=1$). \newline\indent For a word $w=\big((n_{i})_{d};(\xi_{i})_{d}\big)$ such that we do not necessarily have the hypothesis $(n_{d},\xi_{d})\not=(1,1)$ of (\ref{eq:multizetas}), let $\Reg \displaystyle \sum\limits_{0<m_{1}<\ldots<m_{d}} \frac{\big( \frac{\xi_{2}}{\xi_{1}} \big)^{m_{1}} \ldots \big(\frac{1}{\xi_{d}}\big)^{m_{d}}}{m_{1}^{n_{1}}\ldots m_{d}^{n_{d}}}$ be the constant term in the asymptotic expansion of $\displaystyle \sum\limits_{0<m_{1}<\ldots<m_{d}<m} \frac{\big( \frac{\xi_{2}}{\xi_{1}} \big)^{m_{1}} \ldots \big(\frac{1}{\xi_{d}}\big)^{m_{d}}}{m_{1}^{n_{1}}\ldots m_{d}^{n_{d}}}$ when $m \rightarrow \infty$, which is in $\mathbb{C}[\log(m)][[\frac{1}{m}]]$, in which the terms involving the Euler-Mascheroni constant are ``removed'' in a canonical way (see \cite{Racinet}).
We denote it by $\zeta_{\ast}(w)$ and we call it a (quasi-shuffle-)regularized MZV$\mu_{N}$. Let $(\Phi_{\KZ})_{\ast}= 1+ \sum\limits_{w\text{ word on }Y_{N}} \zeta_{\ast}(w)w$ ; $(\Phi_{\KZ})_{\ast}$ satisfies the quasi-shuffle equation ; this follows from the identities \begin{equation} \label{eq:quasi shuffle eq source} \sum_{0<m_{1}<\ldots<m_{d}<m} \times \sum_{0<m'_{1}<\ldots<m'_{d'}<m} = \sum_{\text{quasi-shuffle elements}}\text{ }\sum_{0<m''_{1}<\ldots<m''_{r}<m} \end{equation} where a quasi-shuffle element is a way to order $m_{1},\ldots,m_{d}$ and $m'_{1},\ldots,m'_{d'}$, which determines an integer $r \in \{\max(d,d'),\ldots,d+d'\}$ and variables $m''_{1},\ldots,m''_{r}$ : for example, for $d=d'=1$, there are three quasi-shuffle elements : $\{m_{1}<m'_{1}\}$, $\{m_{1}=m'_{1}\}$ and $\{m_{1}>m'_{1}\}$. \newline\indent In the $p$-adic case, the fact that $p$MZV$\mu_{N}$'s satisfy the quasi-shuffle equations \cite{Besser Furusho} is proved by using formal properties of Coleman functions. Moreover, an analogue of the quasi-shuffle regularization which still satisfies the quasi-shuffle equation has been defined in \cite{Furusho Jafari} in the case of $N=1$. We can easily deduce a similar definition for any $N$. \newline\indent One has a variant $\ast_{\har}$ on $\mathcal{O}_{\har}^{\ast}$ (defined in \S2.2.1) adapted to cyclotomic multiple harmonic sums. For any words $w = \big((n_{i})_{d};(\xi_{i})_{d+1}\big)$ and $w' = \big((\tilde{n}_{i})_{d};(\tilde{\xi}_{i})_{d+1}\big)$, $w \ast_{\har} w'$ is the sum, indexed by the set of quasi-shuffle elements $(u_{i})_{d''}$ of $\big((n_{i})_{d},(\tilde{n}_{i})_{d}\big)$, of the sequences $\big((u_{i})_{d''}, (\xi_{a_{i}} \tilde{\xi}_{b_{i}})_{d''+1} \big)$ defined as follows : $a_{1} = b_{1} = 1$ and, for $2 \leqslant i \leqslant d''$, $ (a_{i},b_{i}) = \left\{ \begin{array}{ll} (a_{i-1}+1 ,b_{i-1})&
\text{ if } \exists l\text{ }|\text{ } u_{i-1}=n_{l},
\\ (a_{i-1}, b_{i-1}+1) &\text{ if }\exists l'\text{ }|\text{ } u_{i-1}=\tilde{n}_{l'},
\\ (a_{i-1}+1,b_{i-1}+1)&\text{ if }\exists l,l' \text{ }|\text{ } u_{i-1}=n_{l}+\tilde{n}_{l'} \end{array}\right.$. This makes $\mathcal{O}_{\har}^{\ast}$ into a commutative algebra. We have natural morphisms of algebras $i : \mathcal{O}^{\ast} \hookrightarrow \mathcal{O}_{\har}^{\ast}$, $\big( (n_{i})_{d};(\xi_{i})_{d} \big) \mapsto \big( (n_{i})_{d};((\xi_{i})_{d},1) \big)$ and $r : \mathcal{O}_{\har}^{\ast} \twoheadrightarrow \mathcal{O}^{\ast}$, $\big( (n_{i})_{d};(\xi_{i})_{d+1} \big) \mapsto \big( (n_{i})_{d}; (\frac{\xi_{i}}{\xi_{d+1}})_{d}\big) $, with $r \circ i=\id$. \newline\indent By equation (\ref{eq:quasi shuffle eq source}), the power series expansions of multiple polylogarithms (\ref{eq:multiple polylogarithms power series expansion}) satisfy a version of the quasi-shuffle relation. It can be encoded by means of $\ast_{\har}$ ; we will use this in \S3.3.2.
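The quasi-shuffle recursion for $\ast$ can likewise be made concrete. In the following Python sketch (the encoding is ours : a letter $y_{n}^{(\xi)}$ with $\xi = \zeta_{N}^{a}$ is stored as the pair $(n,a)$, so that the product $\xi\xi'$ corresponds to $a+a' \bmod N$), each term of the output corresponds to one quasi-shuffle element.

```python
def quasi_shuffle(w, wp, N):
    """Quasi-shuffle product * of two words on Y_N, following the recursion
      w y_n^{(xi)} * w' y_{n'}^{(xi')} = (w * w' y_{n'}^{(xi')}) y_n^{(xi)}
        + (w y_n^{(xi)} * w') y_{n'}^{(xi')} + (w * w') y_{n+n'}^{(xi xi')}.
    Words are tuples of pairs (n, a), a being the exponent of a fixed
    primitive N-th root of unity.  Returns a list of words with multiplicity."""
    if not w:
        return [wp]
    if not wp:
        return [w]
    (n, a), (n2, a2) = w[-1], wp[-1]
    return ([u + ((n, a),) for u in quasi_shuffle(w[:-1], wp, N)]
            + [u + ((n2, a2),) for u in quasi_shuffle(w, wp[:-1], N)]
            + [u + ((n + n2, (a + a2) % N),)
               for u in quasi_shuffle(w[:-1], wp[:-1], N)])
```

In depth one this reproduces the three quasi-shuffle elements $\{m_{1}<m'_{1}\}$, $\{m_{1}>m'_{1}\}$ and $\{m_{1}=m'_{1}\}$ of (\ref{eq:quasi shuffle eq source}).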
\subsubsection{The relation between the two regularizations of MZV$\mu_{N}$'s}
The two regularizations of MZV$\mu_{N}$'s are related as follows. Let $\pr : K\langle \langle e_{0\cup \mu_{N}} \rangle\rangle \rightarrow K\langle \langle Y_{N} \rangle\rangle$ be the unique continuous (for the weight-adic topology) linear map which sends a word $w$ to itself if its rightmost letter is not $e_{0}$ and to $0$ if it is $e_{0}$, where we view words on $Y_{N}$ as words on $e_{0\cup \mu_{N}}$ as usual. Let the maps $\textbf{p},\textbf{q} : \mathcal{O}^{\mathcyr{sh}} \rightarrow \mathcal{O}^{\mathcyr{sh}}$ be defined as follows (\cite{Racinet}, \S2.2.3) : $$ \textbf{p}(e_{0}^{n_{d}-1}e_{\sigma_{d}} \cdots e_{0}^{n_{1}-1}e_{\sigma_{1}}e_{0}^{n_{0}-1} ) = e_{0}^{n_{d}-1}e_{\sigma_{d}^{-1}} \cdots e_{0}^{n_{2}-1}e_{(\sigma_{d}\cdots \sigma_{2})^{-1}} e_{0}^{n_{1}-1}e_{(\sigma_{d}\cdots \sigma_{1})^{-1}}e_{0}^{n_{0}-1} $$ $$ \textbf{q} (e_{0}^{n_{d}-1}e_{\xi_{d}} \cdots e_{0}^{n_{1}-1}e_{\xi_{1}}e_{0}^{n_{0}-1} ) = e_{0}^{n_{d}-1}e_{\xi_{d}^{-1}} \cdots e_{0}^{n_{2}-1}e_{\xi_{3}^{-1}\xi_{2}} e_{0}^{n_{1}-1}e_{\xi_{2}^{-1}\xi_{1}}e_{0}^{n_{0}-1} $$ By duality they define maps $K \langle\langle e_{0\cup \mu_{N}} \rangle\rangle \rightarrow K \langle\langle e_{0\cup \mu_{N}} \rangle\rangle$, which we will also denote by $\textbf{p}$ and $\textbf{q}$ and, by restriction, they define maps $K \langle \langle Y_{N} \rangle\rangle \rightarrow K \langle \langle Y_{N} \rangle\rangle$. We have (\cite{Racinet}, Corollaire 2.24, D\'{e}finition 3.1) : \begin{equation} \label{eq:rel between reg} \textbf{q}\pr(\Phi_{\KZ}) = \exp \bigg( \sum\limits_{n=2}^{\infty} \frac{(-1)^{n}}{n}\zeta(n)y_{1}^{n} \bigg)(\Phi_{\KZ})_{\ast} . \end{equation} This formula can be compared with equations (\ref{eq:multizetas}) and (\ref{eq:multizetas integral}). \newline\indent The $p$-adic analogue of this formula has been proved in \cite{Furusho Jafari}, Theorem 0.1, (iii), in the $N=1$ case, and this can be easily adapted to any $N$.
\subsection{Adjoint $p$-adic double shuffle equations}
For $\Phi \in \DS_{0}(K)$, we are going to find some ``adjoint double shuffle equations'' for $\Phi_{\Ad,\chi}$, as defined in Definition 2.2.3. In particular, we will obtain adjoint double shuffle equations for adjoint $p$MZV$\mu_{N}$'s.
\subsubsection{Expression of $\Phi_{\Ad,\chi}$ in terms of $K \langle \langle Y_{N}\rangle\rangle$}
The first step is to rewrite $\pr(f_{\Ad})$ in terms of operations on $K\langle\langle Y_{N}\rangle\rangle$. \newline\indent Below, the hat denotes the completion with respect to the weight grading. The notation below refers to the word ``shifting'' ; this terminology will be explained in a subsequent paper.
\begin{Definition} \label{def shft} (i) Let $\shft_{\ast} : \mathcal{O}^{\ast} \rightarrow \widehat{\mathcal{O}^{\ast}}$, $w(e_{0},(e_{\xi})_{\xi \in \mu_{N}(K)}) \mapsto w (\frac{1}{1+e_{0}}e_{0},(\frac{1}{1+e_{0}}e_{\xi})_{\xi \in \mu_{N}(K)})$. \newline (ii) For any $l \in \mathbb{N}$, let $\shft_{l} : \mathcal{O}^{\ast} \rightarrow \mathcal{O}^{\ast}$, $\displaystyle e_{0}^{n_{d}-1}e_{\xi_{d}} \ldots e_{0}^{n_{1}-1}e_{\xi_{1}} \mapsto \sum\limits_{\substack{l_{1}+\ldots+l_{d}=l \\ l_{1},\ldots,l_{d} \geqslant 0}} \prod_{i=1}^{d} {-n_{i} \choose l_{i}} e_{0}^{n_{d}+l_{d}-1}e_{\xi_{d}} \ldots e_{0}^{n_{1}+l_{1}-1}e_{\xi_{1}}$. \newline Let $\shft^{\vee}_{l} : K \langle \langle Y_{N}\rangle\rangle \rightarrow K \langle \langle Y_{N}\rangle\rangle$ be defined by : for all $w$, $(\shft_{l}^{\vee}G)[w] = G[\shft_{l}(w)]$. \newline (iii) \label{def SY} For any $G \in K \langle \langle Y_{N}\rangle\rangle$ and $\xi \in \mu_{N}(K)$, let $G^{\inv,\xi} \in K \langle\langle Y_{N}\rangle\rangle$ be defined by, for all $d\geq 0$, $G^{\inv,\xi}[y_{l+1}^{(\xi_{d+1})}y_{n_{d}}^{(\xi_{d})} \ldots y_{n_{1}}^{(\xi_{1})}] = \left\{ \begin{array}{ll} (-1)^{n_{1}+\ldots+n_{d}}G[y_{n_{1}}^{(\xi_{2})} \ldots y_{n_{d}}^{(\xi_{d+1})}] & \xi_{1}=\xi \\ 0 & \xi_{1}\not=\xi \end{array} \right.$. \end{Definition}
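For instance, in depth one and in depth two, Definition \ref{def shft} (ii) gives, the coefficients being the binomial coefficients ${-n \choose l}$ arising from the expansion of $\frac{1}{(1+e_{0})^{n}}$ : $$ \shft_{1}(e_{0}^{n-1}e_{\xi}) = {-n \choose 1} e_{0}^{n}e_{\xi} = -n\, e_{0}^{n}e_{\xi}, \qquad \shft_{1}(e_{0}^{n_{2}-1}e_{\xi_{2}}e_{0}^{n_{1}-1}e_{\xi_{1}}) = -n_{2}\, e_{0}^{n_{2}}e_{\xi_{2}}e_{0}^{n_{1}-1}e_{\xi_{1}} - n_{1}\, e_{0}^{n_{2}-1}e_{\xi_{2}}e_{0}^{n_{1}}e_{\xi_{1}}. $$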
We note that $\shft_{l}$ is the coefficient of $\Lambda^{l}$ in $\widehat{\tau(\Lambda)}^{-1} \shft_{\ast} \tau(\Lambda)$.
\begin{Proposition} \label{prop adjoint star} For any $f\in \tilde{\Pi}_{1,0}(K)$, we have $$ \pr(f_{\Ad,\chi}) = \sum_{l\geq 0} \sum_{w_{l}} \sum_{\xi \in \mu_{N}(K)} \bigg( \chi(\xi)(-1)^{l} \big( \shft^{\vee}_{\ast,l} \pr(f^{(\xi)})\big)^{\inv,\xi} \pr(f^{(\xi)}) \bigg) [w_{l}] w_{l} $$ where $w_{l}$ runs over the words of the form $e_{0}^{l}e_{\xi_{d+1}}e_{0}^{n_{d}-1}e_{\xi_{d}} \ldots e_{0}^{n_{1}-1}e_{\xi_{1}}$, $d \geq 0$. \end{Proposition}
\begin{proof} Consider a word whose rightmost letter is not $e_{0}$, and write it as $w=e_{0}^{l}e_{\xi_{d+1}}e_{0}^{n_{d}-1}e_{\xi_{d}} \ldots e_{0}^{n_{1}-1}e_{\xi_{1}}$ (the $\xi_{i}$'s are roots of unity, $l\geq 0$, $n_{i}\geq 1$). Then, for $f$ a solution to the shuffle equation such that $f[e_{0}]=0$, we have $$ \sum_{\xi \in \mu_{N}(K)} \chi(\xi)({f^{(\xi)}}^{-1}e_{\xi}f^{(\xi)})[w] = \sum_{d'=1}^{d+1} \chi(\xi_{d'}) {f^{(\xi_{d'})}}^{-1}[e_{0}^{l}e_{\xi_{d+1}}e_{0}^{n_{d}-1} \ldots e_{0}^{n_{d'}-1}] f^{(\xi_{d'})}[e_{0}^{n_{d'-1}-1}e_{\xi_{d'-1}} \ldots e_{0}^{n_{1}-1}e_{\xi_{1}}] $$ By using the formula for the antipode of the shuffle Hopf algebra we have $$ {f^{(\xi_{d'})}}^{-1}[e_{0}^{l}e_{\xi_{d+1}}e_{0}^{n_{d}-1} \ldots e_{0}^{n_{d'}-1}] = (-1)^{n_{d'}+\ldots+n_{d}+l} f^{(\xi_{d'})}[e_{0}^{n_{d'}-1}e_{\xi_{d'+1}} \ldots e_{0}^{n_{d}-1}e_{\xi_{d+1}}e_{0}^{l}] $$ By using a well-known consequence of the shuffle equation and of $f[e_{0}]=0$, we have $$ f^{(\xi_{d'})}[e_{0}^{n_{d'}-1}e_{\xi_{d'+1}} \ldots e_{0}^{n_{d}-1}e_{\xi_{d+1}}e_{0}^{l}] = \sum_{l_{d'}+\ldots+l_{d}=l} \prod_{i=d'}^{d} {-n_{i} \choose l_{i}} f^{(\xi_{d'})}[e_{0}^{n_{d'}+l_{d'}-1}e_{\xi_{d'+1}} \ldots e_{0}^{n_{d}+l_{d}-1}e_{\xi_{d+1}}] $$ By the definition of $\shft_{l}$ (Definition \ref{def shft} (ii)), we have $$ \sum_{l_{d'}+\ldots+l_{d}=l} \prod_{i=d'}^{d} {-n_{i} \choose l_{i}} f^{(\xi_{d'})}[e_{0}^{n_{d'}+l_{d'}-1}e_{\xi_{d'+1}} \ldots e_{0}^{n_{d}+l_{d}-1}e_{\xi_{d+1}}] = (\shft_{l}^{\vee}f^{(\xi_{d'})})[e_{0}^{n_{d'}-1}e_{\xi_{d'+1}} \ldots e_{0}^{n_{d}-1}e_{\xi_{d+1}}] $$ Moreover, $$ (-1)^{l+n_{d'}+\ldots+n_{d}} (\pr f^{(\xi_{d'})})[\shft_{l}(e_{0}^{n_{d'}-1}e_{\xi_{d'+1}} \ldots e_{0}^{n_{d}-1}e_{\xi_{d+1}})] = (-1)^{l} \big( \shft^{\vee}_{\ast,l} \pr(f^{(\xi_{d'})})\big)^{\inv,\xi_{d'}}[e_{0}^{l}e_{\xi_{d+1}}\ldots e_{0}^{n_{d'}-1}e_{\xi_{d'}}] $$ and, finally, since all the words involved have their rightmost letter not equal to $e_{0}$, we can replace everywhere $f^{(\xi)}$ by
$\pr(f^{(\xi)})$. In the end, we have \begin{multline} \sum_{\xi \in \mu_{N}(K)} \chi(\xi)({f^{(\xi)}}^{-1}e_{\xi}f^{(\xi)})[w] \\ \begin{array}{l} = \displaystyle \sum_{d'=1}^{d+1} \chi(\xi_{d'}) (-1)^{l} \big( \shft^{\vee}_{l} \pr(f^{(\xi_{d'})}) \big)^{\inv,\xi_{d'}}[e_{0}^{l}e_{\xi_{d+1}}\ldots e_{0}^{n_{d'}-1}e_{\xi_{d'}}] \pr(f^{(\xi_{d'})})[e_{0}^{n_{d'-1}-1}e_{\xi_{d'-1}} \ldots e_{0}^{n_{1}-1}e_{\xi_{1}}] \\ \displaystyle = \Big( \sum_{d'=1}^{d+1} \chi(\xi_{d'}) (-1)^{l} (\shft^{\vee}_{l} \pr(f^{(\xi_{d'})}))^{\inv,\xi_{d'}} \pr(f^{(\xi_{d'})}) \Big) [w] \\ \displaystyle = \Big( \sum_{\xi \in \mu_{N}(K)} \chi(\xi) (-1)^{l} (\shft^{\vee}_{\ast,l} \pr(f^{(\xi)}))^{\inv,\xi} \pr(f^{(\xi)}) \Big) [w] \end{array} \end{multline} \end{proof}
\subsubsection{Relation between the two regularizations}
The second step is to observe that the relation between the two regularizations becomes trivial in the adjoint setting. This is an aspect of the proximity between adjoint cyclotomic multiple zeta values and cyclotomic multiple harmonic sums which we are using in this work. \newline\indent In view of Proposition \ref{prop adjoint star}, we now define an analogue of $\pr(f_{\Ad})$ in which $f$ is replaced by $f_{\ast}$, defined by the equalities : \begin{equation} \label{eq:passage a star 1} E_{f} = \exp\bigg(\sum_{n=2}^{\infty} \frac{(-1)^{n}}{n} f[e_{0}^{n-1}e_{1}] e_{1}^{n}\bigg) \end{equation} \begin{equation} \label{eq:passage a star 2} \textbf{q} \pr(f) = E_{f} f_{\ast} \end{equation}
\begin{Definition} For any $f \in \tilde{\Pi}_{1,0}(K)$, let $\displaystyle f_{\Ad,\chi,\ast} = \sum_{l\geq 0} \sum_{w_{l}} \sum_{\xi \in \mu_{N}(K)} \chi(\xi) \big((-1)^{l} \big( \shft^{\vee}_{\ast,l}(f_{\ast}^{(\xi)})\big)^{\inv,\xi}f_{\ast}^{(\xi)} \big)[w_{l}]w_{l}$ where the sum is over $w_{l}$ of the form $e_{0}^{l}e_{\xi_{d+1}}e_{0}^{n_{d}-1}e_{\xi_{d}} \ldots e_{0}^{n_{1}-1}e_{\xi_{1}}=y_{l+1}^{(\xi_{d+1})}y_{n_{d}}^{(\xi_{d})} \ldots y_{n_{1}}^{(\xi_{1})}$, $d\geq 0$. \end{Definition}
In \cite{I-2}, Definition 1.1.3 and Proposition 1.1.4, we have defined the adjoint Ihara product on $\Ad_{\Pi_{1,0}}(e_{1})$ by the formula $(g,f) \mapsto g \circ_{\Ad}^{\smallint_{1,0}} f = f (e_{0},(g^{(\xi)})_{\xi \in \mu_{N}(K)})$, and we have proved that it is a group law on $\Ad_{\Pi_{1,0}}(e_{1})$. \newline\indent The next statement says that the two regularizations (integrals and series) give the same result for adjoint $p$-adic cyclotomic multiple zeta values : this is consistent with the fact that their adjoint quasi-shuffle relation (\S3.2.3 below) can be understood via cyclotomic multiple harmonic sums, as we are going to see in the subsequent paper.
\begin{Proposition} \label{comparaison reg adjoint}We have \begin{equation} \label{eq:comparison of regularisations} \textbf{q}\pr(\Phi_{\Ad,\chi}) = \Phi_{\Ad,\chi,\ast} \end{equation} \end{Proposition}
\begin{proof} We are going to simplify the expression of $\pr(\Phi_{\Ad,\chi})$ given by Proposition \ref{prop adjoint star}. By equation (\ref{eq:passage a star 2}) we have, for each $\xi \in \mu_{N}(K)$ and $l\geq 0$, $$ (-1)^{l}\big(\shft^{\vee}_{l} \pr(\Phi^{(\xi)})\big)^{\inv,\xi} \pr(\Phi^{(\xi)}) = (-1)^{l}\big(\shft^{\vee}_{l}\big( E_{\Phi}^{(\xi)}\Phi_{\ast}^{(\xi)}\big)\big)^{\inv,\xi} E_{\Phi}^{(\xi)} \Phi_{\ast}^{(\xi)} $$ Now let $w=e_{0}^{l}e_{\xi_{d+1}}e_{0}^{n_{d}-1}e_{\xi_{d}} \ldots e_{0}^{n_{1}-1}e_{\xi_{1}}$ be a word. By the definition of the map $G \mapsto G^{\inv,\xi}$ (Definition \ref{def shft} (iii)), for all $d'$ ($1 \leq d' \leq d+1$), assuming $\xi=\xi_{d'}$, we have \begin{multline*} (-1)^{l}\big( \shft_{l}^{\vee}(E_{\Phi}^{(\xi)}\Phi_{\ast}^{(\xi)}) \big)^{\inv,\xi}[e_{0}^{l}e_{\xi_{d+1}}e_{0}^{n_{d}-1}e_{\xi_{d}} \ldots e_{0}^{n_{d'}-1}e_{\xi_{d'}}] \\ = (-1)^{l+n_{d}+\ldots+n_{d'}} \big( \shft_{l}^{\vee}(E_{\Phi}^{(\xi)}\Phi_{\ast}^{(\xi)})\big) [e_{0}^{n_{d'}-1}e_{\xi_{d'+1}} \ldots e_{0}^{n_{d}-1}e_{\xi_{d+1}}] \end{multline*} By the definition of $\shft_{l}$ (Definition \ref{def shft} (ii)), and by the fact that $E_{\Phi}^{(\xi)}$ has non-zero coefficients only at words of the form $e_{\xi}^{n}$, we have \begin{multline} \label{eq:3} \big(\shft^{\vee}_{\ast,l}\big( E_{\Phi}^{(\xi)}\Phi_{\ast}^{(\xi)} \big)\big)[e_{0}^{n_{d'}-1}e_{\xi_{d'+1}} \ldots e_{0}^{n_{d}-1}e_{\xi_{d+1}}] \\ \begin{array}{l} \displaystyle = \big(E_{\Phi}^{(\xi)} \shft^{\vee}_{\ast,l}(\Phi_{\ast}^{(\xi)}) \big) [e_{0}^{n_{d'}-1}e_{\xi_{d'+1}} \ldots e_{0}^{n_{d}-1}e_{\xi_{d+1}}] \\ \displaystyle = \sum_{r=d'}^{d+1} E_{\Phi}^{(\xi)}[e_{0}^{n_{d'}-1}e_{\xi_{d'+1}} \ldots e_{0}^{n_{r}-1}e_{\xi_{r+1}}] \shft^{\vee}_{\ast,l}(\Phi_{\ast}^{(\xi)}) [e_{0}^{n_{r+1}-1}e_{\xi_{r+2}} \ldots e_{0}^{n_{d}-1}e_{\xi_{d+1}}] \end{array} \end{multline} By the definition of the map $G \mapsto G^{\inv,\xi}$ (Definition \ref{def shft} (iii)), we have $$ (-1)^{n_{r+1}+\ldots+n_{d}} \shft^{\vee}_{l}(\Phi_{\ast}^{(\xi)}) [e_{0}^{n_{r+1}-1}e_{\xi_{r+2}} \ldots e_{0}^{n_{d}-1}e_{\xi_{d+1}}] = \big( \shft^{\vee}_{l}(\Phi_{\ast}^{(\xi)}) \big)^{(\inv,\xi_{r+1})} [e_{0}^{l}e_{\xi_{d+1}} \ldots e_{0}^{n_{r+2}-1}e_{\xi_{r+1}}] $$ Since $E_{\Phi}^{(\xi)}$ has non-zero coefficients only at words of the form $e_{\xi}^{n}$, in the sum over $r$ in (\ref{eq:3}), a non-zero term can appear only if $\xi_{r+1}=\xi_{d'}$, and we can write $$ E_{\Phi}^{(\xi)}[e_{0}^{n_{d'}-1}e_{\xi_{d'+1}} \ldots e_{0}^{n_{r}-1}e_{\xi_{r+1}}] = E_{\Phi}^{(\xi)}[e_{\xi_{r+1}}e_{0}^{n_{r}-1} \ldots e_{\xi_{d'+1}}e_{0}^{n_{d'}-1}] = E_{\Phi}^{(\xi)}[e_{0}^{n_{r}-1} \ldots e_{0}^{n_{d'}-1}e_{\xi_{d'}}] $$ Below we use the notation $\tau$ introduced in (\ref{eq:tau}). By the two previous equalities, (\ref{eq:3}) multiplied by $(-1)^{l+n_{d}+\ldots+n_{d'}}$ is equal to \begin{multline*} \sum_{r=d'}^{d+1} (-1)^{l} \big(\shft^{\vee}_{\ast,l}(\Phi_{\ast}^{(\xi_{d'})}) \big)^{(\inv,\xi_{d'})} [e_{0}^{l}e_{\xi_{d+1}} \ldots e_{0}^{n_{r+1}-1}e_{\xi_{r+1}}] \big( \tau(-1)E_{\Phi}^{(\xi)}\big)[e_{0}^{n_{r}-1} \ldots e_{0}^{n_{d'}-1}e_{\xi_{d'}}] \\ = (-1)^{l}\bigg( \big(\shft^{\vee}_{\ast,l}(\Phi_{\ast}^{(\xi_{d'})}) \big)^{(\inv,\xi_{d'})} (\tau(-1)E_{\Phi}^{(\xi)}) \bigg)[e_{0}^{l}e_{\xi_{d+1}} \ldots e_{\xi_{d'+1}} e_{0}^{n_{d'}-1}e_{\xi_{d'}}] \end{multline*} This formula combined with Proposition \ref{prop adjoint star} shows that \begin{multline} \label{eq:avant derniere} \pr(\Phi_{\Ad,\chi})[w] = \sum_{d'=1}^{d+1} \bigg( (-1)^{l} \chi(\xi_{d'}) \big(\shft^{\vee}_{\ast,l}(\Phi_{\ast}^{(\xi_{d'})}) \big)^{(\inv,\xi_{d'})} (\tau(-1)E_{\Phi}^{(\xi_{d'})}) E_{\Phi}^{(\xi_{d'})} \Phi_{\ast}^{(\xi_{d'})} \bigg)[w] \\ = \sum_{\xi \in \mu_{N}(K)} \bigg( (-1)^{l} \chi(\xi) \big(\shft^{\vee}_{\ast,l}(\Phi_{\ast}^{(\xi)}) \big)^{(\inv,\xi)} (\tau(-1)E_{\Phi}^{(\xi)}) E_{\Phi}^{(\xi)} \Phi_{\ast}^{(\xi)} \bigg)[w] \end{multline} For any $S \in K\langle \langle e_{0\cup \mu_{N}}\rangle\rangle$ and $n \geq 0$, we have $(\tau(-1)S)^{n} = \tau(-1)(S^{n})$, because for any words $w_{i}$, $\displaystyle \sum_{i=1}^{n}\weight(w_{i}) = \weight(w_{1}\ldots w_{n})$. Thus, if the coefficient of the empty word in $S$ is $0$, we can write $\displaystyle \tau(-1)\exp(S) = \sum_{n=0}^{\infty} \tau(-1)\frac{S^{n}}{n!} = \sum_{n=0}^{\infty} \frac{(\tau(-1)S)^{n}}{n!} = \exp (\tau(-1)S)$. In particular, by (\ref{eq:passage a star 1}), $\displaystyle \tau(-1)E_{\Phi} = \exp \big(\sum_{n=2}^{\infty} \frac{1}{n}\Phi[e_{0}^{n-1}e_{1}]e_{1}^{n} \big)$. Thus $$(\tau(-1)E_{\Phi}) E_{\Phi} = \exp \big(\sum_{n=2}^{\infty} \frac{1}{n}\Phi[e_{0}^{n-1}e_{1}]e_{1}^{n} \big) \exp \big(\sum_{n=2}^{\infty} \frac{(-1)^{n}}{n}\Phi[e_{0}^{n-1}e_{1}]e_{1}^{n} \big) = \exp \big(\sum_{n=2}^{\infty} \frac{1+(-1)^{n}}{n}\Phi[e_{0}^{n-1}e_{1}]e_{1}^{n} \big) $$ For $n$ odd, we have $1+(-1)^{n}=0$ ; for $n$ even, since $\Phi \in \DS_{0}(K)$, we have $\Phi[e_{0}^{n-1}e_{1}]=0$. This proves \begin{equation} \label{eq:derniere} \tau(-1)(E_{\Phi})E_{\Phi} = 1 \end{equation} The result follows from (\ref{eq:avant derniere}) and (\ref{eq:derniere}). \end{proof}
\subsubsection{Definition of the adjoint double shuffle relations}
\begin{Proposition} \label{221}Let $\Phi \in \DS_{0}(K)$ and $\chi \in \Theta$. \newline (i) We have \begin{equation} \label{eq:ds adjoint 1} \text{for all non-empty words }w,w',\text{ } \Phi_{\Ad,\chi}[w \text{ }\mathcyr{sh} \text{ }w'] = 0 . \end{equation} \noindent (ii) We have \begin{multline} \label{eq:ds adjoint 2} \text{for all words }w,w'\text{ and }L \in \mathbb{N},
\text{ }\sum_{\substack{l,l'\geqslant 0 \\ l+l'=L}} \Phi_{\Ad,\chi,\ast}[w;l] \Phi_{\Ad,\chi,\ast}[w';l'] = \Phi_{\Ad,\chi,\ast}[ w \ast_{\har} w';L] . \end{multline} \end{Proposition}
\begin{proof} (i) Let $\Delta_{\mathcyr{sh}}$ be the shuffle coproduct. For all $\xi \in \mu_{N}(K)$, we have $\Delta_{\mathcyr{sh}}(e_{\xi})= e_{\xi} \otimes 1 + 1 \otimes e_{\xi}$ and $\Delta_{\mathcyr{sh}}(\Phi^{(\xi)})=\Phi^{(\xi)} \otimes \Phi^{(\xi)}$, whence $\Delta_{\mathcyr{sh}}( {\Phi^{(\xi)}}^{-1} e_{\xi} \Phi^{(\xi)}) = {\Phi^{(\xi)}}^{-1} e_{\xi} \Phi^{(\xi)} \otimes 1 + 1 \otimes {\Phi^{(\xi)}}^{-1} e_{\xi} \Phi^{(\xi)}$. This implies $\Delta_{\mathcyr{sh}}(\Phi_{\Ad,\chi}) = \Phi_{\Ad,\chi} \otimes 1 + 1 \otimes \Phi_{\Ad,\chi}$. \newline\indent (ii) Let us prove that $\shft_{\ast}$ is a morphism of quasi-shuffle algebras. For simplicity, we do the proof for $\mathbb{P}^{1} - \{0,1,\infty\}$. The dual of $\shft_{\ast}$ is the concatenation algebra morphism $\imath_{\ast}^{\vee}$ defined by $$ y_{n} \mapsto \Lambda^{n} \sum_{l=0}^{n-1} \Lambda^{l} {n-1 \choose l} (-1)^{n-l} y_{n-l} = \Lambda^{n} \sum_{l=1}^{n} \Lambda^{n-l} (-1)^{l} y_{l} {n-1 \choose n-l} . $$ \noindent We have $(\imath_{\ast}^{\vee} \otimes \imath_{\ast}^{\vee})\Delta_{\ast}(y_{n}) = 1 \otimes \imath_{\ast}^{\vee}(y_{n}) + \imath_{\ast}^{\vee}(y_{n}) \otimes 1 + \sum_{k=1}^{n-1} \imath_{\ast}^{\vee}(y_{k}) \otimes \imath_{\ast}^{\vee}(y_{n-k})$ ; the third term of this sum is $$ \Lambda^{n} \sum_{k=1}^{n-1} \big( \sum_{l=1}^{k} \Lambda^{k-l} {k-1 \choose k-l} (-1)^{l} y_{l} \big) \otimes \big( \sum_{l'=1}^{n-k} \Lambda^{n-k-l'} {n-k-1 \choose n-k-l'} (-1)^{l'} y_{l'} \big) $$ $$ = \Lambda^{n} \sum_{L=2}^{n} \Lambda^{n-L}(-1)^{L} \big(
\sum_{\substack{l+l'=L\\ l,l'\geqslant 1}} y_{l} \otimes y_{l'}
\big) \sum_{\substack{l\leqslant k \leqslant n-L+l}} {k-1 \choose k-l} {n-k-1 \choose n-k-l'} $$ \noindent and for all $l,l'$ such that $l+l'=L$, we have $$ \sum_{\substack{l\leqslant k \leqslant n-L+l}} {k-1 \choose k-l} {n-k-1 \choose n-k-l'} = \sum_{k'=0}^{n-L} {k' + l-1 \choose k'} {n-L-k'+l'-1 \choose n-L-k'} = {n-1 \choose n-L} . $$ \noindent On the other hand, the map $G \mapsto G^{\inv,\xi}$ of Definition \ref{def shft} (iii) is an anti-morphism of quasi-shuffle algebras. This gives the result. \end{proof}
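The last displayed binomial identity is a Vandermonde-type summation ; as a quick sanity check (the helper names below are ours, and this is only a numerical verification, not a proof), it can be tested for small parameters :

```python
from math import comb

def lhs(n, L, l, lp):
    # sum_{l <= k <= n-L+l} C(k-1, k-l) * C(n-k-1, n-k-l')
    return sum(comb(k - 1, k - l) * comb(n - k - 1, n - k - lp)
               for k in range(l, n - L + l + 1))

def rhs(n, L):
    # C(n-1, n-L), using l + l' = L
    return comb(n - 1, n - L)
```

Running `lhs(n, L, l, L - l) == rhs(n, L)` over all $2 \leqslant L \leqslant n$ and $1 \leqslant l \leqslant L-1$ for small $n$ confirms the identity.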
\begin{Definition} We call (\ref{eq:ds adjoint 1}) the \emph{adjoint shuffle equation} and (\ref{eq:ds adjoint 2}) the \emph{adjoint quasi-shuffle equation} ; and we call the collection of (\ref{eq:ds adjoint 1}) and (\ref{eq:ds adjoint 2}) the \emph{adjoint double shuffle equations}.
We denote by $\DS_{0,\Ad}(K)$ the set of $\psi \in K \langle \langle e_{0\cup\mu_{N}} \rangle\rangle$ which satisfy the adjoint shuffle equation and such that for all $\chi \in \Theta$, $\textbf{q} \pr \Moy_{\chi}(\psi)$ satisfies the adjoint quasi-shuffle equation.
These equations define an affine scheme $\DS_{0,\Ad}$ over $k_{N}$, which we call the \emph{adjoint double shuffle scheme}. \end{Definition}
By the previous propositions, we have proved that if $\Phi \in \DS_{0}(K)$ then $\Ad_{\Phi}(e_{1})$ is in $\DS_{0,\Ad}(K)$. We note that the adjoint shuffle equation is also known as the linearized shuffle equation, or the shuffle equation modulo products, and amounts to saying that $\Ad_{\Phi}(e_{1})$ is primitive for the shuffle coproduct $\Delta_{\mathcyr{sh}}$. We also note that the adjoint quasi-shuffle equation can be reformulated as saying that $\Ad_{\Phi}(e_{1})$ is a ``grouplike'' element for a certain adjoint variant $\Delta_{\ast}^{\Ad}$ of $\Delta_{\ast}$.
\subsubsection{Stability by the Ihara product}
From Racinet's theorem that $\DS_{0}$ is a group for the Ihara product \cite{Racinet}, we now deduce an adjoint variant of this statement.
\begin{Proposition} The image of the map $\Ad(e_{1}) : \DS_{0} \rightarrow \DS_{0,\Ad}$ is an algebraic group under the adjoint Ihara product $\circ_{\Ad}^{\smallint_{1,0}}$. \end{Proposition}
\begin{proof} By Racinet's theorem \cite{Racinet}, $(\DS_{0},\circ^{\smallint_{1,0}})$ is an algebraic group. By \cite{I-2}, Proposition 1.1.4, $\Ad(e_{1})$ is a morphism of algebraic groups $(\tilde{\Pi}_{1,0},\circ^{\smallint_{1,0}}) \buildrel \sim \over \longrightarrow (\Ad_{\tilde{\Pi}_{1,0}}(e_{1}),\circ_{\Ad}^{\smallint_{1,0}})$. \end{proof}
The following question seems interesting, as we will explain in the next section :
\begin{Question} \label{question} Is $\Ad(e_{1}) : \DS_{0} \rightarrow \DS_{0,\Ad}$ an isomorphism ? \end{Question}
\subsection{Adjoint complex double shuffle equations}
\subsubsection{Adjoint double shuffle equation\label{paragraph lifts}}
Let $K$ be a field of characteristic $0$. For $\mu \in K - \{0\}$, let $\DS_{\mu}$ be the scheme of regularized double shuffle relations with parameter $\mu$ defined by Racinet \cite{Racinet}.
In order to write the relation between the two regularizations of adjoint cyclotomic multiple zeta values, we have to consider a slightly more general object.
Let $\rho_{\Ad}$ be the linear map $K[T_{1},T_{2}] \rightarrow K[T_{1},T_{2}]$ defined by the formula
$$ \rho_{\Ad} : e^{T_{2}e_{1}} e^{T_{1}e_{1}} \mapsto (-1)^{\weight}(e^{T_{1}e_{1}})\,(-1)^{\weight}(E)\,E\, e^{T_{2}e_{1}} $$
We now define the regularized versions of adjoint complex cyclotomic multiple zeta values. Let $\Phi_{\KZ}(T)$ and $\Phi_{\KZ,\ast}(T)$ be the two regularizations of $\Phi_{\KZ}$, resp. $\Phi_{\KZ,\ast}$, which can be found in \cite{Racinet} ($T$ is a formal variable). We define their adjoint analogues, the two regularizations of $\Phi_{\KZ,\Ad,\chi}$ :
(a) The regularization in the sense of integrals : $\displaystyle \Phi_{\KZ,\Ad,\chi}(T) = \sum\limits_{\xi \in \mu_{N}(K)} \chi(\xi) {\Phi(T)^{-1}}^{(\xi)}e_{\xi}\Phi(T)^{(\xi)} $
(b) The regularization in the sense of series (in view of Proposition \ref{prop adjoint star}) : \newline $\displaystyle \Phi_{\KZ,\Ad,\chi,\ast}(T) = \sum_{l\geq 0} \sum_{w_{l}} \sum_{\xi \in \mu_{N}(K)} \chi(\xi) \big((-1)^{l} \big( \shft^{\vee}_{\ast,l}(\Phi_{\ast}(T)^{(\xi)})\big)^{\inv,\xi} \Phi_{\ast}^{(\xi)}(T) \big)[w_{l}]w_{l} \in \mathbb{C} \langle\langle Y_{N} \rangle\rangle$
By considering their coefficients, we deduce the regularized versions of adjoint MZV$\mu_{N}$'s :
\begin{Definition} (a) The regularized (in the sense of integrals) AdMZV$\mu_{N}$'s are
$$ \zeta^{\Ad}\big( (n_{i})_{d};(\xi_{i})_{d+1};l;\chi\big)(T) =
\Phi_{\KZ,\Ad,\chi}(T) [e_{0}^{l}e_{\xi_{d+1}}e_{0}^{n_{d}-1}e_{\xi_{d}} \ldots e_{0}^{n_{1}-1}e_{\xi_{1}}]. $$
(b) The regularized (in the sense of series) AdMZV$\mu_{N}$'s are
$$ \zeta^{\Ad}_{\ast}\big( (n_{i})_{d};(\xi_{i})_{d+1};l;\chi\big)(T) =
\Phi_{\KZ,\Ad,\chi,\ast}(T) [e_{0}^{l}e_{\xi_{d+1}}e_{0}^{n_{d}-1}e_{\xi_{d}} \ldots e_{0}^{n_{1}-1}e_{\xi_{1}}]. $$
And similarly for the $\Lambda$-adic AdMZV$\mu_{N}$'s. \end{Definition}
We generalize the previous definitions as follows : $$ \Phi_{\KZ,\chi,\Ad}(T_{1},T_{2}) = \sum_{\xi \in \mu_{N}(K)} \chi(\xi) {\Phi(T_{2})^{-1}}^{(\xi)}e_{\xi}\Phi(T_{1})^{(\xi)} $$ $$ \Phi_{\KZ,\ast,\chi,\Ad}(T_{1},T_{2}) = \sum_{l\geq 0} \sum_{w_{l}} \sum_{\xi \in \mu_{N}(K)} \chi(\xi) \big((-1)^{l} \big( \shft^{\vee}_{\ast,l}(\Phi_{\ast}(T_{2})^{(\xi)})\big)^{\inv,\xi} \Phi_{\ast}^{(\xi)}(T_{1}) \big)[w_{l}]w_{l} $$
\begin{Proposition} If $\Phi \in \DS_{\mu}(K)$, we have
\newline (i) $\Phi_{\Ad,\chi}(T)$ satisfies the adjoint shuffle equation
\newline (ii) $\Phi_{\Ad,\chi,\ast}(T)$ satisfies the adjoint quasi-shuffle equation
\newline (iii) We have $\textbf{q} \pr \rho (\Phi_{\Ad,\chi}(T_{1},T_{2})) = \Phi_{\Ad,\chi,\ast}(T_{1},T_{2})$,
$\Phi_{\Ad,\chi}(T) = \Phi_{\Ad,\chi}(T,T)$, $ \Phi_{\Ad,\chi,\ast}(T) = \Phi_{\Ad,\chi,\ast}(T,T)$
\newline (iv) We have $\Phi_{\Ad,\chi}[e_{1}e_{0}e_{1}] = (\Phi^{-1}e_{1}\Phi)[e_{1}e_{0}e_{1}] = 2\Phi[e_{0}e_{1}]$. \end{Proposition}
\begin{proof} Similar to the proofs in the $p$-adic case (\S3.2). (iv) follows from $\Phi^{-1}[e_{1}e_{0}] = (-1)^{2}\Phi[e_{0}e_{1}]$ and $({\Phi^{-1}}^{(\xi)}e_{\xi}\Phi^{(\xi)})[e_{1}e_{0}e_{1}]=0$ for $\xi\not=1$. \end{proof}
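To make the count behind (iv) concrete, here is the low-weight computation written out (a routine check, for illustration, using only the two facts recalled in the proof above). The only factorizations $w_{1}e_{1}w_{2} = e_{1}e_{0}e_{1}$ with $w_{1},w_{2}$ (possibly empty) words are $(w_{1},w_{2}) = (\emptyset, e_{0}e_{1})$ and $(w_{1},w_{2}) = (e_{1}e_{0},\emptyset)$, whence
$$ (\Phi^{-1}e_{1}\Phi)[e_{1}e_{0}e_{1}] = \Phi[e_{0}e_{1}] + \Phi^{-1}[e_{1}e_{0}] = \Phi[e_{0}e_{1}] + \Phi[e_{0}e_{1}] = 2\Phi[e_{0}e_{1}], $$
where we used $\Phi^{-1}[e_{1}e_{0}] = (-1)^{2}\Phi[e_{0}e_{1}]$ and $\Phi[\emptyset] = \Phi^{-1}[\emptyset] = 1$.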
\begin{Definition} Let $\DS_{\mu,\Ad}(\mathbb{C})$ be the set of $(\Phi_{\Ad,\chi}(T_{1},T_{2}),\Phi_{\Ad,\chi,\ast}(T_{1},T_{2})) \in \mathbb{C}(T_{1},T_{2})\langle \langle e_{0\cup \mu_{N}} \rangle\rangle^{2}$ which satisfy the above equations and $\Phi_{\Ad,\chi}[e_{1}e_{0}e_{1}]=2 (-\mu^{2}/24)$. \newline The above equations define an affine scheme which we call the \emph{scheme of adjoint regularized double shuffle relations}. \end{Definition}
We deduce :
\begin{Corollary} The map $\Ad(e_{1})$ defines a morphism $\DS_{\mu} \rightarrow \DS_{\mu,\Ad}$.
By the adjoint Ihara action, the image of the map $\DS_{\mu} \rightarrow \DS_{\mu,\Ad}$ is a torsor under the group defined as the image of the map $\DS_{0} \rightarrow \DS_{0,\Ad}$. \end{Corollary}
\begin{proof} This follows from Racinet's theorem that $\DS_{\mu}$ is a torsor under the group $\DS_{0}$ for the Ihara action, and the fact that $\Ad(e_{1})$ is a morphism of algebraic groups $(\tilde{\Pi}_{1,0},\circ^{\smallint_{1,0}}) \buildrel \sim \over \longrightarrow (\Ad_{\tilde{\Pi}_{1,0}}(e_{1}),\circ_{\Ad}^{\smallint_{1,0}})$ (\cite{I-2}, Proposition 1.1.4). \end{proof}
Particular cases of our adjoint regularized double shuffle relations can be found in \cite{Hi} and \cite{HMS} which appeared recently on the arXiv.
\begin{Remark} We note that $$ (-1)^{\weight}(E)E= \exp(\sum_{n\geq 2,\text{even}} \frac{2}{n} \Phi[e_{0}^{n-1}e_{1}]e_{1}^{n} ) = \exp(\sum_{n\geq 1} \frac{1}{n} \Phi[e_{0}^{2n-1}e_{1}]e_{1}^{2n} ) $$
In the context of \S3, this was equal to 1. Here, in the case where $\Phi=\Phi_{\KZ}$, we have
$\Phi[e_{0}^{2n-1}e_{1}] = \frac{(-1)^{n-1} B_{2n}}{2\,(2n)!} (24\Phi[e_{0}e_{1}])^{n}$. It follows from \cite{Ihara Kaneko Zagier}, Theorem 7, that this relation holds for all solutions to the double shuffle equations. Thus
$$ (-1)^{\weight}(E)E = \exp(\sum_{n\geq 1} \frac{1}{n} \frac{(-1)^{n-1} B_{2n}}{2\,(2n)!} (24\Phi[e_{0}e_{1}])^{n}e_{1}^{2n} )= \exp(\sum_{n\geq 1} \frac{- B_{2n}}{2n\,(2n)!} (-24\Phi[e_{0}e_{1}]e_{1}^{2})^{n}) $$
This term appears implicitly in the relation between the two regularizations of adjoint cyclotomic multiple zeta values. \end{Remark}
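As a consistency check of the Bernoulli-number formula above in the first nontrivial case (a routine verification, for illustration) : for $n=2$, using $B_{4} = -\frac{1}{30}$, the formula gives
$$ \Phi[e_{0}^{3}e_{1}] = \frac{(-1)\cdot(-\frac{1}{30})}{2\cdot 4!}\,(24\Phi[e_{0}e_{1}])^{2} = \frac{2}{5}\,\Phi[e_{0}e_{1}]^{2}, $$
which is the coefficient form of Euler's identity $\zeta(4) = \frac{2}{5}\zeta(2)^{2}$.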
\subsubsection{Other aspects of the adjoint double shuffle relations\label{paragraph lifts}}
We give two $\Lambda$-adjoint formulations of the above adjoint shuffle equation.
\begin{Proposition} \label{4.8} For all $w,w' \in \mathcal{O}^{\ast}$, for $\Lambda, \Lambda'$ formal variables, and for any $\xi, \xi' \in \mu_{N}(\mathbb{C})$, we have
\begin{multline}
(\Phi_{\KZ}^{-1}e^{2\pi i e_{1}} \Phi_{\KZ}) (\frac{1}{1-\Lambda e_{0}}e_{\xi}w)\text{ }
(\Phi_{\KZ}^{-1}e^{2\pi i e_{1}} \Phi_{\KZ})(\frac{1}{1-\Lambda' e_{0}}e_{\xi'}w')
\\ =
(\Phi_{\KZ}^{-1}e^{2\pi i e_{1}} \Phi_{\KZ}) \bigg( \frac{1}{1-(\Lambda+\Lambda') e_{0}} e_{\xi} \big( w \text{ }\mathcyr{sh} \text{ } (\frac{1}{1-\Lambda' e_{0}}e_{\xi'}w') \big)
+ e_{\xi'} \big( (\frac{1}{1-\Lambda e_{0}}e_{\xi}w) \text{ }\mathcyr{sh} \text{ }w' \big) \bigg),
\end{multline}
\begin{equation}
(\Phi_{\KZ}^{-1}e_{1} \Phi_{\KZ}) \bigg( \frac{1}{1-(\Lambda+\Lambda') e_{0}} e_{\xi}( w \text{ }\mathcyr{sh} \text{ } (\frac{1}{1-\Lambda' e_{0}}e_{\xi'}w'))
+ e_{\xi'}((\frac{1}{1-\Lambda e_{0}}e_{\xi}w) \text{ }\mathcyr{sh} \text{ }w')\bigg) = 0.
\end{equation} \end{Proposition}
\begin{proof} Below we use Notation \ref{definition sigma inv DR} for $\comp^{\Lambda \Ad,\Ad}$. (a) Let us show that
\begin{multline} \label{eq:thing to prove} \comp^{\Lambda \Ad,\Ad}e_{\xi}w
\text{ }\mathcyr{sh}\text{ }\comp^{\Lambda'\Ad,\Ad}e_{\xi'}w' =
\\
\comp^{(\Lambda+\Lambda')\Ad, \Ad}e_{\xi}( w \text{ }\mathcyr{sh} \text{ } \comp^{\Lambda'\Ad,\Ad}e_{\xi'}w')
+
\comp^{(\Lambda+\Lambda')\Ad,\Ad}
e_{\xi'}(\comp^{\Lambda\Ad,\Ad}e_{\xi}w \text{ }\mathcyr{sh} \text{ }w') .
\end{multline}
\noindent Let $L$ be the left-hand side in (\ref{eq:thing to prove}). We compute the image of $\partial_{e_{x}}(L)$ for all $x \in \{0\} \cup \mu_{N}(K)$ (for the definition of $\partial_{e_{x}}$, see the proof of Proposition \ref{prop 2.3.1}). Given that $\partial_{e_{\xi}}$ is a derivation for $\mathcyr{sh}$, we obtain (where $\delta$ is Kronecker's symbol) : \begin{equation*} \begin{array}{l} \partial_{e_{0}}L = (\Lambda+\Lambda')L, \\ \partial_{e_{\xi}}L = (w\text{ }\mathcyr{sh}\text{ } \comp^{\Lambda'\Ad,\Ad}e_{\xi'}w') + \delta_{\xi,\xi'}( \comp^{\Lambda\Ad,\Ad}e_{\xi}w \text{ }\mathcyr{sh}\text{ }w'), \\ \partial_{e_{\xi'}}L = \delta_{\xi,\xi'} (w\text{ } \mathcyr{sh}\text{ } \comp^{\Lambda'\Ad,\Ad}e_{\xi'}w') + ( \comp^{\Lambda\Ad,\Ad}e_{\xi}w\text{ }\mathcyr{sh}\text{ } w'), \\ \partial_{e_{x}}L = 0 \text{ }\text{if}\text{ }x \not\in \{0,\xi,\xi'\}. \end{array} \end{equation*} We deduce (\ref{eq:thing to prove}) by $L=\sum\limits_{x\in \{0\} \cup \mu_{N}(K)} e_{x}\partial_{e_{x}}(L)$. \newline\indent (b) On the other hand, $f^{-1}e^{2i \pi e_{1}} f$ resp. $f^{-1}e_{1} f$ satisfies the shuffle equation, resp. the shuffle equation modulo products. We deduce the result. \end{proof}
\subsubsection{Adjoint and harmonic double shuffle equations for multiple polylogarithms}
The shuffle relation for multiple polylogarithms holds by their definition in terms of iterated integrals (\ref{eq:Li}).
\newline\indent A quasi-shuffle relation for multiple polylogarithms also holds, by their power series expansion (\ref{eq:Li series bis}). However, it involves several curves at the same time : if $D,D'$ are two finite subsets of $\mathbb{P}^{1}(K)$, both containing $0$ and $\infty$, the quasi-shuffle equation expressing a product of multiple polylogarithms associated respectively with $\mathbb{P}^{1} - D$ and $\mathbb{P}^{1} - D'$ will involve $\mathbb{P}^{1} - DD'$ where $DD' = \{0,\infty\} \cup \{xx'\text{ }|\text{ } x \in D - \{0,\infty\} , x' \in D' - \{0,\infty\} \}$. We leave the exact definitions to the reader. \newline\indent In the end, by imitating the proofs of \S3, we obtain :
\begin{Proposition} (rough version)
The adjoint multiple polylogarithms and multiple harmonic polylogarithms resp. finite multiple polylogarithms satisfy a generalization of the double shuffle equations of \S3 resp. \S6. \end{Proposition}
\subsection{Harmonic double shuffle equations}
We define harmonic double shuffle equations for cyclotomic multiple harmonic values of Definition \ref{def harmonic}, in the frameworks $\smallint_{1,0}$ (\S3.3.1), $\int$ (\S3.3.2), $\Sigma$ (\S3.3.3), and we prove that they are equivalent (\S3.3.4).
\subsubsection{In the framework $\smallint_{1,0}$}
We use the adjoint double shuffle equations of \S3.2 to define the harmonic double shuffle equations in the framework of $\smallint_{1,0}$. In \cite{I-3}, Definition 2.1.2, we have introduced $\circ_{\har}^{\smallint_{1,0}}$, the pro-unipotent harmonic action of integrals at (1,0).
\begin{Proposition} \label{prop 2.3.1}Let $\psi \in K \langle \langle e_{0 \cup \mu_{N}} \rangle\rangle$ such that $\psi[e_{0}]=0$. Let $h=\comp^{\Lambda \Ad,\Ad}\psi$, i.e. $h= \sum\limits_{w} \psi[\frac{1}{1-\Lambda e_{0}}w]w$ where the sum is over words $w$ of the form $e_{\xi_{d+1}} e_{0}^{n_{d}-1}e_{\xi_{d}} \ldots e_{0}^{n_{1}-1} e_{\xi_{1}}$. \newline (i) $\psi$ satisfies the adjoint quasi-shuffle equation if and only if $h$ satisfies \begin{equation} \label{eq: DS har int1,0 1} \text{for all words }w,w',\text{ }h(w \ast w') = h(w) \text{ }h(w') . \end{equation} \noindent (ii) $\psi$ satisfies the adjoint shuffle equation if and only if $h$ satisfies \begin{equation} \label{eq: DS har int1,0 2} \text{for all words }w,w'\text{ and for all }n \in \mathbb{N}^{\ast},\text{ } h (e_{\xi'} (e_{0}^{n-1}e_{\xi} w \text{ } \mathcyr{sh} \text{ } w' )) = h ( e_{\xi} ( w \text{ }\mathcyr{sh} \text{ }\frac{1}{1 - \Lambda e_{0}} e_{0}^{n-1}e_{\xi'}w') ) . \end{equation} \noindent (iii) The set
$\{ \comp^{\Lambda \Ad,\Ad}\psi \text{ }|\text{ }\psi \in \Ad_{\DS_{0}(K)}(e_{1})\}$ is a torsor under the group $(\Ad_{\DS_{0}(K)}(e_{1}),\circ_{\Ad}^{\smallint_{1,0}})$ for the pro-unipotent harmonic action $\circ_{\har}^{\smallint_{1,0}}$. \end{Proposition}
\begin{proof} (i) The adjoint quasi-shuffle equation amounts to $$ \big(\sum_{l \geqslant 0} \psi[w;l]\Lambda^{l} \big) \big(\sum_{l'\geqslant 0} \psi[w';l']\Lambda^{l'} \big) = \sum_{L \geqslant 0} \Lambda^{L} \sum_{l+l'=L} \psi[w;l] \psi[w';l'] = \sum_{L \geqslant 0}\Lambda^{L} \psi[ w \ast w';L], $$ i.e. $h(w)h(w') = h(w \ast w')$. \newline (ii) Let us prove that, for all words $w,w'$ and all $n \in \mathbb{N}^{\ast}$, we have : \begin{multline} \label{eq:equation 4.10} -\comp^{\Lambda \Ad,\Ad} \bigg( (e_{0}^{n-1}e_{\xi}w) \text{ }\mathcyr{sh}\text{ } w' - w \text{ }\mathcyr{sh}\text{ } \shft_{\ast}(e_{0}^{n-1}e_{\xi'})w' \bigg) = \\ \sum_{t=0}^{n-1} \big( e_{0}^{t}e_{\xi} w \big) \text{ }\mathcyr{sh}\text{ } \big( (-1)^{n-t}\shft_{\ast}(e_{0}^{n-1-t}e_{\xi'})w' \big). \end{multline} For any $x \in \{0\} \cup \mu_{N}(K)$, let $\partial_{e_{x}}$, resp. $\tilde{\partial}_{e_{x}}$, be the unique linear maps $\mathcal{O}^{\mathcyr{sh}} \rightarrow \mathcal{O}^{\mathcyr{sh}}$ that send the empty word to $0$ and are defined on the other words by $\partial_{e_{x}}(e_{x'}w) =\tilde{\partial}_{e_{x}}(we_{x'}) = \left\{ \begin{array}{l} w \text{ if }x = x' \\ 0 \text{ if }x \not= x' \end{array} \right.$. For all $w \in \mathcal{O}^{\mathcyr{sh}}$, we have $w = \sum\limits_{x \in \{0\} \cup \mu_{N}(K)} e_{x}\partial_{e_{x}}(w) = \sum\limits_{x \in \{0\} \cup \mu_{N}(K)} \tilde{\partial}_{e_{x}}(w)e_{x}$, and $\partial_{e_{x}}$ and $\tilde{\partial}_{e_{x}}$ are derivations for the shuffle product. \newline\indent Let $R$ be the right-hand side in (\ref{eq:equation 4.10}). We have $R= e_{0} \partial_{e_{0}}R + \sum\limits_{\xi \in \mu_{N}(K)} e_{\xi}\partial_{e_{\xi}}(R)$. 
In order to prove (\ref{eq:equation 4.10}) it suffices to show the equalities : if $\xi \not= \xi'$, $\left\{\begin{array}{l}\partial_{e_{\xi}}(R) = w \text{ }\mathcyr{sh}\text{ } (-1)^{n}\shft_{\ast}(e_{0}^{n-1}e_{\xi'}w') \\ \partial_{e_{\xi'}}(R) = - (e_{0}^{n-1}e_{\xi}w)\text{ }\mathcyr{sh}\text{ } w'\end{array} \right.$ ; if $\xi=\xi'$, $\partial_{e_{\xi}}(R) = w \text{ }\mathcyr{sh}\text{ } (-1)^{n}\shft_{\ast}(e_{0}^{n-1}e_{\xi}w') - (e_{0}^{n-1}e_{\xi}w) \text{ }\mathcyr{sh}\text{ } w'$, and $\partial_{e_{0}}(R) = \Lambda R$. \newline The two first ones are clear ; let us show the last one. One has : \begin{multline} \label{eq:e0x} \partial_{e_{0}}(R) = \sum_{t=1}^{n-1} (e_{0}^{t-1}e_{\xi}w) \text{ }\mathcyr{sh}\text{ } \frac{1}{(\Lambda e_{0}-1)^{n-t}} e_{0}^{n-1-t} e_{\xi'}w' \\ + \sum_{t=0}^{n-2} (e_{0}^{t}e_{\xi}w) \text{ }\mathcyr{sh}\text{ } \frac{1}{(\Lambda e_{0}-1)^{n-t}} e_{0}^{n-2-t} e_{\xi'}w' + (e_{0}^{n-1}e_{\xi}w) \mathcyr{sh} \frac{\Lambda}{\Lambda e_{0}-1} e_{\xi'}w' . \end{multline} The sum of the two first terms of the right hand side of (\ref{eq:e0x}) equals $$ \sum_{t=0}^{n-2} (e_{0}^{t}e_{\xi}w) \text{ }\mathcyr{sh}\text{ } \bigg[1 + \frac{1}{\Lambda e_{0}-1}\bigg] \frac{1}{(\Lambda e_{0}-1)^{n-1-t}} e_{0}^{n-2-t}e_{\xi'}w' = \Lambda \sum_{t=0}^{n-2} (e_{0}^{t}e_{\xi}w) \text{ }\mathcyr{sh}\text{ } \frac{1}{(\Lambda e_{0}-1)^{n-t}} e_{0}^{n-1-t}e_{\xi'}w' $$ This and the third term of (\ref{eq:e0x}) are, respectively, the $0 \leqslant t\leqslant n-2$ terms and the $t=n-1$ term of $\Lambda R$. This proves (\ref{eq:equation 4.10}) which implies the result. \newline (iii) By \cite{I-3}, equation (2.1), $\circ_{\har}^{\smallint_{1,0}}$ is characterized by an equation which we can rewrite with the notation of this paper as $\comp^{\har,\Ad}(g \circ_{\Ad}^{\smallint_{1,0}} f) = g \circ_{\har}^{\smallint_{1,0}} \comp^{\har,\Ad}f$. This gives the result. \end{proof}
\begin{Definition} We call (\ref{eq: DS har int1,0 1}) the \emph{harmonic quasi-shuffle equation} and (\ref{eq: DS har int1,0 2}) the \emph{harmonic shuffle equation} of the framework $\int_{1,0}$ ; and we call the collection of (\ref{eq: DS har int1,0 1}) and (\ref{eq: DS har int1,0 2}) the \emph{harmonic double shuffle equations} of the framework $\int_{1,0}$. \newline Let $\DS_{\har}^{\smallint_{1,0}}$ be the affine ind-scheme defined by equations (\ref{eq: DS har int1,0 1}) and (\ref{eq: DS har int1,0 2}). \end{Definition}
\subsubsection{In the framework $\smallint$}
We construct the harmonic double shuffle equations in the framework of $\smallint$, i.e. by considering power series expansions of multiple polylogarithms.
\begin{Proposition} \label{lemma shuffle rien} (i) For any words $w,\tilde{w}$, we have \begin{equation} \label{eq: DS har int 1} \har_{\mathcal{P}^{\mathbb{N}}}(w)\har_{\mathcal{P}^{\mathbb{N}}}(\tilde{w}) = \har_{\mathcal{P}^{\mathbb{N}}}(w \ast_{\har} \tilde{w}) . \end{equation} \noindent (ii) For any words $w= \big( (n_{i})_{d};(\xi_{i})_{d} \big)$, $\tilde{w} =\big( (\tilde{n}_{i})_{d'};(\tilde{\xi}_{i})_{d'}\big)$, we have \begin{multline} \label{eq: DS har int 2} \har_{\mathcal{P}^{\mathbb{N}}}(w\text{ }\mathcyr{sh}\text{ }\tilde{w}) = \har_{\mathcal{P}^{\mathbb{N}}}\big( (\shft_{\ast}S_{Y})(\tilde{w}) w\big) \\ = \sum_{l_{1},\ldots,l_{d'} \in \mathbb{N}} \prod_{i=1}^{d'} {-\tilde{n}_{i} \choose l_{i}} (-1)^{\tilde{n}_{i}} \har_{\mathcal{P}^{\mathbb{N}}}\big( n_{1},\ldots,n_{d-1},n_{d}+\tilde{n}_{d'}+l_{d'},\tilde{n}_{d'-1}+l_{d'-1},\ldots,\tilde{n}_{1}+l_{1}; \xi_{1},\ldots,\xi_{d},\tilde{\xi}_{d'},\ldots,\tilde{\xi}_{1} \big) . \end{multline} \end{Proposition}
\begin{proof} (i) This amounts to the quasi-shuffle relation for prime weighted cyclotomic multiple harmonic sums, $\har_{p^{\alpha}}(w) \har_{p^{\alpha}}(w') = \har_{p^{\alpha}}(w \ast w')$, which is a consequence of the known double shuffle relations for multiple polylogarithms in two variables. \newline (ii) The shuffle equation for multiple polylogarithms in one variable gives $\Li[w\text{ }\mathcyr{sh}\text{ }\tilde{w}] = \Li[w] \Li[\tilde{w}]$. Let $m \in \mathbb{N}^{\ast}$ ; by equation (\ref{eq:multiple polylogarithms power series expansion}), for all $m' \in \{1,\ldots,m-1\}$, $$ \Li[\tilde{w}][z^{m-m'}] = \sum_{0<m_{1}<\ldots <m_{d} = m-m'} \frac{(\frac{\xi_{2}}{\xi_{1}})^{m_{1}} \ldots (\frac{1}{\xi_{d}})^{m_{d}}}{m_{1}^{n_{1}} \ldots m_{d}^{n_{d}}} = \sum_{m'=m'_{d}<\ldots<m'_{1}<m} \frac{(\frac{\xi_{2}}{\xi_{1}})^{m'-m'_{1}} \ldots (\frac{1}{\xi_{d}})^{m'-m'_{d}}}{(m'-m'_{d})^{n_{d}} \ldots (m'-m'_{1})^{n_{1}}} $$ and \begin{equation*} \label{eq:intermediate} (\Li[w]\Li[\tilde{w}])[z^{m}] = \sum_{0<m_{1}<\ldots<m_{d} < m' < m'_{d'}<\ldots<m'_{1}<m} \frac{\big( \frac{\xi_{2}}{\xi_{1}} \big)^{m_{1}} \ldots \big( \frac{\xi_{d+1}}{\xi_{d}} \big)^{m_{d}} \big( \frac{\xi_{d'+1}}{\xi_{d+1}} \big)^{m'} \big( \frac{\xi_{d'}}{\xi_{d+1}} \big)^{m'_{d'}} \ldots \big(\frac{\xi_{2}}{\xi_{1}} \big)^{m_{1}} } { m_{1}^{n_{1}}\ldots {m'}_{d}^{n_{d}} (m-{m'}_{d'})^{\tilde{n}_{d'}} \ldots (m-{m'}_{1})^{\tilde{n}_{1}}} . \end{equation*} Now assume that $m = p^{\alpha}$, and let $m'_{i} \in \{1,\ldots,p^{\alpha}-1\}$. 
We have $v_{p}(m'_{i})< v_{p}(p^{\alpha})$, thus $\displaystyle\frac{1}{(m'_{i}-p^{\alpha})^{\tilde{n}_{i}}}= {m'}_{i}^{-\tilde{n}_{i}} \sum\limits_{l\geqslant 0} {-\tilde{n}_{i} \choose l} \big( \frac{p^{\alpha}}{m'_{i}} \big)^{l} \in \mathbb{Z}_{p}$, whence : \begin{multline*} (\Li[w]\Li[\tilde{w}])[z^{p^{\alpha}}] = \\ \sum_{l_{1},\ldots,l_{d'} \in \mathbb{N}} \prod_{i=1}^{d'} {-\tilde{n}_{i} \choose l_{i}} (-1)^{\tilde{n}_{i}} \har_{p^{\alpha}}\big( n_{1},\ldots,n_{d-1},n_{d}+\tilde{n}_{d'}+l_{d'},\tilde{n}_{d'-1}+l_{d'-1},\ldots,\tilde{n}_{1}+l_{1} ; \xi_{1},\ldots,\xi_{d},\tilde{\xi}_{d'},\ldots,\tilde{\xi}_{1} \big) . \end{multline*} On the other hand, $w\text{ }\mathcyr{sh}\text{ } \tilde{w}$ is a linear combination of words whose rightmost letter is not $e_{0}$ and, by equation (\ref{eq:har et Li 2}), we have $$ \sum_{0<m'<p^{\alpha}} \Li[w\text{ }\mathcyr{sh}\text{ }\tilde{w}][m'] = \har_{p^{\alpha}}[w\text{ }\mathcyr{sh}\text{ }\tilde{w}] . $$ \noindent We deduce the result, given the definition of $\shft_{\ast}$ and $S_{Y}$ (Definition \ref{def SY}). \end{proof}
\begin{Definition} We call (\ref{eq: DS har int 1}) the \emph{harmonic quasi-shuffle equation} and (\ref{eq: DS har int 2}) the \emph{harmonic shuffle equation} of the framework $\int$ ; and we call the collection of (\ref{eq: DS har int 1}) and (\ref{eq: DS har int 2}) the \emph{harmonic double shuffle equations} of the framework $\int$. \newline Let $\DS_{\har}^{\smallint}$ be the affine ind-scheme defined by equations (\ref{eq: DS har int 1}) and (\ref{eq: DS har int 2}). \end{Definition}
\subsubsection{In the framework $\Sigma$}
Given that the power series expansion of multiple polylogarithms is written in terms of sums of series, we get directly the same harmonic double shuffle equations in the framework $\Sigma$. Indeed, it is known that the integral shuffle relation of cyclotomic multiple zeta values can be understood purely in terms of their formula as iterated series : it is a generalization of a proof which goes back to Euler, who defined multiple zeta values in depth $1$ and $2$ by the series formula in (\ref{eq:multizetas}), and found the double shuffle relations in this particular case. The same proof gives the shuffle relation for multiple harmonic sums : for $w=\big((n_{i})_{d};(\xi_{i})_{d+1}\big)$, $w'=\big(({n'}_{i})_{d};({\xi'}_{i})_{d+1}\big)$, $$ \har_{m}(w\text{ }\mathcyr{sh}\text{ }w') = \sum_{0<m_{1}<\ldots<m_{d} < m' < m'_{d'}<\ldots<m'_{1}<m} \frac{\big( \frac{\xi_{2}}{\xi_{1}} \big)^{m_{1}} \ldots \big( \frac{\xi_{d+1}}{\xi_{d}} \big)^{m_{d}} \big( \frac{\xi_{d'+1}}{\xi_{d+1}} \big)^{m'} \big( \frac{\xi_{d'}}{\xi_{d+1}} \big)^{m'_{d'}} \ldots \big(\frac{\xi_{2}}{\xi_{1}} \big)^{m_{1}} } { m_{1}^{n_{1}}\ldots {m'}_{d}^{n_{d}} (m-{m'}_{d'})^{\tilde{n}_{d'}} \ldots (m-{m'}_{1})^{\tilde{n}_{1}}} , $$ from which one can deduce the harmonic shuffle relation defined in \S3.3.2. As concerns the quasi-shuffle relation for cyclotomic multiple harmonic values, it is of course immediate in terms of series. Thus we can directly define :
\begin{Definition} Let $\DS_{\har}^{\Sigma} = \DS_{\har}^{\smallint}$. \end{Definition}
\subsubsection{Comparison between the results of the three frameworks}
We now show that the definitions of \S3.3.1 and \S3.3.2 are equivalent.
\begin{Lemma} \label{lemma for comparison of ds} Let $F$ be a function $\mathcal{O}^{\ast} \rightarrow K$ and let $\tilde{\imath}$ be a function $\mathcal{O}^{\ast} \rightarrow \mathcal{O}^{\ast}[[\Lambda]]$ satisfying $\tilde{\imath}(ab)= \tilde{\imath}(b)\tilde{\imath}(a)$ for all words $a,b$ in $\mathcal{O}^{\ast}$. The following statements are equivalent : \newline (i) $\forall n \in \mathbb{N}^{\ast}$, $\forall \xi \in \mu_{N}(K)$, $\forall w,w'$ words in $\mathcal{O}^{\ast}$, $F\big((e_{0}^{n-1}e_{\xi}w) \mathcyr{sh} w'\big) = F\big(w \mathcyr{sh} (\tilde{\imath}(e_{0}^{n-1}e_{\xi})w')\big)$ \newline (ii) $\forall u,w,w'$ words in $\mathcal{O}^{\ast}$, $F\big((uw) \mathcyr{sh} w'\big) = F\big(w \mathcyr{sh} (\tilde{\imath}(u)w')\big)$ \newline (iii) $\forall w,w'$ words in $\mathcal{O}^{\ast}$, $F(w\text{ }\mathcyr{sh}\text{ }w') = F(\tilde{\imath}(w) w')$. \end{Lemma}
\begin{proof} (i) $\Rightarrow$ (ii) : we write $u$ as a concatenation of words of the form $e_{0}^{n_{i}-1}e_{\xi_{i}}$ and we iterate (i). \newline (ii) $\Rightarrow$ (iii) : we take $w = \emptyset$. \newline (iii) $\Rightarrow$ (i) : we apply (iii) to each member of (i). \end{proof}
\begin{Proposition} We have $\DS_{\har}^{\smallint_{1,0}} = \DS_{\har}^{\smallint}$. \end{Proposition}
\begin{proof} The harmonic quasi-shuffle relations in $\DS_{\har}^{\smallint}$ and $\DS_{\har}^{\smallint_{1,0}}$, equations (\ref{eq: DS har int1,0 1}) and (\ref{eq: DS har int 1}), are identical. The equivalence between the harmonic shuffle equations (\ref{eq: DS har int1,0 2}) and (\ref{eq: DS har int 2}) follows from Lemma \ref{lemma for comparison of ds}. \end{proof}
\begin{Definition} We denote by
$\DS_{\har}$ the common value of $\DS_{\har}^{\smallint_{1,0}}$, $\DS_{\har}^{\smallint}$ and $\DS_{\har}^{\Sigma}$, and we call its equations the prime harmonic double shuffle equations. \end{Definition}
\begin{Remark} We see that comparing the results in the frameworks $\int_{1,0}$ and $\int$ requires a proof, whereas comparing the results in the frameworks $\int$ and $\Sigma$ is trivial, and this is because the power series expansion of multiple polylogarithms is expressed in terms of series. By contrast, in \cite{I-2} and \cite{I-3}, comparing the pro-unipotent harmonic actions $\circ_{\har}^{\smallint}$ and $\circ_{\har}^{\Sigma}$ required a proof, whereas comparing the pro-unipotent harmonic actions $\circ_{\har}^{\smallint_{1,0}}$ and $\circ_{\har}^{\smallint}$ was simple. \end{Remark}
We now write the ``overconvergent'' variants of the previous results.
\begin{Proposition} \label{prop other view on }
(i) ($\int_{1,0}$) $\Lambda$Ad$p$MZV$\mu_{N}^{\dagger}$'s satisfy equations obtained as remainders (in the sense of power series in $\Lambda$) of the equations of $\Lambda$Ad$p$MZV$\mu_{N}$'s. By taking $\Lambda=1$, we deduce equations satisfied by MHV$\mu_{N}^{\dagger}$'s.
\newline (ii) ($\int$) (a) For all words $w,w'$ on $e_{0\cup \mu_{N}}$ we have :
$$ \har_{p^{\alpha}}^{\dagger_{p,\alpha}}(w \text{ }\mathcyr{sh}\text{ }w') = \har_{p^{\alpha}}^{\dagger_{p,\alpha}}(w) +
\har_{p^{\alpha}}^{\dagger_{p,\alpha}}(w') + \har_{p^{\alpha}}(\shft_{\ast}(S_{Y}(w))w') . $$
(b) For any automorphism $\sigma$ of $\mathbb{P}^{1} - \{0,\mu_{N},\infty\}$ which fixes $0$, we have an equation satisfied by $\har_{p^{\alpha}}^{\dagger_{p,\alpha}}$ obtained by
$$ \Li_{p,\alpha}^{\dagger}(\sigma(z)) = \sigma_{\ast} \har_{p^{\alpha}}^{\dagger_{p,\alpha}}(z) . $$ \end{Proposition}
\begin{proof} (i) This is immediate by Definition \ref{def over adjoint}.
\newline (ii) (a) This follows from the shuffle relation $\Li_{p,\alpha}^{\dagger}[w\text{ }\mathcyr{sh}\text{ }w'] = \Li_{p,\alpha}^{\dagger}[w]\Li_{p,\alpha}^{\dagger}[w']$ specialized to the coefficient $[z^{p^{\alpha}}]$, combined with the differential equation (\ref{eq:horizontality equation}) characterizing $\Li_{p,\alpha}^{\dagger}$, Lemma \ref{lemma power series expansion Li dagger}, and the proof of Lemma \ref{lemma shuffle rien}.
\newline (b) This follows from the definition of $\Li_{p,\alpha}^{\dagger}$ in terms of the value of the Frobenius on the canonical de Rham path (\cite{I-1}, \S1) and the functoriality of the Frobenius. \end{proof}
We see that, as in \S3.3, the formulas for the harmonic shuffle relation in the framework $\int_{1,0}$ and in the framework $\int$ are different. We leave it to the reader to check that they are equivalent, following \S3.3.3. Using the framework $\Sigma$ here is beyond the scope of this paper : it requires the formulas for Ad$p$MZV$\mu_{N}$'s found in \cite{I-2}, and this will be the subject of \cite{II-2}.
\begin{Definition} Let $\DS_{\har}^{\dagger}$, resp. $\M_{\har}^{\dagger}$ be the affine ind-scheme defined by the equations obtained in Proposition \ref{prop other view on } as variants of those of $\DS_{\har}$, resp. $\M_{\har}$ defined in \S3 and \S4 respectively. \end{Definition}
We have canonical isomorphisms $\DS_{\har}^{\dagger} \simeq \DS_{\har}$, resp. $\M_{\har}^{\dagger} \simeq \M_{\har}$.
\begin{Remark} An extension of Proposition \ref{prop other view on } (ii), which would include a quasi-shuffle equation for $\har_{p^{\alpha}}^{\dagger_{p,\alpha}}$ and the equations obtained by the functoriality with respect to automorphisms of
$\mathcal{M}_{0,5}^{(N)}$, could be obtained by using the variant of $\Li_{p,\alpha}^{\dagger}$ on $\mathcal{M}_{0,5}^{(N)}$, defined by the Frobenius of $\pi_{1}^{\un,\crys}(\mathcal{M}_{0,5}^{(N)})$. Here, in simplicial coordinates, $\mathcal{M}_{0,5}^{(N)}$ is
$\{ (y_{1},y_{2}) \in (\mathbb{P}^{1} - \{0,1,\infty\})^{2} \text{ }|\text{ }\forall i \not= j, \forall \xi \text{ s.t. }\xi^{N}=1, y_{i} \not= \xi y_{j} \}$. We leave it to the reader. \end{Remark}
\subsection{A corollary of the shuffle equation : the (usual, adjoint and harmonic) reversal equations\label{paragraph reflexion}}
Let $f \in \Pi_{1,0}(K)$ ; since $f$ satisfies the shuffle equation, denoting by $S$ the antipode of the shuffle Hopf algebra, we have $\hat{S}^{\vee}(f) = f^{-1}$. By writing $f^{-1} = (1-(1-f))^{-1} = \sum\limits_{l\geqslant 0} (1-f)^{l}$, we deduce polynomial equations on the coefficients of $f$ :
\begin{equation} \label{eq: reflexion} \text{for all words }w,\text{ } f[S(w)] = \sum_{\substack{l\geqslant 0 \\ w_{1}\ldots w_{l}=w}} (-1)^{l}\prod_{i=1}^{l}f[w_{i}] . \end{equation}
\begin{Definition} \label{def reflexion} We call (\ref{eq: reflexion}) the reversal equation. \end{Definition}
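In the lowest weight, the reversal equation can be written out explicitly (a routine instance of (\ref{eq: reflexion}), for illustration). Take $w = e_{0}e_{1}$, so that $S(w) = (-1)^{2}e_{1}e_{0}$ ; the decompositions of $w$ into non-empty subwords are $w_{1} = e_{0}e_{1}$ ($l=1$) and $(w_{1},w_{2}) = (e_{0},e_{1})$ ($l=2$), whence
$$ f[e_{1}e_{0}] = -f[e_{0}e_{1}] + f[e_{0}]\,f[e_{1}] . $$
For $f = \Phi_{\KZ}$ with $N=1$, whose coefficients in weight one vanish, this reduces to $\Phi_{\KZ}[e_{1}e_{0}] = -\Phi_{\KZ}[e_{0}e_{1}]$, in agreement with the shuffle relation $f[e_{0}]f[e_{1}] = f[e_{0}e_{1}] + f[e_{1}e_{0}]$.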
Applying it to $f=\Phi_{\KZ}$, resp. $f=\Phi_{p,\alpha}$, gives a family of polynomial equations on MZV$\mu_{N}$'s, resp. $p$MZV$\mu_{N}$'s. Our terminology ``reversal equation'' in Definition \ref{def reflexion} is motivated by Rosen's ``asymptotic reversal theorem'' \cite{Rosen}, which we recover below as a particular case of Proposition \ref{harmonic reflexion} and interpret in terms of $\pi_{1}^{\un}(\mathbb{P}^{1} - \{0,1,\infty\})$. We are going to see that the reversal equation has adjoint and harmonic analogues, which are given by relatively simple formulas and which are quite natural.
\begin{Proposition} \label{reflexion adjoint} Let $\psi \in K\langle \langle e_{0\cup \mu_{N}}\rangle\rangle$ satisfying the shuffle equation modulo products (equation (\ref{eq:ds adjoint 1})). We have : \begin{multline} \label{eq:reflexion adjoint} \text{for any positive integers } n_{i}, {n'}_{i'} \text{ and $N$-th roots of unity } \xi_{i},{\xi'}_{i}, \\ \sum_{\substack{l \geqslant 0 \\
l_{i} \geqslant 0\ (1 \leqslant i \leqslant d)\\
l+l_{1}+\ldots+l_{d}=L}} \prod_{i=1}^{d} {-n_{i} \choose l_{i}} \psi\big( \big( (n'_{i})_{d'},(n_{i}+l_{i})_{d};({\xi'}_{i})_{d'},(\xi_{i})_{d},\xi \big),l\big) \\ = \sum_{\substack{l' \geqslant 0 \\
{l'}_{i} \geqslant 0\ (1 \leqslant i \leqslant d') \\
l'+{l'}_{1}+\ldots+{l'}_{d'}=L}} \prod_{i=1}^{d'} {-{n'}_{i} \choose {l'}_{i}} \psi\big( \big( (n_{i})_{d},({n'}_{i}+{l'}_{i})_{d'};\xi,({\xi}_{i})_{d},({\xi'}_{i})_{d'}\big),l'\big) . \end{multline} \end{Proposition}
\begin{proof} The result amounts to the following equality : \begin{multline} \label{eq: 4 2 5} (-1)^{\sum\limits_{i=1}^{d}n_{i}} \psi[ \frac{1}{1-\Lambda e_{0}}e_{\xi} \frac{e_{0}^{n_{1}-1}}{(1-\Lambda e_{0})^{n_{1}}}e_{\xi_{1}} \ldots \frac{e_{0}^{n_{d}-1}}{(1-\Lambda e_{0})^{n_{d}}}e_{\xi_{d}}e_{0}^{{n'}_{d'}-1}e_{{\xi'}_{d'}}\ldots e_{0}^{{n'}_{1}-1}e_{{\xi'}_{1}}] \\ =(-1)^{\sum\limits_{i'=1}^{d'}{n'}_{i'}} \psi \big[\frac{1}{1-\Lambda e_{0}}e_{{\xi'}_{1}} \frac{e_{0}^{{n'}_{1}-1}}{(1-\Lambda e_{0})^{{n'}_{1}}}e_{{\xi'}_{2}} \ldots e_{{\xi'}_{d'}} \frac{e_{0}^{{n'}_{d'}-1}}{(1-\Lambda e_{0})^{{n'}_{d'}}}e_{\xi_{d}}e_{0}^{n_{d}-1}\ldots e_{\xi_{1}}e_{0}^{n_{1}-1}e_{\xi} \big] . \end{multline} (a) By the hypothesis, $\psi$ is primitive for the shuffle coproduct. Thus, let $S$ be the antipode of $\mathcal{O}^{\mathcyr{sh}}$ ; we have $\hat{S}^{\vee}(\psi)=-\psi$ ; and the first line of (\ref{eq: 4 2 5}) is equal to \begin{equation} \label{eq:this} (-1)^{\sum\limits_{i=1}^{d'}{n'}_{i}} \psi\big[e_{{\xi'}_{1}}e_{0}^{{n'}_{1}-1}\ldots e_{{\xi'}_{d'}}e_{0}^{{n'}_{d'}-1}e_{{\xi}_{d}} \frac{e_{0}^{n_{d}-1}}{(1+\Lambda e_{0})^{n_{d}}} \ldots e_{\xi_{1}} \frac{e_{0}^{n_{1}-1}}{(1+\Lambda e_{0})^{n_{1}}}e_{\xi}\frac{1}{1+\Lambda e_{0}}\big] . \end{equation} (b) Let $f \in K \langle \langle e_{0\cup \mu_{N}} \rangle\rangle$ such that, for all words $w$, we have $f[w\text{ }\mathcyr{sh}\text{ }e_{0}] = f[w]f[e_{0}]$ ; let $T,U_{1},\ldots,U_{d''}$ be formal variables ; we have : \begin{multline} \label{eq: shuffle coefficients e0 TU} f[\frac{e_{0}^{{n''}_{d''}-1}}{(1-U_{d''}e_{0})^{{n''}_{d''}}}e_{{\xi''}_{d''}} \ldots \frac{e_{0}^{{n''}_{1}-1}}{(1-U_{1}e_{0})^{{n''}_{1}}}e_{{\xi''}_{1}} \frac{1}{1 - Te_{0}}] \\ = f[\frac{e_{0}^{{n''}_{d''}-1}}{(1 - (U_{d''}-T)e_{0})^{{n''}_{d''}}}e_{{\xi''}_{d''}}\ldots \frac{e_{0}^{{n''}_{1}-1}}{(1 - (U_{1}-T)e_{0})^{{n''}_{1}}} e_{{\xi''}_{1}}] e^{f[e_{0}]T} . 
\end{multline} We apply this to the particular case where $f[e_{0}]=0$ and where the first line of (\ref{eq: shuffle coefficients e0 TU}) is (\ref{eq:this}). In that case, the second line of (\ref{eq: shuffle coefficients e0 TU}) becomes equal to the second line of (\ref{eq: 4 2 5}). \end{proof}
\begin{Definition} We call (\ref{eq:reflexion adjoint}) the adjoint reversal equation. \end{Definition}
\begin{Proposition} \label{harmonic reflexion} One can prove, in each of the three frameworks $\int_{1,0}$, $\int$ and $\Sigma$, that : \begin{equation} \label{eq:first eq of corollary} \text{for all words }w,w',\text{ } \text{har}_{\mathcal{P}^{\mathbb{N}}}( (\shft_{\ast}S_{Y})(w')w) = \text{har}_{\mathcal{P}^{\mathbb{N}}}( (\shft_{\ast}S_{Y})(w)w') . \end{equation} \end{Proposition}
\begin{proof} (i) In the framework $\int_{1,0}$ : we take $f=\sum\limits_{\xi \in \mu_{N}(K)} \xi^{-p^{\alpha}} {\Phi_{p,\alpha}^{(\xi)}}^{-1}e_{\xi}\Phi_{p,\alpha}^{(\xi)}$ and $\Lambda=1$ in equation (\ref{eq: 4 2 5}) and we use equation (\ref{eq:formula for n=1}). \newline (ii) In the framework $\smallint$ : this is a direct consequence of the harmonic shuffle equation (equation \ref{eq: DS har int 2}). \newline (iii) In the framework $\Sigma$ : this is an easy generalization of Rosen's proof of the ``asymptotic reversal theorem'' \cite{Rosen} which corresponds to the $N=1$, $\alpha=1$ and $w'=\emptyset$ case. \end{proof}
\begin{Definition} We call (\ref{eq:first eq of corollary}) the harmonic reversal equation. \end{Definition}
We also note that, by choosing $w'$ to be the empty word in (\ref{eq:first eq of corollary}), we deduce the simpler equation \begin{equation} \label{eq:the last equation} \text{ for all words }w,\text{ } \text{har}_{p^{\alpha}}((\shft_{\ast}S_{Y})(w)) = \text{har}_{p^{\alpha}}(w) . \end{equation}
\section{Around associator equations and Kashiwara-Vergne equations\label{double shuffle}}
We briefly review associator equations and Kashiwara-Vergne equations and the known relation between them (\S4.1) ; then we explain that the Kashiwara-Vergne equations can be formulated more naturally as a property of adjoint MZV's than of MZV's (\S4.2) ; and we describe equations satisfied by multiple harmonic values, and more generally by harmonic multiple polylogarithms, which are related to the Kashiwara-Vergne equations (\S4.3). In this section we take $N=1$ most of the time, because to our knowledge there is no cyclotomic Kashiwara-Vergne theory.
\subsection{Review on associators and Kashiwara-Vergne equations}
\subsubsection{Associators}
The notion of associators was introduced in \cite{Drinfeld}. Let $k$ be a field of characteristic $0$ and let $\mu \in k$. By \cite{Furusho pentagon}, the set of associators $M_{\mu}(k)$ is the set of elements $\Phi \in \Pi(k)$ (we are using Notation \ref{la premiere notation}, and assuming $N=1$), such that $\mu=\pm \sqrt{24\Phi[e_{0}e_{1}]}$ and $$ \Phi(e_{12},e_{23}+e_{24})\Phi(e_{13}+e_{23},e_{34}) = \Phi(e_{23},e_{34}) \Phi(e_{12}+e_{13},e_{24}+e_{34}) \Phi(e_{12},e_{23}) $$ where $e_{ij}$ are the generators of $\Lie(\pi_{1}^{\un,\dR}(\mathcal{M}_{0,5},\omega_{\dR}))$ (\S2.1.3). The definition in \cite{Drinfeld} uses several equations : (2.12), (2.13) and a rescaled version of equation (5.3) of \cite{Drinfeld}. \newline\indent On the other hand one has the scheme $\GRT_{1}$ defined in \cite{Drinfeld} (equations (5.12), (5.13), (5.14), (5.15) of \cite{Drinfeld}) ; by \cite{Drinfeld}, Proposition 5.9, $\GRT_{1}$ is isomorphic to $M_{0}$. We note that equation (5.15) of \cite{Drinfeld} is \begin{equation} \label{eq:sum of residues} e_{0} + \phi(e_{0},e_{1})^{-1}e_{1}\phi(e_{0},e_{1}) + \phi(e_{0},e_{\infty})^{-1}e_{\infty}\phi(e_{0},e_{\infty}) = 0 \end{equation} where $e_{0}+e_{1}+e_{\infty}=0$ : the $e_{x}$'s generate $\Lie(\Pi)$. \newline\indent The Ihara product (\ref{eq:Ihara}) restricts to a group law on $\GRT_{1}$ (\cite{Drinfeld}, equation (5.16)) and to an action of $\GRT_{1}$ on $M_{\mu}$ which makes $M_{\mu}$ into a $\GRT_{1}$-torsor (\cite{Drinfeld}, Proposition 5.5). \newline\indent The generating series of multiple zeta values, $\Phi_{\KZ} \in \Pi_{1,0}(\mathbb{R})$ (equation (\ref{eq:Phi KZ})), was first introduced in \cite{Drinfeld} \S2, and we have $\Phi_{\KZ} \in M_{2i\pi}(\mathbb{R})$ by \cite{Drinfeld}. 
By \cite{Unver Drinfeld}, combined with the fact that $\Phi_{p,1}$ is in the commutator subgroup of $\Pi_{1,0}(\mathbb{Q}_{p})$, proved in \cite{Furusho 2}, \S3, we have $\Phi_{p}^{\KZ} \in \GRT_{1}(\mathbb{Q}_{p})$ (\cite{Furusho 2}, proof of Proposition 3.1). By the relations of iteration of the Frobenius (\cite{I-3}, equations (1.11), (1.12), (1.13) and Proposition 1.5.2), which involve the Ihara product, this implies that $\Phi_{p,\alpha} \in \GRT_{1}(\mathbb{Q}_{p})$ for all $\alpha \in \mathbb{Z} \cup \{\pm \infty\} - \{0\}$.
\subsubsection{Review on Kashiwara-Vergne equations, according to Alekseev, Enriquez and Torossian \cite{AT}, \cite{AET}}
Let $k$ be a field of characteristic $0$. Let $\lie_{n}$ be the free Lie algebra over $k$ on $n$ variables $x_{1},\ldots,x_{n}$. Let $\widehat{\lie}_{n}$ be its degree completion (where the $x_{i}$'s have degree 1). \newline\indent For any $u_{1},\ldots,u_{n}$ in $\widehat{\lie}_{n}$, let $[[u_{1},\ldots,u_{n}]]$ be the derivation of $\lie_{n}$ defined by $x_{i} \mapsto [x_{i},u_{i}]$ for all $i$. Such a derivation is called tangential and the set of tangential derivations is denoted by $\tder_{n}$. \newline\indent For any $U_{1},\ldots,U_{n}$ in $\exp(\widehat{\lie}_{n})$, let $[[U_{1},\ldots,U_{n}]]$ be the automorphism of $\widehat{\lie}_{n}$ defined by $x_{i} \mapsto U_{i}x_{i}U_{i}^{-1}$ for all $i$. Such an automorphism is called tangential and the set of tangential automorphisms is denoted by $\TAut_{n}$. \newline\indent Let $A_{n}$ be the universal enveloping algebra of $\lie_{n}$ and $T_{n}=A_{n}/[A_{n},A_{n}]$. The image of an element $S$ by the map $A_{n} \rightarrow T_{n}$ is denoted by $\langle S \rangle$. Let $\partial_{k} : A_{n} \rightarrow A_{n}$ be defined by $x= x_{0}+\sum_{k=1}^{n} \partial_{k}(x)x_{k}$ with $x_{0} \in k$. Let $j : \tder_{n} \rightarrow \hat{T}_{n}$, $[[u_{1},\ldots,u_{n}]] \mapsto \langle \sum_{k=1}^{n} x_{k} \partial_{k}(u_{k})\rangle$. It integrates into $J: \TAut_{n} \rightarrow \hat{T}_{n}$ (\cite{AT}, Proposition 5.1). \newline\indent Below, let $F_{2}(k)$ be the pro-unipotent completion of the free group on two generators $X,Y$, and let $\sim$ mean ``is conjugated to''. Following \cite{AET} \S2.1, let \begin{multline*} \Sol \KV =
\\ \{ \mu : F_{2}(k) \buildrel \sim \over \longrightarrow \exp(\widehat{\lie}_{2}) \text{ }|\text{ }\mu(X) \sim e^{x},\mu(Y) \sim e^{y}, \mu(XY)=e^{x+y}, \exists r \in u^{2}k[[u]], j(\mu) = \langle r(x+y)-r(x)-r(y)\rangle \} \end{multline*} where $r$ is uniquely determined by $\mu$. Following \cite{AET}, \S2.2, let
$$ \KRV = \{ a \in \TAut_{2} \text{ }|\text{ }a(x) \sim x,\text{ } a(y) \sim y,\text{ } a(x+y)=x+y, \text{ } \exists s \in u^{2}k[[u]], J(a) = \langle s(x+y)-s(x)-s(y)\rangle \} . $$ \indent By \cite{AET}, Theorem 2.1, the Ihara product (\ref{eq:Ihara}) makes $\KRV$ into a group and $\Sol \KV$ into a torsor under that group, and there is a morphism of torsors $M_{1}(K) \rightarrow \Sol\KV(K)$, which sends $\Phi$ to \begin{equation} \label{eq:XY} \mu_{\Phi} : \begin{array}{l} X \mapsto \Phi(x,-x-y)^{-1}e^{x}\Phi(x,-x-y) \\ Y \mapsto e^{-(x+y)/2}\Phi(y,-x-y)^{-1}e^{y} \Phi(y,-x-y)e^{(x+y)/2} \end{array} . \end{equation} The element $r$ associated with $\mu_{\Phi}$ is equal to $-\log(\Gamma_{\Phi})$ where $\Gamma_{\Phi}= \exp \big(\sum\limits_{n\geq 2} \frac{(-1)^{n}}{n} \zeta_{\Phi}(n)u^{n} \big)$ (\cite{AET}, Proposition 2.2). We note that $\zeta_{\Phi}(2n) = -\frac{1}{2}\frac{B_{2n}}{(2n)!}$ for all $n$, which is independent of $\Phi$.
\subsection{The Kashiwara-Vergne equations as a property of adjoint multiple zeta values}
We restrict the study to $N=1$ because to our knowledge there is no cyclotomic Kashiwara-Vergne theory. There is a cyclotomic associator theory \cite{Enriquez}.
We are going to see that the Kashiwara-Vergne equations can be naturally viewed as a property of adjoint MZV's rather than as a property of MZV's : this gives much simpler equations. It also yields an analogy between the passage from associator equations to Kashiwara-Vergne equations constructed in \cite{AET} and \cite{AT} and the passage from double shuffle equations to adjoint double shuffle equations that we have constructed in \S3.
In particular, our question \ref{question} is an analogue of a conjecture of Alekseev-Torossian \cite{AT} which compares Drinfeld associators and solutions to the Kashiwara-Vergne problem.
\subsubsection{In the $p$-adic case}
We consider that the Kashiwara-Vergne equations are the adjoint version of associator equations. Let us rewrite them - or rather the equations of $\KRV$ - in terms of Ad$p$MZV's. By the above discussion, we consider the following equation : \begin{equation} \label{eq:KV} \langle j(\mu_{\Phi_{p,\alpha}})\rangle = \langle \log(\Gamma_{\Phi_{p,\alpha}}(x)) + \log(\Gamma_{\Phi_{p,\alpha}}(y)) - \log(\Gamma_{\Phi_{p,\alpha}}(x+y)) \rangle . \end{equation}
Moreover, by the isomorphism $\lie_{2} \simeq \Lie \Pi_{0,0}$ given by $(x,y) \leftrightarrow (e_{1},e_{\infty})$ where $e_{0}+e_{1}+e_{\infty}=0$, $\mu_{\Phi_{p,\alpha}}$ sends $\left\{ \begin{array}{l} e_{0} \mapsto e_{0} \\ e_{1} \mapsto \Phi_{p,\alpha}(e_{0},e_{1})^{-1} e_{1} \Phi_{p,\alpha}(e_{0},e_{1}) \\ e_{\infty} \mapsto \Phi_{p,\alpha}(e_{0},e_{\infty})^{-1} e_{\infty} \Phi_{p,\alpha}(e_{0},e_{\infty}) \end{array} \right.$. This recovers equation (\ref{eq:sum of residues}). \newline This is obtained by applying $\underset{\mu \rightarrow 0}{\lim} \frac{1}{\mu} \frac{d}{d\mu}$ to the automorphism $\left\{ \begin{array}{l} e^{e_{1}} \mapsto \Phi(e_{0},e_{1})^{-1} e^{\mu e_{1}} \Phi(e_{0},e_{1}) \\ e^{e_{\infty}} \mapsto e^{\frac{\mu}{2}e_{0}} \Phi(e_{0},e_{\infty})^{-1} e^{\mu e_{\infty}} \Phi(e_{0},e_{\infty}) e^{-\frac{\mu}{2} e_{0}} \end{array} \right.$ obtained by rescaling (\ref{eq:XY}) : this is the variant of $\Sol KV$ and of the map $M_{1} \rightarrow \Sol \KV$ obtained by choosing $M_{\mu}$ instead of $M_{1}$.
\begin{Proposition} \label{KV explicit}The equation (\ref{eq:KV}) amounts to explicit linear equations on adjoint $p$MZV's. \end{Proposition}
\begin{proof} Equation (\ref{eq:KV}) amounts to saying that $$ j(\mu_{\Phi}) - \log(\Gamma_{\Phi}(e_{1})) - \log(\Gamma_{\Phi}(e_{\infty})) + \log(\Gamma_{\Phi}(e_{1}+e_{\infty})) $$ has image zero by the quotient map $k\langle\langle e_{1},e_{\infty}\rangle\rangle \rightarrow k \langle\langle e_{1},e_{\infty} \rangle\rangle / [k \langle\langle e_{1},e_{\infty} \rangle\rangle,k \langle\langle e_{1},e_{\infty} \rangle\rangle]$. \newline\indent Let $\tilde{\partial}_{e_{1}}^{(e_{1},e_{\infty})}, \tilde{\partial}_{e_{\infty}}^{(e_{1},e_{\infty})} : k\langle \langle e_{1},e_{\infty}\rangle\rangle \rightarrow k\langle\langle e_{1},e_{\infty} \rangle\rangle$ be defined by, for any word $w$ on the alphabet $\{e_{1},e_{\infty}\}$, $w = \tilde{\partial}_{e_{1}}^{(e_{1},e_{\infty})}(w)e_{1} + \tilde{\partial}_{e_{\infty}}^{(e_{1},e_{\infty})}(w)e_{\infty}$. \newline One can compute $j(\mu_{\Phi})$ as follows. We have \begin{equation} \label{eq:jmuPhi} j(\mu_{\Phi}) = e_{1} \tilde{\partial}_{e_{1}}^{(e_{1},e_{\infty})} \big(\Phi^{-1}e_{1}\Phi\big)(e_{0},e_{1}) + e_{\infty}\tilde{\partial}_{e_{\infty}}^{(e_{1},e_{\infty})}\big(\Phi^{-1}e_{1}\Phi\big)(e_{0},e_{\infty}) \end{equation} and the right-hand side of (\ref{eq:jmuPhi}) can be expressed using what follows. Let $w(e_{0},e_{1}) = e_{0}^{n_{d}-1}e_{1}\ldots e_{1}e_{0}^{n_{0}-1}$. \newline\indent If $n_{0}\geq 2$ we have $\left\{ \begin{array}{l} \tilde{\partial}_{e_{1}}^{(e_{1},e_{\infty})}(w(e_{1},e_{\infty})) = e_{0}^{n_{d}-1}e_{1}\ldots e_{1}e_{0}^{n_{0}-1}e_{1}(-e_{1}) \\ \tilde{\partial}_{e_{\infty}}^{(e_{1},e_{\infty})}(w(e_{1},e_{\infty})) = e_{0}^{n_{d}-1}e_{1}\ldots e_{1}e_{0}^{n_{0}-1}e_{1}(-e_{\infty}) \end{array} \right.$. \newline\indent If $n_{0}=1$, we have $\left\{ \begin{array}{l} \tilde{\partial}_{e_{1}}^{(e_{1},e_{\infty})}(w(e_{1},e_{\infty})) = e_{0}^{n_{d}-1}e_{1}\ldots e_{1}e_{0}^{n_{0}-1} \\ \tilde{\partial}_{e_{\infty}}^{(e_{1},e_{\infty})}(w(e_{1},e_{\infty})) = 0 \end{array} \right. 
$ \newline\indent On the other hand, $\log(\Gamma_{\Phi}(e_{1})) + \log(\Gamma_{\Phi}(e_{\infty})) - \log(\Gamma_{\Phi}(e_{1}+e_{\infty}))$ is equal to : $$ \sum_{n\geqslant 2} \frac{(-1)^{n}}{n}\zeta_{\Phi}(n) ( e_{1}^{n} + e_{\infty}^{n}-(e_{1}+e_{\infty})^{n}) = \sum_{n\geqslant 2} \frac{(-1)^{n}\zeta_{\Phi}(n)}{n} e_{1}^{n} + \sum_{n\geqslant 2} \frac{\zeta_{\Phi}(n)}{n} \big( (e_{0}+e_{1})^{n} - e_{0}^{n} \big) . $$ \indent We have a basis of $k\langle \langle e_{1},e_{\infty}\rangle\rangle$ formed by the words on the alphabet $\{e_{1},e_{\infty}\}$. It is sent to a generating family of $k\langle \langle e_{1},e_{\infty}\rangle\rangle/[k\langle \langle e_{1},e_{\infty}\rangle\rangle,k\langle \langle e_{1},e_{\infty}\rangle\rangle]$ from which we can extract a basis. A simple computation gives a partition of the set of words on $\{e_{1},e_{\infty}\}$ according to their image in $k\langle \langle e_{1},e_{\infty}\rangle\rangle/[k\langle \langle e_{1},e_{\infty}\rangle\rangle,k\langle \langle e_{1},e_{\infty}\rangle\rangle]$. \newline\indent Finally we use the isomorphism $k\langle \langle e_{0},e_{1}\rangle\rangle \simeq k\langle \langle e_{1},e_{\infty}\rangle\rangle$, $f(e_{0},e_{1}) \mapsto f(e_{1},e_{\infty})$ defined by $e_{\infty}=-e_{0}-e_{1}$. \end{proof}
\begin{Remark} Equation (\ref{eq:sum of residues}) can be regarded as a part of the Kashiwara-Vergne equations. It also amounts to an equation on $p$MZV's : \begin{equation} \label{eq:KV dim 1} \forall w,\text{ } \zeta_{p,\alpha}^{\Ad}(w) + \zeta_{p,\alpha}^{\Ad}(w(e_{0}-e_{1},-e_{1})) = 0 . \end{equation} Indeed, we have $(\phi^{-1}e_{1}\phi)(e_{0},e_{\infty}) = \sum\limits_{w\text{ word on }\{e_{0},e_{1}\}} (\phi^{-1}e_{1}\phi)[w(e_{0}-e_{1},-e_{1})]w$. \end{Remark}
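The coefficient identity used in this remark says that the substitution $e_{0}\mapsto e_{0}$, $e_{1}\mapsto -e_{0}-e_{1}$ is transpose, in the basis of words, to the substitution $e_{0}\mapsto e_{0}-e_{1}$, $e_{1}\mapsto -e_{1}$. This can be checked numerically on truncated noncommutative polynomials; the following Python sketch (our own encoding of words as tuples of $0$'s and $1$'s, not part of the text's formal frameworks) verifies it up to degree $4$:

```python
from itertools import product
from random import randint

def subst(f, img):
    # apply a letter substitution (each letter mapped to a degree-one polynomial)
    # to a noncommutative polynomial f, encoded as {word tuple: coefficient}
    out = {}
    for word, c in f.items():
        terms = {(): c}
        for letter in word:
            new = {}
            for w, a in terms.items():
                for l, b in img[letter].items():
                    new[w + l] = new.get(w + l, 0) + a * b
            terms = new
        for w, a in terms.items():
            out[w] = out.get(w, 0) + a
    return out

e0, e1 = (0,), (1,)
S = {0: {e0: 1}, 1: {e0: -1, e1: -1}}   # e_0 -> e_0,       e_1 -> -e_0 - e_1
T = {0: {e0: 1, e1: -1}, 1: {e1: -1}}   # e_0 -> e_0 - e_1, e_1 -> -e_1

# a random noncommutative polynomial f in e_0, e_1 of degree <= 4
f = {w: randint(-5, 5) for n in range(5) for w in product((0, 1), repeat=n)}
Sf = subst(f, S)
for n in range(5):
    for w in product((0, 1), repeat=n):
        Tw = subst({w: 1}, T)
        pairing = sum(f.get(v, 0) * a for v, a in Tw.items())
        assert Sf.get(w, 0) == pairing   # coefficient of w in S(f) = f paired with T(w)
print("transpose identity verified up to degree 4")
```

Since both substitutions act letter by letter, the transpose property reduces to the degree-one case, which the check confirms word by word.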
\subsubsection{In the complex case}
The automorphism of $\Pi_{0,0}(\mathbb{C})$ which sends $(e^{e_{0}},e^{e_{1}}) \mapsto (e^{2i\pi e_{0}}, \Phi_{\KZ}^{-1}e^{2i\pi e_{1}}\Phi_{\KZ})$ satisfies the Kashiwara-Vergne equations rescaled by $\tau(2\pi i)$ where $\tau$ is as in equation (\ref{eq:tau}). This can be easily generalized to any $N$, using \cite{AKKN}. The computations of \S4 can be repeated with these equations instead of the equations of the group $\KRV$. As concerns the equations of dimension 1, the equation $e_{0} + (\Phi_{\KZ}^{-1}e_{1}\Phi_{\KZ})(e_{0},e_{1}) + (\Phi_{\KZ}^{-1}e_{1}\Phi_{\KZ})(e_{0},e_{\infty}) \equiv 0 \mod (\zeta(2))$ comes from $$ e^{2i\pi e_{0}} . \big( \Phi(e_{0},e_{1})^{-1} e^{2i\pi e_{1}} \Phi(e_{0},e_{1}) \big). \big( e^{\frac{\mu}{2}e_{0}} \Phi(e_{0},e_{\infty})^{-1} e^{2i\pi e_{\infty}} \Phi(e_{0},e_{\infty}) e^{-\frac{\mu}{2} e_{0}} \big) = 1, $$ with $\mu = 2i\pi$. We leave the details to the reader.
\subsection{Related properties of harmonic multiple polylogarithms}
We find properties of multiple harmonic values and harmonic multiple polylogarithms, in the three frameworks $\int_{1,0}$, $\int$ and $\Sigma$, which are related to the previous considerations. We prove that the equations arising from $\pi_{1}^{\un}(\mathbb{P}^{1} - \{0,1,\infty\})$ in these three frameworks are equivalent.
\subsubsection{In the framework $\int_{1,0}$}
\begin{Proposition} \label{harmonic duality DR} (i) $\Phi^{-1}e_{1}\Phi$ satisfies equation (\ref{eq:KV dim 1}) if and only if $h = \comp^{\Lambda \Ad,\Ad} \big((-1)^{\depth}\Phi^{-1}e_{1}\Phi\big)$ satisfies $$ \forall w,\text{ } h( w(e_{0}+e_{1},-e_{1})) = - \sum_{d'\geqslant 1, \text{ }z = e_{0}^{t_{d'}-1}e_{1}\ldots e_{0}^{t_{1}-1}e_{1}} (-1)^{d'} h(z.w) . $$ (ii) $\Phi^{-1}e_{1}\Phi$ satisfies equation (\ref{eq:KV}) if and only if $\comp^{\Lambda\Ad,\Ad}(\Phi^{-1}e_{1}\Phi)$ satisfies certain equations on $\Lambda$-adjoint $p$MZV's. \end{Proposition}
Of course, in this statement, we can replace $\comp^{\Lambda \Ad,\Ad}$ by $\comp^{\har,\Ad}$, and $\Lambda$-adjoint $p$MZV's by MHV's.
\begin{proof} (i) We have, for all words $w$ : $(1-\Lambda(e_{0}+e_{1}))^{-1}e_{1}w = (1-\Lambda e_{0})^{-1}e_{1}w + (1-\Lambda e_{0})^{-1}\Lambda e_{1}(1-\Lambda (e_{0}+e_{1}))^{-1}e_{1}w$. Indeed, we have $1 = (1 - \Lambda e_{0} - \Lambda e_{1} )(1 - \Lambda e_{0} - \Lambda e_{1} )^{-1} = (1 - \Lambda e_{0} )(1 - \Lambda e_{0} - \Lambda e_{1} )^{-1} - \Lambda e_{1} (1 - \Lambda e_{0} - \Lambda e_{1} )^{-1}$. Left multiplication by $(1 - \Lambda e_{0})^{-1}$ gives $(1-\Lambda(e_{0}+e_{1}))^{-1} = (1-\Lambda e_{0})^{-1} + (1-\Lambda e_{0})^{-1}\Lambda e_{1}(1-\Lambda (e_{0}+e_{1}))^{-1}$. Whence the equality. This implies the result. \newline (ii) Follows from the definitions and from the translation of equation (\ref{eq:KV}) in the proof of Proposition \ref{KV explicit}. \end{proof}
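The geometric-series identity at the heart of the proof above holds verbatim for matrices (whenever the inverses exist), which gives a quick numerical sanity check; the $2\times 2$ matrices standing for $e_{0}$, $e_{1}$ and the value of $\Lambda$ below are arbitrary choices of ours:

```python
from fractions import Fraction as F

def mat(*rows): return [list(map(F, r)) for r in rows]
def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
def add(X, Y): return [[X[i][j] + Y[i][j] for j in range(2)] for i in range(2)]
def scal(c, X): return [[c * X[i][j] for j in range(2)] for i in range(2)]
def inv2(X):
    # exact inverse of a 2x2 matrix over the rationals
    det = X[0][0] * X[1][1] - X[0][1] * X[1][0]
    return [[X[1][1] / det, -X[0][1] / det], [-X[1][0] / det, X[0][0] / det]]

I = mat([1, 0], [0, 1])
A = mat([2, 1], [0, 3])   # stands for e_0
B = mat([1, 4], [2, 1])   # stands for e_1
L = F(1, 7)               # stands for Lambda

lhs = inv2(add(I, scal(-L, add(A, B))))        # (1 - Lambda(e_0+e_1))^{-1}
ra = inv2(add(I, scal(-L, A)))                 # (1 - Lambda e_0)^{-1}
rhs = add(ra, mul(ra, mul(scal(L, B), lhs)))   # + (1-L e_0)^{-1} L e_1 (1-L(e_0+e_1))^{-1}
assert lhs == rhs
print("resolvent identity verified")
```

The check mirrors the proof exactly: multiplying $1 = (1-\Lambda e_{0} - \Lambda e_{1})(1-\Lambda(e_{0}+e_{1}))^{-1}$ on the left by $(1-\Lambda e_{0})^{-1}$.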
\begin{Definition} Let $\M_{\har}$ be the ind-scheme defined by the equations among MHV's, resp. $\Lambda$-adjoint $p$MZV's, obtained in Proposition \ref{harmonic duality DR}. \end{Definition}
\subsubsection{In the framework $\int$ \label{pre-associator paragraph}}
For any $n \geqslant 4$, any automorphism of $\mathcal{M}_{0,n}$ induces by functoriality an automorphism of $\pi_{1}^{\un,\dR}(\mathcal{M}_{0,n})$ equipped with $\nabla_{\KZ}$. The functoriality of $\nabla_{\KZ}$ gives a relation of the type \begin{equation} \label{eq: LtildeCL} \tilde{L} = C L \end{equation} where $L$ and $\tilde{L}$ are two different branches of multiple polylogarithms on $\mathcal{M}_{0,n}$, defined as the unique solutions to $\nabla_{\KZ}$ with prescribed asymptotics at a chosen base-point, and $C \in \pi_{1}^{\un,\dR}(\mathcal{M}_{0,n},\omega_{\dR})$.
\begin{Definition} Let us call pre-associator equations the equations of the form (\ref{eq: LtildeCL}). \end{Definition}
It is sufficient to restrict to $n \in \{4,5\}$. The associator equations are deduced from the pre-associator equations by writing $C$ in terms of $\Phi_{\KZ}$ and by using that the automorphisms which are involved are of finite order. \newline\indent Let us denote by $O$ the tangential base-point $(\vec{1}_{0},\vec{1}_{0})$ in cubic coordinates on $\overline{\mathcal{M}}_{0,5}$ as well as its image by $\overline{\mathcal{M}}_{0,5} \rightarrow \overline{\mathcal{M}}_{0,4}$, which we choose as the origin of the paths of integration. Let $\Stab_{O}^{\mathcal{M}_{0,4}}$ and $\Stab_{O}^{\mathcal{M}_{0,5}}$ be the stabilizers of $O$ in $\Aut(\mathcal{M}_{0,4}) = S_{3}$ and $\Aut(\mathcal{M}_{0,5})= S_{5}$. \newline\indent The associator equations (duality, hexagon and pentagon) can be obtained by the pre-associator equations associated with the automorphisms $(z \mapsto 1-z)_{\ast}$, $(z \mapsto \frac{1}{z})_{\ast}$ resp. $\big(\sigma: (x_{1},x_{2},x_{3},x_{4},x_{5}) \mapsto ( x_{5},x_{4},x_{1},x_{3},x_{2})\big)_{\ast} = \big((c_{1},c_{2}) \mapsto (c_{2}, \frac{1 -c_{2}}{1 - c_{1}c_{2}})\big)_{\ast}$ in cubic coordinates, which are elements of $\Aut(\mathcal{M}_{0,4}) - \Stab_{O}^{\mathcal{M}_{0,4}}$, resp. $\Aut(\mathcal{M}_{0,5}) - \Stab_{O}^{\mathcal{M}_{0,5}}$. \newline\indent We now write consequences of the pre-associator equations associated with elements of $\Stab_{O}^{\mathcal{M}_{0,4}}$ and $\Stab_{O}^{\mathcal{M}_{0,5}}$. We consider the involution $\sigma : z \mapsto \frac{z}{z-1}$ which is in $\Aut(\mathcal{M}_{0,4})$, and the element of $\Aut(\mathcal{M}_{0,5})$ written in cubic coordinates as $\rho : (c_{1},c_{2}) \mapsto \big( -c_{1} \frac{1-c_{2}}{1-c_{1}}, - c_{2} \frac{1-c_{1}}{1-c_{2}}\big)$.
\begin{Proposition} \label{harmonic duality DR RT} (i) The pre-associator equation associated with $\sigma : z\mapsto \frac{z}{z-1}$ implies : $$ \har_{\mathcal{P}^{\mathbb{N}}}( w(e_{0}+e_{1},-e_{1})) = - \sum_{\substack{d'\geqslant 1,\text{ }z = e_{0}^{t_{d'}-1}e_{1}\ldots e_{0}^{t_{1}-1}e_{1}}} (-1)^{\depth(z)} \har_{\mathcal{P}^{\mathbb{N}}}(z.w) . $$ \noindent (ii) \label{5 32}The pre-associator equation associated with $\rho : (c_{1},c_{2}) \mapsto \big( -c_{1} \frac{1-c_{2}}{1-c_{1}}, - c_{2} \frac{1-c_{1}}{1-c_{2}}\big)$ implies a family of equations on multiple harmonic values. \end{Proposition}
\begin{proof} (i) See more generally the proof of Proposition \ref{proposition 5 30}, which we can apply to $(a,c,d)=(1,1,-1)$, knowing that $\displaystyle \sigma_{1,1,-1}^{\ast}\big( \frac{dz}{z} \big) = \frac{dz}{z} - \frac{dz}{z-1}$ and $\displaystyle\sigma_{1,1,-1}^{\ast} \big( \frac{dz}{z-1} \big) = \frac{dz}{z-1}$. \newline (ii) We have, for any sequence of coefficients $(a_{m,n})$, $$ \sum_{n,m\geq 0} a_{n,m} \bigg(-\frac{c_{1}(1-c_{2})}{1-c_{1}}\bigg)^{n} \bigg(-\frac{c_{2}(1-c_{1})}{1-c_{2}}\bigg)^{m} = \bigg( \sum_{n<m} + \sum_{n=m} + \sum_{n>m} \bigg) a_{n,m} \bigg(-\frac{c_{1}(1-c_{2})}{1-c_{1}}\bigg)^{n} \bigg(-\frac{c_{2}(1-c_{1})}{1-c_{2}}\bigg)^{m} . $$ The subsum $\displaystyle \sum\limits_{n=m}$ is $\sum_{n \geq 0} a_{n,n} \big(c_{1}c_{2}\big)^{n} $. The subsum $\displaystyle \sum\limits_{n<m}$ is, after writing $(n,m)=(n_{1},n_{1}+n_{2})$, $$ \sum_{\substack{0\leq n_{1} \\ 1\leq n_{2}}} a_{n_{1},n_{1}+n_{2}} (-c_{1})^{n_{1}}(-c_{2})^{n_{1}+n_{2}} \big( \frac{1-c_{1}}{1-c_{2}} \big)^{n_{2}} = \sum_{\substack{0\leq n_{1}\\ 1\leq n_{2}}}\sum_{\substack{0\leq l_{1} \leq n_{2} \\ 0 \leq l_{2}}} a_{n_{1},n_{1}+n_{2}} {n_{2} \choose l_{1}}{-n_{2} \choose l_{2}} (-c_{1})^{n_{1}+l_{1}}(-c_{2})^{n_{1}+n_{2}+l_{2}} $$ and after writing $(N,M)=(n_{1}+l_{1},n_{1}+n_{2}+l_{2})$, $$ = \sum_{N,M\geq 0} c_{2}^{N}c_{1}^{M} \sum_{\substack{0 \leq l_{1} \leq n_{2} \leq M \\0 \leq l_{2} \leq N}} a_{n_{1},n_{1}+n_{2}} (-1)^{M-n_{1}+n_{2}} {n_{2} \choose M-n_{1}} {-n_{2} \choose N-n_{1}-n_{2}} $$ \begin{equation} \label{eq:interm} = \sum_{N,M\geq 0} c_{2}^{N}c_{1}^{M} (-1)^{N+M} \sum_{\substack{0 \leq n_{1} \leq N \\0 \leq n_{1}+n_{2} \leq M}} a_{n_{1},n_{1}+n_{2}} {n_{2} \choose M-n_{1}} {N-n_{1}-1 \choose n_{2}-1} . 
\end{equation} On the other hand, for all integers $0 \leqslant a \leqslant b$, we have $$ {b \choose a} = \frac{b}{a} \bigg( \frac{b}{1}-1 \bigg) \ldots \bigg( \frac{b}{a-1}-1 \bigg) = \frac{b}{a} \sum_{r \geq 0} b^{r} \frak{h}_{a}(\underbrace{1,\ldots,1}_{r})(-1)^{a-1-r}$$ where, for any positive integer $m$ and any word $w$, $\frak{h}_{m}(w)$ is defined by $\har_{m}(w)=m^{\weight(w)}\frak{h}_{m}(w)$. Whence (\ref{eq:interm}) equals \begin{multline*} \sum_{N,M\geq 0} c_{2}^{N}c_{1}^{M} (-1)^{N+M} \sum_{\substack{0 \leq l_{1} \leq n_{2} \leq M \\0 \leq l_{2} \leq N \\ n_{1}=N-l_{1}}} a_{n_{1},n_{1}+n_{2}} \\ \sum_{R_{1}\geq 0} n_{2}^{R_{1}+1}\frac{(-1)^{M-n_{1}-1-R_{1}}}{M-n_{1}} \frak{h}_{M-n_{1}}(\underbrace{1,\ldots,1}_{R_{1}}) \sum_{R_{2} \geq 0} \frac{(-1)^{n_{2}-R_{2}} }{n_{2}-1} (N-n_{1}-1)^{R_{2}+1} \frak{h}_{n_{2}-1}(\underbrace{1,\ldots,1}_{R_{2}}) . \end{multline*} We now assume that $N=M=p^{\alpha}$ (the computation admits a more general version in which $M$ and $N$ are any powers of $p$). In the domain of summation
$\{(m_{1},\ldots,m_{R_{1}}) \in \mathbb{N}^{R_{1}}\text{ }|\text{ }0<m_{1}<\ldots<m_{R_{1}}<M-n_{1} \}$ of $\frak{h}_{M-n_{1}}(\underbrace{1,\ldots,1}_{R_{1}})$, we make the change of variable $m'_{i}=M-m_{i}$. We conclude by using the property of delocalization of cyclotomic multiple harmonic sums established in \cite{I-2}, Proposition-Definition 4.2.2. \end{proof}
\begin{Definition} Let $M_{\har}^{\smallint}$ be the affine ind-scheme defined by the equations of Proposition \ref{harmonic duality DR RT}. \end{Definition}
Let us consider all homographies of $\mathbb{P}^{1}$ which preserve $0$. For $a,c,d \in K$ with $K=\mathbb{C}$ or $K=\mathbb{C}_{p}$, let $\sigma_{a,c,d} : z \in \mathbb{P}^{1}(K)\mapsto \frac{az}{cz+d} \in \mathbb{P}^{1}(K)$.
\begin{Proposition} \label{proposition 5 30}
The horizontality of the morphisms of the form $(\sigma_{a,c,d})_{\ast} : \pi_{1}^{\un,\dR}(\mathbb{P}^{1} - D) \rightarrow \pi_{1}^{\un,\dR}(\mathbb{P}^{1} - D')$ with respect to $\nabla_{\KZ}$ gives an explicit family of relations between prime weighted multiple harmonic sums. \end{Proposition}
\begin{proof} Let $\big((n_{i})_{d}; (z_{i})_{d+1} \big)$ be any index. For $z$ close to $0$, we consider the following equality of iterated integrals :
\begin{equation} \label{eq:formula 0} \int_{0}^{\sigma_{a,c,d}(z)} \omega_{0}^{n_{d}-1}\omega_{z_{d}} \ldots \omega_{0}^{n_{1}-1}\omega_{z_{1}} =
\int_{0}^{z} (\sigma_{a,c,d})^{\ast} \big( \omega_{0}^{n_{d}-1}\omega_{z_{d}} \ldots \omega_{0}^{n_{1}-1}\omega_{z_{1}}\big) .
\end{equation}
\noindent The left-hand side of (\ref{eq:formula 0}) is $\Li \big((n_{i})_{d};(z_{i})_{d+1}\big) \big(\frac{az}{cz+d} \big)$ ; let us write its power series expansion at $0$. For $z \in \mathbb{C}$ on a neighborhood of $0$, we have, for all $n \in \mathbb{N}$ :
$\big(\frac{az}{cz+d} \big)^{n}
= \big(\frac{az}{d} \big)^{n} \sum\limits_{l\geqslant 0} {-n \choose l} \big(\frac{c}{d}z\big)^{l}$. Thus, by the power series expansion of multiple polylogarithms (\ref{eq:Li series bis}),
\begin{equation} \label{eq:first step} \Li \big( (n_{i})_{d};(z_{i})_{d} \big) \bigg(\frac{az}{cz+d} \bigg) = \sum_{0<m_{1}<\ldots<m_{d}<m}
\frac{ (\frac{z_{i_{2}}}{z_{i_{1}}})^{m_{1}} \ldots (\frac{z_{i_{d}}}{z_{i_{d-1}}})^{m_{d-1}} (\frac{a}{c z_{i_{d}}})^{m_{d}}}{m_{1}^{n_{1}} \ldots m_{d}^{n_{d}}} \big(\frac{c}{d}\big)^{m} {m-1 \choose m - m_{d}} z^{m} .
\end{equation}
\noindent For all $m \in \mathbb{N}^{\ast}$, and $\tilde{m} \in \{1,\ldots,m-1\}$, we have
$\displaystyle {m - 1 \choose m - \tilde{m}} =
\frac{(m-1)(m-2) \ldots (m - (m-\tilde{m}))}{1 \times 2 \times \ldots \times (m-\tilde{m})}
= \big(\frac{m}{1}-1\big)\big(\frac{m}{2}-1\big) \ldots \big(\frac{m}{m-\tilde{m}}-1\big)$ ; expanding this product gives : $\displaystyle {m-1 \choose m - \tilde{m}} = \sum\limits_{r \geqslant 0} m^{r} (-1)^{m-\tilde{m}-r} \frak{h}_{m-\tilde{m}+1} (\underset{r}{\underbrace{1,\ldots,1}})$ where the sum over $r$ is finite. By a change of variable, we have, for all $r \in \mathbb{N}$ : $\displaystyle \frak{h}_{m-\tilde{m}+1} (\underset{r}{\underbrace{1,\ldots,1}}) = \sum\limits_{\tilde{m}\leqslant j_{r}< \ldots <j_{1}<m} \frac{1}{(m-j_{r}) \ldots (m-j_{1})}$. Now, we assume that $m = p^{\alpha}$ with $p$ a prime number and $\alpha \in \mathbb{N}^{\ast}$. By factorizing by $ \frac{1}{(-j_{1})\ldots (-j_{r})}$ and expanding into $p$-adic series factors of the form $\frac{1}{1-x}$ with $|x|_{p}<1$, we obtain
\begin{equation} \label{eq:second step} {p^{\alpha} - 1 \choose p^{\alpha} - \tilde{m}} = (-1)^{\tilde{m}-1}
\sum_{\substack{ r \geqslant 0 \\ l_{1},\ldots,l_{r} \geqslant 0}} (p^{\alpha})^{r + \sum\limits_{i=1}^{r} l_{i} }\sum_{\tilde{m}\leqslant j_{r}<\ldots <j_{1}<p^{\alpha}} \frac{1}{j_{r}^{1+l_{r}} \ldots j_{1}^{1+l_{1}}}
\end{equation}
\noindent By (\ref{eq:first step}) and (\ref{eq:second step}) we have :
\begin{multline} \label{eq: 5 4 4} (p^{\alpha})^{n_{1}+\ldots+n_{d}}\Li \Big( (n_{i})_{d};(z_{i})_{d} \Big) \big(\frac{az}{cz+d} \big)[z^{p^{\alpha}}] \\
= - \sum_{\substack{ r \geqslant 0 \\ l_{1},\ldots,l_{r} \geqslant 0}}
(p^{\alpha})^{\sum\limits_{j=1}^{d}n_{j} + r + \sum\limits_{i=1}^{r} l_{i}}\sum_{0<m_{1}<\ldots<m_{d}\leqslant j_{r}<\ldots<j_{1}<p^{\alpha}}
\frac{ (\frac{z_{i_{2}}}{z_{i_{1}}})^{m_{1}} \ldots (\frac{z_{i_{d}}}{z_{i_{d-1}}})^{m_{d-1}} (-\frac{a}{c z_{i_{d}}})^{m_{d}}}{m_{1}^{n_{1}} \ldots m_{d}^{n_{d}} j_{r}^{l_{r}+1} \ldots j_{1}^{l_{1}+1}} \big(\frac{c}{d} \big)^{p^{\alpha}}
\\ = -\sum_{\substack{ r \geqslant 0 \\ l_{1},\ldots,l_{r} \geqslant 0}} \har_{p^{\alpha}}
\bigg( (n_{i})_{d};(1+l_{i})_{r}; (z_{i_{j}})_{d},\big(\frac{-a}{c}\big)_{r},
\frac{-ad}{c^{2}} \bigg)
\end{multline}
\noindent The right-hand side of (\ref{eq:formula 0}) can be expressed in terms of multiple harmonic sums via (\ref{eq:Li series bis}) and the fact that, for all $y\in \mathbb{C}$, we have :
$$ \sigma_{a,c,d}^{\ast}\bigg(\frac{dz}{z - y}\bigg) = \bigg( \frac{(a- yc)}{(a-y c)z + (-yd)} - \frac{c}{cz+d} \bigg) dz . $$ Identifying the coefficients of $z^{p^{\alpha}}$ on both sides of (\ref{eq:formula 0}) then gives the announced relations. \end{proof}
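The finite part of the binomial expansion used in the proof above (before the $p$-adic rewriting) can be checked numerically. The following Python sketch, with helper names of our own, verifies both the product formula for $\binom{m-1}{m-\tilde{m}}$ and its expansion into the elementary symmetric sums $e_{r}(1,\frac12,\ldots,\frac{1}{m-\tilde{m}})$, which are the sums $\frak{h}(1,\ldots,1)$ of $r$ ones up to the convention on the upper bound; it is an illustration, not part of the text's formal frameworks:

```python
from fractions import Fraction as F
from itertools import combinations
from math import comb

def elem(a, r):
    # e_r(1, 1/2, ..., 1/a): sum over 0 < j_1 < ... < j_r <= a of 1/(j_1 ... j_r)
    total = F(0)
    for js in combinations(range(1, a + 1), r):
        t = F(1)
        for j in js:
            t /= j
        total += t
    return total

for m in range(2, 12):
    for mt in range(1, m):                 # mt plays the role of m-tilde
        a = m - mt
        prod = F(1)
        for i in range(1, a + 1):
            prod *= F(m, i) - 1            # the product (m/1 - 1)...(m/a - 1)
        assert prod == comb(m - 1, a)      # the product formula for the binomial
        expansion = sum(F(m) ** r * (-1) ** (a - r) * elem(a, r) for r in range(a + 1))
        assert expansion == comb(m - 1, a)  # the elementary-symmetric expansion
print("binomial product expansion verified for m < 12")
```

Exact rational arithmetic (`fractions.Fraction`) is used so the assertions test the identities as equalities in $\mathbb{Q}$, not up to floating-point error.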
\subsubsection{In the framework $\Sigma$}
\begin{Proposition} \label{harmonic duality RT} We can prove in the framework $\Sigma$ that, for all words $w$ : \begin{equation} \label{eq: harmonic duality RT} \har_{\mathcal{P}^{\mathbb{N}}}( w(e_{0}+e_{1},-e_{1})) = - \sum_{\substack{d'\geqslant 1 \\ z = e_{0}^{t_{d'}-1}e_{1}\ldots e_{0}^{t_{1}-1}e_{1}}} (-1)^{\depth(z)} \har_{\mathcal{P}^{\mathbb{N}}}(z.w) . \end{equation} \end{Proposition}
This is a generalization of a result of Rosen called the ``asymptotic duality theorem'' \cite{Rosen}, which is equivalent to the $\alpha=1$ case of this result, although it is formulated differently.
\begin{proof} In Rosen's proof of the asymptotic duality theorem \cite{Rosen}, which we generalize from $\alpha=1$ to any $\alpha \in \mathbb{N}^{\ast}$, let us modify the last step by writing, for all $m_{d} \in \{1,\ldots,p^{\alpha}-1\}$, ${p^{\alpha} - 1 \choose m_{d} - 1} = {p^{\alpha} - 1 \choose p^{\alpha} - m_{d}}$ and applying the canonical expansion of binomial coefficients in terms of multiple harmonic sums to ${p^{\alpha} - 1 \choose p^{\alpha} - m_{d}}$ instead of ${p^{\alpha} - 1 \choose m_{d}-1}$. Instead of obtaining quantities of the form $\sum_{0<m<p^{\alpha}} \frak{h}_{m}(w') \frak{h}_{m}(w'')$ and linearizing them by the quasi-shuffle formula as in \cite{Rosen}, we obtain quantities of the form $\sum_{0<m<p^{\alpha}} \frak{h}_{m}(w')\frak{h}_{p^{\alpha}-m}(w'')$, and we make a change of variable $m_{i}\mapsto p^{\alpha}-m_{i}$ in the domain of summation of $\frak{h}_{p^{\alpha}-m}(w'')$, which gives an infinite sum of prime weighted multiple harmonic sums $\har_{p^{\alpha}}(w''')$. \end{proof}
The asymptotic duality theorem in \cite{Rosen} is that, for all indices $w$ and all primes $p$, we have \begin{equation} \label{eq: remarque 1} \har_{p}\big( w(e_{0}+e_{1},-e_{1})\big) = \har_{p}\Big( w + w \ast \frac{1}{1+e_{1}} \Big) . \end{equation}
It is obtained as a $p$-adic lift of a theorem of Hoffman \cite{Hoffman 2}, about multiple harmonic sums $\frak{h}_{p}(w) \mod p$, called the ``duality theorem'', which relies on the Newton series of multiple harmonic sums, in the sense of \S2.2.4. The proof in \cite{Hoffman 2} and \cite{Rosen} remains true if one replaces $\har_{p}$ by $\har_{p^{\alpha}}$ for all $\alpha \in \mathbb{N}^{\ast}$. Our alternative version (\ref{eq: harmonic duality RT}) does not use the quasi-shuffle equation. However, given the quasi-shuffle relation for $\har_{p^{\alpha}}$, equations (\ref{eq: remarque 1}) and the generalization of (\ref{eq: harmonic duality RT}) to any $\alpha$ are equivalent. Indeed, we have, for all $w = e_{0}^{n_{d}-1}e_{1}\ldots e_{0}^{n_{1}-1}e_{1}$, \begin{equation} \label{eq: remarque 2} \har_{p^{\alpha}}(w(e_{0}+e_{1},-e_{1})) = \har_{p^{\alpha}}\Big((e_{0}^{n_{d}}e_{1} - e_{0}^{n_{d}-1}e_{1})\big(e_{0}^{n_{d-1}-1}e_{1}\ldots e_{0}^{n_{1}-1}e_{1} \ast \frac{1}{1+e_{1}}\big)\Big) . \end{equation} \noindent Finally, for all words $w$ and all $n \in \mathbb{N}$, we have : $$ \har_{p^{\alpha}}\big( e_{0}^{n}e_{1} ( w \ast \frac{1}{1+e_{1}}) \big) = - \sum_{\substack{d\geqslant 1 \\ x_{d},\ldots,x_{1} \geqslant 1 \\ z=e_{0}^{x_{d}-1}e_{1}\ldots e_{0}^{n+x_{1}-1}e_{1}}} (-1)^{\depth(z)}\har_{p^{\alpha}}(z.w) . $$
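The quasi-shuffle relation invoked above can be checked numerically for the truncated multiple harmonic sums $\frak{h}_{m}$; the following Python sketch (our own encoding of indices as tuples, depth-one products only) verifies the depth-one stuffle $\frak{h}_{m}(a)\,\frak{h}_{m}(b) = \frak{h}_{m}(a,b)+\frak{h}_{m}(b,a)+\frak{h}_{m}(a+b)$ exactly over $\mathbb{Q}$:

```python
from fractions import Fraction as F
from itertools import combinations

def h(m, ns):
    # truncated multiple harmonic sum: sum over 0 < m_1 < ... < m_d < m
    # of 1 / (m_1^{n_1} ... m_d^{n_d}), for the index ns = (n_1, ..., n_d)
    total = F(0)
    for ms in combinations(range(1, m), len(ns)):
        t = F(1)
        for mi, ni in zip(ms, ns):
            t /= F(mi) ** ni
        total += t
    return total

for m in (5, 8, 9):
    for a in (1, 2):
        for b in (1, 3):
            # split the double sum into m_1 < m_2, m_2 < m_1 and m_1 = m_2
            assert h(m, (a,)) * h(m, (b,)) == h(m, (a, b)) + h(m, (b, a)) + h(m, (a + b,))
print("quasi-shuffle relation verified for the tested indices")
```

The identity is just the decomposition of the product of two single sums over the diagonal and the two strict orderings, which is the mechanism behind the quasi-shuffle formula used in the text.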
\subsubsection{Comparison between the results of the three frameworks}
We observe that Proposition \ref{harmonic duality DR} (i), Proposition \ref{harmonic duality DR RT} (i) and Proposition \ref{harmonic duality RT}, obtained respectively in the frameworks $\int_{1,0}$, $\int$ and $\Sigma$, give the same result. This is the part of the harmonic associator equations obtained by using $\pi_{1}^{\un}(\mathcal{M}_{0,4})$.
\section{Finite cyclotomic multiple zeta values and finite multiple polylogarithms}
We define ``finite'' analogues of adjoint cyclotomic multiple zeta values and study their algebraic properties. This gives a generalization of the notion of finite multiple zeta values of Kaneko and Zagier and an interpretation of that notion in terms of the crystalline pro-unipotent fundamental groupoid of $\mathbb{P}^{1} - \{0,\mu_{N},\infty\}$.
\subsection{Review on finite multiple zeta values}
Let $\mathcal{P}$ be the set of prime numbers. The following ring is a $\mathbb{Q}$-algebra ; it is denoted by $\mathcal{A}$ by Kaneko and Zagier and called the ring of integers modulo infinitely large primes \cite{Kontsevich} :
\begin{equation} \label{eq:integers mod infinitely large} \mathbb{F}_{p\rightarrow\infty} =
\mathbb{Z}/_{p\rightarrow\infty} = \big( \prod_{p \in \mathcal{P}} \mathbb{Z}/p\mathbb{Z} \big) / \big( \bigoplus_{p \in \mathcal{P}} \mathbb{Z}/p\mathbb{Z} \big) . \end{equation}
Kaneko and Zagier have defined finite multiple zeta values as the following numbers, where $d$ and $n_{i}$ $(1\leqslant i \leqslant d)$ are any positive integers (see also \cite{Hoffman 2} and \cite{Zhao} for earlier almost identical notions) :
\begin{equation} \zeta_{\mathcal{A}} \big((n_{i})_{d} \big) = \left(\sum_{0<m_{1}<\ldots<m_{d}<p} \frac{1}{m_{1}^{n_{1}} \ldots m_{d}^{n_{d}}} \mod p \right)_{p \in \mathcal{P}} \in \mathcal{A} . \end{equation}
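The defining sums can be computed directly. As a numerical illustration (our own helper name, not part of the text's formal frameworks), the following Python sketch computes the $p$-components of $\zeta_{\mathcal{A}}$ for a few primes and checks the classical vanishing in depth one, $\zeta_{\mathcal{A}}(n)=0$, which holds because $\sum_{0<m<p} m^{-n} \equiv 0 \bmod p$ whenever $p-1$ does not divide $n$:

```python
from itertools import combinations

def zeta_A_rep(ns, p):
    # the p-component of zeta_A(n_1, ..., n_d): the truncated sum
    # over 0 < m_1 < ... < m_d < p of 1/(m_1^{n_1} ... m_d^{n_d}) mod p
    total = 0
    for ms in combinations(range(1, p), len(ns)):
        t = 1
        for mi, ni in zip(ms, ns):
            t = t * pow(mi, -ni, p) % p   # modular inverse power
        total = (total + t) % p
    return total

for p in (5, 7, 11, 13):
    for n in (1, 2, 3):
        assert zeta_A_rep((n,), p) == 0   # depth one: vanishes mod p for p > n + 1
    assert zeta_A_rep((1, 1), p) == 0     # depth two, via the quasi-shuffle relation
print("tested representatives of zeta_A vanish")
```

An element of $\mathcal{A}$ is only determined up to finitely many components, so such computations probe the $p$-components for the chosen primes rather than the element of $\mathcal{A}$ itself.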
They conjecture that the following formula defines an isomorphism between the $\mathbb{Q}$-algebra generated by fMZV's and the $\mathbb{Q}$-algebra generated by MZV's modded out by the ideal $(\zeta(2))$ ; the explicit formula is striking : \begin{equation} \label{eq:Kaneko Zagier iso} \zeta_{\mathcal{A}}\big( (n_{i})_{d} \big) \mapsto \sum_{d'=0}^{d} (-1)^{n_{d'+1}+\ldots+n_{d}} \zeta(n_{1},\ldots,n_{d'})\zeta(n_{d},\ldots,n_{d'+1}) \mod \zeta(2) . \end{equation} According to the period conjectures on MZV's and $p$MZV's, it is equivalent to make the same conjecture with $p$MZV's instead of MZV's modulo $\zeta(2)$. With Definition \ref{def adjoint}, the $p$-adic analogue of the right-hand side of (\ref{eq:Kaneko Zagier iso}) is $\zeta_{p,\alpha}^{\Ad}\big((n_{i})_{d};0\big)$. Their complex analogues are sometimes called symmetric or symmetrized multiple zeta values \cite{NoteCRAS}, or finite real multiple zeta values.
\subsection{Finite cyclotomic multiple zeta values}
\subsubsection{Variants of the ring of integers modulo infinitely large primes}
We introduce some variants of the ring $\mathbb{F}_{p\rightarrow \infty}$ (\ref{eq:integers mod infinitely large}) :
\begin{Definition} \label{def mod p infinitely large}(i) For each $\alpha \in \mathbb{N}^{\ast}$, let $\mathbb{F}_{p^{\alpha} \rightarrow \infty}=\big( \prod_{p \in \mathcal{P}} \mathbb{F}_{p^{\alpha}} \big) / \big( \bigoplus_{p \in \mathcal{P}} \mathbb{F}_{p^{\alpha}}\big)$.
\newline (ii) Let $\overline{\mathbb{F}}_{p\rightarrow \infty} = \big( \prod_{p \in \mathcal{P}} \overline{\mathbb{F}_{p}} \big) / \big( \bigoplus_{p \in \mathcal{P}} \overline{\mathbb{F}_{p}} \big)$.
\newline (iii) Let the Frobenius of $\mathbb{F}_{p^{\alpha}\rightarrow \infty}$ resp. $\overline{\mathbb{F}}_{p\rightarrow \infty}$ be the automorphism defined as $\sigma : (x_{p})_{p} \mapsto (x_{p}^{p})_{p}$. \end{Definition}
These rings are $\mathbb{Q}$-algebras. We have an inclusion $\mathbb{F}_{p^{\alpha}\rightarrow \infty} \hookrightarrow \overline{\mathbb{F}}_{p\rightarrow \infty}$, and, if $\alpha'$ divides $\alpha''$, we have an inclusion $\mathbb{F}_{p^{\alpha'}\rightarrow \infty} \hookrightarrow \mathbb{F}_{p^{\alpha''}\rightarrow \infty}$, and these inclusions are compatible.
\subsubsection{Finite cyclotomic multiple zeta values}
We now generalize the definition of finite multiple zeta values. Below, for any root of unity $\xi \in \overline{\mathbb{Q}}$ and for any prime $p$, we denote by $\overline{\xi}$ the image of $\xi$ in $\overline{\mathbb{F}_{p}}$.
\begin{Definition} \label{def cycl mzv} Let finite cyclotomic multiple zeta values (fMZV$\mu_{N}$'s) be the following numbers : for any positive integers $d$ and $n_{i}$ ($1\leqslant i \leqslant d$) and roots of unity $\xi_{i}$ ($1\leqslant i \leqslant d+1$),
\begin{equation} \zeta_{f}
\big( (n_{i})_{d};(\xi_{i})_{d+1} \big) = \left(\sum_{0<m_{1}<\ldots<m_{d}<p} \frac{\big( \frac{\bar{\xi}_{2}}{\bar{\xi}_{1}} \big)^{m_{1}} \ldots \big(\frac{\bar{\xi}_{d+1}}{\bar{\xi}_{d}}\big)^{m_{d}} \big(\frac{1}{\bar{\xi}_{d+1}} \big)^{p}}{m_{1}^{n_{1}} \ldots m_{d}^{n_{d}}} \right)_{p \in \mathcal{P}} \in \overline{\mathbb{F}}_{p \rightarrow \infty} .
\end{equation} \end{Definition}
\noindent We note that, if $N$ denotes the lcm of the orders of the $\xi_{i}$'s as roots of unity, then, for $p$ large enough, $p$ does not divide $N$, and the crystalline realization of $\pi_{1}^{\un}(\mathbb{P}^{1} \setminus \{0,\mu_{N},\infty\})$ is defined. \newline\indent By extrapolating on Kaneko-Zagier's conjecture (\S6.1), we can speculate an equivalence between the properties of the numbers $\zeta_{f}\big((n_{i})_{d};(\xi_{i})_{d+1}\big)$ and the numbers \begin{equation} \label{eq: analogue of finite} \zeta_{p,\alpha}^{\Ad} \big( (n_{i})_{d};0;(\xi_{i})_{d}\big) . \end{equation}
We again use the notation of Definition \ref{def mod p infinitely large}. For any $z \in D \setminus \{0,\infty\}$, we denote by $\bar{z}$ the reduction of $z$ modulo $p$ large enough.
\begin{Definition} Let finite multiple polylogarithms be the numbers : $$ \Li_{f} \big( (n_{i})_{d};(z_{i})_{d+1} \big) = \left(\sum_{0<m_{1}<\ldots<m_{d}<p} \frac{\big( \frac{\bar{z}_{2}}{\bar{z}_{1}} \big)^{m_{1}} \ldots \big(\frac{\bar{z}_{d+1}}{\bar{z}_{d}} \big)^{m_{d}} \big(\frac{1}{\bar{z}_{d+1}}\big)^{p} }{m_{1}^{n_{1}} \ldots m_{d}^{n_{d}}} \mod p \right)_{p \in \mathcal{P}} \in \overline{\mathbb{F}}_{p\rightarrow \infty} . $$ \end{Definition}
\subsubsection{Equations satisfied by finite cyclotomic multiple zeta values}
We have
$$ \overline{\mathbb{F}}_{p\rightarrow \infty} = \bigg\{ (x_{p})_{p} \in \prod_{p\in \mathcal{P}}\mathbb{C}_{p} \text{ }|\text{ } v_{p}(x_{p}) \geqslant 0 \text{ for } p \text{ large} \bigg\} \text{ }\bigg/\text{ }\bigg\{ (x_{p})_{p} \in \prod_{p\in \mathcal{P}} \mathbb{C}_{p} \text{ }|\text{ } v_{p}(x_{p}) \geqslant 1 \text{ for } p \text{ large} \bigg\} . $$ \indent Finite cyclotomic multiple zeta values are expressed as reductions of cyclotomic multiple harmonic values, up to the Frobenius of Definition \ref{def mod p infinitely large} (iii): for any harmonic word $w$, we have \begin{equation} \label{eq:red mod cycl har} \sigma^{\alpha-1}\zeta_{f}\big(w\big) = (p^{-\weight(w)}\har_{p^{\alpha}}(w) \mod p)_{p,\alpha} . \end{equation} Indeed, with the notation of (\ref{eq:mult har sums}), the subsum of $\har_{p^{\alpha}}\big( (n_{i})_{d};(\xi_{i})_{d+1}\big)$ on the subdomain $(m_{1},\ldots,m_{d}) \in p^{\alpha-1}\mathbb{N}^{\ast} \times \ldots \times p^{\alpha-1}\mathbb{N}^{\ast}$ is equal, by the change of variable $m_{i}=p^{\alpha-1}m'_{i}$, to $\har_{p} \big( (n_{i})_{d};(\xi^{p^{\alpha-1}}_{i})_{d+1} \big)$, which is of valuation $\geqslant 0$, and the subsum on the complementary domain has $p$-adic valuation $\geqslant 1$. \newline\indent On the other hand, the numbers (\ref{eq: analogue of finite}) are reductions of the $\Lambda$-adjoint $p$-adic cyclotomic multiple zeta values : \begin{equation} \label{eq:modulo Lambda} \zeta_{p,\alpha}^{\Lambda \Ad}\big( (n_{i})_{d};0;(\xi_{i})_{d+1}\big)= \zeta_{p,\alpha}^{\Lambda \Ad} \big((n_{i})_{d};(\xi_{i})_{d+1} \big) \mod \Lambda^{\weight(w)+1} . \end{equation}
In the $N=1$ case, Akagi-Hirose-Yasuda have proved an integrality property of $p$MZV's and deduced a finite variant of equation (\ref{eq:formula for n=1}) \cite{AHY} :
\begin{equation} \label{eq: AHY} \zeta_{\mathbb{Z}_{/p\rightarrow \infty}}\big((n_{i})_{d}\big) = \big( p^{-\sum_{i=1}^{d}n_{i}}\zeta_{p,1}^{\Ad}\big((n_{i})_{d};0\big) \mod p \big)_{p\text{ prime}} \in \mathcal{A} . \end{equation}
By the relations of iteration of the Frobenius (\cite{I-3}, equations (1.11), (1.12), (1.13) and Proposition 1.5.2), equation (\ref{eq: AHY}) remains true if $\zeta_{p,1}^{\Ad}$ is replaced by $\zeta_{p,\alpha}^{\Ad}$. Moreover, this can be generalized to any $N$ by using the integrality property of $p$-adic iterated integrals proved in \cite{Chatzis}. \newline By equations (\ref{eq:red mod cycl har}), (\ref{eq:modulo Lambda}) and the cyclotomic generalization of (\ref{eq: AHY}), we deduce immediately from \S3 and \S4 some equations satisfied by fMZV$\mu_{N}$'s and their analogues (\ref{eq: analogue of finite}). This leads to the following definition :
\begin{Definition} (i) Let $\DS_{f}$, the scheme of finite double shuffle equations, be the pro-affine scheme defined by the term of lowest weight in the equations defining $\DS_{\har}$ of \S3.3.
\newline (ii) Let $\GRT_{f}$, the scheme of finite associator equations, be the pro-affine scheme defined by the term of lowest weight in the equations defining $\M_{\har}$ of \S4.3. \end{Definition}
Some of these equations have already appeared in the literature. In the $N=1$ case, the finite double shuffle equations appear in \cite{Kaneko}, \cite{Kaneko Zagier} and in \cite{NoteCRAS}. To our knowledge, the method of proof in \cite{Kaneko}, \cite{Kaneko Zagier} uses the framework $\int$. The one-dimensional part of the finite associator equations appears in \cite{Hoffman 2}, where it is called the ``duality theorem''. The finite version of the reversal equation (\ref{paragraph reflexion}) in the $N=1$ case is the formula $\zeta_{\mathcal{A}}(n_{1},\ldots,n_{d}) = (-1)^{n_{1}+\ldots+n_{d}}\zeta_{\mathcal{A}}(n_{d},\ldots,n_{1}) \mod p$, which appears in \cite{Zhao}, Lemma 3.3 and \cite{Hoffman 2}, Theorem 4.5.
\subsection{A generalization : finite analogues of adjoint cyclotomic multiple zeta values}
We now associate a ``finite'' analogue to all adjoint cyclotomic multiple zeta values $\zeta_{p,\alpha}^{\Ad}\big((n_{i})_{d};b;(\xi_{i})_{d+1}\big)$, not only the case $b=0$. We are going to use the overconvergent cyclotomic multiple harmonic values (Definition \ref{variant harmonic}) :
\begin{Definition} Let the adjoint finite cyclotomic multiple zeta values (Ad$f$MZV$\mu_{N}$'s) be the numbers
$$ \zeta^{\Ad}_{f,\alpha} \big( (n_{i})_{d};l;(\xi_{i})_{d+1} \big) =
(p^{-(n_{1}+\ldots+n_{d}+l)}\har_{p,\alpha}^{\dagger}( (n_{i})_{d};l;(\xi_{i})_{d+1}) \mod p)_{p} \in \overline{\mathbb{F}}_{p\rightarrow \infty} . $$ \end{Definition}
By the integrality of $p$-adic iterated integrals on punctured projective lines proved in \cite{Chatzis}, and by the expression of $\har_{p,\alpha}^{\dagger}\big((n_{i})_{d};l;(\xi_{i})_{d+1}\big)$ as an infinite sum of Ad$p$MZV$\mu_{N}$'s (Proposition \ref{prop formula dagger} (i)), the numbers $\zeta^{\Ad}_{f,\alpha} \big( (n_{i})_{d};l;(\xi_{i})_{d+1} \big)$ are well-defined and satisfy a generalization of equation (\ref{eq: AHY}): $$ \zeta_{f,\alpha} \big( (n_{i})_{d};l;(\xi_{i})_{d+1} \big) = \big( \zeta^{\Ad}_{p,\alpha} \big((n_{i})_{d};l;(\xi_{i})_{d+1}\big) \mod p \big) \in \overline{\mathbb{F}}_{p\rightarrow\infty} . $$ We also have a generalization of equation (\ref{eq:modulo Lambda}), which refers to the overconvergent variant of $\Lambda$Ad$p$MZV$\mu_{N}$'s (Definition \ref{def over adjoint}): $$ \zeta^{\Ad}_{p,\alpha} \big((n_{i})_{d};l;(\xi_{i})_{d+1}\big) = \zeta_{p,\alpha}^{\Lambda \Ad \dagger} \big((n_{i})_{d};(\xi_{i})_{d+1}\big) \mod \Lambda^{\weight(w)+1} . $$
In the $N=1$ case, the numbers $\zeta_{f,\alpha} \big( (n_{i})_{d};l\big)$ are in the $\mathbb{Q}$-vector space generated by finite multiple zeta values. This follows from equation (\ref{eq: AHY}) and the fact, proved in \cite{Yasuda}, that the numbers $\zeta^{\Ad}_{p,\alpha}\big((n_{i})_{d};0\big)$ generate the space of $p$MZV's. We expect that this remains true for any $N$, up to the Frobenius $\sigma$ of $\overline{\mathbb{F}}_{p\rightarrow \infty}$ (Definition \ref{def mod p infinitely large}). We generalize again the conjecture of Kaneko and Zagier:
\begin{Conjecture} The numbers
$\zeta^{\Ad}_{f,\alpha} \big( (n_{i})_{d};l;(\xi_{i})_{d+1}\big)$ and $\zeta_{p,\alpha}^{\Ad} \big( (n_{i})_{d};l;(\xi_{i})_{d+1}\big)$ satisfy the same algebraic properties. \end{Conjecture}
In summary, starting with MHV$\mu_{N}$'s, one can on the one hand consider the associated graded for the weight filtration, which gives Ad$p$MZV$\mu_{N}$'s, and on the other hand consider the associated graded for the filtration defined by the uniform topology on $\underset{p\in\mathcal{P}}{\prod} \mathbb{C}_{p}$ arising from the $p$-adic topologies (for $p$ infinitely large), which gives Ad$f$MZV$\mu_{N}$'s; the conjecture says that these two grading operations give equivalent results. Thus, this conjecture states a certain adelic case of equality in the inequality between the slopes of the Frobenius and the Hodge filtration on the log-crystalline cohomology which represents $\pi_{1}^{\un,\crys}(\mathbb{P}^{1} \setminus\{0,\mu_{N},\infty\},\vec{1}_{1},\vec{1}_{0})$. In conclusion, we have interpreted Kaneko-Zagier's notion of finite multiple zeta values in crystalline terms. We will study it further in a subsequent paper.
\end{document}
\begin{document}
\begin{abstract} The theory of almost characters, which is closely related to character sheaves, was proposed by Lusztig to study the representation theory of finite reductive groups. In this article we show that the decomposition of the Weil character for finite reductive dual pairs $(\Sp_{2n},\rmO^\pm_{2n'})$ or $(\Sp_{2n},\SO_{2n'+1})$ with respect to the almost characters is exactly the same as the decomposition with respect to the irreducible characters. As a consequence, the finite theta correspondence on almost characters is established. \end{abstract}
\title{Finite Theta Correspondence of Almost Characters} \tableofcontents
\input sect01 \input sect02 \input sect03 \input sect04 \input sect05
\end{document}
\begin{document}
\title[Noise-immunity Kazan quantum line at 143 km regular fiber link]{ Noise-immunity Kazan quantum line at 143 km regular fiber link} \author{Oleg I. Bannik$^{1,2}$, Lenar R. Gilyazov$^{1,2}$, Artur V. Gleim$^{1}$, Nikolay S. Perminov$^{1,2}$, Konstantin S. Melnik$^{1,2}$, Narkis M. Arslanov$^{1,2}$, Aleksandr A. Litvinov$^{1,2}$, Albert R. Yafarov$^{1,2}$, and Sergey A. Moiseev$^{1,2,*}$} \affiliation{$^{1}$ Kazan Quantum Center, Kazan National Research Technical University n.a. A.N.Tupolev-KAI, 10 K. Marx, Kazan 420111, Russia} \affiliation{$^{2}$ Kazan Quantum Communication Ltd, 10 K. Marx, Kazan 420111, Russia} \email{s.a.moiseev@kazanqc.org} \begin{abstract} We experimentally demonstrate long-distance quantum communication over 143 km between the city of Kazan and the urban-type village of Apastovo in the Republic of Tatarstan, using a quantum key distribution prototype that provides high noise immunity of network lines and nodes owing to phase coding in the subcarrier wave. The average secret key generation rate was 12 bits per second with 37 decibels of line losses over the 143 km distance during 16.5 hours of continuous field testing. The commercialization prospects of the demonstrated long-range QKD system are discussed. \end{abstract}
\maketitle
\section{Intercity quantum networks} One of the first commercial systems of quantum key distribution (QKD) using a standard fiber-optic channel in an urban communication line between the Swiss cities of Geneva and Lausanne was implemented in \cite{stucki2002quantum}. The key distribution was demonstrated over a distance of 67 km at a wavelength of 1550 nm with a quantum secret key rate (SKR) of about 60 bits per second (bps). This system used a phase-coded BB84 protocol \cite{bennett1984quantum,muller1997plug}, the main components of which were a polarization beamsplitter, a phase modulator and a single-photon detection module capable of operating at room temperature. In most quantum communication schemes \cite{gisin2002quantum,andersen2015hybrid,diamanti2016practical,zhang2017quantum,yan2017quantum,acin2018quantum}, the expensive critical element is the photon detector, which largely determines the quantum signal utilization, signal-to-noise ratio, cost and reliability of the system, as well as its main functionality and the complexity of its maintenance during operation. Currently, to ensure high performance of quantum communication (QC) devices, two main types of quantum detectors are widely used: the single-photon avalanche detector (SPAD) \cite{tosi2014low,zhang2015advances} and the high-efficiency superconducting single-photon detector (SSPD) \cite{natarajan2012superconducting,zhang2017nbn,zolotov2018photon,roy2018number,autebert2019direct}. Promising results for stable urban quantum networks and long-distance QCs have recently been obtained with SPAD and SSPD detectors.
\textit{QC with SPAD.} The well-known demonstration of a commercial quantum communication system described in \cite{stucki2002quantum} contained a SPAD detector composed of a Peltier-cooled avalanche photodiode. In \cite{dixon2015high}, using the phase-coding BB84 protocol \cite{scarani2008quantum,lo2005efficient} with Mach-Zehnder interferometers, a UK-Japan group presented the prototype of a high-bit-rate QKD system providing a total of 878 Gbit of secure key data over a 34 day period, corresponding to a sustained key rate of around 300 kbps. The system was deployed over a standard 45 km link of an installed metropolitan telecommunication fiber network in central Tokyo. The QKD prototype is compact, robust and automatically stabilised, enabling key distribution during diverse weather conditions. The security analysis for this system took into account finite-key-size effects and decoy states \cite{lucamarini2013efficient}, with a quantified key failure probability of $\epsilon=10^{{-}10}$.
Recently, a Chinese group, known for creating a quantum satellite \cite{liao2017satellite}, presented in \cite{mao2018integrating} the integration of QKD based on the polarization-coding decoy-state BB84 protocol \cite{chen2010metropolitan,yin2012quantum,liao2017satellite} with a commercial long-distance network carrying 3.6 Tbps of classical data at 21 dBm launch power over 66 km of fiber. This scheme is quite noise-immune with respect to influences on devices in network nodes, since it does not contain interferometers and is based on polarizing devices. With 20 GHz pass-band filtering and large effective core area fibers, real-time secure key rates can reach 4.5 kbps and 5.1 kbps for co-propagation and counter-propagation at the maximum launch power, respectively. This demonstrates feasibility and represents an important step towards building a quantum network that coexists with the current trunk fiber infrastructure of classical communications. A significant increase in the maximum range of QKD systems is achieved by replacing the SPAD detector with an SSPD detector.
\textit{QC with SSPD.} Ultrafast BB84 QKD transmission at a 625 MHz clock rate through 97 km of field-installed fiber, using practical clock synchronization based on wavelength-division multiplexing for a time-bin coding protocol with Mach-Zehnder interferometers, was demonstrated by a Japanese group in \cite{tanaka2008ultra}. The system achieved stable secret key generation at 800 bps for over one hour with a low quantum bit error rate (QBER) of 2.9 $\%$, where the quantum information was additionally protected using the decoy method \cite{lo2005decoy}. These results open the way to global secure QCs capable of functioning at high losses in working optical lines \cite{hwang2003quantum}.
The experimental results of a long-term field trial of 1-GHz-clock QKD were presented in \cite{shimizu2013performance}, using the differential phase shift coding scheme \cite{molotkov2015analog} with Mach-Zehnder interferometers incorporated in the test-bed optical fiber network installed in the Tokyo metropolitan area. A photon transmitter and a photon receiver were connected to a 90 km loop-back optical link with 30 dB loss. SSPD detectors with a low dark count rate were employed to detect photons. Stable maintenance-free operation was achieved for 25 days, with an average secure key generation rate of 1.1 $\pm$ 0.5 kbps and a quantum bit error rate of 2.6$\%$. The experiments showed a significant impact of meteorological conditions on the main parameters of the QKD system.
To move from existing commercial QKD systems operating at distances of about 100 km to new solutions stable at ultra-long distances of about 150-200 km, it is desirable to use optical schemes that do not include interferometers. In this work we realized a stable interferometer-free one-way QKD system on a 143 km regular fiber line by using the subcarrier wave phase-coding protocol, and we discuss the prospects of this system for ultra-long-distance QC.
\section*{Noise immunity prototype for ultra-long distance QC} \textit{Noise immunity problem.} Noise immunity and active intelligent stabilization \cite{kulik2014method,kulik2014line,balygin2016active} are very important characteristics of systems for ultra-long-distance QCs. These characteristics allow one to classify the existing long-distance QCs \cite{molotkov2008resistance,molotkov2011solution} by using extensive quantum analysis \cite{molotkov2016complexity} (traditional for cryptography) and intelligent noise analysis \cite{nigmatullin2017general,nigmatullin2019universal,nigmatullin2019detection,nigmatullin2016forecasting} (traditional for steganography, friend identification systems and general-type complex systems). To solve the noise immunity problems with respect to influences on both the fiber communication line and the QKD apparatus in the network nodes, we must accordingly abandon the use of interferometers \cite{molotkov2004multiplex}, which reduce the vibration resistance of nodes, and of polarization coding, which is sensitive to deformations, vibration and temperature changes in the fiber.
\textit{Phase-coding as a solution.} A unique noise immunity is inherent to the subcarrier wave phase-coding protocol \cite{merolla1999single}, which distinguishes this approach from many others \cite{wehner2018quantum,djordjevic2019recent,razavi2019quantum}. In the one-way QKD system of subcarrier frequencies, the quantum signal is not directly generated by the source but is created as a result of phase modulation of the carrier wave at the Alice and Bob sides \cite{merolla1999single}. A description of a typical technological implementation of this protocol can be found in \cite{gleim2016secure}. Below we offer a modified variant of this QKD system.
\textit{Components and properties.} We propose the one-way QKD system shown in Fig. \ref{scheme}. A tunable-wavelength continuous-wave DFB laser with a 5 kHz frequency linewidth was used as the carrier wavelength source for the quantum channel. The wavelength was adjusted in accordance with the filter bandwidth in the receiving unit (Bob side). The laser radiation was launched into an electro-optical amplitude modulator that generated a sequence of 2.5 ns pulses at a repetition rate of 100 MHz. The generated pulses were then transmitted to the optical phase modulator (OPM), which provided uniform phase modulation over the entire pulse duration. A 4.8 GHz electromagnetic field was also applied to the radio-frequency input of the OPM to generate optical sidebands. The relative phase of the subcarrier waves is determined by the phase of the modulating electromagnetic field, with a modulation index of 0.033. Further, the optical signal was attenuated by a variable attenuator to the sideband power level defined by the protocol, namely, no more than 0.2 photons per modulation cycle. A chromatic dispersion compensator was introduced to eliminate the detrimental influence of chromatic dispersion on the interferometric visibility at the receiver unit side. Alice's overall output power was 79.3 pW. \begin{figure}
\caption{ One-way scheme for the long-distance QC complex. DL1 and DL2: diode laser 1 and 2; OI: optical isolator; OAM: optical amplitude modulator; OPM: optical phase modulator; OA: optical attenuator; FPGA: field-programmable gated-array and these serve (in conjunction with the control computers) as the control apparatus of this system; VCO: voltage control oscillator; PLL: phase looked loop; EPM: electrical phase modulator; AMD: amplitude modulator driver; CDC: chromatic dispersion compensator; PC: polarization controller; F: optical filter; SSPD: superconducting single-photon detector; EF: electrical filter; D: photodiode; $\Phi$: optical phase modulator; CC: classical computer.}
\label{scheme}
\end{figure} Both Alice's and Bob's phase modulators contained a linear polarizer aligned with the electro-optical crystal axis to ensure a fully modulated signal.
To maximize signal utilization, Bob used a polarization controller at his input to compensate for polarization distortions introduced by the fiber-optic line. Tracking the maximum counting rate at the output of the single-photon detector, the controller automatically aligns the signal polarization along the axis of the OPM polarizer. After the phase modulation, the input light was filtered by a narrow-band notch filter.
Our approach uses only one sideband, which introduces an additional 3 dB signal loss but allows us to filter out both the carrier wavelength and the spurious background noise of the fiber line in a cheap and simple way, using only one spectral filter. One of the sidebands is launched to an SSPD detector. The SSPD operates at a temperature of 2.1 K and has a low dark count rate of 0.5 Hz with a high quantum efficiency of 50 $\%$.
\section*{Experimental demonstration} For the field tests, we used the Kazan-Apastovo regular fiber link with a length of 143 km, with losses of 37 dB in the quantum channel and 45 dB in the synchronization channel equipped with an erbium-doped fiber amplifier. The total time of continuous testing was 16.5 hours. About 700 kbit of key information was generated in the quantum channel, with an average secret quantum key generation rate of 12 bps. The QBER stayed in the stable range from 0.5 $\%$ to 3.5 $\%$, with an average value of $\sim$ 2 $\%$ (see Fig. \ref{QBER}). This generation rate allows the 256-bit encryption key to be changed up to two times per minute. The data of this stable experimental demonstration show the possible use of the proposed QKD system in commercial long-range QCs.
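As a quick sanity check of the reported figures (our back-of-the-envelope computation, using only the numbers quoted above):

```python
# Consistency check of the quoted field-test figures.
total_key_bits = 700e3            # ~700 kbit of secret key generated
duration_s = 16.5 * 3600          # 16.5 hours of continuous testing
avg_skr = total_key_bits / duration_s
print(f"average SKR ~ {avg_skr:.1f} bps")   # ~11.8 bps, matching the quoted 12 bps

key_interval = 256 / avg_skr      # time to accumulate one 256-bit encryption key
print(f"one 256-bit key every ~{key_interval:.0f} s")  # ~22 s, i.e. at least two keys per minute
```

So the 12 bps average rate and the stated key-change frequency are mutually consistent.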
In our work, the results were obtained on a long-distance regular trunk telecommunication fiber between cities of the Russian Federation. Basic characteristics of the tests of our QKD system for urban quantum networks \cite{bannik2017multinode,news1} and intercity trunk lines \cite{news2} are presented in Tab. \ref{T:param}. For comparison, Tab. \ref{T:param} also presents the basic characteristics of the well-known long-range QKD systems discussed in the introduction, which were also tested under real conditions \cite{notes1}.
\begin{table}[ph]
\begin{tabular}{|c|c|c|c|c|} \hline
\multicolumn{5}{|c|}{Kazan quantum lines} \\ \hline Length& SKR & Detector & Loss (dB); & Group \\ (km) & (bps) & & QBER (\%) & \\ \hline 12 & $2 \cdot10^4$ & $^*$SPAD & 7; 4 & Russia \\ \hline 143 & 12 & $^\dagger$SSPD & 37; 2 & Russia \\ \hline
\multicolumn{5}{|c|}{International commercial systems} \\ \hline 67 & 60 & \cite{stucki2002quantum} SPAD & 14; 6 & Switzerland \\ \hline 45 & $3 \cdot10^5$ & \cite{dixon2015high} SPAD & 14; 4 & Japan \\ \hline 66 & $5 \cdot10^5$ & \cite{mao2018integrating} SPAD & 21; 5 & China \\ \hline 97 & 800 & \cite{tanaka2008ultra} SSPD & 33; 3 & Japan \\ \hline 90 & $1 \cdot10^3$ & \cite{shimizu2013performance} SSPD & 30; 3 & Japan \\ \hline \end{tabular} \caption{Basic characteristics of the proposed QKD system: $^*$Kazan city quantum network (previous field tests, Kazan, May 2017 \cite{bannik2017multinode,news1}), $^\dagger$intercity long-distance QC (current experiment, Kazan-Apastovo, August 2019 \cite{news2}).} \label{T:param}
\end{table}
\begin{figure}
\caption{Distributions for \textbf{(a)} secret key rate ($\sim 12$ bps) and \textbf{(b)} QBER ($\sim 2 \%$) for two-day measurements.}
\label{QBER}
\end{figure} \noindent It is worth noting that tests of various QKD systems at shorter distances and over ultra-low-loss fibers are presented in \cite{bacco2019field,choi2014field} and \cite{walenta2015towards,boaron2018secure}.
\section{Innovation outlook} Based on the demonstrated QKD prototype, we propose the concept of a universal QC complex $"$Kazan-Q1$"$ (see outlook Tab. \ref{T:complex} and patents \cite{patent1,patent2,patent3,patent4,patent5}), which combines technologically simple solutions, the noise immunity of subcarrier wave coding and robust quantum-classical information protection based on extended statistical data mining against new types of quantum attacks \cite{bykovsky2018quantum}.
\begin{table}[ht]
\begin{tabular}{|c|c||c|c|} \hline Name & Kazan-Q1 & Detector & SSPD \\
& & & or SPAD \\ \hline Coding & Phase in & Active & Phase \\
& subcarrier wave & element & modulator \\ \hline Security & Quantum and & QRNG & On PD with \\
& classical & & robust-defense \\ \hline Noise & Network nodes & Decoys and & Amplitude \\ immunity & and lines & diagnostics & modulator and AI \\ \hline \end{tabular} \caption{Outlook for the complex $"$Kazan-Q1$"$. Here: QRNG: quantum random number generator; PD: PIN-photodiode; AI: artificial intelligence.} \label{T:complex}
\end{table} \noindent We see the following distinctive features and prospects for the development of the long-distance QC complex $"$Kazan-Q1$"$ in the context of world achievements in the area of innovative quantum technologies:
1) statistical monitoring with AI for keys and electronic components based on the robust nonparametric criteria \cite{nigmatullin2010new,perminov2018rcf,nigmatullin2019universal,smirnov2018sequences,perminov2017sra};
2) compact \cite{melnik2018using,zhang2019integrated,eriksson2019wavelength} planar implementation \cite{orieux2016recent} without using interferometers for the main components \cite{acin2018quantum}: beam splitter \cite{desiatov2019ultra,tao2008cascade,thomaschewski2019plasmonic}, planar phase modulator \cite{ren2019integrated}, high purity QRNG on PD \cite{campbell2016recent} with robust-defense \cite{perminov2018rcf,avesani2018source,ioannou2018much,grangier2018quantum,drahi2019certified};
3) implementation of advanced decoy states \cite{molotkov2019there,huang2018quantum} in a complex using intelligent monitoring methods \cite{perminov2018rcf,nigmatullin2019universal} for quantum diagnostics of network integrity and intrusion detection \cite{kravtsov2018relativistic,gaidash2019countermeasures,gaidash2019methods};
4) advanced post-processing \cite{jia2019quantum} for error correction and modeling in optical simulator for testing and specialization of software for specific urban conditions.
\textit{Acknowledgments.} Research is financially supported by a grant of the Government of the Russian Federation, project No. 14.Z50.31.0040, February 17, 2017.
We are very grateful to the companies of PJSC $"$Tattelecom$"$ and PJSC $"$Rostelecom$"$ for providing working fiber-optic communication lines and for officially documenting Russia's record \cite{news2} for the range of long-distance QC during this experimental demonstration.
\section{Information about authors} \noindent \textbf{Oleg Igorevich Bannik},\\ (b. 1988), in 2012 graduated from the faculty of electronics of Saint Petersburg Electrotechnical University in the direction of "Electronics and Microelectronics", researcher at the Kazan Quantum Center of the KNRTU-KAI.\\ \textit{Area of interest:} quantum communications, optoelectronics, photonics.\\ \textit{E-mail:} olegbannik@gmail.com\\
\\
\noindent \textbf{Lenar Rishatovich Gilyazov},\\ (b. 1985), in 2008 graduated from the Physics Department of Kazan Federal University, researcher at the Kazan Quantum Center of the KNRTU-KAI.\\ \textit{Area of interest:} quantum communications, optoelectronics, photonics.\\ \textit{E-mail:} lgilyazo@mail.ru\\
\\
\noindent \textbf{Artur Viktorovich Gleim},\\ (b. 1990), in 2012 graduated from ITMO University in the direction of "Photonics and Optoinformatics", candidate of technical sciences, head of lab. at the Kazan Quantum Center of the KNRTU-KAI.\\ \textit{Area of interest:} quantum communications, optoelectronics, photonics.\\ \textit{E-mail:} aglejm@yandex.ru\\
\\
\noindent \textbf{Nikolay Sergeevich Perminov},\\ (b. 1985), in 2008 graduated from the Department of Theory of Relativity and Gravity of the Physics Department of Kazan Federal University in the direction of “Physics”, researcher at the Kazan Quantum Center of the KNRTU-KAI.\\ \textit{Area of interest:} optimal control, quantum informatics, statistics, software, economics.\\ \textit{E-mail:} qm.kzn@ya.ru\\
\\
\noindent \textbf{Konstantin Sergeevich Melnik},\\ (b. 1993), in 2018 graduated from Institute of Radio Electronics and Telecommunications of Kazan National Research Technical University in the direction of "Radio Engineering", PhD student at the Kazan Quantum Center of the KNRTU-KAI.\\ \textit{Area of interest:} quantum communications, optoelectronics, photonics.\\ \textit{E-mail:} mkostyk93@mail.ru\\
\\
\noindent \textbf{Narkis Musavirovich Arslanov},\\ (b. 1980), in 2003 graduated from the Physics Department of Kazan Federal University, senior researcher at the Kazan Quantum Center of the KNRTU-KAI.\\ \textit{Area of interest:} quantum communications, photonics.\\ \textit{E-mail:} narkis@yandex.ru\\
\\
\noindent \textbf{Aleksandr Alekseevich Litvinov},\\ (b. 1985), in 2008 graduated from the Department of Theory of Relativity and Gravity of the Physics Department of Kazan Federal University in the direction of “Physics”, engineer at the Kazan Quantum Center of the KNRTU-KAI.\\ \textit{Area of interest:} quantum communications, optoelectronics, robust methods, quantum memory, software, economics.\\ \textit{E-mail:} litvinov85@gmail.com\\
\\
\noindent \textbf{Albert Ruslanovich Yafarov},\\ (b. 1977), in 2006 graduated from the Department of Geography of Kazan State University with a degree in Geography, engineer at the Kazan Quantum Center of the KNRTU-KAI.\\ \textit{Area of interest:} quantum communications, robust methods, quantum memory, project management, public relations, economics.\\ \textit{E-mail:} a.r.yafarov@gmail.com\\
\\
\noindent \textbf{Sergey Andreevich Moiseev},\\ (b. 1957), in 1979 graduated with honors from the physical faculty of Kazan State University in the direction of "Radiophysics". Doctor of Phys.-Math. Sciences, prof. Department of Radio-photonics and Microwave Technologies, KNRTU-KAI, Director of the Kazan Quantum Center KNRTU-KAI.\\ \textit{Area of interest:} optics and spectroscopy, quantum memory, quantum informatics, quantum communications, nonlinear and coherent optics.\\ \textit{E-mail:} s.a.moiseev@kazanqc.org\\
\end{document} |
\begin{document}
\hoffset -34pt
\title{{\large \bf A note on the asymptotic behavior of conformal metrics with negative curvatures near isolated singularities}
\thanks{\mbox{Keywords. Singularities, curvatures, metrics.}}} \author{\normalsize Tanran Zhang } \date{} \maketitle \baselineskip 21pt
\noindent
\begin{minipage}{138mm} \renewcommand{\baselinestretch}{1} \normalsize \begin{abstract} {The asymptotic behavior of conformal metrics with negative curvatures near an isolated singularity, up to second order derivatives, was described by Kraus and Roth in 2008. Our work improves one of their estimates and establishes estimates for higher order derivatives near an isolated singularity by means of potential theory. We also give some limits of Minda type for SK-metrics near the origin. Combining these limits with Ahlfors' lemma, we provide some observations on SK-metrics.} \end{abstract} \end{minipage}\\ \\ \\ \renewcommand{\baselinestretch}{1} \normalsize \section{Introduction}
\vspace*{4mm}
The study of conformal metrics has a long history, going back to the time of Liouville and Picard, see \cite{Liouville1, Picard,Picard2}. For a conformal metric $\lambda(z)|dz|$ on a subdomain $G$ of the complex plane $\mathbb{C}$, we can define its (generalized) Gaussian curvature $\kappa_{\lambda}(z)$. Let $u(z)= \log \lambda(z)$. If $\kappa_{\lambda}(z)=0$, then $u(z)$ satisfies the Laplace equation $\Delta u=0$, that is, $u(z)$ is harmonic on $G$, so the properties of $u(z)$ can be studied by means of potential theory, see, e.g. \cite{PDE}. If $\kappa_{\lambda}(z)=-4$, then $\Delta \log \lambda= 4\lambda^2$ and $u(z)$ is a solution to the Liouville equation \begin{eqnarray}\label{liouville equation} \Delta u=4e^{2u}. \end{eqnarray} Each solution to equation (\ref{liouville equation}) belongs to a class of subharmonic functions and corresponds to a special kind of metric, called an SK-metric, according to Heins, see \cite{Heins}. The existence and uniqueness of solutions to equation (\ref{liouville equation}) are subject to suitable boundary conditions. Throughout our study we are concerned only with the asymptotic behavior, near an isolated singularity, of solutions to equation (\ref{liouville equation}), so it is sufficient to consider the behavior in the punctured unit disk $\mathbb{D} \backslash\{0\}$, where the origin is an isolated singularity of some order $\alpha\leq 1$. Near the singularity we need more refined invariants to estimate the asymptotic behavior, such as the growth of the density; the order of the singularity is such an invariant.
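As an elementary numerical illustration (not part of the analysis below; the function names are ours), one can check the Liouville equation \eqref{liouville equation} for the hyperbolic density $\lambda(z)=1/(1-|z|^2)$ on the unit disk, whose curvature is $-4$, by approximating $\Delta u$ with central finite differences:

```python
import math

def u(x, y):
    # log-density of the hyperbolic metric lambda(z) = 1/(1 - |z|^2) on the unit disk
    return -math.log(1.0 - (x * x + y * y))

def laplacian(f, x, y, h=1e-4):
    # second-order central finite differences for the Laplacian
    return (f(x + h, y) + f(x - h, y) + f(x, y + h) + f(x, y - h)
            - 4.0 * f(x, y)) / (h * h)

x, y = 0.3, 0.2
lhs = laplacian(u, x, y)             # Delta u
rhs = 4.0 * math.exp(2.0 * u(x, y))  # 4 e^{2u} = 4 lambda(z)^2
print(lhs, rhs)
```

At interior points away from the boundary circle the two sides agree to roughly the accuracy of the difference scheme.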
\par As for equation (\ref{liouville equation}), Liouville proved that, in any disk $D$ contained in the punctured unit disk $\mathbb{D}\backslash\{0\}$ every solution $u$ to \eqref{liouville equation} can be written as $$u(z)=\log\frac{|f'(z)|}{1-|f(z)|^2},$$ where $f$ is a holomorphic function in $D$, see \cite{Roth1}. Based on Liouville's results, Nitsche described the behavior of $u(z)$ with constant curvature $\kappa(z)\equiv -4$ near the isolated singularities on plane domains in \cite{Nitsche}. Subsequently, Kraus and Roth extended Nitsche's results to the solutions of the more general equation \begin{eqnarray}\label{general equation} \Delta u=-\kappa(z) e^{2u}
\end{eqnarray} with strictly negative, H\"{o}lder continuous curvature functions $\kappa(z)$ in \cite{Rothbehaviour}. In fact, equation \eqref{general equation} has an elegant geometric interpretation: every solution $u$ to \eqref{general equation} induces a conformal metric $e^{u(z)}|dz|$ with Gaussian curvature function $\kappa(z)$, and vice versa (for more details, see \cite{Rothbehaviour}). Our first result consists of estimates for some terms of $u(z)$ near the singularity. We improve the estimate of the mixed derivatives when the order of $u$ is $\alpha=1$ and obtain estimates for higher order derivatives near the origin. It is shown in \cite{zhang3} that our result is sharp, by means of the generalized hyperbolic metric $\lambda_{\alpha, \beta, \gamma}$ on the thrice-punctured sphere $\mathbb{P}\backslash \{z_1, z_2, z_3\}$ with singularities of order $\alpha, \beta, \gamma \leq 1$ at $z_1, z_2, z_3$, which was given by Kraus, Roth and Sugawa for $\alpha+\beta+\gamma >2$, see \cite{Rothhyper}. \par As an extremal case of the SK-metric, the hyperbolic metric, also called the Poincar{\'e} metric, plays an important role on (punctured) disks. In 1997, Minda \cite{Mindametric} investigated the behavior of the density of the hyperbolic metric in a neighborhood of a puncture of a plane domain using the uniformization theorem. His method offers a way to describe the asymptotic behavior on an arbitrary hyperbolic region. Our second result extends Minda's work by giving some limits of Minda type. \par This paper is divided into four sections. In Section 2 the notation and definitions are introduced, together with the facts from potential theory that we need. The main estimates and their proofs are given in Section 3, and the Minda-type theorems in Section 4.
\section {Preliminaries } \subsection{Singularities and orders} \setcounter{equation}{0} \vspace*{3mm}
If $G \subseteq \mathbb{C}$ is a domain, then every positive, upper semi-continuous, real-valued function $\lambda: G \rightarrow (0, + \infty)$ on $G$ induces a conformal metric $\lambda(z)|dz|$, see \cite{Heins,Roth1}. In our discussion we take the linear notation for a conformal metric $ds=\lambda(z)|dz|$. Let $\mathbb{P}$ denote the Riemann sphere $\mathbb{C}\cup \{\infty\}$ and let $\Omega\subseteq\mathbb{P}$ be a subdomain. For a point $p \in \Omega$, let $z$ be local coordinates such that $z(p)=0$. We say a conformal metric $\lambda(z)|dz|$ on the punctured domain $\Omega^*:=\Omega \backslash \{p\}$ has a singularity of order $\alpha\leq 1$ at the point $p$, if, in local coordinates $z$, \begin{eqnarray} \label{singularity} \log\lambda(z)=\left\{ \begin{array}{ll}
-\alpha\log|z|+v(z) & \mbox{if\ }\ \alpha < 1 \\
-\log|z|-\log\log(1/|z|)+w(z)&\mbox{if\ }\ \alpha=1,\end{array} \right.
\end{eqnarray} where $v(z), w(z) = \textit {O}(1)$ as $z\rightarrow 0$, with $\textit {O}$ and $\textit {o}$ being the Landau symbols throughout our study. Let $M_u(r):=\sup_{|z|=r}u(z)$ for a real-valued function $u(z)$ defined in a punctured neighborhood of $z=0$ and call \begin{eqnarray}\label{order of u} \alpha(u):=\lim_{r\rightarrow 0^+}\frac{M_u(r)}{\log(1/r)} \end{eqnarray} the order of $u(z)$ if this limit exists. For $u(z):=\log \lambda(z)$, the number $\alpha$ in (\ref{singularity}) is equal to $\alpha(u)$ in \eqref{order of u}. In fact, if $\alpha(u)\leq 1$ in \eqref{order of u}, then $v(z)$ is continuous at $z=0$ and $w(z)=\textit {O}(1)$ as $z \rightarrow 0$, see Theorem 3.1 in \cite{Rothbehaviour}. We call the point $p$ a conical singularity or corner of order $\alpha$ if $\alpha< 1$ and a cusp if $\alpha=1$. The generalized Gaussian curvature $\kappa_{\lambda}(z)$ of the density function $\lambda(z)$ is defined by $$\kappa_{\lambda}(z)=-\frac{1}{\lambda(z)^2}{\liminf_{r\rightarrow 0}\frac{4}{r^2}\left(\frac 1 {2\pi}\int_0^{2\pi}\log\lambda(z+re^{it})dt-\log\lambda(z)\right)}.$$
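The limit \eqref{order of u} can also be observed numerically. The following Python sketch is an illustration of ours only: the choice $u(z)=-\tfrac12\log|z|+\sin x$ is an arbitrary example of order $\alpha=1/2$ with a bounded remainder.

```python
import math

def u(x, y):
    # u(z) = -alpha log|z| + v(z) with alpha = 1/2 and bounded remainder v(z) = sin x
    return -0.5 * math.log(math.hypot(x, y)) + math.sin(x)

def M(r, n=720):
    # sup of u over the circle |z| = r, approximated on n sample points
    return max(u(r * math.cos(2 * math.pi * k / n),
                 r * math.sin(2 * math.pi * k / n)) for k in range(n))

# the quotient M_u(r)/log(1/r) should tend to the order alpha = 1/2
orders = [M(r) / math.log(1.0 / r) for r in (1e-3, 1e-5, 1e-7)]
print(orders)
```

The quotients decrease toward $1/2$ as $r\rightarrow 0^+$, since the bounded remainder is divided by the growing factor $\log(1/r)$.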
We say a conformal metric $\lambda(z)|dz|$ on a domain $\Omega\subseteq\mathbb{C}$ is regular, if its density $\lambda(z)$ is positive and twice continuously differentiable, i.e. $\lambda(z)>0$ and $\lambda(z) \in C^2(\Omega)$. If $\lambda(z)|dz|$ is a regular conformal metric, then $$\kappa_{\lambda}(z)=-\frac{\Delta\log\lambda(z)}{\lambda(z)^2},$$
where $\Delta$ denotes the Laplace operator. It is well known that, if $a<\kappa_{\lambda}(z)<b<0$ with constants $a, b \in \mathbb{R}$, the metric $\lambda(z)|dz|$ only has corners or cusps at isolated singularities (see \cite{McOwen1}).
\par The Gaussian curvature is a conformal invariant. Let $\lambda(z)|dz|$ be a conformal metric on a domain $G \subseteq \mathbb{C}$ and $f: \Omega \rightarrow G$ be a holomorphic mapping of a Riemann surface $\Omega$ into $G$. Then we can define the pullback $f^{*}\lambda(w)|dw|$ of $\lambda(z)|dz|$ by
\begin{eqnarray} f^{*}\lambda(w)|dw|:=\lambda(f(w))|f'(w)||dw|. \nonumber
\end{eqnarray} It is easy to see that $f^{*}\lambda(w)|dw|$ is a conformal metric on $\Omega \backslash\{\mbox{critical points of}\ f\}$ with Gaussian curvature \begin{eqnarray} \kappa_{f^{*}{\lambda}}(w)=\kappa_{\lambda}(f(w)). \nonumber \end{eqnarray} Using this conformal invariance, we can easily build relations between Riemann surfaces with conformal metrics. Here we can see that, on the punctured domain $\Omega \backslash\{\mbox{critical points}$ $\mbox{of}\ f\}$, the critical points of $f$ are the source of the singularities.
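The invariance $\kappa_{f^{*}\lambda}(w)=\kappa_{\lambda}(f(w))$ can likewise be checked numerically. The sketch below is our own illustration (names and sample point are ours): it pulls back the hyperbolic metric of the unit disk under $f(w)=w^2$, whose only critical point is $w=0$, and compares curvatures away from that point.

```python
import math

def lam(x, y):
    # hyperbolic density 1/(1 - |z|^2) on the unit disk, curvature -4
    return 1.0 / (1.0 - (x * x + y * y))

def pullback(x, y):
    # (f^* lam)(w) = lam(f(w)) |f'(w)| for f(w) = w^2
    w = complex(x, y)
    fw = w * w
    return lam(fw.real, fw.imag) * abs(2.0 * w)

def curvature(density, x, y, h=1e-4):
    # kappa = -(Delta log density) / density^2 via central differences
    g = lambda a, b: math.log(density(a, b))
    lap = (g(x + h, y) + g(x - h, y) + g(x, y + h) + g(x, y - h)
           - 4.0 * g(x, y)) / (h * h)
    return -lap / density(x, y) ** 2

x, y = 0.4, 0.1                       # away from the critical point w = 0
fw = complex(x, y) ** 2
k_pull = curvature(pullback, x, y)    # curvature of the pullback metric at w
k_base = curvature(lam, fw.real, fw.imag)  # curvature of lam at f(w)
print(k_pull, k_base)
```

Both values come out close to $-4$, as the invariance predicts.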
\par The hyperbolic metric is a complete metric of constant Gaussian curvature; here we take the constant to be $-4$. We call an upper semi-continuous metric $\lambda(z)|dz|$ on a Riemann surface $\Omega$ an SK-metric if its Gaussian curvature is bounded above by $-4$ at every $z\in \Omega$. The hyperbolic metric on the unit disk $\mathbb{D}$ is defined by \begin{eqnarray} \label{hyperbolic metric}
\lambda_{\mathbb{D}}(z)|dz|=\frac {|dz|}{1-{|z|}^2}.
\end{eqnarray} The following result is a fundamental theorem about SK-metrics by Ahlfors, see \cite{Ahlforslemma}, also \cite{Heins}, which claims that the hyperbolic metric $\lambda_{\mathbb{D}}(z)|dz|$ on the unit disk $\mathbb{D}$ is the unique maximal SK-metric on $\mathbb{D}$.
\begin{theorema} $\mathrm{[1]}.$ \label{Ahlfors lemma} \textsl{Let $ds$ be the hyperbolic metric on $\mathbb{D}$ given in (\ref{hyperbolic metric}) and let $d\ell$ be the metric on $\mathbb{D}$ induced by an SK-metric on a Riemann surface $\Omega$ via an analytic function $f: \mathbb{D}\rightarrow\Omega$. Then the inequality \begin{eqnarray} d\ell \leq ds \nonumber \end{eqnarray} holds throughout $\mathbb{D}$. } \end{theorema}
On the punctured unit disk $\mathbb{D}^*:=\mathbb{D} \backslash \{0\}$, the hyperbolic metric is expressed by
$$\lambda_{\mathbb{D}^*}(z)|dz|=\frac{|dz|}{2|z|\log(1/|z|)}$$
with the constant curvature $-4$. We denote $\mathbb{D}_R:=\{z \in {\mathbb{C}}:|z|<R\}$ and ${\mathbb{D}_R}^*:=\mathbb{D}_R\backslash \{0\}$ for $R>0$. On the punctured disk $\mathbb{D}_R^*$, the (generalized) hyperbolic metric with a conical singularity at the origin is given in \cite{Rothhyper}. For its detailed proof, see \cite{zhang2}. \begin{theorema} $\mathrm{[7,14]}.$ \label{maximal} \textsl{ For $R>0$, let \begin{eqnarray*} \lambda_{\alpha,R}(z):=\left\{ \begin{array}{ll}
\displaystyle \frac{(1-\alpha)R^{1-\alpha}|z|^{-\alpha}}{R^{2(1-\alpha)}-|z|^{2(1-\alpha)}} =\frac{1-\alpha}{2|z|\sinh \left((1-\alpha)\log ({R}/{|z|})\right)} & \mbox{if\ }\ \alpha<1, \\
\displaystyle \frac{1}{2|z|\log ({R}/{|z|})} &\mbox{if\ }\ \alpha=1 \end{array} \right. \end{eqnarray*} for $z \in \mathbb{D}_R^*$. Then given an arbitrary SK-metric $\sigma(z)$ on $\mathbb{D}_R^*$ with a singularity at $z=0$ of order $\alpha$, we have $\sigma(z)\leq\lambda_{\alpha,R}(z)$.} \end{theorema}
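For reference, the two closed forms of $\lambda_{\alpha,R}$ for $\alpha<1$ in Theorem \ref{maximal} agree, and as $\alpha\rightarrow 1^-$ they tend to the cusp expression. A short numerical check (an illustration only; names and sample values are ours):

```python
import math

def lam_corner(alpha, R, r):
    # first closed form of lambda_{alpha,R}(z) at |z| = r, for alpha < 1
    return ((1 - alpha) * R ** (1 - alpha) * r ** (-alpha)
            / (R ** (2 * (1 - alpha)) - r ** (2 * (1 - alpha))))

def lam_sinh(alpha, R, r):
    # the equivalent sinh form
    return (1 - alpha) / (2 * r * math.sinh((1 - alpha) * math.log(R / r)))

def lam_cusp(R, r):
    # the alpha = 1 (cusp) expression
    return 1.0 / (2 * r * math.log(R / r))

R, r = 0.9, 0.2
print(lam_corner(0.6, R, r), lam_sinh(0.6, R, r))   # the two forms coincide
print(lam_corner(1 - 1e-8, R, r), lam_cusp(R, r))   # alpha -> 1 recovers the cusp form
```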
\subsection{Regularity and Logarithmic potential} If equation \eqref{general equation} has a $C^2$-solution $u(z)$, then the higher regularity properties of $u(z)$ depend only on the smoothness of $\kappa(z)$, according to Gilbarg and Trudinger [2, p.\,109]. Here we need the H\"{o}lder spaces $C^{n, \,\nu}(\mathbb{D}_R)$, the subspaces of $C^n(\mathbb{D}_R)$ consisting of functions whose $n$-th order partial derivatives are locally H\"{o}lder continuous with exponent $\nu$ in $\mathbb{D}_R$, $0<\nu \leq 1$. The following result is obtained immediately from the standard regularity theorem, see, e.g. [2,\,Theorem 6.17]. \begin{lemma} $\mathrm{(Regularity\ theorem)}$ \label{regularity coro} \textsl{ Let $u$ be a $C^2$-solution to the equation $\Delta u=-\kappa(z) e^{2u}$ in $\mathbb{D}^*$, where $\kappa \in C^{n,\,\nu}(\mathbb{D}^*)$. Then $u \in C^{n+2,\,\nu}(\mathbb{D}^*)$. If $\kappa$ lies in $C^{\infty}(\mathbb{D}^*)$, then $u \in C^{\infty}(\mathbb{D}^*)$.} \end{lemma}
\vspace*{2mm} We shall use potential theory as employed by Kraus and Roth in \cite{Rothbehaviour}. Here we list some elementary facts without proof. \vspace*{2mm}
For a bounded, integrable function $f(z)$ defined on a domain $\Omega \subseteq \mathbb{C}$, the integral $$\frac {1}{2\pi}\int _{\Omega}L(z-\zeta) f(\zeta) d\sigma_{\zeta}$$
is called the logarithmic potential of $f$, where $L(z-\zeta)=\log|z-\zeta|$ and $d\sigma_{\zeta}$ is the area element on domain $\Omega$. Write $z=x_1+ix_2$, $\zeta=y_1+iy_2$ and set $0<r\leq 1$. The following lemma was mentioned in \cite{Rothbehaviour}. It is a consequence of the famous Riesz decomposition theorem, and can be obtained from Theorem 4.5.1 and Exercise 3.7.3 in \cite{Ransford}.
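For the constant density $f\equiv 1$ on $\mathbb{D}_r$ the logarithmic potential is explicit: $\omega(z)=(|z|^2-r^2)/4+(r^2/2)\log r$ for $|z|\leq r$, so that $\Delta\omega=f$. The following sketch (our illustration; midpoint-rule quadrature in polar coordinates, with grid sizes and sample point of our choosing) compares the quadrature with this closed form:

```python
import math

def potential_quad(x, y, r, nr=400, nt=400):
    # midpoint rule for (1/(2 pi)) \int_{D_r} log|z - zeta| dA(zeta), polar coordinates
    total = 0.0
    for i in range(nr):
        rho = (i + 0.5) * r / nr
        for j in range(nt):
            th = (j + 0.5) * 2.0 * math.pi / nt
            dx = x - rho * math.cos(th)
            dy = y - rho * math.sin(th)
            total += math.log(math.hypot(dx, dy)) * rho
    return total * (r / nr) * (2.0 * math.pi / nt) / (2.0 * math.pi)

def potential_exact(x, y, r):
    # closed form for f = 1: omega(z) = (|z|^2 - r^2)/4 + (r^2/2) log r for |z| <= r
    return ((x * x + y * y) - r * r) / 4.0 + (r * r / 2.0) * math.log(r)

x, y, r = 0.2, 0.1, 0.8
num = potential_quad(x, y, r)
exact = potential_exact(x, y, r)
print(num, exact)
```

The logarithmic singularity of the integrand is integrable, so the midpoint rule converges despite the kernel being unbounded near $\zeta=z$.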
\begin{lemmaa} $\mathrm{[6]}.$ \label{poisson jensen} \textsl{Let $u$ be a subharmonic function on $\mathbb{D}_r$ such that $u\in C^2({\mathbb{D}_r}^*)$, $\Delta u$ is integrable in $\mathbb{D}_r$ and
$$\lim_{\rho\rightarrow 0}\frac{\sup_{|z|=\rho}u(z)}{\log(1/\rho)}=0.$$ Then $u(z)=h(z)+\omega(z)$ for $z\in \mathbb{D}_r$, where $h$ is a harmonic function on $\mathbb{D}_r$ and $\omega(z)$ is the logarithmic potential of $\Delta u$.} \end{lemmaa}
\begin{lemmaa} $\mathrm{[2,\,p.\:\!54]},$ \label{newton potential} \textsl{ Let $f: \mathbb{D}_r\rightarrow\mathbb{R}$ be a locally bounded, integrable function in $\mathbb{D}_r$ and $\omega$ be the logarithmic potential of $f$. Then $\omega \in C^1({\mathbb{D}_r})$ and for any $z=x_1+ix_2 \in \mathbb{D}_r$, $$\frac{\partial\:\!{\omega}}{\partial x_j}(z)=\frac 1 {2\pi}\int_{\mathbb{D}_r}\frac {\partial L}{\partial x_j}(z-\zeta) f(\zeta) d\sigma_{\zeta}$$ for $j\ \in \{1,2\}$.\\ If, in addition, $f$ is locally H\"{o}lder continuous with exponent $\nu \leq 1$, then $\omega\in C^2(\mathbb{D}_r)$ and for $z \in \mathbb{D}_r$, \begin{eqnarray} &&\frac{\partial^2\:\!\omega}{\partial x_l\partial x_j}(z)=\frac 1 {2\pi}\int_{\mathbb{D}_R}\frac{\partial^2 L}{\partial x_l\partial x_j}(z-\zeta)\left(f(\zeta)-f(z)\right) d\sigma_{\zeta} \nonumber\\
&& \qquad \qquad \quad \: \ \,-\frac{1}{2\pi}f(z)\int_{\partial \mathbb{D}_R}\frac{\partial L}{\partial x_j}(z-\zeta)N_l(\zeta)|d\zeta|, \nonumber \end{eqnarray} where $N(\zeta)=(N_1(\zeta),N_2(\zeta))$ is the unit outward normal at the point $\zeta \in\partial \mathbb{D}_R$ with $R>r$ and $f$ is extended to vanish outside of $\mathbb{D}_r$.} \end{lemmaa}
\par There is a similar proposition for higher order derivatives of the logarithmic potential. Define a multi-index $\boldsymbol{j}=(j_1, j_2)$, $|\boldsymbol{j}|=j_1+j_2$, $j_1, j_2=0,1,2,\ldots \,$, so $(\zeta-z)^{\boldsymbol{j}}=(y_1-x_1)^{j_1}(y_2-x_2)^{j_2}$, $\boldsymbol{j} !=j_1!j_2!$. For $z=x_1+ix_2$, denote $$\frac{\partial}{\partial x_1}=\partial_1,\ \frac{\partial}{\partial x_2}=\partial_2,\ \displaystyle \partial^{\boldsymbol{j}}=\partial^{j_1}_1 \partial^{j_2}_2.$$ For a given multi-index $\boldsymbol{j}=(j_1, j_2)$, we can choose $\boldsymbol{e}_{\tau}=(0,1)$ or $(1,0)$ for $\tau=1,2,\ldots \,$ such that $\boldsymbol{j}=\boldsymbol{e}_1+\boldsymbol{e}_2+\cdots+\boldsymbol{e}_n$ with $n=|\boldsymbol{j}|$. Write $\zeta=y_1+iy_2$, set \begin{eqnarray*} P_n[f](z,\zeta):=\left\{ \begin{array}{ll} \vspace*{2mm}
\displaystyle \sum_{|\boldsymbol{a}|\leq n} \frac{(\zeta-z)^{\boldsymbol{a}}{\partial}^{\boldsymbol{a}}}{\boldsymbol{a} !}f(z) & \mbox{if\ }\ n \geq 1 \\ f(z) & \mbox{if\ }\ n=0, \end{array} \right. \end{eqnarray*} where $\boldsymbol{a}$ is a multi-index. For $m=1,2$, it holds that $$\frac{\partial P_n[f]}{\partial y_m}(z,\zeta)=P_{n-1}[\partial_m f](z,\zeta),$$ see \cite{zhang2} for more details. Using this notation, we can present the analogue of Lemma \ref{newton potential} as follows. \begin{lemmaa} $\mathrm{[14]}.$ \label{new higher prop} \textsl{ Let $0<r<1$, $f: \mathbb{D}_r\rightarrow\mathbb{R}$ and $f \in C^{n-2,\,\nu}(\mathbb{D}_r)$ with $0<\nu \leq 1$, $n\geq 3$, $\omega$ be the logarithmic potential of $f$. Then
$\omega(z)\in C^{n}(\mathbb{D}_r)$ and for $n=|\boldsymbol{j}|$, \begin{eqnarray} \label{new higher} && \partial^{\boldsymbol{j}}\omega(z) \nonumber \\ &=& \frac 1 {2\pi}\int_{\mathbb{D}_R}\partial^{\boldsymbol{j}} L(z-\zeta) \cdot \left(f(\zeta)-P_{n-2}[f](z,\zeta)\right)d\sigma_{\zeta} \nonumber \\
&& -\frac{1}{2\pi} \sum^{n-1}_{\tau=1} \int_{\partial \mathbb{D}_R}\partial^{\boldsymbol{\theta}_{\tau}} L(z-\zeta) \cdot P_{\tau-1}[\partial^{\boldsymbol{\phi}_{\tau}}f](z,\zeta)\cdot \langle N(\zeta),\boldsymbol{e}_{\tau+1} \rangle |d\zeta|, \end{eqnarray} where $\boldsymbol{\theta}_{\tau}:=\boldsymbol{e}_1+ \cdots +\boldsymbol{e}_{\tau}$, $\boldsymbol{\phi}_{\tau}:=\boldsymbol{e}_{\tau+2}+ \cdots +\boldsymbol{e}_n$ for $\tau=1, \ldots, \,n-1$ and $\boldsymbol{\phi}_{n-1}:=(0,0)$, $N(\zeta)=(N_1(\zeta),N_2(\zeta))$ is the unit outward normal at the point $\zeta \in \partial \mathbb{D}_R$ with $R>r$, $\langle \ , \ \rangle$ is the inner product and the function $f$ is extended to vanish outside of $\mathbb{D}_r$.} \end{lemmaa}
\section{Main estimates} \setcounter{equation}{0} We denote $$\partial^n=\frac{\partial ^n}{\partial z^n},\ \bar{\partial}^n=\frac {\partial^n}{\partial\bar{z}^n}$$ for $n\geq1$. The following theorem is given by Kraus and Roth in \cite{Rothbehaviour}. \begin{theorema} $\mathrm{[6]}.$ \label{original} \textsl{ Let $\kappa:\mathbb{D}\rightarrow\mathbb{R}$ be a locally H\"{o}lder continuous function with $\kappa(0)<0$. If $u:\mathbb{D}^*\rightarrow \mathbb{R}$ is a $C^2$-solution to $\Delta u=-\kappa(z) e^{2u}$ in $\mathbb{D}^*$, then $u$ has an order $\alpha \in (-\infty,1]$ and \begin{align}
&u(z)=-\alpha\log|z|+v(z), & \textrm{if\ } \ \alpha<1,\nonumber \\
&u(z)=-\log|z|-\log\log(1/|z|)+w(z), & \textrm{if\ } \ \alpha=1,\nonumber \end{align} where the remainder functions $v(z)$ and $w(z)$ are continuous in $\mathbb{D}$. Moreover, the first partial derivatives with respect to $z$ and $\bar{z}$, \begin{align} & \partial v(z),\,\bar{\partial} v(z)\ \mbox{are continuous at}\ z=0& \mbox{if\ }&\ \alpha<1/2;\nonumber \end{align} and \begin{align} & \partial v(z),\,\bar{\partial} v(z)=\textit {O}(1) &\mbox{if\ }& \ \alpha=1/2;\nonumber\\
& \partial v(z),\,\bar{\partial} v(z)=\textit {O}(|z|^{1-2\alpha}) &\mbox{if\ }&\ 1/2<\alpha<1,\nonumber\\
& \partial w(z),\,\bar{\partial} w(z)=\textit {O}(|z|^{-1}(\log(1/|z|))^{-2}) &\mbox{if\ }& \ \alpha=1,\nonumber \end{align} when $z$ approaches $0$. In addition, the second partial derivatives, \begin{align} &\partial^2 v(z),\,\partial \bar{\partial}v(z) \ \textrm{and}\ \bar{\partial}^2 v(z) \ \textrm{are\ continuous \ at}\ z=0 & \textrm{if\ }& \ \alpha \leq 0;\nonumber\end{align} and \begin{align}
& \partial^2 v(z),\,\partial \bar{\partial}v(z),\,\bar{\partial}^2 v(z)= \textit {O}(|z|^{-2\alpha}) &\textrm{if\ }&\ 0<\alpha<1,\nonumber\\
& \partial^2 w(z),\,\partial \bar{\partial} w(z),\,\bar{\partial}^2 w(z)=\textit {O}(|z|^{-2}(\log(1/|z|))^{-2}) & \textrm{if\ }& \ \alpha=1, \end{align} when $z$ tends to $z=0$.} \end{theorema} \par In the work of Kraus and Roth, the proof of Theorem \ref{original} was based on Lemma \ref{newton potential}. Since we have obtained a similar statement in Lemma \ref{new higher prop}, the estimate for higher order derivatives of the remainder functions $v(z)$, $w(z)$ can be given. We consider $v(z)$ and $w(z)$ separately. \begin{theorem} \label{estimate v} \textsl{ Let $\kappa(z)$, $u(z)$, $v(z)$ and $\alpha$ be the same as in Theorem \ref{original}. If $0<\alpha<1$ and if, in addition, $\kappa(z) \in C^{n-2,\,\nu}(\mathbb{D}^*)$ for an integer $n \geq 3$, $0<\nu \leq 1$, then for $n_1$, $n_2 \geq 1$, $n_1+n_2 =n$, near the origin, the remainder function $v(z)$ satisfies
$$\partial^n v(z),\,\bar{\partial}^n v(z),\,\bar{\partial}^{n_1}\partial^{n_2}v(z)=\textit{O}(|z|^{2-2\alpha-n}).$$} \end{theorem} \textbf{Proof.} Lemma \ref{regularity coro} shows that $u(z) \in C^{n,\,\nu}(\mathbb{D}^*)$. Due to Kraus and Roth \cite{Rothbehaviour} we have $$v(z)=h(z)+\frac 1 {2\pi}\int_{\mathbb{D}_r}L(z-\zeta)f(\zeta)
d\sigma_{\zeta}$$ for $z\in \mathbb{D}_r^*$, $0<r<1$ and a harmonic function $h$ on $\mathbb{D}_r$, where $q(z)=-\kappa(z)e^{2v(z)}$, $f(z)=q(z)|z|^{-2\alpha}$. Now fix $0<R<1$, choose $z \in \mathbb{D}_{R/2}^*$ and let $r=|z|/2$. Then for a multi-index $\boldsymbol{j}$, $|\boldsymbol{j}|=n \geq 3$, rearranging (\ref{new higher}) leads to \begin{eqnarray} \label{for v} &&\partial^{\boldsymbol{j}}v(z) \nonumber \\ &=&\partial^{\boldsymbol{j}}h(z)+\frac{1}{2\pi}\int_{\mathbb{D}_{R}\backslash \mathbb{D}_{r}}\partial^{\boldsymbol{j}}L(z-\zeta)f(\zeta)d\sigma_{\zeta}+ \frac 1 {2\pi}\int_{\mathbb{D}_r}\partial^{\boldsymbol{j}}L(z-\zeta) \left( f(\zeta)-f(z)\right)d\sigma_{\zeta} \nonumber\\
&&+\frac 1 {2\pi}\int_{\mathbb{D}_r}\partial^{\boldsymbol{j}}L(z-\zeta)\sum_{1\leq |\boldsymbol{a}|\leq n} \frac{(\zeta-z)^{\boldsymbol{a}}\partial^{\boldsymbol{a}}f(z)}{\boldsymbol{a}!}d\sigma_{\zeta} \nonumber\\
&&-\frac{1}{2\pi} \sum^{n-1}_{\tau=1} \int_{\partial \mathbb{D}_r}\partial^{\boldsymbol{\theta}_{\tau}} L(z-\zeta) \cdot P_{\tau-1}[\partial^{\boldsymbol{\phi}_{\tau}}f](z,\zeta)\cdot \langle N(\zeta),\boldsymbol{e}_{\tau+1} \rangle |d\zeta| \end{eqnarray} for $z=x_1+ix_2$ and a harmonic function $h$ on $\mathbb{D}_R$, and the same symbols $\boldsymbol{\theta}_{\tau}$, $\boldsymbol{\phi}_{\tau}$ are used here as in \eqref{new higher}.
It is known that \begin{eqnarray} \label{logn}
\left| \partial^{\boldsymbol{j}}L(z-\zeta) \right|
\leq \frac{n!}{|z-\zeta|^{n}},
\end{eqnarray} see [2, p.\,17]. Denote $M=\sup_{\zeta \in \mathbb{D}_R}|q(\zeta)|$ and let $C_n>0$, $n \in \mathbb{N}$, be some constants. Then
$$\left| \int_{\mathbb{D}_{R}\backslash \mathbb{D}_{r}}\partial^{\boldsymbol{j}}L(z-\zeta)f(\zeta)d\sigma_{\zeta} \right| \leq M \int_{\mathbb{D}_{R}\backslash \mathbb{D}_{r}}
\frac{n!}{|z-\zeta|^{n}} \frac{1}{|\zeta|^{2\alpha}} d\sigma_{\zeta} \leq
\frac{C_1}{|z|^{2\alpha+n-2}},$$ and \begin{eqnarray}
&&\left| \int_{\mathbb{D}_r}\partial^{\boldsymbol{j}}L(z-\zeta)
\left( f(\zeta)-f(z)\right)d\sigma_{\zeta} \right| \nonumber \\
&\leq& \int_{\mathbb{D}_{r}} \frac{n!}{|z-\zeta|^{n}} \frac{|q(\zeta)-q(z)|}{|\zeta|^{2\alpha}} d\sigma_{\zeta}
+ M \int_{\mathbb{D}_{r}}\frac{n!}{|z-\zeta|^{n}}
\frac{(|\zeta|^{\alpha}+|z|^{\alpha})||\zeta|^{\alpha}-|z|^{\alpha}|}{|z|^{2\alpha}|\zeta|^{2\alpha}} d\sigma_{\zeta} \nonumber \\
&\leq& \frac{C_2}{|z|^{2\alpha+n-2}}, \nonumber \end{eqnarray} see \cite{Rothbehaviour}. When one of $j_1$ and $j_2$ is zero, there is no cancellation, and thus we obtain
$\partial^n v, \bar{\partial}^n v=\textit{O}(|z|^{2-2\alpha-n}).$ If neither $j_1$ nor $j_2$ is zero, the first three integrals in (\ref{for v}) cancel, so we have to consider the last term in (\ref{for v}). Letting $\tau=1$ in the last sum in (\ref{for v}), we get the integral
$$\partial^{\boldsymbol{\phi}_1}f(z) \cdot \int_{\partial \mathbb{D}_r}\partial^{\boldsymbol{e}_1} L(z-\zeta) \cdot \langle N(\zeta),\boldsymbol{e}_{2} \rangle |d\zeta|. $$ Writing $\zeta=re^{i\theta}$ and taking $\boldsymbol{e}_1=(1,0)$, $\boldsymbol{e}_2=(0,1)$ without loss of generality, we have \begin{eqnarray} \label{sincos}
&&\left|\int_{\partial \mathbb{D}_r}\partial_1L(z-\zeta)N_2(\zeta)|d\zeta|\right|=\left|\int_{\partial \mathbb{D}_r}\frac{x_1-r\cos\theta}{|z-\zeta|^2}\sin\theta|d\zeta|\right|\nonumber\\
&=&\left|\int^{2\pi}_0 \frac{x_1-r\cos\theta}{|z-\zeta|^2} r \sin\theta d\theta\right| \leq 2\pi \frac{|x_1-r\cos\theta|}{|z-\zeta|^2} r |\sin\theta| \leq 6\pi. \end{eqnarray} So it is evident that,
$$\left|\int_{\partial \mathbb{D}_r}\partial^{\boldsymbol{e}_1} L(z-\zeta) \cdot \langle N(\zeta),\boldsymbol{e}_{2} \rangle |d\zeta| \right| \leq 6\pi$$
holds for all kinds of $\boldsymbol{e}_1$ and $\boldsymbol{e}_2$. Now consider $\partial^{\boldsymbol{\phi}_1}f(z)$. Since $f(z)=q(z)|z|^{-2 \alpha}$, then the term $q(z)\cdot \partial^{\boldsymbol{\phi}_1}(|z|^{-2 \alpha})$ appears with some coefficient. Note that
$$\left|\partial^{\boldsymbol{\phi}_1} \left(\frac{1}{|z|^{2\alpha}}\right)\right| \leq \frac{C_3}{|z|^{2\alpha+n-2}},$$ so
$$\left| q(z) \partial^{\boldsymbol{\phi}_1} \left(\frac{1}{|z|^{2\alpha}}\right) \cdot \int_{\partial \mathbb{D}_r}\partial^{\boldsymbol{e}_1} L(z-\zeta) \cdot \langle N(\zeta),\boldsymbol{e}_{2} \rangle |d\zeta| \right| \leq \frac{6\pi M C_3}{|z|^{2\alpha+n-2}}.$$ Therefore $\bar{\partial}^{n_1}\partial^{n_2}v=\textit{O}(|z|^{2-2\alpha-n}). $ \hspace*{\fill} $\Box$\\ \par The following result is for the higher order derivatives of the remainder functions $w(z)$ when the order is $1$. \begin{theorem} \label{for w theorem} \textsl{ Let $\kappa(z)$, $u(z)$, $w(z)$ and $\alpha$ be the same as in Theorem \ref{original}. If $\alpha=1$ and if, in addition, $\kappa(z) \in C^{n-2,\,\nu}(\mathbb{D}^*)$ for an integer $n \geq 3$, $0<\nu \leq 1$, then for $n_1$, $n_2 \geq 1$, $n_1+n_2 =n$, near the origin, the remainder functions $w(z)$ satisfies \begin{eqnarray}
\bar{\partial}^nw(z),\,\partial^nw(z)=\textit {O}(|z|^{-n}(\log(1/|z|))^{-2}), \nonumber \\
\bar{\partial}^{n_1}\partial^{n_2}w(z)=\textit {O}(|z|^{-n}(\log(1/|z|))^{-3}). \label{new mixed} \end{eqnarray}} \end{theorem} \par The proof is based on the following lemma. \begin{lemmaa}\label{estimate of kzew-1} $\mathrm{[6]}.$ \ \textsl{Let $\kappa:\mathbb{D}\rightarrow\mathbb{R}$ be a continuous function with $\kappa(0)<0$ and \begin{eqnarray}
\kappa(z)=\kappa(0)+\frac{s(z)}{(\log(1/|z|))^2},\nonumber\end{eqnarray} where $s(z)=\textit {O}(1)$ as $z\rightarrow 0$. If $u: \mathbb{D}^* \rightarrow\mathbb{R}$ is a solution to $\Delta u=-\kappa(z) e^{2u}$ with $u(z)=-\log|z|-\log\log(1/|z|)+w(z)$ where $w(z)=\textit {O}(1)$ for $z\rightarrow0$, then there exists a disk $\mathbb{D}_{\rho}$ such that \begin{eqnarray} \label{lemma}
\left|-\kappa(z) e^{2w(z)}-1\right|\leq\frac C {\log(1/|z|)},\ \ \ z\in \mathbb{D}_{\rho}, \end{eqnarray} for some constant $C>0$.} \end{lemmaa} \textbf{Proof of Theorem \ref{for w theorem}.} Lemma \ref{regularity coro} shows that $u(z) \in C^{n,\,\nu}(\mathbb{D}^*)$. For $w(z)$ defined as in Theorem \ref{original}, we first show that \begin{eqnarray}\label{w(z)}
w(z)=h(z)+\frac 1 {2\pi}\int_{\mathbb{D}_r}L(z-\zeta)\frac{-\kappa(\zeta)e^{2w(\zeta)}-1}{|\zeta|^2(\log(1/|\zeta|))^2} d\sigma_{\zeta}\end{eqnarray} for $z\in \mathbb{D}_r^*$, $0<r<1$, where $h$ is harmonic on $\mathbb{D}_r$. Let
$$t(z):=-\log\log(1/|z|), \quad p(z):=w(z)+t(z)=u(z)+\log|z|$$ for $z\in \mathbb{D}_r^*$. Since
$$\Delta p(z)=-\kappa(z) e^{2u(z)}=\frac{-\kappa(z) e^{2w(z)}}{|z|^2(\log(1/|z|))^2}>0,$$ $p(z)$ is subharmonic on $\mathbb{D}_r^*$, and since $\lim_{z\rightarrow0}p(z)=-\infty$, $p(z)$ extends to a subharmonic function on $\mathbb{D}_r$. By Lemma \ref{poisson jensen}, as $z \mapsto \Delta p(z)$ is integrable over $\mathbb{D}_r$,
\begin{eqnarray} p(z)=h_p(z)+\frac 1 {2\pi}\int_{\mathbb{D}_r}L(z-\zeta)\frac{-\kappa(\zeta)e^{2w(\zeta)}}{|\zeta|^2(\log(1/|\zeta|))^2} d\sigma_{\zeta},\ z\in \mathbb{D}_r,\nonumber\end{eqnarray} where $h_p(z)$ is harmonic on $\mathbb{D}_r$. For $t(z)$, we also have
\begin{eqnarray} t(z)=h_t(z)+\frac 1 {2\pi}\int_{\mathbb{D}_r}L(z-\zeta)\frac{1}{|\zeta|^2(\log(1/|\zeta|))^2}d\sigma_{\zeta},\ z\in \mathbb{D}_r,\nonumber\end{eqnarray} where $h_t(z)$ is harmonic on $\mathbb{D}_r$. Setting $w(z)=p(z)-t(z)$ gives \eqref{w(z)} with $h(z)=h_p(z)-h_t(z)$.
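The representation of $t(z)$ used above rests on the identity $\Delta\bigl(-\log\log(1/|z|)\bigr)=|z|^{-2}(\log(1/|z|))^{-2}$ on $\mathbb{D}^*$, which can be verified by direct computation; a small finite-difference check (our illustration, not part of the proof):

```python
import math

def t(x, y):
    # t(z) = -log log(1/|z|), defined for 0 < |z| < 1
    return -math.log(math.log(1.0 / math.hypot(x, y)))

def laplacian(f, x, y, h=1e-4):
    # second-order central finite differences for the Laplacian
    return (f(x + h, y) + f(x - h, y) + f(x, y + h) + f(x, y - h)
            - 4.0 * f(x, y)) / (h * h)

x, y = 0.25, 0.15
r = math.hypot(x, y)
lhs = laplacian(t, x, y)                        # Delta t at z = x + iy
rhs = 1.0 / (r * r * math.log(1.0 / r) ** 2)    # |z|^{-2} (log(1/|z|))^{-2}
print(lhs, rhs)
```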
Now fix $R<1 / e^2$. Then there exists a number ${\rho}>0$ such that the inequality (\ref{lemma}) holds in the disk $\mathbb{D}_{\rho}$. Let $\widetilde{\rho}=\min\{R/2,\,\rho\}$. We choose $z\in \mathbb{D}_{\widetilde{\rho}}$ and set $r=|z|/2$. Let $q(z)=-\kappa(z)e^{2w(z)}-1$ and $f(z)=q(z)|z|^{-2}(\log(1/|z|))^{-2}$. Then from (\ref{new higher}), we have \begin{eqnarray} \label{for w} && \partial^{\boldsymbol{j}}w(z)\nonumber \\ &=&\partial^{\boldsymbol{j}}h(z)+\frac{1}{2\pi}\int_{\mathbb{D}_{\widetilde{\rho}}\backslash \mathbb{D}_{r}}\partial^{\boldsymbol{j}}L(z-\zeta)f(\zeta)d\sigma_{\zeta}+ \frac 1 {2\pi}\int_{\mathbb{D}_r}\partial^{\boldsymbol{j}}L(z-\zeta) \left( f(\zeta)-f(z)\right)d\sigma_{\zeta} \nonumber\\
&&+\frac 1 {2\pi}\int_{\mathbb{D}_r}\partial^{\boldsymbol{j}}L(z-\zeta)\sum_{1\leq |\boldsymbol{a}|\leq n} \frac{(\zeta-z)^{\boldsymbol{a}}\partial^{\boldsymbol{a}}f(z)}{\boldsymbol{a}!}\;d\sigma_{\zeta} \nonumber\\
&&-\frac{1}{2\pi} \sum^{n-1}_{\tau=1} \int_{\partial \mathbb{D}_r}\partial^{\boldsymbol{\theta}_{\tau}} L(z-\zeta) \cdot P_{\tau-1}[\partial^{\boldsymbol{\phi}_{\tau}}f](z,\zeta)\cdot \langle N(\zeta),\boldsymbol{e}_{\tau+1} \rangle |d\zeta| \end{eqnarray} for a harmonic function $h$ on $\mathbb{D}_{\widetilde{\rho}}$. We can obtain
$$\left| \int_{\mathbb{D}_{\widetilde{\rho}}\backslash \mathbb{D}_{r}}\partial^{\boldsymbol{j}}L(z-\zeta)f(\zeta)d\sigma_{\zeta} \right| \leq
\frac{C_4}{|z|^n (\log(1/|z|))^2},$$
$$ \left| \int_{\mathbb{D}_r}\partial^{\boldsymbol{j}}L(z-\zeta)
\left( f(\zeta)-f(z)\right)d\sigma_{\zeta} \right| \leq \frac{C_5}{|z|^n (\log(1/|z|))^2},$$
by \eqref{logn}, Lemma \ref{estimate of kzew-1} and the estimates in \cite{Rothbehaviour}. So $ \bar{\partial}^nw(z)$, $\partial^nw(z)=\textit {O}(|z|^{-n}(\log(1/|z|))^{-2})$. For the mixed partial derivatives, since the first three integrals cancel, we have to estimate the last term in (\ref{for w}). Letting $\tau=1$ in the last sum of (\ref{for w}), the term
$$ \partial^{\boldsymbol{\phi}_{1}}f(z)\int_{\partial \mathbb{D}_r}\partial^{\boldsymbol{e}_1} L(z-\zeta) \cdot \langle N(\zeta),\boldsymbol{e}_{2} \rangle |d\zeta|$$
appears. Now consider $\partial^{\boldsymbol{\phi}_{1}}f(z)$. Our aim is $\bar{\partial}^{n_1}\partial^{n_2}w(z)=\textit {O}(|z|^{-n}(\log(1/|z|))^{-3})$. Since $f(z)=q(z)|z|^{-2}(\log(1/|z|))^{-2}$, then $q(z)\cdot \partial^{\boldsymbol{\phi}_{1}} (|z|^{-2}(\log(1/|z|))^{-2})$ appears in $ \partial^{\boldsymbol{\phi}_{1}} f(z)$ with some coefficient. We can calculate that
$$\left| \partial^{\boldsymbol{\phi}_{1}} \frac{1}{|z|^{2}(\log(1/|z|))^{2}}\right| \leq \frac{C_6}{|z|^{n}(\log(1/|z|))^2},$$ thus
$$\left| q(z)\, \partial^{\boldsymbol{\phi}_{1}} \frac{1}{|z|^{2}(\log(1/|z|))^{2}} \cdot \int_{\partial \mathbb{D}_r}\partial^{\boldsymbol{e}_1} L(z-\zeta) \cdot \langle N(\zeta),\boldsymbol{e}_{2} \rangle |d\zeta|\right| \leq \frac{6\pi C_7 \cdot C_6}{|z|^{n}(\log(1/|z|))^{3}}$$
by (\ref{logn}) and (\ref{sincos}), where $C_7$ denotes the constant in the bound $|q(z)|\leq C_7/\log(1/|z|)$ from (\ref{lemma}). So $ \bar{\partial}^{n_1}\partial^{n_2}w(z)=\textit {O}(|z|^{-n}(\log(1/|z|))^{-3})$ as desired. \hspace{\fill} $\Box$\\
The second order derivatives of $w(z)$ in Theorem \ref{original} are contained in Theorem \ref{for w theorem}. However, for the mixed partial derivative, the estimate (\ref{new mixed}) is more accurate than the corresponding estimate in Theorem \ref{original}. We record it in the following corollary. \begin{corollary} \label{second partial derivative order} \textsl{ Let $\kappa:\mathbb{D}\rightarrow\mathbb{R}$ be a locally H\"{o}lder continuous function with $\kappa(0)<0$. If $u:\mathbb{D}^* \rightarrow \mathbb{R}$ is a $C^2$-solution to $\Delta u=-\kappa(z) e^{2u}$ in $\mathbb{D}^*$ with the order $\alpha=1$ at the point $z=0$, then for the remainder function $w(z)$ as in Theorem \ref{original}, the mixed second partial derivative satisfies
$$w_{z\bar{z}}(z)=\textit {O}(|z|^{-2}(\log(1/|z|))^{-3}).$$ } \end{corollary}
As for the sharpness of Theorems \ref{original}, \ref{estimate v} and \ref{for w theorem}, the generalized hyperbolic metric on the thrice-punctured sphere provides a convincing example. Theorems 3.3 and 4.2 in \cite{zhang3} verify that Theorems \ref{original}, \ref{estimate v} and \ref{for w theorem} are sharp; see \cite{zhang3} for details.
\vspace*{3mm}
\section{Minda-type theorems} \setcounter{equation}{0} The following result is Minda's theorem. It is a general estimate for the hyperbolic metric near the singularity. \begin{theorema} $\mathrm{[10]}.$ \label{Minda original}
\textsl{Suppose $\Omega$ is a hyperbolic region in the complex plane and $p \in \mathbb{C}$ is an isolated boundary point of $\Omega$. Let the hyperbolic metric on $\Omega$ with the constant Gaussian curvature $-4$ be $\lambda_{\Omega}(\omega)|d\omega|$. Then \begin{eqnarray*}
\lim_{\omega\rightarrow p}|\omega-p|\log(1/|\omega-p|)\lambda_{\Omega}(\omega)=\frac{1}{2}\,. \end{eqnarray*}} \end{theorema}
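With the curvature normalization $-4$ used throughout this paper, Minda's limit can be watched numerically on the punctured disk $\mathbb{D}_R^*$, where the hyperbolic density is $1/(2|z|\log(R/|z|))$ (Theorem \ref{maximal} with $\alpha=1$). The sketch below is our illustration only (the choice $R=1/2$ is arbitrary); it shows the slow convergence of $|\omega|\log(1/|\omega|)\lambda(\omega)$ to $1/2$:

```python
import math

R = 0.5

def lam_hyp(r):
    # hyperbolic density (curvature -4) on D_R^* at |z| = r: 1/(2 r log(R/r))
    return 1.0 / (2.0 * r * math.log(R / r))

# Minda-type quantity |w| log(1/|w|) lambda(w); the limit as |w| -> 0 is 1/2
vals = [r * math.log(1.0 / r) * lam_hyp(r) for r in (1e-3, 1e-12, 1e-30)]
print(vals)
```

The quantity decreases monotonically toward $1/2$, but only at the rate $\log(1/R)/\log(1/r)$, which illustrates why such limits are delicate to observe.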
The following theorem is due to Kraus and Roth. \begin{theorema} $\mathrm{[6]}.$ \label{estimate for cusps original}
\textsl{Let $\lambda(z)|dz|$ be a regular conformal metric on a domain $\Omega\subseteq\mathbb{C}$ with an isolated singularity at $z=p$. Suppose that its curvature $\kappa :\Omega\rightarrow \mathbb{R}$ has a H\"{o}lder continuous extension to $\Omega\cup\{p\}$ such that $\kappa(p)< 0$. Then $\log\lambda$ has an order $\alpha\leq1$ at $z=p$ and \begin{eqnarray*}
\lim_{z\rightarrow p}|z-p|\log(1/|z-p|)\lambda(z)=\left\{\begin{array}{ll} 0&\mbox{if\ }\ \alpha< 1\\ \displaystyle \frac 1 {\sqrt{-\kappa(p)}}&\mbox{if\ }\ \alpha=1.\end{array}\right. \end{eqnarray*} \vspace*{1mm} } \end{theorema} \par We obtain the following result in relation to Theorem \ref{estimate for cusps original}. \begin{theorem}\label{coro}
\textsl{ Let $\lambda(z)|dz|$ be a regular conformal metric on a domain $\Omega\subseteq\mathbb{C}$ with an isolated singularity at $z=p$. Suppose that the curvature $\kappa :\Omega\rightarrow \mathbb{R}$ has a H\"{o}lder continuous extension to $\Omega\cup\{p\}$ such that $\kappa(p)< 0$ and the order of $\log\lambda$ is $\alpha=1$ at $z=p$. Then \vspace*{2mm} \\
(i)$\ \displaystyle \lim_{z\rightarrow p} (z-p)|z-p|\log(1/|z-p|)\lambda_{z}(z)=-{\frac 1 {2\sqrt{-\kappa(p)}}}, \hspace*{\fill} $ \vspace*{2mm}\\
(ii)$\ \displaystyle \ \lim_{z\rightarrow p}(z-p)^2|z-p|\log(1/|z-p|)\lambda_{zz}(z)=\displaystyle {\frac 3{4\sqrt{-\kappa(p)}}}, \hspace*{\fill} $ \vspace*{2mm}\\
(iii)$\ \displaystyle \ \lim_{z\rightarrow p}|z-p|^3\log(1/|z-p|)\lambda_{z\bar{z}}(z)=\displaystyle {\frac 1 {4\sqrt{-\kappa(p)}}}. \hspace*{\fill} $ } \end{theorem} \textbf{Proof.} Let $\mathbb{H}$ be the upper half-plane. For a simple closed curve $\gamma: [0,1] \rightarrow \Omega$ around $p$ with $\gamma(0)=\gamma(1)$, consider a lift $\widetilde{\gamma}$ of $\gamma$ to $\mathbb{H}$. Since there exists an automorphism $g$ of $\mathbb{H}$ such that $\widetilde{\gamma}(1)=g(\widetilde{\gamma}(0))$, we may assume that $g(z)=z+1$ on $\mathbb{H}$. Let $\pi: \mathbb{H}\rightarrow \Omega$ be the regular covering projection; then $\pi \circ g = \pi$. Define $\varphi: \mathbb{H} \rightarrow \mathbb{D}^*$, $z \mapsto e^{2 \pi iz}$; then the quotient space $\mathbb{H}/ \langle g \rangle$ is conformally equivalent to $\mathbb{D}^*$. Hence there exists a conformal mapping $\rho: \mathbb{D}^* \rightarrow \Omega$ such that $\rho \circ \varphi=\pi$, and $\rho$ extends holomorphically to $\mathbb{D}$ by setting $\rho (0)=p$. So it suffices to consider the case $p=0$ and $\Omega=\mathbb{D}^*$. \par Let $u(z):=\log\lambda(z)$, so that $\lambda_z(z)=u_z(z)\lambda(z)$. We have $$\lim_{z\rightarrow 0}z u_z(z)=-\frac 1 2,\; \;
\lim_{z\rightarrow 0}z^2 u_{zz}(z)=\frac 1 2,\; \; \lim_{z\rightarrow0}|z|^2u_{z\bar{z}}=0$$ by Theorem \ref{original}. In combination with Theorem \ref{estimate for cusps original}, we have \begin{eqnarray*}
&&\lim_{z\rightarrow 0}z|z|\log(1/|z|)\lambda_z(z)=\lim_{z\rightarrow 0}z|z|\log(1/|z|) u_z(z)\lambda(z)\\
&=&\lim_{z\rightarrow 0}|z|\log(1/|z|)\lambda(z) \cdot z u_{z}(z)=-\frac 1 {2\sqrt{-\kappa(0)}} \end{eqnarray*} for the first case, \begin{eqnarray*}
&&\lim_{z\rightarrow 0}z^2|z|\log(1/|z|)\lambda_{zz}(z)=\lim_{z\rightarrow 0}z^2|z|\log(1/|z|)(u_{zz} \lambda+u_z \lambda_z)\\
&=&\lim_{z\rightarrow 0}(z^2u_{zz}\cdot|z|\log(1/|z|)\lambda)+\lim_{z\rightarrow 0}(z|z|\log(1/|z|)\lambda_z\cdot zu_z)\\ &=&\frac 1 {2\sqrt{-\kappa(0)}}+(-\frac 1 2)\cdot(-\frac 1 {2\sqrt{-\kappa(0)}})=\frac 3{4\sqrt{-\kappa(0)}}\nonumber \end{eqnarray*} for the second case and \begin{eqnarray*}
&&\lim_{z\rightarrow0}|z|^3\log(1/|z|)\lambda_{z\bar{z}}(z)=\lim_{z\rightarrow0}|z|^3\log(1/|z|)(u_{z\bar{z}}\lambda+u_{z} \lambda_{\bar{z}})\nonumber\\
&=&\lim_{z\rightarrow0}(|z|^2u_{z\bar{z}}\cdot|z|\log(1/|z|)\lambda)
+\lim_{z\rightarrow0}(\bar{z}|z|\log(1/|z|)\lambda_{\bar{z}}\cdot zu_z)\nonumber\\ &=&-\frac 1 {2\sqrt{-\kappa(0)}}\cdot(-\frac 1 2)=\frac 1 {4\sqrt{-\kappa(0)}}\nonumber \end{eqnarray*} for the last case as desired.
$\Box$ \vspace*{2mm} \par Theorem \ref{coro} is given for a regular conformal metric with a (locally) H\"older continuous Gaussian curvature $\kappa$. Considering Theorems \ref{for v} and \ref{for w theorem}, if we add the assumption that $\kappa$ is $n$-th order (locally) H\"older continuous, we can obtain the higher order version of Theorems \ref{estimate for cusps original} and \ref{coro}. \begin{theorem}\label{general u} \textsl{Let $\kappa:\mathbb{D}\rightarrow\mathbb{R}$ be of class $C^{n-2,\,\nu}(\mathbb{D}^*)$ for an integer $n \geq 3$, $0<\nu \leq 1$ and $\kappa(0)<0$. If $u:\mathbb{D}^*\rightarrow \mathbb{R}$ is a $C^{n,\,\nu}$-solution to $\Delta u=-\kappa(z) e^{2u}$ in $\mathbb{D}^*$, then $u$ has order $\alpha \in (-\infty, 1]$ and for $n_1,\,n_2 \geq 1$, $n_1+n_2\leq n$, \vspace*{2mm} \\ (i) $ \displaystyle \ \lim_{z\rightarrow0}z^n\partial^nu(z)=\frac {\alpha}2(-1)^n(n-1)!=\lim_{z\rightarrow0}\bar{z}^n\bar{\partial}^nu(z),$ \vspace*{2mm} \\ (ii) $ \displaystyle \ \lim_{z\rightarrow 0}\bar{z}^{n_1} z^{n_2}\bar{\partial}^{n_1}\partial^{n_2}u(z)=0.$} \end{theorem}
\textbf{Proof.} When $0<\alpha<1$,\ $u(z)=-\alpha\log|z|+v(z)$. Theorems 4.1 and 4.2 imply that $$\lim_{z \rightarrow 0}z^n \partial^n v(z)=0, \ \ \lim_{z \rightarrow 0}\bar{z}^{n_1} z^{n_2} \bar{\partial}^{n_1}\partial^{n_2} v(z)=0$$ for $n_1$, $n_2$, $n \geq 1$. Since \begin{eqnarray} \label{pa logz}
\partial^{n}\log|z|=\frac{(-1)^{n-1}(n-1)!}{2z^n}, \quad \bar{\partial}^{n_1}\partial^{n_2}\log|z|=0,
\end{eqnarray} we obtain $$\lim_{z \rightarrow 0}z^n \partial^n u(z)=-\alpha \lim_{z \rightarrow 0}z^n \partial^n \log|z|+\lim_{z \rightarrow 0}z^n \partial^n v(z)=\frac{\alpha}{2}(-1)^{n}(n-1)!,$$ $$\lim_{z \rightarrow 0}\bar{z}^{n_1} z^{n_2} \bar{\partial}^{n_1}\partial^{n_2} u(z)=0.$$
When $\alpha=1$,\ $u(z)=-\log|z|-\log\log(1/|z|)+w(z)$. We have $$\lim_{z \rightarrow 0}z^n \partial^n w(z)=0, \ \ \lim_{z \rightarrow 0}\bar{z}^{n_1} z^{n_2} \bar{\partial}^{n_1}\partial^{n_2} w(z)=0$$ for $n_1,\,n_2,\,n \geq 1$, from Theorems 4.1 and 4.3. By induction,
$$\partial^n\log\log(1/|z|)=\sum^n_{j=1}\frac{C^{(n)}_j}{z^n(\log(1/|z|))^j}$$ with constant $C^{(n)}_{j}$ for $1 \leq j \leq n$. If we fix $n_2$, then
$$\bar{\partial}^{n_1}\partial^{n_2}\log\log(1/|z|)=\sum^{n_2}_{j=1}\frac{C^{(n_1,\, n_2)}_{j}}{\bar{z}^{n_1}z^{n_2} (\log(1/|z|))^{j+1}}$$
with constant $C^{(n_1,\,n_2)}_{j}$ for $1 \leq j \leq n_2$. So $$\lim_{z \rightarrow 0}z^n \partial^n \log\log(1/|z|)=0, \ \ \lim_{z \rightarrow 0}\bar{z}^{n_1}z^{n_2} \bar{\partial}^{n_1}\partial^{n_2} \log\log(1/|z|)=0$$ for $n_1,\,n_2,\,n \geq 1$. Combining with \eqref{pa logz} leads to
$$\lim_{z \rightarrow 0}z^n \partial^n u(z)=-\lim_{z \rightarrow 0}z^n \partial^n \log|z|-\lim_{z \rightarrow 0}z^n \partial^n \log\log(1/|z|)+\lim_{z \rightarrow 0}z^n \partial^n w(z)=\frac{(-1)^{n}(n-1)!}{2},$$ $$\hspace*{51mm}\lim_{z \rightarrow 0}\bar{z}^{n_1} z^{n_2} \bar{\partial}^{n_1}\partial^{n_2} u(z)=0.\hspace*{51mm} \Box $$
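As a consistency check, take the hyperbolic density $\lambda(z)=1/(|z|\log(1/|z|))$ on $\mathbb{D}^*$, for which $u(z)=-\log|z|-\log\log(1/|z|)$, $\kappa\equiv-1$ and $\alpha=1$. A direct computation gives
$$z\partial u(z)=-\frac 1 2+\frac{1}{2\log(1/|z|)}\rightarrow -\frac 1 2, \qquad \bar{z}z\,\bar{\partial}\partial u(z)=\frac{1}{4(\log(1/|z|))^{2}}\rightarrow 0,$$
in agreement with (i) for $n=1$ and with (ii) for $n_1=n_2=1$.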
From the proof above, we can obtain a stronger limit for the mixed derivative of $u(z)$ when the order $\alpha=1$,
$$\displaystyle \ \lim_{z\rightarrow 0}\bar{z}^{n_1} z^{n_2}(\log(1/|z|))^2\bar{\partial}^{n_1}\partial^{n_2}u(z)=C^{(n_1,\;n_2)}_1=\frac{(-1)^{n_1+n_2-1}}{4}(n_1-1)!(n_2-1)!,$$ see \cite{zhang3} for more details. \vspace*{2mm} \par On the basis of Theorem \ref{general u}, we can provide the following result as a higher order estimate for a conformal metric with the negative curvature near the origin when $\alpha=1$. \begin{theorem} \label{higher lambda for cusps} \textsl{Let $\kappa$ and $u$ be the same as in Theorem \ref{general u}. If the order of $u$ is $\alpha=1$, then for $n_1,\,n_2 \geq 0$, $n_1+n_2 \leq n$, the limit
$$l_{n_1,n_2}:=\frac{1}{n_1!n_2!}\lim_{z \rightarrow0}|z|\log(1/|z|){\bar{z}}^{n_1} z^{n_2} {\bar{\partial}}^{n_1} \partial^{n_2} \lambda(z)$$ exists. Moreover, the numbers $l_{n_1,n_2}$ satisfy the following \vspace*{1mm}\\ (i)$ \ \displaystyle l_{n_1,n_2}={-\frac{1}{2} \choose n_1}{-\frac{1}{2} \choose n_2}\frac 1 {\sqrt{-\kappa(0)}}$, \vspace*{3mm} \\ (ii)$\ l_{n_1,n_2}=l_{n_2,n_1}$,\\ where $${\tau \choose j}=\frac{\tau(\tau-1)\cdots (\tau-j+1)}{j\,!}$$ is the binomial coefficient. } \end{theorem} \textbf{Proof.} We write $\lambda(z)=e^{u(z)}$, then $\partial \lambda (z)=\lambda(z)\, \partial u(z)$, and $$\partial^{n}\lambda(z)=\sum_{j=0}^{{n}-1}{{n}-1 \choose j}\partial^{{n}-j}u(z) \,\partial^j\lambda(z)$$ by induction, where $\partial^0\lambda(z)=\bar{\partial}^0\lambda(z)=\lambda(z),$ so $$l_{0,{n_2}}=\frac{1}{{n_2}!}\lim_{z\rightarrow0}\sum_{j=0}^{{n_2}-1}{{n_2}-1 \choose j}
z^{{n_2}-j}\partial^{{n_2}-j}u(z)\cdot|z|\log(1/|z|)z^j\partial^j\lambda(z).$$ Theorem \ref{estimate for cusps original} gives that $l_{0,0}=1/\sqrt{-\kappa(0)}$. From the existence of $\lim_{z\rightarrow0}z^{{n_2}-j}\partial^{{n_2}-j}u(z)$ and $l_{0,0}$, we know that $l_{0,\;{n_2}}$ exists. Next, limit (ii) in Theorem \ref{general u} enables us to write $l_{{n_1},{n_2}}$ as a sum of the terms only containing pure derivatives of $u(z)$, \begin{eqnarray}\label{ind} l_{{n_1},{n_2}}=\frac{1}{{n_1}!{n_2}!}\lim_{z\rightarrow0}\sum_{j=0}^{{n_2}-1}{{n_2}-1 \choose j}
z^{{n_2}-j}\partial^{{n_2}-j}u(z)\,|z|\log(1/|z|)\bar{z}^{n_1} z^j\bar{\partial}^{n_1}\partial^j\lambda(z), \end{eqnarray} thus the existence of $l_{0,{n_2}}$ guarantees $l_{{n_1},{n_2}}$ exists.
By Theorem \ref{coro}, it is known that $l_{0,1}$ is a real number, so $l_{1,0}=\overline{l_{0,1}}=l_{0,1}$. Since \begin{eqnarray} \label{pa bar n lam} {\displaystyle\bar{\partial}^{n_2}\lambda(z)=\sum_{j=0}^{{n_2}-1}{{n_2}-1 \choose j} \bar{\partial}^{{n_2}-j}u(z)\,\bar{\partial}^j\lambda(z)}, \end{eqnarray} then $l_{{n_2},0}=l_{0,{n_2}}$ by induction. From \eqref{ind}, \eqref{pa bar n lam}, and (i) of Theorem \ref{general u}, we have \begin{eqnarray*} &&l_{{n_1},{n_2}}\\ &=&\sum_{j=0}^{{n_2}-1} \lim_{z\rightarrow0} \frac{1}{{n_1}!{n_2}!} \frac{({n_2}-1)!}{j!({n_2}-1-j)!}
z^{{n_2}-j}\partial^{{n_2}-j}u(z)\cdot|z|\log(1/|z|)\bar{z}^{n_1} z^j\bar{\partial}^{n_1}\partial^j\lambda(z) \\ &=&\frac{1}{{n_2}}\sum_{j=0}^{{n_2}-1} \frac{1}{{n_1}!} \frac{1}{j!({n_2}-1-j)!}\lim_{z\rightarrow0}
z^{{n_2}-j}\partial^{{n_2}-j}u(z)\cdot\lim_{z\rightarrow0}|z|\log(1/|z|)\bar{z}^{n_1}z^j\bar{\partial}^{n_1}\partial^j\lambda(z) \\
&=&\frac{1}{{n_2}}\sum_{j=0}^{{n_2}-1} \frac{(-1)^{{n_2}-j}}{2}\frac{1}{{n_1}!j!} \lim_{z\rightarrow0}|z|\log(1/|z|)\bar{z}^{n_1} z^j\bar{\partial}^{n_1}\partial^j\lambda(z) =\frac{1}{2{n_2}}\sum_{j=0}^{{n_2}-1}(-1)^{{n_2}-j}l_{{n_1},j}. \end{eqnarray*} Then $$ {n_2} \cdot l_{{n_1},{n_2}}=\frac{1}2\sum_{j=0}^{{n_2}-2}(-1)^{{n_2}-j}\,l_{{n_1},j}-\frac{1}{2}\,l_{{n_1},{n_2}-1} =-({n_2}-1)l_{{n_1},{n_2}-1}-\frac{1}{2}\,l_{{n_1},{n_2}-1}.$$ Since $l_{0,{n_2}}=l_{{n_2},0}$, \begin{eqnarray*} l_{{n_1},{n_2}}&=&\frac{-\frac{1}{2}-{n_2}+1}{{n_2}}l_{{n_1},{n_2}-1}= {-\frac{1}{2} \choose {n_2}}l_{{n_1},0}\\ &=&{-\frac{1}{2} \choose {n_2}}l_{0,{n_1}}={-\frac{1}{2} \choose {n_2}}{-\frac{1}{2} \choose {n_1}}l_{0,0}. \end{eqnarray*} Thus (i) is valid and (ii) follows from (i).
$\Box$\\
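For $(n_1,n_2)=(0,1)$ and $(0,2)$, formula (i) is consistent with Theorem \ref{coro}: it gives
$$l_{0,1}={-\frac{1}{2} \choose 1}\frac 1 {\sqrt{-\kappa(0)}}=-\frac 1 {2\sqrt{-\kappa(0)}}, \qquad l_{0,2}={-\frac{1}{2} \choose 2}\frac 1 {\sqrt{-\kappa(0)}}=\frac 3 {8\sqrt{-\kappa(0)}},$$
and the factor $1/2!$ in the definition of $l_{0,2}$ recovers the limit $\frac 3 {4\sqrt{-\kappa(0)}}$ of Theorem \ref{coro} (ii).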
However, when the order $\alpha<1$, the analogous limit \begin{eqnarray} \label{lza}
l':=\lim_{z\rightarrow0}|z|^{\alpha}\lambda(z) \end{eqnarray} may also exist but cannot be described only in terms of the curvature of $\lambda(z)$. To discuss the limit \eqref{lza} for an SK-metric, we consider the limit \begin{eqnarray} \label{upper lza}
l:=\limsup_{z\rightarrow0}|z|^{\alpha}\lambda(z) \end{eqnarray} instead, since the definition \eqref{singularity} of a corner guarantees that $l<\infty$ for $l$ defined above. Based on Theorem \ref{Ahlfors lemma} and Corollary 4.4 in \cite{Rothhyper}, we have the following result corresponding to Theorem \ref{higher lambda for cusps} in the thrice-punctured sphere. \begin{theorem} \label{limitfor3} \textsl{Let $0<\alpha,\ \beta <1$ and $0<\gamma \leq 1$ such that $\alpha+\beta+\gamma>2$ and $\lambda(z)$ be an SK-metric on the thrice-punctured Riemann sphere $\widehat{\mathbb{C}}\backslash \{0,\,1,\,\infty\}$ of orders $\alpha,\, \beta,\,\gamma$ at $0$, $1$, $\infty$, respectively, with the curvature $\kappa(z)$. Then the upper limit \eqref{upper lza} satisfies \begin{eqnarray} \label{3lgene} l \leq \frac{\delta}{1-\delta^2}(1-\alpha), \end{eqnarray} where \begin{eqnarray*} \delta=\frac{\Gamma( c)}{\Gamma(2- c)} \left(\frac{\Gamma(1- a)\Gamma(1- b)\Gamma( a+1- c) \Gamma( b+1- c)}{\Gamma( a)\Gamma( b)\Gamma( c- a) \Gamma( c- b)} \right)^{1/2} \end{eqnarray*} with \begin{eqnarray*} a=\frac{\alpha+\beta-\gamma}{2},\ b=\frac{\alpha+\beta+\gamma-2}{2},\ c=\alpha. \end{eqnarray*}} \end{theorem}
Now we consider the upper limit $l$ in the once-punctured unit disk $\mathbb{D}^*$. The following result is evident if we combine Theorem \ref{Ahlfors lemma} with Theorem \ref{maximal}. \begin{theorem}
\textsl{If $\lambda(z)|dz|$ is an SK-metric on $\mathbb{D}^*$ with the order $\alpha \in (0,\,1)$ at the origin, then the upper limit $l$ defined in \eqref{upper lza} satisfies $l \leq 1-\alpha$.} \end{theorem}
When the SK-metric satisfies some stronger continuity assumption, the upper limit $l$ in \eqref{upper lza} becomes the limit $l'$ in \eqref{lza} at the origin, which enables us to consider the derivatives of $\lambda(z)$ near the origin. For instance, if $\kappa(z)$ and $u(z)$ satisfy the assumption of Theorem \ref{general u}, then $u(z)$ is of class $C^{\;\!2}$ in a neighborhood of $z=0$ and $l=l'$ holds locally near the origin. Therefore we state the following result, whose recurrence relation is similar to that of Theorem \ref{higher lambda for cusps}. \begin{theorem} \textsl{Let the functions $\kappa(z)$ and $u(z)$ satisfy the assumption of Theorem \ref{general u} for an integer $n$ and let $\lambda(z):=e^{u(z)}$. If the order of $u(z)$ is $\alpha \in (-\infty, \,1)$, then for $n_1$, $n_2\geq 0$, $n_1+n_2 \leq n$, the limit
$$l_{n_1,n_2}:=\frac{1}{n_1!n_2!}\lim_{z \rightarrow0}|z|{\bar{z}}^{n_1} z^{n_2} {\bar{\partial}}^{n_1} \partial^{n_2} \lambda(z)$$ exists and satisfies the following \vspace*{1mm}\\ (i)$ \ \displaystyle l_{n_1,n_2}={-\frac{\alpha}{2} \choose n_1}{-\frac{\alpha}{2} \choose n_2}l'$, \vspace*{3mm} \\ (ii)$\ l_{n_1,n_2}=l_{n_2,n_1}$, \vspace*{1mm} \\ where $l'$ is defined by \eqref{lza}.} \end{theorem}
\vspace*{5mm} \hspace*{-17pt}\textbf{Acknowledgement.} I would like to thank Prof. Toshiyuki Sugawa for his helpful comments, suggestions and encouragement. I also want to thank Prof. Toshihiro Nakanishi for his suggestions on SK-metrics.\\
\providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace} \providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR }
\providecommand{\MRhref}[2]{
\href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2} } \providecommand{\href}[2]{#2}
\end{document}
\begin{document}
\conferenceinfo{ISSAC'03,} {August 3--6, 2003, Philadelphia, Pennsylvania, USA.}
\CopyrightYear{2003}
\crdata{1-58113-641-2/03/0008}
\title{Determining the Automorphism Group of a Hyperelliptic Curve
}
\numberofauthors{1}
\author{ \alignauthor Tanush Shaska\titlenote{ Useful suggestions of the anonymous referee are gratefully acknowledged.} \\
\affaddr{Department of Mathematics}\\
\affaddr{University of California at Irvine}\\
\affaddr{103 MSTB, Irvine, CA 92697}\\
}
\date{}
\newtheorem {Theorem}{Theorem}[section] \newtheorem {prop}[Theorem]{Proposition} \newtheorem {definition}[Theorem]{Definition} \newtheorem {Example}[Theorem]{Example} \newtheorem {Exercise} [Theorem]{Exercise} \newtheorem {remark}[Theorem]{Remark} \newtheorem {Corollary}[Theorem]{Corollary} \newtheorem {lemma}[Theorem]{Lemma} \newtheorem {Result}{Result}
\maketitle
\def\Z{\mathbb Z}
\def\Q{\mathbb Q} \def\C{\mathbb C} \def\bP{\mathbb P} \def\B{\mathcal B} \def\cA{\mathcal A} \def\L{\mathcal L} \def\R{\mathcal R} \def\H{\mathcal H} \def\M{\mathcal M} \def\N{\mathcal N} \def\w{\widetilde} \def\l{\lambda} \def\s{\sigma}
\def\cQ{\mathcal Q}
\def\a{\alpha} \def\aa{\bar \alpha} \def\b{\beta} \def\p{\mathfrak p} \def\cP{\mathcal P} \def\e{\varepsilon} \def\iso{\equiv} \def\rt{{\rtimes}}
\def\op{\oplus}
\def\g{\gamma} \def\bg{\bar \gamma} \def\u{\mathfrak u} \def\U{\mathfrak U} \def\k{\bar k} \def\iso{{\, \cong\, }} \def\nor{{\, \vartriangleleft \, }} \def\<{\langle} \def\>{\rangle} \def\into{\hookrightarrow } \def\Aut{\mbox {Aut}} \def\bAut{\overline{\mbox {Aut}}} \def\D{\mathcal D}
\def\bG{\bar G}
\def\bpsi{\bar \psi} \def\f{\psi} \def\ff{\phi} \def\X{\mathcal X} \def\Y{\mathcal Y} \def\fH{\mathfrak H} \def\t{\mu} \def\d{\delta} \def\m{\mu} \def\v{\mathfrak v} \def\r{\mu} \def\ok{\overline k}
\begin{abstract} In this note we discuss techniques for determining the automorphism group of a genus $g$ hyperelliptic curve $\mathcal{X}_g$ defined over an algebraically closed field $k$ of characteristic zero. The first technique uses the classical $GL_2 (k)$-invariants of binary forms. This is a practical method for curves of small genus, but has limitations as the genus increases, since such invariants are not known for large genus.
The second approach, which uses dihedral invariants of hyperelliptic curves, is a very convenient method and works well in all genera. First we define the normal decomposition of a hyperelliptic curve with extra automorphisms. Then dihedral invariants are defined in terms of the coefficients of this normal decomposition. We define such invariants independently of the automorphism group $\Aut (\mathcal{X}_g)$. However, to compute such invariants the curve is required to be in its normal form. This requires solving a nonlinear system of equations.
We find conditions in terms of classical invariants of binary forms for a curve to have reduced automorphism group $A_4$, $S_4$, $A_5$. As far as we are aware, such results have not appeared before in the literature.
\end{abstract}
\category{I.1} {SYMBOLIC AND ALGEBRAIC MANIPULATION}{ALGORITHMS}
\terms{Algorithms, Theory}
\keywords{Hyperelliptic curve, automorphism, moduli space}
\section{Introduction}
Let $\mathcal{X}_g$ be an algebraic curve of genus $g$ defined over an algebraically closed field $k$ of characteristic zero. We denote by $\Aut(\mathcal{X}_g)$ the group of analytic (equivalently, algebraic) automorphisms of $\mathcal{X}_g$. Then $\Aut(\mathcal{X}_g)$ acts on the finite set of Weierstrass points of $\mathcal{X}_g$. This action is faithful unless $\mathcal{X}_g$ is hyperelliptic, in which case its kernel is the group of order 2 generated by the hyperelliptic involution of $\mathcal{X}_g$. Thus in any case, $\Aut (\mathcal{X}_g)$ is a finite group. This was first proved by Schwarz. In 1893 Hurwitz discovered what is now called the Riemann-Hurwitz formula. From this he derived that
$$| \Aut (\mathcal{X}_g)| \ \ \leq \ \ 84 \, \, (g-1)$$ which is known as the Hurwitz bound. However, it is not an easy task to compute the automorphism group of a given algebraic curve. Even compiling a list of possible candidates for a small genus $g$ is quite difficult. In \cite{MS} we provide an algorithm which computes such lists. We give a complete list for $g=3$ and list ``large'' groups for $g\leq 10$. This work is based on previous work of Breuer, among many others; see \cite{MS} for a complete list of references.
If $\mathcal{X}_g$ is hyperelliptic then $\Aut(\mathcal{X}_g)$ is a degree 2 central extension of $\mathbb Z_n, D_n, A_4, S_4, A_5$. We will explain this briefly in section 2. However, computing $\Aut(\mathcal{X}_g)$ for a given $\mathcal{X}_g$ is still difficult. Even sophisticated computer algebra packages do not have such capabilities for $g \geq 3$. The case $g=2$ has recently been implemented in Magma \cite{Magma} and is based on methods used in \cite{SV1}.
In this short note we will focus on determining $\Aut(\mathcal{X}_g)$ for a given genus $g$ hyperelliptic curve $\mathcal{X}_g$. We will not prove any of the results. The interested reader can check \cite{GS}, \cite{Sh3}, \cite{Sh5}, \cite{Sh6}, \cite{Sh7}, or \cite{SV1} for details. Most of the papers above have focused on studying the locus of all hyperelliptic curves of genus $g$ whose automorphism group contains a subgroup $G$ and inclusions between such loci. In this paper we combine the above results to get a treatment for all hyperelliptic curves in all genera. We generalize the notion of dihedral invariants of hyperelliptic curves with extra involutions discovered in \cite{GS} to all hyperelliptic curves with extra automorphisms (cf. Theorem 5.1). Using these dihedral invariants and classical invariants of binary forms of degree $2g+2$ we obtain necessary conditions for a curve to have reduced automorphism group $A_4, S_4, A_5$ (cf. section 5).
\noindent {\bf Notation:} We will use the term ``curve'' to mean a ``compact Riemann surface''. Throughout this paper $\mathcal{X}_g$ denotes a hyperelliptic curve of genus $g\ge 2$.
$D_n$ denotes the dihedral group of order $2n$.
\section{Preliminaries}
Let $k$ be an algebraically closed field of characteristic zero and $\mathcal{X}_g$ be a genus $g$ hyperelliptic curve given by the equation $Y^2=F(X)$, where $\deg(F)=2g+2$. Denote the function field of $\mathcal{X}_g$ by $K:=k(X,Y)$. Then, $k(X)$ is the unique degree 2 genus zero subfield of $K$. $K$ is a quadratic extension field of $k(X)$ ramified exactly at $d=2g+2$ places $\alpha_1, \dots , \alpha_d$ of $k(X)$. The corresponding places of $K$ are called the {\it Weierstrass points} of $K$. Let $\mathcal{P}:=\{ \alpha_1, \dots , \alpha_d \}$ and $G=\Aut(K/k)$. Since $k(X)$ is the only genus 0 subfield of degree 2 of $K$, $G$ fixes $k(X)$. Thus, $G_0:=Gal(K/k(X))=\< z_0 \>$, with $z_0^2=1$, is central in $G$. We call the group $\bar G:=G/G_0$ the {\it reduced automorphism group} of $K$. By a theorem of Dickson, $\bar G$ is isomorphic to one of the following:
$$ \mathbb Z_n, D_n, A_4, S_4, A_5,$$
with branching indices of the corresponding cover $\bar\psi :\bP^1 \to \bP^1/ \bar G$ given respectively by \begin{equation}\label{red_sig} (n,n), (2, 2, n), (2, 3, 3), (2, 4, 4), (2, 3, 5). \end{equation}
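For instance, for $\bar G = \mathbb Z_n$ one may take $\bar\psi(X)=X^{n}$, which is branched exactly over $0$ and $\infty$ with both indices equal to $n$, matching the signature $(n,n)$; for $\bar G=D_n$ the cover $\bar\psi(X)=X^{n}+X^{-n}$ is branched over $2$, $-2$ and $\infty$ with indices $(2,2,n)$.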
In \cite{BS} all subgroups of $G$ are classified and in \cite{Bu} all groups that occur as full automorphism groups of hyperelliptic curves are classified. We use the notation of \cite{Bu} and define $V_n$, $H_n$, $G_n$, $U_n$, $W_2$, $W_3$ as follows:
\begin{equation} \begin{split}
V_n := & \< \, \, x, y \, | \, \, x^4, y^n, (x y)^2, (x^{-1}y)^2 \>, \\
H_n := & \<\, x, y \, \, | \, \, x^4, y^2x^2, (xy)^n \, \>, \\
G_n:= & \< \, x, y \, \, | \, \, x^2 y^n, y^{2n}, x^{-1} y x y \, \>,\\
U_n := & \< \, x, y \, | \, \, x^2, y^{2n}, xyxy^{n+1} \>, \\
W_2:= & \< \, x, y \, | \, \, x^4, y^3, y x^2 y^{-1} x^2, (x y)^4 \>, \\
W_3:= & \< \, x, y \, | \, \, x^4, y^3, x^2 (x y)^4, (x y)^8 \> \\ \end{split} \end{equation}
The following is proven in \cite{Bu}.
\begin{Theorem} The automorphism group of a hyperelliptic curve is one of the following $D_n$, $\mathbb Z_n$, $V_n$, $H_n$, $G_n$, $U_n$, $GL_2 (3)$, $W_2$, $W_3$. \end{Theorem}
The reader should be careful when reading Theorem 3.1 in \cite{Bu}. It seems that the cases $H_1$ and $G_1$ (which are isomorphic to $ \mathbb Z_4$) must be excluded. For example, for $g=2$, according to Theorem 3.1, $H_1\iso \mathbb Z_4$ would occur as an automorphism group, but it is well known that this is not the case; see \cite{SV1} among many others. It is safe to exclude these cases since the group is cyclic and corresponds to case 3 of Table 1.
Also, for $g=3$, take $N=3$ in case 3.d of Table 2 in \cite{Bu}. This case is not excluded from Theorem 3.1 (p. 273). In this case the group is $D_3$ (the dihedral group of order 6), and this group does not occur as an automorphism group of a genus 3 hyperelliptic curve; see \cite{MS}.
\subsection{Moduli spaces of covers}
\def{\bf C}{{\bf C}}
Let $\phi_0: \mathcal{X}_g \to \bP^1$ be the cover which corresponds to the degree 2 extension $K/k(X)$. Then, $\psi:= \phi \circ \phi_0$ has monodromy group $G:=\Aut(\mathcal{X}_g)$. From basic covering theory, the group $G$ is embedded in the group $S_n$, where $n=\deg \psi$. There is an $r$-tuple $\bar \s := (\s_1, \dots , \s_r)$, where $\s_i\in S_n$, such that $\s_1, \dots , \s_r$ generate $G$ and $\s_1 \cdots \s_r=1$. The signature of $\psi$ is an $r$-tuple of conjugacy classes ${\bf C} :=(C_1, \dots , C_r)$ in $S_n$ such that $C_i$ is the conjugacy class of $\s_i$. We use the notation $n^p$ to denote the conjugacy class of permutations which are a product of $p$ cycles of length $n$. Using the signature of $\phi: \bP^1 \to \bP^1$ given in \eqref{red_sig} and the Riemann-Hurwitz formula, one finds the signature of $\psi: \mathcal{X}_g \to \bP^1$ for any given $g$ and $G$. A natural question is whether a given group $G$ occurs as an automorphism group of a curve $\mathcal{X}_g$ with more than one signature ${\bf C}$ (cf. Theorem \ref{thm_2_2}).
For a fixed $G, {\bf C}$ the family of covers $\psi : \mathcal X} \def\Y{\mathcal Y} \def\fH{\mathfrak H} \def\t{\mu_g \to \bP^1$ is a Hurwitz space $\H (G, {\bf C})$. This is a quasiprojective variety, not a priori connected. To show irreducibility of $\H (G, {\bf C})$ one has to show that there is only one braid orbit in the set of Nielsen classes $Ni (G, {\bf C})$.
There is a map $$\Phi : \H (G, {\bf C}) \to \H_g$$ where $\H_g$ is the moduli space of genus $g$ hyperelliptic curves. We denote by $\delta (G, {\bf C})$ the dimension in $\H_g$ of $\Phi (\H (G, {\bf C}))$. Further, $i(G)$ denotes the number of involutions of $G$.
\begin{Theorem}\label{thm_2_2} For each $g\geq 2$, the groups $G$ that occur as automorphism groups and their signatures ${\bf C}$ are given in Table 1. Moreover, $\H (G, {\bf C})$ is an irreducible algebraic variety of dimension $\delta (G, {\bf C})$ as given in Table 1.
\end{Theorem}
\begin{tiny} \begin{table*}[hbt!] \label{table}
\vskip 0.2cm \begin{center} \renewcommand{\arraystretch}{1.24}
\begin{tabular}{||c|c|c|c|c|c|c||} \hline \hline $G$ & $\bar G$ & $\delta (G, {\bf C})$ & $\delta, \, n, \,g $ & ${\bf C}=(C_1, \dots C_r)$ & $ \bar \psi: \bP^1\to \bP^1$ & $i(G)$ \\ \hline \hline $\mathbb Z_2 \oplus \mathbb Z_n $ & &$\frac {2g+2} n -1$&$n < g+1$ & $(n^2, n^2, 2^n, \dots , 2^n)$ && 3 \\ $\mathbb Z_{2n}$ &$\mathbb Z_n$ &$\frac {2g+1}n -1$ & & $(n^2, 2n, 2^n, \dots , 2^n )$& $(n, n)$& 1\\ $\mathbb Z_{2n}$ & &$\frac {2g} n -1 $ &$n < g$ & $(2n, 2n, 2^n, \dots , 2^n)$ & & 1 \\ \hline \hline $\mathbb Z_2 \oplus D_n$ & &$\frac {g+1} n$ & & $( n^4, 2^{2n}, \dots , 2^{2n} )$ & & 2n+3\\ $V_n$ & &$\frac {g+1}n-\frac 1 2$ & & $(n^4, 4^n, 2^{2n}, \dots , 2^{2n} )$& & n+3\\ $D_{2n}$ &$D_n$&$\frac g n$ & & $((2n)^2, 2^{2n}, \dots , 2^{2n} )$& $(2^n, 2^n, n^2)$ &n+1 \\ $H_n$ & &$\frac {g+1} n -1$ &$n < g+1$ &$(4^n, 4^{n}, n^4, 2^{2n} \dots , 2^{2n} )$ & & 3 \\ $U_n$ & &$\frac g n- \frac 1 2$ &$g \neq 2$ &$(4^n, (2n)^2, 2^{2n}, \dots , 2^{2n})$ & & n+1 \\ $G_n$ & &$\frac g n - 1$ &$n < g$ &$(4^n, 4^n, (2n)^2, 2^{2n}, \dots , 2^{2n})$ & & 1 \\ \hline \hline $\mathbb Z_2\oplus A_4$ & &$\frac {g+1} 6$& &$( 3^8, 3^8, 2^{12}, \dots , 2^{12} )$ & & \\ $\mathbb Z_2\oplus A_4$ & &$\frac {g-1} 6$& & $( 3^8, 6^4, 2^{12}, \dots , 2^{12} )$& & 7 \\ $\mathbb Z_2\oplus A_4$&$A_4$&$\frac {g-3} 6$&$\delta\neq 0$ & $( 6^4, 6^4, 2^{12}, \dots , 2^{12})$&$(2^6, 3^4, 3^4)$ & \\
$SL_2(3) $ & & $\frac {g-2} 6$ &$\delta\neq 0$ &$(4^6, 3^8, 3^8, 2^{12}, \dots , 2^{12})$ & & \\ $SL_2(3) $ & & $\frac {g-4} 6$ & &$(4^6, 3^8, 6^4, 2^{12}, \dots , 2^{12})$ & & 1 \\ $SL_2(3) $ & & $\frac {g-6} 6$ &$\delta\neq 0$ &$(4^6, 6^4, 6^4, 2^{12}, \dots , 2^{12})$ & & \\ \hline \hline $\mathbb Z_2\oplus S_4$ & &$\frac {g+1} {12}$& &$( 3^{16}, 4^{12}, 2^{24}, \dots , 2^{24} )$& & \\ $\mathbb Z_2\oplus S_4$ & &$\frac {g-3} {12}$& &$( 6^{8}, 4^{12}, 2^{24}, \dots , 2^{24} )$& & 19\\ $GL_2(3)$ & &$\frac {g-2} {12}$& &$( 3^{16}, 8^{6}, 2^{24}, \dots , 2^{24} )$& & \\ $GL_2(3)$&$S_4$&$\frac {g-6}{12}$& &$( 6^{8}, 8^{6}, 2^{24}, \dots , 2^{24} )$&$(2^{12}, 3^8, 4^6)$&13 \\ $W_2$ & &$\frac {g-5} {12}$& &$( 4^{12}, 4^{12}, 3^{16}, 2^{24}, \dots , 2^{24} )$& & 7\\ $W_2$ & &$\frac {g-9} {12}$& &$( 4^{12}, 4^{12}, 6^{8}, 2^{24}, \dots , 2^{24} )$& & \\ $W_3$ & &$\frac {g-8} {12}$ & &$( 4^{12}, 3^{16}, 8^{6}, 2^{24}, \dots , 2^{24} )$& & 1 \\ $W_3$ & &$\frac {g-12} {12}$& &$( 4^{12}, 6^{8}, 8^{6}, 2^{24}, \dots , 2^{24} )$& & \\ \hline \hline $\mathbb Z_2\oplus A_5$ & & $\frac {g+1} {30}$& &$(3^{40}, 5^{24}, 2^{60}, \dots , 2^{60})$& &\\ $\mathbb Z_2\oplus A_5$ & & $\frac {g-5} {30}$& &$(3^{40}, 10^{12}, 2^{60}, \dots , 2^{60})$& & 31 \\ $\mathbb Z_2\oplus A_5$ & & $\frac {g-15} {30}$& &$(6^{20}, 10^{12}, 2^{60}, \dots , 2^{60})$& &\\ $\mathbb Z_2\oplus A_5$ & $A_5$ & $\frac {g-9} {30}$& &$(6^{20}, 5^{24}, 2^{60}, \dots , 2^{60})$& $( 2^{30}, 3^{20}, 5^{12} )$ &\\ $SL_2(5)$& &$\frac {g-14} {30}$& &$(4^{30}, 3^{40}, 5^{24}, 2^{60}, \dots , 2^{60})$& &\\ $SL_2(5)$& &$\frac {g-20} {30}$& &$(4^{30}, 3^{40}, 10^{12}, 2^{60}, \dots , 2^{60})$& & 1\\ $SL_2(5)$& &$\frac {g-24} {30}$ & &$(4^{30}, 6^{20}, 5^{24}, 2^{60}, \dots , 2^{60})$& &\\ $SL_2(5)$& &$\frac {g-30} {30}$ & &$(4^{30}, 6^{20}, 10^{12}, 2^{60}, \dots , 2^{60})$& &\\ \hline \hline \end{tabular} \end{center} \caption{Automorphism 
groups of hyperelliptic curves} \end{table*}
\end{tiny}
Finding algebraic descriptions of Hurwitz spaces is in general a difficult problem. In \cite{Sh7} it is shown that each of the spaces $\H (G, {\bf C})$ is a rational variety. Further, the inclusions between such loci are studied.
Let $t$ be the order of an automorphism of an algebraic curve $\mathcal X_g$ (not necessarily hyperelliptic). Hurwitz \cite{Hu} showed that $t \leq 10 (g-1)$. In 1895, Wiman improved this bound to $t \leq 2 (2g+1)$ and showed that it is best possible. Thus, if a cyclic group $H$
occurs as an automorphism group then $|H| \leq 2 (2g+1)$. Indeed, this bound is achieved for every genus by a hyperelliptic curve. For example, the curve $$Y^2=X(X^{2g+1}-1)$$ has automorphism group the cyclic group of order $4g+2$. This is the second case in Table 1, with $n=2g+1$. The family of such curves is $0$-dimensional in $\mathcal H_g$.
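For illustration only (not part of the argument), one can check this numerically. The sketch below, in Python, uses one natural choice of generator of our own, $(X, Y)\mapsto (\varepsilon X, -\varepsilon^{g+1} Y)$ with $\varepsilon$ a primitive $(2g+1)$-th root of unity, and verifies for $g=2$ that it preserves the curve and has order $4g+2=10$:

```python
import cmath

# Sanity check for g = 2: the curve Y^2 = X (X^{2g+1} - 1) admits an
# automorphism of order 4g + 2 = 10.  Our candidate generator is
# (X, Y) -> (zeta X, -zeta^{g+1} Y), zeta a primitive (2g+1)-th root of unity.
g = 2
n = 2 * g + 1                        # 5
zeta = cmath.exp(2j * cmath.pi / n)  # primitive n-th root of unity
c = -zeta ** (g + 1)                 # multiplier on the Y-coordinate

def F(x):                            # right-hand side of Y^2 = X (X^n - 1)
    return x * (x ** n - 1)

x0 = 1.7                             # a sample point on the curve
y0 = cmath.sqrt(F(x0))

# the map sends points of the curve to points of the curve ...
x1, y1 = zeta * x0, c * y0
assert abs(y1 ** 2 - F(x1)) < 1e-9

# ... and its order is exactly 10: the smallest k with zeta^k = c^k = 1
order = min(k for k in range(1, 11)
            if abs(zeta ** k - 1) < 1e-9 and abs(c ** k - 1) < 1e-9)
print(order)  # 10
```

The curve is preserved because $c^2 = \zeta^{2g+2} = \zeta$, so $y_1^2 = \zeta\, F(x_0) = F(\zeta x_0)$.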
Now we turn our attention to determining whether a given curve $\mathcal X_g$ belongs to one of the families of Table 1. In other words, we seek conditions in terms of the coefficients of $\mathcal X_g$ under which $\mathcal X_g$ belongs to a family in Table 1. This would determine $\Aut (\mathcal X_g)$.
\section{Invariants of Binary Forms}
In this section we define the action of $ GL_2(k)$ on binary forms and discuss the basic notions of their invariants. Let $k[X,Z]$ be the polynomial ring in two variables and let $V_d$ denote the $(d+1)$-dimensional subspace of $k[X,Z]$ consisting of homogeneous polynomials
\begin{equation} \label{eq1} f(X,Z) = a_0X^d + a_1X^{d-1}Z + ... + a_dZ^d \end{equation}
of degree $d$. Elements in $V_d$ are called {\it binary forms} of degree $d$. We let $GL_2(k)$ act as a group of automorphisms on $ k[X, Z] $ as follows:
\begin{equation}
\textit{if } M = \begin{pmatrix} a &b \\ c & d \end{pmatrix} \in GL_2(k), \textit{ then }
\quad M \begin{pmatrix} X\\ Z \end{pmatrix} = \begin{pmatrix} aX+bZ\\ cX+dZ \end{pmatrix}. \end{equation}
This action of $GL_2(k)$ leaves $V_d$ invariant and acts irreducibly on $V_d$.
Let $A_0$, $A_1$, ... , $A_d$ be coordinate functions on $V_d$. Then the coordinate ring of $V_d$ can be identified with $ k[A_0 , ... , A_d] $. For $I \in k[A_0, ... , A_d]$ and $M \in GL_2(k)$, define $I^M \in k[A_0, ... ,A_d]$ as follows
\begin{equation} \label{eq_I} {I^M}(f):= I(M(f)) \end{equation} for all $f \in V_d$. Then $I^{MN} = (I^{M})^{N}$ and Eq.~(\ref{eq_I}) defines an action of $GL_2(k)$ on $k[A_0, ... ,A_d]$.
A homogeneous polynomial $I\in k[A_0, \dots , A_d, X, Z]$ is called a {\it covariant} of index $s$ if $$I^M(f)=\delta^s I(f)$$
where $\delta =\det(M)$. The homogeneous degree in $A_0, \dots , A_d$ is called the {\it degree} of $I$, and the homogeneous degree in $X, Z$ is called the {\it order} of $I$. A covariant of order zero is called an {\it invariant}. An invariant is an $SL_2(k)$-invariant on $V_d$.
We will use the symbolic method of classical theory to construct covariants of binary forms. Let
\begin{equation} \begin{split} f(X,Z):= & \sum_{i=0}^n \begin{pmatrix} n \\ i \end{pmatrix}a_i X^{n-i} \, Z^i, \\
g(X,Z) := & \sum_{i=0}^m \begin{pmatrix} m \\ i \end{pmatrix} b_i
X^{m-i} \, Z^i \\ \end{split} \end{equation}
be binary forms of degree $n$ and $m$ respectively with coefficients in $k$. We define the {\bf r-transvection}
\begin{equation} (f,g)^r:= c \cdot \sum_{k=0}^r (-1)^k \begin{pmatrix} r \\ k \end{pmatrix} \cdot \frac {\partial^r f} {\partial X^{r-k} \, \, \partial Z^k} \cdot \frac {\partial^r g} {\partial X^k \, \, \partial Z^{r-k}} \end{equation}
where $c=\frac {(m-r)! \, (n-r)!} {n! \, m!}$. It is a homogeneous
polynomial in $k[X, Z]$ and therefore a covariant of order $m+n-2r$
and degree 2. In general, the $r$-transvection of two covariants of
order $m, n$ (resp., degree $p, q$) is a covariant of order $m+n-2r$
(resp., degree $p+q$).
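For concreteness, the transvection can be computed mechanically on coefficient dictionaries. The following Python sketch (the helper names are ours) verifies, for instance, that $(f,f)^2$ of the quartic $f=X^4+Z^4$ equals $2X^2Z^2$, a covariant of order $4+4-4=4$:

```python
from fractions import Fraction
from math import comb, factorial

# Bivariate polynomials are stored as {(i, j): coeff} for the monomial X^i Z^j.
def diff(p, var, k):
    """k-th partial derivative; var = 0 differentiates in X, var = 1 in Z."""
    for _ in range(k):
        q = {}
        for (i, j), c in p.items():
            if var == 0 and i > 0:
                q[(i - 1, j)] = q.get((i - 1, j), 0) + c * i
            if var == 1 and j > 0:
                q[(i, j - 1)] = q.get((i, j - 1), 0) + c * j
        p = q
    return p

def mul(p, q):
    r = {}
    for (i, j), c in p.items():
        for (a, b), d in q.items():
            r[(i + a, j + b)] = r.get((i + a, j + b), 0) + c * d
    return r

def transvection(f, g, n, m, r):
    """r-th transvection (f, g)^r of binary forms of degrees n and m,
    with the normalization c = (m-r)! (n-r)! / (n! m!)."""
    c = Fraction(factorial(m - r) * factorial(n - r), factorial(n) * factorial(m))
    out = {}
    for k in range(r + 1):
        term = mul(diff(diff(f, 0, r - k), 1, k),
                   diff(diff(g, 0, k), 1, r - k))
        sign = (-1) ** k * comb(r, k)
        for key, val in term.items():
            out[key] = out.get(key, 0) + sign * val
    return {key: c * val for key, val in out.items() if val != 0}

# (f, f)^2 of the quartic f = X^4 + Z^4 is the single monomial 2 X^2 Z^2
f = {(4, 0): 1, (0, 4): 1}
h = transvection(f, f, 4, 4, 2)
```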
For the rest of this paper $F(X,Z)$ denotes a binary form of degree $d:=2g+2$ as below \begin{equation} F(X,Z) = \sum_{i=0}^d a_i X^i Z^{d-i} = \sum_{i=0}^d \begin{pmatrix} d \\ i \end{pmatrix} b_i X^i Z^{d-i} \end{equation} where $b_i=\frac {(d-i)! \, \, i!} {d!} \cdot a_i$, for $i=0, \dots , d$. We denote invariants (resp., covariants) of binary forms by $I_s$ (resp., $J_s$), where the subscript $s$ denotes the degree (resp., the order). We define the following covariants and invariants:
\begin{equation}\label{covar} \begin{split}
I_2 & :=(F,F)^d, \\ J_{4j} & := (F,F)^{d-2j}, \, \, j=1, \dots , g, \\ I_4 & :=(J_4, J_4)^4, \\ I_4' &:= (J_8, J_8)^8, \\ I_6 & :=((F, J_4)^4, (F, J_4)^4)^{d-4}, \\ I_6' &:=((F, J_8)^8, (F, J_8)^8)^{d-8}, \\ I_6'' & :=((F, J_{12})^{12}, (F, J_{12})^{12})^{d-12}, \\ I_3 &:=(F, J_d)^d, \\ M & :=((F, J_4)^4, (F, J_8)^8)^{d-10}, \\ I_{12} & :=(M, M)^8\\ \end{split} \end{equation}
$GL_2(k)$-invariants are called {\it absolute invariants}. We define the following absolute invariants:
\begin{equation} \begin{split} & i_1:=\frac {I_4'} {I_2^2},\, i_2:=\frac {I_3^2} {I_2^3},\, i_3:=\frac {I_6''} {I_2^3}, \, j_1 := \frac {I_6'} {I_3^2}, \\ & j_2:= \frac {I_6} {I_3^2}, \, s_1:=\frac {I_6^2} {I_{12}}, \, s_2:=\frac {(I_6')^2} {I_{12}}, \, \mathfrak v_1:= \frac {I_6} {I_6''}, \\ & \mathfrak v_2:=\frac {(I_4')^3} {I_3^4}, \, \, \mathfrak v_3:= \frac {I_6} {I_6'}, \, \mathfrak v_4:=\frac {(I_6'')^2} {I_4^3}, \, \mathfrak v_5:=\frac {I_6''}{I_6'} \end{split} \end{equation}
For a given curve $\mathcal X_g$ we denote by $I(\mathcal X_g)$ or $i(\mathcal X_g)$ the corresponding invariants. Two isomorphic hyperelliptic curves have the same absolute invariants.
\begin{remark} It is an open problem to determine the field of invariants of binary forms of degree $d \geq 7$. \end{remark}
\section{Equations of curves}
In this section we state the equations of curves in each case of Table 1. For a more detailed treatment of these spaces, including proofs, the reader can check \cite{Sh6}, \cite{Sh7}. The reader can also check \cite{BG}, where equations for each family are computed; however, the main goal of that book is to study hyperelliptic Riemann surfaces with real structures. In this section $G$ denotes a group as in the first column of Table 1, and $\mathcal L_g^G$ the locus of hyperelliptic genus $g$ curves $\mathcal X_g$ such that $G$ is embedded in $\Aut(\mathcal X_g)$.
\subsection{$\overline{\mbox{Aut}}(\mathcal X_g)$ is isomorphic to $\mathbb Z_n$}
If $\overline{\mbox{Aut}}(\mathcal X_g)\cong \mathbb Z_n$ then $\mathcal X_g$ belongs to cases 1, 2, 3 of Table 1. These loci were studied in detail in \cite{Sh6}. The families of curves are given below:
\begin{equation} \begin{split} Y^2= & X^{nt}+ \dots + a_i X^{n(t-i)} + \dots + a_{t-1} X^n +1, \\ Y^2= & X^{nt}+ \dots + a_i X^{n(t-i)} + \dots + a_{t-1} X^n +1, \\ Y^2= & X\, (X^{nt} + \dots + a_i X^{n(t-i)} + \dots + a_{t-1} X^n +1) \\ \end{split} \end{equation}
where $t$ is respectively $ \frac {2g+2} n, \frac {2g+1} n, \frac {2g} n$. To classify these curves (up to isomorphism) we need to find invariants of the $GL_2(k)$-action on $k(a_1, \dots , a_{t-1})$. The quantities \begin{equation}
u_i:= a_1^{t-i} \, a_i \, + \, a_{\delta}^{t-i} \, a_{t-i}, \quad \text{for} \quad 1 \leq i \leq \delta\\
\end{equation}
are called {\it dihedral invariants} for genus $g$, and the tuple $$\mathfrak u:=(u_1, \dots , u_\delta)$$ is called the {\it tuple of dihedral invariants}. It can be checked that $\mathfrak u=0$ if and only if $a_1=a_\delta=0$. In this case, replacing $a_1, a_\delta$ by $a_2, a_{\delta-1}$ in the formula above gives new invariants. The next theorem shows that the dihedral invariants generate $k(\mathcal L_g^G)$.
\begin{Theorem} $\mathcal L_g^G$ is a $\delta$-dimensional rational variety. Moreover, $ k(\mathcal L_g^G) =k(u_1, \dots , u_\delta)$. \end{Theorem}
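As a quick sanity check of the definition, the $u_i$ can be computed directly from the coefficient list; moreover, for the first normal form the substitution $X \to 1/X$ (composed with the appropriate rescaling of $Y$) reverses the coefficients $a_1, \dots, a_{t-1}$, and the dihedral invariants are unchanged under this reversal. A small Python sketch (the helper name is ours):

```python
# Dihedral invariants of Y^2 = X^{nt} + a_1 X^{n(t-1)} + ... + a_{t-1} X^n + 1,
# computed from the coefficient list a = [a_1, ..., a_{t-1}] (delta = t - 1):
#     u_i = a_1^{t-i} a_i + a_delta^{t-i} a_{t-i}
def dihedral_invariants(a):
    t = len(a) + 1
    delta = t - 1
    return [a[0] ** (t - i) * a[i - 1] + a[delta - 1] ** (t - i) * a[t - i - 1]
            for i in range(1, delta + 1)]

# For genus 2 (t = 3) this recovers u_1 = a_1^3 + a_2^3 and u_2 = 2 a_1 a_2.
print(dihedral_invariants([2, 3]))  # [35, 12]

# X -> 1/X reverses the coefficient list; the invariants do not change.
a = [2, 5, 7, 3]
assert dihedral_invariants(a) == dihedral_invariants(a[::-1])
```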
If $n=2$ then $G$ is the Klein 4-group and $\mathcal L_g^G=\mathcal L_g$, where $\mathcal L_g$ is the locus of hyperelliptic curves with extra involutions; see \cite{GS}. A nice necessary and sufficient condition, in terms of the dihedral invariants, for a curve to have more than three involutions is found in \cite{Sh5}. More precisely, for such curves the following relation holds: $$2^{g-1}u_1^2 - u_g^{g+1}=0.$$
\subsection{$\overline{\mbox{Aut}}(\mathcal X_g)$ is isomorphic to $D_n$}
The dihedral group $$D_n=\langle \sigma, \tau \mid \, \, \sigma^n=\tau^2=1\rangle$$ acts on the curves via $$ \sigma(X)=\varepsilon_n\, X, \quad \tau(X)=\frac 1 X,$$ where $\varepsilon_n$ is a primitive $n$-th root of unity.
Then $\sigma$ fixes $X= 0, \infty$, while $\tau$ fixes $X=\pm 1$ and permutes $0$ and $\infty$. We let $$G(X):= \prod_{i=1}^t (X^{2n} + \lambda_i X^n +1).$$ Then, \begin{equation} \begin{split} G(X)= & X^{2nt}+ a_1 X^{2nt-n} + \dots + a_t X^{nt} + \\ & a_{t-1}
X^{n(t-1)} + \dots + a_1 X^n +1 \end{split} \end{equation}
where $a_i$, $i=1,\dots, t$, are polynomials in terms of the symmetric polynomials $s_1, \dots , s_t$ of the $\lambda_i$ (i.e., $a_1=s_1$, $a_2=t+s_2$, $a_3=(t-1)s_1+s_3$, $a_4=\begin{pmatrix} t \\ 2 \end{pmatrix} + (t-2)s_2 +s_4$, etc.).
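The closed forms for the $a_i$ are easy to confirm numerically by expanding the product. The following Python snippet (our own check, with $t=4$ and sample values of the $\lambda_i$) verifies the formula for $a_4$, which is the coefficient of $X^{n(2t-4)}$, i.e.\ of $Y^{2t-4}$ with $Y=X^n$:

```python
from itertools import combinations
from math import comb

def polymul(p, q):
    """Multiply two polynomials given as coefficient lists (ascending powers)."""
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

# Expand prod_{i=1}^t (Y^2 + lambda_i Y + 1) for sample values lambda_i.
lam = [2, 3, 5, 7]
t = len(lam)
prod = [1]
for l in lam:
    prod = polymul(prod, [1, l, 1])

s2 = sum(a * b for a, b in combinations(lam, 2))
s4 = lam[0] * lam[1] * lam[2] * lam[3]
# a_4 is the coefficient of Y^{2t-4}; compare with binom(t,2) + (t-2) s_2 + s_4.
assert prod[2 * t - 4] == comb(t, 2) + (t - 2) * s2 + s4
print(prod[2 * t - 4])  # 418
```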
Depending on whether $0, \pm 1, $ and $\infty$ are Weierstrass points we get the equations $Y^2=F(X)$ where
\begin{equation} \begin{split}
F(X) = &\, G(X), \quad (X^n-1) \cdot G(X), \\ &\, X \cdot G(X),
\quad (X^{2n}-1) \cdot G(X)\,\\ &\, X\, (X^n-1) \cdot
G(X), \quad X\, (X^{2n}-1) \cdot G(X) \\ \end{split} \end{equation}
where $n$ is respectively as in cases 4-9 of Table 1.
\begin{remark} Notice that in all cases $n$ is even; see Theorem 2.1 in \cite{Bu}. \end{remark}
The case $Y^2=G(X)$ corresponds to the group $\mathbb Z_2 \oplus D_n$. If $n=2$, then this is a special case of $G\cong \mathbb Z_2 \oplus \mathbb Z_n$. Indeed, $$2^{g-1}u_1^2 - u_g^{g+1}=0$$
as expected; see \cite{Sh5} for details. If $n> 2$ then
$$\mathfrak u=(u_1, \dots , u_g)=(0, \dots , 0), $$ where $\mathfrak u=(u_1, \dots , u_g) $ is as defined in \cite{GS}.
\subsection{$\overline{\mbox{Aut}}(\mathcal X_g)$ is isomorphic to $A_4$}
This case is treated in detail in \cite{Sh6}. Let \begin{small} \begin{equation} \label{G} G_{i} (X)=X^{12} - \lambda_i X^{10} - 33 X^8 + 2 \lambda_i X^6 - 33 X^4 - \lambda_i X^2+1, \end{equation} \end{small}
for $\lambda_i^2+108\neq 0$. Denote by
$$G(X):=\prod_{i=1}^{\delta} G_i(X).$$
Then, each family is parameterized as in Table 2.
\begin{table}[ht]
\begin{center} \renewcommand{\arraystretch}{1.24}
\begin{tabular}{||c|c|c||} \hline \hline $G$ & $\delta$ & Equation $Y^2= $ \\ \hline \hline $\mathbb Z_2\oplus A_4$ & $\frac {g+1} 6$ & $G(X)$ \\ $\mathbb Z_2\oplus A_4$ & $\frac {g-1} 6$ & $ (X^4+2i \sqrt{3}X^2 +1)\cdot G(X)$ \\
$\mathbb Z_2\oplus A_4$ &$\frac {g-3} 6$& $ (X^8+14 X^4+1) \cdot G(X)$ \\
$SL_2(3) $ & $\frac {g-2} 6$ & $X (X^4-1) \cdot G(X) $ \\
$SL_2(3) $ & $\frac {g-4} 6$ & $X (X^4-1)(X^4+2i \sqrt{3} X^2 +1) \cdot G(X)$\\
$SL_2(3) $ & $\frac {g-6} 6$ & $X(X^4-1) (X^8+14X^4+1)\cdot G(X)$ \\ \hline \hline \end{tabular} \end{center} \caption{Hyperelliptic curves with $\overline{\mbox{Aut}}(\mathcal X_g)= A_4$} \end{table}
The following lemma gives a necessary condition for a curve to have automorphism group $\mathbb Z_2 \oplus A_4$ or $SL_2(3)$.
\begin{lemma} Let $\mathcal X_g$ be a hyperelliptic curve of genus $g$ with $\overline{\mbox{Aut}}(\mathcal X_g)\cong A_4$. Then, $I_4=0$. Moreover:
i) if $g=4$ then $I_2=I_4=I_4'=I_6'=0$;
ii) if $g=5, 9, 12$ then $I_4=I_6=0$;
iii) if $g=7, 10$ then $I_2=I_4=I_4'=I_6''=0$;
iv) if $g=8$ then $I_4=0$. \end{lemma}
\subsection{$\overline{\mbox{Aut}}(\mathcal X_g)$ is isomorphic to $S_4$}
In this case the reduced automorphism group is generated by $$\sigma (X)=- \frac {X-1} {X+1}, \quad \tau(X)=i X.$$
We also denote
\begin{equation} \begin{split} G_i(X):= & X^{24} + \lambda_i X^{20} + (759-4\lambda_i)X^{16} + 2 (3\lambda_i +\\ & 1288)
X^{12} + (759-4\lambda_i)X^{8}+ \lambda_i X^{4}+1, \\ \\ R(X):= & X^{12} -
33 X^8 - 33 X^4 +1, \\ S(X):= & X^8 + 14 X^4 + 1, \\ T(X):=
& X^4-1. \end{split} \end{equation}
Let $$G(X):=\prod_{i=1}^{\delta} G_i(X)$$ where $\delta$ is as in Table 1. Then, the equations of the curves in each case are $Y^2=F(X)$, where $F$ is as below (we suppress $X$):
$$F= G, \, S G, \, T G, \, S T G, \, R G, \, R S G, \, R T G, \, R S T G. $$
Conditions in terms of the classical invariants, similar to those of the previous case, can be obtained here as well.
\begin{lemma}Let $\mathcal X_g$ be a hyperelliptic curve of genus $g$ with $\overline{\mbox{Aut}}(\mathcal X_g)\cong S_4$. Then, $I_4=0$. \end{lemma}
\subsection{$\overline{\mbox{Aut}}(\mathcal X_g)$ is isomorphic to $A_5$}
We briefly state the equations here. We denote by $G_i(X)$, $R(X)$, $S(X)$, $T(X)$ the following:
\begin{tiny} \begin{equation} \begin{split} G_i (X):= & (\lambda_i -1) X^{60} - 36\, (19\lambda_i +29) X^{55}+6 (26239\lambda_i - 42079) X^{50}\\ & - 540 (23199\lambda_i -19343) X^{45} + 105 (737719\lambda_i - 953143) X^{40} \\ & - 72 (1815127\lambda_i - 145087) X^{35} - 4 (8302981\lambda_i +49913771) X^{30}\\ & + 72 (1815127\lambda_i - 145087) X^{25} + 105 (737719\lambda_i - 953143) X^{20}\\ & + 540 (23199\lambda_i -19343) X^{15} + 6 ( 26239\lambda_i - 42079 ) X^{10} \\ & + 36\, (19\lambda_i +29) X^{5}+(\lambda_i-1)\\ \\ R(X):= & X^{30}+522X^{25} -10005X^{20} -10005X^{10} -522X^5+1\\ \\ S(X):= & X^{20} - 228X^{15} + 494X^{10} +228X^5 +1\\ \\ T(X):= & X^{10}+11X^5-1.\\ \end{split} \end{equation} \end{tiny}
As above, we let $$G(X):=\prod_{i=1}^{\delta} G_i(X).$$ In the order of Table 1, the equations are given as $Y^2=F(X)$, where $F$ is as below (we suppress $X$):
$$F= G, \, S G, \, T G, \, S T G, \, R G, \, R S G, \, R T G, \, R S T G $$
These curves can be expressed as $Y^2 =\, M(X^2)$ or $Y^2=X\cdot M(X^2)$ where $M$ is a polynomial in $X^2$. This fact will be used in the next section. The expressions are rather large and we will not state them here. However, we get the following useful fact:
\begin{lemma}Let $\mathcal X_g$ be a hyperelliptic curve of genus $g$ with $\overline{\mbox{Aut}}(\mathcal X_g)\cong A_5$. Then, $I_{4}=I_4'=I_6=I_6'=I_{12}=0$. \end{lemma}
\section{Determining the automorphism group of a given curve}
Let $\mathcal X_g$ be given. We want to determine $\Aut(\mathcal X_g)$. In order to find an algorithm which works for any $g$, we would have to check whether $\mathcal X_g$ can be written in any of the forms above. Thus, we want to find whether there is a coordinate change
$$X \to \frac {aX + b} {cX+d}$$
which transforms $\mathcal X_g$ to one of the forms of Section 4. This would require solving a system of equations for each case and therefore would not be efficient.
\subsection{Using classical invariants}
For a fixed $g$ we know the dimension $\delta$ of the locus $\mathcal L_g^G$. We compute enough absolute invariants to generate this locus. Thus, we determine the loci $\mathcal L_g^G$ for all $G$ in Table 1 in terms of some invariants $i_1, \dots, i_{\delta+1}$. These loci are computed only once for each $g$. Then, for a particular curve we simply compute these invariants and check whether they generate any of the loci $\mathcal L_g^G$. These spaces were computed in detail in \cite{Sh7} for $\bar G=A_4$. We will illustrate with $g \leq 12$ and $\bar G \cong A_4, S_4, A_5$.
We define $\mathfrak p(\mathcal X_g)$ as follows:
\begin{equation*} \mathfrak p(\mathcal X_g):=(\mathfrak p_1, \mathfrak p_2)= \begin{cases} \mathfrak v_1, & \text{ if } g=4, \\ (i_1, i_2), & \text{ if } g=5, 9, \text{ and } I_2\neq 0, \\ \mathfrak v_2, & \text{ if } g=5, 9, \text{ and } I_2=0, \\ (j_1, j_2), & \text{ if } g=7 \text{ and } I_3 \neq 0, \\ \mathfrak v_3, & \text{ if } g=7 \text{ and } I_3 = 0, \\ (i_1, i_3), & \text{ if } g=8, 12, \text{ and } I_2\neq 0, \\ \mathfrak v_4, & \text{ if } g=8, 12, \text{ and } I_2 = 0, \\ (s_2, s_1), & \text{ if } g=10 \text{ and } I_{12}\neq 0, \\ \mathfrak v_5, & \text{ if } g=10 \text{ and } I_{12}= 0. \end{cases} \end{equation*}
From Lemma 4.3 and the corresponding results for $\bar G \cong S_4, A_5$, one can check that $\mathfrak p(\mathcal X_g)$ is well defined. Moreover, the subvariety $\mathcal L_g^G$ is 1-dimensional if $\bar G$ is isomorphic to $A_4, S_4, A_5$. For each parametric curve $\mathcal X_g$ of the previous section we compute $\mathfrak p(\mathcal X_g)$ in terms of the parameter $\lambda$. Eliminating $\lambda$ gives an equation for $\mathcal L_g^G$; see \cite{Sh7} for explicit equations.
The following algorithm determines whether the automorphism group of a hyperelliptic curve of genus $g\leq 12$ is isomorphic to $\mathbb Z_2 \oplus A_4$, $\mathbb Z_2\oplus S_4$, $\mathbb Z_2\oplus A_5$, $SL_2(3)$, $SL_2(5)$, $GL_2(3)$, $W_2$, or $W_3$.
\noindent {\sc Algorithm 1:}
{\bf Input:} A hyperelliptic curve $\mathcal X_g: Y^2= F(X, Z)$.
{\bf Output:} Determine whether the automorphism group $\Aut(\mathcal X_g)$ is one of $\mathbb Z_2 \oplus A_4$, $\mathbb Z_2\oplus S_4$, $\mathbb Z_2\oplus A_5$, $SL_2(3)$, $SL_2(5)$, $GL_2(3)$, $W_2$, $W_3$.
{\bf Step 1:} Compute $I_4 (\mathcal X_g)$. If $I_4\neq 0$ then $\Aut(\mathcal X_g)$ is not isomorphic to any of $\mathbb Z_2 \oplus A_4$, $\mathbb Z_2\oplus S_4$, $\mathbb Z_2\oplus A_5$, $SL_2(3)$, $SL_2(5)$, $GL_2(3)$, $W_2$, $W_3$. Otherwise go to {\bf Step 2}.
{\bf Step 2:} Compute $\mathfrak p(\mathcal X_g)$.
{\bf Step 3:} Find the locus $\mathcal L_g^G$ whose equation is satisfied by $\mathfrak p(\mathcal X_g)$ (equations are given in \cite{Sh6}). Then, $\Aut(\mathcal X_g)$ is isomorphic to $G$.
The definition of $\mathfrak p(\mathcal X_g)$ is a little more elaborate for $\bar G=\mathbb Z_n, D_n$, since the dimension of $\mathcal L_g^G$ is $> 1$. Once the definition of the moduli point is modified and the corresponding loci $\mathcal L_g^G$ are computed, the following can be used:
\noindent {\sc Algorithm 2:}
{\bf Input:} A hyperelliptic curve $\mathcal X_g: Y^2= F(X, Z)$.
{\bf Output:} The automorphism group $\Aut(\mathcal X_g)$.
{\bf Step 1:} Compute $\mathfrak p(\mathcal X_g)$.
{\bf Step 2:} Find the locus $\mathcal L_g^G$ whose equation is satisfied by $\mathfrak p(\mathcal X_g)$. Then, $\Aut(\mathcal X_g)$ is isomorphic to $G$.
The above method of classical invariants is difficult to implement for large $g$, because finding enough absolute invariants is not an easy task in that case. Moreover, the expressions of these invariants and the equations of the loci $\mathcal L_g^G$ get very large as $g$ grows. To deal with these problems we use the dihedral invariants, which are explained next.
\subsection{Using dihedral invariants}
In Section 4.1 we introduced dihedral invariants for hyperelliptic curves $\mathcal X_g$ such that $\Aut (\mathcal X_g) \cong \mathbb Z_n$. In this section we generalize this approach to all hyperelliptic curves with extra automorphisms. Theorem 5.1 makes this generalization possible.
Let $\mathcal X_g$ be a hyperelliptic curve with extra automorphisms. The following theorem gives a general description of how to write an equation for $\mathcal X_g$.
\noindent \begin{Theorem}\label{thm5}
Let $\mathcal X_g$ be a hyperelliptic curve with $$| \Aut(\mathcal X_g)| > 2.$$ Then, $\mathcal X_g$ can be written as
\begin{equation}\label{normal}
Y^2=F(X^n), \quad \textit{ or } \quad Y^2=X\cdot F(X^n), \end{equation}
where $n=2$, or $n$ is odd and divides one of $2g+2$, $2g+1$, $g$. Moreover, if $n> 2$ then $\Aut(\mathcal X_g)$ is a cyclic group. \end{Theorem}
Let $\mathcal X_g$ be a hyperelliptic curve with $| \Aut(\mathcal X_g)| > 2$, written as in \eqref{normal}. We call this form a {\bf decomposition} of $\mathcal X_g$. Let $s$ be the smallest $n$ for which such a decomposition is possible. Then, \begin{equation}\label{normal2} Y^2= F(X^{s}), \quad \text{or} \quad Y^2=X \cdot F(X^{s}) \end{equation} is called the {\bf normal decomposition} or the {\bf normal form} of $\mathcal X_g$, and $s$ is called the {\bf degree} of the decomposition. If no such decomposition is possible then we set $s=1$. Let $\mathcal X_g$ be in its normal decomposition as given below:
\begin{equation} \begin{split} Y^2= & X^{nt}+ \dots + a_i X^{n(t-i)} + \dots + a_{t-1} X^n +1, \\ Y^2= & X\, (X^{nt} + \dots + a_i X^{n(t-i)} + \dots + a_{t-1} X^n +1) \\ \end{split} \end{equation}
where $nt=2g+2, 2g+1, 2g$.
We define the following: \begin{equation}
u_i:= a_1^{t-i} \, a_i \, + \, a_{\delta}^{t-i} \, a_{t-i}, \quad \text{for} \quad 1 \leq i \leq \delta=t-1,\\
\end{equation}
which are called {\it dihedral invariants} for genus $g$, and the tuple $$\mathfrak U^1:=(u_1, \dots , u_\delta)$$ is called the {\it tuple of dihedral invariants}. It can be checked that $\mathfrak U^1=0$ if and only if $a_1=a_\delta=0$. In that case, let $(a_j, a_{\delta-j+1})$ be the first nonzero tuple. Replacing $a_1, a_\delta$ by $a_j, a_{\delta-j+1}$ in the formula above gives new invariants. Thus, we define
\begin{equation}\label{u_i} u_i^j:= a_j^{\delta-i+1} \, a_i \, + \, a_{\delta-j+1}^{\delta-i+1} \, a_{\delta-i+1}, \end{equation}
for $1 \leq i \leq \delta$, and $ 1 \leq j \leq [\frac {\delta+1} 2]$. Then
\begin{equation} \mathfrak U^j:=(u_1^j, \dots , u_m^j) \end{equation}
where $m=\delta - 2j$.
\noindent {\sc Algorithm 3:}
{\bf Input:} A hyperelliptic curve $\mathcal X_g: Y^2= F(X, Z)$.
{\bf Output:} The automorphism group $\Aut(\mathcal X_g)$.
{\bf Step 1:} Check whether the curve has a normal decomposition. If
``Yes'' then go to {\bf Step 2}; otherwise $\Aut (\mathcal X_g)=\mathbb Z_2$.
{\bf Step 2:} Compute the degree $s$ of the normal decomposition. If $s$
is odd then $\Aut(\mathcal X_g) \cong \mathbb Z_{2s}$; otherwise go to {\bf Step 3}.
{\bf Step 3:} Compute the dihedral invariants $u_i^j$ of the
normal decomposition. Go to {\bf Step 4}.
{\bf Step 4:} Find the locus
$\mathcal L_g^G$ whose equation is satisfied by the $u_i^j$. Then, $\Aut(\mathcal X_g)$ is isomorphic to $G$.
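Step 1 is a genuinely nonlinear problem in general; however, when the curve is already presented in the coordinates of \eqref{normal}, the admissible degrees $s>1$ are exactly the divisors of the gcd of the exponent support of $F$. A hedged Python sketch of this special case (the helper name is ours; for the form $Y^2 = X\,F(X^s)$ one would first factor out $X$):

```python
from functools import reduce
from math import gcd

def decomposition_degrees(coeffs):
    """Degrees s > 1 with F(X) = G(X^s), for F given by its coefficient
    list [c_0, c_1, ..., c_d] (c_i is the coefficient of X^i).
    Only detects decompositions in the given coordinate; it does NOT
    search over fractional linear changes of variable."""
    support = [i for i, c in enumerate(coeffs) if c != 0]
    g = reduce(gcd, support, 0)
    return [s for s in range(2, g + 1) if g % s == 0]

# F(X) = X^6 + 2 X^4 + 3 X^2 + 1 decomposes only through X^2:
print(decomposition_degrees([1, 0, 3, 0, 2, 0, 1]))  # [2]
```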
The above method was used in \cite{SV1} and \cite{GS} to determine the automorphism groups of curves of genus 2 and 3. It has the advantage that it can be used for any $g$, no matter how large. A disadvantage is that a nonlinear system of equations must be solved in order to determine the normal decomposition.
\begin{Example}
For genus 2, the curve can be written as $$Y^2=X^6+a_1 X^4 + a_2 X^2 +1 $$ and its dihedral invariants are
$$u_1=a_1^3 +a_2^3, \quad \quad u_2 = 2 a_1 a_2. $$
Then,
a) $G\cong V_6$ if and only if $(u_1, u_2 )=(0,0)$ or $$(u_1, u_2 )=(6750, 450).$$
b) $G\cong GL_2(3) $ if and only if $(u_1, u_2) = ( -250, 50)$.
c) $G\cong D_{6}$ if and only if $$ u_2^2 - 220 u_2 -16 u_1 +4500=0,$$ for $u_2 \neq 18, 140 + 60\sqrt{5}, 50$.
d) $G\cong D_4$ if and only if $$ 2 u_1^2-u_2^3=0,$$ for $u_2 \neq 2, 18, 0, 50, 450$. The cases $u_2 = 0, 450, 50 $ reduce to cases a) and b), respectively; see \cite{SV1} for details. \end{Example}
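The relation in d) is easy to probe numerically: a curve of the form $Y^2=X^6+a X^4+ a X^2+1$ is preserved by $X \mapsto 1/X$, so its invariants should lie on the locus $2u_1^2-u_2^3=0$. A quick Python check (our own, with $a$ chosen so that $u_2$ avoids the excluded values):

```python
# u_1, u_2 for the genus-2 curve Y^2 = X^6 + a1 X^4 + a2 X^2 + 1
def u_invariants(a1, a2):
    return a1 ** 3 + a2 ** 3, 2 * a1 * a2

# A curve with a1 = a2 admits the extra involution X -> 1/X, so its
# dihedral invariants satisfy the D_4 locus equation 2 u1^2 - u2^3 = 0.
for a in (2, 4, 11):          # u_2 = 2 a^2 avoids 0, 2, 18, 50, 450
    u1, u2 = u_invariants(a, a)
    assert 2 * u1 ** 2 - u2 ** 3 == 0
print("D_4 locus equation verified")
```

Indeed, with $a_1=a_2=a$ one gets $u_1=2a^3$ and $u_2=2a^2$, so $2u_1^2 = 8a^6 = u_2^3$ identically.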
\begin{remark} The notation used in \cite{SV1} to denote the groups is different. The group $V_6$ in this case has order 24 and is identified in \cite{SV1} with $\mathbb Z_3 {\rtimes} D_4$. \end{remark}
\section{Closing remarks}
We briefly described techniques for determining the automorphism group of a hyperelliptic curve. A combination of both methods sometimes produces better results. Our goal is to combine these methods and explicitly compute the loci $\mathcal L_g^G$ for reasonable $g$ (i.e., $g \leq 60$).
There are polynomial time algorithms to compute the decomposition of a polynomial $F(X)$ up to an affine transformation $X \to aX +b$; see \cite{Gu}. However, this is not sufficient for our purposes, since we want to find such a decomposition up to a linear fractional transformation $X \to \frac {aX + b} {cX + d}$. If a polynomial time algorithm were found for this case, it would make the second method preferable to the first.
Besides computing the automorphism groups, the above techniques can also be used to answer other questions on hyperelliptic curves. For example, dihedral invariants can be used to determine the field of moduli of a given curve. The reader can check \cite{Sh5} for details and open questions on the field of moduli and other computational aspects of hyperelliptic curves.
\end{document}
\begin{document}
\title{A Unified Approach to Discrepancy Minimization}
\begin{abstract} We study a unified approach and algorithm for constructive discrepancy minimization based on a stochastic process. By varying the parameters of the process, one can recover various state-of-the-art results. We demonstrate the flexibility of the method by deriving a discrepancy bound for smoothed instances, which interpolates between known bounds for worst-case and random instances.
\end{abstract} \section{Introduction} Given a universe of elements $U=\{1,\ldots, n\}$ and a collection $\mathcal{S} = \{S_1, \ldots, S_m\}$ of subsets $S_i \subseteq U$, the discrepancy of the set system $\mathcal{S}$ is defined as \[
\mathrm{disc}(\mathcal{S}) = \min_{x: U \rightarrow \{-1,1\}} \max_{i \in [m]} \Big|\sum_{j \in S_i} x(j) \Big| \,. \] That is, the discrepancy is the minimum imbalance that must occur in at least one of the sets in $\mathcal{S}$ over all bipartitions of $U$. More generally for an $m \times n$ matrix $A$, the discrepancy of $A$ is defined as $ \mathrm{disc}(A) = \min_{x\in\{-1,1\}^n}\norm{{A}x}_{\infty}$. Note that the definition for set systems corresponds to choosing $A$ as the incidence matrix of $\mathcal{S}$, i.e., $A_{ij} = 1$ if $j \in S_i$ and $0$ otherwise. Discrepancy is a well-studied area with several applications in both mathematics and theoretical computer science (see \cite{beck1995discrepancy, chazelle2001discrepancy, matousek1999geometric}). \allowdisplaybreaks
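For intuition, on tiny instances the discrepancy can be computed exactly by exhaustive search over all $2^n$ colorings; a small Python sketch (our own, purely illustrative):

```python
from itertools import product

def disc(sets, n):
    """Exact discrepancy of a set system over {0, ..., n-1}, by exhaustive
    search over all 2^n colorings x in {-1, 1}^n (exponential in n)."""
    return min(max(abs(sum(x[j] for j in S)) for S in sets)
               for x in product((-1, 1), repeat=n))

# Intervals {0,1}, {1,2}, {2,3}: the alternating coloring (+1,-1,+1,-1)
# balances every set, so the discrepancy is 0.
print(disc([{0, 1}, {1, 2}, {2, 3}], 4))  # 0

# A single odd-size set can never be balanced, so its discrepancy is 1.
print(disc([{0, 1, 2}], 3))  # 1
```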
\paragraph{Spencer's problem.} In a celebrated result, Spencer \cite{spencer1985six} showed that the discrepancy of any set system with $m = n$ sets is $O(\sqrt{n})$, and more generally $O(\sqrt{n\log(2m/n)})$ for $m \geq n$. To show this, he developed a general partial-coloring method (a.k.a. the entropy method), building on a counting argument of Beck \cite{beck1981roth}, that has since been used widely for various other problems. A similar approach was developed independently by Gluskin \cite{gluskin1989extremal}. Roughly, here the elements are colored in $O(\log n)$ phases. In each phase, an $\Omega(1)$ fraction of the elements get colored while incurring a small discrepancy for each row.
\paragraph{Beck-Fiala and Koml\'{o}s problems.} Another central question is the Beck-Fiala problem where each element appears in at most $k$ sets in $\mathcal{S}$. Equivalently, every column of the incidence matrix is $k$-sparse. The long-standing Beck-Fiala conjecture \cite{beck1981integer} states that $\mathrm{disc}(\mathcal{S}) = O(\sqrt{k})$. A further generalization is the Koml\'{o}s problem, also called the vector balancing problem, about the discrepancy of matrices $A$ with column $\ell_2$-norms at most $1$. Koml\'{o}s conjectured that $\mathrm{disc}(A) = O(1)$ for any such matrix. Note that the Koml\'{o}s conjecture implies the Beck-Fiala conjecture.
Banaszczyk showed an $O(\sqrt{\log n})$ bound for the Koml\'{o}s problem based on a deep geometric result~\cite{banaszczyk1998balancing}. Here, the full coloring is constructed directly (in a single phase), and this result has also found several applications. The resulting $O(\sqrt{ k \log n})$ bound for the Beck-Fiala problem is also the best known bound for general $k$.\footnote{For $k =o(\log n)$ an improved bound follows from the $2k-1$ bound by \cite{beck1981integer}.}
In contrast, the partial coloring method only gives weaker bounds of $O(\log n)$ and $O(k^{1/2} \log n)$ for these problems -- the $O(\log n)$ loss is incurred due to the $O(\log n)$ phases of partial coloring.
\paragraph{Limitations of Banaszczyk's result.} Even though Banaszczyk's method gives better bounds for the Koml\'{o}s problem, it is not necessarily stronger, and is incomparable to the partial coloring method. E.g., it is not known how to obtain Spencer's $O(\sqrt{n})$ result (or anything better than the trivial $O(\sqrt{n \log n})$ random-coloring bound) using Banaszczyk's result. A very interesting question is whether there is a common generalization that unifies both these results and techniques.
\paragraph{Algorithmic approaches.} Both the partial coloring method and Banaszczyk's result were originally non-algorithmic, and a lot of recent progress has resulted in their algorithmic versions. Starting with the work of \cite{bansal2010constructive}, several different algorithmic approaches are now known for the partial coloring method \cite{lovett2015constructive, rothvoss2017constructive, harvey2014discrepancy, eldan2014efficient}, based on various elegant ideas from linear algebra, random walks, optimization and convex geometry.
In further progress, an algorithmic version of the $O(\sqrt{\log n})$ bound for the Koml\'{o}s problem was obtained by \cite{bansal2019algorithm}, see also \cite{bansal2017algorithmic}, and \cite{bansal2018gram} for the more general algorithmic version of Banaszczyk's result. In related work, Levy et al.~\cite{levy2017deterministic} gave deterministic polynomial time constructive algorithms for the Spencer and Koml\'{o}s settings matching $O(\sqrt{n\log(2m/n)})$ and $O(\sqrt{\log{n}})$ respectively.
A key underlying idea behind many of these results is to perform a discrete Brownian motion (random walk with small steps) in the $\{-1,1\}^n$ cube, where the update steps are correlated and chosen to lie in some suitable subspace. However, the way in which these subspaces are chosen for the partial coloring method and the Koml\'{o}s problem are quite different. We give a high level description of these approaches as this will be crucial later on.
In the partial coloring approach, the walk is performed in a subspace orthogonal to the {\em tight discrepancy constraints}. If the discrepancy for some row $A_i$ reaches its target discrepancy bound, the update $\Delta x$ to the coloring satisfies $A_i \cdot \Delta x=0$. As the walk continues over time, the subspace dimension gets smaller and smaller until the walk is stuck. At this point, the subspace is reset and the {\em next phase} resumes.
On the other hand, the algorithm for the Koml\'{o}s problem does not consider the discrepancy constraints at all, and chooses a different subspace with a certain sub-isotropic property which ensures that the discrepancy incurred by a row is roughly proportional to its $\ell_2$-norm, while ensuring that rows with large $\ell_2$-norm incur zero discrepancy. In particular, in contrast to the partial coloring method, all the elements are colored in a {\em single phase}, and the discrepancy constraints are ignored.
\paragraph{The need for a combined approach.} Even though the $O(\sqrt{k \log n})$ bound for the general Beck-Fiala problem is based on Banaszczyk's method, all the important special cases where the conjectured $O(\sqrt{k})$ bound holds are based on the partial coloring method. For example, Spencer's problem with $m=O(n)$ sets corresponds to a special case of the Beck-Fiala problem with $k = O(n)$. So Spencer's six-deviations result resolves the Beck-Fiala conjecture for this case, which we do not know how to obtain from Banaszczyk's result.
The Beck-Fiala conjecture also holds for the case of random set systems with $m \geq n$. In particular, Potukuchi \cite{potukuchi2019spectral} considers the model where each column has 1's in $k$ randomly chosen rows and shows that the discrepancy is $O(\sqrt{k})$ with high probability. See also \cite{ezra2019beck, bansal2020discrepancy, hoberg2019fourier, altschuler2021discrepancy} for related results. Potukuchi's result crucially relies on the partial coloring approach, and it is not clear at all how to exploit the properties of random instances in Banaszczyk's approach.
Thus a natural question, and a first step towards resolving the Beck-Fiala and Koml\'{o}s conjectures and making progress on other discrepancy problems, is whether there exist more general techniques that give both Spencer's and Potukuchi's results and the $O(\sqrt{k \log n})$ bound for the Beck-Fiala problem in a unified way.
\subsection{Our results}
We present a new unified framework that recovers all the results mentioned above, and various other state-of-the-art results as special cases. Our algorithm is based on a derandomization of a stochastic process that is guided by a barrier-based potential function.
Given a matrix $A$, the algorithm starts with the all-zero coloring $x_0$. Let $x_t \in [-1,1]^n$ be the coloring at time $t$. The algorithm maintains a barrier $b_t>0$ over time and defines the slack of row $i$ at time $t$ as \begin{equation}\label{slack} s_i(t) = b_t - \underbrace{\sum_{j=1}^n a_i(j) x_t(j)}_{\text{current discrepancy}} - \lambda \underbrace{\sum_{j=1}^n a_i(j)^2 (1-x_t(j)^2)}_{\text{remaining variance}}. \end{equation} Notice that when all $x_t(j)$ eventually reach $\pm 1$, the {\em remaining variance} term is zero and the slack measures the gap between the discrepancy and the barrier. We define the potential \begin{equation}\label{potential}
\Phi(t) = \sum_i s_i(t)^{-p} \end{equation} for some fixed $p>1$, which penalizes the rows with small slacks and blows up to infinity if some slack approaches zero. If we can ensure that the slacks are always positive and the potential is bounded, then the discrepancy is upper bounded by the value of the barrier when the algorithm terminates.
At each time step, the algorithm picks a random direction $v_t$ that is orthogonal to some of the rows with the least slack, and satisfies some additional properties, and updates the coloring by a small amount in the direction $v_t$. The barrier $b_t$ is also updated. These updates are chosen to ensure that the potential does not increase in expectation, and hence all the slacks stay bounded away from $0$. We give a more detailed overview in Section \ref{sec:framework}.
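As a concrete illustration, the slack \eqref{slack} and potential \eqref{potential} are simple to compute for a given coloring. The following minimal Python sketch does so; the function names and the toy instance are ours, not from the algorithm itself.

```python
# Minimal sketch of the slack (eq. "slack") and potential (eq. "potential");
# all names and the toy instance are ours.

def slack(a_i, x, b, lam):
    # barrier minus current discrepancy minus lambda * remaining variance
    disc = sum(aij * xj for aij, xj in zip(a_i, x))
    rem_var = sum(aij ** 2 * (1 - xj ** 2) for aij, xj in zip(a_i, x))
    return b - disc - lam * rem_var

def potential(A, x, b, lam, p):
    # sum over rows of s_i^{-p}; assumes every slack is strictly positive
    return sum(slack(a_i, x, b, lam) ** (-p) for a_i in A)

# Fully colored x: the remaining-variance term vanishes, so the slack is
# exactly barrier minus discrepancy.
A = [[1.0, -1.0, 1.0]]
x = [1.0, 1.0, -1.0]
print(slack(A[0], x, b=4.0, lam=0.125))   # 4 - (-1) = 5.0
```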
By changing the parameters $p, \lambda$ depending on the problem at hand, we obtain several results using a unified approach. \begin{enumerate}
\item Set coloring~\cite{spencer1985six}. For any set system on $n$ elements and $m \geq n$ sets,
$\mathrm{disc}(\script{S}) = O(\sqrt{n\log(2m/n)})$.
\item Koml\'{o}s problem~\cite{bansal2017algorithmic}. For any ${A} \in \mathbb{R}^{m\times n}$ with column norms $\norm{{A}^{j}}_2 \leq 1$, $\mathrm{disc}(A) = O(\sqrt{\log{n}})$. \item Random/Spectral Hypergraphs~\cite{potukuchi2019spectral}. Let $A \in \{0,1\}^{m\times n}$ be the incidence matrix of a set system with $n$ elements and $m$ sets, where each element lies in at most $k$ sets, and let $\gamma = \max_{v\perp \mathbf{1}, \norm{v}=1} \norm{Av}$. Then for $m\geq n$, $\mathrm{disc}(\script{S}) = O(\sqrt{k} + \gamma)$. \item Gaussian Matrix~\cite{chandrasekaran2014integer}. For a random matrix $A \in \mathbb{R}^{m\times n}$ with each entry $A_{ij} \sim \script{N}(0, \sigma^2)$ independently, with probability at least $1-(1/m^3)$,
$\mathrm{disc}(A) = O\left(\sigma\left(\sqrt{n}+\sqrt{\log m}\right) \cdot \sqrt{\log \frac{2m}{n}} \right)$. \end{enumerate} More generally, given a matrix $A$, we state the following result based on optimizing the various parameters of the algorithm, depending on the properties of $A$. This allows our framework to be applied in a black-box manner to a given problem at hand.
\begin{restatable}{thm}{mainthm} \label{thm:g3} For a matrix $A \in \mathbb{R}^{m\times n}$ with $\norm{A^j}_2\leq L$ and $\vert a_i(j)\vert \leq M$ for all $i\in [m], j\in [n]$, let $h:\mathbb{R}^+ \rightarrow \mathbb{R}^+$ be a non-increasing function such that for every subset $S\subseteq [n]$ and $i \in [m]$, \begin{equation}
\sum_{j\in S} a_i(j)^2 \leq \vert S\vert\cdot h(\vert S\vert).
\label{eq:g5} \end{equation} Then, for any $p > 1$, there exists a vector $x\in \{-1,1\}^n$ such that $\norm{Ax}_{\infty} \leq 5b_0 + 2M$, where \begin{align}
b_0 &= \min \left(\sqrt{8(p+1)(48m)^{1/p} \cdot \beta},\; 250 L \sqrt{\log\left(2m\right)} \right)\label{eq:c4} \end{align} and $\beta = \int_{t = 0}^{n-2}h(n-t)(n-t)^{-1/p}dt$. \end{restatable}
Let us see how Theorem \ref{thm:g3} directly leads to the results stated above.
\paragraph{Set coloring.} As $\|A^j\|_2\leq \sqrt{m}$, we have $L = \sqrt{m}$, and as $\sum_{j\in S} a_i(j)^2\leq \vert S \vert$, we can set $h(t) = 1$ for all $t\in [n]$. Consider \eqref{eq:c4} and suppose $p\geq 1.1$ so that $p/(p-1) = O(1)$. Then \[ \beta = \int_{t=0}^{n-2} h(n-t)\cdot (n-t)^{-1/p}dt = O(n^{1-1/p}),\] and the first bound in \eqref{eq:c4} gives $b_0 = O(pn^{1/2} (m/n)^{1/p})$. Setting $p=\log(2m/n)$ gives Spencer's $O(\sqrt{n \log (2m/n)})$ bound.
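The calculation above can be sanity-checked numerically. The sketch below (toy values of $n, m$; all function names are ours) compares the closed form of $\beta$ for $h \equiv 1$ against a direct Riemann sum and evaluates the resulting $b_0$ relative to $\sqrt{n\log(2m/n)}$.

```python
import math

# Numeric sanity check of the Spencer-setting calculation; toy n, m and
# the function names are ours.

def beta_closed(n, p):
    # int_0^{n-2} (n-t)^{-1/p} dt with h = 1, via the antiderivative
    # -(n-t)^{1-1/p} / (1 - 1/p)
    return (p / (p - 1)) * (n ** (1 - 1 / p) - 2 ** (1 - 1 / p))

def beta_riemann(n, p, steps=100000):
    dt = (n - 2) / steps
    return sum((n - (i + 0.5) * dt) ** (-1.0 / p) * dt for i in range(steps))

n, m = 1000, 8000
p = math.log(2 * m / n)                     # the choice p = log(2m/n)
assert abs(beta_riemann(n, p) - beta_closed(n, p)) < 1e-3 * beta_closed(n, p)

b0 = math.sqrt(8 * (p + 1) * (48 * m) ** (1 / p) * beta_closed(n, p))
print(b0 / math.sqrt(n * math.log(2 * m / n)))  # bounded ratio: O(sqrt(n log(2m/n)))
```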
Interestingly, the above result gives a new proof of Spencer's six-deviations result based on a direct single-phase coloring. In contrast, all the previously known proofs of this result \cite{bansal2010constructive, lovett2015constructive, rothvoss2017constructive, eldan2014efficient} required multiple partial coloring phases.
\paragraph{Koml\'{o}s problem.} Here $L=1$ and the second term in \eqref{eq:c4} directly gives an $O(\sqrt{\log m})$ bound\footnote{It would be interesting to construct an explicit family of examples where the discrepancy obtained by our approach is $\Omega(\sqrt{\log n})$.}. This also implies an $O(\sqrt{\log n})$ bound: rows with $\ell_1$-norm at most $1$ trivially have discrepancy at most $1$, at most $n^2$ rows can have $\ell_1$-norm more than $1$, and so we can assume that $m \leq n^2$.
Similarly, bounding $h(t)$ using standard concentration bounds directly gives the following results for various models of random matrices.
\begin{restatable}[Sub-Gaussian Matrix]{thm}{subg} \label{thm:subg} Let $A \in \mathbb{R}^{m\times n}$ with each column drawn independently from a distribution $\script{D}$, where the marginal of each coordinate is sub-Gaussian with mean $0$ and variance $\sigma^2$. Then, for $n \leq m \leq 2^{O(\sqrt{n})}$, $ \mathrm{disc}(A) = O(\sigma\sqrt{n\log(2m/n)})$, with probability at least $1-(1/m^2)$. \end{restatable} \begin{restatable}[Random Matrix]{thm}{random} \label{thm:random} Let $A \in \mathbb{R}^{m\times n}$, $m\geq n$ such that every column of $A$ is drawn independently from the uniform distribution on $\{x\in \mathbb{R}^m: \norm{x}_2 \leq 1\}$. Then $\mathrm{disc}(A) = O(1)$ with probability at least $1-(1/m^2)$. \end{restatable}
\subsubsection{Flexibility of the method} An important advantage of the method is its flexibility, which can be used to obtain several additional results.
\paragraph{Subadditivity.} Given $A, B \in \mathbb{R}^{m \times n}$, can we bound $\mathrm{disc}(A+B)$ given bounds on $\mathrm{disc}(A)$ and $\mathrm{disc}(B)$? Such questions can be directly handled by this framework by considering a weighted combination of two different potential functions -- one for $A$ and another for $B$.
More precisely, let us define $\mathrm{sdisc}(A)$, the {\em Stochastic Discrepancy} of a matrix $A$, to be the upper bound on discrepancy obtained by the Potential Walk described in Algorithm~\ref{algo:potential_walk}. For this notion, we have the following approximate subadditivity for arbitrary matrices. \begin{restatable}[Subadditivity of Stochastic Discrepancy]{thm}{subadditive} \label{thm:subadd} For any two arbitrary matrices $A,B \in \mathbb{R}^{m \times n}$, there exists $x \in \{-1,1\}^n$ such that \begin{align*}
|\dt{a_i}{x}| &\lesssim \mathrm{sdisc}(A) \quad \text{for every row }a_i \text{ of } A, \text{ and}\\
|\dt{b_i}{x}| &\lesssim \mathrm{sdisc}(B) \quad \text{for every row }b_i \text{ of } B. \end{align*} \label{thm:sdisc} In particular, this implies that
$\mathrm{sdisc}(A+B)\lesssim \mathrm{sdisc}(A)+\mathrm{sdisc}(B)$. \end{restatable} Here $a \lesssim b$ means that $a = O(b)$. The theorem is algorithmic if $A,B$ are given. It also implies that for any matrix $A$, we have $\mathrm{sdisc}(A) \lesssim \min_B ( \mathrm{sdisc}(B)+\mathrm{sdisc}(A-B))$.
Similar questions have been studied previously in the context of understanding the discrepancy of unions of systems \cite{matouek-detlb, MatousekNikolov15}. For example, other related quantities such as the $\gamma_2$-norm and the determinant lower bound are also subadditive \cite{matouek-detlb, MatousekNikolov15}. We remark that the additive bound cannot hold for the (actual) discrepancy or even hereditary discrepancy\footnote{A classical example due to Hoffman gives two set systems $A$ and $B$, each with hereditary discrepancy $1$, but their union has discrepancy $\Omega(\log n/\log \log n)$ \cite{Matousek-geometric}.}, and a logarithmic loss is necessary. For this reason, the previous additive bounds based on the $\gamma_2$-norm and the determinant lower bound lose extra polylogarithmic factors when translated to discrepancy.
A direct application of Theorem \ref{thm:sdisc} is the following. \begin{restatable}[Semi-Random Koml\'{o}s]{thm}{semirandom} \label{thm:sr1} Let $C \in \mathbb{R}^{m\times n}$ be an arbitrary matrix with columns satisfying $\norm{C^j}_2 \leq 1$ for all $j \in [n]$, and $R \in \mathbb{R}^{m \times n}$ be a matrix with entries drawn i.i.d. from $\mathcal{N}(0, \sigma^2)$. Then, for $n \leq m \leq 2^{O(\sqrt{n})}$, with probability at least $1-(1/m^2)$, \begin{equation*}
\mathrm{disc}(C+R) = O\left(\sqrt{\log{n}}+ \sigma\sqrt{n\log(2m/n)} \right). \end{equation*} \end{restatable} For $m=O(n)$, the bound above is $O(\sqrt{\log n}+\sigma\sqrt{n})$, which is better than the bound of $O(\sqrt{\log n}(1+\sigma\sqrt{n}))$ obtained by directly applying the best-known bound for the Koml\'{o}s problem to $C+R$.
As another application, consider a matrix $C$ with $n$ columns and two sets of rows, $A$ and $B$, where each row in $A$ has entries in $\{0,1\}$, and the column norm of every column restricted to rows in $B$ is at most $1$. Suppose that $A$ has $O(n)$ rows. Applying the framework gives a coloring with $O(\sqrt{n})$ discrepancy for rows in $A$ and $O(\sqrt{\log{n}})$ for rows in $B$.\footnote{This answers a question of Haotian Jiang.} Notice that using previous techniques, if we apply the partial coloring method to get $O(\sqrt{n})$ discrepancy for $A$, this would give $O(\log n)$ for rows of $B$. On the other hand, if we try to obtain $O(\sqrt{\log n})$ discrepancy for $B$, all the known methods would incur $O(\sqrt{n \log n})$ discrepancy for $A$.
\paragraph{Relaxing the function $h(\cdot)$.} Recall that the function $h$ in Theorem \ref{thm:g3} controls how the $\ell_2$ norms of rows decrease when restricted to subsets $S$ of columns, and plays an important role in the bounds. In many random or pseudo-random instances, however, a worst-case bound on $h$ can be quite pessimistic. For example, even though most rows decrease significantly when restricted to $S$, $h$ can remain relatively high due to a few outlier rows. The following result gives an improved bound for such settings, where for any subset $S$ of columns, most row sizes restricted to $S$ do not deviate much from their expectation if $S$ is chosen at random. \begin{restatable}[pseudo-random bounded degree hypergraphs]{thm}{extended} \label{thm:extended} Let $A \in \{0,1\}^{m\times n}$ be such that $\norm{A^j}_1 \le k$. Suppose there exists $\beta \leq k$ such that for any $S\subseteq [n]$ and any $c>0$, the number of rows of $A$ with \begin{equation}
\Big\vert \sum_{j\in S} a_i(j) - \norm{a_i}_1 \cdot(\vert S\vert/n)\Big\vert \geq c\beta\label{eq:s1} \end{equation} is at most $c^{-2}\vert S\vert$. Then $\mathrm{disc}(A) = O(\sqrt{k} + \beta)$. \end{restatable}
As discussed in \cite{potukuchi2019spectral}, one can set $\beta \leq \max_{v\perp \mathbf{1}, \norm{v} =1 }\norm{Av}$ in \eqref{eq:s1}, which in particular gives Potukuchi's result \cite{potukuchi2019spectral} for random $k$-regular hypergraphs as $\beta = O(k^{1/2})$ in this case.
Combining with Theorem \ref{thm:subadd}, this extends to the following semi-random setting. Consider a random $k$-regular hypergraph $A$ with $n$ vertices and $n$ edges. Suppose an adversary can arbitrarily modify $A$ by adding or deleting vertices from edges such that the degree of any vertex changes by at most $t$. How much can this affect the discrepancy of the hypergraph? \begin{restatable}[Semi-Random Hypergraphs]{thm}{semirandomh} \label{thm:sr2} Consider a random $k$-regular hypergraph with incidence matrix $A\in \mathbb{R}^{m \times n}$ with $m\geq n$, and let $C \in \{-1,0,1\}^{m\times n}$ be an arbitrary matrix with at most $t$ non-zero entries per column. Then $\mathrm{disc}(A + C) = O\left(\sqrt{k}+\sqrt{t\log{n}}\right)$ with probability $1-n^{-\Omega(1)}$. \end{restatable}
\section{The Framework}\label{sec:framework} Given a matrix $A \in \mathbb{R}^{m\times n}$, we start at some $x_0$ and our goal is to reach an $x_T$ in $\{-1,1\}^n$ with small discrepancy. The basic idea will be to apply a small random update (of size $\delta$) to $x_t$ at step $t$ for $T$ steps, where the update will be chosen with care. We use the slack function and the potential function defined in (\ref{slack}) and (\ref{potential}) to implement this approach. Algorithm \ref{algo:potential_walk} below gives a high-level description.
\begin{algorithm}[H]\label{algo:potential_walk} \SetAlgoLined \caption{PotentialWalk} \textbf{Input:} A matrix $A \in \mathbb{R}^{m\times n}$, a potential function $\Phi: \mathbb{R} \times \mathbb{R}^n \rightarrow \mathbb{R}^+$.
Let $x_0 = 0, t=0$. Let $T = (n-2)/\delta^2$.
\For {$t \in [T]$}{
Select $v_t$ such that: (i) $\mathbb{E}_{\varepsilon}[\Phi(t+1, x_t + \varepsilon\delta v_t)] \leq \Phi(t, x_t)$, (ii) $x_t \pm\delta v_t \in [-1,1]^n$, and (iii) $\dt{x_t}{v_t} = 0$, where $\varepsilon$ is a Rademacher random variable ($\pm 1$ with probability $1/2$).
Let $x_{t+1} = x_t + \varepsilon \delta v_t$.
} \textbf{Output:} $x_T$ \label{alg:p1} \end{algorithm}
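In code, the outer loop of the potential walk is a short driver around a pluggable direction oracle. The sketch below is ours: the toy oracle ignores the potential entirely (it only keeps $x_t \in [-1,1]^n$) and is meant only to show the control flow, not the actual selection of $v_t$.

```python
import random

# Skeleton of the PotentialWalk loop; select_direction is a pluggable
# oracle that, in the real algorithm, must keep the potential
# non-increasing in expectation. All names are ours.

def potential_walk(n, select_direction, delta=0.1):
    x = [0.0] * n
    T = int((n - 2) / delta ** 2)
    for t in range(T):
        v = select_direction(x, t)
        eps = random.choice([-1, 1])            # Rademacher sign
        x = [xj + eps * delta * vj for xj, vj in zip(x, v)]
    return x

def toy_direction(x, t, delta=0.1):
    # move one coordinate that is still "alive" (|x_j| < 1 - delta)
    for j, xj in enumerate(x):
        if abs(xj) < 1 - delta:
            v = [0.0] * len(x)
            v[j] = 1.0
            return v
    return [0.0] * len(x)                       # everything frozen

x = potential_walk(8, toy_direction)
print(all(abs(xj) <= 1.0 + 1e-9 for xj in x))   # stays in [-1,1]^n
```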
\subsection{Example: Koml\'{o}s setting} We first give an overview of the ideas by describing how the framework above works for the Koml\'{o}s setting. Recall that here $A\in \mathbb{R}^{m\times n}$ has columns satisfying $\norm{A^j}_2 \leq 1$. To minimize notation, let us assume here that $m=n$ (this is also the hardest case for the problem).
At time $t$, let $\script{V}_t = \{j \in [n]: \vert x_t(j) \vert < 1- 1/2n \}$ and let $n_t = \vert \script{V}_t\vert$. These are the variables that are ``alive", and not yet ``frozen". To ensure that $x_t \in [-1,1]^n$, the update $v_t$ will only change the variables in $\script{V}_t$. We also set $\dt{v_t}{x_t} = 0$, which ensures that $\norm{x_t}^2 = \delta^2 t$ for any $t \in [0, T]$. So $v_t$ satisfies
\begin{equation}
v_t(j) = 0 \text{ for all } j\not\in \script{V}_t \text{ and } \dt{v_t}{x_t} = 0.\label{eq:t1}
\end{equation} As $\vert x_t(j)\vert \geq (1-1/2n)$ for all $j \notin \script{V}_t$, we have \begin{equation*}
(n-n_t)(1-1/2n)^2\leq \sum_{j\notin \script{V}_t} x_t(j)^2 \leq \sum_{j \in [n]} x_t(j)^2 = \delta^2 t. \end{equation*} So the number of alive variables at time $t$ satisfies $n_t \geq n - \frac{\delta^2 t}{(1-(1/(2n)))^2} > n - \delta^2 t - 1$.
{\bf Blocking large rows.} To ensure the two-sided bound
$|\sum_j a_i(j) x(j)| < b_0$, we create a new row $-a_i$ for each row $a_i$ at the beginning. Now, as the squared 2-norm of every column of $A$ is at most $2$, at any time $t$, the number of rows with $\sum_{j \in \script{V}_t} a_i(j)^2 > 12$ is at most $\vert \script{V}_t\vert /6 = n_t/6$.
Let us call such rows {\em large} (at time $t$). Otherwise, the row is {\em small}. We additionally constrain $v_t$ so that \begin{equation}
\dt{{a}_i}{v_t} = 0 \text{ for all rows } \{i: \sum_{j\in \script{V}_t} a_i(j)^2 > 12\}. \label{eq:t6} \end{equation} This ensures that a row only starts to incur any discrepancy once it becomes small. So at step $t$, we will define the slacks only for small rows and only such rows will contribute to the potential $\Phi(t)$. Let $\script{I}_t$ denote the set of small rows at time $t$. In the slack function \eqref{slack}, we will set $b_t=b_0$ for all $t$ and $\lambda = 2^{-5}b_0$. So, at the beginning of the algorithm, when $x_0(j)=0$ for all $j$, we have \begin{equation*}
\Phi(0) = \sum_{i\in \script{I}_0} \frac{1}{(b_0 - \lambda\cdot \sum_{j\in[n]} a_i(j)^2)^p} \leq \frac{|\script{I}_0|}{(b_0 - 12\lambda)^p} \leq n\left(\frac{2}{b_0}\right)^p , \end{equation*} where the last inequality uses $\lambda = 2^{-5}b_0$, so that $b_0 - 12\lambda = (5/8)\, b_0 \geq b_0/2$.
At any time $t$, the change in potential $\Phi(t+1)-\Phi(t)$ is due to (i) new rows becoming small and entering $\script{I}_{t+1}$, and (ii) the change in slack of rows in $\script{I}_{t}$. As each row has discrepancy $0$ until it becomes small, the total contribution of step (i) over the entire algorithm is at most $n (2/b_0)^p$.
So the main goal will be to show that $\Phi$ does not rise due to step (ii). This will ensure that the potential throughout the algorithm is at most $2n (2/b_0)^p$, so that all slacks stay positive, which gives $\sum_j a_i(j) x(j) < b_0$ for all $i$.
{\bf Bounding the increase in $\Phi$.} We now describe the main ideas of the algorithm and the computations for the change in $\Phi$ in step (ii). The desired $O(\sqrt{\log n})$ bound will then follow directly by optimizing the parameters $b_0$ and $p$ in \eqref{slack}.
Let $e_{t,i}$ denote a vector in $\mathbb{R}^n$ with $j$-th entry $a_i(j)^2 x_t(j)$. At step $t$, $x_t$ changes as $x_{t+1}-x_t = \varepsilon \delta \cdot v_t$ and, by a simple calculation, the approximate change in $s_i(t)$ is: \begin{align*} s_i(t+1)-s_i(t) &\simeq \left(2\lambda \dt{e_{t,i}}{v_t} - \dt{{a}_i}{v_t}\right)\varepsilon \delta + \lambda \dt{{a}_i^{(2)}}{v_t^{(2)}} \delta^2 \end{align*} where $\varepsilon$ is a Rademacher random variable and ${a}^{(2)}$ denotes the vector with $j$-th entry $a(j)^2$. The error terms not included above are all higher powers of $\delta$, and can be ignored for small enough $\delta$ as long as all coefficients are bounded. We formalize this in Section \ref{sec:gen} and Appendix \ref{appendix:ss}.
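In fact, for a fixed barrier $b_t$ the displayed change in $s_i(t)$ is an exact identity; the higher-order $\delta$ terms only arise when expanding $\Phi$. A quick numeric check on a hypothetical row and update (all values are ours):

```python
# Numeric check of the slack-update identity above; the change in s_i is
# exact when the barrier b_t is held fixed. The instance is hypothetical.

a = [0.5, -0.3, 0.8, 0.1]          # row a_i
x = [0.2, -0.7, 0.4, 0.0]          # current coloring x_t
v = [0.3, 0.1, -0.2, 0.9]          # update direction v_t
b, lam, delta, eps = 3.0, 0.25, 0.01, 1

def s(x):
    return b - sum(ai * xi for ai, xi in zip(a, x)) \
             - lam * sum(ai * ai * (1 - xi * xi) for ai, xi in zip(a, x))

e = [ai * ai * xi for ai, xi in zip(a, x)]      # e_{t,i}(j) = a_i(j)^2 x_t(j)
x_new = [xi + eps * delta * vi for xi, vi in zip(x, v)]
lhs = s(x_new) - s(x)
rhs = (2 * lam * sum(ei * vi for ei, vi in zip(e, v))
       - sum(ai * vi for ai, vi in zip(a, v))) * eps * delta \
      + lam * sum(ai * ai * vi * vi for ai, vi in zip(a, v)) * delta ** 2
print(abs(lhs - rhs) < 1e-12)                   # True: the identity is exact
```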
Then, up to second order terms in $\delta$,
\[ \Phi(t+1)-\Phi(t) \simeq f(t) \delta^2 + g(t) \varepsilon \delta \]
where,
\begin{align*}
f(t) &= -p \lambda\sum_{i\in \script{I}} \frac{\dt{{a}_i^{(2)}}{v_t^{(2)}}}{s_i(t)^{p+1}} + \frac{p(p+1)}{2}\sum_{i\in \script{I}} \frac{\left(2\lambda \dt{e_{t,i}}{v_t} - \dt{{a}_i}{v_t}\right)^2 }{s_i(t)^{p+2}}, \\
g(t) &= p \sum_{i\in \script{I}} \frac{\left(2\lambda \dt{e_{t,i}}{v_t} - \dt{{a}_i}{v_t}\right)}{s_i(t)^{p+1}}. \end{align*}
To bound the expected change in $\Phi$, note that the expectation of the second term $g(t) \varepsilon \delta $ is zero. So it suffices to prove that there is a choice of $v_t$ such that $f(t) \leq 0$. This will ensure the expected change of $\Phi$ is at most zero, and hence there is a choice of $\varepsilon$ that ensures $\Phi$ is non-increasing.
The difficulty in making $f(t)$ at most zero is that the positive part (the second term of $f(t)$) has an extra factor of $s_i(t)$ in the denominator. So if some $s_i(t)$ becomes very small, the positive term could dominate. To ensure this doesn't happen, we choose $v_t$ to be in a subspace that makes this positive term zero for the smallest slack indices.
{\bf Blocking small slacks.} Let $\mathcal{J}_t$ be the subset of $\mathcal{I}$ corresponding to all but the $\floor{n_t/12}$ smallest values of $s_i(t)$ at time $t$. Select $v_t$ such that \begin{equation}
\left(2\lambda\dt{e_{t,i}}{v_t} - \dt{{a}_i}{v_t}\right) = 0 \text{ for all } i\in \script{I} \backslash \script{J}_t. \label{eq:t3} \end{equation} Then as $\sum_i s_i(t)^{-p} \leq \Phi(t)$, and the smallest $n_t/12$ slacks are ``blocked'', we have \[ \max_{j\in \script{J}_t}\frac{1}{s_j(t)} \le \left(\frac{\Phi(t)}{n_t/12}\right)^{1/p}, \] and so, \begin{align*}
f(t) &\leq p\left(\frac{p+1}{2}\sum_{i\in \mathcal{J}_t} \frac{\left(2\lambda \dt{e_{t,i}}{v_t} - \dt{{a}_i}{v_t}\right)^2}{s_i(t)^{p+1}} \max_{j\in \script{J}_t}s_j(t)^{-1} -\lambda\sum_{i\in \mathcal{I}} \frac{\dt{{a}_i^{(2)}}{v_t^{(2)}}}{s_i(t)^{p+1}} \right)\\
&\leq p\left(\frac{p+1}{2}\sum_{i\in \mathcal{J}_t} \frac{\left(2\lambda \dt{e_{t,i}}{v_t} - \dt{{a}_i}{v_t}\right)^2}{s_i(t)^{p+1}}\left(\frac{12\Phi(t)}{n_t}\right)^{1/p} -\lambda\sum_{i\in \mathcal{I}} \frac{\dt{{a}_i^{(2)}}{v_t^{(2)}}}{s_i(t)^{p+1}} \right) \end{align*} In addition to \eqref{eq:t1} and \eqref{eq:t3}, suppose $v_t$ also satisfies \begin{equation}
\sum_{i\in \script{J}_t} \frac{\dt{2\lambda {e}_{t,i} - {a}_{i}}{v_t}^2}{s_i(t)^{p+1}} \leq 12 \cdot \sum_{i\in \script{J}_t} \frac{\dt{a_i^{(2)}}{{v_t}^{(2)}}}{s_i(t)^{p+1}}.\label{eq:t4} \end{equation} {\bf Choosing the update $v_t$.} Later in Section \ref{sec:gen}, we will see how to find a vector $v_t$ satisfying \eqref{eq:t1}, \eqref{eq:t3}, \eqref{eq:t6}, and \eqref{eq:t4}. Then, \[
f(t)\leq p\sum_{i\in \mathcal{J}_t} \frac{\dt{{a}_i^{(2)}}{v_t^{(2)}}}{s_i(t)^{p+1}} \left(6(p+1)\left(\frac{12\Phi(t)}{n_t}\right)^{1/p} - \lambda \right). \] To show that $f(t) \leq 0$, it thus suffices to have $ 6(p+1) \left(12\Phi(t)/n_t\right)^{1/p} - \lambda \leq 0$.
As $\Phi(t)^{\frac{1}{p}} \leq 2(2n)^{1/p}/b_0$ by the inductive hypothesis, and $n_t \geq 1$, it suffices to have \[\frac{12(p+1)}{b_0} \left(24n\right)^{1/p} - \lambda \leq 0.\] Choosing $p = \log{n}$ so that $n^{1/p} = O(1)$, and as $\lambda=2^{-5}b_0$, we can pick $b_0 = O(\sqrt{\log{n}})$ to satisfy the above. This gives the desired discrepancy bound.
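The final parameter choice can also be verified numerically. The sketch below checks that $b_0 = C\sqrt{\log n}$ with $p=\log n$ and $\lambda = b_0/32$ satisfies the displayed inequality; the concrete constant $C = 300$ is ours, chosen generously.

```python
import math

# Numeric check of the final parameter choice; C = 300 is our (generous)
# constant, everything else follows the text: p = log n, lambda = b0/32,
# and the sufficient condition 12(p+1)(24n)^{1/p} <= lambda * b0.

def feasible(n, C):
    p = math.log(n)
    b0 = C * math.sqrt(math.log(n))
    lam = b0 / 32
    return 12 * (p + 1) * (24 * n) ** (1 / p) <= lam * b0

print(all(feasible(n, C=300) for n in [10, 100, 10**4, 10**8]))  # True
```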
\subsection{The General Framework} \label{sec:gen} We now describe the algorithm more formally. Given a matrix $A \in \mathbb{R}^{m\times n}$ with $\norm{A^j}_2 \leq 1$ for all $j\in [n]$, extend $A$ such that for each original row $a_i$ of $A$, there are two rows $a_i$ and $-a_i$ in $A$. Additionally, partition every row $a_i$ into two rows, $a_i^S$ and $a_i^L$, with small and large entries, as follows: \begin{equation*}
a_i^S(j) = \begin{cases} 0 & \text{ if } \vert a_{i}(j)\vert > 1/(2\lambda) \\ a_i(j) & \text{ otherwise}\end{cases}, \quad a_i^L(j) = \begin{cases} a_i(j) & \text{ if } \vert a_{i}(j)\vert > 1/(2\lambda)\\ 0 & \text{ otherwise,}\end{cases} \end{equation*}
where $\lambda$ is a parameter to be determined later. After this transformation, for any $x \in \mathbb{R}^n$, $\norm{Ax}_{\infty} = \max_{i} \dt{a_i^S + a_i^L}{x}$, and the squared 2-norm of any column of $A$ is at most $2$.
Let $\script{I}$ denote the index set of all rows of ${A}$, and $\script{I}^S$ denote the index set of rows of the first type above.
The step-size of the algorithm is $\delta$ and the algorithm will run for $T = \frac{n-2}{\delta^2}$ steps. Starting with $x_0 = 0$, let $v_t \in \mathbb{R}^n$ with $\dt{x_t}{v_t} = 0$. For $t \in [T]$, \begin{equation*}
x_t = \begin{cases} x_{t-1} + \delta v_{t-1} & \text{ w.p. } 1/2,\\
x_{t-1} - \delta v_{t-1} & \text{ w.p. } 1/2.
\end{cases} \end{equation*} As $t$ increases, some variables will start approaching $1$ in magnitude. To ensure that $x_t \in [-1,1]^n$, we restrict $v_t$ to be in the space of \emph{alive} variables, defined as \begin{equation*}
\script{V}_t = \{i \in [n]: \vert x_t(i) \vert < 1- 1/(2n) \}. \end{equation*} For any $t \in [T]$, $\norm{x_t}^2 = \delta^2 t$ as \begin{equation}
\norm{x_t}^2 = \norm{x_{t-1} + \delta v_t}^2 = \norm{x_{t-1}}^2 + \delta^2\norm{v_t}^2 = \delta^2(t-1) + \delta^2 = \delta^2t.\label{eq:norm_x} \end{equation} Let $n_t = \vert \script{V}_t\vert$ denote the number of alive variables at $t$. By~\eqref{eq:norm_x}, $(n-n_t)(1-1/(2n))^2 \leq \delta^2 t$, which gives \begin{equation*}
n_t \geq n - \frac{\delta^2 t}{(1-1/(2n))^2} > n - \delta^2 t - 1. \end{equation*}
The goal of the rest of this section is to select a $v_t$ such that for all $t\in [T]$, $x_t \in [-1,1]^n$ and $\dt{a_i}{x_t}$ is bounded by some function of $m$ and $n$ for all rows. To help with this goal, we classify the rows according to how many variables are still ``uncolored'' in a row.
Let the set of \emph{s-Alive} rows at time $t$ be defined as: \[ \script{I}_t = \{i \in \script{I}^S: \sum_{j \in \script{V}_t} a_i(j)^2 \leq 20\}. \]
The choice of $20$ here is arbitrary, and any large enough constant works. We can now define the slack and the potential function.
\noindent \textbf{Slack.} For any $i\in \script{I}$, the slack function is defined as \[
s_i(t) = b_t - \dt{a_i}{x_{t}} - \lambda \cdot \sum_{j=1}^{n}a_{i}(j)^2(1-x_{t}(j)^2).\] We call $b_t$ the barrier, and for $t\in [T]$, we update it as \[ b_t = b_{t-1} + \delta^2 d_{t-1}, \] for some function $d_{t}$. We set $\lambda = cb_0$, where $c = 1/42$ and $b_0$ is the initial barrier.
\noindent \textbf{Potential function.} The potential function has a parameter $p > 1$ and is defined as\[
\Phi(t)= \sum_{i\in \mathcal{I}_t} s_i(t)^{-p}.\]
\allowdisplaybreaks We will only consider slacks for {\em alive} rows and ensure that they are always positive. Moreover, we will consider only the small {\em s-Alive} rows as the rows in $\script{I}^L$ will be easily handled. To ensure that $s_i(t)$ does not become too ``small" for any \emph{s-Alive} row, the choice of $v_t$ should not decrease the smallest slacks. This motivates the following definitions. \begin{itemize}
\item \emph{Blocked} rows: Let $\mathcal{C}_t$ be the subset of $\mathcal{I}_t$ corresponding to the $\floor{n_t/12}$ smallest values of $s_i(t)$.
\item Let $\mathcal{J}_t = \mathcal{I}_t \backslash \mathcal{C}_t$. These are the ``large slack" rows. \end{itemize}
To prove that all the slacks are positive, we will upper bound the potential throughout by bounding the change in $\Phi(t)$ at each step. Note that $\Phi(t)$ will experience jumps whenever a new index gets added to $\script{I}_t$; however, the total contribution of the jumps is easily shown to be bounded (see Lemma \ref{lem:dphi}) and can essentially be ignored. To bound the one-step change in $\Phi$, we use the second order Taylor expansion of $\Phi(t+1)$ centered at $\Phi(t)$. In Appendix \ref{appendix:ss}, we show that by choosing $\delta \leq O(1/(n^2m^{6}p^4))$, the overall error due to ignoring the higher powers of $\delta$ is negligible.
\subsection{Algorithm and Analysis} Recall that $e_{t,i}$ denotes the vector in $\mathbb{R}^n$ with $j$-th entry $a_i(j)^2 x_t(j)$. We can now state the algorithm for selecting $v_t$.
\begin{algorithm}[H] \label{alg:p2} \caption{Algorithm for Selecting $v_t$} \SetAlgoNoLine \DontPrintSemicolon \setstretch{1.35} Initialize $x_0 \leftarrow 0$\; \For{$t = 1, \ldots, T=\frac{n-2}{\delta^2}$}{ Let $\script{W}_t = \{{w} \in \mathbb{R}^n: {w}(i) = 0, \; \forall i \notin \script{V}_t \}$ \tcp*{\small restrict to alive variables} Let $\script{U}_t = \{{w} \in \script{W}_t: \dt{{w}}{2\lambda {e}_{t,i} - {a}_{i}} = 0, \forall i \in \mathcal{C}_t \text{ and } \dt{{w}}{x_t} = 0 \}$\;\tcp*{\small restrict to large slack rows} Let $\script{Y}_t = \{{w} \in \mathcal{W}_t: \dt{{w}}{{a}_{i}} = 0, \forall i \in \script{I} \backslash \script{I}_t\}$\tcp* {\small restricted to s-Alive rows} Let $\script{G}_t$ denote the subspace \begin{equation}
\label{eq:gt} \script{G}_t = \left\{{w} \in \script{W}_t: \sum_{i\in \script{J}_t} \dt{\left(2\lambda {e}_{t,i} - {a}_{i}\right)}{{w}}^2 s_i(t)^{-p-1} \leq 40 \cdot \sum_{i\in \script{J}_t} \; \dt{a_i^{(2)}}{{w}^{(2)}} s_i(t)^{-p-1} \right\}\end{equation}\\
Consider the subspace $\script{Z}_t = \script{U}_t \cap\script{Y}_t \cap \script{G}_t$ and let $W =\{{w}_1, {w}_2, \ldots, {w}_k \}$ be an orthonormal basis for $\script{Z}_t$. Choose
\begin{equation}
\label{eq:vt7}
v_t = \arg\min_{w \in W} \sum_{i\in \script{J}_t} \dt{2\lambda {e}_{t,i} - {a}_{i}}{w}^2 s_i(t)^{-(p+1)}.
\end{equation} } \end{algorithm}
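A simplified sketch of the subspace computation above: the code below builds an orthonormal basis of the directions orthogonal to a given list of constraint vectors (playing the role of the constraints defining $\script{U}_t$ and $\script{Y}_t$) via Gram--Schmidt. The sub-isotropy condition \eqref{eq:gt} and the minimization \eqref{eq:vt7} are omitted, and all names are ours.

```python
# Sketch of the subspace step: orthonormal basis of the orthogonal
# complement of a set of constraint vectors, via Gram-Schmidt.
# This covers the linear constraints only; the sub-isotropy condition
# and the final argmin over the basis are omitted. Names are ours.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def orthonormal_complement(constraints, n, tol=1e-9):
    ortho = []                       # orthonormalized constraint vectors
    for c in constraints:
        w = list(c)
        for u in ortho:
            d = dot(w, u)
            w = [wi - d * ui for wi, ui in zip(w, u)]
        nrm = dot(w, w) ** 0.5
        if nrm > tol:
            ortho.append([wi / nrm for wi in w])
    basis = []                       # basis of the orthogonal complement
    for j in range(n):
        w = [0.0] * n
        w[j] = 1.0
        for u in ortho + basis:
            d = dot(w, u)
            w = [wi - d * ui for wi, ui in zip(w, u)]
        nrm = dot(w, w) ** 0.5
        if nrm > tol:
            basis.append([wi / nrm for wi in w])
    return basis

# Example: directions orthogonal to x_t and to one blocked row, in R^4
x_t = [0.5, 0.5, -0.5, 0.0]
row = [1.0, 1.0, 0.0, 0.0]
W = orthonormal_complement([x_t, row], 4)
print(len(W))   # 2 = 4 minus the number of independent constraints
```

In the full algorithm one would further intersect with $\script{G}_t$ and then take $v_t$ to be the basis vector minimizing \eqref{eq:vt7}.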
We now re-state our main theorem. In words, the assumption of the theorem is that there is a non-increasing function $h(\cdot)$ such that for any row, the squared norm restricted to any subset $S$ of coordinates is at most $h(\vert S\vert)$ times the size of $S$. Under this condition, we can bound the discrepancy as a function of $h$.
\mainthm* The case when $h(t)=h$ is a constant is often useful, and in this case we have the following corollary. \begin{cor} \label{thm:g2}
For a matrix $A \in \mathbb{R}^{m\times n}$ with $\|A^j\|_2\leq L$ and $\vert a_i(j)\vert \leq M$ for all $i\in [m], j\in [n]$, let $h$ be a constant such that for every subset $S\subseteq [n]$ and every $i \in [m]$, \begin{equation}
\sum_{j\in S} a_i(j)^2\leq \vert S\vert \cdot h.
\label{eq:g4} \end{equation} Then, $\mathrm{disc}(A) \leq 5b_0 + 2M$, where $b_0 = \min (26\sqrt{h n\log(2m/n)},\;
250L\sqrt{\log\left(2m\right)})$. \end{cor} \begin{proof} For a constant $h$, we have $
\beta= \int_{0}^{n-2}(n-t)^{-1/p} h dt \leq n^{1-1/p}h/(1-1/p)$. Choosing $p = \log(2m/n)$ to optimize the first term in \eqref{eq:c4} gives the result. \end{proof}
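The integral bound used in the corollary can be checked numerically; the sketch below verifies $\int_0^{n-2}(n-t)^{-1/p}h\,dt \leq h\, n^{1-1/p}/(1-1/p)$ for one illustrative choice of $n$, $p$, $h$ (the specific values are arbitrary):

```python
import numpy as np

n, p, h = 100, 3.0, 2.5
t = np.linspace(0.0, n - 2, 200001)
vals = (n - t) ** (-1.0 / p) * h
# Trapezoid rule for beta = int_0^{n-2} (n-t)^{-1/p} * h dt
beta = np.sum(0.5 * (vals[1:] + vals[:-1])) * (t[1] - t[0])
bound = h * n ** (1 - 1.0 / p) / (1 - 1.0 / p)
assert 0 < beta <= bound
```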
\paragraph{Roadmap of the proof.} The first main lemma below (Lemma~\ref{lem:g2}) establishes that there is a large feasible subspace from which $v_t$ as defined above can be chosen. Using this we prove Lemma~\ref{thm:g1}, which bounds the change in potential. This will allow us to bound the discrepancy of each row and hence prove Theorem~\ref{thm:g3}.
A key fact used for proving Lemma~\ref{lem:g2} is the following lemma from \cite{bansal2017algorithmic}. We include a proof for the reader's convenience. \begin{lem}[\cite{bansal2017algorithmic}] \label{lem:subspace} Let $G, H \in \mathbb{R}^{m\times n}$ be matrices such that $\vert G_{ij} \vert \leq \alpha \vert H_{ij} \vert$ for all $i\in [m]$ and $j \in [n]$. Let $K = \text{diag}(H^\top H)$. Then for any $\beta \in (0,1]$, there exists a subspace $W \subseteq \mathbb{R}^n$ satisfying \begin{enumerate}
\item $\dim(W) \geq (1-\beta)n$, and
\item $\forall w \in W,\; w^{\top} G^{\top}G w \leq \frac{\alpha^2}{\beta} \cdot w^{\top} K w$. \end{enumerate} \end{lem}
\begin{proof} If $K_{ii} = 0$ for some $i$, then $H_{ji} = G_{ji} = 0$ for all $j \in [m]$. So, for a $w\in W$, $w_i$ can take any value, and removing the $i$-th column of $G$ and $H$ decreases both $n$ and $\dim(W)$ by $1$. Without loss of generality, assume that $K_{ii} > 0$ for all $i\in [n]$ and let $M = GK^{-\frac{1}{2}}$. For any $w \in \mathbb{R}^n$, let $y =K^{\frac{1}{2}} w$. Then \begin{equation*}
w^{\top} G^{\top}G w \leq \frac{\alpha^2}{\beta} \cdot w^{\top} K w \Leftrightarrow y^{\top} M^\top M y\leq \frac{\alpha^2}{\beta} \cdot y^\top y. \end{equation*} Let $Y$ be the span of the eigenvectors of $M^\top M$ with eigenvalue at most $\alpha^2/\beta$; every $y \in Y$ satisfies $y^{\top} M^\top M y \leq (\alpha^2/\beta)\, y^\top y$, so $\dim(W) \geq \dim(Y)$, the number of eigenvalues of $M^\top M$ at most $\alpha^2/\beta$. The sum of the eigenvalues of $M^\top M$ equals $\mathrm{tr}(M^{\top}M)$, which is the sum of the squared $\ell_2$-norms of the columns of $M$. Since $M = GK^{-\frac{1}{2}}$ and $\vert G_{ij} \vert \leq \alpha \vert H_{ij} \vert$, every column of $M$ has norm at most $\alpha$, so $\mathrm{tr}(M^{\top}M) \leq n\alpha^2$. Therefore, the number of eigenvalues of $M^\top M$ greater than $\alpha^2/\beta$ is at most $\beta n$ and the lemma follows. \end{proof}
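The eigenvalue-counting step in the proof can be illustrated numerically: if $|G_{ij}|\leq \alpha|H_{ij}|$, then $\mathrm{tr}(M^\top M)\leq n\alpha^2$ for $M = GK^{-1/2}$, so by Markov's inequality at most $\beta n$ eigenvalues exceed $\alpha^2/\beta$. The sketch below checks this on random matrices; the dimensions and constants are illustrative choices, not quantities from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, alpha, beta = 30, 20, 2.0, 0.1
H = rng.standard_normal((m, n))
# Entrywise domination |G_ij| <= alpha * |H_ij|
G = alpha * H * rng.uniform(-1.0, 1.0, size=(m, n))
K_diag = np.sum(H * H, axis=0)            # K = diag(H^T H)
M = G / np.sqrt(K_diag)                   # M = G K^{-1/2}: scale each column
eigs = np.linalg.eigvalsh(M.T @ M)
# Each column of M has norm <= alpha, so trace(M^T M) <= n * alpha^2,
# hence at most beta*n eigenvalues can exceed alpha^2 / beta.
assert np.trace(M.T @ M) <= n * alpha ** 2 + 1e-9
assert np.count_nonzero(eigs > alpha ** 2 / beta) <= beta * n
```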
We now prove Lemma \ref{lem:g2}. \begin{lem}[Subspace Dimension] \label{lem:g2} For all $t\in [T]$, $\dim(\script{Z}_t) \geq \ceil{2n_t/3}$. \end{lem} \begin{proof} To lower bound the dimension of $\script{Z}_t$ we lower bound the dimensions of $\script{U}_t, \script{Y}_t$ and $\script{G}_t$.
First, we have $\dim(\script{U}_t) \geq n_t - \dim(\script{C}_t)-1 \geq \ceil{11n_t/12}-1$. Second, at time $t$, as the sum of the squared $\ell_2$-norms of all the columns is at most $2n_t$, we have that $\sum_{i \in \mathcal{I}} \sum_{j\in \mathcal{V}_t} a_i(j)^2 \leq 2n_t$. So the number of rows $a_i$ with $\sum_{j\in \mathcal{V}_t} a_i(j)^2 \geq 20$ is at most $\floor{n_t/10}$ and $\dim(\script{Y}_t)\geq n_t - \floor{n_t/10} = \ceil{9n_t/10}$.
We now bound $\dim(\script{G}_t)$ by applying Lemma \ref{lem:subspace}. Let $G$ denote the matrix with columns $j$ corresponding to variables in $\script{V}_t$ and rows $i$ restricted to $i \in \mathcal{J}_t$ with $(i,j)$ entry $(2\lambda {e}_{t,i}(j) - {a}_{i}(j)) s_i(t)^{-(p+1)/2}$.
Let $H$ be the matrix with entries ${a}_{i}(j)\cdot s_i(t)^{-(p+1)/2}$ for $i \in \mathcal{J}_t$ and $j \in \mathcal{V}_t$. As $\vert a_{i}(j) \vert \leq 1/(2\lambda)$ for $i \in \script{I}^S$ and $\vert x_t(j)\vert \leq 1$, we have \begin{equation*}
|G_{ij}| = \vert 2\lambda a_{i}(j)^2x_t(j) - a_{i}(j) \vert \cdot s_i(t)^{-(p+1)/2} \leq \left(\vert 2\lambda a_{i}(j)^2x_t(j) \vert + \vert a_{i}(j) \vert\right) s_i(t)^{-(p+1)/2} \leq 2\vert a_{i}(j) \vert \cdot s_i(t)^{-(p+1)/2} = 2 |H_{ij}|. \end{equation*}
Let $K = \text{diag}(H^\top H)$. Then, using Lemma \ref{lem:subspace} with $\alpha = 2$ and $\beta = 1/10$, we get that there is a subspace $\mathcal{G}_t$ with $\dim(\script{G}_t) \geq \ceil{9 n_t/10}$ such that \[
\script{G}_t = \{w \in \script{W}_t: w^\top G^\top G w \leq 40 \cdot w^\top K w\}, \] which by the definition of $G$ and $H$ is equivalent to that given by \eqref{eq:gt}.
Putting together the bounds on the dimensions of these subspaces gives, \[
\dim(\script{Z}_t) \geq \dim(\script{U}_t \cap\script{Y}_t \cap \script{G}_t)\geq \ceil{11n_t/12} - 1 + \ceil{9n_t/10} + \ceil{9n_t/10} - 2n_t \geq \ceil{2n_t/3}.\qedhere\] \end{proof}
\paragraph{Setting the parameters.}
To show the two bounds in \eqref{eq:c4}, we will set the parameters $b_0$, $d_t$ (the rate of change of $b_t$) and $p$ in two ways:
\begin{equation}
{\text{ \emph{Case 1:} }} d_t = 4(p+1) \cdot h(n_t) \cdot \max_{i\in \mathcal{J}_t} s_i(t)^{-1} \text{ for all $t \in [T]$, and $p,b_0$ arbitrary}
\label{eq:c1}
\end{equation}
\begin{equation}
\text{ \emph{Case 2:} } p = 2\log(2m), \; b_0 = 840(p+1) \cdot \max_{j\in\script{J}_t}s_j(t)^{-1} \text{ and } d_t = 0 \text{ for all }t \in [T].
\label{eq:c2}
\end{equation}
\paragraph{Bounding the potential.} The next lemma shows that in both these cases, the potential function remains bounded. \begin{lem}[Bounded Potential] \label{thm:g1} In either of the cases given by \eqref{eq:c1} and \eqref{eq:c2}, we have that $\Phi(t) \leq 4m (2/b_0)^p$, for all $t=0,\ldots,T$. \end{lem} \begin{proof} We will prove this by induction. Clearly, this holds at $t=0$ as $\Phi(0) \leq 2 m (2/b_0)^p$. For the inductive step, we will show that for any $j=0,\ldots,T-1$, if $\Phi(j) \leq 4m (2/b_0)^p$ then \begin{equation}
\Phi(j+1) \leq \Phi(j) + \frac{1}{Tb_0^p} + \vert \script{I}_{j+1} \backslash \script{I}_{j} \vert \cdot \left(\frac{2}{b_0}\right)^p.\label{eq:thm_ind} \end{equation} Note that $\vert \script{I}_{j+1} \backslash \script{I}_{j} \vert$ is the number of additional rows in $\mathcal{I}^S$ that may become alive at step $j$.
This gives the result by induction, as summing \eqref{eq:thm_ind} over $j=0,\ldots,t-1$ gives \begin{equation}
\Phi(t) \leq \Phi(0) + \sum_{j=0}^{t-1}\frac{1}{Tb_0^{p}} + \left(\frac{2}{b_0}\right)^p\sum_{j=0}^{t-1}\vert \script{I}_{j+1} \backslash \script{I}_{j} \vert \leq 2m \cdot \left(\frac{2}{b_0}\right)^p + \frac{1}{b_0^p} + m\left(\frac{2}{b_0}\right)^p \leq 4m \cdot \left(\frac{2}{b_0}\right)^p,\label{eq:phi} \end{equation} where the last step uses that at most $m$ rows ever become alive and that $1/b_0^p \leq m(2/b_0)^p$.
We now focus on proving \eqref{eq:thm_ind} for $j=t$.
By the induction hypothesis, $\Phi(t)\leq 4m\left(2/b_0\right)^{p}$. By Lemma \ref{lem:dphi}, one of the signs for $x_{t+1}$ gives \begin{equation*}
\mathbb{E}(\Phi(t+1)) -\Phi(t) \leq f(t)+ \frac{1}{Tnb_0^p}+ \vert \script{I}_{t+1} \backslash \script{I}_{t} \vert \cdot \left(\frac{2}{b_0}\right)^{p}, \end{equation*} where \begin{equation*}
f(t) = -p \delta^2 \sum_{i\in \script{I}_t} \frac{d_t + \lambda\dt{{a}_i^{(2)}}{v_t^{(2)}}}{s_i(t)^{p+1}} + \frac{p(p+1)\delta^2}{2}\sum_{i\in \script{I}_t} \frac{\left(2\lambda \dt{e_{t,i}}{v_t} - \dt{{a}_i}{v_t}\right)^2 }{s_i(t)^{p+2}}. \end{equation*} So to prove \eqref{eq:thm_ind}, it suffices to show that $f(t)\leq 0$. We first consider the case when $b_t,d_t$ and $p$ are given by \eqref{eq:c1}.
As $2\lambda \dt{e_{t,i}}{v_t} - \dt{{a}_i}{v_t} = 0$ for all $i\notin \script{J}_t$, $f(t)$ satisfies \begin{align}
f(t) &\leq -p \delta^2 \sum_{i\in \script{J}_t} \frac{d_t + \lambda\dt{{a}_i^{(2)}}{v_t^{(2)}}}{s_i(t)^{p+1}} + \frac{p(p+1)\delta^2}{2}\max_{j\in\script{J}_t}s_j(t)^{-1}\cdot\sum_{i\in \script{J}_t} \frac{\left(2\lambda \dt{e_{t,i}}{v_t} - \dt{{a}_i}{v_t}\right)^2 }{s_i(t)^{p+1}}. \label{eq:k1} \end{align} By a simple averaging argument described in Lemma \ref{lem:g4}, we also have that \begin{align}
\sum_{i\in \script{J}_t} &\frac{\left(2\lambda \dt{e_{t,i}}{v_t} - \dt{{a}_i}{v_t}\right)^2 }{s_i(t)^{p+1}}\leq \sum_{i\in \script{J}_t} \frac{8h(n_t)}{s_i(t)^{p+1}}. \label{eq:g2} \end{align} Plugging~\eqref{eq:g2} in~\eqref{eq:k1} gives \begin{align}
f(t) &\leq -p \delta^2 \sum_{i\in \script{J}_t} \frac{d_t}{s_i(t)^{p+1}} + \frac{p(p+1)\delta^2}{2}\max_{j\in\script{J}_t}s_j(t)^{-1}\cdot\sum_{i\in \script{J}_t} \frac{8 h(n_t)}{s_i(t)^{p+1}}.
\label{eq:extra1} \end{align} Therefore, if $d_t$ satisfies equation~\eqref{eq:c1}, then $f(t)\leq 0$.
We now consider the case in \eqref{eq:c2}. As $v_t \in \script{G}_t$, we have \begin{align}
\sum_{i\in \script{J}_t} &\frac{\left(2\lambda \dt{e_{t,i}}{v_t} - \dt{{a}_i}{v_t}\right)^2 }{s_i(t)^{p+1}}\leq 40 \cdot \sum_{i\in \script{J}_t} \frac{\dt{a_i^{(2)}}{{v_t}^{(2)}}}{s_i(t)^{p+1}}.\label{eq:g3} \end{align} Next, as $d_t=0$ and $\lambda = b_0/42$, \eqref{eq:k1} and \eqref{eq:g3} give \[ f(t) \leq \sum_{i\in \script{J}_t}\frac{p\delta^2\dt{{a}_i^{(2)}}{v_t^{(2)}}}{s_i(t)^{p+1}}\cdot \left(-\frac{b_0}{42}+ 20(p+1) \cdot \max_{j\in\script{J}_t}s_j(t)^{-1} \right). \] So if $b_0$ satisfies equation~\eqref{eq:c2}, then $f(t)\leq 0$. \end{proof} The next lemma bounds the minimum slack of any active row, given the bound on the potential function. \begin{lem}
For any $t \in \{0,\ldots,T\}$, if $\Phi(t) \leq 4m (2/b_0)^p$, then $ \max_{i\in \mathcal{J}_t} s_i(t)^{-1} \leq \frac{2}{b_0}\left(\frac{48m}{n_t}\right)^{\frac{1}{p}}$.
\end{lem} \begin{proof} By the definition of $\mathcal{J}_t$, for any $i \in \mathcal{J}_t$, there are at least $\floor{n_t/12}+1$ indices $j$ in $\script{I}_t$ such that $s_j(t) \leq s_i(t)$. Therefore, \begin{equation}
\max_{i\in \mathcal{J}_t} \frac{1}{s_i(t)} \leq \left(\frac{12\Phi(t)}{n_t}\right)^{\frac{1}{p}} \leq \frac{2}{b_0}\left(\frac{48m}{n_t}\right)^{\frac{1}{p}},\label{eq:k2} \end{equation} where the last inequality follows by the assumption, $\Phi(t) \leq 4m (2/b_0)^p$. \end{proof} \begin{lem} \label{lem:g4} For any $t\in [T]$, the choice of $v_t$ satisfies \begin{equation}
\sum_{i\in \script{J}_t} \frac{\dt{2\lambda{e}_{t,i} - {a}_{i}}{v_t}^2}{s_i(t)^{p+1}}\leq \sum_{i\in \script{J}_t} \frac{8h(n_t)}{s_i(t)^{p+1}}.\label{eq:vt1} \end{equation} \end{lem} \begin{proof}
Using $(a+b)^2 \leq 2 (a^2 + b^2)$, and
since $ |2\lambda e_{t,i}(j)| = |2 \lambda a_i(j)^2x_t(j)| \leq |a_i(j)|$ (as $|a_i(j)|\leq 1/(2\lambda)$ for any $j$ and $i \in \script{I}^S$, and $|x_t(j)|\leq 1$), we have that for any $w$, \begin{equation*}
\sum_{i\in \script{J}_t} \frac{\dt{2\lambda{e}_{t,i} - {a}_{i}}{w}^2}{s_i(t)^{p+1}} \leq \sum_{i\in \script{J}_t} \frac{2\dt{a_i}{w}^2 + 2\dt{2 \lambda e_{t,i}}{w}^2}{s_i(t)^{p+1}} \leq 4 \sum_{i\in \script{J}_t} \frac{\dt{a_i}{w}^2}{s_i(t)^{p+1}}. \end{equation*} Let $W_t = \{w_1,\ldots,w_k\}$ be an orthonormal basis for $\script{Z}_t$ and $k=\dim(\script{Z}_t)$. As the $w_j$ are orthonormal and supported on $\script{V}_t$, \begin{align*}
\sum_{i\in\script{J}_t}\frac{\sum_{j=1}^k \dt{a_i}{w_j}^2 }{s_i(t)^{p+1}} \leq \sum_{i\in\script{J}_t}\frac{\sum_{j\in \script{V}_t}a_i(j)^2}{s_i(t)^{p+1}} \leq n_t \sum_{i\in \script{J}_t} \frac{h(n_t)}{s_i(t)^{p+1}}, \end{align*} where the second inequality uses that $\sum_{j\in \script{V}_t}a_i(j)^2 \leq n_t \cdot h(n_t)$ by the definition of $h$.
As $k \geq \ceil{n_t/2}$, this gives \begin{equation*}
\frac{1}{k}\sum_{j=1}^k\sum_{i\in \script{J}_t} \frac{\dt{2\lambda{e}_{t,i} - {a}_{i}}{w_j}^2}{s_i(t)^{p+1}}\leq \frac{n_t}{k} \sum_{i\in \script{J}_t} \frac{4h(n_t)}{s_i(t)^{p+1}} \leq \sum_{i\in \script{J}_t} \frac{8h(n_t)}{s_i(t)^{p+1}} . \end{equation*} The result now follows as $v_t$ in \eqref{eq:vt7} minimizes $\sum_{i\in \script{J}_t} \dt{2\lambda{e}_{t,i} - {a}_{i}}{w_j}^2 s_i(t)^{-p-1} $ over all $w_j \in W_t$. \end{proof}
We now prove the main theorem. \begin{proof}[Proof of Theorem \ref{thm:g3}] Recall that we divide each row $a$ of $A$ as $a = a^S + a^L$. We will bound $\dt{a^L}{x_T}$ and $\dt{a^S}{x_T}$ separately.
Let $t_1$ denote the earliest time when the squared norm of $a^L$ (restricted to the alive variables) is at most $20$, and let $n_1$ be the number of non-zeros in ${a}^L$ restricted to the set $\script{V}_{t_1}$.
As $\vert a^L(j)\vert \geq 1/(2\lambda)$ for each $j$, the number of non-zero variables $n_1$ in $a^L$ at time $t_1$ is at most $80 \lambda^2$, as
\[ n_1/(4\lambda^2)\leq \sum_{j \in \script{V}_{t_1}} a^L(j)^2 \leq 20.\] Moreover, as $a^L$ incurs zero discrepancy until $t_1$, the overall discrepancy satisfies \begin{align}
\vert \dt{{a}^L}{{x}_{T}} \vert &\leq \vert \dt{{a}^L}{{x}_{t_1}}\vert + \vert \dt{{a}^L}{x_T - {x}_{t_1}}\vert \leq 0+2\sqrt{n_1}\cdot \Big(\sum_{j \in \script{V}_{t_1}} a^L(j)^2\Big)^{1/2} \leq 80\lambda\leq 3b_0. \label{eq:al} \end{align}
Henceforth, we focus on the rows $a^S$. We first show that the slacks are always positive. Let $\gamma = b_0/(4(4m)^{\frac{1}{p}})$. By Lemma \ref{thm:g1}, for all $t \in [T]$, $\Phi(t) \leq 4m(2/b_0)^{p}< \gamma^{-p}$. This implies that $\vert s_i(t) \vert \geq \gamma$ for all $i\in \script{I}^S_t$ and $t \in [T]$. In one step of the algorithm, \begin{align*}
\vert s_i(t) - s_i(t-1)\vert&\leq \delta^2 d_{t-1} + \vert \dt{a_i}{x_t} - \dt{a_i}{x_{t-1}}\vert \\
&\leq \delta^2 d_{t-1} + \vert \delta\dt{a_i}{v_{t-1}} \vert \leq 20 n\delta \leq 2\gamma. \end{align*} So, if $s_i(t-1) \geq \gamma$, then $s_i(t) \geq s_i(t-1) - 2\gamma \geq -\gamma$; and since $\Phi(t) <\gamma^{-p}$ forces $\vert s_i(t) \vert > \gamma$, the slack cannot jump from above $\gamma$ to below $-\gamma$ in a single step, so $s_i(t) > \gamma$. Hence, for every $i\in \script{I}^S$ and $t\in [T]$, $s_i(t)\geq \gamma$ and
$\dt{a_i}{x_{T}} \leq b_T$. Together with~\eqref{eq:al} this gives, $|\dt{a}{x_T}| \leq |\dt{a^S}{x_T}| + |\dt{a^L}{x_T}| \leq b_T +3b_0$.
Let $x\in \{-1,1\}^n$ be obtained from $x_T$ by the rounding $x(j) = \mathrm{sign}(x_{T}(j))$. As $T = (n-2)/\delta^2$, $\norm{x_{T}}^2 = n-2$ with $\vert x_T(j) \vert \leq 1$ for all $j\in [n]$. After rounding $x_{T}$ to $x$, we have $\norm{x}^2 = n$. For any row $a$ of $A$, the discrepancy is bounded by \[
\vert \dt{a}{x} \vert \leq \vert \dt{a}{x_T} \vert + \vert \dt{a}{x - x_T}\vert\leq \vert \dt{a}{x_T}\vert + M\sum_{j=1}^n \vert x(j) - x_{T}(j)\vert
\leq b_T + 3b_0 + 2M. \] We now consider the two cases for $b_0$, $d_t$, $p$. In the second case, given by \eqref{eq:c2}, we have by \eqref{eq:k2} that $b_0 \leq 1680(p+1) \cdot (48m/n_t)^{1/p}/b_0$.
As $n_t \geq 1$ for all $t \in [T]$ and $p = 2\log(2m)$, we have $\left(48m/n_t\right)^{1/p}\leq 10e$, and setting $b_0 = 250\sqrt{\log(2m)}$ suffices. Since $d_t = 0$, $b_T=b_0$ and $\norm{Ax}_\infty \leq 4b_0 + 2M$.
In the first case, given by~\eqref{eq:c1}, we have by~\eqref{eq:k2} that $d_t = 8(p+1)(48m)^\frac{1}{p}\cdot \frac{h(n_t)}{b_0 n_t^{1/p}}$ for all $t \in [T]$. Summing $d_t$ over $t$ gives \[
b_T - b_0 = \delta^2\sum_{t=0}^{T-1} d_t =8(p+1)(48m)^\frac{1}{p}\delta^2 \cdot\sum_{t=0}^{T-1} h(n_t)/(b_0n_t^{1/p}).\] As $n_t \geq n-\delta^2 t - 1$ and $h$ is non-increasing,
$\delta^2 \cdot\sum_{t=0}^{T-1} h(n_t)\, n_t^{-1/p}
\leq \beta$,
so that $b_T \leq b_0 + 8(p+1)(48m)^{1/p}\beta/b_0$. Optimizing $b_0 = (8(p+1)(48m)^{1/p} \beta)^{1/2} $ gives that $b_T = 2b_0$ and thus
$\norm{Ax}_\infty \leq b_T + 3b_0 + 2M \leq 5b_0 + 2M$, giving the desired result.
\end{proof}
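The rounding step at the end of the proof loses at most $2M$: since $\|x_T\|^2 = n-2$ with $\vert x_T(j)\vert \leq 1$, we get $\sum_j \vert x(j)-x_T(j)\vert = n - \sum_j \vert x_T(j)\vert \leq n - \|x_T\|^2 = 2$. A minimal numeric check, using a deterministic $x_T$ with all coordinates of equal magnitude (an illustrative choice, not the algorithm's actual output):

```python
import numpy as np

n = 50
signs = np.where(np.arange(n) % 2 == 0, 1.0, -1.0)
x_T = signs * np.sqrt(1.0 - 2.0 / n)    # ||x_T||^2 = n - 2, |x_T(j)| < 1
x = np.sign(x_T)                        # round to a coloring in {-1, 1}^n
l1_change = np.sum(np.abs(x - x_T))
assert abs(np.sum(x_T ** 2) - (n - 2)) < 1e-9
assert l1_change <= 2.0 + 1e-9          # rounding moves x_T by at most 2 in l1
```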
\section{Applications} \subsection{Set Coloring}
We bound the discrepancy of a set system $(U, \mathcal{S})$ with $\vert U\vert = n$, $\vert \script{S}\vert = m$, and $m\geq n$. As $\|A^j\|_2\leq \sqrt{m}$, we have $L = \sqrt{m}$, and as $\sum_{j\in S} a_i(j)^2\leq \vert S \vert$, we can set $h(t) = 1$ for all $t\in [n]$. Consider \eqref{eq:c4} and suppose $p\geq 1.1$ so that $p/(p-1) = O(1)$. Then \[ \beta = \int_{t=0}^{n-2} h(n-t)\cdot (n-t)^{-1/p}dt = O(n^{1-1/p}),\] and the first bound in \eqref{eq:c4} gives $b_0 = O(pn^{1/2} (m/n)^{1/p})$. Setting $p=\log(2m/n)$ gives Spencer's $O(\sqrt{n \log (2m/n)})$ bound.
\subsection{Vector Balancing}\label{vb} We now consider the discrepancy of a matrix $A \in \mathbb{R}^{m \times n}$ with column $\ell_2$-norms at most $1$.
Here $L=1$ and the second term in \eqref{eq:c4} directly gives an $O(\sqrt{\log m})$ bound. This also implies an $O(\sqrt{\log n})$ bound, as we may assume that $m \leq n^2$: for a row $a_i$ with $\norm{a_i}_2 < 1/n^{1/2}$, we have $\vert \dt{a_i}{x} \vert\leq \norm{a_i}_1 \leq \sqrt{n}\,\norm{a_i}_2 < 1$, so it can be ignored; and as the sum of squares of the entries of $A$ is at most $n$, the number of rows with $\norm{a_i}_2 \geq 1/n^{1/2}$ is at most $n^2$.
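The reduction to $m \leq n^2$ rests on the Cauchy--Schwarz bound $\norm{a_i}_1 \leq \sqrt{n}\norm{a_i}_2$ and a counting bound on the rows of non-negligible norm. A quick sanity check on random data (the sizes are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100
A = rng.standard_normal((20, n))
A /= np.linalg.norm(A, axis=0)          # make every column have l2-norm 1
# Cauchy-Schwarz: ||a||_1 <= sqrt(n) * ||a||_2 for every row a
for a in A:
    assert np.sum(np.abs(a)) <= np.sqrt(n) * np.linalg.norm(a) + 1e-9
# Total squared mass is n, so at most n^2 rows can have ||a||_2^2 >= 1/n
sq_norms = np.sum(A * A, axis=1)
assert np.count_nonzero(sq_norms >= 1.0 / n) <= n ** 2
```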
\subsection{Sub-Gaussian Matrices} \label{gm} Let $X$ be a random variable with $\mathbb{E}(X) = 0$. $X$ is called sub-Gaussian with variance $\sigma^2$ if its moment generating function satisfies $
\mathbb{E}(e^{sX}) \leq e^{\sigma^2 s^2/2}$ for all $s \in \mathbb{R}$. For a sub-Gaussian random variable, $\mathbb{E}(X^2) \leq 4\sigma^2$. \subg* \begin{proof} As $a_{i}(j)$ is sub-Gaussian with variance $\sigma^2$, $a_{i}(j)^2- \mathbb{E}(a_{i}(j)^2)$ is a mean-zero sub-exponential random variable with parameter $16\sigma^2$ \cite{vershynin2018high}.
For any $S \subseteq [n]$ with $\vert S\vert = s$,
Bernstein's inequality for sub-exponential random variables \cite{vershynin2018high} (Theorem 2.8.1) gives that, \begin{equation}
\Pr\Big(\sum_{j\in S} \big(a_{i}(j)^2 - \mathbb{E}(a_{i}(j)^2)\big) \geq st\Big) \leq \exp\left(-\min\left(st^2/(256\sigma^4),\; st/(16\sigma^2)\right)\right).\label{eq:bern} \end{equation} Setting $t = 96\sigma^2\left(\log (ne/s) + (\log m)/s \right)$, using that $\mathbb{E}( a_i(j)^2) \leq 4 \sigma^2$, and taking a union bound over all the rows and all possible subsets of $s$ columns, we get that
\begin{equation}
\sum_{j \in S} a_{i}^2(j) \leq 100\sigma^2 \vert S\vert \left(\log (ne/\vert S\vert) +(\log m)/\vert S\vert\right)\label{eq:reg}
\end{equation}
for every $S \subseteq [n]$, $i \in [m]$, with probability at least $1-1/2m^2$.
Similarly, as $a_i(j)$ is sub-Gaussian with mean $0$ and variance $\sigma^2$, with probability at least $1-1/2m^2$,
we have
$ \vert a_{i}(j) \vert \leq 3\sigma \sqrt{\log(mn)}$
for all $i\in [m], j\in [n]$, and thus the $\ell_2$-norm of a column is at most $L = 3\sqrt{m}\sigma \sqrt{\log(mn)}$ and $M = 3 \sigma\sqrt{\log mn}$. By~\eqref{eq:reg}, we can set \begin{equation*}
h(t) = 100\sigma^2 \left(\log \left(\frac{ne}{t}\right) +\frac{\log m}{t}\right). \end{equation*} A direct computation gives $\beta = \int_0^{n-2} h(n-t) (n-t)^{-1/p} dt = O(\sigma^2 (n^{1-1/p} + p\log{m})) .$ Using Theorem \ref{thm:g3} with $p = 2\ceil{\log(2m/n)}$ gives $b_0 = O(\sigma (p (m/n)^{1/p} (n+ n^{1/p} p \log m))^{1/2}) = O(\sigma \sqrt{n \log (2m/n)})$.
Thus, with high probability $ \norm{Ax}_{\infty} \leq (5b_0+2M) = O(\sigma \sqrt{n \log (2m/n)})$.
\end{proof} \subsection{Random Matrices} The result above directly implies the following bound for random matrices. \random* \begin{proof}Consider a random vector $X$ chosen uniformly at random from the unit ball, $\{x\in \mathbb{R}^m : \norm{x}_2 \leq 1\}$. Then every coordinate of $X$ is sub-Gaussian with variance $\sigma^2 = C/m$, where $C$ is a constant \cite{vershynin2018high} (Theorem 3.4.6, Ex 3.4.7). The result now follows from Theorem \ref{thm:sr1}. \end{proof}
\section{Flexibility of the Method} An advantage of the potential function approach is its flexibility. We describe two illustrative applications. In Section \ref{s:stoc-disc} we show how the bounds for matrices $A$ and $B$ obtained using the framework can be used to directly give bounds for $C= A+B$ by combining the potentials for $A$ and $B$ in a natural way.
In Section \ref{sec:spectral} we consider how the requirement on the function $h(\cdot)$ in Theorem \ref{thm:g3} can be relaxed, and use it to bound the discrepancy of sparse hypergraphs (the Beck-Fiala setting) satisfying a certain pseudo-randomness condition.
\subsection{Subadditive Stochastic Discrepancy} \label{s:stoc-disc} \subadditive* \begin{proof} Let $\Phi_1(t)$, $\Phi_2(t)$ be the potential functions corresponding to $A$ and $B$, respectively. Let the parameters for Algorithm \ref{alg:p2} on $A$ be $b^1_0, p_1, d^1_t, h_1(\cdot)$ and for $B$ be $b^2_0, p_2, d^2_t, h_2(\cdot)$.
Note that it might not be possible to select an update $v_t$ at time $t$, that ensures that both $\Phi_1(t+1) \leq \Phi_1(t)$ and $\Phi_2(t+1) \leq \Phi_2(t)$ hold, but we can find a $v_t$ for which a weighted sum of $\Phi_1(t)$ and $\Phi_2(t)$ decreases at every step.
Consider the potential function \begin{equation*}
\Phi(t) = \left(b^1_0/2\right)^{p_1}\Phi_1(t) + (b^2_0/2)^{p_2}\Phi_2(t)\,. \end{equation*}
We apply the same algorithmic framework. For $t=1, \ldots, T$, select $v_t$ such that $\mathbb{E}(\Phi(t+1)) \leq \Phi(t)$, select the sign of $\epsilon$ for which $\Phi(t+1) \leq \Phi(t)$, and set $x_{t+1} = x_t + \epsilon\delta v_t$. To this end, it suffices to find a $v_t$ such that $\mathbb{E}(\Phi_1(t+1)) \leq \Phi_1(t)$ and $\mathbb{E}(\Phi_2(t+1)) \leq \Phi_2(t)$.
Let $\script{Z}^1_t$ and $\script{Z}^2_t$ be the feasible subspaces at step $t$ for $A$ and $B$ respectively from Algorithm \ref{alg:p2}.
We will search for $v_t$ in $\script{Z}_t = \script{Z}^1_t \cap \script{Z}^2_t$. By Lemma \ref{lem:g2}, $\dim(\script{Z}^1_t), \dim(\script{Z}^2_t)\geq \ceil{2n_t/3}$. Therefore, \begin{equation*}
\dim(\script{Z}_t) = \dim(\script{Z}^1_t \cap \script{Z}^2_t)\geq \ceil{2n_t/3} + \ceil{2n_t/3} - n_t\geq n_t/3. \end{equation*}
Using Lemma \ref{lem:g4} on $A$ and $B$, along with Markov's inequality implies that there exists a vector $w \in \script{Z}_t$ such that \begin{equation}
\sum_{i\in \script{I}^1_t} \frac{\dt{2c b_0^1{e}_{t,i} - {a}_{i}}{w}^2 }{s_i(t)^{p_1+1}} \leq \sum_{i\in \script{I}^1_t} \frac{25h_1(n_t)}{s_i(t)^{p_1+1}}\quad\text{ and }\quad \sum_{i\in \script{I}^2_t} \frac{\dt{2cb_0^2{e}_{t,i} - {a}_{i}}{w}^2 }{s_i(t)^{p_2+1}} \leq \sum_{i\in \script{I}^2_t} \frac{25h_2(n_t)}{s_i(t)^{p_2+1}}.\label{eq:twovt} \end{equation} Comparing~\eqref{eq:twovt} with~\eqref{eq:vt1}, the functions $h_1(\cdot)$ and $h_2(\cdot)$ only increase by a constant factor when compared to running Algorithm \ref{alg:p2} on $A$ and $B$ independently. So it suffices to multiply $d_t^1$ and $d_t^2$ by $4$ to ensure that by Lemma \ref{thm:g1},
\begin{equation}
\mathbb{E}[\Phi_1(t)] -\Phi_1(t-1) \leq \frac{1}{Tn(b_0^1)^{p_1}} \quad \text{and} \quad \mathbb{E}[\Phi_2(t)] -\Phi_2(t-1) \leq \frac{1}{Tn(b_0^2)^{p_2}}. \label{eq:sa1} \end{equation} Plugging~\eqref{eq:sa1} in the definition of $\Phi(t)$, we get $ \mathbb{E}[\Phi(t)] -\Phi(t-1) \leq 2/(Tn)$. So one of the two choices of $x_t$ gives $\Phi(t) -\Phi(t-1) \leq 2/(Tn)$. Summing over $t$, \begin{align*}
\Phi(t) &\leq \Phi(0) + \frac{2}{n}
\leq \left(\frac{b_0^1}{2}\right)^{p_1}\Phi_1(0) + \left(\frac{b_0^2}{2}\right)^{p_2}\Phi_2(0) + \frac{2}{n}. \end{align*} By Lemma \ref{lem:dphi}, $\Phi_1(0)\leq 2m\cdot (2/b_0^1)^{p_1}$ and $\Phi_2(0)\leq 2m\cdot (2/b_0^2)^{p_2}$, thus $\Phi(t)\leq \Phi(0) + 2/n \leq 5m$. For a row $i \in \script{J}_t^\ell$ for $\ell \in \{1, 2\}$, we have $(\floor{n_t/12}+1) \cdot (b^\ell_0/2)^{p_\ell} \cdot s_i(t)^{-p_\ell} \leq \Phi(t) \leq 5m$, which implies that for any $t$, and $\ell \in \{1,2\}$, \begin{equation}
\max_{i\in \script{J}_t^\ell}\; s_i(t)^{-1} \leq \frac{2}{b_0^\ell}\left(\frac{60m}{n_t}\right)^\frac{1}{p_\ell}. \label{eq:sa2} \end{equation} Upon comparing~\eqref{eq:sa2} with~\eqref{eq:k2}, notice that $\max_{k\in \script{J}_t^1} \; s_k(t)^{-1}$ and $\max_{k\in \script{J}_t^2} \; s_k(t)^{-1}$ are only a constant factor larger when compared to running Algorithm \ref{alg:p2} on $A$ and $B$ separately, and hence the discrepancies for both $A$ and $B$ are only a constant factor larger. \end{proof}
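The only geometric fact the combination argument needs is that two subspaces of dimension at least $\lceil 2n_t/3\rceil$ intersect in dimension at least $n_t/3$. A quick check with random subspaces (random orthonormal bases are an illustrative stand-in for $\script{Z}^1_t$ and $\script{Z}^2_t$):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 12
d = -((-2 * n) // 3)                    # ceil(2n/3) = 8 for n = 12
B1, _ = np.linalg.qr(rng.standard_normal((n, d)))
B2, _ = np.linalg.qr(rng.standard_normal((n, d)))
# dim(Z1 ∩ Z2) = dim(Z1) + dim(Z2) - dim(Z1 + Z2) >= 2d - n
dim_sum = np.linalg.matrix_rank(np.hstack([B1, B2]))
dim_int = 2 * d - dim_sum
assert dim_int >= 2 * d - n             # here 2*8 - 12 = 4 = n/3
```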
\subsection{Discrepancy of Sparse Pseudo-random Hypergraphs} \label{sec:spectral} In this section, we consider $0/1$ matrices that satisfy a certain regularity property, namely, for most rows, the sum of their entries in any subset of columns is close to the sum of the full row scaled by the fraction of columns in the subset. This property is satisfied, e.g., by the matrices that correspond to sparse random hypergraphs. In particular, we show the following.
\extended*
\paragraph{Proof outline.} At a high level the proof is similar to that of Theorem~\ref{thm:sdisc}, using a weighted potential function. However, rather than just two potentials, we will have to consider a combination of $O(\log n)$ potentials, and it will take some care to make sure this does not create an overhead in the discrepancy. We note that the main algorithm remains the same: at each step choose a vector in a subspace defined by a set of constraints based on the current vector $x_t$.
Consider the case when $A$ has at most $n$ rows and we run Algorithm \ref{alg:p2} on $A$ with the additional constraints that at time $t$, \begin{enumerate}[label=(\alph*)]
\item we ignore all rows with $\sum_{j\in \script{V}_t} a_i(j) < 20\beta$ from the potential function, and
\item we move orthogonal to all rows for which $|\sum_{j\in \script{V}_t} a_i(j) - \norm{a_i}_1\cdot (n_t/n)| \geq 10\beta$. \end{enumerate} In the first case, once the alive size of a row becomes less than $20\beta$ at some step $t$, we simply bound the discrepancy gained by this row after $t$ by $20\beta$.
The second set of rows are the ones that do not reduce in size proportionally to the progress of the coloring. Using the assumption in the theorem, i.e., \eqref{eq:s1} with $c = 10$, the number of rows for which point (b) is true is at most $n_t/100$. So, for all but $n_t/100$ rows, \begin{equation*}
20\beta \leq \sum_{j \in \script{V}_t} a_i(j) \leq \norm{a_i}_1 \cdot \frac{n_t}{n} + 10\beta. \end{equation*} This gives $\beta \leq (1/10)\norm{a_i}_1 \cdot (n_t/n)$ if row $i$ is active and therefore, for all but $n_t/100$ rows, using the assumption of the theorem, \begin{equation}
\sum_{j \in \script{V}_t} a_i(j) \leq 2\norm{a_i}_1 \cdot \frac{n_t}{n}. \label{eq:h1} \end{equation}
So, $h_i(|S|) = 2\norm{a_i}_1/n$ satisfies the bound \eqref{eq:g5} in Theorem \ref{thm:g3} and we obtain \[
|a_i \cdot x_T| = O(\beta) + \min \left( O(\sqrt{p\cdot \norm{a_i}_1 }), O(\sqrt{n\log(2n)})\right).\]
For $p = 2$, $ |a_i \cdot x_T|= O(\beta+\sqrt{\norm{a_i}_1})$. So, the discrepancy of a row is proportional to the square-root of its initial $\ell_1$-norm. Unfortunately, for rows with large initial norms, this can be as large as $O(\sqrt{n})$.
To fix this issue, let us restrict ourselves to the case when all rows have similar initial $\ell_1$-norm, i.e., for all $i$, \begin{equation*}
x\cdot k \leq \norm{a_i}_1 < 2x \cdot k. \end{equation*} Since every column of $A$ contains at most $k$ ones, the sum of the $\ell_1$-norms of the rows is at most $k \cdot n$, so the number of rows with $\ell_1$-norm greater than $x \cdot k$ is at most $(k\cdot n)/ (x \cdot k) = n/x$.
By \eqref{eq:h1}, for all but $n_t/100$ rows, $\sum_{j \in \script{V}_t} a_i(j) \leq 4x \cdot k \cdot (n_t/n)$. Note that a row only gains discrepancy when it satisfies both $\sum_{j \in \script{V}_t} a_i(j) < 20k$ and $|\sum_{j \in \script{V}_t} a_i(j) - \norm{a_i}_1 \cdot (n_t/n) | \leq 10 \beta$. This implies that \begin{equation*}
\norm{a_i}_1\cdot (n_t/n) - 10\beta \leq \sum_{j \in \script{V}_t} a_i(j) \leq 20k. \end{equation*} In other words, $\norm{a_i}_1\cdot (n_t/n) \leq 20 k + 10\beta \leq 30 k$. Under the assumption that $\norm{a_i}_1 \geq x \cdot k$ for all rows, we get $(n_t/n) \leq 30/x$. So, when $n_t \geq 30n/x$, we can set $h(n_t) = 0$. In other words, the function \[
h(|S|) = \begin{cases}
0 & \mbox{ when } |S| \geq 30n/x\\ 4x \cdot (k/n) & \mbox{ otherwise} \end{cases} \] satisfies \eqref{eq:g5}. This gives $ \int_{t=0}^{n-2} h(n-t) \cdot (n-t)^{-1/p}dt = O(x^{1/p} \cdot k \cdot n^{-1/p})$, and by Theorem \ref{thm:g3}, \[
\mathrm{disc}(A) = O(\beta) +\min \left( O(\sqrt{p\cdot k}), O(\sqrt{n\log(2n)})\right) = O(\beta+ \sqrt{k}) \; \text{ for }p = 2.\] So if we only consider a set of rows with similar initial $\ell_1$-norms (within a constant factor of each other) at a time, the discrepancy of such a set is bounded by $O(\beta + \sqrt{k})$. This suggests using Theorem \ref{thm:subadd} to bound the discrepancy of the union of these sets. However, since the initial $\ell_1$-norms of rows can range anywhere from $1$ to $n$, there can be as many as $\log(n)$ sets and corresponding potential functions. Naively applying Theorem \ref{thm:subadd} would give a $\sqrt{\log(n)}$ factor increase in discrepancy, rather than a constant.
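The piecewise function $h$ above can be plugged into the definition of $\beta$ and checked numerically against the claimed $O(x^{1/p}\cdot k\cdot n^{-1/p})$ scaling via the closed-form bound $(4xk/n)\cdot(30n/x)^{1-1/p}/(1-1/p)$. The constants below are illustrative choices:

```python
import numpy as np

n, k, x, p = 10_000, 10.0, 50.0, 2.0
u = np.linspace(2.0, float(n), 400001)          # substitute u = n - t
h = np.where(u >= 30 * n / x, 0.0, 4 * x * k / n)
vals = h * u ** (-1.0 / p)
# Trapezoid rule for beta = int_0^{n-2} h(n-t) (n-t)^{-1/p} dt
beta = np.sum(0.5 * (vals[1:] + vals[:-1])) * (u[1] - u[0])
# Closed-form bound: (4xk/n) * (30n/x)^{1-1/p} / (1 - 1/p)
bound = (4 * x * k / n) * (30 * n / x) ** (1 - 1.0 / p) / (1 - 1.0 / p)
assert 0 < beta <= bound + 1e-6
```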
Before discussing how to fix this issue, we formally describe the partition of rows into classes:
\noindent \textbf{Partitioning rows according to $\ell_1$-norm}: First, extend $A$ such that for each original row $a_i$, there are two rows $a_i$ and $-a_i$ in $A$. Since our goal is to prove discrepancy $O(\sqrt{k})$, we can ignore all rows with $\ell_1$-norm less than $\sqrt{k}$. Then $m\leq 2n\sqrt{k}$, because the sum of the $\ell_1$-norms of the rows is at most $2nk$, so the number of rows with $\ell_1$-norm greater than $\sqrt{k}$ is at most $2nk/\sqrt{k} = 2n\sqrt{k}$. Let $N = \ceil{\log_2{(n/k)}}$ and $\script{Q} = \{0\} \cup [N]$. Partition the rows of $A$ based on their initial $\ell_1$-norm into $|\script{Q}|=N+1$ classes: \begin{itemize}
\item $\script{A}_0 = \{i \in \script{I}: \sqrt{k} \leq \norm{a_i}_1 < 2k \} $.
\item For each $q\in [N]$, let $\script{A}_q = \{i \in \script{I}: 2^{q} k \leq \norm{a_i}_1 < 2^{q+1}k \} $. \end{itemize} The sum of $\ell_1$-norms of rows in $A$ is at most $2nk$, therefore for any $q$, $2^{q}k \vert \script{A}_q \vert \leq 2nk$ and $\vert \script{A}_q\vert \leq 2^{1-q}n$.
To keep the increase in discrepancy a constant factor rather than $\sqrt{\log(n)}$, we carefully distribute the following two resources among these classes at any step: \begin{itemize}
\item The number of small-slack rows from each class that $v_t$ is made orthogonal to. Since the total number of rows $v_t$ can move orthogonal to at time $t$ is at most $n_t$, we need to distribute this budget of $n_t$ among the classes. See Lemma \ref{lem:sdim} for more details.
\item The bound on $ \sum_{i\in \script{I}_t \cap \script{A}_q} \left(2\lambda \dt{e_{t,i}}{v_t} - \dt{{a}_i}{v_t}\right)^2 s_i(t)^{-p-1}$ in terms of $\sum_{i\in \script{I}_t \cap \script{A}_q} h(n_t)s_i(t)^{-p-1}$ for each class $q$. \end{itemize} Rows with larger initial $\ell_1$ norm get more of each resource.
We create $N+1$ potential functions $\{\Phi_i(t)\}_{i=0}^N$, one associated with each row partition. The potential functions use the same $p, b_0$ parameters, and $\lambda = cb_0$ with $c = 1/42$, but have different rates of change $d_q(\cdot)$ of the barrier functions, based on $q$. We will run Algorithm \ref{alg:p2} on each partition separately but use the same $x_t$ and $v_t$ at each step. In this case, we can select parameters to ensure that each potential function is decreasing in expectation (see Lemma \ref{lem:phi_dec}). However, there might not exist a vector $v_t$ that ensures that moving in the direction $v_t$ decreases all the potential functions simultaneously.
To deal with this, we use a weighted combination of $\Phi_q$ as the potential function: Let \begin{equation}
\Phi(t) = \frac{1}{k} \cdot \Phi_0(t)+\sum_{q \geq 1} 2^{2q} \cdot \Phi_q(t). \label{eq:sum_potential} \end{equation} For the reasoning behind the form of $\Phi(t)$, see Section \ref{sec:step_t}. \subsubsection{A suitable subspace} To identify the constrained subspace for the PotentialWalk (Algorithm~\ref{alg:p1}), we use the following definitions. The set of \emph{Active} rows is defined as \[ \script{I}_t = \{i \in \script{I}: \sum_{j \in \script{V}_t} \vert a_i(j) \vert \leq 12 k\}. \] For each class $q$, let $h_q: \mathbb{R}^+ \rightarrow \mathbb{R}$ be a non-increasing function such that for every subset $S \subseteq [n]$, at most $n_t/16$ rows $i$ from class $\script{A}_q$ violate the condition \begin{equation}
\sum_{j \in S} |a_i(j)| \leq |S| \cdot h_q(|S|). \label{eq:hq} \end{equation} While following the general framework from Section \ref{sec:gen}, we make three crucial changes: \begin{itemize}
\item Move orthogonal to rows with \emph{large deviation}. At step $t$, the $\ell_1$-norm of row $a_i$ will be close to $(n_t / n)\cdot \norm{a_i}_1$ for most rows. Let $a_{i,t}$ denote the vector in $\mathbb{R}^n$ with $j$-th entry $\mathbf{1}_{j\in \script{V}_t} a_i(j)$, i.e., $a_{i,t}$ is row $a_i$ restricted to the alive coordinates at time $t$. Then the set of \emph{large deviation} rows consists of the rows that deviate significantly from this expected value: \begin{equation}
\script{B}_t = \{i \in \script{I}: \vert \norm{a_{i,t}}_1- \norm{a_i}_1 \cdot(n_t/n) \vert \geq 4\beta \}. \label{eq:badrows} \end{equation} For any $t\in [T]$, \eqref{eq:s1} implies that $\dim(\script{B}_t) \leq \floor{n_t/16}$. \item Ignore \emph{Dead} rows. As soon as the $\ell_1$-norm of some row becomes less than $8\beta$, we drop it from the potential function. The set of \emph{dead} rows at step $t$ is defined as \begin{equation}
\script{D}_t = \{i \in \script{I}: \norm{a_{i,t}}_1 \leq 8\beta \}. \label{eq:deadrows} \end{equation} For a dead row, rather than keeping track of its discrepancy using a slack function, we will uniformly bound the additional discrepancy gained by a row after it becomes dead. \item \emph{Block} rows based on their initial size. For $q\in \script{Q}$, let $\mathcal{C}_t^q$ be the subset of $\mathcal{A}_q \cap \script{I}_t$ corresponding to the $\floor{2^{q-8}n_t^2/n}$ smallest values of $\{s_i(t): i\in\script{A}_q \cap \script{I}_t\}$, and let $\mathcal{J}_t^q = \mathcal{A}_q \backslash \{\mathcal{C}_t^q \cup \script{D}_t\}$. \end{itemize} We are ready to state the algorithm for selecting $v_t$.
\allowdisplaybreaks \begin{algorithm}[H] \SetAlgoNoLine \DontPrintSemicolon \setstretch{1.35} \caption{Algorithm for Selecting $v_t$}\label{alg:p3} Let $h_q(n_t) = 2^{q+2}k/n$ and $w_q(t) = 2^{5-\frac{q}{4}} \left(\frac{n}{n_t}\right)^{1/4}$\; \For{$t=1,\ldots, T$}{ Let $\script{W}_t = \{{w} \in \mathbb{R}^n: {w}(i) = 0, \; \forall i \notin \script{V}_t \}$ \tcp*{\small restrict to alive variables} Let $\script{U}_t = \{{w} \in \script{W}_t: \dt{{w}}{2cb_0 {e}_{t,i} - {a}_{i}} = 0, \forall i \in \mathcal{C}_t \text{ and } \dt{{w}}{x_t} = 0 \}$\; \tcp*{\small restrict to large slack rows} Let $\script{Y}_t = \{{w} \in \mathcal{W}_t: \dt{{w}}{{a}_{i}} = 0, \forall i \in \script{I} \backslash \script{I}_t\}$\tcp*{\small move orthogonal to large norm rows} Let $\script{G}_t =\{w \in \script{W}_t: \dt{a_i}{w} = 0, \; \forall i\in \script{B}_t\}$ \;\tcp*{\small move orthogonal to large deviation rows} Let $\script{Z}_t = \script{U}_t \cap\script{Y}_t \cap \script{G}_t$ and let $W =\{{w}_1, {w}_2, \ldots, {w}_k \}$ be an orthonormal basis for $\script{Z}_t$\; Let $v_t\in W$ be such that for all $q \in \script{Q}$,
\begin{equation}
\sum_{i\in \script{J}^q_t} \dt{2cb_0 {e}_{t,i} - {a}_{i}}{v_t}^2 s_i(t)^{-p-1} \leq 8 w_q(t) \cdot h_q(n_t)\sum_{i\in\script{J}^q_t} s_i(t)^{-p-1}.\label{eq:vt}
\end{equation}} \end{algorithm}
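The selection rule in Algorithm \ref{alg:p3} can be sketched in code. The following is a hypothetical illustration in our own notation (the names \texttt{select\_vt}, \texttt{slack}, etc.\ are ours, not from the paper): it scans an orthonormal basis $W$ of $\script{Z}_t$ and returns a basis vector whose per-class quadratic form satisfies the bound \eqref{eq:vt} for every class simultaneously; the Markov-inequality argument in Lemma \ref{lem:salph} guarantees such a vector exists when $\sum_q 1/w_q(t) < 1$.

```python
def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def select_vt(W, classes, slack, update_vecs, weights, bounds):
    """W: orthonormal basis of Z_t (list of vectors).
    classes: dict q -> list of row indices in J_t^q.
    slack[i]: the factor s_i(t)^{-(p+1)}.
    update_vecs[i]: the vector 2*c*b0*e_{t,i} - a_i.
    weights[q]: w_q(t); bounds[q]: 8*h_q(n_t) * sum of slack over the class."""
    for w in W:
        # accept w if the per-class quadratic form obeys the weighted bound
        if all(
            sum(dot(update_vecs[i], w) ** 2 * slack[i] for i in rows)
            <= weights[q] * bounds[q]
            for q, rows in classes.items()
        ):
            return w
    return None  # ruled out by the averaging/Markov argument when sum_q 1/w_q < 1
```

On a toy instance with a single class and a generous weight, the first basis vector already qualifies.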
We are now ready for the formal proof, which we divide into several parts. The first part bounds the number of active classes at time $t$ by a slowly growing function of $t$. We then derive the specific weights used in the combined potential function, which aggregates the potential functions for the individual row classes (grouped by initial norm). After that we show that there is a large subspace of vectors, each of which satisfies the desired goal of not increasing the potential value while respecting all the constraints on inactive rows and variables. Using this we bound the final discrepancy.
\subsubsection{Number of active classes}\label{sec:step_t} \begin{lem}\label{lem:snum} At step $t$, the following two conditions hold: (i) The number of classes $q$ for which $\script{A}_q \cap \{\script{I}_t\backslash\{\script{B}_t \cup \script{D}_t\}\} \neq \emptyset$ is at most $\log(16n/n_t)$ and (ii)
$h_q(t) = 2^{q+2} k/ n$ satisfies~\eqref{eq:hq} for all $q \in \script{Q}$. \end{lem} \begin{proof}
Let $\norm{a_{i,t}}_1 = \sum_{j \in \script{V}_t} |a_i(j)|$, i.e., it is the $\ell_1$-norm of row $i$ restricted to $\script{V}_t$. At step $t$, if $i \in \script{I} \backslash \{\script{B}_t \cup \script{D}_t\}$, then by~\eqref{eq:badrows} and~\eqref{eq:deadrows}, we have $ 8\beta \leq \norm{a_{i,t}}_1$ and \begin{align}
(n_t/n) \cdot \norm{a_i}_1 - 4\beta &\leq \norm{a_{i,t}}_1 \leq (n_t/n) \cdot \norm{a_i}_1 +4\beta.\label{eq:s2} \end{align} This gives
$4\beta \leq (n_t/n) \cdot \norm{a_i}_1$ and
$
\norm{a_{i,t}}_1 \leq (2n_t/n) \cdot \norm{a_i}_1 \label{eq:eq2}$.
Moreover, if $i\in \script{A}_q$ then $\norm{a_i}_1 \leq 2^{q+1}k$ and we get
$\norm{a_{i,t}}_1\leq (n_t/n) \cdot 2^{q+2}k.$ Therefore $h_q(t) = 2^{q+2}k/n$ satisfies~\eqref{eq:hq}.
Furthermore, if $i \in \script{I}_t$, i.e., $\norm{a_{i,t}}_1 \leq 12 k$, by~\eqref{eq:s2} we have $(n_t/n)\cdot \norm{a_i}_1 -4\beta \leq \norm{a_{i,t}}_1 \leq 12k$. As $\beta < k$, this gives
$(n_t/n)\cdot \norm{a_i}_1 \leq 4 \beta + 12 k \leq 16 k.$ So if $i\in\script{I}_t\backslash\{\script{B}_t \cup \script{D}_t\}$, then \[ 4\beta\cdot (n/n_t)\leq \norm{a_i}_1 \leq 16 k \cdot (n/n_t).\] Note that this condition is dependent only on the initial $\ell_1$-norm of $a_i$. Since $2^q k\leq\norm{a_i}_1 < 2^{q+1}k$ for any $i\in \script{A}_q$, a necessary condition for $\script{A}_q \cap \{\script{I}_t\backslash\{\script{B}_t \cup \script{D}_t\}\} \neq \emptyset$ is \begin{equation}
(2\beta/k)\cdot (n/n_t) \leq 2^{q} \leq 16 \cdot (n/n_t).\label{eq:s4} \end{equation} Therefore $q \leq \log(16n/n_t)$. \end{proof} Lemma \ref{lem:snum} implies that at any step $t$, the set of active rows is from the first $\log_2(16n/n_t)$ classes of rows. It also helps us define two important parameters associated with a row class $q$. At step $t$, consider a $q\in \script{Q}$ with $\script{A}_q \cap \{\script{I}_t \backslash \{\script{B}_t \cup \script{D}_t\}\}\neq \emptyset$. \begin{itemize}
\item By~\eqref{eq:s4}, we have $n - \delta^2 t - 1 < n_t \leq 16\cdot 2^{-q} n$, so class $q$ can only contribute once $t$ is sufficiently large.
For $q \geq 1$, let \[ t_q := \max\left\{0,\;n\delta^{-2}\left(1-16\cdot 2^{-q} - 1/n\right)\right\}\] Similarly, let \[
t_0 := n\delta^{-2} \left(1-16k^{-1/2} - 1/n\right).\] Before step $t_q$, for any $i \in\script{A}_q$ we have $\dt{a_i}{v_t} = 0$. Because $s_i(t)$ is constant until $t_q$, we set $d_q(t) = 0$ for all $t < t_q$.
\item On the other hand, $q$ must satisfy $2^q \leq \frac{16n}{n_t}$. Let \[
q_t := \max \left\{i\geq 0 : 2^i \leq 16\cdot(n/n_t)\right\}. \] \end{itemize}
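As a concrete illustration, the activation time $t_q$ and the largest active class $q_t$ can be computed directly from the definitions above. This is a sketch in our own notation (function names are ours), using $n_t = n - \delta^2 t - 1$:

```python
import math

def t_q(q, n, delta, k):
    """Earliest step at which class q can become active, per the formulas above.
    (The q = 0 case mirrors the paper's definition of t_0.)"""
    if q == 0:
        return n / delta**2 * (1 - 16 / math.sqrt(k) - 1 / n)
    return max(0.0, n / delta**2 * (1 - 16 * 2 ** (-q) - 1 / n))

def q_t(n, n_t):
    """Largest integer i >= 0 with 2^i <= 16*(n/n_t)."""
    return int(math.floor(math.log2(16 * n / n_t)))
```

For example, at the start ($n_t = n$) only classes $q \leq 4$ can be active, and $t_q$ grows with $q$: larger classes enter the potential later.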
\subsubsection{The weighted potential function} Now we can justify our choice of the potential function. If all the potential functions actually decreased at every step of the algorithm, and we could select a $v_t$ that ensured $\max_{i \in \script{A}_q} (\dt{a_{i}}{v_t})^2 \leq k/n$ for all $q$, then using $h_q(t) = k/n$ for all $q \in \script{Q}$, Theorem \ref{thm:g3} gives us \begin{equation*}
\sum_{t=t_q}^n d_q(t) \simeq O(\sqrt{p(2^{1-q}n)^{1/p}})\cdot \sqrt{\int_{t=t_q}^{n-2}(n-t)^{-1/p}dt\cdot (k/n)} = O(2^{\frac{q}{p} - q(1-\frac{1}{p})}\sqrt{k}) = O(\sqrt{k}), \end{equation*} for $p=2$. However, since the potential functions decrease simultaneously only in expectation, there might not exist a $v_t$ such that each potential function decreases when we move along $v_t$. Instead we take a weighted linear combination of the potential functions $\Phi(t)$~\eqref{eq:sum_potential}, and ensure that $\Phi(t)$ is decreasing at each step $t$. Strictly speaking, $\Phi(t)$ is not decreasing over time but actually increasing as row classes with higher $q$ get added in later steps. When we say $\Phi(t)$ is decreasing, we mean that $\Phi(t+1)$ restricted to rows in $\script{I}_t$ is less than $\Phi(t)$ restricted to rows in $\script{I}_t$, i.e., $\sum_{i\in \script{I}_t}s_i(t+1)^{-p}\leq \sum_{i\in \script{I}_t}s_i(t)^{-p}$.
What should the weights be? First, we need to normalize $\Phi_q(t)$ by $|\script{A}_q(t)|$. This alone is not enough: we still want to use $\Phi(t)$ to bound $1/s_i(t)$ for each active row, but $\Phi(t)$ can be much larger than the individual $\Phi_q(t)$.
If we use the sum of normalized potential functions as the potential, consider some $i \in \script{A}_{q}$. Condition~\eqref{eq:s4} implies that at step $t$, there are at most $\log_2(16n/n_t)$ active classes and therefore $\max_{i\in \script{A}_q} (s_i(t))^{-p}\propto \log_2(16n/n_t) \cdot \Phi_q(0)$. This gives \begin{align*}
\sum_{t=t_q}^n d_q(t) &\simeq O\left(\sqrt{p(2^{1-q}n)^{1/p}} \right)\cdot \sqrt{\int_{t=t_q}^{n-2}\left( \log\frac{n}{n_t}\cdot\frac{1}{(n-t)}\right)^{1/p} \cdot (k/n)} \\
&= O(q2^{\frac{q}{p} - q(1-\frac{1}{p})}\sqrt{k}) = O(q\sqrt{k}), \end{align*} for $p = 2$. Intuitively, a row with a large initial size may acquire high discrepancy because it gets added to the potential function later, when $\Phi(t)$ contains the potentials corresponding to more row classes $q$, and therefore the value of $\Phi(t)$ is actually higher. This suggests that the potential $\Phi_q(t)$ corresponding to a large $q$ should have a higher weight to balance the effect of a large value of $\Phi(t)$, and hence our choice of $\Phi(t)$: \begin{equation*}
\Phi(t) = \frac{1}{k} \cdot \Phi_0(t)+\sum_{q = 1}^{q_t} 2^{2q} \cdot \Phi_q(t). \end{equation*}
\subsubsection{Bounding the discrepancy} The next lemma gives a bound on $\dim(\script{Z}_t)$ analogous to Lemma \ref{lem:g2}. \begin{lem} \label{lem:sdim} For any $t \in [T]$, it holds that $\dim(\script{Z}_t)\geq \ceil{n_t/2}$. \end{lem} \begin{proof} At time $t$, $\script{I}_t$ consists only of rows from classes $\script{A}_q$ with $q\leq q_t$. So, \begin{align*}
\dim(\script{C}_t) &\leq \sum_{i=0}^{q_t} \dim(\script{C}_t^i)\leq\sum_{i=0}^{q_t} \frac{2^{i-8}n_t^2}{n}
\leq \frac{n_t^2}{n}\cdot 2^{q_t-7}\leq \frac{2^{-7}n_t^2}{n} \cdot \frac{16n}{n_t} \leq \frac{n_t}{8}. \end{align*} Since the number of rows in $\script{I}\backslash\script{I}_t$ is at most $\floor{n_t/6}$, we have $\dim(\script{Y}_t) \geq n_t - \floor{n_t/6}$.
By~\eqref{eq:s1}, $\dim(\script{B}_t) \leq \floor{n_t/16}$ and $\dim(\script{G}_t) \geq n_t - \floor{n_t/16}$. Putting it together, \begin{equation*}
\dim(\script{Z}_t) \geq \dim(\script{Y}_t) - \dim(\script{B}_t) - \dim(\script{C}_t) - 1 \geq \ceil{n_t/2}.\qedhere \end{equation*} \end{proof}
The next lemma is analogous to Lemma \ref{lem:g4}. \begin{lem} \label{lem:salph} For all $t \in [T]$, there exists $v_t \in \script{Z}_t$ such that $\forall q \in \script{Q}$, \begin{align}
\sum_{i\in\script{J}^q_t} \dt{2cb_0 {e}_{t,i} - {a}_{i}}{v_t}^2 s_i(t)^{-p-1}&\leq 8w_q(t) \cdot h_q(n_t) \sum_{i\in\script{J}^q_t} s_i(t)^{-p-1} \; .\label{eq:kvt} \end{align} \end{lem} \begin{proof} By Lemmas \ref{lem:g4} and \ref{lem:sdim}, for each $q \in \script{Q}$, there exists $v_q \in \script{Z}_t$ such that \begin{equation*}
\sum_{i\in \script{J}^q_t} \dt{2cb_0{e}_{t,i} - {a}_{i}}{v_q}^2s_i(t)^{-p-1}\leq \frac{n_t}{\dim(\script{Z}_t)} \cdot\sum_{i\in \script{J}^q_t} 4h_q(n_t)s_i(t)^{-p-1} \leq \sum_{i\in \script{J}^q_t} 8h_q(n_t)s_i(t)^{-p-1} . \end{equation*}
However, this does not imply that there exists a $v_t$ that satisfies these bounds for all classes simultaneously. Instead, we use Markov's inequality to assign a weight $w_q(t)$ to each class $q$ at step $t$ such that $\sum_{q = 0}^{q_t} w_q^{-1}(t) < 1$, and therefore there exists a vector $v_t \in \script{Z}_t$ such that
\begin{equation}
\sum_{i\in \script{I}_t \cap \script{A}_q} \left(2\lambda \dt{e_{t,i}}{v_t} - \dt{{a}_i}{v_t}\right)^2 s_i(t)^{-p-1} \leq w_q(t) \cdot \sum_{i\in \script{I}_t \cap \script{A}_q} 8h_q(n_t)s_i(t)^{-p-1} \label{eq:wts}
\end{equation}
for each class. Let \begin{equation*}
\script{Q}_t = \{q \in \script{Q}: \script{A}_q \cap \{\script{I}_t\backslash\{\script{B}_t \cup \script{D}_t\}\} \neq \emptyset\}. \end{equation*} If some row class $q$ is not in $\script{Q}_t$, then any row $i \in \script{A}_q$ is either dead, frozen, or bad. If it is dead, we drop it from the potential and it does not affect~\eqref{eq:wts}. If it is frozen or bad, $\dt{2cb_0e_{i,t} - a_i}{v_t} = 0$ and the condition is trivially satisfied. So we only need to consider $q \in\script{Q}_t$. The weight $w_q = 2^{5-q/4}\left(n/n_t\right)^{1/4}$ suffices as $\sum_{q = 1}^{q_t}2^{q/4-5}\left(n_t/n\right)^{1/4} \leq 1/2$. \end{proof}
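The final step of the proof, that the reciprocal weights sum to at most $1/2$, is easy to verify numerically. The following sketch (ours, not the authors' code) evaluates $\sum_{q=1}^{q_t} w_q^{-1}(t)$ for given $n$ and $n_t$:

```python
def weight_sum(n, n_t):
    """Sum of 1/w_q(t) for q = 1..q_t, with w_q(t) = 2^{5-q/4} (n/n_t)^{1/4}
    and q_t the largest q with 2^q <= 16 n/n_t."""
    q_t = 0
    while 2 ** (q_t + 1) <= 16 * n / n_t:
        q_t += 1
    return sum(2 ** (q / 4 - 5) * (n_t / n) ** 0.25 for q in range(1, q_t + 1))
```

The geometric growth $2^{q/4}$ is dominated by the top term $q = q_t$, and $2^{q_t} \leq 16 n/n_t$ keeps the total below $1/2$ for every ratio $n/n_t$.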
Note that for any row $i \in \script{A}_q$ and any $t \leq t_q$, $\dt{2cb_0e_{i,t} - a_i}{v_t} = 0$, so we can set $d_q(t) = 0$ for rows in class $q$ during this phase. Then, by Theorem \ref{thm:g1} and equation~\eqref{eq:wts}, \begin{equation}
d^q(t) = \begin{cases}
0 & \text{if } t \leq t_q \\
4(p+1) \cdot w_q(t) \cdot h_q(n_t) \cdot \max_{i\in \mathcal{J}^q_t} s_i(t)^{-1} & \text{otherwise},
\end{cases}
\label{eq:sc1} \end{equation} implies that there exists a $v_t \in \script{Z}_t$ such that for all $q \in \script{Q}$,\begin{equation}
\mathbb{E}[\Phi_q(t)] \leq \Phi_q(t-1) + \frac{1}{Tn b_0^p}. \label{eq:s11} \end{equation} The next lemma helps us bound the rate of change of $b_q(t)$, which eventually gives a bound on $b_q(T)$ in Theorem \ref{thm:extended}. \begin{lem} For any $t \in \{0, \ldots, T\}$, if $\Phi(t) \leq 8n\left(\frac{2}{b_0}\right)^p (\frac{16n}{n_t})$, then \begin{equation}
\max_{j\in\script{J}_t^q}s_j(t)^{-1} \leq \begin{cases}
k^{1/p} \cdot \frac{2^{1+15/p}}{b_0}\left(\frac{n}{n_t}\right)^{3/p} & \text{if } q=0 \\
\frac{2^{1+(15-3q)/p}}{b_0}\left(\frac{n}{n_t}\right)^{3/p} & \text{if } q \geq 1. \label{eq:sss}
\end{cases} \end{equation} \end{lem} \begin{proof} For any $q$ and $i \in \mathcal{J}_t^q$, there are at least $\floor{2^{q-8}n_t^2/n}$ indices $j$ in $\script{I}_t \cap \script{A}_q$ such that $s_j(t) \leq s_i(t)$. Therefore, for $q \geq 1$, \begin{align}
2^{2q}\cdot \frac{2^{q-8}n_t^2}{n} \cdot s_i(t)^{-p}\leq \Phi(t),\label{eq:s6} \end{align} and for $q = 0$, \begin{align}
\frac{1}{k}\cdot \frac{2^{-8}n_t^2}{n} \cdot s_i(t)^{-p}\leq \Phi(t).\label{eq:s10} \end{align} Plugging $\Phi(t) \leq 8n\left(\frac{2}{b_0}\right)^p (\frac{16n}{n_t})$ in~\eqref{eq:s6} and~\eqref{eq:s10} gives the required bounds. \end{proof}
\begin{lem} \label{lem:phi_dec} For the values of $p$ and $d_q(\cdot)$ given by~\eqref{eq:sc1}, for all $t=0,\ldots,T$, we have \begin{equation*}
\Phi(t) \leq \frac{2^7 n^2}{n_t} \cdot \left(\frac{2}{b_0}\right)^p. \end{equation*} \end{lem} \begin{proof}
Plugging~\eqref{eq:s11} in the definition of $\Phi(t)$, \begin{equation}
\mathbb{E}(\Phi(t+1)) - \Phi(t) \leq \frac{1}{Tb_0^p} + \vert \{\script{I}_{t} \backslash \script{I}_{t-1}\} \cap \script{A}_0\vert \cdot \frac{1}{k}\cdot \left(\frac{2}{b_0}\right)^p+ \left(\frac{2}{b_0}\right)^p \cdot\sum_{q\geq 1} 2^{2q}\cdot \vert \{\script{I}_{t} \backslash \script{I}_{t-1}\} \cap \script{A}_q \vert .\label{eq:sphi} \end{equation} At every step $t$, the algorithm selects the choice of $x_t$ for which the above inequality is true. Summing $\Phi(s)-\Phi(s-1)$ over $s\in [t]$, \begin{align}
\Phi(t)&\leq \Phi(0) + \frac{1}{k}\vert \script{I}^0_{t} \vert \cdot \left(\frac{2}{b_0}\right)^{p}+ \sum_{q\geq 1} 2^{2q}\vert \script{I}^q_{t} \vert \cdot \left(\frac{2}{b_0}\right)^{p} \end{align} For any $q \in \script{Q}$, by Lemma \ref{lem:dphi} we have $\Phi_q(0) + \sum_{t}\vert \script{I}^q_{t+1} \backslash \script{I}^q_{t} \vert \cdot (2/b_0)^{p} \leq \vert \script{A}_q \vert \cdot (2/b_0)^{p}$. This gives \begin{align}
\Phi(t)&\leq \frac{1}{k}\vert \script{A}_0\vert\cdot \left(\frac{2}{b_0}\right)^p + \sum_{1\leq q \leq q_t}2^{2q} \vert \script{A}_q\vert\cdot \left(\frac{2}{b_0}\right)^p \end{align} Using $\vert \script{A}_0\vert \leq 2n/\sqrt{k}$ and $\vert \script{A}_q\vert \leq 2^{1-q}n$ for $q \geq 1$, we get \begin{align}
\Phi(t)&\leq 2n \left(\frac{2}{b_0}\right)^p\left(\frac{1}{\sqrt{k}}+\sum_{q=1}^{q_t} 2^q\right) \leq 4n\left(\frac{2}{b_0}\right)^p 2^{q_t+1} \leq 2^7 \left(\frac{2}{b_0}\right)^p \left(\frac{n^2}{n_t}\right), \label{eq:s5} \end{align} where the last inequality follows from $2^{q_t} \leq 16(n/n_t)$. \end{proof}
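The chain of inequalities in the last display can be double-checked numerically. The sketch below (our own helper, not from the paper) evaluates the weighted sum in units of $(2/b_0)^p$, using the class-size bounds $\vert\script{A}_0\vert \leq 2n/\sqrt{k}$ and $\vert\script{A}_q\vert \leq 2^{1-q}n$:

```python
def phi_upper(n, n_t, k):
    """Upper bound on Phi(t) in units of (2/b0)^p:
    (1/k)|A_0| + sum_{q=1}^{q_t} 2^{2q} |A_q|, with the class-size bounds."""
    # largest q with 2^q <= 16 n / n_t
    qt = 0
    while 2 ** (qt + 1) <= 16 * n / n_t:
        qt += 1
    total = (1 / k) * (2 * n / k ** 0.5)
    total += sum(2 ** (2 * q) * 2 ** (1 - q) * n for q in range(1, qt + 1))
    return total
```

In every case the value stays below the claimed bound $2^7 n^2/n_t$, since the geometric sum $\sum_{q\leq q_t} 2^{q+1} \leq 4\cdot 2^{q_t} \leq 64\, n/n_t$.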
\begin{proof}[Proof of Theorem \ref{thm:extended}] If row $i \in \script{A}_q$ becomes dead after step $t-1$, then \begin{align*}
\vert \dt{a_i}{x_T} \vert &\leq \vert \dt{a_i}{x_t} \vert+ \vert \dt{a_{i,t}}{x_T-x_t} \vert \leq b_t(q) + 2\sum_{j\in \script{V}_t}\vert a_i(j)\vert \\
&\leq b_T(q) + 2\norm{a_{i,t}}_1 \leq b_T(q) + 16\beta, \end{align*} since a dead row satisfies $\norm{a_{i,t}}_1 \leq 8\beta$. Substituting the bound on $\max_{i \in \script{J}_q^t} s_i(t)^{-1}$ from~\eqref{eq:sss}, and using $w_q(t) = 2^{5-q/4}\cdot (n/n_t)^{1/4}$ and $h_q(t) = 2^{q+2}k/n$, we get \begin{equation*}
d_q(t) = \begin{cases} 0 & \text{ if } t < t_q\\
9k\cdot\frac{2^{3q/8+14}}{nb_0}\left(\frac{n}{n-\delta^2 t - 1}\right)^{5/8} & \text{ if } q \geq 1 \text{ and } t \geq t_q\\
9k^{\frac{9}{8}}\cdot\frac{2^{14}}{nb_0}\left(\frac{n}{n-\delta^2 t - 1}\right)^{5/8} & \text{ if } q = 0 \text{ and } t \geq t_0.
\end{cases} \end{equation*} For any $q\geq 1$, summing up $d_q(\cdot)$, \begin{align*}
b_q(T) &= b_0 + \delta^2 \sum_{t=t_q}^{T-1} d_q(t) \leq b_0 + \delta^2 \int_{t = t_q}^T\frac{9k\cdot 2^{3q/4+12+(15-3q)/8}}{nb_0}\left(\frac{n}{n-\delta^2 t-1}\right)^{5/8}dt\\
&\leq b_0+\int_{t = \delta^2 t_q}^{n-2} \frac{9k\cdot2^{3q/8+14}}{nb_0}\left(\frac{n}{n- t-1}\right)^{5/8}dt\\
&\leq b_0 + \frac{2^{19+3q/8}k}{b_0} \cdot n^{-3/8} \cdot (n-\delta^2t_q)^{3/8} \leq b_0 +\frac{2^{20}k}{b_0}.
\end{align*}
For $b_0 = 2^{10}\sqrt{k}$, $b_q(T) \leq 2^{11}\sqrt{k}$ for all $q \geq 1$.
A similar calculation for $q = 0$ shows that $b_0 = 2^{10}\sqrt{k}$ and $b_T(0) = 2^{11}\sqrt{k}$ suffice.
Let $x\in \{-1,1\}^n$ be obtained from $x_T$ by the rounding $x(j) = \mathrm{sign}(x_{T}(j))$. Since $T = (n-2)/\delta^2$, $\norm{x_{T}}^2 = n-2$ with $\vert x_T(j) \vert \leq 1$ for all $j\in [n]$. After rounding $x_{T}$ to $x$, $\norm{x}^2 = n$ and \begin{align*}
\vert \dt{a_i}{x} \vert &\leq \vert \dt{a_i}{x_T} \vert+ \vert \dt{a_i}{x-x_T} \vert \leq b_T + 16\beta + \sum_{j}\vert x(j)- x_T(j)\vert \\
&\leq b_T + 16\beta + \sum_{j}\left(1 - x_T(j)^2\right) = b_T + 16\beta + 2.\qedhere \end{align*} \end{proof}
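The rounding step at the end of the proof is elementary and can be sketched as follows (hypothetical helper names, ours). It only uses $\vert x_T(j)\vert \leq 1$ and $\norm{x_T}^2 = n-2$, which force the $\ell_1$ rounding error to be at most $2$:

```python
def round_coloring(x_T):
    """Round the fractional coloring x_T to a sign vector x(j) = sign(x_T(j))."""
    return [1 if v >= 0 else -1 for v in x_T]

def l1_rounding_error(x_T):
    """sum_j |x(j) - x_T(j)|; bounded by sum_j (1 - x_T(j)^2) since |x_T(j)| <= 1."""
    x = round_coloring(x_T)
    return sum(abs(xi - vi) for xi, vi in zip(x, x_T))
```

When the row entries satisfy $\vert a_i(j)\vert \leq 1$ (as for hypergraph incidence matrices), this $\ell_1$ error translates directly into the additive $+2$ in the discrepancy bound.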
\paragraph{Random and Semi-random Sparse Hypergraphs.} This gives an alternate proof of the result of Potukuchi \cite{potukuchi2019spectral} that $\mathrm{disc}(\script{H}) = O(\sqrt{k})$ for a random $k$-regular hypergraph $\script{H}$ on $n$ vertices and $m$ edges with $m\geq n$ and $k=o(m^{1/2})$. In particular, Potukuchi showed that such hypergraphs satisfy condition \eqref{eq:s1} with high probability.
Consider a random $k$-regular hypergraph $A$ with $n$ vertices and $m$ edges as above, but suppose that an adversary can change the graph so that, for every vertex $v$, the number of edges incident to $v$ that are added or deleted is at most $t$. Let $A+C$ denote the incidence matrix of this corrupted hypergraph. How much can this affect the discrepancy of the hypergraph? \semirandomh* \begin{proof}
By the subadditive property of stochastic discrepancy, $
\mathrm{disc}(A+C) \leq O(\sqrt{k}) + O(\sqrt{t\log{n}}).$ However, this bound is not algorithmic because it requires running the algorithm separately on $A$ and $C$. \end{proof}
\paragraph{Acknowledgments.} We are grateful to Yin Tat Lee and Mohit Singh for helpful discussions. The latter two authors were supported in part by NSF awards CCF-2007443 and CCF-2134105.
\begin{appendix}
\section{Appendix: Bounding the step size}\label{appendix:ss}
\begin{lem}\label{lem:dphi} For $A \in \mathbb{R}^{m \times n}$, \begin{itemize}
\item $\Phi(0) + \sum_{t}\vert \script{I}_{t+1} \backslash \script{I}_{t} \vert \cdot \left(\frac{2}{b_0}\right)^{p} \leq 2m \cdot \left(\frac{2}{b_0}\right)^{p}$.
\item For all $t \in \{0, 1, \ldots, T-1\}$, if $\Phi(t) \leq 2^7 m^2 \left(\frac{2}{b_0}\right)^{p}$ and $d_t = O(pn \cdot \max_{i \in \script{J}_t} s_i(t)^{-1})$, then for step size $\delta^2 \leq (Cn^2m^6p^4)^{-1}$, \begin{align*}
\mathbb{E}(\Phi(t+1)) -\Phi(t) &\leq f(t)+ \frac{1}{Tnb_0^p}+ \vert \script{I}_{t+1} \backslash \script{I}_{t} \vert \cdot \left(\frac{2}{b_0}\right)^{p}\\
\text{where }\qquad f(t) &= -p \delta^2 \sum_{i\in \script{I}_t} \frac{d_t + cb_0\dt{{a}_i^{(2)}}{v_t^{(2)}}}{s_i(t)^{p+1}} + \frac{p(p+1)\delta^2}{2}\sum_{i\in \script{I}_t} \frac{ \dt{2cb_0e_{t,i} - a_i}{v_t}^2 }{s_i(t)^{p+2}}. \end{align*} \end{itemize} \end{lem} \begin{proof} We note that the purpose of this lemma is to allow the proof to ignore higher order terms by making the step size inverse polynomially small, and thereby obtain a (deterministic) polytime algorithm. As our focus is on establishing polynomiality, we have not optimized the bounds.
For any $i \in \script{I}_{t+1}\backslash \script{I}_{t}$, we have $\dt{a_i}{x_{t+1}} = 0$ and \[ \sum_{j}a_{i}(j)^2(1-x_{t+1}(j)^2) \leq \sum_{j\in\mathcal{V}_{t+1}} a_i(j)^2 + n (1-(1-\frac{1}{2n})^2) < 21. \] Therefore, for any $i \in \script{I}_{t+1}\backslash \script{I}_{t}$, using the fact that the coefficient of the above energy term is $cb_0 = b_0/42$, \begin{align*}
\frac{1}{s_{i}(t+1)} \leq \frac{1}{(b_{t+1} - 21 cb_0)}\leq \frac{2}{b_0} . \end{align*} Therefore \[
\Phi(0) = \sum_{i \in \script{I}_0} s_i(0)^{-p} \leq |\script{I}_{0}| \cdot (\frac{2}{b_0})^p. \]
Since $|\script{I}_{0}| + \sum_{t} |\script{I}_{t+1}\backslash \script{I}_{t}| \leq 2m$, we have \begin{equation*}
\Phi(0) + \sum_{t}\vert \script{I}_{t+1} \backslash \script{I}_{t} \vert \cdot \left(\frac{2}{b_0}\right)^{p} \leq 2m \cdot \left(\frac{2}{b_0}\right)^{p}. \end{equation*} This concludes the proof of the first part.
For the second part, we will use a second-order Taylor approximation and choose $\delta$ small enough so that the higher order terms are negligible.
Let $Z_t(b, x) := \sum_{i \in \script{I}_t} \left(b - \dt{a_i}{x} - \lambda\cdot \sum_{j=1}^n a_i(j)^2(1-x(j)^2)\right)^{-p} = \sum_{i \in \script{I}_t} s_i(t)^{-p}$, the potential function restricted to the active rows in time step $t$. Then, \begin{align*}
\Phi(t+1) - \Phi(t) &= \sum_{i\in \script{I}_t}\left(s_i(t+1)^{-p} -s_i(t)^{-p}\right) + \sum_{i\in \script{I}_{t+1} \backslash \script{I}_{t} }s_i(t+1)^{-p}\\
&\leq Z_t(b_{t+1}, x_{t+1}) - Z_t(b_t, x_t) + \vert \script{I}_{t+1} \backslash \script{I}_{t} \vert \cdot \left(\frac{2}{b_0}\right)^{p}. \end{align*} Hence, \begin{align}
\mathbb{E}(\Phi(t+1)) - \Phi(t) &\leq \mathbb{E}(Z_t(b_{t+1}, x_{t+1})) - Z_t(b_t, x_t) + \vert \script{I}_{t+1} \backslash \script{I}_{t} \vert \cdot \left(\frac{2}{b_0}\right)^{p}\label{eq:apf} \end{align} Using Taylor's theorem, \begin{align*}
Z_t(b_{t+1}, x_{t+1}) - Z_t(b_t, x_t) &= \delta \cdot\nabla_xZ_t(b_t, x_t)^{\top}v_t + \delta^2\cdot\nabla_b Z_t(b_t, x_t)d_t+ \frac{\delta^2}{2}\cdot v_t^{\top} \nabla_x^2Z_t(b_t, x_t) v_t \\ &+ \frac{\delta^4}{2}\cdot \nabla^2_bZ_t(b_t,x_t)d_t^2 + \frac{1}{6}\cdot\nabla^3Z_t(b', x')[w,w,w], \end{align*} for some $b' \in [b_t, b_t + \delta^2 d_t]$ and $x' \in [x_t, x_t+\delta v_t]$, and $w$ is the tuple $(\delta^2 d_t, \delta v_t)$. Taking expectation, \begin{align}
\mathbb{E}(Z_t(b_{t+1}, x_{t+1})) - Z_t(b_t, x_t) &= \delta^2\cdot\nabla_b Z_t(b_t, x_t)d_t+ \frac{\delta^2}{2}\cdot v_t^{\top} \nabla_x^2Z_t(b_t, x_t) v_t \notag \\
&+ \frac{\delta^4}{2}\cdot \nabla^2_bZ_t(b_t,x_t)d_t^2 + \mathbb{E}(\frac{1}{6}\cdot\nabla^3Z_t(b', x')[w,w,w]).\label{eq:ap3} \end{align}
For any $t \in [T]$, \begin{align}
\nabla_b Z_t(b_t,x_t) &= -p\sum_{i \in \script{I}_t} \frac{1}{s_i(t)^{p+1}}, \label{eq:ap4}\quad \text{and}\\
\nabla^2_x Z_t(b_t,x_t) &= p(p+1)\sum_{i \in \script{I}_t} \frac{(2cb_0 e_{t,i}-a_i )(2cb_0 e_{t,i}-a_i)^{\top}}{s_i(t)^{p+2}} -pcb_0\sum_{i \in \script{I}_t}\frac{\text{diag}(a_i^{(2)})}{s_i(t)^{p+1}}.\label{eq:ap5} \end{align} We will show the following claim.
\begin{claim} For any $t$ and any $b',x'$ as defined above, \[\mathbb{E}(\frac{1}{6}\cdot \nabla^3Z_t(b', x')[w,w,w]) + \frac{\delta^4}{2}\cdot \nabla^2_bZ_t(b_t,x_t)d_t^2 \leq \frac{1}{Tnb_0^p}.\] \end{claim} Combining this claim with~\eqref{eq:ap3},~\eqref{eq:ap4}, and~\eqref{eq:ap5}, we get \begin{align*}
\mathbb{E}(Z_t(b_{t+1}, x_{t+1})) - Z_t(b_t, x_t) &\leq
-p \delta^2 \sum_{i\in \script{I}_t} \frac{d_t + cb_0\dt{{a}_i^{(2)}}{v_t^{(2)}}}{s_i(t)^{p+1}} \\&+ \frac{p(p+1)\delta^2}{2}\sum_{i\in \script{I}_t} \frac{ \dt{2cb_0e_{t,i} - a_i}{v_t}^2 }{s_i(t)^{p+2}}
+ \frac{1}{Tnb_0^p}\\
&= f(t) + \frac{1}{Tnb_0^p}. \end{align*} Substituting this bound in~\eqref{eq:apf} proves the lemma.
{\bf Proof of Claim 1.}
As $\Phi(t) \leq 2^7 m^2 \cdot \left(2/b_0\right)^p$, for any $i\in \script{I}_t$, \begin{equation}
s_i(b_t, x_t) = s_i(t) \geq b_0 (2^{p+7}m^2)^{-1/p}. \label{eq:ap1} \end{equation}
By~\eqref{eq:ap1}, \begin{equation}
d_t = O(pn \cdot \max_{i \in \script{J}_t} s_i(t)^{-1}) = O\left(pn \cdot (2^{p+7} m^2)^{1/p} b_0^{-1}\right).\label{eq:ap2} \end{equation} By~\eqref{eq:ap1} and~\eqref{eq:ap2}, and as the second derivative of $Z_t$ with respect to $b_t$ is \[
\nabla^2_bZ_t(b_t, x_t) = p(p+1)\sum_{i\in\script{I}_t} s_i(t)^{-p-2},\] we obtain \[\delta^4\nabla^2_bZ_t(b_t, x_t)d_t^2 = O(2^p n^2m^{3+\frac{3}{p}}p^4\delta^4 (m/n)^{\frac{2}{p}}b_0^{-p-4}).\] For each of the choices $p = 2\ceil{\log(2m)}$ or $p = 2\ceil{\log(2m/n)}$ or $p=8$, since $\delta^2 = 1/Cn^2m^6p^4$ and $T=(n-2)/\delta^2$, we have \begin{equation}
\delta^4\nabla^2_b Z_t(t)d_t^2 \leq \frac{\delta^2}{2n(n-2)b_0^p} \leq \frac{1}{2Tnb_0^p}. \label{eq:b1} \end{equation} $\mathbb{E}(\nabla^3 Z_t(b', x'))$ in direction $w$ is given by \begin{align} &\mathbb{E}(\nabla^3 Z_t(b', x')[w,w,w]) = -p(p+1)(p+2)\sum_{i\in\script{I}_t} \frac{\delta^6 d_t^3}{s_i(b', x')^{p+3}} \notag \\ &- 3p(p+1)(p+2)\delta^4\sum_{i\in\script{I}_t} \frac{d_t\left(2cb_0 \dt{(a_i^2 x')}{v_t} - \dt{{a}_i}{v_t}\right)^2}{s_i(b', x')^{p+3}} +3p(p+1)cb_0\delta^4\sum_{i\in\script{I}_t} \frac{d_t\dt{{a}_i^{(2)}}{v_t^{(2)}} }{s_i(b', x')^{p+2}}\notag\\ &\leq 3p(p+1)cb_0\delta^4\sum_{i\in\script{I}_t} \frac{d_t}{s_i(b', x')^{p+2}},\label{eq:a3} \end{align} where we use that $d_t,s_i \geq 0$.
To bound the difference $s_i(b', x') - s_i(b_t, x_t)$, first consider the difference between $b'$ and $b_t$, \begin{align}
\vert b' - b_t \vert &\leq \delta^2 d_t \leq \frac{b_0}{16(2^7m^2)^{1/p}},\label{eq:a1} \end{align} and the difference between $\dt{a_i}{x'}+cb_0\cdot \sum_{j=1}^n a_i(j)^2(1-x'(j)^2)$ and $\dt{a_i}{x_t}+cb_0\cdot \sum_{j=1}^n a_i(j)^2(1-x_t(j)^2)$,
\begin{align}
\vert \dt{a_i}{x'} &+cb_0\sum_{j=1}^n a_i(j)^2(1-x'(j)^2) - \dt{a_i}{x_t} -cb_0\sum_{j=1}^n a_i(j)^2(1-x_t(j)^2)\vert \notag\\&\leq \delta \vert \dt{a_i}{v_t}\vert + cb_0 \sum_{j} a_i(j)^2\vert x_t(j)^2 - (x_t(j)+\delta v_t(j))^2\vert \notag\\
&\leq \delta(1+4cb_0\sqrt{n})\leq \frac{b_0}{16(2^7m^2)^{1/p}} \label{eq:a2}. \end{align} By~\eqref{eq:a1} and~\eqref{eq:a2}, \begin{align}
s_i(b', x') &= s_i(b_t, x_t) + (b' - b_t) - \left(\dt{a_i}{x'} - \dt{a_i}{x_t}\right) - cb_0\left(\sum_{j=1}^n a_i(j)^2(1-x'(j)^2) -\sum_{j=1}^n a_i(j)^2(1-x_t(j)^2)\right)\notag \\
&\geq s_i(b_t, x_t)-\frac{b_0}{16(2^7m^2)^{1/p}} -\frac{b_0}{16(2^7m^2)^{1/p}}\geq \frac{3b_0}{8(2^7m^2)^{1/p}}\label{eq:a4} \end{align} By~\eqref{eq:a3} and~\eqref{eq:a4}, $\mathbb{E}(\nabla^3 Z_t(b', x')[w,w,w]) = O(nm^{3+\frac{3}{p}}p^3\delta^4 (8/3)^p b_0^{-p-1})$.
Again, since $p = 2\ceil{\log(2m)}$ or $p = 2\ceil{\log(2m/n)}$ or $p=8$, for $\delta^2 = 1/Cn^2m^6p^4$, \begin{equation}
\mathbb{E}(\nabla^3 Z_t(b', x')[w,w,w])\leq \frac{\delta^2}{2n(n-2)b_0^p} = \frac{1}{2Tnb_0^p}. \label{eq:b0} \end{equation} Now the claim follows from the bounds~\eqref{eq:b1} and~\eqref{eq:b0}. \end{proof} \end{appendix}
\end{document} |
\begin{document}
\begin{abstract} In the present paper the authors construct normal numbers in base $q$ by concatenating $q$-adic expansions of prime powers $\left\lfloor\alpha p^\theta\right\rfloor$ with $\alpha>0$ and $\theta>1$. \end{abstract}
\title{Construction of normal numbers via generalized prime power sequences}
\section{Introduction}
Let $q\geq 2$ be a fixed integer and $\sigma=0.a_1a_2\dots$ be the $q$-ary expansion of a real number $\sigma$ with $0<\sigma<1$. We write $d_1\cdots d_\ell\in\{0,1,\dots,q-1\}^\ell$ for a block of $\ell$ digits in the $q$-ary expansion. By $\mathcal{N}(\sigma;d_1\cdots d_\ell;N)$ we denote the number of occurrences of the block $d_1\cdots d_\ell$ in the first $N$ digits of the $q$-ary expansion of $\sigma$. We call $\sigma$ normal to the base $q$ if for every fixed $\ell\geq 1$ \begin{align*} \mathcal{R}_N(\sigma)=\mathcal{R}_{N,\ell}(\sigma)= \sup_{d_1\cdots d_\ell}\left\vert\frac{1}{N}\mathcal{N}(\sigma;d_1\cdots d_\ell;N)
-\frac{1}{q^\ell}\right\vert=o(1) \end{align*} as $N\rightarrow\infty$, where the supremum is taken over all blocks $d_1\cdots d_\ell\in\{0,1,\dots,q-1\}^\ell$.
A slightly different, but equivalent, definition of normal numbers is due to Borel \cite{Borel1909:les_probabilites_denombrables}, who also showed that almost all numbers (with respect to the Lebesgue measure) are normal to any base. However, despite their omnipresence among the reals, all numbers currently known to be normal are established by ad hoc constructions. In particular, we do not know whether specific numbers such as $\pi$, $e$, $\log 2$ and $\sqrt 2$ are normal.
In this paper we consider the construction of normal numbers in base $q$ as concatenation of $q$-ary integer parts of certain functions. A first result was achieved by Champernowne~\cite{Champernowne1933:construction_decimals_normal}, who showed that \begin{align*} 0.1\,2\,3\,4\,5\,6\,7\,8\,9\,10\,11\,12\,13\,14\,15\,16\,17\,18\,19\,20\dots \end{align*} is normal in base $10$. This construction can be easily generalised to any integer base $q$. Copeland and Erd{\"o}s \cite{copeland_erdoes1946:note_on_normal} proved that \begin{align*} 0.2\,3\,5\,7\,11\,13\,17\,19\,23\,29\,31\,37\,41\,43\,47\,53\,59\,61\,67\dots \end{align*} is normal in base $10$.
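The normality of Champernowne's number can be illustrated empirically. The following sketch (ours, not from the paper) generates the first $N$ digits of the base-$10$ Champernowne constant and evaluates the block-frequency discrepancy $\mathcal{R}_{N,\ell}$ from the definition above:

```python
from itertools import count

def champernowne_digits(N):
    """First N digits of 0.123456789101112... as a string."""
    digits = []
    for n in count(1):
        digits.extend(str(n))
        if len(digits) >= N:
            return "".join(digits[:N])

def discrepancy(sigma_digits, ell):
    """sup over blocks d_1...d_ell of |N(sigma; block; N)/N - 10^{-ell}|."""
    N = len(sigma_digits)
    counts = {}
    for i in range(N - ell + 1):
        block = sigma_digits[i:i + ell]
        counts[block] = counts.get(block, 0) + 1
    return max(
        abs(counts.get(str(m).zfill(ell), 0) / N - 10.0 ** (-ell))
        for m in range(10 ** ell)
    )
```

For moderate $N$ the discrepancy is small but visibly nonzero, consistent with the $\mathcal{O}(1/\log N)$ rate mentioned below.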
This construction principle has been generalized in several directions. In particular, Dumont and Thomas \cite{Dumont_Thomas1994:modifications_de_nombres} used transducers in order to rewrite the blocks of the expansion of a given normal number to produce another one. Such constructions using automata yield $q$-automatic numbers, i.e., real numbers whose $q$-adic representation is a $q$-automatic sequence (cf. Allouche and Shallit \cite{allouche_shallit2003:automatic_sequences}). By these means one can show that, for instance, the number \[ \sum_{n\geq0}3^{-2^n}2^{-3^{2^n}} \] is normal in base 2.
In the present paper we want to use another approach to generalize Champernowne's construction of normal numbers. In particular, let $f$ be any function and let $\left\lfloor f(n)\right\rfloor_q$ denote the base $q$ expansion of the integer part of $f(n)$. Then define \begin{equation}\label{normal} \begin{split} \sigma_q&=\sigma_q(f)=
0.\left\lfloor f(1)\right\rfloor_q\left\lfloor f(2)\right\rfloor_q\left\lfloor f(3)\right\rfloor_q \left\lfloor f(4)\right\rfloor_q \left\lfloor f(5)\right\rfloor_q \left\lfloor f(6)\right\rfloor_q \dots, \end{split} \end{equation} where the arguments run through all positive integers. Champernowne's example corresponds to the choice $f(x)=x$ in \eqref{normal}. Davenport and Erd{\"o}s \cite{davenport_erdoes1952:note_on_normal} considered the case where $f(x)$ is an integer-valued polynomial and showed that in this case the number $\sigma_q(f)$ is normal. This construction was subsequently extended to polynomials over the rationals and over the reals by Schiffer \cite{schiffer1986:discrepancy_normal_numbers} and Nakai and Shiokawa~\cite{Nakai_Shiokawa1992:discrepancy_estimates_class}, who were both able to show that $\mathcal{R}_N(\sigma_q(f))=\mathcal{O}(1/\log N)$; Schiffer \cite{schiffer1986:discrepancy_normal_numbers} proved that this estimate is best possible. Furthermore, Madritsch et al. \cite{Madritsch_Thuswaldner_Tichy2008:normality_numbers_generated} gave a construction for $f$ an entire function of bounded logarithmic order.
Nakai and Shiokawa \cite{Nakai_Shiokawa1990:class_normal_numbers} constructed a normal number by concatenating the integer part of a pseudo-polynomial sequence, i.e., a sequence $(\left\lfloor p(n)\right\rfloor)_{n\geq1}$ where \begin{gather}\label{mani:pseudopoly}
p(x)=\alpha_0 x^{\theta_0}+\alpha_1x^{\theta_1}+\cdots+\alpha_dx^{\theta_d} \end{gather} with $\alpha_0,\theta_0,\ldots,\alpha_d,\theta_d\in\mathbb{R}$, $\alpha_0>0$, $\theta_0>\theta_1>\cdots>\theta_d>0$ and at least one $\theta_i\not\in\mathbb{Z}$.
This method of construction by concatenating function values is in strong connection with properties of $q$-additive functions. We call a function $f$ strictly $q$-additive, if $f(0)=0$ and the function operates only on the digits of the $q$-adic representation, i.e., \[
f(n)=\sum_{h=0}^\ell f(d_h)\quad\text{ for }\quad n=\sum_{h=0}^\ell d_hq^h. \] A very simple example of a strictly $q$-additive function is the sum of digits function $s_q$, defined by \[
s_q(n)=\sum_{h=0}^\ell d_h\quad\text{ for }\quad n=\sum_{h=0}^\ell d_hq^h. \]
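As a computational sanity check (illustrative only, outside the formal development; the helper names are ours), the strict $q$-additivity of $s_q$ can be verified directly:

```python
def digits(n, q):
    """q-adic digits of n, least significant first."""
    ds = []
    while n > 0:
        n, d = divmod(n, q)
        ds.append(d)
    return ds or [0]

def s_q(n, q):
    """Sum-of-digits function: the sum of the q-adic digits of n."""
    return sum(digits(n, q))

# Strict q-additivity: s_q(0) = 0 and s_q acts digit by digit,
# since s_q(d) = d for a single digit 0 <= d < q.
assert s_q(0, 10) == 0
assert s_q(1989, 10) == 1 + 9 + 8 + 9
assert s_q(11, 2) == 3                               # 11 = 1011_2
assert all(s_q(n, 7) == sum(s_q(d, 7) for d in digits(n, 7))
           for n in range(1, 300))
```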
Refining the methods of Nakai and Shiokawa, the first author obtained the following result. \begin{thm*}[{\cite[Theorem 1.1]{madritsch2012:summatory_function_q}}] Let $q\geq2$ be an integer and $f$ be a strictly $q$-additive function. If $p$ is a pseudo-polynomial as defined in \eqref{mani:pseudopoly}, then there exists $\varepsilon>0$ such that \begin{gather}\label{mani:mainsum}
\sum_{n\leq N}f\left(\left\lfloor p(n)\right\rfloor\right)
=\mu_fN\log_q(p(N))
+NF\left(\log_q(p(N))\right)
+\mathcal{O}\left(N^{1-\varepsilon}\right), \end{gather} where \[ \mu_f=\frac1q\sum_{d=0}^{q-1}f(d) \] and $F$ is a $1$-periodic function depending only on $f$ and $p$. \end{thm*}
The aim of the present paper is to extend the above results to prime power sequences. Let $f$ be a function and set \begin{gather}\label{mani:tau} \tau_q=\tau_q(f)=0.\left\lfloor f(2)\right\rfloor_q \left\lfloor f(3)\right\rfloor_q \left\lfloor f(5)\right\rfloor_q \left\lfloor f(7)\right\rfloor_q \left\lfloor f(11)\right\rfloor_q \left\lfloor f(13)\right\rfloor_q \dots, \end{gather} where the arguments of $f$ run through the sequence of primes.
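To make the construction \eqref{mani:tau} concrete, the following snippet (illustrative; the helper names are ours, and `q_ary` assumes $q\leq10$) computes the initial digits of $\tau_q(f)$ for $f(x)=x$ and $f(x)=x^{3/2}$:

```python
import math

def primes_upto(n):
    """Sieve of Eratosthenes: all primes <= n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    return [i for i in range(n + 1) if sieve[i]]

def q_ary(m, q):
    """q-ary digit string of m >= 0 (q <= 10, most significant digit first)."""
    if m == 0:
        return "0"
    out = []
    while m:
        m, d = divmod(m, q)
        out.append(str(d))
    return "".join(reversed(out))

def tau_prefix(f, q, bound):
    """Concatenated q-ary expansions of floor(f(p)) over primes p <= bound."""
    return "".join(q_ary(math.floor(f(p)), q) for p in primes_upto(bound))

# tau_10(x) starts 0.23571113..., as in the prime Champernowne number.
assert tau_prefix(lambda x: x, 10, 13) == "23571113"
# floor(p^{3/2}) for p = 2, 3, 5, 7, 11, 13 gives 2, 5, 11, 18, 36, 46.
assert tau_prefix(lambda x: x ** 1.5, 10, 13) == "2511183646"
```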
Letting $f$ be a polynomial with rational coefficients, Nakai and Shiokawa \cite{Nakai_Shiokawa1997:normality_numbers_generated} could show that $\tau_q(f)$ is normal. Moreover, letting $f$ be an entire function of bounded logarithmic order, Madritsch et al. \cite{Madritsch_Thuswaldner_Tichy2008:normality_numbers_generated} showed that $\mathcal{R}_N(\tau_q(f))=\mathcal{O}(1/\log N)$.
At this point we want to mention the connection of normal numbers with uniform distribution. In particular, a number $x\in[0,1]$ is normal to base $q$ if and only if the sequence $\{q^nx\}_{n\geq0}$ is uniformly distributed modulo 1 (cf. Drmota and Tichy \cite{drmota_tichy1997:sequences_discrepancies_and}). Here $\{y\}$ stands for the fractional part of $y$. Let us mention Kaufman \cite{kaufman1979:distribution_surd_p} and Balog \cite{balog1985:distribution_p_heta,balog1983:fractional_part_p}, who investigated the distribution of the fractional part of $\sqrt p$ and $p^\theta$ respectively. Harman \cite{harman1983:distribution_sqrtp_modulo} gave estimates for the discrepancy of the sequence $\sqrt p$. In his papers Schoissengeier~\cite{schoissengeier1979:connection_between_zeros,
schoissengeier1978:neue_diskrepanz_fuer} connected the estimation of the discrepancy of $\alpha p^\theta$ with zero-free regions of the Riemann zeta function. This allowed Tolev \cite{tolev1991:simultaneous_distribution_fractional} to consider the multidimensional variant of this problem as well as to provide an explicit estimate for the discrepancy. This result was improved for different special cases by Zhai \cite{zhai2001:simultaneous_distribution_fractional}. Since the results above deal with the case of $\theta<1$, Baker and Kolesnik \cite{baker_kolesnik1985:distribution_p_alpha} extended these considerations to $\theta>1$ and provided an explicit upper bound for the discrepancy in this case. This result was improved by Cao and Zhai \cite{cao_zhai1999:distribution_p_alpha} for $\frac53<\theta<3$. A multidimensional extension is due to Srinivasan and Tichy \cite{srinivasan_tichy1993:uniform_distribution_prime}.
Combining the methods for proving uniform distribution mentioned above with a recent paper by Bergelson et al. \cite{bergelson_kolesnik_madritsch+2012:uniform_distribution_prime} we want to extend the construction of Nakai and Shiokawa \cite{Nakai_Shiokawa1990:class_normal_numbers} to prime numbers. Our first main result is the following theorem.
\begin{thm}\label{thm:normal} Let $\theta>1$ and $\alpha>0$. Then \[ \mathcal{R}_N(\tau_q(\alpha x^\theta))=\mathcal{O}(1/\log N). \] \end{thm}
\begin{rem} This estimate is best possible as Schiffer \cite{schiffer1986:discrepancy_normal_numbers} showed. \end{rem}
In our second main result we use the connection of this construction of normal numbers with the arithmetic mean of $q$-additive functions as described above. Known results in this area are due to Shiokawa \cite{shiokawa1974:sum_digits_prime}, who was able to show the following theorem. \begin{thm*}[{\cite[Theorem]{shiokawa1974:sum_digits_prime}}] We have \[ \sum_{p\leq x}s_q(p)=\frac{q-1}2\frac x{\log
q}+\mathcal{O}\left(x\left(\frac{\log\log x}{\log x}\right)^{\frac12}\right), \] where the sum runs over the primes and the implicit $\mathcal{O}$-constant may depend on $q$. \end{thm*}
Similar results concerning the moments of the sum of digits function over primes have been established by K\'atai \cite{katai1977:sum_digits_primes}. An extension to Beurling primes is due to Heppner \cite{heppner1976:uber_die_summe}.
Let $\pi(x)$ stand for the number of primes less than or equal to $x$. Adapting these ideas to our method we obtain the following theorem. \begin{thm}\label{thm:summatoryfun} Let $\theta>1$ and $\alpha>0$. Then \[ \sum_{p\leq N}s_q(\left\lfloor\alpha p^\theta\right\rfloor)=\frac{q-1}2\pi(N)\log_qN^\theta+\mathcal{O}(\pi(N)), \] where the sum runs over the primes and the implicit $\mathcal{O}$-constant may depend on $q$ and $\theta$. \end{thm}
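As a numerical illustration (not a proof; the parameter choices and helper names are ours), one can compare both sides of Theorem \ref{thm:summatoryfun} for $q=10$, $\theta=3/2$, $\alpha=1$ and $N=5000$; the deviation observed is absorbed by the $\mathcal{O}(\pi(N))$ error term:

```python
import math

def primes_upto(n):
    """Sieve of Eratosthenes: all primes <= n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    return [i for i in range(n + 1) if sieve[i]]

def s_q(m, q):
    """Sum of the q-adic digits of m."""
    total = 0
    while m:
        m, d = divmod(m, q)
        total += d
    return total

N, q, theta = 5000, 10, 1.5
ps = primes_upto(N)
lhs = sum(s_q(math.floor(p ** theta), q) for p in ps)
# Predicted main term: ((q-1)/2) * pi(N) * log_q(N^theta).
main = (q - 1) / 2 * len(ps) * theta * math.log(N, q)
assert 0.7 < lhs / main < 1.3    # agreement up to the O(pi(N)) error term
```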
\begin{rem} With simple modifications Theorem \ref{thm:summatoryfun} can be extended to completely $q$-additive functions replacing $s_q$. \end{rem}
The proof of the two theorems is divided into three parts. In the following section we rewrite both statements and state the central theorem, which combines them and which we prove in the rest of the paper. In Section \ref{sec:tools} we present all the tools we need in the proof of the central theorem. Finally, in Section \ref{sec:proof-prop-refm} we prove the theorem.
\section{Preliminaries}\label{sec:preliminaries} Throughout the paper, an interval denotes a set \[
I=(\alpha,\beta]=\{x:\alpha<x\leq\beta\}
\quad\text{with}\quad\beta>\alpha\geq\frac12. \] We will often subdivide an interval into smaller ones. In particular we use the observation that if $\log(\beta/\alpha)\ll\log N$, then $(\alpha,\beta]$ is the union of, say, $s$ intervals of the type $(\gamma,\gamma_1]$ with $s\ll\log N$ and $\gamma_1\leq2\gamma$. Given any complex function $F$ on $I$, we have \begin{gather}\label{bak:intervalsplit} \left\vert\sum_{x\in I}F(x)\right\vert\ll(\log N)\left\vert\sum_{\gamma<x\leq\gamma_1}F(x)\right\vert, \end{gather} for some such $(\gamma,\gamma_1]$.
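The greedy doubling behind this subdivision can be sketched as follows (illustrative code; the names are ours):

```python
import math

def dyadic_split(a, b):
    """Split (a, b] into consecutive pieces (c, d] with d <= 2c."""
    pieces, c = [], a
    while c < b:
        d = min(2 * c, b)
        pieces.append((c, d))
        c = d
    return pieces

pieces = dyadic_split(0.5, 10**6)
assert all(d <= 2 * c for c, d in pieces)                 # each piece has d <= 2c
assert pieces[0][0] == 0.5 and pieces[-1][1] == 10**6     # pieces cover (a, b]
assert len(pieces) <= 2 * math.log(10**6 / 0.5)           # s << log(b/a)
```

The triangle inequality then bounds the full sum by $s$ times the largest of the partial sums, which is the content of \eqref{bak:intervalsplit}.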
In the proof $p$ will always denote a prime. We fix the block $d_1\cdots d_\ell$ and write $\mathcal{N}(f(p))$ for the number of occurrences of this block in the $q$-ary expansion of $\lfloor f(p)\rfloor$. By $\ell(m)$ we denote the length of the $q$-ary expansion of an integer $m$.
In the first step we want to get rid of the blocks that may occur between two expansions. To this end we define an integer $N$ by \begin{gather}\label{mani:P} \sum_{p\leq N-1}\ell\left(\lfloor p^\theta\rfloor\right) <L\leq \sum_{p\leq N}\ell\left(\lfloor p^\theta\rfloor\right), \end{gather} where $\sum$ indicates that the sum runs over all primes. Thus we get that \begin{equation}\label{mani:NtoP} \begin{split} L&=\sum_{p\leq N}\ell(\left\lfloor p^\theta\right\rfloor)+\mathcal{O}(\pi(N))+\mathcal{O}(\theta \log_q(N))\\ &=\frac{\theta}{\log q}N+\mathcal{O}\left(\frac{N}{\log N}\right). \end{split}\end{equation} Here we have used the prime number theorem in the form \[
\pi(x)=\mathrm{Li}\, x+\mathcal{O}\left(\frac x{(\log x)^G}\right), \] where $G$ is an arbitrary positive constant and \[
\mathrm{Li}\,x=\int_2^x\frac{\mathrm{d}t}{\log t}. \] Let $\mathcal{N}(n;d_1\cdots d_\ell)$ be the number of occurrences of the block $d_1\cdots d_\ell$ in the expansion of $n$. Since we have fixed the block $d_1\cdots d_\ell$ we will write $\mathcal{N}(n)=\mathcal{N}(n;d_1\cdots d_\ell)$ for short. Then \eqref{mani:NtoP} implies that \begin{gather}\label{mani:Ntrunc}
\left\vert\mathcal{N}(\tau_q(x^\theta);d_1\cdots d_\ell;L)-\sum_{p\leq
N}\mathcal{N}(p^\theta)\right\vert\ll\frac L{\log L}. \end{gather}
For the next step we collect all the values that have a certain length of expansion. Let $j_0$ be a sufficiently large integer. Then for each integer $j\geq j_0$ we get that there exists an $N_j$ such that \[
q^{j-2}\leq f(N_j)<q^{j-1}\leq f(N_j+1)<q^j. \] We note that this is possible since $f$ grows asymptotically like its leading term. This implies that \[
N_j\asymp q^{\frac j\beta}. \] Furthermore for $N\geq q^{j_0}$ we set $J$ to be the greatest length of the $q$-ary expansions of $f(p)$ over the primes $p\leq N$, i.e., \begin{gather}\label{mani:JP} J:=\max_{p\leq N}\ell(\lfloor f(p)\rfloor)=\log_q(f(N))+\mathcal{O}(1)\asymp\log N. \end{gather}
In the next step we want to perform the counting by adding the leading zeroes to the expansion of $f(p)$. For $N_{j-1}<p\leq N_j$ we may write $f(p)$ in $q$-ary expansion, i.e., \begin{gather*} f(p)=b_{j-1}q^{j-1}+b_{j-2}q^{j-2}+\dots+b_{1}q+b_{0}+b_{-1}q^{-1}+\dots. \end{gather*} Then we denote by $\mathcal{N}^*(f(p))$ the number of occurrences of the block $d_1,\ldots,d_\ell$ in the string $0\cdots0b_{j-1}b_{j-2}\cdots b_1b_0$, where we filled up the expansion with zeroes such that it has length $J$. The error of doing so can be estimated by \begin{equation}\label{mani:NtoNstar}\begin{split} 0&\leq\sum_{p\leq N}\mathcal{N}^*(f(p))-\sum_{p\leq N}\mathcal{N}(f(p))\\ &\leq\sum_{j=j_0+1}^{J-1}(J-j)\left(\pi(N_{j+1})-\pi(N_{j})\right)+\mathcal{O}(1)\\ &\leq\sum_{j=j_0+2}^{J}\pi(N_{j})+\mathcal{O}(1)\ll\sum_{j=j_0+2}^{J}\frac{q^{j/\beta}}j \ll\frac N{\log N}\ll\frac L{\log L}.\\ \end{split}\end{equation}
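The first inequality in \eqref{mani:NtoNstar} (padding with leading zeroes can only create additional occurrences of the block) can be checked directly on examples; the helper names below are ours:

```python
def q_ary(m, q):
    """q-ary digit string of m >= 0 (q <= 10, most significant digit first)."""
    if m == 0:
        return "0"
    out = []
    while m:
        m, d = divmod(m, q)
        out.append(str(d))
    return "".join(reversed(out))

def count_block(s, block):
    """Number of (possibly overlapping) occurrences of block in s."""
    return sum(1 for i in range(len(s) - len(block) + 1)
               if s[i : i + len(block)] == block)

q, J, block = 10, 8, "01"
for m in (7, 103, 12013, 9100105):
    plain = count_block(q_ary(m, q), block)                 # N(f(p))
    padded = count_block(q_ary(m, q).rjust(J, "0"), block)  # N*(f(p))
    assert padded >= plain    # padding with zeroes only adds occurrences

# Example: 12013 contains "01" once, but "0012013" contains it twice.
assert count_block("0012013", "01") == 2
```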
In the following two sections we will estimate this sum of indicator functions in order to prove the following proposition. \begin{prop}\label{mani:centralprop} Let $\theta>1$ and $\alpha>0$. Then \begin{gather}\label{mani:centralprop:statement} \sum_{p\leq N}\mathcal{N}^*\left(\left\lfloor \alpha p^\theta\right\rfloor\right)=q^{-\ell}\pi(N)\log_qN^\theta+\mathcal{O}\left(\frac{N}{\log N}\right). \end{gather} \end{prop}
\begin{proof}[Proof of Theorem \ref{thm:normal}] We insert \eqref{mani:centralprop:statement} into \eqref{mani:Ntrunc} and get the desired result. \end{proof}
\begin{proof}[Proof of Theorem \ref{thm:summatoryfun}] For this proof we have to rewrite the statement. In particular, we use that the sum of digits function counts the number of $1$s, $2$s, etc.\ and assigns weights to them, i.e., \[ s_q(n)=\sum_{d=0}^{q-1}d\cdot\mathcal{N}(n;d). \] Thus \begin{align*} \sum_{p\leq N}s_q(\left\lfloor p^\theta\right\rfloor) &=\sum_{p\leq N}\sum_{d=0}^{q-1}d\cdot\mathcal{N}(\left\lfloor p^\theta\right\rfloor;d) =\sum_{p\leq N}\sum_{d=0}^{q-1}d\cdot\mathcal{N}^*(\left\lfloor p^\theta\right\rfloor;d)+\mathcal{O}\left(\frac{N}{\log N}\right)\\ &=\frac{q-1}2\pi(N)\log_q(N^\theta)+\mathcal{O}\left(\frac{N}{\log N}\right) \end{align*} and the theorem follows. \end{proof}
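The digit-counting identity $s_q(n)=\sum_{d=0}^{q-1}d\cdot\mathcal{N}(n;d)$ used above can be checked mechanically (illustrative code, names ours):

```python
def digit_list(n, q):
    """q-adic digits of n >= 1, least significant first."""
    ds = []
    while n:
        n, d = divmod(n, q)
        ds.append(d)
    return ds

for q in (2, 7, 10):
    for n in range(1, 500):
        ds = digit_list(n, q)
        s = sum(ds)                                        # s_q(n)
        weighted = sum(d * ds.count(d) for d in range(q))  # sum_d d * N(n; d)
        assert s == weighted
```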
\section{Tools}\label{sec:tools}
In this section we want to present all the tools we need in the proof of Proposition \ref{mani:centralprop}. We start with an estimate which essentially goes back to Vinogradov. This will provide us with Fourier expansions for the indicator functions used in the proof. As usual, given a real number $y$, the expression $e(y)$ will stand for $\exp\{2\pi i y\}$.
\begin{lem}[{\cite[Lemma
12]{vinogradov2004:method_trigonometrical_sums}}]\label{vin:lem12} Let $\alpha$, $\beta$, $\Delta$ be real numbers satisfying \begin{gather*} 0<\Delta<\frac12,\quad\Delta\leq\beta-\alpha\leq1-\Delta. \end{gather*} Then there exists a periodic function $\psi(x)$ with period $1$, satisfying \begin{enumerate} \item $\psi(x)=1$ in the interval $\alpha+\frac12\Delta\leq x
\leq\beta-\frac12\Delta$, \item $\psi(x)=0$ in the interval $\beta+\frac12\Delta\leq x
\leq1+\alpha-\frac12\Delta$, \item $0\leq\psi(x)\leq1$ in the remainder of the interval
$\alpha-\frac12\Delta\leq x\leq1+\alpha-\frac12\Delta$, \item $\psi(x)$ has a Fourier series expansion of the form
$$
\psi(x)=\beta-\alpha+\sum_{\substack{\nu=-\infty\\\nu\neq0}}^\infty
A(\nu) e(\nu x),
$$
where
\begin{gather}\label{mani:A}
\left\vert A(\nu)\right\vert \ll \min \left( \frac 1\nu,
\beta-\alpha,\frac{1}{\nu^2\Delta} \right).
\end{gather} \end{enumerate} \end{lem}
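A concrete function with the properties of Lemma \ref{vin:lem12} is the trapezoidal smoothing of the indicator of $[\alpha,\beta]$. The snippet below (an illustration with our own parameter choices, not the construction from Vinogradov's book) checks properties (1)--(3) and the coefficient bound \eqref{mani:A} numerically:

```python
import cmath
import math

alpha, beta, Delta = 0.2, 0.45, 0.05

def psi(x):
    """Trapezoid: 1 on [alpha+Delta/2, beta-Delta/2], 0 outside
    (alpha-Delta/2, beta+Delta/2), linear ramps in between (period 1)."""
    x = x % 1.0
    if alpha + Delta / 2 <= x <= beta - Delta / 2:
        return 1.0
    if alpha - Delta / 2 < x < alpha + Delta / 2:
        return (x - (alpha - Delta / 2)) / Delta
    if beta - Delta / 2 < x < beta + Delta / 2:
        return ((beta + Delta / 2) - x) / Delta
    return 0.0

# Properties (1)-(3) at sample points.
assert psi((alpha + beta) / 2) == 1.0    # inside the plateau
assert psi(0.9) == 0.0                   # inside the zero region
assert 0.0 <= psi(alpha) <= 1.0          # on a ramp

# Fourier coefficients A(nu) via a Riemann sum; check the claimed bound
# up to a small numerical tolerance.
M = 20000
xs = [k / M for k in range(M)]
vals = [psi(x) for x in xs]
assert abs(sum(vals) / M - (beta - alpha)) < 1e-3    # constant term beta - alpha
for nu in range(1, 21):
    A = sum(v * cmath.exp(-2j * math.pi * nu * x) for v, x in zip(vals, xs)) / M
    assert abs(A) <= min(1 / nu, beta - alpha, 1 / (nu * nu * Delta)) + 2e-3
```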
After we have transformed the sums under consideration into exponential sums we want to split the interval by the following lemma. \begin{lem}\label{lem:intervalsplit} Let $I=(a,b]$ be an interval and $F$ be a complex function defined on $I$. If $\log(b/a)\ll L$, then $I$ is the union of $\ell$ intervals of the type $(c,d]$ with $\ell\ll L$ and $d\leq 2c$. Furthermore we have \[
\left\vert\sum_{n\in I}F(n)\right\vert\ll L\left\vert\sum_{n\in(c,d]}F(n)\right\vert, \] for some such $(c,d]$. \end{lem}
\begin{proof} For $i=1,\ldots,\ell$ let $I_i$ be the $\ell$ splitting intervals. Then \begin{align*} \left\vert \sum_{n\in I}F(n)\right\vert =\left\vert\sum_{i=1}^\ell\sum_{n\in I_i}F(n)\right\vert \leq \ell\max_{1\leq i\leq \ell}\left\vert\sum_{n\in I_i}F(n)\right\vert\ll L\left\vert\sum_{n\in I_i}F(n)\right\vert \end{align*} for some $1\leq i\leq\ell$. \end{proof}
We will apply the following lemma in order to estimate the occurring exponential sums provided that the coefficients are very small. This corresponds to the case of the most significant digits in the expansion.
\begin{lem}[{\cite[Lemma 4.19]{titchmarsh1986:theory_riemann_zeta}}] \label{tit:lem4.19} Let $F(x)$ be a real function, $k$ times differentiable, and satisfying $\left\vert F^{(k)}(x)\right\vert\geq\lambda>0$ throughout the interval $[a,b]$. Then \[ \left\vert\int_a^be(F(x))\mathrm{d}x\right\vert \leq c(k)\lambda^{-1/k}. \] \end{lem}
A standard tool for estimating exponential sums over the primes is Vaughan's identity. In order to apply this identity we have to rewrite the exponential sum into a normal one having von Mangoldt's function as weights. Therefore let $\Lambda$ denote von Mangoldt's function, i.e., \[ \Lambda(n)=\begin{cases} \log p,&\text{if $n=p^k$ for some prime $p$ and an integer $k\geq1$;}\\ 0,&\text{otherwise}. \end{cases} \] In the next step
we may subdivide this weighted exponential sum into several sums of Type I and II. In particular, let $P\geq2$ and $P_1\leq 2P$, then we define Type I and Type II sums by the expressions \begin{align} &\sum_{X<x\leq X_1}a_x\sum_{\substack{Y<y\leq Y_1\\P<xy\leq
P_1}}f(xy)\label{type:1:sum}\quad(\text{Type I})\\ &\sum_{X<x\leq X_1}a_x\sum_{\substack{Y<y\leq Y_1\\P<xy\leq P_1}}(\log y)f(xy)\notag\\ &\sum_{X<x\leq X_1}a_x\sum_{\substack{Y<y\leq Y_1\\P<xy\leq P_1}}b_yf(xy)\label{type:2:sum}\quad(\text{Type II}) \end{align} with $X_1\leq 2X$, $Y_1\leq 2Y$, $\left\vert a_x\right\vert\ll P^\varepsilon$, $\left\vert b_y\right\vert\ll P^\varepsilon$ for every $\varepsilon>0$ and \[ P\ll XY\ll P, \] respectively. The following lemma provides the central tool for the subdivision of the weighted exponential sum.
\begin{lem}[{\cite[Lemma 1]{baker_kolesnik1985:distribution_p_alpha}}] \label{bakkol:vaughan} Let $f(n)$ be a complex valued function and $P\geq2$, $P_1\leq 2P$. Furthermore let $U$, $V$, and $Z$ be positive numbers satisfying \begin{gather} 2\leq U<V\leq Z\leq P,\\ U^2\leq Z,\quad 128UZ^2\leq P_1,\quad 2^{18}P_1\leq V^3. \end{gather} Then the sum \[ \sum_{P\leq n\leq P_1}\Lambda(n)f(n) \] may be decomposed into $\ll(\log P)^6$ sums, each of which is either a Type I sum with $Y\geq Z$ or a Type II sum with $U\leq Y\leq V$. \end{lem}
The next tool is an estimation for the exponential sum. After subdividing the weighted exponential sum we use Vinogradov's method in order to estimate the occurring unweighted exponential sums.
\begin{lem}[{\cite[Lemma 6]{Nakai_Shiokawa1990:class_normal_numbers}}] \label{nakshi:lem6} Let $k$, $P$ and $N$ be integers such that $k\geq2$, $2\leq N\leq P$. Let $g(x)$ be real and have continuous derivatives up to the $(k+1)$th order in $[P+1,P+N]$; let $0<\lambda<1/(2c_0(k+1))$ and \[
\lambda\leq\frac{g^{(k+1)}(x)}{(k+1)!}\leq c_0\lambda
\quad(P+1\leq x\leq P+N), \] or the same for $-g^{(k+1)}(x)$, and let \[ N^{-k-1+\rho}\leq\lambda\leq N^{-1} \] with $0<\rho\leq k$. Then \[
\sum_{n=P+1}^{P+N}e(g(n))\ll N^{1-\eta}, \] where \begin{gather}\label{mani:eta} \eta=\frac{\rho}{16(k+1)L},\quad L=1+\left\lfloor\frac14k(k+1)+kR\right\rfloor,\quad R=1+\left\lfloor\frac{\log\left(\frac1\rho k(k+1)^2\right)}{-\log\left(1-\frac1k\right)}\right\rfloor. \end{gather} \end{lem}
\section{Proof of Proposition \ref{mani:centralprop}}\label{sec:proof-prop-refm}
We will apply the estimates of the preceding sections in order to estimate the exponential sums occurring in the proof. We will proceed in four steps. \begin{enumerate} \item In the first step we use a method of Vinogradov
\cite{vinogradov2004:method_trigonometrical_sums} in order to rewrite the
counting function into the estimation of exponential sums. Then we will
distinguish two cases in the following two steps. \item First we assume that we are interested in a block which occurs among
the most significant digits. This corresponds to a very small coefficient in
the exponential sum and we may use the method of van der Corput
(cf. \cite{graham_kolesnik1991:van_der_corputs}). \item For the blocks occurring among the least significant digits we apply
Vaughan's identity together with ideas from a recent paper by Bergelson
et al. \cite{bergelson_kolesnik_madritsch+2012:uniform_distribution_prime}. \item Finally we combine the estimates of the last two steps in order to end
the proof. \end{enumerate}
In this proof, the letter $p$ will always denote a prime and we set $f(x):=\alpha x^\theta$ for short. Furthermore we set \begin{gather}\label{mani:delta} \delta:=\min\left(\frac14,\theta-1\right). \end{gather}
\subsection{Rewriting the sum}\label{sec:rewriting-sum} Throughout the rest of the paper we fix a block $d_1\cdots d_\ell$. In order to count the occurrences of this block in the $q$-ary expansion of $\lfloor f(p) \rfloor$ ($2\le p \le N$) we define the indicator function \begin{align}\label{mani:I} \mathcal{I}(t)=\begin{cases}
1, &\text{if }\sum_{i=1}^\ell d_iq^{-i}\leq t-\lfloor t\rfloor
<\sum_{i=1}^\ell d_iq^{-i}+q^{-\ell};\\
0, &\text{otherwise;}
\end{cases} \end{align} which is a $1$-periodic function. Indeed, we have \[ \mathcal{I}(q^{-j}f(n)) = 1 \Longleftrightarrow d_1\cdots d_\ell = b_{j-1}\cdots b_{j-\ell}. \] Thus we can write our block counting function as follows: \begin{gather}\label{mani:NthetatoNstar} \mathcal{N}^*(f(p))=\sum_{j=\ell}^J\mathcal{I}\left(q^{-j}f(p)\right). \end{gather}
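The equivalence $\mathcal{I}(q^{-j}f(n))=1\Leftrightarrow d_1\cdots d_\ell=b_{j-1}\cdots b_{j-\ell}$ can be checked on an example (the names below are ours):

```python
import math

q, block = 10, (4, 2)                 # looking for the block "42"
ell = len(block)
lo = sum(d * q ** (-i) for i, d in enumerate(block, start=1))

def indicator(t):
    """The 1-periodic indicator I(t) from (mani:I)."""
    frac = t - math.floor(t)
    return 1 if lo <= frac < lo + q ** (-ell) else 0

m = 1342137                           # q-ary digits b_6 ... b_0 = 1 3 4 2 1 3 7
s = str(m)
for j in range(ell, len(s) + 1):
    # I(q^{-j} m) = 1  iff  the digits b_{j-1} b_{j-2} equal the block
    hit = indicator(m * q ** (-j))
    assert hit == (s[len(s) - j : len(s) - j + ell] == "42")
assert indicator(m * q ** (-5)) == 1  # "42" sits at positions b_4 b_3
```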
Following Nakai and Shiokawa~\cite{Nakai_Shiokawa1990:class_normal_numbers} we want to approximate $\mathcal{I}$ from above and from below by two $1$-periodic functions having small Fourier coefficients. In particular, we set $H=N^{\delta/3}$ and \begin{equation}\label{mani:abd} \begin{split} \alpha_-=\sum_{\lambda=1}^\ell d_\lambda q^{-\lambda}+(2H)^{-1},\quad \beta_-=\sum_{\lambda=1}^\ell d_\lambda q^{-\lambda}+q^{-\ell}-(2H)^{-1},\quad \Delta_-=H^{-1},\\ \alpha_+=\sum_{\lambda=1}^\ell d_\lambda q^{-\lambda}-(2H)^{-1},\quad \beta_+=\sum_{\lambda=1}^\ell d_\lambda q^{-\lambda}+q^{-\ell}+(2H)^{-1},\quad \Delta_+=H^{-1}. \end{split} \end{equation} We apply Lemma \ref{vin:lem12} with $(\alpha,\beta,\Delta)=(\alpha_-,\beta_-,\Delta_-)$ and $(\alpha,\beta,\Delta)=(\alpha_+,\beta_+,\Delta_+)$, respectively, in order to get two functions $\mathcal{I}_-$ and $\mathcal{I}_+$. By the choices of $(\alpha_\pm,\beta_\pm,\Delta_\pm)$ it is immediate that \begin{equation}\label{uglI} \mathcal{I}_-(t)\leq\mathcal{I}(t)\leq\mathcal{I}_+(t) \qquad (t\in\mathbb{R}). \end{equation} Lemma \ref{vin:lem12} also implies that these two functions have Fourier expansions \begin{align}\label{mani:Ifourier} \mathcal{I}_\pm(t)=q^{-\ell}\pm H^{-1}+
\sum_{\substack{\nu=-\infty\\\nu\neq0}}^\infty A_\pm(\nu)e(\nu t) \end{align} satisfying \begin{gather*} \left\vert A_\pm(\nu)\right\vert \ll\min(\left\vert\nu\right\vert^{-1},H\left\vert\nu\right\vert^{-2}). \end{gather*} In a next step we want to replace $\mathcal{I}$ by $\mathcal{I}_+$ in (\ref{mani:NthetatoNstar}). For this purpose we observe, using \eqref{uglI}, that \begin{gather*} \left\vert\mathcal{I}(t)-\mathcal{I}_+(t)\right\vert \le \left\vert\mathcal{I}_+(t)-\mathcal{I}_-(t)\right\vert
\ll H^{-1} + \sum_{\substack{\nu=-\infty\\\nu\neq0}}^\infty
A_\pm(\nu)e(\nu t). \end{gather*} Thus subtracting yields the main part, and summing over $p\leq N$ gives \begin{gather}\label{mani:0.5} \left\vert\sum_{p\leq N}\mathcal{I}(q^{-j}f(p))-\frac{\pi(N)}{q^{\ell}}\right\vert \ll\pi(N)H^{-1}+\sum_{\substack{\nu=-\infty\\\nu\neq0}}^\infty A_{\pm}(\nu)\sum_{p\leq N}e\left(\frac{\nu}{q^j}f(p)\right). \end{gather}
Now we consider the coefficients $A_\pm(\nu)$. Noting \eqref{mani:A} one observes that \begin{gather*} A_\pm(\nu)\ll\begin{cases}
\nu^{-1}, &\text{for }\left\vert\nu\right\vert\leq H;\\
H\nu^{-2}, &\text{for }\left\vert\nu\right\vert>H.
\end{cases} \end{gather*} Estimating all summands with $\left\vert\nu\right\vert>H$ trivially we get \begin{gather*} \sum_{\substack{\nu=-\infty\\\nu\neq0}}^\infty
A_\pm(\nu)e\left(\frac{\nu}{q^j}f(p)\right) \ll\sum_{\nu=1}^{H}\nu^{-1}e\left(\frac{\nu}{q^j}f(p)\right)+H^{-1}. \end{gather*} Using this in \eqref{mani:0.5} yields \begin{gather}\label{mani:1.5} \left\vert\sum_{p\leq N}\mathcal{I}(q^{-j}f(p))-\frac{\pi(N)}{q^{\ell}}\right\vert \ll\pi(N)H^{-1}+\sum_{\nu=1}^{H} \nu^{-1}\sum_{p\leq N}e\left(\frac{\nu}{q^j}f(p)\right). \end{gather}
Finally we sum over all values of $j$ and get \begin{equation}\label{mani:2} \begin{split} \left\vert\sum_{p\leq N}\mathcal{N}^*(f(p))-\frac{\pi(N)}{q^{\ell}}J\right\vert \ll\pi(N)H^{-1}J+\sum_{j=\ell}^J\sum_{\nu=1}^{H} \nu^{-1}S(N,j,\nu), \end{split} \end{equation} where we have set \[ S(N,j,\nu):=\sum_{p\leq N}e\left(\frac{\nu}{q^j}f(p)\right). \]
The crucial part is the estimation of the exponential sums over the primes. In the following we will distinguish two cases according to the size of $j$, which corresponds to the position in the expansion of $f(p)$. In particular, let $\rho>0$ be arbitrarily small; then we distinguish between the least significant and the most significant digits, i.e., between the ranges \[ 1\leq q^j\leq N^{\theta-1+\rho} \quad\text{and}\quad N^{\theta-1+\rho}<q^j\leq N^\theta. \]
\subsection{Most significant digits} In this subsection we assume that \[ N^{\theta-1+\rho}<q^j\leq N^\theta, \] which means that we deal with the most significant digits in the expansion. We start by rewriting the sum into an integral. \begin{align*} S(N,j,\nu)=\sum_{p\leq N}e\left(\frac{\nu}{q^j}f(p)\right) =\int_{2}^{N}e\left(\frac{\nu}{q^j}f(t)\right)\mathrm{d}\pi(t)+\mathcal{O}(1). \end{align*} In the second step we then apply the prime number theorem. Thus \begin{align*} S(N,j,\nu) =\int_{N(\log N)^{-G}}^{N} e\left(\frac{\nu}{q^j}f(t)\right) \frac{\mathrm{d}t}{\log t} +\mathcal{O}\left(\frac{N}{(\log N)^G}\right). \end{align*} Now we use the second mean-value theorem together with Lemma \ref{tit:lem4.19} and $k=\left\lfloor\theta\right\rfloor$ to get \begin{equation}\label{mani:res:most} \begin{split} S(N,j,\nu)&\ll\frac1{\log N}\sup_{\xi}
\left\vert\int_{N(\log N)^{-G}}^{\xi}e\left(\frac{\nu}{q^j}f(t)\right)\mathrm{d}t\right\vert
+\mathcal{O}\left(\frac{N}{(\log N)^G}\right)\\ &\ll\frac1{\log N}\left(\frac{\left\vert \nu\right\vert}{q^j}\right)^{-\frac1k}
+\mathcal{O}\left(\frac{N}{(\log N)^G}\right). \end{split} \end{equation}
\subsection{Least significant digits} For the digits in this range we want to apply Vaughan's identity in order to transfer the sum over the primes into two special types of sums involving products of integers. Before we may apply Vaughan's identity we have to weight the exponential sum under consideration by the von Mangoldt function. By an application of Lemma \ref{lem:intervalsplit}, it suffices to consider an interval of the form $(P,2P]$. Thus \[ \left\vert S(N,j,\nu)\right\vert
\ll(\log N)\left\vert\sum_{P<p\leq2P}e\left(f(p)\right)\right\vert. \] Using partial summation we get \[ \left\vert S(N,j,\nu)\right\vert \ll(\log N) \left\vert\sum_{P<p\leq 2P}e\left(f(p)\right)\right\vert \ll (\log N)P^{\frac12}+(\log N)\left\vert\sum_{P<n\leq P_1}\Lambda(n)e\left(f(n)\right)\right\vert \] for some $P_1$ with $P<P_1\leq 2P$. From now on we may assume that $P>N^{1-\eta}$.
Then an application of Lemma \ref{bakkol:vaughan} with $U=P^{\frac\delta3}$, $V=P^{\frac13}$, $Z=P^{\frac12-\frac\delta3}$ yields \begin{align}\label{mani:afterVaughan} S(N,j,\nu)\ll P^{\frac12}+\left(\log P\right)^7\left\vert S_1\right\vert, \end{align} where $S_1$ is either a Type I sum as in \eqref{type:1:sum} with $Y\geq P^{\frac12-\frac\delta3}$ or a Type II sum as in \eqref{type:2:sum} with \[ P^{\frac\delta3}\leq Y\leq P^{\frac13}. \]
Suppose first that $S_1$ is a Type II sum, i.e., \[ S_1=\sum_{X<x\leq X_1}a_x\sum_{\substack{Y<y\leq Y_1\\P<xy\leq
P_1}}b_ye\left(f(xy)\right). \] Then an application of the Cauchy-Schwarz inequality yields \begin{align*} \left\vert S_1\right\vert^2 &\leq\sum_{X<x\leq X_1}\left\vert a_x\right\vert^2\sum_{X<x\leq X_1}
\left\vert\sum_{\substack{Y<y\leq Y_1\\P<xy\leq P_1}}b_ye\left(\frac{\nu}{q^j}f(xy)\right)\right\vert^2\\ &\ll XP^{2\varepsilon}\sum_{Y<y\leq Y_1}\sum_{Y<z\leq Y_1}b_y\overline{b_z}
\sum_{\substack{X<x\leq X_1\\P<xy,xz\leq P_1}}e\left(\frac{\nu}{q^j}\left(f(xy)-f(xz)\right)\right), \end{align*} where we have used that $\left\vert a_x\right\vert\ll P^\varepsilon$. Collecting all the terms where $y=z$ and using $\left\vert b_y\right\vert\ll P^\varepsilon$ yields \begin{gather}\label{mani:3.5} \left\vert S_1\right\vert^2\ll XP^{4\varepsilon}\left(XY+\sum_{Y<y<z\leq Y_1}
\left\vert\sum_{\substack{X<x\leq X_1\\P<xy,xz\leq P_1}}e\left(\frac{\nu}{q^j}\left(f(xy)-f(xz)\right)\right)\right\vert\right). \end{gather}
There must be a pair $(y,z)$ with $Y<y<z<Y_1$ such that \begin{gather}\label{mani:4.5} \left\vert S_1\right\vert^2\ll P^{2+4\varepsilon}Y^{-1}+P^{4\varepsilon}XY^2
\left\vert\sum_{X_2<x\leq X_3}e(g(x))\right\vert, \end{gather} where $X_2=\max(X,Py^{-1})$, $X_3=\min(X_1,P_1z^{-1})$ and \[ g(x) =\frac{\nu}{q^j}\left(f(xy)-f(xz)\right) =\frac{\nu}{q^j}\alpha(y^\theta-z^\theta)x^\theta. \]
We will apply Lemma \ref{nakshi:lem6} to estimate the exponential sum. Setting \[k:=\left\lceil 2\theta\right\rceil+1 \] we get that $g^{(k+1)}(x)\sim \nu q^{-j}\alpha\theta(\theta-1)\cdots(\theta-k)x^{\theta-(k+1)}$. Thus \[ \lambda\leq\frac{g^{(k+1)}(x)}{(k+1)!}\leq c_0\lambda\quad(X_2<x\leq X_3) \] or similarly for $-g^{(k+1)}(x)$, where \[ \lambda=c\nu q^{-j}\alpha(y^{\theta}-z^{\theta})X^{\theta-(k+1)} \] and $c$ depends only on $\theta$ and $\alpha$.
Since $\theta>1$ we get \begin{align*} \lambda&\geq P^{\delta-\theta}Y^{\theta-1}X^{\theta-(k+1)}\geq X^{-k-\frac12}. \end{align*} Similarly we obtain \[ \lambda \leq P^{2\delta}Y^{\theta}X^{\theta-(k+1)} \ll P^{\theta+2\delta}X^{-(k+1)} \leq X^{-1}. \] Thus we get that $X^{-k-\frac12}\leq\lambda\leq X^{-1}$. Therefore an application of Lemma \ref{nakshi:lem6} yields \[ \sum_{X_2<x\leq X_3}e(g(x))\ll X^{1-\eta}, \] where $\eta$ depends only on $k$ and therefore on $\theta$. Inserting this in \eqref{mani:4.5} we get \begin{gather}\label{mani:res:typeII} \left\vert S_1\right\vert^2\ll P^{2+4\varepsilon}Y^{-1}+P^{4\varepsilon}XY^2X^{1-\eta} \ll P^{2+4\varepsilon}\left(P^{-\delta/3}+P^{-2\eta/3}\right). \end{gather}
The case of $S_1$ being a Type I sum is similar but simpler. We have \begin{align*} \left\vert S_1\right\vert \leq\sum_{X<x\leq X_1}\left\vert a_x\right\vert \left\vert\sum_{\substack{Y<y\leq Y_1\\P<xy\leq P_1}}(\log y)e\left(f(xy)\right)\right\vert \ll XP^{\varepsilon}\left\vert\sum_{\substack{Y<y\leq Y_1\\P<xy\leq P_1}}(\log y)e\left(f(xy)\right)\right\vert \end{align*} for some $x$ with $X<x\leq X_1$. By a partial summation we get \begin{gather}\label{mani:6} \left\vert S_1\right\vert\ll XP^\varepsilon\log P\left\vert\sum_{\substack{Y_2<y\leq Y_3\\P<xy\leq P_1}}e\left(f(xy)\right)\right\vert \end{gather} for some $Y\leq Y_2<Y_3\leq Y_1$. Now we set \[ g(y) =f(xy) =\frac{\nu}{q^{j}}\alpha x^\theta y^\theta. \]
Again the idea is to apply Lemma \ref{nakshi:lem6} for the estimation of the exponential sum. We set \[ k:=\left\lceil 3\theta\right\rceil +2 \] and get for the $(k+1)$-st derivative \[ \lambda\leq\frac{g^{(k+1)}(y)}{(k+1)!}\leq c_0\lambda\quad(Y_2<y\leq Y_3) \] or similarly for $-g^{(k+1)}(y)$, where \[ \lambda=c\frac{\nu}{q^j}\alpha x^{\theta}Y^{\theta-(k+1)} \] and $c$ again depends only on $\alpha$ and $\theta$.
We may assume that $N$ and hence $P$ is sufficiently large; then we get that \[ Y^{-(k+1)}\ll P^{-\theta}X^{\theta}Y^{\theta-(k+1)}\leq \lambda\leq P^{2\delta}X^{\theta}Y^{\theta-(k+1)}\ll P^{\theta+2\delta}Y^{-(k+1)}\leq Y^{-1}. \] Now an application of Lemma \ref{nakshi:lem6} yields \[ \sum_{Y_2<y\leq Y_3}e(g(y))\ll Y^{1-\eta}, \] where $\eta$ depends only on $k$ and thus on $\theta$. Inserting this in \eqref{mani:6} we get \begin{gather}\label{mani:res:typeI}
\left\vert S_1\right\vert \ll(\log P)XP^\varepsilon Y^{1-\eta}\ll(\log P)P^{1+\varepsilon-\eta(1/2-\delta/3)}. \end{gather}
Combining \eqref{mani:res:typeI} and \eqref{mani:res:typeII} in \eqref{mani:afterVaughan} yields \begin{equation}\label{mani:res:least} \begin{split} \left\vert S(N,j,\nu)\right\vert &\ll P^{\frac12}+\left(\log
P\right)^7\left(P^{1+2\varepsilon}\left(P^{-\delta/6}+P^{-\eta/3}\right)+(\log P)P^{1+\varepsilon-\eta(1/2-\delta/3)}\right)\\ &\ll P^{\frac12}+\left(\log P\right)^8P^{1-\sigma}. \end{split} \end{equation}
\subsection{Conclusion} On the one hand summing \eqref{mani:res:most} over $j$ and $\nu$ yields \begin{align*} &\sum_{1\leq\left\vert\nu\right\vert\leq H}\left\vert\nu\right\vert^{-1}\sum_{N^{\theta-\delta}<q^{j}\leq N^{\theta}} S(N,j,\nu)\\ &\quad\ll\sum_{1\leq\left\vert\nu\right\vert\leq H}\left\vert\nu\right\vert^{-1}\sum_{N^{\theta-\delta}<q^{j}\leq N^{\theta}} \left(\frac1{\log N}\left(\frac{\left\vert \nu\right\vert}{q^j}\right)^{-\frac1k} +\mathcal{O}\left(\frac{N}{(\log N)^G}\right)\right)\\ &\quad\ll\frac1{\log N}\sum_{1\leq\left\vert\nu\right\vert\leq H}\left\vert\nu\right\vert^{-1-\frac1k}\sum_{N^{\theta-\delta}<q^{j}\leq N^{\theta}}q^{-\frac jk} +\mathcal{O}\left(\frac{N}{(\log N)^{G-2}}\right)\\ &\quad\ll\frac{N}{\log N}. \end{align*}
On the other hand in \eqref{mani:res:least} we sum over $j$ and $\nu$ and get \[ \sum_{1\leq\left\vert\nu\right\vert\leq H}\left\vert\nu\right\vert^{-1} \sum_{q^\ell\leq q^j\leq N^{\theta-\delta}}S(N,j,\nu) \ll(\log N)^2N^{\frac12}+(\log N)^{10}N^{1-\sigma'}. \]
Combining these estimates in \eqref{mani:2} finally yields \begin{align*} \left\vert\sum_{p\leq
N}\mathcal{N}^*(f(p))-\frac{\pi(N)}{q^{\ell}}J\right\vert\ll\frac{N}{\log N} \end{align*} and the proposition is proved.
\section*{Acknowledgment} The authors thank the anonymous referee, who read the manuscript very carefully; his/her suggestions considerably improved the presentation of the results.
\def$'${$'$}
\end{document} |
\begin{document}
\title{$\IhJ$-convergence}
\begin{abstract} In this paper we introduce $\IhJ$-convergence which is a common generalization of the $\ensuremath{\mathcal{I}}\xspace^*$-convergence of sequences, double sequences and nets. We show that many results that were shown before for these special cases are true for the $\IhJ$-convergence, too.
\noindent Keywords: ideal convergence, double sequence, filter
\noindent Mathematical Reviews subject classification: Primary 54A20, 40A05; Secondary 40B05.
\end{abstract}
\section{Historical background and introduction}
The main topic of this paper is the convergence of a function along an ideal. Since the dual notion, convergence along a filter, has been studied as well, let us start by saying a few words about the history of this concept.
It was defined for the first time probably by Henri Cartan \cite{CARTANULTRA} (see also \cite[p.71, Definition 1]{BOURBAKIGTENG}). Although the notion of a limit along a filter was defined here in the maximal possible generality -- the considered filter could be a filter on an arbitrary set and the limit was defined for any map from this set to a topological space -- the attention of mathematicians in the following years was mostly focused on two special cases.
In general topology the notion of the limit of a filter on a topological space $X$ became, together with the notion of a net, one of the two basic tools used to describe convergence in general topological spaces (see \cite[Section 1.6]{ENGNEW}).
Some authors also studied the convergence of a sequence along a filter. This notion was rediscovered independently by several authors; we could mention A.~Robinson \cite{ROBINSONLIMITS}, A.~R.~Bernstein \cite{BERNSTEINFILT} (these authors used ultrafilters only) or M.~Kat\v{e}tov \cite{KATETOVFIL}.
The definition of the limit along a filter can be reformulated using ideals -- the dual notion to the notion of filter. This type of limit of sequences was introduced independently by P.~Kostyrko, M.~Ma\v{c}aj and T. \v{S}al\'at \cite{KMS} and F.~Nuray and W.~H.~Ruckle \cite{NURAYRUCKLE} and studied under the name \emph{$\ensuremath{\mathcal{I}}\xspace$-convergence} of a sequence by several authors (see also \cite{DEMIRCILIMSUP,KMSS,KSW}). The motivation for this direction of research was an effort to generalize some known results on statistical convergence. Since the notions that we intend to generalize in this paper stem from one of the results on the statistical convergence, let us describe in more detail how they evolved.
Motivated by a result of T.~\v{S}al\'at \cite{SALATSTAT} and J.~A.~Fridy \cite{FRIDY} about statistically convergent sequences, the authors of \cite{KMS} also defined the so-called $\ensuremath{\mathcal{I}}\xspace^*$-convergence (a sequence $\seq xn$ being \emph{$\ensuremath{\mathcal{I}}\xspace^*$-convergent} to $x$ provided that there exists $M\in\ensuremath{\mc F(\I)}$ such that the corresponding subsequence converges to $x$) and asked for which ideals the notions of $\ensuremath{\mathcal{I}}\xspace$-convergence and $\ensuremath{\mathcal{I}}\xspace^*$-convergence coincide. This question was answered in \cite{KSW}, where the authors showed that these notions coincide if and only if\xspace the ideal $\ensuremath{\mathcal{I}}\xspace$ satisfies the property AP, which we call $\APIFin$ here (see also \cite{KMSS,NURAYRUCKLE}).
Later, analogues of the notion of $\ensuremath{\mathcal{I}}\xspace^*$-convergence were defined and similar characterizations were obtained for double sequences (see \cite{DASKOSMAWI,KUMARDOUBLE}) and nets (see \cite{LAHDASNETS}).
In this paper we define $\IhJ$-convergence as a common generalization of all these types of $\ensuremath{\mathcal{I}}\xspace^*$-convergence and obtain results which strengthen the results from the above papers. In the last section we also point out a neglected relation between the $\ensuremath{\mathcal{I}}\xspace$-convergence of sequences and double sequences.
Although our motivation arises mainly from the results obtained for sequences, we will work with functions. One of the reasons is that using functions sometimes helps to simplify notation. Another reason is that we tried to obtain the maximal possible generality allowed by the tools we are using.
\section{Notation and preliminaries}
In this section we recall some notions and results concerning the $\ensuremath{\mathcal{I}}\xspace$-convergence.
If $S$ is a set, then a system $\ensuremath{\mathcal{I}}\xspace\subseteq\powerset{S}$ is called an \emph{ideal on $S$} if it is additive, hereditary and non-empty, that is, \begin{compactenum} \enu
\item $\ensuremath{\emptyset}\xspace\in\ensuremath{\mathcal{I}}\xspace$,
\item $A,B\in\ensuremath{\mathcal{I}}\xspace$ \ensuremath{\Rightarrow}\xspace $A\cup B\in\ensuremath{\mathcal{I}}\xspace$,
\item $A\in\ensuremath{\mathcal{I}}\xspace$ $\land$ $B\subseteq A$ \ensuremath{\Rightarrow}\xspace $B\in\ensuremath{\mathcal{I}}\xspace$. \end{compactenum} An ideal on $S$ is called \emph{admissible} if it contains all singletons, that is, $\{s\}\in\ensuremath{\mathcal{I}}\xspace$ for each $s\in S$. An ideal $\ensuremath{\mathcal{I}}\xspace$ on $S$ is called \emph{proper} if $S\notin\ensuremath{\mathcal{I}}\xspace$; a proper ideal is called \emph{maximal} if it is a maximal element of the set of all proper ideals on $S$ ordered by inclusion. It can be shown that a proper ideal $\ensuremath{\mathcal{I}}\xspace$ is maximal if and only if\xspace $(\forall A\subseteq S)$ $A\in\ensuremath{\mathcal{I}}\xspace$ $\lor$ $S\ensuremath{\setminus} A\in \ensuremath{\mathcal{I}}\xspace$.
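As an aside for readers who like to experiment, the three axioms above can be verified mechanically on a small finite ground set. The following Python sketch (purely illustrative; the function names are our own and not part of the paper) checks them by brute force:

```python
from itertools import combinations

def powerset(s):
    """All subsets of s, as frozensets."""
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def is_ideal(I, S):
    """Brute-force check of the three ideal axioms on a finite ground set S."""
    I = {frozenset(a) for a in I}
    S = frozenset(S)
    if frozenset() not in I:            # (1) contains the empty set
        return False
    for a in I:
        if not a <= S:                  # members must be subsets of S
            return False
        for b in I:
            if a | b not in I:          # (2) closed under finite unions
                return False
        for b in powerset(a):
            if b not in I:              # (3) hereditary
                return False
    return True

# the ideal of all subsets of {1, 2} inside the ground set {1, 2, 3}
print(is_ideal([set(), {1}, {2}, {1, 2}], {1, 2, 3}))   # True
print(is_ideal([{1}], {1, 2, 3}))                        # False: empty set missing
```

The check is exponential in $|S|$, so it is only meant for toy examples.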
We will denote by $\ensuremath{\textrm{Fin}}$ the ideal of all finite subsets of a given set $S$.
The dual notion to the notion of an ideal is the notion of a filter. A system $\mc F\subseteq\powerset S$ of subsets of $S$ is called a \emph{filter on $S$} if \begin{compactenum} \enu
\item $S\in\mc F$,
\item $A,B\in\mc F$ \ensuremath{\Rightarrow}\xspace $A\cap B\in\mc F$,
\item $A\in\mc F$ $\land$ $B\supseteq A$ \ensuremath{\Rightarrow}\xspace $B\in\mc F$. \end{compactenum} A filter $\mc F$ is called \emph{proper} if $\ensuremath{\emptyset}\xspace\notin\mc F$.
The dual notion to the notion of a maximal ideal is the notion of \emph{ultrafilter.}
A system $\mc B\subseteq\powerset S$ is called \emph{filterbase} if \begin{compactenum} \enu
\item $\mc B\ne\ensuremath{\emptyset}\xspace$,
\item $A,B\in\mc B$ \ensuremath{\Rightarrow}\xspace $(\exists C\in\mc B)$ $C\subseteq A\cap B$. \end{compactenum} If $\mc B$ is a filterbase, then the system $$\mc F=\{A\supseteq B;B\in \mc B\}$$ is a filter. It is called the filter \emph{generated} by the base $\mc B$.
For any ideal $\ensuremath{\mathcal{I}}\xspace$ on a set $S$ the system $$\ensuremath{\mc F(\I)}=\{S\ensuremath{\setminus} A; A\in \ensuremath{\mathcal{I}}\xspace\}$$ is a filter on $S$. It is called the \emph{filter associated with the ideal $\ensuremath{\mathcal{I}}\xspace$.} In a similar way we can obtain an ideal from any filter. This yields a one-to-one correspondence between ideals and filters on a given set.
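The duality above is a plain complementation, so it can be illustrated directly. A small Python sketch (our own illustration, assuming a finite ground set) shows that the round trip ideal $\to$ filter $\to$ ideal is the identity:

```python
def dual_filter(I, S):
    """The filter F(I) associated with an ideal I on S: complements of members of I."""
    S = frozenset(S)
    return {S - frozenset(a) for a in I}

def dual_ideal(F, S):
    """The ideal associated with a filter F on S, by the same complementation."""
    S = frozenset(S)
    return {S - frozenset(a) for a in F}

S = {1, 2, 3}
I = {frozenset(), frozenset({1})}
F = dual_filter(I, S)
print(sorted(sorted(a) for a in F))   # [[1, 2, 3], [2, 3]]
# the round trip recovers I, illustrating the one-to-one correspondence
print(dual_ideal(F, S) == I)          # True
```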
\begin{DEF}\label{DEFICONV} Let $\ensuremath{\mathcal{I}}\xspace$ be an ideal on a set $S$ and $X$ be a topological space. A function $\Zobr fSX$ is said to be \emph{$\ensuremath{\mathcal{I}}\xspace$-convergent to $x\in X$} if $$\Invobr fU=\{s\in S; f(s)\in U\} \in \ensuremath{\mc F(\I)}$$ holds for every neighborhood $U$ of the point $x$.
We use the notation $$\Jlim{\I} f = x.$$ \end{DEF}
If $S=\mathbb N$ we obtain the usual definition of $\ensuremath{\mathcal{I}}\xspace$-convergence of sequences. In this case the notation $\Jlim{\I} x_n=x$ is used.
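For sequences the prototypical example is statistical convergence, that is, $\ensuremath{\mathcal{I}}\xspace$-convergence along the ideal of sets of asymptotic density zero. While $\ensuremath{\mathcal{I}}\xspace$-convergence cannot be verified by a finite computation, the relevant densities can be estimated numerically. The following Python sketch (a numerical illustration only, not part of the paper) does this for the characteristic sequence of the perfect squares, which is statistically convergent to $0$ but not convergent in the ordinary sense:

```python
import math

def bad_set_density(x, limit, eps, N):
    """Fraction of n <= N with x(n) outside the eps-neighborhood of the limit.

    For statistical convergence this fraction must tend to 0 as N grows,
    for every eps > 0.
    """
    bad = sum(1 for n in range(1, N + 1) if abs(x(n) - limit) >= eps)
    return bad / N

# x_n = 1 on perfect squares, 0 otherwise
x = lambda n: 1.0 if math.isqrt(n) ** 2 == n else 0.0

for N in (100, 10_000, 1_000_000):
    print(N, bad_set_density(x, 0.0, 0.5, N))   # 0.1, 0.01, 0.001
```

The "bad" set $\{n\leq N : |x_n|\geq\varepsilon\}$ consists of the roughly $\sqrt N$ squares below $N$, so its density decays like $1/\sqrt N$.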
We include a few basic facts concerning $\ensuremath{\mathcal{I}}\xspace$-convergence for future reference. \begin{LM}\label{LMCOMPAR} Let $S$ be a set, let $\ensuremath{\mathcal{I}}\xspace$, $\ensuremath{\mathcal{I}}\xspace_1$ and $\ensuremath{\mathcal{I}}\xspace_2$ be ideals on $S$ and let $X$ and $Y$ be topological spaces. \begin{enumerate} \enu
\item\label{itIMPROPER} If $\ensuremath{\mathcal{I}}\xspace$ is not proper, that is, if $\ensuremath{\mathcal{I}}\xspace=\powerset S$, then every function $\Zobr fSX$ is $\ensuremath{\mathcal{I}}\xspace$-convergent to each point of $X$.
\item\label{itCOMPAR} If $\ensuremath{\mathcal{I}}\xspace_1\subseteq\ensuremath{\mathcal{I}}\xspace_2$, then for every function $\Zobr
{f}SX$, we have
$$\Jlim{\ensuremath{\mathcal{I}}\xspace_1} f = x \qquad \text{implies} \qquad \Jlim{\ensuremath{\mathcal{I}}\xspace_2} f=x.$$
\item\label{itHAUS} If $X$ is Hausdorff and $\ensuremath{\mathcal{I}}\xspace$ is proper,
then every function $\Zobr fSX$ has at most one $\ensuremath{\mathcal{I}}\xspace$-limit.
\item\label{itCONTIN} If $\Zobr gXY$ is a continuous mapping and
$\Zobr fSX$ is $\ensuremath{\mathcal{I}}\xspace$-convergent to $x$, then $g\circ f$ is $\ensuremath{\mathcal{I}}\xspace$-convergent to $g(x)$.
\item\label{itCOMPACT} If $\ensuremath{\mathcal{I}}\xspace$ is a maximal ideal and $X$ is compact,
then every function $\Zobr fSX$ has an $\ensuremath{\mathcal{I}}\xspace$-limit. \end{enumerate} \end{LM}
Let us note that the above properties are more frequently stated for filters rather than ideals. Moreover, the property \eqref{itHAUS} is in fact a characterization of Hausdorff spaces and the property \eqref{itCOMPACT} is a characterization of compact spaces.
\section{$\IhJ$-convergence}
\subsection{Definition and basic results}
As we have already mentioned, we aim to generalize the notion of $\ensuremath{\mathcal{I}}\xspace^*$-convergence of sequences, introduced in \cite{KMS} for sequences of real numbers and generalized to metric spaces in \cite{KSW}. Since we are working with functions, we modify this definition in the following way: \begin{DEF}\label{DEFIHCONV} Let $\ensuremath{\mathcal{I}}\xspace$ be an ideal on a set $S$ and let $\Zobr fSX$ be a function to a topological space $X$. The function $f$ is called \emph{$\ensuremath{\mathcal{I}}\xspace^*$-convergent} to the point $x$ of $X$ if there exists a set $M\in\ensuremath{\mc F(\I)}$ such that the function $\Zobr gSX$ defined by $$g(s)= \begin{cases} f(s), &\text{if }s\in M\\ x, &\text{if }s\notin M \end{cases}$$ is $\ensuremath{\textrm{Fin}}$-convergent to $x$. If $f$ is $\ensuremath{\mathcal{I}}\xspace^*$-convergent to $x$, then we write $\Jhlim\I f=x$. \end{DEF} The usual notion of $\ensuremath{\mathcal{I}}\xspace^*$-convergence of sequences is a special case for $S=\mathbb N$. Similarly as for the $\ensuremath{\mathcal{I}}\xspace$-convergence of sequences, we write $\Jhlim\I x_n=x$.
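The modification of $f$ into $g$ used in this definition is easy to make concrete. In the following Python sketch (our illustration; we again use the density-zero ideal), $f$ is badly behaved on the perfect squares, but since the squares form a density-zero set, replacing $f$ by the limit outside a suitable $M\in\ensuremath{\mc F(\I)}$ produces a $\ensuremath{\textrm{Fin}}$-convergent $g$, witnessing $\ensuremath{\mathcal{I}}\xspace^*$-convergence:

```python
import math

def modify_off(f, in_M, limit):
    """The function g of the definition: g = f on M, constantly `limit` off M."""
    return lambda n: f(n) if in_M(n) else limit

# f blows up on the perfect squares and equals 1/n elsewhere
f = lambda n: float(n) if math.isqrt(n) ** 2 == n else 1.0 / n
# M = complement of the squares; S \ M has density zero, so M lies in F(I)
in_M = lambda n: math.isqrt(n) ** 2 != n

g = modify_off(f, in_M, 0.0)
# g(n) -> 0 in the ordinary (Fin) sense: check a window far out
print(max(abs(g(n)) for n in range(10_000, 10_100)))
```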
In fact, the $\ensuremath{\mathcal{I}}\xspace^*$-convergence was defined in \cite{KMS} in a slightly different way -- the $\ensuremath{\textrm{Fin}}$-convergence of the restriction $g|_M$ was used. It is easy to see that these two definitions are equivalent. Our approach will prove advantageous when using more complicated ideals instead of $\ensuremath{\textrm{Fin}}$.
In the definition of $\IhJ$-convergence we simply replace the ideal $\ensuremath{\textrm{Fin}}$ by an arbitrary ideal on the set $S$. \begin{DEF}\label{DEFIHJ} Let $\ensuremath{\mathcal{K}}\xspace$ and $\ensuremath{\mathcal{I}}\xspace$ be ideals on a set $S$, let $X$ be a topological space and let $x$ be an element of $X$. The function $\Zobr fSX$ is said to be \emph{$\IhJ$-convergent} to $x$ if there exists a set $M\in\ensuremath{\mc F(\I)}$ such that the function $\Zobr gSX$ given by $$g(s)= \begin{cases} f(s), &\text{if }s\in M\\ x, &\text{if }s\notin M \end{cases}$$ is \ensuremath{\mathcal{K}}\xspace-convergent to $x$. If $f$ is $\IhJ$-convergent to $x$, then we write $\operatorname{\IhJ\text{-}\lim} f=x$. \end{DEF}
As usual, in the case $S=\mathbb N$ we speak about $\IhJ$-convergence of sequences and use the notation $\operatorname{\IhJ\text{-}\lim} x_n=x$.
\begin{REM}\label{REMDECOMP} The definition of $\IhJ$-convergence can be reformulated in the form of a decomposition theorem. A function $f$ is $\IhJ$-convergent if and only if\xspace it can be written as $f=g+h$, where $g$ is $\ensuremath{\mathcal{K}}\xspace$-convergent and $h$ is non-zero only on a set from $\ensuremath{\mathcal{I}}\xspace$. An analogous observation was made in \cite{CONNORSTRONG} for the statistical convergence of sequences and in \cite{MORICZDOUBLE} for the statistical convergence of double sequences. \end{REM}
\begin{REM}\label{REMTRACE}
A definition of $\IhJ$-convergence following more closely the approach from \cite{KMS} would be: there exists $M\in\ensuremath{\mc F(\I)}$ such that the function $f|_M$ is $\ensuremath{\mathcal{K}}\xspace|M$-convergent to $x$,
where $\ensuremath{\mathcal{K}}\xspace|M=\{A\cap M; A\in\ensuremath{\mathcal{K}}\xspace\}$ is the trace of $\ensuremath{\mathcal{K}}\xspace$ on $M$. These two definitions are equivalent but the one given in Definition \ref{DEFIHJ} is somewhat simpler. \end{REM}
One can easily show directly from the definitions that $\ensuremath{\mathcal{K}}\xspace$-convergence implies $\IhJ$-convergence. \begin{LM}\label{LMKRAIK} If $\ensuremath{\mathcal{I}}\xspace$ and $\ensuremath{\mathcal{K}}\xspace$ are ideals on a set $S$ and $\Zobr fSX$ is a function such that $\Jlim{\ensuremath{\mathcal{K}}\xspace} f=x$, then $\operatorname{\IhJ\text{-}\lim} f=x$. \end{LM}
Using Lemma \ref{LMCOMPAR} \eqref{itCOMPAR} and the definition of $\IhJ$-convergence we get immediately \begin{PROP}\label{PROPCOMPAR} Let $\ensuremath{\mathcal{I}}\xspace$, $\ensuremath{\mathcal{I}}\xspace_1$, $\ensuremath{\mathcal{I}}\xspace_2$, $\ensuremath{\mathcal{K}}\xspace$, $\ensuremath{\mathcal{K}}\xspace_1$ and $\ensuremath{\mathcal{K}}\xspace_2$ be ideals on a set $S$ such that $\ensuremath{\mathcal{I}}\xspace_1\subseteq\ensuremath{\mathcal{I}}\xspace_2$ and $\ensuremath{\mathcal{K}}\xspace_1\subseteq\ensuremath{\mathcal{K}}\xspace_2$ and let $X$ be a topological space. Then for any function $\Zobr fSX$ we have \begin{gather*} \IhJhlim{\ensuremath{\mathcal{I}}\xspace_1}{\ensuremath{\mathcal{K}}\xspace} f=x \qquad \ensuremath{\Rightarrow}\xspace \qquad \IhJhlim{\ensuremath{\mathcal{I}}\xspace_2}{\ensuremath{\mathcal{K}}\xspace} f=x,\\ \IhJhlim{\ensuremath{\mathcal{I}}\xspace}{\ensuremath{\mathcal{K}}\xspace_1} f=x \qquad \ensuremath{\Rightarrow}\xspace \qquad \IhJhlim{\ensuremath{\mathcal{I}}\xspace}{\ensuremath{\mathcal{K}}\xspace_2} f=x. \end{gather*} \end{PROP}
In what follows we are going to study the relationship between the $\ensuremath{\mathcal{I}}\xspace$-convergence and $\IhJ$-convergence. In particular, we will specify the conditions under which the implications \begin{eqnarray} \operatorname{\IhJ\text{-}\lim} f=x \qquad &\ensuremath{\Rightarrow}\xspace& \qquad \Jlim{\I} f =x, \label{IMP1}\\ \Jlim{\I} f =x \qquad &\ensuremath{\Rightarrow}\xspace& \qquad \operatorname{\IhJ\text{-}\lim} f=x, \label{IMP2} \end{eqnarray} hold.
We start with the easier implication \eqref{IMP1}. In the case $\ensuremath{\mathcal{K}}\xspace=\ensuremath{\textrm{Fin}}$ this implication is known to be true for the admissible ideals, that is, for ideals fulfilling $\ensuremath{\mathcal{K}}\xspace\subseteq\ensuremath{\mathcal{I}}\xspace$. We next show that the same is true in general. \begin{PROP}\label{PROPIMP1} Let $\ensuremath{\mathcal{I}}\xspace,\ensuremath{\mathcal{K}}\xspace$ be ideals on a set $S$, let $X$ be a topological space and let $f$ be a function from $S$ to $X$.
\begin{enumerate} \enu
\item\label{itIMP1:1} If the implication \eqref{IMP1} holds for some point $x\in X$
which has at least one neighborhood different from $X$,
then $\ensuremath{\mathcal{K}}\xspace\subseteq\ensuremath{\mathcal{I}}\xspace$. Consequently, if the implication
\eqref{IMP1} holds in a topological space that is not
indiscrete, then $\ensuremath{\mathcal{K}}\xspace\subseteq\ensuremath{\mathcal{I}}\xspace$.
\item\label{itIMP1:2} If $\ensuremath{\mathcal{K}}\xspace\subseteq\ensuremath{\mathcal{I}}\xspace$, then the implication \eqref{IMP1}
holds. \end{enumerate} \end{PROP}
\begin{proof} \eqref{itIMP1:1} Suppose that $\ensuremath{\mathcal{K}}\xspace\nsubseteq\ensuremath{\mathcal{I}}\xspace$, that is, there exists a set $A\in \ensuremath{\mathcal{K}}\xspace\ensuremath{\setminus}\ensuremath{\mathcal{I}}\xspace$. Let $x$ be a point with a neighborhood $U\subsetneqq X$ and $y\in X\ensuremath{\setminus} U$. Let us define a function $\Zobr fSX$ by $$ f(t)=
\begin{cases}
x & \text{if }t\notin A, \\
y & \text{otherwise}.
\end{cases} $$ Clearly, $\Jlim{\ensuremath{\mathcal{K}}\xspace}f=x$ and thus by Lemma \ref{LMKRAIK} we get $\operatorname{\IhJ\text{-}\lim} f=x$. As $\Invobr f{X\ensuremath{\setminus} U}=A\notin\ensuremath{\mathcal{I}}\xspace$, the function $f$ is not $\ensuremath{\mathcal{I}}\xspace$-convergent to $x$.
\eqref{itIMP1:2} Let $X$ be any topological space, $x\in X$ and $\Zobr fSX$. Let $\ensuremath{\mathcal{K}}\xspace\subseteq\ensuremath{\mathcal{I}}\xspace$ and $\operatorname{\IhJ\text{-}\lim} f=x$. By the definition of $\IhJ$-convergence there exists $M\in\ensuremath{\mc F(\I)}$ such that $$C:=\Invobr f{X\ensuremath{\setminus} U}\cap M\in\ensuremath{\mathcal{K}}\xspace\subseteq\ensuremath{\mathcal{I}}\xspace$$ for each neighborhood $U$ of the point $x$. Consequently, $$\Invobr f{X\ensuremath{\setminus} U} \subseteq (S\ensuremath{\setminus} M) \cup C \in \ensuremath{\mathcal{I}}\xspace$$ and thus $\Jlim{\I} f=x$. \end{proof}
\subsection{Additive property and $\IhJ$-convergence}
Inspired by \cite{KSW} and \cite{LAHDASTOP} where the case $\ensuremath{\mathcal{K}}\xspace=\ensuremath{\textrm{Fin}}$ and $S=\mathbb N$ is investigated, we now concentrate on an algebraic characterization of the ideals $\ensuremath{\mathcal{I}}\xspace$ and $\ensuremath{\mathcal{K}}\xspace$ such that the implication \eqref{IMP2} holds for each function $\Zobr fSX$. Before doing this we need to prove some auxiliary results.
\begin{DEF} Let $\ensuremath{\mathcal{K}}\xspace$ be an ideal on a set $S$. We write $A\subJJ{\J} B$ whenever $A\ensuremath{\setminus} B \in \ensuremath{\mathcal{K}}\xspace$. If $A\subJJ{\J} B$ and $B\subJJ{\J} A$, then we write $A\simJb{\J} B$. Clearly, $$A\simJb{\J} B \qquad \ensuremath{\Leftrightarrow}\xspace \qquad A\triangle B \in \ensuremath{\mathcal{K}}\xspace.$$
We say that a set $A$ is \emph{\ensuremath{\mathcal{K}}\xspace-pseudointersection} of a system $\{A_n; n\in\mathbb N\}$ if $A\subJJ{\J} A_n$ holds for each $n\in\mathbb N$. \end{DEF}
In the case $\ensuremath{\mathcal{K}}\xspace=\ensuremath{\textrm{Fin}}$ we obtain the notion of pseudointersection and the relations $\subseteq^*$ and $=^*$ which are often used in set theory (see \cite[p.102]{JUSTWEESE2}).
The use of the symbols $\subJJ{\J}$ and $\simJb{\J}$ can be understood as another way of speaking about the equivalence classes of the subsets of $S$ in the quotient Boolean algebra $\powerset{S}/\ensuremath{\mathcal{K}}\xspace$.
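The relations $\subJJ{\J}$ and $\simJb{\J}$ are straightforward to implement once membership in $\ensuremath{\mathcal{K}}\xspace$ can be decided. A Python sketch (illustrative only; the "smallness" predicate below is a toy stand-in and not a genuine ideal):

```python
def sub_K(A, B, in_K):
    """A is K-almost contained in B: the difference A \\ B lies in K."""
    return in_K(set(A) - set(B))

def sim_K(A, B, in_K):
    """A and B are K-equivalent: the symmetric difference A ^ B lies in K."""
    return in_K(set(A) ^ set(B))

# toy smallness predicate standing in for membership in K
in_K = lambda A: len(A) <= 2

print(sub_K({1, 2, 3, 4}, {3, 4, 5}, in_K))     # True:  {1, 2} is "small"
print(sim_K({1, 2, 3}, {2, 3, 4}, in_K))        # True:  {1, 4} is "small"
print(sim_K({1, 2, 3}, {1, 9, 10, 11}, in_K))   # False: {2, 3, 9, 10, 11} is not
```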
In the following lemma we describe several equivalent formulations of a condition on the ideals $\ensuremath{\mathcal{I}}\xspace$ and $\ensuremath{\mathcal{K}}\xspace$ which will play a crucial role in our further study.
\begin{LM}\label{LMAP} Let $\ensuremath{\mathcal{I}}\xspace$ and $\ensuremath{\mathcal{K}}\xspace$ be ideals on the same set $S$. The following conditions are equivalent: \begin{enumerate} \enu
\item\label{PI1} For every sequence $(A_n)_{n\in\mathbb N}$ of sets from \ensuremath{\mathcal{I}}\xspace there is $A\in\ensuremath{\mathcal{I}}\xspace$
such that $A_n\subJJ{\J} A$ for all $n$'s.
\item\label{PI2} Any sequence $(F_n)_{n\in\mathbb N}$ of sets from $\ensuremath{\mc F(\I)}$ has a \ensuremath{\mathcal{K}}\xspace-pseudointersection in
$\ensuremath{\mc F(\I)}$.
\item\label{PI3} For every sequence
$(A_n)_{n\in\mathbb N}$ of sets belonging to $\ensuremath{\mathcal{I}}\xspace$ there exists a sequence
$(B_n)_{n\in\mathbb N}$ of sets from $\ensuremath{\mathcal{I}}\xspace$ such that $A_j \simJb{\J} B_j$
for $j\in\mathbb N$ and $B=\bigcup_{j\in\mathbb N} B_j\in\ensuremath{\mathcal{I}}\xspace$.
\item\label{PI4} For every sequence of mutually disjoint sets
$(A_n)_{n\in\mathbb N}$ belonging to $\ensuremath{\mathcal{I}}\xspace$ there exists a sequence
$(B_n)_{n\in\mathbb N}$ of sets belonging to $\ensuremath{\mathcal{I}}\xspace$ such that $A_j \simJb{\J} B_j$
for $j\in\mathbb N$ and $B=\bigcup_{j\in\mathbb N} B_j\in\ensuremath{\mathcal{I}}\xspace$.
\item\label{PI6} For every non-decreasing sequence $A_1\subseteq A_2 \subseteq \dots \subseteq A_n \subseteq \dots$
of sets from $\ensuremath{\mathcal{I}}\xspace$ there exists a sequence
$(B_n)_{n\in\mathbb N}$ of sets belonging to $\ensuremath{\mathcal{I}}\xspace$ such that $A_j \simJb{\J} B_j$
for $j\in\mathbb N$ and $B=\bigcup_{j\in\mathbb N} B_j\in\ensuremath{\mathcal{I}}\xspace$.
\item\label{PI5} In the Boolean algebra $\powerset{S}/\ensuremath{\mathcal{K}}\xspace$ the ideal $\ensuremath{\mathcal{I}}\xspace$
corresponds to a $\sigma$-directed subset, that is, every countable subset has an
upper bound. \end{enumerate} \end{LM}
Note that (\ref{PI2}) is just a dual formulation of (\ref{PI1}). Similarly, (\ref{PI5}) is the formulation of (\ref{PI1}) in the language of Boolean algebras. The equivalence of (\ref{PI3}), (\ref{PI4}) and (\ref{PI6}) can be easily shown by standard methods from measure theory. The proof of the equivalence of the remaining conditions is similar to the proof of Proposition 1 of \cite{BADEKO}, where the case $\ensuremath{\mathcal{K}}\xspace=\ensuremath{\textrm{Fin}}$ is considered. We include the proof for the sake of completeness and also to stress that the validity of this lemma does not depend on the countability of $S$ or on the assumption $\ensuremath{\mathcal{K}}\xspace\subseteq\ensuremath{\mathcal{I}}\xspace$.
\begin{proof} (\ref{PI1})\ensuremath{\Rightarrow}\xspace(\ref{PI6}) Let $A_1\subseteq A_2 \subseteq \dots \subseteq A_n \subseteq \dots$ be a non-decreasing sequence of sets from $\ensuremath{\mathcal{I}}\xspace$. Since each $A_n\in\ensuremath{\mathcal{I}}\xspace$, the condition (\ref{PI1}) yields the existence of a set $A\in\ensuremath{\mathcal{I}}\xspace$ satisfying $A_n\subJJ{\J} A$ for $n\in\mathbb N$. Let $B_n:=A\cap A_n$. Since $B_n\subseteq A$, we have $B_n\in\ensuremath{\mathcal{I}}\xspace$. Moreover, $B_n\triangle A_n = A_n\ensuremath{\setminus} A \in \ensuremath{\mathcal{K}}\xspace$, thus $B_n\simJb{\J} A_n$. Finally, $B=\bigcup_{j\in\mathbb N} B_j\subseteq A\in\ensuremath{\mathcal{I}}\xspace$, as required.
(\ref{PI3})\ensuremath{\Rightarrow}\xspace(\ref{PI1}) Let $(A_n)_{n\in\mathbb N}$ be a sequence of sets belonging to $\ensuremath{\mathcal{I}}\xspace$. By (\ref{PI3}) there exists a sequence $(B_n)_{n\in\mathbb N}$ of sets from $\ensuremath{\mathcal{I}}\xspace$ such that for all $n$ we have $B_n \simJb{\J} A_n$ and $A:=\bigcup_{n\in\mathbb N} B_n \in \ensuremath{\mathcal{I}}\xspace$. From $A_n\triangle B_n \in \ensuremath{\mathcal{K}}\xspace$ and $B_n\subseteq A$ we get $A_n\subJJ{\J} A$, which proves (\ref{PI1}). \end{proof}
It is also easy to see that in condition (\ref{PI2}) it suffices to consider only sequences of sets from a filterbase. This reformulation of (\ref{PI2}) can sometimes be easier to verify.
\begin{DEF} Let $\ensuremath{\mathcal{I}}\xspace$, $\ensuremath{\mathcal{K}}\xspace$ be ideals on a set $S$. We say that $\ensuremath{\mathcal{I}}\xspace$ has the \emph{additive property} with respect to $\ensuremath{\mathcal{K}}\xspace$, or more briefly that $\APIJ$ holds, if any of the equivalent conditions of Lemma \ref{LMAP} holds. \end{DEF}
The condition AP from \cite{KSW}, which characterizes ideals such that $\ensuremath{\mathcal{I}}\xspace^*$-convergence implies $\ensuremath{\mathcal{I}}\xspace$-convergence, is equivalent to the condition $\APIFin$. Let us note that ideals fulfilling this condition are often called \emph{P-ideals} (see for example \cite{BADEKO} or \cite{FILIPOWBOLZ}).
In the following two theorems we show that the condition $\APIJ$ is the correct generalization of conditions AP from \cite{KSW}, \cite{LAHDASTOP} and \cite{DASKOSMAWI}. In particular, as special cases of our results we obtain Theorem 3.1 of \cite{KSW}, Theorem 8 of \cite{LAHDASNETS} and Theorem 2 of \cite{DASKOSMAWI}.
Although we do not consider arbitrary topological spaces, we feel that the restriction to first countable spaces is sufficient for most applications. For example, in \cite{KSW} the authors work only with metric spaces and in \cite{LAHDASTOP} the case where $X$ is a first countable $T_1$-space is considered.
\begin{THM}\label{THMIMP2} Let $\ensuremath{\mathcal{I}}\xspace$ and $\ensuremath{\mathcal{K}}\xspace$ be ideals on a set $S$ and let $X$ be a first countable topological space. If the ideal $\ensuremath{\mathcal{I}}\xspace$ has the additive property with respect to $\ensuremath{\mathcal{K}}\xspace$, then for any function $\Zobr fSX$ the implication \eqref{IMP2} holds. In other words, if the condition $\APIJ$ holds, then the $\ensuremath{\mathcal{I}}\xspace$-convergence implies the \IhJ-convergence. \end{THM}
\begin{proof} Let $\Zobr fSX$ be an \ensuremath{\mathcal{I}}\xspace-convergent function and let $x=\Jlim{\I} f$. Let $\mc B=\{U_n; n\in\mathbb N\}$ be a countable base for $X$ at the point $x$. By the \ensuremath{\mathcal{I}}\xspace-convergence of $f$ we have $$\Invobr f{U_n} \in \ensuremath{\mc F(\I)}$$ for each $n$, thus by Lemma \ref{LMAP} there exists $A\in\ensuremath{\mc F(\I)}$ with $A\subJJ{\J} \Invobr f{U_n}$, that is, $A\ensuremath{\setminus}\Invobr f{U_n}\in\ensuremath{\mathcal{K}}\xspace$ for all $n$'s.
Now it suffices to show that the function $\Zobr gSX$ given by
$g|_A=f|_A$ and $\Obr g{S\ensuremath{\setminus} A}=\{x\}$ is $\ensuremath{\mathcal{K}}\xspace$-convergent to $x$. Since for each $U_n\in\mc B$ we have $$\Invobr g{U_n}= (S\ensuremath{\setminus} A)\cup \Invobr f{U_n}= S\ensuremath{\setminus}(A\ensuremath{\setminus}\Invobr f{U_n})$$ and the set $A\ensuremath{\setminus}\Invobr f{U_n}$ belongs to $\ensuremath{\mathcal{K}}\xspace$, its complement $\Invobr g{U_n}$ lies in $\ensuremath{\mc F(\J)}$, as required. \end{proof}
Let us recall that a topological space $X$ is called a \emph{finitely generated space} or an \emph{Alexandroff space} if any intersection of open subsets of $X$ is again an open set (see \cite{ARENAS}). Equivalently, $X$ is finitely generated if and only if\xspace each point of $X$ has a smallest neighborhood. Finitely generated $T_1$-spaces are precisely the discrete spaces.
\begin{THM}\label{THMIMP2b} Let $\ensuremath{\mathcal{I}}\xspace$, $\ensuremath{\mathcal{K}}\xspace$ be ideals on a set $S$ and let $X$ be a first countable topological space which is not finitely generated. If the implication \eqref{IMP2} holds for any function $\Zobr fSX$, then the ideal $\ensuremath{\mathcal{I}}\xspace$ has the additive property with respect to $\ensuremath{\mathcal{K}}\xspace$. \end{THM}
\begin{proof} Let $x\in X$ be an accumulation point of $X$ which does not have a smallest neighborhood. Let $\mc B=\{U_i; i\in\mathbb N\cup\{0\}\}$ be a countable base at $x$ such that $U_{n}\supsetneqq U_{n+1}$ and $U_0=X$. Suppose we are given a countable family $(A_n)_{n\in\mathbb N}$ of mutually disjoint sets from $\ensuremath{\mathcal{I}}\xspace$.
For each $n\in\mathbb N$ choose an $x_n\in U_{n-1}\ensuremath{\setminus} U_{n}$. Let us define $\Zobr fSX$ as $$f(s)=
\begin{cases}
x_n & \text{if } s\in A_n, \\
x & \text{if } s\notin\bigcup_{n\in\mathbb N} A_n.
\end{cases} $$
We have $\Invobr f{X\ensuremath{\setminus} U_n}=\bigcup_{i=1}^n A_i\in\ensuremath{\mathcal{I}}\xspace$, hence $\Jlim{\I} f=x$. By the assumption, $\operatorname{\IhJ\text{-}\lim} f=x$, which means that there is $A\in\ensuremath{\mc F(\I)}$ such that the function $\Zobr gSX$ given by
$g|_A=f|_A$ and $\Obr g{S\ensuremath{\setminus} A}=\{x\}$ is $\ensuremath{\mathcal{K}}\xspace$-convergent to $x$. This yields $$\Invobr g{X\ensuremath{\setminus} U_n}=\left(\bigcup_{i=1}^n A_i\right)\cap A = \bigcup_{i=1}^n (A_i\cap A)\in \ensuremath{\mathcal{K}}\xspace.$$ From this we have $A_i\cap A\in \ensuremath{\mathcal{K}}\xspace$, thus $B_i:=A_i\ensuremath{\setminus} A \simJb{\J} A_i$.
At the same time, $$\bigcup_{i\in\mathbb N} B_i = \left(\bigcup_{i\in\mathbb N} A_i\right)\ensuremath{\setminus} A \subseteq S\ensuremath{\setminus} A\in \ensuremath{\mathcal{I}}\xspace.$$ Thus we have verified condition \eqref{PI4} from Lemma \ref{LMAP}. \end{proof}
\begin{REM}\label{REMIMP2bLOCAL} Let us note that we have in fact proved a slightly stronger result: If $x$ is an accumulation point of $X$ which has a countable base of neighborhoods but no smallest neighborhood, and the implication \eqref{IMP2} holds for each function $\Zobr fSX$ which is $\ensuremath{\mathcal{I}}\xspace$-convergent to $x$, then the ideal $\ensuremath{\mathcal{I}}\xspace$ has the additive property with respect to $\ensuremath{\mathcal{K}}\xspace$. \end{REM}
We next provide an example showing that Theorem \ref{THMIMP2} does not hold in general for spaces which are not first countable.
\begin{EXA}\label{EXAJASREC} Pointwise $\ensuremath{\mathcal{I}}\xspace$-convergence of sequences of continuous real functions was studied in \cite{JASINSKIRECLAW} and \cite{JASINSKIRECLAW2008}. It can be understood as convergence of sequences of elements of the space $C_p(X)$ of all real continuous functions endowed with the topology of pointwise convergence. The authors of \cite{JASINSKIRECLAW,JASINSKIRECLAW2008} defined and studied the \ensuremath{\mathcal{I}}\xspace-convergence property which, using our terminology, can be formulated as follows: A topological space $X$ has the \emph{$\ensuremath{\mathcal{I}}\xspace$-convergence property} if \eqref{IMP2} holds in the space $C_p(X)$ for $S=\mathbb N$ and $\ensuremath{\mathcal{K}}\xspace=\ensuremath{\textrm{Fin}}$.
It is known that $C_p(X)$ is first countable if and only if\xspace $X$ is countable, see \cite[Theorem 4.4.2]{MCCOYNTANTU}. Hence our Theorem \ref{THMIMP2} yields that all countable spaces have the $\ensuremath{\mathcal{I}}\xspace$-convergence property for every P-ideal $\ensuremath{\mathcal{I}}\xspace$. The same result was obtained in \cite[Corollary 1]{JASINSKIRECLAW}.
It was shown in \cite{JASINSKIRECLAW2008} that $\mathbb R$ does not have the $\ensuremath{\mathcal{I}}\xspace$-convergence property for any nontrivial analytic P-ideal on $\mathbb N$. (By trivial ideals we mean ideals of the form $\ensuremath{\mathcal{I}}\xspace_C=\{A\subseteq\mathbb N; A\subseteq^* C\}$ for some $C\subseteq\mathbb N$.) Hence $C_p(\mathbb R)$ provides the desired counterexample, which works for a large class of ideals on $\mathbb N$. The definition of analytic ideals, more related results and many examples of analytic P-ideals can be found, for example, in \cite{FARAHMEMOIRS,FILIPOWBOLZ}. \end{EXA}
To find a counterexample showing that Theorem \ref{THMIMP2b} is in general not true without the assumption that the space $X$ is first countable we can use any space in which all $\ensuremath{\mathcal{I}}\xspace$-convergent sequences are, in some sense, trivial.
\begin{EXA}\label{EXAIMP2b} Let us recall that $\ensuremath{\omega_1}\xspace$ denotes the first uncountable ordinal with the usual ordering. Let $X$ be the topological space on the set $\ensuremath{\omega_1}\xspace \cup \{\ensuremath{\omega_1}\xspace\}$ with the topology such that all points different from $\ensuremath{\omega_1}\xspace$ are isolated and the base at the point $\omega_1$ consists of all sets $U_\alpha=\{\beta\in X; \beta>\alpha\}$ for $\alpha<\ensuremath{\omega_1}\xspace$. Notice that if $C\subseteq\ensuremath{\omega_1}\xspace$ is a set such that $\ensuremath{\omega_1}\xspace\in\ol C$, then $\abs{C}=\ensuremath{\aleph_1}\xspace$.
Now let $\ensuremath{\mathcal{I}}\xspace$ be an admissible ideal on $\mathbb N$ and let a function $\Zobr f{\mathbb N}X$ be $\ensuremath{\mathcal{I}}\xspace$-convergent to $\ensuremath{\omega_1}\xspace$. We will show that then there exists $M\in\ensuremath{\mc F(\I)}$ such that $f(x)=\omega_1$ for each
$x\in M$, that is, $f|_M$ is constant. Clearly, this implies that $f$ is $\ensuremath{\mathcal{I}}\xspace^*$-convergent.
For the sake of contradiction, suppose that each set $M\in\ensuremath{\mc F(\I)}$ contains some point $m$ such that $f(m)\ne\ensuremath{\omega_1}\xspace$. Since $\Invobr fU\in\ensuremath{\mc F(\I)}$, for any neighborhood $U$ of $\ensuremath{\omega_1}\xspace$ in $X$ there exists $m\in\mathbb N$ with $f(m)\in U\ensuremath{\setminus}\{\ensuremath{\omega_1}\xspace\}$. Therefore for the set $C=\{m\in\mathbb N; f(m)\ne\ensuremath{\omega_1}\xspace\}$ we have $\ensuremath{\omega_1}\xspace\in \ol{\Obr fC}$. But $\Obr fC$ is a countable subset of $\ensuremath{\omega_1}\xspace$, which contradicts the observation made above.
Now, by choosing an ideal $\ensuremath{\mathcal{I}}\xspace$ which does not have the additive property $\APIFin$ we obtain the desired counterexample. \end{EXA}
\section{Examples and applications}
We have already mentioned that our motivation for the definition and study of $\IhJ$-convergence was an effort to provide a common generalization of the notion of $\ensuremath{\mathcal{I}}\xspace^*$-convergence, which was defined first for usual sequences in \cite{KMS} and later generalized to sequences of functions, double sequences and nets in \cite{GEZERKARAKUS}, \cite{KUMARDOUBLE} and \cite{LAHDASNETS}, respectively.
In this section we show that $\IhJ$-convergence is indeed a common generalization of these notions, that is, all of them arise as special cases of $\IhJ$-convergence. We begin with the notion of $\ensuremath{\mathcal{I}}\xspace^*$-convergence of double sequences.
\subsection{Double sequences}
In the study of double sequences several types of convergence are used. For our purposes, the following one is the most important.
\begin{DEF}[\cite{BALDEMS,PRINGSHEIM}]\label{DEFPRG} A double sequence $\dseq xmn$ of points of a topological space $X$ is said to converge to $x$ \emph{in Pringsheim's sense} if for each neighborhood $U$ of the point $x$ $$(\exists k\in\mathbb N) (\forall m\geq k) (\forall n\geq k) x_{m,n}\in U.$$ \end{DEF}
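The quantifier condition in the definition can be probed numerically on a finite window of a double sequence. The following Python sketch searches for the smallest admissible $k$; the concrete sequence, tolerance, and window size are illustrative choices, not part of the definition, and inspecting a finite window only yields evidence of convergence, not a proof:

```python
def pringsheim_limit_candidate(x, limit, eps, k_max=200):
    """Return the smallest k such that |x(m, n) - limit| < eps
    for all k <= m, n <= k_max, or None if no such k exists in
    the inspected window."""
    for k in range(1, k_max + 1):
        if all(abs(x(m, n) - limit) < eps
               for m in range(k, k_max + 1)
               for n in range(k, k_max + 1)):
            return k
    return None

# x_{m,n} = 1/(m+n) converges to 0 in Pringsheim's sense:
# 1/(m+n) < 0.05 holds as soon as m, n >= 11.
k = pringsheim_limit_candidate(lambda m, n: 1.0 / (m + n), 0.0, 0.05)
```

Note that the condition requires smallness simultaneously in both indices, which is why the sign-alternating sequence $x_{m,n}=(-1)^{m+n}$ fails the test for every $k$.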
It is easy to see that the convergence in Pringsheim's sense is equal to the $\ensuremath{\mathcal{I}}\xspace$-convergence along the \emph{Pringsheim's ideal} $\ensuremath{\I_2}$ on $\mathbb N\times\mathbb N$ whose dual filter $\FIh{\ensuremath{\I_2}}$ is given by the filterbase $$\ensuremath{{\mc B_2}}=\{\intrvl{m}\infty\times\intrvl{m}\infty; m\in\mathbb N\}.$$ We will give a different description of this ideal in Example \ref{EXAMACAJID}.
Altogether four types of convergence of double sequences were studied in \cite{BALDEMS}. All of them can be described as $\ensuremath{\mathcal{I}}\xspace$-convergences using appropriate ideals on $\mathbb N\times\mathbb N$ (see Figure \ref{FIGIDEALS}). In fact, we denote the Pringsheim's ideal by $\ensuremath{\I_2}$ in order to be consistent with the notation of \cite{BALDEMS}.
\begin{figure}
\caption{Ideals from \cite{BALDEMS} illustrated by depicting typical sets from the filterbase. Vertical lines represent the partition of $\mathbb N\times\mathbb N$ into countably many infinite sets $\{i\}\times\mathbb N$.}
\label{FIGIDEALS}
\end{figure}
The $\ensuremath{\mathcal{I}}\xspace^*$-convergence of double sequences studied in \cite{KUMARDOUBLE} and \cite{DASKOSMAWI} is the same as $\IhJh{\ensuremath{\mathcal{I}}\xspace}{\ensuremath{\I_2}}$-convergence in $\mathbb N\times\mathbb N$. Therefore, as a special case of our Theorems \ref{THMIMP2} and \ref{THMIMP2b} for $S=\mathbb N\times\mathbb N$ and $\ensuremath{\mathcal{K}}\xspace=\ensuremath{\I_2}$ we obtain Proposition 4.2 of \cite{KUMARDOUBLE}, and Theorems 3 and 4 of \cite{DASKOSMAWI}. Note that in \cite{KUMARDOUBLE} and \cite{DASKOSMAWI} only the ideals containing $\ensuremath{\I_2}$ are considered, see Proposition \ref{PROPIMP1}.
\subsection{Further examples}
In order to avoid technical details we will define neither the notions of pointwise and uniform $\ensuremath{\mathcal{I}}\xspace^*$-convergence of a sequence of functions defined in \cite{GEZERKARAKUS}, nor the notions of the $\ensuremath{\mathcal{I}}\xspace$- and $\ensuremath{\mathcal{I}}\xspace^*$-convergence of nets defined in \cite{LAHDASNETS}.
We just mention that, given an ideal $\mc L$ on $\mathbb N$, the uniform $\mc L^*$-convergence of a sequence of functions defined on $X$ is precisely the $\IhJ$-convergence for the ideal $\ensuremath{\mathcal{I}}\xspace$ on $X\times\mathbb N$ given by the filterbase $\{X\times (\mathbb N\ensuremath{\setminus} A); A\in\mc L\}$ and the ideal $\ensuremath{\mathcal{K}}\xspace$ given by the filterbase $\{X\times (\mathbb N\ensuremath{\setminus} A); A\in\ensuremath{\textrm{Fin}}\}$. The pointwise $\mc L^*$-convergence can be obtained if $\ensuremath{\mathcal{I}}\xspace$ is the ideal of all sets $A\subseteq X\times\mathbb N$ such that for each $x\in X$ the $x$-cut $A_x:=\{n\in\mathbb N; (x,n)\in A\}$ belongs to $\mc L$, and $\ensuremath{\mathcal{K}}\xspace$ consists of all sets such that each $A_x$ is finite.
In both cases it can be shown that the condition $\APIJh{\ensuremath{\mathcal{I}}\xspace}{\ensuremath{\mathcal{K}}\xspace}$ is equivalent to the condition $\APIJh{\mc{L}}{\ensuremath{\textrm{Fin}}}$. Hence our Theorems \ref{THMIMP2} and \ref{THMIMP2b} imply that these two types of $\ensuremath{\mathcal{I}}\xspace$-convergence are equivalent to corresponding $\ensuremath{\mathcal{I}}\xspace^*$-convergence if and only if\xspace $\APIJh{\mc{L}}{\ensuremath{\textrm{Fin}}}$ holds. This observation has been made already in \cite{GEZERKARAKUS}.
Similarly, the concept of $\ensuremath{\mathcal{I}}\xspace^*$-convergence of nets is a special case of $\IhJ$-convergence and Theorem 12 of \cite{LAHDASNETS} can be obtained from our Theorems \ref{THMIMP2} and \ref{THMIMP2b} by choosing the section filter of the considered directed set for $\ensuremath{\mathcal{K}}\xspace$ (the definition of the section filter can be found, for example, in \cite[p.60]{BOURBAKIGTENG}).
\subsection{$\ensuremath{\mathcal{I}}\xspace$-convergence of double sequences}
We close this paper with an observation concerning the $\ensuremath{\mathcal{I}}\xspace$-convergence of double sequences.
Notice that any bijection between sets $S$ and $T$ naturally gives rise to a bijection between $X^S$ and $X^T$, an isomorphism between Boolean algebras $\powerset{S}$ and $\powerset{T}$, and also to an isometric isomorphism between linear normed spaces $\ell_\infty(S)$ and $\ell_\infty(T)$. It is easy to see that this correspondence also preserves the properties related to the notion of $\ensuremath{\mathcal{I}}\xspace$-convergence. Hence results about $\ensuremath{\mathcal{I}}\xspace$-convergence for a given set $S$ do not depend on the natural (partial) ordering on the set $S$ in any way. Thus these results can be transferred to any set of the same cardinality.
We can use any bijection between $\mathbb N$ and $\mathbb N\times\mathbb N$ to relate results about sequences and double sequences. It is interesting to note that several authors working in this area did not realize this possibility.
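For concreteness, the transfer can be carried out with any explicit bijection; a minimal Python sketch using the classical Cantor pairing function (the particular choice of bijection is illustrative, any other would serve equally well):

```python
import math

def pair(m, n):
    """Cantor pairing: a bijection from N x N onto N (0-based)."""
    return (m + n) * (m + n + 1) // 2 + n

def unpair(k):
    """Inverse of the Cantor pairing function."""
    w = (math.isqrt(8 * k + 1) - 1) // 2   # index of the anti-diagonal
    n = k - w * (w + 1) // 2
    return w - n, n

# The two maps are mutually inverse, so any statement about
# I-convergence of sequences on N transfers to double sequences
# on N x N and back.
assert all(unpair(pair(m, n)) == (m, n)
           for m in range(40) for n in range(40))
```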
The basic results on $\ensuremath{\mathcal{I}}\xspace$-convergence (such as additivity, multiplicativity, uniqueness of limit in Hausdorff spaces) need not be proved again for double sequences, since they follow from the analogous results for sequences (although the proofs are rather trivial in both cases). But there are also some more interesting concepts that were defined for double sequences in such a way that they are preserved by this correspondence. Namely, this is true for the notions of $\ensuremath{\mathcal{I}}\xspace$-Cauchy double sequences, extremal $\ensuremath{\mathcal{I}}\xspace$-limit points ($\ensuremath{\mathcal{I}}\xspace$-limit superior and $\ensuremath{\mathcal{I}}\xspace$-limit inferior) and $\ensuremath{\mathcal{I}}\xspace$-cluster points.
In this way, some results from the papers \cite{DASMALIKEXTREMALDOUBLE,GURDALSAHINEREXPTS,KUMARICORE,TRIPDS} on the above mentioned concepts can be obtained from the results of \cite{DEMIRCILIMSUP,DEMSICAUCH,KMSS,LAHDASLIMSUP}. Actually, the fact that a double sequence is $\ensuremath{\mathcal{I}}\xspace$-convergent if and only if\xspace it is $\ensuremath{\mathcal{I}}\xspace$-Cauchy is shown in Proposition 5 of \cite{DEMSICAUCH} using a bijection between $\mathbb N$ and $\mathbb N\times\mathbb N$.
The above observation can also be used to obtain an alternative description of the ideal $\ensuremath{\I_2}$.
\begin{EXA}\label{EXAMACAJID} A basic example of an ideal which does not have the property $\APIFin$ is the ideal $\ensuremath{{\I_m}}$ given in Example 1.1.(g) of \cite{KSW} and Example (XI) of \cite{KMS}. It is defined as follows: Suppose we are given any partition $\mathbb N=\bigcup_{i=1}^\infty D_i$ of $\mathbb N$ into countably many infinite sets. A set $A\subseteq\mathbb N$ belongs to $\ensuremath{{\I_m}}$ if and only if\xspace it intersects only finitely many $D_i$'s. Of course, choosing different partitions of $\mathbb N$ can lead to ideals which are different, but equivalent from the point of view of $\ensuremath{\mathcal{I}}\xspace$-convergence.
We can also use any countable set instead of $\mathbb N$. In particular, as observed in the proof of Corollary 4 in \cite{BALDEMS}, by choosing the partition of $\mathbb N\times\mathbb N$ into sets $D_i=\{(n,i);n\ge i\}\cup\{(i,k); k\ge i\}$ we obtain the ideal $\ensuremath{\I_2}$ in this way. Similarly, by using $D_i=\{i\}\times\mathbb N$ we get the ideal $\ensuremath{{\I_1}}$ of \cite{BALDEMS} (see Figure \ref{FIGDECOMP}). Thus the ideals $\ensuremath{{\I_1}}$, $\ensuremath{\I_2}$ and $\ensuremath{{\I_m}}$ are essentially the same. In particular, this gives an alternative proof that $\APIJh{\ensuremath{\I_2}}{\ensuremath{\textrm{Fin}}}$ and $\APIJh{\ensuremath{{\I_1}}}{\ensuremath{\textrm{Fin}}}$ fail, see \cite{DASKOSMAWI}. \end{EXA}
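To make the construction of $\ensuremath{{\I_m}}$ concrete, the following Python sketch probes finite truncations of subsets of $\mathbb N$ against the dyadic partition $D_i=\{2^{i-1}(2k-1); k\in\mathbb N\}$, which is an illustrative choice of partition, not one of those used above:

```python
def block_index(n):
    """Index i of the partition class D_i = {2^(i-1) * odd}
    containing n >= 1 (the 2-adic valuation of n, plus one)."""
    i = 1
    while n % 2 == 0:
        n //= 2
        i += 1
    return i

def blocks_hit(A):
    """Set of partition classes met by the finite set A.
    A set belongs to I_m iff it meets only finitely many classes,
    so on truncations one can only gather evidence, not a proof."""
    return {block_index(n) for n in A}

# D_1 (the odd numbers) meets a single class, so D_1 is in I_m.
# The powers of two meet ever more classes as the truncation grows,
# suggesting (correctly) that they do not belong to I_m.
assert blocks_hit(range(1, 1000, 2)) == {1}
assert len(blocks_hit({2 ** j for j in range(10)})) == 10
```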
\begin{figure}\label{FIGDECOMP}
\end{figure}
\noindent {\bf Acknowledgment}. We would like to thank the referees for suggesting several improvements and corrections. In particular, one of the referees pointed our attention to results of \cite{JASINSKIRECLAW2008}, which were used to make Example \ref{EXAJASREC} more general.
\end{document} |
\begin{document}
\begin{frontmatter} \title{A Secure Control Framework for Resource-Limited Adversaries}
\thanks[footnoteinfo]{This paper was not presented at any IFAC meeting. Corresponding author Andr\'{e} Teixeira. Tel. +46-73-429 78 31. Fax +46-8-790 73 29.}
\author{Andr\'{e} Teixeira$^\star$}\ead{\texttt{andretei@kth.se}}, \author{Iman Shames$^\dagger$}\ead{\texttt{iman.shames@unimelb.edu.au}}, \author{Henrik Sandberg$^\star$}\ead{\texttt{hsan@kth.se}}, \author{Karl H. Johansson$^\star$}\ead{\texttt{kallej@kth.se}}
\address{$^\star$ACCESS Linnaeus Centre, KTH Royal Institute of Technology, Electrical Engineering, Stockholm, Sweden } \address{$^\dagger$Department of Electrical and Electronic Engineering, University of Melbourne, Australia }
\begin{keyword} Cyber-physical systems, security, attack space, secure control systems. \end{keyword}
\begin{abstract} Cyber-secure networked control is modeled, analyzed, and experimentally illustrated in this paper. An attack space defined by the adversary's system knowledge, disclosure, and disruption resources is introduced. Adversaries constrained by these resources are modeled for a networked control system architecture. It is shown that attack scenarios corresponding to denial-of-service, replay, zero-dynamics, and bias injection attacks can be analyzed using this framework. Furthermore, the attack policy for each scenario is described and the attack's impact is characterized using the concept of safe sets. An experimental setup based on a quadruple-tank process controlled over a wireless network is used to illustrate the attack scenarios, their consequences, and potential counter-measures. \end{abstract}
\end{frontmatter}
\section{Introduction} \label{sec:intro}
Safe and reliable operation of infrastructures is of major societal importance. These systems need to be engineered in such a way that they can be continuously monitored, coordinated, and controlled despite a variety of potential system disturbances. Given the strict operating requirements and system complexity, such systems are operated through IT infrastructures enabling the timely data flow between digital controllers, sensors, and actuators. However, the use of communication networks and heterogeneous IT components has made these cyber-physical systems vulnerable to cyber threats. Notable examples are the industrial systems and critical infrastructures operated through Supervisory Control and Data Acquisition (SCADA) systems. The measurement and control data in these systems are commonly transmitted through unprotected communication channels, leaving the system vulnerable to several threats~\cite{kn:Johansson09}. As illustrative examples, we mention the cyber attacks on power transmission networks operated by SCADA systems reported in the public media~\cite{kn:WallStreet09}, and the Stuxnet malware that supposedly infected an industrial control system and disrupted its operation~\cite{kn:symantec2010_report,kn:Rid2011}.
There exists a vast literature on computer security focusing on three main properties of data and IT services, namely confidentiality, integrity, and availability~\cite{kn:Bishop2002}. Confidentiality relates to the non-disclosure of data by unauthorized parties. Integrity on the other hand concerns the trustworthiness of data, meaning there is no unauthorized change of the data contents or properties, while availability means that timely access to the data or system functionalities is ensured. Unlike other IT systems where cyber-security mainly involves the protection of data, cyber attacks on networked control systems may influence physical processes through feedback actuation. Therefore networked control system security needs to consider threats at both the cyber and physical layers. Furthermore, it is of the utmost importance in the study of cyber attacks on control systems to capture the adversary's resources and knowledge. Cyber threats can be captured in the attack space illustrated in Figure~\ref{fig:attack_space}, which depicts several attack scenarios as points. For instance, the eavesdropping attack and the denial-of-service (DoS) attack are indicated in the figure.
\begin{figure}
\caption{The cyber-physical attack space.}
\label{fig:attack_space}
\end{figure}
We propose three dimensions for the attack space: the adversary's \emph{a priori} system model knowledge and his disclosure and disruption resources. The \emph{a priori} system knowledge can be used by the adversary to construct more complex attacks, possibly harder to detect and with more severe consequences. Similarly, the disclosure resources enable the adversary to obtain sensitive information about the system during the attack by violating data confidentiality. Note that disclosure resources alone cannot disrupt the system operation. An example of an attack using only disclosure resources is the eavesdropping attack illustrated in Figure~\ref{fig:attack_space}. On the other hand, disruption resources can be used to affect the system operation, which happens for instance when data integrity or availability properties are violated. One such example is the DoS attack, where the data required for correctly operating the system are made unavailable. In particular this characterization fits the Stuxnet malware, which had resources to record and manipulate data in the SCADA network~\cite{kn:symantec2010_report}. Moreover, the complexity and operation of Stuxnet also indicate that its developers had access to a reasonable amount of knowledge of both physical and cyber components of the target control system.
\subsection{Related Work}
Control theory has contributed with frameworks to handle model uncertainties and disturbances as well as fault diagnosis and mitigation, see, for example, \cite{kn:Zhou1996} and \cite{Cheng_Patton_1999,Hwang2010}, respectively. These tools can be used to detect and attenuate the consequences of cyber attacks on networked control systems, as has recently been done in the literature.
Cyber attacks on control systems compromising measurement and actuator data integrity and availability have been considered in~\cite{kn:Cardenas08b}, where the authors modeled the attack effects on the physical dynamics. Several attack scenarios have been simulated and evaluated on the Tennessee-Eastman process control system~\cite{Cardenas2011} to study the attack impact and detectability. The attack scenarios in~\cite{Cardenas2011} are related to the ones considered in this paper, but we quantify the attack resources and policies in a systematic way.
Availability attacks have been analyzed in~\cite{AminCardenasSastry-HSCC-2009, kn:Gupta2010} for resource constrained adversaries with full-state information. Particularly, the authors considered DoS attacks in which the adversary could tamper with the communication channels and prevent measurement and actuator data from reaching their destination, rendering the data unavailable. A particular instance of the DoS attack in which the adversary does not have any \emph{a priori} system knowledge, as the attack in~\cite{AminCardenasSastry-HSCC-2009}, is represented in the attack space in Figure~\ref{fig:attack_space}.
Deception attacks compromising integrity have recently received attention. Replay attacks on the sensor measurements, which are a particular kind of deception attack, have been analyzed in~\cite{kn:Bruno09}. The authors considered the case where all the existing sensors were attacked and suitable counter-measures to detect the attack were proposed. In this attack scenario the adversary does not have any system knowledge but is able to access and corrupt the sensor data, thus having disclosure and disruption resources, as depicted in Figure~\ref{fig:attack_space}.
Another class of deception attacks, false-data injection attacks, has been studied in recent work. For instance, in the case of power networks, an adversary with perfect model knowledge has been considered in~\cite{kn:Liu09}. The work in~\cite{kn:Kosut10} considered stealthy attacks with limited resources and proposed improved detection methods, while~\cite{kn:Sandberg10} analyzed the minimum number of sensors required for stealthy attacks. A corresponding measurement security metric for studying sets of vulnerable sensors was proposed in~\cite{kn:Sandberg10}. The consequences of these attacks have also been analyzed in~\cite{kn:Xie10,kn:Teixeira2011,kn:Teixeira_ACC2012}. In particular, in \cite{kn:Teixeira2011} the authors analyzed attack policies with limited model knowledge and performed experiments on a power system control software, showing that such attacks are stealthy and can induce the erroneous belief that the system is at an unsafe state. The models used in the previous work are static, hence these attack scenarios are closest to the bias injection attack shown in Figure~\ref{fig:attack_space}.
Data injection attacks on dynamic control systems have also been considered. In~\cite{Smith-IFAC-2011} the author characterizes the set of attack policies for covert (undetectable) false-data injection attacks with detailed model knowledge and full access to all sensor and actuator channels, while~\cite{kn:Pasqualetti2011} described the set of undetectable false-data injection attacks for omniscient adversaries with full-state information, but possibly compromising only a subset of the existing sensors and actuators. In the context of multi-agent systems, optimal adversary policies for data injection using full model knowledge and state information were derived in~\cite{KhanaferTouriBasar2012}. In these attack scenarios confidentiality was violated, as the adversary had access to either measurement and actuator data or full-state information. These attacks are therefore placed close to the covert attack in Figure~\ref{fig:attack_space}.
Most of the recent work on cyber-security of control systems has considered scenarios where the adversary has access to a large set of resources and knowledge, thus being placed far from the origin of the attack space in Figure~\ref{fig:attack_space}. A large part of the attack space has not been addressed. In particular, the class of detectable attacks that do not trigger conventional alarms has yet to be covered in depth.
\subsection{Contributions and Outline} In this paper we consider a typical networked control architecture under both cyber and physical attacks. A generic adversary model applicable to several attack scenarios is discussed and the attack resources are mapped to the corresponding dimensions of the attack space. To illustrate the proposed framework, we consider several attack scenarios where the adversary's goal is to drive the system to an unsafe state while remaining stealthy. For each scenario we formulate the corresponding stealthy attack policy, comment on the attack's performance, and describe the adversary's capabilities along each dimension of the attack space in Figure~\ref{fig:attack_space}, namely the disclosure resources, disruption resources, and system knowledge. Some of the attack scenarios analyzed in the paper have been staged on a wireless quadruple tank testbed for security of control systems. The testbed architecture and results from the staged attacks are presented and discussed.
One of the attack scenarios analyzed corresponds to a novel type of detectable attack, the bias injection attack. Although this attack may be detected, it can drive the system to an unsafe region and it only requires limited model knowledge and no information about the system state. Stealthiness conditions for this attack are provided, as well as a methodology to assess the attack impact on the physical state of the system.
The material in this paper is an extension of the authors' preliminary work, see~\cite{kn:Teixeira_HICONS2012}. Particularly, in the current work the attack goals are formalized using the notion of safe regions of the state space and two additional attack scenarios are described and analyzed. Furthermore, the attack performance of each scenario is analyzed in more detail and additional results for the zero-dynamics and bias injection attacks are presented.
The outline of the paper is as follows. The system architecture and model are described in Section~\ref{sec:cps}, while Section~\ref{sec:attack_models} contains the adversary model and a detailed description of the attack resources on each dimension of the attack space. The framework introduced in the previous sections is then illustrated for five particular attack scenarios in Section~\ref{sec:attack_scenarios}, supposing that the adversary aims at driving the system to an unsafe state while remaining stealthy. The attack policy, attack performance, and required system knowledge, disclosure, and disruption resources are described in detail for each attack scenario. The results of the experiments for four of the attack scenarios in a secure control systems testbed are presented and discussed in Section~\ref{sec:experiments}, followed by conclusions in Section~\ref{sec:conc}.
\section{Networked Control System}\label{sec:cps}
In this section we describe the networked control system structure, where we consider three main components: the physical plant and communication network, the feedback controller, and the anomaly detector.
\subsection{Physical Plant and Communication Network} The physical plant is modeled in a discrete-time state-space form \begin{equation}\label{eq:plant_state space_faults} \mathcal{P}:\left\{\begin{aligned} x_{k+1}&=A x_k+B \tilde{u}_k + G w_k + F f_k\\ y_k&=C x_k + v_k \end{aligned}\right. , \end{equation}
where $x_k \in \mathbb{R}^{n}$ is the state variable, $\tilde{u}_k\in\mathbb{R}^{q}$ the control actions applied to the process, $y_k\in\mathbb{R}^{p}$ the measurements from the sensors at the sampling instant $k \in \mathbb{Z}$, and $f_k\in\mathbb{R}^d$ is the unknown signal representing the effects of anomalies, usually denoted as fault signal in the fault diagnosis literature~\cite{Ding2008}. The process and measurement noise, $w_k \in \mathbb{R}^n$ and $v_k \in \mathbb{R}^p$, represent the discrepancies between the model and the real process, due to unmodeled dynamics or disturbances, for instance, and we assume their means are respectively bounded by $\delta_w$ and $\delta_v$, i.e. $\bar{w} = \|\mathbb{E}\{w_k\}\| \leq \delta_w$ and $\bar{v} = \|\mathbb{E}\{v_k\}\|\leq \delta_v$.
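A one-step simulation of the plant model above can be sketched as follows; the matrices, dimensions, and signals below are illustrative placeholders, not the testbed values:

```python
import numpy as np

def plant_step(x, u_tilde, w, f, v, A, B, G, F, C):
    """One step of the plant model:
    x_{k+1} = A x_k + B u~_k + G w_k + F f_k,  y_k = C x_k + v_k."""
    x_next = A @ x + B @ u_tilde + G @ w + F @ f
    y = C @ x + v
    return x_next, y

# Toy two-state, single-input, single-output system.
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
G = np.eye(2)
F = np.array([[1.0], [0.0]])

# Noise-free, fault-free step from x_0 = (1, 0) with u~_0 = 0.5.
x0 = np.array([1.0, 0.0])
x1, y0 = plant_step(x0, np.array([0.5]), np.zeros(2), np.zeros(1),
                    np.zeros(1), A, B, G, F, C)
```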
The physical plant operation is supported by a communication network through which the sensor measurements and actuator data are transmitted, which at the plant side correspond to $y_k$ and $\tilde{u}_k$, respectively. At the controller side we denote the sensor and actuator data by $\tilde{y}_k\in\mathbb{R}^{p}$ and $u_k\in\mathbb{R}^{q}$, respectively. Since the communication network may be unreliable, the data exchanged between the plant and the controller may be altered, resulting in discrepancies in the data at the plant and controller ends. In this paper we do not consider the usual communication network effects such as packet losses and delays. Instead we focus on data corruption due to malicious cyber attacks, as described in Section~\ref{sec:attack_models}. Therefore the communication network \textit{per se} is supposed to be reliable, not affecting the data flowing through it.
Given the physical plant model~\eqref{eq:plant_state space_faults} and assuming an ideal communication network, the networked control system is said to have a \emph{nominal behavior} if $f_k = 0$, $\tilde{u}_k=u_k$, and $\tilde{y}_k=y_k$. The absence of any one of these conditions results in an abnormal behavior of the system.
\subsection{Feedback Controller} In order to comply with performance requirements in the presence of the unknown process and measurement noises, we consider that the physical plant is controlled by an appropriate linear time-invariant feedback controller~\cite{kn:Zhou1996}. The output feedback controller can be written in a state-space form as \begin{equation}\label{eq:controller_space_state} \mathcal{F}:\left\{\begin{aligned} z_{k+1} &= A_c z_k + B_c \tilde{y}_k \\ u_k &= C_c z_k + D_c \tilde{y}_k \end{aligned}\right. \end{equation} where the states of the controller, $z_k \in \mathbb{R}^m$, may include the process state and tracking error estimates. Given the plant and communication network models, the controller is supposed to be designed so that acceptable performance is achieved under nominal behavior.
\subsection{Anomaly Detector} In this section we consider the anomaly detector that monitors the system to detect possible anomalies, i.e. deviations from the nominal behavior. The anomaly detector is supposed to be collocated with the controller, therefore it only has access to $\tilde{y}_k$ and $u_k$ to evaluate the behavior of the plant.
Several approaches to detecting malfunctions in control systems are available in the fault diagnosis literature \cite{Ding2008,Hwang2010}. Here we consider the following observer-based Fault Detection Filter \begin{equation}\label{eq:residual_dynamics} \mathcal{D}:\left\{\begin{aligned}
\hat{x}_{k|k} & = A\hat{x}_{k-1|k-1} + Bu_{k-1} + K(\tilde{y}_k - \hat{y}_{k|k-1})\\
r_k & = V(\tilde{y}_k - \hat{y}_{k|k}) \end{aligned} \right. , \end{equation}
where $\hat{x}_{k|k}\in\mathbb{R}^{n}$ and $\hat{y}_{k|k}=C\hat{x}_{k|k}\in\mathbb{R}^{p}$ are the state and output estimates given measurements up until time $k$, respectively, and $r_k\in\mathbb{R}^{p_d}$ is the residue evaluated to detect and locate existing anomalies.
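One iteration of the fault detection filter can be sketched as follows; the gain, matrices, and data below are illustrative placeholders rather than a tuned design:

```python
import numpy as np

def detector_step(xhat, u_prev, y_tilde, A, B, C, K, V):
    """One step of the observer-based fault detection filter:
    predict with the model, correct with gain K, form residue r_k."""
    x_pred = A @ xhat + B @ u_prev            # \hat{x}_{k|k-1}
    y_pred = C @ x_pred                       # \hat{y}_{k|k-1}
    x_new = x_pred + K @ (y_tilde - y_pred)   # \hat{x}_{k|k}
    r = V @ (y_tilde - C @ x_new)             # residue r_k
    return x_new, r

# Toy two-state system with scalar measurement.
A = 0.5 * np.eye(2)
B = np.array([[1.0], [0.0]])
C = np.array([[1.0, 0.0]])
K = np.array([[0.5], [0.0]])
V = np.eye(1)

# From a zero estimate, a measurement y~ = 1 is half absorbed by
# the correction, leaving a residue of 0.5.
xhat1, r1 = detector_step(np.zeros(2), np.zeros(1), np.array([1.0]),
                          A, B, C, K, V)
```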
The anomaly detector is designed by choosing $K$ and $V$ such that \begin{enumerate}
\item under nominal behavior of the system (i.e., $f_k = 0$, $u_k=\tilde{u}_k$, $y_k=\tilde{y}_k$), the expected value of the residue converges asymptotically to a neighborhood of zero, i.e., $\lim_{k\rightarrow\infty} \|\mathbb{E}\{r_k\}\| \leq \delta_r$, with $\delta_r \in \mathbb{R}^+$;
\item the residue is sensitive to the anomalies ($f_k\not\equiv0$).
\end{enumerate}
An alarm is triggered if the residue meets \begin{equation}\label{eq:residue_threshold}
\| r_k \| \geq \delta_r + \delta_{\alpha}, \end{equation} where $\delta_{\alpha}\in \mathbb{R}^+$ is chosen so that the false alarm rate does not exceed a given threshold $\alpha\in[0,\, 1]$.
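The alarm rule amounts to thresholding the residue norm; a minimal sketch, with illustrative threshold values:

```python
import numpy as np

def alarm(r, delta_r, delta_alpha):
    """Trigger an alarm when ||r_k|| >= delta_r + delta_alpha."""
    return np.linalg.norm(r) >= delta_r + delta_alpha

# With delta_r = 0.2 and delta_alpha = 0.1, the alarm threshold is 0.3.
assert not alarm(np.array([0.1, 0.0]), 0.2, 0.1)   # ||r|| = 0.1
assert alarm(np.array([0.3, 0.4]), 0.2, 0.1)       # ||r|| = 0.5
```

In practice $\delta_{\alpha}$ trades off the false alarm rate against detection sensitivity, which is exactly the margin a stealthy adversary tries to stay within.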
\section{Adversary Models}\label{sec:attack_models} \begin{figure}
\caption{Adversary model for a point in the attack space in Figure~\ref{fig:attack_space}.}
\label{fig:attack_policy}
\end{figure}
The adversary model considered in this paper is illustrated in Figure~\ref{fig:attack_policy} and is composed of an attack policy and the adversary's resources, i.e., the system model knowledge, the disclosure resources, and the disruption resources. Each of these resources can be mapped to a specific axis of the attack space in Figure~\ref{fig:attack_space}: $\mathcal{K}=\{\hat{\mathcal{P}}, \, \hat{\mathcal{F}}, \,\hat{\mathcal{D}}\}$ is the \emph{a~priori} system knowledge possessed by the adversary; $\mathcal{I}_k$ corresponds to the set of sensor and actuator data available to the adversary at time $k$ as illustrated in~\eqref{eq:disclosure_attack}, thus being mapped to the disclosure resources; $a_k$ is the attack vector at time $k$ that may affect the system behavior using the disruption resources captured by $\mathbf{B}$, as defined in the current section. The attack policy mapping $\mathcal{K}$ and $\mathcal{I}_k$ to $a_k$ at time $k$ is denoted as \begin{equation}\label{eq:attack_policies} \begin{aligned} a_k &= g(\mathcal{K}, \mathcal{I}_k). \end{aligned} \end{equation} Examples of attack policies for different attack scenarios are given in Section~\ref{sec:attack_scenarios}.
In this section we describe the networked control system under attack with respect to the attack vector $a_k$. Then we detail the adversary's system knowledge, the disclosure resources, and the disruption resources. Models of the attack vector $a_k$ for particular disruption resources are also given.
\subsection{Networked Control System under Attack} The system components under attack are now characterized for the attack vector $a_k$, which also includes the fault signal $f_k$. Considering the plant and controller states to be augmented as $\eta_k = [x_k^\top \quad z_k^\top ]^\top$, the dynamics of the closed-loop system composed of $\mathcal{P}$ and $\mathcal{F}$ under the effect of $a_k$ can be written as \begin{equation}\label{eq:closed_loop_attacks} \begin{aligned} \eta_{k+1} & = \mathbf{A} \eta_k + \mathbf{B} a_k + \mathbf{G} \begin{bmatrix} w_{k} \\ v_{k} \end{bmatrix}\\ \tilde{y}_{k} & = \mathbf{C}\eta_{k} + \mathbf{D} a_{k}+ \mathbf{H} \begin{bmatrix}w_{k}\\ v_k\end{bmatrix}, \end{aligned} \end{equation} where the system matrices are \begin{equation*} \begin{array}{ll} \mathbf{A} = \begin{bmatrix} A + B D_c C & B C_c\\ B_c C & A_c \end{bmatrix}, & \mathbf{G} = \begin{bmatrix} G & B D_c \\ 0 & B_c \end{bmatrix},\\ &\\ \mathbf{C} = \begin{bmatrix} C & 0 \end{bmatrix}, & \mathbf{H} = \begin{bmatrix} 0 & I \\ \end{bmatrix}, \end{array} \end{equation*} and $\mathbf{B}$ and $\mathbf{D}$ capture the way in which the attack vector $a_k$ affects the plant and controller. These matrices are characterized for some attack scenarios in Section~\ref{sec:model_disruptive}.
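Assembling the closed-loop matrices from the plant and controller data is a direct block construction; the following sketch (with toy dimensions, all values illustrative) checks that the blocks fit together:

```python
import numpy as np

def closed_loop_matrices(A, B, C, G, Ac, Bc, Cc, Dc):
    """Assemble the noise-coupling part of the closed-loop model:
    bold-A, bold-G, bold-C, bold-H from plant and controller data."""
    n, m, p = A.shape[0], Ac.shape[0], C.shape[0]
    nw = G.shape[1]
    A_cl = np.block([[A + B @ Dc @ C, B @ Cc],
                     [Bc @ C,         Ac]])
    G_cl = np.block([[G,                 B @ Dc],
                     [np.zeros((m, nw)), Bc]])
    C_cl = np.hstack([C, np.zeros((p, m))])
    H_cl = np.hstack([np.zeros((p, nw)), np.eye(p)])
    return A_cl, G_cl, C_cl, H_cl

# Plant with n = 2 states, q = 1 input, p = 1 output; controller
# with m = 1 state. The augmented state has dimension n + m = 3.
A = np.eye(2); B = np.ones((2, 1)); C = np.array([[1.0, 0.0]])
G = np.eye(2)
Ac = np.array([[0.5]]); Bc = np.array([[1.0]])
Cc = np.array([[1.0]]); Dc = np.array([[0.0]])
A_cl, G_cl, C_cl, H_cl = closed_loop_matrices(A, B, C, G, Ac, Bc, Cc, Dc)
```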
Similarly, using $\mathcal{P}$ and $\mathcal{D}$ as in~\eqref{eq:plant_state space_faults} and~\eqref{eq:residual_dynamics}, respectively, the anomaly detector error dynamics under attack are described by \begin{equation}\label{eq:residual_dynamics_attack} \begin{aligned}
\xi_{k|k} & = \mathbf{A}_e\xi_{k-1|k-1} +\mathbf{B}_e a_{k-1} + \mathbf{G}_e \begin{bmatrix}w_{k-1}\\ v_k\end{bmatrix} \\
r_k & = \mathbf{C}_e \xi_{k-1|k-1} + \mathbf{D}_e a_{k-1}+ \mathbf{H}_e \begin{bmatrix}w_{k-1}\\ v_k\end{bmatrix}, \end{aligned} \end{equation}
where $\xi_{k|k}\in\mathbb{R}^{n}$ is the estimation error and \begin{equation*} \begin{array}{ll} \mathbf{A}_e = (I-KC)A,&\;
\mathbf{G}_e = \begin{bmatrix}(I-KC)G & -K \end{bmatrix},\\
\mathbf{C}_e = VC(I-KC)A,&\;
\mathbf{H}_e = \begin{bmatrix}VC(I-KC)G & V(I-CK) \end{bmatrix}. \end{array} \end{equation*} The matrices $\mathbf{B}_e$ and $\mathbf{D}_e$ are specific to the available disruptive resources and are characterized in Section~\ref{sec:model_disruptive}.
\subsection{System Knowledge} The amount of \emph{a priori} knowledge regarding the control system is a core component of the adversary model, as it may be used, for instance, to render the attack undetectable. In general, we may consider that the adversary approximately knows the model of the plant ($\hat{\mathcal{P}}$) and the algorithms used in the feedback controller ($\hat{\mathcal{F}}$) and the anomaly detector ($\hat{\mathcal{D}}$), thus denoting the adversary knowledge by $\mathcal{K}=\{\hat{\mathcal{P}}, \hat{\mathcal{F}},\hat{\mathcal{D}}\}$. Figure~\ref{fig:attack_space} illustrates several types of attack scenarios with different amounts of required system knowledge. In particular, note that the replay attacks do not need any knowledge of the system components, thus having $\mathcal{K}=\emptyset$, while the covert attack requires full knowledge about the system, hence $\mathcal{K}=\{\mathcal{P}, \mathcal{F},\mathcal{D}\}$.
\subsection{Disclosure Resources} The disclosure resources enable the adversary to gather sequences of data from the calculated control actions $u_k$ and the real measurements $y_k$ through disclosure attacks. Denote $\mathcal{R}^u\subseteq \{1,\dots,q\}$ and $\mathcal{R}^y\subseteq \{1,\dots,p\}$ as the disclosure resources, i.e., the sets of actuator and sensor channels that can be accessed during disclosure attacks, and let $\mathcal{I}_k$ be the control and measurement data sequence gathered by the adversary from time $k_0$ to $k$. The disclosure attacks can then be modeled as \begin{equation}\label{eq:disclosure_attack} \begin{aligned} \mathcal{I}_k & := \mathcal{I}_{k-1} \cup \left\{ \begin{bmatrix} \Upsilon^u & 0\\ 0 & \Upsilon^y \end{bmatrix} \begin{bmatrix} u_k\\ y_k \end{bmatrix} \right\} , \end{aligned} \end{equation}
where $\Upsilon^{u}\in\mathbb{B}^{ |\mathcal{R}^{u}|\times q}$ and $\Upsilon^{y}\in\mathbb{B}^{ |\mathcal{R}^{y}|\times p}$ are the binary incidence matrices mapping the data channels to the corresponding data gathered by the adversary and $\mathcal{I}_{k_0} = \emptyset$.
As seen in the above description of disclosure attacks, the physical dynamics of the system are not affected by this type of attack. Instead, these attacks gather intelligence that may enable more complex attacks, such as the replay attacks depicted in Figure~\ref{fig:attack_space}.
\subsection{Disruption Resources}\label{sec:model_disruptive}
As seen in the system dynamics under attack,~\eqref{eq:closed_loop_attacks} and~\eqref{eq:residual_dynamics_attack}, disruption resources are related to the attack vector $a_k$ and may be used to affect several components of the system. The way a particular attack disturbs the system operation depends not only on the respective resources, but also on the nature of the attack. For instance, a physical attack directly perturbs the system dynamics, whereas a cyber attack disturbs the system through the cyber-physical couplings.
To better illustrate this discussion we now consider physical and data deception attacks.
\subsubsection{Physical Resources} Physical attacks may occur in control systems, often in conjunction with cyber attacks. For instance, in~\cite{kn:Amin2010_water} water was pumped out of an irrigation system while the water level measurements were corrupted so that the attack remained stealthy. Since physical attacks are similar to the fault signals $f_k$ in~\eqref{eq:plant_state space_faults}, in the following sections we consider $f_k$ to be the physical attack modifying the plant dynamics as \begin{align*} x_{k+1}&=Ax_k+B \tilde{u}_k + G w_k + Ff_k\\ y_k &= Cx_k. \end{align*}
Considering $a_k = f_k$, the resulting system dynamics are described by~\eqref{eq:closed_loop_attacks} and~\eqref{eq:residual_dynamics_attack} with
\begin{equation*} \begin{array}{llll} \mathbf{B}= \begin{bmatrix} F\\ 0 \end{bmatrix},\; & \mathbf{D} = 0,\; & \mathbf{B}_e = (I-KC)F,\; & \mathbf{D}_e = VC(I-KC)F. \end{array} \end{equation*} Note that the disruption resources in this attack are captured in the matrix $F$.
\subsubsection{Data Deception Resources} The deception attacks modify the control actions $u_k$ and sensor measurements $y_k$ from their calculated or real values to the corrupted signals $\tilde{u}_k$ and $\tilde{y}_k$, respectively. Denoting $\mathcal{R}_I^u\subseteq \{1,\dots,q\}$ and $\mathcal{R}_I^y\subseteq \{1,\dots,p\}$ as the deception resources, i.e., the sets of actuator and sensor channels that can be affected, the deception attacks are modeled as
\begin{equation}\label{eq:deception_attack}
\tilde{u}_k := u_k + \Gamma^u b^u_k,\quad
\tilde{y}_k := y_k + \Gamma^y b^y_k, \end{equation}
where the signals $b^u_k \in \mathbb{R}^{|\mathcal{R}_I^{u}|}$ and $b^y_k \in \mathbb{R}^{|\mathcal{R}_I^{y}|}$ represent the data corruption and $\Gamma^{u}\in\mathbb{B}^{ q \times |\mathcal{R}_I^{u}|}$ and $\Gamma^{y}\in\mathbb{B}^{ p \times |\mathcal{R}_I^{y}|}$ ($\mathbb{B} := \left \{0, \, 1 \right \}$) are the binary incidence matrices mapping the data corruption to the respective data channels. The matrices $\Gamma^{u}$ and $\Gamma^{y}$ indicate which data channels can be accessed by the adversary and are therefore directly related to the adversary resources in deception attacks.
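The incidence-matrix mapping above can be sketched in a few lines of Python. All numbers below are hypothetical: with deception resources $\mathcal{R}_I^y = \{2\}$ on a plant with $p = 3$ sensor channels, $\Gamma^y$ routes the scalar corruption $b^y_k$ into the second entry of $y_k$.

```python
# Minimal sketch of the deception attack model (hypothetical dimensions and
# values): y_tilde = y + Gamma^y b^y, with Gamma^y a binary incidence matrix.

def incidence(resources, n_channels):
    """p x |R_I| binary matrix: column j has a 1 in row resources[j]-1."""
    return [[1 if i + 1 == r else 0 for r in resources] for i in range(n_channels)]

def corrupt(y, Gamma, b):
    """Compute y_tilde = y + Gamma b with plain lists."""
    return [yi + sum(g * bj for g, bj in zip(row, b)) for yi, row in zip(y, Gamma)]

Gamma_y = incidence([2], 3)       # adversary controls sensor channel 2 only
y_k = [1.0, 2.0, 3.0]             # true measurements at time k
b_y = [-0.5]                      # data corruption chosen by the adversary

y_tilde = corrupt(y_k, Gamma_y, b_y)   # only the second entry is altered
```

The same construction, transposed, yields the matrices $\Upsilon^u$, $\Upsilon^y$ used for disclosure attacks.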
Defining $a_k = [b^{u\top}_{k} \quad b^{y\top}_{k+1}\quad b^{y\top}_{k} ]^\top$, the system dynamics are given by~\eqref{eq:closed_loop_attacks} and~\eqref{eq:residual_dynamics_attack} with
\begin{equation*} \begin{array}{l} \mathbf{B}= \begin{bmatrix} B\Gamma^u & 0 & B D_c\Gamma^y \\ 0 & 0 & B_c\Gamma^y \end{bmatrix}, \quad \mathbf{D} = \begin{bmatrix} 0 & 0 & \Gamma^y \\ \end{bmatrix}, \quad \mathbf{B}_e = \begin{bmatrix} (I-KC)B\Gamma^u & -K\Gamma^y & 0 \end{bmatrix},\quad \mathbf{D}_e = \begin{bmatrix} VC(I-KC)B\Gamma^u & V(I-CK)\Gamma^y & 0 \end{bmatrix}. \end{array} \end{equation*}
Note that deception attacks such as the bias injection attack do not possess any disclosure capabilities, as depicted in Figure~\ref{fig:attack_space}.
\section{Attack Scenarios}\label{sec:attack_scenarios}
In this section we discuss the general goal of an adversary and likely choices of the attack policy $g(\cdot,\cdot)$. In particular, using the framework introduced in the previous sections, we consider several attack scenarios where the adversary's goal is to drive the system to an unsafe state while remaining stealthy. For each scenario we formulate the corresponding stealthy attack policy, comment on the attack's performance, and describe the adversary's capabilities along each dimension of the attack space in Figure~\ref{fig:attack_space}, namely the disclosure resources, disruption resources, and system knowledge. A set of these scenarios is illustrated by experiments on a process control testbed in Section~\ref{sec:experiments}.
\subsection{Attack Goals and Constraints} In addition to the attack resources, the attack scenarios must also include the intent of the adversary, namely the attack goals and constraints shaping the attack policy. The attack goals can be stated in terms of the attack impact on the system operation, while the constraints may be related to the attack detectability.
Several physical systems have tight operating constraints which, if violated, may result in physical damage to the system. In this work we use the concept of safe regions to characterize the safety constraints.
\begin{definition}\label{def:safe_set}
At a given time instant $k$, the system is said to be safe if $x_{k}\in \mathcal{S}_x$, where $\mathcal{S}_x$ is a compact set with non-empty interior. \end{definition}
\begin{assumption} The system is in a safe state at the beginning of the attack, i.e. $x_{k_0}\in \mathcal{S}_x$. \end{assumption}
The physical impact of an attack can be evaluated by assessing whether or not the state of the system remained in the safe set during and after the attack. The attack is considered successful if the state is driven out of the safe set.
Regarding the attack constraints, we consider that attacks are constrained to remain stealthy. Furthermore, we consider that the disruptive attack component consists of only physical and data deception attacks, and thus we have the attack vector $a_k = [f_k^\top \quad b^{u\top}_{k} \quad b^{y\top}_{k+1}\quad b^{y\top}_{k} ]^\top$. Given the anomaly detector described in Section~\ref{sec:cps} and denoting $\mathcal{A}_{k_0}^{k_f}=\{a_{k_0},\, \dots,\, a_{k_f}\}$ as the attack signal, the set of stealthy attacks is defined as follows. \begin{definition}\label{def:stealthy}
The attack signal $\mathcal{A}_{k_0}^{k_f}$ is stealthy if $\|r_k\| < \delta_r + \delta_\alpha,\; \forall k\geq k_0$. \end{definition} Note that the above definition is dependent on the initial state of the system at $k_0$, as well as the noise terms $w_k$ and $v_k$.
Since the closed-loop system~\eqref{eq:closed_loop_attacks} and the anomaly detector~\eqref{eq:residual_dynamics_attack} under linear attack policies are linear systems, each of these systems can be separated into two components, the nominal component with $a_k=0 \; \forall k$ and the following systems \begin{equation}\label{eq:closed_loop_attacks_linear} \begin{aligned} \eta_{k+1}^a & = \mathbf{A} \eta_k^a + \mathbf{B} a_k\\ \tilde{y}^a_k &= \mathbf{C}\eta_{k}^a + \mathbf{D} a_k \end{aligned} \end{equation} and \begin{equation}\label{eq:residual_dynamics_attack_linear} \begin{aligned}
\xi_{k|k}^a & = \mathbf{A}_e\xi_{k-1|k-1}^a +\mathbf{B}_e a_{k-1} \\
r^a_k & = \mathbf{C}_e \xi_{k-1|k-1}^a + \mathbf{D}_e a_{k-1}, \end{aligned} \end{equation}
with $\eta_0^a = \xi_{0|0}^a=0$.
Assuming the system behaves nominally before the attack, the triangle inequality and the linearity of~\eqref{eq:residual_dynamics_attack} yield $\|r^a_k\|\leq \delta_\alpha \Rightarrow \|r_k\|\leq\delta_r + \delta_\alpha$, leading to the following definition: \begin{definition}\label{def:alpha_stealthy}
The attack signal $\mathcal{A}_{k_0}^{k_f}$ is $\alpha-$stealthy with respect to $\mathcal{D}$ if $\|r^a_k\| < \delta_\alpha, \; \forall k\geq k_0$. \end{definition} Albeit more conservative than Definition~\ref{def:stealthy}, this definition only depends on the attack signals $\mathcal{A}_{k_0}^{k_f}$. Similarly, the impact of attacks on the closed-loop system can also be analyzed by looking at the linear system~\eqref{eq:closed_loop_attacks_linear}, as illustrated in Section~\ref{sec:attack_bias} for the bias injection attack.
\subsection{Denial-of-Service Attack} The DoS attacks prevent the actuator and sensor data from reaching their respective destinations and should therefore be modeled as the absence of data, for instance $u_k = \emptyset$ if all the actuator data is unavailable. However, such a model would not fit the framework in~\eqref{eq:closed_loop_attacks} and~\eqref{eq:residual_dynamics_attack}, where $a_k$ is assumed to be a real-valued vector. Hence we consider instead one of the typical mechanisms used by digital controllers to deal with the absence of data~\cite{kn:SchenatoTAC2009}, in which the absent data is replaced with the last received data, $u_{\tau_u}$ and $y_{\tau_y}$ respectively. Denoting $\mathcal{R}_A^u\subseteq \{1,\dots,q\}$ and $\mathcal{R}_A^y\subseteq \{1,\dots,p\}$ as the sets of actuator and sensor channels that can be made unavailable, we can model DoS attacks as deception attacks in~\eqref{eq:deception_attack} with \begin{equation}\label{eq:DoS_attack} \begin{aligned} b^u_k & := -S^u_k \Gamma^{u\top} (u_k - u_{\tau_u})\\ b^y_k & := -S^y_k \Gamma^{y\top} (y_k - y_{\tau_y}) \end{aligned} \end{equation}
where $S^u_k\in\mathbb{B}^{|\mathcal{R}_A^{u}| \times |\mathcal{R}_A^{u}|}$ and $S^y_k\in\mathbb{B}^{|\mathcal{R}_A^{y}|\times |\mathcal{R}_A^{y}|}$ are boolean diagonal matrices where the $i-$th diagonal entry indicates whether a DoS attack is performed ($[S^{(\cdot)}_k]_{ii}=1$) or not ($[S^{(\cdot)}_k]_{ii}=0$) on the corresponding channel. Therefore DoS attacks on the data are a type of disruptive attacks, as depicted in Figure~\ref{fig:attack_space}.
\textbf{Attack policy: } The attack scenario analyzed in this paper considers a Bernoulli adversary~\cite{AminCardenasSastry-HSCC-2009} on the sensor channels following the random policy \begin{equation*} \begin{aligned}
\mathbb{P}([S^{y}_k]_{ii} = 1) &= 0,\; \forall i=1,\dots, |\mathcal{R}_A^{y}|,\quad k < k_0 \\
\mathbb{P}([S^{y}_k]_{ii} = 1) &= p,\; \forall i=1,\dots, |\mathcal{R}_A^{y}|,\quad k\geq k_0 \end{aligned} \end{equation*} where $p$ is the probability of blocking a data packet at any given time.
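The effect of this policy together with the hold scheme~\eqref{eq:DoS_attack} is easy to simulate. The Python sketch below uses an assumed scalar plant and controller gain (all values hypothetical, no noise): blocked sensor packets are replaced by the last received measurement, and the stable closed loop still drives the state toward zero despite the stale data.

```python
# Toy simulation of a Bernoulli DoS attack with a hold scheme: the controller
# reuses the last received measurement whenever the sensor packet is blocked.
# Plant: x_{k+1} = 0.9 x_k + u_k, control: u_k = -0.5 * (held measurement).
import random

random.seed(0)
a_plant, b_plant, gain = 0.9, 1.0, 0.5   # assumed scalar system and gain
p_dos = 0.5                              # Bernoulli blocking probability

x, y_held = 1.0, 1.0
trajectory = []
for k in range(200):
    if random.random() >= p_dos:         # packet delivered with prob. 1 - p_dos
        y_held = x                       # controller sees a fresh measurement
    u = -gain * y_held                   # control computed from possibly stale data
    x = a_plant * x + b_plant * u
    trajectory.append(x)

# With a stable open loop the closed loop remains stable under the Bernoulli
# DoS attack, so the state stays bounded and decays despite the packet losses.
```

This matches the qualitative message of the stability result recalled next: stability of the open loop is what protects the system from this attack.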
\textbf{Attack performance: } Although the absence of data packets is not stealthy since it is trivially detectable, DoS attacks may be misdiagnosed as a poor network condition. As for the impact on the closed-loop system, the results available for Bernoulli packet losses readily apply to the current attack scenario~\cite{kn:Zhang2001_NCS,kn:Schenato07foundationsNCS,kn:SchenatoTAC2009}. In particular, we recall a result for the case where a hold scheme~\eqref{eq:DoS_attack} is used in the absence of data.
\begin{proposition}[Theorem 8 in~\cite{kn:Zhang2001_NCS}]\label{thm:DoS_stability} Assume the closed-loop system with no DoS attack is stable. Then the closed-loop system with Bernoulli DoS attacks is exponentially stable for $p\in[0,\; 1)$ if the open-loop system \[ \eta_{k+1}=\begin{bmatrix} A & BC_c\\ 0 & A_c \end{bmatrix}\eta_{k} \] is marginally stable. \end{proposition}
\textbf{Disclosure resources: } Although the proposed model of DoS attacks in \eqref{eq:DoS_attack} contains the control and output signals, note that no disclosure resources are needed in the actual implementation of the attack. Thus we have $\mathcal{R}^{u}=\mathcal{R}^{y}=\emptyset$.
\textbf{Disruption resources: } The disruption capabilities correspond to the data channels that the adversary is able to make unavailable, $\mathcal{R}_A^{u}$ and $\mathcal{R}_A^{y}$.
\textbf{System knowledge: } For the Bernoulli attack policy, no \emph{a priori} knowledge of the system model is needed.
\subsection{Replay Attack}\label{sec:replay_attack} In replay attacks the adversary first performs a disclosure attack from $k=k_0$ until $k_r$, gathering sequences of data $\mathcal{I}_{k_r}$, and then begins replaying the recorded data at time $k=k_r+1$ until the end of the attack at $k=k_f>k_r$, as illustrated in Figure~\ref{fig:replay_attack}. In the scenario considered here the adversary is also able to perform a physical attack while replaying the recorded data, which covers the experiment on a water management SCADA system reported in~\cite{kn:Amin2010_water} and one of Stuxnet's modes of operation~\cite{kn:symantec2010_report}. \begin{figure}
\caption{Schematic of the replay attack.}
\label{fig:replay_attack1}
\label{fig:replay_attack2}
\label{fig:replay_attack}
\end{figure}
\textbf{Attack policy: } Similar to the work in~\cite{kn:Bruno09}, assuming $\mathcal{R}^{(\cdot)} = \mathcal{R}_I^{(\cdot)}$, i.e., the adversary can corrupt the digital channels from which the data sequences are gathered, the replay attack policy can be described as \begin{equation}\label{eq:replay_policy0} \mbox{Phase I: }\; \left\{\begin{aligned} a_k &= 0\\ \mathcal{I}_{k} &= \mathcal{I}_{k-1} \cup \left\{ \begin{bmatrix} \Upsilon^u & 0 \\ 0 & \Upsilon^y \end{bmatrix} \begin{bmatrix} u_{k} \\ y_{k} \end{bmatrix} \right\}, \end{aligned}\right. \end{equation} with $k_0 \leq k\leq k_r$ and $\mathcal{I}_{k_0}=\emptyset$ and \begin{equation}\label{eq:replay_policy} \mbox{Phase II: }\; \left\{\begin{aligned} a_k &= \begin{bmatrix} g_f(\mathcal{K}, \mathcal{I}_{k_r}) \\ \Upsilon^u (u_{k - T} - u_k)\\ \Upsilon^y (y_{k+1 - T}-y_{k+1}) \\ \Upsilon^y (y_{k - T}-y_k) \end{bmatrix}\\ \mathcal{I}_{k} &= \mathcal{I}_{k-1}, \end{aligned}\right. \end{equation} where $T=k_r+1-k_0$ is the length of the recorded data sequence and $ k_r+1 \leq k\leq k_f$. An interesting instance of this attack scenario consists of applying a pre-defined physical attack to the plant, while using replay attacks to render the attack stealthy. In this case the physical attack signal $f_k$ corresponds to an open-loop signal, $f_k = g_f(k)$.
\textbf{Attack performance: } The work in~\cite{kn:Bruno09} provided conditions under which replay attacks with access to all measurement data channels are stealthy. However, these attacks are not guaranteed to be stealthy when only a subset of the data channels is attacked. In this case, the stealthiness constraint may require additional knowledge of the system model. For instance, the experiment presented in Section~\ref{sec:experiments} requires knowledge of the physical system structure, so that $f_k$ only excites the attacked measurements. Hence $f_k$ can be seen as a zero-dynamics attack with respect to the healthy measurements, which is characterized in the section below. Since the impact of the replay attack is dependent only on $f_k$, we refer the reader to Section~\ref{sec:attack_zero} for a characterization of the replay attack's impact.
\textbf{Disclosure resources: } The disclosure capabilities required to stage this attack correspond to the data channels that can be eavesdropped by the attacks, namely $\mathcal{R}^{u}$ and $\mathcal{R}^{y}$.
\textbf{Disruption resources: } In this case the deception capabilities correspond to the data channels that the adversary can tamper with, $\mathcal{R}_I^{u}$ and $\mathcal{R}_I^{y}$. In particular, for replay attacks the adversary can only tamper with the data channels from which data has been previously recorded, i.e. $\mathcal{R}_I^{u} \subseteq \mathcal{R}^{u}$ and $\mathcal{R}_I^{y}\subseteq \mathcal{R}^{y}$.
Direct disruption of the physical system through the signal $f_k$ depends on having direct access to the physical system, modeled by the matrix $F$ in~\eqref{eq:plant_state space_faults}.
\textbf{System knowledge: } Note that no \emph{a priori} knowledge $\mathcal{K}$ on the system model is needed for the cyber component of the attack, namely the data disclosure and deception attack, as seen in the attack policy~\eqref{eq:replay_policy0} and~\eqref{eq:replay_policy}. As for the physical attack, $f_k$, the required knowledge is scenario dependent. In the scenario considered in the experiments described in Section~\ref{sec:experiments}, this component was modeled as an open-loop signal, $f_k = g_f(k)$.
\subsection{Zero-Dynamics Attack}\label{sec:attack_zero} Recalling that for linear attack policies the plant and the anomaly detector are linear systems, \eqref{eq:closed_loop_attacks_linear} and~\eqref{eq:residual_dynamics_attack_linear} respectively, Definition~\ref{def:alpha_stealthy} states that such attacks are $0-$stealthy if $r^a_k=0,\, k=k_0,\dots,k_f$. The idea of $0-$stealthy attacks then consists of designing an attack policy and attack signal $\mathcal{A}_{k_0}^{k_f}$ so that the residue $r_k$ does not change due to the attack.
A particular subset of $0-$stealthy attacks are characterized in the following lemma: \begin{lemma}\label{lem:output_zeroing_attack} The attack signal $\mathcal{A}_{k_0}^{k_f}$ is $0-$stealthy with respect to any $\mathcal{D}$ if $\tilde{y}^a_k = 0, \, \forall k\geq k_0$. \end{lemma} \begin{proof}
Consider the attacked components of the controller and the anomaly detector in~\eqref{eq:closed_loop_attacks_linear} and~\eqref{eq:residual_dynamics_attack_linear} with $\hat{x}^a_0=\xi_{0|0}^a=0$. From the controller dynamics it directly follows that $\tilde{y}^a_k = 0, \, \forall k\geq k_0$ results in $u_k^a = 0, \, \forall k\geq k_0$, as the input to the controller ($\tilde{y}^a_k$) is zero. Since $\hat{x}^a_0=0$ and $\tilde{y}^a_k = u_k^a = 0, \, \forall k\geq k_0$, meaning that the detector's inputs are zero, we then conclude $r^a_k = 0, \, \forall k\geq k_0$. \end{proof}
Both the definition of $0-$stealthy attacks and Lemma~\ref{lem:output_zeroing_attack} indicate that these attacks are decoupled from the outputs of linear systems, $r_k$ and $y_k$ respectively. Hence finding $0-$stealthy attack signals relates to the output zeroing problem or zero-dynamics studied in the control theory literature~\cite{kn:Zhou1996}. Note that such an attack requires perfect knowledge of the plant dynamics $\mathcal{P}$ and the attack signal is then based on the open-loop prediction of the output changes due to the attack. This is illustrated in Figure~\ref{fig:zero_attack}, where $\mathcal{K}_z$ denotes the zero-dynamics and there is no disclosure of sensor or actuator data.
\begin{figure}
\caption{Schematic of the zero-dynamics attack.}
\label{fig:zero_attack}
\end{figure}
\textbf{Attack policy: } The attack policy then corresponds to the input sequence ($a_k$) that makes the outputs of the process ($\tilde{y}^a_k$) identically zero for all $k$ and is illustrated in Figure~\ref{fig:zero_attack}. It can be shown~\cite{kn:Zhou1996} that the solution to this problem is given by the sequence \begin{equation} \label{zero_sequence} a_k = g \nu^k, \end{equation} parameterized by the input-zero direction $g$ and the system zero $\nu$.
For the sake of simplicity we consider a particular instance of this attack, where only the actuator data is corrupted. In this case the zero attack policy corresponds to the transmission zero-dynamics of the plant. The plant dynamics due to an attack on the actuator data are described by \begin{equation}\label{eq:transmission_system} \begin{aligned} x_{k+1}^a &= A x_k^a + B a_k\\ \tilde{y}^a_k &= C x_k^a \end{aligned} \end{equation} with $a_k = b^u_k$. Given the discrete-time system~\eqref{eq:transmission_system} with $B$ having full column rank, the transmission zeros can be calculated as the values $\nu \in \mathbb{C}$ that cause the matrix $P(\nu)$ to lose rank, where \begin{equation*} P(\nu)=\begin{bmatrix} \nu I - A & -B \\ C & 0 \end{bmatrix}. \end{equation*}
These values are called minimum-phase or non-minimum-phase zeros depending on whether they are stable or unstable, respectively. In discrete-time systems a zero is stable if $|\nu| < 1$ and unstable otherwise.
The input zero direction can be obtained by solving the following equation \begin{equation}\label{eq:zero_dynamics} \begin{bmatrix} \nu I-A & -B \\ C & 0 \end{bmatrix} \begin{bmatrix} x_0 \\ g \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}, \end{equation} where $x_0$ is the initial state of the system for which the input sequence~\eqref{zero_sequence} results in an identically zero output, $\tilde{y}^a_k=0\,\forall k$.
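For a concrete feel, the rank loss of $P(\nu)$ can be checked directly on a toy system (all values assumed for illustration): for $A=\begin{bmatrix}1&0\\0&3\end{bmatrix}$, $B=\begin{bmatrix}1\\-1\end{bmatrix}$, $C=\begin{bmatrix}1&-1\end{bmatrix}$, the determinant of $P(\nu)$ expands to $2\nu-4$, so $\nu=2$ is an unstable transmission zero, and~\eqref{eq:zero_dynamics} is solved by $x_0=[1\;\,1]^\top$, $g=1$.

```python
# Toy check (hypothetical system) that the pencil P(nu) drops rank at a
# transmission zero: A = [[1,0],[0,3]], B = [1,-1]^T, C = [1,-1].

def det3(m):
    # cofactor expansion of a 3x3 determinant
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def P(nu):
    # P(nu) = [[nu*I - A, -B], [C, 0]]
    return [[nu - 1.0, 0.0, -1.0],
            [0.0, nu - 3.0, 1.0],
            [1.0, -1.0, 0.0]]

zero_at_2 = det3(P(2.0))   # 2*2 - 4 = 0: nu = 2 is a transmission zero
elsewhere = det3(P(0.5))   # 2*0.5 - 4 = -3: full rank away from the zero
```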
\begin{lemma}\label{lem:zero_dynamics_invariance}
Let $x_0$ be the initial state of the system, where $x_0$ satisfies~\eqref{eq:zero_dynamics}. The state trajectories generated by the zero-dynamics attack are contained in $\mbox{span}(x_0)$ i.e., $x_{k}^a\in \mbox{span}(x_0)\; \forall k\geq0$. \end{lemma} \begin{proof} Consider the zero-dynamics attack parameterized by $x_0$ and $g$ and denote $L$ as a map for which $L x_0 = g$. Then from~\eqref{eq:zero_dynamics} we have $\left(\nu I -(A+BL)\right) x_0 = 0$ and conclude that $x_0$ is an eigenvector of $A+BL$ associated with its eigenvalue $\nu$. Now consider the state evolution under attack, $x^a_{k+1} = Ax^a_k + Bg$ with $x^a_0=x_0$. The proof is completed by noting that $x^a_1 = Ax_0 + Bg = (A+BL)x_0 = \nu x_0$ and applying an induction argument. \end{proof}
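Lemma~\ref{lem:zero_dynamics_invariance} is easy to verify numerically. The Python sketch below uses an assumed toy system ($A=\mathrm{diag}(1,3)$, $B=[1\;\,{-1}]^\top$, $C=[1\;\,{-1}]$, zero $\nu=2$, direction $g=1$, $x_0=[1\;\,1]^\top$) and simulates $x^a_{k+1}=Ax^a_k+Ba_k$ under $a_k=g\nu^k$: the output stays identically zero while the state grows as $\nu^{k}x_0$, i.e., the trajectory remains in $\mbox{span}(x_0)$.

```python
# Numerical check of the zero-dynamics attack a_k = g * nu**k on a toy system
# (all values assumed): (nu*I - A) x0 = B g and C x0 = 0 hold by construction.

A = [[1.0, 0.0], [0.0, 3.0]]
B = [1.0, -1.0]
C = [1.0, -1.0]
nu, g = 2.0, 1.0
x = [1.0, 1.0]                 # x0 solving eq:zero_dynamics

outputs, states = [], []
for k in range(6):
    outputs.append(C[0] * x[0] + C[1] * x[1])   # y_k^a = C x_k^a
    a_k = g * nu ** k                           # the attack signal
    x = [A[0][0] * x[0] + A[0][1] * x[1] + B[0] * a_k,
         A[1][0] * x[0] + A[1][1] * x[1] + B[1] * a_k]
    states.append(list(x))

# outputs stays identically zero while states[k] = nu**(k+1) * x0: the
# trajectory grows geometrically along span(x0), unseen at the output.
```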
\textbf{Attack performance: } Note that the zero-dynamics attack is $0-$stealthy only if $x^a_0 = x_0$. However the initial state of the system under attack $x^a_0$ is defined to be zero at the beginning of the attack. Therefore stealthiness of the attack may be violated for large differences between $x^a_0=0$ and $x_0$. We refer the reader to~\cite{Teixeira_Allerton2012} for a detailed analysis of the effects of zero initial conditions on zero-dynamics attacks.
If the zero is stable, that is $| \nu | < 1$, the attack asymptotically decays to zero, thus having little effect on the plant. In the case of unstable zeros, however, the attack grows geometrically, which could cause great damage to the process. This statement is captured in the following result.
\begin{theorem}\label{thm:zero_unsafe}
A zero-dynamics attack with $| \nu | > 1$ leads the system to an unsafe state if and only if $\mbox{span}(x_0)$ is not contained in $\mathcal{S}_x$. \end{theorem} \begin{proof}
Follows directly from Lemma~\ref{lem:zero_dynamics_invariance} and from the fact that the zero-attack with $|\nu| >1$ generates an unstable state trajectory moving away from the origin along $\mbox{span}(x_0)$. \end{proof}
\textbf{Disclosure resources: } This attack scenario considers an open-loop attack policy and so no disclosure capabilities are required, resulting in $\mathcal{R}^{u}=\mathcal{R}^{y}=\emptyset$ and $\mathcal{I}^u_{k}=\mathcal{I}^y_{k}=\emptyset \,\forall k$.
\textbf{Disruption resources: } The disruption capabilities in this attack scenario correspond to the ability of performing deception attacks on the actuator data channels. Therefore the required resources are $\mathcal{R}_I^{u}=\{1,\dots,q\}$, $\mathcal{R}_I^{y}=\emptyset$, and $F=0$.
\textbf{System knowledge: } The ability to compute the open-loop attack policy requires perfect knowledge of the zero-dynamics, which we denote as $\mathcal{K}_z$. Note that computing the zero-dynamics requires perfect knowledge of the plant dynamics, namely $A$, $B$, and $C$. No knowledge of the feedback controller or anomaly detector is assumed in this scenario.
\subsection{Local Zero-Dynamics Attack} In the previous scenario the zero-dynamics attack was characterized in terms of the entire system. Here we further restrict the adversary resources by considering that the adversary has disruption resources and knows the model of only a subset of the system. In particular, we rewrite the plant dynamics~\eqref{eq:transmission_system} as \begin{equation}\label{eq:transmission_system_local} \begin{aligned} \begin{bmatrix} x_{k+1}^{1} \\ x_{k+1}^{2} \end{bmatrix} &= \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix} \begin{bmatrix} x_{k}^{1} \\ x_{k}^{2} \end{bmatrix} + \begin{bmatrix} B_{1}\\ 0 \end{bmatrix} a_k\\ \tilde{y}^a_k &= \begin{bmatrix} C_{1} & C_{2} \end{bmatrix} \begin{bmatrix} x_{k}^{1} \\ x_{k}^{2} \end{bmatrix} \end{aligned} \end{equation} and assume the adversary has access to only $A_{11}$, $A_{21}$, $B_1$, and $C_1$. From the adversary's view, this local system is characterized by \begin{equation*} \begin{aligned} x_{k+1}^1 &= A_{11} x_k^1 + B_1 a_k + A_{12}x_k^2\\ y^l_k &= \begin{bmatrix} C_1 \\ A_{21} \end{bmatrix}x_k^1, \end{aligned} \end{equation*} where $y^l_k$ encodes the measurements depending on the local state, $C_1x^1_k$, and the interaction between the local subsystem and the remaining subsystems, $A_{21}x^1_k$.
\textbf{Attack policy: } Similar to the zero-dynamics attack, the attack policy is given by the sequence \begin{equation*} a_k = g \nu^k, \end{equation*} where $g$ is the input zero direction for the chosen zero $\nu$. The input zero direction can be obtained by solving the following equation \begin{equation*} \begin{bmatrix} \nu I-A_{11} & -B_1 \\ C_1 & 0 \\ A_{21} & 0\end{bmatrix} \begin{bmatrix} x^1_0 \\ g^1 \end{bmatrix} = \begin{bmatrix} 0 \\ 0\\ 0 \end{bmatrix}. \end{equation*}
Note that the zero-dynamics parameterized by $g^1$ and $\nu$ correspond to local zero-dynamics of the global system.
\textbf{Attack performance: }
A similar discussion as for the global zero-dynamics attack applies to this scenario. In particular, the stealthiness of the local zero-dynamics attack may be violated for large differences between $x^1_0$ and $0$. Additionally, as stated in Theorem~\ref{thm:zero_unsafe}, attacks associated with unstable zeros yielding $|\nu|>1$ are more dangerous and may lead the system to an unsafe state.
\textbf{Disclosure resources: } This attack scenario considers an open-loop attack policy and so no disclosure capabilities are required, resulting in $\mathcal{R}^{u}=\mathcal{R}^{y}=\emptyset$ and $\mathcal{I}^u_{k}=\mathcal{I}^y_{k}=\emptyset \;\forall k$.
\textbf{Disruption resources: } The disruption capabilities in this attack scenario correspond to the ability of performing deception attacks on the actuator data channels of the local subsystem. Therefore the required resources are $\mathcal{R}_I^{u}=\{1,\dots,q_1\}$, $\mathcal{R}_I^{y}=\emptyset$, and $F=0$.
\textbf{System knowledge: } The open-loop attack policy requires the perfect knowledge of the local zero-dynamics, denoted as $\tilde{\mathcal{K}}_{z}$ and obtained from $A_{11}$, $B_1$, $C_1$, and $A_{21}$.
\subsection{Bias Injection Attack}\label{sec:attack_bias} Here a particular scenario of false-data injection is considered, where the adversary's goal is to inject a constant bias in the system without being detected. For this scenario, the class of $\alpha-$stealthy attacks is characterized at steady-state and a method to evaluate the corresponding impact is proposed. Furthermore, we derive the policy yielding the largest impact on the system.
\textbf{Attack policy: } The bias injection attack is illustrated in Figure~\ref{fig:bias_attack}. The attack policy is composed of a steady-state component, the desired bias denoted as $a_\infty$, and a transient component. For the transient, we consider that the adversary uses a linear low-pass filter so that the data corruptions are slowly converging to the steady-state values. As an example, for a set of identical first-order filters the open-loop attack sequence is described by \begin{equation} \label{eq:bias_sequence} a_{k+1} = \beta a_k + (1-\beta)a_\infty^*, \end{equation} where $a_0 = 0$ and $0<\beta<1$ can be chosen using the results from Theorem~\ref{thm:bias_transient}. The steady-state attack policy yielding the maximum impact on the physical system is described below, where the computation of $a_\infty$ is summarized in Theorem~\ref{thm:bias_attack_2} and Theorem~\ref{thm:bias_attack_infinity}.
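The transient policy~\eqref{eq:bias_sequence} is straightforward to simulate; the Python sketch below (hypothetical target bias and filter pole) shows the corruption ramping geometrically from $a_0=0$ toward the steady-state bias, which is what keeps the residual transient small.

```python
# Sketch of the first-order transient policy a_{k+1} = beta*a_k + (1-beta)*a_inf
# with assumed values: beta near 1 gives a slower, stealthier ramp.

beta = 0.9                 # filter pole, 0 < beta < 1
a_inf = [2.0, -1.0]        # desired steady-state bias (hypothetical)
a = [0.0, 0.0]             # a_0 = 0

history = [list(a)]
for k in range(100):
    a = [beta * ai + (1 - beta) * ti for ai, ti in zip(a, a_inf)]
    history.append(list(a))

# a_k converges geometrically: the remaining gap after k steps is beta**k * a_inf.
```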
\begin{figure}
\caption{Schematic of the bias injection attack.}
\label{fig:bias_attack}
\end{figure}
\textbf{Attack performance: } First the steady-state policy is considered. Denote $a_{\infty}$ as the bias to be injected and recall the anomaly detector dynamics under attack~\eqref{eq:residual_dynamics_attack}. The steady-state detectability of the attack is then dependent on the steady-state value of the residual \begin{equation*} r^{a}_\infty =\left ( \mathbf{C}_e (I - \mathbf{A}_e)^{-1} \mathbf{B}_e + \mathbf{D}_e \right ) a_\infty=: G_{ra} a_\infty. \end{equation*} The largest $\alpha-$stealthy attacks are then characterized by \begin{equation} \label{r gap}
\left \| G_{ra} a_\infty \right \|_2 = \delta_{\alpha}. \end{equation} Although attacks satisfying \eqref{r gap} could be detected during the transient, incipient attack signals slowly converging to $a_{\infty}$ may go undetected, as stated in Theorem~\ref{thm:bias_transient} and shown in the experiments in Section~\ref{sec:experiments}.
The impact of such attacks can be evaluated using the closed-loop dynamics under attack given by~\eqref{eq:closed_loop_attacks}. Recalling that $\eta^a_k = [x_k^{a^\top} \quad z_k^{a^\top} ]^\top$, the steady-state impact on the state is given by \begin{equation*} x^{a}_\infty = \left[I\quad 0\right] \left (I - \mathbf{A} \right )^{-1} \mathbf{B} a_\infty =: G_{xa} a_\infty. \end{equation*}
Consider the following safe set defined in terms of $x^a_k$. \begin{definition}\label{def:safe_sets_2}
The $2-$norm safe set $\mathcal{S}_{x^a}^2$ is defined as
\[\mathcal{S}_{x^a}^2 = \left\{x\in\mathbb{R}^n :\, \|x\|_2^2 \leq 1 \right\},\]
and the system is said to be in a safe state if $x^a_k\in\mathcal{S}_{x^a}^2$. \end{definition}
For the $2-$norm safe set $\mathcal{S}_{x^a}^2$, the most dangerous bias injection attack corresponds to the $\alpha-$stealthy attack yielding the largest bias in the $2-$norm sense, which can be computed by solving \begin{equation}\label{eq:max_impact_2} \begin{matrix}
\underset{a_\infty}{\max} \left \| G_{xa}a_\infty \right \|^2_2 \\ \\
\mbox{s.t. } \; \; \; \; \left \| G_{ra}a_\infty \right \|^2_2 \leq \delta_{\alpha}^2. \end{matrix} \end{equation}
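In the scalar case the problem~\eqref{eq:max_impact_2} can be solved by hand, which gives a useful sanity check (the gains and threshold below are hypothetical): the optimal bias saturates the stealthiness constraint, $|G_{ra}a_\infty|=\delta_\alpha$, and the optimal value equals $\lambda^*\delta_\alpha^2$ with $\lambda^*=(G_{xa}/G_{ra})^2$.

```python
# Scalar illustration of eq:max_impact_2 (assumed gains): for scalar G_xa, G_ra
# the generalized eigenvalue problem collapses to lambda* = (G_xa/G_ra)**2.

G_xa = 2.0          # steady-state gain from bias to state (hypothetical)
G_ra = 0.5          # steady-state gain from bias to residual (hypothetical)
delta_alpha = 1.0   # stealthiness threshold

a_opt = delta_alpha / abs(G_ra)     # optimal bias saturates |G_ra a| = delta_alpha
impact = (G_xa * a_opt) ** 2        # optimal value of the objective
lam_star = (G_xa / G_ra) ** 2       # largest generalized eigenvalue

# impact == lam_star * delta_alpha**2; here the impact exceeds 1, so the
# alpha-stealthy attack drives the state out of the 2-norm safe set.
```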
\begin{lemma}~\label{lem:bounded_bias} The optimization problem~\eqref{eq:max_impact_2} is bounded if and only if $\ker(G_{ra})\subseteq\ker(G_{xa})$. \end{lemma} \begin{proof}
Suppose that $\ker(G_{ra})\neq\{0\}$ and consider the subset of solutions where $a_\infty\in\ker(G_{ra})$. For this subset of solutions, the optimization problem then becomes $\max_{a_\infty\in\ker(G_{ra})} \left\| G_{xa}a_\infty \right\|^2_2$. Since the latter corresponds to a maximization of a convex function, its solution is unbounded unless $G_{xa}a_\infty = 0$ for all $a_\infty\in\ker(G_{ra})$ i.e., $\ker(G_{ra})\subseteq\ker(G_{xa})$. Noting that the feasible set and the objective function are bounded for all solutions $a_\infty \not \in \ker(G_{ra})$ concludes the proof. \end{proof}
Given Lemma~\ref{lem:bounded_bias}, below we consider the non-trivial case for which it holds that $\ker(G_{ra})\subseteq\ker(G_{xa})$. The above optimization problem can be transformed into a generalized eigenvalue problem and the corresponding optimal solution characterized in terms of generalized eigenvalues and eigenvectors. Before formalizing this statement, we introduce the following result.
\begin{lemma}~\label{lem:bias_generalized} Let $Q\in\mathbb{R}^{n\times n}$ and $P\in\mathbb{R}^{n\times n}$ be positive semi-definite matrices satisfying $\ker(Q)\subseteq\ker(P)$. Denote $\lambda^*$ as the largest generalized eigenvalue of the matrix pencil $(P,Q)$ and $v^*$ as the corresponding eigenvector. Then the matrix $P - \lambda Q$ is negative semi-definite for a generalized eigenvalue $\lambda$ if and only if $\lambda=\lambda^*$. Moreover, we have $\lambda^*\geq0$ and $x^\top(P-\lambda^* Q)x = 0$ with $Qx\neq0$ if and only if $x\in\mbox{span}(v^*)$. \end{lemma} \begin{proof} The proof can be found in Appendix~\ref{app:A}. \end{proof}
The optimal bias injection attack in the sense of~\eqref{eq:max_impact_2} is characterized by the following result. \begin{theorem}\label{thm:bias_attack_2} Consider the $2-$norm safe set $\mathcal{S}_{x^a}^2$ and the corresponding optimal $\alpha-$stealthy bias injection attack parameterized by the optimization problem~\eqref{eq:max_impact_2}, which is assumed to be bounded. Denote $\lambda^*$ and $v^*$ as the largest generalized eigenvalue and corresponding unit-norm eigenvector of the matrix pencil $(G_{xa}^\top G_{xa},\,G_{ra}^\top G_{ra})$. The optimal bias injection attack is given by \begin{equation}\label{eq:optimal_bias}
a^*_\infty = \pm\frac{\delta_{\alpha}}{\|G_{ra}v^*\|_2} v^*, \end{equation}
and the corresponding optimal value is $\|G_{xa}a_\infty \|^2_2 = \lambda^* \delta_{\alpha}^2$. Moreover, at steady-state the system is in a safe state if and only if $\lambda^* \delta_{\alpha}^2 \leq 1$. \end{theorem} \begin{proof} Let $P,\,Q\in\mathbb{R}^{n \times n}$ be positive semi-definite matrices such that $\ker(Q)\subseteq\ker(P)$. Recall that $\lambda$ is a generalized eigenvalue of $(P, Q)$ if $\mbox{rank}(P-\lambda Q ) < \mbox{normalrank}(P, Q)$, where $\mbox{normalrank}(P, Q)$ is defined as the rank of $P-\nu Q$ for almost all values of $\nu\in\mathbb{C}$. Furthermore, denote $v$ as the generalized eigenvector associated with $\lambda$ for which $(P-\lambda Q )v = 0$ with $v\not\in\ker(Q)$. The necessary and sufficient conditions for the optimization problem~\eqref{eq:max_impact_2} are given by~\cite{kn:Hiriart-Urruty2001} \begin{equation*} \begin{aligned} 0&=(G_{xa}^\top G_{xa}-\lambda^* G_{ra}^\top G_{ra})a_\infty^*,\\ 0 &= a_\infty^{*\top} G_{ra}^\top G_{ra}a_\infty^* - \delta_\alpha^2,\\ 0 &\geq y^\top (G_{xa}^\top G_{xa}-\lambda^* G_{ra}^\top G_{ra}) y ,\; \mbox{for }y\neq0. \end{aligned} \end{equation*}
Suppose $\lambda^*$ is the largest generalized eigenvalue of $(G_{xa}^\top G_{xa},\,G_{ra}^\top G_{ra})$ and let $v^*$ be the corresponding eigenvector. Scaling $v^*$ by $\kappa$ so that $a_\infty^* = \kappa v^*$ satisfies $\|G_{ra}a^*_\infty\|^2_2 = \delta_\alpha^2$ leads to $\kappa =\pm \frac{\delta_{\alpha}}{\|G_{ra}v^*\|_2}$, and the first and second conditions are satisfied. As for the third condition, note that $G_{xa}^\top G_{xa}-\lambda^* G_{ra}^\top G_{ra}$ is negative semi-definite by Lemma~\ref{lem:bias_generalized}, given that $\lambda^*$ is the largest generalized eigenvalue, $G_{xa}^\top G_{xa}$ and $G_{ra}^\top G_{ra}$ are positive semi-definite, and the assumption that $\ker(G_{ra})\subseteq\ker(G_{xa})$. To conclude our proof, observe that the optimal value is given by $a_\infty^{*\top} G_{xa}^\top G_{xa} a_\infty^* = \lambda^* a_\infty^{*\top} G_{ra}^\top G_{ra}a_\infty^* = \lambda^* \delta_\alpha^2=\|x^a_\infty\|_2^2$ and thus, by definition, $x^a_\infty\in\mathcal{S}_{x^a}^2$ if and only if $\lambda^* \delta_\alpha^2 \leq 1$. \end{proof}
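For illustration, the construction of Theorem~\ref{thm:bias_attack_2} can be carried out numerically. The sketch below assumes the strict case $Q = G_{ra}^\top G_{ra}\succ 0$, so that the pencil reduces to the ordinary symmetric eigenproblem for $M\,G_{xa}^\top G_{xa}\,M$ with $M=Q^{-1/2}$, as in the proof of Lemma~\ref{lem:bias_generalized}; the function name is illustrative, not from any standard library.

```python
import numpy as np

def optimal_bias(G_xa, G_ra, delta):
    """Largest-impact stealthy bias for the 2-norm safe set, assuming
    Q = G_ra^T G_ra is positive definite. Returns (a_star, lam_star)."""
    P = G_xa.T @ G_xa
    Q = G_ra.T @ G_ra
    # Reduce the pencil (P, Q) to a symmetric eigenproblem M P M, M = Q^{-1/2}.
    Lq, Vq = np.linalg.eigh(Q)
    M = Vq @ np.diag(Lq ** -0.5) @ Vq.T
    lam, U = np.linalg.eigh(M @ P @ M)     # ascending eigenvalues
    lam_star = lam[-1]
    v_star = M @ U[:, -1]                  # back-transform the eigenvector
    # Scale so that ||G_ra a||_2 = delta (the sign is free, cf. (23)).
    a_star = delta / np.linalg.norm(G_ra @ v_star) * v_star
    return a_star, lam_star
```

By the theorem, the returned attack satisfies $\|G_{xa}a^*_\infty\|_2^2 = \lambda^*\delta_\alpha^2$, which can be used as a sanity check of the computation.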
More generally, the optimal bias injection attacks for ellipsoidal safe sets of the form
$\mathcal{S}_{x^a} = \left\{x^a\in\mathbb{R}^n :\, x^{a^\top}P x^a\leq 1 \right\}$, with $P$ positive definite, can be found by replacing the objective function in~\eqref{eq:max_impact_2} by $\|P^{1/2} G_{xa}a_\infty\|_2^2$.
Similarly, consider the safe set as defined below. \begin{definition}\label{def:safe_sets_inf}
The infinity-norm safe set $\mathcal{S}_{x^a}^\infty$ is defined as
\[\mathcal{S}_{x^a}^\infty = \left\{x\in\mathbb{R}^n :\, \|x\|_\infty \leq 1 \right\},\]
and the system is said to be in a safe state if $x^a_k\in\mathcal{S}_{x^a}^\infty$. \end{definition}
Given the infinity-norm safe set $\mathcal{S}_{x^a}^\infty$, the bias injection attack with the largest impact corresponds to the $\alpha-$stealthy attack yielding the largest bias in the infinity-norm sense. This attack can be obtained by solving the following optimization problem \begin{equation}\label{eq:max_impact_infty} \begin{matrix}
\underset{a_\infty}{\max} \left \| G_{xa}a_\infty \right \|_\infty \\
\mbox{s.t. } \; \; \; \; \left \| G_{ra}a_\infty \right \|_2 \leq \delta_{\alpha}. \end{matrix} \end{equation}
A possible method to solve this problem is to observe that \[\| G_{xa}a_\infty\|_\infty = \max_i\, \|e_i^\top G_{xa}a_\infty \|_2,\] where the vector $e_i$ is the $i-$th column of the identity matrix. Thus one can transform the optimization problem~\eqref{eq:max_impact_infty} into a set of problems with the same structure as~\eqref{eq:max_impact_2}, obtaining \begin{equation}\label{eq:bias_infinity_2} \begin{matrix}
\underset{i}{\max}\, \underset{a^i_\infty}{\max}\, \left \| e_i^\top G_{xa}a^i_\infty \right \|_2 \\
\mbox{s.t. } \; \; \; \; \left \| G_{ra}a^i_\infty \right \|_2 \leq \delta_{\alpha}. \end{matrix} \end{equation}
\begin{theorem}\label{thm:bias_attack_infinity} Consider the infinity-norm safe set $\mathcal{S}_{x^a}^\infty$ and the corresponding optimal $\alpha-$stealthy bias injection attack parameterized by the optimization problem~\eqref{eq:max_impact_infty}, which is assumed to be bounded. Let $e_i$ be the $i-$th column of the identity matrix and denote $\lambda^*_i$ and $v^*_i$ as the largest generalized eigenvalue and corresponding unit-norm eigenvector of the matrix pencil $(G_{xa}^\top e_i e_i^\top G_{xa},\,G_{ra}^\top G_{ra})$. Letting $\lambda^* = \max_i \lambda^*_i$, with $v^*$ as the corresponding generalized eigenvector, the optimal bias attack is given by \begin{equation}\label{eq:optimal_bias_infinity}
a^*_\infty = \pm\frac{\delta_{\alpha}}{\|G_{ra}v^*\|_2} v^*, \end{equation}
and the corresponding optimal value is $\|G_{xa}a^*_\infty \|_\infty = \sqrt{\lambda^*} \delta_{\alpha}$. Moreover, at steady-state the system is in a safe state if and only if $\lambda^* \delta_{\alpha}^2 \leq 1$. \end{theorem} \begin{proof} The proof follows directly from considering the set of optimization problems in~\eqref{eq:bias_infinity_2} and applying Theorem~\ref{thm:bias_attack_2}. \end{proof}
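The row-wise decomposition in~\eqref{eq:bias_infinity_2} translates directly into code: one rank-one pencil problem per state component, keeping the largest generalized eigenvalue. The sketch below again assumes $G_{ra}^\top G_{ra}\succ 0$ and uses an illustrative function name:

```python
import numpy as np

def optimal_bias_inf(G_xa, G_ra, delta):
    """Largest-impact stealthy bias for the infinity-norm safe set,
    assuming Q = G_ra^T G_ra is positive definite (sketch)."""
    Q = G_ra.T @ G_ra
    Lq, Vq = np.linalg.eigh(Q)
    M = Vq @ np.diag(Lq ** -0.5) @ Vq.T     # M = Q^{-1/2}
    lam_star, v_star = -np.inf, None
    for g in G_xa:                          # g = e_i^T G_xa, one row per state
        P = np.outer(g, g)                  # rank-one P for this component
        lam, U = np.linalg.eigh(M @ P @ M)
        if lam[-1] > lam_star:
            lam_star, v_star = lam[-1], M @ U[:, -1]
    a_star = delta / np.linalg.norm(G_ra @ v_star) * v_star
    return a_star, lam_star
```

The returned attack achieves $\|G_{xa}a^*_\infty\|_\infty = \sqrt{\lambda^*}\,\delta_\alpha$, matching the optimal value in the theorem.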
Note that the steady-state value of the data corruption $a^*_\infty$ is not sufficient for the attack to be $\alpha-$stealthy, since the transients are disregarded. In practice, however, it has been observed in the fault diagnosis literature that incipient faults with slow dynamics are hard to detect~\cite{Cheng_Patton_1999}. Therefore, the low-pass filter dynamics in the attack policy~\eqref{eq:bias_sequence} could be designed sufficiently slow so as to hinder detection. Below we provide a method to verify whether a given filter parameter $\beta$ renders the bias attack $\alpha-$stealthy.
\begin{theorem}\label{thm:bias_transient} Consider the attack policy $a_{k+1}=\beta a_k + (1-\beta)a^*_\infty$ with $\beta\in(0,\,1)$. The residual $r^a_k$ is characterized as the output of the autonomous system \begin{equation}\label{eq:bias_transient} \begin{aligned} \psi^a_{k+1} &= \bar{A}\psi^a_k\\ r^a_k &= \bar{C}\psi^a_k \end{aligned} \end{equation} with \begin{equation*} \begin{aligned} \bar{A}&=\begin{bmatrix} \mathbf{A}_e & \mathbf{B}_e & 0\\ 0 & \beta I & (1-\beta)I\\ 0 & 0 & I \end{bmatrix},\quad \psi^a_0=\begin{bmatrix} 0\\ 0\\ a^*_{\infty} \end{bmatrix}, \\ \bar{C} &= \begin{bmatrix} \mathbf{C}_e & \mathbf{D}_e & 0 \end{bmatrix}. \end{aligned} \end{equation*}
Moreover, the attack policy is $\alpha-$stealthy for a given $\beta$ if the following optimization problem admits a solution \begin{equation} \begin{array}{rr}
& \underset{\gamma,P}{\min}\quad \gamma\quad \\ & \mbox{s.t. } \quad \gamma \leq \delta_\alpha^2,\\
& P \succ 0,\\
& \psi_0^{a^\top} P \psi^a_0 \leq 1, \\
& \begin{bmatrix} P & \bar{C}^\top\\ \bar{C} & \gamma I \end{bmatrix} \succeq 0,\\
& \bar{A}^\top P \bar{A} - P \prec 0. \end{array} \end{equation} \end{theorem} \begin{proof}
The autonomous system is directly obtained by considering the augmented state $\psi^a=[\xi_{k|k}^{a^\top}\; l_k^\top\; s_k^\top]^\top$, where $l_k$ is the state of the low-pass filter bank and $s_k$ the integral state initialized at $s_0 = a^*_\infty$. Given this autonomous system, one observes that the attack is $\alpha-$stealthy if and only if the corresponding output-peak $\|r^a_k\|_2^2$ is bounded by $\delta_\alpha^2$ for all $k\geq0$, given the initial condition parameterized by $a^*_\infty$. The remainder of the proof follows directly from the results in~\cite{Boyd_LMI94} regarding output-peak bounds for autonomous systems. \end{proof}
However, the output-peak bounds are in general conservative and thus the conditions in the previous theorem are only sufficient.
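Given this conservatism, a complementary heuristic is to simulate the autonomous system~\eqref{eq:bias_transient} directly over a finite horizon and compare the residual peak against $\delta_\alpha$. This is a finite-horizon numerical check, not the LMI certificate of Theorem~\ref{thm:bias_transient}:

```python
import numpy as np

def residual_peak(A_bar, C_bar, psi0, steps=2000):
    """Simulate psi_{k+1} = A_bar psi_k, r_k = C_bar psi_k and return the
    peak 2-norm of the residual over the horizon (heuristic check)."""
    psi, peak = psi0.astype(float), 0.0
    for _ in range(steps):
        peak = max(peak, np.linalg.norm(C_bar @ psi))
        psi = A_bar @ psi
    return peak
```

If the returned peak exceeds $\delta_\alpha$ the chosen $\beta$ is certainly too aggressive; if it stays below, stealthiness over the simulated horizon is confirmed, though not over an infinite one.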
\textbf{Disclosure resources: } Similarly to the zero attack, no disclosure capabilities are required for this attack, since the attack policy is open-loop. Therefore we have $\mathcal{R}^{u}=\mathcal{R}^{y}=\emptyset$ and $\mathcal{I}^u_{k}=\mathcal{I}^y_{k}=\emptyset \,\forall k$.
\textbf{Disruption resources: } The biases may be added to both the actuator and sensor data, hence the required resources are $\mathcal{R}_I^{u} \subseteq \{1,\dots,q\}$, $\mathcal{R}_I^{y}\subseteq \{1,\dots,p\}$. Since no physical attack is performed, we have $F=0$.
\textbf{System knowledge: } As seen in~\eqref{eq:max_impact_2}, the open-loop attack policy~\eqref{eq:bias_sequence} requires the knowledge of the closed-loop system and anomaly detector steady-state gains $G_{ra}$ and $G_{xa}$, which we denoted as $\mathcal{K}_0$ as shown in Figure~\ref{fig:bias_attack}.
\section{Experiments}\label{sec:experiments} In this section we present our testbed and report experiments on staged cyber attacks following the different scenarios described in the previous section.
\subsection{Quadruple-Tank Process}
Our testbed consists of a Quadruple-Tank Process (QTP) \cite{Johansson2000} controlled through a wireless communication network, as shown in Figure~\ref{figQTP}. \begin{figure}
\caption{Schematic diagram of the testbed with the Quadruple-Tank Process and a multi-hop communication network.}
\label{figQTP}
\end{figure}
The plant model can be found in~\cite{Johansson2000} \begin{equation}\label{eqQTP}
\begin{split}
\dot{h}_1 &= -\frac{a_1}{A_1}\sqrt{2gh_1}+\frac{a_3}{A_1}\sqrt{2gh_3}+\frac{\gamma_1 k_1}{A_1}u_1,\\
\dot{h}_2 &= -\frac{a_2}{A_2}\sqrt{2gh_2}+\frac{a_4}{A_2}\sqrt{2gh_4}+\frac{\gamma_2 k_2}{A_2}u_2,\\
\dot{h}_3 &= -\frac{a_3}{A_3}\sqrt{2gh_3}+\frac{(1-\gamma_2) k_2}{A_3}u_2,\\
\dot{h}_4 &= -\frac{a_4}{A_4}\sqrt{2gh_4}+\frac{(1-\gamma_1) k_1}{A_4}u_1,
\end{split} \end{equation}
where $h_i \in[0,\; 30]$ are the heights of water in each tank, $A_i$ the cross-section area of the tanks, $a_i$ the cross-section area of the outlet hole, $k_i$ the pump constants, $\gamma_i$ the flow ratios and $g$ the gravity acceleration. The nonlinear plant model is linearized for a given operating point. Moreover, given the range of the water levels, the following safe set is considered $\mathcal{S}_x = \{x\in\mathbb{R}^n:\; \|x-\sigma\mathbf{1} \|_\infty\leq 15,\, \sigma=15\}$, where $\mathbf{1}\in\mathbb{R}^n$ is a vector with all entries set to $1$.
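The nonlinear dynamics~\eqref{eqQTP} are straightforward to simulate. The forward-Euler sketch below uses illustrative parameter values in the spirit of~\cite{Johansson2000}; they are placeholders, not the identified parameters of the actual testbed, and the water levels are clipped to the physical range $[0,\,30]$ cm:

```python
import numpy as np

# Illustrative quadruple-tank parameters (placeholders, cgs units).
g = 981.0                                     # gravity (cm/s^2)
A = np.array([28.0, 32.0, 28.0, 32.0])        # tank cross-sections (cm^2)
a = np.array([0.071, 0.057, 0.071, 0.057])    # outlet cross-sections (cm^2)
k = np.array([3.33, 3.35])                    # pump constants
gam = np.array([0.7, 0.6])                    # flow ratios

def qtp_step(h, u, dt=0.1):
    """One forward-Euler step of the quadruple-tank dynamics (18)."""
    h = np.clip(h, 0.0, 30.0)
    q = a * np.sqrt(2.0 * g * h)              # outlet flows (Torricelli)
    dh = np.array([
        (-q[0] + q[2] + gam[0] * k[0] * u[0]) / A[0],
        (-q[1] + q[3] + gam[1] * k[1] * u[1]) / A[1],
        (-q[2] + (1.0 - gam[1]) * k[1] * u[1]) / A[2],
        (-q[3] + (1.0 - gam[0]) * k[0] * u[0]) / A[3],
    ])
    return np.clip(h + dt * dh, 0.0, 30.0)
```

Linearizing these equations around an operating point yields the plant model used by the controller and the anomaly detector.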
The QTP is controlled using a centralized LQG controller with integral action running in a remote computer and a wireless network is used for the communications. A Kalman-filter-based anomaly detector is also running in the remote computer and alarms are triggered according to~\eqref{eq:residue_threshold}, for which we computed $\delta_r=0.15$ and chose $\delta_{\alpha}=0.25$ for illustration purposes. The communication network is multi-hop, having one additional wireless device relaying the data, as illustrated in Figure~\ref{figQTP}.
\subsection{Denial-of-Service Attack} Here we consider the case where the QTP suffers a DoS attack on both sensors, while operating at a constant set-point. The state and residual trajectories from this experiment are presented in Figure~\ref{fig_dos_graph}. The DoS attack follows a Bernoulli model~\cite{AminCardenasSastry-HSCC-2009} with $p=0.9$ as the probability of packet loss and the last received data is used in the absence of data. From Proposition~\ref{thm:DoS_stability}, we have that the closed-loop system under such DoS attack is exponentially stable.
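The Bernoulli packet-loss model with hold-last-value logic used in this experiment can be sketched as follows (names and seed are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def dos_channel(y_seq, p_loss=0.9):
    """Bernoulli DoS model: each packet is dropped independently with
    probability p_loss; the receiver holds the last received value."""
    out, last = [], y_seq[0]
    for y in y_seq:
        if rng.random() >= p_loss:   # packet gets through
            last = y
        out.append(last)
    return np.array(out)
```

Feeding the sensor measurements through such a channel reproduces the staircase-like signal seen by the controller and detector during the attack.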
\begin{figure}
\caption{Results for the DoS attack performed against both sensors since $t\approx 100\,s$.}
\label{fig_dos_graph}
\end{figure}
The DoS attack initiated at $t\approx 100\,s$, leading to an increase in the residual due to successive packet losses. However, the residual remained below the threshold during the attack and there were no significant changes in the system's state.
\subsection{Replay Attack} In this scenario, the QTP is operating at a constant set-point while a hacker desires to steal water from tank 4, the upper tank on the right side. An example of this attack is presented in Figure~\ref{fig_replay_graph}, where the replay attack policy is the one described in Section~\ref{sec:replay_attack}. The adversary starts by replaying past data from $y_2$ at $t\approx 90\,s$ and then begins stealing water from tank 4 at $t\approx 100\,s$. Tank 4 is successfully emptied and the attack stops removing water at $t\approx180\,s$. To ensure stealthiness, the replay attack continues until the system has recovered its original setpoint at $t\approx 280\,s$. As we can see, the residue stays below the alarm threshold and therefore the attack is not detected.
\subsection{Zero-Dynamics Attack}
The QTP has a non-minimum phase configuration in which the plant possesses an unstable zero. In this case, as discussed in Section~\ref{sec:attack_zero}, an adversary able to corrupt all the actuator channels may launch a false-data injection attack where the false-data follows the zero-dynamics. Moreover, since the safe region is described by the set $\mathcal{S}_x = \{x\in\mathbb{R}^n:\; \|x-\sigma\mathbf{1} \|_\infty\leq 15,\, \sigma=15\}$, from Theorem~\ref{thm:zero_unsafe} we expect that the zero-dynamics attack associated with the unstable zero can drive the system to an unsafe region. This scenario is illustrated in Figure~\ref{fig_zero_graph}.
\begin{figure}
\caption{Results for the replay attack performed against sensor 2 from $t\approx 90\,s$ to $t\approx 280\,s$. Additionally, the adversary opens the tap of tank 4 at $t \approx 100\,s$ and closes it at $t \approx 180\,s$.}
\label{fig_replay_graph}
\end{figure}
\begin{figure}
\caption{Results for the zero-dynamics attack starting at $t \approx 30\,s$. Tank 3 is emptied at $t\approx 55\,s$, resulting in a steep increase in the residual since the linearized model is no longer valid.}
\label{fig_zero_graph}
\end{figure}
The adversary's goal is to either empty or overflow at least one of the tanks, considered as an unsafe state. The attack on both actuators begins at $t\approx30\,s$, causing a slight increase in the residual. Tank 3 becomes empty at $t\approx55\,s$ and shortly after actuator 2 saturates, producing a steep increase in the residual which then crosses the threshold. However, note that the residual was below the threshold when the unsafe state was reached.
After saturation of the water level and the actuators, the system dynamics change and therefore the attack signal no longer corresponds to the zero-dynamics and is detected, although it has already damaged the system. Thus these attacks are particularly dangerous in processes that have unstable zero-dynamics and in which the actuators are over-dimensioned, allowing the adversary to perform longer attacks before saturating.
\subsection{Bias Injection Attack}
The results for the case where $u_1$ and $y_1$ are respectively corrupted with $b^u_\infty$ and $b^y_\infty$ are presented in Figure~\ref{figSA}. In this scenario, the adversary aimed at driving the system out of the safe set $\mathcal{S}_x$ while remaining stealthy for $\delta_\alpha = 0.25$. The bias was slowly injected using a first-order low-pass filter with $\beta=0.95$ and the following steady-state value, computed using Theorem~\ref{thm:bias_attack_infinity}: $a_\infty = [b^u_\infty\;\; b^y_\infty]^\top = [2.15\;\; -9.42]^\top$.
\begin{figure}
\caption{Results for the bias attack against the actuator 1 and sensor 1 in the minimum phase QTP. The attack is launched using a low-pass filter in the instant $t \approx 70\,s$ and stopped at $t \approx 230\,s$.}
\label{figSA}
\end{figure}
The bias injection began at $t\approx70\,s$ and led to an overflow in tank 4 at $t\approx225\,s$. At that point, the adversary started removing the bias and the system recovered the original setpoint at $t\approx 350\,s$. The residual remained within the allowable bounds throughout the attack, thus the attack was not detected.
\section{Conclusions}\label{sec:conc} In this paper we have analyzed the security of networked control systems. A novel attack space based on the adversary's system knowledge, disclosure, and disruption resources was proposed and the corresponding adversary model described. Attack scenarios corresponding to replay, zero-dynamics, and bias injection attacks were analyzed using this framework. In particular the maximum impact of stealthy bias injection attacks was derived and it was shown that the corresponding policy does not require perfect model knowledge. These attack scenarios were illustrated using an experimental setup based on a quadruple-tank process controlled over a wireless network.
\section{Acknowledgments} This work was supported in part by the European Commission through the HYCON2 project, the EIT-ICT Labs through the project SESSec-EU, the Swedish Research Council under Grants 2007-6350 and 2009-4565, and the Knut and Alice Wallenberg Foundation.
\appendix \section{Proof of Lemma~\ref{lem:bias_generalized}}\label{app:A}
Recall that $\lambda$ is a generalized eigenvalue of $(P, Q)$ if $\mbox{rank}(P-\lambda Q ) < \mbox{normalrank}(P, Q)$, where $\mbox{normalrank}(P, Q)$ is defined as the rank of $P-\nu Q$ for almost all values of $\nu\in\mathbb{C}$. Furthermore, denote $v$ as the generalized eigenvector associated with $\lambda$ for which $(P-\lambda Q )v = 0$ with $v\not\in\ker(Q)$.
Define $T=[V_{\bar{N}} \, V_N] \in \mathbb{R}^{n\times n}$ where the columns of $V_N$ are a basis for $\ker(Q)$ and $V_{\bar{N}}$ is chosen such that $T$ is nonsingular. Given that $\ker(Q)\subseteq\ker(P)$, the congruence transformation induced by $T$ leads to \[T^\top(P-\lambda Q)T = \begin{bmatrix}\tilde{P}-\lambda \tilde{Q} & 0\\ 0 & 0\end{bmatrix},\] where $\tilde{Q}\succ 0$ and $\tilde{P}\succeq 0$, and we conclude that $P-\lambda Q \preceq0$ if and only if $\tilde{P}-\lambda \tilde{Q} \preceq0$. Additionally, we see that all the non-zero generalized eigenvalues of $(P,Q)$ need to reduce the rank of $\tilde{P}-\lambda \tilde{Q}$ and thus need to be positive. Hence we have proved that all generalized eigenvalues are non-negative and that $\lambda^*\geq0$.
Now we show that $\tilde{P}-\lambda \tilde{Q}$ is indefinite for all generalized eigenvalues $0 < \lambda < \lambda^*$. Let $\bar\lambda > 0$ be a generalized eigenvalue of $(\tilde{P},\tilde{Q})$ with the associated eigenvector $\bar{v}$. Then $\bar{v}^{\top}(\tilde{P}-\lambda \tilde{Q})\bar{v} = (\bar{\lambda} - \lambda)\bar{v}^{\top}\tilde{Q}\bar{v}$, which can be made positive or negative for all generalized eigenvalues $\lambda \in (0,\, \lambda^*)$ and thus our assertion is proved.
As the next step, we show that $\tilde{P}-\lambda^* \tilde{Q}\preceq 0$. Since $\tilde{Q}$ is invertible, the generalized eigenvalues of $(\tilde{P},\tilde{Q})$ correspond to the eigenvalues of the positive semi-definite matrix $M\tilde{P}M$ with $M=\tilde{Q}^{-1/2}$. Furthermore note that $\tilde{P}-\lambda^* \tilde{Q}\preceq 0$ is equivalent to having $M\tilde{P}M-\lambda^* I\preceq 0$, which holds since $M\tilde{P}M$ is positive semi-definite with $\lambda^*$ as the largest eigenvalue.
All that is left to show now is that $x^\top(P-\lambda^* Q)x = 0$ with $Qx\neq0$ if and only if $x\in\mbox{span}(v^*)$. Given the condition $Qx\neq0$, it is enough to verify that $x^\top(\tilde{P}-\lambda^* \tilde{Q})x = 0$ for $x\neq0$ if and only if $x\in\mbox{span}(\tilde{v}^*)$, where $\tilde{v}^*$ is the generalized eigenvector of $(\tilde{P},\tilde{Q})$ associated with $\lambda^*$. The proof is concluded by recalling that $\tilde{P}-\lambda^* \tilde{Q}\preceq 0$, hence $x^\top(\tilde{P}-\lambda^* \tilde{Q})x = 0$ if and only if $x$ belongs to the subspace spanned by the eigenvectors associated with $\lambda^*$.
\end{document} |
\begin{document}
\sloppy
\righthyphenmin = 2
\newcommand{\Per}{\mbox{Per}} \newcommand{\Diff}{\mbox{Diff}} \newcommand{\dist}{\mbox{dist}}
\newcommand{\sref}[1]{(\ref{#1})}
\title{Periodic shadowing and $\Omega$-stability}
\author{A. V.\ Osipov\footnotemark[1],\; S. Yu.\ Pilyugin\footnotemark[1],\;
and S. B.\ Tikhomirov\footnotemark[2] \footnotemark[3]}
\date{}
\footnotetext[1] {Faculty of Mathematics and Mechanics, St.\ Petersburg State University,
University av.\ 28,
198504, St.\ Petersburg, Russia}
\footnotetext[2] {Department of Mathematics, National Taiwan University, No. 1, Section 4, Roosevelt Road, Taipei 106, Taiwan}
\footnotetext[3] {The research of the third author is supported by NSC (Taiwan) 98-2811-M-002-061}
\maketitle
\begin{abstract}
We show that the following three properties of a diffeomorphism $f$ of a smooth closed manifold are equivalent: (i) $f$ belongs to the $C^1$-interior of the set of diffeomorphisms having periodic shadowing property; (ii) $f$ has Lipschitz periodic shadowing property; (iii) $f$ is $\Omega$-stable. Bibliography: 20 titles.
\end{abstract}
Mathematics Subject Classification: 37C50, 37D20
Keywords: periodic shadowing, hyperbolicity, $\Omega$-stability
\section{Introduction}
The theory of shadowing of approximate trajectories (pseudotrajectories) of dynamical systems is now a well developed part of the global theory of dynamical systems (see, for example, the monographs [1, 2]).
This theory is closely related to the classical theory of structural stability. It is well known that a diffeomorphism has shadowing property in a neighborhood of a hyperbolic set [3, 4] and a structurally stable diffeomorphism has shadowing property on the whole manifold [5--7]. Analyzing the proofs of the first shadowing results by Anosov [3] and Bowen [4], it is easy to see that, in a neighborhood of a hyperbolic set, the shadowing property is Lipschitz (and the same holds in the case of a structurally stable diffeomorphism, see [1]).
The shadowing property means that, near a sufficiently precise approximate trajectory of a dynamical system, there is an exact trajectory. One can pose a similar question replacing arbitrary approximate and exact trajectories by periodic ones (the corresponding property is called periodic shadowing property, see [8]).
In this paper, we study relations between periodic shadowing and structural stability (to be more precise, $\Omega$-stability).
It is easy to give an example of a diffeomorphism that is not structurally stable but has shadowing property (see [9], for example). Similarly, there exist diffeomorphisms that are not $\Omega$-stable but have periodic shadowing property.
Thus, structural stability is not equivalent to shadowing (and $\Omega$-stability is not equivalent to periodic shadowing).
One possible approach in the study of relations between shadowing and structural stability is the passage to $C^1$-interiors. At present, it is known that the $C^1$-interior of the set of diffeomorphisms having shadowing property coincides with the set of structurally stable diffeomorphisms [10]. Later, a similar result was obtained for orbital shadowing property (see [11] for details).
In this paper, we show that the $C^1$-interior of the set of diffeomorphisms having periodic shadowing property coincides with the set of $\Omega$-stable diffeomorphisms.
We are also interested in the study of the above-mentioned relations without the passage to $C^1$-interiors. Let us mention in this context that Abdenur and Diaz conjectured that a $C^1$-generic diffeomorphism with shadowing property is structurally stable; they have proved this conjecture for so-called tame diffeomorphisms [12]. Recently, it was proved that Lipschitz shadowing and the so-called variational shadowing are equivalent to structural stability [13, 9].
The second main result of this paper states that Lipschitz periodic shadowing property is equivalent to $\Omega$-stability.
\section{Main results}
Let us pass to exact definitions and statements.
Let $f$ be a diffeomorphism of a smooth closed manifold $M$ with Riemannian metric $\mbox{dist}$. We denote by $Df(x)$ the differential of $f$ at a point $x\in M$.
Denote by $T_xM$ the tangent space of $M$ at a point $x$; let $|v|,\;v\in T_xM$, be the norm generated by the metric $\mbox{dist}$.
As usual, we say that a sequence $\xi=\{x_i\in M,\;i\in\mbox{$\mathds{Z}$}\}$ is a $d$-pseudotrajectory of $f$ if \begin{equation} \label{0} \mbox{dist}(f(x_i),x_{i+1})<d,\quad i\in\mbox{$\mathds{Z}$}. \end{equation}
{\bf Definition 1. } We say that $f$ has {\em periodic shadowing} property if for any positive $\varepsilon$ there exists a positive $d$ such that if $\xi=\{x_i\}$ is a periodic $d$-pseudotrajectory (i.e., a $d$-pseudotrajectory for which $x_{i+\mu}=x_i$, $i\in\mbox{$\mathds{Z}$}$, for some $\mu>0$), then there exists a periodic point $p$ such that \begin{equation} \label{00} \mbox{dist}(f^i(p),x_i)<\varepsilon,\quad i\in\mbox{$\mathds{Z}$}. \end{equation}
Denote by $\mbox{PerSh}$ the set of diffeomorphisms having periodic shadowing property.
{\bf Definition 2. } We say that $f$ has {\em Lipschitz periodic shadowing} property if there exist positive constants $\mbox{${\cal L}$},d_0$ such that if $\xi=\{x_i\}$ is a periodic $d$-pseudotrajectory
with $d\leq d_0$, then there exists a periodic point $p$ such that \begin{equation} \label{00L} \mbox{dist}(f^i(p),x_i)\leq \mbox{${\cal L}$} d,\quad i\in\mbox{$\mathds{Z}$}. \end{equation}
Denote by $\mbox{LipPerSh}$ the set of diffeomorphisms having Lipschitz periodic shadowing property.
Denote by $\Omega S$ the set of $\Omega$-stable diffeomorphisms (it is well known that $f\in\Omega S$ if and only if $f$ satisfies Axiom A and the no cycle condition, see, for example, [14]). Denote by $\Diff ^1(M)$ the space of diffeomorphisms of $M$ with the $C^1$ topology. For a set $P\subset \Diff ^1(M)$ we denote by $\mbox{Int}^1(P)$ its $C^1$-interior.
Let us state our main result.
{\bf Theorem. } $\mbox{Int}^1(\mbox{PerSh})=\mbox{LipPerSh}=\Omega S$.
The structure of the paper is as follows. In Sec. 3, we prove the inclusion $\Omega S\subset\mbox{LipPerSh}$. Of course, this inclusion implies that $\Omega S\subset\mbox{PerSh}$. Since the set $\Omega S$ is $C^1$-open, we conclude that $\Omega S\subset \mbox{Int}^1(\mbox{PerSh})$. In Sec. 4, we prove the inclusion $\mbox{Int}^1(\mbox{PerSh})\subset\Omega S$. In Sec. 5, we prove the inclusion $\mbox{LipPerSh}\subset\Omega S$.
\section{$\Omega S\subset\mbox{LipPerSh}$}
First we introduce some basic notation.
Denote by $\Per(f)$ the set of periodic points of $f$ and by $\Omega(f)$ the nonwandering set of $f$. Let $N=\sup_{x\in M}\|Df(x)\|$.
Let us formulate several auxiliary definitions and statements.
It is well known that if a diffeomorphism $f$ satisfies Axiom A, then its nonwandering set can be represented as a disjoint union of a finite number of compact sets: \begin{equation} \label{spe} \Omega(f)=\Omega_1\cup\dots\cup\Omega_m, \end{equation} where the sets $\Omega_i$ are so-called basic sets (hyperbolic sets each of which contains a dense positive semi-trajectory).
We say that a diffeomorphism $f$ has Lipschitz shadowing property on a set $U$ if there exist positive constants $\mbox{${\cal L}$},d_0$ such that if $\xi=\{x_i,\;i\in\mbox{$\mathds{Z}$}\}\subset U$ is a $d$-pseudotrajectory with $d\leq d_0$, then there exists a point $p\in U$ such that inequalities (\ref{00L}) hold.
We say that a diffeomorphism $f$ is expansive on a set $U$ if there exists a positive number $a$ (expansivity constant) such that if two trajectories $\{f^i(p):\;i\in\mbox{$\mathds{Z}$}\}$ and $\{f^i(q):\;i\in\mbox{$\mathds{Z}$}\}$ belong to $U$ and the inequalities $$ \mbox{dist}(f^i(p),f^i(q))\leq a,\quad i\in\mbox{$\mathds{Z}$}, $$ hold, then $p=q$.
The following statement is well known (see [1, 14], for example).
{\bf Proposition. } {\em If $\Lambda$ is a hyperbolic set, then there exists a neighborhood $U$ of $\Lambda$ such that $f$ has Lipschitz shadowing property on $U$ and is expansive on $U$.}
We also need the following two lemmas (see [15]).
{\bf Lemma 1. }{\em Let $f$ be a homeomorphism of a compact metric space $(X,\dist)$. For any neighborhood $U$ of the nonwandering set $\Omega(f)$ there exist positive numbers $B,d_1$ such that if $\xi=\{x_i,\;i\in\mbox{$\mathds{Z}$}\}$ is a $d$-pseudotrajectory of $f$ with $d\leq d_1$ and $$ x_k,x_{k+1},\dots,x_{k+l}\notin U $$ for some $l>0$ and $k\in\mbox{$\mathds{Z}$}$, then $l\leq B$}.
Let $\Omega_1,\dots,\Omega_m$ be the basic sets in decomposition (\ref{spe}) of the nonwandering set of an $\Omega$-stable diffeomorphism $f$.
{\bf Lemma 2. }{\em Let $U_1,\dots,U_m$ be disjoint neighborhoods of the basic sets $\Omega_1,\dots,\Omega_m$. There exist neighborhoods $V_j\subset U_j$ of the sets $\Omega_j$ and a number $d_2>0$ such that if $\xi=\{x_i,\;i\in\mbox{$\mathds{Z}$}\}$ is a $d$-pseudotrajectory of $f$ with $d\leq d_2$ such that $x_0\in V_j$ and $x_t\notin U_j$ for some $j\in\{1,\dots,m\}$ and some $t>0$, then $x_i\notin V_j$ for $i\geq t$.}
{\bf Lemma 3. }{$\Omega S\subset\mbox{LipPerSh}$.}
{\em Proof. } Apply the above proposition and find disjoint neighborhoods $W_1,\dots,W_m$ of the basic sets $\Omega_1,\dots,\Omega_m$ in decomposition (\ref{spe}) such that (i) $f$ has Lipschitz shadowing property on any of $W_j$ with the same constants $\mbox{${\cal L}$},d^*_0$; (ii) $f$ is expansive on any of $W_j$ with the same expansivity constant $a$.
Find neighborhoods $V_j,U_j$ of $\Omega_j$ (and reduce $d^*_0$, if necessary) so that the following properties are fulfilled:
$\bullet$ $V_j\subset U_j\subset W_j,\quad j=1,\dots,m$;
$\bullet$ the statement of Lemma 2 holds for $V_j$ and $U_j$ with some $d_2>0$;
$\bullet$ the $\mbox{${\cal L}$} d^*_0$-neighborhoods of $U_j$ belong to $W_j$.
Apply Lemma 1 to find the corresponding constants $B,d_1$ for the neighborhood $V_1\cup\dots\cup V_m$ of $\Omega(f)$.
We claim that $f$ has the Lipschitz periodic shadowing property with constants $\mbox{${\cal L}$},d_0$, where $$ d_0=\min\left(d^*_0,d_1,d_2,\frac{a}{2\mbox{${\cal L}$}}\right). $$
Take a $\mu$-periodic $d$-pseudotrajectory $\xi=\{x_i,\;i\in\mbox{$\mathds{Z}$}\}$ of $f$ with $d\leq d_0$. Lemma 1 implies that there exists a neighborhood $V_j$ such that $\xi\cap V_j\neq\emptyset$; shifting indices, we may assume that $x_0\in V_j$.
In this case, $\xi\subset U_j$. Indeed, if $x_{i_0}\notin U_j$ for some $i_0$, then $x_{i_0+k\mu}\notin U_j$ for all $k$. It follows from Lemma 2 that if $i_0+k\mu>0$, then $x_i\notin V_j$ for $i\geq i_0+k\mu$, and we get a contradiction with the periodicity of $\xi$ and the inclusion $x_0\in V_j$.
Thus, there exists a point $p$ such that inequalities (\ref{00L}) hold. Let us show that $p\in\Per(f)$. By the choice of $U_j$ and $W_j$, $f^i(p)\in W_j$ for all $i\in\mbox{$\mathds{Z}$}$. Let $q=f^\mu(p)$. Inequalities (\ref{00L}) and the periodicity of $\xi$ imply that $$ \mbox{dist}(f^i(q),x_{i})= \mbox{dist}(f^i(q),x_{i+\mu})\leq \mbox{${\cal L}$} d,\quad i\in\mbox{$\mathds{Z}$}. $$ Thus, $$ \mbox{dist}(f^i(q),f^i(p))\leq 2\mbox{${\cal L}$} d\leq a,\quad i\in\mbox{$\mathds{Z}$}, $$ which implies that $f^\mu(p)=q=p$. This completes the proof.
{\bf Remark. } Thus, we have shown that an $\Omega$-stable diffeomorphism has the periodic shadowing property (and its Lipschitz variant). It must be noted that it was shown in [16] that there exist $\Omega$-stable diffeomorphisms that do not have the weak shadowing property (hence, they have neither the orbital nor the usual shadowing property; see [11] for details).
\section{ $\mbox{Int}^1(\mbox{PerSh})\subset\Omega S$}
In the proof, we refer to the following well-known statement. Denote by $\mbox{HP}$ the set of diffeomorphisms $f$ such that every periodic point of $f$ is hyperbolic; let ${\cal F}=\mbox{Int}^1(\mbox{HP})$. It is known (see [17, 18]) that the set ${\cal F}$ coincides with the set $\Omega S$ of $\Omega$-stable diffeomorphisms.
Thus, it suffices for us to prove the following statement.
{\bf Lemma 4. } $\mbox{Int}^1(\mbox{PerSh})\subset{\cal F}$.
{\em Proof. } In the proof of this lemma, as well as in some proofs below, we apply the usual linearization technique based on exponential mapping.
Let $\exp$ be the standard exponential mapping on the tangent bundle of $M$ and let $\exp_x$ be the corresponding mapping $$ T_xM\to M. $$
Let $p$ be a periodic point of $f$; denote $p_i=f^i(p)$ and $A_i=Df(p_i)$.
We introduce the mappings \begin{equation} \label{1} F_i=\exp^{-1}_{p_{i+1}}\circ f\circ\exp_{p_i}: T_{p_i}M\to T_{p_{i+1}}M. \end{equation} It follows from the standard properties of the exponential mapping that $D\exp_x(0)=\mbox{Id}$; hence, $$ DF_i(0)=A_i. $$ We can represent $$ F_i(v)=A_iv+\phi_i(v), $$ where $$
\frac{|\phi_i(v)|}{|v|}\to 0\mbox{ as } |v|\to 0. $$
Denote by $B(r,x)$ the ball in $M$ of radius $r$ centered at a point $x$ and by $B_T(r,x)$ the ball in $T_xM$ of radius $r$ centered at the origin.
There exists $r>0$ such that, for any $x\in M$, $\exp_x$ is a diffeomorphism of $B_T(r,x)$ onto its image, and $\exp^{-1}_x$ is a diffeomorphism of $B(r,x)$ onto its image. In addition, we may assume that $r$ has the following property.
If $v,w\in B_T(r,x)$, then $$
\frac{\mbox{dist}(\exp_x(v),\exp_x(w))}{|v-w|}\leq 2; $$ if $y,z\in B(r,x)$, then $$
\frac{|\exp^{-1}_x(y)-\exp^{-1}_x(z)|}{\mbox{dist}(y,z)}\leq 2. $$
Every time we construct periodic $d$-pseudotrajectories of $f$, we take $d$ so small that the points of the pseudotrajectories under consideration, the points of the shadowing trajectories, their ``lifts'' to tangent spaces, etc.\ belong to the corresponding balls $B(r,p_i)$ and $B_T(r,p_i)$ (and we do not repeat this condition on the smallness of $d$).
To prove Lemma 4, it is enough for us to show that $\mbox{Int}^1(\mbox{PerSh})\subset\mbox{HP}$ and to note that the left-hand side of this inclusion is $C^1$-open.
To get a contradiction, let us assume that a diffeomorphism $f\in\mbox{Int}^1(\mbox{PerSh})$ has a nonhyperbolic periodic point $p$. Fix a $C^1$-neighborhood ${\cal N}\subset\mbox{PerSh}$ of $f$.
For simplicity, let us assume that $p$ is a fixed point and that the matrix $A_0=Df(p)$ has an eigenvalue $\mbox{$\lambda$}=1$ (the remaining cases are considered using a similar reasoning, see, for example, [19]).
In our case, an analog of mapping (\ref{1}), $$ F=\exp_p^{-1}\circ f\circ\exp_p: T_{p}M\to T_{p}M, $$ has the form $$ F(v)=A_0v+\phi(v). $$ Clearly, we can find a number $a\in(0,r)$ (recall that the number $r$ was fixed above when properties of the exponential mapping were described), coordinates $v=(u,w)$ in $T_pM$ with one-dimensional $u$, and a diffeomorphism $h\in{\cal N}$ such that if $$ H=\exp_p^{-1}\circ h\circ\exp_p $$
and $|v|\leq a$, then $$ H(v)=Av=(u,Bw), $$ where $B$ is a matrix of size $(n-1)\times(n-1)$ (and $n$ is the dimension of $M$). For this purpose, we take a matrix $A$, close to $A_0$ and having an eigenvalue $\mbox{$\lambda$}=1$ of multiplicity one, and ``annihilate'' the $C^1$-small term $(A_0-A)v+\phi(v)$ in the small ball $B_T(a,p)$.
Take a positive $\varepsilon$ such that $8\varepsilon<a$. Since $h\in{\cal N}$, there exists a corresponding $d\in(0,\varepsilon)$ from the definition of periodic shadowing (for the diffeomorphism $h$). Take a natural number $K$ such that $Kd>8\varepsilon$. Reducing $d$, if necessary, we may assume that \begin{equation} \label{2.01} 8\varepsilon<Kd<2a. \end{equation} Let us construct a sequence $y_k\in T_pM,\;k\in\mbox{$\mathds{Z}$},$ as follows: $$ y_0=0,\quad y_{k+1}=Ay_k+\left(\frac{d}{2},0\right),\quad 0\leq k\leq K-1, $$ $$ y_{k+1}=Ay_k-\left(\frac{d}{2},0\right),\quad K\leq k\leq 2K-1, $$ and $y_{k+2K}=y_k,\;k\in\mbox{$\mathds{Z}$}$. Clearly, \begin{equation} \label{2.2} y_K=\left(\frac{Kd}{2},0\right). \end{equation} Let $$ x_k=\exp_p(y_k). $$ Since $$ \exp_p^{-1}(h(x_k))=H(y_k)=Ay_k $$ and $$
|y_{k+1}-Ay_k|=\frac{d}{2}, $$ the sequence $\xi=\{x_k\}$ is a $2K$-periodic $d$-pseudotrajectory of $h$.
By our assumption, there exists a periodic point $p_0$ of $h$ such that $$ \mbox{dist}(p_k,x_k)<\varepsilon,\quad k\in\mbox{$\mathds{Z}$}, $$ where $p_k=h^k(p_0)$. Let $$ p_k=\exp_p(q_k),\quad k\in\mbox{$\mathds{Z}$}, $$ where $q_k=(U_k,W_k)$, and let $y_k=(u_k,w_k)$; then $$
|U_k-u_k|\leq|q_k-y_k|<2\varepsilon,\quad k\in\mbox{$\mathds{Z}$}, $$ which implies that $$
|U_0|\leq|q_0|<2\varepsilon. $$
Since $q_{k+1}=H(q_k)$, $U_k=U_0$ for all $k$ due to the structure of $H$. We conclude that $|U_K|<2\varepsilon$ and get a contradiction with
the inequalities $|U_K-u_K|<2\varepsilon$, (\ref{2.01}), and (\ref{2.2}). The lemma is proved.
\section{ $\mbox{LipPerSh}\subset\Omega S$}
In this section, we assume that $f\in\mbox{LipPerSh}$ (with constants $\mbox{${\cal L}$}\geq 1,d_0>0$). Clearly, in this case $f^{-1}\in\mbox{LipPerSh}$ as well (and we assume that the constants $\mbox{${\cal L}$},d_0$ are the same for $f$ and $f^{-1}$).
In the construction of pseudotrajectories, we apply the same linearization technique as in the previous section.
{\bf Lemma 5. } {\em Every point $p\in\Per(f)$ is hyperbolic.}
{\em Proof. } To get a contradiction, let us assume that $f$ has a nonhyperbolic periodic point $p$ (to simplify notation, we assume that $p$ is a fixed point; literally the same reasoning can be applied to a periodic point of period $m>1$).
In this case, mapping (\ref{1}) takes the form $$ F(v)=\exp^{-1}_p\circ f\circ\exp_p(v)=Av+\phi(v), $$ where $A$ is a nonhyperbolic matrix. The following two cases are possible:
(Case 1): $A$ has a real eigenvalue $\mbox{$\lambda$}$ with $|\mbox{$\lambda$}|=1$;
(Case 2): $A$ has a complex eigenvalue $\mbox{$\lambda$}$ with $|\mbox{$\lambda$}|=1$.
We treat in detail only Case 1; we give a comment concerning Case 2. To simplify presentation, we assume that 1 is an eigenvalue of $A$; the case of eigenvalue $-1$ is treated similarly.
We can find coordinates $v$ in $T_pM$ such that, with respect to this coordinate, the matrix $A$ has block-diagonal form, \begin{equation} \label{bform} A=\mbox{diag}(B,P), \end{equation} where $B$ is a Jordan block of size $l\times l$: $$ B=\left( \begin{array}{ccccc} 1&1&0&\ldots&0\\ 0&1&1&\ldots&0\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ 0&0&0&\ldots&1 \end{array} \right). $$
Of course, introducing new coordinates, we have to change the constants $\mbox{${\cal L}$},d_0,N$ (here, $N$ denotes a constant bounding $\|Df\|$ on $M$, so that $|Av|\leq N|v|$ in the estimates below); we denote the new constants by the same symbols. In addition, we assume that $\mbox{${\cal L}$}$ is an integer.
We start considering the case $l=2$; in this case, $$ B=\left( \begin{array}{cc} 1&1\\ 0&1 \end{array} \right). $$ Let $$ e_1=(1,0,0,\dots,0) \mbox{ and } e_2=(0,1,0,\dots,0) $$ be the first two vectors of the standard orthonormal basis.
Let $K=25\mbox{${\cal L}$}$.
Take a small $d>0$ and construct a finite sequence $y_0,\dots,y_Q$ in $T_pM$ (where $Q$ is determined later) as follows: $y_0=0$ and \begin{equation} \label{pst} y_{k+1}=Ay_k+de_2,\quad k=0,\dots, K-1. \end{equation} Then $$ y_K=(Z_1(K)d,Kd,0,\dots,0), $$ where the natural number $Z_1(K)$ is determined by $K$ (we do not write $Z_1(K)$ explicitly). Now we set $$ y_{k+1}=Ay_k-de_2,\quad k=K,\dots, 2K-1. $$ Then $$ y_{2K}=(Z_2(K)d,0,0,\dots,0), $$ where the natural number $Z_2(K)$ is determined by $K$ as well. Take $Q=2K+Z_2(K)$; if we set $$ y_{k+1}=Ay_k-de_1,\quad k=2K,\dots, Q-1, $$ then $y_Q=0$. Let us note that both numbers $Q$ and $$
Y:=\frac{\max_{0\leq k\leq Q-1}|y_k|}{d} $$ are determined by $K$ (and hence, by $\mbox{${\cal L}$}$).
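As a quick sanity check (ours, not part of the original argument), the construction above can be carried out numerically for the $2\times 2$ Jordan block, taking $A=B$ (i.e., ignoring the block $P$) and using exact rational arithmetic: the sequence indeed closes up ($y_Q=0$), and every step deviates from the linear dynamics $v\mapsto Av$ by exactly $d$. The function names below are illustrative.

```python
from fractions import Fraction

def jordan_step(v):
    """Apply the 2x2 Jordan block B = [[1, 1], [0, 1]] to v = (u, w)."""
    return (v[0] + v[1], v[1])

def build_sequence(K, d):
    """The finite sequence y_0, ..., y_Q from the case l = 2 (with A = B)."""
    y = [(Fraction(0), Fraction(0))]
    for _ in range(K):                      # y_{k+1} = A y_k + d e_2
        u, w = jordan_step(y[-1]); y.append((u, w + d))
    for _ in range(K):                      # y_{k+1} = A y_k - d e_2
        u, w = jordan_step(y[-1]); y.append((u, w - d))
    Z2 = int(y[-1][0] / d)                  # y_{2K} = (Z_2(K) d, 0)
    for _ in range(Z2):                     # y_{k+1} = A y_k - d e_1
        u, w = jordan_step(y[-1]); y.append((u - d, w))
    return y

K, d = 5, Fraction(1, 100)
y = build_sequence(K, d)
assert y[K][1] == K * d                     # second coordinate of y_K is K d
assert y[2 * K][1] == 0                     # second coordinate of y_{2K} is 0
assert y[-1] == (Fraction(0), Fraction(0))  # y_Q = 0: the sequence closes up
assert all(                                 # each step deviates from A by exactly d
    max(abs(y[k + 1][0] - jordan_step(y[k])[0]),
        abs(y[k + 1][1] - jordan_step(y[k])[1])) == d
    for k in range(len(y) - 1))
```

Extending the resulting finite sequence periodically with period $Q = $ `len(y) - 1` gives exactly the periodic sequence used in the proof.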
Now we construct a $Q$-periodic sequence $y_k,k\in\mbox{$\mathds{Z}$},$ that coincides with the above sequence for $k=0,\dots,Q$.
We set $x_k=\exp_p(y_k)$ and claim that if $d$ is small enough, then $\xi=\{x_k\}$ is a $4d$-pseudotrajectory of $f$ (and this pseudotrajectory is $Q$-periodic by construction).
Indeed, we know that
$|y_k|\leq Yd$ for $k\in\mbox{$\mathds{Z}$}$. Since $\phi(v)=o(|v|)$ as $|v|\to 0$, \begin{equation} \label{5}
|\phi(y_k)|<d,\quad k\in\mbox{$\mathds{Z}$}, \end{equation} if $d$ is small enough.
The definition of $\{y_k\}$ implies that \begin{equation} \label{6}
|y_{k+1}-Ay_{k}|=d,\quad k\in\mbox{$\mathds{Z}$}. \end{equation}
Note that $$ \exp^{-1}_p(f(x_k))=F(y_k)=Ay_k+\phi(y_k); $$ thus, it follows from (\ref{5}) and (\ref{6}) that $$
|y_{k+1}-\exp^{-1}_p(f(x_k))|\leq |y_{k+1}-Ay_{k}|+|\phi(y_k)|<2d, $$ which implies that $\xi=\{x_k\}$ is a $4d$-pseudotrajectory of $f$ if $d$ is small enough.
Now we estimate the distances between points of trajectories of the mapping $F$ and its linearization.
Let us take a vector $q_0\in T_pM$ and assume that the sequence $q_k=F^k(q_0)$ belongs to the ball $|v|\leq (Y+8\mbox{${\cal L}$})d$ for $0\leq k\leq K$. Let $r_k=A^kq_0$ (we impose no conditions on $r_k$ since below we estimate $\phi$ at points $q_k$ only).
Take a small number $\mu\in(0,1)$ (to be chosen later) and assume that $d$ is small enough, so that the inequality $$
|\phi(v)|\leq\mu|v| $$
holds for $|v|\leq (Y+8\mbox{${\cal L}$})d$.
Then $$
|q_1|\leq|Aq_0|+|\phi(q_0)|\leq (N+1)|q_0|,\dots,
|q_{k}|\leq|Aq_{k-1}|+|\phi(q_{k-1})|\leq (N+1)^k|q_0| $$ for $1\leq k\leq K$, and $$
|q_1-r_1|=|Aq_0+\phi(q_0)-Aq_0|\leq\mu|q_0|, $$ $$
|q_2-r_2|=|Aq_1+\phi(q_1)-Ar_1|\leq N|q_1-r_1|+\mu|q_1|
\leq \mu(2N+1)|q_0|, $$ $$
|q_3-r_3|\leq N|q_2-r_2|+\mu|q_2|
\leq \mu(N(2N+1)+(N+1)^2)|q_0|, $$ and so on.
Thus, there exists a number $\nu=\nu(K,N)$ such that $$
|q_k-r_k|\leq \mu\nu|q_0|,\quad 0\leq k\leq K. $$ We take $\mu=1/\nu$, note that $\mu=\mu(K,N)$, and get the inequalities \begin{equation} \label{7}
|q_k-r_k|\leq |q_0|,\quad 0\leq k\leq K, \end{equation} for $d$ small enough.
Since $f\in\mbox{LipPerSh}$, for $d$ small enough, the $Q$-periodic $4d$-pseudotrajectory $\xi$ is $4\mbox{${\cal L}$} d$-shadowed by a periodic trajectory. Let $p_0$ be a point of this trajectory such that \begin{equation} \label{8} \mbox{dist}(p_k,x_k)\leq 4\mbox{${\cal L}$} d,\quad k\in\mbox{$\mathds{Z}$}, \end{equation} where $p_k=f^k(p_0)$. Let $q_k=\exp^{-1}_p(p_k)$.
The inequalities $|y_k|\leq Yd$ and (\ref{8}) imply that \begin{equation} \label{9}
|q_k|\leq |y_k|+2\mbox{dist}(p_k,x_k)\leq (Y+8\mbox{${\cal L}$})d,\quad k\in\mbox{$\mathds{Z}$}. \end{equation}
Note that $|q_0|\leq 8\mbox{${\cal L}$} d$.
Set $r_k=A^kq_0$; we deduce from estimate (\ref{7}) that if $d$ is small enough, then \begin{equation} \label{10}
|q_K-r_K|\leq |q_0|\leq 8\mbox{${\cal L}$} d. \end{equation}
Denote by $v^{(2)}$ the second coordinate of a vector $v\in T_pM$.
It follows from the structure of the matrix $A$ that \begin{equation} \label{11}
|r_K^{(2)}|=|q_0^{(2)}|\leq 8\mbox{${\cal L}$} d. \end{equation} The relations $$
|y_K^{(2)}|=Kd\mbox{ and } |q_K-y_K|\leq 8\mbox{${\cal L}$} d $$ imply that \begin{equation} \label{12}
|q_K^{(2)}|\geq Kd-8\mbox{${\cal L}$} d=17\mbox{${\cal L}$} d \end{equation} (recall that $K=25\mbox{${\cal L}$}$).
Estimates (\ref{10})--(\ref{12}) are contradictory. Our lemma is proved in Case 1 for $l=2$.
If $l=1$, then the proof is simpler; the first coordinate of $A^kv$ equals the first coordinate of $v$, and we construct the periodic pseudotrajectory perturbing the first coordinate only.
If $l>2$, the reasoning is parallel to that above; we first perturb the $l$th coordinate to make it $Kd$, and then produce a periodic sequence, successively annihilating the $l$th coordinate, the $(l-1)$st coordinate, and so on.
If $\mbox{$\lambda$}$ is a complex eigenvalue, $\mbox{$\lambda$}=a+bi$, we take a real $2\times 2$ matrix $$ R=\left( \begin{array}{cc} a&-b\\ b&a\\ \end{array} \right) $$ and assume that in representation (\ref{bform}), $B$ is a real $2l\times 2l$ Jordan block: $$ B=\left( \begin{array}{ccccc} R&E_2&0&\ldots&0\\ 0&R&E_2&\ldots&0\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ 0&0&0&\ldots&R \end{array} \right), $$ where $E_2$ is the $2\times 2$ unit matrix.
After that, almost the same reasoning works; we note that $|Rv|=|v|$ for any 2-dimensional vector $v$ and construct periodic pseudotrajectories replacing, for example, formulas (\ref{pst}) by the formulas $$ y_{k+1}=Ay_k+dw_k,\quad k=0,\dots,K-1, $$ where $j$th coordinates of the vector $w_k$
are zero for $j=1,\dots,2l-2,2l+1,\dots,n$, while the 2-dimensional vector corresponding to $(2l-1)$st and $2l$th coordinates has the form $R^kw$ with $|w|=1$, and so on. We leave details to the reader. The lemma is proved.
{\bf Lemma 6. }{\em There exist constants $C>0$ and $\mbox{$\lambda$}\in(0,1)$ depending only on $N$ and $\mbox{${\cal L}$}$ and such that, for any point $p\in\Per(f)$, there exist complementary subspaces $S(p)$ and $U(p)$ of the tangent space $T_pM$ that are $Df$-invariant, i.e.,
(H1) $Df(p)S(p)=S(f(p))$ and $Df(p)U(p)=U(f(p))$,
\noindent and the inequalities
(H2.1) $|Df^j(p)v|\leq C\mbox{$\lambda$}^j|v|, \quad v\in S(p), j\geq 0$,
\noindent and
(H2.2) $|Df^{-j}(p)v|\leq C\mbox{$\lambda$}^j|v|, \quad v\in U(p), j\geq 0$,
\noindent hold}.
{\bf Remark. } Lemma 6 means that the set $\Per(f)$ has all the standard properties of a hyperbolic set, with the exception of compactness.
{\em Proof. } Take a periodic point $p\in\Per(f)$; let $m$ be the minimal period of $p$.
Denote $p_i=f^i(p)$, $A_i = D f(p_i)$, and $B = D f^m(p)$. It follows from Lemma 5 that the matrix $B$ is hyperbolic. Denote by $S(p)$ and $U(p)$ the invariant subspaces of $B$ corresponding to parts of its spectrum inside and outside the unit disk, respectively. Clearly, $S(p)$ and $U(p)$ are invariant with respect to $Df$, $T_{p}M = S(p) \oplus U(p)$, and the following relations hold: \begin{equation}\label{1.1} \lim_{n \to +\infty} B^n v_s = \lim_{n \to +\infty} B^{-n} v_u = 0, \quad v_s \in S(p), v_u \in U(p). \end{equation}
We prove that inequalities (H2.2) hold with $C=16\mbox{${\cal L}$}$ and $\mbox{$\lambda$}=(1+1/(8\mbox{${\cal L}$}))^{-1}$ (inequalities (H2.1) are established by similar reasoning applied to $f^{-1}$ instead of $f$).
Consider an arbitrary nonzero vector $v_u \in U(p)$ and an integer $j\geq 0$. Define sequences $v_i, e_i \in T_{p_i}M$ and $\mbox{$\lambda$}_i > 0$ for $i\geq 0$ as follows: $$
v_0 = v_u, \quad v_{i+1} = A_i v_i, \quad e_i = \frac{v_i}{|v_i|},
\quad \mbox{$\lambda$}_i = \frac{|v_{i+1}|}{|v_i|} = |A_i e_i|. $$ Let $$ \tau= \frac{\mbox{$\lambda$}_{m-1}\cdot \ldots \cdot \mbox{$\lambda$}_1 + \mbox{$\lambda$}_{m-1}\cdot \ldots \cdot \mbox{$\lambda$}_2 + \ldots + \mbox{$\lambda$}_{m-1} + 1}{\mbox{$\lambda$}_{m-1}\cdot \ldots \cdot \mbox{$\lambda$}_0}. $$ Consider the sequence $\{a_i \in \mbox{$\mathds{R}$},\;i\geq 0\}$ defined by the following formulas: \begin{equation} \label{1.2} a_0 = \tau, \quad a_{i+1} = \mbox{$\lambda$}_i a_i -1. \end{equation} Note that \begin{equation} \label{1.3} a_{m} = 0 \quad \mbox{and} \quad a_i >0, \quad i \in [0, m-1]. \end{equation} Indeed, if $a_i\leq 0$ for some $i \in [0, m-1]$, then $a_k<0$ for $k \in [i+1, m]$.
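As a side check (ours, not part of the original proof), relations (\ref{1.3}) can be verified with exact rational arithmetic for arbitrarily chosen positive rates $\lambda_0,\dots,\lambda_{m-1}$; the function names below are illustrative.

```python
from fractions import Fraction
from math import prod

def tau(lams):
    """tau = (lam_{m-1}...lam_1 + lam_{m-1}...lam_2 + ... + lam_{m-1} + 1)
             / (lam_{m-1}...lam_0)  for rates lam_0, ..., lam_{m-1}."""
    m = len(lams)
    # j = m contributes the empty product, i.e. the summand 1
    numer = sum(prod(lams[j:m]) for j in range(1, m + 1))
    return numer / prod(lams)

def a_sequence(lams):
    """a_0 = tau, a_{i+1} = lam_i * a_i - 1  (recursion (1.2))."""
    a = [tau(lams)]
    for lam in lams:
        a.append(lam * a[-1] - 1)
    return a

lams = [Fraction(3, 2), Fraction(1, 3), Fraction(5), Fraction(7, 4)]
a = a_sequence(lams)
assert a[-1] == 0                   # a_m = 0
assert all(x > 0 for x in a[:-1])   # a_i > 0 for 0 <= i <= m-1
```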
It follows from (\ref{1.1}) that there exists $n > 0$ such that \begin{equation} \label{2.1}
|B^{-n}\tau e_0| < 1. \end{equation}
Consider the finite sequence $\{w_i \in T_{p_i}M,\;i\in[0,m(n+1)]\}$ defined as follows: $$ \begin{cases} w_i=a_i e_i, & \quad i \in [0, m-1], \\ w_{m} = B^{-n}\tau e_0, & \;\\ w_{m+1+i} = A_i w_{m+i}, & \quad i \in [0, mn - 1]. \end{cases} $$ Clearly, $$ w_{km}=B^{k-1-n}\tau e_0,\quad k\in[1,n+1], $$ which means that we can consider $\{w_i\}$ as an $m(n+1)$-periodic sequence defined for $i\in\mbox{$\mathds{Z}$}$.
Let us note that $$ A_iw_i=a_iA_ie_i=
a_i\frac{v_{i+1}}{|v_{i}|},\quad i\in[0,m-2], $$ $$
w_{i+1}=(\mbox{$\lambda$}_ia_i-1)\frac{v_{i+1}}{|v_{i+1}|}
=a_i\frac{v_{i+1}}{|v_{i}|}-e_{i+1},\quad i\in[0,m-2], $$ and $$
A_{m-1}w_{m-1}=a_{m-1}\frac{v_{m}}{|v_{m-1}|}=
\frac{v_{m}}{\mbox{$\lambda$}_{m-1}|v_{m-1}|}=e_m $$ (in the last relation we take into account that $a_{m-1}\mbox{$\lambda$}_{m-1}=1$ since $a_m=0$).
The above relations and condition (\ref{2.1}) imply that \begin{equation} \label{15}
|w_{i+1} - A_i w_i| < 2, \quad i \in \mbox{$\mathds{Z}$}. \end{equation}
Now we take a small $d>0$ and consider the $m(n+1)$-periodic sequence $\xi=\{x_i=\mbox{exp}_{p_i}(dw_i),\;i\in \mbox{$\mathds{Z}$}\}$.
We claim that if $d$ is small enough, then $\xi$ is a $4d$-pseudotrajectory of $f$.
Denote $$ \zeta_{i+1}=\exp^{-1}_{p_{i+1}}(f(x_i))\;\mbox{ and }\;\zeta'_{i+1}=\exp^{-1}_{p_{i+1}}(x_{i+1}). $$ Then $$ \zeta_{i+1}=\exp^{-1}_{p_{i+1}} f(\exp_{p_i}(dw_i))=F_i(dw_i)=A_idw_i+\phi_i(dw_i), $$
where the mapping $F_i$ is defined in (\ref{1}) and $\phi_i(v)=o(|v|)$, and $$ \zeta'_{i+1}=\exp^{-1}_{p_{i+1}}(x_{i+1})=dw_{i+1}. $$ It follows from estimates (\ref{15}) that $$
|\zeta'_{i+1}-\zeta_{i+1}|\leq 2d $$ for small $d$, and $$ \mbox{dist}(f(x_i),x_{i+1})\leq 4d. $$
By Lemma 5, the $m$-periodic trajectory $\{p_i\}$ is hyperbolic; hence, $\{p_i\}$ has a neighborhood in which $\{p_i\}$ is a unique periodic trajectory. It follows that if $d$ is small enough, then the pseudotrajectory $\{x_i\}$ is $4\mbox{${\cal L}$} d$-shadowed by $\{p_i\}$.
The inequalities $\dist(x_i,p_i)\leq 4\mbox{${\cal L}$} d$
imply that $|a_i|=|w_i|\leq 8\mbox{${\cal L}$}$ for $0\leq i\leq m-1$.
Now the equalities $\mbox{$\lambda$}_i = (a_{i+1}+1)/a_i$ imply that if $0\leq i\leq m-1$, then $$ \mbox{$\lambda$}_0\cdot\ldots\cdot\mbox{$\lambda$}_{i-1} =\frac{a_{1}+1}{a_0}\frac{a_{2}+1}{a_{1}}\dots \frac{a_{i}+1}{a_{i-1}}= $$ $$ =\frac{a_{i}+1}{a_0}\left(1+\frac{1}{a_{1}}\right)\dots\left(1+\frac{1}{a_{i-1}}\right)\geq $$ $$ \geq \frac{1}{8\mbox{${\cal L}$}}\left(1+\frac{1}{8\mbox{${\cal L}$}}\right)^{i-1}> \frac{1}{16\mbox{${\cal L}$}}\left(1+\frac{1}{8\mbox{${\cal L}$}}\right)^{i} $$ (we take into account that $1+1/(8\mbox{${\cal L}$})<2$ since $\mbox{${\cal L}$}\geq 1$).
It remains to note that $$
|Df^i(p)v_u|=\mbox{$\lambda$}_{i-1}\cdots\mbox{$\lambda$}_0|v_u|,\quad 0\leq i\leq m-1, $$ and that we started with an arbitrary vector $v_u\in U(p)$.
This proves our statement for $j\leq m-1$. If $j\geq m$, we take an integer $k>0$ such that $km>j$ and repeat the above reasoning for the periodic trajectory $p_0,\dots,p_{km-1}$ (note that we have not used the condition that $m$ is the minimal period). Lemma 6 is proved.
{\bf Lemma 7. } {\em If} $f\in\mbox{LipPerSh}$, {\em then $f$ satisfies Axiom A.}
{\em Proof. } Denote by $P_l$ the set of points $p\in\Per(f)$ of index $l$ (as usual, the index of a hyperbolic periodic point is the dimension of its unstable manifold).
Let $R_l$ be the closure of $P_l$. Clearly, $R_l$ is a compact $f$-invariant set. We claim that each $R_l$ is a hyperbolic set. Let $n=\mbox{dim}M$.
Consider a point $q\in R_l$ and fix a sequence of points $p_m\in P_l$ such that $p_m\to q$ as $m\to\infty$. By Lemma 6, there exist complementary subspaces $S(p_m)$ and $U(p_m)$ of $T_{p_{m}}M$ (of dimensions $n-l$ and $l$, respectively) for which estimates (H2.1) and (H2.2) hold.
Standard reasoning shows that, introducing local coordinates in a neighborhood of $(q,T_qM)$ in the tangent bundle of $M$, we can select a subsequence $p_{m_k}$ for which the sequences $S(p_{m_k})$ and $U(p_{m_k})$ converge (in the Grassmann topology) to subspaces of $T_qM$ (let $S_0$ and $U_0$ be the corresponding limit subspaces).
The limit subspaces $S_0$ and $U_0$ are complementary in $T_qM$. Indeed, consider the ``angle'' $\beta_{m_k}$ between the subspaces $S(p_{m_k})$ and $U(p_{m_k})$, which is defined (with respect to the introduced local coordinates in a neighborhood of $(q,T_qM)$) as follows: $$
\beta_{m_k}=\min |v^s-v^u|, $$ where the minimum is taken over all possible pairs of unit vectors $v^s\in S(p_{m_k})$ and $v^u\in U(p_{m_k})$.
It is shown in [16, Lemma 12.1] that the values $\beta_{m_k}$ are estimated from below by a positive constant $\alpha=\alpha(C,\mbox{$\lambda$},N)$. Clearly, this implies that the subspaces $S_0$ and $U_0$ are complementary.
It is easy to show that the limit subspaces $S_0$ and $U_0$ are unique (which means, of course, that the sequences $S(p_m)$ and $U(p_m)$ converge). For the convenience of the reader, we prove this statement (our reasoning is close to that of [16]).
To get a contradiction, assume that there is a subsequence $p_{m_i}$ for which the sequences $S(p_{m_i})$ and $U(p_{m_i})$ converge to complementary subspaces $S_1$ and $U_1$ different from $S_0$ and $U_0$ (for definiteness, we assume that $S_0\setminus S_1\neq\emptyset$).
Due to the continuity of $Df$, the inequalities $$
|Df^j(q)v|\leq C\mbox{$\lambda$}^j|v|,\quad v\in S_0\cup S_1, $$ and $$
|Df^j(q)v|\geq C^{-1}\mbox{$\lambda$}^{-j}|v|,\quad v\in U_0\cup U_1, $$ hold for $j\geq 0$.
Since $$ T_qM=S_0\oplus U_0=S_1\oplus U_1, $$ our assumption implies that there is a vector $v\in S_0$ such that $$ v=v^s+v^u,\quad v^s\in S_1, v^u\in U_1, v^u\neq 0. $$ Then $$
|Df^j(q)v|\leq C\mbox{$\lambda$}^j|v|\to 0,\quad j\to\infty, $$ and $$
|Df^j(q)v|\geq C^{-1}\mbox{$\lambda$}^{-j}|v^u|-C\mbox{$\lambda$}^j|v^s|\to \infty,\quad j\to\infty, $$ and we get the desired contradiction.
It follows that there are uniquely defined complementary subspaces $S(q)$ and $U(q)$ for $q\in R_l$ with the proper hyperbolicity estimates; the $Df$-invariance of these subspaces is obvious. We have shown that each $R_l$ is a hyperbolic set with $\mbox{dim}S(q)=n-l$ and $\mbox{dim}U(q)=l$ for $q\in R_l$.
If $r\in\Omega(f)$, then there exists a sequence of points $r_m\to r$ as $m\to\infty$ and a sequence of indices $k_m\to\infty$ as $m\to\infty$ such that $f^{k_m}(r_m)\to r$.
Clearly, if we continue the sequence $$ r_m,f(r_m),\dots,f^{k_m-1}(r_m) $$ periodically with period $k_m$, we get a periodic $d_m$-pseudotrajectory of $f$ with $d_m\to 0$ as $m\to\infty$.
Since $f\in \mbox{LipPerSh}$, for large $m$ there exist periodic points $p_m$ such that $\mbox{dist}(p_m,r_m)\to 0$ as $m\to\infty$. Thus, periodic points are dense in $\Omega(f)$.
Since hyperbolic sets with different dimensions of the subspaces $U(q)$ are disjoint, we get the equality $$ \Omega(f)=R_0\cup\dots\cup R_{n}, $$ which implies that $\Omega(f)$ is hyperbolic. The lemma is proved.
It was mentioned above that if a diffeomorphism $f$ satisfies Axiom A, then its nonwandering set can be represented as a disjoint union of a finite number of basic sets (see representation (\ref{spe})).
The basic sets $\Omega_i$ have stable and unstable ``manifolds'': $$ W^s(\Omega_i)=\{x\in M:\;\mbox{dist}(f^k(x),\Omega_i)\to 0,\quad k\to\infty\} $$ and $$ W^u(\Omega_i)=\{x\in M:\;\mbox{dist}(f^k(x),\Omega_i)\to 0,\quad k\to-\infty\}. $$ If $\Omega_i$ and $\Omega_j$ are basic sets, we write $\Omega_i\to\Omega_j$ if the intersection $$ W^u(\Omega_i)\cap W^s(\Omega_j) $$ contains a wandering point.
We say that $f$ has a 1-cycle if there is a basic set $\Omega_i$ such that $\Omega_i\to\Omega_i$.
We say that $f$ has a $t$-cycle if there are $t>1$ basic sets $$ \Omega_{i_1},\dots,\Omega_{i_t} $$ such that $$ \Omega_{i_1}\to\dots\to\Omega_{i_t}\to\Omega_{i_1}. $$
{\bf Lemma 8. } {\em If } $f\in\mbox{LipPerSh}$, {\em then $f$ has no cycles.}
{\em Proof. } To simplify presentation, we prove that $f$ has no 1-cycles (in the general case, the idea is literally the same, but the notation is heavy).
To get a contradiction, assume that $$ p\in(W^u(\Omega_i)\cap W^s(\Omega_i))\setminus\Omega(f). $$ In this case, there are sequences of indices $j_m,k_m\to\infty$ as $m\to\infty$ such that $$ f^{-j_m}(p),f^{k_m}(p)\to\Omega_i,\quad m\to\infty. $$ Since the set $\Omega_i$ is compact, we may assume that $$ f^{-j_m}(p)\to q\in\Omega_i\;\mbox{ and}\;f^{k_m}(p)\to r\in\Omega_i. $$ Since $\Omega_i$ contains a dense positive semi-trajectory, there exist points $s_m\to r$ and indices $l_m>0$ such that $f^{l_m}(s_m)\to q$ as $m\to\infty$.
Clearly, if we continue the sequence $$ p,f(p),\dots,f^{k_m-1}(p),s_m,\dots,f^{l_m-1}(s_m),f^{-j_m}(p),\dots,f^{-1}(p) $$ periodically with period $k_m+l_m+j_m$, we get a periodic $d_m$-pseudotrajectory of $f$ with $d_m\to 0$ as $m\to\infty$.
Since $f\in\mbox{LipPerSh}$, there exist periodic points $p_m$ (for $m$ large enough) such that $p_m\to p$ as $m\to\infty$, and we get the desired contradiction with the assumption that $p\notin\Omega(f)$. The lemma is proved.
Lemmas 5--8 show that $\mbox{LipPerSh}\subset\Omega S$.
1. S. Yu. Pilyugin, {\em Shadowing in Dynamical Systems}, Lecture Notes Math., vol. 1706, Springer, Berlin, 1999.
2. K. Palmer, {\em Shadowing in Dynamical Systems. Theory and Applications}, Kluwer, Dordrecht, 2000.
3. D. V. Anosov, {\em On a class of invariant sets of smooth dynamical systems}, Proc. 5th Int. Conf. on Nonlin. Oscill., {\bf 2}, Kiev, 1970, 39-45.
4. R. Bowen, {\em Equilibrium States and the Ergodic Theory of Anosov Diffeomorphisms}, Lecture Notes Math., vol. 470, Springer, Berlin, 1975.
5. C. Robinson, {\em Stability theorems and hyperbolicity in dynamical systems}, Rocky Mount. J. Math., {\bf 7}, 1977, 425-437.
6. A. Morimoto, {\em The method of pseudo-orbit tracing and stability of dynamical systems}, Sem. Note {\bf 39}, Tokyo Univ., 1979.
7. K. Sawada, {\em Extended $f$-orbits are approximated by orbits}, Nagoya Math. J., {\bf 79}, 1980, 33-45.
8. P. Ko\'scielniak, {\em On genericity of shadowing and periodic shadowing property}, J. Math. Anal. Appl., {\bf 310}, 2005, 188-196.
9. S. Yu. Pilyugin, {\em Variational shadowing}, Discrete Contin. Dyn. Syst. (accepted).
10. K. Sakai, {\em Pseudo orbit tracing property and strong transversality of diffeomorphisms of closed manifolds}, Osaka J. Math., {\bf 31}, 1994, 373-386.
11. S. Yu. Pilyugin, A. A. Rodionova, and K. Sakai, {\em Orbital and weak shadowing properties}, Discrete Contin. Dyn. Syst., {\bf 9}, 2003, 287-308.
12. F. Abdenur and L. J. Diaz, {\em Pseudo-orbit shadowing in the $C^1$ topology}, Discrete Contin. Dyn. Syst., {\bf 7}, 2003, 223-245.
13. S. Yu. Pilyugin and S. B. Tikhomirov, {\em Lipschitz shadowing implies structural stability} (to appear).
14. S. Yu. Pilyugin, {\em Spaces of Dynamical Systems} [in Russian], Reg. Chaotic Dynamics, Moscow-Izhevsk, 2008.
15. S. Yu. Pilyugin, K. Sakai, and O. A. Tarakanov, {\em Transversality properties and $C^1$-open sets of diffeomorphisms with weak shadowing}, Discrete Contin. Dyn. Syst., {\bf 9}, 2003, 287-308.
16. O. B. Plamenevskaya, {\em Weak shadowing for two-dimensional diffeomorphisms}, Mat. Zametki, {\bf 65}, 1999, 477-480.
17. N. Aoki, {\em The set of Axiom A diffeomorphisms with no cycle}, Bol. Soc. Brasil. Mat. (N.S.), {\bf 23}, 1992, 21-65.
18. S. Hayashi, {\em Diffeomorphisms in $\mathcal{F}^1(M)$ satisfy Axiom A}, Ergod. Theory Dyn. Syst., {\bf 12}, 1992, 233-253.
19. S. Yu. Pilyugin, {\em Sets of diffeomorphisms with various limit shadowing properties}, J. Dynamics Differ. Equat., {\bf 19}, 2007, 747-775.
20. S. Yu. Pilyugin, {\em Introduction to Structurally Stable Systems of Differential Equations}, Birkh\"auser-Verlag, 1994.
\end{document} |
\begin{document}
\title{Euler--Mahonian Statistics Via Polyhedral Geometry}
\author{Matthias Beck} \address{Department of Mathematics\\
San Francisco State University\\
San Francisco, CA 94132} \email{mattbeck@sfsu.edu}
\author{Benjamin Braun} \address{Department of Mathematics\\
University of Kentucky\\
Lexington, KY 40506--0027} \email{benjamin.braun@uky.edu}
\date{21 May 2013}
\thanks{The authors would like to thank Ira Gessel, Carla Savage, and the anonymous referees for their valuable comments and suggestions. This research was partially supported by the NSF through grants DMS-0810105, DMS-1162638 (Beck) and DMS-0758321 (Braun), and by a SQuaRE at the American Institute of Mathematics.}
\begin{abstract} A variety of descent and major-index statistics have been defined for symmetric groups, hyperoctahedral groups, and their generalizations. Typically associated to a pair of such statistics is an \emph{Euler--Mahonian distribution}, a bivariate polynomial encoding the statistics; such distributions often appear in rational bivariate generating-function identities. We use techniques from polyhedral geometry to establish new multivariate identities generalizing those giving rise to many of the known Euler--Mahonian distributions. The original bivariate identities are then specializations of these multivariate identities. As a consequence of these new techniques we obtain bijective proofs of the equivalence of the bivariate distributions for various pairs of statistics. \end{abstract}
\maketitle
\tableofcontents
\section{Introduction}
The symmetric group $S_n$ is the group of permutations of $[n]:=\{1,2,\ldots,n\}$, also realized as the Coxeter group $A_{n-1}$ yielding symmetries of a simplex. For a permutation $\pi \in S_n$, the descent set is a classical object of study in combinatorics.
\begin{definition} For $\pi\in S_n$, the \emph{descent set} of $\pi$ is \[ \mathrm{Des}(\pi) := \bigl\{ j \in [n-1] : \, \pi(j) > \pi(j+1) \bigr\} \, . \] The \emph{descent statistic} is $\mathrm{des} (\pi) := \# \mathrm{Des}(\pi)$. \end{definition} The descent statistic is encoded in the \emph{Eulerian polynomial} $\sum_{ \pi \in S_n } t^{ \mathrm{des} (\pi) }$ and the most basic identity for Eulerian polynomials is \begin{equation}\label{euleriangenfcteq}
\sum_{ k \ge 0 } (k+1)^n \, t^k = \frac{ \sum_{ \pi \in S_n } t^{ \mathrm{des} (\pi) } }{ \left( 1 - t \right)^{ n+1 } } \, . \end{equation} Euler used this identity to \emph{define} Eulerian polynomials in \cite{eulereulerian} which he needed in his study of what is now called the Riemann $\zeta$-function; it is unlikely that he was aware of the connection of his polynomials to descent statistics. For more on the interesting history regarding Eulerian polynomials, descent statistics, and algebraic geometry, see \cite{hirzebrucheulerian} and \cite[Chapter 1 Notes]{stanleyec1}.
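Identity \eqref{euleriangenfcteq} is easy to check computationally for small $n$ (this verification sketch is ours, not part of the text): since $[t^k]\,(1-t)^{-(n+1)} = \binom{n+k}{n}$, the $t^k$-coefficient of the right-hand side is $\sum_j A_j \binom{n+k-j}{n}$, where $A_j$ counts permutations with $j$ descents, and this must equal $(k+1)^n$.

```python
from itertools import permutations
from math import comb

def eulerian_poly(n):
    """Coefficient list of the Eulerian polynomial sum_{pi in S_n} t^{des(pi)}."""
    coeffs = [0] * n
    for pi in permutations(range(1, n + 1)):
        des = sum(1 for j in range(n - 1) if pi[j] > pi[j + 1])
        coeffs[des] += 1
    return coeffs

# coefficientwise comparison of both sides of the identity:
for n in range(1, 6):
    A = eulerian_poly(n)
    for k in range(10):
        rhs = sum(A[j] * comb(n + k - j, n) for j in range(min(k, n - 1) + 1))
        assert rhs == (k + 1) ** n
```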
Equation \eqref{euleriangenfcteq} has inspired a host of generalizations and extensions. The first such extension is the following $q$-analogue of \eqref{euleriangenfcteq}, which in this form is due to Carlitz \cite{carlitzeulerian}, though with some effort one can derive it from the works of MacMahon \cite[Volume 2, Chapter IV, \S462]{macmahon}. This extension involves a joint distribution of the descent statistic and the major index, defined as follows, together with the notation $[m]_q := 1 + q + q^2 + \dots + q^{ m-1 }$. \begin{definition} For $\pi \in S_n$, the \emph{major index} of $\pi$ is \[ \mathrm{maj}(\pi) := \sum_{ j \in \mathrm{Des}(\pi) } j \, . \] \end{definition}
\begin{theorem}[Carlitz] \label{macmahoncarlitz} \[
\sum_{ k \ge 0 } [k+1]_q^n \, t^k
= \frac{ \sum_{ \pi \in S_n } t^{ \mathrm{des} (\pi) } q^{ \mathrm{maj} (\pi) } }{ \prod_{ j=0 }^n \left( 1 - t q^j \right) } \, . \] \end{theorem}
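Theorem~\ref{macmahoncarlitz} can likewise be verified for small $n$ by exact polynomial arithmetic, treating both sides as power series in $t$ whose coefficients are polynomials in $q$. The following sketch (illustrative only; the helper functions are ours) checks the case $n=3$:

```python
from itertools import permutations

n, N = 3, 8  # N = truncation order in t

def qmul(a, b):
    """Multiply two polynomials in q, given as coefficient lists."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def qadd(a, b):
    out = [0] * max(len(a), len(b))
    for i, ai in enumerate(a):
        out[i] += ai
    for j, bj in enumerate(b):
        out[j] += bj
    return out

def trim(p):
    while len(p) > 1 and p[-1] == 0:
        p = p[:-1]
    return p

# Left side: t-coefficients of sum_k [k+1]_q^n t^k, with [m]_q = 1 + q + ... + q^{m-1}.
lhs = []
for k in range(N):
    p = [1]
    for _ in range(n):
        p = qmul(p, [1] * (k + 1))
    lhs.append(p)

# Numerator: sum over S_n of t^{des} q^{maj}, stored by power of t.
num = [[0] for _ in range(n)]
for pi in permutations(range(1, n + 1)):
    D = [j for j in range(1, n) if pi[j - 1] > pi[j]]
    num[len(D)] = qadd(num[len(D)], [0] * sum(D) + [1])

# Multiply lhs by the denominator prod_{j=0}^n (1 - t q^j), one factor at a time.
prod = [list(p) for p in lhs]
for j in range(n + 1):
    new = [list(p) for p in prod]
    for k in range(1, N):
        new[k] = qadd(new[k], [-c for c in [0] * j + prod[k - 1]])
    prod = new

print([trim(p) for p in prod[:n]])  # [[1], [0, 2, 2], [0, 0, 0, 1]], matching num
```

The product agrees with the Euler--Mahonian numerator $1 + (2q+2q^2)t + q^3t^2$ and vanishes in all higher powers of $t$, as the theorem predicts.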
Note that \eqref{euleriangenfcteq} follows from Theorem~\ref{macmahoncarlitz} by setting $q=1$. This identity is called the \emph{Carlitz identity}, and the numerator on the right is known as an \emph{Euler--Mahonian distribution} due to the relation with Euler's work and MacMahon's original introduction of the major index. The search for further generalizations of this identity has focused on finding new identities of the form \begin{equation}\label{statformgenfn}
\sum_{ k \ge 0 } [k+1]_q^n \, t^k
= \frac{ \sum_{ g \in G_n } t^{ \mathrm{stat}_1 (g) } q^{ \mathrm{stat}_2 (g) } }{ \prod_{ j=0 }^n h_j(t,q) } \, \end{equation} for various families of groups $G_n$ and statistics $\mathrm{stat}_1$ and $\mathrm{stat}_2$ defined on elements of $G_n$, together with naturally occurring families of functions $h_j(t,q)$. This search has been successful, also producing analogous generalizations of the identities \eqref{wreatheulerianfcteq} and \eqref{deuleriangenfcteq} discussed in the next section. To our knowledge, there are three general approaches to proving such identities: \begin{itemize} \item via combinatorial/bijective proofs in the theory of partitions and their extensions; \item via connections between permutation statistics and the theory of Coxeter groups, including connections to invariant theory and the coinvariant algebra of a Weyl group; and \item via the theory of symmetric/quasisymmetric functions. \end{itemize} For more information regarding the first two approaches, see the citations listed throughout this paper. For examples of the symmetric/quasisymmetric function approach, see \cite{hyatt,mendesremmel,shareshianwachseulerian}.
Our goal is to provide new multivariate generalizations of these identities using polyhedral geometry and lattice-point enumeration; as a consequence, we obtain new proofs of two-variable identities in the form of \eqref{statformgenfn}. One of the benefits of the geometric approach is that it is relatively simple, the key ingredients being the triangulation of the unit cube by the braid arrangement together with careful choices of ray generators for unimodular cones. Another benefit is that bijective proofs of the equidistribution of various pairs of statistics are obtained as immediate corollaries.
As we discuss in Remark~\ref{hilbertseries}, our multivariate identities can be viewed as Hilbert-series identities for various finely-graded algebras, i.e., algebras equipped with an $\mathbb{N}^n$-grading. A Hilbert-series approach to multivariate extensions of these identities has previously been used in \cite{ABRmultivariate} and subsequent papers, emphasizing the use of descent bases for coinvariant algebras. Our algebras and specializations are in some sense more straightforward than the previously considered ones, because the bivariate identities arise as simple specializations of our multivariate identities, requiring minimal or no additional substitutions and algebraic manipulations. The geometric perspective also allows us to avoid the use of straightening laws and other algebraic techniques regarding coinvariant algebras.
Our paper is structured as follows. In Section~\ref{singlevariablebackground}, we discuss analogues of \eqref{euleriangenfcteq} for generalizations of permutation groups. In Section~\ref{geometry}, we discuss the results we will need from integer-point enumeration and polyhedral geometry. In Section~\ref{asection}, we use polyhedral geometry to prove Theorem~\ref{Atheorem}; this proof serves as a model for all the proofs in the paper. We also briefly discuss connections between our approach, the theory of $P$-partitions, and the theory of affine semigroup algebras.
Section~\ref{wreathsection} contains most of our new results in the general setting of wreath products of the form $\mathbb{Z}_r\wr~S_n$. These results generalize known bivariate identities due to Bagno, Bagno--Biagioli, and Chow--Mansour, which are themselves generalizations of type-$B$ results due to Adin--Brenti--Roichman and Chow--Gessel. As these original type-$B$ results have been of particular interest, we state our multivariate identities in this special case in Section~\ref{bigbsection}. Also in Section~\ref{bigbsection} is a type-$B$ extension of an identity due to Chow--Gessel, one which in our approach relies heavily on the type-$B$ Coxeter arrangement; we do not know of an obvious extension of this to the wreath product case. We close the paper with Section~\ref{dsection}, where we prove new type-$D$ generating-function identities.
\section{Generalized permutation groups and descents}\label{singlevariablebackground}
We discuss in this section analogues of \eqref{euleriangenfcteq} for hyperoctahedral groups, type-$D$ Coxeter groups, and wreath products of cyclic groups with symmetric groups.
The wreath product $\mathbb{Z}_r \wr S_n$ of a cyclic group of order $r$ with $S_n$ consists of pairs $(\pi,\epsilon)$ where $\pi \in S_n$ and $\epsilon\in \{\omega^{0},\omega^{1}, \ldots, \omega^{r-1}\}^n$ for $\omega:=e^{2\pi i / r}$ a primitive $r^{\text{th}}$ root of unity; see \cite{JamesKerber}. Thus, $\epsilon$ is a sequence of powers of an $r^{\text{th}}$ root of unity. Elements of these groups are often called \emph{colored}, or \emph{indexed}, permutations.
\begin{remark} By convention, for elements of $\mathbb{Z}_r \wr S_n$ we define additional values of $\pi$ and $\epsilon$ as follows: $\pi_{n+1}:=n+1$, $\epsilon_{n+1}:=1$, $\pi_{0}:=0$, and $\epsilon_{0}:=1$. \end{remark}
We will find it convenient to use \emph{window notation} for elements of wreath products. If $\epsilon_j=\omega^{c_j}$, then we will denote $(\pi,\epsilon)$ as the \emph{window} $[\pi(1)^{c_1} \, \pi(2)^{c_2} \, \cdots \, \pi(n)^{c_n}]$. We use the notation $j^{c_j}$ and $(\omega^{c_j},j)$ interchangeably for elements of $\{\omega^0,\omega^1,\ldots,\omega^{r-1}\}\times [n]$. It is sometimes convenient to refer to $\pi(j)^{c_j}$ as $\pi(j)$ with \emph{color} $c_j$.
Because we will need to use inverses for these group elements, and for the sake of clarity, we review the algebraic structure of wreath products. The element $(\pi,\epsilon)\in\mathbb{Z}_r \wr S_n$ can be identified with the permutation matrix for $\pi$ where the $1$ in position $(\pi(i),i)$ is replaced by $\epsilon_i$. The group operation in $\mathbb{Z}_r \wr S_n$ is then given by matrix multiplication where entry-by-entry multiplication of non-zero terms is given by the group operation in $\mathbb{Z}_r$.
We next consider the special case of the hyperoctahedral group $B_n$, i.e., the Coxeter group yielding symmetries of a $\pm 1$-cube. The group $B_n$ arises as the wreath product $\mathbb{Z}_2\wr S_n$ and thus consists of \emph{signed permutations} (see, e.g., \cite{reinersignedpermutationstats}), i.e., pairs $(\pi, \epsilon)$ where $\pi \in S_n$ and $\epsilon \in \left\{ \pm 1 \right\}^n$. Because of this structure, it is common to associate the elements of $B_n$ to permutations $g$ of $[-n,n]\setminus \{0\}$ satisfying $g(-i)=-g(i)$ via the following map. To the element $(\pi,\epsilon)\in B_n$ we assign the set permutation $g_{(\pi,\epsilon)}$ given by \[ g_{(\pi,\epsilon)}(i)=\epsilon_i\pi(i) \, . \] Thus, we will interchangeably write $j^1$ and $-j$ when using window notation and in definitions.
\begin{example}\label{BnCompEx} In $B_4=\mathbb{Z}_2\wr~S_4$, the composition $[4^1 \, 1 \, 2^1 \, 3^1] \circ [3 \, 1^1 \, 4^1 \, 2]$ is equal to $[2^1 \, 4 \, 3 \, 1]$, since this composition maps, for example, $2\mapsto 1^1$ via $[3 \, 1^1 \, 4^1 \, 2]$ and $1^1\mapsto 4$ via $[4^1 \, 1 \, 2^1 \, 3^1]$, yielding \[ 2\mapsto 1^1 \mapsto (4^1)^1 =4 \, . \] This takes the matrix multiplication form \[ \left[ \begin{array}{cccc} 0 & 1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -1 \\ -1 & 0 & 0 & 0 \\ \end{array}
\right] \left[ \begin{array}{cccc} 0 & -1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ \end{array}
\right] = \left[ \begin{array}{cccc} 0 & 0 & 0 & 1 \\ -1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ \end{array}
\right] \, . \] The element $[4^1 \, 1 \, 2^1 \, 3^1]^{-1}$ is given by $[2 \, 3^1 \, 4^1 \, 1^1]$, since, for example, $[4^1 \, 1 \, 2^1 \, 3^1]$ sends $3\mapsto 2^1$, requiring that the inverse send $2\mapsto 3^1$. \end{example}
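The matrix realization makes these computations easy to reproduce by machine. The following sketch (function names are ours; illustrative only) encodes a signed permutation as a list of $(\text{value},\text{sign})$ pairs, with $j^1$ stored as sign $-1$, and recovers both the composition and the inverse from Example~\ref{BnCompEx}; note that taking the transpose computes the inverse only in the case $r=2$, where each nonzero entry $\pm 1$ is its own inverse:

```python
def to_matrix(window, n=4):
    """Signed permutation matrix: entry (pi(i)-1, i-1) holds the sign eps_i."""
    M = [[0] * n for _ in range(n)]
    for i, (v, s) in enumerate(window):
        M[v - 1][i] = s
    return M

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def to_window(M):
    """Read the window notation back off the columns of a signed matrix."""
    n = len(M)
    w = []
    for i in range(n):
        for v in range(n):
            if M[v][i] != 0:
                w.append((v + 1, M[v][i]))
    return w

g = [(4, -1), (1, 1), (2, -1), (3, -1)]   # [4^1 1 2^1 3^1]
h = [(3, 1), (1, -1), (4, -1), (2, 1)]    # [3 1^1 4^1 2]

comp = to_window(mat_mul(to_matrix(g), to_matrix(h)))
print(comp)  # [(2, -1), (4, 1), (3, 1), (1, 1)], i.e. [2^1 4 3 1]

# For r = 2 the inverse matrix is the transpose.
Mg = to_matrix(g)
inv = to_window([[Mg[j][i] for j in range(4)] for i in range(4)])
print(inv)   # [(2, 1), (3, -1), (4, -1), (1, -1)], i.e. [2 3^1 4^1 1^1]
```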
For elements of $B_n$, there are several definitions of descents in the literature; we provide three of them here. While the first applies to $B_n$, the latter two are defined for all $\mathbb{Z}_r \wr S_n$.
\begin{definition}\label{BnNatDes} For an element $(\pi,\epsilon)\in B_n$, the \emph{naturally ordered descent set} is \begin{equation}\label{Bdescentdef}
\mathrm{NatDes}(\pi, \epsilon) := \bigl\{ j \in \left\{ 0, 1, \dots, n-1 \right\} : \, \epsilon_j \pi(j) > \epsilon_{ j+1 } \pi(j+1) \bigr\} \, , \end{equation} with the convention $\epsilon_0 \pi(0) = 0$. The \emph{natural descent statistic} for $B_n$ is $\mathrm{natdes}(\pi,\epsilon):=\#\mathrm{NatDes}(\pi,\epsilon)$. \end{definition} The reason for calling this the \emph{naturally} ordered descent set is that it uses the natural order $-n<-n+1<\cdots<-1<1<2<\cdots<n$ on the integers. For example, the permutation \[ [3^1 \, 2 \, 1^1 \, 4^1 ] \] in $B_4$ has descents in the zeroth, second, and third positions.
In \cite{Steingrimsson}, Steingr{\'{\i}}msson defined the following descent set for elements of $\mathbb{Z}_r \wr S_n$.
\begin{definition}\label{BnStDes} Totally order the elements of $\{\omega^0,\omega^1,\ldots,\omega^{r-1}\}\times [n]$ by $j^{c_j}<k^{c_k}$ if $c_j<c_k$ or if both $c_j=c_k$ and $j<k$ hold. For an element $(\pi, \epsilon) = [\pi(1)^{c_1} \, \pi(2)^{c_2} \, \cdots \, \pi(n)^{c_n}]$ in $\mathbb{Z}_r \wr S_n$, \emph{Steingr{\'{\i}}msson's descent set} is \begin{equation}\label{Steinwreathdescentdef}
\mathrm{StDes}(\pi, \epsilon) := \bigl\{ j \in \left\{ 1, \dots, n \right\} : \, \pi(j)^{c_j}>\pi(j+1)^{c_{j+1}} \bigr\} . \end{equation} \emph{Steingr{\'{\i}}msson's descent statistic} is $\mathrm{stdes} (\pi,\epsilon) := \#\mathrm{StDes}(\pi,\epsilon)$. \end{definition}
As an example, observe that with Steingr{\'{\i}}msson's ordering we have $\{\omega^0,\omega^1,\omega^{2}\}\times [3]$ ordered as \[ 1^0<2^0<3^0<1^1<2^1<3^1<1^2<2^2<3^2 \, , \] and the permutation \[ [2^2 \, 3^2 \, 1^1 ] \] has descents in positions $2$ and $3$.
Finally, we define the following closely-related descent set. This definition differs from Steingr{\'{\i}}msson's both in the role played by the order of the roots of unity and in the indices where descents may occur.
\begin{definition}\label{BnDes} Totally order the elements of $\{\omega^{r-1},\omega^{r-2},\ldots,\omega^{0}\}\times [n]$ by $j^{c_j}<k^{c_k}$ if $c_j>c_k$ or if both $c_j=c_k$ and $j<k$ hold. For an element $(\pi, \epsilon) = [\pi(1)^{c_1} \, \pi(2)^{c_2} \, \cdots \, \pi(n)^{c_n}]$ in $\mathbb{Z}_r \wr S_n$, the \emph{descent set} is \begin{equation}\label{wreathdescentdef}
\mathrm{Des}(\pi, \epsilon) := \bigl\{ j \in \left\{ 0, \dots, n-1 \right\} : \, \pi(j)^{c_j}>\pi(j+1)^{c_{j+1}} \bigr\} . \end{equation} The \emph{descent statistic} is $\mathrm{des} (\pi,\epsilon) := \#\mathrm{Des}(\pi,\epsilon)$. \end{definition}
As an example, observe that with this order we have $\{\omega^0,\omega^1,\omega^{2}\}\times [3]$ ordered as \[ 1^2<2^2<3^2<1^1<2^1<3^1<1^0<2^0<3^0 \, , \] and the permutation \[ [3^2 \, 2^0 \, 1^1 ] \] has descents in positions $0$ and $2$.
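The three descent conventions are easy to compare computationally. In the following sketch (our own encoding, for illustration: a colored permutation is a list of $(\text{value},\text{color})$ pairs), each total order is implemented as a sort key, and the three examples above are reproduced:

```python
def natdes(window):
    """NatDes for B_n: entries (value, color) with color in {0, 1}."""
    vals = [0] + [(-v if c else v) for v, c in window]
    return {j for j in range(len(window)) if vals[j] > vals[j + 1]}

def stdes(window, n):
    # Steingrimsson's order: smaller color first; convention pi(n+1) = (n+1)^0.
    key = lambda vc: (vc[1], vc[0])
    w = list(window) + [(n + 1, 0)]
    return {j + 1 for j in range(n) if key(w[j]) > key(w[j + 1])}

def des(window, n):
    # Order of the later definition: larger color first; convention pi(0) = 0^0.
    key = lambda vc: (-vc[1], vc[0])
    w = [(0, 0)] + list(window)
    return {j for j in range(n) if key(w[j]) > key(w[j + 1])}

print(natdes([(3, 1), (2, 0), (1, 1), (4, 1)]))  # {0, 2, 3} for [3^1 2 1^1 4^1]
print(stdes([(2, 2), (3, 2), (1, 1)], 3))        # {2, 3}    for [2^2 3^2 1^1]
print(des([(3, 2), (2, 0), (1, 1)], 3))          # {0, 2}    for [3^2 2^0 1^1]
```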
The Eulerian polynomials for wreath products are $\sum_{(\pi,\epsilon)\in \mathbb{Z}_r \wr S_n}t^{\mathrm{des}(\pi,\epsilon)}$, where one may use either of the two wreath product descent definitions or, in the case $r=2$, the natural descent statistic. The resulting analogue of \eqref{euleriangenfcteq} is \begin{equation}\label{wreatheulerianfcteq}
\sum_{ k \ge 0 } (rk+1)^n \, t^k
= \frac{ \sum_{(\pi,\epsilon)\in \mathbb{Z}_r \wr S_n}t^{\mathrm{des}(\pi,\epsilon)} }{(1-t)^{n+1} } \, . \end{equation} This identity appears to have been found by various authors for different descent statistics; for more details, see \cite{brentieulerian,Steingrimsson}.
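Identity \eqref{wreatheulerianfcteq} can also be checked by brute force for small parameters. The sketch below (illustrative; it uses the descent statistic of Definition~\ref{BnDes}) verifies the case $r=3$, $n=2$, where the wreath product Eulerian polynomial turns out to be $1 + 13t + 4t^2$:

```python
from itertools import permutations, product

def des_stat(window, n):
    key = lambda vc: (-vc[1], vc[0])   # larger color first, as in Definition 2.5
    w = [(0, 0)] + list(window)        # convention pi(0) = 0^0
    return sum(1 for j in range(n) if key(w[j]) > key(w[j + 1]))

r, n = 3, 2
euler = [0] * (n + 1)
for pi in permutations(range(1, n + 1)):
    for cols in product(range(r), repeat=n):
        euler[des_stat([(pi[i], cols[i]) for i in range(n)], n)] += 1
print(euler)  # [1, 13, 4]

# Compare with (1 - t)^{n+1} * sum_{k>=0} (rk+1)^n t^k, truncated in t.
N = 10
lhs = [(r * k + 1) ** n for k in range(N)]
binom = [1, -3, 3, -1]  # coefficients of (1 - t)^3
prod = [sum(binom[i] * lhs[k - i] for i in range(min(k + 1, n + 2)))
        for k in range(N)]
print(prod)  # [1, 13, 4, 0, 0, ...]
```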
\section{A geometric perspective}\label{geometry}
\subsection{Simplices and cones}
The forms of equations \eqref{euleriangenfcteq}, \eqref{wreatheulerianfcteq}, and \eqref{deuleriangenfcteq} suggest that one should look at them geometrically as stemming from lattice-point enumeration of the cube $[0,r]^n$ as it is partitioned in various ways; for example, \eqref{euleriangenfcteq} suggests we consider $[0,1]^n$ partitioned by the braid arrangement consisting of the hyperplanes $x_j = x_k$ for $1 \le j < k \le n$. As a result of such partitions, we will encounter certain simplices throughout this work, all of which are (after a suitable change of variables) of the form \[
\Delta_I := \left\{ {\boldsymbol x} \in \mathbb{R}^n :
\begin{array}{ll}
0 \le x_n \le x_{n-1} \le \dots \le x_1 \le 1, \\
x_{j+1} < x_{ j } \text{ if } j \in I
\end{array}
\right\} , \] where $I \subseteq [n]$ is some index set, and we use the convention $x_n > 0$ if $n \in I$.
\begin{remark} The definition of $\Delta_I$ we have given is technically that of a simplex with some of its facets removed. Throughout this work, we will be decomposing cubes into disjoint unions of such objects; the removal of facets will be needed to ensure that our decompositions are disjoint. In the following, to simplify nomenclature, we will freely refer to these partially open objects as \emph{simplices}. Further, for a polyhedron $P$ with some facets removed, we will use the terms \emph{faces} and \emph{vertices} of $P$ to refer to the faces and vertices of the closure of $P$. \end{remark}
The vertices of $\Delta_I$ are ${\boldsymbol 0}, \, {\boldsymbol e}_1 + \dots + {\boldsymbol e}_n, \, {\boldsymbol e}_1 + \dots + {\boldsymbol e}_{n-1}, \dots, \, {\boldsymbol e}_{1 } + {\boldsymbol e}_2, \, {\boldsymbol e}_1$, where ${\boldsymbol e}_j$ is the $j$'th unit vector in $\mathbb{R}^n$. Note that $\Delta_I$ is \emph{unimodular}, i.e., the $n$ edge directions at any vertex of $\Delta_I$ generate $\mathbb{Z}^n$. The \emph{cone over} $\Delta_I$ is the nonnegative span of $\left\{ (1,{\boldsymbol x}) \in \mathbb{R}^{ n+1 } : \, {\boldsymbol x} \in \Delta_I \right\}$, where we encode the ``new" dimension by the variable $x_0$, i.e., \[
\mathrm{cone} \left( \Delta_I \right) := \mathbb{R}_{ \ge 0 } \, {\boldsymbol e}_0 + \sum_{ j \in I } \mathbb{R}_{ >0 } \left( {\boldsymbol e}_0 + {\boldsymbol e}_{ 1 } + {\boldsymbol e}_{ 2 } + \dots + {\boldsymbol e}_j \right) + \sum_{ j \notin I } \mathbb{R}_{ \ge 0 } \left( {\boldsymbol e}_0 + {\boldsymbol e}_{1 } + {\boldsymbol e}_{ 2 } + \dots + {\boldsymbol e}_j \right) , \] where the complement of $I$ is taken in $[n]$.
\subsection{Generating functions for cones}
Let \[
\sigma_C (z_0, z_1, \dots, z_n) := \sum_{ {\boldsymbol m} \in C \cap \mathbb{Z}^{ n+1 } } {\boldsymbol z}^{\boldsymbol m} \] be the multivariate (``full") generating function encoding the integer lattice points in a subset $C\subset \mathbb{R}^{n+1}$, where we have used the shorthand ${\boldsymbol z}^{\boldsymbol m} := z_0^{ m_0 } z_1^{ m_1 } \cdots z_n^{ m_n }$. A standard geometric-series argument (see, e.g., \cite[Theorem 3.5]{ccd}), together with the unimodularity of $\mathrm{cone} \left( \Delta_I \right)$, gives the following.
\begin{lemma}\label{conelemma} Let $\Delta_I$ be as above. Then \ \[
\sigma_{ \mathrm{cone} \left( \Delta_I \right) } (z_0, z_1, \dots, z_n) = \frac{ \prod_{ j \in I } z_0 \, z_{ 1 } z_{ 2 } \cdots z_j }{ \prod_{ j=0 }^n \left( 1 - z_0 \, z_{ 1 } z_{ 2 } \cdots z_j \right) } \, . \] \end{lemma}
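For intuition, Lemma~\ref{conelemma} can be checked by direct enumeration in a small case. With $n=2$ and $I=\{1\}$, the cone consists of the integer points $(m_0,m_1,m_2)$ with $0\le m_2\le m_1\le m_0$ and $m_2<m_1$; specializing $z_1=z_2=1$ in the lemma gives $z_0/(1-z_0)^3$, whose coefficient of $z_0^k$ is $\binom{k+1}{2}$. A minimal sketch:

```python
# Count integer points of cone(Delta_I) at each height m_0 = k,
# for n = 2 and I = {1}: 0 <= m_2 < m_1 <= k.
counts = []
for k in range(8):
    counts.append(sum(1 for m1 in range(k + 1) for m2 in range(m1)))
print(counts)                                          # [0, 1, 3, 6, 10, 15, 21, 28]
print(counts == [k * (k + 1) // 2 for k in range(8)])  # True: binomial(k+1, 2)
```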
We will not always use the above natural way to write the generating function of a unimodular cone, in which case we will apply the following more general lemma. The proof is a straightforward extension of \cite[Theorem 3.5 and Corollary 3.6]{ccd} and \cite[Corollary 4.6.8 and its Note]{stanleyec1}; only the latter reference discusses the relationship between determinants and monomials stated here.
\begin{lemma}\label{genconelemma} Let $C = \sum_{ j=0 }^k \mathbb{R}_{ \ge 0 } {\boldsymbol v}_j + \sum_{ j=k+1 }^n \mathbb{R}_{ > 0 } {\boldsymbol v}_j$ be a half-open simplicial cone in $\mathbb{R}^{ n+1 }$ with linearly independent generators ${\boldsymbol v}_0, {\boldsymbol v}_1, \dots, {\boldsymbol v}_n \in \mathbb{Z}^{ n+1 }$. Then \[
\sigma_C (z_0, z_1, \dots, z_n) = \frac{ \sigma_{ \Pi_C } (z_0, z_1, \dots, z_n) }{ \prod_{ j=0 }^n \left( 1 - {\boldsymbol z}^{ {\boldsymbol v}_j } \right) } \] where $\Pi_C := \sum_{ j=0 }^k [0,1) {\boldsymbol v}_j + \sum_{ j=k+1 }^n (0,1] {\boldsymbol v}_j$. Furthermore, the number of integer points in $\Pi_C$ (and thus the number of monomials in $\sigma_{ \Pi_C } (z_0, z_1, \dots, z_n)$) is given by the determinant of the matrix with column vectors ${\boldsymbol v}_0, {\boldsymbol v}_1, \dots, {\boldsymbol v}_n$. \end{lemma} We refer to the set $\Pi_C$ arising in the lemma as the \emph{fundamental parallelepiped} of $C$; note that it depends on the choice of generators of~$C$.
\subsection{Unimodular cones with scaled ray generators}\label{sec:unim}
Throughout this work we will frequently need to compute $\sigma_{ \Pi_C } (z_0, z_1, \dots, z_n)$ for a unimodular cone $C$ of the form given in Lemma~\ref{genconelemma}, where the generators chosen for the cone are not the minimal length ray generators. Using the notation of Lemma~\ref{genconelemma}, let ${\boldsymbol v}_0,\ldots,{\boldsymbol v}_n$ denote the minimal ray generators for a unimodular cone $C$, so that \[ \sigma_{\overline{C}} (z_0, z_1, \dots, z_n) = \frac{ 1 }{ \prod_{ j=0 }^n \left( 1 - {\boldsymbol z}^{ {\boldsymbol v}_j } \right) } \, , \] where $\overline{C}$ denotes the topological closure of $C$. If we use instead the ray generators $c_0{\boldsymbol v}_0,c_1{\boldsymbol v}_1,\ldots,c_n{\boldsymbol v}_n$ for some positive integer scaling factors $c_0,c_1,\ldots,c_n$, we will want to obtain the integer points in \[ \Pi_{C}=\sum_{ j=0 }^k [0,1) c_j{\boldsymbol v}_j~+~\sum_{ j=k+1 }^n (0,1] c_j{\boldsymbol v}_j \] from the integer points in \[ \Pi_{\overline{C}}=\sum_{ j=0 }^n [0,1) c_j{\boldsymbol v}_j \, . \] Since $C$ is unimodular with ray generators given by the ${\boldsymbol v}_j$'s, the integer points in $\Pi_{\overline{C}}$ are those integer points of the form \[ {\boldsymbol p}=\sum_{j=0}^n \alpha_j{\boldsymbol v}_j \] where $0\leq \alpha_j< c_j$ is an integer. Thus, there are $\prod_jc_j$ integer points contained in $\Pi_{\overline{C}}$. Observe that ${\boldsymbol p}$ lies on the facet of $C$ opposite ${\boldsymbol v}_j$ if and only if $\alpha_j=0$. Thus, each integer point ${\boldsymbol p}$ in the set $\Pi_{\overline{C}}$ with $\alpha_j=0$ for some indices $j\geq k+1$ does not lie in the set $\Pi_C$. Similarly, each integer point in $\Pi_C$ of the form ${\boldsymbol p}=\sum_{j=0}^n \alpha_j{\boldsymbol v}_j$, such that $\alpha_j=c_j$ for some $j\geq k+1$, is not in $\Pi_{\overline{C}}$.
If we fix an index set $J\subseteq \{k+1,k+2,\ldots,n\}$, there is a bijective correspondence between the points \[ {\boldsymbol p}_1=\sum_{j=0}^n \alpha_j{\boldsymbol v}_j\in \Pi_{\overline{C}} \] where $\alpha_j=0$ if $j\in J$ and the points \[ {\boldsymbol p}_2=\sum_{j=0}^n \beta_j{\boldsymbol v}_j\in \Pi_C \] where $\beta_j=c_j$ if $j\in J$. This bijection is obtained by identifying two such points when $\alpha_j=\beta_j$ for all $j\notin J$.
In the following, we will use one of the following two techniques to obtain the set $\mathbb{Z}^n\cap \Pi_C$ from $\mathbb{Z}^n\cap \Pi_{\overline{C}}$. \begin{itemize} \item \emph{Shifting integer points off the boundary:} Each integer point ${\boldsymbol p}\in\Pi_{\overline{C}}\setminus \Pi_C$ is of the form ${\boldsymbol p}=\sum_{j=0}^n \alpha_j{\boldsymbol v}_j$ where $0\leq \alpha_j< c_j$ and $\alpha_j=0$ for all $j\in J_{{\boldsymbol p}} \subset \{k+1,k+2,\ldots,n\}$ for some index set $J_{{\boldsymbol p}}$. By shifting each such ${\boldsymbol p}$ by $\sum_{j\in J_{{\boldsymbol p}}}c_j{\boldsymbol v}_j$, we obtain \[ \mathbb{Z}^n\cap \Pi_C = \left[\mathbb{Z}^n\cap\Pi_{\overline{C}}\cap \Pi_C\right]\cup \left\{{\boldsymbol p}+\sum_{j\in J_{{\boldsymbol p}}}c_j{\boldsymbol v}_j: {\boldsymbol p}\in\Pi_{\overline{C}}\setminus \Pi_C \right\} . \]
\item \emph{Shifting the entire parallelepiped:} Alternatively, we may observe that $\Pi_{\overline{C}}$ is a parallelepiped with half of its facets removed, where no pair of opposite facets is simultaneously removed. Similarly, $\Pi_C$ is a parallelepiped of the same type, but with a different selection of included facets. Thus, it is immediate that \[ \mathbb{Z}^n\cap \Pi_C = \left[\mathbb{Z}^n\cap \Pi_{\overline{C}}\right]+\left(\sum_{\substack{i :\text{ the facet opposite}\\{\boldsymbol v}_i \text{ is removed in }C}} {\boldsymbol v}_i \right) . \]
\end{itemize}
\begin{example} Let $C=\{{\boldsymbol x}\in \mathbb{R}^3:0\leq x_3 <x_2 <x_1\}$. Thus, $\overline{C}\subset \mathbb{R}^3$ is generated by ${\boldsymbol v}_1=(1,0,0)$, ${\boldsymbol v}_2=(1,1,0)$, and ${\boldsymbol v}_3=(1,1,1)$. Using the ray generators $2{\boldsymbol v}_1$, $2{\boldsymbol v}_2$, and $3{\boldsymbol v}_3$ for $\overline{C}$, there are three integer points in $\Pi_C\cap \Pi_{\overline{C}}$ given by \[ (2,1,0), (3,2,1), (4,3,2) \, . \] There are nine integer points in $\Pi_{\overline{C}}\setminus \Pi_C$, namely \[ (0,0,0), (1,1,1), (2,2,2), (1,1,0), (1,0,0), (2,2,1), (3,3,2), (3,2,2), (2,1,1) \, . \] Similarly, there are nine integer points in $\Pi_C\setminus \Pi_{\overline{C}}$, namely \[ (4,2,0), (5,3,1), (6,4,2), (3,1,0), (3,2,0), (4,3,1), (5,3,2), (5,4,2), (4,2,1) \, . \] It is straightforward to check that both of the shifting methods described above produce the integer points in $\Pi_C$ from the integer points in $\Pi_{\overline{C}}$. Shifting off the boundary adds one or both of $(2,0,0)$ and $(2,2,0)$ to the points of $\Pi_{\overline{C}}\setminus \Pi_C$, while shifting the parallelepiped adds $(2,1,0)$ to all the points of $\Pi_{\overline{C}}$. \end{example}
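The example above is small enough to verify by exhaustive enumeration. The following sketch (illustrative; variable names are ours) lists the lattice points of both fundamental parallelepipeds and confirms that shifting the whole parallelepiped by ${\boldsymbol v}_1+{\boldsymbol v}_2=(2,1,0)$ carries one point set to the other:

```python
v = [(1, 0, 0), (1, 1, 0), (1, 1, 1)]  # minimal ray generators v_1, v_2, v_3

def pt(a1, a2, a3):
    """Lattice point a1*v_1 + a2*v_2 + a3*v_3."""
    return tuple(a1 * v[0][k] + a2 * v[1][k] + a3 * v[2][k] for k in range(3))

# Pi_closed: alpha_1, alpha_2 in [0,2), alpha_3 in [0,3);
# Pi_C:      alpha_1, alpha_2 in (0,2], alpha_3 in [0,3).
closed = {pt(a1, a2, a3) for a1 in (0, 1) for a2 in (0, 1) for a3 in (0, 1, 2)}
opened = {pt(a1, a2, a3) for a1 in (1, 2) for a2 in (1, 2) for a3 in (0, 1, 2)}

print(sorted(closed & opened))  # [(2, 1, 0), (3, 2, 1), (4, 3, 2)]
print(len(closed - opened), len(opened - closed))  # 9 9

# "Shifting the entire parallelepiped" by v_1 + v_2 = (2, 1, 0):
shifted = {(x + 2, y + 1, z) for (x, y, z) in closed}
print(shifted == opened)  # True
```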
\section{Type $A$}\label{asection}
We begin with a multivariate identity that specializes to Theorem~\ref{macmahoncarlitz}. The proof of this identity, though simple, demonstrates the approach used in this paper.
\begin{theorem}\label{Atheorem} \[
\sum_{ k \ge 0 } \prod_{ j=1 }^n [k+1]_{ z_j } \, z_0^k
\ = \ \sum_{ \pi \in S_n } \frac{ \prod_{ j \in \mathrm{Des}(\pi) } z_0z_{ \pi(1) } z_{ \pi(2) } \cdots z_{ \pi(j) } }{ \prod_{ j=0 }^n \left( 1 - z_0 \, z_{ \pi(1) } z_{ \pi(2) } \cdots z_{ \pi(j) } \right) } \, . \] \end{theorem}
\begin{proof} Triangulate the $n$-cube $[0,1]^n$ into the disjoint union of simplices \[
\Delta_\pi := \left\{ {\boldsymbol x} \in \mathbb{R}^n :
\begin{array}{ll}
0 \le x_{ \pi(n) } \le x_{ \pi(n-1) } \le \dots \le x_{ \pi(1) } \le 1, \\
x_{ \pi(j+1) } < x_{ \pi(j) } \text{ if } j \in \mathrm{Des}(\pi)
\end{array}
\right\} \] (one for each $\pi \in S_n$). Lemma 4.5.1 of \cite{stanleyec1} implies that the strict inequalities determined by the descent set of $\pi$ make this triangulation \emph{disjoint}. For example, if ${\boldsymbol x}=(x_1,\ldots,x_9)=(.2,.1,.2,.3,.1,.1,.3,.3,.2) \in [0,1]^9$, then ${\boldsymbol x}\in\Delta_\pi$ where $\pi=[4,7,8,1,3,9,2,5,6]$, since $x_6=x_5=x_2< x_9=x_3=x_1< x_8=x_7=x_4$. By Lemma \ref{conelemma}, \[
\sigma_{ \mathrm{cone}(\Delta_\pi) } (z_0, z_1, \dots, z_n) = \frac{ \prod_{ j \in \mathrm{Des}(\pi) } z_0 \, z_{ \pi(1) } z_{ \pi(2) } \cdots z_{ \pi(j) } }{ \prod_{ j=0 }^n \left( 1 - z_0 \, z_{ \pi(1) } z_{ \pi(2) } \cdots z_{ \pi(j) } \right) } \, . \] On the other hand, \[
\sigma_{ \mathrm{cone}([0,1]^n) } (z_0, z_1, \dots, z_n) = \sum_{ k \ge 0 } \prod_{ j=1 }^n \left( 1 + z_j + z_j^2 + \dots + z_j^k \right) z_0^k \, , \] and the disjoint triangulation gives \[
\sigma_{ \mathrm{cone}([0,1]^n) } (z_0, z_1, \dots, z_n) = \sum_{ \pi \in S_n } \sigma_{ \mathrm{cone}(\Delta_\pi) } (z_0, z_1, \dots, z_n) \, . \qedhere \] \end{proof}
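The disjointness of the triangulation used in the proof of Theorem~\ref{Atheorem} can be tested directly for small $n$. This sketch (illustrative only) checks that every point of a grid in $[0,1]^3$ lies in exactly one half-open simplex $\Delta_\pi$, and confirms the $9$-dimensional membership example from the proof, using exact rational arithmetic:

```python
from itertools import permutations, product
from fractions import Fraction

def in_simplex(x, pi):
    """Membership in the half-open simplex Delta_pi from the proof."""
    n = len(pi)
    Des = {j for j in range(1, n) if pi[j - 1] > pi[j]}
    y = [x[p - 1] for p in pi]          # (x_{pi(1)}, ..., x_{pi(n)})
    if y[0] > 1 or y[-1] < 0:
        return False
    for j in range(1, n):
        # strict inequality exactly at the descents of pi
        if (y[j] >= y[j - 1]) if j in Des else (y[j] > y[j - 1]):
            return False
    return True

# Every point of a grid in [0,1]^3 lies in exactly one Delta_pi.
n = 3
grid = product([Fraction(i, 4) for i in range(5)], repeat=n)
assert all(sum(in_simplex(x, pi) for pi in permutations(range(1, n + 1))) == 1
           for x in grid)

# The 9-dimensional example from the proof:
x = tuple(Fraction(i, 10) for i in (2, 1, 2, 3, 1, 1, 3, 3, 2))
print(in_simplex(x, (4, 7, 8, 1, 3, 9, 2, 5, 6)))  # True
```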
\begin{proof}[Proof of Theorem \ref{macmahoncarlitz}] Setting $t := z_0$ and $q := z_1 = z_2 = \dots = z_n$ in Theorem \ref{Atheorem} gives \[
\sum_{ k \ge 0 } [k+1]_q^n \, t^k
= \frac{ \sum_{ \pi \in S_n } \prod_{ j \in \mathrm{Des}(\pi) } t q^{ j } }{ \prod_{ j=0 }^n \left( 1 - t q^{ j } \right) }
= \frac{ \sum_{ \pi \in S_n } t^{ \mathrm{des} (\pi) } q^{ \mathrm{maj} (\pi) } }{ \prod_{ j=0 }^n \left( 1 - t q^j \right) } \, . \qedhere \] \end{proof}
\begin{remark} Our approach is related to the theory of $P$-partitions \cite{stanleythesis,stanleyec1}. For a given finite poset $P$, one can associate a cone of $P$-partitions. The standard approach to studying $P$-partitions, going back to Stanley's pioneering work referenced above, is to recognize that each $P$-partition cone is a union of closed chambers of the type-$A$ braid arrangement. Thus, each $P$-partition cone admits a unimodular triangulation, and these unimodular subcones are indexed by linear extensions of $P$.
Our approach is based almost entirely on the triangulation of $[0,1]^n$ induced by the type-$A$ braid arrangement; the relationship with $P$-partitions is then that $[0,1]^n$ is a truncation of the $P$-partition cone in the case where $P$ is an antichain of size~$n$. That the linear extensions of such an antichain are easily put into bijection with the elements of $S_n$ gives our connection to symmetric groups and the braid arrangement. Mirroring these similarities, our Theorem~\ref{Atheorem} resembles \cite[Theorem~7.1]{stanleythesis}.
Where our techniques diverge from being a minor variant of $P$-partition theory is that throughout this work, when we encounter a unimodular triangulation of $\mathrm{cone}([0,1]^n)$, we often choose non-unimodular generators for the unimodular cones in our triangulation. Also, several of our generating-function identities require studying non-unimodular triangulations of $\mathrm{cone}([0,r]^n)$ for $r\geq 2$. To our knowledge, this approach has not been used in the study of $P$-partitions. \end{remark}
\begin{remark}\label{hilbertseries} There is also a connection between our generating functions and the theory of affine semigroup algebras. The generating function in Theorem~\ref{Atheorem} is the finely-graded Hilbert series for the affine semigroup algebra formed from the semigroup of integer points in $\mathrm{cone}([0,1]^n)$, as discussed in \cite{HibiBook,MillerSturmfels,StanleyGreenBook}. Through much of the recent literature on Euler--Mahonian distributions referenced in this paper, Hilbert-series interpretations for these bivariate identities have been sought. All of our identities provide such interpretations, as they arise from the finely-graded Hilbert series of affine semigroup algebras.
Further, the study of semigroup algebras arising from polyhedral cones has been an area of intense study for combinatorial commutative algebraists over the past several decades. The most important general result regarding Hilbert series for such cones is Hochster's theorem, which states that normal affine semigroup algebras are Cohen--Macaulay \cite{Hochster}. The Cohen--Macaulay property forces serious constraints on single-variable specializations of the associated finely-graded Hilbert series for the algebra; these constraints apply to univariate specializations of our identities. \end{remark}
\section{Wreath products}\label{wreathsection}
In this section, we prove three new multivariate generating function identities connected with pairs of statistics on wreath products of the form $\mathbb{Z}_r\wr~S_n$. Our proofs of these identities lead to a bijective proof of the joint equidistribution of the ``negative'' and ``flag'' statistics.
\subsection{Identities involving $(k+1)^n$}
We begin by recalling the definition of the negative statistics and flag statistics on $\mathbb{Z}_r \wr S_n$, as introduced in \cite{bagno, bagnobiagioli}. These are generalizations of the type-$B$ negative and flag statistics introduced by Adin--Brenti--Roichman, which we discuss in Section~\ref{bigbsection}. Our interest in these statistics comes from the role they play in the following two identities.
\begin{theorem}[Bagno, \cite{bagno}]\label{wreathnegBthm} \[
\sum_{ k \ge 0 } [k+1]_q^n \, t^k
= \frac{ \sum_{(\pi,\epsilon) \in \mathbb{Z}_r\wr S_n } t^{ \mathrm{ndes}(\pi,\epsilon) } q^{ \mathrm{nmajor}(\pi,\epsilon) } }{ (1-t)\prod_{j=1}^n (1-t^rq^{rj}) } \, . \] \end{theorem}
\begin{theorem}[Bagno--Biagioli, \cite{bagnobiagioli}]\label{wreathflagBthm} \[
\sum_{ k \ge 0 } [k+1]_q^n \, t^k
= \frac{ \sum_{(\pi,\epsilon) \in \mathbb{Z}_r\wr S_n } t^{ \mathrm{fdes}(\pi,\epsilon) } q^{ \mathrm{fmajor}(\pi,\epsilon) } }{ (1-t)\prod_{j=1}^n (1-t^rq^{rj}) } \, . \] \end{theorem}
\begin{remark} Bagno and Biagioli also prove in \cite{bagnobiagioli} a multivariate theorem of this type for a family of normal subgroups of $\mathbb{Z}_r \wr S_n$. Their techniques involve studying colored-descent representations of these subgroups, which are representations of the groups on the associated coinvariant algebra. \end{remark}
Throughout this subsection, we use the total order from Definition~\ref{BnDes} on the elements of $\{\omega^{r-1},\omega^{r-2},\ldots,\omega^{0}\}\times [n]$, i.e., $j^{c_j}<k^{c_k}$ if $c_j>c_k$ or if both $c_j=c_k$ and $j<k$ hold.
\begin{definition}\label{typeAwreathstats} For an element $(\pi,\epsilon)\in \mathbb{Z}_r \wr S_n$, we define the \emph{negative set} of $(\pi,\epsilon)$ to be \[ \mathrm{Neg}(\pi,\epsilon):= \{ i\in [n]: \epsilon_i \neq \omega^{0}=1 \} \, , \] and we define $\mathrm{neg}(\pi,\epsilon):=\#\mathrm{Neg}(\pi,\epsilon)$. Writing $\epsilon_j=\omega^{c_j}$, we define the \emph{color sum statistic} to be \[ \mathrm{col}(\pi,\epsilon):=\sum_{i\in [n]} c_i \, . \] The \emph{type-$A$ descent set} is defined to be \[ \mathrm{Des}_A(\pi,\epsilon):= \{i\in [n-1] : \pi(i)^{c_i} > \pi(i+1)^{c_{i+1}} \} \, \] and the \emph{type-$A$ descent statistic} is \[ \mathrm{des}_A(\pi,\epsilon):=\#\mathrm{Des}_A(\pi,\epsilon) \, . \] The \emph{type-$A$ major index} is \[ \mathrm{major}_A(\pi,\epsilon):=\sum_{j\in \mathrm{Des}_A(\pi,\epsilon)}j \, . \] \end{definition}
\begin{example} Let $(\pi,\epsilon)=[1^3 \, 4^0 \, 2^1 \, 3^0 \, 6^2 \, 5^1 ]\in \mathbb{Z}_4\wr S_6$. Then \[ \mathrm{Neg}(\pi,\epsilon)=\{1,3,5,6\} \] and $\mathrm{col}(\pi,\epsilon)=3+1+2+1=7$. Further, \[ \mathrm{Des}_A(\pi,\epsilon)=\{2,4\} \] and thus $\mathrm{des}_A(\pi,\epsilon)=2$ and $\mathrm{major}_A(\pi,\epsilon)=6$. \end{example}
We next define \emph{negative} statistics for wreath products, following \cite{bagno}. Recall first that a \emph{multiset} of elements of $[n]$ is a subset $S\subseteq[n]$ together with a function $\nu:S\rightarrow \mathbb{Z}_{\geq 1}$, where we call $\nu(i)$ the \emph{multiplicity} of $i$ in $S$. Instead of specifying $\nu$ for a multiset, we typically write a multiset as a set of elements with repetition, e.g., $M=\{1,1,1,2,4,4,4,4,7,7\}$ represents the multiset with $S=\{1,2,4,7\}$ where $\nu(1)=3$, $\nu(2)=1$, $\nu(4)=4$, and $\nu(7)=2$. The cardinality of a multiset is the sum of the multiplicities of the elements of the underlying set. To form a union of multisets, we take the union of the underlying sets and sum the multiplicities of the elements. When forming a sum (or product) indexed by the elements of a multiset $(S,\nu)$, we include $\nu(i)$ summands (or factors) for each $i\in S$. For example, with our previous example $M$, we have $\sum_{i\in M}2^i=3\cdot 2^1 + 1\cdot 2^2 + 4\cdot 2^4 + 2\cdot 2^7$.
\begin{definition} For an element $(\pi,\epsilon)$ in $\mathbb{Z}_r\wr S_n$, we define the \emph{negative inverse multiset} as \[ \mathrm{NNeg}(\pi,\epsilon) :=\{\underbrace{i,i,\ldots,i}_{c_i \text{ times}}:i\in[n]\} \, . \] We define the \emph{negative descent multiset} as \[ \mathrm{NDes}(\pi,\epsilon):= \mathrm{Des}_A(\pi,\epsilon)\cup \mathrm{NNeg}((\pi,\epsilon)^{-1}) \, . \] The \emph{negative descent statistic} is \[ \mathrm{ndes}(\pi,\epsilon):=\#\mathrm{NDes}(\pi,\epsilon) \, . \] The \emph{negative major index} is \[ \mathrm{nmajor}(\pi,\epsilon):=\sum_{i\in \mathrm{NDes}(\pi,\epsilon)}i \, . \] \end{definition}
\noindent Observe that $\mathrm{NNeg}((\pi,\epsilon)^{-1})$ contains exactly $(r-c_{\pi^{-1}(i)})\mod r$ copies of each $i\in [n]$.
\begin{example} Let $(\pi,\epsilon)=[1^3 \, 4^0 \, 2^1 \, 3^0 \, 6^2 \, 5^1 ]\in \mathbb{Z}_4\wr S_6$. Then $(\pi,\epsilon)^{-1}=[1^1 \, 3^3 \, 4^0 \, 2^0 \, 6^3 \, 5^2 ]$, and hence $\mathrm{NNeg}((\pi,\epsilon)^{-1})=\{1,2,2,2,5,5,5,6,6\}$. There are $r-c_{\pi^{-1}(5)}=4-c_6=4-1=3$ copies of $5$ contained in this set, and there are $r-c_{\pi^{-1}(3)}=4-c_4=4-0\equiv 0 \bmod 4$ copies of $3$. Further, \[ \mathrm{NDes}(\pi,\epsilon)=\{2,4\}\cup\{1,2,2,2,5,5,5,6,6\}=\{1,2,2,2,2,4,5,5,5,6,6\} \] and thus $\mathrm{ndes}(\pi,\epsilon)=11$ and $\mathrm{nmajor}(\pi,\epsilon)=40$. \end{example}
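The inverse and its $\mathrm{NNeg}$ multiset can likewise be checked by machine. The sketch below (encoding ours) uses the fact, consistent with the inverse displayed above, that the inverse of $[\pi(1)^{c_1} \cdots \pi(n)^{c_n}]$ carries the entry $j^{(r-c_j)\bmod r}$ at position $\pi(j)$.

```python
# Verification sketch for the inverse and NNeg computations above (Z_4 wr S_6).
# The inverse of [pi(1)^{c_1} ... pi(n)^{c_n}] has the entry j^{(r - c_j) mod r}
# at position pi(j), matching the inverse displayed in the example.
r, n = 4, 6
pi = [1, 4, 2, 3, 6, 5]
c = [3, 0, 1, 0, 2, 1]

inv_pi = [0] * n
inv_c = [0] * n
for j in range(n):
    inv_pi[pi[j] - 1] = j + 1
    inv_c[pi[j] - 1] = (r - c[j]) % r

# NNeg of the inverse: (inverse color at position i) copies of i
nneg = []
for i in range(1, n + 1):
    nneg += [i] * inv_c[i - 1]

DesA = [2, 4]                        # Des_A(pi, eps), from the earlier example
ndes = len(DesA) + len(nneg)
nmajor = sum(DesA) + sum(nneg)
print(inv_pi, inv_c, nneg, ndes, nmajor)
# inverse = [1^1 3^3 4^0 2^0 6^3 5^2], NNeg = {1,2,2,2,5,5,5,6,6}, ndes = 11, nmajor = 40
```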
There are also flag statistics for wreath products, due to Bagno and Biagioli \cite{bagnobiagioli}.
\begin{definition}[Bagno--Biagioli]\label{wreathflagdef} For an element $(\pi,\epsilon)$ in $\mathbb{Z}_r\wr S_n$, we define the \emph{flag descent statistic} as \[ \mathrm{fdes}(\pi,\epsilon):= r\cdot \mathrm{des}_A(\pi,\epsilon) + c_1 \, , \] where as usual $\epsilon_1=\omega^{c_1}$. The \emph{flag major index} is \[ \mathrm{fmajor}(\pi,\epsilon):= r\cdot \mathrm{major}_A(\pi,\epsilon) + \mathrm{col}(\pi,\epsilon) \, . \] \end{definition}
\begin{example} Let $(\pi,\epsilon)=[1^3 \, 4^0 \, 2^1 \, 3^0 \, 6^2 \, 5^1 ]\in \mathbb{Z}_4\wr S_6$. Then $\mathrm{fdes}(\pi,\epsilon)=4\cdot 2 + 3=11$ and $\mathrm{fmajor}(\pi,\epsilon)=4\cdot 6+7=31$. \end{example}
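Again these values follow directly from Definition~\ref{wreathflagdef}; the short sketch below (encoding ours) simply evaluates the two formulas for the running example.

```python
# fdes and fmajor for [1^3 4^0 2^1 3^0 6^2 5^1] in Z_4 wr S_6, evaluated
# directly from the definition: fdes = r*des_A + c_1, fmajor = r*major_A + col.
r = 4
c = [3, 0, 1, 0, 2, 1]    # colors c_1, ..., c_6
DesA = [2, 4]             # Des_A(pi, eps), computed in the first example

fdes = r * len(DesA) + c[0]
fmajor = r * sum(DesA) + sum(c)
print(fdes, fmajor)       # 11 and 31
```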
For the statements of our multivariate generalizations of Theorems~\ref{wreathnegBthm} and~\ref{wreathflagBthm}, we will need two more definitions.
\begin{definition}\label{Irn} Define the subset of \emph{increasing elements of $\mathbb{Z}_r \wr S_n$}, denoted $I_{r,n}$, to be those elements satisfying $\mathrm{des}_A(\rho,\epsilon)=0$, i.e., $I_{r,n}$ contains all permutations $(\rho,\epsilon)$ such that $\rho(j)^{c_j}<\rho(j+1)^{c_{j+1}}$ for all $j\in [n-1]$. \end{definition} It is straightforward that every element of $\mathbb{Z}_r \wr S_n$ can be represented uniquely as \[ (\rho,\epsilon)\circ(\pi,(1,1,\cdots, 1)) \]
for some $\pi\in S_n$ and $(\rho,\epsilon)\in I_{r,n}$, since applying $(\pi,(1,1,\cdots, 1))$ on the right permutes the entries of the window notation for $(\rho,\epsilon)$, and the window for $(\rho,\epsilon)$ yields the unique increasing list of these entries. For example, in $B_6=\mathbb{Z}_2\wr~S_6$, \[ [4^1 \, 1^1 \, 5 \, 3^1 \, 6 \, 2] = [1^1 \, 3^1 \, 4^1 \, 2 \, 5 \, 6][3\, 1 \, 5 \, 2 \, 6 \, 4] \, . \] Thus, \[ \mathbb{Z}_r \wr S_n=\bigcup_{\pi\in S_n} I_{r,n}\pi \, , \] where we write $\pi$ for $(\pi,(1,1,\cdots, 1))$ to simplify notation.
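The displayed factorization in $B_6$ can be verified by composing windows. In the sketch below (our own encoding: a window is a list of value--color pairs), right multiplication by a plain permutation $\pi$ places the window entry of $(\rho,\epsilon)$ from position $\pi(j)$ at position $j$.

```python
# Check of the displayed factorization in B_6 = Z_2 wr S_6. Right multiplication
# by a plain permutation pi permutes the window: the entry at position j of the
# product is the window entry of (rho, eps) at position pi(j).
def right_mult(window, pi):
    return [window[pi[j] - 1] for j in range(len(pi))]

rho = [(1, 1), (3, 1), (4, 1), (2, 0), (5, 0), (6, 0)]   # [1^1 3^1 4^1 2 5 6]
pi = [3, 1, 5, 2, 6, 4]                                  # [3 1 5 2 6 4]
print(right_mult(rho, pi))
# [(4, 1), (1, 1), (5, 0), (3, 1), (6, 0), (2, 0)], i.e. [4^1 1^1 5 3^1 6 2]
```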
\begin{proposition}\label{prop:IncNeg} For $(\rho,\epsilon)\in I_{r,n}$ and $\pi\in S_n$, \[ \mathrm{NNeg}([(\rho,\epsilon)\pi]^{-1})=\mathrm{NNeg}((\rho,\epsilon)^{-1}) \, . \] Further, each permutation $(\rho,\epsilon)\in I_{r,n}$ is uniquely determined by $\mathrm{NNeg}((\rho,\epsilon)^{-1})$. \end{proposition}
\begin{proof} If $(\tau,\epsilon)=[\tau_1^{c_1} \, \cdots \, \tau_n^{c_n} ]$ is any element of $\mathbb{Z}_r\wr S_n$, then \[ \mathrm{NNeg}((\tau,\epsilon)^{-1})=\{\underbrace{\tau_i,\ldots,\tau_i}_{\substack{(r-c_i) \bmod r\\ \text{ times }}}:c_i\neq 0\} \, . \] Since the window for $(\rho,\epsilon)\pi$ consists of a permutation of the window elements for $(\rho,\epsilon)$, and each $\rho_i^{c_i}$ is permuted as a unit by $\pi$ from the window of $(\rho,\epsilon)$ to the window for $(\rho,\epsilon)\pi$, it follows that the labels $\rho_i^{c_i}$ in the window are identical for both these permutations. The first claim follows.
To verify the uniqueness statement, it is enough to observe that $\mathrm{NNeg}((\rho,\epsilon)^{-1})$ determines the exponent on each $i\in [n]$ in the window notation for $(\rho,\epsilon)$. Since being an element in $I_{r,n}$ ensures that the entries of the window for $(\rho,\epsilon)$ are in increasing order, this determines the permutation. \end{proof}
\begin{definition}\label{wreathdefsignchange} For an element $\epsilon=(\omega^{c_j})_{j=1}^n\in~\{1,\omega^1,\omega^2,\ldots,\omega^{r-1}\}^n$ with $\epsilon_{n+1}:=1=\omega^0$, for $j\in [n]$ define \[ a_j^\epsilon:= (c_j-c_{j+1})\mod r \, , \] which we call the \emph{$j$th color change} for $\epsilon$. Define $\operatorname{ch}(\epsilon):=\sum_ja_j^\epsilon$ to be the \emph{total color change} in $\epsilon$. \end{definition}
\begin{example} Let $(\pi,\epsilon)=[1^3 \, 4^0 \, 2^1 \, 3^0 \, 6^2 \, 5^1 ]\in \mathbb{Z}_4\wr S_6$, so that $\epsilon=(\omega^3, \omega^0, \omega^1,\omega^0,\omega^2,\omega^1)$. Then \[ a^\epsilon=(3,3,1,2,1,1) \] and $\operatorname{ch}(\epsilon)=3+3+1+2+1+1=11$. \end{example}
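A quick computational check of this example (list encoding ours):

```python
# Color changes for eps = (w^3, w^0, w^1, w^0, w^2, w^1) in Z_4 wr S_6,
# with the convention c_{n+1} := 0 from the definition.
r = 4
c = [3, 0, 1, 0, 2, 1]
cc = c + [0]                                          # append c_{n+1} = 0

a = [(cc[j] - cc[j + 1]) % r for j in range(len(c))]  # jth color change
ch = sum(a)                                           # total color change
print(a, ch)    # [3, 3, 1, 2, 1, 1] and 11
```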
\subsection{Multivariate identities}
Our multivariate extension of Theorem~\ref{wreathnegBthm} is the following.
\begin{theorem}\label{wreathcubeBthm} \begin{align*}
\sum_{ k \ge 0 } \prod_{ j=1 }^n [k+1]_{ z_j } \, z_0^k \ = \ \sum_{\pi\in S_n} \sum_{(\rho,\epsilon)\in I_{r,n}} & \frac{\displaystyle \prod_{j\in \mathrm{Des}(\pi)} z_0z_{\pi(1)}z_{\pi(2)}\cdots z_{\pi(j)} \prod_{j\in \mathrm{NNeg}((\rho,\epsilon)^{-1})} z_0z_{\pi(1)}z_{\pi(2)}\cdots z_{\pi(j)} }{\displaystyle (1-z_0)\prod_{j=1}^n \left(1-z_0^r z_{\pi(1)}^r\cdots z_{\pi(j)}^r \right) } \, . \end{align*} \end{theorem}
\begin{proof} We begin with the triangulation of $\mathrm{cone} ([0,1]^n)$ into the set of cones $ \left\{ \mathrm{cone}(\Delta_\pi) : \, \pi \in S_n \right\} $ found in the proof of Theorem~\ref{Atheorem}. While $\mathrm{cone}(\Delta_\pi)$ is unimodular for each $\pi$, for this proof we use the non-unimodular ray generators \[ {\boldsymbol e}_0, r({\boldsymbol e}_0+{\boldsymbol e}_{\pi(1)}), r({\boldsymbol e}_0+{\boldsymbol e}_{\pi(1)}+{\boldsymbol e}_{\pi(2)}), \ldots, r({\boldsymbol e}_0+{\boldsymbol e}_{\pi(1)}+\cdots + {\boldsymbol e}_{\pi(n)}) \, , \] together with the technique of shifting the entire fundamental parallelepiped described in Section~\ref{sec:unim}. There are $r^n$ integer points in the fundamental parallelepiped for $\overline{\mathrm{cone}(\Delta_\pi)}$ using these ray generators. Thus, every integer point ${\boldsymbol p}$ in the fundamental parallelepiped for $\mathrm{cone}(\Delta_\pi)$ can be uniquely expressed as \[ {\boldsymbol p}=\sum_{j\in \mathrm{Des}(\pi)} ({\boldsymbol e}_0+{\boldsymbol e}_{\pi(1)}+\cdots + {\boldsymbol e}_{\pi(j)}) + \sum_{j=1}^n \alpha_j({\boldsymbol e}_0+{\boldsymbol e}_{\pi(1)}+\cdots + {\boldsymbol e}_{\pi(j)}) \, \] with $\alpha_j\in \{0,1,\ldots,r-1\}$.
Associate to the point ${\boldsymbol p}$ the element $(\rho,\epsilon)\pi \in \mathbb{Z}_r \wr S_n$, where $\alpha_j=k$ if and only if $j$ has multiplicity $k$ in $\mathrm{NNeg}((\rho,\epsilon)^{-1})=\mathrm{NNeg}([(\rho,\epsilon)\pi]^{-1})$. Thus, for example, let $r=4$ and $n=6$, and consider $\pi=[1\,6\, 3\, 5\, 2\, 4]$ and $\alpha_1=1$, $\alpha_2=3$, $\alpha_3=\alpha_4=0$, $\alpha_5=3$, and $\alpha_6=2$. The element of $\mathbb{Z}_4\wr S_6$ associated to this point is $[1^3 \, 4^0 \, 2^1 \, 3^0 \, 6^2 \, 5^1]$, since it is contained in $I_{4,6}\pi$ and has the $\mathrm{NNeg}$ set of its inverse equal to $\{1,2,2,2,5,5,5,6,6\}$.
This correspondence creates a bijection between the elements of $\mathbb{Z}_r \wr S_n$ and the (appropriately shifted) integer points in the fundamental parallelepipeds for the cones over the $\Delta_\pi$. Note that this bijection encodes $I_{r,n}$ as the integer points in the fundamental parallelepiped for $\mathrm{cone}(\Delta_{\mathrm{Id}})$, where $\mathrm{Id}$ denotes the identity element in $S_n$. Thus \[
\sigma_{\mathrm{cone}(\Delta_\pi)}(z_0,\ldots,z_n)
= \sum_{(\rho,\epsilon)\in I_{r,n}} \frac{\displaystyle \prod_{j\in \mathrm{Des}(\pi)} z_0z_{\pi(1)}z_{\pi(2)}\cdots z_{\pi(j)} \prod_{j\in \mathrm{NNeg}((\rho,\epsilon)^{-1})} z_0z_{\pi(1)}z_{\pi(2)}\cdots z_{\pi(j)} }{\displaystyle (1-z_0)\prod_{j=1}^n \left(1-z_0^r z_{\pi(1)}^r \cdots z_{\pi(j)}^r \right) } \, . \] This completes our proof, since from our triangulation it follows that \[ \sigma_{\mathrm{cone} ([0,1]^n)}(z_0,\ldots,z_n) = \sum_{\pi\in S_n} \sigma_{\mathrm{cone}(\Delta_\pi)}(z_0,\ldots,z_n) \, . \qedhere \] \end{proof}
\begin{proof}[Proof of Theorem~\ref{wreathnegBthm}] Setting $t:=z_0$ and $q:=z_1=\cdots =z_n$ in Theorem~\ref{wreathcubeBthm} yields our desired form on the left-hand side of our identity, while the denominator of the right-hand side uniformly becomes \[ (1-t) \prod_{ j=1 }^n \left( 1 - t^r q^{jr} \right) . \] Each element $\displaystyle (\rho,\epsilon)\pi\in \bigcup_{\pi\in S_n} I_{r,n}\pi$ contributes to the numerator on the right-hand side of our identity a summand of \[ \prod_{j\in \mathrm{Des}(\pi)}tq^j \prod_{j\in \mathrm{NNeg}([(\rho,\epsilon)\pi ]^{-1})} tq^j . \] Because $\mathrm{Des}(\pi)=\mathrm{Des}_A((\rho,\epsilon)\pi)$, it follows that \[ \prod_{j\in \mathrm{Des}(\pi)}tq^j \prod_{j\in \mathrm{NNeg}([(\rho,\epsilon)\pi ]^{-1})} tq^j = t^{\mathrm{ndes}((\rho,\epsilon)\pi)}q^{\mathrm{nmajor}((\rho,\epsilon)\pi)} , \] hence our proof is complete. \end{proof}
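Theorem~\ref{wreathnegBthm} can be sanity-checked for small parameters by truncating the series. The sketch below (polynomial encoding ours) does this for $r=2$, $n=2$, computing $\mathrm{ndes}$ and $\mathrm{nmajor}$ from the descent rule of Remark~\ref{rem:descent} and the multiplicity description of $\mathrm{NNeg}$ of the inverse given earlier.

```python
# Truncated-series check of the negative-statistics identity for r = 2, n = 2.
# Bivariate polynomials in (t, q) are stored as dicts {(deg_t, deg_q): coeff}.
from itertools import permutations, product

r, n, K = 2, 2, 8              # truncate the series at t^K

def pmul(f, g):
    out = {}
    for (t1, q1), c1 in f.items():
        for (t2, q2), c2 in g.items():
            key = (t1 + t2, q1 + q2)
            out[key] = out.get(key, 0) + c1 * c2
    return {k: v for k, v in out.items() if v}

# Left-hand side: sum_{k=0}^{K} [k+1]_q^n t^k, then clear the denominator
# (1 - t) * prod_{j=1}^{n} (1 - t^r q^{jr}).
lhs = {}
for k in range(K + 1):
    bracket = {(0, i): 1 for i in range(k + 1)}     # [k+1]_q
    term = {(k, 0): 1}
    for _ in range(n):
        term = pmul(term, bracket)
    for key, v in term.items():
        lhs[key] = lhs.get(key, 0) + v
den = {(0, 0): 1, (1, 0): -1}                       # (1 - t)
for j in range(1, n + 1):
    den = pmul(den, {(0, 0): 1, (r, j * r): -1})    # (1 - t^r q^{jr})
cleared = pmul(lhs, den)

# Right-hand side numerator: sum of t^ndes q^nmajor over Z_r wr S_n.
rhs = {}
for pi in permutations(range(1, n + 1)):
    for c in product(range(r), repeat=n):
        DesA = [j for j in range(1, n)
                if c[j-1] < c[j] or (c[j-1] == c[j] and pi[j-1] > pi[j])]
        ndes = len(DesA) + sum((r - cj) % r for cj in c)
        nmajor = sum(DesA) + sum(pi[j] * ((r - c[j]) % r) for j in range(n))
        key = (ndes, nmajor)
        rhs[key] = rhs.get(key, 0) + 1

# Coefficients of t^m with m <= K in `cleared` are exact; they must match.
assert {k: v for k, v in cleared.items() if k[0] <= K} == rhs
print("identity verified for r=2, n=2 up to t^%d" % K)
```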
The following is our multivariate extension of Theorem~\ref{wreathflagBthm}.
\begin{theorem}\label{wreathflagcubeBthm} \begin{align*}
\sum_{ k \ge 0 } \prod_{ j=1 }^n [k+1]_{ z_j } \, z_0^k \ = \
\sum_{ (\pi,\epsilon)\in \mathbb{Z}_r\wr S_n } & \frac{ \displaystyle \prod_{\substack{j\in \mathrm{Des}(\pi) \\ a_j^\epsilon=0 }}z_0^rz_{\pi(1)}^rz_{\pi(2)}^r\cdots z_{\pi(j)}^r \prod_{j=1}^nz_0^{a_j^\epsilon}z_{\pi(1)}^{a_j^\epsilon}z_{\pi(2)}^{a_j^\epsilon}\cdots z_{\pi(j)}^{a_j^\epsilon} }{ \displaystyle (1-z_0)\prod_{ j=1 }^n \left( 1 - z_0^r \, z_{\pi(1) }^{r} z_{\pi(2) }^{r} \cdots z_{\pi(j)}^{r} \right) } \, . \end{align*} \end{theorem}
\begin{proof} We begin again with the triangulation of $\mathrm{cone} ([0,1]^n)$ by the set of cones $ \left\{ \mathrm{cone}(\Delta_\pi): \, \pi \in S_n \right\} $ found in the proof of Theorem~\ref{Atheorem}. As in our previous proof, for $\overline{\mathrm{cone}(\Delta_\pi)}$ we use the non-unimodular ray generators \[ {\boldsymbol e}_0, r({\boldsymbol e}_0+{\boldsymbol e}_{\pi(1)}), r({\boldsymbol e}_0+{\boldsymbol e}_{\pi(1)}+{\boldsymbol e}_{\pi(2)}), \ldots, r({\boldsymbol e}_0+{\boldsymbol e}_{\pi(1)}+\cdots + {\boldsymbol e}_{\pi(n)}) \, . \] However, in this proof we use the technique of shifting integer points off of the boundary, discussed in Section~\ref{sec:unim}. Hence we represent every integer point ${\boldsymbol p}$ in the fundamental parallelepiped for $\mathrm{cone}(\Delta_{\pi})$ uniquely using a coefficient vector $\alpha\in \{0,1,2,\ldots,r-1\}^n$ in the sum \[ {\boldsymbol p}=\sum_{j=1}^n \alpha_j({\boldsymbol e}_0+{\boldsymbol e}_{\pi(1)}+\cdots + {\boldsymbol e}_{\pi(j)}) + \sum_{\substack{j\in \mathrm{Des}(\pi)\\ \alpha_j=0}} r({\boldsymbol e}_0+{\boldsymbol e}_{\pi(1)}+\cdots + {\boldsymbol e}_{\pi(j)}) \, . \]
We may then associate to the point ${\boldsymbol p}$ the element $(\pi,\epsilon)\in \mathbb{Z}_r\wr S_n$ where $\pi$ is the same as the index on $\mathrm{cone}(\Delta_\pi)$ and $\epsilon$ is defined by $a_j^\epsilon=\alpha_j$. This bijectively relates $\mathbb{Z}_r\wr S_n$ to the (possibly shifted) integer points in the fundamental parallelepipeds of the cones over the $\Delta_\pi$'s. Thus \[
\sigma_{\mathrm{cone}(\Delta_\pi)}(z_0,\ldots,z_n)
= \sum_{(\pi,\epsilon)\in \mathbb{Z}_r\wr S_n} \frac{\displaystyle \prod_{\substack{j\in \mathrm{Des}(\pi)\\ a_j^\epsilon=0}} z_0^rz_{\pi(1)}^rz_{\pi(2)}^r\cdots z_{\pi(j)}^r \prod_{j=1}^n z_0^{a_j^\epsilon}z_{\pi(1)}^{a_j^\epsilon}z_{\pi(2)}^{a_j^\epsilon}\cdots z_{\pi(j)}^{a_j^\epsilon} }{\displaystyle (1-z_0)\prod_{j=1}^n \left(1-z_0^rz_{\pi(1)}^r\cdots z_{\pi(j)}^r \right) } \, . \] (Note that in the summand on the right-hand side, $\pi$ is fixed while $\epsilon$ varies.)
This completes our proof, since from our triangulation it follows that \[ \sigma_{\mathrm{cone} ([0,1]^n)}(z_0,\ldots,z_n) = \sum_{\pi\in S_n} \sigma_{\mathrm{cone}(\Delta_\pi)}(z_0,\ldots,z_n) \, . \qedhere \] \end{proof}
\begin{remark}\label{rem:descent} For the proof of Theorem~\ref{wreathflagBthm}, we need to understand the causes of descents in elements of wreath products. Let $(\pi,\epsilon)\in \mathbb{Z}_r\wr S_n$. A descent in position $j$ of $(\pi,\epsilon)$ can arise for one of three reasons: \begin{itemize} \item \emph{color change:} $c_j<c_{j+1}$, or \item \emph{standard descent:} $\epsilon_{j}=\epsilon_{j+1}$ and $j\in \mathrm{Des}(\pi)$, or \item \emph{zero descent:} $j=0$ and $c_1\neq 0$. \end{itemize} For example, in $[1^1\, 2^3 \, 5^0\, 3^0\, 4^1 \, 6^0]$, there are color-change descents in positions $1$ and $4$, a standard descent in position $3$, and a zero descent in position $0$. Descents in position $0$ are precisely those called zero descents, and hence type-$A$ descents arise only from color change and standard descents.
Regarding color-change descents, consider the partial sums $A_k=\sum_{j=k}^na_j^{\epsilon}$ of color changes. Since $a_j^{\epsilon}\leq r-1$ for every $j$, we obtain one color-change descent for each $k$ such that $A_{k+1}< lr\leq A_k$ for some multiple $lr$ of $r$. In less formal terms, as we read the window notation from right to left, each time the partial sum of color changes accrues an additional $r$, that forces another color-change descent. If $A_1=\operatorname{ch}(\epsilon)$ is a multiple of $r$, then $c_1=0$, and hence there is no zero descent. On the other hand, if $A_1=\operatorname{ch}(\epsilon)$ is not a multiple of $r$, then $c_1\neq 0$, which creates a zero descent. Standard descents arise when $a_j^{\epsilon}=0$, in which case a descent in position $j$ is controlled completely by the descent structure of $\pi$. \end{remark}
\begin{proof}[Proof of Theorem~\ref{wreathflagBthm}] Setting $t:=z_0$ and $q:=z_1=\cdots =z_n$ in Theorem~\ref{wreathflagcubeBthm} yields our desired form on the left-hand side of our identity, while the denominator of the right-hand side uniformly becomes \[ (1-t) \prod_{ j=1 }^n \left( 1 - t^r q^{rj} \right) . \] Each element $(\pi,\epsilon)\in \mathbb{Z}_r \wr S_n$ contributes to the numerator on the right-hand side of our identity a summand of \[ \prod_{\substack{j\in \mathrm{Des}(\pi)\\ a_j^\epsilon=0}} t^rq^{rj} \prod_{j=1}^n t^{a_j^\epsilon}q^{j a_j^\epsilon} . \] Therefore, our proof will be complete once we prove that \begin{equation}\label{wreathfdessignrep} \mathrm{fdes}(\pi,\epsilon)=\sum_{\substack{j\in \mathrm{Des}(\pi)\\ a_j^\epsilon=0}}r+\sum_{j=1}^na_j^\epsilon \end{equation} and \begin{equation}\label{wreathfmajorsignrep1} \mathrm{fmajor}(\pi,\epsilon)=\sum_{\substack{j\in \mathrm{Des}(\pi)\\ a_j^\epsilon=0}}rj+\sum_{j=1}^nja_j^\epsilon \, . \end{equation}
As an example, consider the element $[1^1\, 2^3 \, 5^0\, 3^0\, 4^1 \, 6^0]\in \mathbb{Z}_4\wr S_6$, for which $\mathrm{Des}_A(\pi,\epsilon)=\{1,3,4\}$, $\mathrm{Des}(\pi)=\{3\}$, and $a^\epsilon=(2,3,0,3,1,0)$. Then we see that $\mathrm{fdes}(\pi,\epsilon)$ is obtained as \[ 4\cdot \mathrm{des}_A(\pi,\epsilon)+c_1=4\cdot 3 + 1 = 13 = 4 + 9 = \sum_{\substack{j\in \mathrm{Des}(\pi)\\ a_j^\epsilon=0}}4+\sum_{j=1}^6a_j^\epsilon \, , \] while $\mathrm{fmajor}(\pi,\epsilon)$ is obtained as both \[ 4\cdot \mathrm{major}_A(\pi,\epsilon)+\mathrm{col}(\pi,\epsilon)=4\cdot 8 + 5 = 37 \] and \[ 37= 4\cdot 3 + 2+3\cdot 2 + 3\cdot 4 + 5 = \sum_{\substack{j\in \mathrm{Des}(\pi)\\ a_j^\epsilon=0}}4j+\sum_{j=1}^6a_j^\epsilon j \, . \]
To prove \eqref{wreathfdessignrep}, we build upon Remark~\ref{rem:descent} to investigate the relationship between the values $a_j^{\epsilon}$ and type-$A$ descents. We must show that \[ r\cdot \mathrm{des}_A(\pi,\epsilon) + c_1 = \sum_{\substack{j\in \mathrm{Des}(\pi)\\ a_j^\epsilon=0}}r+\sum_{j=1}^na_j^\epsilon \, . \] Following Remark~\ref{rem:descent}, we observe that $\lfloor \frac{\operatorname{ch}(\epsilon)}{r} \rfloor$ is equal to the number of color-change descents in $(\pi,\epsilon)$, which are all type-$A$ descents. Thus \[
\left(\sum_{j=1}^na_j^\epsilon\right)-c_1=r\cdot\left\lfloor \frac{\operatorname{ch}(\epsilon)}{r} \right\rfloor \] is equal to the number of color-change descents multiplied by $r$. Similarly, Remark~\ref{rem:descent} implies that the number of standard descents multiplied by $r$ (also type-$A$ descents) is given by $\displaystyle \sum_{\substack{j\in \mathrm{Des}(\pi)\\ a_j^\epsilon=0}}r$. The equality in \eqref{wreathfdessignrep} follows immediately.
To prove \eqref{wreathfmajorsignrep1}, we must show that \[ r\cdot \mathrm{major}_A(\pi,\epsilon)+\mathrm{col}(\pi,\epsilon)=\sum_{\substack{j\in \mathrm{Des}(\pi)\\ a_j^\epsilon=0}}rj+\sum_{j=1}^na_j^\epsilon j \, . \] It follows from Remark~\ref{rem:descent} that $\displaystyle \sum_{\substack{j\in \mathrm{Des}(\pi)\\ a_j^\epsilon=0}}rj$ is equal to the contribution given by standard descents to the type-$A$ major index. We are left to consider the contribution of color-change descents, hence what remains to be shown is that \begin{equation}\label{lasteqn} \sum_{j=1}^na_j^\epsilon j = \mathrm{col}(\pi,\epsilon)+\sum_{\substack{j\in \mathrm{Des}(\pi,\epsilon) \\ a_j^{\epsilon}\neq 0 }} rj \, , \end{equation} which we prove as follows. \begin{align*} \sum_{j=1}^nja_j^{\epsilon} & = \sum_{i=1}^n\sum_{j=i}^na_j^{\epsilon} \\ & = \sum_{i=1}^n\left[c_i + r\cdot \#\left\{j\geq i : j\in \mathrm{Des}(\pi,\epsilon)\text{ arising from a color change} \right\} \right] \\ & = \sum_{i=1}^nc_i + \sum_{i=1}^nr\cdot \#\left\{j\geq i : j\in \mathrm{Des}(\pi,\epsilon)\text{ arising from a color change} \right\} \\ & = \mathrm{col}(\pi,\epsilon) + \sum_{\substack{j\in \mathrm{Des}(\pi,\epsilon)\\ a_j^{\epsilon}\neq 0}}jr \, . \end{align*} The key observation in the above sequence of equalities is that \[ \sum_{j=i}^na_j^{\epsilon}=c_i + r\cdot \#\left\{j\geq i : j\in \mathrm{Des}(\pi,\epsilon)\text{ arises from a color change} \right\} \, , \] which follows from the discussion in Remark~\ref{rem:descent}. \end{proof}
\begin{example}\label{ex:signchange} To illustrate the key observation at the end of the proof above, consider an arbitrary $(\pi,\epsilon)\in \mathbb{Z}_5\wr S_9$ with the color vector $c=(4,1,2,3,0,1,1,3,1)$. Note that there are four type-$A$ descents in $(\pi,\epsilon)$ caused by color changes, with color-change descent positions 7, 5, 3, and 2. The color-change vector for $c$ is $a=(3,4,4,3,4,0,3,2,1)$, where the right-hand $1$ accounts for $c_9-c_{10}$ where $c_{10}$ is by definition $0$. When $i=4$, we see that \begin{align*} \sum_{j=4}^na_j^{\epsilon} & = 3+4+0+3+2+1 \\ & = 13 \\ & = 3 + 5\cdot 2 \\ & = c_4 + r\cdot \#\left\{j\geq 4 : j\in \mathrm{Des}(\pi,\epsilon)\text{ arises from a color change} \right\} \, . \end{align*} \end{example}
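This example, too, is easy to check mechanically for every $i$ at once; the sketch below (encoding ours) verifies the key observation for the given color vector.

```python
# Check of the key observation for c = (4,1,2,3,0,1,1,3,1) in Z_5 wr S_9:
# for every i,  sum_{j >= i} a_j = c_i + r * #{color-change descents >= i},
# where a color-change descent occurs at position j exactly when c_j < c_{j+1}
# (with c_{n+1} := 0).
r, n = 5, 9
c = [4, 1, 2, 3, 0, 1, 1, 3, 1]
cc = c + [0]

a = [(cc[j] - cc[j + 1]) % r for j in range(n)]
cc_descents = [j + 1 for j in range(n) if cc[j] < cc[j + 1]]
print(a, cc_descents)    # a = (3,4,4,3,4,0,3,2,1), descents at positions 2, 3, 5, 7

for i in range(1, n + 1):
    assert sum(a[i - 1:]) == c[i - 1] + r * len([j for j in cc_descents if j >= i])
```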
The proofs of Theorems~\ref{wreathcubeBthm} and~\ref{wreathflagcubeBthm} together yield a bijective proof of the equidistribution of the pairs of statistics $(\mathrm{ndes},\mathrm{nmajor})$ and $(\mathrm{fdes},\mathrm{fmajor})$ for $\mathbb{Z}_r\wr S_n$. As far as we know, this bijection is new.
\begin{corollary}\label{cor:wreathbijection} \[ \sum_{(\pi,\epsilon)\in \mathbb{Z}_r \wr S_n }t^{\mathrm{ndes}(\pi,\epsilon)}q^{\mathrm{nmajor}(\pi,\epsilon)} =\sum_{(\pi,\epsilon)\in \mathbb{Z}_r \wr S_n }t^{\mathrm{fdes}(\pi,\epsilon)}q^{\mathrm{fmajor}(\pi,\epsilon)} . \] \end{corollary}
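The corollary can be confirmed by brute force for small parameters. The sketch below (encoding ours) enumerates $\mathbb{Z}_3\wr S_3$ and compares the joint distributions; the type-$A$ descent rule is the one from Remark~\ref{rem:descent}, and $\mathrm{NNeg}$ of the inverse is computed from its multiplicity description given earlier.

```python
# Brute-force check of the equidistribution of (ndes, nmajor) and (fdes, fmajor),
# here over Z_3 wr S_3. Type-A descents: a descent at j when c_j < c_{j+1}, or
# when c_j = c_{j+1} and pi(j) > pi(j+1).
from itertools import permutations, product
from collections import Counter

def stats(pi, c, r):
    n = len(pi)
    DesA = [j for j in range(1, n)
            if c[j-1] < c[j] or (c[j-1] == c[j] and pi[j-1] > pi[j])]
    # NNeg of the inverse contributes (r - c_j) mod r copies of pi(j)
    ndes = len(DesA) + sum((r - cj) % r for cj in c)
    nmajor = sum(DesA) + sum(pi[j] * ((r - c[j]) % r) for j in range(n))
    fdes = r * len(DesA) + c[0]
    fmajor = r * sum(DesA) + sum(c)
    return (ndes, nmajor), (fdes, fmajor)

r, n = 3, 3
neg_pairs, flag_pairs = Counter(), Counter()
for pi in permutations(range(1, n + 1)):
    for c in product(range(r), repeat=n):
        nd, fd = stats(list(pi), list(c), r)
        neg_pairs[nd] += 1
        flag_pairs[fd] += 1

assert neg_pairs == flag_pairs          # equidistribution, as in the corollary
print(sum(neg_pairs.values()))          # r^n * n! = 162 elements checked
```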
\begin{proof} Our proof relies on the indexing of integer points in fundamental parallelepipeds for $\mathrm{cone}(\Delta_\pi)$ found in the proofs of Theorems~\ref{wreathcubeBthm} and~\ref{wreathflagcubeBthm}. To the element $(\rho,\epsilon)\pi\in \bigcup_{\pi\in S_n}I_{r,n}\pi$ we associated the integer point \[ {\boldsymbol p}=\sum_{j\in \mathrm{Des}(\pi)} ({\boldsymbol e}_0+{\boldsymbol e}_{\pi(1)}+\cdots + {\boldsymbol e}_{\pi(j)}) + \sum_{j=1}^n \alpha_j({\boldsymbol e}_0+{\boldsymbol e}_{\pi(1)}+\cdots + {\boldsymbol e}_{\pi(j)}) \, \] where $\alpha_j=k$ if and only if $j$ has multiplicity $k$ in $\mathrm{NNeg}((\rho,\epsilon)^{-1})=\mathrm{NNeg}([(\rho,\epsilon)\pi]^{-1})$. Rewriting ${\boldsymbol p}$ as \[ {\boldsymbol p}=\sum_{j=1}^n \beta_j({\boldsymbol e}_0+{\boldsymbol e}_{\pi(1)}+\cdots + {\boldsymbol e}_{\pi(j)}) + \sum_{\substack{j\in \mathrm{Des}(\pi)\\ \beta_j=0}} r({\boldsymbol e}_0+{\boldsymbol e}_{\pi(1)}+\cdots + {\boldsymbol e}_{\pi(j)}) \, , \] we associated to ${\boldsymbol p}$ the element $(\pi,\epsilon)\in \mathbb{Z}_r\wr S_n$ where $\pi$ is the same as the index on $\mathrm{cone}(\Delta_\pi)$ and $\epsilon$ is defined by $a_j^\epsilon=\beta_j$. This yields an explicit bijection from $\mathbb{Z}_r\wr S_n$ to itself sending the pair of statistics $(\mathrm{ndes},\mathrm{nmajor})$ of each element to the pair $(\mathrm{fdes},\mathrm{fmajor})$ of its image. \end{proof}
\begin{example} Let $r=4$ and $n=6$, and consider the element $[1^3 \, 4^0 \, 2^1 \, 3^0 \, 6^2 \, 5^1]$ with the $\mathrm{NNeg}$ set of its inverse equal to $\{1,2,2,2,5,5,5,6,6\}$. Our goal is to find the element in $\mathbb{Z}_4\wr S_6$ paired with this element under our bijection. We first encode the element as an integer point, which requires using $\pi=[1\,6\, 3\, 5\, 2\, 4]$ and $\alpha_1=1$, $\alpha_2=3$, $\alpha_3=\alpha_4=0$, $\alpha_5=3$, and $\alpha_6=2$; note that $\mathrm{Des}(\pi)=\{2,4\}$. Thus, writing the two summands arising from the descent positions in $\pi$ first in the first sum, we have that the element is encoded by \begin{align*} {\boldsymbol p} = & \phantom{=} {\boldsymbol e}_0+{\boldsymbol e}_1+{\boldsymbol e}_6 + \\ & \phantom{=} {\boldsymbol e}_0+{\boldsymbol e}_1+{\boldsymbol e}_6+{\boldsymbol e}_3+{\boldsymbol e}_5 + \\ & \phantom{=} {\boldsymbol e}_0+{\boldsymbol e}_1 + \\ & \phantom{=} 3({\boldsymbol e}_0+{\boldsymbol e}_1+{\boldsymbol e}_6) + \\ & \phantom{=} 3({\boldsymbol e}_0+{\boldsymbol e}_1+{\boldsymbol e}_6+{\boldsymbol e}_3+{\boldsymbol e}_5+{\boldsymbol e}_2) + \\ & \phantom{=} 2({\boldsymbol e}_0+{\boldsymbol e}_1+{\boldsymbol e}_6+{\boldsymbol e}_3+{\boldsymbol e}_5+{\boldsymbol e}_2+{\boldsymbol e}_4) \\
= & \phantom{=} {\boldsymbol e}_0+{\boldsymbol e}_1 + \\ & \phantom{=} {\boldsymbol e}_0+{\boldsymbol e}_1+{\boldsymbol e}_6+{\boldsymbol e}_3+{\boldsymbol e}_5 + \\ & \phantom{=} 3({\boldsymbol e}_0+{\boldsymbol e}_1+{\boldsymbol e}_6+{\boldsymbol e}_3+{\boldsymbol e}_5+{\boldsymbol e}_2) + \\ & \phantom{=} 2({\boldsymbol e}_0+{\boldsymbol e}_1+{\boldsymbol e}_6+{\boldsymbol e}_3+{\boldsymbol e}_5+{\boldsymbol e}_2+{\boldsymbol e}_4) + \\ & \phantom{=} 4({\boldsymbol e}_0+{\boldsymbol e}_1+{\boldsymbol e}_6) \, . \end{align*} Note that in the final sum above, we have merged the summands arising from $\mathrm{Des}(\pi)$ in the first sum into the others to obtain a representation of ${\boldsymbol p}$ in which the last summand has a coefficient of $4$. This $4$ arises in one of the descent positions for $\pi$, corresponding to $\beta_2=0$. Hence, our second encoding vector is $\beta=(1,0,0,1,3,2)$. Thus, we recover our new element of $\mathbb{Z}_4\wr S_6$ by setting $a^{\epsilon}=\beta$, obtaining \[ (\pi,\epsilon)=[1^3\, 6^2\, 3^2\, 5^2\, 2^1\, 4^2] \, . \] Finally, observe that for our original element $[1^3 \, 4^0 \, 2^1 \, 3^0 \, 6^2 \, 5^1]$, we have that \[ (\mathrm{ndes},\mathrm{nmajor})= (11,40) \, , \] while for the element $[1^3\, 6^2\, 3^2\, 5^2\, 2^1\, 4^2]$, we have that \[ (\mathrm{fdes},\mathrm{fmajor})= (11,40) \, , \] as desired. \end{example}
\subsection{Identities involving $(rk+1)^n$}
In \cite[Theorem 9]{chowmansourcarlitz}, Chow and Mansour provide an Euler--Mahonian distribution for wreath products using Steingr{\'{\i}}msson's descent statistics and a new flag major index. Their identity is a generalization of a result due to Chow--Gessel which we discuss in Section~\ref{bigbsection}. In this section, we state a similar Euler--Mahonian distribution for the descent statistic and flag major index given in Definitions~\ref{BnDes} and~\ref{wreathflagdef}. By combining Theorem~\ref{chowmansourflagthm} below and \cite[Theorem 9]{chowmansourcarlitz}, we see that the pairs \[ (\text{Steingr{\'{\i}}msson's descent statistic},\text{Chow--Mansour's flag major index}) \] and \[ (\mathrm{des},\mathrm{fmajor}) \] are equidistributed over $\mathbb{Z}_r\wr S_n$.
\begin{theorem}\label{chowmansourflagthm} \[
\sum_{ k \ge 0 } [rk+1]_q^n \, t^k
\ = \ \frac{\sum_{(\pi,\epsilon)\in \mathbb{Z}_r\wr S_n}t^{\mathrm{des}(\pi,\epsilon)}q^{\mathrm{fmajor}(\pi,\epsilon)} }{ \prod_{ j=0 }^n \left( 1 - t q^{rj} \right) } \, . \] \end{theorem}
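This identity can be checked for small $r$ and $n$ by truncating the series. The sketch below (polynomial encoding ours) does this for $r=2$, $n=2$, computing $\mathrm{des}$ and $\mathrm{fmajor}$ via the color-change formulas \eqref{wreathdessignrep} and \eqref{wreathfmajorsignrep} established in the proof of the theorem later in this section.

```python
# Truncated-series check of the identity above for r = 2, n = 2, using
#   des    = ceil(ch(eps)/r) + #{ j in Des(pi) : a_j = 0 },
#   fmajor = sum_j j*a_j + sum_{j in Des(pi), a_j=0} r*j.
# Bivariate polynomials in (t, q) are stored as dicts {(deg_t, deg_q): coeff}.
from itertools import permutations, product

r, n, K = 2, 2, 8          # truncate the series at t^K

def pmul(f, g):
    out = {}
    for (t1, q1), c1 in f.items():
        for (t2, q2), c2 in g.items():
            key = (t1 + t2, q1 + q2)
            out[key] = out.get(key, 0) + c1 * c2
    return {k: v for k, v in out.items() if v}

# LHS series: sum_{k=0}^{K} [rk+1]_q^n t^k, then clear the denominator
# prod_{j=0}^{n} (1 - t q^{rj}).
lhs = {}
for k in range(K + 1):
    bracket = {(0, i): 1 for i in range(r * k + 1)}   # [rk+1]_q
    term = {(k, 0): 1}
    for _ in range(n):
        term = pmul(term, bracket)
    for key, v in term.items():
        lhs[key] = lhs.get(key, 0) + v
den = {(0, 0): 1}
for j in range(n + 1):
    den = pmul(den, {(0, 0): 1, (1, r * j): -1})      # (1 - t q^{rj})
cleared = pmul(lhs, den)

# RHS numerator: sum of t^des q^fmajor over Z_r wr S_n.
rhs = {}
for pi in permutations(range(1, n + 1)):
    Des = [j for j in range(1, n) if pi[j - 1] > pi[j]]
    for c in product(range(r), repeat=n):
        cc = list(c) + [0]
        a = [(cc[j] - cc[j + 1]) % r for j in range(n)]
        ch = sum(a)
        des = -(-ch // r) + sum(1 for j in Des if a[j - 1] == 0)
        fmaj = (sum((j + 1) * a[j] for j in range(n))
                + sum(r * j for j in Des if a[j - 1] == 0))
        key = (des, fmaj)
        rhs[key] = rhs.get(key, 0) + 1

# Coefficients of t^m with m <= K in `cleared` are exact; they must match.
assert {k: v for k, v in cleared.items() if k[0] <= K} == rhs
print("identity verified for r=2, n=2 up to t^%d" % K)
```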
We obtain in Theorem~\ref{wreathnewBthm} below a multivariate generalization of this bivariate identity.
\begin{remark}At the end of \cite{chowmansourcarlitz}, Chow and Mansour indicate that having a Hilbert-series interpretation of \cite[Theorem 9]{chowmansourcarlitz} is an open problem. The proof of Theorem~\ref{wreathnewBthm} provides such an interpretation, after taking into account Remark~\ref{hilbertseries}. \end{remark}
\begin{remark} In \cite[Equation (8.1)]{biagiolizengwreath}, Biagioli--Zeng obtain a wreath product version of Theorem~\ref{chowgesselthm}, a result due to Chow--Gessel. The authors do not at this time know of a way to obtain this identity using polyhedral geometry. \end{remark}
\subsection{Multivariate identities}
Our multivariate generalization of Theorem~\ref{chowmansourflagthm} is the following.
\begin{theorem}\label{wreathnewBthm} \begin{align*}
\sum_{ k \ge 0 } \prod_{ j=1 }^n [rk+1]_{ z_j } \, z_0^k \ = \
\sum_{ (\pi,\epsilon)\in \mathbb{Z}_r\wr S_n } \frac{ \displaystyle z_0^{\left\lceil \operatorname{ch}(\epsilon)/r \right\rceil } \prod_{j=1}^nz_{\pi(1)}^{a_j^\epsilon}z_{\pi(2)}^{a_j^\epsilon}\cdots z_{\pi(j)}^{a_j^\epsilon} \prod_{\substack{j\in \mathrm{Des}(\pi) \\ a_j^\epsilon=0 }}z_0z_{\pi(1)}^rz_{\pi(2)}^r\cdots z_{\pi(j)}^r }{ \displaystyle \prod_{ j=0 }^n \left( 1 - z_0 \, z_{\pi(1) }^{r} z_{\pi(2) }^{r} \cdots z_{\pi(j)}^{r} \right) } \, . \end{align*} \end{theorem}
\begin{proof} As in our previous proofs, this proof proceeds in two stages. We first triangulate the cube $[0,r]^n$ into a disjoint union of simplices, then set up an indexing system for the integer points in the fundamental parallelepipeds for the cones over these simplices. Second, we bijectively associate the elements of $\mathbb{Z}_r \wr S_n$ with these integer points in a way that allows us to recover, in our subsequent proof of Theorem~\ref{chowmansourflagthm}, the descent and flag major index statistics from these integer points.
We begin by triangulating $[0,r]^n$ into the disjoint simplices \[
\Delta_\pi := \left\{ {\boldsymbol x} \in \mathbb{R}^n :
\begin{array}{ll}
0 \le x_{ \pi(n) } \le x_{ \pi(n-1) } \le \dots \le x_{ \pi(1) } \le r, \\
x_{ \pi(j+1) } < x_{ \pi(j) } \text{ if } j \in \mathrm{Des}(\pi)
\end{array}
\right\} \] (one for each $\pi \in S_n$). As before, the strict inequalities determined by the descent set of $\pi$ ensure that this triangulation is disjoint.
Unlike the cones produced by coning over the simplices in our triangulation of $[0,1]^n$, the cones arising from this triangulation of $[0,r]^n$ are not unimodular. By Lemma \ref{genconelemma}, the integer-point transform of $\mathrm{cone}(\Delta_{\pi})$ can be expressed as a rational function where the denominator has the form \[ \prod_{ j=0 }^n \left( 1 - z_0 \, z_{\pi(1) }^{r} z_{\pi(2) }^{r} \cdots z_{\pi(j)}^{r} \right) \, , \] i.e., where the displayed exponent vectors are the ray generators for this cone. As the determinant of the matrix formed by the ray generators of $\overline{\mathrm{cone}(\Delta_\pi)}$ is $r^n$, there are $r^n$ integer points in the fundamental parallelepiped of $\overline{\mathrm{cone}(\Delta_\pi)}$. It is straightforward to check that these are precisely the integer points arising as linear combinations of the ray generators for the cone with coefficients from the set $\left\{0, \frac 1 r,\ldots, \frac{ r-1 }{ r } \right\}$. We will use the following notation to denote such an integer point: for $\alpha_j \in \left\{0, \frac 1 r,\ldots, \frac{ r-1 }{ r } \right\}$ where $j=0,\ldots, n$, \[ {\boldsymbol p}=\alpha_0{\boldsymbol e}_0 + \sum_{j=1}^n \alpha_j\left( {\boldsymbol e}_0 + r{\boldsymbol e}_{\pi(1)} + r{\boldsymbol e}_{\pi(2)} + \cdots + r{\boldsymbol e}_{\pi(j)} \right) \, . \] Observe that because ${\boldsymbol p}$ is an integer point, the value of $\alpha_0$ is determined by the condition that the coefficient of ${\boldsymbol e}_0$, $\sum_{j=0}^n\alpha_j$, be an integer.
To determine the numerator of $\sigma_{\mathrm{cone}(\Delta_\pi)}(z_0,\ldots,z_n)$, as in our earlier situations dealing with unimodular cones, we must shift some integer points off of the boundary of $\Pi_{\overline{\mathrm{cone}(\Delta_\pi)}}$, specifically those that are not contained in $\Pi_{\mathrm{cone}(\Delta_\pi)}$. When ${\boldsymbol p}$ is contained in a given facet indexed by $j\in \mathrm{Des}(\pi)$, we must shift ${\boldsymbol p}$ by the minimal ray generator opposite that facet, namely $({\boldsymbol e}_0+r{\boldsymbol e}_{\pi(1)}+r{\boldsymbol e}_{\pi(2)}+\cdots +r{\boldsymbol e}_{\pi(j)})$. Such a point ${\boldsymbol p}$, when written in the form displayed above, is contained in such a facet precisely when $\alpha_j=0$. Thus, each such ${\boldsymbol p}$ must be shifted from $\Pi_{\overline{\mathrm{cone}(\Delta_\pi)}}$ to $\Pi_{\mathrm{cone}(\Delta_\pi)}$ by the vector \[ \sum_{\substack{j\in \mathrm{Des}(\pi) \\ \alpha_j=0}}({\boldsymbol e}_0+r{\boldsymbol e}_{\pi(1)}+r{\boldsymbol e}_{\pi(2)}+\cdots +r{\boldsymbol e}_{\pi(j)}) \, , \] yielding the point \[ \alpha_0{\boldsymbol e}_0 + \sum_{j=1}^n \alpha_j\left( {\boldsymbol e}_0 + r{\boldsymbol e}_{\pi(1)} + r{\boldsymbol e}_{\pi(2)} + \cdots + r{\boldsymbol e}_{\pi(j)} \right) + \sum_{\substack{j\in \mathrm{Des}(\pi) \\ \alpha_j=0}}({\boldsymbol e}_0+r{\boldsymbol e}_{\pi(1)}+r{\boldsymbol e}_{\pi(2)}+\cdots +r{\boldsymbol e}_{\pi(j)}) \] in $\Pi_{\mathrm{cone}(\Delta_\pi)} \, .$ Hence \begin{align*}
\sigma_{ \mathrm{cone}(\Delta_{ \pi }) } (z_0, z_{ 1 }, & \dots, z_{ n }) \ = \
\sum_{\alpha \in \{0,\frac{1}{r},\ldots,\frac{r-1}{r}\}^n} \frac{ \displaystyle z_0^{ \sum_{j=0}^n \alpha_j } \prod_{j=1}^nz_{\pi(1)}^{r\alpha_j}z_{\pi(2)}^{r\alpha_j}\cdots z_{\pi(j)}^{r\alpha_j} \prod_{\substack{j\in \mathrm{Des}(\pi) \\ \alpha_j=0 } }z_0z_{\pi(1)}^rz_{\pi(2)}^r\cdots z_{\pi(j)}^r }{\displaystyle \prod_{ j=0 }^n \left( 1 - z_0 \, z_{\pi(1) }^{r} z_{\pi(2) }^{r} \cdots z_{\pi(j)}^{r} \right) } \, . \end{align*}
We now associate to the element $(\pi,\epsilon)\in \mathbb{Z}_r\wr S_n$ the integer point in $\Pi_{\mathrm{cone}(\Delta_\pi)}$ with $\alpha_j:=\frac{a_j^\epsilon}{r}$ for $j=1,\ldots,n$, i.e., the point \[ \alpha_0 {\boldsymbol e}_0 + \sum_{j=1}^n \frac{a_j^\epsilon}{r}({\boldsymbol e}_0 + r{\boldsymbol e}_{\pi(1)} + r {\boldsymbol e}_{\pi(2)}+ \cdots + r {\boldsymbol e}_{\pi(j)}) \, + \sum_{\substack{j\in \mathrm{Des}(\pi) \\ a_j^\epsilon=0}} ({\boldsymbol e}_0 + r{\boldsymbol e}_{\pi(1)} + r {\boldsymbol e}_{\pi(2)}+ \cdots + r {\boldsymbol e}_{\pi(j)}) \, , \] where $\alpha_0$ is determined by the condition that $\alpha_0+\sum_{j=1}^n \frac{a_j^\epsilon}{r}$ be an integer. Through this association, the set of ``color'' vectors $\{1,\omega,\omega^2,\ldots,\omega^{r-1}\}^n$ parametrizes the integer points in the fundamental parallelepiped for $\mathrm{cone}(\Delta_\pi)$. This parametrization is bijective, and the coefficient of ${\boldsymbol e}_0$ in the first two terms of the sum above is equal to both $\lceil \frac{\operatorname{ch}(\epsilon)}{r} \rceil $ and $\sum_{j=0}^n \alpha_j$. Thus, our proof is complete following the observation that \[ \sigma_{\mathrm{cone}([0,r]^n)}(z_0,\ldots,z_n) = \sum_{\pi\in S_n} \sigma_{\mathrm{cone}(\Delta_\pi)}(z_0,\ldots,z_n) \, . \qedhere \] \end{proof}
\begin{proof}[Proof of Theorem \ref{chowmansourflagthm}] Setting $t:=z_0$ and $q:=z_1=\cdots =z_n$ in Theorem~\ref{wreathnewBthm} yields our desired form on the left-hand side, while the denominator of the right-hand side uniformly becomes $ \prod_{ j=0 }^n \left( 1 - t q^{rj} \right)$. Each element $(\pi,\epsilon)\in \mathbb{Z}_r\wr S_n$ contributes a summand of \[ t^{\left\lceil \operatorname{ch}(\epsilon)/r \right\rceil } \prod_{j:a_j^\epsilon\neq 0}q^{a_j^{\epsilon}j} \prod_{\substack{j\in \mathrm{Des}(\pi) \\ a_j^\epsilon=0 }}tq^{rj} \] to the numerator of the right-hand side. Hence, we need to prove \begin{equation}\label{wreathdessignrep} \mathrm{des}(\pi,\epsilon)=\left\lceil \tfrac{ \operatorname{ch}(\epsilon) }{ r } \right\rceil + \#\{ j\in \mathrm{Des}(\pi): a_j^\epsilon=0\} \end{equation} and \begin{equation}\label{wreathfmajorsignrep} \mathrm{fmajor}(\pi,\epsilon)=\sum_{j=1}^na_j^\epsilon j + \sum_{\substack{j\in \mathrm{Des}(\pi)\\ a_j^\epsilon=0}}rj \, . \end{equation} Observe that \eqref{wreathfmajorsignrep} is identical to \eqref{wreathfmajorsignrep1}, which was proved earlier.
As discussed in Remark~\ref{rem:descent}, a descent in position $j$ of $(\pi,\epsilon)$ can be a color-change descent, a standard descent, or a zero descent. If $\tfrac{ \operatorname{ch}(\epsilon) }{ r }$ is an integer, then there is no zero descent, and there are $\tfrac{ \operatorname{ch}(\epsilon) }{ r }$ color-change descents. If $\tfrac{ \operatorname{ch}(\epsilon) }{ r }$ is not an integer, there are $\lfloor \tfrac{ \operatorname{ch}(\epsilon) }{ r } \rfloor$ color-change descents and a zero descent, which contribute a total of $\lceil \tfrac{ \operatorname{ch}(\epsilon) }{ r } \rceil$ descents. Further, the standard descents are counted by $\#\{ j\in \mathrm{Des}(\pi): a_j^\epsilon=0\}$. The equality in \eqref{wreathdessignrep} follows immediately from these observations. \end{proof}
\section{Type $B$}\label{bigbsection}
In this section we state our main results from Section~\ref{wreathsection} in the special case of hyperoctahedral groups. We also prove a new multivariate identity given in Theorem~\ref{Btheorem}.
\subsection{Identities involving $(k+1)^n$}
In \cite{adinbrentiroichman}, Adin, Brenti, and Roichman introduced several pairs of statistics on $B_n$; these were the inspiration for the statistics considered in Section~\ref{wreathsection}. For the interested reader, we state the original identities of Adin--Brenti--Roichman and our multivariate identities generalizing them. The statistics arising here are special cases of those defined in Section~\ref{wreathsection}.
\begin{theorem}[Adin--Brenti--Roichman]\label{negBthm} \[
\sum_{ k \ge 0 } [k+1]_q^n \, t^k
\ = \ \frac{ \sum_{(\pi,\epsilon) \in B_n } t^{ \mathrm{ndes}(\pi,\epsilon) } q^{ \mathrm{nmajor}(\pi,\epsilon) } }{ (1-t)\prod_{j=1}^n (1-t^2q^{2j}) } \, . \] \end{theorem}
\begin{theorem}[Adin--Brenti--Roichman]\label{flagBthm} \[
\sum_{ k \ge 0 } [k+1]_q^n \, t^k
\ = \ \frac{ \sum_{(\pi,\epsilon) \in B_n } t^{ \mathrm{fdes}(\pi,\epsilon) } q^{ \mathrm{fmajor}(\pi,\epsilon) } }{ (1-t)\prod_{j=1}^n (1-t^2q^{2j}) } \, . \] \end{theorem}
Note that in the original work of Adin--Brenti--Roichman, the flag and negative statistics were defined using the natural order; in that context, the flag major index was denoted by $\mathrm{fmaj}$ rather than $\mathrm{fmajor}$. However, Adin--Brenti--Roichman point out in \cite[p.\ 218]{adinbrentiroichman} that either order can be used to obtain Theorems~\ref{negBthm}~and~\ref{flagBthm}. Our generalizations in type $B$ are the following corollaries of Theorems~\ref{wreathcubeBthm}~and~\ref{wreathflagcubeBthm}, again using notation from Section~\ref{wreathsection}.
\begin{corollary}\label{cubeBthm} \begin{align*}
\sum_{k \ge 0 } \prod_{j=1}^n [k+1]_{ z_j } z_0^k \ = \ \sum_{\pi\in S_n} \sum_{(\rho,\epsilon)\in I_{2,n}} & \frac{\displaystyle \prod_{j\in \mathrm{Des}(\pi)} z_0z_{\pi(1)}z_{\pi(2)}\cdots z_{\pi(j)} \prod_{j\in \mathrm{NNeg}((\rho,\epsilon)^{-1})} z_0z_{\pi(1)}z_{\pi(2)}\cdots z_{\pi(j)} }{\displaystyle (1-z_0)\prod_{j=1}^n \left(1-z_0^2z_{\pi(1)}^2\cdots z_{\pi(j)}^2 \right) } \, . \end{align*} \end{corollary}
\begin{corollary}\label{flagcubeBthm} \begin{align*}
\sum_{ k \ge 0 } \prod_{ j=1 }^n [k+1]_{ z_j } z_0^k \ = \
\sum_{ (\pi,\epsilon)\in B_n } & \frac{ \displaystyle \prod_{\substack{j\in \mathrm{Des}(\pi) \\ a_j^\epsilon=0 }}z_0^2z_{\pi(1)}^2z_{\pi(2)}^2\cdots z_{\pi(j)}^2 \prod_{j:a_j^\epsilon=1}z_0z_{\pi(1)}z_{\pi(2)}\cdots z_{\pi(j)} }{ \displaystyle (1-z_0)\prod_{ j=1 }^n \left( 1 - z_0^2 \, z_{\pi(1) }^{2} z_{\pi(2) }^{2} \cdots z_{\pi(j)}^{2} \right) } \, . \end{align*} \end{corollary}
In the original work of Adin, Brenti, and Roichman \cite{adinbrentiroichman}, it was left as an open question to give a bijective proof in type $B$ of the equidistribution of the pairs of statistics $(\mathrm{ndes},\mathrm{nmajor})$ and $(\mathrm{fdes},\mathrm{fmajor})$; a combinatorial proof in type $B$ leading to an implicit bijection was given by Lai and Petersen in \cite{petersenlai}. When restricted to type $B$, the proof of Corollary~\ref{cor:wreathbijection} also produces such a bijection.
\subsection{Identities involving $(2k+1)^n$}
Recall Definition~\ref{BnNatDes} which introduced $\mathrm{NatDes}(\pi,\epsilon)$ and $\mathrm{natdes}(\pi,\epsilon)$. For an element $(\pi,\epsilon)\in B_n$, the \emph{naturally ordered major index} is \[ \mathrm{natmaj} (\pi, \epsilon):= \sum_{i\in \mathrm{NatDes}(\pi,\epsilon)}i \, . \] Further, for an element $(\pi,\epsilon)\in B_n$, we write $\mathrm{neg}(\pi,\epsilon):=\mathrm{col}(\pi,\epsilon)$, where $\mathrm{col}(\pi,\epsilon)$ is given in Definition~\ref{typeAwreathstats}, to emphasize that this statistic counts the number of negative signs in the window for $(\pi,\epsilon)$. Chow and Gessel \cite[Equation (26)]{chowgessel} proved the following hyperoctahedral analogue of Theorem~\ref{macmahoncarlitz}:
\begin{theorem}[Chow--Gessel]\label{chowgesselthm} \[
\sum_{ k \ge 0 } \left( [k+1]_q + s \, [k]_q \right)^n t^k
\ = \ \frac{ \sum_{ \pi \in S_n, \, \epsilon \in \left\{ \pm 1 \right\}^n } s^{ \mathrm{neg}(\pi,\epsilon) } t^{ \mathrm{natdes}(\pi, \epsilon) } q^{ \mathrm{natmaj}(\pi, \epsilon) } }{ \prod_{ j=0 }^n \left( 1 - t q^j \right) } \, . \] \end{theorem}
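For small $n$, Theorem~\ref{chowgesselthm} can be verified by brute force. The sketch below is a computational check, not part of the proof; it assumes the conventions used here, namely that $\mathrm{natdes}$ counts descents of the window prefixed by $\epsilon_0\pi(0)=0$ under the usual integer order, with a descent at position $i$ contributing $i$ to $\mathrm{natmaj}$, and it expands the reciprocal of the denominator as a truncated geometric product.

```python
# Brute-force check of the Chow--Gessel identity for n = 2, comparing
# coefficients of t^k on both sides up to k = K.
from itertools import permutations, product
import sympy as sp

q, s, t = sp.symbols('q s t')
n, K = 2, 5

# numerator: generating polynomial of (neg, natdes, natmaj) over B_n
num = sp.Integer(0)
for perm in permutations(range(1, n + 1)):
    for eps in product([1, -1], repeat=n):
        w0 = [0] + [e * p for e, p in zip(eps, perm)]  # prepend eps_0 pi(0) = 0
        des = [i for i in range(n) if w0[i] > w0[i + 1]]
        num += s**eps.count(-1) * t**len(des) * q**sum(des)

# truncated geometric expansion of 1 / prod_{j=0}^n (1 - t q^j)
inv = sp.prod([sum((t * q**j)**a for a in range(K + 1)) for j in range(n + 1)])
rhs = sp.expand(num * inv)

for k in range(K + 1):
    lhs_k = sp.expand((sum(q**i for i in range(k + 1))         # [k+1]_q
                       + s * sum(q**i for i in range(k)))**n)   # + s [k]_q
    assert sp.expand(rhs.coeff(t, k) - lhs_k) == 0
print(f"Chow-Gessel identity checked for n = {n} up to t^{K}")
```

The truncation is harmless here because every denominator factor carries at least one power of $t$, so dropped terms cannot affect the coefficients of $t^0,\ldots,t^K$.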
The special case $q=1$ is due to Brenti \cite[Theorem 3.4]{brentieulerian}. Chow and Gessel also showed in \cite{chowgessel} how Theorem \ref{chowgesselthm} implies other versions of ``$q$-Eulerian polynomials'' of type $B$ involving a flag major index statistic using the natural order, such as the following.
\begin{theorem}[Chow--Gessel]\label{newchowgesselflagthm} \[
\sum_{ k \ge 0 } [2k+1]_q^n t^k
\ = \ \frac{\sum_{(\pi,\epsilon)\in B_n}t^{\mathrm{natdes}(\pi,\epsilon)}q^{\mathrm{natfmaj}(\pi,\epsilon)} }{ \prod_{ j=0 }^n \left( 1 - t q^{2j} \right) } \, . \] \end{theorem} The statistic $\mathrm{natfmaj}$ used above is defined as follows.
\begin{definition} Use the order $-n<\cdots<-1<1<\cdots<n$ on $[-n,n]\setminus\{0\}$. We define the \emph{natural type-$A$ descent set} as \[ \mathrm{NatDes} \!\mbox{}_A(\pi,\epsilon):= \{i\in [n-1] : \epsilon_i\pi_i > \epsilon_{i+1}\pi_{i+1} \} \, \] while the \emph{natural type-$A$ descent statistic} is $\mathrm{natdes}_A(\pi,\epsilon):=\#\mathrm{NatDes} \!\mbox{}_A(\pi,\epsilon)$. The \emph{natural type-$A$ major index} is defined as \[ \mathrm{natmajor}_A(\pi,\epsilon):=\sum_{i \in \mathrm{NatDes}_A(\pi,\epsilon)} i \, . \] The \emph{natural flag major index} is \[ \mathrm{natfmaj}(\pi,\epsilon):= 2\cdot \mathrm{natmajor}_A(\pi,\epsilon) + \mathrm{neg}(\pi,\epsilon) \, . \] \end{definition}
\begin{remark} Throughout this work, $\mathrm{natdes}$ will always refer to the statistic introduced in Definition~\ref{BnNatDes}, while $\mathrm{natdes}_A$ will be used to indicate the definition given above. \end{remark}
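The statistics just defined can be checked against Theorem~\ref{newchowgesselflagthm} by brute force for small $n$. The sketch below assumes, as above, that $\mathrm{natdes}$ counts descents of $(0,\epsilon_1\pi_1,\ldots,\epsilon_n\pi_n)$ under the usual integer order and that $\mathrm{natfmaj}=2\cdot\mathrm{natmajor}_A+\mathrm{neg}$.

```python
# Brute-force check of the flag identity for n = 2:
# sum_k [2k+1]_q^n t^k = ( sum_{B_n} t^natdes q^natfmaj ) / prod_{j=0}^n (1 - t q^{2j}).
from itertools import permutations, product
import sympy as sp

q, t = sp.symbols('q t')
n, K = 2, 5

num = sp.Integer(0)
for perm in permutations(range(1, n + 1)):
    for eps in product([1, -1], repeat=n):
        w = [e * p for e, p in zip(eps, perm)]
        w0 = [0] + w
        natdes = sum(1 for i in range(n) if w0[i] > w0[i + 1])
        natmajor_A = sum(i + 1 for i in range(n - 1) if w[i] > w[i + 1])
        num += t**natdes * q**(2 * natmajor_A + eps.count(-1))

# truncated geometric expansion of 1 / prod_{j=0}^n (1 - t q^{2j})
inv = sp.prod([sum((t * q**(2 * j))**a for a in range(K + 1))
               for j in range(n + 1)])
rhs = sp.expand(num * inv)

for k in range(K + 1):
    lhs_k = sp.expand(sum(q**i for i in range(2 * k + 1))**n)  # [2k+1]_q^n
    assert sp.expand(rhs.coeff(t, k) - lhs_k) == 0
print(f"flag identity checked for n = {n} up to t^{K}")
```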
While it is observed by Chow--Gessel in \cite{chowgessel} that Theorems~\ref{chowgesselthm} and~\ref{newchowgesselflagthm} are equivalent via a change of variables, the geometric perspective illustrates how these two theorems arise as specializations of two distinct multivariate generating-function identities.
\subsection{Multivariate identities}\label{bsection}
For the type-$B$ generalization of Theorem~\ref{chowgesselthm}, we introduce the variables $z_{\pm j}$ to keep track of the positive/negative $j$th component of a lattice point, respectively, and the variable $s$ to indicate the presence in each coordinate of our point of a negative sign.
\begin{theorem}\label{Btheorem} \begin{align*}
\sum_{ k \ge 0 } \prod_{ j=1 }^n \left( [k+1]_{ z_j } + s \, z_{-j}^{ -1 } [k]_{ z_{-j}^{ -1 } } \right) z_0^k \ = \
\sum_{ (\pi,\epsilon)\in B_n } \displaystyle s^{\mathrm{neg}(\epsilon)} \phantom{,} \frac{\displaystyle \prod_{ j \in \mathrm{NatDes}(\pi, \epsilon) } z_0z_{ \epsilon_{j+1} \pi(j+1) }^{ \epsilon_{j+1} } z_{ \epsilon_{j+2} \pi(j+2) }^{ \epsilon_{j+2} } \cdots z_{\epsilon_n \pi(n)}^{ \epsilon_n } }{ \displaystyle \prod_{ j=0 }^n \left( 1 - z_0 \, z_{ \epsilon_{j+1} \pi(j+1) }^{ \epsilon_{j+1} } z_{ \epsilon_{j+2} \pi(j+2) }^{ \epsilon_{j+2} } \cdots z_{\epsilon_n \pi(n)}^{ \epsilon_n } \right) } \, . \end{align*} \end{theorem}
\begin{proof} Recall that we use the order \[ -n<-n+1<\cdots <-1<1<2<\cdots <n \, . \] As in Definition~\ref{Irn}, create the set of increasing elements, denoted $I_{2,n}^{\operatorname{nat}}$, using the natural order above. It is straightforward from the discussion following Definition~\ref{Irn} to show that \[ B_n=\bigcup_{\pi\in S_n}I_{2,n}^{\operatorname{nat}}\pi \, . \] For each $(\rho, \gamma)\in I_{2,n}^{\operatorname{nat}}$, the first $\mathrm{neg}(\gamma)$ elements of the permutation are negated, with labels $\rho_1,\ldots,\rho_{\mathrm{neg} (\gamma)}$.
For each $(\rho, \gamma)\in I_{2,n}^{\operatorname{nat}}$, define \[ \Box_{(\rho,\gamma)}:= \left\{ x\in \mathbb{R}^n:
\begin{array}{ll}
0 \le x_{\rho(j)} \le 1 & \text{ if } j\geq \mathrm{neg}(\gamma)+1, \\
0 < -x_{\rho(j)} \le 1 & \text{ if } j\leq \mathrm{neg}(\gamma)
\end{array} \right\} \, . \] It is straightforward to show that \[ [-1,1]^n=\bigcup_{(\rho,\gamma)\in I_{2,n}^{\operatorname{nat}}} \Box_{(\rho,\gamma)} \, , \] where this union is disjoint; any point in $[-1,1]^n$ with negative entries in positions $i_1>\cdots>i_k$ is an element of $\Box_{(\rho,\gamma)}$ where $\rho_j=i_j$ and $\gamma$ has $-1$ in precisely the first $k$ entries.
Fix $(\rho,\gamma)\in I_{2,n}^{\operatorname{nat}}$, and for each $\tau\in S_n$ consider the element $(\pi,\epsilon)=(\rho,\gamma)\tau$. For each such $(\pi,\epsilon)$, set \[ \Delta_{(\pi,\epsilon)}:= \left\{ x\in \Box_{(\rho,\gamma)}:
\begin{array}{l}
0\leq \epsilon_1x_{\pi(1)}\leq \cdots \leq \epsilon_nx_{\pi(n)}\leq 1 \\
\epsilon_jx_{\pi(j)}<\epsilon_{j+1}x_{\pi(j+1)} \text{ if } j\in \mathrm{NatDes}(\pi,\epsilon)
\end{array} \right\} \, , \] where $\epsilon_0x_{\pi(0)}=0$. Thus, the left-most inequality might be strict, while the right-most inequality is never strict. It follows that \[ \Box_{(\rho,\gamma)}=\bigcup_{\substack{(\pi,\epsilon)=(\rho,\gamma)\tau \\ \tau\in S_n}} \Delta_{(\pi,\epsilon)} \, , \] where our union is again disjoint. Observe that this triangulation of $\Box_{(\rho,\gamma)}$ is induced by \[ \left\{x_i=x_j: i,j \leq \mathrm{neg}(\gamma) \text{ or } i,j \geq \mathrm{neg}(\gamma)+1 \right\} \cup \left\{x_i=-x_j: i\geq \mathrm{neg}(\gamma)+1 \text{ and } j\leq \mathrm{neg}(\gamma) \right\} \, , \] a sub-arrangement of the type $B$ braid arrangement that intersects $\Box_{(\rho,\gamma)}$ in the same manner as the type $A$ braid arrangement intersects $[0,1]^n$.
For example, given $[2^1 \, 1^1 \, 3^0]=(\rho,\gamma)\in I_{2,n}^{\operatorname{nat}}$, the six elements of $[2^1 \, 1^1 \, 3^0]S_3$ are \begin{align*} [2^1 \, 1^1 \, 3^0]\circ [1 \, 2 \, 3] & = [2^1 \, 1^1 \, 3^0]\\ [2^1 \, 1^1 \, 3^0]\circ [2 \, 1 \, 3] & = [1^1 \, 2^1 \, 3^0]\\ [2^1 \, 1^1 \, 3^0]\circ [1 \, 3 \, 2] & = [2^1 \, 3^0 \, 1^1]\\ [2^1 \, 1^1 \, 3^0]\circ [3\, 2 \, 1] & = [3^0 \, 1^1 \, 2^1]\\ [2^1 \, 1^1 \, 3^0]\circ [2 \, 3 \, 1] & = [1^1 \, 3^0 \, 2^1]\\ [2^1 \, 1^1 \, 3^0]\circ [3 \, 1 \, 2] & = [3^0 \, 2^1 \, 1^1] \, , \end{align*} giving rise to $\Box_{[2^1 \, 1^1 \, 3^0]}$ being a union of the six corresponding $\Delta_{(\pi,\epsilon)}$'s shown below: \begin{align*} \Delta_{[2^1 \, 1^1 \, 3^0]} & = \{x\in \Box_{[2^1 \, 1^1 \, 3^0]}: 0 < -x_2 \leq -x_1 \leq x_3 \leq 1\} \\ \Delta_{[1^1 \, 2^1 \, 3^0]} & = \{x\in \Box_{[2^1 \, 1^1 \, 3^0]}: 0 < -x_1 < -x_2 \leq x_3\leq 1\} \\ \Delta_{[2^1 \, 3^0 \, 1^1]} & = \{x\in \Box_{[2^1 \, 1^1 \, 3^0]}: 0< -x_2 \leq x_3<-x_1 \leq 1\}\\ \Delta_{[3^0 \, 1^1 \, 2^1]} & = \{x\in \Box_{[2^1 \, 1^1 \, 3^0]}: 0\leq x_3<-x_1<-x_2\leq 1\}\\ \Delta_{[1^1 \, 3^0 \, 2^1]} & = \{x\in \Box_{[2^1 \, 1^1 \, 3^0]}: 0<-x_1\leq x_3<-x_2\leq 1\}\\ \Delta_{[3^0 \, 2^1 \, 1^1]} & = \{x\in \Box_{[2^1 \, 1^1 \, 3^0]}: 0\leq x_3<-x_2\leq -x_1\leq 1\} \, . \end{align*}
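A quick randomized sanity check of this example: every point of the half-open box $\Box_{[2^1 \, 1^1 \, 3^0]}$ (here $x_1,x_2\in[-1,0)$ and $x_3\in[0,1]$) should satisfy exactly one of the six chains of inequalities.

```python
# Randomized check that the six simplices above partition the box:
# each sampled point must lie in exactly one region.
import random

regions = [
    lambda x1, x2, x3: 0 < -x2 <= -x1 <= x3 <= 1,   # Delta_[2^1 1^1 3^0]
    lambda x1, x2, x3: 0 < -x1 < -x2 <= x3 <= 1,    # Delta_[1^1 2^1 3^0]
    lambda x1, x2, x3: 0 < -x2 <= x3 < -x1 <= 1,    # Delta_[2^1 3^0 1^1]
    lambda x1, x2, x3: 0 <= x3 < -x1 < -x2 <= 1,    # Delta_[3^0 1^1 2^1]
    lambda x1, x2, x3: 0 < -x1 <= x3 < -x2 <= 1,    # Delta_[1^1 3^0 2^1]
    lambda x1, x2, x3: 0 <= x3 < -x2 <= -x1 <= 1,   # Delta_[3^0 2^1 1^1]
]

random.seed(0)
for _ in range(10000):
    x1, x2 = -random.uniform(1e-9, 1), -random.uniform(1e-9, 1)  # x_1, x_2 < 0
    x3 = random.uniform(0, 1)
    hits = sum(r(x1, x2, x3) for r in regions)
    assert hits == 1, (x1, x2, x3, hits)
print("every sampled point lies in exactly one simplex")
```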
A lattice point ${\boldsymbol m}\in \mathrm{cone}(\Box_{(\rho,\gamma)})$ gets encoded by the monomial \[
z_0^{ m_0 } \prod_{ \epsilon_j = -1 } s \, z_{ -j }^{ -m_j } \prod_{ \epsilon_j = 1 } \, z_j^{ m_j } \, . \] Because of the definition of $\Delta_{(\pi,\epsilon)}$, we can use our shifting techniques from Section~\ref{sec:unim} (either technique will suffice in this case) to conclude \begin{align*}
\sigma_{ \mathrm{cone}(\Delta_{( \pi, \epsilon) }) } (z_0, z_{ \pm 1 }^{\pm 1}, & \dots, z_{ \pm n }^{\pm 1}, s) \ = \
\frac{\displaystyle s^{\mathrm{neg}(\pi,\epsilon)} \prod_{ j \in \mathrm{NatDes}(\pi, \epsilon) } z_0 \, z_{ \epsilon_{ j+1 } \pi(j+1) }^{ \epsilon_{ j+1 } } z_{ \epsilon_{ j+2 } \pi(j+2) }^{ \epsilon_{ j+2 } } \cdots z_{ \epsilon_n \pi(n) }^{ \epsilon_n } }{\displaystyle \prod_{ j=0 }^n \left( 1 - z_0 \, z_{ \epsilon_{ j+1 } \pi(j+1) }^{ \epsilon_{ j+1 } } z_{ \epsilon_{ j+2 } \pi(j+2) }^{ \epsilon_{ j+2 } } \cdots z_{ \epsilon_n \pi(n) }^{ \epsilon_n } \right) } \, . \end{align*} On the other hand, \begin{align*}
&\sigma_{ \mathrm{cone}([-1,1]^n) } (z_0, z_{\pm 1}^{\pm 1}, \dots, z_{\pm n}^{\pm 1}, s ) = \\
&\qquad \sum_{ k \ge 0 } \prod_{ j=1 }^n \left( s \, z_{ -j }^{-k} + s \, z_{ -j }^{-(k-1)} + \dots + s \, z_{ -j }^{ -1 } + 1 + z_j + z_j^2 + \dots + z_j^k \right) z_0^k \, , \end{align*} and the disjoint triangulations discussed above yield \[
\sigma_{ \mathrm{cone}([-1,1]^n) } (z_0, z_{\pm 1}^{\pm 1}, \dots, z_{\pm n}^{\pm 1}, s ) = \sum_{ (\pi,\epsilon) \in B_n } \sigma_{ \mathrm{cone}(\Delta_{\pi,\epsilon}) } (z_0, z_{\pm 1}^{\pm 1}, \dots, z_{\pm n}^{\pm 1}, s ) \, . \qedhere \] \end{proof}
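Since Theorem~\ref{Btheorem} is an identity of rational functions in several variables, a numeric spot check at rational sample values of $z_{\pm j}$ and $s$ already exercises every summand. The sketch below (sample values chosen arbitrarily; not a proof) compares coefficients of $z_0^k$ for $n=2$, using the conventions above: a prepended $\epsilon_0\pi(0)=0$ for $\mathrm{NatDes}$, and $z_{\epsilon_k\pi(k)}^{\epsilon_k}$ meaning $z_{\pi(k)}$ if $\epsilon_k=+1$ and $z_{-\pi(k)}^{-1}$ if $\epsilon_k=-1$.

```python
# Numeric spot check of the multivariate type-B identity for n = 2.
from itertools import permutations, product
import sympy as sp

z0 = sp.symbols('z0')
n, K = 2, 5
# sample rational values for z_1, z_2, z_{-1}, z_{-2} and s
zpos = {1: sp.Rational(2, 3), 2: sp.Rational(3, 5)}
zneg = {1: sp.Rational(5, 7), 2: sp.Rational(4, 9)}
sval = sp.Rational(1, 2)

def letter(k, eps, perm):        # value of z_{eps_k pi(k)}^{eps_k}
    return zpos[perm[k]] if eps[k] == 1 else 1 / zneg[perm[k]]

rhs = sp.Integer(0)
for perm in permutations(range(1, n + 1)):
    for eps in product([1, -1], repeat=n):
        w = [0] + [e * p for e, p in zip(eps, perm)]
        # z0 times the evaluated tail starting at position j+1
        tails = [z0 * sp.prod([letter(k, eps, perm) for k in range(j, n)])
                 for j in range(n + 1)]
        numer = sval**eps.count(-1) * sp.prod(
            [tails[j] for j in range(n) if w[j] > w[j + 1]])
        # truncated geometric expansion of 1/(1 - u) for each factor
        inv = sp.prod([sum(u**a for a in range(K + 1)) for u in tails])
        rhs += sp.expand(numer * inv)

for k in range(K + 1):
    lhs_k = sp.prod([sum(zpos[j]**i for i in range(k + 1))
                     + sval * sum(zneg[j]**-(i + 1) for i in range(k))
                     for j in range(1, n + 1)])
    assert rhs.coeff(z0, k) - lhs_k == 0
print("multivariate type-B identity checked at sample values for n = 2")
```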
\begin{proof}[Proof of Theorem \ref{chowgesselthm}] Setting $t := z_0$ and $q := z_1 = \dots = z_n = z_{ -1 }^{ -1 } = \dots = z_{ -n }^{ -1 }$ in Theorem \ref{Btheorem} gives \[
\sum_{ k \ge 0 } \left( [k+1]_q + s \, q [k]_q \right)^n t^k
= \sum_{ (\pi,\epsilon)\in B_n} s^{\mathrm{neg}(\epsilon)} \, \frac{\displaystyle \prod_{j\in \mathrm{NatDes}(\pi,\epsilon)} t q^{ n-j } }{ \prod_{ j=0 }^n \left( 1 - t q^{ n-j } \right) } \, . \] Applying the change of variables $q\mapsto \frac 1 q$ and $t\mapsto tq^n$ finishes the proof. \end{proof}
Our multivariate generalization of Theorem~\ref{newchowgesselflagthm} is the following, which is a special case of Theorem~\ref{wreathnewBthm}. Recall from Definition~\ref{wreathdefsignchange} the notation $\operatorname{ch}(\epsilon)$ for the number of color changes in $\epsilon$ and the notation $a_j^\epsilon$ to keep track of where color changes occur.
\begin{theorem}\label{newBtheorem} \begin{align*}
&\sum_{ k \ge 0 } \prod_{ j=1 }^n [2k+1]_{ z_j } z_0^k \ = \
\sum_{ (\pi,\epsilon)\in B_n } \frac{ \displaystyle z_0^{\left\lceil \operatorname{ch}(\epsilon)/2 \right\rceil } \prod_{j:a_j^\epsilon=1}z_{\pi(1)}z_{\pi(2)}\cdots z_{\pi(j)} \prod_{\substack{j\in \mathrm{Des}(\pi) \\ a_j^\epsilon=0 }}z_0z_{\pi(1)}^2z_{\pi(2)}^2\cdots z_{\pi(j)}^2 }{ \displaystyle \prod_{ j=0 }^n \left( 1 - z_0 \, z_{\pi(1) }^{2} z_{\pi(2) }^{2} \cdots z_{\pi(j)}^{2} \right) } \, . \end{align*} \end{theorem}
\begin{remark} Observe that by specializing Theorem~\ref{newBtheorem} using $t:=z_0$ and $q:=z_1=\cdots =z_n$, we obtain a bivariate generating function identity involving the joint distribution for $(\mathrm{des},\mathrm{fmajor})$. Theorem~\ref{newchowgesselflagthm} follows from this, as the pairs of statistics $(\mathrm{natdes},\mathrm{natfmaj})$ and $(\mathrm{des},\mathrm{fmajor})$ are equidistributed in $B_n$; this is a consequence of the bijection mapping every permutation $(\pi,\epsilon)\in B_n$ to the permutation where the $\pi(j)$ for $\epsilon_j=-1$ are reversed in order in the window for $(\pi,\epsilon)$, while the $\epsilon$-vector remains the same.
As an example, consider $[2^1\, 4^1\, 5^0\, 1^1 \, 3^1]\in B_5$. The entries $2$, $4$, $1$, and $3$ correspond to the positions where $\epsilon_j=-1$. Hence, by reversing the order of these entries, we obtain a new permutation $[3^1\, 1^1\, 5^0\, 4^1 \, 2^1]$, and it is immediate that the descent positions for the new permutation using the natural order are the same as those in the first permutation using our standard order for wreath products. Observe that for the first permutation we have $(\mathrm{des},\mathrm{fmajor})=(2,10)$, and for the second we also have $(\mathrm{natdes},\mathrm{natfmaj})=(2,10)$.
Hence, we may conclude that Theorem~\ref{newchowgesselflagthm} follows from the special case of $r=2$ in Theorem~\ref{wreathnewBthm}. \end{remark}
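The statistics of the image permutation in the example above can be recomputed mechanically. The helper below assumes the conventions of the preceding definitions: the usual integer order, a prepended $0$ for the type-$B$ descents, and $\mathrm{natfmaj}=2\cdot\mathrm{natmajor}_A+\mathrm{neg}$.

```python
def nat_stats(window):
    """(natdes, natfmaj) of a signed permutation given in window notation."""
    n = len(window)
    w0 = [0] + list(window)
    natdes = sum(1 for i in range(n) if w0[i] > w0[i + 1])
    natmajor_A = sum(i + 1 for i in range(n - 1) if window[i] > window[i + 1])
    neg = sum(1 for v in window if v < 0)
    return natdes, 2 * natmajor_A + neg

# [3^1 1^1 5^0 4^1 2^1]: superscript 1 marks a negated letter
print(nat_stats([-3, -1, 5, -4, -2]))  # -> (2, 10)
```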
\section{Type $D$}\label{dsection}
In this section we prove a multivariate identity related to negative statistics on Coxeter groups of type $D$. One may consider type-$D$ Eulerian polynomials stemming from the signed permutations in $B_n$ with an even number of $-1$'s. Let \[
D_n := \left\{ (\pi, \epsilon) \in B_n : \, \epsilon_1 \cdots \epsilon_n = 1 \right\} . \] The definition of $\mathrm{DNatDes}(\pi, \epsilon)$ and $\mathrm{dnatdes}(\pi, \epsilon)$ in type $D$ is analogous to \eqref{Bdescentdef}, except that we now use the convention $\epsilon_0 \pi(0) := - \epsilon_2 \pi(2)$. Brenti \cite[Theorem 4.10]{brentieulerian} proved that \begin{equation}\label{deuleriangenfcteq}
\sum_{ k \ge 0 } \left( \left( 2k+1 \right)^n - 2^{ n-1 } \left( B_n (k+1) - B_n(0) \right) \right) t^k
\ = \ \frac{ \sum_{ (\pi, \epsilon) \in D_n } t^{ \mathrm{dnatdes}(\pi, \epsilon) } }{ (1-t)^{ n+1 } } \, , \end{equation} where $B_n(x)$ is the $n$th Bernoulli polynomial. We focus on the following identity, due to Biagioli \cite{biagioli}, involving negative statistics in type $D$.
\begin{theorem}[Biagioli]\label{negDthm}
\[
\sum_{ k \ge 0 } [k+1]_q^n \, t^k
= \frac{ \sum_{(\pi,\epsilon) \in D_n } t^{ \mathrm{dndes}(\pi,\epsilon) } q^{ \mathrm{dnmajor}(\pi,\epsilon) } }{ (1-t)(1-tq^n)\prod_{j=1}^{n-1} (1-t^2q^{2j}) } \, . \] \end{theorem}
\begin{definition}\label{negDstats} Using the order $-1<\cdots<-n<1<\cdots<n$ on $[-n,n]\setminus\{0\}$, for an element $(\pi,\epsilon)\in D_n$, we define $\mathrm{Des}_A(\pi,\epsilon)$, $\mathrm{neg}(\pi,\epsilon)$, and $\mathrm{des}_A(\pi,\epsilon)$ as for the group $B_n$. Further, we set $\mathrm{Neg}(\pi,\epsilon):=\mathrm{NNeg}(\pi,\epsilon)$. We define the \emph{type-$D$ negative descent multiset} as \begin{align*} \mathrm{DNDes}(\pi,\epsilon) &:= \mathrm{Des}_A(\pi,\epsilon)\cup \{\pi(i) -1 : \epsilon_i=-1 \} \setminus \{0\} \\ &= \mathrm{Des}_A(\pi,\epsilon)\cup \{j-1: j\in \mathrm{Neg}((\pi,\epsilon)^{-1})\setminus \{1\}\} \, . \end{align*} The \emph{type-$D$ negative descent statistic} is \[ \mathrm{dndes}(\pi,\epsilon):=\#\mathrm{DNDes}(\pi,\epsilon) \, . \] The \emph{type-$D$ negative major index} is \[ \mathrm{dnmajor}(\pi,\epsilon):=\sum_{i\in \mathrm{DNDes}(\pi,\epsilon)}i \, . \] \end{definition}
\begin{example}\label{ex:typeD} Let $(\pi,\epsilon)=[2^1\, 4^1\, 5^0\, 1^1\, 3^1]\in D_5$. Then $\mathrm{Des}_A(\pi,\epsilon)=\{3\}$ and $\mathrm{Neg}((\pi,\epsilon)^{-1})=\{1,2,3,4\}$, hence $\mathrm{DNDes}(\pi,\epsilon)=\{3\}\cup\{1,2,3\}$ and $\mathrm{dnmajor}(\pi,\epsilon)=9$. \end{example}
\begin{remark} Biagioli originally defined $\mathrm{dnmajor}$ and $\mathrm{dndes}$ in \cite{biagioli} using the natural order, but Theorem~\ref{negDthm} holds for either definition. \end{remark}
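Theorem~\ref{negDthm} can likewise be checked by brute force for small $n$ with the statistics of Definition~\ref{negDstats}; the sketch below verifies $n=2$ and $n=3$, comparing coefficients of $t^k$ after a truncated geometric expansion of the reciprocal of the denominator.

```python
# Brute-force check of Biagioli's identity for n = 2, 3, with the
# order -1 < ... < -n < 1 < ... < n from Definition negDstats.
from itertools import permutations, product
from math import prod as iprod
import sympy as sp

q, t = sp.symbols('q t')

def key(v, n):                  # position of v in the order -1<...<-n<1<...<n
    return -v - 1 if v < 0 else n - 1 + v

def dn_numerator(n):
    """Generating polynomial sum t^dndes q^dnmajor over D_n."""
    num = sp.Integer(0)
    for perm in permutations(range(1, n + 1)):
        for eps in product([1, -1], repeat=n):
            if iprod(eps) != 1:                 # restrict to D_n
                continue
            w = [e * p for e, p in zip(eps, perm)]
            des_A = [i + 1 for i in range(n - 1)
                     if key(w[i], n) > key(w[i + 1], n)]
            negs = [p for e, p in zip(eps, perm) if e == -1]
            dndes = des_A + [j - 1 for j in negs if j != 1]  # multiset union
            num += t**len(dndes) * q**sum(dndes)
    return num

K = 5
for n in (2, 3):
    num = dn_numerator(n)
    # truncated expansion of 1 / ((1-t)(1-t q^n) prod_{j<n} (1 - t^2 q^{2j}))
    inv = (sum(t**a for a in range(K + 1))
           * sum((t * q**n)**a for a in range(K + 1))
           * sp.prod([sum((t**2 * q**(2 * j))**a for a in range(K + 1))
                      for j in range(1, n)]))
    rhs = sp.expand(num * inv)
    for k in range(K + 1):
        lhs_k = sp.expand(sum(q**i for i in range(k + 1))**n)  # [k+1]_q^n
        assert sp.expand(rhs.coeff(t, k) - lhs_k) == 0
print("type D identity checked for n = 2, 3 up to t^5")
```

For $n=2$ the numerator comes out as $(1+tq)^2$, which agrees with the four elements of $D_2$.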
Our multivariate generalization of Theorem~\ref{negDthm} is as follows. Let $I_{2,n}^*\subseteq I_{2,n}$ denote the elements $(\rho,\epsilon)\in I_{2,n}$ satisfying $\epsilon_1\epsilon_2\cdots \epsilon_n=1$. It is straightforward from our discussion regarding $I_{r,n}$ that \[ D_n=\bigcup_{\pi\in S_n}I_{2,n}^*\pi \, . \]
\begin{theorem}\label{cubeDthm} \begin{align*}
\sum_{k \ge 0 } \prod_{j=1}^n [k+1]_{ z_j } \, z_0^k = & \\ \sum_{\pi\in S_n} \sum_{(\rho,\epsilon)\in I_{2,n}^*} & \frac{\displaystyle \prod_{j\in \mathrm{Des}(\pi)} z_0z_{\pi(1)}z_{\pi(2)}\cdots z_{\pi(j)} \prod_{j\in \mathrm{Neg}((\rho,\epsilon)^{-1})\setminus \{1\}} z_0z_{\pi(1)}z_{\pi(2)}\cdots z_{\pi(j-1)} }{\displaystyle (1-z_0)(1-z_0z_{\pi(1)}\cdots z_{\pi(n)})\prod_{j=1}^{n-1} \left(1-z_0^2z_{\pi(1)}^2\cdots z_{\pi(j)}^2 \right) } \, . \end{align*} \end{theorem}
\begin{proof} We begin with the triangulation of $\mathrm{cone} ([0,1]^n)$ into the set of cones $ \left\{ \mathrm{cone}(\Delta_\pi) : \, \pi \in S_n \right\} $ found in the proof of Theorem~\ref{Atheorem}. For $\mathrm{cone}(\Delta_\pi)$ we use the non-unimodular ray generators \[ {\boldsymbol e}_0, 2({\boldsymbol e}_0+{\boldsymbol e}_{\pi(1)}), \ldots, 2({\boldsymbol e}_0+{\boldsymbol e}_{\pi(1)}+\cdots + {\boldsymbol e}_{\pi(n-1)}), {\boldsymbol e}_0+{\boldsymbol e}_{\pi(1)}+\cdots + {\boldsymbol e}_{\pi(n)} \, . \] There are $2^{n-1}$ integer points in the fundamental parallelepiped for $\mathrm{cone}(\Delta_\pi)$ using these ray generators. Each such point can be expressed as a linear combination of the middle $n-1$ generators with coefficients $\alpha_j\in \{0,\frac 1 2\}$, plus a sum of shifting vectors for those integer points that need to be shifted away from the boundary of the cone. As in our proof of Theorem~\ref{wreathcubeBthm}, we will use the technique of shifting the entire parallelepiped.
Thus, every integer point ${\boldsymbol p}$ in the (shifted) fundamental parallelepiped for $\mathrm{cone}(\Delta_\pi)$ can be uniquely expressed as \[ {\boldsymbol p}=\sum_{j\in \mathrm{Des}(\pi)} ({\boldsymbol e}_0+{\boldsymbol e}_{\pi(1)}+\cdots + {\boldsymbol e}_{\pi(j)}) + \sum_{j=1}^{n-1} \alpha_j({\boldsymbol e}_0+{\boldsymbol e}_{\pi(1)}+\cdots + {\boldsymbol e}_{\pi(j)}) \, \] with $\alpha_j\in \{0, 1\}$. Associate to the point ${\boldsymbol p}$ the element $(\rho,\epsilon)\pi \in D_n$, where $\alpha_j=1$ if and only if $j+1\in \mathrm{Neg}((\rho,\epsilon)^{-1})=\mathrm{Neg}([(\rho,\epsilon)\pi]^{-1})$.
As in the proof of Theorem~\ref{wreathcubeBthm}, this correspondence creates a bijection between the elements of $D_n$ and the (appropriately shifted) integer points in the fundamental parallelepipeds for the cones over the $\Delta_\pi$. Our choice of $(\rho,\epsilon)\pi$ associated to ${\boldsymbol p}$ is unique because the condition $\alpha_j=1$ if and only if $j+1\in \mathrm{Neg}((\rho,\epsilon)^{-1})=\mathrm{Neg}([(\rho,\epsilon)\pi]^{-1})$ determines the signs placed on the letters $\{2,3,\ldots,n\}$ when $(\rho,\epsilon)\pi$ is written in window notation. Hence, $\epsilon_1$ is determined from these $n-1$ signs and the fact that $\epsilon_1\cdots\epsilon_n=1$. This bijection encodes $I_{2,n}^*$ as the integer points in the fundamental parallelepiped for $\mathrm{cone}(\Delta_{\mathrm{Id}})$.
Thus \[
\sigma_{\mathrm{cone}(\Delta_\pi)}(z_0,\ldots,z_n)
= \sum_{(\rho,\epsilon)\in I_{2,n}^*} \frac{\displaystyle \prod_{j\in \mathrm{Des}(\pi)} z_0z_{\pi(1)}z_{\pi(2)}\cdots z_{\pi(j)} \prod_{j\in \mathrm{Neg}((\rho,\epsilon)^{-1})\setminus \{1\}} z_0z_{\pi(1)}z_{\pi(2)}\cdots z_{\pi(j-1)} }{\displaystyle (1-z_0)(1-z_0z_{\pi(1)}\cdots z_{\pi(n)})\prod_{j=1}^{n-1} \left(1-z_0^2z_{\pi(1)}^2\cdots z_{\pi(j)}^2 \right) } \, . \] This completes our proof, since from our triangulation it follows that \[ \sigma_{\mathrm{cone} ([0,1]^n)}(z_0,\ldots,z_n) = \sum_{\pi\in S_n} \sigma_{\mathrm{cone}(\Delta_\pi)}(z_0,\ldots,z_n) \, . \qedhere \] \end{proof}
\begin{proof}[Proof of Theorem~\ref{negDthm}] Setting $t:=z_0$ and $q:=z_1=\cdots =z_n$ in Theorem~\ref{cubeDthm} yields our desired form on the left-hand side of our identity, while the denominator of the right-hand side uniformly becomes \[ (1-t)(1-tq^n) \prod_{ j=1 }^{n-1} \left( 1 - t^2 q^{2j} \right) . \] Each element $\displaystyle (\rho,\epsilon)\pi\in \bigcup_{\pi\in S_n} I_{2,n}^*\pi$ contributes to the numerator on the right-hand side of our identity a summand of \[ \prod_{j\in \mathrm{Des}(\pi)}tq^j \prod_{j\in \mathrm{Neg}([(\rho,\epsilon)\pi ]^{-1})\setminus \{1\}} tq^{j-1} . \] Because $\mathrm{Des}(\pi)=\mathrm{Des}_A((\rho,\epsilon)\pi)$, it follows that \[ \prod_{j\in \mathrm{Des}(\pi)}tq^j \prod_{j\in \mathrm{Neg}([(\rho,\epsilon)\pi ]^{-1})\setminus \{1\}} tq^{j-1} = t^{\mathrm{dndes}((\rho,\epsilon)\pi)}q^{\mathrm{dnmajor}((\rho,\epsilon)\pi)} , \] hence our proof is complete. \end{proof}
\end{document}
\begin{document}
\title{Proof-of-principle experimental demonstration of twin-field type quantum key distribution}
\author{Xiaoqing Zhong} \email{xzhong@physics.utoronto.ca} \affiliation{Center for Quantum Information and Quantum Control, Dept. of Physics, University of Toronto, Toronto, Ontario, M5S 1A7, Canada} \author{Jianyong Hu} \email{jianyong\_hu@163.com} \affiliation{Center for Quantum Information and Quantum Control, Dept. of Electrical \& Computer Engineering, University of Toronto, Toronto, Ontario, M5S 3G4, Canada} \author{Marcos Curty} \affiliation{EI Telecomunicaci\'on, Dept. of Signal Theory and Communications, University of Vigo, E-36310 Vigo, Spain} \author{Li Qian} \affiliation{Center for Quantum Information and Quantum Control, Dept. of Electrical \& Computer Engineering, University of Toronto, Toronto, Ontario, M5S 3G4, Canada} \author{Hoi-Kwong Lo} \affiliation{Center for Quantum Information and Quantum Control, Dept. of Physics, University of Toronto, Toronto, Ontario, M5S 1A7, Canada} \affiliation{Center for Quantum Information and Quantum Control, Dept. of Electrical \& Computer Engineering, University of Toronto, Toronto, Ontario, M5S 3G4, Canada}
\begin{abstract}
The twin-field (TF) quantum key distribution (QKD) protocol and its variants are highly attractive because they can beat the well-known rate-loss limit ({\it i.e.}, the PLOB bound) for QKD protocols without quantum repeaters. In this paper, we perform a proof-of-principle experimental demonstration of TF-QKD based on the protocol proposed by Curty et al.~\cite{tf-qkd_marcos} which removes from the original TF-QKD scheme the need for post-selection on the matching of a global phase, and can deliver nearly an order of magnitude higher secret key rate. Furthermore, we overcome the major difficulty in the practical implementation of TF-QKD, namely, the need to stabilize the phase of the quantum state over kilometers of fiber. A Sagnac loop structure is utilized to ensure excellent phase stability between the different parties. Using decoy states, we demonstrate secret-key generation rates that beat the PLOB bound when the channel loss is above 40 dB.
{\bf Keywords: quantum cryptography, quantum key distribution, twin-field quantum key distribution, secret key rate, PLOB bound, phase encoding}
\end{abstract}
\maketitle
\textit{Introduction} - Quantum key distribution (QKD)~\cite{R1,R2,R3,R4,Ekert1991} makes it possible to distribute secret keys to remote users with information-theoretic security, meaning that its security is independent of an attacker's computational power~\cite{R5,R6,R7,R8,R9,R10,R11,NJP11,Mayer_proof,secure}. Experimentally, QKD has been performed over 421 km of fiber~\cite{R13}, as well as over 1000 km of free space through satellite-to-ground links~\cite{R14,R15}. Towards the construction of a global quantum internet, performing long-distance QKD is a crucial step~\cite{R16,R17,R18,R19}. However, there is a fundamental limit on the point-to-point secret key rate versus channel transmittance~\cite{plob,R21,R22} that can be achieved by two remote parties using QKD without an intermediate node. This limit, also called the PLOB bound~\cite{plob}, states that the secret key rate scales essentially linearly with the channel transmittance.
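For a pure-loss channel of transmittance $\eta$, the PLOB bound caps the repeaterless key rate at $-\log_2(1-\eta)\approx 1.44\,\eta$ secret bits per channel use, whereas TF-type protocols can scale as $O(\sqrt{\eta})$. The sketch below compares the two scalings; the prefactor $c$ in the square-root curve is illustrative only, not a parameter of any particular protocol.

```python
# Comparing the PLOB repeaterless bound, -log2(1 - eta) secret bits per
# channel use, with an O(sqrt(eta)) scaling typical of single-photon-
# interference (TF-type) protocols.  c is a hypothetical prefactor.
from math import log2, sqrt

def plob(eta):
    """PLOB bound for a pure-loss channel of transmittance eta."""
    return -log2(1 - eta)

c = 0.05  # illustrative protocol-dependent prefactor (assumption)
for loss_db in (20, 40, 60, 80):
    eta = 10 ** (-loss_db / 10)
    print(f"{loss_db} dB: PLOB = {plob(eta):.2e}, c*sqrt(eta) = {c*sqrt(eta):.2e}")
```

With this illustrative prefactor the square-root curve overtakes the PLOB bound between 20 and 40 dB of loss, consistent with the regime in which the experiment reported here beats the bound.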
To overcome the PLOB bound, besides using quantum repeaters~\cite{R23,R24,R25,R26}, it has been proposed to employ measurement-device-independent (MDI) QKD~\cite{R29} in combination with quantum memories~\cite{PRA89,NJP16} or with quantum non-demolition measurements~\cite{R27} located at the untrusted intermediate node that is used in MDI-QKD. While promising, all these approaches are still far beyond current experimental capabilities. Remarkably, it has recently been proven theoretically~\cite{R39,R40,R41,R42,tf-qkd_marcos,R44,finite_decoy_tfqkd} that variations of the twin-field (TF) QKD protocol proposed by Lucamarini et al.~\cite{tf-qkd_original} can beat the PLOB bound with the help of just one untrusted intermediate node (Charlie) performing a simple interferometric measurement. This shows that intercity QKD could be feasible with today's technology, without the need for quantum memories or quantum non-demolition measurements. In TF-QKD, two users (Alice and Bob) send two optical fields that produce single-photon interference on a beam splitter located at Charlie. A successful result corresponds to Charlie observing a single-photon detection event, which measures the relative phase between the two optical beams. The fact that TF-QKD uses singles ({\it i.e.}, single-photon detection events) results in a secret key rate that scales as the square root of the channel transmittance rather than linearly, because only one photon (either from Alice or from Bob) has to arrive at Charlie. Importantly, since TF-QKD has a structure similar to MDI-QKD, in the sense that it uses an untrusted node to interfere Alice and Bob's signals, all the advantages of MDI-QKD~\cite{R29,PRA89,NJP16,R27,R30,R31,R32,R33,R34}, such as its immunity to any possible attack against the measurement unit and its readiness for star networks, are retained by TF-QKD.
In this regard, TF-QKD can be considered an MDI-QKD scheme based on singles rather than on coincidences.
With the foundations of TF-QKD firmly established, it is now very important to demonstrate its viability experimentally. One main drawback of the original TF-QKD protocol for practical implementations is that it requires long-distance, subwavelength path-length phase stability, which is a new requirement in QKD and is much more demanding to achieve than the two-photon interference needed in standard MDI-QKD. Another drawback is that the original protocol requires a post-selection step based on the matching of a global phase between Alice and Bob, which reduces the secret key rate by about an order of magnitude. To overcome these limitations, several variants of the original TF-QKD protocol have recently been proposed and investigated~\cite{R39,R40,R41,R42,tf-qkd_marcos,R44,finite_decoy_tfqkd}. For example, Ref.~\cite{R39} analyzes the security of a modified TF-QKD scheme by exploiting a quantum coin idea~\cite{R6,R10}. Ref.~\cite{R40}, on the other hand, introduces phase-matching QKD, which removes the requirement of active global phase randomization but still needs a phase post-selection step. In Refs.~\cite{tf-qkd_marcos,R44,finite_decoy_tfqkd} the need for such a post-selection step is removed. In summary, the security of several variants of TF-QKD has now been firmly established, and their key rates beat the PLOB bound. It is therefore important to implement a TF-QKD protocol experimentally in order to demonstrate its practicality.
In this paper we perform a proof-of-principle experimental implementation of the TF-QKD protocol introduced by Curty et al.~\cite{tf-qkd_marcos}. This protocol does not need a post-selection step based on the matching of a global phase, and can provide a secret key rate that is about an order of magnitude higher than previous proposals~\cite{qcrypt2018}. The key idea is to use coherent states for key generation and photon-number states as the complementary basis to prove security~\cite{NJP11}. The latter type of states can be simulated by means of phase-randomized coherent states in combination with the decoy-state method~\cite{R35,R36,R37,R38}. In our experiment, to stabilize the phase of the quantum states over kilometers of fiber, we build an auto-compensating set-up that provides excellent phase stability. We remark that auto-compensating set-ups, also known as ``plug-and-play'' systems, are widely used in QKD~\cite{plug-play,NJP4} and are the workhorse of a widely deployed commercial QKD system manufactured by the company ID Quantique. Security proofs for such ``plug-and-play'' QKD systems have been developed in~\cite{PRA77,NJP12}. More concretely, we use a Sagnac loop in which an optical pulse (generated by a single laser in our experiment) travels through either a clockwise or a counter-clockwise path, and the two paths then interfere with each other. In one path the pulse is phase-modulated by Alice, whereas in the other path it is phase-modulated by Bob. To implement the decoy-state method, intensity modulators are employed to modulate the pulses leaving Alice's and Bob's stations. Our experimental results confirm an achievable secret key rate well above the PLOB bound, and constitute a crucial step towards demonstrating the practical feasibility of TF-QKD.
\textit{Protocol and Experiment} - The TF-QKD protocol introduced in~\cite{tf-qkd_marcos} is composed of the following five steps.
Step 1: Each of Alice and Bob prepares a weak coherent state. Alice (Bob) chooses the $X$ basis with probability $P_X$ and the $Z$ basis with probability $P_Z=1-P_X$. If the $X$ basis is chosen, Alice (Bob) randomly prepares a coherent state $\left| \alpha\right\rangle_A$ ($\left| \alpha\right\rangle_B$) for the bit value $b_A=0$ ($b_B=0$) or $\left|-\alpha\right\rangle_A$ ($\left|-\alpha\right\rangle_B$) for the bit value $b_A=1$ ($b_B=1$). If the $Z$ basis is chosen, Alice (Bob) prepares a phase-randomized coherent state \begin{equation}\label{state}
\rho_A=\frac{1}{2\pi}\int_{0}^{2\pi}d\varphi_A\left|\beta_Ae^{i\varphi_A}\right\rangle_A\left\langle\beta_Ae^{i\varphi_A}\right| \end{equation}
($\rho_B$ has the same expression as~(\ref{state}) with all subscripts changed to $B$). The value of the intensity $|\beta_A|^2$ ($|\beta_B|^2$) is chosen at random from a set $S=\left\lbrace \mu,\nu,\omega\right\rbrace$ containing, say, three possible intensities.
Step 2: Alice and Bob send their states to the middle node Charlie through optical channels, each of them with transmittance $\sqrt{\eta}$.
Step 3: Charlie interferes the incoming states with a 50:50 beam splitter followed by two single-photon detectors, $D_0$ and $D_1$. He records the result, i.e. which detector clicks at each expected arrival time slot.
Step 4: Once the quantum communication phase of the protocol has finished, Charlie announces all the results obtained, and Alice and Bob declare the bases used.
Step 5: Based on the information announced, Alice and Bob estimate the bit and phase error rates and distill a secret key from those instances where they used the $X$ basis and Charlie declared one detection click. More precisely, whenever Charlie reports a click event in, say, $D_0$ ($D_1$) and both Alice and Bob chose the $X$ basis, $b_A$ and $b_B$ ($b_B\oplus1$) are regarded as their raw keys.
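To make the logic of the five steps concrete, the following hedged Python sketch (an illustration only, not part of the experiment) simulates ideal $X$-basis rounds: bits are encoded in the sign of a coherent amplitude $\pm\alpha$, a lossless 50:50 beam splitter routes the light towards $D_0$ or $D_1$ according to the relative phase, and threshold detectors without dark counts produce the click record. In this idealized model every sifted bit of Alice and Bob agrees; the value of $|\alpha|^2$ is borrowed from the 38.0 dB row of Table~\ref{tab:intensity} purely for concreteness.

```python
import math
import random

random.seed(7)

def click_probs(a, b):
    """Click probabilities of threshold detectors D0/D1 behind a
    lossless 50:50 beam splitter fed with coherent amplitudes a, b."""
    n0 = abs(a + b) ** 2 / 2  # mean photon number at the D0 output
    n1 = abs(a - b) ** 2 / 2  # mean photon number at the D1 output
    return 1 - math.exp(-n0), 1 - math.exp(-n1)

alpha = math.sqrt(0.0256)  # |alpha|^2 as in the 38.0 dB setting

agree = total = 0
for _ in range(20000):
    # Step 1 (X basis): encode bits in the sign of the amplitude.
    bA, bB = random.randint(0, 1), random.randint(0, 1)
    # Steps 2-3: interference at Charlie and detection.
    p0, p1 = click_probs(alpha * (-1) ** bA, alpha * (-1) ** bB)
    c0, c1 = random.random() < p0, random.random() < p1
    # Step 5: keep only the rounds with exactly one click.
    if c0 != c1:
        total += 1
        kB = bB if c0 else bB ^ 1  # Bob flips his bit on a D1 click
        agree += (bA == kB)
# Without loss, dark counts or misalignment the sifted keys agree.
```

Adding channel transmittance and dark counts to this toy model qualitatively reproduces the loss dependence of the QBER discussed later.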
\begin{figure}
\caption{Experimental set-up of twin-field quantum key distribution. The experiment starts with Charlie modulating the cw light from the laser to create coherent pulses with his intensity modulator (IM$_C$). He then uses the optical attenuator (Att$_C$) to prepare weak coherent pulses. The pulses travel through a circulator (C) and enter the Sagnac loop through a 50:50 beam splitter (BS). The clockwise (counter-clockwise) traveling pulse goes through another optical attenuator Att$_B$ (Att$_A$) and, in some cases, a 5-km single-mode fiber spool F$_B$ (F$_A$) before reaching Bob’s (Alice’s) station. Bob (Alice) then lets the pulse pass through his (her) station with no modulation and sends it to Alice (Bob) through a 7-km fiber spool F$_{A-B}$. Once Alice (Bob) receives the pulse, she (he) uses her (his) phase modulator PM$_A$ (PM$_B$) and intensity modulator IM$_A$ (IM$_B$) to add a phase to the pulse and to modulate its intensity, respectively, based on her (his) choice of basis and bit value. Then Alice (Bob) sends the modulated coherent pulse back to Charlie’s BS through F$_A$ (F$_B$) and Att$_A$ (Att$_B$). Charlie records the interference result using two single-photon detectors $D_0$ and $D_1$, and publicly announces the outcomes.}
\label{set-up}
\end{figure}
Fig.~\ref{set-up} shows the schematic diagram of our experimental set-up. It is a two-way QKD system consisting of a Sagnac interferometer. It is similar to the “plug-and-play” QKD system employed in~\cite{plug-play,NJP4}. The Sagnac arrangement is chosen to overcome the main challenges in implementing TF-QKD, namely, to share a phase reference between Alice and Bob and to achieve single-photon interference at Charlie, which requires phase stability. The common-path nature of the Sagnac loop automatically compensates for phase fluctuations of the two fields from Alice and Bob. In this set-up, the laser source is in Charlie's hands and is shared by Alice and Bob. This ensures that the three parties have the same phase reference. Charlie uses his intensity modulator (IM$_C$) and his optical attenuator (Att$_C$) to create weak coherent pulses from a cw DFB laser (PRO 800, wavelength 1552.6 nm). The IM$_C$ has an extinction ratio of $>$30 dB. The pulses have a FWHM width of 900 ps and a repetition rate of 10 MHz. They go through an optical circulator and then enter the Sagnac loop through a 50:50 fiber-based beam splitter. The clockwise and counter-clockwise traveling pulses each go through an optical variable attenuator (and, in one case, a 5-km fiber spool F$_B$ or F$_A$, respectively) before arriving at Bob's or Alice's station. The clockwise (counter-clockwise) pulses go through Bob’s (Alice’s) station without being modulated. Therefore, no information is directly communicated between Alice and Bob. The clockwise (counter-clockwise) pulses then travel through a 7-km fiber spool F$_{A-B}$ and reach Alice's (Bob's) station. This ensures that Alice and Bob are physically separated by kilometers of fiber. Inside the station of Alice (Bob), there is a phase modulator PM$_A$ (PM$_B$) and an intensity modulator IM$_A$ (IM$_B$). If Alice (Bob) chooses the $X$ basis, she (he) uses her (his) PM$_A$ (PM$_B$) to randomly add a $0$ or $\pi$ phase to the pulse. 
If Alice (Bob) chooses the $Z$ basis, then she (he) uses her (his) PM$_A$ (PM$_B$) to add a random phase between $0$ and $2\pi$ to the pulse. The intensity modulator is used to set the average number of photons per pulse to either $|\alpha|^2$ for the signal states in the $X$ basis, or one of the intensities in the set $S=\left\lbrace \mu,\nu,\omega\right\rbrace$ for the decoy states in the $Z$ basis. After the phase and intensity modulations, Alice (Bob) sends the pulses through a variable optical attenuator and, in some cases, through a 5-km spool of actual fiber F$_A$ (F$_B$), before they reach Charlie's beam splitter. The loss between Alice (Bob) and Charlie is adjusted to simulate the loss due to the communication channel. To demonstrate the practicality of the scheme, we add the 5-km fiber spool F$_A$ (F$_B$) between Alice (Bob) and Charlie, in addition to the attenuator. The pulses coming from Alice and Bob interfere at Charlie’s beam splitter. One output of this beam splitter is directed to a single-photon detector (SPD) $D_0$ via the circulator, while the other output is followed directly by another SPD, $D_1$. The SPDs are commercial free-running avalanche photodiodes (ID220) with an efficiency of 11.7\% and a dark count rate of 750 Hz. The SPDs have a time jitter on the order of 200 ps, smaller than the optical pulse width. Charlie records each click event (within a 900 ps window where the detection is expected), and publicly announces the result. Afterward, Alice and Bob declare their basis choices and use the instances where they both selected the $X$ basis and Charlie announced a single-click event to distill a secure secret key.
Whether in Alice’s station or in Bob’s station, both clockwise and counter-clockwise traveling pulses pass through the IM and PM. It is crucial to ensure that Alice (Bob) only modulates the clockwise (counter-clockwise) traveling pulse. This is achieved by using appropriate fiber lengths, between Alice (Bob) and Charlie and between Alice and Bob, so that the two counter propagating pulses never overlap with each other at any modulator inside the Sagnac loop. Note that this is not a practical limitation, since in practice Alice and Bob can measure the fiber lengths in the link and add or remove small lengths of fiber within their own set-up. All modulators used in our set-up are driven and synchronized by a high-speed arbitrary waveform generator (AWG, Keysight M8195A). The delay times of the driving signal of Alice’s (Bob’s) IM$_A$ (IM$_B$) and PM$_A$ (PM$_B$), relative to the driving signal of Charlie’s IM$_C$, are well adjusted to ensure that Alice and Bob modulate the intended pulses. As in any practical system, there are unintended reflections and backscattering from the channel, causing unintended “clicks” in the detectors. Fortunately, these unintended signals do not arrive at the detectors at the same time as the signals from Alice and Bob. With precise synchronization, we can eliminate the unintended clicks by choosing the appropriate time windows for detection in Charlie's station. To further reduce the errors in the detection results, the fiber lengths within Charlie's station are adjusted to guarantee that the unintended signals due to reflections / backscattering do not overlap with the real signals at the detectors, i.e., do not fall inside the detection window.
\begin{figure*}
\caption{(a) Interference visibility of the system running for 40 minutes. Each data point represents the average interference visibility over 30 s. The blue squares indicate the case in which Alice (Bob) and Charlie are connected only through an attenuator, while the red circles indicate the case in which a 5-km fiber spool F$_A$ (F$_B$) is added between Alice (Bob) and Charlie. In both cases, the system is stable and the interference visibility stays above 99.5\%. (b) Ratio of the total number of photon counts at detector $D_0$ to that at detector $D_1$ while phase randomization is applied for 40 minutes. The counts at detector $D_0$ are calibrated to compensate for the loss in the circulator. In this scenario, the photons should be detected by $D_0$ and $D_1$ with equal probability. Again, the blue squares (red circles) represent the ratio without (with) the 5-km fiber spool F$_A$ (F$_B$) inserted between Alice (Bob) and Charlie. As expected, the ratio in both cases is very close to 1.}
\label{IV}
\label{ratio}
\label{stablity}
\end{figure*}
A number of fiber-based polarization controllers are installed inside the Sagnac loop (see Fig.~\ref{set-up}) to ensure that the pulses are aligned in polarization after they travel through the entire loop and interfere at Charlie’s beam splitter. All the fiber spools are stored in sealed boxes, but no active polarization stabilization is applied. The sealed boxes mimic the environment of buried fiber cables. The voltage of the driving signal of Alice’s (Bob’s) PM$_A$ (PM$_B$) is tuned to V$_\pi$ (4.5 V) to maximize the interference visibility. Since the two pulses coming from Alice and Bob travel exactly the same path, they undergo the same phase drift and their relative phase is therefore stable without active phase stabilization. Due to the combination of the aforementioned measures, even after traveling kilometers of fiber, the interference visibility of our system is kept well above 99\% for an extensive duration, as shown in Fig.~\ref{stablity} (a). When the fiber spool F$_A$ (F$_B$) is taken out of the loop and Alice (Bob) and Charlie are connected only through an attenuator (blue squares in Fig.~\ref{stablity} (a)), the system is more stable and the average interference visibility is about 99.8\%. When the 5-km fiber spool F$_A$ (F$_B$) is added between Alice (Bob) and Charlie (red circles in Fig.~\ref{stablity} (a)), the interference visibility is slightly lower compared to the case without the fiber spools F$_A$ and F$_B$. We attribute this degradation of the visibility to polarization fluctuations, the depolarization effect, as well as low levels of Rayleigh backscattering in long fiber spools. Nonetheless, the interference visibility in this latter case is still stable for at least 40 minutes and the average value is about 99.7\%. In order to keep the high visibility and improve the performance of the system, we stop the experiment every 40 minutes for polarization realignment. 
When a random phase is required (for the decoy state signal), the voltage of the driving signal is randomly chosen from $0$ to V$_\pi$. In this scenario, the photons should be detected by the detectors $D_0$ and $D_1$ with equal probability. Fig.~\ref{stablity} (b) shows the ratio of the total number of photon counts at $D_0$ to the total number of photon counts at $D_1$ when phase randomization is applied continuously at both Alice’s and Bob’s stations for 40 minutes. Note that the total number of photon counts at detector $D_0$ is calibrated to compensate for the circulator loss. As illustrated in Fig.~\ref{stablity} (b), both cases (with or without the 5-km fiber spools F$_A$ and F$_B$) maintain a stable ratio close to 1, which indicates that phase randomization can be effectively implemented.
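A hedged numerical sketch (a simplified lossless model, not our data-analysis code) illustrates why this ratio should be 1: with equal fixed phases the two fields interfere constructively towards $D_0$ only, whereas averaging over independent uniform phases equalizes the mean photon numbers at the two outputs.

```python
import cmath
import math
import random

random.seed(1)

def output_intensities(phiA, phiB, beta=0.3):
    """Mean photon numbers at D0/D1 when Alice and Bob each send a
    coherent state of amplitude beta with phases phiA and phiB."""
    a = beta * cmath.exp(1j * phiA)
    b = beta * cmath.exp(1j * phiB)
    return abs(a + b) ** 2 / 2, abs(a - b) ** 2 / 2

# Equal fixed phases: perfect constructive interference toward D0.
n0, n1 = output_intensities(0.0, 0.0)  # n1 == 0, i.e. visibility 1

# Independently randomized phases (Z basis): outputs balance on average.
tot0 = tot1 = 0.0
for _ in range(100_000):
    d0, d1 = output_intensities(random.uniform(0, 2 * math.pi),
                                random.uniform(0, 2 * math.pi))
    tot0 += d0
    tot1 += d1
ratio = tot0 / tot1  # close to 1, as in Fig. 2(b)
```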
\textit{Results and Discussion} - We implement the experiment for four different values of the overall system loss between Alice and Bob: 38.0 dB, 46.7 dB, 55.1 dB and 49.4 dB. Optical attenuators alone are applied to simulate the channel loss for the first three values. For the 49.4 dB loss, 5-km fiber spools are inserted between Alice (as well as Bob) and Charlie in addition to the attenuator. As discussed above, the detector efficiency is about 11.7\%, which is equivalent to a 9.3 dB loss that we include in the overall system loss. For different values of the system loss, we choose different intensity sets $\left\lbrace |\alpha|^2,\mu,\nu,\omega\right\rbrace$. The selection of the signal intensity $|\alpha|^2$ is done by optimizing a priori the secret key rate formula for a channel model that approximately simulates the expected behavior of an experimental realization, based on the devices’ parameters. In the asymptotic regime, given that the weakest decoy intensity $\omega$ is sufficiently small, the selection of the other two decoy intensities $\mu$ and $\nu$ (within certain limits) is not so crucial, and their effect on the resulting secret key rate turns out to be small. So, we take the values of $\mu$ and $\nu$ shown in Table~\ref{tab:intensity} as an example and for experimental convenience.
\begin{table*}[t]
\begin{tabular}{cccccc}
\hline
\hline
\multirow{2}{*}{\bf Loss} & \multirow{2}{1.2cm}{\bf Fiber Inserted$^*$ } & \multicolumn{4}{c}{\bf Intensities}
\\
\cline{3-6}& & $|\alpha|^2$ & $\mu$ & $\nu$ & $\omega$ \\
\hline
$38.0$ dB & No & $0.0256\pm0.0001$ & $0.087\pm0.001$ & $0.0088\pm0.0002$ & $(1.0\pm0.2)\times10^{-4}$ \\
$46.7$ dB & No & $0.02495\pm0.00005$ & $0.0978\pm0.0008$ & $0.0099\pm0.0001$ & $(7.5\pm0.2)\times10^{-5}$ \\
$49.4$ dB & Yes & $0.0183\pm0.0001$ & $0.02005\pm0.00002$ & $0.00828\pm0.00007$ & $(9.2\pm1.0)\times10^{-6}$ \\
$55.1$ dB & No & $0.0175\pm0.0002$ & $0.0382\pm0.0004$ & $0.00790\pm0.00007$ & $(6.5\pm1.0)\times10^{-5}$ \\
\hline
\hline
\end{tabular}
\caption{\label{tab:intensity} List of intensity sets for the four different values of the overall system loss: 38.0 dB, 46.7 dB, 49.4 dB and 55.1 dB. $|\alpha|^2$ is the average photon number (per pulse) of the coherent states in the $X$ basis. $\mu$, $\nu$ and $\omega$ are the average photon numbers (per pulse) of the decoy states in the $Z$ basis. The uncertainty of each intensity reflects its measured statistical fluctuation. *: a 5-km fiber spool F$_A$ (F$_B$) is inserted between Alice (Bob) and Charlie. Note that the fiber spool F$_{A-B}$ is fixed inside the loop in all four cases to ensure that Alice and Bob are physically separated by kilometers of fiber.} \end{table*}
\begin{table*}[t]
\begin{tabular}{ccC{1.3cm}C{1.3cm}C{2.5cm}C{2.5cm}C{2.5cm}C{2.5cm}}
\hline
\hline
\multirow{2}{*}{\bf Loss} & \multirow{2}{*}{\bf Fiber Inserted$^*$ } & \multicolumn{2}{c}{\bf QBER} & \multicolumn{3}{c}{\bf Experimental Secret Key Rates} & \multirow{2}{*}{\bf PLOB Bound}
\\
\cline{3-4}\cline{5-7}& & $D_0$ & $D_1$ & $R_{mean}$& $R_{min}$ & $R_{max}$ & \\
\hline
$38.0$ dB & No & $0.0032$ & $0.0036$ & $2.6484\times10^{-4}$ & $1.9917\times10^{-4}$ & $3.4765\times10^{-4}$ & $2.2867\times10^{-4}$ \\
$46.7$ dB & No & $0.0058$ & $0.0032$ & $7.8389\times10^{-5}$ & $6.9058\times10^{-5}$ & $8.8458\times10^{-5}$ & $3.0845\times10^{-5}$ \\
$49.4$ dB & Yes & $0.0059$ & $0.0056$ & $3.6306\times10^{-5}$ & $2.4061\times10^{-5}$ & $5.4130\times10^{-5}$ & $1.6564\times10^{-5}$ \\
$55.1$ dB & No & $0.0116$ & $0.0108$ & $1.7542\times10^{-5}$ & $1.0516\times10^{-5}$ & $2.5652\times10^{-5}$ & $4.4584\times10^{-6}$\\
\hline
\hline
\end{tabular}
\caption{\label{tab:rate} List of experimental results for the four different values of the overall system loss considered: 38.0 dB, 46.7 dB, 49.4 dB and 55.1 dB. QBER is the experimental quantum bit error rate observed at detectors $D_0$ and $D_1$ when Alice and Bob choose the $X$ basis. The experimental secret key rate includes three cases, i.e. the case where intensity fluctuations are disregarded and the worst- and best-case scenarios where intensity fluctuations are taken into account. These three cases are indicated with the notation R$_{mean}$, R$_{min}$ and R$_{max}$, respectively. Also, for comparison purposes, this table includes the PLOB bound~\cite{plob} corresponding to each system loss. *: a 5-km fiber spool F$_A$ (F$_B$) is inserted between Alice (Bob) and Charlie. Note that the fiber spool F$_{A-B}$ is fixed inside the loop in all four cases to ensure that Alice and Bob are physically separated by kilometers of fiber.} \end{table*}
\begin{figure}
\caption{Secret key rate (per pulse) in logarithmic scale as a function of the overall loss between Alice and Bob. The results shown in Table~\ref{tab:rate} are illustrated as black crosses. The vertical line of each cross shows the difference between the worst- and best-case scenarios when intensity fluctuations are taken into account, {\it i.e.} the difference between R$_{min}$ and R$_{max}$. The horizontal line of each cross shows the uncertainty of the overall loss, which is $\pm 0.1$ dB for the cases of 38.0 dB, 46.7 dB and 49.4 dB system loss, and $\pm 0.2$ dB for the case of 55.1 dB system loss. The point where these two lines cross coincides with the secret key rate if intensity fluctuations are disregarded, {\it i.e.} with R$_{mean}$ in Table~\ref{tab:rate}. The solid red line illustrates the PLOB bound introduced in~\cite{plob}. The solid green line corresponds to a theoretical simulation realized with the channel model introduced in~\cite{tf-qkd_marcos}. This channel model, for simplicity, assumes the same experimental parameters over all distances and does not optimize the values of the different decoy intensities, which are fixed a priori. Still, the experimental parameters selected are similar to those of the experimental implementation (though these change for each point), and thus the resulting secret key rate is reasonably similar. Most importantly, our results clearly demonstrate that the experiments performed beat the PLOB bound.
}
\label{Fig_rates}
\end{figure}
For each value of the system loss and each intensity pair, Alice and Bob each send out $3\times10^9$ coherent pulses. The experimental quantum bit error rates (QBER) observed in both detectors $D_0$ and $D_1$ when Alice and Bob choose the $X$ basis are listed in Table~\ref{tab:rate}. Given the high stability of the system and the high interference visibility, the QBERs observed in the experiment are correspondingly low. Even the maximum QBER observed at the highest system loss is lower than 1.2\%. As the overall system loss decreases, the impact of the dark counts of the single-photon detectors diminishes. This is reflected in the lower QBERs obtained for lower system loss. More experimental data, such as the experimentally observed gains, can be found in the supplementary material~\cite{supplementary}. To extract a secure secret key, we use the security analysis and secret key rate formula reported in~\cite{tf-qkd_marcos} (see supplementary material~\cite{supplementary}). The secret key rates for different system losses, as well as the corresponding PLOB bound, are also listed in Table~\ref{tab:rate}. For each value of the system loss, Table~\ref{tab:rate} includes three cases, i.e. the case where intensity fluctuations are disregarded and the worst- and best-case scenarios where intensity fluctuations are taken into account. For the worst- and best-case scenarios, we numerically minimize and maximize the secret key rate formula among all possible values for the different intensities (within the reported experimental intervals which include the intensity fluctuations). These three cases are indicated in the table with the notation R$_{mean}$, R$_{min}$ and R$_{max}$, respectively. These results are also illustrated in Fig.~\ref{Fig_rates}, which shows the secure key rate (bits per pulse) in logarithmic scale as a function of the overall system loss. The solid red line illustrates the PLOB bound introduced in~\cite{plob}. 
The solid green line corresponds to a theoretical simulation realized with the channel model introduced in~\cite{tf-qkd_marcos}. This channel model, for simplicity, assumes the same experimental parameters (similar to those of the experimental implementation) over all distances and does not optimize the values of the different decoy intensities, which are fixed a priori. The experimental secret key rates are illustrated as black crosses. The vertical line of each cross shows the difference between the worst- and best-case scenarios when intensity fluctuations are taken into account, i.e. the difference between R$_{min}$ and R$_{max}$. The horizontal line of each cross shows the uncertainty of the overall loss. The point where these two lines cross coincides with the secret key rate if intensity fluctuations are disregarded, i.e. with R$_{mean}$ in Table~\ref{tab:rate}. As depicted in Fig.~\ref{Fig_rates}, the experimental secret key rates are reasonably close to the theoretical simulation results, except that the key rate at the system loss of 49.4 dB is slightly lower than the simulation result. This is because of the 5-km fiber spools F$_A$ (F$_B$) added between Alice (Bob) and Charlie in this case, which reduce the performance of the system. Nonetheless, the experimental results, as expected, follow the rate-loss dependence of TF-QKD, scaling with the square root of the channel transmittance. When the overall system loss between Alice and Bob is around 38 dB, the secret key rate sits around the PLOB bound. However, as the overall system loss increases, the experimental key rate evidently surpasses the PLOB bound even when the minimum key rate in the worst-case scenario is considered. This achievement experimentally proves that TF-QKD can beat the fundamental bounds on the private capacity of point-to-point QKD. 
In particular, the key rate observed above the PLOB bound at a loss of 49.4 dB with real fiber spools (F$_A$ and F$_B$) demonstrates the feasibility of practical applications of TF-QKD.
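For reference, the PLOB bound values quoted in Table~\ref{tab:rate} follow from the closed form $-\log_2(1-\eta)$ for a channel of transmittance $\eta$~\cite{plob}, which at high loss decays linearly in $\eta$, while TF-QKD scales as $O(\sqrt{\eta})$; this is why the gap widens with loss. The short, hedged Python sketch below (an illustration, not our analysis code) evaluates the bound at the measured losses and compares it against the R$_{mean}$ values of Table~\ref{tab:rate}.

```python
import math

def plob_bound(eta):
    """Secret-key capacity bound of a lossy channel: -log2(1 - eta)."""
    return -math.log2(1 - eta)

def db_to_transmittance(loss_db):
    return 10.0 ** (-loss_db / 10.0)

# R_mean values from Table II versus the PLOB bound at the same loss.
measured = {38.0: 2.6484e-4, 46.7: 7.8389e-5,
            49.4: 3.6306e-5, 55.1: 1.7542e-5}
for loss_db, rate in sorted(measured.items()):
    eta = db_to_transmittance(loss_db)
    print(f"{loss_db:5.1f} dB: R_mean = {rate:.4e}, "
          f"PLOB = {plob_bound(eta):.4e}, "
          f"beats bound: {rate > plob_bound(eta)}")
```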
\textit{Conclusion} - In summary, we have implemented the first twin-field quantum key distribution experiment over 10 km of real optical fibers without active phase stabilization or phase post-selection. The secure key rate of the system scales as the square-root of the overall channel transmittance. In particular, we have observed that the resulting secret key rate at the high loss region clearly beats the PLOB bound even when intensity fluctuations are taken into account. Our work shows the feasibility of overcoming the private capacity of a point-to-point QKD link with current technology. Longer fibers could be added into our system in the future to extend the range of TF-QKD.
\textit{Acknowledgments} - We thank Olinka Bedroya and Shihan Sajeed for their assistance and enlightening discussions. We also gratefully acknowledge financial support from NSERC, CFI, ORF, US Office of Naval Research, MITACS, Royal Bank of Canada and Huawei Technologies Canada, Ltd. M.C. acknowledges support from the Spanish Ministry of Economy and Competitiveness (MINECO), the Fondo Europeo de Desarrollo Regional (FEDER) through grants TEC2014-54898-R and TEC2017-88243-R, and the European Union’s Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 675662 (project QCALL).
\end{document}
\begin{document}
\title{Compressed Indexing with Signature Grammars}
\begin{abstract}
The \textit{compressed indexing problem} is to preprocess a string
$S$ of length $n$ into a compressed representation that supports
pattern matching queries. That is, given a string $P$ of length
$m$ report all occurrences of $P$ in $S$.
We present a data structure that supports pattern matching queries in
$O(m + \occ (\lg\lg n + \lg^\epsilon z))$ time using $O(z \lg(n / z))$
space where $z$ is the size of the LZ77 parse of $S$ and $\epsilon > 0$ is an arbitrarily small constant,
when the alphabet is small
or $z = O(n^{1 - \delta})$ for any constant $\delta > 0$.
We also present two data structures for the general case;
one where the space is increased by $O(z\lg\lg z)$, and
one where the query time changes from worst-case to expected.
These results improve the previously best known solutions.
Notably, this is the first data structure that decides if $P$ occurs in $S$
in $O(m)$ time using $O(z\lg(n/z))$ space.
Our results are mainly obtained by a novel combination of
a randomized grammar construction algorithm
with well-known techniques relating pattern matching to 2D range reporting. \end{abstract}
\section{Introduction}
Given a string $S$ and a pattern $P$, the core problem of pattern matching is to report all locations where $P$ occurs in $S$. Pattern matching problems can be divided into two categories: the algorithmic problem, where the text and the pattern are given at the same time, and the data structure problem, where one is allowed to preprocess the text (pattern) before a query pattern (text) is given. Many problems in both categories are well studied in the history of stringology, and optimal solutions to many variants have been found.
In the last decades, researchers have shown an increasing interest in the compressed version of this problem, where the space used by the index is related to the size of some compressed representation of $S$ instead of the length of $S$. This could be measures such as the size of the LZ77 parse of $S$, the size of the smallest grammar representing $S$, the number of runs in the BWT of $S$, etc.; see e.g. \cite{gagie2017optimal,bille2017time,gagie2014lz77,gagie2012faster,Nishimoto2015,navarro2007compressed,Karkkainen1996}. This problem is highly relevant as the amount of highly repetitive data, from sources such as DNA sequencing and version control repositories, increases rapidly, and compression makes it possible to handle greater amounts of data.
In this paper we consider what we call the \textit{compressed indexing problem}, which is to preprocess a string $S$ of length $n$ into a compressed representation that supports fast \textit{pattern matching queries}. That is, given a string $P$ of length $m$, report all $\occ$ occurrences of substrings in $S$ that match $P$.
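As a point of reference for the measure $z$, the hedged Python sketch below computes a (self-referential) LZ77 parse in the naive quadratic way: each phrase is the longest earlier-occurring prefix of the remaining suffix, plus one following letter. Efficient constructions use suffix arrays or similar machinery instead; this sketch is for intuition only.

```python
def lz77_parse(s):
    """Naive self-referential LZ77 factorization of s.
    Each phrase is (source, length, next_letter): the longest match
    starting before position i, possibly overlapping it, plus a letter."""
    phrases, i, n = [], 0, len(s)
    while i < n:
        best_len, best_src = 0, -1
        for j in range(i):
            l = 0
            # Allow overlap (j + l may run past i); keep one fresh letter.
            while i + l < n - 1 and s[j + l] == s[i + l]:
                l += 1
            if l > best_len:
                best_len, best_src = l, j
        phrases.append((best_src, best_len, s[i + best_len]))
        i += best_len + 1
    return phrases

def lz77_decode(phrases):
    out = []
    for src, length, nxt in phrases:
        for k in range(length):  # sequential copy handles self-overlap
            out.append(out[src + k])
        out.append(nxt)
    return "".join(out)

z = len(lz77_parse("ababababab"))  # a repetitive text has few phrases
```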
Table \ref{table} gives an overview of the results on this problem.
\begin{table}[] \centering \caption{Selection of previous results and our new results on compressed indexing. The variables are the text size $n$, the LZ77-parse size $z$, the pattern length $m$, $\occ$ is the number of occurrences and $\sigma$ is the size of the alphabet. (The time complexity marked by \dag\ is expected whereas all others are worst-case)} \label{table} \begin{tabular}{lllll}
\multicolumn{1}{l|}{Index} & \multicolumn{1}{l|}{Space} & \multicolumn{1}{l|}{Locate time} & $\sigma$ & \\ \cline{1-4}
\multicolumn{1}{l|}{Gagie et al. \cite{gagie2014lz77}} & \multicolumn{1}{l|}{$O(z\lg(n/z))$} & \multicolumn{1}{l|}{$O(m\lg m + \occ \lg \lg n)$} & $O(1)$ & \\
\multicolumn{1}{l|}{Nishimoto et al. \cite{Nishimoto2015}} & \multicolumn{1}{l|}{$O(z\lg n\lg^* n)$} & \multicolumn{1}{p{100px}|}{$O(m \lg \lg n \lg \lg z +\lg z \lg m \lg n ( \lg^* n)^2 + \occ \lg n)$} & $n^{O(1)}$ & \\
\multicolumn{1}{l|}{Bille et al. \cite{bille2017time}} & \multicolumn{1}{l|}{$O(z(\lg(n/z) + \lg^\epsilon z))$} & \multicolumn{1}{l|}{$O(m + \occ(\lg^\epsilon n + \lg \lg n))$} & $n^{O(1)}$ & \\
\multicolumn{1}{l|}{Bille et al. \cite{bille2017time}} & \multicolumn{1}{l|}{$O(z\lg(n/z)\lg \lg z)$} & \multicolumn{1}{l|}{$O(m + \occ \lg \lg n)$} & $O(1)$ & \\
\multicolumn{1}{l|}{Bille et al. \cite{bille2017time}} & \multicolumn{1}{l|}{$O(z\lg(n/z))$} & \multicolumn{1}{p{100px}|}{$O(m(1 + \frac{\lg^\epsilon z}{\lg(n/z)}) + \occ(\lg^\epsilon n + \lg \lg n))$} & $O(1)$ & \\
\multicolumn{1}{l|}{\textbf{Theorem 1}} & \multicolumn{1}{l|}{\textbf{$O(z\lg(n/z))$}} & \multicolumn{1}{l|}{\textbf{$O(m + \occ (\lg^\epsilon z + \lg \lg n))$}} & \textbf{$O(1)$} & \\
\multicolumn{1}{l|}{\textbf{Theorem 2 (1)}} & \multicolumn{1}{l|}{\textbf{$O(z(\lg(n/z) + \lg \lg z))$}} & \multicolumn{1}{l|}{\textbf{$O(m + \occ (\lg^\epsilon z + \lg \lg n))$}} & \textbf{$n^{O(1)}$} & \\
\multicolumn{1}{l|}{\textbf{Theorem 2 (2)}} & \multicolumn{1}{l|}{\textbf{$O(z\lg(n/z))$}} & \multicolumn{1}{l|}{\textbf{$O(m + \occ (\lg^\epsilon z + \lg \lg n))^\dag$}} & \textbf{$n^{O(1)}$} & \\
\end{tabular} \end{table}
\subsection{Our Results}
In this paper we improve previous solutions that are bounded by the size of the LZ77-parse. For constant-sized alphabets we obtain the following result:
\begin{theorem}\label{thm:main}
Given a string $S$ of length $n$ from a constant-sized alphabet with an LZ77
parse of length $z$, we can build a compressed-index supporting pattern matching queries in
$O(m + \occ( \lg \lg n + \lg^{\epsilon} z))$ time using $O(z \lg(n/z))$ space. \end{theorem}
\noindent In particular, we are the first to obtain optimal search time using only $O(z\lg(n/z))$ space. For general alphabets we obtain the following:
\begin{theorem}\label{thm:general}
Given a string $S$ of length $n$ from an integer alphabet polynomially bounded by $n$
with an LZ77-parse of length $z$, we can build a compressed-index supporting pattern matching queries in:
\begin{enumerate}
\item[(1)] $O(m + \occ( \lg \lg n + \lg^{\epsilon} z))$ time using $O(z (\lg(n/z) + \lg\lg z))$ space.
\item[(2)] $O(m + \occ( \lg \lg n + \lg^{\epsilon} z))$ expected time using $O(z \lg(n/z))$ space.
\item[(3)] $O(m + \lg^\epsilon z + \occ( \lg \lg n + \lg^{\epsilon} z))$ time using $O(z \lg(n/z))$ space.
\end{enumerate} \end{theorem} Note $\lg\lg z = O(\lg(n/z))$ when either the alphabet size is $O(2^{\lg^{\epsilon} n})$ or $z = o(\frac{n}{\lg^{\epsilon'} n})$, where $\epsilon$ and $\epsilon'$ are arbitrarily small positive constants.
Theorem~\ref{thm:main} follows directly from Theorem~\ref{thm:general} (1) given these observations. Theorem~\ref{thm:general} is a consequence of Lemma~\ref{lem:longstrings},~\ref{lem:shortstrings},~\ref{lem:semishort}~and~\ref{lem:expectedsolution}.
\subsection{Technical Overview}
Our main new contribution is based on a new grammar construction. In \cite{mehlhorn1997maintaining} Mehlhorn et al. presented a way to maintain dynamic sequences subject to equality testing using a technique called signatures. They presented two signature construction techniques. One is randomized and leads to complexities that hold in expectation. The other is based on a deterministic coin-tossing technique of Cole and Vishkin \cite{COLE198632} and leads to worst-case running times, but incurs an iterated-logarithmic overhead compared to the randomized solution. This technique also resembles the string labeling techniques found, e.g., in \cite{548491}. To the best of our knowledge, we are the first to consider grammar compression based on the randomized solution from \cite{mehlhorn1997maintaining}. Despite its being randomized, we show how to obtain worst-case query bounds for text indexing using this technique.
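To convey the flavour of such a construction, here is a hedged and heavily simplified Python sketch of a single level of randomized signature parsing (runs are assumed to be collapsed beforehand, and all details of \cite{mehlhorn1997maintaining} are omitted): every distinct symbol draws one random priority, the sequence is cut in front of each local-minimum priority, and each block is replaced by a nonterminal shared among all equal blocks. Since a cut depends only on a constant-size neighbourhood, similar strings are parsed almost identically.

```python
import random

random.seed(42)
_PRIO = {}

def prio(sym):
    """Each distinct symbol gets one random priority, drawn once."""
    if sym not in _PRIO:
        _PRIO[sym] = random.random()
    return _PRIO[sym]

def parse_level(seq, rules):
    """One parsing level: cut before every local-minimum priority,
    then name each block; equal blocks share a nonterminal."""
    cuts = [0]
    for i in range(1, len(seq)):
        left = prio(seq[i]) < prio(seq[i - 1])
        right = i + 1 == len(seq) or prio(seq[i]) <= prio(seq[i + 1])
        if left and right:  # local minimum: start a new block here
            cuts.append(i)
    cuts.append(len(seq))
    out = []
    for a, b in zip(cuts, cuts[1:]):
        block = tuple(seq[a:b])
        out.append(rules.setdefault(block, f"N{len(rules)}"))
    return out

rules = {}
p1 = parse_level(list("abcdefabcdef"), rules)
# Cuts cannot be adjacent, so each level shrinks the sequence, and the
# two occurrences of "abcdef" are cut identically except near the ends.
```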
The main idea in this grammar construction is that similar substrings are parsed almost identically. This property also holds for the deterministic construction technique, which has been used to solve dynamic string problems with and without compression; see e.g. \cite{Nishimoto2015,alstrup2000pattern}. In \cite{jez2015faster} Je{\.z} devises a different grammar construction algorithm with similar properties to solve the algorithmic pattern matching problem on grammar-compressed strings, which has later been used for both static and dynamic string problems; see \cite{tomohiro2016longest,gawrychowski2015optimal}.
Our primary solution has an $\lg^\epsilon z$ term in the query time, which is problematic for short query patterns. To handle this, we present different solutions for short query patterns, based on techniques from LZ77-based indexing combined with extra data structures to speed up the queries.
\section{Preliminaries}
We assume a standard unit-cost RAM model with word size $\Theta(\lg n)$ and that the input is from an integer alphabet
$\Sigma = \{1,2,\ldots, n^{O(1)}\}$. We measure space complexity in terms of machine words unless explicitly stated otherwise. A string $S$ of length $n = |S|$ is a sequence of $n$ symbols $S[1]\ldots S[n]$ drawn from an alphabet $\Sigma$. The sequence $S[i,j]$ is the \textit{substring} of $S$ given by $S[i]\ldots S[j]$ and strings can be concatenated, i.e. $S = S[1, k]S[k+1, n]$. The empty string is denoted $\epsilon$ and $S[i,i] = S[i]$, while $S[i,j] = \epsilon$ if $j < i$, $S[i,j] = S[1,j]$ if $i < 1$ and $S[i,j] = S[i,n]$ if $j > n$. The reverse of $S$, denoted $rev(S)$, is the string $S[n]S[n-1]\ldots S[1]$. A \textit{run} in a string $S$ is a substring $S[i,j]$ with identical letters, i.e. $S[k] = S[k+1]$ for $k = i,\ldots, j-1$. A run $S[i,j]$ is a \textit{maximal run} if it cannot be extended, i.e. $S[i-1] \neq S[i]$ and $S[j] \neq S[j+1]$. If there are no runs of length at least two in $S$ we say that $S$ is \textit{run-free}, and it follows that $S[i] \neq S[i+1]$ for $1 \leq i < n$. Denote by $[u]$ the set of integers $\{1,2, \ldots, u\}$.
Let $X \subseteq [u]^2$ be a set of points in a 2-dimensional grid. The \textit{2D-orthogonal range reporting problem} is to compactly represent $X$ while supporting \textit{range reporting queries}, that is, given a rectangle $R = [a_1, b_1] \times [a_2, b_2]$ report all points in the set $R \cap X$. We use the following: \begin{lemma}[Chan et al. \cite{Larsen11}]
For any set of $n$ points in $[u] \times [u]$ and constant $\epsilon > 0$,
we can solve 2D-orthogonal range reporting with $O(n\lg n)$ expected preprocessing
time using:
\begin{enumerate}
\item[i] $O(n)$ space and $(1 + k) \cdot O(\lg^\epsilon n \lg \lg u)$ query time
\item[ii] $O(n\lg \lg n)$ space and $(1 + k) \cdot O(\lg \lg u)$ query time
\end{enumerate}
where $k$ is the number of points inside the rectangle.
\label{lem:range} \end{lemma}
A \textit{Karp-Rabin fingerprinting function} \cite{Karp:1987} is a randomized hash function for strings. Given a string $S$ of length $n$ and a fingerprinting function $\phi$, we can in $O(n)$ time and space compute and store $O(n)$ fingerprints such that the fingerprint of any substring of $S$ can be computed in constant time. Identical strings have identical fingerprints. The fingerprints of two strings $S$ and $S'$ \textit{collide} when $S \neq S'$ and $\phi(S) = \phi(S')$. A fingerprinting function is \textit{collision-free} for a set of strings when there are no collisions between the fingerprints of any two strings in the set. We can find a collision-free fingerprinting function for a set of strings with total length $n$ in $O(n)$ expected time \cite{Porat}.
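The constant-time substring fingerprints described above can be sketched as follows (a minimal illustration with our own choice of modulus and base, not the construction of \cite{Porat}; ranges are 1-indexed and inclusive as in our notation):

```python
import random

MOD = (1 << 61) - 1                  # a large Mersenne prime (our choice)
BASE = random.randrange(2, MOD)      # the random seed of the hash function

def preprocess(S):
    """O(n): store prefix fingerprints h[i] = phi(S[1..i]) and powers of BASE."""
    n = len(S)
    h, pw = [0] * (n + 1), [1] * (n + 1)
    for i, c in enumerate(S, 1):
        h[i] = (h[i - 1] * BASE + ord(c)) % MOD
        pw[i] = pw[i - 1] * BASE % MOD
    return h, pw

def fingerprint(h, pw, i, j):
    """O(1): phi(S[i..j]) by cancelling the contribution of S[1..i-1]."""
    return (h[j] - h[i - 1] * pw[j - i + 1]) % MOD

h, pw = preprocess("abracadabra")
# identical substrings have identical fingerprints: "abra" at [1,4] and [8,11]
assert fingerprint(h, pw, 1, 4) == fingerprint(h, pw, 8, 11)
```

The key identity is $\phi(S[i,j]) \equiv h[j] - h[i-1]\cdot \mathrm{BASE}^{\,j-i+1} \pmod{p}$, so one multiplication and one subtraction suffice per query.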
Let $D$ be a lexicographically sorted set of $k$ strings. The weak prefix search problem is to compactly represent $D$ while supporting \textit{weak prefix queries}, that is, given a query string $P$ of length $m$ report the rank of the lexicographically smallest and largest strings in $D$ of which $P$ is a prefix. If no such strings exist, the answer can be arbitrary.
\begin{lemma}[Belazzougui et al.~\cite{BelazzouguiBPV10}, appendix H.3]
Given a set $D$ of $k$ strings with average length $l$, from an alphabet of size $\sigma$,
we can build a data structure using $O(k(\lg l + \lg\lg \sigma))$ bits of space
supporting weak prefix search for a pattern $P$ of length $m$ in $O(m\lg \sigma/w + \lg m)$ time
where $w$ is the word size. \label{lem:fat1} \end{lemma}
We will refer to the data structure of Lemma~\ref{lem:fat1} as a \textit{z-fast trie} following the notation from \cite{BelazzouguiBPV10}. The $m$ term in the time complexity is due to a linear time preprocessing of the pattern and is not part of the actual search. Therefore it is simple to do weak prefix search for any length $l$ substring of $P$ in $O(\lg l)$ time after preprocessing $P$ once in $O(m)$ time.
The \textit{LZ77-parse} \cite{Ziv1977} of a string $S$ of length $n$ is a string $\mathcal{Z}$ of the form $(s_1, l_1, \alpha_1) \ldots (s_{z}, l_{z}, \alpha_{z}) \in ([n], [n], \Sigma)^z$. We define $u_1 = 1$, $u_i = u_{i-1} + l_{i-1} + 1$ for $i > 1$. For $\mathcal{Z}$ to be a valid parse, we require $l_1 = 0$, $s_i < u_i$, $S[u_i, u_i + l_i - 1] = S[s_i, s_i + l_i - 1]$, and $S[u_i + l_i] = \alpha_i$ for $i \in [z]$. This guarantees $\mathcal{Z}$ \textit{represents} $S$ and $S$ is uniquely defined in terms of $\mathcal{Z}$. The substring $S[u_i, u_i + l_i]$ is called the $i^{th}$ phrase of the parse and $S[s_i, s_i + l_i - 1]$ is its source. A minimal LZ77-parse of $S$ can be found greedily in $O(n)$ time and stored in $O(z)$ space \cite{Ziv1977}. We call the positions $u_1 + l_1, \ldots, u_{z} + l_z$ the borders of $S$.
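The greedy parse just defined can be sketched as follows (a quadratic-time illustration with our own function names; the $O(n)$ construction cited above is more involved):

```python
def lz77_parse(S):
    """Greedy LZ77 parse; quadratic time, kept simple for illustration."""
    Z, u = [], 0                       # u is 0-indexed here
    while u < len(S):
        s_best, l_best = 0, 0
        for s in range(u):             # source must start strictly before u
            l = 0                      # sources may overlap the phrase itself
            while u + l + 1 < len(S) and S[s + l] == S[u + l]:
                l += 1
            if l > l_best:
                s_best, l_best = s, l
        Z.append((s_best + 1, l_best, S[u + l_best]))   # (s_i, l_i, alpha_i)
        u += l_best + 1
    return Z

def lz77_decode(Z):
    """Rebuild S from the triples, copying sources character by character."""
    S = []
    for s, l, a in Z:
        for k in range(l):
            S.append(S[s - 1 + k])     # s is 1-indexed in the triples
        S.append(a)
    return "".join(S)

assert lz77_decode(lz77_parse("abababab")) == "abababab"
```

Copying the source character by character is what makes self-referential phrases (a source overlapping its own phrase) decode correctly.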
\section{Signature Grammars}
We consider a hierarchical representation of strings given by Mehlhorn~et~al. \cite{mehlhorn1997maintaining} with some slight modifications. Let $S$ be a run-free string of length $n$ from an integer alphabet $\Sigma$ and let $\pi$ be a uniformly random permutation of $\Sigma$. Define a position $S[i]$ as a local minimum of $S$ if $1 < i < n$ and $\pi(S[i]) < \pi(S[i-1])$ and $\pi(S[i]) < \pi(S[i+1])$. In the block decomposition of $S$, a block starts at position $1$ and at every local minimum in $S$ and ends just before the next block begins (the last block ends at position $n$). The block decomposition of a string $S$ can be used to construct the signature tree of $S$ denoted $sig(S)$ which is an ordered labeled tree with several useful properties.
\begin{lemma}
Let $S$ be a run-free string of length $n$ from an alphabet $\Sigma$ and
let $\pi$ be a uniformly random permutation of $\Sigma$ such that $\pi(c)$
is the rank of the symbol $c \in \Sigma$ in this permutation. Then the expected length
between two local minima in the sequence $\pi(S[1]), \pi(S[2]), \ldots,
\pi(S[n])$ is at most 3 and the longest gap is $O(\lg n)$ in expectation.
\label{lem:avg-length} \end{lemma}
\begin{proof}
First we show the expected length between two local minima is at most 3. Consider a position $1 \leq i \leq n$ in the sequence $\pi(S[1]), \pi(S[2]), \ldots, \pi(S[n])$. To determine whether $\pi(S[i])$ is a local minimum, we only need to consider the two neighbouring elements $\pi(S[i - 1])$ and $\pi(S[i + 1])$, so consider the triple $(\pi(S[i - 1]), \pi(S[i]), \pi(S[i + 1]))$. First assume $S[i - 1] \neq S[i] \neq S[i+1]$. There are $3! = 6$ orderings of a triple with unique elements and in two of these the minimum element is in the middle. Since $\pi$ is a uniformly random permutation of $\Sigma$, all six orderings are equally likely, and thus position $i$ is a local minimum with probability $1/3$. Now instead assume $S[i - 1] = S[i+1] \neq S[i]$, in which case the middle element is the smallest with probability $1/2$. Finally, when $i = 1$ or $i = n$ the probability is also $1/2$. As $S$ is run-free, these cases are exhaustive. Thus every position $i$ is a local minimum with probability at least $1/3$, independently of $S$, so the expected number of local minima in the sequence is at least $n / 3$ and the expected distance between any two consecutive local minima is at most $3$.
The expected longest distance between two local minima of $O(\lg n)$ was shown in \cite{mehlhorn1997maintaining}. \end{proof}
\subsection{Signature Grammar Construction} We now give the construction algorithm for the signature tree $sig(S)$. Consider an ordered forest $F$ of trees. Initially, $F$ consists of $n$ trees where the $i^{th}$ tree is a single node with label $S[i]$. Let the label of a tree $t$ denoted $l(t)$ be the label of its root node. Let $l(F)$ denote the string that is given by the in-order concatenation of the labels of the trees in $F$. The construction of $sig(S)$ proceeds as follows:
\begin{enumerate}
\item
Let $t_i, \ldots, t_j$ be a maximal subrange of consecutive trees of $F$ with
identical labels, i.e. $l(t_i) = \ldots = l(t_j)$.
Replace each such subrange in $F$ by a new tree having as root a new node $v$ with children $t_i, \ldots, t_j$ and
a label that identifies the number of children and their label. We call this kind of node a run node. Now $l(F)$ is run-free.
\item Consider the block decomposition of $l(F)$.
Let $t_i, \ldots, t_j$ be consecutive trees in $F$ such that their labels form a block in $l(F)$.
Replace all identical blocks $t_i, \ldots, t_j$ by a new tree having as root a new node with children $t_i, \ldots, t_j$ and a unique label. We call this kind of node a run-free node.
\item Repeat step $1$ and $2$ until $F$ contains a single tree, we call this tree $sig(S)$. \end{enumerate}
In each iteration the size of $F$ decreases by at least a factor of two and each iteration takes $O(|F|)$ time, thus it can be constructed in $O(n)$ time.
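One round of steps 1 and 2 can be sketched as follows (our own simplified illustration; labels produced by run-compression are shown as (symbol, run-length) pairs rather than fresh signatures):

```python
import random

def run_compress(level):
    """Step 1: merge maximal ranges of identical labels into run nodes."""
    out, i = [], 0
    while i < len(level):
        j = i
        while j < len(level) and level[j] == level[i]:
            j += 1
        out.append((level[i], j - i))       # run node: (label, run length)
        i = j
    return out

def block_decompose(labels, pi):
    """Step 2: a block starts at position 1 and at every local minimum."""
    blocks, start = [], 0
    for i in range(1, len(labels) - 1):
        if pi[labels[i]] < pi[labels[i - 1]] and pi[labels[i]] < pi[labels[i + 1]]:
            blocks.append(labels[start:i])
            start = i
    blocks.append(labels[start:])
    return blocks

level = run_compress(list("mississippi"))   # e.g. ('s', 2) for "ss"
labels = [l for l, _ in level]              # run-free label sequence
sigma = sorted(set(labels))
pi = {c: r for r, c in enumerate(random.sample(sigma, len(sigma)))}
blocks = block_decompose(labels, pi)
assert [x for b in blocks for x in b] == labels   # blocks partition the level
```

Repeating these two functions on the resulting block labels, assigning a fresh label to each distinct block, yields $sig(S)$.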
Consider the directed acyclic graph (DAG) of the tree $sig(S)$ where all identical subtrees are merged. Note that we can store run nodes in $O(1)$ space: all outgoing edges point to the same node, so we store the number of edges along with a single edge instead of storing each of them explicitly. For run-free nodes we use space proportional to their out-degrees. We call this DAG the signature DAG of $S$, denoted $dag(S)$. There is a one-to-one correspondence between this DAG and an acyclic run-length grammar producing $S$ where each node corresponds to a production and each leaf to a terminal.
\subsection{Properties of the Signature Grammar}
\noindent We now show some properties of $sig(S)$ and $dag(S)$ that we will need later. Let $str(v)$ denote the substring of $S$ given by the labels of the leaves of the subtree of $sig(S)$ induced by the node $v$ in left to right order.
\begin{lemma}\label{siggrammar}
Let $v$ be a node in the signature tree for a string $S$ of length $n$.
If $v$ has height $h$ then $|str(v)|$ is at least $2^h$ and thus $sig(S)$ (and $dag(S)$)
has height $O(\lg n)$. \end{lemma}
\begin{proof}
This follows directly from the out-degree of all nodes being at least 2. \end{proof}
Denote by $T(i, j)$ the set of nodes in $sig(S)$ that are ancestors of the $i^{th}$ through $j^{th}$ leaf of $sig(S)$. These nodes form a sequence of adjacent nodes at every level of $sig(S)$ and we call them \textit{relevant nodes} for the substring $S[i,j]$.
\begin{lemma} $T(i, j)$ and $T(i', j')$ have identical nodes except at most the two first and two last nodes on each level whenever $S[i, j] = S[i', j']$. \label{lem:size} \end{lemma}
\begin{proof}
Trivially, the leaves of $T(i, j)$ and $T(i', j')$ are identical if
$S[i, j] = S[i', j']$. Now we show it is true for nodes on level $l$
assuming it is true for nodes on level $l - 1$. We only consider the
left part of each level as the argument for the right part is (almost) symmetric. Let $v_1, v_2, v_3, \ldots$
be the nodes on level $l - 1$ in $T(i, j)$ and $u_1, u_2, u_3, \ldots$
the nodes on level $l - 1$ in $T(i', j')$ in left to right order. From the
assumption, we have $v_a, v_{a+1}, \ldots$ are identical with $u_b,
u_{b+1}, \ldots$ for some $1 \leq a, b \leq 3$. When constructing the
$l^{th}$ level of $sig(S)$, these nodes are divided into blocks.
Let $v_{a + k}$ be the first block that starts after $v_a$
then by the block decomposition, the first block after $u_b$ starts at $u_{b+k}$.
The nodes $v_1, \ldots, v_{a + k}$ are spanned by at most two blocks
and similarly for $u_1, \ldots, u_{b + k}$. These blocks become
the first one or two nodes on level $l$ in $T(i, j)$ and $T(i', j')$
respectively. The block starting at $v_{a+k}$
is identical to the block starting at $u_{b+k}$ and the same holds for the following blocks.
These blocks result in identical nodes on level $l$. Thus, if we
ignore the at most two first (and last) nodes on level $l$ the remaining nodes are
identical. \end{proof}
We call nodes of $T(i, j)$ consistent with respect to $T(i, j)$ if they are guaranteed to be in any other $T(i', j')$ where $S[i, j] = S[i', j']$. We call the remaining nodes of $T(i, j)$ inconsistent. From the above lemma, it follows that at most the leftmost two and rightmost two nodes on each level of $T(i, j)$ can be inconsistent.
\begin{lemma}\label{lem:expectedsize}
The expected size of the signature DAG $dag(S)$ is $O(z\lg(n/z))$. \end{lemma}
\begin{proof} We first bound the number of unique nodes in $sig(S)$ in terms of the LZ77-parse of $S$ which has size $z$.
Consider the decomposition of $S$ into the $2z$ substrings $S[u_1, u_1 + l_1], S[u_1 + l_1 + 1], \ldots, S[u_z, u_z + l_z], S[u_z + l_z + 1]$ given by the phrases and borders of the LZ77-parse of $S$ and the corresponding sets of relevant nodes $R = \{T(u_1, u_1 + l_1), T(u_1 + l_1 + 1, u_1 + l_1 + 1), \ldots \}$. Clearly, the union of these sets are all the nodes of $sig(S)$. Since identical nodes are represented only once in $dag(S)$ we need only count one of their occurrences in $sig(S)$. We first count the nodes at levels lower than $\lg(n/z)$. A set $T(i, i)$ of nodes relevant to a substring of length one has no more than $O(\lg(n/z))$ such nodes. By Lemma~\ref{lem:size} only $O(\lg(n/z))$ of the relevant nodes for a phrase are not guaranteed to also appear in the relevant nodes of its source. Thus we count a total of $O(z\lg(n/z))$ nodes for the $O(z)$ sets of relevant nodes. Consider the leftmost appearance of a node appearing one or more times in $sig(S)$. By definition, and because every node of $sig(S)$ is in at least one relevant set, it must already be counted towards one of the sets. Thus there are $O(z\lg(n/z))$ unique vertices in $sig(S)$ at levels lower than $\lg(n/z)$. Now for the remaining at most $\lg(z)$ levels, there are no more than $O(z)$ nodes because the out-degree of every node is at least two. Thus we have proved that there are $O(z \lg(n/z))$ unique nodes in $sig(S)$. By Lemma~\ref{lem:avg-length} the average block size and thus the expected out-degree of a node is $O(1)$. It follows that the expected number of edges and the expected size of $dag(S)$ is $O(z\lg(n/z))$.
\end{proof}
\begin{lemma}
A signature grammar of $S$ using $O(z\lg(n/z))$ (worst case) space can be constructed in $O(n)$ expected time. \end{lemma}
\begin{proof}
Construct a signature grammar for $S$ using the signature grammar construction algorithm. If the average out-degree of the run-free nodes in $dag(S)$ is more than some constant greater than 3 then try again. In expectation it only takes a constant number of retries before this is not the case. \end{proof}
\begin{lemma}
Given a node $v \in dag(S)$, the child that produces the character at position $i$ in $str(v)$ can be found in $O(1)$ time. \end{lemma}
\begin{proof}
First assume $v$ is a run-free node. If we store $|str(u)|$ for each child $u$ of $v$ in order, the correct child corresponding to position $i$ can be found simply by iterating over these. However, this may take $O(\lg n)$ time since this is the maximum out-degree of a node in $dag(S)$. This can be improved to $O(\lg \lg n)$ by a binary search, but instead we use a fusion tree \cite{Fredman1990}, which allows us to find the child in $O(1)$ time since there are at most $O(\lg n)$ elements. This does not increase the space usage.
If $v$ is a run node then it is easy to calculate the right child by a single division. \end{proof}
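The child lookup of the lemma can be sketched as follows (a toy representation with our own field names; the fusion tree over the $O(\lg n)$ child lengths is replaced by a plain scan):

```python
def child_at(node, i):
    """Map position i (1-indexed) in str(v) to (child index, offset in child)."""
    if node["kind"] == "run":                  # all children are identical
        w = node["child_len"]                  # |str(child(v))|
        return (i - 1) // w, (i - 1) % w + 1   # a single division suffices
    # Run-free node: scan the stored child lengths. A fusion tree over the
    # at most O(lg n) lengths makes this O(1), as in the lemma.
    for idx, w in enumerate(node["child_lens"]):
        if i <= w:
            return idx, i
        i -= w
    raise IndexError("position outside str(v)")

run = {"kind": "run", "child_len": 3}          # each child produces 3 symbols
assert child_at(run, 7) == (2, 1)              # position 7 lies in the third child
free = {"kind": "free", "child_lens": [2, 5, 4]}
assert child_at(free, 4) == (1, 2)             # position 4 lies in the second child
```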
\section{Long Patterns}
In this section we present how to use the signature grammar to construct a compressed index that we will use for patterns of length $\Omega(\lg^\epsilon z)$ for constant $\epsilon > 0$. We obtain the following lemma:
\begin{lemma}\label{lem:longstrings}
Given a string $S$ of length $n$ with an LZ77-parse of length $z$
we can build a compressed index supporting pattern matching queries in
$O(m + (1 + \occ)\lg^\epsilon z)$ time using $O(z\lg(n/z))$ space for any constant $\epsilon > 0$. \end{lemma}
\subsection{Data Structure} Consider a vertex $v$ with children $u_1, \ldots, u_k$ in $dag(S)$. Let $pre(v, i)$ denote the prefix of $str(v)$ given by concatenating the strings represented by the first $i$ children of $v$ and let $suf(v, i)$ be the suffix of $str(v)$ given by concatenating the strings represented by the last $k - i$ children of $v$.
The data structure is composed of two z-fast tries (see Lemma~\ref{lem:fat1}) $T_1$ and $T_2$ and a 2D-range reporting data structure $R$.
For every non-leaf node $v \in dag(S)$ we store the following. Let $k$ be the number of children of $v$ if $v$ is a run-free node otherwise let $k = 2$:
\begin{itemize}
\item The reverse of the strings $pre(v, i)$ for $i \in [k - 1]$ in the z-fast trie $T_1$.
\item The strings $suf(v, i)$ for $i \in [k - 1]$ in the z-fast trie $T_2$.
\item The points $(a, b)$ where
$a$ is the rank of the reverse of $pre(v, i)$ in $T_1$ and $b$ is the rank of $suf(v, i)$ in $T_2$ for $i \in [k - 1]$
are stored in $R$. A point stores the vertex $v \in dag(S)$ and the length of $pre(v, i)$ as auxiliary information.
\end{itemize}
There are $O(z\lg(n/z))$ vertices in $dag(S)$, thus $T_1$ and $T_2$ take no more than $O(z\lg(n/z))$ words of space using Lemma~\ref{lem:fat1}.
There are $O(z\lg(n/z))$ points in $R$, which take $O(z\lg(n/z))$ space using Lemma~\ref{lem:range} (i); thus the total space in words is $O(z\lg(n/z))$.
\subsection{Searching}
Assume in the following that there are no fingerprint collisions. Compute all the prefix fingerprints $\phi(P[1]), \phi(P[1, 2]), \ldots, \phi(P[1, m])$ of $P$. Consider the signature tree $sig(P)$ for $P$. Let $l_i^k$ denote the $k^{th}$ leftmost vertex on level $i$ in $sig(P)$ and let $j$ be the last level. Let $P_L = \{ |str(l_1^1)|, |str(l_1^1)| + |str(l_1^2)|, |str(l_2^1)|,
|str(l_2^1)| + |str(l_2^2)|, \ldots, |str(l_j^1)|, |str(l_j^1)| + |str(l_j^2)| \}$. Symmetrically, let $r_i^k$ denote the $k^{th}$ rightmost vertex on level $i$ in $sig(P)$
and let $P_R = \{ m - |str(r_1^1)|, m - |str(r_1^1)| - |str(r_1^2)|, m - |str(r_2^1)|, m - |str(r_2^1)| - |str(r_2^2)|, \ldots, m - |str(r_j^1)|, m - |str(r_j^1)| - |str(r_j^2)| \}$. Let $P_{S} = P_L \cup P_R$.
For $p \in P_S$ search for the reverse of $P[1, p]$ in $T_1$ and for $P[p+1, m]$ in $T_2$ using the precomputed fingerprints. Let $[a,b]$ and $[c,d]$ be the respective ranges returned by the search. Do a range reporting query for the (possibly empty) range $[a,b] \times [c,d]$ in $R$. Each point in the range identifies a node $v$ and a position $i$ such that $P$ occurs at position $i$ in the string $str(v)$.
If $v$ is a run node, there is furthermore an occurrence of $P$ in $str(v)$ at every position $i + k\cdot |str(child(v))|$ for $k = 1, \ldots, j$,
where $j$ is maximal such that $j \cdot |str(child(v))| + m \leq |str(v)|$.
To report the actual occurrences of $P$ in $S$ we traverse all ancestors of $v$ in $dag(S)$; for each occurrence of $P$ in $str(v)$ found, recursively visit each parent $u$ of $v$ and offset the location of the occurrence to match the location in $str(u)$ instead of $str(v)$. When $u$ is the root, report the occurrence. Observe that the time it takes to traverse the ancestors of $v$ is linear in the number of occurrences we find.
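The reporting step can be sketched as follows (our own DAG representation: each node lists its parents together with the offsets at which its string occurs inside the parent's string):

```python
def report(dag, v, pos, out):
    """Collect every position of str(root) at which this occurrence starts."""
    if not dag[v]["parents"]:                  # v is the root: report
        out.append(pos)
        return
    for u, offsets in dag[v]["parents"]:
        for off in offsets:                    # str(v) occurs in str(u) at off
            report(dag, u, off + pos, out)

# Tiny DAG for S = "abab": the root has two children X, with str(X) = "ab".
dag = {
    "root": {"parents": []},
    "X": {"parents": [("root", [0, 2])]},      # "ab" occurs at 0 and 2 in str(root)
}
out = []
report(dag, "X", 0, out)                       # expand an occurrence found in str(X)
assert sorted(out) == [0, 2]
```

Every recursive call corresponds to a distinct occurrence in some ancestor's string, which is why the traversal time is linear in the number of occurrences reported.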
We now describe how to handle fingerprint collisions. Given a z-fast trie, Gagie et al. \cite{gagie2014lz77} show how to perform $k$ weak prefix queries and identify all false positives using $O(k\lg m + m)$ extra time by employing bookmarked extraction and bookmarked fingerprinting. Because we only compute fingerprints and extract prefixes (suffixes) of the strings represented by vertices in $dag(S)$ we do not need bookmarking to do this. We refer the reader to \cite{gagie2014lz77} for the details.
Thus, we modify the search algorithm such that all the searches in $T_1$ and $T_2$ are carried out first, then we verify the results before progressing to doing range reporting queries only for ranges that were not discarded during verification.
\subsection{Correctness}
For any occurrence $S[l, r]$ of $P$ in $S$ there is a node $v$ in $sig(S)$ that stabs $S[l, r]$, i.e. a suffix of $pre(v, i)$ equals a prefix $P[1, j]$ and a prefix of $suf(v, i)$ equals the remaining suffix $P[j + 1, m]$ for some $i$ and $j$. Since we put all combinations of $pre(v, i)$ and $suf(v, i)$ into $T_1$, $T_2$ and $R$, we would be guaranteed to find all nodes $v$ such that $str(v)$ contains $P$ if we searched for all possible split-points $1, \ldots, m-1$ of $P$, i.e. $P[1, i]$ and $P[i + 1, m]$ for $i = 1, \ldots, m-1$.
We now argue that we do not need to search for all possible split-points of $P$ but only need to consider those in the set $P_S$. For a position $i$, we say the node $v$ stabs $i$ if the nearest common ancestor of the $i^{th}$ and $(i+1)^{th}$ leaf of $sig(S)$, denoted $NCA(l_i, l_{i+1})$, is $v$.
Look at any occurrence $S[l, r]$ of $P$. Consider $T_S = T(l, r)$ and $T_P = sig(P)$. Look at a possible split-point $i \in [1, m - 1]$ and the node $v$ that stabs position $i$ in $T_P$. Let $u_l$ and $u_r$ be adjacent children of $v$ such that the rightmost leaf descendant of $u_l$ is the $i^{th}$ leaf and the leftmost leaf descendant of $u_r$ is the $(i+1)^{th}$ leaf.
We now look at two cases for $v$ and argue it is irrelevant to consider position $i$ as split-point for $P$ in these cases:
\begin{enumerate}
\item \textbf{Case $v$ is consistent (with respect to $T_P$)}. In this case it is guaranteed that the node that stabs $l + i$ in $T_S$ is identical to $v$. Since $v$ is a descendant of the root of $T_P$ (as the root of $T_P$ is inconsistent), $str(v)$ cannot contain $P$ and thus it is irrelevant to consider $i$ as a split-point.
\item \textbf{Case $v$ is inconsistent and $u_l$ and $u_r$ are both consistent (with respect to $T_P$)}. In this case $u_l$ and $u_r$ have identical corresponding nodes $u_l'$ and $u_r'$ in $T_S$. Because $u_l$ and $u_r$ are children of the same node it follows that $u_l'$ and $u_r'$ must also both be children of some node $v'$ that stabs $l + i$ in $T_S$ (however $v$ and $v'$ may not be identical since $v$ is inconsistent). Consider the node $u_{ll}'$ to the left of $u_l'$ (or symmetrically for the right side if $v$ is an inconsistent node in the right side of $T_P$). If $str(v')$ contains $P$ then $u_{ll}'$ is also a child of $v'$ (otherwise $u_{l}$ would be inconsistent). So it suffices to check the split-point $i - |str(u_l)|$. Surely $i - |str(u_l)|$ stabs an inconsistent node in $T_P$, so either we consider that position relevant, or the same argument applies again and a split-point further to the left is eventually considered relevant. \end{enumerate}
Thus only split-points where $v$ and at least one of $u_l$ or $u_r$ are inconsistent are relevant. These positions are a subset of the positions in $P_S$, and thus we try all relevant split-points.
\subsection{Complexity}
A query on $T_1$ and $T_2$ takes $O(\lg m)$ time by Lemma~\ref{lem:fat1} while a query on $R$ takes $O(\lg^\epsilon z)$ time using Lemma~\ref{lem:range} (i) (excluding reporting). We do $O(\lg m)$ queries as the size of $P_S$ is $O(\lg m)$. Verification of the $O(\lg m)$ strings we search for takes total time $O(\lg^2 m + m) = O(m)$. Constructing the signature DAG for $P$ takes $O(m)$ time, thus total time without reporting is $O(m + \lg m \lg^\epsilon z) = O(m + \lg^{\epsilon'} z)$ for any $\epsilon' > \epsilon$. This holds because if $m \leq \lg^{2\epsilon} z$ then $\lg m \lg^\epsilon z \leq \lg \lg^{2\epsilon} z \lg^\epsilon z = O(\lg^{\epsilon'} z)$, otherwise $m > \lg^{2\epsilon} z \Leftrightarrow \sqrt{m} > \lg^\epsilon z$ and then $\lg m \lg^\epsilon z = O(\lg m \sqrt{m}) = O(m)$. For every query on $R$ we may find multiple points each corresponding to an occurrence of $P$. It takes $O(\lg^\epsilon z)$ time to report each point thus the total time becomes $O(m + (1 + \occ)\lg^{\epsilon'} z)$.
\section{Short Patterns}
Our solution for short patterns uses properties of the LZ77-parse of $S$. A \textit{primary} substring of $S$ is a substring that contains one or more borders of $S$, all other substrings are called $\textit{secondary}$. A primary substring that matches a query pattern $P$ is a \textit{primary occurrence} of $P$ while a secondary substring that matches $P$ is a \textit{secondary occurrence} of $P$. In a seminal paper on LZ77 based indexing \cite{Karkkainen1996} K{\"{a}}rkk{\"{a}}inen and Ukkonen use some observations by Farach and Thorup \cite{Farach1998} to show how all secondary occurrences of a query pattern $P$ can be found given a list of the primary occurrences of $P$ through a reduction to orthogonal range reporting. Employing the range reporting result given in Lemma~\ref{lem:range} (ii), all secondary occurrences can be reported as stated in the following lemma:
\begin{lemma}[K{\"{a}}rkk{\"{a}}inen and Ukkonen \cite{Karkkainen1996}]\label{lem:lz77secondary}
Given the LZ77-parse of a string $S$, there exists a data structure using $O(z \lg \lg z)$ space that, given the list of primary occurrences of a pattern $P$ in $S$, reports all secondary occurrences of $P$ in $O(\occ \lg \lg n)$ time. \end{lemma}
We now describe a data structure that can report all primary occurrences of a pattern $P$ of length at most $k$ in $O(m + \occ)$ time using $O(zk)$ space.
\begin{lemma}\label{lem:shortstrings}
Given a string $S$ of length $n$ and a positive integer $k \leq n$
we can build a compressed index supporting pattern matching queries for patterns of length $m$
in $O(m + \occ\lg\lg n)$ time using $O(zk + z\lg\lg z)$ space, provided that $m \leq k$. \end{lemma}
\begin{proof} Consider the set $C$ of $z$ substrings of $S$ defined by $S[u_i-k, u_i+k - 1]$ for $i \in [z]$, i.e. the substrings of length $2k$ surrounding the borders of the LZ77-parse. The total length of these strings is $\Theta(zk)$. Construct the generalized suffix tree $T$ over the set of strings $C$. This takes $\Theta(zk)$ words of space. To ensure no occurrence is reported more than once, if multiple suffixes in this generalized suffix tree correspond to substrings of $S$ that start at the same position in $S$, we only include the longest of these. This happens when the distance between two borders is less than $2k$.
To find the primary occurrences of $P$ of length $m$, simply find all occurrences of $P$ in $T$. These occurrences are a superset of the primary occurrences of $P$ in $S$, since $T$ contains all substrings starting or ending at most $k$ positions from a border. It is easy to filter out all occurrences that are not primary, simply by checking whether they cross a border. This takes $O(m + \occ)$ time (where $\occ$ includes secondary occurrences). Combined with Lemma~\ref{lem:lz77secondary} this gives Lemma~\ref{lem:shortstrings}. \end{proof}
\section{Semi-Short Patterns}
In this section, we show how to handle patterns of length between $\lg \lg z$ and $\lg^\epsilon z$. The solution is based on the same reduction to 2D-range reporting as used for long patterns. However, the positions in $S$ that are inserted into the range reporting structure are now based on the LZ77-parse of $S$ instead. Furthermore, we use Lemma~\ref{lem:range} (ii), which gives faster range reporting but uses super-linear space; this is fine because we put fewer points into the structure. We get the following lemma:
\begin{lemma}\label{lem:semishort}
Given a string $S$ of length $n$ we solve the compressed indexing problem
for a pattern $P$ of length $m$ with $\lg\lg z \leq m \leq \lg^\epsilon z$ for any positive constant $\epsilon < \frac{1}{2}$ in $O(m + \occ(\lg\lg n + \lg^\epsilon z))$ time using $O(z(\lg \lg z + \log(n/z)))$ space. \end{lemma}
\subsection{Data Structure}
As in the previous section for short patterns, we only need to worry about primary occurrences of $P$ in $S$. Let $B$ be the set of all substrings of length at most $\lg^\epsilon z$ that cross a border in $S$.
The split positions of such a string are the offsets of the leftmost borders in its occurrences. All primary occurrences of $P$ in $S$ are in this set. The size of this set is $|B| = O(z \lg^{2\epsilon} z)$. The data structure is composed by the following:
\begin{itemize}
\item A dictionary $H$ mapping each string in $B$ to its split positions.
\item A z-fast trie $T_1$ on the reverse of the strings $S[u_i, u_i + l_i]$ for $i \in [z]$.
\item A z-fast trie $T_2$ on the strings $S[u_i, n]$ for $i \in [z]$.
\item A range reporting data structure $R$ with a point $(c,d)$ for every pair
of strings $C_i = S[u_i, u_i + l_i], D_i = S[u_{i+1}, n]$ for $i \in [z]$ where $D_{z} = \epsilon$ and
$c$ is the lexicographical rank of the reverse of $C_i$ in the set $\{C_1, \ldots, C_{z} \}$
and $d$ is the lexicographical rank of $D_i$ in the set $\{D_1, \ldots, D_{z}\}$.
We store the border $u_i + l_i$ along with the point $(c, d)$.
\item The data structure described in Lemma~\ref{lem:lz77secondary} to report secondary occurrences.
\item The signature grammar for $S$. \end{itemize}
Each entry in $H$ requires $\lg \lg^\epsilon z = O(\lg \lg z)$ bits to store since a split position can be at most $\lg^\epsilon z$. Thus the dictionary can be stored in $O(|B| \cdot \lg \lg z) = O(z \lg^{2\epsilon} z \lg \lg z)$ bits, which for $\epsilon < \frac{1}{2}$ is $O(z)$ words. The tries $T_1$ and $T_2$ take $O(z)$ space while $R$ takes $O(z\lg\lg z)$ space. The signature grammar takes $O(z \lg(n/z))$ space. Thus the total space is $O(z(\lg \lg z + \lg(n/z)))$.
\subsection{Searching}
Assume a lookup for $P$ in $H$ does not give false positives. Given a pattern $P$, compute all prefix fingerprints of $P$. Next do a lookup in $H$. If there is no match then $P$ does not occur in $S$. Otherwise, we do the following for each of the split-points $s$ stored in $H$. First split $P$ into a left part $P_l = P[1,s-1]$ and a right part $P_r = P[s, m]$. Then search for the reverse of $P_l$ in $T_1$ and for $P_r$ in $T_2$ using the corresponding fingerprints. The search induces a (possibly empty) range for which we do a range reporting query in $R$. Each point in the range corresponds to a primary occurrence of $P$ in $S$, so report these. Finally use Lemma~\ref{lem:lz77secondary} to report all secondary occurrences.
Unfortunately, we cannot guarantee a lookup for $P$ in $H$ does not give a false positive. Instead, we pause the reporting step when the first possible occurrence of $P$ has been found. At this point, we verify the substring $P$ matches the found occurrence in $S$. We know this occurrence is around an LZ-border in $S$ such that $P_l$ is to the left of the border and $P_r$ is to the right of the border. Thus we can efficiently verify that $P$ actually occurs at this position using the grammar.
\subsection{Analysis}
Computing the prefix fingerprints of $P$ takes $O(m)$ time. First, we analyze the running time in the case where $P$ occurs in $S$. The lookup in $H$ takes $O(1)$ time using perfect hashing. For each split-point we do two z-fast trie lookups in time $O(\lg m) = O(\lg \lg z)$. Since each different split-point corresponds to at least one unique occurrence, this takes at most $O(\occ \lg \lg z)$ time in total. Similarly, each lookup and occurrence in the 2D-range reporting structure takes $O(\lg \lg z)$ time, which is therefore also bounded by $O(\occ \lg \lg z)$ time. Finally, we verify one of the found occurrences against $P$ in $O(m)$ time. So the total time is $O(m + \occ \lg \lg z)$ in this case.
In the case where $P$ does not occur in $S$, either the lookup in $H$ tells us so and we spend $O(1)$ time, or the lookup in $H$ is a false positive. In the latter case, we perform exactly two z-fast trie lookups and one range reporting query, all taking $O(\lg \lg z)$ time. Since $m \geq \lg\lg z$ this is $O(m)$ time. Again, we verify the found occurrence against $P$ in $O(m)$ time. The total time in this case is therefore $O(m)$.
Note that we ensure our fingerprint function is collision free for all substrings in $B$ during the preprocessing; thus, when $m \leq \lg^\epsilon z$, collisions can only occur if $P$ does not occur in $S$.
\section{Randomized Solution}
In this section we present a very simple way to turn the $O(m + (1 + \occ)\lg^\epsilon z)$ worst-case time of Lemma~\ref{lem:longstrings} into $O(m + \occ\lg^\epsilon z)$ expected time. First observe that this is already true if the pattern we search for occurs at least once or if $m \geq \lg^\epsilon z$.
As in the semi-short patterns section, we consider the set $B$ of substrings of $S$ of length at most $\lg^\epsilon z$ that cross a border. Create a dictionary $H$ with $z\lg^{3\epsilon} z$ entries and insert all the strings from $B$. This means only a $\frac{1}{\lg^\epsilon z}$ fraction of the entries are used, and thus if we look up a string $s$ (where $|s| \leq \lg^\epsilon z$) that is not in $H$, there is only a $\frac{1}{\lg^\epsilon z}$ chance of getting a false positive.
Now to answer a query, we first check if $m \leq \lg^\epsilon z$ in which case we look it up in $H$. If it does not exist, report that. If it does exist in $H$ or if $m > \lg^\epsilon z$ use the solution from Lemma~\ref{lem:longstrings} to answer the query.
In the case where $P$ does not exist, we spend either $O(m)$ time if $H$ reports no, or $O(m + \lg^\epsilon z)$ time if $H$ reports a false positive. Since there is only a $\frac{1}{\lg^\epsilon z}$ chance of getting a false positive, the expected time in this case is $O(m)$. In all other cases, the running time is $O(m + \occ\lg^\epsilon z)$ in the worst case, so the total expected running time is $O(m + \occ\lg^\epsilon z)$. The space usage of $H$ is $O(z\lg^{3\epsilon} z)$ bits since we only need to store one bit for each entry. This is $O(z)$ words for $\epsilon \leq 1/3$. To sum up, we get the following lemma:
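The one-bit-per-entry dictionary described above can be sketched as follows; the class and the hashlib-based hash function are illustrative (the lemma only assumes some suitable hash into the table):

```python
import hashlib

class OneBitDictionary:
    """Oversized dictionary H storing one bit per entry: a 'no' answer is
    always correct, while a 'yes' answer may be a false positive."""

    def __init__(self, num_entries):
        self.num_entries = num_entries
        self.bits = bytearray((num_entries + 7) // 8)  # one bit per entry

    def _slot(self, s):
        # Illustrative hash function; any suitable hash works here.
        digest = hashlib.blake2b(s.encode(), digest_size=8).digest()
        return int.from_bytes(digest, 'big') % self.num_entries

    def insert(self, s):
        i = self._slot(s)
        self.bits[i // 8] |= 1 << (i % 8)

    def lookup(self, s):
        i = self._slot(s)
        return bool(self.bits[i // 8] & (1 << (i % 8)))
```

Inserting all strings of $B$ guarantees no false negatives, and with $z\lg^{3\epsilon} z$ entries the table occupies $z\lg^{3\epsilon} z$ bits while only a $\frac{1}{\lg^\epsilon z}$ fraction of the bits is set, matching the stated false-positive probability.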
\begin{lemma}\label{lem:expectedsolution}
Given a signature grammar for a text $S$ of length $n$ with an LZ77-parse of length $z$
we can build a compressed index supporting pattern matching queries in
$O(m + \occ\lg^\epsilon z)$ expected time using $O(z\lg(n/z))$ space for any constant $0 < \epsilon \leq 1/3$. \end{lemma}
\end{document}
\begin{document}
\title{Security of continuous-variable quantum key distribution against canonical attacks\\
\thanks{This work was funded by the European Union’s Horizon 2020 research and innovation program under grant agreement No 820466 (CiViQ: “Continuous Variable Quantum Communications”).} }
\author{\IEEEauthorblockN{Panagiotis Papanastasiou} \IEEEauthorblockA{\textit{Computer Science} \\ \textit{University of York}\\ York, United Kingdom \\ panagiotis.papanastasiou@york.ac.uk} \and \IEEEauthorblockN{Carlo Ottaviani} \IEEEauthorblockA{\textit{Computer Science} \\ \textit{University of York}\\ York, United Kingdom \\ carlo.ottaviani@york.ac.uk} \and \IEEEauthorblockN{ Stefano Pirandola} \IEEEauthorblockA{\textit{Computer Science} \\ \textit{University of York}\\ York, United Kingdom \\ stefano.pirandola@york.ac.uk}}
\maketitle
\begin{abstract} We investigate the performance of Gaussian-modulated coherent-state QKD protocols in the presence of canonical attacks, which are collective Gaussian attacks resulting in Gaussian channels described by one of the possible canonical forms. We present asymptotic key rates and then we extend the results to the finite-size regime using a recently-developed toolbox for composable security. \end{abstract}
\begin{IEEEkeywords} Continuous variables, quantum key distribution, Gaussian modulation, finite-size effects, composable security \end{IEEEkeywords}
\section{Introduction} A quantum key distribution (QKD) protocol describes the communication steps performed by two remote authenticated parties to establish a shared key even though the link between them is potentially compromised~\cite{revQKD}. The information-theoretic security of such a protocol is granted by the laws of nature (quantum mechanics)~\cite{noclone0,noclone}. The first protocols designed were based on discrete variable (DV) systems, while more recently proposed protocols use continuous variables (CV), i.e., the position and momentum quadratures of the bosonic modes of the electromagnetic field~\cite{Braunstein_rev,Stefano_rev}. In particular, CV-QKD protocols using Gaussian modulation of coherent states for the encoding of information~\cite{GG02,RR_protocol,weedbrook2004noswitching} can be easily implemented using the current telecommunication infrastructure and may achieve high rates close to the PLOB bound for repeaterless quantum communications in a lossy channel~\cite{PLOB}. More specifically, these protocols can be considered as coming from a single scheme with different aspects~\cite{Stefano_rev}: reverse reconciliation (RR) or direct reconciliation (DR), with homodyne or heterodyne decoding measurement.
Their security analysis was first studied for asymptotic key rates under the assumption of collective Gaussian attacks~\cite{opt_Gaussian_attack1,opt_Gaussian_attack2}, completely characterized by Ref.~\cite{Stefano_CF}. Later, security was extended to the finite size regime~\cite{Antony_cpe, UsenkoFNSZ, finite-size thermal} and to a general composable framework~\cite{Leverier_definetti,free space}, including free-space~\cite{free space} and satellite-based scenarios~\cite{SatQKD}. Proof-of-principle and in-field experiments have been recently demonstrated in long ground-based fiber connections~\cite{HuangExp,ZhangExpI,ZhangExpII}. As pointed out in Ref.~\cite{Stefano_CF}, single-mode Gaussian channels and the corresponding collective Gaussian attacks can be classified in different canonical forms. One of these forms is represented by the thermal-loss (attenuation) channel and the associated collective entangling-cloner attack, typically assumed in CV-QKD security proofs.
Here we present the asymptotic secret key rates of the Gaussian-modulated coherent-state protocols with respect to the other canonical forms. Besides the attenuation channel, these include the amplifying channel, the additive classical-noise channel and other more exotic Gaussian channels~\cite{Stefano_rev,Stefano_CF}. Then, using the toolbox of Ref.~\cite{free space} for composable security under general channel conditions, we extend the analysis of the amplifying and classical-noise channels to include finite-size effects and composable security.
After a short description of the canonical forms in Sec.~\ref{sec:Canforms}, in Sec.~\ref{sec:Protocol} we describe the security analysis in the asymptotic regime in the presence of a generic canonical form for the homodyne and heterodyne protocols in RR/DR. In Sec.~\ref{sec:rates}, we present the results of the previous analysis specialized to each canonical form, assuming ideal reconciliation efficiency and large modulation. In Sec.~\ref{sec:PE}, we perform the parameter estimation (PE) following Refs.~\cite{UsenkoFNSZ,finite-size thermal} and in Sec.~\ref{sec:composable} we compute the composable key rates using Ref.~\cite{free space}.
\section{Canonical Forms\label{sec:Canforms}} Recall that a Gaussian channel $\mathcal{G}(\mathbf{T},\mathbf{N},\mathbf{d})$ acting on a single mode, for $\mathbf{T},\mathbf{N}$ $2\times 2$ real matrices and $\mathbf{d}$ an $\mathbb{R}^2$ vector, is a completely positive trace-preserving map that maintains the Gaussian statistics of the input state. It can be mapped to its canonical form $\mathcal{C}$, which is a Gaussian channel with $\mathbf{d}=0$ and $\mathbf{T}_c$, $\mathbf{N}_c$ diagonal, by $\mathcal{G}=\mathcal{U}_A \circ \mathcal{C} \circ \mathcal{U}_B$, where $\mathcal{U}_A$ and $\mathcal{U}_B$ are Gaussian unitaries. One can reduce the description of $\mathbf{T}_c$, $\mathbf{N}_c$ to three symplectic invariants: the generalized transmission $\tau=\text{det}\mathbf{T}$, for $-\infty<\tau<\infty$, the rank $r=(\text{rk}(\mathbf{T})\text{rk}(\mathbf{N}))/2$ for $r=0,1,2$ and the temperature $\bar{n}$, connected to $\text{det}\mathbf{N}$.
According to the first two parameters, the canonical forms can be grouped into different classes: the class $A_1$ for $\tau=0$, $r=0$, which replaces the input states with thermal states (completely depolarizing channel); the classes $A_2$ and $B_1$ for $\tau=0$, $r=1$ and $\tau=1$, $r=1$, transforming the quadratures asymmetrically; the class $B_2$, the additive classical-noise channel, for $\tau=1$ and $r=2$, which collapses to the identity channel for $\bar{n}=0$; the class $C$, connected to channels with transmissivity, i.e., $0<\tau\neq 1$ and $r=2$, with the subcases $\tau<1$ (attenuation channel) and $\tau>1$ (amplifying channel); and, finally, the class $D$, whose output can be seen as complementary to that of the amplifying channel, connected to negative transmissivities.
Via the Stinespring dilation, one can represent the canonical form $\mathcal{C}(\tau,r,\bar{n})$ with a unitary symplectic transformation $\mathcal{L}(\tau,r)$ mixing the input state and a two-mode squeezed-vacuum (TMSV) state with variance $\omega=2\bar{n}+1$, which describes the environment. In more detail, apart from the class $B_2$, we have that $\mathcal{L}(\tau,r)=\mathcal{M}(\tau,r)\oplus \mathbf{I}$ where $\mathcal{M}(\tau,r)$ is a symplectic form interacting only with the input state and one mode from the TMSV state, with the other mode being subject to the identity $\mathbf{I}$. For the class $B_2$, we adopt a description using the attenuation channel as we will see later.
The unitary dilation of the canonical form represents the Gaussian interaction performed by the eavesdropper that controls the TMSV state of the environment~\cite{Stefano_CF}. After interaction with the input mode, the environmental output is stored in a quantum memory, that will be subject to a joint and optimal measurement (collective attack).
\section{Aspects of the protocol scheme\label{sec:Protocol}} Alice picks randomly $2N$ samples $\{x_i\}$ from the variable $x$ distributed according to the normal distribution \begin{equation} p(x)=(\sqrt{2\pi V_A})^{-1}\exp\left[-x^2/(2V_A)\right] \end{equation}
with zero mean and variance $V_A$. Then she modulates mode $A$ carrying coherent states $|\alpha\rangle$ according to these samples with \begin{equation} \alpha=(q_A+\mathrm{i}p_A)/2=(x_{2j-1}+\mathrm{i}x_{2j})/2, \end{equation} where $q_A$ and $p_A$ are the encoding on the quadratures and $j=1,\dots,N$. In the asymptotic regime ($N \gg 1$), the covariance matrix (CM) of Alice's ensemble state is given by $\mathbf{V}_A= \mu \mathbf{I}$ with $\mathbf{I}= \text{diag} \{1,1\}$ and $\mu=V_A+1$. Mode $A$ is traveling through a quantum channel modeled by one of the canonical forms~\cite{Stefano_rev,Stefano_CF}. In particular, Eve's system is described by two modes $E$ and $e$ in a TMSV state with variance $\omega$ and covariance matrix \begin{equation} \mathbf{V}_{Ee}=\begin{pmatrix} \omega \mathbf{I}&\sqrt{\omega^2-1} \mathbf{Z}\\ \sqrt{\omega^2-1}\mathbf{Z}&\omega \mathbf{I} \end{pmatrix}, \end{equation} where $\mathbf{Z}=\text{diag} \{1,-1\}$. Mode $E$ is mixed with $A$ via a canonical form whose dilation is represented by a symplectic matrix $\mathcal{M}$ (e.g., this is a beam-splitter transformation in the case of an attenuation channel). One output mode $B$ goes to Bob, while the other $E'$ goes to Eve. Eve's idler mode $e$ and mode $E'$ are kept in a quantum memory for a later optimal measurement. Then the CM for modes $B$, $E'$, and $e$ is given by \begin{equation} \mathbf{V}_{BE'e}=(\mathcal{M}^\mathsf{T} \oplus \mathbf{I}) \left(\mathbf{V}_A\oplus\mathbf{V}_{Ee}\right) (\mathcal{M} \oplus \mathbf{I}), \end{equation} which can be expressed as follows \begin{equation}\label{eq:CMBE'e} \mathbf{V}_{BE'e}=\begin{pmatrix} \mathbf{V}_B & \mathbf{C}_{BE'}&\mathbf{C}_{Be}\\ \mathbf{C}_{BE'}&\mathbf{V}_{E'}&\mathbf{C}_{E'e}\\ \mathbf{C}_{Be}&\mathbf{C}_{E'e}&\mathbf{V}_{e} \end{pmatrix}. \end{equation}
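As a quick numerical check of the CM construction above, one can build $\mathbf{V}_{BE'e}$ explicitly for the attenuation form of class $C$, where $\mathcal{M}$ is the beam-splitter symplectic. A sketch assuming NumPy (the function name is illustrative):

```python
import numpy as np

def attenuation_cm(mu, tau, omega):
    """Build V_{BE'e} = (M^T (+) I)(V_A (+) V_{Ee})(M (+) I) for the
    attenuation form (class C, tau < 1), in shot-noise units."""
    I2 = np.eye(2)
    Z = np.diag([1.0, -1.0])
    VA = mu * I2                                   # Alice's ensemble CM
    c = np.sqrt(omega**2 - 1)
    VEe = np.block([[omega * I2, c * Z],
                    [c * Z, omega * I2]])          # Eve's TMSV state
    # Beam-splitter symplectic mixing modes A and E
    M = np.block([[np.sqrt(tau) * I2, np.sqrt(1 - tau) * I2],
                  [-np.sqrt(1 - tau) * I2, np.sqrt(tau) * I2]])
    S = np.block([[M, np.zeros((4, 2))],
                  [np.zeros((2, 4)), I2]])         # M (+) I (idler untouched)
    Vin = np.block([[VA, np.zeros((2, 4))],
                    [np.zeros((4, 2)), VEe]])      # V_A (+) V_{Ee}
    return S.T @ Vin @ S
```

The top-left $2\times 2$ block of the result reproduces the expected Bob variance $V_B=\tau\mu+(1-\tau)\omega$ of the thermal-loss channel, which is a useful sanity check on the ordering of the symplectic factors.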
From the previous CM, we can derive the CM of Eve's average state by tracing out mode $B$ and Bob's CM by tracing out $E'e$ respectively. Therefore, we obtain \begin{equation}\label{eq:average CM} \mathbf{V}_{E'e}=\begin{pmatrix} \mathbf{V}_{E'} &\mathbf{C}_{E'e} \\ \mathbf{C}_{E'e}&\mathbf{V}_{e} \end{pmatrix},~~ \mathbf{V}_B=\text{diag}\{V^q_B(V_A),V^p_B(V_A)\} \end{equation} where $\mathbf{V}_{E'}=\text{diag}\{V^q_{E'}(V_A),V^p_{E'}(V_A)\}$ is a function of $V_A$. In general, the canonical forms may treat the quadratures $q$ and $p$ asymmetrically resulting in different variances $V_B^q$ and $V_B^p$ or $V_E^q$ and $V_E^p$ respectively. Note that for the class $C$ and the classical-noise channel the treatment is symmetric so we have $V_B^q=V_B^p=V_B$ and $V_{E'}^q=V_{E'}^p=V_{E'}$.
In the homodyne protocol, Bob measures either the $q$-quadrature or $p$-quadrature of the arriving mode with outcome $q_B$ or $p_B$ respectively. He informs Alice about the choice of quadrature and then she keeps only the relevant encoding $q_A$ or $p_A$ respectively (shifting the outcomes). In contrast, in the heterodyne protocol, Bob measures both quadratures and Alice's encoding is described by the pair $q_A,p_A$ and Bob's outcome by $q_B,p_B$.
For the homodyne protocol in DR, we derive Eve's conditional CM $\mathbf{V}_{E'e|q_A}$ (respectively $\mathbf{V}_{E'e|p_A}$) on Alice's encoding $q_A$ (or $p_A$) given by~(\ref{eq:average CM}) up to the replacement of $\mathbf{V}_{E'}$ with $\text{diag}\{V^q_{E'}(0),V^p_{E'}(V_A)\}$ (respectively with $\text{diag}\{V^q_{E'}(V_A),V^p_{E'}(0)\}$); for the heterodyne protocol, the conditional CM $\mathbf{V}_{E'e|q_A,p_A}$ is given by replacing $\mathbf{V}_{E'}$ by $\text{diag}\{V^q_{E'}(0),V^p_{E'}(0)\}$ in the same equation.
Let us now compute Eve's conditional CM on Bob's measurement outcome $l_B$, with $l$ being either $q$ or $p$, in RR. For the homodyne protocol, we obtain~\cite{Stefano_rev}
\begin{equation}
\mathbf{V}_{E'e|l_B}=\mathbf{V}_{E'e}-\mathbf{C}_{BE'e}^\mathsf{T}\left(\Pi_{l}\mathbf{V}_B\Pi_{l}\right)^{-1}\mathbf{C}_{BE'e}, \end{equation}
where $\mathbf{C}_{BE'e}=\begin{pmatrix} \mathbf{C}_{BE'} \\ \mathbf{C}_{Be} \end{pmatrix}$, $\Pi_q=\text{diag}\{1,0\}$, $\Pi_p=\text{diag}\{0,1\}$, and $(.)^{-1}$ corresponds here to the calculation of the pseudo-inverse. If Bob's measurement is a heterodyne measurement then the conditional CM is given by~\cite{Stefano_rev} \begin{equation}
\mathbf{V}_{E'e|q_B,p_B}=\mathbf{V}_{E'e}-\mathbf{C}_{BE'e}^\mathsf{T}\left(\mathbf{V}_B +\mathbf{I}\right)^{-1}\mathbf{C}_{BE'e}. \end{equation}
Let us assume now a very large number of exchanged signals ($N\gg1$). Then the mutual information between the encoding $q_A$ or $p_A$ and the outcome $q_B$ or $p_B$ for the homodyne protocol is given by \begin{equation}
I(\mu,\tau,\omega)=\frac{1}{2}\left(\frac{1}{2}\log_2\frac{V^q_{B}}{V^q_{B|q_A}}+\frac{1}{2}\log_2\frac{V^p_{B}}{V^p_{B|p_A}}\right), \end{equation}
for $V^l_{B|l_A}=V^l_{B}(0)$, where we have assumed that half of the times Bob's outcome is $q_B$ and otherwise $p_B$. On the other hand, the mutual information between the encoding $q_A,p_A$ and the outcome $q_B, p_B$ for the heterodyne protocol is given by \begin{equation}
I(\mu,\tau,\omega)=\frac{1}{2}\left(\log_2\frac{V^q_{B}+1}{V^q_{B|q_A}+1}+\log_2\frac{V^p_{B}+1}{V^p_{B|p_A}+1}\right). \end{equation}
Eve's Holevo information is calculated by the symplectic spectrum $\boldsymbol{\nu}_{E'e}$ of the CM $\mathbf{V}_{E'e}$ and the spectra, $\boldsymbol{\nu}_{E'e|q_A}$, $\boldsymbol{\nu}_{E'e|p_A}$ and $\boldsymbol{\nu}_{E'e|q_A,p_A}$ or $\boldsymbol{\nu}_{E'e|q_B}$, $\boldsymbol{\nu}_{E'e|p_B}$ and $\boldsymbol{\nu}_{E'e|q_B, p_B}$, associated with the conditional CMs in DR or RR respectively. More specifically, for the homodyne protocol, we have that \begin{align} \chi(\mu,\tau,\omega)&=\sum_{i=1,2} h\left([\boldsymbol{\nu}_{E'e}]_i\right)\notag\\
&-\frac{1}{2}\left(\sum_{i=1,2} h\left([\boldsymbol{\nu}_{E'e|q_\gamma}]_i\right)+\sum_{i=1,2} h\left([\boldsymbol{\nu}_{E'e|p_\gamma}]_i\right)\right), \end{align} while for the heterodyne protocol \begin{align} \chi(\mu,\tau,\omega)&=\sum_{i=1,2} h\left([\boldsymbol{\nu}_{E'e}]_i\right)\notag
-\sum_{i=1,2} h\left([\boldsymbol{\nu}_{E'e|q_\gamma,p_\gamma}]_i\right), \end{align} where \begin{equation} h(x)=\frac{x+1}{2}\log_2\frac{x+1}{2}-\frac{x-1}{2}\log_2\frac{x-1}{2}, \end{equation} with $\gamma$ being either $A$ or $B$ for the protocol in DR or RR respectively. Then the asymptotic secret key rate is obtained by~\cite{revQKD} \begin{equation}\label{eq:asym_rate} R(\mu,\tau,\omega)=\zeta I(\mu,\tau,\omega)-\chi(\mu,\tau,\omega), \end{equation} where $\zeta$ is the reconciliation efficiency parameter.
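The entropic function $h(x)$ above is evaluated on each symplectic eigenvalue; a minimal sketch, handling the $x\rightarrow 1$ limit (vacuum, zero entropy) explicitly:

```python
from math import log2

def h(x):
    """Entropic function h(x) of a symplectic eigenvalue x >= 1."""
    if x <= 1:
        # The (x-1)/2 * log2((x-1)/2) term vanishes as x -> 1, so h(1) = 0.
        return 0.0
    return (x + 1) / 2 * log2((x + 1) / 2) - (x - 1) / 2 * log2((x - 1) / 2)
```

For instance, $h(3)=2\log_2 2-\log_2 1=2$ bits, the von Neumann entropy of a thermal state with one mean photon, and the Holevo bound $\chi$ is then a sum of such terms over the unconditional and conditional spectra.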
\section{\label{sec:rates}Asymptotic key rates}
Here we calculate the asymptotic secret key rate for each of the canonical forms assuming an ideal reconciliation efficiency $\zeta=1$ and the large modulation limit $\mu \rightarrow \infty$. In fact, we present results in detail for the practical cases of the attenuation, amplifying, and classical-noise channels. Class $B_1$ is always secure (see Appendix for more details) while classes $D$ and $A_2$ do not provide a secret key rate, i.e., for any set of parameters describing the corresponding canonical form the parties cannot extract a secret key. The whole class of such channels has the property of anti-degradability~\cite{Stefano_rev}: In terms of cryptography, the eavesdropper (Eve) can obtain the receiver's (Bob's) state by applying a CPT map on the state of the environment, forbidding secret key extraction. However, for classes whose members may or may not hold this property, e.g., the attenuation channel with $\tau<1/2$ as opposed to the cases with $\tau\geq1/2$, RR can provide a remedy. \subsection{C class} The symplectic matrix associated with the dilation of the C class is \begin{equation} \mathcal{M}_\text{Att}(0<\tau<1)=\begin{pmatrix} \sqrt{\tau}\mathbf{I} & \sqrt{1-\tau}\mathbf{I}\\ -\sqrt{1-\tau}\mathbf{I} & \sqrt{\tau}\mathbf{I} \end{pmatrix} \end{equation} and \begin{equation} \mathcal{M}_\text{Amp}(\tau>1)=\begin{pmatrix} \sqrt{\tau}\mathbf{I} & \sqrt{\tau-1}\mathbf{Z}\\ \sqrt{\tau-1}\mathbf{Z} & \sqrt{\tau}\mathbf{I} \end{pmatrix} \end{equation} for the attenuation and amplifying channel respectively. Following the steps in Sec.~\ref{sec:Protocol}, one easily obtains the secret key rates for the homodyne (hom) and heterodyne (het) protocols
in DR ($\blacktriangleright$) and RR ($\blacktriangleleft$). We have \begin{align}
R_\text{hom}^\blacktriangleright (\tau,\omega)&=\frac{1}{2}\log_2\frac{\tau\left(\tau\omega+|1-\tau|\right)}{|1-\tau|(\tau+|1-\tau|\omega)}\notag\\&~~~~-h(\omega)+h\left(\sqrt{\frac{\omega(\tau+|1-\tau|\omega)}{|1-\tau|+\tau\omega}}\right),\label{eq:AttDRhom}\\
R_\text{hom}^\blacktriangleleft(\tau,\omega)&=\frac{1}{2}\log_2\frac{\omega}{|1-\tau|(\tau+(|1-\tau|)\omega)}-h(\omega),\label{eq:AttRRhom}\\
R_\text{het}^\blacktriangleright (\tau,\omega)&=\log_2\frac{2\tau}{\mathrm{e}|1-\tau|\left(\tau+|1-\tau|\omega+1\right)}\notag\\&~~~~-h(\omega)+h\left(\tau+|1-\tau|\omega\right),\label{eq:AttDRhet}\\
R_\text{het}^\blacktriangleleft(\tau,\omega)&=\log_2\frac{2\tau}{\mathrm{e}|1-\tau|(\tau+|1-\tau|\omega+1)}\notag\\&~~~~-h(\omega)+h\left(\frac{1+|1-\tau|\omega}{\tau}\right)\label{eq:AttRRhet}. \end{align}
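These closed-form rates are easy to evaluate numerically. A sketch for the RR homodyne rate of Eq.~(\ref{eq:AttRRhom}), with the entropic function $h$ redefined locally so the snippet is self-contained (the function names are illustrative):

```python
from math import log2

def h(x):
    """Entropic function h(x) of a symplectic eigenvalue x >= 1."""
    if x <= 1:
        return 0.0
    return (x + 1) / 2 * log2((x + 1) / 2) - (x - 1) / 2 * log2((x - 1) / 2)

def rate_hom_rr(tau, omega):
    """Asymptotic RR homodyne rate of the C class, Eq. (eq:AttRRhom),
    for ideal reconciliation and mu -> infinity."""
    a = abs(1 - tau)
    return 0.5 * log2(omega / (a * (tau + a * omega))) - h(omega)
```

As a check, for a pure-loss channel ($\omega=1$, so $h(\omega)=0$) with $\tau=1/2$ the formula gives $\frac{1}{2}\log_2 2 = 0.5$ bits per use, and the rate increases monotonically with $\tau$ as expected.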
In Fig.~\ref{fig:thresholds}, we plot the security threshold for each of the cases above with respect to transmissivity and excess noise $\xi=\frac{|1-\tau| (\omega-1)}{\tau}$. Then, in Fig.~\ref{fig:Att_asy}, for $\omega:=1$ (no thermal noise), $\zeta=1$, and $\tau:=10^{\frac{-L}{10}}$ where $L$ is the attenuation in dB, we plot~(\ref{eq:AttDRhom}),~(\ref{eq:AttDRhet}),~(\ref{eq:AttRRhom}) and~(\ref{eq:AttRRhet}). In Fig.~\ref{fig:Amp_asy}, we plot the same cases for $\tau:=10^{\frac{L}{10}}$ with $L$ being the gain in dB.
\begin{figure}
\caption{ The asymptotic security thresholds of the C class for transmissivities $\tau>0$ ($\tau\neq 1$) with respect to the excess noise $\xi$, where the reconciliation has been considered ideal and $\mu\rightarrow \infty$. We plot the homodyne protocol in DR (black solid line) and in RR (black dashed line) and the heterodyne protocol in DR (gray solid line) and in RR (gray dashed line). The instances with high excess noise (above the threshold lines) give no secret key rate.}
\label{fig:thresholds}
\end{figure}
\begin{figure}
\caption{Secret key rate versus attenuation in dB for an attenuation channel. With thin lines, we plot the asymptotic rate for the homodyne protocol in DR (black solid) and in RR (black dashed) and for the heterodyne protocol in DR (gray solid) and in RR (gray dashed) for $\xi=0$, $\zeta=1$, and $\mu \rightarrow \infty$. For the composable secret key rates (corresponding thick lines), we have assumed channel excess noise $\xi=0.01$ and conservative values for the parameters $\zeta=0.9$, $p_\text{EC}=0.8$, and $N=10^6$. We have optimized over the ratio $r$ and the modulation $V_A$ with $\epsilon_\text{PE}\approx 10^{-10}$, $\epsilon_\text{s}=\epsilon_\text{h}=10^{-20}$, and $d=2^5$. The plots of the asymptotic key rate evaluate the security of the protocol and the associated performance taking into account only theoretical aspects, e.g., the kind of attack, focusing on the quantum communication part of the protocol. By contrast, the finite-size analysis in a composable framework also takes into account the classical post-processing parts of the protocol, providing a performance close to a practical implementation, which is usually expected to be worse than the ideal case, as supported by the thick lines compared with the corresponding thin lines. We observe here that the heterodyne protocols behave better at shorter distances (higher signal-to-noise ratio) compared with the homodyne protocols, since they can take advantage of the double encoding into the same signal. Despite this fact, the homodyne protocols achieve positive rates at longer distances. In fact, they behave better against the excess noise at long distances and, in particular the RR protocol, against the parameter-estimation effects connected to the excess noise and transmissivity.}
\label{fig:Att_asy}
\end{figure}
\begin{figure}
\caption{ Asymptotic key rates in the presence of an amplifying channel. With thin lines, we plot the asymptotic rate for the homodyne protocol in DR (black solid) and in RR (black dashed) and for the heterodyne protocol in DR (gray solid) and in RR (gray dashed) for $\xi=0$, $\zeta=1$ and $\mu \rightarrow \infty$. For the composable secret key rates (corresponding thick lines), we have assumed channel excess noise $\xi=0.01$ and conservative values for the parameters $\zeta=0.9$, $p_\text{EC}=0.8$, and $N=10^6$. We have optimized over the ratio $r$ and the modulation $V_A$ with $\epsilon_\text{PE}\approx 10^{-10}$, $\epsilon_\text{s}=\epsilon_\text{h}=10^{-20}$, and $d=2^5$. Here we observe that the RR and DR protocols have the opposite behaviour compared with the attenuation channel case, i.e., the achievable distances are smaller for the RR protocols than for the DR protocols. Comparing also the composable rates of the RR protocols, it seems that in the regime of $N=10^6$ the homodyne protocol cannot surpass the performance of the heterodyne protocol, due to the fact that the gain variance plays an important role in the amplifying channel: the coefficient in front of the gain variance in~(\ref{eq:sigmaXihom_amp}) is double its counterpart in~(\ref{eq:sigmaXihet_amp}).}
\label{fig:Amp_asy}
\end{figure}
\subsection{Classical-noise channel} To simulate a Gaussian channel with additive classical-noise, we adopt the symplectic matrix of a beam splitter \begin{equation} \mathcal{M}_\text{Att}(0<\tau<1)=\begin{pmatrix} \sqrt{\tau}\mathbf{I} & \sqrt{1-\tau}\mathbf{I}\\ -\sqrt{1-\tau}\mathbf{I} & \sqrt{\tau}\mathbf{I} \end{pmatrix} \end{equation} and take the joint limits for $\tau \rightarrow 1$ and $\omega \rightarrow \infty$ so that $(1-\tau)\omega=\theta$, for some constant variance $\theta$ of the additive noise. The corresponding secret key rates are given by \begin{align} &R_{\text{hom}}^\blacktriangleright(\theta)=\log_2\left(\frac{2}{e \sqrt{\theta(\theta+1)}}\right)+h(\sqrt{1+\theta}),\label{eq:ClasDRhom}\\ &R_{\text{hom}}^\blacktriangleleft(\theta)=\log_2 \left(\frac{2}{e \sqrt{\theta (\theta +1)}}\right),\label{eq:ClasRRhom}\\ &R_{\text{het}}^\blacktriangleright(\theta)=R_{\text{het}}^\blacktriangleleft(\theta)=\log_2 \left(\frac{4}{e^2 \theta (\theta +2)}\right)+h(\theta+1),\label{eq:ClasDRhet}
\end{align} We plot~\eqref{eq:ClasDRhom},~\eqref{eq:ClasRRhom} and~\eqref{eq:ClasDRhet} in Fig.~\ref{fig:Clas_hom}.
\begin{figure}
\caption{Secret key rate for a classical-noise channel against the classical thermal noise $\theta$. With thin lines, we plot the asymptotic rate for the homodyne protocol in DR (black solid) and in RR (black dashed) and for the heterodyne protocol in DR (gray solid) and in RR (gray dashed) for $\xi=0$, $\zeta=1$ and $\mu \rightarrow \infty$. Note that the lines for the heterodyne protocol in DR and RR coincide. For the composable secret key rates (corresponding thick lines), we have assumed channel excess noise $\xi=0.01$ and conservative values for the parameters $\zeta=0.9$, $p_\text{EC}=0.8$, and $N=10^6$. We have optimized over the ratio $r$ and the modulation $V_A$ with $\epsilon_\text{PE}\approx 10^{-10}$, $\epsilon_\text{s}=\epsilon_\text{h}=10^{-20}$, and $d=2^5$. Here we observe that the most robust protocol against classical thermal noise is the homodyne protocol in DR while the other cases have similar performance.}
\label{fig:Clas_hom}
\end{figure}
\section{\label{sec:PE}Parameter estimation}
\subsection{\label{sec:PEAtt}Attenuation channel} Let us assume a protocol with homodyne detection. Here Bob's measurement outcome is described by the generic variable \begin{align} y=&\sqrt{\tau} x+z
\end{align}
where $y$ describes either the outcome connected with the quadrature $q$ or $p$. Accordingly, the variable $x$ describes Alice's encoding while \begin{align} z~=\sqrt{\tau}x_s+\sqrt{1-\tau} x_o +x_\Xi,
\end{align} is a variable representing the noise detected by Bob. The variables $x_s$ and $x_o$ have equal variance $V_s=V_o=1$ describing the quantum shot noise, and the variable $x_\Xi$ with variance $\Xi:=\tau\xi$ describes the excess noise of the channel $\xi=\frac{(1-\tau)(\omega-1)}{\tau}$.
Therefore we obtain the noise variance \begin{align}\label{eq:signalnoiseAtt} \sigma^2_z=\Xi+1.
\end{align}
Based on the previous analysis and assuming $m$ signals for PE, we derive the variances of the maximum likelihood estimators (MLEs) $\widehat{\tau}$ and $\widehat{\Xi}$ of the transmissivity and excess noise according to Ref.~\cite{finite-size thermal}. Therefore, the worst case scenario values for the channel parameters are given by \begin{align} \tau_m=\tau-w \sigma_\tau,~~\Xi_m=\Xi+w \sigma_\Xi, \end{align} with $w=\sqrt{2}\text{erf}^{-1}(1-\epsilon_{\mathrm{PE}})$, as the extremal values of the intervals defined by the estimator variances
\begin{align} \sigma_\tau^2=\frac{4\tau^2}{m}\left( 2+\frac{\sigma_z^2}{\tau V_A}\right),~~\sigma_\Xi^2=2 \frac{\sigma_z^4}{m}\label{eq:vartauh1},
\end{align} where $\text{erf}(.)$ is the error function and $\epsilon_{\mathrm{PE}}$ is the associated error probability.
In the heterodyne protocol, Bob mixes the incoming mode $B$ with a vacuum mode in a balanced beam splitter. Then he applies two conjugate homodyne detections to the beam-splitter outputs. Due to the presence of the extra vacuum mode, the outputs have an increased noise variance by $1$ shot noise units compared with the protocol using homodyne measurement. In addition, there is an estimator for $\tau$ and $\Xi$ from each one of the quadratures. These are optimally combined and give the variances
\begin{equation} \sigma_\tau^2=\frac{2\tau^2}{m}\left( 2+\frac{\sigma^2_z+1}{\tau V_A}\right)~\text{and}~\sigma^2_\Xi=\frac{(\sigma^2_z+1)^2}{m}.
\end{equation}
Finally, the key rate in Eq.~(\ref{eq:asym_rate}) is expressed via the parameter $\Xi$ as $\tilde{R}(\mu,\tau,\Xi)=R(\mu,\tau,\omega)$ and by setting the worst case scenario values one obtains the secret key rate after PE \begin{equation}\label{eq:RM}
R_m=\tilde{R}(\mu,\tau_m,\Xi_m). \end{equation}
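The worst-case parameters can be computed directly from these formulas. A sketch for the homodyne case of the attenuation channel, using the identity $w=\sqrt{2}\,\text{erf}^{-1}(1-\epsilon_\text{PE})=\Phi^{-1}(1-\epsilon_\text{PE}/2)$ with $\Phi^{-1}$ the standard normal quantile (the function name is illustrative):

```python
from math import sqrt
from statistics import NormalDist

def worst_case_params(tau, Xi, VA, m, eps_pe):
    """Worst-case (tau_m, Xi_m) after PE with m signals, homodyne protocol
    over the attenuation channel, where sigma_z^2 = Xi + 1."""
    # w = sqrt(2) * erfinv(1 - eps_pe), via the standard normal quantile
    w = NormalDist().inv_cdf(1 - eps_pe / 2)
    sz2 = Xi + 1
    var_tau = (4 * tau**2 / m) * (2 + sz2 / (tau * VA))   # Eq. (eq:vartauh1)
    var_Xi = 2 * sz2**2 / m
    return tau - w * sqrt(var_tau), Xi + w * sqrt(var_Xi)
```

As expected, $\tau_m<\tau$ and $\Xi_m>\Xi$, and both estimates tighten toward the true values as the number of PE signals $m$ grows.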
\subsection{Amplifying channel} Here, Bob detects noise described by the variable
\begin{align} z=\sqrt{\tau} x_s \pm \sqrt{\tau-1} x_o + x_\Xi~\text{with}~\sigma_z^2=2\tau+\Xi-1
\end{align}
where $\Xi:=\tau\xi$, $\xi=\frac{(\tau-1)(\omega-1)}{\tau}$ resulting in estimator variances
\begin{align}\label{eq:sigmaXihom_amp} \sigma_\tau^2=\frac{4\tau^2}{m}\left( 2+\sigma^2_z/(\tau V_A)\right),~~\sigma_\Xi^2=2\frac{\sigma_z^4}{m}+4\sigma_\tau^2
\end{align} for the homodyne protocol and \begin{align}\label{eq:sigmaXihet_amp} \sigma_\tau^2=\frac{2\tau^2}{m}\left( 2+(\sigma_z^2+1)/(\tau V_A)\right),~~\sigma_\Xi^2=\frac{(\sigma_z^2+1)^2}{m}+2\sigma_\tau^2
\end{align} for the heterodyne protocol. Finally, one calculates the corresponding secret key rate $R_m$ after PE as in~(\ref{eq:RM}).
\subsection{Classical-noise channel} For the classical-noise channel we adopt the same analysis as in Sec.~\ref{sec:PEAtt} together with the assumption \begin{align} \Xi&=\tau\xi=(1-\tau)\omega-(1-\tau)~~\text{with}~\lim_{\tau \rightarrow 1}\Xi=\theta. \end{align} This leads to the following relation for the noise variance \begin{align}\label{eq:signalnoiseClas} \sigma^2_z=\theta+1.
\end{align} Therefore we obtain the worst case estimator \begin{align} \Xi_m=&\Xi+w\sigma_\Xi~~\text{with}~\sigma_\Xi^2=2 \frac{\sigma_z^4}{m} \end{align}
for the homodyne protocol and
$\sigma_\Xi^2=\frac{(\sigma_z^2+1)^2}{m}$
for the heterodyne protocol. Then one obtains the corresponding secret key rate $R_m$ after PE as in~(\ref{eq:RM}).
\section{\label{sec:composable}Composable key rates}
According to Ref.~\cite{free space}, the composable key rate takes the form \begin{equation} R\geq r\left[R_{m}-n^{-1/2}\Delta_{\text{AEP}}+n^{-1} \Theta \right],\label{sckeee} \end{equation} where \begin{align}
\Theta:=\left\{ \log_{2}[p\left( 1-\epsilon^2_{\text{s}}/3\right) ]+2\log_{2}\sqrt{2}\epsilon_{\text{h}}\right\},~~r=\frac{n p_\text{EC}}{N} \end{align} and \begin{align} \Delta_{\text{AEP}}&:=4\log_{2}\left( 2\sqrt {d}+1\right) \sqrt{\log(18/(p^2\epsilon_{\text{s}}^{4}))}, \label{AEPd}
\end{align} is the correction term for using the von Neumann entropy in the calculation of a finite-size rate and is dependent on the number of bins $d$ used during the discretization step of the variables.
The frame error rate $1-p_\text{EC}$ is the fraction of blocks of initial size $N$ that do not pass the error correction (EC) step, while $n=N-m$ is the portion of signals devoted to secret key creation. With $\epsilon_\text{s}$, $\epsilon_\text{h}$, $\epsilon_\text{PE}$, and $\epsilon_\text{cor}$ we denote the smoothing parameter, the privacy amplification (hashing) parameter, the channel estimation parameter, and the EC parameter. Note that $p_\text{EC}$ is a function of $\epsilon_\text{cor}$, but their relation becomes evident only in a specific practical implementation of the protocol. Each $\epsilon$ parameter quantifies a distance from an ideal implementation of the corresponding step of the protocol. An overall security parameter can then be calculated by composing these parameters into the sum $\epsilon=\epsilon_{\text{s}}+\epsilon_\text{cor}+\epsilon_{\text{h}}+2 p_\text{EC}\epsilon_{\text{PE}}$.
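The bound in Eq.~(\ref{sckeee}) can be sketched numerically as below, reading $p$ as $p_\text{EC}$ and taking the logarithm inside $\Delta_\text{AEP}$ as base-2 (an assumption on the convention of the toolbox; the function name is illustrative):

```python
from math import log2, sqrt

def composable_rate(R_m, N, n, p_ec, d, eps_s, eps_h):
    """Lower bound on the composable key rate, Eq. (sckeee):
    R >= r [ R_m - Delta_AEP / sqrt(n) + Theta / n ], with r = n p_EC / N."""
    Theta = log2(p_ec * (1 - eps_s**2 / 3)) + 2 * log2(sqrt(2) * eps_h)
    Delta_aep = 4 * log2(2 * sqrt(d) + 1) * sqrt(log2(18 / (p_ec**2 * eps_s**4)))
    r = n * p_ec / N
    return r * (R_m - Delta_aep / sqrt(n) + Theta / n)
```

Since $\Delta_\text{AEP}>0$ and $\Theta<0$ for the parameter values used here, the composable rate is always strictly below the scaled asymptotic rate $rR_m$, consistent with the finite-size penalties visible in the thick curves of the figures.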
In Figs.~\ref{fig:Att_asy},~\ref{fig:Amp_asy}, and~\ref{fig:Clas_hom}, we present results regarding the secret key rate in the composable framework for the attenuation, amplifying, and additive classical thermal noise channel, respectively. We assume conservative values for the parameters $N=10^6$, $\zeta=0.9$, and $p_\text{EC}=0.8$ due to limitations that may occur in the data post-processing procedure~\cite{QKD_SIM}. Still, the protocols provide the parties with positive rates at metropolitan distances, e.g., $\approx 10$ km [see Fig.~\ref{fig:Att_asy} (black thick dashed line)]. The security parameters have been set to $\epsilon_\text{PE}\approx 10^{-10}$ and $\epsilon_\text{s}=\epsilon_\text{h}=10^{-20}$. In addition, we chose $d=2^5$ and optimized over $r$ and $V_A$.
\section{Conclusion} In this work we extended the security analysis of CV-QKD to all canonical forms. We first studied the asymptotic security and then focused on finite-size and composable security. We began with a compact description of the asymptotic secret-key rates of practical channels, namely the attenuation, amplification, and classical-noise channels. Our analysis then discussed in more detail the impact of parameter estimation and of other finite-size effects on the secret-key rates achievable over these channels. We also computed the secret-key rate for more exotic Gaussian channels, finding that we either obtain an always-positive key rate (for $B_1$, assuming large Gaussian modulation) or no asymptotic secret key rate (for the forms $D$ and $A_2$). This analysis can be extended, in future works, to protocols that use squeezed and/or thermal states, protocols with discrete alphabets, or CV measurement-device-independent schemes, in each case assuming links described by the previous channel classes.
\section*{Acknowledgments} This work has been funded by the European Union’s Horizon 2020 research and innovation program under grant agreement No 820466 (Quantum-Flagship Project CiViQ: “Continuous Variable Quantum Communications”) and the EPSRC via the Quantum Communications Hub (Grant No. EP/T001011/1).
\appendix \section{$B_1$ class\label{app:b1}} The asymptotic secret key rates for
the canonical form $B_1$ associated with the symplectic transformation \begin{equation} \mathcal{M}_{B_1}=\begin{pmatrix} \mathbf{I} &\frac{\mathbf{I} +\mathbf{Z}}{2}\\ \frac{\mathbf{I} -\mathbf{Z}}{2} &-\mathbf{I} \end{pmatrix} \end{equation} are given by \begin{align} &R_{\text{hom}}^\blacktriangleright(\mu)= \frac{1}{2}\log_2\frac{\sqrt{2\mu}}{\mathrm{e}}+\frac{1}{2}h(\sqrt{2}),\label{eq:B1DDRhomQ}\\
&R_{\text{hom}}^\blacktriangleleft(\mu) =\frac{1}{2}\log_2\frac{\sqrt{2\mu}}{\mathrm{e}},
\\ &R_{\text{het}}^\blacktriangleright(\mu)=R_{\text{het}}^\blacktriangleleft(\mu)=\log_2\frac{\sqrt{2\mu}}{\mathrm{e}\sqrt{3}}+h(\sqrt{2}).\label{eq:B1DDRhet}
\end{align}
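These closed forms are easy to evaluate numerically. The sketch below is illustrative only; it assumes the standard CV-QKD entropic function $h(x)=\frac{x+1}{2}\log_{2}\frac{x+1}{2}-\frac{x-1}{2}\log_{2}\frac{x-1}{2}$, which this excerpt does not restate. It confirms that the direct- and reverse-reconciliation homodyne rates differ by the constant $h(\sqrt{2})/2$ and grow logarithmically in $\mu$.

```python
import math

def h(x):
    # assumed entropic function h(x) (the standard form in CV-QKD rate formulas)
    return (x + 1) / 2 * math.log2((x + 1) / 2) - (x - 1) / 2 * math.log2((x - 1) / 2)

def R_hom_direct(mu):   # R_hom^>(mu) for the B1 form
    return 0.5 * math.log2(math.sqrt(2 * mu) / math.e) + 0.5 * h(math.sqrt(2))

def R_hom_reverse(mu):  # R_hom^<(mu)
    return 0.5 * math.log2(math.sqrt(2 * mu) / math.e)

def R_het(mu):          # R_het^> = R_het^<
    return math.log2(math.sqrt(2 * mu) / (math.e * math.sqrt(3))) + h(math.sqrt(2))
```

Since $h(\sqrt{2})>0$, all three rates become positive once the modulation $\mu$ is large enough, consistent with the always-positive $B_1$ rate noted in the Conclusion.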
\end{document}
\begin{document}
\global\long\def\,d{\,d} \global\long\def\mathrm{tr}{\mathrm{tr}} \global\long\def\operatorname{spt}{\operatorname{spt}} \global\long\def\operatorname{div}{\operatorname{div}} \global\long\def\operatorname{osc}{\operatorname{osc}} \global\long\def\esssup{\esssup} \global\long\def\dashint{\dashint} \global\long\def\esssinf{\esssinf} \global\long\def\esssliminf{\esssliminf}
\global\long\def|{|} \global\long\def\operatorname{sgn}{\operatorname{sgn}}
\global\long\def\operatorname{dist}{\operatorname{dist}} \global\long\def\operatorname{diam}{\operatorname{diam}} \global\long\defp_{\text{min}}{p_{\text{min}}} \global\long\defp_{\text{max}}{p_{\text{max}}} \excludeversion{details}\keywords{non-divergence form equation, normalized equation, $p$-Laplace, Hölder gradient regularity, viscosity solution, inhomogeneous equation}\subjclass[2020]{35J92, 35J70, 35J75, 35D40}\date{October 2021} \title[hölder gradient regularity]{Hölder gradient regularity for the inhomogeneous normalized $p(x)$-Laplace equation} \begin{abstract} We prove the local gradient Hölder regularity of viscosity solutions to the inhomogeneous normalized $p(x)$-Laplace equation \[ -\Delta_{p(x)}^{N}u=f(x), \] where $p$ is Lipschitz continuous, $\inf p>1$, and $f$ is continuous and bounded. \end{abstract}
\author{Jarkko Siltakoski} \maketitle
\section{Introduction}
We study the \textit{inhomogeneous normalized $p(x)$-Laplace equation}
\begin{equation} -\Delta_{p(x)}^{N}u=f(x)\quad\text{in }B_{1},\label{eq:normalized p(x)} \end{equation} where \[
-\Delta_{p(x)}^{N}u:=-\Delta u-(p(x)-2)\frac{\left\langle D^{2}uDu,Du\right\rangle }{\left|Du\right|^{2}} \] is the \textit{normalized $p(x)$-Laplacian}, $p:B_{1}\rightarrow\mathbb{R}$ is Lipschitz continuous, $1<p_{\min}:=\inf_{B_{1}}p\leq\sup_{B_{1}}p=:p_{\max}$ and $f\in C(B_{1})$ is bounded. Our main result is that viscosity solutions to (\ref{eq:normalized p(x)}) are locally $C^{1,\alpha}$-regular.
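To make the pointwise meaning of the operator concrete, the following sketch (an illustration only, not part of the paper's argument) evaluates $\Delta_{p(x)}^{N}u$ for the quadratic $u(x)=\frac{1}{2}\left|x\right|^{2}$, where $Du=x$, $D^{2}u=I$ and hence $\Delta_{p(x)}^{N}u=N+p(x)-2$ away from the origin. The function names here are hypothetical.

```python
def normalized_px_laplacian(grad, hess, p, x):
    """Evaluate  lap^N_{p(x)} u = lap(u) + (p(x)-2) <D^2u Du, Du>/|Du|^2  at a
    point with Du(x) != 0, given callables for the gradient and Hessian of u."""
    g, H = grad(x), hess(x)
    n = len(x)
    lap = sum(H[i][i] for i in range(n))                          # trace of D^2 u
    g2 = sum(gi * gi for gi in g)                                 # |Du|^2
    Hg = [sum(H[i][j] * g[j] for j in range(n)) for i in range(n)]
    return lap + (p(x) - 2.0) * sum(Hg[i] * g[i] for i in range(n)) / g2

# u(x) = |x|^2 / 2:  Du = x,  D^2 u = I
grad = lambda x: list(x)
hess = lambda x: [[1.0 if i == j else 0.0 for j in range(len(x))] for i in range(len(x))]
p = lambda x: 2.0 + 0.3 * x[0]     # a sample Lipschitz variable exponent
x0 = (1.0, 1.0, 1.0)
val = normalized_px_laplacian(grad, hess, p, x0)
assert abs(val - (3 + p(x0) - 2)) < 1e-12   # equals N + p(x) - 2
```

Note how the normalization by $\left|Du\right|^{2}$ makes the operator $1$-homogeneous in $D^{2}u$ but degree-zero in $Du$, which is why the equation is non-divergence form and singular where the gradient vanishes.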
Normalized equations have attracted a significant amount of interest during the last 15 years. Their study is partially motivated by their connection to game theory. Roughly speaking, the value function of certain stochastic tug-of-war games converges uniformly up to a subsequence to a viscosity solution of a normalized equation as the step-size of the game approaches zero \cite{peresShefield08,manfrediParviainenRossi10,manfrediParviainenRossi12,banerjeeGarofalo15,blancRossi19}. In particular, a game with space-dependent probabilities leads to the normalized $p(x)$-Laplace equation \cite{arroyoHeinoParviainen17} and games with running pay-offs lead to inhomogeneous equations \cite{ruosteenoja16}. In addition to game theory, normalized equations have been studied for example in the context of image processing \cite{does11,elmoatazToutainTenbrinck15}.
The variable $p(x)$ in (\ref{eq:normalized p(x)}) has an effect that may not be immediately obvious: If we formally multiply the equation by $\left|Du\right|^{p(x)-2}$ and rewrite it in a divergence form, then a logarithm term appears and we arrive at the expression \begin{equation}
-\operatorname{div}(\left|Du\right|^{p(x)-2}Du)+\left|Du\right|^{p(x)-2}\log(\left|Du\right|)Du\cdot Dp=\left|Du\right|^{p(x)-2}f(x).\label{eq:strong p(x)} \end{equation} For $f\equiv0$, this is the so called \textit{strong $p(x)$-Laplace equation} introduced by Adamowicz and Hästö \cite{adamowiczHasto10,adamowiczHasto11} in connection with mappings of finite distortion. In the homogeneous case viscosity solutions to (\ref{eq:normalized p(x)}) actually coincide with weak solutions of (\ref{eq:strong p(x)}) \cite{siltakoski18}, yielding the $C^{1,\alpha}$-regularity of viscosity solutions as a consequence of a result by Zhang and Zhou \cite{strongpx_regularity}.
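For the reader's convenience, the formal computation behind this rewriting (valid wherever $Du\neq0$) can be spelled out: expanding the divergence and collecting terms gives

```latex
\operatorname{div}\bigl(\left|Du\right|^{p(x)-2}Du\bigr)
  =\left|Du\right|^{p(x)-2}\Bigl(\Delta u+(p(x)-2)\frac{\left\langle D^{2}u\,Du,Du\right\rangle }{\left|Du\right|^{2}}
    +\log(\left|Du\right|)\,Du\cdot Dp\Bigr),
```

so multiplying $-\Delta_{p(x)}^{N}u=f$ by $\left|Du\right|^{p(x)-2}$ and rearranging yields (\ref{eq:strong p(x)}); the logarithm term originates from differentiating $x\mapsto\left|Du\right|^{p(x)-2}=e^{(p(x)-2)\log\left|Du\right|}$ in its exponent.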
In the present paper our objective is to prove $C^{1,\alpha}$-regularity of solutions to (\ref{eq:normalized p(x)}) directly using viscosity methods. The Hölder regularity of solutions already follows from existing general results, see \cite{krylovSafonov79,krylovSafonov80,caffarelli89,caffarelliCabre}. More recently, Imbert and Silvestre \cite{imbertSilvestre12} proved the gradient Hölder regularity of solutions to the elliptic equation \[
\left|Du\right|^{\gamma}F(D^{2}u)=f, \] where $\gamma>0$ and Imbert, Jin and Silvestre \cite{jinsilvestre17,imbertJinSilvestre16} obtained a similar result for the parabolic equation \[
\partial_{t}u=\left|Du\right|^{\gamma}\Delta_{p}^{N}u, \] where $p>1$, $\gamma>-1$. Furthermore, Attouchi and Parviainen \cite{attouchiParv} proved the $C^{1,\alpha}$-regularity of solutions to the inhomogeneous equation $\partial_{t}u-\Delta_{p}^{N}u=f(x,t)$. Our proof of Hölder gradient regularity for solutions of (\ref{eq:normalized p(x)}) is in particular inspired by the papers \cite{jinsilvestre17} and \cite{attouchiParv}.
We point out that recently Fang and Zhang \cite{fangZhang21b} proved the $C^{1,\alpha}$-regularity of solutions to the parabolic normalized $p(x,t)$-Laplace equation \begin{equation} \partial_{t}u=\Delta_{p(x,t)}^{N}u,\label{eq:parabolic normalized p(x)} \end{equation} where $p\in C_{\text{loc}}^{1}$. The equation (\ref{eq:parabolic normalized p(x)}) naturally includes (\ref{eq:normalized p(x)}) if $f\equiv0$. However, in this article we consider the inhomogeneous case and only suppose that $p$ is Lipschitz continuous. More precisely, we have the following theorem. \begin{thm} \label{thm:main-1} Suppose that $p$ is Lipschitz continuous in $B_{1}$, $p_{\min}>1$ and $f\in C(B_{1})$ is bounded. Let $u$ be a viscosity solution to \[ -\Delta_{p(x)}^{N}u=f(x)\quad\text{in }B_{1}. \] Then there is $\alpha(N,p_{\min},p_{\max},p_{L})\in(0,1)$ such that \[ \left\Vert u\right\Vert _{C^{1,\alpha}(B_{1/2})}\leq C(N,p_{\min},p_{\max},p_{L},\left\Vert f\right\Vert _{L^{\infty}(B_{1})},\left\Vert u\right\Vert _{L^{\infty}(B_{1})}), \] where $p_{L}$ is the Lipschitz constant of $p$. \end{thm}
The proof of Theorem \ref{thm:main-1} is based on suitable uniform $C^{1,\alpha}$-regularity estimates for solutions of the regularized equation \begin{equation}
-\Delta v-(p_{\varepsilon}(x)-2)\frac{\left\langle D^{2}vDv,Dv\right\rangle }{\left|Dv\right|^{2}+\varepsilon^{2}}=g(x),\label{eq:intro regularized} \end{equation} where it is assumed that $g$ is continuous and $p_{\varepsilon}$ is smooth. In particular, we show estimates that are independent of $\varepsilon$ and only depend on $N$, $\sup p$, $\inf p$, $\left\Vert Dp_{\varepsilon}\right\Vert _{L^{\infty}}$ and $\left\Vert g\right\Vert _{L^{\infty}}$. To prove such estimates, we first derive estimates for the perturbed homogeneous equation \begin{equation}
-\Delta v-(p_{\varepsilon}(x)-2)\frac{\left\langle D^{2}v(Dv+q),Dv+q\right\rangle }{\left|Dv\right|^{2}+\varepsilon^{2}}=0,\label{eq:intro homogeneous} \end{equation} where $q\in\mathbb{R}^{N}$. Roughly speaking, $C^{1,\alpha}$-estimates for solutions of (\ref{eq:intro homogeneous}) are based on ``improvement of oscillation'' which is obtained by differentiating the equation and observing that a function depending on the gradient of the solution is a supersolution to a linear equation. The uniform $C^{1,\alpha}$-estimates for solutions of (\ref{eq:intro homogeneous}) then yield uniform estimates for the inhomogeneous equation (\ref{eq:intro regularized}) by an adaption of the arguments in \cite{imbertSilvestre12,attouchiParv}.
With the \textit{a priori} regularity estimates at hand, the plan is to let $\varepsilon\rightarrow0$ and show that the estimates pass on to solutions of (\ref{eq:normalized p(x)}). A problem is caused by the fact that, to the best of our knowledge, uniqueness of solutions to (\ref{eq:normalized p(x)}) is an open problem for variable $p(x)$ and even for constant $p$ if $f$ is allowed to change signs. To deal with this, we fix a solution $u_{0}\in C(\overline{B}_{1})$ to (\ref{eq:normalized p(x)}) and consider the Dirichlet problem \begin{equation} -\Delta_{p(x)}^{N}u=f(x)-u_{0}(x)-u\quad\text{in }B_{1}\label{eq:intro dirichlet} \end{equation} with boundary data $u=u_{0}$ on $\partial B_{1}$. For this equation the comparison principle holds and thus $u_{0}$ is the unique solution. We then consider the approximate problem \begin{equation}
-\Delta u_{\varepsilon}-(p_{\varepsilon}(x)-2)\frac{\left\langle D^{2}u_{\varepsilon}Du_{\varepsilon},Du_{\varepsilon}\right\rangle }{\left|Du_{\varepsilon}\right|^{2}+\varepsilon^{2}}=f_{\varepsilon}(x)-u_{0,\varepsilon}(x)-u_{\varepsilon}\label{eq:intro regularized 2} \end{equation} with boundary data $u_{\varepsilon}=u_{0}$ on $\partial B_{1}$ and where $p_{\varepsilon},f_{\varepsilon},u_{0,\varepsilon}\in C^{\infty}(B_{1})$ are such that $p_{\varepsilon}\rightarrow p$, $f_{\varepsilon}\rightarrow f$ and $u_{0,\varepsilon}\rightarrow u_{0}$ uniformly in $B_{1}$ and $\left\Vert Dp_{\varepsilon}\right\Vert _{L^{\infty}(B_{1})}\leq\left\Vert Dp\right\Vert _{L^{\infty}(B_{1})}$. As the equation (\ref{eq:intro regularized 2}) is a uniformly elliptic quasilinear equation with smooth coefficients, the solution $u_{\varepsilon}$ exists in the classical sense by standard theory. Since $u_{\varepsilon}$ also solves (\ref{eq:intro regularized}) with $g(x)=f_{\varepsilon}(x)-u_{0,\varepsilon}(x)-u_{\varepsilon}(x)$, it satisfies the uniform $C^{1,\alpha}$-regularity estimate. We then let $\varepsilon\rightarrow0$ and use stability and comparison principles to show that $u_{0}$ inherits the regularity estimate.
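To illustrate the role of the $\varepsilon$-regularization, here is a toy sketch (purely illustrative, not the scheme used in the proofs: one space dimension and a naive frozen-coefficient iteration): the regularized operator becomes $-v''\bigl(1+(p(x)-2)\frac{(v')^{2}}{(v')^{2}+\varepsilon^{2}}\bigr)$, which is well defined even where $v'=0$.

```python
def solve_regularized_1d(p, g, eps=1e-2, n=64, iters=40):
    """Frozen-coefficient iteration for the 1-D model problem
       -v'' * (1 + (p(x)-2) * v'^2 / (v'^2 + eps^2)) = g(x)  on (0,1),
    with v(0) = v(1) = 0: freeze the gradient-dependent factor at the
    current iterate, solve the resulting tridiagonal system, repeat."""
    h = 1.0 / n
    xs = [i * h for i in range(n + 1)]
    v = [0.0] * (n + 1)
    for _ in range(iters):
        # diffusion coefficient frozen at the current iterate
        c = []
        for i in range(1, n):
            s = (v[i + 1] - v[i - 1]) / (2 * h)
            c.append(1.0 + (p(xs[i]) - 2.0) * s * s / (s * s + eps * eps))
        # tridiagonal system: -c_i (v_{i-1} - 2 v_i + v_{i+1}) / h^2 = g_i
        a = [-ci / h**2 for ci in c]          # off-diagonals of row i
        b = [2 * ci / h**2 for ci in c]       # diagonal
        d = [g(xs[i]) for i in range(1, n)]   # right-hand side
        # Thomas algorithm: forward elimination, then back substitution
        for i in range(1, n - 1):
            w = a[i] / b[i - 1]
            b[i] -= w * a[i - 1]
            d[i] -= w * d[i - 1]
        v = [0.0] * (n + 1)
        v[n - 1] = d[-1] / b[-1]
        for i in range(n - 3, -1, -1):
            v[i + 1] = (d[i] - a[i] * v[i + 2]) / b[i]
    return xs, v
```

With $p\equiv2$ the coefficient is identically one and a single solve returns the exact discrete solution of $-v''=g$; for variable $p$ the frozen coefficient stays in $[\min(p_{\min}-1,1),\max(p_{\max}-1,1)]$, mirroring the uniform ellipticity that underlies the $\varepsilon$-independent estimates.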
For other related results, see for example the works of Attouchi, Parviainen and Ruosteenoja \cite{OptimalC1} on the normalized $p$-Poisson problem $-\Delta_{p}^{N}u=f$, Attouchi and Ruosteenoja \cite{attouchiRuosteenoja18,attouchiRuosteenoja20,attouchi20}
on the equation $-\left|Du\right|^{\gamma}\Delta_{p}^{N}u=f$ and its parabolic version, De Filippis \cite{deflippis21} on the double phase problem $(\left|Du\right|^{q}+a(x)\left|Du\right|^{s})F(D^{2}u)=f(x)$
and Fang and Zhang \cite{fangZhang21} on the parabolic double phase problem $\partial_{t}u=(\left|Du\right|^{q}+a(x,t)\left|Du\right|^{s})\Delta_{p}^{N}u$. We also mention the paper by Bronzi, Pimentel, Rampasso and Teixeira
\cite{bronziPimentelRampassoTeixeira} where they consider fully nonlinear variable exponent equations of the type $\left|Du\right|^{\theta(x)}F(D^{2}u)=0$.
The paper is organized as follows: Section 2 is dedicated to preliminaries, Sections 3 and 4 contain $C^{1,\alpha}$-regularity estimates for equations (\ref{eq:intro homogeneous}) and (\ref{eq:intro regularized 2}), and Section 5 contains the proof of Theorem \ref{thm:main-1}. Finally, the Appendix contains a uniform Lipschitz estimate for the equations studied in this paper and a comparison principle for equation (\ref{eq:intro dirichlet}).
\section{Preliminaries}
\subsection{Notation}
We denote by $B_{R}\subset\mathbb{R}^{N}$ an open ball of radius $R>0$ that is centered at the origin in the $N$-dimensional Euclidean space, $N\geq1$. The set of symmetric $N\times N$ matrices is denoted by $S^{N}$. For $X,Y\in S^{N}$, we write $X\leq Y$ if $X-Y$ is negative semidefinite. We also denote the smallest eigenvalue of $X$ by $\lambda_{\min}(X)$ and the largest by $\lambda_{\max}(X)$ and set \[
\left\Vert X\right\Vert :=\sup_{\xi\in B_{1}}\left|X\xi\right|=\sup\left\{ \left|\lambda\right|:\lambda\text{ is an eigenvalue of }X\right\} . \] We use the notation $C(a_{1},\ldots,a_{k})$ to denote a constant $C$ that may change from line to line but depends only on $a_{1},\ldots,a_{k}$. For convenience we often use $C(\hat{p})$ to mean that the constant may depend on $p_{\min}$, $p_{\max}$ and the Lipschitz constant $p_{L}$ of $p$.
For $\alpha\in(0,1)$, we denote by $C^{\alpha}(B_{R})$ the set of all functions $u:B_{R}\rightarrow\mathbb{R}$ with finite Hölder norm \[
\left\Vert u\right\Vert _{C^{\alpha}(B_{R})}:=\left\Vert u\right\Vert _{L^{\infty}(B_{R})}+\left[u\right]_{C^{\alpha}(B_{R})},\text{\ensuremath{\quad}where }\left[u\right]_{C^{\alpha}(B_{R})}:=\sup_{x,y\in B_{R}}\frac{\left|u(x)-u(y)\right|}{\left|x-y\right|^{\alpha}}. \] Similarly, we denote by $C^{1,\alpha}(B_{R})$ the set of all functions for which the norm \[ \left\Vert u\right\Vert _{C^{1,\alpha}(B_{R})}:=\left\Vert u\right\Vert _{C^{\alpha}(B_{R})}+\left\Vert Du\right\Vert _{C^{\alpha}(B_{R})} \]
is finite.
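As a concrete illustration of these seminorms (a numerical sketch, not part of the argument): the function $u(x)=\left|x\right|^{1/2}$ belongs to $C^{1/2}$ but to no $C^{\alpha}$ with $\alpha>1/2$, and a discrete version of $\left[u\right]_{C^{\alpha}}$ makes this visible under grid refinement.

```python
def holder_seminorm(u, pts, alpha):
    # discrete version of [u]_{C^alpha}: max over sample pairs of
    # |u(x) - u(y)| / |x - y|^alpha
    best = 0.0
    for i in range(len(pts)):
        for j in range(i):
            best = max(best, abs(u(pts[i]) - u(pts[j]))
                       / abs(pts[i] - pts[j]) ** alpha)
    return best

u = lambda x: abs(x) ** 0.5
grid = lambda n: [-1.0 + 2.0 * k / n for k in range(n + 1)]

s_half = holder_seminorm(u, grid(100), 0.5)      # stays bounded (= 1) under refinement
s_09_coarse = holder_seminorm(u, grid(100), 0.9)
s_09_fine = holder_seminorm(u, grid(400), 0.9)   # grows: no C^{0.9} bound holds
```

The exponent $\alpha=1/2$ is saturated by pairs $(x,0)$, where $\left|u(x)-u(0)\right|/\left|x\right|^{1/2}=1$ exactly; for $\alpha>1/2$ these pairs make the discrete seminorm blow up like $h^{1/2-\alpha}$ as the grid spacing $h$ shrinks.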
\subsection{Viscosity solutions}
Viscosity solutions are defined using smooth test functions that touch the solution from above or below. If $u,\varphi:\mathbb{R}^{N}\rightarrow\mathbb{R}$ and $x\in\mathbb{R}^{N}$ are such that $\varphi(x)=u(x)$ and $\varphi(y)<u(y)$ for $y\not=x$, then we say that \textit{$\varphi$ touches $u$ from below at $x$}. \begin{defn} \label{def:viscosity solutions} Let $\Omega\subset\mathbb{R}^{N}$ be a bounded domain. Suppose that $f:\Omega\times\mathbb{R}\rightarrow\mathbb{R}$ is continuous. A lower semicontinuous function $u:\Omega\rightarrow\mathbb{R}$ is a \textit{viscosity supersolution} to \[ -\Delta_{p(x)}^{N}u\geq f(x,u)\quad\text{in }\Omega \] if the following holds: Whenever $\varphi\in C^{2}(\Omega)$ touches $u$ from below at $x\in\Omega$ and $D\varphi(x)\not=0$, we have \[
-\Delta\varphi(x)-(p(x)-2)\frac{\left\langle D^{2}\varphi(x)D\varphi(x),D\varphi(x)\right\rangle }{\left|D\varphi(x)\right|^{2}}\geq f(x,u(x)) \] and if $D\varphi(x)=0$, then \[ -\Delta\varphi(x)-(p(x)-2)\left\langle D^{2}\varphi(x)\eta,\eta\right\rangle \ge f(x,u(x))\quad\text{for some }\eta\in\overline{B}_{1}. \] Analogously, an upper semicontinuous function $u:\Omega\rightarrow\mathbb{R}$ is a viscosity subsolution if the above inequalities hold reversed whenever $\varphi$ touches $u$ from above. Finally, we say that $u$ is a \textit{viscosity solution} if it is both a viscosity sub- and supersolution. \end{defn}
\begin{rem*} The special treatment of the vanishing gradient in Definition \ref{def:viscosity solutions} is needed because of the singularity of the equation. Definition \ref{def:viscosity solutions} is essentially a relaxed version of the standard definition in \cite{userguide} which is based on the so called semicontinuous envelopes. In the standard definition one would require that if $\varphi$ touches a viscosity supersolution $u$ from below at $x$, then \[ \begin{cases} -\Delta_{p(x)}^{N}\varphi(x)\geq f(x,u(x)) & \text{if }D\varphi(x)\not=0,\\ -\Delta\varphi(x)-(p(x)-2)\lambda_{\min}(D^{2}\varphi(x))\geq f(x,u(x)) & \text{if }D\varphi(x)=0\text{ and }p(x)\geq2,\\ -\Delta\varphi(x)-(p(x)-2)\lambda_{\max}(D^{2}\varphi(x))\geq f(x,u(x)) & \text{if }D\varphi(x)=0\text{ and }p(x)<2. \end{cases} \] Clearly, if $u$ is a viscosity supersolution in this sense, then it is also a viscosity supersolution in the sense of Definition \ref{def:viscosity solutions}. \end{rem*}
\section{Hölder gradient estimates for the regularized homogeneous equation\label{sec:regularized homogeneous}}
In this section we prove $C^{1,\alpha}$-regularity estimates for solutions to the equation \begin{equation}
-\Delta u-(p(x)-2)\frac{\left\langle D^{2}u(Du+q),Du+q\right\rangle }{\left|Du+q\right|^{2}+\varepsilon^{2}}=0\quad\text{in }B_{1},\label{eq:regularized homogeneous} \end{equation} where $p:B_{1}\rightarrow\mathbb{R}$ is Lipschitz, $p_{\min}>1$, $\varepsilon>0$ and $q\in\mathbb{R}^{N}$. Our objective is to obtain estimates that are independent of $q$ and $\varepsilon$. Observe that (\ref{eq:regularized homogeneous}) is a uniformly elliptic quasilinear equation whose coefficients are smooth whenever $p$ is smooth. Viscosity solutions to (\ref{eq:regularized homogeneous}) can be defined in the standard way and they are smooth if $p$ is smooth. \begin{prop} \label{prop:c infty} Suppose that $p$ is smooth. Let $u$ be a viscosity solution to (\ref{eq:regularized homogeneous}) in $B_{1}$. Then $u\in C^{\infty}(B_{1})$. \end{prop}
It follows from classical theory that the corresponding Dirichlet problem admits a smooth solution (see \cite[Theorems 15.18 and 13.6]{gilbargTrudinger01} and the Schauder estimates \cite[Theorem 6.17]{gilbargTrudinger01}). The viscosity solution $u$ coincides with the smooth solution by a comparison principle \cite[Theorem 3]{kawohlNikolai98}.
\subsection{Improvement of oscillation}
Our regularity estimates for solutions of (\ref{eq:regularized homogeneous}) are based on improvement of oscillation. We first prove such a result for the linear equation \begin{equation} -\mathrm{tr}(G(x)D^{2}u)=f\quad\text{in }B_{1},\label{eq:linear equation} \end{equation} where $f\in C^{1}(B_{1})$ is bounded, $G(x)\in S^{N}$ and there are constants $0<\lambda<\varLambda<\infty$ such that the eigenvalues of $G(x)$ are in $[\lambda,\varLambda]$ for all $x\in B_{1}$. The result is based on the following rescaled version of the weak Harnack inequality found in \cite[Theorem 4.8]{caffarelliCabre}. Such Harnack estimates for non-divergence form equations go back to at least Krylov and Safonov \cite{krylovSafonov79,krylovSafonov80}. \begin{lem}[Weak Harnack inequality] \label{lem:weak harnack} Let $u\geq0$ be a continuous viscosity supersolution to (\ref{eq:linear equation}) in $B_{1}$. Then there are positive constants $C(\lambda,\varLambda,N)$ and $q(\lambda,\varLambda,N)$ such that for any $\tau<\frac{1}{4\sqrt{N}}$ we have \begin{equation}
\tau^{-\frac{N}{q}}\left(\int_{B_{\tau}}\left|u\right|^{q}\,d x\right)^{1/q}\leq C\left(\inf_{B_{2\tau}}u+\tau\left(\int_{B_{4\sqrt{N}\tau}}\left|f\right|^{N}\,d x\right)^{1/N}\right).\label{eq:weak harnack} \end{equation} \end{lem}
\begin{proof} Suppose that $\tau<\frac{1}{4\sqrt{N}}$ and set $S:=8\tau$. Define the function $v:B_{\sqrt{N}/2}\rightarrow\mathbb{R}$ by \begin{align*} v(x) & :=u(Sx) \end{align*} and set \[ \tilde{G}(x):=G(Sx)\quad\text{and}\quad\tilde{f}(x):=S^{2}f(Sx). \] Then, if $\varphi\in C^{2}$ touches $v$ from below at $x\in B_{\sqrt{N}/2}$, the function $\phi(x):=\varphi(x/S)$ touches $u$ from below at $Sx$. Therefore \[ -\mathrm{tr}(G(Sx)D^{2}\phi(Sx))\geq f(Sx). \] Since $D^{2}\phi(Sx)=S^{-2}D^{2}\varphi(x)$, this implies that \[ -\mathrm{tr}(G(Sx)D^{2}\varphi(x))\geq S^{2}f(Sx). \] Thus $v$ is a viscosity supersolution to \[ -\mathrm{tr}(\tilde{G}(x)D^{2}v)\geq\tilde{f}(x)\quad\text{in }B_{\sqrt{N}/2}. \] We denote by $Q_{R}$ a cube with side-length $R$. Since $Q_{1}\subset B_{\sqrt{N}/2}$, it follows from \cite[Theorem 4.8]{caffarelliCabre} that there are $q(\lambda,\varLambda,N)$ and $C(\lambda,\varLambda,N)$ such that \begin{align*}
\left(\int_{B_{1/8}}\left|v\right|^{q}\,d x\right)^{1/q}\leq\left(\int_{Q_{1/4}}\left|v\right|^{q}\,d x\right)^{1/q} & \leq C\left(\inf_{Q_{1/2}}v+\left(\int_{Q_{1}}|\tilde{f}|^{N}\,d x\right)^{1/N}\right)\\
& \leq C\left(\inf_{B_{1/4}}v+\left(\int_{B_{\sqrt{N}/2}}|\tilde{f}|^{N}\,d x\right)^{1/N}\right). \end{align*} By the change of variables formula we have \begin{align*}
\int_{B_{1/8}}\left|v\right|^{q}\,d x= & \int_{B_{1/8}}\left|u(Sx)\right|^{q}\,d x=S^{-N}\int_{B_{S/8}}\left|u(x)\right|^{q}\,d x \end{align*} and \[
\int_{B_{\sqrt{N}/2}}|\tilde{f}|^{N}\,d x=S^{2N}\int_{B_{\sqrt{N}/2}}\left|f(Sx)\right|^{N}\,d x=S^{N}\int_{B_{S\sqrt{N}/2}}\left|f(x)\right|^{N}\,d x. \] Recalling that $S=8\tau$, we get \[
8^{-\frac{N}{q}}\tau^{-\frac{N}{q}}\left(\int_{B_{\tau}}\left|u(x)\right|^{q}\,d x\right)^{1/q}\leq C\left(\inf_{B_{2\tau}}u+8\tau\left(\int_{B_{S\sqrt{N}/2}}\left|f(x)\right|^{N}\,d x\right)^{1/N}\right). \] Absorbing $8^{\frac{N}{q}}$ into the constant, we obtain the claim. \end{proof} \begin{lem}[Improvement of oscillation for the linear equation] \label{lem:imposc linear} Let $u\geq0$ be a continuous viscosity supersolution to $\eqref{eq:linear equation}$ in $B_{1}$ and $\mu,l>0$. Then there are positive constants $\tau(\lambda,\varLambda,N,\mu,l,\left\Vert f\right\Vert _{L^{\infty}(B_{1})})$ and $\theta(\lambda,\varLambda,N,\mu,l)$ such that if \begin{equation}
\left|\left\{ x\in B_{\tau}:u\ge l\right\} \right|>\mu\left|B_{\tau}\right|,\label{eq:improvement of oscillation 1} \end{equation} then we have \[ u\geq\theta\quad\text{in }B_{\tau}. \] \end{lem}
\begin{proof} By the weak Harnack inequality (Lemma \ref{lem:weak harnack}) there exist constants $C_{1}(\lambda,\varLambda,N)$ and $q(\lambda,\varLambda,N)$ such that for any $\tau<1/(4\sqrt{N}),$ we have \begin{equation}
\inf_{B_{2\tau}}u\geq C_{1}\tau^{\frac{-N}{q}}\left(\int_{B_{\tau}}\left|u\right|^{q}\,d x\right)^{1/q}-\tau\left(\int_{B_{4\sqrt{N}\tau}}\left|f\right|^{N}\,d x\right)^{1/N}.\label{eq:improvement of oscillation 2} \end{equation} In particular, this holds for \[
\tau:=\min\left(\frac{1}{4\sqrt{N}},\sqrt{\frac{C_{1}\left|B_{1}\right|^{\frac{1}{q}-\frac{1}{N}}\mu^{\frac{1}{q}}l}{2\cdot4\sqrt{N}(\left\Vert f\right\Vert _{L^{\infty}(B_{1})}+1)}}\right). \] We continue the estimate (\ref{eq:improvement of oscillation 2}) using the assumption (\ref{eq:improvement of oscillation 1}) and obtain \begin{align*}
\inf_{B_{\tau}}u\geq\inf_{B_{2\tau}}u & \geq C_{1}\tau^{-\frac{N}{q}}\left(\left|\left\{ x\in B_{\tau}:u\geq l\right\} \right|l^{q}\right)^{1/q}-\tau\left(\int_{B_{4\sqrt{N}\tau}}\left|f\right|^{N}\,d x\right)^{1/N}\\
& \geq C_{1}\tau^{-\frac{N}{q}}\mu^{\frac{1}{q}}\left|B_{\tau}\right|^{\frac{1}{q}}l-\tau\left|B_{4\sqrt{N}\tau}\right|^{\frac{1}{N}}\left\Vert f\right\Vert _{L^{\infty}(B_{1})}\\
& =C_{1}\left|B_{1}\right|^{\frac{1}{q}}\mu^{\frac{1}{q}}l\tau^{-\frac{N}{q}}\tau^{\frac{N}{q}}-4\sqrt{N}\left|B_{1}\right|^{\frac{1}{N}}\left\Vert f\right\Vert _{L^{\infty}(B_{1})}\tau^{2}\\
& =C_{1}\left|B_{1}\right|^{\frac{1}{q}}\mu^{\frac{1}{q}}l-4\sqrt{N}\left|B_{1}\right|^{\frac{1}{N}}\left\Vert f\right\Vert _{L^{\infty}(B_{1})}\tau^{2}\\
& \geq\frac{1}{2}C_{1}\left|B_{1}\right|^{\frac{1}{q}}\mu^{\frac{1}{q}}l=:\theta, \end{align*} where the last inequality follows from the choice of $\tau$. \end{proof}
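The choice of $\tau$ in the preceding proof can be sanity-checked numerically. The sketch below uses arbitrary sample values for $C_{1}$, $q$, $\mu$, $l$ and $\left\Vert f\right\Vert _{L^{\infty}}$ (these numbers are illustrative, not from the text) and verifies that the forcing term is indeed absorbed into half of the main term.

```python
import math

def ball_volume(N):
    # |B_1| in R^N: pi^{N/2} / Gamma(N/2 + 1)
    return math.pi ** (N / 2) / math.gamma(N / 2 + 1)

def tau_choice(N, q, mu, l, C1, f_inf):
    """The radius tau from the improvement-of-oscillation lemma:
    min(1/(4 sqrt(N)), sqrt(C1 |B_1|^{1/q-1/N} mu^{1/q} l / (8 sqrt(N)(||f||+1))))."""
    B1 = ball_volume(N)
    return min(1 / (4 * math.sqrt(N)),
               math.sqrt(C1 * B1 ** (1 / q - 1 / N) * mu ** (1 / q) * l
                         / (2 * 4 * math.sqrt(N) * (f_inf + 1))))

# check:  C1 |B_1|^{1/q} mu^{1/q} l - 4 sqrt(N) |B_1|^{1/N} ||f|| tau^2
#         >= (1/2) C1 |B_1|^{1/q} mu^{1/q} l
N, q, mu, l, C1, f_inf = 3, 0.5, 0.5, 0.3, 0.1, 2.0
B1 = ball_volume(N)
tau = tau_choice(N, q, mu, l, C1, f_inf)
main = C1 * B1 ** (1 / q) * mu ** (1 / q) * l
err = 4 * math.sqrt(N) * B1 ** (1 / N) * f_inf * tau ** 2
assert main - err >= 0.5 * main
```

The margin comes from the factor $\left\Vert f\right\Vert /(\left\Vert f\right\Vert +1)<1$ hidden in the definition of $\tau$, so the inequality holds for any choice of the sample constants.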
We are now ready to prove an improvement of oscillation for the gradient of a solution to (\ref{eq:regularized homogeneous}). We begin with the following lemma, where the improvement is obtained in a fixed direction. We also initially restrict the range of $\left|q\right|$.
The idea is to differentiate the equation and observe that a suitable function of $Du$ is a supersolution to the linear equation (\ref{eq:linear equation}). Lemma \ref{lem:imposc linear} is then applied to obtain information about $Du$. \begin{lem}[Improvement of oscillation to direction] \label{lem:imposc dir} Suppose that $p$ is smooth. Let $u$ be a smooth solution to (\ref{eq:regularized homogeneous}) in $B_{1}$
with $\left|Du\right|\leq1$ and either $q=0$ or $\left|q\right|>2$. Then for every $0<l<1$ and $\mu>0$ there exist positive constants $\tau(N,\hat{p},l,\mu)<1$ and $\gamma(N,\hat{p},l,\mu)<1$ such that \[
\left|\left\{ x\in B_{\tau}:Du\cdot d\leq l\right\} \right|>\mu\left|B_{\tau}\right|\quad\text{implies\ensuremath{\quad}}Du\cdot d\leq\gamma\text{ in }B_{\tau} \] whenever $d\in\partial B_{1}$. \end{lem}
\begin{proof} To simplify notation, we set \begin{align*}
A_{ij}(x,\eta) & :=\delta_{ij}+(p(x)-2)\frac{(\eta_{i}+q_{i})(\eta_{j}+q_{j})}{\left|\eta+q\right|^{2}+\varepsilon^{2}}. \end{align*} We also denote the functions $\mathcal{A}_{ij}:x\mapsto A_{ij}(x,Du(x))$, $\mathcal{A}_{ij,x_{k}}:x\mapsto(\partial_{x_{k}}A_{ij})(x,Du(x))$ and $\mathcal{A}_{ij,\eta_{k}}:x\mapsto(\partial_{\eta_{k}}A_{ij})(x,Du(x))$. Then, since $u$ is a smooth solution to (\ref{eq:regularized homogeneous}) in $B_{1}$, we have in Einstein's summation convention \[ -\mathcal{A}_{ij}u_{ij}=0\quad\text{pointwise in }B_{1}. \] Differentiating this yields \begin{align} 0=(\mathcal{A}_{ij}u_{ij})_{k} & =\mathcal{A}_{ij}u_{ijk}+(\mathcal{A}_{ij})_{k}u_{ij}\nonumber \\
& =\mathcal{A}_{ij}u_{ijk}+\mathcal{A}_{ij,\eta_{m}}u_{ij}u_{km}+\mathcal{A}_{ij,x_{k}}u_{ij}\quad\text{for all }k=1,\ldots N.\label{eq:imposc dir 2} \end{align} Multiplying these identities by $d_{k}$ and summing over $k$, we obtain \begin{align} 0 & =\mathcal{A}_{ij}u_{ijk}d_{k}+\mathcal{A}_{ij,\eta_{m}}u_{ij}u_{km}d_{k}+\mathcal{A}_{ij,x_{k}}u_{ij}d_{k}\nonumber \\
& =\mathcal{A}_{ij}(Du\cdot d-l)_{ij}+\mathcal{A}_{ij,\eta_{m}}u_{ij}(Du\cdot d-l)_{m}+\mathcal{A}_{ij,x_{k}}u_{ij}d_{k}.\label{eq:imposc dir 3} \end{align} Moreover, multiplying (\ref{eq:imposc dir 2}) by $2u_{k}$ and summing over $k$, we obtain \begin{align} 0 & =2\mathcal{A}_{ij}u_{ijk}u_{k}+2\mathcal{A}_{ij,\eta_{m}}u_{ij}u_{km}u_{k}+2\mathcal{A}_{ij,x_{k}}u_{ij}u_{k}\nonumber \\
& =\mathcal{A}_{ij}(2u_{ijk}u_{k}+2u_{kj}u_{ki})-2\mathcal{A}_{ij}u_{kj}u_{ki}+2\mathcal{A}_{ij,\eta_{m}}u_{ij}u_{km}u_{k}+2\mathcal{A}_{ij,x_{k}}u_{ij}u_{k}\nonumber \\
& =\mathcal{A}_{ij}(u_{k}^{2})_{ij}-2\mathcal{A}_{ij}u_{kj}u_{ki}+\mathcal{A}_{ij,\eta_{m}}u_{ij}(u_{k}^{2})_{m}+2\mathcal{A}_{ij,x_{k}}u_{ij}u_{k}\nonumber \\
& =\mathcal{A}_{ij}(\left|Du\right|^{2})_{ij}+\mathcal{A}_{ij,\eta_{m}}u_{ij}(\left|Du\right|^{2})_{m}+2\mathcal{A}_{ij,x_{k}}u_{ij}u_{k}-2\mathcal{A}_{ij}u_{kj}u_{ki}.\label{eq:imposc dir 4} \end{align}
We will now split the proof into the cases $q=0$ and $\left|q\right|>2$, and proceed in two steps: First we check that a suitable function of $Du$ is a supersolution to the linear equation (\ref{eq:linear equation}) and then apply Lemma \ref{lem:imposc linear} to obtain the claim.
\textbf{Case $q=0$, Step 1: }We denote $\Omega_{+}:=\left\{ x\in B_{1}:h(x)>0\right\} $, where \[
h:=(Du\cdot d-l+\frac{l}{2}\left|Du\right|^{2})^{+}. \]
If $\left|Du\right|\leq l/2$, we have \[
Du\cdot d-l+\frac{l}{2}\left|Du\right|^{2}\leq-\frac{l}{2}+\frac{l^{3}}{8}<0. \]
This implies that $\left|Du\right|>l/2$ in $\Omega_{+}$. Therefore, since $q=0$, we have in $\Omega_{+}$ \begin{align}
\left|\mathcal{A}_{ij,\eta_{m}}\right| & =\left|p(x)-2\right|\left|\frac{\delta_{im}(u_{j}+q_{j})+\delta_{jm}(u_{i}+q_{i})}{\left|Du+q\right|^{2}+\varepsilon^{2}}-\frac{2(u_{m}+q_{m})(u_{i}+q_{i})(u_{j}+q_{j})}{(\left|Du+q\right|^{2}+\varepsilon^{2})^{2}}\right|\nonumber \\
& \leq8l^{-1}\left\Vert p-2\right\Vert _{L^{\infty}(B_{1})},\label{eq:imposc dir 5}\\
\left|\mathcal{A}_{ij,x_{k}}\right| & \leq\left|Dp(x)\right|\left|\frac{(\eta_{i}+q_{i})(\eta_{j}+q_{j})}{\left|\eta+q\right|^{2}+\varepsilon^{2}}\right|\leq p_{L}.\label{eq:imposc dir 6} \end{align} Summing up the equations (\ref{eq:imposc dir 3}) and (\ref{eq:imposc dir 4}) multiplied by $2^{-1}l$, we obtain in $\Omega_{+}$ \begin{align*} 0=\ & \mathcal{A}_{ij}(Du\cdot d-l)_{ij}+\mathcal{A}_{ij,\eta_{m}}u_{ij}(Du\cdot d-l)_{m}+\mathcal{A}_{ij,x_{k}}u_{ij}d_{k}\\
& +2^{-1}l\big(\mathcal{A}_{ij}(\left|Du\right|^{2})_{ij}+\mathcal{A}_{ij,\eta_{m}}u_{ij}(\left|Du\right|^{2})_{m}+2\mathcal{A}_{ij,x_{k}}u_{ij}u_{k}-2\mathcal{A}_{ij}u_{kj}u_{ki}\big)\\ =\ & \mathcal{A}_{ij}h_{ij}+\mathcal{A}_{ij,\eta_{m}}u_{ij}h_{m}+\mathcal{A}_{ij,x_{k}}u_{ij}d_{k}+l\mathcal{A}_{ij,x_{k}}u_{ij}u_{k}-l\mathcal{A}_{ij}u_{kj}u_{ki}\\
\leq\ & \mathcal{A}_{ij}h_{ij}+\left|\mathcal{A}_{ij,\eta_{m}}u_{ij}\right|\left|h_{m}\right|+\left|\mathcal{A}_{ij,x_{k}}u_{ij}\right|\left|d_{k}+lu_{k}\right|-l\mathcal{A}_{ij}u_{kj}u_{ki}. \end{align*}
Since $\left|Du\right|\leq1$, we have $\left|d_{k}+lu_{k}\right|^{2}\leq4$
and by uniform ellipticity $\mathcal{A}_{ij}u_{kj}u_{ki}\geq\min(p_{\min}-1,1)\left|u_{ij}\right|^{2}$. Therefore, by applying Young's inequality with $\epsilon>0$, we obtain from the above display \begin{align*}
0 & \leq\mathcal{A}_{ij}h_{ij}+N^{2}\epsilon^{-1}(\left|h_{m}\right|^{2}+\left|d_{k}+lu_{k}\right|^{2})+\epsilon(\left|\mathcal{A}_{ij,\eta_{m}}\right|^{2}+\left|\mathcal{A}_{ij,x_{k}}\right|^{2})\left|u_{ij}\right|^{2}-l\mathcal{A}_{ij}u_{kj}u_{ki}\\
& \leq\mathcal{A}_{ij}h_{ij}+N^{2}\epsilon^{-1}(\left|Dh\right|^{2}+4)+\epsilon C(N,\hat{p})(l^{-2}+1)\left|u_{ij}\right|^{2}-l\min(p_{\text{min}}-1,1)\left|u_{ij}\right|^{2}, \end{align*} where in the second estimate we used (\ref{eq:imposc dir 5}) and (\ref{eq:imposc dir 6}). By taking $\epsilon$ small enough, we obtain \begin{equation}
0\leq\mathcal{A}_{ij}h_{ij}+C_{0}(N,\hat{p})\frac{\left|Dh\right|^{2}+1}{l^{3}}\quad\text{in }\Omega_{+}.\label{eq:imposc dir 7} \end{equation} Next we define \begin{equation} \overline{h}:=\frac{1}{\nu}(1-e^{\nu(h-H)}),\quad\text{where}\quad H:=1-\frac{l}{2}\quad\text{and}\quad\nu:=\frac{C_{0}}{l^{3}\min(p_{\text{min}}-1,1)}.\label{eq:imposc dir 10} \end{equation} Then by (\ref{eq:imposc dir 7}) and uniform ellipticity we have in $\Omega_{+}$ \begin{align*} -\mathcal{A}_{ij}\overline{h}{}_{ij} & =\mathcal{A}_{ij}(h_{ij}e^{\nu(h-H)}+\nu h_{i}h_{j}e^{\nu(h-H)})\\
& \geq e^{\nu(h-H)}(-C_{0}\frac{\left|Dh\right|^{2}}{l^{3}}-\frac{C_{0}}{l^{3}}+\nu\min(p_{\text{min}}-1,1)\left|Dh\right|^{2})\\
& \geq-\frac{C_{0}}{l^{3}}. \end{align*} Since the minimum of two viscosity supersolutions is still a viscosity supersolution, it follows from the above estimate that $\overline{h}$ is a non-negative viscosity supersolution to \begin{equation} -\mathcal{A}_{ij}\overline{h}_{ij}\ge\frac{-C_{0}}{l^{3}}\quad\text{in }B_{1}.\label{eq:imposc dir 8} \end{equation}
\textbf{Case $q=0$, Step 2: }We set $l_{0}:=\frac{1}{\nu}(1-e^{\nu(l-1)})$. Then, since $\overline{h}$ solves (\ref{eq:imposc dir 8}), by Lemma \ref{lem:imposc linear} there are positive constants $\tau(N,\hat{p},l,\mu)$ and $\theta(N,\hat{p},l,\mu)$ such that \[
\left|\left\{ x\in B_{\tau}:\overline{h}\geq l_{0}\right\} \right|>\mu\left|B_{\tau}\right|\quad\text{implies}\quad\overline{h}\geq\theta\quad\text{in }B_{\tau}. \] If $Du\cdot d\leq l$, we have $\overline{h}\geq l_{0}$ and therefore \[
\left|\left\{ x\in B_{\tau}:\overline{h}\geq l_{0}\right\} \right|\geq\left|\left\{ x\in B_{\tau}:Du\cdot d\leq l\right\} \right|>\mu\left|B_{\tau}\right|, \] where the last inequality follows from the assumptions. Consequently, we obtain \[ \overline{h}\geq\theta\quad\text{in }B_{\tau}. \] Since $h-H\leq0$, by convexity we have $H-h\geq\overline{h}$. This together with the above estimate yields \[
1-2^{-1}l-(Du\cdot d-l+2^{-1}l\left|Du\right|^{2})\geq\theta\quad\text{in }B_{\tau} \] and so \[
Du\cdot d+2^{-1}l(Du\cdot d)^{2}\leq Du\cdot d+2^{-1}l\left|Du\right|^{2}\leq1+2^{-1}l-\theta\quad\text{in }B_{\tau}. \] Using the quadratic formula, we thus obtain the desired estimate \[ Du\cdot d\leq\frac{-1+\sqrt{1+2l(1+2^{-1}l-\theta)}}{l}=\frac{-1+\sqrt{(1+l)^{2}-2l\theta}}{l}=:\gamma<1\quad\text{in }B_{\tau}. \]
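The last algebraic step is elementary but easy to slip on, so here is a quick check of the identity and of the resulting bound (sample values of $l$ and $\theta$; purely illustrative):

```python
import math

def gamma_bound(l, theta):
    # positive root of  x + (l/2) x^2 = 1 + l/2 - theta  (quadratic formula)
    return (-1.0 + math.sqrt((1.0 + l) ** 2 - 2.0 * l * theta)) / l

l, theta = 0.3, 0.05
# identity used in the text: 1 + 2l(1 + l/2 - theta) = (1 + l)^2 - 2l theta
assert abs((1 + 2 * l * (1 + l / 2 - theta)) - ((1 + l) ** 2 - 2 * l * theta)) < 1e-12
g = gamma_bound(l, theta)
assert g < 1.0                                          # strict improvement over |Du| <= 1
assert abs(g + (l / 2) * g ** 2 - (1 + l / 2 - theta)) < 1e-12  # g saturates the constraint
```

Since $\theta>0$, the bound $\gamma$ is strictly below $1$, which is exactly the improvement of oscillation the lemma asserts.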
\textbf{Case $\left|q\right|>2$: }Computing like in (\ref{eq:imposc dir 5}) and (\ref{eq:imposc dir 6}), we obtain this time in $B_{1}$ \[
\left|\mathcal{A}_{ij,\eta_{m}}\right|\leq4\left\Vert p-2\right\Vert _{L^{\infty}(B_{1})}\quad\text{and}\quad\left|\mathcal{A}_{ij,x_{k}}\right|\leq p_{L}. \] Moreover, this time we simply set \[
h:=Du\cdot d-l+2^{-1}l\left|Du\right|^{2}. \] Summing up the identities (\ref{eq:imposc dir 3}) and (\ref{eq:imposc dir 4})
and using Young's inequality as in the case $q=0$, we obtain in $B_{1}$ \begin{align*}
0 & \leq\mathcal{A}_{ij}h_{ij}+N^{2}\epsilon^{-1}(\left|h_{m}\right|^{2}+\left|d_{k}+lu_{k}\right|^{2})+\epsilon(\left|\mathcal{A}_{ij,\eta_{m}}\right|^{2}+\left|\mathcal{A}_{ij,x_{k}}\right|^{2})\left|u_{ij}\right|^{2}-l\mathcal{A}_{ij}u_{kj}u_{ki}\\
& \leq\mathcal{A}_{ij}h_{ij}+N^{2}\epsilon^{-1}(\left|Dh\right|^{2}+4)+\epsilon C(\hat{p})\left|u_{ij}\right|^{2}-lC(\hat{p})\left|u_{ij}\right|^{2}. \end{align*} By taking small enough $\epsilon$, we obtain \[
0\leq\mathcal{A}_{ij}h_{ij}+C_{0}(N,\hat{p})\frac{\left|Dh\right|^{2}+1}{l}\quad\text{in }B_{1}. \] Next we define $\overline{h}$ and $H$ like in (\ref{eq:imposc dir 10}), but set instead $\nu:=C_{0}/(l\min(p_{\min}-1,1))$. The rest of the proof then proceeds in the same way as in the case $q=0$.\begin{details} \[ \overline{h}:=\frac{1}{\nu}(1-e^{\nu(h-H)}),\quad\text{where}\quad H:=1-\frac{l}{2}\quad\text{and}\quad\nu:=\frac{C_{0}}{l\min(p_{\text{min}}-1,1)}. \] Then we have in $B_{1}$ \begin{align} -A_{ij}\overline{h}_{ij} & =A_{ij}(h_{ij}e^{\nu(h-H)}+\nu h_{i}h_{j}e^{\nu(h-H)})\nonumber \\
& \geq e^{\nu(h-H)}(-C_{0}\frac{\left|Dh\right|^{2}}{l}-C_{0}+\nu\min(p_{\text{min}}-1,1)\left|Dh\right|^{2})\nonumber \\
& \geq-C_{0}.\label{eq:imposc dir 9} \end{align}
\textbf{(Case $\left|q\right|>2$, Step 2)} We set $l_{0}:=\frac{1}{\nu}(1-e^{\nu(l-1)})$. Then, since $\overline{h}$ solves (\ref{eq:imposc dir 9}), by Lemma \ref{lem:imposc linear} there are positive constants $\tau(N,\hat{p},l,\mu)$ and $\theta(N,\hat{p},l,\mu)$ such that \[
\left|\left\{ x\in B_{\tau}:\overline{h}\geq l_{0}\right\} \right|>\mu\left|B_{\tau}\right|\quad\text{implies}\quad\overline{h}\ge\theta\quad\text{in }B_{\tau}. \] If $Du\cdot d\leq l$, we have $\overline{h}\geq l_{0}$ and therefore \[
\left|\left\{ x\in B_{\tau}:\overline{h}\geq l_{0}\right\} \right|\geq\left|\left\{ x\in B_{\tau}:Du\cdot d\leq l\right\} \right|>\mu\left|B_{\tau}\right|, \] where the last estimate follows from the assumptions. Consequently we obtain \[ \overline{h}\geq\theta\quad\text{in }B_{\tau}. \] Since $h-H\leq0$, by convexity we have $H-h\geq\overline{h}.$ This together with the above estimate yields \[
1-2^{-1}l-(Du\cdot d-l+2^{-1}l\left|Du\right|^{2})\geq\theta\quad\text{in }B_{\tau} \] and so \[
Du\cdot d+2^{-1}l(Du\cdot d)^{2}\leq Du\cdot d+2^{-1}l\left|Du\right|^{2}\leq1+2^{-1}l-\theta\quad\text{in }B_{\tau}. \] \end{details} \end{proof} Next we inductively apply the previous lemma to prove the improvement of oscillation. \begin{thm}[Improvement of oscillation]
\label{thm:imposc} Suppose that $p$ is smooth. Let $u$ be a smooth solution to (\ref{eq:regularized homogeneous}) in $B_{1}$ with $\left|Du\right|\leq1$
and either $q=0$ or $\left|q\right|>2$. Then for every $0<l<1$ and $\mu>0$ there exist positive constants $\tau(N,\hat{p},l,\mu)<1$ and $\gamma(N,\hat{p},l,\mu)<1$ such that if \begin{equation}
\left|\left\{ x\in B_{\tau^{i+1}}:Du\cdot d\leq l\gamma^{i}\right\} \right|>\mu\left|B_{\tau^{i+1}}\right|\quad\text{for all }d\in\partial B_{1},\ i=0,\ldots,k,\label{eq:imposc cnd} \end{equation} then \begin{equation}
\left|Du\right|\leq\gamma^{i+1}\quad\text{in }B_{\tau^{i+1}}\quad\text{for all }i=0,\ldots,k.\label{eq:imposc cnd2} \end{equation} \end{thm}
\begin{proof} Let $k\geq0$ be an integer and suppose that (\ref{eq:imposc cnd}) holds. We proceed by induction.
\textbf{Initial step:} Since (\ref{eq:imposc cnd}) holds for $i=0$, by Lemma \ref{lem:imposc dir} we have $Du\cdot d\leq\gamma$ in $B_{\tau}$ for all $d\in\partial B_{1}$. This implies (\ref{eq:imposc cnd2}) for $i=0$.
\textbf{Induction step:} Suppose that $0<i\leq k$ and that (\ref{eq:imposc cnd2}) holds for $i-1$. We define \[ v(x):=\tau^{-i}\gamma^{-i}u(\tau^{i}x). \] Then $v$ solves \[
-\Delta v-(p(\tau^{i}x)-2)\frac{\left\langle D^{2}v(Dv+\gamma^{-i}q),Dv+\gamma^{-i}q\right\rangle }{\left|Dv+\gamma^{-i}q\right|^{2}+(\gamma^{-i}\varepsilon)^{2}}=0\quad\text{in }B_{1}. \]
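For the reader's convenience, we verify the scaled equation: since $Dv(x)=\gamma^{-i}Du(\tau^{i}x)$ and $D^{2}v(x)=\tau^{i}\gamma^{-i}D^{2}u(\tau^{i}x)$, we have at the point $\tau^{i}x$
\[
Du+q=\gamma^{i}(Dv+\gamma^{-i}q)\quad\text{and}\quad\frac{\left\langle D^{2}u(Du+q),Du+q\right\rangle }{\left|Du+q\right|^{2}+\varepsilon^{2}}=\tau^{-i}\gamma^{i}\frac{\left\langle D^{2}v(Dv+\gamma^{-i}q),Dv+\gamma^{-i}q\right\rangle }{\left|Dv+\gamma^{-i}q\right|^{2}+(\gamma^{-i}\varepsilon)^{2}},
\]
so the equation for $v$ follows by multiplying the equation for $u$ by $\tau^{i}\gamma^{-i}$.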
Moreover, by induction hypothesis $\left|Dv(x)\right|=\gamma^{-i}\left|Du(\tau^{i}x)\right|\leq\gamma^{-i}\gamma^{i}=1$ in $B_{1}$. Therefore by Lemma \ref{lem:imposc dir} we have that \begin{equation}
\left|\left\{ x\in B_{\tau}:Dv\cdot d\leq l\right\} \right|>\mu\left|B_{\tau}\right|\quad\text{implies}\quad Dv\cdot d\leq\gamma\text{ in }B_{\tau}\label{eq:imposc 1} \end{equation} whenever $d\in\partial B_{1}$. Since \[
\left|\left\{ x\in B_{\tau}:Dv\cdot d\leq l\right\} \right|>\mu\left|B_{\tau}\right|\iff\left|\left\{ x\in B_{\tau^{i+1}}:Du\cdot d\leq l\gamma^{i}\right\} \right|>\mu\left|B_{\tau^{i+1}}\right|, \] we have by (\ref{eq:imposc cnd}) and (\ref{eq:imposc 1}) that $Dv\cdot d\leq\gamma$ in $B_{\tau}$. This implies that $Du\cdot d\leq\gamma^{i+1}$ in $B_{\tau^{i+1}}$. Since $d\in\partial B_{1}$ was arbitrary, we obtain (\ref{eq:imposc cnd2}) for $i$. \end{proof}
\subsection{Hölder gradient estimates}
In this section we apply the improvement of oscillation to prove $C^{1,\alpha}$-estimates for solutions to (\ref{eq:regularized homogeneous}). We need the following regularity result by Savin \cite{savin07}. \begin{lem} \label{lem:small perturbation}Suppose that $p$ is smooth. Let $u$ be a smooth solution to (\ref{eq:regularized homogeneous}) in $B_{1}$
with $\left|Du\right|\leq1$ and either $q=0$ or $\left|q\right|>2$. Then for any $\beta>0$ there exist positive constants $\eta(N,\hat{p},\beta)$ and $C(N,\hat{p},\beta)$ such that if \[
\left|u-L\right|\leq\eta\quad\text{in }B_{1} \]
for some affine function $L$ satisfying $1/2\leq\left|DL\right|\leq1$, then we have \[
\left|Du(x)-Du(0)\right|\leq C\left|x\right|^{\beta}\quad\text{for all }x\in B_{1/2}. \] \end{lem}
\begin{proof} Set $v:=u-L$. Then $v$ solves \begin{equation}
-\Delta v-\frac{(p(x)-2)\left\langle D^{2}v(Dv+q+DL),Dv+q+DL\right\rangle }{\left|Dv+q+DL\right|^{2}+\varepsilon^{2}}=0\quad\text{in }B_{1}.\label{eq:small perturbation} \end{equation}
Observe that by the assumption $1/2\leq\left|DL\right|\leq1$ we have
$\left|Dv+q+DL\right|\geq1/4$ if $\left|Dv\right|\leq1/4$. It therefore follows from \cite[Theorem 1.3]{savin07} (see also \cite{wang13}) that $\left\Vert v\right\Vert _{C^{2,\beta}(B_{1/2})}\leq C$, which implies the claim. \end{proof} We also use the following simple consequence of Morrey's inequality. \begin{lem}
\label{lem:morrey lemma}Let $u:B_{1}\rightarrow\mathbb{R}$ be a smooth function with $\left|Du\right|\leq1$. For any $\theta>0$ there are constants $\varepsilon_{1}(N,\theta),\varepsilon_{0}(N,\theta)<1$ such that if the condition \[
\left|\left\{ x\in B_{1}:\left|Du-d\right|>\varepsilon_{0}\right\} \right|\leq\varepsilon_{1} \] is satisfied for some $d\in S^{N-1}$, then there is $a\in\mathbb{R}$ such that \[
\left|u(x)-a-d\cdot x\right|\leq\theta\text{ for all }x\in B_{1/2}. \] \end{lem}
\begin{proof} By Morrey's inequality (see for example\ \cite[Theorem 4.10]{measuretheoryevans}) \begin{align*}
\underset{x\in B_{1/2}}{\operatorname{osc}}(u(x)-d\cdot x) & =\sup_{x,y\in B_{1/2}}\left|u(x)-d\cdot x-u(y)+d\cdot y\right|\\
& \leq C(N)\Big(\int_{B_{1}}\left|Du-d\right|^{2N}\,d x\Big)^{\frac{1}{2N}}\\
& \leq C(N)(\varepsilon_{1}^{\frac{1}{2N}}+\varepsilon_{0}). \end{align*} Therefore, denoting $a:=\inf_{x\in B_{1/2}}(u(x)-d\cdot x)$, we have for any $x\in B_{1/2}$ \[
\left|u(x)-a-d\cdot x\right|\leq\operatorname{osc}_{B_{1/2}}(u(x)-d\cdot x)\leq C(N)(\varepsilon_{1}^{\frac{1}{2N}}+\varepsilon_{0})\leq\theta, \] where the last inequality follows by taking small enough $\varepsilon_{0}$ and $\varepsilon_{1}$. \end{proof}
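For the reader's convenience, we record the splitting behind the second inequality in the proof above: since $\left|Du\right|\leq1$ and $\left|d\right|=1$ give $\left|Du-d\right|\leq2$, we have
\[
\int_{B_{1}}\left|Du-d\right|^{2N}\,dx\leq2^{2N}\left|\left\{ x\in B_{1}:\left|Du-d\right|>\varepsilon_{0}\right\} \right|+\varepsilon_{0}^{2N}\left|B_{1}\right|\leq2^{2N}\varepsilon_{1}+\varepsilon_{0}^{2N}\left|B_{1}\right|,
\]
and the subadditivity $(a+b)^{\frac{1}{2N}}\leq a^{\frac{1}{2N}}+b^{\frac{1}{2N}}$ then yields the bound $C(N)(\varepsilon_{1}^{\frac{1}{2N}}+\varepsilon_{0})$.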
We are now ready to prove a Hölder estimate for the gradient of solutions to (\ref{eq:regularized homogeneous}). We first restrict the range of $\left|q\right|$. \begin{lem} \label{thm:regularized apriori} Suppose that $p$ is smooth. Let $u$ be a smooth solution to (\ref{eq:regularized homogeneous}) in
$B_{1}$ with $\left|Du\right|\leq1$ and either $q=0$ or $\left|q\right|>2$. Then there exists a constant $\alpha(N,\hat{p})\in(0,1)$ such that \[ \left\Vert Du\right\Vert _{C^{\alpha}(B_{1/2})}\leq C(N,\hat{p}). \] \end{lem}
\begin{proof} For $\beta=1/2,$ let $\eta>0$ be as in Lemma \ref{lem:small perturbation}. For $\theta=\eta/2$, let $\varepsilon_{0},\varepsilon_{1}$ be as in Lemma \ref{lem:morrey lemma}. Set \[
l:=1-\frac{\varepsilon_{0}^{2}}{2}\quad\text{and}\quad\mu:=\frac{\varepsilon_{1}}{\left|B_{1}\right|}. \] For these $l$ and $\mu$, let $\tau,\gamma\in(0,1)$ be as in Theorem \ref{thm:imposc}. Let $k\geq0$ be the minimum integer such that the condition (\ref{eq:imposc cnd}) does not hold.
\textbf{Case $k=\infty$:} Theorem \ref{thm:imposc} implies that \[
\left|Du\right|\leq\gamma^{i+1}\quad\text{in }B_{\tau^{i+1}}\text{ for all }i\geq0. \]
Let $x\in B_{\tau}\setminus\left\{ 0\right\} $. Then $\tau^{i+1}\leq\left|x\right|\leq\tau^{i}$ for some $i\geq0$ and therefore \[
i\leq\frac{\log\left|x\right|}{\log\tau}\leq i+1. \] We obtain \begin{equation}
\left|Du(x)\right|\leq\gamma^{i}=\frac{1}{\gamma}\gamma^{i+1}\leq\frac{1}{\gamma}\gamma^{\frac{\log\left|x\right|}{\log\tau}}=\frac{1}{\gamma}\gamma^{\frac{\log\left|x\right|}{\log\gamma}\cdot\frac{\log\gamma}{\log\tau}}=:C\left|x\right|^{\alpha},\label{eq:regularized apriori 0} \end{equation} where $C=1/\gamma$ and $\alpha=\log\gamma/\log\tau$.
\textbf{Case $k<\infty$:} There is $d\in\partial B_{1}$ such that \begin{equation}
\left|\left\{ x\in B_{\tau^{k+1}}:Du\cdot d\leq l\gamma^{k}\right\} \right|\leq\mu\left|B_{\tau^{k+1}}\right|.\label{eq:regularized apriori 1} \end{equation} We set \[ v(x):=\tau^{-k-1}\gamma^{-k}u(\tau^{k+1}x). \] Then $v$ solves \[
-\Delta v-(p(\tau^{k+1}x)-2)\frac{\left\langle D^{2}v(Dv+\gamma^{-k}q),Dv+\gamma^{-k}q\right\rangle }{\left|Dv+\gamma^{-k}q\right|^{2}+\gamma^{-2k}\varepsilon^{2}}=0\quad\text{in }B_{1} \] and by (\ref{eq:regularized apriori 1}) we have \begin{align}
\left|\left\{ x\in B_{1}:Dv\cdot d\leq l\right\} \right| & =\left|\left\{ x\in B_{1}:Du(\tau^{k+1}x)\cdot d\leq l\gamma^{k}\right\} \right|\nonumber \\
& =\tau^{-N(k+1)}\left|\left\{ x\in B_{\tau^{k+1}}:Du(x)\cdot d\leq l\gamma^{k}\right\} \right|\nonumber \\
& \leq\tau^{-N(k+1)}\mu\left|B_{\tau^{k+1}}\right|=\mu\left|B_{1}\right|=\varepsilon_{1}.\label{eq:regularized apriori 2} \end{align}
Since either $k=0$ or (\ref{eq:imposc cnd}) holds for $k-1$, it follows from Theorem \ref{thm:imposc} that $\left|Du\right|\leq\gamma^{k}$ in $B_{\tau^{k}}$. Thus \begin{equation}
\left|Dv(x)\right|=\gamma^{-k}\left|Du(\tau^{k+1}x)\right|\leq1\quad\text{in }B_{1}.\label{eq:regularized apriori 3} \end{equation} For vectors $\xi\in\overline{B}_{1}$ and $d\in\partial B_{1}$, it is easy to verify the following fact \[
\left|\xi-d\right|>\varepsilon_{0}\implies\xi\cdot d\leq1-\varepsilon_{0}^{2}/2=l, \] which follows by expanding $\varepsilon_{0}^{2}<\left|\xi-d\right|^{2}=\left|\xi\right|^{2}-2\xi\cdot d+\left|d\right|^{2}\leq2-2\xi\cdot d$. Therefore, in view of (\ref{eq:regularized apriori 2}) and (\ref{eq:regularized apriori 3}), we obtain \[
\left|\left\{ x\in B_{1}:\left|Dv-d\right|>\varepsilon_{0}\right\} \right|\leq\varepsilon_{1}. \] Thus by Lemma \ref{lem:morrey lemma} there is $a\in\mathbb{R}$ such that \[
\left|v(x)-a-d\cdot x\right|\leq\theta=\eta/2\quad\text{for all }x\in B_{1/2}. \] Consequently, by applying Lemma \ref{lem:small perturbation} on the function $2v(2^{-1}x)$, we find a positive constant $C(N,\hat{p})$ and $e\in\partial B_{1}$ such that \[
\left|Dv(x)-e\right|\leq C\left|x\right|\quad\text{in }B_{1/4}. \]
Since $\left|Dv\right|\leq1$, after enlarging $C$ if necessary we also have \[
\left|Dv(x)-e\right|\leq C\left|x\right|\quad\text{in }B_{1}. \] Recalling the definition of $v$ and taking $\alpha^{\prime}\in(0,1)$ so small that $\gamma/\tau^{\alpha^{\prime}}<1$ we obtain \begin{equation}
\left|Du(x)-\gamma^{k}e\right|\leq C\gamma^{k}\tau^{-k-1}\left|x\right|\leq\frac{C}{\tau^{\alpha^{\prime}}}\left(\frac{\gamma}{\tau^{\alpha^{\prime}}}\right)^{k}\left|x\right|^{\alpha^{\prime}}\leq C\left|x\right|^{\alpha^{\prime}}\quad\text{in }B_{\tau^{k+1}},\label{eq:regularized apriori 4} \end{equation} where we absorbed $\tau^{\alpha^{\prime}}$ into the constant. On the other hand, we have \[
\left|Du\right|\leq\gamma^{i+1}\quad\text{in }B_{\tau^{i+1}}\text{ for all }i=0,\ldots,k-1 \]
so that, if $\tau^{i+2}\leq\left|x\right|\leq\tau^{i+1}$ for some $i\in\left\{ 0,\ldots,k-1\right\} $, it holds that \[
\left|Du(x)-\gamma^{k}e\right|\leq2\gamma^{i+1}\leq2\gamma^{i+1}\frac{\left|x\right|^{\alpha^{\prime}}}{\tau^{(i+2)\alpha^{\prime}}}\leq\frac{2}{\tau^{\alpha^{\prime}}}\left(\frac{\gamma}{\tau^{\alpha^{\prime}}}\right)^{i+1}\left|x\right|^{\alpha^{\prime}}\le C\left|x\right|^{\alpha^{\prime}}. \] Combining this with (\ref{eq:regularized apriori 4}) we obtain \begin{equation}
\left|Du(x)-\gamma^{k}e\right|\leq C\left|x\right|^{\alpha^{\prime}}\quad\text{in }B_{\tau}.\label{eq:regularized apriori 5} \end{equation}
The claim now follows from (\ref{eq:regularized apriori 0}) and (\ref{eq:regularized apriori 5}) by standard translation arguments. \begin{details}
Let $x_{0}\in B_{1}$. We set \[ v(x):=2u(\frac{1}{2}(x-x_{0})). \] Since $v$ solves \[
\Delta v+(p(\frac{1}{2}(x-x_{0}))-2)\frac{\left\langle D^{2}v(Dv+q),Dv+q\right\rangle }{\left|Dv+q\right|^{2}+\varepsilon^{2}}=0\quad\text{in }B_{1}, \] by Lemma \ref{thm:regularized apriori} there exists $C(p,N)$ and $\alpha(p,N)$ such that \[
\left|Dv(x)-Dv(0)\right|\leq C\left|x\right|^{\alpha}\quad\text{for all }x\in B_{1/2}. \] In other words, we have \[
\left|Du(\frac{1}{2}(x-x_{0}))-Du(\frac{1}{2}x_{0})\right|\leq C\left|x\right|^{\alpha}\quad\text{for all }x\in B_{1/2},x_{0}\in B_{1}, \] from which it follows that \[
\left|Du(x-x_{0})-Du(x_{0})\right|\le C\left|x\right|^{\alpha}\quad\text{for all }x\in B_{1/4},x_{0}\in B_{1/2}. \] This implies the Hölder estimate \[ \left\Vert Du\right\Vert _{C^{\alpha}(B_{1/2})}\leq C. \] \end{details} \end{proof} \begin{thm} \label{cor:h=0000F6lder estimate for regularized} Let $u$ be a bounded viscosity solution to (\ref{eq:regularized homogeneous}) in $B_{1}$ with $q\in\mathbb{R}^{N}$. Then \begin{equation} \left\Vert u\right\Vert _{C^{1,\alpha}(B_{1/2})}\leq C(N,\hat{p},\left\Vert u\right\Vert _{L^{\infty}(B_{1})})\label{eq:h=0000F6lder estimate for regularized 1} \end{equation} for some $\alpha(N,\hat{p})\in(0,1)$. \end{thm}
\begin{proof} Suppose first that $p$ is smooth. Let $\nu_{0}(N,\hat{p},\left\Vert u\right\Vert _{L^{\infty}(B_{1})})$ and $C_{0}(N,\hat{p},\left\Vert u\right\Vert _{L^{\infty}(B_{1})})$ be as in the Lipschitz estimate (Theorem \ref{thm:Lipschitz estimate} in the Appendix) and set \[ M:=2\max(\nu_{0},C_{0}). \]
If $\left|q\right|>M$, then by Theorem \ref{thm:Lipschitz estimate} we have \[
\left|Du\right|\leq C_{0}\quad\text{in }B_{1/2}. \]
We set $\tilde{u}(x):=2u(x/2)/C_{0}$. Then $\left|D\tilde{u}\right|\leq1$ in $B_{1}$ and $\tilde{u}$ solves \[
-\Delta\tilde{u}-(p(x/2)-2)\frac{\left\langle D^{2}\tilde{u}(D\tilde{u}+q/C_{0}),D\tilde{u}+q/C_{0}\right\rangle }{\left|D\tilde{u}+q/C_{0}\right|^{2}+(\varepsilon/C_{0})^{2}}=0\quad\text{in }B_{1}, \] where $\left|q/C_{0}\right|>2$. Thus by Theorem \ref{thm:regularized apriori} we have \[ \left\Vert D\tilde{u}\right\Vert _{C^{\alpha}(B_{1/2})}\leq C(N,\hat{p}), \] which implies (\ref{eq:h=0000F6lder estimate for regularized 1}) by standard translation arguments.
If $\left|q\right|\leq M$, we define \[ w:=u+q\cdot x, \] so that $w$ solves (\ref{eq:regularized homogeneous}) with $q=0$. Then by Theorem \ref{thm:Lipschitz estimate} we have \[
\left|Dw\right|\leq C(N,\hat{p},\left\Vert w\right\Vert _{L^{\infty}(B_{1})})=:C^{\prime}(N,\hat{p},\left\Vert u\right\Vert _{L^{\infty}(B_{1})})\quad\text{in }B_{1/2}. \]
We set $\tilde{w}(x):=2w(x/2)/C^{\prime}.$ Then $\left|D\tilde{w}\right|\leq1$ and so by Theorem \ref{thm:regularized apriori} we have \[ \left\Vert D\tilde{w}\right\Vert _{C^{\alpha}(B_{1/2})}\leq C(N,\hat{p}), \] which again implies (\ref{eq:h=0000F6lder estimate for regularized 1}).
Suppose then that $p$ is merely Lipschitz continuous. Take a sequence $p_{j}\in C^{\infty}(B_{1})$ such that $p_{j}\rightarrow p$ uniformly in $B_{1}$ and $\left\Vert Dp_{j}\right\Vert _{L^{\infty}(B_{1})}\leq\left\Vert Dp\right\Vert _{L^{\infty}(B_{1})}$. For $r<1$, let $u_{j}$ be a solution to the Dirichlet problem \[ \begin{cases}
-\Delta u_{j}-(p_{j}(x)-2)\frac{\left\langle D^{2}u_{j}(Du_{j}+q),Du_{j}+q\right\rangle }{\left|Du_{j}+q\right|^{2}+\varepsilon^{2}}=0 & \text{in }B_{r},\\ u_{j}=u & \text{on }\partial B_{r}. \end{cases} \] As observed in Proposition \ref{prop:c infty}, the solution exists and we have $u_{j}\in C^{\infty}(B_{r})$. By the comparison principle $\left\Vert u_{j}\right\Vert _{L^{\infty}(B_{r})}\leq\left\Vert u\right\Vert _{L^{\infty}(B_{1})}$. Then by the first part of the proof we have the estimate \[ \left\Vert u_{j}\right\Vert _{C^{1,\beta}(B_{r/2})}\leq C(N,\hat{p},\left\Vert u\right\Vert _{L^{\infty}(B_{1})}). \] By \cite[Theorem 4.14]{caffarelliCabre} the functions $u_{j}$ are equicontinuous in $B_{1}$ and so by the Ascoli-Arzela theorem we have $u_{j}\rightarrow v$ uniformly in $B_{1}$ up to a subsequence. Moreover, by the stability principle $v$ is a solution to (\ref{eq:regularized homogeneous}) in $B_{r}$ and thus by the comparison principle \cite[Theorem 2.6]{kawohlKutev07} we have $v\equiv u$. By extracting a further subsequence, we may ensure that also $Du_{j}\rightarrow Du$ uniformly in $B_{r/2}$, and so the estimate $\left\Vert u\right\Vert _{C^{1,\beta}(B_{r/2})}\leq C(N,\hat{p},\left\Vert u\right\Vert _{L^{\infty}(B_{1})})$ follows. \end{proof}
\section{Hölder gradient estimates for the regularized inhomogeneous equation\label{sec:sec 2}}
In this section we consider the inhomogeneous equation \begin{equation}
-\Delta u-(p(x)-2)\frac{\left\langle D^{2}u(Du+q),Du+q\right\rangle }{\left|Du+q\right|^{2}+\varepsilon^{2}}=f(x)\quad\text{in }B_{1},\label{eq:non-homogeneous reguralized} \end{equation} where $p:B_{1}\rightarrow\mathbb{R}$ is Lipschitz continuous, $p_{\min}>1$, $\varepsilon>0$, $q\in\mathbb{R}^{N}$ and $f\in C(B_{1})$ is bounded. We apply the $C^{1,\alpha}$-estimates obtained in Theorem \ref{cor:h=0000F6lder estimate for regularized} to prove regularity estimates for solutions of (\ref{eq:non-homogeneous reguralized}) with $q=0$. Our arguments are similar to those in \cite[Section 3]{attouchiParv}, see also \cite{imbertSilvestre12}. The idea is to use the well-known characterization of $C^{1,\alpha}$-regularity via affine approximations. The following lemma plays a key role: it states that if $f$ is small, then a solution to (\ref{eq:non-homogeneous reguralized}) can be approximated by an affine function. This combined with the scaling properties of the equation essentially yields the desired affine functions. \begin{lem} \label{lem:non-homogeneous regularized first lemma}There exist constants $\epsilon(N,\hat{p})$, $\tau(N,\hat{p})\in(0,1)$ such that the following holds: If $\left\Vert f\right\Vert _{L^{\infty}(B_{1})}\leq\epsilon$ and $w$ is a viscosity solution to (\ref{eq:non-homogeneous reguralized}) in $B_{1}$ with $q\in\mathbb{R}^{N}$, $w(0)=0$ and $\operatorname{osc}_{B_{1}}w\leq1$, then there exists $q^{\prime}\in\mathbb{R}^{N}$ such that \[ \operatorname{osc}_{B_{\tau}}(w(x)-q^{\prime}\cdot x)\leq\frac{1}{2}\tau. \]
Moreover, we have $\left|q^{\prime}\right|\le C(N,\hat{p})$. \end{lem}
\begin{proof} Suppose on the contrary that the claim does not hold. Then, for a fixed $\tau(N,\hat{p})$ that we will specify later, there exists a sequence of Lipschitz continuous functions $p_{j}:B_{1}\rightarrow\mathbb{R}$ such that \[ p_{\min}\leq\inf_{B_{1}}p_{j}\leq\sup_{B_{1}}p_{j}\leq p_{\max}\quad\text{and}\quad(p_{j})_{L}\leq p_{L}, \]
functions $f_{j}\in C(B_{1})$ such that $f_{j}\rightarrow0$ uniformly in $B_{1}$, vectors $q_{j}\in\mathbb{R}^{N}$ and viscosity solutions $w_{j}$ to \[
-\Delta w_{j}-(p_{j}(x)-2)\frac{\left\langle D^{2}w_{j}(Dw_{j}+q_{j}),Dw_{j}+q_{j}\right\rangle }{\left|Dw_{j}+q_{j}\right|^{2}+\varepsilon^{2}}=f_{j}(x)\quad\text{in }B_{1} \] such that $w_{j}(0)=0$, $\operatorname{osc}_{B_{1}}w_{j}\leq1$ and \begin{equation} \operatorname{osc}_{B_{\tau}}(w_{j}(x)-q^{\prime}\cdot x)>\frac{\tau}{2}\quad\text{for all }q^{\prime}\in\mathbb{R}^{N}.\label{eq:first 0} \end{equation} By \cite[Proposition 4.10]{caffarelliCabre}, the functions $w_{j}$ are uniformly Hölder continuous in $B_{r}$ for any $r\in(0,1)$. Therefore by the Ascoli-Arzela theorem, we may extract a subsequence such that $w_{j}\rightarrow w_{\infty}$ and $p_{j}\rightarrow p_{\infty}$ uniformly in $B_{r}$ for any $r\in(0,1)$. Moreover, $p_{\infty}$ is $p_{L}$-Lipschitz continuous and $p_{\min}\leq p_{\infty}\leq p_{\max}$. It then follows from (\ref{eq:first 0}) that
\begin{equation} \operatorname{osc}_{B_{\tau}}(w_{\infty}(x)-q^{\prime}\cdot x)>\frac{\tau}{2}\quad\text{for all }q^{\prime}\in\mathbb{R}^{N}.\label{eq:first 1} \end{equation} We have two cases: either $q_{j}$ is bounded or unbounded.
\textbf{Case $q_{j}$ is bounded: }In this case $q_{j}\rightarrow q_{\infty}\in\mathbb{R}^{N}$ up to a subsequence. It follows from the stability principle that $w_{\infty}$ is a viscosity solution to \begin{equation}
-\Delta w_{\infty}-(p_{\infty}(x)-2)\frac{\left\langle D^{2}w_{\infty}(Dw_{\infty}+q_{\infty}),Dw_{\infty}+q_{\infty}\right\rangle }{\left|Dw_{\infty}+q_{\infty}\right|^{2}+\varepsilon^{2}}=0\quad\text{in }B_{1}.\label{eq:first -1} \end{equation} Hence by Theorem \ref{cor:h=0000F6lder estimate for regularized} we have $\left\Vert Dw_{\infty}\right\Vert _{C^{\beta_{1}}(B_{1/2})}\leq C(N,\hat{p})$ for some $\beta_{1}(N,\hat{p})$. The mean value theorem then implies the existence of $q^{\prime}\in\mathbb{R}^{N}$ such that \[ \operatorname{osc}_{B_{r}}(w_{\infty}-q^{\prime}\cdot x)\leq C_{1}(N,\hat{p})r^{1+\beta_{1}}\quad\text{for all }r\leq\frac{1}{2}. \]
\textbf{Case $q_{j}$ is unbounded:} In this case we take a subsequence such that $\left|q_{j}\right|\rightarrow\infty$ and the sequence
$d_{j}:=q_{j}/\left|q_{j}\right|$ converges to $d_{\infty}\in\partial B_{1}$. Then $w_{j}$ is a viscosity solution to \[
-\Delta w_{j}-(p_{j}(x)-2)\frac{\left\langle D^{2}w_{j}(| q_{j}|^{-1}Dw_{j}+d_{j}),| q_{j}|^{-1}Dw_{j}+d_{j}\right\rangle }{\left|\left|q_{j}\right|^{-1}Dw_{j}+d_{j}\right|^{2}+\left|q_{j}\right|^{-2}\varepsilon^{2}}=f_{j}(x)\quad\text{in }B_{1}. \] It follows from the stability principle that $w_{\infty}$ is a viscosity solution to \[ -\Delta w_{\infty}-(p_{\infty}(x)-2)\left\langle D^{2}w_{\infty}d_{\infty},d_{\infty}\right\rangle =0\quad\text{in }B_{1}. \] By \cite[Theorem 8.3]{caffarelliCabre} there exist positive constants $\beta_{2}(N,\hat{p})$, $C_{2}(N,\hat{p})$, $r_{2}(N,\hat{p})$ and a vector $q^{\prime}\in\mathbb{R}^{N}$ such that \[ \operatorname{osc}_{B_{r}}(w_{\infty}-q^{\prime}\cdot x)\leq C_{2}r^{1+\beta_{2}}\quad\text{for all }r\leq r_{2}. \]
We set $C_{0}:=\max(C_{1},C_{2})$ and $\beta_{0}:=\min(\beta_{1},\beta_{2})$. Then in both cases there exists a vector $q^{\prime}\in\mathbb{R}^{N}$ such that \[ \operatorname{osc}_{B_{r}}(w_{\infty}-q^{\prime}\cdot x)\leq C_{0}r^{1+\beta_{0}}\quad\text{for all }r\leq\min(\frac{1}{2},r_{2}). \] We take $\tau$ so small that $C_{0}\tau^{\beta_{0}}\leq\frac{1}{4}$ and $\tau\leq\min(\frac{1}{2},r_{2})$. Then, by substituting $r=\tau$ in the above display, we obtain \begin{equation} \operatorname{osc}_{B_{\tau}}(w_{\infty}-q^{\prime}\cdot x)\leq C_{0}\tau^{\beta_{0}}\tau\leq\frac{1}{4}\tau,\label{eq:first 2} \end{equation} which contradicts (\ref{eq:first 1}).
The bound $\left|q^{\prime}\right|\le C(N,\hat{p})$ follows by observing that (\ref{eq:first 2}) together with the assumption $\operatorname{osc}_{B_{1}}w\leq1$
yields $\left|q^{\prime}\right|\leq C$. Hence the contradiction persists even if (\ref{eq:first 1}) is weakened by additionally requiring that $\left|q^{\prime}\right|\leq C$. \end{proof} \begin{lem} \label{lem:second lemma} Let $\tau(N,\hat{p})$ and $\epsilon(N,\hat{p})$ be as in Lemma \ref{lem:non-homogeneous regularized first lemma}. If $\left\Vert f\right\Vert _{L^{\infty}(B_{1})}\leq\epsilon$ and $u$ is a viscosity solution to (\ref{eq:non-homogeneous reguralized}) in $B_{1}$ with $q=0$, $u(0)=0$ and $\operatorname{osc}_{B_{1}}u\leq1$, then there exist $\alpha(N,\hat{p})\in(0,1)$ and $q_{\infty}\in\mathbb{R}^{N}$ such that \[
\sup_{B_{\tau^{k}}}\left|u(x)-q_{\infty}\cdot x\right|\leq C(N,\hat{p})\tau^{k(1+\alpha)}\quad\text{for all }k\in\mathbb{N}. \] \end{lem}
\begin{proof} \textbf{Step 1:} We show that there exists a sequence $(q_{k})_{k=0}^{\infty}\subset\mathbb{R}^{N}$ such that \begin{equation} \operatorname{osc}_{B_{\tau^{k}}}(u(x)-q_{k}\cdot x)\leq\tau^{k(1+\alpha)}.\label{eq:second lemma 1} \end{equation} When $k=0$, this estimate holds by setting $q_{0}=0$ since $u(0)=0$ and $\operatorname{osc}_{B_{1}}u\leq1$. Next we take $\alpha\in(0,1)$ such that $\tau^{\alpha}>\frac{1}{2}$. We assume that $k\ge0$ and that we have already constructed $q_{k}$ for which (\ref{eq:second lemma 1}) holds. We set \[ w_{k}(x):=\tau^{-k(1+\alpha)}(u(\tau^{k}x)-q_{k}\cdot(\tau^{k}x)) \] and \[ f_{k}(x):=\tau^{k(1-\alpha)}f(\tau^{k}x). \] Then by the induction assumption $\operatorname{osc}_{B_{1}}(w_{k})\leq1$ and $w_{k}$ is a viscosity solution to \[
-\Delta w_{k}-\frac{(p(\tau^{k}x)-2)\left\langle D^{2}w_{k}(Dw_{k}+\tau^{-k\alpha}q_{k}),Dw_{k}+\tau^{-k\alpha}q_{k}\right\rangle }{\left|Dw_{k}+\tau^{-k\alpha}q_{k}\right|^{2}+(\tau^{-k\alpha}\varepsilon)^{2}}=f_{k}(x)\quad\text{in }B_{1}. \]
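For the reader's convenience, we verify the equation for $w_{k}$: by definition, $Dw_{k}(x)=\tau^{-k\alpha}(Du(\tau^{k}x)-q_{k})$ and $D^{2}w_{k}(x)=\tau^{k(1-\alpha)}D^{2}u(\tau^{k}x)$, so at the point $\tau^{k}x$ we have
\[
Du=\tau^{k\alpha}(Dw_{k}+\tau^{-k\alpha}q_{k})\quad\text{and}\quad\frac{\left\langle D^{2}u\,Du,Du\right\rangle }{\left|Du\right|^{2}+\varepsilon^{2}}=\tau^{-k(1-\alpha)}\frac{\left\langle D^{2}w_{k}(Dw_{k}+\tau^{-k\alpha}q_{k}),Dw_{k}+\tau^{-k\alpha}q_{k}\right\rangle }{\left|Dw_{k}+\tau^{-k\alpha}q_{k}\right|^{2}+(\tau^{-k\alpha}\varepsilon)^{2}},
\]
and the claim follows by multiplying the equation for $u$ by $\tau^{k(1-\alpha)}$.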
By Lemma \ref{lem:non-homogeneous regularized first lemma} there exists $q_{k}^{\prime}\in\mathbb{R}^{N}$ with $\left|q_{k}^{\prime}\right|\leq C(N,\hat{p})$ such that \[ \operatorname{osc}_{B_{\tau}}(w_{k}(x)-q_{k}^{\prime}\cdot x)\leq\frac{1}{2}\tau. \] Using the definition of $w_{k}$, scaling to $B_{\tau^{k+1}}$ and dividing by $\tau^{-k(\alpha+1)}$, we obtain from the above \[ \operatorname{osc}_{B_{\tau^{k+1}}}(u(x)-(q_{k}+\tau^{k\alpha}q_{k}^{\prime})\cdot x)\leq\frac{1}{2}\tau^{1+k(1+\alpha)}\leq\tau^{(k+1)(1+\alpha)}. \] Denoting $q_{k+1}:=q_{k}+\tau^{k\alpha}q_{k}^{\prime}$, the above estimate is condition (\ref{eq:second lemma 1}) for $k+1$ and the induction step is complete.
\textbf{Step 2:} Observe that whenever $m>k$, we have \[
\left|q_{m}-q_{k}\right|\leq\sum_{i=k}^{m-1}\tau^{i\alpha}\left|q_{i}^{\prime}\right|\leq C(N,\hat{p})\sum_{i=k}^{m-1}\tau^{i\alpha}. \] Therefore $q_{k}$ is a Cauchy sequence and converges to some $q_{\infty}\in\mathbb{R}^{N}$. Thus \[
\sup_{x\in B_{\tau^{k}}}(q_{k}\cdot x-q_{\infty}\cdot x)\leq\tau^{k}\left|q_{k}-q_{\infty}\right|\leq\tau^{k}\sum_{i=k}^{\infty}\tau^{i\alpha}\left|q_{i}^{\prime}\right|\leq C(N,\hat{p})\tau^{k(1+\alpha)}. \] This with (\ref{eq:second lemma 1}) implies that \[
\sup_{x\in B_{\tau^{k}}}\left|u(x)-q_{\infty}\cdot x\right|\leq C(N,\hat{p})\tau^{k(1+\alpha)}.\qedhere \] \end{proof} \begin{thm} \label{cor:main corollary}Suppose that $u$ is a viscosity solution to (\ref{eq:non-homogeneous reguralized}) in $B_{1}$ with $q=0$ and $\operatorname{osc}_{B_{1}}u\leq1$. Then there are constants $\alpha(N,\hat{p})$ and $C(N,\hat{p},\left\Vert f\right\Vert _{L^{\infty}(B_{1})})$ such that \[ \left\Vert u\right\Vert _{C^{1,\alpha}(B_{1/2})}\leq C. \] \end{thm}
\begin{proof} Let $\epsilon(N,\hat{p})$ and $\tau(N,\hat{p})$ be as in Lemma \ref{lem:second lemma}. Set \[ v(x):=\kappa u(x/4) \] where $\kappa:=\epsilon(1+\left\Vert f\right\Vert _{L^{\infty}(B_{1})})^{-1}$. For $x_{0}\in B_{1}$, set \[ w(x):=v(x+x_{0})-v(x_{0}). \] Then $\operatorname{osc}_{B_{1}}w\leq1$, $w(0)=0$ and $w$ is a viscosity solution to \[
-\Delta w-\frac{(p(x/4+x_{0}/4)-2)\left\langle D^{2}wDw,Dw\right\rangle }{\left|Dw\right|^{2}+\varepsilon^{2}\kappa^{2}/4^{2}}=g(x)\quad\text{in }B_{1}, \] where $g(x):=\kappa f(x/4+x_{0}/4)/4^{2}$. Now $\left\Vert g\right\Vert _{L^{\infty}(B_{1})}\leq\epsilon$ so by Lemma \ref{lem:second lemma} there exists $q_{\infty}(x_{0})\in\mathbb{R}^{N}$ such that \[
\sup_{x\in B_{\tau^{k}}}\left|w(x)-q_{\infty}(x_{0})\cdot x\right|\leq C(N,\hat{p})\tau^{k(1+\alpha)}\quad\text{for all }k\in\mathbb{N}. \] Thus we have shown that for any $x_{0}\in B_{1}$ there exists a vector $q_{\infty}(x_{0})$ such that \[
\sup_{x\in B_{r}(x_{0})}\left|v(x)-v(x_{0})-q_{\infty}(x_{0})\cdot(x-x_{0})\right|\leq C(N,\hat{p})r^{1+\alpha}\quad\text{for all }r\in(0,1]. \] This together with a standard argument (see for example \cite[Lemma A.1]{attouchiParv}) implies that $[Dv]_{C^{\alpha}(B_{1})}\leq C(N,\hat{p})$ and so by definition of $v$, also $[Du]_{C^{\alpha}(B_{1/4})}\leq C(N,\hat{p},\left\Vert f\right\Vert _{L^{\infty}(B_{1})})$. The conclusion of the theorem then follows by a standard translation argument. \end{proof}
\section{Proof of the main theorem}
In this section we finish the proof of our main theorem. \begin{proof}[Proof of Theorem \ref{thm:main-1}] We may assume that $u\in C(\overline{B}_{1})$. By the comparison principle (Lemma \ref{lem:comparison principle} in the Appendix) $u$ is the unique viscosity solution to \begin{equation} \begin{cases}
-\Delta v-\frac{(p(x)-2)\left\langle D^{2}vDv,Dv\right\rangle }{\left|Dv\right|^{2}}=f(x)+u-v & \text{in }B_{1},\\ v=u & \text{on }\partial B_{1}. \end{cases}\label{eq:main 1} \end{equation} By \cite[Theorem 15.18]{gilbargTrudinger01} there exists a classical solution $u_{\varepsilon}$ to the approximate problem \[ \begin{cases}
-\Delta u_{\varepsilon}-\frac{(p_{\varepsilon}(x)-2)\left\langle D^{2}u_{\varepsilon}Du_{\varepsilon},Du_{\varepsilon}\right\rangle }{\left|Du_{\varepsilon}\right|^{2}+\varepsilon^{2}}=f_{\varepsilon}(x)+u-u_{\varepsilon} & \text{in }B_{1},\\ u_{\varepsilon}=u & \text{on }\partial B_{1}, \end{cases} \] where $p_{\varepsilon},f_{\varepsilon}\in C^{\infty}(B_{1})$ are such that $p_{\varepsilon}\rightarrow p$ and $f_{\varepsilon}\rightarrow f$ uniformly in $B_{1}$ as $\varepsilon\rightarrow0$ and $\left\Vert Dp_{\varepsilon}\right\Vert _{L^{\infty}(B_{1})}\leq\left\Vert Dp\right\Vert _{L^{\infty}(B_{1})}$. The maximum principle implies that $\left\Vert u_{\varepsilon}\right\Vert _{L^{\infty}(B_{1})}\leq2\left\Vert f\right\Vert _{L^{\infty}(B_{1})}+2\left\Vert u\right\Vert _{L^{\infty}(B_{1})}$. By \cite[Proposition 4.14]{caffarelliCabre} the solutions $u_{\varepsilon}$ are equicontinuous in $\overline{B}_{1}$ (their modulus of continuity depends only on $N$, $p$, $\left\Vert f\right\Vert _{L^{\infty}(B_{1})}$, $\left\Vert u\right\Vert _{L^{\infty}(B_{1})}$ and the modulus of continuity of $u$). Therefore by the Ascoli-Arzela theorem we have $u_{\varepsilon}\rightarrow v\in C(\overline{B}_{1})$ uniformly in $\overline{B}_{1}$ up to a subsequence. By the stability principle, $v$ is a viscosity solution to (\ref{eq:main 1}) and thus by uniqueness $v\equiv u$.
By Corollary \ref{cor:main corollary} we have $\alpha(N,\hat{p})$ such that \begin{equation} \left\Vert Du_{\varepsilon}\right\Vert _{C^{\alpha}(B_{1/2})}\leq C(N,\hat{p},\left\Vert f\right\Vert _{L^{\infty}(B_{1})},\left\Vert u\right\Vert _{L^{\infty}(B_{1})})\label{eq:main 2} \end{equation} and by the Lipschitz estimate \ref{thm:Lipschitz estimate} also \[ \left\Vert Du_{\varepsilon}\right\Vert _{L^{\infty}(B_{1/2})}\leq C(N,\hat{p},\left\Vert f\right\Vert _{L^{\infty}(B_{1})},\left\Vert u\right\Vert _{L^{\infty}(B_{1})}). \] Therefore by the Ascoli-Arzela theorem there exists a subsequence such that $Du_{\varepsilon}\rightarrow\eta$ uniformly in $B_{1/2}$, where the function $\eta:B_{1/2}\rightarrow\mathbb{R}^{N}$ satisfies \[ \left\Vert \eta\right\Vert _{C^{\alpha}(B_{1/2})}\leq C(N,\hat{p},\left\Vert f\right\Vert _{L^{\infty}(B_{1})},\left\Vert u\right\Vert _{L^{\infty}(B_{1})}). \] Using the mean value theorem and the estimate (\ref{eq:main 2}), we deduce for all $x,y\in B_{1/2}$ \begin{align*}
& \left|u(y)-u(x)-(y-x)\cdot\eta(x)\right|\\
& \ \leq\left|u_{\varepsilon}(x)-u_{\varepsilon}(y)-(y-x)\cdot Du_{\varepsilon}(x)\right|\\
& \ \ \ \ +\left|u(y)-u_{\varepsilon}(y)-u(x)+u_{\varepsilon}(x)\right|+\left|x-y\right|\left|\eta(x)-Du_{\varepsilon}(x)\right|\\
& \leq C(N,\hat{p},\left\Vert u\right\Vert _{L^{\infty}(B_{1})})\left|x-y\right|^{1+\alpha}+o(1)\quad\text{as }\varepsilon\rightarrow0. \end{align*} Letting $\varepsilon\rightarrow0$, this implies that $u$ is differentiable at $x$ with $Du(x)=\eta(x)$ for all $x\in B_{1/2}$. \end{proof}
\appendix
\section{Lipschitz estimate}
In this section we apply the method of Ishii and Lions \cite{ishiiLions90} to prove a Lipschitz estimate for solutions to the inhomogeneous normalized $p(x)$-Laplace equation and its regularized or perturbed versions. We need the following vector inequality. \begin{lem} \label{lem:lipschitz lemma}Let $a,b\in\mathbb{R}^{N}\setminus\left\{ 0\right\} $ with $a\not=b$ and $\varepsilon\geq0$. Then \[
\left|\frac{a}{\sqrt{\left|a\right|^{2}+\varepsilon^{2}}}-\frac{b}{\sqrt{\left|b\right|^{2}+\varepsilon^{2}}}\right|\leq\frac{2}{\max\left(\left|a\right|,\left|b\right|\right)}\left|a-b\right|. \] \end{lem}
\begin{proof}
We may suppose that $\left|a\right|=\max(\left|a\right|,\left|b\right|)$. Let $s_{1}:=\sqrt{\left|a\right|^{2}+\varepsilon^{2}}$ and $s_{2}:=\sqrt{\left|b\right|^{2}+\varepsilon^{2}}$. Then \begin{align*}
\left|\frac{a}{s_{1}}-\frac{b}{s_{2}}\right|=\frac{1}{s_{1}}\left|a-b+\frac{b}{s_{2}}(s_{2}-s_{1})\right| & \leq\frac{1}{s_{1}}(\left|a-b\right|+\frac{\left|b\right|}{s_{2}}\left|s_{2}-s_{1}\right|)\\
& \leq\frac{1}{\left|a\right|}(\left|a-b\right|+\left|s_{2}-s_{1}\right|). \end{align*} Moreover \begin{align*}
\left|s_{2}-s_{1}\right| & =\left|\sqrt{\left|a\right|^{2}+\varepsilon^{2}}-\sqrt{\left|b\right|^{2}+\varepsilon^{2}}\right|=\frac{\left|\left|a\right|^{2}-\left|b\right|^{2}\right|}{\sqrt{\left|a\right|^{2}+\varepsilon^{2}}+\sqrt{\left|b\right|^{2}+\varepsilon^{2}}}\\
& \leq\frac{(\left|a\right|+\left|b\right|)\left|\left|a\right|-\left|b\right|\right|}{\left|a\right|+\left|b\right|}\leq\left|a-b\right|.\qedhere \end{align*} \end{proof} \begin{thm}[Lipschitz estimate] \label{thm:Lipschitz estimate} Suppose that $p:B_{1}\rightarrow\mathbb{R}$ is Lipschitz continuous, $p_{\min}>1$, and that $f\in C(B_{1})$ is bounded. Let $u$ be a viscosity solution to \[
-\Delta u-(p(x)-2)\frac{\left\langle D^{2}u(Du+q),Du+q\right\rangle }{\left|Du+q\right|^{2}+\varepsilon^{2}}=f(x)\quad\text{in }B_{1}, \] where $\varepsilon\geq0$ and $q\in\mathbb{R}^{N}$. Then there are constants $C_{0}(N,\hat{p},\left\Vert u\right\Vert _{L^{\infty}(B_{1})},\left\Vert f\right\Vert _{L^{\infty}(B_{1})})$
and $\nu_{0}(N,\hat{p})$ such that if $\left|q\right|>\nu_{0}$ or
$\left|q\right|=0$, then we have \[
\left|u(x)-u(y)\right|\leq C_{0}\left|x-y\right|\quad\text{for all }x,y\in B_{1/2}. \] \end{thm}
\begin{proof} We let $r(N,\hat{p})\in(0,1/2)$ denote a small constant that will be specified later. Let $x_{0},y_{0}\in B_{r/2}$ and define the function \[
\Psi(x,y):=u(x)-u(y)-L\varphi(\left|x-y\right|)-\frac{M}{2}\left|x-x_{0}\right|^{2}-\frac{M}{2}\left|y-y_{0}\right|^{2}, \] where $\varphi:[0,2]\rightarrow\mathbb{R}$ is given by \[ \varphi(s):=s-s^{\gamma}\kappa_{0},\quad\kappa_{0}:=\frac{1}{\gamma2^{\gamma+1}}, \] and the constants $L(N,\hat{p},\left\Vert u\right\Vert _{L^{\infty}(B_{1})}),M(N,\hat{p},\left\Vert u\right\Vert _{L^{\infty}(B_{1})})>0$ and $\gamma(N,\hat{p})\in(1,2)$ are also specified later. Our objective is to show that for a suitable choice of these constants, the function
$\Psi$ is non-positive in $\overline{B_{r}}\times\overline{B_{r}}$. By the definition of $\varphi$, this yields $u(x_{0})-u(y_{0})\leq L\left|x_{0}-y_{0}\right|$ which implies that $u$ is $L$-Lipschitz in $B_{r}$. The claim of the theorem then follows by standard translation arguments.
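For later reference, the properties of $\varphi$ that are used repeatedly below (at (\ref{eq:lipschitz est 4}) and (\ref{eq:lipschitz b est}) in particular) follow by a routine direct computation, which we record here for the reader's convenience:

```latex
% From \varphi(s)=s-\kappa_0 s^{\gamma} with \kappa_0=\tfrac{1}{\gamma 2^{\gamma+1}} and \gamma\in(1,2):
\varphi'(s)=1-\frac{s^{\gamma-1}}{2^{\gamma+1}},\qquad
\varphi''(s)=\frac{(1-\gamma)\,s^{\gamma-2}}{2^{\gamma+1}}<0.
% Since 0<s\le 2 implies s^{\gamma-1}\le 2^{\gamma-1}, we get
\varphi'(s)\in\Big[\tfrac{3}{4},\,1\Big]
\qquad\text{and}\qquad
s\,|\varphi''(s)|=\frac{(\gamma-1)\,s^{\gamma-1}}{2^{\gamma+1}}
\le\frac{\gamma-1}{4}\le\frac{1}{4}\le\varphi'(s),
% i.e. |\varphi''(s)|\le\varphi'(s)/s.
```

In particular $\varphi$ is concave and increasing on $[0,2]$, which is what makes the term $T_{1}$ below strictly negative.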
Suppose on the contrary that $\Psi$ has a positive maximum at some point $(\hat{x},\hat{y})\in\overline{B_{r}}\times\overline{B_{r}}$. Then $\hat{x}\not=\hat{y}$ since otherwise the maximum would be non-positive. We have \begin{align}
0 & <u(\hat{x})-u(\hat{y})-L\varphi(\left|\hat{x}-\hat{y}\right|)-\frac{M}{2}\left|\hat{x}-x_{0}\right|^{2}-\frac{M}{2}\left|\hat{y}-y_{0}\right|^{2}\nonumber \\
& \leq\left|u(\hat{x})-u(\hat{y})\right|-\frac{M}{2}\left|\hat{x}-x_{0}\right|^{2}.\label{eq:lipschitz est 1} \end{align} Therefore, by taking \begin{equation} M:=\frac{8\operatorname{osc}_{B_{1}}u}{r^{2}},\label{eq:lipschitz est M} \end{equation}
we get \[
\left|\hat{x}-x_{0}\right|\leq\sqrt{\frac{2}{M}\left|u(\hat{x})-u(\hat{y})\right|}\leq r/2 \]
and similarly $\left|\hat{y}-y_{0}\right|\leq r/2$. Since $x_{0},y_{0}\in B_{r/2}$, this implies that $\hat{x},\hat{y}\in B_{r}$.
By \cite[Proposition 4.10]{caffarelliCabre} there exist constants $C^{\prime}(N,\hat{p},\left\Vert u\right\Vert _{L^{\infty}(B_{1})},\left\Vert f\right\Vert _{L^{\infty}(B_{1})})$ and $\beta(N,\hat{p})\in(0,1)$ such that \begin{equation}
\left|u(x)-u(y)\right|\leq C^{\prime}\left|x-y\right|^{\beta}\quad\text{for all }x,y\in B_{r}.\label{eq:lipschitz est 2} \end{equation} It follows from (\ref{eq:lipschitz est 1}) and (\ref{eq:lipschitz est 2}) that for $C_{0}:=\sqrt{2C^{\prime}}\sqrt{M}$ we have \begin{align}
M\left|\hat{x}-x_{0}\right|\leq C_{0}\left|\hat{x}-\hat{y}\right|^{\beta/2},\nonumber \\
M\left|\hat{y}-y_{0}\right|\leq C_{0}\left|\hat{x}-\hat{y}\right|^{\beta/2}.\label{eq:lipschitz est 3} \end{align}
Since $\hat{x}\not=\hat{y}$, the function $(x,y)\mapsto\varphi(\left|x-y\right|)$ is $C^{2}$ in a neighborhood of $(\hat{x},\hat{y})$ and we may invoke the Theorem of sums \cite[Theorem 3.2]{userguide}. For any $\mu>0$ there exist matrices $X,Y\in S^{N}$ such that \begin{align*}
(D_{x}(L\varphi(\left|x-y\right|))(\hat{x},\hat{y}),X) & \in\overline{J}^{2,+}(u-\frac{M}{2}\left|x-x_{0}\right|^{2})(\hat{x}),\\
(-D_{y}(L\varphi(\left|x-y\right|))(\hat{x},\hat{y}),Y) & \in\overline{J}^{2,-}(u+\frac{M}{2}\left|y-y_{0}\right|^{2})(\hat{y}), \end{align*} which by denoting $z:=\hat{x}-\hat{y}$ and \begin{align*}
a & :=L\varphi^{\prime}(\left|z\right|)\frac{z}{\left|z\right|}+M(\hat{x}-x_{0}),\\
b & :=L\varphi^{\prime}(\left|z\right|)\frac{z}{\left|z\right|}-M(\hat{y}-y_{0}), \end{align*}
can be written as \begin{equation} (a,X+MI)\in\overline{J}^{2,+}u(\hat{x}),\quad(b,Y-MI)\in\overline{J}^{2,-}u(\hat{y}).\label{eq:lipschitz est 6} \end{equation} By assuming that $L$ is large enough depending on $C_{0}$, we have by (\ref{eq:lipschitz est 3}) and the fact $\varphi^{\prime}\in\left[\frac{3}{4},1\right]$ \begin{align}
\left|a\right|,\left|b\right| & \leq L\left|\varphi^{\prime}(\left|\hat{x}-\hat{y}\right|)\right|+C_{0}\left|\hat{x}-\hat{y}\right|^{\beta/2}\leq2L,\label{eq:lipschitz est 4}\\
\left|a\right|,\left|b\right| & \geq L\left|\varphi^{\prime}(\left|\hat{x}-\hat{y}\right|)\right|-C_{0}\left|\hat{x}-\hat{y}\right|^{\beta/2}\geq\frac{1}{2}L.\label{eq:lipschitz est 5} \end{align} Moreover, we have \begin{align} -(\mu+2\left\Vert B\right\Vert )\begin{pmatrix}I & 0\\ 0 & I \end{pmatrix} & \leq\begin{pmatrix}X & 0\\ 0 & -Y \end{pmatrix}\nonumber \\
& \leq\begin{pmatrix}B & -B\\ -B & B \end{pmatrix}+\frac{2}{\mu}\begin{pmatrix}B^{2} & -B^{2}\\ -B^{2} & B^{2} \end{pmatrix},\label{eq:lipschitz est 7} \end{align} where \begin{align*}
B & =L\varphi^{\prime\prime}(\left|z\right|)\frac{z}{\left|z\right|}\otimes\frac{z}{\left|z\right|}+\frac{L\varphi^{\prime}(\left|z\right|)}{\left|z\right|}\left(I-\frac{z}{\left|z\right|}\otimes\frac{z}{\left|z\right|}\right),\\
B^{2} & =BB=L^{2}(\varphi^{\prime\prime}(\left|z\right|))^{2}\frac{z}{\left|z\right|}\otimes\frac{z}{\left|z\right|}+\frac{L^{2}(\varphi^{\prime}(\left|z\right|))^{2}}{\left|z\right|^{2}}\left(I-\frac{z}{\left|z\right|}\otimes\frac{z}{\left|z\right|}\right). \end{align*}
Using that $\varphi^{\prime\prime}(\left|z\right|)<0<\varphi^{\prime}(\left|z\right|)$
and $\left|\varphi^{\prime\prime}(\left|z\right|)\right|\leq\varphi^{\prime}(\left|z\right|)/\left|z\right|$, we deduce that \begin{equation}
\left\Vert B\right\Vert \leq\frac{L\varphi^{\prime}(\left|z\right|)}{\left|z\right|}\quad\text{and}\quad\text{\ensuremath{\left\Vert B^{2}\right\Vert \leq\frac{L^{2}(\varphi^{\prime}(\left|z\right|))^{2}}{\left|z\right|^{2}}}}.\label{eq:lipschitz b est} \end{equation} Moreover, choosing \[
\mu:=4L\left(\left|\varphi^{\prime\prime}(\left|z\right|)\right|+\frac{\left|\varphi^{\prime}(\left|z\right|)\right|}{\left|z\right|}\right), \]
and using that $\varphi^{\prime\prime}(\left|z\right|)<0$, we have \begin{align}
\left\langle B\frac{z}{\left|z\right|},\frac{z}{\left|z\right|}\right\rangle +\frac{2}{\mu}\left\langle B^{2}\frac{z}{\left|z\right|},\frac{z}{\left|z\right|}\right\rangle =L\varphi^{\prime\prime}(\left|z\right|)+\frac{2}{\mu}L^{2}(\varphi^{\prime\prime}(\left|z\right|))^{2} & \leq\frac{L}{2}\varphi^{\prime\prime}(\left|z\right|).\label{eq:lipschitz est 8} \end{align} We set $\eta_{1}:=a+q$ and $\eta_{2}:=b+q$. By (\ref{eq:lipschitz est 4}) and (\ref{eq:lipschitz est 5}) there is a constant $\nu_{0}(L)$
such that if $\left|q\right|=0$ or $\left|q\right|>\nu_{0}$, then \begin{equation}
\left|\eta_{1}\right|,\left|\eta_{2}\right|\geq\frac{L}{2}.\label{eq:lipschitz est 9} \end{equation}
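To indicate one admissible (not necessarily optimal) choice of $\nu_{0}$: by (\ref{eq:lipschitz est 4}) and (\ref{eq:lipschitz est 5}) one may take, for instance, $\nu_{0}:=\tfrac{5L}{2}$, since

```latex
% If q=0, then \eta_1=a and \eta_2=b, so (eq:lipschitz est 5) gives directly
|\eta_1|,\,|\eta_2|\;\ge\;\tfrac{L}{2}.
% If |q|>\nu_0=\tfrac{5L}{2}, then using |a|,|b|\le 2L from (eq:lipschitz est 4),
|\eta_1|\;\ge\;|q|-|a|\;>\;\tfrac{5L}{2}-2L\;=\;\tfrac{L}{2},
% and likewise for \eta_2.
```

The precise value of $\nu_{0}$ plays no role below; only the lower bound (\ref{eq:lipschitz est 9}) is used.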
We denote $A(x,\eta):=I+(p(x)-2)\eta\otimes\eta$ and $\overline{\eta}:=\frac{\eta}{\sqrt{\left|\eta\right|^{2}+\varepsilon^{2}}}$. Since $u$ is a viscosity solution, we obtain from (\ref{eq:lipschitz est 6}) \begin{align} 0 & \leq\mathrm{tr}(A(\hat{x},\overline{\eta}_{1})(X+MI))-\mathrm{tr}(A(\hat{y},\overline{\eta}_{2})(Y-MI))+f(\hat{x})-f(\hat{y})\nonumber \\
& =\mathrm{tr}(A(\hat{y},\overline{\eta}_{2})(X-Y))+\mathrm{tr}((A(\hat{x},\overline{\eta}_{2})-A(\hat{y},\overline{\eta}_{2}))X)\nonumber \\
& \ \ \ +\mathrm{tr}((A(\hat{x},\overline{\eta}_{1})-A(\hat{x},\overline{\eta}_{2}))X)+M\mathrm{tr}(A(\hat{x},\overline{\eta}_{1})+A(\hat{y},\overline{\eta}_{2}))\nonumber \\
& \ \ \ +f(\hat{x})-f(\hat{y})\nonumber \\
& =:T_{1}+T_{2}+T_{3}+T_{4}+T_{5}.\label{eq:lipschitz est 11} \end{align} We will now proceed to estimate these terms. The plan is to obtain a contradiction by absorbing the other terms into $T_{1}$ which is negative by concavity of $\varphi$.
\textbf{Estimate of $T_{1}$: }Multiplying (\ref{eq:lipschitz est 7})
by the vector $(\frac{z}{\left|z\right|},-\frac{z}{\left|z\right|})$ and using (\ref{eq:lipschitz est 8}), we obtain an estimate for the smallest eigenvalue of $X-Y$ \begin{align*}
\lambda_{\min}(X-Y) & \leq\left\langle (X-Y)\frac{z}{\left|z\right|},\frac{z}{\left|z\right|}\right\rangle \\
& \leq4\left\langle B\frac{z}{\left|z\right|},\frac{z}{\left|z\right|}\right\rangle +\frac{8}{\mu}\left\langle B^{2}\frac{z}{\left|z\right|},\frac{z}{\left|z\right|}\right\rangle \leq2L\varphi^{\prime\prime}(\left|z\right|). \end{align*} The eigenvalues of $A(\hat{y},\overline{\eta}_{2})$ are between $\min(1,p_{\text{min}}-1)$ and $\max(1,p_{\max}-1)$. Therefore by \cite{theobald75} \begin{align*} T_{1}=\mathrm{tr}(A(\hat{y},\overline{\eta}_{2})(X-Y)) & \leq\sum_{i}\lambda_{i}(A(\hat{y},\overline{\eta}_{2}))\lambda_{i}(X-Y)\\
& \leq\min(1,p_{\text{min}}-1)\lambda_{\min}(X-Y)\\
& \leq C(\hat{p})L\varphi^{\prime\prime}(\left|z\right|). \end{align*}
\textbf{Estimate of $T_{2}$: }We have \[
T_{2}=\mathrm{tr}((A(\hat{x},\overline{\eta}_{2})-A(\hat{y},\overline{\eta}_{2}))X)\leq\left|p(\hat{x})-p(\hat{y})\right|\left|\left\langle X\overline{\eta}_{2},\overline{\eta}_{2}\right\rangle \right|\leq C(\hat{p})\left|z\right|\left\Vert X\right\Vert , \] where by (\ref{eq:lipschitz est 7}) and (\ref{eq:lipschitz b est}) \begin{align}
\left\Vert X\right\Vert \leq\left\Vert B\right\Vert +\frac{2}{\mu}\left\Vert B\right\Vert ^{2} & \leq\frac{L\left|\varphi^{\prime}(\left|z\right|)\right|}{\left|z\right|}+\frac{2L^{2}(\varphi^{\prime}(\left|z\right|))^{2}}{4L(\left|\varphi^{\prime\prime}(\left|z\right|)\right|+\frac{\left|\varphi^{\prime}(\left|z\right|)\right|}{\left|z\right|})\left|z\right|^{2}}\nonumber \\
& \leq\frac{2L\varphi^{\prime}(\left|z\right|)}{\left|z\right|}.\label{eq:lipschitz est 12} \end{align}
\textbf{Estimate of $T_{3}$: }From Lemma \ref{lem:lipschitz lemma} and the estimate (\ref{eq:lipschitz est 9}) it follows that \begin{align}
\left|\overline{\eta}_{1}-\overline{\eta}_{2}\right| & \leq\frac{2\left|\eta_{1}-\eta_{2}\right|}{\max(\left|\eta_{1}\right|,\left|\eta_{2}\right|)}\leq\frac{4}{L}\left|\eta_{1}-\eta_{2}\right|=\frac{4}{L}\left|a-b\right|\nonumber \\
& \leq\frac{4}{L}(M\left|\hat{x}-x_{0}\right|+M\left|\hat{y}-y_{0}\right|)\leq\frac{8C_{0}}{L}\left|z\right|^{\beta/2},\label{eq:lipschitz est 10} \end{align} where in the last inequality we used (\ref{eq:lipschitz est 3}). Observe that \[
\left\Vert \overline{\eta}_{1}\otimes\overline{\eta}_{1}-\overline{\eta}_{2}\otimes\overline{\eta}_{2}\right\Vert =\left\Vert (\overline{\eta}_{1}-\overline{\eta}_{2})\otimes\overline{\eta}_{1}-\overline{\eta}_{2}\otimes(\overline{\eta}_{2}-\overline{\eta}_{1})\right\Vert \leq(\left|\overline{\eta}_{1}\right|+\left|\overline{\eta}_{2}\right|)\left|\overline{\eta}_{1}-\overline{\eta}_{2}\right|. \] Using the last two displays, we obtain by \cite{theobald75} and (\ref{eq:lipschitz est 12}) \begin{align*} T_{3}=\mathrm{tr}((A(\hat{x},\overline{\eta}_{1})-A(\hat{x},\overline{\eta}_{2}))X) & \leq N\left\Vert A(\hat{x},\overline{\eta}_{1})-A(\hat{x},\overline{\eta}_{2})\right\Vert \left\Vert X\right\Vert \\
& \leq N\left|p(\hat{x})-2\right|(\left|\overline{\eta}_{1}\right|+\left|\overline{\eta}_{2}\right|)\left|\overline{\eta}_{1}-\overline{\eta}_{2}\right|\left\Vert X\right\Vert \\
& \leq\frac{C(N,\hat{p})C_{0}}{L}\left|z\right|^{\beta/2}\left\Vert X\right\Vert \\
& \leq C(N,\hat{p},\left\Vert u\right\Vert _{L^{\infty}},\left\Vert f\right\Vert _{L^{\infty}})\sqrt{M}\varphi^{\prime}(\left|z\right|)\left|z\right|^{\beta/2-1}. \end{align*}
\textbf{Estimate of $T_{4}$ and $T_{5}$: }Since $p$ is bounded and $\left|\overline{\eta}_{1}\right|,\left|\overline{\eta}_{2}\right|\leq1$, we have \begin{align*} T_{4} & =M\mathrm{tr}(A(\hat{x},\overline{\eta}_{1})+A(\hat{y},\overline{\eta}_{2}))\leq2MC(N,\hat{p}). \end{align*} We also have \[ T_{5}=f(\hat{x})-f(\hat{y})\leq2\left\Vert f\right\Vert _{L^{\infty}(B_{1})}. \]
Combining the estimates, we deduce the existence of positive constants $C_{1}(N,\hat{p})$ and $C_{2}(N,\hat{p},\left\Vert u\right\Vert _{L^{\infty}(B_{1})},\left\Vert f\right\Vert _{L^{\infty}(B_{1})})$ such that \begin{align}
0 & \leq C_{1}L\varphi^{\prime\prime}(\left|z\right|)+C_{2}\big(L\varphi^{\prime}(\left|z\right|)+\sqrt{M}\varphi^{\prime}(\left|z\right|)\left|z\right|^{\frac{\beta}{2}-1}+M+1\big)\nonumber \\
& \leq C_{1}L\varphi^{\prime\prime}(\left|z\right|)+C_{2}(L+\sqrt{M}\left|z\right|^{\frac{\beta}{2}-1}+M+1)\label{eq:lipschitz est 15} \end{align}
where we used that $\varphi^{\prime}(\left|z\right|)\in[\frac{3}{4},1]$. We take $\gamma:=\frac{\beta}{2}+1$ so that we have \[
\varphi^{\prime\prime}(\left|z\right|)=\frac{1-\gamma}{2^{\gamma+1}}\left|z\right|^{\gamma-2}=\frac{-\beta}{2^{\frac{\beta}{2}+3}}\left|z\right|^{\frac{\beta}{2}-1}=:-C_{3}\left|z\right|^{\frac{\beta}{2}-1}. \] We apply this to (\ref{eq:lipschitz est 15}) and obtain \begin{align}
0 & \leq(C_{2}\sqrt{M}-C_{1}C_{3}L)\left|z\right|^{\frac{\beta}{2}-1}+C_{2}(L+M+1).\label{eq:lipschitz est 155} \end{align} We fix $r:=\frac{1}{2}\left(\frac{6C_{2}}{C_{1}C_{3}}\right)^{\frac{1}{\frac{\beta}{2}-1}}$. By (\ref{eq:lipschitz est M}) this will also fix $M=M(N,\hat{p},\left\Vert u\right\Vert _{L^{\infty}(B_{1})})$. We take $L$ so large that \[ L>\max(\frac{2C_{2}\sqrt{M}}{C_{1}C_{3}},M+1). \] Then by (\ref{eq:lipschitz est 155}) we have \begin{align*}
0<-\frac{1}{2}C_{1}C_{3}L\left|z\right|^{\frac{\beta}{2}-1}+2C_{2}L & \leq L(-\frac{1}{2}C_{1}C_{3}(2r)^{\frac{\beta}{2}-1}+2C_{2})\\
& =-LC_{2}\leq0, \end{align*} which is a contradiction. \end{proof}
\section{Stability and comparison principles} \begin{lem} Suppose that $p\in C(B_{1})$, $p_{\min}>1$ and that $f:B_{1}\times\mathbb{R}\rightarrow\mathbb{R}$ is continuous. Let $u_{\varepsilon}$ be a viscosity solution to \[
-\Delta u_{\varepsilon}-(p_{\varepsilon}(x)-2)\frac{\left\langle D^{2}u_{\varepsilon}Du_{\varepsilon},Du_{\varepsilon}\right\rangle }{\left|Du_{\varepsilon}\right|^{2}+\varepsilon^{2}}=f_{\varepsilon}(x,u_{\varepsilon}(x))\quad\text{in }B_{1}
-\Delta u-(p(x)-2)\frac{\left\langle D^{2}uDu,Du\right\rangle }{\left|Du\right|^{2}}=f(x,u(x))\quad\text{in }B_{1}. \] \end{lem}
\begin{proof} It is enough to consider supersolutions. Suppose that $\varphi\in C^{2}$ touches $u$ from below at $x$. Since $u_{\varepsilon}\rightarrow u$ locally uniformly, there exists a sequence $x_{\varepsilon}\rightarrow x$
such that $u_{\varepsilon}-\varphi$ has a local minimum at $x_{\varepsilon}$. We denote $\eta_{\varepsilon}:=D\varphi(x_{\varepsilon})/\sqrt{\left|D\varphi(x_{\varepsilon})\right|^{2}+\varepsilon^{2}}.$ Then $\eta_{\varepsilon}\rightarrow\eta\in\overline{B}_{1}$ up to a subsequence. Therefore we have \begin{align} 0 & \leq-\Delta\varphi(x_{\varepsilon})-(p_{\varepsilon}(x_{\varepsilon})-2)\left\langle D^{2}\varphi(x_{\varepsilon})\eta_{\varepsilon},\eta_{\varepsilon}\right\rangle -f_{\varepsilon}(x_{\varepsilon},u_{\varepsilon}(x_{\varepsilon}))\nonumber \\
& \rightarrow-\Delta\varphi(x)-(p(x)-2)\left\langle D^{2}\varphi(x)\eta,\eta\right\rangle -f(x,u(x)),\label{eq:stability 1} \end{align} which is what is required in Definition \ref{def:viscosity solutions} in the case $D\varphi(x)=0$. If $D\varphi(x)\not=0$, then $D\varphi(x_{\varepsilon})\not=0$
when $\varepsilon$ is small and thus $\eta=D\varphi(x)/\left|D\varphi(x)\right|$. Therefore (\ref{eq:stability 1}) again implies the desired inequality. \end{proof} \begin{lem} \label{lem:comparison principle}Suppose that $p:B_{1}\rightarrow\mathbb{R}$ is Lipschitz continuous, $p_{\min}>1$ and that $f\in C(B_{1})$ is bounded. Assume that $u\in C(\overline{B}_{1})$ is a viscosity subsolution to $-\Delta_{p(x)}^{N}u\leq f-u$ in $B_{1}$ and that $v\in C(\overline{B}_{1})$ is a viscosity supersolution to $-\Delta_{p(x)}^{N}v\geq f-v$ in $B_{1}$. Then \[ u\leq v\quad\text{on }\partial B_{1} \] implies \[ u\leq v\quad\text{in }B_{1}. \] \end{lem}
\begin{proof} \textbf{Step 1:} Assume on the contrary that the maximum of $u-v$ in $\overline{B}_{1}$ is positive. For $x,y\in\overline{B}_{1}$, set \[ \Psi_{j}(x,y):=u(x)-v(y)-\varphi_{j}(x,y), \]
where $\varphi_{j}(x,y):=\frac{j}{4}\left|x-y\right|^{4}$. Let $(x_{j},y_{j})$ be a global maximum point of $\Psi_{j}$ in $\overline{B}_{1}\times\overline{B}_{1}$. Then \[
u(x_{j})-v(y_{j})-\frac{j}{4}\left|x_{j}-y_{j}\right|^{4}\geq u(0)-v(0) \] so that \[
\frac{j}{4}\left|x_{j}-y_{j}\right|^{4}\leq2\left\Vert u\right\Vert _{L^{\infty}(B_{1})}+2\left\Vert v\right\Vert _{L^{\infty}(B_{1})}<\infty. \] By compactness and the assumption $u\leq v$ on $\partial B_{1}$ there exists a subsequence such that $x_{j},y_{j}\rightarrow\hat{x}\in B_{1}$ and $u(\hat{x})-v(\hat{x})>0$. Finally, since $(x_{j},y_{j})$ is a maximum point of $\Psi_{j}$, we have \[
u(x_{j})-v(x_{j})\leq u(x_{j})-v(y_{j})-\frac{j}{4}\left|x_{j}-y_{j}\right|^{4}, \] and hence by continuity \begin{equation}
\frac{j}{4}\left|x_{j}-y_{j}\right|^{4}\leq v(x_{j})-v(y_{j})\rightarrow0\label{eq:comparison convergence} \end{equation} as $j\rightarrow\infty$.
\textbf{Step 2:} If $x_{j}=y_{j}$, then $D_{x}^{2}\varphi_{j}(x_{j},y_{j})=D_{y}^{2}\varphi_{j}(x_{j},y_{j})=0$. Therefore, since the function $x\mapsto u(x)-\varphi_{j}(x,y_{j})$ reaches its maximum at $x_{j}$ and $y\mapsto v(y)-(-\varphi_{j}(x_{j},y))$ reaches its minimum at $y_{j}$, we obtain from the definition of viscosity sub- and supersolutions that \[ 0\leq f(x_{j})-u(x_{j})\quad\text{and}\quad0\geq f(y_{j})-v(y_{j}). \] That is, $0\leq f(x_{j})-f(y_{j})+v(y_{j})-u(x_{j}),$ which leads to a contradiction since $x_{j},y_{j}\rightarrow\hat{x}$ and $v(\hat{x})-u(\hat{x})<0$. We conclude that $x_{j}\not=y_{j}$ for all large $j$. Next we apply the Theorem of sums \cite[Theorem 3.2]{userguide} to obtain matrices $X,Y\in S^{N}$ such that \[ (D_{x}\varphi_{j}(x_{j},y_{j}),X)\in\overline{J}^{2,+}u(x_{j}),\quad(-D_{y}\varphi_{j}(x_{j},y_{j}),Y)\in\overline{J}^{2,-}v(y_{j}) \]
and \begin{equation} \begin{pmatrix}X & 0\\ 0 & -Y \end{pmatrix}\leq D^{2}\varphi_{j}(x_{j},y_{j})+\frac{1}{j}(D^{2}\varphi_{j}(x_{j},y_{j}))^{2},\label{eq:matrix ineq} \end{equation} where \[ D^{2}\varphi_{j}(x_{j},y_{j})=\begin{pmatrix}M & -M\\ -M & M \end{pmatrix} \]
with $M=j(2(x_{j}-y_{j})\otimes(x_{j}-y_{j})+\left|x_{j}-y_{j}\right|^{2}I)$. Multiplying the matrix inequality (\ref{eq:matrix ineq}) by the $\mathbb{R}^{2N}$ vector $(\xi_{1},\xi_{2})$ yields \begin{align*} \left\langle X\xi_{1},\xi_{1}\right\rangle -\left\langle Y\xi_{2},\xi_{2}\right\rangle & \leq\left\langle (M+2j^{-1}M^{2})(\xi_{1}-\xi_{2}),\xi_{1}-\xi_{2}\right\rangle \\
& \leq(\left\Vert M\right\Vert +2j^{-1}\left\Vert M\right\Vert ^{2})\left|\xi_{1}-\xi_{2}\right|^{2}. \end{align*}
Observe also that $\eta:=D_{x}\varphi_{j}(x_{j},y_{j})=-D_{y}\varphi_{j}(x_{j},y_{j})=j\left|x_{j}-y_{j}\right|^{2}(x_{j}-y_{j})\not=0$ for all large $j$. Since $u$ is a subsolution and $v$ is a supersolution, we thus obtain \begin{align*}
& f(y_{j})-f(x_{j})+u(x_{j})-v(y_{j})\\
& \ \leq\mathrm{tr}(X-Y)+(p(x_{j})-2)\left\langle X\frac{\eta}{\left|\eta\right|},\frac{\eta}{\left|\eta\right|}\right\rangle -(p(y_{j})-2)\left\langle Y\frac{\eta}{\left|\eta\right|},\frac{\eta}{\left|\eta\right|}\right\rangle \\
& \ \leq(p(x_{j})-1)\left\langle X\frac{\eta}{\left|\eta\right|},\frac{\eta}{\left|\eta\right|}\right\rangle -(p(y_{j})-1)\left\langle Y\frac{\eta}{\left|\eta\right|},\frac{\eta}{\left|\eta\right|}\right\rangle \\
& \ \leq(\left\Vert M\right\Vert +2j^{-1}\left\Vert M\right\Vert ^{2})\Big|\sqrt{p(x_{j})-1}-\sqrt{p(y_{j})-1}\Big|^{2}\\
& \ \leq Cj\left|x_{j}-y_{j}\right|^{2}\frac{\left|p(x_{j})-p(y_{j})\right|^{2}}{\left(\sqrt{p(x_{j})-1}+\sqrt{p(y_{j})-1}\right)^{2}}\\
& \leq C(\hat{p})j\left|x_{j}-y_{j}\right|^{4}. \end{align*} This leads to a contradiction since the left-hand side tends to $u(\hat{x})-v(\hat{x})>0$ and the right-hand side tends to zero by (\ref{eq:comparison convergence}).
\end{proof}
\end{document}
\begin{document}
\title{Degenerate Affine Hecke-Clifford Algebras and Type $Q$ Lie Superalgebras}
\author{David Hill } \address{Department of Mathematics \\
University of California, Berkeley \\
Berkeley, CA 94720-3840} \email{dhill1@math.berkeley.edu} \author{Jonathan R. Kujawa} \address{Department of Mathematics \\
University of Oklahoma \\
Norman, OK 73019} \email{kujawa@math.ou.edu} \author{Joshua Sussan} \address{Department of Mathematics \\
University of California, Berkeley \\
Berkeley, CA 94720-3840} \email{sussan@math.berkeley.edu} \thanks{Research of the second author was partially supported by NSF grant DMS-0734226. Research of the first and third authors was partially supported by NSF EMSW21-RTG grant DMS-0354321} \date{\today} \subjclass[2000]{Primary 20C08,20C25; Secondary 17B60,17B20,17B37}
\begin{abstract} We construct the finite dimensional simple integral modules for the (degenerate) affine Hecke-Clifford algebra (AHCA), ${\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)$. Our construction includes an analogue of Zelevinsky's segment representations, a complete combinatorial description of the simple calibrated ${\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)$-modules, and a classification of the simple integral ${\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)$-modules. Our main tool is an analogue of the Arakawa-Suzuki functor for the Lie superalgebra ${\mathfrak{q}}(n)$. \end{abstract}
\maketitle
\section{Introduction}\label{S:Intro} \subsection{} Throughout this paper, we will work over the ground field ${\mathbb{C}}$. As is well known,
the symmetric group, $S_d$, has a non-trivial \emph{central extension}: \[ \xymatrix{1\ar[r]&{\mathbb{Z}}/2{\mathbb{Z}}\ar[r]&\widehat{S}_d\ar[r]&S_d\ar[r]&1}. \] The double cover $\widehat{S}_d$ is generated by elements $\zeta,\hat{s}_1,\ldots,\hat{s}_{d-1}$, where $\zeta$ is central, $\zeta^2=1$, and the $\hat{s}_i$ satisfy the relations $\hat{s}_i\hat{s}_{i+1}\hat{s}_i=\hat{s}_{i+1}\hat{s}_i\hat{s}_{i+1}$ and $\hat{s}_j\hat{s}_i=\zeta\hat{s}_i\hat{s}_j$ for admissible $i$ and $j$
satisfying $|i-j|>1$. The \emph{projective} or $\emph{spin}$ representations of $S_d$ are the linear representations of $\widehat{S}_d$ which factor through ${\mathbb{C}}\widehat{S}_d/(\zeta+1)$. This paper is a study of some structures arising from the projective representation theory of symmetric groups.
The double cover $\widehat{S}_d$ suffers a defect: it is difficult to define parabolic induction, see \cite[Section 4]{stem}. Since the inductive approach to the study of linear representations of the symmetric group is so effective, it is preferable to study the \emph{Sergeev algebra} ${\mathcal{S}}(d)$ introduced in \cite{s,n}, which provides a natural fix to this problem. As a vector space, ${\mathcal{S}}(d)={\mathcal{C}\ell}(d)\otimes{\mathbb{C}} S_d$, where ${\mathcal{C}\ell}(d)$ is the $2^d$-dimensional Clifford algebra with generators $c_1,\ldots,c_d$ subject to the relations $c_i^2=-1$ and $c_ic_j=-c_jc_i$ for $i\neq j$, and ${\mathbb{C}} S_d$ is the group algebra of $S_d$. Let $s_i=(i,i+1)\in S_d$ be the $i$th basic transposition, and identify ${\mathcal{C}\ell}(d)$ and ${\mathbb{C}} S_d$ with the subspaces ${\mathcal{C}\ell}(d)\otimes 1$ and $1\otimes{\mathbb{C}} S_d$ respectively. Multiplication is defined so that ${\mathcal{C}\ell}(d)$ and ${\mathbb{C}} S_d$ are subalgebras, and $wc_i=c_{w(i)}w$ for all $1\leq i\leq d$ and $w\in S_d$. The Sergeev algebra admits a natural definition of parabolic induction and the projective representation theory of the symmetric group can be recovered from that of ${\mathcal{S}}(d)$, \cite[Theorem 3.4]{bk1}.
Additionally, the Sergeev algebra is a \emph{superalgebra}, and plays the role of the symmetric group for a super version of Schur-Weyl duality known as Sergeev duality in honor of A. N. Sergeev who extended the classical theorem of Schur and Weyl \cite{s}. If $V={\mathbb{C}}^{n|n}$ is the standard representation of the Lie superalgebra ${\mathfrak{q}}(n)$, then both ${\mathcal{S}}(d)$ and ${\mathfrak{q}}(n)$ act on the tensor product $V^{\otimes d}$ and each algebra is the commutant algebra of the other. In particular, there exists an isomorphism of superalgebras \[ {\mathcal{S}}(d)\rightarrow\operatorname{End}_{{\mathfrak{q}}(n)}(V^{\otimes d}). \]
The algebra ${\mathcal{S}}(d)$ admits an affinization, ${\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)$, called the (degenerate) affine Hecke-Clifford algebra (AHCA). The affine Hecke-Clifford algebra was introduced by Nazarov in \cite{n} and studied in \cite{n,bk2,kl,w}. As a vector space, ${\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)={\mathcal{P}}_d[x]\otimes{\mathcal{S}}(d)$, where ${\mathcal{P}}_d[x]={\mathbb{C}}[x_1,\ldots,x_d]$. We identify ${\mathcal{P}}_d[x]$ and ${\mathcal{S}}(d)$ with the subspaces ${\mathcal{P}}_d[x]\otimes 1$ and $1\otimes{\mathcal{S}}(d)$. Multiplication is defined so that these are subalgebras, $c_ix_j=x_jc_i$ if $j\neq i$, $c_ix_i=-x_ic_i$, $s_ix_j=x_js_i$ if $j\neq i,i+1$, and \[ s_ix_i=x_{i+1}s_i-1+c_ic_{i+1}. \] In addition to ${\mathcal{S}} (d)$ being a subalgebra of ${\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)$, there also exists a natural surjection ${\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)\twoheadrightarrow{\mathcal{S}}(d)$ obtained by mapping $x_1\mapsto 0$, $c_i\mapsto c_i$ and $s_i\mapsto s_i$. Therefore, the representation theory of the AHCA contains that of the Sergeev algebra.
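For orientation, the single defining relation above already determines how $s_i$ moves past $x_{i+1}$. The following companion relation is a routine consequence, recorded here for convenience; it is obtained by multiplying the defining relation by $s_i$ on both sides and using $s_i^2=1$ together with $s_ic_ic_{i+1}s_i=c_{i+1}c_i=-c_ic_{i+1}$:

```latex
% Multiply s_i x_i = x_{i+1}s_i - 1 + c_i c_{i+1} on the left and on the right by s_i:
x_i s_i \;=\; s_i x_{i+1} \;-\; 1 \;-\; c_i c_{i+1},
\qquad\text{equivalently}\qquad
s_i x_{i+1} \;=\; x_i s_i \;+\; 1 \;+\; c_i c_{i+1}.
```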
Surprisingly little is explicitly known about the representation theory of ${\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)$, in contrast with its linear counterpart, the \emph{(degenerate) affine Hecke algebra} ${\mathcal{H}^{\mathrm{aff}}}(d)$. The most significant contribution to the projective theory is from \cite{bk2,kl}, which describe the Grothendieck group of the full subcategory of \emph{integral} ${\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)$-modules in terms of the crystal graph associated to a maximal nilpotent subalgebra of ${\mathfrak{b}}_\infty$ (or, more generally, $A_{2\ell}^{(2)}$ if working over a field of odd prime characteristic $2\ell-1$). We will return to this important topic later on.
The algebra ${\mathcal{H}^{\mathrm{aff}}}(d)$ has been studied for many years. Of particular interest are those modules for ${\mathcal{H}^{\mathrm{aff}}}(d)$ which admit a generalized weight space decomposition with respect to the polynomial generators. It is known that among these modules it is enough to consider those for which the generalized eigenvalues of the polynomial generators are integers, cf. \cite[$\S7.1$]{kl}. These are known as \emph{integral modules}. As discovered in \cite{n}, the appropriate analogue of integral modules for ${\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)$ are those which admit a generalized weight space decomposition with respect to the $x_i^2$, and the generalized eigenvalues of the $x_i^2$ are of the form $q(a):=a(a+1)$, $a\in{\mathbb{Z}}$.
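It is worth noting the elementary symmetry of the quadratic $q$ underlying this notion of integrality:

```latex
q(a)\;=\;a(a+1)\;=\;(-1-a)\bigl((-1-a)+1\bigr)\;=\;q(-1-a),
\qquad a\in{\mathbb{Z}},
```

so the integral eigenvalues of the $x_i^2$ form the set $\{0,2,6,12,\ldots\}=\{a(a+1)\mid a\in{\mathbb{Z}}_{\geq0}\}$, each value being attained by exactly the pair of integers $a$ and $-1-a$.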
The finite dimensional, irreducible, integral modules for ${\mathcal{H}^{\mathrm{aff}}}(d)$ were classified by Zelevinsky in \cite{z} via combinatorial objects known as multisegments. A segment is an interval $[a,b]\subset{\mathbb{Z}}$. To each segment $[a,b]$ with $d=b-a+1$, Zelevinsky associates a 1-dimensional ${\mathcal{H}^{\mathrm{aff}}}(d)$-module ${\mathbb{C}}_{[a,b]}$ defined from the trivial representation of ${\mathbb{C}} S_d$ by letting $x_1$ act by the scalar $a$. A multisegment may be regarded as a pair of compositions $(\beta,\alpha)=((b_1,\ldots,b_n),(a_1,\ldots,a_n))\in{\mathbb{Z}}^n\times{\mathbb{Z}}^n$, with $d_i=b_i-a_i\geq 0$. If $d=d_1+\cdots+d_n$, Zelevinsky associates to the multisegment $(\beta,\alpha)$ a \emph{standard cyclic} ${\mathcal{H}^{\mathrm{aff}}}(d)$-module \[ {\mathcal{M}}(\beta,\alpha) =\operatorname{Ind}_{{\mathcal{H}^{\mathrm{aff}}}(d_1)\otimes\cdots\otimes{\mathcal{H}^{\mathrm{aff}}}(d_n)}^{{\mathcal{H}^{\mathrm{aff}}}(d)} {\mathbb{C}}_{[a_1,b_1-1]}
\boxtimes\cdots\boxtimes{\mathbb{C}}_{[a_n,b_n-1]}. \] To explain the classification, let $P={\mathbb{Z}}^n$ be the weight lattice associated to ${\mathfrak{gl}}_n({\mathbb{C}})$, $P^+$ the dominant weights, and $\rho=(n-1,\ldots,1,0)$. Additionally, define the weights \[ {P_{\geq0}}(d)=\{\mu\in{\mathbb{Z}}^n_{\geq0}\mid \mu_1+\cdots+\mu_n=d\}\,\,\,\,\,\, {\mbox{and}} \,\,\,\,\,\, P^+[\lambda]=\{\mu\in P\mid \mu_i\geq\mu_j\mbox{ whenever }\lambda_i=\lambda_j\}. \] Given $\lambda\in P^+$, let \begin{align}\label{E:Bsubd} \mathcal{B}_d[\lambda]=\{\mu\in P^+[\lambda]\mid \lambda-\mu\in{P_{\geq0}}(d)\}, \end{align} and \[ \mathcal{A}_d=\{(\lambda,\mu)\mid \lambda\in P^+,\mbox{ and }\mu\in\mathcal{B}_d[\lambda+\rho]\}. \] Then, writing ${\mathcal{L}}(\beta,\alpha)$ for the unique simple quotient of ${\mathcal{M}}(\beta,\alpha)$, the set $\{{\mathcal{L}}(\beta,\alpha)\mid (\beta,\alpha)\in{\mathcal{A}}_d\}$ is a complete list of the irreducible integral ${\mathcal{H}^{\mathrm{aff}}}(d)$-modules.
In the case of ${\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)$, the situation is more subtle. To describe this, fix a segment $[a,b]$. The obvious analogue of the trivial representation of ${\mathbb{C}} S_d$ is the $2^d$-dimensional basic spin representation ${\mathcal{C}\ell}_d={\mathcal{C}\ell}(d).1$ of ${\mathcal{S}}(d)$. If $a=0$, the action of ${\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)$ factors through ${\mathcal{S}}(d)$ and it can be checked that ${\mathcal{C}\ell}_d$ is the desired segment representation. If $a\neq 0$, it is not immediately obvious how to proceed. Inspiration comes from a \emph{rank 1} application of the functor described below. We define a module structure on the \emph{double} of ${\mathcal{C}\ell}_d$: $\hat{\Phi}_{[a,b]}=\Phi_a\otimes{\mathcal{C}\ell}_d$, where $\Phi_a$ is a 2-dimensional Clifford algebra. The module $\hat{\Phi}_{[a,b]}$ is not irreducible, but decomposes as a direct sum of irreducibles $\Phi_{[a,b]}^+\oplus\Phi_{[a,b]}^-$, where $\Phi_{[a,b]}^+$ and $\Phi_{[a,b]}^-$ are isomorphic via an \emph{odd} isomorphism. Let $\Phi_{[a,b]}$ denote one of these simple summands. Now, given a multisegment $(\lambda,\mu)$, with $\lambda_i-\mu_i=d_i$ and $d=d_1+\cdots+d_n$, we define the standard cyclic module \[ {\mathcal{M}}(\lambda,\mu)=\operatorname{Ind}_{{\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d_1)\otimes\cdots\otimes{\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d_n)}^{{\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)}
\Phi_{[\mu_1,\lambda_1-1]}\circledast\cdots\circledast\Phi_{[\mu_n,\lambda_n-1]}, \] where $\circledast$ is an analogue of the outer tensor product adapted for superalgebras, see section \ref{S:Prelim} below.
A weight $\lambda\in P$ is called typical if $\lambda_i+\lambda_j\neq0$ for all $i\neq j$. Let \[ {P^{++}}=\{\lambda\in P^+\mid \lambda_{1}\geq \dotsb \geq \lambda_{n}, \text{ and } \lambda_i+\lambda_j\neq 0\mbox{ for all }i\neq j\} \] be the set of dominant typical weights. We prove
\begin{thme} Assume that $\lambda\in{P^{++}}$ and $\mu\in\mathcal{B}_d[\lambda]$. Then, ${\mathcal{M}}(\lambda,\mu)$ has a unique simple quotient, denoted ${\mathcal{L}}(\lambda,\mu)$. \end{thme}
In the special case where the multisegment $(\lambda,\mu)$ corresponds to a skew shape (i.e.\ $\lambda,\mu \in P^+$), the associated ${\mathcal{H}^{\mathrm{aff}}}(d)$-modules are called calibrated. The calibrated representations may also be characterized as those modules on which the polynomial generators act semisimply; they were originally classified by Cherednik in \cite{ch0}. In \cite{ram}, Ram gives a complete combinatorial description of the calibrated representations of ${\mathcal{H}^{\mathrm{aff}}}(d)$ in terms of skew shape tableaux (see also \cite{kr} for another combinatorial model).
The projective analogues of skew shapes are the shifted skew shapes, which have already appeared in \cite{s2,stem} and correspond to the case when $\lambda$ and $\mu$ are \emph{strict} partitions. As in the linear case, the corresponding modules are those on which the $x_i$ act semisimply. In the spirit of \cite{ram}, we prove that
\begin{thme} For each shifted skew shape $\lambda/\mu$, where $\lambda$ and $\mu$ are strict partitions such that $ \lambda $ contains $ \mu, $ there is an irreducible ${\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)$-module $H^{\lambda/\mu}$. Every irreducible, calibrated ${\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)$-module is isomorphic to exactly one such $H^{\lambda/\mu}$. \end{thme}
The $H^{\lambda/\mu}$ are constructed directly using the combinatorics of shifted skew shapes. Furthermore, we show that $H^{\lambda/\mu}\cong{\mathcal{L}}(\lambda,\mu)$. We would also like to point out that Wan \cite{wan} has recently obtained a classification of the calibrated representations for ${\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)$ over an arbitrary algebraically closed field of characteristic not equal to 2.
The appearance of the weight lattice for ${\mathfrak{gl}}_n({\mathbb{C}})$ in the representation theory of ${\mathcal{H}^{\mathrm{aff}}}(d)$ is explained by work of Arakawa and Suzuki, who introduced in \cite{as} a functor from the BGG category $ \mathcal{O}(\mathfrak{gl}_n) $ to the category of finite dimensional representations of ${\mathcal{H}^{\mathrm{aff}}}(d)$. The authors proved that the functor maps Verma modules to standard modules or to zero. Using the Kazhdan-Lusztig conjecture together with the results of \cite{ginz}, they proved that simple objects in $\mathcal{O}(\mathfrak{gl}_n)$ are mapped by the functor to simple modules or zero. In \cite{su1}, Suzuki avoided the Kazhdan-Lusztig conjecture and proved that the functor maps simples to simples using Zelevinsky's classification together with the existence of a nonzero ${\mathcal{H}^{\mathrm{aff}}}(d)$-contravariant form on certain standard modules, see \cite{r}. In \cite{su2}, Suzuki was able to avoid the results of Zelevinsky and independently reproduce the classification via a careful analysis of the standard modules. For a complete explanation of the functor in type $A$, we refer the reader to \cite{or}.
The functor and related constructions have had numerous applications in various areas of representation theory. This includes the study of affine Braid groups and Hecke algebras \cite{or}, Yangians \cite{KN}, the centers of parabolic category $\mathcal{O}$ for $\mathfrak{gl}_{n}$ \cite{b2}, finite W-algebras \cite{bk4}, and the proof of Brou\'e's abelian defect conjecture for symmetric groups by Chuang and Rouquier via $\mathfrak{sl}_{2}$ categorification \cite{CR}.
We define an analogous functor from the category $\mathcal{O}({\mathfrak{q}}(n))$ to the category of finite dimensional modules for ${\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)$. The construction of this functor relies on the following key result:
\begin{thme} Let $M$ be a ${\mathfrak{q}}(n)$-supermodule. Then, there exists a homomorphism \[ {\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)\rightarrow\operatorname{End}_{\ensuremath{\mathfrak{q}} (n)}(M\otimes V^{\otimes d}). \] \end{thme}
To define the functor, let ${\mathfrak{q}}(n)={\mathfrak{n}}^+\oplus{\mathfrak{h}}\oplus{\mathfrak{n}}^-$ be the triangular decomposition of ${\mathfrak{q}}(n)$. For each $\lambda\in P$, the functor \[ F_\lambda:\mathcal{O}({\mathfrak{q}}(n))\rightarrow{\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)\mbox{-mod} \] is defined by \begin{equation*} F_\lambda M=\{\,m\in M \mid {\mathfrak{n}}^+.m=0 \text{ and } hm =\lambda(h)m \text{ for all } h\in {\mathfrak{h}} \}. \end{equation*} The functor $F_\lambda$ is exact when $\lambda\in{P^{++}}$.
The dimension of the highest weight space of a Verma module in $\mathcal{O}({\mathfrak{q}}(n))$ is generally greater than one. A consequence of this is that the functor maps a Verma module to a direct sum of copies of the same standard module. A simple object in $ \mathcal{O}(\mathfrak{q}(n)) $ is mapped to a direct sum of copies of the same simple module, or else to zero. Determining when a simple object is mapped to something non-zero is a more difficult question than in the non-super case, and we have only partial results in this direction. The main difficulty is a lack of information about the category $\mathcal{O}({\mathfrak{q}}(n))$. The category of finite dimensional representations of ${\mathfrak{q}}(n)$ has been studied by Penkov and Serganova \cite{p,ps,ps2}; they give a character formula for all finite dimensional simple ${\mathfrak{q}}(n)$-modules. Using other methods, Brundan \cite{b} has also studied this category, and has even obtained some (conjectural) information about the whole category $\mathcal{O}({\mathfrak{q}}(n))$ via the theory of crystals. The most useful information, however, comes from Gorelik \cite{g}, who defines the Shapovalov form for Verma modules and calculates the linear factorization of its determinant.
In various works by Ariki, Grojnowski, Vazirani, and Kleshchev \cite{ar,gr,v,kl} it was shown that there is an action of $U({\mathfrak{gl}}_\infty)$ on the direct sum of Grothendieck groups of the categories of integral ${\mathcal{H}^{\mathrm{aff}}}(d)$-modules, for all $d$. This gives another type of classification of the simple integral modules as nodes on the crystal graph associated to a maximal nilpotent subalgebra of ${\mathfrak{gl}}_\infty$. In \cite{bk1}, Brundan and Kleshchev show there is a classification of the simple integral modules for ${\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)$ parameterized by the nodes of the crystal graph associated to a maximal nilpotent subalgebra of ${\mathfrak{b}}_\infty$, see also \cite{kl}.
In \cite{lec}, Leclerc studied dual canonical bases of the quantum group ${\mathcal{U}}_q({\mathfrak{g}})$ for various finite dimensional simple Lie algebras ${\mathfrak{g}}$ via embeddings of the quantized enveloping algebra ${\mathcal{U}}_q({\mathfrak{n}})$ of a maximal nilpotent subalgebra ${\mathfrak{n}}\subseteq{\mathfrak{g}}$ in the \emph{quantum shuffle algebra}. To describe the quantum shuffle algebra associated to ${\mathfrak{g}}$ of rank $r$, let $\mathcal{F}$ be the free associative algebra on the letters $[0],\ldots,[r-1]$, and let $[i_1,i_2,\ldots,i_k]:=[i_1]\cdot[i_2]\cdots[i_k]$. Then, the quantum shuffle algebra is the algebra $(\mathcal{F},*)$, where \[ [i_1,\ldots,i_k]*[i_{k+1},\ldots,i_{k+\ell}]=\sum_\sigma q^{-e(\sigma)}[i_{\sigma(1)},\ldots,i_{\sigma(k+\ell)}], \] where the sum is over all minimal length coset representatives in $S_{k+\ell}/(S_k\times S_\ell)$, and $e(\sigma)$ is some explicit function of $\sigma$. There exists an \emph{injective} homomorphism $\Psi:{\mathcal{U}}_q({\mathfrak{n}})\hookrightarrow{\mathcal{F}}$ satisfying $\Psi(xy)=\Psi(x)*\Psi(y)$ for all $x,y\in{\mathcal{U}}_q({\mathfrak{n}})$. Let $\mathcal{W}=\Psi({\mathcal{U}}_q({\mathfrak{n}}))$.
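In the simplest case $k=\ell=1$, both elements of $S_2$ are minimal length coset representatives, and the formula specializes to \[ [i_1]*[i_2]=q^{-e(e)}[i_1,i_2]+q^{-e(s_1)}[i_2,i_1], \] with $e(\sigma)$ as above.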
The ordering $[0]<[1]<\cdots<[r-1]$ yields two total orderings on words in ${\mathcal{F}}$: one is the standard lexicographic ordering, reading from \emph{left to right}, and the other is the \emph{costandard} lexicographic ordering, reading from \emph{right to left}. These orderings give rise to special words in ${\mathcal{F}}$ called Lyndon words, and every word has a canonical factorization as a non-increasing product of Lyndon words. In \cite{lec}, Leclerc uses the standard ordering, while we use the costandard ordering. It is easy to translate between results using one ordering as opposed to the other. However, in our situation, choosing the costandard ordering leads to some significant differences in the \emph{shape} of Lyndon words. We will explain this shortly.
Bases for ${\mathcal{W}}$ are parameterized by certain words called \emph{good words}.
A \emph{good word} is a nonincreasing product of \emph{good Lyndon words}, which have been studied in \cite{lr,ro1,ro2,ro3}. The good Lyndon words are in 1-1 correspondence with the positive roots, $\Delta^+$, of ${\mathfrak{g}}$, and the (standard or costandard) lexicographic ordering on good Lyndon words gives rise to a convex ordering on $\Delta^+$. The convex ordering on $\Delta^+$ gives rise to a PBW basis for ${\mathcal{U}}_q({\mathfrak{n}})$, which in turn gives a multiplicative basis $\{E^*_g=(E^*_{l_k})*\cdots*(E_{l_1}^*)\}$ for ${\mathcal{W}}$ labeled by good words $g=l_1\cdots l_k$, where $l_1\geq\cdots \geq l_k$ are good Lyndon words. Additionally, the bar involution on ${\mathcal{U}}_q({\mathfrak{n}})$ gives rise to a bar involution on ${\mathcal{W}}$, and hence, a \emph{dual canonical basis} $\{b^*_g\}$ labeled by good words. The transition matrix between the basis $\{E^*_g\}$ and $\{b^*_g\}$ is triangular and, in particular, $b^*_l=E^*_l$ for each good Lyndon word $l$. In what follows, let $\underline{w}$ denote the specialization at $q=1$ of an element $w\in{\mathcal{W}}$.
For ${\mathfrak{g}}$ of type $A_\infty=\underrightarrow{\lim}A_r$, good Lyndon words are labelled by segments $[a,b]$, and there is no difference between the standard and costandard ordering. In this case, for a good Lyndon word $l$, $\underline{E^*_l}=l$. The Mackey theorem for ${\mathcal{H}^{\mathrm{aff}}}(d)$ (see section \ref{SS:Mackey}) implies that the formal character of a standard module ${\mathcal{M}}(\beta,\alpha)$ is given by $\underline{E^*_g}$, where $g$ is the good word $[\alpha_1,\ldots,\beta_1-1,\ldots,\alpha_n,\ldots,\beta_n-1]$. A much deeper fact, proved by Ariki in \cite{ar}, is that the character of the simple module ${\mathcal{L}}(\beta,\alpha)$ is given by the dual canonical basis element $\underline{b^*_g}$.
Leclerc also studied the Lie algebra ${\mathfrak{b}}_r$ of type $B_r$, and hence that of type $B_\infty=\underrightarrow{\lim}B_r$. The good Lyndon words for ${\mathfrak{b}}_r$ with respect to the standard ordering are segments $[i,\ldots,j]$, $0\leq i\leq j<r$, and \emph{double segments} $[0,\ldots,j,0,\ldots,k]$, $0\leq j<k<r$ (cf. \cite[$\S8.2$]{lec}). In this case, when $l=[i,\ldots,j]$ is a segment, $\underline{b_l^*}=[i,\ldots,j]=\operatorname{ch}\Phi_{[i,j]}$. However, when $l=[0,\ldots,j,0,\ldots,k]$ is a double segment \begin{align}\label{E:StdDblSeg} \underline{b^*_l}=2[0]\cdot([0,\ldots,j]*[1,\ldots,k]). \end{align} When we adopt the costandard ordering, the picture becomes much more familiar. Indeed, the good Lyndon words are of the form $[i,\ldots,j]$, $0\leq i<j<r$, and $[j,\ldots,0,0,\ldots,k]$, $0\leq j<k<r$! In particular, they correspond to weights of the segment representations $\Phi_{[i,j]}$ and $\Phi_{[-j-1,k]}$ respectively. Moreover, for $l=[j,\ldots,0,0,\ldots,k]$ \[ \underline{b^*_{l}}=2[j,\ldots,0,0,\ldots,k]=\operatorname{ch} \Phi_{[-j-1,k]}. \]
Leclerc conjectures \cite[Conjecture 52]{lec} that for each good word $g$ of \emph{principal degree} $d$, there exists a simple ${\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)$-module with character given by $b^*_g$. We are not yet able to confirm the conjecture for general good words. However, the combinatorial construction of $H^{\lambda/\mu}$ immediately implies Leclerc's conjecture for calibrated representations (cf. \cite[Proposition 51]{lec} and Corollary \ref{C:characters}). Additionally, for each good Lyndon word $l$ (with respect to the costandard ordering), there is a simple module with character $b^*_l$.
Also, an application of the functor $F_\lambda$ gives a representation theoretic interpretation of \eqref{E:StdDblSeg} above. Indeed, let $\lambda=(k+1,j+1)$ and $\alpha=(1,-1)$. Then, \[ \operatorname{ch}{\mathcal{L}}(\lambda,-\alpha)=2[0]\cdot([0,\ldots,j]*[1,\ldots,k]). \]
Finally, the analysis of good Lyndon words leads to a classification of simple integral modules for ${\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)$. Indeed, recall the set \eqref{E:Bsubd}, and let \[ \mathcal{B}_d=\{(\lambda,\mu)\mid \lambda\in{P^{++}},\mbox{ and }\mu\in\mathcal{B}_d[\lambda]\}. \] Then,
\begin{thme} The following is a complete list of pairwise non-isomorphic simple modules for ${\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)$: \[ \{\,{\mathcal{L}}(\lambda,\mu) \mid (\lambda,\mu)\in \mathcal{B}_d\,\}. \] \end{thme}
We believe this paper may serve as a starting point for future investigations into categorification theories associated to non-simply laced Dynkin diagrams. In particular, we hope that the functor introduced here will play a role in showing that the 2-category for $\mathfrak{b}_\infty$, introduced by Khovanov-Lauda and independently by Rouquier, acts on $\mathcal{O}(\mathfrak{q}(n))$, see \cite{khl1,khl2,khl3,rq}. Additionally, in \cite{wz}, Wang and Zhao initiated a study of super analogues of $W$-algebras. This functor should be useful for studying these $W$-superalgebras along the lines of \cite{bk3, bk4}.
In \cite{b}, Brundan studied the category of finite dimensional modules for ${\mathfrak{q}}(n)$ via Kazhdan-Lusztig theory. Among the finite dimensional ${\mathfrak{q}}(n)$-modules are the polynomial representations, which correspond under our functor to calibrated representations. Other modules in this category are those associated to \emph{rational} weights, i.e.\ strict partitions with negative parts allowed. The functor should map these modules to interesting ${\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)$-modules. These should be investigated. It would also be interesting to compare the Kazhdan-Lusztig polynomials in \cite{b} to those appearing in \cite{lec}.
We now briefly outline the paper. In section~\ref{S:Prelim}, we review some basic notions of super representation theory. In section~\ref{S:ASA} we define the degenerate AHCA and review some of its properties, which may also be found in \cite{kl}. The standard modules and their irreducible quotients are introduced in section~\ref{S:standardreps}. The classification of the calibrated representations is given in section~\ref{S:Calibrated}. In section~\ref{S:Lie algebras} we review some basic notions about category $ \mathcal{O}(\mathfrak{q}(n)) $ which may be found in \cite{b,g}. Next, in section~\ref{S:LieTheoreticConstr} the functor is developed along with its properties. Finally, in section~\ref{S:Classification} a classification of simple modules is obtained.
\subsection{Acknowledgments}\label{SS:acknowlegements} The work presented in this paper was begun while the second author visited the Mathematical Sciences Research Institute in Berkeley, CA. He would like to thank the administration and staff of MSRI for their hospitality and especially the organizers of the ``Combinatorial Representation Theory'' and ``Representation Theory of Finite Groups and Related Topics'' programs for providing an exceptionally stimulating semester.
We would like to thank Mikhail Khovanov for suggesting we consider a super analogue of the Arakawa-Suzuki functor. We would also like to thank Bernard Leclerc for pointing out \cite{lec}, as well as Monica Vazirani and Weiqiang Wang for some useful comments.
\section{(Associative) Superalgebras and Their Modules}\label{S:Prelim} We now review some basics of the theory of superalgebras, following \cite{bk1,bk2,kl}. The objects in this theory are ${\mathbb{Z}}_2$-graded. Throughout the exposition, we will make definitions for homogeneous elements in this grading; these definitions should always be extended by linearity. Also, as the paper progresses we will often drop the prefix \emph{super}; however, we will always point out when we are explicitly ignoring the ${\mathbb{Z}}_2$-grading.
A vector superspace is a ${\mathbb{Z}}_2$-graded ${\mathbb{C}}$-vector space $V=V_{{\bar{0}}}\oplus V_{{\bar{1}}}$. Given a nonzero homogeneous vector $v\in V_{{\bar{i}}}$, let $p(v) = {\bar{i}} \in{\mathbb{Z}}_2$ be its \emph{parity}. Given a superspace $V$, let $\Pi V$ be the superspace obtained by reversing the parity. That is, $(\Pi V)_{{\bar{i}}}=V_{{\bar{i}}+{\bar{1}}}$. A supersubspace of $V$ is a \emph{graded} subspace $U\subseteq V$. That is, $U=(U\cap V_{{\bar{0}}})\oplus(U\cap V_{{\bar{1}}})$. Observe that $U$ is a supersubspace if, and only if, $U$ is stable under the map $v\mapsto(-1)^{p(v)}v$ for homogeneous vectors $v\in V$.
Given two superspaces $V,W$, the direct sum $V\oplus W$ and tensor product $V\otimes W$ satisfy $(V\oplus W)_{{\bar{i}}}=V_{\bar{i}}\oplus W_{\bar{i}}$ and \[ (V\otimes W)_{\bar{i}}=\bigoplus_{{\bar{j}}+{\bar{k}}={\bar{i}}}V_{\bar{j}}\otimes W_{\bar{k}}. \] We may regard $\operatorname{Hom}_{\mathbb{C}}(V,W)$ as a superspace by setting $\operatorname{Hom}_{\mathbb{C}}(V,W)_{\bar{i}}$ to be the set of all homogeneous linear maps of degree ${\bar{i}}$. That is, linear maps $\varphi:V\rightarrow W$ such that $\varphi(V_{\bar{j}})\subseteq W_{{\bar{j}}+{\bar{i}}}$. Finally, $V^*=\operatorname{Hom}_{\mathbb{C}}(V,{\mathbb{C}})$ is a superspace, where ${\mathbb{C}}={\mathbb{C}}_{\bar{0}}$.
Now, a superalgebra is a vector superspace $A$ that has the structure of an associative, unital algebra such that $A_{\bar{i}} A_{\bar{j}}\subseteq A_{{\bar{i}}+{\bar{j}}}$. A superideal of $A$ is a two sided ideal of $A$ that is also a supersubspace of $A$. A superalgebra homomorphism $\varphi:A\rightarrow B$ is an even (i.e.\ grading preserving) linear map which is also an algebra homomorphism. Observe that since $\varphi$ is even, its kernel, $\ker\varphi$, is a superideal of $A$. Finally, given superalgebras $A$ and $B$, their tensor product $A\otimes B$ is a superalgebra with product given by \begin{equation}\label{tensor product rule-algebra} (a\otimes b)(a'\otimes b')=(-1)^{p(a')p(b)}(aa'\otimes bb'). \end{equation}
We now turn our attention to supermodules. Given a superalgebra $A$, let $A$-smod denote the category of all finite dimensional $A$-supermodules, and $A$-mod be the category of $A$-modules in the usual ungraded sense. An object in $A$-smod is a ${\mathbb{Z}}_2$-graded left $A$-module $M=M_{\bar{0}}\oplus M_{\bar{1}}$ such that $A_{\bar{i}} M_{\bar{j}}\subseteq M_{{\bar{i}}+{\bar{j}}}$. A homomorphism of $A$-supermodules $M$ and $N$ is a map of vector superspaces $f:M\rightarrow N$ satisfying $f(am)=(-1)^{p(a)p(f)}af(m)$ when $f$ is homogeneous. A submodule of an $A$-supermodule $M$ will always be a supersubspace of $M$. An $A$-supermodule $M$ is called irreducible if it contains no proper nontrivial subsupermodules.
The supermodule $M$ may or may not remain irreducible when regarded as an object in $A$-mod. If $M$ remains irreducible as an $A$-module, it is called \emph{absolutely irreducible}, and if it decomposes, it is called \emph{self associate}. Alternatively, absolutely irreducible supermodules are said to be irreducible of type \texttt{M}, while self associate supermodules are irreducible of type \texttt{Q}. When $M\in A$-smod is self associate, there exists an odd $A$-smod homomorphism $\theta_M$ which interchanges the two irreducible components of $M$ as an object in $A$-mod.
Now, let $A$ and $B$ be superalgebras, $M\in A$-smod and $N\in B$-smod. The vector superspace $M\otimes N$ has the structure of an $A\otimes B$-supermodule, with action given by \begin{eqnarray}\label{tensor product rule-module} (a\otimes b)(m\otimes n)=(-1)^{p(b)p(m)}(am\otimes bn) \end{eqnarray} for homogeneous $b\in B$ and $m\in M$. This is called the outer tensor product of $M$ and $N$ and is denoted $M\boxtimes N$.
Unlike the classical situation, it may happen that the outer tensor product of irreducible supermodules is no longer irreducible. This only happens when both modules are self associate. To see this, let $M\in A$-smod and $N\in B$-smod be self associate, and recall the odd homomorphisms $\theta_M$ and $\theta_N$. Then, $\theta_M\otimes\theta_N:M\boxtimes N\rightarrow M\boxtimes N$ is an even automorphism of $M\boxtimes N$ that squares to $-1$. Hence $M\boxtimes N$ decomposes as a direct sum of two $A\otimes B$-supermodules, namely the $(\pm\sqrt{-1})$-eigenspaces. These two summands are absolutely irreducible and isomorphic under the odd isomorphism $\Theta_{M,N}:=\theta_M\otimes\mathrm{id}_N$, see \cite[Lemma 2.9]{bk1} and \cite[Section 2-b]{bk2}. When $M$ and $N$ are irreducible, define the (irreducible) $A\otimes B$-module $M\circledast N$ by the formula \begin{equation}\label{E:startensor} M\boxtimes N = \begin{cases} M\circledast N, & \text{if either $M$ or $N$ is of type \texttt{M};}\\
(M\circledast N)\oplus\Theta_{M,N}(M\circledast N),& \text{if both $M$ and $N$ are of type \texttt{Q}.} \end{cases} \end{equation} When $M=M'\oplus M''$, define $M\circledast N=(M'\circledast N)\oplus(M''\circledast N)$.
Finally, let $A\mbox{-smod}_{\mathrm{ev}}$ be the abelian subcategory of $A$-smod with the same objects, but only \emph{even} morphisms. Then, the Grothendieck group $K(A\mbox{-smod})$ is the quotient of the Grothendieck group $K(A\mbox{-smod}_{\mathrm{ev}})$ modulo the relations $[M]-[\Pi M]$ for every $A$-supermodule $M$. We would like to emphasize again that we allow odd morphisms in $A$-smod and, therefore, $M\cong\Pi M$ in the original category.
\section{The Degenerate Affine Hecke-Clifford Algebra}\label{S:ASA}
In this section we define the algebra which is the principal object of study in this paper and summarize the results we will require in what follows. Many of the results may be found in \cite{kl}; however, we include them here in an effort to make this paper self-contained and readable to a wider audience.
\subsection{The Algebra}\label{SS:Saffdef} Let ${\mathcal{C}\ell}(d)$ denote the Clifford algebra over ${\mathbb{C}}$ with generators $c_1,\ldots,c_d$, and relations \begin{eqnarray}\label{c} c_i^2=-1,\;\;\; c_ic_j=-c_jc_i\;\;\; 1\leq i\neq j\leq d. \end{eqnarray} Then ${\mathcal{C}\ell} (d)$ is a superalgebra by declaring the generators $c_{1}, \dotsc , c_{d}$ to all be of degree $\ensuremath{\bar{1}}$.
Let $S_d$ be the symmetric group on $d$ letters with Coxeter generators $s_1,\ldots, s_{d-1}$ and relations \begin{eqnarray}\label{s} s_i^2=1\;\;\; s_is_{i+1}s_i=s_{i+1}s_is_{i+1}\;\;\;s_is_j=s_js_i \end{eqnarray}
for all admissible $i$ and $j$ such that $|i-j|>1$. The group algebra of the symmetric group, ${\mathbb{C}} S_{d}$, is a superalgebra by viewing it as concentrated in degree $\ensuremath{\bar{0}}$; that is, $({\mathbb{C}} S_{d})_{\ensuremath{\bar{0}}}= {\mathbb{C}} S_{d}$.
The \emph{Sergeev algebra} is given by setting \[ {\mathcal{S}}(d)= {\mathcal{C}\ell}(d)\otimes {\mathbb{C}} S_d \] as a vector superspace and declaring ${\mathcal{C}\ell}(d) \cong {\mathcal{C}\ell}(d)\otimes 1$ and ${\mathbb{C}} S_d \cong 1\otimes{\mathbb{C}} S_d$ to be subsuperalgebras. The Clifford generators $c_1,\ldots,c_d$ and Coxeter generators $s_1,\ldots,s_{d-1}$ are subject to the mixed relation \begin{eqnarray}\label{c&s} s_ic_i=c_{i+1}s_i,\;\;\;s_ic_{i+1}=c_is_i,\;\;\; s_ic_j=c_js_i, \end{eqnarray} for all admissible $i$ and $j$ such that $j\neq i,i+1$.
The algebra of primary interest in this paper is the \emph{(degenerate) affine Hecke-Clifford algebra}, AHCA. It is given as \[ {\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d) = {\mathcal{P}}_d[x]\otimes{\mathcal{S}}(d) \] as a vector superspace, where ${\mathcal{P}}_d[x]:={\mathbb{C}}[x_1,\ldots,x_d]$ is the polynomial ring in $d$ variables and is viewed as a superalgebra concentrated in degree $\ensuremath{\bar{0}}$. Multiplication is defined so that ${\mathcal{S}} (d) \cong 1\otimes{\mathcal{S}} (d) $ and ${\mathcal{P}}_{d}[x] \cong {\mathcal{P}}_{d}[x] \otimes 1$ are subsuperalgebras. The generators of these two subalgebras are subject to the mixed relations \begin{eqnarray}\label{c&x} c_ix_i=-x_ic_i,\;\;\;c_jx_i=x_ic_j,\;\;\;1\leq i\neq j\leq d, \end{eqnarray} and \begin{eqnarray}\label{s&x} s_ix_i=x_{i+1}s_i-1+c_ic_{i+1},\;\;\;s_ix_j=x_js_i \end{eqnarray} for $1\leq i\leq d-1$, $1\leq j\leq d$, $j\neq i,i+1$.
Note that relation \eqref{s&x} differs from the corresponding relation in \cite{bk2,kl}. This is because in \eqref{c} we choose $c_{i}^{2}=-1$, following \cite{o,s,s2}, whereas in \emph{loc. cit.} the authors take $c_{i}^{2}=1$. The resulting algebras are isomorphic and the only effect of this convention is that this change of sign has to be taken into account when comparing formulae.
It will be useful to consider another decomposition \begin{equation}\label{E:AlternateDecomp} {\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d) \cong{\mathcal{A}}(d)\otimes{\mathbb{C}} S_d, \end{equation} where ${\mathcal{A}}(d)$ is the subalgebra generated by ${\mathcal{C}\ell}(d)$ and ${\mathcal{P}}_{d}[x]$. As a superspace \begin{equation}\label{E:Adef} {\mathcal{A}}(d) \cong {\mathcal{P}}_d[x]\otimes{\mathcal{C}\ell}(d). \end{equation}
We have the following PBW-type theorem for ${\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)$. Given $\alpha=(\alpha_1,\ldots,\alpha_d)\in{\mathbb{Z}}_{\geq0}^d$ and $\varepsilon=(\varepsilon_1,\ldots,\varepsilon_d)\in{\mathbb{Z}}_2^d$, set $x^\alpha=x_1^{\alpha_1}\cdots x_d^{\alpha_d}$ and $c^\varepsilon=c_1^{\varepsilon_1}\cdots c_d^{\varepsilon_d}$. Then,
\begin{thm}\cite[Theorem 14.2.2]{kl} The set $\{\,x^\alpha c^\varepsilon w\,|\,\alpha\in{\mathbb{Z}}_{\geq0}^d,\,\varepsilon\in{\mathbb{Z}}_2^d,\,w\in S_d\}$ forms a basis for ${\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)$. \end{thm}
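For example, when $d=1$ the group $S_1$ is trivial, so the theorem yields the basis $\{x_1^k,\,x_1^kc_1\mid k\in{\mathbb{Z}}_{\geq0}\}$; in particular, ${\mathcal{H}_{\Cl}^{\mathrm{aff}}}(1)={\mathcal{A}}(1)$.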
\subsection{Some (Anti)Automorphisms}\label{SS:alghomoms} The superalgebra ${\mathcal{H}_{\Cl}^{\mathrm{aff}}} (d)$ admits an automorphism $\sigma:{\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)\rightarrow{\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)$ given by \begin{equation}\label{E:sigmadef} \sigma(s_i)=-s_{d-i}, \hspace{.25in} \sigma(c_i)=c_{d+1-i}, \hspace{.25in} \sigma(x_i)=x_{d+1-i}. \end{equation}
It also admits an antiautomorphism $\tau:{\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)\rightarrow{\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)$ given by \[ \tau(s_i)=s_i, \hspace{.25in} \tau(c_i)=-c_i, \hspace{.25in} \tau(x_i)=x_i. \] Note that, for superalgebras, antiautomorphism means that, for any homogeneous $x,y \in {\mathcal{H}_{\Cl}^{\mathrm{aff}}} (d)$, \begin{equation}\label{E:taudef} \tau(xy) = (-1)^{p(x)p(y)}\tau(y) \tau(x). \end{equation}
\subsection{Weights and Integral Modules}\label{SS:weights} We now introduce the class of integral ${\mathcal{H}_{\Cl}^{\mathrm{aff}}} (d)$-modules. It is these modules which are the main focus of the paper. To this end, for each $a\in{\mathbb{C}}$, define \begin{equation}\label{E:qdef} q(a)=a(a+1). \end{equation} By \cite[Theorem 14.3.1]{kl}, the center of ${\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)$ consists of symmetric polynomials in $x_1^2,\ldots,x_d^2$. Let ${\mathcal{P}}_{d}[x^{2}]={\mathbb{C}}[x_1^2,\ldots,x_d^2]\subset{\mathcal{P}}_d[x]$. A \emph{weight} is an algebra homomorphism \[ \zeta:{\mathcal{P}}_d[x^2]\rightarrow{\mathbb{C}}. \] It is often convenient to identify a weight $\zeta$ with the $d$-tuple of complex numbers $\zeta=(\zeta(x_1^2),\ldots,\zeta(x_d^2))\in{\mathbb{C}}^d$.
Given an ${\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)$-supermodule $M$ and a weight $\zeta$, define the \emph{$\zeta$ weight space}, \[ M_\zeta=\left\{ m\in M \mid x_i^2m =q\left(\zeta\left( x_i^2\right)\right)m \text{ for all $i=1,\ldots,d$} \right\}, \] and the \emph{generalized $\zeta$ weight space}, \[ M_\zeta^{\mathrm{gen}} =\left\{ m\in M \mid \left( x_i^2-q\left(\zeta\left(x_i^2 \right)\right)\right)^km=0 \text{ for $k\gg 0$ and all $i=1,\ldots,d$} \right\}. \] Observe that if $M_\zeta^{\mathrm{gen}}\neq 0$, then $M_\zeta\neq0$.
Following \cite{bk2}, say that an ${\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)$-module $M$ is \emph{integral} if \[ M=\bigoplus_\zeta M_\zeta^{\text{gen}} \] and $M^{\text{gen}}_\zeta\neq0$ implies $\zeta\left( x_i^2\right)\in{\mathbb{Z}}$ for $i=1,\ldots,d$.
Let $\operatorname{Rep}{\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)$ denote the full subcategory of ${\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)$-smod of finite dimensional \emph{integral} modules for the degenerate AHCA. Unless stated otherwise, all ${\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)$-modules will be integral by assumption.
\subsection{The Mackey Theorem}\label{SS:Mackey} In this section we review the Mackey Theorem for integral ${\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)$-modules. Refer to \cite{kl} for details.
Let $\mu=(\mu_1,\ldots,\mu_k)$ be a composition of $d$. Define the parabolic subgroup $S_\mu=S_{\mu_1}\times\cdots\times S_{\mu_k}\subseteq S_d$, and parabolic subalgebra ${\mathcal{H}_{\Cl}^{\mathrm{aff}}}(\mu):={\mathcal{H}_{\Cl}^{\mathrm{aff}}}(\mu_1)\otimes\cdots \otimes {\mathcal{H}_{\Cl}^{\mathrm{aff}}}(\mu_k)\subseteq{\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)$. Define the functor \[ \operatorname{Ind}_\mu^d:\operatorname{Rep}{\mathcal{H}_{\Cl}^{\mathrm{aff}}}(\mu)\rightarrow\operatorname{Rep}{\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d),\;\;\; \operatorname{Ind}_\mu^dM={\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)\otimes_{{\mathcal{H}_{\Cl}^{\mathrm{aff}}}(\mu)}M. \] This functor is left adjoint to $\operatorname{Res}_\mu^d:\operatorname{Rep}{\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)\rightarrow\operatorname{Rep}{\mathcal{H}_{\Cl}^{\mathrm{aff}}}(\mu)$. Also, given a composition $\nu=(\nu_1,\ldots,\nu_\ell)$ of $d$ which is a refinement of $\mu$ (i.e.\ there exist $1=i_1\leq\cdots\leq i_{k+1}=\ell+1$ such that $\nu_{i_j}+\cdots+\nu_{i_{j+1}-1}=\mu_j$ for each $j$), define $\operatorname{Ind}_\nu^\mu$ and $\operatorname{Res}_\nu^\mu$ in the obvious way.
Now, let $\mu$ and $\nu$ be compositions of $d$, and let $D_{\mu,\nu}$ denote the set of minimal length $S_\mu\backslash S_d/S_\nu$-double coset representatives and $D_\nu=D_{(1^d),\nu}$. Let $w\in D_{\mu,\nu}$. The following lemma is standard.
\begin{lem}\label{L:MinCosetReps} Let $\nu=(\nu_1,\ldots,\nu_n)$ be a composition of $d$, and set $a_i=\nu_1+\cdots+\nu_{i-1}+1$ and $b_i=\nu_1+\cdots+\nu_i$. If $w\in D_\nu$ and $a_i\leq k<k'\leq b_i$ for some $i$, then $w(k)<w(k')$. \end{lem}
It is known that $S_\mu\cap wS_\nu w^{-1}$ and $w^{-1}S_\mu w\cap S_\nu$ are parabolic subgroups of $S_d$. Hence we may define compositions $\mu\cap w\nu$ and $w^{-1}\mu\cap\nu$ by the formulae \[ S_\mu\cap wS_\nu w^{-1}=S_{\mu\cap w\nu}\,\,\,\,\,\, {\mbox{and}} \,\,\,\,\,\, w^{-1}S_\mu w\cap S_\nu=S_{w^{-1}\mu\cap\nu}. \] Moreover, the map $\sigma\mapsto w^{-1}\sigma w$ induces a length preserving isomorphism $S_{\mu\cap w\nu}\rightarrow S_{w^{-1}\mu\cap\nu}$.
Using this last fact, it can be proved that for each $w\in D_{\mu,\nu}$ there exists an algebra isomorphism \[ \varphi_{w^{-1}}:{\mathcal{H}_{\Cl}^{\mathrm{aff}}}(\mu\cap w\nu)\rightarrow{\mathcal{H}_{\Cl}^{\mathrm{aff}}}(w^{-1}\mu\cap\nu) \] given by $\varphi_{w^{-1}}(\sigma)=w^{-1}\sigma w$, $\varphi_{w^{-1}}(c_i)=c_{w^{-1}(i)}$ and $\varphi_{w^{-1}}(x_i)=x_{w^{-1}(i)}$ for $1\leq i\leq d$ and $\sigma\in S_{\mu\cap w\nu}$. If $M$ is a left ${\mathcal{H}_{\Cl}^{\mathrm{aff}}}(\mu\cap w\nu)$-supermodule, let $^wM$ denote the ${\mathcal{H}_{\Cl}^{\mathrm{aff}}}(w^{-1}\mu\cap\nu)$-supermodule obtained by twisting the action with the isomorphism $\varphi_{w^{-1}}$. We have the following ``Mackey Theorem'':
\begin{thm}\label{Mackey}\cite[Theorem 14.2.5]{kl} Let $M$ be an ${\mathcal{H}_{\Cl}^{\mathrm{aff}}}(\nu)$-supermodule. Then $\operatorname{Res}_\mu^d\operatorname{Ind}_\nu^dM$ admits a filtration with subquotients isomorphic to \[ \operatorname{Ind}_{\mu\cap w\nu}^\mu{}^w(\operatorname{Res}_{w^{-1}\mu\cap\nu}^\nu M), \] one for each $w\in D_{\mu,\nu}$. Moreover the subquotients can be taken in any order refining the Bruhat order on $D_{\mu,\nu}$. In particular, $\operatorname{Ind}_{\mu\cap\nu}^\mu\operatorname{Res}_{\mu\cap\nu}^\nu M$ appears as a subsupermodule. \end{thm}
\subsection{Characters}\label{SS:characters} Following \cite[Chapter 16]{kl}, we now describe the notion of characters for integral ${\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)$-supermodules.
Recall the subsuperalgebra ${\mathcal{A}}(d)\subseteq{\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)$ defined in \eqref{E:Adef}. When $d=1$ and $a\in{\mathbb{Z}}$ there exists a $2$-dimensional simple ${\mathcal{A}}(1)$-module \[ {\mathcal{L}}(a)={\mathcal{C}\ell}(1)1_a=\C1_a\oplus{\mathbb{C}} c_1.1_a, \] which is free as a ${\mathcal{C}\ell} (1)$-module satisfying \[ x_1.1_a=\sqrt{q(a)}1_a. \] The ${\mathbb{Z}}_{2}$-grading on ${\mathcal{L}}(a)$ is given by setting $p(1_{a})=\ensuremath{\bar{0}}$.
Observe that ${\mathcal{L}}(a)\cong{\mathcal{L}}(-a-1)$, and that replacing $\sqrt{q(a)}$ with $-\sqrt{q(a)}$ in the action of $x_1$ yields an isomorphic supermodule via the odd isomorphism $1_a\mapsto c_1.1_a$. A direct calculation verifies that this module is of type \texttt{M} if $a\neq 0$ and of type \texttt{Q} if $a=0$.
Now, ${\mathcal{A}}(d) \cong {\mathcal{A}}(1)\otimes\cdots\otimes{\mathcal{A}}(1)$. Hence, applying \eqref{E:startensor} we obtain a simple ${\mathcal{A}} (d)$-module ${\mathcal{L}}(a_1)\circledast\cdots\circledast{\mathcal{L}}(a_d)$. Given $(a_{1}, \dotsc , a_{d})\in{\mathbb{Z}}^d_{\geq0}$, let \begin{equation}\label{E:gammazerodef}
\gamma_{0}(a_{1}, \dotsc, a_{d})=|\{ i \mid a_i=0 \}|. \end{equation} We have
\begin{lem}\label{A(d) irreducibles}\cite[Lemma 16.1.1]{kl} The set \[ \left\{{\mathcal{L}}(a_1)\circledast\cdots\circledast {\mathcal{L}}(a_d) \mid (a_1,\ldots,a_d)\in{\mathbb{Z}}_{\geq0}^d \right\} \] is a complete set of pairwise non-isomorphic irreducible integral ${\mathcal{A}} (d)$-modules.
The module ${\mathcal{L}}(a_1)\circledast\cdots\circledast{\mathcal{L}}(a_d)$ is of type \texttt{M} if $\gamma_0$ is even and of type \texttt{Q} if $\gamma_0$ is odd. Moreover, \[ \dim{\mathcal{L}}(a_1)\circledast\cdots\circledast{\mathcal{L}}(a_d)=2^{d-\lfloor\gamma_0/2\rfloor} \] where $\gamma_0=\gamma_0(a_1,\ldots,a_d)$ as above. \end{lem}
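For example, when $d=2$ the module ${\mathcal{L}}(0)\circledast{\mathcal{L}}(0)$ has $\gamma_0=2$, so it is of type \texttt{M} and has dimension $2^{2-1}=2$, whereas ${\mathcal{L}}(0)\circledast{\mathcal{L}}(1)$ has $\gamma_0=1$, so it is of type \texttt{Q} and has dimension $2^{2-0}=4$.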
Restriction to the subalgebra ${\mathcal{A}}(d)={\mathcal{H}_{\Cl}^{\mathrm{aff}}}((1^d))\subseteq{\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)$ defines a functor from $\operatorname{Rep}{\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)$ to ${\mathcal{A}} (d)$-mod. Applying this functor and passing to the Grothendieck group of the category ${\mathcal{A}}(d)$-mod yields a map \[ \operatorname{ch}:\operatorname{Rep}{\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)\rightarrow K({\mathcal{A}}(d)\mbox{-mod}) \] defined by \[ \operatorname{ch} M=\left[ \operatorname{Res}^{d}_{1^d}M \right], \] where $[X]$ denotes the image of an ${\mathcal{A}}(d)$-module $X$ in $K({\mathcal{A}}(d)\mbox{-mod})$. The image $\operatorname{ch} M$ is called the \emph{formal character} of the ${\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)$-module $M$.
The following fundamental result is given in \cite[Theorem 17.3.1]{kl}. \begin{lem}\label{L:independenceofcharacters} The induced map on Grothendieck groups \[ \operatorname{ch} : K(\operatorname{Rep}{\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)) \to K({\mathcal{A}} (d)\text{-mod}) \] is injective.
\end{lem}
For convenience of notation, set \[ [a_1,\ldots,a_d]=[{\mathcal{L}}(a_1)\circledast\cdots\circledast {\mathcal{L}}(a_d)]. \] The following lemma describes how to calculate the character of $M \circledast N$ in terms of the characters of $M$ and $N$, and is a special case of the Mackey Theorem:
\begin{lem}\label{L:ShuffleLemma}\cite[Shuffle Lemma]{kl} Let $K$ be a simple ${\mathcal{H}_{\Cl}^{\mathrm{aff}}}(k)$-supermodule and $M$ a simple ${\mathcal{H}_{\Cl}^{\mathrm{aff}}}(m)$-supermodule, and assume that \[ \operatorname{ch} K=\sum_{{\underline{i}}\in{\mathbb{Z}}_{\geq0}^k}r_{{\underline{i}}}[i_1,\ldots,i_k]\,\,\,\,\,\, {\mbox{and}} \,\,\,\,\,\, \operatorname{ch} M=\sum_{{\underline{j}}\in{\mathbb{Z}}_{\geq0}^m}s_{{\underline{j}}}[j_1,\ldots,j_m]. \] Then, \begin{eqnarray*} \operatorname{ch}\operatorname{Ind}_{k,m}^{k+m}K\circledast M =\sum_{{\underline{i}},{\underline{j}}}r_{{\underline{i}}}s_{{\underline{j}}}[i_1,\ldots,i_k]*[j_1,\ldots,j_m] \end{eqnarray*} where \[ [i_1,\ldots,i_k]*[i_{k+1},\ldots,i_{k+m}] =\sum_{w\in D_{(k,m)}}[w(i_1),\ldots,w(i_{k+m})]. \] \end{lem}
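For example, taking $k=m=1$ with $\operatorname{ch} K=[a]$ and $\operatorname{ch} M=[b]$, the Shuffle Lemma reads \[ \operatorname{ch}\operatorname{Ind}_{1,1}^{2}K\circledast M=[a]*[b]=[a,b]+[b,a], \] the two summands corresponding to the two subquotients in the Mackey filtration of $\operatorname{Res}_{(1,1)}^2\operatorname{Ind}_{(1,1)}^2$.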
\subsection{Duality}\label{SS:duality} Now, given an ${\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)$-module $M$, we obtain a new module $M^\sigma$ by twisting the action of ${\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)$ by $\sigma$. That is, define a new action, $*$, on $M$ by $x*m=\sigma(x).m$ for all $x\in{\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)$. We have
\begin{lem}\label{sm twisted action}\cite[Lemma 14.6.1]{kl} If $M$ is an ${\mathcal{H}_{\Cl}^{\mathrm{aff}}}(k)$-module and $N$ is an ${\mathcal{H}_{\Cl}^{\mathrm{aff}}}(\ell)$-module, then \[ (\operatorname{Ind}_{k,\ell}^{k+\ell}M\circledast N)^\sigma \cong\operatorname{Ind}_{k,\ell}^{k+\ell}M^\sigma\circledast N^\sigma. \] \end{lem}
If $M$ is an ${\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)$-module, with character \[ \operatorname{ch} M=\sum_{{\underline{i}}\in{\mathbb{Z}}_{\geq0}^d}r_{{\underline{i}}}[i_1,\ldots,i_d], \] then Lemma ~\ref{sm twisted action} implies that \[ \operatorname{ch} M^{\sigma}=\sum_{{\underline{i}}\in{\mathbb{Z}}_{\geq0}^d}r_{{\underline{i}}}[i_d,\ldots,i_1]. \]
\subsection{Contravariant Forms}\label{SS:contravariantforms} Let $M$ be in $\operatorname{Rep}{\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)$. A bilinear form $(\cdot,\cdot):M\otimes M\rightarrow{\mathbb{C}}$ is called a contravariant form if \[ (x.v,v')=(v,\tau(x).v') \] for all $x\in{\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)$ and $v,v'\in M$.
\begin{lem}\label{L:ASeContraForm} Let $M$ be in $\operatorname{Rep}{\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)$ equipped with a contravariant form $(\cdot,\cdot)$. Then \[ M_\eta\perp M_\zeta^{\mathrm{gen}}\;\;\;\mbox{unless}\;\;\;\eta=\zeta. \] \end{lem}
\begin{proof} Assume $\eta\neq\zeta$, and let $v\in M_\eta$ and $v'\in M^{\mathrm{gen}}_\zeta$. Choose $i$ such that $q(\eta(x_i^2))\neq q(\zeta(x_i^2))$, and $N\gg0$ such that \[ (x_i^2-q(\zeta(x_i^2)))^N.v'=0. \] Then \begin{align*} (q(\eta(x_i^2))-q(\zeta(x_i^2)))^N(v,v') =&((x_i^2-q(\zeta(x_i^2)))^N.v,v')\\ =&(v,\tau((x_i^2-q(\zeta(x_i^2)))^N).v')\\ =&(v,(x_i^2-q(\zeta(x_i^2)))^N.v')=0 \end{align*} showing that $(v,v')=0$. \end{proof}
\subsection{Intertwiners} Define the intertwiner \begin{eqnarray}\label{E:intertwiner} \phi_i=s_i(x_i^2-x_{i+1}^2)+(x_i+x_{i+1})-c_ic_{i+1}(x_i-x_{i+1}). \end{eqnarray} Given an ${\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)$-supermodule $M$, one checks that $\phi_iM_{\zeta}^{\mathrm{gen}}\subseteq M_{s_i(\zeta)}^{\mathrm{gen}}$. Moreover, a straightforward calculation gives \begin{eqnarray}\label{E:intertwinersquared} \phi_i^2=2x_i^2+2x_{i+1}^2-(x_i^2-x_{i+1}^2)^2. \end{eqnarray} The following lemma now follows directly (see also \cite{kl}).
\begin{lem}\label{L:InvertibleIntertwiner} Assume that $Y$ is in $\operatorname{Rep}{\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)$, and $v\in Y$ satisfies $x_i.v=\sqrt{q(a)}v$ and $x_{i+1}.v=\sqrt{q(b)}v$ for some $a,b\in{\mathbb{Z}}$. Then, $\phi_i^2.v\neq 0$ unless $q(a)=q(b+1)$ or $q(a)=q(b-1)$. \end{lem}
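Indeed, for $v$ as in the lemma, \eqref{E:intertwinersquared} gives \[ \phi_i^2.v=\big(2q(a)+2q(b)-(q(a)-q(b))^2\big)v. \] With the convention $q(a)=a(a+1)$ of \cite{kl}, this scalar factors as $-(q(a)-q(b+1))(q(a)-q(b-1))$, which vanishes exactly when $q(a)=q(b+1)$ or $q(a)=q(b-1)$.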
\section{Standard Modules}\label{S:standardreps} We construct a family of standard modules which are an analogue of Zelevinsky's construction for the degenerate affine Hecke algebra. The key ingredient is to define certain irreducible supermodules for a parabolic subalgebra of ${\mathcal{H}_{\Cl}^{\mathrm{aff}}} (d)$; the so-called segment representations. The standard modules are then obtained by inducing from the outer tensor product of these modules.
\subsection{Segment Representations}\label{subsection irred modules} We begin by constructing a family of irreducible ${\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)$-supermodules that are analogues of Zelevinsky's segment representations for the degenerate affine Hecke algebra. To begin, define the $2^d$-dimensional ${\mathcal{S}}(d)$-supermodule \begin{equation}\label{E:Cldef} {\mathcal{C}\ell}_{d}=\operatorname{Ind}_{S_d}^{{\mathcal{S}}(d)}{\mathbb{C}}{\mathbf{1}}, \end{equation} where ${\mathbb{C}}{\mathbf{1}}$ is the trivial representation of $S_d$. That is, ${\mathcal{C}\ell}_{d}={\mathcal{C}\ell}(d).{\mathbf{1}}$, where the cyclic vector ${\mathbf{1}}$ satisfies \begin{eqnarray*} w.{\mathbf{1}}={\mathbf{1}},\;\;\;w\in S_d. \end{eqnarray*} This is often referred to as the \emph{basic spin representation} of ${\mathcal{S}}(d)$.
Introduce algebra involutions $\epsilon_i:{\mathcal{C}\ell}(d)\rightarrow{\mathcal{C}\ell}(d)$ by $\epsilon_i(c_j)=(-1)^{\delta_{ij}}c_j$ for $1\leq i,j\leq d$. The elements $ \epsilon_i $ act on ${\mathcal{C}\ell}_{d}$ by $\epsilon_i.{\mathbf{1}}={\mathbf{1}}$ for $1\leq i\leq d$ and, more generally, $\epsilon_i.s{\mathbf{1}}=\epsilon_{i}(s){\mathbf{1}}$ for $s\in{\mathcal{C}\ell}(d)$. Also, note that the operators $ \epsilon_i $ commute with each other.
For each $a\in{\mathbb{Z}}$, define the Clifford algebra
\begin{equation}\label{Pha} \Phi_a= \begin{cases} {\mathbb{C}}\langle \varphi \rangle / (\varphi^2-a),
&\text{if $a \neq 0$}; \\
{\mathbb{C}} \langle \varphi \rangle / (\varphi),
& \text{if $a=0$}.\end{cases} \end{equation} The ${\mathbb{Z}}_{2}$-grading on $\Phi_{a}$ is given by declaring $p(\varphi)={\bar{1}}$.
Given a pair of integers $a\leq b$ define the \emph{segment} \begin{equation*} [a,b]=\{a,a+1,\ldots,b\}. \end{equation*} Given a segment $[a,b]$ with $b-a+1=d\in{\mathbb{Z}}_{\geq0}$, define the $\Phi_a\otimes {{\mathcal{S}}}(d)$-module \begin{equation}\label{E:segment} \hat{\Phi}_{[a,b]}=\Phi_a\boxtimes{\mathcal{C}\ell}_{d}. \end{equation}
Of course, when $d=0$ the segment $[a,a-1]=\emptyset$, and $\hat{\Phi}_{\emptyset}=\Phi_a\otimes{\mathbb{C}}$.
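For example, the segment $[2,4]$ has $d=3$, and $\hat{\Phi}_{[2,4]}=\Phi_2\boxtimes{\mathcal{C}\ell}_{3}$ is of dimension $2\cdot2^3=16$, while $\hat{\Phi}_{[0,2]}=\Phi_0\boxtimes{\mathcal{C}\ell}_{3}$ is of dimension $2^3=8$ since $\Phi_0\cong{\mathbb{C}}$.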
For $1\leq j<i\leq d$ let $s_{ij}$ denote the transposition $(ij)$, and for $i=1, \dotsc ,d$ let \begin{align}\label{E:JMelt} {\mathcal{L}}_i=\sum_{j<i}(1-c_jc_i)s_{ij} \end{align} be the \emph{$i$th Jucys-Murphy element} (cf. \cite[(13.22)]{kl}).
\begin{prp}\label{segment representation} Let $[a,b]$ be a segment with $b-a+1=d.$ Then, \begin{enumerate} \item[(i)] The vector space $\hat{\Phi}_{[a,b]}$ is an ${\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)$-module with $s_i.v=(1\otimes s_i).v$, $c_i.v=(1\otimes c_i).v$ and \begin{align*} x_i.v &= \left(a\otimes \epsilon_i+1\otimes {\mathcal{L}}_{i}-\varphi\otimes c_i\right).v \\ &=\left(a\otimes \epsilon_i+\sum_{k<i}1\otimes(1-c_kc_i)s_{ki}-\varphi\otimes c_i\right).v, \end{align*} for all $v\in\hat{\Phi}_{[a,b]}$. \item[(ii)] The action of ${\mathcal{P}}_d[x^2]$ on $\hat{\Phi}_{[a,b]}$ is determined by \[ x_i^2.(\varphi^{\delta}\otimes {\mathbf{1}})=q(a+i-1)\varphi^\delta\otimes{\mathbf{1}},\;\;\;\delta\in\{0,1\}, \;\;\;i=1,\ldots,d. \] \end{enumerate} \end{prp}
\begin{proof} (i) The fact that this is an ${\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)$-module is an easy check which we leave to the reader.
(ii) To check the action of $x_i^2$, observe that \[ x_i.1\otimes{\mathbf{1}}=\left(a+i-1-\sum_{j<i}c_jc_i\right).1\otimes{\mathbf{1}} +c_i.\varphi\otimes{\mathbf{1}} \] and \[ x_i.\varphi\otimes{\mathbf{1}}=\left(a+i-1-\sum_{j<i}c_jc_i\right).\varphi\otimes{\mathbf{1}} +ac_i.1\otimes{\mathbf{1}}. \] Now, the result follows using the commutation relations for ${\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)$. \end{proof}
\begin{rmk}\label{R:Duality} In fact, we need not consider all $a,b\in{\mathbb{Z}}$. Given any segment $[a,b]$, consider the module $\hat{\Phi}_{[a,b]}^\sigma$ obtained by twisting the action of ${\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)$ by the automorphism $\sigma$ as described in Section~\ref{SS:duality}. Note that when $b\neq-1$, \[ \hat{\Phi}_{[a,b]}^\sigma\cong\hat{\Phi}_{[-b-1,-a-1]}. \] When $b=-1$, $\hat{\Phi}_{[a,-1]}^\sigma\cong\hat{\Phi}_{[0,-a-1]}^{\oplus2}$. In particular, for $b\neq0$, $\hat{\Phi}_{[-(b+1),b-1]}^\sigma\cong\hat{\Phi}_{[-b,b]}$, and $\hat{\Phi}_{[-1,-1]}^\sigma\cong\hat{\Phi}_{[0,0]}^{\oplus2}$. Therefore, it is enough to describe the modules \begin{enumerate} \item $\hat{\Phi}_{[a,b]}$, $0\leq a\leq b$, and \item $\hat{\Phi}_{[-a,b]}$, $0<a\leq b$. \end{enumerate} \end{rmk} The following result describes $\hat{\Phi}_{[a,b]}$ at the level of characters.
\begin{prp}\label{character formula} Let $[a,b]$ be a segment with $a,b \geq 0.$ Then, \begin{enumerate} \item if $0\leq a\leq b$, then \begin{equation*} \operatorname{ch}\hat{\Phi}_{[a,b]}=\begin{cases}[a,\ldots,b],
&\text{if $a=0$};\\ 2[a,\ldots,b], &\text{if $a \neq 0$};\end{cases} \end{equation*}
\item if $0<a\leq b$, then \[ \operatorname{ch}\hat{\Phi}_{[-a,b]}=4[a-1,\ldots,1,0,0,1,\ldots,b]. \] \end{enumerate} \end{prp}
\begin{proof} The action of $x_i^2$ commutes with ${\mathcal{C}\ell}(d)$ and $\hat{\Phi}_{[a,b]}={\mathcal{C}\ell}(d).(1\otimes{\mathbf{1}})+{\mathcal{C}\ell}(d).(\varphi\otimes{\mathbf{1}})$. Therefore, applying Proposition~\ref{segment representation}(ii), we deduce in both cases that the $x_i^2$ act by the prescribed eigenvalues. The result now follows from the dimension formula in Lemma \ref{A(d) irreducibles}. \end{proof}
Let $\varphi\hat{{\mathbf{1}}}_{[a,b]}=\varphi\otimes{\mathbf{1}}$ and $\hat{{\mathbf{1}}}_{[a,b]}=1\otimes{\mathbf{1}}$. Also, in what follows, we omit the tensor symbols. For example, we write \[ a\epsilon_i+{\mathcal{L}}_i-\varphi c_i:=a\otimes \epsilon_i+1\otimes {\mathcal{L}}_{i}-\varphi\otimes c_i. \]
\begin{dfn}\label{X} Let $a\in{\mathbb{Z}}$ and $\kappa_1,\ldots,\kappa_d\in{\mathbb{R}}$ satisfy $\kappa_i^2=q(a+i-1)$ where $d=b-a+1$. Given a subset $S\subseteq\{1,\ldots,d\}$ define the element $X_{S} \in {\mathcal{H}_{\Cl}^{\mathrm{aff}}} (d)$ by \[ X_S=\prod_{i\notin S}(x_i+\kappa_i). \] Observe that $X_S$ is only defined up to the choices of sign for $\kappa_1,\ldots,\kappa_d$. \end{dfn}
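For example, if $[a,b]=[1,2]$, so $d=2$, then $X_{\{2\}}=x_1+\kappa_1$ and $X_\emptyset=(x_1+\kappa_1)(x_2+\kappa_2)$, where $\kappa_1^2=q(1)$ and $\kappa_2^2=q(2)$; taking $S=\{1,2\}$ gives $X_S=1$.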
\begin{lem}\label{nonzero} Let $[a,b]$ be a segment with $d=b-a+1$. Assume that either $-a\notin\{1,\ldots,d\}$ and $S$ is arbitrary, or assume that $-a\in\{1,\ldots,d\}$ and either $-a+1\in S$ or $-a\in S$. Then $X_S.\hat{{\mathbf{1}}}_{[a,b]}\neq0$. \end{lem}
\begin{proof} Let $\hat{{\mathbf{1}}}=\hat{{\mathbf{1}}}_{[a,b]}$. By Proposition~\ref{segment representation}(i), \[ x_k.v=(a\epsilon_k+{\mathcal{L}}_k-\varphi c_k).v. \] Let $\{d_1>d_2>\ldots>d_\ell\}=\{1,\ldots,d\}\backslash S$. Since the $x_i$ mutually commute, \begin{eqnarray*} X_S.\hat{{\mathbf{1}}}&=&(x_{d_1}+\kappa_{d_1})\cdots(x_{d_\ell}+\kappa_{d_\ell}).\hat{{\mathbf{1}}}\\
&=&(a\epsilon_{d_1}+\kappa_{d_1}+{\mathcal{L}}_{d_1}-\varphi c_{d_1})\cdots
(a\epsilon_{d_\ell}+\kappa_{d_\ell}+{\mathcal{L}}_{d_\ell}-\varphi c_{d_\ell}).\hat{{\mathbf{1}}}\\
&=&((a+\kappa_{d_1})+{\mathcal{L}}_{d_1}-\varphi c_{d_1})\cdots
((a+\kappa_{d_\ell})+{\mathcal{L}}_{d_\ell}-\varphi c_{d_\ell}).\hat{{\mathbf{1}}}. \end{eqnarray*} The last equality follows since $\epsilon_k{\mathcal{L}}_j={\mathcal{L}}_j\epsilon_k$ if $k>j$. Now, \begin{eqnarray}\label{w 1} X_S.\hat{{\mathbf{1}}} &=&\bigg(\bigg(a+\kappa_{d_1}+\sum_{j<d_1}s_{jd_1}\bigg)+ \bigg(\sum_{j<d_1}s_{jd_1}c_j-\varphi \bigg)c_{d_1}\bigg)\cdots\\\nonumber&&\hspace{1.5in}\cdots
\bigg(\bigg(a+\kappa_{d_\ell}+\sum_{j<d_\ell}s_{jd_\ell}\bigg)
+\bigg(\sum_{j<d_\ell}s_{jd_\ell}c_j-\varphi \bigg)c_{d_\ell}\bigg).\hat{{\mathbf{1}}}\\\nonumber
&=&\prod_{i\notin S}(a+i-1+\kappa_i).\hat{{\mathbf{1}}}+(\bigstar).\hat{{\mathbf{1}}} \end{eqnarray} where $(\bigstar)=p'(c)-\varphi p''(c)$ with $p'(c)\in{\mathcal{C}\ell}(d)_{\bar{0}}$, $p''(c)\in{\mathcal{C}\ell}(d)_{\bar{1}}$, and $p'(c)$ has no constant term. Therefore, if either $a\geq 0$ or $-a+1\in S$, then $X_S.\hat{{\mathbf{1}}}\neq 0$.
Now, assume $-a+1\in\{1,\ldots,d\}$, and $-a+1\notin S$, but $-a\in S$. Observe that $\kappa_{-a+1}=\kappa_{-a}=0$. Now, \begin{eqnarray}\label{nonzero 2} x_{-a}.\hat{{\mathbf{1}}}=\left(-1-\sum_{j<-a}c_jc_{-a}-\varphi c_{-a}\right).\hat{{\mathbf{1}}}=-c_{-a}c_{-a+1}x_{-a+1}.\hat{{\mathbf{1}}}. \end{eqnarray} Let $R=S\cup\{-a+1\}$ and $T=R\backslash\{-a\}$. Then, \[ X_S.\hat{{\mathbf{1}}}=X_{R}x_{-a+1}.\hat{{\mathbf{1}}}=c_{-a}c_{-a+1}X_{R}x_{-a}.\hat{{\mathbf{1}}} =c_{-a}c_{-a+1}X_T.\hat{{\mathbf{1}}}\neq0. \]
Finally, if $d=-a$, then in \eqref{w 1}, $d_1=-a$ and it is clear that the coefficient of $c_{-a-1}c_{-a}$ is nonzero. \end{proof}
\begin{lem}\label{A submodule} If $i\notin S$, then $x_iX_S.\hat{{\mathbf{1}}}=\kappa_iX_S.\hat{{\mathbf{1}}}$. \end{lem}
\begin{proof} Since $x_{i}^{2}.\hat{{\mathbf{1}}} = q(a+i-1)\hat{{\mathbf{1}}}= \kappa_{i}^{2}\hat{{\mathbf{1}}}$, \begin{eqnarray*} x_i(x_i+\kappa_i).\hat{{\mathbf{1}}}=(x_i^2+\kappa_ix_i)\hat{{\mathbf{1}}}=\kappa_i(\kappa_i+x_i)\hat{{\mathbf{1}}}, \end{eqnarray*} so the result follows because the $x_i$ commute. \end{proof}
\begin{lem}\label{s_i action} If $i,i+1\notin S$ and $i\neq-a$, then \[ s_iX_S.\hat{{\mathbf{1}}}=\left(\frac{\kappa_{i+1}+\kappa_i}{2(a+i)}+
\frac{\kappa_{i+1}-\kappa_i}{2(a+i)}c_ic_{i+1}\right)X_S.\hat{{\mathbf{1}}}. \] \end{lem}
\begin{proof} Let $w:=X_S.\hat{{\mathbf{1}}}$, and recall the intertwining element $\phi_i$. By character considerations $\phi_i.\hat{\Phi}_{[a,b]}=\{0\}$. In particular, \begin{eqnarray*} 0&=&\phi_i.w\\
&=&(s_i(x_i^2-x_{i+1}^2)+(x_i+x_{i+1})-c_ic_{i+1}(x_i-x_{i+1})).w\\
&=&-2(a+i)s_i.w+((\kappa_{i+1}+\kappa_i)+(\kappa_{i+1}-\kappa_i)c_ic_{i+1}).w. \end{eqnarray*} Hence, the result. \end{proof}
We can now describe the irreducible segment representations of ${\mathcal{H}_{\Cl}^{\mathrm{aff}}} (d).$ \begin{thm}\label{module decomposition} The following holds: \begin{enumerate} \item[(i)] The module $\hat{\Phi}_{[0,d-1]}$ is an irreducible ${\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)$-module of type \texttt{Q}.
\item[(ii)] Assume $0<a\leq b$. The module $\hat{\Phi}_{[a,b]}$, has a submodule $\hat{\Phi}_{[a,b]}^+={\mathcal{C}\ell}(d).w$, where $w=X_\emptyset.\hat{{\mathbf{1}}}$. Moreover, if $w'=(x_1-\kappa_1)X_{\{1\}}.\hat{{\mathbf{1}}}$, and $\hat{\Phi}_{[a,b]}^-={\mathcal{C}\ell}(d).w'$, then \[ \hat{\Phi}_{[a,b]}=\hat{\Phi}_{[a,b]}^+ \oplus \hat{\Phi}_{[a,b]}^-. \] The submodules $\hat{\Phi}_{[a,b]}^\pm$ are simple modules of type \texttt{M}.
\item[(iii)] If $0<a\leq b$, the module $\hat{\Phi}_{[-a,b]}$ has a submodule $\hat{\Phi}_{[-a,b]}^+={\mathcal{C}\ell}(d)w\oplus{\mathcal{C}\ell}(d)\overline{w}$, where \[ w=-(1+\sqrt{-1}c_ac_{a+1})X_{\{a+1\}}.\hat{{\mathbf{1}}}\,\,\,\,\,\, {\mbox{and}} \,\,\,\,\,\,\overline{w}=s_aw. \] Moreover, if \[ w'=-(1-\sqrt{-1}c_ac_{a+1})X_{\{a+1\}}.\hat{{\mathbf{1}}},\;\;\;\overline{w}'=s_aw', \] and $\hat{\Phi}_{[-a,b]}^-={\mathcal{C}\ell}(d)w'\oplus{\mathcal{C}\ell}(d)\overline{w}'$, then \[ \hat{\Phi}_{[-a,b]}=\hat{\Phi}_{[-a,b]}^+\oplus \hat{\Phi}_{[-a,b]}^-. \] The submodules $\hat{\Phi}_{[-a,b]}^\pm$ are simple of type \texttt{M}. \end{enumerate} \end{thm}
\begin{proof} (i) That $\hat{\Phi}_{[0,d-1]}$ is irreducible follows by character considerations. To determine its type, observe that it has two \emph{non-homogeneous} submodules: \[ {\mathcal{C}\ell}(d)(\sqrt{-d}+(c_1+\cdots+c_d)).\hat{{\mathbf{1}}}_{[0,d-1]}\,\,\,\,\,\, {\mbox{and}} \,\,\,\,\,\, {\mathcal{C}\ell}(d)(\sqrt{-d}-(c_1+\cdots+c_d)).\hat{{\mathbf{1}}}_{[0,d-1]}. \] These vector spaces are clearly stable under the action of ${\mathcal{S}}(d)$. Since $ x_1 $ acts by zero on these vector spaces, the action of ${\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)$ factors through ${\mathcal{S}}(d)$, and thus these vector spaces are ${\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)$-submodules. Therefore $\hat{\Phi}_{[0,d-1]}$ is of type \texttt{Q} (cf. Section~\ref{S:Prelim}).
(ii) Let $\hat{{\mathbf{1}}}=\hat{{\mathbf{1}}}_{[a,b]}$, $w=X_\emptyset.\hat{{\mathbf{1}}}$ and $\hat{\Phi}_{[a,b]}^+={\mathcal{C}\ell}(d).w$. By Lemma \ref{nonzero}, $w\neq 0$. Now, Lemmas \ref{A submodule} and \ref{s_i action} together imply that $\hat{\Phi}_{[a,b]}^+$ is a submodule.
It now remains to show that $\hat{\Phi}_{[a,b]}=\hat{\Phi}_{[a,b]}^+\oplus \hat{\Phi}_{[a,b]}^-$, where $\hat{\Phi}_{[a,b]}^-$ is as in the statement of the theorem. To this end, suppose for contradiction that $w'\in \hat{\Phi}_{[a,b]}^+$; that is, there exists $p(c)\in{\mathcal{C}\ell}(d)$ such that $p(c).w=w'$. Write \[ p(c)=\sum_{\varepsilon}a_\varepsilon c^\varepsilon, \] where the sum is over $\varepsilon=(\varepsilon_1,\ldots,\varepsilon_d)\in{\mathbb{Z}}_2^d$. Then, for $1\leq i\leq d$, \begin{eqnarray*} (-1)^{\delta_{1i}}w'&=&\frac{1}{\kappa_i}x_i.w'
=\frac{1}{\kappa_i}x_i\left(\sum_{\varepsilon}a_\varepsilon c^\varepsilon\right).w
=\left(\sum_{\varepsilon}(-1)^{\varepsilon_i}a_\varepsilon c^\varepsilon\right).w, \end{eqnarray*} where $\delta$ is the Kronecker delta. This forces $ p(c)=r c_1 + s $ for complex numbers $ r $ and $ s$. Since $ w' $ is even, $ r=0 $, implying that $ w'=s w $, which is impossible.
(iii) We deal with $\hat{\Phi}_{[-a,b]}^+$, the proposed submodule $\hat{\Phi}_{[-a,b]}^-$ being similar. Let $w=-(1+\sqrt{-1}c_ac_{a+1})X_{\{a+1\}}.\hat{{\mathbf{1}}}$, $\overline{w}=s_a.w$, and $\hat{\Phi}_{[-a,b]}^+={\mathcal{C}\ell}(d).w+{\mathcal{C}\ell}(d).\overline{w}$. The proof of Lemma \ref{nonzero} shows that \[ X_{\{a+1\}}.\hat{{\mathbf{1}}}=\prod_{\substack{1\leq i\leq d\\i\neq a+1}}(-a+i-1+\kappa_i).\hat{{\mathbf{1}}}+(\bigstar).\hat{{\mathbf{1}}} \] where $(\bigstar)=p'(c)-\varphi p''(c)$ with $p'(c)\in{\mathcal{C}\ell}(d)_{\bar{0}}$, $p''(c)\in{\mathcal{C}\ell}(d)_{\bar{1}}$, and $p'(c)$ has no constant term. It is also easy to see that $p'(c)$ and $p''(c)$ have coefficients in ${\mathbb{R}}$. We conclude from this that $w\neq 0$. Note that by definition, $c_ac_{a+1}.w=-\sqrt{-1}w$.
Lemma \ref{A submodule} shows that for $i\neq a,a+1$, $x_i.w=\kappa_iw$. Moreover, \[ x_a.w=-(1-\sqrt{-1}c_ac_{a+1})x_aX_{\{a+1\}}.\hat{{\mathbf{1}}}=0. \] Also, $x_a.\hat{{\mathbf{1}}}=-c_ac_{a+1}x_{a+1}.\hat{{\mathbf{1}}}$ (see the computation \eqref{nonzero 2} for details). Thus, \begin{eqnarray}\label{alternate w} w=-\sqrt{-1}(1+\sqrt{-1}c_ac_{a+1})X_{\{a\}}.\hat{{\mathbf{1}}} \end{eqnarray} so $x_{a+1}.w=0$. As for $\overline{w}=s_aw$, $x_i.\overline{w}=\kappa_i\overline{w}$ for $i\neq a,a+1$. Using commutation relations, we compute \begin{eqnarray}\label{x_a} x_a\overline{w}=x_as_a.w=(s_ax_{a+1}-1-c_ac_{a+1}).w=-(1+\sqrt{-1})w. \end{eqnarray} Similarly, \begin{eqnarray}\label{x_{a+1}} x_{a+1}.\overline{w}=(1+\sqrt{-1})w. \end{eqnarray}
We now turn to the action of the symmetric group. First, for $i\neq a-1,a+1$, Lemma \ref{s_i action} shows that $s_i.w\in\hat{\Phi}_{[a,b]}^+$. Also by Lemma \ref{s_i action}, \[ s_{a-1}X_{\{a+1\}}.\hat{{\mathbf{1}}}=\frac{\kappa_{a-1}}{2}(c_{a-1}c_a-1)X_{\{a+1\}}.\hat{{\mathbf{1}}}. \] Thus, \begin{eqnarray*} s_{a-1}.w&=&-\frac{\kappa_{a-1}}{2}(1+\sqrt{-1}c_{a-1}c_{a+1})
(c_{a-1}c_a-1)X_{\{a+1\}}.\hat{{\mathbf{1}}}\\
&=&-\frac{\kappa_{a-1}}{2}(1+c_{a-1}c_a+\sqrt{-1}c_{a-1}c_{a+1}
-\sqrt{-1}c_ac_{a+1})X_{\{a+1\}}.\hat{{\mathbf{1}}}\\
&=&\frac{\kappa_{a-1}}{2}(c_{a-1}c_a-1).w. \end{eqnarray*} Similarly, by \eqref{alternate w} and Lemma \ref{s_i action}, \[ s_{a+1}.w=\frac{\kappa_{a+2}}{2}(1+c_{a+1}c_{a+2}).w. \] Now, for $i\neq a-1,a+1$, $s_is_a=s_as_i$. Hence, by Lemma \ref{s_i action} \begin{eqnarray}\label{s_i.overline{w}} s_i.\overline{w}=\left(\frac{\kappa_{i+1}+\kappa_i}{2(a+i)}
+\frac{\kappa_{i+1}-\kappa_i}{2(a+i)}c_ic_{i+1}\right).\overline{w}. \end{eqnarray} To deduce the action of $s_{a-1}$ and $s_{a+1}$ on $\overline{w}$, we proceed as in the proof of Lemma \ref{s_i action}. Recall again the intertwining elements $\phi_{a-1}$ and $\phi_{a+1}$. By character considerations, we deduce that $\phi_{a-1}.\overline{w}=0=\phi_{a+1}.\overline{w}$. Unlike in Lemma~\ref{A submodule}, here the action of $x_a$ (resp. $x_{a+1}$) is given by \eqref{x_a} (resp. \eqref{x_{a+1}}). Thus, \begin{eqnarray}\label{s_{a-1}.overline{w}} s_{a-1}.\overline{w}=\frac{(1+\sqrt{-1})}{2}(1+c_{a-1}c_a).w
-\frac{\kappa_{a-1}}{2}(1-c_{a-1}c_a).\overline{w} \end{eqnarray} and \begin{eqnarray}\label{s_a.overline{w}} s_{a+1}.\overline{w}=\frac{(1-\sqrt{-1})}{2}(1-c_{a+1}c_{a+2}).w
+\frac{\kappa_{a+2}}{2}(1+c_{a+1}c_{a+2}).\overline{w}. \end{eqnarray}
It is easy to see that $\hat{\Phi}_{[-a,b]}=\hat{\Phi}_{[-a,b]}^+ + \hat{\Phi}_{[-a,b]}^-$ since $\frac{1}{2}(w+w')=X_{\{a\}}.\hat{{\mathbf{1}}}$ is a cyclic vector for $\hat{\Phi}_{[-a,b]}$. As in part (ii), if $ w' = p(c)w+r(c)s_a w $, where $ p(c) $ and $ r(c) $ are polynomials in the Clifford generators, then necessarily $ p(c) = \lambda_1 + \lambda_2 c_a c_{a+1} $ and $ r(c) = \lambda_3 + \lambda_4 c_a c_{a+1} $ for some complex numbers $ \lambda_1, \lambda_2, \lambda_3, \lambda_4$. Noting that $ c_a c_{a+1} w = -\sqrt{-1} w $ then gives that all the coefficients are zero.
Therefore, we are left to show that $\hat{\Phi}_{[-a,b]}^+$ is simple. Indeed, assume $V\subseteq\hat{\Phi}_{[-a,b]}^+$ is a submodule. Then, \[ \operatorname{ch} V=[a-1,\ldots,0,0,\ldots,b]. \] Let $v=p_1(c).w+p_2(c).\overline{w}\in V$ be a vector satisfying $x_i.v=\kappa_iv$ for all $i$, where $p_1(c),p_2(c)\in{\mathcal{C}\ell}(d)$. For $i=1,2$, define $p_i'(c)$ by the formulae $x_ap_i(c)=p_i'(c)x_a$. Then, \[ 0=x_a.v=-(1+\sqrt{-1})p_2'(c).w \] showing that $p_2'(c)=0$ (hence, $p_2(c)=0$). Now, arguing as above with the vector $s_a.v$ shows that $p_1(c)=0$. \end{proof}
We can now define the irreducible segment representations which are the key to defining the standard ${\mathcal{H}_{\Cl}^{\mathrm{aff}}} (d)$-modules. \begin{dfn}\label{segments} Let $a,b \in {\mathbb{Z}}_{\geq 0}$. \begin{enumerate} \item Let $\Phi_{[0,d-1]}=\hat{\Phi}_{[0,d-1]}$, ${\mathbf{1}}:=X_{\{1\}}.\hat{{\mathbf{1}}}$, where $\kappa_i=\sqrt{q(i-1)}$. \item If $0 < a\leq b$, let $\Phi_{[a,b]}=\hat{\Phi}_{[a,b]}^+$ in Proposition \ref{module decomposition}(ii), with $\kappa_i=+\sqrt{q(a+i-1)}$ for all $i$, and let ${\mathbf{1}}:=w$. \item If $0<a\leq b$, let $\Phi_{[-a,b]}=\hat{\Phi}_{[-a,b]}^+$ with $\kappa_i=+\sqrt{q(-a+i-1)}$, ${\mathbf{1}}:=w$ and $\overline{{\mathbf{1}}}:=\overline{w}$. \item If $0\leq a$, let $\Phi_{[a,a-1]}=\Phi_\emptyset={\mathbb{C}}$. \end{enumerate} \end{dfn}
\subsection{Some Lie Theoretic Notation}\label{SS:LieThy} It is convenient in this section to introduce some Lie theoretic notation. This section differs from \cite{kl} in that the notation defined here is associated to the Lie superalgebra ${\mathfrak{q}}(n)$ (as opposed to the Kac-Moody algebra ${\mathfrak{b}}_\infty$).
Define the sets $P={\mathbb{Z}}^n$, $P_{\geq0}={\mathbb{Z}}^n_{\geq0}$, and \begin{eqnarray}
\label{dom wt}P^+&=&\{\,\lambda=(\lambda_1,\ldots,\lambda_n)\in P\,|\,\lambda_i\geq\lambda_{i+1}\mbox{ for all }1\leq i\leq n\,\}\\
\label{dom typ wt}{P^{++}}&=&\{\,\lambda\in P^+\,|\,\lambda_i+\lambda_j\neq0\mbox{ for all } 1\leq i,j\leq n\,\}\\
\label{rat wt}{P^+_{\mathrm{rat}}}&=&\{\,\lambda\in P^+\,|\,\lambda_i=\lambda_{i+1}\mbox{ implies }\lambda_i=0\,\}\\
\label{poly wt}{P_{\mathrm{poly}}^+}&=&\{\,\lambda\in{P^+_{\mathrm{rat}}}\,|\,\lambda_n\geq 0\,\}\\
\label{pos wt}{P_{\geq0}}&=&\{\lambda\in P\,|\,\lambda_i\geq0\mbox{ for all }i\,\}. \end{eqnarray} The weights \eqref{dom wt} are called dominant, and \eqref{dom typ wt} are called dominant typical. A weight $\lambda\in P$ is simply \emph{typical} if $\lambda_i+\lambda_j\neq0$ for all $i,j$. The weights \eqref{rat wt} are called rational, the weights \eqref{poly wt} are called polynomial, and the elements of \eqref{pos wt} are simply compositions. For each of the sets $X=P^+,P^{++},{P^+_{\mathrm{rat}}},{P_{\mathrm{poly}}^+},{P_{\geq0}}$ above, define \[
X(d)=\{\lambda\in X|\lambda_1+\cdots+\lambda_n=d\}. \] Let $R\subset P$ be the root system of type $A_{n-1}$. That is, $R=\{\alpha_{ij}\mid 1\leq i\neq j\leq n\}$ where $\alpha_{ij}$ is the $n$-tuple with 1 in the $i$th coordinate and $-1$ in the $j$th coordinate. The positive roots are $R^+=\{\alpha_{ij}\in R \mid i<j\}$, the root lattice $Q$ is the ${\mathbb{Z}}$-span of $R$, and $Q^+$ is the ${\mathbb{Z}}_{\geq 0}$-span of $R^+$. The symmetric group, $S_n$, acts on $P$ by place permutation. Define the length function $\ell:S_n\rightarrow{\mathbb{Z}}_{\geq0}$ in the usual way: \[
\ell(w)=|\{\alpha\in R^+\mid w(\alpha)\in-R^+\}|. \] Equivalently, $\ell(w)$ is the number of simple transpositions occurring in a reduced expression for $w$. Write $w\rightarrow y$ if $y=s_\alpha w$ for some $\alpha\in R^+$ and $\ell(w)<\ell(y)$. Define the \emph{Bruhat} order on $S_n$ by $w<_by$ if there exists a sequence $w\rightarrow w_1\rightarrow\cdots\rightarrow y$. Also, for $\lambda\in P$, define \[
S_n[\lambda]=\{\,w\in S_n \mid w(\lambda)=\lambda\,\},\,\,\,\,\,\, {\mbox{and}} \,\,\,\,\,\, R[\lambda]=\{\,\alpha_{ij}\in R|\,s_{ij}(\lambda)=\lambda\,\}, \] and define \[
P^+[\lambda]=\{\,\mu\in P\,|\,\mu_i\geq\mu_j\mbox{ if }s_{ij}\in S_n[\lambda]\,\},\,\,\,\,\,\, {\mbox{and}} \,\,\,\,\,\, P^-[\lambda]=\{\,\mu\in P\,|\,\mu_i\leq\mu_j\mbox{ if }s_{ij}\in S_n[\lambda]\,\} \] where $s_{ij}\in S_n$ denotes the transposition $(ij)$.
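For example, take $n=2$. Then $(1,1)\in P^+$ but $(1,1)\notin{P^+_{\mathrm{rat}}}$, the weight $(2,-1)$ lies in ${P^+_{\mathrm{rat}}}$ but not in ${P_{\mathrm{poly}}^+}$, and $(1,-1)$ is dominant but not typical since $\lambda_1+\lambda_2=0$.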
\subsection{Induced Modules}\label{SS:inducedmodules} Using the irreducible segment representations defined above we now define standard representations. Let $\lambda,\mu\in P$ satisfy $\lambda-\mu\in{P_{\geq0}}(d)$, and set $d_i=\lambda_i-\mu_i$ for $i=1,\ldots,n$. Define \[ \widehat{\Phi}(\lambda,\mu) =\hat{\Phi}_{[\mu_1,\lambda_1-1]}\boxtimes\cdots\boxtimes\hat{\Phi}_{[\mu_n,\lambda_n-1]} \] and \[ \Phi(\lambda,\mu)=\Phi_{[\mu_1,\lambda_1-1]}\circledast\cdots\circledast\Phi_{[\mu_n,\lambda_n-1]}, \] and define \emph{standard (cyclic) modules} for ${\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)$ by \begin{equation}\label{E:Mhatdef} \widehat{{\mathcal{M}}}(\lambda,\mu)=\operatorname{Ind}_{d_1,\ldots,d_n}^d\widehat{\Phi}(\lambda,\mu) \end{equation} and \begin{equation}\label{E:Mdef} {\mathcal{M}}(\lambda,\mu)=\operatorname{Ind}_{d_1,\ldots,d_n}^d\Phi(\lambda,\mu). \end{equation} We call the standard modules $\widehat{{\mathcal{M}}}(\lambda,\mu)$ and ${\mathcal{M}}(\lambda,\mu)$ \emph{big} and \emph{little}, respectively.
Both the big and little standard modules are cyclic. Let \begin{align}\label{E:hatcyclicvector} \hat{{\mathbf{1}}}_{\lambda,\mu}=1\otimes(\hat{{\mathbf{1}}}\otimes\cdots\otimes{\hat{{\mathbf{1}}}})
\in\widehat{{\mathcal{M}}}(\lambda,\mu) \end{align} be the distinguished cyclic generator of $\widehat{{\mathcal{M}}}(\lambda,\mu)$. Fix the following choice of distinguished cyclic generator ${\mathbf{1}}_{\lambda,\mu}\in{\mathcal{M}}(\lambda,\mu)$. Let $i_1<\cdots<i_k$ be such that $\mu_{i_j}=0$ for all $j$ and $\gamma_0(\mu)=k$. Choose \[ {\mathbf{1}}_{\lambda,\mu}=\prod_{j=1}^{\lfloor k/2\rfloor} (1-\sqrt{-1}c_{i_{2j-1}}c_{i_{2j}})1\otimes({\mathbf{1}}\otimes\cdots\otimes{\mathbf{1}}). \]
\begin{lem}\label{L:standard cyclic dim} Let $\lambda, \mu \in P$ so that $\lambda - \mu \in P_{\geq 0}(d).$ Then, \begin{enumerate} \item[(i)] $\dim\widehat{{\mathcal{M}}}(\lambda,\mu)
=\frac{d!}{d_1!\cdots d_n!}2^{d+n-\gamma_0(\mu)}$ \item[(ii)] $\dim{\mathcal{M}}(\lambda,\mu)
=\frac{d!}{d_1!\cdots d_n!}2^{d-\lfloor\frac{\gamma_0(\mu)}{2}\rfloor}$ \item[(iii)] $\widehat{{\mathcal{M}}}(\lambda,\mu)\cong{\mathcal{M}}(\lambda,\mu)^{\oplus 2^{n-\lfloor\frac{\gamma_0(\mu)+1}{2}\rfloor}}$. \end{enumerate} \end{lem}
\begin{proof}(i) The dimension of $\widehat{{\mathcal{M}}}(\lambda,\mu)$ follows from the definition.
(ii) Use Proposition \ref{module decomposition}.
(iii) Since induction commutes with direct sums, $\widehat{{\mathcal{M}}}(\lambda,\mu)$ is a direct sum of copies of ${\mathcal{M}}(\lambda,\mu)$. A count using (i) and (ii) yields (iii). \end{proof}
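For example, take $n=2$, $\lambda=(2,1)$ and $\mu=(0,0)$, so that $d=3$, $(d_1,d_2)=(2,1)$ and $\gamma_0(\mu)=2$. Lemma~\ref{L:standard cyclic dim} then gives \[ \dim\widehat{{\mathcal{M}}}(\lambda,\mu)=\frac{3!}{2!\,1!}2^{3+2-2}=24,\qquad \dim{\mathcal{M}}(\lambda,\mu)=\frac{3!}{2!\,1!}2^{3-1}=12, \] and $\widehat{{\mathcal{M}}}(\lambda,\mu)\cong{\mathcal{M}}(\lambda,\mu)^{\oplus2}$.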
We end this section by recording certain data about the weight spaces and generalized weight spaces of ${\mathcal{M}} (\lambda, \mu)$ which will be useful later. Define the weight $\zeta_{\lambda,\mu}:{\mathcal{P}}_d[x^2]\rightarrow{\mathbb{C}}$ by $f.{\mathbf{1}}_{\lambda,\mu}=\zeta_{\lambda,\mu}(f){\mathbf{1}}_{\lambda,\mu}$ for all $f\in{\mathcal{P}}_d[x^2]$. As in $\S$\ref{SS:LieThy}, the symmetric group, $S_d$, acts on an integral weight $\zeta:{\mathcal{P}}_d[x^2]\rightarrow{\mathbb{C}}$ by $w(\zeta)(x_i^2)=\zeta(x_{w(i)}^2)$. Let \[
S_d[\zeta]=\{\,w\in S_d\,|\,w(\zeta)=\zeta\,\}. \] Define $\ell(w)$ to be the length of $w$ (i.e.\ the number of simple transpositions occurring in a reduced expression of $w$) and recall the definition of the Bruhat order given in section~\ref{SS:LieThy}.
\begin{lem}\label{L:weights of M} Given $\lambda,\mu\in P$ with $\lambda-\mu\in{P_{\geq0}}(d)$, \begin{enumerate}
\item[(i)] $P({\mathcal{M}}(\lambda,\mu))=\{\,w(\zeta_{\lambda,\mu})\,|\,w\in D_{\lambda-\mu}\,\}$, \item[(ii)] For any $\zeta\in P({\mathcal{M}}(\lambda,\mu))$, \[ \dim{\mathcal{M}}(\lambda,\mu)_\zeta^{\mathrm{gen}}=2^{d-\lfloor\frac{\gamma_0(\mu)}{2}\rfloor}
|\{\,w\in D_{\lambda-\mu}\,|\,w(\zeta)=\zeta\,\}|. \]
In particular, \[ \dim{\mathcal{M}}(\lambda,\mu)_{\zeta_{\lambda,\mu}}^{\mathrm{gen}}=
2^{d-\lfloor\frac{\gamma_0(\mu)}{2}\rfloor}\big|D_{\lambda-\mu}\cap
S_d[\zeta_{\lambda,\mu}]\big|. \] \end{enumerate} \end{lem}
\begin{proof} (i) This follows directly upon applying the Mackey Theorem to the character map.
(ii) Given $f\in{\mathcal{P}}_d[x^2]$ and $w\in S_d$, we have the relation \[ fw=w\cdot w^{-1}(f)+\sum_{u<_{b}w}uC_uf_u \] where the sum is over $u<_{b}w$ in the Bruhat order, $C_u\in{\mathcal{C}\ell}(d)$, $f_u\in{\mathcal{P}}_d[x]$ and $\deg f_u<\deg f$, see \cite[Lemma 14.2.1]{kl}. Therefore, if $f\in{\mathcal{P}}_d[x^2]$, $C\in{\mathcal{C}\ell}(d)$ and $w\in D_{\lambda-\mu}$, \begin{align}\label{E:lowerTriangular} f(wC.{\mathbf{1}}_{\lambda,\mu})=w(\zeta_{\lambda,\mu})(f)wC.{\mathbf{1}}_{\lambda,\mu}+\sum_{u<_{b}w}uC_uf_u.{\mathbf{1}}_{\lambda,\mu} \end{align} where the sum is over $u\in D_{\lambda-\mu}$. In particular, $wC.{\mathbf{1}}_{\lambda,\mu}\in{\mathcal{M}}(\lambda,\mu)_{\zeta_{\lambda,\mu}}^{\mathrm{gen}}$ only if $w\in D_{\lambda-\mu}\cap S_d[\zeta_{\lambda,\mu}]$. Conversely, if $w\in D_{\lambda-\mu}\cap S_d[\zeta_{\lambda,\mu}]$, it is straightforward to see that all $u$ occurring on the right hand side of \eqref{E:lowerTriangular} also belong to $D_{\lambda-\mu}\cap S_d[\zeta_{\lambda,\mu}]$. This gives the result. \end{proof}
\subsection{Unique Simple Quotients}\label{unique simple quotient} In general, the standard cyclic module ${\mathcal{M}}(\lambda,\mu)$ may not have a unique simple head. However, in this subsection, we determine sufficient conditions for this to hold. Throughout this section, keep in mind that $q(a)=q(-a-1)$ for all $a\in{\mathbb{Z}}$. We follow closely the strategy in \cite{su2}. We begin with some preparatory lemmas.
\begin{lem}\label{L:x weights} Let $M$ be an ${\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)$-module and let $\zeta$ be a weight of $M$. Then there exists a nonzero $v\in M_{\zeta}$ such that \[ x_i.v=\sqrt{q(\zeta(x_i^2))}\;v \] for all $i=1,\ldots,d$. \end{lem}
\begin{proof} Choose $0\neq v_0\in M_\zeta$. Recall Definition~\ref{X}. We adapt it to the current situation by setting $\kappa_i=\sqrt{q(\zeta(x_i^2))}$ and $S=\{i \mid x_i.v=-\kappa_iv \}$. Then, $v_1:=X_S.v_0\in M_\zeta$ is nonzero and $x_i.v_1=\pm\kappa_iv_1$ for all $i$. Now, set \[ v=\left(\prod_{i\in S}c_i\right)v_1. \] Then, $v$ is nonzero and has the desired properties. \end{proof}
Therefore, we may define the non-zero subspace \[ M_{\sqrt{\zeta}}= \left\{\,m\in M_\zeta \mid x_i.m=\sqrt{q(\zeta(x_i^2))}\;m\mbox{ for }i=1,\ldots,d\,\right\}. \]
We will use the following key lemma repeatedly in this section.
\begin{lem} \label{techlemma} Let $ Y $ be in $\operatorname{Rep}{\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)$ and $v \in Y_{\sqrt{\zeta}}$ for some weight $\zeta$. Assume that for some $1\leq i<d-1$ we have $x_i.v=\sqrt{q(a)}\,v$ and $x_{i+1}.v=\sqrt{q(b)}\,v$, where $a,b\in{\mathbb{Z}}$ and either $q(a)\neq0$ or $q(b)\neq 0$. Further, if $q(a)=q(b\pm1)$, assume that \begin{align}\label{E:techlemma} s_{i+1}.v =(\kappa_1+\kappa_2c_{i+1}c_{i+2}).v \end{align} for some constants $\kappa_1,\kappa_2\in{\mathbb{C}}$, not both 0. Then, $v\in{\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d).\phi_i.v$. \end{lem}
\begin{proof} First, if $q(a)=q(b)\neq0$, then using \eqref{E:intertwiner} and Lemma 14.8.1 of \cite{kl} we deduce that \[ \phi_i.v=2q(a)v\neq0, \] so the result is trivial. If $q(a)\neq q(b\pm1)$, then using \eqref{E:intertwinersquared} we deduce that \[ \phi_i^2.v=(2q(a)-2q(b)-(q(a)-q(b))^2)v\neq0 \] and again the result is trivial.
Now, let $\kappa_3=q(a)-q(b)\neq0$, $\kappa_4=\sqrt{q(a)}-\sqrt{q(b)}\neq0$ and $\kappa_5=\sqrt{q(a)}+\sqrt{q(b)}>0$. Then, appealing again to \eqref{E:intertwiner} we have that \[ \phi_{i} v = (\kappa_3 s_{i} - \kappa_4 c_{i} c_{i+1} + \kappa_5)v. \]
Let $ \mathbf{c'}$ and $ \mathbf{c''} $ be two elements of the Clifford algebra. Consider an expression of the form \begin{align*} (1+ \mathbf{c'} s_{i+1}- \mathbf{c''} s_{i}s_{i+1})\phi_i v =&(\kappa_3 s_{i} - \kappa_4 c_{i}c_{i+1} + \kappa_5 + \kappa_3 \mathbf{c'}
s_{i+1}s_{i}\\ &- \kappa_4 \mathbf{c'} c_{i}c_{i+2}s_{i+1}+
\kappa_5 \mathbf{c'} s_{i+1}- \kappa_3 \mathbf{c''} s_{i+1}s_{i}s_{i+1}\\ &+ \kappa_4 \mathbf{c''} c_{i+1}c_{i+2}s_{i}s_{i+1}-\kappa_5 \mathbf{c''} s_{i}s_{i+1}) v. \end{align*} By \eqref{E:techlemma}, this equals \begin{align*} (\kappa_3& s_{i}-\kappa_4 c_{i}c_{i+1}+\kappa_5+\kappa_3 \mathbf{c'} s_{i+1}s_{i} - \kappa_1 \kappa_4 \mathbf{c'} c_{i}c_{i+2}\\ &-\kappa_2\kappa_4 \mathbf{c'} c_{i}c_{i+1}+ \kappa_1 \kappa_5 \mathbf{c'} + \kappa_2 \kappa_5 \mathbf{c'} c_{i+1}c_{i+2} - \kappa_1\kappa_3 \mathbf{c''} s_{i+1}s_{i}\\ &-\kappa_2\kappa_3 \mathbf{c''} c_{i}c_{i+1} s_{i+1}s_{i} + \kappa_1 \kappa_4 \mathbf{c''} c_{i+1}c_{i+2}s_{i} - \kappa_2 \kappa_4 \mathbf{c''} c_{i}c_{i+1} s_{i}\\ &-\kappa_1\kappa_5 \mathbf{c''} s_{i} -\kappa_2\kappa_5 \mathbf{c''} c_{i}c_{i+2}s_{i})v. \end{align*}
The coefficient of $ s_i v $ is $$ \kappa_3 + \kappa_1 \kappa_4 \mathbf{c''} c_{i+1}c_{i+2}
- \kappa_2 \kappa_4 \mathbf{c''} c_{i}c_{i+1}
- \kappa_1 \kappa_5 \mathbf{c''}
- \kappa_2 \kappa_5 \mathbf{c''} c_{i}c_{i+2}. $$ The coefficient of $ s_{i+1}s_{i}v $ is $$ \kappa_3 \mathbf{c'} - \kappa_1 \kappa_3 \mathbf{c''}
- \kappa_2 \kappa_3 \mathbf{c''} c_{i}c_{i+1}. $$ In order to make both of these coefficients zero, set $ \mathbf{c'} = \mathbf{c''}(\kappa_1+\kappa_2c_{i}c_{i+1}) $ and $$ \mathbf{c''} = \gamma(\kappa_1 \kappa_5 +\kappa_1 \kappa_4 c_{i+1}c_{i+2}
-\kappa_2 \kappa_4 c_{i}c_{i+1} -\kappa_2 \kappa_5 c_{i}c_{i+2}), $$ where $$ \gamma = \frac{-\kappa_3}{(\kappa_1^2+\kappa_2^2)(\kappa_4^2+\kappa_5^2)}. $$
The coefficient of $ v $ is \begin{align*} -\kappa_4 c_{i}c_{i+1}&+\kappa_5-\kappa_1\kappa_4 \mathbf{c'} c_{i}c_{i+2}
- \kappa_2 \kappa_4 \mathbf{c'} c_{i}c_{i+1} + \kappa_1 \kappa_5 \mathbf{c'}
+ \kappa_2 \kappa_5 \mathbf{c'} c_{i+1} c_{i+2}\\ =& -\kappa_4 c_{i}c_{i+1} + \kappa_5 -\kappa_1 \kappa_4 \mathbf{c''}(\kappa_1c_{i}c_{i+2} + \kappa_2 c_{i+1}c_{i+2}) -\kappa_2\kappa_4 \mathbf{c''}(\kappa_1 c_{i}c_{i+1} - \kappa_2)\\
&+ \kappa_1\kappa_5 \mathbf{c''}(\kappa_1+\kappa_2c_{i}c_{i+1})
+ \kappa_2 \kappa_4 \mathbf{c''}(\kappa_1 c_{i+1}c_{i+2}
- \kappa_2 c_{i}c_{i+2}). \end{align*}
This is equal to \begin{align*} \kappa_5 - \kappa_4& c_{i}c_{i+1} + (-\kappa_1\kappa_2\kappa_4 + \kappa_1 \kappa_2 \kappa_5) \mathbf{c''} c_{i}c_{i+1} + (-\kappa_1^2\kappa_4 - \kappa_2^2\kappa_5)\mathbf{c''} c_{i}c_{i+2}\\ &+(-\kappa_1 \kappa_2 \kappa_4 + \kappa_1 \kappa_2 \kappa_5)\mathbf{c''} c_{i+1}c_{i+2} + (\kappa_2^2 \kappa_4 + \kappa_1^2 \kappa_5) \mathbf{c''}\\ = &\kappa_5 - \kappa_4 c_{i}c_{i+1}+(\kappa_1 \kappa_2 \kappa_5 - \kappa_1 \kappa_2 \kappa_4) \gamma(-\kappa_1 \kappa_5 c_{i}c_{i+1}
- \kappa_1 \kappa_4 c_{i}c_{i+2} - \kappa_2 \kappa_4
- \kappa_2 \kappa_5 c_{i+1}c_{i+2})\\ &+(-\kappa_1^2 \kappa_4 - \kappa_2^2 \kappa_5) \gamma(-\kappa_1 \kappa_5 c_{i}c_{i+2}
+ \kappa_1 \kappa_4 c_{i}c_{i+1} - \kappa_2 \kappa_5
+ \kappa_2 \kappa_4 c_{i+1}c_{i+2})\\ &+(-\kappa_1 \kappa_2 \kappa_4 + \kappa_1 \kappa_2 \kappa_5) \gamma(-\kappa_1 \kappa_5 c_{i+1}c_{i+2}
- \kappa_2 \kappa_4c_{i}c_{i+2} + \kappa_1 \kappa_4
+ \kappa_2 \kappa_5 c_{i}c_{i+1})\\ &+(\kappa_2^2 \kappa_4 + \kappa_1^2 \kappa_5) \gamma(-\kappa_1 \kappa_4 c_{i+1}c_{i+2}
+ \kappa_2 \kappa_4 c_{i}c_{i+1} - \kappa_1 \kappa_5
+ \kappa_2 \kappa_5 c_{i}c_{i+2})\\ =& \kappa_5 + \delta_1 c_{i}c_{i+1} + \delta_2 c_{i+1}c_{i+2} + \delta_3 c_{i}c_{i+2} \end{align*} for some constants $ \delta_1, \delta_2, \delta_3\in{\mathbb{R}}. $
Thus, \begin{align*} (\kappa_5 -\delta_1 c_{i}c_{i+1}-\delta_2 c_{i+1} c_{i+2}
-\delta_3 c_{i}c_{i+2})&(1 + \mathbf{c'} s_{i+1}
- \mathbf{c''} s_{i}s_{i+1})\phi_i v\\
=& (\kappa_5^2+\delta_1^2+\delta_2^2+\delta_3^2) v. \end{align*} Since $ \delta_1^2, \delta_2^2, \delta_3^2 \in {\mathbb{R}}_{\geq 0} $ and $ \kappa_5^2 > 0, $ the result follows. \end{proof}
\begin{prp}\label{dominant wt space} Assume that $\lambda\in{P^{++}}$, $\mu\in P^+[\lambda]$, and $\lambda-\mu\in{P_{\geq0}}(d)$. Then, \[ {\mathcal{M}}(\lambda,\mu)_{\sqrt{\zeta_{\lambda,\mu}}}={\mathbb{C}}{\mathbf{1}}_{\lambda,\mu}. \] \end{prp}
We begin by proving a special case of the Proposition. Suppose $ n $ divides $ d $, and $d/n=b-a$ for some $a,b\in{\mathbb{Z}}$, $b>0$. Let $ \lambda = (b, \ldots, b) $ and $ \mu = (a, \ldots, a) $ be weights of $ {\mathfrak{q}}(n). $ Set $ {\mathcal{M}}_{a,b,n} = {\mathcal{M}}(\lambda, \mu), $ and set ${\mathbf{1}}_{a,b,n}={\mathbf{1}}_{\lambda,\mu}$. Let $$ \zeta_{a,b,n} = (a,a+1, \ldots, b-1, \ldots, a, a+1, \ldots, b-1) $$ be a weight for $ {\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d) $ where the sequence $ a, a+1, \ldots, b-1 $ appears $ n $ times.
The first goal is to compute the weight space $({\mathcal{M}}_{a,b,n})_{\sqrt{\zeta_{a,b,n}}}$.
Set $n=d$ in the definition above so that $b=a+1$. The resulting module is the Kato module $K(a, \ldots, a)=K_a$, where all the $ x_i^2 $ act by $q(a)$ on the vector ${\mathbf{1}}_{a,b,n}$.
The following is \cite[Lemma 16.3.2, Theorem 16.3.3]{kl}.
\begin{lem} \label{katolemma} \begin{enumerate} \item If $a\neq-1$ and $a\neq0$, the weight space of $ K(a, \ldots, a) $ corresponding to $ (a, \ldots, a) $ with respect to the operators $ x_1^2, \ldots, x_n^2 $ has dimension $ 2^n. $ If $a=-1$ or $a=0$, then the weight space of $K(a,\ldots,a)$ corresponding to $(a,\ldots,a)$ with respect to the operators $ x_1, \ldots, x_n $ has dimension $2^{\lfloor\frac{n+1}{2}\rfloor}$. \item The module $ K(a, \ldots, a) $ is equal to its generalized weight space for the weight $ (a, \ldots, a). $ \item The module $ K(a, \ldots, a) $ is simple of type \texttt{Q} if $ a=0 $ and $ d $ is odd, and is of type \texttt{M} otherwise. \end{enumerate} \end{lem}
Set $m=d/n$. In the set of weights of $ {\mathcal{M}}_{a,b,n}, $ there exists a unique anti-dominant weight $ \zeta_{a,b,n}^{\circ} $ that is given by $$ \zeta_{a,b,n}^{\circ}
= (\underbrace{a, \ldots, a,}_n \underbrace{a+1,
\ldots, a+1,}_n \ldots, \underbrace{b-1,
\ldots, b-1}_n). $$
Take an element $ \tau\in D_{\lambda-\mu} $ such that $ \tau(\zeta_{a,b,n}) = \zeta_{a,b,n}^{\circ}. $ If $a\geq 0$, it is given by $ \tau = \omega^1 \cdots \omega^{m-1}, $ where $ \omega^p=\rho_{n-1}^p \rho_{n-2}^p \cdots \rho_1^p $, $$ \rho_k^p = \xi_{k(p+1)-(k-1)}^p \cdots \xi_{(k(p+1)-1)}^p \xi_{k(p+1)}^p, $$ and, for $1 \leq r \leq d-1, $ and $1 \leq p \leq d-r$, $\xi_r^p = s_{r+p-1} \cdots s_{r+1}s_r$.
If $b\leq0$, then $\tau=\sigma(\omega^1\cdots\omega^{m-1})$, where $\sigma$ is the automorphism of ${\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)$. Finally, if $a<0$ and $b>0$, $\tau=\sigma_{(-a+1)n}(\omega^2\cdots\omega^{-a})\omega^{-a+1}\cdots\omega^{m-1}$, where $\sigma_{-a}$ is the automorphism of ${\mathcal{H}_{\Cl}^{\mathrm{aff}}}(-a)\subseteq{\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)$ embedded on the left.
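For instance, take $a=0$, $b=2$ and $n=2$, so that $d=4$, $m=2$ and \[ \zeta_{0,2,2}=(0,1,0,1),\qquad \zeta_{0,2,2}^{\circ}=(0,0,1,1). \] Here $\tau=\omega^1=\rho_1^1=\xi_2^1=s_2$, and indeed $s_2(\zeta_{0,2,2})=\zeta_{0,2,2}^{\circ}$.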
\begin{lem}\label{cyclicvectorlemma} The vector $ \phi_{\tau} {\mathbf{1}}_{a,b,n} $ is a cyclic vector of $ {\mathcal{M}}_{a,b,n}. $ \end{lem}
\begin{proof} This follows from iterated applications of Lemma~\ref{techlemma}. \end{proof}
The proof of the following lemma is similar to \cite[Lemma A.7]{su2}, substituting Lemmas~\ref{katolemma} and \ref{L:weights of M} appropriately into Suzuki's argument.
\begin{lem} \label{antidominantlemma} $({\mathcal{M}}_{a,b,n})_{\sqrt{\zeta_{a,b,n}^{\circ}}} \subseteq \phi_{\tau} {\mathcal{C}\ell}(d) {\mathbf{1}}_{a,b,n}. $ \end{lem}
\begin{proof} By an argument similar to the proof of \cite[Lemma A.7]{su2}, we deduce that \[ ({\mathcal{M}}_{a,b,n})_{\zeta_{a,b,n}^{\circ}} \cong (K_a)_{a^{(n)}} \circledast (K_{a+1})_{(a+1)^{(n)}} \circledast \cdots \circledast (K_{b-1})_{(b-1)^{(n)}} \] if $a\geq 0$, and \[ ({\mathcal{M}}_{a,b,n})_{\zeta_{a,b,n}^{\circ}} \cong (K_{-a-1})_{(-a-1)^{(n)}} \circledast \cdots \circledast (K_1)_{1^{(n)}}\circledast(K_0)_{0^{(2n)}}\circledast(K_1)_{1^{(n)}}\circledast\cdots \circledast (K_{b-1})_{(b-1)^{(n)}} \] if $a<0$. Here, $ (K_j)_{j^{(n)}} $ is the weight space $ K(j, \ldots, j)_{(j, \ldots, j)} $ of a Kato module. Since \[ ({\mathcal{M}}_{a,b,n})_{\sqrt{\zeta_{a,b,n}^{\circ}}} \subseteq ({\mathcal{M}}_{a,b,n})_{\zeta_{a,b,n}^{\circ}}, \] we deduce that if $a\geq 0$ \[ ({\mathcal{M}}_{a,b,n})_{\sqrt{\zeta_{a,b,n}^{\circ}}} = (K_a)_{\sqrt{a^{(n)}}} \circledast (K_{a+1})_{\sqrt{(a+1)^{(n)}}} \circledast \cdots \circledast(K_{b-1})_{\sqrt{(b-1)^{(n)}}} \subseteq {\mathcal{C}\ell}(d) \phi_{\tau} {\mathbf{1}}_{a,b,n}. \] Similarly, if $a<0$, $({\mathcal{M}}_{a,b,n})_{\sqrt{\zeta_{a,b,n}^{\circ}}}\subseteq{\mathcal{C}\ell}(d) \phi_{\tau} {\mathbf{1}}_{a,b,n}$. \end{proof}
\begin{prp} \label{mainprop1} For the special standard module defined above, $ ({\mathcal{M}}_{a,b,n})_{\sqrt{\zeta_{a,b,n}}} \subseteq {\mathcal{C}\ell}(d) {\mathbf{1}}_{a,b,n}. $ \end{prp}
\begin{proof} For $ i = 1, \ldots, d $ with $m\nmid i$, write $ i = jm+r $ where $ 0 \leq j <n $ and $ 0 < r < m. $ Take any $v \in ({{\mathcal{M}}}_{a,b,n})_{\sqrt{\zeta_{a,b,n}}}$. Lemma~\ref{antidominantlemma} implies that $ \phi_{\tau} v = \phi_{\tau} z{\mathbf{1}}_{a,b,n} $ for some $ z \in {\mathcal{C}\ell}(d). $ Put $ v_0 = v- z {\mathbf{1}}_{a,b,n}. $ Then $ \phi_{\tau} v_0 = 0$. Note that since $ r \neq m, $ we have $ \phi_i v_0 = 0$, because $s_i(\zeta_{a,b,n})$ is not a weight of ${{\mathcal{M}}}_{a,b,n}$.
If $r\neq -a$, we can solve for $s_iv_0$ in the equation $\phi_i.v_0=0$ to get \[ s_i.v_0=\left(\frac{\kappa_r-\kappa_{r-1}}{-2(a+r)}
+\frac{\kappa_r+\kappa_{r-1}}{-2(a+r)}c_ic_{i+1}\right)v_0 \] where $\kappa_r=\sqrt{q(a+r-1)}$.
Similarly, if $ r \neq -a,$ \[ s_i.{{\mathbf{1}}}_{a,b,n}=\left(\frac{\kappa_r-\kappa_{r-1}}{-2(a+r)}
+\frac{\kappa_r+\kappa_{r-1}}{-2(a+r)}c_ic_{i+1}\right){{\mathbf{1}}}_{a,b,n}. \]
If $r=-a$, then a routine calculation as before gives \[ c_i c_{i+1} {{\mathbf{1}}}_{a,b,n} = - \sqrt{-1}\, {{\mathbf{1}}}_{a,b,n}. \]
Hence there exists an ${\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)$-homomorphism $ \psi:{{\mathcal{M}}}_{a,b,n} \rightarrow {{\mathcal{M}}}_{a,b,n} $ such that $ \psi({{\mathbf{1}}}_{a,b,n})=v_0 $ if $ a \geq 0 $ or $ b \leq 0. $ If $ a < 0 < b, $ then there is an ${\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)$-homomorphism $ \psi:{{\mathcal{M}}}_{a,b,n} \rightarrow {{\mathcal{M}}}_{a,b,n} $ such that $ \psi({{\mathbf{1}}}_{a,b,n})=\prod_{0 \leq j <n} (1+ \sqrt{-1} c_{jm-a} c_{jm-a+1} )v_0. $
Thus, since $ \phi_{\tau} v_0 = 0 $ and $ \phi_{\tau} {\mathbf{1}}_{a,b,n} $ is a cyclic vector by Lemma~\ref{cyclicvectorlemma}, the kernel of $ \psi $ is all of $ {\mathcal{M}}_{a,b,n}. $ Therefore $ v_0 =0, $ and thus $ v \in {\mathcal{C}\ell}(d) {{\mathbf{1}}}_{a,b,n}. $ \end{proof}
We now reduce the general case to the special case above. To this end, fix $\lambda\in{P^{++}}$, $\mu\in P^+[\lambda]$, and $\lambda-\mu\in{P_{\geq0}}(d)$. Set $d_i=\lambda_i-\mu_i$, and let $a_i=d_1+\cdots+d_{i-1}+1$, $b_i=d_1+\cdots+d_i$. Observe that \begin{align}\label{E:Step2Formulae} \zeta_{\lambda,\mu}(x^2_{a_i})=\mu_i \,\,\,\,\,\, {\mbox{and}} \,\,\,\,\,\, \zeta_{\lambda,\mu}(x^2_{b_i})=\lambda_i-1. \end{align} Furthermore, observe that if $a_i\leq c\leq b_i$, \begin{align}\label{E:Step2Formulae2} \zeta_{\lambda,\mu}(x^2_{c})=\zeta_{\lambda,\mu}(x^2_{b_i})-(b_i-c) \,\,\,\,\,\, {\mbox{and}} \,\,\,\,\,\, \zeta_{\lambda,\mu}(x^2_{c})=\zeta_{\lambda,\mu}(x^2_{a_i})+(c-a_i). \end{align}
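For example, if $n=2$, $\lambda=(3,2)$ and $\mu=(1,0)$, then $d_1=d_2=2$, $(a_1,b_1)=(1,2)$, $(a_2,b_2)=(3,4)$ and $\zeta_{\lambda,\mu}=(1,2,0,1)$, in agreement with \eqref{E:Step2Formulae}: $\zeta_{\lambda,\mu}(x_{a_1}^2)=\mu_1=1$ and $\zeta_{\lambda,\mu}(x_{b_1}^2)=\lambda_1-1=2$.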
Since $\lambda\in{P^{++}}$ and $\mu\in P^+[\lambda]$, we can find integers $ 0 = n'_0 < n'_1 < \cdots < n'_r = n $, and $ 0 = n_0 < n_1 < \cdots < n_s = n $ such that \[ R[\lambda] = R \cap \sum_{i \neq n'_0, \ldots, n'_r} \mathbb{Z} \alpha_i\,\,\,\,\,\, {\mbox{and}} \,\,\,\,\,\, R[\lambda]\cap R[\mu] = R \cap \sum_{i \neq n_0, \ldots, n_s} \mathbb{Z} \alpha_i. \] Let $$ I'_p = \{\, a_{n'_{p-1}+1}, a_{n'_{p-1}+1}+1, \ldots, b_{n'_p}-1\,\} \;\;\; (p=1, \ldots, r),\;\;\; I' = I'_1 \cup \ldots \cup I'_r, $$ and $$ I_p = \{\, a_{n_{p-1}+1}, a_{n_{p-1}+1}+1, \ldots, b_{n_p}-1\,\} \;\;\; (p=1, \ldots, s),\;\;\; I = I_1 \cup \ldots \cup I_s. $$ Then, $S_{\lambda-\mu}\subseteq S_I\subseteq S_{I'}$ and \[ S_{I'}/S_{\lambda-\mu}\cong D_{\lambda-\mu}\cap S_{I'}\,\,\,\,\,\, {\mbox{and}} \,\,\,\,\,\, S_I/S_{\lambda-\mu}\cong D_{\lambda-\mu}\cap S_I,\;\;\;\mbox{(cf. $\S$\ref{SS:Mackey}).} \]
\begin{lem}\cite[Lemma A.9]{su2} There is a containment of sets $ D_{\lambda - \mu}\cap S_d[\zeta_{\lambda, \mu}] \subset D_{\lambda - \mu}\cap S_I. $ \end{lem}
Let $ v \in {\mathcal{M}}(\lambda, \mu)_{\sqrt{\zeta_{\lambda, \mu}}}. $ For each $ p \in \lbrace 1, \ldots, s \rbrace, $ we can write $ v = \sum_j x_j^{(p)} z_j^{(p)} v_j $ where $ v_j \in \Phi(\lambda, \mu), $ $ \lbrace x_j^{(p)} \rbrace_j $ are linearly independent elements of $ {\mathbb{C}}[D_{\lambda - \mu} \cap S_{I - I_p}] $ and
$z_j^{(p)} \in {\mathbb{C}}[D_{\lambda - \mu} \cap S_{I_p}]$. Let ${\mathcal{P}}_d[x^2]_{I_p}={\mathbb{C}}[x_i^2|i\in I_p]$.
\begin{lem}\cite[Lemma A.10]{su2} For $ f \in {\mathcal{P}}_d[x^2]_{I_p}, $ $ f z_k^{(p)} v_j = \zeta_{\lambda, \mu}(f) z_k^{(p)} v_j. $ \end{lem}
\begin{proof} Observe \[ 0=(f-\zeta_{\lambda,\mu}(f))v =\sum_jx_j^{(p)}(f-\zeta_{\lambda,\mu}(f))z_j^{(p)}{\mathbf{1}}_{\lambda,\mu}. \] Since $S_{I_p}\subset S_d$ is closed with respect to the Bruhat order we have $f z_j^{(p)}{\mathbf{1}}_{\lambda,\mu}\in{\mathbb{C}}[D_{\lambda-\mu}\cap S_{I_p}]$. Since $\{x_j^{(p)}\}_j$ are linearly independent, each $(f-\zeta_{\lambda,\mu}(f))z_j^{(p)}{\mathbf{1}}_{\lambda,\mu}$ must be 0. \end{proof}
\noindent\emph{Proof of Proposition \ref{dominant wt space}.} Let $ {\mathcal{H}_{\Cl}^{\mathrm{aff}}}(I_p) $ be the subalgebra corresponding to $ I_p, $ so that $ {\mathcal{H}_{\Cl}^{\mathrm{aff}}}(I_p) \cong {\mathcal{H}_{\Cl}^{\mathrm{aff}}}(|I_p|). $ Note that $ {\mathcal{H}_{\Cl}^{\mathrm{aff}}}(I_p) v_j \cong {\mathcal{M}}_{a,b,n_p-n_{p-1}} $ for some $a,b$. Thus by Proposition~\ref{mainprop1}, $z_k^{(p)} v_j \in {\mathbb{C}}{\mathbf{1}}_{\lambda, \mu}$. Thus, $ v \in {\mathbb{C}}[D_{\lambda - \mu} \cap S_{I - I_p}]{\mathbf{1}}_{\lambda,\mu} $ for any $ p. $ It now follows that $ v \in \mathbb{C} {\mathbf{1}}_{\lambda, \mu}. $\QED
\begin{thm}\label{thm:unique irred quotient} Assume that $\lambda\in{P^{++}}$, $\mu\in P^+[\lambda]$, and $\lambda-\mu\in{P_{\geq0}}(d)$. Then ${\mathcal{M}}(\lambda,\mu)$ has a unique simple quotient module, denoted ${\mathcal{L}}(\lambda,\mu)$. \end{thm}
\begin{proof} Assume $N$ is a submodule of ${\mathcal{M}}(\lambda,\mu)$. If $N_{\zeta_{\lambda,\mu}}^{\mathrm{gen}}\neq0$, then $N_{\sqrt{\zeta_{\lambda,\mu}}}\neq0$. By Proposition \ref{dominant wt space}, $N\cap{\mathcal{C}\ell}(d){\mathbf{1}}_{\lambda,\mu}\neq\{0\}$, so ${\mathbf{1}}_{\lambda,\mu}\in N$ because ${\mathcal{C}\ell}(d){\mathbf{1}}_{\lambda,\mu}$ is an irreducible ${\mathcal{H}_{\Cl}^{\mathrm{aff}}}(\lambda-\mu)$-module. Hence, $N={\mathcal{M}}(\lambda,\mu)$. It follows that every proper submodule $N$ satisfies \[ N\subseteq\bigoplus_{\eta\neq\zeta_{\lambda,\mu}}{\mathcal{M}}(\lambda,\mu)^{\mathrm{gen}}_\eta. \] The sum of all proper submodules therefore satisfies this property as well, and in particular is proper. Hence, ${\mathcal{M}}(\lambda,\mu)$ has a unique maximal proper submodule and a unique simple quotient. \end{proof}
Let $\mathcal{R}(\lambda,\mu)$ denote the unique maximal proper submodule, so that ${\mathcal{L}}(\lambda,\mu)={\mathcal{M}}(\lambda,\mu)/\mathcal{R}(\lambda,\mu)$.
\section{Classification of Calibrated Representations}\label{S:Calibrated} A representation $ M $ of the AHCA is called \emph{calibrated} if the polynomial subalgebra ${\mathcal{P}}_d[x]\subseteq{\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)$ acts semisimply. The main combinatorial object associated to such a representation is the shifted skew shape. Calibrated representations of the affine Hecke algebra were studied and classified in \cite{ram}. The main combinatorial objects in that case were pairs of skew shapes and content functions. That construction, along with \cite[Conjecture 52]{lec}, motivated the construction given here. A proof of a slightly modified version of that conjecture is given here. Leclerc defined a calibrated representation to be one in which ${\mathcal{P}}_d[x^2]$ acts semisimply. For example, the module $ \Phi_{[-1,0]} $ is calibrated in the sense of \cite{lec}, but $ x_1, x_2 $ do not act diagonally in any basis.
\subsection{Construction of Calibrated Representations}\label{SS:Calibrated} Let $ \lambda = (\lambda_1, \ldots, \lambda_r) $ and $ \mu = (\mu_1, \ldots, \mu_r) $ be two partitions with $ \lambda_1 > \cdots > \lambda_r >0 $ and $ \mu_1 \geq \cdots \geq \mu_r $ such that $ \mu_i = \mu_{i+1} $ implies $ \mu_i = 0$ and $ \lambda_i \geq \mu_i $ for all $ i. $ To such data, associate the shifted skew shape in which row $ i $ has $ \lambda_i - \mu_i $ boxes, obtained from the shifted diagram of $ \lambda, $ whose row $ i $ has its leftmost box in position $ i, $ by removing the first $ \mu_i $ boxes of row $ i. $ Figure \ref{ex1} illustrates a skew shape for $ \lambda = (5,2,1) $ and $ \mu = (3,1,0). $
\begin{figure}
\caption{Skew Shape filled with contents}
\label{ex1}
\end{figure}
A standard filling of a skew shape $ \lambda / \mu $ with a total of $ d $ boxes is an insertion of the set $ \{ 1, \ldots, d \}$ into the boxes of the skew shape such that each box receives exactly one element, each element is used exactly once, the rows increase from left to right, and the columns increase from top to bottom. In a shifted shape $\lambda$, all the boxes lie on or above one main diagonal running from northwest to southeast. Each box on this main diagonal is assigned content 0. The contents of the remaining boxes are constant along diagonals, and each diagonal has content one more than the diagonal immediately to its southwest. In a shifted skew shape $\lambda/\mu$, the contents are defined as in Figure \ref{ex1}.
Given a standard tableau $ L $ for a shifted skew shape $ \lambda / \mu, $ let $ c(L_i) $ be the content of the box labeled by $ i. $ Thus $ L $ gives rise to a $ d$-tuple $ c(L) = (c(L_1), \ldots, c(L_d)) $ called the content reading of $ \lambda / \mu $ with respect to $ L. $
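For example, for the shifted shape $\lambda=(3,1)$, $\mu=(0,0)$, the boxes of row 1 occupy columns $1,2,3$ with contents $0,1,2$, and the box of row 2 occupies column 2 with content $0$. The standard filling placing $1,2,4$ in row 1 and $3$ in row 2 has content reading \[ c(L)=(0,1,0,2). \]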
Let $ \lambda / \mu $ be a shifted skew shape such that $ \lambda / \mu $ has $ d $ boxes. Set $ \kappa_{i,L} = \sqrt{q(c(L_{i}))} $ and $$ \mathcal{Y}_{i,L} = \sqrt{1-\frac{1}{({\kappa_{i+1,L}-\kappa_{i,L}})^2}
-\frac{1}{({\kappa_{i+1,L}+\kappa_{i,L}})^2}}.
$$
Now to a skew shape $ \lambda/\mu, $ associate a vector space $ \widehat{H}^{\lambda / \mu} = \oplus_L {\mathcal{C}\ell}(d) v_L $ where $ L $ ranges over all standard tableaux of shape $ \lambda/\mu $ and $ d $ is the number of boxes in the shifted skew shape. Define $ x_i v_L = \kappa_{i,L} v_L. $ Define $$ s_i v_L = \frac{1}{\kappa_{i+1,L}-\kappa_{i,L}} v_L + \frac{1}{\kappa_{i+1,L}+\kappa_{i,L}} c_i c_{i+1} v_L + \mathcal{Y}_{i,L} v_{s_i L} $$ where $ v_{s_i L} = 0 $ if $ s_i L $ is not a standard tableau.
\begin{prp} The action of the $ x_i $ and $s_{i}$ given above endow $ \widehat{H}^{\lambda / \mu} $ with the structure of a $ {\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)$-module. \end{prp}
\begin{proof} We have \begin{align*} s_i^2 v_L &= \frac{1}{\kappa_{i+1,L}-\kappa_{i,L}} s_i v_L - \frac{1}{\kappa_{i+1,L}+\kappa_{i,L}} c_i c_{i+1} s_i v_L + \mathcal{Y}_{i,L} s_i v_{s_i L}\\ &= \frac{1}{\kappa_{i+1,L}-\kappa_{i,L}}\left(\frac{1}{\kappa_{i+1,L}-\kappa_{i,L}} v_L + \frac{1}{\kappa_{i+1,L}+\kappa_{i,L}} c_i c_{i+1} v_L + \mathcal{Y}_{i,L} v_{s_i L}\right)\\ &-\frac{c_i c_{i+1}}{\kappa_{i+1,L}+\kappa_{i,L}}\left(\frac{1}{\kappa_{i+1,L}-\kappa_{i,L}} v_L + \frac{1}{\kappa_{i+1,L}+\kappa_{i,L}} c_i c_{i+1} v_L + \mathcal{Y}_{i,L} v_{s_i L}\right)\\ &+ \mathcal{Y}_{i,L}\left(\frac{1}{\kappa_{i,L}-\kappa_{i+1,L}} v_{s_i L} + \frac{1}{\kappa_{i+1,L}+\kappa_{i,L}} c_i c_{i+1} v_{s_i L} + \mathcal{Y}_{i,L} v_{L}\right)\\ &= \left(\frac{1}{(\kappa_{i+1,L}-\kappa_{i,L})^2}+\frac{1}{(\kappa_{i+1,L}+\kappa_{i,L})^2} + \mathcal{Y}_{i,L} \mathcal{Y}_{i,L}\right) v_L = v_L. \end{align*} Note that if $ v_{s_i L} = 0, $ then $ \frac{1}{(\kappa_{i+1,L}-\kappa_{i,L})^2} + \frac{1}{(\kappa_{i+1,L}+\kappa_{i,L})^2}=1. $
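Indeed, if $s_iL$ is not standard, then $i$ and $i+1$ occupy adjacent boxes in a row or column of $L$, so $c(L_{i+1})=c(L_i)\pm1$. Writing $c(L_i)=a$ and taking $q(a)=a(a+1)$, the quadratic satisfying $q(a)=q(-a-1)$ for which this identity holds, one checks for $c(L_{i+1})=a+1$ that \[ (\kappa_{i+1,L}-\kappa_{i,L})^2(\kappa_{i+1,L}+\kappa_{i,L})^2=\big(q(a+1)-q(a)\big)^2=4(a+1)^2 \] and \[ (\kappa_{i+1,L}-\kappa_{i,L})^2+(\kappa_{i+1,L}+\kappa_{i,L})^2=2\big(q(a)+q(a+1)\big)=4(a+1)^2, \] so the two reciprocals sum to 1; the case $c(L_{i+1})=a-1$ is analogous.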
Next, $$ s_i x_i v_L = \frac{\kappa_{i,L}}{\kappa_{i+1,L}-\kappa_{i,L}} v_L + \frac{\kappa_{i,L}}{\kappa_{i+1,L}+\kappa_{i,L}} c_i c_{i+1} v_L + \kappa_{i,L}\mathcal{Y}_{i,L} v_{s_i L}. $$ On the other hand, since $ \kappa_{i+1,s_iL}=\kappa_{i,L}, $ $$ x_{i+1} s_i v_L - v_L + c_i c_{i+1} v_L = \frac{\kappa_{i+1,L}}{\kappa_{i+1,L}-\kappa_{i,L}} v_L - \frac{\kappa_{i+1,L}}{\kappa_{i+1,L}+\kappa_{i,L}} c_i c_{i+1} v_L +\kappa_{i,L}\mathcal{Y}_{i,L} v_{s_i L} - v_L + c_i c_{i+1} v_L. $$ Comparing coefficients, it follows that $$ s_i x_i v_L = x_{i+1} s_i v_L - v_L + c_i c_{i+1} v_L. $$
We now check the braid relations. To this end, fix $j\in\mathbb{N}$ and set $\kappa_i=\sqrt{j+i}$ for $i\geq0$.
\begin{figure}
\caption{Case 1}
\label{F:Case 1}
\end{figure}
Case 1: Let $ L $ be the standard tableau given in Figure \ref{F:Case 1}. A calculation gives {\small \begin{align*} s_i s_{i+1} s_i v_L = s_{i+1} s_i s_{i+1} v_L =&\left(\frac{1}{(\kappa_3-\kappa_2)^2(\kappa_2-\kappa_1)}
-\frac{1}{(\kappa_2+\kappa_3)^2(\kappa_1+\kappa_2)}\right)v_L\\ &+\left(\frac{1}{(\kappa_3^2-\kappa_2^2)(\kappa_2+\kappa_1)}
+\frac{1}{(\kappa_3^2-\kappa_2^2)^2(\kappa_2-\kappa_1)}\right)c_ic_{i+1}v_L\\ &+\left(\frac{1}{(\kappa_3^2-\kappa_2^2)(\kappa_2-\kappa_1)}
+\frac{1}{(\kappa_3^2-\kappa_2^2)^2(\kappa_2+\kappa_1)}\right)c_{i+1}c_{i+2}v_L\\ &+\left(\frac{1}{(\kappa_3-\kappa_2)^2(\kappa_2+\kappa_1)}
-\frac{1}{(\kappa_2+\kappa_3)^2(\kappa_2-\kappa_1)}\right)c_ic_{i+2}v_L. \end{align*} }
\begin{figure}
\caption{Case 2}
\label{F:Case 2}
\end{figure}
Case 2: Let $ L_1 $ and $ L_2 $ be the standard tableaux given in Figure \ref{F:Case 2}. A calculation gives {\small \begin{align*} s_i s_{i+1} s_i v_{L_1} = s_{i+1} s_i s_{i+1}v_{L_1} =&\left(\frac{1}{(\kappa_3-\kappa_2)^2(\kappa_1-\kappa_3)}
+\frac{1}{(\kappa_2+\kappa_3)^2(\kappa_1+\kappa_3)}\right)v_{L_1}\\
&+\left(\frac{1}{(\kappa_3^2-\kappa_2^2)(\kappa_1-\kappa_3)}
-\frac{1}{(\kappa_3^2-\kappa_2^2)^2(\kappa_1+\kappa_3)}\right)c_ic_{i+1}v_{L_1}\\ &+\left(\frac{1}{(\kappa_2^2-\kappa_3^2)(\kappa_1+\kappa_3)}
+\frac{1}{(\kappa_3^2-\kappa_2^2)^2(\kappa_1-\kappa_3)}\right)c_{i+1}c_{i+2}v_{L_1}\\ &+\left(\frac{1}{(\kappa_3-\kappa_2)^2(\kappa_1+\kappa_3)}
-\frac{1}{(\kappa_2+\kappa_3)^2(\kappa_1-\kappa_3)}\right)c_ic_{i+2}v_{L_1}\\ &+\left(\frac{\mathcal{Y}_{i+1,L_1}}{(\kappa_3-\kappa_2)(\kappa_1-\kappa_2)}\right)v_{L_2}
+\left(\frac{\mathcal{Y}_{i+1,L_1}}{(\kappa_3-\kappa_2)(\kappa_1+\kappa_2)}\right)
c_ic_{i+1}v_{L_2}\\ &+\left(\frac{\mathcal{Y}_{i+1,L_1}}{(\kappa_1-\kappa_2)(\kappa_2+\kappa_3)}\right)
c_{i+1}c_{i+2}v_{L_2}
+\left(\frac{\mathcal{Y}_{i+1,L_1}}{(\kappa_2+\kappa_3)(\kappa_1+\kappa_2)}\right)
c_ic_{i+2}v_{L_2}. \end{align*} \begin{align*} s_i s_{i+1} s_i v_{L_2} = s_{i+1} s_i s_{i+1}v_{L_2} =& \left(\frac{1}{(\kappa_1-\kappa_2)^2(\kappa_3-\kappa_1)}
+\frac{1}{(\kappa_1+\kappa_2)^2(\kappa_1+\kappa_3)}\right)v_{L_2}\\ &+\left(\frac{1}{(\kappa_1^2-\kappa_2^2)(\kappa_3-\kappa_1)}
-\frac{1}{(\kappa_1^2-\kappa_2^2)^2(\kappa_1+\kappa_3)}\right)c_ic_{i+1}v_{L_2}\\ &+\left(\frac{-1}{(\kappa_1^2-\kappa_2^2)(\kappa_1+\kappa_3)}
+\frac{1}{(\kappa_1^2-\kappa_2^2)^2(\kappa_3-\kappa_1)}\right)c_{i+1}c_{i+2}v_{L_2}\\ &+\left(\frac{1}{(\kappa_1-\kappa_2)^2(\kappa_1+\kappa_3)}
+\frac{1}{(\kappa_1+\kappa_2)^2(\kappa_3-\kappa_1)}\right)c_ic_{i+2}v_{L_2}\\ &+\left(\frac{\mathcal{Y}_{i+1,L_2}}{(\kappa_3-\kappa_2)(\kappa_1-\kappa_2)}\right)v_{L_1}
+\left(\frac{\mathcal{Y}_{i+1,L_2}}{(\kappa_1-\kappa_2)(\kappa_2+\kappa_3)}\right)
c_ic_{i+1}v_{L_1}\\ &+\left(\frac{\mathcal{Y}_{i+1,L_2}}{(\kappa_3-\kappa_2)(\kappa_1+\kappa_2)}\right)
c_{i+1}c_{i+2}v_{L_1} +\left(\frac{\mathcal{Y}_{i+1,L_1}}{(\kappa_2+\kappa_3)(\kappa_1+\kappa_2)}\right)c_ic_{i+2}v_{L_1}. \end{align*} } \begin{figure}
\caption{Case 3}
\label{F:Case 3}
\end{figure}
Case 3: Let $L_1$ and $L_2$ be as in Figure \ref{F:Case 3}. Then, a calculation analogous to Case 2 shows that $s_is_{i+1}s_iv_{L_1}=s_{i+1}s_is_{i+1}v_{L_1}$ and $s_is_{i+1}s_iv_{L_2}=s_{i+1}s_is_{i+1}v_{L_2}$.
\begin{figure}
\caption{Case 4}
\label{F:Case 4}
\end{figure}
Case 4: Let $ L_1, L_2, $ and $ L_3 $ be the standard tableaux given in Figure \ref{F:Case 4}. {\small \begin{align*} s_i s_{i+1} s_i v_{L_1} = s_{i+1} s_i s_{i+1}v_{L_1} =& \left(\frac{1}{(\kappa_1-\kappa_0)^2(\kappa_3-\kappa_1)}
+\frac{1}{(\kappa_0+\kappa_1)^2(\kappa_1+\kappa_3)}\right)v_{L_1}\\ &+\left(\frac{1}{(\kappa_1^2-\kappa_0^2)(\kappa_3-\kappa_1)}
-\frac{1}{(\kappa_1^2-\kappa_0^2)(\kappa_1+\kappa_3)}\right)c_ic_{i+1}v_{L_1}\\ &+\left(\frac{1}{(\kappa_0^2-\kappa_1^2)(\kappa_1+\kappa_3)}
-\frac{1}{(\kappa_0^2-\kappa_1^2)(\kappa_3-\kappa_1)}\right)c_{i+1}c_{i+2}v_{L_1}\\ &+\left(\frac{1}{(\kappa_1-\kappa_0)^2(\kappa_1+\kappa_3)}
-\frac{1}{(\kappa_0+\kappa_1)^2(\kappa_3-\kappa_1)}\right)c_ic_{i+2}v_{L_1}\\ &+\left(\frac{\mathcal{Y}_{i+1,L_1}}{(\kappa_1-\kappa_0)(\kappa_3-\kappa_0)}\right)v_{L_2}
+\left(\frac{\mathcal{Y}_{i+1,L_1}}{(\kappa_1-\kappa_0)(\kappa_0+\kappa_3)}\right)
c_ic_{i+1}v_{L_2}\\ &+\left(\frac{\mathcal{Y}_{i+1,L_1}}{(\kappa_0+\kappa_1)(\kappa_3-\kappa_0)}\right)
c_{i+1}c_{i+2}v_{L_2}
+\left(\frac{\mathcal{Y}_{i+1,L_1}}{(\kappa_0+\kappa_1)(\kappa_0+\kappa_3)}\right) c_ic_{i+2}v_{L_2}\\ &+\left(\frac{\mathcal{Y}_{i+1,L_1}\mathcal{Y}_{i,L_2}}{\kappa_1-\kappa_0}\right)v_{L_3}
+\left(\frac{\mathcal{Y}_{i+1,L_1}\mathcal{Y}_{i,L_2}}{\kappa_0+\kappa_1}\right) c_{i+1}c_{i+2}v_{L_3}. \end{align*} \begin{align*} s_i s_{i+1} s_i v_{L_2} = s_{i+1} s_i s_{i+1}v_{L_2} =& \left(\frac{1}{(\kappa_3-\kappa_0)^2(\kappa_1-\kappa_3)}
+\frac{1}{(\kappa_0+\kappa_3)^2(\kappa_1+\kappa_3)}+\frac{\mathcal{Y}_{i,L_2} \mathcal{Y}_{i,L_3}}{\kappa_1-\kappa_0}\right)v_{L_2}\\ &+\left(\frac{1}{(\kappa_3^2-\kappa_0^2)(\kappa_1-\kappa_3)}
-\frac{1}{(\kappa_3^2-\kappa_0^2)(\kappa_1+\kappa_3)}\right)c_ic_{i+1}v_{L_2}\\ &+\left(\frac{-1}{(\kappa_3^2-\kappa_0^2)(\kappa_1-\kappa_3)}
-\frac{1}{(\kappa_3^2-\kappa_0^2)(\kappa_1-\kappa_3)}\right)c_{i+1}c_{i+2}v_{L_2}\\ &+\left(\frac{1}{(\kappa_3-\kappa_0)^2(\kappa_1+\kappa_3)}
-\frac{1}{(\kappa_0+\kappa_3)^2(\kappa_1-\kappa_3)}
+\frac{\mathcal{Y}_{i,L_2}\mathcal{Y}_{i,L_3}}{\kappa_0+\kappa_1}\right)
c_ic_{i+2}v_{L_2}\\ &+\left(\frac{\mathcal{Y}_{i,L_2}}{(\kappa_3-\kappa_0)(\kappa_1-\kappa_3)}
+\frac{\mathcal{Y}_{i,L_2}}{(\kappa_1-\kappa_0)(\kappa_0-\kappa_3)}\right)v_{L_3}\\ &+\left(\frac{-\mathcal{Y}_{i,L_2}}{(\kappa_3+\kappa_0)(\kappa_1+\kappa_3)}
+\frac{\mathcal{Y}_{i,L_2}}{(\kappa_1-\kappa_0)(\kappa_0+\kappa_3)}\right)
c_ic_{i+1}v_{L_3}\\ &+\left(\frac{\mathcal{Y}_{i,L_2}}{(\kappa_3+\kappa_0)(\kappa_1-\kappa_3)}
-\frac{\mathcal{Y}_{i,L_2}}{(\kappa_1+\kappa_0)(\kappa_0+\kappa_3)}\right)
c_{i+1}c_{i+2}v_{L_3}\\ &+\left(\frac{\mathcal{Y}_{i,L_2}}{(\kappa_3-\kappa_0)(\kappa_1+\kappa_3)}
+\frac{\mathcal{Y}_{i,L_2}}{(\kappa_1+\kappa_0)(\kappa_0-\kappa_3)}\right)
c_ic_{i+2}v_{L_3}\\ &+\left(\frac{\mathcal{Y}_{i+1,L_2}}{(\kappa_1-\kappa_0)(\kappa_3-\kappa_0)}\right)v_{L_1}
+\left(\frac{\mathcal{Y}_{i+1,L_2}}{(\kappa_1+\kappa_0)(\kappa_3-\kappa_0)}\right)
c_ic_{i+1}v_{L_1}\\ &+\left(\frac{\mathcal{Y}_{i+1,L_2}}{(\kappa_3+\kappa_0)(\kappa_1-\kappa_0)}\right)
c_{i+1}c_{i+2}v_{L_1}
+\left(\frac{\mathcal{Y}_{i+1,L_2}}{(\kappa_3+\kappa_0)(\kappa_1+\kappa_0)}\right)
c_ic_{i+2}v_{L_1}. \end{align*} \begin{align*} s_i s_{i+1} s_i v_{L_3} = s_{i+1} s_i s_{i+1}v_{L_3} =& \left(\frac{1}{(\kappa_3-\kappa_0)^2(\kappa_1-\kappa_0)}
+\frac{1}{(\kappa_0+\kappa_3)^2(\kappa_0+\kappa_1)}+\frac{\mathcal{Y}_{i,L_2} \mathcal{Y}_{i,L_3}}{\kappa_1-\kappa_3}\right) v_{L_3} \\ &+\left(\frac{1}{(\kappa_0^2-\kappa_3^2)(\kappa_1-\kappa_0)}
-\frac{1}{(\kappa_0^2-\kappa_3^2)(\kappa_0+\kappa_1)}\right)c_ic_{i+1}v_{L_3}\\ &+\left(\frac{-1}{(\kappa_0^2-\kappa_3^2)(\kappa_0+\kappa_1)}
+\frac{1}{(\kappa_0^2-\kappa_3^2)(\kappa_1-\kappa_0)}\right)c_{i+1}c_{i+2}v_{L_3}\\ &+\left(\frac{1}{(\kappa_3-\kappa_0)^2(\kappa_0+\kappa_1)}
+\frac{1}{(\kappa_0+\kappa_3)^2(\kappa_1-\kappa_0)}
+\frac{\mathcal{Y}_{i,L_2}\mathcal{Y}_{i,L_3}}{\kappa_1+\kappa_3}\right)
c_ic_{i+2}v_{L_3}\\ &+\left(\frac{\mathcal{Y}_{i,L_3}}{(\kappa_0-\kappa_3)(\kappa_1-\kappa_0)}
+\frac{\mathcal{Y}_{i,L_3}}{(\kappa_1-\kappa_3)(\kappa_3-\kappa_0)}\right)v_{L_2}\\ &+\left(\frac{-\mathcal{Y}_{i,L_3}}{(\kappa_3+\kappa_0)(\kappa_0+\kappa_1)}
+\frac{\mathcal{Y}_{i,L_3}}{(\kappa_1-\kappa_3)(\kappa_0+\kappa_3)}\right)
c_ic_{i+1}v_{L_2}\\ &+\left(\frac{\mathcal{Y}_{i,L_3}}{(\kappa_3+\kappa_0)(\kappa_1-\kappa_0)}
-\frac{\mathcal{Y}_{i,L_3}}{(\kappa_1+\kappa_3)(\kappa_0+\kappa_3)}\right)
c_{i+1}c_{i+2}v_{L_2}\\ &+\left(\frac{\mathcal{Y}_{i,L_3}}{(\kappa_0-\kappa_3)(\kappa_0+\kappa_1)}
+\frac{\mathcal{Y}_{i,L_3}}{(\kappa_1+\kappa_3)(\kappa_3-\kappa_0)}\right)
c_ic_{i+2}v_{L_2}\\ &+\left(\frac{\mathcal{Y}_{i,L_3}\mathcal{Y}_{i+1,L_2}}{\kappa_1-\kappa_0}\right)v_{L_1}
+\left(\frac{\mathcal{Y}_{i,L_3}\mathcal{Y}_{i+1,L_2}}{\kappa_1+\kappa_0}\right)
c_ic_{i+1}v_{L_1}. \end{align*} }
\begin{figure}
\caption{Case 5}
\label{F:Case 5}
\end{figure}
Case 5: Let $ L_1, L_2, L_3, L_4, L_5, $ and $ L_6 $ be given as in Figure \ref{F:Case 5}.
{\small \begin{align*} s_i s_{i+1} s_i v_{L_1} = s_{i+1} s_i s_{i+1}v_{L_1} =& \left(\frac{1}{(\kappa_2-\kappa_0)^2(\kappa_4-\kappa_2)}
+\frac{1}{(\kappa_2+\kappa_0)^2(\kappa_4+\kappa_2)}+\frac{\mathcal{Y}_{i,L_1} \mathcal{Y}_{i, L_2}}{\kappa_4-\kappa_0}\right)v_{L_1}\\ &+\left(\frac{1}{(\kappa_2^2-\kappa_0^2)(\kappa_4-\kappa_2)}
-\frac{1}{(\kappa_2^2-\kappa_0^2)(\kappa_4+\kappa_2)}\right)c_ic_{i+1}v_{L_1}\\ &+\left(\frac{-1}{(\kappa_2^2-\kappa_0^2)(\kappa_4+\kappa_2)}
+\frac{1}{(\kappa_2^2-\kappa_0^2)(\kappa_4-\kappa_2)}\right)c_{i+1}c_{i+2}v_{L_1}\\ &+\left(\frac{1}{(\kappa_2-\kappa_0)^2(\kappa_4+\kappa_2)}
+\frac{1}{(\kappa_2+\kappa_0)^2(\kappa_4-\kappa_2)}+\frac{\mathcal{Y}_{i,L_1} \mathcal{Y}_{i, L_2}}{\kappa_4+\kappa_0}\right)c_ic_{i+2}v_{L_1}\\ &+\left(\frac{\mathcal{Y}_{i,L_1}}{(\kappa_2-\kappa_0)(\kappa_4-\kappa_2)}
+\frac{\mathcal{Y}_{i,L_1}}{(\kappa_4-\kappa_0)(\kappa_0-\kappa_2)}\right)v_{L_2}\\ &+\left(\frac{-\mathcal{Y}_{i,L_1}}{(\kappa_2+\kappa_0)(\kappa_4+\kappa_2)}
+\frac{\mathcal{Y}_{i,L_1}}{(\kappa_4-\kappa_0)(\kappa_0+\kappa_2)}\right)c_ic_{i+1}v_{L_2}\\ &+\left(\frac{\mathcal{Y}_{i,L_1}}{(\kappa_2+\kappa_0)(\kappa_4-\kappa_2)}
-\frac{\mathcal{Y}_{i,L_1}}{(\kappa_4+\kappa_0)(\kappa_0+\kappa_2)}\right)
c_{i+1}c_{i+2}v_{L_2}\\ &+\left(\frac{\mathcal{Y}_{i,L_1}}{(\kappa_2-\kappa_0)(\kappa_4+\kappa_2)}
+\frac{\mathcal{Y}_{i,L_1}}{(\kappa_4+\kappa_0)(\kappa_0-\kappa_2)}\right)c_{i}c_{i+2}v_{L_2}\\ &+\left(\frac{\mathcal{Y}_{i+1,L_1}}{(\kappa_2-\kappa_0)(\kappa_4-\kappa_0)}\right)v_{L_3}+
\left(\frac{\mathcal{Y}_{i+1,L_1}}{(\kappa_2-\kappa_0)(\kappa_4+\kappa_0)}\right)
c_ic_{i+1}v_{L_3}\\ &+\left(\frac{\mathcal{Y}_{i+1,L_1}}{(\kappa_2+\kappa_0)(\kappa_4-\kappa_0)}\right)
c_{i+1}c_{i+2}v_{L_3}
+\left(\frac{\mathcal{Y}_{i+1,L_1}}{(\kappa_2+\kappa_0)(\kappa_4+\kappa_0)}\right) c_ic_{i+2}v_{L_3}\\ &+\left(\frac{\mathcal{Y}_{i+1,L_1}\mathcal{Y}_{i,L_3}}{\kappa_2-\kappa_0}\right)v_{L_6}
+\left(\frac{\mathcal{Y}_{i+1,L_1}\mathcal{Y}_{i,L_3}}{\kappa_2+\kappa_0}\right) c_{i+1}c_{i+2}v_{L_6}\\ &+\left(\frac{\mathcal{Y}_{i,L_1}\mathcal{Y}_{i+1,L_2}}{\kappa_4-\kappa_2}\right)v_{L_4}
+\left(\frac{\mathcal{Y}_{i,L_1}\mathcal{Y}_{i+1,L_2}}{\kappa_4+\kappa_2}\right)
c_ic_{i+1}v_{L_4}+(\mathcal{Y}_{i,L_1}\mathcal{Y}_{i+1,L_2}\mathcal{Y}_{i,L_4})v_{L_5}. \end{align*} \begin{align*} s_i s_{i+1} s_i v_{L_2} = s_{i+1} s_i s_{i+1}v_{L_2} =& \left(\frac{1}{(\kappa_2-\kappa_0)^2(\kappa_4-\kappa_0)}
+\frac{1}{(\kappa_2+\kappa_0)^2(\kappa_4+\kappa_0)}+\frac{\mathcal{Y}_{i,L_1} \mathcal{Y}_{i,L_2}}{\kappa_4-\kappa_2}\right)v_{L_2}\\ &+\left(\frac{1}{(\kappa_0^2-\kappa_2^2)(\kappa_4-\kappa_0)}
-\frac{1}{(\kappa_0^2-\kappa_2^2)(\kappa_4+\kappa_0)}\right)c_ic_{i+1}v_{L_2}\\ &+\left(\frac{-1}{(\kappa_0^2-\kappa_2^2)(\kappa_4+\kappa_0)}
+\frac{1}{(\kappa_0^2-\kappa_2^2)(\kappa_4-\kappa_0)}\right)c_{i+1}c_{i+2}v_{L_2}\\ &+\left(\frac{1}{(\kappa_0-\kappa_2)^2(\kappa_4+\kappa_0)}
+\frac{1}{(\kappa_2+\kappa_0)^2(\kappa_4-\kappa_0)}+\frac{\mathcal{Y}_{i,L_1} \mathcal{Y}_{i, L_2}}{\kappa_4+\kappa_2}\right)c_ic_{i+2}v_{L_2}\\ &+\left(\frac{\mathcal{Y}_{i,L_2}}{(\kappa_0-\kappa_2)(\kappa_4-\kappa_0)}
+\frac{\mathcal{Y}_{i,L_2}}{(\kappa_4-\kappa_2)(\kappa_2-\kappa_0)}\right)v_{L_1}\\ &+\left(\frac{-\mathcal{Y}_{i,L_2}}{(\kappa_2+\kappa_0)(\kappa_4+\kappa_0)}
+\frac{\mathcal{Y}_{i,L_2}}{(\kappa_4-\kappa_2)(\kappa_0+\kappa_2)}\right)c_ic_{i+1}v_{L_1}\\ &+\left(\frac{\mathcal{Y}_{i,L_2}}{(\kappa_2+\kappa_0)(\kappa_4-\kappa_0)}
-\frac{\mathcal{Y}_{i,L_2}}{(\kappa_4+\kappa_2)(\kappa_0+\kappa_2)}\right)
c_{i+1}c_{i+2}v_{L_1}\\ &+\left(\frac{\mathcal{Y}_{i,L_2}}{(\kappa_0-\kappa_2)(\kappa_0+\kappa_4)}
+\frac{\mathcal{Y}_{i,L_2}}{(\kappa_4+\kappa_2)(\kappa_0-\kappa_2)}\right)c_{i}c_{i+2}v_{L_1}\\ &+\left(\frac{\mathcal{Y}_{i+1,L_2}}{(\kappa_0-\kappa_2)(\kappa_4-\kappa_2)}\right)v_{L_4}
+\left(\frac{\mathcal{Y}_{i+1,L_2}}{(\kappa_0-\kappa_2)(\kappa_4+\kappa_2)}\right) c_ic_{i+1}v_{L_4}\\ &+\left(\frac{\mathcal{Y}_{i+1,L_2}}{(\kappa_2+\kappa_0)(\kappa_4-\kappa_2)}\right)
c_{i+1}c_{i+2}v_{L_4}
+\left(\frac{\mathcal{Y}_{i+1,L_2}}{(\kappa_2+\kappa_0)(\kappa_4+\kappa_2)}\right) c_ic_{i+2}v_{L_4}\\ &+\left(\frac{\mathcal{Y}_{i+1,L_1}\mathcal{Y}_{i,L_4}}{\kappa_0-\kappa_2}\right)v_{L_5}
+\left(\frac{\mathcal{Y}_{i+1,L_1}\mathcal{Y}_{i,L_4}}{\kappa_2+\kappa_0}\right) c_{i+1}c_{i+2}v_{L_5}\\ &+\left(\frac{\mathcal{Y}_{i,L_2}\mathcal{Y}_{i+1,L_1}}{\kappa_4-\kappa_0}\right)v_{L_3}
+\left(\frac{\mathcal{Y}_{i,L_2}\mathcal{Y}_{i+1,L_1}}{\kappa_4+\kappa_0}\right) c_ic_{i+1}v_{L_3}
+\left(\mathcal{Y}_{i,L_2} \mathcal{Y}_{i+1, L_1} \mathcal{Y}_{i, L_3}\right)v_{L_6}. \end{align*} \begin{align*} s_i s_{i+1} s_i v_{L_3} = s_{i+1} s_i s_{i+1}v_{L_3} =& \left(\frac{1}{(\kappa_4-\kappa_0)^2(\kappa_2-\kappa_4)}
+\frac{1}{(\kappa_4+\kappa_0)^2(\kappa_4+\kappa_2)}+\frac{\mathcal{Y}_{i,L_3} \mathcal{Y}_{i, L_6}}{\kappa_2-\kappa_0}\right)v_{L_3}\\ &+\left(\frac{1}{(\kappa_4^2-\kappa_0^2)(\kappa_2-\kappa_4)}
-\frac{1}{(\kappa_4^2-\kappa_0^2)(\kappa_4+\kappa_2)}\right)c_ic_{i+1}v_{L_3}\\ &+\left(\frac{-1}{(\kappa_4^2-\kappa_0^2)(\kappa_4+\kappa_2)}
+\frac{1}{(\kappa_4^2-\kappa_0^2)(\kappa_2-\kappa_4)}\right)c_{i+1}c_{i+2}v_{L_3}\\ &+\left(\frac{1}{(\kappa_4-\kappa_0)^2(\kappa_4+\kappa_2)}
+\frac{1}{(\kappa_4+\kappa_0)^2(\kappa_2-\kappa_4)}+\frac{\mathcal{Y}_{i,L_3} \mathcal{Y}_{i, L_6}}{\kappa_0+\kappa_2}\right)c_i c_{i+2}v_{L_3}\\ &+\left(\frac{\mathcal{Y}_{i,L_3}}{(\kappa_4-\kappa_0)(\kappa_2-\kappa_4)}
+\frac{\mathcal{Y}_{i,L_3}}{(\kappa_2-\kappa_0)(\kappa_0-\kappa_4)}\right)v_{L_6}\\ &+\left(\frac{-\mathcal{Y}_{i,L_3}}{(\kappa_4+\kappa_0)(\kappa_4+\kappa_2)}
+\frac{\mathcal{Y}_{i,L_3}}{(\kappa_2-\kappa_0)(\kappa_0+\kappa_4)}\right) c_i c_{i+1}v_{L_6}\\ &+\left(\frac{\mathcal{Y}_{i,L_3}}{(\kappa_4+\kappa_0)(\kappa_2-\kappa_4)}
-\frac{\mathcal{Y}_{i,L_3}}{(\kappa_0+\kappa_2)(\kappa_0+\kappa_4)}\right) c_{i+1} c_{i+2}v_{L_6}\\ &+\left(\frac{\mathcal{Y}_{i,L_3}}{(\kappa_4-\kappa_0)(\kappa_2+\kappa_4)}
+\frac{\mathcal{Y}_{i,L_3}}{(\kappa_0+\kappa_2)(\kappa_0-\kappa_4)}\right)c_{i}c_{i+2}v_{L_6}\\ &+\left(\frac{\mathcal{Y}_{i+1,L_3}}{(\kappa_4-\kappa_0)(\kappa_2-\kappa_0)}\right)v_{L_1}
+\left(\frac{\mathcal{Y}_{i+1,L_3}}{(\kappa_4-\kappa_0)(\kappa_0+\kappa_2)}\right) c_i c_{i+1}v_{L_1}\\ &+\left(\frac{\mathcal{Y}_{i+1,L_3}}{(\kappa_4+\kappa_0)(\kappa_2-\kappa_0)}\right)
c_{i+1}c_{i+2}v_{L_1}+
\left(\frac{\mathcal{Y}_{i+1,L_3}}{(\kappa_4+\kappa_0)(\kappa_0+\kappa_2)}\right) c_i c_{i+2}v_{L_1}\\ &+\left(\frac{\mathcal{Y}_{i+1,L_3}\mathcal{Y}_{i,L_1}}{\kappa_4-\kappa_0}\right)v_{L_2}
+\left(\frac{\mathcal{Y}_{i+1,L_3}\mathcal{Y}_{i,L_1}}{\kappa_4+\kappa_0}\right) c_{i+1}c_{i+2}v_{L_2}\\ &+\left(\frac{\mathcal{Y}_{i,L_3}\mathcal{Y}_{i+1,L_6}}{\kappa_2-\kappa_4}\right)v_{L_5}
+\left(\frac{\mathcal{Y}_{i,L_3}\mathcal{Y}_{i+1,L_6}}{\kappa_4+\kappa_2}\right) c_ic_{i+1}v_{L_5}
+(\mathcal{Y}_{i,L_3}\mathcal{Y}_{i+1,L_6}\mathcal{Y}_{i,L_3})v_{L_4}. \end{align*} \begin{align*} s_i s_{i+1} s_i v_{L_4} = s_{i+1} s_i s_{i+1}v_{L_4} =& \left(\frac{1}{(\kappa_4-\kappa_2)^2(\kappa_0-\kappa_4)}+\frac{1}{(\kappa_4+\kappa_2)^2(\kappa_4+\kappa_0)}
+\frac{\mathcal{Y}_{i,L_4}\mathcal{Y}_{i,L_5}}{\kappa_0-\kappa_2}\right)v_{L_4}\\ &+\left(\frac{1}{(\kappa_4^2-\kappa_2^2)(\kappa_0-\kappa_4)}
-\frac{1}{(\kappa_4^2-\kappa_2^2)(\kappa_4+\kappa_0)}\right)c_ic_{i+1}v_{L_4}\\ &+\left(\frac{-1}{(\kappa_4^2-\kappa_2^2)(\kappa_4+\kappa_0)}
+\frac{1}{(\kappa_4^2-\kappa_2^2)(\kappa_0-\kappa_4)}\right)c_{i+1}c_{i+2}v_{L_4}\\ &+\left(\frac{1}{(\kappa_4-\kappa_2)^2(\kappa_4+\kappa_0)}
+\frac{1}{(\kappa_4+\kappa_2)^2(\kappa_0-\kappa_4)}
+\frac{\mathcal{Y}_{i,L_4}\mathcal{Y}_{i,L_5}}{\kappa_0+\kappa_2}\right)
c_ic_{i+2}v_{L_4}\\ &+\left(\frac{\mathcal{Y}_{i,L_4}}{(\kappa_4-\kappa_2)(\kappa_0-\kappa_4)}
+\frac{\mathcal{Y}_{i,L_4}}{(\kappa_0-\kappa_2)(\kappa_2-\kappa_4)}\right)v_{L_5}\\ &+\left(\frac{-\mathcal{Y}_{i,L_4}}{(\kappa_4+\kappa_2)(\kappa_4+\kappa_0)}
+\frac{\mathcal{Y}_{i,L_4}}{(\kappa_0-\kappa_2)(\kappa_2+\kappa_4)}\right)c_ic_{i+1}v_{L_5}\\ &+\left(\frac{\mathcal{Y}_{i,L_4}}{(\kappa_4+\kappa_2)(\kappa_0-\kappa_4)}
-\frac{\mathcal{Y}_{i,L_4}}{(\kappa_0+\kappa_2)(\kappa_2+\kappa_4)}\right) c_{i+1} c_{i+2}v_{L_5}\\ &+\left(\frac{\mathcal{Y}_{i,L_4}}{(\kappa_4-\kappa_2)(\kappa_0+\kappa_4)}
+\frac{\mathcal{Y}_{i,L_4}}{(\kappa_0+\kappa_2)(\kappa_2-\kappa_4)}\right) c_{i} c_{i+2}v_{L_5}\\ &+\left(\frac{\mathcal{Y}_{i+1,L_4}}{(\kappa_4-\kappa_2)(\kappa_0-\kappa_2)}\right)v_{L_2}
+\left(\frac{\mathcal{Y}_{i+1,L_4}}{(\kappa_4-\kappa_2)(\kappa_0+\kappa_2)}\right) c_i c_{i+1}v_{L_2}\\ &+\left(\frac{\mathcal{Y}_{i+1,L_4}}{(\kappa_4+\kappa_2)(\kappa_0-\kappa_2)}\right)
c_{i+1}c_{i+2}v_{L_2}
+\left(\frac{\mathcal{Y}_{i+1,L_4}}{(\kappa_4+\kappa_2)(\kappa_0+\kappa_2)}\right) c_i c_{i+2}v_{L_2}\\ &+\left(\frac{\mathcal{Y}_{i+1,L_4}\mathcal{Y}_{i,L_2}}{\kappa_4-\kappa_2}\right)v_{L_1}
+\left(\frac{\mathcal{Y}_{i+1,L_4}\mathcal{Y}_{i,L_2}}{\kappa_4+\kappa_2}\right) c_{i+1} c_{i+2}v_{L_1}\\ &+\left(\frac{\mathcal{Y}_{i,L_4}\mathcal{Y}_{i+1,L_5}}{\kappa_0-\kappa_4}\right)v_{L_6}
+\left(\frac{\mathcal{Y}_{i,L_4}\mathcal{Y}_{i+1,L_5}}{\kappa_4+\kappa_0}\right) c_i c_{i+1}v_{L_6}
+(\mathcal{Y}_{i,L_4} \mathcal{Y}_{i+1, L_5} \mathcal{Y}_{i, L_4})v_{L_3}. \end{align*} \begin{align*} s_i s_{i+1} s_i v_{L_5} = s_{i+1} s_i s_{i+1} v_{L_5} =& \left(\frac{1}{(\kappa_4-\kappa_2)^2(\kappa_0-\kappa_2)}
+\frac{1}{(\kappa_4+\kappa_2)^2(\kappa_2+\kappa_0)}+\frac{\mathcal{Y}_{i,L_5} \mathcal{Y}_{i, L_4}}{\kappa_0-\kappa_4}\right)v_{L_5} \\ &+\left(\frac{1}{(\kappa_2^2-\kappa_4^2)(\kappa_0-\kappa_2)}
-\frac{1}{(\kappa_2^2-\kappa_4^2)(\kappa_2+\kappa_0)}\right)c_ic_{i+1}v_{L_5}\\ &+\left(\frac{-1}{(\kappa_2^2-\kappa_4^2)(\kappa_2+\kappa_0)}
+\frac{1}{(\kappa_2^2-\kappa_4^2)(\kappa_0-\kappa_2)}\right)c_{i+1}c_{i+2}v_{L_5}\\ &+\left(\frac{1}{(\kappa_4-\kappa_2)^2(\kappa_2+\kappa_0)}
+\frac{1}{(\kappa_4+\kappa_2)^2(\kappa_0-\kappa_2)}+\frac{\mathcal{Y}_{i,L_5} \mathcal{Y}_{i, L_4}}{\kappa_0+\kappa_4}\right)c_ic_{i+2}v_{L_5}\\ &+\left(\frac{\mathcal{Y}_{i,L_5}}{(\kappa_2-\kappa_4)(\kappa_0-\kappa_2)}
+\frac{\mathcal{Y}_{i,L_5}}{(\kappa_0-\kappa_4)(\kappa_4-\kappa_2)}\right)v_{L_4}\\ &+\left(\frac{-\mathcal{Y}_{i,L_5}}{(\kappa_4+\kappa_2)(\kappa_2+\kappa_0)}
+\frac{\mathcal{Y}_{i,L_5}}{(\kappa_0-\kappa_4)(\kappa_2+\kappa_4)}\right)c_ic_{i+1}v_{L_4}\\ &+\left(\frac{\mathcal{Y}_{i,L_5}}{(\kappa_4+\kappa_2)(\kappa_0-\kappa_2)}
-\frac{\mathcal{Y}_{i,L_5}}{(\kappa_0+\kappa_4)(\kappa_2+\kappa_4)}\right)
c_{i+1}c_{i+2}v_{L_4}\\ &+\left(\frac{\mathcal{Y}_{i,L_5}}{(\kappa_2-\kappa_4)(\kappa_0+\kappa_2)}
+\frac{\mathcal{Y}_{i,L_5}}{(\kappa_0+\kappa_4)(\kappa_4-\kappa_2)}\right) c_{i} c_{i+2}v_{L_4}\\ &+\left(\frac{\mathcal{Y}_{i+1,L_5}}{(\kappa_2-\kappa_4)(\kappa_0-\kappa_4)}\right)v_{L_6}
+\left(\frac{\mathcal{Y}_{i+1,L_5}}{(\kappa_4-\kappa_2)(\kappa_0+\kappa_2)}\right)
c_ic_{i+1}v_{L_6}\\ &+\left(\frac{\mathcal{Y}_{i+1,L_5}}{(\kappa_4+\kappa_2)(\kappa_0-\kappa_4)}\right)
c_{i+1}c_{i+2}v_{L_6}
+\left(\frac{\mathcal{Y}_{i+1,L_5}}{(\kappa_4+\kappa_2)(\kappa_0+\kappa_4)}\right) c_i c_{i+2}v_{L_6}\\ &+\left(\frac{\mathcal{Y}_{i+1,L_5}\mathcal{Y}_{i,L_6}}{\kappa_2-\kappa_4}\right)v_{L_3}
+\left(\frac{\mathcal{Y}_{i+1,L_5}\mathcal{Y}_{i,L_6}}{\kappa_4+\kappa_2}\right) c_{i+1} c_{i+2}v_{L_3}\\ &+\left(\frac{\mathcal{Y}_{i,L_5}\mathcal{Y}_{i+1,L_4}}{\kappa_0-\kappa_2}\right)v_{L_2}
+\left(\frac{\mathcal{Y}_{i,L_5}\mathcal{Y}_{i+1,L_4}}{\kappa_2+\kappa_0}\right) c_i c_{i+1}v_{L_2}
+\left(\mathcal{Y}_{i,L_5}\mathcal{Y}_{i+1,L_4}\mathcal{Y}_{i,L_2}\right)v_{L_1}. \end{align*} \begin{align*} s_i s_{i+1} s_i v_{L_6} = s_{i+1} s_i s_{i+1} v_{L_6} =& \left(\frac{1}{(\kappa_0-\kappa_4)^2(\kappa_2-\kappa_0)}
+\frac{1}{(\kappa_0+\kappa_4)^2(\kappa_2+\kappa_0)}+\frac{\mathcal{Y}_{i,L_6} \mathcal{Y}_{i, L_3}}{\kappa_2-\kappa_4}\right) v_{L_6} \\ &+\left(\frac{1}{(\kappa_0^2-\kappa_4^2)(\kappa_2-\kappa_0)}
-\frac{1}{(\kappa_0^2-\kappa_4^2)(\kappa_2+\kappa_0)}\right)c_ic_{i+1}v_{L_6}\\ &+\left(\frac{-1}{(\kappa_0^2-\kappa_4^2)(\kappa_2+\kappa_0)}
+\frac{1}{(\kappa_0^2-\kappa_4^2)(\kappa_2-\kappa_0)}\right)c_{i+1}c_{i+2}v_{L_6}\\ &+\left(\frac{1}{(\kappa_0-\kappa_4)^2(\kappa_2+\kappa_0)}
+\frac{1}{(\kappa_4+\kappa_0)^2(\kappa_2-\kappa_0)}
+\frac{\mathcal{Y}_{i,L_6}\mathcal{Y}_{i,L_3}}{\kappa_2+\kappa_4}\right)
c_ic_{i+2}v_{L_6}\\ &+\left(\frac{\mathcal{Y}_{i,L_6}}{(\kappa_0-\kappa_4)(\kappa_2-\kappa_0)}
+\frac{\mathcal{Y}_{i,L_6}}{(\kappa_2-\kappa_4)(\kappa_4-\kappa_0)}\right)v_{L_3}\\ &+\left(\frac{-\mathcal{Y}_{i,L_6}}{(\kappa_4+\kappa_0)(\kappa_2+\kappa_0)}
+\frac{\mathcal{Y}_{i,L_6}}{(\kappa_2-\kappa_4)(\kappa_0+\kappa_4)}\right)c_ic_{i+1}v_{L_3}\\ &+\left(\frac{\mathcal{Y}_{i,L_6}}{(\kappa_0+\kappa_4)(\kappa_2-\kappa_0)}
-\frac{\mathcal{Y}_{i,L_6}}{(\kappa_2+\kappa_4)(\kappa_0+\kappa_4)}\right)
c_{i+1}c_{i+2}v_{L_3}\\ &+\left(\frac{\mathcal{Y}_{i,L_6}}{(\kappa_0-\kappa_4)(\kappa_0+\kappa_2)}
+\frac{\mathcal{Y}_{i,L_6}}{(\kappa_2+\kappa_4)(\kappa_4-\kappa_0)}\right) c_{i} c_{i+2}v_{L_3}\\ &+\left(\frac{\mathcal{Y}_{i+1,L_6}}{(\kappa_0-\kappa_4)(\kappa_2-\kappa_4)}\right)v_{L_5}
+\left(\frac{\mathcal{Y}_{i+1,L_6}}{(\kappa_0-\kappa_4)(\kappa_2+\kappa_4)}\right) c_i c_{i+1} v_{L_5}\\ &+\left(\frac{\mathcal{Y}_{i+1,L_6}}{(\kappa_0+\kappa_4)(\kappa_2-\kappa_4)}\right)
c_{i+1}c_{i+2}v_{L_5}
+\left(\frac{\mathcal{Y}_{i+1,L_6}}{(\kappa_0+\kappa_4)(\kappa_2+\kappa_4)}\right)
c_ic_{i+2}v_{L_5}\\ &+\left(\frac{\mathcal{Y}_{i+1,L_6}\mathcal{Y}_{i,L_5}}{\kappa_0-\kappa_4}\right)v_{L_4}
+\left(\frac{\mathcal{Y}_{i+1,L_6}\mathcal{Y}_{i,L_5}}{\kappa_0+\kappa_4}\right) c_{i+1} c_{i+2}v_{L_4}\\ &+\left(\frac{\mathcal{Y}_{i,L_6}\mathcal{Y}_{i+1,L_3}}{\kappa_2-\kappa_0}\right)v_{L_1}
+\left(\frac{\mathcal{Y}_{i,L_6}\mathcal{Y}_{i+1,L_3}}{\kappa_2+\kappa_0}\right) c_i c_{i+1}v_{L_1}
+\left(\mathcal{Y}_{i,L_6}\mathcal{Y}_{i+1,L_3}\mathcal{Y}_{i,L_6}\right)v_{L_2}. \end{align*} } \begin{figure}
\caption{Case 6}
\label{F:Case 6}
\end{figure}
Case 6: Let $ L $ be as in Figure \ref{F:Case 6}. Then $$ s_i s_{i+1} s_i v_L = s_{i+1} s_i s_{i+1} v_L =\frac{1}{\sqrt{2}}(-c_i c_{i+1} v_L + c_i c_{i+2} v_L). $$
\end{proof}
Now define an $ {\mathcal{A}}(d)$-module $ H^{\lambda / \mu} $ to be $ \sum_{w\in S_n} \phi_w{\mathcal{L}}(c(L)), $ where $ L $ is a fixed standard filling of the shifted skew shape $ \lambda/ \mu $ and $ \mathcal{L}(c(L))={\mathcal{L}}(c(L_1))\circledast\cdots\circledast{\mathcal{L}}(c(L_d)) $ is an irreducible $ {\mathcal{A}}(d)$-submodule of $ Cl(d) v_L $ introduced in section \ref{SS:characters}.
\begin{prp} The $ \mathcal{A}(d)$-module $ H^{\lambda / \mu} $ is a $ {\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)$-module. \end{prp}
\begin{proof} Let $ c v_L \in H^{\lambda / \mu}. $ Then $ (\phi_i - s_i(x_i^2-x_{i+1}^2)) c v_L \in H^{\lambda / \mu}. $ Note that $ \phi_i c v_L = {}^{s_i}c \phi_i v_L = k\, {}^{s_i}c v_{s_i L} $ for some scalar $ k, $ where $ {}^{s_i}c=s_ics_i$ denotes the Clifford element twisted by $ s_i. $ This element is in $ H^{\lambda / \mu} $ because the twisting of the Clifford element $ c $ by $ s_i $ is compatible with the permutation of the zero eigenvalues of the $ x_j$'s by $ s_i. $ Thus $ s_i(x_i^2-x_{i+1}^2) c v_L = k' s_i c v_L \in H^{\lambda / \mu} $ for some scalar $ k'. $ Since $ (x_i^2-x_{i+1}^2) v_L \neq 0 $ by construction, $ k' \neq 0 $ and hence $ s_i c v_L \in H^{\lambda / \mu}. $ \end{proof}
\begin{thm} For each shifted skew shape $ \lambda/\mu, $ $ H^{\lambda/\mu} $ is an irreducible $ {\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)$-module. Every irreducible, calibrated $ {\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)$-module is isomorphic to exactly one such $ H^{\lambda/\mu}. $ \end{thm}
\begin{proof} We first show that $ H^{\lambda / \mu} $ is irreducible. Let $ N $ be a non-zero submodule of $ H^{\lambda / \mu} $ and let $ v = \sum_Q C_Q v_Q \in N $ be non-zero, where $ C_Q \in Cl(d). $ Let $ L $ be a standard tableau such that $ \mathcal{Y}_L \neq 0. $ If $ P \neq L $ is a tableau appearing in this sum, then there exists an $ i $ such that $ x_i v_P \neq x_i v_L. $ Suppose $ \mathcal{Y}_P \neq 0. $ Then $ \frac{x_i-\kappa_{i,P}}{\kappa_{i,L}-\kappa_{i,P}} v $ no longer has a $ v_P $ term but still has a $ v_L $ term. This element is also in $ N. $ Iterating this process, it is clear that $ v_L \in N. $ The set of tableaux is identified with an interval of $ S_n $ under the Bruhat order. The minimal element is the column reading tableau $ C. $ Thus there exists a chain $ C < s_{i_1} C < \cdots < s_{i_p} \cdots s_{i_1} C = L. $ Therefore $ \tau_{i_1} \cdots \tau_{i_p} v_L = \kappa v_C $ for some non-zero complex number $ \kappa. $ This implies $ v_C \in N. $ Now let $ Q $ be an arbitrary standard tableau of $ \lambda / \mu. $ There is a chain $ C < s_{j_1} C < \cdots < s_{j_q} \cdots s_{j_1} C = Q. $ Then $ \tau_{j_q} \cdots \tau_{j_1} v_C = \kappa' v_Q $ for some non-zero complex number $ \kappa'. $ Thus $ v_Q \in N $ so $ N = H^{\lambda/ \mu}. $
It is clear by looking at the eigenvalues that if $ \lambda / \mu \neq \lambda' / \mu', $ then $ H^{\lambda / \mu} \not\cong H^{\lambda' / \mu'}. $
Next, we show that the weight of a calibrated module $ M $ is obtained by reading the contents of a shifted skew shape via a standard filling. That is, if $ (t_1, \ldots, t_d) $ is such a weight, then it is necessary to show that it is equal to $ (c(L_1), \ldots, c(L_d)) $ for some standard tableau $ L. $ It will be shown that if $ t_i = t_j $ for some $ i<j, $ then there exist $ k $ and $ l $ with $ i<k<l<j $ such that $ t_k = t_i \pm 1 $ and $ t_l = t_i \mp 1, $ unless $ t_i =t_j= 0, $ in which case there is a $ k $ with $ i < k < j $ such that $ t_k =1. $
Let $ j>i $ be such that $ t_j=t_i $ and $ j-i $ is minimal, let $m_t\in M$ be a nonzero vector of weight $t=(t_1,\ldots,t_d)$, and let $\varrho_i=\sqrt{q(t_i)}$. The proof will be by induction on $ j-i. $
\noindent\textbf{Case 1:} Suppose $ j-i=1. $
First consider the case $ t_i = 0. $ Then $ t_{i+1}=0 $ by assumption, and $ x_i s_i m_t = -m_t -c_i c_{i+1} m_t. $ It is clear that $ -m_t - c_i c_{i+1} m_t \neq 0: $ otherwise, $ m_t = -c_i c_{i+1} m_t, $ which implies after multiplying both sides by $ c_i c_{i+1} $ that $ m_t = -m_t, $ giving $ m_t =0. $ Thus $ x_i^2 s_i m_t = 0 $ but $ x_i s_i m_t \neq 0. $ Similarly, $ x_{i+1}^2 s_i m_t = 0, $ but $ x_{i+1} s_i m_t \neq 0. $ Clearly $ (x_j - \varrho_j) s_i m_t = 0 $ for $ j \neq i, i+1. $ Thus if $ t_i = 0, $ then $ s_i m_t \in M^{\text{gen}}_t $ but not in $ M_t, $ contradicting the assumption that $ M $ is calibrated.
Now assume $ t_i \neq 0. $ Then, $ s_i m_t - \frac{1}{2\varrho_i} c_i c_{i+1} m_t \in M^{\text{gen}}_t $ but not in $ M_t. $ To see this, calculate: $$ x_i(s_i m_t - \frac{1}{2\varrho_i}c_i c_{i+1} m_t) = \varrho_i s_i m_t - \frac{1}{2} c_i c_{i+1} m_t - m_t. $$ This implies $ (x_i - \varrho_i)(s_i m_t - \frac{1}{2\varrho_i} c_i c_{i+1} m_t) = -m_t \neq 0 $ and $ (x_i - \varrho_i)^2(s_i m_t - \frac{1}{2\varrho_i} c_i c_{i+1} m_t) = 0. $ Similarly, $ (x_{i+1} - \varrho_{i+1})(s_i m_t - \frac{1}{2\varrho_i} c_i c_{i+1} m_t) = m_t \neq 0 $ and $ (x_{i+1} - \varrho_{i+1})^2(s_i m_t - \frac{1}{2\varrho_i} c_i c_{i+1} m_t) = 0. $ If $ j \neq i, i+1, $ then $ (x_j -\varrho_j)(s_i m_t - \frac{1}{2\varrho_i} c_i c_{i+1} m_t) = 0. $ Thus $ s_i m_t - \frac{1}{2\varrho_i} c_i c_{i+1} m_t \in M^{\text{gen}}_t $ but not in $ M_t $ verifying case 1.
\noindent\textbf{Case 2:} Suppose $ j-i =2. $
Since $ m_t $ is a weight vector, the vector $$ m_{s_i t}=\phi_i m_t =(\varrho_i - \varrho_{i+1}) s_i m_t -(\varrho_i-\varrho_{i+1})c_i c_{i+1} m_t + (\varrho_i + \varrho_{i+1}) m_t $$ is a weight vector of weight $ t' = s_i t. $ Then $ t_{i+1}' = t_{i+2}'. $ By case 1, this is impossible, so $ m_{s_i t}=0. $ Note that $ \varrho_i + \varrho_{i+1} \neq 0: $ if it were zero, then $ m_{s_i t} = 0 $ would imply $ c_i c_{i+1} m_t = 0, $ which would imply $ m_t = 0. $ Thus, $ s_i m_t = \frac{m_t}{\varrho_{i+1}-\varrho_i} + \frac{c_i c_{i+1} m_t}{\varrho_{i+1}+\varrho_i}. $ Since $ s_i^2 m_t = m_t, $ it follows that $ m_t = (\frac{2(\varrho_i+\varrho_{i+1})}{(\varrho_i-\varrho_{i+1})^2}) m_t. $ This implies $ 2(\varrho_i + \varrho_{i+1}) = (\varrho_i-\varrho_{i+1})^2. $ The solutions of this equation are $$ \varrho_{i+1} \in \lbrace \pm \sqrt{(t_i+1)(t_i+2)},\pm \sqrt{(t_i-1)(t_i)} \rbrace. $$ Since it is assumed that the positive square root is taken, there are only two subcases to investigate. For the first subcase, assume $ \varrho_{i+1}=\sqrt{q(t_i+1)}. $ A routine calculation gives $$ s_i s_{i+1} s_i m_t = \frac{-s_i m_t}{(\varrho_i-\varrho_{i+1})^2} + \frac{c_i c_{i+2} s_i m_t}{\varrho_{i+1}-\varrho_i} + \frac{c_{i+1}c_{i+2} s_i m_t}{\varrho_i-\varrho_{i+1}} - \frac{c_i c_{i+1} s_i m_t}{(\varrho_i+\varrho_{i+1})^2}. $$ From this it follows that the coefficient of $ m_t $ is $ \frac{1}{(\varrho_i-\varrho_{i+1})^3}+\frac{1}{(\varrho_i+\varrho_{i+1})^3}. $ Similarly, from $$ s_{i+1} s_{i} s_{i+1} m_t = \frac{-s_{i+1} m_t}{(\varrho_i-\varrho_{i+1})^2} + \frac{c_i c_{i+2} s_{i+1} m_t}{\varrho_i-\varrho_{i+1}} + \frac{c_{i}c_{i+1} s_{i+1} m_t}{\varrho_{i+1}-\varrho_i} - \frac{c_{i+1} c_{i+2} s_{i+1} m_t}{(\varrho_i+\varrho_{i+1})^2} $$ it follows that the coefficient of $ m_t $ is $ \frac{-1}{(\varrho_i-\varrho_{i+1})^3}+\frac{-1}{(\varrho_i+\varrho_{i+1})^3}. $ Therefore $ (\varrho_i-\varrho_{i+1})^3 + (\varrho_i+\varrho_{i+1})^3 =0. $ Recalling that $ \varrho_{i+1}=\sqrt{q(t_i+1)} $ in this subcase, it is clear that $ t_i = t_{i+2}=0 $ and $ t_{i+1} = 1. $ The other subcase is similar.
Now for the induction step. Assume $j-i>2$. If $ t_{j-1} \neq t_j \pm 1, $ then the vector $ \phi_{j-1} m_t $ is a non-zero weight vector of weight $ t' = s_{j-1} t $ by \cite[Lemma 14.8.1]{kl}. Since $ t_i' = t_i = t_j = t_{j-1}', $ the induction hypothesis may be applied to conclude that there exist $ k $ and $ l $ with $ i < k < l < j-1 $ such that $ t_k' = t_j \pm 1 $ and $ t_l' = t_j \mp 1. $ (In the case $ t_i = t_j = 0, $ there exists $ k $ with $ t_k' =1. $) This implies $ t_k = t_j \pm 1 $ and $ t_l = t_j \mp 1. $ (In the case $ t_i = t_j =0, $ there exists $ k $ with $ t_k = 1. $) Similarly, if $ t_{i+1} \neq t_i \pm 1, $ consider $ \phi_i m_t $ and proceed by induction. Otherwise, $ t_{i+1} = t_i \pm 1 $ and $ t_{j-1} = t_i \pm 1. $ Since $ i $ and $ j $ are chosen such that $ t_i = t_j $ and $ j-i $ is minimal, $ t_{i+1} \neq t_{j-1}. $ This then gives the conclusion. (If $ t_i = t_j = 0, $ then $ t_{i+1} = 1 $ or $ t_{j-1} = 1. $)
Suppose $ M $ is an irreducible, calibrated $ {\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)$-module and $ m_t $ is a weight vector with weight $ t = (t_1,\ldots,t_d) $ satisfying $ t_{i+1} = t_i \pm 1. $ Then $ \phi_i m_t = 0. $ This follows exactly as in step 5 of \cite[Theorem 4.1]{ram}.
Finally, let $ m_t $ be a non-zero weight vector of an irreducible, calibrated module $ M. $ By the above, $ t = (c(L_1), \ldots, c(L_d)) $ for some standard tableau $ L $ of shifted skew shape $ \lambda / \mu. $ The rest of the proof follows as in step 6 of \cite[Theorem 4.1]{ram}. Choose a word $ w = s_{i_p} \cdots s_{i_1} $ such that $ w $ applied to the column reading tableau of $ \lambda / \mu $ gives the tableau $ L. $ Then $ m_C = \phi_{i_1} \cdots \phi_{i_p} m_t $ is non-zero. Now to any other standard tableau $ Q $ of $ \lambda / \mu $ there is a non-zero weight vector obtained by applying a sequence of intertwiners to $ m_C. $ By the above, $ \phi_i m_Q = 0 $ if $ s_i Q $ is not standard. Thus the span of the vectors $ \lbrace m_Q \rbrace $ over all the standard tableaux of shape $ \lambda /\mu $ is a submodule of $ M. $ Since $ M $ is irreducible, this span must be the entire module. Thus there is an isomorphism $ M \cong H^{\lambda / \mu} $ defined by sending $ \phi_w m_C $ to $ \phi_w v_C. $ \end{proof}
\begin{cor}\label{C:CalibratedSimples} Let $\lambda/\mu$ be a shifted skew shape. Then, ${\mathcal{L}}(\lambda,\mu)\cong H^{\lambda/\mu}$. \end{cor}
\begin{proof} Let $T$ be the standard tableau obtained by filling in the numbers $1,\ldots,d$ along rows from top to bottom and left to right. Note that if $s_i\in S_{\lambda-\mu}$, then $v_{s_iT}=0$ because $s_iT$ is not standard. By Frobenius reciprocity, it follows that there exists a surjective ${\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)$-homomorphism $f:{\mathcal{M}}(\lambda,\mu)\rightarrow H^{\lambda/\mu}$ given by $f({\mathbf{1}}_{\lambda-\mu})=v_T$. \end{proof}
Furthermore, by construction we have the following result. Note that this agrees with Leclerc's conjectural formula for the calibrated simple modules of ${\mathcal{H}_{\Cl}^{\mathrm{aff}}} (d)$ \cite[Proposition 51]{lec}. \begin{cor}\label{C:characters} Let $\lambda/\mu$ be a shifted skew shape. Then, \[ \operatorname{ch} {\mathcal{L}} (\lambda, \mu)= \sum_{L} \left[c(L_1),\ldots,c(L_d) \right], \] where the sum is over all standard fillings of the shape $\lambda / \mu$. \end{cor}
\section{The Lie Superalgebras ${\mathfrak{gl}}(n|n)$ and ${\mathfrak{q}}(n)$}\label{S:Lie algebras}
\subsection{The Algebras}\label{SS:qndfn} Let $I=\{-n,\ldots,-1,1,\ldots,n\}$, and
$I^+=\{1,\ldots,n\}$. Let $V={\mathbb{C}}^{n|n}$ be the $2n$-dimensional vector superspace with standard basis $\{v_i\}_{i\in I}$. The standard basis for the superalgebra $\operatorname{End}(V)$ is the set of matrix units $\{E_{ij}\}_{i,j\in I}$, and the ${\mathbb{Z}}_2$-grading for $\operatorname{End}(V)$ and $V$ are given by \[ p(v_k)={\bar{0}},\;\;\;p(v_{-k})={\bar{1}},\,\,\,\,\,\, {\mbox{and}} \,\,\,\,\,\, p(E_{ij})=p(v_i)+p(v_j) \] for $k\in I^+$ and $i,j\in I$.
Let $C=\sum_{i,j\in I^+}(E_{-i,j}-E_{i,-j})$, and let $Q(V)\subset\operatorname{End}(V)$ be the supercentralizer of $C$. Then, $Q(V)$ has basis given by elements \[ e_{ij}=E_{ij}+E_{-i,-j},\,\,\,\,\,\, {\mbox{and}} \,\,\,\,\,\, f_{ij}=E_{-i,j}+E_{i,-j}\;\;\;i,j\in I^+. \] When $Q(V)$ and $\operatorname{End}(V)$ are viewed as Lie superalgebras relative to the superbracket: \[ [x,y]=xy-(-1)^{p(x)p(y)}yx, \]
for homogeneous $x,y\in\operatorname{End}(V)$, we denote them ${\mathfrak{q}}(n)$ and ${\mathfrak{gl}}(n|n)$ respectively.
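For instance, when $n=1$, $Q(V)$ is spanned by $e_{11}=E_{11}+E_{-1,-1}$ and the odd element $f_{11}=E_{-1,1}+E_{1,-1}$, and a direct matrix computation gives \[ [f_{11},f_{11}]=2f_{11}^2=2\left(E_{-1,1}E_{1,-1}+E_{1,-1}E_{-1,1}\right)=2(E_{11}+E_{-1,-1})=2e_{11}, \] illustrating the characteristic feature of the queer Lie superalgebra that brackets of odd elements land in the even part.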
We end this section by introducing important elements of ${\mathfrak{gl}}(n|n)$ that will be needed later. Set \begin{eqnarray}\label{bar-e/f} {\bar{e}}_{ij}=E_{ij}-E_{-i,-j},\,\,\,\,\,\, {\mbox{and}} \,\,\,\,\,\, {\bar{f}}_{ij}=E_{-i,j}-E_{i,-j},\;\;\;i,j\in I^+. \end{eqnarray}
\subsection{Root Data, Category $\mathcal{O}$, and Verma Modules}\label{SS:RootData} Fix the triangular decomposition \[ {\mathfrak{q}}(n)={\mathfrak{n}}^-\oplus{\mathfrak{h}}\oplus{\mathfrak{n}}^+, \]
where ${\mathfrak{n}}^+_{\bar{0}}$ (resp. ${\mathfrak{n}}^-_{\bar{0}}$) is the subalgebra spanned by the $e_{ij}$ for $1\leq i<j\leq n$ (resp. $i>j$), ${\mathfrak{h}}_{\bar{0}}$ is spanned by the $e_{ii}$, $1\leq i\leq n$, ${\mathfrak{n}}^+_{\bar{1}}$ (resp. ${\mathfrak{n}}^-_{\bar{1}}$) is the subalgebra spanned by the $f_{ij}$ for $1\leq i<j\leq n$ (resp. $i>j$) and ${\mathfrak{h}}_{\bar{1}}$ is spanned by the $f_{ii}$, $1\leq i\leq n$. Let ${\mathfrak{b}}^+={\mathfrak{h}}\oplus{\mathfrak{n}}^+$ and let ${\mathfrak{b}}^-={\mathfrak{h}}\oplus{\mathfrak{n}}^-$.
The isomorphism ${\mathfrak{q}}(n)_{\bar{0}}\rightarrow{\mathfrak{gl}}(n)$, $e_{ij}\mapsto E_{ij}$, identifies ${\mathfrak{h}}_{\bar{0}}$
with the standard torus for ${\mathfrak{gl}}(n)$. Let $\varepsilon_i\in{\mathfrak{h}}_{\bar{0}}^*$ denote the $i$th coordinate function. For $i\neq j$, define $\alpha_{ij}=\varepsilon_i-\varepsilon_j$, and fix the choice of simple roots $\Delta=\{\alpha_i=\alpha_{i,i+1}|1\leq i<n\}$. The corresponding root system is $R=\{\alpha_{ij}|1\leq i\neq j\leq n\}$, and the positive roots are
$R^+=\{\alpha_{ij}|1\leq i<j\leq n\}$. The root lattice is
$Q=\sum_{i=1}^{n-1}{\mathbb{Z}}\alpha_i$ and the weight lattice is $P=\sum_{i=1}^n{\mathbb{Z}}\varepsilon_i$. We can, and will, identify $P={\mathbb{Z}}^n$, and $Q=\{\lambda\in P|\lambda_1+\cdots+\lambda_n=0\}$. Define the sets of weights $P^+$, ${P^{++}}$, ${P^+_{\mathrm{rat}}}$, ${P_{\mathrm{poly}}^+}$ and ${P_{\geq0}}$ as in $\S$\ref{SS:LieThy}. We call these sets dominant, dominant-typical, rational, polynomial, and positive, respectively. Finally, let ${P^{++}_{\mathrm{rat}}} = {P^+_{\mathrm{rat}}} \cap {P^{++}}$, and ${P^{++}_{\mathrm{poly}}} ={P_{\mathrm{poly}}^+} \cap {P^{++}}$.
To begin, let $\mathcal{O}:=\mathcal{O}({\mathfrak{q}}(n))$ denote the category of all finitely generated ${\mathfrak{q}}(n)$-supermodules $M$ that are locally finite dimensional over ${\mathfrak{b}}$ and satisfy \[ M=\bigoplus_{\lambda\in P}M_\lambda \] where $M_\lambda=\{\,v\in M \mid h.v=\lambda(h)v\mbox{ for all }h\in{\mathfrak{h}}_{\bar{0}}\,\}$ is the $\lambda$-weight space of $M$.
We now define two classes of \emph{Verma modules}. To this end, given $\lambda\in P$, let ${\mathbb{C}}_\lambda$ be the 1-dimensional ${\mathfrak{h}}_{\bar{0}}$-module associated to the weight $\lambda$. Let $\theta_\lambda:{\mathfrak{h}}_{\bar{1}}\rightarrow{\mathbb{C}}$ be given by $\theta_\lambda(k)=\lambda([k,k])$ for all $k\in{\mathfrak{h}}_{\bar{1}}$. Let ${\mathfrak{h}}_{\bar{1}}'=\ker\theta_\lambda$. Let
$\overline{{\mathcal{U}}({\mathfrak{h}})}={\mathcal{U}}({\mathfrak{h}})/\mathfrak{i}$, where $\mathfrak{i}$ is the left ideal of ${\mathcal{U}}({\mathfrak{h}})$ generated by $\{\,h-\lambda(h) \mid h\in{\mathfrak{h}}_{\bar{0}}\,\}\cup{\mathfrak{h}}_{\bar{1}}'$. Recall, $\gamma_0(\lambda)=|\{\,i \mid \lambda_i=0\,\}|$. Since $\overline{{\mathcal{U}}({\mathfrak{h}})} $ is isomorphic to a Clifford algebra of rank $ n-\gamma_0(\lambda), $ we can define the $\overline{{\mathcal{U}}({\mathfrak{h}})}$-modules $ C(\lambda) $ and $ E(\lambda), $ where $ C(\lambda) $ is the regular representation of the resulting Clifford algebra and $ E(\lambda) $ is its unique irreducible quotient. Both $ C(\lambda) $ and $ E(\lambda) $ become modules for ${\mathcal{U}}({\mathfrak{h}})$ via inflation through the canonical projection ${\mathcal{U}} ({\mathfrak{h}})\to \overline{{\mathcal{U}}({\mathfrak{h}})}$. Note that as a $ {\mathcal{U}}({\mathfrak{h}})$-module, $ C(\lambda) \cong \operatorname{Ind}_{{\mathcal{U}}({\mathfrak{h}}_{\bar{0}}+{\mathfrak{h}}_{\bar{1}}')}^{{\mathcal{U}}({\mathfrak{h}})}{\mathbb{C}}_\lambda. $ Extend $C(\lambda)$ and $E(\lambda)$ to representations of ${\mathcal{U}}({\mathfrak{b}}^+)$ by inflation, and define the \emph{Big Verma} $\widehat{M}(\lambda)$ and \emph{Little Verma} $M(\lambda)$ by \[ \widehat{M}(\lambda)=\operatorname{Ind}_{{\mathcal{U}}({\mathfrak{b}}^+)}^{{\mathcal{U}}({\mathfrak{q}}(n))}C(\lambda)\,\,\,\,\,\, {\mbox{and}} \,\,\,\,\,\, M(\lambda)=\operatorname{Ind}_{{\mathcal{U}}({\mathfrak{b}}^+)}^{{\mathcal{U}}({\mathfrak{q}}(n))}E(\lambda). \] The following lemma is obtained from the standard decomposition of the Clifford algebra into irreducible modules:
\begin{lem}\label{L:little verma in big verma} We have $\widehat{M}(\lambda) \cong M(\lambda)^{\oplus 2^{\lfloor \frac{n-\gamma_0(\lambda)}{2} \rfloor}}. $ \end{lem}
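To illustrate the definitions with a small example of our own: take $n=2$ and $\lambda=(1,2)$, so that $\gamma_0(\lambda)=0$ and $\overline{{\mathcal{U}}({\mathfrak{h}})}$ is a Clifford algebra of rank $2$. Then \[ \dim C(\lambda)=2^2=4,\qquad \dim E(\lambda)=2, \] and Lemma~\ref{L:little verma in big verma} reads $\widehat{M}(\lambda)\cong M(\lambda)^{\oplus 2^{\lfloor 2/2\rfloor}}=M(\lambda)\oplus M(\lambda)$.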
It is known that $M(\lambda)$ has a unique irreducible quotient $L(\lambda)$ (see, for example, \cite{g}). Moreover, it is known that $L(\lambda)$ is finite dimensional if, and only if, $\lambda\in{P^+_{\mathrm{rat}}}$ (see \cite{p}).
The following lemma seems to be standard, but we cannot find it stated in the literature. See \cite[Corollary 7.1, 11.6]{g} for related statements. If $M$ is a ${\mathcal{U}} (\ensuremath{\mathfrak{q}})$-module, then recall that a vector $m \in M$ is called \emph{primitive} if ${\mathfrak{n}}^{+}m=0$.
\begin{lem}\label{L:InjHom} Let $\lambda\in P$, and assume that for some $\alpha\in R^+$, there exists $r>0$ such that $s_\alpha\lambda=\lambda-r\alpha$. Then, there exists an injective homomorphism \[ M(s_\alpha\lambda)\rightarrow M(\lambda). \] \end{lem}
\begin{proof} Let $\alpha=\alpha_{ij}$, and let $v_\lambda\in M(\lambda)_\lambda$ be an odd primitive vector. Then, a direct calculation verifies that \[ v_{\lambda-r\alpha}:=e_{ji}^{r-1}(rf_{ji}-e_{ji}(f_{ii}-f_{jj})).v_\lambda \] is a primitive vector of weight $\lambda-r\alpha$ (see, for example, \cite[Corollary 7.1]{g}). This implies that there is an injective ${\mathcal{U}}({\mathfrak{b}}^+)$-homomorphism \[ E(s_\alpha\lambda)\to{\mathcal{U}}({\mathfrak{h}}).v_{\lambda-r\alpha}. \] Indeed, clearly every vector in ${\mathcal{U}}({\mathfrak{h}}).v_{\lambda-r\alpha}$ has weight $\lambda-r\alpha$. Moreover, if $N\in{\mathcal{U}}({\mathfrak{n}}^+)$ and $H\in{\mathcal{U}}({\mathfrak{h}})$, then $[N,H]\in{\mathcal{U}}({\mathfrak{n}}^+)$, so \[ N.(H.v_{\lambda-r\alpha})=(HN+[N,H]).v_{\lambda-r\alpha}=0. \] The result follows because, by our choice of primitive vector, a standard argument using the filtration of ${\mathcal{U}} (\ensuremath{\mathfrak{q}}(n) )$ by total degree and a calculation in ${\mathcal{U}}(\ensuremath{\mathfrak{q}}(2))$ shows that ${\mathcal{U}}({\mathfrak{b}}^-).v_{\lambda-r\alpha}$ is a free ${\mathcal{U}}({\mathfrak{n}}^-)$-module. \end{proof}
\subsection{The Shapovalov Form}\label{SS:ShapovalovForm} The Shapovalov map for ${\mathfrak{q}}(n)$ was constructed in \cite{g}. We review this construction briefly.
Let $\mathcal{D}$ be the category of $Q^-$-graded ${\mathfrak{q}}(n)$-modules, where $Q^-=-Q^+$, with morphisms of degree 0 with respect to this grading. We regard the big and little Verma modules as objects in this category by declaring $\deg M(\lambda)_{\lambda-\nu}=-\nu$ for all $\nu\in Q^+$. Let $\mathcal{C}$ be the category of left ${\mathfrak{h}}$-modules.
Let $\Psi_0:\mathcal{D}\rightarrow\mathcal{C}$ be the functor $\Psi_0(N)=N_0$ (i.e.\ the degree 0 component). The functor $\Psi_0$ has a left adjoint $\operatorname{Ind}:\mathcal{C}\rightarrow\mathcal{D}$ given by $\operatorname{Ind} A=\operatorname{Ind}_{{\mathfrak{b}}^+}^{{\mathfrak{q}}(n)} A$, where we regard the ${\mathfrak{h}}$-module $A$ as a ${\mathfrak{b}}^+$-module by inflation. The functor $\Psi_0$ also has an exact right adjoint $\operatorname{Coind}$ (see \cite[Proposition 4.3]{g}).
As in \cite{g}, let $\Theta(A):\operatorname{Ind} A\rightarrow\operatorname{Coind} A$ be the morphism corresponding to the identity map $\mathrm{id}_A:A\rightarrow A$. This induces a morphism of functors $\Theta:\operatorname{Ind}\rightarrow\operatorname{Coind}$. The main property we will use is
\begin{thm}\cite[Proposition 4.4]{g} The kernel $\ker\Theta(A)$ is the maximal graded submodule of $\operatorname{Ind} A$ which intersects $A$ trivially. \end{thm}
Define the Shapovalov map $S:=\Theta({\mathcal{U}}({\mathfrak{h}})):\operatorname{Ind}({\mathcal{U}}({\mathfrak{h}})) \rightarrow \operatorname{Coind}({\mathcal{U}}({\mathfrak{h}}))$. Given an object $A$ in $\mathcal{C}$, Proposition 4.3 of \cite{g} shows that there are canonical isomorphisms $\operatorname{Ind} A\cong \operatorname{Ind}{\mathcal{U}}({\mathfrak{h}})\otimes_{{\mathcal{U}}({\mathfrak{h}})}A$ and $\operatorname{Coind} A\cong\operatorname{Coind}{\mathcal{U}}({\mathfrak{h}})\otimes_{{\mathcal{U}}({\mathfrak{h}})}A$. In this way, we may identify $\Theta(A)$ with $\Theta({\mathcal{U}}({\mathfrak{h}}))\otimes_{{\mathcal{U}}({\mathfrak{h}})}\mathrm{id}_A$. It follows that the map $\Theta(A)$ is completely determined by the Shapovalov map.
In order to describe $S$ in more detail, we introduce some auxiliary data. Let $\varsigma:{\mathcal{U}}({\mathfrak{q}}(n))\rightarrow{\mathcal{U}}({\mathfrak{q}}(n))$ be the antiautomorphism defined by $\varsigma(x)=-x$ for all $x\in{\mathfrak{q}}(n)$ and extended to ${\mathcal{U}}({\mathfrak{q}}(n))$ by the rule $\varsigma(xy)=(-1)^{p(x)p(y)}\varsigma(y)\varsigma(x)$ for $x,y\in{\mathcal{U}}({\mathfrak{q}}(n))$. Also, define the Harish-Chandra projection $HC:{\mathcal{U}}({\mathfrak{q}}(n))\rightarrow{\mathcal{U}}({\mathfrak{h}})$ along the decomposition \[ {\mathcal{U}}({\mathfrak{q}}(n))={\mathcal{U}}({\mathfrak{h}})\oplus({\mathcal{U}}({\mathfrak{q}}(n)){\mathfrak{n}}^++{\mathfrak{n}}^-{\mathcal{U}}({\mathfrak{q}}(n))). \]
Now, we may naturally identify $\operatorname{Ind}{\mathcal{U}}({\mathfrak{h}})\cong{\mathcal{U}}({\mathfrak{b}}^-)$ as $ ({\mathfrak{b}}^-,{\mathfrak{h}}) $-bimodules. The $Q^-$-grading on ${\mathcal{U}}({\mathfrak{b}}^-)$ is given by \begin{eqnarray}\label{E:Q grading of bminus} {\mathcal{U}}({\mathfrak{b}}^-)_{-\nu}=\{\,x\in{\mathcal{U}}({\mathfrak{b}}^-) \mid [h,x]=-\nu(h)x\mbox{ for all }h\in{\mathfrak{h}}_{\bar{0}}\,\} \end{eqnarray} for all $\nu\in Q^+$.
To describe $\operatorname{Coind}{\mathcal{U}}({\mathfrak{h}})$, let $\mathcal{D}_+$ be the category of $Q^+$-graded ${\mathfrak{q}}(n)$-modules and let $\operatorname{Ind}_+$ be the left adjoint to the functor $\Psi_0^+:\mathcal{D}_+\rightarrow\mathcal{C}$. We may naturally identify $\operatorname{Ind}_+{\mathcal{U}}({\mathfrak{h}})\cong{\mathcal{U}}({\mathfrak{b}}^+)$ as $ ({\mathfrak{b}}^+,{\mathfrak{h}}) $-bimodules, and ${\mathcal{U}}({\mathfrak{b}}^+)$ has a $Q^+$-grading analogous to \eqref{E:Q grading of bminus}. Now, let ${\mathcal{U}}({\mathfrak{h}})^\varsigma$ be the $({\mathfrak{h}}, {\mathfrak{h}})$-bimodule obtained by twisting the action of ${\mathfrak{h}}$ with $\varsigma$. That is, $h.x=(-1)^{p(h)p(x)} \varsigma(h)x$ and $ x.h = (-1)^{p(h)p(x)} x \varsigma(h)$ for all $x\in{\mathcal{U}}({\mathfrak{h}})^\varsigma$ and $h\in{\mathfrak{h}}$. Then, there is a natural identification of $\operatorname{Coind}{\mathcal{U}}({\mathfrak{h}})$ with the graded dual of ${\mathcal{U}}({\mathfrak{b}}^+)$ as $({\mathcal{U}}({\mathfrak{g}}),{\mathcal{U}}({\mathfrak{h}}))$-bimodules: \[ \operatorname{Coind}{\mathcal{U}}({\mathfrak{h}})\cong{\mathcal{U}}({\mathfrak{b}}^+)^{\#} :=\bigoplus_{\nu\in Q^+}\operatorname{Hom}_{\mathcal{C}}({\mathcal{U}}({\mathfrak{b}}^+)_\nu,{\mathcal{U}}({\mathfrak{h}})^\varsigma), \] see \cite[Proposition 4.3(iii)]{g}. Observe that ${\mathcal{U}}({\mathfrak{b}}^+)^{\#}$ has a $Q^-$-grading given by ${\mathcal{U}}({\mathfrak{b}}^+)^{\#}_{-\nu}=\operatorname{Hom}_{\mathcal{C}}({\mathcal{U}}({\mathfrak{b}}^+)_\nu,{\mathcal{U}}({\mathfrak{h}})^\varsigma)$.
Using these identifications, we realize the Shapovalov map via the formula \[ S(x)(y)=(-1)^{p(x)p(y)}HC(\varsigma(y)x), \] for $x\in{\mathcal{U}}({\mathfrak{b}}^-)$ and $y\in{\mathcal{U}}({\mathfrak{b}}^+)$; see \cite[$\S$4.2.4, Claim 3]{g}.
The Shapovalov map is homogeneous of degree 0. Therefore, $S=\sum_{\nu\in Q^+}S_\nu$, where $S_\nu:{\mathcal{U}}({\mathfrak{b}}^-)_{-\nu}\rightarrow{\mathcal{U}}({\mathfrak{b}}^+)^{\#}_{-\nu}$ is given by restriction.
For our purposes, it is more convenient to introduce a bilinear form \[ (\cdot,\cdot)_S:{\mathcal{U}}({\mathfrak{q}}(n))\otimes{\mathcal{U}}({\mathfrak{q}}(n))\rightarrow{\mathcal{U}}({\mathfrak{h}}) \] with the property that $\operatorname{Rad}(\cdot,\cdot)_S=\ker S$. To do this we introduce the (non-super) \emph{transpose} antiautomorphism $\tau:{\mathcal{U}}({\mathfrak{q}}(n))\rightarrow{\mathcal{U}}({\mathfrak{q}}(n))$ given by $\tau(x)=x^t$ for $x\in{\mathfrak{q}}(n)$ and extended to ${\mathcal{U}}({\mathfrak{q}}(n))$ by $\tau(xy)=\tau(y)\tau(x)$. Note that this is the ``naive'' antiautomorphism introduced in \cite{g}. Define $(\cdot,\cdot)_S$ by \[ (u,v)_S=(-1)^{p(u)p(v)}S(v)(\varsigma\tau(u))=HC(\tau(u)v) \] for all $u,v\in{\mathcal{U}}({\mathfrak{q}}(n))$.
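As a simple illustration (our own, assuming the standard relation $[f_{ij},f_{ji}]=e_{ii}+e_{jj}$ in ${\mathfrak{q}}(n)$), take $u=v=f_{ji}$ with $i<j$, so that $\tau(f_{ji})=f_{ij}\in{\mathfrak{n}}^+$. Then \[ (f_{ji},f_{ji})_S=HC(f_{ij}f_{ji})=HC\left([f_{ij},f_{ji}]-f_{ji}f_{ij}\right)=e_{ii}+e_{jj}, \] since $f_{ji}f_{ij}\in{\mathfrak{n}}^-{\mathcal{U}}({\mathfrak{q}}(n))$ lies in the kernel of $HC$.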
\begin{prp} The radical of the form coincides with the kernel of the Shapovalov map: $\operatorname{Rad}(\cdot,\cdot)_S=\ker S$. \end{prp}
\begin{proof} Assume $u\in\ker S$ and $v\in{\mathcal{U}}({\mathfrak{b}}^-)$. Then, $\tau(v)\in{\mathcal{U}}({\mathfrak{b}}^+)$ and \[ (\tau\varsigma(v),u)_S =(-1)^{p(u)p(v)}S(u)(\varsigma\tau\tau\varsigma(v))=(-1)^{p(u)p(v)}S(u)(v)=0, \] showing that $u\in\operatorname{Rad}(\cdot,\cdot)_S$.
Conversely, assume $u\in\operatorname{Rad}(\cdot,\cdot)_S$ and $v\in{\mathcal{U}}({\mathfrak{b}}^+)$. Then, $\tau\varsigma(v)\in{\mathcal{U}}({\mathfrak{b}}^-)$ and \[ 0=(\tau\varsigma(v),u)_S =(-1)^{p(u)p(v)}S(u)(\varsigma\tau\tau\varsigma(v))=(-1)^{p(u)p(v)}S(u)(v). \] Hence, $u\in\ker S$. \end{proof}
\begin{rmk} We have already defined $\tau$ to be an antiautomorphism of the AHCA. We will show the compatibility of the two anti-automorphisms in Proposition \ref{P:when tau's collide}. \end{rmk}
\section{A Lie-Theoretic construction of ${\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)$}\label{S:LieTheoreticConstr} Let $X$ be a $\ensuremath{\mathfrak{q}}(n)$-supermodule. In this section we construct a homomorphism of superalgebras \[ {\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)\rightarrow\operatorname{End}_{\ensuremath{\mathfrak{q}}(n)}(X\otimes V^{\otimes d}) \] along the lines of Arakawa and Suzuki, \cite{as}. The main difficulty is the lack of an even invariant bilinear form and, consequently, of a suitable Casimir element in $\ensuremath{\mathfrak{q}}(n)^{\otimes2}$. However, we find inspiration for a suitable substitute in Olshanski's work in the quantum setting \cite{o}.
\subsection{Lie Bialgebra structures on $\ensuremath{\mathfrak{q}}(n)$} We begin by reviewing the construction of a Manin triple for $\ensuremath{\mathfrak{q}}(n)$ from \cite{o} (see also \cite{d1}). A Manin triple $({\mathfrak{p}},{\mathfrak{p}}_1,{\mathfrak{p}}_2)$ consists of a Lie superalgebra ${\mathfrak{p}}$ equipped with a nondegenerate even invariant symmetric bilinear form $B$, together with two subalgebras ${\mathfrak{p}}_1$ and ${\mathfrak{p}}_2$ which are $B$-isotropic transversal subspaces of ${\mathfrak{p}}$. In particular, $B$ defines a nondegenerate pairing between ${\mathfrak{p}}_1$ and ${\mathfrak{p}}_2$.
Define a cobracket $\Delta:{\mathfrak{p}}_1\rightarrow {\mathfrak{p}}_1^{\otimes2}$ by dualizing the bracket ${\mathfrak{p}}_2^{\otimes 2}\rightarrow{\mathfrak{p}}_2$: \[ B^{\otimes2}(\Delta(X),Y_1\otimes Y_2)=B(X,[Y_1,Y_2]),\;\;\;(X\in{\mathfrak{p}}_1). \] Then, the pair $({\mathfrak{p}}_1,\Delta)$ is called a Lie (super)bialgebra.
Choose a basis $\{X_\alpha\}$ for ${\mathfrak{p}}_1$ and a basis $\{Y_\alpha\}$ for ${\mathfrak{p}}_2$ such that $B(X_\alpha,Y_\beta)=\delta_{\alpha\beta}$, and set $s=\sum_\alpha X_\alpha\otimes Y_\alpha$. Then, it turns out that $s$ satisfies the classical Yang-Baxter equation \[ [s^{12},s^{13}]+[s^{12},s^{23}]+[s^{13},s^{23}]=0 \] and $\Delta(X)=[1\otimes X+X\otimes 1,s]$, for $X\in{\mathfrak{p}}_1$.
\subsection{The Super Casimir} Note that when ${\mathfrak{p}}={\mathfrak{g}}$ is a simple Lie algebra, ${\mathfrak{p}}_1=\mathfrak{b}_+$, ${\mathfrak{p}}_2=\mathfrak{b}_-$ are the positive and negative Borel subalgebras and $B$ is the trace form, $s$ becomes the classical $r$-matrix, which we will denote $r^{12}$. We can repeat this construction with the roles of ${\mathfrak{p}}_1$ and ${\mathfrak{p}}_2$ reversed and obtain another classical $r$-matrix which we denote $r^{21}$. Then, the Casimir is simply $\Omega=r^{12}+r^{21}$, see \cite{as} $\S1.2$.
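For orientation, we recall the standard example (included here as an illustration): for ${\mathfrak{g}}=\mathfrak{sl}(2)$ with basis $e,h,f$ and the trace form, one has $r^{12}=e\otimes f+\tfrac{1}{4}h\otimes h$ and $r^{21}=f\otimes e+\tfrac{1}{4}h\otimes h$, so that \[ \Omega=r^{12}+r^{21}=e\otimes f+f\otimes e+\tfrac{1}{2}h\otimes h \] is the split Casimir dual to the trace form.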
In \cite{o}, Olshanski constructs such an element $s$ for ${\mathfrak{p}}={\mathfrak{gl}}(n|n)$, ${\mathfrak{p}}_1=\ensuremath{\mathfrak{q}}(n)$ and some fixed choice of ${\mathfrak{p}}_2$ analogous to a positive Borel. We will review this construction to obtain an element which we will call $s_+$, then replace ${\mathfrak{p}}_2$ with an analogue of a negative Borel to obtain another element called $s_-$. Then, we show that the element $\Omega=s_++s_-$ performs the role of the Casimir in our setting.
\begin{dfn}Let ${\mathfrak{p}}={\mathfrak{gl}}(n|n)$, $B(x,y)=\mathrm{str}(xy)$ (where $\mathrm{str}(E_{ij})=\delta_{ij}\mathrm{sgn}(i)$ for $i,j\in I$), and ${\mathfrak{p}}_1={\mathfrak{q}}(n)$.
\begin{enumerate} \item Let \[ {\mathfrak{p}}_2^+=\sum_{i\in I^+}{\mathbb{C}}(E_{ii}-E_{-i,-i})+\sum_{\substack{i,j\in I,\\ i<j}}{\mathbb{C}} E_{ij}. \] Then the corresponding element $s_+$ is given by \[ s_+=\frac12\sum_{i\in I^+}e_{ii}\otimes{\bar{e}}_{ii}+\sum_{\substack{i,j\in I^+\\i>j}}e_{ij}\otimes E_{ji}-\sum_{\substack{i,j\in I^+\\i<j}}e_{ij}\otimes E_{-j,-i}-\sum_{i,j\in I^+}f_{ij}\otimes E_{-j,i}. \] \item Let \[ {\mathfrak{p}}_2^-=\sum_{i\in I^+}{\mathbb{C}}(E_{ii}-E_{-i,-i})+\sum_{\substack{i,j\in I,\\ i>j}}{\mathbb{C}} E_{ij}. \] Then, the corresponding element $s_-$ is given by \[ s_-=\frac12\sum_{i\in I^+}e_{ii}\otimes{\bar{e}}_{ii}-\sum_{\substack{i,j\in I^+\\i>j}}e_{ij}\otimes E_{-j,-i}+\sum_{\substack{i,j\in I^+\\i<j}}e_{ij}\otimes E_{j,i}+\sum_{i,j\in I^+}f_{ij}\otimes E_{j,-i}. \] \end{enumerate} \end{dfn}
We now define our substitute Casimir: \begin{eqnarray}\label{casimir} \Omega=s_++s_-=\sum_{i,j\in I^+}e_{ij}\otimes{\bar{e}}_{ji}-\sum_{i,j\in I^+}f_{ij}\otimes{\bar{f}}_{ji}\in Q(V)\otimes\operatorname{End}(V), \end{eqnarray} where ${\bar{e}}_{ij}$ and ${\bar{f}}_{ij}$ are given in \eqref{bar-e/f}.
\subsection{Classical Sergeev Duality}\label{SS:Sergeev Duality} We now need to recall Sergeev's duality between ${\mathcal{S}} (d)$ and $\ensuremath{\mathfrak{q}} (n)$. Recall the matrix $C=\sum_{i\in I^+}{\bar{f}}_{ii}$ from the previous section, and define the superpermutation operator \[ S=\sum_{i,j\in I}\mathrm{sgn}(j)E_{ij}\otimes E_{ji}\in\operatorname{End}(V)^{\otimes2}, \] where $\mathrm{sgn}(j)$ is the sign of $j$. Let $\pi_i:\operatorname{End}(V)\rightarrow\operatorname{End}(V)^{\otimes d}$ be given by $\pi_i(x)=1^{\otimes i-1}\otimes x\otimes 1^{\otimes d-i}$ for all $x\in\operatorname{End}(V)$ and $i=1,\ldots, d$; similarly, define $\pi_{ij}:\operatorname{End}(V)^{\otimes 2}\rightarrow\operatorname{End}(V)^{\otimes d}$ by $\pi_{ij}(x\otimes y)=1^{\otimes i-1}\otimes x\otimes 1^{\otimes j-i-1}\otimes y\otimes 1^{\otimes d-j}$. Set $C_i=\pi_i(C)$ and, for $1\leq i<j\leq d$, set $S_{ij}=\pi_{ij}(S)$. Then,
\begin{thm}\label{Sergeev Duality Theorem}\cite[Theorem 3]{s} The map which sends $c_i\mapsto C_i$ and $s_i\mapsto S_{i,i+1}$ is an isomorphism of superalgebras \[ {\mathcal{S}}(d)\rightarrow\operatorname{End}_{\ensuremath{\mathfrak{q}}(n)}(V^{\otimes d}). \] \end{thm}
\subsection{${\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)$-action}\label{SS:action} Let $M$ be a ${\mathfrak{q}}(n)$-supermodule. In this section we construct an action of ${\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)$ on $M\otimes V^{\otimes d}$ that commutes with the action of ${\mathfrak{q}}(n)$. To this end, extend the map $\pi_i$ from $\S$\ref{SS:Sergeev Duality} to a map $\pi_i:\operatorname{End}(V)\rightarrow\operatorname{End}(V)^{\otimes d+1}$ so that $\pi_i(x)=1^{\otimes i}\otimes x\otimes 1^{\otimes d-i}$ for $x\in \operatorname{End}(V)$ and $i=0,\ldots,d$ (i.e.\ add a 0th tensor place); similarly, extend $\pi_{ij}$.
Define $C_i$ and $S_{ij}$ as in $\S$\ref{SS:Sergeev Duality}. Define \[ \Omega_{ij}=\pi_{ij}(\Omega),\qquad 0\leq i<j\leq d, \] and set $X_i=\Omega_{0i}+\sum_{1\leq j<i}(1-C_jC_i)S_{ji}$.
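Unwinding the definition for small $i$: \[ X_1=\Omega_{01},\qquad X_2=\Omega_{02}+(1-C_1C_2)S_{12},\qquad X_3=\Omega_{03}+(1-C_1C_3)S_{13}+(1-C_2C_3)S_{23}. \]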
\begin{thm}\label{Affine Sergeev Action} Let $M$ be a ${\mathfrak{q}}(n)$-supermodule. Then, the map which sends $c_i\mapsto C_i$, $s_i\mapsto S_{i,i+1}$ and $x_i\mapsto X_i$ defines a homomorphism \[ {\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)\rightarrow\operatorname{End}_{\ensuremath{\mathfrak{q}}(n)}(M\otimes V^{\otimes d}). \] \end{thm}
\begin{proof} It is clear from Theorem \ref{Sergeev Duality Theorem} that the $C_i$ and $S_{i,i+1}$ form a copy of the Sergeev algebra ${\mathcal{S}}(d)$ inside $\operatorname{End}_{{\mathfrak{q}}(n)}(M\otimes V^{\otimes d})$ via the obvious embedding $\operatorname{End}_{{\mathfrak{q}}(n)}(V^{\otimes d})\hookrightarrow\operatorname{End}_{{\mathfrak{q}}(n)}(M\otimes V^{\otimes d})$, $A\mapsto\mathrm{id}_M\otimes A$. Moreover, for $i=1,\ldots,d$, $X_i\in\operatorname{End}(M\otimes V^{\otimes d})$, since $X_i\in {\mathcal{U}}({\mathfrak{q}}(n))\otimes\operatorname{End}(V)^{\otimes d}$. Therefore it is enough to check the following properties: \begin{enumerate} \item[(a)] The $X_i$ satisfy the mixed relations \eqref{c&x} and \eqref{s&x}, \item[(b)] $X_iX_j-X_jX_i=0$, and \item[(c)] the $X_i$ commute with the action of ${\mathfrak{q}}(n)$ on $M\otimes V^{\otimes d}$. \end{enumerate} First, we check that $\Omega(1\otimes C)=-(1\otimes C)\Omega$. To see this, a calculation shows that $C\bar{e}_{ji}=-\bar{e}_{ji}C$ and $C\bar{f}_{ji}=\bar{f}_{ji}C$. Hence, \[ (1\otimes C)(e_{ij}\otimes\bar{e}_{ji})=-(e_{ij}\otimes\bar{e}_{ji})(1\otimes C) \] and \begin{eqnarray*} (1\otimes C)(f_{ij}\otimes\bar{f}_{ji})&=&(-1)^{p(f_{ij})p(C)} (f_{ij}\otimes C\bar{f}_{ji})\\ &=&(-1)^{p(\bar{f}_{ji})p(C)}(f_{ij}\otimes \bar{f}_{ji}C)\\ &=&(-1)^{p(\bar{f}_{ji})p(C)+p(1)p(\bar{f}_{ji})}(f_{ij}\otimes \bar{f}_{ji})(1\otimes C), \end{eqnarray*} so the result follows since $p(1)={\bar{0}}$. Next, it is easy to see that $S_{i,i+1}\Omega_{0i}S_{i,i+1}=\Omega_{0,i+1}$ using \eqref{tensor product rule-algebra}. Therefore, (a) follows from the definition of $X_i$. It is now easy to show that, for $i<j$, (b) is equivalent to \[ \Omega_{0i}\Omega_{0j}-\Omega_{0j}\Omega_{0i}=(\Omega_{0j}-\Omega_{0i})S_{ij}+ (\Omega_{0j}+\Omega_{0i})C_iC_jS_{ij}. \] This equality is then a direct calculation. Finally, to verify (c), it is enough to show that for any $X\in{\mathfrak{q}}(n)$, \[ [1\otimes X+X\otimes1,\Omega]=0. 
\] This is another routine calculation using \eqref{tensor product rule-algebra}. \end{proof}
Now, recall the ``naive'' antiautomorphism $\tau:{\mathcal{U}}({\mathfrak{q}}(n))\rightarrow{\mathcal{U}}({\mathfrak{q}}(n))$. This extends to an antiautomorphism of ${\mathcal{U}}({\mathfrak{gl}}(n|n))$. Extend $\tau$ to an antiautomorphism of ${\mathcal{U}}({\mathfrak{gl}}(n|n))^{\otimes 2}$ by $\tau(x\otimes y)=(-1)^{p(x)}\tau(x)\otimes\tau(y)$. By induction, extend $\tau$ to an antiautomorphism of ${\mathcal{U}}({\mathfrak{gl}}(n|n))^{\otimes k}$ by $\tau(x_1\otimes\cdots\otimes x_k)=(-1)^{p(x_1)}\tau(x_1)\otimes\tau(x_2\otimes\cdots\otimes x_k)$. A direct check verifies the following result.
\begin{prp}\label{P:when tau's collide} We have that $\tau(C_i)=-C_i$, $\tau(S_{i,i+1})=S_{i,i+1}$ and $\tau(X_i)=X_i$ for all admissible $i$. In particular, the antiautomorphism $\tau^{\otimes d+1}:{\mathcal{U}}({\mathfrak{gl}}(n|n))^{\otimes d+1}\rightarrow{\mathcal{U}}({\mathfrak{gl}}(n|n))^{\otimes d+1}$ coincides with the antiautomorphism $\tau:{\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)\rightarrow{\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)$. \end{prp}
\subsection{The Functor $F_\lambda$}\label{SS:Flambda} In the previous section, we showed that there is a homomorphism from ${\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)$ to $\operatorname{End}_{{\mathfrak{q}}(n)}(M\otimes V^{\otimes d})$. Since the action of ${\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)$ on $M\otimes V^{\otimes d}$ commutes with the action of ${\mathfrak{q}}(n)$, it preserves both primitive vectors and weight spaces. By \emph{primitive vector} we mean an element of $M\otimes V^{\otimes d}$ which is annihilated by the subalgebra ${\mathfrak{n}}^{+}$ given by the triangular decomposition of $\ensuremath{\mathfrak{q}} (n)$ as in Section~\ref{SS:RootData}. Therefore, given a weight $\lambda\in P(M\otimes V^{\otimes d})$ we have an action of ${\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)$ on
\begin{equation}\label{E:Flambdadef} F_\lambda M :=\left\{m \in M\otimes V^{\otimes d} \mid {\mathfrak{n}}^+.m=0 \text{ and } m \in \left(M\otimes V^{\otimes d} \right)_{\lambda} \right\}. \end{equation}
In the case when $\lambda\in{P^{++}}$ we can provide alternative descriptions of the functor $F_{\lambda}$. First we recall the following key result of Penkov \cite{p}. Given a weight $\lambda \in P$, we write $\chi_{\lambda}$ for the central character defined by the simple $\ensuremath{\mathfrak{q}} (n)$-module of highest weight $\lambda$. Then, there is a block decomposition \begin{equation}\label{E:blockdecomp} \mathcal{O}({\mathfrak{q}}(n))=\bigoplus_{\chi_{\lambda}}\mathcal{O}({\mathfrak{q}}(n))^{[\lambda]} \end{equation} where the sum is over all central characters $\chi_{\lambda}$ and $\mathcal{O}({\mathfrak{q}}(n))^{[\lambda]}=\mathcal{O}({\mathfrak{q}}(n))^{[\chi_\lambda]}$ denotes the block determined by the central character $\chi_\lambda$. Given $N$ in $\mathcal{O}({\mathfrak{q}}(n))$, let $N^{[\chi_{\gamma}]}=N^{[\gamma]}$ denote the projection of $N$ onto the direct summand which lies in $\mathcal{O}({\mathfrak{q}}(n))^{[\chi_\gamma]}$.
The question then becomes to describe when $\chi_{\lambda}=\chi_{\mu}$ for $\lambda, \mu \in P$. This is answered in the case when $\lambda$ is typical by the following result of Penkov \cite{p}. Recall that the symmetric group acts on $P$ by permutation of coordinates. \begin{prp}\label{P:penkov} Let $\lambda \in {P^{++}} $ be a typical weight and let $\mu \in P$. Then $\chi_{\lambda}=\chi_{\mu}$ if and only if $\mu =w(\lambda)$ for some $w \in S_n.$ \end{prp}
For short, we call a weight $\lambda \in P$ \emph{atypical} if it is not typical. By the description of the blocks $\mathcal{O}({\mathfrak{q}}(n))^{[\lambda]}$, if $L(\mu)$ is an object of $\mathcal{O}({\mathfrak{q}}(n))^{[\lambda]}$ then $\lambda$ is typical if and only if $\mu$ is typical (cf.\ \cite[Proposition 1.1]{ps2} and the remarks which follow it). We then have the following preparatory lemma.
\begin{lem} Let $\lambda,\gamma \in P$. Then the following statements hold: \begin{enumerate} \item [(i)] Assume $\gamma$ is atypical and $\lambda$ is typical. If $N$ is an object of $\mathcal{O}^{[\gamma]}$, then $N_{\lambda}=(\ensuremath{\mathfrak{n}^{-}} N)_{\lambda}.$ \item [(ii)] Assume $\lambda, \gamma \in P^{++}$ are typical and dominant and $\lambda \neq \gamma$. If $N$ is an object of $\mathcal{O}^{[\gamma]}$, then $N_{\lambda}=(\ensuremath{\mathfrak{n}^{-}} N)_{\lambda}$. \end{enumerate} \end{lem}
\begin{proof} By \cite[Lemma 4.5]{b}, every object of $\mathcal{O}({\mathfrak{q}}(n))$ has a finite Jordan-H\"{o}lder series. The proof of (i) is by induction on the length of a composition series of $N.$ The base case is when $N$ has length one (i.e.\ $N \cong L(\nu)$ is a simple module). In this case, if $N_{\lambda}$ is nontrivial then $\lambda \leq \nu$; moreover, $\lambda \neq \nu$, since $\nu$ is atypical (because $L(\nu)$ is an object of $\mathcal{O}^{[\gamma]}$) while $\lambda$ is typical. Hence $\lambda < \nu$, and since $N$ is generated by its highest weight space, $N_{\lambda}=(\ensuremath{\mathfrak{n}^{-}} N)_{\lambda}$. Now consider a composition series \[ 0=N_{0} \subset N_{1} \subset \dotsb \subset N_{t} =N. \] Let $v \in N_{\lambda}$ so that $v +N_{t-1} \in N_{t}/N_{t-1}$ is nonzero. Since $N_{t}/N_{t-1}$ is a simple module in $\mathcal{O}^{[\gamma]},$ by the base case there exists a $w \in N_{t}=N$ and $y \in \ensuremath{\mathfrak{n}^{-}}$ so that $yw + N_{t-1}=v + N_{t-1}.$ Thus, $v-yw \in N_{t-1}$ and is of weight $\lambda.$ By the inductive assumption, there exists $w' \in N_{t-1} \subset N$ and $y' \in \ensuremath{\mathfrak{n}^{-}}$ such that $y'w' = v-yw.$ That is, $v=yw+y'w' \in \ensuremath{\mathfrak{n}^{-}} N.$ This proves the desired result.
Now, (ii) follows by a similar argument by induction on the length of a composition series. If $N$ is simple and $N_{\lambda}\neq 0$, then $\lambda$ is not the highest weight of $N$ (as $\gamma$ is the unique dominant highest weight among the simple modules in $\mathcal{O}^{[\gamma]}$ by Proposition~\ref{P:penkov}). From this it immediately follows that $N_{\lambda}=(\ensuremath{\mathfrak{n}^{-}} N)_{\lambda}$. Now proceed by induction as in the previous paragraph. \end{proof}
\begin{lem}\label{L:TypicalFlambda} Let $\lambda \in P^{++}$ be typical and dominant, and let $M \in \mathcal{O}.$ Then \[ F_{\lambda}\left(M \right) \cong \left( (M\otimes V^{\otimes d})^{[\lambda]}\right)_\lambda \cong \left[ M\otimes V^{\otimes d}/{\mathfrak{n}}_-(M\otimes V^{\otimes d})\right]_\lambda \] as ${\mathcal{H}_{\Cl}^{\mathrm{aff}}} (d)$-modules. \end{lem}
\begin{proof} It should first be remarked that since the action of ${\mathcal{H}_{\Cl}^{\mathrm{aff}}} (d)$ commutes with the action of ${\mathfrak{q}} (n)$, the action of ${\mathcal{H}_{\Cl}^{\mathrm{aff}}} (d)$ on $M \otimes V^{\otimes d}$ induces an action on each of the vector spaces given in the lemma.
Now, by Proposition~\ref{P:penkov} and the assumption that $\lambda$ is dominant, it follows that for any module $N \in \mathcal{O}^{[\lambda]},$ $N_{\nu} \neq 0$ only if $\nu \leq \lambda$ in the dominance order. Thus any vector of weight $\lambda$ in $M \otimes V^{\otimes d}$ is necessarily a primitive vector. On the other hand, if there is a primitive vector of weight $\lambda$ in $M \otimes V^{\otimes d},$ then it must lie in the image of a nonzero homomorphism $M(\lambda) \to M \otimes V^{\otimes d}$. But as $M(\lambda)$ is an object in $\mathcal{O}^{[\lambda]}$, it follows that the primitive vector lies in $\left( (M\otimes V^{\otimes d})^{[\lambda]}\right)_\lambda $. Thus, there exists a canonical projection map \[ F_{\lambda}\left(M \right) \to \left( (M\otimes V^{\otimes d})^{[\lambda]}\right)_\lambda \] and this map is necessarily a vector space isomorphism. The fact that it is a ${\mathcal{H}_{\Cl}^{\mathrm{aff}}} (d)$-module homomorphism follows from the fact that the action of ${\mathcal{H}_{\Cl}^{\mathrm{aff}}} (d)$ on both vector spaces is induced by the action of ${\mathcal{H}_{\Cl}^{\mathrm{aff}}} (d)$ on $M \otimes V^{\otimes d}.$
Now consider the block decomposition \begin{equation*} M\otimes V^{\otimes d}= \oplus_{\chi_{\gamma}} (M\otimes V^{\otimes d})^{[\chi_{\gamma}]}, \end{equation*} where the direct sum runs over dominant $\gamma \in \ensuremath{\mathfrak{h}}_{{\bar{0}}}^{*}$ chosen so that the $\chi_{\gamma}$ are pairwise distinct central characters of ${\mathcal{U}}(\ensuremath{\mathfrak{g}} )$. This then induces the vector space direct sum decomposition \[ (M\otimes V^{\otimes d})/\ensuremath{\mathfrak{n}^{-}} (M\otimes V^{\otimes d}) = \oplus_{\chi_{\gamma}} (M\otimes V^{\otimes d})^{[\chi_{\gamma}]}/\ensuremath{\mathfrak{n}^{-}} (M\otimes V^{\otimes d})^{[\chi_{\gamma}]}, \] where $(M\otimes V^{\otimes d})^{[\chi_{\gamma}]}$ denotes the direct summand of $M\otimes V^{\otimes d}$ which lies in the block $\mathcal{O}^{[\gamma]}.$
By the previous lemma, if $\gamma$ is atypical or if $\gamma$ is typical and $\gamma \neq \lambda$, then \[ \left[ (M\otimes V^{\otimes d})^{[\chi_{\gamma}]}/ \ensuremath{\mathfrak{n}^{-}} (M\otimes V^{\otimes d})^{[\chi_{\gamma}]}\right]_{\lambda}=0. \] Therefore, \begin{equation}\label{E:functorvariations} \left[ (M\otimes V^{\otimes d})/\ensuremath{\mathfrak{n}^{-}} (M\otimes V^{\otimes d})\right]_{\lambda} = \left[(M\otimes V^{\otimes d})^{[\chi_{\lambda}]}/\ensuremath{\mathfrak{n}^{-}} (M\otimes V^{\otimes d})^{[\chi_{\lambda}]} \right]_{\lambda}. \end{equation} Finally, if $N$ is an object of $\mathcal{O}^{[\lambda]},$ then $N_{\mu} \neq 0$ only if $\mu \leq \lambda$ in the dominance order. Thus weight considerations imply $\left[ \ensuremath{\mathfrak{n}^{-}} (M\otimes V^{\otimes d})^{[\chi_{\lambda}]}\right]_{\lambda}=0$ which, in turn, implies that the canonical projection \[
\left( (M\otimes V^{\otimes d})^{[\lambda]}\right)_\lambda \to \left[ M\otimes V^{\otimes d}/{\mathfrak{n}}_-(M\otimes V^{\otimes d})\right]_\lambda \] is a vector space isomorphism. That it is a ${\mathcal{H}_{\Cl}^{\mathrm{aff}}} (d)$-module homomorphism follows from the fact that in both cases the action is induced from the ${\mathcal{H}_{\Cl}^{\mathrm{aff}}} (d)$ action on $M \otimes V^{\otimes d}$. \end{proof}
\begin{cor}\label{C:Flambdaexactness} If $\lambda \in {P^{++}}$ is dominant and typical, then the functor $F_{\lambda}:\mathcal{O} \to {\mathcal{H}_{\Cl}^{\mathrm{aff}}} (d)$-mod is exact. \end{cor}
\begin{proof} This follows immediately from the first alternative description of $F_{\lambda}$ in Lemma~\ref{L:TypicalFlambda}, as it is the composition of the exact functors $- \otimes V^{\otimes d}$, projection onto the direct summand lying in the block $\mathcal{O}^{[\lambda]}$, and projection onto the $\lambda$ weight space. \end{proof}
In what follows, when $\lambda$ is dominant and typical we use whichever description of $F_{\lambda}$ given in Lemma~\ref{L:TypicalFlambda} is most convenient.
\subsection{Image of the Functor}\label{SS:functorimage} We can now describe the image of Verma modules under the functor.
\begin{lem}\label{L:description} Let $M(\mu)$ be a Verma module in $\mathcal{O}$ and let $\lambda \in P^{++}$ be a dominant and typical weight. The natural inclusion \[ E(\mu)\otimes(V^{\otimes d})_{\lambda-\mu}\hookrightarrow(M(\mu)\otimes V^{\otimes d})_\lambda \] induces an isomorphism of ${\mathcal{S}}(d)$-modules $E(\mu)\otimes(V^{\otimes d})_{\lambda-\mu}\cong F_\lambda(M(\mu))$. In particular, $F_\lambda(M(\mu))=0$ unless $\lambda-\mu\in {P_{\geq0}}(d)$. \end{lem}
\begin{proof} This is proved exactly as in \cite[Lemma 3.3.2]{as}, except now the highest weight space of $M(\mu)$ is $E(\mu)$. Namely, by the tensor identity and the PBW theorem, \begin{equation}\label{E:tensoridentity} M(\mu) \otimes V^{\otimes d} \cong U(\ensuremath{\mathfrak{g}} ) \otimes_{U(\ensuremath{\mathfrak{b}})} \left(E(\mu) \otimes V^{\otimes d} \right) \cong U(\ensuremath{\mathfrak{n}^{-}}) \otimes E(\mu) \otimes V^{\otimes d}, \end{equation} where the first isomorphism is as $\ensuremath{\mathfrak{g}}$-modules and the second is as $\ensuremath{\mathfrak{h}}_{\bar{0}}$-modules. Thus the canonical projection map induces the isomorphism of $\ensuremath{\mathfrak{h}}_{\bar{0}}$-modules given by \begin{equation*} 1 \otimes E(\mu) \otimes V^{\otimes d} \cong M(\mu) \otimes V^{\otimes d} / \ensuremath{\mathfrak{n}^{-}} \left(M(\mu) \otimes V^{\otimes d} \right). \end{equation*} Taking $\lambda$ weight spaces on both sides yields the vector space isomorphism \[ 1 \otimes E(\mu) \otimes \left( V^{\otimes d}\right)_{\lambda-\mu}\cong \left[ M(\mu) \otimes V^{\otimes d} / \ensuremath{\mathfrak{n}^{-}} \left(M(\mu) \otimes V^{\otimes d} \right)\right]_{\lambda}. \] Now, the composition of the natural inclusion $E(\mu)\otimes(V^{\otimes d})_{\lambda-\mu}\hookrightarrow(M(\mu)\otimes V^{\otimes d})_\lambda$ with \eqref{E:tensoridentity}, and the isomorphism above implies that \[
E(\mu) \otimes \left( V^{\otimes d}\right)_{\lambda-\mu} \cong 1 \otimes E(\mu) \otimes \left( V^{\otimes d}\right)_{\lambda-\mu} \cong \left[ M(\mu) \otimes V^{\otimes d} / \ensuremath{\mathfrak{n}^{-}} \left(M(\mu) \otimes V^{\otimes d} \right)\right]_{\lambda} = F_{\lambda}\left(M(\mu) \right). \]
That it is an isomorphism of ${\mathcal{S}}(d)$-modules follows from the fact that in each case the action of ${\mathcal{S}}(d)$ is via the action induced from the action of ${\mathcal{S}}(d)$ on $M(\mu) \otimes V^{\otimes d}.$ \end{proof}
\begin{cor}\label{C:StandardDim} Let $\lambda \in P^{++}$ be a dominant and typical weight and let $\mu \in P$ with $\lambda-\mu\in {P_{\geq0}}(d)$. Set $d_{i}=\lambda_{i}-\mu_{i}$ for $i=1, \dotsc ,n$. \begin{enumerate} \item [(i)] Let $M(\mu)$ be the little Verma module of highest weight $\mu$. Then, \[ \dim F_{\lambda}(M(\mu)) = 2^{d+\lfloor(n-\gamma_0(\mu)+1)/2 \rfloor}\frac{d!}{d_{1}! \dotsb d_{n}!}. \] \item [(ii)] Let $\widehat{M}(\mu)$ be the big Verma module of highest weight $\mu$. Then, \[ \dim F_\lambda(\widehat{M}(\mu))=2^{d+n-\gamma_0(\mu)}\frac{d!}{d_1!\cdots d_n!}. \] \end{enumerate} \end{cor}
\begin{proof} We have $\dim E(\mu) = 2^{\lfloor(n-\gamma_0(\mu)+1)/2 \rfloor}$. For each $\varepsilon_{i}$ $(i=1, \dotsc , n)$, $\dim V_{\varepsilon_{i}}=2.$ A combinatorial count shows that \[ \dim \left(V^{\otimes d} \right)_{\lambda - \mu} = \frac{d!}{d_{1}! \dotsb d_{n}!}2^{d}. \] The statement of (i) then follows by Lemma~\ref{L:description}. The statement of (ii) follows from (i) and Lemma~\ref{L:little verma in big verma}. \end{proof}
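For a concrete instance of the count (our own arithmetic, assuming $\lambda=(3,2)$ is dominant and typical): take $n=2$, $d=3$, $\lambda=(3,2)$, and $\mu=(1,1)$, so $d_1=2$, $d_2=1$, and $\gamma_0(\mu)=0$. Then \[ \dim F_\lambda(M(\mu))=2^{3+\lfloor 3/2\rfloor}\,\frac{3!}{2!\,1!}=2^4\cdot 3=48, \qquad \dim F_\lambda(\widehat{M}(\mu))=2^{3+2}\cdot\frac{3!}{2!\,1!}=96, \] in agreement with Lemma~\ref{L:little verma in big verma}, since $2^{\lfloor(n-\gamma_0(\mu))/2\rfloor}=2$.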
Fix $\lambda,\mu\in P$ such that $\lambda-\mu\in {P_{\geq0}}(d)$, and let $d_i=\lambda_i-\mu_i$. Let $\{u_i,u_{\bar{i}}\}_{i=1,\ldots,n}$ be the standard basis for $V$, let $v_\mu\in E(\mu)$, and let $u_{\lambda-\mu}=u_1^{\otimes d_1}\otimes\cdots\otimes u_n^{\otimes d_n}\in (V^{\otimes d})_{\lambda-\mu}$. Finally, let \[ m_k=\sum_{i=1}^kd_i, \] and define $F_k=\pi_0(f_{kk})$ (see Section~\ref{SS:action}).
\begin{lem}\label{X action} Let $v_{\mu} \in M(\mu)_{\mu}$ be a primitive vector of weight $\mu,$ and let $u=u_{\lambda - \mu}= u_{1}^{\otimes d_{1}} \otimes \dotsb \otimes u_{n}^{\otimes d_{n}}.$ For each $1\leq k\leq n$ and $m_{k-1}<i\leq m_k$, \[ X_i.v_\mu\otimes u_{\lambda-\mu}\equiv \left(\mu_k+i-m_{k-1}-1-\sum_{m_{k-1}<\ell<i}C_\ell C_i -F_{k}C_i\right)v_\mu\otimes u_{\lambda-\mu} \] modulo ${\mathfrak{n}}_-(M(\mu)\otimes V^{\otimes d})$. As a consequence, \[ X_i^2v_\mu\otimes u_{\lambda-\mu}\equiv(\mu_k+i-m_{k-1}-1)(\mu_k+i-m_{k-1})v_\mu\otimes u_{\lambda-\mu}, \] again modulo ${\mathfrak{n}}_-(M(\mu)\otimes V^{\otimes d})$. \end{lem}
\begin{proof} We first do some preliminary calculations. Let $1 \leq j < k \leq n$ be fixed, let $m_{k-1} < i \leq m_{k}$ be fixed, and consider the vector \[ v_{\mu}\otimes u_{1}^{\otimes d_{1}} \otimes \dotsb \otimes u_{k}^{\otimes a} \otimes u_{j} \otimes u_{k}^{\otimes b} \otimes \dotsb \otimes u_{n}^{\otimes d_{n}}, \] where the $u_{j}$ is the $i$th tensor and $a+b+1=d_{k}$ (i.e.\ among the $u_{k}$'s, the one in the $i$th position, recalling that $v_{\mu}$ is in the zeroth position, is replaced with $u_{j}$). For short, let us write $u = u_{1}^{\otimes d_{1}} \otimes \dotsb \otimes u_{n}^{\otimes d_{n}}$ and $\hat{u}=u_{1}^{\otimes d_{1}} \otimes \dotsb \otimes u_{k}^{\otimes a} \otimes u_{j} \otimes u_{k}^{\otimes b} \otimes \dotsb \otimes u_{n}^{\otimes d_{n}}$. Then, \begin{eqnarray*} e_{kj}(v_{\mu}\otimes\hat{u})&=&(e_{kj}v_{\mu})\otimes\hat{u}\\ &&+ \sum_{r = 1}^{d_{j}} v_{\mu}\otimes u_{1}^{\otimes d_{1}} \otimes \dotsb \otimes u_{j}^{\otimes r-1} \otimes u_{k} \otimes u_{j}^{\otimes d_{j}-r} \otimes \dotsb \otimes u_{k}^{\otimes a} \otimes u_{j} \otimes u_{k}^{\otimes b} \otimes \dotsb \otimes u_{n}^{\otimes d_{n}} + v_{\mu} \otimes u\\
&=&(e_{kj}v_{\mu})\otimes \hat{u} + \sum_{r = 1}^{d_{j}} S_{m_{j-1}+r, i}
(v_{\mu}\otimes u) + v_{\mu} \otimes u. \end{eqnarray*} Similarly, if we write $\check{u}=C_{i}\hat{u}=u_{1}^{\otimes d_{1}} \otimes \dotsb \otimes u_{k}^{\otimes a} \otimes u_{-j} \otimes u_{k}^{\otimes b} \otimes \dotsb \otimes u_{n}^{\otimes d_{n}}$, then \begin{eqnarray*} f_{kj} (v_{\mu}\otimes \check{u}) &=& (f_{kj}v_{\mu})\otimes \check{u}
+(-1)^{p(v_{\mu})}\times\\
&&\times \sum_{r=1}^{d_{j}} v_{\mu}
\otimes u_{1}^{\otimes d_{1}}
\otimes \dotsb \otimes u_{j}^{\otimes r-1} \otimes u_{-k} \otimes u_{j}^{\otimes d_{j}-r}
\otimes \dotsb \otimes u_{k}^{\otimes a} \otimes u_{-j} \otimes u_{k}^{\otimes b}
\otimes \dotsb \otimes u_{n}^{\otimes d_{n}}\\
&&+ (-1)^{p(v_{\mu})}v_{\mu} \otimes u\\
&=&(f_{kj}v_{\mu})\otimes \check{u} +(-1)^{p(v_{\mu})}
\sum_{r = 1}^{d_{j}} C_{m_{j-1}+r}C_{i}S_{m_{j-1}+r, i}
(v_{\mu}\otimes u) + (-1)^{p(v_{\mu})} v_{\mu} \otimes u. \end{eqnarray*}
We can now consider the first statement of the lemma. Throughout, we write $\equiv$ for congruence modulo the subspace ${\mathfrak{n}}_-(M(\mu)\otimes V^{\otimes d})$. Let $1 \leq k \leq n$ be fixed so that $m_{k-1} < i \leq m_{k}$ (i.e.\ there is a $u_{k}$ in the $i$th position of $v_{\mu}\otimes u$). Using that $v_{\mu}$ is a primitive vector and the equalities given above, we deduce that \begin{small} \begin{eqnarray*} X_i\left( v_\mu\otimes u_{\lambda-\mu}\right)&=&\sum_{\ell,j=1}^n e_{\ell j}v_\mu\otimes u_1^{\otimes d_1}\otimes\cdots\otimes u_k^{\otimes i-m_{k-1}-1}\otimes {\bar{e}}_{j\ell}u_k\otimes u_k^{\otimes m_k-i}\otimes\cdots\otimes u_n^{\otimes d_n}\\
&&-(-1)^{p(v_{\mu})}\sum_{\ell,j=1}^nf_{\ell j}v_\mu\otimes u_1^{\otimes d_1}\otimes\cdots\otimes u_k^{\otimes i-m_{k-1}-1}\otimes {\bar{f}}_{j\ell}u_k\otimes u_k^{\otimes m_k-i}\otimes\cdots\otimes u_n^{\otimes d_n}\\
&&+\sum_{\ell<i}(1-C_\ell C_i)S_{\ell i}(v_\mu\otimes
u)\\
&=&\sum_{j\leq k} e_{kj}v_\mu\otimes u_1^{\otimes d_1}\otimes\cdots\otimes u_k^{\otimes i-m_{k-1}-1}\otimes u_j\otimes u_k^{\otimes m_k-i}\otimes\cdots\otimes u_n^{\otimes d_n}\\
&&-(-1)^{p(v_\mu)}\sum_{j\leq k}f_{kj}v_\mu\otimes u_1^{\otimes d_1}\otimes\cdots\otimes u_k^{\otimes i-m_{k-1}-1}\otimes u_{-j}\otimes u_k^{\otimes m_k-i}\otimes\cdots\otimes u_n^{\otimes d_n}\\
&&+\sum_{\ell<i}(1-C_\ell C_i)S_{\ell i}(v_\mu\otimes u)\\
& \equiv& - \sum_{j < k} \left[\sum_{a = 1}^{d_{j}} S_{m_{j-1}+a, i}
(v_{\mu}\otimes u) + v_{\mu} \otimes u \right] \\
&&+ \sum_{j < k} \left[ \sum_{a = 1}^{d_{j}}
C_{m_{j-1}+a}C_{i}S_{m_{j-1}+a, i} (v_{\mu}\otimes u) + v_{\mu} \otimes u \right] \\
&&+ \mu_{k}v_{\mu}\otimes u + C_{i}
\left((f_{kk}v_{\mu})\otimes u \right) +\sum_{\ell<i}(1-C_\ell C_i)S_{\ell i}
(v_\mu\otimes u)\\
&=& - \sum_{l \leq m_{k-1}} S_{l,i}v_{\mu}\otimes u -(k-1)v_{\mu} \otimes u
+ \sum_{l \leq m_{k-1}} C_{l}C_{i}S_{l,i}v_{\mu}\otimes u +(k-1)v_{\mu}\otimes u \\
&&+ \mu_{k}v_{\mu}\otimes u
+ C_{i}\left((f_{kk}v_{\mu})\otimes u \right) +\sum_{\ell<i}(1-C_\ell C_i)S_{\ell i}(v_\mu\otimes u)\\
& = &\mu_kv_\mu\otimes
u_{\lambda-\mu}+C_i((f_{kk}v_\mu)\otimes u_{\lambda-\mu})+\sum_{m_{k-1}<\ell<i}
(1-C_\ell C_i)S_{\ell,i}(v_\mu\otimes u)\\
&=&\left(\mu_k+i-m_{k-1}-1-\sum_{m_{k-1}<\ell<i}C_\ell C_iS_{\ell,i}\right)
(v_\mu\otimes u_{\lambda-\mu})+C_i((f_{kk}v_\mu)\otimes u)\\
&=&\left(\mu_k+i-m_{k-1}-1-\sum_{m_{k-1}<\ell<i}C_\ell C_i -F_{k}C_i\right)
(v_\mu \otimes u). \end{eqnarray*} \end{small} Note that the last equality uses the fact that $S_{\ell,i} (v_{\mu}\otimes u) = v_{\mu}\otimes u$ for $m_{k-1}<\ell<i$ and that, as (odd) linear maps, $F_{k}C_{i}=-C_{i}F_{k}.$
Now we consider the second statement of the lemma. Using the previous calculation, the fact that $X_{i}$ and the $C$'s satisfy relation \eqref{c&x} of the degenerate AHCA, and the fact that $f_{kk}v_{\mu} \in M(\mu)_{\mu}$ is again a primitive vector of weight $\mu$,
\begin{eqnarray*} X_i^2(v_\mu\otimes u_{\lambda-\mu})
&\equiv& X_i\left(\mu_k+i-m_{k-1}-1-\sum_{m_{k-1}<\ell<i}C_\ell C_i-F_kC_i\right)
(v_\mu\otimes u_{\lambda-\mu})\\
&=&\left(\mu_k+i-m_{k-1}-1+\sum_{m_{k-1}<\ell<i}C_\ell C_i \right)X_{i}
(v_{\mu}\otimes u)- C_iX_{i}((f_{kk}v_\mu)\otimes u_{\lambda-\mu})\\
& \equiv & \left(\mu_k+i-m_{k-1}-1+\sum_{m_{k-1}<\ell<i}C_\ell C_i \right)\times\\
&&\times
\left(\mu_k+i-m_{k-1}-1-\sum_{m_{k-1}<\ell<i}C_\ell C_i-F_kC_i\right)
(v_\mu\otimes u_{\lambda-\mu}) \\
&&-C_{i}\left(\mu_k+i-m_{k-1}-1-\sum_{m_{k-1}<\ell<i}C_\ell C_i-F_kC_i\right)
((f_{kk}v_\mu)\otimes u_{\lambda-\mu}) \\
&=&\left(\mu_k+i-m_{k-1}-1+\sum_{m_{k-1}<\ell<i}C_\ell C_i \right)
\left(\mu_k+i-m_{k-1}-1-\sum_{m_{k-1}<\ell<i}C_\ell C_i \right) v_{\mu}
\otimes u\\
&&+ C_{i}F_{k}C_{i}((f_{kk}v_{\mu}) \otimes u) \\
&=&\left( (\mu_k+i-m_{k-1}-1)^{2}
- \left( \sum_{m_{k-1}<\ell<i}C_\ell C_i\right)^{2}\right) v_{\mu}\otimes u
+ (f^{2}_{kk}v_{\mu}) \otimes u \\
&=&\left( (\mu_k+i-m_{k-1}-1)^{2}
+(\mu_k+i-m_{k-1}-1)\right) v_{\mu}\otimes u. \end{eqnarray*} The last equality follows from the fact that in the Clifford algebra \[ \left( \sum_{m_{k-1}<\ell<i}C_\ell C_i\right)^{2} = \sum_{m_{k-1}<\ell<i}(C_\ell C_i)^{2} = \sum_{m_{k-1}<\ell<i} -1 = -(i-m_{k-1} -1) \] and that, in ${\mathfrak{q}} (n)$, $f_{kk}^2=e_{kk}$, so $f_{kk}^{2}v_{\mu}=\mu_{k}v_{\mu}$. \end{proof}
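The vanishing of the cross terms when squaring $\sum_{m_{k-1}<\ell<i}C_\ell C_i$ in the proof above follows from the Clifford relations $C_aC_b=-C_bC_a$ ($a\neq b$) and $C_a^2=-1$ alone: for $\ell\neq\ell'$,

```latex
\begin{aligned}
(C_{\ell} C_i)(C_{\ell'} C_i) &= -\,C_{\ell} C_{\ell'} C_i C_i
   = C_{\ell} C_{\ell'},\\
(C_{\ell} C_i)(C_{\ell'} C_i) + (C_{\ell'} C_i)(C_{\ell} C_i)
   &= C_{\ell} C_{\ell'} + C_{\ell'} C_{\ell} = 0,\\
(C_{\ell} C_i)^{2} &= -\,C_{\ell}^{2} C_i^{2} = -(-1)(-1) = -1.
\end{aligned}
```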
\begin{cor}\label{C:ImageisIntegral} Let $\lambda \in P^{++}$ be a dominant typical weight, let $\mu\in P$, and let $M(\mu)$ be a Verma module in $\mathcal{O}(\ensuremath{\mathfrak{q}} (n))$. Then for $i=1, \dotsc , d$ the element $X_{i}^{2}$ acts on $F_{\lambda}(M(\mu))$ with generalized eigenvalues of the form $q(a)$ for various $a \in {\mathbb{Z}}$. Hence, $F_{\lambda}(M(\mu))$ is integral. \end{cor}
As a consequence of the previous corollary we see that, for $\lambda \in P^{++}$, $F_{\lambda}\left(L(\mu) \right)$ is integral for any simple module $L(\mu)$ in $\mathcal{O}$ and, therefore, $F_\lambda$ defines a functor \[ F_\lambda:\mathcal{O}({\mathfrak{q}}(n))\rightarrow\operatorname{Rep}{\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d). \]
\begin{prp} Let $\lambda\in{P^{++}}$ and $\mu\in\lambda-{P_{\geq0}}(d)$. Then, $F_\lambda(\widehat{M}(\mu))\cong\widehat{{\mathcal{M}}}(\lambda,\mu)$. \end{prp}
\begin{proof} Let $v_+\in{\mathbb{C}}_\mu$ be a nonzero vector in the 1-dimensional ${\mathfrak{h}}_{\bar{0}}$-module ${\mathbb{C}}_\mu$, let $v_\mu=1\otimes v_+\in C(\mu)_{\bar{0}}$ be its image and let $u_{\lambda-\mu}$ be as in the previous lemma. Then $v_\mu\otimes u_{\lambda-\mu}$ is a cyclic vector for $F_\lambda(\widehat{M}(\mu))$ as an ${\mathcal{H}_{\Cl}^{\mathrm{aff}}} (d)$-module.
Recall the cyclic vector $\hat{{\mathbf{1}}}_{\lambda,\mu}\in\widehat{{\mathcal{M}}}(\lambda,\mu)$. For $\delta_1,\ldots,\delta_n\in\{0,1\}$, let $\varphi_1^{\delta_1}\cdots\varphi_n^{\delta_n}\hat{{\mathbf{1}}}_{\lambda,\mu}=1\otimes\varphi_1^{\delta_1}\hat{{\mathbf{1}}}\otimes\cdots\otimes\varphi_n^{\delta_n}\hat{{\mathbf{1}}}$, cf. \eqref{E:hatcyclicvector}.
Note that $w.(v_\mu\otimes u_{\lambda-\mu})=v_\mu\otimes u_{\lambda-\mu}$ for all $w\in S_{\lambda-\mu}$. Comparing Lemma \ref{X action} and Proposition \ref{segment representation}, we deduce that, by Frobenius reciprocity, there exists a surjective ${\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)$-homomorphism $\widehat{{\mathcal{M}}}(\lambda,\mu)\rightarrow F_\lambda(\widehat{M}(\mu))$ sending $\varphi_1^{\delta_1}\cdots\varphi_n^{\delta_n}\hat{{\mathbf{1}}}_{\lambda,\mu}\mapsto F_1^{\delta_1}\cdots F_n^{\delta_n}v_\mu\otimes u_{\lambda-\mu}$. That this is an isomorphism follows by comparing dimensions using Lemma~\ref{L:standard cyclic dim} and Corollary~\ref{C:StandardDim}. \end{proof}
\begin{cor}\label{C:Image of the little verma} We have \[ F_\lambda M(\mu)\cong{\mathcal{M}}(\lambda,\mu)^{\oplus 2^{\varpi(\mu)}} \] where \[ \varpi(\mu) =\begin{cases}\lfloor\frac{n+1}{2}\rfloor&\mbox{if }\gamma_0(\mu)\mbox{ is even,}\\\lfloor\frac{n}{2}\rfloor&\mbox{if }\gamma_0(\mu)\mbox{ is odd.}\end{cases} \] \end{cor}
\begin{proof} Using the additivity of the functor $F_{\lambda}$, the previous proposition, and Lemmas~\ref{L:little verma in big verma} and~\ref{L:standard cyclic dim} we obtain $F_\lambda M(\mu)\cong{\mathcal{M}}(\lambda,\mu)^{\oplus 2^{n-\lfloor\frac{\gamma_0(\mu)+1}{2}\rfloor -\lfloor\frac{n-\gamma_0(\mu)}{2}\rfloor}}$. It remains to observe that \[ n-\lfloor\frac{\gamma_0(\mu)+1}{2}\rfloor -\lfloor\frac{n-\gamma_0(\mu)}{2}\rfloor=\varpi(\mu). \] \end{proof}
\begin{lem}\label{L:IsoStdMod} Assume that $\lambda\in{P^{++}}$, $\mu\in P^+[\lambda]$, $\lambda-\mu\in{P_{\geq0}}(d)$, and $\alpha\in R^+[\lambda]$. Then, ${\mathcal{M}}(\lambda,\mu)\cong{\mathcal{M}}(\lambda,s_\alpha\mu)$. \end{lem}
\begin{proof} By Lemma \ref{L:InjHom}, there exists an injective homomorphism $M(s_\alpha\mu)\rightarrow M(\mu)$. Since $\varpi(\mu)=\varpi(s_\alpha\mu)$, there exists an injective homomorphism \[ {\mathcal{M}}(\lambda,s_\alpha\mu)^{\oplus 2^{\varpi(\mu)}}=F_\lambda M(s_\alpha\mu)\to F_\lambda M(\mu)= {\mathcal{M}}(\lambda,\mu)^{\oplus 2^{\varpi(\mu)}}. \] Since $\dim{\mathcal{M}}(\lambda,s_\alpha\mu)=\dim{\mathcal{M}}(\lambda,\mu)$ and by Theorem~\ref{thm:unique irred quotient} ${\mathcal{M}}(\lambda,\mu)$ is indecomposable, it follows that this map is an isomorphism. \end{proof}
\begin{thm}\label{T:MaxSubmod} Assume $\lambda\in{P^{++}}$ and $\mu\in\lambda-{P_{\geq0}}(d)$. Then, ${\mathcal{M}}(\lambda,\mu)$ has a unique maximal submodule $\mathcal{R}(\lambda,\mu)$ and unique irreducible quotient ${\mathcal{L}}(\lambda,\mu)$. \end{thm}
\begin{proof} There exists $w\in S_d[\lambda]$ such that $w\mu\in P^+[\lambda]$. By Lemma \ref{L:IsoStdMod}, ${\mathcal{M}}(\lambda,w\mu)\cong{\mathcal{M}}(\lambda,\mu)$. By Theorem \ref{thm:unique irred quotient}, ${\mathcal{M}}(\lambda,w\mu)$ has a unique maximal submodule and unique irreducible quotient, so the result follows. \end{proof}
Given $\mu\in P$, the Shapovalov form on $M(\mu)$ induces a non-degenerate ${\mathfrak{q}}(n)$-contravariant form on $L(\mu)$, which we will denote $(\cdot,\cdot)_\mu$. In turn we have a non-degenerate ${\mathfrak{q}}(n)$-contravariant form on $L(\mu)\otimes V^{\otimes d}$ given by $(\cdot,\cdot)_\mu\otimes(\cdot,\cdot)_{\varepsilon_1}^{\otimes d}$. Observe that different weight spaces are orthogonal with respect to this form and different blocks of $\mathcal{O}({\mathfrak{q}} (n))$ given by central characters are also orthogonal. Therefore, when $\lambda\in{P^{++}}$ is dominant and typical it follows that the bilinear form restricts to a form on $(L(\mu)\otimes V^{\otimes d})^{[\lambda]}_\lambda=F_\lambda(L(\mu))$, which is non-degenerate whenever it is nonzero. By Proposition \ref{P:when tau's collide}, this form is ${\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)$-contravariant.
Similarly, Proposition \ref{P:when tau's collide} implies that the Shapovalov form on $\widehat{M}(\mu)$ induces an ${\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)$-contravariant form on $\widehat{{\mathcal{M}}}(\lambda,\mu)$. Now, if $\lambda\in{P^{++}}$ and $\mu\in\lambda-{P_{\geq0}}(d)$, then by Theorem \ref{T:MaxSubmod}, $\widehat{{\mathcal{M}}}(\lambda,\mu)$ possesses a unique submodule $\widehat{\mathcal{R}}(\lambda,\mu)$ which is maximal among those which avoid the generalized $\zeta_{\lambda,\mu}$ weight space. Indeed, \[ \widehat{\mathcal{R}}(\lambda,\mu)=\mathcal{R}(\lambda,\mu)^{\oplus 2^{n-\lfloor\frac{\gamma_0(\mu)+1}{2}\rfloor}}. \]
\begin{prp}\label{P:ASeRadical} Assume that $\lambda\in{P^{++}}$, $\mu\in\lambda-{P_{\geq0}}(d)$, and $\widehat{{\mathcal{M}}}(\lambda,\mu)$ possesses a nonzero contravariant form $(\cdot,\cdot)$. Let $\mathcal{R}$ denote the radical of this form. Then, \[ \mathcal{R}\supseteq\widehat{\mathcal{R}}(\lambda,\mu). \] \end{prp}
\begin{proof} First, recall that $\widehat{{\mathcal{M}}}(\lambda,\mu)$ is cyclically generated by $\hat{{\mathbf{1}}}_{\lambda,\mu}\in\widehat{{\mathcal{M}}}(\lambda,\mu)_{\zeta_{\lambda,\mu}}$. Now, assume $v\in\widehat{\mathcal{R}}(\lambda,\mu)$ and $v'\in\widehat{{\mathcal{M}}}(\lambda,\mu)$. Then, $v'=X.\hat{{\mathbf{1}}}_{\lambda,\mu}$ for some $X\in{\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)$. Moreover, $\tau(X).v\in\widehat{\mathcal{R}}(\lambda,\mu)$. Applying Lemma \ref{L:ASeContraForm} and the definition of $\widehat{\mathcal{R}}(\lambda,\mu)$ we deduce that \[ (v',v)=(X.\hat{{\mathbf{1}}}_{\lambda,\mu},v)=(\hat{{\mathbf{1}}}_{\lambda,\mu},\tau(X).v)=0. \] Hence, $v\in\mathcal{R}$. \end{proof}
\begin{cor}\label{C:ASeRadical} Given $\lambda\in{P^{++}}$ and $\mu\in\lambda-{P_{\geq0}}(d)$, \[ \mathcal{R}=\mathcal{R}(\lambda,\mu)^{\oplus k}\oplus{\mathcal{M}}(\lambda,\mu)^{\oplus 2^{n-\lfloor\frac{\gamma_0(\mu)+1}{2}\rfloor}-k} \] for some $0\leq k\leq2^{n-\lfloor\frac{\gamma_0(\mu)+1}{2}\rfloor}$. \end{cor}
\begin{thm}\label{T:SimplesToSimples} Assume $\lambda\in{P^{++}}$, and $\mu\in\lambda-{P_{\geq0}}(d)$. If $F_\lambda L(\mu)$ is nonzero, then \[ F_\lambda L(\mu)\cong{\mathcal{L}}(\lambda,\mu)^{\oplus\ell} \] for some $0<\ell\leq 2^{\varpi(\mu)}$. \end{thm}
\begin{proof} Let $\widehat{L}(\mu)=L(\mu)^{\oplus 2^{\lfloor\frac{n-\gamma_0(\mu)+1}{2}\rfloor}}$, so that $\widehat{L}(\mu)=\widehat{M}(\mu)/\widehat{R}(\mu)$ where $\widehat{R}(\mu)$ is the radical of the Shapovalov form on $\widehat{M}(\mu)$. Applying the functor, we see that \[ F_\lambda \widehat{L}(\mu)=\widehat{{\mathcal{M}}}(\lambda,\mu)/ F_\lambda\widehat{R}(\mu). \] Now, $F_\lambda\widehat{R}(\mu)=\mathcal{R}$. Hence, Corollary \ref{C:ASeRadical} and a calculation similar to that of Corollary \ref{C:Image of the little verma} give the result. \end{proof}
\begin{prp}\cite[Proposition 18.18.1]{kl} Any finite dimensional irreducible ${\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)$-module is a composition factor of ${\mathcal{M}}(\lambda,\lambda-\varepsilon)$ for some $\lambda\in{P^{++}}$. \end{prp}
\begin{thm} Any finite dimensional simple module for ${\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)$ is isomorphic to ${\mathcal{L}}(\lambda,\mu)$ for some $\lambda\in{P^{++}}$ and $\mu\in(\lambda-\varepsilon)-Q^+$. \end{thm}
\begin{proof} The functor $F_\lambda$ transforms a composition series for $M(\lambda-\varepsilon)$ into a composition series for ${\mathcal{M}}(\lambda,\lambda-\varepsilon)$. It remains to observe that if $L(\mu)$ is a composition factor of $M(\lambda-\varepsilon)$, then $\mu\in(\lambda-\varepsilon)- Q^+$. \end{proof}
\subsection{Calibrated Representations Revisited}
\begin{thm} If $\lambda,\mu\in{P_{\mathrm{poly}}^+}$ satisfy $\lambda-\mu\in{P_{\geq0}}(d)$, then $F_\lambda(L(\mu)) \neq 0$ and hence one has a simple module ${\mathcal{L}} (\lambda, \mu)$. \end{thm}
\begin{proof} The formal character of $L(\mu)$ when $\mu\in{P_{\mathrm{poly}}^+}$ is given by the $Q$-Schur function $Q_\mu$ (cf.\ \cite{s}). There is a nondegenerate bilinear form $(\cdot,\cdot)_{{P_{\mathrm{poly}}^+}}$ on the subring of symmetric functions spanned by Schur's $Q$-functions given by \[ \left(Q_{\lambda}, Q_{\mu} \right)_{{P_{\mathrm{poly}}^+}} = \dim\operatorname{Hom}_{\ensuremath{\mathfrak{q}} (n)}\left(L(\lambda), L(\mu) \right). \] Furthermore, the basis $Q_\mu$ ($\mu\in{P_{\mathrm{poly}}^+}$) is an orthogonal basis. Within this subring are the skew $Q$-Schur functions $Q_{\lambda/\mu}$. We refer the reader to \cite{stem, m} for details.
Under the hypotheses of the theorem, $\lambda/\mu$ is a skew shape. Moreover, $F_\lambda L(\mu)=0$ implies that \begin{eqnarray}\label{E:PolynomialRep} 0=\operatorname{Hom}_{\ensuremath{\mathfrak{q}} (n)} \left(L(\lambda),L(\mu)\otimes V^{\otimes d} \right)=\bigoplus_{\nu\in{P_{\mathrm{poly}}^+}(d)}\operatorname{Hom}_{\ensuremath{\mathfrak{q}} (n)}(L(\lambda),L(\mu)\otimes L(\nu))^{\oplus N_\nu}. \end{eqnarray} The second equality follows from Sergeev duality which implies that as a ${\mathfrak{q}}(n)$-module \[ V^{\otimes d}=\bigoplus_{\nu\in{P_{\mathrm{poly}}^+}(d)}L(\nu)^{\oplus N_\nu}, \] where $N_\nu$ is the dimension of the Specht module of ${\mathcal{S}}(d)$ corresponding to $\nu$ \cite{s2}.
In terms of the bilinear form on symmetric functions, \eqref{E:PolynomialRep} implies \begin{equation}\label{E:perp} 0 = \left(Q_{\lambda}, Q_{\mu}Q_{\nu} \right) \end{equation} for all $\nu \in {P_{\mathrm{poly}}^+} (d)$. In fact \eqref{E:perp} holds for all $\nu \in {P_{\mathrm{poly}}^+}$ since different graded summands of the symmetric function ring are orthogonal. However, \[
\left(Q_{\lambda}, Q_{\mu}Q_{\nu} \right) = \left(Q_{\mu}^{\bot}Q_{\lambda}, Q_{\nu} \right) = 2^{\ell(\mu)}\left( Q_{\lambda/\mu}, Q_{\nu}\right), \] where $Q_{\mu}^{\bot}$ denotes the adjoint of $Q_{\mu}$ with respect to the form and the second equality follows from $Q_{\mu}^{\bot}Q_{\lambda}= 2^{\ell(\mu)} Q_{\lambda/\mu}$ (cf.\ \cite[II.8]{m}). Thus, \eqref{E:PolynomialRep} implies that \[ (Q_{\lambda/\mu},Q_\nu)=0 \] for all $\nu\in{P_{\mathrm{poly}}^+}$. But the $Q$-functions form an orthogonal basis for this subring. This implies $Q_{\lambda/\mu}=0$, which is false since $\lambda/\mu$ is a skew shape and so $Q_{\lambda/\mu}\neq0$. Hence, $F_{\lambda}L(\mu) \neq 0$. \end{proof}
Arguing as in Section 7 of \cite{su2} using Sergeev duality \cite{s,s2} we obtain the following result.
\begin{cor} Let $\lambda,\mu\in{P_{\mathrm{poly}}^+}$ be such that $\lambda-\mu\in {P_{\geq0}}(d)$. Then the group character of ${\mathcal{L}}(\lambda,\mu)\downarrow_{{\mathcal{S}}(d)}$ is a power of $2$ times the skew $Q$-Schur function $Q_{\lambda/\mu}$. \end{cor}
\section{A Classification of Simple Modules}\label{S:Classification}
In \cite{bk2,kl}, it was shown that the Grothendieck group of finite dimensional integral representations of ${\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)$ is a module for the Kostant-Tits ${\mathbb{Z}}$-form of the Kac-Moody Lie algebra ${\mathfrak{b}}_\infty$. Indeed, let ${\mathfrak{n}}_\infty$ be a maximal nilpotent subalgebra of ${\mathfrak{b}}_\infty$, and let ${\mathcal{U}}_{\mathbb{Z}}^*({\mathfrak{n}}_\infty)$ be the \emph{minimal} admissible lattice inside the universal envelope of ${\mathfrak{n}}_\infty$. This lattice is spanned by Lusztig's dual canonical basis. We have:
\begin{thm}\cite[Theorem 20.5.2]{kl} There is an isomorphism of graded Hopf algebras \[ {\mathcal{U}}_{\mathbb{Z}}^*({\mathfrak{n}}^+_\infty)\cong\bigoplus_{d\geq 0}K(\operatorname{Rep}{\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)). \] \end{thm}
and,
\begin{thm}\cite[Theorem 21.0.4]{kl} The set $B(\infty)$ of isomorphism classes of simple ${\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)$-modules, for all $d$, can be given the structure of a crystal (in the sense of Kashiwara). Moreover, this crystal is isomorphic to Kashiwara's crystal associated to the crystal base of ${\mathcal{U}}_{\mathbb{Q}}({\mathfrak{n}}_\infty)$. \end{thm}
\subsection{Quantum Groups and Shuffle Algebras}\label{SS:ShuffleAlg} Let ${\mathfrak{b}}_r$ be the simple finite dimensional Lie algebra of type $B_r$ over ${\mathbb{C}}$, and ${\mathcal{U}}_q({\mathfrak{b}}_r)$ the associated quantum group with Chevalley generators $e_i,f_i$ ($i=0,\ldots,r-1$) corresponding to the labeling of the Dynkin diagram:
\begin{center} \begin{picture}(340,30)
\put(100,15){\circle{4}}\put(99,0){$0$}
\put(100,17){\line(1,0){32}}\put(100,13){\line(1,0){32}}\put(113,12.5){$<$} \put(133,15){\circle{4}}\put(132,0){$1$}
\put(135,15){\line(1,0){30}} \put(167,15){\circle{4}}\put(166,0){$2$}
\put(169,15){\line(1,0){30}} \put(201,15){\circle{4}}\put(200,0){$3$}
\put(210,12){$\cdots$} \put(235,15){\circle{4}}\put(228,0){$r-2$}
\put(237,15){\line(1,0){30}} \put(269,15){\circle{4}}\put(265,0){$r-1$}
\end{picture} \end{center}
Fix a triangular decomposition ${\mathfrak{b}}_r={\mathfrak{n}}^+_r\oplus{\mathfrak{h}}_r\oplus{\mathfrak{n}}^-_r$. Let $\Delta$ be the root system of ${\mathfrak{b}}_r$ relative to this decomposition, $\Delta^+$ the positive roots, and $\Pi=\{\beta_0,\ldots,\beta_{r-1}\}$ the simple roots. Let $\mathcal{Q}$ be the root lattice and $\mathcal{Q}^+=\sum_{i=0}^{r-1}{\mathbb{Z}}_{\geq 0}\beta_i$. Finally, let $(\cdot,\cdot)$ denote the trace form on ${\mathfrak{h}}_r^*$. The Cartan matrix of ${\mathfrak{b}}_r$ is then $A=(a_{ij})_{i,j=0}^{r-1}$, where \[ a_{ij}=\frac{2(\beta_i,\beta_j)}{(\beta_i,\beta_i)},\;\;\; d_i=\frac{(\beta_i,\beta_i)}2\in\{1,2\}. \] Let $q_i=q^{d_i}$. To avoid confusion with notation we will use later, we adopt the following non-standard notation for $q$-integers and $q$-factorials: \[ (k)_i=\frac{q_i^k-q_i^{-k}}{q_i-q_i^{-1}}, \qquad (k)_i!=(1)_i(2)_i\cdots(k)_i. \]
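The $q$-integers above expand as Laurent polynomials, $(k)_i=q_i^{k-1}+q_i^{k-3}+\cdots+q_i^{1-k}$, specializing to $k$ at $q_i=1$. A quick numerical sanity check of this expansion (ours, for illustration only):

```python
def q_int(k, q):
    """(k)_i expanded as the Laurent polynomial q^{k-1} + q^{k-3} + ... + q^{1-k}."""
    return sum(q ** (k - 1 - 2 * m) for m in range(k))

# The expansion agrees with the defining ratio (q^k - q^{-k})/(q - q^{-1})
# at a generic numerical value of q, and specializes to k at q = 1.
q = 1.7
assert abs(q_int(4, q) - (q ** 4 - q ** -4) / (q - q ** -1)) < 1e-9
assert q_int(5, 1.0) == 5.0
```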
The algebra ${\mathcal{U}}_q={\mathcal{U}}_q({\mathfrak{n}}^+_r)$ is naturally $\mathcal{Q}^+$-graded by assigning to $e_i$ the degree $\beta_i$. Let $|u|$ be the ${\mathcal{Q}}^+$-degree of a homogeneous element $u\in{\mathcal{U}}_q({\mathfrak{n}}^+_r)$.
There exist $q$-derivations $e_i'$, $i=0,\ldots,r-1$ given by \[
e_i'(e_j)=\delta_{ij}\,\,\,\,\,\, {\mbox{and}} \,\,\,\,\,\, e_i'(uv)=e_i'(u)v+q^{(\beta_i,|u|)}ue_i'(v) \] for all homogeneous $u,v\in{\mathcal{U}}_q$.
Now, let ${\mathcal{F}}$ be the free associative algebra over ${\mathbb{Q}}(q)$ generated by the set of letters $\{[0],\ldots,[r-1]\}$. Write $[i_1,\ldots,i_k]:=[i_1]\cdot[i_2]\cdots[i_k]$, and let $[]$ denote the empty word. The algebra ${\mathcal{F}}$ is ${\mathcal{Q}}^+$-graded by assigning the degree $\beta_i$ to
$[i]$ (as before, let $|f|$ denote the ${\mathcal{Q}}^+$-degree of a homogeneous $f\in{\mathcal{F}}$). Notice that ${\mathcal{F}}$ also has a \emph{principal grading} obtained by setting the degree of a letter $[i]$ to be 1; let ${\mathcal{F}}_d$ be the $d$th graded component in this grading.
Now, define the (quantum) shuffle product, $*$, on ${\mathcal{F}}$ inductively by \begin{align}\label{E:inductiveqshuffle}
(x\cdot[i])*(y\cdot[j])=(x*(y\cdot[j]))\cdot[i]+q^{-(|x|+\beta_i,\beta_j)}((x\cdot[i])*y)\cdot[j],\;\;\;x*[]=[]*x=x. \end{align} Iterating this formula yields \[ [i_1,\ldots,i_\ell]*[i_{\ell+1},\ldots,i_{\ell+k}] =\sum_{w\in D_{(\ell,k)}}q^{-e(w)}[i_{w(1)},\ldots,i_{w(k+\ell)}] \] where \[ e(w)=\sum_{\substack{s\leq\ell<t\\w(s)<w(t)}}(\beta_{i_{w(s)}},\beta_{i_{w(t)}}), \] see \cite[$\S2.5$]{lec} for details. The product $*$ is associative and, \cite[Proposition 1]{lec}, \begin{eqnarray}\label{E:qShuffle}
x*y=q^{-(|x|,|y|)}y\overline{*}x \end{eqnarray} where $\overline{*}$ is obtained by replacing $q$ with $q^{-1}$ in the definition of $*$.
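The inductive rule \eqref{E:inductiveqshuffle} is mechanical enough to implement symbolically. In the sketch below (ours, for illustration), words are tuples of letter indices and coefficients are Laurent polynomials in $q$ stored as $\{\text{exponent}:\text{coefficient}\}$ dictionaries; the symmetric pairing $(\beta_i,\beta_j)$ must be supplied as a matrix `B`, whose entries depend on the chosen normalization of the trace form:

```python
from collections import defaultdict

def shuffle(x, y, B):
    """Quantum shuffle x * y from the inductive rule
    (x.[i]) * (y.[j]) = (x * (y.[j])).[i]
                        + q^{-(|x|+beta_i, beta_j)} ((x.[i]) * y).[j],
    where B[a][b] encodes the pairing (beta_a, beta_b)."""
    if not x:
        return {y: {0: 1}}
    if not y:
        return {x: {0: 1}}
    i, j = x[-1], y[-1]
    out = defaultdict(lambda: defaultdict(int))
    # first term: shuffle all but the last letter of x, then append i
    for w, poly in shuffle(x[:-1], y, B).items():
        for e, c in poly.items():
            out[w + (i,)][e] += c
    # second term: exponent -(|x[:-1]| + beta_i, beta_j) = -(|x|, beta_j)
    shift = -sum(B[a][j] for a in x)
    for w, poly in shuffle(x, y[:-1], B).items():
        for e, c in poly.items():
            out[w + (j,)][e + shift] += c
    return {w: dict(p) for w, p in out.items()}
```

For example, with a pairing satisfying $(\beta_1,\beta_2)=-1$, `shuffle((1,), (2,), B)` returns `{(2, 1): {0: 1}, (1, 2): {1: 1}}`, i.e.\ $[1]*[2]=[2,1]+q\,[1,2]$, matching the iterated formula.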
Now, to $f=[i_1,\ldots,i_k]\in{\mathcal{F}}$, associate $\partial_f=e_{i_1}'\cdots e_{i_k}'\in\operatorname{End} {\mathcal{U}}_q$, and $\partial_{[]}=\operatorname{Id}_{{\mathcal{U}}_q}$. Then,
\begin{prp}\cite{ro1,ro2,grn} There exists an injective ${\mathbb{Q}}(q)$-linear homomorphism \[ \Psi:{\mathcal{U}}_q\rightarrow({\mathcal{F}},*) \]
defined on homogeneous $u\in{\mathcal{U}}_q$ by the formula $\Psi(u)=\sum\partial_f(u)f$, where the sum is over all monomials $f\in{\mathcal{F}}$ such that $|f|=|u|$. \end{prp}
Therefore ${\mathcal{U}}_q$ is isomorphic to the subalgebra ${\mathcal{W}}\subseteq({\mathcal{F}},*)$ generated by the letters $[i]$, $0\leq i<r$.
Let ${\mathcal{A}}={\mathbb{Q}}[q,q^{-1}]$, and let ${\mathcal{U}}_{\mathcal{A}}$ denote the ${\mathcal{A}}$-subalgebra of ${\mathcal{U}}_q$ generated by the divided powers $e_i^k/(k)_i!$ ($0\leq i<r$, $k\in{\mathbb{Z}}_{\geq0}$). Let $(\cdot,\cdot)_K:{\mathcal{U}}_q\times{\mathcal{U}}_q\rightarrow{\mathbb{Q}}(q)$ denote the unique symmetric bilinear form satisfying \[ (1,1)_K=1\,\,\,\,\,\, {\mbox{and}} \,\,\,\,\,\,(e_i'(u),v)_K=(u,e_iv)_K \] for all $0\leq i<r$, and $u,v\in{\mathcal{U}}_q$. Let \begin{align}\label{E:DualEnvelope} {\mathcal{U}}_{\mathcal{A}}^*=\{\,u\in{\mathcal{U}}_q \mid (u,v)_K\in{\mathcal{A}}\mbox{ for all }v\in{\mathcal{U}}_{\mathcal{A}}\,\} \end{align} and let $u^*\in{\mathcal{U}}_{\mathcal{A}}^*$ denote the dual to $u\in{\mathcal{U}}_{\mathcal{A}}$ relative to $(\cdot,\cdot)_K$.
Now, given a monomial \[ [i_1^{a_1},i_2^{a_2},\ldots,i_k^{a_k}]
=[\underbrace{i_1,\ldots,i_1}_{a_1},\underbrace{i_2,\ldots,i_2}_{a_2},
\ldots,\underbrace{i_k,\ldots,i_k}_{a_k}] \]
with $i_j\neq i_{j+1}$ for $1\leq j<k$, let
$c_{i_1,\ldots,i_k}^{a_1,\ldots,a_k}=(a_1)_{i_1}!\cdots(a_k)_{i_k}!$, so that
$(c_{i_1,\ldots,i_k}^{a_1,\ldots,a_k})^{-1}e_{i_1}^{a_1}\cdots e_{i_k}^{a_k}$ is a
product of divided powers. Let \[ {\mathcal{F}}_{\mathcal{A}}=\bigoplus{\mathcal{A}} c_{i_1,\ldots,i_k}^{a_1,\ldots,a_k} [i_1^{a_1},i_2^{a_2},\ldots,i_k^{a_k}] \] and ${\mathcal{W}}^*_{\mathcal{A}}={\mathcal{W}}\cap{\mathcal{F}}_{\mathcal{A}}$. It is known that ${\mathcal{W}}_{\mathcal{A}}^*=\Psi({\mathcal{U}}_{\mathcal{A}}^*)$, \cite[Lemma 8]{lec}.
Define \[ {\mathcal{F}}_{\mathbb{C}}={\mathbb{C}}\otimes_{\mathcal{A}}{\mathcal{F}}_{\mathcal{A}}, \,\,\,\,\,\, {\mbox{and}} \,\,\,\,\,\,{\mathcal{W}}_{\mathbb{C}}^*={\mathbb{C}}\otimes_{\mathcal{A}}{\mathcal{W}}_{\mathcal{A}}^* \] where ${\mathbb{C}}$ is an ${\mathcal{A}}$-module via $q\rightarrow 1$. Given an element $E\in{\mathcal{W}}^*_{\mathcal{A}}$ (resp. ${\mathcal{F}}_{\mathcal{A}}$) let $\underline{E}$ denote its image in ${\mathcal{W}}^*_{\mathbb{C}}$ (resp. ${\mathcal{F}}_{\mathbb{C}}$).
Observe that $({\mathcal{F}}_{\mathbb{C}},*)$ is the classical shuffle algebra and the shuffle product coincides with the formula for the characters associated to parabolic induction of ${\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)$-modules (see Lemma \ref{L:ShuffleLemma}).
We close this section by describing the bar involution on ${\mathcal{F}}$:
\begin{dfn}\label{D:BarInv}\cite[Proposition 6]{lec} Let $-:{\mathcal{F}}\rightarrow{\mathcal{F}}$ be the ${\mathbb{Q}}$-linear automorphism of $({\mathcal{F}},*)$ defined by $\bar{q}=q^{-1}$ and \[ \overline{[i_1,\ldots,i_k]} =q^{-\sum_{1\leq s<t\leq k}(\beta_{i_s},\beta_{i_t})}[i_k,\ldots,i_1]. \] \end{dfn}
\subsection{Good Words and Lyndon Words}\label{SS:LyndonWords} In what follows, it is convenient to differ from the conventions in \cite{lec}. In particular, it is natural from our point of view to order monomials in ${\mathcal{F}}$ lexicographically reading from \emph{right to left}. Unlike the type $A$ case, this convention leads to some significant differences in the good Lyndon words that appear. This section contains a careful explanation of all the changes that occur.
Fix the ordering on the set of letters in ${\mathcal{F}}$ (resp. $\Pi$): $[0]<[1]<\cdots<[r-1]<[]$ (resp. $\beta_0<\beta_1<\cdots<\beta_{r-1}$). Give the set of monomials in ${\mathcal{F}}$ the associated lexicographic order read from right to left. That is, \[ [i_1,\ldots,i_k]<[j_1,\ldots,j_\ell]\mbox{ if }i_k<j_\ell,\mbox{ or for some }m, i_{k-m}<j_{\ell-m}\mbox{ and }i_{k-s}=j_{\ell-s}\mbox{ for all }s<m. \] Note that since the empty word is larger than any letter, every word is smaller than all of its right factors: \begin{align}\label{E:rightfactors} [i_1,\ldots,i_k]<[i_j,\ldots,i_k],\mbox{ for all }1<j\leq k. \end{align} (For those familiar with the theory, this definition is needed to ensure that the induced Lyndon ordering on positive roots is convex, cf. $\S$\ref{SS:PBWandCanonical} below.)
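Since this right-to-left order with the empty word largest is easy to get wrong, here is a small executable version of the comparator (ours, for illustration), together with a brute-force check of \eqref{E:rightfactors}:

```python
from itertools import product

def word_less(u, v):
    """Right-to-left lexicographic order on words (tuples of letters),
    with the empty word larger than any letter."""
    for a, b in zip(u[::-1], v[::-1]):
        if a != b:
            return a < b
    # One reversed word is a prefix of the other; the shorter word "ran out"
    # into the empty word, which is largest, so the longer word is smaller.
    return len(u) > len(v)

# Every word is smaller than each of its proper right factors.
for k in range(2, 5):
    for w in product(range(3), repeat=k):
        assert all(word_less(w, w[j:]) for j in range(1, k))
```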
For a homogeneous element $f\in{\mathcal{F}}$, let $\min(f)$ be the smallest monomial occurring in the expansion of $f$. A monomial $[i_1,\ldots,i_k]$ is called a \emph{good word} if there exists a homogeneous $w\in{\mathcal{W}}$ such that $[i_1,\ldots,i_k]=\min(w)$, and is called a \emph{Lyndon word} if it is larger than any of its proper left factors: \[ [i_1,\ldots,i_j]<[i_1,\ldots,i_k],\mbox{ for any }1\leq j<k. \] Let $\mathcal{G}$ denote the set of good words, ${\mathcal{L}}$ the set of Lyndon words, and $\mathcal{GL}={\mathcal{L}}\cap\mathcal{G}\subset\mathcal{G}$ the set of good Lyndon words.
\begin{lem}\label{L:GoodFactors}\cite[Lemma 13]{lec} Every factor of a good word is good. \end{lem}
Because of our ordering conventions, \cite[Lemma 15, Proposition 16]{lec} become
\begin{lem}\cite[Lemma 15]{lec} Let $l\in{\mathcal{L}}$, $w$ a monomial such that $w\geq l$. Then, $\min(w*l)=wl$. \end{lem} \noindent and \begin{prp}\label{P:GLproduct}\cite[Proposition 16]{lec} Let $l\in\mathcal{GL}$, and $g\in\mathcal{G}$ with $g\geq l$. Then $gl\in\mathcal{G}$. \end{prp}
Hence, from Lemma \ref{L:GoodFactors} and Proposition \ref{P:GLproduct} we deduce the analogue of \cite[Proposition 17]{lec}:
\begin{prp}\cite{lr,lec} A monomial $g$ is a good word if, and only if, there exist good Lyndon words $l_1\geq\ldots\geq l_k$ such that \[ g=l_1l_2\cdots l_k. \] \end{prp}
As in \cite{lec}, we have
\begin{prp}\cite{lr,lec} The map $l\rightarrow|l|$ is a bijection $\mathcal{GL}\rightarrow\Delta^+$. \end{prp}
Given $\gamma\in\Delta^+$, let $\gamma\rightarrow l(\gamma)$ be the inverse of the above bijection (called the Lyndon covering of $\Delta^+$).
We now define the \emph{bracketing} of Lyndon words, which gives rise to the \emph{Lyndon basis} of ${\mathcal{W}}$. To this end, given $l\in{\mathcal{L}}$ such that $l$ is not a letter, define the standard factorization of $l$ to be $l=l_1l_2$ where $l_2\in{\mathcal{L}}$ is a proper right factor of maximal length. Define the $q$-bracket \begin{align}\label{E:qbracket}
[f_1,f_2]_q=f_1f_2-q^{(|f_1|,|f_2|)}f_2f_1 \end{align} for homogeneous $f_1,f_2\in{\mathcal{F}}$ in the ${\mathcal{Q}}^+$-grading. Then, the bracketing ${\langle} l{\rangle}$ of $l\in{\mathcal{L}}$ is defined inductively by ${\langle} l{\rangle}=l$ if $l$ is a letter, and \begin{align}\label{E:Lyndonbracketing} {\langle} l{\rangle}=[{\langle} l_1{\rangle},{\langle} l_2{\rangle}]_q \end{align} if $l=l_1l_2$ is the standard factorization of $l$.
\begin{exa} (1) ${\langle} [0]{\rangle}=[0]$;
\noindent(2) ${\langle} [12]{\rangle}=[[1],[2]]_q=[12]-q^{-1}[21]$;
\noindent(3) ${\langle}[012]{\rangle}=[[0],[12]-q^{-1}[21]]_q=[012]-q^{-1}[021]-q^{-2}[120]+q^{-3}[210]$. \end{exa}
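The bracketing is mechanical enough to verify Example (3) by machine. In the sketch below (ours), words are tuples, coefficients are Laurent polynomials in $q$ stored as $\{\text{exponent}:\text{coefficient}\}$ dictionaries, and the matrix `B` encodes pairing values $(\beta_0,\beta_1)=-2$, $(\beta_1,\beta_2)=-1$, $(\beta_0,\beta_2)=0$, which we inferred so as to reproduce the example; they are an assumption about the normalization, not a statement from the text:

```python
def word_less(u, v):
    # right-to-left lexicographic order on tuples, empty word largest
    for a, b in zip(u[::-1], v[::-1]):
        if a != b:
            return a < b
    return len(u) > len(v)

def is_lyndon(w):
    # Lyndon: strictly larger than every proper left factor
    return all(word_less(w[:j], w) for j in range(1, len(w)))

def q_bracket(P1, P2, e):
    """[P1, P2]_q = P1 P2 - q^e P2 P1, with concatenation as the product.
    Polynomials are {word: {q-exponent: coefficient}} dictionaries."""
    out = {}
    def accumulate(Pa, Pb, sign, shift):
        for w1, p1 in Pa.items():
            for w2, p2 in Pb.items():
                d = out.setdefault(w1 + w2, {})
                for e1, c1 in p1.items():
                    for e2, c2 in p2.items():
                        k = e1 + e2 + shift
                        d[k] = d.get(k, 0) + sign * c1 * c2
    accumulate(P1, P2, 1, 0)
    accumulate(P2, P1, -1, e)
    return out

def bracketing(l, B):
    """<l> for a Lyndon word l, taking the standard factorization l = l1 l2
    with l2 the longest proper right factor that is Lyndon."""
    if len(l) == 1:
        return {l: {0: 1}}
    j = next(j for j in range(1, len(l)) if is_lyndon(l[j:]))
    l1, l2 = l[:j], l[j:]
    e = sum(B[a][b] for a in l1 for b in l2)  # the pairing (|l1|, |l2|)
    return q_bracket(bracketing(l1, B), bracketing(l2, B), e)

# Assumed pairing values, chosen to reproduce the example; they are NOT
# taken from the text and depend on the normalization of the trace form.
B = {0: {0: 4, 1: -2, 2: 0},
     1: {0: -2, 1: 2, 2: -1},
     2: {0: 0, 1: -1, 2: 2}}
```

With this pairing, `bracketing((0, 1, 2), B)` returns the four terms of ${\langle}[012]{\rangle}$ with coefficients $1$, $-q^{-1}$, $-q^{-2}$, $q^{-3}$, as in Example (3).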
As is suggested in this example, we have
\begin{prp}\label{P:bracketingtriangularity}\cite[Proposition 19]{lec} For $l\in{\mathcal{L}}$, ${\langle} l{\rangle}=l+r$ where $r$ is a linear combination of words $w$ such that $|w|=|l|$ and $w<l$. \end{prp}
Any word $w\in{\mathcal{F}}$ has a canonical factorization $w=l_1\cdots l_k$ such that $l_1,\ldots,l_k\in{\mathcal{L}}$ and $l_1\geq\cdots\geq l_k$. We define the bracketing of an arbitrary word $w$ in terms of this factorization: ${\langle} w{\rangle}={\langle} l_1{\rangle}\cdots{\langle} l_k{\rangle}$. Define a homomorphism $\Xi:({\mathcal{F}},\cdot)\to({\mathcal{F}},*)$ by $\Xi([i])=[i]$. Then, $\Xi([i_1,\ldots,i_k])=[i_1]*\cdots*[i_k]=\Psi(e_{i_1}\cdots e_{i_k})$. In particular, $\Xi({\mathcal{F}})={\mathcal{W}}$. We have the following characterization of good words:
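The existence and uniqueness of the canonical factorization can be checked by brute force for short words. The sketch below (ours) verifies, for all words of length at most $5$ over three letters, that exactly one factorization into a nonincreasing product of Lyndon words exists:

```python
from itertools import product

def word_less(u, v):
    # right-to-left lexicographic order on tuples, empty word largest
    for a, b in zip(u[::-1], v[::-1]):
        if a != b:
            return a < b
    return len(u) > len(v)

def is_lyndon(w):
    # Lyndon: strictly larger than every proper left factor
    return all(word_less(w[:j], w) for j in range(1, len(w)))

def nonincreasing_lyndon_factorizations(w):
    """All ways to write w = l_1 ... l_k with each l_m Lyndon and
    l_1 >= ... >= l_k; canonically there should be exactly one."""
    results = []
    def rec(rest, acc):
        if not rest:
            results.append(acc)
            return
        for j in range(1, len(rest) + 1):
            piece = rest[:j]
            if is_lyndon(piece) and (not acc or not word_less(acc[-1], piece)):
                rec(rest[j:], acc + [piece])
    rec(w, [])
    return results

# Each short word over {0, 1, 2} admits exactly one such factorization.
for k in range(1, 6):
    for w in product(range(3), repeat=k):
        assert len(nonincreasing_lyndon_factorizations(w)) == 1
```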
\begin{lem}\cite[Lemma 21]{lec} The word $w$ is good if and only if it cannot be expressed modulo $\ker\Xi$ as a linear combination of words $v<w$. \end{lem}
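The canonical factorization used above, writing every word as a nonincreasing product of Lyndon words, is the classical Chen--Fox--Lyndon factorization, and it can be computed in linear time by Duval's algorithm. A small illustrative sketch (not part of the formal development; letters are compared in the standard lexicographic order and words are encoded as strings):

```python
def lyndon_factorization(w):
    """Duval's algorithm: the unique factorization of w into a
    nonincreasing product of Lyndon words (Chen-Fox-Lyndon)."""
    factors, i, n = [], 0, len(w)
    while i < n:
        j, k = i + 1, i
        # Extend while w[i:j+1] remains a prefix of a power of a Lyndon word.
        while j < n and w[k] <= w[j]:
            k = i if w[k] < w[j] else k + 1
            j += 1
        # Emit copies of the Lyndon word of length j - k.
        while i <= k:
            factors.append(w[i:i + j - k])
            i += j - k
    return factors
```

For instance, the word $[2,1,0,1]$ factors as $[2]\cdot[1]\cdot[0,1]$, a nonincreasing product of Lyndon words.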
For $g\in\mathcal{G}$, set $r_g=\Xi({\langle} g{\rangle})$. Then, we have
\begin{thm}\label{T:Lyndonbasis}\cite[Proposition 22, Theorem 23]{lec} Let $g\in\mathcal{G}$ and $g=l_1\cdots l_k$ be the canonical factorization of $g$ as a nonincreasing product of good Lyndon words. Then \begin{enumerate} \item $r_g=r_{l_1}*\cdots*r_{l_k}$, \item $r_g=\Psi(e_g)+\sum_{w<g}x_{gw}\Psi(e_w)$ where, for a word $v=[i_1,\ldots,i_k]$, $e_v=e_{i_1}\cdots e_{i_k}$, and
\item $\{r_g|g\in\mathcal{G}\}$ is a basis for ${\mathcal{W}}$. \end{enumerate} \end{thm}
The basis $\{r_g\mid g\in\mathcal{G}\}$ is called the Lyndon basis of ${\mathcal{W}}$. An immediate consequence of Proposition \ref{P:bracketingtriangularity} and Theorem \ref{T:Lyndonbasis} is the following:
\begin{prp}\label{P:LyndonCoveringProperty}\cite[Proposition 24]{lec} Assume $\gamma_1,\gamma_2\in \Delta^+$, $\gamma_1+\gamma_2=\gamma\in\Delta^+$, and $l(\gamma_1)<l(\gamma_2)$. Then, $l(\gamma_1)l(\gamma_2)\geq l(\gamma)$. \end{prp}
This gives an inductive algorithm to determine $l(\gamma)$ for $\gamma\in\Delta^+$ (cf. \cite[$\S4.3$]{lec}):
For $\beta_i\in\Pi\subset\Delta^+$, $l(\beta_i)=[i]$. If $\gamma$ is not a simple root, then there exists a factorization $l(\gamma)=l_1l_2$ with $l_1,l_2$ Lyndon words. By Lemma \ref{L:GoodFactors}, $l_1$ and $l_2$ are good, so $l_1=l(\gamma_1)$ and $l_2=l(\gamma_2)$ for some $\gamma_1,\gamma_2\in\Delta^+$ with $\gamma_1+\gamma_2=\gamma$. Assume that we know $l(\gamma_0)$ for all $\gamma_0\in\Delta^+$ satisfying $\mathrm{ht}(\gamma_0)<\mathrm{ht}(\gamma)$. Define \[ C(\gamma)=\{\,(\gamma_1,\gamma_2)\in\Delta^+\times\Delta^+ \mid \gamma=\gamma_1+\gamma_2, \mbox{ and }l(\gamma_1)<l(\gamma_2)\,\}. \] Then, Proposition \ref{P:LyndonCoveringProperty} implies
\begin{prp}\cite[Proposition 25]{lec} We have \[ l(\gamma)=\min\{\,l(\gamma_1)l(\gamma_2) \mid (\gamma_1,\gamma_2)\in C(\gamma)\,\} \] \end{prp}
In our situation, \[
\Delta^+=\{\beta_i+\beta_{i+1}+\cdots+\beta_j|0\leq i\leq j<r\}
\cup\{2\beta_0+\cdots+2\beta_j+\beta_{j+1}+\cdots+\beta_k|0\leq j<k<r\}. \] A straightforward inductive argument shows that \[ l(\beta_i+\beta_{i+1}+\cdots+\beta_j)=[i,i+1,\ldots,j]\,\,\,\,\,\, {\mbox{and}} \,\,\,\,\,\,
l(2\beta_0+\cdots+2\beta_j+\beta_{j+1}+\cdots+\beta_k)=[j,j-1,\ldots,0,0,\ldots,k-1,k]. \] Remarkably, \begin{prp} In the notation of Lemma \ref{L:ShuffleLemma} we have \[ l(\beta_i+\cdots+\beta_j)=\operatorname{ch}\Phi_{[i,j]} \] and \[ 2l(2\beta_0+\cdots+2\beta_j+\beta_{j+1}+\cdots+\beta_k)
=\operatorname{ch}\Phi_{[-j-1,k]}. \] \end{prp}
Observe that we may write any good Lyndon word uniquely in the form
$l=[i,i+1,\ldots,j]$ where $i,j\in{\mathbb{Z}}$ and $0\leq|i|\leq j<r$. For example, \begin{align}\label{E:GoodLyndonWordConvention} l(2\beta_0+\cdots+2\beta_j +\beta_{j+1}+\cdots+\beta_k)=[-j-1,\ldots,k]. \end{align}
In the following definition, the number of parts $n$ is allowed to vary. Given $\lambda\in P_{>0}^{++}$, let \begin{align}\label{E:Bdld} \mathcal{B}_d(\lambda)=\{\,\mu\in P^+[\lambda] \mid \lambda-\mu\in {P_{\geq0}}(d)\mbox{ and
}|\mu_i|<\lambda_i\mbox{ for all }i\,\} \end{align} and let \begin{align}\label{E:Bd} \mathcal{B}_d=\{\,(\lambda,\mu) \mid \lambda\in P_{>0}^{++}\mbox{ and }\mu\in\mathcal{B}_d(\lambda)\,\}. \end{align}
Let $\mathcal{G}_d=\mathcal{G}\cap{\mathcal{F}}_d$ be the set of good words of principal degree $d$. We have
\begin{lem}\label{L:BdGd} The map $(\lambda,\mu)\mapsto[\lambda-\mu]=[\mu_1,\ldots,\lambda_1-1,\ldots,\mu_n,\ldots,\lambda_n-1]$ induces a bijection $\mathcal{B}_d\rightarrow\mathcal{G}_d$. \end{lem}
\begin{pff} By \eqref{E:GoodLyndonWordConvention}, $[\lambda-\mu]$ is a well-defined element of ${\mathcal{F}}_d$. Since $\lambda\in P^{++}_{>0}$ and $\mu\in P^+[\lambda]$, the ordering convention and \eqref{E:rightfactors} imply that $[\lambda-\mu]\in\mathcal{G}_d$. This map is clearly bijective. \end{pff}
\subsection{PBW and Canonical Bases}\label{SS:PBWandCanonical} The lexicographic ordering on $\mathcal{GL}$ induces a total ordering on $\Delta^+$, which is \emph{convex}, meaning that if $\gamma_1,\gamma_2\in\Delta^+$ with $\gamma_1<\gamma_2$, and $\gamma=\gamma_1+\gamma_2\in\Delta^+$, then $\gamma_1<\gamma<\gamma_2$ (cf. \cite{ro3,lec}).
Indeed, assume $\gamma_1,\gamma_2,\gamma=\gamma_1+\gamma_2\in\Delta^+$ and $\gamma_1<\gamma_2$. Proposition \ref{P:LyndonCoveringProperty} and \eqref{E:rightfactors} imply that $l(\gamma)\leq l(\gamma_1)l(\gamma_2)<l(\gamma_2)$. If $l(\gamma)=l(\gamma_1)l(\gamma_2)$, then the definition of Lyndon words implies $l(\gamma_1)<l(\gamma)$. We are therefore left to prove that $l(\gamma_1)<l(\gamma)$ even if $l(\gamma)<l(\gamma_1)l(\gamma_2)$. This cannot happen if $\gamma=\beta_i+\cdots+\beta_j$. In the case $\gamma=2\beta_0+\cdots+2\beta_j +\beta_{j+1}+\cdots+\beta_k$, the possibilities for $\gamma_1<\gamma_2$ are $\gamma_1=\beta_i+\cdots+\beta_j$ and $\gamma_2=2\beta_0+\cdots+2\beta_{i-1}+\beta_i+\cdots+\beta_k$ for $0\leq i\leq j$. In any of these cases, $[i,\ldots,j]<[j,\ldots,0,0,\ldots,k]$. That is, $l(\gamma_1)<l(\gamma)<l(\gamma_2)$.
Each convex ordering, $\gamma_1<\cdots<\gamma_N$, on $\Delta^+$ arises from a unique decomposition $w_0=s_{i_1}s_{i_2}\cdots s_{i_N}$ of the longest element of the Weyl group of type $B_r$ via \[ \gamma_1=\beta_{i_1},\;\gamma_2=s_{i_1}\beta_{i_2},\;\cdots,\gamma_N=s_{i_1}\cdots s_{i_{N-1}}\beta_{i_N}. \] Lusztig associates to this data a PBW basis of ${\mathcal{U}}_{\mathcal{A}}$ denoted \[ E^{(a_1)}(\gamma_1)\cdots E^{(a_N)}(\gamma_N),\;\;\;(a_1,\ldots,a_N)\in{\mathbb{Z}}_{\geq0}^N. \] Leclerc \cite[$\S4.5$]{lec} describes the image in ${\mathcal{W}}$ of this basis for the convex Lyndon ordering. We use the same braid group action as Leclerc and the results of \cite[$\S4.5,4.6$]{lec} carry over, making changes in the same manner indicated in the previous section. We describe the relevant facts below.
For $g=l(\gamma_1)^{a_1}\cdots l(\gamma_k)^{a_k}$, where $\gamma_1>\cdots>\gamma_k$ and $a_1,\ldots,a_k\in{\mathbb{Z}}_{>0}$ set \[ E_g=\Psi(E^{(a_k)}(\gamma_k)\cdots E^{(a_1)}(\gamma_1))\in{\mathcal{W}}_{\mathcal{A}} \] and let $E_g^*\in{\mathcal{W}}_{\mathcal{A}}^*$ be the image of $(E^{(a_k)}(\gamma_k)\cdots E^{(a_1)}(\gamma_1))^*\in{\mathcal{U}}_{\mathcal{A}}^*$. Observe that the order of the factors in the definition of $E_g$ above is increasing with respect to the Lyndon ordering. Leclerc shows that if $\gamma\in\Delta^+$, then \begin{align}\label{E:Proportional} \kappa_{l(\gamma)}E_{l(\gamma)}=r_{l(\gamma)} \end{align} for some $\kappa_{l(\gamma)}\in{\mathbb{Q}}(q)$ \cite[Theorem 28]{lec} (the proof of this theorem in our case is obtained by reversing all the inequalities and using the standard factorization as opposed to the costandard factorization).
More generally, let $f\mapsto f^t$ be the linear map defined by $[i_1,\ldots,i_k]^t=[i_k,\ldots,i_1]$ and $(x*y)^t=y^t*x^t$. Then, $E_g$ is proportional to $\overline{r_g^t}$ (cf. \cite[$\S4.6$, $\S5.5.2-5.5.3$]{lec}).
As in \cite[$\S5.5.3$]{lec}, we see that there exists an explicit $c_g\in{\mathbb{Z}}$ such that \[ E_g^*=q^{c_g} (E_{l_m}^*)*\cdots*(E_{l_1}^*) \] if $g=l_1\cdots l_m$ with $l_1>\cdots>l_m$. Using \eqref{E:qShuffle} we deduce that \[ E_g^*=q^{C_g}(E_{l_1}^*)\bar{*}\cdots\bar{*}(E_{l_m}^*), \] where $C_g=c_g-\sum_{1\leq i<j\leq m}(\beta_i,\beta_j)$. In particular, \begin{align}\label{E:Eshuffle} \underline{E_g^*}=\underline{(E_{l_1}^*)*\cdots*(E_{l_m}^*)}. \end{align}
Using the bar involution (Definition \ref{D:BarInv}), Leclerc constructs the canonical basis $\{b_g \mid g\in\mathcal{G}\}$ of ${\mathcal{W}}_{\mathcal{A}}$ via the PBW basis $\{E_g \mid g\in\mathcal{G}\}$. It has the form \[ b_g=E_g+\sum_{\substack{h\in\mathcal{G}\\ h<g}}\chi_{gh}E_h. \] The dual canonical basis then has the form \[ b_g^*=E_g^*+\sum_{\substack{h\in\mathcal{G}\\ h>g}}\chi_{gh}^*E_h^*. \] In particular, for good Lyndon words, \cite[Corollary 41]{lec}, $b^*_{l}=E^*_{l}$ for every $l\in\mathcal{GL}$. As in \cite[Lemma 8.2]{lec}, we see that $b^*_{[i,\ldots,j]}=[i,\ldots,j]$ for $0\leq i<j<r$. We now prove
\begin{lem}\label{L:DblSeg} For $0\leq j<k<r$, one has \[ b^*_{[j,\ldots,0,0,\ldots,k]}=(2)_0[j,\ldots,0,0,\ldots,k]. \] \end{lem}
\begin{pff} We prove this by induction on $j$ and $k$ with $j<k$, using \eqref{E:inductiveqshuffle}, \eqref{E:qbracket}, and \eqref{E:Lyndonbracketing} for the computations.
Observe that for $k\geq1$, $r_{[0,1,\ldots,k]}=(q^2-q^{-2})^k[0,1,\ldots,k]$, which can be proved easily by downward induction on $j$, $0\leq j<k$, using \eqref{E:inductiveqshuffle} and \[ r_{[j,\ldots,k]}=\Xi({\langle}[j,\ldots,k]{\rangle})=\Xi([[j],{\langle}[j+1,\ldots,k]]_q)=[j]*r_{[j+1,\ldots,k]}-q^{-2}r_{[j+1,\ldots,k]}*[j]. \] By \eqref{E:inductiveqshuffle}, we have \begin{align*} [0]*[0,1]-[0,1]*[0]&=[0,1,0]+q^2([0]*[0])[1]-([0]*[0])[1]-[0,1,0]\\ &=(q^2-1)([0,0]+q^{-2}[0,0])[1]=(q^2-q^{-2})[0,0,1] \end{align*} Therefore, applying \eqref{E:Lyndonbracketing} and the relevant definitions, we deduce that \begin{align*} r_{[0,0,1]}&=\Xi({\langle}[0,0,1]{\rangle})\\
&=\Xi([[0],{\langle}[0,1]{\rangle}]_q)\\
&=[0]*r_{[0,1]}-r_{[0,1]}*[0]\\
&=(q^2-q^{-2})([0]*[0,1]-[0,1]*[0])\\
&=(q^2-q^{-2})^2[0,0,1]. \end{align*} Once again, using \eqref{E:inductiveqshuffle}, we deduce that for all $k\geq2$, \begin{align}\label{E:DblSegReduction0} [0]*[0,\ldots,k]-[0,\ldots,k]*[0]=([0]*[0,\ldots,k-1]-[0,\ldots,k-1]*[0])[k]. \end{align} Assume $k\geq2$. Then, $(\beta_0,\beta_0+\cdots+\beta_k)=0$, so iterated applications of \eqref{E:DblSegReduction0} yield \begin{align*} r_{[0,0,\ldots,k]}&=[0]*r_{[0,\ldots,k]}-r_{[0,\ldots,k]}*[0]\\ &=(q^2-q^{-2})^k([0]*[0,\ldots,k]-[0,\ldots,k]*[0])\\
&=(q^2-q^{-2})^k([0]*[0,1]-[0,1]*[0])[2,\ldots, k]\\
&=(q^2-q^{-2})^{k+1}[0,0,\ldots, k] \end{align*}
Now, assume that $k\geq 2$, and $0<j<k$. To compute $r_{[j,\ldots,0,0,\ldots,k]}$, we need the following. For $|j-k|>1$, \begin{align}\label{E:DblSegReduction1} [j]*[j-1,\ldots,&k]-q^{-2}[j-1,\ldots,k]*[j]\\
\nonumber&=([j]*[j-1,\ldots,k-1]-q^{-2}[j-1,\ldots,k-1]*[j])[k]. \end{align} For $j=k-1$, \begin{align}\label{E:DblSegReduction2} [j]*[j-1,\ldots,0,&0,\ldots,j+1]-q^{-2}[j-1,\ldots,0,0,\ldots,j+1]*[j]\\
\nonumber&=(q^2[j]*[j-1,\ldots,0,0,\ldots,j]-q^{-2}[j-1,\ldots,0,0,\ldots,j]*[j])[j+1]. \end{align} Finally, \begin{align}\label{E:DblSegReduction3} q^2[j]*[j-1,\ldots,0,&0,\ldots,j]-q^{-2}[j-1,\ldots,0,0,\ldots,j]*[j]\\
\nonumber&=([j]*[j-1,\ldots,0,0,\ldots,j-2]-q^{-2}[j-1,\ldots,0,0,\ldots,j-2]*[j])[j-1,j]. \end{align} Indeed, \eqref{E:DblSegReduction1} and \eqref{E:DblSegReduction2} are straightforward applications of \eqref{E:inductiveqshuffle}. Equation \eqref{E:DblSegReduction3} involves a little more calculation: \begin{align*} q^2[j]*[j-1,\ldots,0,&0,\ldots,j]-q^{-2}[j-1,\ldots,0,0,\ldots,j]*[j]\\ =&q^2[j-1,\ldots,0,0,\ldots,j,j]+q^{-2}([j]*[j-1,\ldots,0,0,\ldots,j-1]\\
&-[j-1,\ldots,0,0,\ldots,j-1]*[j])[j]-q^{-2}[j-1,\ldots,0,0,\ldots,j,j]\\
=&(q^2-q^{-2})[j-1,\ldots,0,0,\ldots,j,j]+q^{-2}([j-1,\ldots,0,0,\ldots,j]\\&+q^2([j]*[j-1,\ldots,0,0,\ldots,j-2])[j-1]
-([j-1,\ldots,0,0,\ldots,j-2]*[j])[j-1]\\&-q^4[j-1,\ldots,0,0,\ldots,j])[j]\\
=&([j]*[j-1,\ldots,0,0,\ldots,j-2]-q^{-2}[j-1,\ldots,0,0,\ldots,j-2]*[j])[j-1,j]. \end{align*} Note that \eqref{E:DblSegReduction1} holds for both $[j-1,j,\ldots,k]$ and $[j-1,\ldots,0,0,\ldots,k]$.
Now, assume that we have shown that $r_{[j-1,\ldots,0,0,\ldots,k]}=(q^2-q^{-2})^{j+k}[j-1,\ldots,0,0,\ldots,k]$. Then, since $(\beta_j,2\beta_0+\cdots+2\beta_{j-1}+\beta_j+\cdots+\beta_k)=-2$, \begin{align*} r_{[j,\ldots,0,0,\ldots,k]}=&[j]*r_{[j-1,\ldots,0,0,\ldots,k]}-q^{-2}r_{[j-1,\ldots,0,0,\ldots,k]}*[j]\\
=&(q^2-q^{-2})^{j+k}\left([j]*[j-1,\ldots,0,0,\ldots,k]-q^{-2}[j-1,\ldots,0,0,\ldots,k]*[j]\right)\\
=&(q^2-q^{-2})^{j+k}([j]*[j-1,\ldots,0,0,\ldots,j+1]\\&-q^{-2}[j-1,\ldots,0,0,\ldots,j+1]*[j])[j+2,\ldots,k]
\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\mbox{by \eqref{E:DblSegReduction1}}\\
=&(q^2-q^{-2})^{j+k}(q^2[j]*[j-1,\ldots,0,0,\ldots,j]\\&-q^{-2}[j-1,\ldots,0,0,\ldots,j]*[j])[j+1,\ldots,k]
\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\mbox{by \eqref{E:DblSegReduction2}}\\
=&(q^2-q^{-2})^{j+k}([j]*[j-1,\ldots,0,0,\ldots,j-2]\\&-q^{-2}[j-1,\ldots,0,0,\ldots,j-2]*[j])[j-1,\ldots,k]
\,\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\mbox{by \eqref{E:DblSegReduction3}}\\
=&(q^2-q^{-2})^{j+k}([j]*[j-1]-q^{-2}[j-1]*[j])[j-2,\ldots,0,0,\ldots,k]\;\;\;\;\mbox{by \eqref{E:DblSegReduction1}}\\
=&(q^2-q^{-2})^{j+k+1}[j,\ldots,0,0,\ldots,k]. \end{align*}
Finally, the result follows after computing the normalizing coefficient \eqref{E:Proportional} using \cite[Equation (28)]{lec}. We leave the details to the reader. \end{pff}
\subsection{} In this section we give a representation-theoretic interpretation of the good Lyndon words associated to the root vectors $2\beta_0+\cdots+2\beta_j+\beta_{j+1}+\cdots+\beta_k$ ($0\leq j<k<r$) which appear in \cite[Lemma 53]{lec}. The corresponding dual canonical basis vectors are given by the formula \[ [0]\cdot([1,\ldots,j]*[0,\ldots,k]). \]
\begin{lem} Let $0\leq a<b$, $d=b+a+2$, $\lambda=(b+1,a+1)$ and $\alpha=(1,-1)$. Then, for $1\leq k\leq a$, \[ \operatorname{ch}{\mathcal{L}}(\lambda,-k\alpha)=2\underline{[k-1]\cdot([k-2,k-3,\ldots,1,0,0,1,\ldots,b]*[k,\ldots,a])} \] where if $ k=1$, we interpret \[[k-2,k-3,\ldots,1,0,0,1,\ldots,b]=[0,1,\ldots, b] \] \end{lem}
\begin{proof} By \cite[Proposition 11.4]{g}, for each $k\in{\mathbb{Z}}_{\geq0}$, there exists a short exact sequence \[ \xymatrix{0\ar[r]&L(-(k+1)\alpha)\ar[r]&M(-k\alpha)\ar[r]&L(-k\alpha)\ar[r]&0}. \] For $k\leq a+1$, applying the functor $F_\lambda$ yields the exact sequence \begin{eqnarray}\label{E:ShortExactSeq} \xymatrix@1{0\ar[r]&F_\lambda L(-(k+1)\alpha)\ar[r]&2{\mathcal{M}}(\lambda,-k\alpha)\ar[r]&F_\lambda L(-k\alpha)\ar[r]&0}. \end{eqnarray} Therefore, \[ \operatorname{ch} F_\lambda L(-k\alpha)=4\underline{[k-1,\ldots,1,0,0,1,\ldots,b]*[k,\ldots,a]}-\operatorname{ch} F_\lambda L(-(k+1)\alpha). \] Note that when $k=a+1$, $F_\lambda L(-(k+1)\alpha)=0$ since ${\mathcal{M}}(\lambda,-(a+2)\alpha)=0$. Therefore the sequence \eqref{E:ShortExactSeq} implies $F_\lambda L(-k\alpha)=2{\mathcal{L}}(\lambda,-(a+1)\alpha)\cong2{\mathcal{M}}(\lambda,-(a+1)\alpha)\cong 2\Phi_{[-a-1,b]}$, and \[ \operatorname{ch}\Phi_{[-a-1,b]}=2\underline{[a,a-1,\ldots,1,0,0,1,\ldots,b]}. \]
We now prove the lemma by downward induction on $k\leq a$. We have \begin{align*} \operatorname{ch} F_\lambda L(-a\alpha)=&4\,\underline{[a-1,\ldots,1,0,0,1,\ldots,b]*[a]-[a,\ldots,1,0,0,1,\ldots,b]}\\ =&4\,\underline{[a-1]\cdot([a-2,\ldots,1,0,0,1,\ldots,b]*[a])}. \end{align*} Hence, $F_\lambda L(-a\alpha)=2{\mathcal{L}}(\lambda,-a\alpha)$ and the lemma holds for $k=a$. Now, assume $k<a$, $F_\lambda L(-(k+1)\alpha)=2{\mathcal{L}}(\lambda,-(k+1)\alpha)$, and \[ \operatorname{ch}{\mathcal{L}}(\lambda,-(k+1)\alpha)=2\underline{[k]\cdot([k-1,\ldots,1,0,0,1,\ldots,b]*[k+1,\ldots,a])}. \] Then, \begin{align*} \operatorname{ch} F_\lambda L(-k\alpha)=&4\underline{[k-1,\ldots,1,0,0,1,\ldots,b]*[k,\ldots,a]}-
4\underline{[k]\cdot([k-1,\ldots,1,0,0,1,\ldots,b]*[k+1,\ldots,a])}\\
=&4\underline{[k-1]\cdot([k-2,\ldots,1,0,0,1,\ldots,b]*[k,\ldots,a])}. \end{align*} Hence, $F_\lambda L(-k\alpha)\neq 0$, so $F_\lambda L(-k\alpha)=2{\mathcal{L}}(\lambda,-k\alpha)$ and the lemma holds. \end{proof}
\begin{cor}\label{C:LecDblSeg} Let $0\leq a<b$, $d=b+a+2$, $\lambda=(b+1,a+1)$ and $\mu=-\alpha=(-1,1)$. Then, \[ \operatorname{ch}{\mathcal{L}}(\lambda,-\alpha)=2\,\underline{[0]\cdot([0,\ldots,b]*[1,\ldots,a])}. \] \end{cor}
\subsection{A Basis for the Grothendieck Group $K(\operatorname{Rep} {\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d))$}\label{SS:GrothendieckGroup}
\begin{thm}\label{T:GrothendieckBasis1} The set \[ \{ \left[ {\mathcal{M}}(\lambda,\mu)\right]\mid (\lambda,\mu)\in\mathcal{B}_d\} \] forms a basis for $K(\operatorname{Rep}{\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d))$. \end{thm}
\begin{proof} By Lemma \ref{L:DblSeg} and \eqref{E:Eshuffle}, it follows that $\operatorname{ch}{\mathcal{M}}(\lambda,\mu)=\underline{E^*_{[\lambda-\mu]}}$. The result now follows from Lemma \ref{L:BdGd} and the fact that the character map is injective. \end{proof}
We will now describe a basis for $K(\operatorname{Rep}{\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d))$ in terms of the simple modules ${\mathcal{L}}(\lambda,\mu)$.
\begin{prp}\label{P:StandardSegmentForm} Let $b\geq0$, $\lambda=(b+1,b+1)$ and $\alpha=(1,-1)$. Then, \[ \Phi_{[-b-1,b]}\cong{\mathcal{L}}(\lambda,b\alpha). \] \end{prp}
\begin{proof} There is a surjective homomorphism ${\mathcal{M}}(\lambda,b\alpha)\to\Phi_{[-b-1,b]}$. The result follows since $\Phi_{[-b-1,b]}$ is simple. \end{proof}
\begin{cor}\label{C:StandardizingWords} Assume that $\lambda\in P_{>0}^{++}$,
$\mu\in P^+[\lambda]$, $\lambda-\mu\in{P_{\geq0}}(d)$, and $|\mu_i|\leq\lambda_i$ for all $i$. Then, there exists $(\eta,\nu)\in\mathcal{B}_d$ such that \[ {\mathcal{L}}(\lambda,\mu)\cong{\mathcal{L}}(\eta,\nu), \] and $[\lambda-\mu]\leq[\eta-\nu]$. \end{cor}
\begin{proof} First, we may assume $\mu_i<\lambda_i$ for all $i$, since the terms for which $\lambda_i=\mu_i$ do not contribute to ${\mathcal{L}}(\lambda,\mu)$. Proceed by induction on $N(\lambda,\mu)=|\{i=1,\ldots,n \mid \mu_i=-\lambda_i\}|$. If $N(\lambda,\mu)=0$, then $(\lambda,\mu)\in\mathcal{B}_d$ so there is nothing to do. If $N(\lambda,\mu)>0$, let $j$ be the smallest index such that $\mu_j=-\lambda_j$. Set $\lambda^{(1)}=(\lambda_1,\ldots,\lambda_{j-1},\lambda_j,\lambda_j,\lambda_{j+1},\ldots,\lambda_n)$ and $\mu^{(1)}=(\mu_1,\ldots,\mu_{j-1},\lambda_j-1,\mu_j+1,\mu_{j+1},\ldots,\mu_n)$. Clearly, $\lambda^{(1)}\in P_{>0}^{++}$ and $\mu^{(1)}\in\lambda^{(1)}-{P_{\geq0}}(d)$. We now show $\mu^{(1)}\in P^+[\lambda^{(1)}]$. Indeed, $\lambda_j>0$, so $\lambda_j-1>1-\lambda_j=\mu_j+1$; and, $\mu_j\geq\mu_{j+1}$, so $\mu_j+1>\mu_{j+1}$. Since $\mu_j<\lambda_j-1$, the $j$th twisted good Lyndon word in $[\lambda^{(1)}-\mu^{(1)}]$ is greater than the $j$th twisted good Lyndon word in $[\lambda-\mu]$. Hence, $[\lambda-\mu]\leq[\lambda^{(1)}-\mu^{(1)}]$.
Now, there exists a surjective homomorphism \begin{align*} \Phi_{[\mu_1,\lambda_1-1]}\circledast\cdots\circledast{\mathcal{M}}((\lambda_j,\lambda_j),(\lambda_j-1,\mu_j+1))
&\circledast\cdots\circledast\Phi_{[\mu_{n},\lambda_{n}-1]}\\ &\to\Phi_{[\mu_1,\lambda_1-1]}\circledast\cdots\circledast\Phi_{[\mu_j,\lambda_j-1]}
\circledast\cdots\circledast\Phi_{[\mu_{n},\lambda_{n}-1]}. \end{align*} Hence, there is a surjective homomorphism ${\mathcal{M}}(\lambda^{(1)},\mu^{(1)})\to{\mathcal{L}}(\lambda,\mu)$. It follows that ${\mathcal{L}}(\lambda^{(1)},\mu^{(1)})\cong{\mathcal{L}}(\lambda,\mu)$.
Since $N(\lambda^{(1)},\mu^{(1)})<N(\lambda,\mu)$ the result follows. \end{proof}
Recall that given $\mu\in\lambda-{P_{\geq0}}(d)$ there exists a unique $w\in S_d[\lambda]$ such that $w\mu\in P^+[\lambda]$. Let $\mu^+$ denote this element. Also, given $\lambda\in{P^{++}}$, and $\mu\in\lambda-{P_{\geq0}}(d)$, let $[\lambda-\mu]^+=[\lambda-\mu^+]\in\mathcal{TG}$ be the associated twisted good word. The following lemma is straightforward.
\begin{lem}\label{L:WordTriangularity} Assume that $\lambda\in{P^{++}}$, $\lambda-\mu\in{P_{\geq0}}(d)$ and $\gamma\in Q^+$. Then, $[\lambda-\mu]\leq[\lambda-(\mu-\gamma)^+]$. \end{lem}
\begin{thm} The following is a complete list of pairwise non-isomorphic simple modules for ${\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d)$: \[ \{\,{\mathcal{L}}(\lambda,\mu)\mid (\lambda,\mu)\in \mathcal{B}_d\,\}. \] \end{thm}
\begin{proof} Every composition factor of $M(\mu)$ is of the form $L(\mu-\gamma)$ for some $\gamma \in Q^+$. Applying the functor, we deduce that every composition factor of ${\mathcal{M}}(\lambda,\mu)$ is of the form ${\mathcal{L}}(\lambda,\mu-\gamma)\cong{\mathcal{L}}(\lambda,(\mu-\gamma)^+)$. Now, putting together Corollary \ref{C:StandardizingWords} and Lemma \ref{L:WordTriangularity}, we deduce that in the Grothendieck group \[ [{\mathcal{M}}(\lambda,\mu)]=\sum_{\substack{\nu \in\mathcal{B}_d(\eta)\\ \eta \in P^{++}_{>0}\\ [\lambda-\mu]\leq[\eta-\nu]}}c_{\lambda,\mu, \eta, \nu}[{\mathcal{L}}(\eta,\nu)], \] where the $c_{\lambda,\mu,\eta, \nu}$ are integers and where $c_{\lambda,\mu,\lambda, \mu} \neq 0 $. Therefore, the transition matrix between the basis for $K(\operatorname{Rep}{\mathcal{H}_{\Cl}^{\mathrm{aff}}}(d))$ given by standard modules and that given by simples is triangular. \end{proof}
\section{Table of Notation}\label{SS:TableofNotation} For the convenience of the reader we provide a table of notation with a reference to where the notation is first defined.
\begin{center} \begin{tabular}{ccl} Notation & & First Defined\\ \hline ${\mathcal{S}} (d)$, ${\mathcal{H}_{\Cl}^{\mathrm{aff}}} (d)$, ${\mathcal{P}}_{d}[x]$, ${\mathcal{A}}(d) $ & & Section~\ref{SS:Saffdef} \\ $q(a)$ & & Section~\ref{SS:weights},\eqref{E:qdef}\\ ${\mathcal{P}}_{d}[x^{2}]$ & & Section~\ref{SS:weights} \\ $\operatorname{Ind}^{d}_{\mu}$ & & Section~\ref{SS:Mackey} \\ $D_\nu$, $D_{(m,k)}$ & & Section~\ref{SS:Mackey} \\ $\gamma_{0}=\gamma_{0}(a_{1}, \dots ,a_{d})$ & & Section~\ref{SS:characters}, \eqref{E:gammazerodef}\\ $[a_1,\ldots,a_d]$ & & Section~\ref{SS:characters} \\ ${\mathcal{C}\ell}_{d}$ & & Section~\ref{subsection irred modules}, \eqref{E:Cldef}\\ ${\mathcal{L}}_i$, $s_{ij}$&& Section~\ref{subsection irred modules}, \eqref{E:JMelt}\\ $[a,b]$ & & Section~\ref{subsection irred modules} \\ $\hat{\Phi}_{[a,b]}$, $\hat{\Phi}_{[a,b]}^{+}$, $\hat{\Phi}_{[a,b]}^{-}$ & & Section~\ref{subsection irred modules} \\ $\Phi_{[a,b]}$ & & Section~\ref{subsection irred modules}, Definition~\ref{segments} \\ $\hat{{\mathbf{1}}}_{[a,b]}$, $ \varphi\hat{{\mathbf{1}}}_{[a,b]}$ & & Section~\ref{subsection irred modules} \\ ${{\mathbf{1}}}_{a,b,n}$ & & Section ~\ref{unique simple quotient}\\ $R$, $R^{+}$, $Q$, $Q^{+}$ & & Section~\ref{SS:LieThy} \\ $P$, $P_{\geq 0}$, $P^{+}$, $P^{++}$, $P^{+}_{\text{rat}}$, $P^{+}_{\text{poly}}$ & & Section~\ref{SS:LieThy} \\ $P(d)$, $P_{\geq 0}(d)$, $P^{+}(d)$, $P^{++}(d)$, $P^{+}_{\text{rat}}(d)$, $P^{+}_{\text{poly}}(d)$ & & Section~\ref{SS:LieThy} \\ $S_{n}[\lambda]$, $R[\lambda]$, $P^{+}[\lambda]$, $P^-[\lambda]$ & & Section~\ref{SS:LieThy} \\ $\widehat{\Phi}(\lambda, \mu)$, $\Phi(\lambda, \mu)$ & & Section~\ref{SS:inducedmodules} \\ $\widehat{{\mathcal{M}}}(\lambda, \mu)$, ${\mathcal{M}} (\lambda, \mu)$ & & Section~\ref{SS:inducedmodules}, \eqref{E:Mhatdef}, \eqref{E:Mdef}\\ ${{\mathcal{M}}}_{a,b,n}$ & & Section ~\ref{unique simple quotient}\\ $S_{n}[\zeta]$ & & Section~\ref{SS:inducedmodules} \\ $\mathcal{R}(\lambda, \mu)$ & 
& Section~\ref{unique simple quotient} \\ $L(\lambda, \mu)$ & & Section~\ref{unique simple quotient}, Theorem~\ref{thm:unique irred quotient} \\ $\lambda/\mu$& & Section~\ref{S:Calibrated}\\ $\mathcal{Y}_{i,L}$& & Section~\ref{S:Calibrated}\\ $H^{\lambda/\mu}$& & Section~\ref{S:Calibrated}\\ $e_{i,j}$, $f_{i,j}$, $\bar{e}_{i,j}$, $\bar{f}_{i,j}$ & & Section~\ref{SS:qndfn} \\ $\mathcal{O}$, $\mathcal{O}(\ensuremath{\mathfrak{q}} (n))$ & & Section~\ref{SS:RootData} \\ $\widehat{M}(\lambda)$, $M(\lambda)$ & & Section~\ref{SS:RootData} \\ $(\cdot, \cdot)_{S}$ & & Section~\ref{SS:ShapovalovForm} \end{tabular}
\begin{tabular}{ccl} Notation & & First Defined\\ \hline $C_{i}$, $S_{i,j}$, $F_i$ & & Section~\ref{SS:Sergeev Duality} \\ $\Omega_{i,j}$ & & Section~\ref{SS:action} \\ $F_{\lambda}$ & & Section~\ref{SS:Flambda}, \eqref{E:Flambdadef}\\ $(\cdot, \cdot)_{\mu}$ & & Section~\ref{SS:functorimage}\\ $\varpi(\mu)$ & & Section~\ref{SS:functorimage}\\ $\Delta^+$, $\Pi$, $\mathcal{Q}$, $\mathcal{Q}^+$& & Section~\ref{SS:ShuffleAlg}\\ $(\mathcal{F},*)$, $\mathcal{W}$& & Section~\ref{SS:ShuffleAlg}\\ ${\mathcal{F}}_{\mathcal{A}}$, ${\mathcal{F}}_{\mathbb{C}}$, ${\mathcal{W}}_{\mathcal{A}}$, ${\mathcal{W}}_{\mathbb{C}}$& & Section~\ref{SS:ShuffleAlg}\\ $\underline{E}\in{\mathcal{W}}_{\mathbb{C}}$& & Section~\ref{SS:ShuffleAlg}\\ $\mathcal{GL}$, $\mathcal{G}$ & & Section~\ref{SS:LyndonWords}\\ $\mathcal{B}_d[\lambda]$, $\mathcal{B}_d$ & & Section~\ref{SS:LyndonWords}\\ $[\cdot,\cdot]_q$, $\Xi$, $r_g$ & & Section~\ref{SS:LyndonWords}\\ $E_g$, $E_g^*$, $b_g$, $b_g^*$& & Section~\ref{SS:PBWandCanonical} \end{tabular} \end{center}
\pagebreak
\end{document}
\begin{document}
\title{Transfer of an unknown quantum state, quantum networks, and memory} \author{Asoka Biswas and G. S. Agarwal} \affiliation{Physical Research Laboratory, Navrangpura, Ahmedabad - 380 009, India} \date{\today}
\begin{abstract} We present a protocol for transfer of an unknown quantum state. The protocol is based on a two-mode cavity interacting dispersively in a sequential manner with three-level atoms in $\Lambda$-configuration. We propose a scheme for quantum networking using an atomic channel. We investigate the effect of cavity decoherence in the entire process. Further we demonstrate the possibility of an efficient quantum memory for arbitrary superposition of two modes of a cavity containing one photon.
\end{abstract}
\pacs{03.67.-a, 03.67.Hk}
\maketitle
\section{Introduction}
In quantum information theory \cite{info}, the transfer of information in the form of a coherently prepared quantum state is essential. One can transfer a quantum state either by teleportation \cite{bennett} or through quantum networking. The basic idea behind a quantum network is to transfer a quantum state from one node to another with the help of a carrier (a quantum channel) such that it arrives intact. In between, one has to perform a process of quantum state transfer (QST) to transfer the state from one node to the carrier and again from the carrier to the destination node. There have been some proposals \cite{cirac} for quantum networking using cavity-QED, where two atoms trapped inside two spatially separated cavities serve as the two nodes. In Ref.~\cite{cirac} the task was to transfer the state of {\it one atom into the other} via the process of QST between the atom and a photon, where the latter is used as a carrier. The photon carries the information through either free space or an optical fiber between the cavities, and the success depends on the {\it probabilistic\/} detection of photons or on adiabatic passage through the cavities. We note that, though it may be difficult to beat communication with photons, it is always interesting to explore the alternatives. In fact, very recently, a quantum network using a linear XY chain of $N$ interacting qubits has been proposed. In this proposal, the quantum state can be transferred from the first qubit to the $N$th qubit over a microscopic distance by pre-engineering the interqubit interactions \cite{ekert}.
Further, the storage of quantum states is also an important issue, and there have been several proposals for quantum memory. For example, recent proposals \cite{lukin,molmer} have shown how to transfer a field state into atomic coherence by an adiabatic technique and retrieve it either through adiabatic following \cite{lukin} or using a teleportation technique \cite{molmer}. Quantum memory of an individual polarization state in a collective atomic ensemble has been proposed \cite{guo}. Initially an entangled state of two pairs of atomic ensembles is prepared, in which the single-photon polarization state is stored through a process similar to teleportation. Though the information can be transferred back to the photon state, the protocol succeeds only with probability 1/4. Decoherence-free memory of one qubit in a pair of trapped ions has also been experimentally demonstrated \cite{memo}. Ma\^{\i}tre {\it et al.\/} \cite{memory} have proposed a quantum memory in which the quantum information on the superposition state of a two-level atom is stored in a cavity as a superposition of $0$ and $1$ photon Fock states. The holding time of such memories is generally limited by the cavity decay time.
In this paper, we propose a new scheme for QST that transfers the unknown state of one atom to another atom, where the atoms do not interact with each other directly. Note that through a direct spin interaction of the $\vec{S}_1\cdot\vec{S}_2$ kind, a quantum state could be transferred from one atom to another only within a microscopic range. In the present scheme we show how a similar kind of interaction between two atoms can be mediated by a cavity. Thus the atomic state can be transferred from one atom to another over a mesoscopic range.
We extend our idea of QST to a quantum network where we transfer the {\it state of one cavity to another spatially separated cavity}. For this we use long-lived atoms as carriers, and make use of the QST process to transfer the state of the cavity to the atoms and again to the target cavity. Our protocol for quantum networking provides a {\it deterministic\/} way to transfer the quantum state between the cavities; it does not require any probabilistic argument based on the outcome of a measurement. Further, we propose the realization of a quantum memory for an {\it arbitrary superposition of two modes\/} of a cavity which contains only one photon. This superposition state can be stored in the long-lived states of neutral atoms and retrieved in another two-mode cavity later, {\it deterministically}. Our proposal relies on the technological advances and realizations described in Ref.~\cite{haroche}.
The structure of the paper is as follows. In Sec.~II, we describe the model and provide the relevant equations. In Sec.~III, we discuss how transfer of an unknown quantum state can be performed between two atoms. We provide an estimate of possible decoherence in this process due to cavity decay. In Sec.~IV, we extend our scheme to quantum networks and quantum memory.
\section{Model configuration}
To describe how the QST protocol works, we consider a three-level atom in $\Lambda$ configuration interacting with a two-mode cavity (see Fig.~\ref{fig1}). \begin{figure}
\caption{Three-level atomic configuration with levels $|g\rangle$,
$|e\rangle$, and $|f\rangle$ interacting with two orthogonal modes of the cavity, described by operators $a$ and $b$. Here $g_1$ and $g_2$ represent the atom-cavity coupling of the $a$ and $b$ modes with the corresponding transitions and $\Delta$ is the common one-photon detuning.}
\label{fig1}
\end{figure} The modes with annihilation operators $a$ and $b$ interact with the
$|e\rangle \leftrightarrow |g\rangle$ and $|e\rangle \leftrightarrow |f\rangle$ transitions, respectively.
The Hamiltonian under the rotating-wave approximation can be written as \begin{eqnarray}
H&=&\hbar\left[\omega_{eg}|e\rangle\langle e|+\omega_{fg}|f\rangle\langle f|+\omega_1a^\dag a+\omega_2b^\dag b\right.\nonumber\\
&&+\left.\left\{g_1|e\rangle\langle g|a+g_2|e\rangle\langle f|b+\mathrm{H.c.}\right\}\right] \label{fullhamil} \end{eqnarray} where $\omega_{lg}$ $(l \in \{e,f\})$ is the atomic transition frequency, $\omega_i$ $(i\in \{1,2\})$ is the frequency of the cavity mode $a$ or $b$, and $g_i$ is the corresponding atom-cavity coupling constant. We assume the $g_i$ to be real.
We work under the two-photon resonance condition and assume large single-photon detuning. After adiabatically eliminating the excited level $|e\rangle$ in the large-detuning regime, we derive an effective Hamiltonian describing the system of Fig.~\ref{fig1} \begin{eqnarray}
H_{\mathrm{eff}}&=&-\frac{\hbar g^2}{\Delta}\left[|g\rangle\langle g|a^\dag a+|f\rangle\langle f|b^\dag b\right]\nonumber\\
&&-\frac{\hbar g^2}{\Delta}\left[|g\rangle\langle f|a^\dag b+|f\rangle\langle g|a b^\dag\right], \label{effham} \end{eqnarray} where $\Delta=\omega_{eg}-\omega_1=(\omega_{eg}-\omega_{fg})-\omega_2$ is the common one-photon detuning of the cavity modes and $g_1=g_2=g$ $(\ll \Delta)$. The condition $g_1=g_2$ can be satisfied by a proper choice of atomic transitions, frequencies, etc.
Note that if one considers the levels $|g\rangle$ and
$|f\rangle$ as Zeeman sublevels, then these conditions are automatically satisfied. In that case, the two modes of the cavity may be regarded as two orthogonal polarization states of a photon. The first two terms in Eq.~(\ref{effham}) are self-energy terms, while the last two give the interaction that drives the transition from the initial state to the final state.
The probability amplitudes of the relevant basis states $|g\rangle |n,\mu\rangle$
and $|f\rangle|n-1,\mu +1\rangle$ in the state vector
\begin{equation}
|\psi(t)\rangle=d_g(t)|g,n,\mu\rangle+d_f(t)|f,n-1,\mu +1\rangle \end{equation} are given by
\begin{eqnarray} d_g(t)&=&\frac{\sqrt{n}XY}{n+\mu+1}+d_g(0)\;,\nonumber\\ d_f(t)&=&\frac{\sqrt{\mu+1}XY}{n+\mu+1}+d_f(0)\;, \label{solns} \end{eqnarray} where $X=\sqrt{n}d_g(0)+\sqrt{\mu+1}d_f(0)$, $Y=\exp[ig^2(n+\mu+1)t/\Delta]-1$,
$n$ and $\mu$ are the respective photon numbers in the modes $a$ and $b$. We note that the effective interaction (\ref{effham}) can be seen as an interaction between two qubits defined via the atomic and field variables \begin{eqnarray}
&&S^+=|f\rangle\langle g|,\;S^-=|g\rangle\langle f|,\;S^z=\frac{1}{2}(|f\rangle\langle f|-|g\rangle\langle g|);\nonumber\\ &&R^+=a^\dag b,\;R^-=ab^\dag,\;R^z=\frac{1}{2}(a^\dag a-b^\dag b). \end{eqnarray} In the single-photon subspace, the field operators $R^\pm$, $R^z$ satisfy the spin-$1/2$ algebra, and thus the interaction (\ref{effham}) can be written as an interaction between two qubits, \begin{equation} H_{\mathrm{eff}}\equiv -\frac{\hbar g^2}{\Delta}(R^+S^-+R^-S^+-2R^zS^z). \end{equation} In view of the above form of the effective interaction, we conclude that the system of Fig.~\ref{fig1} can be used for a {\it number of quantum logic operations\/}.
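As a sanity check on Eq.~(\ref{solns}), the two-state dynamics can be reproduced numerically. The sketch below (our illustration, not part of the original analysis; the parameter values are arbitrary and we set $\hbar=1$) propagates the amplitudes with the matrix exponential of $H_{\mathrm{eff}}$ restricted to the subspace $\{|g,n,\mu\rangle,|f,n-1,\mu+1\rangle\}$ and compares the result with the closed form:

```python
import numpy as np
from scipy.linalg import expm

# Illustrative parameters (hbar = 1): effective coupling lam = g^2/Delta
g, Delta = 1.0, 10.0
lam = g**2 / Delta
n, mu = 2, 1            # photon numbers in modes a and b
t = 0.7                 # arbitrary evolution time

# H_eff of Eq. (effham) restricted to {|g,n,mu>, |f,n-1,mu+1>}
H = -lam * np.array([[n, np.sqrt(n*(mu+1))],
                     [np.sqrt(n*(mu+1)), mu+1]])

d0 = np.array([0.6, 0.8], dtype=complex)    # (d_g(0), d_f(0)), normalized
d_num = expm(-1j*H*t) @ d0                  # exact propagation

# Closed-form amplitudes of Eq. (solns)
s = n + mu + 1
X = np.sqrt(n)*d0[0] + np.sqrt(mu+1)*d0[1]
Y = np.exp(1j*lam*s*t) - 1
d_ana = d0 + np.array([np.sqrt(n), np.sqrt(mu+1)]) * X * Y / s

assert np.allclose(d_num, d_ana)
```

The agreement follows because the coupling matrix has eigenvalues $0$ and $n+\mu+1$, so the propagator is $\openone+(e^{ig^2(n+\mu+1)t/\Delta}-1)\,M/(n+\mu+1)$ with $M$ the rank-one coupling block.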
\begin{figure}
\caption{Schematic diagram of the QST protocol for a number of atoms interacting with the two-mode cavity in a sequential manner, each for a time $T=\pi\Delta/(2g^2)$.}
\label{fig2}
\end{figure}
\section{Quantum state transfer protocol} We next demonstrate how the dynamics of an atom in a two-mode cavity can be used to implement the QST protocol. From now on, by a $\pi$ pulse we mean a traversal time $T$ of the atom through the cavity such that $2g^2T/\Delta =\pi$. The time $T$ can be controlled by selecting the atomic velocity.
We assume that the atom A is initially in an unknown state \begin{equation}
|i\rangle_A=\alpha |g\rangle_A +\beta |f\rangle_A\;, \end{equation}
where $\alpha$ and $\beta$ are {\it unknown\/} arbitrary coefficients. The state $|i\rangle_A$ of atom A is to be transferred to another atom B which is
elsewhere. Preparing the cavity in the state $|0,1\rangle$ (i.e., initially one photon in the $b$ mode), we send the atom A through the cavity for a time equivalent to a $\pi$ pulse. After atom A comes out of the cavity, the atom B in state \begin{equation}
\label{state2}|i'\rangle=\alpha'|g\rangle+\beta'|f\rangle \end{equation} is sent through the cavity. Here $\alpha'$ and $\beta'$ are arbitrary coefficients and need not be known. The atom B also experiences a $\pi$ pulse during its interaction with the cavity. The entire process can be described as follows: \begin{eqnarray}
&|i\rangle_A&|0,1\rangle\nonumber\\ &\downarrow&~~ \pi~\textrm{pulse on atom A}\nonumber \\
&|g\rangle_A&(\alpha |0,1\rangle -\beta |1,0\rangle)\nonumber\\ \label{qst}&\downarrow&~~ \textrm{B atom enters}\\
&|g\rangle_A&|i'\rangle_B(\alpha |0,1\rangle -\beta |1,0\rangle)\nonumber\\ &\downarrow& ~~\pi~\textrm{pulse on atom B} \nonumber\\
&|g\rangle_A&|i\rangle_B(\alpha'|0,1\rangle -\beta'|1,0\rangle)\;.\nonumber \end{eqnarray}
If one prepares the cavity initially in the state $|1,0\rangle$, then following a
similar sequence, the final state will be $-|f\rangle_A|i\rangle_B(\alpha'|0,1\rangle -\beta'|1,0\rangle)$. Note that in either case the atom B has acquired the state $|i\rangle$ of atom A, i.e., the
state $|i\rangle$ is transferred from atom A to atom B.
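The sequence (\ref{qst}) can also be verified numerically. The sketch below (our check, with arbitrary test amplitudes and $\hbar=1$) applies two $\pi$-pulse unitaries built from $H_{\mathrm{eff}}$ in the single-photon sector and confirms that atom B ends up in the state $|i\rangle$ while atom A is left in $|g\rangle$:

```python
import numpy as np
from scipy.linalg import expm

lam = 1.0                         # effective coupling g^2/Delta (hbar = 1)
T = np.pi/(2*lam)                 # pi-pulse time, 2 g^2 T / Delta = pi

# Single-photon sector: atom basis {|g>=0, |f>=1}, field basis {|0,1>=0, |1,0>=1}.
# H_eff = -lam*M in the basis |atom>|field>; M couples |g>|1,0> <-> |f>|0,1>.
M = np.zeros((4, 4))
M[1, 1] = M[2, 2] = M[1, 2] = M[2, 1] = 1.0
U4 = expm(1j*lam*T*M).reshape(2, 2, 2, 2)   # exp(-iHT), indices [a',f',a,f]

def pulse_on(psi, atom):
    """Apply a pi pulse between the cavity field and atom 0 (=A) or 1 (=B)."""
    if atom == 0:
        return np.einsum('afAF,ABF->aBf', U4, psi)
    return np.einsum('bfBF,ABF->Abf', U4, psi)

g_, f_ = np.eye(2)                # atomic basis vectors |g>, |f>
c01, c10 = np.eye(2)              # field basis vectors |0,1>, |1,0>

alpha, beta = 0.6, 0.8            # "unknown" state of atom A (test values)
ap, bp = 1/np.sqrt(2), 1j/np.sqrt(2)   # arbitrary state of atom B

psi = np.einsum('A,B,F->ABF', alpha*g_ + beta*f_, ap*g_ + bp*f_, c01)
psi = pulse_on(psi, 0)            # atom A crosses the cavity
psi = pulse_on(psi, 1)            # atom B crosses the cavity

# Final state of Eq. (qst): |g>_A |i>_B (alpha'|0,1> - beta'|1,0>)
target = np.einsum('A,B,F->ABF', g_, alpha*g_ + beta*f_, ap*c01 - bp*c10)
assert np.allclose(psi, target)
```

Note that the check works for a complex $\beta'$ as well, illustrating that the protocol requires no knowledge of either atomic state.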
\begin{figure*}
\caption{(a) Variation of the fidelity $F(T)$ of mapping the state of the atom A in the cavity C$_1$ with $\kappa/g$. We have assumed that the cavity decay rates are the same for both the modes and $\Delta=10g$. (b) Variation of the fidelity $F$ calculated at time $2T+\tau$, with the time-delay $\tau$ between the atoms for $\kappa=0.002g$ and $\Delta=10g$.}
\label{fig3}
\end{figure*}
More generally, our QST protocol can be written as \begin{equation}
|i\rangle_A|i'\rangle_B (\gamma|0,1\rangle+\delta|1,0\rangle)_{\mathrm{cav}} \stackrel{U(\pi)}{\longrightarrow}
(\gamma|g\rangle-\delta|f\rangle)_A|i\rangle_B|\psi\rangle_{\mathrm{cav}}\;, \end{equation} where $U(\pi)=U_A(\pi)U_B(\pi)$, $U_k(\pi)$ ($k\in $ A,B) $[=\exp\{-iH_{\mathrm{eff}}T/\hbar\}]$ denotes the $\pi$-pulse operation on the atom $k$, and \begin{equation}
|\psi\rangle_{\mathrm{cav}}=\alpha'|0,1\rangle_{\mathrm{cav}}-\beta'|1,0\rangle_{\mathrm{cav}}\;. \label{psicav} \end{equation}
Our protocol has interesting features: (a) the initial states of the atoms can be arbitrary, (b) the field state can also be an arbitrary superposition of $|0,1\rangle$
and $|1,0\rangle$. Note that in the case of a two-level atom interacting with a resonant single-mode cavity, the QST protocol from one atom to another suffers from a relative-phase problem, which can be corrected either by a conditional phase shift, essentially a two-qubit operation (see Eq.~(3.8) of Ref.~\cite{haroche}), or by applying a resonant microwave field to the atomic qubit.
We note that if the initial state of the atom B is $|g\rangle$ (or $|f\rangle$)
and the cavity is initially in the state $|0,1\rangle$ (or $|1,0\rangle$), then we can not only transfer the state of atom A to B, but also interchange the states between them. However, the QST protocol described here cannot be interpreted as a SWAP gate, since in the usual version of such a gate the atoms A and B must interact with the field simultaneously. We also note that in the process of coherence transfer between two atoms using, for example, the scheme of Ref.~\cite{pelli}, the atoms must be addressed by the pulses simultaneously, which is essentially a local interaction. In the present protocol, the atoms interact with the $\pi$ pulse sequentially; this is essentially a non-local process.
Extending the above idea of QST to a number of atoms, the state of any atom in a sequence can be transferred to the next atom, which passes through the cavity after the former leaves it. The procedure is shown schematically in Fig.~\ref{fig2}.
Here the atoms A, B, C, etc. are sent through another identical bimodal cavity, initially in the state $|0,1\rangle$. After passing through
this cavity, the atom C is again prepared in the state $|i\rangle$. Thus, using a second cavity in this way, we can transfer the state of the first atom A to a third atom C. Clearly, with $n$ cavities in such a sequence, the state of atom A could be transferred to the $(n+1)$-th atom.
\subsection*{Effects of decoherence - fidelity of QST protocol} Decoherence is a strong limiting factor in the realization of any quantum computational protocol. The interaction of the atom and the cavity with the environment causes them to decay and results in decoherence. Thus, one has to consider the effect of decoherence to examine with what efficiency the desired outcome can be produced. These calculations can be done in the density-matrix framework using the following Liouville equation: \begin{eqnarray} \dot{\rho}=-\frac{i}{\hbar}[H_{\mathrm{eff}},\rho]&-&\kappa_a(a^\dag a\rho-2a\rho a^\dag +\rho a^\dag a)\nonumber\\ &-&\kappa_b(b^\dag b\rho-2b\rho b^\dag +\rho b^\dag b)\;, \label{decay} \end{eqnarray} where $\kappa_a$ and $\kappa_b$ are the decay constants of the two modes and $H_{\mathrm{eff}}$ is given by Eq.~(\ref{effham}).
To investigate the effect of decoherence in the present case, let us consider
a possible scheme. We take $|g\rangle$ and $|f\rangle$ to be Rydberg levels, as in the experiments of Haroche and co-workers, and use a bimodal microwave cavity with similar parameters. If the cavity coupling constant $g$ is $2\pi\times 50$ kHz and the cavity decay constant $\kappa_a=\kappa_b=\kappa$ for each mode is $2\pi\times 100$ Hz, then $\kappa/g=0.002$. Further, for $\Delta=10g$, we calculate the cavity interaction time to be 50 $\mu$s for a $\pi$ pulse, which is consistent with the interaction times achievable in a microwave experiment. One sends the atoms with a velocity $\sim 10^2$ cm s$^{-1}$ through a few-cm-long cavity to achieve this interaction time. Using these parameters, we calculate the fidelity $F$ that the first step of the evolution (\ref{qst}) occurs. The variation of $F(T)$ with the decay constant $\kappa$ is shown in Fig.~\ref{fig3}(a), where $T$ is the interaction time of the atom with the cavity. Note that the probability that the state of the atom A is transferred to the cavity remains above $90\%$ for $\kappa=0.002g$. We next show [see Fig.~\ref{fig3}(b)] the variation of the fidelity $F(2T+\tau)$ of the entire process (\ref{qst}) with the time delay $\tau$ between the atoms A and B for $\kappa =0.002g$. It is clear that the probability that the atom B acquires the desired state remains above $80\%$ even at $g\tau=20$ (i.e., $\tau\approx 63$ $\mu$s).
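The fidelity of the first step can be reproduced with a small master-equation simulation. The sketch below (an independent check of ours, truncating the field to the zero- and one-photon states and working in units of $g$ with $\hbar=1$) integrates Eq.~(\ref{decay}) by exponentiating the vectorized Liouvillian, with $\kappa=0.002g$ and $\Delta=10g$:

```python
import numpy as np
from scipy.linalg import expm

g, Delta, kappa = 1.0, 10.0, 0.002        # units of g; kappa/g = 0.002
lam = g**2/Delta
T = np.pi*Delta/(2*g**2)                  # pi-pulse time (50 us for g = 2pi*50 kHz)

# Field basis truncated to {|0,1>, |1,0>, |0,0>} (at most one photon in total)
A = np.zeros((3, 3)); A[2, 1] = 1.0       # annihilation operator a
B = np.zeros((3, 3)); B[2, 0] = 1.0       # annihilation operator b
Pg, Pf = np.diag([1., 0.]), np.diag([0., 1.])
sgf = np.array([[0., 1.], [0., 0.]])      # |g><f|

kr = np.kron
H = -lam*(kr(Pg, A.T@A) + kr(Pf, B.T@B) + kr(sgf, A.T@B) + kr(sgf.T, B.T@A))

def liouvillian(H, collapse):
    """Row-major vectorized generator of Eq. (decay)."""
    d = H.shape[0]; I = np.eye(d)
    L = -1j*(kr(H, I) - kr(I, H.T))
    for c, rate in collapse:
        n = c.T @ c
        L = L + rate*(2*kr(c, c) - kr(n, I) - kr(I, n.T))
    return L

L = liouvillian(H, [(kr(np.eye(2), A), kappa), (kr(np.eye(2), B), kappa)])

alpha, beta = 0.6, 0.8                    # test values of the unknown state
psi0 = np.kron([alpha, beta], [1., 0., 0.])        # |i>_A |0,1>
rho0 = np.outer(psi0, psi0)
rhoT = (expm(L*T) @ rho0.reshape(-1)).reshape(6, 6)

# Fidelity with the ideal first step of Eq. (qst): |g>_A(alpha|0,1> - beta|1,0>)
tgt = np.kron([1., 0.], [alpha, -beta, 0.])
F = np.real(tgt @ rhoT @ tgt)
assert F > 0.9
```

In this truncated model the fidelity comes out near $e^{-2\kappa T}\approx 0.94$, consistent with the value above $90\%$ quoted in the text.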
\begin{figure}
\caption{ Schematic diagram for the quantum network between distant cavities via atomic channel. Description of the figure is in the text.}
\label{fig4}
\end{figure}
\section{Extensions of quantum state transfer protocol} \subsection{Quantum networks} Now we show how the above QST protocol can be made useful in preparing a quantum network, in which long-lived atomic states are used to communicate between the two nodes of the network. We assume that there are two identical two-mode cavities C$_1$ and C$_2$, which are considered as two nodes of the
network. Let us consider that the cavity C$_1$ is initially in a state $|0,1\rangle$. To prepare this cavity in a superposition state \begin{equation}
\label{ano}|E\rangle_{\mathrm{cav}}=\alpha |0,1\rangle_{\mathrm{cav}} -\beta |1,0\rangle_{\mathrm{cav}}\;, \end{equation}
we send an atom A in
state $|i\rangle$ through the cavity (see Fig.~\ref{fig4}) such that the atom A experiences a $\pi$ pulse. Now our goal is to transfer this cavity state $|E\rangle_{\mathrm{cav}}$ to the other node C$_2$. For that, we send a second atom B through the cavity C$_1$ after A comes out of it. The atom B is thereby prepared in the state $|i\rangle$ through the evolution (\ref{qst}). This atom is now sent through the second node
C$_2$ which is initially in state $|0,1\rangle$.
In this way, the state $|E\rangle_{\mathrm{cav}}$ of node C$_1$ is transferred to the node C$_2$.
Extending the above idea to a number of distant nodes (cavities), we can thus transfer the state $|E\rangle_{\mathrm{cav}}$ from one node to another node of the proposed quantum network via a
quantum channel (atom). For example, to send this state $|E\rangle_{\mathrm{cav}}$ from C$_2$ to another node (say, C$_3$), we can send a third atom C through these two nodes subsequently.
We emphasize that our protocol of quantum networking is distinct from
the teleportation protocol of Davidovich {\it et al.\/} \cite{davi}. Their protocol relies on Bell-state measurements, whereas in our protocol no Bell measurement is ever made.
We further note that the present scheme can be used to spread entanglement between two distant cavities. For this, one first sends an atom A in $|g\rangle$ state through the first cavity C$_1$ prepared initially in the state $|1,0\rangle$ such that the atom experiences a $\pi/2$ pulse ($2g^2T/\Delta=\pi/2$). This would prepare the atom and the cavity in the following entangled state: \begin{equation}
|\Psi\rangle_{AC_1}=\frac{1}{\sqrt{2}}e^{i\pi/2}(|g\rangle_A|1,0\rangle_1+|f\rangle_A|0,1\rangle_1)\;. \end{equation} Next, the atom passes through
a second cavity C$_2$ initially in the state $|0,1\rangle$ and experiences a $\pi$ pulse. Thus, at the end of this process, the two cavities are prepared in an {\it entangled state of two modes\/} as \begin{equation}
|\Psi\rangle_{C_1C_2}=\frac{1}{\sqrt{2}}e^{i\pi/2}[|1,0\rangle_1|0,1\rangle_2-|0,1\rangle_1|1,0\rangle_2]\;. \end{equation} Clearly, one can thus spread the entanglement between the atom and one cavity to another distant cavity. Note that in our proposal, entanglement is created between the modes of two different cavities. Entanglement between the two modes of a single cavity has been produced in Ref.~\cite{raus}.
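The entanglement-spreading step can be checked numerically. The sketch below (our illustration; the overall and relative phases can differ from the displayed equations depending on phase conventions, so we test only convention-independent properties) applies a $\pi/2$ pulse in C$_1$ followed by a $\pi$ pulse in C$_2$ and verifies that the atom disentangles while the two cavity modes end up maximally entangled (equal Schmidt coefficients):

```python
import numpy as np
from scipy.linalg import expm

# Basis: atom {|g>,|f>}; each cavity restricted to {|0,1>, |1,0>}.
# The effective coupling block is the same |g>|1,0> <-> |f>|0,1> matrix as before.
M = np.zeros((4, 4))
M[1, 1] = M[2, 2] = M[1, 2] = M[2, 1] = 1.0

def U(theta):
    """Pulse of area theta = 2 g^2 t / Delta on one atom-cavity pair."""
    return expm(1j*(theta/2)*M).reshape(2, 2, 2, 2)

psi = np.zeros((2, 2, 2), dtype=complex)
psi[0, 1, 0] = 1.0                                  # |g>_A |1,0>_1 |0,1>_2

psi = np.einsum('afAF,AFc->afc', U(np.pi/2), psi)   # pi/2 pulse in C_1
psi = np.einsum('acAC,AfC->afc', U(np.pi), psi)     # pi pulse in C_2

assert np.allclose(psi[1], 0)                       # atom leaves in |g>
cav = psi[0]                                        # 2x2 amplitude matrix (C_1, C_2)
s = np.linalg.svd(cav, compute_uv=False)            # Schmidt coefficients
assert np.allclose(s, [1/np.sqrt(2), 1/np.sqrt(2)])
```

The equal Schmidt coefficients confirm that the two cavity modes carry one ebit of entanglement at the end of the sequence.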
\subsection{Storage and retrieval of an arbitrary superposition state of two modes of a cavity} We now discuss how the present $\pi$-pulse technique can be used to implement an efficient quantum memory for an arbitrary superposition of two cavity modes in which a single photon is shared between the modes. Let us consider a two-mode cavity in a superposition state of the two modes [see Eq.~(\ref{ano})] \begin{equation}
|E\rangle_{\mathrm{cav}}=\alpha|0,1\rangle_{\mathrm{cav}} -\beta|1,0\rangle_{\mathrm{cav}}\;, \label{ecav} \end{equation} where $\alpha$ and $\beta$ are {\it known\/} coefficients. Now we send an atom in state (\ref{state2}) through the cavity.
Applying a $\pi$-pulse on it, we can map the superposition of $|E\rangle_{\mathrm{cav}}$ into the state of the atom. This procedure can be written as \begin{equation}
|i'\rangle|E\rangle_{\mathrm{cav}}\longrightarrow -|i\rangle|\psi\rangle_{\mathrm{cav}}\;, \end{equation}
where $|i\rangle=\alpha|g\rangle+\beta|f\rangle$ and $|\psi\rangle_{\mathrm{cav}}$
is given by Eq.~(\ref{psicav}). Because the states $|g\rangle$ and $|f\rangle$ of the atom are radiatively long-lived, information about the state of the cavity can be stored in the atom for a sufficiently long time. To retrieve this information into the cavity,
we prepare a {\it second\/} cavity in either of the states $|0,1\rangle$ or $|1,0\rangle$
and send the atom in state $|i\rangle$ through the cavity. Upon applying a $\pi$-pulse, the cavity can again be prepared in the superposition state as before. The retrieval of superposition can be shown as
\begin{equation}|i\rangle |0,1\rangle_{\mathrm{cav}}\rightarrow |g\rangle |E\rangle_{\mathrm{cav}}\;,\; |i\rangle |1,0\rangle_{\mathrm{cav}}\rightarrow -|f\rangle |E\rangle_{\mathrm{cav}}\;. \label{decode} \end{equation}
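The retrieval step (\ref{decode}) can be verified directly. A compact check (our sketch, with illustrative coefficients including a complex $\beta$) using the $\pi$-pulse unitary in the single-photon sector:

```python
import numpy as np
from scipy.linalg import expm

lam = 1.0; T = np.pi/(2*lam)                 # pi pulse: 2 g^2 T / Delta = pi
# Basis |atom,field>: |g,01>, |g,10>, |f,01>, |f,10>
M = np.zeros((4, 4))
M[1, 1] = M[2, 2] = M[1, 2] = M[2, 1] = 1.0
U = expm(1j*lam*T*M)                         # exp(-i H_eff T), H_eff = -lam*M

alpha, beta = 0.6, 0.8j                      # known coefficients of |E>_cav
E = np.array([alpha, -beta])                 # alpha|0,1> - beta|1,0>

# Retrieval, Eq. (decode): |i>|0,1> -> |g>|E>,  |i>|1,0> -> -|f>|E>
i_01 = np.kron([alpha, beta], [1, 0])
i_10 = np.kron([alpha, beta], [0, 1])
assert np.allclose(U @ i_01, np.kron([1, 0], E))
assert np.allclose(U @ i_10, -np.kron([0, 1], E))
```

Running the same unitary in reverse order reproduces the storage map $|i'\rangle|E\rangle_{\mathrm{cav}}\to-|i\rangle|\psi\rangle_{\mathrm{cav}}$ discussed above.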
We should mention that the quantum memory proposed here for the cavity state is expected to work better than direct storage in the cavity, since the information is stored in the long-lived
atomic states $|g\rangle$ and $|f\rangle$. However, the transfer time of the cavity state to the atom is limited by the cavity holding time, and the atom must stop interacting with the cavity before the cavity field decays. We also note that if the two modes are degenerate and correspond to two states of circular polarization, then (\ref{ecav}) can be viewed as a superposition of two polarization states of a photon. In such a case, our proposal corresponds to storage and retrieval of the polarization states of a photon.
\section{Conclusion} In conclusion, we have presented a protocol for the transfer of a quantum state from one atom to another atom. This protocol can be extended to a number of atoms passing through sequential cavities and thus one can set up a quantum network. We have further shown how an efficient quantum memory of arbitrary superposition of two cavity modes can be built up. Our proposals have certain advantages as we work with long-lived states of atoms. We provide a proper estimate of the efficiency of the state transfer protocol against cavity decoherence.
\end{document}
\begin{document}
\title{Testing nonclassicality in multimode fields:\\ a unified derivation of classical inequalities}
\author{Adam Miranowicz} \affiliation{Advanced Science Institute, RIKEN, Wako-shi, Saitama 351-0198, Japan} \affiliation{Faculty of Physics, Adam Mickiewicz University, PL-61-614 Pozna\'n, Poland}
\author{Monika Bartkowiak} \affiliation{Faculty of Physics, Adam Mickiewicz University, PL-61-614 Pozna\'n, Poland}
\author{Xiaoguang Wang} \affiliation{Advanced Science Institute, RIKEN, Wako-shi, Saitama 351-0198, Japan} \affiliation{Zhejiang Institute of Modern Physics, Department of Physics, Zhejiang University, Hangzhou 310027, China}
\author{Yu-xi Liu} \affiliation{Institute of Microelectronics, Tsinghua University, Beijing 100084, China} \affiliation{Tsinghua National Laboratory for Information Science and Technology (TNList), Tsinghua University, Beijing 100084, China} \affiliation{Advanced Science Institute, RIKEN, Wako-shi, Saitama 351-0198, Japan}
\author{Franco Nori} \affiliation{Advanced Science Institute, RIKEN, Wako-shi, Saitama 351-0198, Japan} \affiliation{Physics Department, The University of Michigan, Ann Arbor, Michigan 48109-1040, USA}
\date{\today}
\begin{abstract} We consider a way to generate operational inequalities to test nonclassicality (or quantumness) of multimode bosonic fields (or multiparty bosonic systems) that unifies the derivation of many known inequalities and allows one to propose new ones. The nonclassicality criteria are based on Vogel's criterion corresponding to analyzing the positivity of multimode $P$~functions or, equivalently, the positivity of matrices of expectation values of, e.g., creation and annihilation operators. We analyze not only monomials, but also polynomial functions of such moments, which can sometimes enable simpler derivations of physically relevant inequalities. As an example, we derive various classical inequalities which can be violated only by nonclassical fields. In particular, we show how the criteria introduced here easily reduce to the well-known inequalities describing: (a) multimode quadrature squeezing and its generalizations including sum, difference and principal squeezing, (b) two-mode one-time photon-number correlations including sub-Poisson photon-number correlations and effects corresponding to violations of the Cauchy-Schwarz and Muirhead inequalities, (c) two-time single-mode photon-number correlations including photon antibunching and hyperbunching, and (d) two- and three-mode quantum entanglement. Other simple inequalities for testing nonclassicality are also proposed. We have found some general relations between the nonclassicality and entanglement criteria, in particular, those resulting from the Cauchy-Schwarz inequality. It is shown that some known entanglement inequalities can be derived as nonclassicality inequalities within our formalism, while some other known entanglement inequalities can be seen as sums of more than one inequality derived from the nonclassicality criterion. This approach enables a deeper analysis of the entanglement for a given nonclassicality. \end{abstract}
\pacs{42.50.Ar, 42.50.Xa, 03.67.Mn}
\maketitle \pagenumbering{arabic}
\section{Introduction}
Testing whether a given state of a system cannot be described within a classical theory, has been one of the fundamental problems of quantum theory from its beginnings to current studies in, e.g., quantum optics~\cite{Glauber,Sudarshan,DodonovBook,VogelBook,MandelBook,PerinaBook,Walls79,Loudon80,Loudon87,Smirnov87,Klyshko96,Dodonov02}, condensed matter (see, e.g., Refs.~\cite{DodonovBook,Nori}), nanomechanics~\cite{Schwab05,Wei06}, and quantum biology (see, e.g., Ref.~\cite{qbiology}). Macroscopic quantum superpositions (being at the heart of the Schr\"odinger-cat paradox) and related entangled states (which are at the core of the Einstein-Podolsky-Rosen paradox and Bell's theorem) are famous examples of nonclassical states which are not only physical curiosities but now fundamental resources for quantum-information processing~\cite{Nielsen}.
All states (or phenomena) are quantum, i.e., nonclassical. Thus, it is quite arbitrary to call some states ``classical''. Nevertheless, some states are closer to their classical approximation than other states. The most classical {\em pure} states of the harmonic oscillator are coherent states. Thus, usually, they are considered classical, while all other pure states of the harmonic oscillator are deemed nonclassical. The nonclassicality criterion for {\em mixed} states is more complicated and is based on the Glauber-Sudarshan $P$~function~\cite{Glauber,Sudarshan}. A commonly accepted formal criterion which enables one to distinguish nonclassical from classical states reads as follows~\cite{DodonovBook,VogelBook,MandelBook,PerinaBook}: A quantum state is {\em nonclassical} if its Glauber-Sudarshan $P$ function cannot be interpreted as a {\em true} probability density. Note that, according to this definition, any entangled state is nonclassical, but not every separable state is classical.
Various operational criteria of nonclassicality (or quantumness) of single-mode fields were proposed~(see, e.g., Refs.~\cite{DodonovBook,VogelBook,Richter02,Rivas} and references therein). In particular, Agarwal and Tara~\cite{Agarwal92}, Shchukin, Richter and Vogel (SRV)~\cite{NCL1,NCL2} proposed nonclassicality criteria based on matrices of moments of annihilation and creation operators for single-mode fields. Moreover, an efficient method for measuring such moments was also developed by Shchukin and Vogel~\cite{SVdetect}.
It is not always sufficient to analyze a single-mode field, i.e., an elementary excitation of a normal mode of the field confined to a one-dimensional cavity. To describe the generation or interaction of two or more bosonic fields, the standard analysis of single-system nonclassicality should be generalized to the two- and multi-system (multimode) case. Simple examples of such bosonic fields are multimode number states, multimode coherent and squeezed light, or fields generated in multi-wave mixing, multimode scattering, or multi-photon resonance.
Here, we study in greater detail and modify an operational criterion of nonclassicality for multimode radiation fields of Vogel \cite{Vogel08}, which is a generalized version of the SRV nonclassicality criterion~\cite{NCL1,NCL2} for single-mode fields. It not only describes the multimode fields, but can also be applied in the analysis of the dynamics of radiation sources. This could be important for the study of, e.g., time-dependent correlation functions, which are related to time-dependent field commutation rules (see, e.g., subsections 2.7 and 2.8 in Ref.~\cite{VogelBook}).
A variety of multimode nonclassicality inequalities has been proposed in quantum optics~(see, e.g., textbooks~\cite{DodonovBook,VogelBook,MandelBook,PerinaBook}, reviews~\cite{Walls79,Loudon80,Loudon87,Klyshko96,Smirnov87}, and Refs.~\cite{Yuen76,Kozierowski77,Caves85,Reid86,Dalton86,Schleich87,Agarwal88,Luks88,Hillery89,Lee90,Zou90,Klyshko96pla,Miranowicz99a,Miranowicz99b,An99,An00,Jakob01}) and tested experimentally (see, e.g., Refs.~\cite{Clauser74,Kimble77,Short83,Slusher85,Grangier86,Hong87,Lvovsky02}). The nonclassicality criterion described here enables a simple derivation of them. Moreover, it offers an effective way to derive new inequalities, which might be useful in testing the nonclassicality of specific states generated in experiments.
It is worth noting that we are analyzing nonclassicality criteria but {\em not} a degree of nonclassicality. We admit that the latter problem is experimentally important and a few ``measures'' of nonclassicality have been proposed~\cite{Hillery87,Lee92,Lutkenhaus95,Dodonov00,Marian02,Dodonov03,Malbouisson03,Kenfack04,Asboth05,Boca09}.
Analogously to the SRV nonclassicality criteria, Shchukin and Vogel~\cite{SV05} proposed an entanglement criterion based on the matrices of moments and partial transposition. This criterion was later amended~\cite{MP06} and generalized~\cite{MPHH09} to replace partial transposition by nondecomposable positive maps and contraction maps (e.g., realignment). A similar approach for entanglement verification, based on the construction of matrices of expectation values, was also investigated in Refs.~\cite{Rigas06,Korbicz06,Moroder08,Haseler08}.
Here we analyze relations between classical inequalities derived from the two- and three-mode nonclassicality criteria and the above-mentioned entanglement criterion.
The article is organized as follows: In Sect.~\ref{Sect2}, a nonclassicality criterion for multimode bosonic fields is formulated. We apply the criterion to rederive known and a few apparently new nonclassicality inequalities. In subsection~\ref{Sect3a}, we summarize the Shchukin-Vogel entanglement criterion~\cite{SV05,MP06}. In subsection~\ref{Sect3b}, we apply it to show that some known entanglement inequalities (including those of Duan {\em et al.}~\cite{Duan} and Hillery and Zubairy~\cite{Hillery06}) exactly correspond to unique nonclassicality inequalities. In subsection~\ref{Sect3c}, we analyze such entanglement inequalities (including Simon's criterion~\cite{Simon}) that are represented apparently not by a single inequality but by sums of inequalities derived from the nonclassicality criterion. Moreover, other entanglement inequalities are derived in subsection~\ref{Sect3c2}. The discussed nonclassicality and entanglement criteria are summarized in Tables~I and~II. We conclude in Sect.~\ref{Sect4}.
\section{Nonclassicality criteria for multimode fields\label{Sect2}}
An $M$-mode bosonic state $\hat\rho$ can be completely described by the Glauber-Sudarshan $P$~function defined by~\cite{Glauber,Sudarshan}: \begin{eqnarray}
\hat\rho &=& \intda P(\bm{\alpha,\alpha}^*)|\bm{\alpha}\rangle\langle \bm{\alpha}|,
\label{N01} \end{eqnarray}
where $|\bm{\alpha}\rangle= \prod_{m=1}^M|\alpha_m\rangle$ and $|\alpha_m\rangle$ is the $m$th-mode coherent state, i.e., the eigenstate of the $m$th-mode annihilation operator $\hat a_m$, $\bm{\alpha}$ denotes the complex multivariable $(\alpha_1,\alpha_2,...,\alpha_M)$, and ${\rm d}^2 \bm{\alpha}=\prod_{m}{\rm d}^2\alpha_m$. The density matrix $\hat\rho$ can be supported on the tensor product of either infinite-dimensional or finite-dimensional Hilbert spaces. For the sake of simplicity, we assume the number $M$ of modes to be finite, but our results can readily be generalized to an infinite number of modes.
A criterion of nonclassicality is usually formulated as follows~\cite{Titulaer65}: \begin{criterion} A multimode bosonic state $\hat\rho$ is considered to be nonclassical if its Glauber-Sudarshan $P$~function cannot be interpreted as a classical probability density, i.e., it is nonpositive or more singular than Dirac's delta function. Conversely, a state is called classical if it is described by a $P$~function being a classical probability density. \end{criterion}
It is worth noting that Criterion~1 (and the following criteria) does not have a fundamental indisputable validity, and it was the subject of criticism by, e.g., W\"unsche~\cite{Wunsche04}, who made the following two observations. (i) In the vicinity of any classical state there are nonclassical states, as can be illustrated by analyzing modified thermal states. So, arbitrarily close to any classical state there is a nonclassical state giving, to arbitrary precision, exactly the same outcomes as for the classical state in any measurement. Note that analogous problems can be raised for entanglement criteria~\cite{MPHH09} for continuous-variable systems, as in the vicinity of any separable state there are entangled states.~\footnote{It is worth stressing that this is the case only for continuous-variable systems: in the finite dimensional case, the set of separable states has finite volume.} (ii) There are intermediate quasiclassical (or unorthodox classical) states, which {\em cannot} be clearly classified as classical or nonclassical according to Criterion~1. This can be illustrated by analyzing the squeezing of thermal states, which does not lead immediately from classical to nonclassical states.
Due to the singularity of the $P$~function, Criterion~1 is not operationally useful as it is extremely difficult (although sometimes possible~\cite{Kiesel}) to directly reconstruct the $P$~function from experimental data.
Recently, Shchukin, Richter and Vogel~\cite{NCL1,NCL2} proposed a hierarchy of operational criteria of nonclassicality of single-mode bosonic states. This approach is based on the normally ordered moments of, e.g., annihilation and creation operators or position and momentum operators. An infinite set of these criteria (by inclusion of the correction analogous to that given in Ref.~\cite{MP06}) corresponds to a single-mode version of Criterion~1.
Let us consider a (possibly infinite) countable set $\hat F=(\hat f_{1},\hat f_{2},\ldots,\hat f_{i},\ldots)$ of $M$-mode operators $\hat f_i\equiv \hat f_i (\hat {\bf a},\hat {\bf a}^\dagger)$, each a function of annihilation, $\hat {\bf a}\equiv (\hat a_1,\hat a_2,...,\hat a_M)$, and creation, $\hat {\bf a}^\dagger$, operators. For example, we may choose such operators as monomials \begin{eqnarray} \label{eq:product}
\hat f_{i}
= \prod_{m=1}^M (\hat a^\dagger_m)^{i_{2m-1}} \hat a_m^{i_{2m}}, \end{eqnarray} where $i$ stands in this case for the multi-index ${\bf i}\equiv (i_{1},i_{2},...,i_{2M})$, but the $\hat f_{i}$'s can be more complicated functions, for example polynomials in the creation and annihilation operators.
If \begin{equation} \label{N02}
\hat f
= \sum_{i} c_{i} \hat f_{i}, \end{equation} where $c_{i}$ are arbitrary complex numbers, then with the help of the $P$~function one can directly calculate the normally ordered (denoted by $::$) mean values of the Hermitian operator $\hat f^\dagger \hat f$ as follows~\cite{NCL1,Korbicz}: \begin{eqnarray}
\normal{\hat f^\dagger \hat f } &=& \intda |f(\bm{\alpha,\alpha}^*)|^2
P(\bm{\alpha,\alpha}^*) .
\label{N03} \end{eqnarray} The crucial observation of SRV~\cite{NCL1} in the derivation of their criterion is the following:
\begin{observation} \label{obs:obsSRV} If the $P$~function for a given state is a classical probability density, then ${\normal{\hat f^\dagger \hat f}}\ge 0$ for any function $\hat f$. Conversely, if $\normal{ \hat f^\dagger \hat f } < 0$ for some $\hat f$, then the $P$~function is not a classical probability density. \end{observation} The condition based on nonpositivity of the $P$~function is usually considered a necessary and sufficient condition of nonclassicality. In fact, as shown by Sperling~\cite{Sperling}, if the $P$~function is more singular than Dirac's $\delta$-function [e.g., given by the $n$th derivative of $\delta(\alpha)$ for $n=1,2,...$], then it is also nonpositive.
With the help of Eq.~(\ref{N02}), $\normal{\hat f^\dagger \hat f}$ can be given by \begin{eqnarray}
\normal{\hat f^\dagger \hat f} &=& \sum_{i,j} c^*_{i}c_{j}
M^{\rm (n)}_{ij}(\hat\rho) \label{N04} \end{eqnarray} in terms of the normally ordered correlation functions \begin{eqnarray}
M^{\rm (n)}_{ij}(\hat\rho) &=& {\rm Tr}\,(: \hat f_{i}^\dagger
\hat f_{j}:\, \hat\rho), \label{N05} \end{eqnarray} where the superscript $(n)$ (similarly to $:\,:$) denotes the normal order of field operators. In the special case of two modes, analyzed in detail in the next sections, and with the choice of $\hat f_i$ given by Eq.~\eqref{eq:product}, Eq.~(\ref{N05}) can be simply written as \begin{equation}
M^{\rm (n)}_{ij}(\hat\rho)
={\rm Tr}\,\big[:({\hat a}^{\dagger i_{1}}{\hat a}^{i_{2}}{\hat b}^{\dagger i_{3}}{\hat b}^{i_{4}})^\dagger ({\hat a}^{\dagger j_{1}}{\hat a}^{j_{2}}{\hat b} ^{\dagger j_{3}}{\hat b}^{j_{4}}):\hat\rho\big], \label{N06} \end{equation} where $\hat a=\hat a_1$ and $\hat b=\hat a_2$. It is worth noting that there is an efficient optical scheme \cite{SVdetect} for measuring the correlation functions~(\ref{N06}).
With a set $\hat F=(\hat f_{1},\hat f_{2},\ldots,\hat f_{i},\ldots)$ fixed, the correlations \eqref{N05} form a (possibly infinite) Hermitian matrix \begin{eqnarray}
M^{\rm (n)}(\hat\rho)
= [M^{\rm (n)}_{ij}(\hat\rho)].
\label{N07} \end{eqnarray} In order to emphasize the dependence of~(\ref{N07}) on the choice of $\hat F$, we may write $M^{\rm (n)}_{\hat F}(\hat\rho)$. Moreover, let $[M^{\rm (n)}(\hat\rho)]_{\bf r}$, with ${\bf r}=(r_1,\ldots,r_N)$, denote the $N \times N$ principal submatrix of $M^{\rm (n)}(\hat\rho)$ obtained by deleting all rows and columns except the ones labeled by $r_1,\ldots,r_N$.
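The row-and-column bookkeeping behind the principal submatrices $[M^{\rm (n)}(\hat\rho)]_{\bf r}$ is easy to mechanize. The following sketch (with a hypothetical $2\times 2$ matrix of moments, not taken from the text) enumerates all principal minors of a small Hermitian matrix, which is exactly what the criterion below inspects.

```python
from itertools import combinations

def det(M):
    """Laplace-expansion determinant of a small square matrix (list of lists)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def principal_minors(M):
    """All principal minors det [M]_r for r = (r1 < ... < rN): submatrices
    obtained by keeping only the rows/columns listed in r."""
    n = len(M)
    return {r: det([[M[i][j] for j in r] for i in r])
            for N in range(1, n + 1) for r in combinations(range(n), N)}

# Hypothetical 2x2 Hermitian matrix of moments with a negative principal minor:
M = [[1, 2], [2, 1]]
minors = principal_minors(M)
# minors[(0, 1)] = 1 - 4 = -3 < 0, so a state with these moments is nonclassical.
```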
Analogously to Vogel's approach \cite{Vogel08}, by applying Sylvester's criterion (see, e.g., \cite{Strang,MP06}) to the matrix~(\ref{N07}), a generalization of the single-mode SRV criterion for multimode fields can be formulated as follows: \begin{criterion} For any choice of $\hat F=(\hat f_{1},\hat f_{2},\ldots,\hat f_{i},\ldots)$, a multimode state $\hat\rho$ is nonclassical if there exists a negative principal minor, i.e., $\det [M_{\hat F}^{\rm (n)}(\hat\rho)]_{\bf r}< 0$, for some ${\bf r}\equiv(r_1,\ldots,r_N)$, with $1\le r_1< r_2<\ldots < r_{N}$. \end{criterion} According to Vogel~\cite{Vogel08}, this criterion (and the following Criterion~3) can also be applied to describe the nonclassicality of space-time correlations and the dynamics of radiation sources by applying the generalized $P$~function: \begin{eqnarray}
P(\bm{\alpha,\alpha}^*) &=& \left\langle \dd \prod_{i=1}^M
\delta(\hat a_i - \alpha_i)\dd \right\rangle, \label{VogelP} \end{eqnarray} where $\bm{\alpha}=(\alpha_1,...,\alpha_M)$, with $\alpha_i=\alpha_i({\bf r}_i,t_i)$ depending on the space-time arguments $({\bf r}_i,t_i)$. In contrast to the standard definition of the $P$~function, the symbol $\dd\,\dd$ denotes both normal ordering of the field operators and time ordering, i.e., time arguments increase to the right (left) in products of creation (annihilation) operators~\cite{VogelBook}. As an example, we apply this generalized criterion to show the nonclassicality of photon antibunching and hyperbunching effects in Appendix~C.
Note that Criterion~2, even for the choice of $\hat f_i$ given by Eq.~\eqref{eq:product} and in the special case of single-mode fields, does not exactly reduce to the SRV criterion as it appeared in Ref.~\cite{NCL2}. To show this, let us denote by $M^{\rm (n)}_N(\hat\rho)$ the submatrix corresponding to the first $N$ rows and columns of $M^{\rm (n)}(\hat\rho)$. According to the original SRV criterion (Theorem~3 in Ref.~\cite{NCL2}), a single-mode state is nonclassical if there exists an $N$ such that the leading principal minor is negative, i.e., $\det M^{\rm (n)}_N(\hat\rho)<0$. The criterion formulated in this way fails for singular (i.e., $\det M^{\rm (n)}_N(\hat\rho) =0$) matrices of moments, as explained in detail in the context of quantum entanglement in Ref.~\cite{MP06}.
Considering $[M^{\rm (n)}_{\hat F}(\hat\rho)]_{\bf r}$ is equivalent to considering the correlation matrix corresponding to a subset $\hat F'\subset \hat F$, with $\hat F'=(\hat f_{r_1},\hat f_{r_2},...,\hat f_{r_N})$, i.e., $[M^{\rm (n)}_{\hat F}(\hat\rho)]_{\bf r}=M^{\rm (n)}_{\hat F'}(\hat\rho)$. We note that the subset symbol is used for brevity although it is not very precise, as the $\hat F$s are ordered collections of operators.
Thus, by denoting \begin{eqnarray}
M^{\rm (n)}_{\hat F'}(\hat\rho) \equiv
[M^{\rm (n)}_{\hat F}(\hat\rho)]_{\bf r} \hspace{4.5cm} \nonumber\\ = \Mat{ \normal{\hat f_{r_1}^\dagger \hat f_{r_1}} & \normal{\hat f_{r_1}^\dagger \hat f_{r_2}} & \cdots & \normal{\hat f_{r_1}^\dagger \hat f_{r_N}} } { \normal{\hat f_{r_2}^\dagger \hat f_{r_1}} & \normal{\hat f_{r_2}^\dagger \hat f_{r_2}} & \cdots & \normal{\hat f_{r_2}^\dagger \hat f_{r_N}} } { \vdots & \vdots & \ddots & \vdots } { \normal{\hat f_{r_N}^\dagger \hat f_{r_1}} & \normal{\hat f_{r_N}^\dagger \hat f_{r_2}} & \cdots & \normal{\hat f_{r_N}^\dagger \hat f_{r_N}} },
\label{N08} \end{eqnarray} and its determinant \begin{eqnarray} \dfnnew{\hat F'}(\hat\rho)\equiv \det \,
M^{\rm (n)}_{\hat F'}(\hat\rho),
\label{N08a} \end{eqnarray} we can equivalently rewrite Criterion~2 as: \begin{criterion} A multimode bosonic state $\hat\rho$ is nonclassical if there exists $\hat F$, such that $\dfnnew{\hat F}(\hat\rho)$ is negative. \end{criterion} This can be written more compactly as: \begin{eqnarray}
\hat\rho \textrm{~is classical} &\Rightarrow& \forall { \hat F}: \quad \dfn(\hat\rho) \ge 0, \nonumber \\
\hat\rho \textrm{~is nonclassical} &\Leftarrow& \exists {\hat F}: \quad \dfn(\hat\rho) <0. \label{N09} \end{eqnarray} In the following, we use the symbol $\ncl$ to emphasize that a given inequality {\em can} be satisfied only for {\em nonclassical} states and the symbol $\cl$ to indicate that an inequality {\em must} be satisfied for all {\em classical} states.
Let us comment further on the relation between Criteria 2 and 3 and the SRV criterion (in its amended version that takes into account the issue of singular matrices). Criterion 3 corresponds to checking the positivity of an infinite matrix $M_{ij}^{(n)}$ defined as in \eqref{N05} with the $\hat f_i$'s chosen to be all possible monomials given by Eq.~\eqref{eq:product}. Considering the positivity of larger and larger submatrices of this matrix leads to a hierarchy of criteria: testing the positivity of some submatrix $M^{(n)}_N$ leads to a stronger criterion than testing the positivity of a submatrix $M^{(n)}_{N'}$, with $N'< N$. Nonetheless, when one invokes Sylvester's criterion in order to transform the test of positivity of a matrix into the test of positivity of its many principal minors, it is arguably difficult to speak of a ``hierarchy''. Indeed, because of the issue of the possible singularity of the matrix we cannot simply consider, e.g., leading principal minors involving larger and larger submatrices.
As regards the general formalism, by adding operators to the set $\hat F$, and therefore increasing the dimension of the matrix $M^{(n)}_{\hat F}$, one of course obtains a hierarchy of \emph{matrix} conditions on classicality. Nonetheless, also in our case, when moving to scalar inequalities by considering determinants, we face the issue of the possible singularity of matrices. Motivated also by this difficulty, in the present article we do not focus so much on the idea of a hierarchy of criteria, but rather explore the approach that, by using matrices of expectation values, it is possible to easily obtain criteria of nonclassicality and entanglement in the form of inequalities. As already explained, this is done by referring to Observation \ref{obs:obsSRV} and considering $\hat f_i$'s possibly more general than monomials, e.g., polynomials.
Indeed, when we choose a set of operators $\hat F=(\hat f_1,\hat f_2,\dots)$, compute the corresponding matrix of expectation values, and check its positivity, what we are doing is equivalent to checking the positivity of ${\normal{\hat f^\dagger \hat f}}$ for all $\hat f$'s that can be written as a linear combination of the operators in $\hat F$: $\hat f=\sum_ic_i\hat f_i$. As polynomials can be expanded into monomials, it is clear that checking the positivity of a matrix $M^{(n)}_{\hat F}$, with $\hat F$ consisting of polynomials, cannot give a stronger criterion than checking the positivity of a matrix $M^{(n)}_{\hat F'}$, where $\hat F'$ is given by all the monomials appearing in the elements of $\hat F$. Of course, to obtain a stronger \emph{matrix} criterion of classicality we pay a price in terms of the dimension of the matrix $M^{(n)}_{\hat F'}$, which is larger than that of $M^{(n)}_{\hat F}$. Further, as we will see, by considering general sets $\hat F$---that is, sets not containing only monomials---one can straightforwardly obtain interesting and physically relevant inequalities, which may be difficult to pinpoint when considering monomials as ``building blocks''. It is worth noting that the possibility of using polynomial functions of moments was also discussed in Ref.~\cite{SV05} in the context of entanglement criteria.
Finally, we remark that, to make the above criteria sensitive in detecting nonclassicality, the $\hat f_i$ must be chosen such that normal ordering actually matters in evaluating $M^{(n)}$; in particular, each $\hat f_i$ must contain some combination of creation and annihilation operators. On the contrary, the inclusion of only creation or only annihilation operators would give a matrix $M^{(n)}$ that is positive for every state, and thus completely useless for detecting nonclassicality.
\begin{table*}[tbp] \caption{Criteria for single-time nonclassical effects in two-mode (TM) and multimode (MM) fields, and two-time nonclassical effects in single-mode (SM) fields.} \begin{center} \begin{tabular}{l c c} \hline\hline {\bf Nonclassical effect} & {\bf Criterion} & Equations \\ \hline\hline MM quadrature squeezing & $\dn(1,\hat X_{\bm{\phi}})<0$ & (\ref{N10}),~(\ref{N15})\\[5pt] TM principal squeezing of Luk\v{s} {\em et al.}~\cite{Luks88} & $\dn(\Delta \hat a_{12}^\dagger,\Delta \hat a_{12})=\dn(1, \hat a_{12}^\dagger, \hat a_{12})<0$ & (\ref{N16})--(\ref{z36}) \\[5pt] TM sum squeezing of Hillery~\cite{Hillery89} & $\dn(1,\hat V_{\phi})<0$ & (\ref{N20}),~(\ref{N22}) \\[5pt] MM sum squeezing of An-Tinh~\cite{An99} & $\dn(1,\hat {\cal V}_{\phi})<0$ & (\ref{z9}),~(\ref{z10})\\[5pt] TM difference squeezing of Hillery~\cite{Hillery89} & $\dn(1,\hat W_{\phi})<- \frac12 \min \left(\mean{\hat n_1},\mean{\hat n_2}\right)$ & (\ref{N23}),~(\ref{N26}),~(\ref{z1})\\[5pt]
MM difference squeezing of An-Tinh~\cite{An00}& $\dn(1,\hat {\cal W}_{\phi})<-\frac14 \left||\mean{\hat C}|-\mean{\hat D}\right|$ & (\ref{z18}),~(\ref{z19}) \\[5pt] TM sub-Poisson photon-number correlations & $\dn(1,\hat n_1 \pm\hat n_2)<0$ & (\ref{N28}),~(\ref{N30}) \\[5pt] Cauchy-Schwarz inequality violation& $\dn(\hat f_1,\hat f_2)<0$ & (\ref{x94}),~(\ref{x96}) \\[5pt] TM Cauchy-Schwarz inequality violation via Agarwal's test~\cite{Agarwal88}& $\dn(\hat n_1,\hat n_2)<0$ & (\ref{x15}),~(\ref{x17}) \\[5pt] TM Muirhead inequality violation via Lee's test~\cite{Lee90}& $\dn(\hat n_1-\hat n_2)<0$ & (\ref{x30}),~(\ref{x30a}) \\[5pt] SM photon antibunching& $\dn[\hat n(t),\hat n(t+\tau)]<0$ & (\ref{y05}),~(\ref{x23}) \\[5pt] SM photon hyperbunching& $\dn[\Delta\hat n(t),\Delta\hat n(t+\tau)]$ & (\ref{y05a}),~(\ref{x27}),~(\ref{z34}) \\ &\quad $=\dn[1,\hat n(t),\hat n(t+\tau)]<0$ & \\[4pt] Other TM nonclassical effects & $\dn(1,\hat a_1\hat a_2,\hat a_1^\dagger\hat a_2^\dagger)<0$ & (\ref{x72}) \\[5pt] & $\dn(1,\hat a_1\hat a_2^\dagger,\hat a_1^\dagger \hat a_2)<0$ & (\ref{x78}) \\[5pt] & $\dn(1,\hat a_1+\hat a_2^\dagger,\hat a_1^\dagger +\hat a_2)<0$ & (\ref{x84}) \\[5pt] & $\dn(1,\hat a_1+\hat a_2,\hat a_1^\dagger +\hat a_2^\dagger)<0$ & (\ref{x90}) \\[5pt] & $\dn(1,\hat a_1,\hat a_1^\dagger,\hat a_2^\dagger,\hat a_2)<0$ & (\ref{x36}) \\[2pt] \hline \hline \end{tabular} \end{center} \end{table*}
\begin{table*}[tbp] \caption{Entanglement criteria via nonclassicality criteria.} \begin{center} \begin{tabular}{l c c c} \hline\hline Reference & {\bf Entanglement criterion} & {\bf Equivalent nonclassicality criterion} & Equations \\ \hline\hline Duan {\em et al.}~\cite{Duan} & $\dPT(\Delta\hat a_1,\Delta\hat a_2)=\dPT(1,\hat a_1,\hat a_2)<0$ & $\dn(\Delta\hat a_1,\Delta\hat a_2^\dagger)=\dn(1,\hat a_1,\hat a_2^\dagger)<0$ & (\ref{x7})--(\ref{z30}) \\[5pt] Simon~\cite{Simon} & $\dPT(1,\hat a_1,\hat a_1^\dagger,\hat a_2,\hat a_2^\dagger)<0$ & $\dn(1,\hat a_1,\hat a_1^\dagger,\hat a_2^\dagger,\hat a_2) +\dn(1,\hat a_1,\hat a_2^\dagger)$ & (\ref{x43}) \\[5pt]
& & ~~~$+\dn(1,\hat a_1,\hat a_1^\dagger,\hat a_2^\dagger)+\dn(1,\hat a_1,\hat a_2^\dagger,\hat a_2)<0$ & \\[5pt] Mancini {\em et al.}~\cite{Mancini} & $\dPT(1,\hat a_1+\hat a_2,\hat a_1^\dagger +\hat a_2^\dagger)<0$ & $\dn(1,\hat a_1+\hat a_2^\dagger,\hat a_1^\dagger+\hat a_2) + 2 \dn(1,\hat a_1+\hat a_2^\dagger) +1<0$ & (\ref{x81}),~(\ref{x57}) \\[5pt] Hillery \& Zubairy~\cite{Hillery06} & $\dPT(1,\hat a_1\hat a_2)<0$ & $\dn(1,\hat a_1\hat a_2^\dagger)<0$ & (\ref{x1}),~(\ref{x2}) \\[5pt] {\em ditto} & $\dPT(1,\hat a_1^m \hat a_2^n)<0$ & $\dn(1,\hat a_1^m (\hat a_2^\dagger)^n)<0$ & (\ref{x60})--(\ref{x63}) \\[5pt] {\em ditto} & $\dPT(\hat a_1,\hat a_2)<0$ & $\dn(\hat a_1,\hat a_2^\dagger)<0$ & (\ref{x4}),~(\ref{x6}) \\[5pt] {\em ditto} & $\dPT(1,\hat a_1\hat a_2\hat a_3)<0$ & $\dn(1,\hat a_1^\dagger\hat a_2\hat a_3)<0$ & (\ref{x34}),~(\ref{x46}) \\[5pt] Miranowicz {\em et al.}~\cite{MPHH09}& $\dPT(\hat a_1,\hat a_2\hat a_3)<0$ & $\dn(\hat a_1^\dagger,\hat a_2\hat a_3)<0$ & (\ref{x49}) \\[4pt] Other entanglement tests
& $\dPT(1,\hat a_1^{k} \hat a_2^{l}\hat a_3^{m})<0$ & $\dn(1,(\hat a_1^\dagger)^{k} \hat a_2^{l}\hat a_3^{m})<0$ & (\ref{z24}),~(\ref{z25}) \\[5pt]
& $\dPT(\hat a_1^{k}, \hat a_2^{l}\hat a_3^{m})<0$ & $\dn((\hat a_1^\dagger)^{k}, \hat a_2^{l}\hat a_3^{m})<0$ & (\ref{z26}),~(\ref{z27}) \\[5pt]
& $\dPT(1,\hat a_1\hat a_2,\hat a_1^\dagger \hat a_2^\dagger)<0$ & $\dn(1,\hat a_1\hat a_2^\dagger,\hat a_1^\dagger\hat a_2) + (\mean{\hat n_{1}+\hat n_{2}}+1)\, \dn(1,\hat a_1\hat a_2^\dagger)<0$ & (\ref{x69}),~(\ref{x56}) \\[5pt]
& $\dPT(1,\hat a_1\hat a_2^\dagger,\hat a_1^\dagger\hat a_2)<0$ & $\dn(1,\hat a_1\hat a_2,\hat a_1^\dagger\hat a_2^\dagger)+\mean{\hat n_1}\mean{\hat n_2} + \mean{\hat n_{1}+\hat n_{2}}\, \dn(1,\hat a_1\hat a_2)<0$ & (\ref{x75}),~(\ref{x59}) \\[5pt]
& $\dPT(1,\hat a_1+\hat a_2,\hat a_1^\dagger +\hat a_2^\dagger)<0$ & $\dn(1,\hat a_1+\hat a_2^\dagger,\hat a_1^\dagger+\hat a_2) + 2 \dn(1,\hat a_1+\hat a_2^\dagger)<0$ & (\ref{x87}),~(\ref{x58}) \\[2pt] \hline \hline \end{tabular} \end{center} \end{table*}
\subsection{Nonclassicality and the Cauchy-Schwarz inequality}
The Cauchy-Schwarz inequality (CSI) for operators can be written as follows (see, e.g., Ref.~\cite{MandelBook}): \begin{eqnarray}
\mean{\hat A^{\dagger} \hat A} \mean{\hat B^{\dagger} \hat B} &\ge&
|\mean{\hat A^{\dagger} \hat B}|^2, \label{x92} \end{eqnarray} where $\hat A$ and $\hat B$ are arbitrary operators for which the above expectations exist. Indeed, $\mean{\hat A^{\dagger} \hat B}\equiv {\rm Tr}\,(\hat\rho\,\hat{A}^\dagger \hat B)$ is a valid inner product because of the positivity of $\hat\rho$. Similarly, one can define a valid scalar product for a positive $P$~function. In detail, by identifying $\hat A=\hat f_1({\bf a,a}^\dagger)$ and $\hat B=\hat f_2({\bf a,a}^\dagger)$, one can define the scalar product \begin{equation} \label{x97} \begin{split} \normal{\hat f_i^{\dagger} \hat f_j}
= \intda f^*_i(\bm{\alpha,\alpha}^*) f_j(\bm{\alpha,\alpha}^*)
P(\bm{\alpha,\alpha}^*). \end{split} \end{equation} Then, a CSI can be written as: \begin{eqnarray}
\normal{\hat f_1^{\dagger} \hat f_1}
\normal{\hat f_2^{\dagger} \hat f_2} \cl
|\normal{\hat f_1^{\dagger} \hat f_2}|^2. \label{x94} \end{eqnarray} Such a CSI, for a given choice of operators $\hat f_1$ and $\hat f_2$, can be violated by some nonclassical fields described by a $P$~function that is not positive everywhere, that is, such that Eq.~\eqref{x97} does not actually define a scalar product. We then say that the state of the fields violates the CSI. The nonclassicality of states violating the CSI can be shown by analyzing Criterion~3 for $\hat F=(\hat f_1,\hat f_2)$, which results in \begin{eqnarray}
\dfn &=& \DET{\normal{\hat f_1^{\dagger} \hat f_1}
& \normal{\hat f_1^{\dagger} \hat f_2}}{
\normal{\hat f_1 \hat f_2^{\dagger}}
&\normal{\hat f_2^{\dagger} \hat f_2}} \ncl 0. \label{x96} \end{eqnarray}
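As a quick numerical illustration of Eq.~(\ref{x96}) (our own sketch, not from the original text), take $\hat f_1=\hat n_1$ and $\hat f_2=\hat n_2$, which reproduces Agarwal's test from Table~I. For a two-mode Fock state $|m,n\rangle$ the required normally ordered moments are $\normal{\hat n_1^2}=m(m-1)$, $\normal{\hat n_2^2}=n(n-1)$, and $\normal{\hat n_1\hat n_2}=mn$.

```python
from math import perm  # perm(m, 2) = m*(m-1), the falling factorial m!/(m-2)!

def csi_determinant_fock(m: int, n: int) -> int:
    """det of the 2x2 CSI matrix of Eq. (x96) with f1 = n1, f2 = n2,
    evaluated on the two-mode Fock state |m, n>."""
    f11 = perm(m, 2)   # <:n1^2:> = <a1†² a1²> = m(m-1)
    f22 = perm(n, 2)   # <:n2^2:> = n(n-1)
    f12 = m * n        # <:n1 n2:> = <n1 n2> = m n
    return f11 * f22 - f12 ** 2

# |1,1> violates the CSI: 0*0 - 1 = -1 < 0, hence it is nonclassical.
print(csi_determinant_fock(1, 1))  # prints -1
```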
\subsection{A zoo of nonclassical phenomena\label{Sect2b}}
In Table I, we present a variety of multimode nonclassicality criteria, which can be derived by applying Criterion 3 as shown in this subsection and in greater detail in Appendices A--C.
In the following, we give a few simple examples of other classical inequalities, which---to our knowledge---have not been discussed in the literature. In particular, we analyze inequalities based on determinants of the following form: \begin{eqnarray}
D(x,y,z) &=& \left|
\begin{array}{lll}
1 & x & x^* \\
x^* & z & y^* \\
x & y & z
\end{array}
\right|. \label{x66} \end{eqnarray}
(i) By applying Criterion~3 for $\hat F=(1,\hat a_1\hat a_2,\hat a_1^\dagger\hat a_2^\dagger)$, we obtain \begin{equation}
\dfn=D(\langle\hat a_1\hat a_2\rangle,\langle\hat a_1^2\hat a_2^2\rangle,\langle\hat n_1\hat n_2\rangle)\ncl 0, \label{x72} \end{equation} where $\hat n_1=\hat a_1^\dagger \hat a_1$ and $\hat n_2=\hat a_2^\dagger \hat a_2$.
\noindent (ii) For $\hat F=(1,\hat a_1\hat a_2^\dagger,\hat a_1^\dagger \hat a_2)$ one obtains
\begin{equation}
\dfn=D(\langle\hat a_1\hat a_2^\dagger\rangle,\langle\hat a_1^2(\hat a_2^\dagger)^2\rangle,\langle\hat n_1\hat n_2\rangle)\ncl 0. \label{x78} \end{equation} (iii) For $\hat F=(1,\hat a_1+\hat a_2^\dagger,\hat a_1^\dagger +\hat a_2)$, Criterion~3 leads to
\begin{equation}
\dfn=D(\langle\hat a_1+\hat a_2^\dagger\rangle,\langle(\hat a_1+\hat a_2^\dagger)^2\rangle,z)\ncl 0, \label{x84} \end{equation} where $z=\langle\hat n_1\rangle+\langle\hat n_2\rangle+2{\rm Re}\langle\hat a_1\hat a_2\rangle$.
\noindent (iv) For $\hat F=(1,\hat a_1+\hat a_2,\hat a_1^\dagger +\hat a_2^\dagger)$ one has \begin{equation}
\dfn=D(\langle\hat a_1+\hat a_2\rangle,\langle(\hat a_1+\hat a_2)^2\rangle,z)\ncl 0, \label{x90} \end{equation} where $z=\langle\hat n_1\rangle+\langle\hat n_2\rangle+2{\rm Re}\langle\hat a_1\hat a_2^\dagger\rangle$. These nonclassicality criteria, given by Eqs.~(\ref{x72})--(\ref{x90}), will be related to the entanglement criteria in subsection~\ref{Sect3c2}.
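These determinants are straightforward to evaluate numerically. The sketch below is our own illustration (the two-mode squeezed vacuum and its standard moments $\mean{\hat a_1\hat a_2}=\sinh r\cosh r$, $\mean{\hat a_1^2\hat a_2^2}=2\sinh^2 r\cosh^2 r$, and $\mean{\hat n_1\hat n_2}=\sinh^2 r\cosh^2 r+\sinh^4 r$ are not part of the original text); it implements $D(x,y,z)$ of Eq.~(\ref{x66}) and evaluates case (i), Eq.~(\ref{x72}), which comes out negative for all $r>0$.

```python
from math import sinh, cosh

def D(x, y, z):
    """Determinant of the 3x3 matrix in Eq. (x66); x and y may be complex."""
    x, y = complex(x), complex(y)
    return (z * z - abs(y) ** 2 - 2 * abs(x) ** 2 * z
            + 2 * (x * x * y.conjugate()).real)

def d_tmsv(r: float) -> float:
    """Case (i), F = (1, a1 a2, a1† a2†), for a two-mode squeezed vacuum."""
    s, c = sinh(r), cosh(r)
    x = s * c                  # <a1 a2>
    y = 2 * s**2 * c**2        # <a1^2 a2^2>
    z = s**2 * c**2 + s**4     # <n1 n2>
    return D(x, y, z)          # closed form: -sinh(r)^4 (cosh(r)^2 + sinh(r)^2)

print(d_tmsv(0.5) < 0)  # True: the squeezed vacuum violates Eq. (x72)
```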
Another example, which is closely related to the Simon entanglement criterion~\cite{Simon}, as will be shown in subsection~\ref{Sect3c1}, can be obtained from Criterion~3 assuming $\hat F=(1,\hat a_1,\hat a_1^\dagger,\hat a_2^\dagger,\hat a_2)$. Thus, we obtain: \begin{equation}
\dfn= \left|
\begin{array}{lllll}
1 & \mean{\hat a_1} & \mean{\hat a_1^\dagger} & \mean{\hat a_2^\dagger} & \mean{\hat a_2} \\
\mean{\hat a_1^\dagger} & \mean{\hat a_1^\dagger\hat a_1} & \mean{(\hat a_1^\dagger)^2} & \mean{\hat a_1^\dagger\hat a_2^\dagger} & \mean{\hat a_1^\dagger\hat a_2} \\
\mean{\hat a_1} & \mean{\hat a_1^2} & \mean{\hat a_1^\dagger\hat a_1} & \mean{\hat a_1\hat a_2^\dagger} & \mean{\hat a_1\hat a_2} \\
\mean{\hat a_2} & \mean{\hat a_1\hat a_2} & \mean{\hat a_1^\dagger\hat a_2} & \mean{\hat a_2^\dagger\hat a_2} & \mean{\hat a_2^2} \\
\mean{\hat a_2^\dagger} & \mean{\hat a_1\hat a_2^\dagger} & \mean{\hat a_1^\dagger\hat a_2^\dagger} & \mean{(\hat a_2^\dagger)^2} & \mean{\hat a_2^\dagger\hat a_2} \\
\end{array}
\right| \ncl 0. \label{x36} \end{equation}
\section{Entanglement and nonclassicality criteria}
Here, we express various two- and three-mode entanglement inequalities in terms of nonclassicality inequalities derived from Criterion~3, which are summarized in Table~II. First, we briefly describe the Shchukin-Vogel entanglement criterion, which enables the derivation of various entanglement inequalities.
\subsection{The Shchukin-Vogel entanglement criterion\label{Sect3a}}
The Criterion~3 of nonclassicality resembles the Shchukin-Vogel (SV) criterion~\cite{SV05,MP06,MPHH09} for distinguishing states with positive partial transposition (PPT) from those with nonpositive partial transposition (NPT). In analogy to Eqs.~(\ref{N06}) and~(\ref{N07}), one can define a matrix $M(\hat\rho)=[M_{ij}(\hat\rho)]$ of moments as follows: \begin{equation} M_{ij}(\hat\rho) ={\rm Tr}\,\big[({\hat a}^{\dagger i_{1}}{\hat a}^{i_{2}}{\hat b}^{\dagger i_{3}}{\hat b}^{i_{4}})^\dagger ({\hat a}^{\dagger j_{1}}{\hat a}^{j_{2}}{\hat b} ^{\dagger j_{3}}{\hat b}^{j_{4}})\hat\rho\big], \label{N06x} \end{equation} where the subscripts $i$ and $j$ correspond to multi-indices $(i_1,i_2,i_3,i_4)$ and $(j_1,j_2,j_3,j_4)$, respectively. Note that, contrary to Eq.~(\ref{N06}), the creation and annihilation operators are {\em not} normally ordered. As discussed in Ref.~\cite{MPHH09}, the matrix $M(\hat\rho)$ of moments for a separable state $\hat\rho$ is also separable, i.e., \begin{equation}
\hat\rho=\sum_i p_i \hat\rho^A_i\otimes\hat\rho^B_i \Rightarrow M(\hat\rho)=\sum_i p_i M^A(\hat\rho^A_i)\otimes M^B(\hat\rho^B_i),
\label{Nsep} \end{equation} where $p_i\geq0$, $\sum_ip_i=1$, $M^A(\hat\rho^A)=
\sum_{i'j'}M_{i'j'}(\hat\rho^A)|i'\rangle\langle j'|$ is expressed in a formal basis $\{|i'\rangle\}$ with $i'=(i_1,i_2,0,0)$ and $j'=(j_1,j_2,0,0)$; $M^B(\hat\rho^B)$ is defined analogously. Reference~\cite{SV05} proved the following criterion: \begin{criterion} A bipartite quantum state $\hat\rho$ is NPT if and only if $M(\hat\rho^\Gamma)$ is NPT. \label{t1} \end{criterion} The elements of the matrix of moments, $M(\hat\rho^\Gamma)=[M_{ij}(\hat\rho^\Gamma)]$, where $\Gamma$ denotes partial transposition in some fixed basis, can be simply calculated as \begin{eqnarray} M_{ij}(\hat\rho^\Gamma)
={\rm Tr}\,\big[({\hat a}^{\dagger i_{1}}{\hat a}^{i_{2}}{\hat b}^{\dagger i_{3}}{\hat b}^{i_{4}})^\dagger ({\hat a}^{\dagger j_{1}}{\hat a}^{j_{2}}{\hat b} ^{\dagger j_{3}}{\hat b}^{j_{4}})\hat\rho^\Gamma\big] \nonumber\\ ={\rm Tr}\,\big[({\hat a}^{\dagger i_{1}}{\hat a}^{i_{2}}{\hat b}^{\dagger j_{3}}{\hat b}^{j_{4}})^\dagger ({\hat a}^{\dagger j_{1}}{\hat a}^{j_{2}}{\hat b} ^{\dagger i_{3}}{\hat b}^{i_{4}})\hat\rho\big].\; \label{N06y} \end{eqnarray} Let us define \begin{eqnarray}
d^{\Gamma}_{\hat F}(\hat\rho)=
\Det{
\langle\hat f_{r_1}^\dagger \hat f_{r_1}\rangle^{\Gamma} &
\langle\hat f_{r_1}^\dagger \hat f_{r_2}\rangle^{\Gamma} & \cdots &
\langle\hat f_{r_1}^\dagger \hat f_{r_N} \rangle^{\Gamma} }{
\langle\hat f_{r_2}^\dagger \hat f_{r_1}\rangle^{\Gamma} &
\langle\hat f_{r_2}^\dagger \hat f_{r_2}\rangle^{\Gamma} & \cdots &
\langle\hat f_{r_2}^\dagger \hat f_{r_N}\rangle^{\Gamma} }{
\vdots & \vdots & \ddots & \vdots }{
\langle\hat f_{r_N}^\dagger \hat f_{r_1}\rangle^{\Gamma} &
\langle\hat f_{r_N}^\dagger \hat f_{r_2}\rangle^{\Gamma} & \cdots &
\langle\hat f_{r_N}^\dagger \hat f_{r_N}\rangle^{\Gamma} },
\label{t4a} \end{eqnarray} in terms of $\langle\hat f_{r_i}^\dagger \hat f_{r_j}\rangle^{\Gamma}\equiv \mean{(\hat f_{r_i}^\dagger \hat f_{r_j})^\Gamma}$ ($i,j=1,...,N$). For example, if $\hat X$ is an operator acting on two or more modes and we take the partial transposition with respect to the first mode, then $$ \hat X^\Gamma=(T\otimes{\rm id})(\hat X), $$ where $T$ is the transposition acting on the first mode and ${\rm id}$ is the identity operation on the remaining modes. Then the SV Criterion~4, referred to here for brevity as the entanglement criterion, can be formulated as follows~\cite{MPHH09}: \begin{criterion} A bipartite state $\hat\rho$ is NPT if and only if there exists ${\hat F}$, such that $d^{\Gamma}_{\hat F}(\hat\rho)$ is negative. \end{criterion} This Criterion~5 can be written more compactly as follows: \begin{eqnarray} \hat \rho \textrm{~is PPT} &\Leftrightarrow& \forall {\hat F}: \quad d^\Gamma_{\hat F}(\hat\rho) \ge 0, \nonumber \\ \hat \rho \textrm{~is NPT} &\Leftrightarrow& \exists {\hat F}: \quad d^\Gamma_{\hat F}(\hat\rho) <0. \label{N08b} \end{eqnarray} As in the case of the nonclassicality criteria, the original SV criterion actually refers to a set $\hat F$ given by monomials in the creation and annihilation operators. This entanglement criterion can be applied not only to two-mode fields but also to multimode fields~\cite{SV06multi,MPHH09}. Note that Criterion~5 does not detect PPT-entangled states (which are part, and possibly the only members, of the family of the so-called bound entangled states)~\cite{HorodeckiReview}. Analogously to the notation of $\ncl$, we use the symbol $\ent$ to indicate that a given inequality can be fulfilled {\em only} for entangled states.
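Equation~(\ref{N06y}) amounts to simple bookkeeping on the multi-indices: partial transposition of the second mode swaps the $\hat b$-parts $(i_3,i_4)$ and $(j_3,j_4)$ between the row and column indices. The following minimal sketch of this rule is our own illustration.

```python
def pt_indices(i, j):
    """Row/column multi-indices of M_ij(rho^Gamma) in terms of those of M(rho):
    partial transposition of mode b swaps (i3, i4) <-> (j3, j4) between the
    row index i and the column index j, cf. Eq. (N06y)."""
    i1, i2, i3, i4 = i
    j1, j2, j3, j4 = j
    return (i1, i2, j3, j4), (j1, j2, i3, i4)

# Applying the swap twice returns the original moment, as it must:
i, j = (1, 0, 2, 1), (0, 1, 1, 0)
assert pt_indices(*pt_indices(i, j)) == (i, j)
```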
Here we show that various well-known entanglement inequalities can be derived from the nonclassicality Criterion~3, including the criteria of Hillery and Zubairy \cite{Hillery06}, Duan {\em et al.} \cite{Duan}, Simon \cite{Simon}, and Mancini {\em et al.} \cite{Mancini}. We also derive new entanglement criteria and show their relation to the nonclassicality criterion.
Other examples of entanglement inequalities that can be easily derived from nonclassicality criteria are given in Refs.~\cite{Raymer,Agarwal05,Song}; for brevity, we do not discuss them here.
\subsection{Entanglement and the Cauchy-Schwarz inequality \label{Marco}}
The matrix $M_{\hat F}^{(n)}(\hat \rho)$ is linear in the state $\hat \rho=\sum_ip_i\hat \rho_i$. Therefore, we have \begin{eqnarray} M_{\hat F}^{(n)}(\hat \rho)=\sum_ip_i M_{\hat F}^{(n)}(\hat \rho_i)\geq 0 \end{eqnarray} whenever $M_{\hat F}^{(n)}(\hat \rho_i)\geq 0$ for every $\hat \rho_i$. Thus, $M_{\hat F}^{(n)}$ is positive on separable states if it is positive on factorized states.
Let \begin{eqnarray} \hat F=(\hat f_1,\ldots,\hat f_N) \end{eqnarray} with functions $\hat f_{i}=\hat f_{i1}\hat f_{i2}\cdots \hat f_{iM}$, where \begin{eqnarray} \hat f_{ij}=\begin{cases} 1 & \textrm{if }i\neq k_j\\ \textrm{either }g_j(\hat a_j)\textrm{ or }g_j(\hat a^\dagger_j)& \textrm{if }i=k_j. \end{cases} \end{eqnarray} Here, $i$ is the index of the element $\hat f_i$ in $\hat F$, and the index $j$ refers to the mode. For each mode $j$, the factor $\hat f_{ij}$ differs from the identity only for the single value $i=k_j$, in which case it equals a function $g_j$ of either the creation or the annihilation operator of mode $j$, but not of both.
Writing the matrix $M_{\hat F}^{(n)}$ in a formal basis
$\{|k\rangle\}$, one then has \begin{eqnarray} \begin{aligned}
M_{\hat F}^{(n)}&=\sum_{kl}\langle :\hat f_k^{\dagger}\hat f_l : \rangle |k\rangle\langle l |\\
&=\sum_{kl}\langle :\hat f_{k1}^{\dagger}\hat f_{l1}\ldots \hat f_{kM}^{\dagger}\hat f_{lM}: \rangle |k\rangle\langle l |.\\ \end{aligned} \end{eqnarray}
For factorized states, one has \begin{eqnarray} \begin{aligned}
M_{\hat F}^{(n)}&=\sum_{kl}\langle :\hat f_{k1}^{\dagger}\hat f_{l1}:\rangle \cdots \langle : \hat f_{kM}^{\dagger}\hat f_{lM}: \rangle |k\rangle\langle l |\\
&=\sum_{k}\langle :\hat f_{k1}^{\dagger}\hat f_{k1}:\rangle \cdots \langle : \hat f_{kM}^{\dagger}\hat f_{kM}: \rangle |k\rangle\langle k |\\
&\quad+\sum_{k\neq l}\langle :\hat f_{k1}^{\dagger}\hat f_{l1}:\rangle \cdots \langle : \hat f_{kM}^{\dagger}\hat f_{lM}: \rangle |k\rangle\langle l |\\
&=\sum_{k}\langle :\hat f_{k1}^{\dagger}\hat f_{k1}:\rangle \cdots \langle : \hat f_{kM}^{\dagger}\hat f_{kM}: \rangle |k\rangle\langle k |\\
&\quad+\sum_{k\neq l}\langle \hat f_{k1}^{\dagger}\rangle\langle \hat f_{l1} \rangle \cdots \langle \hat f_{kM}^{\dagger}\rangle\langle \hat f_{lM}\rangle |k\rangle\langle l |\\
&\geq \sum_{k}\langle \hat f_{k1}^{\dagger}\rangle \langle \hat f_{k1} \rangle \cdots \langle \hat f_{kM}^{\dagger}\rangle \langle \hat f_{kM} \rangle |k\rangle\langle k |\\
&\quad+\sum_{k\neq l}\langle \hat f_{k1}^{\dagger}\rangle\langle \hat f_{l1} \rangle \cdots \langle \hat f_{kM}^{\dagger}\rangle\langle \hat f_{lM}\rangle |k\rangle\langle l |\\
&=\Big(\sum_{k}\langle \hat f_{k1}^{\dagger}\rangle \cdots \langle \hat f_{kM}^{\dagger}\rangle
|k\rangle\Big) \\
&\quad\times \Big(\sum_{l}\langle \hat f_{l1}\rangle \cdots \langle \hat f_{lM}\rangle \langle l|\Big)\geq0. \label{Marco1} \end{aligned} \end{eqnarray} The first equality comes from the state being factorized. The third equality is due to the fact that the $\hat f_{ij}$s are functions of either annihilation or creation operators, but not of both, so $\langle :\hat f_{k1}^{\dagger}\hat f_{l1}:\rangle= \langle \hat f_{k1}^{\dagger}\hat f_{l1}\rangle$ or $\langle :\hat f_{k1}^{\dagger}\hat f_{l1}:\rangle= \langle \hat f_{l1} \hat f_{k1}^{\dagger}\rangle$, and that for $k\neq l$ at least one among $\hat f_{k1}^{\dagger}$ and $\hat f_{l1}$, let us say, e.g., $\hat f_{l1}$, is equal to the identity---in particular this implies that its expectation value is equal to $\langle \hat f_{l1}\rangle=1$. The first inequality is due to the fact that $\langle :\hat f_{k1}^{\dagger}\hat f_{k1}:\rangle= \langle \hat f_{k1}^{\dagger}\hat f_{k1}\rangle$ or ${\langle :\hat f_{k1}^{\dagger}\hat f_{k1}:\rangle}= \langle \hat f_{k1} \hat f_{k1}^{\dagger}\rangle$, and to the Cauchy-Schwarz inequality.
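This factorization argument can be checked numerically in the simplest case. For a factorized coherent state $|\alpha\rangle\otimes|\beta\rangle$ every normally ordered moment factorizes, so, e.g., the $2\times 2$ matrix for $\hat F=(1,\hat a_1\hat a_2^\dagger)$ has vanishing determinant (a hypothetical sketch with arbitrary amplitudes $\alpha$, $\beta$ of our choosing):

```python
# Consistency check (our sketch): for |α>⊗|β>, <a1 a2†> = α β* and
# <:n1 n2:> = |α|² |β|², so det [[1, x], [x*, z]] = z - |x|² = 0, never negative.
alpha, beta = 0.7 + 0.2j, -1.1 + 0.5j
x = alpha * beta.conjugate()            # <a1 a2†>
z = abs(alpha) ** 2 * abs(beta) ** 2    # <:n1 n2:>
det = 1 * z - abs(x) ** 2
assert abs(det) < 1e-12  # consistent with classicality/separability
```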
\subsection{Entanglement criteria {\em equal} to nonclassicality criteria\label{Sect3b}}
By applying the nonclassicality Criterion~3, we give a few examples of classical inequalities, which can be violated {\em only} by entangled states.
\subsubsection{Hillery-Zubairy's entanglement criteria}
Hillery and Zubairy~\cite{Hillery06} derived a few entanglement inequalities for both two-mode fields: \begin{eqnarray}
\mean{\hat n_1\hat n_2} \ent |\mean{\hat a_1\hat a_2^\dagger}|^2, \label{x1}
\\
\mean{\hat n_1}\mean{\hat n_2} \ent |\mean{\hat a_1\hat a_2}|^2 , \label{x4} \end{eqnarray} and three-mode fields \begin{eqnarray}
\mean{\hat n_1\hat n_2\hat n_3} \ent |\mean{\hat a_1^\dagger\hat a_2\hat a_3}|^2.
\label{x34} \end{eqnarray} These inequalities can be derived from the entanglement Criterion~5~\cite{SV05,MPHH09} assuming: $\hat F=(1,\hat a_1\hat a_2)$ to derive Eq.~(\ref{x1}), $\hat F=(\hat a_1,\hat a_2)$ for Eq.~(\ref{x4}), and $\hat F=(1,\hat a_1\hat a_2\hat a_3)$ for Eq.~(\ref{x34}).
On the other hand, Eq.~(\ref{x1}) can be obtained from the nonclassicality Criterion~3 assuming $\hat F=(1,\hat a_1\hat a_2^\dagger)$, which gives \begin{eqnarray}
\dfn &=& \DET{1&\mean{\hat a_1\hat a_2^\dagger}}{\mean{\hat a_1^\dagger\hat a_2}& \mean{\hat n_1\hat n_2}} \ncl 0. \label{x2} \end{eqnarray} Analogously, assuming $\hat F=(\hat a_1,\hat a_2^\dagger)$, one gets \begin{eqnarray}
\dfn &=& \DET{\mean{\hat n_1}&\mean{\hat a_1^\dagger\hat a_2^\dagger}}
{\mean{\hat a_1\hat a_2}&\mean{\hat n_2}} \ncl 0, \label{x6} \end{eqnarray} which corresponds to Eq.~(\ref{x4}). By choosing a set of three-mode operators $\hat F=(1,\hat a_1^\dagger\hat a_2\hat a_3)$, one readily obtains \begin{eqnarray}
\dfn &=& \DET
{1&\mean{\hat a_1^\dagger\hat a_2\hat a_3}}
{\mean{\hat a_1\hat a_2^\dagger\hat a_3^\dagger}& \mean{\hat n_1\hat n_2\hat n_3}} \ncl 0, \label{x46} \end{eqnarray} which corresponds to Eq.~(\ref{x34}).
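To see Eqs.~(\ref{x1}) and~(\ref{x2}) in action, consider the entangled state $(|0,1\rangle+|1,0\rangle)/\sqrt 2$, for which $\mean{\hat n_1\hat n_2}=0$ and $\mean{\hat a_1\hat a_2^\dagger}=1/2$, so the determinant in Eq.~(\ref{x2}) equals $-1/4<0$. The toy Fock-basis representation below is our own sketch, not part of the original text.

```python
from math import sqrt

def apply(op, mode, state):
    """Apply annihilation ('a') or creation ('ad') on the given mode (0 or 1)
    to a two-mode state stored as {(n1, n2): amplitude}."""
    out = {}
    for (n1, n2), c in state.items():
        n = (n1, n2)[mode]
        if op == 'a':
            if n == 0:
                continue
            new, amp = n - 1, sqrt(n)
        else:  # 'ad'
            new, amp = n + 1, sqrt(n + 1)
        key = (new, n2) if mode == 0 else (n1, new)
        out[key] = out.get(key, 0) + amp * c
    return out

def overlap(phi, psi):
    """<phi|psi> for states in the Fock-basis dictionary representation."""
    return sum(phi[k].conjugate() * psi[k] for k in phi if k in psi)

psi = {(0, 1): 1 / sqrt(2), (1, 0): 1 / sqrt(2)}
x = overlap(psi, apply('ad', 1, apply('a', 0, psi)))   # <a1 a2†> = 1/2
a1a2psi = apply('a', 1, apply('a', 0, psi))
n1n2 = overlap(a1a2psi, a1a2psi)                       # <n1 n2> = ||a1 a2 psi||² = 0
det = n1n2 - abs(x) ** 2
print(det)  # ≈ -0.25: negative, hence entangled by Eq. (x1)
```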
By applying Criterion~3 with $\hat F=(\hat a_1^\dagger,\hat a_2\hat a_3)$, we find another inequality \begin{eqnarray}
\dfn &=& \DET
{\mean{\hat n_1}&\mean{\hat a_1\hat a_2\hat a_3}}
{\mean{\hat a_1\hat a_2\hat a_3}^*& \mean{\hat n_2\hat n_3}} \ncl 0, \label{x49} \end{eqnarray} which was derived in Ref.~\cite{MPHH09} from the entanglement Criterion~5.
Using the Cauchy-Schwarz inequality, Hillery and Zubairy~\cite{Hillery06} also found a more general form of inequality than the one in Eq.~(\ref{x1}), which reads as follows: \begin{eqnarray}
\mean{(\hat a_1^\dagger)^m\hat a_1^m (\hat a_2^\dagger)^n\hat a_2^n} \ent |\mean{\hat a_1^m (\hat a_2^\dagger)^n}|^2. \label{x60} \end{eqnarray} This inequality can be derived from the nonclassicality Criterion~3 for $\hat F=(1,\hat a_1^m (\hat a_2^\dagger)^n)$, which leads to \begin{eqnarray}
\dfn = \DET
{1&\mean{\hat a_1^m (\hat a_2^\dagger)^n}}
{\mean{(\hat a_1^\dagger)^m \hat a_2^n} & \mean{ (\hat a_1^\dagger)^m\hat a_1^m (\hat a_2^\dagger)^n\hat a_2^n}} \ncl 0. \label{x62} \end{eqnarray} Alternatively, Eq.~(\ref{x60}) can be derived from the entanglement Criterion~5 for $\hat F=(1,\hat a_1^m \hat a_2^n)$. Thus, we see that \begin{equation}
\dn(1,\hat a_1^m (\hat a_2^\dagger)^n) = \dPT(1,\hat a_1^m \hat a_2^n) \ent 0, \label{x63} \end{equation} where, for clarity, we use the notation $d^{k}(\hat F)$ instead of $d^{k}_{\hat F}$ for $k=(n),\Gamma$. Moreover, we can generalize the entanglement inequality given by Eq.~(\ref{x46}) as follows: \begin{equation}
\mean{\hat n_1^{k}\hat n_2^{l}\hat n_3^{m} } \ent |\mean{(\hat a_1^\dagger)^{k}
\hat a_2^{l}\hat a_3^{m}}|^2 \label{z24} \end{equation} for arbitrary integers $k,l,m>0$. This inequality can be proved by applying both Criteria~3 and~5: \begin{eqnarray} \dn(1,(\hat a_1^\dagger)^{k} \hat a_2^{l}\hat a_3^{m})= \dPT(1,\hat a_1^{k} \hat a_2^{l}\hat a_3^{m}) \hspace{3cm} \nonumber \\ =\DET{1&\mean{(\hat a_1^\dagger)^{k} \hat a_2^{l}\hat a_3^{m}}} {\langle(\hat a_1^\dagger)^{k} \hat a_2^{l}\hat a_3^{m}\rangle^*& \mean{\hat n_1^{k}\hat n_2^{l}\hat n_3^{m} }}\ncl 0, \hspace{1cm} \label{z25} \end{eqnarray} where the first mode is partially transposed. Analogously, Eq.~(\ref{x49}) can be generalized to the following entanglement inequality: \begin{equation} \mean{\hat n_1^{k}}\mean{\hat n_2^{l}\hat n_3^{m} } \ent
|\mean{\hat a_1^{k} \hat a_2^{l}\hat a_3^{m}}|^2, \label{z26} \end{equation} which can be shown by applying Criteria~3 and~5: \begin{eqnarray} \dn((\hat a_1^\dagger)^{k}, \hat a_2^{l}\hat a_3^{m})&=& \dPT(\hat a_1^{k}, \hat a_2^{l}\hat a_3^{m}) \nonumber \\ &=&\DET{\mean{\hat n_1^{k}}&\mean{\hat a_1^{k} \hat a_2^{l}\hat a_3^{m}}} {\langle\hat a_1^{k} \hat a_2^{l}\hat a_3^{m}\rangle^* & \mean{\hat n_2^{l}\hat n_3^{m} }} \ncl 0. \hspace{5mm} \label{z27} \end{eqnarray}
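As an illustrative numerical check (our addition, not part of the derivation), the Hillery--Zubairy inequality with $m=n=1$, i.e., $\langle\hat n_1\hat n_2\rangle \ge |\langle\hat a_1\hat a_2^\dagger\rangle|^2$, is violated by the entangled single-photon state $(|0,1\rangle+|1,0\rangle)/\sqrt{2}$. The following Python sketch represents states as dictionaries of Fock amplitudes; the helper functions are our own construction:

```python
import math

# Fock-space amplitudes of the entangled state (|0,1> + |1,0>)/sqrt(2),
# stored as a dict mapping photon-number tuples to amplitudes.
psi = {(0, 1): 1 / math.sqrt(2), (1, 0): 1 / math.sqrt(2)}

def ann(state, mode):
    """Apply the annihilation operator of the given mode to a state dict."""
    out = {}
    for ns, amp in state.items():
        if ns[mode] > 0:
            key = ns[:mode] + (ns[mode] - 1,) + ns[mode + 1:]
            out[key] = out.get(key, 0) + math.sqrt(ns[mode]) * amp
    return out

def cre(state, mode):
    """Apply the creation operator of the given mode to a state dict."""
    out = {}
    for ns, amp in state.items():
        key = ns[:mode] + (ns[mode] + 1,) + ns[mode + 1:]
        out[key] = out.get(key, 0) + math.sqrt(ns[mode] + 1) * amp
    return out

def overlap(bra, ket):
    """Inner product <bra|ket> of two state dicts."""
    return sum(bra[k].conjugate() * ket[k] for k in bra if k in ket)

# <n1 n2> vanishes for this state, while <a1 a2^dagger> does not.
n1n2 = sum(abs(a) ** 2 * ns[0] * ns[1] for ns, a in psi.items())
a1a2dag = overlap(psi, ann(cre(psi, 1), 0))
# n1n2 = 0 while |a1a2dag|^2 = 1/4: the classical bound is violated
```

The ladder-operator helpers are generic, so the same script can be reused to evaluate any of the moment inequalities of this section on states with a few excitations.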
It is worth remarking that in all the above cases, once the $\ncl$ inequalities are found as nonclassicality inequalities, it is easy to check that they can be satisfied only by entangled states, that is, they really are $\ent$ inequalities. Indeed, the determinant condition is the only nontrivial one for establishing the positivity of the involved $2\times 2$ matrices. Further, these matrices are linear in the state with respect to which the expectation values are calculated. Thus, if we prove that the matrices are positive for factorized states, then it follows that they are necessarily positive for any separable state, and so are the determinants. For the sake of concreteness and clarity, we prove the positivity of the $2\times 2$ matrix of Eq.~\eqref{x6} for a factorized state. The positivity of the other matrices for factorized states is proved analogously.
For a factorized state, as a special case of inequalities given in Eq.~(\ref{Marco1}), we have \begin{equation} \begin{aligned} \left( \begin{array}{cc} \mean{\hat n_1}&\mean{\hat a_1^\dagger\hat a_2^\dagger}\\
\mean{\hat a_1\hat a_2}&\mean{\hat n_2} \end{array} \right) &= \left( \begin{array}{cc} \mean{\hat a_1^\dagger\hat a_1}&\mean{\hat a_1^\dagger}\mean{\hat a_2^\dagger}\\
\mean{\hat a_1}\mean{\hat a_2}&\mean{\hat a_2^\dagger\hat a_2} \end{array} \right)\\ &\geq \left( \begin{array}{cc} \mean{\hat a_1^\dagger}\mean{\hat a_1}&\mean{\hat a_1^\dagger}\mean{\hat a_2^\dagger}\\
\mean{\hat a_1}\mean{\hat a_2}&\mean{\hat a_2^\dagger}\mean{\hat a_2} \end{array} \right)\\ &= \left( \begin{array}{c} \mean{\hat a_1^\dagger}\\ \mean{\hat a_2} \end{array} \right) \left( \begin{array}{cc} \mean{\hat a_1}&\mean{\hat a_2^\dagger} \end{array} \right)\geq0, \end{aligned} \end{equation} where the first inequality is due to the Cauchy-Schwarz inequality
$\mean{\hat{X}^\dagger\hat{X}}\geq|\mean{\hat{X}}|^2$.
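This positivity argument can also be illustrated numerically. For a factorized state the determinant of the above matrix reduces to $\langle\hat n_1\rangle\langle\hat n_2\rangle - |\langle\hat a_1\rangle\langle\hat a_2\rangle|^2$, which is nonnegative because $\langle\hat n_i\rangle\ge|\langle\hat a_i\rangle|^2$ holds mode by mode. A minimal Python sketch for a product of truncated coherent states (the truncation dimension and amplitudes are our choices):

```python
import math

def coherent(alpha, dim=25):
    """Truncated coherent-state amplitudes c_n = alpha^n / sqrt(n!), renormalized."""
    amps = [alpha ** n / math.sqrt(math.factorial(n)) for n in range(dim)]
    norm = math.sqrt(sum(abs(c) ** 2 for c in amps))
    return [c / norm for c in amps]

def mean_n(amps):
    """Mean photon number <a^+ a>."""
    return sum(n * abs(c) ** 2 for n, c in enumerate(amps))

def mean_a(amps):
    """Coherent amplitude <a> = sum_n c_n^* sqrt(n+1) c_{n+1}."""
    return sum(amps[n].conjugate() * math.sqrt(n + 1) * amps[n + 1]
               for n in range(len(amps) - 1))

# Factorized state |alpha>|beta>: every cross moment factorizes,
# e.g. <a1 a2> = <a1><a2> and <a1^+ a2^+> = conj(<a1><a2>).
m1, m2 = coherent(0.7), coherent(0.4 + 0.3j)

# Determinant of [[<n1>, conj(<a1><a2>)], [<a1><a2>, <n2>]]:
det = mean_n(m1) * mean_n(m2) - abs(mean_a(m1) * mean_a(m2)) ** 2
# Cauchy-Schwarz gives <n_i> >= |<a_i>|^2 per mode, so det >= 0
```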
\subsubsection{Entanglement criterion of Duan et al.}
A sharpened version of the entanglement criterion of Duan {\em et al.}~\cite{Duan} can be formulated as follows~\cite{SV05}: \begin{eqnarray}
\mean{\Delta\hat a_1^\dagger\Delta\hat a_1}\mean{\Delta\hat a_2^\dagger\Delta\hat a_2}\ent
|\mean{\Delta\hat a_1\Delta\hat a_2}|^2, \label{x7} \end{eqnarray} where $\Delta\hat a_i=\hat a_i-\mean{\hat a_i}$ for $i=1,2$. Equation~(\ref{x7}) follows from the entanglement Criterion~5 for $\hat F=(1,\hat a_1,\hat a_2)$~\cite{SV05} or, equivalently, for $\hat F=(\Delta\hat a_1,\Delta\hat a_2)$. It can also be derived from the nonclassicality Criterion~3 for $\hat F=(\Delta\hat a_1,\Delta\hat a_2^\dagger)$. Thus, we obtain \begin{eqnarray} \dfn &=& \DET
{\mean{\Delta\hat a_1^\dagger\Delta\hat a_1}&\mean{\Delta\hat a_1^\dagger\Delta\hat a_2^\dagger}}
{\mean{\Delta\hat a_1\Delta\hat a_2}& \mean{\Delta\hat a_2^\dagger\Delta\hat a_2}} \ncl 0, \label{x9} \end{eqnarray} which corresponds to Eq.~(\ref{x7}). Alternatively, by choosing $\hat F=(1,\hat a_1,\hat a_2^\dagger)$, one obtains \begin{equation} \dfn = \DETT {1&\mean{\hat a_1}&\mean{\hat a_2^\dagger}} {\mean{\hat a_1^\dagger}&\<\hat n_1\>&\mean{\hat a_1^\dagger\hat a_2^\dagger}} {\mean{\hat a_2}&\mean{\hat a_1\hat a_2}&\<\hat n_2\>}, \label{z30} \end{equation} which is equal to Eq.~(\ref{x9}). Thus, it is seen that this nonclassicality criterion is equal to the entanglement criterion. Moreover, the advantage of using polynomial, instead of monomial, functions of moments in $\hat F$ is apparent. The same conclusion was drawn by comparing Eqs.~(\ref{N18}) and~(\ref{z36}) or Eqs.~(\ref{x27}) and~(\ref{z34}).
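As a numerical illustration (our addition), the two-mode squeezed vacuum violates the sharpened Duan criterion above: since $\langle\hat a_i\rangle=0$ for this state, $\Delta\hat a_i=\hat a_i$, and the left-hand side equals $\sinh^4 r$ while the right-hand side equals $\sinh^2 r\cosh^2 r$. The Python sketch below works directly with the Schmidt amplitudes of the truncated state (the squeezing parameter and cutoff are our choices):

```python
import math

r, N = 0.6, 60
lam = math.tanh(r)
# Two-mode squeezed vacuum: |psi> = sech(r) * sum_n tanh(r)^n |n,n>
c = [lam ** n / math.cosh(r) for n in range(N)]

mean_n1 = sum(n * c[n] ** 2 for n in range(N))                  # = <n2> by symmetry
mean_a1a2 = sum((n + 1) * c[n] * c[n + 1] for n in range(N - 1))  # <a1 a2>

# <a_i> = 0 here, so Delta a_i = a_i and the criterion compares
lhs = mean_n1 * mean_n1          # sinh(r)^4
rhs = abs(mean_a1a2) ** 2        # (sinh(r) cosh(r))^2
# lhs < rhs: the classical bound is violated, entanglement is detected
```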
\subsection{Entanglement criteria via sums of nonclassicality criteria\label{Sect3c}}
Here, we present a few examples of classical inequalities derived from the entanglement Criterion~5 and the nonclassicality Criterion~3 that are apparently not equal. More specifically, we have presented in subsection~\ref{Sect3b} examples of classical inequalities, which can be derived from the entanglement Criterion~5 for a given $\hat F_1$ or, equivalently, from the nonclassicality Criterion~3 for $\hat F_2$ equal to a partial transpose of $\hat F_1$. In this section, we give examples of entanglement inequalities, which {\em cannot} be derived from Criterion~3 for $\hat F_2=\hat F_1^\Gamma$.
States satisfying Criterion~5 for entanglement must be nonclassical, as any entangled state is necessarily nonclassical in the sense of Criterion~1. We will provide specific examples in which satisfying an entanglement inequality implies satisfying one or more nonclassicality inequalities. This approach enables an analysis of the entanglement for a given nonclassicality. The main problem is to express $\dfPT\equiv\dPT(\hat F)$ as a linear combination of some $\dn(\hat F^{(k)})$, i.e.: \begin{eqnarray}
\dfPT = \sum_k c_k \dn(\hat F^{(k)}), \label{x52} \end{eqnarray} where $c_k>0$. To find such expansions explicitly, we apply the following three properties of determinants: (i) The Laplace expansion formula along any row (or column): $\det M=\sum_{j}(-1)^{i+j}M_{ij}\mu_{ij}$, where $\mu_{ij}$ is a minor of a matrix $M=(M_{ij})$. (ii) Swapping rule: By exchanging any two rows (columns) of a determinant, the value of the determinant is the same as that of the original determinant but with the opposite sign. (iii) Summation rule: If some (or all) of the elements of a column (row) are sums of two terms, then the determinant can be given as the sum of two determinants, e.g., $\det(a+a',b+b';c,d)=\det(a,b;c,d)+\det(a',b';c,d).$
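The swapping and summation rules can be checked directly on $2\times2$ determinants; a minimal sketch (the numerical entries are arbitrary and ours):

```python
def det2(a, b, c, d):
    """Determinant of the 2x2 matrix [[a, b], [c, d]]."""
    return a * d - b * c

a, b, c, d = 1.5, 2 + 1j, -0.5j, 3.0
ap, bp = 0.25, 1 - 2j

# (ii) Swapping rule: exchanging the two rows flips the sign.
swap_lhs = det2(c, d, a, b)
swap_rhs = -det2(a, b, c, d)

# (iii) Summation rule: a row of sums splits into a sum of determinants.
sum_lhs = det2(a + ap, b + bp, c, d)
sum_rhs = det2(a, b, c, d) + det2(ap, bp, c, d)
```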
\subsubsection{Simon's entanglement criterion\label{Sect3c1}}
As the first example of such a nontrivial relation between the nonclassicality and entanglement criteria, let us consider Simon's entanglement criterion~\cite{Simon}. As shown in Ref.~\cite{SV05}, it can be obtained from Criterion~5 as $\dfPT\ent 0$ for $\hat F=(1,\hat a_1,\hat a_1^\dagger,\hat a_2,\hat a_2^\dagger)$. We found that Simon's criterion can be expressed as a sum of nonclassicality criteria as follows: \begin{eqnarray}
\dfPT &=&
\dn(1,\hat a_1,\hat a_1^\dagger,\hat a_2^\dagger,\hat a_2) +\dn(1,\hat a_1,\hat a_2^\dagger) \nonumber\\
&&+\dn(1,\hat a_1,\hat a_1^\dagger,\hat a_2^\dagger)+\dn(1,\hat a_1,\hat a_2^\dagger,\hat a_2), \label{x43} \end{eqnarray} where $\dn(1,\hat a_1,\hat a_1^\dagger,\hat a_2^\dagger,\hat a_2)$ is given by Eq.~(\ref{x36}). Moreover, $\dfn$ for $\hat F=(1,\hat a_1,\hat a_1^\dagger,\hat a_2^\dagger)$, $\hat F=(1,\hat a_1,\hat a_2^\dagger,\hat a_2)$ and $\hat F=(1,\hat a_1,\hat a_2^\dagger)$ can be obtained from~(\ref{x36}) by analyzing its principal minors. Thus, one can prove the entanglement for a given nonclassicality by checking the violation of specific classical inequalities resulting from the nonclassicality Criterion~3.
\subsubsection{Other entanglement criteria\label{Sect3c2}}
Now, we present a few entanglement inequalities, which are simpler than Simon's criterion, but still correspond to sums of nonclassicality inequalities.
Let us denote the following determinant: \begin{eqnarray}
D(x,y,z,z') &=& \left|
\begin{array}{lll}
1 & x & x^* \\
x^* & z & y^* \\
x & y & z'
\end{array}
\right|. \label{x65} \end{eqnarray}
(i) Criterion~5 for $\hat F=(1,\hat a_1\hat a_2,\hat a_1^\dagger \hat a_2^\dagger)$ results in \begin{equation}
\dfPT=D\left(\langle\hat a_1\hat a_2^\dagger\rangle,\langle\hat a_1^2(\hat a_2^\dagger)^2\rangle,
\<\hat n_1 \hat n_2\>,z'\right)\ent 0, \label{x69} \end{equation} where $z'=\mean{(\hat n_1+1)(\hat n_2+1)}$. By using the aforementioned properties of determinants, we find that the entanglement criterion in Eq.~(\ref{x69}) can be given as the following sum of nonclassicality inequalities resulting from Criterion~3: \begin{eqnarray}
\dfPT &=& \dn(1,\hat a_1\hat a_2^\dagger,\hat a_1^\dagger\hat a_2) \nonumber \\
&&+ (\<\hat n_1\> +\<\hat n_2\>+1)\, \dn(1,\hat a_1\hat a_2^\dagger). \label{x56} \end{eqnarray}
(ii) Criterion~5 for $\hat F=(1,\hat a_1\hat a_2^\dagger,\hat a_1^\dagger\hat a_2)$ leads to \begin{equation}
\dfPT=D(\langle\hat a_1\hat a_2\rangle,\langle\hat a_1^2\hat a_2^2\rangle,z,z')\ent 0, \label{x75} \end{equation} where $z=\<\hat n_1 \hat n_2\> +\<\hat n_1\> $ and $z'=\<\hat n_1 \hat n_2\> +\<\hat n_2\>$. Analogously to Eq.~(\ref{x56}), we find that the following sum of the nonclassicality criteria corresponds to the entanglement criterion in Eq.~(\ref{x75}): \begin{eqnarray}
\dfPT &=& \dn(1,\hat a_1\hat a_2,\hat a_1^\dagger\hat a_2^\dagger)+\<\hat n_1\>\<\hat n_2\> \nonumber \\
&&+ (\<\hat n_1\> +\<\hat n_2\>)\, \dn(1,\hat a_1\hat a_2). \label{x59} \end{eqnarray}
(iii) For $\hat F=(1,\hat a_1+\hat a_2^\dagger,\hat a_1^\dagger +\hat a_2)$, one obtains \begin{equation}
\dfPT=D(\langle\hat a_1+\hat a_2\rangle,\langle(\hat a_1+\hat a_2)^2\rangle,z,z)\ent 0, \label{x81} \end{equation} where $z=\langle\hat n_1\rangle+\langle\hat n_2\rangle+2{\rm Re}\langle\hat a_1\hat a_2^\dagger\rangle+1$. Analogously to the former cases, we find the relation between the entanglement criterion in Eq.~(\ref{x81}) and the nonclassicality Criterion~3 as follows: \begin{eqnarray}
\dfPT &=& \dn(1,\hat a_1+\hat a_2,\hat a_1^\dagger+\hat a_2^\dagger) \notag\\
&&
+ 2 \dn(1,\hat a_1+\hat a_2) +1. \label{x57} \end{eqnarray}
(iv) As a final example, let us consider the entanglement Criterion~5 for $\hat F=(1,\hat a_1+\hat a_2,\hat a_1^\dagger +\hat a_2^\dagger)$. One obtains \begin{equation}
\dfPT=D(\langle\hat a_1+\hat a_2^\dagger\rangle,\langle(\hat a_1+\hat a_2^\dagger)^2\rangle,z,z')\ent 0, \label{x87} \end{equation} where $z=\langle\hat n_1\rangle+\langle\hat n_2\rangle+2{\rm Re}\langle\hat a_1\hat a_2\rangle$ and $z'=z+2$, which is related to the nonclassicality Criterion~3 as follows: \begin{equation}
\dfPT = \dn(1,\hat a_1+\hat a_2^\dagger,\hat a_1^\dagger+\hat a_2) + 2 \dn(1,\hat a_1+\hat a_2^\dagger), \label{x58} \end{equation} where $\dn(1,\hat a_1+\hat a_2^\dagger,\hat a_1^\dagger+\hat a_2)$ is given by Eq.~(\ref{x84}), and $\dn(1,\hat a_1+\hat a_2^\dagger)$ is given by its principal minor. Equation~(\ref{x87}) corresponds to the entanglement criterion of Mancini {\em et al.} \cite{Mancini} (see also~\cite{SV05}).
\section{Conclusions\label{Sect4}}
We derived classical inequalities for multimode bosonic fields, which can {\em only} be violated by {\em nonclassical} fields, so they can serve as a nonclassicality (or quantumness) test. Our criteria are based on Vogel's criterion~\cite{Vogel08}, which is a generalization of analogous criteria for single-mode fields of Agarwal and Tara~\cite{Agarwal92} and, more directly, of Shchukin, Richter, and Vogel (SRV)~\cite{NCL1,NCL2}. The nonclassicality criteria correspond to analyzing the positivity of matrices of normally ordered moments of, e.g., annihilation and creation operators, which, by virtue of Sylvester's criterion, corresponds to analyzing the positivity of the Glauber-Sudarshan $P$~function. We used not only monomial, but also polynomial functions of moments. We showed that this approach can enable a simpler and more intuitive derivation of physically relevant inequalities.
We demonstrated how the nonclassicality criteria introduced here easily reduce to the well-known inequalities~(see, e.g., textbooks~\cite{DodonovBook,VogelBook,MandelBook,PerinaBook}, reviews~\cite{Walls79,Loudon80,Loudon87,Klyshko96}, and Refs.~\cite{Yuen76,Kozierowski77,Caves85,Reid86,Dalton86,Schleich87,Agarwal88,Luks88,Hillery89,Lee90,Zou90,Klyshko96pla,Miranowicz99a,Miranowicz99b,An99,An00,Jakob01}) describing various multimode nonclassical effects, for short referred to as the nonclassicality inequalities. Our examples, summarized in Tables~I and~II, include the following:
(i)~Multimode quadrature squeezing~\cite{VogelBook} and its generalizations, including the sum and difference squeezing defined by Hillery~\cite{Hillery89}, and An and Tinh~\cite{An99,An00}, as well the principal squeezing related to the Schr\"odinger-Robertson indeterminacy relation~\cite{SR} as defined by Luk\v{s} {\em et al.}~\cite{Luks88}.
(ii)~Single-time photon-number correlations of two modes, including squeezing of the sum and difference of photon numbers (which is also referred to as the photon-number sum/difference sub-Poisson photon-number statistics)~\cite{PerinaBook}, violations of the Cauchy-Schwarz inequality~\cite{MandelBook} and violations of the Muirhead inequality~\cite{Muirhead,Lee90}, which is a generalization of the arithmetic-geometric mean inequality.
(iii)~Two-time photon-number correlations of single modes including photon antibunching~\cite{VogelBook,MandelBook,Miranowicz99a} and photon hyperbunching~\cite{Jakob01,Miranowicz99b} for stationary and nonstationary fields.
(iv)~Two- and three-mode quantum entanglement inequalities (e.g., Refs.~\cite{Duan,Hillery06,Simon,Mancini}). We have shown that some known entanglement inequalities (e.g., those of Duan {\em et al.}~\cite{Duan}, and Hillery and Zubairy~\cite{Hillery06}) can be derived as nonclassicality inequalities. Other entanglement inequalities (e.g., that of Simon~\cite{Simon}) can be represented by sums of nonclassicality inequalities.
Moreover, we developed a general method of expressing inequalities derived from the Shchukin-Vogel entanglement criterion~\cite{SV05,MP06} as a sum of inequalities derived from the nonclassicality criteria. This approach enables a deeper analysis of the entanglement for a given nonclassicality.
We also presented a few inequalities derived from the nonclassicality and entanglement criteria, which to our knowledge have not yet been described in the literature.
It is seen that the nonclassicality criteria based on matrices of moments offer an effective way to derive specific inequalities which might be useful in the verification of nonclassicality of particular states generated in experiments.
It seems that the quantum-information community has more or less ignored nonclassicality, even though it is closely related to quantum entanglement. We hope that this article presents a useful step toward a common treatment of both types of phenomena.
\begin{acknowledgments} We are very grateful to Marco Piani for his help in clarifying and generalizing some results of this article. We also thank Werner Vogel and Jan Sperling for their comments. A.M. acknowledges support from the Polish Ministry of Science and Higher Education under Grant No. 2619/B/H03/2010/38. X.W. was supported by the National Natural Science Foundation of China under Grant No. 10874151, the National Fundamental Research Programs of China under Grant No. 2006CB921205, and Program for New Century Excellent Talents in University (NCET). Y.X.L. was supported by the National Natural Science Foundation of China under Grant No. 10975080. F.N. acknowledges partial support from the National Security Agency, Laboratory of Physical Sciences, Army Research Office, National Science Foundation Grant No. 0726909, JSPS-RFBR Contract No. 09-02-92114, Grant-in-Aid for Scientific Research (S), MEXT Kakenhi on Quantum Cybernetics, and FIRST (Funding Program for Innovative R\&D on S\&T). \end{acknowledgments}
\begin{appendix}
\section{Unified derivations of criteria for quadrature squeezing and its generalizations}
Here and in the following appendices, we present a unified derivation of the known criteria for various multimode nonclassicality phenomena, which are summarized in Table~I.
\subsection{Multi-mode quadrature squeezing}
The {\em quadrature squeezing} of multimode fields can be defined by a negative value of the normally ordered variance \cite{Caves85,Loudon87,VogelBook}
\begin{equation} \varn{X_{\bm{\phi}}}<0 \label{N10} \end{equation} with $\Delta\hat{X}_{\bm{\phi}} =\hat{X}_{\bm{\phi}}-\langle\hat{X}_{\bm{\phi}}\rangle$, of the multimode quadrature operator
\begin{equation}
\hat X_{\bm{\phi}} = \sum_{m=1}^M c_m\; \hat
x_m(\phi_m), \label{N11} \end{equation} which is given in terms of single-mode phase-rotated quadratures
\begin{equation}
\hat x_m(\phi_m)= \hat a_m \exp(i\phi_m)
+ \hat a_m^\dagger \exp(-i\phi_m). \label{N12} \end{equation} It is a straightforward generalization of the single-mode quadrature squeezing~\cite{Yuen76,Walls79}. In~(\ref{N11}), $\bm{\phi}=(\phi_1,...,\phi_M)$ and $c_m$ are real parameters. In the analysis of physical systems, it is convenient to analyze the annihilation ($\hat a_m$) and creation ($\hat a_m^\dagger$) operators corresponding to slowly varying operators. Usually, $\hat x_m(0)$ and $\hat x_m(\pi/2)$ are interpreted as canonical position and momentum operators, although this interpretation can be applied to any two quadratures of orthogonal phases, $\hat x_m(\phi_m)$ and $\hat x_m(\phi_m+\pi/2)$.
The normally ordered variance can be directly calculated from the $P$~function as follows:
\begin{equation}
\varn{X_{\bm{\phi}}} = \intda P(\bm{\alpha,\alpha}^*)[X_{\bm{\phi}}
(\bm{\alpha,\alpha}^*)-\langle\hat X_{\bm{\phi}}\rangle]^2, \label{N13} \end{equation} where
\begin{equation}
X_{\bm{\phi}}(\bm{\alpha,\alpha}^*) = \sum_{m=1}^M c_m (\alpha_m e^{i\phi_m} +\alpha^*_m e^{-i\phi_m}) \label{N14} \end{equation} and $\bm{\alpha}=(\alpha_1,...,\alpha_M)$. From Eq.~(\ref{N13}) it is seen that a negative value of $\varn{X_{\bm{\phi}}}$ implies the nonpositivity of the $P$~function in some regions of phase space, so the multimode quadrature squeezing is a nonclassical effect. This conclusion can also be drawn by applying Criterion~3. In fact, by choosing $\hat F=(1,\hat X_{\bm{\phi}})$, one obtains
\begin{equation} d_{\hat F}^{\rm (n)} =
\DET{1 & \langle\hat X_{\bm{\phi}}\rangle}
{\langle\hat X_{\bm{\phi}}\rangle & \langle:\hat X_{\bm{\phi}}^2:\rangle}
= \varn{X_{\bm{\phi}}} \ncl 0, \label{N15} \end{equation} which is the squeezing condition (\ref{N10}).
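As an illustrative numerical check (our addition), the single-mode squeezed vacuum, i.e., the case $M=1$ with $\phi_1=0$, gives $\langle:(\Delta\hat X)^2:\rangle = e^{-2r}-1 < 0$. The Python sketch below builds the truncated Fock amplitudes of the squeezed vacuum (our sign convention and cutoff) and evaluates the normally ordered variance directly:

```python
import math

r, N = 0.5, 30   # squeezing parameter and cutoff (N even-photon amplitudes)
sech = 1 / math.cosh(r)

# Squeezed vacuum on even Fock states:
# c_{2n} = sqrt(sech r) * (-tanh r)^n * sqrt((2n)!) / (2^n n!)
c = {2 * n: math.sqrt(sech) * (-math.tanh(r)) ** n
     * math.sqrt(math.factorial(2 * n)) / (2 ** n * math.factorial(n))
     for n in range(N)}

mean_n = sum(k * a ** 2 for k, a in c.items())                  # <n>
mean_a2 = sum(math.sqrt((k + 1) * (k + 2)) * c[k] * c[k + 2]
              for k in c if k + 2 in c)                         # <a^2> = <a^+2>

# <X> = 0 for this state, so <:(Delta X)^2:> = <a^2> + <a^+2> + 2<n>
var_normal = 2 * mean_a2 + 2 * mean_n    # analytically exp(-2r) - 1 < 0
```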
\subsection{Two-mode principal squeezing}
For simplicity, we analyze below the two-mode ($M=2$) case for $c_1=c_2=1$ and $\phi_2-\phi_1=\pi/2$. The two-mode {\em principal} (quadrature) squeezing can be defined as the $\bm{\phi}$-optimized squeezing defined by Eq.~(\ref{N10}):
\begin{equation}
\min_{\bm{\phi}:\phi_2-\phi_1=\pi/2} \varn{X_{\bm{\phi}}} < 0. \label{N16} \end{equation} By applying the Schr\"odinger-Robertson indeterminacy relation~\cite{SR}, Luk\v{s} {\em et al.}~\cite{Luks88} have given the following necessary and sufficient condition for the two-mode {\em principal squeezing}
\begin{equation}
\langle\Delta \hat a_{12}^\dagger \Delta \hat a_{12} \rangle < |\langle
(\Delta \hat a_{12})^2 \rangle| , \label{N17} \end{equation} where $$\hat a_{12}=\hat a_{1}+\hat a_{2},\quad\Delta \hat a_{12}=\hat a_{12}-\langle\hat a_{12}\rangle.$$ This condition for principal squeezing can be derived from Criterion~3 by choosing $\hat F=(\Delta \hat a_{12}^\dagger,\Delta \hat a_{12})$, which leads to:
\begin{equation}
d_{\hat F}^{\rm (n)} =
\DET{\langle\Delta \hat a_{12}^\dagger \Delta \hat a_{12} \rangle
& \langle (\Delta \hat a_{12})^2 \rangle}
{\langle (\Delta \hat a_{12}^\dagger)^2 \rangle
& \langle\Delta \hat a_{12}^\dagger \Delta \hat a_{12} \rangle} \ncl 0. \label{N18} \end{equation} Equivalently, by applying Criterion~3 for $\hat F=(1, \hat a_{12}^\dagger, \hat a_{12})$ one obtains:
\begin{equation} d_{\hat F}^{\rm (n)} =\DETT{1&\mean{\hat a^\dagger_{12}}&\mean{\hat a_{12}}} {\mean{\hat a_{12}}&\mean{\hat n_{12}}&\mean{(\hat a_{12})^2}} {\mean{\hat a^\dagger_{12}}&\mean{(\hat a^\dagger_{12})^2} &\mean{\hat n_{12}}}, \label{z36} \end{equation} where $$\hat n_{12}=\hat a^\dagger_{12}\hat a_{12}=\hat n_1+\hat n_2 +2{\rm Re}(\hat a_1^\dagger\hat a_2).$$ The determinants given by Eqs.~(\ref{N18}) and~(\ref{z36}) are equal to each other and are equivalent to Eq.~(\ref{N17}). This example shows that the application of polynomial functions of moments, instead of monomials, can lead to matrices of moments of lower dimension. Thus, the polynomial-based approach can enable simpler and more intuitive derivations of physically relevant criteria.
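As a numerical illustration (our addition), the two-mode squeezed vacuum exhibits principal squeezing: for this state $\langle\hat a_i\rangle=\langle\hat a_i^2\rangle=\langle\hat a_1^\dagger\hat a_2\rangle=0$, so the condition above reduces to $2\sinh^2 r < 2\sinh r\cosh r$, which always holds for $r>0$. A sketch with truncated amplitudes (parameters are our choices):

```python
import math

r, N = 0.4, 60
lam = math.tanh(r)
c = [lam ** n / math.cosh(r) for n in range(N)]   # two-mode squeezed vacuum on |n,n>

mean_n = sum(n * c[n] ** 2 for n in range(N))                    # <n1> = <n2>
mean_a1a2 = sum((n + 1) * c[n] * c[n + 1] for n in range(N - 1))  # <a1 a2>

# With <a1> = <a2> = <a1^+ a2> = <a_i^2> = 0 for this state:
lhs = 2 * mean_n            # <Delta a12^+ Delta a12> = 2 sinh(r)^2
rhs = abs(2 * mean_a1a2)    # |<(Delta a12)^2>| = 2 sinh(r) cosh(r)
# lhs < rhs: the principal squeezing condition is satisfied
```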
\subsection{Sum squeezing}
According to Hillery~\cite{Hillery89}, a two-mode state exhibits {\em sum squeezing} in the direction $\phi$ if the variance of \begin{eqnarray}
\hat V_{\phi} &=& \frac12 (\hat a_1 \hat a_2 e^{-i\phi}+
\hat a_1^\dagger \hat a_2^\dagger e^{i\phi} ) \label{N19} \end{eqnarray} satisfies \begin{eqnarray}
\var{V_{\phi}} &<& \frac12 \langle\hat V_z\rangle, \label{N20} \end{eqnarray} where $$\hat V_z=\frac12(\hat n_1+\hat n_2+1)$$ and $\hat n_m=\hat a_m^\dagger \hat a_m$ for $m=1,2$. As for the case of quadrature squeezing, $\hat a_1$ and $\hat a_2$ usually correspond to slowly varying operators. Let us denote $\hat V_x=\hat V(\phi=0)$ and $\hat V_y=\hat V(\phi=\pi/2)$. It is worth mentioning that the operators $\hat V_x$, $(-\hat V_y)$ and $\hat V_z$ are the generators of the SU(1,1) Lie algebra. Equation~(\ref{N20}) can be readily justified by noting that $[\hat V_x,\hat V_y]=i\hat V_z$, which implies the Heisenberg uncertainty relation $$\var{V_{x}}\var{V_{y}}\ge \frac14\langle\hat V_z\rangle^2.$$ By analogy with the standard quadrature squeezing, sum squeezing occurs when $\min\{\var{V_{x}},\var{V_{y}}\}<\langle\hat V_z\rangle/2$, or more generally if Eq.~(\ref{N20}) is satisfied. We note that, in analogy to the principal quadrature squeezing, one can define the principal sum squeezing by minimizing $\var{V_{\phi}}$ over $\phi$: \begin{equation}
\min_{\phi}\var{V_{\phi}} < \frac12 \langle\hat V_z\rangle. \label{N20a} \end{equation} Conditions~(\ref{N20}) and~(\ref{N20a}) can be easily derived from Criterion~3. In fact, by noting that \begin{eqnarray}
\var{V_{\phi}} &=& \varn{V_{\phi}} + \frac12 \langle\hat V_z\rangle,
\label{N21} \end{eqnarray} the condition for sum squeezing can equivalently be given by a negative value of the variance $\varn{V_{\phi}}$. On the other hand, by applying Criterion~3 for $\hat F=(1,\hat V_{\phi})$, one obtains \begin{eqnarray}
d_{\hat F}^{\rm (n)} =
\DET{1 & \mean{\hat V_{\phi}}}
{\mean{\hat V_{\phi}} & \mean{:\hat V^2_{\phi}:}}
=\varn{V_{\phi}} \ncl 0, \label{N22} \end{eqnarray} which is equivalent to Eq.~(\ref{N20}). So it is seen that sum squeezing is a nonclassical effect---in the sense of Criterion 1.
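As a numerical illustration (our addition), the two-mode squeezed vacuum exhibits sum squeezing along $\phi=\pi/2$: one finds $\langle(\Delta\hat V_{\pi/2})^2\rangle = 1/4$, while $\langle\hat V_z\rangle/2=(2\sinh^2 r+1)/4 > 1/4$. A sketch evaluating the required fourth-order moments from the truncated Schmidt amplitudes (parameters are ours):

```python
import math

r, N = 0.5, 60
lam = math.tanh(r)
c = [lam ** n / math.cosh(r) for n in range(N)]   # two-mode squeezed vacuum on |n,n>

m_a2b2 = sum((n + 1) * (n + 2) * c[n] * c[n + 2] for n in range(N - 2))  # <a1^2 a2^2>
m_norm = sum(n * n * c[n] ** 2 for n in range(N))           # <a1^+ a2^+ a1 a2>
m_anti = sum((n + 1) ** 2 * c[n] ** 2 for n in range(N))    # <a1 a2 a1^+ a2^+>
m_n = sum(n * c[n] ** 2 for n in range(N))                  # <n1> = <n2>

# For phi = pi/2: V = (-i a1 a2 + i a1^+ a2^+)/2 and <V> = 0, hence
var_V = (-2 * m_a2b2 + m_norm + m_anti) / 4   # analytically 1/4
vz_half = (2 * m_n + 1) / 4                   # <V_z>/2 with V_z = (n1+n2+1)/2
# var_V < vz_half: sum squeezing along the phi = pi/2 direction
```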
Two-mode sum squeezing can be generalized for any number of modes by defining the following $M$-mode phase-dependent operator~\cite{An99}: \begin{equation} \hat {\cal V}_\phi = \frac12 \left( {\rm e}^{-i \phi}\prod_{j} \hat a_j+ {\rm e}^{i \phi}\prod_{j} \hat a_j^\dagger \right)
\label{z2} \end{equation} satisfying the commutation relation \begin{equation} [\hat {\cal V}_\phi ,\hat {\cal V}_{\phi+\pi/2} ]=\frac{i}2 \hat C, \quad\hat C=\prod_{j}(1+\hat n_j)-\prod_{j}\hat n_j. \label{z5} \end{equation}
Hereafter $j=1,...,M$ and we note that $|\mean{\hat C}|=\mean{\hat C}$. Thus, multimode sum squeezing along the direction $\phi$ occurs if \begin{equation}
\var{{\cal V}_\phi} < \frac{|\mean{\hat C}|}4. \label{z9} \end{equation} One can find that \begin{equation}
\var{{\cal V}_\phi}=\varn{{\cal V}_\phi}+ \frac{|\mean{\hat C}|}4. \label{z6} \end{equation} Thus, by applying the nonclassicality Criterion~3 for $\hat F=(1,\hat {\cal V}_\phi)$, we obtain the sum squeezing condition \begin{equation} \varn{{\cal V}_\phi} = \dfn \ncl 0, \label{z10} \end{equation} which is equivalent to condition in Eq.~(\ref{z9}).
\subsection{Difference squeezing}
As defined by Hillery~\cite{Hillery89}, a two-mode state exhibits {\em difference squeezing} in the direction $\phi$ if \begin{eqnarray}
\var{W_{\phi}} &<& \frac12 |\langle\hat W_z\rangle|, \label{N23} \end{eqnarray} where \begin{eqnarray}
\hat W_{\phi} &=& \frac12 (\hat a_1 \hat a_2^\dagger e^{i\phi}+
\hat a_1^\dagger \hat a_2 e^{-i\phi} ) \label{N24} \end{eqnarray} and $\hat W_z=\frac12(\hat n_1-\hat n_2)$. The principal difference squeezing can be defined as: \begin{equation}
\min_{\phi}\var{W_{\phi}} < \frac12 |\langle\hat W_z\rangle|, \label{N23a} \end{equation} in analogy to the principal quadrature squeezing and the principal sum squeezing. Contrary to the $\hat V_{i}$ operators for sum squeezing, operators $\hat W_x=\hat W(\phi=0)$, $\hat W_y=\hat W(\phi=\pi/2)$ and $\hat W_z$ are generators of the SU(2) Lie algebra. The uncertainty relation $\var{W_{x}}\var{W_{y}}\ge
(1/4)|\langle\hat W_z\rangle|^2$ justifies defining difference squeezing by Eq.~(\ref{N23}). One can find that \begin{equation}
\var{W_{\phi}} = \varn{W_{\phi}} + \frac14 (\langle\hat n_1\rangle+\langle\hat n_2\rangle). \label{N25} \end{equation} By recalling Criterion~3 for $\hat F=(1,\hat W_{\phi})$, it is seen that \begin{eqnarray}
d_{\hat F}^{\rm (n)} =\varn{W_{\phi}} \ncl 0, \label{N26} \end{eqnarray} in analogy to Eq.~(\ref{N22}). Then the condition for difference squeezing, given by Eq.~(\ref{N23}), can be formulated as: \begin{equation}
\dfn < - \frac12 \min_{i=1,2} \mean{\hat n_i}. \label{z1} \end{equation} So, states exhibiting difference squeezing are nonclassical. But also states satisfying \begin{eqnarray}
\frac14|\langle\hat n_1\rangle-\langle\hat n_2\rangle| \le \var{W_{\phi}}
< \frac14(\langle\hat n_1\rangle+\langle\hat n_2\rangle) \label{N27} \end{eqnarray} are nonclassical although they do {\em not} exhibit difference squeezing. The first inequality in Eq.~(\ref{N27}) corresponds to the condition opposite to the squeezing condition given by Eq.~(\ref{N23}).
Criterion~3 can also be applied to the multimode generalization of difference squeezing, which can be defined via the operator~\cite{An00}: \begin{equation} \hat {\cal W}_\phi = \frac12 {\rm e}^{-i \phi}\prod_{k=1}^K \hat a_k \prod_{m=K+1}^M \hat a_m^\dagger + {\rm H.c.} \label{z12} \end{equation} for any $K<M$. For simplicity, hereafter, we skip the limits of multiplication in $\prod_{k}$ and $\prod_{m}$. The commutation relation \begin{equation} [\hat {\cal W}_\phi ,\hat {\cal W}_{\phi+\pi/2} ]=\frac{i}2 \hat C, \label{z13} \end{equation} where \begin{equation} \hat C=\prod_{k}(1+\hat n_k)\prod_{m}\hat n_m -\prod_{k}\hat n_k \prod_{m} (1+\hat n_m), \label{z14} \end{equation} justifies the choice of the following condition for multimode difference squeezing along the direction $\phi$~\cite{An00}: \begin{equation}
\var{{\cal W}_\phi} < \frac{|\mean{\hat C}|}4. \label{z18} \end{equation} We find that \begin{equation}
\var{{\cal W}_\phi}=\varn{{\cal W}_\phi}+ \frac{|\mean{\hat D}|}4, \label{z15} \end{equation} where \begin{equation} \hat D=\prod_{k}(1+\hat n_k)\prod_{m}\hat n_m +\prod_{k}\hat n_k \prod_{m} (1+\hat n_m)-2 \prod_{j=1}^M\hat n_j. \label{z17} \end{equation} By applying Criterion~3 for $\hat F=(1,\hat {\cal W}_\phi)$, we obtain the following condition for multimode difference squeezing: \begin{equation}
\dfn = \varn{{\cal W}_\phi} < \frac14 \left(|\mean{\hat C}|-\mean{\hat D}\right), \label{z19} \end{equation} which corresponds to the original condition, given by Eq.~(\ref{z18}). For states exhibiting difference squeezing, the right-hand side of Eq.~(\ref{z19}) is negative. In fact, if $\mean{\hat C}>0$ then \begin{equation} \hat C - \hat D = -2 \prod_{k}\hat n_k \left(\prod_{m}(1+\hat n_m) -\prod_{m}\hat n_m \right) < 0, \label{z20} \end{equation} otherwise \begin{equation} \hat C - \hat D = -2 \left(\prod_{k}(1+\hat n_k) -\prod_{k}\hat n_k \right) \prod_{m}\hat n_m < 0. \label{z21} \end{equation} It is seen that the difference squeezing condition is stronger than the nonclassicality condition $\dfn\ncl 0$. This means that states satisfying inequalities \begin{eqnarray}
\frac14 \left(|\mean{\hat C}|-\mean{\hat D}\right) \le \varn{{\cal W}_\phi} < 0 \label{z21a} \end{eqnarray} are nonclassical but do {\em not} exhibit difference squeezing.
\section{Unified derivations of criteria for one-time photon-number correlations}
Various criteria for the existence of nonclassical photon-number intermode phenomena in two-mode radiation fields have been proposed (see, e.g., Refs.~\cite{Reid86,Agarwal88,Lee90,DodonovBook,VogelBook,MandelBook,PerinaBook}). Here, we give a few examples of such nonclassical phenomena revealed by single-time moments.
\subsection{Sub-Poisson photon-number correlations}
The {\em squeezing} of the sum ($\hat n_+=\hat n_1 +\hat n_2$) or difference ($\hat n_-=\hat n_1 -\hat n_2$) of photon numbers occurs if \begin{eqnarray}
\varn{n_{\pm}}
&<& 0, \label{N28} \end{eqnarray} which can be interpreted as the photon-number sum/difference {\em sub-Poisson statistics}, respectively~\cite{PerinaBook}. These are nonclassical effects, as can be seen by analyzing the $P$~function: \begin{equation}
\varn{n_{\pm}} = \intda P(\bm{\alpha,\alpha}^*)
[(|\alpha_1|^2\pm|\alpha_2|^2) -\langle\hat n_{\pm}\rangle]^2, \label{N29} \end{equation} where $\bm{\alpha}=(\alpha_1,\alpha_2)$. Thus, photon-number squeezing implies the nonpositivity of the $P$~function. The same conclusion can also be drawn by applying Criterion~3 for $\hat F_{\pm}=(1,\hat n_{\pm})$, which leads to \begin{eqnarray}
d_{\hat F_{\pm}}^{\rm (n)} = \DET{1&\mean{\hat n_{\pm}}}
{\mean{\hat n_{\pm}}&\normal{\hat n_{\pm}^2}} = \varn{n_{\pm}} \ncl 0. \label{N30} \end{eqnarray}
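As a numerical illustration (our addition), the two-mode squeezed vacuum exhibits squeezing of the photon-number difference: it is supported on kets $|n,n\rangle$, so $\hat n_-|\psi\rangle=0$ and $\langle:(\Delta\hat n_-)^2:\rangle = -\langle\hat n_1\rangle-\langle\hat n_2\rangle = -2\sinh^2 r<0$. A sketch (parameters are ours):

```python
import math

r, N = 0.7, 80
lam = math.tanh(r)
c = [lam ** n / math.cosh(r) for n in range(N)]   # two-mode squeezed vacuum on |n,n>

# Moments of n_- = n1 - n2 (each basis ket |n,n> has n1 = n2):
mean_nm = sum((n - n) * c[n] ** 2 for n in range(N))        # <n_->   = 0
mean_nm2 = sum((n - n) ** 2 * c[n] ** 2 for n in range(N))  # <n_-^2> = 0
mean_n1 = sum(n * c[n] ** 2 for n in range(N))              # <n1> = <n2>

# <:(Delta n_-)^2:> = <n_-^2> - <n1> - <n2> - <n_->^2
var_normal = mean_nm2 - 2 * mean_n1 - mean_nm ** 2
# = -2 sinh(r)^2 < 0: sub-Poisson photon-number difference statistics
```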
\subsection{Agarwal's nonclassicality criterion}
Here, we consider an example of the violation of the CSI for two modes at the same evolution time. Other examples of violations of the CSI for a single mode, but at two different evolution times, are discussed in Appendix C in relation to photon antibunching and hyperbunching.
By considering the violation of the following CSI: \begin{eqnarray}
\normal{\hat n_1^2}\normal{\hat n_2^2} &\cl& \langle \hat n_1 \hat
n_2\rangle^2, \label{x15} \end{eqnarray} Agarwal~\cite{Agarwal88} introduced the following nonclassicality parameter:
\begin{equation} I_{12} = \frac{\sqrt{\langle :\hat{n}_1^2:\rangle \langle :\hat{n}_2^2:\rangle}} {\mean{\hat{n}_1 \hat{n}_2}}-1.
\label{x14} \end{equation} Explicitly, the nonclassicality of phenomena described by a negative value of $I_{12}$ is also implied by Criterion~3 for $\hat F=(\hat n_1,\hat n_2)$, which results in \begin{eqnarray}
\dfn &=& \DET{\normal{\hat n_1^2} & \mean{\hat{n}_1 \hat{n}_2}}
{\mean{\hat{n}_1 \hat{n}_2} & \normal{\hat n_2^2}} \ncl 0. \label{x17} \end{eqnarray}
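As a numerical illustration (our addition), the two-mode squeezed vacuum yields a negative Agarwal parameter: since $\langle\hat n_1\hat n_2\rangle=\langle\hat n_i^2\rangle$ for this state, $\langle:\hat n_i^2:\rangle=\langle\hat n_1\hat n_2\rangle-\langle\hat n_i\rangle < \langle\hat n_1\hat n_2\rangle$, so the determinant above is negative. A sketch (parameters are ours):

```python
import math

r, N = 0.6, 60
lam = math.tanh(r)
c = [lam ** n / math.cosh(r) for n in range(N)]   # two-mode squeezed vacuum on |n,n>

mean_n = sum(n * c[n] ** 2 for n in range(N))         # <n1> = <n2>
mean_nn = sum(n * n * c[n] ** 2 for n in range(N))    # <n1 n2> = <n1^2> = <n2^2>

# Normally ordered second moments: <:n_i^2:> = <n_i^2> - <n_i>
dfn = (mean_nn - mean_n) ** 2 - mean_nn ** 2
# dfn < 0, i.e. Agarwal's parameter I12 is negative: nonclassical correlations
```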
\subsection{Lee's nonclassicality criterion}
The Muirhead classical inequality~\cite{Muirhead} is a generalization of the arithmetic-geometric mean inequality. Lee has formulated this inequality as follows~\cite{Lee90} \begin{equation} D_{12} = \langle :\hat{n}_1^2:\rangle + \langle :\hat{n}_2^2:\rangle - 2 \langle \hat{n}_1 \hat{n}_2\rangle \cl 0. \label{x30} \end{equation} The nonclassicality of correlations with a negative value of the parameter $D_{12}$ is readily seen by applying Criterion~3 for $\hat F=(\hat n_1-\hat n_2)\equiv (\hat n_{-})$, which yields \begin{equation}
D_{12} = \normal{\hat{n}_{-}^2} \ncl 0.
\label{x30a} \end{equation} For comparison, let us analyze Criterion~3 for $\hat F=(1,\hat n_{-})$, which leads to \begin{equation}
\dfn = \normal{\hat{n}_{-}^2} -\mean{\hat{n}_{-}}^2 \cl 0.
\label{x31} \end{equation} Clearly \begin{equation}
D_{12} < 0 \Rightarrow \dfn \ncl 0. \label{x31a} \end{equation} Thus, the criterion given by Eq.~(\ref{x31}) detects more nonclassical states than that based on the $D_{12}$ parameter.
Alternatively, a direct application of the relation \begin{equation}
D_{12} = \intda P(\bm{\alpha,\alpha}^*)
(|\alpha_1|^2- |\alpha_2|^2)^2 \ncl 0 \label{x91} \end{equation} also implies the nonpositivity of the $P$~function in some regions of phase space.
\section{Unified derivations of criteria for two-time photon-number correlations}
Here, we consider the two-time single-mode photon-number nonclassical correlations on examples of photon antibunching and photon hyperbunching.
\subsection{Photon antibunching}
The {\em photon antibunching}~\cite{Kimble77,Walls79,Loudon80,VogelBook,MandelBook} of a stationary or nonstationary single-mode field can be defined via the two-time second-order intensity correlation function given by \begin{eqnarray} G^{(2)}(t,t+\tau) &=& \normalo{\hat{n}(t)\hat{n}(t+\tau)} \notag\\ &=&\langle \hat{a}^{\dagger}(t)\hat{a}^{\dagger}(t+\tau) \hat{a}(t+\tau)\hat{a}(t)\rangle\quad\quad \label{y01} \end{eqnarray} or its normalized intensity correlation function defined as \begin{equation} g^{(2)}(t,t+\tau )= \frac{G^{(2)}(t,t+\tau )}{\sqrt{ G^{(2)}(t,t)G^{(2)}(t+\tau ,t+\tau )}}, \label{y02} \end{equation} where $\dd\,\dd$ denotes the time order and normal order of field operators. Photon antibunching occurs if $g^{(2)}(t,t+\tau )$, considered as a function of $\tau$, has a strict local minimum at $\tau =0$ (see, e.g., Refs.~\cite{MandelBook,Miranowicz99a}):
\begin{eqnarray} g^{(2)}(t,t+\tau ) > g^{(2)}(t,t). \label{y05} \end{eqnarray} {\em Photon bunching} occurs if $g^{(2)}(t,t+\tau)$ decreases near $\tau=0$, while {\em photon unbunching} appears if $g^{(2)}(t,t+\tau)$ is locally constant there.
For {\em stationary} fields [i.e., those satisfying $G^{(2)}(t,t+\tau)=G^{(2)}(\tau)$ so $g^{(2)}(t,t+\tau)=g^{(2)}(\tau)$], Eq.~(\ref{y05}) reduces to the standard definition of photon antibunching~\cite{VogelBook,MandelBook}: \begin{eqnarray} g^{(2)}(\tau )> g^{(2)}(0). \label{y05b} \end{eqnarray}
Photon antibunching, defined by Eq.~(\ref{y05}), is a nonclassical effect as it corresponds to the violation of the Cauchy-Schwarz inequality:
\begin{eqnarray} G^{(2)}(t,t)G^{(2)}(t+\tau ,t+\tau ) \cl \big[G^{(2)}(t,t+\tau )\big]^2. \label{y09} \end{eqnarray} As shown in Ref.~\cite{Vogel08}, this property follows from Criterion~3 based on the generalized definition of the space-time $P$~function, given by~(\ref{VogelP}). In fact, assuming $\hat F=(\hat n(t),\hat n(t+\tau))$ leads to \begin{eqnarray}
\dfn &=& \DET{\normalo{\hat n^2(t)} &
\normalo{\hat n(t)\hat n(t+\tau)}}
{\normalo{\hat n(t)\hat n(t+\tau)} & \normalo{\hat n^2(t+\tau)}} \notag \\
&=& \DET{G^{(2)}(t,t) & G^{(2)}(t,t+\tau)}
{G^{(2)}(t,t+\tau) & G^{(2)}(t+\tau,t+\tau)} \ncl 0.\quad\quad \label{x23} \end{eqnarray}
\subsection{Photon hyperbunching}
{\em Photon hyperbunching}~\cite{Jakob01}, also referred to as a photon antibunching effect~\cite{Miranowicz99b}, can be defined by:
\begin{eqnarray} \overline{g}^{(2)}(t,t+\tau )> \overline{g}^{(2)}(t,t), \label{y05a} \end{eqnarray} given in terms of the correlation coefficient~\cite{Berger93}
\begin{equation} \overline{g}^{(2)}(t,t+\tau )= \frac{\overline{G}^{(2)}(t,t+\tau )}{\sqrt{ \overline{G}^{(2)}(t,t)\overline{G}^{(2)}(t+\tau ,t+\tau )}}, \label{y07} \end{equation} where the covariance $\overline{G}^{(2)}(t,t+\tau)$ is given by \begin{equation} \overline{G}^{(2)}(t,t+\tau) = G^{(2)}(t,t+\tau) -G^{(1)}(t) G^{(1)}(t+\tau), \label{y08} \end{equation} and $G^{(1)}(t)=\langle \hat n(t)\rangle =\langle \hat{a}^{\dagger }(t) \hat{a}(t)\rangle$ is the light intensity. It is worth noting that, for {\em stationary} fields, the definitions given by Eqs.~(\ref{y05}) and~(\ref{y05a}) are equivalent to each other and to definitions of photon antibunching based on other normalized correlation functions, e.g., \begin{equation} \tilde g^{(2)}(t,t+\tau )=\frac{G^{(2)}(t,t+\tau )}{[ G^{(1)}(t)] ^{2}}. \label{y08a} \end{equation} However, for {\em nonstationary} fields, these definitions correspond in general to different photon antibunching effects~\cite{Miranowicz99a,Miranowicz99b,Jakob01}.
Analogously to Eq.~(\ref{y05}), the photon hyperbunching, defined by Eq.~(\ref{y05a}), can occur for nonclassical fields violating the Cauchy-Schwarz inequality: \begin{equation} \overline{G}^{(2)}(t,t)\overline{G}^{(2)}(t+\tau ,t+\tau ) \cl \big[\overline{G}^{(2)}(t,t+\tau )\big]^2. \label{y10} \end{equation} Again, the nonclassicality of this effect can be shown by applying Criterion~3 for the space-time $P$~function, given by~(\ref{VogelP}), assuming $\hat F=(\Delta \hat n(t),\Delta \hat n(t+\tau))$, where $\Delta \hat n(t) =\hat n(t)-\mean{\hat n(t)}$. Thus, one obtains \begin{equation}
\dfn = \DET{\overline G^{(2)}(t,t) & \overline G^{(2)}(t,t+\tau)}
{\overline G^{(2)}(t,t+\tau) & \overline G^{(2)}(t+\tau,t+\tau)} \ncl
0, \label{x27} \end{equation} which is equivalent to Eq.~(\ref{y05a}). Alternatively, by choosing $\hat F=(1,\hat n(t),\hat n(t+\tau))$, one finds \begin{equation} \dfn = \DETT {1&\mean{\hat n(t)}&\mean{\hat n(t+\tau)}} {\mean{\hat n(t)}&\mean{\dd\hat n^2(t)\dd} &\mean{\dd \hat n(t) \hat n(t+\tau)\dd}} {\mean{\hat n(t+\tau)}&\mean{\dd \hat n(t) \hat n(t+\tau)\dd}& \mean{\dd\hat n^2(t+\tau)\dd}}, \label{z34} \end{equation} which is equal to the determinant given by Eq.~(\ref{x27}). By comparing Eqs.~(\ref{x27}) and~(\ref{z34}), analogously to Eqs.~(\ref{N18}) and~(\ref{z36}), one sees the advantage of using polynomial, rather than monomial, functions of moments in $\hat F$.
Finally, it is worth noting that the {\em single-mode sub-Poisson} photon-number statistics, defined by the condition $\varn{n}<0$, although also referred to as {\em photon antibunching}, is an effect different from those defined by Eqs.~(\ref{y05}) and~(\ref{y05a}), as shown by examples in Ref.~\cite{Zou90}.
\end{appendix}
\end{document}
\begin{document}
\begin{frontmatter} \journal{J. Math. Anal. Appl.}
\title{Ramanujan-Slater Type Identities \\Related to the Moduli 18 and 24} \date{February 21, 2008}
\author{James McLaughlin} \address{Department of Mathematics, West Chester University, West Chester, PA; telephone 610-738-0585; fax 610-738-0578} \ead{jmclaughl@wcupa.edu} \ead[url]{http://math.wcupa.edu/\~{}mclaughlin}
\author{Andrew V. Sills} \address{Department of Mathematical Sciences, Georgia Southern University, Statesboro, GA; telephone 912-681-5892; fax 912-681-0654} \ead{asills@GeorgiaSouthern.edu} \ead[url]{http://math.georgiasouthern.edu/\~{}asills}
\begin{abstract} We present several new families of Rogers-Ramanujan type identities related to the moduli 18 and 24. A few of the identities were found by either Ramanujan, Slater, or Dyson, but most are believed to be new. For one of these families, we discuss possible connections with Lie algebras. We also present two families of related false theta function identities. \end{abstract}
\begin{keyword} Rogers-Ramanujan identities\sep Bailey pairs \sep $q$-series identities \sep basic hypergeometric series \sep false theta functions \sep affine Lie algebras \sep principal character \MSC 11B65\sep 33D15\sep 05A10 \sep 17B57\sep 17B10 \end{keyword} \end{frontmatter}
\section{Introduction} The Rogers-Ramanujan identities are \begin{thm}[The Rogers-Ramanujan Identities]
\begin{equation}\label{RRa1}
\sum_{n=0}^\infty \frac{q^{n^2}}{(q;q)_n} =
\frac{(q^2, q^3, q^5; q^5)_\infty}{(q;q)_\infty},
\end{equation} and \begin{equation}\label{RRa2}
\sum_{n=0}^\infty \frac{q^{n(n+1)}}{(q;q)_n} =
\frac{(q, q^4, q^5; q^5)_\infty}{(q;q)_\infty},
\end{equation} where \[ (a;q)_m = \prod_{j=0}^{m-1} (1-aq^j), \quad
(a;q)_\infty = \prod_{j=0}^\infty (1-aq^j), \] and
\[ (a_1, a_2, \dots, a_r; q)_s = (a_1;q)_s (a_2;q)_s \dots (a_r;q)_s. \] \end{thm}
(Although the results in this paper may be considered purely from the point of view of formal power series, they also yield identities of analytic functions provided $|q|<1$.)
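As a quick sanity check, both identities can be verified as formal power series truncated at a fixed order. The following sketch does this with exact integer coefficient lists; the helper names (mul, inv, poch) are ours, not part of the paper.

```python
N = 40  # compare all coefficients of q^0, ..., q^{N-1}

def mul(a, b):
    # product of two series truncated at q^N
    c = [0] * N
    for i, x in enumerate(a):
        if x:
            for j in range(N - i):
                c[i + j] += x * b[j]
    return c

def inv(a):
    # reciprocal of a series with constant term 1
    b = [0] * N
    b[0] = 1
    for n in range(1, N):
        b[n] = -sum(a[k] * b[n - k] for k in range(1, n + 1))
    return b

def poch(e, step, m=None):
    # (q^e; q^step)_m truncated at q^N; m=None gives the infinite product
    r = [0] * N
    r[0] = 1
    j = 0
    while (m is None or j < m) and e + step * j < N:
        f = [0] * N
        f[0], f[e + step * j] = 1, -1
        r = mul(r, f)
        j += 1
    return r

def rr_sum(shift):
    # sum_{n >= 0} q^{n(n+shift)} / (q;q)_n, with shift = 0 or 1
    s = [0] * N
    n = 0
    while n * (n + shift) < N:
        t = inv(poch(1, 1, n))
        for k in range(n * (n + shift), N):
            s[k] += t[k - n * (n + shift)]
        n += 1
    return s

# right-hand sides (q^2,q^3,q^5;q^5)_oo/(q;q)_oo and (q,q^4,q^5;q^5)_oo/(q;q)_oo
rhs1 = mul(mul(poch(2, 5), mul(poch(3, 5), poch(5, 5))), inv(poch(1, 1)))
rhs2 = mul(mul(poch(1, 5), mul(poch(4, 5), poch(5, 5))), inv(poch(1, 1)))
assert rr_sum(0) == rhs1 and rr_sum(1) == rhs2
```

Both comparisons check every coefficient up to $q^{39}$; raising N extends the check.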
The Rogers-Ramanujan identities are due to L.~J.~Rogers~\cite{R94}, and were rediscovered by S. Ramanujan~\cite{M18} and I. Schur~\cite{S17}. Rogers and others discovered many series--product identities similar in form to the Rogers-Ramanujan identities, and such identities are called ``identities of the Rogers-Ramanujan type." Two of the largest collections of Rogers-Ramanujan type identities are contained in Slater's paper~\cite{S52} and Ramanujan's Lost Notebook~\cite[Chapters 10--11]{AB05},~\cite[Chapters 1--5]{AB07}.
Rogers-Ramanujan type identities occur in closely related ``families." Just as there are two Rogers-Ramanujan identities related to the modulus 5, there is a family of three Rogers-Selberg identities related to the modulus 7~\cite[p. 331, (6)]{R17}, a family of three identities related to the modulus 9 found by Bailey~\cite[p. 422, Eqs. (1.6)--(1.8)]{B47}, a family of four identities related to the modulus 27 found by Dyson~\cite[p. 433, Eqs. (B1)--(B4)]{B47}, etc.
While both Ramanujan and Slater usually managed to find all members of a given family, this was not always the case. In this paper, we present several complete families of identities for which Ramanujan or Slater found only one member, as well as two complete new families.
The following family of four identities related to the modulus 18 is believed to be new: {\allowdisplaybreaks \begin{gather} \sum_{n=0}^\infty \frac{ q^{n(n+1)} (-1;q^3)_n}{ (-1;q)_n (q;q)_{2n} } = \frac{(q,q^8,q^9;q^9)_\infty (q^7,q^{11};q^{18})_\infty} {(q;q)_\infty} \label{m18-1}\\ \sum_{n=0}^\infty \frac{ q^{n^2} (-1;q^3)_n}{ (-1;q)_n (q;q)_{2n} } = \frac{(q^2,q^7,q^9;q^9)_\infty (q^5,q^{13} ; q^{18})_\infty} {(q;q)_\infty} \label{m18-2} \\ \sum_{n=0}^\infty \frac{ q^{n(n+1)} (-q^3;q^3)_n}{ (-q;q)_n (q;q)_{2n+1} } = \frac{(q^3,q^6,q^9;q^9)_\infty (q^3,q^{15};q^{18})_\infty}{(q;q)_\infty} \label{m18-3} \\ \sum_{n=0}^\infty \frac{ q^{n(n+2)} (-q^3;q^3)_n } { (q^2;q^2)_n (q^{n+2};q)_{n+1} } = \frac{(q^4,q^5,q^9;q^9)_\infty (q,q^{17};q^{18})_\infty}{(q;q)_\infty} \label{m18-4} \end{gather} }
\begin{rem} We included Identity~\eqref{m18-3} in our joint paper with D. Bowman~\cite[Eq. (6.30)]{BMS07}, as it also occurs as part of a different family of four identities. \end{rem}
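Identity~\eqref{m18-1} can be tested in the same truncated power series fashion. In the sketch below (helper names are ours, not the paper's), the ratio $(-1;q^3)_n/(-1;q)_n$ is rewritten as $(-q^3;q^3)_{n-1}/(-q;q)_{n-1}$ for $n\ge 1$, so that every series being inverted has constant term 1.

```python
N = 40  # compare all coefficients of q^0, ..., q^{N-1}

def mul(a, b):
    # product of two series truncated at q^N
    c = [0] * N
    for i, x in enumerate(a):
        if x:
            for j in range(N - i):
                c[i + j] += x * b[j]
    return c

def inv(a):
    # reciprocal of a series with constant term 1
    b = [0] * N
    b[0] = 1
    for n in range(1, N):
        b[n] = -sum(a[k] * b[n - k] for k in range(1, n + 1))
    return b

def poch(e, step, m=None, sign=-1):
    # (q^e; q^step)_m for sign=-1, (-q^e; q^step)_m for sign=+1;
    # m=None gives the infinite product, m<=0 the empty product
    r = [0] * N
    r[0] = 1
    j = 0
    while (m is None or j < m) and e + step * j < N:
        f = [0] * N
        f[0] = 1
        f[e + step * j] += sign
        r = mul(r, f)
        j += 1
    return r

# left side of the first mod-18 identity
lhs = [0] * N
n = 0
while n * (n + 1) < N:
    num = poch(3, 3, n - 1, +1)                          # (-q^3;q^3)_{n-1}
    den = mul(poch(1, 1, n - 1, +1), poch(1, 1, 2 * n))  # (-q;q)_{n-1} (q;q)_{2n}
    t = mul(num, inv(den))
    for k in range(n * (n + 1), N):
        lhs[k] += t[k - n * (n + 1)]
    n += 1

# right side: (q,q^8,q^9;q^9)_oo (q^7,q^11;q^18)_oo / (q;q)_oo
rhs = inv(poch(1, 1))
for e, step in [(1, 9), (8, 9), (9, 9), (7, 18), (11, 18)]:
    rhs = mul(rhs, poch(e, step))
assert lhs == rhs
```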
A closely related family of mod 18 identities is as follows. \begin{gather} 1+\sum_{n=1}^\infty \frac{ q^{n^2} (q^3;q^3)_{n-1} (2+q^n)} { (q;q)_{n-1} (q;q)_{2n} } = \frac{(-q,-q^8,q^9;q^9)_\infty (q^7,q^{11};q^{18})_\infty}{(q;q)_\infty} \label{m18-m1}\\ 1+\sum_{n=1}^\infty \frac{ q^{n^2} (q^3;q^3)_{n-1} (1+2q^n)} { (q;q)_{n-1} (q;q)_{2n} } = \frac{(-q^2,-q^7,q^9;q^9)_\infty (q^5,q^{13};q^{18})_\infty} {(q;q)_\infty} \label{m18-m2}\\
\sum_{n=0}^\infty \frac{ q^{n(n+1)} (q^3;q^3)_n}{ (q;q)_n (q;q)_{2n+1} } = \frac{(-q^3,-q^6,q^9;q^9)_\infty (q^3,q^{15};q^{18})_\infty}{(q;q)_\infty}
\label{m18-m3}\\ \sum_{n=0}^\infty \frac{ q^{n(n+2)} (q^3;q^3)_n } { (q;q)_n^2 (q^{n+2};q)_{n+1} } = \frac{(-q^4,-q^5,q^9;q^9)_\infty (q,q^{17};q^{18})_\infty}{(q;q)_\infty}
\label{m18-m4} \end{gather} Identity~\eqref{m18-m3} is due to Dyson~\cite[p. 434, Eq. (B3)]{B47} and also appears in Slater~\cite[p. 161, Eq. (92)]{S52}. In both~\cite{B47} and~\cite{S52}, the right hand side of~\eqref{m18-m3} appears in a different form and thus is seen to be a member of a different family of four identities related to the modulus 27.
Following Ramanujan (cf. \cite[p. 11, Eq (1.1.7)]{AB05}), let us use the notation \begin{equation*} \psi(q) = \frac{(q^2;q^2)_\infty}{(q;q^2)_\infty}. \end{equation*} Ramanujan recorded the identity \begin{equation} \sum_{n=0}^\infty \frac{ q^{n^2} (-q^3;q^6)_n}{ (q^2;q^2)_{2n} } = \frac{ (q^2,q^{10},q^{12};q^{12})_\infty (q^{8},q^{16};q^{24})_\infty }{\psi(-q)} \label{m24t-2} \end{equation} in his lost notebook~\cite[Entry 5.3.8]{AB07}. As we see below, it is actually only one of a family of five similar identities. \begin{gather} \sum_{n=0}^\infty \frac{ q^{n(n+2)} (-q;q^2)_n (-1;q^6)_n}{ (q^2;q^2)_{2n} (-1;q^2)_n} = \frac{ (q,q^{11},q^{12};q^{12})_\infty (q^{10},q^{14};q^{24})_\infty }{\psi(-q)} \label{m24t-1}\\ \sum_{n=0}^\infty \frac{ q^{n^2} (-q;q^2)_n (-1;q^6)_n}{ (q^2;q^2)_{2n} (-1;q^2)_n } = \frac{ (q^3,q^{9},q^{12};q^{12})_\infty (q^{6},q^{18};q^{24})_\infty }{\psi(-q)} \label{m24t-3}\\ \sum_{n=0}^\infty \frac{ q^{n(n+2)} (-q^3;q^6)_n}{ (q;q)_{2n+1} (-q;q)_{2n} } = \frac{ (q^4,q^{8},q^{12};q^{12})_\infty (q^{4},q^{20};q^{24})_\infty }{\psi(-q)} \label{m24t-4}\\ \sum_{n=0}^\infty \frac{ q^{n(n+2)} (-q;q^2)_{n+1} (-q^6;q^6)_n } { (q^4;q^4)_{n} (q^{2n+4};q^2)_{n+1} } = \frac{ (q^5,q^{7},q^{12};q^{12})_\infty (q^{2},q^{22};q^{24})_\infty }{\psi(-q)} \label{m24t-5} \end{gather}
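Ramanujan's identity~\eqref{m24t-2} can likewise be verified numerically as a truncated power series. The sketch below (helper names ours) uses $1/\psi(-q)=(-q;q^2)_\infty/(q^2;q^2)_\infty$, which follows from the definition of $\psi$ above.

```python
N = 40  # compare all coefficients of q^0, ..., q^{N-1}

def mul(a, b):
    # product of two series truncated at q^N
    c = [0] * N
    for i, x in enumerate(a):
        if x:
            for j in range(N - i):
                c[i + j] += x * b[j]
    return c

def inv(a):
    # reciprocal of a series with constant term 1
    b = [0] * N
    b[0] = 1
    for n in range(1, N):
        b[n] = -sum(a[k] * b[n - k] for k in range(1, n + 1))
    return b

def poch(e, step, m=None, sign=-1):
    # (q^e; q^step)_m for sign=-1, (-q^e; q^step)_m for sign=+1;
    # m=None gives the infinite product
    r = [0] * N
    r[0] = 1
    j = 0
    while (m is None or j < m) and e + step * j < N:
        f = [0] * N
        f[0] = 1
        f[e + step * j] += sign
        r = mul(r, f)
        j += 1
    return r

# left side: sum q^{n^2} (-q^3;q^6)_n / (q^2;q^2)_{2n}
lhs = [0] * N
n = 0
while n * n < N:
    t = mul(poch(3, 6, n, +1), inv(poch(2, 2, 2 * n)))
    for k in range(n * n, N):
        lhs[k] += t[k - n * n]
    n += 1

# right side, using 1/psi(-q) = (-q;q^2)_oo / (q^2;q^2)_oo
rhs = mul(poch(1, 2, None, +1), inv(poch(2, 2)))
for e, step in [(2, 12), (10, 12), (12, 12), (8, 24), (16, 24)]:
    rhs = mul(rhs, poch(e, step))
assert lhs == rhs
```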
Ramanujan also recorded the identity \begin{equation} \label{m24t-m2} \sum_{n=0}^\infty \frac{ q^{n^2} (q^3;q^6)_n}{ (q;q^2)_{n}^2 (q^4;q^4)_n } = \frac{ (-q^2,-q^{10},q^{12};q^{12})_\infty (q^{8},q^{16};q^{24})_\infty }{\psi(-q)} \end{equation} in the lost notebook~\cite[Entry 5.3.9]{AB07}.
Again, it is one of a family of five similar identities. This time, however, two of the remaining four identities were found by Slater. Identity~\eqref{m24t-m4} is a corrected presentation of~\cite[p. 164, Eq. (110)]{S52} and identity~\eqref{m24t-m5} is a corrected presentation of~\cite[p. 163, Eq. (108)]{S52}.
\begin{gather} 1+\sum_{n=1}^\infty \frac{ q^{n^2} (-q;q^2)_n (q^6;q^6)_{n-1} (2+q^{2n})}{ (q^2;q^2)_{2n} (q^2;q^2)_{n-1}} = \frac{ (-q,-q^{11},q^{12};q^{12})_\infty (q^{10},q^{14};q^{24})_\infty } {\psi(-q)} \label{m24t-m1}\\ 1+\sum_{n=1}^\infty \frac{ q^{n^2} (-q;q^2)_n (q^6;q^6)_{n-1} (1+2q^{2n})} { (q^2;q^2)_{2n} (q^2;q^2)_{n-1} } = \frac{ (-q^3,-q^{9},q^{12};q^{12})_\infty (q^{6},q^{18};q^{24})_\infty } {\psi(-q)} \label{m24t-m3}\\
\sum_{n=0}^\infty \frac{ q^{n(n+2)} (q^3;q^6)_n (-q;q^2)_{n+1} }{ (q^2;q^2)_{2n+1} (q;q^2)_n } = \frac{ (-q^4,-q^{8},q^{12};q^{12})_\infty (q^{4},q^{20};q^{24})_\infty } {\psi(-q)} \label{m24t-m4}\\
\sum_{n=0}^\infty \frac{ q^{n(n+2)} (-q;q^2)_{n+1} (q^6;q^6)_n } { (q^{2n+4};q^2)_{n+1} (q^2;q^2)_n^2 } = \frac{ (-q^5,-q^{7},q^{12};q^{12})_\infty (q^{2},q^{22};q^{24})_\infty } {\psi(-q)}\label{m24t-m5} \end{gather}
We believe that the following family of five identities has not previously appeared in the literature: \begin{gather} \sum_{n=0}^\infty \frac{ q^{n(n+1)} (-q^2;q^2)_n (-q^3;q^6)_n }
{ (q;q)_{2n} (-q;q)_{2n+1} (-q;q^2)_n} = \frac{ (q,q^{11},q^{12};q^{12})_\infty (q^{10},q^{14};q^{24})_\infty } {\varphi(-q^2)}\label{m24s-1}\\ \sum_{n=0}^\infty \frac{ q^{n(n+1)} (-1;q^6)_n (-q^2;q^2)_n }{ (q^2;q^2)_{2n} (-1;q^2)_n } = \frac{ (q^2,q^{10},q^{12};q^{12})_\infty (q^{8},q^{16};q^{24})_\infty } {\varphi(-q^2)} \label{m24s-2}\\ \sum_{n=0}^\infty \frac{ q^{n(n+1)} (-q^2;q^2)_n (-q^3;q^6)_n}{ (q^2;q^2)_{2n+1} (-q;q^2)_n } = \frac{ (q^3,q^{9},q^{12};q^{12})_\infty (q^{6},q^{18};q^{24})_\infty } {\varphi(-q^2)}\label{m24s-3}\\ \sum_{n=0}^\infty \frac{ q^{n(n+1)} (-q^6;q^6)_n}{ (q^2;q^2)_{2n+1} } = \frac{ (q^4,q^{8},q^{12};q^{12})_\infty (q^{4},q^{20};q^{24})_\infty } {\varphi(-q^2)} \label{m24s-4}\\ \sum_{n=0}^\infty \frac{ q^{n(n+3)} (-q^2;q^2)_{n} (-q^3;q^6)_n } { (q^2;q^2)_{2n+1} (-q;q^2)_n } = \frac{ (q^5,q^{7},q^{12};q^{12})_\infty (q^{2},q^{22};q^{24})_\infty } {\varphi(-q^2)} \label{m24s-5} ,\end{gather} where \begin{equation*}
\varphi(q) := \frac{(-q;-q)_\infty}{(q;-q)_\infty}
\end{equation*} is another notation used by Ramanujan.
In the following counterpart to the preceding family, two of the five identities appear in Slater's list. \begin{gather} \sum_{n=0}^\infty \frac{ q^{n(n+1)} (-q^2;q^2)_n (q^3;q^6)_{n} }{ (q;q)_{2n+1} (-q;q)_{2n} (q;q^2)_{n}} = \frac{ (-q,-q^{11},q^{12};q^{12})_\infty (q^{10},q^{14};q^{24})_\infty } {\varphi(-q^2)}\label{m24s-m1} \\ 1+\sum_{n=1}^\infty \frac{ q^{n(n+1)} (q^6;q^6)_{n-1} (-q^2;q^2)_n}{ (q^2;q^2)_{n-1} (q^2;q^2)_{2n} } = \frac{ (-q^2,-q^{10},q^{12};q^{12})_\infty (q^{8},q^{16};q^{24})_\infty }
{\varphi(-q^2)} \label{m24s-m2} \\ \sum_{n=0}^\infty \frac{ q^{n(n+1)} (-q^2;q^2)_n (q^3;q^6)_{n}}{ (q^2;q^2)_{2n+1} (q;q^2)_{n} } = \frac{ (-q^3,-q^{9},q^{12};q^{12})_\infty (q^{6},q^{18};q^{24})_\infty }{\varphi(-q^2)}\label{m24s-m3}\\ \sum_{n=0}^\infty \frac{ q^{n(n+1)} (q^6;q^6)_n (-q^2;q^2)_n } { (q^2;q^2)_{2n+1} (q^2;q^2)_n } = \frac{ (-q^4,-q^{8},q^{12};q^{12})_\infty (q^{4},q^{20};q^{24})_\infty }{\varphi(-q^2)}\label{m24s-m4}\\
\sum_{n=0}^\infty \frac{ q^{n(n+3)} (-q^2;q^2)_{n} (q^3;q^6)_n } { (q^2;q^2)_{2n+1} (q;q^2)_n } = \frac{ (-q^5,-q^{7},q^{12};q^{12})_\infty (q^{2},q^{22};q^{24})_\infty} {\varphi(-q^2)}\label{m24s-m5} \end{gather} Identity~\eqref{m24s-m3} is due to Slater~\cite[p. 163, Eq. (107)]{S52}. Identity~\eqref{m24s-m4} is originally due to Dyson~\cite[p. 434, Eq. (D2)]{B47} and also appears in Slater~\cite[p. 160, Eq. (77)]{S52}.
The following false theta series identities, which are closely related to identities~\eqref{m24s-1}--\eqref{m24s-m5}, are believed to be new, except for~\eqref{ft7} and ~\eqref{ft9}. Identity~\eqref{ft7} is due to Dyson~\cite[p. 434, Eq. (E1)]{B47}, while Identity~\eqref{ft9} appears in Ramanujan's lost notebook~\cite[Entry 5.4.2]{AB07} and was rediscovered by Dyson~\cite[p. 434, Eq. (E2)]{B47}. \begin{multline} \sum_{n=0}^\infty \frac{(-1)^n q^{n(n+1)} (-q^3;q^6)_n} { (q^{2};q^4)_n (-q;q)_{2n+1} }\\ = \sum_{n=0}^\infty (-1)^n q^{18n^2 + 3n}(1+q^{30n+15}) - q \label{ft1} \sum_{n=0}^\infty (-1)^n q^{18n^2 + 9n}(1+q^{18n+9}) \end{multline} \begin{equation} \sum_{n=0}^\infty \frac{ (-1)^n q^{n(n+3)} (-q^6;q^6)_n}{ (q^{2};q^4)_{n+1} (-q^2;q^2)_{n} (-q^2;q^2)_{n+1}} =\sum_{n=0}^\infty (-1)^n q^{18n^2+12n} (1+q^{12n+6}) \label{ft2} \end{equation} \begin{multline} \sum_{n=0}^\infty \frac{ (-1)^n q^{n(n+1)} (-q^3;q^6)_n}{ (q^{2};q^4)_{n+1} (-q;q)_{2n}} \\=\sum_{n=0}^\infty (-1)^n q^{18n^2+3n}(1+q^{30n+15})
+q^3\sum_{n=0}^\infty (-1)^n q^{18n^2+15n}(1+q^{6n+3}) \label{ft3} \end{multline} \begin{multline} \sum_{n=0}^\infty \frac{ (-1)^n q^{n(n+1)} (-q^6;q^6)_n}{ (q^{2};q^4)_{n+1} (-q^2;q^2)_{n}^2}\\ =\sum_{n=0}^\infty (-1)^n q^{18n^2+6n} (1+q^{24n+12})
+2 q^4 \sum_{n=0}^\infty (-1)^n q^{18n^2+18n} \label{ft4} \end{multline} \begin{multline} \sum_{n=0}^\infty \frac{ (-1)^n q^{n(n+3)} (-q^3;q^6)_n}{ (q^{2};q^4)_{n+1} (-q;q)_{2n}}\\ =\sum_{n=0}^\infty (-1)^n q^{18n^2+9n} (1+q^{18n+9})
+ q^2 \sum_{n=0}^\infty (-1)^n q^{18n^2+15n} (1+q^{6n+3}) \label{ft5} \end{multline} \begin{multline} \sum_{n=0}^\infty \frac{(-1)^n q^{n(n+1)} (q^3;q^6)_n }{ (q^{2};q^4)_n (-q^2;q^2)_n (q;q^2)_{n+1} } \\=\sum_{n=0}^\infty q^{18n^2 + 3n}(1-q^{30n+15}) + q \sum_{n=0}^\infty q^{18n^2 + 9n}(1-q^{18n+9}) \label{ft6} \end{multline} \begin{equation} \sum_{n=0}^\infty \frac{ (-1)^{n} q^{n(n+3)}(q^6;q^6)_{n}}{ (q;q)_{2n+1} (-q;q)_{2n+2} } =\sum_{n=0}^\infty q^{18n^2+12n} (1-q^{12n+6})\label{ft7} \end{equation} \begin{multline} \sum_{n=0}^\infty \frac{ (-1)^n q^{n(n+1)} (q^3;q^6)_n}{ (q^{2};q^4)_{n+1} (-q^2;q^2)_n (q;q^2)_{n} } \\ =\sum_{n=0}^\infty q^{18n^2 + 3n}(1-q^{30n+15}) -q^3 \sum_{n=0}^\infty q^{18n^2 + 15n}(1-q^{6n+3}) \label{ft8} \end{multline} \begin{equation} \sum_{n=0}^\infty \frac{ (-1)^n q^{n(n+1)} (q^6;q^6)_n}{ (q^{2};q^2)_{2n+1} } =\sum_{n=0}^\infty q^{18n^2+6n} (1-q^{24n+12}) \label{ft9} \end{equation} \begin{multline} \sum_{n=0}^\infty \frac{ (-1)^n q^{n(n+3)} (q^3;q^6)_n}{ (q^{2};q^4)_{n+1} (-q^2;q^2)_n (q;q^2)_{n} } \\=\sum_{n=0}^\infty q^{18n^2+9n} (1-q^{18n+9})
+ q^2 \sum_{n=0}^\infty q^{18n^2+15n} (1-q^{6n+3}) \label{ft10} \end{multline}
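As a spot check on this family, Dyson's identity~\eqref{ft7} can be confirmed as a truncated power series; the helpers in the sketch below are ours, not from the paper.

```python
N = 40  # compare all coefficients of q^0, ..., q^{N-1}

def mul(a, b):
    # product of two series truncated at q^N
    c = [0] * N
    for i, x in enumerate(a):
        if x:
            for j in range(N - i):
                c[i + j] += x * b[j]
    return c

def inv(a):
    # reciprocal of a series with constant term 1
    b = [0] * N
    b[0] = 1
    for n in range(1, N):
        b[n] = -sum(a[k] * b[n - k] for k in range(1, n + 1))
    return b

def poch(e, step, m=None, sign=-1):
    # (q^e; q^step)_m for sign=-1, (-q^e; q^step)_m for sign=+1;
    # m=None gives the infinite product
    r = [0] * N
    r[0] = 1
    j = 0
    while (m is None or j < m) and e + step * j < N:
        f = [0] * N
        f[0] = 1
        f[e + step * j] += sign
        r = mul(r, f)
        j += 1
    return r

# left side: sum (-1)^n q^{n(n+3)} (q^6;q^6)_n / ((q;q)_{2n+1} (-q;q)_{2n+2})
lhs = [0] * N
n = 0
while n * (n + 3) < N:
    den = mul(poch(1, 1, 2 * n + 1), poch(1, 1, 2 * n + 2, +1))
    t = mul(poch(6, 6, n), inv(den))
    s = -1 if n % 2 else 1
    for k in range(n * (n + 3), N):
        lhs[k] += s * t[k - n * (n + 3)]
    n += 1

# right side: sum q^{18n^2+12n} (1 - q^{12n+6})
rhs = [0] * N
n = 0
while 18 * n * n + 12 * n < N:
    rhs[18 * n * n + 12 * n] += 1
    if 18 * n * n + 24 * n + 6 < N:
        rhs[18 * n * n + 24 * n + 6] -= 1
    n += 1
assert lhs == rhs
```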
In \S\ref{StdResults}, we will review some standard definitions and results to be used in the sequel. In \S\ref{Proofs}, we indicate the Bailey pairs necessary to prove Identities~\eqref{m18-1}--\eqref{ft10} and provide the keys to proving Identities~\eqref{m18-1}--\eqref{m24s-m5}. In \S\ref{FT}, we will discuss and prove the false theta series identities~\eqref{ft1}--\eqref{ft10}. Finally, in \S\ref{Lie} we discuss possible connections between Identities~\eqref{m18-1}--\eqref{m18-4} and the standard level 6 modules associated with the Lie algebra $A_{2}^{(2)}$.
\section{Standard definitions and results}\label{StdResults}
We will require a number of definitions and theorems from the literature. It will be convenient to adopt Ramanujan's notation for theta functions~\cite[p. 11, Eqs. (1.1.5)--(1.1.8)]{AB05}. {\allowdisplaybreaks
\begin{defn} For $|ab|<1$, let \begin{align} f(a,b) &:= \sum_{n=-\infty}^\infty a^{n(n+1)/2} b^{n(n-1)/2}, \label {fdef}\\ \varphi(q) &:= f(q,q), \label{phidef}\\ \psi(q) &:= f(q,q^3), \label{psidef}\\ f(-q) &:= f(-q,-q^2).\label{PNSdef} \end{align} \end{defn} }
Both the Jacobi triple product identity and the quintuple product identity were used extensively by Ramanujan (cf.~\cite{AB05}, \cite{AB07}) and Slater~\cite{S52}. Rogers, on the other hand, appears to have been unaware of the quintuple product identity, since he referred to~\cite[p. 333, Eq. (16)]{R94} \begin{equation} \label{remarkable} \frac{(q^2;q^2)_\infty }{ (q^{30}; q^{30})_\infty (q;q^5)_\infty (q^4;q^5)_\infty} =
(q^{13}; q^{30})_\infty (q^{17};q^{30})_\infty + q (q^7;q^{30})_\infty (q^{23};q^{30})_\infty, \end{equation} which follows immediately from the quintuple product identity, as a ``remarkable identity" after observing that both sides of~\eqref{remarkable} are equal to the same series. Accordingly, we have chosen the name ``Ramanujan-Slater type identities" in our title for the identities in this paper rather than ``Rogers-Ramanujan type identities."
Many proofs of the Jacobi triple product identity are known; see, e.g.,~\cite[pp. 496--500]{AAR99} for two proofs. For a history and many proofs of the quintuple product identity, see S. Cooper's excellent survey article~\cite{C06}.
\begin{thm}[Jacobi's triple product identity] For $|ab|<1$,
\begin{equation} \label{jtp}
f(a,b) = (-a, -b, ab ; ab)_\infty.
\end{equation} \end{thm}
\begin{thm}[Quintuple product identity] For $|w|<1$ and $x\neq 0$,
\begin{multline} \label{qpi}
f(-wx^3, -w^2 x^{-3}) + x f(-wx^{-3}, -w^2 x^3) = \frac{ f(w/x, x) f(-w/x^2, -wx^2) }{ f(-w^2) } \\
= (-wx^{-1}, -x, w; w)_\infty (wx^{-2}, wx^2; w^2)_\infty.
\end{multline} \end{thm}
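Both product identities are easy to confirm numerically for specific arguments. The sketch below (helper names ours, not the paper's) checks~\eqref{jtp} with $a=q$, $b=q^2$, and~\eqref{qpi} with $w=q^4$, $x=q$.

```python
N = 40  # compare all coefficients of q^0, ..., q^{N-1}

def mul(a, b):
    # product of two series truncated at q^N
    c = [0] * N
    for i, x in enumerate(a):
        if x:
            for j in range(N - i):
                c[i + j] += x * b[j]
    return c

def poch(e, step, m=None, sign=-1):
    # (q^e; q^step)_m for sign=-1, (-q^e; q^step)_m for sign=+1;
    # m=None gives the infinite product
    r = [0] * N
    r[0] = 1
    j = 0
    while (m is None or j < m) and e + step * j < N:
        f = [0] * N
        f[0] = 1
        f[e + step * j] += sign
        r = mul(r, f)
        j += 1
    return r

def theta(s, t, sa=1, sb=1):
    # f(sa*q^s, sb*q^t) = sum_{n in Z} (sa*q^s)^{n(n+1)/2} (sb*q^t)^{n(n-1)/2}
    r = [0] * N
    for n in range(-N, N + 1):
        i, j = n * (n + 1) // 2, n * (n - 1) // 2
        e = s * i + t * j
        if e < N:
            r[e] += sa ** i * sb ** j
    return r

# Jacobi triple product with a = q, b = q^2: f(q,q^2) = (-q,-q^2,q^3;q^3)_oo
jtp = mul(mul(poch(1, 3, None, +1), poch(2, 3, None, +1)), poch(3, 3))
assert theta(1, 2) == jtp

# quintuple product with w = q^4, x = q:
# f(-q^7,-q^5) + q f(-q,-q^11) = (-q^3,-q,q^4;q^4)_oo (q^2,q^6;q^8)_oo
lhs = theta(7, 5, -1, -1)
t = theta(1, 11, -1, -1)
for k in range(1, N):
    lhs[k] += t[k - 1]
rhs = mul(mul(poch(3, 4, None, +1), poch(1, 4, None, +1)),
          mul(poch(4, 4), mul(poch(2, 8), poch(6, 8))))
assert lhs == rhs
```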
The following is a special case of Bailey's ${}_6 \psi_6$ summation formula~\cite[Eq. (4.7)]{B36} which appears in Slater~\cite[p. 464, Eq. (3.1)]{S51}. \begin{thm}[Bailey] \begin{multline} \label{6psi6}
\sum_{r=-\infty}^\infty \frac{ (1-aq^{6r})(q^{-n};q)_{3r} (e;q^3)_r a^{2r} q^{3nr} }
{(1-a)(aq^{n+1}; q)_{3r} (aq^3/e;q^3)_r e^r } \\
= \frac{ (a;q^3)_\infty (q^3/a;q^3)_\infty (aq^2/e;q^3)_\infty (aq/e; q^3)_\infty (q;q)_n (aq;q)_n (a^2/e; q^3)_n}
{ (q;q^3)_\infty (q^2;q^3)_\infty (q^3/e;q^3)_\infty (a^2/e; q^3)_\infty (a;q)_{2n} (aq/e;q)_n },
\end{multline}
where $a$ must be a power of $q$ so that the series terminates below. \end{thm}
The next two $q$-hypergeometric summation formulas are due to Andrews~\cite[p. 526, Eqs. (1.8) and (1.9) respectively]{A73}. \begin{thm}[$q$-analog of Gauss's ${}_2 F_{1} (\frac 12)$ sum] \begin{equation} \label{q2ndGauss} \sum_{n=0}^\infty \frac{ q^{n(n+1)} (a;q^2)_n (b;q^2)_n }
{ (q^2;q^2)_n (abq^2;q^4)_n } = \frac{ (aq^2;q^4)_\infty (bq^2;q^4)_\infty } { (q^2;q^4)_\infty (abq^2;q^4)_\infty }. \end{equation} \end{thm} {\allowdisplaybreaks \begin{thm}[$q$-analog of Bailey's ${}_2 F_1 (\frac 12)$ sum] \begin{equation} \label{qBailey}
\sum_{n=0}^\infty \frac{ (bq;q^2)_n (b^{-1}q;q^2)_n c^n q^{n^2} } {(cq;q^2)_n (q^4;q^4)_n } = \frac{ (b^{-1}cq^2;q^4)_\infty (bcq^2; q^4)_\infty }{ (cq;q^2)_\infty }. \end{equation} \end{thm} } \begin{defn}
A pair of sequences \[ \left( \{\alpha_n (a,q) \}_{n=0}^\infty, \{ \beta_n(a,q)\}_{n=0}^\infty \right)\]
is called
a \emph{Bailey pair relative to $a$} if
\begin{equation} \label{BPdef}
\beta_n (a,q) = \sum_{r=0}^n \frac{\alpha_r (a,q)}{(a q;q )_{n+r} (q;q)_{n-r}}. \end{equation} \end{defn} Bailey~\cite[p. 3, Eq. (3.1)]{B49} proved a key result, now known as ``Bailey's lemma," which led to the discovery of many Rogers-Ramanujan type identities.
We will require several special cases of Bailey's lemma. \begin{thm} If $\left( \{ \alpha_n (a,q) \}, \{ \beta_n (a,q) \} \right)$ form a Bailey pair, then {\allowdisplaybreaks \begin{align} \sum_{n=0}^\infty a^n q^{n^2} \beta_n(a,q) &= \frac{1}{(aq;q)_\infty}
\sum_{r=0}^\infty a^r q^{r^2} \alpha_r (a,q) \label{aPBL} \\ \sum_{n=0}^\infty a^n q^{n^2} (-q;q^2)_n \beta_n(a,q^2) &= \frac{(-aq;q^2)_\infty }{(aq^2;q^2)_\infty}
\sum_{r=0}^\infty a^r q^{r^2} \alpha_r (a,q^2) \label{aTBL}\\ \frac{1}{1-q^2}\sum_{n=0}^\infty q^{n(n+1)} (-q^2;q^2)_n \beta_n(q^2,q^2)
&= \frac{1}{\varphi(-q^2)}
\sum_{r=0}^\infty q^{r(r+1)} \alpha_r (q^2,q^2). \label{S2BL} \\ \frac{1}{1-q^2}\sum_{n=0}^\infty (-1)^n q^{n(n+1)} (q^2;q^2)_n \beta_n(q^2,q^2)
&=
\sum_{r=0}^\infty (-1)^r q^{r(r+1)} \alpha_r (q^2,q^2). \label{FBL}
\end{align}
} \end{thm}
Eq.~\eqref{aPBL} is~\cite[p. 3, Eq. (3.1) with $\rho_1, \rho_2\to\infty$]{B49}. Eq.~\eqref{aTBL} is~\cite[p. 3, Eq. (3.1) with $\rho_1=-\sqrt{q};\ \rho_2\to\infty$]{B49}. Eq.~\eqref{S2BL} is~\cite[p. 3, Eq. (3.1) with $\rho_1=-q;\ \rho_2\to\infty$]{B49}. Eq.~\eqref{FBL} is~\cite[p. 3, Eq. (3.1) with $\rho_1=q;\ \rho_2\to\infty$]{B49}.
\section{Proofs of Identities~\eqref{m18-1}--\eqref{m24s-m5}}\label{Proofs}
To facilitate the proofs of many of the identities, we will first need to establish a number of Bailey pairs. For instance,
\begin{lem} \label{BP2} If {\allowdisplaybreaks \begin{equation*}
\alpha_n (1,q) =
\left\{ \begin{array}{ll}
1 &\mbox{if $n=0$}\\
q^{\frac 92 r^2 - \frac 32 r} (1+q^{3r}) &\mbox{if $n=3r>0$}\\
-q^{\frac 92 r^2 - \frac 92 r + 1} &\mbox{if $n=3r-1$}\\
-q^{\frac 92 r^2 + \frac 92 r + 1} &\mbox{if $n=3r+1$}
\end{array} \right. \end{equation*} and \[ \beta_n (1,q) = \frac {(-1; q^3)_n } { (q;q)_{2n} (-1; q)_n },\] then $\left( \alpha_n (1,q) , \beta_n (1,q) \right)$ form a Bailey pair relative to $1$.} \end{lem} \begin{pf} Set $a=q$ and $e=-q^2$ in~\eqref{6psi6} and simplify to obtain \begin{equation} \label{6psi6spec} \sum_{r\in\mathbb Z} \frac{ (1-q^{6r+1}) q^{\frac 92 r^2 -\frac 32 r} }
{(q;q)_{n-3r} (q; q)_{n+3r+1} }
= \frac{ (-1; q^3)_n}
{ (q;q)_{2n} (-1;q)_n }.
\end{equation} \begin{align*} &\qquad\quad\sum_{r=0}^n \frac{ \alpha_r(1,q)}{(q;q)_{n-r} (q;q)_{n+r}} \\ &= \frac{1}{(q;q)_n^2} + \sum_{r\geqq 1} \frac{ \alpha_{3r}(1,q)}{(q;q)_{n-3r} (q;q)_{n+3r}}
+ \sum_{r\geqq 1} \frac{ \alpha_{3r-1}(1,q)}{(q;q)_{n-3r+1} (q;q)_{n+3r-1}} \\
&\qquad\qquad
+ \sum_{r\geqq 0} \frac{ \alpha_{3r+1}(1,q)}{(q;q)_{n-3r-1} (q;q)_{n+3r+1}} \\
&= \sum_{r\in\mathbb Z} \frac{ q^{\frac 92 r^2 -\frac 32 r} }{(q;q)_{n-3r} (q;q)_{n+3r}}
- \sum_{r\in\mathbb Z} \frac{ q^{\frac 92 r^2 + \frac 92 r + 1}}{(q;q)_{n+3r+1} (q;q)_{n-3r-1}}
\\
&= \sum_{r\in\mathbb Z} \frac{ q^{\frac 92 r^2 -\frac 32 r} }{(q;q)_{n-3r} (q;q)_{n+3r+1}}
\left( (1-q^{n+3r+1}) - q^{6r+1} (1-q^{n-3r}) \right) \\ &=\sum_{r\in\mathbb Z} \frac{ q^{\frac 92 r^2 -\frac 32 r} (1-q^{6r+1}) }{(q;q)_{n-3r} (q;q)_{n+3r+1}}
= \frac{(-1;q^3)_n}{(q;q)_{2n} (-1;q)_n} \mbox{ (by~\eqref{6psi6spec}) }. \qed \end{align*} \end{pf}
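The defining relation~\eqref{BPdef} with $a=1$ can also be checked numerically against the closed form of $\beta_n$ for small $n$. The sketch below (helper names ours) does so for $n\le 6$, again using $(-1;q^3)_n/(-1;q)_n=(-q^3;q^3)_{n-1}/(-q;q)_{n-1}$ for $n\ge 1$ to keep every inverted series monic.

```python
N = 40  # compare all coefficients of q^0, ..., q^{N-1}

def mul(a, b):
    # product of two series truncated at q^N
    c = [0] * N
    for i, x in enumerate(a):
        if x:
            for j in range(N - i):
                c[i + j] += x * b[j]
    return c

def inv(a):
    # reciprocal of a series with constant term 1
    b = [0] * N
    b[0] = 1
    for n in range(1, N):
        b[n] = -sum(a[k] * b[n - k] for k in range(1, n + 1))
    return b

def poch(e, step, m=None, sign=-1):
    # (q^e; q^step)_m for sign=-1, (-q^e; q^step)_m for sign=+1;
    # m=None gives the infinite product, m<=0 the empty product
    r = [0] * N
    r[0] = 1
    j = 0
    while (m is None or j < m) and e + step * j < N:
        f = [0] * N
        f[0] = 1
        f[e + step * j] += sign
        r = mul(r, f)
        j += 1
    return r

def alpha(m):
    # alpha_m(1,q) from the lemma, as a truncated series
    a = [0] * N
    if m == 0:
        a[0] = 1
    elif m % 3 == 0:                       # m = 3r, r >= 1
        r = m // 3
        for e in ((9 * r * r - 3 * r) // 2, (9 * r * r + 3 * r) // 2):
            if e < N:
                a[e] += 1
    elif m % 3 == 2:                       # m = 3r - 1
        r = (m + 1) // 3
        e = (9 * r * r - 9 * r + 2) // 2
        if e < N:
            a[e] -= 1
    else:                                  # m = 3r + 1
        r = (m - 1) // 3
        e = (9 * r * r + 9 * r + 2) // 2
        if e < N:
            a[e] -= 1
    return a

betas, closeds = [], []
for n in range(7):
    # beta_n(1,q) = sum_{r=0}^n alpha_r(1,q) / ((q;q)_{n+r} (q;q)_{n-r})
    beta = [0] * N
    for r in range(n + 1):
        term = mul(alpha(r), inv(mul(poch(1, 1, n - r), poch(1, 1, n + r))))
        beta = [x + y for x, y in zip(beta, term)]
    betas.append(beta)
    # closed form (-1;q^3)_n / ((q;q)_{2n} (-1;q)_n), written with monic factors
    closed = mul(poch(3, 3, n - 1, +1),
                 inv(mul(poch(1, 1, 2 * n), poch(1, 1, n - 1, +1))))
    closeds.append(closed)
assert betas == closeds
```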
The other necessary Bailey pairs can be established similarly, so we omit the details and summarize the results in Table~\ref{BPtable}.
With the required Bailey pairs in hand, the identities can be proved. For example, to prove Identity~\eqref{m18-1}, we proceed as follows: \begin{pf} Insert the Bailey pair P2 into Eq.~\eqref{aPBL} with $a=1$ to obtain \begin{align*}
& \quad\qquad \sum_{n=0}^\infty \frac{ q^{n(n+1)} (-1;q^3)_n}{ (q;q)_{2n} (-1;q)_n}\\ & = \frac{1}{(q;q)_\infty}\left( 1 + \sum_{r=1}^\infty q^{\frac{27}{2} r^2 - \frac 32 r} (1+q^{3r})
- \sum_{r=1}^\infty q^{\frac{27}{2} r^2 - \frac {15}{2} r +1}
- \sum_{r=0}^\infty q^{\frac{27}{2} r^2 + \frac{15}{2} r+1 } \right) \\ & = \frac{1}{(q;q)_\infty} \left( \sum_{r=-\infty}^\infty q^{\frac{27}{2} r^2 - \frac 32 r}
-q \sum_{r=-\infty}^\infty q^{\frac{27}{2} r^2 - \frac {15}{2}r } \right)\\
&= \frac{ f(q^{12}, q^{15}) - q f(q^6, q^{21}) }{ f(-q)} \qquad \qquad\mbox{ (by~\eqref{jtp})} \\
& = \frac{ (q, q^8 , q^9; q^9)_\infty (q^7, q^{11};q^{18})_\infty }{ (q;q)_\infty}
\qquad\mbox{ (by~\eqref{qpi}) }. \qed \end{align*} \end{pf} The details of the proofs of the other identities are similar and therefore omitted, with the key information summarized in Table~\ref{IdTable}.
\begin{landscape} \begin{table}[hbt] \label{BPtable} \centering
\begin{tabular}{|c|c|c|c|c|c|c|c| } \hline\hline
& $a$ &$e$ & $\beta_n$ & $\alpha_{3r+1}$ & $\alpha_{3r}$ & $\alpha_{3r-1}$ & rel to \\ \hline P1& $q$ & $-q^2$ & $\frac{(-1;q^3)_n}{(q;q)_{2n} (-1;q)_n}$
& $-q^{\frac 92 r^2 + \frac 92 r + 1} $
& $ q^{\frac 92 r^2 - \frac 32 r} (1+q^{3r})$
& $ -q^{\frac 92 r^2 - \frac 92 r + 1} $
& $1$ \\ \hline P2& $q$ & $-q^2$ &$\frac {q^n (-1; q^3)_n } { (q;q)_{2n} (-1; q)_n }$
& $-q^{\frac 92 r^2 + \frac 32 r}$
& $q^{\frac 92 r^2 - \frac 32 r} (1+q^{3r}) $
& $-q^{\frac 92 r^2 - \frac 32 r}$
& $1$ \\ \hline P3& $q^2$ & $-q$ & $ \frac {(-q^3; q^3)_n } { (q^2;q)_{2n} (-q; q)_n }$
& $-2q^{\frac 92 r^2 + \frac 92 r + 1} $
& $ q^{\frac 92 r^2 + \frac 32 r}$
& $q^{\frac 92 r^2 - \frac 32 r} $
& $q$\\
\hline P4& $q^2$ & $-q^{5/2} $ & $\frac {(-q^{3/2}; q^3)_n } { (q^2;q)_{2n} (-q^{1/2}; q)_n }$
& $-q^{\frac 92 r^2+3r+\frac 12} (1+q^{3r+\frac 32})$
& $q^{\frac 92 r^2}$
& $q^{\frac 92 r^2}$
& $q$ \\
\hline P5& $q^2$ & $-q^{5/2}$ & $ \frac {q^n (-q^{3/2}; q^3)_n } { (q^2;q)_{2n} (-q^{1/2}; q)_n }$
& $ -q^{\frac 92 r^2} (q^{6r+\frac 32}+q^{3r})$
& $q^{\frac 92 r^2+3r} $
& $q^{\frac 92 r^2-3r} $
& $q$\\
\hline P6 & $q$ & $-q^2$ & $ \frac {(1-q)(-1; q^3)_n } { (q;q)_{2n} (-1; q)_n }$
& $0$
& $ q^{\frac 92 r^2 - \frac 32 r} (1-q^{6r+1}) $
& $ -q^{\frac 92 r^2 - \frac 92 r + 1}(1-q^{6r-1})$
& $q$\\
\hline
P7 & $q$ & $q^2$
& $\frac {(q^3; q^3)_{n-1} } { (q^2;q)_{2n-1} (q;q)_{n-1} }$
& $(-1)^{r+1} q^{\frac 92 r^2 + \frac 32 r +1}\frac{1-q^{6r+3}}{1-q}$
& $(-1)^r q^{\frac 92 r^2 - \frac 32 r} \frac{1-q^{6r+1}}{1-q} $
& $(-1)^{r+1} q^{\frac 92 r^2 - \frac 92 r+1} \frac{1-q^{6r-1}}{1-q}$
& $q$\\
\hline \end{tabular} \caption{By specializing $a$ and $e$ in~\eqref{6psi6} as indicated, each of the following seven Bailey pairs (relative to $1$ or $q$ as stated) can be established. In all cases $\alpha_0 = \beta_0 = 1$.} \end{table} \end{landscape}
\begin{table} \caption{Proofs of identities~\eqref{m18-1}--\eqref{m24s-m5}} \label{IdTable}
\begin{tabular}{|c|c|c|c|c| } \hline\hline Eq. & Bailey& Bailey & $a$ & \\
& pair & lemma & & \\ \hline \eqref{m18-1} & P2 & \eqref{aPBL} & $1$ &\\ \eqref{m18-2} & P1 & \eqref{aPBL} & $1$&\\ \eqref{m18-3} & P3 & \eqref{aPBL} & $q$&\\ \eqref{m18-4} & $-$ & $-$ & $-$ & $q^{-1} \times ( \eqref{m18-2} - \eqref{m18-1} ) $\\ \eqref{m18-m1} & $-$ & $-$ & $-$ & ~\cite[p. 433, (B4) $+q\times$(B2)]{B47}\\ \eqref{m18-m2} & $-$ & $-$ & $-$ & ~\cite[p. 433, (B4)$+q^2\times$(B1)]{B47}\\ \eqref{m18-m3} & $-$ & $-$ & $-$ &~\cite[p. 433, (B3)]{B47}\\ \eqref{m18-m4} & $-$ & $-$ & $-$ &~\cite[p. 433, (B2) $-q\times$(B1)]{B47}\\ \eqref{m24t-2} & $-$ & $-$ & $-$ & Set $b=e^{\pi i/3}$ and $c=1$ in \eqref{qBailey}.\\ \eqref{m24t-1} & P2 & \eqref{aTBL} & $1$ &\\ \eqref{m24t-3} & P1 & \eqref{aTBL} & $1$ &\\ \eqref{m24t-4} & $-$ & $-$ & $-$ &Set $b=e^{\pi i/3}$ and $c=q^2$ in~\eqref{qBailey}.\\ \eqref{m24t-5} & $-$ & $-$ & $-$ & $q^{-1}\times (\eqref{m24t-3}-\eqref{m24t-1})$\\ \eqref{m24t-m2} & $-$ & $-$ & $-$ &Set $b=e^{2\pi i/3}$ and $c=1$ in \eqref{qBailey}.\\ \eqref{m24t-m1} & $-$ & $-$ & $-$ & \cite[p. 434, (C3)$ +q\times$(C2)]{B47}\\ \eqref{m24t-m3} & $-$ & $-$ & $-$ & \cite[p. 434, (C3)$ +q^3\times$(C1)]{B47}\\ \eqref{m24t-m4} & $-$ & $-$ & $-$ & Set $b=e^{2\pi i/3}$ and $c=q^2$ in \eqref{qBailey}\\ \eqref{m24t-m5} & $-$ & $-$ & $-$ & $q^{-1}\times (\eqref{m24t-m1}-\eqref{m24t-m3})$\\ \eqref{m24s-1} & $-$ & $-$ & $-$ & $\eqref{m24s-3}-q\times\eqref{m24s-5} $ \\ \eqref{m24s-2} & $-$ & $-$ & $-$ &Set $a= e^{\pi i /3} $, $b=e^{-\pi i/ 3}$ in~\eqref{q2ndGauss}.\\ \eqref{m24s-3} & P4 & \eqref{S2BL} & $q$ &\\ \eqref{m24s-4} & $-$ & $-$ & $-$ &Set $a= e^{\pi i /3} q^2$, $b=e^{-\pi i/ 3} q^2$ in~\eqref{q2ndGauss}.\\ \eqref{m24s-5} & P5 & \eqref{S2BL} & $q$ & \\ \eqref{m24s-m1} & $-$ & $-$ & $-$ & $\eqref{m24s-m3}+q\times\eqref{m24s-m5} $ \\ \eqref{m24s-m2} & $-$ & $-$ & $-$ & Set $a= e^{2\pi i /3} $, $b=e^{-2\pi i/ 3}$ in~\eqref{q2ndGauss}.\\ \eqref{m24s-m3} & J4~\cite[p. 
149]{S52} & \eqref{S2BL} & $q$ &\\ \eqref{m24s-m4} &$-$ & $-$ & $-$ & \small{Set $a= e^{2\pi i /3} q^2$, $b=e^{-2\pi i/ 3} q^2$ in~\eqref{q2ndGauss}.}\\ \eqref{m24s-m5} &J5~\cite[p. 149]{S52} & \eqref{S2BL} & $q$ & \\ \hline \end{tabular} \end{table}
\section{False theta series identities}\label{FT} Rogers introduced the term ``false theta series" and included a number of related identities in his 1917 paper~\cite{R17}. Ramanujan presented a number of identities involving false theta series in his lost notebook~\cite[p. 256--259, \S11.5]{AB05}.
Recalling that Ramanujan defines the theta function as
\begin{align*} f(a,b)&:= \sum_{n=-\infty}^\infty a^{n(n+1)/2} b^{n(n-1)/2}\\
&=
\sum_{n=0}^\infty a^{n(n+1)/2} b^{n(n-1)/2} + \sum_{n=1}^\infty a^{n(n-1)/2} b^{n(n+1)/2}\\
&= 1 +a + b + a^3 b + ab^3 + a^6 b^3 + a^3 b^6 + a^{10} b^6 + a^6 b^{10} + \dots,
\end{align*}
let us define the corresponding \emph{false theta function} as \begin{align*} \Psi(a,b)&:=\sum_{n=0}^\infty a^{n(n+1)/2} b^{n(n-1)/2} - \sum_{n=1}^\infty a^{n(n-1)/2} b^{n(n+1)/2}\\ &= \sum_{n=0}^\infty a^{n(n+1)/2} b^{n(n-1)/2} (1 - b^{2n+1}) \\ &=1 +a - b + a^3 b - ab^3 + a^6 b^3 - a^3 b^6 + a^{10} b^6 - a^6 b^{10} + \dots. \end{align*} In practice, $a$ and $b$ are always taken to be $\pm q^h$ for some integer or half-integer $h$.
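As an illustration, identity~\eqref{ft9}, whose right side is $\Psi(q^{24},q^{12})$ in the notation just introduced, can be verified as a truncated power series; the helpers in the sketch below are ours, not from the paper.

```python
N = 40  # compare all coefficients of q^0, ..., q^{N-1}

def mul(a, b):
    # product of two series truncated at q^N
    c = [0] * N
    for i, x in enumerate(a):
        if x:
            for j in range(N - i):
                c[i + j] += x * b[j]
    return c

def inv(a):
    # reciprocal of a series with constant term 1
    b = [0] * N
    b[0] = 1
    for n in range(1, N):
        b[n] = -sum(a[k] * b[n - k] for k in range(1, n + 1))
    return b

def poch(e, step, m=None, sign=-1):
    # (q^e; q^step)_m for sign=-1, (-q^e; q^step)_m for sign=+1;
    # m=None gives the infinite product
    r = [0] * N
    r[0] = 1
    j = 0
    while (m is None or j < m) and e + step * j < N:
        f = [0] * N
        f[0] = 1
        f[e + step * j] += sign
        r = mul(r, f)
        j += 1
    return r

# left side: sum (-1)^n q^{n(n+1)} (q^6;q^6)_n / (q^2;q^2)_{2n+1}
lhs = [0] * N
n = 0
while n * (n + 1) < N:
    t = mul(poch(6, 6, n), inv(poch(2, 2, 2 * n + 1)))
    s = -1 if n % 2 else 1
    for k in range(n * (n + 1), N):
        lhs[k] += s * t[k - n * (n + 1)]
    n += 1

# right side: Psi(q^24, q^12) = sum q^{18n^2+6n} (1 - q^{24n+12})
rhs = [0] * N
n = 0
while 18 * n * n + 6 * n < N:
    rhs[18 * n * n + 6 * n] += 1
    if 18 * n * n + 30 * n + 12 < N:
        rhs[18 * n * n + 30 * n + 12] -= 1
    n += 1
assert lhs == rhs
```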
The key to the proof of each false theta series identity is indicated in Table~\ref{FTtable}.
\begin{table} \label{FTtable} \caption{Proofs of identities~\eqref{ft1}--~\eqref{ft10}}
\begin{tabular}{|c|c|c|c|c| } \hline\hline Eq. & Bailey pair & form of Bailey lemma & $a$ & \\ \hline \eqref{ft1} & $-$ & $-$ & $-$ & \eqref{ft3}$-q\times$\eqref{ft5}\\ \eqref{ft2} & P6 & \eqref{FBL} &$q$ & \\ \eqref{ft3} & P4 & \eqref{FBL} &$q$ &\\ \eqref{ft4} & P3 & \eqref{FBL} &$q$ &\\ \eqref{ft5} & P5 & \eqref{FBL} &$q$ &\\ \eqref{ft6} & $-$ & $-$ & $-$ & \eqref{ft8}+$q\times$\eqref{ft10}\\ \eqref{ft7} & P7 & \eqref{FBL} &$q$ &\\ \eqref{ft8} & J4~\cite[p. 149]{S52} & \eqref{FBL} &$q$ &\\ \eqref{ft9} & $-$ & $-$ & $-$ & See~\cite[Entry 5.4.2]{AB07}\\ \eqref{ft10} & J5~\cite[p. 149]{S52} & \eqref{FBL} &$q$ &\\ \hline \end{tabular} \end{table}
\section{Connections with Lie algebras}\label{Lie}
Let $\mathfrak{g}$ be the affine Kac-Moody Lie algebra $A_1^{(1)}$ or $A_2^{(2)}$. Let $h_0, h_1$ be the usual basis of a maximal toral subalgebra $T$ of $\mathfrak{g}$. Let $d$ denote the ``degree derivation" of $\mathfrak{g}$ and $\tilde{T}:= T \oplus \mathbb C d$. For all dominant integral $\lambda\in\tilde{T}^*$, there is an essentially unique irreducible, integrable, highest weight module $L(\lambda)$, assuming without loss of generality that $\lambda(d) = 0$. Now $\lambda= s_0 \Lambda_0 + s_1 \Lambda_1$ where $\Lambda_0$ and $\Lambda_1$ are the fundamental weights, given by $\Lambda_i(h_j) = \delta_{ij}$ and $\Lambda_i(d) = 0$; here $s_0$ and $s_1$ are nonnegative integers. For $A_1^{(1)}$, the canonical central element is $c= h_0 + h_1$, while for $A_2^{(2)}$, the canonical central element is $c = h_0 + 2h_1$. The quantity $\lambda(c)$ (which equals $s_0+s_1$ for $A_1^{(1)}$ and which equals $s_0+2s_1$ for $A_2^{(2)}$) is called the \emph{level} of $L(\lambda)$. (cf.\cite{K90}, \cite{LM78}.)
Additionally (see~\cite{LM78}), there is an infinite product $F_{\mathfrak{g}}$ associated with $\mathfrak{g}$, often light-heartedly called the ``fudge factor," which needs to be divided out of the principally specialized character $\chi(L(\lambda)) = \chi(s_0 \Lambda_0 + s_1\Lambda_1)$, in order to obtain the quantities of interest here. For $\mathfrak{g}=A_1^{(1)}$, the fudge factor is given by $F_{\mathfrak{g}} = (q;q^2)_\infty^{-1}$, while for $\mathfrak{g}=A_2^{(2)}$, it is given by $F_{\mathfrak{g}} = \left[ (q;q^6)_\infty (q^5;q^6)_\infty \right]^{-1}$.
Now $\mathfrak{g}$ has a certain infinite-dimensional Heisenberg subalgebra known as the ``principal Heisenberg vacuum subalgebra" $\mathfrak{s}$ (see~\cite{LW78} for the construction of $A_1^{(1)}$ and~\cite{KKLW81} for that of $A_2^{(2)}$). As shown in~\cite{LW82}, the principal character $\chi(\Omega(s_0 \Lambda_0 + s_1 \Lambda_1))$, where $\Omega(\lambda)$ is the vacuum space for $\mathfrak{s}$ in $L(\lambda)$, is \begin{equation} \label{char} \chi(\Omega(s_0 \Lambda_0 + s_1\Lambda_1)) = \frac{\chi( L(s_0 \Lambda_0 + s_1 \Lambda_1)) }{ F_{\mathfrak{g}} }, \end{equation} where $\chi(L(\lambda))$ is the principally specialized character of $L(\lambda)$.
By~\cite{LM78} applied to~\eqref{char} in the case of $A_1^{(1)}$, for standard modules of odd level $2k+1$, $$\chi(\Omega( (2k-i+2)\Lambda_0 + (i-1)\Lambda_1 ))$$ is given by Andrews' analytic generalization of the Rogers-Ramanujan identities~\cite{A74}: \begin{equation}\label{AndGor} \sum_{n_1, n_2, \dots, n_{k}\geqq 0}
\frac{ q^{N_1^2 + N_2^2 + \cdots + N_{k}^2 + N_i+N_{i+1}+\cdots+N_{k}}}
{(q;q)_{n_1} (q;q)_{n_2} \cdots (q;q)_{n_{k}} } = \frac{(q^i,q^{2k+3-i},q^{2k+3};q^{2k+3})_\infty }{(q;q)_\infty }, \end{equation} where $1\leqq i \leqq k+1$ and $N_j: = n_j + n_{j+1} + \cdots + n_{k}$. The combinatorial counterpart to~\eqref{AndGor} is Gordon's partition theoretic generalization of the Rogers-Ramanujan identities~\cite{G61}; this generalization was explained vertex-operator theoretically in~\cite{LW84} and~\cite{LW85}.
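For orientation, the $k=1$ cases of~\eqref{AndGor} (with $i=2$ and $i=1$, respectively) are the two Rogers-Ramanujan identities:

```latex
\sum_{n\geq 0}\frac{q^{n^2}}{(q;q)_n}
  = \frac{(q^2,q^3,q^5;q^5)_\infty}{(q;q)_\infty}
  = \frac{1}{(q;q^5)_\infty\,(q^4;q^5)_\infty},
\qquad
\sum_{n\geq 0}\frac{q^{n^2+n}}{(q;q)_n}
  = \frac{(q,q^4,q^5;q^5)_\infty}{(q;q)_\infty}
  = \frac{1}{(q^2;q^5)_\infty\,(q^3;q^5)_\infty}.
```

In the present context these correspond to the level $3$ standard modules of $A_1^{(1)}$.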
In addition, for the $A_1^{(1)}$ standard modules of even level $2k$, \[ \chi( \Omega( (2k-i+1)\Lambda_0 + (i-1)\Lambda_1 )) \] is given by Bressoud's analytic identity~\cite[p. 15, Eq. (3.4)]{B80} \begin{equation}\label{BressoudEven} \sum_{n_1, n_2, \dots, n_{k}\geqq 0}
\frac{ q^{N_1^2 + N_2^2 + \cdots + N_{k}^2 + N_i+N_{i+1}+\cdots+N_{k}}}
{(q;q)_{n_1} (q;q)_{n_2} \cdots (q;q)_{n_{k-1}} (q^2;q^2)_{n_k} } = \frac{(q^i,q^{2k+2-i},q^{2k+2};q^{2k+2})_\infty }{(q;q)_\infty }, \end{equation} where $1\leqq i \leqq k+1$, and its partition theoretic counterpart~\cite[p. 64, Theorem, $j=0$ case]{B79}; likewise, this generalization was explained vertex-operator theoretically in~\cite{LW84} and~\cite{LW85}.
Notice that the infinite products associated with level $\ell$ standard modules for $A_1^{(1)}$ in~\eqref{AndGor} and~\eqref{BressoudEven} are instances of the Jacobi triple product identity for modulus $\ell+2$ divided by $(q;q)_\infty$.
Probably the most efficient way of deriving~\eqref{AndGor} is via the Bailey lattice~\cite{AAB87}, which is an extension of the Bailey chain concept (\cite{A84}; cf. \cite[\S 3.5, pp. 27ff]{A86}) built upon the ``unit Bailey pair" \[ \beta_n(1,q) = \left\{
\begin{array}{ll}
1 &\mbox{if $n=0$}\\
0 &\mbox{if $n>0$}
\end{array} \right. \] \[ \alpha_n(1,q) = \left\{
\begin{array}{ll}
1 &\mbox{if $n=0$}\\
(-1)^n q^{n(n-1)/2} (1+q^n) &\mbox{if $n>0$.}
\end{array} \right. \]
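As a quick consistency check (using the standard defining relation of a Bailey pair relative to $a$, namely $\beta_n(a,q)=\sum_{r=0}^{n} \alpha_r(a,q)/\bigl((q;q)_{n-r}(aq;q)_{n+r}\bigr)$, which is not restated in this section), the $n=1$ instance for the unit Bailey pair reads:

```latex
\beta_1(1,q)
 = \frac{\alpha_0(1,q)}{(q;q)_1\,(q;q)_1} + \frac{\alpha_1(1,q)}{(q;q)_0\,(q;q)_2}
 = \frac{1}{(1-q)^2} - \frac{1+q}{(1-q)(1-q^2)}
 = 0,
```

in agreement with $\beta_1(1,q)=0$ above.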
Similarly, \eqref{BressoudEven} follows from a Bailey lattice built upon the Bailey pair
\[ \beta_n(1,q) = \frac{1}{(q^2;q^2)_n}, \] \[ \alpha_n(1,q) = \left\{
\begin{array}{ll}
1 &\mbox{if $n=0$}\\
(-1)^n 2 q^{n^2} &\mbox{if $n>0$.}
\end{array} \right. \]
Thus the standard modules of $A_1^{(1)}$ may be compactly ``explained" via two interlaced instances of the Bailey lattice.
In contrast, the standard modules of $A_{2}^{(2)}$ are not as well understood, and a uniform $q$-series and partition correspondence analogous to what is known for $A_1^{(1)}$ has thus far remained elusive.
As with $A_1^{(1)}$, there are $1+\lfloor \frac{\ell}{2} \rfloor$ inequivalent level $\ell$ standard modules associated with the Lie algebra $A_2^{(2)}$, but the analogous quantity for the level $\ell$ standard modules \[ \chi (\Omega( (\ell-2i+2)\Lambda_0 + (i-1)\Lambda_1 )) \] is given by instances of the quintuple product identity (rather than the triple product identity) divided by $(q;q)_\infty$:
\begin{equation} \label{A22prodside}
\frac{ (q^i, q^{\ell+3-i}, q^{\ell+3}; q^{\ell+3})_\infty (q^{\ell+3-2i},
q^{\ell+2i+3}; q^{2\ell+6})_\infty}{(q;q)_\infty}, \end{equation} where $1\leqq i \leqq 1 + \lfloor \frac{\ell}{2} \rfloor $; see~\cite{LM78}.
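For example, specializing~\eqref{A22prodside} to level $\ell=3$ (where $1\leqq i\leqq 2$) gives the two products

```latex
% \ell = 3, i = 1:
\frac{(q,\,q^{5},\,q^{6};\,q^{6})_\infty\,(q^{4},\,q^{8};\,q^{12})_\infty}{(q;q)_\infty},
\qquad
% \ell = 3, i = 2:
\frac{(q^{2},\,q^{4},\,q^{6};\,q^{6})_\infty\,(q^{2},\,q^{10};\,q^{12})_\infty}{(q;q)_\infty},
```

corresponding to the two inequivalent level 3 standard modules, whose combinatorial study led to Capparelli's identities discussed at the end of this section.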
It seems quite plausible that in the case of $A_2^{(2)}$, the analog of the Andrews-Gordon-Bressoud identities would involve the interlacing of six Bailey lattices in contrast to the two that were necessary for $A_1^{(1)}$. To see this, consider the following set of Andrews-Gordon-Bressoud type identities where the product sides involve instances of the quintuple product identity rather than the triple product identity:
{\allowdisplaybreaks \begin{multline} \label{lev6k2}
\sum_{n_1, n_2, \dots, n_k \geqq 0}
\frac{ q^{ N_1(N_1+1)/2 + N_2(N_2+1) + N_3(N_3+1)+ \cdots +N_k(N_k+1) + N_k^2} }
{ (q;q)_{n_1} (q;q)_{n_2} \cdots (q;q)_{n_{k-1}} (q;q)_{2n_k+1} (-q^{N_1+1};q)_\infty} \\= \frac{ (q^{k}, q^{5k-1}, q^{6k-1}; q^{6k-1})_\infty (q^{4k-1},q^{8k-1}; q^{12k-2})_\infty }{(q;q)_\infty} \end{multline} \begin{multline} \label{lev6k3} \sum_{n_1, n_2, \dots, n_{k+1} \geqq 0}
\frac{ q^{N_1^2 + N_2^2 + \cdots +N_{k}^2} \left( \frac{n_k- n_{k+1}+1}{3} \right) }
{ (q;q)_{n_1} (q;q)_{n_2} \cdots (q;q)_{n_{k+1}} (q;q)_{2n_k- n_{k+1} }} \\= \frac{ (q^{k}, q^{5k}, q^{6k}; q^{6k})_\infty (q^{4k},q^{8k}; q^{12k})_\infty }{(q;q)_\infty} \end{multline} \begin{multline} \label{lev6k4}
\sum_{n_1, n_2, \dots, n_k \geqq 0}
\frac{ q^{ N_1(N_1+1)/2 + N_2(N_2+1) + N_3(N_3+1)+ \cdots +N_k(N_k+1)} }
{ (q;q)_{n_1} (q;q)_{n_2} \cdots (q;q)_{n_{k-1}} (q;q)_{2n_k+1} (-q^{N_1+1};q)_\infty} \\= \frac{ (q^{2k}, q^{4k+1}, q^{6k+1}; q^{6k+1})_\infty (q^{2k+1},q^{10k+1}; q^{12k+2})_\infty }{(q;q)_\infty} \end{multline} } {\allowdisplaybreaks \begin{multline} \label{lev6k5} \sum_{n_1, n_2, \dots, n_k \geqq 0}
\frac{ q^{N_1^2 + N_2^2 + \cdots +N_{k-1}^2+2N_k^2}}
{ (q;q)_{n_1} (q;q)_{n_2} \cdots (q;q)_{n_{k-1}} (q;q)_{2n_k}} \\= \frac{ (q^{k}, q^{5k+2}, q^{6k+2}; q^{6k+2})_\infty (q^{4k+2},q^{8k+2}; q^{12k+4})_\infty }{(q;q)_\infty} \end{multline} } {\allowdisplaybreaks \begin{multline} \label{lev6k6} \sum_{n_1, n_2, \dots, n_k \geqq 0}
\frac{ q^{N_1^2 + N_2^2 + \cdots +N_k^2} (-1;q^3)_{n_k}}{ (q;q)_{n_1} (q;q)_{n_2} \cdots (q;q)_{n_{k-1}} (q;q)_{2n_k} (-1;q)_{n_k} } \\= \frac{ (q^{k+1}, q^{5k+2}, q^{6k+3}; q^{6k+3})_\infty (q^{4k+1},q^{8k+5}; q^{12k+6})_\infty }{(q;q)_\infty} \end{multline} } {\allowdisplaybreaks \begin{multline}\label{lev6k7} \sum_{n_1, n_2, \dots, n_k \geqq 0}
\frac{ q^{N_1^2 + N_2^2 + \cdots +N_k^2}}{ (q;q)_{n_1} (q;q)_{n_2} \cdots (q;q)_{n_{k-1}} (q;q)_{2n_k}} \\= \frac{ (q^{k+1}, q^{5k+3}, q^{6k+4}; q^{6k+4})_\infty (q^{4k+2},q^{8k+6}; q^{12k+8})_\infty }{(q;q)_\infty}, \end{multline} } where $\left( \frac{n}{p} \right)$ in~\eqref{lev6k3} is the Legendre symbol. We note that~\eqref{lev6k3} first appeared in~\cite[p. 400, Eq. (1.7)]{S04} and that \eqref{lev6k7} is due to Andrews~\cite[p. 269, Eq. (1.8)]{A84}. While~\eqref{lev6k2}, \eqref{lev6k4}, and \eqref{lev6k5} probably have not appeared explicitly in the literature, they each follow from building a Bailey chain on a known Bailey pair and may be regarded as nothing more than a standard exercise in light of Andrews' discovery of the Bailey chain~(\cite{A84}; cf.~\cite[\S3.5]{A86}). Indeed the $k=1$ cases of~\eqref{lev6k2},~\eqref{lev6k4},~\eqref{lev6k5}, and~\eqref{lev6k7} are all due to Rogers and appear in Slater's list~\cite{S52} as Eqs. (62), (80), (83), and (98) respectively. On the other hand,~\eqref{lev6k6} is new since it arises from inserting a new Bailey pair, namely the one from Lemma~\ref{BP2} in this paper, into the Bailey chain mechanism. Notice that as $k$ runs through the positive integers in the numerators of the right hand sides of~\eqref{lev6k2}--\eqref{lev6k7}, we obtain instances of the quintuple product identity for all moduli represented in~\eqref{A22prodside} (except for the trivial level 1 case where the relevant identity reduces to ``$1=1$"). It is because of the preceding observations that we conjecture that $A_2^{(2)}$ may be ``explained" by six interlaced Bailey lattices.
We now turn our attention to combinatorial considerations in the context of $A_2^{(2)}$. In his 1988 Ph.D. thesis S. Capparelli~\cite{C88} conjectured two beautiful partition identities resulting from his analysis of the two inequivalent level 3 standard modules of $A_2^{(2)}$, using the theory in~\cite{LW84} and~\cite{LW85}. Capparelli's conjectures were first proved by Andrews~\cite{A94} using combinatorial methods. Later, Lie algebraic proofs were found by Tamba and Xie~\cite{TX95} and Capparelli himself~\cite{C96}. More recently, Capparelli~\cite{C04} related the principal characters of the vacuum spaces for the standard modules of $A_2^{(2)}$ for levels 5 and 7 to some known $q$-series and partition identities. In the same way, our identities~\eqref{m18-1}--\eqref{m18-4} appear to correspond to the standard modules for level 6.
\end{document}
\begin{document}
\title[Global Yamabe flow ] {Global Yamabe flow on asymptotically flat manifolds}
\author{Li Ma} \address{Li MA, School of Mathematics and Physics\\
University of Science and Technology Beijing \\
30 Xueyuan Road, Haidian District
Beijing, 100083\\
P.R. China }
\address{ Department of Mathematics \\ Henan Normal University \\ Xinxiang, 453007 \\ China}
\thanks{Li Ma's research was partially supported by the National Natural
Science Foundation of China (No.11771124)}
\begin{abstract} In this paper, we study the existence of global Yamabe flow on asymptotically flat (in short, AF or ALE) manifolds. Note that the ADM mass is preserved in dimensions 3, 4, and 5. We present a new general local existence result for the Yamabe flow on a complete Riemannian manifold whose initial metric is quasi-isometric to a background metric of bounded scalar curvature. The asymptotic behaviour of the Yamabe flow on ALE manifolds is also addressed, provided the initial scalar curvature is non-negative and there is a bounded subsolution to the corresponding Poisson equation. We also present a maximum principle for a very general class of parabolic equations on complete Riemannian manifolds.
{ \textbf{Mathematics Subject Classification 2010}: 53E99, 35A01, 35K55, 35R01, 53C21.}
{ \textbf{Keywords}: Yamabe flow, global existence, scalar curvature, asymptotic behaviour} \end{abstract}
\maketitle
\section{Introduction}\label{sect1} The goal of this paper is to study the global Yamabe flow on complete manifolds. This topic has recently been studied in the works \cite{M}, \cite{MC}, \cite{CZ} and \cite{M1,M2}. Since the Yamabe flow is degenerate, global flows are rare in general; however, the Yamabe flow on asymptotically flat (in short, AF or ALE) manifolds is widely believed to be global. We shall confirm this in the present paper. Hamilton \cite{Hamilton1989} \cite{H} introduced the Yamabe flow, which describes a family of Riemannian metrics $g(t)$ subject to the evolution equation $\frac{\partial}{\partial t}g=-R(g)\,g$, where $R(g)$ denotes the scalar curvature of the metric $g$. Hamilton proved local-in-time existence of Yamabe flows on compact manifolds without boundary. The asymptotic behaviour of the Yamabe flow was subsequently analysed by B. Chow \cite{C}, R. Ye \cite{Y94}, Schwetlick and M. Struwe \cite{SS}, and S. Brendle \cite{B}. The discrete Morse flow method for the 2-dimensional Yamabe flow was developed in \cite{MW}. The theory of Yamabe flows on non-compact manifolds was addressed by Ma and An \cite{AM}. Daskalopoulos and Sesum \cite{Daskalopoulos2013} analysed the profiles of self-similar solutions (Yamabe solitons). More recently, Bahuaud and Vertman \cite{Bahuaud2014,Bahuaud2016} constructed Yamabe flows on spaces with incomplete edge singularities such that the singular structure is preserved along the flow. Choi, Daskalopoulos, and King \cite{Choi2018} found solutions to the Yamabe flow on the Euclidean space $\mathbb{R}^n$ which develop a type II singularity in finite time. In the interesting work \cite{GT}, Gregor Giesen and Peter M. Topping obtained remarkable results on the Yamabe flow on incomplete surfaces.
In \cite{S1, S2}, assuming that the initial metric is conformally hyperbolic with conformal factor and scalar curvature bounded from above, Schulz obtained the existence of instantaneously complete Yamabe flows on hyperbolic space of arbitrary dimension $n\geq3$. The study of the Yamabe flow on $\mathbb{R}^n$ may be included in the class of porous-medium equations \cite{Ar} \cite{DK} \cite{HP}.
Let $(M^n,g_0)$, $n\geq 3$, be an $n$-dimensional complete Riemannian manifold. The Yamabe flow on $(M^n,g_0)$ is a family of Riemannian metrics $\{g(\cdot, t)\}$ on $M$ defined by the evolution equation \begin{equation}\label{yamabe_flow_curvature} \left\{ \begin{array}{ll}
\frac{\partial g}{\partial t}=-Rg \quad &\text{in}\ M^n\times[0,T),\\
g(\cdot,0)=g_0 &\text{in}\ M^n, \end{array} \right. \end{equation} where $R$ is the scalar curvature of the metric $ g:=g(\cdot,t)=u^{\frac{4}{n-2}}g_0 $ and $u:M^n\to \mathbb{R}^+$ is a positive smooth function on $M^n$. Let $p=\frac{n+2}{n-2}$, $L_{g_0}u=\Delta_{g_0}u-aR_{g_0}u$ and $a=\frac{n-2}{4(n-1)}$. By rescaling time by a constant factor, (\ref{yamabe_flow_curvature}) can be written in the equivalent form \begin{equation}\label{yamabe_flow_u} \left\{ \begin{array}{ll}
\frac{\partial u^p}{\partial t}=L_{g_0}u, \quad &\text{in}\ M^n\times[0,T),\\
u(\cdot,0)=1, &\text{in}\ M^n. \end{array} \right. \end{equation}
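For the reader's convenience, here is a sketch (using the standard conformal change formula for scalar curvature, $-\tfrac{1}{a}\Delta_{g_0}u + R_{g_0}u = R(g)\,u^{p}$, i.e. $R(g)=-\tfrac{1}{a}u^{-p}L_{g_0}u$) of how \eqref{yamabe_flow_u} follows from \eqref{yamabe_flow_curvature}:

```latex
\frac{\partial g}{\partial t} = \frac{4}{n-2}\,u^{-1}u_t\, g = -R(g)\,g
\;\Longrightarrow\;
\frac{4}{n-2}\,u^{p-1}u_t = \frac{1}{a}\,L_{g_0}u
\;\Longrightarrow\;
\frac{\partial u^{p}}{\partial t} = p\,\frac{n-2}{4a}\,L_{g_0}u = p\,(n-1)\,L_{g_0}u,
```

and rescaling time by the constant $p(n-1)$ gives \eqref{yamabe_flow_u}.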
To understand the local existence result for the Yamabe flow on the Riemannian manifold $(M,g_0)$, we may choose a base metric $g_M$ on $M$ and write the Yamabe flow equation \cite{S1} as follows. Let $g(t)=w(x,t)g_M$ with $w=w(x,t)>0$ on $M$. Then the Yamabe flow equation is $$
\frac{1}{n-1}w_t=-\frac{wR}{n-1}=-\frac{R_0}{n-1}+\frac{\Delta_{g_{M}} w}{w}+\frac{(n-6)}{4}\frac{|\nabla w|_{g_{M}}^2}{w^2}, $$ with $w(0)=w_0$. We denote $(n-1)$ times the right-hand side of the equation above by $$
B[w]:=(n-1)\Bigl(-\frac{R_0}{n-1}+\frac{\Delta_{g_{M}} w}{w}+\frac{(n-6)}{4}\frac{|\nabla w|_{g_{M}}^2}{w^2}\Bigr), $$ so that the equation becomes $w_t=B[w]$. We shall apply the inverse function theorem to this form of the equation to obtain local existence of solutions.
Before presenting the main result of this paper, we need the following two definitions. The first one is the definition of an asymptotically flat (AF or ALE) manifold of order $\tau>0$ (\cite{S} \cite{LP}). \begin{Def}\label{AE_def} A Riemannian manifold $M^n$, $n\geq 3$, with $C^{\infty}$ metric $g$ is called asymptotically flat of order $\tau$ if there exists a decomposition $M^n= M_0\cup M_{\infty}$ (for simplicity we deal only with the case of one end; the case of multiple ends can be dealt with similarly) with $M_0$ compact and a diffeomorphism $M_{\infty}\cong \mathbb{R}^n-B(o,R_0)$ for some constant $R_0 > 0$ such that \begin{align}\label{AE} g_{ij} -\delta_{ij}\in C^{2+\alpha}_{-\tau}(M) \end{align} (defined in Definition \ref{elliptic_wss} below) in the coordinates $\{x^i\}$ induced on $M_{\infty}$. The coordinates $\{x^i\}$ are called asymptotic coordinates. \end{Def}
The second one concerns the fine solution to the Yamabe flow (\cite{CZ} \cite{M3}). \begin{Def}\label{fine} We say that $u(x,t)\in C^1(M\times [0,t_{max}))$ is a fine function if $0<\delta\leq u(x,t)\leq C$ and
$\sup\limits_{M^n\times [0,T]}|\nabla_{g_0} u(x,t)|\leq C$ for $0\leq t\leq T$, for any $0<T<t_{max}$. We say that a fine function $u(x,t)\in C^1(M\times [0,t_{max}))$ is a fine solution of the Yamabe flow, $0\leq t<t_{max}$, on a complete manifold $(M^n,g_0)$ if it solves the Yamabe flow, $\sup\limits_{M^n\times [0,T]}|Rm(g)|(x,t)\leq C$ for any $T<t_{max}$, and either $\lim\limits_{t\to t_{max}}\sup\limits_{M}|Rm|(\cdot,t)=\infty$ for $t_{max}<\infty$ or $t_{max}=\infty$, where $Rm(g)$ is the Riemannian curvature of the metric $g:=g(t)=u^{4/(n-2)}g_0$. \end{Def} We remark that, in the language of (2.5.2) in \cite{SY} (see also \cite{M4}), a fine solution is uniformly quasi-isometric to the initial metric $g_0$ on every interval $[0,T]$ for $0<T<t_{max}$.
Our main result is the following. \begin{Thm}\label{global} Let $(M^n,g_0)$ be an $n$-dimensional asymptotically flat manifold of any order $\tau>\frac{n-2}{2}$. Then there exists a unique global Yamabe flow $g(x,t)=u(x,t)^{4/(n-2)}g_0$ with initial metric $g(0)=g_0$;
for every $0<t_0<\infty$, the solution $u(x,t)$, $0\leq t\leq t_0$, is a fine solution to the Yamabe flow (\ref{yamabe_flow_u}), and the flow preserves the AF property of the initial metric. In other words, for $v=1-u$, we have $v(x,t)\in C^{2+\alpha}_{-\tau}(M)$ and $g_{ij}(x,t)-\delta_{ij}\in C^{2+\alpha}_{-\tau}(M)$ for $t\in [0,t_{max})$. \end{Thm}
The uniqueness part follows from a standard argument and we shall omit the details. We shall present a general local existence result in Theorem \ref{short_existence} in section \ref{sect2}.
As a direct application of the computations in \cite{CZ} and \cite{M}, we have \begin{Thm}\label{main_5} Let $u(x,t)$, $0\leq t< t_0<\infty$, be the fine solution to the Yamabe flow (\ref{yamabe_flow_u}) on an $n$-dimensional asymptotically flat manifold $(M^n,g_0)$ of order $\tau>\frac{n-2}{2}$ with $u(0)=1$. Assume that $R_{g_0}\geq 0$ and $R_{g_0}\in L^{1}(M)$, where $R_{g_0}$ is the scalar curvature of $g_0$. Denote $g(t)=u^\frac{4}{n-2}g_0$. Then for $n=3,4,$ or $5$, the ADM mass $m(g(t))$ (see \cite{SY} \cite{LP} or below for the definition) is well-defined under the Yamabe flow (\ref{yamabe_flow_u}) for $0\leq t<\infty$ (i.e., the ADM mass is independent of the choice of coordinates), and $m(g(t))\equiv m(g_0)$. \end{Thm}
Recall here that the ADM mass of an $n$-dimensional AF Riemannian manifold \cite{LP} is defined as \begin{align}\label{ADM_Mass} m(g)=\lim\limits_{r\to\infty}\frac{1}{4\omega}\int_{S_r}(\partial_j g_{ij}-\partial_ig_{jj})dS^i, \end{align} where $\omega$ denotes the volume of the unit sphere in $\mathbb{R}^n$, $S_r$ denotes the Euclidean sphere of radius $r$, and $dS^i$ is the normal surface volume element to $S_r$ with respect to the Euclidean metric. Similar results for the Ricci flow were established in \cite{BW} and \cite{DM}.
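As a classical illustration of \eqref{ADM_Mass} (a standard example, not used elsewhere in this paper), the $3$-dimensional spatial Schwarzschild metric

```latex
g_{ij}(x) = \Bigl(1+\frac{m}{2|x|}\Bigr)^{4}\delta_{ij}
          = \Bigl(1+\frac{2m}{|x|}\Bigr)\delta_{ij} + O(|x|^{-2}), \qquad n=3,
```

is asymptotically flat of order $\tau=1>\frac{n-2}{2}$; since $\partial_j g_{ij}-\partial_i g_{jj} = -2\,\partial_i\bigl(\tfrac{2m}{|x|}\bigr)+O(|x|^{-3}) = \tfrac{4m\,x_i}{|x|^3}+O(|x|^{-3})$ and $\omega=4\pi$, formula \eqref{ADM_Mass} evaluates to $m(g)=\frac{1}{16\pi}\cdot 16\pi m=m$.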
With an application of Theorem 5 in \cite{M3}, we obtain the following convergence result for the global Yamabe flow.
\begin{Thm}\label{main_6} Let $u(x,t)$, $0\leq t<\infty$, be the global solution to the Yamabe flow (\ref{yamabe_flow_u}) on an $n$-dimensional asymptotically flat manifold $(M^n,g_0)$ of order $\tau>\frac{n-2}{2}$ with $u(0)=1$. Assume that $R_{g_0}\geq 0$ and there exists a bounded sub-solution $w_0$ to the Poisson equation, i.e., $$ L_{g_0}w_0=\Delta_{g_0}w_0-aR_{g_0}w_0\geq 0 \quad \text{in } M. $$ Then the Yamabe flow $g(t)$ converges in $C^\infty_{loc}(M)$ to a Yamabe metric of zero scalar curvature. \end{Thm}
We now recall the definition of weighted spaces (see \cite{LP}) for elliptic operators on asymptotically flat manifolds. \begin{Def}\label{elliptic_wss}
Suppose $(M^n,g)$ is an $n$-dimensional asymptotically flat manifold with asymptotic coordinates $\{x^i\}$. Denote $D^j_x v=\sup\limits_{|\alpha|=j}|\frac{\partial^{|\alpha|}}{\partial x_{i_1}\cdots\partial x_{i_j}}v|$. Let $r(x)=|x|$ on $M_{\infty}$ (defined in Definition \ref{AE_def}) and extend $r$ to a smooth positive function on all of $M^n$. For $q\geq 1$ and $\beta\in \mathbb{R}$, the weighted Lebesgue space $L^q_{\beta}(M)$ is defined as the set of locally integrable functions $v$ with the norm given by
$$ ||v||_{L^q_\beta(M)}=\left\{
\begin{array}{ll}
(\int_{M}|v|^q r^{-\beta q-n}dx)^{\frac{1}{q}}, & \hbox{$q<\infty$;} \\
\mathop{\mathrm{ess\,sup}}\limits_{M}\, (r^{-\beta}|v|), & \hbox{$q=\infty$.}
\end{array}
\right. $$
Then the weighted Sobolev space $W^{k,q}_\beta(M)$ is defined as the set of functions $v$ for which $|D^j_xv|\in L^q_{\beta-j}(M)$ with the norm $$
||v||_{W^{k,q}_\beta(M)}=\sum\limits^k_{j=0}||D^j_x v||_{L^q_{\beta-j}(M)}. $$ For a nonnegative integer $k$, the weighted $C^k$ space $C^k_{\beta}(M)$ is defined as the set of $C^k$ functions $v$ with the norm $$
||v||_{C^k_\beta(M)}=\sum\limits_{j=0}^k\sup\limits_{M} r^{-\beta+j}|D^j_xv|. $$ The weighted H\"{o}lder space $C^{k+\alpha}_{\beta}(M)$ is defined as the set of functions $v\in C^{k}_{\beta}(M)$ with the norm $$
||v||_{C^{k+\alpha}_\beta(M)}=||v||_{C^k_\beta(M)}+\sup\limits_{x\neq y\in M}\min(r(x),r(y))^{-\beta+k+\alpha}\frac{|D^k_xv(x)-D^k_xv(y)|}{|x-y|^{\alpha}}. $$ \end{Def}
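To illustrate the decay encoded in Definition \ref{elliptic_wss} (an illustrative example, not from the original references): on $M=\mathbb{R}^n$ with $r(x)=|x|$ near infinity,

```latex
v(x) = \bigl(1+|x|^2\bigr)^{-\tau/2} \;\in\; C^{k+\alpha}_{-\tau}(\mathbb{R}^n)
\quad\text{for every } k\geq 0,
\qquad\text{since}\quad |D^j_x v| = O\bigl(r^{-\tau-j}\bigr),\ \ j=0,\dots,k,
```

so condition \eqref{AE} with this weight expresses that $g_{ij}-\delta_{ij}$ decays like $r^{-\tau}$, with each derivative decaying one order faster.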
We end the introduction with a brief outline of the paper. We discuss the local existence theory of the Yamabe flow on a complete Riemannian manifold with bounded scalar curvature in section \ref{sect2}; this part may be well-known to experts. In section \ref{sect3}, we obtain the global Yamabe flows on AF manifolds, and we prove (or outline the proofs of) Theorems \ref{main_5} and \ref{main_6}. In the appendix, section \ref{sect4}, we discuss a general version of the maximum principle, which may be used in the spatial decay argument for Yamabe flows on AF manifolds.
\section{Yamabe flow: local existence}\label{sect2}
Let $(M,g_M)$ be a complete Riemannian manifold of dimension $n=\dim M$. Consider an initial metric $g_0=w_0g_M$, where $w_0>0$ is a fine function on $M$. Let $R_0=R(g_0)$ be the scalar curvature of the initial Riemannian metric $g_0$. The following local existence result for solutions of the Yamabe flow \eqref{eqn:Yamabe-flow} may be known to experts \cite{AM} (see also Theorem 2.4 in \cite{CZ}), but the statement below appears to be new.
\begin{Thm}\label{short_existence} Let $(M,g_M)$ be an $n$-dimensional complete manifold with bounded scalar curvature and let $g_0=w_0g_M$, where $w_0>0$ is a fine function on $M$. Then the Yamabe flow (\ref{eqn:Yamabe-flow}) below with initial metric $g_0$ has a smooth solution on a maximal time interval $[0,T_{max})$ with $T_{max}>0$, such that either $T_{max}=+\infty$ or the evolving metric contracts to a point at the finite time $T_{max}$. \end{Thm}
Since the assumptions above are weaker than those in previous existence results, one cannot expect uniqueness of the Yamabe flow. We shall use the formulation from the interesting paper \cite{S1}. The plan of the proof is to obtain the local existence result for the Yamabe flow on the Riemannian manifold $(M,g_M)$ by considering the evolution equation in the following form (\cite{AM}) \begin{align}\label{eqn:Yamabe-flow}
\frac{1}{n-1}w_t=-\frac{wR}{n-1}=-\frac{R_0}{n-1}+\frac{\Delta_{g_{M}} w}{w}+\frac{(n-6)}{4}\frac{|\nabla w|_{g_{M}}^2}{w^2}, \end{align} with $w(0)=w_0$. We denote $(n-1)$ times the right-hand side of equation \eqref{eqn:Yamabe-flow} by \begin{align*}
B[w]:=(n-1)\Bigl(-\frac{R_0}{n-1}+\frac{\Delta_{g_{M}} w}{w}+\frac{(n-6)}{4}\frac{|\nabla w|_{g_{M}}^2}{w^2}\Bigr). \end{align*}
We first set up the local existence result on any bounded domain with a uniform time interval. Given a smooth, bounded domain $\Omega\subset M$ and $T>0$, we may assume that $-R_0\geq n c$ for some constant $c$ and we consider the problem \begin{align}\label{eqn:pde} \left\{ \begin{aligned} \frac{\partial w}{\partial t}&=B[w] &&\text{ in $\Omega\times[0,T]$, } \\ w&=\phi &&\text{ on $\partial\Omega\times[0,T]$, } \\[.5ex] w&=w_0 &&\text{ on $\Omega\times\{0\}$} \end{aligned}\right. \end{align} for given $0<w_0\in C^{2,\alpha}(\overline{\Omega})$ and $\phi\in C^{2,\alpha;1,\frac{\alpha}{2}}(\partial\Omega\times[0,T])$ satisfying $\phi(\cdot,t)=w_0$ on $\partial\Omega\times[0,T]$. Since $w_0$ and $R_{g_0}$ are bounded on the compact set $\partial\Omega$ and $w_0>0$, the nonlinear term $B[w]$ is well-defined at the initial time.
By the standard parabolic theory \cite{Ladyzenskaja1967} we may solve
the linear parabolic problem \begin{align}\label{eqn:pde-linear-u} \left\{\begin{aligned} \frac{1}{n-1}\frac{\partial\tilde{u}}{\partial t}-\frac{\Delta_{g_{M}}\tilde{u}}{w_0} -\frac{(n-6)}{4}\frac{<\nabla\tilde{u},\nabla w_0>_{g_{M}}}{w_0^2} &=-\frac{R_0}{n-1} &&\text{ in $\Omega\times[0,T]$, } \\ \tilde{u}&=\phi &&\text{ on $\partial\Omega\times[0,T]$, } \\[.5ex] \tilde{u}&=w_0 &&\text{ on $\Omega\times\{0\}$. } \end{aligned}\right. \end{align} to get the solution $\tilde{u}$. Since $\Omega$ is bounded and since $w_0>0$ in $M$, there exists some $\delta>0$ depending on $\Omega$ and $w_0$ such that $w_0\geq \delta$ in $\overline{\Omega}$. Therefore, equation \eqref{eqn:pde-linear-u} is uniformly parabolic with regular coefficients and the initial-boundary conditions are satisfied. According to linear parabolic theory \cite[IV.5, Theorem 5.2]{Ladyzenskaja1967}, problem \eqref{eqn:pde-linear-u} has a unique solution $\tilde{u}\in C^{2,\alpha;1,\frac{\alpha}{2}}(\overline{\Omega}\times[0,T])$. Since $w_0>0$ and $\phi(\cdot,t)=w_0$ for all $t\in[0,T]$, the parabolic maximum principle applied to $\tilde{u}(\cdot,t)$ implies $\tilde{u}\geq\varepsilon$ on $\Omega\times[0,T]$ for some $\varepsilon>0$ depending on $\Omega$ and $\tilde{u}$. For short times $t>0$, we want to obtain a solution $w$ to \eqref{eqn:pde} that is close to the function $\tilde{u}$.
We shall use the inverse function theorem to construct the short-time solution to \eqref{eqn:pde} on any bounded domain. \begin{Lem}[Short-time existence on bounded domains] \label{lem:shorttimeexistence} Let $\Omega\subset M$ be a smooth bounded domain in $(M, g_M)$. Then there exists $T>0$ such that problem \eqref{eqn:pde} has a unique solution. \end{Lem}
\begin{proof} We shall construct a solution $w$ to \eqref{eqn:pde} of the form $w=\tilde{u}+v$, where $\tilde{u}$ solves \eqref{eqn:pde-linear-u} and $v$ solves \begin{align}\label{eqn:pde-v} \left\{\begin{aligned} \frac{\partial v}{\partial t} &=B[\tilde{u}+v]-\frac{\partial \tilde{u}}{\partial t} &&\text{ in $\Omega\times[0,T]$, } \\ v&=0 &&\text{ on $\partial\Omega\times[0,T]$, } \\[.5ex] v&=0 &&\text{ on $\Omega\times\{0\}$. } \end{aligned}\right. \end{align} For the H\"older exponent $0<\alpha<1$, we define the working spaces \begin{align*} X&:=\{v\in C^{2,\alpha;1,\frac{\alpha}{2}}(\overline{\Omega}\times[0,T]) \mid{}v=0\text{ on }(\Omega\times\{0\})\cup(\partial\Omega\times[0,T])\}, \\[1ex] Y&:=\{f\in C^{0,\alpha;0,\frac{\alpha}{2}}(\overline{\Omega}\times[0,T]) \mid{}f=0\text{ on }\partial\Omega\times\{0\}\}. \end{align*} Notice that the map $F: X\to Y$, \begin{align*} F: v\mapsto \frac{\partial}{\partial t}(\tilde{u}+v)-B[\tilde{u}+v], \end{align*}
is well-defined because the initial-boundary conditions imply that, at every $p\in\partial\Omega$ and for every $v\in X$, we have \begin{align*} (F v)(p,0)&=\Bigl(\frac{\partial\tilde{u}}{\partial t}-B[\tilde{u}]\Bigr)(p,0) =\Bigl(\frac{\partial\phi}{\partial t}(\cdot,0)-B[w_0]\Bigr)(p)=0. \end{align*} The linearization of $B[\tilde{u}]$ around $\tilde{u}\in C^{2,\alpha;1,\frac{\alpha}{2}}(\overline{\Omega}\times[0,T])$ gives the linear operator \begin{align*} \breve{L}(\tilde{u})&=(n-1)\Bigl(-\frac{\Delta_{g_{M}}\tilde{u}}{\tilde{u}^2}
-\frac{(n-6)}{2}\frac{|\nabla\tilde{u}|_{g_{M}}^2}{\tilde{u}^3}+ \frac{(n-6)}{2\tilde{u}^2}<\nabla\tilde{u},\nabla\,\cdot\,>_{g_{M}} +\frac{\Delta_{g_{M}}}{\tilde{u}} \Bigr). \end{align*}
We claim that the map $F$ is Fr\'echet differentiable at $0\in X$. The reason is as follows. First, the map $F$ is G\^ateaux differentiable at $0\in X$ with derivative \begin{align*} D F(0)\colon X&\to Y \\ w&\mapsto \frac{\partial}{\partial t}w-\breve{L}(\tilde{u})w. \end{align*} Second, the mapping $u\mapsto \breve{L}(u)$ is continuous near $\tilde{u}$ because $\tilde{u}$ is bounded away from zero. Hence, $DF(0)$ is the Fr\'{e}chet derivative of $F$ at $0\in X$. Note that the linear operator $\frac{\partial}{\partial t}-\breve{L}(\tilde{u})$ is uniformly parabolic.
Let $f\in Y$ be an arbitrary element. By definition, $0=f(\cdot,0)$ on $\partial\Omega$. We consider the linear parabolic problem \begin{align}\label{eqn:pde-linear} \left\{\begin{aligned} \frac{\partial w}{\partial t}-\breve{L}(\tilde{u})w&=f &&\text{ in $\Omega\times[0,T]$, } \\ w&=0 &&\text{ on $\partial\Omega\times[0,T]$, } \\[.5ex] w&=0 &&\text{ on $\Omega\times\{0\}$. } \end{aligned}\right. \end{align} As before, linear parabolic theory guarantees that \eqref{eqn:pde-linear} has a unique solution $w\in X$. Hence, the continuous linear map $DF(0)\colon X\to Y$ is invertible.
By the Inverse Function Theorem, $F$ is invertible in some neighborhood $V_0\subset Y$ of $F(0)$. We claim that $V_0$ contains an element $h$ such that $h(\cdot,t)=0$ for $0\leq t\leq\varepsilon$, for sufficiently small $\varepsilon>0$. Fix $f:=F(0)=\frac{\partial}{\partial t}\tilde{u}-B[\tilde{u}]$. Choose a smooth cutoff function $\eta: [0,T]\to[0,1]$ such that \begin{align*} \eta(t)&=\begin{cases} 0, &\text{ for } t\leq\varepsilon, \\ 1, &\text{ for } t>2\varepsilon, \end{cases}& 0&\leq\frac{d\eta}{d t}\leq\frac{3}{\varepsilon}. \end{align*} Note that $\eta f\in V_0$ for sufficiently small $\varepsilon>0$. In fact, since $\tilde{u}$ is smooth in $\overline{\Omega}\times[0,T]$, we have $f\in C^{1}(\overline{\Omega}\times[0,T])$. Noting that at $t=0$ we have \begin{align}\label{est:f-hoelder1} f(\cdot,0)&=\frac{\partial\tilde{u}}{\partial t}(\cdot,0)-B[w_0]=0 \quad\text{ on $\overline{\Omega}$, } \end{align} we may estimate \begin{align}\label{eqn:sC1}
|{f(\cdot,s)}|=|f(\cdot,s)-f(\cdot,0)|\leq s|{f}|_{C^1(\overline{\Omega}\times[0,T])}. \end{align} Take $t,s\in[0,T]$ and $t>s$. For $s>2\varepsilon$, we have $$(f-\eta f)(\cdot,s)=(f-\eta f)(\cdot,t)=0.$$ Then we may assume $s\leq2\varepsilon$. In this case we may estimate the time difference of the function $(f-\eta f)$ in the following way. \begin{align}\nonumber
&|(f-\eta f)(\cdot,t)-(f-\eta f)(\cdot,s)| \\ \nonumber
&\leq |f(\cdot,t)-f(\cdot,s)|
+|\eta f(\cdot,t)-\eta f(\cdot,s)| \\ \nonumber &\leq
\bigl(1+|\eta(t)|\bigr)|f(\cdot,t)-f(\cdot,s)|
+|f(\cdot,s)||\eta(t)-\eta(s)| \\ \nonumber
&\leq2 |f|_{C^1}|t-s|+s|{f}|_{C^1}|\eta'|_{C^0}|{t-s}| \\
&\leq\bigl(2+s\tfrac{3}{\varepsilon}\bigr)|f|_{C^1}|{t-s}| \\ \label{est:f-hoelder3}
&\leq 8 |{f}|_{C^1}|{t-s}|. \end{align} By \eqref{est:f-hoelder1}, the special case $s=0$ of \eqref{est:f-hoelder3} reduces to the bound \begin{align}\label{eqn:20190125-3}
|(f-\eta f)(\cdot,t)|
&\leq8t |{f}|_{C^1}. \end{align} Since the left-hand side of \eqref{eqn:20190125-3} vanishes for $t>2\varepsilon$, we obtain \begin{align*}
|f-\eta f|_{C^0}
&\leq 16\varepsilon |f|_{C^1}. \end{align*}
If $|t-s|<\varepsilon$, the estimate \eqref{est:f-hoelder3} implies that \begin{align*}
|(f-\eta f)(\cdot,t)-(f-\eta f)(\cdot,s)|
&\leq 8\varepsilon^{1-\frac{\alpha}{2}}|f|_{C^1}|{t-s}|^{\frac{\alpha}{2}}. \end{align*}
If $|{t-s}|\geq\varepsilon$, we instead use the bound \begin{align*}
|(f-\eta f)(\cdot,t)-(f-\eta f)(\cdot,s)|
&\leq 2|f-\eta f|_{C^0} \\ \nonumber
&\leq 32\varepsilon |{f}|_{C^1} \\ \nonumber
&\leq 32\varepsilon^{1-\frac{\alpha}{2}}|f|_{C^1}|t-s|^{\frac{\alpha}{2}}. \end{align*}
Then, $$[f-\eta f]_{\frac{\alpha}{2},t}\leq32\varepsilon^{1-\frac{\alpha}{2}}|f|_{C^1}.$$
For the spatial H\"older seminorm, we obtain a similar estimate from \eqref{eqn:sC1} by bounding the spatial difference: \begin{align*}
&|{(f-\eta f)(x,t)-(f-\eta f)(y,t)}| \\ \nonumber
&\leq |{1-\eta(t)}|\,|f(x,t)-f(y,t)|^{\alpha}\,|f(x,t)-f(y,t)|^{1-\alpha} \\ \nonumber
&\leq |{f}|_{C^1}^{\alpha}\,d(x,y)^{\alpha}\bigl(4\varepsilon |{f}|_{C^1}\bigr)^{1-\alpha}
=(4\varepsilon)^{1-\alpha}|{f}|_{C^1}\,d(x,y)^{\alpha}, \end{align*}
where $d(x,y)$ is the Riemannian distance between $x$ and $y$ in $(M,g_{M})$. Then, $$|{f-\eta f}|_{Y}\leq C\varepsilon^{\beta-\alpha}|{f}|_{C^1}.$$ This implies that $\eta f$ belongs to the neighborhood $V_0$ of $f$ if $\varepsilon>0$ is sufficiently small. By the construction above, $F^{-1}(\eta f)$ is a solution to \eqref{eqn:pde-v} in $\Omega\times[0,\varepsilon]$. Setting $T=\varepsilon>0$, we then obtain the desired result. \end{proof}
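The cutoff $\eta$ used in the proof above can be realized explicitly. One admissible choice (a sketch; any smooth monotone transition with slope at most $3/\varepsilon$ works) is

```latex
\eta(t) \;=\; \psi\!\Big(\frac{t-\varepsilon}{\varepsilon}\Big),
\qquad
\psi(s) \;=\;
\begin{cases}
0, & s\le 0,\\[.5ex]
\dfrac{\varphi(s)}{\varphi(s)+\varphi(1-s)}, & 0<s<1,\\[1.5ex]
1, & s\ge 1,
\end{cases}
\qquad
\varphi(s)=e^{-1/s}\ \ (s>0).
```

Then $\eta\in C^\infty([0,T])$, $\eta\equiv 0$ on $[0,\varepsilon]$, $\eta\equiv 1$ on $[2\varepsilon,T]$, and $0\leq\eta'\leq\sup\psi'/\varepsilon$; by the symmetry $\psi(s)+\psi(1-s)=1$ one checks that $\sup\psi'=\psi'(\tfrac12)=2$, so the required slope bound $\eta'\leq 3/\varepsilon$ holds.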
We are now going to prove Theorem \ref{short_existence}. \begin{proof} We now obtain the local-in-time solution to \eqref{eqn:Yamabe-flow} on the whole Riemannian manifold $(M,g_M)$. Recall that we have assumed the scalar curvature of $g_0=w_0g_M$ is bounded. Let $\Omega_1\subset\Omega_2\subset\cdots$ be a smooth compact domain exhaustion of $M$ (the existence of such an exhaustion was used in \cite{D}). Recall that $p=\frac{n+2}{n-2}$, $L_{g_0}v=\Delta_{g_0}v-aR_{g_0}v$ and $a=\frac{n-2}{4(n-1)}$. We may write $g_0=v_0^{4/(n-2)}g_M$ with $w_0=v_0^{4/(n-2)}$ and look for a solution of the form $$g(x,t)=\check{u}^{4/(n-2)}g_0=(\check{u}{v_0})^{4/(n-2)}g_M=v^{4/(n-2)}g_M=ug_M. $$ Then the Yamabe flow equation may be written as $$
\frac{\partial v^p}{\partial t}=L_{g_M}v, \quad \ x\in M, \ t>0, $$ with the initial data $v(0)=v_0$. Recall that $L_{g_M}v =\Delta_{g_M}v-aR_{g_M}v$ in $M$.
To shorten notation, we assume $v_0=1$, so that $g_0=g_M$. Then the solution $\check{u}(x,t)$ to the Yamabe flow \eqref{eqn:Yamabe-flow} may be obtained from the sequence of approximate solutions $u_m(x,t)=\check{u}_m(x,t)^{4/(n-2)}$ constructed above. Note that $\check{u}_m(x,t)$ satisfies \begin{equation}\label{yamabe_flow_u_local1} \left\{ \begin{array}{ll}
\frac{\partial \check{u}^p_m}{\partial t}=L_{g_0}\check{u}_m, \quad &\ x\in\Omega_m, \ t>0,\\
\check{u}_m(x,t)>0, \quad &\ x\in\Omega_m, \ t>0,\\
\check{u}_m(x,t)=1, \quad &\ x\in \partial \Omega_m, \ t>0,\\
\check{u}_m(\cdot,0)=1, \quad &\ x\in\Omega_m.\\ \end{array} \right. \end{equation}
Since $\check{u}_m(x,t)=1$ on $\partial \Omega_m$, the maximum principle yields \begin{align*}
\max\limits_{\Omega_m} \check{u}_m(t)\leq \Big(1+\frac{n-2}{(n-1)(n+2)}\sup\limits_{M^n}|R_{g_0}|\,t\Big)^{\frac{n-2}{4}} \end{align*} and \begin{align*}
\min\limits_{\Omega_m} \check{u}_m(t)\geq \Big(1-\frac{n-2}{(n-1)(n+2)}\sup\limits_{M^n}|R_{g_0}|\,t\Big)^{\frac{n-2}{4}}. \end{align*}
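These two bounds follow by comparison with spatially constant barriers; the following computation (a sketch, included here for the reader's convenience) verifies the upper barrier. With $c:=\frac{n-2}{(n-1)(n+2)}\sup_{M^n}|R_{g_0}|$ and $\bar u(t):=(1+ct)^{\frac{n-2}{4}}$, we have $\bar u^{\,p}=(1+ct)^{\frac{n+2}{4}}$ since $p=\frac{n+2}{n-2}$, and

```latex
\frac{\partial \bar u^{\,p}}{\partial t}
= \frac{n+2}{4}\,c\,(1+ct)^{\frac{n-2}{4}}
= \frac{n-2}{4(n-1)}\sup_{M^n}|R_{g_0}|\;\bar u
= a\sup_{M^n}|R_{g_0}|\;\bar u
\;\geq\; -aR_{g_0}\bar u
= L_{g_0}\bar u ,
```

because $\Delta_{g_0}\bar u=0$. Hence $\bar u$ is a supersolution of \eqref{yamabe_flow_u_local1} dominating $\check u_m=1$ on the parabolic boundary, and the maximum principle gives the upper bound; the lower bound follows in the same way from the subsolution $(1-ct)^{\frac{n-2}{4}}$, which stays positive precisely as long as $t<1/c$.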
We see that $\check{u}_m(t)$ is uniformly bounded above on $[0,t_0)$ for any $t_0>0$ and has a uniform positive lower bound on $[0,\frac{(n-1)(n+2)}{2(n-2)\sup\limits_{M^n}|R_{g_0}|}]$. Let $T=\frac{(n-1)(n+2)}{2(n-2)\sup\limits_{M^n}|R_{g_0}|}$. Then every local solution $u_m$ is well-defined on the time interval $[0,T]$.
Applying Trudinger's estimate \cite{Trudinger1968} (or the Krylov--Safonov estimate) and the Schauder estimates for parabolic equations to \eqref{eqn:Yamabe-flow} on any ball $B_{g_0}(p,r_0)\subset (M,g_0)$, we have
$||u_m||_{C^{2+\alpha,1+\frac{\alpha}{2}}(B_{g_0}(p,r_0)\times [0,T])}\leq C$, where $C$ is independent of the point $p$. Passing to a diagonal subsequence of $\{u_m\}$, we may extract a sequence converging in $C^{2+\alpha,1+\frac{\alpha}{2}}_{loc}$ to a positive limit $u(x,t)=\check{u}^{\frac{4}{n-2}}$ on all of $M$, which is the desired local-in-time solution to \eqref{eqn:Yamabe-flow}. Since $g(\cdot,t)=\check{u}^{\frac{4}{n-2}}g_0$, we also have $\sup\limits_{B_{g_0}(p,r_0)\times [0,T]}|Rm(x,t)|\leq C$. Hence we may extend the solution up to the maximal existence time, as desired.
This completes the proof of the result. \end{proof}
\section{Global Yamabe flows on ALE manifolds}\label{sect3}
Assume that $(M,g_0)$ is an ALE manifold. We shall show that the Yamabe flow exists globally. Assume to the contrary that the maximal existence time of the Yamabe flow is finite, i.e., $T_{max}<\infty$. Then, by the AF property of the solution from Theorem 5.1 in \cite{CZ} (whose argument is based on the generalized maximum principle, Theorem \ref{Ecker_Huisken}, as shown in the appendix), there exists a compact set $S\subset M$ such that
$$
\frac12\leq u(x,t)\leq \frac34, \ \ \forall (x,t)\in (M\setminus S)\times [0,T_{max})
$$
and there is a point $x\in S$ such that $u$ is non-trivial in a neighborhood of $x$.
As in \cite{Y94}, using DiBenedetto's estimate we may extend the solution $u$ continuously to $T_{max}$. Using Ye's argument, we know that $u$ cannot have any zero point in $S$ at $T_{max}$. Therefore, there exist two positive constants $c_1$ and $c_2$ such that
$$
c_1\leq u(x,t)\leq c_2, \ \ \forall (x,t)\in M\times [0,T_{max}].
$$
Then we may use standard parabolic theory \cite{Trudinger1968} to extend the solution beyond $T_{max}$, contradicting $T_{max}<\infty$. The decay property of the Yamabe flow follows from Theorem 5.1 in \cite{CZ}.
This then completes the proof of Theorem \ref{global}.
The proof of Theorem \ref{main_5} follows by applying Theorem \ref{short_existence}, Theorem 5.1 in \cite{CZ}, and Theorem 6 in \cite{M}.
The proof of Theorem \ref{main_6} is now easy to give; we present it below. \begin{proof}
We choose $\delta>0$ small such that $\tilde{u}_0=\delta w_0<1$. Note that $\tilde{u}_0$ is a lower solution of the Yamabe flow.
Let $\tilde{g}_0= \tilde{u}_0^{4/(n-2)}g_0$. Then the scalar curvature of the metric $\tilde{g}_0$ is non-negative and we also have $$ g_0\geq \tilde{g}_0. $$ Note that along the Yamabe flow $g(t)\geq \tilde{g}_0$, and by the maximum principle the scalar curvature $R=R(g(t))\geq 0$ on $M$. Since $$ \frac{\partial g}{\partial t}=-Rg \leq 0, $$ we conclude that the Yamabe flow $g(t)$ converges in $C^\infty_{loc}(M)$ to a Yamabe metric of scalar curvature zero. \end{proof}
\section{Appendix: the maximum principle}\label{sect4} In this section, we present a generalized version of the maximum principle (see Theorem 4.3 in \cite{EH} and Theorem 2.6 in \cite{CZ}), where the maximum principle is established for the parabolic inequality $\frac{\partial }{\partial t}v - \Delta v\leq b\cdot \nabla v+cv$ or
$\frac{\partial }{\partial t}v - div(a \nabla v)\leq b\cdot \nabla v+cv$ on noncompact manifolds, where $\Delta$ and $\nabla$ depend on $g(t)$. Our maximum principle concerns the more general inequality
$$m(x) \frac{\partial }{\partial t}v - div(a \nabla v)\leq b\cdot \nabla v+cv \quad \text{ on } M\times [0,T),$$ where $m(x)$ is a positive regular function on $M$.
\begin{Thm}\label{Ecker_Huisken} Suppose that the complete noncompact manifold $M^n$ with Riemannian metric $g(t)$ satisfies the uniform volume growth condition \begin{align*}
vol_{g(t)}(B_{g(t)}(p,r))\leq \exp(k(1+r^2)) \end{align*} for some point $p\in M$ and a uniform constant $k>0$ for all $t\in [0,T]$. Let $v$ be a differentiable function on $M\times (0,T]$ and continuous on $M\times [0,T]$. Assume that $v$ and $g(t)$ satisfy
(i) The differential inequality \begin{align*}
m(x)\frac{\partial }{\partial t}v - div(a \nabla v)\leq b\cdot \nabla v+cv, \end{align*} where $m(x)$ is a positive continuous function on $M$ such that $0<m_0\leq m(x)\leq m_1$ for some constants $m_0>0$ and $m_1>0$, and the vector field $b$ and the functions $a$ and $c$ are uniformly bounded: \begin{align*}
0<\alpha_1' \leq a\leq \alpha_1, \quad \sup\limits_{M\times [0,T]} |b|\leq \alpha_2, \quad \sup\limits_{M\times [0,T]} |c|\leq \alpha_3, \end{align*} for some constants $\alpha_1',\alpha_1,\alpha_2,\alpha_3<\infty$. Here $\Delta$ and $\nabla$ depend on $g(t)$.
(ii) The initial condition \begin{align*}
v(p,0)\leq 0, \end{align*} for all $p\in M$.
(iii) The growth condition \begin{align*}
\int^T_0\Big(\int_{M}\exp\big[-\alpha_4 d_{g(t)}(p,y)^2\big]|\nabla v|^2(y)d \mu_t\Big)dt<\infty \end{align*} for some constant $\alpha_4>0$.
(iv) Bounded variation condition in metrics in the sense that \begin{align*}
\sup\limits_{M\times[0,T]}\Big|\frac{\partial}{\partial t}g(t)\Big|\leq \alpha_5 \end{align*} for some constant $\alpha_5<\infty$.
Then we have $ v\leq 0 $ on $M\times [0,T]$. \end{Thm}
\begin{Rk}\label{remark_max} Note that conditions (iii) and (iv) are satisfied if the sectional curvature of $g(t)$ and $\nabla v$ are uniformly bounded on $[0,T]$. There are many versions of the maximum principle; we refer the reader to \cite{A} and \cite{BK}. \end{Rk}
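Before the proof, we record the elementary computation behind the choice of the constant $\theta$ in the barrier $h(y,t)=-\theta d^2_{g(t)}(p,y)/(4(2\eta-t))$ used below (a sketch, stated here for the reader's convenience). Since $|\nabla d_{g(t)}(p,\cdot)|=1$ almost everywhere,

```latex
|\nabla h| \;=\; \frac{\theta\, d_{g(t)}(p,y)}{2(2\eta-t)},
\qquad
-\frac{\theta\, d^2_{g(t)}(p,y)}{4(2\eta-t)^2} \;=\; -\theta^{-1}|\nabla h|^2 .
```

Consequently, once $\alpha_5(2\eta-t)\leq\frac12$, one gets $\frac{d}{dt}h\leq-\frac12\theta^{-1}|\nabla h|^2$, and the key inequality $m_0\frac{d}{dt}h+2a|\nabla h|^2\leq 0$ holds whenever $\theta\leq\frac{m_0}{4\alpha_1}$, since $2a\leq 2\alpha_1$.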
\textbf{Proof of Theorem \ref{Ecker_Huisken}:} Write $f:=v$ and fix $K_0>0$ large. We choose $\theta>0$ and let $$ h(y,t)=-\frac{\theta d^2_{g(t)}(p,y)}{4(2\eta-t)},\quad 0<t<\eta, $$ where $d_{g(t)}(p,y)$ is the distance between $p$ and $y$ at time $t$ and $0<\eta<\min(T,\frac{1}{64K_0},\frac{1}{32\alpha_4},\frac{1}{4\alpha_5})$. Then \begin{align*} \frac{d}{dt}h=-\frac{\theta d^2_{g(t)}(p,y)}{4(2\eta-t)^2}-\frac{\theta d_{g(t)}(p,y)}{2(2\eta-t)}\frac{d}{dt}d_{g(t)}(p,y). \end{align*} By (iv), we have \begin{align*}
|\frac{d}{dt}d_{g(t)}(p,y)|\leq \frac{1}{2}\alpha_5 d_{g(t)}(p,y). \end{align*} Then we have that \begin{align*}
\frac{d}{dt}h\leq -\theta^{-1}|\nabla h|^2+\theta^{-1}\alpha_5|\nabla h|^2 (2\eta-t). \end{align*} We choose $\theta=\frac{m_0}{4\alpha_1}$. Using $\eta\leq \frac{1}{4\alpha_5}$ we have \begin{align}\label{h_estiamte}
m_0\frac{d}{dt}h+2a|\nabla h|^2\leq 0. \end{align} Let $K>0$, which will be a very large constant. Taking $f_K=\max\{\min(f,K),0\}$ and $0<\epsilon<\eta$, we have \begin{align*} &\int^{\eta}_{\epsilon}e^{-\beta t}(\int_{M}\phi^2 e^h f_K(div(a\nabla f)-\frac{\partial f}{\partial t})d\mu_t)dt\\
\geq & -\alpha_2 \int^{\eta}_{\epsilon}e^{-\beta t}(\int_{M}\phi^2 e^h f_K|\nabla f|d\mu_t)dt\\ &-\alpha_3 \int^{\eta}_{\epsilon}e^{-\beta t}(\int_{M}\phi^2 e^h f_K fd\mu_t)dt \end{align*} for some smooth time independent compactly supported function $\phi$ on $M^n$, where $\beta>0$ will be chosen later. Then we have \begin{align*} 0\leq &-\int^{\eta}_{\epsilon}e^{-\beta t}(\int_{M}\phi^2 e^h a <\nabla f_K,\nabla f_K>d\mu_t)dt\\ &-\int^{\eta}_{\epsilon}e^{-\beta t}(\int_{M}\phi^2 e^h f_K a<\nabla h,\nabla f>d\mu_t)dt\\ &-2\int^{\eta}_{\epsilon}e^{-\beta t}(\int_{M}\phi e^h f_K a<\nabla \phi,\nabla f>d\mu_t)dt\\ &-\int^{\eta}_{\epsilon}e^{-\beta t}(\int_{M}m(x)\phi^2 e^h f_K\frac{\partial f}{\partial t}d\mu_t)dt +\alpha_3 \int^{\eta}_{\epsilon}e^{-\beta t}(\int_{M}\phi^2 e^h f_K fd\mu_t)dt \\
&+\alpha_2 \int^{\eta}_{\epsilon}e^{-\beta t}(\int_{M}\phi^2 e^h f_K|\nabla f|d\mu_t)dt\\ =&\textrm{I}+\textrm{II}+\textrm{III}+\textrm{IV}+\textrm{V}+\textrm{VI}. \end{align*} By the Cauchy--Schwarz inequality, we derive \begin{align*}
\textrm{II}\leq \frac{1}{4}\int^{\eta}_{\epsilon}e^{-\beta t}(\int_{M}\phi^2 e^h a|\nabla f|^2d\mu_t)dt+\int^{\eta}_{\epsilon}e^{-\beta t}(\int_{M}\phi^2 e^h f_K^2 a|\nabla h|^2d\mu_t)dt, \end{align*} \begin{align*}
\textrm{III}\leq \frac{1}{2}\int^{\eta}_{\epsilon}e^{-\beta t}(\int_{M}\phi^2 e^h a|\nabla f|^2d\mu_t)dt+2\int^{\eta}_{\epsilon}e^{-\beta t}(\int_{M} e^h f_K^2 a|\nabla \phi|^2d\mu_t)dt, \end{align*} and \begin{align*}
\textrm{VI}&\leq \frac{1}{4}\int^{\eta}_{\epsilon}e^{-\beta t}(\int_{M}\phi^2 e^h a|\nabla f|^2d\mu_t)dt+\alpha_2^2\int^{\eta}_{\epsilon}e^{-\beta t}(\int_{M} e^h f_K^2 \frac{1}{a}\phi^2d\mu_t)dt\\
&\leq \frac{1}{4}\int^{\eta}_{\epsilon}e^{-\beta t}(\int_{M}\phi^2 e^h a|\nabla f|^2d\mu_t)dt+\frac{\alpha_2^2}{\alpha_1'}\int^{\eta}_{\epsilon}e^{-\beta t}(\int_{M} e^h f_K^2 \phi^2d\mu_t)dt. \end{align*} Since \begin{align*} -e^hf_K\frac{\partial f}{\partial t}\leq -e^h f_K \frac{\partial f_K}{\partial t}+\frac{\partial }{\partial t}(e^hf_K(f_K-f)), \end{align*} and $$f_K(f_K-f)\leq 0,$$ we obtain \begin{align*} &\qquad\textrm{IV}+\textrm{V}\\ &\leq -\frac{1}{2}\int^{\eta}_{\epsilon}e^{-\beta t}(\int_{M}m(x)\phi^2 e^h \frac{\partial f_K^2}{\partial t}d\mu_t)dt +\int^{\eta}_{\epsilon}e^{-\beta t}(\int_{M}m(x)\phi^2 \frac{\partial }{\partial t}(e^h f_K(f_K-f))d\mu_t)dt\\ & -\alpha_3 \int^{\eta}_{\epsilon}e^{-\beta t}(\int_{M}\phi^2 e^h f_K(f_K-f)d\mu_t)dt+\alpha_3 \int^{\eta}_{\epsilon}e^{-\beta t}(\int_{M}\phi^2 e^h f_K^2d\mu_t)dt. \end{align*} Moreover, we have \begin{align*}
|\frac{d}{dt}(d\mu_t)|\leq n \alpha_5 d\mu_t \end{align*} by (iv). Now we choose $\beta>0$ such that $m_0\beta\geq 2n\alpha_5+4\alpha_3+4\frac{\alpha_2^2}{\alpha_1'}$. Then \begin{align*} &\qquad\textrm{IV}+\textrm{V}\\
&\leq -\frac{1}{2}e^{-\beta \eta}\int_{M}m(x)\phi^2 e^h f_K^2d\mu_t\Big|_{t=\eta}
+\frac{1}{2}e^{-\beta \epsilon}\int_{M}m(x)\phi^2 e^h f_K^2d\mu_t\Big|_{t=\epsilon}\\ &+\frac{1}{2}\int^{\eta}_{\epsilon}e^{-\beta t}(\int_{M}m(x)\phi^2 e^h f_K^2 \frac{\partial h}{\partial t}d\mu_t)dt-\frac{1}{4}m_0\beta\int^{\eta}_{\epsilon}e^{-\beta t}(\int_{M}\phi^2 e^h f_K^2 d\mu_t)dt\\ &
+e^{-\beta \eta}\int_{M}m(x)\phi^2 e^h f_K(f_K-f)d\mu_t\Big|_{t=\eta}-e^{-\beta \epsilon}\int_{M}m(x)\phi^2 e^h f_K(f_K-f)d\mu_t\Big|_{t=\epsilon}. \end{align*} Combining the estimates of $\textrm{I}$--$\textrm{VI}$ and letting $\epsilon\to 0$, we obtain \begin{align*}
0\leq &-\int^{\eta}_{0}e^{-\beta t}(\int_{M}\phi^2 e^h a |\nabla f_K|^2d\mu_t)dt
+\int^{\eta}_{0}e^{-\beta t}(\int_{M}\phi^2 e^h a |\nabla f|^2d\mu_t)dt\\
&+2\int^{\eta}_{0}e^{-\beta t}(\int_{M} e^h f_K^2 a|\nabla \phi|^2d\mu_t)dt-\frac{1}{2}e^{-\beta \eta}\int_{M}m(x)\phi^2 e^h f_K^2d\mu_t\Big|_{t=\eta}, \end{align*}
by $f_K\equiv 0$ at $t=0$ and \eqref{h_estiamte}. Now we choose $0\leq \phi\leq 1$ satisfying $\phi \equiv 1$ on $B_{g_0}(p,R)$, $\phi \equiv 0$ outside $B_{g_0}(p,R+1)$ and $|\nabla_{g_0} \phi|_{g_0}\leq 2$. Then we have \begin{align*}
&\frac{1}{2}e^{-\beta \eta}\int_{B_{g_0}(p,R)}m(x)\phi^2 e^h f_K^2d\mu_t\Big|_{t=\eta}\leq \int^{\eta}_{0}e^{-\beta t}(\int_{B_{g_0}(p,R+1)}\phi^2 e^h a (|\nabla f|^2-|\nabla f_K|^2)d\mu_t)dt\\ &+C(\alpha_5)\int^{\eta}_{0}e^{-\beta t}(\int_{B_{g_0}(p,R+1)\backslash B_{g_0}(p,R)} e^h f_K^2 a d\mu_t)dt, \end{align*} where $C(\alpha_5)$ is a constant depending only on $\alpha_5$. By $0<\eta<\min(\frac{1}{64K_0},\frac{1}{32\alpha_4})$ and the volume growth assumption on $M^n$, we have \begin{align*} \int^{\eta}_{0}e^{-\beta t}(\int_{B_{g_0}(p,R+1)\backslash B_{g_0}(p,R)} e^h f_K^2 a d\mu_t)dt\to 0, \end{align*} as $R\to \infty$. Then we derive \begin{align*}
&\frac{1}{2}e^{-\beta \eta}\int_{M}m(x) e^h f_K^2d\mu_t\Big|_{t=\eta}\leq \int^{\eta}_{0}e^{-\beta t}(\int_{M} e^h a (|\nabla f|^2-|\nabla f_K|^2)d\mu_t)dt. \end{align*} Letting $K\to\infty$, we conclude that \begin{align*}
\frac{1}{2}e^{-\beta \eta}\int_{M}m(x) e^h (\max(f,0))^2d\mu_t\Big|_{t=\eta}\leq 0, \end{align*} where $0<\eta<\min(T,\frac{1}{64K_0},\frac{1}{32\alpha_4},\frac{1}{4\alpha_5})$. This implies that $f\leq 0$ in $M^n\times[0,\eta]$. Iterating this argument in time, we obtain $f\leq 0$ in $M^n\times[0,T]$. $\Box$
\end{document}
\begin{document}
\title{Generalized Fourier--Feynman transforms and generalized convolution products on Wiener space II}
\titlerunning{Generalized Fourier--Feynman transforms and generalized convolution products II}
\author{Sang Kil Shim \and Jae Gil Choi}
\institute{Sang Kil Shim \at
Department of Mathematics, Dankook University, Cheonan 330-714, Republic of Korea\\
\email{skshim22@dankook.ac.kr}
\and
Jae Gil Choi (Corresponding author)\at
School of General Education, Dankook University, Cheonan 330-714, Republic of Korea\\
\email{jgchoi@dankook.ac.kr} }
\date{Received: date / Accepted: date}
\maketitle
\begin{abstract} The purpose of this article is to present a second type of fundamental relationship between the generalized Fourier--Feynman transform and the generalized convolution product on Wiener space. The relationships in this article are natural extensions, to the setting of an infinite-dimensional Banach space, of the structure which exists between the Fourier transform and the convolution of functions on Euclidean spaces. \keywords{Wiener space \and Gaussian process \and generalized Fourier--Feynman transform \and generalized convolution product.}
\subclass{Primary 46G12; Secondary 28C20 \and 60G15 \and 60J65} \end{abstract}
\setcounter{equation}{0} \section{Introduction}\label{sec:introduction}
\par Given a positive real $T>0$, let $C_0[0,T]$ denote one-parameter Wiener space, that is, the space of all real-valued continuous functions $x$ on $[0,T]$ with $x(0)=0$. Let $\mathcal{M}$ denote the class of all Wiener measurable subsets of $C_0[0,T]$ and let $\mathfrak{m}$ denote Wiener measure. Then, as is well-known, $(C_0[0,T],\mathcal{M},\mathfrak{m})$ is a complete measure space.
\par In \cite{HPS95,HPS96,HPS97-1,PSS98} Huffman, Park, Skoug and Storvick established fundamental relationships between the analytic Fourier--Feynman transform (FFT) and the convolution product (CP) for functionals $F$ and $G$ on $C_0[0,T]$, as follows:
\begin{equation}\label{eq:offt-ocp} T_{q}^{(p)}\big((F*G)_q\big)(y) = T_{q}^{(p)}(F)\bigg(\frac{y}{\sqrt2}\bigg) T_{q}^{(p)}(G)\bigg(\frac{y}{\sqrt2}\bigg) \end{equation} and \begin{equation}\label{eq:ocp-offt} \big(T_{q}^{(p)}(F)*T_{q}^{(p)}(G)\big)_{-q}(y) = T_{q}^{(p)}\bigg(F\bigg(\frac{\cdot}{\sqrt2}\bigg)G \bigg(\frac{\cdot}{\sqrt2}\bigg)\bigg)(y) \end{equation} for scale-almost every $y\in C_{0}[0,T]$, where $T_{q}^{(p)}(F)$ and $(F*G)_q$ denote the $L_p$ analytic FFT and the CP of functionals $F$ and $G$ on $C_0[0,T]$, respectively. For an elementary introduction to the FFT and the corresponding CP, see \cite{SS04}.
\par For $f\in L_2(\mathbb R)$, let the Fourier transform of $f$ be given by \[ \mathcal{F}(f)(u)=\int_{\mathbb R}e^{iuv}f(v)dm_L^{\mathfrak{n}}(v) \] and for $f, g\in L_2(\mathbb R)$, let the convolution of $f$ and $g$ be given by \[ (f*g)(u)=\int_{\mathbb R} f(u-v)g(v)dm_L^{\mathfrak{n}}(v) \] where $dm_L^{\mathfrak{n}} (v)$ denotes the normalized Lebesgue measure $(2\pi)^{-1/2}dv$ on $\mathbb R$. As commented in \cite{Indag}, the Fourier transform $\mathcal{F}$ acts like a homomorphism with convolution $*$ and ordinary multiplication on $L_2(\mathbb R)$ as follows:
for $f, g \in L_2(\mathbb R)$, \begin{equation}\label{worthy} \mathcal{F}(f*g)=\mathcal{F}(f)\mathcal{F}(g). \end{equation} Moreover, the Fourier transform $\mathcal{F}$ and the convolution $*$ satisfy the dual property \begin{equation}\label{eq:F-02} \mathcal{F}(f)*\mathcal{F}(g)=\mathcal{F}(f g). \end{equation} Equations \eqref{eq:offt-ocp} and \eqref{eq:ocp-offt} above are natural extensions (to the case of an infinite-dimensional Banach space) of equations \eqref{worthy} and \eqref{eq:F-02}, respectively.
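As a quick sanity check of \eqref{worthy} (an illustrative computation, not taken from the cited references), let $f(v)=e^{-v^2/2}$, so that $\mathcal F(f)(u)=e^{-u^2/2}$ under the normalization $dm_L^{\mathfrak n}(v)=(2\pi)^{-1/2}dv$. Completing the square in the convolution integral gives

```latex
(f*f)(u)
=\frac{1}{\sqrt{2\pi}}\int_{\mathbb R} e^{-(u-v)^2/2}\,e^{-v^2/2}\,dv
=\frac{e^{-u^2/4}}{\sqrt{2\pi}}\int_{\mathbb R} e^{-(v-u/2)^2}\,dv
=\frac{1}{\sqrt2}\,e^{-u^2/4},
```

and a direct computation then shows $\mathcal F(f*f)(u)=e^{-u^2}=\big(\mathcal F(f)(u)\big)^2$, in agreement with \eqref{worthy}.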
\par In \cite{CCKSY05,HPS97-2}, the authors extended the relationships \eqref{eq:offt-ocp} and \eqref{eq:ocp-offt} to relationships between the generalized FFT (GFFT) and the generalized CP (GCP) of functionals on $C_0[0,T]$. The definitions of the ordinary FFT and the corresponding CP are based on the Wiener integral, see \cite{HPS95,HPS96,HPS97-1}, while the definitions of the GFFT and the GCP studied in \cite{CCKSY05,HPS97-2} are based on the generalized Wiener integral \cite{CPS93,PS91}. The generalized Wiener integral (associated with a Gaussian process) is given by $\int_{C_0[0,T]} F(\mathcal Z_h(x,\cdot))d\mathfrak{m}(x)$, where $\mathcal Z_h$ is the Gaussian process on $C_0[0,T]\times[0,T]$ defined by $\mathcal Z_h (x,t)=\int_0^t h(s)\tilde{d}x(s)$, $h$ is a nonzero function in $L_2[0,T]$, and $\int_0^t h(s)\tilde{d}x(s)$ denotes the Paley--Wiener--Zygmund stochastic integral \cite{PWZ33,Park69,PS88}.
On the other hand, in \cite{Indag}, the authors defined a more general CP (see, Definition \ref{def:cp} below) and developed the relationship, such as \eqref{eq:offt-ocp}, between their GFFT and the GCP (see, Theorem \ref{thm:gfft-gcp-compose} below). Equation \eqref{eq:gfft-gcp} in Theorem \ref{thm:gfft-gcp-compose} is useful in that it permits one to calculate the GFFT of the GCP of functionals on $C_0[0,T]$ without actually calculating the GCP.
In this paper we establish the second relationship, analogous to equation \eqref{eq:ocp-offt}, between the GFFT and the GCP of functionals on $C_0[0,T]$. Our new result corresponds to equation \eqref{eq:F-02} rather than equation \eqref{worthy}. It turns out, as noted in Remark \ref{re:meaning-main} below, that this second relationship between the GFFT and the GCP also permits one to calculate the GCP of the GFFTs of functionals on $C_0[0,T]$ without actually calculating the GCP.
\setcounter{equation}{0} \section{Preliminaries}\label{sec:preliminaries}
\par In order to present our relationship between the GFFT and the GCP, we follow the exposition of \cite{Indag}.
\par A subset $B$ of $C_0[0,T]$ is said to be scale-invariant measurable provided $\rho B\in \mathcal{M}$ for all $\rho>0$, and a scale-invariant measurable set $N$ is said to be scale-invariant null provided $\mathfrak{m}(\rho N)=0$ for all $\rho>0$. A property that holds except on a scale-invariant null set is said to hold scale-invariant almost everywhere (s-a.e.). A functional $F$ is said to be scale-invariant measurable provided $F$ is defined on a scale-invariant measurable set and $F(\rho\,\cdot\,)$ is Wiener-measurable for every $\rho> 0$. If two functionals $F$ and $G$ are equal s-a.e., we write $F\approx G$.
\par Let $\mathbb C$, $\mathbb C_+$ and $\mathbb{\widetilde C}_+$ denote the set of complex numbers, complex numbers with positive real part and nonzero complex numbers with nonnegative real part, respectively. For each $\lambda \in \mathbb C$, $\lambda^{1/2}$ denotes the principal square root of $\lambda$; i.e., $\lambda^{1/2}$ is always chosen to have positive real part, so that $\lambda^{-1/2}=(\lambda^{-1})^{1/2}$ is in $\mathbb C_+$ for all $\lambda\in\widetilde{\mathbb C}_+$.
\par Let $h$ be a function in $L_2[0,T]\setminus\{0\}$ and let $F$ be a $\mathbb C$-valued scale-invariant measurable functional on $C_0[0,T]$ such that \[ \int_{C_0[0,T]} F\big(\lambda^{-1/2}\mathcal Z_h(x,\cdot)\big)d\mathfrak{m}(x) =J(h;\lambda) \] exists as a finite number for all $\lambda>0$. If there exists a function $J^* (h;\lambda)$ analytic on $\mathbb C_+$ such that $J^*(h;\lambda)=J(h;\lambda)$ for all $\lambda>0$, then $J^*(h;\lambda)$ is defined to be the generalized analytic Wiener integral (associated with the Gaussian process $\mathcal{Z}_h$) of $F$ over $C_0[0,T]$ with parameter $\lambda$, and for $\lambda \in \mathbb C_+$ we write \[ \int_{C_0[0,T]}^{\mathrm{anw}_{\lambda}} F\big(\mathcal{Z}_h(x,\cdot)\big)d\mathfrak{m}(x) = J^*(h;\lambda). \] Let $q\ne 0$ be a real number and let $F$ be a functional such that \[ \int_{C_0[0,T]}^{\mathrm{anw}_{\lambda}} F\big(\mathcal{Z}_h(x,\cdot)\big)d\mathfrak{m}(x) \] exists for all $\lambda \in \mathbb C_+$. If the following limit exists, we call it the generalized analytic Feynman integral of $F$ with parameter $q$ and we write \[ \int_{C_0[0,T]}^{\mathrm{anf}_{q}} F\big(\mathcal{Z}_h(x,\cdot)\big)d\mathfrak m(x) = \lim_{\substack{ \lambda\to -iq \\ \lambda\in \mathbb C_+}}
\int_{C_0[0,T]}^{\mathrm{anw}_{\lambda}} F\big(\mathcal{Z}_h(x,\cdot)\big)d\mathfrak m(x). \]
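To illustrate these definitions (a computation consistent with Theorem \ref{thm:gfft} below, included here for orientation), take $F(x)=\exp\{i\langle u,x\rangle\}$ for a fixed $u\in L_2[0,T]$. By standard properties of the Paley--Wiener--Zygmund integral, $\langle u,\mathcal Z_h(x,\cdot)\rangle=\mathcal Z_{uh}(x,T)$ is Gaussian with mean $0$ and variance $\|uh\|_2^2$, so

```latex
J(h;\lambda)
=\int_{C_0[0,T]}\exp\big\{i\lambda^{-1/2}\mathcal Z_{uh}(x,T)\big\}\,d\mathfrak m(x)
=\exp\Big\{-\frac{\|uh\|_2^2}{2\lambda}\Big\},
\qquad \lambda>0,
```

which extends analytically to $\mathbb C_+$; letting $\lambda\to -iq$ yields the generalized analytic Feynman integral $\exp\{-\frac{i}{2q}\|uh\|_2^2\}$.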
\par Next (see \cite{CCKSY05,Indag,HPS97-2}) we state the definition of the GFFT.
\renewcommand{\thedefinition}{\thesection.1}
\begin{definition} Let $h$ be a function in $L_2[0,T]\setminus\{0\}$. For $\lambda\in\mathbb{C}_+$ and $y \in C_{0}[0,T]$, let \[ T_{\lambda,h}(F)(y) =\int_{C_0[0,T]}^{\mathrm{anw}_{\lambda}} F\big(y+\mathcal{Z}_h(x,\cdot)\big)d\mathfrak{m}(x). \] For $p\in (1,2]$ we define the $L_p$ analytic GFFT (associated with the Gaussian process $\mathcal{Z}_h$), $T^{(p)}_{q,h}(F)$ of $F$, by the formula, \[ T^{(p)}_{q,h}(F)(y) =\operatorname*{l.i.m.}_{\substack{ \lambda\to -iq \\ \lambda\in \mathbb C_+}} T_{\lambda,h} (F)(y) \] if it exists; i.e., for each $\rho>0$, \[ \lim_{\substack{ \lambda\to -iq \\ \lambda\in \mathbb C_+}}
\int_{C_{0}[0,T]}\big| T_{\lambda,h} (F)(\rho y)
-T^{(p)}_{q, h }(F)(\rho y) \big|^{p'} d\mathfrak m (y)=0 \] where $1/p+1/p' =1$. We define the $L_1$ analytic GFFT, $T_{q, h }^{(1)}(F)$ of $F$, by the formula \[ T_{q, h }^{(1)}(F)(y) = \lim_{\substack{ \lambda\to -iq \\ \lambda\in \mathbb C_+}} T_{\lambda,h} (F)(y) \] for s-a.e. $y\in C_0[0,T]$ whenever this limit exists. \end{definition}
\par We note that for $p \in [1,2]$, $T_{q,h}^{(p)}(F)$ is defined only s-a.e. We also note that if $T_{q,h}^{(p)}(F)$ exists and if $F\approx G$, then $T_{q,h}^{(p)}(G)$ exists and $T_{q,h}^{(p)}(G)\approx T_{q,h }^{(p)}(F)$. One can see that for each $h\in L_2[0,T]$, $T_{q,h}^{(p)}(F)\approx T_{q,-h}^{(p)}(F)$ since \[ \int_{C_0[0,T]}F(x)d\mathfrak{m}(x)=\int_{C_0[0,T]}F(-x)d\mathfrak{m}(x). \]
\renewcommand{\theremark}{\thesection.2}
\begin{remark}\label{remark:ordinary-fft} Note that if $h\equiv 1$ on $[0,T]$, then the generalized analytic Feynman integral and the $L_p$ analytic GFFT, $T_{q,1}^{(p)}(F)$, agree with the previous definitions of the analytic Feynman integral and the analytic FFT, $T_{q}^{(p)}(F)$, respectively \cite{HPS95,HPS96,HPS97-1,PSS98} because $\mathcal Z_1(x,\cdot)=x$ for all $x \in C_0[0,T]$. \end{remark}
\par Next (see \cite{Indag}) we give the definition of our GCP.
\renewcommand{\thedefinition}{\thesection.3}
\begin{definition}\label{def:cp} Let $F$ and $G$ be scale-invariant measurable functionals on $C_{0}[0,T]$. For $\lambda \in \widetilde{\mathbb C}_+$ and $h_1,h_2\in L_2[0,T]\setminus\{0\}$, we define their GCP with respect to $\{\mathcal{Z}_{h_1},\mathcal{Z}_{h_2}\}$ (if it exists) by
\begin{equation}\label{eq:gcp-Z} \begin{aligned}
(F*G)_{\lambda}^{(h_1,h_2)}(y)
= \begin{cases} \int_{C_0[0,T]}^{\mathrm{anw}_{\lambda}} F\big(\frac{y+{\mathcal Z}_{h_1} (x,\cdot)}{\sqrt2}\big) G\big(\frac{y-{\mathcal Z}_{h_2} (x,\cdot)}{\sqrt2}\big)d \mathfrak m(x), & \lambda \in \mathbb C_+, \\[1ex]
\int_{C_0[0,T]}^{\mathrm{anf}_{q}} F\big(\frac{y+{\mathcal Z}_{h_1} (x,\cdot)}{\sqrt2}\big) G\big(\frac{y-{\mathcal Z}_{h_2} (x,\cdot)}{\sqrt2}\big)d \mathfrak{m}(x), & \lambda=-iq,\,\, q\in \mathbb R, \,\,q\ne 0. \end{cases} \end{aligned} \end{equation} When $\lambda =-iq$, we denote $(F*G)_{\lambda}^{(h_1,h_2)}$ by $(F*G)_{q}^{(h_1,h_2)}$. \end{definition}
\renewcommand{\theremark}{\thesection.4}
\begin{remark}\label{remark:ordinary-cp} (i) Given a function $h$ in $L_2[0,T]\setminus\{0\}$ and letting $h_1=h_2\equiv h$, equation \eqref{eq:gcp-Z} yields the convolution product studied in \cite{CCKSY05,HPS97-2}: \[ \begin{aligned} (F*G)_{q}^{(h,h)}(y)& \equiv(F*G)_{q,h}(y)\\ &=\int_{C_0[0,T]}^{\mathrm{ anf}_{q}} F\bigg(\frac{y+ \mathcal Z_{h} (x,\cdot)}{\sqrt2}\bigg) G\bigg(\frac{y- \mathcal Z_{h} (x,\cdot)}{\sqrt2}\bigg)d \mathfrak{m}(x) . \end{aligned} \]
(ii) Choosing $h_1=h_2\equiv 1$, equation \eqref{eq:gcp-Z} yields the convolution product studied in \cite{HPS95,HPS96,HPS97-1,PSS98}: \[ \begin{aligned} (F*G)_{q}^{(1,1) }(y) & \equiv (F*G)_{q}(y)\\ & =\int_{C_0[0,T]}^{\mathrm{ anf}_{q}} F\bigg(\frac{y+ x}{\sqrt2}\bigg) G\bigg(\frac{y- x}{\sqrt2}\bigg)d \mathfrak{m}(x). \end{aligned} \] \end{remark}
\par In order to establish our assertion we define the following conventions. Let $h_1$ and $h_2$ be nonzero functions in $L_2[0,T]$. Then there exists a function $\mathbf{s}\in L_2[0,T]$ such that \begin{equation}\label{eq:fn-rot} \mathbf{s}^2(t)=h_1^2(t)+h_2^2(t) \end{equation} for $m_L$-a.e. $t\in [0,T]$, where $m_L$ denotes Lebesgue measure on $[0,T]$. Note that the function `$\mathbf{s}$' satisfying \eqref{eq:fn-rot} is not unique. We will use the symbol $\mathbf{s}(h_1,h_2)$ for the functions `$\mathbf{s}$' that satisfy \eqref{eq:fn-rot} above.
Given nonzero functions $h_1$ and $h_2$ in $L_{2}[0,T]$, infinitely many functions $\mathbf{s}(h_1,h_2)$ exist in $L_{2}[0,T]$. Thus $\mathbf{s}(h_1,h_2)$ can be considered as an equivalence class of the equivalence relation $\sim$ on $L_2[0,T]$ given by \[ \mathbf{s}_1\sim \mathbf{s}_2 \,\,\Longleftrightarrow\,\, \mathbf{s}_1^2=\mathbf{s}_2^2 \,\,\,m_L\mbox{-a.e.}. \] But we observe that for every function $\mathbf{s}$ in the equivalence class $\mathbf{s}(h_1,h_2)$, the Gaussian random variable ${\mathcal {Z}}_{\mathbf{s}}(x,T)$ has the normal
distribution $N(0,\|h_1\|_2^2+\|h_2\|_2^2)$.
Inductively, given a sequence $\mathcal H=\{h_1,\ldots, h_n\}$ of nonzero functions in $L_2[0,T]$, let $\mathbf{s}(\mathcal H)\equiv \mathbf{s}(h_1,h_2,\ldots,h_n)$ be the equivalence class of the functions $\mathbf{s}$ which satisfy the relation \[ \mathbf{s}^2(t)=h_1^2(t)+\cdots+h_n^2(t) \] for $m_L$-a.e. $t\in[0,T]$. Throughout the rest of this paper, for convenience, we will regard $\mathbf{s}(\mathcal H)$ as a function in $L_2[0,T]$. We note that if the functions $h_1,\ldots, h_n$ are in $L_{\infty}[0,T]$, then we can take $\mathbf{s}(\mathcal H)$ to be in $L_{\infty}[0,T]$. By an induction argument it follows that \[ \mathbf{s}(\mathbf{s}(h_1,h_2,\ldots,h_{k-1}),h_k) =\mathbf{s}(h_1,h_2,\ldots,h_k) \] for all $k\in\{2,\ldots,n\}$.
\renewcommand{\theexample}{\thesection.5}
\begin{example} Let $h_1(t)=t^4$, $h_2(t)=\sqrt{2}t^3$, $h_3(t)=\sqrt{3}t^2$, $h_4(t)={\sqrt{2}}t$, $h_5(t)=1$, and $\mathbf{s}(t)=t^4 +t^2 +1$ for $t\in [0,T]$. Then $\mathcal H=\{h_1,h_2,h_3,h_4,h_5\}$ is a sequence of functions in $L_2[0,T]$ and it follows that \[ \mathbf{s}^2(t)= h_1^2(t)+ h_2^2(t)+ h_3^2(t)+ h_4^2(t)+ h_5^2(t). \] Thus we can write $\mathbf{s}\equiv \mathbf{s}(h_1,h_2,h_3,h_4,h_5)$. Furthermore, one can see that \[ (-1)^{m}\mathbf{s}\equiv \mathbf{s}((-1)^{n_1}h_1,(-1)^{n_2}h_2,(-1)^{n_3}h_3,(-1)^{n_4}h_4,(-1)^{n_5}h_5) \] with $m, n_1,n_2,n_3,n_4,n_5 \in \{1,2\}$. On the other hand, it also follows that \[ \mathbf{s}(h_1,h_2,h_3,h_4,h_5)(t)\equiv \mathbf{s}(g_1,g_2,g_3) (t) \] for each $t\in [0,T]$, where $g_1(t)=-t^4-1$, $g_2(t)={\sqrt2} t\sqrt{t^4+1}$, and $g_3(t)=t^2$ for $t\in [0,T]$.
\end{example}
\renewcommand{\theexample}{\thesection.6}
\begin{example} Let $h_1(t)=t^4+t^2$, $h_2(t)=t^4-t^2$, $h_3(t)=\sqrt2 t^3$, and $\mathbf{s}(t)=\sqrt{2(t^8 +t^4)}$ for $t\in [0,T]$. Then, by the convention for $\mathbf{s}$, it follows that \[ \mathbf{s}(t)\equiv\mathbf{s}(h_1,h_2) (t)
\equiv \mathbf{s}({\sqrt2} h_2 ,{\sqrt2} h_3)(t). \] \end{example}
\renewcommand{\theexample}{\thesection.7}
\begin{example} Using the well-known formulas for trigonometric and hyperbolic functions, it follows that \[ \begin{aligned} \sec \big(\tfrac{\pi}{4 T} t\big) &=\mathbf{s}\big(1, \tan\big(\tfrac{\pi}{4 T} \cdot\big)\big)(t)\\ &=\mathbf{s}\big(\sin,\cos,\tan\big(\tfrac{\pi}{4 T} \cdot\big)\big)(t) \\ & =\mathbf{s}\big(\sin\big(\tfrac{\pi}{4 T} \cdot\big),\cos\big(\tfrac{\pi}{4 T}
\cdot\big),\tan\big(\tfrac{\pi}{4 T} \cdot\big)\big)(t), \end{aligned} \] \[ \cosh t =\mathbf{s}(1, \sinh)(t)=\mathbf{s}(-1, \sinh)(t)=\mathbf{s}(\sin,\cos,\sinh)(t) , \] and \[ -\coth \big(t+\tfrac12\big) =\mathbf{s}\big(1, \mathrm{csch}\big(\cdot+\tfrac12\big)\big)(t) =\mathbf{s}(-\sin,\cos,- \mathrm{csch}\big(\cdot+\tfrac12\big))(t) \] for each $t\in [0,T]$. \end{example}
\setcounter{equation}{0} \section{The relationship between the GFFT and the GCP}
\par The Banach algebra $\mathcal S(L_2[0,T])$ consists of functionals on $C_0[0,T]$ expressible in the form
\begin{equation}\label{eq:element} F(x)=\int_{L_2[0,T]}\exp\{i\langle{u,x}\rangle\}df(u) \end{equation} for s-a.e. $x\in C_0[0,T]$, where the associated measure $f$ is an element of $\mathcal M(L_2[0,T])$, the space of $\mathbb C$-valued countably additive (and hence finite) Borel measures on $L_2[0,T]$, and the notation $\langle{u,x}\rangle$ denotes the Paley--Wiener--Zygmund stochastic integral $\mathcal Z_u(x,T) \equiv \int_0^T u(t)\tilde{d}x(t)$. For more details, see \cite{CS80,CPS93,HPS97-2,PSS98}.
\par We first present two known results for the GFFT and the GCP of functionals in the Banach algebra $\mathcal S(L_2[0,T])$.
\renewcommand{\thetheorem}{\thesection.1}
\begin{theorem}[\cite{HPS97-2}]\label{thm:gfft} Let $h$ be a nonzero function in $L_\infty[0,T]$, and let $F\in\mathcal S(L_2[0,T])$ be given by equation \eqref{eq:element}. Then, for all $p\in[1,2]$, the $L_p$ analytic GFFT, $T_{q,h}^{(p)}(F)$ of $F$ exists for all nonzero real numbers $q$, belongs to $\mathcal S(L_2[0,T])$, and is given by the formula
\[ T_{q,h}^{(p)}(F)(y) = \int_{L_2[0,T]}\exp\{i\langle{u,y}\rangle\}df_t^h(u) \] for s-a.e. $y\in C_{0}[0,T]$, where $f_t^h$ is the complex measure in $\mathcal M(L_2[0,T])$ given by \[
f_t^{h}(B)=\int_B \exp\bigg\{-\frac{i}{2q}\|uh\|_2^2\bigg\}df(u) \] for $B \in \mathcal B(L_2[0,T])$. \end{theorem}
\renewcommand{\thetheorem}{\thesection.2}
\begin{theorem}[\cite{Indag}] \label{thm:gcp} Let $k_1$ and $k_2$ be nonzero functions in $L_\infty[0,T]$ and let $F$ and $G$ be elements of $\mathcal S(L_2[0,T])$ with corresponding finite Borel measures $f$ and $g$ in $\mathcal M(L_2[0,T])$. Then, the GCP $(F*G)_q^{(k_1,k_2)}$ exists for all nonzero real $q$, belongs to $\mathcal S(L_2[0,T])$, and is given by the formula \[ (F*G)_q^{(k_1,k_2)}(y) = \int_{L_2[0,T]}\exp\{i\langle{w,y}\rangle\}d\varphi^{k_1,k_2}_c(w) \] for s-a.e. $y\in C_{0}[0,T]$, where \[ \varphi^{k_1,k_2}_c =\varphi^{k_1,k_2}\circ\phi^{-1}, \] $\varphi^{k_1,k_2}$ is the complex measure in $\mathcal M(L_2^2[0,T])$ given by \[ \varphi^{k_1,k_2}(B)
=\int_B \exp\bigg\{-\frac{i}{4q}\|uk_1-vk_2\|_2^2\bigg\}df(u)dg(v) \] for $B \in \mathcal B(L_2^2[0,T])$, and $\phi:L_2^2[0,T]\to L_2[0,T]$ is the continuous function given by $\phi(u,v)=(u+v)/\sqrt2$. \end{theorem}
\par The following corollary and theorem will be very useful in the proof of our main theorem (namely, Theorem \ref{thm:cp-tpq02}), in which we establish a relationship between the GFFT and the GCP analogous to equation \eqref{eq:ocp-offt}. The corollary is a simple consequence of Theorem \ref{thm:gfft}.
\renewcommand{\thecorollary}{\thesection.3}
\begin{corollary}\label{thm:afft-inverse} Let $h$ and $F$ be as in Theorem \ref{thm:gfft}. Then, for all $p\in[1,2]$, and all nonzero real $q$, \begin{equation}\label{eq:inverse} T_{-q, h}^{(p)}\big(T_{q,h}^{(p)}(F)\big)\approx F. \end{equation} As such, the GFFT, $T_{q,h}^{(p)}$, has the inverse transform $\{T_{q,h}^{(p)}\}^{-1}=T_{-q,h}^{(p)}$. \end{corollary}
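At the level of the associated measures, equation \eqref{eq:inverse} can be sketched directly from Theorem \ref{thm:gfft}: composing the transforms with parameters $q$ and $-q$ multiplies the measure $f$ by two reciprocal phase factors,

```latex
% Sketch: the phase factors attached to the parameters q and -q cancel, so
% the composed transform has the same associated measure as F itself.
\[
\exp\bigg\{-\frac{i}{2(-q)}\|uh\|_2^2\bigg\}
\exp\bigg\{-\frac{i}{2q}\|uh\|_2^2\bigg\}
=\exp\bigg\{\frac{i}{2q}\|uh\|_2^2-\frac{i}{2q}\|uh\|_2^2\bigg\}=1,
\]
```

so the associated measure of $T_{-q,h}^{(p)}\big(T_{q,h}^{(p)}(F)\big)$ is $f$ itself.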
\par The following theorem is due to Chang, Chung and Choi \cite{Indag}.
\renewcommand{\thetheorem}{\thesection.4}
\begin{theorem} \label{thm:gfft-gcp-compose} Let $k_1$, $k_2$, $F$, and $G$ be as in Theorem \ref{thm:gcp}, and let $h$ be a nonzero function in $L_\infty[0,T]$. Assume that $h^2=k_1k_2$ $m_L$-a.e. on $[0,T]$. Then, for all $p\in[1,2]$ and all nonzero real $q$,
\begin{equation}\label{eq:gfft-gcp} \begin{aligned} &T_{q,h}^{(p)}\big((F*G)_q^{(k_1,k_2)}\big)(y) \\ & = T_{q,\mathbf{s}(h,k_1)/\sqrt2}^{(p)}(F)\bigg(\frac{y}{\sqrt2}\bigg) T_{q,\mathbf{s}(h,k_2)/\sqrt2}^{(p)}(G)\bigg(\frac{y}{\sqrt2}\bigg) \end{aligned} \end{equation} for s-a.e. $y\in C_{0}[0,T]$, where $\mathbf{s}(h,k_1)$ and $\mathbf{s}(h,k_2)$ are functions satisfying the relation \eqref{eq:fn-rot}. \end{theorem}
\renewcommand{\theremark}{\thesection.5}
\begin{remark} In equation \eqref{eq:gfft-gcp}, choosing $h=k_1=k_2\equiv 1$ yields equation \eqref{eq:offt-ocp} above. Also, letting $h=k_1=k_2$ yields the results studied in \cite{CCKSY05,HPS97-2}. As mentioned above, equation \eqref{eq:gfft-gcp} extends equation \eqref{worthy} to the setting of an infinite dimensional Banach space. \end{remark}
\par We are now ready to establish our main theorem in this paper.
\renewcommand{\thetheorem}{\thesection.6}
\begin{theorem} \label{thm:cp-tpq02} Let $k_1$, $k_2$, $F$, $G$, and $h$ be as in Theorem \ref{thm:gfft-gcp-compose}. Then, for all $p\in[1,2]$ and all nonzero real $q$, \begin{equation}\label{eq:cp-fft-basic} \begin{aligned} &\Big(T_{q,\mathbf{s}(h,k_1)/\sqrt{2} }^{(p)} (F)*
T_{q,\mathbf{s}(h,k_2)/\sqrt{2}}^{(p)}(G) \Big)_{-q}^{(k_1,k_2)}(y) \\ &=T_{q,h}^{(p)} \bigg(F \bigg(\frac{\cdot}{\sqrt2} \bigg)
G \bigg(\frac{\cdot}{\sqrt2}\bigg)\bigg)(y) \end{aligned} \end{equation} for s-a.e. $y\in C_0[0,T]$, where $\mathbf{s}(h,k_1)$ and $\mathbf{s}(h,k_2)$ are functions satisfying the relation \eqref{eq:fn-rot}. \end{theorem}
\begin{proof} Applying \eqref{eq:inverse}, \eqref{eq:gfft-gcp} with $F$, $G$, and $q$ replaced with $T_{q,\mathbf{s}(h,k_1)/\sqrt{2} }^{(p)} (F)$, $T_{q,\mathbf{s}(h,k_2)/\sqrt{2}}^{(p)}(G)$, and $-q$, respectively, and \eqref{eq:inverse} again, it follows that for s-a.e. $y\in C_0[0,T]$, \[ \begin{aligned} &\Big(T_{q,\mathbf{s}(h,k_1)/\sqrt{2} }^{(p)} (F)*
T_{q,\mathbf{s}(h,k_2)/\sqrt{2}}^{(p)}(G) \Big)_{-q}^{(k_1,k_2)}(y)\\ &= T_{q,h}^{(p)}\Big(T_{-q,h}^{(p)}\Big(\big(T_{q,\mathbf{s}(h,k_1)/\sqrt{2} }^{(p)} (F)*
T_{q,\mathbf{s}(h,k_2)/\sqrt{2}}^{(p)}(G) \big)_{-q}^{(k_1,k_2)}\Big)\Big)(y)\\ &= T_{q,h}^{(p)}\bigg( T_{-q,\mathbf{s}(h,k_1)/\sqrt2}^{(p)}\Big(T_{q,\mathbf{s}(h,k_1)/\sqrt{2} }^{(p)}(F)\Big) \bigg(\frac{\cdot}{\sqrt2}\bigg)\\ &\qquad\quad\times T_{-q,\mathbf{s}(h,k_2)/\sqrt2}^{(p)}\Big(T_{q,\mathbf{s}(h,k_2)/\sqrt{2} }^{(p)}(G)\Big) \bigg(\frac{\cdot}{\sqrt2}\bigg)\bigg) (y)\\
&=T_{q,h}^{(p)} \bigg(F \bigg(\frac{\cdot}{\sqrt2} \bigg)
G \bigg(\frac{\cdot}{\sqrt2}\bigg)\bigg)(y) \end{aligned} \] as desired. \qed\end{proof}
\renewcommand{\theremark}{\thesection.7}
\begin{remark}\label{re:meaning-main} (i) Equation \eqref{eq:gfft-gcp} shows that the GFFT of the GCP of two functionals is the ordinary product of their transforms, while equation \eqref{eq:cp-fft-basic} above shows that the GCP of the GFFTs of two functionals is the GFFT of the product of the functionals. These identities are useful in that they permit one to evaluate $T_{q,h}^{(p)}((F*G)_q^{(k_1,k_2)})$ and $(T_{q,\mathbf{s}(h,k_1)/\sqrt{2} }^{(p)} (F)* T_{q,\mathbf{s}(h,k_2)/\sqrt{2}}^{(p)}(G))_{-q}^{(k_1,k_2)}$ without computing the GCPs involved directly. In practice, the right-hand side $T_{q,h}^{(p)}(F(\frac{\cdot}{\sqrt2}) G(\frac{\cdot}{\sqrt2} ))$ of equation \eqref{eq:cp-fft-basic} is easier to calculate than the transforms $T_{q,\mathbf{s}(h,k_1)/\sqrt{2} }^{(p)} (F)$ and $T_{q,\mathbf{s}(h,k_2)/\sqrt{2} }^{(p)} (G)$ and their GCP $(T_{q,\mathbf{s}(h,k_1)/\sqrt{2} }^{(p)} (F)* T_{q,\mathbf{s}(h,k_2)/\sqrt{2}}^{(p)}(G) )_{-q}^{(k_1,k_2)}$ on the left-hand side.
(ii) Equation \eqref{eq:cp-fft-basic} extends equation \eqref{eq:F-02} to the setting of an infinite dimensional Banach space. \end{remark}
\renewcommand{\thecorollary}{\thesection.8}
\begin{corollary}[Theorem 3.1 in \cite{PSS98}] Let $F$ and $G$ be as in Theorem \ref{thm:gcp}. Then, for all $p\in[1,2]$ and all real $q\in\mathbb R\setminus\{0\}$, \[ \Big( T_q^{(p)}(F)*T_q^{(p)}(G)\Big)_{-q} (y) =T_q^{(p)}\bigg(F \bigg(\frac{\cdot}{\sqrt2}\bigg)
G \bigg(\frac{\cdot}{\sqrt2}\bigg) \bigg)(y) \] for s-a.e. $y\in C_{0}[0,T]$, where $T_q^{(p)}(F)$ denotes the ordinary analytic FFT of $F$ and $(F*G)_q$ denotes the CP of $F$ and $G$ (see Remarks \ref{remark:ordinary-fft} and \ref{remark:ordinary-cp}). \end{corollary}
\begin{proof} In equation \eqref{eq:cp-fft-basic}, simply choose $h=k_1=k_2\equiv 1$. \qed\end{proof}
\renewcommand{\thecorollary}{\thesection.9}
\begin{corollary}[Theorem 3.2 in \cite{CCKSY05}] Let $F$, $G$, and $h$ be as in Theorem \ref{thm:gfft-gcp-compose}. Then, for all $p\in[1,2]$ and all real $q\in\mathbb R\setminus\{0\}$, \[ \Big(T_{q,h}^{(p)}(F)*T_{q,h}^{(p)}(G)\Big)_{-q} (y) =T_{q,h}^{(p)}\bigg(F \bigg(\frac{\cdot}{\sqrt2}\bigg)
G \bigg(\frac{\cdot}{\sqrt2}\bigg) \bigg)(y) \] for s-a.e. $y\in C_{0}[0,T]$, where $(F*G)_q\equiv (F*G)_q^{(h,h)}$ denotes the GCP of $F$ and $G$ studied in \cite{CCKSY05,HPS97-2} (see Remark \ref{remark:ordinary-cp}). \end{corollary}
\begin{proof} In equation \eqref{eq:cp-fft-basic}, simply choose $h=k_1=k_2$. \qed\end{proof}
\setcounter{equation}{0} \section{Examples}
The assertion in Theorem \ref{thm:cp-tpq02} above can be applied to many Gaussian processes $\mathcal Z_h$ with $h\in L_\infty[0,T]$. In view of the assumption in Theorems \ref{thm:gfft-gcp-compose} and \ref{thm:cp-tpq02}, we have to check that there exist solutions $\{h,k_1,k_2, \mathbf{s}_1,\mathbf{s}_2\}$ of the system \[ \begin{cases} \mbox{(i)} &h^2 =k_1k_2,\\ \mbox{(ii)} &\mathbf{s}_1=\mathbf{s}(h,k_1) \,\, m_L\mbox{-a.e.} \, \mbox{ on }\, [0,T],\\ \mbox{(iii)} &\mathbf{s}_2=\mathbf{s}(h,k_2) \,\, m_L\mbox{-a.e.} \, \mbox{ on }\, [0,T], \end{cases} \] or, equivalently, \begin{equation}\label{system} \begin{cases} \mbox{(i)} &h^2 =k_1k_2,\\ \mbox{(ii)} &\mathbf{s}_1^2=h^2 +k_1^2 \,\, m_L\mbox{-a.e.} \, \mbox{ on }\, [0,T],\\ \mbox{(iii)} &\mathbf{s}_2^2=h^2 +k_2^2 \,\, m_L\mbox{-a.e.} \, \mbox{ on }\, [0,T]. \end{cases} \end{equation} Throughout this section, we present examples of solution sets of the system \eqref{system}. To do this we consider the Wiener space $C_0[0,1]$ and the Hilbert space $L_2[0,1]$ for simplicity.
\renewcommand{\theexample}{\thesection.1}
\begin{example} (Polynomials) The set $\mathcal P =\{h, k_1, k_2, \mathbf{s}_1,\mathbf{s}_2\}$ of functions in $L_\infty[0,1]$ with \[ \begin{cases} & h(t) = 2t(t^2-1), \\ &k_1(t) =(t^2-1)^2, \\ &k_2(t) =4t^2, \\ &\mathbf{s}_1(t) = (t^2-1)(t^2+1), \\ &\mathbf{s}_2(t) = 2 t(t^2+1) \end{cases} \] is a solution set of the system \eqref{system}. Thus \[ \mathbf{s} (h,k_1)(t) \equiv \mathbf{s}_1(t) =(t^2-1)(t^2+1), \] and \[ \mathbf{s}(h,k_2)(t)\equiv \mathbf{s}_2(t) =2 t(t^2+1) \] for all $t\in [0,1]$. In this case, equation \eqref{eq:cp-fft-basic} with the functions in $\mathcal P$ holds for any functionals $F$ and $G$ in $\mathcal S(L_2[0,1])$. \end{example}
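The identities (i)--(iii) for the set $\mathcal P$ can also be confirmed numerically; the following is a minimal illustrative sketch (assuming NumPy; it is not part of the paper):

```python
import numpy as np

# Hypothetical sanity check (not part of the paper): verify that the
# polynomial set P = {h, k1, k2, s1, s2} satisfies system (i)-(iii)
# on a grid of points t in [0, 1].
t = np.linspace(0.0, 1.0, 1001)
h = 2 * t * (t**2 - 1)
k1 = (t**2 - 1)**2
k2 = 4 * t**2
s1 = (t**2 - 1) * (t**2 + 1)
s2 = 2 * t * (t**2 + 1)

assert np.allclose(h**2, k1 * k2)        # (i)   h^2  = k1 k2
assert np.allclose(s1**2, h**2 + k1**2)  # (ii)  s1^2 = h^2 + k1^2
assert np.allclose(s2**2, h**2 + k2**2)  # (iii) s2^2 = h^2 + k2^2
```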
\renewcommand{\theexample}{\thesection.2}
\begin{example} (Trigonometric functions I) The set $\mathcal T_1=\{h, k_1, k_2, \mathbf{s}_1,\mathbf{s}_2\}$ of functions in $L_\infty[0,1]$ with \[ \begin{cases}
h(t)=\sin 2t=2\sin t\cos t, \\ k_1(t)=2\sin^2t, \\ k_2(t)=2\cos^2t, \\ \mathbf{s}_1(t)=2\sin t, \\ \mathbf{s}_2(t)=2\cos t \end{cases} \] is a solution set of the system \eqref{system}. Thus \[ \mathbf{s} (h,k_1)(t) \equiv \mathbf{s}_1(t) =\mathbf{s}(2\sin\cos,2\sin^2)(t) =2 \sin t, \] and \[ \mathbf{s}(h,k_2)(t)\equiv \mathbf{s}_2(t) =\mathbf{s}(2\sin\cos,2\cos^2)(t) =2 \cos t \] for all $t\in [0,1]$. Also, using equation \eqref{eq:cp-fft-basic}, it follows that for all $p\in[1,2]$, all nonzero real $q$, and all functionals $F$ and $G$ in
$\mathcal S(L_2[0,1])$, \[
\Big(T_{q,\sqrt{2}\sin }^{(p)} (F)*
T_{q,\sqrt{2}\cos}^{(p)}(G) \Big)_{-q}^{(2\sin^2,2\cos^2)}(y)
=T_{q,2\sin\cos}^{(p)} \bigg(F \bigg(\frac{\cdot}{\sqrt2} \bigg)
G \bigg(\frac{\cdot}{\sqrt2}\bigg)\bigg)(y)
\] for s-a.e. $y\in C_0[0,1]$. \end{example}
\renewcommand{\theexample}{\thesection.3}
\begin{example} (Trigonometric functions II)
The set $\mathcal T_2=\{h, k_1, k_2, \mathbf{s}_1,\mathbf{s}_2\}$ of functions in $L_\infty[0,1]$ with \[ \begin{cases}
h(t)=\sqrt2\sin t, \\ k_1(t)=\sqrt2\sin t\tan t, \\ k_2(t)=\sqrt2\cos t, \\ \mathbf{s}_1(t)=\sqrt2\tan t, \\ \mathbf{s}_2(t)=\sqrt2 \end{cases} \] is a solution set of the system \eqref{system}. Thus \[ \mathbf{s} (h,k_1)(t) \equiv \mathbf{s}_1(t) =\mathbf{s}(\sqrt2\sin,\sqrt2\sin\tan)(t) = \sqrt2\tan t, \] and \[ \mathbf{s}(h,k_2)(t)\equiv \mathbf{s}_2(t) =\mathbf{s}(\sqrt2\sin,\sqrt2\cos)(t) =\sqrt2 \,\,\,\,\,(\mbox{constant function}) \] for all $t\in [0,1]$. \end{example}
\renewcommand{\theexample}{\thesection.4}
\begin{example} (Hyperbolic functions) The hyperbolic functions are defined in terms of the exponential functions $e^{x}$ and $e^{-x}$. The set $\mathcal H=\{h, k_1, k_2, \mathbf{s}_1,\mathbf{s}_2\}$ of functions in $L_\infty[0,1]$ with \[ \begin{cases}
h(t)=1 , \\ k_1(t)= \sinh\big(t+\tfrac12\big), \\ k_2(t)= \mathrm{csch} \big(t+\tfrac12\big), \\ \mathbf{s}_1(t)= \cosh\big(t+\tfrac12\big), \\ \mathbf{s}_2(t)= \coth\big(t+\tfrac12\big) \end{cases} \] is a solution set of the system \eqref{system}. Thus \[ \mathbf{s} (h,k_1)(t) \equiv \mathbf{s}_1(t) =\mathbf{s}\big(1,\sinh\big(\cdot+\tfrac12\big)\big)(t) = \cosh\big(t+\tfrac12\big), \] and \[ \mathbf{s}(h,k_2)(t)\equiv \mathbf{s}_2(t) =\mathbf{s}\big(1,\mathrm{csch} \big(\cdot+\tfrac12\big)\big)(t) = \coth\big(t+\tfrac12\big) \] for all $t\in [0,1]$. \end{example}
\setcounter{equation}{0} \section{Iterated GFFTs and GCPs}\label{relation2}
\par In this section, we present general relationships between the iterated GFFT and the GCP for functionals in $\mathcal S(L_2[0,T])$, which extend equation \eqref{eq:cp-fft-basic}. To do this we quote a result from \cite{Indag}.
\renewcommand{\thetheorem}{\thesection.1}
\begin{theorem}\label{thm:2018-step1} Let $F\in \mathcal S(L_2[0,T])$ be given by equation \eqref{eq:element}, and let $\mathcal H=\{h_1,\ldots,h_n\}$ be a finite sequence of nonzero functions in $L_\infty[0,T]$. Then, for all $p\in[1,2]$ and all nonzero real $q$, the iterated $L_p$ analytic GFFT, \[ T_{q,h_n}^{(p)}\big(T_{q,h_{n-1}}^{(p)}\big(
\cdots\big(T_{q,h_2}^{(p)}\big(T_{q,h_1}^{(p)}(F)\big)\big)\cdots\big)\big) \] of $F$ exists, belongs to $\mathcal S(L_2[0,T])$, and is given by the formula \[
T_{q,h_n}^{(p)}\big(T_{q,h_{n-1}}^{(p)}\big(
\cdots\big(T_{q,h_2}^{(p)}\big(T_{q,h_1}^{(p)}(F)\big)\big)\cdots\big)\big) (y)
= \int_{L_2[0,T]}\exp\{i\langle{u,y}\rangle\}df_t^{h_1,\ldots,h_n}(u)
\] for s-a.e. $y\in C_{0}[0,T]$, where $f_t^{h_1,\ldots,h_n}$ is the complex measure in $\mathcal M(L_2[0,T])$ given by \[ f_t^{h_1,\ldots,h_n}(B)
=\int_B \exp\bigg\{-\frac{i}{2q}\sum_{j=1}^n\|uh_j\|_2^2\bigg\}df(u) \] for $B \in \mathcal B(L_2[0,T])$. Moreover it follows that \begin{equation}\label{eq:gfft-n-fubini-add} T_{q,h_n}^{(p)}\big(T_{q,h_{n-1}}^{(p)}\big(
\cdots\big(T_{q,h_2}^{(p)}\big(T_{q,h_1}^{(p)}(F)\big)\big)\cdots\big)\big)(y) =T_{q, \mathbf{s}(\mathcal H)}^{(p)}(F)(y) \end{equation} for s-a.e. $y\in C_{0}[0,T]$, where $\mathbf{s}(\mathcal H)\equiv \mathbf{s}(h_1,\ldots,h_n)$ is a function in $L_{\infty}[0,T]$ satisfying the relation \begin{equation}\label{eq:fn-rot-ind} \mathbf{s}(\mathcal H)^2(t)=h_1^2(t)+\cdots+h_n^2(t) \end{equation}
for $m_L$-a.e. $t\in [0,T]$. \end{theorem}
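The passage from the iterated transform to the single transform in \eqref{eq:gfft-n-fubini-add} rests on the elementary identity, valid by \eqref{eq:fn-rot-ind}:

```latex
% The n phase factors of the iterated transform combine into one, since
% s(H)^2 = h_1^2 + ... + h_n^2 holds m_L-a.e. on [0,T].
\[
\sum_{j=1}^n\|uh_j\|_2^2
=\int_0^T u^2(t)\sum_{j=1}^n h_j^2(t)\,dt
=\int_0^T u^2(t)\,\mathbf{s}(\mathcal H)^2(t)\,dt
=\|u\,\mathbf{s}(\mathcal H)\|_2^2 .
\]
```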
We next establish two types of extensions of Theorem \ref{thm:cp-tpq02} above.
\renewcommand{\thetheorem}{\thesection.2}
\begin{theorem} \label{thm:iter-gfft-gcp-compose} Let $k_1$, $k_2$, $F$, and $G$ be as in Theorem \ref{thm:gcp}, and let $\mathcal H=\{h_1,\ldots,h_n\}$ be a finite sequence of nonzero functions in $L_{\infty}[0,T]$. Assume that \[ \mathbf{s}(\mathcal H)^2 \equiv \mathbf{s} (h_1,\ldots,h_n)^2=k_1k_2 \] for $m_L$-a.e. on $[0,T]$, where $\mathbf{s}(\mathcal H)$ is the function in $L_{\infty}[0,T]$ satisfying \eqref{eq:fn-rot-ind} above. Then, for all $p\in[1,2]$ and all nonzero real $q$,
\begin{equation} \label{eq:multi-rel-01} \begin{aligned} &\Big(T_{q,k_1/\sqrt2}^{(p)} \big(T_{q,h_n/\sqrt2}^{(p)}\big(\cdots\big(T_{q,h_2/\sqrt2}^{(p)} \big(T_{q,h_1/\sqrt2}^{(p)}(F)\big)\big)\cdots\big)\big)\\ &\qquad *T_{q,k_2/\sqrt2}^{(p)}\big( T_{q,h_n/\sqrt2}^{(p)}\big( \cdots\big(T_{q,h_2/\sqrt2}^{(p)} \big(T_{q,h_1/\sqrt2}^{(p)}(G)\big)\big)\cdots\big)\big)\Big)_{-q}^{(k_1,k_2)}(y)\\ &=\Big(T_{q, \mathbf{s}(\mathcal H,k_1)/\sqrt2}^{(p)}(F) *T_{q, \mathbf{s}(\mathcal H,k_2)/\sqrt2}^{(p)}(G)\Big)_{-q}^{(k_1,k_2)}(y)\\ &= T_{q,\mathbf{s}(\mathcal H)}^{(p)} \bigg(F \bigg(\frac{\cdot}{\sqrt2} \bigg)
G \bigg(\frac{\cdot}{\sqrt2}\bigg)\bigg)(y) \end{aligned} \end{equation} for s-a.e. $y\in C_0[0,T]$, where $\mathbf{s}(\mathcal H,k_1)$ and $\mathbf{s}(\mathcal H,k_2)$ are functions in $L_{\infty}[0,T]$ satisfying the relations \[ \mathbf{s}(\mathcal H,k_1)^2 \equiv \mathbf{s}(h_1,\ldots,h_n,k_1)^2 =h_1^2 +\cdots+h_n^2 +k_1^2 \] and \[ \mathbf{s}(\mathcal H,k_2)^2\equiv \mathbf{s}(h_1,\ldots,h_n,k_2)^2 =h_1^2 +\cdots+h_n^2 +k_2^2 \] for $m_L$-a.e. on $[0,T]$, respectively. \end{theorem}
\begin{proof} Applying \eqref{eq:gfft-n-fubini-add}, the first equality of \eqref{eq:multi-rel-01} follows immediately. Next using \eqref{eq:cp-fft-basic} with $h$ replaced with $\mathbf{s}(\mathcal H)$, the second equality of \eqref{eq:multi-rel-01} also follows. \qed\end{proof}
\par
In view of equations \eqref{eq:gfft-n-fubini-add} and \eqref{eq:cp-fft-basic},
we also obtain the following assertion.
\renewcommand{\thetheorem}{\thesection.3}
\begin{theorem} \label{thm:iter-gfft-gcp-compose-2nd} Let $F$ and $G$ be as in Theorem \ref{thm:gcp}. Given a nonzero function $h$ in $L_{\infty}[0,T]$ and finite sequences $\mathcal K_1=\{k_{11},k_{12},\ldots,k_{1n}\}$ and $\mathcal K_2=\{k_{21},k_{22},\ldots,k_{2m}\}$ of nonzero functions in $L_{\infty}[0,T]$, assume that \[ h^2=\mathbf{s}(\mathcal K_1)\mathbf{s}(\mathcal K_2) \] for $m_L$-a.e. on $[0,T]$. Then, for all $p\in[1,2]$ and all nonzero real $q$,
\begin{equation} \label{eq:multi-rel-02-2nd} \begin{aligned} &\Big(T_{q,h/\sqrt2}^{(p)} \big(T_{q,k_{1n}/\sqrt2}^{(p)} \big(\cdots \big(T_{q,k_{12}/\sqrt2}^{(p)}\big(T_{q,k_{11}/\sqrt2}^{(p)}(F)\big)\big)\cdots\big)\big)\\ &\quad *T_{q,h/\sqrt2}^{(p)}\big( T_{q,k_{2m}/\sqrt2}^{(p)} \big( \cdots \big(T_{q,k_{22}/\sqrt2}^{(p)}\big(T_{q,k_{21}/\sqrt2}^{(p)}(G)\big)\big)\cdots\big)\big)\Big)_{-q}^{(\mathbf{s}(\mathcal K_1),\mathbf{s}(\mathcal K_2))}(y)\\ &=\Big(T_{q,h/\sqrt2}^{(p)}\big(T_{q,\mathbf{s}(\mathcal K_1)/\sqrt2}^{(p)}(F)\big) *T_{q,h/\sqrt2}^{(p)}\big( T_{q,\mathbf{s}(\mathcal K_2)/\sqrt2}^{(p)}(G)\big) \Big)_{-q}^{(\mathbf{s}(\mathcal K_1),\mathbf{s}(\mathcal K_2))}(y)\\ &=\Big(T_{q, \mathbf{s}(h,\mathbf{s}(\mathcal K_1))/\sqrt2}^{(p)}(F) *T_{q,\mathbf{s}(h,\mathbf{s}(\mathcal K_2))/\sqrt2}^{(p)}(G)\Big)_{-q}^{(\mathbf{s}(\mathcal K_1),\mathbf{s}(\mathcal K_2))}(y)\\ &= T_{q,h}^{(p)} \bigg(F \bigg(\frac{\cdot}{\sqrt2} \bigg)
G \bigg(\frac{\cdot}{\sqrt2}\bigg)\bigg)(y) \end{aligned} \end{equation} for s-a.e. $y\in C_0[0,T]$, where $\mathbf{s}(h,\mathbf{s}(\mathcal K_1))$ and $\mathbf{s}(h,\mathbf{s}(\mathcal K_2))$ are functions in $L_{\infty}[0,T]$ satisfying the relations \[ \mathbf{s}(h,\mathbf{s}(\mathcal K_1))^2 =h^2 +\mathbf{s} (\mathcal K_1)^2=h^2+k_{11}^2 + \cdots+k_{1n}^2, \] and \[ \mathbf{s}(h,\mathbf{s}(\mathcal K_2))^2 =h^2 +\mathbf{s} (\mathcal K_2)^2=h^2+k_{21}^2 +\cdots+k_{2m}^2 \] for $m_L$-a.e. on $[0,T]$, respectively. \end{theorem}
\renewcommand{\theremark}{\thesection.4}
\begin{remark} Note that given the functions $\{\mathbf{s}(\mathcal H),k_1,k_2, \mathbf{s}(\mathcal H,k_1),\mathbf{s}(\mathcal H,k_2) \}$ in Theorem \ref{thm:iter-gfft-gcp-compose}, the set $\mathcal F=\{h, k_1, k_2, \mathbf{s}_1,\mathbf{s}_2\}$ of functions in $L_\infty[0,T]$ with \[ \begin{cases}
h(t)=\mathbf{s}(\mathcal H)(t), \\ \mathbf{s}_1(t)=\mathbf{s}(\mathcal H,k_1)(t), \\ \mathbf{s}_2(t)=\mathbf{s}(\mathcal H,k_2)(t) \end{cases} \] is a solution set of the system \eqref{system}. Also, given the functions \[ \{h,\mathbf{s}(\mathcal K_1),\mathbf{s}(\mathcal K_2),\mathbf{s}(h,\mathbf{s}(\mathcal K_1)),\mathbf{s}(h,\mathbf{s}(\mathcal K_2))\} \] in Theorem \ref{thm:iter-gfft-gcp-compose-2nd}, the set $\mathcal F=\{h, k_1, k_2, \mathbf{s}_1,\mathbf{s}_2\}$ of functions in $L_\infty[0,T]$ with \[ \begin{cases}
k_1(t)=\mathbf{s}(\mathcal K_1)(t), \\
k_2(t)=\mathbf{s}(\mathcal K_2)(t), \\ \mathbf{s}_1(t)=\mathbf{s}(h,\mathbf{s}(\mathcal K_1))(t), \\ \mathbf{s}_2(t)=\mathbf{s}(h,\mathbf{s}(\mathcal K_2))(t) \end{cases} \] is a solution set of the system \eqref{system}. \end{remark}
In the following two examples, we also consider the Wiener space $C_0[0,1]$ and the Hilbert space $L_2[0,1]$ for simplicity.
\renewcommand{\theexample}{\thesection.5}
\begin{example} Let $h_1(t)=\sin \tfrac{\pi}{4}\big(t+\tfrac{1}{2} \big)$, $h_2(t)=\cos \tfrac{\pi}{4}\big(t+\tfrac{1}{2} \big)$, $h_3(t)=\tan\tfrac{\pi}{4}\big(t+\tfrac{1}{2} \big)$, $k_1(t)=\tan \tfrac{\pi}{4}\big(t+\tfrac{1}{2} \big)$, and $k_2(t)= \sec \tfrac{\pi}{4}\big(t+\tfrac{1}{2} \big)\csc \tfrac{\pi}{4}\big(t+\tfrac{1}{2} \big)$ on $[0,1]$. Then $\{h_1,h_2,h_3,k_1,k_2\}$ is a set of functions in $L_\infty[0,1]$, and given the set $\mathcal H=\{h_1,h_2,h_3\}$, it follows that \[ \begin{aligned} \mathbf{s}(\mathcal H)^2(t) &\equiv\mathbf{s}(h_1,h_2,h_3)^2(t)\\ &=\mathbf{s}\big(\sin\tfrac{\pi}{4}\big(\cdot+\tfrac{1}{2} \big),\cos\tfrac{\pi}{4} \big(\cdot+\tfrac{1}{2} \big),\tan\tfrac{\pi}{4}\big(\cdot+\tfrac{1}{2} \big)\big)^2(t)\\ &=\sec^2 \tfrac{\pi}{4}\big(t+\tfrac{1}{2} \big)\\ &=k_1(t)k_2(t), \end{aligned} \] \[ \begin{aligned} \mathbf{s}(\mathcal H,k_1)^2(t) &\equiv \mathbf{s}(h_1,h_2,h_3,k_1)^2(t)\\ &=\sec^2\tfrac{\pi}{4}\big(t+\tfrac{1}{2} \big)+\tan^2\tfrac{\pi}{4}\big(t+\tfrac{1}{2} \big) =\mathbf{s}(\mathbf{s}(\mathcal H),k_1)^2(t), \end{aligned} \] and \[ \begin{aligned} \mathbf{s}(\mathcal H,k_2)^2(t) &\equiv \mathbf{s}(h_1,h_2,h_3,k_2)^2(t)\\ &=\sec^2\tfrac{\pi}{4}\big(t+\tfrac{1}{2} \big) +\sec^2\tfrac{\pi}{4}\big(t+\tfrac{1}{2} \big)\csc^2\tfrac{\pi}{4}\big(t+\tfrac{1}{2} \big)\\ &=\mathbf{s}(\mathbf{s}(\mathcal H),k_2)^2(t), \end{aligned} \] for all $t\in [0,1]$.
From this we see that the set $\mathcal F_1=\{h, k_1, k_2, \mathbf{s}_1,\mathbf{s}_2\}$ of functions in $L_\infty[0,1]$ with \[ \begin{cases}
h(t)= \mathbf{s}(h_1,h_2,h_3)(t)=\sec \tfrac{\pi}{4}\big(t+\tfrac{1}{2} \big), \\ k_1(t)=\tan \tfrac{\pi}{4}\big(t+\tfrac{1}{2} \big) ,\\ k_2(t)= \sec \tfrac{\pi}{4}\big(t+\tfrac{1}{2} \big)\csc \tfrac{\pi}{4}\big(t+\tfrac{1}{2} \big),\\ \mathbf{s}_1(t)=\mathbf{s}(\mathcal H,k_1)(t), \\ \mathbf{s}_2(t)=\mathbf{s}(\mathcal H,k_2)(t) \end{cases} \] is a solution set of the system \eqref{system}, and equation \eqref{eq:multi-rel-01} holds with the sequence $\mathcal H=\{h_1,h_2,h_3\}$ and the functions $k_1$ and $k_2$. \end{example}
In the next example, the kernel functions of the Gaussian processes defining the transforms and convolutions involve trigonometric and hyperbolic (and hence exponential) functions.
\renewcommand{\theexample}{\thesection.6}
\begin{example} Consider the function \[
h(t)= 2\sqrt{\csc\tfrac{\pi}{4}\big( t+\tfrac{1}{2}\big) \mathrm{cosh}\tfrac{\pi}{4}\big( t+\tfrac{1}{2}\big)} \] on $[0,1]$, and the finite sequences \[ \mathcal K_1=\big\{2\mathrm{tanh}\tfrac{\pi}{4}\big(t+\tfrac{1}{2} \big), 2\mathrm{sech} \tfrac{\pi}{4}\big(t+\tfrac{1}{2} \big),2 \cot\tfrac{\pi}{4}\big(t+\tfrac{1}{2} \big)\big\} \] and \[ \mathcal K_2=\big\{\sqrt2\sin\tfrac{\pi}{4}\big(t+\tfrac{1}{2} \big),\sqrt2\cos\tfrac{\pi}{4}\big(t+\tfrac{1}{2} \big), \sqrt2\mathrm{sinh}\tfrac{\pi}{4}\big(t+\tfrac{1}{2} \big),\sqrt2\mathrm{cosh}\tfrac{\pi}{4}\big(t+\tfrac{1}{2} \big)\big\} \] of functions in $L_{\infty}[0,1]$. Then using the relationships among hyperbolic functions and among trigonometric functions, one can see that \[ \mathbf{s}(\mathcal K_1)(t)=2\csc \tfrac{\pi}{4}\big(t+\tfrac{1}{2} \big)\quad \mbox{ and } \quad \mathbf{s}(\mathcal K_2)(t)=2\mathrm{cosh} \tfrac{\pi}{4}\big(t+\tfrac{1}{2} \big) \] on $[0,1]$. From this we also see that the set $\mathcal F_1=\{h, k_1, k_2, \mathbf{s}_1,\mathbf{s}_2\}$ of functions in $L_\infty[0,1]$ with \[ \begin{cases}
h(t)= 2\sqrt{\csc\frac{\pi}{4}( t+\frac{1}{2}) \mathrm{cosh}\frac{\pi}{4}( t+\frac{1}{2})}, \\ k_1(t)=\mathbf{s}(\mathcal K_1)(t)=2\csc \tfrac{\pi}{4}\big(t+\tfrac{1}{2} \big) ,\\ k_2(t)=\mathbf{s}(\mathcal K_2)(t)=2\mathrm{cosh} \tfrac{\pi}{4}\big(t+\tfrac{1}{2} \big),\\ \mathbf{s}_1(t)=\mathbf{s}(h,\mathbf{s}(\mathcal K_1))(t), \\ \mathbf{s}_2(t)=\mathbf{s}(h,\mathbf{s}(\mathcal K_2))(t) \end{cases} \] is a solution set of the system \eqref{system}, and equation \eqref{eq:multi-rel-02-2nd} holds with the function $h$, and the sequences $\mathcal K_1$ and $\mathcal K_2$. \end{example}
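The closed forms $\mathbf{s}(\mathcal K_1)=2\csc \tfrac{\pi}{4}\big(\cdot+\tfrac{1}{2} \big)$ and $\mathbf{s}(\mathcal K_2)=2\cosh \tfrac{\pi}{4}\big(\cdot+\tfrac{1}{2} \big)$ can likewise be confirmed numerically; a minimal illustrative sketch (assuming NumPy; not part of the paper):

```python
import numpy as np

# Illustrative check (not in the paper): confirm s(K1) = 2 csc and
# s(K2) = 2 cosh for the kernels of this example, on a grid in [0, 1].
t = np.linspace(0.0, 1.0, 501)
a = (np.pi / 4) * (t + 0.5)

K1 = [2 * np.tanh(a), 2 / np.cosh(a), 2 / np.tan(a)]  # 2 tanh, 2 sech, 2 cot
K2 = [np.sqrt(2) * f(a) for f in (np.sin, np.cos, np.sinh, np.cosh)]

s_K1 = np.sqrt(sum(f**2 for f in K1))
s_K2 = np.sqrt(sum(f**2 for f in K2))

assert np.allclose(s_K1, 2 / np.sin(a))   # s(K1) = 2 csc
assert np.allclose(s_K2, 2 * np.cosh(a))  # s(K2) = 2 cosh
```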
\setcounter{equation}{0} \section{Further results}
\par In this section, we derive a more general relationship between the iterated GFFT and the GCP for functionals in $\mathcal S(L_2[0,T])$. To do this we also quote a result from \cite{Indag}.
\renewcommand{\thetheorem}{\thesection.1}
\begin{theorem} \label{thm:iter-gfft-more-1} Let $F$ and $\mathcal H=\{h_1,\ldots,h_n\}$ be as in Theorem \ref{thm:2018-step1}. Assume that $q_1,q_2,\ldots, q_n$ are nonzero real numbers with $\mathrm{sgn}(q_1)=\cdots=\mathrm{sgn}(q_n)$, where `$\mathrm{sgn}$' denotes the sign function. Then, for all $p\in[1,2]$,
\[ \begin{aligned} &T_{q_n,h_n}^{(p)}\big(T_{q_{n-1},h_{n-1}}^{(p)}\big(
\cdots\big(T_{q_2,h_2}^{(p)}\big(T_{q_1,h_1}^{(p)}(F)\big)\big)\cdots\big)\big) (y)\\ &=T_{\alpha_n,\tau_n^{(n)}h_n}^{(p)}\Big(T_{\alpha_{n},\tau_n^{(n-1)}h_{n-1}}^{(p)} \Big(\cdots \big(T_{\alpha_n,\tau_n^{(2)}h_2}^{(p)} \big(T_{\alpha_n,\tau_n^{(1)}h_1}^{(p)}(F) \big)\big)\cdots\Big)\Big) (y) \end{aligned} \] for s-a.e. $y\in C_{0}[0,T]$, where $\alpha_n$ is given by \[ \alpha_n=\frac{1}{\frac{1}{q_1}+\frac{1}{q_2}+\cdots+\frac{1}{q_n}} \] and $\tau_n^{(j)}=\sqrt{{\alpha_n}/{q_j}}$ for each $j\in \{1,\ldots,n\}$. Moreover it follows that \[ T_{q_n,h_n}^{(p)}\big(T_{q_{n-1},h_{n-1}}^{(p)}\big(
\cdots\big(T_{q_2,h_2}^{(p)}\big(T_{q_1,h_1}^{(p)}(F)\big)\big)\cdots\big)\big) (y) =T_{\alpha_n,\mathbf{s}(\tau\mathcal H)}^{(p)}(F)(y) \] for s-a.e. $y\in C_{0}[0,T]$, where $\mathbf{s}(\tau\mathcal H) \equiv \mathbf{s}(\tau_n^{(1)}h_1, \ldots, \tau_n^{(n)}h_n )$ is a function in $L_{\infty}[0,T]$ satisfying the relation \[ \mathbf{s}(\tau\mathcal H) ^2(t) =(\tau_n^{(1)}h_1)^2(t)+ \ldots+ (\tau_n^{(n)}h_n)^2(t) \]
for $m_L$-a.e. $t\in [0,T]$. \end{theorem}
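The role of the single parameter $\alpha_n$ can be sketched at the level of the phase factors: since $(\tau_n^{(j)})^2=\alpha_n/q_j$,

```latex
% n transforms with distinct parameters q_j carry the same total phase as
% n transforms with the common parameter alpha_n and rescaled kernels.
\[
\sum_{j=1}^n\frac{1}{2q_j}\|uh_j\|_2^2
=\frac{1}{2\alpha_n}\sum_{j=1}^n\frac{\alpha_n}{q_j}\|uh_j\|_2^2
=\frac{1}{2\alpha_n}\sum_{j=1}^n\big\|u\,\tau_n^{(j)}h_j\big\|_2^2 .
\]
```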
\par Next, by a careful examination we see that for all $F\in \mathcal S(L_2[0,T])$ and any real $\beta>0$, \begin{equation}\label{eq:2018-new-parameter-change} T_{\beta q,h}^{(p)} (F) \approx T_{q,h/\sqrt{\beta}}^{(p)}(F) . \end{equation} Using \eqref{eq:2018-new-parameter-change} and \eqref{eq:cp-fft-basic}, we have the following lemma.
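A sketch of why \eqref{eq:2018-new-parameter-change} holds: in the measure formula of Theorem \ref{thm:gfft}, the phase factor is unchanged under the rescaling $(\beta q, h)\mapsto(q, h/\sqrt{\beta})$,

```latex
% Rescaling the parameter by beta is absorbed into the kernel, since
% ||u h/sqrt(beta)||_2^2 = ||u h||_2^2 / beta.
\[
\exp\bigg\{-\frac{i}{2\beta q}\|uh\|_2^2\bigg\}
=\exp\bigg\{-\frac{i}{2q}\Big\|u\,\frac{h}{\sqrt{\beta}}\Big\|_2^2\bigg\}.
\]
```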
\renewcommand{\thelemma}{\thesection.2}
\begin{lemma} \label{thm:2018-last-pre} Let $k_1$, $k_2$, $F$, $G$, and $h$ be as in Theorem \ref{thm:gfft-gcp-compose}. Let $q$, $q_1$, and $q_2$ be nonzero real numbers with $\mathrm{sgn}(q)=\mathrm{sgn}(q_1)=\mathrm{sgn}(q_2)$. Then, for all $p\in [1,2]$, \[ \begin{aligned} & \big(T_{q_1,\sqrt{q_1/(2q)} \mathbf{s}(h,k_1)}^{(p)} (F)*
T_{q_2,\sqrt{q_2/(2q)}\mathbf{s}(h,k_2)}^{(p)}(G) \big)_{-q}^{(k_1,k_2)}(y)\\ & =T_{q,h}^{(p)} \bigg(F \bigg(\frac{\cdot}{\sqrt2} \bigg) G \bigg(\frac{\cdot}{\sqrt2}\bigg)\bigg)(y) \end{aligned} \] for s-a.e. $y\in C_{0}[0,T]$. \end{lemma}
\par Finally, in view of Theorem \ref{thm:iter-gfft-more-1} and Lemma \ref{thm:2018-last-pre}, we obtain the following assertion.
\renewcommand{\thetheorem}{\thesection.3}
\begin{theorem} \label{thm:2018-last} Let $k_1$, $k_2$, $F$, $G$, and $h$ be as in Theorem \ref{thm:gfft-gcp-compose}. Let $\mathcal H_1=\{h_{1j}\}_{j=1}^n$ and $\mathcal H_2=\{h_{2l}\}_{l=1}^m$ be finite sequences of nonzero functions in $L_{\infty}[0,T]$. Given nonzero real numbers $q$, $q_1$, $q_{11}$, $\ldots$, $q_{1n}$, $q_2$, $q_{21}$, $\ldots$, $q_{2m}$ with \[ \begin{aligned} \mathrm{sgn}(q) &=\mathrm{sgn}(q_1)=\mathrm{sgn}(q_{11})=\cdots=\mathrm{sgn}(q_{1n})\\ &=\mathrm{sgn}(q_2)=\mathrm{sgn}(q_{21})=\cdots=\mathrm{sgn}(q_{2m}), \end{aligned} \] let
\[ \alpha_{1n}=\frac{1}{\frac{1}{q_{11}}+\frac{1}{q_{12}}+\cdots+\frac{1}{q_{1n}}}, \] \[ \alpha_{2m}=\frac{1}{\frac{1}{q_{21}}+\frac{1}{q_{22}}+\cdots+\frac{1}{q_{2m}}}, \] \[ \beta_{1n}=\frac{1}{\frac{1}{q_{1}}+\frac{1}{q_{11}}+\frac{1}{q_{12}}+\cdots+\frac{1}{q_{1n}}}, \] and \[ \beta_{2m}=\frac{1}{\frac{1}{q_{2}}+\frac{1}{q_{21}}+\frac{1}{q_{22}}+\cdots+\frac{1}{q_{2m}}}. \] Furthermore, assume that \[ h^2 = \mathbf{s}(\tau_{1n}\mathcal H_1)\mathbf{s}(\tau_{2m}\mathcal H_2) \] for $m_L$-a.e. on $[0,T]$, where $\mathbf{s}(\tau_{1n}\mathcal H_1)$ and $\mathbf{s}(\tau_{2m}\mathcal H_2)$ are functions in $L_{\infty}[0,T]$ satisfying the relation \[ \mathbf{s}(\tau_{1n}\mathcal H_1)^2 \equiv \mathbf{s}(\tau_{1n}^{(1)}h_{11}, \ldots, \tau_{1n}^{(n)}h_{1n} )^2 =(\tau_{1n}^{(1)}h_{11})^2 + \cdots+ (\tau_{1n}^{(n)}h_{1n})^2 \] and \[ \mathbf{s}(\tau_{2m}\mathcal H_2)^2 \equiv \mathbf{s}(\tau_{2m}^{(1)}h_{21}, \ldots, \tau_{2m}^{(m)}h_{2m} )^2 =(\tau_{2m}^{(1)}h_{21})^2 + \cdots+ (\tau_{2m}^{(m)}h_{2m})^2 , \]
respectively, and where $\tau_{1n}^{(j)}=\sqrt{{\alpha_{1n}}/{q_{1j}}}$ for each $j\in \{1,\ldots,n\}$, and $\tau_{2m}^{(l)}=\sqrt{{\alpha_{2m}}/{q_{2l}}}$ for each $l\in \{1,\ldots,m\}$. For notational convenience, let \[ h_1'=\sqrt{q_{1}/(2q)}h,\quad h_{1j}'=\sqrt{\alpha_{1n}/(2q)}h_{1j}, \quad j=1,\ldots,n, \] and let \[ h_2' =\sqrt{q_{2}/(2q)}h,\quad h_{2l}'=\sqrt{\alpha_{2m}/(2q)}h_{2l}, \quad l=1,\ldots,m. \] Then, for all $p\in[1,2]$,
\[ \begin{aligned} &\Big(T_{q_1,h_1'}^{(p)}\big(T_{q_{1n},h_{1n}'}^{(p)}\big( \cdots\big(T_{q_{11},h_{11}'}^{(p)} (F)\big)\cdots \big)\big) \\ & \quad\quad\quad *T_{q_2,h_2'}^{(p)}\big( T_{q_{2m}, h_{2m}'}^{(p)}\big( \cdots\big(T_{q_{21}, h_{21}' }^{(p)} (G)\big)\cdots \big)\big) \Big)_{-q}^{(\mathbf{s}(\tau_{1n}\mathcal H_1),\mathbf{s}(\tau_{2m}\mathcal H_2))} (y)\\
&=\Big(T_{q_1,\sqrt{q_{1}/(2q)} h}^{(p)} \big( T_{\alpha_{1n},\sqrt{\alpha_{1n}/(2q)}\mathbf{s}(\tau_{1n}\mathcal H_1)}^{(p)}(F)\big)\\ &\quad \quad \quad
*T_{q_2,\sqrt{q_{2}/(2q)}h}^{(p)} \big(T_{\alpha_{2m},\sqrt{\alpha_{2m}/(2q)} \mathbf{s}(\tau_{2m}\mathcal H_2)}^{(p)}(G)\big) \Big)_{-q}^{(\mathbf{s}(\tau_{1n}\mathcal H_1),\mathbf{s}(\tau_{2m}\mathcal H_2))} (y)\\
&=\Big( T_{\beta_{1n},\sqrt{\beta_{1n}/(2q)}\mathbf{s}(h,\mathbf{s}(\tau_{1n}\mathcal H_1))}^{(p)}(F)\\ &\qquad \qquad \qquad\qquad\,\,\,
*T_{\beta_{2m},\sqrt{\beta_{2m}/(2q)}\mathbf{s}(h,\mathbf{s}(\tau_{2m}\mathcal H_2))}^{(p)}(G) \Big)_{-q}^{(\mathbf{s}(\tau_{1n}\mathcal H_1),\mathbf{s}(\tau_{2m}\mathcal H_2))} (y)\\
&= T_{q,h}^{(p)} \bigg(F \bigg(\frac{\cdot}{\sqrt2} \bigg)
G \bigg(\frac{\cdot}{\sqrt2}\bigg)\bigg)(y) \end{aligned} \] for s-a.e. $y\in C_{0}[0,T]$. \end{theorem}
\end{document}